% Source: https://arxiv.org/abs/1602.07209
% Title: Polytopes and simplexes in p-adic fields

\begin{abstract}
We introduce topological notions of polytopes and simplexes, the latter being expected to play in p-adically closed fields the role played by real simplexes in the classical results of triangulation of semi-algebraic sets over real closed fields. We prove that the faces of every p-adic polytope are polytopes and that they form a rooted tree with respect to specialisation. Simplexes are then defined as polytopes whose faces tree is a chain. Our main result is a construction allowing to divide every p-adic polytope in a complex of p-adic simplexes with prescribed faces and shapes.
\end{abstract}

\section{Introduction}
\label{se:intro}
Throughout this paper we fix a $p$\--adically closed field
$(K,v)$. The reader unfamiliar with this notion may restrict to the
special case where $K={\rm\bf Q}_p$ or a finite extension of it, and $v$ is its
$p$\--adic valuation. We let $R$ denote the valuation ring of $v$, and
$\Gamma=v(K)$ its valuation group (augmented with one element $+\infty=v(0)$).
In this introductory section we present informally what we are aiming
at. Precise definitions will be given in Section~\ref{se:notation}
and at the beginning of Section~\ref{se:p-adic}.
Our long-term objective is to establish a triangulation theorem which would
be an acceptable analogue over $K$ of the classical triangulation of
semi-algebraic sets over the reals. Polytopes and simplexes in ${\rm\bf R}^m$
are well known to have the following properties, among others (see for
example \cite{boch-cost-roy-1998} or \cite{drie-1998}).
\begin{description}
\item[(Sim)]
They are bounded subsets of ${\rm\bf R}^m$ which can be described by a
finite set of linear inequalities of a very simple form.
\item[(Fac)]
There is a notion of ``faces'' attached to them with good
properties: every face of a polytope $S$ is itself a polytope; if
$S''$ is a face of $S'$ and $S'$ a face of $S$ then $S''$ is a
face of $S$; the union of the proper faces of $S$ is a
partition of its frontier.
\item[(Div)]
Last but not least, every polytope can be divided into simplexes by
a certain uniform process of ``Barycentric Division'', which offers
good control over both their shapes and their faces.
\end{description}
The goal of the present paper is to build a $p$\--adic counterpart of
real polytopes and simplexes having similar properties. Obviously
there is no direct translation of concepts like linear {\em
inequalities} and {\em Barycentric} Division to {\em non-ordered}
fields, such as the $p$\--adic ones. Nevertheless we want our
$p$\--adic polytopes and simplexes to be defined by conditions which
are as simple as possible, to obtain a notion of faces satisfying all
the above properties, and most of all to develop a flexible and
powerful division tool.
This is achieved here by first introducing and studying certain
subsets of $\Gamma^m$ called ``largely continuous precells mod $N$'', for a
fixed $m$\--tuple $N$ of positive integers. These sets will
be defined by a very special triangular system of linear inequalities
and congruence relations mod $N$. In particular they are defined
simply by linear inequalities in the special case where $N=(1,\dots,1)$
(again, see Section~\ref{se:notation} for precise definitions and
basic examples).
\medskip
This paper, which is essentially self-contained, is organised as
follows. The general properties of subsets of $\Gamma^m$ defined by
conjunctions of linear inequalities and congruence conditions are
studied in Section~\ref{se:face-proj}. Property (Fac) is proved there
to hold true for largely continuous precells mod $N$ (while
property (Sim) is a by-product of their definition).
Section~\ref{se:bound-fun} is devoted to two technical properties
preparing the proof of our main result, a construction analogous to
(Div) in our context. We call this ``Monohedral Division'' (see
below). Section~\ref{se:division} is devoted to its proof.
We then return to the $p$\--adic context in the final
section~\ref{se:p-adic}. By taking inverse images of largely
continuous precells by the valuation $v$ (which maps $K^m$ onto $\Gamma^m$)
and restricting them to certain subsets of $R^m$, we transfer all the
definitions and results built in $\Gamma^m$ in the previous sections to
$K^m$, especially the Monohedral Division (which becomes in this
context the ``Monotopic Division'', Theorem~\ref{th:div-p-adic}). This
latter result paves the way towards a triangulation of semi-algebraic
$p$\--adic sets, to appear in a further paper.
\paragraph{Monohedral division.}
In addition to (Sim) and (Fac), every largely continuous precell $A$ mod
$N$ has one more remarkable property which real polytopes are
lacking: its proper faces, ordered by specialisation\footnote{The {\bf
specialisation pre-order} of the subsets of a topological space is
defined as usual by $B\leq A$ if and only if $B$ is contained in the
closure of $A$.\label{fo:spec-ord}}\!, form a rooted tree
(Proposition~\ref{pr:face-egale-proj}(\ref{it:face-lattice})). When
this tree is a chain, we say that $A$ is ``monohedral''.
Among real polytopes of a given dimension, the simplexes are those
whose number of facets is minimal: a polytope $A\subseteq{\rm\bf R}^m$ of dimension $d$
has at least $d+1$ facets, and it is a simplex if and only if it has
exactly $d+1$ facets (see Corollary~9.5 and Corollary 12.8 in
\cite{bron-1983}). We expect largely continuous precells to play in
$\Gamma^m$ a role similar to that of polytopes in ${\rm\bf R}^m$, and the monohedral
ones (whose ordered set of faces is in a sense the simplest possible
tree) to play a role similar to that of simplexes.
Indeed our main result, the ``Monohedral Division''
(Theorem~\ref{th:mono-div}), provides in our context a powerful tool
very similar to (Div), the Barycentric Division of real polytopes. It
provides in particular a ``Monohedral Decomposition''
(Theorem~\ref{th:mono-dec}) which says that every largely continuous
precell mod $N$ in $\Gamma^m$ is the disjoint union of a complex of {\em
monohedral} largely continuous precells mod $N$. The latter result is in
analogy with the situation in the real case, where every polytope
can be divided into simplexes forming a simplicial complex.
But the Barycentric Division in ${\rm\bf R}^m$ says much more than this. Roughly
speaking, given a polytope $A$ and a simplicial complex ${\cal D}$
partitioning the frontier of $A$, it makes it possible to build a simplicial
complex ${\cal C}$ by partitioning $A$ and ``lifting'' ${\cal D}$, in the sense
that for every $C$ in ${\cal C}$, the faces $D$ of $C$ which are outside
$A$ belong to ${\cal D}$. Moreover, given a positive function $\varepsilon:B
\to{\rm\bf R}$ (where $B$ is any proper face of $A$), the shapes of the elements
of ${\cal C}$ can be required to satisfy the following condition: for every
$D$ in ${\cal D}$ there is a unique $C\in{\cal C}$ such that $D$ is the largest
proper face of $C$ in ${\cal C}$, and in that case the distance of any point
$x\in C$ to its projection $y$ onto $B$ is smaller than $\varepsilon(y)$ (see
Figure~\ref{fi:div}, where the dotted curve shows how the $\varepsilon$ function
controls the shapes of the elements of ${\cal C}$ whose largest proper face
outside $A$ is contained in the lower facet $B$).
\begin{figure}[h]
\begin{center}
\begin{tikzpicture}[scale=0.5]
\coordinate (A) at (1,0);
\coordinate (B) at (3,0);
\coordinate (C) at (5,0);
\coordinate (D) at (6,0);
\coordinate (E) at (7,1);
\coordinate (F) at (6.5,2.5);
\coordinate (G) at (6,4);
\coordinate (H) at (3,4.5);
\coordinate (I) at (0,3);
\coordinate (J) at (0.5,1.5);
\coordinate (AB) at (2,0.5);
\coordinate (CD) at (4.5,1);
\coordinate (GH) at (4,3);
\coordinate (IJ) at (3,2);
\coordinate (X) at (3.75,.5);
\filldraw[fill=gray!30] (A) -- (D) -- (CD) -- (X) -- (AB) --cycle;
\draw (A) -- (D) -- (E) -- (G) -- (H) -- (I) -- (A);
\draw (A) -- (AB) -- (X) -- (CD) -- (F) -- (GH) -- (H) -- (IJ)
(X) -- (IJ) -- (J) -- (AB)
(CD) -- (IJ) -- (GH) -- (CD) -- (C)
(CD) -- (E)
(GH) -- (G)
(CD) -- (B) -- (AB) -- (IJ) -- (I);
\draw[dotted] (1,.5) .. controls +(1,.5) and +(-1,0) .. (3.2,.7)
.. controls +(1,0) and +(-1.2,2.4) .. (D);
\foreach \p in {A,B,C,D,E,F,G,H,I,J}
\draw (\p) node {\tiny$\bullet$};
\end{tikzpicture}
\caption{Division with constraints along a facet.}
\label{fi:div}
\end{center}
\end{figure}
Although all these properties can be derived from the Barycentric Division in
${\rm\bf R}^m$, none of them involves the notion of barycenter. The strength of
our Monohedral Division in $\Gamma^m$ (Theorem~\ref{th:mono-div}), and
eventually of our Monotopic Division in $K^m$
(Theorem~\ref{th:div-p-adic}), is that they preserve\footnote{In
addition, the Monohedral and Monotopic Divisions even ensure that
every $C$ in ${\cal C}$ has no proper face inside $A$: either $C$ has no
proper face at all, or its unique facet is outside $A$ (hence so are
all its proper faces) and belongs to ${\cal D}$.} all of these properties.
\section{Notation, definitions}
\label{se:notation}
We let ${\rm\bf N}$, ${\rm\bf Z}$, ${\rm\bf Q}$ denote respectively the set of non-negative integers,
of integers and of rational numbers, and ${\rm\bf N}^*={\rm\bf N}\setminus\{0\}$. { For every
$p,q\in{\rm\bf Z}$ we let $[\![p,q]\!]$ denote the set of integers $k$ such that
$p\leq k\leq q$ (that is the empty set if $p>q$).}
\medskip
Recall that a {\bf ${\rm\bf Z}$\--group} is a linearly ordered group $G$ with a
smallest positive element such that $G/nG$ has $n$ elements for every
integer $n\geq1$. The reader unfamiliar with ${\rm\bf Z}$\--groups may restrict to
the special but prominent case of ${\rm\bf Z}$ itself. Indeed a linearly
ordered group is a ${\rm\bf Z}$\--group if and only if it is elementarily
equivalent to ${\rm\bf Z}$ (in the Presburger language ${\cal L}_{Pres}$ defined below).
$(K,v)$ is a $p$\--adically closed field in the sense of
\cite{pres-roqu-1984}, that is a Henselian valued field of
characteristic zero whose residue field is finite and whose value group
${\cal Z}=\Gamma\setminus\{+\infty\}=v(K^*)$ is a ${\rm\bf Z}$\--group. A field is $p$\--adically closed
if and only if it is elementarily equivalent (in the language of
rings) to a finite extension of ${\rm\bf Q}_p$, so the reader unfamiliar with
the formalism of model-theory may restrict to this fundamental case.
Let ${\cal Q}$ be the divisible hull of ${\cal Z}$. By identifying ${\rm\bf Z}$ with the
smallest non-trivial convex subgroup of ${\cal Z}$, we consider ${\rm\bf Z}$
embedded into ${\cal Z}$ (and ${\rm\bf Q}$ into ${\cal Q}$). For every $a\in{\cal Q}$
we let $|a|=\max(-a,a)$.
{
\begin{remark}\label{re:def-naive-1}
When ${\cal Z}={\rm\bf Z}$, one may naively define polytopes in ${\rm\bf Z}^m$ as
intersections $S\cap {\rm\bf Z}^m$ where $S\subseteq{\rm\bf R}^m$ is a polytope of ${\rm\bf R}^m$. In
other words, a polytope in ${\rm\bf Z}^m$ (and more generally in ${\cal Z}^m$)
would be an intersection of finitely many half-spaces, that is the
set of solutions of finitely many linear inequalities. Our
polytopes are indeed so, but it will soon become clear that we need to
be much more restrictive. Note first that such a definition does
not lead naturally to a good notion of faces, because $S\cap{\rm\bf Z}^m$ is
closed with respect to the topology inherited from ${\rm\bf R}^m$. In
particular if $S'$ is a face of $S$, $S'\cap{\rm\bf Z}^m$ is disjoint from the
closure of $S\cap{\rm\bf Z}^m$. In order to carry significant topological
properties, a notion of face for subsets of ${\rm\bf Z}^m$ must involve
points at infinity.
\end{remark}
}
For every $a$ in $\Omega={\cal Q}\cup\{+\infty\}$ we let $a+(+\infty)=(+\infty)+a=+\infty$. $\Omega$ is
endowed with the topology generated by the open intervals and the intervals
$]a,+\infty]$ for $a\in{\cal Q}$. $\Omega^m$ is equipped with the product topology, and
$\Gamma^m$ with the induced topology. The topological closure of any set
$A$ in $\Omega^m$ is denoted $\overline{A}$. Thus for example
$\Omega=\overline{{\cal Q}}$ and $\Gamma=\overline{{\cal Z}}$. The {\bf frontier} of a
subset $A$ of $\Omega^m$ is the closure of $\overline{A}\setminus A$. We denote it
$\partial A$.
\medskip
Whenever we take an element $a\in\Omega^m$ it is understood that
$a_1,\dots,a_m\in\Omega$ are its coordinates. We say that $a$ is {\bf
non-negative} if all its coordinates are. A subset $A$ of $\Omega^m$ is
{\bf non-negative} if all its elements are. { A function $f$ with values
in $\Omega$ is {\bf non-negative} (resp. {\bf positive}) on a subset $X$ of
its domain if $f(x)$ is non-negative (resp. positive) for every $x\in
X$. }
If $m\geq1$ we let $\widehat{a}$ (resp. $\widehat{A}$)
denote the image of $a$ (resp. $A$) under the coordinate projection of
$\Omega^m$ onto the first $m-1$ coordinates $\Omega^{m-1}$. We call it the {\bf
socle} of $a$ (resp. $A$). If ${\cal A}$ is a family of subsets of $\Gamma^m$ we
also call $\widehat{{\cal A}}=\{\widehat{A}\,\big/\ A\in{\cal A}\}$ the socle of ${\cal A}$.
The {\bf support of $a$}, denoted $\mathop{\rm Supp} a$, is the set of indices $i$
such that $a_i\neq+\infty$. When all the elements of $A$ have the same
support, we call it the {\bf support of $A$} and denote it $\mathop{\rm Supp} A$.
For every subset $I=\{i_1,\dots,i_r\}$ of $[\![1,m]\!]$ we let:
\begin{displaymath}
F_I(A)=F_{i_1,\dots,i_r}(A)=\{a \in \overline{A}\,\big/\ \mathop{\rm Supp} a = I\}.
\end{displaymath}
When $F_I(A)\neq\emptyset$ we call it the {\bf face of $A$ of support $I$}. It is
an {\bf upward face} if moreover $m\in I$ and $F_{I\setminus\{m\}}(A)$ is
non-empty. Note that if $A$ is contained in $\Gamma^m$ then
so are its faces, because $\Gamma^m$ is closed in $\Omega^m$. By construction,
$F_I(A)=\overline{A}\cap F_I(\Omega^m)$ hence $\overline{A}$ is the disjoint
union of all its faces. A {\bf complex} in $\Gamma^m$ is a finite family
${\cal A}$ of pairwise disjoint subsets of $\Gamma^m$ such that for every $A,B\in{\cal A}$,
$\overline{A}\cap\overline{B}$ is the union of the {\em common} faces of $A$
and $B$. It is a {\bf closed complex} if moreover it contains all the
faces of its members, or equivalently if $\bigcup{\cal A}$ is closed. Note that
a finite partition ${\cal S}$ of a subset $A$ of $\Gamma^m$ is a closed complex if
and only if ${\cal S}$ contains the faces of all its members.
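For a minimal illustration of these notions (which one checks directly from the definitions): take $m=1$ and $A={\rm\bf N}\subseteq\Gamma$. Then $\overline{A}={\rm\bf N}\cup\{+\infty\}$, the faces of $A$ are
\begin{displaymath}
F_1(A)=A \quad\mbox{and}\quad F_\emptyset(A)=\{+\infty\},
\end{displaymath}
and the partition $\{{\rm\bf N},\{+\infty\}\}$ of $\overline{A}$ is a closed complex in $\Gamma$: its members are pairwise disjoint, their closures only meet in the common face $\{+\infty\}$, and it contains the faces of all its members.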
The specialisation pre-order (see footnote~\ref{fo:spec-ord}) is an
order on the faces of $A$. The largest proper faces of $A$ with
respect to this order are called its {\bf facets}. We say that $A$ is
{\bf monohedral} if its faces are linearly ordered by specialisation.
Note that every subset of $F_I(\Gamma^m)$ is clopen in $F_I(\Gamma^m)$. In
particular if $A\subseteq F_I(\Gamma^m)$ then $\partial A$ is the disjoint union of its
proper faces. Note also that $F_J(A)=\emptyset$ whenever $J\nsubseteq I$.
\begin{example}\label{ex:face-non-cvx}
Let $A\subseteq{\rm\bf Z}^3$ be defined by $a_1\geq 0$, $a_2\geq a_1$ and $a_3=2a_2-2a_1$.
It has four non-empty faces: $A$ itself, two facets
$F_1(A)={\rm\bf N}\times\{+\infty\}\times\{+\infty\}$ and $F_3(A)=\{+\infty\}\times\{+\infty\}\times2{\rm\bf N}$, plus
$F_\emptyset(A)=\{(+\infty,+\infty,+\infty)\}$.
\end{example}
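To see where these faces come from (a routine verification, spelled out here for the reader's convenience): for each fixed $k\geq0$ the points $(t,t+k,2k)$ lie in $A$ for all $t\geq0$, and
\begin{displaymath}
(t,\,t+k,\,2k)\;\longrightarrow\;(+\infty,\,+\infty,\,2k) \quad\mbox{as } t\to+\infty,
\end{displaymath}
which yields $\{+\infty\}\times\{+\infty\}\times2{\rm\bf N}\subseteq F_3(A)$. The four other candidate faces are empty: on $\overline{A}$, if $a_1=+\infty$ then $a_2=+\infty$ (because $a_2\geq a_1$ on $A$); if $a_1$ is finite and $a_2=+\infty$ then $a_3=2a_2-2a_1=+\infty$; and if $a_1,a_2$ are both finite then so is $a_3$.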
We let $\pi^m_I$ be the natural projection of $\Gamma^m$ onto $F_I(\Gamma^m)$.
When $m$ is clear from the context, $\pi_I^m$ is simply
denoted $\pi_I$.
\begin{remark}\label{re:face-in-proj}
For every $A\subseteq \Gamma^m$ note that $F_J(A)\subseteq\pi_J(A)$ and $\widehat{F_J(A)}\subseteq
F_{\widehat{J}}(\widehat{A})$ (where $\widehat{J}=J\setminus\{m\}$). Indeed
for every $b\in F_J(A)$ there are points in $A$ arbitrarily close to
$b$. In particular there is a point $a\in A$ such that $\max_{i\in
J}|a_i-b_i|<1$, which implies that $\pi_J(a)=b$. This proves the first
inclusion, and we leave the second one to the reader.
\end{remark}
For every $J\subseteq[\![1,m]\!]$ and $a\in\Omega^m$ we let $\Delta_J^m(a)=\min\{a_i\,\big/\ i\notin
J\}$ (if $J=[\![1,m]\!]$ we use the convention that
$\Delta^m_J(a)=\min\emptyset=+\infty$). Again the superscript $m$ is omitted whenever it
is clear from the context. Note that for every $a,b\in\Omega^m$
\begin{displaymath}
\Delta_J(a+b) \geq \Delta_J(a) + \Delta_J(b).
\end{displaymath}
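This is simply the superadditivity of $\min$: outside $J$, each coordinate of $a+b$ is at least $\Delta_J(a)+\Delta_J(b)$. For instance, with $m=3$, $J=\{3\}$, $a=(1,4,+\infty)$ and $b=(2,0,+\infty)$:
\begin{displaymath}
\Delta_J(a+b)=\min(3,4)=3\;\geq\;\Delta_J(a)+\Delta_J(b)=1+0=1.
\end{displaymath}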
\begin{remark}\label{re:delta-dist}
When ${\cal Z}={\rm\bf Z}$ the topology\footnote{ The restriction of our topology
to ${\rm\bf Q}^m$ agrees with the usual one, induced by the Euclidean
distance.} on $\Omega^m$ comes from the distance $d(a,b)=\max_{1\leq i\leq m}
|2^{-a_i}-2^{-b_i}|$, with the convention that $2^{-\infty}=0$. Thus
$2^{-\Delta_J(a)}$ is just the distance from $a$ to its projection
$\pi_J(a)$. In the general case the topology on $\Omega^m$ no longer comes
from a distance. Nevertheless we will keep this geometric intuition in
mind, that $\Delta_J(a)$ measures something like a distance from $a$ to
$F_J(\Omega^m)$: the bigger $\Delta_J(a)$ is, the closer $a$ is to $F_J(\Omega^m)$.
\end{remark}
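For instance, in the case ${\cal Z}={\rm\bf Z}$ with $m=2$ and $J=\{1\}$, the point $a=(2,5)$ projects to $\pi_J(a)=(2,+\infty)$ and
\begin{displaymath}
d\big(a,\pi_J(a)\big)=\max\big(|2^{-2}-2^{-2}|,\,|2^{-5}-2^{-\infty}|\big)=2^{-5}=2^{-\Delta_J(a)},
\end{displaymath}
in accordance with Remark~\ref{re:delta-dist}.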
This intuitive meaning makes the following facts rather obvious.
\begin{fact}\label{fa:delta-dist}
For every function $f:A\subseteq \Gamma^m\to\Omega$, given $b\in\Gamma^m$ such that $\mathop{\rm Supp} b =J$ we
have:
\begin{enumerate}
\item
$b\in F_J(A)$ iff $b\in\overline{A}$ iff
$\forall\delta\in{\cal Z}$, $\exists a\in A$, $\pi_J(a)=b$ and $\Delta_J(a)\geq\delta$.
\item
If $b\in F_J(A)$ then $f$ has limit $+\infty$ at $b$ iff $\forall\varepsilon\in{\cal Z}$,
$\exists\delta\in{\cal Z}$, $\forall a\in A$, $[\pi_J(a)=b$ and $\Delta_J(a)\geq\delta] \Rightarrow f(a)\geq\varepsilon$.
\end{enumerate}
\end{fact}
Given a vector $u\in{\cal Z}^m$ we let $A+u=\{x+u\,\big/\ x\in A\}$. We say
that $u$ is {\bf pointing} to some $J\subseteq[\![1,m]\!]$ if $u_i=0$ for $i\in
J$ and $u_i>0$ for $i\notin J$.
\begin{remark}\label{re:half-line}
Let $J\subseteq I\subseteq[\![1,m]\!]$ and $S$ be any subset of $F_I(\Gamma^m)$. Using
Remark~\ref{re:face-in-proj} and the above facts, one easily sees that
if for every $\delta\in{\cal Z}$ there is $u\in{\cal Z}^m$ pointing to $J$ such that
$\Delta_J(u)\geq\delta$ and $S+u\subseteq S$ then $F_J(S)=\pi_J(S)$, and in particular
$F_J(S)\neq\emptyset$.
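As an illustration (which can be checked directly from the definitions), return to Example~\ref{ex:face-non-cvx} and take $J=\{3\}$. For every positive $\delta$ the vector $u=(\delta,\delta,0)$ points to $J$, satisfies $\Delta_J(u)=\delta$, and $A+u\subseteq A$, since translating by $u$ preserves the conditions $a_1\geq0$ and $a_2\geq a_1$ and leaves $a_3=2a_2-2a_1$ unchanged. Remark~\ref{re:half-line} then gives
\begin{displaymath}
F_3(A)=\pi_3(A)=\{+\infty\}\times\{+\infty\}\times2{\rm\bf N}.
\end{displaymath}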
\end{remark}
{
Example~\ref{ex:face-non-cvx} shows that even if a subset $A$ of ${\rm\bf Z}^m$
is defined by finitely many linear inequalities, its faces may not be
so. Thus a polytope $A$ in $\Gamma^m$ must satisfy additional conditions,
in order to ensure that some linear inequalities which define $A$ also
define its faces after passing to the limits (see
Proposition~\ref{pr:pres-face} for a precise statement). It is
these conditions that we are going to introduce now. }
A function $f:A\subseteq \Gamma^m\to\Omega$ is {\bf largely continuous} on
$A$ if it can be extended to a continuous function on $\overline{A}$,
which we will usually denote $\bar f$. If $A$ has support $I$, we say
that $f$ is an {\bf affine map} (resp. a {\bf linear map}) if either $f$
is constantly equal to $+\infty$, or for some $\alpha_0\in{\cal Q}$ (resp. $\alpha_0=0$) and
some $(\alpha_i)_{i\in I}\in{\rm\bf Q}^I$, we have
\begin{equation}\label{eq:def-lin}
\forall a\in A,\quad f(a)=\alpha_0 + \sum_{i\in I}\alpha_ia_i.
\end{equation}
We call $\alpha_0$ the ``constant coefficient'' in the above expression of
$f$. If such an expression exists for which $\alpha_0\in{\cal Z}$
and $\alpha_i\in {\rm\bf Z}$ for $i\in I$, we say that $f$ is {\bf integrally affine}. An
affine map which takes values in $\Gamma$ will be called {\bf
$\Gamma$\--affine}. For example $f(x)=x/2$ is $\Gamma$\--affine on $2{\rm\bf Z}$ but is
not integrally affine.
\begin{remark}\label{re:linearity}
Affinity and linearity are intrinsic properties because a function
$\varphi:A\subseteq F_I(\Gamma^m)\to{\cal Q}$ is a linear map if and only if for every
$a_1,\dots,a_k\in A$ and every $\lambda_1,\dots,\lambda_k\in{\rm\bf Q}$~:
\begin{displaymath}
\sum_{1\leq i\leq k}\lambda_ia_i\in A \ \Longrightarrow \ \varphi\bigg(\sum_{1\leq i\leq k}\lambda_ia_i\bigg)=\sum_{1\leq
i\leq k}\lambda_i\varphi(a_i)
\end{displaymath}
\end{remark}
The symbols of the Presburger language
${\cal L}_{Pres}=\{0,1,+,\leq,(\equiv_n)_{n\in{\rm\bf N}^*}\}$ are interpreted as usual in
${\cal Z}$: the binary relation $a\equiv_nb$ says that $a-b\in n{\cal Z}$, and the
other symbols have their obvious meanings. A subset $X$ of ${\cal Z}^d$ is
{\bf ${\cal L}_{Pres}$\--definable} if there is a first order formula
$\varphi(\xi)$ in ${\cal L}_{Pres}$, with parameters in ${\cal Z}$ and a $d$\--tuple $\xi$
of free variables, such that $X=\{x\in{\cal Z}^d\,\big/\ {\cal Z}\models\varphi(x)\}$. A function
$f:X\subseteq{\cal Z}^d\to{\cal Z}$ is {\bf ${\cal L}_{Pres}$\--definable} if its graph is.
Each $F_I(\Gamma^m)$ can be identified with ${\cal Z}^d$ with $d=\mathop{\rm Card}(I)$. We say that a
subset $A$ of $\Gamma^m$ is {\bf definable} if for every $I\subseteq[\![1,m]\!]$
the set $A\cap F_I(\Gamma^m)$ is ${\cal L}_{Pres}$\--definable by means of this
identification. We say that a function $f:A\subseteq \Gamma^m\to\Omega$ is {\bf definable}
if there is an integer $N\geq1$ such that $Nf(A)\subseteq\Gamma$ and if the
restrictions of $Nf$ to each $F_I(\Gamma^m)$ become, after this
identification, either an ${\cal L}_{Pres}$\--definable map from
${\cal Z}^{\mathop{\rm Card}(I)}$ to ${\cal Z}$ or the constant map $+\infty$. Note that every
affine map is definable in this broader sense.
The next characterisation of definable maps and sets comes
directly from Theorem~1 in \cite{cluc-2003}.
\begin{theorem}[Cluckers]\label{th:cluck-piece-lin}
For every definable function $f:A\subseteq \Gamma^m\to\Gamma$ on a non-negative set $A$,
there exists a partition of $A$ into finitely many
definable sets, on each of which the restriction of $f$ is
an affine map.
\end{theorem}
It is well known that the theory of ${\rm\bf Z}$\--groups has quantifier
elimination and definable Skolem functions. In many places, without
mentioning it, we will use the latter property in the following form.
\begin{theorem}[Skolem Functions]\label{th:skolem}
Let $A\subseteq{\cal Z}^m$ and $B\subseteq{\cal Z}^n$ be two ${\cal L}_{Pres}$\--definable sets.
Let $\varphi(x,y)$ be a first order formula in ${\cal L}_{Pres}$. If for every
$a\in A$ there is $b\in B$ such that ${\cal Z}\models\varphi(a,b)$ then there is a
definable map $\lambda:A\to B$ such that ${\cal Z}\models\varphi(a,\lambda(a))$ for every $a\in
A$.
\end{theorem}
Since ${\cal Z}$ is elementarily equivalent to ${\rm\bf Z}$ in the language
${\cal L}_{Pres}$, every non-empty ${\cal L}_{Pres}$\--definable subset of ${\cal Z}$
which is bounded above (resp. below) has a maximum (resp. minimum)
element. As a consequence, for every $a\in\Omega$ there is in ${\cal Z}$ a largest
(resp. smallest) element $\lfloor a\rfloor$ (resp. $\lceil a\rceil$) which is $\leq a$ (resp. $\geq a$). Note that
if $f:X\subseteq{\cal Z}^d\to{\cal Q}$ is definable and $N\geq1$ is an integer such that
$Nf$ is ${\cal L}_{Pres}$\--definable, then for every integer $0\leq k<N$ the
set $S_k=\{x\in X\,\big/\ Nf(x)\equiv_N k\}$ is ${\cal L}_{Pres}$\--definable, and so is
the map $\lfloor f\rfloor(x)=(Nf(x)-k)/N$ on $S_k$. Thus the map $\lfloor f\rfloor:X\to{\cal Z}$ is
${\cal L}_{Pres}$\--definable, and so is $\lceil f\rceil$ by a symmetric argument.
Obviously the same holds true for every definable map from $A\subseteq\Gamma^m$
to $\Omega$.
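To illustrate, take ${\cal Z}={\rm\bf Z}$ and $f(x)=x/2$ on ${\rm\bf Z}$, so that $N=2$ and $Nf(x)=x$ is ${\cal L}_{Pres}$\--definable. The pieces are $S_0=2{\rm\bf Z}$ and $S_1=2{\rm\bf Z}+1$, and the above formula gives
\begin{displaymath}
\lfloor f\rfloor(x)=\frac{x}{2}\ \mbox{ on } S_0, \qquad \lfloor f\rfloor(x)=\frac{x-1}{2}\ \mbox{ on } S_1,
\end{displaymath}
both of which are indeed ${\cal L}_{Pres}$\--definable.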
\begin{lemma}\label{le:precomp-borne}
If $f:A\subseteq\Gamma^m\to\Omega$ is a largely continuous definable map on
a non-negative set $A$, then it has a minimum in $A$.
\end{lemma}
\begin{proof}
It suffices to prove the result separately for each $A\cap F_I(\Gamma^m)$ with
$I\subseteq[\![1,m]\!]$. Every such piece can be identified with a definable subset
of ${\cal Z}^{\mathop{\rm Card} I}$ hence we can assume that $A\subseteq{\cal Z}^m$. Multiplying $f$
by some integer $n\geq1$ if necessary we can assume that $f$ takes values
in $\Gamma$, and even in ${\cal Z}$ (otherwise $f$ is constantly $+\infty$ and the
result is trivial). Since ${\cal Z}\equiv{\rm\bf Z}$, by instantiating the parameters of
a definition of $f:A\subseteq{\cal Z}^m\to{\cal Z}$ it suffices to prove the result
for every largely continuous definable function on a non-negative subset
$A$ of ${\rm\bf Z}^m$. But in that case the topology on $\Gamma^m$ comes from a
metric such that every non-negative subset of $\Gamma^m$ is precompact (that is
$\overline{A}$ is compact). So there is $\bar a\in\overline{A}$ such
that $\bar f(\bar a)=\min\{\bar f(x)\,\big/\ x\in\overline{A}\}$. For any $a\in
A$ close enough to $\bar a$ we have $f(a)=\bar f(\bar a)$ (because
$f(A)\subseteq{\rm\bf Z}$) hence $f(a)=\min\{f(x)\,\big/\ x\in A\}$.
\end{proof}
\begin{lemma}\label{le:image-compact}
Let $f:A\subseteq\Gamma^m\to\Omega^n$ be a continuous definable map. If $A$ is non-negative
then $f(\overline{A})$ is closed.
\end{lemma}
\begin{proof}
As for Lemma~\ref{le:precomp-borne} we can reduce to the case where
${\cal Z}={\rm\bf Z}$. But then $\overline{A}$ is compact, hence so is
$f(\overline{A})$ since $f$ is continuous.
\end{proof}
We extend the binary congruence relations of ${\cal Z}$ to $\Gamma$ with the convention
that $a\equiv+\infty\;[N]$ for every $a\in\Gamma$ and every $N\in{\rm\bf N}$. A subset $A$ of
$F_I(\Gamma^m)$ is a {\bf basic Presburger set} if it is the set of
solutions of finitely many linear inequalities and congruence
relations. Although we will not use it, it is worth mentioning that,
by the quantifier elimination of the theory of ${\rm\bf Z}$ in the language
${\cal L}_{Pres}$, the definable subsets of ${\cal Z}^d$, and more generally of
$\Gamma^m$, are exactly the finite unions of basic Presburger sets.
Cluckers has shown in \cite{cluc-2003} that every definable subset of
${\cal Z}^d$ is actually the disjoint union of finitely many subsets of a
much more restrictive sort, called cells. The following definition of
precells in $\Gamma^m$, more precisely of precells mod $N$ for a given
$N\in({\rm\bf N}^*)^m$, is adapted from his (see Remark~\ref{re:cell-cluc} below).
Since we only need to consider non-negative precells it is convenient to
restrict the definition to this case. If $m=0$, $\Gamma^0$ itself is the
only precell mod $N$ in $\Gamma^0$. If $m\geq1$, for every $I\subseteq[\![1,m]\!]$, a
subset $A$ of $F_I(\Gamma^m)$ is a {\bf precell mod $N$} if: $\widehat{A}$ is
a precell mod $\widehat{N}$, and there are non-negative affine maps
$\mu,\nu:\widehat{A}\to\Omega$ and an integer $\rho$ such that $0\leq\rho<N_m$ and $A$ is
exactly the set of points $a\in F_I(\Gamma^m)$ such that
$\widehat{a}\in\widehat{A}$ and
\begin{equation}
\mu(\widehat{a})\leq a_m\leq \nu(\widehat{a}) \mbox{ \ and \ }a_m\equiv\rho\;[N_m].
\label{eq:def-cell-mod-N}
\end{equation}
We call $\mu$, $\nu$ the {\bf boundaries} of $A$, $\rho$ a {\bf modulus} for $A$,
and such a triple $(\mu,\nu,\rho)$ a {\bf presentation} of $A$. We call it a
{\bf largely continuous presentation} of $A$ if moreover $\mu,\nu$ are
largely continuous. $A$ is a {\bf largely continuous precell mod $N$} if
$m=0$, or if $\widehat{A}$ is a largely continuous precell mod
$\widehat{N}$ and $A$ is a precell mod $N$ having a largely continuous
presentation.
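For a simple concrete instance of this definition (an illustration of ours, which the reader may verify): with $m=2$, $I=\{1,2\}$ and $N=(1,2)$, the set
\begin{displaymath}
A=\big\{a\in{\cal Z}^2\,\big/\ a_1\geq 0,\ a_1\leq a_2\leq 2a_1 \mbox{ and } a_2\equiv0\;[2]\big\}
\end{displaymath}
is a largely continuous precell mod $N$: its socle $\widehat{A}=\{a_1\in{\cal Z}\,\big/\ a_1\geq0\}$ is a precell mod $\widehat{N}=(1)$, and the triple $(\mu,\nu,\rho)$ with $\mu(a_1)=a_1$, $\nu(a_1)=2a_1$ and $\rho=0$ is a largely continuous presentation of $A$.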
\begin{remark}\label{re:cell-cluc}
Cells in \cite{cluc-2003} are not required to have non-negative
boundaries, but to be of one of these two types~: either $\mu-\nu$ is
not finitely bounded or $\mu=\nu$. Unfortunately this condition seems
too restrictive for our constructions, so we have to relax it. Thus
our precells are {\em
not} cells in the sense of \cite{cluc-2003} but a restriction (we
require non-negative boundaries) of a slight generalisation
($\mu-\nu$ can be finitely bounded and non-zero) of them.
\end{remark}
{
We are going to prove in the next section that the faces of every
largely continuous precell mod $N$ are still largely continuous
precells mod \emph{the same} $N$ (see Proposition~\ref{pr:pres-face}).
Thus, if one restricts to the case where $N=(1,\dots,1)$, these precells
are the best candidate we have for a discrete analogue of real
polytopes: they are intersections of finitely many half-spaces, and
so are their faces. In the $p$\--adic triangulation that we are aiming
at we will indeed restrict to this case. However it appears that all
the results and constructions that we are going to consider in the
present paper remain valid for largely continuous precells mod
arbitrary $N$. Since this creates no significant complication, we
will therefore work in this more general setting.
}
\section{Faces and projections}
\label{se:face-proj}
In this section we consider a non-empty basic Presburger set $A\subseteq
F_I(\Gamma^m)$ defined by
\begin{equation}
\mathop{\hbox{$\conj\mskip-11mu\relax\conj$}}\limits_{1\leq l\leq l_0}\varphi_l(x)\geq\gamma_l \mbox{ \ and \ }
\mathop{\hbox{$\conj\mskip-11mu\relax\conj$}}\limits_{1\leq l\leq l_1}\psi_l(x)\equiv\rho_l\;[n_l]
\label{eq:basic-pres}
\end{equation}
where $\varphi_l,\psi_l:F_I(\Gamma^m)\to{\cal Z}$ are integrally linear maps, $\gamma_l\in{\cal Z}$,
$\rho_l$ and $n_l$ are integers such that $0\leq\rho_l<n_l$. We prove some basic
properties of the faces of $A$ and the affine maps on $A$. Finally we
derive from these facts that every face of a largely continuous
precell $A$ mod $N$ is a largely continuous precell mod $N$ and has a
presentation inherited from $A$ in a uniform way (Proposition~\ref{pr:pres-face}).
\smallskip
Example~\ref{ex:face-non-cvx} shows that a precell mod $N$ (here
$N=(1,1,1)$) can have a facet which is no longer a precell mod $N$. But
even worse is possible: the next example shows that a precell mod $N$ can
have a facet which is not even a basic Presburger set.
\begin{example}\label{ex:face-non-Presb}
$A\subseteq{\rm\bf Z}^3$ is defined by $0\leq x_1\leq x_2$ and $(x_1+3x_2)/3\leq x_3\leq
(x_1+3x_2+1)/3$. Its unique facet $F_{1}(A)$ is defined by $0\leq x_1$
and either $x_1\equiv0\;[3]$ or $x_1\equiv2\;[3]$ (and of course $x_2=x_3=+\infty$).
\end{example}
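Indeed (a quick verification): given $x_1\geq0$ and $x_2\geq x_1$, an integer $x_3$ with $(x_1+3x_2)/3\leq x_3\leq(x_1+3x_2+1)/3$ exists if and only if $3$ divides $x_1+3x_2$ or $x_1+3x_2+1$, that is if and only if
\begin{displaymath}
x_1\equiv0\;[3] \quad\mbox{or}\quad x_1\equiv2\;[3],
\end{displaymath}
a condition which depends only on $x_1$; letting $x_2$ tend to $+\infty$ then yields the stated description of $F_1(A)$.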
\begin{lemma}\label{le:half-line}
Let $A\subseteq F_I(\Gamma^m)$ be defined by (\ref{eq:basic-pres}). Let $J$ be any
subset of $[\![1,m]\!]$. Then $F_J(A)\neq\emptyset$ if
and only if for every $\delta\in{\cal Z}$ there is $u\in{\cal Z}^m$ pointing to $J$
such that $\Delta_J(u)\geq\delta$ and $A+u\subseteq A$.
\end{lemma}
\begin{proof}
It suffices to prove the result when $I=[\![1,m]\!]$. One direction is
general by Remark~\ref{re:half-line}, so let us prove the converse.
Assume that $F_J(A)\neq\emptyset$ and fix any $\delta\in{\cal Z}$. Without loss of generality we can assume
that $\delta>0$. Pick $y_0\in F_J(A)$ and let $A_0=\{x\in A\,\big/\ \pi_J(x)=y_0\}$. By
Remark~\ref{re:face-in-proj}, $F_J(A_0)=\{y_0\}$.
Assume that for some $k\in[\![0,l_0-1]\!]$ we have found a definable
subset $A_k$ of $A_0$ such that $F_J(A_k)=\{y_0\}$ and for every
$l\in[\![1,k]\!]$, either $\varphi_l$ is constant on $A_k$ or $\varphi_l(x)$ tends
to $+\infty$ as $x$ tends to $y_0$ in $A_k$. If the same holds true for
$\varphi_{k+1}$, let $A_{k+1}=A_k$. Otherwise, there is some $\alpha\in{\cal Z}$ such that
for every $\omega\in{\cal Z}$ there is $x\in A_k$ such that $\Delta_J(x)\geq\omega$ and
$\varphi_{k+1}(x)\leq\alpha$. The set $\Upsilon$ of these $\alpha$'s is definable, non-empty, and
bounded below since $\varphi_{k+1}\geq\gamma_{k+1}$ on $A_k$. Hence it has a
minimum, say $\beta$. By minimality of $\beta$ there is $\omega_0\in{\cal Z}$ such that
for every $x\in A_k$ such that $\Delta_J(x)\geq\omega_0$, $\varphi_{k+1}(x)>\beta-1$. Thus, for
every $\omega\in{\cal Z}$ there is $x\in A_k$ such that $\Delta_J(x)\geq\omega$ and $\varphi_{k+1}(x)=\beta$
(because $\beta\in\Upsilon$). In other words, the set $A_{k+1}$ defined by
\begin{displaymath}
A_{k+1}=\big\{x\in A_k\,\big/\ \varphi_{k+1}(x)=\beta\big\}
\end{displaymath}
is such that $F_J(A_{k+1})\neq\emptyset$ (see Fact~\ref{fa:delta-dist}). It
obviously has all the other required properties since it is contained
in $A_k$.
By repeating the process until $k=l_0$ we get a definable set
$A_{l_0}$ as above. Pick any $a\in A_{l_0}$, by construction there is
$\omega\in{\cal Z}$ such that for every $x\in A_{l_0}$ if $\Delta_J(x)\geq\omega$ then $\varphi_l(x)\geq
\varphi_l(a)$ for every $l\in[\![1,l_0]\!]$. Pick any $b\in A_{l_0}$ such that
$\Delta_J(b)\geq\omega$ and $\Delta_J(b)\geq\delta+a_i$ for every $i\notin J$. It remains to check
that $u=b-a$ gives the conclusion. For every $j\in J$, $a_j=b_j=y_{0,j}$
because $a,b\in A_{l_0}\subseteq A_0$ and $\pi_J(A_0)=\{y_0\}$, hence $u_j=0$. For
$i\notin J$ we have $b_i\geq\Delta_J(b)\geq\delta+a_i$, hence $u_i\geq\delta>0$. In particular $u$
points to $J$ and $\Delta_J(u)\geq\delta$.
Finally let $x$ be any element of $A_{l_0}$. For every $l\leq l_0$ we
have $\varphi_l(x)\geq\gamma_l$ since $x\in A$, and by linearity of $\varphi_l$
\begin{equation}
\varphi_l(x+u)=\varphi_l(x)+\varphi_l(u)\geq\gamma_l+\varphi_l(u).
\label{eq:fl-x-plus-u}
\end{equation}
We also have $\varphi_l(b)=\varphi_l(a)+\varphi_l(u)$ by linearity, and $\varphi_l(b)\geq \varphi_l(a)$
because $\Delta_J(b)\geq\omega$, hence $\varphi_l(u)\geq0$. It follows that $\varphi_l(x+u)\geq \gamma_l$ by
(\ref{eq:fl-x-plus-u}). On the other hand, for every $l\in[\![1,l_1]\!]$
we have $\psi_l(x)\equiv\rho_l\;[n_l]$ because $x\in A$, $\psi_l(a)\equiv\rho_l\;[n_l]$ and
$\psi_l(b)\equiv\rho_l\;[n_l]$ for the same reason, hence
$\psi_l(x+u)=\psi_l(x)+\psi_l(b)-\psi_l(a)\equiv\rho_l\;[n_l]$. Thus $x+u\in A$ for every
$x\in A$, which proves the result.
\end{proof}
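To fix ideas, here is a minimal illustration of the criterion; it assumes the conventions above (a vector $u\in{\cal Z}^m$ {\em points to} $J$ when $u_j=0$ for $j\in J$ and $u_i>0$ for $i\notin J$) and is not used in the sequel.

```latex
\begin{example}
Let $A=\{x\in{\rm\bf Z}^2\,\big/\ x_1\leq x_2\}$ and $\delta>0$. For
$J=\{1\}$ the vector $u=(0,\delta)$ points to $J$, $\Delta_J(u)=\delta$ and
$A+u=\{x\in{\rm\bf Z}^2\,\big/\ x_1\leq x_2-\delta\}\subseteq A$, so
$F_{\{1\}}(A)\neq\emptyset$ (it is ${\rm\bf Z}\times\{+\infty\}$). For $J=\{2\}$
no such vector exists: every $u=(t,0)$ with $t>0$ satisfies
$A+u\supseteq A$ but not $A+u\subseteq A$, and indeed
$F_{\{2\}}(A)=\emptyset$ since $x_1\leq x_2$ bounds $x_1$ on $A$.
\end{example}
```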
\begin{proposition}\label{pr:face-egale-proj}
Let $A\subseteq F_I(\Gamma^m)$ be a basic Presburger set, $J$ and $H$ be any
subsets of\/ $[\![1,m]\!]$ such that $F_J(A)$ and $F_H(A)$ are
non-empty.
\begin{enumerate}
\item\label{it:face-proj}
$F_J(A)=\pi_J(A)$.
\item\label{it:face-face}
If $H\subseteq J$ then $F_H(A)=F_H(F_J(A))$.
\item\label{it:mono-supp}
$F_H(A)\subseteq\overline{F_J(A)}$ if and only if $H\subseteq J$. In particular
the faces of $A$ are linearly ordered by specialisation if and
only if their supports are linearly ordered by inclusion.
\item\label{it:face-lattice}
$F_{H\cap J}(A)$ is non-empty.
\end{enumerate}
\end{proposition}
We will refer to the $n$\--th point of
Proposition~\ref{pr:face-egale-proj} as
Proposition~\ref{pr:face-egale-proj}($n$).
\begin{remark}\label{re:sub-mono}
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-lattice}) shows
that the set of faces of $A$ ordered by specialisation is a
distributive lower semi-lattice with a smallest element. If $S$ is
any monohedral subset of $\Gamma^m$,
Proposition~\ref{pr:face-egale-proj}(\ref{it:mono-supp}) implies
that every basic Presburger subset $A$ of $S$ is monohedral, and
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-face}) that every
face of $A$ is again monohedral.
\end{remark}
\begin{proof}
The first point $F_J(A)=\pi_J(A)$ follows from Lemma~\ref{le:half-line}, by
Remark~\ref{re:half-line} applied to $S=A$.
For the second point, $H\subseteq J$ implies that $\pi_H(A)=\pi_H(\pi_J(A))$. Since
$F_H(A)=\pi_H(A)$ and $F_J(A)=\pi_J(A)$ by the first point, it suffices
to prove that $F_H(\pi_J(A))=\pi_H(\pi_J(A))$. For every $\delta\in{\cal Z}$ there is by
Lemma~\ref{le:half-line} a vector $u\in{\cal Z}^m$ pointing to $H$ such
that $\Delta_H(u)\geq\delta$ and $A+u\subseteq A$. Then obviously
$\pi_J(A)+u=\pi_J(A+u)\subseteq\pi_J(A)$, and the conclusion follows by
Remark~\ref{re:half-line} applied to $S=\pi_J(A)$.
For the third point, one direction follows from the second point and
the other direction is general since $F_H(A)\subseteq
F_H(\Gamma^m)$, $\overline{F_J(A)}\subseteq\overline{F_J(\Gamma^m)}$, and $F_H(\Gamma^m)$ is
disjoint from $\overline{F_J(A)}$ if $H$ is not contained in $J$.
It remains to prove the last point. For every $\delta\in{\cal Z}$,
Lemma~\ref{le:half-line} gives $u_J$ and $u_H$ in ${\cal Z}^m$ pointing to
$J$ and $H$ respectively such that $\Delta_J(u_J)\geq\delta$, $A+u_J\subseteq A$ and
similarly for $u_H$. Without loss of generality we can assume that
$\delta>0$, hence for every $i\notin J\cap H$, $u_{J,i}+u_{H,i}\geq\delta>0$. In particular
$u_J+u_H$ points to $J\cap H$ and $\Delta_{J\cap H}(u_J+u_H)\geq\delta$. Obviously
$A+u_J+u_H$ is contained in $A$. So $F_{J\cap H}(A)$ is non-empty by
Remark~\ref{re:half-line}.
\end{proof}
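The following toy example, a sketch not needed later, illustrates points (\ref{it:face-proj}) and (\ref{it:mono-supp}).

```latex
\begin{example}
Let $A=\{x\in{\rm\bf Z}^2\,\big/\ 0\leq x_1\leq x_2\}$. Its proper faces are
$F_{\{1\}}(A)=\pi_{\{1\}}(A)={\rm\bf N}\times\{+\infty\}$ and
$F_\emptyset(A)=\{(+\infty,+\infty)\}$, while $F_{\{2\}}(A)=\emptyset$.
The supports of the non-empty faces form the chain
$\emptyset\subseteq\{1\}\subseteq\{1,2\}$, so the faces of $A$ are
linearly ordered by specialisation.
\end{example}
```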
\begin{proposition}\label{pr:prol-pi}
Let $A\subseteq F_I(\Gamma^m)$ be a basic Presburger set defined by
(\ref{eq:basic-pres}), $f:A\to\Omega$ be an affine map, $J\subseteq I$ and
$B=F_J(A)$. Assume that $B$ is not empty and that $f$ extends to a
continuous map $f^*:A\cup B\to\Omega$. Then $f^*$ is affine, and if $f^*\neq+\infty$
then $f=f^*_{|B}\circ{\pi_J}_{|A}$. In particular if $f^*\neq+\infty$ then
$f(A)=f^*(B)$.
\end{proposition}
If $f$ is $\Gamma$\--affine then so is $f^*$ of course. However, if $f$ is
integrally affine we cannot conclude that $f^*$ will be integrally
affine as well, even if $f$ is largely continuous, as the following
example shows.
\begin{example}\label{ex:prol-non-Z-lin}
Keep $A\subseteq{\rm\bf Z}^3$ as in Example~\ref{ex:face-non-cvx}. The map
$f(x)=x_2-x_1$ is integrally affine and largely continuous on $A$,
with $\overline{f}(x)=x_3/2$ on $\partial A$. This is no longer an
integrally affine map on $B=F_3(A)=\{+\infty\}\times\{+\infty\}\times2{\rm\bf N}$.
\end{example}
\begin{proof}
It suffices to prove the result when $I=[\![1,m]\!]$, $f<+\infty$ is an
integrally linear map and $f^*$ is not constantly equal to $+\infty$.
{ Note
that $f^*$ is affine by Remark~\ref{re:linearity} (because equalities
satisfied by $f$ at every point of $A$ pass to the limits). So we only
have to prove that $f(x)=f^*(\pi_J(x))$ for every $x\in A$.
}
Let $\varphi$ be an integrally linear map on ${\cal Z}^m$ extending $f$, and $b\in
B$ such that $f^*(b)<+\infty$. Since $f(A)\subseteq{\cal Z}$ and $f(x)$ tends to
$f^*(b)$ at $b$, there exists $\delta\in{\cal Z}$ such that for every $x\in A$, if
$\pi_J(x)=b$ and $\Delta_J(x)\geq\delta$ then $f(x)=f^*(b)$. Pick any $a\in A$ such
that $\pi_J(a)=b$ and $\Delta_J(a)\geq\delta$, hence $f(a)=f^*(b)$.
Now assume for a contradiction that $f(x_0)\neq f^*(\pi_J(x_0))$ for some
$x_0\in A$. Let $y_0=\pi_J(x_0)$. Since $f(x)$ tends to $f^*(y_0)$ at
$y_0$ and $f^*(y_0)\neq f(x_0)$, there exists $\omega\in{\cal Z}$ such that for every
$x\in A$, if $\pi_J(x)=y_0$ and $\Delta_J(x)\geq\omega$ then $f(x)\neq f(x_0)$.
Lemma~\ref{le:half-line} gives $u\in{\cal Z}^m$ pointing to $J$ such
that $\Delta_J(u)\geq\omega-\Delta_J(x_0)$ and $A+u\subseteq A$. Then $\pi_J(x_0+u)=\pi_J(x_0)=y_0$
and $\Delta_J(x_0+u)\geq\Delta_J(x_0)+\Delta_J(u)\geq\omega$, hence $f(x_0+u)\neq f(x_0)$. By
linearity it follows that $\varphi(u)=f(x_0+u)-f(x_0)\neq0$. On the other hand
we have $\Delta_J(a+u)\geq\Delta_J(a)+\Delta_J(u)\geq\delta$ and $\pi_J(a+u)=\pi_J(a)=b$ hence
$f(a+u)=f^*(b)=f(a)$, and thus by linearity $\varphi(u)=f(a+u)-f(a)=0$, a
contradiction.
\end{proof}
\begin{proposition}\label{pr:face-socle}
Let $A\subseteq F_I(\Gamma^m)$ be a basic Presburger set with $m\geq1$, and
$X=\widehat{A}$. Then for every face\footnote{ As already mentioned
in Section~\ref{se:notation}, $A\subseteq F_I(\Gamma^m)$ and $F_J(A)\neq\emptyset$ imply
that $J\subseteq I$.} $B=F_J(A)$, its socle
$\widehat{B}=F_{\widehat{J}}(\widehat{A})$ is a face of
$\widehat{A}$. If moreover $A$ is non-negative, then conversely for
every face $Y$ of $X$ there is a face $B$ of $A$ such that
$\widehat{B}=Y$. In that case $B=Y\times\{+\infty\}$ if $m\notin\mathop{\rm Supp} B$, and
$B=(Y\times{\cal Z})\cap\overline{A}$ if $m\in\mathop{\rm Supp} B$.
\end{proposition}
\begin{remark}\label{re:face-socle}
The last assertion on $B$ is general: for every subset $S$
of $F_I(\Gamma^m)$ and every face $T=F_J(\Gamma^m)$ with socle $Y$, we have
$T=Y\times\{+\infty\}$ if $m\notin J$, and $T=(Y\times{\cal Z})\cap\overline{S}$ if $m\in J$. Indeed
$T=F_J(\Gamma^m)\cap\overline{S}$ and $F_J(\Gamma^m)$ is equal to
$F_{\widehat{J}}(\Gamma^{m-1})\times\{+\infty\}$ if $m\notin J$ and to
$F_{\widehat{J}}(\Gamma^{m-1})\times{\cal Z}$ otherwise.
\end{remark}
\begin{example}
Let $A=\{x\in{\rm\bf Z}^3\,\big/\ x_1-x_2-x_3=0\}$, its proper faces are
$B_0=\{(+\infty,+\infty,+\infty)\}$, $B_1=\{+\infty\}\times\{+\infty\}\times{\rm\bf Z}$ and $B_2=\{+\infty\}\times{\rm\bf Z}\times\{+\infty\}$. Thus
${\rm\bf Z}\times\{+\infty\}$ is a facet of $\widehat{A}={\rm\bf Z}^2$ which is not the socle
of any face of $A$.
\end{example}
This example shows that the assumption that $A$ is non-negative is
needed for the second part of Proposition~\ref{pr:face-socle} to
hold. Note that $\widehat{B}_1=\{(+\infty,+\infty)\}$ is not a facet of
$\widehat{A}={\rm\bf Z}^2$, which shows that the non-negativity of $A$
is also needed in Corollary~\ref{co:socle-facet}.
\begin{proof}
Given that $B=F_J(A)$ is a face of $A$, hence non-empty, let us prove
that $F_{\widehat{J}}(\widehat{A})=\pi_{\widehat{J}}(\widehat{A})$. For
every $\delta\in{\cal Z}$ we can find a vector $u\in{\cal Z}^m$ pointing to $J$ such
that $\Delta_J(u)\geq\delta$ and $A+u\subseteq A$, in particular $\widehat{u}$ points to
$\widehat{J}$, $\Delta_{\widehat{J}}(\widehat{u})\geq\delta$ and
$\widehat{A}+\widehat{u}\subseteq\widehat{A}$. Thus
$F_{\widehat{J}}(\widehat{A})=\pi_{\widehat{J}}(\widehat{A})$ by
Remark~\ref{re:half-line} applied to $S=\widehat{A}$. Since $B=\pi_J(A)$
by Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}), and
obviously $\widehat{\pi_J(A)}=\pi_{\widehat{J}}(\widehat{A})$, it follows
that $\widehat{B}=F_{\widehat{J}}(\widehat{A})$.
Now assume that $A$ is non-negative. Then the socle of $\overline{A}$ is
closed by Lemma~\ref{le:image-compact}. It contains $X$, hence
$\overline{X}$. In particular it contains $Y$, which is non-empty. So
there is $b\in\overline{A}$ whose socle $\widehat{b}$ belongs to $Y$.
Let $J=\mathop{\rm Supp} b$ and $B=F_J(A)$. Since $B$ contains $b$ it is
non-empty, hence a face of $A$. Then $\widehat{B}$ is a face of $X$
by the first point. Since $\widehat{b}$ belongs both to $Y$ and
$\widehat{B}$, it follows that $Y=\widehat{B}$.
\end{proof}
\begin{corollary}\label{co:socle-facet}
Let $A\subseteq F_I(\Gamma^m)$ be a non-closed non-negative basic Presburger set with
socle $X$. Let $B$ be a facet of $A$ with socle $Y$. Then $Y=X$ or
$Y$ is a facet of $X$.
\end{corollary}
\begin{proof}
By Proposition~\ref{pr:face-socle}, $Y$ is a face of $X$. If $Y\neq X$
then there is a facet $Y'$ of $X$ whose closure contains $Y$. It
remains to show that $Y=Y'$. Proposition~\ref{pr:face-socle} gives a
face $B'$ of $A$ with socle $Y'$. Let $J$, $J'$, $H$, $H'$ be the
supports of $B$, $B'$, $Y$, $Y'$ respectively. Obviously $H=J\setminus\{m\}$ and
$H'=J'\setminus\{m\}$. If $B=B'$ then $J=J'$ hence $H=H'$ and thus $Y=Y'$. Now
assume that $B\neq B'$. Since $B$ is a facet of $A$ it is not smaller
than $B'$ (with respect to the specialisation order) hence $J\nsubseteq J'$ by
Proposition~\ref{pr:face-egale-proj}(\ref{it:mono-supp}). On the other
hand $Y\leq Y'$ hence $H\subseteq H'$ (otherwise $F_H(\Gamma^{m-1})$ is disjoint from
the closure of $F_{H'}(\Gamma^{m-1})$). Altogether this implies that
$J=J'\cup\{m\}$. In particular $H=J\setminus\{m\}=J'\setminus\{m\}=H'$ hence $Y=Y'$ is a facet
of $X$.
\end{proof}
\begin{proposition}\label{pr:pres-face}
Let $A\subseteq F_I(\Gamma^m)$ be a largely continuous precell mod $N$ with $m\geq1$.
Let $(\mu,\nu,\rho)$ be a largely continuous presentation of $A$, $J$ a
subset of $I$, $\widehat{J}=J\setminus\{m\}$ and
$Y=F_{\widehat{J}}(\widehat{A})$. Then $F_J(A)\neq\emptyset$ if and only
if either $m\in J$ and $\bar\mu<+\infty$ on $Y$, or $m\notin J$ and $\bar\nu=+\infty$ on
$Y$. In any case
\begin{displaymath}
F_J(A)=\big\{b\in F_J(\Gamma^m)\,\big/\ \widehat{b}\in Y,\ \bar\mu(\widehat{b})\leq
b_m\leq \bar\nu(\widehat{b})\mbox{ and }b_m\equiv\rho\;[N_m]\big\}.
\end{displaymath}
In particular, if $F_J(A)$ is non-empty then it is a largely
continuous precell mod $N$ and $(\bar\mu_{|Y},\bar\nu_{|Y},\rho)$ is a
presentation of $F_J(A)$.
\end{proposition}
\begin{remark}\label{re:mono-cell}
Combining the last point of the above result with
Remark~\ref{re:sub-mono}, we get that if $A$ is a {\em monohedral}
largely continuous precell mod $N$ in $\Gamma^m$ then so is every face of $A$.
\end{remark}
\begin{proof}
Let $X$ be the socle of $A$. Recall that $Y=F_{\widehat{J}}(X)$ is a
face of $X$ and of the socle of $F_J(A)$ by
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}). Let $B$ be
the set of $a\in F_J(\Gamma^m)$ such that $\widehat{a}\in
F_{\widehat{J}}(\widehat{A})=Y$, $\bar\mu(\widehat{a})\leq a_m\leq
\bar\nu(\widehat{a})$ and $a_m\equiv\rho\;[N_m]$. Non-strict inequalities and
congruence relations valid on $A$ pass to the limits, hence remain
valid on $F_J(A)$. So $B\subseteq F_J(A)$, and if $F_J(A)\neq\emptyset$ then necessarily
one of the two alternatives of the first point holds true.
Conversely, take any point $a\in A$ and let $b=\pi_J(a)$. Assume first
that $m\in J$ and $\bar\mu<+\infty$. By Proposition~\ref{pr:prol-pi},
$\bar\mu\circ\pi_J=\mu$ on $X$ hence $\bar\mu(\widehat{b})=\mu(\widehat{a})\leq
a_m=b_m$. If $\bar\nu<+\infty$ then similarly $b_m\leq\bar\nu(\widehat{b})$.
Otherwise $\bar\nu=+\infty$ and $b_m\leq\bar\nu(\widehat{b})$ is obvious. Since
$b_m=a_m\equiv\rho\;[N_m]$ it follows in both cases that $b\in B$. Now assume
that $m\notin J$ and $\bar\nu=+\infty$. Then $b_m=+\infty$, hence obviously
$\bar\mu(\widehat{b})\leq +\infty=b_m=\bar\nu(\widehat{b})$ and $b_m=+\infty\equiv\rho\;[N_m]$.
Thus $b\in B$, which proves that $\pi_J(A)\subseteq B$. In particular $B\neq\emptyset$,
hence $F_J(A)\neq\emptyset$ since it contains $B$. This proves the first point.
Moreover by Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj})
it follows that $F_J(A)=\pi_J(A)\subseteq B$. Hence $F_J(A)=B$, which proves the
second point. In particular $F_J(A)$ is a largely continuous precell if
$Y$ is so. The rest of the conclusion then
follows by a straightforward induction.
\end{proof}
\section{Bounding functions}
\label{se:bound-fun}
We prove here two technical results (Propositions~\ref{pr:maj-Z-aff}
and \ref{pr:min-Z-aff}) used in the next section.
\begin{proposition}\label{pr:maj-Z-aff}
Let $A\subseteq F_I(\Gamma^m)$ be a definable set, and $f_1,\dots,f_r$ be definable
maps from $A$ to ${\cal Q}$. Assume that the coordinates of all the
points of $A$ are non-negative. Then there exists a largely continuous,
positive, integrally affine map $f:A\to{\cal Z}$ such that $f(x)\geq
\max_j f_j(x)$ on $A$ and $\bar f=+\infty$ on $\partial A$. More precisely, $f$
can be taken to be of the form $f(x)=\beta+\alpha\sum_{i\in I}x_i$ on $A$, for some
positive $\alpha\in{\rm\bf Z}$ and $\beta\in{\cal Z}$.
\end{proposition}
\begin{proof}
Without loss of generality we can assume that $I=[\![1,m]\!]$.
By Theorem~\ref{th:cluck-piece-lin}, it suffices to consider
the case where the $f_j$'s
are affine. Each $f_j$ then can be written as
\begin{displaymath}
f_j(x)=\alpha_{0,j}+\sum_{1\leq i\leq m} \alpha_{i,j}x_i
\end{displaymath}
for some $\alpha_{i,j}\in{\rm\bf Z}$ for $i\geq1$ and some $\alpha_{0,j}\in{\cal Z}$. Let $\alpha\geq1$ be an
integer greater than $\alpha_{i,j}$ for every $i,j\geq1$, and $\beta\geq1$ an element
of ${\cal Z}$ greater than the $\alpha_{0,j}$ for every $j\geq1$. For every $x$ in $A$
and every $i,j\geq1$, since $x_i\geq0$ we have $\alpha x_i\geq \alpha_{i,j}x_i$. So the
function $f(x)=\beta+\alpha\sum_{1\leq i\leq m} x_i$ has all the required properties.
\end{proof}
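A toy numerical instance of this construction (the specific maps are chosen for illustration only):

```latex
\begin{example}
Take $m=2$, $f_1(x)=2x_1-x_2+1$ and $f_2(x)=-3x_1+x_2-5$. Then $\alpha=3$
is greater than every coefficient $\alpha_{i,j}$ and $\beta=2$ is greater
than both constants, so for every $x$ with $x_1,x_2\geq0$,
\begin{displaymath}
f(x)=2+3(x_1+x_2)\geq\alpha_{0,j}+\alpha_{1,j}x_1+\alpha_{2,j}x_2=f_j(x)
\qquad(j=1,2),
\end{displaymath}
each term being compared separately.
\end{example}
```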
\begin{lemma}\label{le:f-hat-C0bar}
Let $A\subseteq{\cal Z}^m$ be a largely continuous precell mod $N$, $X$ its socle,
$(\mu,\nu,\rho)$ a largely continuous presentation of $A$ and $f$ a largely
continuous affine map on $A$ such that $\bar f=+\infty$ on $\partial A$. Let
$(\alpha_i)_{1\leq i\leq m}\in{\rm\bf Q}^m$ and $\beta\in{\cal Q}$ be such that $f(a)=\beta+\sum_{1\leq i\leq
m}\alpha_i a_i$ on $A$. Extend $f$ to ${\cal Q}^m$ by means of this expression.
For every $x\in\widehat{A}$ let $\hat f(x)=f(x,\mu(x))$ if $\alpha_m\geq0$, and
$\hat f(x)=f(x,\nu(x))$ otherwise. Then $\hat f$ is a well-defined
largely continuous affine map on $X$ with limit $+\infty$ at every point of $\partial
X$, and $\min f(A)-|\alpha_m|N_m\leq \hat f(\widehat{a})\leq f(a)$ for every $a\in A$.
\end{lemma}
\begin{proof}
The only possible problem in the definition of $\hat f$ is when
$\nu=+\infty$. But then $\alpha_m\geq0$ because otherwise, given any $x\in X$ we have
$(x,+\infty)\in\partial A$ and $f(a)<0$ for every $a\in A$ close enough to $(x,+\infty)$, a
contradiction since $\bar f=+\infty$ on $\partial A$. Thus $\hat f(x)=f(x,\mu(x))$ is
well-defined in this case too.
Let $\lambda=\mu$ if $\alpha_m\geq0$ and $\lambda=\nu$ otherwise. Then $\hat f(x)=f(x,\lambda(x))$
is an affine map and $\alpha_m(a_m-\lambda(\widehat{a}))$ is non-negative on $A$ by
construction, hence
\begin{displaymath}
f(a) = f\big(\widehat{a},\lambda(\widehat{a})\big)
+ \alpha_m\big(a_m-\lambda(\widehat{a})\big)
\geq \hat f(\widehat{a}).
\end{displaymath}
For every $x\in X$ there is a point $a\in A$ such that
$\widehat{a}=x$ and $|a_m-\lambda(x)|\leq N_m$. So there is a definable
function $\delta:X\to{\cal Z}$ such that $(x,\delta(x))\in A$ and
$|\delta(x)-\lambda(x)|\leq N_m$ for every $x\in X$. We have
\begin{eqnarray*}
f(x,\lambda(x))&=& f\big(x,\delta(x)\big)+\alpha_m\big(\lambda(x)-\delta(x)\big) \\
&\geq& f\big(x,\delta(x)\big)-|\alpha_m|N_m \label{eq:ineg-f-crochet}
\end{eqnarray*}
In particular $\hat f(x)\geq \min f(A)-|\alpha_m|N_m$ on $X$.
It only remains to check that, given any $y\in \partial X$, $f(x,\lambda(x))$ tends
to $+\infty$ when $x\in X$ tends to $y$. By the above inequality it suffices
to prove that $f(x,\delta(x))$ tends to $+\infty$ when $x\in X$ tends to $y$.
Since $(x,\delta(x))\in A$ for every $x\in X$ and $\bar f=+\infty$ on $\partial A$, it
is sufficient to show that $\delta(x)$ tends to a limit $l\in\Gamma$ as $x\in X$
tends to $y$. Indeed, since $y\in\partial X$ we will then have that $(x,\delta(x))$
tends to $(y,l)\in\partial A$, whence the conclusion. We prove it only when $\alpha_m\geq0$,
the case where $\alpha_m<0$ being similar.
If $\bar\mu(y)=+\infty$, then obviously $\delta(x)$ tends to $+\infty$ since
$\mu(x)\leq\delta(x)$. If $\bar\mu(y)<+\infty$ then $\mu(x)=\bar\mu(y)$ for every $x\in X$
close enough to $y$. Hence $\delta(x)$, which is the smallest element $t$ in
$\Gamma$ such that $\mu(x)\leq t$ and $t\equiv\rho\;[N_m]$, remains constant too. In
particular it has a limit in ${\cal Z}$ as $x\in X$ tends to $y$.
\end{proof}
\begin{proposition}\label{pr:min-Z-aff}
Let $A\subseteq F_I(\Gamma^m)$ be a largely continuous precell mod $N$, and
$f_1,\dots,f_r$ be largely continuous affine maps on $A$ such
that $\overline{f_j}=+\infty$ on $\partial A$ for every $j$. Then there exists a
largely continuous affine map $f$ on $A$ such that
$\overline{f}=+\infty$ on $\partial A$ and $f(x) \leq \min_j f_j(x)$ for every $x\in
A$. If, moreover, each $f_j$ is positive on $A$ then $f$
can be chosen positive on $A$.
\end{proposition}
\begin{proof}
Without loss of generality we can assume that $A\subseteq {\cal Z}^m$ and $f_j<+\infty$ for every $j$. By
Lemma~\ref{le:precomp-borne} there is $\gamma\in{\cal Q}$ such that
$\gamma=\min\bigcup_jf_j(A)$. Given an arbitrary $\gamma'<\gamma$ in ${\cal Q}$ we are going to
show that there exists a largely continuous map $f:A\to{\cal Q}$ such that
$\bar f=+\infty$ on $\partial A$ and $\gamma'\leq f(x)\leq\min_j f_j(x)$ on $A$. This will
prove simultaneously the two statements, because if each $f_j$ is
positive then $\gamma>0$ hence taking for example $\gamma'=\gamma/2$ will give
that $0<\gamma/2\leq f$ on $A$.
The proof goes, needless to say, by induction on $m$. If $m=0$, and
more generally if $A$ is closed, the constant function $f=\gamma$ has the
required properties. So we can assume that $A$ is not closed, $m\geq1$
and the result is proven for smaller integers. Replacing each $f_j$ by
$f_j-\gamma$ we can assume that $\gamma=0$. Replacing $\gamma'<0$ by a bigger one if
necessary we can assume that $\gamma'\in{\rm\bf Q}$.
Let $\alpha_{i,j}\in{\rm\bf Q}$ and $\beta_j\in{\cal Q}$ such that $f_j(x)=\beta_j+\sum_{1\leq i\leq
m}\alpha_{i,j}x_i$. Let $\hat f_j:X\to{\cal Q}$ be defined as in
Lemma~\ref{le:f-hat-C0bar}, and $\eta=\min\bigcup_j\hat f_j(X)$. By
Lemma~\ref{le:f-hat-C0bar} the induction hypothesis applies to these
functions. Given any $\eta'<\eta$, it gives a largely continuous affine map
$g:X\to{\cal Q}$ such that $\bar g=+\infty$ on $\partial X$ and $\eta'\leq g(x)\leq\hat f_j(x)$ on
$X$ for $1\leq j\leq r$. We do this for $\eta'=-(\max_j|\alpha_{m,j}|N_m+1)$. Indeed
by Lemma~\ref{le:f-hat-C0bar}, $-|\alpha_{m,j}|N_m\leq\hat f_j$ on $X$ for $1\leq
j\leq r$ hence $\eta'\leq\eta-1<\eta$. Since $\eta'<0$, replacing $\gamma'$ by a bigger one if necessary we
can assume that $\eta'\leq\gamma'$.
\paragraph{Case 1:} $\nu_A=+\infty$. Then for $1\leq j\leq r$ the coefficient
$\alpha_{m,j}$ of $x_m$ in the above expression of $f_j$ is
positive (otherwise $f_j$ would have a finite limit at the points of
$X\times\{+\infty\}\subseteq\partial A$, see the proof of Lemma~\ref{le:f-hat-C0bar}), hence $\hat
f_j(x)=f_j(x,\mu(x))$ and $\alpha=\min_{j\leq r}\alpha_{m,j}$ is positive.
Let $G(a)=g(\widehat{a})+\alpha(a_m-\mu(\widehat{a}))$ on $A$. For $1\leq j\leq r$
we have
\begin{displaymath}
G(a) \leq \hat f_j(\widehat{a}) + \alpha_{m,j}\big(a_m-\mu(\widehat{a})\big)
= f_j\big(\widehat{a},\mu(\widehat{a})\big)
+ \alpha_{m,j}\big(a_m-\mu(\widehat{a})\big)
= f_j(a).
\end{displaymath}
Every $b\in\partial A$ either belongs to $X\times\{+\infty\}$ or to $\partial X\times\Gamma$. If $b\in X\times\{+\infty\}$
then $G(a)=g(\widehat{a})+\alpha(a_m-\mu(\widehat{a}))$ tends to $+\infty$ as $a\in
A$ tends to $b$, because $\widehat{a}$ then tends to $\widehat{b}$,
$a_m$ tends to $+\infty$ and $\alpha>0$. If $b\in \partial X\times\Gamma$ then $G(a)\geq
g(\widehat{a})$ tends to $+\infty$ as $a\in A$ tends to $b$, because
$\widehat{a}$ then tends to $\widehat{b}$. Hence $G$ is largely
continuous and $\bar G=+\infty$ on $\partial A$.
\paragraph{Case 2:} $\nu_A<+\infty$. Let $G(a)=g(\widehat{a})$ on $A$. Then every
$b\in \partial A$ belongs to $\partial X\times\Gamma$, hence $G(a)$ tends to $+\infty$ as $a\in A$
tends to $b$. Moreover $G(a)\leq\hat f_j(\widehat{a})\leq f_j(a)$ for $1\leq j\leq r$.
\paragraph{Cases 1 and 2:} In both cases, it remains to modify $G$ so
that its minimum becomes greater than $\gamma'$. By construction $G(a)\geq
g(\widehat{a})\geq \eta'$ on $A$. Recall that $\eta'=-(\max_j|\alpha_{m,j}|N_m+1)$
and $\gamma'\geq \eta'$ are strictly negative {\em rational} numbers. Thus we
can define $f(a)=(\gamma'/\eta') G(a)$ on $A$. Clearly $f$ is a largely
continuous affine function on $A$ with $\bar f=+\infty$ on $\partial A$, and $f\geq
(\gamma'/\eta')\eta'=\gamma'$ since $\gamma'/\eta'\geq0$ and $G\geq\eta'$ on $A$. Moreover $0\leq\gamma'/\eta'\leq
1$ hence for every $a\in A$:
\begin{displaymath}
f(a)=\frac{\gamma'}{\eta'}G(a)\leq\max(0,G(a))\leq\min_{1\leq j\leq r}f_j(a)
\end{displaymath}
\end{proof}
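The arithmetic of the final rescaling can be checked on concrete values (a sketch with illustrative numbers):

```latex
\begin{example}
Suppose $\eta'=-4$ and $\gamma'=-1$, so $\gamma'/\eta'=1/4$. If $G\geq-4$ on
$A$ then $f=\frac{1}{4}G\geq-1=\gamma'$. Moreover $f(a)\leq G(a)$ whenever
$G(a)\geq0$, and $f(a)<0$ whenever $G(a)<0$; in both cases
$f(a)\leq\max(0,G(a))\leq\min_j f_j(a)$, since each $f_j$ is
non-negative (here $\gamma=0$) and $G\leq f_j$ on $A$.
\end{example}
```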
\section{Monohedral division}
\label{se:division}
{
The next lemma is the technical heart of this paper. Loosely speaking,
given a precell $A\subseteq\Gamma^m$, a facet $B$ of $A$, a function $f:B\to{\cal Z}$ and
a family ${\cal D}$ of monohedral precells covering $B$, we are going to
inflate each $D$ in ${\cal D}$ to a precell $C_D\subseteq A$ in such a way that:
\begin{enumerate}
\item
$C_D$ is a monohedral precell with facet $D$;
\item
the shape of $C_D$ is controlled by $f$, in the sense that the
distance to $B$ of any point $a\in C_D$ is less than $f(b)$ (where
$b$ is the projection of $a$ onto $B$);
\item
$C_D$ contains a ``neighbourhood of $D$'', so to say, in the sense
that every point of $A$ close enough to $D$ belongs to $C_D$ (the
``close enough'' condition will be controlled by a function
$\delta:B\to{\cal Z}$);
\item
the various $C_D$'s do not intersect too much (in particular if
${\cal D}$ is a partition of $B$, we require the various precells $C_D$ to
be pairwise disjoint).
\end{enumerate}
In addition we construct simultaneously a family ${\cal U}$ of precells
partitioning the complement of $\bigcup_{D\in{\cal D}}C_D$ in $A$, such that the
proper faces of every $U\in{\cal U}$ are proper faces of $A$ different from
$B$. In particular $U$ has fewer faces than $A$, which will make it
possible to use the next lemma repeatedly (first applied to $A$, then
to each $U\in{\cal U}$) while proving results by induction on the number of faces.
}
\begin{lemma}\label{le:face-elargie}
Let $A\subseteq F_I(\Gamma^m)$ be a non-closed largely continuous precell
mod $N$. Let $B$ be a facet of
$A$, $J$ its support, $f:B\to{\cal Z}$ a definable map. Let ${\cal D}$ be a
family of largely continuous monohedral precells mod $N$ such that
$\bigcup{\cal D}= B$. Then there exists a pair $({\cal C},{\cal U})$ of families of
largely continuous precells mod $N$ contained in $A$ and an integrally
affine map $\delta:B\to{\cal Z}$ such that ${\cal U}$ is a finite partition of
$A\setminus\bigcup{\cal C}$, the proper faces of every precell in ${\cal U}$ are proper faces
of $A$, and ${\cal C}$ is a family $(C_D)_{D\in{\cal D}}$ of precells with the
following properties:
\begin{description}
\item
[(Fac)] $C_D$ has a unique facet which is $D$.
\item
[(Sub)] $C_D\subseteq\{a\in A\,\big/\ \pi_J(a)\in D$ and $\Delta_J(a)\geq f\circ\pi_J(a)\}$.
\item
[(Sup)] $C_D\supseteq\{a\in A\,\big/\ \pi_J(a)\in D$ and $\Delta_J(a)\geq \delta\circ\pi_J(a)\}$.
\item
[(Diff)] For every $E\in{\cal D}$, $\pi_J(C_D\setminus C_E)\subseteq D\setminus E$.
\end{description}
\end{lemma}
\begin{remark}\label{re:face-elargie}
In every application of Lemma~\ref{le:face-elargie}, ${\cal D}$ will be a
partition of $B$. So the condition (Diff) simply says that the precells
in ${\cal C}$ are pairwise disjoint, hence that ${\cal C}\cup{\cal U}$ is a
partition of $A$. However we cannot restrict to this case, because
it may happen that ${\cal D}$ is a partition of $B$ while $\widehat{{\cal D}}$
is not a partition of $\widehat{B}$, which would be crippling when
proving the result by induction on $m$.
\end{remark}
Before entering into the somewhat intricate proof of this lemma, let us
make a few preliminary observations.
\begin{claim}\label{cl:U-pas-B}
With the notation of Lemma~\ref{le:face-elargie}, $B$ is not a face
of any $U\in{\cal U}$.
\end{claim}
\begin{proof}
For every $b\in B$ there is $D\in{\cal D}$ such that $b\in D$. By (Sup) every
point $a\in A$ such that $\pi_J(a)=b$ and $\Delta_J(a)\geq\delta\circ\pi_J(a)$ belongs to
$C_D$, hence not to $U$. Thus $b\notin\overline{U}$, that is
$B\cap\overline{U}=\emptyset$.
\end{proof}
\begin{claim}\label{cl:fac}
Let $A\subseteq F_I(\Gamma^m)$ be a non-closed largely continuous precell mod
$N$, $B$ a facet of $A$, $J=\mathop{\rm Supp} B$. Let $C_D$, $D$ be any precells
mod $N$ contained in $A$, $B$ respectively, satisfying conditions
(Sub) and (Sup) of Lemma~\ref{le:face-elargie} for some definable
maps $f,\delta:B\to{\cal Z}$. If $f$ is largely continuous and $\bar f=+\infty$ on $\partial
B$ then property (Fac) of Lemma~\ref{le:face-elargie} follows: $C_D$
has a unique facet which is $D$.
\end{claim}
\begin{proof}
For every $b\in D$ and every $\varepsilon\in{\cal Z}$, $b\in\overline{A}$ hence there
exists $a\in A$ such that $\pi_J(a)=b$ and $\Delta_J(a)\geq\max(\delta(b),\varepsilon)$. By
(Sup) this point $a$ belongs to $C_D$, hence $b$ is in the closure
of $C_D$. So $D\subseteq F_J(C_D)$, and conversely (Sub) implies that
$\pi_J(C_D)\subseteq D$, hence $F_J(C_D)=\pi_J(C_D)=D$ by
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}).
Assume for a contradiction that $C_D$ has a proper face $F_H(C_D)$
not contained in $\overline{D}$. Pick any $c$ in $F_H(C_D)$. By
Proposition~\ref{pr:face-egale-proj}(\ref{it:mono-supp}), $H$ is not
contained in $J$ so pick any $k\in H\setminus J$. By
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-lattice}), $F_{J\cap
H}(C_D)\neq\emptyset$, hence by the remaining points of
Proposition~\ref{pr:face-egale-proj}, $\pi_{J\cap H}(C_D)=F_{J\cap
H}(C_D)=F_{J\cap H}(F_H(C_D))\subseteq F_{J\cap H}(B)\subseteq \partial B$. So $\pi_{J\cap H}(c)\in \partial
B$, hence $f$ has limit $+\infty$ at $\pi_{J\cap H}(c)$. In particular there
is $\varepsilon\in{\cal Z}$ such that for every $b\in B$
\begin{equation}
\big[\pi_{J\cap H}(b)=\pi_{J\cap H}(c)\mbox{ and }\Delta_{J\cap H}(b)\geq \varepsilon\big]
\Rightarrow f(b)>c_k
\label{eq:fac1}
\end{equation}
On the other hand $c\in F_H(C_D)$ hence there is $a\in C_D$ such that
$\pi_H(a)=c$ and $\Delta_H(a)\geq\varepsilon$. Let $b=\pi_J(a)$, then $\pi_{J\cap
H}(b)=\pi_{J\cap H}(a)=\pi_{J\cap H}(c)$ and
\begin{displaymath}
\Delta_{J\cap H}(b) = \min_{j\notin H}b_j = \min_{j\in J\setminus H}a_j
\geq \min_{i\notin H}a_i = \Delta_H(a)\geq\varepsilon.
\end{displaymath}
By (\ref{eq:fac1}) this implies that $f(b)>c_k$, that is
$f\circ\pi_J(a)>c_k$. By (Sub) it follows that $\Delta_J(a)>c_k$, a
contradiction since $\Delta_J(a)=\min_{j\notin J}a_j\leq a_k$ (because $k\notin J$)
and $a_k=c_k$ (because $k\in H$ and $\pi_H(a)=c$).
\end{proof}
\begin{proof}[of Lemma~\ref{le:face-elargie}]
Let $(\mu,\nu,\rho)$ be a largely continuous presentation of $A$. Let $X$,
$Y$ be the socles of $A$, $B$ respectively, $\widehat{I}=\mathop{\rm Supp} X$ and
$\widehat{J}=\mathop{\rm Supp} Y$. Since $B$ is a facet of $A$, by
Proposition~\ref{pr:face-socle} either $Y=X$ and $B=X\times\{+\infty\}$, or $Y$ is
a facet of $X$ and either $B=Y\times\{+\infty\}$ or $B=(Y\times{\cal Z})\cap\overline{A}$. For
each $D\in{\cal D}$ let $(\mu_D,\nu_D,\rho_D)$ be a largely continuous presentation
of $D$.
If $m=0$ the result is trivially true because there is no non-closed
precell contained in $\Gamma^0$. So we can assume that $m\geq1$ and the result
is proven for smaller integers. If $\mu=+\infty$ then $A=X\times\{+\infty\}$ can be
identified with $X$, after which the result follows by the induction
hypothesis. So we can assume that $\mu<+\infty$.
Proposition~\ref{pr:maj-Z-aff} gives positive $\alpha\in{\rm\bf Z}$ and
$\beta\in{\cal Z}$ such that $f(x)\leq\beta+\alpha\sum_{j\in J}x_j$ on $B$. Without loss of
generality we can assume that equality holds on $B$, and we still
denote by $f$ the corresponding extension of $f$ to $F_J(\Omega^m)$.
In particular $f$ is now largely continuous on $B$ with $\bar f_B=+\infty$
on $\partial B$.
{
It is sufficient to build a pair $((C_D)_{D\in{\cal D}},{\cal U})$ of families of
largely continuous precells mod $N$ contained in $A$ and a definable
map $\delta:B\to{\cal Q}$ such that ${\cal U}$ is a finite partition of $A\setminus\bigcup{\cal C}$, that
the proper faces of every precell in ${\cal U}$ are proper faces of $A$,
and that for each $D$ in ${\cal D}$ we have:
\begin{itemize}
\item[(Fac')]
$\pi_J(C_D)=D$;
\item[(Sub')]
$\Delta_J\geq f\circ\pi_J$ on $C_D$;
\item [(Sup)]
$C_D\supseteq\{a\in A\,\big/\ \pi_J(a)\in D$ and $\Delta_J(a)\geq\delta(\pi_J(a))\}$;
\item[(Diff')]
$\pi_J(C_D\setminus C_E)$ is disjoint from $E$, for every $E\in{\cal D}$.
\end{itemize}
Indeed, we do not need to require that $\delta$ is integrally affine,
because if property (Sup) holds for a definable map $\delta:B\to{\cal Q}$ it will
then hold for every larger map, and Proposition~\ref{pr:maj-Z-aff}
provides an integrally affine one. Then by (Fac'), properties (Sub)
and (Diff) will follow from (Sub') and (Diff'). Because $\bar f=+\infty$ on
$\partial B$, (Fac) will then follow from (Sup) and (Sub) by
Claim~\ref{cl:fac}.
We have to distinguish several cases. For the convenience of the
reader most of them are accompanied by a figure representing (very
approximately) the general idea of the construction when $m=2$. In
these figures, each $(i,j)\in\Gamma^2$ with positive coordinates is placed
at the point of coordinates $(1-2^{-i},1-2^{-j})$ in the figure, so
that the order is preserved. Therefore the set of positive points of
$\Gamma^2$ is represented by a square whose bottom, left, right and
top edges represent $\Gamma\times\{0\}$, $\{0\}\times\Gamma$, $\{+\infty\}\times\Gamma$ and $\Gamma\times\{+\infty\}$
respectively\footnote{The reader may imagine that this lower edge of
the square is actually $\Gamma^{m-1}$ in the induction steps, while the
left edge is $\Gamma$.}\;. A side effect of this compactification is that
linear functions are represented by curved lines.
The various precells involved will be represented in this square by
gray areas whose union is $A$ (or the auxiliary precell $A^\circ$ in
figure~\ref{fi:div-3-right}). Their socles will take place in the
bottom $\Gamma\times\{0\}$, as we identify it with $\Gamma$. Finally $B$ will be
represented by a thick edge or a corner of the square, depending on
the cases.
}
\paragraph{Case 1:} $Y=X$.
\\
Then $B=X\times\{+\infty\}$ hence $\nu=+\infty$ and $J= I\setminus\{m\}$, thus $\Delta_J(a)=a_m$ and
$\pi_J(a)=(\widehat{a},+\infty)$ for every $a\in A$. So $B$ identifies with $X$
and ${\cal D}$ with $\widehat{{\cal D}}$. Roughly speaking, we are going on the one
hand to split $A$ into two parts by means of a function $\lambda$ to be
defined such that $\mu<\lambda$ and $f<\lambda$, and on the other hand to lift the
family $\widehat{{\cal D}}$ of $X$ to a family of precells covering the
upper part of $A$. This will give us ${\cal C}$. As figure~\ref{fi:div-1}
suggests, the lower part of $A$ will remain unchanged and give ${\cal U}$.
\begin{figure}[h]
\small
\begin{center}
\begin{tikzpicture}[scale=.4]
\def\fonctionL#1{plot[domain=#1] (\x,{2.5*((\x+2)/12)^2+7.5})}
\def\fonctionMU#1{plot[domain=#1] (\x,{6*((\x+1)/11)^2+4})}
\coordinate (A) at (0,7.5694444444444);
\coordinate (B) at (2.5,7.85156250000000);
\coordinate (C) at (6,8.611111111111111);
\fill[color=gray!50] \fonctionMU{0:10} -- (0,10) --cycle;
\fill[color=gray!20] \fonctionL{0:2.5} -- (2.5,10) -- (0,10) --cycle;
\fill[color=gray!40] \fonctionL{2.5:6} -- (6,10) -- (2.5,10) --cycle;
\fill[color=gray!60] \fonctionL{6:10} -- (6,10) --cycle;
\draw \fonctionMU{0:10};
\draw[dashed] \fonctionL{0:10};
\draw[color=lightgray,dotted]
(0,0) -- (A)
(2.5,0) -- (B)
(6,0) -- (C)
(10,0) -- (10,10);
\draw[dashed]
(A) -- (0,10)
(B) -- (2.5,10)
(C) -- (6,10);
\draw[line width=2pt] (0,10) -- (10,10);
\draw[(-)] (0,10) -- node[above]{$D_1$} (2.5,10);
\draw[(-)] (2.5,10) -- node[above]{$D_2$} (6,10);
\draw[(-)] (6,10) -- node[above]{$D_3$} (10,10);
\draw[(-)] (0,0) -- node[above]{$\widehat{D}_1$} (2.5,0);
\draw[(-)] (2.5,0) -- node[above]{$\widehat{D}_2$} (6,0);
\draw[(-)] (6,0) -- node[above]{$\widehat{D}_3$} (10,0);
\draw[dotted] (0,7.4) .. controls +(5,-.5) and +(-6,-2.8) .. (10,10);
\draw
(2,6) node {$U$}
(1.25,8.9) node {$C_{D_1}$}
(4.25,9) node {$C_{D_2}$}
(6.9,9.45) node {$C_{D_3}$}
(5,5.2) node {$\mu$}
(5,7.1) node {$f$}
(3.65,7.7) node {$\lambda$};
\end{tikzpicture}
\end{center}
\caption{Dividing $A$ when $\nu=+\infty$.\label{fi:div-1}}
\end{figure}
Let us check the details now. Proposition~\ref{pr:maj-Z-aff} gives
a largely continuous affine function $\lambda:X\to{\cal Z}$ such that
$\lambda(x)\geq\max(f(x,+\infty),\mu(x)+N_m)$ on $X$. Let $U$ be the set of $a\in
F_I(\Gamma^m)$ such that $\widehat{a}\in X$, $\mu(\widehat{a})\leq a_m\leq
\lambda(\widehat{a})$ and $a_m\equiv\rho\;[N_m]$. It is clearly a largely continuous
precell mod $N$ (with socle $X$ since $\lambda\geq\mu+N_m$). For each $D\in{\cal D}$ let
$C_D$ be the set of $a\in F_I(\Gamma^m)$ such that $\widehat{a}\in\widehat{D}$,
$\lambda(\widehat{a})+1\leq a_m$ and $a_m\equiv\rho\;[N_m]$. This is a largely
continuous precell mod $N$ with socle $\widehat{D}$. Let ${\cal C}=\{C_D\,\big/\
D\in{\cal D}\}$ and ${\cal U}=\{U\}$. Obviously $\bigcup{\cal C}=A\setminus U$, $\partial U=\partial B$ and { (Fac'),
(Diff') hold for every $D\in{\cal D}$}.
By construction, for every $D\in{\cal D}$ and every $a\in C_D$ we have
\begin{displaymath}
\Delta_J(a)=a_m> \lambda(\widehat{a})\geq f(\widehat{a},+\infty)=f\big(\pi_J(a)\big)
\end{displaymath}
{ which proves (Sub')}. Further, for every $a\in A$ such that $\pi_J(a)\in D$
and $\Delta_J(a)\geq \lambda(\widehat{a})$, we have $\widehat{a}\in\widehat{D}$,
$a_m\geq\lambda(\widehat{a})$ and $a_m\equiv\rho\;[N_m]$ hence $a\in C_D$. { This is
property (Sup)} with $\delta(b)=\lambda(\widehat{b})$ on $B$.
\paragraph{Case 2:} $Y$ is a facet of $X$ and $B=Y\times\{+\infty\}$.
\\
Then $J=\widehat{J}$, $\bar\mu=+\infty$ on $Y$ (otherwise by
Proposition~\ref{pr:pres-face}, $F_{J\cup\{m\}}(A)\neq\emptyset$ is a proper face of
$A$ larger than $B$) and $\nu<+\infty$ (otherwise $X\times\{+\infty\}$ is a proper face
of $A$ larger than $B$). In particular $\mu(x)\geq f(y,+\infty)$ for every $y\in
Y$ and every $x\in X$ close enough to $y$, so there is a definable map
$\eta:Y\to{\cal Z}$ such that for every $x\in X$
\begin{equation}
\Delta_{\widehat{J}}(x)\geq \eta\big(\pi_{\widehat{J}}(x)\big)
\Rightarrow \mu(x)\geq f\big(\pi_{\widehat{J}}(x),+\infty\big).
\label{eq:le-fa-eta2}
\end{equation}
In the precells $C_D$ that we are looking for, we want to have
$a_m\geq\mu(\widehat{a})\geq f\big(\pi_{\widehat{J}}(\widehat{a}),+\infty\big)$ in order to get
condition (Sub). The idea is then first to inflate $\widehat{{\cal D}}$ in
a way controlled by $\eta$ (using the induction hypothesis), and then to
divide $A$ by lifting this division of its socle $X$ (see
figure~\ref{fi:div-2}).
\begin{figure}[h]
\small
\begin{center}
\begin{tikzpicture}[scale=.4]
\def\fonctionNU#1{plot[domain=#1] (\x,{2.5*((\x+2)/12)^2+7.5})}
\def\fonctionMU#1{plot[domain=#1] (\x,{6*((\x+1)/11)^2+4})}
\newcommand{\cellule}[3]{
\fill[color=gray!#1]
plot[domain=#2:#3] (\x,{2.5*((\x+2)/12)^2+7.5}) --
plot[domain=#3:#2] (\x,{6*((\x+1)/11)^2+4}) -- cycle;
}
\coordinate (AN) at (0,7.5694444444444);
\coordinate (BN) at (2.5,7.8515625000000);
\coordinate (CN) at (6,8.61111111111111);
\coordinate (AM) at (0,4.0495867768595);
\coordinate (BM) at (2.5,4.6074380165289);
\coordinate (CM) at (6,6.4297520661157);
\cellule{20}{0}{2.5};
\cellule{40}{2.5}{6};
\cellule{60}{6}{10};
\draw[color=lightgray,dotted]
(0,0) -- (AM)
(2.5,0) -- (BM)
(6,0) -- (CM)
(10,0) -- (10,10);
\draw[dashed]
(AM) -- (AN)
(BM) -- (BN)
(CM) -- (CN);
\draw[color=lightgray,very thin]
(AN) -- (0,10) -- (10,10);
\draw \fonctionMU{0:10};
\draw \fonctionNU{0:10};
\draw (10,10) node {$\bullet$} node[above] {$D$};
\draw (10,0) node {$\bullet$} node[above] {$\widehat{D}$};
\draw[(-)] (0,0) -- node[above]{$W$} (2.5,0);
\draw[(-)] (2.5,0) -- node[above]{$W'$} (6,0);
\draw[(-)] (6,0) -- node[above]{$S_D$} (9.9,0);
\draw[<->] (10.4,10) -- node[right] {$f$} (10.4,6);
\draw[dotted] (5.9,6) -- (10.5,6);
\draw
(1.25,6) node {$U_W$}
(4.25,6.7) node {$U_{W'}$}
(7.3,8.3) node {$C_D$}
(5,5.2) node {$\mu$}
(5,8.7) node {$\nu$};
\end{tikzpicture}
\end{center}
\caption{Dividing $A$ when $B=Y\times\{+\infty\}$ and $Y$ is a facet of $X$.\label{fi:div-2}}
\end{figure}
The induction hypothesis applies to $X$, $Y$, $\widehat{{\cal D}}$ and
$g(y)=\max(f(y,+\infty),\eta(y))$ on $Y$. It gives a definable map $\varepsilon:Y\to{\cal Z}$
and a pair $({\cal S},{\cal W})$ of families of precells. For each $W\in{\cal W}$ (resp.
$D\in{\cal D}$) let $U_W$ (resp. $C_D$) be the set of $a\in F_J(\Gamma^m)$ such that
$\widehat{a}\in W$ (resp. $\widehat{a}$ belongs to the unique precell
$S_{\widehat{D}}\in{\cal S}$ whose facet is $\widehat{D}$), $\mu(\widehat{a})\leq
a_m\leq\nu(\widehat{a})$ and $a_m\equiv\rho\;[N_m]$. This is obviously a largely
continuous precell mod $N$ with socle $W$ (resp. $S_{\widehat{D}}$), and
exactly the set of $a\in A$ such that $\widehat{a}\in W$ (resp.
$\widehat{a}\in S_{\widehat{D}}$). In particular it is contained in $A$, and if we
let ${\cal U}=\{U_W\,\big/\ W\in{\cal W}\}$ and ${\cal C}=\{C_D\,\big/\ D\in{\cal D}\}$ then ${\cal U}$ is a
partition of $A\setminus\bigcup{\cal C}$ by the induction hypothesis on $({\cal S},{\cal W})$.
For every $W\in{\cal W}$, every proper face of $W$ is a proper face $Z$ of
$X$. Let $H$ be its support. Then by Proposition~\ref{pr:pres-face},
$(\bar\mu_{|Z},\bar\nu_{|Z},\rho)$ is a presentation of $F_H(U_W)$, but
also of $F_H(A)$ hence $F_H(U_W)=F_H(A)$ is a proper face of $A$.
{ Let us check (Fac') and (Diff').} For
every $D\in{\cal D}$, since $\bar\mu=+\infty$ on $Y$ we have
$F_J(C_D)=\widehat{D}\times\{+\infty\}=D$ by Proposition~\ref{pr:pres-face},
hence $\pi_J(C_D)=D$ by Proposition~\ref{pr:face-egale-proj}.
Moreover for every $E\in{\cal D}$, $\pi_{\widehat{J}}(S_{\widehat{D}}\setminus
S_{\widehat{E}}) \subseteq \widehat{D}\setminus\widehat{E}$ by induction hypothesis
hence
\begin{displaymath}
\pi_J(C_D\setminus C_E)
= \pi_{\widehat{J}}\big(S_{\widehat{D}}
\setminus S_{\widehat{E}}\big) \times \{+\infty\}
\subseteq (\widehat{D}\setminus\widehat{E}) \times \{+\infty\}
= D\setminus E.
\end{displaymath}
{ Now we turn to (Sub').} For every $a\in C_D$, since $J=\widehat{J}$ we
have $\Delta_J(a)=\min(a_m,\Delta_{\widehat{J}}(\widehat{a}))$ and
$\pi_J(a)=(\pi_{\widehat{J}}(\widehat{a}),+\infty)$. By the induction hypothesis
$\Delta_{\widehat{J}}(\widehat{a})\geq\eta\circ\pi_{\widehat{J}}(\widehat{a})$ and
$\Delta_{\widehat{J}}(\widehat{a})\geq f(\pi_{\widehat{J}}(\widehat{a}),+\infty)$
because $\widehat{a}\in S_{\widehat{D}}$. The first inequality implies
that $a_m\geq\mu(\widehat{a})\geq f(\pi_{\widehat{J}}(\widehat{a}),+\infty)$ by
(\ref{eq:le-fa-eta2}). Together with the second inequality this gives
that $\min(a_m,\Delta_{\widehat{J}}(\widehat{a}))\geq
f(\pi_{\widehat{J}}(\widehat{a}),+\infty)$. That is $\Delta_J(a)\geq f(\pi_J(a))$.
We finally check (Sup) { with $\delta(b)=\varepsilon(\widehat{b})$ on
$B$}. Since $C_D$ is clearly the set of $a\in A$ such that
$\widehat{a}\in S_{\widehat{D}}$, for every $a\in A$ such that $\pi_J(a)\in D$
(hence $\pi_{\widehat{J}}(\widehat{a})\in\widehat{D}$) and
$\Delta_J(a)\geq\varepsilon\circ\pi_{\widehat{J}}(\widehat{a})$ we have $\widehat{a}\in
S_{\widehat{D}}$ by induction hypothesis on $\varepsilon$ and $\widehat{D}$
hence $a\in C_D$.
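In condensed form: for $a\in A$ with $\pi_J(a)\in D$ (hence
$\pi_{\widehat{J}}(\widehat{a})\in\widehat{D}$) and $\Delta_J(a)\geq\delta(\pi_J(a))$,
\begin{displaymath}
\Delta_{\widehat{J}}(\widehat{a})
\geq \Delta_J(a)
\geq \delta\big(\pi_J(a)\big)
= \varepsilon\big(\pi_{\widehat{J}}(\widehat{a})\big)
\quad\mbox{hence}\quad
\widehat{a}\in S_{\widehat{D}}
\quad\mbox{and}\quad
a\in C_D.
\end{displaymath}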
\paragraph{Case 3:} $Y$ is a facet of $X$ and $B=(Y\times{\cal Z})\cap\overline{A}$.
\\
Then $m\in\mathop{\rm Supp} B = J$, hence
$\bar\mu<+\infty$ on $Y$, $\mu_D<+\infty$ for every $D\in{\cal D}$, and for every $a\in A$:
\begin{equation}
\Delta_J(a)=\Delta_{\widehat{J}}(\widehat{a})
\quad\mbox{and}\quad
\pi_J(a)=(\pi_{\widehat{J}}(\widehat{a}),a_m)
\label{eq:delta-J}
\end{equation}
Note that $\rho=\rho_D$ for every $D\in{\cal D}$ because, given any $b\in
D\subseteq B$, we have $b_m\neq+\infty$ and on one hand $b_m\equiv\rho_D\;[N_m]$, on the
other hand $b_m\equiv\rho\;[N_m]$ (using the presentation of $B=F_J(A)$
given by Proposition~\ref{pr:pres-face}).
\paragraph{\em Sub-case 3.1:} $\nu<+\infty$.
\\
{
This is the most difficult case, because $f(b)$ depends both on
$\widehat{b}$ and $b_m$. Therefore our construction is done in two
steps.
\\
{\em Step 1}:
Intuitively, we are going to remove the top of $A$ by introducing a
function $\zeta(x)$ which will ensure that $a_m$ doesn't grow too fast as
$a\in A$ goes closer to $B$. The connection between $\zeta$ (a function of
$x\in\widehat{A}$) and $f$ (a function of $b\in B$) will be made {\it via}
an intermediate function $g$ defined below. We need to restrict the
socle $X$ of $A$ to a domain $X^\circ$ close enough to $Y$ so as to ensure
at least that $\mu<\zeta<\nu$ on $X^\circ$. In order to do this we will divide $X$ by
applying to it the induction hypothesis. The resulting partition of
$X$ together with $\zeta$ will give us a partition of $A$ which might look
like figure~\ref{fi:div-3-top}.
}
\begin{figure}[h]
\small
\begin{center}
\begin{tikzpicture}[scale=.4]
\draw[thin,color=gray!50] (0,0) rectangle (10,10);
\def\fonctionNU#1{plot[domain=#1] (\x,{6.5*((\x+3)/13)^3+3.5})}
\def\courbeF{(8,2.5) .. controls +(0,7) and +(-1,-5) .. (10,10)}
\def(7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10){(7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10)}
\fill[color=gray!60] (7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10) -- (10,2.5) -- (7,2.5) --cycle;
\fill[color=gray!20] \fonctionNU{7:10} -- (7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10) -- (7,5.5) --cycle;
\fill[color=gray!40] \fonctionNU{4:7} -- (7,2.5) -- (4,2.5) --cycle;
\fill[color=gray!20] \fonctionNU{0:4} -- (4,2.5) -- (0,2.5) --cycle;
\draw[line width=2pt] (10.03,2.5) -- node[right]{$B$} (10.03,10);
\draw[dotted,color=lightgray]
(0,0) -- (0,2.5)
(4,0) -- (4,2.5)
(7,0) -- (7,2.5);
\draw \fonctionNU{0:10};
\draw plot[domain=0:10] (\x,2.5);
\draw[dashed] (7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10)
(4,2.5) -- (4,4.5147928994083)
(7,2.5) -- (7,6.4585798816568)
(0,2.5) -- (0,3.5798816568047);
\draw[dotted,thick] \courbeF ;
\draw[(-)] (0,0) -- node[above]{$W_1$} (4,0);
\draw[(-)] (4,0) -- node[above]{$W_2$} (7,0);
\draw[(-)] (7,0) -- node[above]{$X^\circ$} (10,0);
\draw (7.6,6.4) node{$V$};
\draw (9,3.4) node{$A^\circ$};
\draw (5.5,3.15) node{$U_{W_2}$};
\draw (2,3.15) node{$U_{W_1}$};
\draw (8,4.6) node[right] {$f$};
\draw (9.2,6.5) node{$\zeta$};
\draw (5.5,2.1) node{$\mu$};
\draw (5.5, 5.8) node{$\nu$};
\end{tikzpicture}
\end{center}
\caption{Removing the top of $A$.\label{fi:div-3-top}}
\end{figure}
Let $g:Y\to{\cal Z}$ be a positive affine map given by
Proposition~\ref{pr:maj-Z-aff} such that $g(y)\geq f(y,0)+\alpha(\bar\mu(y)+
N_m)$ on $Y$ and $\bar g=+\infty$ on $\partial Y$. Given any $y\in Y$, since
$g(y)<+\infty$ and $\nu-\mu$ has limit $+\infty$ at $y$, we have
$\nu(x)-\mu(x)>2N_m+1+g(y)$ for every $x\in X$ close enough to
$y$. So there is a definable function $\eta_1:Y\to{\cal Z}$ such that
for every $x\in X$
\begin{equation}
\Delta_{\widehat{J}}(x)\geq\eta_1\big(\pi_{\widehat{J}}(x)\big) \Rightarrow
\nu(x)-\mu(x)>2N_m+1+g\big(\pi_{\widehat{J}}(x)\big).
\label{eq:le-fa-eta31}
\end{equation}
The induction hypothesis applies to $X$, $Y$, $\{Y\}$ and $\max(\eta_1,2g)$.
It gives a definable map $\varepsilon_1:Y\to{\cal Z}$ and a pair $({\cal S}_1,{\cal W}_1)$ of
families of precells. In the present case ${\cal S}_1$ consists of a single
largely continuous precell $X^\circ$ mod $N$ contained in $X$, such that
$\Delta_{\widehat{J}}\geq\max(\eta_1\circ\pi_{\widehat{J}},2g\circ\pi_{\widehat{J}})$ on
$X^\circ$, and every $x\in X$ such that $\pi_{\widehat{J}}(x)\in Y$ and
$\Delta_{\widehat{J}}(x)\geq\varepsilon_1(\pi_{\widehat{J}}(x))$ belongs to $X^\circ$. The
family ${\cal W}_1$ is a finite partition of $X\setminus X^\circ$ in largely continuous
precells mod $N$. Let ${\cal U}_1=\{U_W\,\big/\ W\in{\cal W}_1\}$ where $U_W=(W\times{\cal Z})\cap A$
for every $W\in{\cal W}_1$. Since $\nu<+\infty$, the proper faces of $U_W$ are proper
faces of $A$ by Claim~\ref{cl:U-pas-B}.
For every $k\notin\widehat{J}$ and every $x\in X^\circ$, we have
$x_k\geq\Delta_{\widehat{J}}(x)$ because $k\notin\widehat{J}$, and $\Delta_{\widehat{J}}
\geq 2g\circ\pi_{\widehat{J}}(x)$ on $X^\circ$ by the induction hypothesis. Thus on one
hand $x_k-g\circ\pi_{\widehat{J}}(x)\geq g\circ\pi_{\widehat{J}}(x)\geq1$, and on the
other hand $x_k-g\circ\pi_{\widehat{J}}(x)\geq x_k/2$. In particular $x\mapsto
x_k-g\circ\pi_{\widehat{J}}(x)$ is a largely continuous positive
affine function on $X^\circ$ with limit $+\infty$ at every point of $\partial X^\circ$.
We also have $\Delta_{\widehat{J}}(x)\geq\eta_1(\pi_{\widehat{J}}(x))$ by induction
hypothesis, hence $\nu(x)-\mu(x)>2N_m+1+g(\pi_{\widehat{J}}(x))$ by
(\ref{eq:le-fa-eta31}). In particular the restriction of $\nu-\mu-2N_m-1$
to $X^\circ$ is a positive affine function with limit $+\infty$ at
every point of $\partial X^\circ$. Proposition~\ref{pr:min-Z-aff} then gives a largely
continuous positive affine function $\lambda:X^\circ\to{\cal Q}$ such that
$\bar\lambda=+\infty$ on $\partial X^\circ$, $\lambda\leq\nu-\mu-2N_m-1$ on $X^\circ$ and
$\lambda(x)\leq(x_k-g\circ\pi_{\widehat{J}}(x))/\alpha$ for every $k\notin\widehat{J}$.
Let us quote for further use that in particular
\begin{equation}
\alpha\lambda(x) \leq \min_{k\notin\widehat{J}}\big(x_k-g(\pi_{\widehat{J}}(x))\big)
= \Delta_{\widehat{J}}(x)-g(\pi_{\widehat{J}}(x)).
\label{eq:le-fa-lambda}
\end{equation}
Note that $\partial X^\circ=\overline{Y}$ because $X^\circ$ has a unique facet which
is $Y$ by Claim~\ref{cl:fac}, hence $\bar\lambda=+\infty$ on $\overline{Y}$.
Let $n\geq1$ be an integer such that $n\lambda$ is integrally affine, so that
$\lambda(x)>t$ if and only if $\lambda(x)\geq t+1/n$ for every $(x,t)\in X^\circ\times{\cal Z}$.
Let $\zeta=\mu+\lambda+N_m$ on $X^\circ$, and $V$ (resp. $A^\circ$) be the set
of $a\in F_I(\Gamma^m)$ such that $\widehat{a}\in X^\circ$, $\zeta(\widehat{a})+1/n\leq
a_m\leq \nu(\widehat{a})$ (resp. $\mu(\widehat{a})\leq a_m\leq\zeta(\widehat{a})$) and
$a_m\equiv\rho\;[N_m]$. By construction $\zeta$ is a largely continuous affine
map on $X^\circ$ with $\bar\zeta=+\infty$ on $\partial X^\circ$. Moreover on $X^\circ$ we have
\begin{displaymath}
\zeta+\frac{1}{n}+N_m=\mu+\lambda+2N_m+\frac{1}{n}\leq \nu
\end{displaymath}
(because $\lambda\leq\nu-\mu-2N_m-1$ by construction) hence the socle of
$V$ is $X^\circ$. Obviously $\mu+N_m\leq\mu+\lambda+N_m=\zeta$ (because $\lambda>0$ by
construction) hence the socle of $A^\circ$ is $X^\circ$. Thus both $V$ and
$A^\circ$ are largely continuous precells mod $N$ contained in $(X^\circ\times{\cal Z})\cap
A$. Moreover $a_m>\zeta(\widehat{a})=\mu(\widehat{a})+\lambda(\widehat{a})+N_m$
if and only if $a_m \geq \mu(\widehat{a})+\lambda(\widehat{a})+1/n+N_m =
\zeta(\widehat{a})+1/n$. Thus $V$ and $A^\circ$ form a partition of
$(X^\circ\times{\cal Z})\cap A$, or equivalently ${\cal U}_1\cup\{V\}$ is a partition of
$A\setminus A^\circ$. Since $\bar\zeta=+\infty$ on $\partial X^\circ=\overline{Y}$, by
Proposition~\ref{pr:pres-face} every proper face $V'$ of $V$ is of type
$Z\times\{+\infty\}$ for $Z$ a face of $Y$. In particular $V'$ is a proper face of
$A$.
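To summarize Step~1 before moving on: writing the unions as disjoint,
we have obtained the decomposition
\begin{displaymath}
A=\Big(\bigcup_{W\in{\cal W}_1}U_W\Big)\cup V\cup A^\circ,
\qquad
A^\circ=\big\{a\in F_I(\Gamma^m)\,\big/\ \widehat{a}\in X^\circ,\
\mu(\widehat{a})\leq a_m\leq\zeta(\widehat{a}),\ a_m\equiv\rho\;[N_m]\big\},
\end{displaymath}
in which every precell of ${\cal U}_1\cup\{V\}$ has all its proper faces among
the proper faces of $A$.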
\\
{
{\em Step 2}:
Intuitively, we are going to build the $C_D$'s by inflating inside
$A^\circ$ each $D$ in ${\cal D}$ as suggested by figure~\ref{fi:div-3-right}
(which zooms in on $A^\circ$, the other parts of $A$ remaining
as in figure~\ref{fi:div-3-top}).
}
\begin{figure}[h]
\small
\begin{center}
\begin{tikzpicture}[yscale=.5]
\def\courbeF{(8,2.5) .. controls +(0,7) and +(-1,-5) .. (10,10)}
\def(7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10){(7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10)}
\begin{scope}
\clip (8.7,6) rectangle (10,10);
\fill[color=gray!20] (7,2.5) -- (7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10) -- (10,2.5) --cycle;
\end{scope}
\fill[color=gray!40] (8.7,4) rectangle (10,6);
\fill[color=gray!60] (8.7,2.5) rectangle (10,4);
\begin{scope}
\clip (7.8,0) rectangle (8.7,10);
\fill[color=gray!10] (7,2.5) -- (7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10) -- (10,2.5) --cycle;
\end{scope}
\begin{scope}
\clip (7,0) rectangle (7.8,10);
\fill[color=gray!30] (7,2.5) -- (7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10) -- (10,2.5) --cycle;
\end{scope}
\draw[dotted,color=lightgray]
(7,0) -- (7,2.5)
(7.8,0) -- (7.8,2.5)
(8.7,0) -- (8.7,2.5)
(10,0) -- (10,2.5);
\draw[dashed]
(7,2.5) -- (7,5.5)
(7.8,2.5) -- (7.8,5.82)
(8.7,2.5) -- (8.7,6.45)
(8.7,4) -- (10,4) ++(0.1,0) node[right] {$\zeta_{D_3}=\nu_{D_3}\circ\pi_{\widehat{J}}$}
(8.7,6) -- (10,6) ++(0.1,0) node[right] {$\zeta_{D_1}=\mu_{D_1}\circ\pi_{\widehat{J}}$};
\draw[color=lightgray,thin] (7,10) -- (10,10);
\draw[color=lightgray,dashed] (6.4,10) -- (7,10);
\draw[color=lightgray,dashed] (6.4,5.4) .. controls +(.5,.1) and +(-.3,-.1) .. (7,5.5);
\draw[color=lightgray,dashed] (6.4,2.5) -- (7,2.5);
\draw[color=lightgray,dashed] (6.4,0) -- (7,0);
\draw[line width=2pt] (10.03,2.5) -- (10.03,10);
\draw[(-)] (10.03,2.5) -- node[right] {$D_3$} (10.03,4);
\draw[(-)] (10.03,4) -- node[right] {$D_2$} (10.03,6);
\draw[(-)] (10.03,6) -- node[right] {$D_1$} (10.03,10);
\draw (7,5.5) .. controls +(3,1) and +(-.1,-1) .. (10,10);
\draw plot[domain=7:10] (\x,2.5);
\draw[dotted,thick] \courbeF;
\draw[(-)] (7,0) -- node[above]{$W_3$} (7.8,0);
\draw[(-)] (7.8,0) -- node[above]{$W_4$} (8.7,0);
\draw[(-)] (8.7,0) -- node[above]{$S$} (10,0);
\draw
(7.4,3.6) node{$U_{W_3}$}
(8.35,3.5) node{$U_{W_4}$}
(9.5,3.3) node{$C_{D_3}$}
(9.5,5) node{$C_{D_2}$}
(9.5,6.6) node{$C_{D_1}$}
(8.8,7.5) node{$f$}
(7.5,6) node{$\zeta$}
(8.5,2) node{$\mu$};
\end{tikzpicture}
\end{center}
\caption{Dividing $A^\circ$ by inflating each $D\in{\cal D}$.\label{fi:div-3-right}}
\end{figure}
For every $D\in{\cal D}$ let $\zeta_D=\nu_D$ if $\nu_D<+\infty$ and $\zeta_D=\mu_D+N_m$
otherwise. Since $\bar\zeta=+\infty$ on $Y$ there is a definable function
$\eta_2:Y\to{\cal Z}$ such that for every $x\in X^\circ$ and every $D\in{\cal D}$ such that
$\pi_{\widehat{J}}(x)\in\widehat{D}$ we have
\begin{equation}
\Delta_{\widehat{J}}(x)\geq \eta_2\big(\pi_{\widehat{J}}(x)\big) \Rightarrow
\zeta(x)\geq \zeta_D\big(\pi_{\widehat{J}}(x)\big).
\label{eq:le-fa-eta32}
\end{equation}
The induction hypothesis applies to $X^\circ$, $Y$, $\widehat{{\cal D}}$ and
$\eta_2$. It gives a definable map $\varepsilon_2:Y\to{\cal Z}$ and a pair $({\cal S}_2,{\cal W}_2)$
of families of precells. For each $W\in{\cal W}_2$ let $U_W=(W\times{\cal Z})\cap A^\circ$.
Clearly the family ${\cal U}_2=\{U_W\,\big/\ W\in{\cal W}_2\}$ is a finite partition in
largely continuous precells mod $N$ of the complement in $A^\circ$ of the set
$A^{\circ\circ}=(\bigcup{\cal S}_2\times{\cal Z})\cap A^\circ$. Equivalently, ${\cal U}_1\cup\{V\}\cup{\cal U}_2$ is a
finite partition of $A\setminus A^{\circ\circ}$. Since $\nu<+\infty$, by
Claim~\ref{cl:U-pas-B} the proper faces of $U_W$ are proper faces of
$A$ for every $W\in{\cal W}_2$.
For each $D\in{\cal D}$ let $S_{\widehat{D}}$ be the precell in ${\cal S}_2$ given by
the induction hypothesis, so that conditions (Fac), (Sub), (Sup), (Diff)
apply to $S_{\widehat{D}}$, $\eta_2$ and $\varepsilon_2$. If $\nu_D=+\infty$ (resp.
$\nu_D<+\infty$) let $C_D$ be the set of $a\in F_I(\Gamma^m)$ such that
$\widehat{a}\in S_{\widehat{D}}$, $\mu_D(\pi_{\widehat{J}}(\widehat{a}))\leq a_m\leq
\zeta(\widehat{a})$ (resp. $\mu_D(\pi_{\widehat{J}}(\widehat{a}))\leq a_m\leq
\nu_D(\pi_{\widehat{J}}(\widehat{a}))$) and $a_m\equiv\rho\;[N_m]$.
{ Let us check that $C_D$ is a largely continuous precell mod
$N$.} For every $x\in S_{\widehat{D}}$ we have
$\pi_{\widehat{J}}(x)\in\widehat{D}$, because
$\widehat{D}=F_{\widehat{J}}(S_{\widehat{D}})$ by (Fac), and
$F_{\widehat{J}}(S_{\widehat{D}})=\pi_{\widehat{J}}(S_{\widehat{D}})$ by
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}). So there is
$b\in D$ such that $\widehat{b}=x$, $\mu_D(\pi_{\widehat{J}}(x))\leq
b_m\leq\nu_D(\pi_{\widehat{J}}(x))$ and $b_m\equiv\rho\;[N_m]$. We can (and do)
require in addition that $b_m\leq\mu_D(\pi_{\widehat{J}}(x))+N_m$, hence
$b_m\leq\zeta_D(\pi_{\widehat{J}}(x))$. Because $x\in S_{\widehat{D}}$ we also
have $\Delta_{\widehat{J}}(x)\geq\eta_2\circ\pi_{\widehat{J}}(x)$ by (Sub), hence
$\zeta_D(\pi_{\widehat{J}}(x))\leq\zeta(x)$ by (\ref{eq:le-fa-eta32}). Altogether
this proves that $(x,b_m)\in C_D$, hence $x$ belongs to the socle of
$C_D$. So the socle of $C_D$ is exactly $S_{\widehat{D}}$ and $C_D$ is
then a largely continuous precell mod $N$.
{ Now we turn to (Fac').} The presentation of
$F_J(C_D)$ given by Proposition~\ref{pr:pres-face} is exactly
$(\mu_D,\nu_D,\rho)$, hence $F_J(C_D)=D$ since $\rho_D=\rho$. In particular
$\pi_J(C_D)=D$ by
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}). More
precisely, the above computations show that we have
\begin{equation}
C_D=\big\{a\in A^{\circ\circ}\,\big/\ \widehat{a}\in S_{\widehat{D}} \mbox{ and }
\pi_J(a)\in D\big\}.
\label{eq:le-fa-CD}
\end{equation}
Let ${\cal C}=\{C_D\,\big/\ D\in{\cal D}\}$ and ${\cal U}={\cal U}_1\cup\{V\}\cup{\cal U}_2$. We already know
that ${\cal U}$ is a finite partition of $A\setminus A^{\circ\circ}$ in largely continuous
precells mod $N$ whose proper faces are proper faces of $A$, and that
each $C_D\in{\cal C}$ is a largely continuous precell mod $N$ contained in
$A^{\circ\circ}$ with socle $S_{\widehat{D}}$ and $F_J(C_D)=\pi_J(C_D)=D$.
Let us check that $\bigcup{\cal C}=A^{\circ\circ}$. In order to do so, we claim
that
\begin{equation}
\forall a\in A^{\circ\circ},\forall E\in{\cal D},\ \pi_J(a)\in E \Rightarrow a\in C_E.
\label{eq:le-fa-claim}
\end{equation}
Assume the contrary and let $a\in A^{\circ\circ}$, $E\in{\cal D}$ be such that $\pi_J(a)\in E$
and $a\notin C_E$. By (\ref{eq:le-fa-CD}) this implies that $\widehat{a}\notin
S_{\widehat{E}}$. But the socle of $A^{\circ\circ}$ is $\bigcup{\cal S}_2$, hence
$\widehat{a}\in S_{\widehat{D}}$ for some $D\in{\cal D}$. Thus
$\pi_{\widehat{J}}(\widehat{a})$ belongs to
$\pi_{\widehat{J}}(S_{\widehat{D}}\setminus S_{\widehat{E}})$. By the induction
hypothesis the latter is contained in $\widehat{D}\setminus\widehat{E}$, hence
$\pi_{\widehat{J}}(\widehat{a})\notin\widehat{E}$. But
$\pi_{\widehat{J}}(\widehat{a})$ is also the socle of $\pi_J(a)$. Since
$\pi_J(a)\in E$ it follows that $\pi_{\widehat{J}}(\widehat{a})\in\widehat{E}$, a
contradiction.
That $A^{\circ\circ}\subseteq\bigcup{\cal C}$ then follows immediately from
(\ref{eq:le-fa-claim}) and the fact that $\pi_J(A^{\circ\circ})\subseteq\pi_J(A)=B\subseteq\bigcup{\cal D}$.
So $A^{\circ\circ}=\bigcup{\cal C}$ and it only remains to check (Sub'), (Sup) and
(Diff') for any fixed $D\in{\cal D}$.
We start with (Diff'). Pick any $E\in{\cal D}$ and assume that there is a point
$b$ in $\pi_J(C_D\setminus C_E)$ which belongs to $E$. Then $b=\pi_J(a)$ for some
$a\in C_D\setminus C_E$. We have $a\in A^{\circ\circ}$ and $a\notin C_E$, hence $\pi_J(a)\notin E$ by
(\ref{eq:le-fa-claim}), that is $b\notin E$, a contradiction. Hence
$\pi_J(C_D\setminus C_E)$ is disjoint from $E$.
Let us now turn to (Sup). For every $b\in B$, since $\bar\zeta=+\infty$ on $\partial
X^\circ=\overline{Y}$ and $\widehat{b}\in Y$, we have $\zeta(x)\geq b_m$ whenever $x\in
X^\circ$ is close enough to $\widehat{b}$ (that is whenever
$\pi_{\widehat{J}}(x)=\widehat{b}$ and $\Delta_{\widehat{J}}(x)$ is large
enough). So there is a definable function $\eta_3:B\to{\cal Z}$ such that for
every $a\in (X^\circ\times{\cal Z})\cap A$
\begin{equation}
\Delta_{\widehat{J}}(\widehat{a})\geq\eta_3\big(\pi_J(a)\big) \Rightarrow
\zeta(\widehat{a})\geq a_m.
\label{eq:le-fa-eta33}
\end{equation}
Let $\delta:b\in B\mapsto\max(\varepsilon_1(\widehat{b}),\eta_3(b),\varepsilon_2(\widehat{b}))$. For
every $a\in A$ such that $\pi_J(a)\in D$ and $\Delta_J(a)\geq\delta\circ\pi_J(a)$, since
$\Delta_J(a)=\Delta_{\widehat{J}}(\widehat{a})$ by (\ref{eq:delta-J}) we have in
particular $\pi_{\widehat{J}}(\widehat{a})\in Y$ and
$\Delta_{\widehat{J}}(\widehat{a})\geq\varepsilon_1(\pi_{\widehat{J}}(\widehat{a}))$,
hence $\widehat{a}\in X^\circ$ by construction. So $a\in (X^\circ\times{\cal Z})\cap A$ and
$\Delta_{\widehat{J}}(\widehat{a})\geq\eta_3\big(\pi_J(a)\big)$, which implies that
$a_m\leq \zeta(\widehat{a})$ by (\ref{eq:le-fa-eta33}), hence $a\in A^\circ$ by
construction. On the other hand, since $\widehat{a}\in X^\circ$,
$\pi_{\widehat{J}}(\widehat{a})\in\widehat{D}$ and
$\Delta_{\widehat{J}}(\widehat{a})\geq\varepsilon_2(\pi_{\widehat{J}}(\widehat{a}))$, we
get that $\widehat{a}\in S_{\widehat{D}}$ by construction. In particular
$\widehat{a}\in\bigcup{\cal S}_2$, hence $a\in A^{\circ\circ}$ since $A^{\circ\circ}=(\bigcup{\cal S}_2\times{\cal Z})\cap
A^\circ$. Altogether we have $a\in A^{\circ\circ}$, $\widehat{a}\in S_{\widehat{D}}$
and $\pi_J(a)\in D$ hence that $a\in C_D$ by (\ref{eq:le-fa-CD}), which
proves (Sup).
It only remains to check (Sub'), that is $\Delta_J\geq f\circ\pi_J$ on $C_D$.
This is the moment to recall (\ref{eq:le-fa-lambda}), which says that
$\alpha\lambda\leq\Delta_{\widehat{J}}-g\circ\pi_{\widehat{J}}$ on $X^\circ$. Recall also that
$g(y)\geq f(y,0)+\alpha(\bar\mu(y)+N_m)$ on $Y$ by definition of $g$. Thus on
$X^\circ$ we have
\begin{equation}
\alpha\lambda(x)
\leq \Delta_{\widehat{J}}(x)
- f\big(\pi_{\widehat{J}}(x),0\big)
- \alpha \bar\mu\big(\pi_{\widehat{J}}(x)\big)
- \alpha N_m.
\label{eq:le-fa-fin-lambda}
\end{equation}
For every $a\in C_D$, $\widehat{a}\in X^\circ$ and $a\in A^\circ$ hence
$a_m\leq\zeta(\widehat{a})=\mu(\widehat{a})+\lambda(\widehat{a})+N_m$. We also have
$\mu(\widehat{a})=\bar\mu\big(\pi_{\widehat{J}}(\widehat{a})\big)$ by
Proposition~\ref{pr:prol-pi}. Combining all this with
(\ref{eq:le-fa-fin-lambda}) we get that
\begin{equation}
\alpha a_m \leq { \alpha\bar\mu\big(\pi_{\widehat{J}}(\widehat{a})\big) + \alpha \lambda(\widehat{a}) +\alpha N_m}
\leq \Delta_{\widehat{J}}(\widehat{a})
- f\big(\pi_{\widehat{J}}(\widehat{a}),0\big).
\label{eq:le-fa-fin-am}
\end{equation}
Since $f(\pi_J(a))=f\big(\pi_{\widehat{J}}(\widehat{a}),0\big) +\alpha a_m$ by
the definition of $f$, and $\Delta_J(a)=\Delta_{\widehat{J}}(\widehat{a})$ by
(\ref{eq:delta-J}), we finally get from (\ref{eq:le-fa-fin-am}) that
$\Delta_J(a)=\Delta_{\widehat{J}}(\widehat{a})\geq f(\pi_J(a))$.
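In one line, combining (\ref{eq:le-fa-fin-am}) with these two facts:
\begin{displaymath}
f\big(\pi_J(a)\big)
= f\big(\pi_{\widehat{J}}(\widehat{a}),0\big)+\alpha a_m
\leq f\big(\pi_{\widehat{J}}(\widehat{a}),0\big)
+\Delta_{\widehat{J}}(\widehat{a})
-f\big(\pi_{\widehat{J}}(\widehat{a}),0\big)
= \Delta_{\widehat{J}}(\widehat{a})
= \Delta_J(a).
\end{displaymath}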
\paragraph{\em Sub-case 3.2:} $\nu=+\infty$.
\\
This final case is easy: we simply divide $A$ into two pieces, above and
below a function $\lambda$ to be defined, so that the previous sub-case~3.1
applies to the lower part of $A$. The upper part of $A$ doesn't require
any special treatment: it will simply be incorporated in the family
${\cal U}$.
Let us check the details now.
Proposition~\ref{pr:maj-Z-aff} gives a largely continuous integrally
affine map $\lambda$ on $X$ such that $\bar\lambda=+\infty$ on $\partial X$ and $\lambda\geq \mu+N_m$.
Let $A^-$ (resp. $A^+$) be the set of $a\in F_I(\Gamma^m)$ such that
$\widehat{a}\in X$, $\mu(\widehat{a})\leq a_m\leq \lambda(\widehat{a})$ (resp.
$\lambda(\widehat{a})+1\leq a_m$) and $a_m\equiv\rho\;[N_m]$. The socle of each is $X$ (for
$A^-$ we use that $\lambda\geq \mu+N_m$) hence both are largely continuous precells
mod $N$. Since $\lambda$ takes values in ${\cal Z}$, $A^-$ and $A^+$ form a
partition of $A$. The presentations of the faces of $A$, $A^-$ and $A^+$
given by Proposition~\ref{pr:pres-face} give that every proper face
of $A^-$ and $A^+$ is a proper face of $A$, and $B$ is a face of
$A^-$. The previous sub-case~3.1 applies to $A^-$, $B$, ${\cal D}$ and $f$.
It gives a pair $({\cal C}^-,{\cal W}^-)$ of families of largely continuous
precells mod $N$ and an integrally affine map $\delta^-:B\to{\cal Z}$. Then
$({\cal C}^-,{\cal W}^-\cup\{A^+\})$ and $\delta^-$ have all the required properties for
$A$, ${\cal D}$ and $f$, except possibly (Sup). We remedy this by
replacing $\delta^-$ by a larger function $\delta$ defined as follows.
For every $b\in B$, we have $\lambda(x)\geq b_m$ for every $x\in X$ close
enough to $\widehat{b}$ since $\bar\lambda=+\infty$ on $Y$. So there is a
definable function $\eta:B\to{\cal Z}$ such that for every $a\in A$
\begin{equation}
\Delta_J(a)\geq \eta\big(\pi_J(a)\big) \Rightarrow \lambda(\widehat{a})\geq a_m.
\label{eq:le-fa-eta-final}
\end{equation}
Let $\delta=\max(\eta,\delta^-)$, then for every $D\in{\cal D}$ and every $a\in A$ such that
$\pi_J(a)\in D$ and $\Delta_J(a)\geq \delta(\pi_J(a))$ we have in particular
$\Delta_J(a)\geq\eta(\pi_J(a))$ hence $a_m\leq \lambda(\widehat{a})$ by
(\ref{eq:le-fa-eta-final}), that is $a\in A^-$. Moreover we
have $\pi_J(a)\in D$ and $\Delta_J(a)\geq\delta^-(\pi_J(a))$. Altogether this implies
that $a$ belongs to $C_D\in{\cal C}^-$, which in turn proves (Sup).
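In condensed form, this last verification of (Sup) reads as follows, for
every $D\in{\cal D}$ and every $a\in A$ with $\pi_J(a)\in D$ and
$\Delta_J(a)\geq\delta(\pi_J(a))$:
\begin{displaymath}
\Delta_J(a)\geq\eta\big(\pi_J(a)\big)
\ \Rightarrow\ a_m\leq\lambda(\widehat{a})
\ \Rightarrow\ a\in A^-,
\end{displaymath}
after which $\Delta_J(a)\geq\delta^-(\pi_J(a))$ and property (Sup) for $A^-$
give $a\in C_D$.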
\end{proof}
\begin{theorem}[Monohedral Division]\label{th:mono-div}
Let $A\subseteq F_I(\Gamma^m)$ be a largely continuous precell mod $N$, $f:\partial A\to{\cal Z}$
a definable function, and ${\cal D}$ a complex of monohedral largely
continuous precells mod $N$ such that $\bigcup{\cal D}=\partial A$. Then there exists a
finite partition ${\cal C}$ of $A$ in monohedral largely continuous precells
mod $N$ such that ${\cal C}\cup{\cal D}$ is a closed complex, ${\cal C}$ contains for
every $D\in{\cal D}$ a unique precell $C$ with facet $D$, and moreover $\Delta_J\geq
f\circ\pi_J$ on $C$ where $J=\mathop{\rm Supp} D$.
\end{theorem}
\begin{proof}
The proof goes by induction on the number $n$ of proper faces of
$A$. If $n=0$ then ${\cal D}=\emptyset$ and $A$ is monohedral, hence
${\cal C}=\{A\}$ gives the conclusion. So let us assume that $n\geq1$ and the
result is proved for smaller integers. Let $B$ be a facet of $A$.
Lemma~\ref{le:face-elargie} applied to $A$, $B$, ${\cal D}$ and the
restriction of $f$ to $B$ gives a pair $({\cal C}_B,{\cal U})$ of families of
precells. For every $U\in{\cal U}$, the proper faces of $U$ are proper faces
of $A$. So the family ${\cal D}_U=\{D\in{\cal D}\,\big/\ D\subseteq\partial U\}$ is a complex and
$\bigcup{\cal D}_U=\partial U$. Since $B$ is not a proper face of $U$ by
Claim~\ref{cl:U-pas-B}, the induction hypothesis applies to $U$,
${\cal D}_U$ and the restriction of $f$ to $\partial U$. It gives a family
${\cal C}_U$ of precells. Let ${\cal C}$ be the union of ${\cal C}_B$ and ${\cal C}_U$ for
$U\in{\cal U}$. This is a family of largely continuous precells mod $N$
partitioning $A$. By construction ${\cal C}$ contains for every $D\in{\cal D}$ a
unique precell $C$ with facet $D$, and $\Delta_J\geq f\circ\pi_J$ on $C$ with
$J=\mathop{\rm Supp} D$. In particular ${\cal C}\cup{\cal D}$ is a partition of
$\overline{A}$ which contains the faces of all its members, since
${\cal D}$ is a closed complex (because ${\cal D}$ is a complex and $\bigcup{\cal D}=\partial
A$ is closed). So ${\cal C}\cup{\cal D}$ is a closed complex.
\end{proof}
\begin{theorem}[Monohedral Decomposition]\label{th:mono-dec}
Let $A\subseteq F_I(\Gamma^m)$ be a largely continuous precell mod $N$. Then there
exists a complex ${\cal C}$ of monohedral largely continuous precells mod $N$
such that $A=\bigcup{\cal C}$.
\end{theorem}
\begin{proof}
We are going to show that given any closed complex ${\cal A}$ of largely
continuous precells mod $N$ in $\Gamma^m$, there is a closed complex ${\cal C}$ of
largely continuous {\em monohedral} precells mod $N$ such that
$\bigcup{\cal C}=\bigcup{\cal A}$ and ${\cal C}$ refines ${\cal A}$ (that is every $C\in{\cal C}$ is
contained in some $A\in{\cal A}$). The conclusion for $A$ will follow, by
applying this to the closed complex consisting of all the faces of
$A$. The proof goes by induction on the cardinality $n$ of ${\cal A}$. If
$n=0$ then ${\cal C}={\cal A}=\emptyset$ proves the result. Assume that $n\geq1$ and the
result is proved for smaller integers. Let $A$ be a maximal element of
${\cal A}$ with respect to specialisation, and ${\cal B}={\cal A}\setminus\{A\}$. By maximality
of $A$, ${\cal B}$ is again a closed complex. The induction hypothesis
gives a closed complex ${\cal D}$ of largely continuous monohedral precells
mod $N$ such that $\bigcup{\cal D}=\bigcup{\cal B}$ and ${\cal D}$ refines ${\cal B}$. If $A$ is
closed then obviously ${\cal C}={\cal D}\cup\{A\}$ proves the result for ${\cal A}$.
Otherwise let ${\cal D}_A=\{D\in{\cal D}\,\big/\ D\subseteq\partial A\}$. The Monohedral Division
Theorem~\ref{th:mono-div} applied to $A$, ${\cal D}_A$ and the constant
function $f=0$ gives a finite partition ${\cal C}_A$ of $A$ in monohedral
largely continuous precells mod $N$ such that the family ${\cal C}_A\cup{\cal D}_A$
is a closed complex. The family ${\cal C}={\cal C}_A\cup{\cal D}$ is a partition of
$A\cup\bigcup{\cal B}=\bigcup{\cal A}$. Since ${\cal D}$ is a closed complex and every precell in
${\cal C}_A$ has a unique facet which belongs to ${\cal D}$, it follows that
${\cal C}$ is a complex.
\end{proof}
We finish this section with another, much more elementary, division
result. Unlike the results above, it is drastically different from
what occurs in the real situation, where polytopes are connected sets.
\begin{proposition}\label{pr:split-mono}
Let $A\subseteq F_I(\Gamma^m)$ be a non-closed monohedral largely continuous precell
mod $N$. For every integer $n\geq1$ there exists for some $N'\in({\rm\bf N}^*)^m$
a partition $(A_i)_{1\leq i\leq n}$ of $A$ in largely continuous precells
mod $N'$ such that $\partial A_i=\partial A$ for $1\leq i\leq n$.
\end{proposition}
\begin{proof}
The proof goes by induction on $m$. The result is trivially true for
$m=0$ since there is no non-closed precell in $\Gamma^0$. Assume that $m\geq1$
and the result is proved for smaller integers. Let $(\mu,\nu,\rho)$ be a
presentation of $A$. By induction hypothesis we can assume that
$m\in\mathop{\rm Supp} A$ hence $\mu<+\infty$. If $\nu=+\infty$, for $1\leq i\leq n$ let $A_i$ be the
set of $a\in F_I(\Gamma^m)$ such that $\widehat{a}\in\widehat{A}$,
$\mu(\widehat{a})\leq a_m\leq \nu(\widehat{a})$ and $a_m\equiv\rho+iN_m\;[nN_m]$. This
is obviously a partition of $A$ in largely continuous precells mod
$N'=(\widehat{N},nN_m)$ having the same boundaries as $A$. On the other
hand, if $\nu< +\infty$ then $\widehat{A}$ is not closed (otherwise $A$ would
be closed) hence the induction hypothesis gives for some $P'\in({\rm\bf N}^*)^{m-1}$
a partition $(X_i)_{1\leq i\leq n}$ of $\widehat{A}$ in largely continuous
precells mod $P'$ such that $\partial X_i=\partial\widehat{A}$ for every $i$. Let
$A_i=(X_i\times{\cal Z})\cap A$ for every $i$. Then $(A_i)_{1\leq i\leq n}$ is easily
seen to give the conclusion, thanks to the description of the faces of
$A$ and $A_i$ given by Proposition~\ref{pr:pres-face}.
\end{proof}
\section{Polytopes in $p$\--adic fields}
\label{se:p-adic}
Recall that $K$ is a $p$\--adically closed field, $v$ its
$p$\--valuation, $R$ its valuation ring and $\Gamma=v(K)$.
We still denote by $v$ the map $(v,\dots,v)$ from $K^m$ to $\Gamma^m$.
We are going to define polytopes\footnote{We don't call them largely
continuous precells because they are much more special than the
usual $p$\--adic cells as defined in \cite{dene-1986}.}
mod $N$ in $K^m$ by means of the inverse image by $v$ of largely
continuous precells mod $N$ in $\Gamma^m$. However, the $p$\--adic
triangulation theorem that we are aiming at requires a more versatile
definition. It involves semi-algebraic subgroups $Q_{1,M}$ of the
multiplicative group $K^\times=K\setminus\{0\}$, where $M$ is a positive integer. In
the special case where $K$ is a finite extension of ${\rm\bf Q}_p$, we have
\begin{displaymath}
Q_{1,M}=\bigcup_{k\in{\rm\bf Z}}\pi^k(1+\pi^MR)
\end{displaymath}
where $\pi$ is any generator of the maximal ideal of $R$.
Since in this paper we will only use that $v(Q_{1,M})={\cal Z}$, we refer
the reader to \cite{cluc-leen-2012} for a general definition of
$Q_{N,M}$ for all integers $N,M\geq1$ in arbitrary $p$\--adically
closed fields.
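To fix ideas, here is what this gives in the simplest case (a sketch, under the assumption $K={\rm\bf Q}_p$, so that we may take $\pi=p$ and $M=1$):
\begin{displaymath}
Q_{1,1}=\bigcup_{k\in{\rm\bf Z}}p^k(1+p{\rm\bf Z}_p),
\end{displaymath}
and every element $x=p^k(1+pu)$ with $u\in{\rm\bf Z}_p$ satisfies $v(x)=k$, so $v(Q_{1,1})={\rm\bf Z}$, in accordance with the property $v(Q_{1,M})={\cal Z}$ which is the only one used below.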
We let $D^MR=(\{0\}\cup Q_{1,M})\cap R$. Given an $m$\--tuple $N\in({\rm\bf N}^*)^m$ we
call a set $S\subseteq K^m$ a {\bf polytope mod $N$ in $D^MR^m$} if $v(S)$ is
a largely continuous precell mod $N$ in $\Gamma^m$ and $S=v^{-1}(v(S))\cap D^MR^m$. The {\bf
faces} and {\bf facets} $F_J(S)$ of a subset $S$ of $D^MR^m$ are
defined as the inverse images, by the restriction of $v$ to
$D^MR^m$, of the faces and facets of $v(S)$. The {\bf support} of
$S$ (resp. of $x\in K^m$) is the support of $v(S)$ (resp. of $v(x)$),
so that:
\begin{displaymath}
\mathop{\rm Supp}(x)=\big\{i\in[\![1,m]\!]\,\big/\ x_i\neq0\big\}
\end{displaymath}
\begin{displaymath}
F_J(S)=\big\{x\in\overline{S}\,\big/\ \mathop{\rm Supp} x=J\big\}
\end{displaymath}
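For instance (with a point chosen purely for illustration), if $m=3$, $K={\rm\bf Q}_p$ and $x=(p,0,1)$, then exactly the first and third coordinates are non-zero, so
\begin{displaymath}
\mathop{\rm Supp}(x)=\{1,3\};
\end{displaymath}
consequently, whenever $x\in\overline{S}$ for some $S\subseteq D^MR^3$, the point $x$ belongs to the face $F_{\{1,3\}}(S)$ and to no other face of $S$.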
We say that $S$ is {\bf monohedral} if $v(S)$ is so, that is if the
faces of $S$ are linearly ordered by specialisation, in which case we
call $S$ a {\bf monotope mod $N$ in $D^MR^m$}.
A family ${\cal C}$ of polytopes mod $N$ in $D^MR^m$ is a {\bf complex} if
it is finite and for every $S,T\in{\cal C}$, $\overline{S}\cap\overline{T}$ is
the union of the common faces of $S$ and $T$. It is a {\bf closed
complex} if moreover it contains all the faces of its members. Every
complex ${\cal S}$ of polytopes mod $N$ is contained in a smallest closed
complex, namely the family of all the faces of the members of ${\cal S}$.
We call it the {\bf closure} of ${\cal S}$ and denote it $\overline{{\cal S}}$.
In order to ease the notation, we write $vS$ for $v(S)$, and $v{\cal C}$
for $\{vS\,\big/\ S\in{\cal C}\}$. Clearly ${\cal C}$ is a (closed) complex if
and only if $v{\cal C}$ is.
\begin{proposition}\label{pr:face-preim}
Let $S$ be a polytope mod $N$ in $D^MR^m$, and let $T=F_J(S)$ be any of
its faces. Then $T$ is a polytope mod $N$ equal to $\pi_J(S)$.
\end{proposition}
\begin{proof}
Due to the correspondence between the faces of $S$ and $vS$, this
follows directly from Proposition~\ref{pr:pres-face} and
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}).
\end{proof}
More generally, all the points of
Proposition~\ref{pr:face-egale-proj}, as well as
Proposition~\ref{pr:face-socle}, Corollary~\ref{co:socle-facet},
the Monohedral Decomposition (Theorem~\ref{th:mono-dec}) and
Proposition~\ref{pr:split-mono} immediately
transfer to polytopes mod $N$ in $D^MR^m$. Only the Monohedral Division
(Theorem~\ref{th:mono-div}) requires a bit more of preparation.
For the sake of generality we want the $p$\--adic analogue of the
Monohedral Division Theorem in $\Gamma^m$ to hold not only with a map $\varepsilon:\partial
S\subseteq K^m\to K^*$ definable in the language of rings ({\it i.e.}
semi-algebraic) but also with a map definable in various expansions
$(K,{\cal L})$ of the ring structure of $K$. The proof of
Theorem~\ref{th:div-p-adic} below shows that it suffices to make the
following assumptions on $(K,{\cal L})$:
\begin{description}
\item[(Ext)]
For every definable function $f:X\subseteq K^m\to K^*$, if $f$ is continuous
and $X$ is closed and bounded, then $v(f(x))$ takes a maximum
value at some point $x\in X$.
\item[(Pres)]
The image by the valuation of every subset of $K^m$ definable in
$(K,{\cal L})$ is ${\cal L}_{Pres}$\--definable.
\end{description}
\begin{remark}\label{re:P-min}
If $K$ is a finite extension of ${\rm\bf Q}_p$ then
condition~(Ext) holds for every continuous function by
the Extreme Value Theorem. But this condition, when restricted to
definable continuous functions, is preserved by elementary
equivalence. Hence it will be satisfied whenever the complete theory
of $(K,{\cal L})$ has a $p$\--adic model (that is a model whose
underlying field is a finite extension of ${\rm\bf Q}_p$). On the other hand,
if $(K,{\cal L})$ is $P$\--minimal (see \cite{hask-macp-1997}), Theorem~6
in \cite{cluc-2003} proves that condition~(Pres) is
satisfied. In particular Theorem~\ref{th:div-p-adic} applies for
example to every subanalytic map $\varepsilon$, and more generally to every
map $\varepsilon$ which is definable in a $P$\--minimal structure $(K,{\cal L})$
which has a $p$\--adic model.
\end{remark}
For every $x\in K^m$ we let $w(x)=\min_{1\leq i\leq m}v(x_i)$. If $v(K)={\rm\bf Z}$
this is the valuative counterpart of the usual norm on $K^m$, which
measures the distance of $x$ to the origin (see also
Remark~\ref{re:delta-dist}).
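As a worked example (with hypothetical values), take $K={\rm\bf Q}_p$ and $x=(p^2,p^{-1},1)\in K^3$; then
\begin{displaymath}
w(x)=\min\big(v(p^2),v(p^{-1}),v(1)\big)=\min(2,-1,0)=-1,
\end{displaymath}
so $w(x)$ is negative precisely because one coordinate of $x$ lies outside $R$, in accordance with the interpretation of $w$ as a valuative measure of the distance to the origin.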
\begin{theorem}[Monotopic Division]\label{th:div-p-adic}
Let $S$ be a polytope mod $N$ in $D^MR^m$, $\varepsilon:\partial S\to K^*$ a definable
function (in some expansion $(K,{\cal L})$ of the ring structure of $K$
satisfying previous conditions (Ext) and (Pres)). Let ${\cal T}$ be a
complex of monotopes mod $N$ in $D^MR^m$ such that $\bigcup{\cal T}=\partial S$.
Assume that the restriction of $v\circ\varepsilon$ to every proper face of $S$ is
continuous. Then there exists a finite partition ${\cal U}$ of $S$ in
monotopes mod $N$ in $D^MR^m$ such that ${\cal U}\cup{\cal T}$ is a closed
complex, ${\cal U}$ contains for every $T\in{\cal T}$ a unique monotope $U$ with
facet $T$, and moreover for every $u\in U$
\begin{displaymath}
w\big(u - \pi_J(u)\big) \geq v\big(\varepsilon(\pi_J(u))\big)
\end{displaymath}
where $J=\mathop{\rm Supp}(T)$.
\end{theorem}
\begin{proof}
For every proper face $F_J(S)$ of $S$, and every $s\in F_J(S)$, the
function $t\mapsto v(\varepsilon(t))$ is continuous on $v^{-1}(\{v(s)\})\cap F_J(S)$,
which is a closed and bounded domain. Thus it attains a maximum value
$e(s)$ (see Remark~\ref{re:P-min}). So let
\begin{displaymath}
G_J=\big\{(s,t)\in F_J(S)\times K\,\big/\ v(t)=e(s)\big\}.
\end{displaymath}
This is a definable set hence $v(G_J)$ is ${\cal L}_{Pres}$\--definable
(see Remark~\ref{re:P-min}). Moreover by construction $v(G_J)$ is the
graph of a function $g_J:vF_J(S)=F_J(vS)\to{\cal Z}$, such that $v(\varepsilon(s))\leq g_J(v(s))$
for every $s\in F_J(S)$. Let $g:\partial(vS)\to{\cal Z}$ be the function whose restriction
to each $F_J(vS)$ is $g_J$.
The Monohedral Division (Theorem~\ref{th:mono-div}) applies to $vS$,
$g$ and $v{\cal T}$. It gives a finite partition ${\cal C}$ of $vS$ in monohedral
precells mod $N$ such that ${\cal C}\cup v{\cal T}$ is a closed complex, every non-closed $C\in {\cal C}$
has a unique facet $D$ which belongs to $v{\cal T}$ and $\Delta_J\geq g\circ\pi_J$ on $C$
where $J=\mathop{\rm Supp} D$. Let ${\cal U}$ be the family of $v^{-1}(C)\cap D^MR^m$ for
$C\in{\cal C}$. This is clearly a finite partition of $S$ in monotopes mod $N$
in $D^MR^m$. Every $U\in{\cal U}$ has a unique facet $T\in{\cal T}$, and $\Delta_J\geq
g\circ\pi_J$ on $vT$ where $J=\mathop{\rm Supp} vT=\mathop{\rm Supp} T$. That is, for every $u\in U$
we have
\begin{equation}
w\big(u - \pi_J(u)\big)= \min_{i\notin J}v(u_i) = \Delta_J(v(u))
\geq g\circ\pi_J(v(u))
\label{eq:div-p-adiv}
\end{equation}
By construction $\pi_J(v(u))=v(\pi_J(u))$ and
$g(v(t))\geq v(\varepsilon(t))$ for every $t\in T$, hence
\begin{displaymath}
g\circ\pi_J\big(v(u)\big)=g\big(v(\pi_J(u))\big)\geq v\big(\varepsilon(\pi_J(u))\big).
\end{displaymath}
Together with (\ref{eq:div-p-adiv}), this proves the last point.
\end{proof}
Finally, let us mention for further works the following generalisation of
Proposition~\ref{pr:split-mono}.
\begin{proposition}\label{pr:split-p-adic}
Let $A\subseteq D^MR^m$ be a relatively open\footnote{A subset $A$ of a
  topological space is called {\bf relatively open} if it is open in
its closure, that is $\overline{A}\setminus A$ is closed.} set. Assume
that $A$ is the union of a complex ${\cal A}$ of monotopes mod $N$ in
$D^MR^m$. Then for every integer $n\geq1$ there exists a finite
partition of $A$ in semi-algebraic sets $A_1,\dots,A_n$ such that $\partial
A_k=\partial A$ for every $k$.
\end{proposition}
\begin{proof}
Thanks to the correspondence between the faces of the monotopes mod
$N$ in $D^MR^m$ and the faces of their images under $v$, it suffices to
prove the result for a relatively open set $A\subseteq\Gamma^m$ which is the union
of a complex ${\cal A}$ of monotopes mod $N$ in $\Gamma^m$.
Let ${\cal C}=\overline{{\cal A}}\setminus{\cal A}$ and $C=\bigcup{\cal C}=\overline{A}\setminus A$. By
assumption $A$ is relatively open hence $C$ is closed, so ${\cal C}$ is a
closed complex. Let $U_1,\dots,U_r$ be the list of minimal elements of
${\cal A}$. Every $S\in\overline{{\cal A}}$ such that $U_i\leq S$ for some $i$
belongs to ${\cal A}$ (otherwise $S\in\overline{{\cal A}}\setminus{\cal A}={\cal C}$ which is
closed, hence $U_i\in{\cal C}$, a contradiction since $U_i\in{\cal A}$). Note
further that every $T\in\overline{{\cal A}}\setminus{\cal A}$ is a proper face of some $U_i$
(because $T$ is a face of some $S\in{\cal A}$ and $U_i\leq S$ for some $i$, hence
$T< U_i$ or $U_i\leq T$ because $S$ is a monotope, and the second case is
excluded because $T\notin{\cal A}$). In particular $\partial A=\overline{A}\setminus A$ is the
union of the sets $T\in\overline{{\cal A}}$ such that $T<U_i$ for some $i$, that
is $\partial A=\bigcup_{i\leq r}\partial U_i$.
For each $i\leq r$ let ${\cal B}_i$ be the family of $S\in{\cal A}$ such that $S\geq
U_i$, and $B_i=\bigcup{\cal B}_i$. The families ${\cal B}_i$ are pairwise disjoint, and
so are the sets $B_i$ since ${\cal A}$ is a complex. By the same argument
as above (replacing ${\cal A}$ by ${\cal B}_i$) $\overline{B}_i\setminus B_i
=\bigcup(\overline{{\cal B}}_i\setminus{\cal B}_i) =\partial U_i$, hence $B_i$ is relatively open
and $\partial B_i=\partial U_i$. It suffices to prove the result separately for each
$B_i$. Indeed, assume that for each $i\leq r$ we have found a partition
$(B_{i,j})_{1\leq j\leq n}$ of $B_i$ in definable sets such that $\partial
B_{i,j}=\partial B_i$. Then let $A_j=\bigcup_{i\leq r}B_{i,j}$ for each $j$. By
construction these sets form a partition of $A$ and
\begin{displaymath}
\overline{A}_j\setminus A_j
= \overline{A}_j\setminus A
= \bigcup_{i\leq r}\overline{B}_{i,j}\setminus A
= \bigcup_{i\leq r}\overline{B}_i\setminus A
= \bigcup_{i\leq r}\partial B_i
= \partial A.
\end{displaymath}
Thus replacing $A$ and ${\cal A}$ by $B_i$ and ${\cal B}_i$ if necessary, we can
assume that ${\cal A}$ has a unique smallest element $U_0$. If $U_0$ is
closed, then $\partial A=\partial U_0=\emptyset$ (by minimality of $U_0$), and it suffices
to take $A_1=A$ and $A_k=\emptyset$ for $k\geq2$. So from now on we assume that
$U_0$ is not closed. Proposition~\ref{pr:split-mono} then
applies to $U_0$ and gives for some $N'$ a partition
$A_{1}(U_0),\dots,A_{n}(U_0)$ of $U_0$ in largely continuous
precells mod $N'$ such that $\partial A_i(U_0)=\partial U_0$ for every $i$. In
particular each $A_i(U_0)$ is a basic Presburger set. Let $H=\mathop{\rm Supp}
U_0$, and for every $S\in{\cal A}$ and $i\in[\![1,n]\!]$ let
$A_i(S)=\pi_H^{-1}(A_i(U_0))\cap S$. Note that this is a basic Presburger
set. Indeed, $S$ itself is a basic Presburger set, and
$\pi_H^{-1}(A_i(U_0))\cap F_I(\Gamma^m)$ with $I=\mathop{\rm Supp} S$ is a basic Presburger
set because $A_i(U_0)$ is so (replace every condition $f(x)\geq0$
defining $A_i(U_0)$ by $f\circ\pi_H(x)\geq0$). Hence their intersection
$A_i(S)$ is a basic Presburger set too. For every $i\leq n$ let
$A_i=\bigcup\{A_i(S)\,\big/\ S\in{\cal A}\}$. This defines a partition of $A$. In order
to conclude it only remains to show that $\overline{A_i}=A_i\cup\partial U_0$
for each $i$, so that $\partial A_i=\partial U_0=\partial A$. Since
$\overline{A_i}=\bigcup\{\overline{A_i(S)}\,\big/\ S\in{\cal A}\}$, it suffices to check
that for every $S\in{\cal A}$
\begin{equation}
\overline{A_i(S)}=\bigcup\{A_i(T)\,\big/\ T\in{\cal A},\ U_0\leq T\leq S\}\cup\partial U_0.
\label{eq:split-p-adic}
\end{equation}
Let $I=\mathop{\rm Supp} A_i(S)=\mathop{\rm Supp} S$, and $J\subseteq I$ be the support of any face of
$A_i(S)$. Note that $F_J(A_i(S))\neq\emptyset$ implies that $F_J(S)\neq\emptyset$, thus $J$
is the support of a face $T=F_J(S)$ of $S$. This face $T$ belongs to
$\overline{{\cal A}}$, hence to ${\cal A}$ if $U_0\leq T$. We are claiming that
$F_J(A_i(S))=A_i(T)$ in that case, and that $F_J(A_i(S))=T=F_J(U_0)$ if
$T<U_0$. This will finish the proof since $\overline{A_i(S)}$ is the
union of its faces, and (\ref{eq:split-p-adic}) then follows
immediately.
Assume first that $U_0\leq T$. Since $A_i(S)$ and $S$ are basic
Presburger sets, we know by
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}) that
$F_J(A_i(S))=\pi_J(A_i(S))$ and $F_J(S)=\pi_J(S)$, that is $T=\pi_J(S)$.
Since $U_0\leq T$ we have $H\subseteq J$ hence
$\pi_J(\pi_H^{-1}(A_i(U_0)))=\pi_J^{-1}(A_i(U_0))$. It follows that
\begin{displaymath}
\pi_J\big(\pi_H^{-1}(A_i(U_0))\cap S\big)
\subseteq \pi_J\big(\pi_H^{-1}(A_i(U_0))\big)\cap\pi_J(S)
= \pi_J^{-1}(A_i(U_0)) \cap T
\end{displaymath}
that is $\pi_J\big(A_i(S)\big) \subseteq A_i(T)$.
Conversely, for every $y\in A_i(T)$ we have on one hand $y\in T=\pi_J(S)$ so
there is $x\in S$ such that $\pi_J(x)=y$, and on the other hand
$y\in\pi_H^{-1}(A_i(U_0))$ so $\pi_H(x)=\pi_H(\pi_J(x))=\pi_H(y)\in A_i(U_0)$. Thus
$x\in\pi_H^{-1}(A_i(U_0))\cap S=A_i(S)$, and since $y=\pi_J(x)$ this proves
that $A_i(T)\subseteq\pi_J(A_i(S))$. This proves our claim in this case.
Now assume that $T<U_0$. Then $J\subset H$ hence
$\pi_J(A_i(S))=\pi_J\big(\pi_H(A_i(S))\big)$. We already know that
$\pi_H(A_i(S))=A_i(U_0)$ by the previous case, and that $\partial A_i(U_0)=\partial
U_0$ by construction. In particular $F_J(A_i(U_0))=F_J(U_0)$.
$F_J(U_0)=T$ since ${\cal A}$ is a complex and $T<U_0$. Altogether, using
Proposition~\ref{pr:face-egale-proj}(\ref{it:face-proj}) for $A_i(S)$
and $A_i(U_0)$ we get
\begin{displaymath}
F_J(A_i(S))=\pi_J(A_i(S))
=\pi_J(A_i(U_0))=F_J(A_i(U_0))=T.
\end{displaymath}
\end{proof}
\bibliographystyle{alpha}
\pagenumbering{roman}
\thispagestyle{empty}
\begin{centering}
\vspace{60pt}A GEOMETRICAL TRIUMVIRATE OF REAL RANDOM MATRICES

\vspace{12pt} Anthony Mays

\vspace{280pt}Submitted in total fulfilment of the requirements of the degree of Doctor of Philosophy

\vspace{50pt}November 2011

\vspace{12pt}Department of Mathematics and Statistics

University of Melbourne

\vspace{80pt} {\small PRODUCED ON ARCHIVAL QUALITY PAPER}
\end{centering}
\newpage
\thispagestyle{empty}
${}$
\newpage
\section*{Abstract}
\addcontentsline{toc}{section}{Abstract}
The eigenvalue correlation functions for random matrix ensembles are fundamental descriptors of the statistical properties of these ensembles. In this work we present a five-step method for the calculation of these correlation functions, based upon the method of (skew-) orthogonal polynomials. This scheme systematises existing methods and also involves some new techniques. By way of illustration we apply the scheme to the well known case of the Gaussian orthogonal ensemble, before moving on to the real Ginibre ensemble. A generalising parameter is then introduced to interpolate between the GOE and the real Ginibre ensemble. These real matrices have orthogonal symmetry, which is known to lead to Pfaffian or quaternion determinant processes, yet Pfaffians and quaternion determinants are not defined for odd-sized matrices. We present two methods for the calculation of the correlation functions in this case: the first is an extension of the even method, and the second establishes the odd case as a limit of the even case.
Having demonstrated our methods by reclaiming known results, we move on to study an ensemble of matrices $\mathbf{Y} = \mathbf{A}^{-1} \mathbf{B}$, where $\mathbf{A}$ and $\mathbf{B}$ are each real Ginibre matrices. This ensemble is known as the \textit{real spherical ensemble}. By a convenient fractional linear transformation, we map the eigenvalues into the unit disk to obtain a rotationally invariant distribution of eigenvalues. The correlation functions are then calculated in terms of these new variables by means of finding the relevant skew-orthogonal polynomials. The expected number of real eigenvalues is computed, as is the probability of obtaining any number of real eigenvalues; the latter is compared to numerical simulation. The generating function for these probabilities is given by an explicit factorised polynomial, in which the zeroes are gamma functions.
We show that in the limit of large matrix dimension, the eigenvalues (after stereographic projection) are uniformly distributed on the sphere, a result which is part of a universality result called the \textit{spherical law}. By taking a different limit, we also show that the local behaviour of the eigenvalues matches that of the real Ginibre ensemble, which corresponds to the planar limit of the sphere.
Lastly, we examine the third ensemble in the triumvirate, the \textit{real truncated ensemble}, which is formed by truncating $L$ rows and columns from an $N\times N$ Haar distributed orthogonal matrix. By applying the five-step scheme and by averaging over characteristic polynomials we proceed to calculate correlation functions and probabilities analogously to the other ensembles considered in this work. The probabilities of obtaining real eigenvalues are again compared to numerical simulation. In the large $N$ limit (with small $L$) we find that the eigenvalues are uniformly distributed on the anti-sphere (after being suitably projected). This leads to a conjecture that, analogous to the circular law and the spherical law, there exists an \textit{anti-spherical law}. As we found for the spherical ensemble, we also find that in some limits the behaviour of this ensemble matches that of the real Ginibre ensemble.
\newpage
\section*{Declaration}
\addcontentsline{toc}{section}{Declaration}
This is to certify that:
\begin{description}
\item[(\textit{i})] {the thesis comprises only my original work towards the PhD except where indicated in the Preface,}
\item[(\textit{ii})]{due acknowledgement has been made in the text to all other material used,}
\item[(\textit{iii})]{the thesis is fewer than 100,000 words in length, exclusive of tables, maps, bibliographies and appendices.}
\end{description}
\vspace{80pt} Signed,
\vspace{60pt} \hspace{160pt} ANTHONY MAYS
\newpage
\section*{Preface}
\addcontentsline{toc}{section}{Preface}
The odd-dimensional method of Chapters \ref{sec:odd_from_even} and \ref{sec:Gin_oddfromeven} was joint work with Peter Forrester, and began as an offshoot of his Australian Research Council (ARC) project on the integrability aspects of random matrix theory. Our work was originally published in \cite{FM09}.
Most of the content of Chapter \ref{sec:SOE} was also work with Peter Forrester, which was developed collaboratively and published jointly in \cite{FM11}. I have since reworked the paper into the present format so that it coheres with the overall structure of the thesis.
The remainder of the thesis is largely an attempt to unify various existing methods in the field, and as such a number of prior results have been included. This will hopefully have the additional benefit of providing a useful self-contained resource for students and others. Any previous results are, of course, clearly identified as such and the original references are cited.
\newpage
\section*{Acknowledgements}
\addcontentsline{toc}{section}{Acknowledgements}
Firstly, thanks are due to the staff of the Department of Mathematics and Statistics, University of Melbourne for the use of their Research Support Scheme and for providing the day-to-day needs of Ph.D. study life. For financial support, I am grateful to the Australian Postgraduate Award.
Thanks also to those other groups who supported me during my studies: Australian Mathematical Sciences Institute (AMSI) for their summer and winter schools; Erwin Schr\"{o}dinger Institute (ESI), Vienna; Mathematical Sciences Research Institute (MSRI), Berkeley; American Institute of Mathematics (AIM), Palo Alto; and the University of Oregon, Eugene. In relation to the latter, particular thanks are given to Chris Sinclair for arranging the logistics of my trip and for his hospitality and stimulating discussions.
Thanks to Jonith Fischmann for pointing out corrections to Chapter \ref{sec:SOEcharpolys}, Craig Hodgson for a short tutorial about the Stiefel manifold, Peter Paule for providing a Mathematica version of the Zeilberger algorithm, and James Garza for housing me in Southern California. An especially big thanks to Anita Ponsaing for putting up with all my clowning. She was always ready to listen and help out, but most importantly she's been a great friend with whom I've enjoyed travelling around the country and around the world.
As my secondary supervisor, Jan de Gier was not intimately involved in my research, however he was often able to assist me on more general questions, as well as help with conference attendance and discuss future career prospects. He was also the consummate BBQ host.
But the major portion of my gratitude is reserved for my primary supervisor, Peter Forrester, who, after taking me on as an honours student who just wanted to juggle, continued with me into a Ph.D. on random matrix theory. There were very few instances, if any, where he could not provide truly insightful comments that would clear up muddled understanding and point the way forward. His guidance paved the road of understanding to this remarkably interesting field.
\newpage
\tableofcontents
\newpage
\listoffigures
\listoftables
\newpage
\pagenumbering{arabic}
\section{Introduction}
One feels the need to start at the beginning, by defining what a random matrix is. At its broadest, the term is self-explanatory: we pick a matrix at random (using a specific distribution) from a set of matrices. This set is defined by some desired attributes of the matrices, such as Hermitian, orthogonal, non-singular or Gaussian distributed entries. To study a `typical' matrix from the set, one thinks of picking a matrix randomly from an \textit{ensemble} of all matrices having the particular attributes of interest. However, as we are cautioned in \cite{Edelman1993}, we should not confuse a `typical' random matrix with `any old' matrix; the matrices under study here have a very rich structure. A short and very readable general introductory review of the field is found in \cite{Diaconis2005}, while \cite{ForSnaVer2003, MezSna2005, AkeBaiDiF2011} contain reviews of a more technical nature and \cite{GuhMGWei1997, Diaconis2003, deift2007} focus on the applications of random matrices. Standard texts include \cite{Deift2000, mehta2004, BaiSilver2006, Deift2009, AndGuiZei2009, forrester?}.
The main focus of the field of random matrix theory is to analyse the eigenvalue distribution of the ensemble, although the behaviour of the eigenvectors may also be of interest. We remind the reader that the eigenvalues of a matrix $\mathbf{A}$ are the set of $\lambda$ that satisfy the equation $\det (\mathbf{A}-\lambda \mathbf{1})=0$, where $\mathbf{1}$ is the identity matrix. This determinant is a polynomial in $\lambda$, called the \textit{characteristic polynomial} of the matrix $\mathbf{A}$, and so the eigenvalues of a matrix are also the zeroes of its corresponding characteristic polynomial. From this observation, we expect that there should be a close correspondence between results concerning eigenvalue distributions in random matrix theory and those of the distributions of zeroes in the theory of random polynomials. Indeed, this turns out to be true, although the relationship extends far beyond the characteristic polynomial. We will not pursue random polynomial theory here; the interested reader should see \cite{HKPV2009} and references therein.
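As a minimal worked instance of this correspondence, for a $2\times2$ matrix the characteristic polynomial can be written out explicitly:
\begin{displaymath}
\det\left(\left[\begin{array}{cc} a & b\\ c & d\end{array}\right]-\lambda\mathbf{1}\right)
=\lambda^2-(a+d)\lambda+(ad-bc),
\end{displaymath}
so the two eigenvalues are the zeroes $\lambda=\frac{1}{2}\big(a+d\pm\sqrt{(a+d)^2-4(ad-bc)}\big)$; a distribution on the entries $a,b,c,d$ therefore induces a distribution on the zeroes of a random quadratic.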
It may be expected \textit{a priori} that the eigenvalues of a random matrix are scattered uniformly at random over their support (exhibiting the `clumpy' patterns typical of such data), however this is far from true and they instead display strongly correlated behaviour. In this thesis we develop a method for calculating correlation functions for several ensembles of matrices with real elements. Many of the results are new, although, since it is our hope that this work will form a useful part of the reference literature for those working with real random matrices, we have attempted to provide a self-contained treatment, which explains its voluminous nature. Our original contributions include: a streamlined method for the calculation of the correlation kernel, for both even- and odd-sized matrices (unpublished) in Chapters \ref{sec:GOE_steps} and \ref{sec:GOE_odd}; an alternative method for calculating correlation functions for odd-sized matrices \cite{FM09} in Chapter \ref{sec:odd_from_even}; and the calculation of the correlation functions for the real spherical ensemble \cite{FM11} in Chapter \ref{sec:SOE}. The methods developed and presented here have also been applied in the papers \cite{ForSinc2010} and \cite{Forrester2010a}. We have also provided various reworkings and reinterpretations of known results, as well as calculations and simulations of the probability of obtaining some number of real eigenvalues.
The study of random matrices can be traced back to Hurwitz \cite{Hurwitz1898} (which is included in \cite{Hurwitz1933}), where he presented a parameterisation of the orthogonal group and then computed its volume form in terms of generalised Euler angles, which are a standard set of co-ordinates describing the rotation of one co-ordinate frame relative to another. In \cite{PozZycKus1998} the authors discuss Hurwitz's parameterisation and then use it as a practical way to generate random orthogonal matrices.
A significant milestone was passed in 1928 with the paper by Wishart \cite{Wishart1928}, the purpose of which was to analyse the estimated variance of an underlying population by taking $N$ samples from it. If one writes the normalised, centred variables as a vector $\utilde{x}=[(x_j-\bar{x}) /\sqrt{N}]_{j=1,...,N}$, where $\bar{x}$ is the mean of the sample, then the variance is given by $\utilde{x}^T \cdot \utilde{x}$. However, when there are multiple variates $x_j^{(1)}, x_j^{(2)},...,x_j^{(M)}$, one needs to consider all possible dot products $\utilde{x}^{(l)}{}^T \cdot \utilde{x}^{(m)}$, $l,m =1,...,M$, of the corresponding vectors. These dot products can be conveniently written in the form $\mathbf{X}^T \mathbf{X}$, where $\mathbf{X}$ is an $N\times M$ matrix with these vectors forming the columns, a structure which has become known as a Wishart matrix. Wishart's contribution was to find the distribution of these variances for general $M$; he did so by adapting a geometrical technique that had previously been used by Fisher to establish the $M=2$ case in \cite{Fisher1915}.
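To make the construction concrete, here is a minimal sketch of forming a Wishart matrix from simulated data (NumPy assumed; the sample and variate counts are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 100, 3  # N samples of each of M variates

# Raw data: column m holds the N samples of variate m.
data = rng.standard_normal((N, M))

# Centre each column and normalise by sqrt(N), as for the vector x-tilde above.
X = (data - data.mean(axis=0)) / np.sqrt(N)

# The Wishart matrix X^T X collects every dot product of the centred vectors,
# i.e. all the sample variances and covariances.
W = X.T @ X
```

By construction $\mathbf{W}$ is symmetric and positive semi-definite, with the sample variances on the diagonal.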
One of the major technical achievements in the field came in 1939 with the (more or less) simultaneous calculation of various Jacobians, showing that they depend on a product of differences \cite{Fisher1939, Hsu1939, Roy1939, Girshick1939, Mood1951},
\begin{align}
\label{eqn:evalbeta} \prod_{1\leq j < k \leq N} |\lambda_k-\lambda_j|^{\beta}
\end{align}
(see \cite{Anderson2007} for a review of these calculations and a discussion of the timing of their publication). A Vandermonde factor in the eigenvalue distribution can be interpreted as repulsion between eigenvalues, where $\beta$ is the `strength' of the repulsion between them. This repulsion implies that the eigenvalues will tend to be more evenly spread over the support than if there were no interaction. In the latter case, where they are independent, we have a Poisson process and one expects a spacing distribution like that in Figure \ref{fig:PoiDist}, in which case clumping of the points tends to occur. Numerical simulations on real symmetric matrices confirmed that the eigenvalues are inclined to repel \cite{Ros1958}, leading to a spacing distribution like Figure \ref{fig:RMTDist}, which turns out to be characteristic of determinantal processes, of which random matrix eigenvalue distributions are an example.
\begin{figure}[htp]
\begin{center}
\subfloat[]{\label{fig:PoiDist} \includegraphics[scale=0.7]{PoiSpace.pdf}}
\qquad \qquad \subfloat[]{\label{fig:RMTDist} \includegraphics[scale=0.7]{RMTSpace.pdf}}
\caption[Typical spacing distributions for Poisson and random matrix processes.]{Spacing distributions for a) $e^{-x}$, representing a Poisson process, and b) $x {e^{-x^2}}$, representing a random matrix process.}
\end{center}
\end{figure}
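The contrast between these spacing distributions can be probed by direct simulation. The sketch below (NumPy assumed; the matrix size and the crude restriction to the bulk of the spectrum are choices of ours, not from the text) computes normalised nearest-neighbour spacings for a random real symmetric matrix, for which very small spacings are rare compared with a Poisson process:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500

# A real symmetric matrix with Gaussian entries (GOE-like).
G = rng.standard_normal((n, n))
S = (G + G.T) / 2
evals = np.linalg.eigvalsh(S)  # returned in ascending order

# Nearest-neighbour spacings, normalised to unit mean, taken from the
# bulk of the spectrum (the spectrum edges have different statistics).
bulk = evals[n // 4: 3 * n // 4]
s = np.diff(bulk)
s /= s.mean()

# Repulsion: for a Poisson process P(s < 0.1) = 1 - exp(-0.1) ~ 0.095,
# whereas here the fraction of such small spacings is far lower.
print(np.mean(s < 0.1))
```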
Interest in random matrices within the physics community began with Eugene Wigner in the 1950s. The problem being faced at the time was the analysis of the highly excited states of heavy nuclei. Modelling the problem as a set of interacting particles quickly leads to a set of unwieldy coupled equations. Instead, Wigner \cite{Wigner1951} suggested that a statistical approach might be more useful, and he conjectured that the distribution of the spacing between energy levels would be well approximated by the eigenvalue spacing distribution of large symmetric matrices (this statement became known as the \textit{Wigner surmise}) \cite{Wigner1957a, Wigner1957b}. This suggestion was based upon the physical reasoning that the nuclear energy levels corresponding to the same spin should repel, and that for small spacing the number of spacings should be approximately linearly dependent on the spacing distance (giving a graph something like Figure \ref{fig:RMTDist}); this expectation was also proposed by Landau and Smorodinsky \cite[Lecture 7]{LanSmo1955}. Experimental results such as those in \cite{PorTho1956, GurPev1956, BluPor1958} confirmed that this was true. In \cite{PorRos1960, RosPor1960} the authors comprehensively demonstrated that the repulsive nature of the energy levels could indeed be modelled by eigenvalues of symmetric matrices, and that the results matched Wigner's predictions. Further, they demonstrated that atomic spectra obey a similar repulsion, which was also confirmed in \cite{CaGe1983}, although the evidence tends to be less convincing than that for the nuclear levels. (For more evidence on nuclear energy levels and random matrix distributions, see \cite{BHP1983}.)
\begin{remark}
An excellent reference for historical information on nuclear and atomic spectra is \cite{Porter1965}, where many of these seminal papers are collected along with an introductory review of the theory by the editor.
\end{remark}
Random matrix eigenvalue distributions have also been compared to other quantum systems in a similar spirit; for instance, in \cite{Bohi1984} the spacings of eigenvalues in the quantum Sinai billiard are found to agree with those of a certain random matrix ensemble (the Gaussian orthogonal ensemble, to be introduced below). Another interpretation of these eigenvalues is as a Coulomb gas within a confining background potential, a viewpoint that Forrester employs in \cite{forrester?}, following Dyson \cite{dyson1962b}. Outside the arena of physics, a random matrix distribution has been favourably compared to the distribution of zeroes of the Riemann zeta function (see \cite{Diaconis2003, forrester?} for reviews and references), and in \cite{deift2007} Deift points to various social behaviours (boarding a plane, sorting playing cards, bus timetabling) that appear to obey Gaussian ensemble statistics.
\begin{remark}
Clearly, during the development of random matrix theory the comparison of theory with physical experiments and numerical simulations was a key factor in the progress of the field, and in this work we continue with this tradition. Since large numerical computations are relatively easy to perform on a desktop computer these days, we present plots of simulated spectra and numerical estimates of various probabilities, and compare them to the analytical results.
\end{remark}
In the series of papers \cite{dyson1962a, dyson1962b, dyson1962c} Dyson established that random matrix ensembles can naturally be classified into three classes corresponding to physical symmetries: time-reversal invariance with an even number of spins; time-reversal invariance with an odd number of spins; and systems without time-reversal invariance. He identified that each of these ensembles is connected to one of the classical groups studied by Weyl --- orthogonal, symplectic and unitary respectively --- by its invariance under conjugation by matrices from these groups. In \cite{Dyson1962TFW} Dyson deepens the argument, showing how this classification is isomorphic to that identified in Wigner's similar work on time-inversion groups \cite{Wigner1959} (Wigner's original work was published, in German, in \cite{Wigner1932}, which was reprinted in \cite{Wightman1993}), and how these correspondences are fundamentally due to a theorem of Frobenius \cite[Section 11]{Dickson1914}, which states that there are exactly three associative division algebras over the real number field: the real numbers, the complex numbers, and the real quaternions (see Chapter \ref{sec:qdets_pfs} for a definition of a real quaternion). One finds that in each of these ensembles (and many since), the eigenvalue jpdf (joint probability density function) contains a Vandermonde product (\ref{eqn:evalbeta}). Dyson called this tripartite division the \textit{three-fold way} and found that the classes can be conveniently characterised by the parameter $\beta$ in (\ref{eqn:evalbeta}), with $\beta=1$ corresponding to the orthogonal ensemble, $\beta=2$ corresponding to the unitary ensemble and $\beta=4$ corresponding to the symplectic ensemble. In the case of matrices with Gaussian entries, this corresponds to real, complex and real quaternion matrices respectively, with the ensembles being called the Gaussian orthogonal ensemble (GOE), Gaussian unitary ensemble (GUE) and the Gaussian symplectic ensemble (GSE).
In this work we will deal exclusively with ensembles of real matrices, that is, with $\beta=1$.
Dyson focused on what became known as the circular ensembles --- the circular orthogonal ensemble (COE), the circular unitary ensemble (CUE) and the circular symplectic ensemble (CSE) --- which consist of symmetric unitary, general unitary and self-dual unitary matrices respectively. These circular ensembles produced eigenvalues with compact support (the unit circle), which had the physical benefit of allowing a uniform probability distribution to be imposed. The matrices are drawn from the relevant invariant (or Haar) measure (see Chapter \ref{sec:step2} for more on this point). The unitary ($\beta=2$) ensemble turned out to be simplest, mathematically, to work with and in \cite{dyson1962c} a determinantal structure of its correlation functions was found. Dyson's seminal papers established the framework within which random matrix analysis was found to be naturally conducted; indeed in \cite{Mehta1967} Mehta applied the methods to the Gaussian ensembles and likewise found determinantal correlation functions.
The next step forward was contained in \cite{dyson1970}, where it was determined that the eigenvalue correlation functions for the $\beta=1$ and $\beta=4$ circular ensembles were given by quaternion determinants (see Chapter \ref{sec:qdets_pfs} for definitions) of matrices with $2\times 2$ matrix kernels (which reduced to determinants of $1\times 1$ kernels in the known $\beta=2$ case). This was shortly followed by the work in \cite{mehta1971} where Dyson's method was adapted to obtain similar results for the analogous Gaussian ensembles. The structural properties of these results have turned out to be another feature of random matrix studies; $\beta=2$ ensembles produce determinantal correlation functions, while $\beta=1$ and $\beta=4$ ensembles result in quaternion determinant or Pfaffian structures (where quaternion determinants and Pfaffians may, for the moment, be thought of as the square root of a determinant; see Chapter \ref{sec:Step3_GOE} for a more punctilious description). More recently however, Sinclair \cite{Sinclair2010} has shown that there is a Pfaffian structure for $\beta=2$, which does not appear to be a trivial rewriting of a determinant. In the same paper, the author goes on to establish that generalised Pfaffian structures (hyperpfaffians) occur for the Hermitian and circular ensembles for more general $\beta$ --- when $\beta=L^2$ ($L$ an integer) and $\beta=M^2+1$ ($M$ an odd integer). For a specific example of a $\beta=2$ Pfaffian correlation function see \cite{Kie2011}; also see \cite{ForSinc2010} for further applications of this idea.
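As an elementary illustration of the `square root of a determinant' description, the following sketch checks the identity $\mathrm{Pf}(\mathbf{A})^2=\det(\mathbf{A})$ for an antisymmetric matrix (NumPy assumed; the naive recursive expansion below is ours, purely for illustration, and is only practical for small matrices):

```python
import numpy as np

def pfaffian(A):
    # Pfaffian via Laplace-type expansion along the first row;
    # exponential cost, so suitable for small matrices only.
    n = A.shape[0]
    if n == 0:
        return 1.0
    if n % 2 == 1:
        return 0.0  # odd-sized antisymmetric matrices have Pfaffian zero
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        minor = A[np.ix_(rest, rest)]
        total += (-1) ** (j - 1) * A[0, j] * pfaffian(minor)
    return total

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
A = B - B.T  # antisymmetric by construction

print(np.isclose(pfaffian(A) ** 2, np.linalg.det(A)))
```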
Some years before the publication of these correlation functions, the Gaussian ensembles were generalised by Ginibre \cite{Gi65} by relaxing the Hermiticity constraint on the entries of the matrices. He defined three non-Hermitian ensembles of real, complex and real quaternion Gaussian entries. Although these ensembles do not obey the same invariance properties that the Gaussian ensembles do, they are often called the Ginibre orthogonal (GinOE), Ginibre unitary (GinUE) and Ginibre symplectic (GinSE) ensembles respectively, in analogy with the Gaussian ensembles. (In this work, we will refer to them as the real, complex and real quaternion Ginibre ensembles to keep in mind that invariance under the respective group is a key attribute of the Gaussian ensembles, and not of the Ginibre ensembles.)
While the eigenvalues of the (Hermitian) Gaussian ensembles, which all lie on the real line, had a straightforward physical interpretation as energy levels, the spectra of the Ginibre matrices, which lie in the complex plane, did not have immediate physical motivation. However applications were forthcoming, and indeed it has been claimed (\cite{AK2007}) that non-Hermitian random matrices are now just as physically applicable as their Hermitian comrades. One of the first applications to be found for real non-Hermitian ensembles was in the work of May \cite{may1972}, where it was determined that the stability of a large biological web depended on all eigenvalues of a corresponding matrix having negative real part, and so analysis of the eigenvalue distribution was required.
As discussed above, the eigenvalues of Hermitian random matrices tend to repel, and it turns out that those of non-Hermitian matrices do so as well. In Figure \ref{fig:PoiRMTDisk} we compare a Poisson process in the unit disk (Figure \ref{fig:PoiDisk}) with an eigenvalue distribution over the same region (Figure \ref{fig:RMTDisk}); note the clumping of points in the former, and the more uniform distribution in the latter.
\begin{figure}[htp]
\begin{center}
\subfloat[]{\label{fig:PoiDisk} \includegraphics[scale=0.4]{PoiDiskNew.jpg}}
\qquad \qquad \qquad \subfloat[]{\label{fig:RMTDisk} \includegraphics[scale=0.4]{GUEDiskNew.jpg}}
\caption[Simulation of a Poisson and random matrix point process in the disk.]{(a) 1000 points placed uniformly at random in the unit disk (a Poisson process), (b) Eigenvalues (scaled into the unit disk) of a $1000 \times 1000$ complex Ginibre matrix.}
\label{fig:PoiRMTDisk}
\end{center}
\end{figure}
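Simulations such as that in Figure \ref{fig:RMTDisk} are straightforward to reproduce. A minimal sketch (NumPy assumed; the matrix size is reduced here for speed) generates a complex Ginibre matrix and scales its eigenvalues into the unit disk:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200

# A complex Ginibre matrix: iid standard complex Gaussian entries.
G = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)

# Dividing by sqrt(n) places the eigenvalues (approximately) in the
# unit disk, in accordance with the circular law discussed below.
z = np.linalg.eigvals(G) / np.sqrt(n)

# The maximum modulus is close to 1 for large n.
print(np.abs(z).max())
```

Plotting the real against the imaginary parts of `z` (for instance with matplotlib) reproduces the qualitative picture of Figure \ref{fig:RMTDisk}.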
These eigenvalue distributions have been interpreted as a two-dimensional Coulomb gas \cite{forrester?}, or as describing a Voronoi tessellation of the plane \cite{LeCHo1990} that is more uniform than that given by a Poisson process \cite{HayQui2002}. This latter viewpoint can be applied to analyse situations where one expects, due to physical considerations, that there would be some repulsion between some entities such as trees in a forest, bird nesting sites or impurities in metals \cite{LeCHo1990}. Other uses of random non-Hermitian matrices have included synchronisation in random networks, statistical analysis of neurological activity, quantum chaos and polynuclear growth processes (see \cite{AK2007, KS2009, forrester?} for overviews and further references).
In his original paper Ginibre found that the eigenvalue jpdf for the (non-Hermitian) complex ensemble involved the product of differences (\ref{eqn:evalbeta}), and was structurally similar to that of the Gaussian ensembles. He went on to calculate the general $n$-point correlations for the complex case and found they were given by the determinant of a $1 \times 1$ kernel, again similar to the complex circular and Gaussian ensembles. In the case of the real quaternion matrices he was able to state the eigenvalue jpdf, but not to calculate the correlations as he lacked the quaternion determinant structure that Dyson would later introduce to the theory. The $1$-, $2$- and $3$-point functions for $\beta=4$ were identified by Mehta in \cite{Mehta1967} with the full correlations appearing many years later in the second edition of his book \cite{mehta1991}.
The real ensemble, however, proved much more difficult and is the subject of Chapter \ref{sec:GinOE} of the present work. First note from classical linear algebra or polynomial theory that a generic $N \times N$ real matrix has $0\leq k\leq N$ (with $k$ of the same parity as $N$) real eigenvalues and $(N-k)/2$ complex conjugate pairs of eigenvalues; so the eigenvalues come in two distinct species. This is a significant difference from the ensembles considered previously, where only one species was present: all eigenvalues are real for the Gaussian ensembles; they all lie on the unit circle for the circular ensembles; and the eigenvalues are general complex numbers for $\beta=2$ and strictly non-real complex for $\beta=4$ Ginibre ensembles. Ginibre was only able to establish the eigenvalue jpdf for the real ensemble in the restricted case that all eigenvalues were real, with the jpdf for general $k$ not appearing until \cite{LS91} and again independently in \cite{Ed97} where new methods of matrix decomposition were employed (see below). The correlation functions were yet longer in coming, needing a result from \cite{sinclair2006}, which established a quaternion determinant or Pfaffian form of the ensemble average, allowing Forrester and Nagao to calculate the real and complex correlation functions (for $N$ even) in \cite{FN07}. As for the GOE, the correlations had a quaternion determinant or Pfaffian structure with a $2\times 2$ kernel. These correlations were generalised to include real--complex cross-correlations in \cite{sommers2007} and independently in \cite{b&s2009}, again only for even dimensional matrices. The odd case was identified shortly afterwards by Sommers and Wieczorek \cite{sommers_and_w2008}, Forrester and the present author \cite{FM09} and Sinclair \cite{Sinc09} using three separate methods.
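The two species of eigenvalue, and the parity constraint on $k$, are easily observed numerically. In the sketch below (NumPy assumed; the size and trial count are arbitrary choices of ours), the non-real eigenvalues occur in conjugate pairs, so the number $k$ of real eigenvalues always shares the parity of $N$:

```python
import numpy as np

rng = np.random.default_rng(5)
N = 7

counts = []
for _ in range(20):
    A = rng.standard_normal((N, N))
    lam = np.linalg.eigvals(A)
    # Eigenvalues with (numerically) zero imaginary part are the real species.
    k = int(np.sum(np.abs(lam.imag) < 1e-10))
    counts.append(k)

# Non-real eigenvalues pair up, so k always has the same parity as N.
print(all(k % 2 == N % 2 for k in counts))
```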
Several difficulties led to the long delay in first identifying the eigenvalue jpdf, and then the full even and odd correlation functions, for the real Ginibre ensemble. Classical results in linear algebra tell us that symmetric real matrices are orthogonally diagonalisable, that is, a symmetric matrix $\mathbf{S}$ can be decomposed as
\begin{align}
\label{eqn:diagdecomp} \mathbf{S}=\mathbf{O}^{T}\mathbf{D}\mathbf{O},
\end{align}
where $\mathbf{D}=\mathrm{diag} [\lambda_1,...,\lambda_N]$ is a diagonal matrix containing the eigenvalues and $\mathbf{O}$ is a matrix whose columns are the corresponding orthonormal eigenvectors \cite{GoVl1996}. The integration over the orthogonal matrices (which gives us the volume of the orthogonal group $O(N)$) has a known evaluation, meaning that the dependence on the eigenvectors can be integrated out of the problem. However, asymmetric real matrices do not have this property; the diagonalising matrices are not orthogonal. Progress required the introduction of the Schur decomposition (see Chapter \ref{sec:Gejpdf}), where the diagonal structure (\ref{eqn:diagdecomp}) is forfeited, with $\mathbf{D}$ being replaced by an upper triangular matrix with the eigenvalues of $\mathbf{S}$ on the diagonal. The benefit is that the conjugating matrices are still orthogonal, and so they can be integrated over with known methods. This, of course, comes at the cost of requiring an extra $N(N-1)/2$ integrations over the upper triangular entries. The Schur decomposition method was employed in \cite{Ed97}, and a closely related form --- related via elementary row operations --- was used in \cite{LS91}, to obtain the joint distribution of the eigenvalues.
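The contrast between the symmetric and asymmetric cases can be seen directly. In the sketch below (NumPy assumed), the eigenvector matrix of a symmetric matrix is orthogonal and reproduces the decomposition (\ref{eqn:diagdecomp}) (NumPy's convention is $\mathbf{S}=\mathbf{O}\mathbf{D}\mathbf{O}^T$, which is (\ref{eqn:diagdecomp}) with $\mathbf{O}$ replaced by its transpose), while the diagonalising matrix of a generic asymmetric matrix is not orthogonal:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
G = rng.standard_normal((n, n))

# Symmetric case: eigh returns orthonormal eigenvectors, so O is orthogonal
# and S = O D O^T.
S = (G + G.T) / 2
d, O = np.linalg.eigh(S)
sym_ok = np.allclose(O @ O.T, np.eye(n)) and np.allclose(O @ np.diag(d) @ O.T, S)

# Asymmetric case: the matrix of eigenvectors is generically NOT orthogonal
# (its columns are unit vectors but are not mutually orthogonal).
w, V = np.linalg.eig(G)
asym_orth = np.allclose(V @ V.conj().T, np.eye(n))

print(sym_ok, asym_orth)
```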
Yet even when the eigenvalue jpdf is established, there are more complications. In the case of the classical orthogonal ensembles, Dyson and Mehta were able to use an integration theorem (Proposition \ref{thm:integral_identities}) to calculate the correlation functions from the eigenvalue jpdf, however, as pointed out in \cite{AK2007}, this does not work for the real Ginibre ensemble. The distinction is that the eigenvalue jpdf for the real Ginibre ensemble pertains to one particular $(N,k)$ pair (recall that $k$ is the number of real eigenvalues), yet the system is only normalised for the sum over all $k$; Dyson's integration theorem does not seem applicable to this sum. In \cite{AK2007} the authors presented a Pfaffian integration formula to deal with this problem, but shortly thereafter the formulation of \cite{Sinc09} circumvented the problem entirely by presenting the ensemble average as a Pfaffian independent of $k$. Using this structure, and applying functional differentiation, the real--real and complex--complex correlations were established in \cite{FN08} via explicit calculation of the skew-orthogonal polynomials (see Chapters \ref{sec:skew_orthog_polys} and \ref{sec:Gsops}), which, as mentioned above, then led to the full correlations in \cite{sommers2007,b&s2009} in the restricted case that the system size is even. Yet, silver linings abound --- this bipartite nature of the set of eigenvalues leads to a particularly interesting question about the real Ginibre ensemble that was raised in \cite{Ed97}: what is the probability of obtaining $k$ real eigenvalues from an $N \times N$ real matrix? We will investigate this for each of the non-Hermitian ensembles (Chapters \ref{sec:pnk}, \ref{sec:tGprobs}, \ref{sec:Ssops} and \ref{sec:TOEsops}).
In the case when the matrix dimension is odd, we must overcome more hurdles. Pfaffians are only defined for even-sized matrices, and adapting them to the odd case involves `bordering' by a new row and column, or removing a row and column from a computable even-sized system (see Chapters \ref{sec:GOE_odd} and \ref{sec:GinOE_odd}). These are not new ideas; they were used by de Bruijn in \cite{deB1955}, although the bordering procedure for Pfaffians can be traced back, at least, to Cayley in 1855 \cite{Cayley1855}, and the generation of an odd system from an even system has an even older pedigree, being found in Pfaff's original presentation of the theory in 1815 \cite{Pfaff1815}, where he was motivated by reducing a set of ordinary differential equations in $2m$ variables to a set of equations in $2m-1$ variables (for a (somewhat) modern interpretation of these historical articles, see \cite{Muir1906,Muir1911}).
This even--odd asymmetry does not show itself in the $\beta=2$ or $\beta=4$ ensembles: the complex ensembles result in determinant structures, which are insensitive to the parity of the matrix, while an $N \times N$ real quaternion matrix ensemble can be effectively viewed as a restricted class of $2N \times 2N$ complex matrices, and so has an underlying even dimension. For the real Ginibre ensemble, we can explicitly identify the culprit. Since the eigenvalues of a real matrix are either real or come in complex-conjugate pairs, an odd-sized real matrix is guaranteed at least one real eigenvalue, and this eigenvalue naturally forms the final row and column; it seems that the technical problems presented by the odd case are due to the fact that this preordained real eigenvalue exists at all. The problem arises when one attempts to apply the important method of integration over alternate variables, which was developed by de Bruijn and Mehta in \cite{deB1955, mehta1960, Mehta1967}, to obtain a Pfaffian expression for the partition function. This method pairs all the eigenvalues to allow one to overcome the asymmetry in the eigenvalue jpdf when $\beta=1$, but of course, in the case of $N$ odd, one eigenvalue must be unpaired and dealt with separately. This leads to other consequences: for example, given that the odd-sized matrix has at least one real eigenvalue, the probability of obtaining $k$ real eigenvalues is qualitatively different in the even and odd cases for finite $N$ (see (\ref{eqn:GinOE_probsGF_pf}) and (\ref{eqn:GinOE_probs_odd})), although they agree in the large $N$ limit.
The Ginibre ensembles can be generalised to the partially symmetric ensembles (see Chapter \ref{sec:tG}) by the incorporation of a parameter $-1<\tau<1$ (by convention). These ensembles interpolate between the symmetric/Hermitian/self-dual Gaussian ensembles ($\tau\to 1$) and ensembles of anti-symmetric/anti-Hermitian/anti-self-dual Gaussian matrices ($\tau\to -1$); $\tau=0$ corresponds to the Ginibre ensembles, with maximum asymmetry. With $\tau$ bounded away from $1$, in the large $N$ limit the eigenvalue distributions and correlations are just scaled forms of those in the Ginibre ensembles. However, by carefully taking $\tau\to 1$ with increasing $N$ it is shown in \cite{FKS97} (where ensembles of complex matrices are discussed and this limit is called the \textit{weakly non-Hermitian} limit) that a new cross-over regime is obtained that interpolates between the apparently qualitatively different behaviours of the Ginibre and Gaussian ensembles. These partially symmetric matrix ensembles have found application in the study of neural networks (see \cite{LS91} and references therein) and in quantum chaotic scattering \cite{FS03}. Another interesting review is contained in \cite{KS2009}. (For a review on related non-Hermitian ensembles as applied to quantum chromodynamics (QCD) see \cite{Ake2007}.)
Similar to Dyson's three-fold classification of the matrix ensembles --- having real, complex or real quaternion elements --- a new tripartite scheme has become apparent recently (see \cite{Krish2006,Krish2009} where it was introduced in the context of Gaussian analytic functions). From differential geometry we know that there are three distinct surfaces corresponding to constant Gaussian curvature $\kappa$: the plane ($\kappa=0$), the sphere ($\kappa >0$) and the anti- or pseudo-sphere ($\kappa <0$). For the Ginibre ensembles, one finds that in the limit of large matrix dimension the eigenvalues tend to uniform density on a (planar) disk (the so-called \textit{circular law}; see below), and we identify these ensembles with the plane. The sphere can be identified with the problem of generalised eigenvalues, that is, the set of $\lambda_j$ given by the solutions to the equation
\begin{align}
\label{eqn:genEvals} \det(\mathbf{B}-\lambda\mathbf{A})=0,
\end{align}
where $\mathbf{A},\mathbf{B}$ are some $N\times N$ matrices. Assuming that $\mathbf{A}$ is invertible, these generalised eigenvalues are equivalent to the eigenvalues of the matrix $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$. In \cite{Krish2009} Krishnapur considers the case where $\mathbf{A}$ and $\mathbf{B}$ are complex Ginibre matrices. It turns out that these eigenvalues have uniform density on the sphere (under stereographic projection) and so ensembles of these matrices are appellated \textit{spherical ensembles}. Similar to the complex Ginibre ensemble, the complex spherical ensemble can be thought of as modelling a gas of charged particles, this time on a sphere; the works \cite{caillol81, FJM1992, forrester?} highlight the analogies. In \cite{FM11} Forrester and the present author analyse the analogous real ($\beta=1$) spherical ensemble, where the matrices $\mathbf{A}, \mathbf{B}$ are real Ginibre matrices. This is the subject of Chapter \ref{sec:SOE}.
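The equivalence between (\ref{eqn:genEvals}) and the eigenvalues of $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$ is easily verified numerically. In this sketch (NumPy assumed; the matrix size is arbitrary), each computed generalised eigenvalue makes $\mathbf{B}-\lambda\mathbf{A}$ singular, as witnessed by its smallest singular value:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

# With A invertible, the generalised eigenvalues solving det(B - lambda*A) = 0
# are the ordinary eigenvalues of A^{-1} B (computed here via solve).
lam = np.linalg.eigvals(np.linalg.solve(A, B))

# Check: at each lambda, B - lambda*A is singular, so its smallest
# singular value should vanish (up to rounding).
smin = [np.linalg.svd(B - l * A, compute_uv=False)[-1] for l in lam]
print(max(smin))
```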
The last members of the geometrical triumvirate are the ensembles corresponding to the anti-sphere. By truncating a number $L$ of rows and columns from an $N \times N$ (complex) unitary matrix (that is, a matrix from the CUE), \.{Z}yczkowski and Sommers \cite{Z&S2000} form the complex \textit{truncated ensemble}. Various applications of these truncated unitary matrices have been found, such as quantum chaotic scattering and conductance (see \cite{FS03} and references therein and \cite{Forrester2006}) and the zeroes of Kac random polynomials \cite{Forrester2010a}. The analogous real ensemble, which was briefly discussed in \cite{Z&S2000}, is the truncation of real orthogonal matrices, the eigenvalue jpdf and correlation functions of which were contained in \cite{KSZ2010}. The analysis of these truncated ensembles is somewhat more intricate than that of the Ginibre or spherical ensembles, since the size of the truncation relative to the dimension of the unitary matrix leads to qualitatively different eigenvalue behaviour. For example, if the truncation is large then the eigenvalue statistics (under certain scaling) approach those of the real Ginibre ensemble, since the orthogonality constraint has small effect, but with a small truncation the orthogonality is strongly felt and the eigenvalues cluster near the unit circle. We find that the eigenvalues are uniformly distributed on the anti-sphere in the limit of large dimension, hence the correspondence with a surface having constant $\kappa <0$.
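A truncated real orthogonal matrix is simple to simulate. The sketch below (NumPy assumed; the sizes are arbitrary, and the QR-with-sign-fix recipe is a standard way of sampling from the invariant measure) truncates a random orthogonal matrix and confirms that the eigenvalues of the truncation lie inside the unit circle:

```python
import numpy as np

rng = np.random.default_rng(8)
N, L = 50, 10  # truncate L rows and columns

# A Haar-distributed orthogonal matrix via QR of a Gaussian matrix,
# with the usual sign fix on the diagonal of R.
G = rng.standard_normal((N, N))
Q, R = np.linalg.qr(G)
Q = Q * np.sign(np.diag(R))

# Truncation: keep the top-left (N-L) x (N-L) block.
T = Q[:N - L, :N - L]

# A proper submatrix of an orthogonal matrix is a contraction, so the
# eigenvalues of the truncation lie inside the unit circle.
z = np.linalg.eigvals(T)
print(np.abs(z).max())
```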
Since each of these surfaces has constant curvature, we can reasonably expect a uniform distribution of eigenvalues over some region of support. However, this should be contrasted with the work in \cite{FanTell2008} where they study the analogous problem on a particular surface (called \textit{Flamm's paraboloid}, which arises in general relativity) with non-constant curvature, resulting in a non-uniform density in the thermodynamic limit.
In the various analyses of the ensembles discussed above there are several techniques and approaches that are regularly employed. It is the purpose of this work to present a systematic approach that can be used for each of the real (corresponding to $\beta=1$) ensembles in the 12-part classification: the 3 symmetric/Hermitian/self-dual Gaussian ensembles and the 9 non-symmetric/non-Hermitian/non-self-dual ensembles corresponding to each of the surfaces of constant Gaussian curvature. Specifically, in Chapter \ref{sec:GOE_steps} we lay out a $5$-step scheme that is applicable to all the ensembles to be discussed, applying them to the GOE by way of illustration; the method can be broadly described as the \textit{(skew-) orthogonal polynomial method}, since knowledge of such polynomials allows explicit calculation of statistical quantities. Then in the following chapters we apply the $5$ steps to the real cases of the Ginibre ensemble (including the real partially symmetric ensemble as a generalisation), the spherical ensemble and the truncated ensemble. Only in the last of these (the real truncated ensemble) will we find that the scheme has some shortcomings, with another method, which has been applied in \cite{sommers_and_w2008,FM09,KSZ2010} (and is discussed in Chapters \ref{sec:SOEcharpolys} and \ref{sec:TOEkernelts}), seeming to be the more useful in that case.
It should be mentioned that this geometric classification is one of a number of classification schemes in the random matrix literature. The work by Altland and Zirnbauer \cite{Zirn1996, AltZir1997} classifies Hermitian random matrix ensembles by the requirement of symmetry under conjugation by various operators. A correspondence between this classification and the families of \textit{symmetric spaces} (which are also defined by their symmetries), as categorised by Cartan \cite{Car1926, Car1927}, is identified. This classification includes Dyson's `three-fold way' and that of Verbaarschot \cite{Verb1994a}, where he identifies a `three-fold way' for the \textit{chiral ensembles}. (We will briefly revisit chiral ensembles in Chapter \ref{sec:FW}.) In \cite{BerLeC2002} and \cite{Mag2008} the set of symmetries is broadened to include non-Hermitian matrix ensembles, which introduces a further twenty symmetric spaces (bringing the total to thirty classes). It turns out that the classification of the non-Hermitian cases is not as useful as that in the Hermitian cases --- for Hermitian ensembles, the form of the Jacobian for the change of variables to the eigenvalue jpdf is determined by its classification, however, this is not true for the non-Hermitian cases. We will not pursue these classifications any further here; the interested reader is referred to the original references as stated, or to \cite[Chapter 15.11]{forrester?}.
Lying behind all the results in the study of random matrices is the concept of \textit{universality}, which is analogous to the central limit theorem in classical probability theory. Universality refers to the observed phenomena that the statistics of high dimensional matrices tend to some unique behaviour, dependent only on some structural feature of the ensemble rather than on the particular distribution of the entries. In the case of the Gaussian ensembles, the statistics of Hermitian matrices with independently and identically distributed (iid) standard normal ($N[0,1]$) entries converge to those of Hermitian matrices with iid entries from any mean zero, unit variance distribution with the same value of $\beta$. An important result to come from the considerations of these random matrices is \textit{Wigner's semi-circle law} \cite{Wigner1957c} for the density of the eigenvalues (see Chapter \ref{sec:GOE_sums}). Although it did not provide good agreement with experiments on nuclear energy levels (as the spacing distribution did), since the energy levels certainly do not have a semi-circular distribution, it has proven to be ubiquitous in the study of random Hermitian matrices. A similar result in the case of the Ginibre matrices is the \textit{circular law} (Chapter \ref{sec:circlaw}) often attributed to Girko \cite{Girko1985a, Girko1985b}, which states that for large matrix size the eigenvalue densities of ensembles with iid entries drawn from a distribution with mean zero and unit variance converge to the uniform distribution on a disk of radius $\sqrt{N}$. We will also discuss an \textit{elliptical law} for the partially symmetric ensembles (Chapter \ref{sec:tGkernelts}) and a \textit{spherical law} for the spherical ensembles (Chapter \ref{sec:SOElims}), which will lead us to the conjecture of an \textit{anti-spherical law} for the truncated ensembles by analogy (Chapter \ref{sec:uasc}).
We also find that, in various scaled limits, the real and complex members of the novempartite categorisation of ensembles (real, complex and real quaternion versions of Ginibre, spherical and truncated matrices) have identical behaviour. The real quaternion ($\beta=4$) cases of the spherical and truncated ensembles have yet to be explored, although they are expected to conform to the same behaviour. This is another interpretation of universality and we discuss it in Chapters \ref{sec:SOEsclims} and \ref{sec:uasc}.
Treating the concept of universality more generally we can find analogies of our results in the studies of random tensors \cite{Ma07}, random walks and random involutions \cite{BakFor2001}, the zeroes of random polynomials \cite{EK95, BakFor1997, Forrester2010a}, and, as discussed above, in seemingly unrelated physical applications: from nuclear energy levels, to Coulomb gases to car parking \cite{deift2006}. When ruminating on such contemplations, as with so many things in random matrix theory, we may invoke the spirit of Wigner; this time calling to mind his observation of the ``unreasonable effectiveness of mathematics'' \cite{Wigner1960} (as did the authors of \cite{AK2007}). In the same way that the central limit theorem is the justification for the common appearance of the normal distribution in large ``real world'' data sets, it seems that random matrix universality is pointing us to something fundamental (and yet fundamentally mystifying) in the relationship between the eigenvalues of a random matrix and the operation of our universe.
\newpage
\section{The Gaussian orthogonal ensemble}
\setcounter{figure}{0}
\label{sec:GOE_steps}
The eigenvalue pdf for a number of ensembles of random matrices with real entries is integrable. By this we mean that probabilistic quantities such as the generalised partition function and the correlation functions exhibit special structures leading to closed form expressions. Our treatment of the details of such calculations can be broken down into five steps:
\begin{enumerate}[I.]
\item{specification of the distribution of matrix elements;}
\item{changing variables to find the distribution of eigenvalues;}
\item{establishing a Pfaffian or quaternion determinant form of the generalised partition function;}
\item{finding the appropriate skew-orthogonal polynomials to simplify the Pfaffian or quaternion determinant;}
\item{calculating the correlations in terms of Pfaffians or quaternion determinants with explicit entries.}
\end{enumerate}
(For definitions of Pfaffians and quaternion determinants see Chapter \ref{sec:qdets_pfs}.) We will first illustrate the steps using the case of the Gaussian Orthogonal Ensemble (GOE), before applying them to the non-symmetric ensembles in the remaining chapters of this work.
\begin{remark}
There are, of course, methods that do not follow this structure; however, in this work we restrict our scope to these steps.
\end{remark}
\subsection{Step I: Joint probability density function of the matrix elements}
This first step, calculating the joint probability density function (jpdf), is essentially the statement of the problem; however, this does not mean that the distribution is necessarily obvious. Indeed, in the case of the spherical ensemble of Chapter \ref{sec:SOE} the element distribution is first specified as a product of the distributions of the component matrices; obtaining knowledge of the elements of the matrix product requires a good deal of calculation. Further, as we shall see in Chapter \ref{sec:truncs}, for the case of the anti-spherical ensemble the matrix distribution is non-analytic for large truncations. (Specifically, the normalisation in (\ref{eqn:TOEmpjdfnorm1}) does not exist when the number of truncated rows and columns is less than the number of rows and columns retained.)
\subsubsection{GOE element jpdf}
The formalism we use here follows that of \cite[Chapters 6 \& 7]{forrester?}. The GOE consists of real, symmetric matrices $\mathbf{X}$, containing Gaussian distributed elements. Specifically, the diagonal and strictly upper triangular elements individually have distributions
\begin{align}
\label{eqn:GOE_el_dist} \frac{1}{\sqrt{2\pi}}e^{-x^2_{jj}/2} \;\; \textrm{and}\;\; \frac{1}{\sqrt{\pi}}e^{-x^2_{jk}}
\end{align}
respectively. Elementary probability theory tells us that the joint density of a set of independent random variables is the product of their individual probability density functions. Thus, for our real, symmetric matrices with elements distributed as in (\ref{eqn:GOE_el_dist}), we have for the jpdf $P(\mathbf{X})$ of the entries of $\mathbf{X}$
\begin{align}
\nonumber P(\mathbf{X})&=\prod_{j=1}^N \frac{1}{\sqrt{2\pi}}e^{-x^2_{jj}/2}\prod_{1\leq j< k \leq N} \frac{1}{\sqrt{\pi}}e^{-x^2_{jk}}=2^{-N/2}\pi^{-N(N+1)/4}\prod_{j,k=1}^Ne^{-x^2_{jk}/2}\\
\label{eqn:GOE_el_jpdf} &=2^{-N/2}\pi^{-N(N+1)/4}e^{-\sum_{j,k=1}^Nx^2_{jk}/2}=2^{-N/2}\pi^{-N(N+1)/4}e^{-(1/2)\mathrm{Tr}\mathbf{X}^2},
\end{align}
where Tr is the matrix trace. We note that $\int P(\mathbf{X}) \prod_{1\leq j \leq k \leq N}dx_{jk}=1$, where each of the $N(N+1)/2$ integrals is over $(-\infty,\infty)$, and so $P(\mathbf{X})$ is, of course, a probability density function. The ensemble is called \textit{orthogonal} because $P(\mathbf{X}) \prod_{1\leq j \leq k \leq N}dx_{jk}$ is unchanged by orthogonal conjugation; we will make this clear after Proposition \ref{prop:GOE_J}.
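As an illustrative check of the algebra leading to (\ref{eqn:GOE_el_jpdf}), the following sketch (our own illustration, not part of the formal development) samples a matrix with the element distributions (\ref{eqn:GOE_el_dist}) and confirms numerically that the product of the individual element densities agrees with the closed trace form:

```python
import math
import random

random.seed(0)
N = 4

# Sample a real symmetric matrix with the stated element distributions:
# diagonal entries ~ N(0, 1), strictly upper triangular entries ~ N(0, 1/2)
X = [[0.0] * N for _ in range(N)]
for j in range(N):
    X[j][j] = random.gauss(0.0, 1.0)
    for k in range(j + 1, N):
        X[j][k] = X[k][j] = random.gauss(0.0, math.sqrt(0.5))

# Product of the individual element densities
prod = 1.0
for j in range(N):
    prod *= math.exp(-X[j][j] ** 2 / 2) / math.sqrt(2 * math.pi)
    for k in range(j + 1, N):
        prod *= math.exp(-X[j][k] ** 2) / math.sqrt(math.pi)

# Closed form 2^{-N/2} pi^{-N(N+1)/4} exp(-Tr(X^2)/2)
trX2 = sum(X[j][k] * X[k][j] for j in range(N) for k in range(N))
closed = 2 ** (-N / 2) * math.pi ** (-N * (N + 1) / 4) * math.exp(-trX2 / 2)

assert abs(prod - closed) < 1e-12 * closed
```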
We now have the elemental distribution but we are ultimately interested in the distribution of the eigenvalues of the matrices specified by (\ref{eqn:GOE_el_dist}). To gain some insight into the expected eigenvalue distribution we can simulate a sequence of random matrices and plot the resulting density. In Figure \ref{fig:GOE_eval_dens} we have plotted the eigenvalue density for $1000$ independent GOE matrices of size $500\times 500$.
\begin{remark}
Note that in Figure \ref{fig:GOE_eval_dens} we have normalised the density to have total mass one and scaled the $x$ axis so that the reader will be all the more awed when we compare the results to the known asymptotic limit in (\ref{eqn:wssl}).
\end{remark}
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.7]{GOEWSLi.pdf}
\caption[Simulated GOE eigenvalue density.]{Eigenvalue density of $1000$ independent $500\times 500$ GOE matrices, normalised and scaled by $\sqrt{2N}$.}
\label{fig:GOE_eval_dens}
\end{center}
\end{figure}
The clear pattern in the data suggests the integrable nature of the problem.
\subsection{Step II: Eigenvalue jpdf}
\label{sec:step2}
The goal here is to re-express the element jpdf in terms of the eigenvalues of the matrices that compose the ensemble. The idea is to separate the eigenvalues from the other independent variables --- relating to the eigenvectors --- in some fashion. Indeed, the key part of this step is to choose a convenient matrix decomposition that exposes the eigenvalues in such a way that the remaining degrees of freedom can be integrated over yielding a constant overall factor. This means that the problem is essentially one of changing variables and calculating the associated Jacobian.
\subsubsection{GOE eigenvalue jpdf}
\label{sec:GOE_eval_jpdf}
With $\vec{\lambda} =\{ \lambda_1,\dots,\lambda_N\}$ the eigenvalues of $\mathbf{X}$, we see that we are looking to change variables and integrate (with somewhat loose notation) according to
\begin{align}
\label{eqn:PX_QL1} \int P(\mathbf{X})\prod_{1\leq j \leq k \leq N}dx_{j,k}=Q(\vec{\lambda})\prod_{j=1}^Nd\lambda_j,
\end{align}
where $Q(\vec{\lambda})$ is the eigenvalue jpdf. We understand the integral in (\ref{eqn:PX_QL1}) to be over the $N(N-1)/2$ variables relating to the eigenvectors, leaving only the dependence on the eigenvalues $\lambda_j$. This will be made precise below.
\begin{remark}
Note that in this work we will consistently use $P$ to denote the probability distribution of a matrix (or the elements of the matrix), while $Q$ denotes the distribution of the eigenvalues of the matrix.
\end{remark}
\noindent To carry out the change of variables we use the decomposition (\ref{eqn:diagdecomp}) of real symmetric matrices to write
\begin{align}
\label{eqn:diag_decomp} \mathbf{X}=\mathbf{R LR}^T,
\end{align}
where $\mathbf{L}$ is diagonal, containing the $N$ eigenvalues of $\mathbf{X}$, and $\mathbf{R}$ is a real, orthogonal matrix whose columns are the corresponding normalised eigenvectors. The decomposition will be unique if \textit{i}) we order the eigenvalues in $\mathbf{L}$, and \textit{ii}) we specify that the first row of $\mathbf{R}$ is positive.
First, recall that the appropriate operation for products of differentials is the wedge product (although in (\ref{eqn:PX_QL1}) we used product notation since they are the same in this setting).
\begin{definition}
With $\mathbf{X}=[x_{ij}]_{i,j=1,...,N}$, let $d\mathbf{X}$ be the matrix of differentials of the elements of $\mathbf{X}$,
\begin{align}
\nonumber d\mathbf{X}=\left[\begin{array}{cccc}
dx_{11}&dx_{12}&\cdot\cdot\cdot&dx_{1N}\\
dx_{21}&dx_{22}&\cdot\cdot\cdot&dx_{2N}\\
\vdots&\vdots&\ddots&\vdots\\
dx_{N1}&dx_{N2}&\cdot\cdot\cdot&dx_{NN}
\end{array}\right].
\end{align}
\end{definition}
\begin{definition}
Let $(d\mathbf{X})$ be the wedge product of the independent elements of $d\mathbf{X}$.
\end{definition}
In the case of $\mathbf{X}$ an $N \times N$ real, symmetric matrix we have
\begin{align}
\nonumber (d\mathbf{X})=\bigwedge_{1\leq j \leq k \leq N}dx_{jk},
\end{align}
and so we rewrite (\ref{eqn:PX_QL1}) as
\begin{align}
\label{eqn:PX_QL2} \int P(\mathbf{X})(d\mathbf{X})=Q(\vec{\lambda})(d\vec{\lambda}),
\end{align}
where, as in (\ref{eqn:PX_QL1}), the integral is understood to be over the variables relating to the eigenvectors.
\begin{remark}
Although products of differentials are understood to be wedge products, we will commonly use the notation of (\ref{eqn:PX_QL1}) when no confusion is likely.
\end{remark}
\noindent To proceed with the enterprise of calculating $Q(\vec{\lambda})$, first recall from multivariable calculus that in order to change variables from $\{ x_{j}\}_{j=1,...,N}$ to $\{ y_{j}\}_{j=1,...,N}$ we use the identity
\begin{align}
\label{eqn:COV} \bigwedge_{j=1}^Ndx_j=\left|J\right|\bigwedge_{j=1}^Ndy_j,
\end{align}
where $J$ is known as the \textit{Jacobian}, and is defined as
\begin{align}
\nonumber J:=\det \left[\frac{\partial x_{j}}{\partial y_{k}}\right]_{j,k=1,...,N}.
\end{align}
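As a sanity check of (\ref{eqn:COV}), a Jacobian can be approximated by finite differences; the sketch below (an illustration with our own choice of map, not from the development above) verifies that the familiar polar change of variables $x=r\cos\theta$, $y=r\sin\theta$ has $|J|=r$:

```python
import math

# Central-difference Jacobian for the polar map (r, theta) -> (x, y)
def jacobian(r, theta, eps=1e-6):
    x = lambda r_, t_: r_ * math.cos(t_)
    y = lambda r_, t_: r_ * math.sin(t_)
    dxdr = (x(r + eps, theta) - x(r - eps, theta)) / (2 * eps)
    dxdt = (x(r, theta + eps) - x(r, theta - eps)) / (2 * eps)
    dydr = (y(r + eps, theta) - y(r - eps, theta)) / (2 * eps)
    dydt = (y(r, theta + eps) - y(r, theta - eps)) / (2 * eps)
    return dxdr * dydt - dxdt * dydr

# For polar coordinates the exact Jacobian is r
assert abs(jacobian(2.5, 0.7) - 2.5) < 1e-6
```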
Comparing (\ref{eqn:PX_QL1}) with (\ref{eqn:COV}), and keeping in mind that the products of differentials in the former are in fact wedge products, we see that calculating the Jacobian is a key part of our program, yet it is not the whole program. In (\ref{eqn:COV}) there are an equal number of differentials on both sides of the equation, indeed the Jacobian would not even be defined if the number of differentials were not equal. Yet, on the right hand side (RHS) of (\ref{eqn:PX_QL1}) we see that there are $N$ independent differentials, while on the left hand side (LHS) there are $N(N+1)/2$, which, the incisive reader will note, is often considerably more than $N$. In fact, the change of variables equation we are actually interested in (ignoring constants for the moment) is
\begin{align}
\label{eqn:GOE_cov1} e^{-(1/2)\mathrm{Tr}\mathbf{X}^2}(d\mathbf{X})=|J|e^{-\sum_{j=1}^N\lambda_j^2/2}(d\vec{\lambda})(d\vec{p}),
\end{align}
where $\vec{p}=\{ p_1,...,p_{N(N-1)/2}\}$ are variables associated with the eigenvectors in the decomposition (\ref{eqn:diag_decomp}) and
\begin{align}
\label{def:GOE_J} J=\mathrm{det}\left[\begin{array}{cccc}
\frac{\partial x_{11}}{\partial\lambda_1}&\frac{\partial x_{12}}{\partial\lambda_1}&\cdot\cdot\cdot&\frac{\partial x_{NN}}{\partial\lambda_1}\\
\vdots&\vdots&\ddots&\vdots\\
\frac{\partial x_{11}}{\partial\lambda_N}&\frac{\partial x_{12}}{\partial\lambda_N}&\cdot\cdot\cdot&\frac{\partial x_{NN}}{\partial\lambda_N}\\
\frac{\partial x_{11}}{\partial p_{1}}&\frac{\partial x_{12}}{\partial p_{1}}&\cdot\cdot\cdot&\frac{\partial x_{NN}}{\partial p_{1}}\\
\vdots&\vdots&\ddots&\vdots\\
\frac{\partial x_{11}}{\partial p_{N(N-1)/2}}&\frac{\partial x_{12}}{\partial p_{N(N-1)/2}}&\cdot\cdot\cdot&\frac{\partial x_{NN}}{\partial p_{N(N-1)/2}}
\end{array}\right].
\end{align}
Since the differentials $(d\vec{p})$ do not appear on the right hand side of (\ref{eqn:PX_QL1}), the variables $\{ p_1,...,p_{N(N-1)/2}\}$ are considered undesirables, and so we will integrate them out of the final expression, which will leave us with $Q(\vec{\lambda})$.
Before progressing, we establish a useful lemma, which we will repeatedly compel into service.
\begin{lemma} [\cite{muirhead1982} Theorem 2.1.6]
\label{lem:adma}
Let $\mathbf{Z}:=[z_{j,k}]_{j,k=1,...,N}=\mathbf{A}^T \mathbf{M} \mathbf{A}$ where\\$\mathbf{A}:=[a_{j,k}]_{j,k=1,...,N}$ is a fixed, real, non-singular matrix and $\mathbf{M}:=[m_{j,k}]_{j,k=1,...,N}$ is a real, symmetric matrix. Then
\begin{align}
\nonumber (d\mathbf{Z})&= \mathrm{det}(\mathbf{A})^{N+1}(d\mathbf{M}).
\end{align}
\end{lemma}
\textit{Proof}: Firstly, noting that $\mathbf{A}$ is fixed, we apply the product rule of differentiation to find that $d\mathbf{Z}:=d(\mathbf{A}^T\mathbf{M}\mathbf{A})=\mathbf{A}^Td\mathbf{M}\mathbf{A}$. Next we note that
\begin{align}
\label{eqn:p(a)} (d\mathbf{Z})=p(\mathbf{A})(d\mathbf{M}),
\end{align}
where $p(\mathbf{A})$ is some polynomial in the $a_{j,k}$. This is clear from (\ref{eqn:COV}) since each $dz_{j,k}$ is a polynomial in the variables $dm_{j,k}$ with coefficients from $\mathbf{A}$. Again using (\ref{eqn:p(a)}) we have that
\begin{align}
\nonumber ((\mathbf{A}_1\mathbf{A}_2)^Td\mathbf{M}\mathbf{A}_1\mathbf{A}_2)=p(\mathbf{A}_1\mathbf{A}_2)(d\mathbf{M}).
\end{align}
But we can also write
\begin{align}
\nonumber ((\mathbf{A}_1\mathbf{A}_2)^Td\mathbf{M}\mathbf{A}_1\mathbf{A}_2)&=p(\mathbf{A}_2)(\mathbf{A}_1^Td\mathbf{M}\mathbf{A}_1)\\
\nonumber &=p(\mathbf{A}_2)p(\mathbf{A}_1)(d\mathbf{M})
\end{align}
and so we have the factorisation property
\begin{align}
\label{eqn:det_fact} p(\mathbf{A}_1\mathbf{A}_2)=p(\mathbf{A}_1)p(\mathbf{A}_2).
\end{align}
From the working in \cite{MacD1943} we know that by considering the elementary rotation, stretching and shearing matrices, the only polynomial satisfying this property is
\begin{align}
\label{eqn:p=det} p(\mathbf{A})=(\det \mathbf{A})^m,
\end{align}
for some integer $m$. By taking $\mathbf{A}=\mathrm{diag}[a,1,...,1]$ we find that $(d\mathbf{Z})=a^{N+1}(d\mathbf{M})$, and so $m=N+1$. Substituting this into (\ref{eqn:p=det}) and then (\ref{eqn:p=det}) into (\ref{eqn:p(a)}) gives the result.
\hfill $\Box$
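Lemma \ref{lem:adma} can also be checked numerically: viewing $\mathbf{M}\mapsto\mathbf{A}^T\mathbf{M}\mathbf{A}$ as a linear map on the $N(N+1)/2$ independent entries of a symmetric matrix, the determinant of that map should equal $\det(\mathbf{A})^{N+1}$. The following sketch (illustrative only, not part of the proof) does this for $N=3$:

```python
import math
import random

def det(M):
    # Laplace expansion; adequate for the small matrices used here
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

random.seed(1)
N = 3
A = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]

def congruence(S):
    # Computes A^T S A
    ATS = [[sum(A[i][r] * S[i][c] for i in range(N)) for c in range(N)] for r in range(N)]
    return [[sum(ATS[r][i] * A[i][c] for i in range(N)) for c in range(N)] for r in range(N)]

# Independent coordinates of a symmetric matrix: (j, k) with j <= k
pairs = [(j, k) for j in range(N) for k in range(j, N)]

# Matrix of the linear map M -> A^T M A in these coordinates
J = []
for (a, b) in pairs:
    S = [[0.0] * N for _ in range(N)]
    S[a][b] = S[b][a] = 1.0
    Z = congruence(S)
    J.append([Z[j][k] for (j, k) in pairs])

# The determinant of the map equals det(A)^{N+1}
assert abs(det(J) - det(A) ** (N + 1)) < 1e-9
```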
We may now compute the Jacobian corresponding to the change of variables (\ref{eqn:PX_QL2}).
\begin{proposition}
\label{prop:GOE_J}
For $\mathbf{X}$ an $N \times N$ real, symmetric matrix we have
\begin{align}
\label{eqn:dX_jacob} (d\mathbf{X})=\prod_{1\leq j < k \leq N}|\lambda_k-\lambda_j|\bigwedge_{j=1}^N d\lambda_j \; (\mathbf{R}^Td\mathbf{R}),
\end{align}
where $\lambda_1,...,\lambda_N$ are the eigenvalues of $\mathbf{X}$, and $\mathbf{R}$ is real orthogonal.
\end{proposition}
\textit{Proof}: Applying the product rule of differentiation to (\ref{eqn:diag_decomp}) we have
\begin{align}
\nonumber d\mathbf{X}=d\mathbf{R} \mathbf{L} \mathbf{R}^T + \mathbf{R} d\mathbf{L} \mathbf{R}^T + \mathbf{R} \mathbf{L} d\mathbf{R}^T.
\end{align}
It will prove convenient to premultiply by $\mathbf{R}^T$ and postmultiply by $\mathbf{R}$, giving
\begin{align}
\nonumber \mathbf{R}^T d\mathbf{X} \mathbf{R} &=\mathbf{R}^T d\mathbf{R} \mathbf{L} + \mathbf{L} d\mathbf{R}^T \mathbf{R}+d\mathbf{L}\\
\label{eqn:RTdXR} &=\mathbf{R}^T d\mathbf{R} \mathbf{L}-\mathbf{L} \mathbf{R}^Td\mathbf{R}+d\mathbf{L},
\end{align}
where we have used the fact that $\mathbf{R}\mathbf{R}^T=\mathbf{1}$ and the consequence $d\mathbf{R}^T\mathbf{R}=-\mathbf{R}^Td\mathbf{R}$. The convenience comes from the fact that now the first two terms in (\ref{eqn:RTdXR}) are products of the same matrices. Although it appears this convenience may come at the expense of complicating the LHS of (\ref{eqn:RTdXR}), we use Lemma \ref{lem:adma} to see that $(\mathbf{R}^T d\mathbf{X} \mathbf{R})=(\det \mathbf{R})^{N+1}(d\mathbf{X})$. Then, recalling that $\det \mathbf{R} =\pm1$ and that only the magnitude of the Jacobian is retained, we see that after taking wedge products we have escaped penalty.
By explicit multiplication we have
\begin{eqnarray}
\nonumber &&\mathbf{R}^T d\mathbf{R} \mathbf{L} -\mathbf{L} \mathbf{R}^Td\mathbf{R} +d\mathbf{L} = \\
\nonumber &&\left[\begin{array}{cccc}
d\lambda_1 & (\lambda_2-\lambda_1)\vec{r_1}^T\cdot d\vec{r}_2 &\cdot\cdot\cdot&(\lambda_N-\lambda_1)\vec{r_1}^T\cdot d\vec{r}_N\\
(\lambda_2-\lambda_1)\vec{r_1}^T\cdot d\vec{r}_2&d\lambda_2&\cdot\cdot\cdot&(\lambda_N-\lambda_2)\vec{r_2}^T\cdot d\vec{r}_N\\
\vdots&\vdots&\ddots&\vdots\\
(\lambda_N-\lambda_1)\vec{r_1}^T\cdot d\vec{r}_N&(\lambda_N-\lambda_2)\vec{r_2}^T\cdot d\vec{r}_N&\cdot\cdot\cdot&d\lambda_N
\end{array}\right],
\end{eqnarray}
where we have used the equalities $\vec{r}_j^T\cdot d\vec{r}_i = d\vec{r}_i^T\cdot \vec{r}_j = -\vec{r}_i^T\cdot d\vec{r}_j$ (the first equality follows since scalars are invariant under transposition, and the second is another consequence of $\mathbf{R} \mathbf{R}^T=\mathbf{1}$).
Taking wedge products of both sides of (\ref{eqn:RTdXR}) gives the result. Note that this tells us the natural choice for the variables $p_j$ in (\ref{def:GOE_J}) is such that each $dp_j$ is one of $\{ \vec{r}_i \cdot d\vec{r}_j\}_{i<j}$.
\hfill $\Box$
The key structural component of the eigenvalue jpdf is apparent from Proposition \ref{prop:GOE_J} --- the product of differences between eigenvalues. It is a ubiquitous occurrence in random matrix theory and is one of the unifying themes of the study.
In the previous section we mentioned that the reason for the appellation \textit{orthogonal} to describe the ensemble of real, symmetric Gaussian matrices is that the normalised quantity $P(\mathbf{X}) (d\mathbf{X})$ is unchanged by orthogonal conjugation; we can now see why this is true. First note from (\ref{eqn:GOE_el_jpdf}) that $P(\mathbf{X})$ is invariant under any conjugation $\mathbf{X}\to \mathbf{M}^{-1}\mathbf{X}\mathbf{M}$ because of the cyclic property of the trace operator. Second, we examine the measure $(d\mathbf{X})$ from (\ref{eqn:dX_jacob}). The differential $N(N-1)/2$-form $(\mathbf{R}^T d\mathbf{R})$ is invariant under the left operation $\mathbf{O}\mathbf{R}$, where $\mathbf{O}$ is a fixed, orthogonal $N\times N$ matrix, because
\begin{align}
\label{eqn:HMLinv} (\mathbf{R}^T d\mathbf{R}) \to \left( (\mathbf{O} \mathbf{R})^Td(\mathbf{O}\mathbf{R}) \right)= \left(\mathbf{R}^T \mathbf{O}^T d(\mathbf{O})\mathbf{R} +\mathbf{R}^T \mathbf{O}^T\mathbf{O} d(\mathbf{R})\right) = (\mathbf{R}^T d\mathbf{R}),
\end{align}
where the first term vanishes since $\mathbf{O}$ is fixed. We also see that $(\mathbf{R}^T d\mathbf{R})$ is invariant under right operation $\mathbf{R}\mathbf{O}^T$ using similar reasoning,
\begin{align}
\label{eqn:HMRinv} (\mathbf{R}^T d\mathbf{R}) \to \left(\mathbf{O} \mathbf{R}^T d(\mathbf{R}) \mathbf{O}^T\right) = (\mathbf{R}^T d\mathbf{R}),
\end{align}
where the equality follows from Lemma \ref{lem:adma} with $d\mathbf{M}=\mathbf{R}^T d\mathbf{R}$ and $\mathbf{A}=\mathbf{O}^T$, since $|\det \mathbf{O}| =1$. Note that (\ref{eqn:HMLinv}) and (\ref{eqn:HMRinv}) also imply that if $(\mathbf{R}^Td\mathbf{R})$ is invariant under the operations $\mathbf{C}\mathbf{R}$ and $\mathbf{R}\mathbf{C}$ then $\mathbf{C}^T\mathbf{C}=\mathbf{1}$ and $|\det \mathbf{C}|=1$; in other words, $(\mathbf{R}^T d\mathbf{R})$ is only invariant under orthogonal transformations. So the GOE is well-defined by orthogonal invariance.
A measure with such invariance properties is called a left and/or right \textit{invariant measure}, or a (left/right) \textit{Haar measure}. (An excellent technical treatment of Haar measure is contained in \cite{nachbin1965}, while \cite{muirhead1982} is also very informative, specifically regarding the orthogonal group.)
\begin{definition}
\label{def:Haar}
Let $G$ be a locally compact topological group and let $\mu$ be a measure on the Borel subsets of $G$. If $\mu(g*S)=\mu(S)$ for every $g\in G$ and every Borel set $S\subseteq G$ then we call $\mu$ a \textit{left invariant measure} or \textit{left Haar measure}.
Similarly, if $\mu(S)=\mu(S*g)$ then $\mu$ is called a \textit{right invariant measure} or \textit{right Haar measure}.
\end{definition}
It can be shown that left and right Haar measures are each unique (up to a constant multiple), and in the case of the orthogonal group the left and right Haar measures coincide (since $O(N)$ is compact). Combining this with the statements above we have that
\begin{align}
\label{eqn:Haar} (\mathbf{R}^T d\mathbf{R})
\end{align}
is the unique Haar (or invariant) measure on $O(N)$, and we say the \textit{volume} of $O(N)$ is given by
\begin{align}
\label{def:volON}
\mathrm{vol} (O(N)):= \int_{O(N)} (\mathbf{R}^T d\mathbf{R}).
\end{align}
In order to complete the calculation of the eigenvalue jpdf we must calculate this volume, under the restriction that the first row of $\mathbf{R}$ is positive. This amounts to integrating out the variables corresponding to the eigenvectors, which was the task implied by (\ref{eqn:PX_QL1}) and (\ref{eqn:PX_QL2}).
\begin{proposition}[\cite{muirhead1982} Corollary 2.1.16]
\label{prop:int_(RdR)}
Let $\mathbf{R}$ be an $N \times N$ real, orthogonal matrix, with the first row of $\mathbf{R}$ restricted to be positive, then we have
\begin{align}
\label{eqn:RTdR_integ} \int (\mathbf{R}^T d\mathbf{R}) = \frac{\pi^{N(N+1)/4}}{\prod_{j=1}^N\Gamma(j/2)}.
\end{align}
\end{proposition}
\textit{Proof}: Let $\mathbf{Z}=[z_{j,k}]_{j,k=1,...,N}$ with the elements distributed as standard Gaussians
\begin{align}
\nonumber \frac{1}{\sqrt{2\pi}}e^{-z_{j,k}^2/2}.
\end{align}
(Note that $\mathbf{Z}$ differs from $\mathbf{X}$ as defined in (\ref{eqn:GOE_el_dist}) since $\mathbf{Z}$ is not required to be symmetric.) The jpdf of the elements of $\mathbf{Z}$ is therefore
\begin{align}
\nonumber P(\mathbf{Z})=(2\pi)^{-N^2/2}e^{-\sum_{j,k=1}^Nz_{j,k}^2/2}=(2\pi)^{-N^2/2}e^{-\mathrm{Tr}(\mathbf{Z}^T\mathbf{Z})/2},
\end{align}
and, because the Gaussian distributions are normalised,
\begin{align}
\label{eqn:int_PZ=1} \int P(\mathbf{Z}) (d\mathbf{Z})=1,
\end{align}
where the domain of integration is $\mathbb{R}^{N^2}$.
Now write $\mathbf{Z}$ as $\mathbf{Z}=\mathbf{H}\mathbf{T}$ where $\mathbf{H}$ and $\mathbf{T}$ are $N\times N$ matrices, $\mathbf{H}$ is orthogonal and $\mathbf{T}$ is upper triangular; this is known as a $\mathbf{QR}$ decomposition (here $\mathbf{Q}=\mathbf{H}$ and $\mathbf{T}=\mathbf{R}$) and can be accomplished by the Gram-Schmidt algorithm. To make the decomposition unique, the diagonal elements of $\mathbf{T}$ are specified to be positive. With this decomposition we have
\begin{align}
\nonumber \mathrm{Tr} (\mathbf{Z}^T\mathbf{Z})=\mathrm{Tr}(\mathbf{T}^T\mathbf{T})=\sum_{1\leq j\leq k\leq N}t_{j,k}^2.
\end{align}
We also have
\begin{align}
\nonumber (d\mathbf{Z})=\prod_{j=1}^Nt_{j,j}^{N-j}(d\mathbf{T})(\mathbf{H}^T d\mathbf{H}),
\end{align}
which we can establish using the method of Proposition \ref{prop:GOE_J}. We see that (\ref{eqn:int_PZ=1}) becomes
\begin{align}
\label{eqn:hTdH1} \int \prod_{j=1}^Nt_{j,j}^{N-j}\prod_{1\leq j \leq k \leq N}e^{-t_{j,k}^2/2}dt_{j,k}\int (\mathbf{H}^T d\mathbf{H})=(2\pi)^{N^2/2},
\end{align}
where the integrals over $t_{j,k}$ can be evaluated as follows
\begin{align}
\nonumber &\int \prod_{j=1}^Nt_{j,j}^{N-j}\prod_{1\leq j \leq k \leq N}e^{-t_{j,k}^2/2}dt_{j,k}\\
\nonumber &=\prod_{1\leq j < k \leq N}\int_{-\infty}^{\infty}e^{-t_{j,k}^2/2}dt_{j,k}\prod_{j=1}^N\int_{0}^{\infty}e^{-t_{j,j}^2/2}t^{N-j}_{j,j}dt_{j,j}\\
\nonumber &=\prod_{1\leq j < k \leq N}\sqrt{2\pi}\prod_{j=1}^N2^{j/2-1}\Gamma(j/2)\\
\label{eqn:hTdH2} &=2^{N^2/2-N}\pi^{N(N-1)/4}\prod_{j=1}^N\Gamma(j/2).
\end{align}
Substituting (\ref{eqn:hTdH2}) into (\ref{eqn:hTdH1}) we have
\begin{align}
\int (\mathbf{H}^T d\mathbf{H})=\frac{2^N\pi^{N(N+1)/4}}{\prod_{j=1}^N\Gamma(j/2)}.
\end{align}
Since we specified the first row of $\mathbf{R}$ to be positive, we divide through by $2^N$ (the number of possible signs in the first row) and we have the result.
\hfill $\Box$
\begin{remark}
Note that in the proof of Proposition \ref{prop:int_(RdR)} we found that
\begin{align}
\label{def:volO} \mathrm{vol} (O(N)) = \frac{2^N\pi^{N(N+1)/4}} {\prod_{j=1}^N\Gamma(j/2)}.
\end{align}
This will be of use in Chapter \ref{sec:truncs}.
\end{remark}
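The one-dimensional integrals evaluated in (\ref{eqn:hTdH2}) are instances of the identity $\int_0^\infty e^{-t^2/2}t^m\,dt = 2^{(m-1)/2}\Gamma((m+1)/2)$, applied with $m=N-j$ and the product then reindexed; a quick numerical confirmation of this identity (illustrative only):

```python
import math

def half_gaussian_moment(m, upper=12.0, steps=120000):
    # Midpoint rule for the integral of e^{-t^2/2} t^m over (0, infinity);
    # the tail beyond `upper` is negligible for the small m used here
    h = upper / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        total += math.exp(-t * t / 2) * t ** m * h
    return total

for m in range(5):
    exact = 2 ** ((m - 1) / 2) * math.gamma((m + 1) / 2)
    assert abs(half_gaussian_moment(m) - exact) < 1e-6
```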
Combining Propositions \ref{prop:GOE_J} and \ref{prop:int_(RdR)} with (\ref{eqn:GOE_cov1}) we have the eigenvalue jpdf, which was first identified with the GOE (up to normalisation) in \cite{Wigner1957a}.
\begin{proposition}
\label{prop:GOE_eval_jpdf}
For $\mathbf{X}$ an $N \times N$ real, symmetric matrix with iid Gaussian entries, the jpdf for the set $\vec{\lambda}=\{ \lambda_1,...,\lambda_N\}$, the eigenvalues of $\mathbf{X}$, is
\begin{align}
\label{eqn:GOE_eval_jpdf} Q(\vec{\lambda})=2^{-3N/2}\prod_{j=1}^N\frac{e^{-\lambda_j^2/2}}{\Gamma(j/2+1)}\prod_{1\leq j < k \leq N}|\lambda_k-\lambda_j|.
\end{align}
\end{proposition}
\textit{Proof}: Substituting (\ref{eqn:dX_jacob}) into (\ref{eqn:PX_QL2}) using (\ref{eqn:RTdR_integ}) we almost have the result. The only extra concern is the constraints \textit{i} and \textit{ii} as discussed below (\ref{eqn:diag_decomp}). We have already accounted for the specification of the first row as positive at the end of the proof of Proposition \ref{prop:int_(RdR)}, while the relaxation of the ordering on the eigenvalues introduces a factor of $(N!)^{-1}$.
\hfill $\Box$
\begin{remark}
Note that we have implicitly ignored the matrices with repeated eigenvalues; we can do this since they form a set of measure zero inside the set of all GOE matrices. For the same reason we also ignore singular matrices and so we may take inverses with impunity.
\end{remark}
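As a check on the normalisation in (\ref{eqn:GOE_eval_jpdf}), for $N=2$ the jpdf can be integrated numerically on a grid; the sketch below (illustrative only; truncating the domain to $[-8,8]^2$ introduces only negligible error) confirms that it integrates to one:

```python
import math

# Q(l1, l2) for N = 2: 2^{-3} e^{-(l1^2 + l2^2)/2} |l2 - l1| / (Gamma(3/2) Gamma(2))
c = 2 ** (-3) / (math.gamma(3 / 2) * math.gamma(2))

L, steps = 8.0, 400
h = 2 * L / steps
total = 0.0
for i in range(steps):
    l1 = -L + (i + 0.5) * h
    for j in range(steps):
        l2 = -L + (j + 0.5) * h
        total += c * math.exp(-(l1 * l1 + l2 * l2) / 2) * abs(l2 - l1) * h * h

# The jpdf should integrate to one
assert abs(total - 1.0) < 1e-3
```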
\subsection{Step III: Pfaffian form of generalised partition function}
\label{sec:Step3_GOE}
As mentioned after Proposition \ref{prop:GOE_J}, the main structural feature of (\ref{eqn:GOE_eval_jpdf}) is the product of differences (\ref{eqn:evalbeta}) with $\beta=1$, and this is one of the characteristic attributes of eigenvalue distributions where the entries of the matrix are real. (Recall from the Introduction that when the entries are complex then we find the same product raised to the power $\beta>1$.) A product of this form naturally leads to a determinantal expression via the identity
\begin{eqnarray}
\label{eqn:vandermonde}
\prod_{1\leq j<k\leq n}(x_k-x_j)=\mathrm{det}
\left[\begin{array}{ccccc}
1 & x_1 & x_1^2 & \cdot\cdot\cdot & x_1^{n-1}\\
1 & x_2 & x_2^2 & \cdot\cdot\cdot & x_2^{n-1}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
1 & x_n & x_n^2 & \cdot\cdot\cdot & x_n^{n-1}
\end{array}\right],
\end{eqnarray}
where the determinant in (\ref{eqn:vandermonde}) is referred to as a \textit{Vandermonde determinant}. We can modify the identity to the form
\begin{eqnarray}
\label{eqn:vandermonde_polys}
\prod_{1\leq j<k\leq n}(x_k-x_j)=\mathrm{det}
\left[\begin{array}{ccccc}
p_0(x_1) & p_1(x_1) & p_2(x_1) & \cdot\cdot\cdot & p_{n-1}(x_1)\\
p_0(x_2) & p_1(x_2) & p_2(x_2) & \cdot\cdot\cdot & p_{n-1}(x_2)\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
p_0(x_n) & p_1(x_n) & p_2(x_n) & \cdot\cdot\cdot & p_{n-1}(x_n)\\
\end{array}\right],
\end{eqnarray}
where $p_m(x)$ is a monic polynomial of degree $m$, by adding to each column appropriate multiples of the other columns. It will turn out that (\ref{eqn:vandermonde_polys}) is a more useful form for our desideratum.
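Both (\ref{eqn:vandermonde}) and the monic-polynomial variant (\ref{eqn:vandermonde_polys}) are easily verified for small $n$; the sketch below (illustrative, with an arbitrary monic choice of $p_m$) uses exact rational arithmetic:

```python
import math
from fractions import Fraction

def det(M):
    # Laplace expansion; exact for Fraction entries
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

x = [Fraction(v) for v in (2, 5, -1, 3)]
n = len(x)
target = math.prod(x[k] - x[j] for j in range(n) for k in range(j + 1, n))

# Vandermonde matrix [x_j^k]
V = [[xi ** k for k in range(n)] for xi in x]
assert det(V) == target

# Replacing the monomials by arbitrary monic polynomials p_m leaves the
# determinant unchanged; here p_m(t) = t^m + m t^{m-1} as an example
def p(m, t):
    return t ** m + (m * t ** (m - 1) if m >= 1 else 0)

Vp = [[p(k, xi) for k in range(n)] for xi in x]
assert det(Vp) == target
```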
This is all very gratifying, and will be crucial to the story that follows, however, in the case of GOE (and the other $\beta=1$ ensembles), we can look past the determinant and evince a deeper Pfaffian (or quaternion determinant) structure in a quantity called the \textit{generalised partition function}, from which we will calculate the correlation functions.
\begin{remark}
This Pfaffian structure also shows itself in the $\beta=4$ cases, although we shall not study them in this work. On the other hand, $\beta=2$ is traditionally analysed at the level of determinants and misses the Pfaffian substructure. While a determinant can always be rewritten trivially as a Pfaffian (see (\ref{eqn:chequer1}) and the surrounding discussion), in \cite{Sinclair2010} it is shown that there is a Pfaffian structure for $\beta=2$ which does not appear to be such a trivial rewriting, and in \cite{Kie2011} an explicit example of this has been found in the setting of chiral matrix ensembles (see also \cite{ForSinc2010}).
\end{remark}
\subsubsection{Quaternion determinants and Pfaffians}
\label{sec:qdets_pfs}
The \textit{quaternion determinant} was used by Dyson \cite{dyson1970} as a convenient notation for writing the eigenvalue correlation functions for the $\beta=1$ and $4$ cases of the circular ensemble. We will find that our correlations here similarly contain a quaternion structure and so we review some of the theory (\cite{mehta2004} contains a similar discussion). A good historical and technical overview is provided in \cite{dyson1972}.
A quaternion is analogous to a complex number, except it has four basis vectors instead of two. Typically they are written in the form $q=q_0+iq_1+jq_2+kq_3$, with the relations $i^2=j^2=k^2=ijk=-1$, and the $q_l$ are in general complex. Alternatively, quaternions can be represented as $2\times 2$ matrices $q=q_0\mathbf{1}+q_1\mathbf{e}_1+q_2\mathbf{e}_2+q_3\mathbf{e}_3$ using the Pauli spin matrices $\sigma_x,\sigma_y,\sigma_z$
\begin{align}
\nonumber &\mathbf{1}:=\left[\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right], &&\mathbf{e}_1:=i\sigma_z=\left[\begin{array}{cc}
i & 0\\
0 & -i
\end{array}\right],\\
\nonumber &\mathbf{e}_2:=i\sigma_y=\left[\begin{array}{cc}
0 & 1\\
-1 & 0
\end{array}\right], &&\mathbf{e}_3:=i\sigma_x=\left[\begin{array}{cc}
0 & i\\
i & 0
\end{array}\right].
\end{align}
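The defining relations $i^2=j^2=k^2=ijk=-1$ can be confirmed directly in this matrix representation; a brief sketch (illustrative only):

```python
# 2x2 complex matrix product
def mmul(P, Q):
    return [[sum(P[r][i] * Q[i][c] for i in range(2)) for c in range(2)]
            for r in range(2)]

e1 = [[1j, 0], [0, -1j]]   # represents i
e2 = [[0, 1], [-1, 0]]     # represents j
e3 = [[0, 1j], [1j, 0]]    # represents k
minus_one = [[-1, 0], [0, -1]]

for e in (e1, e2, e3):
    assert mmul(e, e) == minus_one            # i^2 = j^2 = k^2 = -1
assert mmul(mmul(e1, e2), e3) == minus_one    # ijk = -1
assert mmul(e1, e2) == e3                     # ij = k
```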
For $q_0=a+ib,q_1=c+id,q_2=e+if,q_3=g+ih$ we have
\begin{align}
\label{eqn:mat_quat} q=\left[ \begin{array}{cc}
w & x\\
y & z
\end{array}\right],
\end{align}
where $w=(a-d)+i(b+c),x=(e-h)+i(f+g),y=-(e+h)+i(g-f),z=(a+d)+i(b-c)$. The analogue of complex conjugation for quaternions we denote $\bar{q}=q_0-iq_1-jq_2-kq_3$, or in the matrix representation
\begin{align}
\nonumber \bar{q}=\left[ \begin{array}{cc}
z & -x\\
-y & w
\end{array}\right].
\end{align}
With the representation (\ref{eqn:mat_quat}) an $N \times N$ matrix with quaternion elements $\mathbf{Q}=[q_{j,k}]$ can be viewed as a $2N \times 2N$ matrix with complex elements.
In the case that $q_0,q_1,q_2,q_3\in\mathbb{R}$ we say that $q$ is a \textit{real quaternion} and from (\ref{eqn:mat_quat}), with $\alpha=a+ic$ and $\beta=e+ig$, we have
\begin{align}
\label{def:real_quats} q=\left[ \begin{array}{cc}
\alpha & \beta\\
-\bar{\beta} & \bar{\alpha}
\end{array}\right],
\end{align}
with conjugate
\begin{align}
\nonumber \bar{q}=\left[ \begin{array}{cc}
\bar{\alpha} & -\beta\\
\bar{\beta} & \alpha
\end{array}\right].
\end{align}
A matrix $\mathbf{Q}=[q_{j,k}]$ is said to be \textit{quaternion real} if all the quaternion elements $q_{j,k}$ are real quaternions.
We denote by $\mathbf{Q}^D$ the matrix $[\bar{q}_{k,j}]$, and we call it the \textit{dual} of $\mathbf{Q}$. If $\mathbf{Q}=\mathbf{Q}^D$ then $\mathbf{Q}$ is said to be \textit{self-dual}.
\begin{definition} [Quaternion determinant]
Let $\mathbf{Q}=[q_{j,k}]$ be an $N \times N$ self-dual matrix of $2\times 2$ real quaternions as in (\ref{def:real_quats}). The \textit{quaternion determinant} is defined by
\begin{equation}
\label{def:qdet} \mathrm{qdet}[\mathbf{Q}]=\sum_{P\in S_N}(-1)^{N-l}\prod_1^l(q_{ab}q_{bc}\cdot\cdot\cdot q_{sa})^{(0)}.
\end{equation}
The superscript $(0)$ denotes the operation $\frac{1}{2}\mathrm{Tr}$ applied to the bracketed quantity, and the sum is over permutations $P$ of $(1,...,N)$, where $P$ consists of $l$ disjoint cycles of the form $(a\rightarrow b \rightarrow c \rightarrow \cdot\cdot\cdot \rightarrow s \rightarrow a)$.
\end{definition}
\begin{remark}
If the $q_{j,k}$ are scalar multiples of the identity, say $q_{j,k}=c_{j,k}\mathbf{1}_2$, then $\mathrm{qdet}[\mathbf{Q}]=\mathrm{det}[\mathbf{C}]$ where $\mathbf{C}=[c_{j,k}]$.
\end{remark}
A structure that is closely related to the quaternion determinant is the Pfaffian.
\begin{definition} [Pfaffian]
\label{def:pfaff}
Let $\mathbf{X}=[x_{ij}]_{i,j=1,...,2N}$, where $x_{ji}=-x_{ij}$, so that $\mathbf{X}$ is an anti-symmetric matrix of even size. Then the \textit{Pfaffian} of $\mathbf{X}$ is defined by
\begin{align}
\nonumber \mathrm{Pf} [\mathbf{X}]&=\sum^*_{P(2l)>P(2l-1)}\varepsilon (P) x_{P(1),P(2)}x_{P(3),P(4)}\cdot\cdot\cdot x_{P(2N-1),P(2N)}\\
\label{def:Pf} &=\frac{1}{2^NN!}\sum_{P\in S_{2N}}\varepsilon (P) x_{P(1),P(2)}x_{P(3),P(4)}\cdot\cdot\cdot x_{P(2N-1),P(2N)},
\end{align}
where $S_{2N}$ is the group of permutations of $2N$ letters and $\varepsilon (P)$ is the sign of the permutation $P$. The * above the first sum indicates that the sum is over distinct terms only (that is, all permutations of the pairs of indices are regarded as identical).
\end{definition}
\begin{remark}
\label{rem:Pf_def}
In the second equality of (\ref{def:Pf}) the factors of $2$ are associated with the restriction $P(2l)>P(2l-1)$ while the factorial is associated with counting only distinct terms ($N!$ is the number of ways of arranging the $N$ pairs of indices $\{ P(2l-1),P(2l)\}$).
\end{remark}
\begin{remark}
At the risk of confusion, we shall use the terms \textit{skew-symmetric} and \textit{anti-symmetric} synonymously.
\end{remark}
In his 1815 publication, in an effort to solve certain classes of differential equations, Pfaff dealt with a structure that became what we know as Pfaffians. Determinants were not in common use at the time, and so the two were not seen as being of a similar form. The treatment was formalised by Jacobi, who recognised the structure as an analogue of a determinant; indeed, it was he who proved that skew-symmetric determinants of odd size are zero. As for nomenclature, Jacobi referred to Pfaff's Method (`\textit{Pfaffsche Methode}') in 1827, but when Cayley took up the discussion in 1847 he referred to Jacobi (`\textit{les fonctions de M. Jacobi}'), before changing the eponym to Pfaff in a paper of 1854. The following relationship between a Pfaffian and a determinant of a skew-symmetric matrix $\mathbf{A}$,
\begin{align}
\label{eqn:pf_det} (\mathrm{Pf} \mathbf{A})^2 = \det \mathbf{A},
\end{align}
is also a classical result.
\begin{remark}
The historical information comes from Thomas Muir in \cite{Muir1906} and \cite{Muir1911}. He provides interesting contextualising commentary on many of the seminal papers in the theory of determinants.
\end{remark}
We can also trivially rewrite any determinant as a Pfaffian of a chequerboard matrix
\begin{align}
\label{eqn:chequer1} \det \mathbf{A} =\mathrm{Pf} \; \tilde{\mathbf{A}}_1,
\end{align}
where the $(2i-1)$-th row of $\tilde{\mathbf{A}}_1$ is $[0, a_{i,1},0,a_{i,2}, ..., 0, a_{i,N}]$, with the remaining elements being determined by the required anti-symmetry. For example, with $N=3$,
\begin{align}
\label{eqn:N3chequer} \tilde{\mathbf{A}}_1=\left[ \begin{array}{cccccc}
0 & a_{1,1} & 0 & a_{1,2} & 0 & a_{1,3}\\
-a_{1,1} & 0 & -a_{2,1} & 0 & -a_{3,1} & 0\\
0 & a_{2,1} & 0 & a_{2,2} & 0 & a_{2,3}\\
-a_{1,2} & 0 & -a_{2,2} & 0 & -a_{3,2} & 0\\
0 & a_{3,1} & 0 & a_{3,2} & 0 & a_{3,3}\\
-a_{1,3} & 0 & -a_{2,3} & 0 & -a_{3,3} & 0
\end{array}
\right].
\end{align}
A more compact description of this correspondence specifies the determinant matrix in terms of the chequerboard matrix
\begin{align}
\label{eqn:chequer} \mathrm{Pf} \tilde{\mathbf{A}}_1=\det [\alpha_{2i-1,2j}]_{i,j=1,...,N},
\end{align}
where $\tilde{\mathbf{A}}_1=[\alpha_{i,j}]_{i,j=1,...,2N}$. We will have use of (\ref{eqn:chequer}) in Chapter \ref{sec:pnk}.
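Since $\mathrm{Pf}\,\tilde{\mathbf{A}}_1=\det\mathbf{A}$ implies $\det\tilde{\mathbf{A}}_1=(\det\mathbf{A})^2$, the chequerboard construction admits a sign-blind consistency check using determinants alone. The following Python sketch (an illustration, not part of the text) builds $\tilde{\mathbf{A}}_1$ for $N=3$ exactly as in (\ref{eqn:N3chequer}):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3
A = rng.standard_normal((N, N))

# chequerboard matrix A~_1: the (2i-1)-th rows carry the entries of A,
# the remaining entries are fixed by anti-symmetry
T = np.zeros((2 * N, 2 * N))
T[::2, 1::2] = A        # row 2i-1 (1-based) is [0, a_{i,1}, 0, a_{i,2}, ...]
T[1::2, ::2] = -A.T     # anti-symmetry fills the even-numbered rows
assert np.allclose(T, -T.T)

# Pf(A~_1) = det(A) implies det(A~_1) = (det A)^2 -- a sign-blind check
assert np.isclose(np.linalg.det(T), np.linalg.det(A) ** 2)
```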
Alternatively, we may rewrite the determinant as a Pfaffian with blocks of zeros on the diagonal
\begin{align}
\nonumber \det \mathbf{A} = \mathrm{Pf} \left[\begin{array}{cc}
\mathbf{0}_{N\times N} & \tilde{\mathbf{A}}_2\\
-\tilde{\mathbf{A}}_2^T & \mathbf{0}_{N\times N}
\end{array}\right],
\end{align}
where $\tilde{\mathbf{A}}_2$ is given by elementary transformations of $\mathbf{A}$ as so
\begin{align}
\nonumber \tilde{\mathbf{A}}_2=\left[\begin{array}{c}
a_{2i-1,j}\\
-a_{2i,j}
\end{array} \right]_{i=1,...,N/2 \atop j=1,...,N}.
\end{align}
From these facts we note that Pfaffian/quaternion determinantal processes are equivalent to determinantal processes, albeit with special structure.
Usefully, Pfaffians can be calculated using a form of Laplace expansion. To calculate a determinant, recall that we can expand along any row or column. For example, expand a matrix $A=[a_{ij}]_{i,j=1,...n}$ along the first row:
\begin{equation}
\nonumber \mathrm{det}[A]=a_{1,1}\mathrm{det}[A]^{1,1}-a_{1,2}\mathrm{det}[A]^{1,2} + \cdot\cdot\cdot + (-1)^{n+1} a_{1,n}\mathrm{det}[A]^{1,n},
\end{equation}
where $\mathrm{det}[A]^{i,j}$ means the determinant of the matrix left over after deleting the $i$th row and $j$th column.
The analogous expansion for a Pfaffian involves deleting \textit{two} rows and \textit{two} columns each time. For example, expanding a skew-symmetric matrix $B=[b_{ij}]_{i,j=1,...n}$ ($n$ even) along the first row:
\begin{equation}
\nonumber \mathrm{Pf}[B]=b_{1,2}\mathrm{Pf}[B]^{1,2}-b_{1,3}\mathrm{Pf}[B]^{1,3} + \cdot\cdot\cdot + (-1)^{n} b_{1,n}\mathrm{Pf}[B]^{1,n},
\end{equation}
where $\mathrm{Pf}[B]^{i,j}$ means the Pfaffian of the matrix left after deleting the $i$th and $j$th rows and the $i$th and $j$th columns. A full Laplace expansion involves $n!$ terms for a determinant, and $(n-1)!!=(n-1)(n-3)\cdot\cdot\cdot 1$ terms in the case of a Pfaffian.
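The expansion translates directly into a recursive algorithm. The following Python sketch (our own function name; one standard sign convention, not part of the text) implements it and checks $(\mathrm{Pf}\,\mathbf{B})^2=\det\mathbf{B}$ from (\ref{eqn:pf_det}) on a random anti-symmetric matrix:

```python
import numpy as np

def pf(B):
    """Pfaffian of an even-sized anti-symmetric matrix via Laplace-style
    expansion along the first row:
    Pf(B) = sum_{j=2..n} (-1)^j b_{1,j} Pf(B^{1,j})   (1-based indices)."""
    n = B.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):              # 0-based column j is 1-based column j+1
        keep = [r for r in range(n) if r not in (0, j)]
        minor = B[np.ix_(keep, keep)]  # delete rows/columns 1 and j+1
        total += (-1) ** (j + 1) * B[0, j] * pf(minor)
    return total

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 6))
B = X - X.T                            # random anti-symmetric matrix
assert np.isclose(pf(B) ** 2, np.linalg.det(B))   # (Pf B)^2 = det B
```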
We can also identify quaternion determinant and Pfaffian analogues of a diagonal matrix. A determinant is most easily calculated if its matrix is diagonal, since then
\begin{align}
\nonumber \det\Big( \mathrm{diag} [a_1,...,a_N] \Big) =\prod_{j=1}^N a_j.
\end{align}
From (\ref{def:qdet}) we see that the analogous result for the quaternion determinant is
\begin{align}
\label{eqn:qdet_diag} \mathrm{qdet}\Big( \mathrm{diag}[a_1,a_1,...,a_{N/2}, a_{N/2}]\Big) = \prod_{j=1}^{N/2} a_j.
\end{align}
In the case of Pfaffians, diagonal matrices (with at least one non-zero element) are clearly not skew-symmetric, and so the Pfaffian of a diagonal matrix is undefined. However, we can define a suitably analogous matrix for a Pfaffian as
\begin{align}
\label{eqn:skew_diag_mat} \mathbf{A}^{(D)}=\left[\begin{array}{cccc}
\mathbf{A}_1 & \mathbf{0} & \cdot\cdot\cdot & \mathbf{0}\\
\mathbf{0} & \mathbf{A}_2 & \cdot\cdot\cdot & \mathbf{0}\\
\vdots & \vdots & \ddots & \vdots\\
\mathbf{0} & \mathbf{0} & \cdot\cdot\cdot & \mathbf{A}_{N/2}
\end{array}
\right],
\end{align}
where $\mathbf{A}_j=\left[ \begin{array}{cc}
0 & a_j\\
-a_j & 0
\end{array}
\right]$ and $\mathbf{0}$ is the $2\times 2$ zero matrix. That is, the matrix has entries $\{a_1,...,a_{N/2} \}$ along the diagonal above the main diagonal, and $\{-a_1,...,-a_{N/2} \}$ on the diagonal just below the main diagonal, with zeros elsewhere. We call such a matrix \textit{skew-diagonal}. The analogy with the diagonal matrix of a quaternion determinant (\ref{eqn:qdet_diag}) comes from the fact that
\begin{align}
\label{eqn:pf_skew_diag_eval} \mathrm{Pf} \mathbf{A}^{(D)}=\prod_{j=1}^{N/2}a_j.
\end{align}
Note that in (\ref{eqn:skew_diag_mat}) and (\ref{eqn:pf_skew_diag_eval}), we have implicitly assumed that $N$ is even. In the case that $N$ is odd there are additional technical details, which are dealt with in Chapter \ref{sec:GOE_odd}.
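The evaluation (\ref{eqn:pf_skew_diag_eval}) can be checked sign-blind without a Pfaffian routine, since it implies $\det\mathbf{A}^{(D)}=\big(\prod_j a_j\big)^2$. A brief Python illustration (not part of the text):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8                                  # N even; A^(D) is N x N with N/2 blocks
a = rng.standard_normal(N // 2)

AD = np.zeros((N, N))
AD[::2, 1::2] = np.diag(a)             # a_j just above the main diagonal
AD = AD - AD.T                         # skew-diagonal: -a_j just below
assert np.allclose(AD, -AD.T)

# Pf(A^(D)) = prod a_j, hence det(A^(D)) = (prod a_j)^2
assert np.isclose(np.linalg.det(AD), np.prod(a) ** 2)
```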
From the preceding we see that there is clearly a relationship between quaternion determinants and Pfaffians, and we would like to formalise this, but we first need the quaternion determinant analogue of (\ref{eqn:pf_det}) for $\mathbf{M}$ a self-dual matrix \cite[Theorem 2]{dyson1970},
\begin{align}
\label{eqn:qdet2=det} (\mathrm{qdet} [\mathbf{M}])^2=\det[\mathbf{M}].
\end{align}
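Both the scalar-quaternion reduction noted in the remark above and (\ref{eqn:qdet2=det}) can be confirmed by implementing the cycle expansion (\ref{def:qdet}) directly. The Python sketch below (our own helper names; an illustration, not part of the derivation) uses the fact, easily checked block-wise, that for quaternion-real matrices the dual coincides with the Hermitian conjugate of the $2N\times 2N$ complex representation, which gives a simple way to build self-dual test matrices:

```python
import numpy as np
from itertools import permutations

def qdet(Q, N):
    """Quaternion determinant of an N x N matrix of 2x2 blocks via the cycle
    expansion: sum over P of (-1)^(N-l) prod_cycles (1/2)Tr(q_ab q_bc ... q_sa)."""
    block = lambda j, k: Q[2*j:2*j+2, 2*k:2*k+2]
    total = 0.0
    for perm in permutations(range(N)):
        seen, cycles = set(), []
        for s in range(N):             # decompose perm into disjoint cycles
            if s in seen:
                continue
            cyc, t = [], s
            while t not in seen:
                seen.add(t)
                cyc.append(t)
                t = perm[t]
            cycles.append(cyc)
        term = (-1.0) ** (N - len(cycles))
        for cyc in cycles:
            prod = np.eye(2, dtype=complex)
            for i in range(len(cyc)):  # product of blocks around the cycle
                prod = prod @ block(cyc[i], cyc[(i + 1) % len(cyc)])
            term *= 0.5 * np.trace(prod)
        total += term
    return total

rng = np.random.default_rng(3)

# scalar quaternions q_jk = c_jk 1_2: qdet reduces to an ordinary determinant
C = rng.standard_normal((3, 3))
assert np.isclose(qdet(np.kron(C, np.eye(2)), 3), np.linalg.det(C))

def rq():
    """a random real quaternion [[alpha, beta], [-conj(beta), conj(alpha)]]"""
    a = rng.standard_normal(4)
    al, be = a[0] + 1j * a[1], a[2] + 1j * a[3]
    return np.array([[al, be], [-np.conj(be), np.conj(al)]])

# self-dual M = B + B^D; for quaternion-real B the dual is B.conj().T
B = np.block([[rq() for _ in range(3)] for _ in range(3)])
M = B + B.conj().T
assert np.isclose(qdet(M, 3) ** 2, np.linalg.det(M))   # (qdet M)^2 = det M
```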
We also need to define
\begin{equation}
\label{def:Z2N} \mathbf{Z}_{2N}:=\mathbf{1}_N\otimes \left[\begin{array}{cc}
0 & -1\\
1 & 0\\
\end{array}\right].
\end{equation}
\begin{proposition}
\label{prop:qdet=pf}
With $\mathbf{M}$ a $2N \times 2N$ self-dual matrix and $\mathbf{Z}_{2N}$ from (\ref{def:Z2N}) we have
\begin{align}
\nonumber \mathrm{Pf}[\mathbf{M}\mathbf{Z}^{-1}_{2N}]&=\mathrm{Pf}[\mathbf{Z}^{-1}_{2N}\mathbf{M}]=\mathrm{qdet} [\mathbf{M}],\\
\label{eqn:qdet=pf} \mathrm{Pf}[\mathbf{M}\mathbf{Z}_{2N}]&=\mathrm{Pf}[\mathbf{Z}_{2N}\mathbf{M}]=(-1)^N\mathrm{qdet} [\mathbf{M}].
\end{align}
\end{proposition}
\textit{Proof:} First we must be sure that the equations are well formed, that is, that if $\mathbf{M}$ is a self-dual matrix then the result of operation by $\mathbf{Z}_{2N}$ or $\mathbf{Z}_{2N}^{-1}$ is anti-symmetric. Note that the tangible effect of right multiplication by $\mathbf{Z}_{2N}^{-1}$ on any matrix $\mathbf{M}$ is to interchange every pair of columns, and multiply the leftmost of each pair by $-1$. That of left multiplication is to interchange every pair of rows and multiply the bottom-most by $-1$. (Right/left multiplication by $\mathbf{Z}_{2N}$ will also interchange each pair of columns/rows, but will multiply the \textit{rightmost} column/\textit{top-most} row of each pair by $-1$, since $\mathbf{Z}_{2N}^{-1}=-\mathbf{Z}_{2N}$.)
If the elements of $\mathbf{M}$ are $\{ m_{j,k}\}_{j,k=1,...,2N}$ then self-duality implies $m_{2j-1,2k-1}=m_{2k,2j}$, $m_{2j-1,2k}=-m_{2k-1,2j}$, $m_{2j,2k-1}= -m_{2k,2j-1}$, $m_{2j,2k}= m_{2k-1,2j-1}$. If we now operate on this matrix with $\mathbf{Z}_{2N}^{-1}$ on the right, the discussion above shows that the second index in each entry is switched from even to odd or vice-versa, with the extra condition that a change from even to odd picks up a negative sign. Applying this to the entries of $\mathbf{M}$ we have $m_{2j-1,2k}=-m_{2k,2j-1},m_{2j-1,2k-1}=-m_{2k-1,2j-1}, m_{2j,2k}=-m_{2k,2j}, m_{2j,2k-1}= -m_{2k-1,2j}$, which is the condition for an anti-symmetric matrix. A similar argument also works for the other operations.
From (\ref{eqn:pf_skew_diag_eval}) we see that $\mathrm{Pf} [\mathbf{Z}_{2N}^{-1}]=1=(-1)^N\mathrm{Pf} [\mathbf{Z}_{2N}]$, and so, by (\ref{eqn:pf_det}) and (\ref{eqn:qdet2=det}) we have that
\begin{align}
\nonumber (\mathrm{Pf} [\mathbf{M}\mathbf{Z}_{2N}^{-1}])^2=\det [\mathbf{M}\mathbf{Z}_{2N}^{-1}]=\det [\mathbf{Z}_{2N}^{-1}\mathbf{M}]=\det [\mathbf{M}]=(\mathrm{qdet} [\mathbf{M}])^{2}
\end{align}
and
\begin{align}
\nonumber (\mathrm{Pf} [\mathbf{M}\mathbf{Z}_{2N}])^2&=(-1)^{2N}\det [\mathbf{M}\mathbf{Z}_{2N}]=(-1)^{2N}\det [\mathbf{Z}_{2N}\mathbf{M}]=(-1)^{2N}\det [\mathbf{M}]\\
\nonumber &=((-1)^{N}\mathrm{qdet} [\mathbf{M}])^{2}.
\end{align}
With $\mathbf{M}$ the identity we establish the sign after taking the square root.
\hfill $\Box$
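The well-formedness claim at the start of the proof is easy to confirm numerically. The following Python sketch (our own helper names; an illustration, not part of the proof) builds a self-dual quaternion-real $\mathbf{M}$, again using the fact that for quaternion-real matrices the dual is the Hermitian conjugate of the complex representation, and checks that all four products with $\mathbf{Z}_{2N}^{\pm 1}$ are anti-symmetric:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 3
Z = np.kron(np.eye(N), np.array([[0.0, -1.0], [1.0, 0.0]]))   # Z_{2N} of (def:Z2N)
Zinv = np.linalg.inv(Z)
assert np.allclose(Zinv, -Z)                                  # Z_{2N}^{-1} = -Z_{2N}

def rq():
    """a random real quaternion [[alpha, beta], [-conj(beta), conj(alpha)]]"""
    a = rng.standard_normal(4)
    al, be = a[0] + 1j * a[1], a[2] + 1j * a[3]
    return np.array([[al, be], [-np.conj(be), np.conj(al)]])

# self-dual M = B + B^D, with the dual realised as the Hermitian conjugate
B = np.block([[rq() for _ in range(N)] for _ in range(N)])
M = B + B.conj().T

# multiplying a self-dual matrix by Z_{2N} or its inverse, on either side,
# yields an anti-symmetric matrix, so the Pfaffians above are well formed
for X in (M @ Zinv, Zinv @ M, M @ Z, Z @ M):
    assert np.allclose(X, -X.T)
```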
We can generalise the preceding result to any skew-diagonal matrix.
\begin{corollary}
\label{cor:qdet=pf}
For a skew-diagonal $2N\times 2N$ matrix $\mathbf{A}$ as defined in (\ref{eqn:skew_diag_mat}) and a self-dual $2N\times 2N$ matrix $\mathbf{M}$ we have
\begin{align}
\nonumber \mathrm{Pf}[\mathbf{M}\mathbf{A}]=\mathrm{Pf}[\mathbf{A}\mathbf{M}]=\mathrm{Pf}[\mathbf{A}]\; \mathrm{qdet}[\mathbf{M}]=\prod_{j=1}^{N}a_j\; \mathrm{qdet}[\mathbf{M}].
\end{align}
\end{corollary}
\textit{Proof}: First note that $\mathbf{A}=\mathbf{D}\mathbf{Z}_{2N}^{-1}=\mathbf{Z}_{2N}^{-1}\mathbf{D}$ where $\mathbf{D}=\mathrm{diag}[a_1,a_1,a_2,a_2,...,a_N,a_N]$. Then
\begin{align}
\nonumber \mathrm{Pf}[\mathbf{M}\mathbf{A}]=\mathrm{Pf}[\mathbf{A}\mathbf{M}]&=\mathrm{Pf}[\mathbf{D}\mathbf{Z}_{2N}^{-1}\mathbf{M}]\\
\nonumber &=\prod_{j=1}^{N}a_j \; \mathrm{Pf}[\mathbf{Z}_{2N}^{-1}\mathbf{M}]
\end{align}
where we have used (\ref{def:Pf}) for the last equality. The result now follows from Proposition \ref{prop:qdet=pf}.
\hfill $\Box$
With Proposition \ref {prop:qdet=pf} in mind, we see that quaternion determinants and Pfaffians are trivially related, a relation that will be exploited in this work. The only subtlety is that the Pfaffian matrix must be anti-symmetric and the quaternion matrix must be self-dual.
\subsubsection{Generalised partition function}
With $P(x_1,...,x_N)$ a probability density function, the average of the function $f(x_1,...,x_N)$ is
\begin{align}
\label{def:av} \langle f(x_1,...,x_N) \rangle_P:=\int_{\Omega}f(x_1,...,x_N)P(x_1,...,x_N)dx_1\cdot\cdot\cdot dx_N,
\end{align}
where $\Omega$ is the support of $P$. A special case is $f(x_1,...,x_N)=\prod_{j=1}^N u(x_j)$; choosing $u(x)=\chi_{x\in S}$, where $\chi_A=1$ if $A$ is true and $\chi_A=0$ otherwise, gives that (\ref{def:av}) equals the probability that all eigenvalues are in the set $S$. Our interest in $\langle \prod_{j=1}^N u(x_j) \rangle_P$ for general $u$ stems from its use in calculating correlation functions by applying functional differentiation (see (\ref{eqn:fnal_diff_correln}) below), although it does have more general uses. For instance, if $u(x)=1-\zeta \chi_{x\in J}$ then the probability that $n$ eigenvalues lie in the set $J$ is given by the $n$th derivative with respect to $\zeta$ (times some combinatorial factor). See \cite[Chapter 8]{forrester?} and \cite{tracy_and_widom1998} for more details.
\begin{definition}
\label{def:gen_part_fn}
Let $\mathcal{Q}(\mathbf{x})$ be the jpdf of the set $\mathbf{x}=\{x_1,...,x_N\}$ and define the generalised partition function of $\mathbf{x}$ as
\begin{align}
\label{def:single_gen_part_fn} Z_N[u]:=\Big\langle \prod_{j=1}^Nu(x_j) \Big\rangle_{\mathcal{Q}} = \int\prod_{j=1}^Nu(x_j)\; \mathcal{Q}(\mathbf{x})\; d\mathbf{x}.
\end{align}
In the case that $\mathbf{x}=\bigcup_{l=1}^m\mathbf{x}^{(l)}$, that is, $\mathbf{x}$ consists of multiple disjoint sets $\mathbf{x}^{(l)}=\{x_1^{(l)},...,x_{N_l}^{(l)}\}$, each containing elements of a different species, then define
\begin{align}
\label{def:multi_gen_part_fn} Z_N[u_1,...,u_m]:=\int\left(\prod_{j=1}^{N_1} u_1\left(x_j^{(1)}\right)\cdot\cdot\cdot\prod_{j=1}^{N_m} u_m\left(x_j^{(m)}\right)\right)\mathcal{Q}(\mathbf{x})\; d\mathbf{x}.
\end{align}
\end{definition}
The multiple disjoint sets of Definition \ref{def:gen_part_fn} correspond to the sets of eigenvalues in the ensemble. While (\ref{def:multi_gen_part_fn}) is unnecessarily general for a study of GOE (where there are only real eigenvalues), it will become relevant in the following chapters where we discuss matrices whose eigenvalues are either real, or non-real complex conjugate pairs.
It will turn out that the generalised partition function for GOE can be written in a convenient quaternion determinant or Pfaffian form, and then, in such a case, the correlation functions --- given as functional derivatives in (\ref{eqn:fnal_diff_correln}) below --- are a particularly terse quaternion determinant or Pfaffian expression.
\subsubsection{Pfaffian generalised partition function for GOE, $N$ even}
\label{sec:pf_gpf}
Before we proceed, note that Definition \ref{def:pfaff} only applies when the size of the matrix is even --- for now we will make this assumption. The case of $N$ odd will be dealt with in Chapter \ref{sec:GOE_odd}, where the particular problems presented by parity will be explored.
In the case of GOE we have only one species of eigenvalue (they are all real) and so we substitute (\ref{eqn:GOE_eval_jpdf}) into (\ref{def:single_gen_part_fn}) to find
\begin{align}
\nonumber Z_N[u]&=2^{-3N/2}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)} \int_{-\infty}^{\infty}d\lambda_1\cdot\cdot\cdot\int_{-\infty}^{\infty}d\lambda_N \\
\label{def:GOE_gpf1} &\times \prod_{j=1}^N u(\lambda_j)\: e^{-\lambda_j^2/2}\prod_{1\leq j < k \leq N}|\lambda_k-\lambda_j|.
\end{align}
Since $Q(\vec{\lambda})$ from Proposition \ref{prop:GOE_eval_jpdf} is a jpdf, we see that $Z_N[1]=1$.
Our task now is to express (\ref{def:GOE_gpf1}) in Pfaffian form. The method used here and in the following chapters is integration over alternate variables, which was introduced by de Bruijn \cite{deB1955} and applied to the present problem by Mehta \cite{mehta1960,Mehta1967}. Its purpose is to deal with the absolute value signs around the product of differences in the eigenvalue jpdf: since (\ref{eqn:vandermonde_polys}) refers to a signed product of differences, it cannot be applied to (\ref{def:GOE_gpf1}) directly. However, if the eigenvalues have their ordering reinstated (which is equivalent to a corresponding restriction of the domain of integration) then the $|\cdot|$ can be removed and the identity applies. Integration over alternate variables is a technique for performing these ordered integrals over a Vandermonde determinant; it will be illustrated in the proof of the following proposition.
\begin{proposition}
\label{prop:GOE_gen_part_fn}
Let $Q(\vec{\lambda})$ be the eigenvalue jpdf for the GOE matrices, as given in (\ref{eqn:GOE_eval_jpdf}). For $N$ even the generalised partition function (as defined in (\ref{def:single_gen_part_fn})) for $Q(\vec{\lambda})$ is
\begin{align}
\label{eqn:GOE_GPF_even} Z_N[u]=\frac{N!}{2^N}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)}\mathrm{Pf} [\gamma_{jk}]_{j,k=1,...,N},
\end{align}
where
\begin{align}
\label{eqn:gammajk} \gamma_{jk}=\frac{1}{2}\int_{-\infty}^{\infty}dx\: e^{-x^2/2}\: u(x)\: p_{j-1}(x)\int_{-\infty}^{\infty}dy \: e^{-y^2/2}\: u(y)\: p_{k-1}(y)\: \mathrm{sgn}(y-x),
\end{align}
and $\{p_j(x)\}_{j=0,1,...,N-1}$ are arbitrary monic polynomials, with $p_j$ of degree $j$.
\end{proposition}
\textit{Proof}: We start by ordering the eigenvalues $-\infty < x_1 <\cdot\cdot\cdot < x_N < \infty$ (incurring a factor of $N!$) in (\ref{def:GOE_gpf1}) so that we can remove the $|\cdot|$ from the product of differences, putting it into Vandermonde form. With $A_N:= 2^{-3N/2}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)}$ this reordering gives
{\small
\begin{align}
\nonumber &Z_N[u]=A_NN!\int_{-\infty}^{x_2}dx_1 \int_{x_1}^{x_3}dx_2 \cdot\cdot\cdot \int_{x_{N-1}}^{\infty}dx_N\prod_{j=1}^Ne^{-x_j^2/2}\; u(x_j)\prod_{1\leq j < k \leq N}(x_k-x_j)\\
\nonumber &=A_NN!\int_{-\infty}^{x_2}dx_1\int_{x_1}^{x_3} dx_2\cdot\cdot\cdot \int_{x_{N-1}}^{\infty} dx_N \det \left[ e^{-x_j^2/2}u(x_j)p_{k-1}(x_j) \right]_{j,k=1,...,N}\\
\nonumber &=A_NN! \int_{-\infty}^{x_4}dx_2 \int_{x_2}^{x_6}dx_4 \cdot\cdot\cdot\int_{x_{N-2}}^{\infty}dx_N\\
\nonumber &\times \det \left[ \begin{array}{c}
\int_{-\infty}^{x_{2j}}e^{-x^2/2}u(x)p_{k-1}(x)dx\\
e^{-x_{2j}^2/2}u(x_{2j})p_{k-1}(x_{2j})
\end{array}\right]_{j=1,...,N/2 \atop k=1,...,N},
\end{align}
}where, for the second equality, we have used (\ref{eqn:vandermonde_polys}) and for the third we have made use of the observation that all dependence on $x_i$ occurs in row $i$ so the integrals can be applied individually to the relevant row of the determinant. The integrals over the odd numbered variables have been moved into the determinant and then, by adding the first row to the third row, and the first and third rows to the fifth row, and so on, all the integrals have lower terminal $-\infty$.
We see that the determinant is now symmetric in the variables $x_2,x_4,...,x_N$, and so we can remove the ordering $x_2 < x_4 < ... < x_N$ at the cost of dividing by $(N/2)!$. Expanding the determinant we find
\begin{align}
\nonumber Z_N[u]=A_N\frac{N!}{(N/2)!}\sum_{P\in S_N}\varepsilon(P)\prod_{l=1}^{N/2}\mu_{P(2l-1),P(2l)},
\end{align}
where
\begin{align}
\label{def:mu_GOE} \mu_{j,k}:=\int_{-\infty}^{\infty}dx\: e^{-x^2/2}\:u(x)\:p_{k-1}(x)\int_{-\infty}^xdy\: e^{-y^2/2}\:u(y)\:p_{j-1}(y),
\end{align}
and $\varepsilon(P)$ is the sign of the permutation $P$. By defining
\begin{align}
\nonumber \gamma_{j,k}:=\frac{1}{2}(\mu_{j,k}-\mu_{k,j}),
\end{align}
we can restrict the sum to terms with $P(2l)>P(2l-1)$ and use the second equality in Definition \ref{def:pfaff} (recalling Remark \ref{rem:Pf_def}) to write
\begin{align}
\nonumber Z_N[u]=A_N2^{N/2} N!\sum_{P\in S_N \atop P(2l)>P(2l-1)}^*\varepsilon(P)\prod_{l=1}^{N/2}\gamma_{P(2l-1),P(2l)}.
\end{align}
Now using the first equality in Definition \ref{def:pfaff} we have the result.
\hfill $\Box$
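For $N=2$ the proposition can be checked end-to-end numerically (a SciPy sketch with our own variable names; not part of the proof). With $u=1$ and the monic choices $p_0(x)=1$, $p_1(x)=x$, the Pfaffian reduces to the single entry $\gamma_{12}$, which evaluates to $\sqrt{\pi}$, and the prefactor then gives $Z_2[1]=1$ as required by the normalisation of the jpdf:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

L = 8.0   # effective infinity; the Gaussian weight makes the tails negligible

def inner(x, p):
    """inner sgn-integral of (eqn:gammajk): int e^{-y^2/2} p(y) sgn(y-x) dy"""
    f = lambda y: np.exp(-y**2 / 2) * p(y)
    return quad(f, x, L)[0] - quad(f, -L, x)[0]

# gamma_{12} with u = 1, p_0(x) = 1, p_1(y) = y
g12 = quad(lambda x: 0.5 * np.exp(-x**2 / 2) * inner(x, lambda y: y), -L, L)[0]
assert np.isclose(g12, np.sqrt(np.pi), atol=1e-6)

# N = 2: Z_2[1] = (2!/2^2) * (1/(Gamma(3/2) Gamma(2))) * Pf[gamma], Pf = gamma_12
Z2 = (2.0 / 4.0) / (gamma(1.5) * gamma(2.0)) * g12
assert np.isclose(Z2, 1.0, atol=1e-6)
```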
\subsection{Step IV: Skew-orthogonal polynomials}
\label{sec:skew_orthog_polys}
As discussed above, if a Pfaffian is in skew-diagonal form (\ref{eqn:skew_diag_mat}), then it is easily calculated as the product of the upper diagonal entries. With the goal of achieving such a simplified form, we define an inner product $\langle p_j, p_k\rangle$ with a set of monic polynomials $p_0,p_1, \dots$ such that for $a_j \neq 0$
\begin{align}
\label{eqn:so_polys} \langle p_{2j},p_{2k}\rangle = \langle p_{2j+1},p_{2k+1}\rangle=0 &,&\langle p_{2j},p_{2k+1}\rangle=-\langle p_{2k+1},p_{2j}\rangle=\delta_{j,k}a_j.
\end{align}
Using these polynomials, the matrix $[\langle p_j, p_k\rangle]_{j,k=0,...,N-1}$ is in skew-diagonal form and its Pfaffian is given by (\ref{eqn:pf_skew_diag_eval}). We call the polynomials satisfying (\ref{eqn:so_polys}) \textit{skew-orthogonal polynomials}. If an appropriate inner-product can be defined such that the matrix in (\ref{eqn:GOE_GPF_even}) is of the form $[\langle p_j,p_k\rangle]_{j,k =0,...,N-1}$, and the corresponding skew-orthogonal polynomials can be found, then the calculation of $Z_N$ will be greatly simplified, and the correlation function may be computed. This, then, is the next task.
\subsubsection{Skew-orthogonal polynomials for GOE}
\label{sec:GOE_sops}
\begin{definition}
\label{def:GOE_soip}
Let $\langle p,q\rangle$ be the inner product defined by
\begin{align}
\label{eqn:GOE_soip} \langle p,q\rangle&:= \frac{1}{2}\int_{-\infty}^{\infty}dx\: e^{-x^2/2}\: p(x)\int_{-\infty}^{\infty}dy \: e^{-y^2/2}\: q(y)\: \mathrm{sgn}(y-x).
\end{align}
Also let $\{ R_j \}_{j=0,1,...}$ be a set of monic skew-orthogonal polynomials, satisfying the conditions (\ref{eqn:so_polys}) with respect to the inner product (\ref{eqn:GOE_soip}). (These are not unique, as any replacement $p_{2m+1}(x) \mapsto p_{2m+1}(x) +c \: p_{2m}(x)$, where $c$ is some constant, leaves (\ref{eqn:so_polys}) unchanged by the bilinearity of the inner product.)
\end{definition}
\begin{remark}
There exist formulae for the skew-orthogonal polynomials, corresponding to various weight functions, in determinantal \cite{ForNagRai2006} and Pfaffian forms \cite{AkeKiePhi2010}, however the calculations implied by these methods may not be tractable (for instance, in the case of the truncated ensembles of Chapter \ref{sec:truncs}).
\end{remark}
We note that since the inner product (\ref{eqn:GOE_soip}) is just $\gamma_{jk}\big|_{u=1}$ of Proposition \ref{prop:GOE_gen_part_fn}, then the skew-orthogonal polynomials $R_0,R_1,...$ corresponding to this inner product will skew-diagonalise the matrix in (\ref{eqn:GOE_GPF_even}) with $u=1$. We will present these skew-orthogonal polynomials, and verify that they indeed satisfy (\ref{eqn:so_polys}) --- a derivation of these polynomials can be found in \cite{afnvm2000} and \cite[Chapter 6.4]{forrester?}, where use is made of facts pertaining to the $\beta=2$ and $\beta=4$ Gaussian ensembles. The skew-orthogonal polynomials for GOE turn out to be proportional to the \textit{Hermite polynomials}
\begin{align}
\nonumber H_n(x):=&\;(-1)^n e^{x^2}\frac{d^n}{dx^n}e^{-x^2}\\
\label{eqn:herm_polys} =&\;\sum_{m=0}^{\lfloor n/2\rfloor}(-1)^m2^{(n-m)}{n \choose 2m}\frac{(2m)!}{2^m m!}\; x^{(n-2m)},
\end{align}
where $n=0,1,...$ corresponds to the degree of the polynomial and $\lfloor x\rfloor$ is the floor function. Note that $H_n(x)$ is an even or odd function of $x$ depending on the parity of $n$. The Hermite polynomials have the remarkable recursive properties
\begin{align}
\label{eqn:recursive_herm} H_{n+1}(x)=2x\:H_{n}(x)-2n\:H_{n-1}(x)&,&&\frac{d}{dx}H_{n}(x)=2n\:H_{n-1}(x),
\end{align}
as well as satisfying the orthogonality condition
\begin{align}
\label{eqn:orthog_herm} \int_{-\infty}^{\infty}H_n(x)H_m(x)e^{-x^2}dx=\delta_{n,m}n! \:2^n\sqrt{\pi}.
\end{align}
Given that (\ref{eqn:GOE_soip}) also has a negative squared exponential weight, in light of (\ref{eqn:orthog_herm}) it is perhaps not surprising that Hermite polynomials appear as a result of skew-orthogonalising.
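The recurrence (\ref{eqn:recursive_herm}) and orthogonality (\ref{eqn:orthog_herm}) are easily confirmed numerically; the Python sketch below (an illustration, not part of the text) uses the physicists' Hermite utilities in numpy, with Gauss--Hermite quadrature supplying the weight $e^{-x^2}$:

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss
from math import factorial, sqrt, pi

def H(n, x):
    """physicists' Hermite polynomial H_n evaluated at x"""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermval(x, c)

# recurrence H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)
xs = np.linspace(-2.0, 2.0, 9)
for n in range(1, 6):
    assert np.allclose(H(n + 1, xs), 2 * xs * H(n, xs) - 2 * n * H(n - 1, xs))

# orthogonality: int H_n H_m e^{-x^2} dx = delta_{nm} n! 2^n sqrt(pi),
# via Gauss--Hermite quadrature (exact for polynomial integrands here)
x, w = hermgauss(30)
for n in range(5):
    for m in range(5):
        val = np.sum(w * H(n, x) * H(m, x))
        expected = factorial(n) * 2**n * sqrt(pi) if n == m else 0.0
        assert abs(val - expected) < 1e-6
```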
\begin{proposition}
\label{prop:GOE_soip}
Let $H_n(x)$ $(n=0,1,...)$ be the Hermite polynomials (\ref{eqn:herm_polys}). Skew-orthogonal polynomials corresponding to the inner product (\ref{eqn:GOE_soip}) are
\begin{align}
\nonumber R_{2j}(x)=\frac{H_{2j}(x)}{2^{2j}}&,&R_{2j+1}(x)&=\frac{H_{2j+1}(x)}{2^{2j+1}}-j\: \frac{H_{2j-1}(x)}{2^{2j-1}}\\
\label{eqn:sops} &&&=-\frac{e^{x^2/2}}{2^{2j}}\frac{d}{dx}e^{-x^2/2}H_{2j}(x).
\end{align}
The normalisation is
\begin{align}
\label{eqn:GOE_norm} \langle R_{2j},R_{2j+1}\rangle = r_j=\frac{\Gamma (2j+1)}{2^{2j}}\sqrt{\pi}.
\end{align}
\end{proposition}
\textit{Proof}: The anti-symmetric condition is apparent from the presence of the sign function in (\ref{eqn:GOE_soip}). The conditions $\langle R_{2j},R_{2k}\rangle = \langle R_{2j+1},R_{2k+1}\rangle=0$ are easily checked: the inner integral (over $y$) will yield a function of opposite parity to the integrand of the outer integral, resulting in integration over an odd function from $-\infty$ to $\infty$, and so it is zero.
Now assume the polynomial degrees are of opposite parity. First, using the recursive properties (\ref{eqn:recursive_herm}) of Hermite polynomials we can establish the second equality of $R_{2j+1}(x)$ in (\ref{eqn:sops}). With this in hand we find that
\begin{align}
\nonumber \langle R_{2j},R_{2k+1}\rangle&=2^{-2j-2k}\int_{-\infty}^{\infty}e^{-x^2}H_{2j}(x)H_{2k}(x)\:dx
\end{align}
and then from the orthogonality property (\ref{eqn:orthog_herm}) we have (\ref{eqn:GOE_norm}).
\hfill $\Box$
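The skew-orthogonality relations can also be verified numerically for the first few polynomials. From (\ref{eqn:sops}), $R_0(x)=1$, $R_1(x)=x$, $R_2(x)=x^2-\tfrac12$ and $R_3(x)=x^3-\tfrac52 x$, with $r_0=\sqrt{\pi}$ and $r_1=\sqrt{\pi}/2$. A SciPy sketch (our own helper names; an illustration, not part of the proof):

```python
import numpy as np
from scipy.integrate import quad
from math import sqrt, pi

L = 8.0   # effective infinity for the Gaussian weight

def ip(p, q):
    """<p,q> of (eqn:GOE_soip); the sgn splits the inner integral at y = x"""
    def inner(x):
        f = lambda y: np.exp(-y**2 / 2) * q(y)
        return quad(f, x, L)[0] - quad(f, -L, x)[0]
    return quad(lambda x: 0.5 * np.exp(-x**2 / 2) * p(x) * inner(x), -L, L)[0]

# R_0, ..., R_3 from (eqn:sops)
R = [lambda x: 1.0,
     lambda x: x,
     lambda x: x**2 - 0.5,
     lambda x: x**3 - 2.5 * x]

assert np.isclose(ip(R[0], R[1]), sqrt(pi), atol=1e-6)       # r_0
assert np.isclose(ip(R[2], R[3]), sqrt(pi) / 2, atol=1e-6)   # r_1
for a, b in [(0, 2), (1, 3), (0, 3), (2, 1)]:                # skew-orthogonality
    assert np.isclose(ip(R[a], R[b]), 0.0, atol=1e-6)
```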
An immediate consequence of the polynomials in Proposition \ref{prop:GOE_soip} is that the Pfaffian in (\ref{eqn:GOE_GPF_even}) can be evaluated using (\ref{eqn:pf_skew_diag_eval}) as
\begin{align}
\label{eqn:pf=prodr} \mathrm{Pf}[\gamma_{j,k}]\Big|_{u=1}=\prod_{j=0}^{N/2-1}r_j.
\end{align}
Hence we can calculate $Z_N[1]$, which turns out to be $1$, as we knew it must be from the comment below (\ref{def:GOE_gpf1}). However, the important point is not that the generalised partition function has unit evaluation, but that the form it takes, using the skew-orthogonal polynomials, will be useful in the calculation of the correlation functions.
\begin{corollary}
\label{cor:ZN[1]}
With the generalised partition function $Z_N[u]$ from (\ref{def:GOE_gpf1}) and $r_0,...,r_{N/2-1}$ given by (\ref{eqn:GOE_norm}), we have
\begin{align}
\label{eqn:ZN[1]_eval} Z_N[1]=1=\frac{N!}{2^N}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)}\prod_{j=0}^{N/2-1}r_j.
\end{align}
\end{corollary}
\subsection{Step V: Correlation functions}
\label{sec:detPfcorrelns}
A statistic commonly of interest in random matrix systems is the eigenvalue density and higher order generalisations of the density, collectively called \textit{correlation functions}. A calculation of these correlation functions for various ensembles is a major aim of this work. We begin with the definition of correlation functions and then go on to discuss various tools and methods used in their calculation.
\begin{definition}
\label{def:integ_correlns}
For an ensemble of $N\times N$ matrices with eigenvalues $\lambda_1,...,\lambda_N$ in the set $\Omega$ and with eigenvalue jpdf $\mathcal{Q}(\lambda_1,...,\lambda_N)$ the \textit{$n$-point correlation function} of the positions $r_1,...,r_n$ is given by
\begin{align}
\nonumber \rho_{(n)}(r_1,...,r_n)&:=\frac{N(N-1)\cdot\cdot\cdot(N-n+1)}{Z_N[1]}\int_{\Omega}d\lambda_{n+1}\cdot\cdot\cdot\int_{\Omega}d\lambda_N \\
\label{eqn:integ_correlns} &\times \mathcal{Q}(r_1,...,r_n,\lambda_{n+1},...,\lambda_N).
\end{align}
\end{definition}
The eigenvalue density is the $n=1$ case of (\ref{eqn:integ_correlns}). While the interpretation of the density (as the number of eigenvalues per unit volume) is clear, the higher order correlations are less perspicuous. A viewpoint in terms of conditional probabilities is that the ratio
\begin{align}
\nonumber \frac{\rho_{(n)}(r_1,...,r_n)}{\rho_{(n-1)}(r_1,...,r_{n-1})}
\end{align}
is equal to the eigenvalue density at $r_n$ given that there are eigenvalues at $r_1,...,r_{n-1}$.
One of the common ways to calculate the correlation functions is by using a recursion for integrals of quaternion determinants, known as the Dyson Integration Theorem \cite{dyson1970,mehta1976,AK2007}.
\begin{proposition} {\rm \textbf{Dyson Integration Theorem}}
\label{thm:integral_identities}
Let $f(x,y)$ be a function of real, complex or quaternion variables where
\begin{equation}
\nonumber \bar{f}(x,y)=f(y,x),
\end{equation}
with $\bar{f}$ being the function $f$, the complex conjugate of $f$ or the dual of $f$ depending on whether $x$ and $y$ are real, complex or quaternion respectively.
Also let
\begin{eqnarray}
\label{eqn:proj_prop1} \int f(x,x) d\mu (x)&=&c,\\
\label{eqn:proj_prop2} \int f(x,y)f(y,z) d\mu(y)&=&f(x,z)+\lambda f(x,z) -f(x,z)\lambda,
\end{eqnarray}
for some suitable measure $d\mu$, a constant scalar $c$ and a constant quaternion $\lambda$.
Then for a matrix $F_{n\times n}=[f(x_i,x_j)]_{n\times n}$ we have
\begin{equation}
\nonumber \int \mathrm{qdet}[F_{n\times n}] d\mu(x_n) = (c-n+1)\hspace{3pt}\mathrm{qdet}[F_{(n-1) \times (n-1)}].
\end{equation}
\end{proposition}
\noindent (For a proof see Theorem 5.1.4 in \cite{mehta2004}.)
Examining Proposition \ref{thm:integral_identities} with (\ref{eqn:integ_correlns}) in mind, we see that it will be possible to calculate the correlation functions if the eigenvalue jpdf is in the form of a quaternion determinant (or Pfaffian). This is indeed possible (see \cite{dyson1970,mehta1976}), although this is not the approach we employ and we include it only for completeness. We do not use Dyson's theorem because for real non-symmetric ensembles (discussed in Chapters \ref{sec:GinOE}--\ref{sec:truncs}) we cannot satisfy (\ref{eqn:proj_prop1}) and (\ref{eqn:proj_prop2}), collectively called the projection property in \cite{AK2007}. In that paper the authors establish a generalised form of Dyson's theorem, which they call the \textit{Pfaffian integration theorem}, and use it to find the probability of obtaining some number of real eigenvalues in terms of zonal polynomials. Still, this does not suit our later purposes and we adopt instead a strategy first used in \cite{tracy_and_widom1998}, but with some significant modifications. This requires use of the general operator identity
\begin{align}
\label{eqn:1+AB} \det(\mathbf{1}+\mathbf{AB})=\det(\mathbf{1}+\mathbf{BA})
\end{align}
and the quaternion determinant analogue
\begin{align}
\label{eqn:qdet_1+AB} \mathrm{qdet}(\mathbf{1}+\mathbf{AB})=\mathrm{qdet}(\mathbf{1}+\mathbf{BA}),
\end{align}
(provided that the product $\mathbf{BA}$ is self-dual) where, for our purposes, $\mathbf{1}, \mathbf{A}$ and $\mathbf{B}$ are $N \times N$ matrices, and $\mathbf{1}$ is specifically the identity matrix. We will also employ the \textit{Fredholm determinant} and its comrades the \textit{Fredholm quaternion determinant} and the \textit{Fredholm Pfaffian}.
\begin{definition}
\label{def:fred_D_QD_P}
Let $K$ be an integral operator with kernel $K(x,y)$ and $\lambda$ a complex parameter, then the \textit{Fredholm determinant} is defined by
\begin{align}
\nonumber \det[1+\lambda K]:=1+\sum_{s=1}^{\infty}\frac{\lambda^s}{s!}\int_{-\infty}^{\infty}dx_1\cdot\cdot\cdot\int_{-\infty}^{\infty}dx_s\det [K(x_j,x_k)]_{j,k=1,...,s}.
\end{align}
In the case that the matrix $[K(x_j,x_k)]_{j,k=1,...,s}$ is self-dual, we define the \textit{Fredholm quaternion determinant}
\begin{align}
\nonumber \mathrm{qdet}[1+\lambda K]:=1+\sum_{s=1}^{\infty}\frac{\lambda^s}{s!}\int_{-\infty}^{\infty}dx_1\cdot\cdot\cdot\int_{-\infty}^{\infty}dx_s\;\mathrm{qdet} [K(x_j,x_k)]_{j,k=1,...,s},
\end{align}
and, when $[K(x_j,x_k)]_{j,k=1,...,s}$ is anti-symmetric, the \textit{Fredholm Pfaffian}
\begin{align}
\nonumber \mathrm{Pf}[1+\lambda K]:=1+\sum_{s=1}^{\infty}\frac{\lambda^s}{s!}\int_{-\infty}^{\infty}dx_1\cdot\cdot\cdot\int_{-\infty}^{\infty}dx_s\;\mathrm{Pf} [K(x_j,x_k)]_{j,k=1,...,s}.
\end{align}
\end{definition}
\begin{remark}
We note that while the Fredholm determinant has been known for over a century, the Fredholm Pfaffian seems to have been introduced in \cite{Rains2000}, and was then used in \cite{BK2007} to prove some variants of the Pfaffian integration theorem discussed above. The first mention of a Fredholm quaternion determinant in the literature appears to be in \cite{tracy_and_widom1998}; of course, by Corollary \ref{cor:qdet=pf}, it is a trivial rewriting of the Fredholm Pfaffian.
\end{remark}
\begin{remark}
Since these definitions involve infinite sums, there is the question of convergence. However, we sidestep this complication since we will only be using operators of finite rank, and thus the sums are of finite length.
\end{remark}
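To illustrate the finite-rank remark: for a rank-one kernel $K(x,y)=a(x)b(y)$ every determinant in the series with $s\geq 2$ vanishes, so the series terminates after one term, giving $\det[1+\lambda K]=1+\lambda\int a(x)b(x)\,dx$. The following numerical sketch (kernel, interval and $\lambda$ chosen arbitrarily for illustration) compares this terminated series with the determinant of a discretised version of the operator, anticipating Lemma \ref{lem:pf_qdet_kernel}.

```python
import numpy as np

# Rank-one kernel K(x, y) = cos(x) sin(y) on [0, 1]: every s x s
# determinant in the Fredholm series vanishes for s >= 2, so
# det[1 + lam K] = 1 + lam * int_0^1 cos(x) sin(x) dx = 1 + lam * sin(1)^2 / 2.
lam = 0.7
exact = 1.0 + lam * np.sin(1.0) ** 2 / 2.0

# Discretised operator determinant det(I + lam * dx * K), which converges
# to the Fredholm determinant as the grid is refined.
m = 800
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]
K = np.outer(np.cos(x), np.sin(x))
disc = np.linalg.det(np.eye(m) + lam * dx * K)

assert abs(disc - exact) < 1e-2
```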
Although we shall not tackle the $N$ odd case until Chapter \ref{sec:GOE_odd}, a key technical consideration in that case will be the square root of a Fredholm determinant. We would like the square root to be a Fredholm quaternion determinant or Pfaffian (depending on the attributes of $K$) in analogy with (\ref{eqn:pf_det}) and (\ref{eqn:qdet2=det}). However, we cannot invoke the `Freshman's dream' (that the power of a sum is the sum of the powers \cite{Hu90}) and so we require a more subtle approach. Instead, we first establish that Fredholm operators are limiting cases of some discretised form.
\begin{lemma}[\cite{WW63}, Chapter XI]
\label{lem:pf_qdet_kernel}
Let $\lambda$ be some complex variable and let $x_1<\cdot\cdot\cdot<x_m$ be real numbers with fixed $\delta:=x_j-x_{j-1}$ for all $2\leq j\leq m$ (that is, $\delta$ is the constant distance between consecutive variables), considered as a discretisation of an interval $I$. Then, for an integral operator $K$ with kernel $K(x,y)$ supported on $I$ and
\begin{align}
\nonumber \tilde{K}_m(\delta)=\left[
\begin{array}{cccc}
\delta K(x_1,x_1) & \delta K(x_1,x_2) & \cdot\cdot\cdot & \delta K(x_1,x_m)\\
\delta K(x_2,x_1) & \delta K(x_2,x_2) & \cdot\cdot\cdot & \delta K(x_2,x_m)\\
\vdots & \vdots & \ddots & \vdots\\
\delta K(x_m,x_1) & \delta K(x_m,x_2) & \cdot\cdot\cdot & \delta K(x_m,x_m)
\end{array}
\right],
\end{align}
we have
\begin{align}
\label{eqn:freddet_lim} \lim_{\substack{\delta\rightarrow 0\\ m\rightarrow \infty}}\det [1-\lambda \tilde{K}_m(\delta)]=\det[1-\lambda K],
\end{align}
and, in the case that $\tilde{K}_m$ is self-dual,
\begin{align}
\nonumber \lim_{\substack{\delta\rightarrow 0\\ m\rightarrow \infty}}\mathrm{qdet} [1-\lambda \tilde{K}_m(\delta)]=\mathrm{qdet}[1-\lambda K],
\end{align}
or, in the case that $\tilde{K}_m$ is anti-symmetric,
\begin{align}
\nonumber \lim_{\substack{\delta\rightarrow 0\\ m\rightarrow \infty}}\mathrm{Pf} [1-\lambda \tilde{K}_m(\delta)]=\mathrm{Pf} [1-\lambda K].
\end{align}
\end{lemma}
\textit{Proof}: Expanding the LHS of (\ref{eqn:freddet_lim}) in powers of $\lambda$ we have
\begin{align}
\nonumber &\det\left[1-\lambda \tilde{K}_m(\delta)\right]\\
\nonumber &= 1-\lambda \sum_{p=1}^m \delta K(x_p,x_p)+\frac{\lambda^2}{2!}\sum_{p,q=1}^m\delta^2 \det\left[
\begin{array}{cc}
K(x_p,x_p) & K(x_p,x_q)\\
K(x_q,x_p) & K(x_q,x_q)
\end{array} \right]\\
\label{eqn:discrete_FH} &-\frac{\lambda^3}{3!}\sum_{p,q,r=1}^m\delta^3 \det\left[
\begin{array}{ccc}
K(x_p,x_p) & K(x_p,x_q) & K(x_p,x_r)\\
K(x_q,x_p) & K(x_q,x_q) & K(x_q,x_r)\\
K(x_r,x_p) & K(x_r,x_q) & K(x_r,x_r)
\end{array} \right]+\cdot\cdot\cdot
\end{align}
where the sum continues up to the $m$-th power of $\lambda$. Taking $m\to\infty$ and $\delta\to 0$, the sums become Riemann integrals and (\ref{eqn:discrete_FH}) becomes
\begin{align}
\nonumber &1-\lambda\int_I dx\; K(x,x)+\frac{\lambda^2}{2!}\int_I dx\int_I dy \det\left[
\begin{array}{cc}
K(x,x) & K(x,y)\\
K(y,x) & K(y,y)
\end{array} \right]+\cdot\cdot\cdot.
\end{align}
Recalling Definition \ref{def:fred_D_QD_P} establishes (\ref{eqn:freddet_lim}). Applying the same reasoning to the Fredholm quaternion determinant and Pfaffian we have the remaining results.
\hfill $\Box$
Observe that (\ref{eqn:freddet_lim}) allows us to interpret the LHS's of Definition \ref{def:fred_D_QD_P} as the product $\prod_{j=1}^{\infty} (1+\lambda \mu_j)$, where the $\mu_j$ are the eigenvalues of $K$; for this infinite product to make sense we require some technical assumptions on $K$ (see \cite[Section XI.1]{WW63}).
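The eigenvalue-product interpretation can itself be checked numerically on a separable kernel, for which the only non-zero eigenvalue is known in closed form; a sketch (kernel and parameters chosen only for illustration):

```python
import numpy as np

# Separable kernel K(x, y) = e^{x+y} on [0, 1]: the only non-zero
# eigenvalue is mu = int_0^1 e^{2y} dy = (e^2 - 1)/2, with eigenfunction e^x.
m = 800
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]
Kd = dx * np.outer(np.exp(x), np.exp(x))   # discretised operator

mu = np.linalg.eigvals(Kd)
mu_max = mu[np.argmax(np.abs(mu))].real
assert abs(mu_max - (np.e ** 2 - 1) / 2) < 5e-2

# For the discretised matrix, det(1 + lam K) equals the product over its
# eigenvalues exactly, in line with the interpretation above.
lam = 0.3
prod = np.prod(1.0 + lam * mu).real
det = np.linalg.det(np.eye(m) + lam * Kd)
assert abs(prod - det) < 1e-6
```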
Combining Lemma \ref{lem:pf_qdet_kernel} with (\ref{eqn:pf_det}) and (\ref{eqn:qdet2=det}) the desired square root relationships between the Fredholm operators now follow trivially; we quote them here as corollaries for ease of reference.
\begin{corollary}
\label{cor:sqrtFreds} Let $K$ be an integral operator with kernel $K(x,y)$, with Fredholm determinant, Fredholm quaternion determinant and Fredholm Pfaffian as in Definition \ref{def:fred_D_QD_P}. Then we have
\begin{align}
\label{eqn:det_qdet_kernel} \Big(\det [1+\lambda K]\Big)^{1/2}=\mathrm{qdet} [1+\lambda K]
\end{align}
in the case that $[K(x_j,x_k)]_{j,k=1,2,...}$ is self-dual, and
\begin{align}
\label{eqn:det_pf_kernel} \Big(\det [1+\lambda K]\Big)^{1/2}=\mathrm{Pf} [1+\lambda K]
\end{align}
in the case that $[K(x_j,x_k)]_{j,k=1,2,...}$ is anti-symmetric.
\end{corollary}
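Underlying Corollary \ref{cor:sqrtFreds} is the finite-dimensional identity $\mathrm{Pf}(\mathbf{X})^2=\det(\mathbf{X})$ for anti-symmetric $\mathbf{X}$, that is, (\ref{eqn:pf_det}); a short numerical sketch, evaluating the Pfaffian by its recursive first-row expansion:

```python
import numpy as np

def pfaffian(X):
    # Pfaffian of an even-dimensional anti-symmetric matrix via the
    # recursive expansion along the first row:
    # Pf(X) = sum_j (+/-) x_{0j} Pf(X with rows/columns 0 and j removed).
    n = X.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        total += (-1.0) ** (j + 1) * X[0, j] * pfaffian(X[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6))
X = M - M.T                      # anti-symmetric
assert np.isclose(pfaffian(X) ** 2, np.linalg.det(X))
```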
The utility of these Fredholm operators comes from an alternative form of the correlation functions: with $Z_N[a]$ from (\ref{def:single_gen_part_fn}) the $n$-point correlation function is
\begin{align}
\label{eqn:fnal_diff_correln} \rho_{(n)}(r_{1},...,r_{n})=\frac{1}{Z_N[a]}\frac{\delta^n}{\delta a(r_1)\cdot\cdot\cdot \delta a(r_n)}Z_N[a]{\Bigg |}_{a=1}.
\end{align}
We can describe the equivalence between Definition \ref{def:integ_correlns} and (\ref{eqn:fnal_diff_correln}) in a heuristic fashion: while (\ref{eqn:integ_correlns}) relies on integrating over the density function to leave only the number of eigenvalues desired, (\ref{eqn:fnal_diff_correln}) starts by integrating over all eigenvalues (which is the partition function) and then ``undoes'' a number of integrals equal to the number of eigenvalues one wishes to keep. This heuristic points to our intended use of the Fredholm operators (defined as sums of integrals); the functional differentiation will pick out only the particular term required.
The method that we develop here, particularly that in Proposition \ref{prop:pf_integ_op}, is inspired by the approaches of Forrester \cite[Chapter 5.2]{forrester?} and Tracy and Widom \cite{tracy_and_widom1998} (for the even case) where they use (\ref{eqn:1+AB}) to find the correlation functions for Hermitian ensembles, and the work of Borodin and Sinclair for both the even \cite{b&s2009} and odd \cite{Sinc09} cases. (In \cite{sommers2007} and \cite{sommers_and_w2008} the authors also use (\ref{eqn:fnal_diff_correln}) to obtain the eigenvalue correlations for $N$ even and odd (respectively), however the details are somewhat different to our techniques and we will not pursue their methods here.) In \cite{b&s2009} the authors use the Pfaffian identity
\begin{align}
\label{eqn:rains} \frac{\mathrm{Pf}(\mathbf{C}^{-T}-\mathbf{A}^T\mathbf{B}\mathbf{A})}{\mathrm{Pf}(\mathbf{C}^{-T})}=\frac{\mathrm{Pf}(\mathbf{B}^{-T}-\mathbf{A}\mathbf{C}\mathbf{A}^T)}{\mathrm{Pf}(\mathbf{B}^{-T})},
\end{align}
which is due to Rains \cite{Rains2000}, where $\mathbf{B}$ and $\mathbf{C}$ are $2m\times 2m$ and $2n\times 2n$ anti-symmetric matrices respectively, and $\mathbf{A}$ is any $2m\times 2n$ matrix. Proposition \ref{prop:pf_integ_op} unifies the approaches of Forrester, Tracy and Widom, with that of Borodin and Sinclair.
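Rains' identity (\ref{eqn:rains}) can be checked directly on random matrices of the stated shapes; a sketch (using a naive recursive Pfaffian, with sizes kept small for speed):

```python
import numpy as np

def pfaffian(X):
    # Recursive expansion of the Pfaffian along the first row.
    n = X.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(n) if k not in (0, j)]
        total += (-1.0) ** (j + 1) * X[0, j] * pfaffian(X[np.ix_(rest, rest)])
    return total

rng = np.random.default_rng(2)
m, n = 2, 3                      # B is 2m x 2m, C is 2n x 2n, A is 2m x 2n
B = rng.standard_normal((2 * m, 2 * m)); B = B - B.T
C = rng.standard_normal((2 * n, 2 * n)); C = C - C.T
A = rng.standard_normal((2 * m, 2 * n))

Binv_T = np.linalg.inv(B).T
Cinv_T = np.linalg.inv(C).T
lhs = pfaffian(Cinv_T - A.T @ B @ A) / pfaffian(Cinv_T)
rhs = pfaffian(Binv_T - A @ C @ A.T) / pfaffian(Binv_T)
assert np.isclose(lhs, rhs)
```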
The advantage of our method is that all cases --- symmetric and asymmetric for both even and odd $N$ --- can be dealt with in the same framework, with only minor modifications and generalisations at each step. There are also hints that the method may be applicable to ensembles with a higher number of distinct species of eigenvalue, such as the $*$-cosquare ensembles discussed in Chapter \ref{sec:FW}.
\subsubsection{GOE $n$-point correlations, $N$ even}
\label{sec:correlns_even_GOE}
\begin{definition}
\label{def:GOE_correln_kernel}
With $R_0(x),R_1(x),...$ the skew-orthogonal polynomials of Proposition \ref{prop:GOE_soip} and $r_j:=\langle R_{2j},R_{2j+1}\rangle$ the corresponding normalisations, let
\begin{align}
\label{def:GOEphi} \Phi_k(x):=\frac{1}{2}\int_{-\infty}^{\infty}dy\:R_k(y)\: e^{-y^2/2}\: \mathrm{sgn}(x-y).
\end{align}
Then let $\mathbf{f}(x,y)$ be the $2\times 2$ matrix
\begin{align}
\label{def:GOE_Qdcorrelnk} \mathbf{f}(x,y)=\left[\begin{array}{cc}
S(x,y) & \tilde{I}(x,y)\\
D(x,y) & S(y,x)
\end{array}\right],
\end{align}
where
\begin{align}
\nonumber S(x,y)&=\sum_{j=0}^{N/2-1}\frac{e^{-y^2/2}}{r_j}\Big( \Phi_{2j}(x)R_{2j+1}(y)-\Phi_{2j+1}(x)R_{2j}(y)\Big),\\
\nonumber D(x,y)&=\sum_{j=0}^{N/2-1}\frac{e^{-(x^2+y^2)/2}}{r_j}\Big( R_{2j}(x)R_{2j+1}(y)-R_{2j+1}(x)R_{2j}(y)\Big),\\
\nonumber \tilde{I}(x,y)&:=I(x,y)+\frac{1}{2}\mathrm{sgn}(y-x)\\
\nonumber &=\sum_{j=0}^{N/2-1}\frac{1}{r_j}\Big( \Phi_{2j+1}(x)\Phi_{2j}(y)-\Phi_{2j}(x)\Phi_{2j+1}(y)\Big)+\frac{1}{2}\mathrm{sgn}(y-x).
\end{align}
\end{definition}
Since the correlation functions will turn out to be quaternion determinants of matrices composed of blocks of (\ref{def:GOE_Qdcorrelnk}), $\mathbf{f}(x,y)$ is known as a \textit{correlation kernel}. From (\ref{eqn:qdet=pf}) we find the equivalent kernel for the correlations expressed as Pfaffians,
\begin{align}
\label{def:GOE_Pfcorrelnk}
\mathbf{f}(x,y)\mathbf{Z}_2^{-1}=\left[\begin{array}{cc}
-\tilde{I}(x,y) & S(x,y)\\
-S(y,x) & D(x,y)
\end{array}\right].
\end{align}
The term \textit{Pfaffian kernel} is also used to refer to (\ref{def:GOE_Pfcorrelnk}).
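As a quick structural check of the passage from (\ref{def:GOE_Qdcorrelnk}) to (\ref{def:GOE_Pfcorrelnk}), one can multiply out $\mathbf{f}(x,y)\mathbf{Z}_2^{-1}$ with numeric placeholders for the kernel entries, assuming the convention $\mathbf{Z}_2^{-1}=\left[\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right]$:

```python
import numpy as np

# Numeric placeholders for the kernel entries; the check is purely structural.
S_xy, S_yx, D, I_ = 2.0, 3.0, 5.0, 7.0

f = np.array([[S_xy, I_], [D, S_yx]])
Z2_inv = np.array([[0.0, 1.0], [-1.0, 0.0]])

expected = np.array([[-I_, S_xy], [-S_yx, D]])
assert np.array_equal(f @ Z2_inv, expected)
```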
Here we point out a simple relationship between the elements of $\mathbf{f}(x,y)$, which explains the choice of the appellations $D(x,y)$ and $I(x,y)$.
\begin{lemma}
\label{lem:GOE_D=S=I}
The elements of $\mathbf{f}(x,y)$ are related as follows:
\begin{align}
\nonumber D(x,y)=\frac{\partial}{\partial x}S(x,y)&,& I(x,y)&=\frac{1}{2}\int_{-\infty}^{\infty}S(x,z)\mathrm{sgn} (z-y)dz\\
\nonumber &&&=-\int_x^y S(x,z)dz.
\end{align}
\end{lemma}
\textit{Proof}: The derivative formula for $D(x,y)$ follows once one recalls the identity $\frac{\partial}{\partial x}\mathrm{sgn}(x-a)=2\delta(x-a)$. The first equality for $I(x,y)$ can be seen by inspection, and the second is verified by noting that the two sides agree along the line $y=x$ and that their derivatives with respect to both $x$ and $y$ are equal (the differentiation can be carried out by differentiating under the integral sign).
\hfill $\Box$
Using Definition \ref{def:GOE_correln_kernel}, we will rewrite the generalised partition function (\ref{eqn:GOE_GPF_even}) by applying the identity (\ref{eqn:qdet_1+AB}). First, recall that an \textit{integral operator} $T$ with kernel $K(x,y)$ supported on $I$ acts on a function $h$ via
\begin{align}
\label{def:integop} T\:h[x]=\int_I K(x,y)h(y)dy,
\end{align}
with the convention that $y$ is always the variable of integration. The square brackets indicate that the resulting function (after the operation by the integral operator) is a function of $x$. We use the notation $a\otimes b$ to denote an operator with kernel $K(x,y)=a(x)b(y)$, that is, a kernel that factorises as separate functions of $x$ and $y$, in which case we have
\begin{align}
\label{eqn:integop_ab} a\otimes b\:h[x]=a(x)\int_I b(y)h(y)dy.
\end{align}
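The notation in (\ref{def:integop}) and (\ref{eqn:integop_ab}) is straightforward to realise on a grid; the sketch below (interval, functions and grid chosen only for illustration) checks that a discretised $a\otimes b$ agrees with the general kernel rule applied to $K(x,y)=a(x)b(y)$.

```python
import numpy as np

# Interval I = [0, 1] discretised on a uniform grid; integrals are
# approximated by Riemann sums with weight dx.
m = 500
x = np.linspace(0.0, 1.0, m)
dx = x[1] - x[0]

def apply_operator(K, h):
    # T h[x] = int_I K(x, y) h(y) dy, with y always the integration variable.
    return K @ h * dx

a, b = np.exp(x), np.sin(x)
h = x ** 2

# a (x) b acting on h: a(x) * int_I b(y) h(y) dy.
direct = a * np.sum(b * h) * dx
via_kernel = apply_operator(np.outer(a, b), h)
assert np.allclose(direct, via_kernel)
```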
We now use (\ref{eqn:qdet_1+AB}) to convert the problem from one posed on a variable-sized ($N\times N$) matrix to one posed on a fixed-size $2\times 2$ matrix integral operator, which is how we will present the correlation functions.
\begin{proposition}
\label{prop:pf_integ_op}
Let $\gamma_{jk}$ be as in (\ref{eqn:gammajk}) and $\mathbf{f}(x,y)$ be as in (\ref{def:GOE_Qdcorrelnk}). Then, by using the skew-orthogonal polynomials of Proposition \ref{prop:GOE_soip}, we have
\begin{align}
\label{prop:integ_op_even} \mathrm{Pf} [\gamma_{jk}]_{j,k=1,...,N}=\prod_{j=0}^{N/2-1}r_j\; \mathrm{qdet} [\mathbf{1}_2+ \mathbf{f}^T (\mathbf{u}-\mathbf{1}_2)],
\end{align}
where $\mathbf{1}_2$ is the $2\times 2$ identity matrix, and $\mathbf{f}^T(\mathbf{u}-\mathbf{1}_2)$ is the matrix integral operator with kernel $\mathbf{f}^T(x,y) \mathrm{diag}[u(y)-1,u(y)-1]$ (that is, we have a Fredholm quaternion determinant).
\end{proposition}
\textit{Proof}: In the definition of $\gamma_{jk}$ in (\ref{eqn:gammajk}) we let $u=\sigma+1$ and $\psi_j(x):=e^{-x^2/2}R_{j-1}(x)$, where $R_0,R_1,...$ are the skew-orthogonal polynomials (\ref{eqn:sops}), and denote by $\epsilon$ the integral operator with kernel $\mathrm{sgn}(x-y)/2$. We then have
{\small
\begin{align}
\nonumber \gamma_{jk}&=\gamma_{jk}\Big|_{u=1}-\int_{-\infty}^{\infty}\Big(\sigma(x)\psi_j(x)\epsilon \psi_k[x]-\sigma(x)\psi_k(x)\epsilon\psi_j[x]-\sigma(x)\psi_k(x)\epsilon(\sigma\psi_j)[x]\Big)dx\\
\nonumber &=\gamma^{(1)}_{jk}-\gamma^{(\sigma)}_{jk},
\end{align}
}with $\gamma_{jk}^{(1)}:= \gamma_{jk}\big|_{u=1}$ and $\gamma_{jk}^{(\sigma)}$ the remaining term. With the skew-orthogonal polynomials we see that $[\gamma^{(1)}_{jk}]$ is of the form (\ref{eqn:skew_diag_mat}), and so it can be written $\mathbf{D}\mathbf{Z}^{-1}_N$ with $\mathbf{Z}_N$ as in (\ref{def:Z2N}) and $\mathbf{D}=\mathrm{diag}[r_0,r_0,r_1,r_1,...,r_{N/2-1},r_{N/2-1}]$. We then have
\begin{align}
\nonumber \mathrm{Pf}[\gamma_{jk}]&=\mathrm{Pf}(\mathbf{D}\mathbf{Z}^{-1}_N-[\gamma^{(\sigma)}_{jk}])\\
\nonumber &=\mathrm{Pf}\big{(}\mathbf{D}\mathbf{Z}^{-1}_N(\mathbf{1}_N-\mathbf{Z}_N \mathbf{D}^{-1} [\gamma^{(\sigma)}_{jk}]) \big{)}\\
\nonumber &=\prod_{j=0}^{N/2-1}r_j\; \mathrm{qdet} \big{(}\mathbf{1}_N- \mathbf{Z}_N\mathbf{D}^{-1}[\gamma^{ (\sigma)}_{jk}] \big{)},
\end{align}
where, for the last equality, we have used Corollary \ref{cor:qdet=pf}. Defining
\begin{align}
\label{def:G_psi} G_{2j-1}(x):=\psi_{2j}(x),&& G_{2j}(x):=-\psi_{2j-1}(x),
\end{align}
and recalling the effect of operation by $\mathbf{Z}_N$ (as discussed in the proof of Proposition \ref{prop:qdet=pf}) we find
\begin{align}
\nonumber &\mathrm{Pf} [\gamma_{jk}]_{j,k=1,...,N}=\prod_{j=0}^{N/2-1}r_j \; \mathrm{qdet} \Bigg[\delta_{j,k}+\\
\label{eqn_detgamma1} &\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\int_{-\infty}^{\infty}\Big(\sigma(x)G_j(x)\epsilon \psi_k[x]-\sigma(x)\psi_k(x)\epsilon G_j[x]-\sigma(x)\psi_k(x)\epsilon(\sigma G_j)[x]\Big)dx \Bigg]
\end{align}
with $\lfloor z \rfloor$ the floor function. Now let $\mathbf{A}$ be the $N\times 2$ matrix-valued integral operator on $(-\infty, \infty)$ with kernel $\mathbf{Z}^{-1}_N\mathbf{D}^{-1}\sigma(y) (\Omega\mathbf{E})^T$ where
\begin{align}
\label{eqn:omega_E} \Omega:=\left[
\begin{array}{cc}
-\epsilon \sigma & -1\\
1 & 0
\end{array}
\right]&&\mathrm{and} && \mathbf{E}:=\left[
\begin{array}{ccc}
\psi_1(y) & \cdot \cdot\cdot & \psi_N(y)\\
\epsilon\psi_1[y] & \cdot \cdot\cdot & \epsilon\psi_N[y]
\end{array}
\right].
\end{align}
(Care should be taken to note that the top-left element of $\Omega$ is an integral operator acting on the elements of $\mathbf{E}$.) Explicitly, the kernel of $\mathbf{A}$ can be written
\begin{align}
\left[\begin{array}{cc}
\nonumber -\frac{\sigma(y)}{r_{\lfloor (j-1)/2 \rfloor}}\big(\epsilon G_j[y]+\epsilon(\sigma G_j)[y]\big)&\frac{\sigma(y)}{r_{\lfloor (j-1)/2 \rfloor}}G_j(y)
\end{array}\right]_{j=1,...,N},
\end{align}
which we include for clarity. Now with $\mathbf{B}=\mathbf{E}$ we have
\begin{align}
\label{eqn:1+ABdecomp} \mathbf{1}_N-\mathbf{Z}_N\mathbf{D}^{-1}[\gamma^{(\sigma)}_{jk}]= \mathbf{1}_N+\mathbf{A}\mathbf{B}
\end{align}
and we may make use of (\ref{eqn:qdet_1+AB}). With the definitions above, we see that $\mathbf{1}_2 + \mathbf{B}\mathbf{A}$ is the $2\times 2$ matrix integral operator
\begin{align}
\label{eqn:1+ba1} \left[\begin{array}{cc}
1-\sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\Big( \psi_j\otimes \sigma \epsilon G_j+\psi_j\otimes \sigma \epsilon (\sigma G_j)\Big) & \sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\psi_j\otimes \sigma G_j\\
-\sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\Big(\epsilon \psi_j\otimes \sigma\epsilon G_j+\epsilon \psi_j\otimes \sigma \epsilon(\sigma G_j) \Big) & 1+\sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \psi_j\otimes \sigma G_j
\end{array}\right]
\end{align}
using the integral operator notation of (\ref{eqn:integop_ab}). To achieve the final result we should like to eliminate terms containing the factor $\epsilon\sigma$, since we will then be able to factor out $\sigma$ and make the impending functional differentiation straightforward. However, the appearance of the $\epsilon\sigma$ factor is only an apparent complication and will be dealt with by a judicious matrix factorisation. We have
\begin{align}
\label{eqn:ba} \mathbf{B}\mathbf{A}=\mathbf{E}\; \mathbf{Z}^{-1}_N\mathbf{D}^{-1}\sigma(y) (\Omega\mathbf{E})^T =\mathbf{E}\; \mathbf{Z}^{-1}_N\mathbf{D}^{-1}\sigma(y) \mathbf{E}^T\Omega^T,
\end{align}
and
\begin{align}
\label{eqn:OT_decomp} \Omega^T\left[
\begin{array}{cc}
1 & 0\\
-\epsilon\sigma & 1
\end{array}
\right]=\left[
\begin{array}{cc}
\epsilon\sigma & 1\\
-1 & 0
\end{array}
\right]\left[
\begin{array}{cc}
1 & 0\\
-\epsilon\sigma & 1
\end{array}
\right]=\mathbf{Z}^{-1}_2
\end{align}
(where taking the transpose of $\Omega$ includes the interchange of the variables $x$ and $y$ in $\epsilon$). Replacing $\Omega^T$ using (\ref{eqn:OT_decomp}) we find that the kernel of (\ref{eqn:1+ba1}) factorises as
\begin{align}
\nonumber \mathbf{1}_2 + \mathbf{B}\mathbf{A}=\left(\left[
\begin{array}{cc}
1 & 0\\
-\epsilon\sigma & 1
\end{array}
\right]+\mathbf{E}\; \mathbf{Z}^{-1}_N\mathbf{D}^{-1}\sigma(y) \mathbf{E}^T \mathbf{Z}^{-1}_2\right) \left[
\begin{array}{cc}
1 & 0\\
\epsilon\sigma & 1
\end{array}
\right]
\end{align}
{\small \begin{align}
\label{eqn:1+ba2} =\left[\begin{array}{cc}
1-\sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}} \psi_j\otimes \sigma \epsilon G_j & \sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\psi_j\otimes \sigma G_j\\
-\epsilon\sigma-\sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}} \epsilon \psi_j\otimes \sigma\epsilon G_j & 1+\sum_{j=1}^N\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \psi_j\otimes \sigma G_j
\end{array}\right] \left[
\begin{array}{cc}
1 & 0\\
\epsilon\sigma & 1
\end{array}
\right].
\end{align}
}The equality of (\ref{eqn:1+ba1}) and (\ref{eqn:1+ba2}) can also be checked directly by noting that $\psi_j\otimes\sigma\epsilon(\sigma G_j)=-\psi_j\otimes \sigma G_j(\epsilon \sigma)$ where, on the right hand side, the operator $\epsilon\sigma$ is understood to act before the larger integral operator; to wit
\begin{align}
\label{eqn:eps_sig} \psi_j\otimes \sigma G_j(\epsilon \sigma) h[x]=\psi_j(x)\int_{-\infty}^{\infty}dy \sigma(y) G_j(y) \int_{-\infty}^{\infty}\sigma(z) h(z) \mathrm{sgn}(y-z)dz.
\end{align}
Similarly $\epsilon\psi_j\otimes\sigma\epsilon(\sigma G_j)=-\epsilon\psi_j\otimes\sigma G_j(\epsilon \sigma)$.
The right-most matrix in (\ref{eqn:1+ba2}) has quaternion determinant $1$ and, recalling that the transpose of a matrix integral operator involves both the transpose of the matrix itself and also transposition of the operator variables, we can identify the remaining matrix in (\ref{eqn:1+ba2}) with the right hand side of (\ref{prop:integ_op_even}). The only caveat is that we obtain the negatives of $D$ and $\tilde{I}$ as defined in Definition \ref{def:GOE_correln_kernel}; however, since they only appear as the product $D(\alpha,\beta) \tilde{I}(\alpha,\beta)$ in the expansion of the Fredholm quaternion determinant, the result is unchanged.
\hfill $\Box$
With Proposition \ref{prop:pf_integ_op} in hand, we see from (\ref{eqn:GOE_GPF_even}) that
\begin{align}
\label{eqn:ZN_integ_op} Z_N[u]=\frac{N!}{2^N}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)} \prod_{j=0}^{N/2-1}r_j \: \mathrm{qdet} [\mathbf{1}_2+\mathbf{f}^T(\mathbf{u}-\mathbf{1}_2)],
\end{align}
and we can now establish the general $n$-point correlation functions. Note that (\ref{eqn:ZN_integ_op}) contains a Fredholm quaternion determinant, and so, by the discussion under (\ref{eqn:fnal_diff_correln}), we expect that we will apply functional differentiation to pick out the particular term corresponding to the desired correlation function.
In the following proposition we find a quaternion determinant expression for the correlation functions, and then apply Proposition \ref{prop:qdet=pf} in an `after-the-fact' manner to conveniently find the Pfaffian expression, which is how the correlations were written in \cite{b&s2009}. However, in the proof of Proposition \ref{prop:pf_integ_op} above, we can see a deeper structural connection between the two expressions. Recall that to obtain (\ref{eqn:1+ba2}) we used (\ref{eqn:OT_decomp}) to factorise $\Omega^T$ in (\ref{eqn:ba}). If we instead wrote
\begin{align}
\nonumber \mathbf{1}+\mathbf{B}\mathbf{A}=\Big((\Omega^{T})^{-1} + \mathbf{E}\; \mathbf{Z}^{-1}_N\mathbf{D}^{-1}\sigma(y) \mathbf{E}^T \Big) \Omega^T,
\end{align}
then we are led directly to a Pfaffian form of (\ref{prop:integ_op_even}), and then to Pfaffian correlation functions. This is essentially how Rains' identity (\ref{eqn:rains}) comes into play.
\begin{remark}
In fact, (\ref{eqn:rains}) plays a more significant role in our proof of Proposition \ref{prop:pf_integ_op} than it may, at first, appear. The seemingly miraculous appearance of the matrices $\Omega$ and $\mathbf{E}$ in (\ref{eqn:omega_E}) was inspired by Rains' identity. Recalling (\ref{eqn:pf_det}) and using some simple algebra, we see that (\ref{eqn:rains}) becomes
\begin{align}
\label{eqn:rains_mod} \det(\mathbf{1}-\mathbf{C}^T\mathbf{A}^T\mathbf{B}\mathbf{A})^{1/2} =\det(\mathbf{1}-\mathbf{B}^T\mathbf{A}\mathbf{C}\mathbf{A}^T)^{1/2}.
\end{align}
In the case that $\mathbf{C}$ is skew-diagonal, we have quaternion determinants, instead of square roots of determinants. Using (\ref{eqn:rains_mod}) one can conjecture the form of the required integral operators.
\end{remark}
\begin{proposition}
\label{prop:correlns_GOE_even}
With $\mathbf{f}(x,y)$ as in Definition \ref{def:GOE_correln_kernel}, the $n$-point eigenvalue correlations for the GOE, with $N$ even, are given by
\begin{align}
\nonumber \rho_{(n)}(x_1,...,x_n)&=\mathrm{qdet}[\mathbf{f}(x_l,x_m)]_{l,m=1,...,n}\\
\label{eqn:correlns_GOE} &=\mathrm{Pf}[\mathbf{f}(x_l,x_m)\mathbf{Z}_2^{-1}]_{l,m=1,...,n}.
\end{align}
\end{proposition}
\textit{Proof}: Recalling Definition \ref{def:fred_D_QD_P} we see that (\ref{eqn:ZN_integ_op}) becomes
{\small
\begin{align}
\nonumber &Z_N[u]=\frac{N!}{2^N}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)}\prod_{j=0}^{N/2-1}r_j\\
\nonumber &\times\bigg( 1+\sum_{k=1}^{\infty}\frac{1}{k!}\int_{-\infty}^{\infty}dx_1\: (u(x_1)-1) \cdot\cdot\cdot \int_{-\infty}^{\infty} dx_k \:(u(x_k)-1)\:\mathrm{qdet}[\mathbf{f}(x_l,x_m)]_{l,m=1,...,k} \bigg).
\end{align}
}Now to make use of (\ref{eqn:fnal_diff_correln}) we first note that only terms with $k\geq n$ survive the functional differentiation, and then any terms with $k>n$ will be killed off once $u=1$. So we are left with the $n!$ terms corresponding to $k=n$, giving
\begin{align}
\nonumber \rho_{(n)}(x_1,...,x_n)=\frac{\prod_{j=0}^{N/2-1}r_j}{\mathrm{Pf} [\gamma_{jk}]_{j,k=1,...,N}\Big|_{u=1}}\mathrm{qdet} [\mathbf{f}(x_l,x_m)]_{l,m=1,...,n}.
\end{align}
Recalling (\ref{eqn:pf=prodr}) we have the first equality in (\ref{eqn:correlns_GOE}), and by using (\ref{eqn:qdet=pf}) we have the second.
\hfill $\Box$
\subsubsection{Summation formulae for the kernel elements, $N$ even}
\label{sec:GOE_sums}
Here we will show that the sum $S(x,y)$ in Definition \ref{def:GOE_correln_kernel} can be performed explicitly. This will be of use in analysing the large $N$ limit of the density, and will give us the leading order behaviour that we see in Figure \ref{fig:GOE_eval_dens}. Further, since we have the inter-relationships of Lemma \ref{lem:GOE_D=S=I}, a closed form for $S(x,y)$ implies that all the correlation functions can be written in a closed form.
First we quote a classical result, known as the \textit{Christoffel--Darboux formula}.
\begin{proposition}
With a set of orthogonal polynomials $\{q_0(x),q_1(x),...\}$, where $q_j(x)$ has leading coefficient $k_j$, and
\begin{align}
\nonumber (q_j, q_k):= \int_{-\infty}^{\infty} e^{-z^2} q_j(z) q_k(z) dz,
\end{align}
we have
\begin{align}
\label{eqn:CD} \sum_{j=0}^n \frac{q_j(x)q_j(y)} {(q_j, q_j)}=\frac{k_n}{(q_n, q_n) k_{n+1}} \frac{q_{n+1}(x)q_n(y) - q_n(x)q_{n+1}(y)}{x-y}.
\end{align}
\end{proposition}
(For proofs we refer the reader to \cite{Szego1959} and \cite{forrester?}.) By taking the limit $y\to x$ in (\ref{eqn:CD}) we also have the formula
\begin{align}
\label{eqn:CDdiff} \sum_{j=0}^n \frac{\big(q_j(x)\big)^2} {(q_j, q_j)}=\frac{k_n}{(q_n, q_n) k_{n+1}} \Big(q'_{n+1}(x)q_n(x) -q'_n(x)q_{n+1}(x)\Big),
\end{align}
where the prime denotes differentiation with respect to $x$. Note that in our case we are using monic Hermite polynomials, and so $k_j=1$ for all $j=0,...,n$.
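Both (\ref{eqn:CD}) and the monic normalisation $(q_j,q_j)=j!\sqrt{\pi}/2^j$ are easy to test numerically; a sketch using monic Hermite polynomials (test points chosen arbitrarily):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

def q(n, x):
    # Monic Hermite polynomial: physicists' H_n divided by its leading 2^n.
    c = np.zeros(n + 1); c[n] = 1.0
    return hermval(x, c) / 2.0 ** n

def norm2(n):
    # (q_n, q_n) = int e^{-z^2} q_n(z)^2 dz = n! sqrt(pi) / 2^n.
    return factorial(n) * sqrt(pi) / 2.0 ** n

n, x, y = 5, 0.3, -1.1
lhs = sum(q(j, x) * q(j, y) / norm2(j) for j in range(n + 1))
# Monic polynomials: k_n = k_{n+1} = 1.
rhs = (q(n + 1, x) * q(n, y) - q(n, x) * q(n + 1, y)) / (norm2(n) * (x - y))
assert np.isclose(lhs, rhs)
```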
Using (\ref{eqn:CD}), with the working given in \cite[Chapter 6.4.2]{forrester?}, we have
\begin{align}
\nonumber S(x,y)&=\frac{e^{-(x^2+y^2)/2}}{2^{N-1}\sqrt{\pi}\: \Gamma(N-1)}\frac{H_{N-1}(x)H_{N-2}(y)- H_{N-2}(x)H_{N-1}(y)}{x-y}\\
\label{eqn:GOEsumS} &+\frac{\: e^{-y^2/2}}{ 2\sqrt{\pi}\: \Gamma(N-1)}H_{N-1}(y)\Phi_{N-2}(x),
\end{align}
where the $H_j(x)$ are the Hermite polynomials (\ref{eqn:herm_polys}) and $\Phi_j(x)$ is from (\ref{def:GOEphi}). Since the density is defined as the one-point correlation we see from (\ref{eqn:correlns_GOE}) that
\begin{align}
\label{eqn:1pt_correln} \rho_{(1)}(x)=S(x,x).
\end{align}
By applying (\ref{eqn:CDdiff}) and (\ref{eqn:recursive_herm}) to (\ref{eqn:GOEsumS}) we have that
\begin{align}
\nonumber \rho_{(1)}(x)&=\frac{e^{-x^2}}{2^{N-2}\sqrt{\pi}\: \Gamma(N-1)}\Big((N-1)(H_{N-2}(x))^2- (N-2)H_{N-3}(x)H_{N-1}(x)\Big)\\
\label{eqn:GOEsumSdiff} &+\frac{\: e^{-x^2/2}}{ 2\sqrt{\pi}\: \Gamma(N-1)}H_{N-1}(x)\Phi_{N-2}(x).
\end{align}
Using an electrostatic analogy, the reasoning in \cite[Chapter 4.2]{mehta2004} and \cite[Chapter 1.4]{forrester?} concludes that the leading order behaviour of the eigenvalue density will be given by
\begin{align}
\label{eqn:wsslwN} \rho_{(1)}(x)=\frac{1}{\pi}\sqrt{2N-x^2},
\end{align}
which is a semi-circle of radius $\sqrt{2N}$. This suggests taking the normalised limit
\begin{align}
\label{eqn:wssl} \lim_{N\to \infty}\sqrt{\frac{2}{N}}\rho_{(1)}\left( \sqrt{2N}x \right)=\left\{ \begin{array}{ll}
\frac{2}{\pi}\sqrt{1-x^2}, & |x|<1,\\
0, & |x| \geq1.
\end{array}\right.
\end{align}
(One way to carry out this task is to use the so-called \textit{Plancherel--Rotach} asymptotic formula for Hermite polynomials. In \cite[Chapter 1.4.3]{forrester?}, the author obtains the semi-circular density by reframing the question as a Riemann--Hilbert problem.) In Figure \ref{fig:GOEwWSSL} we compare the simulated eigenvalue density of Figure \ref{fig:GOE_eval_dens} to (\ref{eqn:wssl}).
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.7]{GOEWSLii.pdf}
\caption[Comparison of simulated GOE eigenvalue density with analytic prediction.]{Comparison of the simulated eigenvalue density of Figure \ref{fig:GOE_eval_dens} with the solid line given by the analytic result in (\ref{eqn:wssl}).}
\label{fig:GOEwWSSL}
\end{center}
\end{figure}
Using the knowledge that the density tends towards (\ref{eqn:wsslwN}), we scale the eigenvalues $\tilde{x}_j=\pi \rho x_j/\sqrt{N}$, where $\rho$ is the average bulk density, to obtain the bulk limiting form of the general correlation functions \cite{Gaudin1961} \cite[Chapter 7.8.1]{forrester?}
\begin{align}
\nonumber &\lim_{N\to\infty}\left( \frac{\pi \rho}{\sqrt{N}}\right)^n \rho_{(n)}\left( \tilde{x}_1,..., \tilde{x}_n\right)\\
\label{eqn:GOESbulk} &=\rho^{n} \mathrm{qdet}\left[\begin{array}{cc}
S^{\mathrm{bulk}}(x_j,x_k)&\tilde{I}^{\mathrm{bulk}}(x_j,x_k)\\
D^{\mathrm{bulk}}(x_j,x_k)&S^{\mathrm{bulk}}(x_k,x_j)
\end{array}
\right]_{j,k=1,...,n},
\end{align}
where
\begin{align}
\nonumber S^{\mathrm{bulk}}(x_j,x_k)&=\frac{\sin \pi \rho\: (x_j-x_k)}{\pi \rho\: (x_j-x_k)},\\
\nonumber \tilde{I}^{\mathrm{bulk}}(x_j,x_k)&=\frac{1}{\pi\rho}\int_{0}^{\pi \rho\: (x_j-x_k)}\frac{\sin t}{t}dt+\frac{1}{2\rho}\mathrm{sgn}(x_k-x_j),\\
\nonumber D^{\mathrm{bulk}}(x_j,x_k)&=\frac{\partial}{\partial x_j}\frac{\sin \pi \rho\: (x_j-x_k)}{\pi \rho\: (x_j-x_k)}.
\end{align}
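These bulk kernel entries inherit the inter-relationships of Lemma \ref{lem:GOE_D=S=I}; the following sketch (with $\rho=1$) checks $S^{\mathrm{bulk}}$ at coincident points and compares $D^{\mathrm{bulk}}$, the partial derivative of $S^{\mathrm{bulk}}$ in its first argument, against a finite difference.

```python
import numpy as np

rho = 1.0

def S_bulk(xj, xk):
    t = np.pi * rho * (xj - xk)
    return np.sinc(t / np.pi)   # sin(t)/t, with the coincident case handled

def D_bulk(xj, xk, eps=1e-5):
    # Central finite difference in the first argument.
    return (S_bulk(xj + eps, xk) - S_bulk(xj - eps, xk)) / (2 * eps)

# The sine kernel equals 1 at coincident points.
assert np.isclose(S_bulk(0.7, 0.7), 1.0)

# Analytic derivative of sin(t)/t, with t = pi*rho*(xj - xk), via the chain rule.
xj, xk = 0.9, 0.2
t = np.pi * rho * (xj - xk)
analytic = np.pi * rho * (t * np.cos(t) - np.sin(t)) / t ** 2
assert np.isclose(D_bulk(xj, xk), analytic, atol=1e-5)
```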
The limiting density (\ref{eqn:wssl}) appears commonly in Hermitian and symmetric random matrix ensembles and is known as \textit{Wigner's semi-circle law}. It was first conjectured in the 1950s based upon numerical evidence before being shown analytically by Wigner in 1955 \cite{wigner1955} for a restricted class of matrices, and then found to apply to a broader range of matrices in \cite{wigner1958}. Various results since then, including \cite{Pastur1972, Joh1998, BaiSilver2006}, have established a quite general form of the law.
\begin{proposition}[Semi-circle law]
\label{prop:wssl}
Let $\mathbf{X}=[x_{j,k}]_{j,k=1,...,N}$ be Hermitian, with iid entries (up to the required symmetry) $x_{j,k}$ drawn from any distribution of zero mean and variance $1$. Then the scaled eigenvalue density of $\mathbf{X}$ tends to (\ref{eqn:wssl}) as $N\to\infty$.
\end{proposition}
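The semi-circle law is easy to observe in simulation; a sketch (a single GOE sample, with normalisation matching the weight $e^{-x^2/2}$ used in this chapter) checking the scaled spectral edge and the second moment of the limiting density:

```python
import numpy as np

# Sample a GOE matrix: X = (G + G^T)/2 with G of iid standard normals has
# diagonal variance 1 and off-diagonal variance 1/2, matching the weight
# e^{-x^2/2}, so the spectrum should fill the semi-circle of radius sqrt(2N).
rng = np.random.default_rng(42)
N = 400
G = rng.standard_normal((N, N))
X = (G + G.T) / 2.0

s = np.linalg.eigvalsh(X) / np.sqrt(2.0 * N)   # scaled eigenvalues

# The edge of the spectrum sits near +-1 after scaling...
assert 0.9 < np.max(np.abs(s)) < 1.1
# ...and the second moment of the density (2/pi) sqrt(1 - s^2) on [-1, 1] is 1/4.
assert abs(np.mean(s ** 2) - 0.25) < 0.02
```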
\newpage
\section{The importance of being odd}
\setcounter{figure}{0}
\label{sec:GOE_odd}
As mentioned at the beginning of Chapter \ref{sec:qdets_pfs}, Definition \ref{def:pfaff} implies that the Pfaffian of a matrix $\mathbf{X}$ is only defined if $\mathbf{X}$ is of even dimension. In \cite{deB1955} de Bruijn discusses how the definition can be interpreted to include odd-sized matrices. We will stick with the convention that Definition \ref{def:pfaff} only applies to even-sized matrices, however his methods --- which involve bordering an odd-sized matrix with an extra row and column, or calculating the $N$ odd case by removing one variable to infinity in the $N+1$ case --- turn out to be similar to our development.
We see from Proposition \ref{prop:GOE_eval_jpdf} that the eigenvalue jpdf is insensitive to the parity of $N$, so the calculation up to that point does not need to be modified. But when Pfaffians are introduced in Step III (Chapter \ref{sec:Step3_GOE}) things go awry. It is at this point that we pick up the calculation, with $N$ now specified to be odd. We present two methods for calculating the correlation functions for odd-sized matrices: one in which an extra row and column are added to the generalised partition function (\ref{eqn:GOE_GPF_even}) (\cite{PanShu1991} and \cite{frahm_and_pichard1995} use similar constructions for the circular ensembles); the other uses the known correlations for a $2N$-sized system and removes one eigenvalue off to infinity, leaving the correlations of a $(2N-1)$-sized system. Note that these techniques were not required for the original calculations of the GOE correlation functions for odd matrix size, since Proposition \ref{thm:integral_identities} was used. However, we develop these techniques here since Dyson's method is not applicable to the other ensembles that will be considered in the following chapters of the present work, and we hope to provide a unified treatment.
For the first approach (involving functional differentiation) the plan is to modify the method of integration over alternate variables in such a way as to generate an even-sized Pfaffian for $N$ odd, giving $Z_N[u]$. It is this modification to the alternate variable method that essentially distinguishes the treatment of $N$ odd from $N$ even in all of the $\beta=1$ ensembles. We will then rewrite $Z_N[u]$ as a Fredholm quaternion determinant and Fredholm Pfaffian and apply functional differentiation as in the even case to find the correlation functions. However, there is a further complication in establishing the Fredholm operators for $N$ odd: because of the structure of the odd generalised partition function, the calculation cannot be carried out strictly as a Pfaffian or quaternion determinant, since non-anti-symmetric and non-self-dual matrices are involved. Instead, we resort to a determinantal form and then use Corollary \ref{cor:sqrtFreds} after the fact.
\subsection{Pfaffian generalised partition function for GOE, $N$ odd}
Here we derive the $N$ odd analogue of Proposition \ref{prop:GOE_gen_part_fn}, again using the method of integration over alternate variables. Recall from the proof of that proposition that the method relied on pairing up the rows (corresponding to pairing up the eigenvalues) in the Vandermonde determinant to symmetrise $Z_N[u]$. However, with $N$ odd, there is clearly a difficulty as there will be one unpaired row. Dealing with this row is something of a technical task, but it naturally leads to an even-sized Pfaffian, with an $N\times N$ block bordered by a row and column corresponding to the unpaired eigenvalue. This type of method was applied in the original calculation of \cite{Mehta1967}.
\begin{proposition}[\cite{Mehta1967}]
\label{prop:GOE_gen_part_fn_odd}
Let $Q(\vec{\lambda})$ be as in (\ref{eqn:GOE_eval_jpdf}). For $N$ odd the generalised partition function for $Q(\vec{\lambda})$ is
\begin{align}
\label{eqn:GOE_GPF_odd} Z_{N\:\mathrm{odd}}[u]=\frac{N!}{2^{N+1/2}}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)}\mathrm{Pf} \left[\begin{array}{cc}
[\gamma_{jk}] & [\nu_j]\\
\left[-\nu_k\right] & 0
\end{array}\right]_{j,k=1,...,N},
\end{align}
where $\gamma_{jk}$, $p_j(x)$ and $u$ are as in Proposition \ref{prop:GOE_gen_part_fn} and
\begin{align}
\label{def:GOE_nu} \nu_k:=\int_{-\infty}^{\infty}e^{-x^2/2}u(x)p_{k-1}(x)\:dx.
\end{align}
\end{proposition}
\textit{Proof:} As in the even case we order the eigenvalues in (\ref{def:GOE_gpf1}) according to $-\infty < x_1 <\cdot\cdot\cdot < x_N < \infty$, picking up a factor of $N!$, and let $A_N:= 2^{-3N/2}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)}$. We then have
{\small
\begin{align}
\nonumber Z_N[u]&=A_NN!\int_{-\infty}^{x_2}dx_1 \int_{x_1}^{x_3}dx_2 \cdot\cdot\cdot \int_{x_{N-1}}^{\infty}dx_N\prod_{j=1}^Ne^{-x_j^2/2}u(x_j)\prod_{1\leq j < k \leq N}(x_k-x_j)\\
\nonumber &=A_NN!\int_{-\infty}^{x_2} dx_1\int_{x_1}^{x_3} dx_2\cdot\cdot\cdot \int_{x_{N-1}}^{\infty}dx_N \det \left[ e^{-x_j^2/2}u(x_j)p_{k-1}(x_j) \right]_{j,k=1,...,N}\\
\nonumber &=A_NN!\int_{-\infty}^{x_4}dx_2\int_{x_2}^{x_6}dx_4 \cdot\cdot\cdot \int_{x_{N-3}}^{x_{N-1}} dx_{N-1}\\
\nonumber &\times \det \left[\begin{array}{c}
\left[ \begin{array}{c}
\int_{-\infty}^{x_{2j}}e^{-x^2/2}u(x)p_{k-1}(x)dx\\
e^{-x_{2j}^2/2}u(x_{2j})p_{k-1}(x_{2j})
\end{array}\right]\\
\int_{-\infty}^{\infty}e^{-x^2/2}u(x)p_{k-1}(x)dx
\end{array}\right]_{j=1,...,(N-1)/2 \atop k=1,...,N}\\
\nonumber &=A_N\frac{N!}{((N-1)/2)!}\int_{-\infty}^{\infty} dx_2\int_{-\infty}^{\infty} dx_4 \cdot\cdot\cdot \int_{-\infty}^{\infty}dx_{N-1}\\
\nonumber &\times \det \left[\begin{array}{c}
\left[ \begin{array}{c}
\int_{-\infty}^{x_{2j}}e^{-x^2/2}u(x)p_{k-1}(x)dx\\
e^{-x_{2j}^2/2}u(x_{2j})p_{k-1}(x_{2j})
\end{array}\right]\\
\int_{-\infty}^{\infty}e^{-x^2/2}u(x)p_{k-1}(x)dx
\end{array}\right]_{j=1,...,(N-1)/2 \atop k=1,...,N},
\end{align}
}where, for the third equality, we have added rows to make the integrals inside the determinant start at $-\infty$, and for the last equality we removed the ordering on the $(N-1)/2$ variables outside the determinant.
With $\mu_{jk}$ as in (\ref{def:mu_GOE}), expanding the determinant yields
\begin{align}
\nonumber Z_N[u]=A_N\frac{N!}{((N-1)/2)!}\sum_{P\in S_N}\varepsilon(P)\nu_{P(N)}\prod_{l=1}^{(N-1)/2}\mu_{P(2l-1),P(2l)},
\end{align}
and restricting the sum to terms with $P(2l)>P(2l-1)$ we have
\begin{align}
\nonumber Z_N[u]=A_N2^{(N-1)/2} N!\sum_{P\in S_N \atop P(2l)>P(2l-1)}^*\varepsilon(P) \nu_{P(N)} \prod_{l=1}^{(N-1)/2} \gamma_{P(2l-1),P(2l)}.
\end{align}
Now, letting $\nu_{P(N),N+1}:=\nu_{P(N)}$ we use the first equality in Definition \ref{def:pfaff} and we have the result.
\hfill $\Box$
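Since $Q(\vec{\lambda})$ is a normalised probability density, we should have $Z_N[u]\big|_{u=1}=1$. A small Monte Carlo sketch (illustrative only; the sample size and seed are arbitrary) checks this for $N=3$ by importance sampling against independent standard Gaussians, using the explicit prefactor $A_N$ from the proof above:

```python
import numpy as np
from math import gamma

# Importance-sampling estimate of Z_N[u] at u = 1 for N = 3.  Sampling the
# lambda_j as iid N(0,1) leaves the weight const * prod |lambda_k - lambda_j|.
rng = np.random.default_rng(0)
N, M = 3, 200_000
const = 2.0**(-1.5 * N) * (2.0 * np.pi)**(N / 2)
const /= np.prod([gamma(j / 2 + 1) for j in range(1, N + 1)])

lam = rng.standard_normal((M, N))
vdm = np.ones(M)
for j in range(N):
    for k in range(j + 1, N):
        vdm *= np.abs(lam[:, k] - lam[:, j])
Z1 = const * vdm.mean()   # should be close to 1
```

The estimate should sit within a few Monte Carlo standard errors of $1$.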
\subsection{Skew-orthogonal polynomials}
\label{sec:GOE_sops_odd}
Since (\ref{eqn:GOE_GPF_odd}) contains a Pfaffian, it will be simplest to calculate if we pick polynomials that skew-diagonalise the matrix as in the even case. However, we note that with the polynomials (\ref{eqn:sops}) we have $\nu_i\big|_{u=1} \neq 0$ for $1\leq i \leq N$; instead the matrix is of the form
\begin{align}
\label{eqn:skew_diag_mat_odd} \mathbf{A}_o=\left[\begin{array}{ccc}
\mathbf{A} & \mathbf{0}_{N-1} & \mathbf{b}_{N-1}\\
\mathbf{0}_{N-1}^T & 0 & b_N\\
-\mathbf{b}_{N-1}^T & -b_N & 0
\end{array}\right]
\end{align}
where $\mathbf{A}$ is given by (\ref{eqn:skew_diag_mat}) and $\mathbf{b}_{N-1}=[b_1\; b_2\;\cdot\cdot\cdot\; b_{N-1}]^T$. So this matrix differs from that of (\ref{eqn:skew_diag_mat}) in that it contains two extra rows and columns that border the $(N-1)\times (N-1)$ skew-diagonal matrix. While this matrix is not strictly skew-diagonal, it will serve our turn since by Laplace expansion
\begin{align}
\nonumber \mathrm{Pf} \mathbf{A}_o = b_N\; \prod^{(N-1)/2}_{j=1}a_j,
\end{align}
and so we say the matrix $\mathbf{A}_o$ is \textit{odd skew-diagonal}. However, a key difference between the even and odd skew-diagonal matrices is their inverses. An even skew-diagonal matrix can be written as $\mathbf{D}\mathbf{Z}^{-1}_N$ where $\mathbf{D}$ is some diagonal matrix with every non-zero element repeated, and so its inverse is simply $\mathbf{Z}_N\mathbf{D}^{-1}$ (a fact that was exploited in Proposition \ref{prop:pf_integ_op}). Yet an odd skew-diagonal matrix $\mathbf{A}_o$ cannot be decomposed in such a fashion; the inverse is of the more complicated form
\begin{align}
\label{eqn:skew_inverse_odd} \mathbf{A}_o^{-1}=\left[\begin{array}{ccc}
\mathbf{A}^{-1} & \mathbf{c}_{N-1} & \mathbf{0}_{N-1}\\
-\mathbf{c}_{N-1}^T & 0 & -b_{N}^{-1}\\
\mathbf{0}_{N-1}^T & b_N^{-1} & 0
\end{array}\right]
\end{align}
where $\mathbf{c}_{N-1}=[\frac{b_2}{a_1b_N}\;\frac{-b_1}{a_1b_N}\;\frac{b_4}{a_2b_N}\;\frac{-b_3}{a_2b_N}\;\cdot\cdot\cdot\; \frac{b_{N-1}}{a_{(N-1)/2}b_N}\; \frac{-b_{N-2}}{a_{(N-1)/2}b_N}]^T$, although we note that $\mathbf{A}_o^{-1}$ is still anti-symmetric and the Pfaffian is
\begin{align}
\label{eqn:Pfinvo} \mathrm{Pf} \mathbf{A}_o^{-1} = \left( b_N\; \prod^{(N-1)/2}_{j=1}a_j\right)^{-1},
\end{align}
as we should expect.
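These structural claims are easy to check numerically. The following numpy sketch (the values of $a_j$ and $b_j$ are arbitrary, chosen only for illustration) builds $\mathbf{A}_o$ for $N=5$, verifies $(\mathrm{Pf}\,\mathbf{A}_o)^2=\det\mathbf{A}_o=(b_N\prod_j a_j)^2$, and confirms the bordered structure (\ref{eqn:skew_inverse_odd}) of the inverse, including the leading entries of $\mathbf{c}_{N-1}$:

```python
import numpy as np

# Build the odd skew-diagonal matrix A_o for N = 5 (so A_o is 6 x 6) with
# arbitrary illustrative values a_j, b_j, and check Pfaffian and inverse.
N = 5
a = np.array([2.0, 3.0])                    # skew-diagonal entries of A
b = np.array([0.5, -1.5, 2.5, 0.7, 1.2])    # b_1, ..., b_N

A = np.zeros((N - 1, N - 1))
for j, aj in enumerate(a):                  # 2x2 blocks [[0, a_j], [-a_j, 0]]
    A[2 * j, 2 * j + 1] = aj
    A[2 * j + 1, 2 * j] = -aj

Ao = np.zeros((N + 1, N + 1))
Ao[:N - 1, :N - 1] = A
Ao[:N - 1, N] = b[:N - 1]                   # last column: b_{N-1} vector
Ao[N - 1, N] = b[N - 1]                     # ... then b_N
Ao[N, :N - 1] = -b[:N - 1]                  # last row: -b_{N-1}^T, -b_N
Ao[N, N - 1] = -b[N - 1]

pf_sq = np.linalg.det(Ao)                   # det = Pf^2 for antisymmetric Ao
pf_claimed = b[-1] * np.prod(a)             # Pf A_o = b_N * a_1 * ... * a_{(N-1)/2}
inv = np.linalg.inv(Ao)
```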
\subsection{GOE $n$-point correlations, $N$ odd}
\begin{definition}
\label{def:GOE_correln_kernel_odd}
With the definitions as used in Definition \ref{def:GOE_correln_kernel} and $\nu_k$ as defined in (\ref{def:GOE_nu}), let $\mathbf{f}_{\mathrm{odd}}(x,y)$ be the $2\times 2$ matrix
\begin{align}
\label{eqn:GOE_correln_kernel_odd} \mathbf{f}_{\mathrm{odd}}(x,y)=\left[\begin{array}{cc}
S_{\mathrm{odd}}(x,y) & \tilde{I}_{\mathrm{odd}}(x,y)\\
D_{\mathrm{odd}}(x,y) & S_{\mathrm{odd}}(y,x)
\end{array}\right],
\end{align}
where
\begin{align}
\nonumber S_{\mathrm{odd}}(x,y)&=\sum_{j=0}^{(N-1)/2-1}\frac{e^{-y^2/2}}{r_j}\Big( \hat{\Phi}_{2j}(x)\hat{R}_{2j+1}(y)-\hat{\Phi}_{2j+1}(x)\hat{R}_{2j}(y)\Big)\\
\nonumber &+\frac{e^{-y^2/2}}{\bar{\nu}_N}R_{N-1}(y),\\
\nonumber D_{\mathrm{odd}}(x,y)&=\sum_{j=0}^{(N-1)/2-1}\frac{e^{-(x^2+y^2)/2}}{r_j}\Big( \hat{R}_{2j}(x)\hat{R}_{2j+1}(y)-\hat{R}_{2j+1}(x)\hat{R}_{2j}(y)\Big),\\
\nonumber \tilde{I}_{\mathrm{odd}}(x,y)&:=I_{\mathrm{odd}}(x,y)+\frac{1}{2}\mathrm{sgn}(y-x)\\
\nonumber &=\sum_{j=0}^{(N-1)/2-1}\frac{1}{r_j}\Big( \hat{\Phi}_{2j+1}(x)\hat{\Phi}_{2j}(y)-\hat{\Phi}_{2j}(x)\hat{\Phi}_{2j+1}(y)\Big)+\frac{1}{2}\mathrm{sgn}(y-x)\\
\nonumber &+\frac{1}{\bar{\nu}_N}\Big( \Phi_{N-1}(x)-\Phi_{N-1}(y) \Big),
\end{align}
with $\bar{\nu}_{j}:=\nu_j\big|_{u=1}$ and
\begin{align}
\label{eqn:Rhat_GOE} \hat{R}_j(x)&:=R_j(x)-\frac{\bar{\nu}_{j+1}}{\bar{\nu}_N} R_{N-1}(x),\\
\nonumber \hat{\Phi}_j(x)&:=\frac{1}{2}\int_{-\infty}^{\infty}dy\: \hat{R}_j(y)\: e^{-y^2/2}\: \mathrm{sgn}(x-y).
\end{align}
\end{definition}
Again, the Pfaffian equivalent is
\begin{align}
\mathbf{f}_{\mathrm{odd}}(x,y)\mathbf{Z}_2^{-1}=\left[\begin{array}{cc}
-\tilde{I}_{\mathrm{odd}}(x,y) & S_{\mathrm{odd}}(x,y)\\
-S_{\mathrm{odd}}(y,x) & D_{\mathrm{odd}}(x,y)
\end{array}\right].
\end{align}
To give away the ending, we will find that the correlations for the odd case are given by (\ref{eqn:correlns_GOE}) with $\mathbf{f}(x,y)$ replaced by $\mathbf{f}_{\mathrm{odd}}(x,y)$. The obvious way to obtain this result is to repeat the calculations of Chapter \ref{sec:correlns_even_GOE} using $Z_{N\:\mathrm{odd}}[u]$ of (\ref{eqn:GOE_GPF_odd}) instead of $Z_N[u]$ from (\ref{eqn:GOE_GPF_even}). The presentation of this method in the proof of Proposition \ref{prop:pf_integ_op} is such that, with a minor modification, we can proceed in the same fashion, highlighting the structural similarity between the even and odd cases.
A perhaps more elegant approach is to use the known result for $N$ even and combine it with the physical intuition that a system containing an even number of interacting particles will tend to a system with one fewer particles if one of them is removed to infinity. This process works well for eigenvalues in an open set (such as here and in the Ginibre ensembles of Chapter \ref{sec:GinOE}); however, if the eigenvalues are contained in a compact set (such as the spherical ensemble of Chapter \ref{sec:SOE}) then this method does not seem applicable.
\subsubsection{Functional differentiation method}
In the following proposition we will modify the proof of Proposition \ref{prop:pf_integ_op} to produce the odd analogue. The required modification is essentially the addition of an extra column to the matrix $\mathbf{E}$ in (\ref{eqn:omega_E}). This approach is similar to that in \cite{Sinc09} where use was made of (\ref{eqn:rains}) while we use (\ref{eqn:1+AB}), although in that paper the matrix equivalent to $\mathbf{E}$ (labelled $\mathbf{A}$) is of much larger size: $(N+1)\times 2T$ where $T$ is some integer larger than $2N$.
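In finite dimensions, (\ref{eqn:1+AB}) is the familiar determinant identity $\det(\mathbf{1}_m+\mathbf{A}\mathbf{B})=\det(\mathbf{1}_n+\mathbf{B}\mathbf{A})$ for rectangular $\mathbf{A}$, $\mathbf{B}$. A quick random-matrix check with the shapes relevant here ($n=2$, a sketch only):

```python
import numpy as np

# det(1_m + A B) = det(1_n + B A) for A (m x n) and B (n x m); m = N + 1 and
# n = 2 mirror the shapes of the operator A and of B = E_odd in the proof below.
rng = np.random.default_rng(1)
m, n = 6, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))
lhs = np.linalg.det(np.eye(m) + A @ B)
rhs = np.linalg.det(np.eye(n) + B @ A)
```

This is what lets an $(N+1)$-dimensional determinant collapse to a $2\times 2$ one.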
\begin{proposition}
\label{prop:pf_integ_op_odd}
Let $\gamma_{jk}$ be as in (\ref{eqn:gammajk}), $\nu_k$ as in (\ref{def:GOE_nu}), with $\bar{\nu}_k:=\nu_k\big|_{u=1}$. Then, using the skew-orthogonal polynomials of Proposition \ref{prop:GOE_soip}, we have
\begin{align}
\label{prop:integ_op_odd} \mathrm{Pf} \left[\begin{array}{cc}
[\gamma_{jk}] & [\nu_j]\\
\left[-\nu_k\right] & 0
\end{array}\right]_{j,k=1,...,N}=\left(\bar{\nu}_N \prod_{j=0}^{(N-1)/2-1}r_j\right) \mathrm{qdet} [\mathbf{1}_2+\mathbf{f}_{\mathrm{odd}}^T(\mathbf{u}-\mathbf{1}_2)],
\end{align}
where $\mathbf{f}_{\mathrm{odd}}^T(\mathbf{u}-\mathbf{1}_2)$ is the matrix integral operator with kernel $\mathbf{f}_{\mathrm{odd}}^T(x,y)\; \mathrm{diag}[u(y)-1,u(y)-1]$.
\end{proposition}
\textit{Proof}:
The proof for the odd case proceeds along the same lines as for the even case in Proposition \ref{prop:pf_integ_op}: we look for a pair of matrices $\mathbf{A}$ and $\mathbf{B}$ such that the matrix on the left hand side of (\ref{prop:integ_op_odd}) can be expressed in terms of $\mathbf{1}+\mathbf{A}\mathbf{B}$, and then apply (\ref{eqn:1+AB}).
First, for convenience, we define
\begin{align}
\nonumber \mathbf{C}:=\left[\begin{array}{cc}
[\gamma_{jk}] & [\nu_j]\\
\left[-\nu_k\right] & 0
\end{array}\right]_{j,k=1,...,N}.
\end{align}
Then with $\psi_j(x):=e^{-x^2/2}R_{j-1}(x)$ and $u=\sigma+1$ as in Proposition \ref{prop:pf_integ_op} we have
\begin{align}
\nonumber \mathbf{C}=\mathbf{C}^{(1)}-\mathbf{C}^{(\sigma)},
\end{align}
where $\mathbf{C}^{(1)}_{jk}:=\mathbf{C}_{jk}\Big|_{u=1}$,
\begin{align}
\nonumber \mathbf{C}^{(\sigma)}_{jk}:=\int_{-\infty}^{\infty}\Big(\sigma(x)\psi_j(x)\epsilon \psi_k[x]-\sigma(x)\psi_k(x) \epsilon\psi_j[x]-\sigma(x) \psi_k(x) \epsilon(\sigma \psi_j)[x] \Big)dx
\end{align}
for $1\leq j<k\leq N$ and
\begin{align}
\label{eqn:oddCnu} \mathbf{C}^{(\sigma)}_{j,N+1}=-\int_{-\infty}^{\infty}e^{-x^2/2}\sigma(x)R_{j-1}(x)\; dx
\end{align}
for $1\leq j\leq N$, with the remaining elements being established by the anti-symmetry of $\mathbf{C}$. Clearly
\begin{align}
\nonumber \mathrm{Pf}\; \mathbf{C}&=\mathrm{Pf}\Big(\mathbf{C}^{(1)}\big(\mathbf{1}_{N+1}-(\mathbf{C}^{(1)})^{-1}\mathbf{C}^{(\sigma)}\big)\Big).
\end{align}
From the discussion at the beginning of Chapter \ref{sec:GOE_sops_odd} we know that with the skew-orthogonal polynomials (\ref{eqn:sops}) $\mathbf{C}^{(1)}$ is of the form (\ref{eqn:skew_diag_mat_odd}) and not of the form (\ref{eqn:skew_diag_mat}) and so we cannot apply Corollary \ref{cor:qdet=pf}. Instead we square both sides to obtain
\begin{align}
\det \mathbf{C} = \left(\bar{\nu}_N \prod_{j=0}^{(N-1)/2-1}r_j\right)^2 \det \Big(\mathbf{1}_{N+1}-(\mathbf{C}^{(1)})^{-1}\mathbf{C}^{(\sigma)}\Big).
\end{align}
Recall $\Omega$ from (\ref{eqn:omega_E}) and extend the $2\times N$ matrix $\mathbf{E}$ with an extra column, defining
\begin{align}
\label{def:Eodd} \mathbf{E}_{\mathrm{odd}}:=\left[
\begin{array}{cccc}
\psi_1(y) & \cdot \cdot\cdot & \psi_N(y)&0\\
\epsilon\psi_1[y] & \cdot \cdot\cdot & \epsilon\psi_N[y]&-1
\end{array}
\right].
\end{align}
In analogy with the even case, we let $\mathbf{A}$ be the $(N+1)\times 2$ integral operator on $(-\infty,\infty)$ with kernel $(\mathbf{C}^{(1)})^{-1}\sigma(y)(\Omega \mathbf{E}_{\mathrm{odd}})^T$, where now $(\mathbf{C}^{(1)})^{-1}$ has the structure (\ref{eqn:skew_inverse_odd}). Carrying out the explicit computation of the kernel of $\mathbf{A}$ we find the `hat' structure of (\ref{eqn:Rhat_GOE}) emerges naturally, so we define $\hat{\psi}$ and $\hat{G}$ by the definitions used in Proposition \ref{prop:pf_integ_op} with $R$ replaced by $\hat{R}$.
The kernel of $\mathbf{A}$ is then the $(N+1)\times 2$ matrix
\begin{align}
\left[\begin{array}{cc}
\nonumber [-\frac{\sigma(y)}{r_{\lfloor (j-1)/2 \rfloor}}\big(\epsilon \hat{G}_j[y]+\epsilon(\sigma \hat{G}_j)[y]\big)]&[\frac{\sigma (y)}{r_{\lfloor (j-1)/2 \rfloor}}\hat{G}_j(y)]\\
\sigma(y)\sum_{k=0}^{N-2}X_k\big(\epsilon G_k[y]+\epsilon(\sigma G_k)[y] \big)+\frac{\sigma(y)}{\bar{\nu}_N} & -\sigma(y)\sum_{k=0}^{N-2} X_k G_k(y)\\
\frac{\sigma(y)}{\bar{\nu}_N}\big(\epsilon \psi_N[y]+\epsilon(\sigma \psi_N)[y]\big)&-\frac{\sigma(y)}{\bar{\nu}_N}\psi_N(y)
\end{array}\right]_{j=1,...,N-1},
\end{align}
where $X_k:=\bar{\nu}_k/(r_{\lfloor(k-1)/2\rfloor}\bar{\nu}_N)$. With $\mathbf{B}=\mathbf{E}_{\mathrm{odd}}$ we then have $\mathbf{1}_{N+1}-(\mathbf{C}^{(1)})^{-1}\mathbf{C}^{(\sigma)}=\mathbf{1}_{N+1}+\mathbf{A}\mathbf{B}$. Applying (\ref{eqn:1+AB}), we find that $\mathbf{1}_{2}+\mathbf{B}\mathbf{A}$ equals
\begin{align}
\label{eqn:1+ba1_odd}\left[\begin{array}{cc}
1+\kappa_{1,1} & \kappa_{1,2}\\
\kappa_{2,1} & 1+\kappa_{2,2}
\end{array}\right],
\end{align}
where
\begin{align}
\nonumber \kappa_{1,1}=&-\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\Big( \hat{\psi}_j\otimes \sigma \epsilon \hat{G}_j+\hat{\psi}_j\otimes \sigma \epsilon (\sigma \hat{G}_j)\Big) + \frac{1}{\bar{\nu}_N}\psi_N\otimes \sigma,\\
\nonumber \kappa_{1,2}=&-\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\Big(\epsilon \hat{\psi}_j\otimes \sigma\epsilon \hat{G}_j+\epsilon \hat{\psi}_j\otimes \sigma \epsilon(\sigma \hat{G}_j) \Big) +\frac{1}{\bar{\nu}_N}\epsilon\psi_N\otimes \sigma\\
\nonumber &-\frac{1}{\bar{\nu}_N}\left( 1\otimes\sigma\epsilon(\sigma\psi_N) +1\otimes\sigma\epsilon\psi_N\right),\\
\nonumber \kappa_{2,1}=&\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\psi}_j\otimes \sigma \hat{G}_j,\\
\nonumber \kappa_{2,2}=&\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\psi}_j\otimes \sigma \hat{G}_j+\frac{1}{\bar{\nu}_N}1\otimes\sigma\psi_N.
\end{align}
With the decomposition of $\Omega^T$ given by (\ref{eqn:OT_decomp}) we factorise (\ref{eqn:1+ba1_odd}) as
{\small
\begin{align}
\nonumber \left[\begin{array}{c}
1-\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}} \hat{\psi}_j\otimes \sigma \epsilon \hat{G}_j + \frac{1}{\bar{\nu}_N}\psi_N\otimes \sigma\\
-\epsilon\sigma-\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\psi}_j\otimes \sigma\epsilon \hat{G}_j +\frac{1}{\bar{\nu}_N}\left( \epsilon\psi_N\otimes\sigma -1\otimes\sigma\epsilon\psi_N\right)
\end{array}
\right.
\end{align}
\begin{align}
\left.
\label{eqn:1+ba2_odd} \begin{array}{c}
\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\psi}_j\otimes \sigma \hat{G}_j\\
1+\sum_{j=1}^{N-1}\frac{1}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\psi}_j\otimes \sigma \hat{G}_j+\frac{1}{\bar{\nu}_N}1\otimes\sigma\psi_N
\end{array}\right]\left[
\begin{array}{cc}
1 & 0\\
\epsilon\sigma & 1
\end{array}
\right],
\end{align}
}which, as we saw in (\ref{eqn:1+ba1}) and (\ref{eqn:1+ba2}), eliminates the apparent complication of the factor $\epsilon\sigma$.
The matrix on the right in (\ref{eqn:1+ba2_odd}) has determinant $1$ and, recalling that $\mathbf{f}_{\mathrm{odd}}$ is invariant under the simultaneous replacements $\tilde{I}(x,y)\rightarrow -\tilde{I}(x,y),D(x,y)\rightarrow -D(x,y)$, we have established the square of (\ref{prop:integ_op_odd}). Applying Corollary \ref{cor:sqrtFreds} then gives the result.
\hfill $\Box$
By comparing (\ref{eqn:1+ba1_odd}) and (\ref{eqn:1+ba2_odd}) to their counterparts (\ref{eqn:1+ba1}) and (\ref{eqn:1+ba2}) in the even case the similarity in the methods used is clear, which highlights the reason for this particular presentation. The key difference was that the matrix in the generalised partition function was skew-diagonalised in the even case, but not in the odd case.
\begin{corollary}
With $Z_{N\:\mathrm{odd}}[u]$ as in Proposition \ref{prop:GOE_gen_part_fn_odd} we have
\begin{align}
\nonumber Z_{N\:\mathrm{odd}}[u]=\frac{N!}{2^{N+1/2}}\prod_{j=1}^N\frac{1}{\Gamma(j/2+1)}\left(\bar{\nu}_N\prod_{j=0}^{(N-1)/2-1}r_j\right) \mathrm{qdet} [\mathbf{1}_2+\mathbf{f}_{\mathrm{odd}}^T(\mathbf{u}-\mathbf{1}_2)].
\end{align}
\end{corollary}
\textit{Proof}: Substitute (\ref{prop:integ_op_odd}) into (\ref{eqn:GOE_GPF_odd}).
\hfill $\Box$
Now, applying functional differentiation as in Proposition \ref{prop:correlns_GOE_even}, we find the $N$ odd correlations.
\begin{proposition}
\label{prop:correlns_GOE_odd}
With $\mathbf{f}_{\mathrm{odd}}(x_j,x_k)$ from Definition \ref{def:GOE_correln_kernel_odd} the $n$th-order correlation function for GOE, with $N$ odd, is
\begin{align}
\nonumber \rho_{(n)}(x_1,...,x_n)&=\mathrm{qdet} [\mathbf{f}_{\mathrm{odd}}(x_l,x_m)]_{l,m=1,...,n}\\
\nonumber &=\mathrm{Pf} [\mathbf{f}_{\mathrm{odd}}(x_l,x_m)\mathbf{Z}_2^{-1}]_{l,m=1,...,n}.
\end{align}
\end{proposition}
\subsubsection{Odd from even}
\label{sec:odd_from_even}
An alternative approach to the problem of deducing $N$ odd correlations is to take the known result in the even case and then somehow generate the odd case from that. To this end we can imagine that if one of the eigenvalues is removed to infinity, then we essentially have two independent systems: one of $N-1$ eigenvalues, and one with a single eigenvalue. The probability function is then the product of the individual probabilities. So the calculation of the odd case from that of the even with this `eigenvalue off to infinity' method will be a useful strategy if there exists an $f_N$ such that (\ref{eqn:GOE_eval_jpdf}) exhibits the factorisation
\begin{align}
\label{eqn:jpdfOEfactorisation}
\fullsub{Q(\vec{\lambda})}{\sim}{|\lambda_N|\rightarrow\infty}{f_N(\lambda_N)\; Q(\lambda_1,...,\lambda_{N-1}).}
\end{align}
Note that for finite $N$
\begin{align}
\nonumber Q(\vec{\lambda})&=2^{-3(N-1)/2}\prod_{j=1}^{N-1}\frac{e^{-\lambda_j^2/2}}{\Gamma(j/2+1)}\prod_{1\leq j < k \leq N-1}|\lambda_k-\lambda_j|\\
\nonumber &\times 2^{-3/2}\frac{e^{-\lambda_N^2/2}}{\Gamma (N/2+1)}\prod_{j=1}^{N-1}|\lambda_N-\lambda_j |.
\end{align}
So with
\begin{align}
\label{eqn:fN} f_N(x)=2^{-3/2}\frac{e^{-x^2/2}x^{N-1}}{\Gamma (N/2+1)}
\end{align}
(\ref{eqn:GOE_eval_jpdf}) satisfies (\ref{eqn:jpdfOEfactorisation}), and using (\ref{eqn:integ_correlns}) we then have
\begin{align}
\label{eqn:correlnOEfactorisation}
\fullsub{\rho_{(m)}^N(r_1,...,r_m)}{\sim}{|r_m|\rightarrow\infty}{Nf_N(r_m)\hspace{3pt}\rho_{(m-1)}^{N-1}(r_1,...,r_{m-1}),}
\end{align}
where the superscripts refer to the number of variables in the relevant distribution function. We now seek an interpretation of $N f_N(r_m)$.
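Before interpreting $Nf_N(r_m)$, the factorisation (\ref{eqn:jpdfOEfactorisation}) with $f_N$ of (\ref{eqn:fN}) can be illustrated numerically: for $N=3$ the ratio $Q(\lambda_1,\lambda_2,\lambda_3)/(f_3(\lambda_3)\,Q(\lambda_1,\lambda_2))$ reduces to $\prod_{j<3}|\lambda_3-\lambda_j|/\lambda_3^2$, which approaches $1$ as $\lambda_3\to\infty$ (a sketch; the test points are arbitrary):

```python
import numpy as np
from math import gamma

# Ratio Q(l1, l2, l3) / (f_3(l3) * Q(l1, l2)): the Gaussian and Gamma factors
# cancel, leaving prod_j |l3 - l_j| / l3^2 -> 1 as l3 -> infinity.
# l3 is kept moderate to avoid floating-point underflow of exp(-l3^2/2).
def Q(lams):
    N = len(lams)
    out = 2.0**(-1.5 * N)
    for j, l in enumerate(lams, start=1):
        out *= np.exp(-l * l / 2) / gamma(j / 2 + 1)
    for j in range(N):
        for k in range(j + 1, N):
            out *= abs(lams[k] - lams[j])
    return out

def f(N, x):
    return 2.0**(-1.5) * np.exp(-x * x / 2) * x**(N - 1) / gamma(N / 2 + 1)

l1, l2 = -0.7, 0.4
ratios = [Q([l1, l2, l3]) / (f(3, l3) * Q([l1, l2])) for l3 in (5.0, 20.0)]
```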
\begin{lemma}
\label{lem:xm_to_infty}
Let
\begin{align}
\label{def:gen_jpdf} \mathcal{Q}(x_1,...,x_N):=\frac{1}{C_N}\prod_{j=1}^N e^{-V(x_j)}\prod_{1\leq j < k \leq N}|x_k-x_j|
\end{align}
be the joint probability density function of the variables $x_1,...,x_N$ and define
\begin{align}
\label{def:gen_fN} \mathcal{F}_N(x):=\frac{C_{N-1}}{C_N}x^{N-1}e^{-V(x)}.
\end{align}
Then, with $r:=x_1$, we have
\begin{align}
\nonumber \fullsub{\rho_{(1)}(r)}{\sim}{|r|\rightarrow\infty}{N\mathcal{F}_N(r).}
\end{align}
\end{lemma}
\textit{Proof}: Applying (\ref{eqn:integ_correlns}) to (\ref{def:gen_jpdf}) yields
{\small
\begin{align}
\nonumber \rho_{(1)}(r)&=N\frac{e^{-V(r)}}{C_N}\int_{-\infty}^{\infty}dx_2 \cdot\cdot\cdot \int_{-\infty}^{\infty} dx_N\prod_{j=2}^N |r-x_j|\; e^{-V(x_j)}\prod_{2\leq j < k \leq N}|x_k-x_j|\\
\nonumber &\mathop{\sim}\limits_{|r|\to\infty}N\frac{C_{N-1}}{C_N}r^{N-1}e^{-V(r)}\frac{1}{C_{N-1}}\int_{-\infty}^{\infty}dx_2\cdot\cdot\cdot\int_{-\infty}^{\infty}dx_N\\
\label{eqn:fN_proof} &\quad \times \prod_{j=2}^N e^{-V(x_j)}\prod_{2\leq j < k \leq N}|x_k-x_j|.
\end{align}
}Since $\mathcal{Q}(x_1,...,x_N)$ was defined as a probability density function in (\ref{def:gen_jpdf}) (that is, the integral is normalised to 1), the integral in (\ref{eqn:fN_proof}) equals $C_{N-1}$ and we have the result.
\hfill $\Box$
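As an illustration of the lemma (a sketch only, with $V(x)=x^2/2$ and $N=2$), note that in the ratio $\rho_{(1)}(r)/(N\mathcal{F}_N(r))$ the constant $C_2$ cancels, leaving $\int|r-y|e^{-y^2/2}\,dy/(rC_1)$, which simple grid quadrature shows tending to $1$:

```python
import numpy as np

# For N = 2, V(x) = x^2/2, the ratio rho_(1)(r) / (N F_N(r)) simplifies to
# int |r - y| e^{-y^2/2} dy / (r C_1), since C_2 cancels.  Grid quadrature:
y = np.linspace(-30.0, 30.0, 600_001)
dy = y[1] - y[0]
w = np.exp(-y * y / 2)
C1 = np.sum(w) * dy                          # C_1 = int e^{-V} = sqrt(2 pi)

def ratio(r):
    return np.sum(np.abs(r - y) * w) * dy / (r * C1)

vals = [ratio(r) for r in (1.0, 2.0, 4.0)]   # should decrease towards 1
```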
With (\ref{eqn:correlnOEfactorisation}) and Lemma \ref{lem:xm_to_infty} we have
\begin{equation}
\label{eqn:finalOEfactorisation}
\fullsub{\rho_{(m)}^N(r_1,...,r_m)}{\sim}{|r_m|\rightarrow\infty}{\rho_{(1)}^N(r_m)\rho_{(m-1)}^{N-1}(r_1,...,r_{m-1}),}
\end{equation}
with the minor caveat that when the eigenvalues are ordered (as they are in this case) one must be careful to remove only the largest eigenvalue off to infinity; however, this amounts to nothing more than a relabelling.
So we see that from knowledge of the $m$-point correlation with $N$ even, we can find the $(m-1)$-point correlation with $N$ odd, by factoring out the density of the largest eigenvalue and taking the limit. Our task now is to use (\ref{eqn:finalOEfactorisation}) to deduce Proposition \ref{prop:correlns_GOE_odd} from Proposition \ref{prop:correlns_GOE_even}; and for that we begin with humble row and column reduction.
Recalling (\ref{def:GOE_Pfcorrelnk}) we write out the Pfaffian in (\ref{eqn:correlns_GOE}), explicitly identifying the last two rows and columns, as follows
\begin{align}
\nonumber &\rho_{(m)}(x_1,...,x_m)=\\
\nonumber &\mathrm{Pf}\left[\begin{array}{cc}
\left[\begin{array}{cc}
-\tilde{I}(x_i,x_j) & S(x_i,x_j)\\
-S(x_j,x_i) & D(x_i,x_j)\\
\end{array}\right] & \left[\begin{array}{cc}
-\tilde{I}(x_i,x_m) & S(x_i,x_m)\\
-S(x_m,x_i) & D(x_i,x_m)\\
\end{array}\right]\\
&\\
\left[\begin{array}{cc}
-\tilde{I}(x_m,x_j) & S(x_m,x_j)\\
-S(x_j,x_m) & D(x_m,x_j)\\
\end{array}\right] & \left[\begin{array}{cc}
0 & S(x_m,x_m)\\
-S(x_m,x_m) & 0\\
\end{array}\right]\\
\end{array}\right]_{i,j=1,...,m-1},
\end{align}
for some fixed $m$. This matrix consists of four submatrices of sizes
\begin{itemize}
\item{Top left: $2(m-1)\times 2(m-1)$,}
\item{Top right: $2(m-1)\times 2$,}
\item{Bottom left: $2 \times 2(m-1)$,}
\item{Bottom right: $2 \times 2$.}
\end{itemize}
Applying elementary row and column operations yields
\begin{align}
\nonumber &\rho_{(m)}(x_1,...,x_m)=\\
\nonumber &S(x_m,x_m)\;\mathrm{Pf}\left[\begin{array}{cc}
\left[\begin{array}{cc}
\vspace{3pt}-\tilde{I}^*(x_i,x_j) & S^*(x_i,x_j)\\
-S^*(x_j,x_i) & D^*(x_i,x_j)\\
\end{array}\right] & \left[\begin{array}{cc}
\vspace{3pt}-\tilde{I}(x_i,x_m) & 0\\
-S(x_m,x_i) & 0\\
\end{array}\right]\\
&\\
\left[\begin{array}{cc}
\vspace{3pt}-\tilde{I}(x_m,x_j) & S(x_m,x_j)\\
0 & 0\\
\end{array}\right] & \left[\begin{array}{cc}
\vspace{3pt}0 & 1\\
-1 & 0\\
\end{array}\right]\\
\end{array}\right]_{i,j=1,...,m-1}\\
\label{eqn:Pf3}&=S(x_m,x_m)\;\mathrm{Pf}
\left[\begin{array}{cc}
\vspace{3pt}-\tilde{I}^*(x_i,x_j) & S^*(x_i,x_j)\\
-S^*(x_j,x_i) & D^*(x_i,x_j)\\
\end{array}\right]_{i,j=1,...,m-1},
\end{align}
where
\begin{align}
\nonumber D^*(x_i,x_j)&:=D(x_i,x_j)-\frac{D(x_i,x_m)S(x_m,x_j)}{S(x_m,x_m)}-\frac{S(x_m,x_i)D(x_m,x_j)}{S(x_m,x_m)},\\
\nonumber S^*(x_i,x_j)&:=S(x_i,x_j)-\frac{S(x_i,x_m)S(x_m,x_j)}{S(x_m,x_m)}-\frac{D(x_m,x_j)\tilde{I}(x_i,x_m)}{S(x_m,x_m)},\\
\nonumber \tilde{I}^*(x_i,x_j)&:=\tilde{I}(x_i,x_j)-\frac{S(x_i,x_m)\tilde{I}(x_m,x_j)}{S(x_m,x_m)}-\frac{S(x_j,x_m)\tilde{I}(x_i,x_m)}{S(x_m,x_m)}.
\end{align}
The second equality in (\ref{eqn:Pf3}) can be seen by using the Laplace expansion method for Pfaffians discussed in Chapter \ref{sec:qdets_pfs}. Recalling (\ref{eqn:1pt_correln}) we see that (\ref{eqn:Pf3}) factors out $\rho_{(1)}^N(x_m)$ as required by (\ref{eqn:finalOEfactorisation}). To reclaim Proposition \ref{prop:correlns_GOE_odd} we must then have
\begin{align}
\nonumber D^*(x_i,x_j)\Big|_{x_m\to\infty}&=D_{\mathrm{odd}}(x_i,x_j)\Big|_{N\to N-1},\\
\nonumber S^*(x_i,x_j)\Big|_{x_m\to\infty}&=S_{\mathrm{odd}}(x_i,x_j)\Big|_{N\to N-1},\\
\label{eqn:DSI*} \tilde{I}^*(x_i,x_j)\Big|_{x_m\to\infty}&=\tilde{I}_{\mathrm{odd}}(x_i,x_j)\Big|_{N\to N-1},
\end{align}
which is easily established when we note from Definition \ref{def:GOE_correln_kernel} that as $x_m\rightarrow \infty$
\begin{align}
\nonumber S(x_m,x_m)&\rightarrow \frac{e^{-x_m^2/2}}{r_{N/2-1}}R_{N-1}(x_m)\hspace{3pt}\frac{1}{2}\bar{\nu}_{N-1},\\
\nonumber D(x_i,x_m)&\rightarrow \frac{e^{-(x_i^2+x_m^2)/2}}{r_{N/2-1}}R_{N-2}(x_i)\hspace{3pt}R_{N-1}(x_m),\\
\nonumber S(x_i,x_m)&\rightarrow \frac{e^{-x_m^2/2}}{r_{N/2-1}}\Phi_{N-2}(x_i)\hspace{3pt}R_{N-1}(x_m),\\
\nonumber S(x_m,x_i)&\rightarrow \sum_{k=0}^{N/2-1}\frac{e^{-x_i^2/2}}{r_k}\Bigl[R_{2k+1}(x_i)\hspace{3pt} \frac{1}{2}\bar{\nu}_{2k+1} - R_{2k}(x_i)\hspace{3pt} \frac{1}{2}\bar{\nu}_{2k+2}\Bigr],\\
\label{SDI_lims} \tilde{I}(x_i,x_m)&\rightarrow \sum_{k=0}^{N/2-1}\frac{1}{r_k}\Bigl[\Phi_{2k+1}(x_i)\hspace{3pt} \frac{1}{2}\bar{\nu}_{2k+1} - \Phi_{2k}(x_i)\hspace{3pt} \frac{1}{2}\bar{\nu}_{2k+2}\Bigr]+\frac{1}{2}.
\end{align}
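All of the limits in (\ref{SDI_lims}) rest on $\Phi_j(x)\to\frac{1}{2}\bar{\nu}_{j+1}$ as $x\to\infty$, a mechanism independent of the particular polynomials involved. A quadrature sketch, with monomials standing in for the $R_j$ (an assumption made purely for illustration):

```python
import numpy as np

# Phi_j(x) = (1/2) int sgn(x - y) e^{-y^2/2} R_j(y) dy -> (1/2) nu_bar_{j+1}
# as x -> infinity.  Monomials y^j stand in for the polynomials R_j here;
# the limit only uses sgn(x - y) -> 1 on the support of the weight.
y = np.linspace(-30.0, 30.0, 600_001)
dy = y[1] - y[0]
w = np.exp(-y * y / 2)

def Phi(j, x):
    return 0.5 * np.sum(np.sign(x - y) * w * y**j) * dy

def nu_bar(j):                          # nu_bar_j = int e^{-y^2/2} y^{j-1} dy
    return np.sum(w * y**(j - 1)) * dy

gaps = [abs(Phi(j, 8.0) - 0.5 * nu_bar(j + 1)) for j in range(4)]
```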
\begin{remark}
Note that it may appear to the reader that an error has been made: the limiting forms of $S(x_m,x_i)$ and $\tilde{I}(x_i,x_m)$ contain terms with factors of $\bar{\nu}_{N}$ whilst the odd forms of the kernel elements on the right hand side of (\ref{eqn:DSI*}) (with $N\to N-1$) have only $\bar{\nu}_{N-1}$ terms and lower. This seems to indicate that the sums in (\ref{SDI_lims}) should be restricted to $k=0,...,N/2-2$. However, this is only an apparent problem since the six terms corresponding to $k=N/2-1$ in the limiting forms of $D^*,S^*$ and $\tilde{I}^*$ are conjoined in a conspiracy of cancellation, resolving the problem.
\end{remark}
\newpage
\section{Real asymmetric ensemble}
\setcounter{figure}{0}
\label{sec:GinOE}
In this chapter we modify the Gaussian orthogonal ensemble of Chapter \ref{sec:GOE_steps} by relaxing the symmetry constraint on the elements of the matrices. As discussed in the introduction, the resulting ensemble was first formulated by Ginibre in 1965 \cite{Gi65}, where he also considered non-Hermitian complex and non-self-dual real quaternion matrices. As with the GOE, GUE and GSE these real, complex and real quaternion Ginibre ensembles correspond to $\beta=1,2$ and $4$ respectively. Recall that these ensembles do not obey the same invariance under orthogonal, unitary and symplectic groups, although they are sometimes denoted GinOE, GinUE and GinSE by analogy. In keeping with the theme of this work, we will only be looking at the case $\beta=1$ of real, asymmetric Gaussian matrices.
The effective difference between this ensemble and those considered earlier by Dyson and Mehta is that the eigenvalues are no longer constrained to a one-dimensional support since, in general, a real matrix may have both real and complex conjugate paired eigenvalues. Given that the complex eigenvalues always come in conjugate pairs, we see that the number of real eigenvalues $k$ must be of the same parity as the size $N$ of the matrix. While these facts may be unsurprising to anyone who knows a little linear algebra, they are remarkable here since they mean that the real line is populated with eigenvalues despite having measure zero inside the support of the set of all eigenvalues. We will find in Chapter \ref{sec:Ginkernelts} that the expected number of real eigenvalues is proportional to $\sqrt{N}$ \cite{eks1994}.
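The parity constraint is easy to observe empirically; the following sketch (sample sizes and seed arbitrary) draws standard Gaussian matrices and checks both the conjugation symmetry of the spectrum and that the number of (numerically) real eigenvalues matches the parity of $N$:

```python
import numpy as np

# Spectra of real Gaussian matrices: closed under conjugation, so the number
# of real eigenvalues k always has the parity of N.  (The parity check is
# robust to the tolerance, since any misclassified conjugate pair adds 2.)
rng = np.random.default_rng(2)
parity_ok, symmetry_ok = True, True
for N in (4, 5, 8, 9):
    for _ in range(50):
        ev = np.linalg.eigvals(rng.standard_normal((N, N)))
        k = int(np.sum(np.abs(ev.imag) < 1e-9))
        parity_ok = parity_ok and (k % 2 == N % 2)
        symmetry_ok = symmetry_ok and np.allclose(
            np.sort_complex(ev), np.sort_complex(ev.conj()))
```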
The existence of both real and non-real complex eigenvalues is particular to the $\beta=1$ (real) case of Ginibre's ensembles, and is a significant complication. Indeed in his original paper, Ginibre was able to find the eigenvalue distribution for $\beta=4$ and both the eigenvalue distribution and the correlation functions for $\beta=2$. However, for $\beta=1$ he was only able to calculate the distribution in the restricted case that all the eigenvalues are real --- the full jpdf was not calculated until 1991 in \cite{LS91} and the full correlation functions not until 2008 \cite{FN07, sommers2007, sommers_and_w2008, b&s2009, FM09, Sinc09}. A major source of trouble is that the Dyson integration theorem (Proposition \ref{thm:integral_identities}) does not hold for real, asymmetric matrices, as pointed out in \cite{AK2007}. The effect of this bipartite set of eigenvalues is that the partition function is now a sum over the individual partition functions for each $k$.
Further, as for the GOE, the odd case again presents more difficulties; however, it turns out that we can overcome them in exactly the same way: we find that there is naturally an extra row and column bordering the odd-sized Pfaffian in the generalised partition function. Interestingly, this extra row and column have a more natural interpretation in the present setting: they correspond to the one real eigenvalue that is required to exist in an odd-sized real matrix (recalling from above that the eigenvalues are real or one of a complex conjugate pair).
\subsection{Element distribution}
According to the procedure outlined in Chapter \ref{sec:GOE_steps}, our first task is to specify the matrix element distribution. In the case of the real Ginibre ensemble this is particularly simple: for a matrix $\mathbf{X}=[x_{j,k}]_{j,k=1,...,N}$ each element is independently drawn from a standard Gaussian distribution,
\begin{align}
\label{def:Gin_eldist} \frac{1}{\sqrt{2\pi}}e^{-x_{j,k}^2/2},
\end{align}
meaning that the element jpdf is
\begin{eqnarray}
\label{eqn:GinOE_eldist} P(\mathbf{X})=(2\pi)^{-N^2/2}\prod_{j,k=1}^{N}e^{-x^2_{j,k}/2}=(2\pi)^{-N^2/2}e^{-(\mathrm{Tr}\mathbf{X}\bX^{T})/2}.
\end{eqnarray}
As mentioned above, the real Ginibre ensemble effectively contains two species of eigenvalues: real and non-real complex. To have a clear picture in our mind, we can produce a simulated eigenvalue plot; Figure \ref{fig:GinOE_eval_plot} clearly displays the finite probability of finding real eigenvalues, which were absent in the complex Ginibre ensemble of Figure \ref{fig:RMTDisk}.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.5]{GinOE_eval_plot_50x50_2LowQual.jpg}
\caption[Simulated real Ginibre eigenvalue plot.]{Plot of eigenvalues from 50 independent $75\times 75$ asymmetric real Gaussian matrices (3750 points in total). Note the density of eigenvalues along the real line and the reflective symmetry of the upper and lower half planes.}
\label{fig:GinOE_eval_plot}
\end{center}
\end{figure}
We can also generate plots analogous to Figure \ref{fig:GOE_eval_dens} for the GOE, of the density of these real eigenvalues for varying matrix dimension, see Figure \ref{fig:GinOErdist}.
\begin{figure}[htp]
\begin{center}
\subfloat[][$N=2$, $14206$ reals]{\includegraphics[scale=0.43]{GinN2densHistc.pdf}}
\;\subfloat[][$N=4$, $19490$ reals]{\includegraphics[scale=0.43]{GinN4densHistc.pdf}}
\;\subfloat[][$N=9$, $27960$ reals]{\includegraphics[scale=0.43]{GinN9densHistc.pdf}}
\subfloat[][$N=16$, $36104$ reals]{\includegraphics[scale=0.43]{GinN16densHistc.pdf}}
\;\subfloat[][$N=25$, $44300$ reals]{\includegraphics[scale=0.43]{GinN25densHistc.pdf}}
\;\subfloat[][$N=49$, $60774$ reals]{\includegraphics[scale=0.43]{GinN49densHistc.pdf}}
\subfloat[][$N=100$, $84418$ reals]{\includegraphics[scale=0.43]{GinN100densHistc.pdf}}
\caption[Simulated density of real eigenvalues in the real Ginibre ensemble.]{Real eigenvalue density for 10,000 instances of $N\times N$ real Ginibre matrices; the second value is the number of real eigenvalues in that simulation.}
\label{fig:GinOErdist}
\end{center}
\end{figure}
Note that Figure \ref{fig:GinOErdist} also tells us the average number of real eigenvalues in these simulations for each matrix size; these averages compare favourably to the $\sqrt{N}$ behaviour of (\ref{eqn:bigNEN}), the expected number of real eigenvalues in the large $N$ limit \cite{eks1994}.
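This $\sqrt{N}$ behaviour is easy to probe numerically. The following sketch (in Python, assuming \texttt{numpy} is available; the comparison value $\sqrt{2N/\pi}$ is the leading-order large-$N$ asymptotic of \cite{eks1994}) counts real eigenvalues over repeated samples:

```python
import numpy as np

def mean_real_eigenvalues(N, trials, seed=0):
    """Average number of real eigenvalues over `trials` samples of an
    N x N matrix of independent standard Gaussian entries."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        ev = np.linalg.eigvals(rng.standard_normal((N, N)))
        # LAPACK's dgeev returns exactly zero imaginary parts for the real
        # eigenvalues of a real matrix, so a small tolerance suffices
        total += int(np.sum(np.abs(ev.imag) < 1e-9))
    return total / trials

N = 100
print(mean_real_eigenvalues(N, 200), np.sqrt(2 * N / np.pi))
```

For $N=100$ the simulated mean is of the order of $\sqrt{200/\pi}\approx 7.98$, consistent with the averages read off Figure \ref{fig:GinOErdist}.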
We will discuss the probability of finding real eigenvalues at length, so we make the following definition.
\begin{definition}
\label{def:probpnk}
Let $\mathbf{X}$ be an $N\times N$ matrix. Define $p_{N,k}$ as the probability that there are $k$ real eigenvalues amongst the $N$ total eigenvalues of $\mathbf{X}$.
\end{definition}
We can estimate the probability of finding some number of real eigenvalues in an $N\times N$ matrix from the real Ginibre ensemble by a Monte Carlo simulation; the results are contained in Table \ref{tab:GinOE_pnk}. Another feature to note is the apparently circular distribution of the eigenvalues in the complex plane. This is an illustration of the so-called \textit{Girko's Circular Law}, which will be discussed in Chapter \ref{sec:circlaw}.
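A sketch of such a Monte Carlo estimate follows (in Python, assuming \texttt{numpy}; the function name \texttt{simulate\_pnk} is our own, not from any library):

```python
import numpy as np
from collections import Counter

def simulate_pnk(N, trials, seed=42):
    """Estimate p_{N,k} by tallying the number of real eigenvalues
    (which necessarily has the same parity as N) over many samples."""
    rng = np.random.default_rng(seed)
    counts = Counter()
    for _ in range(trials):
        ev = np.linalg.eigvals(rng.standard_normal((N, N)))
        counts[int(np.sum(np.abs(ev.imag) < 1e-9))] += 1
    return {k: c / trials for k, c in sorted(counts.items())}

print(simulate_pnk(2, 20000))
```

For $N=2$ this reproduces $p_{2,2}\approx 0.707$ and $p_{2,0}\approx 0.293$, matching the first column of Table \ref{tab:GinOE_pnk}.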
\begin{longtable}{|c|c||c|c||c|c|}
\hline
& Simulated $p_{N,k}$ & & Simulated $p_{N,k}$ & & Simulated $p_{N,k}$\\
\hline
\hline
\endfirsthead
\hline
& Simulated $p_{N,k}$ & & Simulated $p_{N,k}$ & & Simulated $p_{N,k}$\\
\hline
\hline
\endhead
$p_{2,2}$ & $0.70762$ & $p_{3,3}$ & $0.35349$&$p_{38,38}$&$0.00000$\\
$p_{2,0}$ & $0.29238$ & $p_{3,1}$ & $0.64651$&$p_{38,36}$&$0.00000$\\
\hline
$p_{4,4}$ & $0.12440$ & $p_{5,5}$ & $0.03116$&$p_{38,34}$ &$0.00000$\\
$p_{4,2}$ & $0.72249$ & $p_{5,3}$ & $0.51300$&$p_{38,32}$ &$0.00000$\\
$p_{4,0}$ & $0.15311$ & $p_{5,1}$ & $0.45584$&$p_{38,30}$ &$0.00000$\\
\cline{1-4}
$p_{6,6}$ & $0.00506$ & $p_{7,7}$ & $0.00073$&$p_{38,28}$ &$0.00000$\\
$p_{6,4}$ & $0.25231$ & $p_{7,5}$ & $0.08343$&$p_{38,26}$ &$0.00000$\\
$p_{6,2}$ & $0.64888$ & $p_{7,3}$ & $0.57908$&$p_{38,24}$ &$0.00000$\\
$p_{6,0}$ & $0.09375$ & $p_{7,1}$ & $0.33676$&$p_{38,22}$ &$0.00000$\\
\cline{1-4}
$p_{8,8}$ & $0.00006$ & $p_{9,9}$ & $0.00000$&$p_{38,20}$ &$0.00000$\\
$p_{8,6}$ & $0.02052$ & $p_{9,7}$ & $0.00359$&$p_{38,18}$ &$0.00000$\\
$p_{8,4}$ & $0.34469$ & $p_{9,5}$ & $0.14479$&$p_{38,16}$ &$0.00000$\\
$p_{8,2}$ & $0.57257$ & $p_{9,3}$ & $0.59056$&$p_{38,14}$ &$0.00001$\\
$p_{8,0}$ & $0.06216$ & $p_{9,1}$ & $0.26106$&$p_{38,12}$ &$0.00075$\\
\cline{1-4}
$p_{10,10}$ & $0.00000$ & $p_{11,11}$ & $0.00000$&$p_{38,10}$ &$0.01751$\\
$p_{10,8}$ & $0.00041$ & $p_{11,9}$ & $0.00001$&$p_{38,8}$ &$0.14862$\\
$p_{10,6}$ & $0.04477$ & $p_{11,7}$ & $0.01006$&$p_{38,6}$ &$0.40930$\\
$p_{10,4}$ & $0.41561$ & $p_{11,5}$ & $0.20768$&$p_{38,4}$ &$0.34735$\\
$p_{10,2}$ & $0.49503$ & $p_{11,3}$ & $0.58174$&$p_{38,2}$ &$0.07475$\\
$p_{10,0}$ & $0.04418$ & $p_{11,1}$ & $0.20051$&$p_{38,0}$ &$0.00171$\\
\hline
\caption[Simulated eigenvalue probabilities $p_{N,k}$ for the real Ginibre ensemble.]{Experimentally determined probabilities $p_{N,k}$ from simulations of 100,000 independent real Ginibre matrices.}
\label{tab:GinOE_pnk}
\end{longtable}
\subsection{Eigenvalue jpdf}
\label{sec:Gejpdf}
As in Chapter \ref{sec:GOE_eval_jpdf} one would like to diagonalise the matrix $\mathbf{X}$ so that the $N^2-N$ degrees of freedom corresponding to the eigenvectors can be integrated over, leaving only the dependence on the eigenvalues. Since real symmetric matrices are diagonalised by orthogonal matrices, a fact we used in (\ref{eqn:diag_decomp}), integration over the elements of the diagonalising matrices (which amounts to computation of the volume of $O(N)$, the group of orthogonal $N\times N$ matrices) was a readily computable problem; this is the content of Proposition \ref{prop:int_(RdR)}. In \cite{Gi65} this was the approach of Ginibre as he attempted to calculate the eigenvalue distributions and correlation functions by diagonalising these non-Hermitian matrices; however, he was unable to proceed very far in this direction for the $\beta=1$ (real asymmetric) matrices. The set of eigenvectors does not form an orthonormal basis, and so these matrices are not orthogonally diagonalisable as real symmetric matrices are. This means that the integral over the diagonalising matrices is not given by (\ref{eqn:RTdR_integ}), and in fact appears intractable.
Progress was made in 1991 \cite{LS91} and, independently, in \cite{Ed97}, where diagonal decomposition was abandoned in favour of upper triangular decomposition; in particular, Schur decomposition.
\begin{remark}
The decomposition used in \cite{LS91} was not that of Schur, however the triangular matrix they used was equivalent to that in (\ref{def:triangular_mat}) through elementary row and column operations.
\end{remark}
To express a matrix in this form, first note that if a general $N\times N$ real matrix $\mathbf{A}$ has $k$ real eigenvalues then it has $(N-k)/2$ complex conjugate pairs of eigenvalues. The Schur decomposition of $\mathbf{A}$ is then \cite{Schur1909}
\begin{align}
\label{eqn:GinOE_decomp} \mathbf{A}=\mathbf{Q}\mathbf{R}\mathbf{Q}^T,
\end{align}
where $\mathbf{Q}$ is orthogonal and $\mathbf{R}$ is the block upper triangular matrix
\begin{align}
\label{def:triangular_mat} \mathbf{R}&= \left[\begin{array}{cccccc}
\lambda_1 & ... & R_{1,k} & R_{1,k+1} & ... & R_{1,m}\\
& \ddots & \vdots & \vdots & & \vdots \\
& & \lambda_k & R_{k,k+1} & ... & R_{k,m}\\
& & & z_{k+1} & ... & R_{k+1,m}\\
& 0 & & & \ddots & \vdots\\
& & & & & z_m\\
\end{array}\right],&m=(N+k)/2,
\end{align}
where, on the diagonal, we have the real eigenvalues $\lambda_j$ and the $2\times2$ blocks
\begin{align}
\label{eqn:Schurz} z_j &=\left[\begin{array}{cc}
x_j & -c_j\\
b_j & x_j
\end{array}\right], \mathrm{where}\; b_j,c_j>0,
\end{align}corresponding to the complex eigenvalues $x_j \pm iy_j$, $y_j=\sqrt{b_jc_j}$. (This correspondence is clear since the eigenvalues of $z_j$ are $x_j\pm iy_j$.) Note that the dimension of $R_{i,j}$ depends on its position in $\mathbf{R}$:
\begin{itemize}
\item{$1\times 1$ for $i,j\leq k$,}
\item{$1\times 2$ for $i\leq k, j>k$,}
\item{$2\times 2$ for $i,j>k$.}
\end{itemize}
(Proofs (in English) of the Schur decomposition are available in several places; see, for example, \cite[Theorem 7.1.3]{GoVl1996}.)
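The real Schur form is also available in standard numerical libraries, which makes (\ref{eqn:GinOE_decomp})--(\ref{def:triangular_mat}) easy to inspect concretely. A minimal sketch using \texttt{scipy.linalg.schur} (note that SciPy makes no attempt to impose the ordering and sign conventions we adopt for uniqueness):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))

# output='real' gives A = Q R Q^T with Q orthogonal and R quasi-upper
# triangular: 1x1 diagonal blocks for real eigenvalues and 2x2 blocks
# for complex conjugate pairs.
R, Q = schur(A, output='real')

assert np.allclose(Q @ R @ Q.T, A)
assert np.allclose(Q.T @ Q, np.eye(6))
# everything below the first subdiagonal vanishes (block upper triangular)
assert np.allclose(np.tril(R, k=-2), 0)
```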
This decomposition is not yet unique for two reasons; first, we know that, in general, eigenvectors are unique only up to normalisation and direction. Since $\mathbf{Q}$ corresponds to the eigenvectors and it is orthogonal, the normalisation constraint is already imposed. To fix the sign, we specify that the first row of $\mathbf{Q}$ must be positive. Second, it is clear that the eigenvalues are at present arbitrarily ordered along the diagonal of $\mathbf{R}$, so we choose the ordering
\begin{eqnarray}\label{9'}
\lambda_1 < \cdot\cdot\cdot < \lambda_k \qquad {\rm and}
\qquad x_{k+1}<\cdot\cdot\cdot<x_m
\end{eqnarray}
and (\ref{eqn:GinOE_decomp}) is now a 1-1 correspondence.
\begin{remark}
\label{rem:Ginsing}
Note that, as we did for the GOE, we ignore singular matrices and matrices with repeated eigenvalues, since the set of such matrices has measure zero in the real Ginibre ensemble.
\end{remark}
The benefit of using the Schur decomposition in place of diagonalisation is that the integral over the conjugating matrices is given by (\ref{eqn:RTdR_integ}). Of course, the strictly upper triangular components of $\mathbf{R}$ must now be integrated over (which will leave us with just the $1\times 1$ and $2\times 2$ diagonal blocks corresponding to the eigenvalues), but we shall see that the dependence on these variables factorises. Note that in Chapter \ref{sec:SOE} (for the real spherical ensemble), where we also use Schur decomposition, the situation is significantly more complicated as the integral over these upper triangular elements no longer factorises.
We will next calculate the Jacobian for the change of variables from the elements of the matrix $\mathbf{A}$ to the eigenvalues of $\mathbf{A}$, and find that, as in Proposition \ref{prop:GOE_J} (the symmetric analogue), we obtain a product of differences of eigenvalues. However, here the structure is slightly more complicated due to the existence of both real and non-real complex eigenvalues. To forestall confusion, we first define the notation for this product.
\begin{definition}
Let $k$ be a positive integer. Then with $t_j=\lambda_j\in\mathbb{R}$ for $j\leq k$ and $t_j=w_j\in\mathbb{C}\backslash\mathbb{R}$ for $j>k$ define
\begin{align}
\label{def:GinOEpods} |\lambda(t_j)-\lambda(t_i)|:=\left\{\begin{array}{ll}
|\lambda_j-\lambda_i|, & \mathrm{for}\; k\geq j>i,\\
|w_j-\lambda_i||\bar{w}_j-\lambda_i|, & \mathrm{for}\; j>k\geq i,\\
|w_j-w_i||\bar{w}_j-w_i| &\\
\times |w_j-\bar{w}_i||\bar{w}_j-\bar{w}_i|, & \mathrm{for}\; j>i>k.
\end{array}\right.
\end{align}
\end{definition}
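Definition (\ref{def:GinOEpods}) is constructed so that, once the within-pair factors $|\bar{w}_l-w_l|=2y_l$ are supplied, it reassembles the full product of differences over all $N$ eigenvalues. A small numerical sanity check of this bookkeeping (in Python; the example eigenvalues are arbitrary choices of ours):

```python
import numpy as np
from itertools import combinations

lam = [0.3, -1.2]                      # real eigenvalues (k = 2)
w = [1.0 + 0.5j, -0.7 + 1.3j]          # one member of each conjugate pair

def pair_factor(ti, tj):
    """The three cases of the definition, with t_i listed before t_j."""
    if np.isreal(ti) and np.isreal(tj):
        return abs(tj - ti)
    if np.isreal(ti):
        return abs(tj - ti) * abs(np.conj(tj) - ti)
    return (abs(tj - ti) * abs(np.conj(tj) - ti)
            * abs(tj - np.conj(ti)) * abs(np.conj(tj) - np.conj(ti)))

t = lam + w
lhs = np.prod([pair_factor(ti, tj) for ti, tj in combinations(t, 2)])
lhs *= np.prod([abs(np.conj(wi) - wi) for wi in w])   # within-pair factors 2 y_l

pts = lam + w + [np.conj(wi) for wi in w]             # all N = 6 eigenvalues
rhs = np.prod([abs(b - a) for a, b in combinations(pts, 2)])
assert np.isclose(lhs, rhs)
```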
We can now state the asymmetric analogue of Proposition \ref{prop:GOE_J}.
\begin{proposition}[\cite{Ed97} Theorem 5.1]
\label{prop:GinJ}
Decomposing a real matrix $\mathbf{A}$ according to the Schur decomposition (\ref{eqn:GinOE_decomp}) we have
\begin{align}
\nonumber (d\mathbf{A})&=2^{(N-k)/2}\prod_{j<p}|\lambda(R_{pp})-\lambda(R_{jj})| (d\tilde{\mathbf{R}})(\mathbf{Q}^Td\mathbf{Q})\\
\label{eqn:GinOE_J} &\times \prod_{j=1}^kd\lambda_j \prod_{l=k+1}^{(N+k)/2}|b_l-c_l |\;dx_ldb_ldc_l,
\end{align}
where $\tilde{\mathbf{R}}$ is the strictly upper triangular part of (\ref{def:triangular_mat}) and $\mathbf{Q}$ is an orthogonal matrix with the first row specified to be positive.
\end{proposition}
\textit{Proof}: In analogy with (\ref{eqn:RTdXR}) we begin with the decomposition (\ref{eqn:GinOE_decomp}) and apply the product rule of differentiation to find
\begin{align}
\nonumber \mathbf{Q}^Td\mathbf{A}\mathbf{Q}=d\mathbf{R}+\mathbf{Q}^Td\mathbf{Q}\mathbf{R}-\mathbf{R}\mathbf{Q}^Td\mathbf{Q}.
\end{align}
Now let $d\mathbf{O}:=\mathbf{Q}^Td\mathbf{Q}$ and we see that the $ij$-th element of $\mathbf{Q}^Td\mathbf{A}\mathbf{Q}$ is
\begin{align}
\nonumber dR_{ij}+dO_{ij}R_{jj}-R_{ii}dO_{ij}+\sum_{l<j}dO_{il}R_{lj}-\sum_{l>i}R_{il}dO_{lj},
\end{align}
or, specialising to particular cases,
\begin{align}
\nonumber &dO_{ij}R_{jj}-R_{ii}dO_{ij}+\sum_{l<j}dO_{il}R_{lj}-\sum_{l>i}R_{il}dO_{lj},&\mbox{ for } i>j,\\
\label{eqn:GinOE_wedges} &dR_{ii}+\sum_{l<i}dO_{il}R_{li}-\sum_{l>i}R_{il}dO_{li},&\mbox{ for } i=j,\\
\nonumber &dR_{ij}+dO_{ij}R_{jj}-R_{ii}dO_{ij}+\sum_{l<j}dO_{il}R_{lj}-\sum_{l>i}R_{il}dO_{lj},&\mbox{ for } i<j.
\end{align}
Noting that $dO_{ij}=-dO_{ji}$ (which is a consequence of the fact $\mathbf{Q}\bQ^T=\mathbf{1}$), we take the wedge product of the elements of $\mathbf{Q}^Td\mathbf{A}\mathbf{Q}$ in the order $j=1$, $i=N,N-1,...,1$; then $j=2$, $i=N,N-1,...,1$; \textit{et cetera}, up to $j=N$, $i=N,N-1,...,1$. In this ordering, each of the differentials in the summations over $l$ in (\ref{eqn:GinOE_wedges}) has already been wedged, and so, for the purposes of the wedge product, they can be ignored. Let $d\tilde{\mathbf{O}}$ be the matrix $d\mathbf{O}$ excluding the $2\times 2$ blocks $dO_{ii}$ along the diagonal, which have the form
\begin{align}
\nonumber \left[\begin{array}{cc}
0 & do_i\\
-do_i & 0
\end{array}\right],
\end{align}
for some $do_i$. Then the wedge product of the off-diagonal elements is
\begin{align}
\label{eqn:GinOEpods2} \prod_{j<p}|\lambda(R_{pp})-\lambda(R_{jj})|(d\tilde{\mathbf{O}})(d\tilde{\mathbf{R}}),
\end{align}
using the notation defined in (\ref{def:GinOEpods}). More explicitly, the $(d\tilde{\mathbf{R}})$ factor in (\ref{eqn:GinOEpods2}) comes from the $dR_{ij}$ terms in (\ref{eqn:GinOE_wedges}) while the remaining factors are given by the terms $dO_{ij}R_{jj}-R_{ii}dO_{ij}$, each of which contributes
\begin{align}
\nonumber \begin{array}{rl}
|\lambda_j-\lambda_i| & \mathrm{for}\; k\geq j>i,\\
(x_j-\lambda_i)^2 +y_j^2 & \mathrm{for}\; j>k\geq i,\\
\left( (x_j-x_i)^2+(y_j-y_i)^2\right)\cdot \left( (x_j-x_i)^2+(y_j+y_i)^2\right)& \mathrm{for}\; j>i>k,
\end{array}
\end{align}
which we see is the same as (\ref{def:GinOEpods}) with $w_l=x_l+iy_l$. (We can also establish (\ref{eqn:GinOEpods2}) directly using \cite[Lemma 5.1]{Ed97}.) For $i=j\leq k$ we pick up just $d\lambda_i$, while for $i=j>k$, the middle row in (\ref{eqn:GinOE_wedges}) becomes
\begin{align}
\nonumber \left[\begin{array}{cc}
dx_i+(b_i-c_i)do_i & db_i\\
-dc_i& dx_i+(c_i-b_i)do_i
\end{array}\right]
\end{align}
and the wedge product of these elements is
\begin{align}
\nonumber 2^{(N-k)/2}\prod_{l=k+1}^{(N+k)/2}|b_l-c_l|do_ldx_ldb_ldc_l.
\end{align}
So then collecting the $do_i$ together with $(d\tilde{\mathbf{O}})$ gives $(d\mathbf{O})$ and we have the result.
\hfill $\Box$
\begin{remark}
In the case that $k=N$ we note that (\ref{eqn:GinOE_J}) reduces (as expected) to (\ref{eqn:dX_jacob}), since $\tilde{\mathbf{R}}$ is then the zero matrix.
\end{remark}
\begin{proposition}[\cite{LS91,Ed97}]
\label{prop:GinOE_eval_jpdf}
Let $\mathbf{A}=[a_{ij}]$ be an $N\times N$ real matrix with standard Gaussian entries. Then if $\mathbf{A}$ has sets of real and complex eigenvalues, $\Lambda=\{ \lambda_i \}_{i=1,...,k}$ and $W=\{ w_i,\bar{w}_i \}_{i=1,...,(N-k)/2}$ respectively, the eigenvalue jpdf is
\begin{equation}
\label{eqn:GinOEjpdf} Q_{N,k}(\Lambda,W) =C_{N,k} \prod_{i=1}^{k} e^{-\lambda_i^2/2} \prod_{j=1}^{(N-k)/2} e^{-(w_j^2+\bar{w}_j^2)/2}\:\mathrm{erfc}(\sqrt{2}| \mathrm{Im}(w_j)|)\: \big| \Delta (\Lambda\cup W)\big|,
\end{equation}
where $\Delta(\{x_i\}):=\prod_{i<j} (x_j-x_i)$ and
\begin{align}
\label{def:GinOECNK} C_{N,k}:=\frac{2^{-N(N-1)/4-k/2}}{k!((N-k)/2)!\prod_{l=1}^N\Gamma(l/2)}.
\end{align}
\end{proposition}
\textit{Proof}: We begin with the matrix element distribution (\ref{eqn:GinOE_eldist}).
Let $\delta_j:=b_j-c_j$ then, using the Schur decomposition (\ref{eqn:GinOE_decomp}), we see that
\begin{align}
\label{eqn:GinOEexp_decomp} e^{-\mathrm{Tr}(\mathbf{A}\bA^T)/2}=e^{-\sum_{i<j}r_{ij}^2/2} e^{-\sum_{j=1}^k\lambda_{j}^2/2} e^{-\sum_{j=1}^{(N-k)/2}(x_j^2+y_j^2+\delta_j^2/2)},
\end{align}
where $[r_{ij}]:=\tilde{\mathbf{R}}$ are the strictly upper triangular elements of $\mathbf{R}$. We can change variables from $b,c$ to $y,\delta$ with the equation
\begin{align}
\label{eqn:GinOE_covs} db_ldc_l=\frac{2y_l}{\sqrt{\delta_l^2+4y_l^2}}dy_l d\delta_l,
\end{align}
where $-\infty<\delta<\infty$ (and not $0<\delta<\infty$ as claimed in \cite[Lemma 5.2]{Ed97}). Taking the product of (\ref{eqn:GinOEexp_decomp}) and (\ref{eqn:GinOE_J}), using (\ref{eqn:GinOE_covs}), yields
\begin{align}
\nonumber &e^{-\mathrm{Tr}(\mathbf{A}\bA^T)/2}(d\mathbf{A})=2^{(N-k)}e^{-\sum_{i<j}r_{ij}^2/2} e^{-\sum_{j=1}^k\lambda_{j}^2/2} e^{-\sum_{j=1}^{(N-k)/2}(x_j^2+y_j^2+\delta_j^2/2)}\\
\label{eqn:GinOEjpdf2} &\times\prod_{j<p}\big|\lambda(R_{pp})-\lambda(R_{jj})\big| (d\tilde{\mathbf{R}})(\mathbf{Q}^Td\mathbf{Q})\prod_{j=1}^kd\lambda_j \prod_{l=k+1}^{(N+k)/2}\frac{2\:|\delta_l |\; y_l}{\sqrt{\delta_l^2+4y_l^2}}dx_ldy_ld\delta_l.
\end{align}
Combining $\prod_{l=k+1}^{(N+k)/2}2y_l$ with the product of differences in (\ref{eqn:GinOEjpdf2}) gives $|\Delta (\Lambda \cup W)|$.
As we would like the end result expressed in terms of the eigenvalue variables $\lambda_l,x_l$ and $y_l$, we plan to integrate over the variables $\delta_l$. Since the variables $\delta_l$ and $y_l$ are coupled, this gives a function of $y_l$, which can be explicitly determined according to
\begin{align}
\nonumber \int_{\delta=-\infty}^{\delta=\infty}\frac{|\delta|\;e^{-\delta^2/2}}{\sqrt{\delta^2+4y^2}}\;d\delta&= 2\int_{\delta=0}^{\infty} \frac{\delta\: e^{-\delta^2/2}} {\sqrt{\delta^2+4y^2}}\;d\delta\\
\label{eqn:GinOE_erfc} &=\sqrt{2\pi}\;e^{2y^2}\mathrm{erfc}(\sqrt{2}\;|y|),
\end{align}
where the second equality in (\ref{eqn:GinOE_erfc}) is given in \cite[3.362.2]{GraRyz2000} after a change of variables (although in Edelman this evaluation included an erroneous factor of $2$, which cancelled the factor of $1/2$ introduced in relation to (\ref{eqn:GinOE_covs})). This result can be verified by checking that both sides agree at $y=0$ and that they have identical derivatives.
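Equation (\ref{eqn:GinOE_erfc}) is also easy to confirm numerically; a quick check with \texttt{scipy} (our own verification, not part of the derivation):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def lhs(y):
    # the integrand is even in delta, so integrate over (0, inf) and double
    f = lambda d: d * np.exp(-d**2 / 2) / np.sqrt(d**2 + 4 * y**2)
    return 2 * quad(f, 0, np.inf)[0]

def rhs(y):
    return np.sqrt(2 * np.pi) * np.exp(2 * y**2) * erfc(np.sqrt(2) * abs(y))

for y in [0.1, 0.3, 1.0, 2.5]:
    assert np.isclose(lhs(y), rhs(y), rtol=1e-6)
```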
We need to integrate out the unwanted $N^2-N$ independent elements contained in $\mathbf{Q}$ and $\tilde{\mathbf{R}}$: we have the integral over the orthogonal matrices from Proposition \ref{prop:int_(RdR)}, and the integrals over the $r_{ij}$ are simple Gaussians. Lastly, the factorials in the denominator of $C_{N,k}$ come from relaxing the ordering on the $k$ real eigenvalues and $(N-k)/2$ non-real complex conjugate pairs of eigenvalues.
\hfill $\Box$
\begin{remark}
\label{rem:Rtilde}
The integrals over $\tilde{\mathbf{R}}$ were quite straightforward in the proof of Proposition \ref{prop:GinOE_eval_jpdf} since each of them was a standard Gaussian. If this is not the case, however, then this calculation can be a significant technical hurdle. We will return to this point when we calculate the eigenvalue jpdf for the real, spherical ensemble in Chapter \ref{sec:Sevaldist}, where a more involved technique must be employed.
\end{remark}
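As a concrete illustration of Proposition \ref{prop:GinOE_eval_jpdf}, we can integrate the $N=2$, $k=0$ jpdf directly and recover $p_{2,0}=1-2^{-1/2}\approx 0.293$, in agreement with Table \ref{tab:GinOE_pnk}. A sketch (using \texttt{scipy}; the scaled function $\mathrm{erfcx}(z)=e^{z^2}\mathrm{erfc}(z)$ is a numerical convenience of ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfcx, gamma

# N = 2, k = 0: a single pair w = x + iy (y > 0), |Delta| = |wbar - w| = 2y,
# and exp(-(w^2 + wbar^2)/2) = exp(-(x^2 - y^2)).  Writing
# erfc(sqrt(2) y) exp(y^2) = erfcx(sqrt(2) y) exp(-y^2) keeps things stable.
C20 = 2 ** (-2 * 1 / 4) / (gamma(1 / 2) * gamma(1))   # C_{2,0} from (def:GinOECNK)

x_int = np.sqrt(np.pi)   # integral of exp(-x^2) over the real line
y_int = quad(lambda y: 2 * y * erfcx(np.sqrt(2) * y) * np.exp(-y**2),
             0, np.inf)[0]

p20 = C20 * x_int * y_int
assert np.isclose(p20, 1 - 2 ** -0.5)   # ~ 0.29289, cf. 0.29238 in the table
```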
In the following section we will find an expression for the probabilities of obtaining any number of real eigenvalues; however, in the restricted case that $k=N$ (that is, all eigenvalues real) we can directly integrate the eigenvalue jpdf (\ref{eqn:GinOEjpdf}) by using the Selberg integral
\begin{align}
\label{def:Selb} S_N(\lambda_1,\lambda_2,\lambda):=\int_0^1 dt_1\cdot\cdot\cdot \int_0^1 dt_N \prod_{l=1}^N t_l^{\lambda_1}(1-t_l)^{\lambda_2}\prod_{1\leq j< l \leq N}|t_l-t_j|^{2\lambda},
\end{align}
which can be evaluated as the product of gamma functions \cite{Selb1944}
\begin{align}
\label{eqn:Selbint} S_N(\lambda_1,\lambda_2,\lambda)=\prod_{j=0}^{N-1}\frac{\Gamma(\lambda_1 + 1 +j\lambda) \Gamma(\lambda_2 +1 +j\lambda) \Gamma(1+\lambda(j+1))}{\Gamma(\lambda_1+\lambda_2+2 +\lambda(N+j-1)) \Gamma(1+\lambda)}.
\end{align}
With $k=N$ we have
\begin{align}
\label{eqn:pNN1} p_{N,N}=C_{N,N} \int_{-\infty}^{\infty} d\lambda_1 \cdot\cdot\cdot \int_{-\infty}^{\infty} d\lambda_N \prod_{i=1}^N e^{-\lambda_i^2/2} \prod_{j<l} |\lambda_l -\lambda_j|.
\end{align}
This multiple integral is known as a \textit{Mehta integral} \cite{mehta1991}, and by using (\ref{eqn:Selbint}) we find that it evaluates to \cite[Chapter 4.7]{forrester?}
\begin{align}
\label{eqn:meEqn} 2^{3N/2}\prod_{j=1}^N \Gamma(j/2+1).
\end{align}
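For instance, at $N=2$ the Mehta integral in (\ref{eqn:pNN1}) can be checked directly against (\ref{eqn:meEqn}) (a numerical verification of ours with \texttt{scipy}, not needed for the derivation):

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma

# N = 2 instance: int exp(-(x^2+y^2)/2) |x - y| dx dy
# versus 2^{3N/2} prod_{j=1}^{N} Gamma(j/2 + 1)
val, _ = dblquad(lambda y, x: np.exp(-(x**2 + y**2) / 2) * abs(x - y),
                 -np.inf, np.inf, -np.inf, np.inf)
expected = 2 ** 3 * gamma(3 / 2) * gamma(2)   # = 4 sqrt(pi) ~ 7.0898
assert np.isclose(val, expected, rtol=1e-5)
```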
Substitution of (\ref{eqn:meEqn}) into (\ref{eqn:pNN1}) gives \cite[Corollary 7.1]{Ed97}
\begin{align}
\label{eqn:GinOEpNN} p_{N,N}=2^{-N(N-1)/4},
\end{align}
where we have used the Gamma function identity
\begin{align}
\nonumber \Gamma (n) \Gamma (n+1/2) = \frac{\sqrt{\pi} \: \Gamma (2n)} {2^{2n-1}}.
\end{align}
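The closed form (\ref{eqn:GinOEpNN}) can immediately be checked against the simulated values in Table \ref{tab:GinOE_pnk}:

```python
# p_{N,N} = 2^{-N(N-1)/4} versus the Monte Carlo estimates of
# Table (tab:GinOE_pnk), values copied from the table
simulated = {2: 0.70762, 3: 0.35349, 4: 0.12440, 5: 0.03116}
for N, p_sim in simulated.items():
    p_exact = 2 ** (-N * (N - 1) / 4)
    assert abs(p_exact - p_sim) < 2e-3, (N, p_exact, p_sim)
```

The agreement is to three decimal places, as one expects from $100{,}000$ trials.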
\subsection{Generalised partition function}
\label{sec:Ggpf}
\subsubsection{$N$ even}
\label{sec:Ggpfe}
With the eigenvalue distribution firmly in hand, we proceed to express the generalised partition function as a quaternion determinant or Pfaffian. As the anamnestic reader will recall, for the GOE we substituted the eigenvalue jpdf (\ref{eqn:GOE_eval_jpdf}) into (\ref{def:single_gen_part_fn}), the definition of $Z_N[u]$, and then applied the method of integration over alternate variables. However, as alluded to below Definition \ref{def:gen_part_fn}, the situation for real, asymmetric matrices is complicated by the existence of two species of eigenvalues, which means we must take (\ref{def:multi_gen_part_fn}), with $m=2$, as our definition of the generalised partition function. We then have
{\small
\begin{align}
\nonumber &Z_{k,(N-k)/2}[u,v]=C_{N,k}\int_{-\infty}^{\infty}d\lambda_1\cdot\cdot\cdot \int_{-\infty}^{\infty}d\lambda_k \int_{\mathbb{R}^2_+}dw_1\cdot\cdot\cdot\int_{\mathbb{R}^2_+}dw_{(N-k)/2}\\
\label{eqn:GinOE_znk1} & \times\prod_{i=1}^ku(\lambda_i)\; e^{-\lambda_i^2/2}\prod_{j=1}^{(N-k)/2}v(w_j)\; e^{-(w_j^2+\bar{w}_j^2)/2}\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w_j)|) \; |\Delta (\Lambda\cup W)|.
\end{align}
}It will turn out that, as with the GOE in Chapter \ref{sec:pf_gpf}, the generalised partition function is dependent on the parity of $N$. Note that (\ref{eqn:GinOE_znk1}) is independent of the parity of $N$, but, as we did for the GOE, we proceed under the assumption that $N$ is even, postponing the odd case until Chapter \ref{sec:GinOE_genpartfn_odd}.
Since (\ref{eqn:GinOE_znk1}) is an integral over all the variables for a fixed number $k$ of real eigenvalues, we see that
\begin{align}
\label{eqn:Ginpnkzkn-k}
p_{N,k}=Z_{k,(N-k)/2}[1,1],
\end{align}
where $p_{N,k}$, from Definition \ref{def:probpnk}, is the probability of finding $k$ real eigenvalues from an $N\times N$ real Ginibre matrix. A generating function for these probabilities is then
\begin{align}
\label{def:GinOE_probsGF} Z_N(\zeta):=\sum_{k=0}^{N/2}\zeta^k Z_{2k,(N-2k)/2}[1,1].
\end{align}
But, of course, confining the correlations to a fixed number $k$ of real eigenvalues is not in keeping with the realities of the problem, in which the number of real eigenvalues is not known \textit{a priori}. Consequently, we must introduce the summed-up generalised partition function
\begin{equation}
\label{eqn:summedup} Z_N[u,v]=\sum_{k=0}^N{}^{\sharp}\; Z_{k,(N-k)/2}[u,v],
\end{equation}
where $\sharp$ indicates that the sum is restricted to values of $k$ with the same parity as $N$ (in this case even). Our plan is to first find a quaternion determinant/Pfaffian form of (\ref{eqn:GinOE_znk1}), at which point the sum in (\ref{eqn:summedup}) is able to be performed quite simply.
Now we undertake the integration over alternate variables, with one further caveat: the ordering is no longer as simple as it was for the GOE. However, the reader will see that this is just a small technical consideration and the procedure is not changed in any substantial way.
\begin{proposition}[\cite{sinclair2006, FN07}]
\label{prop:GinOE_gpf_even}
The generalised partition function $Z_{k,(N-k)/2}[u,v]$, for $k$ and $N$ even, can be written in the Pfaffian form
\begin{eqnarray}
\label{eqn:GinOE_gpf_even} Z_{k,(N-k)/2}[u,v]=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}[\zeta^{k/2}]\mathrm{Pf}[\zeta\alpha_{j,l}+\beta_{j,l}],
\end{eqnarray}
where $[\zeta^n]$ denotes the coefficient of $\zeta^n$ and, with monic polynomials $\{ p_{i}(x)\}$ of degree $i$,
\begin{align}
\label{eqn:GinOE_alphabeta} \begin{split} \alpha_{j,l} & = \int_{-\infty}^{\infty}dx\; u(x)\; \int_{-\infty}^{\infty} dy\hspace{3pt}u(y)\; e^{-(x^2+y^2)/2} p_{j-1}(x)p_{l-1}(y)\hspace{3pt}\mathrm{sgn}(y-x),\\
\beta_{j,l} & = 2i\int_{\mathbb{R}_+^2}dw\;v(w)\; \mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)\; e^{-(w^2+\bar{w}^2)/2}\\
&\times \Bigl(p_{j-1}(w)p_{l-1}(\bar{w})-p_{l-1}(w)p_{j-1}(\bar{w})\Bigr).
\end{split}
\end{align}
\end{proposition}
\textit{Proof:} To remove the absolute value sign from the Vandermonde in (\ref{eqn:GinOE_znk1}) we start by ordering the real eigenvalues as $\lambda_1 < ... < \lambda_k$, picking up a factor of $k!$. Recall that $|\Delta (\Lambda\cup W)|= \prod_{1\leq j < l \leq N}|t_l- t_j|$, where $t_j\in \mathbb{R}$ for $j\leq k$, and $t_j\in \mathbb{C}\backslash \mathbb{R}$ for $j>k$. So with $l>k$, as long as $t_j\neq \bar{t}_l$, we have both $|t_l- t_j|$ and its complex conjugate in the Vandermonde product (recall the structure of (\ref{def:GinOEpods})) and so we can remove the absolute value. For the factors where $t_j= \bar{t}_l$ (of which there are $(N-k)/2$), we can remove the absolute value as long as we multiply by $i$. Using (\ref{eqn:vandermonde_polys}) we have the Vandermonde determinant
\begin{align}
\label{eqn:vandermonde_GinOE} \Delta(\Lambda\cup W)=\mathrm{det}\left[ \begin{array}{c}
[p_{l-1}(\lambda_j)]_{j=1,...,k}\vspace{3pt}\\
\left[ \begin{array}{c}
p_{l-1}(w_j)\\
p_{l-1}(\bar{w}_j)\end{array} \right]_{j=1,...,(N-k)/2}
\end{array} \right]_{l=1,...,N},
\end{align}
which we substitute into (\ref{eqn:GinOE_znk1}) giving
{\small
\begin{align}
\nonumber &Z_{k,(N-k)/2}[u,v]=C_{N,k}i^{(N-k)/2}\frac{k!}{(k/2)!}\int_{\mathbb{R}^2_+}dw_1 \cdot\cdot\cdot \int_{\mathbb{R}^2_+} dw_{(N-k)/2}\\
\nonumber &\times\prod_{j=1}^{(N-k)/2} v(w_j)\; e^{-(w_j^2+\bar{w}_j^2)/2}\; \mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w_j)|)\int_{-\infty}^{\infty}d\lambda_2 \int_{-\infty}^{\infty}d\lambda_4 \cdot\cdot\cdot \int_{-\infty}^{\infty} d\lambda_k\\
\label{eqn:GinOE_ioav_even} &\times \det\left[ \begin{array}{c}
\left[ \begin{array}{c}
\int_{-\infty}^{\lambda_{2j}}e^{-x^2/2}u(x)p_{l-1}(x)dx\\
e^{-\lambda_{2j}^2/2}u(\lambda_{2j})p_{l-1}(\lambda_{2j})
\end{array}\right]_{j=1,...,k/2}\\
\left[ \begin{array}{c}
p_{l-1}(w_j)\\
p_{l-1}(\bar{w}_j)\end{array} \right]_{j=1,...,(N-k)/2}
\end{array} \right]_{l=1,...,N},
\end{align}
}where, as in Proposition \ref{prop:GOE_gen_part_fn}, we have again added appropriate rows to make all the integrals inside the determinant begin at $-\infty$. The $(k/2)!$ in the denominator comes from relaxing the ordering on the real eigenvalues $\lambda_2,\lambda_4,...,\lambda_k$.
Expanding the determinant we have
\begin{align}
\nonumber Z_{k,(N-k)/2}[u,v]=C_{N,k}\frac{k!}{(k/2)!}\sum_{P\in S_N}\varepsilon(P) \prod_{l=1}^{k/2} a_{P(2l-1),P(2l)} \prod_{l=k/2+1}^{N/2} b_{P(2l-1),P(2l)},
\end{align}
where
\begin{align}
\label{def:GinOEgpf_ab}
\begin{split}
a_{j,l}&:=\int_{-\infty}^{\infty}dx\; u(x) e^{-x^2/2}p_{l-1}(x)\int_{-\infty}^{x}dy\; u(y) e^{-y^2/2}p_{j-1}(y),\\
b_{j,l}&:=i\int_{\mathbb{R}_+^2}dw\; v(w)\;\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)\;e^{-(w^2+\bar{w}^2)/2}\; p_{j-1}(w)p_{l-1}(\bar{w}).
\end{split}
\end{align}
Let
\begin{align}
\label{def:alpha_beta}
\begin{split}
\alpha_{j,l}&:=a_{j,l}-a_{l,j},\\
\beta_{j,l}&:=2(b_{j,l}-b_{l,j}),
\end{split}
\end{align}
then, imposing the restriction $P(2l)>P(2l-1)$, we write
\begin{align}
\nonumber &Z_{k,(N-k)/2}[u,v]=C_{N,k}\frac{k!}{(k/2)! 2^{(N-k)/2}}\\
\nonumber &\times \sum_{P\in S_N \atop P(2l)>P(2l-1)}\varepsilon(P)\prod_{l=1}^{k/2} \alpha_{P(2l-1),P(2l)}\prod_{l=k/2+1}^{N/2}\beta_{P(2l-1),P(2l)}\\
\nonumber &=C_{N,k}\; \frac{k!((N-k)/2)!}{2^{(N-k)/2}} \sum_{P\in S_N \atop P(2l)>P(2l-1)}^* \hspace{-12pt} \varepsilon(P) \prod_{l=1}^{k/2} \alpha_{P(2l-1),P(2l)} \prod_{l=k/2+1}^{N/2}\beta_{P(2l-1),P(2l)}\\
\nonumber &= C_{N,k}\; \frac{k!((N-k)/2)!}{2^{(N-k)/2}}\; [\zeta^{k/2}] \mathrm{Pf}[\zeta \alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,N},
\end{align}
where we have used Definition \ref{def:pfaff}, recalling Remark \ref{rem:Pf_def}. Note that the $(N-k)/2$ factors of $2$ in the denominator compensate for the factor of $2$ introduced in the definition of $\beta$. The result now follows immediately on substitution of $C_{N,k}$ from (\ref{def:GinOECNK}).
\hfill $\Box$
Performing the sum in (\ref{eqn:summedup}) gives us that the generalised partition function for general $N$ and $k$ (recalling that $k$ must be of the same parity as $N$) is
\begin{align}
\label{eqn:even_gpfsum} Z_N[u,v]=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\mathrm{Pf}[\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,N}.
\end{align}
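As a sanity check on (\ref{eqn:even_gpfsum}), note that $Z_N[1,1]=\sum_k p_{N,k}=1$. For $N=2$ the Pfaffian is just the single entry $\alpha_{1,2}+\beta_{1,2}$, and the check can be carried out numerically (a sketch with \texttt{scipy}; the symmetrisation of the $\mathrm{sgn}$ in $\alpha_{1,2}$ and the use of $\mathrm{erfcx}$ are our own conveniences):

```python
import numpy as np
from scipy.integrate import dblquad, quad
from scipy.special import erfcx, gamma

# alpha_{1,2} at u = 1, with monic p_0 = 1, p_1(y) = y: symmetrising in
# (x, y) turns y*sgn(y - x) into |y - x|/2 under the integral
alpha12 = 0.5 * dblquad(lambda y, x: np.exp(-(x**2 + y**2) / 2) * abs(y - x),
                        -np.inf, np.inf, -np.inf, np.inf)[0]

# beta_{1,2} at v = 1: p_0(w) p_1(wbar) - p_1(w) p_0(wbar) = wbar - w = -2iy,
# erfc(sqrt(2) y) exp(y^2) = erfcx(sqrt(2) y) exp(-y^2), and the x-integral
# contributes sqrt(pi)
beta12 = 4 * np.sqrt(np.pi) * quad(
    lambda y: y * erfcx(np.sqrt(2) * y) * np.exp(-y**2), 0, np.inf)[0]

# N = 2: the Pfaffian reduces to the single entry alpha_{1,2} + beta_{1,2}
Z2 = 2 ** (-2 * 3 / 4) / (gamma(1 / 2) * gamma(1)) * (alpha12 + beta12)
assert np.isclose(Z2, 1.0)
```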
We also find that the generating function for the probabilities (\ref{def:GinOE_probsGF}) becomes
\begin{align}
\label{eqn:GinOE_probsGF_pf} Z_N(\zeta)=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\mathrm{Pf}[\zeta\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,N}\Big|_{u=v=1}.
\end{align}
From (\ref{eqn:GinOE_probsGF_pf}), (\ref{eqn:Ginpnkzkn-k}) and (\ref{def:GinOE_probsGF}) we can see that the probabilities for the extremal values of $k$ are
\begin{align}
\label{eqn:pNNeven} p_{N,N}=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\mathrm{Pf}[\alpha_{j,l}\big|_{u=1}]_{j,l=1,...,N}
\end{align}
for all real eigenvalues, and
\begin{align}
\nonumber p_{N,0}=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\mathrm{Pf}[\beta_{j,l}\big|_{v=1}]_{j,l=1,...,N}
\end{align}
for all complex eigenvalues.
\subsubsection{$N$ odd}
\label{sec:GinOE_genpartfn_odd}
As discussed at the beginning of Chapter \ref{sec:Ggpfe}, the generalised partition function is parity dependent, in direct analogy with the GOE. Interestingly, the intricacies introduced by the odd case have a more natural interpretation in the real Ginibre ensemble. Recall that the application of integration over alternate variables in Proposition \ref{prop:GOE_gen_part_fn_odd} was complicated by the extra unpaired row in the Vandermonde determinant, which led to the Pfaffian of an odd-sized matrix with a border row and column corresponding to this extra eigenvalue. We will see the same structure for the real Ginibre matrices; however, this border now directly corresponds to the single real eigenvalue that is guaranteed to exist in an odd-sized real, asymmetric matrix. This guarantee holds because the non-real eigenvalues of a real matrix occur in complex conjugate pairs, so an odd-sized real matrix must have at least one real eigenvalue.
Aside from some small technical considerations, the procedure is otherwise identical to the $N$ odd case of the GOE: we begin with the parity insensitive eigenvalue jpdf (\ref{eqn:GinOEjpdf}), and apply a modified form of integration over alternate variables to give us an even-sized Pfaffian.
\begin{proposition}
\label{prop:GinOE_gpf_odd}
Let $\alpha_{j,l}$ and $\beta_{j,l}$ be as in Proposition \ref{prop:GinOE_gpf_even} and
$\nu_j$ as in (\ref{def:GOE_nu}). Then, for $N,k$ odd, the generalised partition function for real Ginibre matrices is
\begin{align}
\label{eqn:GinOE_gpf_odd} Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}[\zeta^{(k-1)/2}]\mathrm{Pf}\left[\begin{array}{cc}
[\zeta \alpha_{j,l}+\beta_{j,l}] & [\nu_j]\\
\left[-\nu_l\right]& 0\\
\end{array}\right]_{j,l=1,...,N}.
\end{align}
\end{proposition}
\textit{Proof}: First we remove the absolute value from the Vandermonde in (\ref{eqn:GinOEjpdf}) in the same way as in Proposition \ref{prop:GinOE_gpf_even}, multiplying by $k!\; i^{(N-k)/2}$. Then, using (\ref{eqn:vandermonde_GinOE}), the odd analogue of (\ref{eqn:GinOE_ioav_even}) is
{\small
\begin{align}
\nonumber &Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]=C_{N,k}\:i^{(N-k)/2}\frac{k!}{((k-1)/2)!}\int_{\mathbb{R}^2_+}dw_1\cdot\cdot\cdot \int_{\mathbb{R}^2_+}dw_{(N-k)/2}\\
\nonumber &\times \prod_{j=1}^{(N-k)/2} v(w_j)\; e^{-(w_j^2+\bar{w}_j^2)/2}\; \mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w_j)|)\int_{-\infty}^{\infty}d\lambda_2\int_{-\infty}^{\infty}d\lambda_4\cdot\cdot\cdot\int_{-\infty}^{\infty}d\lambda_{k-1}\\
\nonumber &\times\hspace{3pt}\det \left[\begin{array}{l}
\left[\begin{array}{c}
\int_{-\infty}^{\lambda_{2j}}u(\lambda)e^{-\lambda^2/2}p_{l-1}(\lambda)d\lambda \\
u(\lambda_{2j})e^{-\lambda_{2j}^2/2}p_{l-1}(\lambda_{2j})\end{array}\right]_{j=1,...,(k-1)/2} \vspace{6pt}\\
\left[\begin{array}{c}
p_{l-1}(w_j)\\
p_{l-1}(\bar{w}_j)
\end{array}\right]_{j=1,...,(N-k)/2}\vspace{6pt}\\
\int_{-\infty}^{\infty}u(\lambda)e^{-\lambda^2/2}p_{l-1}(\lambda)d\lambda
\end{array}\right]_{l=1,...,N},
\end{align}
}where we have shifted the row corresponding to the $k$th eigenvalue to the bottom row. (Since this involves an even number of row transpositions, the determinant is unchanged.) Expanding the determinant, with $a_{j,l}, b_{j,l}$ from (\ref{def:GinOEgpf_ab}) and $\alpha_{j,l},\beta_{j,l}$ from (\ref{def:alpha_beta}), we have
\begin{align}
\nonumber &Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]=C_{N,k}\frac{k!}{((k-1)/2)!}\\
\nonumber &\times \sum_{P\in S_N}\varepsilon(P)\; \nu_{P(N)}\prod_{l=1}^{(k-1)/2}a_{P(2l-1),P(2l)} \prod_{l=(k+1)/2}^{(N-1)/2}b_{P(2l-1),P(2l)}\\
\nonumber &=C_{N,k}\frac{k!((N-k)/2)!}{2^{(N-k)/2}}\\
\nonumber &\times \sum_{P\in S_N \atop P(2l)>P(2l-1)}^*\varepsilon(P)\;\nu_{P(N)}\prod_{l=1}^{(k-1)/2} \alpha_{P(2l-1),P(2l)} \prod_{l=(k+1)/2}^{(N-1)/2} \beta_{P(2l-1),P(2l)},
\end{align}
and, with $C_{N,k}$ from (\ref{def:GinOECNK}), we have the result.
\hfill $\Box$
As for $N$ even, $Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]$ is the generalised partition function for only one part of the relevant problem and we in fact need to sum over all possible $k$. The analogues of (\ref{eqn:summedup}) and (\ref{eqn:even_gpfsum}) are then
\begin{align}
\nonumber Z_N^{\mathrm{odd}}[u,v]&:=\sum_{k=1}^N{}^{\sharp}\; Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]\\
\label{eqn:odd_gpfsum} &=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\mathrm{Pf}\left[\begin{array}{cc}
[\alpha_{j,l}+\beta_{j,l}] & [\nu_j]\\
\left[-\nu_l\right]& 0\\
\end{array}\right]_{j,l=1,...,N},
\end{align}
and those of (\ref{def:GinOE_probsGF}) and (\ref{eqn:GinOE_probsGF_pf}) for the probabilities of finding $k$ (odd) real eigenvalues are
\begin{align}
\nonumber Z_N^{\mathrm{odd}}(\zeta)&:=\sum_{k=0}^{(N-1)/2}\zeta^k Z_{2k+1,(N-1-2k)/2}^{\mathrm{odd}}[1,1]\\
\nonumber &=\sum_{k=0}^{(N-1)/2} \zeta^k p_{N,2k+1}\\
\label{eqn:GinOE_probs_odd} &=\left. \frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\mathrm{Pf}\left[\begin{array}{cc}
[\zeta \alpha_{j,l}+\beta_{j,l}] & [\nu_j]\\
\left[-\nu_l\right]& 0\\
\end{array}\right]_{j,l=1,...,N} \right|_{u=v=1}.
\end{align}
The probabilities for the extremal values of $k$ are also analogous.
\subsection{Skew-orthogonal polynomials for the real Ginibre ensemble}
\label{sec:Gsops}
As we saw in the case of the GOE, the Pfaffian in (\ref{eqn:even_gpfsum}) will be most easily calculated if we can find the appropriate polynomials that skew-diagonalise the matrix, or, for (\ref{eqn:odd_gpfsum}), make it odd skew-diagonal as in (\ref{eqn:skew_diag_mat_odd}). Recall that in Definition \ref{def:GOE_soip} we defined a skew-inner product based on the double integrals $\gamma_{j,l}$ from Proposition \ref{prop:GOE_gen_part_fn}, which were the entries of the ($N$ even) generalised partition function. Here we will make the analogous definition, this time using the $\alpha_{j,l}$ and $\beta_{j,l}$ of Proposition \ref{prop:GinOE_gpf_even}.
\begin{definition}
\label{def:Ginip1}
Let $\{ p_j\}_{j=0,1,...}$ be a set of monic polynomials, with $p_j$ of degree $j$. Define the inner product
\begin{align}
\nonumber (p_j,p_l)&:=\int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}dy\; e^{-(x^2+y^2)/2}p_{j}(x)p_{l}(y)\hspace{3pt}\mathrm{sgn}(y-x)\\
\nonumber &+2i\int_{\mathbb{R}_+^2}dw\; \mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)\; e^{-(w^2+\bar{w}^2)/2}\Bigl(p_{j}(w)p_{l}(\bar{w})-p_{l}(w)p_{j}(\bar{w})\Bigr)\\
\label{def:GinOE_sip} &=\alpha_{j+1,l+1}+\beta_{j+1,l+1}\big|_{u=v=1},
\end{align}
with $\alpha_{j,l}$ and $\beta_{j,l}$ as in (\ref{eqn:GinOE_alphabeta}).
\end{definition}
We would like to find monic polynomials that satisfy the skew-orthogonality properties
\begin{align}
\label{eqn:GinOE_soprops} (p_{2j},p_{2l}) = (p_{2j+1},p_{2l+1})=0 &,&(p_{2j},p_{2l+1})=-(p_{2l+1},p_{2j})=\delta_{j,l}r_j,
\end{align}
although note that the anti-symmetry property is obvious from the definition of the inner product (\ref{def:GinOE_sip}). The polynomials for the real Ginibre ensemble were first presented in \cite{FN07}.
\begin{proposition}
\label{prop:Ginsops}
The skew-orthogonal polynomials for the real Ginibre ensemble are
\begin{align}
\label{eqn:GinOE_sopolys} p_{2j}(x)=x^{2j}&,&p_{2j+1}(x)=x^{2j+1}-2j\; x^{2j-1},
\end{align}
with normalisation
\begin{align}
\label{eqn:GinOE_sopolys_norm} (p_{2j},p_{2j+1})=r_j=2\sqrt{2\pi}\;\Gamma (2j+1).
\end{align}
\end{proposition}
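One can test the normalisation (\ref{eqn:GinOE_sopolys_norm}) numerically in the simplest case. For $j=0$, $l=1$ the polynomials are $p_0(x)=1$ and $p_1(x)=x$, so the antisymmetrised combination in (\ref{def:GinOE_sip}) is $\bar{w}-w=-2i\,\mathrm{Im}(w)$ and the complex-plane integrand is real. The following sketch (ours; crude midpoint quadrature, so only a few digits of accuracy are expected) evaluates both integrals and recovers $(p_0,p_1)=r_0=2\sqrt{2\pi}$:

```python
import math

def inner_p0_p1(L=7.0, h=0.02):
    """Midpoint-rule evaluation of (p_0, p_1) with p_0(x)=1, p_1(x)=x.
    Real-real part:  int int e^{-(x^2+y^2)/2} y sgn(y-x) dx dy.
    Complex part:    int int_{y>0} 4 y erfc(sqrt(2) y) e^{y^2-x^2} dx dy,
    which is what the antisymmetrised polynomial term reduces to here."""
    n = round(2 * L / h)
    xs = [-L + h * (i + 0.5) for i in range(n)]
    real_part = 0.0
    for x in xs:
        for y in xs:
            real_part += math.exp(-(x * x + y * y) / 2) * y * (1.0 if y > x else -1.0)
    real_part *= h * h
    ys = [h * (i + 0.5) for i in range(round(L / h))]
    complex_part = 0.0
    for x in xs:
        ex = math.exp(-x * x)                 # Gaussian factor in x
        for y in ys:
            complex_part += 4 * y * math.erfc(math.sqrt(2) * y) * math.exp(y * y) * ex
    complex_part *= h * h
    return real_part + complex_part
```

The two contributions come out to approximately $2\sqrt{\pi}$ and $2\sqrt{\pi}(\sqrt{2}-1)$ respectively, summing to $2\sqrt{2\pi}\approx 5.013$.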
The direct verification that these polynomials are in fact skew-orthogonal with respect to the inner product is somewhat of a chore; we will only sketch some of the salient points. In the case $(p_{2j},p_{2l})$ or $(p_{2j+1},p_{2l+1})$ the integrand in $\beta_{2j+1,2l+1}$ and $\beta_{2j+2,2l+2}$ is odd under the reflection $w\mapsto-\bar{w}$ (the Gaussian and $\mathrm{erfc}$ factors are invariant, while the antisymmetrised product of two polynomials of equal parity changes sign), and so $\beta_{2j+1,2l+1}=\beta_{2j+2,2l+2}=0$. Also, $\alpha_{2j+1,2l+1}=\alpha_{2j+2,2l+2}=0$ by the same reasoning as in Proposition \ref{prop:GOE_soip} (the inner integral produces an odd integrand for the outer integral). For the remaining properties, including the calculation of the normalisation, the reader is referred to \cite{FN08} for the details.
\begin{remark}
The details we have omitted from the verification of the skew-orthogonal polynomials involve finding recursions for the $\alpha$ and $\beta$ integrals of (\ref{def:GinOE_sip}). We will partially address this issue in the following, when we discuss the calculation of the probabilities $p_{N,k}$.
\end{remark}
\begin{remark}
The skew-orthogonal polynomials may also be found via an average over a characteristic polynomial (see \cite{AkeKiePhi2010}) or indirectly using knowledge of the average of the product of two characteristic polynomials; we will pursue this latter method further in Chapters \ref{sec:SOEcharpolys} and \ref{sec:TOEsops}.
\end{remark}
\subsubsection{Probability of $k$ real eigenvalues}
\label{sec:pnk}
Recall from (\ref{eqn:Ginpnkzkn-k}) that $p_{N,k}$, the probability of obtaining $k$ real eigenvalues from an $N\times N$ real, Gaussian matrix, is given by $Z_{k,(N-k)/2}[1,1]$. In order to calculate the probabilities using (\ref{eqn:GinOE_gpf_even}) and (\ref{eqn:GinOE_gpf_odd}) we substitute $\zeta\alpha_{j,l}+\beta_{j,l}= (\zeta-1)\alpha_{j,l}+\alpha_{j,l}+\beta_{j,l} =(\zeta-1)\alpha_{j,l}+ \delta_{j,l} r_{\lfloor (j-1)/2\rfloor}$. Recalling that $\alpha_{j,l}$ is anti-symmetric we can calculate the (non-zero) $\alpha_{j,l}\big|_{u=1}$ using the relation
\begin{align}
\label{eqn:arecurs} \alpha_{2j+1,2l+2}\big|_{u=1}=2l\: I_{l-1,j}-I_{l,j},
\end{align}
where $I_{j,l}$ satisfies the recursions
\begin{align}
\label{eqn:Irecurs}
\begin{split}
I_{j+1,l}=(2j+2)I_{j,l}-2\Gamma(j+l+3/2)\\
I_{j,l+1}=(2l+1)I_{j,l}+2\Gamma(j+l+3/2),
\end{split}
\end{align}
with $I_{0,0}=-2\sqrt{\pi}$ \cite{FN08}. Combining (\ref{eqn:Irecurs}) and (\ref{eqn:arecurs}) we have
\begin{align}
\label{eqn:Ginafinal} \alpha_{2j+1,2l+2}\big|_{u=1}=2\: \Gamma(j+l+1/2).
\end{align}
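The recursions (\ref{eqn:Irecurs}) and the closed form (\ref{eqn:Ginafinal}) can be checked against one another numerically. The sketch below (an illustration of the bookkeeping, not part of the text) builds the table $I_{j,l}$ from $I_{0,0}=-2\sqrt{\pi}$ and confirms (\ref{eqn:arecurs}) against (\ref{eqn:Ginafinal}):

```python
import math

def I_table(n):
    """Build I[j][l] for 0 <= j, l < n from the recursions
    I_{j+1,l} = (2j+2) I_{j,l} - 2 Gamma(j+l+3/2),
    I_{j,l+1} = (2l+1) I_{j,l} + 2 Gamma(j+l+3/2),  I_{0,0} = -2 sqrt(pi).
    (The two recursions are path-independent, so any fill order agrees.)"""
    I = [[0.0] * n for _ in range(n)]
    I[0][0] = -2 * math.sqrt(math.pi)
    for l in range(n - 1):                 # fill the first row in l
        I[0][l + 1] = (2 * l + 1) * I[0][l] + 2 * math.gamma(l + 1.5)
    for j in range(n - 1):                 # then recurse in j
        for l in range(n):
            I[j + 1][l] = (2 * j + 2) * I[j][l] - 2 * math.gamma(j + l + 1.5)
    return I
```

For instance $I_{1,0}=-5\sqrt{\pi}$ and $2\,I_{0,0}-I_{1,0}=\sqrt{\pi}=2\Gamma(3/2)$, in agreement with (\ref{eqn:Ginafinal}) at $j=0$, $l=1$.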
For $N$ odd we see from (\ref{eqn:GinOE_gpf_odd}) that we also need to calculate $\nu_j\big|_{u=1}$; from its definition (\ref{def:GOE_nu}) we have $\nu_j\big|_{u=1}=0$ for $j$ even, while for $j$ odd we integrate by parts to find
\begin{align}
\label{eqn:Gnu} \nu_j\big|_{u=1}=(j-2)!!\sqrt{2\pi}.
\end{align}
With the entries of the matrices so specified, we can then calculate the probabilities, although we still have an unwieldy $N\times N$ Pfaffian to deal with. This situation can be improved somewhat by noting that with the polynomials (\ref{eqn:GinOE_sopolys}) (or indeed with any set of alternating even and odd functions) the Pfaffian matrix takes on a chequer pattern like (\ref{eqn:N3chequer}), and so it can be written as an $N/2\times N/2$ determinant using (\ref{eqn:chequer}). Generally an order $p$ determinant can be computed in floating point arithmetic using $\mathrm{O}(p^3)$ operations, although the bit length of intermediate values can become exponentially long, and ill-conditioning can result if this is truncated \cite{Stewart1973}. Alternatively, computer algebra can be used. The results of our calculations appear in Table \ref{tab:pnkxact_sim} of Appendix \ref{app:GinOE_kernel_elts} where they are compared to the results of the simulations in Table \ref{tab:GinOE_pnk}. Note that the exact results for $N=1,...,9$ appeared in \cite[Table 1]{Ed97}, while those for $N=12$ are listed in \cite[Table 2]{AK2007}.
The probability $p_{N,N}$ (that all eigenvalues are real) in (\ref{eqn:GinOEpNN}), which we calculated directly from the eigenvalue jpdf, can, of course, be obtained from (\ref{eqn:pNNeven}) or the odd equivalent from (\ref{eqn:GinOE_probs_odd}). The formula (\ref{eqn:GinOEpNN}) makes precise what we see experimentally in Table \ref{tab:GinOE_pnk}: the chance of finding all real eigenvalues rapidly decreases with $N$, yet for any finite $N$ we still have a non-zero probability that they are all real. Another interesting fact (which we can guess at from the table) established in the same work \cite[Corollary 7.2]{Ed97} is that all the probabilities are of the form $r+\sqrt{2}\:s$ where $r$ and $s$ are rational numbers.
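For the extreme case $k=N$ ($N$ even) the chequerboard structure makes the computation completely explicit: only the $\alpha$ entries contribute to the coefficient of the top power of $\zeta$, and by (\ref{eqn:chequer}) the Pfaffian collapses to an $N/2\times N/2$ determinant with entries $2\Gamma(j+l+1/2)$ from (\ref{eqn:Ginafinal}). The sketch below (ours) carries this out and checks it against the closed form $p_{N,N}=2^{-N(N-1)/4}$ of \cite{Ed97}:

```python
import math

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    n = len(M)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if M[p][i] == 0.0:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def p_all_real(N):
    """p_{N,N} for N even: the prefactor 2^{-N(N+1)/4} / prod Gamma(l/2)
    times det[2 Gamma(j+l+1/2)], j,l = 0,...,N/2-1."""
    assert N % 2 == 0
    pref = 2.0 ** (-N * (N + 1) / 4) / math.prod(
        math.gamma(l / 2) for l in range(1, N + 1))
    M = [[2 * math.gamma(j + l + 0.5) for l in range(N // 2)]
         for j in range(N // 2)]
    return pref * det(M)
```

For $N=2$ this returns $1/\sqrt{2}$, and for $N=4$ it returns $1/8$, matching the tabulated values.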
We are now also in a position to quantify $E_N$, the expected number of real eigenvalues. We see from (\ref{def:GinOE_probsGF}) that (for $N$ even) this will be given by
\begin{align}
\label{eqn:Ginxnreals} E_N=2\frac{\partial}{\partial \zeta}Z_N(\zeta)\Big|_{\zeta=1}.
\end{align}
We will find, however, that we can calculate the expected value quite easily once we have the correlation functions and so we delay discussion of $E_N$ until Chapter \ref{sec:Ginkernelts}.
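As a quick consistency check (ours, not part of the text), the smallest even case can be carried through explicitly. For $N=2$ the Pfaffian is the single entry $\zeta\alpha_{1,2}+\beta_{1,2}$; using $\alpha_{1,2}\big|_{u=1}=2\Gamma(1/2)$ from (\ref{eqn:Ginafinal}) and $\alpha_{1,2}+\beta_{1,2}\big|_{u=v=1}=r_0=2\sqrt{2\pi}$,
\begin{align}
\nonumber Z_2(\zeta)=\frac{2^{-3/2}}{\Gamma(1/2)\Gamma(1)}\Bigl(2\sqrt{\pi}\,\zeta+2\sqrt{2\pi}-2\sqrt{\pi}\Bigr)=\frac{\zeta}{\sqrt{2}}+1-\frac{1}{\sqrt{2}},
\end{align}
so that $p_{2,2}=1/\sqrt{2}$, $p_{2,0}=1-1/\sqrt{2}$, and (\ref{eqn:Ginxnreals}) gives $E_2=\sqrt{2}$.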
\begin{remark}
In (\ref{eqn:Ginxnreals}) we have required that $N$ be even, however this is just an artifact of the definition of the generating function $Z_N$. If the power of $\zeta$ in (\ref{def:GinOE_probsGF}) were $2k$ and in (\ref{eqn:GinOE_probs_odd}) it were $2k+1$ then we could use (\ref{eqn:Ginxnreals}) (without the factor of 2) for both even and odd cases. As it stands, for $N$ odd we have
\begin{align}
\nonumber E_N^{\mathrm{odd}}=\left(2\frac{\partial}{\partial \zeta}Z^{\mathrm{odd}}_N(\zeta)+ \zeta^{-1}Z^{\mathrm{odd}}_N(\zeta)\right) \Bigg|_{\zeta=1}.
\end{align}
\end{remark}
\subsection{Eigenvalue correlations for $N$ even}
\label{sec:Gincorrlnse}
As we have stressed, the difficulty we face in the case of the real Ginibre ensemble is the occurrence of two distinct species of eigenvalues. Consequently (recall the discussion below Proposition \ref{thm:integral_identities}) the real Ginibre ensemble satisfies neither (\ref{eqn:proj_prop1}) nor (\ref{eqn:proj_prop2}). If one attempts to apply that theorem then, because the appropriate partition function is (\ref{eqn:summedup}) (a sum over the possible values of $k$), the result of the integration on the left hand side cannot be normalised, and so the right hand side does not follow. More details about this are contained in \cite{AK2007}, where the authors propose a way to integrate Pfaffians that avoids these complications. They then apply this method to calculate $p_{N,k}$ in terms of zonal polynomials. We will not pursue their method here since, by using the generalised partition function with the relevant skew-orthogonal polynomials, we are able to find a simple form for the computation of $p_{N,k}$; further, we can push on to calculate the correlation functions with the same tools.
Recall that in Proposition \ref{prop:correlns_GOE_even} we found the correlation functions were given as a quaternion determinant with the $2\times 2$ correlation kernel $\mathbf{f}(x,y)$ from (\ref{def:GOE_Qdcorrelnk}). In that case (GOE matrices), all the eigenvalues are of a single species, and so the $2\times 2$ block represents the correlations between any pair of eigenvalues. In the present asymmetric case, the eigenvalues are now in two disjoint sets: real, and non-real complex. (We may use the term \textit{complex} to refer to these non-real complex eigenvalues if no confusion is likely.) So we will not be surprised to discover that a separate $2\times 2$ block is required for each pairing of eigenvalue species. Indeed, the correlation functions are built up from real-real, real-complex, complex-real and complex-complex $2\times 2$ blocks, each of which has the same structure as $\mathbf{f}(x,y)$.
\begin{definition}
\label{def:GinOE_kernel}
Let $p_0,p_1,...$ be the skew-orthogonal polynomials (\ref{eqn:GinOE_sopolys}) and $r_0,r_1,...$ the corresponding normalisations. With $N$ even define
\begin{align}
\nonumber S(\mu,\eta)&=2\sum_{j=0}^{\frac{N}{2}-1}\frac{1}{r_j}\Bigl[q_{2j}(\mu)\tau_{2j+1}(\eta)-q_{2j+1}(\mu)\tau_{2j}(\eta)\Bigr],\\
\nonumber D(\mu,\eta)&=2\sum_{j=0}^{\frac{N}{2}-1}\frac{1}{r_j}\Bigl[q_{2j}(\mu)q_{2j+1}(\eta)-q_{2j+1}(\mu)q_{2j}(\eta)\Bigr],\\
\nonumber \tilde{I}(\mu,\eta)&=2\sum_{j=0}^{\frac{N}{2}-1}\frac{1}{r_j}\Bigl[\tau_{2j}(\mu)\tau_{2j+1}(\eta)-\tau_{2j+1}(\mu)\tau_{2j}(\eta)\Bigr]+\epsilon(\mu,\eta)\\
\nonumber &=: I(\mu,\eta) +\epsilon(\mu,\eta),
\end{align}
where
\begin{align}
\nonumber q_j(\mu) &= e^{-\mu^2/2}\hspace{2pt}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(\mu)|)}\hspace{2pt}p_j(\mu),\\
\nonumber \tau_j(\mu) &=
\left\{
\begin{array}{ll}
-\frac{1}{2}\int_{-\infty}^{\infty}\mathrm{sgn}(\mu-z)\hspace{3pt}q_j(z)\hspace{3pt}dz, & \mu\in \mathbb{R},\\
iq_j(\bar{\mu}), & \mu\in \mathbb{R}_+^2,
\end{array}
\right.\\
\nonumber \epsilon(\mu,\eta) &=
\left\{
\begin{array}{ll}
\frac{1}{2}\mathrm{sgn}(\mu-\eta), & \mu,\eta\in \mathbb{R},\\
0, & \mathrm{otherwise}.\\
\end{array}
\right.
\end{align}
And, in terms of these quantities, define
\begin{align}
\label{def:GinOE_correlnK} \mathbf{K}(\mu,\eta)=\left[
\begin{array}{cc}
S(\mu,\eta) & - D(\mu,\eta)\\
\tilde{I}(\mu,\eta) & S(\eta,\mu)
\end{array}
\right].
\end{align}
\end{definition}
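To make these definitions concrete, the following sketch (ours; it takes for granted, in analogy with the GOE result of Proposition \ref{prop:correlns_GOE_even}, that the real one-point density is $\rho_{(1,0)}(x)=S(x,x)$) evaluates $q_j$ and $\tau_j$ for $N=2$ directly from the formulas above and integrates the density over the real line, recovering the expected number of real eigenvalues $E_2=\sqrt{2}$:

```python
import math

R0 = 2 * math.sqrt(2 * math.pi)       # r_0 from (eqn:GinOE_sopolys_norm)

def q(j, x):
    """q_j(x) for real x, where erfc(0) = 1: e^{-x^2/2} p_j(x)."""
    p = 1.0 if j == 0 else x          # p_0(x) = 1, p_1(x) = x
    return math.exp(-x * x / 2) * p

def tau(j, x, L=8.0, h=0.005):
    """tau_j(x) = -(1/2) int sgn(x - z) q_j(z) dz, by midpoint rule."""
    total = 0.0
    n = round(2 * L / h)
    for i in range(n):
        z = -L + h * (i + 0.5)
        total += (1.0 if x > z else -1.0) * q(j, z)
    return -0.5 * h * total

def real_density(x):
    """S(x, x) for N = 2: (2/r_0)[q_0 tau_1 - q_1 tau_0](x)."""
    return (2 / R0) * (q(0, x) * tau(1, x) - q(1, x) * tau(0, x))
```

The analytic values here are $\tau_1(x)=e^{-x^2/2}$ and $\tau_0(x)=-\sqrt{\pi/2}\,\mathrm{erf}(x/\sqrt{2})$, so the density is manifestly positive and its integral can also be checked by hand.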
There are specific $2\times 2$ blocks corresponding to each of the four types of pairs of reals and complexes. The reader may find it helpful to look at the explicit forms of the kernel elements for each of these cases; they are written out in Appendix \ref{app:GinOE_kernel_elts_even}. Also note that we have assumed $N$ is even; we will require a modification to the kernel elements in the case that $N$ is odd, which will be dealt with in due course.
In the restricted case that the eigenvalues are all real or all complex, the kernel (\ref{def:GinOE_correlnK}) was identified in \cite{FN07}, with the cross-correlations being furnished in \cite{sommers2007} and, using notation similar to ours, independently in \cite{b&s2009}, although the latter uses Pfaffians instead of quaternion determinants. By Proposition \ref{prop:qdet=pf} the matrices of Pfaffians and quaternion determinants are related by a factor of $\mathbf{Z}_2^{-1}$, so the Pfaffian kernel is
\begin{align}
\label{def:GinPfK} \left[
\begin{array}{cc}
S(\mu,\eta) & - D(\mu,\eta)\\
\tilde{I}(\mu,\eta) & S(\eta,\mu)
\end{array}
\right] \mathbf{Z}_2^{-1}=\left[
\begin{array}{cc}
D(\mu,\eta) & S(\mu,\eta)\\
-S(\eta,\mu) & \tilde{I}(\mu,\eta)
\end{array}
\right],
\end{align}
which is identical to that in \cite{b&s2009}.
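For concreteness (a check on signs; it assumes the convention $\mathbf{Z}_2=\left[\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right]$, so that $\mathbf{Z}_2^{-1}=\left[\begin{smallmatrix}0&1\\-1&0\end{smallmatrix}\right]$), the matrix product in (\ref{def:GinPfK}) is simply
\begin{align}
\nonumber \left[
\begin{array}{cc}
S(\mu,\eta) & - D(\mu,\eta)\\
\tilde{I}(\mu,\eta) & S(\eta,\mu)
\end{array}
\right]\left[
\begin{array}{cc}
0 & 1\\
-1 & 0
\end{array}
\right]=\left[
\begin{array}{cc}
D(\mu,\eta) & S(\mu,\eta)\\
-S(\eta,\mu) & \tilde{I}(\mu,\eta)
\end{array}
\right];
\end{align}
that is, right-multiplication by $\mathbf{Z}_2^{-1}$ interchanges the two columns, with a sign change in one of them.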
As in Lemma \ref{lem:GOE_D=S=I} for the GOE, we have some relationships between the kernel elements, although here there are more cases to consider since there are four pairings of real and complex eigenvalues. These relationships are easily verified (particularly when the explicit forms in the appendix are kept in mind) and so no proof is given.
\begin{lemma}
\label{lem:Gin_s=d=i}
With the functions $S,D$ and $\tilde{I}$ as given in Definition \ref{def:GinOE_kernel}, and using the convention that $x,y\in \mathbb{R}$ and $w,z\in \mathbb{C}\backslash \mathbb{R}$ we have
\begin{align}
\label{eqn:Gin_s=d=i}
\begin{split}\tilde{I}_{r,r}(x,y)&=-\int_{x}^{y}S_{r,r}(t,y)dt+\frac{1}{2}\mathrm{sgn}(x-y),\\
\tilde{I}_{c,r}(w,x)&=-\tilde{I}_{r,c}(x,w)=iS_{c,r}(\bar{w},x),\\
\tilde{I}_{c,c}(w,z)&=iS_{c,c}(\bar{w},z),\\
D_{r,r}(x,y)&=-\frac{\partial}{\partial y}S_{r,r}(x,y),\\
D_{r,c}(x,w)&=-D_{c,r}(w,x)=-iS_{r,c}(x,\bar{w}),\\
D_{c,r}(w,x)&=-D_{r,c}(x,w)=-\frac{\partial}{\partial x}S_{c,r}(w,x),\\
D_{c,c}(w,z)&=-iS_{c,c}(w,\bar{z}),
\end{split}
\end{align}
where the subscripts $r$ and $c$ are to clarify the domain.
\end{lemma}
\begin{remark}
Note the `missing' relation --- from the apparent symmetries it is expected that $\tilde{I}_{r,c}(x,w)$ could be calculated as some integral of $S_{r,c}(x,w)$. This can be done in the even case; however, it cannot be done in the odd case because of the extra term that appears in $\tilde{I}_{r,c}(x,w)$, which depends on the complex variable (see Appendix \ref{app:GinOE_kernel_elts_odd}). Of course, we are still able to obtain $\tilde{I}_{r,c}(x,w)$ by its anti-symmetry, and so the missing relation does not affect the formulation.
\end{remark}
In this section we will present two methods for calculating the correlation functions, both based on functional differentiation and both beginning with $Z_N[u,v]$ from (\ref{eqn:even_gpfsum}). The first method produces a Fredholm quaternion determinant with a $4\times 4$ kernel, instead of a $2\times 2$ kernel as in (\ref{eqn:ZN_integ_op}), where each $2\times 2$ block relates to a different pairing of reals and complexes. This method highlights the separate treatments required for the different pairings; however, it needs more general functional differentiation and Fredholm operators. The second method, which is more in keeping with the literature on the topic \cite{sommers2007,b&s2009,Sinc09}, uses a perhaps less natural approach in which a generalised variable stands for both real and complex variables as required, although it results in a $2\times 2$ kernel and allows the use of the existing functional differentiation and Fredholm operators from the GOE case. Of course, both methods result in the same correlation functions with kernel given by (\ref{def:GinOE_correlnK}), and each is easily generalised to the odd case using the same method as that in Proposition \ref{prop:pf_integ_op_odd}.
\begin{remark}
Although we say that the $2\times 2$ kernel method is more in keeping with the literature on the topic, a variant of the $4\times 4$ kernel method can be found in \cite[Chapter 6.7]{forrester?}.
\end{remark}
\subsubsection{Two component $4\times 4$ kernel method}
We would like to apply functional differentiation again (as in the case of the GOE in Chapter \ref{sec:correlns_even_GOE}) to find the correlation functions $\rho_{(n_1,n_2)}(x_{1},...,x_{n_1},w_{1},...,w_{n_2})$ with $n_1$ real and $n_2$ non-real complex eigenvalues. However, it seems (\ref{eqn:fnal_diff_correln}) is inadequate for our needs since it contains only one species of eigenvalue; we require instead the formula
{\small
\begin{align}
\nonumber &\rho_{(n_1,n_2)}(x_{1},...,x_{n_1},w_{1},...,w_{n_2}):=\\
\label{eqn:GinOEfnal_diff_correln} &\frac{1}{Z_N[u,v]}\frac{\delta^{n_1+n_2}}{\delta u(x_1)\cdot\cdot\cdot \delta u(x_{n_1})\delta v(w_1)\cdot\cdot\cdot \delta v(w_{n_2})}Z_N[u,v]{\Big |}_{u=v=1}.
\end{align}
}This formula augurs well, however, we will also require a generalised form of the Fredholm operators to make use of it.
\begin{definition}
\label{def:GinOE_Freds}
Let $\lambda$ be a constant parameter and $K_G$ an integral operator with kernel
\begin{align}
\nonumber \left[\begin{array}{cc}
\kappa_{11}(x,y) & \kappa_{12}(x,z)\\
\kappa_{21}(w,y) & \kappa_{22}(w,z)
\end{array}\right],
\end{align}
having two species of variable, $\{x,y\}$ and $\{ w,z\}$, and
\begin{align}
\nonumber K(x_i,x_j,w_m,w_n)_{s,t}:=\left[\begin{array}{cc}
\kappa_{11}(x_i,x_j) & \kappa_{12}(x_i,w_n)\\
\kappa_{21}(w_m,x_j) &\kappa_{22}(w_m,w_n)
\end{array} \right]_{i,j=1,...,s \atop m,n=1,...,t}.
\end{align}
Then
\begin{align}
\nonumber \det[1+\lambda K_G]:=&\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{\lambda^{s+t}}{s!t!}\int dx_1\cdot\cdot\cdot\int dx_s \int dw_1\cdot\cdot\cdot \int dw_t\\
\nonumber &\times \det K(x_i,x_j,w_m,w_n)_{s,t},
\end{align}
\begin{align}
\nonumber \mathrm{qdet}[1+\lambda K_G]:=&\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{\lambda^{s+t}}{s!t!}\int dx_1\cdot\cdot\cdot\int dx_s \int dw_1\cdot\cdot\cdot \int dw_t\\
\nonumber &\times \mathrm{qdet} \:K(x_i,x_j,w_m,w_n)_{s,t},
\end{align}
when $K(x_i,x_j,w_m,w_n)_{s,t}$ is self-dual, and
\begin{align}
\nonumber \mathrm{Pf}[1+\lambda K_G]:=&\sum_{s=0}^{\infty}\sum_{t=0}^{\infty}\frac{\lambda^{s+t}}{s!t!}\int dx_1\cdot\cdot\cdot\int dx_s \int dw_1\cdot\cdot\cdot \int dw_t\\
\nonumber &\times \mathrm{Pf} \: K(x_i,x_j,w_m,w_n)_{s,t},
\end{align}
when $K(x_i,x_j,w_m,w_n)_{s,t}$ is anti-symmetric. The first term $(s=t=0)$ in each sum is taken to be $1$.
\end{definition}
By following the same method of proof as in Lemma \ref{lem:pf_qdet_kernel}, we can establish its analogue in this case, which then gives us the analogue of Corollary \ref{cor:sqrtFreds}.
\begin{corollary}
With the definitions of Definition \ref{def:GinOE_Freds} we have
\begin{align}
\nonumber \left(\det[1+\lambda K_G] \right)^{1/2}=\mathrm{qdet}[1+\lambda K_G]
\end{align}
in the case that $K(x_i,x_j,w_m,w_n)_{s,t}$ is self-dual, and
\begin{align}
\nonumber \left(\det[1+\lambda K_G] \right)^{1/2}=\mathrm{Pf}[1+\lambda K_G]
\end{align}
when $K(x_i,x_j,w_m,w_n)_{s,t}$ is anti-symmetric.
\end{corollary}
We can see that with (\ref{eqn:GinOEfnal_diff_correln}) applied to the Fredholm operators in Definition \ref{def:GinOE_Freds}, we will be able to pick out the term in the expansion of the integral operator corresponding to any number of real and complex eigenvalues. The only difficulty that remains then is to express the generalised partition function as a Fredholm quaternion determinant, in analogue with (\ref{eqn:ZN_integ_op}).
We will use the integral operator definitions of (\ref{def:integop}) and (\ref{eqn:integop_ab}), where it will be recalled that, by convention, $y$ is the variable of integration. Here we use the convention that we integrate the real variable $y$ or the complex variable $z$. (We never have both $y$ and $z$ in the same expression so the convention can be applied consistently.)
\begin{proposition}
\label{prop:4x4_fred}
Let $x,y\in\mathbb{R}$ and $w,z\in\mathbb{C}\backslash\mathbb{R}$. Then, with $\alpha_{ij}$ and $\beta_{ij}$ as in Proposition \ref{prop:GinOE_gpf_even}, we have
\begin{align}
\label{eqn:4x4_fred} \mathrm{Pf}[\alpha_{ij} +\beta_{ij}]_{i,j=1,...,N}=\prod_{j=0}^{N/2-1}r_j\;\mathrm{qdet} [\mathbf{1}_4+ \mathbf{K}_G(\mathbf{t}-\mathbf{1}_4)],
\end{align}
where $\mathbf{K}_G(\mathbf{t}-\mathbf{1}_4)$ is the $4\times 4$ matrix integral operator with kernel
\begin{align}
\nonumber \left[\begin{array}{ll}
\mathbf{K}(x,y) & \mathbf{K}(x,z)\\
\mathbf{K}(w,y) & \mathbf{K}(w,z)
\end{array}\right] \left[\begin{array}{cccc}
u(y)-1 & 0 & 0 & 0\\
0 & u(y)-1 & 0 & 0\\
0 & 0 & v(z)-1 & 0\\
0 & 0 & 0 & v(z)-1\\
\end{array}\right],
\end{align}
with $\mathbf{K}(\mu,\eta)$ as in (\ref{def:GinOE_correlnK}).
\end{proposition}
\textit{Proof}: Let
\begin{align}
\nonumber u=&\; \sigma+1,\\
\nonumber v=&\; \eta+1,\\
\nonumber \psi_j(x):=&\; e^{-x^2/2}p_{j-1}(x)\\
\label{def:GFredqdetPrf} \phi_j(z):= &\; \sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(z)|)}\: e^{-z^2/2}p_{j-1}(z).
\end{align}
We still require the integral operator $\epsilon$ as used in Proposition \ref{prop:pf_integ_op} for the real integrals, but we also need to expand it to include the complex case,
\begin{align}
\label{def:rc_eps} \epsilon f [\eta]=\left\{ \begin{array}{cl}
\frac{1}{2}\int_{\mathbb{R}}f(y)\:\mathrm{sgn}(\eta-y)dy, & \eta \in \mathbb{R},\\
-i\; f(\bar{\eta}), & \eta \in \mathbb{C}\backslash \mathbb{R}.
\end{array}\right.
\end{align}
So then
\begin{align}
\nonumber \alpha_{j,l}+\beta_{j,l}&=(\alpha_{j,l}+\beta_{j,l})\Big|_{u=v=1}\\
\nonumber &-2\int_{-\infty}^{\infty}\sigma(x)\big(\psi_j(x)\epsilon \psi_l[x]-\psi_l(x)\epsilon\psi_j[x]-\psi_l(x)\epsilon(\sigma\psi_j)[x]\big)dx\\
\nonumber &-2\int_{\mathbb{R}_+^2} \eta(z) \big(\phi_j(z)\epsilon\phi_l[z]-\epsilon\phi_j[z]\phi_l(z)\big)\; dz\\
\label{def:abtau} &=(\alpha_{j,l}+\beta_{j,l})^{(1)}-(\alpha_{j,l}+\beta_{j,l})^{(\tau)},
\end{align}
where $(\alpha_{j,l}+\beta_{j,l})^{(1)}:=(\alpha_{j,l}+\beta_{j,l})\Big|_{u=v=1}$ and $(\alpha_{j,l}+\beta_{j,l})^{(\tau)}$ is the two remaining terms. By using the skew-orthogonal polynomials we can decompose $\left[(\alpha_{j,l}+\beta_{j,l})^{(1)}\right]_{j,l=1,...,N}$ as in the proof of Corollary \ref{cor:qdet=pf} with $\mathbf{D}\mathbf{Z}_N^{-1}$, using $\mathbf{Z}_N$ from (\ref{def:Z2N}) and\\ $\mathbf{D}=\mathrm{diag}[r_0,r_0,r_1,r_1,...,r_{N/2-1},r_{N/2-1}]$.
Recall $G_j(x)$ from (\ref{def:G_psi}) and let
\begin{align}
\nonumber H_{2j-1}(z):=\phi_{2j}(z),&& H_{2j}(z):=-\phi_{2j-1}(z),
\end{align}
then
\begin{align}
\nonumber &\mathrm{Pf}[\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,N}=\prod_{j=0}^{N/2-1}r_j\; \mathrm{qdet}\Big[\delta_{j,l}\\
\nonumber &+\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\int_{-\infty}^{\infty}\sigma(x)\big(G_j(x)\epsilon \psi_l[x]-\psi_l(x)\epsilon G_j[x]-\psi_l(x)\epsilon(\sigma G_j)[x]\big)dx\\
\nonumber &+\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\int_{\mathbb{R}_+^2} \eta(z)\big(H_j(z)\epsilon\phi_l[z]-\epsilon H_j[z]\phi_l(z)\big)\; dz\Big].
\end{align}
Now if we define
\begin{align}
\label{def:GinOE_om_F} \Omega_G:=\left[
\begin{array}{cccc}
-\epsilon \sigma & -1 & 0 & 0\\
1 & 0 & 0 & 0\\
0 & 0 & 0 & -1\\
0 & 0 & 1 & 0
\end{array}
\right]&&\mathrm{and}&& \mathbf{F}:=\left[
\begin{array}{ccc}
\psi_1(y) & \cdot \cdot\cdot & \psi_N(y)\\
\epsilon\psi_1[y] & \cdot \cdot\cdot & \epsilon\psi_N[y]\\
\phi_1(z) & \cdot\cdot\cdot & \phi_N(z)\\
\epsilon\phi_1[z] & \cdot\cdot\cdot & \epsilon\phi_N[z]
\end{array}
\right]
\end{align}
then we can proceed in much the same way as for Proposition \ref{prop:pf_integ_op}, with $\Omega_G$ and $\mathbf{F}$ replacing $\Omega$ and $\mathbf{E}$ respectively. To wit, let $\mathbf{A}$ be the integral operator with kernel\\
$2\mathbf{Z}_N^{-1}\mathbf{D}^{-1}(\tau(\sigma,\eta) \Omega_G \mathbf{F})^T$, where $\tau(\sigma,\eta)=\mathrm{diag}[\sigma(y),\sigma(y),\eta(z),\eta(z)]$, and $\mathbf{B}=\mathbf{F}$. Then
\begin{align}
\nonumber \mathbf{1}_N-\mathbf{Z}_N\mathbf{D}^{-1}[(\alpha_{j,l}+\beta_{j,l})^{(\tau)}]=\mathbf{1}_N + \mathbf{A}\mathbf{B}
\end{align}
(cf. (\ref{eqn:1+ABdecomp})) and we apply (\ref{eqn:qdet_1+AB}). Since the kernel of $\mathbf{A}$ is of the form
\begin{align}
\left[\begin{array}{ll}
\nonumber -\frac{2\sigma(y)}{r_{\lfloor (j-1)/2 \rfloor}}\big(\epsilon G_j[y]+\epsilon(\sigma G_j)[y]\big)&\frac{2\sigma(y)}{r_{\lfloor (j-1)/2 \rfloor}}G_j(y)
\end{array}\right.\\
\nonumber \left. \begin{array}{rr}
-\frac{2\eta(z)}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon H_j[z]& \frac{2\eta(z)}{r_{\lfloor (j-1)/2 \rfloor}} H_j(z)
\end{array}\right]_{j=1,...,N}
\end{align}
(which is $N\times 4$) and $\mathbf{B}$ is $4\times N$, after using (\ref{eqn:qdet_1+AB}) we are left with a $4\times 4$ matrix-valued integral operator, with kernel
\begin{align}
\nonumber \mathbf{1}_4+\mathbf{B}\mathbf{A}= \left[\begin{array}{cc}
\tilde{\kappa}_{r,r} & \tilde{\kappa}_{r,c}\\
\tilde{\kappa}_{c,r} & \tilde{\kappa}_{c,c}
\end{array}\right],
\end{align}
where
\begin{align}
\nonumber \tilde{\kappa}_{r,r}&=\left[\begin{array}{l}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\Big(\psi_j\otimes \sigma \epsilon G_j + \psi_j \otimes \sigma\epsilon(\sigma G_j) \Big)\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\Big(\epsilon\psi_j\otimes \sigma \epsilon G_j + \epsilon\psi_j \otimes \sigma\epsilon(\sigma G_j) \Big)
\end{array}\right.\\
\nonumber &\qquad\qquad\qquad\qquad\left.\begin{array}{r}
\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\psi_j\otimes \sigma G_j\\
1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \psi_j\otimes \sigma G_j
\end{array}\right],\\
\nonumber \tilde{\kappa}_{r,c}&=\left[\begin{array}{cc}
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \psi_j\otimes \eta \epsilon H_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\psi_j\otimes \eta H_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \epsilon \psi_j\otimes \eta\epsilon H_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \psi_j\otimes \eta H_j
\end{array}\right],\\
\nonumber \tilde{\kappa}_{c,r}&=\left[\begin{array}{cc}
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \sigma \epsilon G_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \sigma G_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \sigma\epsilon G_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \sigma G_j
\end{array}\right],\\
\nonumber \tilde{\kappa}_{c,c}&=\left[\begin{array}{cc}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \eta \epsilon H_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \eta H_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \eta\epsilon H_j & 1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \eta H_j
\end{array}\right].
\end{align}
Noting that
\begin{align}
\label{eqn:omg_decomp} \Omega_G^T\left[\begin{array}{cccc}
1 & 0 & 0 & 0\\
-\epsilon \sigma & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}
\right]=\mathbf{Z}_4^{-1},
\end{align}
we can rewrite $\mathbf{B}\mathbf{A}$ as
\begin{align}
\nonumber \mathbf{B}\mathbf{A}&=2\mathbf{F}\mathbf{Z}_N^{-1}\mathbf{D}^{-1}(\tau(\sigma,\eta)\Omega_G\mathbf{F})^T
=2\mathbf{F}\mathbf{Z}_N^{-1}\mathbf{D}^{-1}\mathbf{F}^T\Omega_G^T\tau(\sigma,\eta)\\
\label{eqn:GinBAfactrd} &=2\mathbf{F}\mathbf{Z}_N^{-1}\mathbf{D}^{-1}\mathbf{F}^T\mathbf{Z}_4^{-1}\tau(\sigma,\eta)\left[\begin{array}{cccc}
1 & 0 & 0 & 0\\
\epsilon \sigma & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}
\right],
\end{align}
where we are keeping with the convention of (\ref{eqn:eps_sig}), that the operator $\epsilon\sigma$ from $\Omega_G$ acts before the larger operator. We can now remove terms with factors $\epsilon\sigma$ by factorising $\mathbf{1}_4+\mathbf{B}\mathbf{A}$ as follows:
\begin{align}
\label{eqn:1+ABGinOE4} \mathbf{1}_4+\mathbf{B}\mathbf{A}= \left[\begin{array}{cc}
\kappa_{r,r} & \kappa_{r,c}\\
\kappa_{c,r} & \kappa_{c,c}
\end{array}\right]\;\left[\begin{array}{cccc}
1 & 0 & 0 & 0\\
\epsilon \sigma & 1 & 0 & 0\\
0 & 0 & 1 & 0\\
0 & 0 & 0 & 1
\end{array}
\right],
\end{align}
where
\begin{align}
\nonumber \kappa_{r,r}&=\left[\begin{array}{cc}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\psi_j\otimes \sigma \epsilon G_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\psi_j\otimes \sigma G_j\\
-\epsilon\sigma-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \psi_j\otimes \sigma\epsilon G_j & 1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \psi_j\otimes \sigma G_j
\end{array}\right],\\
\nonumber \kappa_{r,c}&=\left[\begin{array}{cc}
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \psi_j\otimes \eta \epsilon H_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\psi_j\otimes \eta H_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \epsilon \psi_j\otimes \eta\epsilon H_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \psi_j\otimes \eta H_j
\end{array}\right],\\
\nonumber \kappa_{c,r}&=\left[\begin{array}{cc}
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \sigma \epsilon G_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \sigma G_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \sigma\epsilon G_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \sigma G_j
\end{array}\right],\\
\nonumber \kappa_{c,c}&=\left[\begin{array}{cc}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \eta \epsilon H_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\phi_j\otimes \eta H_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \eta\epsilon H_j & 1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \phi_j\otimes \eta H_j
\end{array}\right].
\end{align}
The matrix on the right of (\ref{eqn:1+ABGinOE4}) has unit quaternion determinant and so we have the result (up to the sign of the $D$ and $\tilde{I}$ entries, which, as discussed in Proposition \ref{prop:pf_integ_op}, leaves the result unchanged).
\hfill $\Box$
Substitution of (\ref{eqn:4x4_fred}) into (\ref{eqn:even_gpfsum}) gives us the desired form of the generalised partition function,
\begin{align}
\label{eqn:4x4ZN} Z_N[u,v]=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\prod_{j=0}^{N/2-1}r_j\;\mathrm{qdet} [\mathbf{1}_4+ \mathbf{K}_G(\mathbf{t}-\mathbf{1}_4)],
\end{align}
recalling that this Fredholm quaternion determinant involves a double sum, as defined in Definition \ref{def:GinOE_Freds}. So now we apply (\ref{eqn:GinOEfnal_diff_correln}), which will pick out the term in (\ref{eqn:4x4ZN}) corresponding to $n_1$ real eigenvalues and $n_2$ complex conjugate pairs of eigenvalues, that is, the sought correlation function. (The proof is readily adapted from that of Proposition \ref{prop:correlns_GOE_even}.)
\begin{proposition}[\cite{FN07, sommers2007, b&s2009}]
\label{prop:GinOE_evencorrelns}
Let $N$ be an even integer. Then, with $\mathbf{K}(\mu,\eta)$ from (\ref{def:GinOE_correlnK}), the real Ginibre eigenvalue correlation function for $n_1$ real and $n_2$ non-real, complex conjugate pairs is
\begin{align}
\nonumber &\rho_{(n_1,n_2)}(x_1,...,x_{n_1},w_1,...,w_{n_2})=\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}(x_i,x_j) & \mathbf{K}(x_i,w_m)\\
\mathbf{K}(w_l,x_j) & \mathbf{K}(w_l,w_m)
\end{array}\right]_{i,j=1,...,n_1 \atop l,m=1,...,n_2}\\
\nonumber &=\mathrm{Pf}\left(\left[\begin{array}{cc}
\mathbf{K}(x_i,x_j) & \mathbf{K}(x_i,w_m)\\
\mathbf{K}(w_l,x_j) & \mathbf{K}(w_l,w_m)
\end{array}\right]\mathbf{Z}_{2(n_1+n_2)}^{-1}\right)_{i,j=1,...,n_1 \atop l,m=1,...,n_2}, \quad x_i\in \mathbb{R}, w_i \in \mathbb{R}_2^+.
\end{align}
\end{proposition}
\subsubsection{One component $2\times 2$ kernel method}
Here we find the correlation functions by using the observation that if we can treat both real and complex eigenvalues in the same manner, then we will be able to apply the method of Proposition \ref{prop:pf_integ_op} more or less directly. Conceptually, we integrate along the real line and then over the upper half-plane, all with one (hopefully) convenient notation. This technique can be seen as a limit of that used in \cite{b&s2009}. A suggestion of this approach can be found in (\ref{def:rc_eps}), where we have defined a single operator that depends on the reality of the variable. The approach here, then, is to define all functions and operators such that they act appropriately on real or complex variables. Of course, we must find the same correlation functions as by any other method, so the end result will again be Proposition \ref{prop:GinOE_evencorrelns}. We include this method for two reasons: first, the idea of treating the real and complex eigenvalues together is commonly employed in the literature on the real Ginibre ensemble (and, as mentioned above, has already appeared briefly in (\ref{def:rc_eps})); and second, it highlights that our approach here is broadly applicable.
To this end, let the uppercase variables $X,Y$ stand for the variables $x,y$ real or $w,z$ complex as required. Also let
\begin{align}
\nonumber f(X)=\left\{ \begin{array}{ll}
f_1(x),&x\in\mathbb{R},\\
f_2(w),&w\in\mathbb{C}\backslash \mathbb{R}.
\end{array}\right.
\end{align}
The key to this method is the measure: if $x$ is real and $w$ non-real complex, then we define the measure $\mu$ for the uppercase variables $X,Y$ as
\begin{align}
\label{def:GinOE_measure} \int f(X)\; d\mu(X)= \int_{-\infty}^{\infty}f_1(x)\; dx+\int_{\mathbb{R}_{+}^2}f_2(w)\;dw.
\end{align}
With these modifications, we can now treat the real and complex variables together.
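As a concrete illustration of (\ref{def:GinOE_measure}), the combined measure can be sketched numerically; the Gaussian test functions $f_1,f_2$ below are my own choice, not from the text.

```python
# Numerical sketch of the measure (def:GinOE_measure): the integral of f
# over X is the integral of f_1 over R plus that of f_2 over R_+^2.
import numpy as np
from scipy.integrate import quad, dblquad

f1 = lambda x: np.exp(-x**2)                 # test function on the real line
f2 = lambda w: np.exp(-abs(w)**2)            # test function on C \ R

real_part, _ = quad(f1, -np.inf, np.inf)     # = sqrt(pi)
cplx_part, _ = dblquad(lambda v, u: f2(u + 1j*v),
                       -np.inf, np.inf, 0, np.inf)   # upper half-plane, = pi/2
total = real_part + cplx_part

assert np.isclose(real_part, np.sqrt(np.pi))
assert np.isclose(cplx_part, np.pi/2)
```

Both pieces are standard Gaussian integrals, so the combined value $\sqrt{\pi}+\pi/2$ serves as a check that the two domains are being integrated over exactly once each.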
\begin{proposition}
\label{prop:GinOE_fred_op}
Let $\alpha_{j,l}$ and $\beta_{j,l}$ be as in (\ref{eqn:GinOE_alphabeta}), and let $X,Y$ be real or non-real complex variables as required. Then, with the skew-orthogonal polynomials (\ref{eqn:GinOE_sopolys}) and corresponding normalisations (\ref{eqn:GinOE_sopolys_norm}), we have
\begin{align}
\label{eqn:2x2qdet} \mathrm{Pf}[\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,N}=\prod_{j=0}^{N/2-1}r_j\; \mathrm{qdet} [\mathbf{1}_2+\mathbf{K}(\mathbf{t}-\mathbf{1}_2)],
\end{align}
where $\mathbf{K}(\mathbf{t}-\mathbf{1}_2)$ is an integral operator with kernel $\mathbf{K}(X,Y)\mathrm{diag}[t(Y)-1,t(Y)-1]$, with $\mathbf{K}(x,y)$ as in (\ref{def:GinOE_correlnK}) and
\begin{align}
\nonumber t(Y)=\left\{ \begin{array}{cl}
u(Y), & Y \in \mathbb{R},\\
v(Y), & Y \in \mathbb{C}\backslash \mathbb{R}.
\end{array}\right.
\end{align}
\end{proposition}
\textit{Proof}: With $u, v, \psi_j(x)$ and $\phi_j(x)$ from (\ref{def:GFredqdetPrf}), let
\begin{align}
\nonumber \varphi(X)=\left\{ \begin{array}{ll}
\psi(X), & X \in \mathbb{R},\\
\phi(X), & X \in \mathbb{C}\backslash \mathbb{R},
\end{array}\right. && \theta(X)=\left\{ \begin{array}{ll}
\sigma(X), & X \in \mathbb{R},\\
\eta(X), & X \in \mathbb{C}\backslash \mathbb{R},
\end{array}\right.
\end{align}
and recall the integral operator $\epsilon$ of (\ref{def:rc_eps}). Now, with the measure defined in (\ref{def:GinOE_measure}) and $(\alpha_{j,l}+\beta_{j,l})^{(1)}$ from (\ref{def:abtau}), we have
{\small
\begin{align}
\nonumber \alpha_{j,l}+\beta_{j,l}&=(\alpha_{j,l}+\beta_{j,l})\Big|_{u=v=1}\\
\nonumber &-2\int \theta(Y)\big(\varphi_j(Y)\epsilon \varphi_l[Y]-\varphi_l(Y)\epsilon\varphi_j[Y]-\varphi_l(Y)\epsilon(\theta \varphi_j)[Y]\big)d\mu(Y)\\
\nonumber &=:(\alpha_{j,l}+\beta_{j,l})^{(1)}-(\alpha_{j,l}+\beta_{j,l})^{(\theta)},
\end{align}
}where if $\epsilon$ appears to the left of $\theta$ in a term that corresponds to a complex number then it is understood to be zero.
With the matrix re-written in this form, where both the real and complex cases are treated simultaneously, we can now apply the same method of proof as in Proposition \ref{prop:pf_integ_op}, with $\varphi$ replacing $\psi$. Explicitly, using the skew-orthogonal polynomials we decompose $\left[(\alpha_{j,l}+\beta_{j,l})^{(1)}\right]_{j,l=1,...,N}$ as $\mathbf{D}\mathbf{Z}_N^{-1}$, recalling $\mathbf{Z}_N$ from (\ref{def:Z2N}) and with $\mathbf{D}=\mathrm{diag}[r_0,r_0,r_1,r_1,...,r_{N/2-1},r_{N/2-1}]$. Letting
\begin{align}
\nonumber G_{2j-1}(t):=\varphi_{2j}(t)&,&G_{2j}(t):=-\varphi_{2j-1}(t),
\end{align}
we have
{\small
\begin{align}
\nonumber &\mathrm{Pf}[\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,N}=\prod_{j=0}^{N/2-1}r_j\; \mathrm{qdet}\Bigg[\delta_{j,l}\\
\nonumber &+\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\int\theta(Y)\Big(G_j(Y)\epsilon \varphi_l[Y]-\varphi_l(Y)\epsilon G_j[Y]-\varphi_l(Y)\epsilon(\theta G_j)[Y]\Big)d\mu(Y)\Bigg].
\end{align}
}With $\Omega_{\theta}$ as in (\ref{eqn:omega_E}), but with $\theta$ replacing $\sigma$, and
\begin{align}
\mathbf{E}_{\varphi}:=\left[
\begin{array}{ccc}
\varphi_1(Y) & \cdots & \varphi_N(Y)\\
\epsilon\varphi_1[Y] & \cdots & \epsilon\varphi_N[Y]
\end{array}
\right],
\end{align}
let $\mathbf{A}$ be the $N\times 2$ matrix-valued integral operator on $\mathbb{C}$ with kernel $2\mathbf{Z}_N^{-1}\mathbf{D}^{-1}\theta(Y)(\Omega_{\theta} \mathbf{E}_{\varphi})^T$ and $\mathbf{B}=\mathbf{E}_{\varphi}$. Then
\begin{align}
\nonumber \mathbf{1}_N-\mathbf{Z}_N\mathbf{D}^{-1}[\alpha_{j,l}+\beta_{j,l}]^{(\theta)}=\mathbf{1}_N+\mathbf{A}\mathbf{B}
\end{align}
and we may apply (\ref{eqn:qdet_1+AB}) to give
{\small
\begin{align}
\nonumber &\mathbf{1}_2+\mathbf{B}\mathbf{A}= \left[\begin{array}{l}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\Big( \varphi_j\otimes \theta \epsilon G_j+\varphi_j\otimes \theta \epsilon (\theta G_j)\Big)\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\Big(\epsilon \varphi_j\otimes \theta \epsilon G_j+\epsilon \varphi_j\otimes \theta \epsilon(\theta G_j) \Big)
\end{array}\right.\\
\nonumber &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left.\begin{array}{r}
\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\varphi_j\otimes \theta G_j\\
1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \varphi_j\otimes \theta G_j
\end{array}\right]\\
\label{1+BA_GinOE} &=\left[\begin{array}{cc}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \varphi_j\otimes \theta \epsilon G_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\varphi_j\otimes \theta G_j\\
-\epsilon \theta -\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \epsilon \varphi_j\otimes \theta\epsilon G_j & 1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \varphi_j\otimes \theta G_j
\end{array}\right] \left[
\begin{array}{cc}
1 & 0\\
\epsilon \theta & 1
\end{array}
\right],
\end{align}
}where we have used (\ref{eqn:OT_decomp}) to obtain the second equality (with $\Omega$ replaced by $\Omega_{\theta}$). The matrix on the right-hand side of (\ref{1+BA_GinOE}) has unit quaternion determinant and so we have the result (up to the sign of the $D$ and $\tilde{I}$ entries).
\hfill $\Box$
If we relabel $Z_N[u,v]$ in (\ref{eqn:even_gpfsum}) as $Z_N^{(\varphi)}[t]$, with $u,v$ appropriately replaced by $t$, and then substitute in (\ref{eqn:2x2qdet}), we have
\begin{align}
\label{eqn:Z_NGinOE_mod} Z_N^{(\varphi)}[t]=\frac{2^{-N(N+1)/4}}{\prod_{l=1}^N\Gamma(l/2)}\prod_{j=0}^{N/2-1}r_j\; \mathrm{qdet} [\mathbf{1}_2+\mathbf{K}(\mathbf{t}-\mathbf{1}_2)],
\end{align}
which is analogous to (\ref{eqn:ZN_integ_op}). We must also rewrite (\ref{eqn:fnal_diff_correln}) as
{\small
\begin{align}
\label{eqn:GinOE_fnaldiff_mod} \rho_{(m)}(X_1,...,X_m)= &\frac{1}{Z_N^{(\varphi)}[t]}\frac{\delta^{m}}{\delta t(X_1)\cdot\cdot\cdot \delta t(X_{m})}Z_N^{(\varphi)}[t]{\Big |}_{t=1},
\end{align}
}where $m=n_1+n_2$. Substituting (\ref{eqn:Z_NGinOE_mod}) into (\ref{eqn:GinOE_fnaldiff_mod}), and then replacing $X_i$ with $x_j$ real or $w_j$ non-real complex as required, gives Proposition \ref{prop:GinOE_evencorrelns}.
\begin{remark}
It is clear that both methods can be generalised to a higher number of eigenvalue domains (here we have only $\mathbb{R}$ and $\mathbb{R}_{2}^{+}$). To use the $4\times 4$ method, two extra rows and/or columns are added to the matrices $\mathbf{E}$ and $\Omega$ in (\ref{eqn:omega_E}) for each new domain; for $l$ domains one finishes up with a $2l\times 2l$ kernel, to be used with a suitably expanded Fredholm operator. For the $2\times 2$ method, we may appropriately redefine the functions and measure to include all the cases. Of the two methods, the first seems the more transparent, since each row and column can be identified with a particular domain; however, the second highlights the universal nature of the problem.
\end{remark}
\subsection{Eigenvalue correlations for $N$ odd}
\label{sec:GinOE_odd}
Roughly speaking, the method of integration over alternate variables and the evenness of Pfaffians are technically why $N$ odd requires a separate treatment, but the conceptual hurdle comes from the requirement that there must be at least one real eigenvalue in a real, odd-sized matrix. This also highlights one reason why the odd $\beta=2$ and $\beta=4$ Ginibre ensembles have not presented significantly more difficulties than their even counterparts --- in these cases the sets of matrices with real eigenvalues have measure zero in the eigenvalue support.
The odd case was first successfully dealt with in \cite{sommers_and_w2008} by invoking artificial Grassmannians, shortly followed by \cite{FM09} where the authors demonstrated that the $N$ odd correlations can be obtained as a limiting case of the correlations for $N$ even, as demonstrated in Chapter \ref{sec:odd_from_even}. Lastly, in \cite{Sinc09} it was shown how one obtains the $N$ odd correlations by modifying the approach taken in \cite{b&s2009}. As with the GOE we will first use a modified form of this latter approach (using Fredholm operators and applying functional differentiation) in Chapter \ref{sec:Gin_odd_fdiff}, and then look at obtaining the odd case as a limit of the even case in Chapter \ref{sec:Gin_oddfromeven}. Although functional differentiation is also employed in \cite{sommers_and_w2008}, the particulars of their method are sufficiently outside the scope of this work that we shall not investigate them here.
\begin{definition}
\label{def:GinOE_kernel_odd}
Let $p_0,p_1,...$ be the skew-orthogonal polynomials (\ref{eqn:GinOE_sopolys}) and $r_0,r_1,...$ the corresponding normalisations, and let $\nu_j$ be as in (\ref{def:GOE_nu}) and $\bar{\nu}_j =\nu_j\big|_{u=1}$ as it was in Proposition \ref{prop:pf_integ_op_odd}. Define
\begin{align}
\nonumber S^{\mathrm{odd}}(\mu,\eta)&=2\sum_{j=0}^{\frac{N-1}{2}-1}\frac{1}{r_j}\Bigl[\hat{q}_{2j}(\mu)\hat{\tau}_{2j+1}(\eta)-\hat{q}_{2j+1}(\mu)\hat{\tau}_{2j}(\eta)\Bigr]+\kappa(\mu,\eta),\\
\nonumber D^{\mathrm{odd}}(\mu,\eta)&=2\sum_{j=0}^{\frac{N-1}{2}-1}\frac{1}{r_j}\Bigl[\hat{q}_{2j}(\mu)\hat{q}_{2j+1}(\eta)-\hat{q}_{2j+1}(\mu)\hat{q}_{2j}(\eta)\Bigr],\\
\nonumber \tilde{I}^{\mathrm{odd}}(\mu,\eta)&=2\sum_{j=0}^{\frac{N-1}{2}-1}\frac{1}{r_j}\Bigl[\hat{\tau}_{2j}(\mu)\hat{\tau}_{2j+1}(\eta)-\hat{\tau}_{2j+1}(\mu)\hat{\tau}_{2j}(\eta)\Bigr]+\epsilon(\mu,\eta)+\theta(\mu,\eta),
\end{align}
where $\epsilon(\mu,\eta)$ is from Definition \ref{def:GinOE_kernel} and
\begin{align}
\nonumber \hat{p}_j(\mu)&=p_j(\mu)-\frac{\bar{\nu}_{j+1}}{\bar{\nu}_N}p_{N-1}(\mu),\\
\nonumber \hat{q}_j(\mu) &= e^{-\mu^2/2}\hspace{2pt}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(\mu)|)}\hspace{2pt}\hat{p}_j(\mu),\\
\nonumber \hat{\tau}_j(\mu) &=
\left\{
\begin{array}{ll}
-\frac{1}{2}\int_{-\infty}^{\infty}\mathrm{sgn}(\mu-z)\hspace{3pt}\hat{q}_j(z)\hspace{3pt}dz, & \mu\in \mathbb{R},\\
i\hat{q}_j(\bar{\mu}), & \mu\in \mathbb{R}_2^+,
\end{array}
\right.\\
\nonumber \kappa(\mu,\eta) &=
\left\{
\begin{array}{ll}
q_{N-1}(\mu)/\bar{\nu}_N, & \eta\in \mathbb{R},\\
0, & \mathrm{otherwise},\\
\end{array}
\right.\\
\nonumber \theta(\mu,\eta)&=
\big(\chi_{(\eta\in\mathbb{R})}\tau_{N-1}(\mu)-\chi_{(\mu\in\mathbb{R})}\tau_{N-1}(\eta)\big)/\bar{\nu}_N,
\end{align}
with the indicator function $\chi_{(A)}=1$ for $A$ true and zero for $A$ false. Then, let
\begin{align}
\nonumber \mathbf{K}_{\mathrm{odd}}(\mu,\eta)=\left[
\begin{array}{cc}
S^{\mathrm{odd}}(\mu,\eta) & -D^{\mathrm{odd}}(\mu,\eta)\\
\tilde{I}^{\mathrm{odd}}(\mu,\eta) & S^{\mathrm{odd}}(\eta,\mu)
\end{array}
\right].
\end{align}
\end{definition}
In Appendix \ref{app:GinOE_kernel_elts_odd} the kernel elements for $N$ odd are written out explicitly. As with the even kernel elements in Definition \ref{def:GinOE_kernel}, the odd kernel elements also satisfy the inter-relationships in Lemma \ref{lem:Gin_s=d=i}.
\subsubsection{Functional differentiation method}
\label{sec:Gin_odd_fdiff}
As may be expected from the preceding, both functional differentiation methods, using either the $4\times 4$ or $2\times 2$ kernels, can be adapted to the odd case. We find for both that, with $N$ odd, the proof of the Fredholm operator form of the Pfaffian differs from the even case in exactly the way the odd case of the GOE differed from its respective even case. Explicitly, in order to find a Fredholm operator form of the Pfaffian in (\ref{eqn:odd_gpfsum}), we are led to consider an odd skew-diagonal matrix of the form (\ref{eqn:skew_diag_mat_odd}) and its inverse, which takes us outside the space of matrices that can be decomposed as $\mathbf{D}\mathbf{Z}_{2N}^{-1}$, with $\mathbf{D}$ diagonal, and thus outside the realm of Pfaffians and quaternion determinants. However, as in the GOE odd case, this involves only technical minutiae; the real difference between the even and odd cases boils down simply to the inclusion of an additional column in the matrices $\mathbf{F}$ and $\mathbf{E}_{\varphi}$ (for the $4\times 4$ and $2\times 2$ methods respectively). Since the technique has already been demonstrated in the GOE case, we will only briefly consider these modified methods, starting with the $4\times 4$ kernel.
\begin{proposition}
\label{prop:GinOE_fred_op_odd}
Let $x,y\in\mathbb{R}$ and $w,z\in\mathbb{C}\backslash\mathbb{R}$. Then, using Definition \ref{def:GinOE_kernel_odd}, the $N$ odd analogue of Proposition \ref{prop:4x4_fred} is
\begin{align}
\label{eqn:Ginoddfred} \mathrm{Pf}\left[\begin{array}{cc}
[\alpha_{j,l}+\beta_{j,l}] & [\nu_j]\\
\left[-\nu_l\right]& 0\\
\end{array}\right]_{j,l=1,...,N}=\bar{\nu}_N\prod_{j=0}^{(N-1)/2-1}r_j \;\mathrm{qdet} [\mathbf{1}_4+\mathbf{K}_{\mathrm{odd}}(\mathbf{t}-\mathbf{1}_4)],
\end{align}
where $\mathbf{K}_{\mathrm{odd}}(\mathbf{t}-\mathbf{1}_4)$ is an integral operator with kernel
\begin{align}\nonumber
\left[\begin{array}{cc}
\mathbf{K}_{\mathrm{odd}}(x,y) & \mathbf{K}_{\mathrm{odd}}(x,z)\\
\mathbf{K}_{\mathrm{odd}}(w,y) & \mathbf{K}_{\mathrm{odd}}(w,z)
\end{array}\right] \left[\begin{array}{cccc}
u(y)-1 & 0 & 0 & 0\\
0 & u(y)-1 & 0 & 0\\
0 & 0 & v(z)-1 & 0\\
0 & 0 & 0 & v(z)-1\\
\end{array}\right].
\end{align}
\end{proposition}
\textit{Proof}: If we let
\begin{align}
\nonumber \mathcal{C}:=\left[\begin{array}{cc}
[\alpha_{j,l}+\beta_{j,l}] & [\nu_j]\\
\left[-\nu_l\right]& 0\\
\end{array}\right]_{j,l=1,...,N}
\end{align}
then we note that $\mathcal{C}^{(1)}:=\mathcal{C}\big|_{u=v=1}$ is of the form (\ref{eqn:skew_diag_mat_odd}) (although with $b_j=\bar{\nu}_{j}=0$ for $j$ even), and thus Corollary \ref{cor:qdet=pf} does not apply. So, as in the case of the GOE, we work with $(\mathrm{Pf}\:\mathcal{C})^2=\det\: \mathcal{C}$ instead of $\mathrm{Pf}\:\mathcal{C}$ itself. Note that
\begin{align}
\nonumber \mathcal{C}=\mathcal{C}^{(1)}-\mathcal{C}^{(\tau)},
\end{align}
where
$\mathcal{C}^{(\tau)}_{j,l}:=(\alpha_{j,l}+\beta_{j,l})^{(\tau)}$ of (\ref{def:abtau}) for $1\leq j < l \leq N$, and $\mathcal{C}^{(\tau)}_{j,N+1}:=\mathbf{C}_{j,N+1}^{(\tau)}$ of (\ref{eqn:oddCnu}). Now substitute
\begin{align}
\nonumber \mathbf{F}_{\mathrm{odd}}:=\left[
\begin{array}{cccc}
\psi_1(y) & \cdots & \psi_N(y) & 0\\
\epsilon\psi_1[y] & \cdots & \epsilon\psi_N[y] & -1\\
\phi_1(z) & \cdots & \phi_N(z) & 0\\
\epsilon\phi_1[z] & \cdots & \epsilon\phi_N[z] & 0
\end{array}
\right]
\end{align}
for $\mathbf{F}$ in (\ref{def:GinOE_om_F}) and define $\mathbf{A}$ to be the $(N+1)\times 4$ integral operator with kernel\\$2(\mathcal{C}^{(1)})^{-1} (\tau(\sigma,\eta)\Omega_G\mathbf{F}_{\mathrm{odd}})^T$, where the remaining notation is from Proposition \ref{prop:4x4_fred}. Then, with $\mathbf{B}=\mathbf{F}_{\mathrm{odd}}$, we apply (\ref{eqn:1+AB}) and obtain
\begin{align}
\label{eqn:GinOE_odd_ktilde} \mathbf{1}_{4}+\mathbf{B}\mathbf{A}=\left[\begin{array}{cc}
\tilde{\kappa}^{\mathrm{odd}}_{r,r} & \tilde{\kappa}^{\mathrm{odd}}_{r,c}\\
\tilde{\kappa}^{\mathrm{odd}}_{c,r} & \tilde{\kappa}^{\mathrm{odd}}_{c,c}
\end{array}\right],
\end{align}
where
{\small
\begin{align}
\nonumber \tilde{\kappa}^{\mathrm{odd}}_{r,r}&=\left[\begin{array}{c}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\Big(\hat{\psi}_j\otimes \sigma \epsilon \hat{G}_j + \hat{\psi}_j \otimes \sigma\epsilon(\sigma \hat{G}_j) \Big)+\frac{1}{\bar{\nu}_N}\psi_N\otimes \sigma\\
\mho
\end{array}\right.\\
\nonumber &\qquad\qquad\qquad\qquad\left.\begin{array}{l}
\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\psi}_j\otimes \sigma \hat{G}_j\\
1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\psi}_j\otimes \sigma \hat{G}_j+\frac{1}{\bar{\nu}_N}1\otimes\sigma\psi_N
\end{array}\right],\\
\nonumber \tilde{\kappa}^{\mathrm{odd}}_{r,c}&=\left[\begin{array}{lr}
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \hat{\psi}_j\otimes \eta \epsilon \hat{H}_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\psi}_j\otimes \eta \hat{H}_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}} \epsilon \hat{\psi}_j\otimes \eta\epsilon \hat{H}_j- \frac{1}{\bar{\nu}_N}1\otimes\eta\epsilon\phi_N & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\psi}_j\otimes \eta \hat{H}_j
\end{array}\right],\\
\nonumber \tilde{\kappa}^{\mathrm{odd}}_{c,r}&=\left[\begin{array}{cc}
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\phi}_j\otimes \sigma \epsilon \hat{G}_j +\frac{1}{\bar{\nu}_N}\psi_N\otimes \sigma & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\phi}_j\otimes \sigma \hat{G}_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\phi}_j\otimes \sigma\epsilon \hat{G}_j +\frac{1}{\bar{\nu}_N}\epsilon\phi_N\otimes\sigma & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\phi}_j\otimes \sigma \hat{G}_j
\end{array}\right],\\
\nonumber \tilde{\kappa}^{\mathrm{odd}}_{c,c}&=\left[\begin{array}{cc}
1-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\phi}_j\otimes \eta \epsilon \hat{H}_j & \sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\hat{\phi}_j\otimes \eta \hat{H}_j\\
-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\phi}_j\otimes \eta\epsilon \hat{H}_j & 1+\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\epsilon \hat{\phi}_j\otimes \eta \hat{H}_j
\end{array}\right],
\end{align}
}with
\begin{align}
\nonumber \mho&:=-\sum_{j=1}^N\frac{2}{r_{\lfloor (j-1)/2 \rfloor}}\Big(\epsilon\hat{\psi}_j\otimes \sigma \epsilon \hat{G}_j + \epsilon\hat{\psi}_j \otimes \sigma\epsilon(\sigma \hat{G}_j) \Big)\\
\nonumber &+\frac{1}{\bar{\nu}_N}\Big(\epsilon\psi_N\otimes \sigma -1\otimes\sigma\epsilon(\sigma\psi_N) - 1\otimes\sigma\psi_N \Big),
\end{align}
and $\hat{\psi}_j,\hat{\phi}_j,\hat{G}_j$ and $\hat{H}_j$ are given by the `non-hat' versions from Proposition \ref{prop:4x4_fred} with $p_j$ replaced by $\hat{p}_j$ of Definition \ref{def:GinOE_kernel_odd}. Now we use (\ref{eqn:omg_decomp}) to factorise (\ref{eqn:GinOE_odd_ktilde}), and then apply Corollary \ref{cor:sqrtFreds} to obtain the result.
\hfill $\Box$
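The identity $(\mathrm{Pf}\:\mathcal{C})^2=\det \mathcal{C}$ relied on in the proof above is easy to check numerically; the minor-expansion `pf` below is an illustrative sketch of my own (not the method of the text), applied to a random skew-symmetric matrix.

```python
import numpy as np

def pf(A):
    """Pfaffian of an even-dimensional skew-symmetric matrix,
    by Laplace-type expansion along the first row."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0
    for j in range(1, n):
        keep = [k for k in range(n) if k not in (0, j)]
        minor = A[np.ix_(keep, keep)]       # remove rows/columns 0 and j
        total += (-1)**(j - 1) * A[0, j] * pf(minor)
    return total

rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B - B.T                                 # skew-symmetric test matrix
assert np.isclose(pf(A)**2, np.linalg.det(A))
```

The recursive expansion is exponential in the matrix size, so this is a check for small matrices only, not a practical algorithm.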
\begin{remark}
By comparing Propositions \ref{prop:pf_integ_op}, \ref{prop:pf_integ_op_odd}, \ref{prop:4x4_fred} and \ref{prop:GinOE_fred_op_odd} one can see why this method has been used: with minor variations it can be applied to both the even and odd cases of the GOE and the real Ginibre ensemble, which has a pleasing symmetry. This also points to the future possibility of easily generalising this method to systems with an arbitrary number of distinct particle species. But that, as they say, is another story (and ahead of known applications). (See Chapter \ref{sec:FW} for possible uses.)
\end{remark}
Substitution of (\ref{eqn:Ginoddfred}) into (\ref{eqn:odd_gpfsum}) yields
\begin{align}
\label{eqn:Z_NGinOE_mod_odd} Z_N^{\mathrm{odd}}[u,v]= \frac{2^{-N(N+1)/4}}{\prod_{l=1}^N \Gamma(l/2)} \; \bar{\nu}_N\prod_{j=0}^{(N-1)/2-1}r_j \;\mathrm{qdet} [\mathbf{1}_4+\mathbf{K}_{\mathrm{odd}}(\mathbf{t}-\mathbf{1}_4)],
\end{align}
and, as is now routine, the correlation functions are given upon substitution of (\ref{eqn:Z_NGinOE_mod_odd}) into (\ref{eqn:GinOEfnal_diff_correln}).
\begin{proposition}[\cite{sommers_and_w2008, FM09, Sinc09}]
\label{prop:Gin_oddcorrelns}
For the real Ginibre ensemble of odd-sized matrices the correlation functions for $n_1$ real eigenvalues and $n_2$ non-real complex conjugate pairs of eigenvalues are
\begin{align}
\nonumber &\rho_{(n_1,n_2)}(x_1,...,x_{n_1},w_1,...,w_{n_2})=\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}_{\mathrm{odd}}(x_i,x_j) & \mathbf{K}_{\mathrm{odd}}(x_i,w_m)\\
\mathbf{K}_{\mathrm{odd}}(w_l,x_j) & \mathbf{K}_{\mathrm{odd}}(w_l,w_m)\\
\end{array}\right]_{i,j=1,...,n_1 \atop l,m=1,...,n_2}\\
\nonumber &=\mathrm{Pf}\left(\left[\begin{array}{cc}
\mathbf{K}_{\mathrm{odd}}(x_i,x_j) & \mathbf{K}_{\mathrm{odd}}(x_i,w_m)\\
\mathbf{K}_{\mathrm{odd}}(w_l,x_j) & \mathbf{K}_{\mathrm{odd}}(w_l,w_m)\\
\end{array}\right]_{i,j=1,...,n_1 \atop l,m=1,...,n_2}\mathbf{Z}^{-1}_{2(n_1+n_2)} \right), x_i\in \mathbb{R}, w_i \in \mathbb{R}_2^+.
\end{align}
\end{proposition}
\begin{remark}
To use the $2\times 2$ kernel method, the essential step is to replace $\mathbf{E}_{\varphi}$ with
\begin{align}
\nonumber \mathbf{E}_{\varphi,\mathrm{odd}}:=\left[
\begin{array}{cccc}
\varphi_1(Y) & \cdots & \varphi_N(Y)&0\\
\epsilon\varphi_1[Y] & \cdots & \epsilon\varphi_N[Y]&-1
\end{array}
\right],
\end{align}
in Proposition \ref{prop:GinOE_fred_op}, then Proposition \ref{prop:Gin_oddcorrelns} follows by adapting the odd method used in Proposition \ref{prop:pf_integ_op_odd}.
\end{remark}
\subsubsection{Odd from even}
\label{sec:Gin_oddfromeven}
Here we can, more or less, directly apply the method of Chapter \ref{sec:odd_from_even} to precipitate the odd case from the even case for the real Ginibre ensemble. In fact, the method was originally presented in \cite{FM09} with the real Ginibre case in mind, only using the simpler GOE case to illustrate the technique.
To ensure success, we look for a factorisation analogous to (\ref{eqn:jpdfOEfactorisation}). Using (\ref{eqn:GinOEjpdf}) we separate out the dependence on the eigenvalue $\lambda_1$ to obtain
\begin{align}
\nonumber Q_{N,k}(\Lambda,W)&=C_{N,k}\;e^{-\lambda^2_1/2} \prod^{k}_{j=2} |\lambda_1- \lambda_j|\prod^{(N-k)/2}_{j=1}|\lambda_1-w_j|| \lambda_1-\bar{w}_j|\\
\nonumber &\times \prod_{j=2}^{k}e^{-\lambda_j^2/2} \prod_{j=1}^{(N-k)/2} e^{-(w_j^2+\bar{w}_j^2)/2} \mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w_j)|)|\Delta(\tilde{\Lambda}_1,W)|,
\end{align}
where $\tilde{\Lambda}_j$ is the $\Lambda$ of Proposition \ref{prop:GinOE_eval_jpdf} without $\lambda_j$. Now we let $\lambda_1$ tend to infinity to find
\begin{align}
\label{eqn:sortfactor} Q_{N,k}(\Lambda,W) \mathop{\sim}\limits_{\lambda_1\to\infty} \frac{C_{N,k}}{C_{N-1,k-1}}\, e^{-\lambda_1^2/2}\lambda_1^{N-1}\, Q_{N-1,k-1}(\tilde{\Lambda}_1,W),
\end{align}
which is the required analogue of (\ref{eqn:jpdfOEfactorisation}). Define
\begin{align}
\nonumber g_{N,k}(\lambda_1):=\frac{C_{N,k}}{C_{N-1,k-1}}e^{-\lambda_1^2/2} \lambda_1^{N-1}= \frac{e^{-\lambda_1^2/2} \lambda_1^{N-1}}{k\:2^{N/2}\Gamma(N/2)},
\end{align}
then
\begin{align}
\nonumber \frac{\delta}{\delta u(\lambda_1)}Z_{k,(N-k)/2}[u,v] \mathop{\sim}\limits_{\lambda_1 \to \infty} k\; g_{N,k}(\lambda_1) \; Z_{k-1,(N-k)/2}[u,v],
\end{align}
recalling $Z_{k,(N-k)/2}[u,v]$ from (\ref{eqn:GinOE_gpf_even}). So with $Z_N[u,v]$ from (\ref{eqn:summedup}) we have
\begin{align}
\nonumber \frac{\delta}{\delta u (\lambda_1)} Z_N[u,v] &= \sum_{k=0}^N {}^{\sharp} \frac{\delta}{\delta u(\lambda_1)}Z_{k,(N-k)/2}[u,v]\\
\nonumber &\mathop{\sim}\limits_{\lambda_1 \to \infty} \sum_{k=1}^{N-1} {}^{\sharp}\; k\; g_{N,k}(\lambda_1)\; Z_{k-1,(N-k)/2}[u,v]\\
\nonumber &=\frac{e^{-\lambda_1^2/2}\lambda_1^{N-1}}{2^{N/2}\Gamma(N/2)} \sum_{k=1}^{N-1} {}^{\sharp}\; Z_{k-1,(N-k)/2}[u,v]\\
\label{eqn:ZkNsim} &=\frac{e^{-\lambda_1^2/2} \lambda_1^{N-1}}{2^{N/2} \Gamma(N/2)} Z_{N-1}[u,v],
\end{align}
where the $\sharp$ indicates that the sum is only over those values $k$ with the same parity as the upper terminal.
Recalling from (\ref{eqn:GinOEfnal_diff_correln}) that
\begin{align}
\rho_{(1,0)}(\lambda_1)=\frac{1}{Z_N[u,v]} \frac{\delta} {\delta u (\lambda_1)} Z_N[u,v]\Big|_{u=v=1}
\end{align}
and that $Z_N[1,1]=1$ for all $N$, it follows from (\ref{eqn:ZkNsim}) that, with $g_N(\lambda) :=
k\, g_{N,k}(\lambda)=\frac{e^{-\lambda^2/2}\lambda^{N-1}}{2^{N/2}\Gamma(N/2)}$ (which is independent of $k$),
\begin{align}
\nonumber \rho_{(1,0)}^{(N)}(\lambda_1) \mathop{\sim}\limits_{\lambda_1 \to \infty} g_N(\lambda_1),
\end{align}
which is the analogue of Lemma \ref{lem:xm_to_infty}, and so we have
\begin{align}
\nonumber Q_{N,k}(\Lambda,W) \mathop{\sim}\limits_{\lambda_1 \to \infty}
\rho_{(1,0)}(\lambda_1) Q_{N-1,k-1}(\tilde{\Lambda}_1,W),
\end{align}
which is the sought factorisation. Now, making use of (\ref{eqn:GinOEfnal_diff_correln}), in the general case we obtain
\begin{align}
\nonumber \rho_{(n_1,n_2)}^{(N)}(x_1,...,x_{n_1}, w_1,...,w_{n_2}) &\mathop{\sim}\limits_{x_1 \to \infty} \rho_{(1,0)}^{(N)}(x_1)\;\rho_{(n_1-1,n_2)}^{(N-1)}(x_2,...,x_{n_1},w_1,...,w_{n_2}),
\end{align}
the analogue of (\ref{eqn:finalOEfactorisation}). So, as with the GOE, knowledge of the correlation functions for $N$ even enables us to find the correlation functions for a system of $N-1$ eigenvalues by factoring out the density of the largest real eigenvalue and taking the limit.
Using the correlation function in Pfaffian form (recall the Pfaffian kernel (\ref{def:GinPfK}), which we call $K_N$ here), we shift the rows and columns corresponding to the eigenvalue $x_k$ to the far right and bottom of the matrix as follows:
\begin{align}
\label{eqn:GinHRCR} \mathrm{Pf}\left[\begin{array}{ccc}
K_N(x_i,x_j) & K_N(x_i,w_m) & K_N(x_i,x_k)\\
K_N(w_l,x_j) & K_N(w_l,w_m) & K_N(w_l,x_k)\\
K_N(x_k,x_j) & K_N(x_k,w_m) & K_N(x_k,x_k)\\
\end{array}\right]_{\begin{subarray}{l} i,j=1,...,n_1-1 \\ l,m=1,...,n_2 \end{subarray}}.
\end{align}
Since this involves shifting $2$ rows and $2$ columns an even number of times, the Pfaffian is unchanged. The sizes of the submatrices in (\ref{eqn:GinHRCR}) are:
\begin{itemize}
\item{top left: $2(n_1 -1)\times 2(n_1 -1)$; top centre: $2(n_1 -1)\times 2(n_2)$;\\top right: $2(n_1 -1)\times 2$.}
\item{centre left: $2(n_2)\times 2(n_1-1)$; centre: $2(n_2)\times 2(n_2)$;\\centre right: $2(n_2)\times 2$.}
\item{bottom left: $2\times 2(n_1 -1)$; bottom centre: $2\times 2(n_2)$;\\bottom right: $2\times 2$.}
\end{itemize}
We now perform the same row and column reduction as in (\ref{eqn:Pf3}), finding that the `starred' kernel elements are slightly complicated by the existence of both real and complex eigenvalues. Then, on taking the limit $x_k\to\infty$, the starred kernel elements reduce to their odd counterparts, with $N$ replaced by $N-1$, and we recover Proposition \ref{prop:Gin_oddcorrelns}. The details are omitted as the procedure is a straightforward modification of that described in Chapter \ref{sec:odd_from_even}.
\subsection{Correlation kernel elements and large $N$ limits}
\label{sec:Ginkernelts}
The summations in the kernel elements of Definitions \ref{def:GinOE_kernel} and \ref{def:GinOE_kernel_odd} can be performed explicitly on substitution of the skew-orthogonal polynomials (\ref{eqn:GinOE_sopolys}) \cite{FN08, b&s2009}. We list the results for $S(\mu,\eta)$ for each of the four combinations of real and complex eigenvalues, but first we recall the definitions
\begin{align}
\nonumber \Gamma(N,x):=&\int_{x}^{\infty}t^{N-1}e^{-t}dt=\Gamma (N) e^{-x} \sum_{j=1}^N \frac{x^{j-1}}{(j-1)!},\\
\label{def:Gammas} \gamma(N,x):=&\int_{0}^{x}t^{N-1}e^{-t}dt=\Gamma(N)-\Gamma(N,x),
\end{align}
which are called the \textit{upper} and \textit{lower incomplete gamma functions} respectively (the finite-sum evaluation of $\Gamma(N,x)$ holding for $N$ a positive integer). By substituting the polynomials (\ref{eqn:GinOE_sopolys}) into $S(\mu,\eta)$ (with $\mu=x,\eta=y\in\mathbb{R}$) and performing some manipulations involving the formulae
\begin{align}
\nonumber \int_{-\infty}^x e^{-u^2/2}u^{2k+1}du&=-(2k)!!\; e^{-x^2/2}\sum_{l=0}^k \frac{x^{2l}}{(2l)!!},\\
\nonumber \int_{-\infty}^x e^{-u^2/2}u^{2k}du&=(2k-1)!!\int_{-\infty}^x e^{-u^2/2}du\\
\nonumber &-(2k-1)!! \; e^{-x^2/2}\sum_{l=1}^k \frac{x^{2l-1}}{(2l-1)!!},
\end{align}
we obtain a closed form of $S_{r,r}(x,y)$. The other kernel elements can be similarly summed and we have
{\small
\begin{align}
\label{eqn:Ginsummed}
\begin{split}
S_{r,r}(x,y)&=\frac{1}{\sqrt{2\pi}}\left[e^{-(x-y)^2/2}\frac{\Gamma(N-1,xy)}{\Gamma(N-1)}+2^{(N-3)/2}e^{-x^2/2}x^{N-1}\mathrm{sgn}(y)\frac{\gamma(\frac{N-1}{2},y^2/2)}{\Gamma(N-1)}\right],\\
S_{r,c}(x,w)&=\frac{ie^{-(x-\bar{w})^2/2}}{\sqrt{2\pi}}(\bar{w}-x)\frac{\Gamma(N-1,x\bar{w})}{\Gamma(N-1)}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)},\\
S_{c,r}(w,x)&=\frac{1}{\sqrt{2\pi}}\left[e^{-(w-x)^2/2}\frac{\Gamma(N-1,wx)}{\Gamma(N-1)}+2^{(N-3)/2}e^{-w^2/2}w^{N-1}\mathrm{sgn}(x)\frac{\gamma(\frac{N-1}{2},x^2/2)}{\Gamma(N-1)}\right]\\
&\times\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)},\\
S_{c,c}(w,z)&=\frac{ie^{-(w-\bar{z})^2/2}}{\sqrt{2\pi}}(\bar{z}-w)\frac{\Gamma(N-1,w\bar{z})}{\Gamma(N-1)}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(z)|)},
\end{split}
\end{align}
}where we again use the convention that $x,y$ are real and $w,z$ are non-real complex. The remaining kernel elements $D(\mu,\eta)$ and $\tilde{I}(\mu,\eta)$ can also be written in such a form by direct summation or by using Lemma \ref{lem:Gin_s=d=i}. (The one caveat to the previous statement is that there does not seem to be a closed form of $I_{r,r}(x,y)$.) Note that the equations in (\ref{eqn:Ginsummed}) are independent of the parity of $N$, and indeed they hold in both the even and odd cases: this can be checked by explicitly performing the sums in Definition \ref{def:GinOE_kernel_odd} using the skew-orthogonal polynomials, which yields the same set of equations. See Appendix \ref{app:Ginsummed} for the full set of summed kernel elements.
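As a quick numerical sanity check, the finite-sum form of $\Gamma(N,x)$ from (\ref{def:Gammas}) can be verified for integer $N$ with `scipy.special`, where `gammaincc` is the regularised upper incomplete gamma function; the values of $N$ and $x$ below are arbitrary choices of mine.

```python
# Check Gamma(N, x) = Gamma(N) e^{-x} sum_{j=1}^{N} x^{j-1}/(j-1)!
import numpy as np
from math import factorial
from scipy.special import gamma, gammaincc

N, x = 7, 2.3
upper = gammaincc(N, x) * gamma(N)            # Γ(N, x), unregularised
finite_sum = gamma(N) * np.exp(-x) * sum(
    x**(j - 1) / factorial(j - 1) for j in range(1, N + 1))
lower = gamma(N) - upper                      # γ(N, x) = Γ(N) − Γ(N, x)

assert np.isclose(upper, finite_sum)
```

Note that `scipy` returns the regularised functions $\Gamma(N,x)/\Gamma(N)$ and $\gamma(N,x)/\Gamma(N)$, so the factor $\Gamma(N)$ must be restored by hand.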
Recall from (\ref{eqn:1pt_correln}) that for the GOE the density --- which is identical to the $1$-point correlation function --- of real eigenvalues was given by $S(x,x)$; similarly for the real Ginibre ensemble we see from (\ref{eqn:Ginsummed}) that the density of real eigenvalues is given by \cite{eks1994}
\begin{align}
\nonumber \rho^r_{(1)}(x)&=S_{r,r}(x,x)\\
\label{eqn:Ginreal_density}&=\frac{1}{\sqrt{2\pi}}\left[\frac{\Gamma(N-1,x^2)}{\Gamma(N-1)} +2^{(N-3)/2} e^{-x^2/2} |x|^{N-1}\frac{\gamma(\frac{N-1}{2},x^2/2)} {\Gamma(N-1)}\right],
\end{align}
and the density of complex eigenvalues is given by \cite{Ed97}
\begin{align}
\nonumber \rho^c_{(1)}(w)&=S_{c,c}(w,w)\\
\label{eqn:Gincompx_density}&=\sqrt{\frac{2}{\pi}}\;ve^{2v^2}\frac{\Gamma(N-1,|w|^2)}{\Gamma(N-1)}\mathrm{erfc}(\sqrt{2}v),
\end{align}
where $w=u+iv$.
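The two densities can be coded directly from (\ref{eqn:Ginreal_density}) and (\ref{eqn:Gincompx_density}); the sketch below (my construction, using `scipy`, with `erfcx` the scaled complementary error function used to avoid overflow in $e^{2v^2}\mathrm{erfc}(\sqrt{2}v)$) checks, for $N=2$, that the real density integrates to the known value $\sqrt{2}$ and that real eigenvalues plus conjugate pairs account for all $N$ eigenvalues.

```python
import numpy as np
from scipy.special import gammaincc, gammainc, gamma, erfcx
from scipy.integrate import quad, dblquad

def rho_real(x, N):
    """Density of real eigenvalues, (eqn:Ginreal_density)."""
    t1 = gammaincc(N - 1, x*x)
    t2 = (2**((N - 3)/2) * np.exp(-x*x/2) * abs(x)**(N - 1)
          * gammainc((N - 1)/2, x*x/2) * gamma((N - 1)/2) / gamma(N - 1))
    return (t1 + t2) / np.sqrt(2*np.pi)

def rho_cplx(u, v, N):
    """Density of non-real eigenvalues in R_+^2, (eqn:Gincompx_density);
    e^{2v^2} erfc(sqrt(2) v) is computed stably as erfcx(sqrt(2) v)."""
    return np.sqrt(2/np.pi) * v * gammaincc(N - 1, u*u + v*v) * erfcx(np.sqrt(2)*v)

N = 2
n_real, _ = quad(rho_real, -np.inf, np.inf, args=(N,))
n_pairs, _ = dblquad(lambda v, u: rho_cplx(u, v, N),
                     -np.inf, np.inf, 0, np.inf)

assert np.isclose(n_real, np.sqrt(2))          # E_2 = sqrt(2)
assert np.isclose(n_real + 2*n_pairs, N)       # all eigenvalues accounted for
```

Each conjugate pair contributes one eigenvalue to $\mathbb{R}_+^2$, hence the factor of $2$ in the bookkeeping check.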
In Chapter \ref{sec:pnk} we discussed the probability $p_{N,k}$ of obtaining $k$ real eigenvalues from an $N\times N$ real Ginibre matrix, and we mentioned that an interesting related quantity is $E_N$, the expected number of real eigenvalues. By integrating (\ref{eqn:Ginreal_density}) over the real line we have a simpler method of calculating $E_N$ than using (\ref{eqn:Ginxnreals}), and, with a result from \cite[3.196.1]{GraRyz2000}, we find
\begin{align}
\nonumber E_N&=\frac{1}{2}+\sqrt{\frac{2}{\pi}}\frac{\Gamma(N+1/2)}{\Gamma(N)}{}_2F_1(1,-1/2;N;1/2)\\
\label{eqn:Ginxnreals2} &=\frac{1}{2} + \sqrt{\frac{2N}{\pi}}\left(1-\frac{3}{8N}-\frac{3}{128N^2}+\frac{27}{1024 N^3}+\frac{499}{32768 N^4}+O(1/N^5) \right),
\end{align}
which was first identified in \cite[Corollaries 5.1 and 5.2]{eks1994}. Clearly, from (\ref{eqn:Ginxnreals2}) we see that
\begin{align}
\label{eqn:bigNEN} E_N\sim\sqrt{\frac{2N}{\pi}}
\end{align}
for large $N$. In \cite{FN07} the authors calculated the large-$N$ variance of the number of real eigenvalues
\begin{align}
\label{eqn:Gvar} \sigma_N^2=(2-\sqrt{2})E_N,
\end{align}
which, in Chapter \ref{sec:Ssops}, we will compare to the analogous result for the real spherical ensemble, finding that they are identical.
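The hypergeometric and asymptotic forms of $E_N$ in (\ref{eqn:Ginxnreals2}) are easy to check against each other numerically by summing the Gauss series for ${}_2F_1$ directly; the sketch below (names illustrative) also recovers the known values $E_1=1$ and $E_2=\sqrt{2}$:

```python
import math

def hyp2f1(a, b, c, z, terms=80):
    # Gauss series: sum_k (a)_k (b)_k / ((c)_k k!) z^k, convergent for |z| < 1
    total, term = 1.0, 1.0
    for k in range(terms):
        term *= (a + k) * (b + k) / ((c + k) * (k + 1)) * z
        total += term
    return total

def E_exact(N):
    # first line of eqn (Ginxnreals2)
    return 0.5 + math.sqrt(2 / math.pi) * math.gamma(N + 0.5) / math.gamma(N) \
               * hyp2f1(1.0, -0.5, float(N), 0.5)

def E_asym(N):
    # second line of eqn (Ginxnreals2)
    return 0.5 + math.sqrt(2 * N / math.pi) * (1 - 3 / (8 * N) - 3 / (128 * N ** 2)
                                               + 27 / (1024 * N ** 3) + 499 / (32768 * N ** 4))

assert abs(E_exact(1) - 1.0) < 1e-8           # a 1x1 real matrix has one real eigenvalue
assert abs(E_exact(2) - math.sqrt(2)) < 1e-8  # known exact value E_2 = sqrt(2)
```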
We can also find the large $N$ limits of the kernel elements. First we look for the limit in the bulk: let $N\to\infty$ with $x,y,w,z$ fixed. Noting from (\ref{def:Gammas}) that $\Gamma(N,x) \to \Gamma(N)$ for large $N$ (and so $\gamma(N,x)\to 0$) we have \cite{FN07,b&s2009}
\begin{align}
\nonumber S^{\mathrm{bulk}}_{r,r}(x,y)&=\frac{1}{\sqrt{2\pi}}e^{-(x-y)^2/2},\\
\nonumber S^{\mathrm{bulk}}_{r,c}(x,w)&=\frac{i}{\sqrt{2\pi}}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)}\;(\bar{w}-x)e^{-(x-\bar{w})^2/2},\\
\nonumber S^{\mathrm{bulk}}_{c,r}(w,x)&=\frac{1}{\sqrt{2\pi}}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)}\;e^{-(w-x)^2/2},\\
\label{eqn:Ginbulk} S^{\mathrm{bulk}}_{c,c}(w,z)&=\frac{i}{\sqrt{2\pi}}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(w)|)}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(z)|)}\;(\bar{z}-w)e^{-(\bar{z}-w)^2/2}.
\end{align}
We will see in Chapter \ref{sec:circlaw} that the eigenvalue support tends to a disk centred at the origin with radius $\sqrt{N}$ and so we can calculate the limiting kernel elements at the real edge (the edge of the support on the real line) by taking $X=x-\sqrt{N}$ (and similarly for $Y,W,Z$). Then, with the following asymptotic forms for large $N$
\begin{align}
\nonumber \gamma(N-j+1,N)&\sim\frac{\Gamma(N-j+1)}{2}\Big( 1+\mathrm{erf}(j/\sqrt{2N}) \Big),\\
\nonumber \Gamma(N-j+1)&\sim \sqrt{2\pi}N^{N-j+1/2}e^{-N}e^{-j^2/2N},\\
\nonumber \Big( 1+\frac{x}{\sqrt{N}}\Big)^{N-1}&\sim e^{x\sqrt{N}-x^2/2},
\end{align}
we have \cite{FN07, b&s2009}
\begin{align}
\nonumber S^{\mathrm{edge}}_{r,r}(X,Y)&=\frac{1}{\sqrt{2\pi}}\Big[ \frac{e^{-(X-Y)^2/2}}{2} \mathrm{erfc}\Big(\frac{X+Y}{\sqrt{2}}\Big)+\frac{e^{-X^2}}{2\sqrt{2}}(1+\mathrm{erf} \:Y)\Big],\\
\nonumber S^{\mathrm{edge}}_{r,c}(X,W)&=\frac{i}{2\sqrt{2\pi}}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(W)|)}\;(\overline{W}-X)e^{-(X-\overline{W})^2/2}\mathrm{erfc} \Big( \frac{X+\overline{W}}{\sqrt{2}}\Big),\\
\nonumber S^{\mathrm{edge}}_{c,r}(W,X)&=\frac{1}{\sqrt{2\pi}}\Big[\frac{e^{-(W-X)^2/2}}{2}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(W)|)}\;\mathrm{erfc} \Big( \frac{X+W}{\sqrt{2}}\Big)\\
\nonumber &+\frac{e^{-W^2}}{2\sqrt{2}}(1+\mathrm{erf}\:X)\Big],\\
\nonumber S^{\mathrm{edge}}_{c,c}(W,Z)&=\frac{i}{2\sqrt{2\pi}}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(W)|)}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(Z)|)}\\
\label{eqn:Ginedge} &\times(\overline{Z}-W)e^{-(\overline{Z}-W)^2/2}\mathrm{erfc} \Big( \frac{W+\overline{Z}}{\sqrt{2}}\Big).
\end{align}
For a full list of the limiting kernel elements in the bulk and at the edge see (\ref{eqn:Ginbulkall}) and (\ref{eqn:Ginedgeall}) in Appendix \ref{app:Ginsummed}.
These bulk and edge limits have been taken near the real line, where the effect of the non-zero density of eigenvalues on the real line can still be felt. If one were to look at the limits away from the real line, then one expects this effect to vanish. Indeed this is what happens; from \cite{b&s2009} the limiting bulk correlations for the complex eigenvalues away from the real line are given by
\begin{align}
\label{eqn:Gincbulk} \lim_{N\to\infty}\rho_{(n)}^{c}(w_1,...,w_n)=\det\left[\frac{1}{\pi}e^{-(w_j-\bar{w}_l)^2/2}\right]_{j,l=1,...,n},
\end{align}
and for the complex edge
\begin{align}
\label{eqn:Gincedge} \lim_{N\to\infty}\rho_{(n)}^{c}(W_1,...,W_n)=\det\left[\frac{1}{\pi}e^{-(W_j-\overline{W}_l)^2/2}\mathrm{erfc}\Big(\frac{W_j\bar{u}+\overline{W}_l u}{\sqrt{2}}\Big)\right]_{j,l=1,...,n},
\end{align}
where $W_j=w_j-u\sqrt{N}$ with $u\in\mathbb{C}$ and $|u|=1$, so that $u$ specifies the position on the circular edge. These correlations are identical to those of the complex Ginibre ensemble, which we expect since (\ref{eqn:Gincbulk}) and (\ref{eqn:Gincedge}) represent the eigenvalue correlations away from the effect of any real eigenvalues, and the complex Ginibre ensemble has no real eigenvalues at all. This naturally leads us on to the topic of universality and the circular law, which we review in the next section.
\subsubsection{Circular law}
\label{sec:circlaw}
Recall that in Chapter \ref{sec:GOE_sums} we discussed the semi-circle law, Proposition \ref{prop:wssl}, which states that for the Hermitian ensembles, with entries drawn from any mean zero, finite variance probability distribution, the eigenvalue density tends towards a semi-circle. There is an analogous result for non-Hermitian matrices called the \textit{circular law}.
If one normalises the eigenvalues by dividing by $\sqrt{N}$ (label these eigenvalues as $\tilde{w}$) then it turns out that the density of complex eigenvalues tends to uniformity on the unit disk. Further, when we recall (\ref{eqn:bigNEN}), which shows that the expected number of real eigenvalues grows only as $\sqrt{N}$, we see that the distribution of general eigenvalues for the real Ginibre ensemble tends to uniformity on the unit disk.
\begin{proposition}[\cite{Ed97}]
\label{prop:Gincirclaw}
The limiting density of eigenvalues, scaled by $1/\sqrt{N}$, in the real Ginibre ensemble is
\begin{align}
\label{eqn:Gcirclaw} \rho^c_{(1)}(\tilde{w})=\pi^{-1}\chi_{|\tilde{w}|<1},
\end{align}
where $\chi_{|\tilde{w}|<1}$ is the indicator function of the open unit disk.
\end{proposition}
\textit{Proof}: With $w=u+iv$ we change variables $\tilde{u}=u/\sqrt{N}, \tilde{v}=v/\sqrt{N}$ in (\ref{eqn:Gincompx_density}) giving
\begin{align}
\nonumber \tilde{\rho}_{(1)}(\tilde{w})=N\sqrt{\frac{2N}{\pi}}\; \tilde{v} e^{2N\tilde{v}^2}\frac{\Gamma(N-1,N(\tilde{u}^2+\tilde{v}^2))}{\Gamma(N-1)}\mathrm{erfc}(\sqrt{2N}|\tilde{v}|),
\end{align}
where we have multiplied by $N$ (the Jacobian of the change of variables). We are therefore looking to calculate
\begin{align}
\label{eqn:normdens1} \rho^c_{(1)}(\tilde{w})=\lim_{N\to\infty} \frac{\tilde{\rho}_{(1)}(\tilde{w})}{N}.
\end{align}
Writing out the incomplete Gamma function using the definition
\begin{align}
\nonumber \Gamma(N-1,N\alpha)=\int_{N\alpha}^{\infty} t^{N-2} e^{-t}dt,
\end{align}
we see that the integral will be dominated by the maximum of the integrand in the large $N$ limit. Rewriting this integrand as
\begin{align}
\nonumber e^{(N-2) (\log t)-t}
\end{align}
we maximise this exponent by differentiating, finding $t_{\max}=N-2\sim N$. So if $\alpha<1$, the maximum falls inside the interval of integration and the incomplete gamma function tends to the complete gamma function for large $N$. If $\alpha>1$, then the incomplete gamma function will be of lower order than the complete one. This gives us
\begin{align}
\label{def:uGa} \frac{\Gamma(N-1,N\alpha)}{\Gamma(N-1)}\to \left\{\begin{array}{cc}
1, &\alpha<1,\\
0, &\alpha>1.
\end{array}\right.
\end{align}
Lastly, using \cite[7.1.13]{AS1965} we find that for fixed $\tilde{v}> 0$
\begin{align}
\label{eqn:bigerfc} \sqrt{N}\tilde{v}e^{2N\tilde{v}^2} \mathrm{erfc}(\tilde{v}\sqrt{2N})\mathop{\sim} \limits_{N\to\infty} \frac{1}{\sqrt{2\pi}}.
\end{align}
Substitution of these results into (\ref{eqn:normdens1}) gives (\ref{eqn:Gcirclaw}).
\hfill $\Box$
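Both asymptotic ingredients of the proof, the step behaviour (\ref{def:uGa}) and the $\mathrm{erfc}$ limit (\ref{eqn:bigerfc}), can be tested numerically; a short Python sketch (using the finite sum for $\Gamma(n,x)$ at integer $n$, and with illustrative parameter values) is

```python
import math

def gamma_ratio(N, alpha):
    # Gamma(N-1, N*alpha)/Gamma(N-1) via Gamma(n, x) = (n-1)! e^{-x} sum_{k<n} x^k/k!
    n, x = N - 1, N * alpha
    term = math.exp(-x)   # k = 0 term, pre-multiplied by e^{-x} to avoid overflow
    total = term
    for k in range(1, n):
        term *= x / k
        total += term
    return total

def scaled_erfc(N, v):
    # left-hand side of eqn (bigerfc); should approach 1/sqrt(2 pi) for fixed v > 0
    return math.sqrt(N) * v * math.exp(2 * N * v * v) * math.erfc(v * math.sqrt(2 * N))

# step behaviour of the incomplete gamma ratio, eqn (def:uGa)
assert gamma_ratio(400, 0.5) > 0.999
assert gamma_ratio(400, 1.5) < 1e-10
# erfc asymptotics, eqn (bigerfc)
assert abs(scaled_erfc(200, 0.7) - 1 / math.sqrt(2 * math.pi)) < 0.01
```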
One can get an immediate sense of this result by looking at the simulation results in Figure \ref{fig:GinOE_eval_plot}. The complex eigenvalues are contained in a disk of radius roughly $\sqrt{N}$. The uniformity is clearly spoiled by the eigenvalues on the real line, but, as discussed above, these real eigenvalues have diminishing effect as $N$ becomes large.
Proposition \ref{prop:Gincirclaw} is specific to the real Ginibre ensemble, however it forms part of a wider class of results collectively known as the \textit{circular law}. The origins of the circular law can be traced back to (at least) the 1960s (although there are claims that it was being discussed a decade earlier \cite{Bai1997}). In \cite[Chapter 12.1]{Mehta1967} Mehta shows that the eigenvalue density for the complex Ginibre ensemble (complex asymmetric matrices) approaches uniformity inside the disk of radius $\sqrt{N}$ and zero outside the disk. Edelman showed the same is true for the real Ginibre ensemble \cite{Ed97} (Proposition \ref{prop:Gincirclaw}). One specific version of the circular law then states that for iid Gaussian matrices the support of the normalised eigenvalues $\tilde{w}=w/\sqrt{N}$ approaches the unit disk and the density inside the disk approaches uniformity as $N\to\infty$.
The name of the law is often prefixed by that of Girko \cite{Girko1985a, Girko1985b}, who attempted to relax the Gaussian constraint by showing that the eigenvalues of matrices with iid entries drawn from any mean zero, finite variance distribution display the same limiting density. However, the consensus view seems to be that there were sufficiently many errors in Girko's work that the proof did not withstand scrutiny. By making some assumptions on the moments of the distribution, Bai \cite{Bai1997} built on Girko's work and furnished a proof under such restrictions. Further refinements were made by G\"{o}tze and Tikhomirov \cite{GT2010}, Pan and Zhou \cite{PZ2010}, and Tao and Vu \cite[Corollary 1.17]{TVK10}. We quote the latter result here.
\begin{proposition}[Circular law]
\label{prop:circlaw}
For an $N\times N$ random matrix with iid entries drawn from a distribution with mean zero and variance $1$, the distribution of the normalised eigenvalues approaches the uniform distribution on the unit disk as $N\to\infty$.
\end{proposition}
The proof of Proposition \ref{prop:circlaw} in \cite{TVK10} relies on first establishing the universality of the limiting distribution of eigenvalues. In this context, universality means that the eigenvalue distribution in the large $N$ limit is independent of the probability distribution of the matrix elements. Having established this universality the authors use the circular result for the case of Gaussian distributed elements, which, as mentioned above, was contained in \cite{Mehta1967}, to prove the general circular law.
Note that if one has the circular law in advance, then it gives us a simple way of finding the asymptotic behaviour of $E_N$. First we see that, with $x=y$, (\ref{eqn:Ginbulk}) implies that the limiting density of real eigenvalues is $1/\sqrt{2\pi}$. Since the circular law tells us that general eigenvalues are only supported on $(-\sqrt{N},\sqrt{N})$ as $N\to \infty$, by directly integrating the limiting density over the real line we obtain (\ref{eqn:bigNEN}).
By similar reasoning to Proposition \ref{prop:Gincirclaw} we can find the limiting density of just the real eigenvalues, scaled into the unit disk by letting $\tilde{x}=x/\sqrt{N}$,
\begin{align}
\label{eqn:Grlimdens} \rho_{(1)}^r(\sqrt{N}\tilde{x}) = \sqrt{\frac{N}{2\pi}}.
\end{align}
Remarkably, we see that the real eigenvalues, despite being a lower-order contribution to the system, are also distributed uniformly. We will compare (\ref{eqn:Grlimdens}) to the analogous results in the real spherical (Chapter \ref{sec:SOE}) and real truncated ensembles (Chapter \ref{sec:truncs}) and find the same behaviour.
\newpage
\section{Partially symmetric real Ginibre ensemble}
\setcounter{figure}{0}
\label{sec:tG}
From the previous chapter we know the limiting density of real eigenvalues for the real Ginibre ensemble is constant on the interval $(-\sqrt{N},\sqrt{N})$ (set $x=y$ in the first equation of (\ref{eqn:Ginbulk})), and zero elsewhere. We also know that in the GOE, the limiting density of (real) eigenvalues is supported on $(-\sqrt{2N},\sqrt{2N})$, but is decidedly not constant (recall (\ref{eqn:wssl})); it is a semi-circle. One can imagine that as the symmetry constraint is relaxed the eigenvalue density transitions between these two regimes. Another transition that we expect to see is the probability of all real eigenvalues $p_{N,N}$ decreasing from $1$ in the GOE to that given in (\ref{eqn:GinOEpNN}) for the real Ginibre ensemble. One of the goals of this chapter is to make these ideas concrete by analysing the \textit{partially symmetric real Ginibre ensemble}.
This ensemble was analysed in \cite{SCSS1988} (building on the work of \cite{CS1987}) where they identified the \textit{elliptical law}, which describes the eigenvalue density as its behaviour changes from that of the circular law (\ref{eqn:Gcirclaw}) to that of the semi-circle law (\ref{eqn:wssl}), for Gaussian real asymmetric matrices. (The elliptical law seems to have been first discussed by Girko \cite{Girko1985c, Girko1986}.) The eigenvalue jpdf of the partially symmetric real ensemble was presented in \cite{LS91} along with the asymmetric (real Ginibre) specialisation. Refinements to the analysis were presented in \cite{Efe97}, where it was shown that the density of eigenvalues contains a singular delta function term corresponding to the non-zero density on the real line. The full correlation functions, in the case of $N$ even were contained in \cite{FN08} by generalising the orthogonal polynomial method used for the real Ginibre ensemble. (See \cite{FS03, KS2009} for reviews.)
\subsection{Element distribution}
Any matrix $\mathbf{X}$ can be decomposed into a sum of symmetric and anti-symmetric matrices; we choose the decomposition
\begin{align}
\label{def:tau_mats} \mathbf{X}=\frac{1}{\sqrt{b}}\left( \mathbf{S}+\sqrt{c}\mathbf{A} \right),
\end{align}
where $c:=(1-\tau)/(1+\tau)$ with $-1<\tau<1$, and $b>0$. As $\tau\to 1$ we have $c\to 0$ and so obtain symmetric matrices, while as $\tau\to-1$ we have $c\to\infty$; by suitably scaling $b$ we can control this behaviour and access anti-symmetric matrices. The matrix $\mathbf{X}$ is completely asymmetric when $\tau=0$, in which case $c=1$.
For our purposes in this chapter we will take $\mathbf{S}$ to be a GOE matrix, that is a symmetric real, Gaussian matrix with iid elements distributed as in (\ref{eqn:GOE_el_dist}). The matrix $\mathbf{A}$ is the anti-symmetric analogue with iid elements
\begin{align}
\label{def:antisymdist} \frac{1}{\sqrt{\pi}}e^{-x_{jl}^2},\quad j<l,
\end{align}
with the remaining elements determined by anti-symmetry. So an ensemble of the matrices (\ref{def:tau_mats}) with $\tau\to1$ recovers the GOE, while with $\tau=0$ we have the real Ginibre ensemble. We also have an anti-symmetric Gaussian ensemble in the limit $\tau\to-1$; see \cite{mehta2004} where the author finds a semi-circular density in analogue with that of the GOE.
We will see that with the inclusion of this parameter we can access statistics interpolating between the ensembles discussed in Chapters \ref{sec:GOE_steps} and \ref{sec:GinOE} of the present work. We will proceed using the $5$ step method, the first step of which requires the matrix or element distribution.
\begin{lemma}
\label{lem:tGindX}
The wedge product of the independent elements of $\mathbf{X}$ in terms of the wedge products of the independent elements of $\mathbf{S}$ and $\mathbf{A}$ is
\begin{align}
\nonumber (d\mathbf{X})=(2\sqrt{c})^{N(N-1)/2}\;b^{-N^2/2}\;(d\mathbf{S})(d\mathbf{A}).
\end{align}
\end{lemma}
\textit{Proof}: Each element of $d\mathbf{X}$ contributes a factor of $b^{-1/2}$ and each of the independent elements of $\mathbf{A}$ contributes a factor of $\sqrt{c}$; having accounted for these we may now set $b=c=1$. From (\ref{def:tau_mats}) the elements of $d\mathbf{X}$ are
\begin{align}
\nonumber dx_{jl}=\left\{\begin{array}{ll}
ds_{jl}+ da_{jl},&j<l,\\
ds_{jj},&j=l,\\
ds_{lj}-da_{lj},&l<j.
\end{array}\right.
\end{align}
When wedging together each element in the strict upper triangle (of which there are $N(N-1)/2$) with its partner in the lower triangle, we have $dx_{jl}\wedge dx_{lj}=(ds_{jl}+da_{jl})\wedge(ds_{jl}-da_{jl})=-2\, ds_{jl}\wedge da_{jl}$, the cross terms surviving by the anti-symmetry of the wedge product. Taking the absolute value then gives the result.
\hfill $\Box$
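Since the lemma is a statement about a linear change of variables, it can be verified directly: build the $N^2\times N^2$ matrix of the map from the independent coordinates of $(\mathbf{S},\mathbf{A})$ to the entries of $\mathbf{X}$ and compute its determinant. A small Python sketch (with a naive elimination-based determinant; names and parameter values are illustrative):

```python
import math

def det(M):
    # determinant via Gaussian elimination with partial pivoting
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        if A[p][i] == 0.0:
            return 0.0
        if p != i:
            A[i], A[p] = A[p], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for k in range(i, n):
                A[r][k] -= f * A[i][k]
    return d

def jacobian(N, b, c):
    # |det| of the linear map taking the N^2 independent coordinates
    # (s_jj, s_jl and a_jl with j < l) to the N^2 entries x_jl of X
    coords = [('s', j, j) for j in range(N)] \
           + [('s', j, l) for j in range(N) for l in range(j + 1, N)] \
           + [('a', j, l) for j in range(N) for l in range(j + 1, N)]
    M = []
    for j in range(N):
        for l in range(N):
            lo, hi = min(j, l), max(j, l)
            row = []
            for (kind, r1, c1) in coords:
                if (r1, c1) != (lo, hi):
                    row.append(0.0)
                elif kind == 's':
                    row.append(1.0 / math.sqrt(b))   # symmetric part of x_jl
                else:
                    # anti-symmetric part: sign flips below the diagonal
                    row.append(math.copysign(math.sqrt(c / b), l - j))
            M.append(row)
    return abs(det(M))

# compare with the lemma: (2 sqrt(c))^{N(N-1)/2} b^{-N^2/2}
N, b, c = 3, 2.0, 0.6
assert abs(jacobian(N, b, c) - (2 * math.sqrt(c)) ** (N * (N - 1) / 2) * b ** (-N * N / 2)) < 1e-12
```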
\begin{proposition}[\cite{FN08}]
Let $\mathbf{S}$ be a real, symmetric matrix with iid elements distributed as in (\ref{eqn:GOE_el_dist}), and let $\mathbf{A}$ be a real, anti-symmetric matrix with iid elements according to (\ref{def:antisymdist}). Then the pdf of the matrices $\mathbf{X}$ from (\ref{def:tau_mats}) is
\begin{align}
\label{eqn:tGinpdf} P(\mathbf{X})_{\tau,b}=\frac{b^{N^2/2}}{(2\pi)^{N^2/2}c^{N(N-1)/4}} \; e^{-b\frac{\mathrm{Tr} \mathbf{X}\bX^T-\tau\mathrm{Tr} \mathbf{X}^2}{2(1-\tau)}}.
\end{align}
\end{proposition}
\textit{Proof}: Since the matrices $\mathbf{S}=[s_{jl}]$ and $\mathbf{A}=[a_{jl}]$ are independent, we take the product of their elemental probability densities
\begin{align}
\nonumber \prod_{j=1}^N \frac{1}{\sqrt{2\pi}}e^{-s_{jj}^2/2}\prod_{1\leq j < l\leq N}\frac{1}{\sqrt{\pi}}e^{-s_{jl}^2}\prod_{1\leq j < l\leq N}\frac{1}{\sqrt{\pi}}e^{-a_{jl}^2},
\end{align}
which we can rewrite as
\begin{align}
\label{eqn:SAjpdf1} \frac{1}{2^{N/2}}\frac{1}{\pi^{N^2/2}}\; e^{-(\mathrm{Tr} \:\mathbf{S}^2 -\mathrm{Tr}\: \mathbf{A}^2)/2}.
\end{align}
Using (\ref{def:tau_mats}) we can express $\mathbf{S}$ and $\mathbf{A}$ in terms of $\mathbf{X}$ and $\mathbf{X}^T$ thusly
\begin{align}
\label{eqn:tGincovs} \begin{split}
\mathbf{S}&=\frac{\sqrt{b}}{2}\; (\mathbf{X}+\mathbf{X}^T),\\
\mathbf{A}&=\sqrt{\frac{b}{c}}\; \frac{\mathbf{X}-\mathbf{X}^T}{2},
\end{split}
\end{align}
and so
\begin{align}
\label{eqn:TrSA} \mathrm{Tr} \:\mathbf{S}^2-\mathrm{Tr}\: \mathbf{A}^2=\frac{b}{1-\tau}\Big( \mathrm{Tr}\: \mathbf{X}\bX^T-\tau\mathrm{Tr}\: \mathbf{X}^2 \Big).
\end{align}
Substituting (\ref{eqn:TrSA}) into (\ref{eqn:SAjpdf1}) and multiplying by the Jacobian for the change of variables (\ref{eqn:tGincovs}), which is the content of Lemma \ref{lem:tGindX}, we have the result.
\hfill $\Box$
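The trace identity (\ref{eqn:TrSA}) is straightforward to confirm numerically: starting from a random $\mathbf{X}$, form the suitably scaled symmetric and anti-symmetric parts dictated by (\ref{def:tau_mats}) and compare the two sides. A minimal Python sketch, with illustrative parameter values:

```python
import math, random

random.seed(1)
N, b, tau = 4, 1.3, 0.4
c = (1 - tau) / (1 + tau)

X = [[random.gauss(0, 1) for _ in range(N)] for _ in range(N)]
# invert (def:tau_mats): S is the scaled symmetric part of X, A the anti-symmetric part
S = [[math.sqrt(b) * (X[j][l] + X[l][j]) / 2 for l in range(N)] for j in range(N)]
A = [[math.sqrt(b / c) * (X[j][l] - X[l][j]) / 2 for l in range(N)] for j in range(N)]

def tr_prod(M, P):
    # Tr(M P)
    return sum(M[j][l] * P[l][j] for j in range(len(M)) for l in range(len(M)))

Xt = [[X[l][j] for l in range(N)] for j in range(N)]
lhs = tr_prod(S, S) - tr_prod(A, A)
rhs = b / (1 - tau) * (tr_prod(X, Xt) - tau * tr_prod(X, X))
assert abs(lhs - rhs) < 1e-10
```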
Up to a constant factor, we see that with $\tau=0$ (\ref{eqn:tGinpdf}) reduces to the real Ginibre pdf (\ref{eqn:GinOE_eldist}) and, with $\tau\to 1$ (recalling that in this limit $\mathbf{X}^T\to \mathbf{X}$) we have the GOE (\ref{eqn:GOE_el_jpdf}).
The question of interest is: what happens to the eigenvalue distribution as $\tau$ is varied? We can gain some insight into the answer through numerical simulations of these ensembles.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.4]{tau_REhalf_complex_plotaebook.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{tau_0_complex_plota100.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{tau_IMhalf_complex_plota100.pdf}
\caption[Plot of simulated eigenvalues from partially symmetric real Ginibre ensembles, with $\tau=1/2, 0, -1/2$.]{Eigenvalues for 25 independent $64\times64$ matrices as defined in (\ref{def:tau_mats}). The left, middle and right plots correspond to $\tau=\frac{1}{2}, 0,-\frac{1}{2}$ respectively. In all three simulations $b=1$.}
\label{fig:tau_sims}
\end{center}
\end{figure}
\noindent As Figure \ref{fig:tau_sims} illustrates, the eigenvalue distribution for $\tau=0$ is circular with a distinct non-zero density of eigenvalues on the real line (this we of course knew from Chapter \ref{sec:GinOE}). As $\tau\to 1$ the matrices in the ensemble become more symmetric and the eigenvalues tend to congregate near the real line, forming an ellipse with major axis in the real direction. Conversely, with $\tau\to -1$ the eigenvalues collapse onto the imaginary axis as the ensemble approaches anti-symmetry, and the major axis of the ellipse is in the imaginary direction. We will not discuss anti-symmetric ensembles any further but we direct the interested reader to \cite{mehta2004}, and one can also find related self-dual matrices in \cite{Hast1999}. Note that since all these partially symmetric matrices are real, we have a non-zero density of real eigenvalues for all $\tau\in (-1,1)$.
The distributions in Figure \ref{fig:tau_sims} may lead one to conjecture that there is an \textit{elliptical law}, which degenerates to the circular law when $\tau=0$. Indeed this is the case and we will analyse the situation further in Chapter \ref{sec:tGkernelts}. Another interesting point is that the ellipse must collapse onto the real axis in the limit $\tau\to 1$, since we must end up in the GOE in this limit. This leads to singular behaviour in the eigenvalue density as it shifts from being uniform in the ellipse, to semi-circular on the real line. The details of the analysis involve the \textit{strongly symmetric} (or \textit{weakly non-symmetric}) limit \cite{FKS97}, which we discuss in Chapter \ref{sec:tGsslim}.
\subsection{Eigenvalue jpdf}
As mentioned above, (\ref{eqn:tGinpdf}) reduces to (\ref{eqn:GinOE_eldist}) when $\tau=0$. By a simple scaling argument we can use this fact to deduce the eigenvalue jpdf for the partially symmetric ensemble from Proposition \ref{prop:GinOE_eval_jpdf}, which is the corresponding result for $\tau=0$.
\begin{proposition}[\cite{LS91}]
The eigenvalue jpdf for the partially symmetric real Ginibre matrices $\mathbf{X}$ from (\ref{def:tau_mats}) is
\begin{align}
\nonumber &Q_{N,k,\tau,b}(\Lambda,W)=C_{N,k,\tau,b}\prod_{i=1}^{k}e^{-b\lambda_i^2/2}\\
\label{eqn:tGinejpdf} &\times\prod_{j=1}^{(N-k)/2} e^{-b(w_j^2+\bar{w}_j^2)/2}\;\mathrm{erfc}\left(\sqrt{\frac{2b}{1-\tau}}\; |\mathrm{Im}(w_j)|\right) \big|\Delta (\Lambda\cup W)\big|,
\end{align}
where $\Lambda=\{ \lambda_i \}_{i=1,...,k}$ and $W=\{ w_i,\bar{w}_i \}_{i=1,...,(N-k)/2}$ are the sets of real and non-real complex eigenvalues respectively, and
\begin{align}
\nonumber C_{N,k,\tau,b}:=\frac{b^{N(N+1)/4}(1+\tau)^{N(N-1)/4} 2^{-N(N-1)/4-k/2}}{k!((N-k)/2)!\prod_{l=1}^N\Gamma(l/2)}.
\end{align}
\end{proposition}
\textit{Proof}: Note that with
\begin{align}
\label{def:tGinscaledeval} \mathbf{X}\to \mathbf{Y} = \sqrt{\frac{b}{1-\tau}}\: \mathbf{X},
\end{align}
we have
\begin{align}
\nonumber (d\mathbf{X}) \to (d\mathbf{Y}) = \left(\frac{b}{(1-\tau)}\right)^{N^2/2} (d\mathbf{X}),
\end{align}
and so the pdf for the real Ginibre matrices (\ref{eqn:GinOE_eldist}) becomes
\begin{align}
\nonumber P(\mathbf{Y}) (d\mathbf{Y}) &= P\left(\sqrt{\frac{b} {1-\tau}} \mathbf{X} \right) (d\mathbf{Y}) \\
\label{eqn:tGintoGincovs}& = \left(\frac{b}{2\pi(1-\tau)}\right)^{N^2/2} e^{-b\mathrm{Tr}\: \mathbf{X}\bX^T/2(1-\tau)} (d\mathbf{X}) =: \hat{P}(\mathbf{X}) (d\mathbf{X}).
\end{align}
We use (\ref{eqn:tGintoGincovs}) to rewrite (\ref{eqn:tGinpdf}) as
\begin{align}
\nonumber P(\mathbf{X})_{\tau,b} = (1+\tau)^{N(N-1)/4}(1-\tau)^{N(N+1)/4}\; e^{\tau b\: (\mathrm{Tr}\: \mathbf{X}^2)/2(1-\tau)}\hat{P}(\mathbf{X}) .
\end{align}
Now from Proposition \ref{prop:GinOE_eval_jpdf} we see that the eigenvalue jpdf corresponding to $\hat{P}(\mathbf{X})$, for the scaled matrices, is given by applying (\ref{def:tGinscaledeval}) to (\ref{eqn:GinOEjpdf}) and is
\begin{align}
\nonumber \left(\frac{b}{1-\tau}\right)^{N/2}Q_{N,k}\left(\sqrt{\frac{b}{1-\tau}} \Lambda,\sqrt{\frac{b}{1-\tau}} W \right).
\end{align}
The exponential factor containing $\mathrm{Tr}\: \mathbf{X}^2$ can be immediately written in terms of the eigenvalues of $\mathbf{X}$ and so we have
\begin{align}
\nonumber Q_{N,k,\tau,b}&=(1+\tau)^{N(N-1)/4}(1-\tau)^{N(N+1)/4}\left(\frac{b}{1-\tau}\right)^{N/2}\prod_{j=1}^k e^{\tau b\: \lambda_j^2/2(1-\tau)}\\
\nonumber &\times\prod_{j=1}^{(N-k)/2} e^{\tau b\: (w_j^2+\bar{w}_j^2) /2(1-\tau)}\; Q_{N,k}\left(\sqrt{\frac{b}{1-\tau}} \Lambda,\sqrt{\frac{b}{1-\tau}} W \right),
\end{align}
from which the result follows on factoring $\sqrt{b/(1-\tau)}$ out of the Vandermonde product.
\hfill $\Box$
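The final step uses the homogeneity of the Vandermonde product: rescaling all $N$ eigenvalues by $s>0$ multiplies $|\Delta|$ by $s^{N(N-1)/2}$. A quick numerical check of this degree count (with a real/complex-conjugate eigenvalue list mimicking $\Lambda\cup W$; names chosen for illustration):

```python
def abs_vandermonde(zs):
    # |Delta(z_1, ..., z_n)| = prod_{j<k} |z_k - z_j|
    p = 1.0
    for j in range(len(zs)):
        for k in range(j + 1, len(zs)):
            p *= abs(zs[k] - zs[j])
    return p

# two real eigenvalues plus two complex conjugate pairs, as in Lambda union W
zs = [0.3, 2.5, -1.2 + 0.8j, -1.2 - 0.8j, 0.9 + 0.1j, 0.9 - 0.1j]
s, n = 1.7, len(zs)
lhs = abs_vandermonde([s * z for z in zs])
rhs = s ** (n * (n - 1) // 2) * abs_vandermonde(zs)
assert abs(lhs - rhs) < 1e-9 * rhs
```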
\subsection{Generalised partition function}
Since the structure of the eigenvalue jpdf (\ref{eqn:tGinejpdf}) for the partially symmetric ensemble is identical to that of the real Ginibre ensemble (\ref{eqn:GinOEjpdf}) we can immediately write down the generalised partition function for the former by substituting it into (\ref{def:multi_gen_part_fn}), setting $m=2$.
\begin{proposition}[\cite{FN08}]
With $-1<\tau<1$ the generalised partition function for the partially symmetric real Ginibre ensemble, with $k,N$ even, can be written
\begin{align}
\label{eqn:tGingpfe} Z_{k,(N-k)/2}[u,v]_{\tau}=\frac{b^{N(N+1)/4} (1+\tau)^{N(N-1)/4} }{2^{N(N+1)/4}\prod_{l=1}^N\Gamma(l/2)}[\zeta^{k/2}]\mathrm{Pf}\left[\zeta\: \alpha_{j,l}^{(\tau)}+\beta_{j,l}^{(\tau)}\right]_{j,l =1,...,N},
\end{align}
where $[\zeta^n]$ denotes the coefficient of $\zeta^n$ and, with monic polynomials $\{ p_{i}(x)\}$ of degree $i$,
\begin{align}
\label{eqn:tGin_alphabeta} \begin{split} \alpha_{j,l}^{(\tau)} & = \int_{-\infty}^{\infty}dx\; u(x)\int_{-\infty}^{\infty}dy\; u(y)e^{-b(x^2+y^2)/2}p_{j-1}(x)p_{l-1}(y)\; \mathrm{sgn}(y-x),\\
\beta_{j,l}^{(\tau)} & = 2i\int_{\mathbb{R}_+^2}dw\;v(w)\; \mathrm{erfc}\left(\sqrt{\frac{2b}{1-\tau}}\; |\mathrm{Im}(w)|\right)\; e^{-b(w^2+\bar{w}^2)/2}\\
&\times \Bigl(p_{j-1}(w)p_{l-1}(\bar{w})-p_{l-1}(w)p_{j-1}(\bar{w})\Bigr).
\end{split}
\end{align}
\end{proposition}
\begin{proposition}
With $-1<\tau<1$ and $\alpha_{j,l}^{(\tau)},\beta_{j,l}^{(\tau)}$ as in (\ref{eqn:tGin_alphabeta}) the generalised partition function for $k,N$ odd can be written
\begin{align}
\nonumber Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]_{\tau}&=\frac{b^{N(N+1)/4} (1+\tau)^{N(N-1)/4} }{2^{N(N+1)/4}\prod_{l=1}^N\Gamma(l/2)}\\
\label{eqn:tGingpfo} &\times [\zeta^{(k-1)/2}]\mathrm{Pf}\left[\begin{array}{cc}
\left[\zeta \alpha_{j,l}^{(\tau)}+\beta_{j,l}^{(\tau)}\right] & \left[\nu_j^{(\tau)}\right]\\
\left[-\nu_l^{(\tau)}\right]& 0\\
\end{array}\right]_{j,l=1,...,N},
\end{align}
where
\begin{align}
\label{def:tGinnu} \nu_l^{(\tau)}=\int_{-\infty}^{\infty} e^{-b x^2/2}u(x)p_{l-1}(x)\; dx.
\end{align}
\end{proposition}
The corresponding summed partition functions come from substituting (\ref{eqn:tGingpfe}) and (\ref{eqn:tGingpfo}) into (\ref{eqn:summedup}) and its odd equivalent, resulting in
\begin{align}
\label{eqn:tGinsume} Z_N[u,v]_{\tau}=\frac{b^{N(N+1)/4} (1+\tau)^{N(N-1)/4} }{2^{N(N+1)/4} \prod_{l=1}^N \Gamma(l/2)} \mathrm{Pf}\left[\alpha_{j,l}^{(\tau)}+ \beta_{j,l}^{(\tau)} \right]_{j,l=1,...,N},
\end{align}
and
\begin{align}
\label{eqn:tGinsumo} Z_{N}^{\mathrm{odd}}[u,v]_{\tau}&=\frac{b^{N(N+1)/4} (1+\tau)^{N(N-1)/4} }{2^{N(N+1)/4} \prod_{l=1}^N\Gamma(l/2)} \; \mathrm{Pf}\left[\begin{array}{cc}
\left[\alpha_{j,l}^{(\tau)} +\beta_{j,l}^{(\tau)} \right] & \left[\nu_j^{(\tau)}\right]\\
\left[-\nu_l^{(\tau)}\right]& 0\\
\end{array}\right]_{j,l=1,...,N}.
\end{align}
\subsubsection{Probability of $k$ real eigenvalues}
\label{sec:tGprobs}
As for the real Ginibre ensemble, the probability of obtaining $k$ real eigenvalues from an $N\times N$ partially symmetric matrix is given by integrating $Q_{N,k,\tau,b}$ from (\ref{eqn:tGinejpdf}) over all $k$ real and $(N-k)/2$ complex conjugate pairs of eigenvalues. Note that by changing variables $\lambda_j\to \sqrt{b}\lambda_j$ and $w_j\to \sqrt{b}w_j$ the parameter $b$ scales out of this integral, so the probability, which we call $p_{N,k, \tau}$ in this chapter, is independent of $b$; we may therefore set $b$ arbitrarily, and we choose $b=1$ for convenience in this section.
Recall from Chapter \ref{sec:Gsops} that we put off the discussion of the probabilities until we had obtained the skew-orthogonal polynomials (\ref{eqn:GinOE_sopolys}) relevant to the real Ginibre ensemble. In that case the polynomials were quite simple and the calculation of $\alpha_{j,l}$, $\beta_{j,l}$ and $\bar{\nu}_j$ could be performed. However, in the present setting, we find (in Chapter \ref{sec:tGinsops}) that the polynomials are not as simple as those for the real Ginibre ensemble (they interpolate between the real Ginibre polynomials and the Hermite polynomials of the GOE), and the calculations can instead be done more easily by assuming (following \cite{FN08}) a different (non-skew-orthogonal) form of the polynomials. The polynomials we use are $p_j(x)=x^j$ and, applying integration by parts, we have
\begin{align}
\label{eqn:tGalpha} \alpha_{2j-1,2l}^{(\tau)}\Big|_{u=1}=2^l(l-1)!\sum_{p=1}^l \frac{\Gamma(j+p-3/2)}{2^{p-1}(p-1)!},
\end{align}
\begin{align}
\nonumber \beta_{2j-1,2l}^{(\tau)}\Big|_{v=1} &=-4\mathop{\sum_{s=0}^{2j-2}\sum_{t=0}^{2l-1}}\limits_{s+t \; \mathrm{odd}}(-1)^t \binom{2j-2}{s}\binom{2l-1}{t}\\
\label{eqn:tGbeta} &\times\Gamma(j+l-1-(s+t)/2)\; I_{s+t},
\end{align}
where
\begin{align}
\label{eqn:tGeye} I_{j}=\frac{(-1)^{(j-1)/2}((j-1)/2)!}{2}\left( \sqrt{\frac{2}{1+\tau}}\sum_{p=0}^{(j-1)/2} (-1)^p \left( \frac{1-\tau}{1+\tau}\right)^p \frac{(1/2)_p}{p!}-1\right)
\end{align}
(see \cite{FN08} for the intermediate steps). For the odd case we also need the evaluation of $\nu_j^{(\tau)}\big|_{u=1}$, but with $b=1$ and our current choice of polynomial the evaluation is identical to that in the real Ginibre ensemble using the skew-orthogonal polynomials applicable to that case, and so $\nu_j^{(\tau)}\big|_{u=1}$ is given by (\ref{eqn:Gnu}). We can then calculate the probabilities in terms of the generalised partition functions (\ref{eqn:tGingpfe}) and (\ref{eqn:tGingpfo})
\begin{align}
\nonumber p_{N,k,\tau}=\left\{
\begin{array}{cc}
Z_{k,(N-k)/2}[1,1]_{\tau},&N \mbox{ even},\\
Z_{k,(N-k)/2}^{\mathrm{odd}}[1,1]_{\tau},&N \mbox{ odd}.
\end{array}
\right.
\end{align}
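As a spot check of (\ref{eqn:tGalpha}), take $j=l=1$, so that the defining double integral involves $p_0(x)p_1(y)=y$ and the sum collapses to $2\Gamma(1/2)=2\sqrt{\pi}$. Using the standard Gaussian identity $\int_{-\infty}^{\infty}e^{-x^2/2}\,\mathrm{sgn}(y-x)\,dx=\sqrt{2\pi}\,\mathrm{erf}(y/\sqrt{2})$ to do the inner integral, a short Python sketch of the remaining one-dimensional quadrature is:

```python
import math

def alpha_12(h=1e-3, L=10.0):
    # alpha_{1,2}|_{u=1} with b = 1: int dy  y e^{-y^2/2} sqrt(2 pi) erf(y/sqrt(2)),
    # evaluated by the midpoint rule (tails beyond |y| = L are negligible)
    total = 0.0
    for i in range(int(2 * L / h)):
        y = -L + (i + 0.5) * h
        total += y * math.exp(-y * y / 2) * math.sqrt(2 * math.pi) \
                   * math.erf(y / math.sqrt(2)) * h
    return total

# closed form from the summed expression (eqn tGalpha): 2^1 0! Gamma(1/2) = 2 sqrt(pi)
assert abs(alpha_12() - 2 * math.sqrt(math.pi)) < 1e-4
```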
For the case $k=N$, that is, when all eigenvalues are real, we can evaluate the probability by first noting from (\ref{eqn:tGingpfe}), recalling (\ref{eqn:GinOE_gpf_even}), that
\begin{align}
\nonumber p_{N,N,\tau}=Z_{N,0}[1,1]_{\tau}&=(1+\tau)^{N(N-1)/4}Z_{N,0}[1,1],
\end{align}
since the $\alpha_{j,l}$, with polynomials $p_j(x)=x^j$, do not depend on $\tau$. Then using (\ref{eqn:GinOEpNN}) we see
\begin{align}
\label{eqn:tGinpNN} p_{N,N,\tau}=\left(\frac{1+\tau}{2}\right)^{N(N-1)/4}.
\end{align}
Of course, since (\ref{eqn:tGinpNN}) is independent of the parity of $N$, we obtain the same result if we use the odd analogues. Note that we now see $p_{N,N,\tau}\to 1$ for $\tau\to 1$ and $p_{N,N,\tau}\to 2^{-N(N-1)/4}$ for $\tau\to 0$, facts that we anticipated in the discussion at the beginning of this chapter by considering the GOE and real Ginibre ensemble.
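With $\tau=2r^2-1$, the right-hand side of (\ref{eqn:tGinpNN}) becomes $(r^2)^{N(N-1)/4}=|r|^{N(N-1)/2}$, which is manifestly rational for rational $r$; this is a special case of Proposition \ref{prop:tGirr} below. A short exact check using Python's \texttt{fractions.Fraction} (names illustrative):

```python
from fractions import Fraction

def p_all_real(N, r):
    # eqn (tGinpNN) with tau = 2 r^2 - 1: ((1+tau)/2)^{N(N-1)/4} = |r|^{N(N-1)/2}
    tau = 2 * r * r - 1
    assert (1 + tau) / 2 == r * r            # exact in Fraction arithmetic
    return abs(r) ** (N * (N - 1) // 2)      # integer exponent, so exactly rational

r = Fraction(3, 4)                           # tau = 1/8
for N in range(2, 8):
    assert isinstance(p_all_real(N, r), Fraction)
assert p_all_real(2, r) == Fraction(3, 4)    # ((1 + 1/8)/2)^{1/2} = 3/4
```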
The values of $p_{N,k, \tau}$ for $\tau=1/2$ and $\tau=-1/2$ are contained in Appendix \ref{app:tGinsimpnk}, Tables \ref{tab:pnkxact_simp} and \ref{tab:pnkxact_simm} respectively, where we compare them to some simulated results. From the data in these tables we notice that for certain $\tau$ all the probabilities are rational.
\begin{proposition}
\label{prop:tGirr}
The probabilities $p_{N,k, \tau}$, given by (\ref{eqn:tGingpfe}) for $N$ even and (\ref{eqn:tGingpfo}) for $N$ odd, are all rational if $\tau=2r^2-1$, where $-1<r<1$ is a rational number.
\end{proposition}
\textit{Proof}: First note that the gamma functions do not contribute any irrational factors for any value of $\tau$. To see this, first we focus on $N$ even. In this case the Pfaffian in (\ref{eqn:tGingpfe}) will always be a product of $\alpha_{j,l}^{(\tau)}$s and $\beta_{j,l}^{(\tau)}$s, containing $N/2$ factors in total. Each of these factors contributes $\sqrt{\pi}$ from the gamma functions in (\ref{eqn:tGalpha}) and (\ref{eqn:tGbeta}), which are cancelled by $\prod_{l=1}^N \Gamma(l/2)$ in the denominator of the pre-factor. When $N$ is odd, each term in the Pfaffian contributes a total factor of $\pi^{(N+1)/4}$, including the contribution from $\nu_j^{(\tau)}\big|_{u=1}$. Again these factors are cancelled by the product of gamma functions in the pre-factor.
We now deal with the remaining sources of irrationality. With $\tau=2r^2-1$ we see from (\ref{eqn:tGalpha}), (\ref{eqn:tGbeta}) and (\ref{eqn:tGeye}) that $\alpha_{j,l}$ and $\beta_{j,l}$ contain only rational factors (other than the gamma functions, which we have already dealt with). Since we have an integer number of factors of $\alpha_{j,l}$ and $\beta_{j,l}$ in the expansion of the Pfaffian from (\ref{eqn:tGingpfe}), it follows that the Pfaffian does not contribute to the irrationality of $p_{N,k, \tau}$ when $N$ is even. For $N$ odd each term in the Pfaffian contains a factor of $\nu_j^{(\tau)}\big|_{u=1}$ and, from (\ref{eqn:Gnu}), this contributes a factor of $\sqrt{2}$.
The prefactor in (\ref{eqn:tGingpfe}) has the potentially irrational factor
\begin{align}
\nonumber \frac{(1+\tau)^{N(N-1)/4}}{2^{N(N+1)/4}},
\end{align}
however we substitute for $\tau$ as specified above and find
\begin{align}
\label{eqn:r2irr} \frac{(2r^2)^{N(N-1)/4}}{2^{N(N+1)/4}}=r^{N(N-1)/2}\; 2^{-N/2},
\end{align}
which is rational for all even $N$, and for $N$ odd the $\sqrt{2}$ from the Pfaffian cancels the irrational factor in (\ref{eqn:r2irr}).
\hfill $\Box$
\subsection{Skew-orthogonal polynomials}
\label{sec:tGinsops}
As we know from the previous chapters, skew-orthogonalising the Pfaffians in (\ref{eqn:tGinsume}) and (\ref{eqn:tGinsumo}) allows us to calculate the correlation functions. In analogy with Definition \ref{def:Ginip1} we would like to define an inner product based upon $\alpha_{j,l}^{(\tau)}$ and $\beta_{j,l}^{(\tau)}$ from (\ref{eqn:tGin_alphabeta}). Before we do so, however, recall that (\ref{eqn:tGingpfe}) and (\ref{eqn:tGingpfo}) are independent of $b$ and so we can set it to any (positive real) value for our convenience. This convenient value turns out to be
\begin{align}
\label{eqn:tGinb} b=\frac{1}{1+\tau},
\end{align}
and so from here on we will assume (\ref{eqn:tGinb}).
\begin{definition}
\label{def:tauGinip1}
Define the inner product
\begin{align}
\nonumber &\langle p_j,p_l \rangle_{\tau} :=\; \int_{-\infty}^{\infty}dx\int_{-\infty}^{\infty}dy\; e^{-\frac{x^2 + y^2}{2(1+\tau)}} p_{j}(x)p_{l}(y)\hspace{3pt}\mathrm{sgn}(y-x)\\
\nonumber &+2i\int_{\mathbb{R}_2^+}dw\; \mathrm{erfc}\Big(\sqrt{\frac{2}{1- \tau^2}}\;|\mathrm{Im}(w)|\Big) \; e^{-\frac{w^2+\bar{w}^2}{2(1+\tau)}} \Bigl(p_{j}(w)p_{l} (\bar{w})-p_{l}(w)p_{j}(\bar{w}) \Bigr)\\
\label{def:tauGinip} &=\alpha_{j+1,l+1}^{(\tau)}+\beta_{j+1,l+1}^{(\tau)}\big|_{u=v=1, \: b=1/(1+\tau)}.
\end{align}
\end{definition}
\begin{proposition}[\cite{FN08}]
\label{prop:tGinsops}
With $H_j(z)$ the Hermite polynomials from (\ref{eqn:herm_polys}), let $C_j(z)$ be the scaled monic Hermite polynomials
\begin{align}
\label{def:tGsops} C_j(z) :=\left(\frac{\tau}{2}\right)^{j/2}\: H_j\left(\frac{z}{\sqrt{2\tau}} \right).
\end{align}
The polynomials skew-orthogonal with respect to the inner-product (\ref{def:tauGinip}) are
\begin{align}
\nonumber R_{2j}(z)=C_{2j}(z)&,&R_{2j+1}(z)=C_{2j+1}(z)-2j\:C_{2j-1}(z),
\end{align}
with normalisation
\begin{align}
\label{eqn:tGinnorms} r^{(\tau)}_j=\Gamma (2j+1) \;2 \sqrt{2\pi}\;(1+\tau).
\end{align}
\end{proposition}
The derivation of the polynomials in Proposition \ref{prop:tGinsops} is the general $\tau$ version of that used to obtain Proposition \ref{prop:Ginsops}; the reader is referred to \cite{FN08} for the details, or to \cite{AkePhilSom2010} for a method using an average over a characteristic polynomial.
Note that every term of $C_j(z)$ other than the leading one carries a positive power $\tau^m$, $1\leq m \leq \lfloor j/2 \rfloor$, and so only the leading term of $C_j(z)$ is non-vanishing as $\tau\to 0$. The leading term has unit coefficient and so Definition \ref{def:tauGinip1} and Proposition \ref{prop:tGinsops} reduce to their real Ginibre counterparts (Definition \ref{def:Ginip1} and Proposition \ref{prop:Ginsops} respectively) when $\tau\to 0$. With $\tau \to 1$, and changing variables $x\to x/\sqrt{2},y\to y/\sqrt{2}$, we reclaim the GOE inner product of Definition \ref{def:GOE_soip} and the skew-orthogonal polynomials of Proposition \ref{prop:GOE_soip}.
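This reduction is simple to check numerically. The sketch below (assuming only \texttt{numpy}; the helper \texttt{C\_coeffs} is ours) expands $C_j$ in powers of $z$ and confirms that it is monic and that all subleading coefficients vanish with $\tau$:

```python
import numpy as np
from numpy.polynomial.hermite import herm2poly

def C_coeffs(j, tau):
    """Ascending power-series coefficients of C_j(z) = (tau/2)^(j/2) H_j(z/sqrt(2 tau))."""
    h = herm2poly([0.0] * j + [1.0])   # power-series coefficients of the physicists' H_j
    k = np.arange(j + 1)
    return (tau / 2) ** (j / 2) * h * (2 * tau) ** (-k / 2.0)

# For example C_2(z) = z^2 - tau, monic with the subleading coefficient O(tau)
print(C_coeffs(2, 0.3))   # approximately [-0.3, 0, 1]
```

For small $\tau$ every entry of `C_coeffs(j, tau)` except the last (the monic leading coefficient) is small, matching the statement above.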
\subsection{Correlation functions}
Since the structure of the generalised partition function (\ref{eqn:tGingpfe}) is identical to that of the real Ginibre ensemble (\ref{eqn:GinOE_gpf_even}), we can apply the machinery of Chapter \ref{sec:Gincorrlnse} to find the correlations. Similarly, in \cite{APS2009} the authors adapt the method of averaging over characteristic polynomials from the real Ginibre to the partially symmetric real Ginibre case, although we will not pursue their method here.
We first define a correlation kernel analogous to those in the real Ginibre ensemble (Definition \ref{def:GinOE_kernel}) and the GOE (Definition \ref{def:GOE_correln_kernel}).
\begin{definition}
\label{def:tGine_kernel}
Let $N$ be even. With $R_0,R_1,...$ the skew-orthogonal polynomials of Proposition \ref{prop:tGinsops} and $r^{(\tau)}_0,r^{(\tau)}_1,...$ the corresponding normalisations, define
\begin{align}
\nonumber S(\mu,\eta)_{\tau}&=2\sum_{j=0}^{\frac{N}{2}-1}\frac{1}{r_j^{(\tau)}}\Bigl[q_{2j}(\mu)\varphi_{2j+1}(\eta)-q_{2j+1}(\mu)\varphi_{2j}(\eta)\Bigr],\\
\nonumber D(\mu,\eta)_{\tau}&=2\sum_{j=0}^{\frac{N}{2}-1}\frac{1}{r_j^{(\tau)}}\Bigl[q_{2j}(\mu)q_{2j+1}(\eta)-q_{2j+1}(\mu)q_{2j}(\eta)\Bigr],\\
\nonumber \tilde{I}(\mu,\eta)_{\tau}&=2\sum_{j=0}^{\frac{N}{2}-1}\frac{1}{r_j^{(\tau)}}\Bigl[\varphi_{2j}(\mu)\varphi_{2j+1}(\eta)-\varphi_{2j+1}(\mu)\varphi_{2j}(\eta)\Bigr]+\epsilon(\mu,\eta)\\
\nonumber &=: I(\mu,\eta)_{\tau} +\epsilon(\mu,\eta),
\end{align}
where
\begin{align}
\nonumber h(\mu) &=e^{-\mu^2/2(1+\tau)}\sqrt{\mathrm{erfc}\left(\sqrt{\frac{2}{1-\tau^2}}\: |\mathrm{Im}(\mu)|\right)},\\
\nonumber q_j(\mu) &= h(\mu) R_j(\mu),\\
\nonumber \varphi_j(\mu) &=
\left\{
\begin{array}{ll}
-\frac{1}{2}\int_{-\infty}^{\infty}\mathrm{sgn}(\mu-z)\hspace{3pt}q_j(z)\hspace{3pt}dz, & \mu\in \mathbb{R},\\
iq_j(\bar{\mu}), & \mu\in \mathbb{R}_2^+,
\end{array}
\right.
\end{align}
and $\epsilon(\mu,\eta)$ is from Definition \ref{def:GinOE_kernel}.
In terms of these quantities, define
\begin{align}
\label{def:tGin_K} \mathbf{K}^{(\tau)}(\mu,\eta)=\left[
\begin{array}{cc}
S(\mu,\eta)_{\tau} & - D(\mu,\eta)_{\tau}\\
\tilde{I}(\mu,\eta)_{\tau} & S(\eta,\mu)_{\tau}
\end{array}
\right].
\end{align}
\end{definition}
By undertaking either the $4\times 4$ or $2\times 2$ kernel method of Chapter \ref{sec:Gincorrlnse} we find the correlation functions for $N$ even.
\begin{proposition}[\cite{FN08}]
Let $N$ be even. Then, with $\mathbf{K}^{(\tau)}(\mu,\eta)$ from (\ref{def:tGin_K}), the correlation functions for $n_1$ real and $n_2$ non-real, complex conjugate pairs of eigenvalues in the partially symmetric real Ginibre ensemble are
{\small
\begin{align}
\nonumber &\rho_{(n_1,n_2)}(x_1,...,x_{n_1},w_1,...,w_{n_2})_{\tau}=\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}^{(\tau)}(x_i,x_j) & \mathbf{K}^{(\tau)}(x_i,w_m)\\
\mathbf{K}^{(\tau)}(w_l,x_j) & \mathbf{K}^{(\tau)}(w_l,w_m)
\end{array}\right]_{i,j=1,...,n_1, \atop l,m=1,...,n_2}\\
\nonumber &=\mathrm{Pf}\left(\left[\begin{array}{cc}
\mathbf{K}^{(\tau)}(x_i,x_j) & \mathbf{K}^{(\tau)}(x_i,w_m)\\
\mathbf{K}^{(\tau)}(w_l,x_j) & \mathbf{K}^{(\tau)}(w_l,w_m)
\end{array}\right]\mathbf{Z}_{2(n_1+n_2)}^{-1}\right)_{i,j=1,...,n_1, \atop l,m=1,...,n_2}, \quad x_i\in \mathbb{R}, w_i \in \mathbb{R}_2^+.
\end{align}
}
\end{proposition}
We can likewise apply the functional differentiation methods of Chapter \ref{sec:Gin_odd_fdiff} or use the `odd-from-even' approach of Chapter \ref{sec:Gin_oddfromeven} to obtain the $N$ odd case.
\begin{definition}
Let $N$ be odd. With $R_0,R_1,...$ the skew-orthogonal polynomials in Proposition \ref{prop:tGinsops}, $r^{(\tau)}_0,r^{(\tau)}_1,...$ the corresponding normalisations (\ref{eqn:tGinnorms}), and $\nu_j^{(\tau)}$ as in (\ref{def:tGinnu}) (with $\bar{\nu}_j^{(\tau)}:= \nu_j^{(\tau)}\big|_{u=1}$), define
\begin{align}
\nonumber S^{\mathrm{odd}}(\mu,\eta)_{\tau}&=2\sum_{j=0}^{\frac{N-1}{2}-1}\frac{1}{r_j^{(\tau)}}\Bigl[\hat{q}_{2j}(\mu)\hat{\varphi}_{2j+1}(\eta)-\hat{q}_{2j+1}(\mu) \hat{\varphi}_{2j}(\eta)\Bigr]+ \kappa(\mu,\eta),\\
\nonumber D^{\mathrm{odd}}(\mu,\eta)_{\tau}& =2\sum_{j=0}^{\frac{N-1}{2}-1}\frac{1}{r_j^{(\tau)}}\Bigl[\hat{q}_{2j}(\mu)\hat{q}_{2j+1}(\eta)- \hat{q}_{2j+1}(\mu) \hat{q}_{2j}(\eta)\Bigr],\\
\nonumber \tilde{I}^{\mathrm{odd}}(\mu,\eta)_{\tau}& =2\sum_{j=0}^{\frac{N-1}{2}-1}\frac{1}{r_j^{(\tau)}}\Bigl[\hat{\varphi}_{2j} (\mu)\hat{\varphi}_{2j+1}(\eta)- \hat{\varphi}_{2j+1}(\mu) \hat{\varphi}_{2j}(\eta)\Bigr]\\
\nonumber &+\epsilon(\mu,\eta)+\theta(\mu,\eta),
\end{align}
where $\epsilon(\mu,\eta)$ is from Definition \ref{def:GinOE_kernel} and
\begin{align}
\nonumber \hat{R}_j(\mu)&=R_j(\mu)- \frac{\bar{\nu}_{j+1}^{(\tau)}} {\bar{\nu}_N^{(\tau)}} R_{N-1}(\mu),\\
\nonumber \hat{q}_j(\mu) &= h(\mu)\: \hat{R}_j(\mu),\\
\nonumber \hat{\varphi}_j(\mu) &=
\left\{
\begin{array}{ll}
-\frac{1}{2}\int_{-\infty}^{\infty}\mathrm{sgn}(\mu-z)\hspace{3pt}\hat{q}_j(z)\hspace{3pt}dz, & \mu\in \mathbb{R},\\
i\hat{q}_j(\bar{\mu}), & \mu\in \mathbb{R}_2^+,
\end{array}
\right.\\
\nonumber \kappa(\mu,\eta) &=
\left\{
\begin{array}{ll}
q_{N-1}(\mu)/ \bar{\nu}_N^{(\tau)}, & \eta\in \mathbb{R},\\
0, & \mathrm{otherwise},\\
\end{array}
\right.\\
\nonumber \theta(\mu,\eta)&=
\big(\chi_{(\eta\in\mathbb{R})}\varphi_{N-1}(\mu)- \chi_{(\mu\in\mathbb{R})}\varphi_{N-1}(\eta)\big)/ \bar{\nu}_N^{(\tau)},
\end{align}
with the indicator function $\chi_{(A)}=1$ for $A$ true and zero for $A$ false. Then, let
\begin{align}
\label{def:tGin_Ko} \mathbf{K}^{(\tau)}_{\mathrm{odd}}(\mu,\eta)=\left[
\begin{array}{cc}
S^{\mathrm{odd}}(\mu,\eta)_{\tau} & -D^{\mathrm{odd}}(\mu,\eta)_{\tau}\\
\tilde{I}^{\mathrm{odd}}(\mu,\eta)_{\tau} & S^{\mathrm{odd}}(\eta,\mu)_{\tau}
\end{array}
\right].
\end{align}
\end{definition}
\begin{proposition}
Let $N$ be odd. Then with $\mathbf{K}_{\mathrm{odd}}^{(\tau)}(\mu,\eta)$ from (\ref{def:tGin_Ko}), the $(n_1,n_2)$-point correlation functions for the partially symmetric real Ginibre ensemble are
{\small
\begin{align}
\nonumber &\rho_{(n_1,n_2)}(x_1,...,x_{n_1},w_1,...,w_{n_2})_{\tau}=\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}_{\mathrm{odd}}^{(\tau)}(x_i,x_j) & \mathbf{K}_{\mathrm{odd}}^{(\tau)}(x_i,w_m)\\
\mathbf{K}_{\mathrm{odd}}^{(\tau)}(w_l,x_j) & \mathbf{K}_{\mathrm{odd}}^{(\tau)}(w_l,w_m)
\end{array}\right]_{i,j=1,...,n_1, \atop l,m=1,...,n_2}\\
\nonumber &=\mathrm{Pf}\left(\left[\begin{array}{cc}
\mathbf{K}_{\mathrm{odd}}^{(\tau)}(x_i,x_j) & \mathbf{K}_{\mathrm{odd}}^{(\tau)}(x_i,w_m)\\
\mathbf{K}_{\mathrm{odd}}^{(\tau)}(w_l,x_j) & \mathbf{K}_{\mathrm{odd}}^{(\tau)}(w_l,w_m)
\end{array}\right]\mathbf{Z}_{2(n_1+n_2)}^{-1}\right)_{i,j=1,...,n_1, \atop l,m=1,...,n_2}, \quad x_i\in \mathbb{R}, w_i \in \mathbb{R}_2^+.
\end{align}
}
\end{proposition}
\subsection{Correlation kernel elements}
\label{sec:tGkernelts}
We expect that the correlation kernel elements will be deformations of those in the real Ginibre case of Chapter \ref{sec:Ginkernelts}. To establish this claim we use an integral representation of the Hermite polynomials to write (\ref{def:tGsops}) as
\begin{align}
\label{eqn:integHerm} C_n (z)=\frac{1}{\sqrt{\pi}}\int_{-\infty}^{\infty}e^{-t^2}(z+i\sqrt{2\tau}t)^n\;dt,
\end{align}
which we can verify by expanding the integrand using the binomial theorem and comparing the result to (\ref{eqn:herm_polys}). With (\ref{eqn:integHerm}) we can express $D(\mu,\eta)_{\tau}$ in terms of $D(\mu,\eta)_0:=D(\mu,\eta)$ from Definition \ref{def:GinOE_kernel} as
\begin{align}
\nonumber &D(\mu,\eta)_{\tau}=\frac{h(\mu)h(\eta)}{\pi (1+\tau)} \int_{-\infty}^{\infty}dt_1\; e^{-t_1^2}\int_{-\infty}^{\infty}dt_2\; e^{-t_2^2}\\
\label{eqn:DDttrans} & \times \left( w(\mu+ i\sqrt{2\tau}t_1)w(\eta +i\sqrt{2\tau}t_2)\right)^{-1} D(\mu+i\sqrt{2\tau}t_1,\eta+i\sqrt{2\tau}t_2)_0,
\end{align}
where $h(x)$ is from Definition \ref{def:tGine_kernel} and $w(x)=h(x)\Big|_{\tau=0}=e^{-x^2/2}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(x)|)}$. Similar transformations also hold for $S_{c,c}(\mu,\eta)_{\tau}$ and $S_{r,c}(\mu,\eta)_{\tau}$ (where the second variable is complex conjugated) and $\tilde{I}_{c,c}(\mu,\eta)$ (where both variables are conjugated).
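The integral representation (\ref{eqn:integHerm}) is also easy to verify numerically; the following sketch (assuming \texttt{numpy}) compares a Gauss--Hermite quadrature of the right-hand side with a direct evaluation of $C_n$:

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss, hermval

def C(n, z, tau):
    # C_n(z) = (tau/2)^(n/2) H_n(z / sqrt(2 tau))
    return (tau / 2) ** (n / 2) * hermval(z / np.sqrt(2 * tau), [0.0] * n + [1.0])

tau, n, z = 0.4, 5, 1.3
t, w = hermgauss(40)                 # nodes and weights for integrals against exp(-t^2)
rhs = np.sum(w * (z + 1j * np.sqrt(2 * tau) * t) ** n) / np.sqrt(np.pi)
print(abs(rhs - C(n, z, tau)))       # agreement to machine precision
```

The imaginary part of the quadrature cancels by symmetry, as it must for real $z$.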
\begin{remark}
\label{rem:tGtrans}
The transformation (\ref{eqn:DDttrans}) does not hold for the remaining kernel elements since they all contain factors of $\varphi_j(x)$.
\end{remark}
To obtain the limiting complex correlation kernel in the bulk (in the strongly non-symmetric limit where $\tau$ is bounded away from $1$) we can directly apply the transform (\ref{eqn:DDttrans}) using the real Ginibre result from (\ref{eqn:Ginbulk}). This yields
\begin{align}
\nonumber S^{\mathrm{bulk}}_{c,c}(w,z)_{\tau}&=\frac{i}{\sqrt{2\pi}}\sqrt{\mathrm{erfc}\left(\sqrt{\frac{2}{1-\tau^2}}|\mathrm{Im}(w)|\right)}\sqrt{\mathrm{erfc}\left(\sqrt{\frac{2}{1-\tau^2}}|\mathrm{Im}(z)|\right)}\\
\nonumber &\times \frac{\bar{z}-w}{(1-\tau^2)} \:e^{-(\bar{z}-w)^2/2(1-\tau^2)},
\end{align}
and so the bulk limiting complex density is given by \cite{FN08}
\begin{align}
\label{eqn:tGbulkcdens} \rho_{(1)}^{\mathrm{bulk}}(w)_{\tau}=S^{\mathrm{bulk}}_{c,c}(w,w)_{\tau}=\sqrt{\frac{2}{\pi}}\: \mathrm{erfc}\left(\sqrt{\frac{2}{1-\tau^2}}\: v\right)\: \frac{v \: e^{2 v^2/(1-\tau^2)}}{1-\tau^2},
\end{align}
where $w=u+iv \in \mathbb{R}_2^{+}$.
For the real case we cannot simply apply (\ref{eqn:DDttrans}) to (\ref{eqn:Ginbulk}) since, as mentioned in Remark \ref{rem:tGtrans}, the appearance of $\varphi_j$ factors invalidates its use. Instead, we substitute the skew-orthogonal polynomials of Proposition \ref{prop:tGinsops} and perform similar manipulations to those leading to (\ref{eqn:Ginsummed}), finding \cite{FN08}
\begin{align}
\nonumber S_{r,r}(x,y)_{\tau}&=\frac{e^{-(x^2+y^2)/2(1+\tau)}}{\sqrt{2 \pi}}\sum_{k=0}^{N-2}\frac{C_k(x)C_k(y)}{k!}\\
\label{eqn:Srrsum} &+\frac{e^{-x^2/2(1+\tau)}}{\sqrt{2\pi}(1+\tau)}\frac{C_{N-1}(x)\Phi_{N-2}(y)}{(N-2)!},
\end{align}
giving the large $N$ limit
\begin{align}
\label{eqn:tGSrrlim} \lim_{N\to \infty}S_{r,r}(x,y)_{\tau}=\frac{e^{-(x-y)^2/2(1-\tau^2)}}{\sqrt{2\pi (1-\tau^2)}}=:S_{r,r}^{\mathrm{bulk}}(x,y)_{\tau}.
\end{align}
The limiting bulk density is given by (\ref{eqn:tGSrrlim}) with $x=y$.
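The limit (\ref{eqn:tGSrrlim}) amounts to a Mehler-type summation of the first term of (\ref{eqn:Srrsum}) (the correction term dies with its factorial denominator), and the convergence is fast enough to observe directly. A sketch, assuming \texttt{numpy}:

```python
import math

import numpy as np
from numpy.polynomial.hermite import hermval

def C(k, z, tau):
    # scaled monic Hermite polynomial C_k(z) = (tau/2)^(k/2) H_k(z/sqrt(2 tau))
    return (tau / 2) ** (k / 2) * hermval(z / np.sqrt(2 * tau), [0.0] * k + [1.0])

tau, N, x, y = 0.5, 60, 0.3, -0.2
series = sum(C(k, x, tau) * C(k, y, tau) / math.factorial(k) for k in range(N - 1))
finite_N = np.exp(-(x**2 + y**2) / (2 * (1 + tau))) / np.sqrt(2 * np.pi) * series
limit = np.exp(-((x - y) ** 2) / (2 * (1 - tau**2))) / np.sqrt(2 * np.pi * (1 - tau**2))
print(finite_N, limit)    # the truncated series is already very close at N = 60
```

Since the tail of the series decays geometrically in $\tau$, moderate $N$ suffices for $|\tau|$ bounded away from $1$.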
By comparing (\ref{eqn:tGbulkcdens}) and (\ref{eqn:tGSrrlim}) to (\ref{eqn:Ginbulk}) we see that we can obtain the general $\tau$ bulk densities for the complex eigenvalues by changing variables
\begin{align}
\label{def:tcovw} u\to u/(\sqrt{1-\tau^2}), v\to v/(\sqrt{1-\tau^2})
\end{align}
in $S_{c,c}^{\mathrm{bulk}}(u+iv, u+iv) dudv$, and for the real case by using
\begin{align}
\label{def:tcovx} x\to x/(\sqrt{1-\tau^2})
\end{align}
in $S_{r,r}^{\mathrm{bulk}}(x,x) dx$. Indeed, through the use of the inter-relationships (\ref{eqn:Gin_s=d=i}) we obtain the bulk limiting form of the general correlation functions from
\begin{align}
\rho_{n_1,n_2}^{(\tau)}(x_1,...,x_{n_1},w_1,...,w_{n_2})dx_1...dx_{n_1}dw_1,...,dw_{n_2}
\end{align}
by applying the scaling (\ref{def:tcovw}) and (\ref{def:tcovx}) to (\ref{eqn:Ginbulk}) and (\ref{eqn:Ginbulkall}).
In \cite{DGIL1994} the authors describe the boundary of a Coulomb gas (which from \cite{forrester?} we know is analogous to a system of eigenvalues of a random matrix) by looking for the point $w=x+iy$ that maximises the difference between the densities of systems with $N$ and $N+1$ particles; that is, to maximise
\begin{align}
\nonumber \rho_{(1)}(w)\Big|_{N\mapsto N+1}-\rho_{(1)}(w).
\end{align}
Substituting the real Ginibre result (\ref{eqn:Ginsummed}) for $D(s,t)_0$ in (\ref{eqn:DDttrans}), it is shown in \cite{FN08} that
\begin{align}
\label{eqn:tGbdry} \rho_{(1)}(w)\Big|_{N\mapsto N+1}-\rho_{(1)}(w)\sim\frac{\sqrt{2}\;|C_{N-2}(w)|^2}{\pi(1+\tau)\Gamma(N-1)}e^{-\left(2|w|^2- \tau(w^2+ \bar{w}^2)\right) /2(1-\tau^2)}.
\end{align}
By maximising this difference with respect to $w$, the working in \cite{DGIL1994} shows that (\ref{eqn:tGbdry}) implies that the boundary is an ellipse with semi-axes $(1+\tau) \sqrt{N}$ and $(1-\tau) \sqrt{N}$, a fact we illustrated in the plots of Figure \ref{fig:tau_sims}. We can then find the partially symmetric analogue of Proposition \ref{prop:Gincirclaw}.
\begin{proposition}[\cite{SCSS1988}]
With $\hat{z}=z/\sqrt{N}$ the limiting distribution of complex eigenvalues $\hat{z}$ in the partially symmetric real Ginibre ensemble is
\begin{align}
\label{eqn:ellaw1} \rho_{(1)}^c(\hat{z})_{\tau}=\frac{\chi_{\hat{z}\in E}}{\pi (1-\tau^2)},
\end{align}
where $E$ is the ellipse centred at the origin with semi-axes $(1+\tau)$ and $(1-\tau)$.
\end{proposition}
\textit{Proof}: We already know from (\ref{eqn:tGbdry}) that the boundary of the support is the ellipse with semi-axes $(1+\tau)$ and $(1-\tau)$. So all that remains is to show that the density is uniform on the ellipse as stated.
Making the change of variables $\hat{w} = w/\sqrt{N}$ and $\hat{z}= z/\sqrt{N}$ in (\ref{eqn:DDttrans}) we have
\begin{align}
\nonumber &D_{c,c}(\sqrt{N} \hat{w}, \sqrt{N} \hat{z})_{\tau}=\frac{h(\sqrt{N} \hat{w})h(\sqrt{N} \hat{z})}{\sqrt{2 \pi} (1+\tau) \pi} \int_{-\infty}^{\infty}dt_1\; e^{-t_1^2}\int_{-\infty}^{\infty}dt_2\; e^{-t_2^2}\\
\label{eqn:tDrtN} & \times (\sqrt{N}\hat{z}+i\sqrt{2\tau} t_2 -\sqrt{N}\hat{w}-i\sqrt{2\tau} t_1) \sum_{j=1}^{N-1} \frac{((\sqrt{N}\hat{w}+i\sqrt{2\tau} t_1)(\sqrt{N}\hat{z}+i\sqrt{2\tau} t_2))^{j-1}} {(j-1)!}.
\end{align}
Using the knowledge that for large $N$
\begin{align}
\nonumber \sum_{j=1}^{N-1} \frac{x^{j-1}}{(j-1)!} \sim e^{x},
\end{align}
we can perform the resulting integrals in (\ref{eqn:tDrtN}) to obtain
\begin{align}
\nonumber D_{c,c}(\sqrt{N} \hat{w}, \sqrt{N} \hat{z})_{\tau} &\sim \sqrt{\frac{N}{2\pi(1-\tau^2)}} \frac{(\hat{z}-\hat{w})}{1-\tau^2} e^{N(\hat{w}-\hat{z})^2/2(1-\tau^2)}\\
\nonumber & \times \left( \mathrm{erfc}\left(\sqrt{\frac{2N}{1-\tau^2}}\: \mathrm{Im}(\hat{w})\right) \mathrm{erfc}\left(\sqrt{\frac{2N}{1-\tau^2}}\: \mathrm{Im}(\hat{z})\right) \right)^{1/2}.
\end{align}
Using (\ref{eqn:Gin_s=d=i}) we have, by following the same reasoning as in Proposition \ref{prop:Gincirclaw},
\begin{align}
\nonumber \rho_{(1)}^c (\sqrt{N} \hat{z})= S_{c,c}(\sqrt{N} \hat{z}, \sqrt{N} \hat{z})_{\tau}\sim \sqrt{\frac{2}{\pi}} \sqrt{\frac{N}{1-\tau^2}} \frac{\hat{y}} {1-\tau^2} e^{2N\hat{y}^2/(1-\tau^2)} \mathrm{erfc}\left(\sqrt{\frac{2N}{1-\tau^2}}\: \hat{y} \right),
\end{align}
where $\hat{z}=\hat{x}+i\hat{y}$. Noting the asymptotic behaviour (\ref{eqn:bigerfc}), we have the result.
\hfill $\Box$
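The elliptical law is easy to observe in simulation. In the sketch below (assuming \texttt{numpy}) we build a partially symmetric Gaussian matrix from symmetric and antisymmetric parts — a standard construction giving off-diagonal variance $1$ and correlation $\mathrm{E}[x_{ij}x_{ji}]=\tau$, used here since (\ref{def:tau_mats}) is not reproduced in this section — and check that the spectrum stays inside the ellipse with semi-axes $(1\pm\tau)\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(42)
N, tau = 400, 0.5
g, h = rng.standard_normal((2, N, N))
S = (g + g.T) / np.sqrt(2)          # symmetric Gaussian part
A = (h - h.T) / np.sqrt(2)          # antisymmetric Gaussian part
X = np.sqrt((1 + tau) / 2) * S + np.sqrt((1 - tau) / 2) * A
lam = np.linalg.eigvals(X)

# squared elliptical radius: <= 1 (up to finite-N fluctuations) inside the
# ellipse with semi-axes (1 + tau) sqrt(N) and (1 - tau) sqrt(N)
r2 = (lam.real / ((1 + tau) * np.sqrt(N))) ** 2 \
   + (lam.imag / ((1 - tau) * np.sqrt(N))) ** 2
print(r2.max())
```

For $N=400$ essentially all eigenvalues land inside the ellipse, with only small boundary fluctuations.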
In \cite{SCSS1988} the authors point out that the projection of (\ref{eqn:ellaw1}) gives a generalised semi-circle, which reduces to Wigner's semi-circle in the limit $\tau\to 1$.
\begin{remark}
Note that the analysis of the partially symmetric ensemble neatly explains the superficial discrepancy between the radius of support in the GOE limit ($(-\sqrt{2N}, \sqrt{2N})$ from (\ref{eqn:wssl})) and in the real Ginibre limit ($(-\sqrt{N},\sqrt{N})$ from (\ref{eqn:Gcirclaw})): with fixed $b=1/(1+\tau)$, as $\tau\to 1$ we see from (\ref{def:tau_mats}) that $\mathbf{X}\to \mathbf{S}/\sqrt{2}$.
\end{remark}
From the universality established in \cite{TVK10} we can conclude that, since Proposition \ref{prop:circlaw} holds in the case $\tau=0$ for all distributions with finite mean and unit variance, the equivalent elliptical law holds, by scaling, for general $\tau$.
\subsubsection{Strongly symmetric limit}
\label{sec:tGsslim}
To analyse the regime of cross-over between symmetric and asymmetric matrices we follow \cite{FN08} and use the idea of \cite{FKS97, Efe97} to let
\begin{align}
\label{def:tssla} \tau=1-\alpha^2/N
\end{align}
and allow $N\to\infty$. Since we know that in the large $N$ limit we will recover the semi-circular density (\ref{eqn:wssl}), the eigenvalues will be supported on $[-2\sqrt{N},2\sqrt{N}]$ and the average spacing between them must be on the order of $1/\sqrt{N}$. With this in mind we scale the eigenvalues by
\begin{align}
\label{def:tGascale} x\to x\pi/\sqrt{N},
\end{align}
so that we have unit (real) density, and (\ref{eqn:Srrsum}) becomes
\begin{align}
\nonumber \frac{\pi}{\sqrt{N}} \; S_{rr}\left(\frac{\pi x}{\sqrt{N}}, \frac{\pi y}{\sqrt{N}}\right)_{1-\frac{\alpha^2}{N}} &\sim \sqrt{\frac{\pi} {2N}} e^{-\pi^2 \frac{x^2+y^2} {4N-2\alpha^2}} \sum_{k=0}^{N-2} \frac{(1-\alpha^2/N)^k} {2^k k!}\\
\nonumber &\times H_k \left(\frac{\pi x}{\sqrt{2N-2\alpha^2}} \right) H_k \left(\frac{\pi y}{\sqrt{2N-2\alpha^2}} \right),
\end{align}
where we have ignored the second term in (\ref{eqn:Srrsum}) since it tends to zero because of the factorial denominator. Applying the asymptotic formula \cite[18.15.26]{NIST2010}
\begin{align}
\nonumber \frac{\Gamma(n/2+1)}{\Gamma(n+1)}e^{-x^2/2}H_n(x)=\cos(\sqrt{2n+1}\; x-n\pi/2) +O(n^{-1/2}),
\end{align}
and noting the asymptotic behaviour
\begin{align}
\nonumber & (1-\alpha^2/N)^k \sim e^{-k\alpha^2/N},
\end{align}
we have \cite{FN08}
\begin{align}
\nonumber \frac{\pi}{\sqrt{N}} \; S_{rr}\left(\frac{\pi x}{\sqrt{N}}, \frac{\pi y}{\sqrt{N}}\right)_{1-\frac{\alpha^2}{N}} &\sim \sum_{k= 1}^{N-2} \sqrt{\frac{1} {k N}} \; e^{-k\alpha^2/N}\\
\label{eqn:tGSrr1} &\times \cos \left(\sqrt{\frac{2k+1}{2N}} \pi x -\frac{k\pi}{2} \right) \cos \left(\sqrt{\frac{2k+1}{2N}} \pi y -\frac{k\pi}{2} \right),
\end{align}
where we have used the identity
\begin{align}
\nonumber \frac{\Gamma (k+1)}{\Gamma (k/2+1)} = \frac{2^{k} \Gamma ((k+1)/2)} {\sqrt{\pi}},
\end{align}
and then the large $x$ behaviour $\Gamma(x+a)/\Gamma (x) \sim x^a$.
The product-to-sum formula $\cos x \cos y = (\cos (x+y)+\cos (x-y))/2$ means that the cosine product in (\ref{eqn:tGSrr1}) becomes
\begin{align}
\nonumber & \frac{1}{2} \cos \left(\sqrt{\frac{2k+1}{2N}} \pi (x+ y) -k\pi \right) + \frac{1}{2} \cos \left(\sqrt{\frac{2k+1}{2N}} \pi (x- y) \right).
\end{align}
To leading order, the contributions from the first term cancel because of the alternating factor $(-1)^k$ introduced by the $-k\pi$ term, and so we are left with
\begin{align}
\nonumber \frac{\pi}{\sqrt{N}} \; S_{rr}\left(\frac{\pi x}{\sqrt{N}}, \frac{\pi y}{\sqrt{N}}\right)_{1-\frac{\alpha^2}{N}} &\sim \frac{1}{2} \sum_{k=1}^{N-2} \sqrt{\frac{1} {k N}} \; e^{-k\alpha^2/N} \cos \left( \sqrt{\frac{k} {N}} \pi (x-y) \right).
\end{align}
Letting $t_k :=k /N \in (0,1)$ we rewrite this as
\begin{align}
\label{eqn:tGRSapp} \frac{1}{2} \sum_{k=1}^{N-2} \frac{e^{-\alpha^2 t_k}} {N\sqrt{t_k}} \cos \left(\pi (x-y) \sqrt{t_k} \right) = \frac{1}{2} \sum_{k=1}^{N-2} \frac{e^{-\alpha^2 t_k}} {\sqrt{t_k}} \cos \left(\pi (x-y) \sqrt{t_k} \right) \Delta_k,
\end{align}
where $\Delta_k:= t_k-t_{k-1}=1/N$ is the eigenvalue spacing. The right hand side of (\ref{eqn:tGRSapp}) is a Riemann sum approximation to a definite integral and so in the large $N$ limit we have \cite{FN08}
\begin{align}
\label{eqn:tGSrr2} \frac{\pi}{\sqrt{N}} \; S_{rr}\left(\frac{\pi x}{\sqrt{N}}, \frac{\pi y}{\sqrt{N}}\right)_{1-\frac{\alpha^2}{N}} &\sim \int_0^1 e^{-\alpha^2 t^2} \cos \left(\pi (x-y) t \right) dt,
\end{align}
where we have changed variables $t\to t^2$. By taking the limit $\alpha\to 0$ in (\ref{eqn:tGSrr2}) we see that we reclaim $S^{\mathrm{bulk}}(x_j,x_k)$ from (\ref{eqn:GOESbulk}) and so we have indeed obtained GOE behaviour as the $\tau \to 1$ limit of the partially symmetric real Ginibre ensemble, as we expected.
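The $\alpha\to 0$ statement can be checked with a one-line quadrature (sketch assuming \texttt{numpy}): the exponential damping factor in (\ref{eqn:tGSrr2}) reduces to $1$, leaving $\int_0^1 \cos(\pi(x-y)t)\,dt = \sin(\pi(x-y))/(\pi(x-y))$, the GOE bulk kernel:

```python
import numpy as np

def goe_limit(d, n=60):
    # Gauss-Legendre evaluation of int_0^1 cos(pi d t) dt, the alpha -> 0
    # limit of the cross-over kernel (the exponential damping becomes 1)
    x, w = np.polynomial.legendre.leggauss(n)
    t, w = 0.5 * (x + 1.0), 0.5 * w
    return np.sum(w * np.cos(np.pi * d * t))

for d in (0.5, 1.7):
    print(goe_limit(d), np.sin(np.pi * d) / (np.pi * d))   # the two columns agree
```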
We can apply the same procedure to the complex case, recalling that both the real and imaginary parts of each eigenvalue must be scaled according to (\ref{def:tGascale}); that is, we make the replacement $w_j \to \pi w_j/\sqrt{N} = \pi(u_j+ iv_j)/\sqrt{N}$. So, with $\tau$ specified by (\ref{def:tssla}), we have
\begin{align}
\nonumber &e^{-(w_j^2+\bar{w}_j^2)/2(1+\tau)} \sqrt{\mathrm{erfc}\left( \sqrt{\frac{2}{1-\tau^2}}\; v_j \right)}\\
\nonumber &\to e^{-\pi^2 (w_j^2+\bar{w}_j^2)/ (4N-2\alpha^2)} \sqrt{\mathrm{erfc}\left( \sqrt{\frac{2}{2\alpha^2/N -\alpha^4/N^2}}\; \frac{\pi v_j}{\sqrt{N}} \right)} \sim \sqrt{\mathrm{erfc}\left( \frac{\pi v_j}{\alpha}\right)},
\end{align}
and we obtain \cite{FN08}
\begin{align}
\nonumber \lim_{N\to\infty}\left(\frac{\pi}{\sqrt{N}}\right)^2 S_{cc} \left( \frac{\pi w_1} {\sqrt{N}}, \frac{\pi w_2} {\sqrt{N}}\right)_{\tau} &=i\pi \; \sqrt{\mathrm{erfc} \left( \frac{\pi v_1}{\alpha}\right) \mathrm{erfc} \left( \frac{\pi v_2}{\alpha}\right)}\\
\label{eqn:tGlimcc}&\times \int_0^1 t e^{-\alpha^2 t^2} \sin (\pi t (\bar{w}_1-w_2)) dt.
\end{align}
By summing (\ref{eqn:tGSrr2}) and (\ref{eqn:tGlimcc}) we reclaim the result of Efetov \cite[(5.30)]{Efe97}.
\newpage
\section{Real Spherical Ensemble}
\setcounter{figure}{0}
\label{sec:SOE}
In Chapter \ref{sec:tG} we generalised the real Ginibre ensemble with the inclusion of the parameter $\tau$, which controlled the degree of symmetry in the matrices. Here we consider a different generalisation that can be viewed from either a geometric viewpoint or as a problem in generalised eigenvalues. Recall that the real Ginibre ensemble possesses an eigenvalue distribution that `naturally' lies in the plane --- by that we mean that in the large $N$ limit the circular law takes effect and the distribution is uniform on the unit (planar) disk. The ensemble of this chapter has eigenvalues that `naturally' live on the sphere, the second of the triumvirate of surfaces of constant curvature discussed in the Introduction.
As it happens, the ensemble that we discuss here is intimately related to a question raised in \cite{eks1994} concerning the distribution of generalised eigenvalues of a pair of real matrices. Recall from the introduction that a generalised eigenvalue of the set $\{ \mathbf{A},\mathbf{B} \}$, where $\mathbf{A}$ and $\mathbf{B}$ are $N\times N$ matrices, is a value of $\lambda$ satisfying the equation (\ref{eqn:genEvals}), $\det (\mathbf{B} -\lambda \mathbf{A}) =0$. The authors of \cite{eks1994} provide a geometric interpretation of the problem: regard the pair of matrices $\mathbf{A},\mathbf{B}$ as two vectors in $\mathbb{R}^{N^2}$. The corresponding plane spanned by these vectors then intersects the sphere $S^{N^2-1}$ to give a great circle. The real generalised eigenvalues relate to the intersection of this great circle with the set $\Delta_N$ of all $N \times N$ singular matrices $\mathbf{X}$ such that $\mathrm{Tr}\: \mathbf{X}\bX^T=1$ (thus choose $\mathbf{X}=c(\mathbf{B}- \lambda \mathbf{A})$ for suitable $c$). With $\mathbf{A}, \mathbf{B}$ having standard Gaussian entries, the great circle has uniform measure, so the expected number of real eigenvalues is equal to the expected number of intersections of $\Delta_N$ with a random great circle.
Another feature of the random generalised eigenvalue problem studied in \cite{eks1994} is the density $\rho_{(1)}(\lambda)$ of real generalised eigenvalues. By writing $\lambda =\tan\theta$ the generalised eigenvalue equation reads $\det (\cos\theta\, \mathbf{B}- \sin\theta\, \mathbf{A})=0$. Using the fact that a pair of standard Gaussians $(x_1,x_2)$ is, as a distribution in the plane, invariant under rotation, it was noted that $(\cos \theta,\sin \theta)$ must be distributed uniformly on the unit circle, and so, by $d\theta = d\lambda /(1+\lambda^2)$,
\begin{align}
\label{eqn:rho}
\rho_{(1)}(\lambda)=\frac{1}{\pi}\frac{E_N}{1+\lambda^2}.
\end{align}
The appearance of circles and spheres can be anticipated. First recall that a Cauchy random variable $\mathcal{C}$ can be defined as the ratio of two independent standard Gaussian random variables, and its density is
\begin{align}
\label{eqn:Caudist} \frac{1}{\pi (1+\mathcal{C}^2)}.
\end{align}
(Note that we are only interested in the standard Cauchy distribution here, which is centred at zero and has scale parameter equal to one.) In other words, a Cauchy distributed real variable has uniform distribution on a great circle of a sphere when stereographically projected. Second, note that by factoring $\det \mathbf{A}$ out of the determinant in (\ref{eqn:genEvals}) the generalised eigenvalue problem is equivalent to the standard eigenvalue problem for the matrix $\mathbf{A}^{-1}\mathbf{B}$. Since $\mathbf{A}$ and $\mathbf{B}$ are both random matrices with Gaussian entries, $\mathbf{A}^{-1}\mathbf{B}$ is the matrix equivalent of a Cauchy random variable. Indeed, in \cite{eks1994} the authors refer to these matrices as `Cauchy matrices'.
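The ratio characterisation is easy to check by simulation (a sketch assuming \texttt{numpy}): for the standard Cauchy distribution $P(|\mathcal{C}|\le c) = (2/\pi)\arctan c$, and the empirical ratio of two independent standard normals matches this:

```python
import numpy as np

rng = np.random.default_rng(0)
num, den = rng.standard_normal((2, 10**6))
cauchy = num / den                          # ratio of independent standard normals

for c in (0.5, 1.0, 3.0):
    emp = np.mean(np.abs(cauchy) <= c)
    print(c, emp, 2 / np.pi * np.arctan(c))  # empirical vs exact CDF values
```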
\begin{remark}
Note that by Remark \ref{rem:Ginsing} we can assume that $\mathbf{A}$ is invertible.
\end{remark}
This leads us to define the \textit{real spherical ensemble}, consisting of matrices
\begin{align}
\label{def:ainvb} \mathbf{Y}=\mathbf{A}^{-1}\mathbf{B},
\end{align}
where $\mathbf{A},\mathbf{B}$ are $N\times N$ real Ginibre matrices with elements specified by (\ref{def:Gin_eldist}). In this chapter we shall investigate the statistics of its eigenvalues, particularly with regard to stereographic projection onto the sphere. An analogous ensemble has been studied in \cite{Krish2009,HKPV2009}, where $\mathbf{A}$ and $\mathbf{B}$ are complex Ginibre matrices. The real and complex spherical ensembles are analogous in the same way that the real and complex Ginibre ensembles, and the GOE and GUE, are analogous.
As with the real Ginibre matrices, the real spherical matrices differ from their complex counterparts in that they exhibit a finite probability of having real eigenvalues, which, in the present case, corresponds to a great circle of uniform, non-zero eigenvalue density. To illustrate this point we have simulated an eigenvalue distribution and stereographically projected it in Figure \ref{fig:dstar}. As with the real Ginibre case, in the large $N$ limit the effect of the real eigenvalues becomes negligible. In \cite{FM11} the authors show that the large $N$ distribution matches that of the complex spherical ensemble, leading them to conjecture that there exists a spherical law that is analogous to the circular law of Proposition \ref{prop:circlaw}. This has since been established \cite{Bord2010}.
\begin{figure}[ht]
\begin{center}
\includegraphics[scale=0.6]{deathstar_vert111500.pdf}
\end{center}
\caption[Plot of simulated eigenvalues from the real spherical ensemble.]{A stereographic projection of the eigenvalues from 120 independent $100\times 100$ matrices of the form (\ref{def:ainvb}). The great circle of real eigenvalues can be clearly seen.}
\label{fig:dstar}
\end{figure}
A major difference in the analysis of this ensemble compared to that of Chapter \ref{sec:GinOE} is that we will not study the eigenvalues themselves directly. Given that we expect a more or less uniform distribution on the sphere (with the exception of the great circle corresponding to the real line) we use the fractional linear transformation
\begin{align}
\label{7'} z = {1 \over i} {w -1 \over w+1},
\end{align}
which maps the unit disk in the $w$-plane onto the upper half $z$-plane, with the circumference corresponding to the real axis. In the case that $z=\lambda \in\mathbb{R}$, $w$ lies on the unit circle and we can write
\begin{align}
\label{14.2}
\lambda&=\frac{1} {i}\left(1 -\frac{2}{e+1} \right)=\mathrm{tan}\frac{\theta}{2},
\end{align}
with the convention $e:=e^{i\theta}$. In terms of the sphere, this is a projection of one hemisphere into the unit disk. The transformation (\ref{7'}) allows us to take advantage of the rotational symmetry of the problem, enabling us to compute an otherwise intractable integral.
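The mapping properties of (\ref{7'}) can be confirmed directly (a sketch assuming \texttt{numpy}): the unit circle goes to the real axis with $\lambda=\tan(\theta/2)$, and interior points of the disk land in the upper half-plane:

```python
import numpy as np

theta = np.linspace(-2.5, 2.5, 9)     # stay away from w = -1, i.e. theta = +/- pi
w = np.exp(1j * theta)                # points on the unit circle
z = (w - 1) / (1j * (w + 1))          # the transformation (7')

print(np.max(np.abs(z.imag)))                        # circle -> real axis
print(np.max(np.abs(z.real - np.tan(theta / 2))))    # with z = tan(theta/2)
print((0 - 1) / (1j * (0 + 1)))                      # centre w = 0 -> z = i
```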
One of the technical consequences of this choice of co-ordinates is that the Pfaffian in the generalised partition function (the real spherical analogue of (\ref{eqn:GinOE_gpf_even})) can be skew-diagonalised for general $\zeta$, which was not possible in the real Ginibre case. This results in probabilities $p_{N,k}$ that are products over the $\alpha$ and $\beta$, which are computationally easier than the Pfaffian or determinant structures heretofore encountered. We will discuss this further in Chapter \ref{sec:Ssops}.
Analysis similar to our study of the real spherical ensemble has been undertaken in \cite{caillol81, FJM1992}, where a Coulomb gas confined to a sphere is examined (although the authors of the latter paper use a system consisting of two oppositely charged species of particle). This viewpoint was exploited in \cite{FM11} to obtain two sum rules for our system here. Also, we remark that there is an analogy with the random polynomials
\begin{align}
\label{eqn:rand_polys}
p_{n}(z)=\sum_{p=0}^n {n \choose p}^{1/2}a_p z^p&,&a_p\sim N[0,1].
\end{align}
When stereographically projected onto the sphere there are of order $\sqrt{N}$ zeros on a great circle corresponding to the real axis \cite{EK95}, but for $N$ large the density is asymptotically uniform on the sphere \cite{Mc09}, which is what we find for the generalised eigenvalues.
\subsection{Element distribution}
As for the other ensembles already considered in this work, we must first establish the distribution of the matrix elements. For $N\times N$ matrices $\mathbf{A}$ and $\mathbf{B}$ taken from the real Ginibre ensemble, with distributions given by (\ref{eqn:GinOE_eldist}), we wish to write down the probability density function $\mathcal{P}(\mathbf{Y})$ of $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$. This requires changing variables in the joint density of $\mathbf{A}$ and $\mathbf{B}$,
\begin{align}
\label{eqn:YjpdfAB} (2\pi)^{-N^2}e^{-\mathrm{Tr}(\mathbf{A}\bA^T+\mathbf{B}\bB^T)/2}(d\mathbf{A})(d\mathbf{B}),
\end{align}
to those of $\mathbf{Y}$ and then integrating out the remaining $N^2$ independent variables. For this we will need the following pieces of theory.
\begin{lemma}[\cite{muirhead1982} Theorem 2.1.5]
\label{lem:alpha_tensor_beta}
For $\mathbf{X}=\alpha \mathbf{Y} \beta$, where $\alpha_{P\times P}$ and $\beta_{Q\times Q}$ are arbitrary real matrices and $\mathbf{Y}_{P\times Q}$ has $PQ$ independent entries (i.e.\ the wedge product $(d\mathbf{Y})$ has $PQ$ factors), we have
\begin{align}
\nonumber (d\mathbf{X})&=\left|\mathrm{det}(\alpha \otimes\beta^T)\right|(d\mathbf{Y})\\
\nonumber &=\left|\mathrm{det}(\alpha)^Q\mathrm{det}(\beta)^P\right|(d\mathbf{Y}).
\end{align}
\end{lemma}
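Lemma \ref{lem:alpha_tensor_beta} can be checked directly at small size: the Jacobian of the linear map $\mathbf{Y}\mapsto\alpha\mathbf{Y}\beta$ equals $|\mathrm{det}(\alpha)^Q\mathrm{det}(\beta)^P|$. A minimal Python sketch (illustrative only, with hand-rolled linear algebra to stay self-contained):

```python
import random

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)] for i in range(n)]

def det(M):
    # Laplace expansion along the first row (fine for small matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

random.seed(1)
P = Q = 2
alpha = [[random.gauss(0, 1) for _ in range(P)] for _ in range(P)]
beta = [[random.gauss(0, 1) for _ in range(Q)] for _ in range(Q)]

# Matrix of the linear map Y -> alpha * Y * beta acting on the PQ entries of Y:
# its columns are the images of the elementary matrices E_ij
cols = []
for i in range(P):
    for j in range(Q):
        E = [[1.0 if (a, b) == (i, j) else 0.0 for b in range(Q)] for a in range(P)]
        X = matmul(matmul(alpha, E), beta)
        cols.append([X[a][b] for a in range(P) for b in range(Q)])
J = [[cols[c][r] for c in range(P * Q)] for r in range(P * Q)]

jac = abs(det(J))
predicted = abs(det(alpha) ** Q * det(beta) ** P)
assert abs(jac - predicted) < 1e-9 * max(1.0, predicted)
```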
\begin{lemma}[\cite{muirhead1982} Theorem 2.1.14]
\label{lem:tilde_const_covarbs}
For an $N\times M$ matrix $\mathbf{X}$ (with $M\geq N$), if $\mathbf{M}=\mathbf{X}\bX^T$ then
\begin{align}
\label{eqn:tilde_const_covarbs} (d\mathbf{X})= \tilde{c} \; \mathrm{det}\mathbf{M}^{(M-N-1)/2}(d\mathbf{M}),
\end{align}
where $\tilde{c}$ is independent of $\mathbf{M}$.
\end{lemma}
The following is another corollary of the Selberg integral (\ref{def:Selb}).
\begin{corollary}[\cite{forrester?} Proposition 4.7.3]
\label{cor:varselb}
By making the replacements $ t_l \mapsto x_l/L, 2\lambda=\beta, \lambda_1=\beta a/2$ and $\lambda_2=\beta L/2$ in (\ref{def:Selb}) we have the limit
\begin{align}
\nonumber &\lim_{L\to\infty} L^{N+N a \beta/2+\beta N(N-1)/2}S_N(\beta a/2,\beta L/2, \beta/2)\\
\nonumber &=\int_0^{\infty}dx_1\cdot\cdot\cdot \int_0^{\infty}dx_N \prod_{l=1}^N x_l^{\beta a/2} e^{-\beta x_l/2}\prod_{1\leq j < l \leq N}|x_l -x_j|^{\beta}\\
\label{cor:varselb1} &=(\beta /2)^{-N(a\beta /2+1+(N-1)\beta /2)}\prod_{j=0}^{N-1}\frac{\Gamma \left(1+(j+1)\beta /2\right)\Gamma (a\beta/2+1+j\beta/2)}{\Gamma (1+\beta/2)}.
\end{align}
\end{corollary}
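As an illustrative spot check (Python, not part of the derivation), the case $N=2$, $\beta=1$, $a=2$ of (\ref{cor:varselb1}) can be computed by direct quadrature; both sides evaluate to $48$.

```python
import math

def rhs(N, beta, a):
    # RHS of the limiting Selberg evaluation for the Laguerre-type integral
    pref = (beta / 2) ** (-N * (a * beta / 2 + 1 + (N - 1) * beta / 2))
    prod = 1.0
    for j in range(N):
        prod *= (math.gamma(1 + (j + 1) * beta / 2)
                 * math.gamma(a * beta / 2 + 1 + j * beta / 2)
                 / math.gamma(1 + beta / 2))
    return pref * prod

# Direct 2D trapezoidal quadrature of the N = 2, beta = 1, a = 2 case:
# int int x1*x2*exp(-(x1+x2)/2)*|x1-x2| dx1 dx2
h, M = 0.1, 600
xs = [i * h for i in range(M + 1)]
wt = [0.5 * h if i in (0, M) else h for i in range(M + 1)]
f = [x * math.exp(-x / 2) for x in xs]
total = 0.0
for i in range(M + 1):
    wi_fi = wt[i] * f[i]
    for j in range(M + 1):
        total += wi_fi * wt[j] * f[j] * abs(xs[i] - xs[j])

assert abs(rhs(2, 1, 2) - 48.0) < 1e-9
assert abs(total - 48.0) < 0.5
```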
We may now establish the jpdf for the elements of the matrix $\mathbf{Y}$.
\begin{proposition}
\label{prop:elementjpdf}
Let $\mathbf{A},\mathbf{B}$ be $N\times N$ real Ginibre matrices, having elements distributed according to (\ref{def:Gin_eldist}), and let $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$. The probability density function of $\mathbf{Y}$ is
\begin{align}
\label{eqn:elementjpdf} \mathcal{P}(\mathbf{Y})&=\pi^{-N^2/2}\prod_{j=0}^{N-1} \frac{\Gamma((N+1)/2+j/2)}{\Gamma((j+1)/2)}\; \mathrm{det}(\mathbf{1}_N+\mathbf{Y}\bY^T)^{-N}.
\end{align}
\end{proposition}
\textit{Proof:} Writing $\mathbf{B}=\mathbf{A}\mathbf{Y}$ we let $\alpha=\mathbf{A}$ and $\beta=\mathbf{1}_{N}$ in Lemma \ref{lem:alpha_tensor_beta} to see that
\begin{align}
\label{eqn:dBi} (d\mathbf{B})&=|\mathrm{det} \mathbf{A}|^N(d\mathbf{Y}).
\end{align}
Using (\ref{eqn:dBi}) we change variables in (\ref{eqn:YjpdfAB}) to obtain
\begin{align}
\nonumber (2\pi)^{-N^2}e^{-\frac{1}{2}\mathrm{Tr}\left(\mathbf{A}\bA^T(\mathbf{1}_N+\mathbf{Y}\bY^T)\right)}|\mathrm{det}\mathbf{A}\bA^T|^{N/2}(d\mathbf{A})(d\mathbf{Y}).
\end{align}
Setting $\mathbf{C}:=\mathbf{A}\bA^T$, Lemma \ref{lem:tilde_const_covarbs} tells us that $(d\mathbf{A})=\tilde{c}\: (\det \mathbf{C})^{-1/2}(d\mathbf{C})$ for some $\tilde{c}$ to be determined. Integrating over $\mathbf{C}$ (noting that $\mathbf{C}$ is positive definite, denoted $\mathbf{C}>0$) we have
\begin{align}
\nonumber &\mathcal{P}(\mathbf{Y})(d\mathbf{Y})=(2\pi)^{-N^2}\tilde{c}\int_{\mathbf{C}>0}(\mathrm{det}\: \mathbf{C})^{(N-1)/2}e^{-\frac{1}{2} \mathrm{Tr}\left(\mathbf{C} (\mathbf{1}_N+\mathbf{Y}\bY^T)\right)} (d\mathbf{C})(d\mathbf{Y})\\
\nonumber &=(2\pi)^{-N^2}\tilde{c}\int_{\mathbf{C}>0}(\mathrm{det}\: \mathbf{C})^{(N-1)/2} e^{-\frac{1}{2}\mathrm{Tr}\left((\mathbf{1}_N+\mathbf{Y}\bY^T)^{1/2}\mathbf{C} (\mathbf{1}_N+\mathbf{Y}\bY^T)^{1/2}\right)} (d\mathbf{C})(d\mathbf{Y}).
\end{align}
Carrying out the change of variables $\mathbf{C}\mapsto (\mathbf{1}_N+\mathbf{Y}\bY^T)^{1/2}\mathbf{C}(\mathbf{1}_N+\mathbf{Y}\bY^T)^{1/2}$ we use Lemma \ref{lem:adma} to find
{\small
\begin{align}
\nonumber \mathcal{P}(\mathbf{Y})(d\mathbf{Y})=(2\pi)^{-N^2}\tilde{c}\; \mathrm{det}(\mathbf{1}_N+\mathbf{Y}\bY^T)^{-N}\int_{\mathbf{C}>0} (\mathrm{det}\: \mathbf{C})^{(N-1)/2} e^{-\frac{1}{2} \mathrm{Tr}\: \mathbf{C}}(d\mathbf{C})(d\mathbf{Y}).
\end{align}
}Integrating both sides of (\ref{eqn:tilde_const_covarbs}) against $e^{-(\mathrm{Tr}\: \mathbf{C})/2}$ we can calculate $\tilde{c}$ as
\begin{align}
\nonumber \int e^{-\mathrm{Tr}(\mathbf{A}\bA^T)/2}(d\mathbf{A})&=\tilde{c}\int_{\mathbf{C}>0}e^{-(\mathrm{Tr}\: \mathbf{C})/2}\frac{(d\mathbf{C})}{(\mathrm{det}\: \mathbf{C})^{1/2}},
\end{align}
where, on the LHS, we have $N^2$ standard Gaussian integrals and so
\begin{align}
\nonumber \tilde{c}&=\frac{(2\pi)^{N^2/2}}{\int_{\mathbf{C}>0} (\mathrm{det}\: \mathbf{C})^{-1/2} e^{-(\mathrm{Tr}\: \mathbf{C})/2}(d\mathbf{C})}.
\end{align}
Substituting for $\tilde{c}$ in the above formula for $\mathcal{P}(\mathbf{Y}) (d\mathbf{Y})$ gives
\begin{align}
\nonumber \mathcal{P}(\mathbf{Y})(d\mathbf{Y})&= (2\pi)^{-N^2/2}\mathrm{det}(\mathbf{1}_N+\mathbf{Y}\bY^T)^{-N}\frac{\int_{\mathbf{C}>0}\mathrm{det}(\mathbf{C})^{(N-1)/2}e^{-\mathrm{Tr}(\mathbf{C})/2}(d\mathbf{C})}{\int_{\mathbf{C}>0}\mathrm{det}(\mathbf{C})^{-1/2}e^{-\mathrm{Tr}(\mathbf{C})/2}(d\mathbf{C})}(d\mathbf{Y}).
\end{align}
Since $\mathbf{C}$ is symmetric we may use Proposition \ref{prop:GOE_J} to rewrite the ratio of integrals as
\begin{align}
\nonumber &\frac{\int_{\mathbf{C}>0}(\mathrm{det}\: \mathbf{C})^{(N-1)/2} e^{-(\mathrm{Tr}\: \mathbf{C})/2}(d\mathbf{C})} {\int_{\mathbf{C}>0} (\mathrm{det}\: \mathbf{C})^{-1/2} e^{-(\mathrm{Tr}\: \mathbf{C})/2}(d\mathbf{C})}\\
\nonumber & = \frac{\int_{(0,\infty)^N} \prod_{l=1}^Nx_l^{(N-1)/2}e^{-x_l/2} \prod_{j<k}^N|x_k-x_j|dx_1 \cdot\cdot\cdot dx_N}{\int_{(0,\infty)^N}\prod_{l=1}^Nx_l^{-1/2} e^{-x_l/2} \prod_{j<k}^N|x_k-x_j| dx_1\cdot\cdot\cdot dx_N},
\end{align}
which is seen to be a ratio of the Selberg-type integrals of Corollary \ref{cor:varselb}. The result follows on using the formula (\ref{cor:varselb1}).
\hfill $\Box$
From the discussion at the beginning of this chapter we know that a Cauchy variable has distribution (\ref{eqn:Caudist}), and we see that (\ref{eqn:elementjpdf}) is a matrix analogue of this distribution, as anticipated since $\mathbf{A}^{-1}\mathbf{B}$ is the matrix analogue of a Cauchy variable. Note that when $N=1$, (\ref{eqn:elementjpdf}) reduces exactly to (\ref{eqn:Caudist}).
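The $N=1$ case is easy to check empirically: the ratio of two independent standard Gaussians is standard Cauchy, whose quartiles sit at $\pm 1$ since its CDF is $1/2+\arctan(x)/\pi$. A minimal Monte Carlo sketch (illustrative only):

```python
import math, random

random.seed(7)
n = 200000
# Ratio of two independent standard Gaussians: the N = 1 case of A^{-1} B
samples = sorted(random.gauss(0, 1) / random.gauss(0, 1) for _ in range(n))

# Standard Cauchy CDF is 1/2 + arctan(x)/pi, so the quartiles are at -1 and +1
q1 = samples[n // 4]
q3 = samples[3 * n // 4]
assert abs(q1 + 1.0) < 0.05
assert abs(q3 - 1.0) < 0.05

# The normalisation of the proposition at N = 1 reduces to the Cauchy constant 1/pi
c = math.pi ** (-1 / 2) * math.gamma(1.0) / math.gamma(0.5)
assert abs(c - 1 / math.pi) < 1e-12
```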
\subsection{Eigenvalue distribution}
\label{sec:Sevaldist}
As we know from the study of the real Ginibre ensemble, a general $N \times N$ non-symmetric real matrix has $0 \leq k \leq N$ real eigenvalues, where $k$ has the same parity as $N$. From knowledge of (\ref{eqn:elementjpdf}), by a suitable change of variables, we can extract the eigenvalue distribution for each allowed $k$. In this task we are motivated by the work on the analogous complex spherical ensemble of Hough \textit{et al.} \cite{HKPV2009} (see also \cite{FK2009}). In particular, we again work with the real Schur decomposition (\ref{eqn:GinOE_decomp}), yielding $\mathbf{Y}=\mathbf{Q}\mathbf{R}_N\mathbf{Q}^T$, where $\mathbf{Q}$ is real orthogonal (each column is an eigenvector of $\mathbf{Y}$, with the restriction that the entry in the first row is positive) and $\mathbf{R}_N$ is the same as $\mathbf{R}$ in (\ref{def:triangular_mat}) (where we have introduced a subscript denoting the number of rows and columns for later convenience). For a unique decomposition we impose the ordering (\ref{9'}).
Since we are looking to change variables from the elements of $\mathbf{Y}$ to the eigenvalues of $\mathbf{Y}$ as implied by the real Schur decomposition, before proceeding we first need knowledge of the corresponding Jacobian. With $\tilde{\mathbf{R}}_N$ the strictly upper triangular part of $\mathbf{R}_N$, we know from Proposition \ref{prop:GinJ} that
\begin{align}
\nonumber (d\mathbf{Y})&=2^{(N-k)/2}\prod_{j<p}|\lambda(R_{pp})-\lambda(R_{jj})| \prod_{l=k+1}^{(N+k)/2} |b_l-c_l|\\
\nonumber &\times(d\tilde{\mathbf{R}}_N)(\mathbf{Q}^Td\mathbf{Q}) \prod_{s=1}^{k}d\lambda_s \prod_{l=k+1}^{(N+k)/2}dx_ldb_ldc_j,
\end{align}
using the notation of (\ref{def:GinOEpods}). Note that the dependence on $\mathbf{Q}$ can be immediately dispensed with by integrating over $(\mathbf{Q}^Td\mathbf{Q})$ using (\ref{eqn:RTdR_integ}).
So far the procedure is exactly the same as for the real Ginibre ensemble, but the integration over the elements of $\tilde{\mathbf{R}}_N$ is no longer straightforward. Indeed, following \cite{HKPV2009}, we integrate over the columns in $\tilde{\mathbf{R}}_N$ corresponding to each of the eigenvalues (or pair of complex conjugate eigenvalues) in turn, starting with the two columns on the far right (corresponding to the complex conjugate pair with largest real part). This process is then iterated from right to left, until the complex eigenvalue columns are exhausted. We then move on to iteratively integrate over the single columns above the real eigenvalues, which will then leave us with the eigenvalue jpdf. This procedure can be found in the work of Hua \cite{Hua1963}.
\subsubsection{Complex eigenvalue columns}
\label{sec:iitc}
In the region $j>k$ of $\mathbf{R}_N$ we can isolate the last two rows and columns to write
\begin{align}
\label{eqn:RNdecomp} \mathbf{R}_N=\left[\begin{array}{cc}
\mathbf{R}_{N-2} & u\\
\mathbf{0}^T & z_m
\end{array}\right],
\end{align}
where $u$ is of size $(N-2)\times 2$ and $\mathbf{0}^T$ is of size $2\times (N-2)$. So then
\begin{align}
\nonumber &\mathbf{1}_N +\mathbf{R}_N \mathbf{R}_N^T\\
\nonumber &=\left[\begin{array}{cc}
\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T+ uu^T & uz_m^T\\
z_mu^T & \mathbf{1}_2+z_m z_m^T
\end{array}\right]\\
\nonumber &=\left[\begin{array}{cc}
\mathbf{1}_{N-2}+ \mathbf{R}_{N-2}\mathbf{R}_{N-2}^T+ uu^T -uz_m^T (\mathbf{1}_2+z_mz_m^T)^{-1} z_mu^T & \mathbf{0}\\
z_mu^T & \mathbf{1}_2 +z_mz_m^T
\end{array}\right],
\end{align}
where we have used elementary row operations to obtain the second equality. Before proceeding, note the identity
\begin{align}
\nonumber &z_m^T(\mathbf{1}_2 +z_mz_m^T)^{-1}z_m\\
\nonumber &= z_m^T (\mathbf{1}_2 -z_mz_m^T+ z_mz_m^Tz_m z_m^T-z_mz_m^Tz_mz_m^Tz_mz_m^T+...)z_m\\
\nonumber &=(\mathbf{1}_2 -z_m^Tz_m+z_m^Tz_mz_m^T z_m-...) z_m^Tz_m\\
\nonumber &=(\mathbf{1}_2 +z_m^Tz_m)^{-1} z_m^T z_m\\
\label{eqn:zm1} &=\mathbf{1}_2-(\mathbf{1}_2 +z_m^Tz_m)^{-1}.
\end{align}
This enables us to expand the determinant as follows,
\begin{align}
\nonumber &\det \left(\mathbf{1}_N+\mathbf{R}_N \mathbf{R}_N^T \right)\\
\nonumber &= \det \left(\mathbf{1}_2+z_mz_m^T \right)\det \left(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T+ u(\mathbf{1}_2+z_m z_m^T)^{-1} u^T \right)\\
\nonumber &=\det \left(\mathbf{1}_2+z_m z_m^T\right)\det\left(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T \right)\\
\nonumber &\times \det \left(\mathbf{1}_{N-2}+(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T)^{-1} u(\mathbf{1}_2 + z_m z_m^T)^{-1} u^T \right)\\
\nonumber &=\det \left(\mathbf{1}_2+z_mz_m^T\right) \det\left(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T \right)\\
\label{eqn:det1RR} &\times\det \left(\mathbf{1}_2+(\mathbf{1}_2+ z_mz_m^T)^{-1/2}u^T (\mathbf{1}_{N-2}+ \mathbf{R}_{N-2}\mathbf{R}_{N-2}^T)^{-1}u (\mathbf{1}_2+ z_mz_m^T)^{-1/2}\right),
\end{align}
where we have used (\ref{eqn:zm1}) to obtain the first equality and (\ref{eqn:1+AB}) to obtain the third.
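Although the geometric series used to derive (\ref{eqn:zm1}) converges only for small $z_m$, the resulting identity $z_m^T(\mathbf{1}_2+z_mz_m^T)^{-1}z_m=\mathbf{1}_2-(\mathbf{1}_2+z_m^Tz_m)^{-1}$ holds for all real $z_m$ (it follows from $z_m^T(\mathbf{1}_2+z_mz_m^T)=(\mathbf{1}_2+z_m^Tz_m)z_m^T$). A self-contained Python sketch checking it for a generic $2\times 2$ matrix (illustrative only):

```python
import random

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def T(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

def inv(A):
    # explicit inverse of a 2x2 matrix
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d], [-A[1][0] / d, A[0][0] / d]]

I2 = [[1.0, 0.0], [0.0, 1.0]]
random.seed(3)
z = [[random.gauss(0, 1) for _ in range(2)] for _ in range(2)]

# LHS: z^T (1 + z z^T)^{-1} z ;  RHS: 1 - (1 + z^T z)^{-1}
lhs = mul(T(z), mul(inv(add(I2, mul(z, T(z)))), z))
rhs = add(I2, [[-x for x in row] for row in inv(add(I2, mul(T(z), z)))])
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```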
We are now in a position to integrate over the elements of the matrix $u$
{\small
\begin{align}
\nonumber &\int\frac{(du)}{\det(\mathbf{1}_N+\mathbf{R}_N \mathbf{R}_N^T)^N}=\frac{1}{\det(\mathbf{1}_2+z_m z_m^T)^N \det(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T)^N}\\
\nonumber &\times \int\frac{(du)}{ \det(\mathbf{1}_{2}+ (\mathbf{1}_2+z_m z_m^T)^{-1/2} u^T(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T)^{-1} u(\mathbf{1}_2+z_m z_m^T)^{-1/2})^N},
\end{align}
}where the integral for each independent real component of $u$ is over the real line. Changing variables $ v=(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T)^{-1/2} \: u\: (\mathbf{1}_2+z_m z_m^T)^{-1/2}$ we use Lemma \ref{lem:alpha_tensor_beta} to find
\begin{align}
\nonumber \int\frac{(du)}{\det(\mathbf{1}_N+\mathbf{R}_N \mathbf{R}_N^T)^N}&=\frac{1}{\det(\mathbf{1}_2+z_mz_m^T)^{N/2+1}\det(\mathbf{1}_{N-2}+\mathbf{R}_{N-2} \mathbf{R}_{N-2}^T)^{N-1}}\\
\nonumber &\times\int\frac{(dv)}{\det(\mathbf{1}_{2}+v^Tv)^N}.
\end{align}
Iterating over all columns corresponding to complex eigenvalues we have
\begin{align}
\nonumber &\int\frac{(du_{N-2})\cdot\cdot\cdot (du_{k+1})}{\det(\mathbf{1}_N+\mathbf{R}_N \mathbf{R}_N^T)^N}= \frac{1}{\det(\mathbf{1}_k+\mathbf{R}_k \mathbf{R}_k^T)^{(N+k)/2}}\\
\label{eqn:with_subs} &\times \prod_{s=k+1}^{(N+k)/2} \frac{1}{\det(\mathbf{1}_2+z_sz_s^T)^{N/2+1}}\prod_{s=0}^{(N-k)/2-1}\int\frac{(dv_{N-2-2s})}{\det(\mathbf{1}_2+v_{N-2-2s}^Tv_{N-2-2s})^{N-s}},
\end{align}
where the subscripts $*$ on the matrices $v_*,du_*,dv_*$ denote their number of
rows.
To evaluate each of the $(N-k)/2$ integrals we use a method similar to that used in Proposition \ref{prop:elementjpdf}. Firstly, for each $v_{N-2-2s}$, we let $v_{N-2-2s}^Tv_{N-2-2s}=\mathbf{C}$ and apply Lemma \ref{lem:tilde_const_covarbs} to get
\begin{equation}\label{11'}
(dv_{N-2-2s})=\tilde{c}\: (\det \mathbf{C})^{(N-2s-5)/2}(d\mathbf{C}),
\end{equation}
and
\begin{eqnarray}
\label{eqn:Sctilde} \tilde{c}\int (\det \mathbf{C})^{(N-2s-5)/2}e^{-\mathrm{Tr} \: \mathbf{C}}(d\mathbf{C})=\int e^{-\mathrm{Tr}(v^Tv)}(dv)=\pi^{N-2s-2}.
\end{eqnarray}
So, with $\kappa:=(N-2s-5)/2$,
\begin{align}
\nonumber &\int\frac{(dv_{N-2-2s})}{\det(\mathbf{1}_2+v_{N-2-2s}^Tv_{N-2-2s})^{N-s}}=\pi^{N-2s-2}\frac{\int(\det \mathbf{C})^{\kappa}\det(\mathbf{1}_2+ \mathbf{C})^{s-N} (d\mathbf{C})}{\int(\det \mathbf{C})^{\kappa}e^{-\mathrm{Tr} \: \mathbf{C}}(d\mathbf{C})}\\
\nonumber & = \pi^{N-2s-2}\int_0^{\infty}\int_0^{\infty}\frac{x_1^{\kappa}}{(1+x_1)^{N-s}}\frac{x_2^{\kappa}}{(1+x_2)^{N-s}}|x_1-x_2|dx_1dx_2\\
\nonumber & \quad \times\left( \int_0^{\infty}\int_0^{\infty}x_1^{\kappa}x_2^{\kappa}e^{-x_1}e^{-x_2} |x_1-x_2|dx_1dx_2 \right)^{-1}\\
\nonumber & =\pi^{N-2s-2}\int_0^1\int_0^1y_1^{\kappa} y_2^{\kappa}(1-y_1)^{(N-1)/2}(1-y_2)^{(N-1)/2} |y_1-y_2|dy_1dy_2\\
\label{eqn:compx_selberg} & \quad \times\left( \int_0^{\infty}\int_0^{\infty}x_1^{\kappa}x_2^{\kappa}e^{-x_1}e^{-x_2} |x_1-x_2|dx_1dx_2 \right)^{-1},
\end{align}
where use was made of Proposition \ref{prop:GOE_J} for the second equality, and the change of variables $y=x/(1+x)$ for the third. We now have a ratio of Selberg-type integrals which can be evaluated using (\ref{cor:varselb1}) as
\begin{align}
\nonumber &\int\det(\mathbf{1}_2+v_{N-2-2s}^T v_{N-2-2s})^{-(N-s)}(dv)\\
\label{12'} &=\pi^{N-2s-2}\frac{\Gamma((N+1)/2)}{\Gamma(N-s-1/2)} \frac{\Gamma(N/2+1)} {\Gamma(N-s)}.
\end{align}
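As an illustrative spot check (Python, not part of the derivation), the case $N=5$, $s=0$ (so $\kappa=0$) of the ratio of Selberg-type integrals in (\ref{eqn:compx_selberg}) can be evaluated by direct quadrature; the result is $\pi^3/42$, in agreement with the ratio of Gamma functions $\Gamma(3)\Gamma(7/2)/\big(\Gamma(9/2)\Gamma(5)\big)=1/42$.

```python
import math

N, s = 5, 0
kappa = (N - 2 * s - 5) / 2  # = 0 here

def trap2(f, a, b, M):
    # 2D trapezoidal rule on [a,b]^2
    h = (b - a) / M
    xs = [a + i * h for i in range(M + 1)]
    w = [0.5 * h if i in (0, M) else h for i in range(M + 1)]
    return sum(w[i] * w[j] * f(xs[i], xs[j])
               for i in range(M + 1) for j in range(M + 1))

num = trap2(lambda y1, y2: (y1 * y2) ** kappa
            * ((1 - y1) * (1 - y2)) ** ((N - 1) / 2) * abs(y1 - y2), 0.0, 1.0, 400)
den = trap2(lambda x1, x2: (x1 * x2) ** kappa
            * math.exp(-x1 - x2) * abs(x1 - x2), 0.0, 40.0, 800)
value = math.pi ** (N - 2 * s - 2) * num / den

predicted = (math.pi ** (N - 2 * s - 2)
             * math.gamma((N + 1) / 2) / math.gamma(N - s - 1 / 2)
             * math.gamma(N / 2 + 1) / math.gamma(N - s))
assert abs(value - predicted) < 1e-2 * predicted
```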
The case $N$ odd, $k=1$, corresponding to $s=(N-1)/2-1$ is special since then $v_{N-2-2s}$ consists of 1 row and 2 columns, and thus is the only case in which the number of rows is less than the number of columns and so Lemma \ref{lem:tilde_const_covarbs} does not apply. We must then write
$$
\det ({\bf 1}_2 + v^T_{N-2-2s} v_{N-2-2s})^{-p} = (1 + v_{N-2-2s} v_{N-2-2s}^T)^{-p},
$$
using (\ref{eqn:1+AB}). However, it turns out that the change this implies to
(\ref{eqn:compx_selberg}) does not affect the evaluation (\ref{12'}). So in all cases, after having integrated over the columns corresponding to complex eigenvalues we are left with
\begin{align}
\nonumber &\int\frac{(du_{N-2})\cdot\cdot\cdot(du_{k+1})}{\det(\mathbf{1}_N+\mathbf{R}_N \mathbf{R}_N^T)^N} =\prod_{s=k+1}^{(N+k)/2}\frac{1}{\det(\mathbf{1}_2+z_sz_s^T)^{N/2+1}}\\
\nonumber &\qquad \times\prod_{s=0}^{(N-k)/2-1}\pi^{N-2s-2}\frac{\Gamma((N+1)/2)}{\Gamma(N-s-1/2)}\frac{\Gamma(N/2+1)}{\Gamma(N-s)} \hspace{3pt}\frac{1}{\det(\mathbf{1}_k+\mathbf{R}_k \mathbf{R}_k^T)^{(N+k)/2}}.
\end{align}
It remains to compute the integrals over the columns corresponding to the real eigenvalues.
\subsubsection{Real eigenvalue columns}
\label{sec:iitr}
We see that we are left with a function of $\mathbf{R}_k$, which is the upper-left sub-block of $\mathbf{R}_N$. Similar to the process in the previous section, we isolate the last row and column
\begin{eqnarray}
\nonumber \mathbf{R}_k=\left[\begin{array}{cc}
\mathbf{R}_{k-1} & u_{k-1}\\
\mathbf{0}^T & \lambda_k
\end{array}\right],
\end{eqnarray}
where now $u_{k-1}$ is of size $(k-1)\times 1$ and $\mathbf{0}^T$ is of size $1\times (k-1)$. Following the same procedure that led to (\ref{eqn:det1RR}) for the columns corresponding to the complex eigenvalues, we find
\begin{align}
\nonumber \det(\mathbf{1}_k+\mathbf{R}_k \mathbf{R}_k^T)&= (1+ \lambda_k^2)\det(\mathbf{1}_{k-1}+\mathbf{R}_{k-1} \mathbf{R}_{k-1}^T)\\
\nonumber &\times \left(1+(1+ \lambda_k^2)^{-1} u_{k-1}^T (\mathbf{1}_{k-1}+\mathbf{R}_{k-1} \mathbf{R}_{k-1}^T)^{-1} u_{k-1} \right).
\end{align}
Setting $v_{k-1}=(\mathbf{1}_{k-1}+ \mathbf{R}_{k-1} \mathbf{R}_{k-1}^T)^{-1/2}u_{k-1} (1+ \lambda_k^2)^{-1/2}$ and again making use of Lemma \ref{lem:alpha_tensor_beta} we have
\begin{align}
\nonumber &\int\frac{(du_{k-1})}{\det(\mathbf{1}_k+\mathbf{R}_k \mathbf{R}_k^T)^{(N+k)/2}}\\
\nonumber &=\frac{1}{(1+\lambda_k^2)^{(N+1)/2}\det \left(\mathbf{1}_{k-1}+\mathbf{R}_{k-1} \mathbf{R}_{k-1}^T \right)^{(N+k-1)/2}} \int\frac{(dv_{k-1})}{(1+ v_{k-1}^T v_{k-1})^{(N+k)/2}}.
\end{align}
Iterating over the remaining columns of $\mathbf{R}_k$ gives
\begin{align}
\nonumber &\int\frac{(du_{k-1})\cdot\cdot\cdot(du_1)}{\det(\mathbf{1}+\mathbf{R}_k \mathbf{R}_k^T)^{(N+k)/2}}\\
\nonumber &=\prod_{s=1}^k\frac{1}{(1+\lambda_s^2)^{(N+1)/2}} \prod_{s=1}^{k-1} \int\frac{(dv_{k-s})}{(1+ v_{k-s}^T v_{k-s})^{(N+k)/2-(s-1)/2}}
\end{align}
(cf.~(\ref{eqn:with_subs})). To evaluate the integrals, we use the same method as for the integrals in (\ref{eqn:with_subs}) --- involving Lemma \ref{lem:tilde_const_covarbs} and a now one-dimensional case of the Selberg integral, which is the beta integral. This gives
\begin{align}
\nonumber \int\frac{(dv_{k-s})}{(1+v_{k-s}^Tv_{k-s})^{(N+k)/2-(s-1)/2}}=\pi^{(k-s)/2}\frac{\Gamma((N+1)/2)}{\Gamma((N+k-s+1)/2)},
\end{align}
and so
\begin{align}
\nonumber &\int\frac{(du_{k-1})\cdot\cdot\cdot(du_1)}{\det(\mathbf{1}_k+\mathbf{R}_k \mathbf{R}_k^T)^{(N+k)/2}}\\
\nonumber &=\prod_{s=1}^k\frac{1}{(1+\lambda_s^2)^{(N+1)/2}}\prod_{s=1}^{k-1}\pi^{(k-s)/2}\frac{\Gamma((N+1)/2)}{\Gamma((N+k-s+1)/2)}.
\end{align}
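The one-dimensional case used here is an instance of the general fact $\int_{\mathbb{R}^m}(1+|v|^2)^{-p}\,dv=\pi^{m/2}\Gamma(p-m/2)/\Gamma(p)$, which with $m=k-s$ and $p=(N+k-s+1)/2$ reproduces the Gamma-function ratio above. A short Python sketch verifying this numerically (illustrative only; the helper name is ours):

```python
import math

def cauchy_cell_integral(m, p):
    # pi^{m/2} Gamma(p - m/2) / Gamma(p): value of int_{R^m} (1+|v|^2)^{-p} dv
    return math.pi ** (m / 2) * math.gamma(p - m / 2) / math.gamma(p)

# One-dimensional column integral: N = 5, k = 3, s = 2, so m = k - s = 1
N, k, s = 5, 3, 2
p = (N + k - s + 1) / 2  # = (N+k)/2 - (s-1)/2
h, M = 0.01, 10000
quad = sum(h / (1 + (i * h - 50.0) ** 2) ** p for i in range(M + 1))
assert abs(quad - cauchy_cell_integral(1, p)) < 1e-3

# Two-dimensional case in closed form: int_{R^2} (1+|v|^2)^{-3} dv = pi/2
assert abs(cauchy_cell_integral(2, 3.0) - math.pi / 2) < 1e-12
```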
\subsubsection{Eigenvalue jpdf and fractional linear transformation}
According to the working in the preceding sections, (\ref{eqn:elementjpdf}) has been reduced to the following distribution of $\{ \lambda_i,x_i,b_i,c_i\}$ (with $x_i,b_i$ and $c_i$ from (\ref{eqn:Schurz}))
\begin{align}
\nonumber &\pi^{-(N-k)/4}\Gamma((N+1)/2)^{N/2}\Gamma(N/2+1)^{N/2} \left( \frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)}\right)^{k/2} \prod_{j=1}^{N-1} \frac{1} {\Gamma(j/2)^2}\\
&\nonumber \quad \times\prod_{s=k+1}^{(N+k)/2} \frac{1}{\det(\mathbf{1}+z_sz_s^T)^{N/2+1}} \prod_{s=1}^{k} \frac{1}{(1+ \lambda^2_s)^{(N+1)/2}} 2^{(N-k)/2}\prod_{l=k+1}^{(N+k)/2} |b_l-c_l|\\
\label{eqn:reduced_dist}& \quad \times \quad \prod_{j<p}|\lambda(R_{pp})-\lambda(R_{jj})|,
\end{align}
where use has been made of the simplification
{\small
\begin{align}
\nonumber &\prod_{s=1}^{k-1}\pi^{(k-s)/2}\frac{ \Gamma((N+1)/2)}{ \Gamma((N+k-s+1)/2)}\prod_{s=0}^{(N-k)/2-1}\pi^{N-2s-2}\frac{ \Gamma((N+1)/2)}{ \Gamma(N-s-1/2)}\frac{\Gamma(N/2+1)}{ \Gamma(N-s)}\\
\nonumber &= \pi^{(k+N^2-2N)/4} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2} \left(\frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)} \right)^{k/2}\\
\nonumber & \times \prod_{s=0}^{N-1}\frac{1} {\Gamma((N+1+s)/2)}.
\end{align}
}
With $\epsilon_s=1+x_s^2+y_s^2$ and $\delta_s=b_s-c_s$ we see that $\det (\mathbf{1}_2+z_sz_s^T) = \epsilon_s^2+\delta_s^2$. We can use (\ref{eqn:GinOE_covs}) to change variables from $b_l,c_l$ to $y_l,\delta_l$
recalling that here $-\infty < \delta < \infty$. Now we integrate over $\delta$
\begin{align}
\nonumber &\int_{\delta=-\infty}^{\delta=\infty} \frac{|b_s-c_s|}{\det(\mathbf{1}+z_sz_s^T)^{N/2+1}}\: dx_s db_s dc_s\\
\nonumber &=4y_s\int_{\delta=0}^{\delta=\infty} \frac{\delta \; d\delta}{(\epsilon_s^2+\delta^2)^{N/2+1} \sqrt{4y_s^2+\delta^2}}\: dx_sdy_s\\
\label{eqn:delta_integ} &=4y_s\int_{t=2y_s}^{t=\infty}\frac{dt}{(\epsilon_s^2-4y_s^2+t^2)^{N/2+1}}\: dx_s dy_s.
\end{align}
Substituting (\ref{eqn:delta_integ}) in (\ref{eqn:reduced_dist}) as appropriate
gives the reduced jpdf, but (\ref{eqn:delta_integ}) as written appears intractable for
further analysis. On the other hand the discussion at the beginning of this chapter suggests that when projected on to the sphere the eigenvalue density is unchanged by rotation in the $X$--$Z$ plane, where $X,Y,Z$ are the co-ordinates after stereographic projection. This suggests that simplifications can be achieved by an appropriate mapping of the half-plane that contains the rotational symmetry of the half sphere.
We therefore introduce the fractional linear transformation (\ref{7'}) mapping the upper half-plane to the interior of the unit disk, with (\ref{14.2}) (recalling the definition of $e$) mapping the real line to a great circle through the poles. In particular, the complicated dependence on $x_s$ and $y_s$ in (\ref{eqn:delta_integ}) is now unravelled.
\begin{lemma}
Let $\epsilon_s = 1 + x_s^2 + y_s^2$. With the change of co-ordinates (\ref{7'}) we have
\begin{align}
\nonumber &4y_s\int_{t=2y_s}^{t=\infty}\frac{dt}{(\epsilon_s^2- 4y_s^2+ t^2)^{N/2+1}} \: dx_s dy_s\\
\label{14.3} & \qquad = \frac{(1-|w_s|^2)\: |1+w_s|^{2N-4}} {2^{2N-2} \: |w_s|^{N+1}} \int_{\frac{|w_s|^{-1} -|w_s|}{2}}^{\infty} \frac{dt}{\left(1+t^2\right)^{N/2+1}} \: du_s dv_s.
\end{align}
\end{lemma}
\textit{Proof}: Noting that
\begin{align}
\nonumber y_s &= {1 - |w_s|^2 \over |1 + w_s|^2}, \\
\nonumber \epsilon_s^2 - 4y_s^2 &= {16 |w_s|^2 \over |1 + w_s|^4}, \\
\nonumber dx_s dy_s &= \Big | {dz_s \over dw_s} \Big |^2 du_s dv_s =
{4 \over |1 + w_s|^4} du_s dv_s,
\end{align}
we reduce the given expression to
\begin{align}
\label{eqn:14.3a} \frac{16} {|1 + w_s|^4} \frac{1 - |w_s|^2} {|1 + w_s|^2}
\int_{\frac{2(1 - |w_s|^2)} {|1+w_s|^2}}^{\infty}
{dt \over \left( \frac{16 |w_s|^2} {|1 + w_s|^4} + t^2 \right)^{N/2 + 1}} \, du_s dv_s.
\end{align}
The RHS of (\ref{14.3}) results from (\ref{eqn:14.3a}) after the change of variables $t \mapsto 4|w_s| t/|1 + w_s|^2$.
\hfill $\square$
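The identities used in the proof can be verified numerically for a generic point of the disk; a minimal Python sketch (illustrative only):

```python
# A generic point in the open unit disk
w = 0.41 - 0.27j
z = (w - 1) / (w + 1) / 1j     # the map (7')
x, y = z.real, z.imag
eps = 1 + x * x + y * y        # epsilon = 1 + x^2 + y^2

q = abs(1 + w) ** 2            # |1 + w|^2
assert abs(y - (1 - abs(w) ** 2) / q) < 1e-12
assert abs(eps ** 2 - 4 * y ** 2 - 16 * abs(w) ** 2 / q ** 2) < 1e-12
# area element: |dz/dw|^2 = 4 / |1 + w|^4
dz_dw = -2j / (w + 1) ** 2
assert abs(abs(dz_dw) ** 2 - 4 / q ** 2) < 1e-12
# lower terminal: t = 2y maps to (|w|^{-1} - |w|)/2 under t -> 4|w| t / |1+w|^2
assert abs(2 * y * q / (4 * abs(w)) - (1 / abs(w) - abs(w)) / 2) < 1e-12
```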
For the product of differences in (\ref{eqn:reduced_dist}), the substitutions (\ref{7'}) and (\ref{14.2}) give
{\small
\begin{align}
\nonumber \prod_{j<p}^k|\lambda_p-\lambda_j|&=(-2i)^{k(k-1)/2}\prod_{s=1}^k\frac{(\bar{e}_s)^{(k-1)/2}}{|e_s+1|^{k-1}}\prod_{j<p}^k(e_p-e_j)
\end{align}
for the real-real factors,
\begin{align}
\nonumber &\prod_{j=1}^k\prod_{s=k+1}^{(N+k)/2}|\lambda_j-z_s||\lambda_j-\bar{z}_s|=(-1)^{k(N-k)/2}2^{(N-k)k}\prod_{j=1}^k(\bar{e}_j)^{(N-k)/2}\left| \frac{1}{e_j+1}\right|^{(N-k)}\\
\nonumber &\times \prod_{s=k+1}^{(N+k)/2}(\bar{w}_s)^k\left| \frac{1}{w_s+1}\right|^{2k}\prod_{j=1}^k\prod_{s=k+1}^{(N+k)/2}(w_s-e_j)\left(\frac{1}{\bar{w}_s}-e_j\right)
\end{align}
for the real-complex factors, and
\begin{align}
\nonumber &\prod_{k+1\leq a < b \leq (N+k)/2}\hspace{-26pt} |z_a-z_b||\bar{z}_a-\bar{z}_b| \hspace{-6pt} \mathop{\prod_{c,d=k+1}^{(N+k)/2}}_{c \neq d} \hspace{-6pt} |z_c-\bar{z}_d|=(-2)^{2\left(\frac{N-k}{2}\frac{N-k-2}{2}\right)} \prod_{j=k+1}^{(N+k)/2} \hspace{-6pt} (1-|w_j|^2)^{-1}\\
\nonumber &\times \hspace{-6pt} \prod_{s=k+1}^{(N+k)/2}(\bar{w}_s)^{N-k-1}\left| \frac{1}{w_s+1}\right|^{2(N-k-2)}\prod_{a < b}(w_b-w_a)\left( \frac{1}{\bar{w}_b}-\frac{1}{\bar{w}_a} \right)\prod_{c,d=k+1}^{(N+k)/2}\left( \frac{1}{\bar{w}_d}-w_c\right)
\end{align}
}for complex-complex. An essential feature is that, apart from the creation of some one-body terms, the product of difference structure is conserved by the substitutions. Combining all this, and using the identity
\begin{align}
\nonumber \left| \frac{1}{e_j+1}\right|^{N-1}&= \left(\frac{1} {2\cos (\theta_j/2)}\right)^{N-1},
\end{align}
we have the explicit form of the eigenvalue jpdf in the variables (\ref{7'}) and (\ref{14.2}).
\begin{proposition}
\label{thm:ainvb_jpdf}
Let $\mathbf{A},\mathbf{B}$ be $N\times N$ real Ginibre matrices having elements distributed according to (\ref{def:Gin_eldist}), and let $\mathbf{Y}=\mathbf{A}^{-1} \mathbf{B}$. In the variables (\ref{7'}) and (\ref{14.2}) the eigenvalue jpdf of $\mathbf{Y}$, conditioned to have $k$ real eigenvalues ($k$ being of the same parity as $N$), is
\begin{align}
\label{eqn:q(y)} \mathcal{Q}(\mathbf{Y})= A_{k,N} \prod_{j=1}^k \tau(e_j) \prod_{s=k+1}^{(N+k)/2} \frac{1} {|w_s|^2}\: \tau(w_s)\tau \left(\frac{1}{\bar{w}_s}\right) \Delta\left(\mathbf{e}, \mathbf{w}, \mathbf{\frac{1}{\bar{w}}} \right),
\end{align}
with $\mathbf{e}=\{e_1,...,e_k \}$, $\mathbf{w}=\{w_1,\bar{w}_1,...,w_{(N-k)/2},\bar{w}_{(N-k)/2} \}$ and
\begin{align}
\nonumber A_{k,N}&=\frac{(-1)^{(N-k)/2((N-k)/2-1)/2+(N-k)k/2-k(k-1)/4}}{2^{(N(N-1)+k)/2}}\prod_{j=1}^N\frac{1}{\Gamma(j/2)^2}\\
&\nonumber \times\Gamma((N+1)/2)^{N/2}\Gamma(N/2+1)^{N/2},\\
\nonumber \tau(x)&=\left( \frac{1}{x}\right)^{(N-1)/2}\left[\frac{1}{\sqrt{\pi}}\int_{\frac{|x|^{-1}-|x|}{2}}^{\infty}\frac{dt}{\left(1+t^2\right)^{N/2+1}}\right]^{1/2},
\end{align}
\begin{align}
\nonumber \Delta\left(\mathbf{e},\mathbf{w},\mathbf{\frac{1}{\bar{w}}}\right)&=\prod_{j<p}(e_p-e_j)\prod_{j=1}^k\prod_{s=k+1}^{(N+k)/2}(w_s-e_j) \left(\frac{1} {\bar{w}_s}-e_j \right)\\
\nonumber &\quad \times \prod_{a<b}(w_b-w_a)\left(\frac{1}{\bar{w}_b}-\frac{1}{\bar{w}_a}\right) \prod_{c,d=k+1}^{(N+k)/2}\left( \frac{1}{\bar{w}_d}-w_c \right).
\end{align}
\end{proposition}
Note that when $x\in\mathbb{R}$ we have
\begin{align}
\label{eqn:taureal} \tau(x)=\left( \frac{1}{x}\right)^{(N-1)/2} \left( \frac{\Gamma((N+1)/2)} {2\: \Gamma(N/2+1)} \right)^{1/2}.
\end{align}
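This evaluation rests on the beta integral $\int_0^\infty dt/(1+t^2)^{N/2+1}=\tfrac{\sqrt{\pi}}{2}\,\Gamma((N+1)/2)/\Gamma(N/2+1)$, applied to the bracket in the definition of $\tau$ with lower terminal $0$ (since $|x|=1$). A short numerical check (Python sketch, illustrative only; the helper name is ours):

```python
import math

def tau_sq_bracket(N):
    # closed form of pi^{-1/2} * int_0^inf dt / (1+t^2)^{N/2+1},
    # via the beta integral: (1/2) * Gamma((N+1)/2) / Gamma(N/2+1)
    return 0.5 * math.gamma((N + 1) / 2) / math.gamma(N / 2 + 1)

for N in (2, 3, 4, 7):
    h, M = 1e-3, 100000  # quadrature on [0, 100]; the tail is negligible
    quad = h * (0.5 + sum(1 / (1 + (i * h) ** 2) ** (N / 2 + 1)
                          for i in range(1, M + 1)))
    assert abs(quad / math.sqrt(math.pi) - tau_sq_bracket(N)) < 1e-3
```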
\subsection{Generalised partition function}
Conceptually, the procedure in this section is identical to that of Chapter \ref{sec:Ggpf} for both even and odd $N$. To find a Pfaffian form of the generalised partition function (or ensemble average) we integrate over the eigenvalue jpdf (\ref{eqn:q(y)}), introducing some indeterminates $u,v$. Writing the Vandermonde product as an $N\times N$ determinant we apply the method of integration over alternate variables, ultimately resulting in an $N/2 \times N/2$ Pfaffian. However, the key technical difference from Chapter \ref{sec:Ggpf} is a re-ordering of the matrix columns during the alternate variable integration, which is slightly more complicated in the odd case (\ref{eqn:poly_order_odd}) than the even (\ref{eqn:poly_ordering}). It will turn out that these re-orderings result in separate even and odd skew-orthogonal polynomials (see Propositions \ref{prop:skew_polys_even} and \ref{prop:odd_polys}), which have the benefit of skew-orthogonalising the generalised partition function Pfaffian for general $\zeta$, unlike in the real Ginibre ensemble.
With $\mathcal{Q}(\mathbf{Y})$ from (\ref{eqn:q(y)}) we use (\ref{def:multi_gen_part_fn}) (with $m=2$) to define the generalised partition function of the spherical ensemble
\begin{align}
\nonumber Z_{k,(N-k)/2}[u,v]_S&=\frac{1}{((N-k)/2)!}\int_0^{\theta_2}d\theta_1\int_{\theta_1}^{\theta_3}d\theta_2 \cdot\cdot\cdot\int_{\theta_{k-1}}^{2\pi}d\theta_k\prod_{l=1}^ku(e_l)\\
\label{eqn:ZkN-k} &\times\int_{\Omega}dw_{k+1} \cdot\cdot\cdot \int_{\Omega} dw_{(N+k)/2} \prod_{s=k+1}^{(N+k)/2}v(w_s) \: \mathcal{Q}(\mathbf{Y})
\end{align}
for fixed $k$, where $\Omega$ is the unit disk, and the factor of $1/((N-k)/2)!$ comes from relaxing the ordering constraint on the complex eigenvalues. Note the ordering of the angles $\theta_j$ corresponding to the real eigenvalues is in accord with the ordering (\ref{9'}) of $\{\lambda_i\}$. As for the GOE and the real Ginibre ensemble it is at this point that parity considerations become important.
\subsubsection{$N$ even}
\begin{proposition}
\label{prop:gen_part_fn}
Let $\{ p_{j}(x)\}$ be a set of monic polynomials of degree $j$, and define
\begin{align}
\label{eqn:q=p} q_{2j}(x)=p_{2j}(x)&,& q_{2j+1}(x)=p_{N-1-2j}(x).
\end{align}
The generalised partition function for the real spherical ensemble, with $N$ even, is
\begin{align}
\nonumber Z_{k,(N-k)/2}[u,v]_S&=\frac{(-1)^{(N/2)(N/2-1)/2}}{2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\label{eqn:genpartfn} &\times\prod_{s=1}^{N}\frac{1} {\Gamma(s/2)^2}\; [\zeta^{k}] \mathrm{Pf} \left[\zeta^2 \alpha_{j,l}+ \beta_{j,l} \right],
\end{align}
with $[\zeta^k ]$ denoting the coefficient of $\zeta^k$, and where
{\small
\begin{align}
\nonumber \alpha_{j,k} &=-\frac{i}{2}\int_0^{2\pi}d\theta_1\: u(e_1)\tau(e_1)\int_0^{2\pi}d\theta_2\: u(e_2)\tau(e_2)q_{j-1}(e_1)q_{k-1}(e_2)\: \mathrm{sgn}(\theta_2-\theta_1),\\
\label{15'} \beta_{j,k} &=\int_{\Omega} dw\hspace{3pt}v(w)\tau(w) \tau\left(\frac{1}{\bar{w}}\right)\frac{1}{|w|^2}\left(q_{j-1}(w)q_{k-1}\left(\frac{1}{\bar{w}}\right) - q_{k-1}(w)q_{j-1}\left(\frac{1}{\bar{w}}\right) \right).
\end{align}
}
\end{proposition}
\textit{Proof:} With $p_l(x)$ an arbitrary monic polynomial of degree $l$, using (\ref{eqn:vandermonde_polys}) the Vandermonde product in $\mathcal{Q}(\mathbf{Y})$ can be written
{\small
\begin{align}
\nonumber &\Delta\left(\mathbf{e},\mathbf{w},\mathbf{\frac{1}{\bar{w}}}\right) =\mathrm{det}\left[\begin{array}{c}
[p_{l-1}(e_j)]_{j=1,...,k}\vspace{3pt}\\
\left[p_{l-1}(w_s)\right]_{s=k+1,...,(N+k)/2}\\
\left[p_{l-1}(1/\bar{w}_s)\right]_{s=k+1,...,(N+k)/2}
\end{array}\right]_{l=1,...,N}\\
\nonumber &=(-1)^{(N-k)/2((N-k)/2-1)/2}\mathrm{det}\left[\begin{array}{c}
[p_{l-1}(e_j)]_{j=1,...,k}\vspace{3pt}\\
\left[\begin{array}{c}
p_{l-1}(w_s)\\
p_{l-1}(1/\bar{w}_{s})
\end{array}
\right]_{s=k+1,...,(N+k)/2}
\end{array}\right]_{l=1,...,N},
\end{align}
}where, for the second equality, we have interlaced the rows corresponding to complex conjugate pairs; this will be convenient later.
Next, as in Proposition \ref{prop:GinOE_gpf_even}, we apply the method of integration over alternate variables to the $e_j$, which correspond to the real eigenvalues,
{\small
\begin{align}
\nonumber &Z_{k,(N-k)/2}[u,v]_S= (-1)^{(N-k)/2((N-k)/2-1)/2} \frac{A_{k,N}}{(k/2)!((N-k)/2)!}\\
\nonumber &\times \int_0^{2\pi} d\theta_2\int_{0}^{2\pi} d\theta_4 \cdot\cdot\cdot \int_{0}^{2\pi} d\theta_k \int_{\Omega}dw_{k+1} \cdot\cdot\cdot \int_{\Omega} dw_{(N+k)/2} \prod_{s=k+1}^{(N+k)/2}v(w_s)\\
\nonumber &\times \hspace{-6pt} \prod_{s=k+1}^{(N+k)/2}\hspace{-4pt}\frac{1}{|w_s|^2} \tau(w_s) \tau \left(\frac{1} {\bar{w}_s} \right) \mathrm{det}\left[\begin{array}{c}
\left[\begin{array}{c}
\int_{0}^{\theta_{2j}}u(\theta)\tau(e)p_{l-1}(e)d\theta \\
u(\theta_{2j})\tau(e_{2j})p_{l-1}(e_{2j})\end{array}\right]_{j=1,...k/2} \vspace{6pt}\\
\left[\begin{array}{c}
p_{l-1}(w_s)\\
p_{l-1}(1/\bar{w}_s)
\end{array}
\right]_{s=k+1,...,(N+k)/2}
\end{array}\right]_{l=1,...,N}.
\end{align}
}Next we re-order the columns of the determinant according to
\begin{align}
\label{eqn:poly_ordering}
p_0,p_{N-1},p_2,p_{N-3},\cdot\cdot\cdot ,p_{N-2},p_1,
\end{align}
which introduces a factor of $(-1)^{(N/2)(N/2-1)/2}$. For labeling purposes define
\begin{align}
\nonumber q_{2j}(x)=p_{2j}(x)&,&q_{2j+1}(x)=p_{N-1-2j}(x).
\end{align}
Expanding the determinant according to its definition as a signed sum over permutations, then performing the remaining integrations gives
\begin{align}
\nonumber & Z_{k,(N-k)/2}[u,v]_S = (-1)^{(N-k)/2((N-k)/2-1)/2}\frac{A_{k,N}}{(k/2)!((N-k)/2)!} \\
\nonumber & \qquad \times \sum_{P \in S_N} \varepsilon(P) \prod_{l=1}^{k/2}
a_{P(2l-1),P(2l)} \prod_{l=k/2+1}^{N/2} b_{P(2l-1),P(2l)},
\end{align}
where
\begin{align}
\nonumber a_{j,k} &= \int_0^{2\pi} d \theta_1 \, u(\theta_1) \tau(e_1) q_{j-1}(e_1)
\int_0^{\theta_1} d \theta_2 \, u(\theta_2) \tau(e_2) q_{k-1}(e_2),\\
\nonumber b_{j,k} &= \int_{\Omega} dw \: v(w) \tau(w) \tau\left(\frac{1}{\bar{w}}\right) \frac{1}{|w|^2}\hspace{2pt}q_{j-1}(w) \: q_{k-1}\left(\frac{1}{\bar{w}}\right).
\end{align}
If we now impose the restriction $P(2l) > P(2l-1)$ ($l=1,\dots,N/2$), this can be rewritten as
\begin{align}
\nonumber Z_{k,(N-k)/2}[u,v]_S & = (-1)^{(N-k)/2((N-k)/2-1)/2} (2i)^{k/2} A_{k,N}\\
\label{15.1} & \times \sum_{P \in S_N \atop P(2l) > P(2l-1)} \hspace{-12pt}\varepsilon(P) \hspace{6pt}\prod_{l=1}^{k/2} \alpha_{P(2l-1),P(2l)}
\prod_{l=k/2+1}^{N/2} \beta_{P(2l-1),P(2l)},
\end{align}
with $\alpha_{j,k}, \beta_{j,k}$ given by (\ref{15'}). With Definition \ref{def:pfaff} we can write (\ref{15.1}) in terms of a Pfaffian, and (\ref{eqn:genpartfn}) follows.
\hfill $\Box$
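Definition \ref{def:pfaff} is stated earlier in the thesis; under one common convention, $\mathrm{Pf}(A)=\frac{1}{2^{n}n!}\sum_{P\in S_{2n}}\varepsilon(P)\prod_{l=1}^{n}a_{P(2l-1),P(2l)}$, with the restricted sum over $P(2l)>P(2l-1)$ above equal to $n!\,\mathrm{Pf}(A)$. The following Python sketch (the function names are ours, not from the text) implements the unrestricted permutation sum and verifies the defining property $\mathrm{Pf}(A)^2=\det(A)$ on a small antisymmetric matrix.

```python
import itertools
from math import factorial

def perm_sign(p):
    # permutation sign via inversion count
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def pfaffian(a):
    # Pf(A) = (1 / (2^n n!)) * sum_{P in S_{2n}} eps(P) * prod_l a[P(2l-1)][P(2l)]
    m = len(a)
    n = m // 2
    total = 0
    for p in itertools.permutations(range(m)):
        term = perm_sign(p)
        for l in range(n):
            term *= a[p[2 * l]][p[2 * l + 1]]
        total += term
    return total // (2 ** n * factorial(n))  # exact for integer entries

def det(a):
    # Laplace expansion along the first row; fine for small matrices
    if len(a) == 1:
        return a[0][0]
    return sum((-1) ** j * a[0][j] * det([row[:j] + row[j + 1:] for row in a[1:]])
               for j in range(len(a)))

# antisymmetric 4x4 test matrix: Pf = a01*a23 - a02*a13 + a03*a12 = 6 - 10 + 12 = 8
x = [[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]]
assert pfaffian(x) ** 2 == det(x)
```

The brute-force sum is only practical for very small matrices, but it makes the combinatorial prefactors in the rewriting above easy to check.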
We define the summed-up partition function for the real spherical ensemble as
\begin{align}
\label{ZS} Z_N[u,v]_S:= \sum_{k=0 \atop k \: {\rm even}}^N Z_{k,(N-k)/2}[u,v]_S
\end{align}
analogously to (\ref{eqn:summedup}), and substitution of (\ref{eqn:genpartfn}) yields
\begin{align}
\nonumber Z_N[u,v]_S & = \frac{(-1)^{(N/2)(N/2-1)/2}}{2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\label{21} &\times\prod_{s=1}^{N}\frac{1}{\Gamma(s/2)^2} \mathrm{Pf} \left[\alpha_{j,l}+ \beta_{j,l} \right].
\end{align}
Recall that $Z_{k,(N-k)/2}[1,1]_S$ is the probability of finding $k$ real eigenvalues and $(N-k)/2$ complex conjugate pairs of eigenvalues, and so the generating function for these probabilities $p_{N,k}$ is
\begin{align}
\label{def:ZNxi} Z_N(\zeta)_S&:= \sum_{k=0 \atop k \: {\rm even}}^N \zeta^k p_{N,k} \: = \: \sum_{k=0}^{N/2}\zeta^{2k} Z_{2k,(N-2k)/2}[1,1]_S,
\end{align}
which becomes
\begin{align}
\nonumber Z_N(\zeta)_S &=\frac{(-1)^{(N/2)(N/2-1)/2}}{2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\label{eqn:ZNxi} &\times\prod_{j=1}^{N}\frac{1} {\Gamma(j/2)^2}\; \mathrm{Pf} \left[\zeta^2 \alpha_{j,l}+ \beta_{j,l} \right]\Bigg|_{u=v=1}.
\end{align}
\subsubsection{$N$ odd}
The calculation of the generalised partition function for $N$ odd proceeds along the same lines as Proposition \ref{prop:gen_part_fn} for $N$ even, but with a more complicated ordering of the columns during the integration over alternate variables, the purpose of which is to aid in finding the skew-orthogonal polynomials.
\begin{proposition}
\label{prop:gen_part_fn_odd}
With monic polynomials $\{ p_j(x)\}$ of degree $j$ and $\alpha_{j,l},\beta_{j,l}$ as in (\ref{15'}), the generalised partition function for the real spherical ensemble, with $N$ odd, is
\begin{align}
\nonumber Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]_S &=\frac{(-1)^{(N-1)/4((N-1)/2-1)}} {2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\label{eqn:gen_fn_odd} &\times\prod_{s=1}^{N}\frac{1}{\Gamma(s/2)^2}\; [\zeta^{k-1}]\mathrm{Pf}\left[\begin{array}{cc}
\left[\zeta^2\alpha_{i,j} +\beta_{i,j}\right] & \left[\nu_i\right]\\
\left[-\nu_j\right] & 0\\
\end{array}\right]_{i,j=1,...,N},
\end{align}
where,
\begin{align}
\label{def:nu} \nu_l:=\frac{1}{\sqrt{2}}\int_0^{2\pi}u(\theta)\tau(e)q_{l-1}(e) d\theta,
\end{align}
and
\begin{align}
\nonumber &\left.
\begin{array}{l}
q_{2j}=p_{2j},\\
q_{2j+1}=p_{N-1-2j},
\end{array}
\right\}0 \leq 2j<(N-1)/2,\\
\nonumber &\left.
\begin{array}{l}
q_{2j}=p_{2j+1},\\
q_{2j+1}=p_{N-1-(2j+1)},
\end{array}
\right\}(N-1)/2 \leq 2j<N-1,\\
\label{eqn:odd_polys} &\left.
\begin{array}{l}
q_{N-1}=p_{(N-1)/2}.
\end{array}
\right.
\end{align}
\end{proposition}
\textit{Proof:} As in the even case, write the Vandermonde product of $\mathcal{Q}(\mathbf{Y})$ as
\begin{align}
\nonumber &\Delta\left(\mathbf{e},\mathbf{w},\mathbf{\frac{1}{\bar{w}}}\right)=\mathrm{det}\left[\begin{array}{c}
[p_{l-1}(e_j)]_{j=1,...,k}\vspace{3pt}\\
\left[p_{l-1}(w_s)\right]_{s=k+1,...,(N+k)/2}\\
\left[p_{l-1}(1/\bar{w}_s)\right]_{s=k+1,...,(N+k)/2}
\end{array}\right]_{l=1,...,N}\\
\label{eqn:odd_vand} &=(-1)^{(N-k)/2((N-k)/2-1)/2} \mathrm{det}\left[\begin{array}{c}
[p_{l-1}(e_j)]_{j=1,...,k-1}\vspace{3pt}\\
\left[\begin{array}{c}
p_{l-1}(w_s)\\
p_{l-1}(1/\bar{w}_{s})
\end{array}
\right]_{s=k+1,...,(N+k)/2}\\
\left[ p_{l-1}(e_k)\right]
\end{array}\right]_{l=1,...,N},
\end{align}
where we have moved the row corresponding to the $k$th real eigenvalue to the bottom of the matrix. (It proves to be more convenient to convert this latter matrix to Pfaffian form than the equivalent matrix where the $k$th row is not moved.) This always involves an even number of transpositions, so no overall factor is required. The shifted row corresponds to the single unpaired real eigenvalue that must exist in any odd-sized real matrix, its existence being guaranteed by $N$ and $k$ being of the same parity. The factors of $-1$ come from the reordering of the complex eigenvalue rows, exactly as in the even case. Now we substitute (\ref{eqn:odd_vand}) into (\ref{eqn:ZkN-k}) and apply integration over alternate variables, as in Proposition \ref{prop:gen_part_fn}, to find
{\small
\begin{align}
\nonumber &Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]_S=(-1)^{(N-k)/2((N-k)/2-1)/2}\frac{A_{k,N}}{((k-1)/2)!((N-k)/2)!}\\
\nonumber &\times \int_0^{2\pi}d\theta_2\int_{0}^{2\pi} d\theta_4 \cdot\cdot\cdot \int_{0}^{2\pi}d\theta_{k-1}\int_{\Omega}dw_{k+1} \cdot\cdot\cdot \int_{\Omega} dw_{(N+k)/2}\prod_{s=k+1}^{(N+k)/2}v(w_s)\\
\nonumber &\times \hspace{-6pt}\prod_{s=k+1}^{(N+k)/2} \hspace{-4pt} \frac{1}{|w_s|^2} \tau(w_s)\tau\left(\frac{1}{\bar{w}_s}\right) \mathrm{det}\left[\begin{array}{c}
\left[\begin{array}{c}
\int_{0}^{\theta_{2j}}u(\theta)\tau(e)p_{l-1}(e)d\theta \\
u(\theta_{2j})\tau(e_{2j})p_{l-1}(e_{2j})\end{array}\right]_{j=1,...,(k-1)/2}\\
\left[\begin{array}{c}
p_{l-1}(w_s)\\
p_{l-1}(1/\bar{w}_s)
\end{array}
\right]_{s=k+1,...,(N+k)/2}\\
\left[ \int_{0}^{2\pi}u(\theta)\tau(e)p_{l-1}(e)\hspace{2pt}d\theta \right]
\end{array}\right]_{l=1,...,N}.
\end{align}
}
We need to reorder the columns of the determinant in a similar way to that of (\ref{eqn:poly_ordering}), although with the key difference of shifting the middle column to the end. The re-ordering is then
\begin{align}
\nonumber &p_0,p_{N-1},p_2,p_{N-3},... ,p_{(N-1)/2-\epsilon_{1,2}}, p_{(N-1)/2+ \epsilon_{1,2}},\\
\label{eqn:poly_order_odd} &p_{(N-1)/2+\epsilon_{2,1}}, p_{(N-1)/2- \epsilon_{2,1}}, ..., p_{N-4},p_{3},p_{N-2},p_{1},p_{(N-1)/2},
\end{align}
where
\begin{align}
\nonumber &\epsilon_{1,2}=\left\{
\begin{array}{ll}
1, & \mbox{for $(N-1)/2$ even},\\
2, & \mbox{for $(N-1)/2$ odd},
\end{array}
\right.\\
\nonumber &\epsilon_{2,1}=\left\{
\begin{array}{ll}
2, & \mbox{for $(N-1)/2$ even},\\
1, & \mbox{for $(N-1)/2$ odd}.
\end{array}
\right.
\end{align}
This introduces a factor of $(-1)^{(N-1)/2+(N-1)/2((N-1)/2-1)/2}$. Also, for $N$ odd, the factors of $-1$ in $A_{k,N}$ can be re-written by noting
\begin{align}
\nonumber (-1)^{(N-k)k/2-k(k-1)/4}=(-1)^{(N-1)/2-(k-1)/4},
\end{align}
which gives us an overall factor of
\begin{align}
\nonumber &(-1)^{(N-k)/2((N-k)/2-1)/2}\; (-1)^{(N-1)/2+(N-1)/2((N-1)/2-1)/2}\; A_{k,N}\\
\nonumber &= \frac{(-1)^{(N-1)/4((N-1)/2-1)-(k-1)/4}}{2^{(N(N-1)+k)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\nonumber &\quad\times \prod_{s=1}^{N}\frac{1}{\Gamma(s/2)^2}.
\end{align}
Now we again expand the determinant as a signed sum over permutations and impose the restriction $P(2l)>P(2l-1)$, which gives us the odd analogue of (\ref{15.1}),
{\small
\begin{align}
\nonumber &Z_{k,(N-k)/2}^{\mathrm{odd}}[u,v]_S = \frac{(-1)^{(N-1)/4((N-1)/2-1)}}{2^{N(N-1)/2}}\Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\nonumber &\times \prod_{j=1}^{N} \frac{1}{\Gamma(j/2)^2} \sum_{P \in S_N \atop P(2l) > P(2l-1)} \hspace{-12pt}\varepsilon(P)\; \nu_{P(N),N+1} \prod_{l=1}^{(k-1)/2} \hspace{-6pt} \alpha_{P(2l-1),P(2l)} \prod_{l=(k+1)/2}^{(N-1)/2} \beta_{P(2l-1),P(2l)},
\end{align}
}where $\nu_{P(N),N+1}:=\nu_{P(N)}$ is given by (\ref{def:nu}). Using the Pfaffian definition (\ref{def:Pf}), (\ref{eqn:gen_fn_odd}) now follows.
\hfill $\Box$
The analogous definition to (\ref{ZS}) for $N$ odd is
\begin{align}
\label{eqn:gen_fn_odd1} Z_N^{\mathrm{odd}}[u,v]_S:= \sum_{k=1 \atop k \: {\rm odd}}^N Z_{k, (N-k)/2}^{\mathrm{odd}} [u,v]_S,
\end{align}
and substituting (\ref{eqn:gen_fn_odd}) we have
\begin{align}
\nonumber Z_N^{\mathrm{odd}}[u,v]_S &=\frac{(-1)^{(N-1)/4((N-1)/2-1)}}{2^{N(N-1)/2}}\Gamma((N+1)/2)^{N/2}\Gamma(N/2+1)^{N/2}\\
\nonumber &\times\prod_{s=1}^{N}\frac{1}{\Gamma(s/2)^2}\; \mathrm{Pf}\left[\begin{array}{cc}
\left[\alpha_{j,l} +\beta_{j,l} \right] & \left[\nu_j \right]\\
\left[-\nu_l \right] & 0\\
\end{array}\right]_{j,l=1,...,N}.
\end{align}
The generating function for the probabilities with $N$ odd is
\begin{align}
\label{def:SZNo} Z_N^{\mathrm{odd}}(\zeta)_S&:= \sum_{k=1 \atop k \: {\mathrm{odd}}}^N \zeta^k p_{N,k} \: = \: \sum_{k=0}^{(N-1)/2}\zeta^{2k+1} Z_{2k+1,(N-2k-1)/2}^{\mathrm{odd}}[1,1]_S,
\end{align}
and
\begin{align}
\nonumber Z_N^{\mathrm{odd}}(\zeta)_S &=\frac{(-1)^{(N-1)/4((N-1)/2-1)}} {2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\label{eqn:ZNxio} &\times \zeta \prod_{s=1}^{N}\frac{1}{\Gamma(s/2)^2}\; \mathrm{Pf}\left[\begin{array}{cc}
\left[\zeta^2\alpha_{j,l} +\beta_{j,l}\right] & \left[\nu_j \right]\\
\left[-\nu_l \right] & 0\\
\end{array}\right]_{j,l=1,...,N}\Bigg|_{u=v=1}.
\end{align}
In the next section we will see that with the relevant skew-orthogonal polynomials, the probabilities for both even and odd $N$ can be calculated in a straightforward manner.
\subsection{Skew-orthogonal polynomials}
\label{sec:Ssops}
Recall that the Pfaffian in the generating function (\ref{eqn:GinOE_probsGF_pf}) for the even case of the real Ginibre ensemble was brought to skew-diagonal form (\ref{eqn:skew_diag_mat}) using the polynomials (\ref{eqn:GinOE_sopolys}) only when $\zeta=1$. This means that the calculation of the probabilities seems to inevitably involve the calculation of a Pfaffian or determinant --- recall that we obtained a chequerboard $N\times N$ Pfaffian, which can be rewritten as an $N/2 \times N/2$ determinant according to (\ref{eqn:chequer}) --- and so is computationally intensive when using exact arithmetic. Separately, when discussing the probabilities of the partially symmetric real Ginibre ensemble in Chapter \ref{sec:tGprobs}, we found that we could use the monomials $p_j(x)=x^j$ to calculate the $\alpha_{2j-1,2l}^{(\tau)}\big|_{u=1}$ and $\beta_{2j-1,2l}^{(\tau)}\big|_{v=1}$ individually, however these did not skew-diagonalise the matrix.
For the problem at hand we can obtain both of these benefits simultaneously: we explicitly construct polynomials $q_i(x)$ that, for general $\zeta$, skew-diagonalise the matrix in (\ref{eqn:ZNxi}). The polynomials turn out to be quite simple, and it was knowledge of these polynomials that motivated the definition of the $\{q_i(x)\}$ in terms of the $\{p_i(x)\}$ in (\ref{eqn:q=p}). Further, we find that with some modification, we can also use a similar set of polynomials to bring the matrix in (\ref{eqn:ZNxio}) to a form which, for our purposes, is equivalent to the odd skew-diagonal form (\ref{eqn:skew_diag_mat_odd}), for all $\zeta$.
\begin{definition}
For monic polynomials $p_j(x),p_l(x)$ of degree $j$ and $l$ respectively, define the inner product
\begin{align}
\nonumber &\langle p_j,p_l \rangle_S:=-\frac{i}{2}\int_0^{2\pi}d\theta_1\: \tau(e_1) \int_0^{2\pi} d\theta_2 \: \tau(e_2) p_j(e_1) p_l(e_2)\: \mathrm{sgn}(\theta_2-\theta_1)\\
\nonumber &+\int_{\Omega} dw\; \tau(w) \tau \left(\frac{1}{\bar{w}}\right) \frac{1} {|w|^2} \left(p_j(w) p_l\left(\frac{1}{\bar{w}}\right) - p_l(w) p_j\left(\frac{1} {\bar{w}}\right) \right)\\
\label{def:Sip} &=\left(\alpha_{j+1,l+1}+ \beta_{j+1,l+1}\right)\big|_{u=v=1},
\end{align}
where $\alpha_{j,l}, \beta_{j,l}$ are from (\ref{15'}).
\end{definition}
We will also find it convenient to define
\begin{align}
\nonumber \hat{\alpha}_{j,l}&:=\alpha_{j,l}\big|_{u=1},\\
\label{def:hatab1} \hat{\beta}_{j,l}&:=\beta_{j,l}\big|_{v=1},
\end{align}
and
\begin{align}
\nonumber \hat{\alpha}_l&:=\hat{\alpha}_{2l+1,2l+2},\\
\label{def:hatab} \hat{\beta}_l&:=\hat{\beta}_{2l+1,2l+2}.
\end{align}
\begin{proposition}
\label{prop:skew_polys_even}
The inner product (\ref{def:Sip}) satisfies the skew-orthogonality conditions (\ref{eqn:GinOE_soprops}) using the polynomials $p_j(x)=x^j$ and thus, according to (\ref{eqn:q=p}),
\begin{eqnarray}
\label{eqn:skew_polys} q_{2j}(x)=x^{2j}, \qquad q_{2j+1}(x)=x^{N-1-2j}.
\end{eqnarray}
The normalisations $\hat{\alpha}_l, \hat{\beta}_l$ from (\ref{def:hatab}) are
\begin{align}
\nonumber \hat{\alpha}_l&=\frac{2\pi}{N-1-4l} \frac{\Gamma((N+1)/2)} {\Gamma(N/2+1)},\\
\label{eqn:hatabeval} \hat{\beta}_l&=\frac{2\sqrt{\pi}}{N-1-4l} \left( 2^N \frac{\Gamma(2l+1) \Gamma(N-2l)} {\Gamma(N+1)}-\sqrt{\pi}\: \frac{\Gamma((N+1)/2)} {\Gamma(N/2+1)}\right).
\end{align}
\end{proposition}
\textit{Proof}: The skew-symmetry property $\hat{\alpha}_{j,l}= -\hat{\alpha}_{l,j}$, $\hat{\beta}_{j,l}= -\hat{\beta}_{l,j}$ can be checked by observation, so to establish the result we must show that each of $\hat{\alpha}_{j,l}$ and $\hat{\beta}_{j,l}$ is non-zero only for $j=2t+1,l=2t+2$, in which case they have the evaluations stated.
From (\ref{15'}), we have
\begin{align}
\nonumber \hat{\alpha}_{j+1,l+1}&=c \: \frac{i}{2}\int_0^{2\pi}d\theta_1\: e^{i\theta_1 (\tilde{j} - (N-1)/2)}\int_{\theta_1}^{2\pi}d\theta_2\: e^{i\theta_2(\tilde{l}- (N-1)/2)}\\
\nonumber &-c \: \frac{i}{2}\int_0^{2\pi}d\theta_1\: e^{i\theta_1 (\tilde{j} - (N-1)/2)}\int_0^{\theta_1}d\theta_2\: e^{i\theta_2(\tilde{l}- (N-1)/2)},
\end{align}
where $c$ is a constant factor and $\tilde{j}=2j$ or $\tilde{j}=N-1-2j$ for $j$ even or odd respectively. Performing the inner integrals over $\theta_2$, using the fact that $\tilde{l}\neq (N-1)/2$ since $\tilde{l}$ is an integer and $N$ is even, we find
\begin{align}
\label{eqn:sopsa1} \hat{\alpha}_{j+1,l+1}&=c\: \frac{2}{2\tilde{l}-N+1} \int_0^{2\pi}d\theta_1\: e^{i\theta_1 (\tilde{j} +\tilde{l} - N+1)},
\end{align}
which is non-zero only in the case that $\tilde{l}=N-1-\tilde{j}$, which implies $j=2t+1,l=2t+2$ (for $\hat{\alpha}_{j+1,l+1}$ positive). The evaluation of (\ref{eqn:sopsa1}) in this case is straightforward. To obtain the conditions on $s$ and $t$ where $\hat{\beta}_{s,t}\neq 0$ we repeat the procedure used above by writing out $\tau(w),\tau(\bar{w}^{-1}),q_{2j}$ and $q_{2j+1}$. The fact that $s=2j+1$ and $t=2j+2$ for a non-zero evaluation is then immediate.
It remains to evaluate $\hat{\beta}_{2j+1,2j+2}=: \hat{\beta}_{j}$, which turns out to require knowledge of a non-standard form of the beta integral. Thus, after converting to polar co-ordinates, setting $c:=|w|^2$ and integrating by parts, one obtains
\begin{align}
\label{eqn:beta1} \hat{\beta}_{j}=-\frac{2\pi^{3/2}}{N-1-4j}\frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)}+\frac{2^{N+1}\pi}{N-1-4j}\int_0^1\frac{c^{2j}+c^{N-2j-1}}{(1+c)^{N+1}}dc.
\end{align}
According to \cite[Equation 3.216 (1)]{GraRyz2000}, for general $a,b$ such that Re$\, b>0$,
Re$\, (a-b) > 0$,
\begin{align}
\nonumber \int_0^1(t^{b-1} + t^{a-b-1})(1 + t)^{-a} \, dt = \frac{\Gamma(b)\Gamma(a-b)}{\Gamma(a)},
\end{align}
and so, with $b = y$ and $a-b = x$, it can be recognised as a non-standard form of the beta integral (the substitution $t \mapsto t/(1+t)$ maps it to the standard form)
\begin{align}
\nonumber \int_0^1 t^{x-1}(1-t)^{y-1} \, dt = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}.
\end{align}
The stated formula for $\hat{\beta}_{j}$ now follows.
Alternatively, by some manipulations the integral in (\ref{eqn:beta1}) can be transformed to
\begin{align}
\nonumber \int_0^1 \frac{c^{2j}+c^{N-2j-1}} {(1+c)^{N+1}}dc&=\frac{1} {2j+1}{}_2 F_1 (2j+1,N+1,2j+2;-1)\\
\label{eqn:beta2} &+\frac{1}{N-2j} {}_2 F_1 (N-2j,N+1,N+1-2j;-1).
\end{align}
We see that the RHS of (\ref{eqn:beta2}) must equal $\Gamma(2j+1) \Gamma(N-2j)/ \Gamma(N+1)$ for the result to be obtained, a condition which can be shown to be equivalent to the statement
\begin{align}
\label{eqn:beta3} 1=\sum_{s=0}^{2j}\frac{2^{s-N} \Gamma(N-s)}{\Gamma(2j+1-s) \Gamma(N-2j)} +\sum_{s=0}^{N-2j-1}\frac{2^{s-N} \Gamma(N-s)}{\Gamma(2j+1) \Gamma(N-2j-s)}.
\end{align}
With $j=0$ the RHS of (\ref{eqn:beta3}) can be evaluated as
\begin{align}
\nonumber \frac{1}{2^N}+\sum_{s=0}^{N-1}\frac{1}{2^{N-s}}=1,
\end{align}
where we have used the formula $\sum_{s=1}^{j} 2^{-s}=1-2^{-j}$. We now establish (\ref{eqn:beta3}) inductively for all integer $j\in [0, N/2-1]$. Through the use of Zeilberger's algorithm \cite{Zeil1990a,Zeil1990b} (in Mathematica form \cite{PaSc1994}) we obtain a proof that
\begin{align}
\nonumber &\sum_{s=0}^{2j}\frac{2^{s-N} \Gamma(N-s)}{\Gamma(2j+1-s) \Gamma(N-2j)} +\sum_{s=0}^{N-2j-1}\frac{2^{s-N} \Gamma(N-s)}{\Gamma(2j+1) \Gamma(N-2j-s)}\\
\nonumber &=\sum_{s=0}^{2j+1}\frac{2^{s-N} \Gamma(N-s)}{\Gamma(2j+2-s) \Gamma(N-2j-1)} +\sum_{s=0}^{N-2j-2}\frac{2^{s-N} \Gamma(N-s)}{\Gamma(2j+2) \Gamma(N-2j-1-s)},
\end{align}
so the RHS of (\ref{eqn:beta3}) is unchanged by $2j\mapsto 2j+1$ for all $j\in [0,N/2-1]$, and so this establishes (\ref{eqn:hatabeval}).
\hfill $\square$
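The identity established by the induction can also be tested directly. Written with binomial coefficients, its two sums become $\sum_{s=0}^{2j}\binom{N-1-s}{2j-s}2^{s-N}$ and $\sum_{s=0}^{N-2j-1}\binom{N-1-s}{2j}2^{s-N}$, which are the probabilities of the complementary events that $N$ fair coin flips contain at least $N-2j$ heads, or at least $2j+1$ tails. A short exact check in Python (our own, using the standard library):

```python
from fractions import Fraction
from math import comb

def both_sums(N, j):
    # first sum: P(at least N-2j heads in N fair flips), as a stopped negative-binomial sum
    s1 = sum(Fraction(comb(N - 1 - s, 2 * j - s), 2 ** (N - s)) for s in range(2 * j + 1))
    # second sum: P(at least 2j+1 tails in N fair flips)
    s2 = sum(Fraction(comb(N - 1 - s, 2 * j), 2 ** (N - s)) for s in range(N - 2 * j))
    return s1 + s2

# the two probabilities are complementary, so the total is exactly 1
for N in (4, 6, 8, 10):
    for j in range(N // 2):
        assert both_sums(N, j) == 1
```

Using `Fraction` keeps the arithmetic exact, so the check is a genuine identity test rather than a floating-point comparison.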
Since Proposition \ref{prop:skew_polys_even} tells us that both $\hat{\alpha}_{j,l}$ and $\hat{\beta}_{j,l}$ are independently skew-orthogonalised by the polynomials (\ref{eqn:skew_polys}), and so for general $\zeta$, we have that
\begin{align}
\label{eqn:soPf} \mathrm{Pf}\left[\zeta^2\alpha_{j,l}+ \beta_{j,l} \right]_{j,l=1,2,...,N} \Big|_{u=v=1} &=\prod_{l=0}^{N/2-1} \left( \zeta^2\hat{\alpha}_l+ \hat{\beta}_l \right).
\end{align}
Substitution of (\ref{eqn:soPf}) into (\ref{eqn:ZNxi}) gives us the generating function for the probabilities $p_{N,k}$
\begin{align}
\nonumber Z_N(\zeta)_S&=\frac{(-1)^{(N/2)(N/2-1)/2}}{2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2}\\
\label{eqn:Sprobs} &\times\prod_{s=1}^{N}\frac{1} {\Gamma(s/2)^2} \prod_{l=0}^{N/2-1} \left( \zeta^2\hat{\alpha}_l+ \hat{\beta}_l \right).
\end{align}
The probability $p_{N,N}$ that all eigenvalues are real is the coefficient of $\zeta^N$ in (\ref{eqn:Sprobs}) and is
\begin{align}
\nonumber p_{N,N}&=\pi^{N/2}\frac{\Gamma((N+1)/2)^{N}}{2^{N(N-2)/2}} \prod_{s=1}^{N}\frac{1} {\Gamma(s/2)^2} \prod_{l=0}^{N/2-1}\frac{1}{|N-1-4l|},
\end{align}
where we have used the fact that $\lfloor N/4 \rfloor$ (the number of negative factors $N-1-4l$) is of the same parity as $(N/2)(N/2-1)/2$, so that the sign of the prefactor cancels against the signs of the factors $N-1-4l$.
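The normalisation $Z_N(1)_S=\sum_k p_{N,k}=1$, together with the $N=2$ evaluation $p_{2,2}=\pi/4$, provides a consistency check on (\ref{eqn:hatabeval}) and (\ref{eqn:Sprobs}). A numerical sketch in Python (the helper names are ours):

```python
from math import gamma, pi

def hats(N):
    # alpha_hat_l and beta_hat_l from the stated normalisations, N even
    g = gamma((N + 1) / 2) / gamma(N / 2 + 1)
    a = [2 * pi * g / (N - 1 - 4 * l) for l in range(N // 2)]
    b = [2 * pi ** 0.5 / (N - 1 - 4 * l)
         * (2 ** N * gamma(2 * l + 1) * gamma(N - 2 * l) / gamma(N + 1) - pi ** 0.5 * g)
         for l in range(N // 2)]
    return a, b

def Z(N, zeta):
    # generating function for the probabilities p_{N,k}, N even
    sign = (-1) ** ((N // 2) * (N // 2 - 1) // 2)
    pre = sign / 2 ** (N * (N - 1) // 2) \
        * gamma((N + 1) / 2) ** (N / 2) * gamma(N / 2 + 1) ** (N / 2)
    for s in range(1, N + 1):
        pre /= gamma(s / 2) ** 2
    a, b = hats(N)
    out = pre
    for al, bl in zip(a, b):
        out *= zeta ** 2 * al + bl
    return out

for N in (2, 4, 6):
    assert abs(Z(N, 1.0) - 1.0) < 1e-10   # probabilities sum to 1

# p_{2,2} is the coefficient of zeta^2, i.e. the prefactor times alpha_hat_0
a, b = hats(2)
p22 = Z(2, 1.0) * a[0] / (a[0] + b[0])    # since Z(2,1) = pre*(a0+b0)
assert abs(p22 - pi / 4) < 1e-12
```

The check passes because the $\pi$ terms in $\hat{\alpha}_l$ and $\hat{\beta}_l$ cancel in the sum $\hat{\alpha}_l+\hat{\beta}_l$, leaving the simple product evaluated by the prefactor.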
We can also use the polynomials (\ref{eqn:skew_polys}) to calculate the expected number of real eigenvalues $E_N$ and the variance $\sigma^2_N$.
\begin{corollary}
\label{3.5}
With $N$ even the expected number of real eigenvalues of an $N \times N$ matrix $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$ is \cite{eks1994}
\begin{align}
\label{29} E_N=\left. \frac{\partial}{\partial \zeta}Z_N(\zeta) \right|_{\zeta=1}=\sum_{l=0}^{N/2-1}\frac{2\hspace{1pt}\hat{\alpha}_l} {\hat{\alpha}_l+\hat{\beta}_l} =
\frac{\sqrt{\pi} \Gamma((N+1)/2)}{\Gamma(N/2)}.
\end{align}
The variance in the number of real eigenvalues is
\begin{align}
\nonumber \sigma_N^2 &= {\partial^2 \over \partial \zeta^2} Z_N(\zeta) \Big |_{\zeta = 1} + E_N - E_N^2 \: = \: 2E_N - 4 \sum_{l=0}^{N/2 - 1} {\hat{\alpha}_l^2 \over (\hat{\alpha}_l + \hat{\beta}_l)^2} \nonumber\\
\label{30} &= 2E_N - 2 \sqrt{\pi} {\Gamma((N+1)/2)^2 \Gamma(N-1/2) \over \Gamma(N/2)^2 \Gamma(N)}.
\end{align}
\end{corollary}
\textit{Proof}: The second equalities follow from the first and (\ref{eqn:Sprobs}) through simple differentiation, while for the third equalities use has been made of the summations
\begin{align}
\nonumber \sum_{j=0}^{N/2 - 1} \Big ( {N - 1 \atop 2j} \Big ) = 2^{N-2}
\end{align}
and
\begin{align}
\nonumber \sum_{j=0}^{N/2 - 1} \Big ( {N - 1 \atop 2j} \Big )^2 &= \frac{1}{2}\sum_{j=0}^{N-1} \Big ( {N - 1 \atop j} \Big )^2 = \frac{1}{2} \Big ( {2N - 2 \atop N-1} \Big ) = \frac{\Gamma(2N-1)} {2(\Gamma(N))^2}\\
\nonumber &= {2^{2N-3} \; \Gamma ((2N-1)/2) \over \sqrt{\pi} \; \Gamma(N)},
\end{align}
where use was made of \cite[0.157 (1)]{GraRyz2000}.
\hfill $\Box$
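Both closed forms can be checked against the finite sums directly. A numerical sketch (helper names ours), using the simplification $\hat{\alpha}_l+\hat{\beta}_l = 2^{N+1}\sqrt{\pi}\,\Gamma(2l+1)\Gamma(N-2l)/(\Gamma(N+1)(N-1-4l))$, which follows from (\ref{eqn:hatabeval}) after the $\pi$ terms cancel:

```python
from math import gamma, pi, sqrt, isclose

def ratios(N):
    # alpha_hat_l / (alpha_hat_l + beta_hat_l); the factor (N-1-4l) cancels
    g = gamma((N + 1) / 2) / gamma(N / 2 + 1)
    out = []
    for l in range(N // 2):
        a = 2 * pi * g
        apb = 2 ** (N + 1) * sqrt(pi) * gamma(2 * l + 1) * gamma(N - 2 * l) / gamma(N + 1)
        out.append(a / apb)
    return out

for N in (2, 4, 6, 8, 10):
    r = ratios(N)
    EN = sum(2 * x for x in r)                       # mean number of real eigenvalues
    assert isclose(EN, sqrt(pi) * gamma((N + 1) / 2) / gamma(N / 2))
    var = 2 * EN - 4 * sum(x * x for x in r)         # variance via the sums
    closed = 2 * EN - 2 * sqrt(pi) * gamma((N + 1) / 2) ** 2 * gamma(N - 0.5) \
        / (gamma(N / 2) ** 2 * gamma(N))
    assert isclose(var, closed)
```

For $N=2$ this reproduces $E_2=\pi/2$ and $\sigma_2^2=\pi-\pi^2/4$ exactly (to machine precision).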
The result (\ref{29}) was first derived by Edelman \textit{et al.}~\cite{eks1994} using ideas from integral geometry. (This statement is generalised to one concerning eigenvalues of a matrix polynomial in \cite{EK95}.) A corollary, also noted in \cite{eks1994}, is that for $N \to \infty$
\begin{align}
\label{eqn:SENasy} E_N \: \sim \: \sqrt{\pi N \over 2} \bigg ( 1 - {1 \over 4N} + {1 \over 32 N^2} + {5 \over 128 N^3} - {21 \over 2048 N^4} + {\rm O} \Big ( {1 \over N^5} \Big ) \bigg ),
\end{align}
which in particular gives the leading-order behaviour $E_N \sim \sqrt{\pi N/2}$ for large $N$. We also note that, to leading order, (\ref{30}) implies the variance is related to the mean by $\sigma_N^2 \sim (2 - \sqrt{2} )E_N$, which coincidentally (?) is the same asymptotic relation as (\ref{eqn:Gvar}), the analogous result for the real Ginibre ensemble.
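The expansion can be compared against the exact evaluation $E_N=\sqrt{\pi}\,\Gamma((N+1)/2)/\Gamma(N/2)$ from (\ref{29}); since the truncation error is ${\rm O}(N^{-5})$ relative to $E_N$, the agreement improves rapidly with $N$. A numerical sketch (function names ours):

```python
from math import gamma, pi, sqrt

def E_exact(N):
    # exact mean number of real eigenvalues, N even
    return sqrt(pi) * gamma((N + 1) / 2) / gamma(N / 2)

def E_asym(N):
    # truncated large-N expansion, error O(N^{-5}) relative
    return sqrt(pi * N / 2) * (1 - 1 / (4 * N) + 1 / (32 * N ** 2)
                               + 5 / (128 * N ** 3) - 21 / (2048 * N ** 4))

for N in (20, 50, 100):
    rel = abs(E_asym(N) - E_exact(N)) / E_exact(N)
    assert rel < 1 / N ** 4
```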
The explicit form of the generating function (\ref{eqn:Sprobs}) allows for the computation of the large $N$ limiting form of the probability density of the scaled number of real eigenvalues.
\begin{proposition}
\label{pAB}
Let $\sigma_N^2$ and $E_N$ be as in Corollary \ref{3.5}, and let $p_{N,k}$ be the probability that $k$ of the $N$ eigenvalues are real. We have
\begin{align}
\nonumber \lim_{N \to \infty} \: {\rm sup}_{x \in (-\infty,\infty)} \Big | \sigma_N p_{N,\lfloor \sigma_N x + E_N \rfloor} - {1 \over \sqrt{2 \pi}} e^{- x^2/2} \Big | = 0,
\end{align}
where $\lfloor \cdot \rfloor$ denotes the floor function.
\end{proposition}
\noindent
\textit{Proof}: For a given $n$, let $\{p_n(k)\}_{k=0,1,\dots,n}$ be a sequence such that
\begin{align}
\nonumber P_n(x) = \sum_{k=0}^n p_n(k) x^k
\end{align}
has the properties that the zeros of $P_n(x)$ are all on the real axis, and $P_n(1)=1$.
Let
\begin{align}
\nonumber \mu_n = \sum_{k=0}^n k p_n(k), \qquad \sigma_n^2 = \sum_{k=0}^n k^2 p_n(k) - \mu_n^2,
\end{align}
and suppose $\sigma_n \to \infty$ as $n \to \infty$. A local limit theorem due to Bender
\cite{Be73} gives
\begin{align}
\nonumber \lim_{n \to \infty} \: \mathrm{sup}_{x \in (-\infty,\infty)} \Big | \sigma_n p_n(\lfloor \sigma_n x + \mu_n \rfloor) - \frac{1} {\sqrt{2 \pi}} e^{-x^2/2} \Big | &= 0.
\end{align}
Application of this general theorem to $Z_N(\zeta)$ in (\ref{def:ZNxi}), with $\zeta^2 = x$, gives the stated result.
\hfill $\square$
\newline
\begin{figure}
\begin{center}
\includegraphics[scale=0.6,trim=0 0mm 80mm 540, clip=true]{pnk_n_300_labeled9.pdf}
\end{center}
\caption[Comparison of real spherical $p_{N,k}$ to the analytic prediction.]{A plot of $p_{300,k}$, that is, the probability of finding $k$ real eigenvalues from a $300\times 300$ real matrix $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$, where $\mathbf{A},\mathbf{B}$ are real matrices with iid Gaussian elements. The points were calculated using (\ref{eqn:Sprobs}), while the solid line is the Gaussian curve implied by Proposition \ref{pAB} (with a normalising factor of $2$ since $N$ and $k$ must be of the same parity).}
\label{fig:pnk}
\end{figure}
The implication of Proposition \ref{pAB} is that our probabilities $p_{N,k}$ (suitably scaled) will tend to lie on a Gaussian curve as $N$ becomes large. In Figure \ref{fig:pnk} we have calculated the value of $p_{N,k}$ for $N=300$, $k=2,4,\dots,38$ and overlaid it with the Gaussian curve given by Proposition \ref{pAB}; the agreement is clear. (We do not, as yet, have any further results to explain the slight systematic shift of the points relative to the curve.)
\subsubsection{$N$ odd}
As discussed at the beginning of Section \ref{sec:Ssops}, we can find polynomials that will reduce the matrix in (\ref{eqn:ZNxio}) to an easily computable form for all $\zeta$. The column reordering (\ref{eqn:poly_order_odd}) means that, while these polynomials are still monomials, the labeling is more complicated than in the even case, since the middle column was additionally moved to the end. The first half of the polynomials are the same as in the even case, while the second half are modified by $j\rightarrow j+1/2$. The middle polynomial must be singled out for special treatment.
\begin{proposition}
\label{prop:odd_polys}
Let $N$ be odd. The skew-orthogonal polynomials with respect to the inner product (\ref{def:Sip}) are
\begin{align}
\nonumber &\left.
\begin{array}{l}
q_{2j}(x)=x^{2j},\\
q_{2j+1}(x)=x^{N-1-2j},
\end{array}
\right\}0 \leq 2j<(N-1)/2,\\
\nonumber &\left.
\begin{array}{l}
q_{2j}(x)=x^{2j+1},\\
q_{2j+1}(x)=x^{N-1-(2j+1)},
\end{array}
\right\}(N-1)/2 \leq 2j<N-1,
\end{align}
and
\begin{align}
\nonumber &q_{N-1}(x)=x^{(N-1)/2}\\
\nonumber &+ \sum_{j=0}^{ (N-1)/2-1}\left(\frac{ \langle q_{2j+1}, x^{(N-1)/2}\rangle_S} {\hat{\alpha}_{j}+ \hat{\beta}_{j}} q_{2j}(x)- \frac{\langle q_{2j}, x^{(N-1)/2}\rangle_S} {\hat{\alpha}_{j}+ \hat{\beta}_{j}} q_{2j+1}(x)\right).
\end{align}
With these polynomials
\begin{align}
\nonumber &\left.
\begin{array}{l}
\hat{\alpha}_j^{\mathrm{odd}}=\hat{\alpha}_j,\\
\hat{\beta}_j^{\mathrm{odd}}=\hat{\beta}_j,
\end{array}
\right\}0 \leq 2j<(N-1)/2,\\
\nonumber &\left.
\begin{array}{l}
\hat{\alpha}_j^{\mathrm{odd}}=\hat{\alpha}_{j+1/2},\\
\hat{\beta}_j^{\mathrm{odd}}=\hat{\beta}_{j+1/2},
\end{array}
\right\}(N-1)/2 \leq 2j<N-1,\\
\nonumber &\left.
\begin{array}{l}
\nonumber \hat{\alpha}_{s,N}+\hat{\beta}_{s,N} =\hat{\alpha}_{N,s} +\hat{\beta}_{N,s}=0, \qquad s \leq N,
\end{array}
\right.\\
\nonumber &\left.
\begin{array}{l}
\nonumber \bar{\nu}_N :=\nu_N \big|_{u=1}=\pi \sqrt{\frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)}},
\end{array}
\right.
\end{align}
where $\hat{\alpha}_j, \hat{\beta}_j$ are as in (\ref{eqn:hatabeval}) and
\begin{align}
\nonumber \hat{\alpha}_{j+1/2}&=\frac{2\pi}{N-3-4j}\frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)},\\
\nonumber \hat{\beta}_{j+1/2}&=\frac{2\sqrt{\pi}}{N-3-4j}\left( 2^N\frac{\Gamma(2j+2)\Gamma(N-2j-1)}{\Gamma(N+1)}-\sqrt{\pi}\frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)}\right).
\end{align}
\end{proposition}
\textit{Proof:} For $0\leq 2j<(N-1)/2$ we have the result by Proposition \ref{prop:skew_polys_even}, and replacing $j\mapsto j+1/2$ we have the result for $(N-1)/2\leq 2j < N-1$. By the construction of $q_{N-1}(x)$, we see that $\hat{\alpha}_{s,N}+\hat{\beta}_{s,N}=0$ for $1\leq s\leq N$. Writing out $\bar{\nu}_l$ using the definition (\ref{def:nu}), the fact that it is non-zero only for $l=N$ is clear; that is, only when $l=N$ does the angular dependence cancel from the integral, in which case the evaluation is straightforward.
\hfill $\Box$
With the polynomials of Proposition \ref{prop:odd_polys} the matrix in (\ref{eqn:ZNxio}) for general $\zeta$ (and $u=v=1$) has the structure
\begin{align}
\label{def:Ssod} \left[\begin{array}{ccc}
\mathbf{P} & \mathbf{g}_{N-1} & \mathbf{0}_{N-1}\\
-\mathbf{g}_{N-1}^T & 0 & h_{N}\\
\mathbf{0}_{N-1}^T & -h_N & 0
\end{array}\right],
\end{align}
where $\mathbf{P}$ is skew-diagonal and $\mathbf{g}_{N-1}$ is an $N-1$ dimensional non-zero vector. This structure happens to be identical to that of (\ref{eqn:skew_inverse_odd}) with $\mathbf{A}^{-1}=\mathbf{P},g_j=c_j$ and $h_N=-b_N^{-1}$, and we therefore know from (\ref{eqn:Pfinvo}) that the Pfaffian of the matrix in (\ref{def:Ssod}) is equal to
\begin{align}
\nonumber h_N\prod_{j=1}^{(N-1)/2}p_j.
\end{align}
This means the odd analogue of (\ref{eqn:soPf}) is
\begin{align}
\nonumber &\mathrm{Pf}\left.\left[\begin{array}{cc}
\left[\zeta^2\alpha_{i,j} +\beta_{i,j}\right] & \left[\nu_i\right]\\
\left[-\nu_j\right] & 0\\
\end{array}\right]_{i,j=1,...,N}\right|_{u=v=1}\\
\nonumber &= \bar{\nu}_N \prod_{l=0}^{\lceil (N-1)/4\rceil-1} \left( \zeta^2\hat{\alpha}_l +\hat{\beta}_l \right) \prod_{l=\lceil (N-1)/4\rceil}^{(N-1)/2-1} \left( \zeta^2\hat{\alpha}_{l+1/2}+ \hat{\beta}_{l+1/2} \right),
\end{align}
where $\lceil x\rceil$ denotes the ceiling function. Substituting this into (\ref{eqn:ZNxio}) gives
{\small
\begin{align}
\nonumber Z_N^{\mathrm{odd}}(\zeta)_S&=\frac{(-1)^{(N-1)/4((N-1)/2-1)}}{2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2} \prod_{s=1}^{N}\frac{1}{\Gamma(s/2)^2}\\
\label{eqn:gfprobso} &\times \zeta \: \bar{\nu}_N \prod_{l=0}^{\lceil (N-1)/4\rceil-1} \left( \zeta^2\hat{\alpha}_l+ \hat{\beta}_l \right) \prod_{l= \lceil (N-1)/4\rceil}^{(N-1)/2-1} \left( \zeta^2\hat{\alpha}_{l+1/2} + \hat{\beta}_{l+1/2} \right).
\end{align}
}
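As in the even case, the normalisation $\sum_k p_{N,k}=1$ provides a check on (\ref{eqn:gfprobso}) for small odd $N$. A numerical sketch (helper names ours), using the simplifications of $\hat{\alpha}_l+\hat{\beta}_l$ and $\hat{\alpha}_{l+1/2}+\hat{\beta}_{l+1/2}$ that follow from Propositions \ref{prop:skew_polys_even} and \ref{prop:odd_polys} after the $\pi$ terms cancel:

```python
from math import gamma, pi, sqrt, ceil

def Z_odd_1(N):
    # evaluate the odd generating function at zeta = 1 (u = v = 1), N odd
    g = gamma((N + 1) / 2) / gamma(N / 2 + 1)
    sign = (-1) ** ((N - 1) * ((N - 1) // 2 - 1) // 4)
    pre = sign / 2 ** (N * (N - 1) // 2) \
        * gamma((N + 1) / 2) ** (N / 2) * gamma(N / 2 + 1) ** (N / 2)
    for s in range(1, N + 1):
        pre /= gamma(s / 2) ** 2
    nu = pi * sqrt(g)                       # bar{nu}_N
    out = pre * nu
    m = ceil((N - 1) / 4)
    for l in range(m):                      # alpha_hat_l + beta_hat_l
        out *= 2 ** (N + 1) * sqrt(pi) * gamma(2 * l + 1) * gamma(N - 2 * l) \
            / (gamma(N + 1) * (N - 1 - 4 * l))
    for l in range(m, (N - 1) // 2):        # alpha_hat_{l+1/2} + beta_hat_{l+1/2}
        out *= 2 ** (N + 1) * sqrt(pi) * gamma(2 * l + 2) * gamma(N - 2 * l - 1) \
            / (gamma(N + 1) * (N - 3 - 4 * l))
    return out

for N in (3, 5, 7):
    assert abs(Z_odd_1(N) - 1.0) < 1e-10    # probabilities sum to 1
```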
From (\ref{eqn:gfprobso}) we can calculate the expected number of real eigenvalues in the case of $N$ odd, which we know from \cite{eks1994} is given by (\ref{29}) independent of the parity of $N$. Similarly, we can check that the formula (\ref{30}) for the variance also holds independent of the parity of $N$.
\begin{corollary}
\label{prop:odd_EN_var}
For $N$ odd, the expected number of real eigenvalues of $\mathbf{Y}$ can be written
\begin{align}
\nonumber E_N^{\mathrm{odd}} &= \frac{\partial}{\partial \zeta}Z_N^{\mathrm{odd}}(\zeta)\bigg|_{\zeta=1}\\
\nonumber &= 1+\sum_{l=0}^{\lceil (N-1)/4\rceil-1}\frac{2\: \hat{\alpha}_l} {\hat{\alpha}_l+ \hat{\beta}_l}+\sum_{l=\lceil (N-1)/4\rceil}^{(N-1)/2-1}\frac{2\: \hat{\alpha}_{l+1/2}} {\hat{\alpha}_{l+1/2}+ \hat{\beta}_{l+1/2}},
\end{align}
which has evaluation (\ref{29}). The variance for $N$ odd is
{\small
\begin{align}
\nonumber (\sigma_N^{\mathrm{odd}})^2&=\frac{\partial^2} {\partial\zeta^2}Z_N^{\mathrm{odd}}(\zeta)\bigg|_{\zeta=1}+E_N^{\mathrm{odd}}-(E_N^{\mathrm{odd}})^2\\
\nonumber &= 2(E_N-1)-\sum_{l=0}^{\lceil (N-1)/4\rceil-1} \frac{4\: \hat{\alpha}_l^2} {(\hat{\alpha}_l+ \hat{\beta}_l)^2}-\sum_{l=\lceil (N-1)/4\rceil}^{(N-1)/2-1}\frac{4\: \hat{\alpha}_{l+1/2}^2}{(\hat{\alpha}_{l+1/2} +\hat{\beta}_{l+1/2})^2},
\end{align}
}
which has evaluation (\ref{30}).
\end{corollary}
\textit{Proof:} The formulae in terms of $\hat{\alpha}_l$ and $\hat{\beta}_l$ follow from
(\ref{eqn:gfprobso}) recalling the definition of $Z_N^{\mathrm{odd}}$ from (\ref{def:SZNo}). For the summations we use the identity
\begin{align}
\nonumber \sum_{l=0}^{\lceil (N-1)/4\rceil-1}{N-1 \choose 2l}^p+\sum_{l=\lceil (N-1)/4\rceil}^{(N-1)/2-1}{N-1 \choose 2l+1}^p&=\sum_{l=0}^{(N-1)/2-1}{N-1 \choose l}^p,
\end{align}
which holds for integer $p$ and for both $(N-1)/4\in\mathbb{Z}$ and $(N-1)/4\in\mathbb{Z}+1/2$. We obtain this identity by checking these two cases.
\hfill $\Box$
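The split-range identity used in the proof is easily verified exactly; the point is that, by the symmetry $\binom{N-1}{j}=\binom{N-1}{N-1-j}$, the indices appearing on the left reproduce the multiset $\{\binom{N-1}{l}\}_{l=0}^{(N-1)/2-1}$. A quick check covering both parities of $(N-1)/2$ (function name ours):

```python
from math import comb, ceil

def split_sum(N, p):
    # left-hand side of the identity, N odd, integer p
    m = ceil((N - 1) / 4)
    return sum(comb(N - 1, 2 * l) ** p for l in range(m)) \
        + sum(comb(N - 1, 2 * l + 1) ** p for l in range(m, (N - 1) // 2))

for N in (3, 5, 7, 9, 11, 13):
    for p in (1, 2, 3):
        assert split_sum(N, p) == sum(comb(N - 1, l) ** p for l in range((N - 1) // 2))
```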
The values of $p_{N,k}$ for $N=2,...,7$, calculated using (\ref{eqn:Sprobs}) and (\ref{eqn:gfprobso}), are listed in Table \ref{table:Spnk} of Appendix \ref{app:Sprobs}, along with the results of a simulation of 100,000 matrices. A remarkable fact can be immediately seen in the table: the probabilities for even $N$ are polynomials in $\pi$ of degree $N/2$, while for odd $N$ they are rational numbers. The key difference is that $(N+1)/2$ and $N/2+1$ alternate as integers and half integers, depending on whether $N$ is even or odd. These values introduce factors of $\sqrt{\pi}$ through the gamma functions.
\begin{proposition}\label{p310}
Let $p_{N,k}$ be the probability of finding $k$ real eigenvalues in a matrix $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$, where $\mathbf{A},\mathbf{B}$ are real Ginibre matrices. Then for $N$ even, $p_{N,k}$ is a polynomial in $\pi$ of degree $N/2$. For $N$ odd, $p_{N,k}$ is a rational number.
\end{proposition}
\textit{Proof}: For $N$ even, from (\ref{eqn:hatabeval}) $\hat{\alpha}_l$ and the second term in $\hat{\beta}_l$ both yield factors of $\pi^{3/2}$. The pre-factor in (\ref{eqn:genpartfn}) yields $\pi^{-N/4}$. Combining these two facts we find the highest power of $\pi$ is $N/2$. Noting that the first term in $\hat{\beta}_l$ has a factor of $\pi^{1/2}$ and expanding the product in (\ref{eqn:Sprobs}) gives terms of lower order in $\pi$.
For the odd case, the pre-factor in (\ref{eqn:gen_fn_odd}) gives $\pi^{-N/4-1/2}$. Then by noting that
$(\zeta^2\hat{\alpha}_l+\hat{\beta}_{l})$ and $(\zeta^2 \hat{\alpha}_{l+1/2}+\hat{\beta}_{l+1/2})$ both give factors of $\pi^{1/2}$ and $\nu_N$ gives $\pi^{3/4}$ we see that the end result is a rational number.
\hfill $\Box$
\subsection{Correlation functions}
\label{sec:Scorrelns}
As we did for the GOE and for the real Ginibre ensemble we would like to make use of knowledge of the Pfaffian form of the generating function (\ref{21}), and the skew-orthogonal polynomials (\ref{eqn:skew_polys}), to compute the ($N$ even) correlation functions $\rho_{(k_1,k_2)}$. In the present setting, the latter specifies the probability density for $k_1$ eigenvalues occurring at specific points on the unit circle, and $k_2$ eigenvalues occurring at specific points in the unit disk. Analogous to (\ref{eqn:GinOEfnal_diff_correln}), the $(k_1,k_2)$-point correlation function can be calculated in terms of the summed up generalised partition function (\ref{ZS}) by functional differentiation,
\begin{equation}\label{ZS1}
\rho_{(k_1,k_2)}(\mathbf{e},\mathbf{w})
= {1 \over Z_N[u,v]} {\delta^{k_1+k_2} \over \delta u(e_1) \cdots \delta u(e_{k_1})
\delta v(w_1) \cdots \delta v(w_{k_2}) } Z_N[u,v] \Big |_{u=v=1}.
\end{equation}
Comparing (\ref{eqn:q(y)}) to (\ref{eqn:GinOEjpdf}) and (\ref{eqn:genpartfn}) to (\ref{eqn:GinOE_gpf_even}), we see that the equations governing the eigenvalue statistics in the real Ginibre and real spherical ensembles are strikingly similar. As such, we expect that the correlation functions for the spherical ensemble will display similar characteristics to those of the real Ginibre ensemble. Indeed, this is what we find.
\begin{definition}
\label{def:Scorrelne}
Let $N$ be even, and $\{q_j(x) \}$ be the set of monic skew-orthogonal polynomials (\ref{eqn:skew_polys}). Define
\begin{align}
\nonumber D(x_i,x_j)_S&=\sum_{l=0}^{\frac{N}{2}-1}\frac{1}{r_l}\Bigl[a_{2l}(x_i)a_{2l+1}(x_j)-a_{2l+1}(x_i)a_{2l}(x_j)\Bigr],\\
\nonumber S(x_i,x_j)_S&=\sum_{l=0}^{\frac{N}{2}-1}\frac{1}{r_l}\Bigl[a_{2l}(x_i)b_{2l+1}(x_j)-a_{2l+1}(x_i)b_{2l}(x_j)\Bigr],\\
\nonumber \tilde{I}(x_i,x_j)_S&=\sum_{l=0}^{\frac{N}{2}-1}\frac{1}{r_l}\Bigl[b_{2l}(x_i)b_{2l+1}(x_j)-b_{2l+1}(x_i)b_{2l}(x_j)\Bigr]+\epsilon(x_i,x_j),
\end{align}
where
\begin{align}
\nonumber a_j(x) &=
\left\{
\begin{array}{ll}
|x|^{-1}\tau(x)\hspace{2pt}q_j(x), & x\in \mathbb{D},\\
\sqrt{-i/2}\hspace{2pt}\tau(x) q_j(x), & x\in \partial \mathbb{D},\\
\end{array}
\right.\\
\nonumber b_j(x) &=
\left\{
\begin{array}{ll}
|x|^{-1}\tau(\bar{x}^{-1})\hspace{2pt}q_j(\bar{x}^{-1}), & x\in \mathbb{D},\\
\sqrt{-i/2}\int_{0}^{2\pi}\tau(e^{i\theta})q_j(e^{i\theta})\mathrm{sgn}(\theta-\mathrm{arg}(x))d\theta, & x\in \partial \mathbb{D},\\
\end{array}
\right.\\
\nonumber \epsilon(x_i,x_j) &=
\left\{
\begin{array}{ll}
\mathrm{sgn}(\mathrm{arg}(x_i)-\mathrm{arg}(x_j)), & x_i,x_j\in \partial \mathbb{D},\\
0, & \mathrm{otherwise},\\
\end{array}
\right.\\
\nonumber r_l&=\hat{\alpha}_l+\hat{\beta}_l,
\end{align}
and $\mathbb{D}$ is the unit disk, with $\partial\mathbb{D}$ its boundary. Also define
\begin{align}
\label{eqn:kernel} \mathbf{K}_N(s,t)_S=\left[\begin{array}{cc}
S(s,t)_S & -D(s,t)_S \\
\tilde{I}(s,t)_S& S(t,s)_S \\
\end{array}\right].
\end{align}
\end{definition}
From Proposition \ref{prop:gen_part_fn} we see that the generalised partition function for the real spherical ensemble ($N$ even) is structurally identical to the analogous quantity for the real Ginibre ensemble from Proposition \ref{prop:GinOE_gpf_even} upon the identifications $p_j(w) \leftrightarrow q_j(w)$, $p_j(\bar{w}) \leftrightarrow q_j(1/ \bar{w})$. So, by the working in Chapter \ref{sec:Gincorrlnse} we obtain the correlation functions for the real spherical ensemble.
\begin{proposition}
\label{thm:correlns}
Let $N$ be even, $\mathbf{Y}$ be an $N \times N$ matrix as in (\ref{def:ainvb}). The $(k_1,k_2)$-point correlation function is
\begin{align}
\label{eqn:correlns} &\rho_{(k_1,k_2)}(\mathbf{e},\mathbf{w})=\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}_N(e_i,e_j)_S & \mathbf{K}_N(e_i,w_m)_S\\
\mathbf{K}_N(w_l,e_j)_S & \mathbf{K}_N(w_l,w_m)_S\\
\end{array}\right]\\
\nonumber &=\mathrm{Pf}\left( \left[\begin{array}{cc}
\mathbf{K}_N(e_i,e_j)_S & \mathbf{K}_N(e_i,w_m)_S\\
\mathbf{K}_N(w_l,e_j)_S & \mathbf{K}_N(w_l,w_m)_S\\
\end{array}\right]\mathbf{Z}^{-1}_{2(k_1+k_2)}\right),
e_i\in \partial \mathbb{D}, \; w_i \in \mathbb{D},
\end{align}
where $\mathbf{e}=\{e_1,...,e_{k_1} \}$ and $\mathbf{w}=\{w_1,...,w_{k_2} \}$, and $\mathbf{Z}_{2n}$ is from (\ref{def:Z2N}).
\end{proposition}
Similar to the even case, the generalised partition functions (\ref{eqn:GinOE_gpf_odd}) and (\ref{eqn:gen_fn_odd}) are structurally identical under the replacement mentioned above, but note that we cannot use the odd-from-even method of Chapters \ref{sec:odd_from_even} and \ref{sec:Gin_oddfromeven}. Recall that the method relies on removing the variable corresponding to the largest real eigenvalue off to infinity. However, in the current situation we have transformed our real eigenvalues $\lambda_j$ into the complex exponentials $e_j$ according to (\ref{14.2}) (recalling the definition of $e_j$), and so the variables in (\ref{eqn:correlns}) corresponding to the real eigenvalues are constrained to lie on the unit circle. So the question of whether the method applies to the spherical ensemble is equivalent to asking whether it can be applied to the circular ensembles. It may be possible to project the eigenvalues from the circle onto the real line, remove one to infinity and then project back, although this is, as yet, unknown. (In \cite{FM09}, only the Gaussian and Ginibre ensembles were considered.) However, the method of functional differentiation still remains viable, and by applying the procedures of Chapter \ref{sec:Gin_odd_fdiff} to (\ref{eqn:gen_fn_odd}) we can obtain the $N$ odd correlations.
\begin{definition}
Let $N$ be odd, and $\{q_j(x) \}$ be the set of monic skew-orthogonal polynomials of Proposition \ref{prop:odd_polys}. Define
\begin{align}
\nonumber &D^{\mathrm{odd}}(x_i,x_j)_S=\sum_{l=0}^{\lceil (N-1)/4\rceil -1}\frac{1}{r_l}\Bigl[a_{2l}(x_i)a_{2l+1}(x_j)-a_{2l+1}(x_i)a_{2l}(x_j)\Bigr]\\
\nonumber &+\sum_{l=\lceil (N-1)/4\rceil}^{(N-1)/2 -1}\frac{1}{r_{l+1/2}}\Bigl[a_{2l}(x_i)a_{2l+1}(x_j)-a_{2l+1}(x_i)a_{2l}(x_j)\Bigr],\\
\nonumber &S^{\mathrm{odd}}(x_i,x_j)_S=\sum_{l=0}^{\lceil (N-1)/4\rceil -1}\frac{1}{r_l}\Bigl[a_{2l}(x_i)b_{2l+1}(x_j)-a_{2l+1}(x_i)b_{2l}(x_j)\Bigr]\\
\nonumber &+\sum_{l=\lceil (N-1)/4\rceil}^{(N-1)/2 -1}\frac{1}{r_{l+1/2}}\Bigl[a_{2l}(x_i)b_{2l+1}(x_j)-a_{2l+1}(x_i)b_{2l}(x_j)\Bigr]+\kappa(x_i,x_j),\\
\nonumber &\tilde{I}^{\mathrm{odd}}(x_i,x_j)_S=\sum_{l=0}^{\lceil (N-1)/4\rceil -1}\frac{1}{r_l}\Bigl[b_{2l}(x_i)b_{2l+1}(x_j)-b_{2l+1}(x_i)b_{2l}(x_j)\Bigr]\\
\nonumber &+\sum_{l=\lceil (N-1)/4\rceil}^{(N-1)/2 -1}\frac{1}{r_{l+1/2}}\Bigl[b_{2l}(x_i)b_{2l+1}(x_j)-b_{2l+1}(x_i)b_{2l}(x_j)\Bigr]+\epsilon(x_i,x_j)+\sigma(x_i,x_j),
\end{align}
where $a_j(x),b_j(x)$ and $\epsilon(x_i,x_j)$ are as in Definition \ref{def:Scorrelne}, and
\begin{align}
\nonumber \kappa(x_i,x_j) &=
\left\{
\begin{array}{ll}
\frac{\tau(x_i)}{\nu_N}q_{N-1}(x_i), & x_j\in \partial \mathbb{D},\\
0, & \mathrm{otherwise},\\
\end{array}
\right.\\
\nonumber \sigma(x_i,x_j) &=
\left\{
\begin{array}{ll}
\frac{1}{\nu_N}(b_{N-1}(x_i)-b_{N-1}(x_j)), & x_i,x_j\in \partial \mathbb{D},\\
-\frac{1}{\nu_N}b_{N-1}(x_j), &x_i\in\partial \mathbb{D},x_j\in \mathbb{D},\\
\frac{1}{\nu_N}b_{N-1}(x_i), &x_i\in \mathbb{D},x_j\in\partial \mathbb{D},\\
0, & \mathrm{otherwise},\\
\end{array}
\right.\\
\nonumber r_{l+1/2}&=\hat{\alpha}_{l+1/2}+\hat{\beta}_{l+1/2}.
\end{align}
Also define
\begin{align}
\label{eqn:Skernelo} \mathbf{K}_N^{\mathrm{odd}}(s,t)_S=\left[\begin{array}{cc}
S^{\mathrm{odd}}(s,t)_S & -D^{\mathrm{odd}}(s,t)_S \\
\tilde{I}^{\mathrm{odd}}(s,t)_S& S^{\mathrm{odd}}(t,s)_S \\
\end{array}\right].
\end{align}
\end{definition}
\begin{theorem}
With $N$ odd, the correlation functions for $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$ are given by
\begin{align}
\nonumber &\rho_{(k_1,k_2)}(\mathbf{e},\mathbf{w})=\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}_N^{\mathrm{odd}}(e_i,e_j)_S & \mathbf{K}_N^{\mathrm{odd}}(e_i,w_m)_S\\
\mathbf{K}_N^{\mathrm{odd}}(w_l,e_j)_S & \mathbf{K}_N^{\mathrm{odd}}(w_l,w_m)_S\\
\end{array}\right]\\
\label{eqn:Scorrelnso} &=\mathrm{Pf}\left( \left[\begin{array}{cc}
\mathbf{K}_N^{\mathrm{odd}}(e_i,e_j)_S & \mathbf{K}_N^{\mathrm{odd}}(e_i,w_m)_S\\
\mathbf{K}_N^{\mathrm{odd}}(w_l,e_j)_S & \mathbf{K}_N^{\mathrm{odd}}(w_l,w_m)_S\\
\end{array}\right]\mathbf{Z}^{-1}_{2(k_1+k_2)}\right), \qquad
e_i\in \partial \mathbb{D}, \; w_i \in \mathbb{D}.
\end{align}
\end{theorem}
\subsection{Kernel element evaluations and scaled limits}
\label{sec:SOElims}
As for the GOE and real Ginibre ensembles, the correlations in (\ref{eqn:correlns}) and (\ref{eqn:Scorrelnso}) are completely determined by the $2\times 2$ kernels (\ref{eqn:kernel}) and (\ref{eqn:Skernelo}). The elements of these kernels satisfy relations analogous to those of Lemma \ref{lem:Gin_s=d=i}. We only list the relations for the even kernel; those for the odd kernel are similar.
\begin{lemma}
\label{lem:S_s=d=i}
The elements of the correlation kernel (\ref{eqn:kernel}) obey the relations
\begin{align}
\nonumber \tilde{I}_{r,r}(e_1,e_2)_S&=\int_{\theta_1}^{\theta_2}S_{r,r}(e,e_2)_S \; d\theta +\mathrm{sgn} (\theta_1-\theta_2),\\
\nonumber \tilde{I}_{r,c}(e,w)_S&=-\tilde{I}_{c,r}(w,e)_S=\frac{1}{|w|^2}S_{c,r}(\bar{w}^{-1},e)_S,\\
\nonumber \tilde{I}_{c,c}(w_1,w_2)_S&=\frac{1}{|w_1|^2} S_{c,c} (\bar{w}_1^{-1}, w_2)_S,\\
\nonumber D_{r,r}(e_1,e_2)_S&=\frac{\partial}{\partial \theta_2} S_{r,r}(e_1, e_2)_S,\\
\nonumber D_{c,r}(w,e)_S&=D_{r,c}(e,w)_S=\frac{1} {|w|^2} S_{r,c}(e, \bar{w}^{-1})_S,\\
\label{eqn:kernelrelations} D_{c,c}(w_1,w_2)_S&= \frac{1} {|w_2|^2} S_{c,c}(w_1,\bar{w}_2^{-1})_S.
\end{align}
\end{lemma}
We can write the $S_{*,*}(s,t)$ in a summed-up form which, remarkably, holds for both even and odd $N$.
\begin{proposition}
\label{prop:summedupS}
The elements $S_{r,r}(s,t)_S$, $S_{r,c}(s,t)_S$, $S_{c,r}(s,t)_S$ and $S_{c,c}(s,t)_S$ of the correlation kernel (\ref{eqn:kernel}), corresponding to real-real, real-complex, complex-real and complex-complex eigenvalue pairs respectively, can be evaluated as
{\small
\begin{align}
\nonumber S_{r,r}(e_1,e_2)_S&=\frac{\Gamma((N+1)/2)}{2\sqrt{\pi}\Gamma(N/2)}\; \mathrm{cos}\left(\frac{\theta_2-\theta_1}{2}\right)^{N-1},
\end{align}
\begin{align}
\nonumber S_{c,r}(w,e_1)_S&=\left(\frac{-i}{\sqrt{\pi}}\right)^{1/2}\frac{1}{r_w}\frac{iN}{2^N\sqrt{\pi}}\sqrt{\frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)}} \left[ \int_{\frac{r_w^{-1}-r_w}{2}}^{\infty} \frac{dt} {(1+t^2)^{N/2+1}}\right]^{1/2}\\
\nonumber &\times\left( \frac{e^{-i(\theta_w-\theta_1)/2}}{r_w^{1/2}}+\frac{e^{i(\theta_w-\theta_1)/2}}{r_w^{-1/2}} \right)^{N-1},\\
\nonumber S_{r,c}(e_1,w)_S&=\left( \frac{-i}{\sqrt{\pi}} \right)^{1/2}\frac{1}{r_w}\frac{N(N-1)}{2^{N+2}\sqrt{\pi}}\left[ \int_{\frac{r_w^{-1}-r_w}{2}}^{\infty}\frac{dt}{(1+t^2)^{N/2+1}}\right]^{1/2}\sqrt{\frac{\Gamma((N+1)/2)}{\Gamma(N/2+1)}}\\
\nonumber &\times \left( \frac{e^{-i(\theta_1-\theta_w)/2}}{r_w^{1/2}} +\frac{e^{i(\theta_1-\theta_w)/2}}{r_w^{-1/2}}\right)^{N-2}\left( \frac{e^{-i(\theta_1-\theta_w)/2}}{r_w^{1/2}} -\frac{e^{i(\theta_1-\theta_w)/2}}{r_w^{-1/2}}\right),\\
\nonumber S_{c,c}(w,z)_S&=\frac{N(N-1)}{2^{N+1}\pi r_w r_z} \left[\int_{\frac{r_w^{-1}-r_w}{2}}^{\infty} \frac{dt}{\left(1+t^2\right)^{N/2+1}}\right]^{1/2} \left[\int_{\frac{r_z^{-1}-r_z}{2}}^{\infty} \frac{dt}{\left(1+t^2\right)^{N/2+1}}\right]^{1/2}\\
\nonumber &\times \left( \frac{e^{i(\theta_z-\theta_w)/2}}{(r_wr_z)^{1/2}}+\frac{e^{-i(\theta_z-\theta_w)/2}}{(r_wr_z)^{-1/2}} \right)^{N-2} \left( \frac{e^{i(\theta_z-\theta_w)/2}}{(r_wr_z)^{1/2}}-\frac{e^{-i(\theta_z-\theta_w)/2}}{(r_wr_z)^{-1/2}} \right),
\end{align}
}
for $N$ even or odd, where $w=r_w e^{i\theta_w}$ and $z=r_z e^{i\theta_z}$.
\end{proposition}
\textit{Proof:} Using the binomial theorem we can establish the identity
\begin{eqnarray}
\nonumber \frac{1}{2}\left(\frac{d}{dx}(1+x)^{2n-1}+\frac{d}{dx}(1-x)^{2n-1}\right)=\sum_{p=0}^{n-1}2p {2n-1 \choose 2p}x^{2p-1}.
\end{eqnarray}
With this identity and the polynomials of Proposition \ref{prop:skew_polys_even} (for the even case) and Proposition \ref{prop:odd_polys} (for the odd case) the respective sums can be performed.
\hfill $\Box$
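The identity used in the proof can be verified with a few lines of exact arithmetic; the following sketch (function names ours) compares the closed form of the left-hand side with the sum on the right at rational sample points.

```python
# Exact check of the binomial identity from the proof:
#   (1/2) d/dx [ (1+x)^(2n-1) + (1-x)^(2n-1) ]
#     = sum_{p=0}^{n-1} 2p C(2n-1, 2p) x^(2p-1).
from fractions import Fraction
from math import comb

def lhs(n, x):
    # differentiate the two binomials directly
    return Fraction(2 * n - 1, 2) * ((1 + x) ** (2 * n - 2) - (1 - x) ** (2 * n - 2))

def rhs(n, x):
    # the p = 0 term vanishes because of the factor 2p
    return sum(2 * p * comb(2 * n - 1, 2 * p) * x ** (2 * p - 1)
               for p in range(1, n))

for n in range(1, 7):
    for x in (Fraction(1, 3), Fraction(-2, 5), Fraction(7, 2)):
        assert lhs(n, x) == rhs(n, x)
print("identity verified for n = 1..6")
```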
\noindent From Proposition \ref{prop:summedupS} we can use the equations (\ref{eqn:kernelrelations}) to obtain the other kernel elements.
According to Propositions \ref{thm:correlns} and \ref{prop:summedupS}, we know that the real and complex densities are given by
\begin{align}
\label{eqn:SOElimdensr} \rho_{(1)}^{r}(\theta)&:=\rho_{(1,0)}(e)=S_{r,r}(e,e)_S =\frac{\Gamma((N+1)/2)} {2\sqrt{\pi} \Gamma(N/2)},
\end{align}
and
\begin{align}
\nonumber \rho_{(1)}^{c}(w)&:=\rho_{(0,1)}(w)=S_{c,c}(w,w)_S\\
\label{eqn:rhoc} &= \frac{N(N-1)} {2^{N+1} \pi r^2} \left( {1 \over r} + r \right)^{N-2} \left( \frac{1} {r} - r \right) \int_{\frac{r^{-1} - r} {2}}^\infty \frac{dt} {(1 + t^2)^{N/2 + 1}},
\end{align}
respectively. Note that $\int_{0}^{2\pi} \rho_{(1)}^{r}(\theta)\, d\theta= E_N$, and so the evaluation (\ref{29}) is immediate from (\ref{eqn:SOElimdensr}). Since $\rho_{(1)}^{r}(\theta)=E_N/(2\pi)$ we can use (\ref{eqn:SENasy}) to obtain the large $N$ form of the density
\begin{align}
\label{eqn:Srlimdens} \rho_{(1)}^{r}(\theta) \sim {1 \over 2 \pi} \sqrt{{\pi N \over 2}},
\end{align}
while integration by parts of (\ref{eqn:rhoc}) shows
\begin{align}
\label{eqn:Sclimdens} \rho_{(1)}^{\rm c}(w) \sim {(N-1) \over \pi} {1 \over (1 + r^2)^2} - {N - 1 \over N - 2}
{1 \over \pi} {1 \over (1 - r^2)^2} + {\rm O}\Big ( {1 \over N} \Big ),
\end{align}
valid for $r \in [0, 1 - {\rm O}(1/\sqrt{N})]$. Note that the real eigenvalue density is only proportional to $\sqrt{N}$, and so to leading order in $N$ the density of general eigenvalues is
\begin{align}
\label{df}
{N \over \pi} {1 \over (1 + r^2)^2},
\end{align}
for all $r \in [0,1]$. This is a Cauchy distribution and, projected stereographically onto the half sphere, gives a uniform distribution. Note also that we have a uniform density of real eigenvalues in (\ref{eqn:Srlimdens}), which we recall was manifested in the analogous real Ginibre result (\ref{eqn:Grlimdens}).
\begin{remark}
This $1/N$ convergence in (\ref{eqn:Sclimdens}) should be contrasted with the exponential convergence in the case of the polynomials (\ref{eqn:rand_polys}) (see \cite{Mc09}).
\end{remark}
The uniform spherical distribution implied by (\ref{df}) led to a conjecture in \cite{FM11} that there is a \textit{spherical law} analogous to the circular law of Proposition \ref{prop:circlaw}. This conjecture has since been proven by adapting the method of \cite{TVK10} used to prove the circular law. Specifically, in \cite{Bord2010} the author shows that the eigenvalue density for a matrix $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$, where $\mathbf{A},\mathbf{B}$ have general iid mean zero, variance one distributed elements, converges to that where $\mathbf{A},\mathbf{B}$ are (complex) Gaussian distributed. Then using (\ref{df}) the spherical law follows.
\begin{proposition}[Spherical law, \cite{Bord2010}]
\label{prop:sphlaw}
Let $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$, where $\mathbf{A},\mathbf{B}$ are $N\times N$ matrices with iid elements from a distribution with mean $0$ and variance $1$. Then the density of eigenvalues of $\mathbf{Y}$ approaches the uniform distribution on the unit sphere under stereographic projection as $N \to \infty$.
\end{proposition}
\subsubsection{Scaled limit}
\label{sec:SOEsclims}
Before implementing the fractional linear transformations (\ref{7'}) and (\ref{14.2}), we have from (\ref{eqn:rho}) that the density of real eigenvalues near the origin is proportional to $E_N$, and thus $\sqrt{N}$. Taking a scaled limit --- involving changing the variables so that the real and complex densities are of order unity --- is of interest in this case because the resulting correlations are expected to be the same as for the eigenvalues of the real Ginibre ensemble, scaled near the origin, which we calculated in Chapter \ref{sec:Ginkernelts}, with the results listed in (\ref{eqn:Ginbulkall}) of Appendix \ref{app:Ginsummed}. This expectation is based upon the geometrical fact that locally a sphere resembles a plane.
\begin{remark}
In the complex case, an analogy between the eigenvalue jpdf of the generalised eigenvalue problem and the Boltzmann factor for the two-dimensional one-component plasma on a sphere \cite{caillol81}, together with the analogy between the eigenvalue jpdf for the Ginibre matrices and the two-dimensional one-component plasma in the plane \cite{AJ80} allow this latter point (the similarity to the bulk real Ginibre correlations) to be anticipated from a Coulomb gas perspective.
\end{remark}
In the present problem, with our use of the transformed variables (\ref{7'}) and (\ref{14.2}), the original origin has been mapped to $(1,0)$. We must choose scaled co-ordinates so that in the vicinity of this point the real and complex eigenvalues have a density of order unity. For the real eigenvalues, from the knowledge that their expected value is of order $\sqrt{N}$ and that they are uniform on the unit circle, with $e_j:=e^{ix_j}$, we scale
\begin{eqnarray}
\label{eqn:largeNreal} x_j\mapsto \frac{2X_j}{\sqrt{N}}.
\end{eqnarray}
For the complex eigenvalues, which number of order $N$ in the unit disk, an order one density will result by writing
\begin{eqnarray}
\label{eqn:largeNcomplex} w_j\mapsto 1+\frac{2i}{\sqrt{N}}W_j.
\end{eqnarray}
Note that the real and imaginary parts have been interchanged to match the geometry of the problem in the Ginibre ensemble, that is, so that the eigenvalues are again distributed in the upper half-plane, including the real line. The factors of $2$ in (\ref{eqn:largeNreal}) and (\ref{eqn:largeNcomplex}) are included so that an exact correspondence with the results of (\ref{eqn:Ginbulk}) and (\ref{eqn:Ginbulkall}) can be obtained.
\begin{remark}
Since the eigenvalue density is rotationally invariant under (\ref{7'}), we can choose any point on the unit circle in (\ref{eqn:largeNcomplex}); we have chosen the image of the original origin for convenience.
\end{remark}
Since $S_{r,r}(x,x)_S$ is interpreted as a density, the normalised quantity is $S_{r,r}(x,x)_Sdx$. It then follows that in the more general case we must look at the scaled limit of $S_{r,r}(x,y)_S \sqrt{dx dy}$ and $S_{c,c}(w_1, w_2)_S\sqrt{d^2w_1 d^2w_2}$. For $S_{r,c}(x, w)_S$ and $S_{c,r}(w,x)_S$ we require that the product $S_{r,c}(x,w)_S S_{c,r}(w,x)_S dx d^2w$ has a well defined limit. From (\ref{eqn:largeNreal}) and (\ref{eqn:largeNcomplex}) we see
\begin{align}
\nonumber \sqrt{dxdy}&\mapsto\frac{2} {\sqrt{N}} \sqrt{dXdY},\\
\nonumber dxd^2w&\mapsto\left( \frac{4} {N}\right)^{3/2} dXd^2W,\\
\nonumber \sqrt{d^2w_1d^2w_2}&\mapsto\frac{4} {N} \sqrt{d^2W_1d^2W_2}.
\end{align}
With this change of variables the large $N$ form of the correlation kernel for the spherical ensemble matches that of the Ginibre ensemble.
\begin{proposition}
\label{prop:scaled_limit}
Recall $\mathbf{K}_{N}(s,t)_S$ from (\ref{eqn:kernel}). Replacing $x_j$ and $w_j$ according to (\ref{eqn:largeNreal}) and (\ref{eqn:largeNcomplex}) then taking $N\to\infty$ gives
\begin{align}
\nonumber \frac{2}{\sqrt{N}}\; \mathbf{K}_N(e_i,e_j)_S&\sim \mathbf{K}_{r,r}^{\mathrm{bulk}}(X_i,X_j),\\
\nonumber \frac{2^{5/2}}{N}\; \mathbf{K}_N(e_i,w_j)_S&\sim \mathbf{K}_{r,c}^{\mathrm{bulk}}(X_i,W_j),\\
\nonumber \frac{\sqrt{2}}{\sqrt{N}}\; \mathbf{K}_N(w_i,e_j)_S&\sim -\left(\mathbf{K}_{r,c}^{\mathrm{bulk}} (W_i,X_j)\right)^{T},\\
\nonumber \frac{4}{N}\; \mathbf{K}_N(w_i,w_j)_S&\sim \mathbf{K}_{c,c}^{\mathrm{bulk}}(W_i,W_j),
\end{align}
with $\mathbf{K}_{r,r}^{\mathrm{bulk}}(\mu,\eta),\mathbf{K}_{r,c}^{\mathrm{bulk}}(\mu,\eta)$ and $\mathbf{K}_{c,c}^{\mathrm{bulk}}(\mu,\eta)$ as in (\ref{eqn:Ginbulkall}).
\end{proposition}
\textit{Proof:} From the explicit functional forms of Proposition \ref{prop:summedupS}, we see that elementary limits suffice. For example, changing variables $t\mapsto 2t/\sqrt{N}$ shows
\begin{align}
\nonumber \int_{\frac{r_w^{-1}-r_w}{2}}^{\infty} \frac{dt} {(1+t^2)^{N/2+1}} \sim\sqrt{\frac{\pi}{2N}}\: \mathrm{erfc}(\sqrt{2}\mathrm{Im}W).
\end{align}
Combining such calculations we obtain
\begin{align}
\nonumber \frac{2}{\sqrt{N}}\; S_{r,r}(e_i,e_j)_S&\sim\frac{1}{\sqrt{2\pi}}e^{-(X_i-X_j)^2/2},\\
\nonumber \frac{2^{5/2}}{N}\; S_{r,c}(e_i,w_j)_S&\sim\frac{\sqrt{-i}} {\sqrt{2\pi}} \sqrt{\mathrm{erfc}(\sqrt{2} \mathrm{Im}W_j)}\: e^{-(X_i-\overline{W}_j)^2/2}\; i(\overline{W}_j-X_i),\\
\nonumber \frac{\sqrt{2}} {\sqrt{N}}\; S_{c,r}(w_i,e_j)_S&\sim\frac{\sqrt{-i}}{\sqrt{2\pi}} \sqrt{\mathrm{erfc} (\sqrt{2}\mathrm{Im}W_i)}\: ie^{-(W_i-X_j)^2/2},\\
\nonumber \frac{4}{N}\; S_{c,c}(w_i,w_j)_S&\sim\frac{1}{\sqrt{2\pi}}\sqrt{\mathrm{erfc}(\sqrt{2} \mathrm{Im}W_i)} \sqrt{\mathrm{erfc}(\sqrt{2}\mathrm{Im}W_j)}\\
\nonumber &\times i(\overline{W}_j-W_i)e^{-(W_i-\overline{W}_j)^2/2},
\end{align}
which is in agreement with the diagonal entries on the RHS of the present proposition, as implied by (\ref{eqn:Ginbulk}) (when one recalls that $S_{r,c}$ and $S_{c,r}$ never appear individually; only as the product $S_{r,c}S_{c,r}$).
By performing the remaining limits, or by recalling the inter-relationships (\ref{eqn:kernelrelations}), the other kernel elements $D$ and $\tilde{I}$ can be obtained from $S$, giving the off-diagonal entries required in (\ref{eqn:Ginbulkall}).
\hfill $\Box$
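The key integral asymptotic used in the proof can also be checked at large but finite $N$. In the sketch below (stdlib only; the sample point $W$ and the quadrature parameters are arbitrary choices of ours) we compare $\int_{(r_w^{-1}-r_w)/2}^{\infty}(1+t^2)^{-N/2-1}\,dt$ at $w=1+2iW/\sqrt{N}$ against $\sqrt{\pi/(2N)}\,\mathrm{erfc}(\sqrt{2}\,\mathrm{Im}\,W)$.

```python
# Finite-N check of the limit used in the proof: the t-integral tends to
# sqrt(pi/(2N)) * erfc(sqrt(2) Im W) under the scaling w = 1 + 2iW/sqrt(N).
from math import erfc, exp, log1p, pi, sqrt

def lhs_integral(N, W, steps=100000, length=1.0):
    w = 1 + 2j / sqrt(N) * W      # scaled coordinate, eq. (largeNcomplex)
    r = abs(w)
    a = (1 / r - r) / 2           # lower terminal of the integral
    h = length / steps            # the integrand is negligible beyond a + 1
    # exp(-(N/2+1) log(1+t^2)) avoids overflow of (1+t^2)^(N/2+1)
    return sum(h * exp(-(N / 2 + 1) * log1p((a + (i + 0.5) * h) ** 2))
               for i in range(steps))

def rhs_limit(N, W):
    return sqrt(pi / (2 * N)) * erfc(sqrt(2) * W.imag)

N, W = 1000, 0.3 + 0.4j
print(lhs_integral(N, W) / rhs_limit(N, W))  # close to 1 for large N
```

The finite-$N$ corrections are $O(1/\sqrt{N})$, so the ratio approaches $1$ only slowly.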
\subsection{Averages over characteristic polynomials}
\label{sec:SOEcharpolys}
In \cite{sommers2007, sommers_and_w2008} the authors used an average over characteristic polynomials to obtain the correlation kernels. This method is again successfully applied in \cite{KSZ2010} to the real truncated ensemble (which we will discuss in Chapter \ref{sec:truncs}). As emphasised in \cite{FK07,APS2009} there is a large class of eigenvalue jpdfs such that the eigenvalue density is given in terms of an average over the corresponding characteristic polynomials. This is true of the one-point function (density) for the complex eigenvalues, with $N\to N+2$ (for convenience), in the present generalised eigenvalue problem for which the jpdf is given by (\ref{eqn:q(y)}). To establish this, write
\begin{align}
\label{eqn:charpoly} C_N(z)=\prod_{j=1}^k(z-e_j)\prod_{s=k+1}^{(N+k)/2}(z-w_s)(z-1/\bar{w}_s)
\end{align}
for the characteristic polynomial in the $N\times N$ case of $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$ conditioned to have $k$ real eigenvalues, with eigenvalues transformed according to (\ref{7'}) and (\ref{14.2}). Letting
\begin{align}
\nonumber \tilde{G}_{N}:=\frac{(-1)^{(N/2)(N/2-1)/2}}{2^{N(N-1)/2}} \Gamma((N+1)/2)^{N/2} \Gamma(N/2+1)^{N/2} \prod_{s=1}^{N}\frac{1}{\Gamma(s/2)^2},
\end{align}
which is the pre-factor in (\ref{eqn:ZNxi}), it then follows from (\ref{eqn:q(y)}) and the definition of the density that
\begin{eqnarray}
\label{eqn:charpolydensity} \rho_{(1)}^{(N+2, c)}(z)=\frac{\tilde{G}_{N+2}}{\tilde{G}_N} \frac{1}{|z|^2}\tau(z)\tau(1/\bar{z})(1/\bar{z}-z)\langle C_N(z)C_N(1/\bar{z})\rangle,
\end{eqnarray}
where the superscript $N+2$ denotes the number of eigenvalues in the system. Of course we can therefore read off from (\ref{eqn:rhoc}) the exact form of the average in (\ref{eqn:charpolydensity}). Moreover, in keeping with the development in \cite{APS2009}, we can use our integration methods to compute the more general average $\langle C_N(z_1)C_N(1/\bar{z}_2) \rangle$, which we expect to be closely related to $S_{c,c}(z_1,z_2)_S$, in accordance with known results from the real, complex and real quaternion Ginibre ensembles \cite{kanzieper2001, AV2003, AB07, APS2009, sommers_and_w2008}.
Note that we will also introduce a superscript on $\alpha$ and $\beta$ to indicate the size of system that they relate to, that is $\alpha_{j,l}^{(t)}$ has $j,l=1,...,t$, and $\hat{\alpha}_{s}^{(t)}$ are the corresponding normalisations.
\begin{proposition}
\label{prop:SOEcharpoly}
With the characteristic polynomial $C_N(z)$ given by (\ref{eqn:charpoly}) and $\langle\cdot\rangle$ an average with respect to (\ref{eqn:q(y)}) summed over $k$, one has
\begin{align}
\nonumber &\langle C_N(z_1)C_N(1/\bar{z}_2) \rangle=\frac{\tilde{G}_{N}}{\tilde{G}_{N+2}} (1/\bar{z}_2-z_1)^{-1}\\
\nonumber &\times \sum_{s=0}^{N/2}\frac{1}{\alpha_s^{(N+2)}+ \beta_s^{(N+2)}} \left( z_1^{2s}(1/\bar{z}_2)^{2s+1}-z_1^{2s+1}(1/\bar{z}_2)^{2s}\right)\\
\label{eqn:charpolyD} &=\frac{\tilde{G}_{N}}{\tilde{G}_{N+2}} (1/\bar{z}_2 -z_1)^{-1} \left(\frac{ \tau(z_1)}{|z_1|} \frac{\tau(1/\bar{z}_2)} {|z_2|}\right)^{-1} \left.D_{c,c}(z_1,1/\bar{z}_2) \right|_{N\to N+2},
\end{align}
where it is assumed $N$ is even. Furthermore
\begin{align}
\nonumber \left.S_{c,c}(z_1,z_2)_S \right|_{N \to N+2} &= \frac{\tilde{G}_{N+2}}{\tilde{G}_N} \frac{\tau(z_1)}{|z_1|} \frac{ \tau(1/\bar{z}_2)} {|z_2|}(1/\bar{z}_2-z_1)\langle C_N(z_1)C_N(1/\bar{z}_2) \rangle,
\end{align}
from which we reclaim (\ref{eqn:charpolydensity}).
\end{proposition}
\textit{Proof:} From (\ref{eqn:q(y)}) we see that
\begin{align}
\nonumber C_N(z_1)C_N(1/\bar{z}_2)\mathcal{Q}(Y) =A_{k,N}\prod_{j=1}^k\tau(e_j)\prod_{s=k+1}^{(N+k)/2}\frac{1}{|w_s|^2}\tau(w_s)\tau\left(\frac{1}{\bar{w}_s}\right)\\
\nonumber \times(1/\bar{z}_2-z_1)^{-1}\Delta \left(\mathbf{e}, \mathbf{w}, \mathbf{ \frac{1}{\bar{w}}}, z_1,\frac{1}{\bar{z}_2}\right).
\end{align}
Integrating over $\mathbf{e}$ and $\mathbf{w}$ gives
\begin{align}
\nonumber \langle C_N(z_1)C_N(1/\bar{z}_2)\rangle\: \rule[-8pt]{0.5pt}{18pt}_{\: k \: {\rm fixed}}&= \tilde{G}_N(1/ \bar{z}_2-z_1)^{-1}\\
\nonumber &\times [\varkappa^{N/2}] [\zeta^{k}]\mathrm{Pf}\left[\varkappa \left(\zeta^2 \hat{\alpha}_{j,l}^{(N+2)}+ \hat{\beta}_{j,l}^{(N+2)} \right)+ \gamma_{j,l}^{(N+2)}\right],
\end{align}
where $\gamma_{j,l}^{(t)}=q_{j-1}(z_1)q_{l-1}(1/\bar{z}_2)-q_{l-1}(z_1)q_{j-1}(1/\bar{z}_2)$ $(j,l=1,...,t)$, and recalling from (\ref{def:hatab1}) that the `hat' on $\alpha,\beta$ indicates that $u=v=1$. Note that the matrix
\begin{align}
\nonumber \left[\varkappa \left(\zeta^2 \hat{\alpha}_{j,l}^{(N+2)}+ \hat{\beta }_{j,l}^{(N+2)} \right)+ \gamma_{j,l}^{(N+2)}\right]
\end{align}
is of dimension $(N+2) \times (N+2)$. Summing over $k$ leads to
\begin{align}
\nonumber &\langle C_N(z_1)C_N(1/\bar{z}_2)\rangle:= \sum_{k=0 \atop k \: {\rm even}}^N \langle C_N(z_1)C_N(1/\bar{z}_2)\rangle\: \rule[-8pt]{0.5pt}{18pt}_{\: k \: {\rm fixed}}\\
\nonumber &=\tilde{G}_N (1/\bar{z}_2-z_1)^{-1}[\varkappa^{N/2}]\mathrm{Pf} \left[\varkappa \left(\hat{\alpha}_{j,l}^{(N+2)} + \hat{\beta}_{j,l}^{(N+2)} \right)+ \gamma_{j,l}^{(N+2)}\right].
\end{align}
Using the skew-orthogonal polynomials (\ref{eqn:skew_polys}), we find
\begin{align}
\nonumber &[\varkappa^{N/2}]\mathrm{Pf} \left[\varkappa \left(\hat{\alpha}_{j,l}^{(N+2)} + \hat{\beta}_{j,l}^{(N+2)} \right)+ \gamma_{j,l}^{(N+2)}\right]\\
\nonumber &\qquad =\sum_{s=0}^{N/2}\gamma_{2s+1,2s+2}^{(N+2)}\prod_{j=0 \atop j\neq s}^{N/2} \left( \hat{\alpha}_{j}^{(N+2)} +\hat{\beta}_{j}^{(N+2)} \right).
\end{align}
The evaluation now follows upon recalling the form of
\begin{align}
\nonumber Z_{N+2}(1)_S =1=\tilde{G}_{N+2} \prod_{l=0}^{N/2} \left( \hat{\alpha}_l^{(N+2)} +\hat{\beta}_l^{(N+2)} \right)
\end{align}
implied by (\ref{eqn:Sprobs}).
The expression for $\left.S_{c,c}(z_1,z_2)_S\right|_{N\to N+2}$ is a simple manipulation of (\ref{eqn:charpolyD}).
\hfill $\Box$
\newpage
\section{Truncations of orthogonal matrices}
\setcounter{figure}{0}
\label{sec:truncs}
As discussed in the Introduction, the three homogeneous two-dimensional manifolds with constant Gaussian curvature $\kappa>0,\kappa=0$, or $\kappa<0$ (being the sphere, plane and pseudo- or anti-sphere respectively) correspond to certain random matrix ensembles. We have identified the real Ginibre ensemble (Chapter \ref{sec:GinOE}) with the plane and the real spherical ensemble (Chapter \ref{sec:SOE}) with the sphere. In this chapter we consider the last of the triumvirate: the anti-sphere, which corresponds to ensembles of truncated unitary or orthogonal matrices.
The \textit{real truncated ensemble} consists of sub-blocks of random, Haar-distributed orthogonal matrices (recall (\ref{eqn:Haar})). Enter, stage left, the following definition.
\begin{definition}
With $N=L+M$, let $\mathbf{R}$ be an $N\times N$ Haar distributed random orthogonal matrix. Decompose $\mathbf{R}$ as follows
\begin{align}
\label{def:Rdecomp} \mathbf{R}=\left[\begin{array}{cc}
\mathbf{A} & \mathbf{B}\\
\mathbf{C} & \mathbf{D}
\end{array}\right],
\end{align}
where $\mathbf{A}$ is of size $L\times L$ and $\mathbf{D}$ is of size $M\times M$, with the dimensions of $\mathbf{B}$ and $\mathbf{C}$ implicit.
\end{definition}
We will concern ourselves with the analysis of the bottom-right block, labeled here as $\mathbf{D}$.
\begin{remark}
Since the matrix $\mathbf{R}$ is Haar distributed, its distribution is invariant under left and right multiplication by orthogonal matrices (in particular, by permutation matrices), and so every sub-block of size $M\times M$ manifests the same statistics. We have chosen the bottom-right sub-block to keep our notation consistent with that in \cite{Forrester2010a}, which is, in turn, consistent with \cite{Krish2009}.
\end{remark}
An important distinction between this ensemble and those of the previous chapters in this work is that the eigenvalues of $\mathbf{D}$ (with $L>0$) must lie strictly inside the unit circle. (This consideration has implications for the calculation of the odd correlations, which we will discuss in Chapter \ref{sec:TOEcorrelno}.) This can be seen by considering the induced $2$-norm on $N\times N$ matrices
\begin{align}
\label{def:i2n} ||\mathbf{M}||_2 = \max \{ ||\mathbf{M} x||_2: x\in \mathbb{R}^N, ||x||_2=1\},
\end{align}
where $||x||_2=\sqrt{x_1^2+x_2^2+...+x_N^2}$ is the usual $2$-norm on (real) vectors; (\ref{def:i2n}) is equivalent to the \textit{spectral norm} $\sqrt{\lambda_{\max} (\mathbf{M}^T \mathbf{M})}$, the largest singular value of $\mathbf{M}$, which bounds the absolute value of every eigenvalue of $\mathbf{M}$. We can think of the matrix $\mathbf{D}$ as sitting inside the $N\times N$ matrix
\begin{align}
\nonumber \hat{\mathbf{R}}=\left[\begin{array}{cc}
\mathbf{0} & \mathbf{0} \\
\mathbf{0} & \mathbf{D}
\end{array}\right],
\end{align}
and from the definition (\ref{def:i2n}) we see that $||\hat{\mathbf{R}}||_2 =||\mathbf{D}||_2$. By writing $\hat{\mathbf{R}}=\mathbf{P} \mathbf{R} \mathbf{P}$, where $\mathbf{P}$ is the orthogonal projection onto the last $M$ co-ordinates (it sets the first $L$ entries of a vector to zero), we apply the sub-multiplicative property ($||\mathbf{A}\mathbf{B}||\leq ||\mathbf{A}||\, ||\mathbf{B}||$) of the spectral norm to find $||\hat{\mathbf{R}}||_2 \leq ||\mathbf{P}||_2\, ||\mathbf{R}||_2\, ||\mathbf{P}||_2=||\mathbf{P}||_2^2$, where the equality follows since orthogonal matrices have norm one. Since $||\mathbf{P} x||_2\leq ||x||_2$ for every vector $x$, we have $||\mathbf{P}||_2\leq 1$. Therefore the eigenvalues of $\hat{\mathbf{R}}$, and consequently those of $\mathbf{D}$, must lie in the unit disk.
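This containment is easy to observe numerically. The following sketch (illustrative; the dimensions and seed are arbitrary choices, and the Haar sampler is the standard QR recipe with the sign correction) draws $\mathbf{R}$, truncates, and compares the spectral radius of $\mathbf{D}$ with its spectral norm.

```python
# Truncation of a Haar orthogonal matrix: the eigenvalues of the
# bottom-right M x M block D lie strictly inside the unit disk (a.s.).
import numpy as np

def haar_orthogonal(N, rng):
    # QR decomposition of a Ginibre matrix, with each column of Q
    # rescaled by the sign of the corresponding diagonal entry of R,
    # gives a Haar-distributed orthogonal matrix.
    Z = rng.standard_normal((N, N))
    Q, R = np.linalg.qr(Z)
    return Q * np.sign(np.diag(R))

rng = np.random.default_rng(42)
N, M = 20, 12                                  # L = N - M = 8
D = haar_orthogonal(N, rng)[N - M:, N - M:]    # bottom-right block
spectral_norm = np.linalg.norm(D, 2)           # largest singular value
spectral_radius = np.abs(np.linalg.eigvals(D)).max()
print(spectral_radius, spectral_norm)          # radius <= norm <= 1
```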
\begin{remark}
\label{rem:evals<1}
Note that we can assume that all eigenvalues satisfy $|\lambda|<1$ since, in the space of all Haar distributed random orthogonal matrices, matrices whose truncation yields eigenvalues on the unit circle form a set of measure zero.
\end{remark}
As happened with the Gaussian and spherical ensembles, truncated matrices with complex entries (that is, truncations of unitary matrices) were more successfully dealt with before those with real entries (see \cite{Z&S2000,P&R2003} for the details relating to the complex truncated ensemble). In \cite{Z&S2000}, while focussing primarily on the complex case, the authors briefly discuss the real case and complete some preliminary calculations. Yet it was not until \cite{KSZ2010} that the distributions of the sub-blocks and of their eigenvalues were established. The authors of the latter paper go on to find a Pfaffian form of the correlations and compute the correlation kernel, which largely completes the study, although in \cite{Forrester2010a} the author finds the skew-orthogonal polynomials and establishes a correspondence between the eigenvalues of $\mathbf{D}$ and Kac random polynomials in the particular case when $M=N-1$. The fact that the correlation functions were found before the skew-orthogonal polynomials means (naturally) that the so-called \textit{orthogonal polynomial} method, which we have been using heretofore in this work, was not the method by which the correlations were established. Indeed, the method of \cite{KSZ2010} is based upon earlier work \cite{AV2003,APS2009} using averages over characteristic polynomials, a method we discussed in Chapter \ref{sec:SOEcharpolys}. In this chapter we proceed with the same five-step procedure as used in earlier chapters, with the caveat that it is not yet known how to derive the skew-orthogonal polynomials independently of the correlation kernel. (In \cite{ForNagRai2006} and \cite{AkeKiePhi2010} the authors present determinantal and Pfaffian (respectively) formulae for finding skew-orthogonal polynomials in the basis of the monomials; however, it is not known how to perform the integrals this method entails; see also \cite[Ex. 6.1 Q4]{forrester?} and \cite{Eyn2001}.)
We expect that there are three limiting regimes of interest, which correspond to the behaviour of $\alpha:=M/N$: \textit{i}) $\alpha\to 1$, where $L$ is fixed or grows only subdominantly; \textit{ii}) $0<\alpha<1$, where both $L$ and $M$ grow proportionately to $N$, that is,
\begin{align}
\label{def:alphaML} M=\alpha N &&\mbox{and}&& L=\gamma N=(1-\alpha)N;
\end{align}
\textit{iii}) $\alpha\to 0$, where $M$ is fixed or grows subdominantly. If $\alpha$ is restricted to be less than $1$ (cases (\textit{ii}) and (\textit{iii})), then the ensemble will exhibit \textit{weakly orthogonal} behaviour, which matches that of the real Ginibre ensemble. This is anticipated since, with a relatively large number of rows and columns deleted, the elements of $\mathbf{D}$ only weakly feel the orthogonality constraint imposed on $\mathbf{R}$, and so they should be more or less independent Gaussians. There is a well-known theorem of Jiang \cite{Jiang2006} to this effect. On the other hand, if $\alpha\to 1$ then the orthogonality of $\mathbf{R}$ is keenly felt and so we call this case the \textit{strongly orthogonal} regime. In this limit, the behaviour is analogous to that of the real spherical ensemble, with the spherical geometry replaced by that of anti-spherical geometry. This will lead us to an anti-spherical conjecture, analogous to Proposition \ref{prop:sphlaw}, at the end of this chapter.
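To make the weak-orthogonality heuristic concrete, the following numerical sketch (assuming numpy is available; the helper \texttt{haar\_orthogonal} and all parameter values are our own, chosen purely for illustration) checks that the entries of a small sub-block of a Haar distributed orthogonal matrix are approximately centred Gaussians of variance $1/N$, consistent with the theorem of Jiang \cite{Jiang2006}:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_orthogonal(n, rng):
    # QR decomposition of a Ginibre matrix, with the diagonal-sign
    # correction that makes the resulting Q Haar distributed
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))

N, M = 100, 4  # M << N: the weakly orthogonal regime
samples = np.concatenate(
    [haar_orthogonal(N, rng)[:M, :M].ravel() for _ in range(500)]
)

# Entries of the truncation should be close to N(0, 1/N)
print(samples.mean(), samples.var() * N)
```

Each entry of a Haar orthogonal matrix has mean zero and variance exactly $1/N$; the Gaussian approximation for the joint law of the entries, however, is only valid when $M$ is small compared to $N$.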
\subsection{Matrix distribution}
\label{sec:TOEmjpdf}
Since the matrix $\mathbf{R}$ is orthogonal, we must have $\mathbf{C} \mathbf{C}^T +\mathbf{D} \mathbf{D}^T=\mathbf{1}_M$, so the joint distribution of $\mathbf{C}$ and $\mathbf{D}$ is
\begin{align}
\label{eqn:TOEmatdist2} P(\mathbf{C}, \mathbf{D})\propto \delta (\mathbf{C} \mathbf{C}^T +\mathbf{D} \mathbf{D}^T- \mathbf{1}_M).
\end{align}
To furnish the normalisation we begin by noting that this constraint defines the manifold $V_{N,M}=\{ [\mathbf{C}\;\mathbf{D}]\in \mathbb{R}^{M\times N} : \mathbf{C}\bC^T +\mathbf{D}\bD^T =\mathbf{1}_M \}$ in the set of all $M\times N$ matrices. We think of a point of $V_{N,M}$ as a set of $M$ orthonormal vectors of length $N$; this manifold is called the \textit{Stiefel manifold}. A well-known algebraic result tells us that the Stiefel manifold can be identified with a coset space of the orthogonal group.
\begin{lemma}
\label{lem:orbstab1} Let $X$ be a set, $G$ a group that acts on $X$ and $G_x$ the stabiliser of $x\in X$ under $G$, that is $G_x=\{g\in G : g * x=x \}$, where $*$ denotes the group action. Then
\begin{align}
\label{eqn:orbstab1} G/G_x \cong G * x.
\end{align}
\end{lemma}
\textit{Proof}: Define the map $\psi : G/G_x \to G*x$ by $\psi(g_x)=g*x$, where $g$ is a representative of the equivalence class $g_x\in G/G_x$. Since any element $g*x\in G*x$ has a corresponding class in $G/G_x$, we have that $\psi$ is surjective. To establish injectivity, let $g_x,h_x\in G/G_x$; then $\psi(g_x)=\psi(h_x) \Leftrightarrow g*x=h*x \Leftrightarrow h^{-1}g*x=x \Leftrightarrow h^{-1}g \in G_x \Leftrightarrow h \equiv g \mod G_x \Leftrightarrow g_x=h_x$. Note that we have used the fact that $G_x$ is a subgroup of $G$ to establish the \textit{only if} direction in the third relation. Hence $\psi$ is a bijection, and the result follows.
\hfill $\Box$
\noindent We call $G*x$ the \textit{orbit} of the element $x\in X$ under the action of $G$.
If $X$ and $G$ are finite then we can deduce the \textit{orbit-stabiliser theorem}.
\begin{proposition}[Orbit-Stabiliser Theorem]
\label{prop:orbstab}
Let $X$ be a finite set, $G$ a finite group and $G_x$ the stabiliser of $x\in X$ under $G$. Then, with $\mathrm{Orb} (x):= G*x$ the orbit of $x$ under $G$, we have
\begin{align}
\nonumber |G|/|G_x|=|\mathrm{Orb} (x)|,
\end{align}
where $|H|$ is the order (or cardinality) of the group $H$.
\end{proposition}
\textit{Proof}: By Lagrange's (group) theorem we have that $|G|=[G:H]\cdot |H|$ for any subgroup $H$ of $G$, where $[G:H]$ is defined as the cardinality of the coset space $G/H$, called the \textit{index} of $H$ in $G$. Since $\mathrm{Orb} (x):= G*x$, the result follows by taking $H=G_x$ and using the bijection (\ref{eqn:orbstab1}).
\hfill $\Box$
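As a sanity check of this counting statement, here is a minimal finite example (our own construction, using only the standard library): the symmetric group $S_4$ acting on $\{0,1,2,3\}$.

```python
from itertools import permutations

# G = S_4, acting on X = {0,1,2,3} by g * x = g[x]
X = range(4)
G = list(permutations(X))

x = 0
stabiliser = [g for g in G if g[x] == x]  # G_x: permutations fixing 0
orbit = {g[x] for g in G}                 # Orb(x) = G * x

print(len(G), len(stabiliser), len(orbit))  # 24 6 4
```

Indeed $|G|/|G_x| = 24/6 = 4 = |\mathrm{Orb}(x)|$, in accordance with the proposition.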
Lagrange's theorem is a statement about how many cosets of $H$ there are in $G$; however, if one is interested in the volume of an infinite set, then we need to make precise the statement ``$G$ is $\gamma$ times larger than $H$''. By carefully defining suitable measures, taking limits and viewing the isomorphism in (\ref{eqn:orbstab1}) as a topological isomorphism, one establishes that \cite{nachbin1965}
\begin{align}
\label{eqn:vol/vol} \mathrm{vol} (G/G_x) = \frac{\mathrm{vol} (G)} {\mathrm{vol} (G_x)} = \mathrm{vol} (\mathrm{Orb} (x)),
\end{align}
where the volume is with respect to Haar measure (\ref{def:volON}).
To apply these results to the problem at hand we take $G=O(N)$, the orthogonal group of degree $N$, which acts on $V_{N,M}$. Note that any set of orthonormal vectors can be transformed into any other under the operation of $O(N)$. This is the same as saying that the action of $O(N)$ on $V_{N,M}$ is transitive, that is $O(N)*f=V_{N,M}$, for any $f\in V_{N,M}$. To pick a specific stabiliser, we may choose $f_0=\{ e_1,...,e_M\}$, where the $e_j$ are the standard basis vectors in $\mathbb{R}^N$. Then
\begin{align}
\nonumber G_{f_0}=\left\{ \left[
\begin{array}{cc}
\mathbf{1}_{M} & \mathbf{0}\\
\mathbf{0} & \mathbf{Q}_{L\times L}
\end{array} \right] : \mathbf{Q}_{L\times L} \mbox{ is an orthogonal $L\times L$ matrix} \right\} \cong O(L).
\end{align}
Combining this information, we have from (\ref{eqn:orbstab1}) that $O(N)/O(L)\cong V_{N,M}$ (where, as noted above, the isomorphism is to be interpreted as a homeomorphism between locally compact topological spaces). From (\ref{eqn:vol/vol}) we then have
\begin{align}
\nonumber \mathrm{vol}(V_{N,M})=\frac{\mathrm{vol} (O(N))}{\mathrm{vol} (O(L))},
\end{align}
where $\mathrm{vol}(O(N))$ is the volume of $O(N)$ with respect to Haar measure, and is given by (\ref{def:volO}). We have therefore found the normalisation in (\ref{eqn:TOEmatdist2}) giving \cite{KSZ2010}
\begin{align}
\label{eqn:TOEmatpdf1} P(\mathbf{C}, \mathbf{D})=\frac{\mathrm{vol} (O(L))}{\mathrm{vol} (O(N))} \; \delta (\mathbf{C} \mathbf{C}^T +\mathbf{D} \mathbf{D}^T- \mathbf{1}_M).
\end{align}
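The constraint underlying (\ref{eqn:TOEmatdist2}) is easy to verify numerically. In the sketch below (numpy assumed; \texttt{haar\_orthogonal} is our own helper, and we take $\mathbf{C}$ and $\mathbf{D}$ to be the left $M\times L$ and right $M\times M$ blocks of the first $M$ rows of $\mathbf{R}$), the orthonormality of the rows of $\mathbf{R}$ forces $\mathbf{C}\mathbf{C}^T+\mathbf{D}\mathbf{D}^T=\mathbf{1}_M$ to machine precision:

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_orthogonal(n, rng):
    # Haar distributed orthogonal matrix via sign-corrected QR of a Ginibre matrix
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))

N, M = 10, 4
L = N - M
R = haar_orthogonal(N, rng)
C, D = R[:M, :L], R[:M, L:]  # split the first M rows into M x L and M x M blocks

# rows of R are orthonormal, so C C^T + D D^T = 1_M
print(np.max(np.abs(C @ C.T + D @ D.T - np.eye(M))))
```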
Obtaining the distribution of the sub-block $\mathbf{D}$ itself, however, is a somewhat delicate procedure. For small $M$ (compared to $L$) the elements of the sub-block are roughly independent Gaussians, and the distribution is smooth \cite{Jiang2006}. But when $M$ becomes too large the orthogonality of $\mathbf{R}$ introduces singular effects into the distribution. This is also seen in the analogous case of truncations of unitary matrices \cite{JN03}. We will proceed to integrate over the matrices $\mathbf{C}$ in (\ref{eqn:TOEmatpdf1}), although we will find that we must accept the restriction $L\geq M$. Interestingly, it will turn out that the subsequent eigenvalue jpdf is insensitive to these concerns.
\begin{proposition}[\cite{Forrester2006,KSZ2010}]
\label{prop:TOEelpdf}
With $L\geq M$ the probability density function for $\mathbf{D}$, an $M\times M$ sub-block of $\mathbf{R}$ is
\begin{align}
\label{eqn:truncpdf} P(\mathbf{D})=\frac{(\mathrm{vol}(O(L)))^2}{\mathrm{vol}(O(N))\mathrm{vol}(O(L-M))}\det(\mathbf{1}-\mathbf{D}^T\mathbf{D})^{(L-M-1)/2}.
\end{align}
\end{proposition}
\textit{Proof}: To obtain the matrix pdf of $\mathbf{D}$ we will integrate over the matrices $\mathbf{C}$ in (\ref{eqn:TOEmatpdf1}). First note that the delta function therein can be rewritten as the product of delta functions
\begin{align}
\nonumber \prod_{j=1}^M \delta(\hat{c}_{jj}+\hat{d}_{jj}-1) \prod_{1\leq i < j \leq M} \delta (\hat{c}_{ij} +\hat{d}_{ij}),
\end{align}
where $\mathbf{C} \mathbf{C}^T=[\hat{c}_{i,j}]_{i,j=1,...,M}$ and $\mathbf{D} \mathbf{D}^T=[\hat{d}_{i,j}]_{i,j=1,...,M}$. Using a Fourier integral representation of the delta function we can write
\begin{align}
\nonumber \delta(\hat{c}_{jj}+\hat{d}_{jj}-1) &= \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i (\hat{c}_{jj} +\hat{d}_{jj}-1)h_{jj}}dh_{jj},\\
\nonumber \delta (\hat{c}_{ij} +\hat{d}_{ij}) &=\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{i (\hat{c}_{ij} +\hat{d}_{ij})h_{ij}}dh_{ij},
\end{align}
where the $h_{ij}$ are real variables, and so
\begin{align}
\nonumber &\int \delta (\mathbf{C} \mathbf{C}^T + \mathbf{D} \mathbf{D}^T -\mathbf{1}_M) (d\mathbf{C}) =\\
\label{eqn:TOEdeltainteg1} &\left( \frac{1}{2\pi} \right)^{M(M+1)/2} \int (d\mathbf{C}) \int (d\mathbf{H}) \; e^{i \: \mathrm{Tr} (\mathbf{H}(\mathbf{C}\bC^T +\mathbf{D}\bD^T -\mathbf{1}_M))},
\end{align}
where $\mathbf{H}$ is an $M\times M$ real, symmetric matrix with diagonal elements $h_{jj}$ and off-diagonal elements $h_{ij}/2$. Perturbing $\mathbf{H}$ via $\mathbf{H} \to \mathbf{H} - i\mu \mathbf{1}_M$, where $\mu>0$, will enable us to use the integral evaluation \cite[Proposition 3.2.8]{forrester?}
\begin{align}
\nonumber &I_{m,n}(\mathbf{Q}_m):= \int e^{\frac{i}{2} \mathrm{Tr} (\mathbf{H}_m \mathbf{Q}_m)} \big(\det (\mathbf{H}_m-\mu \mathbf{1}_m) \big)^{-n} (d\mathbf{H}_m)\\
\label{eqn:Imninteg} & =\frac{2^m \pi^{m(m+1)/2} i^{m} (-1)^{m(m-1)/2}} { \prod_{j=n-m+1}^n \Gamma(j) } \left( \det \left( \frac{i}{2} \mathbf{Q}_m \right) \right)^{n-(m+1)/2} e^{\frac{i}{2} \mu \mathrm{Tr} (\mathbf{Q}_m)},
\end{align}
with $n\geq m/2$, where $\mathbf{H}_m$ and $\mathbf{Q}_m$ are $m\times m$ real, symmetric matrices. (From this point we will only concern ourselves with proportional relations; the normalisation will be calculated \textit{post hoc}.) Introducing the aforementioned perturbation, we rewrite (\ref{eqn:TOEdeltainteg1}) as
\begin{align}
\nonumber &\int \delta (\mathbf{C} \mathbf{C}^T + \mathbf{D} \mathbf{D}^T -\mathbf{1}_M) (d\mathbf{C})\\
\nonumber &\propto \lim_{\mu \to 0^{+}} \int (d\mathbf{C}) \int (d\mathbf{H}) \; e^{i \: \mathrm{Tr} ((\mathbf{H}- i\mu\mathbf{1}_M) (\mathbf{C}\bC^T +\mathbf{D}\bD^T -\mathbf{1}_M))}\\
\nonumber &\propto \lim_{\mu \to 0^{+}} \int (d\mathbf{C}) \int (d\mathbf{H}) \; \det (\mathbf{H} -i\mu\mathbf{1}_M)^{-L/2} \; e^{i \: \mathrm{Tr} (\mathbf{C}\bC^T +(\mathbf{H}- i\mu\mathbf{1}_M) (\mathbf{D}\bD^T -\mathbf{1}_M))},
\end{align}
where we have changed variables $\mathbf{C} \to (\mathbf{H}-i \mu\mathbf{1}_M)^{1/2}\: \mathbf{C}$, which introduces a Jacobian of $\det (\mathbf{H} -i\mu\mathbf{1}_M)^{-L/2}$ using Lemma \ref{lem:alpha_tensor_beta}. We may now interchange the order of integration and perform the integrals over $\mathbf{C}$ to find that the matrix pdf is proportional to
\begin{align}
\nonumber &\lim_{\mu \to 0^+} \int (d\mathbf{H}) \; \det (\mathbf{H} -i\mu\mathbf{1}_M)^{-L/2} \; e^{i \: \mathrm{Tr} \big( (\mathbf{H}- i\mu\mathbf{1}_M) (\mathbf{D}\bD^T -\mathbf{1}_M) \big)}\\
\nonumber &= \lim_{\mu \to 0^+} \int (d\mathbf{H}) \; \det (\mathbf{H} -i\mu\mathbf{1}_M)^{-L/2} \; e^{i \: \mathrm{Tr} \big( \mathbf{H} (\mathbf{D}\bD^T -\mathbf{1}_M) \big)}.
\end{align}
With $m=M, n=L/2$ and $\mathbf{Q}_m=2(\mathbf{D}\bD^T -\mathbf{1}_M)$ the preceding limit can be written in terms of the integral $I_{m,n}$ and we may now make use of (\ref{eqn:Imninteg}) to yield
\begin{align}
\nonumber &\lim_{\mu \to 0^+} I_{M,L/2} (2(\mathbf{D}\bD^T -\mathbf{1}_M))\\
\nonumber &\propto \lim_{\mu \to 0^+} (\det (\mathbf{1}_M -\mathbf{D} \mathbf{D}^T) )^{(L-M-1)/2} e^{\mu \mathrm{Tr} (\mathbf{D} \mathbf{D}^T -\mathbf{1}_M) }\\
\nonumber &= \det (\mathbf{1}_M -\mathbf{D} \mathbf{D}^T)^{(L-M-1)/2}.
\end{align}
Note that, by the condition on $m$ and $n$ in (\ref{eqn:Imninteg}), we must have $L\geq M$.
The proportionality constant $\tilde{C}_{M,L}$ is defined by the condition that
\begin{align}
\label{eqn:TOEmpjdfnorm1} \tilde{C}_{M,L} \int \det (\mathbf{1}_M -\mathbf{D}^T\mathbf{D})^{(L-M-1)/2} (d\mathbf{D}) =1.
\end{align}
To calculate $\tilde{C}_{M,L}$ we will apply the change of variables $\mathbf{D}^T\mathbf{D} = \mathbf{G}$, and so we first calculate the constant for this transformation as follows:
\begin{align}
\nonumber 1& =\left(\frac{1}{2\pi} \right)^{M^2/2} \int e^{-\mathrm{Tr} (\mathbf{D}^T\mathbf{D})/2} (d\mathbf{D})\\
\nonumber &=b_M \left(\frac{1}{2\pi} \right)^{M^2/2} \int (\det \mathbf{G})^{-1/2} e^{-\mathrm{Tr} \mathbf{G}/2} (d\mathbf{G})\\
\label{eqn:TOEmpjdfnorm2} &=B_M \left(\frac{1}{2\pi} \right)^{M^2/2} \int_0^{\infty}d\lambda_1 \cdot\cdot\cdot \int_0^{\infty} d\lambda_M \prod_{j=1}^M \lambda_{j}^{-1/2} e^{-\lambda_j /2} \prod_{j<l} |\lambda_l -\lambda_j|,
\end{align}
where the $\lambda_j$ are the eigenvalues of $\mathbf{G}$. To obtain the second equality we have used Lemma \ref{lem:tilde_const_covarbs} and to obtain the third we have used Proposition \ref{prop:GOE_J}; $b_M$ and $B_M$ respectively incorporate the proportionality constants for these transformations. Note that the integral in the second equality is over positive definite matrices. With Corollary \ref{cor:varselb} we can evaluate the integral in (\ref{eqn:TOEmpjdfnorm2}) and find
\begin{align}
\nonumber B_M^{-1}&= \pi^{-M^2/2} \; \prod_{j=0}^{M-1} \frac{\Gamma ((j+3)/2) \Gamma ((j+1)/2)} {\Gamma (3/2)}\\
\label{eqn:BM} &= \pi^{-M(M+1)/2} \; \Gamma (M+1) \prod_{j=1}^{M} (\Gamma (j/2))^2.
\end{align}
Now we change variables in (\ref{eqn:TOEmpjdfnorm1}) to obtain
\begin{align}
\nonumber \left(\tilde{C}_{M,L} \right)^{-1} &= b_M \int (\det \mathbf{G})^{-1/2} \det (\mathbf{1}_M -\mathbf{G})^{(L-M-1)/2} (d \mathbf{G})\\
\nonumber &= B_M \int_0^1 d\lambda_1 \cdot\cdot\cdot \int_0^1 d\lambda_M \prod_{j=1}^M \lambda_j^{-1/2} (1-\lambda_j)^{(L-M-1)/2} \prod_{j<l} |\lambda_l- \lambda_j|.
\end{align}
With $B_M$ from (\ref{eqn:BM}) and using (\ref{eqn:Selbint}) we then have
\begin{align}
\nonumber \left(\tilde{C}_{M,L} \right)^{-1} &= \frac{\pi^{M(M+1)/2}} {\Gamma(M+1) \prod_{j=1}^M (\Gamma(j/2))^2}\\
\nonumber &\times \prod_{j=0}^{M-1} \frac{\Gamma ((j+1)/2) \Gamma ((L-M+1+j)/2) \Gamma ((j+3)/2)} {\Gamma ((L+1+j)/2) \Gamma (3/2)}\\
\nonumber &=\pi^{M^2/2} \prod_{j=0}^{M-1} \frac{\Gamma((L-M+1+j)/2)} {\Gamma((L+1+j)/2)}.
\end{align}
By noting that
\begin{align}
\nonumber \prod_{j=0}^{M-1} \Gamma ((L+1+j)/2) &= \frac{\prod_{j=1}^{L+M} \Gamma (j/2)}{\prod_{j=1}^{L} \Gamma (j/2)},\\
\nonumber \prod_{j=0}^{M-1} \Gamma ((L-M+1+j)/2) &= \frac{\prod_{j=1}^L \Gamma(j/2)}{\prod_{j=1}^{L-M} \Gamma (j/2)}
\end{align}
we can manipulate $\tilde{C}_{M,L}$ to give us the result.
\hfill $\Box$
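The normalisation just derived can be checked numerically in the simplest case. For $M=1$ the proposition predicts $(\tilde{C}_{1,L})^{-1}=\int_{-1}^{1}(1-x^2)^{(L-2)/2}\,dx=\sqrt{\pi}\,\Gamma(L/2)/\Gamma((L+1)/2)$, which the following standard-library sketch (our own code, midpoint rule) confirms:

```python
from math import gamma, pi, sqrt

def integral(L, n=100000):
    # midpoint-rule evaluation of int_{-1}^{1} (1 - x^2)^((L-2)/2) dx
    h = 2.0 / n
    return sum((1 - (-1 + (k + 0.5) * h) ** 2) ** ((L - 2) / 2) for k in range(n)) * h

for L in (2, 3, 5, 8):
    predicted = sqrt(pi) * gamma(L / 2) / gamma((L + 1) / 2)
    print(L, integral(L), predicted)
```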
As with the ensembles in our previous chapters, we find that computer simulations are easily generated and provide an immediate guide to the results we can expect. In Figure \ref{fig:TOE_sims} we have plotted eigenvalues from various truncations of 50 independent $75\times 75$ Haar distributed random orthogonal matrices. Note that for small $L$ (large $M$) the eigenvalues are largely concentrated near the unit circle, and as $L$ grows towards $N$ they are restricted to a smaller disk, becoming more uniformly spread. Also note that, in all cases, there is clearly a non-zero density of real eigenvalues, which, as we have noted in previous chapters, is a distinctive feature of ensembles of real matrices.
\begin{remark}
The generation of Haar distributed random matrices was done by first generating a random Gaussian matrix (that is, a real Ginibre matrix) and then applying Gram-Schmidt orthogonalisation to obtain a Haar distributed orthogonal matrix. For a very readable introductory description of this see \cite{Diaconis2005}, or for a more technical treatment see \cite{P&R2003}.
\end{remark}
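A simulation in the spirit of Figure \ref{fig:TOE_sims} can be set up in a few lines. The sketch below (numpy assumed; parameters our own) generates Haar orthogonal matrices by the numerically convenient equivalent of the Gram-Schmidt procedure, namely QR decomposition with a diagonal-sign correction, truncates them, and confirms the two qualitative features noted above: all eigenvalues lie strictly inside the unit disk, and real eigenvalues occur with non-zero frequency.

```python
import numpy as np

rng = np.random.default_rng(7)

def haar_orthogonal(n, rng):
    # Gram-Schmidt on a Ginibre matrix, implemented as sign-corrected QR
    g = rng.standard_normal((n, n))
    q, r = np.linalg.qr(g)
    return q * np.sign(np.diag(r))

N, M = 75, 38  # as in one of the panels of the figure
evals = np.concatenate(
    [np.linalg.eigvals(haar_orthogonal(N, rng)[:M, :M]) for _ in range(50)]
)

print(np.max(np.abs(evals)))                    # strictly less than 1
print(int(np.sum(np.abs(evals.imag) < 1e-12)))  # number of (numerically) real eigenvalues
```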
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.4]{truncsimn1aebook.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{truncsimn2a100.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{truncsimn3a100.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{truncsimn4a100.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{truncsimn5a100.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{truncsimn6a100.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{truncsimn7aebook.pdf}
\hspace{3ex}\includegraphics[scale=0.4]{truncsimn8a.pdf}
\caption[Simulated eigenvalue plots of real truncated ensembles with $N=75$ and $M=74, 72, 70, 42, 38, 36, 15, 5$.]{Eigenvalue plots for truncations of $50$ independent $75\times 75$ random Haar distributed orthogonal matrices, with (from left to right, top to bottom) $M=74,72,70,42,38,36,15,5$.}
\label{fig:TOE_sims}
\end{center}
\end{figure}
\subsection{Eigenvalue distribution}
In order to write the eigenvalue distribution in a concise way we will use the following definition.
\begin{definition}
With $\mu\in\mathbb{C}$ let
\begin{align}
\label{def:truncw} \omega(\mu)=\left\{\begin{array}{ll}
\left(\frac{L(L-1)}{2\pi}|1-\mu^2|^{L-2}\int_{2|\mathrm{Im}(\mu)|/|1-\mu^2|}^{1}(1-t^2)^{(L-3)/2}dt\right)^{1/2},& L>1,\\
\left(\frac{1}{2\pi}\right)^{1/2}|1-\mu^2|^{-1/2},&L=1.
\end{array}
\right.
\end{align}
\end{definition}
Note that the integral in (\ref{def:truncw}) (for $L>1$) will play the same role here as the $\mathrm{erfc}$ function did in (\ref{eqn:GinOEjpdf}) for the real Ginibre ensemble, and as the integral in the function $\tau(x)$ did in (\ref{eqn:q(y)}) for the spherical ensemble. If $\mu=x\in\mathbb{R}$ then the integral in the first case of (\ref{def:truncw}) reduces to a Beta function,
\begin{align}
\nonumber \int_{0}^{1} (1-t^2)^{(L-3)/2}dt &= \frac{1}{2} \int_{0}^{1} t^{-1/2} (1-t)^{(L-3)/2}dt \\
\label{eqn:Bxy1} &=: \frac{1}{2} \; B\left( \frac{1} {2},\frac{L-1}{2} \right) = \frac{\sqrt{\pi}}{2} \frac{\Gamma((L-1)/2)}{\Gamma (L/2)}
\end{align}
and so
\begin{align}
\label{eqn:TOErweight} \omega(x)= \frac{(1-x^2)^{L/2-1}}{\sqrt{2} \; \pi^{1/4}}\left(L\frac{\Gamma((L+1)/2)}{\Gamma(L/2)}\right)^{1/2},
\end{align}
which, with $L=1$, gives us the second case in (\ref{def:truncw}) immediately.
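The equality between the integral form (\ref{def:truncw}) and the closed form (\ref{eqn:TOErweight}) at real argument is easily verified numerically; the following standard-library sketch (our own code, midpoint rule) compares the two for several $L>1$:

```python
from math import gamma, pi, sqrt

def omega_integral(L, x, n=100000):
    # the L > 1 case of the weight at real mu = x, with the integral done numerically
    h = 1.0 / n
    I = sum((1 - ((k + 0.5) * h) ** 2) ** ((L - 3) / 2) for k in range(n)) * h
    return sqrt(L * (L - 1) / (2 * pi) * (1 - x * x) ** (L - 2) * I)

def omega_closed(L, x):
    # the Beta-function evaluation of the same weight
    return ((1 - x * x) ** (L / 2 - 1) / (sqrt(2) * pi ** 0.25)
            * sqrt(L * gamma((L + 1) / 2) / gamma(L / 2)))

for L in (3, 4, 7):
    print(L, omega_integral(L, 0.3), omega_closed(L, 0.3))
```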
At the end of the previous chapter we found the explicit distribution of the sub-block matrices $\mathbf{D}$, under the restriction that $M \leq L$, by integrating out the dependence on the sub-block $\mathbf{C}$. This result, (\ref{eqn:truncpdf}), gives us a convenient way of finding the eigenvalue distribution using the tools that we successfully employed to obtain the distribution of the eigenvalues in the real Ginibre and real spherical ensembles in earlier chapters. The single caveat is that we have the aforementioned restriction $M \leq L$, which, as discussed in the introduction to this chapter, corresponds to a weakly orthogonal regime. The need for this restriction with this approach can be seen in (\ref{eqn:TOEmpjdfnorm1}), which can only be satisfied for $(L-M-1)/2\geq -1/2$. It turns out that the eigenvalue jpdf for this restricted case is identical to that in the general case. We will proceed with the calculation of the jpdf, assuming $M \leq L$, in the same fashion that led to (\ref{eqn:q(y)}). This will be followed by a discussion of how to establish the general case.
\begin{proposition}[\cite{KSZ2010}]
\label{prop:Tejpdf}
Let $\mathbf{D}$ be an $M\times M$ sub-block of an $(L+M)\times (L+M)$ Haar distributed random orthogonal matrix as in (\ref{def:Rdecomp}), with $k$ real eigenvalues $\Lambda =\{\lambda_1,...,\lambda_k\}$ and $(M-k)/2$ complex conjugate pairs of eigenvalues $W =\{z_1,\bar{z}_1 ,... ,z_{(M-k)/2}, \bar{z}_{(M-k)/2}\}$. With the eigenvalues ordered as in (\ref{9'}) the eigenvalue jpdf is
\begin{align}
\label{eqn:TOEevaljpdf} Q(\Lambda, W)_T&=C_{M,L}\prod_{j=1}^k \omega(\lambda_j)\prod_{j=1}^{(M-k)/2}2i\;\omega(z_j)^2 \; \Delta(\Lambda \cup W),
\end{align}
where $\Delta(\{x_j \})=\prod_{i<j}(x_j-x_i)$ as in Proposition \ref{prop:GinOE_eval_jpdf},
\begin{align}
\label{def:CML} C_{M,L}:=\frac{\mathrm{vol}(O(L))\mathrm{vol}(O(M))}{\mathrm{vol}(O(L+M))}\left(\frac{(2\pi)^L}{L!}\right)^{M/2},
\end{align}
and $\mathrm{vol}(O(X))$ is from (\ref{def:volO}).
\end{proposition}
\begin{remark}
Note that (\ref{def:CML}) is the corrected normalisation; it was incorrectly stated in \cite{KSZ2010}.
\end{remark}
\textit{Proof}: Letting $\tilde{C}_{M,L}$ stand for the prefactor in (\ref{eqn:truncpdf}) we see that
\begin{align}
\nonumber \tilde{C}_{M,L} \det(1-\mathbf{D}^T\mathbf{D})^{(L-M-1)/2}
\end{align}
is structurally similar to (\ref{eqn:elementjpdf}) and so we expect that we may apply a Schur decomposition (\ref{eqn:GinOE_decomp}), followed by the iterated integration technique of Chapter \ref{sec:Sevaldist}, to make progress. We write $\mathbf{D}=\mathbf{Q}\mathbf{R}_M\mathbf{Q}^T$, where $\mathbf{R}_M$ is the block upper-triangular matrix (\ref{def:triangular_mat}) with the diagonal $1\times 1$ and $2\times 2$ blocks corresponding to the real and complex conjugate pairs of eigenvalues respectively, and $\mathbf{Q}$ is the real orthogonal matrix of corresponding eigenvectors (with the restriction that the first row of $\mathbf{Q}$ is positive). With the ordering (\ref{9'}) the decomposition is unique.
The Jacobian of the change of variables from the elements of $\mathbf{D}$ to the eigenvalues of $\mathbf{D}$ is given by Proposition \ref{prop:GinJ},
\begin{align}
\nonumber (d\mathbf{D})&=2^{(M-k)/2}\prod_{j<p}|\lambda(R_{pp})-\lambda(R_{jj})| \prod_{l=k+1}^{(M+k)/2} |b_l-c_l|\\
\nonumber &\times(d\tilde{\mathbf{R}}_M)(\mathbf{Q}^Td\mathbf{Q}) \prod_{s=1}^{k}d\lambda_s \prod_{l=k+1}^{(M+k)/2}dx_ldb_ldc_l,
\end{align}
and we recall the integral over $(\mathbf{Q}^Td\mathbf{Q})$ from (\ref{eqn:RTdR_integ}). By decomposing
\begin{align}
\nonumber \mathbf{R}_M=\left[\begin{array}{cc}
\mathbf{R}_{M-2} & u_{M-2}\\
\mathbf{0}^T & z_m
\end{array}\right]
\end{align}
as in (\ref{eqn:RNdecomp}), we may now directly apply the iterated integration technique (for the complex eigenvalue columns) of Chapter \ref{sec:iitc} by replacing $\det(\mathbf{1}_N+ \mathbf{R}_N\mathbf{R}_N^T)^{-N}$ therein with $\det(\mathbf{1}_M- \mathbf{R}_M\mathbf{R}_M^T)^{(L-M-1)/2}$, giving
\begin{align}
\nonumber &\int\det (\mathbf{1}_M- \mathbf{R}_M\mathbf{R}_M^T)^{(L-M-1)/2} (du_{M-2})\cdot\cdot\cdot (du_{k+2})\\
\nonumber &=\det (\mathbf{1}_k-\mathbf{R}_k\mathbf{R}_k^T)^{(L-k-1)/2} \prod_{s= k+1}^{(M+k)/2} \det (\mathbf{1}_2-z_sz_s^T )^{(L-3)/2}\\
\label{eqn:Tdet1-RMRMT} &\times\prod_{s=0}^{(M-k)/2-1}\int \det (\mathbf{1}_2 -v_{M-2s-2}^Tv_{M-2s-2})^{s+(L-M-1)/2} (dv_{M-2s-2}),
\end{align}
where $v_j=(\mathbf{1}_j- \mathbf{R}_j\mathbf{R}_j^T)^{-1/2} u_{j} (\mathbf{1}_2- z_{m-(M-2-j)/2}z_{m-(M-2-j)/2}^T)^{-1/2}$. Note that so far we have only changed variables and have not performed any of the integrals on the LHS of (\ref{eqn:Tdet1-RMRMT}). The task now is to perform the integrals on the RHS.
Defining the $2\times 2$ matrix $\mathbf{C}:= v_{M-2s-2}^T v_{M-2s-2}$ we use Lemma \ref{lem:tilde_const_covarbs} to write
\begin{align}
\label{eqn:Tctildecov} (dv_{M-2s-2}) = \tilde{c} \: \left(\det \mathbf{C}\right)^{(M-2s-5)/2} (d\mathbf{C}).
\end{align}
In order to calculate $\tilde{c}$ (since it is independent of $v_{M-2s-2}$), we can take the elements of $v_{M-2s-2}$ to be independent variables in the interval $(-\infty, \infty)$, as was done in (\ref{eqn:Sctilde}). Then, using the notation $p=M-2s-2, \; q=(M-2s-5)/2$ for ease of expression,
\begin{align}
\nonumber \tilde{c} = \frac{\int e^{-\mathrm{Tr} \: v_{p}^T v_{p}} (d v_{p})} {\int e^{-\mathrm{Tr} \: \mathbf{C}} \left(\det \mathbf{C}\right)^{q} (d\mathbf{C})} = \frac{\pi^{p}} {\int_0^{\infty}dx_1 \int_0^{\infty}dx_2 \: x_1^q \: e^{-x_1} \: x_2^{q} \: e^{-x_2} |x_2- x_1|},
\end{align}
with $x_1$ and $x_2$ the eigenvalues of $\mathbf{C}$, having used Proposition \ref{prop:GOE_J} to change variables from the entries of $\mathbf{C}$ to its eigenvalues in the denominator of the second equality. We can now calculate the integrals on the RHS of (\ref{eqn:Tdet1-RMRMT}) by the change of variables (\ref{eqn:Tctildecov})
\begin{align}
\nonumber &\int \det (\mathbf{1}_2- v_{p}^Tv_{p})^{s+(L-M-1)/2} (dv_{p})\\
\nonumber & = \tilde{c} \int \det \left(\mathbf{1}_2-\mathbf{C} \right)^{s+(L-M-1)/2} \left(\det \mathbf{C}\right)^q (d\mathbf{C})\\
\nonumber & = \pi^p \frac{\int_0^{1}dy_1 \int_0^{1}dy_2 \: \big((1-y_1) (1-y_2)\big)^{s+(L-M-1)/2} (y_1 y_2)^{q} \: |y_2- y_1|} {\int_0^{\infty}dx_1 \int_0^{\infty}dx_2 \: x_1^q \: e^{-x_1} \: x_2^{q}\: e^{-x_2} |x_2- x_1|}\\
\label{eqn:ccoleval} &=\pi^{M-2s-2} \frac{\Gamma((L-M+1)/2+s) \Gamma((L-M)/2+1+s)} {\Gamma((L-1)/2) \Gamma(L/2)},
\end{align}
where we have used the Selberg integrals (\ref{eqn:Selbint}) and (\ref{cor:varselb1}).
If we let
\begin{align}
\nonumber \mathbf{R}_k=\left[\begin{array}{cc}
\mathbf{R}_{k-2} & u_{k-2}\\
\mathbf{0}^T & z_m
\end{array}\right]
\end{align}
then we can apply the method of Chapter \ref{sec:iitr} to integrate over the off-diagonal elements in the real eigenvalue columns, and we obtain
\begin{align}
\nonumber &\int \det (\mathbf{1}_k- \mathbf{R}_k\mathbf{R}_k^T)^{(L-k-1)/2} (du_k) \cdot\cdot\cdot (du_1)\\
\label{eqn:TOErcol} &= \prod_{s=1}^k (1-\lambda_s^2)^{L/2-1} \prod_{s=1}^{k-1}\int (1 - v_{k-s}^T v_{k-s})^{(L-k+s)/2-1}(dv_{k-s}).
\end{align}
The integral on the right hand side of (\ref{eqn:TOErcol}) can be evaluated analogously to (\ref{eqn:ccoleval}),
\begin{align}
\nonumber \int (1 - v_{k-s}^T v_{k-s})^{(L-k+s)/2-1}(dv_{k-s})=\pi^{(k-s)/2}\frac{\Gamma ((L-k+s)/2)}{\Gamma(L/2)}.
\end{align}
Combining the preceding we find the density of the truncated matrix $\mathbf{D}$ is
\begin{align}
\nonumber &\tilde{C}_{M,L} 2^{(M-k)/2} \prod_{j<p}|\lambda(R_{pp})-\lambda(R_{jj})| \frac{\pi^{M(M+1)/4}}{\prod_{j=1}^M \Gamma(j/2)} \\
\nonumber &\times \prod_{s=1}^k (1-\lambda_s^2)^{L/2-1} \pi^{(k-s)/2}\frac{\Gamma ((L-k+s)/2)}{\Gamma(L/2)}\\
\nonumber & \times \prod_{l=k+1}^{(M+k)/2} |b_l-c_l| \det (\mathbf{1}_2-z_lz_l^T )^{(L-3)/2}\\
\nonumber & \times \prod_{l=0}^{(M-k)/2-1} \pi^{M-2l-2} \frac{\Gamma((L-M+1)/2+l) \Gamma((L-M)/2+1+l)} {\Gamma((L-1)/2) \Gamma(L/2)}\\
\label{eqn:TOEjpdf1} & \times \prod_{s=1}^{k}d\lambda_s \prod_{l=k+1}^{(M+k)/2}dx_ldb_ldc_l.
\end{align}
We now wish to remove the dependence on $b_l$ and $c_l$. With $\epsilon_l:=1-x_l^2-y_l^2$ and $\delta_l:=b_l-c_l$, explicit calculation shows that $\det(\mathbf{1}_2-z_lz_l^T) =\epsilon_l^2 -\delta_l^2$. Using (\ref{eqn:GinOE_covs}) (noting again the correction there to the domain of $\delta$) we have
\begin{align}
\nonumber &\int_{\delta=-\epsilon_l}^{\delta=\epsilon_l}|b_l-c_l| \det(\mathbf{1}_2-z_lz_l^T)^q \; dx_ldb_ldc_l = 4y_l\int_{\delta=0}^{\delta=\epsilon_l}\frac{\delta(\epsilon_l^2- \delta^2)^{q}\;d\delta}{\sqrt{\delta^2+4y_l^2}}dx_ldy_l\\
\label{eqn:TOEerfclem} &=4y_l|1- z_l^2|^{2q+1}\int_{t=2|y_l|/|1-z_l^2|}^{t=1}(1-t^2)^q\; dt\; dx_ldy_l,
\end{align}
where the second equality follows after some changes of variable analogous to those of (\ref{eqn:delta_integ}).
By some rearrangement we have the identities
\begin{align}
& \nonumber \prod_{l=0}^{(M-k)/2-1} \Gamma((L-M+1)/2+l) \Gamma((L-M)/2+1+l) \prod_{s=1}^k \Gamma ((L-k+s)/2)\\
\nonumber &= \frac{\mathrm{vol} (O(L-M))}{\mathrm{vol} (O(L))} \frac{2^L \pi^{L(L+1)/4}}{2^{L-M} \pi^{(L-M) (L-M+1)/4}},\\
\nonumber & \prod_{s=1}^k \Gamma(L/2) \prod_{l=0}^{(M-k)/2 -1}\Gamma((L-1)/2) \Gamma(L/2) = \Gamma (L/2)^k \big(2^{2-L}\sqrt{\pi}\; \Gamma(L-1) \big)^{(M-k)/2}
\end{align}
and, recalling that
\begin{align}
\nonumber \prod_{l=k+1}^{(M+k)/2}(-2iy_l)\prod_{j<p}|\lambda(R_{pp}) - \lambda(R_{jj})|=\Delta(\Lambda \cup W),
\end{align}
(\ref{eqn:TOEjpdf1}) becomes
\begin{align}
\nonumber &\tilde{C}_{M,L} \Delta(\Lambda \cup W) \mathrm{vol} (O(M)) \frac{\mathrm{vol} (O(L-M))}{\mathrm{vol} (O(L))} \left( \frac{(2\pi)^{L}}{\Gamma (L+1)} \right)^{M/2} \\
\nonumber &\times \prod_{s=1}^k \left( \frac{L(L-1)}{2\pi}\right)^{1/2} (1-\lambda_s^2)^{L/2-1} \left[ \int_{0}^{1} (1-t^2)^{(L-3)/2}dt \right]^{1/2}\\
\nonumber & \times \prod_{l=k+1}^{(M+k)/2} 2i \frac{L(L-1)}{2\pi} |1- z_l^2|^{L-2}\int_{t=2|y_l|/|1-z_l^2|}^{t=1}(1-t^2)^{(L-3)/2} \; dt\\
\nonumber & \times \prod_{s=1}^{k}d\lambda_s \prod_{l=k+1}^{(M+k)/2} dx_ldy_l,
\end{align}
where we have used (\ref{eqn:Bxy1}) and the facts
\begin{align}
\nonumber \prod_{l=0}^{(M-k)/2 -1}\pi^{M-2l-2} = \pi^{-(M-k)(M+k-2)/4},&& \Gamma (z) \Gamma (z+1/2) = \frac{\sqrt{\pi} \; \Gamma (2z)}{2^{2z-1}}.
\end{align}
The result (\ref{eqn:TOEevaljpdf}) now follows by simplifying.
\hfill $\Box$
\begin{remark}
Note the domains of integration in the numerator of the second equality of (\ref{eqn:ccoleval}); this follows since $\mathbf{D}$ is sub-unitary (that is, all its eigenvalues satisfy $|\lambda|<1$) and so all the $\mathbf{R}_j$ are sub-unitary, which further implies that the norms of the columns of $u_j$ and $v_j$ are, at most, unity.
\end{remark}
\begin{remark}
\label{rem:TOEana}
The proof of Proposition \ref{prop:Tejpdf} neatly demonstrates the analogy with the earlier ensembles considered in this work. For instance, comparing (\ref{eqn:TOEerfclem}) with the same step in the real Ginibre ensemble (\ref{eqn:GinOE_erfc}), we see that the integral in (\ref{eqn:TOEerfclem}) is the equivalent of the $\mathrm{erfc}$ function for the real truncated ensemble. Likewise, we can see the similarity to (\ref{14.3}) in the real spherical ensemble.
\end{remark}
Recall that the restriction $L\geq M$ in Proposition \ref{prop:Tejpdf} originates from the procedure of integrating over the sub-blocks $\mathbf{C}$ in (\ref{eqn:TOEmatpdf1}), using (\ref{eqn:Imninteg}), to obtain the distribution of the matrices $\mathbf{D}$. In \cite{KSZ2010} the restriction on the size of the truncation is side-stepped by using the Schur decomposition of $\mathbf{D}$ in (\ref{eqn:TOEmatpdf1}) before integrating, and then applying the delta function orthogonality conditions to progressively integrate over $\tilde{\mathbf{R}}$ and $\mathbf{C}$. This method involves the introduction of new matrices $\mathbf{X}_j$ that act on sub-blocks of $\mathbf{C}$; these matrices $\mathbf{X}_j$ satisfy a recursive relation that makes the problem tractable. It turns out that when done in this way the Jacobian introduced by the integration over $\tilde{\mathbf{R}}$ is exactly cancelled by that introduced by integration over $\mathbf{C}$. The problem is then reduced to a product of integrals over $2\times 2$ blocks, which can each be done explicitly. In performing this calculation the authors of \cite{KSZ2010} were guided by the analogous calculation in the case of truncations of unitary matrices \cite{Z&S2000, FS03, JN03, Reffy2005}. The end result is identical to (\ref{eqn:TOEevaljpdf}), which is why we have dropped the $L\geq M$ condition from the statement of Proposition \ref{prop:Tejpdf}.
With the eigenvalue jpdf specified we can find the probability of all real eigenvalues in the same way that we obtained (\ref{eqn:pNN1}).
\begin{proposition}
With the eigenvalue probability distribution $Q(\Lambda, W)_T$ of (\ref{eqn:TOEevaljpdf}) the probability of obtaining all real eigenvalues is
\begin{align}
\nonumber p_{M,M}&=\frac{2^{M(L-1)+M^2/2}\: C_{M,L}} {\pi^{3M/4} \Gamma(M+1)} \left(L\frac{ \Gamma((L+1)/2)} {\Gamma(L/2)}\right)^{M/2}\\
\label{eqn:TOEpMM} &\times \prod_{j=0}^{M-1} \frac{\Gamma((L+j)/2)^2\Gamma((j+3)/2)}{\Gamma(L+(M+j-1)/2)}.
\end{align}
\end{proposition}
\textit{Proof}: Setting $k=M$ in (\ref{eqn:TOEevaljpdf}) gives
\begin{align}
\nonumber Q(\Lambda ,\emptyset)_T \Big|_{k=M}&=C_{M,L}\left(L\frac{\Gamma((L+1)/2)}{2 \sqrt{\pi} \; \Gamma(L/2)}\right)^{M/2}\\
\label{eqn:TOEk=M} & \times\prod_{j=1}^{M}(1-\lambda_j^2)^{L/2-1} \prod_{j<l}(\lambda_l-\lambda_j).
\end{align}
Letting $A_{M,L}$ stand for the pre-factor which is independent of the $\lambda_j$ in (\ref{eqn:TOEk=M}) and $a:=M(L-1)+M(M-1)/2$ we integrate over the $\lambda_j$
\begin{align}
\nonumber &p_{M,M}=\frac{A_{M,L}}{M!} \int_{-1}^{1}d\lambda_1\cdot\cdot\cdot\int_{-1}^{1} d\lambda_M \prod_{j=1}^M(1-\lambda_j^2)^{L/2-1}\prod_{j<l}|\lambda_l-\lambda_j|\\
\label{eqn:TOEPMM1} &=A_{M,L}\:2^{a} \int_{0}^{1}ds_1\cdot\cdot\cdot\int_{0}^{1}ds_{M}\prod_{j=1}^{M}\Big( s_j(1-s_j)\Big)^{L/2-1}\prod_{j<l}|s_l-s_j|,
\end{align}
where, to obtain the first equality, we removed the ordering on the $\lambda_j$, incurring a factor of $(M!)^{-1}$, and, for the second, we changed variables $s_j=(1+\lambda_j)/2$. We see that (\ref{eqn:TOEPMM1}) is now in the form of a Selberg integral (\ref{def:Selb}). Applying (\ref{eqn:Selbint}) to (\ref{eqn:TOEPMM1}) gives the result.
\hfill $\Box$
The reader will have noted that the probabilities in (\ref{eqn:TOEpMM}) can be compared with simulation; however, once we find the generalised partition function and skew-orthogonal polynomials in the following chapters, we will have a formula for the probability of a general number of real eigenvalues, and so we delay further discussion of this point until Chapter \ref{sec:TOEsops}.
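As a quick aside, the probability (\ref{eqn:TOEpMM}) lends itself to a direct empirical check. The following minimal sketch (assuming Python with numpy, which is of course not part of this thesis, and with function names of our own choosing) samples Haar orthogonal matrices via the sign-corrected QR construction, truncates to the $M\times M$ block $\mathbf{D}$, and records how often all $M$ eigenvalues are real.

```python
import numpy as np

def haar_orthogonal(n, rng):
    """Haar-distributed orthogonal matrix via QR with sign correction."""
    z = rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    # rescale columns so that R has positive diagonal (gives Haar measure)
    return q * np.sign(np.diag(r))

def real_eig_counts(N, M, trials, seed=0):
    """Number of real eigenvalues of the M x M truncation, per sample."""
    rng = np.random.default_rng(seed)
    counts = []
    for _ in range(trials):
        D = haar_orthogonal(N, rng)[:M, :M]
        ev = np.linalg.eigvals(D)
        counts.append(int(np.sum(np.abs(ev.imag) < 1e-8)))
    return counts

counts = real_eig_counts(N=8, M=4, trials=500)
p_all_real_empirical = np.mean([c == 4 for c in counts])
```

Since the non-real eigenvalues occur in complex conjugate pairs, every sampled count has the same parity as $M$, which provides a sanity check on the tolerance used to detect real eigenvalues.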
\subsection{Generalised partition function}
The eigenvalue jpdf (\ref{eqn:TOEevaljpdf}) is structurally identical to that of the real Ginibre ensemble in (\ref{eqn:GinOEjpdf}), and so we can directly apply the method of Proposition \ref{prop:GinOE_gpf_even} to obtain the generalised partition function for $M$ even.
\begin{proposition}
Let $M$ be even and
\begin{align}
\nonumber \alpha_{j,l}&=\int_{-1}^{1}dx\;u(x)\int_{-1}^{1}dy\;u(y)\: \omega(x) \omega(y)p_{j-1}(x) p_{l-1}(y) \:\mathrm{sgn}(y-x),\\
\label{def:TOEab} \beta_{j,l}&=2i\int_{\mathbb{D}^+}dz\;v(z)\:\omega(z)^2\Big( p_{j-1}(z) p_{l-1}(\bar{z}) -p_{l-1}(z) p_{j-1}(\bar{z}) \Big),
\end{align}
where $\{ p_j(x)\}_{j=0,1,...}$ are monic polynomials, with $p_j$ of degree $j$, and $\mathbb{D}^+$ is the upper half of the unit disk. The generalised partition function for $k$ real eigenvalues and $M-k$ non-real complex eigenvalues in the real truncated ensemble is
\begin{align}
\label{eqn:TOEgpfe} Z_{k,(M-k)/2}[u,v]_T=C_{M,L}\; [\zeta^{k/2}]\mathrm{Pf}[\zeta\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,M},
\end{align}
with $C_{M,L}$ from (\ref{def:CML}).
\end{proposition}
Likewise, we can apply the method of Proposition \ref{prop:GinOE_gpf_odd} to find a Pfaffian form for $M$ odd.
\begin{proposition}
Let $M$ be odd, $\alpha_{j,l},\beta_{j,l}$ be as in (\ref{def:TOEab}),
\begin{align}
\label{def:vartheta} \vartheta_j :=\int_{-1}^{1}u(x)\;\omega(x) p_{j-1}(x)dx,
\end{align}
and $C_{M,L}$ as in (\ref{def:CML}). The generalised partition function for the real truncated ensemble with $M$ odd is
\begin{align}
\label{eqn:TOEgpfo} Z^{\mathrm{odd}}_{k,(M-k)/2}[u,v]_T=C_{M,L}\; [\zeta^{(k-1)/2}]\mathrm{Pf}\left[\begin{array}{cc}
\left[\zeta\alpha_{j,l}+\beta_{j,l}\right] & [\vartheta_j]\\
\left[\vartheta_l\right] & 0
\end{array}\right]_{j,l=1,...,M}.
\end{align}
\end{proposition}
The summed-up generalised partition functions are then
\begin{align}
\nonumber Z_M[u,v]_T&:=\sum_{k=0}^M{}^{\sharp}Z_{k,(M-k)/2}[u,v]_T\\
\label{eqn:sumgpfe} &=C_{M,L}\; \mathrm{Pf}[\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,M},
\end{align}
and
\begin{align}
\nonumber Z_M^{\mathrm{odd}}[u,v]_T&:=\sum_{k=1}^M{}^{\sharp}Z^{\mathrm{odd}}_{k,(M-k)/2}[u,v]_T\\
\label{eqn:sumgpfo} &=C_{M,L}\; \mathrm{Pf}\left[\begin{array}{cc}
\left[\alpha_{j,l}+\beta_{j,l}\right] & [\vartheta_j]\\
\left[\vartheta_l\right] & 0
\end{array}\right]_{j,l=1,...,M},
\end{align}
where the ${}^{\sharp}$ indicates that the sum is only over those values of $k$ with the same parity as $M$.
A generating function analogous to (\ref{def:GinOE_probsGF}) for the probabilities $p_{M,k}$ of obtaining $k$ real eigenvalues from $\mathbf{D}$ is
\begin{align}
\label{eqn:TOEpMkse} Z_M(\zeta)_T=C_{M,L}\; \mathrm{Pf}[\zeta\alpha_{j,l}+\beta_{j,l}]_{j,l=1,...,M} \Big|_{u=v=1},
\end{align}
with the probability of all real eigenvalues given by
\begin{align}
\nonumber p_{M,M}=C_{M,L}\; \mathrm{Pf}[\alpha_{j,l}\big|_{u=1}]_{j,l=1,...,M},
\end{align}
from which we can recover (\ref{eqn:TOEpMM}), using the skew-orthogonal polynomials (\ref{eqn:TOEsops}) of the next section. The odd probabilities are found similarly,
\begin{align}
\label{eqn:TOEpMkso} Z_M^{\mathrm{odd}}(\zeta)_T=C_{M,L}\; \mathrm{Pf}\left[\begin{array}{cc}
\left[\zeta\alpha_{j,l}+\beta_{j,l}\right] & [\vartheta_j]\\
\left[\vartheta_l\right] & 0
\end{array}\right]_{j,l=1,...,M} \Bigg|_{u=v=1}.
\end{align}
\subsection{Skew-orthogonal polynomials}
\label{sec:TOEsops}
As with the GOE, real Ginibre and real spherical ensembles, we can use the orthogonal polynomial method to make progress towards the eigenvalue correlation functions. Although this was not historically how the theory progressed, in order to keep the development of the truncated ensemble consistent with the other ensembles in this work, we introduce the polynomials at this point; their derivation relies on a procedure involving averages over characteristic polynomials from Chapter \ref{sec:SOEcharpolys}. We will outline its use in the current context in Chapter \ref{sec:TOEkernelts}. As mentioned at the beginning of this chapter, there are formulae for calculating the skew-orthogonal polynomials independently of the correlation kernel; however, the application of these methods to the real truncated ensemble has not so far been successful.
\begin{definition}
With $x,y\in\mathbb{R}$ and $z\in\mathbb{C}\backslash\mathbb{R}$ define the inner product for the real truncated ensemble
\begin{align}
\label{def:TOEsoip} \langle p_j,p_l \rangle_T&:=\int_{-1}^1\int_{-1}^1\omega(x)\omega(y)p_j(x)p_l(y)\mathrm{sgn}(y-x)dxdy\\
\nonumber &+2i\int_{\mathbb{D}^{+}}\omega(z)^2\big( p_j(z)p_l(\bar{z})-p_l(z)p_j(\bar{z})\big)dz\\
\nonumber &=\alpha_{j+1,l+1}+\beta_{j+1,l+1}\Big|_{u=v=1},
\end{align}
where $\mathbb{D}^{+}$ is the upper half unit disk.
\end{definition}
We now seek monic polynomials that satisfy analogous conditions to those of (\ref{eqn:GinOE_soprops}), that is, polynomials for which
\begin{align}
\nonumber \langle p_{2j},p_{2l}\rangle_T = \langle p_{2j+1},p_{2l+1}\rangle_T=0 &,&\langle p_{2j},p_{2l+1}\rangle_T=-\langle p_{2l+1},p_{2j}\rangle_T=\delta_{j,l}\:r_j.
\end{align}
The requisite polynomials are similar to those of (\ref{eqn:GinOE_sopolys}) for the real Ginibre ensemble.
\begin{proposition}[\cite{Forrester2010a}]
The polynomials
\begin{align}
\label{eqn:TOEsops} p_{2j}(x)=x^{2j}&,&p_{2j+1}(x)=x^{2j+1}-\frac{2j}{L+2j}x^{2j-1},
\end{align}
are skew-orthogonal with respect to the inner product (\ref{def:TOEsoip}), with
\begin{align}
\label{eqn:TOEsopsr} \langle p_{2j},p_{2j+1}\rangle_T=r_j=\frac{L!(2j)!}{(L+2j)!}.
\end{align}
\end{proposition}
These polynomials were first presented in \cite{Forrester2010a}, and, as mentioned above, the derivation relies on the method of averaging over the characteristic polynomial of sub-blocks $\mathbf{D}_{(M-2) \times(M-2)}$ of $\mathbf{R}_{(N-2)\times(N-2)}$. This method was used in \cite{KSZ2010} as a way to calculate the kernel element $D(\mu,\eta)_T$ (to be introduced in (\ref{def:TOESDI1})), the result of which was then used by Forrester to extract the skew-orthogonal polynomials (\ref{eqn:TOEsops}) \cite{Forrester2010a}. As mentioned at the beginning of this chapter, it is not known how to derive the polynomials directly without first evaluating the correlation kernel; formulae such as those in \cite{ForNagRai2006} and \cite{AkeKiePhi2010} lead to integrals that appear intractable. In Chapter \ref{sec:FW} we make some suggestions for how the skew-orthogonal polynomials might be found using the classical Jacobi polynomials.
With the polynomials (\ref{eqn:TOEsops}) we can evaluate the probabilities $p_{M,k}$ implied by (\ref{eqn:TOEpMkse}) and (\ref{eqn:TOEpMkso}) for specific values of $L$. In Appendix \ref{app:TOEpMks} we have tabulated the probabilities for $M=2,...,6$ with $L=1,2,3,8$, and compared them to simulations. Note that as $L$ increases (giving the more weakly orthogonal regimes), the $p_{M,k}$ approach the probabilities $p_{N,k}$ of the real Ginibre ensemble in Table \ref{tab:pnkxact_sim} of Appendix \ref{app:GinOE_kernel_elts} --- for example, compare Table \ref{tab:pnkxact_sim} to Table \ref{tab:TOEpnkL8}.
\subsection{Correlation functions}
\subsubsection{M even}
To again highlight the analogy with the correlations of the real Ginibre and real spherical ensembles, we use similar notation here for the correlation kernel, trusting that no confusion is likely.
\begin{definition}
\label{def:TOESDI}
Let $p_0,p_1,...$ be the skew-orthogonal polynomials (\ref{eqn:TOEsops}) and $r_0,r_1,...$ the normalisations (\ref{eqn:TOEsopsr}). With $\epsilon(\mu,\eta)$ from Definition \ref{def:GinOE_kernel}, let
\begin{align}
\label{def:TOESDI1} \begin{split}
S(\mu,\eta)_T&=2\sum_{j=0}^{\frac{M}{2}-1}\frac{1}{r_j}\Bigl[q_{2j}(\mu)\tau_{2j+1}(\eta)-q_{2j+1}(\mu)\tau_{2j}(\eta)\Bigr],\\
D(\mu,\eta)_T&=2\sum_{j=0}^{\frac{M}{2}-1}\frac{1}{r_j}\Bigl[q_{2j}(\mu)q_{2j+1}(\eta)-q_{2j+1}(\mu)q_{2j}(\eta)\Bigr],\\
\tilde{I}(\mu,\eta)_T&=2\sum_{j=0}^{\frac{M}{2}-1}\frac{1}{r_j}\Bigl[\tau_{2j}(\mu)\tau_{2j+1}(\eta)-\tau_{2j+1}(\mu)\tau_{2j}(\eta)\Bigr]+\epsilon(\mu,\eta),
\end{split}
\end{align}
where
\begin{align}
\nonumber q_j(\mu) &= \omega(\mu)\; p_j(\mu),\\
\label{def:TOEtau} \tau_j(\mu) &=
\left\{
\begin{array}{ll}
-\frac{1}{2}\int_{-1}^{1}\mathrm{sgn}(\mu-z)\hspace{3pt}q_j(z)\hspace{3pt}dz, & \mu\in \mathbb{R},\\
iq_j(\bar{\mu}), & \mu\in \mathbb{D}^+.
\end{array}
\right.
\end{align}
Define
\begin{align}
\label{def:TOEkernel} \mathbf{K}(\mu,\eta)_T=\left[
\begin{array}{cc}
S(\mu,\eta)_T & - D(\mu,\eta)_T\\
\tilde{I}(\mu,\eta)_T & S(\eta,\mu)_T
\end{array}
\right].
\end{align}
\end{definition}
Note that the $S,D,\tilde{I}$ of (\ref{def:TOESDI1}) obey the same inter-relationships as in the real Ginibre case, contained in Lemma \ref{lem:Gin_s=d=i}, similar to those for the real spherical ensemble, Lemma \ref{lem:S_s=d=i}.
Since (\ref{eqn:TOEevaljpdf}) and (\ref{eqn:TOEgpfe}) are structurally identical to their real Ginibre counterparts (\ref{eqn:GinOEjpdf}) and (\ref{eqn:GinOE_gpf_even}), with $\omega(\mu)$ replacing $e^{-\mu^2/2}\sqrt{\mathrm{erfc}(\sqrt{2}|\mathrm{Im}(\mu)|)}$, we can apply the methods of Chapter \ref{sec:Gincorrlnse} to obtain the correlation functions, the details of which we omit.
\begin{proposition}
\label{prop:TOEcorrelne}
Let $M$ be even. With $\mathbf{K}(\mu,\eta)_T$ from (\ref{def:TOEkernel}), the real truncated eigenvalue correlation function for $n_1$ real and $n_2$ non-real, complex conjugate pairs is
{\small
\begin{align}
\nonumber &\rho_{(n_1,n_2)}(x_1,...,x_{n_1},z_1,...,z_{n_2})_T =\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}_{rr}(x_i,x_j)_T & \mathbf{K}_{rc}(x_i,z_m)_T\\
\mathbf{K}_{cr}(z_l,x_j)_T & \mathbf{K}_{cc}(z_l,z_m)_T
\end{array}\right]_{i,j=1,...,n_1 \atop l,m=1,...,n_2}\\
\nonumber &=\mathrm{Pf}\left(\left[\begin{array}{cc}
\mathbf{K}_{rr}(x_i,x_j)_T & \mathbf{K}_{rc}(x_i,z_m)_T\\
\mathbf{K}_{cr}(z_l,x_j)_T & \mathbf{K}_{cc}(z_l,z_m)_T
\end{array}\right]\mathbf{Z}_{2(n_1+n_2)}^{-1}\right)_{i,j=1,...,n_1 \atop l,m=1,...,n_2}, x_i\in \mathbb{R}, z_i \in \mathbb{D}^+,
\end{align}
}where $\mathbb{D}^+$ is the upper half of the unit disk.
\end{proposition}
\subsubsection{M odd}
\label{sec:TOEcorrelno}
The $M$ odd correlations use an analogous kernel to that in Definition \ref{def:GinOE_kernel_odd}.
\begin{definition}
Let $p_0,p_1,...$ be the skew-orthogonal polynomials (\ref{eqn:TOEsops}), $r_0,r_1,...$ the normalisations (\ref{eqn:TOEsopsr}) and $\overline{\vartheta}_j:=\vartheta_j\big|_{u=1}$ from (\ref{def:vartheta}). With $\epsilon(\mu,\eta)$ from Definition \ref{def:GinOE_kernel}, let
\begin{align}
\nonumber S^{\mathrm{odd}}(\mu,\eta)_T&=2\sum_{j=0}^{\frac{M-1}{2}-1}\frac{1}{r_j}\Bigl[\hat{q}_{2j}(\mu)\hat{\tau}_{2j+1}(\eta)-\hat{q}_{2j+1}(\mu)\hat{\tau}_{2j}(\eta)\Bigr]+\kappa(\mu,\eta),\\
\nonumber D^{\mathrm{odd}}(\mu,\eta)_T&=2\sum_{j=0}^{\frac{M-1}{2}-1}\frac{1}{r_j}\Bigl[\hat{q}_{2j}(\mu)\hat{q}_{2j+1}(\eta)-\hat{q}_{2j+1}(\mu)\hat{q}_{2j}(\eta)\Bigr],\\
\nonumber \tilde{I}^{\mathrm{odd}}(\mu,\eta)_T&=2\sum_{j=0}^{\frac{M-1}{2}-1}\frac{1}{r_j}\Bigl[\hat{\tau}_{2j}(\mu)\hat{\tau}_{2j+1}(\eta)-\hat{\tau}_{2j+1}(\mu)\hat{\tau}_{2j}(\eta)\Bigr]+\epsilon(\mu,\eta)+\theta(\mu,\eta),
\end{align}
where
\begin{align}
\nonumber \hat{p}_j(\mu)&=p_j(\mu)-\frac{\overline{\vartheta}_{j+1}}{\overline{\vartheta}_M}p_{M-1}(\mu),\\
\nonumber \hat{q}_j(\mu) &= \omega(\mu)\;\hat{p}_j(\mu),\\
\nonumber \hat{\tau}_j(\mu) &=
\left\{
\begin{array}{ll}
-\frac{1}{2}\int_{-1}^{1}\mathrm{sgn}(\mu-z)\hspace{3pt}\hat{q}_j(z)\hspace{3pt}dz, & \mu\in \mathbb{R},\\
i\hat{q}_j(\bar{\mu}), & \mu\in \mathbb{D}^+,
\end{array}
\right.\\
\nonumber \kappa(\mu,\eta) &=
\left\{
\begin{array}{ll}
q_{M-1}(\mu)/\overline{\vartheta}_M, & \eta\in \mathbb{R},\\
0, & \mathrm{otherwise},
\end{array}
\right.\\
\nonumber \theta(\mu,\eta)&=
\big(\chi_{(\eta\in\mathbb{R})}\tau_{M-1}(\mu)- \chi_{(\mu\in\mathbb{R})} \tau_{M-1}(\eta)\big)/ \overline{\vartheta}_M,
\end{align}
and $q_j(\mu),\tau_j(\mu)$ are from Definition \ref{def:TOESDI}, with the indicator function $\chi_{(A)}=1$ for $A$ true and zero for $A$ false.
Then, let
\begin{align}
\label{def:TOEkernelo} \mathbf{K}_{\mathrm{odd}}(\mu,\eta)_T=\left[
\begin{array}{cc}
S^{\mathrm{odd}}(\mu,\eta)_T & -D^{\mathrm{odd}}(\mu,\eta)_T\\
\tilde{I}^{\mathrm{odd}}(\mu,\eta)_T & S^{\mathrm{odd}}(\eta,\mu)_T
\end{array}
\right].
\end{align}
\end{definition}
As for the even case, we see that (\ref{eqn:TOEgpfo}) is identical in structure to its real Ginibre counterpart (\ref{eqn:GinOE_gpf_odd}), and so we were able to immediately write down the correlation functions. However, as for the odd case of the real spherical ensemble in Chapter \ref{sec:Scorrelns}, we cannot apply the `odd-from-even' technique of Chapter \ref{sec:Gin_oddfromeven}. (Recall that for the GOE (Chapter \ref{sec:odd_from_even}) and the real Ginibre ensemble (Chapter \ref{sec:Gin_oddfromeven}), we were able to generate the odd correlations from the known even cases by removing one of the eigenvalues off to infinity, effectively leaving a system of $k-1$ real eigenvalues, and $N-1$ total eigenvalues. Of course, for the GOE, $N=k$.) In the real spherical ensemble, we were unable to use this technique because we had applied the fractional linear transformation (\ref{7'}), since the natural co-ordinate set for analysis was the unit disk. Hence, all the transformed eigenvalues (from the upper half plane) were constrained to have $|z|\leq 1$.
For the real truncated ensemble, as discussed above Remark \ref{rem:evals<1}, all the eigenvalues are necessarily contained inside the unit disk by the nature of truncations of orthogonal matrices, and so the removal of one eigenvalue off to infinity is meaningless. Of course, we can still use the functional differentiation method of Chapter \ref{sec:Gin_odd_fdiff}, which again highlights the utility of the method of Propositions \ref{prop:pf_integ_op} and \ref{prop:pf_integ_op_odd} (and their real Ginibre counterparts Propositions \ref{prop:4x4_fred}, \ref{prop:GinOE_fred_op} and \ref{prop:GinOE_fred_op_odd}). As for the even case in the previous section, we omit the details.
\begin{proposition}
Let $M$ be odd. With $\mathbf{K}_{\mathrm{odd}}(\mu,\eta)_T$ from (\ref{def:TOEkernelo}), the real truncated eigenvalue correlation function for $n_1$ real and $n_2$ non-real, complex conjugate pairs is
{\small
\begin{align}
\nonumber &\rho_{(n_1,n_2)}(x_1,...,x_{n_1},z_1,...,z_{n_2})_T=\mathrm{qdet}\left[\begin{array}{cc}
\mathbf{K}_{rr}^{\mathrm{odd}}(x_i,x_j)_T & \mathbf{K}_{rc}^{\mathrm{odd}}(x_i,z_m)_T\\
\mathbf{K}_{cr}^{\mathrm{odd}}(z_l,x_j)_T & \mathbf{K}_{cc}^{\mathrm{odd}}(z_l,z_m)_T
\end{array}\right]_{i,j=1,...,n_1 \atop l,m=1,...,n_2}\\
\nonumber &=\mathrm{Pf}\left(\left[\begin{array}{cc}
\mathbf{K}_{rr}^{\mathrm{odd}}(x_i,x_j)_T & \mathbf{K}_{rc}^{\mathrm{odd}}(x_i,z_m)_T\\
\mathbf{K}_{cr}^{\mathrm{odd}}(z_l,x_j)_T & \mathbf{K}_{cc}^{\mathrm{odd}}(z_l,z_m)_T
\end{array}\right]\mathbf{Z}_{2(n_1+n_2)}^{-1}\right)_{i,j=1,...,n_1 \atop l,m=1,...,n_2}, x_i\in \mathbb{R},z_i \in \mathbb{D}^+.
\end{align}
}
\end{proposition}
\subsection{Correlation kernel elements}
\label{sec:TOEkernelts}
In the previous ensembles considered in this work, we have successfully used the method of orthogonal polynomials to find the correlation kernel elements. However, as already mentioned, in \cite{Forrester2010a} the skew-orthogonal polynomials were found by working backwards from the earlier work of \cite{KSZ2010}, which already identified the correlation kernel using averages over characteristic polynomials. We will follow these developments here, first looking at the evaluation of the characteristic polynomial, then using the known inter-relationships between the kernel elements (Lemma \ref{lem:Gin_s=d=i}) to find most of the kernel. The remaining elements can be obtained from the same inter-relationships after performing a calculation analogous to one in \cite{FN08} pertaining to the partially symmetric real Ginibre ensemble of Chapter \ref{sec:tG}.
Since $\det(t-\mathbf{X}_{n\times n})=\prod_{j=1}^n(t-x_j)$ where $\{x_j\}$ are the eigenvalues of $\mathbf{X}_{n\times n}$, we see from Proposition \ref{prop:SOEcharpoly} that \cite{Forrester2010a}
\begin{align}
\nonumber &(\mu-\eta)\big\langle \det(\mu-\mathbf{D}_{(M-2)\times (M-2)})(\eta-\mathbf{D}_{(M-2)\times (M-2)})\big\rangle\\
\nonumber&=\sum_{j=0}^{M/2-1}\frac{1}{r_j} \Big( p_{2j+1}(\mu)p_{2j}(\eta) -p_{2j}(\mu)p_{2j+1} (\eta)\Big)\\
\nonumber &=-(2\:\omega(\mu)\omega(\eta))^{-1}D(\mu,\eta)_T.
\end{align}
Combining this with the result of \cite[Eq. (8)]{KSZ2010} we have
\begin{align}
\label{eqn:SOEDsum} D(\mu,\eta)_T=2\:\omega(\mu)\omega(\eta)(\eta-\mu)\sum_{j=0}^{M-2}\frac{(L+j)!}{L!j!}(\mu \eta)^j.
\end{align}
Note from Definition \ref{def:TOESDI} that (\ref{eqn:SOEDsum}) holds for all four combinations of real and complex variables $\mu,\eta$, and further, we see that it is parity independent (so it holds for $M$ odd as well). Using (\ref{eqn:Gin_s=d=i}) we can obtain some of the other kernel elements via the relations
\begin{align}
\nonumber S_{cc}(z_1,z_2)_T&= iD_{cc}(z_1,\bar{z}_2)_T,\\
\nonumber \tilde{I}_{cc}(z_1,z_2)_T&= iS_{cc}(\bar{z}_1,z_2)_T=-D_{cc} (\bar{z}_1,\bar{z}_2)_T,\\
\label{eqn:TOErels1} S_{rc}(x,z)_T&= iD_{cc}(x,\bar{z})_T.
\end{align}
The remaining kernel elements are related by the equations
\begin{align}
\nonumber S_{cr}(z,x)_T&=S_{rr}(z,x)_T,\\
\nonumber \tilde{I}_{rr}(x,y)_T&=-\int_{x}^{y}S_{rr}(t,y)_Tdt+\frac{1}{2}\mathrm{sgn}(x-y),\\
\label{eqn:TOErels2} \tilde{I}_{cr}(z,x)_T&=-\tilde{I}_{rc}(x,z)_T=iS_{cr}(z,x)_T.
\end{align}
Accordingly, if we find an expression for $S_{rr}(x,y)_T$ then we will have completely specified the kernel. Using techniques from \cite{FN08}, the calculation of this quantity is carried out in \cite[Section 3.2]{Forrester2010a}. First note that, with the skew-orthogonal polynomials (\ref{eqn:TOEsops}), and $q_j(x), \tau_j(x)$ from Definition \ref{def:TOESDI}, we have the following facts
\begin{align}
\nonumber &\sum_{j=0}^{M/2-1} \frac{q_{2j+1}(x) \tau_{2j}(y)} {r_j} = \frac{x^{M-1} \tau_{M-2}(y)} {r_{M/2-1}}\\
\nonumber & - \sum_{j=0}^{M/2-2} \frac{(L+2j+1)!} {L! (2j+1)!} x^{2j+1} \left( \tau_{2j+2} (y) -\frac{2j+1} {L+2j+1} \tau_{2j}(y) \right),
\end{align}
and
\begin{align}
\nonumber &p_{2j+1}(x)=-\frac{(1-x^2)^{1-L/2}}{L+2j}\;\frac{d}{dx}\Big( x^{2j} (1-x^2)^{L/2}\Big),\\
\nonumber &p_{2j+2}(x)-\frac{2j+1}{L+2j+1}p_{2j}(x)=-\frac{(1-x^2)^{1-L/2}}{L+2j+1}\frac{d}{dx}\Big( x^{2j+1}(1-x^2)^{L/2}\Big),
\end{align}
and, from the definition of $\tau_j(\mu)$ (\ref{def:TOEtau}), the corollaries
\begin{align}
\nonumber &\tau_{2j+1}(x)=\frac{ (1-x^2)^{L/2}}{L+2j}\left(\frac{L}{2 \sqrt{\pi}} \frac{ \Gamma((L+1)/2)}{\Gamma(L/2)}\right)^{1/2} x^{2j},\\
\label{eqn:TOEtaus} &\tau_{2j+2}(x)-\frac{2j+1}{L+2j+1}\tau_{2j}(x)= \frac{ (1-x^2)^{L/2}} {L+2j+1} \left( \frac{L} {2 \sqrt{\pi}} \frac{\Gamma ((L+1)/2)} {\Gamma(L/2)} \right)^{1/2} x^{2j+1},
\end{align}
for $x\in\mathbb{R}$. Substitution of these into the definition of $S_{rr}(x,y)_T$ from (\ref{def:TOESDI1}) yields
\begin{align}
\nonumber &S_{rr}(x,y)_T=-\frac{2\:\omega(x)}{r_{M/2-1}}x^{M-1}\tau_{M-2}(y)\\
\label{eqn:TOEsumS} &+\frac{\Gamma((L+1)/2)}{\sqrt{\pi} \;\Gamma(L/2)}(1-x^2)^{L/2-1}(1-y^2)^{L/2}\sum_{j=0}^{M-2}\frac{(L+j-1)!}{(L-1)!j!}x^j y^j.
\end{align}
So by (\ref{eqn:TOErels2}) we have now specified all kernel elements.
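As an aside, the two derivative representations of the skew-orthogonal polynomials quoted above are easy to confirm symbolically. A minimal sketch (assuming Python with sympy, not part of this thesis) checks both identities for given $j$ and $L$:

```python
import sympy as sp

x = sp.symbols('x')

def check_identities(j, L):
    """Verify the two derivative representations of the skew-orthogonal polynomials."""
    # p_{2j+1}(x) = x^(2j+1) - (2j/(L+2j)) x^(2j-1)
    p_odd = x**(2*j + 1) - sp.Rational(2*j, L + 2*j) * x**(2*j - 1)
    rhs1 = -(1 - x**2)**(1 - sp.Rational(L, 2)) / (L + 2*j) \
        * sp.diff(x**(2*j) * (1 - x**2)**sp.Rational(L, 2), x)
    # p_{2j+2}(x) - (2j+1)/(L+2j+1) p_{2j}(x), with p_{2j}(x) = x^(2j)
    lhs2 = x**(2*j + 2) - sp.Rational(2*j + 1, L + 2*j + 1) * x**(2*j)
    rhs2 = -(1 - x**2)**(1 - sp.Rational(L, 2)) / (L + 2*j + 1) \
        * sp.diff(x**(2*j + 1) * (1 - x**2)**sp.Rational(L, 2), x)
    return sp.simplify(rhs1 - p_odd) == 0 and sp.simplify(rhs2 - lhs2) == 0
```

For instance, `check_identities(1, 3)` expands the right-hand sides and confirms that the factors of $(1-x^2)$ cancel exactly as in the text.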
From the preceding chapters in this work we expect that the density of real eigenvalues will be given by $S_{rr}(x,x)_T$, which is the $n_1=1, n_2=0$ case of Proposition \ref{prop:TOEcorrelne}. Using the expression for $S_{rr}(x,x)_T$ given by (\ref{eqn:TOEsumS}), we apply an identity for the regularised Beta function \cite{KS2009},
\begin{align}
\nonumber I_{s}(a,b)&:=\int_{0}^{s}t^{a-1}(1-t)^{b-1}dt/B(a,b)\\
\label{eqn:BetaId} &\;=1-(1-s)^{b}\sum_{j=0}^{a-1}{b-1+j \choose j}s^j,
\end{align}
to write the density as
\begin{align}
\nonumber &\rho_{(1)}^r(x)_T=S_{rr}(x,x)_T=-\frac{2\:\omega(x)}{r_{M/2-1}}x^{M-1}\tau_{M-2}(x)\\
\label{eqn:TOErdens} &+\frac{\Gamma((L+1)/2)}{\sqrt{\pi} \;\Gamma(L/2)}\frac{1-I_{x^2}(M-1,L)}{1-x^2},
\end{align}
which is equivalent to that in \cite{KSZ2010}.
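The identity (\ref{eqn:BetaId}) driving this rewriting can also be confirmed numerically for integer $a$. A small sketch (assuming Python with scipy, not part of this thesis), where scipy's \texttt{betainc} is the regularised incomplete Beta function $I_s(a,b)$:

```python
import numpy as np
from scipy.special import betainc, comb

def beta_id_rhs(s, a, b):
    """RHS of the identity: 1 - (1-s)^b * sum_{j=0}^{a-1} C(b-1+j, j) s^j."""
    j = np.arange(a)
    return 1.0 - (1.0 - s)**b * np.sum(comb(b - 1 + j, j) * s**j)

# e.g. a = M - 1 = 3, b = L = 4, s = x^2 = 0.25
lhs = betainc(3, 4, 0.25)      # regularised incomplete Beta I_s(a, b)
rhs = beta_id_rhs(0.25, 3, 4)
```

The two sides agree to machine precision for integer $a$, as the identity requires.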
The complex-complex density is more straightforward; from (\ref{eqn:SOEDsum}), using (\ref{eqn:TOErels1}), we obtain
\begin{align}
\nonumber S_{cc}(w,z)_T = 2i \; \omega(w)\omega(\bar{z}) (\bar{z}-w) \sum_{j=0}^{M-2} \frac{(L+j)!}{L! j!} (w\bar{z})^j,
\end{align}
from which we see that the complex density, assuming $L \neq 1$, is
\begin{align}
\nonumber \rho_{(1)}^c (z)_T &=S_{cc}(z,z)_T=4\; \mathrm{Im}(z) \; (\omega(z))^2 \sum_{j=0}^{M-2} \frac{(L+j)!}{L! j!} |z|^{2j}\\
\nonumber & =\frac{2 \: \mathrm{Im}(z) L(L-1)} {\pi} |1-z^2|^{L-2} \int_{2|\mathrm{Im}(z)|/|1-z^2|}^{1}(1-t^2)^{(L-3)/2}dt \\
\nonumber &\times \sum_{j=0}^{M-2}\frac{(L+j)!}{L! j!} |z|^{2j}\\
\nonumber & = \frac{2\: \mathrm{Im}(z) L(L-1)} {\pi} \frac{|1-z^2|^{L-2}} {(1-|z|^2)^{L+1}} \int_{2|\mathrm{Im}(z)|/|1-z^2|}^{1}(1-t^2)^{(L-3)/2}dt \\
\label{eqn:TOEcdens} &\times \big(1-I_{|z|^2}(M-1,L+1)\big),
\end{align}
where we have used (\ref{eqn:BetaId}) to obtain the last equality.
In the case $L= 1$ we require the second definition in (\ref{def:truncw}), so (\ref{eqn:TOErdens}) and (\ref{eqn:TOEcdens}) respectively become
\begin{align}
\nonumber \rho_{(1)}^r(x)_T \Big|_{L=1}&=\frac{\sqrt{2} (M-1) x^{M-1} \tau_{M-2}(x)} {\sqrt{\pi} (1-x^2)^{1/2}} + \frac{(1-x^{2M-2})} {\pi (1-x^2)},\\
\nonumber \rho_{(1)}^c (z)_T \Big|_{L=1}&=\frac{2 \: \mathrm{Im}(z)}{\pi |1-z^2| (1-|z|^2)^2} \left( 1-M|z|^{2M-2}+ (M-1)|z|^{2M} \right),
\end{align}
where the incomplete Beta functions are integrated using elementary techniques. Of course, we could also have found this directly from (\ref{def:TOESDI1}) using the $L=1$ cases of (\ref{eqn:TOEtaus}),
\begin{align}
\nonumber &\tau_{2j+1}(x)\Big|_{L=1}= \frac{(1-x^2)^{1/2}} {\sqrt{2 \pi} (2j+1)}\; x^{2j},\\
\nonumber &\tau_{2j+2}(x)\Big|_{L=1} - \frac{2j+1} {2j+2} \tau_{2j}(x)\Big|_{L=1}= \frac{(1-x^2)^{1/2}}{\sqrt{2 \pi} (2j+2)}\; x^{2j+1}.
\end{align}
\subsubsection{Large $M$ and $L$ limits}
As discussed in the introduction to this chapter, there are three limiting regimes of interest: $\alpha\to 1$, $0<\alpha<1$ and $\alpha\to 0$, where we recall $\alpha =M/N$. We now investigate the real and complex densities in each of these cases.
\noindent\textbf{\underline{Real density}}
We see from (\ref{eqn:TOErels1}) and (\ref{eqn:TOErels2}) that if we have expressions for the limits of $D(\mu,\eta)_T$ and $S_{rr}(x,y)_T$ then we can obtain all the limiting kernel elements by those relations. To that end, note that we can use the classical identity, valid for $|t|<1$,
\begin{align}
\label{eqn:1on1-t} \frac{1}{(1-t)^n}=\sum_{j=0}^{\infty}{n-1 +j \choose j}t^j,
\end{align}
to obtain the large $M$, fixed $L$ limit of (\ref{eqn:SOEDsum})
\begin{align}
\nonumber \lim_{M\to\infty \atop L \; \mathrm{fixed}} D(\mu,\eta)_T= \frac{2\:\omega(\mu)\omega(\eta)\:(\eta-\mu)}{(1-\mu\eta)^{L+1}},
\end{align}
for all combinations of real and complex $\mu$ and $\eta$. For $\mu,\eta\in \mathbb{R}$ the large $M$ limit of (\ref{eqn:TOEsumS}) is
\begin{align}
\label{eqn:TOElimS} \lim_{M\to\infty \atop L \; \mathrm{fixed}} S_{rr}(x,y)_T=\frac{ \Gamma((L+1)/2)} {\sqrt{\pi} \;\Gamma(L/2)}\frac{(1-x^2)^{L/2-1}(1-y^2)^{L/2}}{(1-xy)^L}.
\end{align}
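As a small check, the series (\ref{eqn:1on1-t}) with $n=L+1$, whose coefficients $\binom{L+j}{j}=(L+j)!/(L!\,j!)$ are exactly those appearing in (\ref{eqn:SOEDsum}), can be verified numerically (Python assumed, an aside not part of this thesis):

```python
from math import comb

def neg_binom_partial(t, L, terms):
    """Partial sum of sum_{j >= 0} C(L + j, j) t^j, which for |t| < 1
    converges to (1 - t)^(-(L + 1))."""
    return sum(comb(L + j, j) * t**j for j in range(terms))

t, L = 0.4, 3
approx = neg_binom_partial(t, L, 200)   # truncated series
exact = (1 - t)**(-(L + 1))             # closed form
```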
A particularly interesting case of this limit was identified in \cite{Forrester2010a}, where, with $L=1$, the correlation kernel is identical to that encountered when looking at the correlations between the zeroes of the Kac polynomials. In Appendix \ref{app:TOElims} we list these kernels. By letting $x=y$ in (\ref{eqn:TOElimS}) we have the real density in the strongly orthogonal regime, which is the $\alpha\to 1$ limit of (\ref{eqn:TOElimdensL}) below. (It turns out that the limiting densities in the $\alpha\to 0$ regime have the same form as those in the $0< \alpha<1$ regime, and so we state them together in Proposition \ref{prop:TOElargedens}.)
Since the Beta function is a key factor in the densities, for the remainder of the chapter we will make repeated use of its asymptotic behaviour
\begin{align}
\label{eqn:betafixed2} B(x,y)\sim \sqrt{2\pi} \frac{x^{x-1/2} y^{y-1/2}}{(x+y)^{x+y-1/2}},\quad x,y \to \infty,
\end{align}
and
\begin{align}
\label{eqn:betafixed} B(x,y) \sim \frac{\Gamma (y)} {x^y},\quad x \to \infty, y \mbox { fixed}.
\end{align}
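Both asymptotic forms, together with the Gamma function ratio limit $\Gamma((L+1)/2)/(\sqrt{\pi}\,\Gamma(L/2))\sim\sqrt{L/(2\pi)}$ used repeatedly below, can be sanity-checked through log-Gamma evaluations. A sketch (assuming Python with scipy, not part of this thesis):

```python
import numpy as np
from scipy.special import gammaln

def log_beta(x, y):
    """log B(x, y) via log-Gamma."""
    return gammaln(x) + gammaln(y) - gammaln(x + y)

# both arguments large: B(x,y) ~ sqrt(2 pi) x^(x-1/2) y^(y-1/2) / (x+y)^(x+y-1/2)
x, y = 300.0, 450.0
stirling = 0.5*np.log(2*np.pi) + (x - 0.5)*np.log(x) + (y - 0.5)*np.log(y) \
    - (x + y - 0.5)*np.log(x + y)

# first argument large, second fixed: B(x,y) ~ Gamma(y) / x^y
big, fixed = 5000.0, 3.0
fixed_form = gammaln(fixed) - fixed*np.log(big)

# Gamma((L+1)/2) / (sqrt(pi) Gamma(L/2)) ~ sqrt(L / (2 pi)) for large L
L = 2000.0
ratio = np.exp(gammaln((L + 1)/2) - gammaln(L/2)) / np.sqrt(np.pi)
```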
\begin{proposition}[\cite{KSZ2010}]
\label{prop:TOElargedens}
The limiting real density in the strongly orthogonal regime $(\alpha\to 1)$ is given by
\begin{align}
\label{eqn:TOElimdensL} \rho_{(1)}^r(x)_T \mathop{\sim}\limits_{N \to \infty} \frac{ \Gamma((L+1)/2)}{\sqrt{\pi} \;\Gamma(L/2)} \frac{1}{1-x^2}& ,& -1 <x< 1,
\end{align}
and in the weakly orthogonal regimes $(0<\alpha<1$ or $\alpha\to 0)$ is given by
\begin{align}
\label{eqn:TOElimdensLw} \rho_{(1)}^r(x)_T \mathop{\sim}\limits_{N \to \infty} \sqrt{\frac{L}{2\pi}} \frac{1}{1-x^2}& ,& -\sqrt{\alpha} <x< \sqrt{\alpha}.
\end{align}
\end{proposition}
\textit{Proof}: We will assume that the first term in (\ref{eqn:TOErdens}) tends to zero for the time being, and concentrate on the second term. We will then show that the first term does indeed tend to zero as assumed.
The result (\ref{eqn:TOElimdensL}) in the case $\alpha\to 1$ follows from (\ref{eqn:TOElimS}) with $x=y$, or by substituting (\ref{eqn:1on1-t}) into (\ref{eqn:TOErdens}) (recalling (\ref{eqn:BetaId})). For (\ref{eqn:TOElimdensLw}), that is with $\alpha <1$, we must analyse the behaviour of $I_{x^2} (M-1,L)=\int_{0}^{x^2} t^{M-2} (1-t)^{L-1} dt/B(M-1,L)$, and we use the same approach that led to (\ref{def:uGa}), which entails rewriting the integrand as
\begin{align}
\nonumber t^{M-2} (1-t)^{L-1} = \exp\Big( (\alpha N -2)\log t +((1-\alpha)N-1)\log (1-t) \Big).
\end{align}
Note that in the large $N$ limit, the integrand will be dominated by the maximum of $(\alpha N -2)\log t +((1-\alpha)N-1)\log (1-t)$, which, by differentiating, we find to be
\begin{align}
\nonumber t_{\max}=(\alpha N-2)/(N-3) \mathop{\sim}\limits_{N \to \infty} \alpha.
\end{align}
Also recall that $B(a,b)$ is the $s\to 1$ limit of $\int_{0}^{s} t^{a-1} (1-t)^{b-1} dt$, and so
\begin{align}
\label{eqn:Ix^2} I_{x^2} (M-1,L) \mathop{\sim}\limits_{N \to \infty} \left\{\begin{array}{cc}
1, & x^2 >\alpha,\\
0, & x^2 < \alpha,
\end{array} \right.
\end{align}
which gives us the bounds in (\ref{eqn:TOElimdensLw}).
It remains to show that the first term in (\ref{eqn:TOErdens}) tends to zero for large $N$. Writing it out we have
\begin{align}
\label{eqn:TOESrr1} \frac{L}{2\pi} \frac{ \Gamma((L+1)/2)}{\sqrt{\pi} \;\Gamma(L/2)} \frac{(1-x^2)^{L/2-1} x^{M-1}}{B(L,M-2)} \int_{-1}^{1} \mathrm{sgn} (x-z) (1-z^2)^{L/2-1} p_{M-2}(z) dz.
\end{align}
Note that
\begin{align}
\nonumber &\int_{-1}^{1} \mathrm{sgn} (x-z) (1-z^2)^{L/2-1} p_{M-2}(z) dz\\
\nonumber & = \frac{1}{2} \left[ \int_{0}^{x^2} z^{(M-3)/2} (1-z)^{L/2-1} dz - \int_{x^2}^{1} z^{(M-3)/2} (1-z)^{L/2-1} dz\right]\\
\nonumber & = B(x^2;(M-1)/2,L/2) - B((M-1)/2,L/2)/2\\
\nonumber & = B((M-1)/2,L/2)\Big(I_{x^2} ((M-1)/2, L/2) -1/2 \Big).
\end{align}
From (\ref{eqn:Ix^2}) we see that this tends to $\pm B((M-1)/2,L/2)/2$ for large $N$ (that is, as either or both of $L$ and $M$ tend to $\infty$); without loss of generality we can take the positive part since if $a$ tends to zero, then so does $-a$. Now assume that $L$ is fixed. Using the asymptotic behaviour of the Beta function (\ref{eqn:betafixed}) we see the large $M$ behaviour of (\ref{eqn:TOESrr1}) is
\begin{align}
\label{eqn:TOESrr2} A_L\; M^{2L} x^{M-1},
\end{align}
where $A_L$ denotes the remaining factors independent of $M$. Since $x<1$, (\ref{eqn:TOESrr2}) tends to zero for large $M$.
In the case that $L, M\to \infty$, we need a more involved approach. Using (\ref{eqn:betafixed}) we can write
\begin{align}
\label{eqn:bigLBeta} \frac{\Gamma((L+1)/2)}{\sqrt{\pi} \;\Gamma(L/2)} = \frac{L-1}{2\pi} \;B\left( \frac{1} {2},\frac{L-1} {2} \right) \mathop{\sim}\limits_{L \to \infty} \sqrt{\frac{L}{2\pi}},
\end{align}
and using (\ref{eqn:betafixed2}) we have
\begin{align}
\nonumber \frac{1} {B(L,M-2)} \sim \frac{1}{\sqrt{2\pi}} \frac{(L+M-2)^{L+M-5/2}}{L^{L-1/2}\; (M-2)^{M-5/2}}.
\end{align}
Combining the preceding with the proportionality of $M$ and $L$ to $N$ (according to (\ref{def:alphaML})), we obtain
\begin{align}
\nonumber \frac{2\:\omega(x)}{r_{M/2-1}}x^{M-1}\tau_{M-2}(x) \sim \frac{(1-x^2)^{(1-\alpha)N /2} x^{\alpha N}} {\sqrt{2\pi}} \frac{\Big( (1-\alpha) N+\alpha N\Big)^{N/2}} {\Big( (1-\alpha)N \Big)^{(1-\alpha)N/2} (\alpha N)^{\alpha N/2}}.
\end{align}
By rearranging, we find that for large $N$
\begin{align}
\nonumber \frac{2\:\omega(x)}{r_{M/2-1}}x^{M-1}\tau_{M-2}(x) \to 0 \Leftrightarrow \left( \frac{1-x^2} {1-\alpha} \right)^{(1-\alpha) N/2} \left( \frac{x^2} {\alpha} \right)^{\alpha N/2} \to 0,
\end{align}
or equivalently, by exponentiating, that
\begin{align}
\label{eqn:srr1termz} (1-\alpha) \log\left( \frac{1-x^2} {1-\alpha} \right) + \alpha \log \left( \frac{x^2} {\alpha} \right) < 0.
\end{align}
By differentiating the LHS of (\ref{eqn:srr1termz}) with respect to $\alpha$ and $x$, we find that it is strictly decreasing for $x,\alpha\in (0,1)$ with $\alpha\neq x^2$; since the LHS is zero for $\alpha = x^2$, the inequality (\ref{eqn:srr1termz}) is satisfied for all $x$ and $\alpha$ except where they are equal. In the remaining case $\alpha = x^2$, combining the asymptotic Beta function behaviours under that assumption shows that the first term in (\ref{eqn:TOErdens}) again tends to zero.
\hfill $\Box$
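The step-function behaviour (\ref{eqn:Ix^2}) underlying the proof is simply concentration of the Beta distribution with parameters $M-1$ and $L$ about its mean, which is approximately $\alpha$ under the scaling $M=\alpha N$, $L=(1-\alpha)N$ of (\ref{def:alphaML}). A numerical illustration (Python with scipy assumed, an aside not part of this thesis):

```python
from scipy.special import betainc

# weakly orthogonal scaling: M = alpha * N, L = (1 - alpha) * N
N, alpha = 400, 0.5
M = int(alpha * N)
L = N - M

above = betainc(M - 1, L, 0.7)   # x^2 = 0.7 > alpha: I_{x^2}(M-1, L) -> 1
below = betainc(M - 1, L, 0.3)   # x^2 = 0.3 < alpha: I_{x^2}(M-1, L) -> 0
```

Already at $N=400$ the two values are indistinguishable from $1$ and $0$ to many digits, consistent with the large-deviation rate being proportional to $N$.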
We see from (\ref{eqn:SOElimdensr}) that (\ref{eqn:TOElimdensL}) is the anti-spherical analogue of the real eigenvalue density for the real spherical ensemble (recalling that the variables in the former case had been transformed according to (\ref{14.2})). Indeed, in the case $L=1$ (in the $\alpha\to 1$ strongly orthogonal regime), (\ref{eqn:TOElimdensL}) obviously reduces to
\begin{align}
\label{eqn:TlimdensL1} \lim_{M\to\infty}\rho_{(1)}^r(x)_T\Big|_{L=1}=\frac{1}{\pi(1-x^2)},
\end{align}
indicating (via the $\mathrm{artanh}$ function) that the real eigenvalues are uniformly distributed on what might be called a `great anti-circle' on the anti-sphere. This formula appeared in \cite{Forrester2010a} where it was identified with the asymptotic density of real zeroes of Kac random polynomials (from \cite{BDi1997}).
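The Kac polynomial connection can be probed empirically: for large degree, the expected number of real zeros of a random polynomial with iid standard Gaussian coefficients in a subinterval of $(-1,1)$ should match the integral of $1/(\pi(1-x^2))$. A Monte Carlo sketch (Python with numpy assumed; the parameters and tolerances are our own choices for illustration):

```python
import numpy as np

def mean_real_zeros(degree, lo, hi, trials, seed=0):
    """Average number of real zeros in [lo, hi] of sum_j a_j x^j, a_j iid N(0,1)."""
    rng = np.random.default_rng(seed)
    total = 0
    for _ in range(trials):
        roots = np.roots(rng.standard_normal(degree + 1))
        real = roots[np.abs(roots.imag) < 1e-6].real
        total += np.count_nonzero((real >= lo) & (real <= hi))
    return total / trials

# the limit density 1/(pi (1 - x^2)) predicts (2/pi) artanh(1/2) zeros in [-1/2, 1/2]
predicted = (2/np.pi) * np.arctanh(0.5)
empirical = mean_real_zeros(80, -0.5, 0.5, trials=400)
```

Away from $x=\pm 1$ the finite-degree corrections are negligible, so even at moderate degree the empirical mean sits close to the predicted value of roughly $0.35$.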
With (\ref{def:alphaML}), we see that (\ref{eqn:TOElimdensLw}) is
\begin{align}
\label{eqn:TOELlimdens} \rho_{(1)}^r(x)_T \mathop{\sim}\limits_{N \to \infty} \sqrt{\frac{(1-\alpha)N} {2\pi}} \frac{1}{1-x^2}&,&-\sqrt{\alpha}<x<\sqrt{\alpha}.
\end{align}
Comparing (\ref{eqn:TOELlimdens}) to (\ref{eqn:Grlimdens}) we can see that in the (weakly orthogonal) regimes $\alpha< 1$ we have a $\sqrt{N}$ prefactor, which matches that in the Ginibre case.
\noindent\textbf{\underline{Complex density}}
Note that we could have found (\ref{eqn:TlimdensL1}) from (\ref{eqn:TlimL1}) as the 1-point real--real correlation. Likewise, we can read off the 1-point complex--complex correlation for $L=1$,
\begin{align}
\label{eqn:Tlimdensnrl1} \lim_{M\to \infty} \rho_{(1)}^{c}(z)_T \Big|_{L=1} =\frac{2\,\mathrm{Im}(z)}{\pi (1-|z|^2)^2|1-z^2|}.
\end{align}
With only marginally more work, we can establish the complex density for arbitrary $L$. As for the real density, the complex densities in the three regimes can all be stated together.
\begin{proposition}[\cite{KSZ2010}]
The limiting complex density in the strongly orthogonal regime $(\alpha\to 1)$ is
\begin{align}
\nonumber \rho_{(1)}^c(z)_T \mathop{\sim}\limits_{N \to \infty} & \frac{2\: \mathrm{Im}(z) L(L-1)} {\pi} \frac{|1-z^2|^{L-2}} {(1-|z|^2)^{L+1}}\\
\label{eqn:TOElimdensc} &\times \int_{2|\mathrm{Im}(z)|/|1-z^2|}^{1}(1-t^2)^{(L-3)/2}dt,& -1 <z< 1,
\end{align}
and in the weakly orthogonal regimes ($0<\alpha<1$ fixed or $\alpha\to 0$) is given by
\begin{align}
\label{eqn:TOEldc} \rho_{(1)}^c (z)_T \sim \frac{(1- \alpha)N} {\pi} \frac{1}{(1- |z|^2)^2}&,& -\sqrt{\alpha} <z< \sqrt{\alpha}.
\end{align}
\end{proposition}
\textit{Proof}: To obtain (\ref{eqn:TOElimdensc}), we begin with (\ref{eqn:TOEcdens}) and apply the same reasoning as that which led to (\ref{eqn:Ix^2}). For (\ref{eqn:TOEldc}) we must consider the large $L$ limit of the integral in (\ref{eqn:TOElimdensc}). The integrand is decreasing in $t$, so the dominant contribution to the integral comes from $t$ near the lower terminal $v:= 2|\mathrm{Im}(z)| /|1-z^2|$. We exponentiate the logarithm of the integrand and Taylor expand up to first order about the point $v$ to find
\begin{align}
\nonumber \frac{L-3}{2} \log (1-v^2) -\frac{(L-3)v} {1-v^2} (t-v).
\end{align}
Integrating this from $v$ to $1$ we have
\begin{align}
\nonumber &\int_v^1 (1-t^2)^{(L-3)/2} dt \sim (1-v^2)^{(L-3)/2} e^{\frac{(L-3)v^2} {1-v^2}} \int_v^1e^{-\frac{(L-3)vt} {1-v^2}} dt\\
\nonumber &=\frac{(1-v^2)^{(L-1)/2}} {(L-3)v} - \frac{(1-v^2)^{(L-1)/2}} {(L-3)v} e^{\frac{(L-3)(v^2-v)} {1-v^2}}\\
\label{eqn:TOEintTE} & \sim \frac{(1-v^2)^{(L-1)/2}} {L v}= \frac{(1-|z|^2)^{L-1}} {L \: v \: |1-z^2|^{L-1}},
\end{align}
since $v^2-v<0$, where we have used the fact that $|1-z^2|^2-4y^2 = (1-|z|^2)^2$. Use of (\ref{eqn:TOEintTE}) gives us the result (\ref{eqn:TOEldc}).
\hfill $\Box$
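The endpoint (Laplace-type) estimate used in this proof, which to leading order reads $\int_v^1 (1-t^2)^{(L-3)/2}\,dt \sim (1-v^2)^{(L-1)/2}/(Lv)$, can be checked by brute-force quadrature. The following pure-Python sketch is our own illustration, with illustrative values of $L$ and $v$:

```python
import math

def lhs(L, v, steps=200_000):
    # trapezoidal quadrature of the integral from v to 1 of (1 - t^2)^((L-3)/2)
    h = (1.0 - v) / steps
    total = 0.5 * (1 - v * v) ** ((L - 3) / 2)   # the t = 1 endpoint contributes 0
    for k in range(1, steps):
        t = v + k * h
        total += (1 - t * t) ** ((L - 3) / 2)
    return total * h

def rhs(L, v):
    # leading-order endpoint estimate (1 - v^2)^((L-1)/2) / (L v)
    return (1 - v * v) ** ((L - 1) / 2) / (L * v)

v = 0.25
r400 = lhs(400, v) / rhs(400, v)
r1600 = lhs(1600, v) / rhs(1600, v)
print(r400, r1600)  # both ratios approach 1 as L grows
```

The relative error of the leading-order estimate decays like $1/L$, so doubling $L$ visibly tightens the ratio.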
\begin{remark}
Note that we can indeed interpret (\ref{eqn:Tlimdensnrl1}) as the $L=1$ case of (\ref{eqn:TOElimdensc}): in that case we must have used the weight on the second line of (\ref{def:truncw}), and so the integral and the factor $L(L-1)$ on the RHS of (\ref{eqn:TOElimdensc}) do not appear.
\end{remark}
Comparison of (\ref{eqn:TOEldc}) with (\ref{df}) again shows clearly that we have the anti-spherical analogue of the latter, and on projection to the hyperbolic plane, we have a uniform density proportional to $L$ inside the unit disk, which matches (\ref{eqn:Gcirclaw}) in the real Ginibre ensemble. Also note that (\ref{eqn:TOEldc}) is rotationally invariant, meaning that the symmetry-breaking effect of the real eigenvalues has been overwhelmed by the large number of eigenvalues surrounding any non-real complex point. This implies that this ensemble should manifest behaviour similar to that of an ensemble of truncated unitary matrices. Indeed, comparison of (\ref{eqn:TOEldc}) to results in \cite{Z&S2000} confirms this expectation. This same effect was observed in Chapter \ref{sec:Ginkernelts} (where the real Ginibre ensemble degenerates to the complex Ginibre ensemble) and in Chapter \ref{sec:SOElims} (where the real spherical ensemble approaches the complex spherical ensemble) at points away from the real line, in the limit of a large number of eigenvalues. In Chapter \ref{sec:uasc} we will discuss this further.
\noindent\textbf{\underline{Expected number of real eigenvalues}}
Part of this unification of the correlation functions for analogous real and complex ensembles concerns the expected number of real eigenvalues. Since the effect of the real eigenvalues becomes negligible in the limit of large matrix dimension, we expect that the number of real eigenvalues grows more slowly than the total number of eigenvalues. This turns out to be true; however, we shall see that the expectation is qualitatively different in the strong and weak orthogonality regimes.
\begin{proposition}[\cite{KSZ2010}]
The expected number of real eigenvalues for fixed $0<\alpha<1$ and large $N$ is
\begin{align}
\label{eqn:ENT} E_{N}^{(T)} \sim 2 \frac{ \Gamma((L+1)/2)}{\sqrt{\pi} \;\Gamma(L/2)} \; \mathrm{artanh}\sqrt{\alpha}.
\end{align}
\end{proposition}
\textit{Proof}: The result is obtained by integrating (\ref{eqn:TOElimdensL}) over $(-\sqrt{\alpha}, \sqrt{\alpha})$.
\hfill $\Box$
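As a consistency check (ours, not in the original text), integrating the weakly orthogonal density (\ref{eqn:TOELlimdens}) over $(-\sqrt{\alpha},\sqrt{\alpha})$ should reproduce (\ref{eqn:ENT}) to leading order, since $\Gamma((L+1)/2)/\Gamma(L/2)\sim\sqrt{L/2}$ and $(1-\alpha)N=L$:

```python
import math

def expected_reals(L, alpha):
    # (eqn:ENT): 2 Gamma((L+1)/2) / (sqrt(pi) Gamma(L/2)) * artanh(sqrt(alpha))
    g = math.gamma((L + 1) / 2) / math.gamma(L / 2)
    return 2 * g / math.sqrt(math.pi) * math.atanh(math.sqrt(alpha))

def density_integral(L, alpha):
    # integral of sqrt((1-alpha)N/(2 pi)) / (1-x^2) over (-sqrt(alpha), sqrt(alpha)),
    # with (1-alpha)N = L and the closed form: integral of dx/(1-x^2) = 2 artanh(sqrt(alpha))
    return math.sqrt(L / (2 * math.pi)) * 2 * math.atanh(math.sqrt(alpha))

alpha = 0.25
d100 = abs(expected_reals(100, alpha) - density_integral(100, alpha)) / density_integral(100, alpha)
d300 = abs(expected_reals(300, alpha) - density_integral(300, alpha)) / density_integral(300, alpha)
print(d100, d300)  # relative gap shrinks as L grows
```

The discrepancy is of order $1/(4L)$, coming from the next term of the gamma-ratio asymptotic.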
\begin{itemize}
\item{\underline{Strongly orthogonal}: To obtain the behaviour in the $\alpha\to 1$ strongly orthogonal regime we use the logarithmic expression for $\mathrm{artanh} \: z$ to write
\begin{align}
\nonumber 2 \; \mathrm{artanh} \: \sqrt{\alpha}=\log \frac{1+\sqrt{\alpha}} {1-\sqrt{\alpha}} &= \log \frac{1+\sqrt{1-\gamma}} {1-\sqrt{1-\gamma}}\\
\nonumber &\sim \log \frac{4-\gamma} {\gamma} = \log (4-\gamma) -\log L +\log N,
\end{align}
with $\gamma$ from (\ref{def:alphaML}). Combining this with (\ref{eqn:ENT}) we have for large $M$ and small $L$
\begin{align}
\label{eqn:ENTb} E_M^{(T)}\sim \frac{ \Gamma((L+1)/2)}{\sqrt{\pi} \;\Gamma(L/2)} \log M.
\end{align}}
\item{\underline{Weakly orthogonal}: Using the expansion $\mathrm{artanh} (x)=x+ x^3/3+ x^5/5+...$ we have, in the case that $\alpha\to 0$,
\begin{align}
\label{eqn:ENTa} E_{N}^{(T)} \sim \sqrt{\frac{2(\alpha-\alpha^2)N} {\pi}} \sim \sqrt{\frac{2 M} {\pi}},
\end{align}
where we have also used (\ref{eqn:bigLBeta}) and recalled from (\ref{def:alphaML}) that $\alpha=M/N$. Comparing (\ref{eqn:ENTa}) to (\ref{eqn:bigNEN}) we see that, in the weakly orthogonal limits, we have correspondence with the real Ginibre ensemble.}
\end{itemize}
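The elementary $\mathrm{artanh}$ manipulations used in both regimes above can be verified numerically; the parameter values below are illustrative choices of our own:

```python
import math

# logarithmic form: 2 artanh(sqrt(a)) = log((1 + sqrt(a)) / (1 - sqrt(a)))
a = 0.9999
log_form = math.log((1 + math.sqrt(a)) / (1 - math.sqrt(a)))
artanh_form = 2 * math.atanh(math.sqrt(a))

# strongly orthogonal estimate: with alpha = 1 - g and g small,
# log((1 + sqrt(1-g)) / (1 - sqrt(1-g))) is close to log((4 - g) / g)
g = 1e-3
strong_lhs = math.log((1 + math.sqrt(1 - g)) / (1 - math.sqrt(1 - g)))
strong_rhs = math.log((4 - g) / g)

# weakly orthogonal estimate: artanh x = x + x^3/3 + x^5/5 + ...
x = 0.05
series = x + x**3 / 3 + x**5 / 5
print(log_form, artanh_form, strong_lhs, strong_rhs, series, math.atanh(x))
```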
\subsubsection{Universality and the `anti-spherical' conjecture}
\label{sec:uasc}
As we saw in the previous section, the expected number of real eigenvalues in both the strongly orthogonal (\ref{eqn:ENTb}) and weakly orthogonal limits (\ref{eqn:ENTa}) grows much more slowly than does the total number of eigenvalues. This same behaviour was manifest in both the real Ginibre and real spherical ensembles (see Chapters \ref{sec:Ginkernelts} and \ref{sec:Ssops} respectively), which led to the observation that, in the limit of large matrix dimension, the general eigenvalue density of the real (Ginibre/spherical) ensemble approached that of the corresponding complex ensemble. Recall from (\ref{eqn:TOEldc}), and the discussion below it, that we have the same phenomenon here. This leads us to conjecture that there is a law analogous to the circular law (Proposition \ref{prop:Gincirclaw}) and spherical law (Proposition \ref{prop:sphlaw}) for the anti-spherical case.
\begin{conjecture}[Anti-spherical law]
\label{con:asl}
Let $\mathbf{X}$ be an $N\times N$ matrix with iid entries, of zero mean and variance one. If $\hat{\mathbf{X}}$ is obtained from $\mathbf{X}$ by Gram-Schmidt orthogonalisation and
\begin{align}
\nonumber \hat{\mathbf{X}}= \left[\begin{array}{cc}
\mathbf{A}_{L\times L} & \mathbf{B}_{L\times M}\\
\mathbf{C}_{M\times L} & \mathbf{D}_{M\times M}
\end{array}\right]_{N\times N},
\end{align}
then the eigenvalues of $\mathbf{D}_{M\times M}$ are uniformly distributed, when projected on the anti-sphere, in the limit of large $N$.
\end{conjecture}
\begin{remark}
In \cite{Z&S2000} the authors present some interesting diagrams showing the eigenvalue density profiles for various values of $M$ and $N$ for truncations of both orthogonal and unitary matrices.
\end{remark}
We are also now in a position to identify a different correspondence. In the discussion at the beginning of this chapter we anticipated that for $L$ large we should obtain some Ginibre-like behaviour. We can make the correspondence exact in the regime $\alpha\to 0$. In effect we will undertake the anti-spherical analogue of the procedure in Chapter \ref{sec:SOEsclims}; we focus our view on a small enough region surrounding the origin such that the curvature of the underlying space (here $\kappa<0$) can be neglected, and so we are approximating the planar case. We must keep the eigenvalues sufficiently close to the real line to preserve the distinctive $\beta=1$ behaviour and so we scale them by $1/\sqrt{L}$, which we will find results in the bulk real Ginibre statistics.
Recall from the discussion at the beginning of Chapter \ref{sec:TOEkernelts} that once the kernel elements $D$ and $S_{rr}$ are specified, we can use the interrelations (\ref{eqn:TOErels1}) and (\ref{eqn:TOErels2}) to obtain the remaining ones. First note that
\begin{align}
\label{eqn:bigLfact} \frac{(L+j)!}{L!} \sim L^{j},
\end{align}
and so
\begin{align}
\nonumber \sum_{j=0}^{M-2} \frac{(L+j)!}{L! j!} \left(\frac{\mu\eta}{L}\right)^j \mathop{\sim}\limits_{L \to \infty} \sum_{j=0}^{M-2} \frac{(\mu\eta)^j }{j!}=e^{\mu\eta} \: \frac{\Gamma (M-1,\mu\eta)}{\Gamma(M-1)}.
\end{align}
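The last step above is the standard identity relating partial sums of the exponential series to the upper incomplete gamma function, $\sum_{j=0}^{n} x^j/j! = e^x\,\Gamma(n+1,x)/\Gamma(n+1)$. A brute-force numerical check (with illustrative values of $M$ and the argument; the incomplete gamma is computed by quadrature):

```python
import math

def upper_gamma(s, x, steps=200_000, cutoff=80.0):
    # Gamma(s, x) = integral from x to infinity of t^(s-1) e^(-t) dt,
    # truncated at `cutoff` and computed by trapezoidal quadrature
    h = (cutoff - x) / steps
    f = lambda t: t ** (s - 1) * math.exp(-t)
    total = 0.5 * (f(x) + f(cutoff))
    for k in range(1, steps):
        total += f(x + k * h)
    return total * h

M, z = 8, 2.3
partial_sum = sum(z**j / math.factorial(j) for j in range(M - 1))   # j = 0 .. M-2
closed_form = math.exp(z) * upper_gamma(M - 1, z) / math.gamma(M - 1)
print(partial_sum, closed_form)
```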
We also have
\begin{align}
\nonumber \frac{L-2}{2} \log |1-\mu^2/L| \sim -\frac{\mu^2+\bar{\mu}^2}{4},
\end{align}
and
\begin{align}
\nonumber \int_{\nu}^1 (1-t^2)^{(L-3)/2} dt &\sim \int_{\nu}^1 e^{-L\: t^2/2} dt =\sqrt{\frac{2}{L}} \int_{\nu \sqrt{L/2}}^{\sqrt{L/2}} e^{-t^2} dt\\
\label{eqn:interfclim} &\sim \sqrt{\frac{2}{L}} \int_{\sqrt{2}\: \mathrm{Im}(z)}^{\infty} e^{-t^2} dt= \sqrt{\frac{\pi} {2L}} \; \mathrm{erfc} (\sqrt{2}\: \mathrm{Im}(z)),
\end{align}
where $\nu= 2 \: \mathrm{Im}(z/\sqrt{L})/ |1-z^2/L|$. Using these in (\ref{eqn:SOEDsum}) we have
\begin{align}
\nonumber D\left(\mu/\sqrt{L}, \eta/\sqrt{L} \right)_T &\mathop{\sim}\limits_{L \to \infty} \frac{L (\eta-\mu)}{\sqrt{2\pi}} e^{-(\mu^2+ \bar{\mu}^2)/4} e^{-(\eta^2+ \bar{\eta}^2)/4} \sqrt{\mathrm{erfc} \left( \sqrt{2} \mathrm{Im}(\mu) \right) \mathrm{erfc} \left( \sqrt{2} \mathrm{Im}(\eta) \right)}\\
\label{eqn:bigLD} & \times e^{\mu\eta} \: \frac{\Gamma (M-1,\mu\eta)}{\Gamma (M-1)}.
\end{align}
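The Gaussian/erfc limit (\ref{eqn:interfclim}) can also be checked by brute-force quadrature; the values of $L$ and $z$ below are illustrative assumptions for this sketch:

```python
import math

L = 4000
z = 0.3 + 0.4j
nu = 2 * (z / math.sqrt(L)).imag / abs(1 - z * z / L)

# quadrature of the integral from nu to 1 of (1 - t^2)^((L-3)/2)
steps = 400_000
h = (1.0 - nu) / steps
f = lambda t: (1 - t * t) ** ((L - 3) / 2)
quad = (0.5 * (f(nu) + f(1.0)) + sum(f(nu + k * h) for k in range(1, steps))) * h

# limiting form sqrt(pi/(2L)) * erfc(sqrt(2) Im(z))
limit = math.sqrt(math.pi / (2 * L)) * math.erfc(math.sqrt(2) * z.imag)
print(quad, limit)
```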
Next, we apply the same $1/\sqrt{L}$ scaling to (\ref{eqn:TOEsumS}),
\begin{align}
\nonumber &S_{rr}(x/\sqrt{L}, y/\sqrt{L})_T = \frac{L}{2} \frac{ \Gamma((L+1)/2)}{\sqrt{\pi}\; \Gamma(L/2)} \; (x/\sqrt{L})^{M-1} (1-x^2/L)^{L/2-1} \frac{(L+M-2)!} {L! (M-2)!}\\
\nonumber &\times \int_{-1}^{1} \mathrm{sgn} (y/\sqrt{L} -z) z^{M-2} (1-z^2)^{L/2-1}dz\\
\label{eqn:TOESrrbigL} & + \frac{ \Gamma((L+1)/2)}{\sqrt{\pi} \Gamma(L/2)} (1-x^2/L)^{L/2-1} (1-y^2/L)^{L/2} \sum_{j=0}^{M-2} \frac{(L+j-1)!}{(L-1)! j!} \left( \frac{xy}{L} \right)^j,
\end{align}
with $M$ assumed to be even for convenience. The second term in (\ref{eqn:TOESrrbigL}) can be dealt with using exactly the same procedure that led to (\ref{eqn:bigLD}), while for the first term note that
\begin{align}
\nonumber &\int_{-1}^{1} \mathrm{sgn} (y/\sqrt{L} -z) z^{M-2} (1-z^2)^{L/2-1}dz = 2 \; \mathrm{sgn} (y) \int_{0}^{y/\sqrt{L}} z^{M-2} (1-z^2)^{L/2-1}dz\\
\nonumber & \mathop{\sim}\limits_{L \to \infty} 2\; \mathrm{sgn} (y) \int_0^{y/\sqrt{L}} z^{M-2} e^{-Lz^2/2} dz = \mathrm{sgn} (y) \left(\frac{2}{L} \right)^{(M-1)/2} \gamma ((M-1)/2,y^2/2),
\end{align}
which gives
\begin{align}
\nonumber S_{rr}(x/\sqrt{L}, y/\sqrt{L})_T \mathop{\sim}\limits_{L \to \infty} &\sqrt{\frac{L}{2 \pi}} \left( 2^{(M-3)/2} \mathrm{sgn} (y) x^{M-1} e^{-x^2/2} \frac{\gamma((M-1)/2,y^2/2)}{\Gamma(M-1)} \right.\\
\label{eqn:bigLSrr} &\left. + e^{-(x-y)^2/2} \frac{\Gamma (M-1,xy)}{\Gamma(M-1)} \right),
\end{align}
where we have also used (\ref{eqn:bigLfact}).
We can use (\ref{eqn:TOErels1}) and (\ref{eqn:TOErels2}) to compare (\ref{eqn:bigLD}) and (\ref{eqn:bigLSrr}) to the corresponding real Ginibre results in (\ref{eqn:Ginsummed}). Also, by taking $x=y\in \mathbb{R}$ and $\mu=\eta=z\in \mathbb{C}$, we obtain the real and complex densities
\begin{align}
\nonumber \rho_{(1)} ^{r} \left(x/\sqrt{L}\right) = S_{rr}\left(x/\sqrt{L}, x/\sqrt{L} \right)_T \mathop{\sim}\limits_{L \to \infty} &\sqrt{\frac{L}{2 \pi}} \left( 2^{(M-3)/2} |x|^{M-1} e^{-x^2/2} \frac{\gamma((M-1)/2,x^2/2)}{\Gamma(M-1)} \right.\\
\label{eqn:bigLSrd} &\left. + \frac{\Gamma (M-1,x^2)}{\Gamma(M-1)} \right),
\end{align}
\begin{align}
\label{eqn:bigLDcd} \rho_{(1)} ^{c} \left(z/\sqrt{L}\right) = S_{cc}\left(z/\sqrt{L}, z/\sqrt{L} \right)_T \mathop{\sim}\limits_{L \to \infty} \sqrt{\frac{2}{\pi}}\: L\: y\: e^{- (z+\bar{z})^2/2} \frac{\Gamma (M-1,|z|^2)}{\Gamma (M-1)}\: \mathrm{erfc} (\sqrt{2} y),
\end{align}
which agree with (\ref{eqn:Ginreal_density}) and (\ref{eqn:Gincompx_density}), up to a factor of $\sqrt{L}$ in the real case, and $L$ in the complex case. (The same factors appeared in (\ref{eqn:bigLD}) and (\ref{eqn:bigLSrr}).) We anticipate these factors since in the large $N$ limit the complex eigenvalues, for instance, are supported on a disk of radius $\sqrt{\alpha}$ and so
\begin{align}
\nonumber \frac{\# \mathrm{eigenvalues}}{\mathrm{area}} = \frac{M}{\pi \alpha} \mathop{\sim}\limits_{\alpha \to 0} \frac{L}{\pi}.
\end{align}
The real case follows, since those eigenvalues are supported on an interval of radius $\sqrt{L}$.
As expected, we have found correspondence with the real Ginibre ensemble in the weakly orthogonal regimes --- demonstrated by (\ref{eqn:bigLDcd}) and (\ref{eqn:bigLSrd}). A visual demonstration is found in Figure \ref{fig:TOEtoGin}, where \ref{fig:GinOEn10j200} represents the standard real Ginibre ensemble and \ref{fig:TOEn1600m20j200} represents the large $L$ limit of the real truncated ensemble; both sets of eigenvalues have been scaled to the unit disk.
\begin{figure}[htp]
\begin{center}
\subfloat[]{\label{fig:GinOEn10j200} \includegraphics[scale=0.5]{GinOEn20j200ascreen.pdf}} \hspace{24pt}
\subfloat[]{\label{fig:TOEn1600m20j200} \includegraphics[scale=0.5]{TOEn1600m20j200ascreen.pdf}}
\caption[Comparison of simulated eigenvalue plots for the real Ginibre ensemble and the weakly orthogonal limit of the real truncated ensemble.]{Eigenvalues from 200 instances of: (a) $20\times 20$ real Ginibre matrices, scaled by $1/\sqrt{N}$; and (b) $1600\times 1600$ real truncated matrices with $M=20$, scaled by $\sqrt{N/M}$.}
\label{fig:TOEtoGin}
\end{center}
\end{figure}
We may also compare the $p_{N,k}$ statistics in Tables \ref{tab:pnkxact_sim} and \ref{tab:TOEpnkL8} where we find some convergence.
\newpage
\section{Further work}
\setcounter{figure}{0}
\label{sec:FW}
The first obvious direction to proceed from the results presented in this work is to establish (one way or the other) Conjecture \ref{con:asl}, the anti-spherical law. It seems that an extension of the method of Tao and Vu \cite{TVK10} (which they used for the circular law), akin to that of Bordenave \cite{Bord2010} for the spherical law, will enable this statement to be proven. We can also ask a deeper question on the eigenvalue distribution of these truncated ensembles. Recall from the discussion in Chapter \ref{sec:TOEmjpdf} that the distribution of the sub-block matrix $\mathbf{D}$ from (\ref{def:Rdecomp}) contains singular factors when $L<M$ (that is, for a small truncation), but when $L\geq M$ then we have the continuous expression (\ref{eqn:truncpdf}). The question then is: why is it that the eigenvalue distribution (\ref{eqn:TOEevaljpdf}), which we established from (\ref{eqn:truncpdf}), turns out to be identical to that obtained in \cite{KSZ2010}, where the restriction on the relative size of $L$ and $M$ was circumvented (as outlined below Remark \ref{rem:TOEana})? Somehow the singularities in the distribution of the sub-block vanish when we change variables to the distribution of eigenvalues.
Another anomaly in the case of the truncated ensemble is that we have not been able to strictly apply the 5-step method of Chapter \ref{sec:GOE_steps} as we did successfully with the other ensembles in this work; the missing component is that we do not yet know how to calculate the skew-orthogonal polynomials (\ref{eqn:TOEsops}) independently of the correlation kernel (\ref{eqn:SOEDsum}). One way to do this might be by first noting from \cite[Chapter 6]{forrester?} that with $a=b=L-1$ the Jacobi weight function
\begin{align}
\nonumber (1-x)^{(a-1)/2}(1+x)^{(b-1)/2}, \quad x\in (-1,1),
\end{align}
is structurally similar to (\ref{eqn:TOErweight}). The relevant skew-orthogonal polynomials are constructed from the classical Jacobi polynomials as described by Forrester \cite{forrester?}. It may be possible to use these results, perhaps by introducing an interpolating parameter like that used in Chapter \ref{sec:tG}, to obtain the skew-orthogonal polynomials for the real truncated ensemble.
As discussed in the Introduction, Dyson's three-fold way tells us that random matrix ensembles are naturally classified by the parameter $\beta$ in (\ref{eqn:evalbeta}). In this work we have focused on the $\beta=1$ real matrix ensembles; the $\beta=4$ cases of the spherical and truncated ensembles are yet to be investigated, while for the $\beta=4$ Ginibre ensemble see \cite{kanzieper2001}.
From the construction of the real spherical ensemble in Chapter \ref{sec:SOE} we see that the product $\mathbf{A}^{-1}\mathbf{B}$ (with $\mathbf{A}$ and $\mathbf{B}$ having Gaussian entries) is a matrix generalisation of a Cauchy random variable. We may attempt to form a spherical ensemble from any such product of Gaussian matrices, including the Hermitian Gaussian ensembles (GOE, GUE and GSE). Note that while the inverse of a Hermitian matrix is still Hermitian, the product of two Hermitian matrices is not in general Hermitian. This means that we expect the eigenvalues of $\mathbf{A}^{-1}\mathbf{B}$ to be complex. For instance, taking $\mathbf{A}, \mathbf{B}$ as GOE matrices (\ref{eqn:GOE_el_dist}), by simulation we obtain Figure \ref{fig:sGOE}. Since the product $\mathbf{A}^{-1}\mathbf{B}$ is a general real matrix we are not surprised to find a ring corresponding to a non-zero density of real eigenvalues.
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.625]{ainvb_GOE10x10_1000a145.pdf}
\caption[Simulated eigenvalue plot for a spherical ensemble of GOE matrices.]{Stereographic projection of the eigenvalues of 1000 instances of the product $\mathbf{A}^{-1}\mathbf{B}$, where $\mathbf{A},\mathbf{B}$ are GOE matrices.}
\label{fig:sGOE}
\end{center}
\end{figure}
However, if the matrices are taken from the GUE or the GSE, as in Figure \ref{fig:sGUEsGSE}, then we see the same ring, although these matrices have complex entries, and so \textit{a priori} we do not expect any real eigenvalues at all. Superficially, these distributions resemble those of the real spherical ensemble (Figure \ref{fig:dstar}), although note that for increasing $\beta$ (that is, from the GOE with $\beta=1$ in Figure \ref{fig:sGOE} to the GUE with $\beta=2$ in Figure \ref{fig:sGUE}, to the GSE with $\beta=4$ in Figure \ref{fig:sGSE}) there is stronger repulsion from the great circle. It would be interesting to discover whether this density is, firstly, integrable and, secondly, identical to that of the analogous spherical ensembles.
\begin{figure}[htp]
\begin{center}
\subfloat[]{\label{fig:sGUE} \includegraphics[scale=0.625]{ainvb_GUE10x10_1000a145.pdf}}
\subfloat[]{\label{fig:sGSE} \includegraphics[scale=0.625]{ainvb_GSE10x10_1000a145.pdf}}
\caption[Simulated eigenvalue plots for spherical ensembles of GUE and GSE matrices.]{Stereographic projection of the eigenvalues of 1000 instances of the product $\mathbf{A}^{-1}\mathbf{B}$, where $\mathbf{A},\mathbf{B}$ are (a) GUE matrices and (b) GSE matrices.}
\label{fig:sGUEsGSE}
\end{center}
\end{figure}
As suggested by Christopher Sinclair, from numerical simulations we can find another construction that seems to reproduce the real spherical distribution: the $*$-cosquare ensemble $\mathbf{A}^*\mathbf{A}^{-1}$, where $\mathbf{A}$ is a complex Ginibre matrix. As we see in Figure \ref{fig:cosquarea}, we have a ring of eigenvalues around the equator instead of through the poles as in Figure \ref{fig:dstar}, but otherwise the distributions seem identical. This equatorial ring comes from the non-zero density of eigenvalues on the unit circle that is an attribute of these $*$-cosquare ensembles; in the $1\times 1$ case this is clear since $r e^{-i\theta}/r e^{i\theta}=e^{-2i\theta}$ obviously lies on the unit circle. One suspects that the distribution of $\mathbf{A}^*\mathbf{A}^{-1}$ should match that of the real spherical ensemble.
\begin{figure}[htp]
\begin{center}
\subfloat[]{\label{fig:cosquarea} \includegraphics[scale=0.625]{compcosquarea145.pdf}}
\subfloat[]{\label{fig:cosquareb} \includegraphics[scale=0.625]{realcosquarea145.pdf}}
\caption[Simulated eigenvalue plots of complex and real $*$-cosquare matrices.]{Stereographic projection of the eigenvalues of 1000 instances of the product $\mathbf{A}^{*}\mathbf{A}^{-1}$, where $\mathbf{A}$ is (a) a complex Ginibre matrix and (b) a real Ginibre matrix.}
\end{center}
\end{figure}
We can further enliven matters by taking $\mathbf{A}$ a real Ginibre matrix, which results in a spherical distribution such as in Figure \ref{fig:cosquareb}. Note that we now have two rings; one corresponding to the eigenvalues on the unit circle (coming from the $*$-cosquare construction) and another corresponding to the real eigenvalues that we expect in any real matrix. This represents a set of three species of eigenvalue. In this case the constructions we developed in Chapter \ref{sec:Gincorrlnse} for calculating the correlation functions of multiple disjoint sets of eigenvalues will likely prove useful. For example, we expect to find a $6\times 6$ kernel in the result analogous to (\ref{eqn:4x4_fred}), to which we could apply functional differentiation to calculate the correlation functions.
We can obtain a similar distribution by taking spherical ($\mathbf{A}^{-1}\mathbf{B}$) products of matrices that arise in the study of chiral ensembles. A chiral ensemble contains matrices of the form
\begin{align}
\nonumber \mathbf{X} = \left[\begin{array}{cc}
\mathbf{0}_{L\times L} & \mathbf{C}_{L\times M}\\
(\mathbf{D}_{L\times M})^T & \mathbf{0}_{M\times M}
\end{array} \right];
\end{align}
these ensembles have attracted increasing interest over the last two decades (beginning with \cite{Verb1994a, Verb1994b}) because of their relationship with quantum chromodynamics (QCD). Using the relation
\begin{align}
\nonumber \det \left[ \begin{array}{cc}
\mathbf{M}_1 & \mathbf{M}_2\\
\mathbf{M}_3 & \mathbf{M}_4
\end{array}\right] = \det (\mathbf{M}_4) \det (\mathbf{M}_1-\mathbf{M}_2(\mathbf{M}_4)^{-1}\mathbf{M}_3)
\end{align}
the eigenvalues of $\mathbf{X}$ are seen to be given by the $\pm$ square roots of the eigenvalues of the product $\mathbf{C}\mathbf{D}^T$, which implies that they come in three distinct species: purely real, purely imaginary, and $\pm$ conjugate paired quadruplets. In the case that
\begin{align}
\label{eqn:APSch} \mathbf{C}=\mathbf{P}+\mu \mathbf{Q},\qquad \mathbf{D}=\mathbf{P}^T-\mu \mathbf{Q}^T,
\end{align}
where $\mathbf{P}, \mathbf{Q}$ are iid real Gaussian matrices and $\mu\in (0, 1]$, the eigenvalue distribution and correlation functions are calculated in \cite{AkePhilSom2010}. Since these chiral matrices have three eigenvalue species, to obtain a distribution resembling that in Figure \ref{fig:cosquareb}, we might ask for the eigenvalue distribution of the matrix
\begin{align}
\label{eqn:chProd1} \mathbf{X}_{1}^{-1}\mathbf{X}_2 = \left[\begin{array}{cc}
\mathbf{0} & \mathbf{C}_{1}\\
\mathbf{D}_{1}^T & \mathbf{0}
\end{array} \right]^{-1} \left[\begin{array}{cc}
\mathbf{0} & \mathbf{C}_{2}\\
\mathbf{D}_{2}^T & \mathbf{0}
\end{array} \right]= \left[\begin{array}{cc}
(\mathbf{D}_{1}^T)^{-1}\mathbf{D}_{2}^T & \mathbf{0}\\
\mathbf{0} & (\mathbf{C}_{1})^{-1}\mathbf{C}_{2}
\end{array} \right],
\end{align}
where the $\mathbf{C}_i$ and $\mathbf{D}_j$ are all $N\times N$ iid real Gaussian matrices. However the RHS of (\ref{eqn:chProd1}) implies that the set of eigenvalues of $\mathbf{X}_{1}^{-1}\mathbf{X}_2$ is just the union of the eigenvalue sets of the individual real spherical matrices $(\mathbf{D}_{1}^T)^{-1}\mathbf{D}_{2}^T$ and $(\mathbf{C}_{1})^{-1}\mathbf{C}_{2}$, and so we obtain a distribution like that in Figure \ref{fig:dstar}. Instead, if we take the $\pm$ square roots of the eigenvalues of $\mathbf{Y}=\mathbf{A}^{-1}\mathbf{B}$, where
\begin{align}
\label{eqn:AMch} \mathbf{A}= \mathbf{C}_{1} \mathbf{D}_{1}^T, \qquad \mathbf{B}= \mathbf{C}_{2}\mathbf{D}_{2}^T,
\end{align}
with $\mathbf{C}_1, \mathbf{C}_2, \mathbf{D}_1, \mathbf{D}_2$ each an iid real Gaussian matrix, then we obtain a distribution such as that in Figure \ref{fig:sABT} under stereographic projection. Although the matrix $\mathbf{Y}$ can be written as $(\mathbf{C}_{1} \mathbf{D}_{1}^T)^{-1}\mathbf{C}_{2}\mathbf{D}_{2}^T = (\mathbf{C}_{1}^{-T} \mathbf{D}_{1}^{-1})^{T}\mathbf{C}_{2}\mathbf{D}_{2}^T$, this is a quite different ensemble from that studied in \cite{AkePhilSom2010}, since the distributions of the factors $\mathbf{A}^{-T}$ and $\mathbf{B}$ in (\ref{eqn:AMch}) are not the same as those of (\ref{eqn:APSch}).
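As an aside, the block-diagonal form on the RHS of (\ref{eqn:chProd1}) follows because the inverse of an anti-diagonal block matrix is again anti-diagonal. A small pure-Python experiment (our own illustration, using square blocks and ad hoc helper names) confirms the structure numerically:

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inverse(A):
    # Gauss-Jordan elimination with partial pivoting
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        M[c] = [x / M[c][c] for x in M[c]]
        for r in range(n):
            if r != c and M[r][c]:
                M[r] = [x - M[r][c] * y for x, y in zip(M[r], M[c])]
    return [row[n:] for row in M]

def chiral(C, D):
    # X = [[0, C], [D^T, 0]] with square n x n blocks
    n = len(C)
    Dt = transpose(D)
    return [[0.0] * n + C[i] for i in range(n)] + [Dt[i] + [0.0] * n for i in range(n)]

random.seed(1)
n = 3
C1, D1, C2, D2 = ([[random.gauss(0, 1) for _ in range(n)] for _ in range(n)] for _ in range(4))
lhs = matmul(inverse(chiral(C1, D1)), chiral(C2, D2))
blk_top = matmul(inverse(transpose(D1)), transpose(D2))    # (D1^T)^{-1} D2^T
blk_bot = matmul(inverse(C1), C2)                          # C1^{-1} C2
err = max(
    max(abs(lhs[i][j] - blk_top[i][j]) for i in range(n) for j in range(n)),
    max(abs(lhs[n + i][n + j] - blk_bot[i][j]) for i in range(n) for j in range(n)),
    max(abs(lhs[i][n + j]) + abs(lhs[n + i][j]) for i in range(n) for j in range(n)),
)
print(err)  # numerically zero
```

Only standard Gauss-Jordan elimination is used, so the check runs with the standard library alone.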
\begin{figure}[htp]
\begin{center}
\includegraphics[scale=0.625]{ainvb_GinOEABT10x10_1000a145.pdf}
\caption[Simulated eigenvalue plot of a spherical ensemble formed from a chiral ensemble.]{Stereographic projection of the eigenvalues of 1000 instances of the product $(\mathbf{C}_{1} \mathbf{D}_{1}^T)^{-1}\mathbf{C}_{2}\mathbf{D}_{2}^T$, where $\mathbf{C}_1,\mathbf{C}_2, \mathbf{D}_1, \mathbf{D}_2$ are $10\times 10$ real Ginibre matrices.}
\label{fig:sABT}
\end{center}
\end{figure}
Investigation of these ensembles as well as the $*$-cosquare ensembles above (with the tools described in this work) may lead to the hyperpfaffian structures considered in \cite{Sinclair2010}.
As discussed in several parts of the present work, the eigenvalues of random matrices are mutually repulsive, and so have a more uniform distribution than that typically seen in the `clumpy' Poisson distribution (compare Figures \ref{fig:PoiDisk} and \ref{fig:RMTDisk}). This observation led to the work in \cite{LeCHo1990}, where the authors used the eigenvalues of a complex Ginibre matrix to generate a \textit{Voronoi tessellation}, which is more uniform than that generated from Poisson points. A Voronoi tessellation in this context is a tiling made of \textit{Voronoi cells} or polygons, each of which surrounds one eigenvalue and contains all the points closer to that eigenvalue than to any other. The authors compared this tiling to that corresponding to the cellular structure of cucumbers, where they analysed the statistics of the cells including area, perimeter, side length and number of sides. It seems likely that the same analysis (excluding the vegetables) can be performed on a Voronoi tessellation of the sphere, adapting the work in \cite{SafKui1997}. The spherical geometry introduces a richer structure since it is known (by considering the Euler characteristic) that a sphere cannot be tiled by hexagons. This fact implies that some number of other polygons are required to complete the tessellation. For example, if the only other polygon is a pentagon, then there must be 12 of those present, while if heptagons are also allowed, then there must be exactly 12 more pentagons than heptagons. From such considerations one expects to obtain quite different statistics to those seen in the analogous problem on the plane. The complex analogue of the real spherical ensemble provides a way of randomly distributing repulsive points on a sphere, and so it can be used to generate one of these more uniform Voronoi tessellations, which is a convenient place to begin the spherical analysis.
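For completeness we sketch the Euler characteristic argument behind these counts (our own addition). In a trivalent tiling of the sphere (three cells meeting at each vertex, as is generic for a Voronoi tessellation) with $f_n$ faces having $n$ sides, we have $3V=2E$ and $\sum_n n f_n = 2E$, so that $V-E+F=2$ yields
\begin{align}
\nonumber \sum_n (6-n) f_n = 12,
\end{align}
giving $f_5=12$ when only pentagons and hexagons occur, and $f_5-f_7=12$ when heptagons are admitted as well.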
An important fact in the considerations of the circular law and its analogues (the spherical law and the anti-spherical law) is that the number of real eigenvalues grows sub-dominantly; see (\ref{eqn:bigNEN}), (\ref{eqn:SENasy}) and (\ref{eqn:ENT}). Without this fact being true, the circular, spherical and anti-spherical laws could not hold. Clearly, the total number of eigenvalues of an $N\times N$ matrix is $N$, and from (\ref{eqn:bigNEN}) we know that the number of reals in the real Ginibre ensemble is proportional to $\sqrt{N}$. Experimental evidence in \cite{eks1994} suggests that this $\sqrt{N}$ law is universal for an ensemble of matrices with iid entries from any zero mean, finite variance distribution. This is as yet unconfirmed. Further, we see from (\ref{eqn:SENasy}) that there is at least one ensemble (the real spherical ensemble) with non-iid entries that exhibits a similar behaviour. This may lead one to speculate that the $\sqrt{N}$ law extends to non-iid ensembles; however, the real truncated ensemble shows that this is not true. We see from (\ref{eqn:ENT}) that the $\sqrt{N}$ law only holds in the weakly orthogonal limits, which, as we discussed, just reclaims the real Ginibre results. In the strongly orthogonal limit, the number of reals grows as $\log N$. An investigation of real eigenvalues seems to have been neglected amidst the general coalescence of interest around the distribution of complex eigenvalues, yet is clearly warranted.
\newpage
| {
"timestamp": "2012-02-07T02:03:50",
"yymm": "1202",
"arxiv_id": "1202.1218",
"language": "en",
"url": "https://arxiv.org/abs/1202.1218",
"abstract": "We present a five-step method for the calculation of eigenvalue correlation functions for various ensembles of real random matrices, based upon the method of (skew-) orthogonal polynomials. This scheme systematises existing methods and also involves some new techniques. The ensembles considered are: the Gaussian Orthogonal Ensemble (GOE), the real Ginibre ensemble, ensembles of partially symmetric real Gaussian matrices, the real spherical ensemble (of matrices $A^{-1}B$), and the real anti-spherical ensemble (consisting of truncated real orthogonal matrices). Particular emphasis is paid to the variations in method required to treat odd-sized matrices. Various universality results are discussed, and a conjecture for an `anti-spherical law' is presented.",
"subjects": "Mathematical Physics (math-ph); Probability (math.PR)",
"title": "A geometrical triumvirate of real random matrices",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9820137863531804,
"lm_q2_score": 0.7217432122827968,
"lm_q1q2_score": 0.7087617846685366
} |
https://arxiv.org/abs/1611.07820 | Local variational study of 2d lattice energies and application to Lennard-Jones type interactions | In this paper, we focus on finite Bravais lattice energies per point in two dimensions. We compute the first and second derivatives of these energies. We prove that the Hessian at the square and the triangular lattice are diagonal and we give simple sufficient conditions for the local minimality of these lattices. Furthermore, we apply our result to Lennard--Jones type interacting potentials that appear to be accurate in many physical and biological models. The goal of this investigation is to understand how the minimum of the Lennard--Jones lattice energy varies with respect to the density of the points. Considering the lattices of fixed area A, we find the maximal open set to which A must belong so that the triangular lattice is a minimizer (resp. a maximizer) among lattices of area A. Similarly, we find the maximal open set to which A must belong so that the square lattice is a minimizer (resp. a saddle point). Finally, we present a complete conjecture, based on numerical investigations and rigorous results among rhombic and rectangular lattices, for the minimality of the classical Lennard--Jones energy per point with respect to its area. In particular, we prove that the minimizer is a rectangular lattice if the area is sufficiently large. | \section{Introduction}
\subsection{Minimization at high and low densities: our previous works}
In our previous work with Zhang \cite{Betermin:2014fy}, generalized in \cite{BetTheta15}, we studied some two-dimensional lattice energies among Bravais lattices. More precisely, these energies are defined, for any Bravais lattice $L=\mathbb Z u\oplus \mathbb Z v$, by
$$
E_f[L]:=\sum_{p\in L\backslash\{0\}}f(|p|^2),
$$
where $f:(0,+\infty)\to \mathbb R$ is the interacting potential, with $|f(r)|=O(r^{-p})$, $p>1$ in order to get $|E_f(L)|<+\infty$ for any $L\subset \mathbb R^2$. Thus, using the optimality of the triangular lattice $\Lambda_1$ (see \eqref{deftriA} for the precise definition of $\Lambda_A$) for the theta functions, defined for any $\alpha>0$ by
$$
\theta_L(\alpha):=\sum_{p\in L} e^{-\pi\alpha |p|^2},
$$
proved by Montgomery \cite{Mont}, we obtained the minimality of the triangular lattice, at high density\footnote{In this paper, as in \cite{Betermin:2014fy,BetTheta15}, we give our results in terms of area, i.e. the area $|u \wedge v|$ of the primitive cell of the lattices, instead of the density.} (i.e. for an area $A\to 0$), for some interacting potentials $f$. In particular, we proved the following results for Lennard-Jones type interactions (see \cite{BetTheta15} for examples and motivation), defined, for $a=(a_1,a_2)$ and $t=(t_1,t_2)$, by
\begin{equation}\label{DEFLJINTRO}
V_{a,t}^{LJ}(r):=\frac{a_2}{r^{t_2}}-\frac{a_1}{r^{t_1}},\quad (a_1,a_2)\in (0,+\infty)^2,\quad 1<t_1<t_2.
\end{equation}
\begin{thm}[\cite{Betermin:2014fy,BetTheta15}] Let $V_{a,t}^{LJ}$ be defined by \eqref{DEFLJINTRO}. Then:
\begin{enumerate}
\item for any $A\leq \pi\left(\frac{a_2\Gamma(t_1)}{a_1\Gamma(t_2)} \right)^{\frac{1}{t_2-t_1}}$, the triangular lattice $\Lambda_A$ is the unique minimizer, up to rotation, of $L\mapsto E_{V_{a,t}^{LJ}}[L]$ among Bravais lattices of fixed area $A$;
\item the triangular lattice $\Lambda_A$ is a minimizer of $E_{V_{a,t}^{LJ}}$ among Bravais lattices of fixed area $A$ if and only if
\begin{equation}\label{UBLJ}
A\leq \inf_{|L|=1,L\neq \Lambda_1}\left(\frac{a_2\left( \zeta_L(2t_2)-\zeta_{\Lambda_1}(2t_2) \right)}{a_1\left( \zeta_L(2t_1)-\zeta_{\Lambda_1}(2t_1) \right)} \right)^{\frac{1}{t_2-t_1}},
\end{equation}
where the infimum is taken over the Bravais lattices $L$ of area $1$ and $\displaystyle \zeta_L(s):=\sum_{p \in L\backslash\{0\}} \frac{1}{|p|^s}$, $s>2$, is the Epstein zeta function;
\item if $\pi^{-t_2}\Gamma(t_2)t_2\leq \pi^{-t_1}\Gamma(t_1)t_1$, then the minimizer of $L\mapsto E_{V_{a,t}^{LJ}}[L]$ among all the Bravais lattices (without a density constraint) is unique and triangular.
\end{enumerate}
\end{thm}
We remark that point 2. implies the non-minimality of $\Lambda_A$ if $A$ is sufficiently large. In \cite{Betermin:2014fy}, we numerically computed the right-hand side of \eqref{UBLJ} in the classical case $V(r)=r^{-6}-2r^{-3}$, finding
$$
A_{BZ}:= \inf_{|L|=1\atop L\neq \Lambda_1} \left(\frac{\zeta_L(12)-\zeta_{\Lambda_1}(12)}{2(\zeta_L(6)-\zeta_{\Lambda_1}(6))} \right)^{1/3}\approx 1.138.
$$
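This value can be reproduced by a direct truncated-sum computation. The sketch below (the truncation radius \texttt{N} is our choice and is not part of the original computation) evaluates the ratio of \eqref{UBLJ} at the square lattice $(0,1)$, where the infimum appears numerically to be attained:

```python
import math

def zeta_lattice(x, y, s, N=40):
    """Truncated Epstein zeta function zeta_L(s) of the unit-area lattice
    parametrized by (x, y), with |p|^2 = (m + x n)^2 / y + y n^2."""
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            total += ((m + x * n) ** 2 / y + y * n * n) ** (-s / 2)
    return total

# square point (0, 1) and triangular point (1/2, sqrt(3)/2) of D
sq, tri = (0.0, 1.0), (0.5, math.sqrt(3) / 2)
num = zeta_lattice(*sq, 12) - zeta_lattice(*tri, 12)
den = 2 * (zeta_lattice(*sq, 6) - zeta_lattice(*tri, 6))
A_BZ_estimate = (num / den) ** (1 / 3)
```

With \texttt{N = 40} the truncation error is negligible for these exponents, and the resulting estimate is close to $1.138$.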
Furthermore, we conjectured that the square lattice must be a minimizer for some interval of areas larger than $A_{BZ}$. Our method, based on the global optimality of the triangular lattice for $L\mapsto \theta_L(\alpha)$, is obviously not suited to proving the optimality of another lattice (square, rectangular or rhombic). Thus, the goal of this paper is to study $L\mapsto E_V[L]$ locally, in order to get more information about the optimality of the triangular and square lattices, and then to make our conjecture about the minimizers with respect to the area more precise.\\
The study of this kind of energy is important for finding good candidates for some crystallization problems (see \cite{Blanc:2015yu} for a recent review) where the interacting potential is radial (as in \cite{Crystal}). Indeed, the study of $L\mapsto E_f[L]$ gives a good intuition of the shape of the minimizer of
$$
\mathcal{E}_f(x_1,...,x_N):=\sum_{i\neq j} f(|x_i-x_j|^2).
$$
Furthermore, the triangular lattice plays a fundamental role. Indeed, it was proved by Rankin \cite{Rankin}, Cassels \cite{Cassels}, Ennola \cite{Eno2} and Diananda \cite{Diananda}, in a series of improving papers (see the recent review by Henn \cite{Henn}), that $\Lambda_A$ is the only minimizer of $L\mapsto \zeta_L(s)$, $s>0$, among Bravais lattices of fixed area $A$, for any $A>0$. This result was rediscovered by Montgomery \cite{Mont} from the optimality of the triangular lattice for the theta functions. Moreover, it is easy to show (see \cite[Prop. 3.1]{BetTheta15}) that, for any $A>0$, $\Lambda_A$ is the unique minimizer of $L\mapsto E_f[L]$ if $f$ is completely monotone. Hence, these results have been used extensively in mathematical physics (superconductivity, Bose-Einstein condensates, diblock copolymer melts, etc.), where many complex interactions are simplified into a two-body interaction in the periodic case (see for instance \cite{CheOshita,Sandier_Serfaty,SerfRoug15,JMJ:9726774,AftBN,Zhang}).\\
\subsection{Main results about the local minimality of square and triangular lattices}
The local study of $L\mapsto E_f[L]$, i.e. the search for its local minimizers/maximizers and saddle points, allows one to characterize the local stability of some special lattices. For that, it is convenient to use the usual parametrization in the 2d half modular domain\footnote{It is sufficient to study these energies with parameters in $\mathcal{D}$ because of the symmetry of the energies with respect to the $(Oy)$ axis.} $\mathcal{D}=\{(x,y)\in \mathbb R^2; 0\leq x \leq 1/2 , y>0 ; x^2+y^2\geq 1\}$, which we present in Section \ref{secparam}, as in \cite{Rankin,Mont}. The topology on lattices is then the usual topology of $\mathcal{D}\subset \mathbb R^2$. In this parametrization, the square lattice corresponds to the point $(0,1)$ and the triangular lattice to the point $(1/2,\sqrt{3}/2)$. Montgomery used exactly this parametrization and proved that the theta functions admit only $(0,1)$ and $(1/2,\sqrt{3}/2)$ as critical points: the first is a saddle point and the second a (local) minimizer. We will write $E_f(x,y,A)$ with $(x,y)\in\mathcal{D}$ instead of $E_f[L_A]$ for any Bravais lattice $L_A$ of area $A$. More precisely, we will study energies defined, for $(x,y)\in\mathcal{D}$ and $A>0$, by
$$
E_f(x,y,A)=\sum_{m,n} f\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right),
$$
where the sum is taken over all the $(m,n)\in \mathbb Z^2 \backslash\{(0,0)\}$. The works of Coulangeon, Sch\"urmann and Lazzarini \cite{Coulangeon:kx,Coulangeon:2010uq,CoulLazzarini} give the first general results on the local minimality of some lattices (with enough symmetries) by using sphere designs. In particular, in two dimensions, due to their symmetries, the square and triangular lattices have good properties. They are critical points of $(x,y)\mapsto E_f(x,y,A)$ for any $A>0$ and any $f$ such that the energy is differentiable (see \cite[Thm. 4.4.(1)]{Coulangeon:2010uq}). We will recover this property in Section \ref{firstderiv}. In three dimensions, see \cite{Ennola,SarStromb,BeterminPetrache} for a discussion of the local minimality of the body-centred-cubic and face-centred-cubic lattices for the Epstein zeta and theta functions. For the second-order study in dimension two, we will prove the following result, where $\mathcal{F}$ is the space of functions defined by \eqref{deff}:
\begin{prop}[See Cor. \ref{Hessienne01} and Prop. \ref{derivtriangular} below] \label{THintro2}
For any $f\in \mathcal{F}$, the second derivatives at point $(0,1)$ are
\begin{align*}
&\partial^2_{xx}E_f(0,1,A)=2A\sum_{m,n}n^2 f'\left(A\left[m^2+n^2 \right] \right)+4A^2\sum_{m,n}m^2n^2 f''\left(A\left[m^2+n^2 \right] \right),\\
&\partial^2_{yy}E_f(0,1,A)=2A\sum_{m,n}m^2 f'\left(A\left[m^2+n^2 \right] \right)+A^2\sum_{m,n}(n^2-m^2)^2f''\left(A\left[m^2+n^2 \right] \right),\\
&\partial^2_{xy}E_f(0,1,A)=\partial^2_{yx}E_f(0,1,A)=0.
\end{align*}
Furthermore, the second derivatives at point $(1/2,\sqrt{3}/2)$ are
\begin{align*}
&\partial^2_{xx}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=\partial^2_{yy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)\\
&=\frac{4A}{\sqrt{3}}\sum_{m,n}n^2 f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)+\frac{4A^2}{3}\sum_{m,n}n^4f''\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right),
\end{align*}
and $\partial^2_{xy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=\partial^2_{yx}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=0$.
\end{prop}
In particular, the Hessians at the square and the triangular lattices are both diagonal. Hence, the usual sufficient conditions determining the nature of these critical points are given by two inequalities in the square case, and by a single one in the triangular case. Thus, for any classical interacting potential $f\in \mathcal{F}$ (constructed from exponentials, inverse power laws or other standard functions), the triangular lattice is, for almost every $A>0$, either a local minimizer or a local maximizer (see Corollary \ref{coranalytic}).\\
This result can be applied to many types of potentials. Here, continuing our previous works, we apply it to the Lennard-Jones type potentials defined by \eqref{DEFLJINTRO}. We thus find the largest open sets of areas $A$ for which the square and triangular lattices are local minimizers/maximizers or saddle points on $\mathcal{D}$.
\begin{thm}[See Thm. \ref{THmain1} and Thm. \ref{THmain2} below] \label{THintro3}
We define the following sums:
\begin{center}
\begin{minipage}[l]{0.4\linewidth}
\begin{align*}
&S_1(s)=\sum_{m,n}\frac{m^4}{(m^2+mn+n^2)^s},\\
&S_3(s)=\sum_{m,n}\frac{m^2n^2}{(m^2+n^2)^s},
\end{align*}
\end{minipage}
\begin{minipage}[c]{0.5\linewidth}
\begin{align*}
&S_2(s)=\sum_{m,n}\frac{m^2}{(m^2+n^2)^s},\\
&S_4(s)=\sum_{m,n}\frac{(n^2-m^2)^2}{(m^2+n^2)^s}.
\end{align*}
\end{minipage}
\end{center}
\textnormal{\textbf{Part A: Local optimality of the triangular lattice.}} For any $(a,t)$ as in \eqref{DEFLJINTRO}, let
$$
A_{0}:=\frac{\sqrt{3}}{2}\left( \frac{a_2 t_2(t_2-1)S_1(t_2+2)}{a_1t_1(t_1-1)S_1(t_1+2)} \right)^{\frac{1}{t_2-t_1}},
$$
then we have:
\begin{enumerate}
\item if $A<A_0$, then $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ is a local minimizer of $(x,y)\mapsto E_{V_{a,t}^{LJ}}(x,y,A)$;
\item if $A>A_0$, then $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ is a local maximizer of $(x,y)\mapsto E_{V_{a,t}^{LJ}}(x,y,A)$.
\end{enumerate}
\textnormal{\textbf{Part B: Local optimality of the square lattice.}} Let
$$
g(s)=S_2(s+1)-2(s+1)S_3(s+2),\quad k(s)=(s+1)S_4(s+2)-2S_2(s+1),
$$
and define
$$A_1:=\left(\frac{a_2t_2g(t_2)}{a_1t_1g(t_1)} \right)^{\frac{1}{t_2-t_1}} \quad \textnormal{and}\quad A_2:=\left(\frac{a_2t_2k(t_2)}{a_1t_1k(t_1)} \right)^{\frac{1}{t_2-t_1}}.
$$
Then the following holds:
\begin{enumerate}
\item if $A_1<A<A_2$, then $(0,1)$ is a local minimizer of $(x,y) \mapsto E_{V_{a,t}^{LJ}}(x,y,A)$;
\item if $A\not\in [A_1,A_2]$, then $(0,1)$ is a saddle point of $(x,y) \mapsto E_{V_{a,t}^{LJ}}(x,y,A)$.
\end{enumerate}
\end{thm}
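For the classical potential ($a=(2,1)$, $t=(3,6)$), the constants $A_0$, $A_1$ and $A_2$ of the theorem above can be evaluated by truncating the lattice sums $S_1,\dots,S_4$. The following sketch implements the formulas verbatim; the truncation radius \texttt{N} is our choice, not part of the paper:

```python
import math

def lat_sum(weight, form, s, N=60):
    """Truncated lattice sum of weight(m, n) / form(m, n)**s over Z^2 \\ {0}."""
    return sum(weight(m, n) / form(m, n) ** s
               for m in range(-N, N + 1) for n in range(-N, N + 1)
               if (m, n) != (0, 0))

q_tri = lambda m, n: m * m + m * n + n * n   # triangular quadratic form
q_sq = lambda m, n: m * m + n * n            # square quadratic form
S1 = lambda s: lat_sum(lambda m, n: m ** 4, q_tri, s)
S2 = lambda s: lat_sum(lambda m, n: m * m, q_sq, s)
S3 = lambda s: lat_sum(lambda m, n: m * m * n * n, q_sq, s)
S4 = lambda s: lat_sum(lambda m, n: (n * n - m * m) ** 2, q_sq, s)

# classical Lennard-Jones case: V(r) = r^{-6} - 2 r^{-3}
a1, a2, t1, t2 = 2.0, 1.0, 3.0, 6.0
A0 = (math.sqrt(3) / 2) * ((a2 * t2 * (t2 - 1) * S1(t2 + 2))
                           / (a1 * t1 * (t1 - 1) * S1(t1 + 2))) ** (1 / (t2 - t1))
g = lambda s: S2(s + 1) - 2 * (s + 1) * S3(s + 2)
k = lambda s: (s + 1) * S4(s + 2) - 2 * S2(s + 1)
A1 = (a2 * t2 * g(t2) / (a1 * t1 * g(t1))) ** (1 / (t2 - t1))
A2 = (a2 * t2 * k(t2) / (a1 * t1 * k(t1))) ** (1 / (t2 - t1))
```

All the sums involved converge absolutely for these exponents, so a moderate truncation radius already gives several correct digits.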
\subsection{Conjecture for the classical Lennard-Jones potential}
In particular, for the classical Lennard-Jones interaction $V$, i.e. $a=(2,1)$ and $t=(3,6)$, we will give a complete conjecture, improving that of \cite{Betermin:2014fy}, based on our previous result and on numerical simulations among rhombic and rectangular lattices (see Section \ref{rhombrectdef} for a precise definition of these lattices). A summary of this conjecture is given in Figure \ref{CONJINTRO} and explained in Sections \ref{secrhomb} and \ref{rectangular}. Furthermore, we summarize in Table \ref{tableintro} what is conjectured, what is proved and what is shown numerically.
\begin{figure}[!h]
\centering
\includegraphics[width=12cm]{Conj.eps}
\caption{Conjecture about the minimization of $(x,y)\mapsto E_V(x,y,A)$ with respect to $A$. (1) If $0<A<A_{BZ}\approx 1.138$, then the minimizer is triangular. (2) If $A_{BZ}<A<A_1\approx 1.143$, then the minimizer is a rhombic lattice whose angle covers monotonically and continuously the interval $[76.43^\circ,90^\circ)$. (3) If $A_1<A<A_2\approx 1.268$, then the minimizer is a square lattice. (4) If $A>A_2$, then the minimizer is a rectangular lattice which degenerates (its primitive cell becomes thinner and thinner) as $A\to +\infty$.}
\label{CONJINTRO}
\end{figure}
\begin{table}
\centering
\begin{tabular}{|c|c|c|}
\hline
\textbf{Area $A$} & \textbf{Min of $L_A\mapsto E_V[L_A]$} & \textbf{Status}\\
\hline
$0<A<\frac{\pi}{(120)^{1/3}}\approx 0.63693$ & triangular & proved in \cite{Betermin:2014fy}\\
\hline
$\frac{\pi}{(120)^{1/3}}<A<A_{BZ}\approx 1.138$ & triangular & num. + loc. min. proved in Th. \ref{THmain1}\\
\hline
$A_{BZ}<A<A_1\approx 1.143$ & rhombic & num. \\
\hline
$A_1<A<A_2\approx 1.268$ & square & num. + loc. min. proved in Th. \ref{THmain2} \\
\hline
$A>A_2$ & rectangular & num., proved for large $A$ in Prop \ref{Rankinmethod} \\
\hline
\end{tabular}
\caption{Summary of our works. The abbreviations ``num." and ``loc. min" stand respectively for ``shown numerically" and ``local minimality".}
\label{tableintro}
\end{table}
This conjecture is actually comparable to the numerical study by Ho and Mueller \cite[Fig. 1 and 2]{Mueller:2002aa} of two-component Bose-Einstein condensates (see also the review \cite[Fig. 16]{ReviewvorticesBEC}). Indeed, $A^6 E_V(x,y,A)$ is the sum of two terms with opposite behaviors. The first one, $\zeta_L(12)$, is minimized by the triangular lattice and the second one, $-A^3\zeta_L(6)$, admits a degenerate minimizer. We found exactly the same kind of terms in the energy studied by Ho and Mueller (see Section \ref{summary} for more explanations).
Using a method of Rankin \cite{Rankin} and bounding the minimizer of $y\mapsto E_V(0,y,A)$ in terms of $A$, we show the following result, which partially proves point $(4)$ of our conjecture in Figure \ref{CONJINTRO}:
\begin{thm}[See Prop. \ref{degrect} and Prop. \ref{Rankinmethod} below]\label{THintro4}
Let $V(r)=\frac{1}{r^6}-\frac{2}{r^3}$. Then there exists $\tilde{A}>0$ such that, for any $A>\tilde{A}$, the minimizer $(x_A,y_A)$ of $(x,y)\mapsto E_{V}(x,y,A)$ satisfies $x_A=0$ and $y_A\geq 1$. Furthermore, we have $\displaystyle \lim_{A\to +\infty} y_A=+\infty$.
\end{thm}
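A crude grid search over rectangular lattices illustrates this degeneracy (the grid, the truncation radius and the sample areas are our choices; this is only an illustration, not the proof):

```python
def E_V_rect(y, A, N=20):
    """Truncated energy E_V(0, y, A) of the rectangular lattice of area A,
    for the classical Lennard-Jones potential V(r) = r^{-6} - 2 r^{-3}."""
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            r2 = A * (m * m / y + y * n * n)   # squared distance
            total += r2 ** -6 - 2.0 * r2 ** -3
    return total

ys = [1 + 0.2 * j for j in range(150)]          # grid on [1, 30.8]
y_opt = {A: min(ys, key=lambda y: E_V_rect(y, A)) for A in (2, 10)}
```

For large $A$ the in-chain spacing dominates the energy, so the optimal $y$ grows roughly linearly with $A$.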
After giving the precise definitions, in Section \ref{part1}, of the parameters, energies and lattices we study, we compute, for any $A>0$, the first and second derivatives of $(x,y)\mapsto E_f(x,y,A)$ in Section \ref{part2}. In particular, we prove Theorem \ref{THintro2}. We then apply this result to Lennard-Jones type potentials $V_{a,t}^{LJ}$ in Section \ref{part3} and prove Theorem \ref{THintro3}. In Section \ref{part4}, we study numerically $(x,y)\mapsto E_V(x,y,A)$ in the classical Lennard-Jones case $V(r)=r^{-6}-2r^{-3}$, especially among rhombic and rectangular lattices, and we explain our conjecture in Section \ref{summary}.
\section{Lattices, parametrization and energies}\label{part1}
\subsection{Lattice parametrization and general energy}\label{secparam}
Let $L=\mathbb Z u\oplus \mathbb Z v\subset \mathbb R^2$ be a Bravais lattice. We say that $A$ is the area of $L$ if $|u\wedge v|=A$, i.e. the area of its primitive cell is $A$. If $L$ is of area $1/2$, we use the usual parametrization (see Rankin \cite{Rankin} or Montgomery \cite{Mont}) of $L$ by $(x,y)\in \mathcal{D}$ where the half fundamental modular domain $\mathcal{D}$ is
$$
\mathcal{D}=\{(x,y)\in \mathbb R^2; 0\leq x \leq 1/2 , y>0 ; x^2+y^2\geq 1\}.
$$
This corresponds to parametrizing $u$ and $v$ by $(x,y)\in \mathcal{D}$ via
$$
u=\left(\frac{1}{\sqrt{2y}},0 \right)\quad \text{and} \quad v=\left( \frac{x}{\sqrt{2y}},\sqrt{\frac{y}{2} } \right).
$$
Thus, a lattice $L_A$ of area $A$ is uniquely parametrized by vectors $u_A$ and $v_A$ such that
$$
L_A=\mathbb Z u_A\oplus \mathbb Z v_A:=\mathbb Z\left(\frac{\sqrt{A}}{\sqrt{y}},0 \right)\oplus \mathbb Z\left( \frac{x\sqrt{A}}{\sqrt{y}},\sqrt{A}\sqrt{y}\right),
$$
with $(x,y)\in \mathcal{D}$. Therefore, we get, for any $(m,n)\in \mathbb Z^2$,
$$
|mu_A+nv_A|^2=A\left[\frac{1}{y}(m+xn)^2+yn^2 \right],
$$
and these values are the squared distances from the origin to the points of the lattice $L_A$.\\
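As a quick check of this parametrization, one can verify numerically that the basis $(u_A, v_A)$ spans a lattice of area $A$ and reproduces the quadratic form above (the sample values of $(x,y,A)$ and $(m,n)$ below are arbitrary choices):

```python
import math

def basis(x, y, A):
    """Basis (u_A, v_A) of the lattice of area A parametrized by (x, y)."""
    u = (math.sqrt(A / y), 0.0)
    v = (x * math.sqrt(A / y), math.sqrt(A * y))
    return u, v

x, y, A = 0.3, 1.2, 2.5        # arbitrary point of D and arbitrary area
u, v = basis(x, y, A)
area = abs(u[0] * v[1] - u[1] * v[0])            # |u wedge v|
m, n = 2, -3                                      # arbitrary lattice point
p2 = (m * u[0] + n * v[0]) ** 2 + (m * u[1] + n * v[1]) ** 2
closed_form = A * ((m + x * n) ** 2 / y + y * n * n)
```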
We recall that the \textbf{triangular lattice} of area $A$ (also called ``hexagonal lattice" or ``Abrikosov lattice" in the context of Superconductivity) is defined, up to rotation, by
\begin{equation}\label{deftriA}
\Lambda_A:=\sqrt{\frac{2A}{\sqrt{3}}}\left[\mathbb Z (1,0) \oplus \mathbb Z (1/2,\sqrt{3}/2)\right],
\end{equation}
and the square lattice of area $A$ is $\sqrt{A}\mathbb Z^2$.\\
In Figure \ref{Parametrization}, we have represented the fundamental domain $\mathcal{D}$. The point $(0,1)$ corresponds to the square lattice $2^{-1/2}\mathbb Z^2$ of area $1/2$ and $(1/2,\sqrt{3}/2)$ corresponds to the triangular lattice $\Lambda_{1/2}$ of area $1/2$.
\begin{figure}[!h]
\centering
\includegraphics[width=12cm]{Param1.eps}
\caption{Fundamental domain $\mathcal{D}$ and parametrization of a lattice $L$ by $(x,y)$.}
\label{Parametrization}
\end{figure}
We define the space of functions $\mathcal{F}$ by
\begin{equation}\label{deff}
\mathcal{F}:=\left\{f\in C^2(\mathbb R_+^*);\forall k\in\{0,1,2\}, |f^{(k)}(r)|=O(r^{-\eta_k-k}), \eta_k>1 \right\}.
\end{equation}
Thus, for any $A>0$, for any Bravais lattice $L_A$ of area $A$ and any $f\in\mathcal{F}$, we define its $f$-energy by
$$
E_f[L_A]=E_f(x,y,A)=\sum_{p\in L_A\backslash\{0\}} f(|p|^2)=\sum_{m,n} f\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right),
$$
where the sum is taken over all $(m,n)\in\mathbb Z^2\backslash\{(0,0)\}$. The function $(x,y)\mapsto E_f(x,y,A)$ belongs to $C^2(\mathcal{D})$ and, for any $k\in\{1,2\}$, its derivatives with respect to $x$ and $y$ can be computed term by term:
\begin{align}
&\partial^{(k)} E_f(x,y,A)=\sum_{m,n}\partial^{(k)}\left[f\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right)\right].
\end{align}
Furthermore, the symmetry $E_f(-x,y,A)=E_f(x,y,A)$ justifies studying $(x,y)\mapsto E_f(x,y,A)$ on the half modular domain $\mathcal{D}$.
\subsection{Rhombic and rectangular lattices}\label{rhombrectdef}
\begin{defi}
We say that a Bravais lattice $L_A=\mathbb Z u_A \oplus \mathbb Z v_A$, parametrized by $(x,y,A)$, is \textbf{rhombic} if it is generated by two vectors of the same length $|u_A|=|v_A|$, which is equivalent to $x^2+y^2=1$. In particular, if $L_A$ is rhombic, then there exists $\theta\in \left[ \frac{\pi}{3},\frac{\pi}{2} \right]$ such that $x=\cos\theta$ and $y=\sin\theta$. Thus, we define, for any $f\in\mathcal{F}$, any $\pi/3\leq \theta \leq \pi/2$ and any $A>0$,
$$
E_f(\theta,A):=E_f(\cos\theta,\sin\theta,A).
$$
\end{defi}
\begin{lemma}If $L_A=\mathbb Z u_A \oplus \mathbb Z v_A$ is rhombic and $(x,y)=(\cos\theta,\sin\theta)$, then $(\widehat{u_A,v_A})=\theta$.
\end{lemma}
\begin{proof}
This is clear because, since $L$ is rhombic,
$$
u_A\cdot v_A=\frac{A x}{y}=|u_A||v_A|\cos(\widehat{u_A,v_A})=A\frac{\sqrt{x^2+y^2}}{y}\cos(\widehat{u_A,v_A})=\frac{A\cos(\widehat{u_A,v_A})}{y}.
$$
Therefore $\cos(\widehat{u_A,v_A})=x=\cos\theta$ and $(\widehat{u_A,v_A})=\theta$ because $\theta\in [\pi/3,\pi/2]$.
\end{proof}
\begin{defi}
We say that a Bravais lattice $L_A$, parametrized by $(x,y,A)$, is \textbf{rectangular} if its primitive cell is a rectangle, i.e. $u_A\bot v_A$, which is equivalent to $x=0$ (and then $y\geq 1$ on $\mathcal{D}$). Thus, we define, for any $f\in\mathcal{F}$, any $y\geq 1$ and any $A>0$,
$$
E_f(y,A):=E_f(0,y,A).
$$
\end{defi}
\begin{remark}
If $L_A$ is rectangular, then it is generated by $\displaystyle u_A=\sqrt{A}\left(\frac{1}{\sqrt{y}},0\right)$ and $\displaystyle v_A=\sqrt{A}\left(0,\sqrt{y} \right)$.
\end{remark}
\section{Computation of first and second derivatives of $E_f$}\label{part2}
In this part, we compute the first and second derivatives of $(x,y)\mapsto E_f(x,y,A)$ with respect to $x$ and $y$, for fixed $A>0$. We do not give all the details of the computations, but only the key points.
\subsection{First derivatives}\label{firstderiv}
The following results remain true without the condition on the second derivative of $f$ in the definition of $\mathcal{F}$. Furthermore, we recover a result of Coulangeon and Sch\"urmann \cite[Thm. 4.4.(1)]{Coulangeon:2010uq} in the simple two-dimensional case, namely that the square and triangular lattices are both critical points of $L_A\mapsto E_f[L_A]$ for any $A>0$. Indeed, all the shells of $\Lambda_A$ and $\sqrt{A}\mathbb Z^2$ are $2$-designs.
\begin{prop}\label{deriv}
We have, for any $f\in \mathcal{F}$, any $A>0$ and any $(x,y)\in \mathcal{D}$,
\begin{align*}
&\partial_x E_f(x,y,A)=\frac{2A}{y}\sum_{m,n} (mn+n^2x)f'\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right),\\
&\partial_y E_f(x,y,A)=-\frac{A}{y^2}\sum_{m,n} (m^2+2xmn+(x^2-y^2)n^2)f'\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right).
\end{align*}
\end{prop}
\begin{prop}\label{D1xy}
For any $A>0$ and any $f\in\mathcal{F}$, $(0,1)$ is a critical point of $(x,y)\mapsto E_f(x,y,A)$.
\end{prop}
\begin{proof}
By Proposition \ref{deriv}, we get
\begin{align*}
&\partial_x E_f(0,1,A)=2A\sum_{m,n} mn f'\left(A\left[m^2+n^2 \right] \right);\\
&\partial_y E_f(0,1,A)=-A\sum_{m,n} (m^2-n^2)f'\left(A\left[m^2+n^2 \right] \right).
\end{align*}
The first sum vanishes by pairing $(m,n)$ with $(-m,n)$. The second vanishes because
$$
\sum_{m,n} m^2 f'(A[m^2+n^2])=\sum_{m,n} n^2 f'(A[m^2+n^2])
$$
by exchanging $m$ and $n$.
\end{proof}
\begin{lemma}\label{sumlatticetri}
For $(m,n)\in \mathbb Z^2\backslash\{(0,0) \}$, let $q(m,n)=m^2+mn+n^2$, and let $F:\mathbb R\to \mathbb R$ be such that the following sums converge. Then
\begin{equation}\label{LATSUM1}
\sum_{m,n}mn F(q(m,n))=-\frac{1}{2}\sum_{m,n} n^2 F(q(m,n)),
\end{equation}
\begin{equation}\label{LATSUM2}
\sum_{m,n}n^3m F(q(m,n))=-\frac{1}{2}\sum_{m,n}n^4 F(q(m,n)),
\end{equation}
\begin{equation}\label{LATSUM3}
\sum_{m,n}m^2n^2 F(q(m,n))=\frac{1}{2}\sum_{m,n}n^4 F(q(m,n)).
\end{equation}
\end{lemma}
\begin{proof}
The key point is the fact that, for any $(m,n)\in\mathbb Z^2\backslash\{(0,0) \}$,
$$
q(-m-n,n)=q(m,n).
$$
Consequently, we get
\begin{align*}
\sum_{m,n}mnF(q(m,n))&=\sum_{m,n}(-m-n)n F(q(m,n))\\
&=-\sum_{m,n}mnF(q(m,n))-\sum_{m,n}n^2F(q(m,n)),
\end{align*}
and \eqref{LATSUM1} is proved. For the second equality, we compute
\begin{align*}
\sum_{m,n}mn^3F(q(m,n))&=\sum_{m,n}n^3(-m-n)F(q(m,n))\\
&=-\sum_{m,n}mn^3F(q(m,n))-\sum_{m,n}n^4F(q(m,n))
\end{align*}
and \eqref{LATSUM2} is clear. For the last one, we remark that, using $q(m,n)=q(n,m)$,
\begin{align*}
\sum_{m,n}n^4F(q(m,n))&=\sum_{m,n}(-m-n)^4F(q(m,n))\\
&=\sum_{m,n}(2m^4+6m^2n^2+8m^3n)F(q(m,n)),
\end{align*}
and it follows that
$$
\sum_{m,n}m^2n^2 F(q(m,n))=-\frac{1}{6}\sum_{m,n}n^4 F(q(m,n))-\frac{4}{3}\sum_{m,n}m^3n F(q(m,n)).
$$
Combining this equality with \eqref{LATSUM2}, we get the result.
\end{proof}
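The three identities of the Lemma can be checked numerically, e.g. with $F(q)=e^{-q}$. In the truncated sums below, the symmetry $(m,n)\mapsto(-m-n,n)$ used in the proof only fails near the boundary of the window, where $e^{-q}$ is negligible (the truncation radius \texttt{N} is our choice):

```python
import math

def tri_sum(weight, F, N=40):
    """Truncated sum of weight(m, n) * F(q(m, n)) over (m, n) != (0, 0),
    with q(m, n) = m^2 + m n + n^2."""
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            total += weight(m, n) * F(m * m + m * n + n * n)
    return total

F = lambda q: math.exp(-q)
lhs1 = tri_sum(lambda m, n: m * n, F)
rhs1 = -0.5 * tri_sum(lambda m, n: n * n, F)
lhs2 = tri_sum(lambda m, n: m * n ** 3, F)
rhs2 = -0.5 * tri_sum(lambda m, n: n ** 4, F)
lhs3 = tri_sum(lambda m, n: m * m * n * n, F)
rhs3 = 0.5 * tri_sum(lambda m, n: n ** 4, F)
```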
\begin{prop}\label{triangCM}
For any $A>0$ and any $f\in \mathcal{F}$, $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ is a critical point of $(x,y)\mapsto E_f(x,y,A)$. In particular, we have
\begin{equation}\label{DxCM1}
\sum_{m,n}\left( mn+\frac{n^2}{2} \right)f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)=0.
\end{equation}
\end{prop}
\begin{proof}
Using Proposition \ref{deriv}, we obtain
$$
\partial_x E_f\left( \frac{1}{2},\frac{\sqrt{3}}{2},A \right)=\frac{4A}{\sqrt{3}}\sum_{m,n}\left(mn+\frac{n^2}{2} \right)f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)
$$
and
$$
\partial_y E_f\left( \frac{1}{2},\frac{\sqrt{3}}{2} ,A \right)=-\frac{2A}{3}\sum_{m,n}\left(m^2+mn-\frac{n^2}{2}\right)f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right).
$$
We remark, using the exchange of $m$ and $n$, that
$$
\partial_y E_f\left( \frac{1}{2},\frac{\sqrt{3}}{2},A \right)=-\frac{2A}{3}\sum_{m,n}\left(mn+\frac{n^2}{2} \right)f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)=-\frac{\partial_x E_f\left( \frac{1}{2},\frac{\sqrt{3}}{2},A \right)}{2\sqrt{3}}.
$$
Thus, by \eqref{LATSUM1}, we get
$$
\sum_{m,n}mn f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)=-\frac{1}{2}\sum_{m,n}n^2 f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right),
$$
i.e.
$$
\sum_{m,n}\left(mn+\frac{n^2}{2} \right)f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)=0,
$$
and the result is proved.
\end{proof}
We now recall a simple application of Montgomery's results \cite{Mont} to the case of completely monotone interacting potentials. We say that $f$ is completely monotone if, for any $k\in \mathbb N$ and any $r>0$, $(-1)^k f^{(k)}(r)\geq 0$.
\begin{prop}\label{genmgt} (\cite{Mont})
If $f\in \mathcal{F}$ is completely monotone, then for any $A>0$ and for any $(x,y)$ such that $0<x<1/2$ and $y>\sqrt{3}/2$, we have
$$
\partial_x E_f(x,y,A)<0 \quad \textnormal{and} \quad \partial_y E_f(x,y,A)>0.
$$
In particular, $(x,y)=(1/2,\sqrt{3}/2)$ is the only minimizer of $(x,y)\mapsto E_f(x,y,A)$ and $(x,y)=(0,1)$ is a saddle point. Furthermore, this function has no other critical point.
\end{prop}
\begin{proof}
This follows from Montgomery's results \cite[Lem. 4 and 7]{Mont} and the fact (see \cite[Section 3]{BetTheta15} for more details) that any completely monotone function $f$ can be written as the Laplace transform of a positive Borel measure $\mu$ on $[0,+\infty)$, i.e.
$$
f(r)=\int_0^{+\infty} e^{-rt}d\mu(t).
$$
\end{proof}
\begin{examples}\label{defzetatheta}
In particular, the previous Proposition holds for $f_{s/2}(r)=\frac{1}{r^{s/2}}$, $s>2$ and $f_\alpha(r)=e^{-\pi\alpha r}$. The first case corresponds to the Epstein zeta functions defined by
\begin{equation}\label{zetadef}
\zeta_{L_A}(s)=\sum_{p\in L_A\backslash\{0\}}\frac{1}{|p|^{s}},
\end{equation}
which will be denoted by $\zeta(x,y,s,A)$ in the last part of this paper. The second case corresponds to the theta functions defined by
\begin{equation}\label{thetadef}
\theta_{L_A}(\alpha)=\sum_{p\in L_A}e^{-\pi\alpha |p|^2},
\end{equation}
where the term $p=0$ is added.
\end{examples}
\subsection{Second derivatives}
\begin{prop}
For any $A>0$, any $f\in\mathcal{F}$ and any $(x,y)\in\mathcal{D}$, we have
\begin{align*}
\partial^2_{xx}E_f(x,y,A)=&\frac{2A}{y}\sum_{m,n}n^2 f'\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right)\\
&+\frac{4A^2}{y^2}\sum_{m,n}(mn+n^2x)^2f''\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right),
\end{align*}
\begin{align*}
\partial^2_{yy}E_f(x,y,A)=&\frac{2A}{y^3}\sum_{m,n}(m+xn)^2f'\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right)\\
&+A^2\sum_{m,n}\left(n^2-\frac{(m+xn)^2}{y^2} \right)^2f''\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right),
\end{align*}
and
\begin{align*}
\partial^2_{xy}E_f(x,y,A)=&-\frac{2A}{y^2}\sum_{m,n}(mn+n^2x)f'\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right)\\
&+\frac{2A^2}{y}\sum_{m,n}(mn+n^2x)\left( n^2-\frac{(m+xn)^2}{y^2} \right)f''\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right).
\end{align*}
In particular, if $(x,y)\in\mathcal{D}$ is a critical point of $(x,y)\mapsto E_f(x,y,A)$, then
\begin{equation}\label{Dxycritpoint}
\partial^2_{xy}E_f(x,y,A)=\frac{2A^2}{y}\sum_{m,n}(mn+n^2x)\left( n^2-\frac{(m+xn)^2}{y^2} \right)f''\left(A\left[\frac{1}{y}(m+xn)^2+yn^2 \right] \right).
\end{equation}
\end{prop}
\begin{proof}
Clear by direct computation. The last point follows from $\partial_x E_f(x,y,A)=0$ and the expression of this derivative in Proposition \ref{deriv}.
\end{proof}
\begin{corollary}\label{Hessienne01}
Let $A>0$ and $f\in\mathcal{F}$, then the second derivatives at point $(0,1)$ are:
\begin{align*}
&\partial^2_{xx}E_f(0,1,A)=2A\sum_{m,n}n^2 f'\left(A\left[m^2+n^2 \right] \right)+4A^2\sum_{m,n}m^2n^2 f''\left(A\left[m^2+n^2 \right] \right),\\
& \partial^2_{yy}E_f(0,1,A)=2A\sum_{m,n}m^2 f'\left(A\left[m^2+n^2 \right] \right)+A^2\sum_{m,n}(n^2-m^2)^2f''\left(A\left[m^2+n^2 \right] \right),\\
&\partial^2_{xy}E_f(0,1,A)=0.
\end{align*}
\end{corollary}
\begin{proof}
The first two formulas follow directly from the expressions of the previous Proposition evaluated at $(x,y)=(0,1)$. Furthermore, we have
\begin{align*}
&\partial^2_{xy}E_f(0,1,A)\\
&=-2A\sum_{m,n}mnf'\left(A\left[m^2+n^2 \right] \right)+2A\sum_{m,n}mn(n^2-m^2)f''\left(A\left[m^2+n^2 \right] \right)=0,
\end{align*}
by pairing, in each sum, $(m,n)$ with $(-m,n)$.
\end{proof}
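The formulas of the Corollary can be cross-checked against finite differences of the (truncated) energy, e.g. for the test potential $f(r)=e^{-r}$ at $A=1$ (the step size, truncation radius and choice of $f$ are ours):

```python
import math

def E(x, y, A=1.0, N=25):
    """Truncated energy E_f(x, y, A) for the test potential f(r) = exp(-r)."""
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            total += math.exp(-A * ((m + x * n) ** 2 / y + y * n * n))
    return total

h = 1e-3  # finite-difference step
dxx = (E(h, 1) - 2 * E(0, 1) + E(-h, 1)) / h ** 2
dyy = (E(0, 1 + h) - 2 * E(0, 1) + E(0, 1 - h)) / h ** 2
dxy = (E(h, 1 + h) - E(h, 1 - h) - E(-h, 1 + h) + E(-h, 1 - h)) / (4 * h ** 2)

# closed forms at (0, 1) with A = 1, f' = -exp(-r), f'' = exp(-r)
S = lambda w: sum(w(m, n) * math.exp(-(m * m + n * n))
                  for m in range(-25, 26) for n in range(-25, 26)
                  if (m, n) != (0, 0))
dxx_formula = -2 * S(lambda m, n: n * n) + 4 * S(lambda m, n: m * m * n * n)
dyy_formula = -2 * S(lambda m, n: m * m) + S(lambda m, n: (n * n - m * m) ** 2)
```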
\begin{prop}\label{CNminsquare}
If $A>0$ and $f\in \mathcal{F}$ are such that
\begin{align*}
&K_f^1(A):=\sum_{m,n}n^2 f'\left(A\left[m^2+n^2 \right] \right)+2A\sum_{m,n}m^2n^2 f''\left(A\left[m^2+n^2 \right] \right)>0,\\
&K_f^2(A):=\sum_{m,n}2m^2 f'\left(A\left[m^2+n^2 \right] \right)+A\sum_{m,n}(n^2-m^2)^2f''\left(A\left[m^2+n^2 \right] \right)>0,
\end{align*}
then $(0,1)$ is a local minimizer of $(x,y)\mapsto E_f(x,y,A)$.
\end{prop}
\begin{proof}
It is clear by the previous corollary, and because $(0,1)$ is a critical point of the energy for any $f\in\mathcal{F}$ (see Proposition \ref{D1xy}).
\end{proof}
\begin{prop}\label{derivtriangular}
Let $f\in\mathcal{F}$, then the second derivatives at point $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ are:
\begin{align*}
T_f(A):&=\partial^2_{xx}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=\partial^2_{yy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)\\
&=\frac{4A}{\sqrt{3}}\sum_{m,n}n^2 f'\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right)+\frac{4A^2}{3}\sum_{m,n}n^4f''\left(\frac{2A}{\sqrt{3}}[m^2+mn+n^2] \right),
\end{align*}
and $\displaystyle \partial^2_{xy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=0$.
\end{prop}
\begin{remark}
Consequently, if $T_f(A)>0$, then $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ is a local minimizer of $(x,y)\mapsto E_f(x,y,A)$ and if $T_f(A)<0$, then $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ is a local maximizer of $(x,y)\mapsto E_f(x,y,A)$.
\end{remark}
\begin{proof}
We write, for any $(m,n)\in\mathbb Z^2\backslash\{(0,0)\}$ and any $A>0$,
$$
Q_A(m,n):=\frac{2A}{\sqrt{3}}[m^2+mn+n^2].
$$
A direct computation gives
\begin{align*}
\partial^2_{xx}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=&\frac{4A}{\sqrt{3}}\sum_{m,n}n^2 f'\left(Q_A(m,n) \right)+\frac{16A^2}{3}\sum_{m,n}\left( mn+\frac{n^2}{2}\right)^2f''\left(Q_A(m,n) \right),
\end{align*}
\begin{align*}
\partial^2_{yy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=&\frac{16A}{3\sqrt{3}}\sum_{m,n}\left( m+\frac{n}{2}\right)^2f'\left(Q_A(m,n) \right)\\
&+A^2\sum_{m,n}\left(n^2-\frac{4}{3}\left( m+\frac{n}{2} \right)^2 \right)^2f''\left(Q_A(m,n) \right),
\end{align*}
and
\begin{align*}
\partial^2_{xy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=&-\frac{8A}{3}\sum_{m,n}\left( mn+\frac{n^2}{2} \right)f'\left(Q_A(m,n) \right)\\
&+\frac{4A^2}{\sqrt{3}}\sum_{m,n}\left( mn+\frac{n^2}{2} \right)\left(n^2-\frac{4}{3}\left( m+\frac{n}{2} \right)^2 \right)f''\left(Q_A(m,n) \right).
\end{align*}
Now, let us prove that $\partial^2_{xx}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)=\partial^2_{yy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)$, and more precisely
\begin{equation}\label{latsum1}
\frac{4}{\sqrt{3}}\sum_{m,n}n^2 f'\left(Q_A(m,n) \right)=\frac{16}{3\sqrt{3}}\sum_{m,n}\left( m+\frac{n}{2}\right)^2f'\left(Q_A(m,n) \right)
\end{equation}
and
\begin{equation}\label{latsum2}
\frac{16}{3}\sum_{m,n}\left( mn+\frac{n^2}{2}\right)^2f''\left(Q_A(m,n) \right)=\sum_{m,n}\left(n^2-\frac{4}{3}\left( m+\frac{n}{2} \right)^2 \right)^2f''\left( Q_A(m,n) \right).
\end{equation}
By \eqref{LATSUM1}, we get,
\begin{align*}
\frac{16}{3\sqrt{3}}\sum_{m,n}\left( m+\frac{n}{2}\right)^2f'\left(Q_A(m,n) \right)&=\frac{16}{3\sqrt{3}}\sum_{m,n}\left( m^2+\frac{n^2}{4}+mn\right)f'\left(Q_A(m,n) \right)\\
&=\frac{16}{3\sqrt{3}}\sum_{m,n}\left( m^2+\frac{n^2}{4}-\frac{n^2}{2}\right)f'\left(Q_A(m,n) \right)\\
&=\frac{4}{\sqrt{3}}\sum_{m,n}n^2 f'\left(Q_A(m,n) \right),
\end{align*}
and \eqref{latsum1} is proved. For the second equality, applying \eqref{LATSUM2} and \eqref{LATSUM3}, we get
\begin{align*}
\sum_{m,n}\left(n^2-\frac{4}{3}\left( m+\frac{n}{2} \right)^2 \right)^2f''\left( Q_A(m,n) \right)&=\frac{4}{9}\sum_{m,n}(5n^4+4m^3n)f''\left( Q_A(m,n) \right)\\
&=\frac{4}{3}\sum_{m,n}n^4 f''\left( Q_A(m,n) \right)
\end{align*}
and
\begin{align*}
\frac{16}{3}\sum_{m,n}\left( mn+\frac{n^2}{2}\right)^2f''\left(Q_A(m,n) \right)&=\frac{16}{3}\sum_{m,n}\left( m^2n^2+\frac{n^4}{4}+mn^3 \right)f''\left( Q_A(m,n) \right)\\
&=\frac{16}{3}\sum_{m,n}\left(\frac{n^4}{2}+\frac{n^4}{4}-\frac{n^4}{2} \right)f''\left( Q_A(m,n) \right)\\
&=\frac{4}{3}\sum_{m,n}n^4 f''\left( Q_A(m,n) \right).
\end{align*}
Hence, \eqref{latsum2} is established. By \eqref{LATSUM1}, the first sum in the expression of $\partial^2_{xy}E_f\left(\frac{1}{2},\frac{\sqrt{3}}{2},A \right)$ is equal to $0$. Combining \eqref{LATSUM2} and \eqref{LATSUM3}, we easily prove that the second part is also equal to $0$.
\end{proof}
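As for the square point, the value of $T_f(A)$ and the vanishing of the mixed derivative can be cross-checked by finite differences at $(1/2,\sqrt{3}/2)$, e.g. for the test potential $f(r)=e^{-r}$ at $A=1$ (again, step size, truncation radius and choice of $f$ are ours):

```python
import math

Y0 = math.sqrt(3) / 2  # triangular point (1/2, Y0)

def E(x, y, A=1.0, N=25):
    """Truncated energy E_f(x, y, A) for the test potential f(r) = exp(-r)."""
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            total += math.exp(-A * ((m + x * n) ** 2 / y + y * n * n))
    return total

h = 1e-3
dxx = (E(0.5 + h, Y0) - 2 * E(0.5, Y0) + E(0.5 - h, Y0)) / h ** 2
dyy = (E(0.5, Y0 + h) - 2 * E(0.5, Y0) + E(0.5, Y0 - h)) / h ** 2
dxy = (E(0.5 + h, Y0 + h) - E(0.5 + h, Y0 - h)
       - E(0.5 - h, Y0 + h) + E(0.5 - h, Y0 - h)) / (4 * h ** 2)

# closed form T_f(1) with f' = -exp(-Q), f'' = exp(-Q), Q = (2/sqrt(3)) q(m,n)
Q = lambda m, n: (2 / math.sqrt(3)) * (m * m + m * n + n * n)
S = lambda w: sum(w(m, n) * math.exp(-Q(m, n))
                  for m in range(-25, 26) for n in range(-25, 26)
                  if (m, n) != (0, 0))
T = -(4 / math.sqrt(3)) * S(lambda m, n: n * n) + (4 / 3) * S(lambda m, n: n ** 4)
```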
\begin{corollary}\label{coranalytic}
If $f\in \mathcal{F}$ is analytic on an open neighbourhood of $(0,+\infty)$, then for almost every $A>0$, $(1/2,\sqrt{3}/2)$ is a local minimizer or a local maximizer of $(x,y)\mapsto E_f(x,y,A)$.
\end{corollary}
\begin{proof}
If $f$ is analytic, then $f'$ and $f''$ are analytic on an open neighbourhood of $(0,+\infty)$, and so is $T_f$. Hence the set of zeros of $A\mapsto T_f(A)$ is discrete, and $T_f(A)\neq 0$ for almost every $A>0$.
\end{proof}
\begin{examples}
This result holds for any sum of inverse power laws $f(r)=\sum_{i=1}^p a_i r^{-s_i}$, $s_i>1$, for any sum of exponential functions, and for any mixture of these types of functions (see \cite{BetTheta15} for more examples).
\end{examples}
\section{Application to Lennard-Jones type interactions}\label{part3}
The aim of this part is to apply the previous results to Lennard-Jones type potentials. We recall our definition from \cite[Section 6.3]{BetTheta15}.
\begin{defi}\label{defiLJpoten}
For any $t=(t_1,t_2)\in \mathbb R^2$ such that $1<t_1<t_2$ and any $a=(a_1,a_2)\in (0,+\infty)^2$, we define the Lennard-Jones type potential on $(0,+\infty)$ by
$$
V_{a,t}^{LJ}(r):=\frac{a_2}{r^{t_2}}-\frac{a_1}{r^{t_1}}.
$$
Hence, its lattice energy is defined, for any Bravais lattice $L\subset \mathbb R^2$, by
$$
E_{V_{a,t}^{LJ}}[L]=a_2\zeta_L(2t_2)-a_1\zeta_L(2t_1),
$$
where the Epstein zeta function of lattice $L$ is defined, for $s>2$, by $\displaystyle \zeta_L(s)=\sum_{p\in L\backslash \{0\}} \frac{1}{|p|^s}$.\\
\begin{figure}[!h]
\centering
\includegraphics[width=10cm,height=80mm]{LJones.eps}
\caption{Graph of $r\mapsto V_{a,t}^{LJ}(r^2)$}
\label{Ljonespot}
\end{figure}
Furthermore, we define the following lattice sums:
\begin{minipage}[l]{0.4\linewidth}
\begin{align*}
&S_1(s)=\sum_{m,n}\frac{m^4}{(m^2+mn+n^2)^s},\\
&S_3(s)=\sum_{m,n}\frac{m^2n^2}{(m^2+n^2)^s},
\end{align*}
\end{minipage}
\begin{minipage}[c]{0.5\linewidth}
\begin{align*}
&S_2(s)=\sum_{m,n}\frac{m^2}{(m^2+n^2)^s},\\
&S_4(s)=\sum_{m,n}\frac{(n^2-m^2)^2}{(m^2+n^2)^s}.
\end{align*}
\end{minipage}
\end{defi}
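For numerical experiments it is convenient to truncate these sums to a finite box. The following sketch is our own (the truncation bound $N$ and the function names are arbitrary choices); it also exploits the term-by-term identity $(n^2-m^2)^2+4m^2n^2=(m^2+n^2)^2$, which ties $S_3$ and $S_4$ to the Epstein zeta function of $\mathbb Z^2$, as a sanity check.

```python
# Truncated lattice sums from the definition above, over
# (m, n) in [-N, N]^2 \ {(0, 0)}; the paper sums over all of Z^2 \ {0}.
def lattice_sum(term, N=40):
    return sum(term(m, n)
               for m in range(-N, N + 1)
               for n in range(-N, N + 1)
               if (m, n) != (0, 0))

def S1(s, N=40):  # sum of m^4 / (m^2 + mn + n^2)^s
    return lattice_sum(lambda m, n: m**4 / (m*m + m*n + n*n)**s, N)

def S2(s, N=40):  # sum of m^2 / (m^2 + n^2)^s
    return lattice_sum(lambda m, n: m*m / (m*m + n*n)**s, N)

def S3(s, N=40):  # sum of m^2 n^2 / (m^2 + n^2)^s
    return lattice_sum(lambda m, n: m*m * n*n / (m*m + n*n)**s, N)

def S4(s, N=40):  # sum of (n^2 - m^2)^2 / (m^2 + n^2)^s
    return lattice_sum(lambda m, n: (n*n - m*m)**2 / (m*m + n*n)**s, N)

def zeta_form(s, N=40):  # sum of 1 / (m^2 + n^2)^s, i.e. zeta_{Z^2}(2s)
    return lattice_sum(lambda m, n: 1.0 / (m*m + n*n)**s, N)
```

Since $(n^2-m^2)^2+4m^2n^2=(m^2+n^2)^2$ holds term by term, the truncated sums satisfy $S_4(s)+4S_3(s)=\zeta_{\mathbb Z^2}(2s-4)$ exactly up to rounding, and likewise $2S_2(s)=\zeta_{\mathbb Z^2}(2s-2)$ by the $m\leftrightarrow n$ symmetry.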
As we explained in \cite[Section 6.3]{BetTheta15}, these potentials are used in molecular simulation (classical interaction between atoms, hydrogen bonds, for finding energetically favourable regions in protein binding sites) or in the study of social aggregation \cite{MEKBS}. In particular, the classical $(12-6)$ Lennard-Jones potential (see \cite{LJ}) is a good simple model that approximates the interaction between neutral atoms.
\begin{thm}\label{THmain1}
For any $(a,t)$ as in Definition \ref{defiLJpoten}, let
$$
A_{0}:=\frac{\sqrt{3}}{2}\left( \frac{a_2 t_2(t_2-1)S_1(t_2+2)}{a_1t_1(t_1-1)S_1(t_1+2)} \right)^{\frac{1}{t_2-t_1}},
$$
then we have:
\begin{enumerate}
\item if $A<A_0$, then $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ is a local minimizer of $(x,y)\mapsto E_{V_{a,t}^{LJ}}(x,y,A)$;
\item if $A>A_0$, then $\left(\frac{1}{2},\frac{\sqrt{3}}{2} \right)$ is a local maximizer of $(x,y)\mapsto E_{V_{a,t}^{LJ}}(x,y,A)$.
\end{enumerate}
\end{thm}
\begin{proof}
According to Proposition \ref{derivtriangular}, we easily get
\begin{align*}
T_f(A)=\frac{4A}{\sqrt{3}}\left( \frac{\sqrt{3}}{2A} \right)^{t_2+1}\left\{-a_1t_1\left(\sqrt{3}/2 \right)^{t_1-t_2}h(t_1)A^{t_2-t_1} +a_2t_2h(t_2)\right\},
\end{align*}
where $\displaystyle h(s)=\frac{s+1}{2}S_1(s+2)-\sum_{m,n}\frac{m^2}{(m^2+mn+n^2)^{s+1}}$. Now, by \eqref{LATSUM2} and \eqref{LATSUM3}, we remark that
\begin{align*}
\sum_{m,n}\frac{m^2}{(m^2+mn+n^2)^{s+1}}&=\sum_{m,n}\frac{m^2(m^2+mn+n^2)}{(m^2+mn+n^2)^{s+2}}\\
&=\sum_{m,n}\frac{m^2n^2+mn^3+m^4}{(m^2+mn+n^2)^{s+2}}\\
&=S_1(s+2).
\end{align*}
Consequently, we obtain
$$
h(s)=\frac{s-1}{2}S_1(s+2),
$$
and $T_f(A)>0$ if and only if $A<A_0$. The second point is clear.
\end{proof}
\begin{remark}
In the particular case $a=(2,1)$ and $t=(3,6)$, the interaction potential is
$$
V(r)=\frac{1}{r^6}-\frac{2}{r^3},
$$
and $\displaystyle V(r^2)=\frac{1}{r^{12}}-\frac{2}{r^6}$ is the classical Lennard-Jones potential. The previous result shows that the triangular lattice is a local minimizer of $L\mapsto E_{V}[L_A]$ for any $A<A_0\approx 1.152438$, and a local maximizer for any $A>A_0$.
\end{remark}
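The numerical value of $A_0$ quoted in this remark can be reproduced by truncating $S_1$; a minimal sketch (ours; the truncation bound $N$ is an arbitrary choice) evaluating the formula of Theorem \ref{THmain1}:

```python
from math import sqrt

def S1(s, N=60):
    # Truncated S_1(s) = sum of m^4 / (m^2 + mn + n^2)^s over Z^2 \ {0}.
    return sum(m**4 / (m*m + m*n + n*n)**s
               for m in range(-N, N + 1)
               for n in range(-N, N + 1)
               if (m, n) != (0, 0))

def A_0(a1, a2, t1, t2):
    # A_0 from Theorem THmain1 for the Lennard-Jones type potential
    # V(r) = a2 / r^t2 - a1 / r^t1.
    ratio = (a2 * t2 * (t2 - 1) * S1(t2 + 2)) / (a1 * t1 * (t1 - 1) * S1(t1 + 2))
    return (sqrt(3) / 2) * ratio ** (1.0 / (t2 - t1))
```

For the classical parameters $a=(2,1)$, $t=(3,6)$ this returns a value close to $1.152438$, in agreement with the remark.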
\begin{thm}\label{THmain2}
For any $(a,t)$ as in Definition \ref{defiLJpoten}, let
$$
g(s)=S_2(s+1)-2(s+1)S_3(s+2),\quad k(s)=(s+1)S_4(s+2)-2S_2(s+1),
$$
and define
$$A_1:=\left(\frac{a_2t_2g(t_2)}{a_1t_1g(t_1)} \right)^{\frac{1}{t_2-t_1}} \quad \textnormal{and}\quad A_2:=\left(\frac{a_2t_2k(t_2)}{a_1t_1k(t_1)} \right)^{\frac{1}{t_2-t_1}}.
$$
It holds:
\begin{enumerate}
\item if $A_1<A<A_2$, then $(0,1)$ is a local minimizer of $(x,y) \mapsto E_{V_{a,t}^{LJ}}(x,y,A)$;
\item if $A\not\in [A_1,A_2]$, then $(0,1)$ is a saddle point of $(x,y) \mapsto E_{V_{a,t}^{LJ}}(x,y,A)$.
\end{enumerate}
\end{thm}
\begin{proof}
We use Proposition \ref{CNminsquare} and we compute
\begin{align*}
&K_{V_{a,t}^{LJ}}^1(A)=\frac{1}{A^{t_2+1}}\left\{a_1t_1 g(t_1)A^{t_2-t_1}-a_2t_2g(t_2) \right\},\\
&K_{V_{a,t}^{LJ}}^2(A)=\frac{1}{A^{t_2+1}}\left\{-a_1t_1 k(t_1)A^{t_2-t_1}+a_2t_2k(t_2) \right\}.
\end{align*}
Now, we remark that $g(s)>0$ and $k(s)>0$. Indeed, we have
\begin{align*}
g(s)&=\sum_{m,n}\frac{m^2}{(m^2+n^2)^{s+1}}-2(s+1)\sum_{m,n}\frac{m^2n^2}{(m^2+n^2)^{s+2}}\\
&=\sum_{m,n}\frac{m^2(m^2+n^2)}{(m^2+n^2)^{s+2}}-2(s+1)\sum_{m,n}\frac{m^2n^2}{(m^2+n^2)^{s+2}}\\
&=\sum_{m,n}\frac{m^4}{(m^2+n^2)^{s+2}}-(2s+1)\sum_{m,n}\frac{m^2n^2}{(m^2+n^2)^{s+2}}.
\end{align*}
By the change of variables $(m,n)=(k+\ell,k-\ell)$, and since the sum on the right-hand side below runs over more terms than the one on the left, we obtain
\begin{align*}
\sum_{m,n}\frac{m^2n^2}{(m^2+n^2)^{s+2}}&\leq \sum_{k,\ell}\frac{(k+\ell)^2(k-\ell)^2}{(2k^2+2\ell^2)^{s+2}}\\
&=\frac{1}{2^{s+2}}\sum_{k,\ell}\frac{k^4+\ell^4-2k^2\ell^2}{(k^2+\ell^2)^{s+2}}\\
&=\frac{1}{2^{s+1}}\sum_{k,\ell}\frac{k^4}{(k^2+\ell^2)^{s+2}}-\frac{1}{2^{s+1}}\sum_{k,\ell}\frac{k^2\ell^2}{(k^2+\ell^2)^{s+2}},
\end{align*}
i.e. $\displaystyle \sum_{m,n}\frac{m^2n^2}{(m^2+n^2)^{s+2}}\leq \frac{1}{1+2^{s+1}}\sum_{m,n}\frac{m^4}{(m^2+n^2)^{s+2}}$. Therefore, we get, for any $s>1$,
$$
g(s)\geq \left( 1-\frac{2s+1}{1+2^{s+1}} \right)\sum_{m,n}\frac{m^4}{(m^2+n^2)^{s+2}}>0.
$$
With exactly the same arguments, we find, for any $s>1$,
$$
k(s)\geq \left(\frac{s2^{s+2}-4}{1+2^{s+1}} \right)\sum_{m,n}\frac{m^4}{(m^2+n^2)^{s+2}}>0.
$$
Hence, the result is proved because
$$
K_{V_{a,t}^{LJ}}^1(A)>0\iff A>A_1\quad \textnormal{and} \quad K_{V_{a,t}^{LJ}}^2(A)>0\iff A<A_2.
$$
\end{proof}
\begin{remark}
In the classical Lennard-Jones case $a=(2,1)$ and $t=(3,6)$, we numerically compute $A_1\approx 1.1430032$ and $A_2\approx 1.2679987$. In particular, if $A>A_2$, then the square lattice cannot be a minimizer of $(x,y)\mapsto E_{V}(x,y,A)$.
\end{remark}
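The values of $A_1$ and $A_2$ in this remark can be reproduced by truncating the sums $S_2$, $S_3$, $S_4$; a hedged sketch (ours; the truncation bound is an arbitrary choice), evaluating the formulas of Theorem \ref{THmain2}:

```python
# Truncated lattice sums over (m, n) in [-N, N]^2 \ {(0, 0)}.
def lat(term, N=60):
    return sum(term(m, n)
               for m in range(-N, N + 1)
               for n in range(-N, N + 1)
               if (m, n) != (0, 0))

S2 = lambda s: lat(lambda m, n: m*m / (m*m + n*n)**s)
S3 = lambda s: lat(lambda m, n: m*m * n*n / (m*m + n*n)**s)
S4 = lambda s: lat(lambda m, n: (n*n - m*m)**2 / (m*m + n*n)**s)

# g and k as in Theorem THmain2.
g = lambda s: S2(s + 1) - 2 * (s + 1) * S3(s + 2)
k = lambda s: (s + 1) * S4(s + 2) - 2 * S2(s + 1)

a1, a2, t1, t2 = 2, 1, 3, 6  # classical Lennard-Jones parameters
A_1 = (a2 * t2 * g(t2) / (a1 * t1 * g(t1))) ** (1.0 / (t2 - t1))
A_2 = (a2 * t2 * k(t2) / (a1 * t1 * k(t1))) ** (1.0 / (t2 - t1))
```

Up to truncation error, this reproduces $A_1\approx 1.1430$ and $A_2\approx 1.2680$.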
\section{The classical Lennard-Jones energy: numerical study, degeneracy as $A\to +\infty$ and conjecture}\label{part4}
In this part, we study the energy per point associated to the classical Lennard-Jones potential, i.e. $a=(2,1)$ and $t=(3,6)$. Hence, the corresponding interaction potential is given by
$$
V(r)=\frac{1}{r^6}-\frac{2}{r^3},
$$
and its lattice energy is defined, for any Bravais lattice $L$, by
$$
E_V[L]=\zeta_L(12)-2\zeta_L(6).
$$
\subsection{Minimality among rhombic lattices}\label{secrhomb}
In Table \ref{table2}, we give the results of our numerical and theoretical investigations for the minimization of
$$
\theta\mapsto E_V(\theta,A):=E_V(\cos\theta,\sin\theta,A)
$$
with respect to the area $A$. For any fixed $A>0$, we call $\theta_A$ a minimizer of $\theta\mapsto E_V(\theta,A)$. We split $(0,+\infty)$ into four domains Rh$i$, $1\leq i\leq 4$, and we explain these results below.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Domain} & \textbf{Area $A$} & \textbf{Minimizer $\theta_A$} & \textbf{Status}\\
\hline
Rh1 & $0<A<\frac{\pi}{(120)^{1/3}}\approx 0.63693$ & $60^\circ$ & proved in \cite{Betermin:2014fy}\\
\hline
Rh2 & $\frac{\pi}{(120)^{1/3}}<A<A_{BZ}\approx 1.1378475$ & $60^\circ$ & num.+loc. min. proved in Th. \ref{THmain1}\\
\hline
Rh3 & $A_{BZ}<A<A_1\approx 1.143003$ & $76.43^\circ\leq \theta< 90^\circ$ & num.\\
\hline
Rh4 & $A>A_1$ & $90^\circ$ & num.+loc. min. proved in Th. \ref{THmain2}\\
\hline
\end{tabular}
\caption{Summary of our numerical and theoretical studies for the minimization among rhombic lattices of $L_A\mapsto E_V[L_A]$.}
\label{table2}
\end{table}
In \cite[Theorem 3.1]{Betermin:2014fy}, we proved that if $0<A<\frac{\pi}{(120)^{1/3}}$, then $\Lambda_A$ is the unique minimizer of $L\mapsto E_V[L]$ among Bravais lattices of fixed area $A$. Hence, it is clear that, on Rh1, $\theta_A=60^\circ$.\\
The optimality of $\theta_A=60^\circ$ on Rh2 follows from \cite[Proposition 3.5]{Betermin:2014fy}. Indeed, we proved that $\Lambda_A$ is a minimizer of $L\mapsto E_V[L]$ among Bravais lattices of fixed area $A$ if and only if
$$
A\leq A_{BZ}:=\inf_{|L|=1\atop L\neq \Lambda_1} \left(\frac{\zeta_L(12)-\zeta_{\Lambda_1}(12)}{2(\zeta_L(6)-\zeta_{\Lambda_1}(6))} \right)^{1/3},
$$
and we numerically compute
$$
\displaystyle A_{BZ}\approx 1.1378475.
$$
Furthermore, see Figure \ref{Fig1} for the $A=1$ case.
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{RhA1.eps}
\caption{Plot of $\theta\mapsto E_V(\theta,1)$, for $A=1$, on $[\pi/3,\pi/2]$. The minimizer seems to be $\theta_A=60^\circ$.}
\label{Fig1}
\end{figure}
For an area between $A_{BZ}\approx 1.13785$ and $A_1\approx 1.143003$, the minimizer seems (numerically) to cover the interval $[76.43^\circ,90^\circ)$ monotonically and continuously (see Figures \ref{Fig2}, \ref{Fig3} and \ref{Fig4}). The numerics leave no doubt that the transition from $60^\circ$ to $76.43^\circ$ is discontinuous (see Figure \ref{Fig2}).
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{RhA1137.eps} \includegraphics[width=8cm]{RhA1138.eps}
\caption{Plots of $\theta\mapsto E_V(\theta,A)$ on $[\pi/3,\pi/2]$, for $A=1.137$ (on the left) and $A=1.138$ (on the right). The minimizer seems to be $\theta_A=60^\circ$ for $A=1.137$ and $\theta_A=76.43^\circ$ for $A=1.138$.}
\label{Fig2}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{Rh3A1141.eps}
\caption{Plot of $\theta\mapsto E_V(\theta,1.141)$ on $[\pi/3,\pi/2]$, for $A=1.141$. The minimizer seems to be $\theta_A\approx 82.51^\circ$.}
\label{Fig3}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{Rh3A1142.eps} \includegraphics[width=8cm]{Rh3A11431.eps}
\caption{Plots of $\theta\mapsto E_V(\theta,A)$ on $[\pi/3,\pi/2]$, for $A=1.142$ (on the left) and $A=1.1431$ (on the right). The minimizer seems to be $\theta_A\approx 89.74^\circ$ for $A=1.142$ and $\theta_A=90^\circ$ for $A=1.1431$.}
\label{Fig4}
\end{figure}
For $A$ in the domain Rh4, our numerical simulations indicate that $\theta_A=90^\circ$ is optimal for $A_1<A<20$. We will see in the next subsection that the minimizer seems to stay rectangular if $A$ is large enough (see Figure \ref{Fig5} for the $A=3$ case).
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{Rh4A3.eps}
\caption{Plot of $\theta\mapsto E_V(\theta,3)$ on $[\pi/3,\pi/2]$, for $A=3$. The minimizer seems to be $\theta_A=90^\circ$.}
\label{Fig5}
\end{figure}
\begin{remark}
It numerically appears that the minimizers of $(x,y)\mapsto E_V(x,y,A)$ on $\mathcal{D}$ are rhombic lattices if $0<A<A_2$.
\end{remark}
\subsection{Minimality among rectangular lattices}\label{rectangular}
As in the previous subsection, we give the results of our numerical and theoretical investigations for the minimization of
$$
y\mapsto E_V(y,A):=E_V(0,y,A)
$$
with respect to the area $A$ in Table \ref{table3}. For any fixed $A>0$, we call $y_A$ a minimizer of $y\mapsto E_V(y,A)$. We split $(0,+\infty)$ into three domains Rect$i$, $1\leq i\leq 3$, and we explain these results below. In particular, we will partially explain the behavior of the minimizer on Rect3.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Domain} & \textbf{Area $A$} & \textbf{Minimizer $y_A$}& \textbf{Status}\\
\hline
Rect1 & $0<A<\frac{\pi}{(120)^{1/3}}\approx 0.63693$ & $1$ & proved in \cite{Betermin:2014fy}\\
\hline
Rect2 & $\frac{\pi}{(120)^{1/3}}<A<A_2\approx 1.2679987$ & $1$ & num.+loc. min. proved in Th. \ref{THmain2}\\
\hline
Rect3 & $A>A_2$ & $\nearrow$ on $(1,+\infty)$ & num.+proved for large A in Prop. \ref{degrect} \\
\hline
\end{tabular}
\caption{Summary of our numerical and theoretical studies for the minimization among rectangular lattices of $L_A\mapsto E_V[L_A]$.}
\label{table3}
\end{table}
The optimality of $y_A=1$, i.e. the square lattice, on Rect1 is clear by \cite[Theorem 3.1]{Betermin:2014fy} and Montgomery's result \cite[Lemma 7]{Mont}. Indeed, Montgomery proved that $\partial_y \theta(x,y,\alpha)\geq 0$ for any $(x,y)\in\mathcal{D}$ and any $\alpha>0$, where $\theta(x,y,\alpha):=E_{f_\alpha}(x,y,1/2)$ and $f_\alpha(r)=e^{-\pi\alpha r}$. Furthermore, we proved in \cite[Theorem 3.1]{Betermin:2014fy} that, for any $0<A<\frac{\pi}{(120)^{1/3}}$ and any Bravais lattice $L_A$ with area $A$,
$$
E_V[L_A]=C_A+\frac{\pi^3}{A^3}\int_1^{+\infty}\left(\theta_{L_A}\left(\frac{\alpha}{2A} \right)-1 \right)g_A(\alpha)\frac{d\alpha}{\alpha},
$$
where $C_A$ is a constant depending on $A$ but independent of $L_A$ and $g_A(\alpha)\geq 0$ for any $\alpha\geq 1$. Thus, we get
$$
\partial_y E_V(y,A)\geq 0
$$
for any $y\geq 1$ when $0<A<\frac{\pi}{(120)^{1/3}}$. Therefore, $y=1$ is the unique minimizer of $y\mapsto E_V(y,A)$.\\
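This monotonicity can be observed directly on truncated lattice sums. A minimal sketch (ours; the truncation bound, the area $A=0.5\in\textnormal{Rect1}$, and the sampled aspect ratios are arbitrary choices), using the explicit rectangular-lattice formula appearing in the proof of Proposition \ref{degrect}:

```python
def E_V_rect(y, A, N=40):
    # Lennard-Jones energy of the rectangular lattice with aspect ratio y
    # and area A:  E_V(y, A) = A^{-6} ( y^6 Z(6) - 2 y^3 A^3 Z(3) ),
    # where Z(s) = sum over Z^2 \ {0} of 1 / (m^2 + y^2 n^2)^s (truncated).
    def Z(s):
        return sum(1.0 / (m*m + y*y*n*n)**s
                   for m in range(-N, N + 1)
                   for n in range(-N, N + 1)
                   if (m, n) != (0, 0))
    return (y**6 * Z(6) - 2 * y**3 * A**3 * Z(3)) / A**6

values = [E_V_rect(y, 0.5) for y in (1.0, 1.5, 2.0, 2.5)]
```

The computed values increase in $y$, consistent with $\partial_y E_V(y,A)\geq 0$ on Rect1.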
On Rect2, $y=1$ seems numerically to be the minimizer (see Figure \ref{Fig6}). Actually, it is not difficult to prove rigorously, by using the algorithmic method detailed in \cite[Lem. 4.19]{BeterminPetrache}, that $y\mapsto E_V(y,A)$ is an increasing function, for some $A\in\textnormal{Rect2}$.
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{RectA08.eps} \includegraphics[width=8cm]{RectA1.eps}
\caption{Plots of $y\mapsto E_V(y,A)$ for $A=0.8$ (on the left) and $A=1$ (on the right). It seems that the minimizer is $y_A=1$.}
\label{Fig6}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{RectA126.eps} \includegraphics[width=8cm]{Rect127.eps}
\caption{Plots of $y\mapsto E_V(y,A)$ for $A=1.26$ (on the left) and $A=1.27$ (on the right). It seems that the minimizer is $y_A=1$ in the first case and $y_A\approx 1.033$ in the second case.}
\label{Fig7}
\end{figure}
Numerically, in the domain Rect3, the minimizer seems to cover $(1,+\infty)$ monotonically and continuously with respect to $A$. In particular, we have the degeneracy of the minimizer as $A$ goes to infinity, i.e. $\lim_{A\to +\infty} y_A=+\infty$ (see Figures \ref{Fig7}, \ref{Fig8} and \ref{Fig9}).\\
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{RectA2.eps} \includegraphics[width=8cm]{RectA4.eps}
\caption{Plots of $y\mapsto E_V(y,A)$ for $A=2$ (on the left) and $A=4$ (on the right).}
\label{Fig8}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{RectA8.eps} \includegraphics[width=8cm]{RectA20.eps}
\caption{Plots of $y\mapsto E_V(y,A)$ for $A=8$ (on the left) and $A=20$ (on the right).}
\label{Fig9}
\end{figure}
\begin{remark}
The degeneracy of the minimizer, as $A\to +\infty$, follows from the fact that
$$
E_V[L_A]=\frac{1}{A^6}\left\{ \zeta_L(12)-2A^3\zeta_L(6) \right\}\sim -\frac{2}{A^3}\zeta_L(6)
$$
and the derivative, with respect to $x$, of the right-hand side expression is positive by Proposition \ref{genmgt}. Furthermore, the competition between $\zeta_L(12)$ and $-2A^3\zeta_L(6)$ is naturally won by the first term as $A\to 0$, which explains why the triangular lattice is the minimizer, among all Bravais lattices, for $A\in \textnormal{Rect1}$.
\end{remark}
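The scaling step in this remark is exact: for $L_A=\sqrt{A}\,L$ with $|L|=1$ one has $\zeta_{L_A}(s)=A^{-s/2}\zeta_L(s)$ term by term. A minimal numerical illustration for the square lattice (ours; the truncation bound and the value of $A$ are arbitrary choices):

```python
def zeta_sq(s, scale=1.0, N=40):
    # Epstein zeta of the scaled square lattice scale * Z^2:
    # sum over p != 0 of 1 / |p|^s, truncated to |m|, |n| <= N.
    return sum(1.0 / (scale * scale * (m*m + n*n)) ** (s / 2.0)
               for m in range(-N, N + 1)
               for n in range(-N, N + 1)
               if (m, n) != (0, 0))

A = 10.0
# Direct energy of L_A = sqrt(A) Z^2 versus the rescaled formula
# E_V[L_A] = A^{-6} ( zeta_L(12) - 2 A^3 zeta_L(6) ).
E_direct = zeta_sq(12, A ** 0.5) - 2 * zeta_sq(6, A ** 0.5)
E_scaled = (zeta_sq(12) - 2 * A**3 * zeta_sq(6)) / A**6
```

At $A=10$ the $\zeta_L(12)$ term is already far smaller than the $A^3\zeta_L(6)$ term, illustrating the asymptotics $E_V[L_A]\sim -2A^{-3}\zeta_L(6)$.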
The following results prove the degeneracy of the minimizer among rectangular lattices, as $A\to +\infty$, and the fact that the minimizer of $L_A\mapsto E_V[L_A]$ is rectangular if $A$ is large enough.
\begin{prop}\label{degrect} \textbf{(Degeneracy in the rectangular case)}
There exists $A_3>0$ such that, if $y_A\geq 1$ is a minimizer of $y\mapsto E_V(y,A)$, then for any $A>A_3$,
\begin{equation}\label{asymptrect}
X_1(A)^{1/3}\leq y_A\leq X_2(A)^{1/3}
\end{equation}
where
$$
X_1(A)=\frac{2\zeta_{\mathbb Z^2}(6)A^3-\sqrt{4\zeta_{\mathbb Z^2}(6)^2A^6-32A^4+8\zeta_{\mathbb Z^2}(12)A^2}}{4}
$$
and
$$
X_2(A)=\frac{2\zeta_{\mathbb Z^2}(6)A^3+\sqrt{4\zeta_{\mathbb Z^2}(6)^2A^6-32A^4+8\zeta_{\mathbb Z^2}(12)A^2}}{4}.
$$
In particular,
\begin{enumerate}
\item we have $\displaystyle \lim_{A\to +\infty} y_A=+\infty$;
\item more precisely, there exists $C>0$ such that, for any $A>A_3$, $y_A\leq CA$.
\end{enumerate}
\end{prop}
\begin{proof} Let $A>0$ and $y\geq 1$, then we have
$$
E_V(y,A)=A^{-6}\left(y^6\sum_{m,n}\frac{1}{(m^2+y^2n^2)^6}-2y^3A^3 \sum_{m,n}\frac{1}{(m^2+y^2n^2)^3} \right).
$$
Let $y_A$ be a minimizer, then we have $E_V(y_A,A)\leq E_V(A^{1/3},A)$, that is to say
\begin{align*}
y_A^6\sum_{m,n}\frac{1}{(m^2+y_A^2n^2)^6}-2y_A^3A^3 \sum_{m,n}\frac{1}{(m^2+y_A^2n^2)^3}\leq A^2\sum_{m,n}\frac{1}{(m^2+A^{2/3}n^2)^6}-2A^4 \sum_{m,n}\frac{1}{(m^2+A^{2/3}n^2)^3}.
\end{align*}
We remark that, for any $s\in\{3,6\}$ and $\alpha\geq 1$,
$$
2\leq \sum_{m,n} \frac{1}{(m^2+\alpha n^2)^s}\leq \zeta_{\mathbb Z^2}(2s).
$$
Thus, we get, for $A\geq 1$,
$$
-4A^4+2y_A^3\zeta_{\mathbb Z^2}(6)A^3+\zeta_{\mathbb Z^2}(12)A^2-2y_A^6\geq 0.
$$
In particular, for any fixed $y$, this inequality fails once $A$ is large enough, so $y_A$ must grow with $A$. Indeed, we can rewrite this inequality as $R_A(y_A^3)\geq 0$ with
$$
R_A(X):=-2X^2+2\zeta_{\mathbb Z^2}(6)A^3 X+\zeta_{\mathbb Z^2}(12)A^2-4A^4.
$$
The discriminant of polynomial $R_A$ is
$$
\Delta_A=4\zeta_{\mathbb Z^2}(6)^2A^6+8(\zeta_{\mathbb Z^2}(12)A^2-4A^4).
$$
Thus, if $A$ is sufficiently large, then $0<\Delta_A<4\zeta_{\mathbb Z^2}(6)^2A^6$. It follows that $R_A$ admits two positive zeros when $A$ is large enough, namely $1\leq X_1(A)<X_2(A)$ given in the statement of the proposition. Therefore, $R_A(y_A^3)\geq 0$ implies that, for $A$ large enough, $X_1(A)\leq y_A^3\leq X_2(A)$ with
$$
X_1(A)^{1/3}\sim C_1 A^{1/3}\quad \textnormal{and}\quad X_2(A)^{1/3}\sim C_2 A,
$$
as $A\to +\infty$, where $C_1$ and $C_2$ are both positive constants, and the result is proved.
\end{proof}
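Expanding the square root identifies the constants: $X_1(A)\sim 2A/\zeta_{\mathbb Z^2}(6)$ and $X_2(A)\sim \zeta_{\mathbb Z^2}(6)A^3$ as $A\to+\infty$, so that $C_1=(2/\zeta_{\mathbb Z^2}(6))^{1/3}$ and $C_2=\zeta_{\mathbb Z^2}(6)^{1/3}$. A quick numerical check (ours; the truncation bound and the test value of $A$ are arbitrary choices):

```python
def zeta_Z2(s, N=200):
    # Truncated Epstein zeta of Z^2: sum over p != 0 of 1 / |p|^s.
    return sum(1.0 / (m*m + n*n) ** (s / 2.0)
               for m in range(-N, N + 1)
               for n in range(-N, N + 1)
               if (m, n) != (0, 0))

z6, z12 = zeta_Z2(6), zeta_Z2(12)

def X1_X2(A):
    # X_1(A), X_2(A) as in the statement of the proposition.
    disc = (4 * z6**2 * A**6 - 32 * A**4 + 8 * z12 * A**2) ** 0.5
    return (2 * z6 * A**3 - disc) / 4, (2 * z6 * A**3 + disc) / 4

A = 1e3
X1, X2 = X1_X2(A)
```

At $A=10^3$ the ratios $X_1(A)/A$ and $X_2(A)/A^3$ are already very close to $2/\zeta_{\mathbb Z^2}(6)$ and $\zeta_{\mathbb Z^2}(6)$ respectively.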
\begin{remark}
The same result clearly holds for all the Lennard-Jones type potentials.
\end{remark}
The following result shows why the minimizer is rectangular if $A$ is large enough.\\
\begin{prop}\label{Rankinmethod} \textbf{(The minimizer is rectangular for sufficiently small density)}
There exists $A_4>0$ such that for any $A> A_4$, a minimizer $(x_A,y_A)$ of $(x,y)\mapsto E_V(x,y,A)$ satisfies $x_A=0$, i.e. a minimizer is a rectangular lattice.
\end{prop}
\begin{proof}
Let us prove that, for $A$ sufficiently large and any $(x,y)\in\mathcal{D}$, $\partial_x E_V(x,y,A)\geq 0$ with equality if and only if $x=0$. Using Rankin's notations \cite[Section 4., p. 157]{Rankin} and the notation of Examples \ref{defzetatheta} for the Epstein zeta function, we get, for any $x\neq 0$,
\begin{align*}
&A^6\partial_x E_V(x,y,A)\\
&=\partial_x\zeta(x,y,12,1)-2A^3\partial_x\zeta(x,y,6,1)\\
&=\frac{16\sqrt{\pi}}{4\sin^2\pi x}\sum_{k=1}^{+\infty}\left(\frac{C_1}{y^3} \Lambda(k,y,3)A^3-\frac{C_2}{y^6}\Lambda(k,y,6) \right)\left\{ (k+1)\sin2\pi x -\sin2\pi(k+1)x \right\}\\
&=\frac{16\sqrt{\pi}}{4y^6\sin^2\pi x}\sum_{k=1}^{+\infty}\left(C_1y^3 \Lambda(k,y,3)A^3-C_2\Lambda(k,y,6) \right)\left\{ (k+1)\sin2\pi x -\sin2\pi(k+1)x \right\}
\end{align*}
where
\begin{itemize}
\item $C_1$ and $C_2$ are both positive constants;
\item $\Lambda(k,y,s):=\lambda_{k+2}(y,s)-2\lambda_{k+1}(y,s)+\lambda_k(y,s)$;
\item $\lambda_k(y,s):=\sigma_{1-2s}(k)(2\pi ky)^{s+1/2}K_{s-1/2}(2\pi k y)$;
\item $\displaystyle \sigma_k(n)=\sum_{d|n}d^k$;
\item $\displaystyle K_\nu(u)=\int_0^{+\infty}e^{-u\cosh t}\cosh(\nu t)dt$.
\end{itemize}
Rankin \cite[Eq. (21)]{Rankin} proved that $\Lambda(k,y,3)>0$ for any $k\geq 1$ and any $y\geq \sqrt{3}/2$. Furthermore, by definition, we can write $\Lambda(k,y,3)=y^{7/2}\tilde{\Lambda}(k,y,3)$ and $\Lambda(k,y,6)=y^{13/2}\tilde{\Lambda}(k,y,6)$, where $\tilde{\Lambda}(k,y,3)$ and $\tilde{\Lambda}(k,y,6)$ have the same order with respect to $y$. Therefore, we get, for any $(x,y)\in\mathcal{D}$,
\begin{align*}
&A^6\partial_x E_V(x,y,A)\\
&=\frac{16\sqrt{\pi}\sqrt{y}}{4\sin^2\pi x}\sum_{k=1}^{+\infty}\left(C_1\tilde{\Lambda}(k,y,3)A^3-C_2\tilde{\Lambda}(k,y,6) \right)\left\{ (k+1)\sin2\pi x -\sin2\pi(k+1)x \right\}.
\end{align*}
Thus, since (see Rankin \cite[p. 158]{Rankin})
$$
1\leq \sigma_{1-2s}(r)<\zeta(2s-1), \quad s\in\{3,6\}
$$
and (see \cite[p. 81]{Mont})
$$
(k+1)\sin2\pi x -\sin2\pi(k+1)x\geq 0, \quad k\geq 1, 0\leq x\leq 1/2,
$$
with equality for any $k\geq 1$ if and only if $x=0$, we obtain that this quantity is positive, for any $(x,y)\in\mathcal{D}$, for $A$ large enough. Consequently, there exists $A_4$ such that for any $A>A_4$,
$$
\partial_x E_V(x,y,A)\geq 0,
$$
with equality if and only if $x=0$. Thus, we get $x_A=0$ for any $A>A_4$.
\end{proof}
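Montgomery's elementary sine inequality used in this proof is easy to confirm numerically on a grid; a minimal sketch (ours; the grid resolution and the range of $k$ are arbitrary choices):

```python
from math import sin, pi

def F(k, x):
    # (k+1) sin(2 pi x) - sin(2 pi (k+1) x), claimed nonnegative
    # for k >= 1 and 0 <= x <= 1/2 (see Montgomery, cited above).
    return (k + 1) * sin(2 * pi * x) - sin(2 * pi * (k + 1) * x)

# Worst value over a grid of x in [0, 1/2] and k = 1, ..., 20.
worst = min(F(k, 0.5 * j / 1000.0)
            for k in range(1, 21)
            for j in range(1001))
```

The minimum stays at zero (up to rounding), attained at the endpoints of the interval.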
The two previous results can be summarized as follows:
\begin{corollary}
For any $A>0$, we call $(x_A,y_A)\in\mathcal{D}$ a minimizer of $(x,y)\mapsto E_V(x,y,A)$. Then:
\begin{enumerate}
\item for $A$ large enough, $x_A=0$;
\item it holds $\displaystyle \lim_{A\to +\infty} y_A=+\infty$.
\end{enumerate}
\end{corollary}
\begin{remark} It numerically appears that the minimizer of $(x,y)\mapsto E_V(x,y,A)$ on $\mathcal{D}$ is a rectangular lattice for any $A>A_1$.
\end{remark}
\subsection{Remarks about the global minimality}
Using our previous work \cite{Betermin:2014fy}, we can prove the following result explaining why the $A=1$ case is fundamental for finding the global minimizer of the Lennard-Jones energy, among Bravais lattices, without area constraint.
\begin{prop}\label{globmin}
If $(1/2,\sqrt{3}/2)$ is the unique minimizer of $(x,y)\mapsto E_V(x,y,1)$, then the global minimizer of $(x,y,A)\mapsto E_V(x,y,A)$ is unique and triangular.
\end{prop}
\begin{proof}
By \cite[Proposition 3.5]{Betermin:2014fy}, we know that $\Lambda_A$ is a minimizer of $L\mapsto E_V[L]$ among Bravais lattices of fixed area $A$ if and only if
$$
A\leq \inf_{|L|=1\atop L\neq \Lambda_1} \left(\frac{\zeta_L(12)-\zeta_{\Lambda_1}(12)}{2(\zeta_L(6)-\zeta_{\Lambda_1}(6))} \right)^{1/3}.
$$
Furthermore, we proved in \cite[Proposition 4.1]{Betermin:2014fy} that the area of a global minimizer is smaller than $1$. Thus, if the triangular lattice is the unique minimizer among Bravais lattices of fixed area $1$, then it is the case for every fixed area such that $0<A<1$. Consequently, the minimizer of the energy is unique and triangular, because the minimum among dilated triangular lattices with respect to the area is unique.
\end{proof}
We numerically checked that $(1/2,\sqrt{3}/2)$ seems to be the minimizer of $(x,y)\mapsto E_V(x,y,1)$, but a rigorous proof has yet to be given. A strategy could be the following:
\begin{enumerate}
\item By Rankin's method (see proof of Proposition \ref{Rankinmethod}), we find $\partial_x E_V(x,y,1)\leq 0$ for any $(x,y)\in \mathcal{D}$, with equality if and only if $x=1/2$;
\item By the same arguments as in Proposition \ref{degrect}, it is possible to prove that the minimizer of $y\mapsto E_V(1/2,y,1)$ admits an upper bound $y_1$;
\item By the algorithmic method based on \cite[Lem. 4.19]{BeterminPetrache}, the minimizer is $y=\sqrt{3}/2$ on $[1,y_1]$.
\end{enumerate}
While the first point seems difficult to prove using classical estimates, the proofs of the other two points are clear.
\subsection{Summary of our results, numerical studies and conjectures}\label{summary}
In this part, we summarize the supposed behavior of the minimizer $(x_A,y_A)$ of $(x,y)\mapsto E_V(x,y,A)$ based on our theoretical and numerical studies among rhombic and rectangular lattices. The summary is given in Figure \ref{CONJ}. In the following description, we detail the proved results and the conjectures based on numerical investigations.
More precisely, we have:
\begin{enumerate}
\item For $0<A<\frac{\pi}{(120)^{1/3}}\approx 0.637$, the minimizer is \textbf{triangular}. This is proved in \cite[Theorem 3.1]{Betermin:2014fy};
\item For $\frac{\pi}{(120)^{1/3}}<A<A_{BZ}\approx 1.138$, the minimizer seems to be \textbf{triangular}. This is only a numerical result. In particular, if we know that $A_{BZ}>1$, then the global minimizer of $L\mapsto E_V[L]$, without a density constraint, is unique and triangular (see Proposition \ref{globmin});
\begin{figure}[!h]
\centering
\includegraphics[width=3cm]{ConjTri.eps}
\caption{Triangular lattice}
\label{Conjtri}
\end{figure}
\item For $A_{BZ}<A<A_0\approx 1.152$, the \textbf{triangular lattice is a local minimizer} by Theorem \ref{THmain1};
\item For $A_{BZ}<A<A_1\approx 1.143$, the minimizer seems, numerically, to be a \textbf{rhombic lattice}. More precisely it covers continuously and monotonically the interval of angles $[76.43^\circ,90^\circ)$;
\begin{figure}[!h]
\centering
\includegraphics[width=3cm]{ConjRhomb.eps}
\caption{Rhombic lattice}
\label{Conjrh}
\end{figure}
\item For $A_1<A<A_2\approx 1.268$, \textbf{the square lattice is a local minimizer}, by Theorem \ref{THmain2}. Furthermore, it seems, numerically, that the square lattice is the unique minimizer;
\begin{figure}[!h]
\centering
\includegraphics[width=3cm]{ConjCarre.eps}
\caption{Square lattice}
\label{Conjsq}
\end{figure}
\item For $A>A_2$, then it seems, numerically, that the minimizer is a \textbf{rectangular lattice}. For $A$ large enough, we give a \textbf{proof} of this fact in Proposition \ref{Rankinmethod};
\begin{figure}[!h]
\centering
\includegraphics[width=3cm]{ConjRect.eps}
\caption{Rectangular lattice}
\label{Conjrect}
\end{figure}
\item As $A\to +\infty$, the minimizer becomes \textbf{more and more thin and rectangular}: it degenerates.
\begin{figure}[!h]
\centering
\includegraphics[width=3cm]{ConjRectdeg.eps}
\caption{Degeneracy of the rectangular lattice}
\label{Conjrectdeg}
\end{figure}
\end{enumerate}
\begin{figure}[!h]
\centering
\includegraphics[width=12cm]{Conj.eps}
\caption{Behavior of the minimizer of $(x,y)\mapsto E_V(x,y,A)$ with respect to $A$.}
\label{CONJ}
\end{figure}
This evolution of the minimizer's shape is very similar to the one observed in the numerical investigations of Ho and Mueller \cite[Fig. 1 and 2]{Mueller:2002aa} (see also \cite{ReviewvorticesBEC}) on two-component Bose--Einstein condensates. It is not difficult to understand why. Indeed, in their work, Ho and Mueller consider the following lattice energy
$$
E_\delta(L,u):=\theta_L(1)+\delta\theta_{L+u}(1),
$$
among Bravais lattices $L\subset \mathbb R^2$ of area one and vectors $u\in \mathbb R^2$, where $\theta_L(\alpha)$ is defined by \eqref{thetadef},
$$
\theta_{L+u}(1)=\sum_{p\in L} e^{-\pi |p+u|^2}
$$
and $-1\leq \delta \leq 1$. Thus, as we explained in \cite{BeterminPetrache}, this energy is the sum of two energies with opposite properties:
\begin{enumerate}
\item $L\mapsto \theta_{L}(1)$ is minimized by the triangular lattice $\Lambda_1$;
\item For any $u\not\in L$, $\theta_{L+u}(1)<\theta_{L}(1)$ and $L\mapsto \theta_{L+u}(1)$ does not admit any minimizer. More precisely, there exists a sequence of rectangular lattices $(L_k)_k$ which degenerate, as explained in Section \ref{rectangular}, such that $\lim_{k\to +\infty} \theta_{L_k+c_k}(1)=0$, where $c_k$ is the center of the primitive cell of $L_k$.
\end{enumerate}
Hence, since, for $L_A=\sqrt{A}L$ where $L$ has a unit area,
$$
A^6E_V[L_A]=\zeta_L(12)-2A^3\zeta_L(6),
$$
$\theta_L(1)$ and $\theta_{L+u}(1)$ can be compared respectively to $\zeta_L(12)$ and $-\zeta_L(6)$. Furthermore, $\delta$ can be compared to $A^3$. Increasing $\delta$ (respectively $A$), Ho and Mueller find, as in this paper, that the minimizer $\mathop{\rm argmin}\nolimits_{L}\left\{\min_{u} E_\delta (L,u)\right\}$ (respectively $\mathop{\rm argmin}\nolimits_L \{E_V[L]\}$ for us) is triangular at first, then becomes rhombic (with a discontinuous transition), square and finally rectangular. Recent works \cite{Samaj12,TrizacWigner16} on Wigner bilayers present a surprising similarity. \\
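Property (1) of the theta energies can be illustrated numerically: at unit area and $\alpha=1$, the triangular theta value lies strictly below the square one. A hedged sketch (ours; the truncation bound is an arbitrary choice; here $\theta_L(1)=\sum_{p\in L}e^{-\pi|p|^2}$ with the origin included):

```python
from math import exp, sqrt, pi

def theta_square(N=20):
    # theta_{Z^2}(1): unit-area square lattice, origin included.
    return sum(exp(-pi * (m*m + n*n))
               for m in range(-N, N + 1)
               for n in range(-N, N + 1))

def theta_triangular(N=20):
    # Unit-area triangular lattice: |p|^2 = (2/sqrt(3)) (m^2 + mn + n^2).
    c = 2.0 / sqrt(3.0)
    return sum(exp(-pi * c * (m*m + m*n + n*n))
               for m in range(-N, N + 1)
               for n in range(-N, N + 1))
```

This comparison is of course only an illustration at $\alpha=1$, not a proof of the minimality of the triangular lattice.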
It is actually natural to conjecture that:
\begin{itemize}
\item the behavior of the minimizers of $L\mapsto E_f[L_A]$ with respect to the area $A$ is qualitatively the same for all the Lennard-Jones type potentials;
\item more generally, we can imagine that we find the same result for any potential $f$ written as
$$
f=f_1-f_2,
$$
where $f_1$ and $f_2$ are both completely monotone and $f$ has a well, i.e. $f$ is decreasing on $(0,a)$ and increasing on $(a,+\infty)$, because, for any $i\in\{1,2\}$, $L\mapsto E_{f_i}[L_A]$ has the same properties as $L\mapsto \theta_L(\alpha)$, for any $\alpha>0$ (see Proposition \ref{genmgt}).
\end{itemize}
\textbf{Acknowledgement:} I would like to thank the Mathematics Center Heidelberg (MATCH) for its support, Doug Hardin for giving me the intuition of the degeneracy of the minimizer for the Lennard-Jones interaction, Mircea Petrache for early feedback, and Florian Nolte for interesting discussions.
\bibliographystyle{plain}
% Source: https://arxiv.org/abs/1611.07820, ``Local variational study of 2d lattice energies and application to Lennard-Jones type interactions''
% Source: https://arxiv.org/abs/2206.01544, ``Polynomial approximation on $C^2$-domains''
\subsection{Decomposition of the domain in $\RR^d$}
\subjclass[2010]{Primary 41A10, 41A17, 41A27, 41A63;\\Secondary 41A55, 65D32}
\keywords{$C^2$-domains, polynomial approximation, modulus of smoothness, Jackson inequality, inverse theorem}
\begin{abstract}
We introduce appropriate computable moduli of smoothness to characterize the rate of best approximation by multivariate polynomials on a connected and compact $C^2$-domain $\Omega\subset \mathbb{R}^d$. This new modulus of smoothness is defined via finite differences along the directions of coordinate axes, and along a number of tangential directions from the boundary. With this modulus, we prove both the direct Jackson inequality and the corresponding inverse for the best polynomial approximation in $L_p(\Omega)$. The Jackson inequality is established for the full range of $0<p\leq \infty$; its proof relies on recently established Whitney type estimates with constants depending only on certain parameters, and on highly localized polynomial partitions of unity on a $C^2$-domain, which are of independent interest. The inverse inequality is established for $1\leq p\leq \infty$, and its proof relies on a recently proved Bernstein type inequality associated with the tangential derivatives on the boundary of $\Omega$. Such an inequality also allows us to establish the inverse theorem for Ivanov's average moduli of smoothness on general compact $C^2$-domains.
\end{abstract}
\maketitle
\section{Introduction and Main Results}
\subsection{Historical remarks}
One of the primary questions of approximation theory is to characterize the
rate of approximation by a given system in terms of some modulus
of smoothness. It is well known (see, e.g.~\cite{De-Lo,Di-To,Ni}) that the quality of approximation by algebraic polynomials increases towards the boundary of the underlying domain. As a result, the class of functions with a prescribed rate of best approximation by algebraic polynomials on a compact domain with nonempty boundary
cannot be described by the ordinary moduli of smoothness.
Several successful moduli of smoothness were introduced to solve this problem in the setting of one variable. Among them the most established ones are the Ditzian-Totik moduli of smoothness \cite{Di-To} and the average moduli of smoothness of K. Ivanov \cite{Iv2} (see the survey paper \cite{Dit07} for details).
Successful attempts were also made to solve the problem in more variables, the most notable being the work of K. Ivanov for polynomial approximation on piecewise $C^2$-domains in $\RR^2$ \cite{Iv}, and the recent works of Totik for polynomial approximation on general polytopes and algebraic domains \cite{To14, To17}; we will describe~\cite{Iv} and~\cite{To17} in more detail below. The following list is not meant to be exhaustive, but we would like to also mention several other related works: results for simple polytopes by Ditzian and Totik~\cite{Di-To}*{Chapter~12}, an announcement of a characterization of approximation classes by Netrusov~\cite{Ne}, a possible reduction to local approximation by Dubiner~\cite{Du}, results for simple polytopes for $p<1$ by Ditzian~\cite{Di96}, a new modulus of smoothness and characterization of approximation classes on the unit ball by the first author and Xu~\cite{DX}, and a different alternative approach on the unit ball by Ditzian~\cite{Di14a,Di14b}.
The main aim in this paper is to introduce a computable modulus of smoothness for functions on $C^2$-domains, for which both the direct Jackson inequality and the corresponding converse hold. As is well known, the definition of such a modulus must take into account the boundary of the underlying domain.
We start with some necessary notations.
Let $L^p(\Og)$, $0<p<\infty$, denote the Lebesgue $L^p$-space defined with respect to the Lebesgue measure on a compact domain $\Og\subset \RR^d$. In the limiting case we set $L^\infty(\Og)=C(\Og)$, the space of all continuous functions on $\Og$ with the uniform norm $\|\cdot \|_\infty$.
Given $\xi, \eta\in \RR^d$, and $r\in\NN$, we define
$$ \tr_\xi^r f(\eta) :=\sum_{j=0}^r (-1)^{r+j} \binom{r} {j} f(\eta+j\xi ),$$
where we assume that $f$ is defined everywhere on the set $\{\eta+j\xi:\ \ j=0,1,\dots, r\}$.
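In one variable, this finite difference is straightforward to compute; the following numerical sketch (an illustration of ours, not part of the text) checks two standard facts: $\tr_\xi^r$ annihilates polynomials of degree less than $r$, and $\tr_\xi^r$ applied to $t^r$ yields $r!\,\xi^r$.

```python
import math

def finite_difference(f, eta, xi, r):
    """r-th forward difference: sum_{j=0}^r (-1)^{r+j} C(r,j) f(eta + j*xi)."""
    return sum((-1) ** (r + j) * math.comb(r, j) * f(eta + j * xi)
               for j in range(r + 1))

# The r-th difference annihilates polynomials of degree < r:
assert abs(finite_difference(lambda t: 3 * t**2 - t + 5, eta=0.3, xi=0.1, r=3)) < 1e-12
# and equals r! * xi^r on f(t) = t^r:
assert abs(finite_difference(lambda t: t**3, eta=0.0, xi=0.5, r=3)
           - math.factorial(3) * 0.5**3) < 1e-12
```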
For a function $f: \Og\to\RR$, we also define
\begin{equation}\label{finite-diff}
\tr_\xi^r (f, \Og, \eta):=\begin{cases}
\tr_\xi^r f(\eta),\ \ &\text{if $[\eta, \eta+r\xi]\subset \Og$,}\\
0, &\ \ \text{otherwise},
\end{cases}
\end{equation}
where $[x,y]$ denotes the line segment connecting any two points $x,y\in\RR^d$. The symmetric versions of these finite differences are
\[
\wt \tr^r_\xi f(\eta):= \tr^r_\xi f\Bl(\eta-\f r2 \xi\Br)
\quad\text{and}\quad
\wt \tr^r_\xi (f,\Og,\eta):= \tr^r_\xi \Bl(f,\Og,\eta-\f r2 \xi\Br).
\]
The best approximation of $f\in L^p(\Og)$ by means of algebraic polynomials of total degree at most $n$ is defined as
$$ E_n(f)_p=E_n(f)_{L^p(\Og)}:=\inf\Bl\{ \|f-Q\|_p:\ \ Q\in \Pi^d_n\Br\},$$
where $\Pi^d_n$ is the space of algebraic polynomials of total degree $\le n$ on $\RR^d$.
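As a numerical illustration (ours, using a discrete least-squares proxy for the $L^2$ case rather than the general $L^p(\Og)$ setting above), $E_n(f)$ can be approximated on a grid; it is nonincreasing in $n$ and vanishes as soon as $f$ itself belongs to $\Pi^d_n$.

```python
import numpy as np

# Discrete L^2 proxy for E_n(f): np.polynomial.polynomial.polyfit returns the
# least-squares-best polynomial of degree <= n on the grid.
x = np.linspace(-1.0, 1.0, 401)
f = x ** 3

def best_approx_error(n):
    coef = np.polynomial.polynomial.polyfit(x, f, n)
    return np.sqrt(np.mean((f - np.polynomial.polynomial.polyval(x, coef)) ** 2))

errors = [best_approx_error(n) for n in range(5)]
assert all(errors[i] >= errors[i + 1] - 1e-12 for i in range(4))  # E_n nonincreasing
assert errors[3] < 1e-8 and errors[2] > 0.01  # x^3 is matched exactly at degree 3
```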
Given a set $E\subset \RR^d$, we denote by $|E|$ its Lebesgue measure in $\RR^d$, and define $\dist(\xi, E):=\inf_{\eta\in E}\|\xi-\eta\|$ for $\xi\in \RR^d$ (if $E=\emptyset$, we set $\dist(\xi, E)=1$). Here and throughout the paper, $\|\cdot\|$ denotes the Euclidean norm.
Finally, let $\sph\subset \RR^d$ be the unit sphere of $\RR^d$, and let $e_1=(1,0,\dots, 0), \dots, e_d =(0, \dots, 0, 1)$ denote the standard canonical basis in $\RR^d$.
Next, we describe the work of K. Ivanov \cite{Iv}, where a new modulus of smoothness was introduced to study the best algebraic polynomial approximation
for functions of two variables on a bounded domain with piecewise $C^2$ boundary. To avoid technicalities, we always assume that $\Og\subset \RR^d$ is the closure of an open, bounded, and connected domain in $\R^d$ with $C^2$ boundary $\Ga$ (see Definition~\ref{def:c2}). Consider the following metric on $\Og$:
\begin{equation}\label{metric}\rho_\Og (\xi,\eta):=\|\xi-\eta\|+ \Bl|\sqrt{\dist(\xi, \Gamma)} -\sqrt{\dist(\eta, \Gamma)}\Br|,\ \ \xi, \eta\in\Og. \end{equation}
For $\xi\in\Og$ and $t>0$, set
$ U( \xi, t):= \{\eta\in\Og:\ \ \rho_\Og(\xi,\eta) \leq t\}$.
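For instance, on the closed unit disk one has $\dist(\xi,\Ga)=1-\|\xi\|$, so $\rho_\Og$ can be evaluated in closed form; the sketch below (an illustration of ours) shows that near the boundary the square-root term dominates the Euclidean distance.

```python
import math

def rho_disk(xi, eta):
    """The metric rho_Omega on the closed unit disk, where dist(., Gamma) = 1 - ||.||."""
    d = math.dist(xi, eta)
    d_xi = 1.0 - math.hypot(*xi)    # distance of xi to the boundary circle Gamma
    d_eta = 1.0 - math.hypot(*eta)
    return d + abs(math.sqrt(d_xi) - math.sqrt(d_eta))

assert rho_disk((0.0, 0.0), (0.0, 0.0)) == 0.0
assert abs(rho_disk((0.0, 0.0), (1.0, 0.0)) - 2.0) < 1e-12  # 1 + |sqrt(1) - sqrt(0)|
# Near the boundary the sqrt term dominates the Euclidean one:
assert rho_disk((0.99, 0.0), (1.0, 0.0)) > 10 * 0.01
```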
For $0< q\leq p\leq \infty$, the average $(p,q)$-modulus of order $r\in\NN$ of $f\in L^p(\Og)$ was defined in \cite{Iv} by\footnote{Both the metric $\rho_\Og$ and the average moduli of smoothness $\tau_r(f,t)_{p,q}$ were defined in \cite{Iv} for a more general domain $\Og\subset \RR^2$. }
\begin{equation}\label{eqn:ivanov} \tau_r (f; \da)_{p,q} :=\Bl\| w_r (f, \cdot, \da)_q \Br\|_p,\end{equation}
where
$$ w_r (f, \xi, \da)_q : =\begin{cases}
\displaystyle \Bl( \f 1 {|U(\xi,\da)|} \int_{U(\xi,\da)} |\tr_{(\eta-\xi)/r} ^r (f,\Og,\xi)|^q \, d\eta\Br)^{\f1q},\ \ & \text{if $0<q <\infty$};\\
\sup_{\eta\in U( \xi,\da)} |\tr_{(\eta-\xi)/r}^r (f,\Og,\xi)|,\ \ &\text {if $q=\infty$}.\end{cases}$$
With this modulus, the following result was announced without proof in \cite{Iv} for a bounded domain in the plane with piecewise $C^2$ boundary.
\begin{thm}\label{thm:ivanov}\cite{Iv} Let $\Og$ be the closure of a bounded open domain in the plane $\RR^2$ with piecewise $C^2$-boundary $\Ga$.
If $f\in L^p(\Og)$, $1\leq q\leq p \leq \infty$ and $r\in\NN$, then
\begin{equation}\label{1-4-0} E_n (f)_p \leq C_{r, \Og} \tau_r (f, n^{-1})_{p,q}.\end{equation}
Conversely, if either $p=\infty$, or $1\leq p\leq \infty$ and $\Og$ is a parallelogram or a disk, then
\begin{equation}\label{1-5-0}\tau_r (f, n^{-1})_{p,q} \leq C_{r,\Og} n^{-r} \sum_{s=0}^n (s+1)^{r-1} E_s (f)_p.\end{equation}
\end{thm}
It remained open in \cite{Iv} whether the inverse inequality~\eqref{1-5-0} holds for the full range of $1\leq p\leq \infty$ for more general $C^2$-domains other than parallelograms and disks. The methods developed in this paper allow us to give a positive answer to this question. In fact, we shall prove the Jackson inequality~\eqref{1-4-0} for $0<p\leq \infty$ and the inverse inequality~\eqref{1-5-0} for $1\leq p\leq \infty$ for all compact, connected $C^2$-domains $\Og\subset \RR^d$. Our results apply to higher dimensional domains as well.
Finally, we describe the recent work of Totik \cite{To17}, where a new modulus of smoothness using the univariate moduli of smoothness on circles and line segments was introduced to study polynomial approximation on algebraic domains.
Let $\Og\subset \RR^d$ be the closure of a bounded, finitely connected domain with $C^2$ boundary $\Ga$. Such a domain is called an algebraic domain if for each connected component $\Ga'$ of the boundary $\Ga$, there is a polynomial $\Phi(x_1,\dots, x_d)$ of $d$ variables such that $\Ga'$ is one of the components of the surface $\Phi(x_1,\dots, x_d)=0$ and $\nabla \Phi(\xi) \neq 0$ for each $\xi\in\Ga'$.
The $r$-th order modulus of smoothness of $f\in C(\Og)$ on a circle $\mathcal{C}\subset \Og$ is defined as in the classical trigonometric approximation theory by
\begin{align*}
\wh{\og}_{\mathcal{C}} ^r (f,t) :
&=\sup_{0\leq \ta \leq t} \sup_{0\leq \vi\leq 2\pi }
\Bl| \wt\tr^r_\theta f_{\mathcal{C}}(\vi) \Br|
\end{align*}
where we identify the circle $\mathcal{C}$ with the interval $[0, 2\pi)$ and $f_{\mathcal{C}}$ denotes the restriction of $f$ to $\mathcal{C}$.
Similarly, if $I=[a,b]\subset \Og$ is a line segment and $e\in\SS^{d-1}$ is the direction of $I$, then with $\wt{d}_I (e, z): =\sqrt{\|z-a\|\|z-b\|}$, we may define
the modulus of smoothness of $f\in C(\Og)$ on $I$ as
\begin{align*}
\wh{\og}_I^r (f,\da)
&=\sup_{0\leq h\leq \da} \sup_{z\in I}
\Bl| \wt\tr^r_{h \wt{d}_I (e, z)e}(f,\Omega,z) \Br|.
\end{align*}
Now we define the $r$-th order modulus of smoothness of $f\in C(\Og)$ on the domain $\Og$ as $$ \wh{\og}^r(f,\da)_\Og=\max\Bl( \sup_{\mathcal{C}_\rho} \wh{\og}_{\mathcal{C}_\rho}^r (f,\da), \sup_I \wh{\og}_I^r (f,\da)\Br),$$
where the suprema are taken over all circles $\mathcal{C}_\rho\subset \Og$ of some radius $\rho$ which are parallel with a coordinate plane, and over all segments $I\subset \Og$ that are parallel with one of the coordinate axes.
With this modulus of smoothness, Totik proved
\begin{thm}\label{Totik:thm}\cite{To17}
If $\Og\subset \RR^d$ is an algebraic domain and $f\in C(\Og)$, then
\begin{equation} \label{1-6-0}E_n (f)_{C(\Og)} \leq C \wh{\og}^r (f, n^{-1})_{\Og},\ \ n\ge rd,\end{equation}
and
\begin{equation} \label{1-7-0} \wh{\og}^r(f, n^{-1})_{\Og} \leq C n^{-r} \sum_{k=0}^n (k+1)^{r-1} E_k (f)_{C(\Og)} \end{equation}
with a constant $C$ independent of $f$ and $n$.
\end{thm}
From the classical inverse inequalities in one variable, and
the way the moduli of smoothness $\wh{\og}^r(f,t)_\Og$ are defined, one can easily show that the inverse inequality~\eqref{1-7-0} in fact holds on more general $C^2$-domains $\Og$. On the other hand, it is much harder to show the direct Jackson inequality~\eqref{1-6-0} even on algebraic domains (see \cite{To17}).\footnote{In a private communication, V. Totik kindly showed us that a certain quasi-Whitney inequality can be established for the moduli $\wh{\og}^r$ on cells of distance $\ge C/n^2$ from the boundary of $\Og$, which, combined with certain techniques from Section~\ref{ch:direct} of the current paper, will yield the Jackson inequality \eqref{1-6-0} for the moduli $\wh{\og}^r$ on a general $C^2$-domain.}
Furthermore, it is unclear how to extend the results of Theorem~\ref{Totik:thm} to $L^p$ spaces with $p<\infty$.
In this paper, we will introduce a new computable modulus of smoothness on a connected, compact $C^2$-domain $\Og\subset \RR^d$. Our new modulus of smoothness is defined via finite differences along the directions of coordinate axes, and along finitely many tangential directions on the boundary. With this modulus, we shall prove the direct Jackson inequality for the full range of $0<p\leq \infty$, and the corresponding inverse for $1\leq p\leq \infty$. The proof of the Jackson inequality relies on a Whitney type estimate on certain domains of special type which we recently established in~\cite{Da-Pr-Whitney}, and a polynomial partition of unity on $\Og$ which we construct motivated by the ideas of Dzjadyk and Konovalov~\cite{Dz-Ko}. On the other hand, the proof of the inverse inequality is more difficult. It
relies on a new tangential Bernstein inequality on $C^2$-domains, which we recently established in~\cite{Da-Pr-Bernstein}.
We give some preliminary materials in the next subsection.
After that, we define the new modulus of smoothness in Section~\ref{modulus:def}. The main results of this paper are summarized in Section~\ref{summary}, where we also describe briefly the organization of the rest of the paper.
\subsection{Preliminaries }\label{preliminary}
We start with a brief description of some necessary notations. Often we will work with domains bounded by graphs of functions, so it will be more convenient to work on the $(d+1)$-dimensional Euclidean space $\RR^{d+1}$ rather than the $d$-dimensional space $\RR^d$. We shall often write a point in $\RR^{d+1}$ in the form $(x, y)$ with $x=(x_1,\dots, x_d)\in\RR^d$ and $y=x_{d+1}~\in~\RR$.
Let $B_r[\xi]$ (resp., $B_r(\xi)$ ) denote the closed ball (resp., open ball) in $\R^{d+1}$ centered at $\xi\in\RR^{d+1}$ having radius $r>0$.
A rectangular box in $\RR^{d+1}$ is a set that takes the form $[a_1,b_1]\times \dots\times [a_{d+1}, b_{d+1}]$ with $-\infty<a_j<b_j<\infty$, $j=1,\dots, d+1$.
We always assume that the sides of a rectangular box are parallel with the coordinate axes. If $R$ denotes either a rectangular box or a ball in $\RR^{d+1}$, then we denote by $cR$ the dilation of $R$ from its center by a factor $c>0$.
Given $1\leq i\neq j\leq d+1$, we call the coordinate plane spanned by the vectors $e_i$ and $e_j$ the $x_ix_j$-plane. Finally, we use the notation $A_1\sim A_2$ to mean that there exists a positive constant $c>0$ such that $c^{-1}A_1\leq A_2\leq c A_1$.
\subsection{Directional moduli of smoothness}
The $r$-th order directional modulus of smoothness on a domain $\Og\subset \RR^{d+1}$ along a set $\mathcal{E}\subset \SS^d$ of directions is defined by
$$ \og^r (f, t; \mathcal{E})_p:=\sup_{\xi\in \mathcal{E}} \sup_{0<u\leq t } \|\tr_{u\xi}^r (f, \Og, \cdot)\|_{L^p(\Og)}=
\sup_{\xi\in \mathcal{E}} \sup_{0<u\leq t } \|\tr_{u\xi}^r f\|_{L^p(\Og_{ru\xi})},$$
where $\tr_{u\xi}^r f=\tr_{u\xi}^r(f,\Og, \cdot)$ is given in~\eqref{finite-diff}, and
$\Og_{\eta}:=\{\xi\in \Og:\ \ [\xi, \xi+\eta]\subset \Og\}$ for $\eta\in\RR^{d+1}$.
Let
$$ \og^r(f, \Og; \mathcal{E})_p:=
\og^r (f, \diam(\Og); \mathcal{E})_p,$$ where $\diam (\Og) :=\sup_{\xi, \eta \in \Og} \|\xi-\eta\|$.
If $\mathcal{E}=\SS^d$, then we write $\og^r (f, t)_p=\og^r (f, t; \SS^d)_p$ and $\og^r(f, \Og)_p= \og^r(f, \Og; \SS^d)_p$, whereas if $\mathcal{E}=\{e\}$ contains only one direction $e\in\SS^d$, we write $\og^r (f, t; e)_p=\og^r (f, t; \mathcal{E})_p$ and $\og^r(f, \Og; e)_p= \og^r(f, \Og; \mathcal{E})_p$.
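The following sketch (ours, in one variable with $\Og=[0,1]$ and $r=1$, as a simplified stand-in for the multivariate definition) evaluates a discrete version of $\og^1(f,t;e)_\infty$; for $f(x)=x^2$ the supremum is $u(2-u)$ at $u=t$, $x=1-u$, i.e.\ $2t-t^2$.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 201)   # the domain Omega = [0, 1], grid step 0.005
fx = x ** 2

def omega_1(t):
    """Discrete sup-norm omega^1(f, t; e): the shift u runs over grid multiples in
    (0, t], and differences are taken only where [x, x + u] stays inside Omega."""
    step = x[1] - x[0]
    best = 0.0
    for k in range(1, int(round(t / step)) + 1):
        best = max(best, float(np.max(np.abs(fx[k:] - fx[:-k]))))
    return best

# For f(x) = x^2 the supremum equals 2t - t^2, here 0.75 for t = 1/2:
assert abs(omega_1(0.5) - 0.75) < 1e-12
```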
We shall frequently use the following two properties of these directional moduli of smoothness, which can be easily verified from the definition: \begin{enumerate}[\rm (a)]
\item For each $\mathcal{E}\subset \SS^d$, \begin{equation}\label{2-1-18}\og^r (f, \Og; \mathcal{E})_p=\og^r (f, \Og; \mathcal{E}\cup(-\mathcal{E}))_p.\end{equation}
\item If $T$ is an affine mapping given by $T\eta =\eta_0 +T_0\eta$ for all $\eta\in\R^{d+1}$ with $\eta_0\in\R^{d+1}$ and $T_0$ being a nonsingular linear mapping on $\R^{d+1}$, then
\begin{equation}\label{5-1-eq}
\og^r (f, \Og; \mathcal{E})_p =\Bl|\text{det} \ (T_0)\Br|^{-\f1p} \og^r( f\circ T^{-1}, T(\Og); \mathcal{E}_{T})_p,
\end{equation}
where
$\mathcal{E}_{T}=\bl\{\f{ T_0 x}{\|T_0 x\|}:\ \ x\in \mathcal{E}\br\}$. Moreover, if $\xi, e\in\SS^d$ are such that $e=T_0(\xi)$, then for any $h>0$,
\begin{equation}\label{2-3-18}
|\text{det} \ (T_0)|^{\f1p} \|\tr_{h\xi} ^r ( f, \Og)\|_{L^p(\Og)}=\|\tr_{he}^r (f\circ T^{-1}, T(\Og))\|_{L^p(T(\Og))}.
\end{equation}
\end{enumerate}
Next, we recall that the analogue of the Ditzian-Totik modulus on $\Og\subset \RR^{d+1}$ along a direction $e\in\SS^d$ is defined as (see \cite{To14, To17}):
\begin{equation}\label{2-3-DT}\og_{\Og, \vi}^r(f,t; e)_{p}:=
\sup_{\|h\|\leq \min\{t,1\}} \Bl\|\wt\tr_{h\vi_\Og (e, \cdot) e}^r (f, \Og, \cdot)\Br\|_{L^p(\Og)},\ \ t>0,\end{equation}
where
\begin{equation}\label{funct-vi} \vi_\Og (e, \xi):=\max\Bl\{ \sqrt{l_1l_2}:\ \ l_1, l_2\ge 0,\ \ [\xi-l_1 e, \xi+l_2 e]\subset \Og\Br\},\ \ \xi\in\Og.\end{equation}
For simplicity, we also define $\vi_\Og (\da e, \xi)=\vi_\Og (e, \xi)$ for $e\in\SS^d$, $\da>0$ and $\xi\in\Og$.
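For the closed unit ball, $\vi_\Og(e,\xi)$ collapses to $\sqrt{1-\|\xi\|^2}$ for every direction $e$, since the product $l_1l_2$ is maximized by the full chord through $\xi$; the sketch below (our own illustration) verifies this numerically.

```python
import math

def phi_ball(e, xi):
    """phi_Omega(e, xi) for the closed unit ball: l1, l2 are the distances from xi
    to the boundary along -e and +e, the roots of ||xi + t e|| = 1 (e a unit vector)."""
    b = sum(a * c for a, c in zip(xi, e))                       # xi . e
    disc = math.sqrt(b * b + 1.0 - sum(a * a for a in xi))
    l1, l2 = b + disc, -b + disc                                # backward / forward reach
    return math.sqrt(l1 * l2)

# For the ball this equals sqrt(1 - ||xi||^2), independently of the direction e:
assert abs(phi_ball((1.0, 0.0), (0.6, 0.0)) - 0.8) < 1e-12
assert abs(phi_ball((0.0, 1.0), (0.6, 0.0)) - 0.8) < 1e-12
```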
\subsection{Domains of special type}
A set $G\subset \RR^{d+1}$ is called an {\it upward} $x_{d+1}$-domain with base size $b>0$ and parameter $L\ge 1$ if
it can be written in the form
\begin{equation} \label{2-7-special}G=\xi+\Bl\{( x, y):\ \ x\in (-b,b)^d,\ \ g(x)- L b< y \leq g(x)\Br\} \end{equation}
with $\xi\in\RR^{d+1}$ and $g\in C^2(\RR^d)$.
For such a domain $G$, and a parameter $\ld\in (0, 2]$, we define
\begin{align}
G(\ld):&=\xi +\Bl\{ (x, y):\ \ x\in (-\ld b, \ld b)^d,\ \ g(x)-\ld L b < y \leq g(x)\Br\},\\
\p'G(\ld)&:=\xi +\Bl\{ (x, g(x)):\ \ x\in (-\ld b, \ld b)^d\Br\}.
\end{align}
Associated with the set $G$ in~\eqref{2-7-special}, we also define
\begin{align}
G^\ast:&=\xi+\Bl\{( x, y):\ \ x\in (-2b,2b)^d,\ \ \min_{u\in [-2b, 2b]^d} g(u)-4Lb < y \leq g(x)\Br\}.\label{G}
\end{align}
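A minimal sketch of the membership tests for $G$ and $G(\ld)$ (ours, with toy values $d=1$, $g(x)=0.1x^2$, $b=L=1$, $\xi=0$; the constraint on $L$ imposed in Remark~\ref{rem-2-1-0} below is ignored here):

```python
# Toy upward x_{d+1}-domain with d = 1, g(x) = 0.1 x^2, base size b = 1, parameter L = 1.
g = lambda x: 0.1 * x * x
b, L = 1.0, 1.0

def in_G(x, y, lam=1.0):
    """(x, y) in G(lam); lam = 1 recovers G itself."""
    return abs(x) < lam * b and g(x) - lam * L * b < y <= g(x)

assert in_G(0.0, -0.5)                  # below the graph, above g(0) - Lb = -1
assert not in_G(0.0, 0.05)              # above the graph y = g(x)
assert not in_G(0.0, -0.75, lam=0.5)    # outside the shrunken domain G(1/2)
assert in_G(0.0, -0.25, lam=0.5)
```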
For later applications, we give the following remark on the above definition.
\begin{rem}\label{rem-2-1-0}
In the above definition,
we may choose the base size $b$ as small as we wish, and we may also assume the parameter $L$ in~\eqref{2-7-special} satisfies
\begin{equation}\label{parameter-2-9}
L\ge L_G:=4\sqrt{d} \max_{x\in [-2b, 2b]^d} \|\nabla g(x)\| +1,
\end{equation}
since otherwise we may consider a subset of the form
\begin{equation*} G_0=\xi+\Bl\{( x, y):\ \ x\in (-b_0,b_0)^d,\ \ g(x)- L_0 b_0< y \leq g(x)\Br\}\end{equation*}
with $L_0=Lb/b_0$ and $b_0\in (0, b)$ being a sufficiently small constant.
Unless otherwise stated, we will always assume that the condition~\eqref{parameter-2-9} is satisfied for each upward $x_{d+1}$-domain.
\end{rem}
We may define
an upward $x_j$-domain $G\subset \RR^{d+1}$ and the associated sets $G(\ld)$, $\p' G(\ld)$, $G^\ast$ for $1\leq j\leq d$ in a similar manner, using the reflection
$$\sa_j (x) =(x_1,\dots, x_{j-1}, x_{d+1}, x_{j+1},\dots, x_d, x_j),\ \ x\in\RR^{d+1}.$$
Indeed, $G\subset \RR^{d+1}$
is an {\it upward} $x_j$-domain with base size $b>0$ and parameter $L\ge 1$ if $E:=\sa_j (G)$ is an upward $x_{d+1}$-domain with base size $b$ and parameter $L$, in which case we define
$$G (\ld) = \sa_j \bl( E (\ld)\br),\ \
\p' G(\ld)= \sa_j\bl(\p' E (\ld)\br),\ \ \ G^\ast =\sa_j (E^\ast).$$
We can also define a {\it downward} $x_j$-domain and the associated sets $G(\ld)$, $\p' G(\ld)$, using the reflection with respect to the coordinate plane $x_j=0$:
$$\tau_j (x) :=(x_1, \dots, x_{j-1}, -x_j, x_{j+1}, \dots, x_{d+1}),\ \ x\in\RR^{d+1}.$$
Indeed, $G\subset \RR^{d+1}$
is a {\it downward} $x_j$-domain with base size $b>0$ and parameter $L\ge 1$ if $H:=\tau_j (G)$ is an upward $x_{j}$-domain with base size $b$ and parameter $L$, in which case we define
$$G (\ld) = \tau_j \bl( H (\ld)\br),\ \
\p' G(\ld)= \tau_j\bl(\p' H (\ld)\br),\ \ G^\ast =\tau_j(H^\ast).$$
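Both reflections are simple coordinate operations; the following sketch (ours) implements $\sa_j$ and $\tau_j$ and checks that each is an involution.

```python
def sigma(j, x):
    """sigma_j: swap coordinates x_j and x_{d+1} (j is 1-based, as in the text)."""
    y = list(x)
    y[j - 1], y[-1] = y[-1], y[j - 1]
    return tuple(y)

def tau(j, x):
    """tau_j: reflect in the coordinate plane x_j = 0."""
    y = list(x)
    y[j - 1] = -y[j - 1]
    return tuple(y)

p = (1.0, 2.0, 3.0)                  # a point of R^{d+1} with d = 2
assert sigma(1, p) == (3.0, 2.0, 1.0)
assert sigma(1, sigma(1, p)) == p    # involution
assert tau(2, p) == (1.0, -2.0, 3.0)
assert tau(2, tau(2, p)) == p        # involution
```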
We say $G\subset \RR^{d+1}$ is a domain of special type if it is an upward or downward $x_j$-domain for some $1\leq j\leq d+1$, in which case we call $\p' G(\ld)$ the essential boundary of $G(\ld)$, and write $\p' G =\p' G(1)$ and $\p' G^\ast =\p' G (2)$.
\begin{defn}\label{Def-2-1} Let $\Og\subset \RR^{d+1}$ be a bounded domain with boundary $\Ga=\p \Og$, and let $G\subset \Og$ be a domain of special type.
We say $G$ is attached to $\Ga$ if $\overline{G}^\ast\cap \Ga =\overline{\p' G^\ast}$
and there exists an open rectangular box $Q$ in $\RR^{d+1}$ such that $ G^\ast =Q\cap \Og$.
\end{defn}
\subsection*{$C^2$-domains}
In this paper, we shall mainly work on $C^2$-domains, defined as follows:
\begin{defn}\label{def:c2}
A bounded domain $\Og\subset \RR^{d+1}$ is called $C^2$ if there exist numbers $\da>0$, $M>0$ and a finite cover of the boundary $\Ga:= \p \Og$ by connected open sets $\{ U_j\}_{j=1}^J$ such that: {\rm (i)} for every $x\in\Og$ with $\dist(x, \Ga)<\da$, there exists an index $j$ such that $x\in U_j$, and $\dist(x, \p U_j)>\da$; {\rm (ii)} for each $j$ there exists a Cartesian coordinate system $(\xi_{j,1},\dots, \xi_{j,d+1})$ in $U_j$ such that the set $\Og\cap U_j$ can be represented by the inequality $\xi_{j,d+1}\leq f_j(\xi_{j,1}, \ldots, \xi_{j, d})$, where $f_j:\RR^{d}\to \RR$ is a $C^2$-function satisfying
$\max_{1\leq i, k\leq d}\|\p_i\p_k f_j\|_\infty\leq M.$
\end{defn}
\subsection{New moduli of smoothness on $C^2$ domains}\label{modulus:def}
Let $\Og\subset \RR^{d+1}$ be the closure of an open, connected, bounded $C^2$-domain in $\RR^{d+1}$ with boundary $\Ga=\p \Og$. In this section, we shall give the definition of our new moduli of smoothness on the domain $\Og$.
The definition requires a tangential modulus of smoothness $\wt{\og}^r_G (f, t)_p$ on a domain $G\subset \Og$ of special type, which is described below. We start with an upward $x_{d+1}$-domain $G$ given in~\eqref{2-7-special} with $\xi=0$.
Let
$$\xi_j(x):= {e_j + \p_j g(x) e_{d+1}}\in\RR^{d+1},\ \ j=1,\dots, d,\ \ x\in (-2b, 2b)^d.$$
Clearly, $\xi_j(x)$ is the tangent vector to the essential boundary $\p' G^\ast$ of $G^\ast$ at the point $(x, g(x))$ that is parallel to the $x_jx_{d+1}$-coordinate plane. Given a parameter $A_0>1$, we set
\begin{equation}\label{eqn:a0}
G^t:= \bl\{\xi\in G:\ \ \dist(\xi, \p' G) \ge A_0 t^2\br\},\ \ 0\leq t\leq 1.
\end{equation}
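The tangency of $\xi_j(x)$ noted above can be verified directly: the essential boundary is the zero set of $F(x,y):=y-g(x)$, whose normal at $(x,g(x))$ is $\nabla F=(-\nabla g(x),1)$, and

```latex
\[
\xi_j(x)\cdot \nabla F(x, g(x))
  = \bl(e_j + \p_j g(x)\, e_{d+1}\br)\cdot \bl(-\nabla g(x), 1\br)
  = -\p_j g(x) + \p_j g(x) = 0.
\]
```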
We then define
the $r$-th order tangential modulus of smoothness $\wt{\og}^r_G (f, t)_p$, ($0<t\leq 1$) of $f\in L^p(\Og)$ by
\begin{align}\label{modulus-special}
\wt{\og}^r_G (f, t)_p&:= \sup_{\sub{0<s\leq t\\
1\leq j\leq d}} \Bl(\int_{G^t}
\Bl[\f 1{(bt)^d} \int_{I_x(tb)} |\tr_{s \xi_j(u)}^r (f, \Og,(x,y))|^p \, du\Br] dxdy\Br)^{\f1p},
\end{align}
where $I_x(tb):=\{ u\in (-b, b)^d:\ \ \|u-x\|\leq tb\}$, and the $L^\infty$-norm replaces the $L^p$-norm when $p=\infty$.
For $t>1$, we define $ \wt{\og}^r_G (f, t)_p=\wt{\og}^r_G (f, 1)_p$.
Next, if $G\subset \Og$ is a general domain of special type, then we define the tangential moduli $\wt{\og}^r_G (f, t)_p$
through the identity
$$ \wt{\og}^r_G (f, t)_p=\wt{\og}^r_{T(G)} (f\circ T^{-1}, t)_p,$$
where $T$ is a composition of a translation and the reflections $\sa_j, \tau_j$ for some $1\leq j\leq d+1$ which takes $G$ to an upward $x_{d+1}$-domain of the form~\eqref{2-7-special} with $\xi=0$.
To define the new moduli of smoothness on $\Og$, we also need the following covering lemma, which was proved in~\cite{Da-Pr-Bernstein}*{Section~2}.
\begin{lem}[\cite{Da-Pr-Bernstein}*{Proposition~2.7}]\label{lem-2-1-18}
There exists a finite cover of the boundary $\Gamma=\p \Og$ by domains of special type $G_1, \dots, G_{m_0}\subset \Og$ that are attached to $\Ga$. In addition, we may select the domains $G_j$ in such a way that the size of each $G_j$ is as small as we wish, and the parameter of each $G_j$ satisfies the condition~\eqref{parameter-2-9}.
\end{lem}
Now we are in a position to define the new moduli of smoothness on $\Og$.
\begin{defn}\label{def:modulus}
Given $0<p\leq \infty$, the $r$-th order modulus of smoothness of $f\in L^p(\Og)$ is defined by
\begin{equation}\label{eqn:defmodulus}
\og_\Og^r(f, t)_p:=\og^r_{\Og, \vi} (f, t)_p +\og^r_{\Og, \tan} (f, t)_p,
\end{equation}
where
$$
\og_{\Og, \vi}^r (f,t)_p:=\max_{1\leq j\leq d+1} \og^r_{\Og, \vi} (f, t; e_j)_p\ \ \text{and}\ \
\og_{\Og, \tan}^r (f, t)_p :=\sum_{j=1}^{m_0} \wt{\og}_{G_j}^r (f,t)_p.
$$
Here $G_1,\dots, G_{m_0}\subset \Og$ are the domains of special type from Lemma~\ref{lem-2-1-18}.
\end{defn}
Note that the second term on the right hand side of~\eqref{eqn:defmodulus}
is defined via finite differences along certain tangential directions of the boundary $\Ga=\p \Og$. As a result,
we call $\og^r_{\Og, \tan} (f, t)_p$ the tangential part of the $r$-th order modulus of smoothness on $\Og$.
We conclude this subsection with the following remark.
\begin{rem}\label{rem-3-2} The moduli of smoothness defined in Definition~\ref{def:modulus} rely on the parameter $A_0$ in~\eqref{eqn:a0}. To emphasize the dependence on this parameter, we often write
$$\wt{\og}^r_G (f, t; A_0)_p:=\wt{\og}^r_G (f, t)_p,\ \ \ {\og}^r_{\Og} (f, t; A_0)_p:={\og}^r_{\Og} (f, t)_p.$$
By the Jackson theorem (Theorem~\ref{Jackson-thm}) and the univariate Remez inequality (see \cite{MT2}), it can be easily shown that given any two parameters $A_1, A_2\ge 1$,
$$ {\og}^r_{\Og} (f, t; A_1)_p\sim {\og}^r_{\Og} (f, t; A_2)_p,\ \ t>0,\ \ 0<p\leq \infty.$$
\end{rem}
\subsection{Summary of main results}\label{summary}
In this subsection, we shall summarize the main results of this paper. As always, we assume that $\Og$ is the closure of an open, connected and bounded $C^2$-domain in $\RR^{d+1}$.
For simplicity, we identify $L^\infty(\Og)$ with the space $C(\Og)$ of continuous functions on $\Og$.
The main aim of this paper is to prove the Jackson type inequality and the corresponding inverse inequality for the modulus of smoothness $\og_\Og^r(f,t)_p$ defined in~\eqref{eqn:defmodulus}, as stated in the following two theorems.
\begin{thm}\label{Jackson-thm} If $r,n\in\NN$, $0<p\leq \infty$ and $f\in L^p(\Og)$, then
$$ E_n (f)_{L^p(\Og)} \leq C \og_\Og^r(f, \f 1 n)_p,$$
where the constant $C$ is independent of $f$ and $n$.
\end{thm}
\begin{thm}\label{inverse-thm} If $r, n\in\NN$, $1\leq p\leq \infty$ and $f\in L^p(\Og)$, then
$$\og_{\Og}^r (f, n^{-1})_{p} \leq\f{ C}{n^r} \sum_{j=0}^n (j+1)^{r-1} E_j (f)_{L^p(\Og)},$$
where the constant $C$ is independent of $f$ and $n$.
\end{thm}
Note that the Jackson inequality stated in Theorem~\ref{Jackson-thm} holds for the full range of $0<p\leq \infty$.
Now let us describe two main ingredients in the proof of the direct Jackson theorem: multivariate Whitney type inequalities on certain domains (not necessarily convex); and localized polynomial partitions of unity on $C^2$-domains.
The Whitney type inequality gives an upper estimate for the error of local polynomial approximation of a function via the behavior of its finite differences.
A useful multivariate Whitney type inequality was established by Dekel and Leviatan \cite{De-Le} on a convex body (compact convex set with non-empty interior) $G\subset \R^{d+1}$ asserting that
for any $0<p\leq \infty$, $r\in\NN$, and $f\in L^p(G)$,
\begin{equation}\label{7-1-18-00}E_{r-1} (f)_{L^p(G)} \leq C(p,d,r) \og^r (f, G)_{p}.\end{equation}
It is remarkable that the constant $C(p,d,r)$ here depends only on the three parameters $p,d,r$, but is independent of the particular shape of the convex body $G$. However, the Whitney inequality~\eqref{7-1-18-00} is NOT enough for our purpose because our domain $\Og$ is not necessarily convex, and the definition of our moduli of smoothness uses local finite differences along a finite number of directions only.
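For orientation, in the simplest case $r=1$, $p=\infty$, the inequality~\eqref{7-1-18-00} is elementary: the best constant approximation is the mid-range of $f$, with error $(\max f-\min f)/2$, while $\og^1(f,G)_\infty$ is the full oscillation $\max f-\min f$. The sketch below (our own) checks this on a grid.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 1001)
fx = np.sin(7.0 * x) + 0.3 * x

# omega^1(f, G)_infty on the grid: sup over all admissible shifts u of sup_x |f(x+u) - f(x)|.
omega1 = max(float(np.max(np.abs(fx[k:] - fx[:-k]))) for k in range(1, len(x)))

# Best constant approximation: the mid-range, with sup-norm error (max f - min f)/2.
E0 = (fx.max() - fx.min()) / 2.0

assert abs(2.0 * E0 - omega1) < 1e-12   # E_0(f)_infty = omega^1(f, G)_infty / 2
```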
In~\cite{Da-Pr-Whitney} we developed a new method to study the following Whitney type inequality for directional moduli of smoothness on a more general domain $G\subset\RR^{d+1}$ (not necessarily convex):
\begin{equation}\label{7-8-18-00}E_{r-1} (f)_{L^p(G)} \leq C \og^r(f, G; \mathcal{E})_p.\end{equation}
The key idea of~\cite{Da-Pr-Whitney} is to deduce the Whitney type inequality on a more complicated domain from the Whitney inequality on cubes or some other simpler domains. We state the result from~\cite{Da-Pr-Whitney} which is sufficient for the purposes of this work in Section~\ref{sec:tools}.
On the other hand, a polynomial partition of unity is a useful tool for patching together local polynomial approximations, and can be of independent interest. For simplicity, we say
a set $\Ld$ in a metric space $(X,\rho)$ is $\va$-separated for some $\va>0$ if $\rho(\og, \og') \ge \va$ for any two distinct points $\og, \og'\in\Ld$, and we call an $\va$-separated subset $\Ld$ of $X$ maximal if
$\inf_{\og\in\Ld} \rho(x, \og)<\va$ for any $x\in X$.
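A maximal $\va$-separated set can always be produced greedily: scan the points and keep each one whose distance to everything kept so far is at least $\va$. A sketch of ours, for the Euclidean metric on a grid in $[0,1]^2$:

```python
import itertools, math

def maximal_separated(points, eps):
    """Greedy maximal eps-separated subset: keep a point iff it lies at distance
    >= eps from every point kept so far; every rejected point is then within eps."""
    kept = []
    for p in points:
        if all(math.dist(p, q) >= eps for q in kept):
            kept.append(p)
    return kept

grid = [(i / 20.0, j / 20.0) for i, j in itertools.product(range(21), repeat=2)]
net = maximal_separated(grid, eps=0.3)

assert all(math.dist(p, q) >= 0.3 for p, q in itertools.combinations(net, 2))  # separated
assert all(min(math.dist(x, q) for q in net) < 0.3 for x in grid)              # maximal
```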
In Section~\ref{sec:partition of unity}, relying on ideas by Dzjadyk and Konovalov~\cite{Dz-Ko}, we shall prove the following localized polynomial partitions of unity on $C^2$-domains:
\begin{thm}\label{polyPartition00}Given any parameter $\ell>1$ and positive integer $n$, there exist a $\f 1n$-separated subset $\{\xi_j\}_{j=1}^{m_n}$ of $\Og$ with respect to the metric $\rho_\Og$ defined in~\eqref{metric} and a sequence of polynomials $\{P_j\}_{j=1}^{m_n}\subset \Pi_{c_0 n}^{d+1}$ such that $\sum_{j=1}^{m_n} P_j (\xi) =1$ and
$ |P_j(\xi)| \leq C_1 (1+n\rho_\Og(\xi,\xi_j))^{-\ell}$, $j=1,2,\dots, m_n$
for every $\xi\in \Og$,
where the constants $c_0$ and $C_1$ depend only on $d$ and $\ell$.
\end{thm}
A crucial role in the proof of our inverse theorem (i.e., Theorem~\ref{inverse-thm}) is played by a new Bernstein inequality associated with the tangential derivatives on the boundary $\Ga$, which we recently established in~\cite{Da-Pr-Bernstein}. The corresponding definitions and statements required in the context of the current work can be found in Section~\ref{sec:tools}. We only mention here that this new tangential Bernstein inequality was used in~\cite{Da-Pr-Bernstein} to establish Marcinkiewicz-Zygmund type inequalities and positive cubature formulas on $C^2$ domains.
We also compare the moduli of smoothness $\og_\Og^r(f,t)_p$ with
the average $(p,q)$-moduli of smoothness $ \tau_r (f, t)_{p,q}$ introduced by Ivanov~\cite{Iv}.
It turns out that the moduli $\og_\Og^r(f,t)_p$
can be controlled above by the average moduli $ \tau_r (f, t)_{p,q}$, as shown in the following theorem that will be proved in Section~\ref{ch:IvanovModuli}:
\begin{thm}\label{thm-9-1-00}
For any $0<q\leq p\leq \infty$ and $f\in L^p(\Og)$,
$$\og_\Og^r(f, t; A_0)_p \leq C \tau_r (f, c_0 t)_{p,q},\ \ 0<t\leq 1,$$
where the constant $C$ is independent of $f$ and $t$.
\end{thm}
As an immediate consequence of Theorem~\ref{thm-9-1-00} and Theorem~\ref{Jackson-thm}, we obtain a Jackson type inequality for the average moduli of smoothness for any dimension $d\ge 1$ and the full range of $0<q\leq p\leq \infty$.
\begin{cor} \label{cor-4-4-0}
If $f\in L^p(\Og)$, $0< q\leq p \leq \infty$ and $r\in\NN$, then
$$ E_n (f)_p \leq C_{r, \Og} \tau_r \Bl(f, \f {c_0} n\Br)_{p,q}.$$
\end{cor}
As mentioned earlier, Corollary~\ref{cor-4-4-0} for $1\leq p\leq \infty$ and $d=1$ was announced in \cite{Iv} for a piecewise $C^2$-domain $\Og\subset \RR^2$.
We shall prove the corresponding inverse theorem for the average moduli of smoothness $\tau_r(f, t)_{p,q}$ as well:
\begin{thm} \label{thm-15-1-00}If $r\in\NN$, $1\leq q\leq p\leq \infty$ and $f\in L^p(\Og)$, then
$$\tau_r (f, n^{-1})_{p,q} \leq C_{r} n^{-r} \sum_{s=0}^n (s+1)^{r-1} E_s (f)_p.$$
\end{thm}
In the case when $\Og\subset \RR^2$ (i.e., $d=1$), Theorem~\ref{thm-15-1-00} was announced without detailed proofs in \cite{Iv} for the case $p=\infty$ and the case when $1\leq p\leq \infty$ and $\Og$ is a parallelogram or a disk.
The rest of the paper is organized as follows. Section~\ref{sec:tools} is devoted to the statements of the required Bernstein and Whitney-type inequalities obtained in~\cite{Da-Pr-Bernstein} and~\cite{Da-Pr-Whitney}.
Sections~\ref{sec:partition of unity}--\ref{ch:direct} contain the proof of the Jackson theorem (Theorem~\ref{Jackson-thm}). In Section~\ref{ch:IvanovModuli}, we compare our moduli of smoothness $\og_\Og^r(f,t)_p$ with the average moduli of smoothness $\tau_r(f,t)_{p,q}$. The main result of Section~\ref{ch:IvanovModuli} is stated in Theorem~\ref{thm-9-1-00}. Finally, in Section~\ref{sec:15}, we prove the inverse theorems as stated in Theorem~\ref{inverse-thm} and Theorem~\ref{thm-15-1-00}.
\section{Tools}\label{sec:tools}
In this section we collect several necessary ingredients which we established recently in~\cite{Da-Pr-Bernstein} and~\cite{Da-Pr-Whitney}. A useful covering result, Lemma~\ref{lem-2-1-18}, has already been stated above.
\subsection{Equivalence of different metrics}
Let $\rho_\Og:\Og\times \Og\to [0,\infty)$ be the metric on $\Og$ given in \eqref{metric}. As in~\cite{Da-Pr-Bernstein}, we introduce another metric $\wh \rho_G$ on a domain $G$ of special type, which is equivalent to the restriction of $\rho_{\Og}$ to $G$ if $G\subset \Og$ is attached to $\Ga:=\p\Og$. Let $G\subset \RR^{d+1}$ be an $x_{d+1}$-upward domain with base size $b\in (0,1)$ and parameter $L>0$:
\begin{align*}
G:=\varsigma+\{ (x, y):\ \ x\in (-b,b)^{d},\ \ g(x)-Lb< y\leq g(x)\},\ \ \varsigma\in\RR^{d+1},
\end{align*}
where $g$ is a $C^2$-function on $\RR^{d}$. Then $$G^\ast=\varsigma+\Bl\{ (x, y):\ \ x\in (-2b,2b)^{d},\ \ \min_{u\in [-2b, 2b]^{d}} g(u)-4Lb < y\leq g(x)\Br\}$$
and we define a metric $\wh\rho_G: \overline{G^\ast}\times \overline{G^\ast} \to [0,\infty)$ by
\begin{equation}\label{rhog}
\wh{\rho}_G(\varsigma+\xi, \varsigma+\eta):=\max\Bl\{\|\xi_x-\eta_x\|,
\Bl|\sqrt{g(\xi_x)-\xi_y}-\sqrt{g(\eta_x)-\eta_y}\Br|\Br\}
\end{equation}
for all $ \xi=(\xi_x, \xi_y),\eta=(\eta_x,\eta_y)\in \overline{G^\ast}-\varsigma$.
We can define the metric $\wh{\rho}_G$ on a more general $x_j$-domain $G\subset \RR^{d+1}$ (upward or downward) in a similar way.
We will use the following equivalence of the metric $\wh\rho_G$ and the restriction of $\rho_\Og$ on $G$ when $G\subset\Og$ is attached to $\Ga=\p \Og$.
\begin{prop}[\cite{Da-Pr-Bernstein}*{Proposition~3.1}]\label{metric-lem} If $G \subset \Og$ is a domain of special type attached to $\Ga$,
then
\begin{equation*}\label{6-1-metric}\wh{\rho}_G(\xi,\eta)\sim \rho_{\Og} (\xi,\eta),\ \ \ \xi, \eta\in G\end{equation*}
with the constants of equivalence depending only on $G$ and $\Og$.
\end{prop}
\subsection{Whitney type inequality}
\begin{defn}\label{def-3-4} Given $\xi\in\SS^d$, we say $G \subset \RR^{d+1}$ is a regular $\xi$-directional domain with parameter $L\ge 1$ if there exists a rotation $\pmb{\rho}\in SO(d+1)$ such that \begin{enumerate}[\rm (i)]
\item
$\pmb{\rho}(0,\dots, 0,1)=\xi$, and $ G$ takes the form
\begin{equation}\label{3-6}
G:=\pmb{\rho}\Bl(\{(x, y): \ x\in D,\ g_1(x)\leq y\leq g_2(x)\}\Br),
\end{equation}
where $D\subset \RR^{d}$ is compact and $g_i: D\to\RR$ are measurable;
\item
there exist an affine function $H:\RR^{d}\to\RR$ (an element of $\Pi_1^{d}$) and a constant $\da>0$ such that $S\subset G\subset S_L$, where
\begin{align}
\pmb{\rho}^{-1} (S):&=\{(x,y):\ \ x\in D,\ \ H(x)-\da\leq y\leq H(x)+\da\},\label{3-7}\\
\pmb{\rho}^{-1} (S_L):&= \{(x,y):\ \ x\in D,\ \ H(x)-L\da\leq y\leq H(x)+L\da\}.\label{3-8}
\end{align}
\end{enumerate}
In this case, we say $S$ is the base of $G$.
\end{defn}
For $r\in\NN$, $0<p\leq \infty$, a measurable set $K\subset \RR^{d+1}$, and a nonempty set $\CE\subset \SS^d$, we define the directional Whitney constant by
\begin{equation*}
w_r(K;\CE)_p:= \sup\Bl \{ E_{(d+1)(r-1)}(f)_{L^p(K)}:\ \ f\in L^p(K),\ \ \og^r(f, K;\CE)_p\leq 1\Br\}.
\end{equation*}
We remark that the above definition differs from the corresponding definition in~\cite{Da-Pr-Whitney}: we approximate from the wider space $\Pi_{(d+1)(r-1)}^{d+1}$ instead of a certain ``directional'' polynomial space $\Pi_{r-1}^{d+1}(\CE)$, see~\cite{Da-Pr-Whitney}*{Prop.~1.1(ii)}. This results in smaller Whitney constants, which are subject to the same upper bound as in the next lemma; this bound is sufficient for our purposes here.
\begin{lem}[\cite{Da-Pr-Whitney}*{Lemma~2.5}]\label{cor-7-3}
Let $G\subset \RR^{d+1}$ be a regular $\xi$-directional domain with parameter $L\ge 1$ and base $S$ as given in Definition~\ref{def-3-4} for some $\xi\in\SS^d$. Let $\CE\subset\SS^d$ be a set of directions containing $\xi$. Assume that $K$ is a measurable subset of $\RR^{d+1}$ such that $S\subset K\cap G$ and $w_r(K; \CE)_p<\infty$ for some $r\in\NN$, $0<p\leq \infty$.
Then
\begin{equation}\label{3-9-a}
w_r(G\cup K; \CE)_p\leq C_{p,r} L^{r-1+2/p}(1+ w_r(K; \CE)_p),
\end{equation}
where the constant $C_{p,r}$ depends only on $p$ and $r$.
\end{lem}
\subsection{Bernstein inequality}
If $P$ is an algebraic polynomial of one variable of degree $\le n$, then by the univariate Bernstein inequality (\cite[p. 265]{De-Lo}), we have that for any $b>0$ and $\al>1$,
\begin{equation}\label{markov-bern}
\Bl \|(\sqrt{b^{-1}t} +n^{-1})^iP^{(i+j)}(t)\Br\|_{L^p([0,b], dt)} \le C_\al n^{i+2j} b^{-(i+j)}
\|P\|_{L^p([0,\al b])}.
\end{equation}
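For the reader's convenience, we sketch how~\eqref{markov-bern} follows from the unit-interval case by a standard rescaling argument. Setting $Q(s):=P(bs)$, so that $P^{(i+j)}(t)=b^{-(i+j)}Q^{(i+j)}(t/b)$ and $\sqrt{b^{-1}t}=\sqrt{s}$ for $t=bs$, we obtain
$$\Bl\|(\sqrt{b^{-1}t}+n^{-1})^i P^{(i+j)}(t)\Br\|_{L^p([0,b],\, dt)} = b^{1/p-(i+j)} \Bl\|(\sqrt{s}+n^{-1})^i Q^{(i+j)}(s)\Br\|_{L^p([0,1],\, ds)},$$
while $\|P\|_{L^p([0,\al b])}=b^{1/p}\|Q\|_{L^p([0,\al])}$. Applying the case $b=1$ to $Q$ and combining these two identities yields~\eqref{markov-bern}.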
Let $G\subset \RR^{d+1}$ be an $x_{d+1}$-upward domain with base size $b>0$ and parameter $L\ge 1$ given by
\begin{equation}\label{11-2-2} G:=\Bl\{ (x, y)\in\RR^{d+1}:\ \ x\in (-b,b)^{d},\ \ g(x)-Lb< y\leq g(x)\Br\},\end{equation}
where $g:\RR^{d}\to \RR$ is a $C^2$-function satisfying $\min_{x\in [-2b, 2b]^{d}}g(x)= 4Lb$.
For each $\mu\in(0,2]$, denote
\begin{align*}
G(\mu):&=\{ (x, y):\ \ x\in (-\mu b,\mu b)^{d},\ \ g(x)-\mu L b<y\leq g(x)\}.
\end{align*}
For $(x,y)\in G(2)$, we define
\begin{equation*}
\da(x,y):=g(x)-y\ \ \text{and}\ \ \
\vi_n(x,y) :=\sqrt{\da(x,y)} +\f 1n,\ \ n=1,2,\dots.
\end{equation*}
The Bernstein type inequality on the domain $G$ is formulated in terms of certain tangential derivatives along the essential boundary $\p' G$ of $G$, whose definition is given as follows.
For $x_0\in [-2b, 2b]^{d}$, let \begin{equation}\label{11-4-0}
\xi_j (x_0) :=e_j + \p_j g(x_0) e_{d+1},\ \ j=1,\dots, d
\end{equation}
be the tangent vector to $\p G$ at the point $(x_0, g(x_0))$ that is parallel to the $x_jx_{d+1}$-coordinate plane. We denote by $\p_{\xi_j(x_0)}^\ell$ the $\ell$-th order directional derivative along the direction of $\xi_j(x_0)$:
\begin{equation*}\label{11-3}
\p_{\xi_j(x_0)}^\ell:=(\xi_j(x_0)\cdot\nabla )^\ell=\sum_{i=0}^\ell \binom{\ell}i (\p_jg(x_0))^i \p_j^{\ell-i} \p_{d+1}^i,
\end{equation*}
where $j=1,2,\dots, d$ and $ x_0\in [-2b,2b]^{d}.$ Thus,
for $(x,y)\in G$ and $ f\in C^1(G)$,
$$ \p_{\xi_j(x)}^\ell f(x,y)=\sum_{i=0}^\ell \binom{\ell}i (\p_jg(x))^i( \p_j^{\ell-i} \p_{d+1}^i f)(x,y),\ \ 1\leq j\le d. $$
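For instance, in the planar case $d=1$ with $\ell=2$, writing points of $\RR^2$ as $(x,y)$, the formula above reads
$$\p_{\xi_1(x)}^2 f(x,y)= \p_x^2 f(x,y)+2g'(x)\,\p_x\p_y f(x,y)+ (g'(x))^2\,\p_y^2 f(x,y),$$
which is the second derivative of $f$ along the direction $(1, g'(x))$ tangent to the graph of $g$.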
We also need to deal with certain mixed directional derivatives. Let $\NN_0$ denote the set of all nonnegative integers. For $\pmb{\al}=(\al_1,\dots, \al_{d})\in \NN_0^{d}$, we set $|\pmb\al| =\al_1+\al_2+\dots+\al_{d}$, and define
\begin{align*}
{\mathcal{D}}_{\tan, x_0}^{\pmb{\al}} =\p_{\xi_1(x_0)}^{\al_1} \p_{\xi_2( x_0)}^{\al_2} \dots \p_{\xi_{d}( x_0)}^{\al_{d}},\ \ \ \ x_0\in [-2b, 2b]^{d}.\label{11-4}
\end{align*}
Finally, we are ready to state the required result.
\begin{thm}\label{cor-11-2} \cite{Da-Pr-Bernstein}*{Corollary~5.2} Let $\ld \in (1,2]$ and $\mu>1$ be two given parameters. If $0<p\leq \infty$ and $f\in\Pi_n^{d+1}$, then for any $\pmb{\al}\in\NN_0^{d}$, and $ i,j=0,1,\dots$,
\begin{align*}
\Bl\|\vi_n(\xi)^{i} &\max_{ u\in \Xi_{n,\mu,\ld}(\xi)}\Bl| {\mathcal{D}}_{\tan, u}^{\pmb{\al}}\partial_{d+1}^{i+j}f(\xi)\Br|\Br\|_{L^p(G; d\xi)} \le c_\mu n^{|\pmb{\al}|+2j+i}\|f\|_{L^p(G(\ld))}, \end{align*}
where
$$\Xi_{n, \mu,\ld} (\xi):= \Bl\{ u\in [-\ld b, \ld b]^{d}:\ \ \|u-\xi_x\|\leq \mu \vi_n(\xi)\Br\}, \quad \xi=(\xi_x,\xi_y).$$
\end{thm}
\section{Polynomial partitions of unity}\label{sec:partition of unity}
\subsection{Polynomial partitions of unity on domains of special type}
\label{sec:5}
The main purpose of this section is to construct a localized polynomial partition of unity on a domain $G\subset \RR^{d+1}$ of special type. Without loss of generality, we may assume that $G$ is an upward $x_{d+1}$-domain given in~\eqref{2-7-special} with $\xi=0$, small base size $b>0$ and parameter $L=b^{-1}$. Namely,
\begin{align*}
G:=\{ (x, y):\ \ x\in [-b,b]^{d},\ \ g(x)-1\leq y\leq g(x)\},
\end{align*}
where $b\in (0,(2d)^{-1})$ is a sufficiently small constant and $g$ is a $C^2$-function on $\RR^d$ satisfying $\min_{x\in [-b, b]^d} g(x)\ge 4$.
Our construction of a localized polynomial partition of unity relies on a partition of the domain $G$, which we now describe.
Given a positive integer $n$, let $\Ld^d_n:=\{ 0, 1,\dots, n-1\}^d\subset \ZZ^d$ be an index set.
We shall use boldface letters $\mathbf{i}, \mathbf{j},\dots$ to denote indices in the set $\Ld_n^d$. For each $\mathbf{i}=(i_1,\dots, i_{d})\in \Ld_n^d$, define
\begin{equation}\label{partition-b}\Delta_{\bfi}:=[t_{i_1}, t_{i_1+1}]\times \dots \times [t_{i_{d}}, t_{i_{d}+1}] \ \ \ \text{with}\ \ t_{i}=-b+\f {2i}n b.
\end{equation}
Then $\{\Delta_{\bfi}\}_{\bfi\in\Ld_n^d}$ forms a partition of the cube $[-b,b]^d$.
Next, let $N:=N_n:=2\ell_1 n$
and $\al:= 1/(2\sin^2\f \pi{4\ell_1})$, where $\ell_1$ is a fixed large positive integer satisfying \begin{equation}\label{5-2-18}
\al\ge 5d\max_{x\in [-4b, 4b]^d} (|g(x)|+\max_{1\leq i, j\leq d} |\p_i\p_j g(x)|).
\end{equation}
Let $\{\al_j:=2\al \sin^2 (\f {j\pi}{2N})\}_{j=0}^N$ denote the Chebyshev partition of the interval $[0, 2\al]$ of order $N$ such that $\al_n=1$. Then $\{\al_j\}_{j=0}^n$ forms a partition of the interval $[0,1]$.
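For the reader's convenience, let us verify the normalization: since $N=2\ell_1 n$,
$$\al_n=2\al\sin^2\Bl(\f {n\pi}{2N}\Br)=2\al \sin^2\Bl(\f{\pi}{4\ell_1}\Br),$$
so the requirement $\al_n=1$ determines the choice of $\al$, while $\al_N=2\al\sin^2(\pi/2)=2\al$, so that $\{\al_j\}_{j=0}^N$ indeed exhausts $[0,2\al]$.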
Finally, we define a partition of the domain $G$ as follows:
\begin{align*}
G&=\Bl\{(x,y):\ \ x\in [-b,b]^d,\ \ g(x)-y\in [0,1]\Br\} =\bigcup_{\bfi\in\Ld_n^d} \bigcup_{j=0}^{n-1} I_{\mathbf{i},j},
\end{align*}
where
$$I_{\mathbf{i},j}:=\Bl\{ (x, y):\ \ x\in \Delta_{\bfi},\ \ g(x)-y\in [\al_{j}, \al_{j+1}]\Br\}.$$
Note that $\Ld_n^d \times \{0,\dots, n-1\}=\Ld_n^{d+1}$.
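Let us also record that each piece $I_{\bfi,j}$ has diameter $O(n^{-1})$ with respect to the metric $\wh\rho_G$ from~\eqref{rhog}: its projection $\Delta_{\bfi}$ has side length $2b/n$, and since $\sqrt{\al_j}=\sqrt{2\al}\,\sin\f{j\pi}{2N}$ for $0\leq j\leq N$,
$$\sqrt{\al_{j+1}}-\sqrt{\al_j}=\sqrt{2\al}\Bl(\sin\f{(j+1)\pi}{2N}-\sin\f{j\pi}{2N}\Br)\leq \sqrt{2\al}\,\f{\pi}{2N}\leq \f{C}{n}.$$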
With the above notation, we have
\begin{thm}\label{strips-0}
For any $m\ge2$, there exists a sequence of polynomials $\bl\{q_{\bfi, j}:\ \ (\bfi, j) \in\Ld_n^{d+1}\br\}$ of degree at most $ C(m, d) n$ on $\RR^{d+1}$ such that $$\sum_{(\bfi, j)\in\Ld_n^{d+1}} q_{\bfi,j}(x,y)=1\ \ \ \text{for all $(x,y)\in G$},$$
and for each $(x,y)\in I_{\mathbf{k}, l} $ with $(\mathbf{k},l)\in\Ld_n^{d+1}$,
\be\label{strips-ineq}
| q_{\bfi,j}(x,y)|\le \frac{C_{m,d}}{\Bl(1+\max\{ \|\bfi-\mathbf{k}\|, |j-l|\}\Br)^m}.
\ee
\end{thm}
Theorem~\ref{strips-0} is motivated by~\cite[Lemma~2.4]{Dz-Ko}, but some important details of the proof were omitted there. In this section, we shall give a complete and simpler proof of the theorem.
Recall that we write $\xi\in\RR^{d+1}$ in the form $\xi=(\xi_x, \xi_y)$ with $\xi_x\in\RR^d$ and $\xi_y\in\RR$.
\begin{rem}\label{rem-5-2}
Recall that in~\eqref{rhog} we introduced the following metric on the domain $G$: for $\xi=(\xi_x, \xi_y)$ and $\eta=(\eta_x, \eta_y)\in G$,
\begin{equation*}
\wh{\rho}_G(\xi, \eta)=\max\Bl\{\|\xi_x-\eta_x\|,
\Bl|\sqrt{g(\xi_x)-\xi_y}-\sqrt{g(\eta_x)-\eta_y}\Br|\Br\}.
\end{equation*}
It can be easily seen that if $ \xi\in I_{\bfi, j}$ and $ \eta\in I_{\mathbf {k}, \ell}$, then
\begin{equation}\label{Chapter-5-1}
1+n\wh{\rho}_G(\xi,\eta) \sim 1+\max\{ \|\bfi-\mathbf{k}\|, |j-\ell|\}.
\end{equation}
This implies that
\begin{equation}\label{6-6-18}
| q_{\bfi,j}(\xi)|\le \frac{C_{m,d}}{(1+n\wh{\rho}_G(\xi,\og_{\bfi,j}))^m},\ \ \ \forall \xi\in G,\ \ \forall \og_{\bfi, j} \in I_{\bfi, j}.
\end{equation}
\end{rem}
\begin{rem}\label{rem-6-3}
If $r\in\NN$ and $n\ge 10 r$, then the polynomials $q_{\bfi,j}$ in Theorem~\ref{strips-0} can be chosen to be of total degree $\le n/r$. Indeed, this can be obtained by invoking Theorem~\ref{strips-0} with $c(m,d)n/r$ in place of $n$, relabeling the indices, and setting some of the polynomials to be zero.
\end{rem}
For the proof of Theorem~\ref{strips-0}, we need two additional lemmas, the first of which is well known.
\begin{lem}\label{chebyshev}\cite[Theorem~1.1]{Dz-Ko}
Given any parameter $\ell>1$, there exists a sequence of polynomials $\{u_j\}_{j=0}^{n-1}$ of degree at most $ 2n$ on $\RR$ such that $\sum_{j=0}^{n-1}u_j(x)=1$ for all $x\in [-1,1]$ and
$$| u_j(\cos\t)|\leq \frac{C_{\ell}}
{( 1+n|\t -\f {j\pi}{n}|)^{\ell}},\ \ \t\in [0,\pi],\ \ j=0,\dots,n-1.$$
\end{lem}
The second lemma gives a polynomial partition of unity associated with the partition $\{\Delta_{\bfj}:\ \ \bfj\in\Ld_n^d\}$ of the cube $[-b,b]^d$.
\begin{lem}\label{uniform} Given any parameter $\ell>1$, there exists a sequence of polynomials $\{v_{\mathbf{j}}^d\}_{\mathbf{j}\in\Ld_n^d} $ of total degree $\le 2dn$ on $\RR^d$ such that for all $x\in [-b, b]^{d}$, $\sum_{\mathbf{j}\in \Ld_n^d}v_{\mathbf{j}}^d(x)= 1$ and
$$|v_{\mathbf{j}}^d(x)|\le \f{C_{\ell,d}} { (1+n\|x-x_{\mathbf{j}}\|)^\ell},\ \ \mathbf{j}\in\Ld_n^d,
$$
where $x_{\bf j}$ is an arbitrary point in $\Delta_{\bf j}$.
\end{lem}
This lemma is probably well known, but for completeness, we present a proof below.
\begin{proof}
Without loss of generality, we may assume that $d=1$ and $b=\f12$.
The general case can be deduced easily using tensor products of polynomials in one variable.
Let $\{u_j\}_{j=0}^{ n-1}$ be a sequence of
polynomials of degree at most $2n$ as given in Lemma~\ref{chebyshev} with $2\ell$ in place of $\ell$. Noticing that for $u\in [-1,1]$ and $v\in [-\f 12, \f12]$,
\begin{equation}\label{4-1}
|u-v|\leq |\arccos u-\arccos v |\leq \pi |u-v|,
\end{equation}
we obtain
\begin{equation}\label{4-2}
| u_j(x)|\leq\f{ C_\ell}{ (1+n|x-\cos \f {j\pi}n|)^{2\ell}},\ \ x\in \Bigl[-\f12,\f12\Bigr].
\end{equation}
Next, we define a sequence of polynomials $\{v_j\}_{j=0}^{n-1}$ of degree at most $2n$ on $[-\f12, \f12]$ as follows:
$$ v_j(x)=\sum_{i:\ \ s_{j}<\cos \f {i\pi}n\leq s_{j+1}} u_i(x),$$
where $0\leq i\leq n-1$, $s_0=-2$, $s_n =2$, $s_j=t_j =-\f12 +\f {j}n$ for $1\leq j\leq n-1$, and we define $v_j(x)=0$ if the sum is taken over the empty set. Clearly, $\sum_{j=0}^{n-1} v_j(x)=\sum_{i=0}^{n-1} u_i(x)=1$ for all $x\in [-\f12,\f12]$.
Furthermore, using~\eqref{4-2}, we have
\begin{align*}
| v_j(x) |\leq \f {C_\ell}{ (1+n|x-s_j|)^{\ell}} \sum_{i=0}^{n-1} \f{ 1}{ (1+n|x-\cos \f {i\pi}n|)^{2\ell}}\leq \f {C_\ell}{ (1+n|x-s_j|)^{\ell}},
\end{align*}
where the last step uses~\eqref{4-1}.
This completes the proof.
\end{proof}
We are now in a position to prove Theorem~\ref{strips-0}.
\begin{proof}[Proof of Theorem~\ref{strips-0}]
Set \[M:=d\max_{1\leq i,j\leq d}\max_{\|x\|\leq b} |\p_i\p_jg(x)|+1.\]
For each $\bfi\in\Ld_n^d$, let $x_{\bfi}\in \Delta_{\bfi}$ be an arbitrarily fixed point in the cube $\Delta_{\bfi}$, and define
$$f_{\bfi}( x):=g( x_{\bfi})+\nabla g( x_{\bfi}) \cdot ( x- x_{\bfi})+\f M2 \| x- x_{\bfi}\|^2.$$
By Taylor's theorem, it is easily seen that for each $x\in [-b,b]^d$,
\begin{align}\label{5-8-aug}
f_{\bfi}(x) -M\|x-x_{\bfi}\|^2 \leq g(x) \leq f_{\bfi}(x).
\end{align}
Since $0<b<(2d)^{-1}$, this implies that for each $\bfi\in\Ld_n^d$,
$$ G \subset\Bl \{ ( x,y):\ \ x\in [-b,b]^{d},\ \ 0\leq f_{\bfi}( x)-y\leq M+1\Br\}.$$
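Indeed, for $(x, y)\in G$ we have $0\leq g( x)-y\leq 1$, and hence, by~\eqref{5-8-aug},
$$0\leq f_{\bfi}( x)-y=\Bl(f_{\bfi}( x)-g( x)\Br)+\Bl(g( x)-y\Br)\leq M\| x- x_{\bfi}\|^2+1\leq 4db^2M+1\leq M+1,$$
where the last step uses $\| x- x_{\bfi}\|\leq 2b\sqrt{d}$ and $4db^2\leq 1$.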
Recall that
$\{\al_j\}_{j=0}^N$
is a Chebyshev partition of $[\al_0, \al_N]=[0, 2\al]$ of order $ N=2\ell_1n$, $\al_n=1$, and, according to~\eqref{5-2-18}, $\al\ge 4M+1$. Thus,
\begin{align*}
G\subset \bigcup_{\bfi\in\Ld_n^d} \bigcup_{j=0}^{N-1} \Bl \{ ( x,y):\ \ x\in \Delta_{\bfi},\ \ \al_{j}\leq f_{\bfi}( x)-y\leq \al_{j+1}\Br\}.
\end{align*}
Next, using Lemma~\ref{chebyshev}, we obtain a sequence of polynomials $\{u_j\}_{j=0}^{N-1}$ of degree at most $4\ell_1 n$ on $[0, 2\al]$ such that $\sum_{j=0}^{N-1} u_j(t) =1$ for all $t\in [0, 2\al]$, and
\begin{equation}\label{key-5-6}
| u_j (t)| \leq \f {C_m}{ (1+n |\sqrt{t}-\sqrt{\al_j}|)^{4m}},\ \ t\in [0, 2M]\subset [0, \al].
\end{equation}
Similarly, using Lemma~\ref{uniform}, we may obtain a sequence of polynomials $\{v_{\mathbf{j}}\}_{\mathbf{j}\in\Ld_n^d} $ of total degree $\le n$ on the cube $[-b, b]^d$ such that $\sum_{\mathbf{j}\in \Ld_n^d}v_{\mathbf{j}}( x)= 1$ for all $ x\in [-b, b]^{d}$, and
\begin{equation}\label{key-5-7}
| v_{\mathbf{j}}( x)|\le \f{C_m} { (1+n\| x- x_{\mathbf{j}}\|)^{4m}},\ \ x\in [-b, b]^d.
\end{equation}
Define a sequence $\{q^\ast_{\bfi,j}: \ \ \bfi\in\Ld_n^d,\ \ 0\leq j\leq N-1\}$ of auxiliary polynomials as follows:
\begin{equation}\label{key-5-8-0}
q^\ast_{\bfi,j}( x,y):=u_j(f_{\bfi}( x)-y)v_{\bfi}( x).
\end{equation}
It is easily seen from~\eqref{key-5-6} and~\eqref{key-5-7} that
for each $( x, y)\in G$,
\begin{align}\label{5-7-chapter}
| q^\ast_{\bfi,j}(x, y)|\leq \f {C_m}{(1+n\|x- x_{\bfi}\|)^{4m} (1+n|\sqrt{f_{\bfi}({x})-y}-\sqrt{\al_j}|)^{4m}}.
\end{align}
We claim that for each $({x},y)\in G$,
\begin{align}
| q^\ast_{\bfi,j}(x, y)|&\leq
\f {C_m}{(1+n\|{x}- x_{\bfi}\|)^{2m} (1+n|\sqrt{g({x})-y}-\sqrt{\al_j}|)^{2m}}.\label{claim-4-5}
\end{align}
Note that~\eqref{claim-4-5} follows directly from~\eqref{5-7-chapter} if $ 6M\| x- x_{\bfi}\|> |\sqrt{g({x})-y}-\sqrt{\al_j}|$.
Thus, for the proof of~\eqref{claim-4-5},
it suffices to prove that the equivalence
\begin{equation}\label{key-5-11}
|\sqrt{f_{\bfi}({x})-y}-\sqrt{\al_j}|\sim |\sqrt{g({x})-y}-\sqrt{\al_j}|,
\end{equation}
holds
under the assumption
\begin{equation}\label{key-5-13-18} 6M\| x- x_{\bfi}\|\leq |\sqrt{g({x})-y}-\sqrt{\al_j}|. \end{equation}
Indeed, if $\sqrt{f_{\bfi}({x})-y}+ \sqrt{g({x})-y}\leq 2M\|{x}-{x}_{\bfi}\|$, then~\eqref{key-5-13-18} implies
$$\sqrt{\al_j}\ge 4M \|{x}-{x}_{\bfi}\|\ge 2 \max\{\sqrt{f_{\bfi}({x})-y}, \sqrt{g({x})-y}\},$$
and hence
$$|\sqrt{f_{\bfi}({x})-y}-\sqrt{\al_j}|\sim \sqrt{\al_j}\sim |\sqrt{g({x})-y}-\sqrt{\al_j}|.$$
On the other hand, if $\sqrt{f_{\bfi}({x})-y}+ \sqrt{g({x})-y}> 2M\|{x}-{x}_{\bfi}\|$, then by~\eqref{key-5-13-18} and~\eqref{5-8-aug}, we have
\begin{align*}
&\Bl|\sqrt{f_{\bfi}({x})-y}- \sqrt{g({x})-y}\Br|=\f{|f_{\bfi}({x})-g({x})|}{\sqrt{f_{\bfi}({x})-y}+ \sqrt{g({x})-y}}\\
&\leq \f {M\|{x}-{x}_{\bfi}\|^2}{2M\|{x}-{x}_{\bfi}\|}= \f12 \|{x}-{x}_{\bfi}\|\leq \f 1{12M}|\sqrt{g({x})-y}-\sqrt{\al_j}|,
\end{align*}
which in turn implies~\eqref{key-5-11}. This completes the proof of~\eqref{claim-4-5}.
Finally, we define for $\bfi\in\Ld_n^d$,
$$q_{\bfi,j}(x, y)=\begin{cases}
q^\ast_{\bfi,j}(x,y),\ \ \text{ if $0\leq j\leq n-2$},\\
\sum_{k=n-1}^{N-1} q^\ast_{\bfi,k}(x,y),\ \ \text{if $j=n-1$.}
\end{cases}$$
Clearly, each $q_{\bfi, j}$ is a polynomial of degree at most $Cn$.
Since for any $( x, y)\in G$ the polynomial $u_j$ in the definition~\eqref{key-5-8-0} is evaluated at the point $f_{\bfi}( x)-y$, which lies in the interval $[0, M+1]\subset [\al_0, \al_N]$, it follows that for any $( x, y)\in{G}$,
$$\sum_{\bfi\in\Ld_n^d} \sum_{j=0}^{n-1} q_{\bfi,j}(x,y)=\sum_{\bfi\in\Ld_n^d} \sum_{j=0}^{N-1} q^\ast_{\bfi,j}(x,y)=\sum_{\bfi\in\Ld_n^d} v_{\bfi} ( x) \sum_{j=0}^{N-1} u_{j} (f_{\bfi}( x) -y)=1.$$
To complete the proof, in view of~\eqref{claim-4-5}, it remains to estimate $q_{\bfi, j}$ for $j=n-1$.
Note that for $j\ge n$,
$$\sqrt{\al_j}-\sqrt{g({x})-y}\ge \sqrt{\al_n}-\sqrt{g({x})-y}\ge 0.$$
Thus, using~\eqref{claim-4-5}, and recalling that $m\ge 2$, we obtain that
\begin{align*}
|q_{\bfi,n-1}(x,y)|&\leq \f {C_m}{(1+n\|{x}- x_{\bfi}\|)^{2m} (1+n|\sqrt{g({x})-y}-\sqrt{\al_n}|)^{m}} \\
&\qquad\qquad\qquad \cdot
\sum_{j=n}^{N}\f 1{(1+n|\sqrt{g({x})-y}-
\sqrt{\al_j}|)^{m}}\\
&\leq \f {C_m}{(1+n\|{x}- x_{\bfi}\|)^{2m} (1+n|\sqrt{g({x})-y}-\sqrt{\al_n}|)^{m}}.
\end{align*}
This completes the proof.
\end{proof}
\subsection{Polynomial partitions of unity on general $C^2$-domains}
\label{unity:sec}
In this section, we shall extend Theorem~\ref{strips-0} to the $C^2$-domain $\Og$. We will use the metric $\rho_\Og$ defined by~\eqref{metric}.
Our goal is to show the following theorem:
\begin{thm}\label{polyPartition}Given any $m>1$ and any positive integer $n$, there exist a finite subset $\Ld$ of $\Og$ and a sequence $\{\vi_\og\}_{\og\in\Ld}$ of polynomials of degree at most $C(m) n$ on the domain $\Og$ satisfying
\begin{enumerate}[\rm (i)]
\item $ \rho_{\Og} (\og,\og') \ge \f 1n$ for any two distinct points $\og,\og'\in\Ld$;
\item for every $\xi\in \Og$, $\sum_{\og \in\Ld} \vi_\og (\xi)=1$ and
\item for any $\xi\in \Og$ and $\og\in\Ld$,
$$ |\vi_\og(\xi)| \leq C_m (1+n\rho_\Og(\xi,\og))^{-m}.$$
\end{enumerate}
\end{thm}
\begin{rem}\label{rem-6-2}
Recall that for $\xi\in\Og$ and $\da>0$, we defined $ U(\xi,\da)=\{\eta\in\Og:\ \ \rho_{\Og}(\xi,\eta)\leq \da\}$.
By \cite{Da-Pr-Bernstein}*{Corollary~3.3(i)}, we have
$$\Bl|U\Bl(\xi, \f 1n\Br)\Br|\sim \f 1{n^{d+1}} \Bl( \f 1n +\sqrt{\dist(\xi, \Ga)}\Br),\ \ \ \xi\in\Og.$$
\end{rem}
\begin{proof}[Proof of Theorem~\ref{polyPartition}]
For convenience, we say a subset $K\subset \Og$ admits a polynomial partition of unity of degree $Cn$ with parameter $m>1$ if there exist a finite subset $\Ld \subset \Og$ and a sequence $\{\vi_\og\}_{\og\in\Ld}$ of polynomials of degree at most $C n$ such that $\rho_\Og(\og,\og') \ge \f 1n$ for any two distinct points $\og,\og'\in\Ld$, $\sum_{\og \in\Ld} \vi_\og (x) =1$ for every $x\in K$ and
$ |\vi_\og (x)| \leq C (1+n\rho_{\Og} (x,\og))^{-m}$ for every $x\in K$ and $\og\in\Ld$, in which case $\{ \vi_\og\}_{\og\in\Ld}$ is called a polynomial partition of unity of degree $Cn$ on the set $K$.
According to Theorem~\ref{strips-0}, Remark~\ref{rem-5-2}, and Proposition~\ref{metric-lem}, if $G\subset \Og$ is a domain of special type attached to $\Ga$ or if $G=Q$ is a cube such that $4Q\subset \Og$, then for any $m>1$, $G$ admits a polynomial partition of unity of degree $Cn$ with parameter $m$.
Our proof relies on the decomposition in Lemma~\ref{LEM-4-2-18-0}.
Let $\{\Og_s\}_{s=1}^J$ be the sequence of subsets of $\Og$ given in Lemma~\ref{LEM-4-2-18-0}. For $1\leq j\leq J$, let $H_j =\bigcup_{s=1}^j \Og_s$. Assume that for some $1\leq j\leq J-1$, $H_j$ admits a polynomial partition $\{u_{\og_i}\}_{i=1}^{n_0}$ of unity of degree $Cn$ with parameter $m>1$. By induction and Lemma~\ref{LEM-4-2-18-0}, it suffices to show that $H_{j+1}$ also admits a polynomial partition of unity of degree $Cn$ with parameter $m>1$.
For simplicity, we write $H=H_j$ and $K=\Og_{j+1}$.
Without loss of generality, we may assume that $K=S_{G,\ld_0}$ with $\ld_0\in (\f12, 1)$ and $G\subset \Og$ a domain of special type attached to $\Ga$. The case when $K=Q$ is a cube such that $4Q\subset \Og$ can be treated similarly, and in fact, is simpler.
By Theorem~\ref{strips-0}, $G$ admits a polynomial partition $\{u_{\og_j}\}_{j=n_0+1}^{n_0+n_1}$ of unity of degree $Cn$ with parameter $m>1$.
Recall that $H\cap G$ contains an open ball of radius $\ga_0\in (0,1)$. Let $L>1$ be such that $\Og\subset B_L[0]$, and let
$\t:=\f {\ga_0}{20L} \in (0,1)$.
According to Lemma~\ref{lem-4-3}, there exists a polynomial $R_n$ of degree at most $Cn$ such that $0\leq R_n(\xi)\leq 1$ for $\xi\in B_L[0]$, $1-R_n(\xi)\leq \t^n$ for $\xi \in K$ and $R_n(\xi)\leq \t^n$ for $\xi\in \Og\setminus G$.
We now define
$$ w_j(\xi) =\begin{cases} u_{\og_j}(\xi) (1-R_n(\xi)), &\ \ \text{if $1\leq j\leq n_0$},\\
u_{\og_j}(\xi)R_n(\xi), &\ \ \text{if $n_0+1\leq j\leq n_0+n_1$}.\end{cases}$$
Clearly, each $w_j$ is a polynomial of degree at most $Cn$ on $\RR^{d+1}$.
Since polynomials are analytic functions and $H\cap G$ contains an open ball of radius $\ga_0$, it follows that
$$\sum_{j=1}^{n_0+n_1}
w_j(\xi) =R_n(\xi)+1-R_n(\xi)=1,\ \ \forall \xi\in \RR^{d+1}.$$
Next, we prove that for each $1\leq j\leq n_0+n_1$,
\begin{equation}\label{key-6-2-1}
| w_j(\xi)|\leq C (1+n\rho_{\Og}(\xi, \og_j))^{-m},\ \ \forall \xi\in H\cup K.
\end{equation}
Indeed, if $1\leq j\leq n_0$, then for $\xi\in H$,
$$|w_j(\xi)|\leq |u_{\og_j}(\xi)|\leq C (1+n\rho_{\Og}(\xi, \og_j))^{-m},
$$
whereas for $\xi\in K\subset G$,
\begin{align*}
| w_j(\xi)| &\leq \t^n \|u_{\og_j}\|_{L^\infty (B_L[0])}\leq C\t ^n \Bl( \f {10L}{\ga_0}\Br)^n\|u_{\og_j}\|_{L^\infty(H\cap G)}\\
&\leq C 2^{-n}\leq C_m (1+n\rho_{\Og}(\xi, \og_j))^{-m},
\end{align*}
where the second step uses Lemma~\ref{lem-4-1}.
Similarly, if $n_0<j\leq n_0+n_1$, then for $\xi\in G$,
$$| w_j(\xi)|\leq |u_{\og_j}(\xi)|\leq C_m (1+n\rho_{\Og}(\xi, \og_j))^{-m},
$$
whereas for $\xi\in H\setminus G$,
$$ | w_j(\xi)| \leq \t^n \|u_{\og_j}\|_{L^\infty (B_L[0])}\leq C\t ^n \Bl( \f {10L}{\ga_0}\Br)^n\leq C 2^{-n}\leq C_m (1+n\rho_{\Og}(\xi, \og_j))^{-m}.$$
Thus, in either case, we obtain the estimate~\eqref{key-6-2-1}.
Finally,
we write the set $A:=\{\og_1,\dots, \og_{n_0+n_1}\}$ as a disjoint union
$A=\bigcup_{\og\in\Ld} I_{\og}$, where $\Ld$ is a subset of $A$ satisfying that
$\min_{\og\neq \og'\in\Ld} \rho_{\Og}(\og,\og')\ge \f 1n$, and
$ I_\og\subset \{\og'\in A:\ \ \rho_{\Og} (\og, \og') \leq \f1n\}$ for each $\og\in\Ld$. We then define
$$\vi_\og(\xi): =\sum_{j:\ \ \og_j\in I_\og} w_j(\xi),\ \ \xi \in H\cup G,\ \ \og\in\Ld,$$
where $1\leq j\leq n_0+n_1$.
Clearly, each $\vi_\og$ is a polynomial of degree at most $C n$ and
$$\sum_{\og\in\Ld} \vi_{\og} (\xi) =\sum_{j=1}^{n_0+n_1} w_j(\xi)=1,\ \ \ \forall \xi\in H\cup G.$$
On the other hand, we recall that
$\rho_{\Og}(\og_i, \og_j) \ge \f 1n$ if $1\leq i\neq j\leq n_0$ or $n_0+1\leq i\neq j\leq n_0+n_1$.
Thus, by the standard volume estimates and Remark~\ref{rem-6-2} (or directly by~\cite{Da-Pr-Bernstein}*{Corollary~3.3(iii)})
we have that $\# I_\og \leq C(\Og, m)$ for each $\og \in \Ld$, where $\# I$ denotes the cardinality of a set $I$. It then follows from~\eqref{key-6-2-1} that
$$|\vi_\og (\xi) |\leq C (1+n\rho_\Og (\xi,\og))^{-m},\ \ \xi\in H\cup G,\ \ \og \in\Ld.$$
Thus, we have shown that the set $H\cup K$ admits a polynomial partition of unity of degree $Cn$ with parameter $m$, completing the induction.
\end{proof}
\begin{rem}
The above proof implies $\#\Lambda=O(n^{d+1})$, since each of the finitely many admissible pieces in the construction contributes at most $Cn^{d+1}$ polynomials; recall that $\Omega\subset\RR^{d+1}$.
\end{rem}
\section{Geometric reduction near the boundary}\label{sec:reduction}
Our main goal in this section is to show that the Jackson inequality in Theorem~\ref{Jackson-thm} can be deduced from the following Jackson-type estimates on domains of special type.
\begin{thm}\label{THM-4-1-18}
If $0<p\leq \infty$, and $G\subset \Og$ is an upward or downward $x_j$-domain attached to $\Ga$ for some $1\leq j\leq d+1$, then
\begin{equation}\label{Jackson:special}
E_n (f)_{L^p (G)}\leq C \Bl[\wt{\og}_{G}^r \Bl(f, \f \tau n\Br)_p+ \og_{\Og,\vi}^r \Bl(f, \f 1n; e_j\Br)_{p}\Br],
\end{equation}
where the constants $C, \tau>1$ are independent of $f$ and $n$.
\end{thm}
The proof of Theorem~\ref{THM-4-1-18} will be given in Section~\ref{Sec:8}. In this section, we will show how Theorem~\ref{Jackson-thm} can be deduced from Theorem~\ref{THM-4-1-18}. The idea of our proof is close to that in \cite[Chapter 7]{To17}.
\subsection{Lemmas and geometric reduction}
We need a series of lemmas, the first of which gives a well known Jackson type estimate on a rectangular box.
\begin{lem}\label{lem-6-1-0}\cite[Lemma 2.1]{Di96}
Let $B$ be a compact rectangular box in $\RR^{d+1}$. Assume that $f\in L^p(B)$ if $0<p< \infty$ and $f\in C(B)$ if $p=\infty$.
Then for $0<p\leq \infty$,
$$\inf_{P\in \Pi_n^{d+1}} \|f-P\|_{L^p(B)} \leq C \max_{1\leq j\leq d+1}\og_{B,\vi}^r \Bl(f, \f 1n; e_j\Br)_{p},$$
where $C$ is independent of $f$ and $B$.
\end{lem}
Our second lemma is a simple observation on domains of special type.
Recall that unless otherwise stated we always assume that the parameter $L$
of a domain of special type satisfies the condition~\eqref{parameter-2-9}.
\begin{lem}\label{lem-6-2:Dec} Let $G\subset \Og$ be an (upward or downward) $x_j$-domain of special type attached to $\Ga$ for some $1\leq j\leq d+1$. Then for each parameter $\mu\in (\f12, 1]$, there exists an open rectangular box $Q_\mu$ in $\RR^{d+1}$ such that
\begin{equation}\label{eqn:decomp} \p' G(\mu) \subset S_{ G,\mu}:=Q_\mu\cap \Og \subset G(\mu)\ \ \text{and} \ \ \overline{Q_\mu}\subset Q_1\ \ \text{provided $\mu<1$}. \ \end{equation}
\end{lem}
\begin{proof}
Without loss of generality, we may assume that $G$ is given in~\eqref{2-7-special} with $\xi=0$. Let
$g_{\max}:=\max_{x\in [-b, b]^d} g(x)$ and $g_{\min} :=\min_{x\in [-b, b]^d} g(x)$.
Using~\eqref{parameter-2-9}, we have
$$g_{\max}-g_{\min} \leq 2 \sqrt{d} b \max_{x\in [-b, b]^d}\|\nabla g(x)\|\leq \f 12 Lb.$$
Thus, given each parameter $\mu\in (\f 12, 1]$, we may find a constant $a_{1,\mu}$ such that
$$ g_{\max} -\mu L b < a_{1,\mu} <g_{\min}.$$
We may choose the constant $a_{1,\mu}$ in such a way that $a_{1,1}<a_{1,\mu}$ if $\mu<1$.
On the other hand, since $G$ is attached to $\Ga$, we may find an open rectangular box $Q$ of the form $(-2b, 2b)^d\times (a_1, a_2)$ such that $G^\ast =Q\cap \Og$, where $a_1, a_2$ are two constants and $a_2>g_{\max}$.
Let $a_{2, 1} =a_2$ and let $a_{2,\mu}$ be a constant so that $g_{\max} < a_{2,\mu} <a_2$ for $\mu\in (\f12, 1)$. Now setting
\begin{equation}\label{6-3-Dec}
Q_\mu :=(-\mu b, \mu b)^d\times (a_{1,\mu}, a_{2,\mu})\ \ \ \text{and}\ \ S_{G,\mu}: = Q_\mu \cap \Og,
\end{equation}
we obtain~\eqref{eqn:decomp}.
\end{proof}
\begin{rem}
Note that~\eqref{eqn:decomp} implies that $\proj_j (Q_\mu) = \proj_j (G(\mu))$ for $\mu\in (\f 12, 1]$, where $\proj_j$ denotes the orthogonal projection onto the coordinate hyperplane $x_j=0$.
\end{rem}
Now let $G_1,\dots, G_{m_0}\subset \Og$ be the domains of special type in Lemma~\ref{lem-2-1-18}.
Note that for every domain $G$ of special type, its essential boundary can be expressed as
$\p ' G=\bigcup_{n=1}^\infty \p' G (1-n^{-1})$.
Since $\Ga$ is compact and each $\p' G_j$ is open relative to the topology of $\Ga$, there exists $\ld_0\in (\f 12, 1)$ such that $\Ga =\bigcup_{j=1}^{m_0} \p' G_j (\ld_0)$.
For convenience, we call $S\subset \Og$ an admissible subset of $\Og$ if either $S=S_{G_j, \ld_0}$ for some $1\leq j\leq m_0$ or $S$ is an open cube in $\RR^{d+1}$ such that $4S\subset \Og$.
Our third lemma gives a useful decomposition of the domain $\Og$.
\begin{lem}\label{LEM-4-2-18-0} There exists a sequence $\{\Og_s\}_{s=1}^J$ of admissible subsets of $\Og$ such that $\Og=\bigcup_{j=1}^J \Og_j,$ and
$\Og_s \cap \Og_{s+1}$ contains an open ball of radius $\ga_0>0$ in $\RR^{d+1}$ for each $s=1,\dots, J-1$, where
the parameters $J$ and $\ga_0$ depend only on the domain $\Og$.
\end{lem}
To state the fourth lemma, let $\{\Og_s\}_{s=1}^{J}$ be the sequence of sets in Lemma~\ref{LEM-4-2-18-0}, and let $H_m:=\bigcup_{j=1}^m \Og_j$ for $m=1,\dots, J$.
For $1\leq j\leq J$, define $\wh{\Og}_{j}=G_i$ if $\Og_j=S_{G_i, \ld_0}$ for some $1\leq i\leq m_0$; and $\wh{\Og}_j =2Q$ if $\Og_j$ is an open cube $Q$ such that $4 Q\subset \Og$.
\begin{lem}\label{REDUCTION}
If $0<p\leq \infty$ and $1\leq j<J$, then there exist constants $c_0,C>1$ depending only on $p$ and $\Og$ such that
$$ E_{c_0n} (f)_{L^p(H_{j+1})} \leq C \max \Bl\{ E_n (f)_{L^p(\wh{\Og}_{j+1})}, \ E_n (f)_{L^p (H_j)}\Br\}. $$
\end{lem}
Now we take Theorem~\ref{THM-4-1-18}, Lemma~\ref{LEM-4-2-18-0} and Lemma~\ref{REDUCTION} for granted and proceed with the proof of Theorem~\ref{Jackson-thm}.
\begin{proof}[Proof of Theorem~\ref{Jackson-thm}]
Applying Lemma~\ref{REDUCTION} $J-1$ times and recalling $H_J =\Og$, we obtain
\begin{equation}\label{6-3-0}E_{c_1n} (f)_{L^p(\Og)} \leq C \max_{1\leq j\leq J} E_n (f)_{L^p (\wh{\Og}_j)},\end{equation}
where $C, c_1>1$ depend only on $p$ and $\Og$.
If ${\Og}_j =S_{G_i,\ld_0}$ for some $1\leq i\leq m_0$, then $\wh{\Og}_j =G_i$, and by Theorem~\ref{THM-4-1-18},
$$E_n (f)_{L^p(\wh{\Og}_j)} \leq \max_{1\leq i\leq m_0} E_n (f)_{L^p(G_i)}\leq C \og^r_\Og (f, n^{-1})_p.$$
If $\Og_j=Q$ is a cube such that $4Q\subset \Og$, then $\wh{\Og}_j =2Q$ and by Lemma~\ref{lem-6-1-0},
$$E_n (f)_{L^p(2Q)}\leq C \max_{1\leq j\leq d+1}\og^r_{2Q,\vi} (f, n^{-1}; e_j)_p\leq C \og_{\Og,\vi}^r (f, n^{-1})_p,$$
where the last step uses the fact that for any $S\subset \Og$,
\begin{equation}\label{4-3-0-18}
\max_{1\leq j\leq d+1}\og^r_{S,\vi} (f, t; e_j)_{p}\leq C \og_{\Og, \vi} ^r (f, t)_p.
\end{equation}
Thus, in either case, we have
$$ E_n(f)_{L^p(\wh{\Og}_j)}\leq C \og_{\Og}^r (f, n^{-1})_p.$$
Theorem~\ref{Jackson-thm} then follows from the estimate~\eqref{6-3-0}.
\end{proof}
To complete the reduction argument in this section, it remains to prove Lemma~\ref{LEM-4-2-18-0} and Lemma~\ref{REDUCTION}.
\subsection{Proof of Lemma~\ref{LEM-4-2-18-0}}
The proof of Lemma~\ref{LEM-4-2-18-0} is inspired by~\cite[p.~17]{To14} but written in somewhat different language.
Let $S_j =S_{G_j, \ld_0}$ for $1\leq j\leq m_0$. Note that $S_j$ is an open neighborhood of $\p' G_j(\ld_0)$ relative to the topology of $\Og$.
Since $\p' G_j(\ld_0) \subset S_j\subset \Og$, and $S_j$ is open relative to the topology of $\Og$ for each $1\leq j\leq m_0$, there exists $\va\in (0,r_0)$ such that $$ \Ga_\va:=\{ \xi\in\Og:\ \ \dist (\xi,\Ga)<16\sqrt{d+1}\va\}\subset \bigcup_{j=1}^{m_0} S_j.$$
Let us cover the remaining set $\Og\setminus \Ga_{\va}$ by finitely many open cubes $Q_j$, $j=m_0~+~1,\dots, M_0$ of side length $\va$ such that $4 Q_j\subset \Og$ for each $j$.
Thus, setting $E_j=S_j$ for $1\leq j\leq m_0$, and $E_j=Q_j$ for $m_0<j\leq M_0$,
we have
$ \Og=\bigcup_{j=1}^{M_0} E_j.$
First, note that if $E_j\cap E_{j'}\neq \emptyset$ for some $1\leq j, j'\leq M_0$, then $E_j\cap E_{j'}$ must contain a nonempty open ball in $\RR^{d+1}$. Indeed,
since $E_j\cap E_{j'}$ is open relative to the topology of $\Og$, there exists an open set $V$ in $\RR^{d+1}$ such that $V\cap \Og=E_j\cap E_{j'}\neq \emptyset$.
Since $\Og$ is the closure of an open set in $\RR^{d+1}$, the set $V\cap \Og$ must contain an interior point of $\Og$.
Next, we set $\mathcal{A}=\{E_1,\dots, E_{M_0}\}$. We say two sets $A, B$ from the collection $\mathcal{A}$ are connected with each other if there exists a sequence of distinct sets $A_1, \dots, A_n$ from the collection $\mathcal{A}$ such that $A_1=A$, $A_n=B$ and $A_i\cap A_{i+1}\neq \emptyset$ for $i=1,\dots, n-1$, in which case we write $[A:B]=\bigcup_{j=1}^n A_j$ and $(A: B)=\bigcup_{j=2}^{n-1} A_j$.
We claim that every set in the collection $\mathcal{A}$ is connected with the set $E_1$. Once this claim is proved, then Lemma~\ref{LEM-4-2-18-0} will follow since
\begin{align*}\label{connected graph}
\Og=\bigcup_{j=1}^{M_0} E_j = [E_1: E_2] \cup (E_2: E_1)\cup[E_1: E_3]\cup (E_3:E_1)\cup \dots \cup [E_1: E_{M_0}].
\end{align*}
To show the claim,
let $\mathcal{B}$ denote the collection of all sets $E_j$ from the collection $\mathcal{A}$ that are connected with $E_1$.
Assume that $\mathcal{A}\neq \mathcal{B}$. We obtain a contradiction as follows. Let $H:=\bigcup_{E\in\mathcal{B}} E$. Then a set $E$ from the collection $\mathcal{A}$ is connected with $E_1$ (i.e., $E\in\mathcal{B}$) if and only if $E\cap H\neq \emptyset$.
Since $\mathcal{A}\neq \mathcal{B}$, there exists $E\in\mathcal{A}$ such that $E\cap H=\emptyset$, which
in particular, implies that $H$ is a proper subset of $\Og=\bigcup_{A\in \mathcal{A}} A$.
Since $\Og$ is a connected subset of $\RR^{d+1}$, $H$ must have nonempty boundary relative to the topology of $\Og$. Let $x_0$ be a boundary point of $H$ relative to the topology of $\Og$. Since $H$ is open relative to $\Og$, $x_0\in \Og\setminus H=\bigcup_{A\in\mathcal{A}} A \setminus H$. Let $A_0\in\mathcal{A}$ be such that $x_0\in A_0$. Then $A_0$ is an open neighborhood of $x_0$ relative to the topology of $\Og$, and hence $A_0\cap H\neq \emptyset$, which in turn implies $A_0\in \mathcal{B}$ and $A_0\subset H$. But this is impossible as $x_0\notin H$.
\subsection{Proof of Lemma~\ref{REDUCTION}}
We now turn to the proof of Lemma~\ref{REDUCTION}. The proof relies on three additional lemmas.
The first one is similar to~\cite[Lemma~14.3]{To14}; however, we could not follow the conclusion of its proof in~\cite{To14}, where an averaging argument appears to be missing. Our proof below uses a multivariate Nikol'skii inequality, which simplifies the transition to the multivariate case.
\begin{lem}\label{lem-4-1}
If $B$ is a ball in $\RR^{d+1}$ and $\ld>1$, then for each $P\in\Pi_n^{d+1}$ and $0<q\leq \infty$,
\begin{equation}\label{eq1}
\|P\|_{L^q (\ld B)} \leq C_{d,q}(5 \ld)^{n+\f {d+1}q} \|P\|_{L^q (B)}.
\end{equation}
\end{lem}
\begin{proof} By dilation and translation, we may assume that $B=B_1[0]$. The estimate~\eqref{eq1} with the explicit constant $(4\ld)^n$ was proved in~\cite[Lemma~4.2]{To14} for $q=\infty$.
For $q<\infty$, we have
\begin{align*}
\|P\|_{L^q (\ld B)} &\leq C_d \ld^{\f{d+1}q} \|P\|_{L^\infty (\ld B)}\leq C_d \ld^{\f {d+1}q} (4\ld)^n \|P\|_{L^\infty (B)}\\
&\leq C_{d,q} \ld^{\f {d+1}q} (4\ld)^n n^{\f {d+1}q} \|P\|_{L^q (B)}
\leq C_{d,q} (5\ld)^{n+\f {d+1}q} \|P\|_{L^q (B)},
\end{align*}
where we used H\"older's inequality in the first step,~\eqref{eq1} for the already proven case $q=\infty$ in the second step, and Nikolskii's inequality for algebraic polynomials on the unit ball (see~\cite{Da06} or \cite[Section~7]{Di-Pr16}) in the third step.
\end{proof}
The second lemma is probably well known. It can be proved in the same way as in~\cite[Lemma~4.3]{To14}.
\begin{lem}\label{lem-4-2} Let $I$ be a parallelepiped in $\RR^d$. Then given parameters $R>1$ and $\theta,\mu\in (0,1)$, there exists a polynomial $P_n$ of degree at most $ C(\theta, \mu, R, d) n$ such that $ 0\leq P_n(\xi)\leq 1$ for $ \xi\in B_R[0]$,
$ 1-P_n(\xi) \leq \theta^n$ for $\xi\in \mu I$, and
$P_n(\xi)\leq \theta^n$ for $\xi\in B_R[0]\setminus I$,
where $\mu I$ denotes the dilation of $I$ from its center by a factor $\mu$.
\end{lem}
As a consequence of Lemma~\ref{lem-4-2}, we have
\begin{lem}\label{lem-4-3} Let $G\subset \Og$ be a domain of special type attached to $\Ga$, and $S_{G, \mu}:=\Og \cap Q_\mu$ be as defined in Lemma~\ref{lem-6-2:Dec} with $\mu\in (\f 12, 1]$. Let $R\ge 1$ be such that $Q_1 \cup \Og \subset B_R[0]$.
Then given $\ld\in (\f 12, 1)$ and $\t \in (0,1)$, there exists a polynomial $P_n$ of degree at most $C(d, \t, R, G, \ld)
n$ with the properties that $0\leq P_n(\xi)\leq 1$ for $\xi\in B_R[0]$, $1-P_n(\xi)\leq \t^n$ for $\xi \in S_{G,\ld}$ and $P_n(\xi)\leq \t^n$ for $\xi\in \Og\setminus S_{G,1}$. \end{lem}
\begin{proof}
Since $\ld<1$ and $Q_\ld$ is an open rectangular box such that $\overline{Q_\ld}\subset Q_1$, it follows by Lemma~\ref{lem-4-2} that there exists a polynomial $P_n$ of degree at most $Cn$ such that
$0\leq P_n(\xi)\leq 1$ for all $\xi\in B_{R} [0]$, $1-P_n(\xi) \leq \ta^n$ for all $\xi\in Q_\ld$ and $P_n(\xi) \leq \ta^n$ for all $\xi\in B_R[0]\setminus Q_1$.
To complete the proof, we just need to observe that
$$ \Og \setminus S_{G,1} =\Omega \setminus (Q_1 \cap \Og)=\Omega\setminus Q_1\subset B_R[0]\setminus Q_1.$$
\end{proof}
We are now in a position to prove Lemma~\ref{REDUCTION}.
\begin{proof}[Proof of Lemma~\ref{REDUCTION}]
The proof is essentially a repetition of that of~\cite[Lemma~4.1]{To14} or~\cite[Lemma~3.3]{To17} for our situation. Let $R>1$ be such that $\Og\subset B_R[0]$, and set $\theta:=\min\{\frac{\ga_0}{5R}, \f12\}$. Write $H=H_j$ and $S=\Og_{j+1}$.
Without loss of generality, we may assume that $S=S_{G,\ld_0}$ for some domain $G$ of special type attached to $\Ga$. (The case when $S$ is a cube $Q$ such that $4Q\subset \Og$ can be proved similarly using Lemma~\ref{lem-4-2} instead of Lemma~\ref{lem-4-3}). Then $S_{G,\ld_0}\cap H$ contains a ball $B$ of radius $\ga_0$.
By Lemma~\ref{lem-4-3}, there exists a polynomial $R_n$ of degree $\leq C(d, R, G)n$ such that
$0\leq R_n(x)\leq 1$ for all $x\in B_{R}[0]$, $R_n(x)\leq \theta^{n}$ for $x\in \Og\setminus S_{G,1}$ and $1-R_n(x)\leq \theta^{n}$ for $x\in S_{G,\ld_0}$.
Let
$P_1, P_2\in\Pi_n^{d+1}$ be such that
$$ E_n(f)_{L^p(S_{G,1})} =\|f-P_1\|_{L^p(S_{G,1})}\ \ \text{and}\ \ \ E_n(f)_{L^p(H)} =\|f-P_2\|_{L^p(H)}.$$
Define
$$ P(x):=R_n(x) P_1(x) +(1-R_n(x)) P_2(x)\in \Pi_{cn}^{d+1}.$$
Then
\begin{align*}
E_{cn} (f)_{L^p(H_{j+1})}&\leq
\|f-P\|_{L^p(H\cup S_{G,\ld_0})}\\
& \leq C_p\Bl(\|f-P\|_{L^p(H\cap S_{G,1})}+\|f-P\|_{L^p(H\setminus S_{G,1})}+\|f-P\|_{L^p(S_{G,\ld_0})}\Br).
\end{align*}
First, we can estimate the term $\|f-P\|_{L^p(H\cap S_{G,1})}$ as follows:
\begin{align*}
\|f-P\|_{L^p(H\cap S_{G,1})}&=\|R_n (f-P_1) +(1-R_n) (f-P_2)\|_{L^p(H\cap S_{G,1})}\\
&\leq C_p \max\Bl\{\|f-P_1\|_{L^p(S_{G,1})}, \ \|f-P_2\|_{L^p(H)}\Br\}\\
&\leq C_p \max \Bl\{ E_n (f)_{L^p(S_{G,1})}, E_n(f)_{L^p(H)} \Br\}.
\end{align*}
Second, we show
\begin{align}\label{6-7-00}
\|f-P\|_{L^p(H\setminus S_{G,1})}\leq & C_{p,\ga_0,R} \max \Bl\{ E_n (f)_{L^p(S_{G,1})}, E_n(f)_{L^p(H)} \Br\}.
\end{align}
Indeed, we have
\begin{align}
\|f-P\|_{L^p(H\setminus S_{G,1})}&=\|( f-P_2) + R_n (P_2-P_1)\|_{L^p(H\setminus S_{G,1})}\notag\\
&\leq C_p E_n(f)_{L^p(H)}+ C_p \theta^{n} \|P_1-P_2\|_{L^p(\Og)}.\label{second}\end{align}
However, by Lemma~\ref{lem-4-1},
\begin{align}
\|P_1-P_2\|_{L^p (\Og)}& \leq \|P_1-P_2\|_{L^p (B_{R}[0])}\leq C\Bl( \f {5 R}{\ga_0}\Br)^{n+\f {d+1}p} \|P_1-P_2\|_{L^p(B)}\notag\\
&\leq C\Bl( \f {5 R}{\ga_0}\Br)^{n+\f {d+1}p} \|P_1-P_2\|_{L^p(H\cap S_{G,1})}\notag\\
&\leq C(R, d, \ga_0,p) \theta^{-n} \max\Bl\{E_n(f)_{L^p(S_{G,1})}, \ E_n(f)_{L^p(H)}\Br\}.\label{3-2-eq}
\end{align}
Thus, combining~\eqref{second} with~\eqref{3-2-eq}, we obtain~\eqref{6-7-00}.
Finally, we estimate the term $\|f-P\|_{L^p(S_{G,\ld_0})}$ as follows:
\begin{align*}
\|f-P\|_{L^p(S_{G,\ld_0})} & =\|f-P_1 +(1-R_n) (P_1-P_2)\|_{L^p(S_{G,\ld_0})} \\
&\leq C_p \|f-P_1\|_{L^p(S_{G,1})} + C_p \theta^{n} \|P_1-P_2\|_{L^p (\Og)}\\
&\leq C_{p,\ga_0,R}\max\Bl\{E_n(f)_{L^p(S_{G,1})}, \ E_n(f)_{L^p(H)}\Br\},
\end{align*}
where the last step uses~\eqref{3-2-eq}.
Now putting the above estimates together, and noticing $S_{G,1} \subset G=\wh{\Og}_{j+1}$, we complete the proof of Lemma~\ref{REDUCTION}.
\end{proof}
\section{The direct Jackson theorem}\label{ch:direct}
\subsection{Jackson inequality on domains of special type}\label{Sec:8}
We will first prove the Jackson inequality, Theorem~\ref{THM-4-1-18}, on a domain $G$ of special type that is attached to $\Ga=\p \Og$. Without loss of generality,
we may assume that
\begin{align}\label{standard}
G:=\{ (x, y):\ \ x\in (-b,b)^{d},\ \ g(x)-1\le y\leq g(x)\},
\end{align}
where $b\in (0,(2d)^{-1})$ is the base size of $G$, and $g$ is a $C^2$-function on $\RR^d$ satisfying $\min_{x\in [-4b, 4b]^d} g(x)\ge 4$. We may choose the base size $b$ to be sufficiently small so that
\begin{equation}\label{8-1-18}
\max_{x\in [-4b, 4b]^d} \|\nabla g(x)\|\leq \f 1{200db}.
\end{equation}
We first recall some notation from Section~\ref{sec:5} and Section~\ref{modulus:def}.
Given $n\in\NN$, the partition $\{\Delta_{\bfi}\}_{\bfi\in\Ld_n^d}$ of the cube $[-b,b]^d$ is defined by
\begin{equation}\label{partition-a}\Delta_{\bfi}:=[t_{i_1}, t_{i_1+1}]\times \dots \times [t_{i_{d}}, t_{i_{d}+1}] \ \ \ \text{with}\ \ t_{i}=\Bl(-1+\f {2i}n\Br)b,
\end{equation}
where
$\Ld_n^d:=\{ 0, 1,\dots, n-1\}^d\subset \ZZ^d$ is the index set. For simplicity, we also set $t_i=-b$ for $i<0$, and $t_i =b$ for $i>n$, and therefore, $\Delta_{\ib}$ is defined for all $\ib\in\ZZ^d$.
Next, the sequence,
\begin{equation}\label{8-3-0-18}
\al_j:=2\al \sin^2 \Bl(\f {j\pi}{2N}\Br),\ \ j=0,1,\dots, N:=2\ell_1 n,
\end{equation}
forms a Chebyshev partition of the interval $[0, 2\al]$, where
$\al:= 1/(2\sin^2\f \pi{4\ell_1})$, and $\ell_1$ is a fixed large positive integer for which~\eqref{5-2-18} is satisfied.
Note that $\al_n=1$, and
\begin{align}\label{8-4-18-0}
\f {2j\al} {N^2} \leq \al_j-\al_{j-1} \leq \f { \pi^2 j\al} {N^2},\ \ j=1,\dots, N/2.
\end{align}
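Both bounds in~\eqref{8-4-18-0} follow from the product formula
\begin{align*}
\al_j-\al_{j-1}=2\al\Bl(\sin^2\f {j\pi}{2N}-\sin^2\f {(j-1)\pi}{2N}\Br)=2\al \sin\f {(2j-1)\pi}{2N}\,\sin\f {\pi}{2N},
\end{align*}
combined with the elementary estimates $\f 2\pi x\leq \sin x\leq x$ for $x\in [0,\f \pi2]$.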
Finally, a partition of the domain $G$ is defined as
\begin{align*}
G&=\Bl\{(x,y):\ \ x\in [-b,b]^d,\ \ g(x)-y\in [0,1]\Br\} =\bigcup_{(\bfi,j)\in\Ld_n^{d+1}} I_{\mathbf{i},j},
\end{align*}
where
$$I_{\mathbf{i},j}:=\Bl\{ (x, y):\ \ x\in \Delta_{\bfi},\ \ g(x)-y\in [\al_{j}, \al_{j+1}]\Br\}.$$
Next, we introduce a few new notations for this section. Without loss of generality, we assume that $n\ge 50$. Let $10\leq m_0, m_1\leq n/5$ be two fixed large integer parameters satisfying
\begin{equation}\label{8-5-18}
m_1\ge \f {32\ell_1^2 m_0^2 b^2}{\al} \|\nabla^2 g\|_{L^\infty ([-b, b]^d)}.
\end{equation}
We define, for $\bfi\in \Ld_n^d$,
$$\Delta_{\bfi}^\ast =[t_{i_1-m_0}, t_{i_1+m_0}]\times [t_{i_2-m_0}, t_{i_2+m_0}]\times \dots\times [t_{i_{d}-m_0}, t_{i_{d}+m_0}],$$
and for $(\ib, j) \in\Ld_n^{d+1}$,
$$I_{\bfi,j}^\ast:=\Bl\{ (x, y):\ \ x\in \Delta_{\bfi}^\ast,\ \ \al^\ast_{j-m_1}\leq g(x)-y\leq \al^\ast_{j+m_1}\Br\},$$
where
$\al_j^\ast =\al_j$ if $0\leq j\leq n$, $\al_j^\ast =0$ if $j<0$ and $\al_j^\ast =1$ if $j>n$.
Let $x_{\bfi}^\ast$ be
an arbitrarily given point in the set $ \Delta_{\bfi}^\ast$. Denote by $\zeta_{k}(x_{\bfi}^\ast)$ the unit tangent vector to the boundary $\Ga$ at the point
$({x}^\ast_{\bfi}, g({x}^\ast_{\bfi}) )$ that is parallel to the $x_kx_{d+1}$-plane and satisfies $\zeta_{k} (x_{\bfi}^\ast)\cdot e_k>0$ for $k=1,\dots, d$; that is,
$\zeta_{ k} (x_{\bfi}^\ast):= \f { e_k + \p_k g( x_{\bfi}^\ast) e_{d+1} }{\sqrt{1+|\p_k g( x_{\bfi}^\ast)|^2}}.$
Set
$$\mathcal{E}(x_{\bfi}^\ast):=\{\zeta_{1} (x_{\bfi}^\ast), \dots, \zeta_{d} (x_{\bfi}^\ast)\},\ \ \ib\in\Ld_n^d.$$
By Taylor's theorem, we have
\begin{equation}\label{8-9-18}
\Bl|g(x) -H_{\bfi}(x) \Br| \leq M_0 n^{-2},\ \ \forall x\in \Delta_{\bfi}^\ast,
\end{equation}
where $$H_{\bfi} (x):=g(x_{\bfi}^\ast)+ \nabla g(x_{\bfi}^\ast)\cdot (x-x_{\bfi}^\ast),\ \ x\in\RR^d,$$
and
$M_0:=8 m_0^2 b^2 \|\nabla^2 g\|_{L^\infty( [-b,b]^d)}+C_d A_0.$
Here we recall that $A_0$ is the parameter in~\eqref{modulus-special}.
Thus, setting
\begin{align}\label{8-7-1-18}
S_{\bfi,j}:=\Bl\{ (x,y): x\in\Delta_{\bfi}^\ast, H_{\bfi} (x) -\al^\ast_{j+m_1} +\f {M_0} {n^2}\leq y\leq H_{\bfi}(x) -\al^\ast_{j-m_1} -\f {M_0} {n^2}\Br\}
\end{align}
and
\begin{equation}\label{8-8-1}
S_{\bfi, j}^\ast:=\Bl\{ (x,y):\ \ x\in\Delta_{\bfi}^\ast,\ \ H_{\bfi} (x) -\al^\ast_{j+m_1} -\f {M_0} {n^2}\leq y\leq H_{\bfi}(x) -\al^\ast_{j-m_1} +\f {M_0} {n^2}\Br\},
\end{equation}
we have
\begin{equation}\label{8-7-0}
S_{\bfi,j} \subset I_{\bfi,j}^\ast \subset S_{\bfi,j}^\ast,\ \ (\bfi, j)\in\Ld_n^{d+1}.
\end{equation}
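Indeed, both inclusions in~\eqref{8-7-0} follow directly from~\eqref{8-9-18}: for instance, if $(x,y)\in S_{\bfi,j}$, then
\begin{align*}
g(x)-y\leq \Bl(g(x)-H_{\bfi}(x)\Br)+\al^\ast_{j+m_1}-\f {M_0}{n^2}\leq \al^\ast_{j+m_1},
\end{align*}
and similarly $g(x)-y\ge \al^\ast_{j-m_1}$, so that $(x,y)\in I_{\bfi,j}^\ast$.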
On the other hand, it is easily seen from~\eqref{8-3-0-18},~\eqref{8-5-18} and~\eqref{8-4-18-0} that $S_{\bfi,j}\neq \emptyset$ and
\begin{equation} \al^\ast_{j+m_1} -\al^\ast_{j-m_1} -\f {2M_0}{n^2} \sim \f {j+M_0}{n^2}.\end{equation}
Thus, $S_{\bfi, j}$ and $S_{\bfi,j}^\ast$ are two nonempty compact parallelepipeds with the same set $\mathcal{E}(x^\ast_{\bfi})\cup \{e_{d+1}\}$ of edge directions
and comparable side lengths.
With the above notations, we introduce the following local modulus of smoothness on $G$:
\begin{defn}\label{def-8-1} For $0<p\leq \infty$, define the local modulus of smoothness of order $r$ of $f\in L^p( G)$ by
$$ \og_{\text{loc}}^r (f, n^{-1})_{L^p( G)}:=
\Bl[\sum_{(\bfi,j)\in\Ld_n^{d+1}} \Bl(\og^r (f, I_{\bfi,j}^\ast; e_{d+1})_p ^p+\og^r (f, S_{\bfi,j}; \mathcal{E}(x^\ast_{\bfi}))_p ^p\Br) \Br]^{1/p},$$
with the usual change of the $\ell^p$-norm over the set $(\bfi, j)\in\Ld_n^{d+1}$ for $p=\infty$.
\end{defn}
In this section, we shall prove the following Jackson type estimate for the above local modulus of smoothness, from which Theorem~\ref{THM-4-1-18} will follow.
\begin{thm}\label{THM-WT-OMEGA} For $0<p\leq \infty$, and $f\in L^p(G)$,
\[
E_{n} (f)_{L^p(G)} \leq C \omega_{\text{loc}}^r(f, n^{-1})_{L^p( G)},
\]
where the constant $C$ is independent of $f$ and $n$.
\end{thm}
\begin{rem}\label{rem:loc mod pnt choice}
Note that $\omega_{\text{loc}}^r(f, n^{-1})_{L^p( G)}$ depends on the choice of $x_{\bfi}^\ast$, which is an arbitrary point in $\Delta_{\bfi}^\ast$. It follows from the proof that the constant $C$ in Theorem~\ref{THM-WT-OMEGA} is independent of the selection of the points $x_{\bfi}^\ast\in \Delta_{\bfi}^\ast$.
\end{rem}
We divide the rest of this section into two parts. In the first part, we shall assume Theorem~\ref{THM-WT-OMEGA}, and show how it implies Theorem~\ref{THM-4-1-18}, while the second part is devoted to the proof of Theorem~\ref{THM-WT-OMEGA}.
\subsection{Proof of Theorem~\ref{THM-4-1-18}}\label{subsection-8:1}
The aim is to show that Theorem~\ref{THM-4-1-18} can be deduced from Theorem~\ref{THM-WT-OMEGA}.
Recall that for each $\bfi\in\Ld_n^d$, $\mathcal{E}(x^\ast_{\bfi})$ is the set of unit tangent vectors to $\p' G^\ast$ at the point $(x_{\bfi}^\ast, g(x_{\bfi}^\ast))$, where $x_{\bfi}^\ast\in \Delta_{\bfi}^\ast$. Thus, by Definition~\ref{def-8-1}, Theorem~\ref{THM-WT-OMEGA}, and Remark~\ref{rem:loc mod pnt choice}, to show Theorem~\ref{THM-4-1-18},
it suffices to prove that
\begin{equation}\label{8-3-18}
\Sigma_1:=\sum_{(\bfi,j)\in\Ld_n^{d+1}} \og^r(f, I_{\bfi, j}^\ast; e_{d+1})^p_p\leq C\og_{\Og,\vi}^r \Bl(f, \f 1n; e_{d+1}\Br)^p_{p},\
\end{equation}
and for $k=1,\dots, d$,
\begin{equation}\label{8-4-18}
\Sigma_2 (k):=n^d \sum_{(\bfi, j)\in\Ld_n^{d+1}} \int_{\Delta_{\bfi}^\ast} \og^r(f, S_{\bfi,j}; \zeta_k(x_{\bfi}^\ast))^p_p dx_{\bfi}^\ast\leq C \wt{\og}_{G}^r \Bl(f, \f 1 n\Br)^p_p
\end{equation}
with the usual change of the $\ell^p$ norm in the case of $p=\infty$.
To prove the estimates~\eqref{8-3-18} and~\eqref{8-4-18}, we need to use the average modulus of smoothness of order $r$ on a compact interval $I=[a_I, b_I]\subset \RR$ defined as
$$w_r(f, t; I)_p :=\Bl(\f1t \int_{t/(4r)}^t\Bl( \int_{I-rh} |\tr_h^r f(x)|^p dx\Br)\, dh \Br)^{1/p},\ \ 0<p\leq \infty,$$
with the usual change when $p=\infty$.
It is well known that the average modulus $w_r(f, t; I)_p$ is equivalent to the ordinary modulus $\og^r(f,t)_p:=\sup_{0<h\leq t} \|\tr_h^r f\|_{L^p(I-rh)}$.
\begin{lem}\cite[p.~373, p.~185]{De-Lo} \label{lem-8-1} For $f\in L^p(I)$ and $0<p\leq \infty$,
\begin{equation}\label{key-equiv-mod-0}
C_1 w_r(f, t; I)_p \leq \og^r (f,t)_p\leq C_2 w_r(f,t; I)_p,\ \ 0<t\leq |I|,
\end{equation}
where the constants $C_1, C_2>0$ depend only on $p$ and $r$.
\end{lem}
For simplicity, we will assume $p<\infty$. The proof below with slight modifications works equally well for the case $p=\infty$.
We start with the proof of~\eqref{8-3-18}.
Using~\eqref{8-4-18-0} and~\eqref{key-equiv-mod-0}, we have
\begin{align*}\og^r(f, I_{\bfi,j}^\ast; e_{d+1})_p^p &=\sup_{0<h<\f {c(j+1)}{n^2}} \int_{\Delta_{\bfi}^\ast}\Bl[ \int_{g(x)-\al^\ast_{j+m_1}}^{g(x)-\al^\ast_{j-m_1}} |\tr_{h e_{d+1}}^r (f, I_{\bfi, j}^\ast, (x,y))|^p dy\Br] \, dx \\
&\sim \f {n^2} {j+1}\int_{\f {c(j+1)}{4rn^2}}^{\f {c(j+1)}{n^2}} \int_{I_{\bfi,j}^\ast } |\tr_{he_{d+1}}^r (f, I_{\bfi,j}^\ast, \xi)|^p d\xi dh.
\end{align*}
By~\eqref{funct-vi}, we note that for $\xi=(x,y)\in I_{\bfi, j}^\ast-\f {c(j+1)}{4n^2} e_{d+1}$,
\begin{align*} \vi_\Og (e_{d+1}, \xi)&\sim \sqrt{g(x)-y}\sim \f {j+1}n,\ \ 0\leq j\leq n.\end{align*}
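Indeed, by~\eqref{8-3-0-18} we have $\al_j=2\al\sin^2\f {j\pi}{2N}\sim (j/n)^2$, while every $\xi=(x,y)\in I_{\bfi, j}^\ast-\f {c(j+1)}{4n^2} e_{d+1}$ satisfies
\begin{align*}
\al^\ast_{j-m_1}+\f {c(j+1)}{4n^2}\leq g(x)-y\leq \al^\ast_{j+m_1}+\f {c(j+1)}{4n^2},
\end{align*}
so that $g(x)-y\sim \Bl(\f {j+1}n\Br)^2$, with implicit constants depending only on $m_1$, $\ell_1$ and $c$.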
Thus,
performing the change of variable
$h=s\vi_\Og(e_{d+1}, \xi)$ for each fixed $\xi\in I_{\bfi,j}^\ast-\f {c(j+1)}{4n^2} e_{d+1}$, we obtain
\begin{align*}
& \og^r(f, I_{\bfi,j}^\ast; e_{d+1})_p^p
\leq C n \int_{I_{\bfi,j}^\ast}\Bl[ \int_{0}^{\f cn} |\tr_{s\vi_\Og (e_{d+1}, \xi)e_{d+1}}^r(f, I_{\bfi, j}^\ast, \xi)|^p \, ds\Br] d\xi.
\end{align*}
It then follows that
\begin{align*}
\Sigma_1
&\leq Cn\sum_{j=0}^{n-1} \sum_{\bfi\in\Ld_n^d} \int_0^{\f cn}\Bl[ \int_{I_{\bfi, j}^\ast} |\tr_{s\vi_\Og (e_{d+1}, \xi)e_{d+1}}^r(f, \Og,\xi)|^p d\xi\Br] ds\\
&\leq C n\int_0^{\f cn} \int_{\Og }|\tr_{s\vi_\Og (e_{d+1}, \xi)e_{d+1}}^r (f,\Og,\xi)|^p\, d\xi ds\leq C \og^r_{\Og,\vi}(f, n^{-1}; e_{d+1})_p^p.
\end{align*}
This proves the estimate~\eqref{8-3-18}.
The estimate~\eqref{8-4-18} can be proved in a similar way. Indeed, by~\eqref{2-3-18} and~\eqref{key-equiv-mod-0}, it is easily seen that
\begin{align*}
\og^r(f, S_{\bfi, j}; \zeta_k(x_{\bfi}^\ast))_p^p\sim n \int_0^{\f cn} \|\tr_{h \zeta_k(x_{\bfi}^\ast)}^r (f, S_{\bfi, j}) \|_{L^p(S_{\bfi,j})}^p\, dh.
\end{align*}
It follows that
\begin{align*}
\Sigma_2(k) &\leq C n^{d+1} \int_0^{\f cn} \Bl[ \sum_{(\bfi, j)\in\Ld_n^{d+1}} \int_{\Delta_{\bfi}^\ast}
\|\tr_{h \zeta_k(x_{\bfi}^\ast)}^r (f, S_{\bfi, j}) \|_{L^p(S_{\bfi,j})}^p\, dx_{\bfi}^\ast\Br]\, dh\\
& \leq C n^{d}\sup_{0<h\leq \f cn} \sum_{(\bfi, j)\in\Ld_n^{d+1}}
\int_{S_{\bfi, j}} \int_{\|u-\xi_x\|\leq \f c n} |\tr_{h \zeta_k(u)}^r (f, S_{\bfi, j},\xi)|^p\, du\, d\xi\\
& \leq C n^{d}\sup_{0<h\leq \f cn}
\int_{G^n} \int_{\|u-\xi_x\|\leq \f c n} |\tr_{h \zeta_k(u)}^r (f, G,\xi)|^p\, du\, d\xi\leq C \wt{\og}_G^r(f, \f cn)_p^p,
\end{align*}
where $G^n:=\{\xi\in G:\ \ \dist(\xi, \p' G) \ge \f {A_0}{n^2}\}$.
This proves~\eqref{8-4-18}.
\subsection{Proof of Theorem~\ref{THM-WT-OMEGA} }\label{subsection-8:2}
The proof relies on several lemmas.
\begin{lem}\label{thm-2-1} Let $(\bfi,j)\in \Ld_n^{d+1}$. Then for $0<p\leq \infty$ and $r\in\NN$,
$$E_{(d+1)(r-1)}(f)_{L^p(I_{\bfi,j}^\ast)}\leq C(p, r, d, G) \Bl[\og^r (f, I_{\bfi,j}^\ast; e_{d+1})_p+\og^r (f, S_{\bfi,j}; \mathcal{E}(x^\ast_\bfi))_p\Br].$$
\end{lem}
\begin{proof}Lemma~\ref{thm-2-1} follows directly from~\eqref{8-7-0} and Lemma~\ref{cor-7-3}.
\end{proof}
\begin{lem}\label{lem-5-1} Given $0<p\leq \infty$ and $r\in\NN$, there exist positive constants $C=C(p, r)$ and $s_1=s_1(p,r)$ depending only on $p$ and $r$ such that for any integers $0\leq k, j\leq N/2$ and any $P\in\Pi_r^1$,
\begin{equation}\label{5-2a}
\|P\|_{L^p[\al_{j}, \al_{j+1}]}\leq C(p,r) (1+|j-k|)^{s_1}\|P\|_{L^p[\al_{k},\al_{k+1}]}.
\end{equation}
\end{lem}
\begin{proof} First, we prove that
\begin{equation}\label{5-1}
\|P\|_{L^p(I_{2t} (x))} \leq L_{p,r}\|P\|_{L^p(I_t (x))}, \ \ \forall P\in \Pi_r^1,\ \ \forall x\in [0, 2\al],\ \ \forall t\in (0, 1],
\end{equation}
where
$$I_t(x):=\Bl\{ y\in [0, 2\al]:\ \ |\sqrt{x}-\sqrt{y}|\leq \sqrt{2\al} t\Br\}.$$
To see this, we note that with $\rho_t(x)=2\al t^2+t\sqrt{2\al x}$,
\begin{equation}\label{8-17-0} \Bl[x-\f18 \rho_t(x), x+\f18 \rho_t(x)\Br]\cap [0, 2\al]\subset I_t(x)\subset I_{2t}(x) \subset [x-4\rho_t(x), x+4\rho_t(x)],\end{equation}
where the first relation can be deduced by considering the cases $0\leq x\leq \al t^2$ and $\al t^2<x\leq 2\al$ separately.
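For instance, in the case $\al t^2<x\leq 2\al$ one has $\sqrt{x}>\sqrt{\al}\, t$, so for every $y\in \Bl[x-\f18 \rho_t(x), x+\f18 \rho_t(x)\Br]\cap [0,2\al]$,
\begin{align*}
|\sqrt{x}-\sqrt{y}|=\f {|x-y|}{\sqrt{x}+\sqrt{y}}\leq \f {\rho_t(x)}{8\sqrt{x}}=\f {2\al t^2+t\sqrt{2\al x}}{8\sqrt{x}}\leq \f {\sqrt{2\al}\,t}4+\f {\sqrt{2\al}\,t}8\leq \sqrt{2\al}\, t,
\end{align*}
that is, $y\in I_t(x)$.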
By Lemma~\ref{lem-4-1}, this implies that with $I_t=I_t(x)$ and $J=[x-4\rho_t(x), x+4\rho_t(x)]$,
\begin{align*}
\|P\|_{L^p(I_{2t})} &\leq \|P\|_{L^p(J)} \leq C_{p,r} \|P\|_{L^p (\f 1{32} J \cap [0, 2\al])} \leq C_{p,r}\|P\|_{L^p(I_t)},
\end{align*}
which proves~\eqref{5-1}.
Next, we note that the doubling property~\eqref{5-1} implies that for any $x, x'\in [0, 2\al]$ and any $t\in (0, 1]$,
\begin{equation}\label{8-14-18-00}
\|P\|_{L^p(I_{t} (x))} \leq L_{p,r} \Bl( 1+ \f {|\sqrt{x}-\sqrt{x'}|}{\sqrt{2\al} t}\Br)^{s_1}\|P\|_{L^p(I_t (x'))}, \ \ \forall P\in \Pi_r^1,
\end{equation}
where $s_1=(\log L_{p,r})/\log 2$.
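To derive~\eqref{8-14-18-00} from~\eqref{5-1}, one may take the smallest integer $k\ge 0$ with $2^k \ge 1+\f {|\sqrt{x}-\sqrt{x'}|}{\sqrt{2\al}\, t}$, so that $I_t(x)\subset I_{2^k t}(x')$ by the triangle inequality, and iterate the doubling inequality $k$ times:
\begin{align*}
\|P\|_{L^p(I_t(x))}\leq \|P\|_{L^p(I_{2^k t}(x'))}\leq L_{p,r}^k \|P\|_{L^p(I_t(x'))},\ \ \ L_{p,r}^k=(2^k)^{s_1}\leq 2^{s_1}\Bl(1+\f {|\sqrt{x}-\sqrt{x'}|}{\sqrt{2\al}\, t}\Br)^{s_1};
\end{align*}
the steps with $2^j t>1$ here are vacuous, since $I_s(x')=[0,2\al]$ for all $s\ge 1$.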
Finally, for each $0\leq k\leq N/2$, we may write $ [\al_k, \al_{k+1}] =I_{t_k} (x_k)$
with $t_k:=\f {\sqrt{\al_{k+1}}-\sqrt{\al_k}}{2\sqrt{2\al}}$ and $x_k :=\f {(\sqrt{\al_k} +\sqrt{\al_{k+1}})^2} 4$. Note also that
by~\eqref{8-4-18-0},
\begin{align}\label{8-15-18-00}
\f {\sqrt{2}|k-j|}{2N}\leq \f{|\sqrt{\al_j}-\sqrt{ \al_k}|}{\sqrt{2\al}}\leq \f {\pi|k-j|} {2N},\ \ 0\leq k, j\leq N/2.
\end{align}
It then follows by~\eqref{8-14-18-00} and~\eqref{8-15-18-00} that
\begin{align*}
\|P\|_{L^p[\al_{j}, \al_{j+1}]}&\leq \|P\|_{L^p (I_{\pi/(4N)}(x_j))} \leq L_{p,r} \Bl ( 1+ \f {4N|\sqrt{x_j}-\sqrt{x_k}|}{\sqrt{2\al} \pi}\Br)^{s_1}
\|P\|_{L^p (I_{\pi/(4N)}(x_k))} \\
&\leq L_{p,r}^4 ( 1+|k-j|)^{s_1} \|P\|_{L^p[\al_{k}, \al_{k+1}]}.
\end{align*}
\end{proof}
For $x=(x_1,\dots, x_d)\in\R^d$, we set $\|x\|_\infty :=\max_{1\leq j\leq d} |x_j|$.
\begin{lem}\label{lem-8-4} Given $0<p\leq \infty$ and $r\in\NN$, there exist positive constants $C=C(p, r, d)$ and $s_2=s_2(p,r,d)$ depending only on $p$, $r$ and $d$ such that for any $\ib, \kb\in\Ld_n^d$ and $Q\in\Pi_r^d$,
\begin{equation}\label{5-2b}
\|Q\|_{L^p(\Delta_{\bfi})}\leq C(p,r,d) (1+\|\bfi-\kb\|_\infty)^{s_2}\|Q\|_{L^p(\Delta_{\kb})}.
\end{equation}
\end{lem}
\begin{proof} The proof of Lemma~\ref{lem-8-4} is similar to that of Lemma~\ref{lem-5-1}, and in fact, is simpler. It is a direct consequence of Lemma~\ref{lem-4-1}.
\end{proof}
\begin{lem}\label{lem-5-2} Given $0<p\leq \infty$ and $r\in\NN$, there exists a positive number $\ell=\ell(p,r,d)$ such that
for any $(\bfi, j), (\mathbf{k}, l)\in \Ld_n^{d+1}$ and any $Q\in\Pi_r^{d+1}$,
\begin{equation}\label{desire}
\|Q\|_{L^p(I_{\bfi, j})} \leq C \Bl(1+\max\{\|\bfi-\mathbf{k}\|_\infty, |j-l|\}\Br)^{\ell}\|Q\|_{L^p(I_{\mathbf{k}, l})},
\end{equation}
where the constant $C$ depends only on $p, d, r$ and $\|\nabla^2 g\|_\infty$.
\end{lem}
\begin{proof} For simplicity, we shall prove Lemma~\ref{lem-5-2} for the case of $0<p<\infty$ only. The proof below with slight modifications works for $p=\infty$.
Writing
$$\|Q\|^p_{L^p(I_{\bfi, j})} = \int_{\Delta_{\bfi}}\Bl[\int_{\al_{j}}
^{\al_{j+1}} |Q({x}, g({x})-u)|^p\, du\Br] d{x},$$
and using Lemma~\ref{lem-5-1}, we obtain
\begin{align}\label{8-18-0}
\|Q\|^p_{L^p(I_{\bfi, j})}\leq C(p,r) (1+|j-l|)^{s_1} \Bl[ \int_{\Delta_{\bfi}}\int_{g({x})-\al_{l+1}}
^{g({x})-\al_{l}} |Q({x}, y)|^p\, dy d{x}\Br].
\end{align}
Using Taylor's theorem, we have that
\begin{equation}\label{5-3}
|g({x})-t_{\bfi} ({x})|\leq \f {A}{2b^2}\|{x}-{x}_{\bfi}\|_\infty^2,\ \ \ \forall x\in [-b,b]^d,
\end{equation}
where ${x}_{\bfi}$ is the center of the cube $\Delta_{\bfi}$,
$t_{\bfi}({x}):=g({x}_{\bfi})+\nabla g({x}_{\bfi})\cdot ({x}-{x}_{\bfi})$, and
$A:=b^2d^2\|\nabla^2 g\|_{L^\infty([-b,b]^d)}$.
Thus, the double integral in the square brackets on the right hand side of~\eqref{8-18-0} is bounded above by
\begin{align*}
& \int_{\al_{l}}^{\al_{l+1}+\f {A}{n^2}}\Bl[ \int_{\Delta_{\bfi}}\Bl|Q\Bl({x}, t_{\bfi}({x})
-u+\f A {2n^2}\Br)\Br|^p\, d{x}\Br] du=: I.
\end{align*}
However, applying Lemma~\ref{lem-8-4} to this last inner integral in the square brackets, we obtain
\begin{align}\label{8-20-1}
I&\leq C(p,r,d) (1+\|\bfi-\mathbf{k}\|_\infty)^{s_2} \int_{\Delta_{\mathbf k}}\Bl[\int_{t_{\bfi}({x})
-\al_{l+1}-\f A {2n^2}}^{t_{\bfi}({x})
-\al_{l}+\f A {2n^2}} |Q({x}, u)|^p\, du\Br] d {x}.
\end{align}
By~\eqref{5-3}, this last integral in the square brackets on the right hand side of~\eqref{8-20-1} is bounded above by
\begin{align*}
& \int_{g({x})
-\al_{l+1}-\f {4A(1+\|\mathbf{k}-\bfi\|_\infty^2)}{n^2}}^{g({x})
-\al_{l}+\f {4A(1+\|\mathbf{k}-\bfi\|_\infty^2)}{n^2}} |Q({x}, u)|^p\, du
=\int^{
\al_{l+1}+\f {4A(1+\|\mathbf{k}-\bfi\|_\infty^2)}{n^2}}_{\al_{l}-\f {4A(1+\|\mathbf{k}-\bfi\|_\infty^2)}{n^2}} |Q({x}, g({x})-y)|^p\, dy, \end{align*}
which, using Lemma~\ref{lem-4-1} and the fact that $\al_{l+1}-\al_{l}\ge c n^{-2}$, is controlled above by
\begin{align*}
C(p,r) \Bl(A (1+\|\mathbf {k}-\bfi\|_\infty)\Br)^{2rp+4} \int^{\al_{l+1}}_{
\al_{l}} |Q({x}, g({x})-y)|^p\, dy.
\end{align*}
Putting the above together, we prove that
\begin{align*}
\|Q\|^p_{L^p(I_{\bfi, j})}\leq C(p,r,d) (1+|j-l|)^{s_1} (1+\|\bfi-\mathbf{k}\|_\infty)^{s_2+2rp+4}\|Q\|^p_{L^p(I_{\kb, l})}.
\end{align*}
This leads to the desired estimate~\eqref{desire} with $\ell= (s_1+s_2+2rp+4)/p$.
\end{proof}
Now we are in the position to prove Theorem~\ref{THM-WT-OMEGA}.
\begin{proof}[Proof of Theorem~\ref{THM-WT-OMEGA}] We shall prove the result for the case of $0<p<\infty$ only.
The proof below with slight modifications works equally well for the case $p=\infty$.
For simplicity, we use the Greek letters $\ga, \b,\dots$ to denote indices in the set $\Ld_n^{d+1}$.
By Lemma~\ref{thm-2-1}, for each $\ga:=(\ib, j)\in\Ld_n^{d+1}$ there exists a polynomial $s_{\ga}\in\Pi_{(d+1)(r-1)}^{d+1}$ such that
\begin{align}\label{5-4}
\|f-s_{\ga}\|_{L^p(I_{\ga}^\ast)} \leq C(p, r, d) W^r(f, I_{\ga}^\ast)_p,
\end{align}
where
$$ W^r(f, I_{\ga}^\ast)_p:=\og^r (f, I_{\ga}^\ast; e_{d+1})_p+\og^r (f, S_{\ga}; \mathcal{E}(x^\ast_\bfi))_p.$$
Let $\{q_{\ga}:\ \ \ga\in\Ld_n^{d+1}\}\subset \Pi_{\lfloor n/(r(d+1))\rfloor}^{d+1}$ be the polynomial partition of unity as given in Theorem~\ref{strips-0} and Remark~\ref{rem-6-3} with a large parameter $m>2d+2$, to be specified later. Define
$$P_n(\xi):=\sum_{\ga\in\Ld_n^{d+1}} s_\ga (\xi) q_\ga(\xi)\in\Pi_{n}^{d+1}.$$
Clearly, it is sufficient to prove that
\begin{equation}\label{8-22-00}
\|f-P_n\|_{L^p(G)}\leq C \og^r_{\text{loc}} \Bl(f, \f1n\Br)_p.
\end{equation}
To show~\eqref{8-22-00}, we write, for each $\b\in \Ld_n^{d+1}$,
\begin{align*}
f(\xi)-P_n(\xi)&=f(\xi)-s_\b(\xi)+\sum_{\ga\in \Ld_n^{d+1}} (s_\b(\xi)-s_\ga (\xi))q_\ga (\xi).
\end{align*}
It follows by Theorem~\ref{strips-0} that
\begin{align*}
\|f-P_n\|_{L^p(I_\b)}^p &\leq C_p \|f-s_\b\|_{L^p(I_\b)}^p +C_p \sum_{\ga\in\Ld_n^{d+1}} \|s_\b-s_\ga\|^p_{L^p(I_\b)} (1+\|\b-\ga\|_\infty)^{-mp_1},
\end{align*}
where $p_1:=\min\{p,1\}$.
Using~\eqref{5-4}, we then reduce to showing that \begin{align}\label{8-23-00}
\Sigma_n'&:=\sum_{\b\in\Ld_n^{d+1}}\sum_{\ga\in\Ld_n^{d+1}} \|s_\b-s_\ga\|^p_{L^p(I_\b)} (1+\|\b-\ga\|_\infty)^{-mp_1}\leq C \og_{\text{loc}}^r\Bl(f,\f1n\Br)_p^p.
\end{align}
To show~\eqref{8-23-00}, we claim that there exists a positive number $s_3=s_3(p,d,r)$ such that for any $\ga,\b\in\Ld_n^{d+1}$,
\begin{equation}\label{claim-8-23} \|s_\ga-s_\b\|^p_{L^p(I_\ga)} \leq C (1+\|\ga-\b\|_\infty)^{s_3 p} \sum_{\eta\in \mathcal{I}_{k_0} ( \ga)} W^r(f, I^\ast_\eta)^p_p,\end{equation}
where $k_0:=1+\|\b-\ga\|_\infty$, and
$$ \mathcal{I}_t(\ga):=\{ \eta\in\Ld_n^{d+1}:\ \ \|\ga-\eta\|_\infty\leq t\}\ \ \text{for $ \ga\in\Ld_n^{d+1}$ and $t>0$}.$$
For the moment, we assume~\eqref{claim-8-23} and proceed with the proof of~\eqref{8-23-00}. Indeed,
we have
\begin{align*}
\Sigma_n' &\leq C\sum_{\b\in\Ld_n^{d+1}}\sum_{k=1}^\infty k^{-mp_1}\sum_{\ga\in \mathcal{I}_k(\b)\setminus \mathcal{I}_{k-1}(\b)}\|s_\b-s_\ga\|_{L^p(I_\b)}^p,\end{align*}
which, using~\eqref{claim-8-23}, is bounded above by
\begin{align*}
& C\sum_{k=1}^\infty k^{-mp_1+s_3p+2d+2} \sum_{\eta\in \Ld_n^{d+1}}W^r(f, I_\eta^\ast)_p^p.
\end{align*} Choosing the parameter $m$ to be bigger than $s_3p/p_1 + (2d+4)/p_1$, we then prove~\eqref{8-23-00}.
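Here the exponent $2d+2$ accounts for the change in the order of summation: for fixed $k\ge 1$ and $\eta\in\Ld_n^{d+1}$,
\begin{align*}
\#\{\ga\in\Ld_n^{d+1}:\ \|\ga-\eta\|_\infty\leq k+1\}\leq C_d k^{d+1}\ \ \text{and}\ \ \#\{\b\in\Ld_n^{d+1}:\ \ga\in \mathcal{I}_k(\b)\setminus \mathcal{I}_{k-1}(\b)\}\leq C_d k^{d+1},
\end{align*}
so that each term $W^r(f, I_\eta^\ast)_p^p$ appears at most $C_d k^{2d+2}$ times in the $k$th summand.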
It remains to prove the claim~\eqref{claim-8-23}. A crucial ingredient in the proof is to construct a sequence $\{\ga_1,\dots, \ga_{N_0}\}$ of distinct indices in $\Ld_n^{d+1}$ with the properties that $N_0\leq C ( 1+\|\ga-\b\|_\infty)^2$,
$\ga_1=\ga$, $\ga_{N_0}=\b$, and for $j=1,\dots, N_0-1$,
\begin{align}\label{8-25-00}
I_{\ga_j} \subset I_{\ga_{j+1}}^\ast\ \ \text{and}\ \ \|\ga_j-\ga\|_\infty\leq 1+\|\ga-\b\|_\infty.
\end{align}
Indeed, once such a sequence is constructed, then
we have
\begin{align*}
\|s_\ga-s_\b\|_{L^p(I_\ga)}^p&\leq N_0^{\max\{p,1\}-1} \sum_{j=1}^{N_0-1}\|s_{\ga_j}-s_{\ga_{j+1}}\|^p_{L^p(I_{\ga})}, \end{align*}
which, using~\eqref{8-25-00} and Lemma~\ref{lem-5-2} with $\ell=\ell(p,r,d)>0$, is estimated above by
\begin{align*}
&\leq C N_0^{\max\{p,1\}-1} (1+\|\ga-\b\|_\infty)^{\ell p} \sum_{j=1}^{N_0-1}\|s_{\ga_j}-s_{\ga_{j+1}}\|^p_{L^p(I_{\ga_j})}.\end{align*}
However, using~\eqref{8-25-00} and~\eqref{5-4}, we have that
\begin{align*}
\|s_{\ga_j}-s_{\ga_{j+1}}\|^p_{L^p(I_{\ga_j})}\leq &C_p \Bl[ \|f-s_{\ga_j}\|_{L^p(I_{\ga_j})}^p +\|f-s_{\ga_{j+1}}\|_{L^p(I^\ast_{\ga_{j+1}})}^p\Br]\\
\leq& C(p,r,d)\Bl[ W^r(f, I_{\ga_j}^\ast)_p^p+ W^r(f, I_{\ga_{j+1}}^\ast)_p^p\Br].
\end{align*}
Putting the above together, we prove the claim~\eqref{claim-8-23} with $s_3:=\ell+2\max\{1, \f 1p\}$.
Finally, we construct the sequence $\{\ga_1,\dots, \ga_{N_0}\}$ as follows. Assume that $\ga=(\mathbf{k}, l)$, and $\b=(\mathbf{k}', l')$. Without loss of generality, we may assume that $l\leq l'$. (The case $l>l'$ can be treated similarly.)
Recall that
$ \Delta_{\ib}:=\Bl\{x\in \RR^d:\ \ \|x-x_{\ib}\|_\infty\leq \f b n\Br\},$
where ${x}_{\bfi}$ is the center of the cube $\Delta_{\bfi}$.
Let $\{z_j\}_{j=0}^{n_0+1}$ be a sequence of points on the line segment $[x_{\kb}, x_{\kb'}]$ satisfying that $z_0 =x_{\kb}$, $z_{n_0+1} =x_{\kb'}$, $\|z_j-z_{j+1}\|_\infty =\f {3b} n $ for $j=0,1,\dots, n_0-1$ and $\f {3b} n \leq \|z_{n_0}-z_{n_0+1}\|_\infty<\f {6b}n$, where $n_0+1\leq \f 23 \|\kb-\kb'\|_\infty$. Let $\ib_j \in\Ld_n^d$ be such that $z_j\in\Delta_{\ib_j}$ for $0\leq j\leq n_0+1$.
Since $\f {3b} n \leq \|z_j-z_{j+1}\|_\infty \leq \f {6b} n$, the cubes $\Delta_{\ib_j}$ are distinct and moreover
\begin{equation}\label{8-25-0}
\Delta_{\ib_j} \subset 9 \Delta_{\ib_{j+1}},\ \ j=0,1,\dots, n_0.
\end{equation}
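Indeed, for $x\in \Delta_{\ib_j}$,
\begin{align*}
\|x-x_{\ib_{j+1}}\|_\infty\leq \|x-x_{\ib_j}\|_\infty+\|x_{\ib_j}-z_j\|_\infty+\|z_j-z_{j+1}\|_\infty+\|z_{j+1}-x_{\ib_{j+1}}\|_\infty\leq \f bn+\f bn+\f {6b}n+\f bn= \f {9b}n.
\end{align*}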
Note also that, by construction, $\ib_0=\kb$ and $\ib_{n_0+1} =\kb'$.
It can also be easily seen from the construction that for $j=0,\dots, n_0+1$,
\begin{equation}\label{8-27}
\|\ib_j -\kb\|_\infty \leq \|\kb-\kb'\|_\infty+1.
\end{equation}
Next, we order the indices $(\ib_j, k)$, $0\leq j\leq n_0+1$, $l\leq k\leq l'$ as follows:
\begin{align*}
(\ib_0, l), (\ib_0, l+1),\dots, (\ib_0, l'),
(\ib_1, l'), (\ib_1, l'-1),\dots, (\ib_1, l), (\ib_2, l),\dots, (\ib_{n_0+1}, l').
\end{align*}
We denote the resulting sequence by
$ \{\ga_1, \ga_2,\dots, \ga_{N_0}\},$
where
$$N_0\leq (1+|l-l'|) (n_0+2) \leq ( 1+\|\ga-\b\|_\infty)^2.$$
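Indeed, since $n_0+1\leq \f 23\|\kb-\kb'\|_\infty$ and $\|\kb-\kb'\|_\infty\leq \|\ga-\b\|_\infty$, we have
\begin{align*}
(1+|l-l'|)(n_0+2)\leq \Bl(1+\|\ga-\b\|_\infty\Br)\Bl(1+\|\kb-\kb'\|_\infty\Br)\leq ( 1+\|\ga-\b\|_\infty)^2.
\end{align*}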
Clearly, $\ga_1=\ga$, and $\ga_{N_0}=\b$. Moreover, by~\eqref{8-27}, we have $\|\ga_j-\ga\|_\infty\leq 1+\|\ga-\b\|_\infty$ for $j=1,\dots, N_0$, whereas by~\eqref{8-25-0},
$I_{\ga_j} \subset I_{\ga_{j+1}}^\ast $ for $j=1,\dots, N_0-1$.
This completes the proof.
\end{proof}
\section{Comparison with average moduli}\label{ch:IvanovModuli}
In this section, we shall prove that the moduli of smoothness defined in~\eqref{eqn:defmodulus} can be controlled from above by Ivanov's moduli of smoothness defined in~\eqref{eqn:ivanov}. By Remark~\ref{rem-3-2}, it is enough to prove the following theorem.
\begin{thm}\label{thm-9-1}
There exist a parameter $A_0>1$ and a constant $A>1$ such that for any $0<q\leq p\leq \infty$,
$$\og_\Og^r\Bl(f, \f 1n; A_0\Br)_p \leq C \tau_r \Bl(f, \f A n\Br)_{p,q},$$
where the constant $C$ is independent of $f$ and $n$.
\end{thm}
As a result, we obtain the Jackson inequality for Ivanov's moduli of smoothness in every dimension $d\ge 1$ and for the full range $0<q\leq p\leq \infty$.
\begin{cor}
If $f\in L^p(\Og)$, $0< q\leq p \leq \infty$ and $r\in\NN$, then
$$ E_n (f)_p \leq C_{r, \Og} \tau_r \Bl(f, \f A n\Br)_{p,q}.$$
\end{cor}
Recall that for $S\subset \R^d$,
$$S_{rh}:=\Bl\{\xi\in S:\ \ [\xi, \xi+rh]\subset S\Br\}, \ r>0, h\in\R^d.$$
The proof of Theorem~\ref{thm-9-1} relies on the following lemma, which generalizes Lemma~7.4 of~\cite{Di-Pr08}.
\begin{lem}\label{lem-9-1:Dec}
Let $r\in\NN$, $h\in\R^d$ and $\da_0\in (0,1)$. Assume that $(S,E)$ is a pair of subsets of $\R^d$ satisfying that for each $\xi\in S_{rh}$, there exists a convex subset $E^\xi$ of $E$ such that $|E^\xi|\ge \da_0 |E|$
and $[\xi, \xi+rh]\subset E^\xi$.
Then for any $0<q\leq p <\infty$ and $f\in L^p(E)$, we have
\begin{equation}\label{9-1-18}
\|\tr_h^r (f, S, \cdot)\|_{L^p(S)} \leq C(q, d, r)\Bl(\int_S \Bl(\f 1 {\da_0 |E|} \int_E \bl| \tr_{(\eta-\xi)/r} ^r (f, E, \xi)\br|^q\, d\xi\Br)^{\f pq}\, d\eta\Br)^{\f1p},
\end{equation}
where the constant $C(q,d, r)$ is independent of $S$, $E$ and $q$ if $q\ge 1$.
\end{lem}
Lemma~\ref{lem-9-1:Dec} was proved in \cite[Lemma 7.4]{Di-Pr08}
in the case when $p=q$ and $E=S$ is convex. For the general case, it can be obtained by modifying the proof there.
\begin{proof}
The proof is based on the following combinatorial identity, which was proved in \cite[Lemma 7.3]{Di-Pr08}: if $\xi, \eta\in\R^d$ and $f$ is defined on the convex hull of the set $\{\xi, \xi+rh, \eta\}$, then
\begin{align}\label{9-2-0}
\tr^r_h f(\xi) =&\sum_{j=0}^{r-1} (-1)^j \binom r j \tr^r f\Bl[\xi+jh, \ \f jr (\xi+rh)+\Bl(1-\f jr\Br)\eta\Br]\\
& -\sum_{j=1}^r (-1)^j \binom r j \tr^r f \Bl[ \Bl(1-\f jr\Br) \xi+\f jr \eta,\ \ \xi+rh \Br],\notag
\end{align}
where we used the notation
$\tr^r f[u, v] :=\tr_{(v-u)/r} ^r f(u)$ for $u,v\in\RR^d$.
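As a quick sanity check (a reader's aside, not part of the original argument), for $r=1$ the identity~\eqref{9-2-0} reduces to a telescoping sum through the auxiliary point $\eta$:
\begin{align*}
\tr^1_h f(\xi) =\tr^1 f[\xi,\ \eta]+\tr^1 f[\eta,\ \xi+h]
=\bl(f(\eta)-f(\xi)\br)+\bl(f(\xi+h)-f(\eta)\br)=f(\xi+h)-f(\xi).
\end{align*}
Averaging over $\eta\in E^\xi$ then replaces the fixed direction $h$ by directions sampled from the set $E^\xi$, which is the mechanism exploited below.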
Since $E^\xi$ is a convex set containing the line segment $[\xi, \xi+rh]$ for each $\xi\in S_{rh}$, we obtain from~\eqref{9-2-0} that for $\xi\in S_{rh}$,
\begin{align*}
|\tr_h^r f(\xi) |\leq& C_r \max_{0\leq j\leq r-1}
\Bl(\f 1 {|E^\xi|} \int_{E^\xi} \Bl|\tr^rf\bl[\xi+jh, \ \f jr (\xi+rh)+\Bl(1-\f jr\Br)\eta\br]\Br|^q\, d\eta\Br)^{\f1q}\\
&+ C_r \max_{1\leq j\leq r}
\Bl(\f 1 {|E^\xi|} \int_{E^\xi}\Bl|\tr^r f \bl[ \Bl(1-\f jr\Br) \xi+\f jr \eta,\ \ \xi+rh \br]\Br|^q\, d\eta\Br)^{\f1q}.
\end{align*}
Taking the $L^p$-norm over the set $S_{rh}$ on both sides of this last inequality, we obtain
\begin{align*}
\Bl(\int_{S_{rh}} |\tr_h^r f(\xi) |^p\, d\xi\Br)^{\f1p}\leq& C_{r}\da_0^{-\f1q} \Bl[\max_{0\leq j\leq r-1} I_j(h) + \max_{1\leq j\leq r}K_j(h)\Br],
\end{align*}
where
\begin{align*}
I_j(h):&=\Bl(\int_{S_{rh}}\Bl(\f 1{|E|} \int_{E^\xi} \Bl|\tr^r f\bl[\xi+jh, \ \f jr (\xi+rh)+\Bl(1-\f jr\Br)\eta\br]\Br|^q\, d\eta\Br)^{\f pq}\, d\xi\Br)^{\f1p},\\
K_j(h):&=\Bl(\int_{S_{rh}}\Bl(\f 1{|E|} \int_{E^\xi}\Bl|\tr^r f \bl[ \Bl(1-\f jr\Br) \xi+\f jr \eta,\ \ \xi+rh \br]\Br|^q\, d\eta\Br)^{\f pq}\, d\xi\Br)^{\f1p}.
\end{align*}
For the term $I_j(h)$ with $0\leq j\leq r-1$, we have
\begin{align*}
I_j(h)&=\Bl(\int_{S_{rh}+jh}\Bl(\f 1 {|E|} \int_{E^{u-jh}} \Bl|\tr^r f\bl[u, \ \f jr (u+(r-j)h)+\Bl(1-\f jr\Br)\eta\br]\Br|^q\, d\eta\Br)^{\f pq}\, du\Br)^{\f1p}\\
&\leq r^{d/q}
\Bl(\int_{S_{rh}+jh}\Bl(\f 1 {|E|} \int_{E^{u-jh}} \Bl|\tr^rf[u, \ v]\Br|^q\, dv\Br)^{\f pq}\, du\Br)^{\f1p},
\end{align*}
where we used the change of variables $u=\xi+jh$ in the first step, and, in the second step, the change of variables $v=\f jr (u+(r-j) h)+\Bl(1-\f jr\Br) \eta$ together with the convexity of each set $E^{u-jh}$.
Since
$[u,v] \subset E^{u-jh} \subset E$ whenever $u\in S_{rh}+jh$ and $v\in E^{u-jh}$ and since $\tr^r f [u,v]=\tr^r f[v,u]$, it follows that
$$ I_j(h) \leq r^{d/q} \Bl(\int_{S}\Bl(\f 1 {|E|} \int_{E} \Bl|\tr^r_{(u-v)/r} (f, E, v)\Br|^q\, dv\Br)^{\f pq}\, du\Br)^{\f1p}.$$
The terms $K_j(h)$, $1\leq j\leq r$, can be estimated in a similar way. In fact, making the change of variables $u=\xi+rh$ and $v=\Bl(1-\f jr\Br)(u-rh) +\f jr \eta$, we obtain
\begin{align*}
K_j(h)&=\Bl(\int_{S_{-rh}}
\Bl(\f 1 {|E|} \int_{E^{u-rh}}\Bl|\tr^r f \bl[ \Bl(1-\f jr\Br) (u-rh)+\f jr \eta,\ \ u \br]\Br|^q\, d\eta\Br)^{\f pq}\, du\Br)^{\f1p}\\
&\leq r^{d/q} \Bl(\int_{S_{-rh}}
\Bl(\f 1 {|E|} \int_{E^{u-rh}}\Bl|\tr^r f [v,\ \ u ]\Br|^q\, dv\Br)^{\f pq}\, du\Br)^{\f1p}\\
&\leq r^{d/q} \Bl(\int_{S}\Bl(\f 1 {|E|} \int_{E} \Bl|\tr^r_{(u-v)/r} (f, E, v)\Br|^q\, dv\Br)^{\f pq}\, du\Br)^{\f1p}.
\end{align*}
Putting the above together, we complete the proof.
\end{proof}
We are now in a position to prove Theorem~\ref{thm-9-1}.
\begin{proof}[Proof of Theorem~\ref{thm-9-1}]
We shall prove Theorem~\ref{thm-9-1} for $p<\infty$ only; the case $p=\infty$ then follows by letting $p\to\infty$, since all the general constants below remain bounded as $p\to\infty$.
By Lemma~\ref{lem-2-1-18}, there exists $\da_0\in (0,1)$ such that
$\Og\setminus \Og(\da_0) \subset \bigcup_{j=1}^{m_0} G_j,$
where
$$\Og(\da_0):=\{\xi\in\Og:\ \ \dist(\xi, \Ga) > \da_0\}.$$
We claim that
for any $ 0<t< \f {\da_0}{ 8\diam (\Og) +8}$,
\begin{align}\label{9-5-2}
\sup_{\|h\|\leq t} \Bl\|\tr_{h\vi_{\Og} (h, \cdot)}^r (f, \Og, \cdot)\Br\|_{L^p(\Og(\da_0))}\leq C_{q,d} \tau_r (f, A_1t)_{p,q}.
\end{align} Indeed, using Fubini's theorem and Lemma~\ref{lem-8-1}, we have
\begin{align*}
\sup_{\|h\|\leq t} \Bl\|\tr_{h\vi_{\Og} (h, \cdot)}^r f\Br\|_{L^p(\Og(\da_0))}\leq C_d \sup_{\|h\|\leq t} \Bl\|\tr_{h}^r f\Br\|_{L^p(\Og(\da_0/2))}.
\end{align*}
Let $\{\og_1,\dots, \og_{N}\}$ be a subset of $\Og(\da_0/2)$ such that $\min_{1\leq i\neq j\leq N} \|\og_i-\og_j\|\ge t$ and
$\Og(\da_0/2) \subset \bigcup_{j=1}^{N} B_j$, where $B_j:=B_{t} (\og_j)$.
Using Lemma~\ref{lem-9-1:Dec}, we then have
\begin{align*}
&\sup_{\|h\|\leq t} \Bl\|\tr_{h}^r f\Br\|^p_{L^p(\Og(\da_0/2))}\leq C_{p} \sum_{j=1}^{N} \sup_{\|h\|\leq t} \Bl\|\tr_{h}^r(f, 2B_j, \cdot)\Br\|^p_{L^p(B_j)}\\
&\leq C_{q} \sum_{j=1}^{N}\int_{2B_j} \Bl(\f 1{t^{d+1}} \int_{ B_{4t} (\xi)} |\tr_{(\eta-\xi)/r}^r f (\xi)|^q \, d\xi\Br)^{\f pq} \, d\eta\leq C_{q,d} \tau_r (f, A_1 t)_{p,q}^p.
\end{align*}
This proves the claim~\eqref{9-5-2}.
Now using~\eqref{9-5-2} and Definition~\ref{def:modulus}, it suffices to show that for each $x_i$-domain $G\subset \Og$ attached to $\Ga$ and each sufficiently large parameter $A_0$,
\begin{equation}\label{9-3}
\wt{\og}^r_G (f, \f1n; A_0)_{L^p(G)}\leq C \tau_r \Bl(f, \f {A_1} n\Br)_{p,q}
\end{equation}
and
\begin{equation}\label{9-4}
\sup_{0<s\leq \f 1n} \|\tr_{s \vi_{\Og} (e_i,\cdot)e_i }^r (f, G,\cdot)\|_{L^p(G)}\leq C \tau_r \Bl(f, \f {A_1} n\Br)_{p,q}.
\end{equation}
Without loss of generality, we may assume that $e_i=e_{d+1}$, $G$ takes the form~\eqref{standard} with small base size $b\in(0,1)$, and $n\ge N_0$, where $N_0$ is a large positive integer depending only on the set $\Og$.
We follow the same notation as in Section~\ref{Sec:8} with sufficiently large parameters $m_0$ and $m_1$. Thus,
$\{I_{\ib, j}:\ \ (\ib, j) \in\Ld_{n}^{d+1}\}$ is a partition of $G$, and $S_{\ib,j}\subset I_{\ib,j}$ is the compact parallelepiped defined in~\eqref{8-7-1-18}.
We start with the proof of~\eqref{9-3}.
Given a parameter $\ell>1$, we define
\begin{align*}
S_{\bfi,j}^ \diamond:=\Bl\{ (x,y):\ \ & x\in(\ell \Delta_{\bfi}^\ast)\cap [-2b, 2b]^d,\ \ H_{\bfi} (x) -\al^\ast_{j+m_1} +\f {M_0-\ell} {n^2}\leq y\\
&\leq H_{\bfi}(x) -\al^\ast_{j-m_1} -\f {M_0-\ell} {n^2}\Br\},
\end{align*}
where $\ell \Delta_{\ib}^\ast$ denotes the dilation of the cube $\Delta_{\ib}^\ast$ from its center $x_{\ib}$.
We choose the parameter $\ell$ sufficiently large so that
\begin{enumerate}[\rm (i)]
\item for any $\xi=(\xi_x,\xi_y)\in I_{\ib,j}$ and $u\in B_{n^{-1} } (\xi_x)\subset \R^d$, $ \Bl[\xi, \xi +\f rn \zeta_k(u)\Br]\subset S_{\ib,j}^ \diamond$ for all $1\leq k\leq d$;
\item there exists a constant $c_0>0$ such that
$I_{\ib, j} \subset S_{\ib,j}^{\diamond}\subset G^\ast$ whenever $\ib\in\Ld_n^{d}$ and $j\ge c_0 \ell$.
\end{enumerate}
Furthermore, we may also choose the parameter $A_0$ large enough so that
with $\Ld_{n,\ell}^{d+1}:=\{(\ib,j)\in \Ld_n^{d+1}:\ \ c_0\ell\leq j\leq n\}$,
$$ G_n:=\Bl\{\xi\in G:\ \ \dist(\xi, \p' G) \ge \f {A_0} {n^2}\Br\}\subset \bigcup_{(\ib,j)\in\Ld_{n,\ell}^{d+1}} I_{\ib,j}.$$
With the above notation, we have that for any $0<s\leq \f 1n$ and $k=1,\dots, d$,
\begin{align*}
&n^d\int_{G_n} \int_{\|u-\xi_x\|\leq \f1n} |\tr_{s \zeta_k(u)}^r (f, G^\ast,\xi)|^p \, du d\xi\leq C_d \sum_{(\ib,j)\in\Ld_{n,\ell}^{d+1}} \sup_{\zeta \in\SS^d} \int_{S^{\diamond}_{\ib,j}} |\tr_{s \zeta}^r (f, S_{\ib,j}^\diamond,\xi)|^p d\xi,\end{align*}
which, using Lemma~\ref{lem-9-1:Dec}, is estimated above by
\begin{equation}\label{9-5}
C_{q,d,r} \sum_{(\ib,j)\in\Ld_{n,\ell}^{d+1}}
\int_{S_{\ib,j}^\diamond}\Bl( \f 1 {|S_{\ib,j}^{\diamond}|} \int_{S_{\ib,j}^\diamond} \bl| \tr_{(\eta-\xi)/r} ^r f( \xi)\br|^q\, d\xi\Br)^{\f pq}\, d\eta.
\end{equation}
Recall that for $\xi\in\Og$ and $t>0$, we defined
$ U( \xi, t)= \{\eta\in\Og:\ \ \rho_\Og(\xi,\eta) \leq t\}$. Now, by Proposition~\ref{metric-lem}, there exists a constant $A_1>1$ such that for each $(\ib, j)\in\Ld_{n,\ell}^{d+1}$,
$$ U\Bl(\eta_{\ib,j}, \f 1{n A_1}\Br)\subset S_{\ib, j}^{\diamond}\subset U\Bl(\eta_{\ib,j}, \f {A_1}{2n}\Br) \ \ \text{for some $\eta_{\ib,j}\in S_{\ib,j}^\diamond$}.$$
Thus, by Remark~\ref{rem-6-2}, the sum in~\eqref{9-5} is controlled above by a constant multiple of
\begin{align*}
\int_{\Og}\Bl( \f 1 {|U(\xi, \f {A_1} n)|} \int_{U(\xi, \f {A_1} n)} \bl| \tr_{(\eta-\xi)/r} ^r (f,\Og, \xi)\br|^q\, d\xi\Br)^{\f pq}\, d\eta= \tau_r \Bl(f, \f {A_1} n\Br)_{p,q}^p.
\end{align*}
This completes the proof of~\eqref{9-3}.
It remains to prove~\eqref{9-4}. First, by the $C^2$ assumption on the domain $\Og$ (see, e.g.,~\cite{Wa}), there exists a constant $r_0\in (0,1)$ such that for each $\xi=(\xi_x, \xi_y)\in G$, there exists a closed ball $B_\xi\subset G^\ast$ of radius $r_0$ that touches the boundary $\Ga$ at the point $\ga(\xi):=(\xi_x, g(\xi_x))$.
Given a large parameter $A$, we
define
\begin{equation}\label{9-6}
E_\xi:=\Bl\{ \eta \in B_\xi:\ \ \dist(\eta, T_\xi)\leq \f {A} {n^2}\Br\},\ \ \xi\in G,
\end{equation}
where $T_\xi$ denotes the tangent plane to $\Ga$ at the point $\ga(\xi)$. Clearly, $E_\xi\subset G^\ast$ is convex,
\begin{equation}\label{9-6-0}
U(\ga(\xi), \f {c_1} {n}) \subset E_\xi\subset U(\ga(\xi), \f {c_2}n),
\end{equation}
where the constants $c_1, c_2>0$ depend only on $G$ and the parameter $A$.
Next, recall that $S_{\ib,j}^\ast$ is the compact parallelepiped defined in~\eqref{8-8-1}. By definition, there exists a positive integer $j_0$ depending only on $G$ such that $S_{\ib,j}^\ast\subset G^\ast$ whenever $j_0<j\leq n$. Furthermore, according to Proposition~\ref{metric-lem}, we have that
\begin{equation}\label{9-8-1}
\sup_{\xi\in S^\ast_{\ib, j}} \|\xi-\ga(\xi)\|\leq \f {c_3} {n^2},\ \ \text{ for $0\leq j\leq j_0$, }
\end{equation}
and
\begin{equation}\label{9-7}U\Bl(\eta_{\ib,j}, \f {c_4} n\Br) \subset I^\ast_{\ib,j} \subset S_{\ib,j}^\ast\cap G^\ast \subset U\Bl(\eta_{\ib,j}, \f {c_5} n\Br),\ \ \forall (\ib,j) \in\Ld_n^{d+1},\end{equation}
for some point $\eta_{\ib,j} \in I_{\ib,j}$,
where $c_3, c_4, c_5$ are positive constants depending only on the set $G$.
By~\eqref{9-8-1}, we may choose the parameter $A$ in~\eqref{9-6} large enough so that if $0\leq j\leq j_0$ and $\xi\in I^\ast_{\ib,j}$, then $[\xi, \ga(\xi)]\subset E_\xi$.
Note that if $\xi\in I_{\ib,j}^\ast$ with $0\leq j\leq j_0$, then by~\eqref{9-7} and~\eqref{9-8-1},
$$\rho_{\Og} (\eta_{\ib,j}, \ga(\xi)) \leq \f {c_6}n,$$
where $c_6>0$ is a constant depending only on $G$.
Now we define, for $(\ib,j) \in\Ld_n^{d+1}$,
$$ E_{\ib, j} =\begin{cases}
S_{\ib,j}^\ast, \ \ & \text{ if $j_0<j\leq n$}, \\
U(\eta_{\ib,j}, \f {c_2+c_6} n), \ \ & \text{ if $0\leq j\leq j_0$}.
\end{cases}$$
Thus, $E_{\ib, j}\subset G^\ast$, and by~\eqref{9-6-0},~\eqref{9-8-1} and~\eqref{9-7}, we have that for $0\leq j\leq j_0$,
\begin{equation}\label{9-10}
\bigcup_{\xi\in I^\ast_{\ib,j}} E_\xi\subset \bigcup_{\xi\in I^\ast_{\ib,j}} U(\ga(\xi), \f {c_2} n)\subset E_{\ib,j}.
\end{equation}
Now, setting $e=e_{d+1}$ and using Lemma~\ref{lem-8-1}, we have
\begin{align*}
\sup_{0<s\leq \f 1n}& \|\tr_{s \vi_{\Og} (e_i,\cdot)e_i }^r (f, G,\cdot)\|_{L^p(G)}^p\leq C n\int_0^{\f 1n} \int_{G }|\tr_{s\vi_G (e, \xi)e}^r (f, G,\xi)|^p\, d\xi ds\\
&\leq C\sum_{(\ib,j) \in\Ld_n^{d+1}} \sup_{0<s\leq \f {cj^2} {n^3}} \int_{I_{\bfi, j}^\ast} |\tr_{se}^r(f, I_{\ib, j}^\ast, \xi)|^p d\xi.
\end{align*}
However, by~\eqref{9-6-0},~\eqref{9-10} and Lemma~\ref{lem-9-1:Dec}, this last sum can be estimated above by a constant multiple of
\begin{align*}
& \sum_{(\ib,j) \in\Ld_n^{d+1}} \int_{I_{\bfi, j}^\ast} \Bl( \f 1{|E_{\ib, j}|}\int_{E_{\ib,j}} |\tr_{(\eta-\xi)/r}^r(f, \Og, \xi)|^q d\eta\Br)^{\f pq}\, d\xi\\
&\leq C \sum_{(\ib,j) \in\Ld_n^{d+1}} \int_{I_{\bfi, j}^\ast} \Bl( \f 1{|U(\xi, \f {A_1} n)|}\int_{U(\xi, \f {A_1} n)} |\tr_{(\eta-\xi)/r}^r(f, \Og, \xi)|^q d\eta\Br)^{\f pq}\, d\xi
\leq C \tau_r (f, \f {A_1} n)_{p,q}^p,
\end{align*}
where $A_1:= 2(c_2+c_5+c_6)$. This completes the proof.
\end{proof}
\section{Inverse inequality for $1\leq p\leq \infty$}\label{sec:15}
The main purpose of this section is to prove Theorem~\ref{inverse-thm}, the inverse theorem. By Theorem~\ref{thm-9-1-00}, $\og^r_\Og(f, t)_p\leq C_{p,q} \tau_r(f, Ct)_{p,q}$ for $1\leq q\leq p\leq \infty$, where $\tau_r(f,t)_{p,q}$ is the $(p,q)$-averaged modulus of smoothness given in~\eqref{eqn:ivanov}. Thus, it is sufficient to prove
\begin{thm} \label{thm-15-1}If $r\in\NN$, $1\leq q\leq p\leq \infty$ and $f\in L^p(\Og)$, then
$$\tau_r (f, n^{-1})_{p,q} \leq C_{r} n^{-r} \sum_{s=0}^n (s+1)^{r-1} E_s (f)_p.$$
\end{thm}
Here we recall the convention that $L^p(\Og)$ denotes the usual Lebesgue space for $p<\infty$, and the space $C(\Og)$ of continuous functions on $\Og$ for $p=\infty$.
The proof of Theorem~\ref{thm-15-1} relies on two lemmas. To state these lemmas, we recall that
for $t>0$, $\xi\in\Og$ and $f\in L^p(\Og)$,
$$U( \xi, t):= \{\eta\in \Og:\ \ \rho_\Og(\xi,\eta) \leq t\},$$ and
$$ w_r (f, \xi, t)_q : =\begin{cases}
\displaystyle \Bl( \f 1 {|U(\xi,t)|} \int_{U( \xi,t)} |\tr_{(\eta-\xi)/r} ^r (f,\Og,\xi)|^q \, d\eta\Br)^{\f1q},\ \ & \text{if $1\leq q <\infty$};\\
\sup_{\eta\in U( \xi,t)} |\tr_{(\eta-\xi)/r}^r (f,\Og,\xi)|,\ \ &\text {if $q=\infty$}.\end{cases}$$
\begin{lem} \label{lem-15-1} Let $G\subset \Og$ be a domain of special type attached to $\Ga$.
If $r\in\NN$, $1\leq q\leq p\leq \infty$ and $f\in L^p(\Og)$, then
\begin{equation}\label{inverse:15-1} \Bl\| w_r (f, \cdot, n^{-1})_q \Br\|_{L^p(G)}\leq C_{r,p} n^{-r} \sum_{s=0}^n (s+1)^{r-1} E_s (f)_{L^p(\Og)}.\end{equation}
\end{lem}
\begin{proof} By monotonicity, it is enough to consider the case $q=p$.
It is easily seen from the definition that
\begin{equation}\label{15-1}\|w_r(f,\cdot, t)_p\|_p\leq C_{p,r}\|f\|_p.\end{equation}
Without loss of generality, we may assume that
$$ G:=\{(x,y):\ \ x\in (-b, b)^d,\ \ g(x)-1<y\leq g(x)\},$$
where $b>0$ and $g\in C^2(\RR^d)$. We may also assume that $n\ge N_0$, where $N_0$ is a sufficiently large positive integer depending only on $\Og$, since otherwise~\eqref{inverse:15-1} follows directly from the inequality $\|w_r(f,\cdot, t)_p\|_p\leq C E_0 (f)_p$, which can be obtained from~\eqref{15-1}.
For $0\leq k\leq n$, let $P_k\in\Pi_k^{d+1}$ be such that
$\|f-P_k\|_{L^p(\Og)} =E_k(f)_{L^p(\Og)}$. Let $m\in\NN$ be such that $2^{m-1} \leq n <2^m$. Then by~\eqref{15-1}, we have
\begin{align*}
\Bl\| w_r (f, \cdot, n^{-1})_p \Br\|_{L^p(G)}&\leq \Bl\| w_r (f-P_{2^m}, \cdot, n^{-1})_p\Br\|_{L^p(G)}+\Bl\| w_r (P_{2^m}, \cdot, n^{-1})_p \Br\|_{L^p(G)}\\
&\leq C \|f-P_{2^m}\|_{L^p(\Og)} + \sum_{j=0}^{m-1} \Bl\| w_r (P_{2^{j+1}}-P_{2^j}, \cdot, n^{-1})_p\Br\|_{L^p(G)}.
\end{align*}
Thus, for the proof of~\eqref{inverse:15-1}, it suffices to show that for each $P\in\Pi_k^{d+1}$,
\begin{equation}\label{15-2}
\Bl\| w_r (P, \cdot, n^{-1})_p \Br\|_{L^p(G)}\leq C n^{-r} k^r \|P\|_{L^p(\Og)}.
\end{equation}
To show~\eqref{15-2}, we first recall the following partition
of the domain $\overline{G}$ constructed in Section~\ref{sec:5}: $
\overline{G}=\bigcup_{\bfi\in\Ld_n^d} \bigcup_{j=0}^{n-1} I_{\mathbf{i},j}$, where
\begin{equation}\label{15-4-0}
I_{\mathbf{i},j}:=\Bl\{ (x, y):\ \ x\in \Delta_{\bfi},\ \ g(x)-y\in [\al_{j}, \al_{j+1}]\Br\}
\end{equation}
and
\begin{align*}
\bfi&=(i_1,\dots, i_d)\in \Ld^d_n:=\{ 0, 1,\dots, n-1\}^d\subset \ZZ^d,\\
\Delta_{\bfi}:&=[t_{i_1}, t_{i_1+1}]\times \dots \times [t_{i_{d}}, t_{i_{d}+1}] \ \ \ \text{with}\ \ t_{i}=-b+\f {2i}n b,\\
\al_j:&= \sin^2 (\f {j\pi}{2\ell_1 n})/(\sin^2\f \pi{2\ell_1}), \ \ j=0,1,\dots, \ell_1 n,\\
&\text{ with $\ell_1>1$ being a large integer parameter}.
\end{align*}
As in Section~\ref{Sec:8}, we also define for any two given integer parameters $m_0, m_1>1$,
\begin{align*}
\Delta_{\bfi}^\ast&=\Delta_{\bfi,m_0}^\ast:=[t_{i_1-m_0}, t_{i_1+m_0}]\times [t_{i_2-m_0}, t_{i_2+m_0}]\times \dots\times [t_{i_{d}-m_0}, t_{i_{d}+m_0}],\\
I_{\bfi,j}^\ast:&=I_{\bfi,j, m_0,m_1}^\ast:=\Bl\{ (x, y):\ \ x\in \Delta_{\bfi}^\ast,\ \ \al^\ast_{j-m_1}\leq g(x)-y\leq \al^\ast_{j+m_1}\Br\},
\end{align*}
where
$\al_j^\ast =\al_j$ if $0\leq j\leq n$, $\al_j^\ast =0$ if $j<0$ and $\al_j^\ast =2$ if $j>n$. By Proposition~\ref{metric-lem}, we may choose the parameters $m_0, m_1$ large enough so that
\begin{equation}\label{15-4}U(\xi, n^{-1}) \subset I_{\bfi, j}^\ast\ \ \text{whenever $\xi\in I_{\bfi,j}$}.\end{equation}
Note that for $(\bfi, j)\in\Ld_n^{d+1}$ and $(x,y) \in I_{\bfi,j}^\ast$,
\begin{align}
\al_j& \sim \f {j^2}{n^2} \sim \da(x,y):=g(x)-y,\ \ j\ge 1, \notag\\
\al_{j+1}-\al_j &\leq C \f {j+1}{n^2} \leq \f Cn \vi_n(x,y):=\f C n \bl( \f 1n+\sqrt{\da(x,y)}\br).\label{15-5}
\end{align}
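For the reader's convenience, here is a sketch of how these two estimates follow from the definition of $\al_j$. Set $\theta:=\f{\pi}{2\ell_1 n}$ and use the identity $\sin^2 a-\sin^2 b=\sin(a+b)\sin(a-b)$:
\begin{align*}
\al_j&=\f{\sin^2(j\theta)}{\sin^2\f{\pi}{2\ell_1}}\sim (j\theta)^2 \cdot \f{1}{\theta^2 n^2}\cdot \f{j^2\theta^2}{j^2\theta^2}\sim \f{j^2}{n^2},\qquad 1\leq j\leq \ell_1 n,\\
\al_{j+1}-\al_j&=\f{\sin\bl((2j+1)\theta\br)\,\sin\theta}{\sin^2\f{\pi}{2\ell_1}}\leq C (2j+1)\theta^2\leq C\,\f{j+1}{n^2},
\end{align*}
since $j\theta\in (0, \f\pi2]$ and $\sin x\sim x$ on this range, with constants depending only on $\ell_1$. Combined with $\f jn\lesssim \f1n+\sqrt{\da(x,y)}$ for $(x,y)\in I_{\bfi,j}^\ast$, the second bound gives~\eqref{15-5}.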
Now we turn to the proof of~\eqref{15-2}.
Let $P\in \Pi_k^{d+1}$ and $1\leq p<\infty$. Then using Remark~\ref{rem-6-2}, Proposition~\ref{metric-lem} and~\eqref{15-4}, we have
\begin{align*}
&\Bl\| w_r (P, \cdot, n^{-1})_p \Br\|^p_{L^p(G)}
\leq C \sum_{(\bfi,j)\in\Ld_n^{d+1}} \int_{I_{\bfi,j}} \f 1 {|I_{\bfi,j}^\ast|} \int_{I_{\bfi,j}^\ast(\xi)} |\tr_{(\eta-\xi)/r} ^r (P,\Og,\xi)|^p d\eta \, d\xi,
\end{align*}
where
$I_{\bfi,j}^\ast(\xi)=\{\eta\in I_{\bfi,j}^\ast:\ \ [\xi,\eta]\subset\Og\}.$
Note that by H\"older's inequality,
\begin{align*}
|\tr_{(\eta-\xi)/r}^r (f,\Og, \xi)|^p & \leq \int_{[0,1]^r} \Bl|\p_{(\eta-\xi)/r}^r f(\xi+ r^{-1}(\eta-\xi) (t_1+\dots+t_r))\Br|^p\, dt_1\dots dt_r\\&\leq C \int_{0}^1 \Bl|\p_{\eta-\xi}^r f(\xi+ t(\eta-\xi) )\Br|^p\, dt.
\end{align*}
Thus,
\begin{align}
&\Bl\| w_r (P, \cdot, n^{-1})_p \Br\|^p_{L^p(G)}\notag\\
&\leq C \sum_{(\bfi,j)\in\Ld_n^{d+1}} \int_{I_{\bfi,j}} \f 1 {|I_{\bfi,j}^\ast|} \int_{I_{\bfi,j}^\ast(\xi)} \int_0^1 \Bl|\p_{\eta-\xi}^r P(\xi+ t(\eta-\xi) )\Br|^p\, dt
d\eta \, d\xi.\label{15-6}
\end{align}
To estimate the sum in this last inequality, we shall use the Bernstein inequality stated in Theorem~\ref{cor-11-2}. For convenience, given a parameter $\mu>1$ and two nonnegative integers $l_1, l_2$, we define
\begin{align*}
M_{\mu,n}^{l_1,l_2} f(\xi) := &\max_{ u\in \Xi_{n,\mu}(\xi)}\max_{\zeta\in\sph} \Bl| ( z_\zeta(u)\cdot \nabla )^{l_1}\partial_{d+1}^{l_2}f(\xi)\Br|,\ \ \xi\in G,\ \ f\in C^\infty(\Og), \end{align*}
where $z_\zeta(u)=(\zeta, \p_\zeta g(u))$, and
$$\Xi_{n, \mu} (\xi):= \Bl\{ u\in [-2 b, 2 b]^d:\ \ \|u-\xi_x\|\leq \mu \vi_n(\xi)\Br\}.$$
We choose the parameter $\mu$ large enough so that
$\Delta_{\bfi, 4m_0}^\ast \subset \Xi_{n,\mu}(\xi)$ for any $\xi\in I_{\bfi, j}^\ast$.
By Theorem~\ref{cor-11-2},
we have
\begin{equation}\label{15-7}
\| \vi_n^{l_2} M_{\mu,n}^{l_1,l_2} P\|_{L^p(G^\ast)} \leq C k^{l_1+l_2} \|P\|_{L^p(\Og)},\ \ \ \forall P\in\Pi_k^{d+1}.
\end{equation}
Now fix temporarily $\xi=(\xi_x, \xi_y)\in I_{\bfi,j}$ and $\eta=(\eta_x, \eta_y)\in I_{\bfi, j}^\ast$. Then $\|\xi_x-\eta_x\|\leq \f cn$, and
$$ \eta_y-\xi_y =\eta_y-g(\eta_x)+g(\eta_x)-g(\xi_x)+g(\xi_x) -\xi_y.$$
By the mean value theorem, there exists $u\in [\xi_x, \eta_x]$ such that
$$ |(\eta_y-\xi_y)-\nabla g(u) \cdot (\eta_x-\xi_x)|\leq \al_{j+m_1}^\ast -\al_{j-m_1}^\ast \leq c_1 \f {\vi_n(\xi)}n,$$
where
the last step uses~\eqref{15-5}. Thus, setting $\zeta=n(\eta_x-\xi_x)$,
we have $\|\zeta\|\leq c$ and we may write $\eta-\xi$ in the form
$$ \eta-\xi=\f 1n \Bl(\zeta, \p_{\zeta} g(u) +s\vi_n(\xi)\Br),$$
with
$$ s=\f {n(\eta_y-\xi_y) -\p_\zeta g(u)}{\vi_n(\xi)}\in [-c_1, c_1].$$
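As a quick check that this representation of $\eta-\xi$ is consistent (a reader's verification, comparing coordinates):
\begin{align*}
\f1n\,\zeta&=\eta_x-\xi_x,\\
\f1n\bl(\p_\zeta g(u)+s\,\vi_n(\xi)\br)&=\f1n\Bl(\p_\zeta g(u)+n(\eta_y-\xi_y)-\p_\zeta g(u)\Br)=\eta_y-\xi_y,
\end{align*}
by the choice $\zeta=n(\eta_x-\xi_x)$ and the definition of $s$. Thus $\eta-\xi$ splits into a part tangential to the graph of $g$ and a normal correction of size $s\vi_n(\xi)$, each carrying the factor $\f1n$.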
It follows that
$$ \p_{\eta-\xi} =(\eta-\xi)\cdot \nabla = \f 1n \Bl(\p_{z_\zeta (u)} + s \vi_n(\xi) \p_{d+1}\Br),$$
where $z_\zeta (u) = (\zeta, \p_{\zeta} g(u))$.
This implies that for any $(x,y)\in I_{\bfi,j}^\ast$,
\begin{align*}
|\p_{\eta-\xi}^r P(x,y)|&\leq C n^{-r} \max _{0\leq k\leq r}\vi_n(\xi)^k |\p_{z_\zeta(u)}^{r-k} \p_{d+1}^{k} P(x,y)| \\
&\leq C n^{-r} \max _{0\leq k\leq r}\vi_n(x,y)^k M_{\mu,n}^{r-k,k} P(x,y).\end{align*}
Thus, setting
$$ P_\ast (x,y):=\max _{0\leq k\leq r}\vi_n(x,y)^k M_{\mu,n}^{r-k,k} P(x,y),$$
we obtain from~\eqref{15-6} that
\begin{align*}
&\Bl\| w_r (P, \cdot, n^{-1})_p \Br\|^p_{L^p(G)}
\leq C n^{-rp} \sum_{(\bfi,j)\in\Ld_n^{d+1}} \int_{I_{\bfi,j}} \f 1 {|I_{\bfi,j}^\ast|} \int_{I_{\bfi,j}^\ast(\xi)} \int_0^1 |P_\ast(\xi+t(\eta-\xi))|^p\, dt
d\eta \, d\xi\\
&\leq C n^{-rp} \sum_{(\bfi,j)\in\Ld_n^{d+1}} \int_{I_{\bfi,j, 2m_0, 2m_1}^\ast} |P_\ast(\eta)|^p
d\eta\leq C n^{-rp} \|P_\ast \|_{L^p(G_\ast(2))}^p \leq C \Bl( \f kn\Br)^{rp} \|P\|_{L^p(\Og)}^p,
\end{align*}
where the last step uses~\eqref{15-7}. This proves~\eqref{15-2} for $1\leq p<\infty$.
Finally,~\eqref{15-2} for $p=\infty$ can be proved similarly. This completes the proof of Lemma~\ref{lem-15-1}.
\end{proof}
\begin{lem} \label{lem-15-2} Let $\va\in (0,1)$ and
$\Og^\va:=\{\xi\in \Og:\ \ \dist(\xi, \Ga) >\va\}.$
If $r\in\NN$, $1\leq q\leq p\leq \infty$ and $f\in L^p(\Og)$, then
\begin{equation}\label{inverse:15-8} \Bl\| w_r (f, \cdot, n^{-1})_q \Br\|_{L^p(\Og^\va)}\leq C_{r,p} n^{-r} \sum_{s=0}^n (s+1)^{r-1} E_s (f)_{L^p(\Og)}.\end{equation}
\end{lem}
\begin{proof}
The proof is similar to that of Lemma~\ref{lem-15-1}, and in fact, is simpler. It relies on the following Bernstein inequality,
$$ \|\p^{\b} P\|_{L^p(\Og^\va)} \leq C k^{|\b|} \|P\|_{L^p(\Og)},\ \ \forall P\in\Pi_k^{d+1},\ \ \forall \b\in\ZZ_+^{d+1},$$
which is a direct consequence of the univariate Bernstein inequality~\eqref{markov-bern}.
\end{proof}
Now we are in a position to prove Theorem~\ref{thm-15-1}.
\begin{proof}[Proof of Theorem~\ref{thm-15-1}]
By monotonicity, it suffices to consider the case $p=q$.
By Lemma~\ref{lem-2-1-18}, there exist $\va\in (0,1)$ and domains $G_1,\dots, G_{m_0}\subset \Og$ of special type attached to $\Ga$ such that
$$\Ga_\va:=\{ \xi\in\Og:\ \ \dist(\xi, \Ga) \leq \va\} \subset \bigcup_{j=1}^{m_0} G_j.$$
Setting $\Og^\va:=\Og\setminus \Ga_\va$, we have
\begin{align*}
\tau_r(f, n^{-1})_{p,p} &\leq \sum_{j=1}^{m_0} \|w_r (f, \cdot, n^{-1})_p\|_{L^p(G_j)} + \|w_r (f, \cdot, n^{-1})_p\|_{L^p(\Og^\va)},
\end{align*}
which, using Lemmas~\ref{lem-15-1} and~\ref{lem-15-2}, is estimated above by a constant multiple of
\begin{align*}
n^{-r}\sum_{s=0}^n (s+1)^{r-1} E_s (f)_{L^p(\Og)}.
\end{align*}
This completes the proof.
\end{proof}
\section*{Acknowledgement}
{The first named author would like to thank Professor K. G. Ivanov for kindly explaining the work in \cite{Iv} to him.}
\begin{bibsection}
\begin{biblist}
\bib{BS}{book}{
author={Bennett, Colin},
author={Sharpley, Robert},
title={Interpolation of operators},
series={Pure and Applied Mathematics},
volume={129},
publisher={Academic Press, Inc., Boston, MA},
date={1988},
pages={xiv+469},
}
\bib{Ba}{article}{
author={Baran, M.},
title={Bernstein type theorems for compact sets in ${\bf R}^n$},
journal={J. Approx. Theory},
volume={69},
date={1992},
number={2},
pages={156--166},}
\bib{BE}{book}{
author={Borwein, P.},
author={Erd\'{e}lyi, T.},
title={Polynomials and polynomial inequalities},
series={Graduate Texts in Mathematics},
volume={161},
publisher={Springer-Verlag, New York},
date={1995},
}
\bib{CD}{article}{
author={Chen, W.},
author={Ditzian, Z.},
title={Mixed and directional derivatives},
journal={Proc. Amer. Math. Soc.},
volume={108},
date={1990},
number={1},
pages={177--185},
}
\bib{Co-Sa}{article}{
author={Constantine, G. M.},
author={Savits, T. H.},
title={A multivariate Fa\`a di Bruno formula with applications},
journal={Trans. Amer. Math. Soc.},
volume={348},
date={1996},
number={2},
pages={503--520},
}
\bib{Da06}{article}{
author={Dai, Feng},
title={Multivariate polynomial inequalities with respect to doubling
weights and $A_\infty$ weights},
journal={J. Funct. Anal.},
volume={235},
date={2006},
number={1},
pages={137--170},
}
\bib{Da-Pr-Bernstein}{article}{
author={Dai, Feng},
author={Prymak, Andriy},
title={$L^p$-Bernstein inequalities on $C^2$-domains and applications to
discretization},
journal={Trans. Amer. Math. Soc.},
volume={375},
date={2022},
number={3},
pages={1933--1976},
issn={0002-9947},
review={\MR{4378085}},
doi={10.1090/tran/8550},
}
\bib{Da-Pr-Whitney}{article}{
author={Dai, F.},
author={Prymak, A.},
title={On directional Whitney inequality},
journal={Canadian Journal of Mathematics},
volume={74},
number={3},
date={2022},
pages={833--857},
doi={10.4153/S0008414X21000110},
}
\bib{DPTT}{article}{
author={Dai, F.},
author={Prymak, A.},
author={Temlyakov, V. N.},
author={Tikhonov, S. Yu.},
title={Integral norm discretization and related problems},
language={Russian, with Russian summary},
journal={Uspekhi Mat. Nauk},
volume={74},
date={2019},
number={4(448)},
pages={3--58},
}
\bib{DX2}{book}{
author={Dai, F.},
author={Xu, Y.},
title={Approximation theory and harmonic analysis on spheres and balls},
series={Springer Monographs in Mathematics},
publisher={Springer, New York},
date={2013},
pages={xviii+440},
}
\bib{DX}{article}{
author={Dai, F.},
author={Xu, Y.},
title={Moduli of smoothness and approximation on the unit sphere and the
unit ball},
journal={Adv. Math.},
volume={224},
date={2010},
number={4},
pages={1233--1310},
}
\bib{De-Le}{article}{
author={Dekel, S.},
author={Leviatan, D.},
title={Whitney estimates for convex domains with applications to
multivariate piecewise polynomial approximation},
journal={Found. Comput. Math.},
volume={4},
date={2004},
number={4},
pages={345--368},
}
\bib{De-Lo}{book}{
author={DeVore, Ronald A.},
author={Lorentz, George G.},
title={Constructive approximation},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={303},
publisher={Springer-Verlag, Berlin},
date={1993},
pages={x+449},
}
\bib{Di96}{article}{
author={Ditzian, Z.},
title={Polynomial approximation in $L_p(S)$ for $p>0$},
journal={Constr. Approx.},
volume={12},
date={1996},
number={2},
pages={241--269},
}
\bib{Dit07}{article}{
author={Ditzian, Z.},
title={Polynomial approximation and $\omega^r_\phi(f,t)$ twenty years
later},
journal={Surv. Approx. Theory},
volume={3},
date={2007},
pages={106--151},
}
\bib{Di14a}{article}{
author={Ditzian, Z.},
title={New moduli of smoothness on the unit ball and other domains,
introduction and main properties},
journal={Constr. Approx.},
volume={40},
date={2014},
number={1},
pages={1--36},
}
\bib{Di14b}{article}{
author={Ditzian, Z.},
title={New moduli of smoothness on the unit ball, applications and
computability},
journal={J. Approx. Theory},
volume={180},
date={2014},
pages={49--76},
}
\bib{Di-Pr08}{article}{
author={Ditzian, Z.},
author={Prymak, A.},
title={Ul$\prime$yanov-type inequality for bounded convex sets in $R^d$},
journal={J. Approx. Theory},
volume={151},
date={2008},
number={1},
pages={60--85},
}
\bib{Di-Pr16}{article}{
author={Ditzian, Z.},
author={Prymak, A.},
title={On Nikol'skii inequalities for domains in $\mathbb{R}^d$},
journal={Constr. Approx.},
volume={44},
date={2016},
number={1},
pages={23--51},
}
\bib{Di-To}{book}{
author={Ditzian, Z.},
author={Totik, V.},
title={Moduli of smoothness},
series={Springer Series in Computational Mathematics},
volume={9},
publisher={Springer-Verlag, New York},
date={1987},
pages={x+227},
isbn={0-387-96536-X},
}
\bib{Du}{article}{
author={Dubiner, M.},
title={The theory of multi-dimensional polynomial approximation},
journal={J. Anal. Math.},
volume={67},
date={1995},
pages={39--116},
issn={0021-7670},
}
\bib{Dz-Ko}{article}{
author={Dzjadyk, V. K.},
author={Konovalov, V. N.},
title={A method of partition of unity in domains with piecewise smooth
boundary into a sum of algebraic polynomials of two variables that have
certain kernel properties},
language={Russian},
journal={Ukrain. Mat. Z.},
volume={25},
date={1973},
pages={179--192, 285},
}
\bib{Er}{article}{
author={Erd\'{e}lyi, T.},
title={Notes on inequalities with doubling weights},
journal={J. Approx. Theory},
volume={100},
date={1999},
number={1},
pages={60--72},
}
\bib{Er2}{article}{
author={Erd\'{e}lyi, T.},
title={Arestov's theorems on Bernstein's inequality},
journal={J. Approx. Theory},
volume={250},
date={2020},
pages={105323, 9},
}
\bib{Iv}{article}{
author={Ivanov, K. G.},
title={Approximation of functions of two variables by algebraic
polynomials. I},
conference={
title={Anniversary volume on approximation theory and functional
analysis},
address={Oberwolfach},
date={1983},
},
book={
series={Internat. Schriftenreihe Numer. Math.},
volume={65},
publisher={Birkh\"auser, Basel},
},
date={1984},
pages={249--255},
}
\bib{Iv2}{article}{
author={Ivanov, K.G.},
title={A characterization of weighted Peetre K-functionals},
journal={J. Approx.
Theory},
volume={56},
date={1989},
number={1},
pages={185-211},
}
\bib{ITo}{article}{
author={Ivanov, K. G.},
author={Totik, V.},
title={Fast decreasing polynomials},
journal={Constr. Approx.},
volume={6},
date={1990},
number={1},
pages={1--20},
}
\bib{KNT}{article}{
author={Kalmykov, S.},
author={Nagy, B.},
author={Totik, V.},
title={Bernstein- and Markov-type inequalities for rational functions},
journal={Acta Math.},
volume={219},
date={2017},
number={1},
pages={21--63},
}
\bib{KL}{article}{
author={Kobindarajah, C. K.},
author={Lubinsky, D. S.},
title={$L_p$ Markov-Bernstein inequalities on all arcs of the circle},
journal={J. Approx. Theory},
volume={116},
date={2002},
number={2},
pages={343--368},
}
\bib{Kr09}{article}{
author={Kro\'{o}, Andr\'{a}s},
title={On Bernstein-Markov-type inequalities for multivariate polynomials
in $L_q$-norm},
journal={J. Approx. Theory},
volume={159},
date={2009},
number={1},
pages={85--96},
}
\bib{Kr2}{article}{
author={Kro\'{o}, Andr\'{a}s},
title={On optimal polynomial meshes},
journal={J. Approx. Theory},
volume={163},
date={2011},
number={9},
pages={1107--1124},
}
\bib{Kr13}{article}{
author={Kro\'{o}, Andr\'{a}s},
title={Bernstein type inequalities on star-like domains in $\Bbb{R}^d$
with application to norming sets},
journal={Bull. Math. Sci.},
volume={3},
date={2013},
number={3},
pages={349--361},
}
\bib{Kr-Re}{article}{
author={Kro\'{o}, Andr\'{a}s},
author={R\'{e}v\'{e}sz, Szil\'{a}rd},
title={On Bernstein and Markov-type inequalities for multivariate
polynomials on convex bodies},
journal={J. Approx. Theory},
volume={99},
date={1999},
number={1},
pages={134--152},
}
\bib{Lu1}{article}{
author={Lubinsky, D. S.},
title={Marcinkiewicz-Zygmund inequalities: methods and results},
conference={
title={Recent progress in inequalities},
address={Ni\v{s}},
date={1996},
},
book={
series={Math. Appl.},
volume={430},
publisher={Kluwer Acad. Publ., Dordrecht},
},
date={1998},
pages={213--240},
}
\bib{Lu2}{article}{
author={Lubinsky, D. S.},
title={On Marcinkiewicz-Zygmund inequalities at Jacobi zeros and their
Bessel function cousins},
conference={
title={Complex analysis and dynamical systems VII},
},
book={
series={Contemp. Math.},
volume={699},
publisher={Amer. Math. Soc., Providence, RI},
},
date={2017},
pages={223--245},
}
\bib{Lu3}{article}{
author={Lubinsky, D. S.},
title={On sharp constants in Marcinkiewicz-Zygmund and Plancherel-Polya
inequalities},
journal={Proc. Amer. Math. Soc.},
volume={142},
date={2014},
number={10},
pages={3575--3584},
issn={0002-9939},
}
\bib{MT2}{article}{
author={Mastroianni, G.},
author={Totik, V.},
title={Weighted polynomial inequalities with doubling and $A_\infty$
weights},
journal={Constr. Approx.},
volume={16},
date={2000},
number={1},
pages={37--71},
}
\bib{MK}{article}{
author={De Marchi, S.},
author={Kro\'{o}, A.},
title={Marcinkiewicz-Zygmund type results in multivariate domains},
journal={Acta Math. Hungar.},
volume={154},
date={2018},
number={1},
pages={69--89},
}
\bib{Ne}{article}{
author={Netrusov, Yu. V.},
title={Structural description of functions defined in a plane convex
domain that have a given order of approximation by algebraic polynomials},
language={Russian, with English and Russian summaries},
journal={Zap. Nauchn. Sem. S.-Peterburg. Otdel. Mat. Inst. Steklov.
(POMI)},
volume={215},
date={1994},
number={Differentsial\cprime naya Geom. Gruppy Li i Mekh. 14},
pages={217--225, 313--314},
issn={0373-2703},
translation={
journal={J. Math. Sci. (New York)},
volume={85},
date={1997},
number={1},
pages={1698--1703},
},
}
\bib{Ni}{article}{ author={Nikol’skii, S.M.}, title={On the best approximation by polynomials of functions which
satisfy Lipschitz condition}, journal={ Izv. Akad. Nauk SSSR}, volume={10},
date={1946},
pages={295-318}, }
\bib{To14}{article}{
author={Totik, V.},
title={Polynomial approximation on polytopes},
journal={Mem. Amer. Math. Soc.},
volume={232},
date={2014},
number={1091},
pages={vi+112},
issn={0065-9266},
isbn={978-1-4704-1666-9},
}
\bib{To17}{article}{
author={Totik, Vilmos},
title={Polynomial approximation in several variables},
journal={J. Approx. Theory},
volume={252},
date={2020},
pages={105364, 44},
}
\bib{Wa}{article}{
author={Walther, G.},
title={On a generalization of Blaschke's rolling theorem and the
smoothing of surfaces},
journal={Math. Methods Appl. Sci.},
volume={22},
date={1999},
number={4},
pages={301--316},
}
\end{biblist}
\end{bibsection}
\end{document}
https://arxiv.org/abs/0906.3132 | Constructing matrix geometric means | In this paper, we analyze the process of "assembling" new matrix geometric means from existing ones, through function composition or limit processes. We show that for n=4 a new matrix mean exists which is simpler to compute than the existing ones. Moreover, we show that for n>4 the existing proving strategies cannot provide a mean computationally simpler than the existing ones. | \section{Introduction}
\paragraph{Literature review} In the last few years, several papers have been devoted to defining a proper way to generalize the concept of geometric mean to $n \geq 3$ Hermitian, positive definite $m \times m$ matrices. A seminal paper by Ando, Li and Mathias \cite{alm} defined the mathematical problem by stating ten properties that a ``good'' matrix geometric mean should satisfy. However, these properties do not uniquely define a multivariate matrix geometric mean; thus several different definitions have appeared in the literature.
Ando, Li and Mathias \cite{alm} first proposed a mean whose definition for $n$ matrices is based on a limit process involving several geometric means of $n-1$ matrices. Later Bini, Meini and Poloni \cite{bmp-means} noted that the slow convergence speed of this method prevents its use in applications; its main shortcoming is the fact that its complexity grows as $O(n!)$ with the number of involved matrices. In the same paper, they proposed a similar limit process with increased convergence speed, but still with complexity $O(n!)$. P\'alfia \cite{palfia} proposed a mean based on a similar process involving only means of $2$ matrices, and thus much simpler and cheaper to compute, but lacking property P3 (permutation invariance) from the ALM list. Lim \cite{lim} proposed a family of matrix geometric means that are based on an iteration requiring at each step the computation of a mean of $m\geq n$ matrices. Since the computational complexity for all known means greatly increases with $n$, the resulting family is useful as an example but highly impractical for numerical computations.
At the same time, Moakher \cite{moakher-simax,moakher} and Bhatia and Holbrook \cite{bhatiahol,bhatiabook} proposed a completely different definition, which we shall call the \emph{Riemannian centroid} of $A_1,A_2,\dots,A_n$. The Riemannian centroid $G^R(A_1,A_2,\dots,A_n)$ is defined as the minimizer of a sum of squared distances,
\begin{equation}\label{cartan1}
G^{R}(A_1,A_2,\dots,A_n)=\arg\min_X \sum_{i=1}^n \delta^2(A_i,X),
\end{equation}
where $\delta$ is the geodesic distance induced by a natural Riemannian metric on the space of symmetric positive definite matrices. The same $X$ is the unique solution of the equation
\begin{equation}\label{cartan2}
\sum_{i=1}^n \log(A_i^{-1}X)=0,
\end{equation}
involving the matrix logarithm function. While most of the ALM properties are easy to prove for the Riemannian centroid, it is still an open problem whether it satisfies P4 (monotonicity). The computational experiments performed up to now have given no counterexamples, but, to our knowledge, the monotonicity of the Riemannian centroid is still a conjecture \cite{bhatiahol}.
Moreover, while the other means had constructive definitions, it is not apparent how to compute the solution to either \eqref{cartan1} or \eqref{cartan2}. Two methods have been proposed, one based on a fixed-point iteration \cite{moakher} and one on Newton's method on manifolds \cite{palfia,moakher}. Although both seem to work well on ``tame'' examples, their computational results show a fast degradation of the convergence behavior as the number of matrices and their dimension increase. It is unclear whether there is convergence at all on more complicated examples; unlike the other means, these iterative processes have no convergence proof, as far as we know.
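To make the discussion concrete, the following self-contained sketch (our own, in NumPy; not necessarily the exact scheme of \cite{moakher}) implements one natural fixed-point iteration for \eqref{cartan1}: the iterate $X$ is updated through the average of the logarithms $\log(X^{-1/2}A_iX^{-1/2})$, and at a fixed point this average vanishes, a condition equivalent to \eqref{cartan2} by a similarity transformation.

```python
import numpy as np

def sym_fun(S, f):
    """Apply a scalar function f to a symmetric matrix S via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * f(w)) @ V.T

def riemannian_centroid(matrices, tol=1e-12, max_iter=200):
    """Fixed-point iteration
        X <- X^{1/2} exp( (1/n) sum_i log(X^{-1/2} A_i X^{-1/2}) ) X^{1/2};
    at a fixed point the average of the logarithms vanishes, which is
    equivalent to the gradient condition (cartan2)."""
    n = len(matrices)
    X = sum(matrices) / n                     # arithmetic mean as starting guess
    for _ in range(max_iter):
        Xh = sym_fun(X, np.sqrt)              # X^{1/2}
        Xih = sym_fun(X, lambda w: w**-0.5)   # X^{-1/2}
        S = sum(sym_fun(Xih @ A @ Xih, np.log) for A in matrices) / n
        X_new = Xh @ sym_fun(S, np.exp) @ Xh
        if np.linalg.norm(X_new - X) <= tol * np.linalg.norm(X):
            return X_new
        X = X_new
    return X
```

On commuting arguments this iteration reproduces $(A_1\cdots A_n)^{1/n}$, as P1 requires of the centroid.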
\paragraph{Notations}
Let us denote by $\spd{m}$ the space of Hermitian positive-definite $m \times m$ matrices. For all $A,B \in \spd{m}$, we shall say that $A<B$ ($A\leq B$) if $B-A$ is positive definite (semidefinite). With $A^*$ we denote the conjugate transpose of $A$.
We shall say that $\underline A=(A_i)_{i=1}^n \in (\spd{m})^n$ is a \emph{scalar} $n$-tuple of matrices if $A_1=A_2=\dots=A_n$. We shall use the convention that both $Q(\underline A)$ and $Q(A_1,\dots,A_n)$ denote the application of the map $Q:(\spd{m})^n \to \spd{m}$ to the $n$-tuple $\underline A$.
\paragraph{ALM properties}
Ando, Li and Mathias \cite{alm} introduced ten properties defining when a map $G:(\spd{m})^n \to \spd{m}$ can be called a \emph{geometric mean}. Following their paper, we report here the properties for $n=3$ only, for the sake of simplicity; the generalization to different values of $n$ is straightforward.
\begin{description}
\item [P1] (consistency with scalars) If $A$, $B$, $C$ commute then $G(A,B,C)=(ABC)^{1/3}$.
\item [P1'] This implies $G(A,A,A)=A$.
\item [P2] (joint homogeneity) $G(\alpha A, \beta B, \gamma C)=(\alpha\beta\gamma)^{1/3}G(A,B,C)$, for each $\alpha, \beta, \allowbreak \gamma > 0$.
\item [P2'] This implies $G(\alpha A, \alpha B, \alpha C)=\alpha G(A,B,C)$.
\item [P3] (permutation invariance) $G(A,B,C)=G(\pi(A,B,C))$ for all the permutations $\pi(A,B,C)$ of $A$, $B$, $C$.
\item [P4] (monotonicity) $G(A,B,C) \geq G(A',B',C')$ whenever $A \geq A'$, $B \geq B'$, $C \geq C'$.
\item [P5] (continuity from above) If $A_n$, $B_n$, $C_n$ are monotonically decreasing sequences converging to $A$, $B$, $C$, respectively, then $G(A_n,B_n,C_n)$ converges to $G(A,B,C)$.
\item [P6] (congruence invariance) $G(S^*AS, S^*BS, S^*CS)=S^*G(A,B,C)S$ for any nonsingular $S$.
\item [P7] (joint concavity) If $A=\lambda A_1+ (1-\lambda)A_2$, $B=\lambda B_1+ (1-\lambda)B_2$, $C=\lambda C_1+ (1-\lambda)C_2$, then $G(A,B,C) \geq \lambda G(A_1,B_1,C_1)+(1-\lambda)G(A_2,B_2,C_2)$.
\item [P8] (self-duality) $G(A,B,C)^{-1}=G(A^{-1},B^{-1},C^{-1})$.
\item [P9] (determinant identity) $\det G(A,B,C)=(\det A \det B \det C)^{1/3}$.
\item [P10] (arithmetic--geometric--harmonic mean inequality)
\[
\frac{A+B+C}3 \geq G(A,B,C) \geq \left(\frac{A^{-1}+B^{-1}+C^{-1}}3 \right)^{-1}.
\]
\end{description}
\paragraph{The matrix geometric mean for $n=2$}
For $n=2$, the ALM properties uniquely define a matrix geometric mean which can be expressed explicitly as
\begin{equation}\label{defssharp}
A \ssharp B := A(A^{-1}B)^{1/2}.
\end{equation}
This is a particular case of the more general map
\begin{equation}\label{defssharpt}
A \ssharp_t B := A(A^{-1}B)^t, \quad t \in \mathbb{R},
\end{equation}
which has a geometrical interpretation as the parametrization of the geodesic joining $A$ and $B$ for a certain Riemannian geometry on $\spd{m}$ \cite{bhatiabook}.
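For concreteness, \eqref{defssharpt} can be evaluated in the algebraically equivalent congruence form $A^{1/2}(A^{-1/2}BA^{-1/2})^tA^{1/2}$, which only requires powers of Hermitian positive definite matrices; a sketch in NumPy (our own illustration, not code from the literature):

```python
import numpy as np

def spd_power(S, t):
    """S^t for a Hermitian positive definite matrix S, via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.conj().T

def sharp_t(A, B, t=0.5):
    """A #_t B = A (A^{-1} B)^t, evaluated in the equivalent congruence form
    A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}, which keeps every factor
    Hermitian positive definite."""
    Ah = spd_power(A, 0.5)
    Aih = spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, t) @ Ah
```

For $t=1/2$ this is \eqref{defssharp}; on commuting arguments it reduces to $A^{1-t}B^t$, and the symmetry $A\ssharp B=B\ssharp A$ can be checked numerically.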
\paragraph{The ALM and BMP means} Ando, Li and Mathias \cite{alm} recursively define a matrix geometric mean $G^{ALM}_n$ of $n$ matrices in this way. The mean $G^{ALM}_2$ of two matrices coincides with \eqref{defssharp}; for $n \geq 3$, suppose the mean of $n-1$ matrices $G^{ALM}_{n-1}$ is already defined. Given $A_1,\dots,A_n$, compute for each $j=1,2,\dots$
\begin{equation}\label{defalm}
A_i^{(j+1)}:=G^{ALM}_{n-1}(A_1^{(j)}, A_2^{(j)}, \dots A_{i-1}^{(j)}, A_{i+1}^{(j)},\dots A_n^{(j)}) \quad i=1,\dots,n,
\end{equation}
where $A^{(0)}_i:=A_i$, $i=1,\dots n$.
The sequences $(A^{(j)}_i)_{j=1}^{\infty}$ converge to a common (not depending on $i$) matrix, and this matrix is a geometric mean of $A^{(0)}_1,\dots,A^{(0)}_n$.
The mean proposed by Bini, Meini and Poloni \cite{bmp-means} is defined in the same way, but with \eqref{defalm} replaced by
\begin{equation}\label{defbmp}
A_i^{(j+1)}:=G^{BMP}_{n-1}(A_1^{(j)}, A_2^{(j)}, \dots A_{i-1}^{(j)}, A_{i+1}^{(j)},\dots A_n^{(j)}) \ssharp_{1/n} A_i \quad i=1,\dots,n.
\end{equation}
Though both maps satisfy the ALM properties, matrices $A,B,C$ exist for which $G^{ALM}(A,B,C) \neq G^{BMP}(A,B,C)$.
While the former iteration converges linearly, the latter converges cubically, and thus allows one to compute a matrix geometric mean in fewer iterations. In fact, if we call $p_k$ the average number of iterations that the process giving a mean of $k$ matrices takes to converge (which may vary significantly depending on the starting matrices), the total computational cost of the ALM and BMP means can be expressed as $O(n! p_3 p_4 \dots p_n m^3)$. The only difference between the two complexity bounds lies in the expected magnitude of the values $p_k$. The presence of a factorial and of a linear number of factors $p_k$ is undesirable, since it means that the problem scales very badly with $n$. In fact, already with $n=7,8$ and moderate values of $m$, a large amount of CPU time is generally needed to compute a matrix geometric mean \cite{bmp-means}.
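For illustration, both recursions can be sketched uniformly in a few lines of NumPy (our own code; it reads the last factor of \eqref{defbmp} as the current iterate $A_i^{(j)}$, and makes no attempt at the refinements used in practice):

```python
import numpy as np

def spd_power(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def sharp_t(A, B, t=0.5):
    # A #_t B, in the congruence form A^{1/2} (A^{-1/2} B A^{-1/2})^t A^{1/2}
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, t) @ Ah

def recursive_mean(matrices, bmp=False, tol=1e-10, max_iter=500):
    """G^{ALM} (bmp=False) or G^{BMP} (bmp=True): iterate until the n-tuple
    becomes numerically scalar, recursing on means of n-1 matrices."""
    n = len(matrices)
    if n == 1:
        return matrices[0]
    if n == 2:
        return sharp_t(matrices[0], matrices[1])
    A = list(matrices)
    for _ in range(max_iter):
        A_new = []
        for i in range(n):
            G = recursive_mean(A[:i] + A[i + 1:], bmp=bmp, tol=tol)
            if bmp:
                G = sharp_t(G, A[i], 1.0 / n)   # the extra #_{1/n} step of the BMP recursion
            A_new.append(G)
        if max(np.linalg.norm(X - Y) for X, Y in zip(A, A_new)) < tol:
            return A_new[0]
        A = A_new
    return A[0]
```

On commuting arguments both variants return $(A_1\cdots A_n)^{1/n}$; the BMP variant reaches it in far fewer outer iterations, in line with its cubic convergence.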
\paragraph{The P\'alfia mean}
P\'alfia \cite{palfia} proposed to consider the following iteration. Let again $A^{(0)}_i:=A_i,\,i=1,\dots, n$. Let us define
\begin{equation}\label{palfiamean}
A^{(k+1)}_i:=A^{(k)}_i \ssharp A^{(k)}_{i+1},\quad i=1,\dots,n,
\end{equation}
where the indices are taken modulo $n$, i.e., $A^{(k)}_{n+1}=A^{(k)}_1$ for all $k$. We point out that the definition in the original paper \cite{palfia} is slightly different, as it considers several possible orderings of the input matrices, but the means defined there can be put in the form \eqref{palfiamean} up to a permutation of the starting matrices $A_1,\dots,A_n$.
As for the previous means, it can be proved that the iteration \eqref{palfiamean} converges to a scalar $n$-tuple; we call the common limit of all components $G^P(A_1,\dots,A_n)$. As we noted above, this function does not satisfy P3 (permutation invariance), and thus it is not a geometric mean in the ALM sense.
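A sketch of iteration \eqref{palfiamean} in NumPy (our own illustration):

```python
import numpy as np

def spd_power(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def sharp(A, B):
    # A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, 0.5) @ Ah

def palfia_mean(matrices, tol=1e-12, max_iter=1000):
    """Cyclic iteration A_i <- A_i # A_{i+1} (indices modulo n); returns the
    common limit of the n sequences."""
    A = list(matrices)
    n = len(A)
    for _ in range(max_iter):
        A_new = [sharp(A[i], A[(i + 1) % n]) for i in range(n)]
        if max(np.linalg.norm(X - Y) for X, Y in zip(A, A_new)) < tol:
            return A_new[0]
        A = A_new
    return A[0]
```

A cyclic shift of the input merely relabels the $n$ sequences, so the limit is invariant under cyclic permutations; general permutations, however, may change it, which is the failure of P3 mentioned above.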
\paragraph{Other composite means} Apart from the Riemannian centroid, all the other definitions follow the same pattern:
\begin{itemize}
\item build new functions of $n$ matrices by taking nested compositions of the existing means---preferably using only means of fewer than $n$ matrices;
\item take the common limit of a set of $n$ functions defined as in the above step.
\end{itemize}
The possibilities for defining new iterations following this pattern are endless. Ando, Li, Mathias, and Bini, Meini, Poloni chose to use in the first step composite functions using computationally expensive means of $n-1$ matrices; this led to poor convergence results. P\'alfia chose instead to use more economical means of two variables as starting points; this led to better convergence (no $O(n!)$), but to a function which is not symmetric with respect to permutations of its entries (P3, permutation invariance).
As we shall see in the following, the property P3 is crucial: all the other ones are easily proved for a mean defined as composition/limit of existing means.
A natural question to ask is whether we can build a matrix geometric mean of $n$ matrices as the composition of matrix means of fewer matrices, without the need of a limit process. Two such unsuccessful attempts are reported in the paper by Ando, Li and Mathias \cite{alm}, as examples of the fact that it is not easy to define a matrix mean satisfying P1--P10. The first is
\begin{equation}\label{g4rec}
G^{4rec}(A,B,C,D):=(A \ssharp B) \ssharp (C \ssharp D).
\end{equation}
Unfortunately, there are matrices such that $(A \ssharp B) \ssharp (C\ssharp D)\neq (A \ssharp C) \ssharp (B \ssharp D)$, so P3 fails. A second attempt is
\begin{equation}\label{grec}
G^{rec}(A,B,C):=(A^{4/3}\ssharp B^{4/3}) \ssharp C^{2/3},
\end{equation}
where the exponents are chosen so that P1 (consistency with scalars) is satisfied. Again, this function is not symmetric in its arguments, and thus fails to satisfy P3.
A second natural question is whether an iterative scheme such as the ones for $G^{ALM}$, $G^{BMP}$ and $G^P$ can yield P3 without having an $O(n!)$ computational cost. For example, if we could build a scheme similar to the ALM and BMP ones, but using only means of $\frac{n}{2}$ matrices in the recursion, then the $O(n!)$ growth would disappear.
In this paper, we aim to analyze in more detail the process of ``assembling'' new matrix means from the existing ones, and show which new means can be found, and what cannot be done because of group-theoretical obstructions related to the symmetry properties of the composed functions. By means of a group-theoretical analysis, we will show that for $n=4$ a new matrix mean exists which is simpler to compute than the existing ones; numerical experiments show that the new definition leads to a significant computational advantage. Moreover, we will show that for $n>4$ the existing strategies of composing matrix means and taking limits cannot provide a mean which is computationally simpler than the existing ones.
\section{Quasi-means and notation}
\paragraph{Quasi-means}
Let us introduce the following variants to some of the Ando--Li--Mathias properties.
\begin{description}
\item [P1''] Weak consistency with scalars. There are $\alpha, \beta, \gamma \in \mathbb{R}$ such that if $A,B,C$ commute, then $G(A,B,C)=A^\alpha B^\beta C^\gamma$.
\item[P2''] Weak homogeneity. There are $\alpha, \beta, \gamma \in \mathbb{R}$ such that for each $r,s,t>0$, $G(rA,sB,tC)=r^\alpha s^\beta t^\gamma G(A,B,C)$. Notice that if P1'' holds as well, these must be the same $\alpha, \beta, \gamma$ (proof: substitute scalar values in P1'').
\item[P9'] Weak determinant identity. For all $d>0$, if $\det A=\det B=\det C=d$, then $\det G(A,B,C)=d$.
\end{description}
We shall call a \emph{quasi-mean} a function $Q:(\spd{m})^n \to \spd{m}$ that satisfies P1'', P2'', P4, P6, P7, P8, and P9'. This models expressions which are built starting from basic matrix means but are not symmetric, e.g., $A \ssharp G(B, C, D \ssharp E)$, \eqref{g4rec}, and \eqref{grec}.
\begin{theorem}
If a quasi-mean $Q$ satisfies P3 (permutation invariance), then it is a geometric mean.
\end{theorem}
\begin{proof}
From P2'' and P3, it follows that $\alpha=\beta=\gamma$. From P9', it follows that if $\det A=\det B = \det C=1$,
\[
2^m=\det Q(2A,2B,2C)=\det\left(2^{\alpha+\beta+\gamma} Q(A,B,C)\right)=2^{m(\alpha+\beta+\gamma)},
\]
thus $\alpha+\beta+\gamma=1$. The two relations combined together yield $\alpha=\beta=\gamma=1/3$. Finally, it is proved in Ando, Li and Mathias \cite{alm} that P5 and P10 are implied by the other eight properties P1--P4 and P6--P9.
\end{proof}
For two quasi-means $Q$ and $R$ of $n$ matrices, we shall write $Q=R$ if $Q(\underline A)=R(\underline A)$ for each $n$-tuple $\underline A \in (\spd{m})^n$.
\paragraph{Group theory notation}
The notation $H \leq G$ ($H < G$) means that $H$ is a subgroup (proper subgroup) of $G$. Let us denote by $\Sym{n}$ the symmetric group on $n$ elements, i.e., the group of all permutations of the set $\{1,2,\dots,n\}$. As usual, the symbol $(a_1 a_2 a_3 \dots a_k)$ stands for the permutation (``cycle'') that maps $a_1 \mapsto a_2$, $a_2 \mapsto a_3$, \dots $a_{k-1} \mapsto a_k$, $a_k \mapsto a_1$ and leaves the other elements of $\{1,2,\dots n\}$ unchanged. Different symbols in the above form can be chained to denote the group operation of function composition; for instance, $\sigma=(13)(24)$ is the permutation $(1,2,3,4)\mapsto (3,4,1,2)$. We shall denote by $\Alt{n}$ the alternating group on $n$ elements, i.e., the only subgroup of index 2 of $\Sym{n}$, and by $\Di{n}$ the dihedral group over $n$ elements, with cardinality $2n$. The latter is identified with the subgroup of $\Sym{n}$ generated by the rotation $(1,2,\dots,n)$ and the mirror symmetry $(2,n)(3,n-1)\cdots(n/2,n/2+2)$ (for even values of $n$) or $(2,n)(3,n-1)\cdots((n+1)/2,(n+3)/2)$ (for odd values of $n$).
\paragraph{Coset transversals}
Let now $H \leq \Sym{n}$, and let $\{\sigma_1,\dots, \sigma_r\} \subset \Sym{n}$ be a transversal for the right cosets $H\sigma$, i.e., a set of maximal cardinality $r=n!/|H|$ such that $\sigma_j\sigma_i^{-1} \not \in H$ for all $i \neq j$. The group $\Sym{n}$ acts by permutation over the cosets $(H\sigma_1,\dots, H\sigma_r)$, i.e., for each $\sigma$ there is a permutation $\tau=\rho_H(\sigma)$ such that
\[
(H\sigma_1\sigma,\dots,H\sigma_r\sigma)=(H\sigma_{\tau(1)},\dots,H\sigma_{\tau(r)}).
\]
It is easy to check that in this case $\rho_H:\Sym{n}\to\Sym{r}$ must be a group homomorphism. Notice that if $H$ is a normal subgroup of $\Sym{n}$, then the action of $\Sym{n}$ over the coset space is represented by the quotient group $\Sym{n}/H$, and the kernel of $\rho_H$ is $H$.
\begin{example}
The coset space of $H=\Di{4}$ has size $4!/8=3$, and a possible transversal is $\sigma_1=e$, $\sigma_2=(12)$, $\sigma_3=(14)$. We have $\rho_H(\Sym{4}) \cong \Sym{3}$: indeed, the permutation $\sigma=(12)\in\Sym{4}$ is such that $(H\sigma_1\sigma,H\sigma_2\sigma,H\sigma_3\sigma)=(H\sigma_2,H\sigma_1,H\sigma_3)$, and therefore $\rho_H(\sigma)=(12)$, while the permutation $\tilde\sigma=(14)\in\Sym{4}$ is such that $(H\sigma_1\tilde\sigma,H\sigma_2\tilde\sigma,H\sigma_3\tilde\sigma)=(H\sigma_3,H\sigma_2,H\sigma_1)$, therefore $\rho_H(\tilde\sigma)=(13)$. Thus $\rho_H(\Sym{4})$ must be a subgroup of $\Sym{3}$ containing $(12)$ and $(13)$, that is, $\Sym{3}$ itself.
\end{example}
With the same technique, noting that $\sigma_i^{-1}\sigma_j$ maps the coset $H\sigma_i$ to $H\sigma_j$, we can prove that the action $\rho_H$ of $\Sym{n}$ over the coset space is transitive.
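The computations in the example can be verified mechanically. The following self-contained sketch (plain Python, our own) generates $H=\Di{4}$ from two generators, builds its three right cosets in $\Sym{4}$, and computes the induced action, confirming that the image of $\rho_H$ has order $6$, i.e., is all of $\Sym{3}$.

```python
from itertools import permutations

def compose(p, q):
    """Group product p*q, read as 'apply q first, then p'."""
    return tuple(p[q[i]] for i in range(len(q)))

def closure(gens, n):
    """Subgroup generated by gens inside the symmetric group on n points."""
    G = {tuple(range(n))}
    while True:
        new = {compose(g, x) for g in gens for x in G} - G
        if not new:
            return G
        G |= new

S4 = set(permutations(range(4)))
r = (1, 2, 3, 0)   # the 4-cycle (1 2 3 4), written on the points 0..3
s = (0, 3, 2, 1)   # the mirror symmetry (2 4)
D4 = closure({r, s}, 4)

# Right cosets H*sigma, listed in a fixed order.
cosets, seen = [], set()
for sigma in sorted(S4):
    if sigma not in seen:
        c = frozenset(compose(h, sigma) for h in D4)
        cosets.append(c)
        seen |= c

def rho(sigma):
    """The permutation tau of coset indices with H*sigma_j*sigma = H*sigma_{tau(j)}."""
    return tuple(cosets.index(frozenset(compose(x, sigma) for x in c)) for c in cosets)

image = {rho(sigma) for sigma in S4}
```

Here $|\Di{4}|=8$, there are $3$ cosets, and `image` has $6$ elements; since the only subgroup of $\Sym{3}$ of order $6$ is $\Sym{3}$ itself, this confirms $\rho_H(\Sym{4})\cong\Sym{3}$.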
\paragraph{Group action and composition of quasi-means} We may define a right action of $\Sym{n}$ on the set of quasi-means of $n$ matrices as
\[
(Q\sigma)(A_1,\dots,A_n):=Q(A_{\sigma(1)},\dots,A_{\sigma(n)}).
\]
Putting $\sigma$ on the right, albeit slightly unusual, simplifies some of the notation used in Section~\ref{s:limits}.
When $Q$ is a quasi-mean of $r$ matrices and $R_1, R_2, \dots R_r$ are quasi-means of $n$ matrices, let us define $Q \circ (R_1, R_2, \dots R_r)$ as the map
\begin{equation}\label{defcomp}
\left(Q \circ (R_1, R_2, \dots R_r)\right) (\underline A) := Q(R_1(\underline A), R_2(\underline A), \dots, R_r(\underline A)).
\end{equation}
\begin{theorem} \label{t:composition}
Let $Q(A_1,\dots,A_r)$ and $R_j(A_1,\dots A_n)$ (for $j=1,\dots,r$) be quasi-means. Then,
\begin{enumerate}
\item For all $\sigma \in \Sym{r}$, $Q\sigma$ is a quasi-mean.
\item $(A_1,\dots,A_r,A_{r+1}) \mapsto Q(A_1,\dots,A_r)$ is a quasi-mean.
\item $Q \circ (R_1, R_2, \dots R_r)$ is a quasi-mean.
\end{enumerate}
\end{theorem}
\begin{proof}
All properties follow directly from the monotonicity (P4) and from the corresponding properties for the means $Q$ and $R_j$.
\end{proof}
We may then define the \emph{isotropy group}, or \emph{stabilizer group}, of a quasi-mean $Q$ as
\begin{equation}\label{defstab}
\stab(Q) := \{ \sigma \in \Sym{n} : Q= Q\sigma\}.
\end{equation}
\section{Means obtained as map compositions}\label{s:compo}
\paragraph{Reductive symmetries}
Let us define the concept of \emph{reductive symmetries} of a quasi-mean as follows.
\begin{itemize}
\item in the special case in which $G_2(A,B)=A\ssharp B$, the symmetry property that $A\ssharp B=B\ssharp A$ is a reductive symmetry.
\item let $Q\circ(R_1,\dots,R_r)$ be a quasi-mean obtained by composition. The symmetry with respect to the permutation $\sigma$ (i.e., the fact that $Q=Q\sigma$) is a \emph{reductive symmetry} for $Q\circ(R_1,\dots,R_r)$ if this property can be formally proved relying only on the reductive symmetries of $Q$ and $R_1,\dots, R_r$.
\end{itemize}
For instance, if we take $Q(A,B,C):=A\ssharp (B\ssharp C)$, then we can deduce that $Q(A,B,C)=Q(A,C,B)$ for all $A,B,C$, but not that $Q(A,B,C)=Q(B,C,A)$ for all $A,B,C$. This does not imply that the latter symmetry property fails to hold: if we were considering the operator $+$ instead of $\ssharp$, then it would hold that $A+B+C=B+C+A$, but there is no way to prove this relying only on the commutativity of addition; in fact, associativity is crucial.
As we stated in the introduction, Ando, Li and Mathias \cite{alm} gave explicit counterexamples showing that $G^{4rec}$ and $G^{rec}$ have no symmetry properties beyond their reductive symmetries. We conjecture the following.
\begin{conjecture}\label{ass}
All the symmetries of a quasi-mean obtained by recursive composition from $G_2$ are reductive symmetries.
\end{conjecture}
In other words, we postulate that no ``unexpected symmetries'' appear while examining quasi-mean compositions. This is a rather strong statement; however, the numerical experiments and the theoretical analysis performed up to now have never shown any invariance property that could not be inferred from those of the underlying means.
We shall prove several results limiting the reductive symmetries that a mean can have; to this aim, we introduce the \emph{reductive isotropy group}
\begin{equation}
\rstab(Q)=\{\sigma \in \stab(Q) : \text{$Q=Q\sigma$ is a reductive symmetry}\}.
\end{equation}
We will prove that there is no quasi-mean $Q$ such that $\rstab(Q)=\Sym{n}$. This shows that the existing ``tools'' in the mathematician's ``toolbox'' do not allow one to construct a matrix geometric mean (with full proof) based only on map compositions; thus we need either to devise a completely new construction or to find a novel way to prove additional invariance properties involving map compositions.
\paragraph{Reduction to a special form}
The following results show that, when looking for a reductive matrix geometric mean, i.e., a quasi-mean $Q$ with $\rstab(Q)=\Sym{n}$, we may restrict our search to quasi-means of a special form.
\begin{theorem}\label{th:specialform}
Let $Q$ be a quasi-mean of $r+s$ matrices, and $R_1, R_2, \dots, R_r$, $S_1, S_2,\dots, S_s$ be quasi-means of $n$ matrices such that $R_i\neq S_j\sigma $ for all $i,j$ and every $\sigma \in \Sym{n}$. Then,
\begin{multline}
\rstab(Q \circ (R_1,R_2,\dots, R_r,S_1,S_2,\dots, S_s))\\ \subseteq \rstab(Q \circ (R_1,\dots, R_r,R_1,R_1,\dots, R_1)).
\end{multline}
\end{theorem}
\begin{proof}
Let $\sigma \in \rstab(Q \circ (R_1,R_2,\dots, R_r,S_1,S_2,\dots S_s))$; since the only invariance properties that we may assume on $Q$ are those predicted by its invariance group, it must be the case that
\[
(R_1\sigma, R_2\sigma, \dots, R_r\sigma, S_1\sigma , S_2\sigma, \dots S_s\sigma )
\]
is a permutation of $(R_1, R_2, \dots, R_r, S_1, S_2, \dots S_s)$ belonging to $\rstab(Q)$. Since $R_i\neq S_j\sigma $, this permutation must map the sets $\{R_1, R_2, \dots, R_r\}$ and $\{S_1, S_2, \dots S_s\}$ to themselves. Therefore, the same permutation maps
\[
(R_1, R_2, \dots, R_r, R_1, R_1, \dots R_1)
\]
to
\[
(R_1\sigma , R_2\sigma , \dots, R_r\sigma , R_1\sigma, R_1\sigma , \dots R_1\sigma ).
\]
This implies that
\begin{multline*}
Q(R_1, R_2, \dots, R_r, R_1, R_1, \dots R_1)
=Q(R_1\sigma, R_2\sigma , \dots, R_r\sigma, R_1\sigma, R_1\sigma, \dots R_1\sigma)
\end{multline*}
as requested.
\end{proof}
\begin{theorem}
Let $M_1:=Q\circ(R_1,R_2,\dots,R_r)$ be a quasi-mean. Then there is a quasi-mean $M_2$ in the form
\begin{equation}\label{specform}
\tilde Q \circ (\tilde R\sigma_1,\tilde R\sigma_2,\dots,\tilde R\sigma_{\tilde r}),
\end{equation}
where $(\sigma_1,\sigma_2,\dots,\sigma_{\tilde r})$ is a right coset transversal for $\rstab(\tilde R)$ in $\Sym{n}$, such that $\rstab(M_1) \subseteq \rstab(M_2)$.
\end{theorem}
\begin{proof}
Set $\tilde R=R_1$. For each $i=2,3,\dots,r$, if $R_i\neq \tilde R\sigma$ for every $\sigma\in\Sym{n}$, we may replace it with $\tilde R$, and by Theorem~\ref{th:specialform} the reductive isotropy group increases or stays the same. Thus, by repeated application of this theorem, we
may reduce to the case in which each $R_i$ is in the form $\tilde R\tau_i$ for some permutation $\tau_i$.
Since $\{\sigma_i\}$ is a right transversal for $H=\rstab(\tilde R)$, we may write $\tau_i=h_i \sigma_{k(i)}$ for some $h_i\in H$ and $k(i)\in\{1,2,\dots,\tilde r\}$. We have $\tilde R h_i=\tilde R$ since $h_i\in\rstab(\tilde R)$, thus $R_i=\tilde R\sigma_{k(i)}$. The resulting quasi-mean is $Q\circ(\tilde R \sigma_{k(1)},\dots,\tilde R\sigma_{k(r)})$. Notice that we may have $k(i)=k(j)$, or some cosets may be missing. Let now $\tilde Q$ be defined as $\tilde Q(A_1,A_2,\dots,A_{\tilde r}):=Q(A_{k(1)},\dots,A_{k(r)})$; then we have
\begin{equation}\label{uguagl}
\tilde Q(\tilde R\sigma_1,\dots,\tilde R\sigma_{\tilde r})=
Q(\tilde R\sigma_{k(1)},\dots,\tilde R\sigma_{k(r)})
\end{equation}
and thus the isotropy groups of the left-hand side and right-hand side coincide.
\end{proof}
For the sake of brevity, we shall define
\[
Q \circ R:=Q\circ(R\sigma_1,\dots,R\sigma_r),
\]
assuming a standard choice of the transversal for $H=\stab R$. Notice that $Q \circ R$ depends on the ordering of the cosets $H\sigma_1, \dots, H\sigma_r$, but not on the choice of the coset representatives $\sigma_i$, since $Rh\sigma_i = R\sigma_i$ for each $h\in H$.
\begin{example}
The quasi-mean $(A,B,C) \mapsto (A \ssharp B) \ssharp (B \ssharp C)$ is $Q \circ Q$, where $Q(X,Y,Z)=X \ssharp Y$, $H=\{e,(12)\}$, and the transversal is $\{e,(13),(23)\}$.
\end{example}
\begin{example}
The quasi-mean $(A,B,C) \mapsto (A \ssharp B) \ssharp C$ is not in the form \eqref{specform}, but in view of Theorem~\ref{th:specialform}, its reductive isotropy group is a subgroup of that of $(A,B,C) \mapsto (A \ssharp B) \ssharp (A \ssharp B)$.
\end{example}
The following theorem shows which permutations we can actually prove to belong to the reductive isotropy group of a mean in the form \eqref{specform}.
\begin{theorem}\label{t:isosharp}
Let $H \leq \Sym{n}$, $R$ be a quasi-mean of $n$ matrices such that $\rstab(R)=H$ and $Q$ be a quasi-mean of $r=n!/|H|$ matrices. Let $G\leq\Sym{n}$ be the largest subgroup such that $\rho_H(G) \leq \rstab(Q)$.
Then, $G = \rstab(Q \circ R)$.
\end{theorem}
\begin{proof}
Let $\sigma \in G$ and $\tau=\rho_H(\sigma)$; we have
\begin{equation*}
\begin{split}
(Q \circ R)\sigma(\underline A)& = Q\bigl(R\sigma_1 \sigma(\underline A),R\sigma_2 \sigma(\underline A), \dots, R\sigma_r \sigma(\underline A)\bigr)\\
&=Q\bigl(R\sigma_{\tau(1)} (\underline A),R\sigma_{\tau(2)} (\underline A), \dots, R\sigma_{\tau(r)}(\underline A)\bigr)\\
&=Q\bigl(R\sigma_1(\underline A),R\sigma_2 (\underline A), \dots, R\sigma_r (\underline A)\bigr),
\end{split}
\end{equation*}
where the last equality holds because $\tau \in \stab(Q)$.
Notice that the above construction is the only way to obtain invariance with respect to a given permutation $\sigma$: indeed, to prove invariance relying only on the invariance properties of $Q$, $(R\sigma_1\sigma,\dots,R\sigma_r\sigma)=(R\sigma_{\tau(1)},\dots,R\sigma_{\tau(r)})$ must be a permutation of $(R\sigma_1,\dots,R\sigma_r)$ belonging to $\rstab Q$, and thus $\rho_H(\sigma)=\tau \in \rstab Q$. Thus the reductive isotropy group of the composite mean is precisely the largest subgroup $G$ such that $\rho_H(G)\leq \rstab Q$.
\end{proof}
\begin{example}\label{tournamentmean}
Let $n=4$, $Q$ be any (reductive) geometric mean of three matrices (i.e., $\rstab Q=\Sym{3}$), and $R(A,B,C,D):=(A \ssharp C) \ssharp (B \ssharp D)$. We have $H=\rstab R=\Di{4}$, the dihedral group over four elements, with cardinality 8. There are $r=4!/|H|=3$ cosets. Since $\rho_H(\Sym{4})$ is a subgroup of $\rstab Q=\Sym{3}$, the reductive isotropy group of $Q\circ R$ contains $G=\Sym{4}$ by Theorem~\ref{t:isosharp}. Therefore $Q\circ R$ is a geometric mean of four matrices.
\end{example}
Indeed, the only assertion we have to check explicitly is that $\rstab R=\Di{4}$. The isotropy group of $R$ contains $(24)$ and $(1234)$: by using repeatedly the fact that $\ssharp$ is symmetric in its arguments, we can prove that $R(A,B,C,D)=R(A,D,C,B)$ and $R(A,B,C,D)=R(D,A,B,C)$. Thus it must contain the subgroup generated by these two elements, that is, $\Di{4}\leq\rstab R$. The only subgroups of $\Sym{4}$ containing $\Di{4}$ are $\Di{4}$ itself and $\Sym{4}$. We cannot have $\rstab R=\Sym{4}$, since $R$ has the same definition as $G^{4rec}$ of equation \eqref{g4rec}, apart from a reordering, and it was proved \cite{alm} that this is not a geometric mean.
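This can also be checked numerically: in a quick experiment of our own (not from the original papers) on pseudorandom positive definite matrices, exactly the $8$ permutations of $\Di{4}$ leave $R$ unchanged.

```python
import numpy as np
from itertools import permutations

def spd_power(S, t):
    w, V = np.linalg.eigh(S)
    return (V * w**t) @ V.T

def sharp(A, B):
    Ah, Aih = spd_power(A, 0.5), spd_power(A, -0.5)
    return Ah @ spd_power(Aih @ B @ Aih, 0.5) @ Ah

def R(A, B, C, D):
    # R(A,B,C,D) = (A # C) # (B # D)
    return sharp(sharp(A, C), sharp(B, D))

rng = np.random.default_rng(1)
Ms = []
for _ in range(4):
    M = rng.standard_normal((3, 3))
    Ms.append(M @ M.T + 3 * np.eye(3))   # a generic positive definite matrix

base = R(*Ms)
fixing = [p for p in permutations(range(4))
          if np.allclose(R(*(Ms[i] for i in p)), base)]
```

Generically, `fixing` consists of the $8$ permutations preserving the partition $\{\{1,3\},\{2,4\}\}$ of the argument positions, i.e., the copy of $\Di{4}$ described above.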
It is important to notice that by choosing $G_3=G_3^{ALM}$ or $G_3=G_3^{BMP}$ in the previous example we may obtain a geometric mean of four matrices using a single limit process, the one needed for $G_3$. This is more efficient than $G^{ALM}_4$ and $G^{BMP}_4$, which compute a mean of four matrices via several means of three matrices, each of which requires a limit process in its computation. We will return to this topic in Section~\ref{s:exp}.
\paragraph{Above four elements}
Is it possible to obtain a reductive geometric mean of $n$ matrices, for $n>4$, starting from simpler means and using the construction of Theorem~\ref{t:isosharp}? The following result shows that the answer is no.
\begin{theorem}
Suppose $G :=\rstab(Q\circ R)\geq \Alt{n}$ and $n>4$. Then $\Alt{n} \leq \rstab(Q)$ or $\Alt{n} \leq \rstab(R)$.
\end{theorem}
\begin{proof}
Let us consider $K=\ker \rho_H$. It is a normal subgroup of $\Sym{n}$, but for $n>4$ the only normal subgroups of $\Sym{n}$ are the trivial group $\{e\}$, $\Alt{n}$ and $\Sym{n}$ \cite{dixonmort}. Let us consider the three cases separately.
\begin{enumerate}
\item $K=\{e\}$. In this case, $\rho_H(G) \cong G$, and thus $G \leq \rstab{Q}$.
\item $K=\Sym{n}$. In this case, $\rho_H(\Sym{n})$ is the trivial group. But the action of $\Sym{n}$ on the coset space is transitive, since $\sigma_i^{-1}\sigma_j$ sends the coset $H\sigma_i$ to the coset $H\sigma_j$. So the only possibility is that there is a single coset, i.e., $H=\Sym{n}$.
\item $K=\Alt{n}$. As in the previous case, since the action is transitive, there can be at most two cosets, and thus $H=\Sym{n}$ or $H=\Alt{n}$.
\end{enumerate}
In the last two cases we conclude that $\Alt{n}\leq H=\rstab(R)$.
\end{proof}
Thus it is impossible to apply Theorem \ref{t:isosharp} to obtain a quasi-mean with reductive isotropy group containing $\Alt{n}$, unless one of the two starting quasi-means has a reductive isotropy group already containing $\Alt{n}$.
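The classification of normal subgroups used in the proof can be confirmed by brute force for the smallest relevant case $n=5$; a short self-contained Python sketch (an illustration, not part of the paper's argument):

```python
from itertools import combinations, permutations

n = 5
elems = list(permutations(range(n)))

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    r = [0] * n
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# conjugacy classes of Sym(5)
classes, seen = [], set()
for g in elems:
    if g not in seen:
        cls = {compose(compose(s, g), inverse(s)) for s in elems}
        seen |= cls
        classes.append(cls)

# a normal subgroup is a union of conjugacy classes containing the identity
# and closed under composition (a finite closed subset of a group is a subgroup)
e = tuple(range(n))
others = [c for c in classes if e not in c]
orders = []
for r in range(len(others) + 1):
    for combo in combinations(others, r):
        H = set().union({e}, *combo)
        if all(compose(a, b) in H for a in H for b in H):
            orders.append(len(H))

assert sorted(orders) == [1, 60, 120]   # {e}, Alt(5), Sym(5)
```

The same computation is infeasible for large $n$, which is why the proof relies on the classification from \cite{dixonmort}.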
\section{Means obtained as limits}\label{s:limits}
\paragraph{An algebraic setting for limit means}
We shall now describe a unifying algebraic setting in terms of isotropy groups, generalizing the procedures leading to the means defined by limit processes $G^{ALM}$, $G^{BMP}$ and $G^P$.
Let $S: (\spd{m})^n \to (\spd{m})^n$ be a map; we shall say that $S$ \emph{preserves} a subgroup $H < \Sym{n}$ if there is a map $\tau: H \to H$ such that $Sh(\underline A)=\tau(h)S(\underline A)$ for all $h \in H$ and all $\underline A \in (\spd{m})^n$.
\begin{theorem}\label{th:iteration}
Let $S: (\spd{m})^n \to (\spd{m})^n$ be a map and $H<\Sym{n}$ be a permutation group such that
\begin{enumerate}
\item $\underline A \mapsto \bigl(S(\underline A)\bigr)_i$ is a quasi-mean for all $i=1,\dots,n$,
\item $S$ preserves $H$,
\item for all $\underline A \in (\spd{m})^n$, $\lim_{k \to \infty} S^{k}(\underline A)$ is a scalar $n$-tuple\footnote{Here $S^k$ denotes function iteration: $S^1=S$ and $S^{k+1}(\underline A)=S(S^k(\underline A))$ for all $k$.},
\end{enumerate}
and let us denote by $S^{\infty}(\underline A)$ the common value of all entries of the scalar $n$-tuple $\lim_{k \to \infty} S^{k}(\underline A)$.
Then, $S^\infty(\underline A)$ is a quasi-mean with isotropy group containing $H$.
\end{theorem}
\begin{proof}
From Theorem \ref{t:composition}, it follows that $\underline A \mapsto \bigl(S^k(\underline A)\bigr)_i$ is a quasi-mean for each $k$. Since all the properties defining a quasi-mean pass to the limit, $S^\infty$ is a quasi-mean itself.
Let us take $h \in H$ and $\underline A \in (\spd{m})^n$. It is easy to prove by induction on $k$ that $S^kh(\underline A)=\tau^k(h) \left(S^k(\underline A)\right)$. Now, choose a matrix norm inducing the Euclidean topology on $\spd{m}$; let $\varepsilon>0$ be fixed, and let us take $K$ such that for all $k> K$ and for all $i=1,\dots,n$ the following inequalities hold:
\begin{itemize}
\item $\norm {\bigl(S^k(\underline A)\bigr)_i - S^{\infty}(\underline A)} < \varepsilon$,
\item $\norm {\bigl(S^kh(\underline A)\bigr)_i - S^{\infty}h(\underline A)} < \varepsilon$.
\end{itemize}
We know that $\bigl(S^kh(\underline A)\bigr)_i=\bigl(\tau^k(h)S^k(\underline A)\bigr)_i=\bigl(S^k(\underline A)\bigr)_{\tau^k(h)(i)}$, therefore
\begin{multline*}
\norm {S^{\infty}(\underline A) - S^{\infty}h(\underline A)} \leq \norm {\bigl(S^k(\underline A)\bigr)_{\tau^k(h)(i)} - S^{\infty}(\underline A)}\\ + \norm {\bigl(S^kh(\underline A)\bigr)_i - S^{\infty}h(\underline A)} < 2\varepsilon.
\end{multline*}
Since $\varepsilon$ is arbitrary, the two limits must coincide. This holds for each $h \in H$, therefore $H \leq \stab{S^{\infty}}$.
\end{proof}
\begin{example}
The map $S$ defining $G^{ALM}_4$ is
\[
\begin{bmatrix}
A \\ B \\ C \\D
\end{bmatrix}
\mapsto
\begin{bmatrix}
G_3^{ALM}(B,C,D) \\ G_3^{ALM}(A,C,D) \\ G_3^{ALM}(A,B,D) \\ G_3^{ALM}(A,B,C)
\end{bmatrix}.
\]
One can see that $S\sigma=\sigma^{-1}S$ for each $\sigma \in \Sym{4}$, and thus with the choice $\tau(\sigma):=\sigma^{-1}$ we get that $S$ preserves $\Sym{4}$. Thus, by Theorem~\ref{th:iteration}, $S^\infty=G^{ALM}_4$ is a geometric mean of four matrices. The same reasoning applies to $G^{BMP}$.
\end{example}
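In the scalar case ($1\times 1$ matrices, where $G_3^{ALM}$ reduces to the ordinary geometric mean of three positive numbers), the structure and the limit of this iteration can be observed directly; a small illustrative sketch:

```python
def g3(a, b, c):
    # scalar instance of G_3: the ordinary geometric mean of three numbers
    return (a * b * c) ** (1.0 / 3.0)

def S(x):
    # each entry is replaced by the mean of the other three, as in the ALM map
    a, b, c, d = x
    return (g3(b, c, d), g3(a, c, d), g3(a, b, d), g3(a, b, c))

x = (1.0, 2.0, 5.0, 10.0)
for _ in range(60):
    x = S(x)

# the product abcd is invariant under S, and in log coordinates the map
# contracts toward the constant tuple, so the common limit is (abcd)^{1/4}
g = (1.0 * 2.0 * 5.0 * 10.0) ** 0.25
assert all(abs(xi - g) < 1e-12 for xi in x)
```

For matrices the inner $g3$ is itself a limit process, which is what makes $G^{ALM}_4$ expensive; the scalar case only illustrates the outer iteration.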
\begin{example}
The map $S$ defining $G^P_4$ is
\[
\begin{bmatrix}
A \\ B \\ C \\D
\end{bmatrix}
\mapsto
\begin{bmatrix}
A \ssharp B \\ B \ssharp C \\ C \ssharp D \\ D \ssharp A
\end{bmatrix}.
\]
$S$ preserves the dihedral group $\Di{4}$. Therefore, provided the iteration process converges to a scalar $n$-tuple, $S^{\infty}$ is a quasi-mean with isotropy group containing $\Di{4}$.
\end{example}
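Again in the scalar case, where $\ssharp$ reduces to $u \ssharp v = \sqrt{uv}$, one can watch the cyclic iteration defining $G^P_4$ converge (for positive scalars the common limit is exactly $(abcd)^{1/4}$; for matrices only the $\Di{4}$-invariance stated above is guaranteed). A small sketch:

```python
import math

def S(x):
    # scalar instance of the map defining G^P_4: (a,b,c,d) -> (a#b, b#c, c#d, d#a)
    a, b, c, d = x
    s = lambda u, v: math.sqrt(u * v)
    return (s(a, b), s(b, c), s(c, d), s(d, a))

x = (1.0, 2.0, 5.0, 10.0)
for _ in range(80):
    x = S(x)

# each step preserves the product abcd; in log coordinates the map is a
# circulant averaging whose non-constant modes decay, so the entries
# converge to the common value (abcd)^{1/4}
g = (1.0 * 2.0 * 5.0 * 10.0) ** 0.25
assert all(abs(xi - g) < 1e-9 for xi in x)
```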
\paragraph{Efficiency of the limit process}
As in the previous section, we are interested in seeing whether this approach, which is the one that has been used to prove invariance properties of the known limit means \cite{alm,bmp-means}, can yield better results for a different map $S$.
\begin{theorem}
Let $S: (\spd{m})^n \to (\spd{m})^n$ preserve a group $H$. Then, for each $i=1,\dots, n$, the invariance group of the component $S_i$ intersects $H$ in a subgroup of index at most $n$ in $H$.
\end{theorem}
\begin{proof}
Let $i$ be fixed, and set $I_k:=\{h \in H : \tau(h)(i)=k\}$. The sets $I_k$ are mutually disjoint and their union is $H$, so the largest of them, say $I_{\bar k}$, has cardinality at least $|H|/n$.
From the hypothesis that $S$ preserves $H$, we get $S_i h(\underline A)=S_{\bar k}(\underline A)$ for each $\underline A$ and each $h \in I_{\bar k}$. Let $\bar h$ be an element of $I_{\bar k}$; then $S_ih(\bar h^{-1} \underline A)=S_{\bar k}(\bar h^{-1}\underline A)=S_i(\underline A)$. Thus the isotropy group of $S_i$ contains all the elements of the form $h\bar h^{-1}$, $h \in I_{\bar k}$, and there are at least $|H|/n$ of them.
\end{proof}
The following result holds \cite[page 147]{dixonmort}.
\begin{theorem}
For $n>4$, the only subgroups of $\Sym{n}$ with index at most $n$ are:
\begin{itemize}
\item the alternating group $\Alt{n}$,
\item the $n$ groups $T_k=\{\sigma \in \Sym{n}: \sigma(k)=k\}$, $k=1,\dots,n$, all of which are isomorphic to $\Sym{n-1}$,
\item for $n=6$ only, another conjugacy class of $6$ subgroups of index $6$ isomorphic to $\Sym{5}$.
\end{itemize}
Analogously, the only subgroups of $\Alt{n}$ with index at most $n$ are:
\begin{itemize}
\item the $n$ groups $U_k=\{\sigma \in \Alt{n}: \sigma(k)=k\}$, $k=1,\dots,n$, all of which are isomorphic to $\Alt{n-1}$,
\item for $n=6$ only, another conjugacy class of $6$ subgroups of index $6$ isomorphic to $\Alt{5}$.
\end{itemize}
\end{theorem}
This shows that whenever we construct a geometric mean of $n$ matrices by a limit process, as in the Ando--Li--Mathias approach, the isotropy groups of the starting means must contain $\Alt{n-1}$. On the other hand, by Theorem~\ref{t:isosharp}, we cannot generate means whose isotropy group contains $\Alt{n-1}$ by composition of simpler means; therefore, there is no simpler approach than building a mean of $n$ matrices as a limit process of means of $n-1$ matrices (or at least of quasi-means with $\stab Q=\Alt{n-1}$, which makes little difference). This shows that the recursive approach of $G^{ALM}$ and $G^{BMP}$ cannot be simplified while still maintaining P3 (permutation invariance).
\section{Computational issues and numerical experiments}\label{s:exp}
\paragraph{A faster mean of four matrices} The results we have exposed up to now are negative results, and they hold for $n>4$. On the other hand, it turns out that for $n=4$, since $\Alt{n}$ is not a simple group, there is the possibility of obtaining a mean that is computationally simpler than the ones in use. Such a mean is the one we described in Example~\ref{tournamentmean}. Let us take any mean of three elements (we shall use $G^{BMP}_3$ here since it is the one with the best computational results); the new mean is therefore defined as
\begin{multline}\label{defgnew}
G^{NEW}_4(A,B,C,D) := G_3^{BMP}\left( (A \ssharp B) \ssharp (C \ssharp D), (A \ssharp C) \ssharp (B \ssharp D),\right.\\\left. (A \ssharp D) \ssharp (B \ssharp C) \right).
\end{multline}
Notice that only one limit process is needed to compute this mean; by contrast, each step of the iterations defining $G_4^{ALM}$ and $G_4^{BMP}$ requires four additional limit processes; thus we may expect a large saving in the overall computational cost.
We may extend the definition recursively to $n>4$ elements using the construction described in \eqref{defbmp}, but with $G^{NEW}$ instead of $G^{BMP}$. The total computational cost, computed in the same fashion as for the ALM and BMP means, is $O(n! p_3 p_5 p_6 \dots p_n m^3)$. Thus the undesirable dependence on $n!$ does not disappear; the new mean only yields a saving by a multiplicative constant in the complexity bound.
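A minimal numerical sketch of \eqref{defgnew} in Python (illustrative only, not the authors' MATLAB code), using the two-matrix mean $A \ssharp B$ and an ALM-type iteration as the single inner $G_3$; we then check P3 and P9 numerically:

```python
import numpy as np

def sqrtm_spd(A):
    # principal square root of a symmetric positive definite matrix
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(w)) @ V.T

def sharp(A, B):
    # A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}
    As = sqrtm_spd(A)
    Ai = np.linalg.inv(As)
    return As @ sqrtm_spd(Ai @ B @ Ai) @ As

def g3(A, B, C, tol=1e-13, maxit=400):
    # ALM-type limit mean of three matrices: the only limit process needed
    for _ in range(maxit):
        A1, B1, C1 = sharp(B, C), sharp(A, C), sharp(A, B)
        if max(np.max(np.abs(A1 - A)), np.max(np.abs(B1 - B)),
               np.max(np.abs(C1 - C))) < tol:
            return A1
        A, B, C = A1, B1, C1
    return A1

def g4_new(A, B, C, D):
    # the three "tournament" combinations, fed to one inner 3-mean
    return g3(sharp(sharp(A, B), sharp(C, D)),
              sharp(sharp(A, C), sharp(B, D)),
              sharp(sharp(A, D), sharp(B, C)))

rng = np.random.default_rng(1)

def rand_spd(m):
    X = rng.standard_normal((m, m))
    return X @ X.T + np.eye(m)

A, B, C, D = (rand_spd(3) for _ in range(4))
G = g4_new(A, B, C, D)
# P3: invariance under an arbitrary permutation of the arguments
assert np.allclose(G, g4_new(C, A, D, B), atol=1e-8)
# P9: det G = (det A det B det C det D)^{1/4}
p = np.prod([np.linalg.det(X) for X in (A, B, C, D)]) ** 0.25
assert abs(np.linalg.det(G) - p) < 1e-8 * p
```

Both checks hold exactly in exact arithmetic: permuting the four arguments only permutes the three arguments of the symmetric inner mean, and $\det(A \ssharp B)=\sqrt{\det A \det B}$ propagates through the construction.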
\paragraph{Benchmarks}
We have implemented the original BMP algorithm and the new one described above in MATLAB\textregistered{} and run tests on the same set of examples used by Moakher \cite{moakher} and Bini \emph{et al.} \cite{bmp-means}. The data derive from physical experiments on elasticity and consist of five sets of matrices to average, with $n$ varying from $4$ to $6$ and $6\times 6$ matrices split into smaller diagonal blocks.
For each of the five data sets, we have computed both the BMP and the new matrix mean. The CPU times are reported in Table~\ref{t:times}. As a stopping criterion for the iterations, we used
\[
\max_{i,j,k} \left\vert(A^{(h)}_i)_{jk}-(A^{(h+1)}_i)_{jk}\right\vert < 10^{-13}.
\]
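In code, this stopping criterion reads as follows (a small Python sketch for illustration; the paper's implementation is in MATLAB):

```python
import numpy as np

def converged(X_old, X_new, tol=1e-13):
    # max over the matrices A_i and entries (j,k) of |(A_i^{(h)})_{jk} - (A_i^{(h+1)})_{jk}|
    return max(np.max(np.abs(a - b)) for a, b in zip(X_old, X_new)) < tol

# toy check: a change of 1e-14 is accepted, a change of 1e-3 is not
X_old = [np.zeros((2, 2)), np.eye(2)]
assert converged(X_old, [np.full((2, 2), 1e-14), np.eye(2)])
assert not converged(X_old, [np.full((2, 2), 1e-3), np.eye(2)])
```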
\begin{table}
\begin{tabular}{ccc}
Data set (number of matrices) & BMP mean & New mean \\
\hline
NaClO$_3$ (5) & 1.3E+00 & 3.1E-01\\
Ammonium dihydrogen phosphate (4) & 3.5E-01 & 3.9E-02\\
Potassium dihydrogen phosphate (4) & 3.5E-01 & 3.9E-02\\
Quartz (6) & 2.9E+01 & 6.7E+00\\
Rochelle salt (4) & 6.0E-01 & 5.5E-02
\end{tabular}
\caption{CPU times for the elasticity data sets}\label{t:times}
\end{table}
As expected, our mean provides a substantial reduction in CPU time, roughly an order of magnitude.
Following Bini \emph{et al.} \cite{bmp-means}, we then focused on the second data set (ammonium dihydrogen phosphate) for a deeper analysis; we report in Table~\ref{t:hear} the number of iterations and matrix roots needed in both computations.
\begin{table}
\begin{small}
\begin{tabular}{ccc}
& BMP mean & New mean\\
\hline
Outer iterations ($n=4$) & 3 & none\\
Inner iterations ($n=3$) & $4 \times 2.0$ (avg.) per outer iteration & 2\\
Matrix square roots (\texttt{sqrtm}) & 72 & 15\\
Matrix $p$-th roots (\texttt{rootm}) & 84 & 6\\
\end{tabular}
\end{small}
\caption{Number of inner and outer iterations needed, and number of matrix roots needed (ammonium dihydrogen phosphate)}\label{t:hear}
\end{table}
The examples in these data sets consist mainly of matrices very close to each other; we shall therefore also consider an example of a mean of four matrices whose mutual distances are larger:
\begin{equation}\label{matricibuffe}
\begin{aligned}
A&=\begin{bmatrix}
1 &0& 0\\0 & 1 & 0\\ 0 & 0 & 1
\end{bmatrix},\!&
B&=\begin{bmatrix}
3 &0& 0\\0 & 4 & 0\\ 0 & 0 & 100
\end{bmatrix},\!
&C&=\begin{bmatrix}
2 &1& 1\\1 & 2 & 1\\ 1 & 1 & 2
\end{bmatrix},\!&
D&=\begin{bmatrix}
20 &0& -10\\0 & 20 & 0\\ -10 & 0 & 20
\end{bmatrix}.
\end{aligned}
\end{equation}
The results regarding these matrices are reported in Table~\ref{t:caso}.
\begin{table}
\begin{small}
\begin{tabular}{ccc}
& BMP mean & New mean\\
\hline
Outer iterations ($n=4$) & 4 & none\\
Inner iterations ($n=3$) & $4 \times 2.5$ (avg.) per outer iteration & 3\\
Matrix square roots (\texttt{sqrtm}) & 120 & 18\\
Matrix $p$-th roots (\texttt{rootm}) & 136 & 9\\
\end{tabular}
\end{small}
\caption{Number of inner and outer iterations needed, and number of matrix roots needed}\label{t:caso}
\end{table}
\paragraph{Accuracy} It is not clear how to check the accuracy of a limit process yielding a matrix geometric mean, since the exact value of the mean is not known \emph{a priori}, except when all the $A_i$ commute; in that case, P1 yields a compact expression for the result. Hence we cannot test accuracy in the general case; instead, we have focused on two special examples.
As a first accuracy experiment, we computed $G(M^{-2},M,M^2,M^3)-M$, where $M$ is taken as the first matrix of the second data set on elasticity; the result of this computation should be zero according to P1. As a second experiment, we tested the validity of P9 (determinant identity) on the means of the four matrices in \eqref{matricibuffe}. The results of both computations are reported in Table~\ref{t:accuracy};
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{cc}
Operation & Result\\
\hline
$\left\Vert G^{BMP}(M^{-2},M,M^2,M^3)-M\right\Vert_2$ & 4.0E-14\\
$\left\Vert G^{NEW}(M^{-2},M,M^2,M^3)-M\right\Vert_2$ & 2.5E-14\\
\hline
$\left\vert\det(G^{BMP}(A,B,C,D))-(\det(A)\det(B)\det(C)\det(D))^{1/4}\right\vert$ & 5.5E-13\\
$\left\vert\det(G^{NEW}(A,B,C,D))-(\det(A)\det(B)\det(C)\det(D))^{1/4}\right\vert$ & 2.1E-13
\end{tabular}
\end{center}
\caption{Accuracy tests}\label{t:accuracy}
\end{table}
the results are well within the errors permitted by the stopping criterion, and show that both algorithms reach satisfactory precision.
\section{Conclusions}
\paragraph{Research lines} The results of this paper show that, by combining existing matrix means, it is possible to create a new mean which is faster to compute than the existing ones. Moreover, we show that, using only function compositions and limit processes with the existing proof strategies, no further significant improvement over the existing algorithms is possible. In particular, the dependency on $n!$ cannot be removed. New attempts should focus on other aspects, such as:
\begin{itemize}
\item proving new ``unexpected'' algebraic relations involving the existing matrix means, which would make it possible to break out of the framework of Theorems~\ref{t:isosharp} and~\ref{th:iteration};
\item introducing new kinds of matrix geometric means or quasi-means, different from the ones built using function composition and limits;
\item proving that the Riemannian centroid \eqref{cartan1} is a matrix mean in the sense of Ando--Li--Mathias (currently P4 is an open problem), or providing faster and more reliable algorithms to compute it.
\end{itemize}
It is an interesting question whether it is possible to construct a quasi-mean whose isotropy group is exactly $\Alt{n}$.
\paragraph{Acknowledgments} The author would like to thank Dario Bini and Bruno Iannazzo for enlightening discussions on the topic of matrix means, and Roberto Dvornicich and Francesco Veneziano for their help with the group theory involved in the analysis of the problem.
% https://arxiv.org/abs/1207.5015
% Free and Very Free Morphisms into a Fermat Hypersurface
% Abstract: This paper studies the existence of free and very free curves on the degree 5 Fermat hypersurface in P^5 over a field of characteristic 2. We find that such curves exist in degrees 8 and 9 and not in lower degrees.
\section{Introduction}
\noindent
Any smooth projective Fano variety in characteristic zero is rationally
connected and hence contains a very free rational curve. In positive
characteristic a smooth projective Fano variety is rationally chain
connected. However, it is not known whether such varieties are separably
rationally connected, or equivalently, whether they have a very free
rational curve. This is an open question even for nonsingular
Fano hypersurfaces. See \cite{kollar}, as well as \cite{debarre}.
\medskip\noindent
Following \cite{mingmin}, we consider the degree $5$
Fermat hypersurface
$$
X \quad:\quad X_0^5 + X_1^5 + X_2^5 + X_3^5 + X_4^5 + X_5^5 = 0
$$
in $\mathbb{P}^5$ over an algebraically closed field $k$ of characteristic $2$.
Note that $X$ is a nonsingular projective Fano variety.
\begin{theorem}
\label{theorem-main}
Any free rational curve $\varphi : \mathbb{P}^1 \to X$ has degree $\geq 8$
and there exists a free rational curve of degree $8$. Any very free rational
curve $\varphi : \mathbb{P}^1 \to X$ has degree $\geq 9$ and there exists a
very free rational curve of degree $9$.
\end{theorem}
\noindent
This result, although perhaps expected, is interesting for several
reasons. First, it is known that $X$ is unirational,
see \cite[Page 52]{debarre} (the corresponding rational map
$\mathbb{P}^4 \dasharrow X$ is inseparable). Second, in
\cite{beauville}, it is shown that every nonsingular hyperplane section of $X$
is isomorphic to a Fermat hypersurface of dimension 3 and this property
characterizes Fermat hypersurfaces among all hypersurfaces of degree 5
in characteristic $2$. We believe that these facts single out the Fermat as a
likely candidate for a counterexample to the conjecture below; instead
our theorem shows that they are evidence for it.
\begin{conjecture}
\label{conjecture-fano-src}
Nonsingular Fano hypersurfaces have very free rational curves.
\end{conjecture}
\noindent
The paper \cite{zhu} discusses this question more broadly.
We briefly outline the method of proof. In
Section \ref{section-setup}
we translate the geometric question into an algebraic question which is
computationally more accessible. In
Sections \ref{splittingrelation}, \ref{section-numerology},
and \ref{section-degree-4-5} we exclude low degree solutions by theoretical
methods. Finally, in
Sections \ref{section-degree-8} and \ref{section-degree-9}
we explicitly describe some curves which
are free and very free in degrees 8 and 9 respectively.
\section{The overall setup}
\label{section-setup}
\noindent
In the rest of this paper $k$ will be an algebraically closed field of
characteristic $2$ and $X$ will be the Fermat hypersurface of
degree $5$ over $k$. Let $\varphi : \mathbb{P}^1 \to X$
be a nonconstant morphism. We will repeatedly use that every vector
bundle on $\mathbb{P}^1$ is a direct sum of line bundles, see
\cite{grothendieck}. Thus we can choose a splitting
$$
\varphi^*T_X =
\mathcal{O}_{\mathbb{P}^1}(a_1) \oplus
\mathcal{O}_{\mathbb{P}^1}(a_2) \oplus
\mathcal{O}_{\mathbb{P}^1}(a_3) \oplus
\mathcal{O}_{\mathbb{P}^1}(a_4).
$$
Recall that $\varphi$ is said to be a {\it free curve} on $X$ if
$a_i \geq 0$ and $\varphi$ is said to be {\it very free} if $a_i > 0$.
Consider the following commutative diagram
\begin{equation}
\label{equation-diagram}
\vcenter{
\xymatrix{
& 0 \ar[d] & 0 \ar[d] \\
& \mathcal{O}_X \ar@{=}[r] \ar[d] & \mathcal{O}_X \ar[d] \\
0 \ar[r] &
E_X \ar[d] \ar[r] &
\mathcal{O}_X(1)^{\oplus 6} \ar[d] \ar[r] &
\mathcal{O}_X(5) \ar@{=}[d] \ar[r] &
0 \\
0 \ar[r] &
T_X \ar[r] \ar[d] &
T_{\mathbb{P}^5}|_X \ar[r] \ar[d] &
N_{X/\mathbb{P}^5} \ar[r] &
0 \\
& 0 & 0
}
}
\end{equation}
with exact rows and columns as indicated. We will call $E_X$ the
{\it extended tangent bundle of $X$}. The left vertical exact sequence
determines a short exact sequence
$$
0 \to \mathcal{O}_{\mathbb{P}^1} \to \varphi^*E_X \to \varphi^*T_X \to 0.
$$
The splitting type of $\varphi^*E_X$ will consistently be denoted
$(f_1, f_2, f_3, f_4, f_5)$ in this paper. Since
$\text{Hom}_{\mathbb{P}^1}(\mathcal{O}_{\mathbb{P}^1}(f),
\mathcal{O}_{\mathbb{P}^1}(a)) = 0$
if $f > a$ we conclude that
\begin{enumerate}
\item If $f_i \geq 0$ for all $i$, then $\varphi$ is free.
\item If $f_i > 0$ for all $i$, then $\varphi$ is very free.
\end{enumerate}
For the converse, note that the map
$\mathcal{O}_{\mathbb{P}^1} \to \varphi^*E_X$ has image contained
in the direct sum of the summands with $f_i \geq 0$. Hence, if $f_i < 0$
for some $i$, then $\varphi$ is not free. Finally, suppose that
$f_i \geq 0$ for all $i$. If there are at least two $f_i$ equal to $0$,
then we see that $\varphi$ is free but not very free. We conclude that
\begin{enumerate}
\item[(3)] If $\varphi$ is free, then $f_i \geq 0$ for all $i$.
\item[(4)] If $\varphi$ is very free, then either (a) $f_i > 0$ for all $i$,
or (b) exactly one $f_i = 0$ and all others $> 0$.
\end{enumerate}
We do not know if (4)(b) occurs.
\medskip\noindent
{\bf Translation into algebra.}
Here we work over the graded $k$-algebra $R = k[S,T]$. As usual, we let
$R(e)$ be the graded free $R$-module whose underlying module is $R$ with
grading given by $R(e)_n = R_{e + n}$. A {\it graded free $R$-module}
will be any graded $R$-module isomorphic to a finite direct sum of $R(e)$'s.
Such a module $M$ has a {\it splitting type}, namely the sequence of integers
$u_1, \ldots, u_r$ such that $M \cong R(u_1) \oplus \ldots \oplus R(u_r)$.
\medskip\noindent
We will think of a degree $d$ morphism
$\varphi: \mathbb{P}^1 \to \mathbb{P}^5$ as a $6$-tuple
$(G_0, \ldots, G_5)$ of homogeneous elements in $R$ of degree $d$
with no common factors. Then $\varphi$ is a morphism into $X$ if and only if
$G_0^5 + \ldots + G_5^5 = 0$. In this situation we define two graded
$R$-modules. The first is called the {\it pullback of the cotangent bundle}
$$
\Omega_X(\varphi) =
\text{Ker}(\tilde\varphi : R^{\oplus 6}(-d) \longrightarrow R)
$$
where the map $\tilde\varphi$ is given by
$(A_0, \ldots, A_5) \mapsto \sum A_iG_i$. The second is called the
{\it the pullback of the extended tangent bundle}
$$
E_X(\varphi) = \text{Ker}(R^{\oplus 6}(d) \longrightarrow R(5d))
$$
where the map is given by $(A_0, \ldots, A_5) \mapsto \sum A_iG_i^4$.
Since the kernel of a map of graded free $R$-modules is a graded free
$R$-module, both $\Omega_X(\varphi)$ and $E_X(\varphi)$
are themselves graded free $R$-modules of rank $5$.
\begin{lemma}
\label{lemma-splitting-type}
The splitting type of $\varphi^*E_X$ is equal to the splitting type
of the $R$-module $E_X(\varphi)$.
\end{lemma}
\begin{proof}
Recall that $\mathbb{P}^1 = \text{Proj}(R)$. Thus, a finitely generated
graded $R$-module corresponds to a coherent sheaf on $\mathbb{P}^1$, see
\cite[Proposition 5.11]{Hartshorne}.
Under this correspondence, the module $R(e)$ corresponds to
$\mathcal{O}_{\mathbb{P}^1}(e)$. The lemma follows if we show that
$\varphi^*E_X$ is the coherent sheaf associated to $E_X(\varphi)$.
Diagram (\ref{equation-diagram}) shows that $\varphi^*E_X$ is the kernel
of a map
$\mathcal{O}_{\mathbb{P}^1}(d)^{\oplus 6} \to \mathcal{O}_{\mathbb{P}^1}$
given by substituting $(G_0, \ldots, G_5)$ into the partial derivatives
of the polynomial defining $X$. Since the equation is
$X_0^5 + \ldots + X_5^5$, the derivatives are $X_i^4$, and substituting
we obtain $G_i^4$ as desired.
\end{proof}
\section{Relating the Splitting Types}
\label{splittingrelation}
\noindent
Observe that $\Omega_X(\varphi)$ is also a graded free module of rank 5 and so has a splitting type, which we denote using $e_1,\dots,e_5$. In this section, we relate the splitting type of $\Omega_X(\varphi)$ to the splitting type of $E_X(\varphi)$.
\medskip \noindent If $(A_0,\dots,A_5) \in \Omega_X(\varphi)$, then $A_0G_0 + \cdots + A_5G_5 = 0$ so that
\[A_0^4G_0^4 + \cdots + A_5^4G_5^4 = 0\]
by the Frobenius endomorphism in characteristic 2. Let
\[\mathcal{T} = \{(A_0^4,\dots,A_5^4) \mid (A_0,\dots,A_5) \in \Omega_X(\varphi)\} \]
in $E_X(\varphi)$. We denote the $R$-module generated by $\mathcal{T}$ as $R\langle\mathcal{T}\rangle$.
\begin{lemma} In the notation above, $E_X(\varphi) = R\langle\mathcal{T}\rangle$.
\end{lemma}
\begin{proof}
Let $(B_0,\dots,B_5)$ be an element of $E_X(\varphi)$ where $B_i$ is a homogeneous polynomial of degree $b$. We consider the case $b \equiv 0 \mod 4$.
\medskip \noindent Observe that each monomial of $B_i$ can be rewritten either as $(c^{1/4}S^{u}T^{v})^4$ or as $(c^{1/4}S^{u}T^{v})^4S^{4-t}T^{t}$ for some nonnegative integers $u,v$, some $0 < t < 4$, and some $c \in k$. After collecting terms and applying the Frobenius endomorphism, we obtain
\[B_i = a_{i1}^4 + a_{i2}^4S^3T + a_{i3}^4S^2T^2 + a_{i4}^4ST^3\]
where each $a_{ij}$ is an element of $R$.
where each $a_{ij}$ is an element of $R$. Then, since $\displaystyle B_0G_0^4 + \cdots + B_5G_5^4 = 0$, substituting our expression for the $B_i$'s and applying Frobenius we obtain
\[
\Bigl(\sum_{i=0}^5 a_{i1}G_i\Bigr)^4 + \Bigl(\sum_{i=0}^5a_{i2}G_i\Bigr)^4S^3T + \Bigl(\sum_{i=0}^5a_{i3}G_i\Bigr)^4S^2T^2 + \Bigl(\sum_{i=0}^5a_{i4}G_i\Bigr)^4ST^3 = 0.
\]
The sums $\sum_{i=0}^5 a_{ij}G_i$ are themselves homogeneous polynomials. Since the degree of $T$ in each of the four terms above is distinct modulo $4$, each term must vanish separately; hence $\sum_{i=0}^5 a_{ij}G_i = 0$, so that $(a_{0j},\dots,a_{5j}) \in \Omega_X(\varphi)$ and $(a_{0j}^4,\dots,a_{5j}^4) \in \mathcal{T}$ for $1 \le j \le 4$.
\medskip\noindent
Hence, every homogeneous element of $E_X(\varphi)$ is contained in the submodule generated by $\mathcal{T}$. Since the reverse containment is trivial, it follows that $E_X(\varphi) = R\langle\mathcal{T}\rangle$. The cases for $b \equiv 1,2,3 \mod 4$ follow similarly.
\end{proof}
\begin{proposition}
If $x_i = (x_{i0},\dots,x_{i5})$ for $1 \le i \le 5$ form a basis for $\Omega_X(\varphi)$, then $y_i = (x_{i0}^4,\dots,x_{i5}^4)$ for $1 \le i \le 5$ form a basis for $E_X(\varphi)$.
\end{proposition}
\begin{proof}
If $x_i \in \Omega_X(\varphi)$, then $y_i \in \mathcal{T}$ and every element of $\mathcal{T}$ is an $R$-linear combination of the $y_i$'s. Since $E_X(\varphi) = R\langle\mathcal{T}\rangle$, every element of $E_X(\varphi)$ is also an $R$-linear combination of the $y_i$'s so that the $y_i$'s generate $E_X(\varphi)$. Moreover, $E_X(\varphi)$ is a free module of rank 5 over a domain, so the generators $y_i$ for $E_X(\varphi)$ must also be linearly independent and hence form a basis.
\end{proof}
\noindent Accounting for the twist, a simple computation using the results above gives the following.
\begin{corollary}
\label{splittingformula}
If $f_i$ denotes the splitting type of $E_X(\varphi)$ and $e_i$ denotes the splitting type of $\Omega_X(\varphi)$, then for a degree $d$ morphism, $f_i = 4e_i + 5d$.
\end{corollary}
\section{Numerology}
\label{section-numerology}
\noindent We now utilize some facts about graded free modules in order to give constraints on potential splitting types. Given a graded free module
\[ M = R(u_1) \oplus \dots \oplus R(u_r), \]
one can observe that the Hilbert polynomial $H_M$ is given by
\[H_M(m) = rm + u_1 + \dots + u_r + r. \]
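This formula can be sanity-checked by counting monomials; a small illustrative Python sketch (the splitting type chosen below is arbitrary):

```python
def hilb_R(e, m):
    # dim_k R(e)_m = dim_k R_{e+m}: monomials S^a T^b with a + b = e + m
    return max(e + m + 1, 0)

def hilb_M(us, m):
    # Hilbert function of M = R(u_1) + ... + R(u_r)
    return sum(hilb_R(u, m) for u in us)

us = [-3, -1, 0, 2]   # an arbitrary splitting type (hypothetical example)
r, s = len(us), sum(us)
for m in range(3, 12):   # m large enough that every summand contributes
    assert hilb_M(us, m) == r * m + s + r
```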
Let $\varphi$ denote a free morphism into $X$. Noting that the map $\tilde{\varphi}: R(-d)^{\oplus n+1}_m \longrightarrow R_m$ is surjective for $m \gg 0$, we obtain
\begin{align*}
H_{\Omega(\varphi)}(m) &= \dim_k\left(\ker(R(-d)^{\oplus n+1}_m \longrightarrow R_m)\right) \\
&= (n+1)(-d + m + 1) - (m+1) \\
&= nm - d(n+1) + n.
\end{align*}
A similar calculation shows that
\[H_{E_X(\varphi)}(m) = nm + d(n+1-5) + n.\]
We continue to refer to the splitting type components of $\Omega(\varphi)$, respectively $E_X(\varphi)$ as $e_i$, respectively $f_i$. In both cases $n=r=5$, so combining these two equations with the general form for the Hilbert polynomial of a graded free module, we obtain our first constraints:
\begin{align}
e_1 + e_2 + e_3 + e_4 + e_5 &= -6d \\
f_1 + f_2 + f_3 + f_4 + f_5 &= d.
\end{align}
Recall from Section \ref{section-setup} that a curve is free, respectively very free if $f_i \geq 0$, respectively $f_i > 0$ for each $i$. Since $f_i = 4e_i + 5d$, it follows that
\begin{align}
e_i \geq -\frac{5d}{4}
\end{align}
where strict inequality implies the curve is very free. With these two bounds, we can quickly observe a few facts about curves of different degrees.
\begin{remark} \
\label{charlie}
\begin{enumerate}
\item
There exist no free curves in degrees $1,2,3,6,$ and $7$.
\item
Any free curve of degree not divisible by $4$ must be very free.
\item
There are no very free curves in degrees $4$ or $8$.
\item
The $\Omega(\varphi)$ splitting type of a degree $4$ free curve must be $(-5,-5,-5,-5,-4)$.
\item
The $\Omega(\varphi)$ splitting type of a degree $5$ very free curve must be $(-6,-6,-6,-6,-6)$.
\end{enumerate}
\end{remark}
\noindent All of these observations follow directly from the two constraints. For example, in degree $6$ we need $e_1 + e_2 + e_3 + e_4 + e_5 = -6d = -36$, while each $e_i \geq -\frac{30}{4} = -7.5$, hence $e_i \geq -7$; but then the $e_i$ sum to at least $-35$, so they cannot sum to $-36$.
\medskip \noindent The rest of the remarks follow in a similar manner. Note that one can glean even more information about these curves from the constraints, but the remarks listed above are sufficient for our purposes.
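The degree-by-degree bookkeeping behind these remarks is easy to mechanize; a short Python sketch (illustrative only):

```python
import math

def feasible(d, very_free=False):
    # f_i = 4 e_i + 5 d must be >= 0 (free) or >= 1 (very free),
    # i.e. e_i >= ceil(-5d/4), resp. e_i >= ceil((1-5d)/4)
    lo = math.ceil((1 - 5 * d) / 4) if very_free else math.ceil(-5 * d / 4)
    # integers e_1,...,e_5 with sum -6d and each e_i >= lo exist iff 5*lo <= -6d
    return 5 * lo <= -6 * d

free_degrees = [d for d in range(1, 10) if feasible(d)]
very_free_degrees = [d for d in range(1, 10) if feasible(d, very_free=True)]

assert free_degrees == [4, 5, 8, 9]   # no free curves in degrees 1,2,3,6,7
assert very_free_degrees == [5, 9]    # no very free curves in degrees 4 and 8
```

This reproduces parts (1) and (3) of the remark above; parts (4) and (5) follow by listing the (unique) feasible tuples in degrees $4$ and $5$.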
\section{Degree 4 and 5 morphisms into X}
\label{section-degree-4-5}
\noindent
We will now show that there are no free morphisms of degrees $4$ or $5$ into $X$.
A morphism $\varphi = (G_0, \dots, G_5)$, where each $G_i = \sum_{j=0}^d a_{ij}S^{d - j}T^{j}$ is a homogeneous polynomial of degree $d$, gives us a $6 \times (d+1)$ matrix $(a_{ij})$, which we denote by $M_{\varphi}$.
\begin{lemma}
\label{lemma-matrices-have-max-rank}
If $\varphi$ is a degree $4$ or $5$ free morphism into $X$, then $M_{\varphi}$ has maximal rank.
\end{lemma}
\begin{proof} This follows from Remark \ref{charlie} (4) and (5) by observing that for a degree $d$ morphism into $X$, the transpose of $M_{\varphi}$ is the matrix of the $k$-linear map $\tilde{\varphi}_d: (R(-d)^{\oplus 6})_d \rightarrow R_d$.
\end{proof}
\begin{lemma}
\label{lemma-morphisms-degree 4 and 5}
\
\begin{enumerate}
\item[(a)] There are no degree $4$ free morphisms into $X$.
\item[(b)] There are no degree $5$ free morphisms into $X$.
\end{enumerate}
\end{lemma}
\begin{proof} (a) Assume a degree $4$ free morphism $\varphi = (G_0, \dots, G_5)$ exists. By the previous lemma, the $6 \times 5$ matrix $M_{\varphi} = (a_{ij})$ has maximal rank. Since permuting the $G_i$'s does not affect the splitting type of $E_X(\varphi)$, we can assume that the first $5$ rows of $M_{\varphi}$ are linearly independent over $k$. Then $\det((a_{ij})_{i \leq 4}) \neq 0$.
Now consider the matrix $\overline{M_{\varphi}} = (a^4_{ij})$. By the Frobenius endomorphism on $k$, $\det((a^4_{ij})_{i \leq 4}) = \bigl(\det((a_{ij})_{i \leq 4})\bigr)^4 \neq 0$, proving that $\overline{M_{\varphi}}$ has maximal rank as well.
\medskip \noindent
Since $G^5_0 + \dots + G^5_5 = 0$, computing its coefficients we obtain, for $0 \leq j \leq 4$,
\begin{equation}
\label{equation-columns-in-kernel}
\sum\nolimits_{i=0}^5 a^4_{ij}a_{i1} = 0
\quad\text{and}\quad
\sum\nolimits_{i=0}^5 a^4_{ij}a_{i3} = 0.
\end{equation}
The kernel of the map $k^6 \rightarrow k^5$ given by right multiplication by the matrix $\overline{M_{\varphi}}$ has dimension $1$ because $\operatorname{rank}(\overline{M}_{\varphi}) = 5$. By \eqref{equation-columns-in-kernel}, the $6$-tuples $(a_{01}, a_{11}, \dots,a_{51})$ and $(a_{03}, a_{13}, \dots, a_{53})$ lie in $\ker(k^6 \rightarrow k^5)$, and since they are columns of $M_{\varphi}$, they are linearly independent over $k$. Then $\dim_k\bigl(\ker(k^6 \rightarrow k^5)\bigr) \geq 2$, a contradiction.\\
\noindent
(b) Assume $\varphi = (G_0, ..., G_5)$ is a degree $5$ free morphism. By the previous lemma, the $6 \times 6$ matrix $M_{\varphi} = (a_{ij})$ has maximal rank and is therefore invertible. Thus $\overline{M_{\varphi}} = (a^4_{ij})$ is invertible by the same Frobenius argument as above.
Since $G^5_0 + ... + G^5_5 = 0$, computing the coefficients of the polynomial $G^5_0 + \cdots + G^5_5$, we get
$\sum_{i=0}^5 a^4_{ij}a_{i2} = 0$ for $0 \leq j \leq 5$.
Thus, the product of the row matrix $(a_{02}, a_{12}, ... , a_{52})$ and the matrix $\overline{{M}_{\varphi}}$ is $0$, which is impossible because $(a_{02}, a_{12}, ... , a_{52}) \neq 0$ and $\overline{{M}_{\varphi}}$ is invertible.
\end{proof}
\section{Computations for the degree $8$ free curve}
\label{section-degree-8}
\noindent Let $\varphi:\mathbb{P}^1\to\mathbb{P}^5$ be a morphism given by the 6-tuple
\begin{align*}
&G_0 = S^7T \\
&G_1 = S^4T^4 + S^3T^5 \\
&G_2 = S^4T^4 + S^3T^5 + T^8 \\
&G_3 = S^7T + S^6T^2 + S^5T^3 + S^4T^4 + S^3T^5 \\
&G_4 = S^8 + S^7T + S^6T^2 + S^5T^3 + S^4T^4 + S^3T^5 + T^8 \\
&G_5 = S^8 + S^7T + S^6T^2 + S^5T^3 + S^4T^4 + S^3T^5 + S^2T^6 + ST^7.
\end{align*}
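The defining property to verify is that $G_0^5 + \cdots + G_5^5 = 0$ in $k[S,T]$ with $\operatorname{char} k = 2$, so that the image lies on the Fermat quintic. The following Python sketch is ours, not from the paper (helper names are our own); it records each homogeneous $G_i$ by its set of $T$-exponents, which suffices since every coefficient is $0$ or $1$:

```python
from itertools import product

# Each homogeneous G_i in F_2[S,T] of degree 8 is recorded by the set of
# T-exponents of its monomials (all nonzero coefficients equal 1).
G = [
    {1},                      # G0 = S^7 T
    {4, 5},                   # G1
    {4, 5, 8},                # G2
    {1, 2, 3, 4, 5},          # G3
    {0, 1, 2, 3, 4, 5, 8},    # G4
    set(range(8)),            # G5
]

def fifth_power_support(A):
    """Support of p^5 mod 2. In char 2, p^4 has support {4a : a in A},
    so p^5 = p^4 * p has T-exponent multiset {4a + b : a, b in A}."""
    count = {}
    for a, b in product(A, repeat=2):
        count[4 * a + b] = count.get(4 * a + b, 0) + 1
    return {e for e, c in count.items() if c % 2 == 1}

# Addition in F_2 is symmetric difference of supports.
total = set()
for A in G:
    total ^= fifth_power_support(A)

print(total)  # empty set: G0^5 + ... + G5^5 = 0, so the curve lies on X
```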
\noindent One can check by computer or by hand that this curve lies on the Fermat hypersurface $X\subset\mathbb{P}^5$. Due to twisting, the domain of the map $\tilde{\varphi}:R(-8)^{\oplus 6}\to R$ has its first nontrivial graded piece in degree $8$. The $G_i$ are linearly independent over $k$, hence the kernel is trivial in degree $8$. The matrix for the map $\tilde{\varphi}_9:R(-8)_{9}^{\oplus 6}\to R_9$ is
$$
\begin{pmatrix}
0 &0 &0 &0 &0 &0 &0 &0 &1 &0 &1 &0 \\
1 &0 &0 &0 &0 &0 &1 &0 &1 &1 &1 &1 \\
0 &1 &0 &0 &0 &0 &1 &1 &1 &1 &1 &1 \\
0 &0 &0 &0 &0 &0 &1 &1 &1 &1 &1 &1 \\
0 &0 &1 &0 &1 &0 &1 &1 &1 &1 &1 &1 \\
0 &0 &1 &1 &1 &1 &1 &1 &1 &1 &1 &1 \\
0 &0 &0 &1 &0 &1 &0 &1 &0 &1 &1 &1 \\
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &1 &1 \\
0 &0 &0 &0 &1 &0 &0 &0 &1 &0 &0 &1 \\
0 &0 &0 &0 &0 &1 &0 &0 &0 &1 &0 &0
\end{pmatrix}
$$
\noindent where each direct summand of the domain has a basis $\left\{(S,0),(0,T)\right\}$, of which we take six copies (for total dimension $12$), and the range has basis given by the degree $9$ monomials in $S$ and $T$, ordered by increasing $T$-degree (for total dimension $10$). This matrix has rank $10$, which means that the map in degree $9$ is surjective. By rank-nullity, two dimensions of the kernel live in degree $9$; denote the generators by $x_1,x_2$. Surjectivity of $\tilde{\varphi}$ in degree $9$ implies surjectivity in all higher degrees. A second application of rank-nullity gives $\dim_k\Omega_X(\varphi)_{10}=7$. Four of these dimensions are inherited from the previous degree, taking the forms
$$
x_1S,x_2S,x_1T,x_2T.
$$
\noindent We conclude that there are three additional generators in degree $10$. Therefore, the splitting type of $\Omega_X(\varphi)$ is $(e_1,...,e_5)=(-10,-10,-10,-9,-9)$, which corresponds to a splitting type for $E_X(\varphi)$ of $(f_1,...,f_5)=(0,0,0,4,4)$, hence the curve is free.
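The rank computation above can be reproduced from the $T$-exponent supports of the $G_i$ alone; the following GF(2) elimination sketch is our own (helper names are hypothetical, not from the paper):

```python
# Degree-8 generators, each recorded by its set of T-exponents (char 2).
supports = [{1}, {4, 5}, {4, 5, 8}, {1, 2, 3, 4, 5},
            {0, 1, 2, 3, 4, 5, 8}, set(range(8))]

# Columns of the matrix of phi~_9: multiplying by S keeps every T-degree,
# multiplying by T shifts it up by one; rows are the degree-9 monomials.
columns = []
for A in supports:
    columns.append(A)                   # S * G_i
    columns.append({a + 1 for a in A})  # T * G_i

def rank_gf2(vectors):
    """Rank over F_2 of 0/1 vectors given as sets of coordinates,
    via an XOR basis with rows packed into Python ints."""
    pivots = {}  # top bit -> reduced row with that leading bit
    for v in vectors:
        row = sum(1 << e for e in v)
        while row:
            top = row.bit_length() - 1
            if top in pivots:
                row ^= pivots[top]      # clear the leading bit and continue
            else:
                pivots[top] = row
                break
    return len(pivots)

r = rank_gf2(columns)
print(r, 12 - r)  # rank 10, so the kernel in degree 9 has dimension 2
```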
\section{A Very Free Rational Curve of Degree 9}
\label{section-degree-9}
\noindent
We conclude by giving an example of a degree 9 very free curve lying on $X$. Let $\varphi:\mathbb{P}^1 \rightarrow \mathbb{P}^5$ be a morphism into the Fermat hypersurface given by the 6-tuple
\begin{align*}
G_0 &= S^4T^5 \\
G_1 &= S^9 + S^8T + S^5T^4 \\
G_2 &= S^9 + S^4T^5 + ST^8 \\
G_3 &= S^9 + S^8T + S^4T^5 + S^3T^6 + S^2T^7 + ST^8 \\
G_4 &= S^9 + S^5T^4 + S^3T^6 +S^2T^7 + ST^8 + T^9 \\
G_5 &= S^7T^2 + S^6T^3 + S^5T^4 + S^3T^6 + S^2T^7 +ST^8 +T^9.
\end{align*}
\noindent Let $e_1,...,e_5$ again denote the splitting type of $\Omega_X (\varphi)$. As in Section \ref{section-degree-8}, we know that $e_i \le -9$. Since the $G_i$ are linearly independent over $k$, $\dim_k(\Omega_X(\varphi)_9) = 0$. Next we claim that $\tilde{\varphi}_{10} : R_1^{\oplus 6} \rightarrow R_{10}$ is surjective. In fact, it can be checked that the $\tilde{\varphi}(b_i)$ span $R_{10}$, where the $b_i$ are the distinct basis elements of $R_1^{\oplus 6}$. It follows that $\tilde{\varphi}_n : R(-9)_n^{\oplus 6} \rightarrow R_n$ is surjective for $n \ge 10$. Hence,
\begin{align*}
\dim_k(\Omega_X(\varphi)_{10}) &= \dim_k(R_1^{\oplus 6}) - \dim_k(R_{10}) = 1\\
\dim_k(\Omega_X(\varphi)_{11}) &= \dim_k(R_2^{\oplus 6}) - \dim_k(R_{11}) = 6.
\end{align*}
\noindent
After reordering, this yields $(e_1,...,e_5) = (-11,-11,-11,-11,-10)$, which corresponds to the splitting type $(1,1,1,1,5)$ of $E_X(\varphi)$, showing that $\varphi$ is very free. This completes the proof of Theorem \ref{theorem-main}.
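Both computations in this section (membership on $X$ and surjectivity of $\tilde{\varphi}_{10}$) can be reproduced with the same elementary GF(2) arithmetic as for the degree $8$ curve. The following self-contained sketch is ours, not from the paper:

```python
from itertools import product

# Degree-9 generators, recorded by T-exponent supports (char 2).
supports = [{5}, {0, 1, 4}, {0, 5, 8}, {0, 1, 5, 6, 7, 8},
            {0, 4, 6, 7, 8, 9}, {2, 3, 4, 6, 7, 8, 9}]

def fifth_power_support(A):
    # In char 2, p^5 = p^4 * p has T-exponent multiset {4a + b : a, b in A}.
    count = {}
    for a, b in product(A, repeat=2):
        count[4 * a + b] = count.get(4 * a + b, 0) + 1
    return {e for e, c in count.items() if c % 2}

total = set()
for A in supports:
    total ^= fifth_power_support(A)  # addition in F_2 = symmetric difference
print(total == set())                # True: the curve lies on X

def rank_gf2(vectors):
    """Rank over F_2 via an XOR basis; vectors are sets of coordinates."""
    pivots = {}
    for v in vectors:
        row = sum(1 << e for e in v)
        while row:
            top = row.bit_length() - 1
            if top in pivots:
                row ^= pivots[top]
            else:
                pivots[top] = row
                break
    return len(pivots)

# Columns of phi~_10: S*G_i and T*G_i in the degree-10 monomial basis.
columns = [B for A in supports for B in (A, {a + 1 for a in A})]
r = rank_gf2(columns)
print(r)  # 11 = dim R_10: surjective, and dim Omega_X(phi)_10 = 12 - 11 = 1
```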
% arXiv:1207.5015 --- "Free and Very Free Morphisms into a Fermat Hypersurface" (math.AG)
% Abstract: This paper studies the existence of free and very free curves on the
% degree 5 Fermat hypersurface in P^5 over a field of characteristic 2. We find
% that such curves exist in degrees 8 and 9 and not in lower degrees.
% arXiv:1001.3885 --- "Improved Source Coding Exponents via Witsenhausen's Rate"
% Abstract: We provide a novel upper-bound on Witsenhausen's rate, the rate
% required in the zero-error analogue of the Slepian-Wolf problem; our bound is
% given in terms of a new information-theoretic functional defined on a certain
% graph. We then use the functional to give a single letter lower-bound on the
% error exponent for the Slepian-Wolf problem under the vanishing error
% probability criterion, where the decoder has full (i.e. unencoded) side
% information. Our exponent stems from our new encoding scheme which makes use
% of the source distribution only through the positions of the zeros in the
% `channel' matrix connecting the source with the side information, and in this
% sense is `semi-universal'. We demonstrate that our error exponent can beat the
% `expurgated' source-coding exponent of Csisz\'{a}r and K\"{o}rner,
% achievability of which requires the use of a non-universal maximum-likelihood
% decoder. An extension of our scheme to the lossy case (i.e. Wyner-Ziv) is
% given. For the case when the side information is a deterministic function of
% the source, the exponent of our improved scheme agrees with the
% sphere-packing bound exactly (thus determining the reliability function). An
% application of our functional to zero-error channel capacity is also given.
\section{Introduction}
Under consideration is the communication problem depicted in Figure \ref{fig:scsi}; nature produces a sequence $(X_i,Y_i)$ governed by the i.i.d. distribution $P_{XY}$ on alphabet $\mathcal{X} \times \mathcal{Y}$. An encoder, observing the sequence $X^n$, must send a message to a decoder, observing the sequence $Y^n$ (the side information), so that the decoder can use the message and its observation to generate $\hat X^n$, an estimate of $X^n$ to some desired fidelity.
For lossless reproduction, using the criterion that $P_{XY}^n(X^n \ne \hat X^n) \to 0$ as the blocklength $n \to \infty$, Slepian and Wolf \cite{Slepian:1973p333} determined that all rates in excess of $H(X|Y)$ are achievable. Bounds on the rate of decay of the error probability for this problem, the so-called \emph{error exponent}, were determined by Csisz\'{a}r and K\"{o}rner \cite{Csiszar:1981p220} whose results include a universally attainable random coding exponent and a non-universal `expurgated' exponent. Previously Gallager \cite{gallagersideinfo} derived a non-universal exponent that was later shown to be universally attainable by Csisz\'{a}r, K\"{o}rner and Marton \cite{Csiszar:1980p344}. For the Slepian-Wolf problem in its full generality (i.e. allowing for coded side information) the best known exponents are those of Csisz\'{a}r \cite{Csiszar:1982p334} and Oohama and Han \cite{Oohama:1994p222}. In the regime where the rate of the second encoder is large, our new exponent also improves upon these results, but we do not consider the general case here.
In the case of lossy reproduction, with the loss measured by some single letter distortion function $d$, the scenario is known as the Wyner-Ziv problem \cite{wynerziv}, after Wyner and Ziv who showed that if the allowable expected distortion is $\Delta$, then the required rate is given by
$$
R_{WZ}(P_{XY},\Delta)=\inf I(X;U) - I(Y;U),
$$
where the infimum is over all auxiliary random variables $U$ such that (1) $U$,
$X$, and $Y$ form a Markov chain in this order and (2) there exists a function
$\phi$ such that
$$
\mathop {\mathbb{E} }[d(X,\phi(Y,U))] \le \Delta.
$$
The best available exponents for the Wyner-Ziv problem were determined by the present authors in \cite{kellywagnerscexponents}. Henceforth we refer to both lossless and lossy problems as \emph{full side information problems}.
We describe new encoding schemes for both full side information problems which rely on ideas from graph theory. Our analysis shows that the chromatic number of a particular graph can be used to characterize the number of sequences that can be communicated without error. We are able to give a single letter upper bound on this chromatic number via a new functional on a graph $G$. We call our schemes \emph{semi-universal} because the scheme depends on the source distribution only through the position of the zeroes in the channel matrix. By comparing our new exponent directly with the previous results one sees that our scheme is capable of sending a larger number of sequences without error, i.e. we can expurgate more types which leads to better exponents.
Although our scheme applies to the vanishing error probability case, it is derived from the study of a related zero-error problem. The zero-error formulation of source coding with full side information was studied by Witsenhausen \cite{Witsenhausen:1976p290}, who showed that for fixed blocklength, $n$, the fewest number of messages required so that the decoder can reproduce the source with no error, i.e. $P_{XY}^n(X^n = \hat X^n)=1$, is $\gamma(G_X^n)$, the chromatic number of the $n$-fold strong product of the characteristic graph of the source.
\begin{figure}
\centering
\scalebox{0.5}{\includegraphics*{sideinfofig.pdf}}
\caption{Source coding with full side information}
\label{fig:scsi}
\end{figure}
The required rate, sometimes referred to as Witsenhausen's rate in the literature, is therefore
\begin{equation}
\label{wrrate}
R(G) = \lim_{n \to \infty} \frac{1}{n} \log \gamma(G^n).
\end{equation}
(We note that the limit in \eqref{wrrate} exists by sub-additivity and an appeal to Fekete's lemma.) Unfortunately, the problem of determining $R(G)$ `seems, in general, far beyond the reach of existing techniques' \cite{alon:powers}; see also the comment at the end of Section \ref{sect:wr}. However, since $\gamma(G^n) \le \gamma(G)^n$, it is clear that $R(G) \le \log \gamma(G)$. We provide a new bound on $R(G)$ by bounding the chromatic number of $G^n$ restricted to typeclasses. Our techniques combine graph- and information-theoretic ideas; see K\"{o}rner and Orlitsky \cite{Korner:1998p65} for a comprehensive overview of the applications of graph theory in zero-error information theory.
The rest of the paper is organized as follows. Section \ref{sect:def} gives definitions. Section \ref{sect:kappaprops} gives some useful properties of $\kappa$. In Section \ref{sect:wr} we motivate $\kappa$ and give our first result, a single letter bound on Witsenhausen's rate. In Section \ref{sect:exp}, we give our second result, improved error exponents for the problem of lossless source coding with full side-information; examples and comparisons to previous known exponents are also given. In Section \ref{sect:coveringbinning} we use the ideas from Section \ref{sect:exp} to give our third and fourth results, an improved error exponent for the lossy problem and determination of the reliability function for the case when the side information is a deterministic function of the source. In Section \ref{sect:channelcoding} we briefly give an application of $\kappa$ to channel coding.
\section{Definitions}
\label{sect:def}
Script letters, e.g. $\mathcal{X}, \mathcal{Y}$, denote alphabets. The set of all probability distributions over an alphabet $\mathcal{X}$ will be denoted by $\mathcal{P}(\mathcal{X})$. Small bold-faced letters, e.g. $\mathbf{x} \in \mathcal{X}^n, \mathbf{y} \in \mathcal{Y}^n$ denote vectors, usually the alphabet and length are clear from the context. For information-theoretic quantities, we use the notations of \cite{ckit}. $H(\mathbf{x} | \mathbf{y})$ denotes conditional empirical entropy, i.e. the conditional entropy computed using the empirical distribution $P_{\mathbf{x},\mathbf{y}}$. We use $[x]^+$ to denote $\max(0,x)$. Unless specified, exponents and logarithms are taken in base 2.
A graph $G=(V,E)$ is a pair of sets, where $V$ is the set of vertices and $E \subset V \times V$ is the set of edges. Two vertices $x,y \in V$ are connected iff $(x,y) \in E$. We will restrict ourselves to simple graphs, i.e. undirected graphs without self-loops. The \emph{degree of a vertex} $v$, $\Delta(v)$, is the number of other vertices to which $v$ is connected. The \emph{degree of a graph} $G$, denoted $\Delta(G)$ is defined as $ \max_{v \in V} \Delta(v)$. A coloring of a graph is an assignment of colors to vertices so that no pair of adjacent vertices share the same color. The \emph{chromatic number} of $G$, $\gamma(G)$, is defined to be the fewest number of colors needed to color $G$. For $U \subset V$, $G(U)$ is the \emph{(vertex-) induced subgraph}, i.e. the graph with vertex set $U$ and edge set $E \cap (U \times U)$. For two matrices, $V, W$ we use $V \ll W$ to mean that $W(b|a)=0$ implies $V(b|a)=0$.
Let $G=(V,E), H=(V',E')$ be two graphs. The \emph{strong product} (or \emph{and product}) $G \wedge H$ is a graph whose vertex set is $V \times V'$ and in which two vertices $(v,v'), (u,u')$ are connected iff
\begin{enumerate}
\item $v=u$ and $(v',u') \in E'$ or
\item $v'=u'$ and $(v,u) \in E$ or
\item $(v,u) \in E$ and $(v',u') \in E'$.
\end{enumerate}
We will be interested in $G^n=G \wedge G \wedge \ldots \wedge G$ ($n$ factors), the $n$-fold strong product of $G$. One may think of the vertices of $G^n$ as length $n$ vectors $(v_1, \ldots, v_n)$, with two distinct vertices connected in $G^n$ if and only if, in each coordinate, their components are equal or connected in $G$. The \emph{characteristic graph}, $G_X$, of a source $P_{XY}$ is the graph whose vertex set is $\mathcal{X}$ and in which two vertices $x, x'$ are connected if there is a $y \in \mathcal{Y}$ such that $P(y|x')P(y|x) >0$. For a given $\mathbf{y}$, the set $Z(\mathbf{y}) = \{\mathbf{x} : P(\mathbf{x} | \mathbf{y}) > 0 \}$ is the set of `confusable' sequences, i.e. the set of $\mathbf{x}$s that can occur with the given $\mathbf{y}$. For a graph $G$ and distribution $Q$ on the vertices of $G$, we define the following functional.
\begin{definition}
\begin{align}
\kappa(G, Q) = \mathop{\max_{V: V \ll G}}_{Q V = Q} H(V | Q).
\end{align}
Note that when we write the graph $G$ where a matrix is expected, we abuse notation and refer to the matrix $G=A+I$, where $A$ is the adjacency matrix of the graph $G$ and $I$ is the identity matrix.
Equivalently one may think of $\kappa$ as follows
\[
\kappa(G, Q) = \mathop{\max_{X, \tilde X:}}_{Q_X = Q_{\tilde X}=Q} H(\tilde X | X).
\]
where $X$ and $\tilde X$ have a common alphabet and $P(\tilde x | x) > 0$ only if $\tilde x = x$ or $(x, \tilde x) \in E(G)$.
\end{definition}
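To make the definition concrete, here is a toy computation of our own (not from the paper): on the $5$-cycle $C_5$ with uniform $Q$, the channel that moves uniformly over each closed neighbourhood is feasible, so $\kappa(C_5,\mathrm{unif}) \ge \log 3$, while the bound $\kappa \le H(Q) = \log 5$ of the next section caps it from above.

```python
from math import log2

n = 5                      # vertices of the 5-cycle C5
Q = [1.0 / n] * n          # uniform distribution on the vertices

# A feasible channel V << G: from each vertex, move uniformly over its
# closed neighbourhood {v-1, v, v+1} (support inside A + I).
V = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in (i - 1, i, i + 1):
        V[i][j % n] = 1.0 / 3

# Check stationarity QV = Q (true by symmetry: each column sums to 1).
QV = [sum(Q[i] * V[i][j] for i in range(n)) for j in range(n)]
assert all(abs(x - y) < 1e-12 for x, y in zip(QV, Q))

def cond_entropy(V, Q):
    """H(V|Q) = sum_a Q(a) H(V(.|a)), in bits."""
    return sum(q * sum(-p * log2(p) for p in row if p > 0)
               for q, row in zip(Q, V))

print(cond_entropy(V, Q))  # log2(3): a lower bound on kappa(C5, unif)
```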
\section{Properties of \texorpdfstring{$\kappa$}{\textkappa}}
\label{sect:kappaprops}
In this section we give some properties of $\kappa$ which will be used elsewhere in the paper. Throughout this section $G$ is a graph, $Q$ is a distribution on the vertices of $G$ and $X$ is a random variable with distribution $Q$.
\begin{property}
\label{prop:kapent}
$\kappa(G, Q) \leq H(Q)=H(X)$, where equality holds if $G$ is fully connected.
\end{property}
\begin{IEEEproof}
Note that any valid choice of channel $V$ in the optimization defining $\kappa(G,Q)$ satisfies $Q V = Q$, so the output distribution is again $Q$; hence $H(V|Q) = H(\tilde X | X) \le H(\tilde X) = H(Q)$, giving the first claim.
If $G$ is fully connected then the constraint $V \ll G$ imposes no restriction on the choice of $V$. The problem is then to choose a $V$ that produces the given output distribution $Q$. Setting the rows of $V$ equal to $Q$ gives $\kappa(G,Q)=H(Q)$.
\end{IEEEproof}
\begin{property}
\label{prop:hxcy}
If $G$ is the disjoint union of fully connected subgraphs then
\begin{equation}
\kappa(G,Q) = H(X|Y).
\end{equation}
where
\begin{enumerate}
\item $Y$ is a random variable with alphabet size $|\mathcal{Y}|$ equal to the number of disjoint subgraphs in $G$ so that to each subgraph we associate a unique element $y \in \mathcal{Y}$; and
\item for the subgraph associated with $y$, the event $\{X=a, Y=y\}$ has probability $Q(a)$ if $a$ is in the subgraph and probability zero otherwise.
\end{enumerate}
\end{property}
\begin{IEEEproof}
Without loss of generality we may assume the adjacency matrix of $G$ plus the identity matrix is block diagonal, where each block corresponds to a fully connected subgraph (i.e. is all 1s). Since both the constraint $V \ll G$ and the condition $QV=Q$ decouple across blocks, it suffices to solve the maximization problem for one of these blocks, say the one associated with element $y$.
Suppose that the subgraph has vertices $a_1, a_2, \ldots, a_n $ and define the (semi) probability measure $Q_y = [Q(a_1)~Q(a_2)~\ldots~Q(a_n)]$. Then the problem is
\begin{equation}
\label{eqn1:hxcystep1}
\max_{V: Q_y V = Q_y} \sum_{a} Q_y(a) \sum_{b} -V(b|a) \log V(b|a).
\end{equation}
Let $\tilde Q_y = \frac{Q_y}{\Vert Q_y \Vert}$. The maximizing $V$ is unchanged if we replace the problem by
\begin{align*}
&\max_{V: \tilde Q_y V = \tilde Q_y} \frac{1}{\Vert Q_y \Vert} \sum_{a} Q_y(a) \sum_{b} -V(b|a) \log V(b|a) \\
&=\max_{V: \tilde Q_y V = \tilde Q_y} H(V | \tilde Q_y ).
\end{align*}
We now use the proof of Property \ref{prop:kapent} to conclude that setting the rows of $V$ to be $\tilde Q_y$ solves this maximization. Using the definition of $Y$ to see that $\Vert Q_y \Vert =\mathbb{P}(Y=y)$ and substituting the maximizing $V$, equation \eqref{eqn1:hxcystep1} becomes
\begin{align*}
\sum_{a} Q_y(a) \sum_{b} -\tilde Q_y(b) \log \tilde Q_y(b) &= \mathbb{P}(Y=y)H(\tilde Q_y) \\
&= \mathbb{P}(Y=y)H(X | Y=y).
\end{align*}
Summing over the subgraphs gives the result.
\end{IEEEproof}
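A quick numerical sanity check of ours for Property \ref{prop:hxcy} (not from the paper): take two disjoint cliques, plug in the maximising $V$ from the proof, and compare with the direct route $H(X|Y) = H(X,Y) - H(Y) = H(Q) - H(Y)$.

```python
from math import log2

blocks = [[0, 1], [2, 3, 4]]          # two disjoint cliques (vertex labels)
Q = [0.1, 0.3, 0.2, 0.25, 0.15]       # arbitrary distribution on the vertices

def H(p):
    return sum(-x * log2(x) for x in p if x > 0)

# H(V|Q) for the maximising V: within the clique containing a, each row of V
# equals the renormalised restriction of Q to that clique (as in the proof).
kappa = sum(Q[a] * H([Q[b] / sum(Q[c] for c in B) for b in B])
            for B in blocks for a in B)

# Direct route: with Y the clique index, (X, Y) has the distribution Q, so
# H(X|Y) = H(X, Y) - H(Y) = H(Q) - H(Y).
h_y = H([sum(Q[a] for a in B) for B in blocks])
print(kappa, H(Q) - h_y)  # the two values agree
```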
\begin{property}
\label{prop:kappacont}
Let $G$ be a graph and $Q^{(n)}$ be a sequence of distributions (on the vertices of $G$) converging to distribution $Q^\infty$. Then
\[
\limsup_{n \to \infty} \kappa(G,Q^{(n)}) \leq \kappa(G,Q^\infty)
\]
(I.e. $\kappa(G,\cdot)$ is upper semicontinuous in $Q$ for a fixed $G$.)
\end{property}
\begin{IEEEproof}
Let
\[
V^{(n)} = \mathop{\mathop {\rm arg\ max }_{V:V \ll G}}_{Q^{(n)} V = Q^{(n)}} H(V|Q^{(n)}),
\]
where $V^{(n)}$ exists because we are maximizing a continuous function over a compact set. By choosing a subsequence and relabeling we may arrange it so that $H(V^{(n)}|Q^{(n)}) \to \limsup H(V^{(n)}|Q^{(n)})$ and $V^{(n)} \to V^\infty$, where both $V^\infty \ll G$ and $Q^\infty V^\infty =Q^\infty$ are true. In which case
\begin{align*}
\limsup_{n \to \infty} \kappa(G,Q^{(n)}) &= \limsup_{n \to \infty} H(V^{(n)}|Q^{(n)}) \\
&= H(V^\infty|Q^\infty) \leq \kappa(G,Q^\infty).
\end{align*}
\end{IEEEproof}
\section{Bounding Witsenhausen's Rate}
\label{sect:wr}
We recall that in Witsenhausen's problem \cite{Witsenhausen:1976p290} the goal is communication of $X^n$ to the decoder, which has access to $Y^n$, under the criterion $P_{XY}^n(X^n = \hat X^n)=1$. This requirement is stricter than the vanishing error probability criterion of Slepian-Wolf and increases the rate from $H(X|Y)$ to $R(G_X)$. Witsenhausen's scheme is as follows: the decoder sees $Y^n$, a realization of the side-information, and can identify the set $Z(Y^n)$; this set induces a subgraph of $G^n_X$. If the vertices of $G_X^n$ are colored, then the encoder can send the color of $X^n$ to the decoder, which can then uniquely identify the source symbol in $Z(Y^n)$. A result of \cite{Witsenhausen:1976p290} proves that, when encoding blocks of length $n$, $\gamma(G_X^n)$ is the smallest possible size of the signaling set.
When considering very large blocklengths, the fact that there are only polynomially many types means we can send the type essentially for free. A possible modification of Witsenhausen's scheme is as follows. First, fix the blocklength $n$ and for every type $Q_X$, the encoder and decoder agree on a coloring of the graph $G^n_X(T^n_{Q_X})$ using $\gamma(G^n_X(T^n_{Q_X}))$ colors. The encoder and decoder operate as follows.
\emph{Encoder:} The encoder first communicates $Q_\mathbf{x}$, the type of the source sequence. Next the encoder looks at the graph $G^n_X(T^n_{Q_\mathbf{x}})$, that is the subgraph of $G^n_X$ induced by $T^n_{Q_\mathbf{x}}$ and sends the color of vertex $\mathbf{x}$ to the decoder.
\emph{Decoder:} The decoder sees side-information $\mathbf{y}$ and identifies the set $Z(\mathbf{y})$. Knowing the type the decoder can examine the induced subgraph $G^n_X(T^n_{Q_\mathbf{x}} \cap Z(\mathbf{y}))$ and using the color from the encoder, identify the source sequence.
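The inequality $\gamma(G^n) \le \gamma(G)^n$ underlying these rate bounds comes from product colourings: colour $(u,v)$ by the pair of colours of its coordinates. A sanity check of our own on $C_5 \boxtimes C_5$ (our illustration, not from the paper):

```python
from itertools import product

n = 5
c = [0, 1, 0, 1, 2]                 # a proper 3-colouring of the 5-cycle

def adjacent(u, v):                 # adjacency in C5
    return (u - v) % n in (1, n - 1)

def strong_adjacent(p, q):
    # strong product: distinct vertices, each coordinate equal or adjacent
    return p != q and all(a == b or adjacent(a, b) for a, b in zip(p, q))

# The product colouring uses gamma(C5)^2 = 9 colours; check it is proper.
vertices = list(product(range(n), repeat=2))
proper = all((c[p[0]], c[p[1]]) != (c[q[0]], c[q[1]])
             for p in vertices for q in vertices if strong_adjacent(p, q))
print(proper)  # True, hence gamma(C5 x C5) <= 9
```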
The following lemma shows that this scheme is asymptotically optimal.
\begin{lemma}
\begin{equation}
R(G) = \lim_{n \to \infty} \max_{Q_X \in \mathcal{P}^n(\mathcal{X})} \frac{\log \gamma(G_X^n(T^n_{Q_X}))}{n}
\end{equation}
\end{lemma}
\begin{IEEEproof}
The number of bits used by our scheme is an upper bound on $R(G)$ and hence
\begin{align*}
R(G) &\leq \liminf_{n \to \infty} \left [ \frac{\log (n+1)^{\vert \mathcal{X} \vert}}{n} + \max_{Q_X \in \mathcal{P}^n(\mathcal{X})}\frac{\log \gamma(G_X^n(T^n_{Q_X}))}{n} \right ] \\
&= \liminf_{n \to \infty} \max_{Q_X \in \mathcal{P}^n(\mathcal{X})} \frac{\log \gamma(G_X^n(T^n_{Q_X}))}{n}
\end{align*}
But trivially we also have
\[
R(G) \geq \limsup_{n \to \infty}\max_{Q_X \in \mathcal{P}^n(\mathcal{X})} \frac{\log \gamma(G_X^n(T^n_{Q_X}))}{n}
\]
where we used the fact that the chromatic number of an induced subgraph is at most the chromatic number of $G_X^n$.
\end{IEEEproof}
We now bound the chromatic number of the induced subgraph in two steps. First we give a degree bound on the induced subgraph.
\begin{lemma}
\label{lem:degsg}
Let $Q_X \in \mathcal{P}^n(\mathcal{X})$. Then
\begin{align}
\label{eqn:deltabound}
(n+1)^{-\vert\mathcal{X}\vert\vert\mathcal{X}\vert}\exp(n\kappa_n(G_X,Q_X)) - 1 \le \Delta(G_X^n(T^n_{Q_X})) \leq (n+1)^{\vert\mathcal{X}\vert\vert\mathcal{X}\vert}\exp(n\kappa_n(G_X,Q_X))
\end{align}
where
\begin{align}
\kappa_n(G_X, Q_X) = \mathop{\mathop{\max_{V: Q_X \times V \in \mathcal{P}^n}}_{V \ll G_X}}_{Q_X V = Q_X} H(V | Q_X).
\end{align}
Note: $\kappa_n$ maximizes over types rather than distributions, but of course we may replace $\kappa_n$ by $\kappa$ in the right-hand inequality of \eqref{eqn:deltabound} to get another valid upper bound.
\end{lemma}
\begin{IEEEproof}
%
Suppose $\mathbf{x} \in T^n_{Q_X}$, and let $W(\mathbf{x})$ denote the neighbors of $\mathbf{x}$ in the induced subgraph $G^n_X(T^n_{Q_X})$. We partition the set $\{ (\mathbf{x}, \mathbf{x'}) : \mathbf{x'} \in W(\mathbf{x}) \}$ by joint type $Q_{XX'}$ and observe that each joint type can be written as $Q_{X} \times V$ for some $V$. One may verify $V \ll G_X$. One also sees that $Q_X V = Q_X$, since $(\mathbf{x}, \mathbf{x'}) \in T^n_{Q_{XX'}} \cap E(G^n(T^n_{Q_X}))$ implies $Q_X' = Q_X$ and writing $Q_{XX'} = Q_X \times V$, tells us that $Q_X V = Q_X$.
For any $\mathbf{x} \in T^n_{Q_X}$ we can count the number of strings in $W(\mathbf{x})$ by decomposing $\{ (\mathbf{x}, \mathbf{x'}) : \mathbf{x'} \in W(\mathbf{x}) \}$ into joint types, choosing a $V$ for each joint type and using the standard cardinality bounds for type classes. Thus
\begin{align*}
\Delta(G_X^n(T^n_{Q_X})) &\leq \mathop{\sum_{V:V \ll G}}_{Q_XV = Q_X} \vert T^n_{V}(\mathbf{x}) \vert \\
&\leq \mathop{\sum_{V:V \ll G}}_{Q_XV = Q_X} \exp(nH(V | Q_X)) \\
&\leq (n+1)^{\vert\mathcal{X}\vert\vert\mathcal{X}\vert} \mathop{\max_{V:V \ll G}}_{Q_XV = Q_X} \exp(nH(V | Q_X)).
\end{align*}
For the reverse inequality, we let $\Delta(\mathbf{x})$ denote the degree of vertex $\mathbf{x}$ in the induced subgraph. Then
\begin{align*}
\Delta(\mathbf{x}) = \mathop{\mathop{\sum_{V: Q_X \times V \in \mathcal{P}^n}}_{V \ne I, V \ll G}}_{Q_XV = Q_X} \vert T^n_{V}(\mathbf{x}) \vert .
\end{align*}
To see this, note first that if $V$ arises by selecting a $\mathbf{x}' \in W(\mathbf{x})$, then $T_V(\mathbf{x}) \subset W(\mathbf{x})$. And second, that any $V \ne I$ with $V \ll G$ and $Q_X V = Q_X$ gives rise to a neighbor. Then because $\Delta(G_X^n(T^n_{Q_X})) = \max_{\mathbf{x} \in T_{Q_X}} \Delta(\mathbf{x})$, we have
\begin{align*}
\Delta(G_X^n(T^n_{Q_X})) &= \max_{\mathbf{x} \in T_{Q_X}} \mathop{\mathop{\sum_{V: Q_X \times V \in \mathcal{P}^n}}_{V \ne I, V \ll G}}_{Q_XV = Q_X} \vert T^n_{V}(\mathbf{x}) \vert \\
&= \max_{\mathbf{x} \in T_{Q_X}} \mathop{\mathop{\sum_{V: Q_X \times V \in \mathcal{P}^n}}_{V \ll G}}_{Q_XV = Q_X} \vert T^n_{V}(\mathbf{x}) \vert - 1.
\end{align*}
Using the cardinality bound for typeclasses we get
\begin{align*}
\Delta(G_X^n(T^n_{Q_X})) &\ge \max_{\mathbf{x} \in T_{Q_X}} \mathop{\max_{V:V \ll G}}_{Q_XV = Q_X} \vert T^n_{V}(\mathbf{x}) \vert - 1 \\
&\ge \max_{\mathbf{x}\in T_{Q_X}} (n+1)^{-\vert \mathcal{X} \vert\vert \mathcal{X} \vert} \mathop{\max_{V:V \ll G}}_{Q_XV = Q_X} \exp(n(H(V|Q_X))) -1\\
&= (n+1)^{-\vert \mathcal{X} \vert\vert \mathcal{X} \vert} \mathop{\max_{V:V \ll G}}_{Q_XV = Q_X} \exp(n(H(V|Q_X))) -1
\end{align*}
where we implicitly assumed we still have $Q_X \times V \in \mathcal{P}^n$.
\end{IEEEproof}
Using the previous lemma we bound $R(G)$ as follows
\begin{theorem}
\label{thm:kappabound}
\[
R(G_X) \leq \max_{Q_X \in \mathcal{P}(\mathcal{X})} \kappa(G_X, Q_X).
\]
\end{theorem}
\begin{IEEEproof}
A well-known fact from graph theory tells us that $\gamma(G) \leq \Delta(G) + 1$ \cite[sec 5.2]{DiestelGT}. This combined with the previous lemma gives
\begin{align*}
&\max_{Q_X \in \mathcal{P}^n(\mathcal{X})} \frac{\log \gamma(G_X^n(T^n_{Q_X}))}{n} \\
\quad &\leq \max_{Q_X \in \mathcal{P}^n(\mathcal{X})} n^{-1}\log \left [ (n+1)^{\vert\mathcal{X}\vert\vert\mathcal{X}\vert}\exp(n\kappa_n(G_X,Q_X)) + 1\right ] \\
&\leq \max_{Q_X \in \mathcal{P}(\mathcal{X})} n^{-1}\log \left [ (n+1)^{\vert\mathcal{X}\vert\vert\mathcal{X}\vert}\exp(n\kappa(G_X,Q_X)) + 1\right ]
\end{align*}
where the final line used the fact that in both maximizations we maximize over a larger set. Taking limits as $n \to \infty$ gives the result.
\end{IEEEproof}
We now discuss the tightness of the bound.
\subsection{Tightness of the bound in Theorem \ref{thm:kappabound}}
We note that the bound given by $\kappa$ on $R(G)$ need not be tight. To see this, consider the graph $G$ with $V(G)=\{0,1,\ldots,2^n\}$ and $E(G)=\{ (i, i+1) : 0 \le i < 2^n \} \cup \{ (0,i) : i \ge 2\}$. It is clear that $\gamma(G)=3$ for all $n$, and hence $R(G) \leq \log 3$. Yet, if we choose
\begin{align*}
V(b \vert 0) &= \begin{cases}
0 & \text{if } b=0 \\
2^{-n} & \text{otherwise}
\end{cases} \\
V(b | a \neq 0) &= \begin{cases}
1 & \text{if } b = 0 \\
0 & \text{otherwise}
\end{cases} \\
Q &= \left [ \frac{1}{2}, \frac{1}{2^{n+1}}, \frac{1}{2^{n+1}} \ldots \frac{1}{2^{n+1}} \right ]
\end{align*}
one sees that $V \ll G$ and therefore that
\[
\kappa(G,Q) \geq H(V | Q) = \frac{1}{2} \log 2^n = \frac{n}{2}.
\]
Although the gap between $R(G)$ and the bound of Theorem \ref{thm:kappabound} may be arbitrarily large, note that the bound of Theorem \ref{thm:kappabound} is a convex program, whereas even the computation of $\gamma(G)$ is NP-hard. Hence although we do not know whether our bound is ever better than the bound provided by $\gamma(G)$, from a computational point of view our bound has an advantage.
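The computation in the example above is easy to reproduce numerically; here is a sketch of ours (not from the paper) for $n = 4$, so the graph has $2^n = 16$ vertices besides $0$:

```python
from math import log2

n = 4                           # graph parameter; vertices are 0, 1, ..., 2^n
N = 2 ** n

# Q and V from the example above
Q = [0.5] + [1 / 2 ** (n + 1)] * N
V = [[0.0] * (N + 1) for _ in range(N + 1)]
for b in range(1, N + 1):
    V[0][b] = 1 / N             # V(b|0) = 2^{-n} for b != 0
for a in range(1, N + 1):
    V[a][0] = 1.0               # V(0|a) = 1 for a != 0

# stationarity QV = Q
QV = [sum(Q[a] * V[a][b] for a in range(N + 1)) for b in range(N + 1)]
assert all(abs(x - y) < 1e-12 for x, y in zip(QV, Q))

# H(V|Q): only the row of vertex 0 contributes, giving Q(0) * log 2^n = n/2
hv = sum(Q[a] * sum(-p * log2(p) for p in row if p > 0)
         for a, row in enumerate(V))
print(hv)  # 2.0 = n/2, while R(G) <= log 3 for every n
```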
\section{Improved Exponents for Lossless Source Coding}
\label{sect:exp}
We consider the same setup as in Figure \ref{fig:scsi}. The encoder/decoder pair are functions $\psi:\mathcal{X}^n \to \mathcal{M}$ and $\varphi:\mathcal{M} \times \mathcal{Y}^n \to \mathcal{\hat X}^n$, where $\mathcal{M}$ is a fixed set. We define the error probability to be
\begin{equation}
P_e(\psi, \varphi) = \Pr(X^n \ne \hat X^n)
\end{equation}
where $\hat X^n = \varphi(\psi(X^n),Y^n)$. In this section we are interested in the asymptotic behaviour of the error probability $P_e(\psi,\varphi)$ as $n$ gets large.
We define the \emph{error exponent} (or \emph{reliability function}) to be
\begin{equation}
\theta(R,P_{XY}) = \lim_{\epsilon \downarrow 0} \liminf_{n \to \infty} -\frac{1}{n} \log \left [ \min_{(\psi,
\varphi)} P_e(\psi,\varphi) \right ]
\end{equation}
where the minimization ranges over all encoder/decoder pairs satisfying \begin{equation}
\label{rateconst}
\frac{1}{n} \log | \mathcal{M} | \leq R + \epsilon.
\end{equation}
Our main result is
\begin{theorem}
\label{thm:exponent}
For any $R>0$ and $P_{XY} \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$,
\begin{align}
\notag\theta(R,P_{XY}) \geq
\mathop{\inf_{Q_{XY}: }}_{\min(\kappa(G_X,Q_X),\log \gamma(G_X)) \ge R} & \Big [ D(Q_{XY} || P_{XY}) \\
&\qquad + (R - H_Q(X|Y))^+ \Big ]
\end{align}
where $G_X$ is the characteristic graph of the source $P_{XY}$.
\end{theorem}
To achieve this exponent we use the following scheme. First, fix the blocklength $n$. For every type $Q_X$, the encoder and decoder agree on a coloring of the graph $G^n_X(T^n_{Q_X})$ using $\gamma(G^n_X(T^n_{Q_X}))$ colors. When $
\log \gamma(G^n_X(T^n_{Q_X})) \ge nR$, the encoder and decoder agree on a random binning of the typeclass $T^n(Q_X)$ into $\exp(nR)$ bins. The encoder's message set is
\begin{align*}
&\mathcal{M}= \mathcal{M}_1 \times \mathcal{M}_2 \text{ where } \\
&\mathcal{M}_1 = \{1,2,\ldots,\exp(nR) \}, ~ \mathcal{M}_2 = \{1,2,\ldots,(n+1)^{|\mathcal{X}|} \}
\end{align*}
\emph{Encoder:} The encoder sends the type $Q_\mathbf{x}$ of the string. If $\log \gamma(G^n_X(T^n_{Q_\mathbf{x}})) < nR$, then there is sufficient rate to send the color to the decoder. If not, the encoder sends the bin index of the string $\mathbf{x}$. In both cases we let $U(\mathbf{x})$ denote the index sent to the decoder.
\emph{Decoder:} The decoder receives the index of the type and side information $\mathbf{y}$. If $\log \gamma(G^n_X(T^n_{Q_X})) < nR$ the color index and the side information allow the decoder to reproduce $X^n$ without error. In the opposite case, the decoder receives a bin index, looks in that bin and chooses an $\mathbf{x}$ in the bin so that $H(\mathbf{x} | \mathbf{y}) \le H(\mathbf{\tilde x} | \mathbf{y})$ for all other $\mathbf{\tilde x}$ in the bin.
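The decoder's tie-breaking statistic is simply the conditional entropy of the joint empirical (type) distribution of the candidate string and the side information. A minimal helper of ours (the function name is our own, not from the paper):

```python
from math import log2
from collections import Counter

def empirical_cond_entropy(x, y):
    """H(x|y): conditional entropy of the joint type of (x, y), in bits."""
    n = len(x)
    joint = Counter(zip(x, y))          # empirical joint counts
    marg_y = Counter(y)                 # empirical marginal counts of y
    # H(X|Y) = sum_{a,b} P(a,b) log P(b)/P(a,b), with empirical probabilities
    return sum(c / n * log2(marg_y[b] / c) for (a, b), c in joint.items())

# If x is determined by y the statistic is 0; an independent-looking pair
# of binary strings scores a full bit.
print(empirical_cond_entropy((0, 1, 0, 1), (0, 1, 0, 1)))   # 0.0
print(empirical_cond_entropy((0, 0, 1, 1), (0, 1, 0, 1)))   # 1.0
```

Among the strings in the received bin, the decoder outputs the one minimising this statistic against the observed side information.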
\subsection{Analysis}
To prove our theorem, we will use the following definition and lemmas. Let
$$\mathcal{E} = \{ (\mathbf{x},\mathbf{y}) : \log \gamma(G_X^n(T^n_{ Q_\mathbf{x}})) \ge nR \}. $$
Observe that on $\mathcal{E}^c$ our scheme makes no error.
\begin{lemma}
\label{lem:empent}
For all strings $\mathbf{x},\mathbf{y}$, let $$S(\mathbf{x} | \mathbf{y}) = \{ \mathbf{\tilde x} |
H(\mathbf{\tilde x} | \mathbf y) \leq H(\mathbf x | \mathbf y), Q_{\mathbf{\tilde x}} = Q_\mathbf{x} \}.$$ Then $$|S(\mathbf{x}|\mathbf{y})| \leq (n
+1)^{|\mathcal{X}||\mathcal{Y}|} \exp(nH(\mathbf{x}|\mathbf{y})).$$
\end{lemma}
\begin{IEEEproof}
\begin{align*}
|S(\mathbf x | \mathbf y)| & \leq \vert \{ \mathbf{\tilde x} |
H(\mathbf{\tilde x} | \mathbf y) \leq H(\mathbf x | \mathbf y) \} \vert \\
&= \sum_{V: V \in \mathcal{C}^n(Q_\mathbf{y},\mathcal{X})}\sum_{\tilde
{\mathbf{x}} \in
T_V(\mathbf{y}): H(\tilde{\mathbf{x}}|\mathbf{y}) \le H(\mathbf{x}|\mathbf{y})}1 \\
&= \mathop{\sum_{V: V \in \mathcal{C}^n(Q_\mathbf{y},\mathcal{X})}}_{ H(V|Q_\mathbf{y}) \leq
H(\mathbf{x}|\mathbf{y})} |T_V(\mathbf{y})| \\
&\leq \mathop{\sum_{V: V \in \mathcal{C}^n(Q_\mathbf{y},\mathcal{X})}}_{ H(V|Q_\mathbf{y})\leq
H(\mathbf{x}|\mathbf{y})} \exp(nH(\mathbf{x}|\mathbf{y})) \\
&\leq (n+1)^{|\mathcal{X}||\mathcal{Y}|}
\exp(nH(\mathbf{x}|\mathbf{y}))
\end{align*}
\end{IEEEproof}
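The counting in Lemma \ref{lem:empent} can be checked by brute force at small blocklengths. The sketch below is purely illustrative (the binary alphabets, the strings and the blocklength are hypothetical): it enumerates all strings of the same type as $\mathbf{x}$ with no larger empirical conditional entropy and compares against the bound $(n+1)^{|\mathcal{X}||\mathcal{Y}|}\exp(nH(\mathbf{x}|\mathbf{y}))$, with $\exp$ base 2.

```python
import itertools
import math
from collections import Counter

def ent(counts):
    # entropy (base 2) of the empirical distribution given by counts
    n = sum(counts.values())
    return -sum(c / n * math.log2(c / n) for c in counts.values() if c)

def cond_ent(x, y):
    # empirical conditional entropy H(x|y) = H(joint type) - H(type of y)
    return ent(Counter(zip(x, y))) - ent(Counter(y))

def S(x, y, alphabet):
    # strings of the same type as x with no larger empirical conditional entropy
    tx = Counter(x)
    return [t for t in itertools.product(alphabet, repeat=len(x))
            if Counter(t) == tx and cond_ent(t, y) <= cond_ent(x, y) + 1e-12]

n = 6
x = (0, 1, 0, 1, 1, 0)
y = (0, 0, 1, 1, 0, 1)
bound = (n + 1) ** (2 * 2) * 2 ** (n * cond_ent(x, y))
assert len(S(x, y, (0, 1))) <= bound
```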
\begin{lemma}
\label{lem:binerror}
For all strings $\mathbf{x},\mathbf{y}$
\[
P(X^n \ne \hat X^n | X^n = \mathbf{x}, Y^n = \mathbf{y}) \leq \exp(-n(R - H(\mathbf{x} | \mathbf{y}) -\delta_n)^{+}).
\]
where $\delta_n \to 0$ with $n$. Moreover if $(\mathbf{x},\mathbf{y}) \in \mathcal{E}^c$ then
\[
P(X^n \ne \hat X^n | X^n = \mathbf{x}, Y^n = \mathbf{y}) = 0.
\]
\end{lemma}
\begin{IEEEproof}
As noted in the specification of the decoder, for types $Q_X$ so that $\log \gamma(G^n_X(T^n_{Q_X})) < nR$ the decoder makes no error. For the opposite case we bound the set of candidate $\tilde{\mathbf{x}}$ with $S(\mathbf{x}|\mathbf{y})$ yielding
\begin{align*}
P(X^n &\ne \hat X^n | X^n = \mathbf{x}, Y^n = \mathbf{y}) \\
&\leq \sum_{\mathbf{\tilde x} \in S(\mathbf{x}|\mathbf{y})} P(U(\mathbf{x}) = U(\mathbf{\tilde x})) \\
&\leq (n+1)^{|\mathcal{X}||\mathcal{Y}|}\exp(-n(R - H(\mathbf{x} | \mathbf{y}))) \\
&\leq \exp(-n(R - H(\mathbf{x} | \mathbf{y}) - \delta_n))
\end{align*}
Using the fact that $P(X^n \ne \hat X^n | X^n = \mathbf{x}, Y^n = \mathbf{y}) \le 1$ gives the result.
\end{IEEEproof}
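As a quick numerical illustration of the bound in Lemma \ref{lem:binerror} (a sketch only; the parameter values below are hypothetical), note that it equals $\min\{1,(n+1)^{|\mathcal{X}||\mathcal{Y}|}\exp(-n(R-H(\mathbf{x}|\mathbf{y})))\}$ with $\delta_n = n^{-1}|\mathcal{X}||\mathcal{Y}|\log(n+1)$ and $\exp$ base 2:

```python
import math

def bin_error_bound(n, R, h_cond, ax, ay):
    # 2^{-n (R - H(x|y) - delta_n)^+}, with delta_n = |X||Y| log2(n+1) / n
    delta_n = ax * ay * math.log2(n + 1) / n
    return 2.0 ** (-n * max(0.0, R - h_cond - delta_n))

# decays exponentially once R exceeds H(x|y) + delta_n; trivial (equal to 1) below
assert bin_error_bound(200, 0.9, 0.5, 2, 2) < 1e-10
assert bin_error_bound(200, 0.4, 0.5, 2, 2) == 1.0
```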
\begin{lemma}
\label{lem:cont}
Let $G$ be a graph, let $\delta_n>0$, $\tilde{\delta}_n > 0$, $\tilde{\tilde{\delta}}_n$ be sequences converging to zero, and define
\begin{align*}
& F_n(Q_{XY}) = \\
\notag &\begin{cases} D(Q_{XY} || P_{XY}) & \text{if } \kappa(G,Q_X) \ge R - \tilde{\delta}_n \\
\ + (R- H_Q(X|Y) - \delta_n)^+ - \tilde{\tilde{\delta}}_n & \\
\infty & \text{otherwise,}
\end{cases} \\
& F(Q_{XY}) = \\
&\begin{cases} D(Q_{XY} || P_{XY}) + (R- H_Q(X|Y))^+ & \text{if } \kappa(G,Q_X) \ge R \\
\infty & \text{otherwise}
\end{cases}
\end{align*}
and $Q_{XY}^{(n)}$ be a sequence of distributions converging to $Q_{XY}^\infty$. Then
\begin{equation}
\label{eqn:cont}
\liminf_{n \to \infty} F_n(Q_{XY}^{(n)}) \ge F(Q^\infty_{XY})
\end{equation}
\end{lemma}
\begin{IEEEproof}
We proceed by cases. Case 1: $Q_{XY}^\infty$ is such that $\kappa(G,Q_X^\infty) \ge R$. If $\kappa(G,Q_X^{(n)}) < R - \tilde{\delta}_n$ for all sufficiently large $n$, then the left-hand side is infinite and the result trivially holds. Otherwise we appeal to the lower semicontinuity of the information measures.
Case 2: $Q_{XY}^\infty$ is such that $\kappa(G,Q_X^\infty) <R$. In this case we see, by appealing to $\kappa$ property \ref{prop:kappacont}, that $\limsup_n \kappa(G,Q_X^{(n)}) < R$; since $\tilde{\delta}_n \to 0$, it follows that $F_n(Q_{XY}^{(n)}) = \infty$ for all sufficiently large $n$, whence \eqref{eqn:cont} holds with equality.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem \ref{thm:exponent}]
For any $\epsilon > 0$, we note that for sufficiently large $n$ the constraint \eqref{rateconst} is met.
Let $$\mathcal{T}^n = \{ Q_{XY} \in \mathcal{P}^n(\mathcal{X} \times \mathcal{Y}): \log \gamma(G_X^n(T^n_{Q_X})) \ge nR \}.$$
We begin by partitioning the sequence space by joint type and computing the error probability for each type
\begin{align*}
P_e &= \sum_{Q_{XY}} \sum_{(\mathbf{x},\mathbf{y}) \in T^n_{Q_{XY}}} P(X^n \ne \hat X^n, X^n=\mathbf{x},Y^n = \mathbf{y}) \\
&\stackrel{*}{\leq} \mathop{\sum_{Q_{XY} \in \mathcal{T}^n}}_{} \sum_{(\mathbf{x},\mathbf{y}) \in T^n_{Q_{XY}}} \exp(-n(R - H(\mathbf{x} | \mathbf{y}) -\delta_n)^{+}) \\
&\qquad \times \exp(-n(D(Q_{XY}||P_{XY}) + H(Q_{XY}))) \\
&\leq \mathop{\sum_{Q_{XY} \in \mathcal{T}^n}}_{} \exp(-n((R - H_Q(X|Y) -\delta_n)^{+} \\
&\qquad + D(Q_{XY}||P_{XY}) )) \\
&\leq (n+1)^{\vert \mathcal{X} \vert \vert \mathcal{Y} \vert } \mathop{\max_{Q_{XY} \in \mathcal{T}^n}}_{} \exp(-n((R - H_Q(X|Y) -\delta_n)^{+} \\
&\quad + D(Q_{XY}||P_{XY}) ))
\end{align*}
where in $^*$ we applied a standard identity for the probability of a sequence in $T^n_{Q_{XY}}$ and Lemma \ref{lem:binerror}. For any $G$, $\Delta(G)+1 \ge \gamma(G)$, thus
\[
\mathcal{T}^n \subseteq \{ Q_{XY} \in \mathcal{P}^n(\mathcal{X} \times \mathcal{Y}) : \log ( \Delta(G_X^n(T^n_{Q_X})) +1 ) \ge nR \}.
\]
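The bound $\gamma(G) \le \Delta(G)+1$ invoked above is witnessed by greedy coloring: scanning the vertices in any order, at most $\Delta(G)$ colors are blocked at each vertex. A minimal sketch (the 5-cycle example is an arbitrary illustration):

```python
def greedy_coloring(adj):
    # assign each vertex the smallest color not used by an already-colored neighbor
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color

# 5-cycle: maximum degree 2, so greedy coloring uses at most 3 colors
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
col = greedy_coloring(adj)
assert all(col[v] != col[u] for v in adj for u in adj[v])       # proper coloring
assert len(set(col.values())) <= max(len(n) for n in adj.values()) + 1
```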
Let
\begin{align*}
& g^n(G_X, Q_{XY}) \\
&= \log(\exp(n[\kappa(G_X,Q_X) + n^{-1}\vert \mathcal{X} \vert^2 \log(n+1)])+1)
\end{align*}
and observe that $n^{-1} g^n(G_X, Q_{XY}) \to \kappa(G_X,Q_X)$ and let $\tilde{\delta}_n=n^{-1} g^n(G_X, Q_{XY}) -\kappa(G_X,Q_X)$.
Appealing to Lemma \ref{lem:degsg} with $\kappa$ in place of $\kappa_n$, we may further bound the set by $\{ Q_{XY} \in \mathcal{P}^n(\mathcal{X} \times \mathcal{Y}) : n^{-1} g^n(G_X, Q_{XY}) \ge R \}$, i.e.
$$\mathcal{T}^n \subseteq \tilde{\mathcal{T}}^n = \{ Q_{XY} \in \mathcal{P}^n(\mathcal{X} \times \mathcal{Y}): \kappa(G_X,Q_X) + \tilde{\delta}_n \ge R \}.$$
Adopting the definitions from Lemma \ref{lem:cont}, with $\tilde{\tilde{\delta}}_n=n^{-1}\vert \mathcal{X} \vert \vert \mathcal{Y} \vert \log(n+1)$ we see
\begin{align}
\label{eqn:pf1}
&-n^{-1} \log P_e \ge \min_{Q_{XY} \in \mathcal{P}^n(\mathcal{X} \times \mathcal{Y})} F_n(Q_{XY}).
\end{align}
For each $n$, let $Q_{XY}^{(n)}$ achieve the minimum in \eqref{eqn:pf1}. Taking a convergent subsequence and relabelling we may assume that $Q_{XY}^{(n)} \to Q_{XY}^\infty$. Hence
\begin{align*}
\liminf_{n \to \infty} F_n(Q_{XY}^{(n)}) &\stackrel{*}{\ge} F(Q^\infty_{XY}) \\
&\ge \inf_{Q_{XY} \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})} F(Q_{XY})
\end{align*}
where $^*$ follows from Lemma \ref{lem:cont}. The inequality
\[
\log \gamma(G_X) \ge n^{-1} \log(\gamma(G_X^n)) \ge n^{-1} \log \gamma(G_X^n(T_{Q_X}^n))
\]
implies that we may repeat the argument above to yield the achievable exponent
\[
\begin{cases} D(Q_{XY} || P_{XY}) + (R- H_Q(X|Y))^+ & \text{if } \log \gamma(G_X) \ge R \\
\infty & \text{otherwise}
\end{cases}
\]
Taking the maximum of both exponents gives the result.
\end{IEEEproof}
\subsection{Examples}
\begin{figure}
\centering
\scalebox{0.5}{\includegraphics*{eg2.pdf}}
\caption{Two example source distributions and their characteristic graphs}
\label{fig:egs}
\end{figure}
In this section we compute the exponent of Theorem \ref{thm:exponent} and compare it with the best previously known exponents. First we demonstrate a case in which the exponent of Theorem \ref{thm:exponent} achieves the sphere packing exponent.
When the side information is a deterministic function of the source, i.e. $Y=f(X)$, $\kappa$ property \ref{prop:hxcy} allows us to compute $\kappa$ explicitly, and the optimization forces the innermost optimization to yield $Q_{Y|X}=P_{Y|X}$, i.e. the `deterministic' side information. If we associate a $y$ with each fully connected subgraph in $G_X$, then we see that
\begin{align*}
\log \gamma(G_X) &= \max_{y \in \mathcal{Y}} \log \vert f^{-1}(y) \vert \\
&\ge \max_{y \in \mathcal{Y}} H(X|Y=y) \\
&\ge H(X|Y).
\end{align*}
From these observations it follows that the exponent reduces to
\[
e_{SP}(R,P_{XY}) = \inf_{Q_{XY}: H_Q(X|Y) \ge R} D(Q_{XY} || P_{XY}),
\]
the sphere packing exponent for this problem. Thus our scheme is optimal for all rates and the reliability function is determined for this problem.
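This reduction can be explored numerically. The following sketch grid-searches the sphere-packing expression for a hypothetical three-symbol source with deterministic side information; the distribution, the map $f$ and the rate below are made up purely for illustration:

```python
import math

def cond_ent_det(qx, f):
    # H_Q(X|Y) when Y = f(X): sum over y of Q(y) H(X | Y = y)
    h = 0.0
    for y in set(f):
        qy = sum(q for q, fy in zip(qx, f) if fy == y)
        if qy > 0:
            h -= sum(q * math.log2(q / qy)
                     for q, fy in zip(qx, f) if fy == y and q > 0)
    return h

def kl(q, p):
    # D(q || p) in bits
    return sum(qi * math.log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

p = (0.5, 0.25, 0.25)   # hypothetical source law P_X
f = (0, 0, 1)           # hypothetical deterministic side information Y = f(X)
R = 0.8
grid = [i / 100 for i in range(101)]
# e_SP(R) = inf { D(Q_X || P_X) : H_Q(X|Y) >= R }, searched on a coarse grid
e_sp = min((kl((a, b, 1 - a - b), p)
            for a in grid for b in grid
            if a + b <= 1 and cond_ent_det((a, b, 1 - a - b), f) >= R),
           default=math.inf)
assert 0 < e_sp <= 0.5
```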
For comparison with previous results we turn to \emph{Example A}\footnote{Please note the plot and discussion concerning Example A reported in a preliminary version of this work \cite{Kelly:2009p3712} were incorrect.} (see Fig. \ref{fig:egs}). In Figure \ref{fig:expplot2} we plot our exponent against $e_{CK}^*=\max(e_{CK},e_{CK,r})$ and $e_{OH}$, where $e_{CK}$ and $e_{CK,r}$ are the expurgated and random coding exponents of Csisz\'{a}r and K\"{o}rner \cite{Csiszar:1981p220}, and $e_{OH}$ is the exponent of Oohama and Han \cite{Oohama:1994p222}.
\begin{align*}
e_{CK} &= \inf_{Q_X} D(Q_X || P_X) \\
&\quad + \left [ \mathop{\inf_{Q_{\tilde X X}:H(\tilde X | X) \ge R}}_{Q_{\tilde X} = Q_{X}} \mathop {\mathbb{E} } [d_P(X, \tilde X)] + R - H(\tilde X | X) \right ]
\end{align*}
where
\[
d_P(x, \tilde x) = - \log \left ( \sum_{y} \sqrt{P(y|x)P(y|\tilde x)}\right ).
\]
and
\begin{align*}
e_{OH} &= \inf_{Q_{XY}: H(Q_X) \ge R} D(Q_{XY} || P_{XY}) + (R-H_Q(X|Y))^{+}.
\end{align*}
\begin{figure}
\begin{center}
\scalebox{0.7}{\includegraphics*{plot2.pdf}}
\end{center}
\caption{Comparing exponents for Example A of Figure \ref{fig:egs}. Our exponent coincides with $e_{CK}^*$ and both lie below the sphere packing exponent.}
\label{fig:expplot2}
\end{figure}
From Figure \ref{fig:expplot2} we see that our exponent lies below the sphere packing exponent and above the random coding exponent of Oohama and Han. When compared with $e_{CK}^*$, we see that our exponent agrees (numerically) and has the benefit of semi universality.
For \emph{Example B} (Fig. \ref{fig:egs}), it is clear that any rate in excess of one bit allows the decoder to determine the source sequence without error. The various error exponents are plotted in Fig. \ref{fig:expplotb}. For this example our exponent is infinite for all rates above 1 bit since $\log (\gamma(G_X))=1$. However $e_{CK}^*$ is finite for some rates above one bit, and therefore we beat $e_{CK}^*$. Below 1 bit, $e_{OH}$, $e_{CK}^*$ and our exponent appear to agree. The random coding exponent remains finite for all rates below $\log(3)$ bits.
\begin{figure}
\begin{center}
\scalebox{0.7}{\includegraphics*{plotb.pdf}}
\end{center}
\caption{Comparing exponents for Example B of Figure \ref{fig:egs}. Our exponent is infinite for all rates above 1 bit. $e_{CK}^*$ is finite for some rates above 1 bit.}
\label{fig:expplotb}
\end{figure}
\emph{Note 1:} Formally, the strongest results of \cite{Csiszar:1981p220} are obtained by using ML decoding in their equation (41), but the complexity of the optimization makes computation infeasible, even for these simple examples and even when exploiting convexity. However, in the particular case of our Example B, we note that if for some $R$ the exponent $e_{CK}$ is finite, then there exists a $Q_X$ for which
\[
\mathop{\inf_{Q_{\tilde X X}:H(\tilde X | X) \ge R}}_{Q_{\tilde X} = Q_{X}} \mathop {\mathbb{E} } [d_P(X, \tilde X)] + R - H(\tilde X | X) < \infty.
\]
Then according to \cite[Lemma 4]{Csiszar:1981p220}, the random variables in their set $\mathcal{P}(Q_{Y|X},\tilde {Q}_{Y|X}, Q, R)$, which give equality in their equation (28) would give rise to the exponent in their equation (16) being finite. As $e_{CK}$ is finite for some rates above 1 bit, their exponent (41) would be finite, thus at least for Example B, our exponent is strictly better than the previously known best exponent.
\emph{Note 2:} In general one also sees that our exponent is never worse than the Oohama and Han exponent: by $\kappa$ property \ref{prop:kapent}, nature is forced to optimize over a smaller set of distributions. Put another way, compared to the Oohama and Han exponent, we are able to `expurgate' more types.
\section{Improved Exponents for Wyner-Ziv}
\label{sect:coveringbinning}
When dealing with lossy reproduction it is often convenient to use `covering' (i.e. quantization) followed by binning, and in this section we describe how use of the characteristic graph can yield improved error exponents in such scenarios. We focus on lossy compression with side information, i.e. the Wyner-Ziv problem \cite{wynerziv}. Formally, the error exponent problem in this case is as follows.
Let $\mathcal{\hat X}$ be the reproduction alphabet and $d : \mathcal{X} \times \mathcal{\hat X} \to [0,\infty)$ a single-letter distortion measure. Define the distortion between two strings as $d(\mathbf{x}, \hat{\mathbf{x}}) = \frac{1}{n}\sum_{i=1}^n d(x_i,\hat x_i)$. The encoder/decoder pair are functions $f^n:\mathcal{X}^n \to \mathcal{M}$ and $g^n:\mathcal{M} \times \mathcal{Y}^n \to \mathcal{\hat X}^n$, where $\mathcal{M}$ is a fixed set.
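For concreteness, the per-letter string distortion just defined can be sketched as follows (the Hamming distortion and the strings are arbitrary illustrative choices):

```python
def avg_distortion(x, xhat, d):
    # d(x, xhat) = (1/n) * sum_i d(x_i, xhat_i)
    assert len(x) == len(xhat)
    return sum(d(a, b) for a, b in zip(x, xhat)) / len(x)

hamming = lambda a, b: 0 if a == b else 1
assert avg_distortion("0110", "0100", hamming) == 0.25  # one mismatch in four
```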
Let $\hat X^n =g^n(f^n(X^n),Y^n)$ be the decoder's output and define the error probability
\begin{equation}
P_e (f^n, g^n,\Delta, d) = \Pr\left (d (X^n, \hat X^n) > \Delta \right).
\end{equation}
We define the Wyner-Ziv error exponent to be
\begin{equation}
\pi(R,\Delta,P_{XY}, d)
= \lim_{\epsilon \downarrow 0}
\liminf_{n \to \infty} -\frac{1}{n} \log \left [ \min_{(f^n,
g^n)} P_e(f^n,g^n, \Delta, d) \right ]
\end{equation}
where the minimization ranges over all encoder/decoder pairs satisfying
\begin{equation}
\label{eqn:rateconstwz}
\log | \mathcal{M} | \leq n(R + \epsilon).
\end{equation}
Before we state the result we define another graph functional.
\begin{definition}
\label{eqn:kappa2}
\[
\kappa_2(P_{XY}, Q_{XYU}) = [\kappa(G_{U},Q_{U}) - H(Q_{U|X}|Q_X)]^+,
\]
where the graph $G_U$ is defined from the distribution
\[
Q_{UY}(u,y) = \sum_{x \in \mathcal{X}} P_{XY}(x,y)Q_{U|X}(u|x).
\]
Note: Since $P_{XY}$ will be fixed throughout, we will abbreviate to $\kappa_2(Q_{XYU})$ or even simply $\kappa_2(Q_X)$.
\end{definition}
Our first result in this section is Theorem \ref{thm:expwz}.
\begin{theorem}
\label{thm:expwz}
Let $P_{XY} \in \mathcal{P}(\mathcal{X} \times \mathcal{Y})$ and $R>0$, $\Delta > 0$, $d(\cdot,\cdot)$ be given. Then
\begin{align*}
\pi(R&,\Delta,P_{XY}, d) \geq \inf_{Q_X} \sup_{Q_{U|X}} \inf_{Q_Y} \sup_{\phi \in \mathcal{F}} \inf_{Q_{XYU}} \eta(R,P_{XY},Q_{XYU},\phi)
\end{align*}
where
\begin{align*}
\eta(R,P_{XY},Q_{XYU},\phi) &= \begin{cases}
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \\
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if } \mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta \\
\quad + [R-I_Q(X;U)+I_Q(Y;U) ]^+ & \quad \text{ and }\kappa_2(P_{XY},Q_{XYU}) \ge R \\
\infty & \text{ otherwise}
\end{cases} \\
\end{align*}
and $\mathcal{F} = \{ \phi \mid \phi: \mathcal{Y} \times \mathcal{U} \to \hat{\mathcal{X}} \}$. Note that in the final minimization over $Q_{XYU}$, the marginals $Q_{XU}$ and $Q_Y$ are fixed to be those specified earlier in the optimization.
\end{theorem}
\emph{Discussion of Result}
In \cite{kellywagnerscexponents}, the present authors determined an achievable exponent for the Wyner-Ziv problem, obtained by replacing $\eta$ in Theorem \ref{thm:expwz} with
\[
\eta_D(R,P_{XY},Q_{XYU},\phi) =
\begin{cases}
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \\
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if } \mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta \\
\quad + [R-I_Q(X;U)+I_Q(Y;U) ]^+ & \quad \text{ and } I(X;U) \ge R \\
\infty & \text{ otherwise,}
\end{cases}
\]
the difference being the conditions under which we switch from case 2 to case 3. Theorem \ref{thm:expwz} is obtained by modifying the scheme in \cite{kellywagnerscexponents} taking into account the graph-based expurgation established in the previous section. Recalling $\kappa$ property \ref{prop:kapent} we have the following inequality
\begin{align*}
\kappa_2(Q_{XYU}) &= [\kappa(G_U,Q_{U}) - H(U|X)]^+ \\
&\le [H(U) - H(U|X)]^+ \\
&=I(X;U)
\end{align*}
therefore, for any $R,P_{XY},\phi$ and $Q_{XYU}$, we see that $\eta_D(R,P_{XY},Q_{XYU},\phi) \le \eta(R,P_{XY},Q_{XYU},\phi)$, and the present modification yields an achievable exponent that is never worse than the result of \cite{kellywagnerscexponents}.
\subsection{Sketch of Scheme}
Operating at blocks of length $n$, for each type $Q_X$, a test channel $Q_{U|X}^*(Q_X)=Q_{U^*|X}$ is selected. The test channel is used to generate a codebook, $B^n(Q_X)$, of approximately $2^{nI(U^*;X)}$ codewords. The key insight is that the (random) graph $B^n(Q_X) \cap G_{U^*}^n$, constructed from
\[
Q_{U^*Y}(u,y) = \sum_{x \in \mathcal{X}} P_{XY}(x,y)Q_{U^*|X}(u|x)
\]
plays the same role in this problem as did the characteristic graph of the source $P_{XY}$ in the Slepian-Wolf problem.
In this modified scheme, the encoder first communicates the type of $X^n$ and then, if there is sufficient rate, i.e. $nR > \log \gamma(B^n(Q_X) \cap G_{U^*}^n)$, rather than communicating a bin index the encoder may send the color of the codeword in the graph $G_{U^*}$. If there is insufficient rate, then the encoder communicates a bin index of the codeword. For each pair of marginal types $(Q_X, Q_Y)$ the decoder chooses an estimation function $\phi$ and, depending on the case, decodes using either the graph coloring or a minimum empirical entropy decoder. The estimation function is then used to combine the side information and the codeword to yield the reproduction.
\subsection{Deterministic Side Information}
We now use the result of Theorem \ref{thm:expwz} to determine the reliability function when the side information is a deterministic function of the source, i.e. $Y=f(X)$ a.s. for a deterministic $f$. We first note that in this case the solution to the innermost optimization must be $Q_{Y|XU}=P_{Y|X}$, else the exponent is infinite. This reduces the problem to
\[
\inf_{Q_X} \sup_{Q_{U|X},\phi} \eta(R,P_{XY},Q_{XYU},\phi)
\]
where the distribution of $Q_{XYU}$ is $Q_X Q_{U|X} P_{Y|X}$, i.e. $U,X$ and $Y$ form a Markov chain in that order. We can massage the exponent $\inf_{Q_X} \sup_{Q_{U|X},\phi} \eta(R,P_{XY},Q_{XYU},\phi)$ as follows
\begin{align*}
&\inf_{Q_X} \sup_{Q_{U|X},\phi} \begin{cases}
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \\
D(Q_{XYU}||P_{XY}Q_{U|X}) + & \text{ if } \mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta \\
\quad [R-I_Q(X;U)+I_Q(Y;U) ]^+ & \text{ and }\kappa_2(Q_{XYU}) \ge R \\
\infty & \text{ otherwise}
\end{cases} \\
&\ge \inf_{Q_X} \sup_{Q_{U|X}:Y=\nu(U),\phi} \begin{cases}
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \\
D(Q_{XYU}||P_{XY}Q_{U|X}) + & \text{ if } \mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta \\
\quad [R-I_Q(X;U)+I_Q(Y;U) ]^+ & \text{ and } [H(U|Y) - H(U|X)]^+ \ge R \\
\infty & \text{ otherwise}
\end{cases} \\
\end{align*}
where the previous inequality follows because we maximize over a smaller set. The notation $Q_{U|X}:Y=\nu(U)$ means we consider only those test channels that result in $Y$ being a deterministic function $\nu$ of $U$. By construction $U, X$ and $Y$ still form a Markov chain in that order, thus $H(U|X) = H(U|XY)$ and we can continue the chain with
\begin{align*}
&= \inf_{Q_X} \sup_{Q_{U|X}:Y=\nu(U),\phi} \begin{cases}
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \\
D(Q_{XYU}||P_{XY}Q_{U|X}) + & \text{ if } \mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta \\
\quad [R-I_Q(X;U|Y) ]^+ & \text{ and } I(X;U|Y) \ge R \\
\infty & \text{ otherwise.}
\end{cases}
\end{align*}
Note now that the only difference between $Q_{XYU}$ and $P_{XY}Q_{U|X}$ occurs in $Q_X$, so it follows that the quantity above can be written as
\begin{align*}
&= \inf_{Q_X} \sup_{Q_{U|X}:Y=\nu(U),\phi} \begin{cases}
D(Q_{X}||P_X) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \text{ or } I(X;U|Y) \ge R\\
\infty & \text{ otherwise.}
\end{cases} \\
&= \inf_{Q_X} \sup_{Q_{U|X},\phi} \begin{cases}
D(Q_{X}||P_X) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \text{ or } I(X;U|Y) \ge R\\
\infty & \text{ otherwise}
\end{cases}
\end{align*}
To argue the final equality, let $Q_X$ and $R$ be fixed. The direction $\le$ is clear since we maximize over a larger set. For $\ge$, it suffices to show that if the optimization on the left side yields $D(Q_X||P_X)$ then so does the optimization on the right. On account of the fact that the objective is piecewise constant (over $Q_{U|X}$ and $\phi$), when the left side is finite there exist a $Q^*_{U|X}$ with $Y=\nu(U)$ and a $\phi$ for which the objective evaluates to $D(Q_X||P_X)$. Suppose by way of contradiction that there exists a non-deterministic $Q_{U|X}$ which yields an infinite exponent on the right. This means that
\[
I(X;U|Y) < R \text{ and } \mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta
\]
but then by Lemma $\ref{lem:detqux}$ (which follows) we can find a deterministic $Q_{\tilde U|X}$ and corresponding $\tilde \phi$ with the property that
\[
I(X;\tilde U|Y) < R \text{ and } \mathbb{E}_Q[d(X,\tilde \phi(Y,\tilde U))] < \Delta
\]
implying that $Q_{\tilde U|X}$ would yield an infinite exponent on the left as well, contradicting the assumption that the left side evaluates to $D(Q_X||P_X)$.
\begin{lemma}
\label{lem:detqux}
Let $Q_X$ be given and let $Y=f(X)$ with $P_{Y|X}$ denoting the induced conditional distribution. Then for any $Q_{U|X},\phi$, there exists a $Q_{\tilde U|X}$ and $\tilde \phi$ so that when $Q_{XYU}=Q_X Q_{U|X}P_{Y|X}$,
\begin{align*}
1) ~ \mathbb{E}_{Q_{XYU}}[d(X,\phi(Y,U))] &= \mathbb{E}_{Q_{XY\tilde U}}[d(X,\tilde \phi(Y,\tilde U))], \\
2) ~ I(X;U|Y) &= I(X;\tilde U | Y)
\end{align*}
and 3) $Y=\nu(\tilde U)$ for some deterministic function $\nu$.
\end{lemma}
\begin{IEEEproof}
Define $\tilde U = (U,Y)$ and $\tilde \phi(Y,\tilde U)=\phi(Y,U)$. Then clearly conditions 1 and 3 hold. To see condition 2 note by the chain rule
\[
I(X;\tilde U |Y) = I(X;U,Y|Y) = I(X;U|Y) + I(X;Y|Y,U) = I(X;U|Y).
\]
Finally we point out that since $Y=f(X)$ we also have $\tilde U \leftrightarrow X \leftrightarrow Y$.
\end{IEEEproof}
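The chain-rule identity used in the proof, $I(X;U,Y|Y)=I(X;U|Y)$, is easy to sanity-check numerically. The sketch below evaluates both sides for a hypothetical joint pmf over $(x,u,y)$ (the pmf is made up; only the identity matters):

```python
import math
from collections import defaultdict

def cond_mi(p, a_of, b_of, c_of):
    # I(A;B|C) = sum p(a,b,c) log2[ p(a,b,c) p(c) / (p(a,c) p(b,c)) ]
    def marg(f):
        m = defaultdict(float)
        for w, v in p.items():
            m[f(w)] += v
        return m
    pabc = marg(lambda w: (a_of(w), b_of(w), c_of(w)))
    pac = marg(lambda w: (a_of(w), c_of(w)))
    pbc = marg(lambda w: (b_of(w), c_of(w)))
    pc = marg(c_of)
    return sum(v * math.log2(v * pc[c] / (pac[(a, c)] * pbc[(b, c)]))
               for (a, b, c), v in pabc.items() if v > 0)

# hypothetical joint pmf over (x, u, y) triples
p = {(0, 0, 0): .2, (0, 1, 0): .1, (1, 0, 0): .15,
     (1, 1, 1): .25, (0, 1, 1): .1, (1, 0, 1): .2}
X = lambda w: w[0]
U = lambda w: w[1]
Y = lambda w: w[2]
Ut = lambda w: (w[1], w[2])   # augmented description: U-tilde = (U, Y)
assert abs(cond_mi(p, X, U, Y) - cond_mi(p, X, Ut, Y)) < 1e-12
```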
Rewriting this final optimization problem as
\begin{align*}
&\inf_{Q_X} \sup_{Q_{U|X},\phi} \begin{cases}
D(Q_{X}||P_X) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \text{ or } I(X;U|Y) \ge R\\
\infty & \text{ otherwise}
\end{cases} \\
&= \inf_{Q_X: R_{WZ}(\Delta,Q_X) \ge R} D(Q_X||P_X) \\
&\le \pi(R,\Delta,P_{XY},d)
\end{align*}
where $R_{WZ}(\Delta,Q_X)$ denotes the Wyner-Ziv rate distortion function for the source with $X \sim Q_X$ and $Y=f(X)$ with distortion measure $d$. But according to the change-of-measure argument of \cite[Theorem 4]{kellywagnerscexponents},
\[
\pi(R,\Delta,P_{XY},d) \le \inf_{Q_X: R_{WZ}(\Delta, Q_X) \ge R} D(Q_X||P_X).
\]
Thus our scheme is optimal in the sense that it meets the change-of-measure upper bound.
\section{Connection to Channel Coding}
\label{sect:channelcoding}
In this section we demonstrate that $\kappa$ has applications in zero-error channel coding problems. Let $G=G(W)$ be the characteristic graph of the channel $W$, and let $c(G)$ denote the zero-error capacity (see \cite[Section III]{Korner:1998p65} for definitions). The independence number of a graph, denoted $\alpha(G)$, is the maximum cardinality of a set of vertices of $G$ of which no two are adjacent. We recall that $c(G) \ge \log \alpha(G)$. According to \cite[p. 187, Prob.~18]{ckit}
\[
\log \alpha(G) =\max_P \mathop{\min_{P_X=P_{\tilde X}=P}}_{\mathop {\mathbb{E} } [d_W(X,\tilde X)] < \infty} I(X ; \tilde X).
\]
Expanding the mutual information gives
\begin{align*}
\log \alpha(G) &=\max_P \mathop{\min_{P_X=P_{\tilde X}=P}}_{\mathop {\mathbb{E} } [d_W(X,\tilde X)] < \infty} H(\tilde X) - H(\tilde X |X) \\
&= \max_P H(P) - \mathop{\max_{P_X=P_{\tilde X}=P}}_{\mathop {\mathbb{E} } [d_W(X,\tilde X)] < \infty} H(\tilde X |X).
\end{align*}
If $P \times V = P_{X\tilde X}$ then $\mathop {\mathbb{E} } [d_W(X,\tilde X)] < \infty$ is equivalent to $V \ll G$. To see this, note that $\mathop {\mathbb{E} } [d_W(X,\tilde X)] < \infty$ if for all $x, \tilde x$ such that $P_{X\tilde X}(x,\tilde x) > 0$, there is some $y$ for which $W(y|x)W(y|\tilde x) > 0$, i.e. $(x,\tilde x) \in E(G)$. Conversely, if $V \ll G$, then $V(x | \tilde x) > 0$ only when there is some $y$ for which $W(y|x)W(y|\tilde x) > 0$. Hence,
\[
\log \alpha(G) =\max_P H(P) - \kappa(G,P).
\]
Thus $\kappa$ provides a lower bound on the zero-error capacity of the channel $W$.
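For small graphs the independence number can be computed exactly, which makes the lower bound $c(G)\ge\log\alpha(G)$ easy to experiment with. A brute-force sketch (the 5-cycle, for which $\alpha = 2$, is an arbitrary test case):

```python
import itertools

def independence_number(vertices, edges):
    # size of the largest vertex set containing no edge (brute force search)
    verts = list(vertices)
    for r in range(len(verts), 0, -1):
        for s in itertools.combinations(verts, r):
            if not any((a, b) in edges or (b, a) in edges
                       for a, b in itertools.combinations(s, 2)):
                return r
    return 0

# the 5-cycle has independence number 2
edges = {(i, (i + 1) % 5) for i in range(5)}
assert independence_number(range(5), edges) == 2
```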
\appendices
\section{Proof of Theorem \ref{thm:expwz}}
The key to the proof is Lemma \ref{lem:expdecayoftypes}, a bound on the degree of the codebook graph which holds with exponentially high probability. With this fact established, we give a scheme for coding when the bound holds and declare an error when the bound does not.
\subsection{Codebook Construction}
Operating on blocks of length $n$, for each type $Q_X$ choose a test channel $Q_{U^*|X}=Q_{U|X}^*(Q_X)$ and let $Q_{U^*}=Q_U^*(Q_X)$ denote the resulting induced marginal type\footnote{For brevity we will use the following conventions: The random variable $U^*$ (resp. channel $Q_{U^*|X}$) refers to the random variable (resp. channel) defined by the choice of test channel for the particular $Q_X$ under consideration.}. The test channel is used to build a codebook $B^n(Q_X)$ as follows. For each $\mathbf{u} \in T_{Q_{U^*}}$, flip a coin with probability of heads $$p \triangleq \exp \Big (-n \Big[H(Q_{U^*|X}|Q_X)-3\frac{\vert \mathcal{U} \vert \vert \mathcal{X} \vert \log(n+1)}{n} \Big ] \Big ),$$ and add $\mathbf{u}$ to the codebook only if the coin comes up heads. Define the distribution
\[
Q_{UY}(u,y) = \sum_{x \in \mathcal{X}} P_{XY}(x,y)Q_{U^*|X}(u|x)
\]
and let $G_{U^*}$ be the resulting characteristic graph. The codeword for $\mathbf{x} \in T_{Q_X}$ is chosen as follows. If $\mathcal{G}(\mathbf{x}) \triangleq B^n(Q_X) \cap T_{Q_{U|X}^*}(\mathbf{x})$ is non-empty, choose uniformly from $\mathcal{G}(\mathbf{x})$. If $\mathcal{G}(\mathbf{x})$ is empty, choose uniformly from $B^n(Q_X)$. We let $U(\mathbf{x})$ denote the chosen codeword. For each codebook, we define a binning function $b_{Q_X}: B^n(Q_X) \to [1,\ldots,\exp(nR)]$ as follows: independently for each $\mathbf{u} \in B^n(Q_X)$,
\[
\Pr(b_{Q_X}(\mathbf{u}) = i) = \exp(-nR), \text{for all } i \in [1,\ldots,\exp(nR)].
\]
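The coin-flip codebook and the random binning above can be sketched directly; the parameters below are arbitrary placeholders for $|T_{Q_{U^*}}|$, $p$ and $\exp(nR)$:

```python
import random

def build_codebook(type_class, p, rng):
    # include each candidate sequence independently with probability p (coin flip)
    return [u for u in type_class if rng.random() < p]

def random_bins(codebook, num_bins, rng):
    # b_{Q_X}: a uniform, independent bin index for each codeword
    return {u: rng.randrange(num_bins) for u in codebook}

rng = random.Random(1)
cb = build_codebook(range(1000), 0.05, rng)   # ~50 codewords in expectation
bins = random_bins(cb, 16, rng)
assert all(0 <= b < 16 for b in bins.values())
```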
\subsection{Scheme}
In Lemmas \ref{lem:badtypes} and \ref{lem:expdecayoftypes} we establish that
\begin{align*}
\gamma(G_{U^*} \cap B^n(Q_X) ) &\le \Delta(G_{U^*} \cap B^n(Q_X)) + 1 \\
& \stackrel{\text{w.h.p.}}{\le} \exp(n[\kappa_2(Q_X) + \lambda_n + \tilde{\delta}_n]) + 1,
\end{align*}
for some $\lambda_n > 0$ and $\tilde{\delta}_n \to 0$ as $n \to \infty$, where ``w.h.p.''\ indicates that the inequality holds with probability tending to 1 as $n \to \infty$.
For types $Q_{X}$ in which the above bound fails to hold, we send an error message to the decoder. For types in which the bound holds, the scheme is as follows. To communicate the codeword to the decoder, the encoder may either give an index into the codeword set $B^n$ or using the ideas from the improved lossless binning scheme, it can color the graph $G_{U^*}^n \cap B^n(Q_X)$ using a minimal coloring and send the color of the codeword.
\emph{Encoder:}
The encoder first sends $k(Q_\mathbf{x})$, the index of the type $Q_\mathbf{x}$ of the source sequence. If $\exp(n[\kappa_2(Q_\mathbf{x})+\lambda_n + \tilde{\delta}_n]) +1 < \exp(nR)$, the encoder transmits the color of the codeword in the graph $G_{U^*} \cap B^n(Q_\mathbf{x})$. Otherwise it sends the bin index $b_{Q_\mathbf{x}}(U(\mathbf{x}))$. Formally, we denote the encoder by $f^n: \mathcal{X}^n \to \mathcal{M}$, where
\[
\mathcal{M} = [1,\ldots, (n+1)^{\vert \mathcal{X}\vert}] \times [1,\ldots,\exp(nR)].
\]
\emph{Decoder:}
The decoder receives a type index, a message and the side information $\mathbf{y}$. If $\exp(n[\kappa_2(Q_\mathbf{x})+\lambda_n+\tilde{\delta}_n]) +1 < \exp(nR)$ then the codeword can be decoded without error. In the opposite case, the decoder searches the received bin for the unique codeword $\hat {\mathbf{u}}$ satisfying $H(\hat{\mathbf{u}}|\mathbf{y}) < H(\tilde{ \mathbf{u}}|\mathbf{y})$ for all other $\tilde {\mathbf{u}}$ in the bin. If there is no such codeword, the decoder chooses $\hat{\mathbf{u}}$ uniformly at random from the received bin. For each pair of types $Q_X,Q_Y$, the decoder picks a reproduction function $\phi$, and declares the output as
\[
\hat{\mathbf{x}} \text{ where } \mathbf{\hat{x}}_j = \phi(\hat{\mathbf{u}}_j,\mathbf{y}_j).
\]
Thus the decoder $g^n: \mathcal{M} \times \mathcal{Y}^n \to \mathcal{\hat X}^n$ is specified.
\begin{lemma}
\label{lem:badtypes}
Let
\begin{align*}
\delta_n &= 3\frac{\vert \mathcal{U} \vert \vert \mathcal{X} \vert \log(n+1)}{n} \text{ and }
\tilde{\delta}_n=\frac{\vert \mathcal{U}\vert^2}{n}\log(n+1) \\
\kappa_2^{n}(Q_X) &= \kappa_2(Q_X) + \tilde{\delta}_n \text{ and } \\
\lambda_n &= \frac{2}{n} \log(n+1) + \delta_n.
\end{align*}
Then for all $n$ sufficiently large and for all types $Q_X$,
\begin{align*}
&\Pr(\Delta(G_{U^*}^n \cap B^n(Q_X)) > \exp(n[\kappa_2^{n}(Q_X) +\lambda_n ])) \\
&\le \exp_e(-(n+1)^2).
\end{align*}
Note the randomness in $\Delta(G_{U^*}^n \cap B^n(Q_X))$ comes from the fact that $B^n(Q_X)$ is a random set.
\end{lemma}
\begin{IEEEproof}
Let $K= 2^{n[\kappa_2^{n}(Q_X)+\lambda_n]}$, then
\begin{align*}
&\Pr(\Delta(G_{U^*}^n \cap B^n(Q_X)) > K) \\
&= \Pr (\exists \mathbf{u} \in T_{Q_{U^*}} : \mathbf{u} \in B^n(Q_X), \Delta(\mathbf{u}) > K) \\
&\le \sum_{\mathbf{u} \in T_{Q_U^*}} \Pr(\mathbf{u} \in B^n(Q_X))\Pr( \Delta(\mathbf{u}) \ge K |\mathbf{u} \in B^n(Q_X) ) \\
&\le \sum_{\mathbf{u} \in T_{Q_U^*}}\Pr( \Delta(\mathbf{u}) \ge K |\mathbf{u} \in B^n(Q_X) ).
\end{align*}
Let $N(\mathbf{u})$ denote the neighbors of $\mathbf{u}$ in the graph $G_{U^*}^n$; then the quantity in the previous line is upper bounded by
\begin{align*}
& \sum_{\mathbf{u} \in T_{Q_U^*}} \Pr \Big ( \sum_{\mathbf{v} \in N(\mathbf{u})} \mathbf{1}_{\{\mathbf{v} \in B^n\}}\ge K\Big ). %
%
\end{align*}
From the construction of the codebook, we know that for each string $\mathbf{v}$, $\mathbf{1}_{\{\mathbf{v} \in B^n\}}$ is Bernoulli with parameter $p$. Furthermore, by Lemma \ref{lem:degsg}, we know that $\vert N(\mathbf{u}) \vert \le \exp(n[\kappa(G_U,Q_U^*) + \tilde{\delta}_n]) \triangleq J(Q_X)$. Therefore, bounding the number of terms in the summation and letting $D_i$ be a sequence of i.i.d. Bernoulli($p$) random variables, we have
\begin{align*}
&\Pr(\Delta(G_{U^*}^n \cap B^n(Q_X)) > K) \\
&\le \vert T_{Q_{U^*}} \vert\Pr \Big (\sum_{i=1}^{J(Q_X)} D_i \ge K\Big ).
\end{align*}
Focusing on the probability, using the exponential form of Markov's inequality, one has for any $\theta > 0$
\begin{align}
\notag \Pr \Big (\sum_{i=1}^{J(Q_X)} D_i \ge K\Big ) &\le \frac{\exp_e( J(Q_X) \ln ( 1+ p(e^\theta -1)))}{\exp_e(\theta K)} \\
\notag &\le \frac{\exp_e( J(Q_X) p(e^\theta -1))}{\exp_e(\theta K)} \\
\notag &\le \frac{\exp_e( J(Q_X) pe^\theta )}{\exp_e(\theta K)} \\
\label{eqn:badtypesproof} &\le \exp_e(2^{n[\kappa_2(Q_{X})+\delta_n+\tilde{\delta}_n]+\theta \log e} -\theta 2^{n[\kappa_2(Q_{X})+\tilde{\delta}_n+\lambda_n]}).
\end{align}
Choosing $\theta=1$, we have
\begin{align*}
\Pr \Big (\sum_{i=1}^{J(Q_X)} D_i \ge K\Big ) &\le \exp_e (2^{n[\kappa_2(Q_{X})+\delta_n +\tilde{\delta}_n]}(2^{\log e} - (n+1)^2)).
\end{align*}
For $n\ge 1$, $(e -(n+1)^2) < -1$, hence
\begin{align*}
\Pr(\Delta(G_{U^*}^n \cap B^n(Q_X)) > K) &\le \vert T_{Q_{U^*}} \vert \exp_e(-2^{n[\kappa_2(Q_X) + \delta_n + \tilde{\delta}_n]}) \\
&\le \vert T_{Q_{U^*}} \vert\exp_e(-2^{n\delta_n}) \\
&\le \vert T_{Q_{U^*}} \vert \exp_e(-(n+1)^3),
\end{align*}
for all $n$ sufficiently large. Since $\vert T_{Q_{U^*}} \vert$ is only exponential in $n$, the result holds.
\end{IEEEproof}
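The exponential Markov (Chernoff) step in the proof can be sanity-checked by simulation. A sketch with arbitrary toy parameters (natural exponentials throughout, matching $\exp_e$):

```python
import math
import random

def chernoff_bound(J, p, K, theta):
    # P(sum_{i=1}^J D_i >= K) <= exp(J ln(1 + p(e^theta - 1)) - theta K),
    # for D_i i.i.d. Bernoulli(p) and any theta > 0
    return math.exp(J * math.log(1 + p * (math.exp(theta) - 1)) - theta * K)

rng = random.Random(0)
J, p, K, theta = 2000, 0.01, 60, 1.0      # hypothetical parameters
bound = chernoff_bound(J, p, K, theta)
trials = 2000
hits = sum(sum(rng.random() < p for _ in range(J)) >= K for _ in range(trials))
assert hits / trials <= bound + 1e-3      # empirical frequency respects the bound
```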
On account of the previous lemma, we have a bound, which holds with high probability, on the degree of $G_{U^*} \cap B^n(Q_X)$. For each $Q_{XYU}$, we define the event $F(Q_{XYU})$ as follows
\[
F(Q_{XYU}) \triangleq \{ \Delta(B^n(Q_X) \cap G_{U^*}^n) > \exp(n[\kappa_2^{n}(Q_X)+\lambda_n]) \}.
\]
\begin{lemma}
\label{lem:expdecayoftypes}
For all $n$ sufficiently large and any type $Q_{XYU}$
\[
\Pr(F(Q_{XYU})) \le \exp(-(n+1)^2).
\]
\end{lemma}
\begin{IEEEproof}
The result follows directly from Lemma \ref{lem:badtypes}.
\end{IEEEproof}
In the remainder of this appendix $\kappa_2^n$ and $\lambda_n$ will be defined as in the statement of Lemma \ref{lem:badtypes}.
\subsection{Error Analysis}
Let
\begin{align*}
\mathcal{E}_1 &= \{ (\mathbf{x},\mathbf{y},\mathbf{u}): \mathbf{u} \not \in T_{Q_{U|X}^*}(\mathbf{x}) \} \\
\mathcal{E}_2 &= \{ (\mathbf{x},\mathbf{y},\mathbf{u}): \mathbf{u} \in T_{Q_{U|X}^*}(\mathbf{x}), d(\mathbf{x},\phi_{Q_{\mathbf{x}},Q_{\mathbf{y}}}(\mathbf{u},\mathbf{y})) < \Delta, \\
&\quad \exp(n[\kappa_2^n(Q_\mathbf{x}) + \lambda_n]) + 1 \ge \exp(nR) \} \\
\mathcal{E}_3 &= \{ (\mathbf{x},\mathbf{y},\mathbf{u}): \mathbf{u} \in T_{Q_{U|X}^*}(\mathbf{x}), d(\mathbf{x},\phi_{Q_{\mathbf{x}},Q_{\mathbf{y}}}(\mathbf{u},\mathbf{y})) < \Delta, \\
&\quad \exp(n[\kappa^n_2(Q_\mathbf{x}) + \lambda_n]) + 1 < \exp(nR) \} \\
\mathcal{E}_4 &= \{ (\mathbf{x},\mathbf{y},\mathbf{u}): \mathbf{u} \in T_{Q_{U|X}^*}(\mathbf{x}), d(\mathbf{x},\phi_{Q_{\mathbf{x}},Q_{\mathbf{y}}}(\mathbf{u},\mathbf{y})) \ge \Delta \}
\end{align*}
and
\begin{align*}
\mathcal{D}_1 &= \{ Q_{XYU}: Q_{U|X} \neq Q_{U|X}^*(Q_X) \} \\
\mathcal{D}_2 &= \{ Q_{XYU}:\exp(n[\kappa^n_2(Q_X) + \lambda_n]) + 1 \ge \exp(nR), \\
&\quad Q_{U|X} =Q_{U|X}^*(Q_X), \mathbb{E}_Q[d(X,\phi_{Q_X,Q_Y}(U,Y))] < \Delta \} \\
\mathcal{D}_3 &= \{ Q_{XYU}:\exp(n[\kappa^n_2(Q_X) + \lambda_n]) + 1 < \exp(nR), \\
&\quad Q_{U|X} =Q_{U|X}^*(Q_X), \mathbb{E}_Q[d(X,\phi_{Q_X,Q_Y}(U,Y))] < \Delta \} \\
\mathcal{D}_4 &= \{ Q_{XYU}: Q_{U|X} =Q_{U|X}^*(Q_X), \mathbb{E}_Q[d(X,\phi_{Q_X,Q_Y}(U,Y))] \ge \Delta \}.
\end{align*}
The sets defined above and the following lemmas allow us to bound the error probability of our improved scheme.
\begin{lemma}
\label{lem:coveringerror}
Let $X^n, Y^n, U^n$ be generated according to our scheme. Then for all $n$ sufficiently large and all $(\mathbf{x},\mathbf{y},\mathbf{u}) \in \mathcal{E}_1$,
\[
\Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}, F^c(Q_{\mathbf{xyu}})) \le \exp(-(n+1)^2).
\]
\end{lemma}
\begin{IEEEproof}
\begin{align*}
&\Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}, F^c(Q_{\mathbf{xyu}})) \\ &= \Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}) \\
&\quad \times \Pr(F^c(Q_{\mathbf{xyu}})| X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}) \\
&\le \Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u})
\end{align*}
Let $A$ denote the event that there does not exist a $\mathbf{u} \in B^n(Q_\mathbf{x})$ such that $\mathbf{u} \in T_{Q_{U^*|X}}(\mathbf{x})$. For $(\mathbf{x},\mathbf{y},\mathbf{u}) \in \mathcal{E}_1$, the event $\{X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}\}$ implies that the event $A$ has occurred. Hence
\begin{align*}
&\Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}) \\
&= \Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}, A) \\
&\le \Pr(X^n=\mathbf{x}) \Pr(A | X^n=\mathbf{x}) \\
&\le \Pr(A | X^n=\mathbf{x}).
\end{align*}
Recall that $p$ is the probability with which each codeword is added to the codebook. We have
\begin{align*}
\Pr(A | X^n=\mathbf{x}) &= \Pr(\forall \mathbf{u} \in T_{Q_{U^*|X}}(\mathbf{x}): \mathbf{u} \not \in B^n(Q_\mathbf{x})) \\
&= (1-p)^{\vert T_{Q_{U^*|X}}(\mathbf{x}) \vert} \\
&\le \exp(-p\vert T_{Q_{U^*|X}}(\mathbf{x}) \vert).
\end{align*}
For $\mathbf{x} \in T_{Q_X}$ we have the lower bound
\[
|T^n_{Q_{U^*|X}}(\mathbf{x})| \geq (n+1)^{-|\mathcal{X}||\mathcal{U}|} \exp(nH(Q_{U^*|X}|Q_X)).
\]
Substituting this and the value of $p$, we get
\begin{align*}
\Pr(A | X^n=\mathbf{x}) &\le \exp \Big (-\exp \Big (n \Big [3\frac{\vert \mathcal{U} \vert \vert \mathcal{X} \vert}{n} \log(n+1) - \frac{\vert \mathcal{U} \vert \vert \mathcal{X} \vert}{n} \log(n+1) \Big ] \Big ) \Big) \\
&\le \exp(-(n+1)^2).
\end{align*}
\end{IEEEproof}
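The final display rests on two elementary facts: ${(1-p)^N \le e^{-pN}}$, and that with the stated choice of~$p$ the product $p\,\vert T_{Q_{U^*|X}}(\mathbf{x})\vert$ equals $(n+1)^{2\vert\mathcal{U}\vert\vert\mathcal{X}\vert} \ge (n+1)^2$. A numeric sanity check with toy values (the entropy value and alphabet sizes below are hypothetical, and the quantities are treated purely as formula values):

```python
import math

# Elementary inequality used in the last display: (1 - p)^N <= exp(-p N).
for p in (0.1, 0.01, 1e-5):
    for N in (10, 1000, 10**6):
        assert (1 - p) ** N <= math.exp(-p * N)

# Exponent arithmetic: with p = exp(-n[H - 3 (|U||X|/n) log(n+1)]) and
# |T| >= (n+1)^{-|U||X|} exp(nH), the product p|T| is (n+1)^{2|U||X|},
# which is at least (n+1)^2 whenever |U||X| >= 1 (the entropy cancels).
for n in (5, 50):
    for UX in (1, 2, 6):        # hypothetical value of |U| * |X|
        Hc = 0.7                # hypothetical conditional entropy (nats)
        p = math.exp(-n * (Hc - 3 * UX / n * math.log(n + 1)))
        T = (n + 1) ** (-UX) * math.exp(n * Hc)
        assert p * T >= (n + 1) ** 2 * (1 - 1e-9)
```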
\begin{lemma}
\label{lem:goodtypes}
Let $(\mathbf{x},\mathbf{y},\mathbf{u}) \in \mathcal{E}_1^c$. Then
\begin{align*}
&\Pr(X^n=\mathbf{x},Y^n=\mathbf{y},U^n=\mathbf{u},F^c(Q_{\mathbf{xyu}})) \\
&\le P_{XY}^n(\mathbf{x},\mathbf{y}) \exp(-n[H(Q_{U|X}^*(Q_\mathbf{x})|Q_\mathbf{x}) -\delta_n]),
\end{align*}
where
\[
\delta_n = 3\frac{\vert \mathcal{U} \vert \vert \mathcal{X} \vert}{n} \log(n+1).
\]
\end{lemma}
\begin{IEEEproof}
Proceeding as in the proof of Lemma \ref{lem:coveringerror}, we have
\begin{align*}
&\Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}, F^c(Q_{\mathbf{xyu}})) \\
&\le \Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}) \\
&= \Pr(X^n=\mathbf{x},Y^n=\mathbf{y}) \Pr(U^n=\mathbf{u} | X^n=\mathbf{x},Y^n=\mathbf{y}).
\end{align*}
Conditional on $\{X^n = \mathbf{x}\}$, the event $\{U^n = \mathbf{u} \}$ is equivalent to $\{\mathbf{u} \in B^n(Q_\mathbf{x}) \} \cap \{\mathbf{u}$ was chosen among all $\tilde {\mathbf{u}} \in B^n(Q_\mathbf{x})$ with $\tilde{\mathbf{u}} \in T_{Q_{U|X}^*}(\mathbf{x})\}$. Bounding the latter probability by 1, we have
\begin{align*}
&\Pr(X^n=\mathbf{x},Y^n=\mathbf{y}, U^n=\mathbf{u}, F^c(Q_{\mathbf{xyu}})) \\ &\le P_{XY}^n(\mathbf{x},\mathbf{y}) \exp(-n[H(Q_{U|X}^*(Q_\mathbf{x})|Q_\mathbf{x}) -3\frac{\vert \mathcal{U} \vert \vert \mathcal{X} \vert}{n} \log(n+1)]).
\end{align*}
\end{IEEEproof}
\begin{lemma}
\label{lem:typeprobabilty}
For any $Q_{XYU} \in \mathcal{D}_1^c$ and any $P_{XY}$
\begin{align*}
&\sum_{(\mathbf{x},\mathbf{y},\mathbf{u}) \in T_{Q_{XYU}}} \Pr(X^n=\mathbf{x},Y^n=\mathbf{y},U^n=\mathbf{u},F^c(Q_{XYU})) \\
&\le \exp(-n[D(Q_{XYU}||P_{XY}Q_{U|X}^*(Q_X))-\delta_n]),
\end{align*}
where $\delta_n$ is the same as in the statement of Lemma \ref{lem:goodtypes}.
\end{lemma}
\begin{IEEEproof}
Using the bound of Lemma \ref{lem:goodtypes} and the following identity for $(\mathbf{x},\mathbf{y}) \in T_{Q_{XY}}$,
\[
P^n_{XY}(\mathbf{x},\mathbf{y}) = \exp(-n[D(Q_{XY}||P_{XY})+H(Q_{XY})] ),
\]
we have
\begin{align}
\notag &\sum_{(\mathbf{x},\mathbf{y},\mathbf{u}) \in T_{Q_{XYU}}} \Pr(X^n=\mathbf{x},Y^n=\mathbf{y},U^n=\mathbf{u},F^c(Q_{XYU})) \\
\notag &\le \sum_{(\mathbf{x},\mathbf{y},\mathbf{u}) \in T_{Q_{XYU}}}\exp(-n[D(Q_{XY}||P_{XY}) + H(Q_{XY})\\
\notag &\quad + H(Q_{U|X}|Q_X) - \delta_n] ) \\
\notag &\le \exp(-n[D(Q_{XY}||P_{XY})-H(Q_{U|XY}|Q_{XY}) \\
\label{eqn:sumbound1} &\quad + H(Q_{U|X}|Q_X) - \delta_n] ).
\end{align}
Applying the identity
\begin{align*}
& D(Q_{XY}||P_{XY}) -H(Q_{U|XY}|Q_{XY}) + H(Q_{U|X}|Q_X) \\
&= D(Q_{XYU} || P_{XY}Q_{U|X})
\end{align*}
in \eqref{eqn:sumbound1} gives the result.
\end{IEEEproof}
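Both identities used in this proof, the exact i.i.d.\ probability of a sequence of a given type and the divergence chain rule, can be verified numerically. The distributions and sequence below are toy choices, not objects from the scheme:

```python
import itertools
import math
from collections import Counter

def entropy(q):  # entropy in nats
    return -sum(p * math.log(p) for p in q.values() if p > 0)

def kl(q, p):    # Kullback-Leibler divergence in nats
    return sum(q[a] * math.log(q[a] / p[a]) for a in q if q[a] > 0)

# 1) For x of type Q, the i.i.d. probability is exp(-n [D(Q||P) + H(Q)]).
P = {'a': 0.5, 'b': 0.3, 'c': 0.2}        # hypothetical source law
x = 'aabbbccccc'                           # its type Q is (0.2, 0.3, 0.5)
n = len(x)
Q = {a: c / n for a, c in Counter(x).items()}
prob = math.prod(P[s] for s in x)
assert math.isclose(prob, math.exp(-n * (kl(Q, P) + entropy(Q))))

# 2) Chain rule: D(Q_XY||P_XY) - H_Q(U|XY) + H_Q(U|X) = D(Q_XYU||P_XY Q_{U|X}),
#    checked on a small joint distribution over binary X, Y, U.
vals = [0.10, 0.05, 0.15, 0.10, 0.20, 0.05, 0.25, 0.10]
Qxyu = dict(zip(itertools.product((0, 1), repeat=3), vals))
Pxy = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

Qxy, Qx, Qxu = Counter(), Counter(), Counter()
for (a, b, u), q in Qxyu.items():
    Qxy[a, b] += q
    Qx[a] += q
    Qxu[a, u] += q

lhs = (sum(q * math.log(q / Pxy[k]) for k, q in Qxy.items())                 # D(Q_XY||P_XY)
       + sum(q * math.log(q / Qxy[a, b]) for (a, b, u), q in Qxyu.items())   # -H_Q(U|XY)
       - sum(q * math.log(Qxu[a, u] / Qx[a]) for (a, b, u), q in Qxyu.items()))  # +H_Q(U|X)
rhs = sum(q * math.log(q / (Pxy[a, b] * Qxu[a, u] / Qx[a]))
          for (a, b, u), q in Qxyu.items())
assert math.isclose(lhs, rhs)
```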
\begin{lemma}
\label{lem:wzbinerror}
For $n$ sufficiently large and $(\mathbf{x},\mathbf{y},\mathbf{u}) \in \mathcal{E}_2$
\begin{align*}
& \Pr( d(X^n, \hat X^n) > \Delta | X^n = \mathbf{x}, Y^n = \mathbf{y}, U^n = \mathbf{u}, F^c(Q_{\mathbf{xyu}})) \\
&\le \frac{\exp(-n[R - I_{Q_{\mathbf{xyu}}}(X; U) + I_{Q_{\mathbf{xyu}}}(U;Y) -\delta_n]^+)}{1-\exp_e(-(n+1)^2)},
\end{align*}
where $\delta_n \to 0$ as $n \to \infty$.
\end{lemma}
\begin{IEEEproof}
Let $L$ be the event that the decoder decodes the wrong codeword, i.e.
\begin{align*}
L &\triangleq \{ \exists \tilde {\mathbf{u}} \ne U(X^n): H(\tilde {\mathbf{u}}|\mathbf{y}) \le H(U(X^n)|\mathbf{y}), \tilde {\mathbf{u}} \in B^n(Q_{X^n}), \\
&\qquad b_{Q_{X^n}}(U(X^n)) = b_{Q_{X^n}}(\tilde {\mathbf{u}} ) \}
\end{align*}
and note that $\{d(X^n,\hat X^n) > \Delta \} \cap \mathcal{E}_2 \subseteq L$. We can bound the conditional probability of $L$ as follows
\begin{align*}
& \Pr(L | X^n = \mathbf{x}, Y^n = \mathbf{y}, U^n = \mathbf{u}, F^c(Q_{\mathbf{xyu}})) \\
&= \frac{\Pr(L, F^c(Q_{\mathbf{xyu}}) | X^n = \mathbf{x}, Y^n = \mathbf{y}, U^n = \mathbf{u} )}{\Pr(F^c(Q_{\mathbf{xyu}}) | X^n = \mathbf{x}, Y^n = \mathbf{y}, U^n = \mathbf{u} )} \\
&\le \frac{\Pr(L | X^n = \mathbf{x}, Y^n = \mathbf{y}, U^n = \mathbf{u} )}{\Pr(\Delta(B^n(Q_\mathbf{x}) \cap Q_{U^*}) \le e^{n[\kappa^n_2(Q_\mathbf{x})+\lambda_n]} )}.
\end{align*}
We now bound the numerator. Recalling the definition of $S(\mathbf{u}|\mathbf{y})$ from Lemma \ref{lem:empent} and invoking the union bound gives
\begin{align*}
& \Pr(L | X^n = \mathbf{x}, Y^n = \mathbf{y}, U^n = \mathbf{u} ) \\
&\le \sum_{\tilde{\mathbf{u}} \in S(\mathbf{u}|\mathbf{y})} \Pr(\tilde{\mathbf{u}} \in B^n(Q_\mathbf{x}), b_{Q_\mathbf{x}}(\mathbf{u})=b_{Q_\mathbf{x}}(\tilde{\mathbf{u}})),
\end{align*}
and substituting the various bounds gives
\[
\exp(-n[R - I_{Q_{\mathbf{xyu}}}(X; U) + I_{Q_{\mathbf{xyu}}}(U;Y) -\delta_n]^+),
\]
where $\delta_n = 4\frac{\vert \mathcal{U}\vert \vert \mathcal{X} \vert}{n}\log(n+1)$. As for the denominator, by Lemma \ref{lem:badtypes} the probability of the complementary event goes to zero super-exponentially as $n \to \infty$.
\end{IEEEproof}
\begin{lemma}
\label{lem:contwz}
Let $\delta_n,\tilde{\delta}_n,\tilde{\tilde{\delta}}_n,\tilde{\tilde{\tilde{\delta}}}_n$ be positive sequences converging to 0 as $n \to \infty$,
\begin{align*}
\eta^{n}(R,P_{XY},Q_{XYU}, \phi) &= \begin{cases}
D(Q_{XYU}||P_{XY}Q_{U|X}) - \delta_n & \text{if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \\
D(Q_{XYU}||P_{XY}Q_{U|X}) - \delta_n + [R & \text{if }\mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta \\
\quad -I_Q(X;U)+I_Q(Y;U) -\tilde{\delta}_n ]^+ -\tilde{\tilde{\delta}}_n & \text{ and }\kappa_2^{n}(Q_{X}) + \lambda_n \ge R - \tilde{\tilde{\tilde{\delta}}}_n \\
\infty & \text{otherwise,}
\end{cases} \\
\beta^n(R,\Delta,P_{XY},d) &= \min_{Q_X} \max_{Q_{U|X}} \min_{Q_Y} \max_{\phi} \min_{Q_{XYU}} \eta^{n}(R,P_{XY},Q_{XYU},\phi) \\
\eta(R,P_{XY},Q_{XYU},\phi) &= \begin{cases}
D(Q_{XYU}||P_{XY}Q_{U|X}) & \text{ if }\mathbb{E}_Q[d(X,\phi(Y,U))] \ge \Delta \\
D(Q_{XYU}||P_{XY}Q_{U|X}) + & \text{ if } \mathbb{E}_Q[d(X,\phi(Y,U))] < \Delta \\
\quad \{R-I_Q(X;U)+I_Q(Y;U) \}^+ & \text{ and }\kappa_2(Q_{X}) \ge R \\
\infty & \text{otherwise}
\end{cases} \\
\text{and } \beta(R,\Delta,P_{XY},d) &= \inf_{Q_X} \sup_{Q_{U|X}} \inf_{Q_Y} \sup_{\phi} \inf_{Q_{XYU}} \eta(R,P_{XY},Q_{XYU},\phi).
\end{align*}
Then
\[
\liminf_{n \to \infty} \beta^n(R,\Delta,P_{XY},d) \ge \beta(R,\Delta,P_{XY},d)
\]
(Note that in $\beta^n$ the maximizations are over types/conditional types, and in $\beta$ over distributions.)
\end{lemma}
\begin{IEEEproof}
One sees that $\kappa_2^n(Q_X) + \lambda_n = \kappa_2(Q_X) + o(1)$ and that $\kappa_2(Q_X)$ is upper semicontinuous in $Q_X$. With this established, the proof follows the analogous proof for the Wyner-Ziv error exponent in \cite{kellywagnerscexponents}.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem 2]
Define
\[
\mathcal{E} = \{ d(X^n,\hat X^n) > \Delta \},
\]
then for our scheme we have
\begin{align*}
P_e &= \sum_{\mathbf{x},\mathbf{y},\mathbf{u}} \Pr(\mathcal{E} | X^n = \mathbf{x},Y^n=\mathbf{y},U^n = \mathbf{u}, F(Q_{\mathbf{xyu}})) \\
&\quad \times \Pr(X^n = \mathbf{x},Y^n=\mathbf{y},U^n = \mathbf{u},F(Q_{\mathbf{xyu}})) \\
&+ \sum_{\mathbf{x},\mathbf{y},\mathbf{u}} \Pr(\mathcal{E} | X^n = \mathbf{x},Y^n=\mathbf{y},U^n = \mathbf{u}, F^c(Q_{\mathbf{xyu}})) \\
&\quad \times \Pr(X^n = \mathbf{x},Y^n=\mathbf{y},U^n = \mathbf{u},F^c(Q_{\mathbf{xyu}})).
\end{align*}
By definition, when $F$ occurs the encoder sends an error symbol, which we assume leads to a violation of the distortion constraint. Using this observation and rewriting the above equation, summing first over types and then over sequences, gives
\begin{align*}
P_e &\le \sum_{Q_{XYU}} \sum_{\mathbf{x},\mathbf{y},\mathbf{u} \in T_{Q_{XYU}}} \Big [ \Pr(\mathcal{E} | X^n = \mathbf{x},Y^n=\mathbf{y},U^n = \mathbf{u}, F^c(Q_{XYU})) \\
&\qquad \times \Pr(X^n = \mathbf{x},Y^n=\mathbf{y},U^n = \mathbf{u},F^c(Q_{XYU})) \Big ] \\
&\quad + \sum_{Q_{XYU}} \vert T_{Q_{XYU}}\vert \Pr(F(Q_{XYU})).
\end{align*}
Since $\Pr(F(Q_{XYU}))$ goes to zero super-exponentially for any choice of $Q_{XYU}$, and there are only exponentially many sequences and polynomially many types, the final summand can be safely ignored in the error-exponent calculation. We use $a \preceq b$ to mean that
\[
\limsup_{n \to \infty} \frac{1}{n} \log a \le \limsup_{n \to \infty} \frac{1}{n} \log b.
\]
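The relation $\preceq$ thus compares sequences at exponential scale, so sub-exponential (e.g.\ polynomial) prefactors are absorbed. A small numeric illustration with toy sequences, working with logarithms to avoid underflow:

```python
import math

def exp_order(log_a, ns):
    # crude estimate of limsup (1/n) log a(n) over the sampled range
    return max(log_a(n) / n for n in ns)

E = 2.0
ns = range(5000, 50001, 5000)
log_poly_exp = lambda n: 5 * math.log(n) - E * n   # a(n) = n^5 exp(-E n)
log_pure_exp = lambda n: -E * n                    # b(n) = exp(-E n)

# Both sequences have exponential order -E, so each "preceq" the other:
# the polynomial factor n^5 does not affect the exponent.
assert abs(exp_order(log_poly_exp, ns) + E) < 0.01
assert exp_order(log_pure_exp, ns) == -E
```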
Let $$P(\mathbf{x},\mathbf{y},\mathbf{u})=\Pr(X^n=\mathbf{x},Y^n=\mathbf{y},U^n=\mathbf{u},F^c(Q_\mathbf{xyu}))$$
and
$$
P(\mathcal{E}|\mathbf{x},\mathbf{y},\mathbf{u})=\Pr(\mathcal{E} | X^n = \mathbf{x},Y^n=\mathbf{y},U^n = \mathbf{u}, F^c(Q_{\mathbf{xyu}})).
$$
We now group the summation according to the sets outlined at the start of this section. This gives
\begin{align*}
P_e &\preceq \sum_{Q_X} \sum_{Q_Y} \Big [ \sum_{Q_{XYU} \in \mathcal{D}_1} \sum_{\mathbf{x},\mathbf{y},\mathbf{u} \in T_{Q_{XYU}}} P(\mathbf{x},\mathbf{y},\mathbf{u}) P(\mathcal{E}|\mathbf{x},\mathbf{y},\mathbf{u}) \\
&\quad + \sum_{Q_{XYU} \in \mathcal{D}_2} \sum_{\mathbf{x},\mathbf{y},\mathbf{u} \in T_{Q_{XYU}}} P(\mathbf{x},\mathbf{y},\mathbf{u}) P(\mathcal{E}|\mathbf{x},\mathbf{y},\mathbf{u}) \\
&\quad + \sum_{Q_{XYU} \in \mathcal{D}_3} \sum_{\mathbf{x},\mathbf{y},\mathbf{u} \in T_{Q_{XYU}}} P(\mathbf{x},\mathbf{y},\mathbf{u}) P(\mathcal{E}|\mathbf{x},\mathbf{y},\mathbf{u}) \\
&\quad + \sum_{Q_{XYU} \in \mathcal{D}_4} \sum_{\mathbf{x},\mathbf{y},\mathbf{u} \in T_{Q_{XYU}}} P(\mathbf{x},\mathbf{y},\mathbf{u}) P(\mathcal{E}|\mathbf{x},\mathbf{y},\mathbf{u}) \Big]
\end{align*}
where in the inner summations over $Q_{XYU}$ on the sets $\mathcal{D}_i$, the marginals $Q_X$ and $Q_Y$ are fixed to those set by the outer summations. On the set $\mathcal{D}_1$, Lemma \ref{lem:coveringerror} implies that the quantity $P(\mathbf{x},\mathbf{y},\mathbf{u})$ decays super-exponentially; since there are only polynomially many types and exponentially many sequences, this term can be safely ignored. On the set $\mathcal{D}_3$, conditional on the event $F^c(Q_{\mathbf{xyu}})$, the codeword is decoded correctly, and hence no error occurs. Using the results of Lemmas \ref{lem:typeprobabilty} and \ref{lem:wzbinerror}, we therefore have
\begin{align}
\notag P_e &\preceq \sum_{Q_X} \sum_{Q_Y} \Big [ \sum_{Q_{XYU} \in \mathcal{D}_2} \exp(-n[D(Q_{XYU}||P_{XY}Q_{U|X})-\delta_n \\
\notag &\qquad + [R-I_Q(X;U)+I_Q(Y;U)-\tilde{\delta}_n]^+ -\tilde{\tilde{\delta}}_n])\\
\notag&\quad + \sum_{Q_{XYU} \in \mathcal{D}_4} \exp(-n[D(Q_{XYU}||P_{XY}Q_{U|X})-\delta_n]) \Big ]
\end{align}
where $\tilde{\tilde{\delta}}_n=-\frac{1}{n}\log(1-\exp_e(-(n+1)^2))$. Bounding the summands by their maximum value gives
\begin{align}
\notag P_e &\preceq \vert \mathcal{P}^n(\mathcal{X}) \vert \max_{Q_X} \vert \mathcal{P}^n(\mathcal{Y}) \vert \max_{Q_Y} \vert \mathcal{P}^n(\mathcal{X} \times \mathcal{Y} \times \mathcal{U}) \vert \\
\notag &\quad \times \Big [ \max_{Q_{XYU} \in \mathcal{D}_2} \exp(-n[D(Q_{XYU}||P_{XY}Q_{U|X})-\delta_n \\
\notag &\qquad +[R-I_Q(X;U)+I_Q(Y;U) - \tilde{\delta}_n]^+ -\tilde{\tilde{\delta}}_n])\\
\label{eqn:mainthm1ps1}&\quad + \max_{Q_{XYU} \in \mathcal{D}_4} \exp(-n[D(Q_{XYU}||P_{XY}Q_{U|X})-\delta_n]) \Big ]
\end{align}
Let $$\tilde{\tilde{\tilde{\delta}}}_n(Q_X)=\frac{1}{n}\log(\exp(n[\kappa_2^{n}(Q_X)+\lambda_n])+1)-(\kappa_2^{n}(Q_X)+\lambda_n)$$
and let $\tilde{\tilde{\tilde{\delta}}}_n$ be the maximum over $Q_X \in \mathcal{P}^n(\mathcal{X})$ of $\tilde{\tilde{\tilde{\delta}}}_n(Q_X)$; it follows that $\tilde{\tilde{\tilde{\delta}}}_n \to 0$. Adopting the definitions from the statement of Lemma \ref{lem:contwz} and using $a+b \le 2\max(a,b)$ to combine the two sums of \eqref{eqn:mainthm1ps1} gives
\begin{align*}
\notag P_e &\preceq 2\vert \mathcal{P}^n(\mathcal{X}) \vert \vert\mathcal{P}^n(\mathcal{Y}) \vert \vert \mathcal{P}^n(\mathcal{X} \times \mathcal{Y} \times \mathcal{U}) \vert \\
&\quad \times \max_{Q_X} \max_{Q_Y} \max_{Q_{XYU}:Q_{U|X}=Q_{U|X}^*(Q_X)} \exp(-n[\eta^{n}(R,P_{XY}, Q_{XYU},\phi)])
\end{align*}
Finally, we can optimize over $Q_{U|X}^*$ and $\phi$, and move the optimizations in the exponent to give
\begin{align*}
\notag P_e &\preceq 2\vert \mathcal{P}^n(\mathcal{X})\vert \vert\mathcal{P}^n(\mathcal{Y}) \vert \vert \mathcal{P}^n(\mathcal{X} \times \mathcal{Y} \times \mathcal{U}) \vert \\
&\quad \times \exp(-n[\min_{Q_X} \max_{Q_{U|X}} \min_{Q_Y} \max_{\phi} \min_{Q_{XYU}}\eta^{n}(R,P_{XY},Q_{XYU},\phi)]).
\end{align*}
Taking the logarithm, dividing by $-n$, taking the $\liminf_{n \to \infty}$ of both sides, and invoking Lemma \ref{lem:contwz} on the right-hand side gives the result.
\end{IEEEproof}
\bibliographystyle{IEEEtran}
% End of ``Improved Source Coding Exponents via Witsenhausen's Rate'' (arXiv:1001.3885, cs.IT, 2010).
% The Skolem--Abouzaid theorem in the singular case (arXiv:1406.3233)
\begin{abstract}
Let ${F(X,Y)\in\Q[X,Y]}$ be a $\Q$-irreducible polynomial. In 1929 Skolem proved the following theorem: ``Assume that ${F(0,0)=0}$. Then for every non-zero integer~$d$, the equation ${F(X,Y)=0}$ has only finitely many solutions in integers $(X,Y)$ with ${\gcd(X,Y)=d}$.'' Skolem's method allows one to bound the solutions explicitly in terms of the coefficients of the polynomial~$F$ and the integer~$d$. In 2008, Abouzaid gave a far-going generalization of Skolem's theorem. He extended it in two directions: first, he studied solutions not only in rational integers, but in arbitrary algebraic numbers. Second, he not only bounded the solutions in terms of the logarithmic gcd, but obtained a sort of asymptotic relation between the heights of the coordinates and their logarithmic gcd. Unfortunately, Abouzaid's assumption is slightly more restrictive than Skolem's: he assumes not only that the point $(0,0)$ belongs to the plane curve ${F(X,Y)=0}$, but that $(0,0)$ is a non-singular point on this curve. The purpose of the present article is to get rid of this non-singularity hypothesis.
\end{abstract}
\section{Introduction}
Let ${F(X,Y)\in\Q[X,Y]}$ be a $\Q$-irreducible polynomial. In 1929 Skolem~\cite{Sk29} proved the following beautiful theorem:
\begin{theorem}[Skolem]
\label{tskol}
Assume that
\begin{equation}
\label{e00}
F(0,0)=0.
\end{equation}
Then for every non-zero integer~$d$, the equation ${F(X,Y)=0}$ has only finitely many solutions in integers ${(X,Y)\in \Z^2}$ with ${\gcd(X,Y)=d}$.
\end{theorem}
In the same year, Siegel obtained his celebrated finiteness theorem for integral solutions of Diophantine equations: the equation ${F(X,Y)=0}$ has finitely many solutions in integers unless the corresponding plane curve is of genus~$0$ and has at most~$2$ points at infinity. While Siegel's result is, certainly, deeper and more powerful than Theorem~\ref{tskol}, the latter has one important advantage. Siegel's theorem is known to be non-effective: it does not give any bound for the size of integral solutions. On the contrary, Skolem's method allows one to bound the solutions explicitly in terms of the coefficients of the polynomial~$F$ and the integer~$d$. Indeed, such a bound was obtained by Walsh~\cite{Walsh}; see also~\cite{Poulakis}.
In 2008, Abouzaid~\cite{Ab08} gave a far-going generalization of Skolem's theorem. He extended it in two directions.
First, he studied solutions not only in rational integers, but in arbitrary algebraic numbers. To accomplish this, he introduced the notion of \textsl{logarithmic gcd} of two algebraic numbers~$\alpha$ and~$\beta$, which coincides with the logarithm of the usual gcd when ${\alpha, \beta\in \Z}$.
Second, he not only bounded the solution in terms of the logarithmic gcd, but obtained a sort of asymptotic relation between the heights of the coordinates and their logarithmic gcd.
Let us state Abouzaid's principal result (see~\cite[Theorem~1.3]{Ab08}). In the sequel we assume that ${F(X,Y) \in \bar\Q[X,Y]}$ is an absolutely irreducible polynomial, and use the notation
\begin{equation}
\label{emn}
m=\deg_XF, \quad n=\deg_YF, \quad M = \max\{m, n\}.
\end{equation}
We denote by $\height(\alpha)$ the absolute logarithmic height of ${\alpha\in \bar\Q}$ and by $\lgcd(\alpha,\beta)$ the logarithmic gcd of $\alpha,\beta\in \Q$. We also denote by $\height_p(F)$ the projective height of the polynomial~$F$. For all definitions, see Subsection~\ref{ssprem}.
\begin{theorem}[Abouza\"id]
Assume that $(0, 0)$ is a non-singular point of the plane curve $F(X,Y) = 0$. Let~$\varepsilon$ satisfy ${0 < \varepsilon <1}$. Then for any solution ${(\alpha,\beta)\in \bar\Q^2}$ of ${F(X,Y) = 0}$, we have either
$$
\max\{\height(\alpha), \height(\beta)\} \le 56M^8\varepsilon^{-2}\height_p(F) + 420M^{10}\varepsilon^{-2} \log(4M),
$$
or
\begin{align*}
\max\{|\height(\alpha) - n\lgcd(\alpha,\beta)|, |\height(\beta) - m\lgcd(\alpha,\beta)|\} \le\ & \varepsilon \max\{\height(\alpha), \height(\beta)\} + 742M^7\varepsilon^{-1}\height_p(F)\\& + 5762M^9\varepsilon^{-1} \log(2m + 2n).
\end{align*}
\end{theorem}
Informally speaking,
\begin{equation}
\label{easymp}
\frac{\height(\alpha)}n\sim\frac{\height(\beta)}m\sim\lgcd(\alpha,\beta)
\end{equation}
as ${\max\{\height(\alpha), \height(\beta)\}\to \infty}$.
Unfortunately, Abouzaid's assumption is slightly more restrictive than Skolem's~\eqref{e00}: he assumes not only that the point $(0,0)$ belongs to the plane curve ${F(X,Y)=0}$, but also that $(0,0)$ is a non-singular point on this curve.
Denote by~$r$ the ``order of vanishing'' of $F(X,Y)$ at the point $(0,0)$:
\begin{equation}
\label{eordvan}
r=\min\left\{i+j: \frac{\partial^{i+j}F}{\partial^iX\partial^jY}(0,0)\ne0\right\}.
\end{equation}
Clearly, ${r>0}$ if and only if ${F(0,0)=0}$, and ${r=1}$ if and only if $(0,0)$ is a non-singular point of the plane curve ${F(X,Y)=0}$.
We can now state our principal result.
\begin{theorem}
\label{MainResult}
Let ${F(X,Y)\in \bar\Q[X,Y]}$ be an absolutely irreducible polynomial satisfying $F(0,0)=0$.
Let~$\varepsilon$ satisfy ${0 < \varepsilon <1}$.
Then, for any $\alpha, \beta \in \QQ$ such that $F(\alpha, \beta)=0$, we have either:
\begin{equation*}
\height (\alpha) \leq 200\varepsilon^{-2}mn^6(\height_p(F)+5)
\end{equation*}
or
\begin{equation*}
\left| \frac{\lgcd (\alpha, \beta)}{r} - \frac{\height(\alpha)}{n} \right| \leq
\frac{1}{r} \left( \varepsilon\height(\alpha)+4000\varepsilon^{-1} n^4(\height_p(F)+\log(mn) + 1) +30n^2m(\height_p (F) + \log (nm)) \right).
\end{equation*}
\end{theorem}
By symmetry, the same kind of bound holds true for the difference ${ \frac{\lgcd (\alpha, \beta)}{r} - \frac{\height(\beta)}{m} }$. Informally speaking,
\begin{equation}
\label{einformal}
\frac{\height(\alpha)}n\sim\frac{\height(\beta)}m\sim\frac{\lgcd(\alpha,\beta)}r
\end{equation}
as ${\max\{\height(\alpha), \height(\beta)\}\to \infty}$.
Validity of~\eqref{einformal} was conjectured by Abouzaid, see the end of Section~1 in~\cite{Ab08}\footnote{Abouzaid's definition of~$r$ looks different, but it can be easily shown that it is equivalent to ours.}.
The referee pointed us to an unpublished work of Habegger~\cite{Hab} from 2007, where he confirms Abouzaid's conjecture; moreover, his bounds are sharper than ours. We would like to remark that Habegger's method is quite different and uses his sharp quantitative version of the quasi-equivalence of heights. On the contrary, our paper follows closely the methods of~\cite{Ab08} wherever possible; in particular, like in~\cite{Ab08}, our main tool is Puiseux expansions.
\paragraph{Plan of the article}
Section~\ref{sheights} and~\ref{sseries} are preliminary: we compile therein some definitions and results from different sources, which will be used in the article. In Section~\ref{sspr} we establish the ``Main Lemma'', which is the heart of the proof of Theorem~\ref{MainResult}. In Section~\ref{sproof} we complete the proof of Theorem~\ref{MainResult} using the ``Main Lemma''.
\paragraph{Acknowledgments}
I am grateful to Yuri Bilu for drawing my attention to this problem and for a stimulating exchange on this topic. I am also thankful to the referee for her/his helpful suggestions and for pointing out the unpublished result of Philipp Habegger.
\section{Heights}
\label{sheights}
In this section we recall definitions and collect various results about absolute values and heights.
We normalize the absolute values on number fields so that they extend standard absolute values on~$\Q$: if ${v\mid p}$ (non-Archimedean) then ${|p|_v=p^{-1}}$ and if ${v\mid \infty}$ (Archimedean) then ${|2015|_v=2015}$.
\subsection{Heights and lgcd of algebraic numbers}
\label{ssprem}
Let~$\K$ be a number field, $d = [\K:\Q]$ and $d_v = [\K_v:\Q_v]$. The \textsl{height} of an algebraic number~$\alpha \in \K$ is defined as
$$
\height(\alpha) = \frac1{d}\sum_{v\in M_\K}d_v\log^+|\alpha|_v,
$$
where $M_\K$ is the set of places (normalized absolute values) of the number field~$\K$ and $\log^+=\max\{\log,0\}$. It is well-known that the height does not depend on the particular choice of~$\K$, but only on the number~$\alpha$ itself. It is equally well-known that
${\height(\alpha)=\height(\alpha^{-1})}$, so that
$$
\height(\alpha) = \frac1{d}\sum_{v\in M_\K}-d_v\log^-|\alpha|_v = \sum_{v\in M_\K}\height_v(\alpha),
$$
where ${\log^-=\min\{\log,0\}}$ and
$$
\height_v(\alpha)=-\frac{d_v}{d}\log^-|\alpha|_v.
$$
The quantities $\height_v(\alpha)$ can be viewed as ``local heights''. Clearly, ${\height_v(\alpha)\ge 0}$ for any~$v$ and~$\alpha$.
We define the \textsl{logarithmic gcd} of two algebraic numbers~$\alpha$ and~$\beta$, not both~$0$, as
$$
\lgcd (\alpha,\beta) =\sum_{v\in M_\K}\min\{\height_v(\alpha), \height_v(\beta)\},
$$
where~$\K$ is a number field containing both~$\alpha$ and~$\beta$. It again depends only on~$\alpha$ and~$\beta$, not on~$\K$. A simple verification shows that for ${\alpha, \beta \in \Z}$ we have
${\lgcd(\alpha,\beta)=\log\gcd(\alpha,\beta)}$.
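This identity is easy to check numerically from the local heights: at a finite place~$p$ of~$\Q$ the local height of a nonzero integer~$\alpha$ is $v_p(\alpha)\log p$, while the archimedean local height of a nonzero integer vanishes, so summing $\min\{v_p(\alpha),v_p(\beta)\}\log p$ over primes recovers $\log\gcd(\alpha,\beta)$. The helper below is illustrative only, not part of the paper's machinery:

```python
import math

def vp(a, p):
    """p-adic valuation of a nonzero integer."""
    k = 0
    while a % p == 0:
        a //= p
        k += 1
    return k

def lgcd_int(a, b):
    """lgcd of two nonzero integers computed from local heights over Q:
    the finite place p contributes min(v_p(a), v_p(b)) log p, and the
    archimedean local heights of nonzero integers are zero."""
    primes = (p for p in range(2, max(abs(a), abs(b)) + 1)
              if all(p % q for q in range(2, int(p ** 0.5) + 1)))
    return sum(min(vp(a, p), vp(b, p)) * math.log(p) for p in primes)

for a, b in [(12, 18), (100, 75), (7, 13), (2**10, 2**6 * 3)]:
    assert math.isclose(lgcd_int(a, b), math.log(math.gcd(a, b)))
```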
Now let~$\K$ be a number field and~$S$ be a set of places of~$\K$. We define the \textsl{$S$-height} by
$$
\height_S(\alpha) = \sum_{v\in S}\height_v(\alpha).
$$
Similarly we define $\lgcd_S$. We shall frequently use the inequality $\lgcd_S (\alpha, \beta) \leq \height_S (\alpha) \leq \height(\alpha)$ without special reference.
\subsection{Affine and projective heights of polynomials}
We define the projective and the affine height of a vector $\underline{a}=(a_1, \ldots, a_m) \in \QQ^m$ with algebraic entries, by
\begin{align*}
\height _p (\underline{a}) &= \frac{1}{d}\sum_{v \in M_{\K}} d_v \log \max_{1 \leq k \leq m} |a_k|_v\qquad (\underline{a} \neq \underline{0}),\\
\height _a (\underline{a}) &= \frac{1}{d}\sum_{v \in M_{\K}} d_v \log^+ \max_{1 \leq k \leq m} |a_k|_v.
\end{align*}
Here, $\K$ is a number field containing $a_1, \ldots, a_m$, and $d$, $d_v$ are defined as in the previous subsection. We notice that the height of an algebraic number defined in the previous subsection corresponds to the affine height of a one-dimensional vector.
We define the projective and affine height of a polynomial as the corresponding heights of the vector of its non-zero coefficients. If $F$ is a non-zero polynomial, then, for ${\alpha \in \QQ^{*}}$ we have ${\height_p (\alpha F) = \height_p(F)}$. Also, ${\height_p ( F) \leq \height_a(F)}$, with ${\height_p ( F) = \height_a(F)}$ if $F$ has a coefficient equal to $1$.
In \cite[Lemma 4]{Schmidt2}, Schmidt proves the following lemma:
\begin{lemma}
\label{lSchmidt2}
Let $F(X,Y) \in \QQ[X,Y]$ be a polynomial with algebraic coefficients, such that $m = \deg_X F$ and $n = \deg_Y F$. Let $R_F(X) = \text{Res}_Y (F, F_Y^{'})$ be the resultant of $F$ and its derivative $F_Y^{'}$ with respect to $Y$. Then:
\begin{equation}
\height_p(R_F) \leq (2n-1)\height_p (F) + (2n-1) \log ((m+1)(n+1)\sqrt{n}).
\end{equation}
\end{lemma}
It is well-known that the height of a root of a polynomial is bounded in terms of the height of the polynomial itself. The following lemma can be found in
\cite[Proposition 3.6]{Yuri2}:
\begin{lemma}
\label{Yuri2}
Let $F(X)$ be a polynomial of degree $m$ with algebraic coefficients, and let $\alpha$ be a root of $F$. Then $\height (\alpha) \leq \height_p(F) + \log 2$.
\end{lemma}
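For rational roots this bound can be checked directly: if $F$ has coprime integer coefficients then $\height_p(F)$ is the logarithm of the largest absolute coefficient (all finite places contribute zero), and ${\height(p/q)=\log\max\{|p|,q\}}$ for $p/q$ in lowest terms. The helper functions and examples below are illustrative toys:

```python
import math
from fractions import Fraction

def height_rational(r):
    """Absolute logarithmic height of a nonzero rational:
    log max(|numerator|, denominator) in lowest terms."""
    r = Fraction(r)
    return math.log(max(abs(r.numerator), r.denominator))

def hp_int_poly(coeffs):
    """Projective height of a polynomial with coprime integer
    coefficients: the log of the largest absolute coefficient."""
    assert math.gcd(*[abs(c) for c in coeffs]) == 1
    return math.log(max(abs(c) for c in coeffs))

# F(X) = q X - p has the single root p/q (hypothetical examples).
for p, q in [(2, 3), (-7, 5), (1, 1), (9, 14)]:
    root = Fraction(p, q)
    assert height_rational(root) <= hp_int_poly([q, -p]) + math.log(2) + 1e-12
```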
We want to generalize this to a system of two algebraic equations in two variables.
\begin{lemma}
\label{lsys}
Let $F_1(X,Y)$ and $F_2(X,Y)$ be polynomials with algebraic coefficients, having no common factor. Put:
\begin{equation*}
m_i=\deg_X F_i,\ n_i = \deg_Y F_i \qquad (i=1,2).
\end{equation*}
Let $\alpha,\beta$ be algebraic numbers satisfying ${F_1(\alpha,\beta)=F_2(\alpha,\beta)=0}$.
Then
\begin{equation*}
\height (\alpha) \leq n_1 \height_p (F_2) + n_2 \height_p (F_1) + (m_1 n_2 + m_2 n_1) + (n_1 + n_2) \log (n_1 + n_2)+\log2.
\end{equation*}
\end{lemma}
\paragraph{Proof}
Since~$F_1$ and~$F_2$ have no common factor, their $Y$-resultant $R(X)$ is a non-zero polynomial, and ${R(\alpha)=0}$. \cite[Proposition 2.4]{Ab08} gives the estimate
\begin{equation*}
\height_p (R) \leq n_1 \height_p (F_2) + n_2 \height_p (F_1) + (m_1 n_2 + m_2 n_1) + (n_1 + n_2) \log (n_1 + n_2).
\end{equation*}
Combining this with Lemma~\ref{Yuri2}, the result follows. \qed
\bigskip
We will also use \cite[Proposition 2.5]{Ab08}:
\begin{lemma}
\label{lmaxG}
Let $F(X,Y) \in \QQ[X,Y]$ be a polynomial with $m = \deg_X F$ and $n = \deg_Y F$ and let $\alpha, \beta$ be two algebraic numbers. Then
\begin{enumerate}
\item
\label{ifab}
We have $\height (F(\alpha, \beta)) \leq \height_a (F) + m\height(\alpha) +n\height(\beta) + \log((m+1)(n+1))$.
\item
\label{iba}
If $F(\alpha, \beta) = 0$ with $F(\alpha, Y)$ not vanishing identically, then:
\begin{equation*}
\height(\beta) \leq \height_p(F) + m\height(\alpha) +n +\log (m+1).
\end{equation*}
\end{enumerate}
\end{lemma}
\subsection{Coefficients versus roots}
In this subsection we establish some simple relations between coefficients and roots of a polynomial over a field with absolute value, needed in the proof of our main result. It will be convenient to use the notion of \textsl{$v$-Mahler measure} of a polynomial.
Let~$\K$ be a field with absolute value~$v$ and ${f(X)\in \K[X]}$ a polynomial of degree~$n$. Let ${\beta_1, \ldots, \beta_n\in \bar \K}$ be the roots of $f$:
$$
f(X)=a_nX^n+a_{n-1}X^{n-1}+\ldots+a_0=a_n(X-\beta_1)\ldots(X-\beta_n).
$$
Define the $v$-Mahler measure of~$f$ by
$$
M_v(f)=|a_n|_v\prod_{i=1}^n\max\{1,|\beta_i|_v\},
$$
where we extend~$v$ somehow to~$\bar \K$. (Clearly, $M_v(f)$ does not depend on the particular extension of~$v$.)
It is well-known that ${|f|_v=M_v(f)}$ for non-archimedean~$v$ (``Gauss lemma'') and ${M_v(f)\le (n+1)|f|_v}$ for archimedean~$v$ (Mahler).
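Both facts are easy to check numerically on a polynomial whose roots are known exactly; the quadratic below is a hypothetical example, with $p$-adic absolute values computed via valuations:

```python
import math
from fractions import Fraction

def vp(r, p):
    """p-adic valuation of a nonzero rational."""
    n, d, k = r.numerator, r.denominator, 0
    while n % p == 0:
        n //= p
        k += 1
    while d % p == 0:
        d //= p
        k -= 1
    return k

def abs_p(r, p):
    """p-adic absolute value of a nonzero rational."""
    return float(p) ** (-vp(r, p))

# f(X) = 2 X^2 - 5 X - 3 = 2 (X - 3)(X + 1/2), so n = 2.
coeffs = [Fraction(-3), Fraction(-5), Fraction(2)]   # a_0, a_1, a_2
roots = [Fraction(3), Fraction(-1, 2)]
lead = coeffs[-1]

# Gauss' lemma: M_p(f) = |f|_p at every finite place p.
for p in (2, 3, 5, 7):
    M_p = abs_p(lead, p) * math.prod(max(1.0, abs_p(b, p)) for b in roots)
    assert math.isclose(M_p, max(abs_p(c, p) for c in coeffs))

# Mahler's inequality at the archimedean place: M_v(f) <= (n + 1) |f|_v.
M_inf = abs(lead) * math.prod(max(1, abs(b)) for b in roots)   # = 6
f_inf = max(abs(c) for c in coeffs)                            # = 5
assert M_inf <= (len(roots) + 1) * f_inf                       # n + 1 = 3
```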
\begin{lemma}
\label{lr+1}
Let ${\beta_1, \ldots, \beta_{\ell+1}}$ be ${\ell+1}$ distinct roots of $f(X)$, where ${0\le \ell\le n-1}$. Then
$$
\max\{|\beta_1|_v, \ldots, |\beta_{\ell+1}|_v\}\ge c_v(n)\frac{|a_\ell|_v}{|f|_v},
$$
where ${c_v(n)=1}$ for non-archimedean~$v$ and ${c_v(n)=(n+1)^{-1}2^{-n}}$ for archimedean~$v$.
\end{lemma}
\paragraph{Proof}
We have
\begin{equation}
\label{eanu}
a_\ell=\pm a_n\sum_{1\le i_1<\ldots<i_{n-\ell}\le n}\beta_{i_1}\ldots\beta_{i_{n-\ell}},
\end{equation}
where ${\beta_1, \ldots, \beta_n}$ are all roots of $f(X)$ in~$\bar\K$ counted with multiplicities.
Observe that each term in the sum above contains one of the roots ${\beta_1, \ldots, \beta_{\ell+1}}$, and the product of the other roots together with~$a_n$ is $v$-bounded by $M_v(f)$.
Hence, denoting ${\mu= \max\{|\beta_1|_v, \ldots, |\beta_{\ell+1}|_v\}}$, we obtain
${|a_\ell|_v\le \mu M_v(f)}$ in the non-archimedean case and ${|a_\ell|_v\le \binom n\ell\mu M_v(f)}$ in the archimedean case. Since ${\binom n\ell\le 2^n}$, the result follows. \qed
\subsection{Siegel's ``Absolute'' Lemma}
In this section we give a version of the Absolute Siegel's Lemma due to David and Philippon~[3], adapted for our purposes.
We start from a slightly modified definition of the projective height of a non-zero vector ${{\underline{a}}=(a_1, \ldots, a_n)\in\bar\Q^n}$. As before, we fix a number field~$\K$ containing ${a_1,\ldots,a_n}$ and set ${d=[\K:\Q]}$, \ ${d_v=[\K_v:\Q_v]}$ for ${v\in M_\K}$.
Now we define
$$
\height_s({\underline{a}})= \sum_{v\in M_\K}\frac{d_v}d \log \|{\underline{a}}\|_v,
$$
where
$$
\|{\underline{a}}\|_v=
\begin{cases}
\max\{ |a_1|_v, \ldots, |a_n|_v\}, & v<\infty,\\
(|a_1|_v^2+\ldots+|a_n|_v^2)^{1/2}, & v\mid\infty.
\end{cases}
$$
This definition is the same as that of $\height_p({\underline{a}})$, except that at the archimedean places the sup-norm is replaced by the Euclidean norm.
We clearly have ${\height_s(\lambda{\underline{a}})=\height_s({\underline{a}})}$ for ${\lambda\in \bar\Q^\times}$, and
\begin{equation}
\label{ehphs}
\height_p({\underline{a}})\le \height_s({\underline{a}})\le \height_p({\underline{a}})+ \frac12\log n.
\end{equation}
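In the simplest case of a vector of coprime rational integers, every finite local norm equals~$1$, and~\eqref{ehphs} reduces to comparing the sup-norm with the Euclidean norm. A small Python check of this special case (ours, not part of the text):

```python
from functools import reduce
from math import gcd, log

def heights(a):
    """h_p and h_s of a non-zero integer vector, viewed over Q.
    Dividing out the gcd makes every finite local norm ||a||_v
    equal to 1, so only the archimedean place contributes."""
    g = reduce(gcd, a)
    a = [x // g for x in a]
    h_p = log(max(abs(x) for x in a))
    h_s = 0.5 * log(sum(x * x for x in a))
    return h_p, h_s

n = 2
h_p, h_s = heights((3, 4))       # h_p = log 4, h_s = log 5
assert h_p <= h_s <= h_p + 0.5 * log(n)
```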
Now let us define the height of a linear subspace of $\bar\Q^n$. If~$W$ is a $1$-dimensional subspace of $\bar\Q^n$ then we set
$$
\height_s(W):=\height_s({\underline{w}}),
$$
where~${\underline{w}}$ is an arbitrary non-zero vector from~$W$. Clearly, $\height_s(W)$ does not depend on the particular choice of the vector~${\underline{w}}$.
To extend this to subspaces of arbitrary dimension, we use Grassmann spaces. Recall that the $m$th Grassmann space
$\wedge^m\bar\Q^n$ is of dimension $\binom nm$, and has a standard basis consisting of the vectors
$$
e_{i_1}\wedge\ldots \wedge e_{i_m}, \qquad (1\le i_1<\ldots<i_m\le n),
$$
where ${e_1, \ldots, e_n}$ is the standard basis of $\bar\Q^n$. If~$W$ is an $m$-dimensional subspace of $\bar\Q^n$ then $\wedge^mW$ is a $1$-dimensional subspace of $\wedge^m\bar\Q^n$, and we simply define
$$
\height_s(W):=\height_s(\wedge^mW).
$$
Finally, we set ${\height_s(W)=0}$ for the zero subspace ${W=\{{\underline{0}}\}}$.
To make this more explicit, pick a basis ${{\underline{w}}_1,\ldots, {\underline{w}}_m}$ of~$W$. Then $\wedge^mW$ is generated by ${{\underline{w}}_1\wedge\ldots\wedge {\underline{w}}_m}$, and we have
\begin{equation}
\label{ehwedge}
\height_s(W)=\height_s({\underline{w}}_1\wedge\ldots\wedge {\underline{w}}_m).
\end{equation}
This allows one to estimate the height of a subspace generated by a finite set of vectors in terms of heights of generators.
\begin{proposition}
\label{elever}
Let~$W$ be a subspace of~$\bar\Q^n$ generated by vectors ${{\underline{w}}_1,\ldots, {\underline{w}}_m\in \bar\Q^n}$. Then
$$
\height_s(W)\le \height_s({\underline{w}}_1)+\ldots+\height_s({\underline{w}}_m).
$$
\end{proposition}
\paragraph{Proof}
Selecting among ${{\underline{w}}_1,\ldots, {\underline{w}}_m}$ a maximal linearly independent subset, we may assume that ${{\underline{w}}_1,\ldots, {\underline{w}}_m}$ is a basis of~$W$. Then we have~\eqref{ehwedge}. It remains to observe that for any place~$v$ we have
$$
\|{\underline{w}}_1\wedge\ldots\wedge {\underline{w}}_m\|_v\le \|{\underline{w}}_1\|_v\ldots\|{\underline{w}}_m\|_v.
$$
For non-archimedean~$v$ this is obvious, and for archimedean~$v$ this is the classical Hadamard's inequality. \qed
\bigskip
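In coordinates, ${{\underline{w}}_1\wedge{\underline{w}}_2}$ is the vector of ${2\times2}$ minors of the matrix with rows ${{\underline{w}}_1,{\underline{w}}_2}$, and the displayed inequality at the archimedean place is Hadamard's. A small numerical check (our own example):

```python
from itertools import combinations
from math import sqrt

def wedge(w1, w2):
    """Coordinates of w1 /\ w2 in the standard basis e_i /\ e_j (i < j):
    the 2x2 minors of the matrix with rows w1, w2."""
    n = len(w1)
    return [w1[i] * w2[j] - w1[j] * w2[i]
            for i, j in combinations(range(n), 2)]

def euc(v):
    return sqrt(sum(x * x for x in v))

w1, w2 = [1, 2, 3], [0, 1, -1]
wd = wedge(w1, w2)
# Hadamard's inequality at the archimedean place:
assert euc(wd) <= euc(w1) * euc(w2)
```

Here ${\|wd\|=\sqrt{27}}$ against ${\sqrt{14}\cdot\sqrt{2}=\sqrt{28}}$, so the inequality is nearly sharp in this example.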
We denote by $({\underline{x}}\cdot{\underline{y}})$ the standard inner product on~$\bar\Q^n$:
$$
({\underline{x}}\cdot{\underline{y}})=x_1y_1+\ldots+x_ny_n.
$$
Let~$W^\perp$ denote the orthogonal complement to~$W$ with respect to this product. It is well-known that the coordinates of $\wedge^mW$ (where ${m=\dim W}$) in the standard basis of $\wedge^m\bar\Q^n$ are the same (up to a scalar multiple) as the coordinates of $\wedge^{n-m}W^\perp$ in the standard basis of $\wedge^{n-m}\bar\Q^n$. In particular,
\begin{equation}
\label{eh=h}
\height_s(W)=\height_s(W^\perp).
\end{equation}
We use this to estimate the height of the subspace defined by a system of linear equations.
\begin{proposition}
Let $L_1, \ldots, L_m$ be non-zero linear forms on $\bar\Q^n$, and let~$W$ be the subspace of $\bar\Q^n$ defined by ${L_1({\underline{x}})=\ldots=L_m({\underline{x}})=0}$. Then
\begin{equation}
\label{ehwllp}
\height_s(W)\le \height_p(L_1)+\ldots+\height_p(L_m)+\frac m2\log n.
\end{equation}
\end{proposition}
\paragraph{Proof}
Let ${{\underline{a}}_1, \ldots, {\underline{a}}_m}$ be vectors in $\bar\Q^n$ such that ${L_i({\underline{x}})=({\underline{x}}\cdot{\underline{a}}_i)}$. Then
\begin{equation}
\label{eal}
\height_p(L_i)=\height_p({\underline{a}}_i) \qquad (i=1, \ldots, m).
\end{equation}
The space~$W^\perp$ is generated by ${{\underline{a}}_1, \ldots, {\underline{a}}_m}$. Applying to it Proposition~\ref{elever} and using~\eqref{ehphs}, we obtain
$$
\height_s(W^\perp)\le \height_s({\underline{a}}_1)+\ldots+\height_s({\underline{a}}_m) \le \height_p({\underline{a}}_1)+\ldots+\height_p({\underline{a}}_m) +\frac m2\log n.
$$
Together with~\eqref{eh=h} and~\eqref{eal}, this gives~\eqref{ehwllp}. \qed
\begin{remark}
It is not difficult to slightly refine~\eqref{ehwllp}, replacing $\log n$ by $\log m$ in the right-hand side, but this would not lead to any substantial improvement of our results.
\end{remark}
In [3, Lemma~4.7] the following version of ``absolute Siegel's lemma'' is given.
\begin{proposition}
\label{pdp}
Let~$W$ be an $\ell$-dimensional subspace of $\bar\Q^n$ and ${\varepsilon>0}$. Then, there is a non-zero vector ${{\underline{x}}\in W}$, satisfying:
\begin{equation*}
\height_p({\underline{x}}) \le \frac{\height_s(W)}{\ell}+ \frac1{2 \ell}\sum_{i=1}^{\ell-1}\sum_{k=1}^i\frac{1}{k}+\varepsilon.
\end{equation*}
\end{proposition}
\begin{corollary}
\label{cdp}
Let $L_1, \ldots, L_m$ be non-zero linear forms in~$n$ variables with algebraic coefficients. Then, there exists a non-zero vector ${{\underline{x}}\in \bar\Q^n}$ such that ${L_1({\underline{x}})=\ldots=L_m({\underline{x}})=0}$ and
\begin{equation}
\label{eberny}
\height_p({\underline{x}}) \le \frac{1}{n-m}\left(\height_p(L_1)+\ldots+\height_p(L_m)\right)+ \frac12\frac{n}{n-m}\log n.
\end{equation}
\end{corollary}
\paragraph{Proof}
We apply Proposition~\ref{pdp} with~$W$ the subspace defined by ${L_1({\underline{x}})=\ldots=L_m({\underline{x}})=0}$. Denoting ${\ell=\dim W}$, we clearly have ${n-m\le \ell\le n}$ and
$$
\frac1{2\ell}\sum_{i=1}^{\ell-1}\sum_{k=1}^i\frac{1}{k}<\frac12\log \ell \le \frac12\log n.
$$
Hence there exists a non-zero ${{\underline{x}}\in W}$ satisfying
$$
\height_p({\underline{x}}) \le \frac{1}{n-m}\height_s(W)+ \frac12\log n.
$$
Using~\eqref{ehwllp}, we find
$$
\height_p({\underline{x}}) \le \frac{1}{n-m}\left(\height_p(L_1)+\ldots+\height_p(L_m)\right)+\frac12\frac{m}{n-m}\log n+\frac12\log n,
$$
which is~\eqref{eberny}. \qed
\section{Power series}
\label{sseries}
In this section we recall various results about power series, used in our proof.
\subsection{Puiseux Expansions}
Let~$\K$ be a field of characteristic~$0$, and $\K((x))$ the field of formal power series over~$\K$. It is well-known that an extension of $\K((x))$ of degree~$n$ is a subfield of a field of the form $\L((x^{1/e}))$, where~$e$ is a positive integer (the ramification index),~$\L$ is a finite extension of~$\K$, and
$$
[\L:\K], e\le n.
$$
This fact (quoted sometimes as the ``Theorem of Puiseux'') has the following consequence: if we fix an algebraic closure~$\bar \K$ of~$\K$, then the algebraic closure of $\K((x))$ can be given by
$$
\overline{\K((x))}=\bigcup_{e=1}^\infty\bigcup_{\genfrac{}{}{0pt}{}{\K\subset \L\subset \bar \K}{[\L:\K]<\infty}}\L((x^{1/e})),
$$
where the interior union is over all subfields~$\L$ of~$\bar \K$ finite over~$\K$.
Another immediate consequence of the ``Theorem of Puiseux'' is the following statement:
\begin{proposition}
\label{ppe}
Let
$$
F(X,Y)=f_n(X)Y^n+\cdots+f_0(X)\in \K[X,Y]
$$
be a polynomial of $Y$-degree~$n$. Then there exists a finite extension~$\L$ of~$\K$, positive integers ${e_1, \ldots, e_n}$, all not exceeding~$n$, and series ${y_i\in \L((x^{1/e_i}))}$ such that
\begin{equation}
\label{edecompf}
F(x,Y)=f_n(x)(Y-y_1)\cdots(Y-y_n).
\end{equation}
\end{proposition}
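As a minimal illustration of Proposition~\ref{ppe} (our example, not from the text): for ${F(X,Y)=Y^2-X}$ one has ${n=2}$, and both branches are ramified with ${e_1=e_2=2}$:

```latex
F(x,Y) = Y^2 - x = \bigl(Y - x^{1/2}\bigr)\bigl(Y + x^{1/2}\bigr),
\qquad y_{1,2} = \pm x^{1/2} \in \K((x^{1/2})),
```

so one may take ${\L=\K}$, with ${\kappa_1=\kappa_2=1}$ in the notation introduced below.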
Write the series ${y_1,\ldots,y_n}$ as
$$
y_i=\sum_{k=\kappa_i}^\infty a_{ik}x^{k/e_i}
$$
with ${a_{i\kappa_i}\ne 0}$. It is well-known and easy to show that
$$
|\kappa_i|\le \deg_XF \qquad (i=1,\ldots, n).
$$
This inequality will be used throughout the article without special notice.
We want to link the numbers~$e_i$ and~$\kappa_i$ with the ``order of vanishing'' at $(0,0)$, introduced in~\eqref{eordvan}.
\begin{proposition}
\label{prke}
Let ${F(X,Y)\in \K[X,Y]}$ and ${y_1, \ldots, y_n}$ be as above, and assume that $F(0,Y)$ is not identically~$0$. Then the quantity~$r$, introduced in~\eqref{eordvan}, satisfies
\begin{equation}
\label{erke}
r=\sum_{\kappa_i>0}\min\{1, \kappa_i/e_i\},
\end{equation}
where the sum extends only to those~$i$ for which ${\kappa_i>0}$.
\end{proposition}
\paragraph{Proof}
We denote by~$\nu_x$ the standard additive valuation on $\K((x))$, normalized to have ${\nu_x(x)=1}$. This~$\nu_x$ extends in a unique way to the algebraic closure $\overline{\K((x))}$; precisely, for
$$
y(x)=\sum_{k=\kappa}^\infty a_kx^{k/e}\in \overline{\K((x))} \qquad (a_\kappa\ne0)
$$
we have ${\nu_x(y)=\kappa/e}$.
Furthermore, for
$$
G(x,Y)=g_s(x)Y^s+\cdots+g_0(x)\in \overline{\K((x))}[Y]
$$
we set ${\nu_x(G)=\min \{\nu_x(g_0), \ldots, \nu_x(g_s)\}}$.
Gauss' lemma asserts that for ${G_1,G_2 \in \overline{\K((x))}[Y]}$, we have ${\nu_x(G_1G_2)= \nu_x(G_1)+\nu_x(G_2)}$.
Since $F(0,Y)$ is not identically~$0$, we have ${\nu_x(F(x,Y))=0}$. Applying Gauss' lemma to~\eqref{edecompf}, we obtain
$$
\nu_x(f_n(x))+\sum\min\{0,\kappa_i/e_i\}=0.
$$
Hence, setting ${{\tilde f}_n=x^{-\nu_x(f_n(x))}f_n(x)}$, we may re-write~\eqref{edecompf} as
\begin{equation}
\label{edecf}
F(x,Y)= \prod_{\kappa_i>0}(Y-y_i)\cdot {\tilde f}_n(x)\prod_{\kappa_i\le 0}(x^{-\kappa_i/e_i}Y-x^{-\kappa_i/e_i}y_i).
\end{equation}
Now set ${G(x,Y)=F(x,xY)}$. Then clearly ${r=\nu_x(G)}$. Applying Gauss' Lemma to the decomposition
$$
G(x,Y)= \prod_{\kappa_i>0}(xY-y_i)\cdot {\tilde f}_n(x)\prod_{\kappa_i\le 0}(x^{1-\kappa_i/e_i}Y-x^{-\kappa_i/e_i}y_i),
$$
we obtain~\eqref{erke}. \qed
\bigskip
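A quick sanity check of~\eqref{erke} on a concrete polynomial (our example): for ${F(X,Y)=Y^2-X^3}$ the two roots are ${y_{1,2}=\pm x^{3/2}}$, so ${\kappa_i/e_i=3/2>0}$ for both indices, and the formula gives

```latex
r = \min\bigl\{1,\tfrac32\bigr\} + \min\bigl\{1,\tfrac32\bigr\} = 2,
```

in agreement with the direct computation ${\nu_x(F(x,xY))=\nu_x(x^2Y^2-x^3)=2}$.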
Here is one more useful property.
\begin{proposition}
\label{pfs}
In the set-up of Proposition~\ref{prke}, assume that ${\kappa_i>0}$ for exactly~$\ell$ indexes ${i\in \{1,\ldots, n\}}$. Then
${f_k(0)=0}$ for ${k<\ell}$, but ${f_\ell(0)\ne 0}$.
\end{proposition}
\paragraph{Proof}
Re-write~\eqref{edecf} as
$$
F(x,Y)= \prod_{\kappa_i>0}(Y-y_i)\prod_{\kappa_i=0}(Y-y_i)\cdot {\tilde f}_n(x)\prod_{\kappa_i< 0}(x^{-\kappa_i/e_i}Y-x^{-\kappa_i/e_i}y_i).
$$
Substituting ${x=0}$, every factor in the first product becomes~$Y$, every factor in the second product becomes ${Y-a_{i0}}$, with ${a_{i0}\ne 0}$, and every factor in the third product (including ${\tilde f}_n(0)$) becomes a non-zero constant. Whence the result. \qed
\subsection{Eisenstein's theorem}
In this subsection, we recall the quantitative Eisenstein theorem due to Dwork, Robba, Schmidt and van der Poorten \cite{DR79,DP92,Schmidt2}, as given in \cite{Yuri2}. It will be convenient to use the notion of an $M_{\K}$-divisor.
An $M_{\K}$-divisor is a vector $(A_v)_{v \in M_{\K}}$ of positive real numbers, indexed by the places ${v \in M_{\K}}$, such that ${A_v = 1}$ for all but finitely many ${v \in M_{\K}}$. An $M_{\K}$-divisor is \textit{effective} if ${A_v \geq 1}$ for all ${v \in M_{\K}}$.
We define the \textit{height} of an $M_{\K}$-divisor $\A = (A_v)_{v \in M_{\K}}$ as
\begin{equation}
\height (\A) = \sum_{v \in M_{\K}} \frac{d_v}{d}\log A_v.
\end{equation}
The following version of Eisenstein's theorem is from \cite[Theorem 7.5]{Yuri2}.
\begin{theorem}
Let $F(X,Y)$ be a separable polynomial of degrees ${m=\deg_X F}$ and ${n=\deg_Y F}$. Further, let $y(x)=\sum_{k = \kappa}^{\infty} a_{k} x^{k/e} \in \K [[x^{1/e}]]$ be a power series satisfying $F(x,y(x))=0$. (Here we do not assume that ${a_\kappa\ne 0}$.) Then there exists an effective $M_{\K}$-divisor $\A = \left( A_v \right)_{v \in M_{\K}}$ such that:
\begin{equation*}
|a_k|_v \leq \max \{1, |a_{e \lfloor \kappa /e \rfloor }|_v \}A_v^{k/e - \lfloor \kappa/e \rfloor},
\end{equation*}
for any $v \in M_{\K}$ and any $k \geq \kappa$, and such that ${\height (\A) \leq 4n\height_p (F) + 3n \log (nm) + 10en}$.
\end{theorem}
Applying this theorem to a series of the form ${a_1x^{1/e}+a_2x^{2/e}+\ldots}$ (that is, with ${a_k = 0}$ for ${k \leq 0}$) and setting ${\kappa=0}$, we obtain the following.
\begin{corollary}
\label{cBB}
Let $F(X,Y)$ be a separable polynomial of degrees ${m=\deg_X F}$ and ${n=\deg_Y F}$. Further, let $y(x)=\sum_{k = 1}^{\infty} a_{k} x^{k/e} \in \K [[x^{1/e}]]$ be a power series satisfying $F(x,y(x))=0$.
Then, there exists an effective $M_{\K}$-divisor $\A = \left( A_v \right)_{v \in M_{\K}}$ such that:
\begin{equation}
\label{eeis1}
|a_k|_v \leq A_v^{k/e } \qquad(v\in M_\K, \quad k=1, 2, \ldots),
\end{equation}
and such that
\begin{equation}
\label{eeis2}
\height (\A) \leq 4n\height_p (F) + 3n \log (nm) + 10en.
\end{equation}
\end{corollary}
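Corollary~\ref{cBB} can be illustrated numerically (our example, not needed for the sequel). For ${F(X,Y)=Y^2-Y+X}$, the series ${y(x)=x+x^2+2x^3+5x^4+\cdots}$ of shifted Catalan numbers satisfies ${F(x,y(x))=0}$; its coefficients are rational integers, so ${A_v=1}$ works at every finite place, while ${A_v=4}$ works at the archimedean place:

```python
# y(x) = x + x^2 + 2x^3 + 5x^4 + ... satisfies y = x + y^2,
# i.e. F(x, y(x)) = 0 for F(X, Y) = Y^2 - Y + X.
K = 30
a = [0] * (K + 1)
a[1] = 1
for k in range(2, K + 1):              # convolution from y = x + y^2
    a[k] = sum(a[i] * a[k - i] for i in range(1, k))

# integrality: A_v = 1 suffices at every finite place
assert all(isinstance(c, int) for c in a)
# archimedean Eisenstein bound |a_k| <= A^k with A = 4
assert all(abs(a[k]) <= 4 ** k for k in range(1, K + 1))
```

Here ${a_k=C_{k-1}\le 4^{k-1}}$, the ${(k-1)}$-st Catalan number, so the geometric bound is essentially sharp.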
The following lemma is a slightly modified version of Proposition~2.7 from~\cite{Ab08}:
\begin{lemma}
\label{lPS}
Let $\K$ be a number field and let $y(x) = \sum_{k=1}^{\infty} a_k x^{k/e}$ be a series with coefficients in $\K$. Assume further that there exists an effective $M_{\K}$-divisor ${\A = \left( A_v \right)_{v \in M_{\K}}}$ such that for all ${k\geq 1}$ we have ${|a_k|_v \leq A_v^{k/e}}$. For ${\ell \in \N}$ write ${y(x)^\ell = \sum_{k=1}^{\infty} a_k^{(\ell)} x^{k/e}}$. Then, for any ${v\in M_\K}$ and all ${k \geq 1}$, we have:
\begin{equation}
|a_k^{(\ell)}|_v \leq
\left\{
\begin{array}{lcl}
2^{\ell+k} A_v^{k/e}, & \text{ } & \text{if } v | \infty,\\
A_v^{k/e}, & \text{ } & \text{if } v < \infty.
\end{array}
\right.
\end{equation}
\end{lemma}
In~\cite{Ab08} a slightly sharper estimate is given, with $\binom{\ell + k - 1}{k}$ instead of $2^{\ell+k}$.
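The bound of Lemma~\ref{lPS} can be tested on the extremal series ${a_k=A^k}$ (our own check, with ${e=1}$):

```python
K, A, ell = 12, 3, 4
a = [0] + [A ** k for k in range(1, K + 1)]   # extremal case |a_k| = A^k, e = 1

def mult(u, v):
    """Product of two truncated power series (lists of coefficients)."""
    w = [0] * (K + 1)
    for i in range(K + 1):
        for j in range(K + 1 - i):
            w[i + j] += u[i] * v[j]
    return w

p = [1] + [0] * K                             # y^0
for _ in range(ell):
    p = mult(p, a)                            # now p holds coefficients of y^ell

# archimedean bound of the lemma: binom(ell+k-1, k) <= 2^(ell+k)
assert all(abs(p[k]) <= 2 ** (ell + k) * A ** k for k in range(1, K + 1))
```

In this extremal case ${a^{(\ell)}_k=\binom{k-1}{\ell-1}A^k}$, well within the stated bound.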
\section{The ``Main Lemma''}
\label{sspr}
In this section we prove an auxiliary statement which is crucial for the proof of Theorem~\ref{MainResult}. It can be viewed as a version of the famous Theorem of Sprindzhuk, see \cite{Bomb,BM06}. In fact, our argument is an adaptation of that from~\cite{BM06}. We follow \cite[Sections 3.1--3.3]{Ab08} with some changes.
\subsection{Statement of the Main Lemma}
\label{ssml}
In this section~$\K$ is a number field, ${F(X,Y)\in \K[X,Y]}$ an absolutely irreducible polynomial of degrees ${m=\deg_XF}$ and ${n=\deg_YF}$, and ${\alpha, \beta\in \K^\times}$ satisfy ${F(\alpha,\beta)=0}$. Furthermore, everywhere in this section except Subsection~\ref{ssrml}
$$
y(x)=\sum_{k=1}^\infty a_kx^k\in \K[[x]]
$$
is a power series satisfying ${F(x,y(x))=0}$; in particular, ${F(0,0)=0}$.
We consider the following finite subset of $M_\K$:
\begin{equation*}
T=\{v\in M_\K: \text{${|\alpha|_v<1}$ and $y(x)$ converges $v$-adically to~$\beta$ at ${x=\alpha}$}\}.
\end{equation*}
\begin{lemma}[``Main Lemma'']
\label{lml}
Let~$\varepsilon$ satisfy ${0<\varepsilon\le 1}$. Then we have either
\begin{equation}
\label{eml1}
\height(\alpha)\le 200\varepsilon^{-2}mn^4(\height_p(F)+5),
\end{equation}
or
\begin{equation}
\label{eml2}
\left|\frac{\height(\alpha)}n-\height_T(\alpha)\right|\le \varepsilon n\height(\alpha)+200\varepsilon^{-1} n^2(\height_p(F)+\log(mn) + 10).
\end{equation}
\end{lemma}
\subsection{Preparations}
The proof of the ``Main Lemma'' requires some preparation. First of all, recall that, according to Eisenstein's Theorem as given in Corollary~\ref{cBB}, there exists an effective $M_\K$-divisor ${\A = \left( A_v \right)_{v \in M_{\K}}}$ such that both~\eqref{eeis1} and~\eqref{eeis2} hold with ${e=1}$:
\begin{align*}
|a_k|_v &\leq A_v^{k} \qquad(v\in M_\K, \quad k=1, 2, \ldots),\\
\height (\A) &\leq 4n\height_p (F) + 3n \log (nm) + 10n.
\end{align*}
We fix this~$\A$ until the end of the section.
Next, we need to construct an ``auxiliary polynomial''.
\begin{proposition}[Auxiliary polynomial]
\label{pAuxiliary}
Let~$\delta$ be a real number ${0<\delta\le 1/2}$ and let~$N$ be a positive integer.
There exists a non-zero polynomial ${G(X,Y) \in \QQ[X,Y]}$ satisfying
${\deg_X G \leq N}$, ${\deg_Y G \leq n-1}$,
\begin{align}
\label{eauxmult}
\nu_x (G (x, y(x))) &\geq (1 - \delta ) Nn,\\
\label{eauxh}
\height_p(G) &\leq \delta^{-1} nN(\height(\A) + 3) .
\end{align}
\end{proposition}
\paragraph{Proof}
It is quite analogous to the proof of Proposition~3.1 in~\cite{Ab08}.
Condition~\eqref{eauxmult} is equivalent to a system of ${(1 - \delta ) Nn}$ linear equations in the ${n(N+1)}$ coefficients of~$G$. Each coefficient of each linear equation is a coefficient of~$x^k$, for some ${k\le Nn}$, in one of the series $y(x)^\ell$ for ${\ell=0,\ldots, n-1}$.
Using~\eqref{eeis1} and Lemma~\ref{lPS}, we estimate the height of every equation as ${nN\height(\A)+ (Nn+n)\log 2}$. Corollary~\ref{cdp} implies now that we can find a non-zero solution of our system of height at most
$$
\delta^{-1}(nN\height(\A)+ (Nn+n)\log2)+ \frac12\delta^{-1}\log(nN).
$$
This is smaller than the right-hand side of~\eqref{eauxh}. \qed
\bigskip
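At bottom, the construction of the auxiliary polynomial is linear algebra: the vanishing conditions form an underdetermined homogeneous linear system, and the absolute Siegel lemma supplies a solution of controlled height. The following Python sketch (a toy illustration of ours; it finds *some* non-zero solution by exact Gauss--Jordan elimination, with no control of heights) builds such a~$G$ for the hypothetical data ${F(X,Y)=Y^2-Y+X}$, ${N=2}$, and vanishing order~$3$:

```python
from fractions import Fraction as Fr

# toy data: F(X, Y) = Y^2 - Y + X, whose root y(x) = x + x^2 + 2x^3 + ...
# satisfies y = x + y^2 (coefficients computed by convolution)
K = 8
y = [0] * (K + 1)
y[1] = 1
for k in range(2, K + 1):
    y[k] = sum(y[i] * y[k - i] for i in range(1, k))
pow_y = [[1] + [0] * K, y]                    # coefficient lists of y^0, y^1

N, n, order = 2, 2, 3                         # deg_X G <= N, deg_Y G <= n - 1
cols = [(j, l) for j in range(N + 1) for l in range(n)]
# one homogeneous equation per power x^k with k < order:
rows = [[Fr(pow_y[l][k - j]) if 0 <= k - j <= K else Fr(0)
         for (j, l) in cols] for k in range(order)]

# exact Gauss-Jordan elimination, then read off one nullspace vector
m, nc = len(rows), len(cols)
piv, r = [], 0
for c in range(nc):
    p = next((i for i in range(r, m) if rows[i][c] != 0), None)
    if p is None:
        continue
    rows[r], rows[p] = rows[p], rows[r]
    rows[r] = [x / rows[r][c] for x in rows[r]]
    for i in range(m):
        if i != r and rows[i][c] != 0:
            rows[i] = [x - rows[i][c] * t for x, t in zip(rows[i], rows[r])]
    piv.append(c)
    r += 1

free = next(c for c in range(nc) if c not in piv)   # system is underdetermined
g = [Fr(0)] * nc
g[free] = Fr(1)
for i, c in enumerate(piv):
    g[c] = -rows[i][free]

# check: G(x, y(x)) vanishes to order >= 3, and G is non-zero
z = [sum(g[ci] * (pow_y[l][k - j] if 0 <= k - j <= K else 0)
         for ci, (j, l) in enumerate(cols)) for k in range(K + 1)]
assert any(c != 0 for c in g) and all(z[k] == 0 for k in range(order))
```

With these data the code finds ${G=X-Y+XY}$, for which ${G(x,y(x))=-x^3+O(x^4)}$.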
\subsection{Upper Bound}
\label{ssup}
Now we can obtain an upper bound for $\height_T(\alpha)$ in terms of ${\height(\alpha)}$.
\begin{proposition}[Upper bound for $\height_T(\alpha)$]
\label{pupp}
Let~$\delta$ satisfy ${0<\delta \le 1/2}$. Then
we have either
\begin{equation}
\label{eledelta}
\height(\alpha)\le 10\delta^{-2}mn^4(\height_p(F)+5),
\end{equation}
or
\begin{equation}
\label{eupp}
n\height_T(\alpha)\le (1+4\delta)\height(\alpha)+ 8\delta^{-1} n(\height(\A) + 10)+ \height_p(F).
\end{equation}
\end{proposition}
\paragraph{Proof}
Fix a positive integer~$N$, to be specified later, and let $G(X,Y)$
be the auxiliary polynomial introduced in Proposition~\ref{pAuxiliary}. Extending the field~$\K$, we may assume that ${G(X,Y)\in \K[X,Y]}$.
We may also assume that~$G$ has a coefficient equal to~$1$; in particular, ${|G|_v\ge 1}$ for all ${v\in M_\K}$, where we denote by~$|G|_v$ the maximum of $v$-adic norms of coefficients of~$G$.
The series ${z(x)=G(x,y(x))\in \K[[x]]}$ can be written as
$$
z(x)=\sum_{k=\eta}^\infty b_kx^k
$$
with ${\eta\ge (1-\delta)Nn\ge\frac12Nn}$ (recall that ${\delta\le 1/2}$). Again using~\eqref{eeis1} and Lemma~\ref{lPS}, we estimate the coefficients~$b_k$ as follows: for ${v<\infty}$ we have ${|b_k|_v\le |G|_vA_v^k}$, and for ${v\mid\infty}$ we have
${|b_k|_v\le n(N+1)2^{k+n-1}|G|_vA_v^k}$. Since for ${k\ge\eta\ge \frac12Nn}$ we have ${n(N+1)2^{k+n-1}\le 8^k}$, we obtain the estimate
\begin{equation}
\label{ebkv}
|b_k|_v\le
\begin{cases}
|G|_vA_v^k, & v<\infty,\\
|G|_v(8A_v)^k, & v\mid\infty.
\end{cases}
\qquad (v\in M_\K, \quad k\ge \eta).
\end{equation}
Now we distinguish two cases.
\paragraph{Case~1: ${G(\alpha,\beta)=0}$}
In this case we have ${F(\alpha,\beta)=G(\alpha,\beta)=0}$. We want to apply Lemma~\ref{lsys}; for this, we have to verify that polynomials~$F$ and~$G$ do not have a common factor. This is indeed the case, because~$F$ is absolutely irreducible, and ${\deg_YG<\deg_YF}$.
Lemma~\ref{lsys}, combined with~\eqref{eauxh} and~\eqref{eeis2}, gives
\begin{align}
\height(\alpha)&\le n \height_p (G) + (n-1) \height_p F + (m (n-1) + Nn) + (2n-1) \log (2n-1)+\log2\nonumber\\
&\le \delta^{-1}Nn^2(\height(\A)+6) + (n-1)(\height_p(F)+m)\nonumber\\
\label{eleal}
&\le 5\delta^{-1}Nn^3(\height_p(F)+5)+mn.
\end{align}
Below, after specifying~$N$, we will see that this is sharper than~\eqref{eledelta}.
\paragraph{Case~2: ${G(\alpha,\beta)=\gamma\ne0}$}
To treat this case it will be convenient to use, instead of the set~$T$, a slightly smaller subset~${\widetilde T}$,
consisting of ${v\in T}$ satisfying
\begin{equation*}
|\alpha|_v <
\begin{cases}
A_v^{-1}, &v<\infty,\\
(16A_v)^{-1}, &v\mid\infty.
\end{cases}
\end{equation*}
We have clearly
\begin{equation}
\label{ettt}
0\le\height_T(\alpha)-\height_{\widetilde T}(\alpha)\le \height(\A)+\log 16,
\end{equation}
and~\eqref{ebkv} implies the estimate
\begin{equation}
\label{ebkak}
|b_k\alpha^k|_v<
\begin{cases}
|G|_vA_v^\eta|\alpha|_v^\eta, &v<\infty,\\
|G|_v (8A_v)^\eta |\alpha|_v^\eta \cdot (1/2)^{k-\eta}, &v\mid\infty.
\end{cases}
\qquad (v\in {\widetilde T},\quad k\ge\eta).
\end{equation}
Recall that for ${v\in T}$, the series $y(x)$ converges $v$-adically to~$\beta$ at ${x=\alpha}$. Hence the same holds true for ${v\in {\widetilde T}}$. It follows that, for ${v\in {\widetilde T}}$, the series ${z(x)=G(x,y(x))}$ converges $v$-adically to\footnote{For archimedean~$v$, to make this conclusion we need absolute convergence of $y(x)$ at ${x=\alpha}$, which is obvious for ${v\in {\widetilde T}}$.} ${G(\alpha,\beta)=\gamma}$.
Using~\eqref{ebkak}, we can estimate $|\gamma|_v$ for ${v\in {\widetilde T}}$:
\begin{equation*}
|\gamma|_v<
\begin{cases}
|G|_vA_v^\eta|\alpha|_v^\eta, &v<\infty,\\
2|G|_v (8A_v)^\eta |\alpha|_v^\eta , &v\mid\infty.
\end{cases}
\qquad (v\in {\widetilde T}).
\end{equation*}
Using this and remembering that ${|G|_v\ge 1}$ for all~$v$, we obtain the following lower estimate for ${\height(\gamma)}$:
\begin{align*}
\height(\gamma)&\ge\height_{{\widetilde T}}(\gamma)\\
&\ge \eta \height_{\widetilde T}(\alpha)-\height_p(G) -\eta\height(\A)-\eta\log16-\log2\\
&\ge Nn(1-\delta)\height_{\widetilde T}(\alpha) - 2\delta^{-1} nN(\height(\A) + 6).
\end{align*}
Combining this with~\eqref{ettt}, we obtain
\begin{equation}
\label{elohg}
\height(\gamma)\ge Nn(1-\delta)\height_T(\alpha) - 3\delta^{-1} nN(\height(\A) + 6).
\end{equation}
On the other hand, using Lemma~\ref{lmaxG} it is easy to bound $\height(\gamma)$ from above. Indeed, part~\ref{iba} of this lemma implies that
$$
\height(\beta) \leq \height_p(F) + m\height(\alpha) +n +\log (m+1),
$$
and part~\ref{ifab} implies that
$$
\height(\gamma) \le \height_a (G) + N\height(\alpha) +(n-1)\height(\beta) + \log((N+1)n).
$$
Since~$G$ has a coefficient equal to~$1$, we have ${\height_a(G)=\height_p(G)\le \delta^{-1} nN(\height(\A) + 3)}$. Hence
\begin{align*}
\height(\gamma) &\le \height_p(G) + N\height(\alpha) +(n-1)(\height_p(F) + m\height(\alpha) +n +\log (m+1)) + \log((N+1)n)\\
&\le (N+mn)\height(\alpha) + \delta^{-1} nN(\height(\A) + 4) +n\height_p(F)+n^2+n\log(m+1) .
\end{align*}
Combining this with~\eqref{elohg} and dividing by~$N$, we obtain
\begin{equation}
\label{emessy}
n(1-\delta)\height_T(\alpha)\le \left(1+\frac{mn}N\right)\height(\alpha) + 4\delta^{-1} n(\height(\A) + 6) +N^{-1}(n\height_p(F)+n^2+n\log(m+1)).
\end{equation}
\paragraph{Completing the proof of Proposition~\ref{pupp}}
Now it is the time to specify~$N$: we set ${N=\lceil\delta^{-1}mn\rceil}$. With this choice of~$N$, inequality~\eqref{eleal} is indeed sharper than~\eqref{eledelta}, and inequality~\eqref{emessy} implies the following:
$$
n(1-\delta) \height_T(\alpha) \le (1+\delta)\height(\alpha)+ 4\delta^{-1} n(\height(\A) + 10)+ \delta\height_p(F).
$$
Since $\delta\le 1/2$, this is sharper than~\eqref{eupp}. \qed
\subsection{Lower Bound}
\label{sslow}
Our next objective is a lower bound for $\height_T(\alpha)$. We will see that it easily follows from the upper bound.
\begin{proposition}[Lower bound for $\height_T(\alpha)$]
\label{plow}
Let~$\delta$ satisfy ${0<\delta \le 1/2}$. Then
we have either~\eqref{eledelta} or
\begin{equation}
\label{elow}
n\height_T(\alpha)\ge (1-4n\delta)\height(\alpha)- 9\delta^{-1} n^2(\height(\A) + 10)- n\height_p(F).
\end{equation}
\end{proposition}
\paragraph{Proof}
Remark first of all that we may assume that the polynomial $F(\alpha,Y)$ is of degree~$n$ and separable. Indeed, if this is not the case, then ${R_F(\alpha)=0}$, where
$R_F(X)$ is the $Y$-resultant of $F(X,Y)$ and its $Y$-derivative $F'_Y(X,Y)$. In this case, the joint application of Lemmas~\ref{lSchmidt2} and~\ref{Yuri2} gives
\begin{equation*}
\height(\alpha)\le 2n\height_p (F) + 2n \log ((m+1)(n+1)\sqrt{n})+\log 2,
\end{equation*}
sharper than~\eqref{eledelta}.
Thus, $F(\alpha,Y)$ has~$n$ distinct roots in~$\bar\Q$, one of which is~$\beta$; we denote them ${\beta_1=\beta, \beta_2, \ldots, \beta_n}$. Extending the field~$\K$, we may assume that ${\beta_1, \ldots, \beta_n\in \K}$.
Set ${S=\{v\in M_\K: |\alpha|_v<1\}}$.
For ${i=1,\ldots,n}$ we let~$T_i$ be the set of ${v\in S}$ such that $y(x)$ converges $v$-adically to~$\beta_i$ at ${x=\alpha}$; in particular, ${T_1=T}$. The sets ${T_1, \ldots, T_n}$ are clearly disjoint, and we have
\begin{equation}
\label{ests}
S\supset T_1\cup\ldots\cup T_n\supset {\widetilde S},
\end{equation}
where~${\widetilde S}$ consists of ${v\in S}$ for which ${|\alpha|_v<A_v^{-1}}$. The left inclusion in~\eqref{ests} is trivial; to prove the right one, observe that for every ${v\in{\widetilde S}}$ the series $y(x)$ converges absolutely $v$-adically at ${x=\alpha}$, and, since ${F(x,y(x))=0}$, its sum must be a root of $F(\alpha,Y)$.
Clearly,
$$
0\le\height(\alpha)-\height_{\widetilde S}(\alpha)=\height_S(\alpha)-\height_{\widetilde S}(\alpha)\le \height(\A).
$$
It follows that
$$
\height_{T_1}(\alpha)+\cdots+\height_{T_n}(\alpha)\ge \height_{\widetilde S}(\alpha)\ge \height(\alpha)-\height(\A).
$$
Now observe that the upper bound~\eqref{eupp} holds true with~$T$ replaced by any~$T_i$:
$$
n\height_{T_i}(\alpha)\le (1+4\delta)\height(\alpha)+ 8\delta^{-1} n(\height(\A) + 10)+ \height_p(F) \qquad (i=1, \ldots, n).
$$
The last two inequalities imply that
$$
n\height_{T}(\alpha)=n\height_{T_1}(\alpha)\ge n(\height(\alpha)-\height(\A))-(n-1)((1+4\delta)\height(\alpha)+ 8\delta^{-1} n(\height(\A) + 10)+ \height_p(F)),
$$
which easily transforms into~\eqref{elow}. \qed
\subsection{Proof of the ``Main Lemma''}
Using Propositions~\ref{pupp} and~\ref{plow} with ${\delta=\varepsilon/4}$ and dividing by~$n$, we obtain that either~\eqref{eml1} holds, or
$$
\left|\height_T(\alpha)-\frac{\height(\alpha)}n\right|\le \varepsilon \height(\alpha)+ 40\varepsilon^{-1} n(\height(\A) + 10)+ \height_p(F).
$$
Combining this with~\eqref{eeis2}, we obtain~\eqref{eml2}. \qed
\subsection{``Ramified Main Lemma''}
\label{ssrml}
We will actually need a slightly more general statement, allowing ramification in the series $y(x)$. The set-up is as before, except that now we consider the series
$$
y(x)=\sum_{k=1}^\infty a_kx^{k/e}\in \K[[x^{1/e}]]
$$
satisfying ${F(x,y(x))=0}$. We fix an $e$-th root $\alpha^{1/e}$ and we will assume that it belongs to~$\K$. We will now say that the series $y(x)$ converges $v$-adically to~$\beta$ at~$\alpha$ if the series $y(x^e)$ converges $v$-adically to~$\beta$ at $\alpha^{1/e}$. (Of course, this depends on the particular choice of the root $\alpha^{1/e}$.) We again define~$T$ as the set of all ${v\in S}$ for which $y(x)$ converges $v$-adically to~$\beta$ at~$\alpha$.
\begin{lemma}[``Ramified Main Lemma'']
\label{lrml}
Let~$\varepsilon$ satisfy ${0<\varepsilon\le 1}$. Then we have either
\begin{equation}
\label{erml1}
\height(\alpha)\le 200\varepsilon^{-2}me^2n^4(\height_p(F)+5),
\end{equation}
or
\begin{equation}
\label{erml2}
\left|\frac{\height(\alpha)}n-\height_T(\alpha)\right|\le \varepsilon\height(\alpha)+200\varepsilon^{-1} en^2(\height_p(F)+2\log(mn) + 10).
\end{equation}
\end{lemma}
\paragraph{Proof}
The proof is by reduction to the unramified case. Apply Lemma~\ref{lml} to the polynomial $F(X^e,Y)$, the series $y(x^e)$ and the number $\alpha^{1/e}$. We obtain that either
\begin{equation*}
\height(\alpha^{1/e})\le 200\varepsilon^{-2}men^6(\height_p(F)+5),
\end{equation*}
or
\begin{equation*}
|\height(\alpha^{1/e})-n\height_T(\alpha^{1/e})|\le\varepsilon \height(\alpha^{1/e})+200\varepsilon^{-1} n^4(\height_p(F)+\log(men) + 10).
\end{equation*}
These estimates easily transform into~\eqref{erml1} and~\eqref{erml2}, respectively, using that
$$
\height(\alpha^{1/e})=e^{-1}\height(\alpha), \quad \height_T(\alpha^{1/e})=e^{-1}\height_T(\alpha),\quad e\le n. \eqno\square
$$
\section{Proof of the Main Theorem}
\label{sproof}
In this section we prove Theorem~\ref{MainResult}. First of all, we investigate the relation between $\height_T(\alpha)$ and $\lgcd_T(\alpha,\beta)$, where~$T$ is defined as in Section~\ref{sspr}.
\subsection{Comparing \texorpdfstring{$\height_T(\alpha)$}{hT(a)} and \texorpdfstring{$\lgcd_T(\alpha,\beta)$}{lgcdT(a,b)}}
In this subsection we retain the set-up of Subsection~\ref{ssml}, except that we allow ramification in the series $y(x)$, as we did in Subsection~\ref{ssrml}. Thus, in this subsection:
\begin{itemize}
\item
$\K$ is a number field;
\item
$F(X,Y)\in\K[X,Y]$ is an absolutely irreducible polynomial;
\item
$\alpha,\beta\in \K$ satisfy ${F(\alpha,\beta)=0}$;
\item
${y(x)=\sum_{k=1}^\infty a_kx^{k/e}\in \K[[x^{1/e}]]}$
satisfies ${F(x,y(x))=0}$;
\item
${T\subset M_\K}$ is the set of all ${v\in M_\K}$ such that ${|\alpha|_v<1}$ and $y(x)$ converges $v$-adically at~$\alpha$ to~$\beta$.
\end{itemize}
The $v$-adic convergence is understood in the same sense as in Subsection~\ref{ssrml}: we fix an $e$-th root $\alpha^{1/e}$, assume that it belongs to~$\K$, and define $v$-adic convergence of $y(x)$ to~$\beta$ at~$\alpha$ as $v$-adic convergence of $y(x^e)$ to~$\beta$ at~$\alpha^{1/e}$.
Let~$\kappa$ be the smallest~$k$ such that ${a_k\ne 0}$; by the assumption, ${\kappa>0}$. Then we have ${\nu_x(y)=\kappa/e}$ and
$$
y(x)=\sum_{k=\kappa}^\infty a_kx^{k/e}
$$
with ${a_\kappa\ne 0}$. In this subsection we prove that $\lgcd_T(\alpha,\beta)$ can be approximated by $\min\{1,\kappa/e\}\height_T(\alpha)$.
\begin{proposition}
\label{pnew}
In the above set-up we have
\begin{equation}
\label{edhgcd}
\left|\lgcd_T(\alpha,\beta)-\min\{\kappa/e,1\}\height_T(\alpha)\right|\le 30n\kappa\height_p (F) + 30n\kappa \log (nm) + 15en.
\end{equation}
\end{proposition}
This statement corresponds to Proposition~3.6 in~\cite{Ab08}. Our proof is, however, much more involved, in particular because Abouza\"id did not need the lower estimate.
\paragraph{Proof}
Let ${\A = \left( A_v \right)_{v \in M_{\K}}}$ be the $M_\K$-divisor from Corollary~\ref{cBB}. For the reader's convenience, we reproduce here~\eqref{eeis1} and~\eqref{eeis2}:
\begin{align*}
|a_k|_v &\leq A_v^{k/e} \qquad(v\in M_\K, \quad k\ge 1),\\
\height (\A) &\leq 4n\height_p (F) + 3n \log (nm) + 10en.
\end{align*}
As we already did several times in Section~\ref{sspr}, it will be convenient to replace~$T$ by a smaller subset. Thus, let~${\widetilde T}$ consist of ${v\in T}$ satisfying
\begin{equation}
\label{eleka}
|\alpha|_v<
\begin{cases}
A_v^{-\kappa-1}\min\{1,|a_\kappa|_v\}^e, & v<\infty,\\
(1/4)^eA_v^{-\kappa-1}\min\{1,|a_\kappa|_v\}^e, & v\mid\infty.
\end{cases}
\end{equation}
(Attention: this is not the same~${\widetilde T}$ as in Subsection~\ref{ssup}!)
Clearly,
$$
0\le\height_T(\alpha)-\height_{\widetilde T}(\alpha)\le (\kappa+1)\height(\A)+e\height_{T\smallsetminus{\widetilde T}}(a_\kappa)+e\log 4.
$$
Using~\eqref{eeis1} we estimate ${\height(a_\kappa)\le (\kappa/e)\height(\A)}$. We obtain
\begin{equation}
\label{edifhe}
0\le\height_T(\alpha)-\height_{\widetilde T}(\alpha)\le (2\kappa+1)\height(\A)+e\log 4\le3\kappa\height(\A)+e\log 4,
\end{equation}
where for the latter estimate we use ${\kappa\ge 1}$.
In particular,
\begin{equation}
\label{edifgcd}
0\le\lgcd_T(\alpha,\beta)-\lgcd_{\widetilde T}(\alpha,\beta)\le 3\kappa\height(\A)+e\log4.
\end{equation}
After this preparation, we can now proceed with the proof. For every ${v\in {\widetilde T}}$ we want to obtain an estimate of the form
${c_v|\alpha|_v^{\kappa/e}\le|\beta|_v\le c_v'|\alpha|_v^{\kappa/e}}$,
where $c_v$ and $c'_v$ are some quantities not depending on~$\alpha$.
\paragraph{Upper estimate for $|\beta|_v$.}
This is easy. It follows from~\eqref{eleka} that
$$
|\alpha|_v<
\begin{cases}
A_v^{-1}, & v<\infty,\\
(4^eA_v)^{-1}, & v\mid\infty.
\end{cases}
$$
From this and~\eqref{eeis1} we deduce that
\begin{equation}
\label{eakake}
|a_k\alpha^{k/e}|_v<
\begin{cases}
A_v^{\kappa/e}|\alpha|_v^{\kappa/e},& v<\infty,\\
A_v^{\kappa/e}|\alpha|_v^{\kappa/e}\cdot (1/4)^{k-\kappa},& v\mid\infty
\end{cases}
\qquad (k\ge \kappa).
\end{equation}
Hence
$$
|\beta|_v<
\begin{cases}
A_v^{\kappa/e}|\alpha|_v^{\kappa/e},& v<\infty,\\
2A_v^{\kappa/e}|\alpha|_v^{\kappa/e},& v\mid\infty.
\end{cases}
$$
\paragraph{Lower estimate for $|\beta|_v$.}
The lower estimate is slightly more subtle. First, we bound the difference ${\beta-a_\kappa\alpha^{\kappa/e}}$ from above using~\eqref{eleka}.
Similarly to~\eqref{eakake}, we have
$$
|a_k\alpha^{k/e}|_v<
\begin{cases}
A_v^{(\kappa+1)/e}|\alpha|_v^{(\kappa+1)/e},& v<\infty,\\
A_v^{(\kappa+1)/e}|\alpha|_v^{(\kappa+1)/e}\cdot (1/4)^{k-\kappa-1},& v\mid\infty
\end{cases}
\qquad (k\ge \kappa+1).
$$
Hence, presenting ${\beta-a_\kappa\alpha^{\kappa/e}}$ as the $v$-adic sum of the series
$$
y(x)-a_\kappa x^{\kappa/e}=\sum_{k=\kappa+1}^\infty a_kx^{k/e}
$$
at ${x=\alpha}$, we obtain the estimate
$$
|\beta-a_\kappa\alpha^{\kappa/e}|_v<
\begin{cases}
A_v^{(\kappa+1)/e}|\alpha|_v^{(\kappa+1)/e},& v<\infty,\\
2A_v^{(\kappa+1)/e}|\alpha|_v^{(\kappa+1)/e},& v\mid\infty.
\end{cases}
$$
Combining this with~\eqref{eleka}, we find
$$
|\beta-a_\kappa\alpha^{\kappa/e}|_v<
\begin{cases}
\min\{|a_\kappa|_v,1\}|\alpha|_v^{\kappa/e},& v<\infty,\\
(1/2)\min\{|a_\kappa|_v,1\}|\alpha|_v^{\kappa/e},& v\mid\infty.
\end{cases}
$$
Hence
$$
|\beta|_v\ge
\begin{cases}
\min\{|a_\kappa|_v,1\}|\alpha|_v^{\kappa/e},& v<\infty,\\
(1/2)\min\{|a_\kappa|_v,1\}|\alpha|_v^{\kappa/e},& v\mid\infty,
\end{cases}
$$
the lower estimate we were seeking.
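To spell out the last step: for ${v<\infty}$ the difference ${\beta-a_\kappa\alpha^{\kappa/e}}$ is strictly smaller in $v$-adic absolute value than ${|a_\kappa\alpha^{\kappa/e}|_v\ge\min\{|a_\kappa|_v,1\}|\alpha|_v^{\kappa/e}}$, so the ultrametric inequality yields
$$
|\beta|_v=|a_\kappa\alpha^{\kappa/e}|_v\ge\min\{|a_\kappa|_v,1\}|\alpha|_v^{\kappa/e};
$$
for ${v\mid\infty}$ the triangle inequality yields
$$
|\beta|_v\ge|a_\kappa\alpha^{\kappa/e}|_v-|\beta-a_\kappa\alpha^{\kappa/e}|_v\ge\tfrac12\min\{|a_\kappa|_v,1\}|\alpha|_v^{\kappa/e}.
$$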
\paragraph{Completing the proof of Proposition~\ref{pnew}.}
Thus, we have proved that
\begin{equation}
\label{edouble}
c_v|\alpha|_v^{\kappa/e}\le|\beta|_v\le c_v'|\alpha|_v^{\kappa/e},
\end{equation}
with
\begin{equation*}
c_v=
\begin{cases}
\min\{|a_\kappa|_v,1\},& v<\infty,\\
(1/2)\min\{|a_\kappa|_v,1\},& v\mid\infty
\end{cases}, \qquad
c'_v=
\begin{cases}
A_v^{\kappa/e},& v<\infty,\\
2A_v^{\kappa/e},& v\mid\infty.
\end{cases}
\end{equation*}
From~\eqref{edouble} we deduce that for ${v\in {\widetilde T}}$
$$
c_v|\alpha|_v^{\min\{\kappa/e,1\}}\le\max\{|\alpha|_v,|\beta|_v\}\le c_v'|\alpha|_v^{\min\{\kappa/e,1\}}.
$$
(We use here the obvious inequality ${c_v\le1\le c'_v}$.) Hence
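Explicitly, the two cases are as follows. If ${\kappa\le e}$, then ${|\alpha|_v\le|\alpha|_v^{\kappa/e}}$ (recall that ${|\alpha|_v<1}$), so
$$
c_v|\alpha|_v^{\kappa/e}\le|\beta|_v\le\max\{|\alpha|_v,|\beta|_v\}\le\max\{|\alpha|_v^{\kappa/e},c'_v|\alpha|_v^{\kappa/e}\}=c'_v|\alpha|_v^{\kappa/e};
$$
if ${\kappa>e}$, then ${|\beta|_v\le c'_v|\alpha|_v^{\kappa/e}\le c'_v|\alpha|_v}$, so
$$
c_v|\alpha|_v\le|\alpha|_v\le\max\{|\alpha|_v,|\beta|_v\}\le c'_v|\alpha|_v.
$$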
$$
-(\kappa/e)\height(\A)-\log2\le \lgcd_{\widetilde T}(\alpha,\beta)-\min\{\kappa/e,1\}\height_{\widetilde T}(\alpha)\le \height(a_\kappa)+\log2.
$$
Since ${\height(a_\kappa)\le (\kappa/e)\height(\A)}$, this implies
$$
|\lgcd_{\widetilde T}(\alpha,\beta)-\min\{\kappa/e,1\}\height_{\widetilde T}(\alpha)|\le (\kappa/e)\height(\A)+\log2,
$$
which, together with~\eqref{edifhe} and~\eqref{edifgcd} gives
$$
|\lgcd_{\widetilde T}(\alpha,\beta)-\min\{\kappa/e,1\}\height_{\widetilde T}(\alpha)|\le 7\kappa\height(\A)+4e.
$$
Combining this with~\eqref{eeis1}, we obtain~\eqref{edhgcd}. \qed
\subsection{Proving Theorem~\ref{MainResult}}
Now we are fully equipped for the proof of our main result. We want to show that, assuming
\begin{equation}
\label{eassum}
\height(\alpha)\ge 200\varepsilon^{-2}mn^6(\height_p(F)+5),
\end{equation}
we have
\begin{equation}
\label{eresu}
\left| \frac{\lgcd (\alpha, \beta)}{r} - \frac{\height(\alpha)}{n} \right| \leq
\frac{1}{r} \left( \varepsilon\height(\alpha)+4000\varepsilon^{-1} n^4(\height_p(F)+\log(mn) + 1) +30n^2m(\height_p (F) + \log (nm)) \right).
\end{equation}
Write
${F(X,Y)=f_n(X)Y^n+\cdots+f_0(X)}$.
According to Proposition~\ref{ppe} we have
\begin{equation*}
F(x,Y)=f_n(x)(Y-y_1)\cdots(Y-y_n),
\end{equation*}
where
$$
y_i=\sum_{k=\kappa_i}^\infty a_{ik}x^{k/e_i}\in \K((x^{1/e_i})) \qquad (i=1, \ldots, n).
$$
We assume that ${a_{i\kappa_i}\ne 0}$ for ${i=1, \ldots, n}$, so that ${\kappa_i/e_i=\nu_x(y_i)}$.
Denoting by~$\ell$ the number of indices~$i$ such that ${\kappa_i>0}$, we may assume that ${\kappa_1, \ldots, \kappa_\ell>0}$ and ${\kappa_{\ell+1}, \ldots, \kappa_n\le 0}$. Proposition~\ref{prke} implies that
\begin{equation}
\label{er=}
r=\sum_{i=1}^\ell\min\{1,\kappa_i/e_i\},
\end{equation}
and Proposition~\ref{pfs} implies that ${f_\ell(0)\ne 0}$. We may normalize the polynomial ${F(X,Y)}$ so that
\begin{equation*}
f_\ell(0)=1.
\end{equation*}
In particular, ${|F|_v\ge 1}$ for every ${v\in M_\K}$, where $|F|_v$ denotes the maximum of the $v$-adic norms of the coefficients of~$F$, and also ${\height_p(F)=\height_a(F)}$.
Set ${E=\lcm(e_1,\ldots,e_\ell )}$ and fix an $E$-th root $\alpha^{1/E}$. This fixes uniquely the roots ${\alpha^{1/e_1}, \ldots, \alpha^{1/e_\ell}}$.
Extending the field~$\K$ we may assume that the coefficients of the series ${y_1, \ldots, y_\ell}$ belong to~$\K$, and the same is true for $\alpha^{1/E}$ (and hence for ${\alpha^{1/e_1}, \ldots, \alpha^{1/e_\ell}}$ as well). Having fixed the root ${\alpha^{1/e_i}\in \K}$, we may define $v$-adic convergence of $y_i$ at~$\alpha$, see Subsection~\ref{ssrml}.
Extending further the field~$\K$, we may assume that it contains all the roots of the polynomial $F(\alpha,Y)$. Hence, if one of the series ${y_1,\ldots,y_\ell}$ converges $v$-adically at~$\alpha$ (and if the convergence is absolute in the archimedean case), then the sum must belong to~$\K$.
Consider the following subsets of~$M_\K$:
\begin{align*}
S&=\{v\in M_\K: |\alpha|_v<1\},\\
T_i&=\{v\in S: \text{the series $y_i$ converges $v$-adically to~$\beta$ at~$\alpha$}\} \qquad (i=1, \ldots, \ell ).
\end{align*}
(These sets are not the same~$T_i$ as in Subsection~\ref{sslow}!)
We have clearly ${\lgcd(\alpha,\beta)=\lgcd_S(\alpha,\beta)}$. If we manage to show that
the sets~$T_i$ are pairwise disjoint, and that
$\height_{S\smallsetminus(T_1\cup\cdots\cup T_\ell)}(\beta)$ is ``negligible'',
then a joint application of Lemma~\ref{lrml}, Proposition~\ref{pnew} and identity~\eqref{er=} would prove Theorem~\ref{MainResult}. We will argue along these lines, only with the sets~$T_i$ replaced by slightly smaller subsets.
Let ${\A_i=(A_{iv})_{v\in M_\K}}$ be the $M_\K$-divisor for the series~$y_i$ given by Corollary~\ref{cBB}. Define the $M_\K$-divisor ${\A=(A_{v})_{v\in M_\K}}$ by
$$
A_v=\max\{A_{1v},\ldots,A_{\ell v}\}\qquad (v\in M_\K).
$$
We have clearly
\begin{align}
\label{eakbo} |a_{ik}|_v &\leq A_v^{k/e_i} \qquad(v\in M_\K, \quad 1\le i\le \ell , \quad k\ge \kappa_i),\\
\height (\A) &\leq \height(\A_1)+\cdots+\height(\A_\ell)\nonumber\\
&\le 4n^2\height_p (F) + 3n^2 \log (nm) + 10n^3. \nonumber
\end{align}
Now let~${\widetilde S}$ consist of ${v\in S}$ satisfying
\begin{equation}
\label{ealbo}
|\alpha|_v<
\begin{cases}
|F|_v^{-n}A_v^{-1}, & v<\infty,\\
((n+1)2^{n+3}|F|_v)^{-n}A_v^{-1}, & v\mid\infty,
\end{cases}
\end{equation}
and set ${{\widetilde T}_i=T_i\cap{\widetilde S}}$. (This is not the same~${\widetilde S}$ as in Subsection~\ref{sslow}!) Clearly,
\begin{align}
0\le \lgcd(\alpha,\beta)-\lgcd_{\widetilde S}(\alpha,\beta)&\le \height(\alpha)-\height_{\widetilde S}(\alpha)\nonumber\\ &=\height_{S\smallsetminus{\widetilde S}}(\alpha)\nonumber\\ &\le \height(\A)+n\height_p(F)+\log ((n+1)2^{n+3})\nonumber\\
\label{esisi}
&\le 5n^2\height_p (F) + 3n^2 \log (nm) + 15n^3,\\
0\le \lgcd_{T_i\smallsetminus{\widetilde T}_i}(\alpha,\beta) &\le\height_{S\smallsetminus{\widetilde S}}(\alpha)\nonumber\\
\label{etiti}
&\le 5n^2\height_p (F) + 3n^2 \log (nm) + 15n^3 \qquad (i=1, \ldots, \ell).
\end{align}
Here we used the equality ${\height_p(F)=\height_a(F)}$.
Note also that for ${v\in {\widetilde S}}$, we have ${|\alpha|_v<A_v^{-1}}$, which implies that the series ${y_1, \ldots, y_\ell}$ converge $v$-adically at~$\alpha$ in the completion~$\K_v$, the convergence being absolute when~$v$ is archimedean. Hence, as we have seen above, the sum must belong to~$\K$.
\begin{proposition}
\label{pdisj}
The sets ${{\widetilde T}_1, \ldots, {\widetilde T}_\ell}$ are pairwise disjoint. Furthermore, if ${v\in {\widetilde S}}$ but ${v\notin {\widetilde T}_1\cup\ldots\cup {\widetilde T}_\ell}$, then
\begin{equation}
\label{ebge}
|\beta|_v\ge
\begin{cases}
|F|_v^{-1},&v<\infty,\\
((n+1)2^{n+2}|F|_v)^{-1},&v\mid\infty.
\end{cases}
\end{equation}
\end{proposition}
\paragraph{Proof}
The polynomial
$$
Q(Y)=(Y-y_1)\cdots(Y-y_\ell)\in \K[[x^{1/E}]][Y]
$$
divides $F(x,Y)$ in the ring $\K((x^{1/E}))[Y]$. By Gauss' Lemma, $Q(Y)$ divides $F(x,Y)$ in the ring $\K[[x^{1/E}]][Y]$ as well. Moreover, writing ${F(x,Y)=Q(Y)U(Y)}$ with
$$
U(Y)= f_n(x)Y^{n-\ell }+u_{n-\ell -1}Y^{n-\ell -1}+\cdots+u_0\in \K[[x^{1/E}]][Y],
$$
the coefficients ${u_0,\ldots, u_{n-\ell -1}}$ belong to the ring\footnote{This is a consequence of the following general algebraic property: let~$R$ be a commutative ring,~$R'$ a subring and ${Q(Y),F(Y)\in R'[Y]}$, the polynomial~$Q$ being monic; assume that ${Q\mid F}$ in $R[Y]$; then ${Q\mid F}$ in $R'[Y]$. Indeed, denoting by~$a$ the leading coefficient of~$F$, the polynomial~$Q$ divides ${G=F-aY^{\deg F-\deg Q}Q}$ in $R[Y]$, and ${\deg G<\deg F}$, so by induction ${Q\mid G}$ in $R'[Y]$, whence ${Q\mid F}$ in $R'[Y]$.} $\K[x,y_1,\ldots, y_\ell]$. Recall that for ${v\in {\widetilde S}}$ the series ${y_1, \ldots, y_\ell}$ converge $v$-adically at~$\alpha$ in the field~$\K$, the convergence being absolute when~$v$ is archimedean. Hence so do the coefficients of~$U$.
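The algebraic property in the footnote can be seen concretely: long division by a \emph{monic} polynomial only ever subtracts coefficient-ring multiples of it, so the quotient and remainder never leave the subring. The following sketch (the helper \texttt{divmod\_monic} and the integer example are ours, purely illustrative) carries this out for ${R'=\mathbb Z\subset R=\mathbb Q}$:

```python
# Long division of F by a monic Q, over any coefficient ring closed under
# addition, subtraction and multiplication.  Polynomials are coefficient
# lists, highest degree first.  (Illustrative helper, not from the paper.)
def divmod_monic(F, Q):
    assert Q[0] == 1, "Q must be monic"
    F = list(F)
    quotient = []
    while len(F) >= len(Q):
        c = F[0]                    # leading coefficient to eliminate
        quotient.append(c)
        for i, qi in enumerate(Q):  # subtract c * Q, aligned at the top degree
            F[i] -= c * qi
        F.pop(0)                    # the leading term cancels exactly (Q is monic)
    return quotient, F              # quotient and remainder (deg < deg Q)

# F = (Y^2 + 3Y + 2)(Y - 5) over Z: everything stays integral.
quot, rem = divmod_monic([1, -2, -13, -10], [1, 3, 2])
```

Here `quot` is `[1, -5]` (that is, ${Y-5}$) and the remainder vanishes: no denominators appear, exactly as the footnote asserts.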
Fix ${v\in {\widetilde S}}$ and write
$$
F(\alpha,Y)=(Y-y_1(\alpha))\cdots(Y-y_\ell(\alpha))(f_n(\alpha)Y^{n-\ell }+u_{n-\ell -1}(\alpha)Y^{n-\ell -1}+\cdots+u_0(\alpha)),
$$
where ${y_1(\alpha),\ldots,y_\ell(\alpha)\in\K}$ are the $v$-adic sums of the corresponding series at~$\alpha$, and similarly for ${u_{n-\ell -1}(\alpha),\ldots,u_0(\alpha)}$. We claim that $F(\alpha,Y)$ is a separable polynomial of degree~$n$; indeed, if this were not the case, then, as we have seen in Subsection~\ref{sslow}, our~$\alpha$ would satisfy~\eqref{eresu}, which contradicts~\eqref{eassum}.
Now if ${v\in T_i\cap T_j}$ for ${i\ne j}$, then ${\beta=y_i(\alpha)=y_j(\alpha)}$, and $F(\alpha,Y)$ must have~$\beta$ as a double root, a contradiction. This proves the disjointness of the sets~${\widetilde T}_i$.
Now assume that ${v\in {\widetilde S}}$ but ${v\notin {\widetilde T}_1\cup\ldots\cup {\widetilde T}_\ell}$. Then none of the sums ${y_1(\alpha),\ldots,y_\ell(\alpha)}$ is equal to~$\beta$; in other words ${y_1(\alpha),\ldots,y_\ell(\alpha),\beta}$ are ${\ell+1}$ distinct roots of the polynomial
$$
P(Y)=F(\alpha, Y)=f_n(\alpha)Y^n+\cdots+f_0(\alpha).
$$
We are going to use Lemma~\ref{lr+1}. Since ${f_\ell(0)=1}$ and
$$
|\alpha|_v<
\begin{cases}
|F|_v^{-1},&v<\infty,\\
(2|F|_v)^{-1},&v\mid\infty,
\end{cases}
$$
we have
$$
|f_\ell(\alpha)|_v\ge
\begin{cases}
1,&v<\infty,\\
1/2,&v\mid\infty
\end{cases},\qquad
|P|_v\le
\begin{cases}
|F|_v,&v<\infty,\\
2|F|_v,&v\mid\infty.
\end{cases}
$$
Now Lemma~\ref{lr+1} implies that
\begin{equation}
\label{emaxle}
\max\{|y_1(\alpha)|_v,\ldots,|y_\ell(\alpha)|_v,|\beta|_v\}\ge
\begin{cases}
|F|_v^{-1},&v<\infty,\\
((n+1)2^{n+2}|F|_v)^{-1},&v\mid\infty.
\end{cases}
\end{equation}
On the other hand, we may estimate ${|y_i(\alpha)|_v}$ from above using~\eqref{eakbo} and~\eqref{ealbo}. In what follows we repeatedly use the inequality ${e_i\le n}$. Since
$$
|\alpha|_v<
\begin{cases}
A_v^{-1},&v<\infty,\\
(2^{e_i}A_v)^{-1},&v\mid\infty
\end{cases} \qquad (i=1, \ldots,\ell),
$$
we have
$$
|a_{ik}\alpha^{k/e_i}|_v<
\begin{cases}
(A_v|\alpha|_v)^{1/e_i},&v<\infty,\\
(A_v|\alpha|_v)^{1/e_i}\cdot (1/2)^{k-1},&v\mid\infty
\end{cases} \qquad (k\ge 1,\quad i=1, \ldots,\ell),
$$
which implies
$$
|y_i(\alpha)|_v<
\begin{cases}
(A_v|\alpha|_v)^{1/e_i},&v<\infty,\\
2(A_v|\alpha|_v)^{1/e_i},&v\mid\infty
\end{cases} \qquad (i=1, \ldots,\ell).
$$
Now since
$$
|\alpha|_v<
\begin{cases}
|F|_v^{-e_i}A_v^{-1}, & v<\infty,\\
((n+1)2^{n+3}|F|_v)^{-e_i}A_v^{-1}, & v\mid\infty
\end{cases} \qquad (i=1, \ldots,\ell),
$$
we obtain finally
$$
|y_i(\alpha)|_v<
\begin{cases}
|F|_v^{-1},&v<\infty,\\
((n+1)2^{n+2}|F|_v)^{-1},&v\mid\infty
\end{cases} \qquad (i=1, \ldots,\ell).
$$
Compared with~\eqref{emaxle}, this implies~\eqref{ebge}. The proposition is proved.\qed
\bigskip
An immediate consequence of the second statement of Proposition~\ref{pdisj} is the estimate
\begin{equation}
\label{et''}
\lgcd_{{\widetilde S}\smallsetminus({\widetilde T}_1\cup\ldots\cup{\widetilde T}_\ell)}(\alpha,\beta)\le\height_{{\widetilde S}\smallsetminus({\widetilde T}_1\cup\ldots\cup{\widetilde T}_\ell)}(\beta)\le \height_p(F)+\log((n+1)2^{n+2})
\end{equation}
(we again use ${\height_a(F)=\height_p(F)}$).
\bigskip
Now we collect everything together to prove Theorem~\ref{MainResult}. According to Lemma~\ref{lrml}, condition~\eqref{eassum} implies that
$$
\left|\frac{\height(\alpha)}n-\height_{T_i}(\alpha)\right|\le \varepsilon\height(\alpha)+200\varepsilon^{-1} n^3(\height_p(F)+2\log(mn) + 10) \qquad (i=1, \ldots,\ell).
$$
Combining this with Proposition~\ref{pnew} and estimate~\eqref{etiti}, we obtain
\begin{align*}
\left|\min\bigl\{\frac{\kappa_i}{e_i},1\bigr\}\frac{\height(\alpha)}n-\lgcd_{{\widetilde T}_i}(\alpha,\beta)\right|&\le \varepsilon\height(\alpha)+3000\varepsilon^{-1} n^3(\height_p(F)+\log(mn) + 1)\\& +30nm\height_p (F) + 30nm \log (nm) \qquad (i=1, \ldots,\ell).
\end{align*}
Summing up, using~\eqref{er=} and the disjointness of the sets~${\widetilde T}_i$, we obtain
\begin{align*}
\left|r\frac{\height(\alpha)}n-\lgcd_{{\widetilde T}_1\cup\ldots\cup{\widetilde T}_\ell}(\alpha,\beta)\right|&\le \varepsilon\height(\alpha)+3000\varepsilon^{-1} n^4(\height_p(F)+\log(mn) + 1)\\& +30n^2m\height_p (F) + 30n^2m \log (nm).
\end{align*}
Finally, combining this with~\eqref{esisi} and~\eqref{et''}, we obtain~\eqref{eresu}. \qed
% arXiv:1312.3311 --- Stochastic De Giorgi Iteration and Regularity of Stochastic Partial Differential Equation
% Abstract: Under general conditions we show that the solution of a stochastic parabolic
% partial differential equation of the form
% $\partial_t u = \mathrm{div} (A \nabla u) + f(t,x, u) + g_i (t,x,u) \dot{w}^i_t$
% is almost surely H\"older continuous in both space and time variables.
\section{Introduction}
Stochastic partial differential equations (SPDEs) arise in many pure and applied sciences. Regularity of solutions is of central importance for theoretical development as well as for numerical simulations. For linear equations, $W^{k,2}$-theory has been well developed (see Pardoux~\cite{Par07} and Rozovskii~\cite{Roz90}). A more general $W^{2,p}$-theory has been established by Krylov~\cite{Kry96}. Such equations can also be studied from a semigroup point of view (Brze\'zniak, van Neerven, Veraar and Weis~\cite{BNVW08} and Da Prato and Zabczyk~\cite{PZ92}). Results concerning nonlinear equations can be found in Debussche, De Moor and Hofmanov\'a~\cite{DMH14} and Pardoux~\cite{Par75}. In particular, many examples of quasilinear SPDEs with measurable coefficients can be found in the survey monograph edited by Carmona and Rozovskii~\cite{Survey}. Although an obviously important question in applications, the regularity of solutions of quasilinear SPDEs does not seem to have been adequately addressed in the literature.
In this paper we consider the following type of quasilinear SPDEs on $\R^n$:
\begin{equation}
\label{basiceqn}
\partial_t u = \divg ( A \nabla u) + f (t,x, u) + g_i (t,x, u) \dot{w}^i_t,
\end{equation}
where $\{w^i\}$ is a sequence of independent standard Brownian motions on a filtered probability space $(\Omega, \msf_*, \mp)$ and $g = \{g_i\}$ is an $\ell^2$-valued function such that for each fixed $x$ and an $\msf_* = \lbr\msf_t\rbr$-progressively measurable process $h$, the process $g(t,x,h_t)$ is also progressively measurable. We will show that almost surely a (stochastically strong) solution with $L^2$-initial data is H\"older continuous in both space and time variables. The basic assumptions on the SPDE \eqref{basiceqn} are as follows:
(1) uniform ellipticity: $A(t, x; \omega)$ is $\msf_*$-progressively measurable and uniformly elliptic, i.e., there is a positive constant $\lambda$ such that
$$\lambda I\le A(t, x; \omega )\le \lambda^{-1}I, \quad \forall (t,x,\omega) \in \R^+\times \R^n \times \Omega .$$
(2) linear growth: there exist a nonnegative function $K\in L^2(\mr^n) \cap L^{\infty} (\R^n)$
and a positive constant $ \Lambda $ such that
\[
\abs{f(t,x,u)}+ \abs{g (t,x , u)}_{\ell^2} \leq K (x)+ \Lambda \abs{u} ,\quad \forall (t,x, u ) \in \R^+ \times \R^n \times\mr.
\]
We emphasize that no further conditions concerning the continuity of $A$, $f$ or $g$ are imposed.
A function $u = u(t,x;\omega)$ is said to be a (stochastically strong) solution of \rf{basiceqn} if
$u\in C ( \mr_+; W^{1,2}(\mr^n))$ almost surely and satisfies the corresponding partial differential equation (PDE) in the sense that
\[
\ip{u (t) , \varphi} = \ip{u (0) , \varphi}- \int_0^t \ip{ A\nabla u (s), \nabla \varphi } \;ds + \int_{0}^t \ip{ f(u (s) ), \varphi }\;ds +\int_0^t \ip{ g_i(u(s) ) , \varphi}\,dw_s^i
\]
for all $\varphi \in C_{c}^{\infty} (\R^n)$. Here $\langle\cdot, \cdot\rangle$ denotes the standard inner product on $L^2(\mr^n)$. The main result of the current work is the following regularity theorem.
\bigskip
\noindent{\bf Theorem.} {\it
Let $u$ be a solution of the SPDE \rf{basiceqn} with a (deterministic) initial condition $u (0) =u_0\in L^2(\mr^n)$. Then there exists a positive exponent $\alpha=\alpha (n, \lambda, \Lambda)$ such that for all $T>0$ the solution $u \in C^{\alpha} ([T,2T] \times \R^n)$ almost surely. Furthermore, for every $p>0$, there is a constant $C=C (n, \lambda, \Lambda, T, p)$ such that
\[
\me \norm{u}_{C^{\alpha} ([T,2T] \times \R^n) }^p \leq C\smp{\norm{u_0}_{L^2(\mr^n)} +\Vert K\Vert_{L^2(\mr^n)} + \Vert K\Vert_{L^\infty(\mr^n)}}^p.
\]
}
\begin{remark}
In general one needs to enlarge the underlying probability space in order for it to support a strong solution; see Carmona and Rozovskii~\cite{Survey} or Viot~\cite{Viot76} for a detailed exposition.
\end{remark}
The novelty of our result is that we do not impose any assumptions on the smoothness of $A, f$ or $g$.
Indeed, if $A$ and $g$ have some continuity, for example Dini continuity, then the above result follows directly from
Krylov~\cite{Kry96, Kry99}. The approach we adopt in this work is quite different from the usual ones in the study of SPDEs.
Largely motivated by the recent work of Glatt-Holtz, \v{S}ver\'ak and Vicol \cite{GSV13}, rather than relying on abstract or explicit estimates of the solution kernel, we analyze the energy of the solution by a combination of PDE techniques and stochastic analysis. Indeed, our work can be viewed as a stochastic version of De Giorgi-Nash-Moser theory. As such our flexible method is potentially applicable to other types of nonlinear SPDEs.
The paper is organized as follows. In {\sc Section} 2 we present a stochastic modification of De Giorgi's iteration. In {\sc Section} 3 we prove the decay of the tail probability of the solution. The main theorem stated above is proved in the last section.
\medskip
{\sc Acknowledgment.} The authors are very grateful to Professor Nicolai Krylov for the method used in {\sc Section} 4 of passing
from the $L^{\infty}$-bound to H\"older regularity. The authors would like to thank Professors James Norris and \'Etienne Pardoux for the electronic communications during the preparation of this work. The second author would like to thank Professors Luis Silvestre and Benjamin Gess for very helpful comments.
\section{Stochastic De Giorgi iteration}
De Giorgi's iteration is a classical method for studying heat equations with measurable coefficients. In this section we develop a stochastic extension of this method appropriate for the type of SPDEs under investigation. See Caffarelli and Vasseur~\cite{CV10, CV10-1} for an exposition of the classical theory without random perturbation.
Throughout the paper, an $L^p$-norm without specifying a domain is implicitly assumed to be taken on $\R^n$; thus $\norm{ K}_p = \norm{ K}_{L^p(\R^n)}$. For a time interval $I \subset \R^+$, we define the norm
\[
\norm{u}_{p_1, p_2, I} := \norm{ u }_{L^{p_1} ( I, L^{p_2 } (\R^n))} = \smp{ \int_{I} \norm{u}_{p_2}^{p_1} \;dt }^{1/p_1}.
\]
The norm most relevant for this paper is $\Vert \cdot\Vert_{4,2, I}$.
Let $I_k = [1-2^{-k}, 2]$, a sequence of time intervals shrinking from $[0,2]$ to $[1,2]$. For each $a \geq 1$, write $u_{k,a} = (u-a(1-2^{-k}))^+$ and let
\[
U_{k,a}:= \norm{u_{k,a}}_{4,2, I_k}^2 = \sqrt{\int_{I_k} \norm{u_{k,a}}_2^4\;dt}.
\]
For simplicity we will denote $f(t,x,u)$ and $g(t,x, u)$ by $f(u)$ and $g(u)$, respectively. We have the following iterative inequality.
\begin{prop}
\label{S itr prop} Assume that $\Vert K\Vert_\infty\le 1$. There exists a constant $C = C(n, \lambda, \Lambda)$ such that
\begin{equation}\label{S itr prop ineq}
U_{k,a} \leq \frac{C^k }{a^{2/(n+1)}} \smp{U_{k-1,a} + X^*_{k-1,a}} U_{k-1,a}^{1/(n+1)},
\end{equation}
where
\begin{equation}
\label{def X}
X_{k-1,a}^* = \sup_{1-2^{-k}\leq s\leq t \leq 2}\int_s^t \ip{g_i (u(\tau)) , u_{k,a}(\tau) }\;dw_\tau^i.
\end{equation}
\end{prop}
\begin{proof} H\"older's inequality with the conjugate exponents $(n+1)/n$ and $n+1$ gives
\begin{equation}
\Vert u_{k,a} (t)\Vert_2^2 \leq \norm{ u_{k,a} (t)}^{2}_{2(n+1)/n} \cdot \abs{ \lbr u_{k,a} ( t) > 0 \rbr}^{1/(n+1)}.
\label{holder}
\end{equation}
Using Chebyshev's inequality, we have
\[
\abs{\lbr u_{k,a} ( t) > 0\rbr }= \abs{\lbr u_{k-1,a} ( t) > 2^{-k}a \rbr}\leq
\left(\frac{2^k}a\right)^2\norm{u_{k-1,a} (t)}^2_2.
\]
Squaring \rf{holder} and integrating with respect to $t$ on $I_k$ we have
\[
U_{k,a}^2 \leq \left(\frac{2^k}a\right)^{4/(n+1)}\int_{I_k} \norm{u_{k,a} (t)}^4_{2(n+1)/n} \norm{u_{k-1,a}(t) }^{4/(n+1)}_{2} \,dt.
\]
Applying H\"older's inequality again with the same conjugate exponents, we obtain
\begin{equation}
\label{r2 eq 1}
\begin{split}
U_{k,a} & \leq \left(\frac{2^k}a\right)^{2/(n+1)} \smp{ \int_{I_k} \norm{u_{k,a}(t)}^{4(n+1)/n}_{2(n+1)/n} dt }^{n/2(n+1)} \smp{ \int_{I_k} \norm{u_{k-1,a}(t)}_2^4\;dt }^{1/2(n+1)}.
\end{split}
\end{equation}
The third factor on the right side is exactly $U_{k-1,a}^{1/(n+1)}$. The second factor is $\Vert u_{k,a}\Vert_{4(n+1)/n, 2(n+1)/n, I_k}^2$. We claim that
\begin{equation}
\Vert u_{k,a}\Vert_{4(n+1)/n, 2(n+1)/n, I_k}^2\le\sup_{t \in I_{k}}\norm{u_{k,a} (t)}_{2}^2 + \int_{I_{k}} \norm{ u_{k,a} (t) }_{2n/(n-2)}^2 \; dt. \label{interpolation}
\end{equation}
To prove this inequality we use the $L^p_tL^{q}_x$ interpolation inequality
$$\Vert u\Vert_{r_1,r_2, I}\le \Vert u\Vert^\gamma_{p_1, p_2, I}\Vert u\Vert^{1-\gamma}_{q_1, q_2, I}$$
with
$$\frac1{r_1} = \frac{\gamma}{p_1} + \frac{1-\gamma}{q_1}, \quad\frac{1}{r_2} = \frac{\gamma}{p_2}+\frac{1-\gamma}{q_2}.$$
Using this inequality with the parameters
$$r_1 = \frac{4(n+1)}n, \quad r_2 = \frac{2(n+1)}n, \quad p_1 = \infty, \quad q_1 = p_2 = 2, \quad q_2 = \frac{2n}{n-2}, \quad \gamma = \frac{n+2}{2(n+1)}$$
followed by the elementary inequality
$$ab\le \frac{a^p}p + \frac {b^q}q\le a^p + b^q$$
with $p = 2(n+1)/(n+2)$ and $q = 2(n+1)/n$ we obtain \rf{interpolation} immediately.
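For the reader's convenience, here is the parameter bookkeeping (valid for ${n\ge3}$; the case ${n=2}$ is treated in Remark~\ref{n=2 rem}):
$$
\frac{\gamma}{p_1}+\frac{1-\gamma}{q_1}=0+\frac{n}{4(n+1)}=\frac{1}{r_1},
\qquad
\frac{\gamma}{p_2}+\frac{1-\gamma}{q_2}=\frac{n+2}{4(n+1)}+\frac{n-2}{4(n+1)}=\frac{1}{r_2},
$$
and the exponents ${p=2(n+1)/(n+2)}$ and ${q=2(n+1)/n}$ are indeed conjugate, since ${1/p+1/q=1}$.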
Applying the Sobolev inequality on $\mr^n$ to the second term on the right side of \rf{interpolation} and then substituting the result in \eqref{r2 eq 1}, we obtain
\begin{equation}
\label{itr p 1}
U_{k,a} \leq C\left(\frac{2^k}a\right)^{2/(n+1)} \midp{ \sup_{t \in I_{k}}\norm{u_{k,a} (t)}_{2}^2 + \int_{I_{k}} \norm{\nabla u_{k,a} (t) }_{2}^2dt} U_{k-1,a}^{1/(n+1)}.
\end{equation}
We now come to the key step of the proof, namely using It\^o's formula to bound the terms involving the supremum over $I_k$ and the gradient of $u$. The function $h_r(u) = \abs{ (u -r)^+}^2 $ is piecewise smooth, with a single point of discontinuity of its second derivative, and the process $u (t)$ has a martingale part whose quadratic variation process is absolutely continuous with respect to the Lebesgue measure on $\mr_+$. Thus It\^o's formula can be applied to the composition $h_{a(1-2^{-k})} ( u(t) ) = \vert u_{k,a}(t)\vert^2$ and we have
\begin{align}\label{ito}
d\Vert u_{k,a}(t)\Vert^2_{2}= &-2\ip{\nabla u_{k,a}(t), A \nabla u_{k,a} (t) } dt+ 2 \ip{g_i (u) , u_{k,a} (t)} dw^i_t\\
&+\left[\int_{\R^n} \lbr \abs{g (u(t))}^2 +2u_{k,a}(t) f (u(t))\rbr 1_{\{ u_{k,a} (t)>0 \}} dx\right] dt.\nonumber
\end{align}
The uniform ellipticity assumption can be applied to the first term on the right side. For the third term, we observe that if $u_{k,a}\ge 0$, then
$1\leq a \leq 2^{k} u_{k-1,a}$ and $0<u \leq u_{k-1,a} + a \leq (1+ 2^{k}) u_{k-1,a}$. By the linear growth assumption (2) on $f$ and $g$ and $\norm{K}_\infty \leq 1$, the third term is bounded by $C^k\Vert u_{k,a}\Vert_2^2\, dt$ for some $C$. Now, integrating \rf{ito} from $t_0$ to $t$ with $t_0 \in I_{k-1}\setminus I_{k}$ and $t \in I_{k}$ gives
\begin{equation*}
\Vert u_{k,a}(t)\Vert^2_{2} + \lambda \int_{t_0}^t\norm{\nabla u_{k,a}(s)}_2^2ds \leq \Vert u_{k,a} (t_0)\Vert_{2}^2+C^k U_{k-1,a} + \int_{t_0}^t\ip{g_i (u(s)) , u_{k,a}(s)}dw^i_s.
\end{equation*}
Taking supremum over $t \in I_{k} $, we have
\begin{equation}
\label{Itr l2 2}
\begin{split}
& \sup_{ t\in I_k}\Vert u_{k,a} (t)\Vert_{2}^2 + \int_{t_0}^{2} \norm{\nabla u_{k,a} (s) }_{2}^2 \; ds \leq C \Vert u_{k,a} (t_0)\Vert_{2}^2+C^k U_{k-1,a}
+ X^*_{k-1,a}
\end{split}
\end{equation}
with $X^*_{k-1,a}$ as defined in \eqref{def X}.
By the mean value theorem, we can find a $t_0 \in I_{k-1} \setminus I_{k} $ such that
\begin{equation}
\label{mvt}
\norm{u_{k,a} (t_0)}_{2}^2 = \frac{1}{\abs{ I_{k-1} \setminus I_{k}} }\int_{I_{k-1}} \norm{u_{k,a} (t)}_{2}^2 \;dt \leq 2^{k} U_{k-1,a}.
\end{equation}
Combining \eqref{itr p 1}, \eqref{Itr l2 2} and \eqref{mvt}, we obtain the desired iterative inequality \eqref{S itr prop ineq}.
\end{proof}
\begin{remark} \label{n=2 rem}
In the case $n=2$, the proof in this section shows that for any $\mu\in (0, 1/3)$, there is a constant $C = C(n, \lambda, \Lambda, \mu)$ such that
$$U_{k,a} \leq \frac{C^k }{a^{2 \mu}} \smp{U_{k-1,a} + X^*_{k-1,a}} U_{k-1,a}^{\mu}.$$
This is sufficient for estimating the tail probability of $\norm{u}_{\infty}$ in the next section, for all we need is that the factor $U_{k-1,a}$ carries an exponent strictly greater than $1$.
\end{remark}
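The way such an iterative inequality forces ${U_k\to0}$ can be seen numerically. The following sketch runs the recursion with illustrative constants (${C=4}$, ${a=10}$, exponent ${\mu=1/2}$; these values are ours, chosen for demonstration only, not taken from the proposition):

```python
# Superlinear De Giorgi-type recursion: U_k = (C**k / a**(2*mu)) * U_{k-1}**(1+mu).
# The exponent 1+mu > 1 makes small initial data collapse double-exponentially,
# despite the growing factor C**k.  (Illustrative constants, not from the text.)
def iterate(U0, C=4.0, a=10.0, mu=0.5, steps=30):
    U = U0
    for k in range(1, steps + 1):
        U = (C ** k / a ** (2 * mu)) * U ** (1 + mu)
    return U

final = iterate(1e-4)  # underflows to exactly 0.0 well before 30 steps
```

This dichotomy — either $U_{0,a}$ exceeds a threshold or ${U_{k,a}\to0}$, forcing ${u\le a}$ almost everywhere on ${[1,2]\times\R^n}$ — is exactly what drives the tail-probability estimate of the next section.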
\section{Estimate of the tail probability}
In the context of the stochastic De Giorgi iteration, controlling the size of $\Vert u^+\Vert_{\infty, [T, 2T]\times\mr^n}$ means estimating the decay of the tail probability $\mp\lbr \Vert u\Vert_{\infty, [T, 2T]\times\mr^n}\ge a\rbr$. In order to use the iterative inequality in {\sc Proposition} \ref{S itr prop} for this purpose we need to show that $X_{k-1, a}^*$ is comparable with $U_{k-1, a}$. This is accomplished in {\sc Lemma} \ref{A-X} below, whose proof depends on the following simple result from stochastic analysis (see Norris~\cite[page 123]{Nor86}).
\begin{lem} \label{simple} Suppose that $\lbr M_t\rbr$ is a continuous local martingale. Then we have
$$\mp\lbr \sup_{0\le s\le t\le T}(M_t-M_s)\ge a, \, \langle M\rangle_T\le b\rbr\le e^{-a^2/4b}.$$
\end{lem}
\begin{proof} There is a Brownian motion $B$ such that $M_t = B_{\langle M\rangle_t}$, hence the event in the statement implies the event
$\lbr \sup_{0\le t\le b} B_t\ge a/2\rbr$. Since $\sup_{0\le t\le b} B_t$ has the same distribution as $\sqrt b\vert B_1\vert$, we obtain the inequality from the explicit density function of a standard Gaussian random variable.
\end{proof}
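The Gaussian tail estimate behind the last step of this proof can be checked numerically; the sample points below are ours:

```python
import math

# P(B_1 >= x) for a standard Gaussian B_1, via the complementary error function.
def std_gaussian_upper_tail(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Chernoff-type bound P(B_1 >= x) <= exp(-x**2 / 2), valid for x >= 0.
for x in [0.5, 1.0, 2.0, 4.0]:
    assert std_gaussian_upper_tail(x) <= math.exp(-x * x / 2.0)
```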
Consider the continuous martingale
\begin{equation*} \label{spx}
X_t :=\int_{0}^t \ip{g_i (u(s)) , u_{k+1,a}(s)} \;dw^i_s
\end{equation*}
and recall from \rf{def X} that $X_{k,a}^* = \sup_{1-2^{-k}\le s\le t\le 2}(X_t-X_s)$.
\begin{lem}\label{A-X} Assume that $\Vert K\Vert_\infty\leq 1$. There exists a constant $C = C(n, \lambda,\Lambda)$ such that for all positive $\alpha$ and $\beta$,
\[
\mp\lbr X_{k,a}^*\ge\alpha\beta, \; U_{k,a} \leq\beta\rbr \leq {C} \, e^{ -\alpha^2/C^k}.
\]
\end{lem}
\begin{proof} Let $T_k = 1-2^{-k}$ for simplicity. If we can show that there is a constant $C$ such that
\begin{equation}
\label{A-Q}
\langle X\rangle_2 - \langle X\rangle_{T_k} \leq C^k U_{k,a}^2,
\end{equation}
then
$$\lbr X^*_{k, a}\ge\alpha\beta, U_{k,a}\le\beta\rbr\subset\lbr \sup_{T_k\le s\le t\le 2}(X_t-X_s)\ge\alpha\beta,
\langle X\rangle_2 - \langle X\rangle_{T_k}\le C^k\beta^2\rbr$$
and the desired estimate follows immediately from {\sc Lemma} \ref{simple}. To prove \rf{A-Q}, we start with
$$\langle X\rangle_2 - \langle X\rangle_{T_k} = \sum_{i\in\mn}\int_{I_k}\langle g_i(u), u_{k+1,a}\rangle^2\, ds,$$
which follows from the definition of $X_t$.
We observe that if ${u_{k+1,a}>0}$, then
$1\leq a \leq 2^{k+1} u_{k,a}$ and ${0<u \leq u_{k,a} + a \leq (1+ 2^{k+1}) u_{k,a}}$. By Minkowski's inequality (integral form) and the linear growth assumption (2) on $f$ and $g$ we have
\begin{equation*}
\label{A-X p1}
\sum_{i\in \N} \smp{ \int_{\R^n } g_i (u) \; u_{k+1,a} \; d x}^2 \leq \smp{
\int_{\R^n} |g (u)| u_{k+1,a} \;d x}^2 \leq C^k \smp{\int_{\R^n}
u_{k,a}^2 \;d x }^2.
\end{equation*}
Integrating over the interval $I_k$ we obtain the desired inequality \rf{A-Q}.
\end{proof}
Armed with the iterative inequality \rf{S itr prop ineq} and the comparison result {\sc Lemma} \ref{A-X} we are in a position to control the size of $\Vert u^+\Vert_{\infty, [1,2]\times\mr^n}$ by estimating its tail probability. It is important that the constant $M_0$ in the following proposition is independent of $a$.
\begin{prop}
\label{Itr Nor} Assume that $\Vert K\Vert_\infty\le 1$. There exists a constant $ M_0 = M_0(n, \lambda, \Lambda)$ such that for all $a \geq 1$ and $ M > M_0$,
\[
\mp\lbr\norm{u^+}_{\infty, [1,2] \times \R^n} > a, \; M \norm{u^+}_{4,2,[0,2]} \leq a\rbr \leq e^{-M^{1/(n+1)}}.
\]
\end{prop}
\begin{proof} As in the classical theory, we start with the observation that $\bigp{\norm{u^+}_{\infty, [1,2] \times \R^n}>a} \subset G_a^c$,
where $G_a = \bigp{\lim_{k \rightarrow \infty} U_{k,a} = 0}$. Consider the events $\mathcal{E}_{k} = \{U_{k,a} \leq (a/M)^2\gamma^k \}$ for a constant $\gamma<1$ to be determined later. Since $\norm{u^+}_{4,2,[0,2]} = \sqrt{U_{0, a}}$, it suffices to prove
\[
\mp\lbr G_a^c\cap\mathcal{E}_0\rbr\le e^{-M^{1/(n+1)}}.
\]
It is clear that
\[
G_{a}^c \subset \bigcup_{k \geq 0} \mathcal{E}_{k}^c \subset \mathcal{E}_{0}^c \cup \midp{ \bigcup_{k \geq 1} \smp{ \mathcal{E}_{k}^c \cap \mathcal{E}_{k-1} } },
\]
which implies
\begin{equation}
\label{Itr Nor 1}
\mp\lbr G_{a}^c \cap \E_{0}\rbr \leq \sum_{k \geq 1} \mp \lbr\mathcal{E}_{k}^c \cap \mathcal{E}_{k-1} \rbr.
\end{equation}
We estimate the probability $\mp\lbr\mathcal{E}_{k}^c \cap \mathcal{E}_{k-1}\rbr $. For simplicity let $\delta = 1/(n+1)$. Let $\alpha = (2C)^{k/2}M^\delta$ and $\beta = a^2\gamma^{k-1}/M^2$ in {\sc Lemma} \ref{A-X}. If $ X^*_{k-1,a} \leq \alpha\beta$ and $U_{k-1,a} \leq \beta $, then by the iterative inequality \rf{S itr prop ineq} in {\sc Proposition \ref{S itr prop}} we have (after canceling $a^{2\delta}$!)
$$U_{k,a} \le\frac{C_1^k}{a^{2\delta}}(\beta + \alpha\beta)\beta^{\delta} = \frac{(C_1\gamma^{\delta})^k(1+(2C)^{k/2}M^\delta)}{\gamma^{1+\delta}M^{2\delta}}\cdot \gamma\beta\le\gamma\beta.$$
The last inequality holds if we choose $\gamma$ sufficiently small such that $(C_1\gamma^{\delta})^k(1+(2C)^{k/2}M^\delta)\le M^\delta$ for all $k\ge 1$ and $M\ge 1$ and then $M$ sufficiently large such that $\gamma^{1+\delta}M^\delta\ge 1$.
Now the above inequality implies that $\mathcal{E}_{k}^c \cap \mathcal{E}_{k-1 } \subset \{ X^*_{k-1,a} >\alpha\beta, U_{k-1,a} \leq \beta \} $. Its probability is estimated by {\sc Lemma} \ref{A-X} and we have
$$\mp \lbr \mathcal{E}_{k}^c \cap \mathcal{E}_{k-1} \rbr \leq C e^{-\alpha^2/C^k} = Ce^{-2^{k}M^{2\delta}}.$$
Using this in \eqref{Itr Nor 1} we obtain, again for sufficiently large $M$,
\[
\mp\lbr G_{a}^c \cap \mathcal{E}_{0}\rbr \leq C\sum_{k=1}^{\infty} e^{-2^{k} M^{2\delta}} \leq e^{- M^{\delta} }.
\]
This completes the proof of {\sc Proposition} \ref{Itr Nor}.
\end{proof}
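The final summation bound, $C\sum_{k\ge 1} e^{-2^k M^{2\delta}} \le e^{-M^\delta}$ for large $M$, is easy to sanity-check numerically. In the sketch below the values $C = 10$ and $n = 1$ (so $\delta = 1/2$) are illustrative choices of ours, not constants from the proof.

```python
import math

# Illustrative parameters (our choice): n = 1 gives delta = 1/(n+1) = 1/2.
C, delta = 10.0, 0.5
for M in (16.0, 64.0, 256.0):
    # sum_{k>=1} C * exp(-2^k * M^(2*delta)), truncated far in the tail;
    # math.exp underflows harmlessly to 0.0 for very negative arguments
    s = sum(C * math.exp(-(2 ** k) * M ** (2 * delta)) for k in range(1, 60))
    assert s <= math.exp(-M ** delta)
```

The doubling exponent $2^k$ makes the series decay super-geometrically, which is why a single factor $e^{-M^\delta}$ absorbs the whole sum once $M$ is large.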
\section{H\"older continuity}
In this section, we prove our main result, namely the almost sure H\"older continuity of the solution of the SPDE \rf{basiceqn}
subject to the conditions stated in {\sc Section} 1. We start with the following moment estimate.
\begin{prop}
\label{Estimate}
Let $u$ be a (stochastically strong) solution of the SPDE \rf{basiceqn} with (non-random) initial data $u (0) =u_0$. Then for every $p>0$
there is a constant $C = C( n, \lambda, \Lambda, T,p)$ such that
\begin{equation*}
\me \int_0^{2T} \norm{u (t)}_2^p \;dt +
\me \norm{u}_{\infty, [T,2T] \times \R^{n} }^p \leq C\smp{ \norm{u_0}_2 + \Vert K\Vert_2+ \Vert K\Vert_\infty }^p.
\end{equation*}
\end{prop}
\begin{proof} By scaling it suffices to consider the case $T=1$, $\Vert K\Vert_2+ \Vert K\Vert_\infty \le 1$, and $\norm{u_0}_2 \leq 1$. We need to show that there exists a constant $C$ (depending on $p$ of course) such that
\begin{equation}\label{4.1}
\me \int_0^{2} \norm{u (t)}_2^p \;dt \leq C\quad\text{and}\quad
\me \norm{u}_{\infty, [1,2] \times \R^{n} }^p \leq C.
\end{equation}
As $\mp$ is a probability measure, we may assume $p \geq 4$. We start with the first inequality.
Let $\varphi (t) = \norm{ u(t)}^2_2 +1$.
By It\^o's formula,
\begin{equation}
\label{cor1 p 1}
d \varphi (t) = \varphi(t)(F(t)\;dt +d G_t),
\end{equation}
where
\[
F(t) = \frac{ - \ip{A\nabla u , \nabla u} + \ip{f (u), u} +\norm{g (u)}_2^2}{ \norm{u}_2^2 +1} \quad \text{and}\quad G_t = \int_0^t \frac{ \ip{ g_i (u), u }}{\norm{u}^2_2+1} \,dw_s^i.
\]
The solution of SDE \eqref{cor1 p 1} is explicitly given by
\[
\varphi (t) = \varphi (0) \exp\midp{\int_0^t F(s) \;ds +G_t - \frac{1}{2}\ip{G}_t }.
\]
By the assumptions we have $F(t) \leq 4 \Lambda^2$ and $\ip{G}_t \leq 2\Lambda^2$ for all $t \leq 2$, hence
\begin{equation*}
\me \varphi^p (t) = \varphi(0)^p\me\exp p\smp{ \int_0^t F(s) \;ds +G_t- \frac{1}{2}\ip{G}_t} \leq C \varphi(0)^p.
\end{equation*}
This implies the first inequality in \eqref{4.1}. Next we show the second inequality in \rf{4.1}. Let
\[
X = \norm{u}_{\infty,[1,2]\times \R^n}\quad\text{and}\quad Y = \smp{\int_0^{2} \norm{u}_2^4\;dt }^{1/4}.
\]
By considering $u$ and $-u$ we have from {\sc Proposition} \ref{Itr Nor},
\begin{equation}\label{yv}
\mp \lbr X > a , Y \leq \frac{a}{M}\rbr \leq 2\, e^{-M^{1/(n+1)}}
\end{equation}
for all $a \geq 1$ and $M \geq M_0$, hence
$$\mp\lbr X> a, Y\le\sqrt a\rbr\le 2\,e^{-a^{1/(2(n+1))}}$$
for $a\geq M_0^2$, assuming that $M_0\geq 1$. By the first inequality in \eqref{4.1}, we have
\[
\me Y^{2p} \leq 2^{(p-2)/2} \me \int_0^2 \norm{u(t)}_2^{2p} \;dt \leq C.
\]
Hence,
\[
\begin{split}
\me \norm{u}_{\infty, [1,2] \times \R^n}^p & = p\int_0^\infty \mp (X > a) a^{p-1} \;da \\
& \leq M_0^{2p} + p\int_{M^2_0}^\infty \mp \smp{ Y> \sqrt{a} } a^{p-1} \;da + p\int_{M_0^2}^\infty \mp\lbr X> a, Y\le \sqrt a\rbr a^{p-1} \;da.
\end{split}
\]
The second term is bounded by $\me Y^{2p}$, and the third term is finite by the bound following \rf{yv}. This proves the second inequality in \eqref{4.1}.
\end{proof}
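The proof above opens with the layer-cake identity $\me X^p = p\int_0^\infty \mp(X>a)\,a^{p-1}\,da$. As a standalone sanity check (our own illustration, not part of the argument), it can be verified numerically for an exponential variable, where both sides equal $\Gamma(p+1)$.

```python
import math

# Layer-cake identity E[X^p] = p * int_0^infty P(X > a) a^{p-1} da,
# checked for X ~ Exp(1): P(X > a) = e^{-a} and E[X^p] = Gamma(p + 1).
p_exp, da = 3, 1e-4
# left Riemann sum of e^{-a} a^{p-1} over (0, 40]; the tail beyond 40 is negligible
tail_integral = sum(math.exp(-a) * a ** (p_exp - 1) * da
                    for a in (i * da for i in range(1, 400_000)))
assert abs(p_exp * tail_integral - math.gamma(p_exp + 1)) < 1e-2
```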
We can now prove our main theorem, which we state again for easy reference.
\begin{theorem}
\label{Thm Holder1} Let $u$ be a solution of the SPDE
$$\partial_t u = \divg ( A \nabla u) + f (t,x, u) + g_i (t,x, u) \dot{w}^i_t$$
whose coefficients satisfy the conditions stated in {\sc Section 1}. Then there exists a positive exponent $\alpha=\alpha (n, \lambda, \Lambda)$ such that almost surely $u \in C^{\alpha} ([T,2T] \times \R^n)$ for all $T>0$. Furthermore, for every $p>0$, there is a constant $C=C (n, \lambda, \Lambda, T, p)$ such that
\[
\me \norm{u}_{C^{\alpha} ([T,2T] \times \R^n) }^p \leq C\smp{\norm{u_0}_{L^2(\mr^n)} +\Vert K\Vert_{L^2(\mr^n)} + \Vert K\Vert_{L^\infty(\mr^n)}}^p.
\]
\end{theorem}
\begin{proof} By scaling it suffices to assume $ \norm{u_0}_2 \leq 1$ and $\Vert K\Vert_\infty+\Vert K\Vert_2\le 1$. Following a suggestion of Professor Nicolai Krylov, we consider the solution $v$ of an SPDE with the same stochastic perturbation but simpler diffusion coefficients:
\[
d_t v =\Delta v\, dt + g_i(u ) dw^i_t, \quad v(2^{-1}) =0 .
\]
The function $\phi = u- v$ satisfies
\begin{equation}
\label{Holder p 1}
\partial_t \phi = \divg (A \nabla \phi ) + f(\phi + v) + \divg(A \nabla v) -\Delta v , \quad \text{ on } [2^{-1},2] \times \R^n.
\end{equation}
From the linear growth assumption (1) for $g$ and {\sc Proposition} \ref{Estimate}, we have
\[
\me \int_{2^{-1}}^{2} \norm{g(u)}_{p }^p \;dt\le C.
\]
According to Krylov's $W^{2,p}$-theory (see Krylov~\cite{Kry96}), $v \in C^{\alpha_1} ([2^{-1},2] \times \R^n)$ for some exponent $\alpha_1$. Furthermore, we have the estimates
\begin{equation}
\label{P H 1}
\me \norm{v}_{C^{\alpha_1} ( [2^{-1}, 2] \times \R^n )}^p \leq C\,\me \norm{ g (u) }^p_{L^{p} ([2^{-1},2] \times \R^n )} \leq C
\end{equation}
and
\begin{equation}
\label{P H 3}
\me \int_{1/2}^2 \norm{D^2 v}_{W^{-1,p}}^p \;dt \le C_p.
\end{equation}
Since \eqref{Holder p 1} does not have a stochastic perturbation, the usual regularity theory (see Lieberman~\cite[Ch. VI]{Lieb}) applies and we have $\phi \in C^{\alpha_2} ( [1, 2] \times \R^n )$ for some small exponent $\alpha_2 \in (0,1) $ and
\[
\begin{split}
\norm{\phi }_{C^{\alpha_2} ( [1,2]\times \R^n )} & \leq C \smp{ \norm{\phi }_{\infty, [2^{-1},2] \times \R^n} + \norm{D^2 v}_{L^p ([1,2] , W^{-1,p} )} } \\
& \leq C \smp{ \norm{u }_{\infty, [2^{-1},2] \times \R^n} +\norm{v }_{\infty, [2^{-1},2] \times \R^n} + \norm{D^2 v}_{L^p ([2^{-1},2] ,
W^{-1,p} )} }.
\end{split}
\]
Using the estimates \eqref{P H 1} and \eqref{P H 3} we conclude that
$\me \norm{\phi}^p_{C^{\alpha_2 } ([1,2] \times \R^n )} \leq C$.
From this inequality, \eqref{P H 1} and $u = \phi+v$, we obtain the desired inequality $\me \norm{u}^p_{C^{\alpha} ([1,2] \times \R^n)} \leq C$
with $\alpha = \min \{\alpha_1, \alpha_2\}$.
\end{proof}
% arXiv:1312.3311 --- "Stochastic De Giorgi Iteration and Regularity of Stochastic Partial Differential Equation" (math.AP; math.PR)
% arXiv:1911.05799 --- "Congruences satisfied by eta-quotients"
% Abstract: The values of the partition function, and more generally the Fourier
% coefficients of many modular forms, are known to satisfy certain congruences.
% Results given by Ahlgren and Ono for the partition function and by Treneer for
% more general Fourier coefficients state the existence of infinitely many
% families of congruences. In this article we give an algorithm for computing
% explicit instances of such congruences for eta-quotients. We illustrate our
% method with a few examples.
\section{Introduction}\label{sec:intro}
The partition function $p(n)$ gives the number of non-increasing sequences of positive integers that sum to $n$. Ramanujan first discovered that $p(n)$ satisfies
\begin{align*}
p(5n+4)&\equiv 0\pmod{5},\\
p(7n+5)&\equiv 0\pmod{7}, \;\mathrm{and}\\
p(11n+6)&\equiv 0\pmod{11},
\end{align*}
for all positive integers $n$. His work inspired broad interest in other linear congruences for the partition function, and further results were found modulo powers of small primes, especially 5, 7 and 11 (see \cite{ahlgrenono} for references). Ono \cite{ono} showed more generally that infinitely many congruences of the form $p(An+B)\equiv 0\pmod{\ell}$ exist for each prime $\ell\geq 5$. This work was extended by Ahlgren \cite{ahlgren} to composite moduli $m$ coprime to $6$. While these works did not provide direct computation of such congruences, many explicit examples were given by Weaver \cite{weaver} and Johansson \cite{johansson} for prime moduli $\ell$ with $13\leq\ell\leq 31$, and by Yang \cite{yang} for $\ell=37$. All of these results give congruences for arithmetic progressions which lie within the class $-\delta_\ell$ modulo $\ell$, where $\delta_\ell=\frac{\ell^2-1}{24}$.
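For concreteness, Ramanujan's three congruences can be verified numerically. The sketch below (our own illustration, not part of the results of this article) computes $p(n)$ with Euler's pentagonal-number recurrence.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n):
    """Partition number p(n) via Euler's pentagonal-number recurrence:
    p(n) = sum_{k>=1} (-1)^{k+1} [p(n - k(3k-1)/2) + p(n - k(3k+1)/2)]."""
    if n < 0:
        return 0
    if n == 0:
        return 1
    total, k = 0, 1
    while k * (3 * k - 1) // 2 <= n:
        sign = -1 if k % 2 == 0 else 1
        total += sign * (p(n - k * (3 * k - 1) // 2) +
                         p(n - k * (3 * k + 1) // 2))
        k += 1
    return total

for m in range(500):          # warm the cache to keep recursion shallow
    p(m)

assert [p(m) for m in range(10)] == [1, 1, 2, 3, 5, 7, 11, 15, 22, 30]
assert all(p(5 * m + 4) % 5 == 0 for m in range(40))
assert all(p(7 * m + 5) % 7 == 0 for m in range(40))
assert all(p(11 * m + 6) % 11 == 0 for m in range(40))
```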
Here we will focus on congruences like those of the form guaranteed by Ahlgren and Ono in \cite{ahlgrenono}, which lie outside the class $-\delta_\ell$ modulo $\ell$.
More precisely, we consider the following result as stated in \cite{treneer1}.
\begin{theorem}\label{thm:partition}
Let $\ell \geq 5$ be a prime. Then for a positive proportion of the primes $Q\equiv -1 \pmod{576\ell^j}$, we have
\[
p\left (\frac{Q^3n+1}{24}\right )\equiv 0 \pmod{\ell^j}
\]
for all $n$ coprime to $\ell Q$ such that $n\equiv 1 \pmod{24}$ and $\kro{-n}{\ell}
\neq \kro{-1}{\ell}$.
\end{theorem}
Let $\eta(z)$ denote the Dedekind eta function, which is given by
\[
\eta(z) = q^{\tfrac1{24}}\prod_{n=1}^\infty \left(1-q^n\right),
\qquad q=e^{2\pi iz}.
\]
The proof of Theorem~\ref{thm:partition} relies on the relationship between
$p(n)$ and $\eta(z)$, given by
\[
\sum_{n \equiv -1 \pmod{24}} p\left(\frac{n+1}{24}\right)q^n =
\frac{1}{\eta(24z)}.
\]
The function on the right is an example of an \emph{eta-quotient}, a particular
class of weakly holomorphic modular forms built from $\eta(z)$ (see
Definition~\ref{def:etaquotient} below for details).
This class includes generating functions for a wide variety of partition
functions, including overpartitions, partitions into distinct parts, colored
partitions, and core partitions.
Methods from the theory of modular forms, Hecke operators, and twists by
quadratic characters, are used to derive congruences for the coefficients of
these modular forms.
Moreover, the fourth author \cite{treneer1} showed that congruences such as
those in Theorem \ref{thm:partition} are common to the coefficients of all
weakly holomorphic modular forms in a wide class.
\begin{theorem}[Theorem 1.2 in \cite{treneer1}]\label{thm:thm12}
Suppose that $\ell$ is an odd prime and that $k$ is an odd integer. Let $N$ be a
positive integer with $4\mid N$ and $(N,\ell)=1$. Let $K$ be a number field with
ring of integers $\mathcal{O}_K$, and suppose that $f=\sum_n a(n)q^n\in
M_{k/2}^{wh}\left(\Gamma_1(N)\right) \cap \mathcal{O}_K((q))$.
Let $f$ satisfy Condition C at $\ell$. Then for each positive integer $j$, a
positive proportion of the primes $Q\equiv -1\pmod{N\ell^j}$ satisfy the congruence
\[
a\left(Q^3n\right)\equiv 0\pmod{\ell^j}
\]
for all $n \geq 1$ coprime to $\ell Q$ such that $\kro{-n}{\ell} \neq \varepsilon_\ell$.
\end{theorem}
For a description of Condition C and the numbers $\varepsilon_\ell$, see
Definition~\ref{def:conditionC}.
A similar result holds for integral weight modular forms.
\begin{theorem}[Theorem 1.3 in \cite{treneer1}]\label{thm:thm13}
Suppose that $\ell$ is an odd prime and that $k$ is an integer. Let $N$ be a
positive integer with $(N,\ell)=1$.
Let $K$ be a number field with ring of integers $\mathcal{O}_K$, and suppose
that $f=\sum_n a(n)q^n\in M_{k}^{wh}\left(\Gamma_1(N)\right) \cap \mathcal{O}_K((q))$.
Let $f$ satisfy Condition C at $\ell$. Then for each positive integer $j$,
a positive proportion of the primes $Q\equiv -1 \pmod{N\ell^j}$ have the property that
\[
a(Qn)\equiv 0\pmod{\ell^j}
\]
for all $n \geq 1$ coprime to $\ell Q$ such that $\kro{-n}{\ell} \neq \varepsilon_\ell$.
\end{theorem}
We call a prime $Q$ as in Theorem~\ref{thm:thm12} or Theorem~\ref{thm:thm13} an
\emph{interesting} prime (with respect to $f, \ell$ and $j$).
While these results establish the existence of infinitely many interesting
primes, they do not exhibit any of them explicitly, and in general no simple
description of them is known.
A notable exception is the work of Atkin \cite{atkin}, in which it is shown that
every prime $Q\equiv -1\pmod{\ell}$ yields congruences as in Theorem
\ref{thm:partition} when $\ell\in\{5,7,13\}$ and $j=1$.
The main goal of this paper is to present results and algorithms that allow us
to find interesting primes $Q$, and hence prove explicit congruences for the
coefficients of eta-quotients. While we only explicitly state congruences for
select eta-quotients, we remark that our algorithms are general and, with enough
computing power, would work for any eta-quotient.
\medskip
We start by presenting some background on weakly holomorphic modular forms in
general and eta-quotients in particular. We proceed from there to discuss
various computations we carry out with eta-quotients: we recall how to compute
coefficients of a general eta-quotient and we discuss how to compute the
expansion of a general eta-quotient at cusps other than $\infty$.
In the subsequent section, we prove the theorems related to the computation of
interesting primes.
We conclude by describing the algorithms given by these theorems and presenting
some examples.
In particular, we give new explicit congruences for the partition function.
\section{Background and preliminaries}
Given a positive integer $N$ divisible by $4$ and an integer $k$, we let
$M_{k/2}^{wh}\left(\Gamma_1(N)\right)$ denote the space of weakly holomorphic modular forms for
$\Gamma_1(N)$ of weight $k/2$.
We do not make hypotheses on the parity of $k$, in order to handle both
half-integral and integral weight cases at the same time.
Given a Dirichlet character $\chi$ modulo $N$ we denote by $M_{k/2}^{wh}\left(\Gamma_0(N),\chi\right)$ the
corresponding subspace.
We
denote by $G$ the four-fold cover of $\mathrm{GL}_2^+(\mathbb{Q})$ acting on these spaces by the
slash operator $\mid_{\tfrac k2}$. We refer the reader to \cite{koblitz} for details.
We consider the Fourier expansion of a weakly holomorphic modular form $f \in
M_{k/2}^{wh}\left(\Gamma_0(N),\chi\right)$ at a cusp $s=a/c$ in $\mathbb{Q}\cup \{\infty\}$.
To such an $s$ we associate an element $\xi_s = (\gamma_s,\phi) \in G$ with
$\gamma_s \in \mathrm{SL}_2(\mathbb{Z})$ such that $\gamma_s \cdot \infty=s$.
Then in \cite[p. 182]{koblitz} it is shown that
\begin{equation}\label{eq:FE}
f(z) \mid_{\tfrac k2} \xi_s
= \sum_{n \geq n_0} a_{\xi_s}(n) q_{h_s}^{n+\tfrac{r_s}{4}},
\end{equation}
where $h_s = \tfrac{N}{\gcd(c^2,N)}$ is the width of $s$ as a cusp of
$\Gamma_0(N)$, $r_s \in \{0,1,2,3\}$ depends on $h_s, k$ and $\chi$ (see
\cite[Proposition 2.8]{treneer3} for an explicit formula) and $
q_{h_s}=\exp(2\pi i z/h_s)$.
While the value of the coefficient $a_{\xi_s}(n)$ in the expansion depends on
$\xi_s$, whether or not the coefficient is zero does not depend on $\xi_s$.
The theorems in \cite{treneer1} about congruences mod $\ell^j$ are subject to
whether the modular form we are considering satisfies the following
condition for $\ell$:
\begin{definition}\label{def:conditionC}
We say that $f$ satisfies \emph{Condition C} for a prime $\ell$ if there exists a
sign $\varepsilon_\p = \pm 1$ such that for each cusp $s$, the following is true for
the Fourier expansion in \eqref{eq:FE}: for all $n<0$, with $\ell\nmid (4n+r_s)$
and $a_{\xi_s}(n)\neq 0$, we have
\[
\kro{4n+r_s}{\ell}= \varepsilon_\p\kro{h_s}{\ell}.
\]
\end{definition}
We note that a modular form $f$ can satisfy Condition C for a prime $\ell$ in a
trivial way: namely, if for each cusp $s$ there is no $n<0$ with
$a_{\xi_s}(n)\neq 0$ and $\ell\nmid (4n+r_s)$.
In that case, we set $\varepsilon_\p = 0$.
We also note that when $s=\infty$ we have $r_s=0$ and $h_s=1$, and so Condition
C becomes $\kro{n}{\ell}=\varepsilon_\p$ for all $n<0$ with $\ell\nmid n$ and $a(n)\neq 0$.
\medskip
We conclude these preliminaries with the following result regarding the order of
vanishing after twisting, which will be needed later.
\begin{lemma}\label{lem:vanishing_twist}
Let $f \in M_{k/2}^{wh}\left(\Gamma_0(N),\chi\right)$.
Let $\ell$ be a prime not dividing $N$ and let $\psi$ be a character modulo $\ell$.
Let $s$ be a cusp with respect to $\Gamma_0(N \ell^2)$.
Then
\[
\operatorname{ord}_s(f\otimes \psi) \geq v_0,
\]
where $v_0 = \min\{\operatorname{ord}_t(f): t \text{ a cusp with respect to } \Gamma_0(N)\}$.
\end{lemma}
\begin{proof}
We follow \cite[Lemma 2, p. 127]{koblitz}.
In \cite[p. 128]{koblitz} it is shown that there exist complex numbers
$\lambda_\nu$ depending on $\psi$ such that
\[
f \otimes \psi = \sum_{\nu = 0}^{\ell-1}
\lambda_\nu \, f_\nu,
\]
where $\xi_\nu = \left(\smat{1 & -\nu/\ell \\ 0 & 1},1\right)$ and
$f_\nu = f \mid_{\tfrac k2} \xi_\nu$.
Let $\xi_s = (\gamma_s,\phi)\in G$, where $\gamma_s \in \mathrm{SL}_2(\mathbb{Z})$ is such
that $\gamma_s \cdot \infty = s$.
Write
\[
\pmat{\ell & -\nu \\ 0 & \ell} \gamma_s = \alpha \pmat{a & b \\ 0 & d},
\]
with $\alpha \in \mathrm{SL}_2(\mathbb{Z})$ and $a,d \in \mathbb{Z}$.
Then if we let $t = \alpha \cdot \infty$ and we consider it as a cusp with
respect to $\Gamma_0(N)$, it is easy to see that $a\in \{1,\ell\}$, which since
$\ell \nmid N$ implies that $h_t = \frac ad \, h_s$.
Let $\sum_{n \geq v_0} a_t(n) q^{n+\tfrac{r_t}4}_{h_t}$
be a Fourier expansion of $f$ at $t$.
Then
\[
f_\nu \mid_{\tfrac k2} \xi_s =
\left(a/\ell\right)^{k/2}
e^{\frac{2\pi i b r_t}{4 h_t d}}
\sum_{n \geq v_0}
a_t(n) \,
e^{\frac{2\pi i n b}{h_t d}}
q_{h_s}^{n + \tfrac{r_t}4},
\]
which shows that $\operatorname{ord}_s \left(f_\nu\right) \geq v_0$ (and that $r_s =
r_t$).
\end{proof}
\subsection{Eta-quotients}
Recall that the Dedekind eta function is defined by the infinite product
\begin{equation}\label{eq:eta}
\eta(z) = q^{\tfrac{1}{24}} \prod_{n=1}^\infty \left(1-q^n\right).
\end{equation}
\begin{definition}\label{def:etaquotient}
Let $X = \{(\delta,r_\delta)\}$ be a finite subset of $\mathbb{Z}_{>0} \times \mathbb{Z}$.
Assume that $X$ satisfies
\[
\sum_X r_\delta \delta \equiv 0 \pmod{24}.
\]
By $\eta^X$ we denote the eta-quotient
\begin{equation}\label{eq:etaq}
\eta^X(z) = \prod_X \eta(\delta z)^{r_\delta}.
\end{equation}
\end{definition}
It is proved in \cite[Corollary 2.7]{treneer3} that $\eta^X \in M_{k/2}^{wh}\left(\Gamma_0(N),\chi\right)$,
where $k = \sum_X r_\delta$, the level $N$ is the smallest multiple of $4$ and
of every $\delta$ such that
\[
N \sum_X \frac{r_\delta}{\delta} \equiv 0 \pmod{24},
\]
and letting $s = \prod_X\delta^{r_\delta}$ the character
is given by
\[
\chi(d) =
\begin{cases}
\kro{2s}{d}, & \text{for odd } k, \\
\kro{4s}{d}, & \text{for even } k. \\
\end{cases}
\]
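The minimal level in this statement is easy to compute in practice. The sketch below (the function name is ours, and the character is omitted) searches through multiples of $\operatorname{lcm}(4,\{\delta\})$ for the smallest admissible $N$.

```python
from math import gcd

def eta_quotient_level(X):
    """Smallest N divisible by 4 and by every delta in X such that
    N * sum(r_delta / delta) == 0 (mod 24).  X is a list of (delta, r_delta)."""
    base = 4
    for delta, _ in X:
        base = base * delta // gcd(base, delta)   # lcm of 4 and all the deltas
    N = base
    # N is a multiple of every delta, so N * r // delta is exact
    while sum(N * r // delta for delta, r in X) % 24:
        N += base
    return N

assert eta_quotient_level([(24, -1)]) == 576   # level of 1/eta(24z)
assert eta_quotient_level([(1, 24)]) == 4      # eta(z)^24 in this 4 | N setting
```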
For our calculations, we will need the order of vanishing of an eta-quotient
$\eta^X$ at a cusp $s = a/c \in \mathbb{Q}\cup\{\infty\}$ with respect to the variable
$q_{h_s}$.
Following \cite{ligozat} we have that, if $\gcd(a,c) = 1$, then
\begin{equation}\label{eqn:ligozat}
\operatorname{ord}_s\left(\eta^X\right) =
\frac{h_s}{24} \, \sum_X \gcd(c,\delta)^2 \, \frac{r_\delta}{\delta}.
\end{equation}
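Ligozat's formula can be evaluated exactly with rational arithmetic. The sketch below is our own illustration (the cusp $\infty$ is represented by $c = 0$); it recovers, for instance, the simple pole of $1/\eta(24z)$ at $\infty$.

```python
from fractions import Fraction
from math import gcd

def ord_cusp(X, N, c):
    """Order of vanishing of eta^X at a cusp a/c of Gamma_0(N) (gcd(a, c) = 1),
    in the variable q_{h_s}, via Ligozat's formula; infinity is c = 0."""
    h = Fraction(N, gcd(c * c, N))            # width of the cusp
    return h * sum(Fraction(gcd(c, delta) ** 2 * r, delta) for delta, r in X) / 24

# 1/eta(24z) has a simple pole at infinity (its expansion starts at q^{-1})
assert ord_cusp([(24, -1)], 576, 0) == -1
# eta(z)^24 = Delta, viewed with level 4: a simple zero at infinity...
assert ord_cusp([(1, 24)], 4, 0) == 1
# ...and a zero of order 4 at the cusp 0, since Delta ~ q = q_4^4 there
assert ord_cusp([(1, 24)], 4, 1) == 4
```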
\section{Computing eta-quotients}\label{sec:computing}
To verify that a weakly holomorphic modular form $f$ satisfies Condition C, we
need to be able to compute the Fourier expansion of $f$ at cusps other than
$\infty$.
This is addressed by \cite{collins,dickson} in the case of holomorphic forms.
Rather than extending these general methods to arbitrary weakly holomorphic
modular forms, we limit our attention to forms that are eta-quotients.
For an eta-quotient we can take advantage of the very simple nature of $\eta(z)$
in two different ways. First, we can quickly compute coefficients of $\eta(z)$
at $\infty$ using Euler's Pentagonal Number Theorem:
\begin{equation}\label{eq:pentagonal}
\prod_{n=1}^\infty (1-x^n)
= 1+ \sum_{k=1}^\infty (-1)^k \left(x^{k(3k+1)/2}+x^{k(3k-1)/2}\right).
\end{equation}
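Euler's identity above can be cross-checked directly: the sketch below (function names are ours) compares the truncated product, expanded term by term, with the sparse pentagonal-number series.

```python
def product_coeffs(N):
    """Coefficients of prod_{n>=1} (1 - x^n), truncated at degree N,
    by direct polynomial multiplication."""
    c = [0] * (N + 1)
    c[0] = 1
    for n in range(1, N + 1):
        for i in range(N, n - 1, -1):   # multiply by (1 - x^n) in place
            c[i] -= c[i - n]
    return c

def pentagonal_coeffs(N):
    """The same coefficients from Euler's pentagonal number theorem:
    nonzero only at the generalized pentagonal numbers k(3k+-1)/2, sign (-1)^k."""
    c = [0] * (N + 1)
    c[0] = 1
    k = 1
    while k * (3 * k - 1) // 2 <= N:
        sign = (-1) ** k
        for e in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
            if e <= N:
                c[e] += sign
        k += 1
    return c

assert product_coeffs(60) == pentagonal_coeffs(60)
```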
Second, we can take advantage of the transformation properties of $\eta(z)$
under the action of $G$ to compute the expansion of $\eta(z)$ at any
cusp $s$ as in \eqref{eq:FE}. We describe this in more detail now.
Let $\eta^X$ be an eta-quotient as in \eqref{eq:etaq}. Let $s \in \mathbb{Q}$ be a
cusp, and let $\xi_s = (\gamma_s,\phi) \in G$ be such that $\gamma_s \cdot
\infty=s$.
Then
\[
\eta^X(z) \mid_{\tfrac k2} \xi_s
= \prod_X \left(\eta(\delta z)\mid_{\frac12}\xi_s\right)^{r_\delta}
\]
and so our problem reduces to computing $\eta(\delta z)\mid_{\frac12} \xi$ for a
given $\xi = (\gamma,\phi) \in G$, with $\gamma = \smat{a & b \\ c & d} \in
\mathrm{SL}_2(\mathbb{Z})$.
Since the calculations depend only on $\gamma$, by abuse of notation we slash by
matrices of $\mathrm{GL}_2^+(\mathbb{Q})$ rather than by elements of $G$.
We have that
\[
\eta(z)\mid_{\frac12} \smat{\delta & 0 \\0& 1} = \delta^{\frac14}\eta(\delta z).
\]
Furthermore, we can write
\[
\smat{\delta & 0 \\0& 1}\gamma =
\gamma' \smat{A&B\\0&D}
\]
where $\gamma'\in\mathrm{SL}_2(\mathbb{Z}), \, A=\gcd(c,\delta), \, D=\tfrac{\delta}{A}$, and
$B$ is an integer so that $\delta \mid (Ad-Bc)$.
We have that
\[
\eta(z)\mid_{\frac12}\gamma' = \varepsilon_{\gamma'} \, \eta(z),
\]
where $\varepsilon_{\gamma'}$ is a 24\textsuperscript{th} root of unity given
explicitly in terms of $\gamma'$ in \cite[(74.93)]{rademacher}. Then
\begin{align*}
\eta(\delta z)\mid_{\frac12} \gamma
&= \delta^{-\frac14}\eta(z)\mid_{\frac12} \smat{\delta & 0 \\0& 1}\gamma
=\delta^{-\frac14}\eta(z)\mid_{\frac12}\gamma'
\smat{ A & B \\0& D}\\
&= \delta^{-\frac14}\varepsilon_{\gamma'}\eta(z)\mid_{\frac12}
\smat{A & B \\0& D}
= \delta^{-\frac14}\varepsilon_{\gamma'}
\left(\tfrac{A}{D}\right)^{\frac14}\eta\left(\tfrac{Az+B}{D}\right)
=D^{-\frac12}\varepsilon_{\gamma'}\eta\left(\tfrac{Az+B}{D}\right).
\end{align*}
Using \eqref{eq:eta} we conclude that
\begin{equation}\label{eq:etadelta}
\eta(\delta z)\mid_{\frac12}\gamma
= D^{-\frac12} \, \varepsilon_{\gamma'} \,
e^{2\pi i B/(24D)} \, q^{A/(24D)} \,
\prod_{n=1}^\infty \left(1-e^{2\pi i n B/D}q^{An/D}\right),
\end{equation}
and we compute the infinite product using \eqref{eq:pentagonal}.
Finally, to compute the Fourier expansion of $\eta^X$ at the cusp $s$ we expand
each eta factor as in \eqref{eq:etadelta} and multiply these expansions
together.
\section{Results}\label{sec:results}
We first recall the Sturm bound for half-integral weight modular forms.
Given a non-negative integer $M$, denote $\mu_M = [\mathrm{SL}_2(\mathbb{Z}) : \Gamma_0(M)]$.
\begin{prop}\label{prop:sturm}
Let $\kappa$ be a positive integer.
Let $K$ be a number field with ring of integers $\mathcal{O}_K$, and suppose
that $h=\sum_{n \geq 1} c(n) q^n \in
S_{\kappa/2}\left(\Gamma_0(M),\chi\right) \cap \mathcal{O}_K[[q]]$,
with $\chi$ of order $m$.
Let $\ell$ be a prime in $\mathcal{O}_K$.
Let
\begin{equation}\label{eqn:sturm}
n_0 = \left\lfloor\frac{\kappa \mu_M}{24} - \frac{\mu_M-1}{4mM}\right\rfloor
+1.
\end{equation}
If $c(n) \equiv 0 \pmod{\ell^j}$ for $1 \leq n \leq n_0$, then $h \equiv 0
\pmod{\ell^j}$.
\end{prop}
\begin{proof}
When $j = 1$ this follows from the standard Sturm bound for cusp forms of
integral weight and trivial character
(see \cite[Theorem 9.18]{stein})
applied to $h^{4m} \in S_{2\kappa m}\left(\Gamma_0(M)\right)$.
The general case follows by induction on $j$.
\end{proof}
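The bound \eqref{eqn:sturm} only requires the index $\mu_M = M\prod_{p \mid M}(1+1/p)$. A small sketch (function names are ours) computes both quantities; the last assertion matches the value $\mu_{576\cdot 5^2} = 34560$ used in the examples below.

```python
from fractions import Fraction
from math import floor

def index_gamma0(M):
    """mu_M = [SL_2(Z) : Gamma_0(M)] = M * prod over primes p | M of (1 + 1/p)."""
    num, den, m, p = M, 1, M, 2
    while p * p <= m:
        if m % p == 0:
            num, den = num * (p + 1), den * p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                    # leftover prime factor
        num, den = num * (m + 1), den * m
    return num // den

def sturm_n0(kappa, M, m):
    """n_0 = floor(kappa * mu_M / 24 - (mu_M - 1) / (4 m M)) + 1."""
    mu = index_gamma0(M)
    return floor(Fraction(kappa * mu, 24) - Fraction(mu - 1, 4 * m * M)) + 1

assert index_gamma0(4) == 6 and index_gamma0(6) == 12
assert index_gamma0(576 * 25) == 34560
assert sturm_n0(3, 4, 1) == 1            # floor(3/4 - 5/16) + 1
```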
\begin{theorem}\label{thm:main}
Assume the same hypotheses as in Theorem~\ref{thm:thm12} and that $f$ is an
eta-quotient.
Then for each positive integer $j$ there exist effectively computable
non-negative integers $\kappa$ and $n_0$ such that, if
$Q$ is a prime with $Q \equiv -1 \pmod {N\ell^j}$ such that
\begin{equation}\label{eqn:the_condition}
a(Q^2 n) +
\kro{(-1)^{\tfrac{\kappa-1}{2}}n}{Q} Q^{\tfrac{\kappa-3}{2}} a(n) +
Q^{\kappa-2} a\kro n{Q^2}
\equiv 0 \pmod {\ell^j}
\end{equation}
for every $1\leq n \leq n_0$ such that $\ell\nmid n$ and $\kro n\ell \neq \varepsilon_\p$,
then $Q$ is an interesting prime.
\end{theorem}
\begin{proof}
Let $F_\ell$ denote the eta-quotient given by
\[
F_\ell(z) = \begin{cases}
\tfrac{\eta^{27}(z)}{\eta^3(9z)} & \text{if } \ell = 3, \\
\tfrac{\eta^{\ell^2}(z)}{\eta(\ell^2 z)} & \text{if } \ell \geq 5.
\end{cases}
\]
We have that $F_\ell \in M_{k_\ell/2}(\Gamma_0(\ell^2))$, where
\begin{equation}\label{eqn:kp}
k_\ell = \begin{cases}
24, & \ell = 3, \\
\ell^2-1, & \ell \geq 5.
\end{cases}
\end{equation}
Furthermore, it satisfies
\begin{equation}\label{eqn:congruence_Fp}
{F_\ell}^{\ell^j-1} \equiv 1 \pmod{\ell^j}, \quad j \geq 1.
\end{equation}
Consider $F_\ell$ as a form with level $\ell^2 N$.
Then if $s = a/c$ is a cusp with $c \mid \ell^2 N$ and $\ell^2 \nmid c$, by
\eqref{eqn:ligozat} the order of vanishing of $F_\ell$ at $s$ is given by
\begin{equation}\label{eqn:vanishing_Fp}
\operatorname{ord}_s(F_\ell) =
\frac{N}{\gcd(c^2,N)} \cdot
\begin{cases}
10 & \text{if } \ell = 3 \text{ and } \ell \nmid c, \\
1 & \text{if } \ell = 3 \text{ and } \ell \mid c, \\
\tfrac{\ell^4-1}{24} & \text{if } \ell \geq 5 \text{ and } \ell\nmid c, \\
\tfrac{\ell^2-1}{24} & \text{if } \ell \geq 5 \text{ and } \ell\mid c.
\end{cases}
\end{equation}
Let $\chi$ denote the character of $f$.
We consider
\[
f_\ell = f \otimes \chi_\ell^{triv} - \varepsilon_\p f \otimes \kro{\cdot}\ell
\quad \in M_{k/2}^{wh}\left(\Gamma_0\left(N\p^2\right),\chi\right),
\]
where $\varepsilon_\p$ is given by Condition C, and $\chi_\ell^{triv}$
denotes the trivial character modulo $\ell$.
Then $f_\ell$ vanishes at every cusp $s = a/c$
with $c \mid \ell^2N$ and $\ell^2 \mid c$
(see \cite[Proposition 3.4]{treneer1}).
Let $v_0$ be the integer given by $f$ following Lemma
\ref{lem:vanishing_twist}.
This number can be computed using \eqref{eqn:ligozat}, since $f$ is an
eta-quotient.
Using \eqref{eqn:vanishing_Fp}, we compute a (small as possible) integer
$\beta$ such that $\beta \geq j-1$ and
\[
\ell^\beta \operatorname{ord}_s(F_\ell) > -v_0
\]
for every cusp $s = a/c$ such that $c \mid \ell^2 N$ and
$\ell^2 \nmid c$.
Furthermore, we let $\kappa = k + \ell^\beta k_\ell$, and we assume that $\beta$ is
such that $\kappa > 0$.
We consider
\[
g_{\ell,j} = \tfrac12 \, f_\ell \cdot {F_\ell}^{\ell^\beta}
\quad \in M_{\kappa/2}^{wh}\left(\Gamma_0\left(N\p^2\right),\chi\right).
\]
Then $g_{\ell,j}$ vanishes at every cusp.
Furthermore, by \eqref{eqn:congruence_Fp} it satisfies that
\begin{equation}\label{eqn:congruence_hp}
g_{\ell,j} \equiv \sum_{\ell \nmid n,\,\kro n\ell \neq \varepsilon_\p} a(n) q^n \pmod{\ell^j}.
\end{equation}
Let $Q$ be a prime such that $Q \equiv -1 \pmod{N \ell^j}$, and let
\[
h_{\ell,j} = g_{\ell,j} \vert T_{\kappa/2,N\p^2}\left(Q^2\right)
\quad \in S_{\kappa/2}\left(\Gamma_0\left(N\p^2\right),\chi\right).
\]
Write $h_{\ell,j} = \sum_{n=1}^\infty c(n) q^n$.
The formulas for the action of $T_{\kappa/2,N\p^2}\left(Q^2\right)$ in terms of
Fourier coefficients imply that if $h_{\ell,j} \equiv 0 \pmod{\ell^j}$, then $Q$
is an interesting prime (see the the proof of \cite[Theorem 1.2]{treneer1}
for details).
Moreover, by these formulas and \eqref{eqn:congruence_hp} we have that $c(n)
\equiv 0 \pmod{\ell^j}$ if $\ell \mid n$ or if $\kro n\ell = \varepsilon_\p$, and otherwise
$c(n)$ equals the left hand side of \eqref{eqn:the_condition}, modulo
$\ell^j$.
Hence the result follows from Proposition~\ref{prop:sturm}, taking
$n_0$ as in \eqref{eqn:sturm} with $M = N\ell^2$.
\end{proof}
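The piecewise formula \eqref{eqn:vanishing_Fp} used in this proof can be cross-checked against Ligozat's formula \eqref{eqn:ligozat}. The sketch below (our own; it loops over one representative $c$ per divisor of $\ell^2 N$) does so for a few pairs $(\ell, N)$ with $(N,\ell)=1$.

```python
from fractions import Fraction
from math import gcd

def ligozat_ord(X, level, c):
    """Ligozat's formula for the order of eta^X at a cusp a/c of Gamma_0(level)."""
    h = Fraction(level, gcd(c * c, level))
    return h * sum(Fraction(gcd(c, d) ** 2 * r, d) for d, r in X) / 24

def claimed_ord(ell, N, c):
    """Right-hand side of the displayed piecewise formula for ord_s(F_ell)."""
    front = Fraction(N, gcd(c * c, N))
    if ell == 3:
        return front * (10 if c % 3 else 1)
    return front * Fraction(ell ** 4 - 1 if c % ell else ell ** 2 - 1, 24)

for ell, N in [(3, 4), (5, 4), (7, 12)]:
    # F_3 = eta(z)^27 / eta(9z)^3,  F_ell = eta(z)^{ell^2} / eta(ell^2 z) otherwise
    X = [(1, 27), (9, -3)] if ell == 3 else [(1, ell ** 2), (ell ** 2, -1)]
    level = ell ** 2 * N
    for c in range(1, level + 1):
        if level % c == 0 and c % ell ** 2 != 0:
            assert ligozat_ord(X, level, c) == claimed_ord(ell, N, c)
```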
We now consider the integral weight case.
\begin{theorem}
Assume the same hypotheses as in Theorem~\ref{thm:thm13} and that $f$ is an
eta-quotient.
Then for each positive integer $j$ there exist effectively computable
non-negative integers $\kappa$ and $n_0$ such that, if
$Q$ is a prime with $Q \equiv -1 \pmod {N\ell^j}$ such that
\begin{equation}\label{eqn:the_condition13}
a(Q n) + Q^{\kappa-1} a\kro n{Q}
\equiv 0 \pmod {\ell^j}
\end{equation}
for every $1\leq n \leq n_0$ such that $\ell \nmid n$ and $\kro n\ell \neq \varepsilon_\p$,
then $Q$ is an interesting prime.
\end{theorem}
\begin{proof}
We replace $N$ by $\operatorname{lcm}(4,N)$ and consider $f$ as a form of level $N$ and
weight $2k/2$ in order to be able to use \cite[Theorem 3.1]{treneer1}.
We let $g_{\ell,j}$ and $h_{\ell,j}$ be as
in the previous proof, so that if $h_{\ell,j} \equiv 0 \pmod{\ell^j}$ then $Q$ is
an interesting prime.
Write $h_{\ell,j} = \sum_{n=1}^\infty c(n) q^n$. Then if $\ell \mid n$ or $\kro n\ell
=\varepsilon_\p$ we have that $c(n) \equiv 0 \pmod{\ell^j}$, and otherwise $c(n)$
equals the left hand side of \eqref{eqn:the_condition13} modulo $\ell^j$.
The result follows then from Proposition~\ref{prop:sturm}.
\end{proof}
\begin{rmk}
In the proofs of Theorems \ref{thm:thm12} and \ref{thm:thm13}
the existence of interesting primes among the candidates
(i.e., those $Q$ such that $Q\equiv -1 \pmod {N\ell^j}$) is given as
a consequence of Chebotarev's density theorem.
Thus we expect, asymptotically, to have them evenly distributed.
However, in the examples computed in the next section we found that
most of the computationally tractable candidates are in fact interesting.
\end{rmk}
\section{Algorithms and examples}\label{sec:implementation}
We present two algorithms in this section.
First, we describe how to tell if a given eta-quotient satisfies Condition C at
a given prime $\ell$ and, second, following the proof of Theorem \ref{thm:main},
we describe an algorithm for finding interesting
primes given an eta-quotient $f$ and a prime $\ell$ at which $f$ satisfies Condition C
(and the corresponding value of $\varepsilon_\p$).
These algorithms were implemented in \texttt{Sage} (\cite{sagemath}). Our code
is available at \cite{code}, together with some examples and related data.
\begin{algorithm}\label{alg:conditionC}
\LinesNumbered
\KwData{An eta-quotient $f$ of level $N$; an odd prime $\ell$; $\varepsilon_\p \in
\{\pm1 ,0\}$}
\KwResult{True if Condition C is satisfied at $\ell$ with $\varepsilon_\p$, and
False otherwise}
\BlankLine
\For{every cusp $s$ in $\Gamma_0(N)$}
{
$n_0 \leftarrow \operatorname{ord}_s(f)$, using \eqref{eqn:ligozat}\;
\For{$n \leftarrow n_0$ \KwTo $-1$}
{
$c \leftarrow a_{\xi_s}(n)$, computed following Section
\ref{sec:computing}\;
\If{$c \neq 0$}
{
\If{$\ell \nmid (4n+r_s)$ and $\kro{4n+r_s}{\ell} \neq \varepsilon_\p \kro{h_s}{\ell}$}
{\Return False}
}
}
}
\Return True
\BlankLine
\caption{Algorithm for deciding if Condition C is satisfied.}
\end{algorithm}
\begin{algorithm}\label{alg:interesting}
\LinesNumbered
\KwData{An eta-quotient $f=\sum a(n)q^n$ of level $N$ and half-integral weight
$k/2$; an odd prime $\ell$ at which condition C is satisfied and the
corresponding $\varepsilon_\p$;
an integer $j\geq 1$; a candidate $Q$}
\KwResult{True if $Q$ is interesting, and False if
\eqref{eqn:the_condition} does not hold}
\BlankLine
$v_0 \leftarrow \min\{\operatorname{ord}_t(f): t \text{ a cusp with respect to }
\Gamma_0(N)\}$, using \eqref{eqn:ligozat}\;
$\beta \leftarrow j-1$\;
\For{every cusp $s = a/c$ in $\Gamma_0(N\ell^2)$ with $c \mid N\ell^2$}{
\If {$\ell^2 \nmid c$}{
$\operatorname{ord}_s(F_\ell) \leftarrow$ the result of \eqref{eqn:vanishing_Fp}\;
$\log_\ell \leftarrow \left \lceil \log_\ell\left(\frac{-v_0}
{\operatorname{ord}_s(F_\ell)}\right)\right \rceil$\;
$\beta \leftarrow \max\{\beta,\log_\ell\}$\;
}
}
$\kappa\leftarrow k + \ell^\beta k_\ell$, where $k_\ell$ is given by \eqref{eqn:kp}\;
$n_0 \leftarrow$ Sturm bound for $S_{\kappa/2}\left(\Gamma_0\left(N\p^2\right),\chi\right)$, as in Proposition \ref{prop:sturm}\;
\For{$n\leftarrow 1$ \KwTo $n_0$}{\If{$\ell \nmid n$ and $\kro n\ell \neq \varepsilon_\p$}{
\If{$a(Q^2 n) + \kro{(-1)^{\tfrac{\kappa-1}{2}}n}{Q} Q^{\tfrac{\kappa-3}{2}} a(n) + Q^{\kappa-2} a\kro n{Q^2}\not\equiv 0 \pmod {\ell^j}$\label{step:cong}}
{\Return False\;}
}
}
\Return True\;
\BlankLine
\caption{Algorithm for finding interesting primes.}
\end{algorithm}
\begin{rmk}
Algorithm \ref{alg:interesting} works in the integral weight case,
replacing Step \ref{step:cong} according to \eqref{eqn:the_condition13}.
\end{rmk}
\begin{rmk}\label{rmk:fewer_coeffs}
In general, Step \ref{step:cong} of Algorithm \ref{alg:interesting} requires
the computation of $a(n)$ up to the product of $Q^2$ and the Sturm bound.
For many forms of large level and large $Q$ this can be prohibitive.
Strictly speaking, though, the number of coefficients that we need to know
is roughly equal to the Sturm bound, since
Step \ref{step:cong} requires computing $a(Q^2n), a(n)$ and $a(n/Q^2)$ for
$n$ such that $\ell \nmid n$ and $\kro n\ell \neq \varepsilon_\p$, up to the Sturm bound; the
$a(n/Q^2)$'s are included in the $a(n)$'s, and the Kronecker symbol
condition cuts the required number of coefficients in half.
So, if there is some way to compute these coefficients other than
multiplying out the eta-quotient \eqref{eq:etaq}, we can use Algorithm
\ref{alg:interesting} to compute with forms of moderately large Sturm bound
and for moderately large candidates.
\end{rmk}
\begin{rmk}
Given a candidate $Q$, if the output of Algorithm \ref{alg:interesting} is
False, then there exists an $n$ such that \eqref{eqn:the_condition} (or
\eqref{eqn:the_condition13}, in the integral weight case) does not hold.
A priori, this does not imply that the prime $Q$ is not interesting.
Nevertheless in the following examples, at least when it was computationally
feasible, for each $Q$ for which Algorithm \ref{alg:interesting}
returned False, we verified that one of the congruences of Theorem
\ref{thm:thm12} (or Theorem \ref{thm:thm13}, in the integral weight case)
does not hold.
\end{rmk}
\begin{rmk} To the best of our knowledge, the examples computed below are new explicit congruences for these partition functions, except where otherwise noted.
\end{rmk}
\subsection{Examples of half-integral weight}
\subsubsection*{The partition function}
Recall that an eta-quotient of particular interest is related to the
partition function:
\[
\sum_{n \equiv -1 \pmod{24}}
p \left(\frac{n+1}{24}\right) q^n =
\frac{1}{\eta(24z)}
\quad \in M_{-1/2}^{wh}(\Gamma_0(576),\chi).
\]
The first prime at which it satisfies Condition C is $\ell=5$, with $\varepsilon_5 =
1$.
Using the notation from Section~\ref{sec:results}, we can take $\beta=0$.
Then since $\mu_{576\cdot 5^2} = 34560$, the Sturm bound is
\[
\left\lfloor
\frac{23}{24} \cdot 34560 -
\frac{34560-1}{8 \cdot 576\cdot 5^2}
\right\rfloor
= 33119.
\]
The first prime $Q$ such that $Q\equiv -1\pmod{576\cdot 5}$ is $Q=2879$.
In order to run Algorithm \ref{alg:interesting}, \textit{a priori}, we need to
compute $p\left(\frac{n+1}{24}\right)\pmod{5}$ for $1\leq n \leq 33119\cdot
2879^2 \approx 3\cdot 10^{11}$.
That is beyond the scope of our computational abilities.
Following Remark \ref{rmk:fewer_coeffs}, to check if $Q=2879$ is interesting,
since $Q^2 > 33119$
we only need to calculate
\begin{multline*}
p\left(\frac{n+1}{24}\right) \quad \text{ and } \quad
p\left(\frac{Q^2n+1}{24}\right) \pmod5, \\
\kro n5 = -1, \quad 1 \leq n \leq 33119.
\end{multline*}
Moreover, we only need to consider $n\equiv -1\pmod{24}$.
In conclusion, we need to compute
\begin{multline*}
p\left(\frac{n+1}{24}\right) \quad\text{ and }\quad
p\left(\frac{Q^2n+1}{24}\right) \pmod5, \\
n\equiv 23, 47 \pmod{120},\quad 1\leq n \leq 33119.
\end{multline*}
These individual coefficients can be computed independently (and in parallel)
using fast algorithms for computing partition numbers.
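The reduction to the classes $23, 47 \pmod{120}$ can be verified mechanically; a minimal sketch (the Legendre-symbol helper is ours):

```python
def legendre(n, p):
    # Legendre symbol (n/p) for an odd prime p, returned in {-1, 0, 1}
    r = pow(n % p, (p - 1) // 2, p)
    return r - p if r == p - 1 else r

# n ≡ -1 (mod 24) together with (n/5) = -1 pins down n modulo 120
classes = [n for n in range(120) if n % 24 == 23 and legendre(n, 5) == -1]
print(classes)  # [23, 47]
```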
Similar computations for other candidates $Q$ and other
primes $\ell$ at which Condition C is satisfied can also be done.
We summarize the results of our computations in Proposition~\ref{prop:eta-congruences}.
We computed the needed coefficients using the \texttt{FLINT} library
(\cite{flint}).
The code is available at \cite{code}.
\begin{prop}\label{prop:eta-congruences}
We have that
\[
p\left(\frac{Q^3n+1}{24}\right)\equiv 0 \pmod{\ell},
\]
for all $n$ coprime to $\ell Q$ such that
$n \equiv 1 \pmod{24}$ and $\kro{-n}\ell \neq \varepsilon_\p$, for
\begin{itemize}
\item $\ell = 5, \varepsilon_5 = 1$ and
\[
Q = 2879, 11519, 23039, 25919, 51839, 66239, 69119, 71999, 86399,
97919.
\]
\item $\ell = 7, \varepsilon_7 = -1$ and
\[
Q = 16127, 44351, 48383, 68543, 76607.
\]
\item $\ell = 11, \varepsilon_{11} = -1$ and
\[
Q=25343.
\]
\item $\ell = 13, \varepsilon_{13} = 1$ and
\[
Q = 7487, 44927, 67391.
\]
\item $\ell = 17, \varepsilon_{17} = 1$ and
\[
Q = 9791.
\]
\end{itemize}
In each case, these $Q$ are all the interesting primes less than $10^5$.
\end{prop}
\begin{rmk}
The congruences for $\ell\in\{5, 7,13\}$ were known to Atkin.
Moreover, \cite[Theorem 2]{atkin} implies that in these cases every
candidate is interesting. This agrees with our computations as summarized in Proposition~\ref{prop:eta-congruences}.
In the same article, Atkin also suggests that the same should hold for $\ell \in
\{11,17,19,23\}$; namely, that every candidate is indeed interesting.
We found counterexamples for each of these $\ell$'s.
More precisely, our computations showed that the primes $Q = 12671, 44351, 76031$
are candidates for $\ell=11$, but they are not interesting.
The same holds for $\ell = 17$ and $Q = 19583, 68543,
97919$, for $\ell = 19$ and $Q = 32831$, for $\ell = 23$ and $Q = 66239$, and for
$\ell = 29$ and $Q = 16703, 50111$.
\end{rmk}
For each pair of primes $\ell$ and $Q$ above, fixing a suitable residue class for $n$ modulo $24\ell$ will yield a Ramanujan-type congruence. For example, Proposition~\ref{prop:eta-congruences} implies that
\begin{multline*}
p\left(15956222111407 \, m + 9425121394238 \right)\equiv 0\pmod{17}\, \\
\mbox{ for }\,m\not\equiv 4007\pmod{9791}.
\end{multline*}
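The constants in this congruence can be reproduced from Proposition~\ref{prop:eta-congruences} with $\ell = 17$ and $Q = 9791$. The residue class $n \equiv 241 \pmod{24\cdot 17}$ is our reconstruction, chosen so that $n \equiv 1 \pmod{24}$ and $\kro{-n}{17} = -1$; a sketch of the arithmetic:

```python
Q, ell = 9791, 17

# For n = 24*17*m + r, the argument (Q^3*n + 1)/24 becomes
# 17*Q^3*m + (Q^3*r + 1)/24.  The class r = 241 (our reconstruction)
# satisfies r ≡ 1 (mod 24) and makes -r a quadratic non-residue mod 17.
r = 241
assert r % 24 == 1
assert pow(-r % ell, (ell - 1) // 2, ell) == ell - 1  # (-r/17) = -1

print(ell * Q**3)            # 15956222111407
print((Q**3 * r + 1) // 24)  # 9425121394238

# The excluded class m ≡ 4007 (mod 9791) is exactly where Q divides n:
assert (24 * ell * 4007 + r) % Q == 0
```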
\subsubsection*{Overpartition function}
Let $\overline p(n)$ denote the overpartition function.
For information on $\overline p(n)$ see \cite{overpartitions}.
It is known that
\[
\sum_{n\geq 0} \overline{p}(n)q^n =
\frac{\eta(2z)}{\eta^2(z)}
\quad \in M_{-1/2}^{wh}(\Gamma_0(16),\chi).
\]
Since we do not have an algorithm for computing the numbers $\overline p(n)$
individually, in this case we must compute the whole Fourier expansion up to
the Sturm bound; this restricts us to smaller candidates, and hence yields
smaller interesting primes.
\begin{prop}
We have that
\[
\overline{p}\left(Q^3n\right)\equiv 0 \pmod{\ell},
\]
for all $n$ coprime to $\ell Q$ such that $\kro{-n}{\ell} \neq \varepsilon_\p$, for
\begin{itemize}
\item $\ell = 3, \varepsilon_3 = -1$ and
\[
Q = 47, 191, 239, 383, 431, 479, 719, 863, 911, 1103.
\]
\item $\ell = 5, \varepsilon_5 = 1$ and
\[
Q = 79, 239, 479.
\]
\end{itemize}
\end{prop}
We remark that the primes $Q = 1151, 1439, 1487, 1583, 1823, 1871$ are
candidates for $\ell=3$, but they return False when running Algorithm
\ref{alg:interesting}.
The same holds for $Q = 719$ and $\ell = 5$, and for $Q = 223, 1231, 1567$ and $\ell
= 7$.
\subsection{Examples of integral weight}
\subsubsection*{$24$-color partitions}
Let $p_{24}(n)$ denote the number of 24-color partitions of $n$.
See \cite{keith} for background on $k$-color partitions.
Let $\Delta\in S_{12}(\mathrm{SL}_2(\mathbb{Z}))$ be the normalized cusp form of weight 12 and
level 1. Then
\[
\sum_{n\geq -1} p_{24}(n+1) q^n =
\frac{1}{\Delta(z)} =
\frac{1}{\eta(z)^{24}}
\quad \in M^{wh}_{-12}\left(\mathrm{SL}_2(\mathbb{Z})\right).
\]
\begin{prop}
We have that
\[
p_{24}\left(Qn+1\right) \equiv 0\pmod{\ell}
\]
for all $n$ coprime to $\ell Q$ such that $\kro{-n}{\ell} \neq \varepsilon_\p$, for
every odd prime $\ell < 12$, with
\[
\varepsilon_3 = -1,
\varepsilon_5 = 1,
\varepsilon_7 = -1,
\varepsilon_{11} = -1,
\]
and every $Q$ such that $Q \equiv -1 \pmod{\ell}$ and $Q < 10^5$.
The same holds for $\ell = 13,\, \varepsilon_{13} = 1$ and
\[
Q = 1741, 2963, 4523, 5407, 5563, 5927, 5953, 6733, 7331, 9749.
\]
\end{prop}
We remark that the remaining candidates $Q < 10^4$ for $\ell =13$
(i.e. $Q = 103, 181, 233,\dots$) are not interesting.
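As a consistency check, each $Q$ listed for $\ell = 13$ is indeed a prime with $Q \equiv -1 \pmod{13}$ (our assumption for what a candidate is at level 1), and the first few candidates below $10^4$ are exactly the non-interesting ones just mentioned:

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

interesting = [1741, 2963, 4523, 5407, 5563, 5927, 5953, 6733, 7331, 9749]
# every interesting Q must be a candidate: a prime with Q ≡ -1 (mod 13)
assert all(is_prime(Q) and Q % 13 == 12 for Q in interesting)

candidates = [Q for Q in range(2, 250) if is_prime(Q) and Q % 13 == 12]
print(candidates)  # [103, 181, 233]
```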
\subsubsection*{$3$-core partitions}
Let $B_3(n)$ denote the number of triples of $3$-core partitions of $n$.
In \cite{Wang} it was shown that
\[
\sum_{n\geq 1} B_3(n-1)q^n =
\frac{\eta(3z)^9}{\eta(z)^3}
\quad \in M_3(\Gamma_0(3)).
\]
Since this is a holomorphic eta-quotient, Condition C holds trivially at every
odd prime.
In this case, every candidate that we tested turned out to be interesting:
we were not able to find a candidate for which Algorithm \ref{alg:interesting}
returns False.
\begin{prop}
We have that
\[
B_3\left(Qn-1\right)\equiv 0 \pmod{\ell}
\]
for every $n$ coprime to $\ell Q$,
for every odd prime $\ell<14$ and for every prime $Q\equiv -1\pmod{3\ell}$ less than $10^4$.
\end{prop}
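A small independent check is possible here, since the whole $q$-expansion is cheap to compute. The sketch below (the truncation order and the choice $\ell = 5$ with $Q = 29$, the smallest candidate for $\ell = 5$, are ours) expands $\eta(3z)^9/\eta(z)^3$ in pure Python; writing the expansion as $\sum a(n)q^n$ with $a(n) = B_3(n-1)$, it tests that $\ell$ divides $a(Qn)$:

```python
# q-expansion of eta(3z)^9/eta(z)^3 = q * prod(1-q^{3k})^9 / prod(1-q^k)^3
M = 300  # truncation order (our choice)

S = [0] * M          # S = prod (1 - q^k)^(-3)
S[0] = 1
for _ in range(3):                  # multiply three times by prod 1/(1-q^k)
    for k in range(1, M):
        for i in range(k, M):
            S[i] += S[i - k]

P = [0] * M          # P = prod (1 - q^{3k})^9
P[0] = 1
for k in range(3, M, 3):
    for _ in range(9):              # multiply nine times by (1 - q^k)
        for i in range(M - 1, k - 1, -1):
            P[i] -= P[i - k]

a = [0] * M          # a[n] = B_3(n - 1), the coefficient of q^n
for i in range(M):
    if P[i]:
        for j in range(M - 1 - i):
            a[i + j + 1] += P[i] * S[j]

print(a[1:6])  # [1, 3, 9, 13, 24]
Q, ell = 29, 5  # smallest candidate for ell = 5: 29 ≡ -1 (mod 15)
assert all(a[Q * n] % ell == 0 for n in range(1, M // Q) if n % ell)
```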
% Source: https://arxiv.org/abs/2005.01576

\title{Group Presentations for Links in Thickened Surfaces}

\begin{abstract}
Using a combinatorial argument, we prove the well-known result that the Wirtinger and Dehn presentations of a link in 3-space describe isomorphic groups. The result is not true for links $\ell$ in a thickened surface $S \times [0,1]$. Their precise relationship, as given in the 2012 thesis of R.E. Byrd, is established here by an elementary argument. When a diagram in $S$ for $\ell$ can be checkerboard shaded, the Dehn presentation leads naturally to an abelian ``Dehn coloring group," an isotopy invariant of $\ell$. Introducing homological information from $S$ produces a stronger invariant, $\cal C$, a module over the group ring of $H_1(S; {\mathbb Z})$. The authors previously defined the Laplacian modules ${\cal L}_G, {\cal L}_{G^*}$ and polynomials $\Delta_G, \Delta_{G^*}$ associated to a Tait graph $G$ and its dual $G^*$, and showed that the pairs $\{{\cal L}_G, {\cal L}_{G^*}\}$, $\{\Delta_G, \Delta_{G^*}\}$ are isotopy invariants of $\ell$. The relationship between $\cal C$ and the Laplacian modules is described and used to prove that $\Delta_G$ and $\Delta_{G^*}$ are equal when $S$ is a torus.
\end{abstract}

\section{Introduction} Modern knot theory, which began in the early 1900's, was propelled by the nearly simultaneous publications of two different methods for computing presentations of knot groups, fundamental groups of knot complements. The methods are due to W. Wirtinger and M. Dehn. Both are combinatorial, beginning with a 2-dimensional drawing, or ``diagram," of a knot or link, and reading from it a group presentation.
Of course, Wirtinger and Dehn presentations describe the same group. However, the proof usually involves algebraic topology. Continuing in the combinatorial spirit of
early knot theory, a spirit that has revived greatly since 1985 with the landmark discoveries of V.F.R. Jones \cite{Jo85}, we offer a diagrammatic proof that the two presentations describe the same group. We then extend our technique to knots and links in thickened surfaces. There the presentations describe different groups. We explain their relationship.
Diagrams of knots and links in surfaces that can be ``checkerboard shaded" carry the same information as $\pm 1$-weighted embedded graphs. Laplacian matrices of graphs, well known to combinatorists, can be used to describe algebraic invariants that we show are closely related to Dehn presentations.
The first two sections are relatively elementary and should be accessible to a reader with a basic undergraduate mathematics background. Later sections become more sophisticated but require only modest knowledge of modules.
The authors are grateful to Louis Kauffman for helpful comments, and also Seiichi Kamada for sharing his and Naoko Kamada's early ideas about Wirtinger and Dehn presentations.
\section{Wirtinger and Dehn Link Group Presentations} \label{Intro}
A link in $\mathbb R^3$ is a finite embedded collection of circles $\ell$ regarded up to ambient isotopy. (A \textit{knot} is a special case, a link with a single connected component.) A link is usually described by a \textit{link diagram}, a 4-valent graph embedded in the plane with each vertex replaced by a broken line segment to indicate how the link passes over itself. Following \cite{Ka06} we call the graph a \textit{universe} of $\ell$.
Two links are isotopic if and only if a diagram of one can be changed into a diagram of the other by a finite sequence of local modifications called Reidemeister moves, as in Figure \ref{reid}, as well as deformations of the diagram that don't create or destroy crossings. (For a proof of this as well as other well-known facts about links see \cite{Mu96}.) The topological task of determining when two links are isotopic now becomes a combinatorial problem of understanding when two link diagrams are equivalent. Moreover, we can use Reidemeister moves to discover link invariants; they are quantities associated to a diagram that are unchanged by each of the three moves.
\begin{figure}[H]
\begin{center}
\includegraphics[height=1.5 in]{reid}
\caption{Reidemeister moves}
\label{reid}
\end{center}
\end{figure}
The \textit{link group}, the fundamental group $\pi_1(\mathbb R^3\setminus \ell)$ of the link complement, is a familiar link invariant. Usually it is described by a group presentation based on a link diagram, the most common being the Wirtinger and the Dehn presentations. In a Wirtinger presentation, which requires that the link be oriented, generators correspond to arcs, maximally connected components of the diagram, while relations correspond to the crossings, as in Figure \ref{wirt}(i).
We remind the reader that a presentation of a group $\pi$ is an expression of the form $\langle x_1, \ldots, x_n \mid r_1, \ldots, r_m\rangle$, where $x_1, \ldots, x_n$ generate $\pi$ while $r_1, \ldots, r_m$ are \textit{relators}, words in $x_1^{\pm 1}, \ldots, x_n^{\pm 1}$ that represent trivial elements. The group relators are sufficient to describe the group, in the sense that $\pi \cong F/R$, where $F$ is the free group on $x_1, \ldots, x_n$ and $R$ is the normal subgroup of $F$ generated by $r_1, \ldots, r_m$. A group that has such a presentation is said to be \textit{finitely presented}. Often it is more natural to include \textit{relations}, expressions of the form $r=s$, in a presentation rather than relators. Such an expression is another way of writing the relator $rs^{-1}$.
Just as link diagrams and Reidemeister moves convert the recognition problem for links to a combinatorial task, group presentations and \textit{Tietze transformations} turn the recognition problem for groups into a combinatorial one. Two finitely presented groups are isomorphic if and only if one presentation can be converted into the other by a finite sequence of $(T1)^{\pm 1}$ generator addition/deletion and $(T2)^{\pm 1}$ relator addition/deletion, as well as changing the order of the generators or the relators.
\begin{itemize}
\item{} $(T1): \langle x_1, \ldots, x_n \mid r_1, \ldots, r_m\rangle$ $\to$
$\langle y, x_1, \ldots, x_n \mid yw^{-1}, r_1, \ldots, r_m\rangle$, where $w$ is a word in $x_1^{\pm 1}, \ldots, x_n^{\pm 1}$.
\item{} $(T1)^{-1}$: reverse of $(T1)$, replacing $y$ by $w$ where it appears in $r_1, \ldots, r_m$.
\item{} $(T2): \langle x_1, \ldots, x_n \mid r_1, \ldots, r_m\rangle$ $\to$
$\langle x_1, \ldots, x_n \mid r, r_1, \ldots, r_m\rangle$, where $r$ is a redundant relator (that is, $r \in R$).
\item{} $(T2)^{-1}$: reverse of $(T2)$.
\end{itemize}
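For example (a standard computation, sketched here for concreteness), Tietze transformations convert $\langle a, b \mid aba = bab\rangle$ into $\langle x, y \mid x^2 = y^3\rangle$: two applications of $(T1)$ add generators $x = aba$ and $y = ab$; since
$$x^2 = aba\cdot bab = (ab)^3 = y^3,$$
an application of $(T2)$ adds the redundant relation $x^2 = y^3$; finally, rewriting $a = y^{-1}x$ and $b = x^{-1}y^2$, applications of $(T2)^{-1}$ and $(T1)^{-1}$ delete the original relation and the generators $a, b$.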
A Dehn presentation ignores the link orientation. Its generators are regions of a diagram, components of the complement of the universe, with one region arbitrarily designated as the \textit{base region} and set equal to the identity. Relations again correspond to crossings, as in Figure \ref{wirt}(ii). The reader can check that the two presentations resulting from Figure \ref{wirt}(ii a) and (ii b) describe isomorphic groups via an isomorphism that maps generators to their inverses. Neither depends on the choice of base region $R_0$ (see Remark \ref{DWremark} below). We use the second presentation throughout.
For the sake of simplicity, we will not distinguish between arcs of ${\cal D}$ and Wirtinger generators, using the same symbols for both. Similarly, regions of ${\cal D}$ will be identified with Dehn generators.
\begin{figure}
\begin{center}
\includegraphics[height=1.5 in]{wirt}
\caption{(i) Wirtinger relation; (ii) two conventions for Dehn relations}
\label{wirt}
\end{center}
\end{figure}
The group $\pi_{wirt}$ described by the Wirtinger presentation is usually seen to be isomorphic to the link group by a topological argument (see \cite{St80}, for example). Then one proves that the group $\pi_{dehn}$ described by the Dehn presentation is isomorphic to $\pi_{wirt}$ by another topological argument (see \cite{LS77}). In the next section we present a short, purely combinatorial proof that $\pi_{wirt}$ and $\pi_{dehn}$ are isomorphic. The method involves combinatorial ``differentiation" and ``integration" on link diagrams, introduced in \cite{STW18}. Using it we will extend our study to links in thickened surfaces.
Instead of viewing a link diagram in the plane, we can put it in the 2-sphere ${\mathbb S}^2$. In this egalitarian approach all regions are compact. Such a diagram represents a link in the thickened sphere ${\mathbb S}^2 \times [0,1]$, ${\mathbb S}^3$, or again in $\mathbb R^3$. Regardless of which we choose, two links remain isotopic if and only if their diagrams are transformable into each other by finitely many Reidemeister moves.
It is natural to replace the 2-sphere by an arbitrary closed, connected orientable surface $S$. A diagram in $S$ represents a link in the thickened surface $S \times [0,1]$. As before, we regard links up to ambient isotopy. Again, two links are isotopic if and only if any diagram of one can be transformed to any diagram of the other by a finite sequence of Reidemeister moves. As explained in \cite{Bo08}, this follows from \cite{HZ64}, which ensures that isotopic links are in fact isotopic by linear moves in arbitrarily small neighborhoods.
Given a diagram in $S$ for a link $\ell$, the groups $\pi_{wirt}, \pi_{dehn}$ described by the Wirtinger and Dehn presentations are seen to be invariants using Reidemeister moves, but they no longer need be isomorphic. We will describe their precise relationship using combinatorial integration and differentiation on the diagram. (For a discussion of the fundamental group $\pi_1(S \times [0,1]\setminus \ell)$ of the link complement see \cite{CSW14}.)
\section{Integration on Link Diagrams} \label{IntDiff}
There is a natural homomorphism $f_{wd}: \pi_{wirt}\to \pi_{dehn}$, defined first on generators of $\pi_{wirt}$ and then extended to arbitrary words in the usual way. For any generator $a$ we define $f_{wd}(a)$ to be $A^{-1}B$, where $A$ is the region to the right of the oriented arc $a$, and $B$ is on the other side (Figure \ref{WD}(i)). This is well defined, since if the arc $a$
separates another pair of regions $C, D$, as in Figure \ref{WD} (ii), then $A^{-1}B = C^{-1}D$ in $\pi_{dehn}$.
(We think of $A^{-1}B$ as a \textit{derivative} across the arc of our Dehn generator-labeled diagram.)
We extend $f_{wd}$ in the usual way to a function on words in Wirtinger generators and their inverses.
In order to show that this induces a homomorphism on $\pi_{wirt}$, we must show that it sends Wirtinger relations to the identity element of $\pi_{dehn}$. For this consider a Wirtinger relation as in Figure \ref{WD}(ii). It is mapped by $f_{wd}$ to $f_{wd}(ab) = f_{wd}(ca)$, which can be written $(C^{-1}D)(D^{-1}B) = (C^{-1}A)(A^{-1}B)$. This simplifies to $C^{-1}B= C^{-1}B$. The case of a left-hand crossing is similar.
\begin{figure}[H]
\begin{center}
\includegraphics[height=1.5 in]{WD}
\caption{(i) $f_{wd}(a)= A^{-1}B$; (ii) $f_{wd}(ab) = f_{wd}(ca)$}
\label{WD}
\end{center}
\end{figure}
In fact $f_{wd}$ is an isomorphism. Our construction of the inverse homomorphism $f_{dw}: \pi_{dehn} \to \pi_{wirt}$ uses ``integration," which we describe next.
Beginning in a region $R$, we travel along a path ${\gamma}$ to another region $R'$. As we do this, we build an element of $\pi_{wirt}$ by ``integration," successively appending the generators of $\pi_{wirt}$ (or their inverses) to the right, corresponding to the arcs of the diagram that we cross, as in Figure \ref{integrate}(i). We will denote the final element by $\int_{\gamma} {\cal D}$, and call it the result of \textit{integration} along ${\gamma}$.
\begin{figure}[H]
\begin{center}
\includegraphics[height=2 in]{integrate}
\caption{(i) Integrating along a path; (ii) showing that $\int_{\gamma} {\cal D} = 1$ on a small loop.}
\label{integrate}
\end{center}
\end{figure}
We define $f_{dw}(R)$ to be $\int_{\gamma} {\cal D}$, where ${\gamma}$ is any path from the base region $R_0$ to $R$.
But is $f_{dw}$ well defined? Proving that is equivalent to proving that our integration is path independent. Given two paths with the same initial point in $R$ and the same final point in $R'$, a loop is formed from their concatenation, following one path and then going along the other in the opposite direction. The claim that integration is path independent is equivalent to the claim that the path integral around the loop is trivial. Figure \ref{integrate}(ii) shows this for small loops about a crossing. Since the 2-sphere is simply connected, the verification for small loops about crossings implies the general claim. We leave the details to the energetic reader.
We have shown that $f_{dw}$ is well defined on generators. To see that it induces a homomorphism on $\pi_{dehn}$, we must verify that it is trivial on a general Dehn relation $A^{-1}B = C^{-1}D$, as in Figure \ref{WD}(ii). If $A$ is sent to an element $w$, then $B$ maps to $wa$. Moreover, $C$ is sent to $wc^{-1}$ and $D$ is mapped to $wc^{-1}a$. Now $A^{-1}B$ and $C^{-1}D$ both map to the same value, $a$. The case of a left-hand crossing is similar.
Finally, we check that $f_{wd}$ and $f_{dw}$ are inverses of each other. The verification is brief and we will leave it to the reader.
We have proven the well-known result:
\begin{prop} If $\ell$ is an oriented link in $\mathbb R^3$, then $\pi_{wirt} \cong \pi_{dehn}$. \end{prop}
\begin{remark} The terms ``derivative" and ``integral" are used suggestively. But what do they suggest? We propose to think about a link diagram with arcs labeled by corresponding Wirtinger generators as a conservative vector field. Path integration produces labels of the regions by elements of $\pi_{wirt}$ that we associate with Dehn generators via $f_{dw}$. Thus the Dehn generator labeling might be viewed as a potential function, with the integral $f_{dw}(R)= \int_{\gamma}{\cal D}$ being the work done by the field as we move from the base region $R_0$ to $R$ along the path ${\gamma}$. Differentiating returns the original arc labeling.
\end{remark}
\section{Links in Thickened Surfaces} Moving to the world of link diagrams on surfaces we find that much remains unchanged. Given an oriented link diagram ${\cal D}$ in a closed, connected orientable surface $S$ of genus $g >0$, we can again form the Wirtinger presentation of a group $\pi_{wirt}$ and also the Dehn presentation of a second group $\pi_{dehn}$, and the demonstration of invariance under Reidemeister moves is unchanged. The groups need no longer be isomorphic, and so we will call $\pi_{wirt}$ the \textit{Wirtinger
link group} and $\pi_{dehn}$ the \textit{Dehn link group}.
In order to describe the relationship between $\pi_{wirt}$ and $\pi_{dehn}$, we will again make use of integration. We will need a couple of facts about it, as surfaces of positive genus are more complicated than spheres. While the first is quickly proved using basic algebraic topology, a geometric argument is possible. The second statement is immediate from the definition.
\begin{lemma} \label{lem:integrate} Assume that ${\cal D}$ is an oriented link diagram on a closed, connected orientable surface $S$, and ${\gamma}_1, {\gamma}_2$ are oriented paths in $S$. \medskip
(i) If ${\gamma}_1$ and ${\gamma}_2$ have the same endpoints and are homotopic rel boundary (that is, homotopic keeping endpoints fixed), then $\int_{{\gamma}_1}{\cal D} = \int_{{\gamma}_2}{\cal D}$. \medskip
(ii) If the terminal point of ${\gamma}_1$ is the initial point of ${\gamma}_2$ and ${\gamma}_1{\gamma}_2$ is the concatenated path, then $\int_{{\gamma}_1{\gamma}_2}{\cal D} = \int_{{\gamma}_1}{\cal D} \int_{{\gamma}_2}{\cal D}$.
\end{lemma}
We can define a homomorphism $f_{wd}: \pi_{wirt} \to \pi_{dehn}$ just as we did in the previous section, mapping any generator $a$ to $A^{-1}B$, where $A$ is the region to the right of the arc representing $a$ and $B$ is the region to the left. However, $f_{wd}$ is generally no longer injective. To see why we need to look closely at the surface $S$.
We will visualize the surface $S$ of genus $g$ as a $2g$-gon with directed sides $x_1, y_1, \ldots, x_g, y_g$ identified in the usual way. Think of the bouquet of loops in $S$ as a coordinate system. Of course, there are other bouquets along which we could cut $S$ to get a $2g$-gon. We will say more about that in the last section.
Without loss of generality we assume that the diagram ${\cal D}$ meets the curves $x_i, y_i$ in \textit{general position}, which means that ${\cal D}$ intersects them transversely and avoids the common point. Then each $x_i, y_i$ determines a word $r_i = \int_{x_i}{\cal D}, s_i= \int_{y_i}{\cal D}$, respectively, by integration, as illustrated in Figure \ref{peripheral}. The elements of $\pi_{wirt}$ that they determine are unchanged by Reidemeister moves.
\begin{definition} The \textit{surface subgroup} of $\pi_{wirt}$, denoted by $\pi_{wirt}^S$, is the normal subgroup generated by $r_1, s_1, \ldots, r_g, s_g$. \end{definition}
\begin{figure}[H]
\begin{center}
\includegraphics[height=1.3 in]{peripheral}
\caption{Word $r_i$ determined by curve $x_i$}
\label{peripheral}
\end{center}
\end{figure}
\begin{lemma}\label{kernel} $\pi_{wirt}^S$ is in the kernel of $f_{wd}: \pi_{wirt} \to \pi_{dehn}$.
\end{lemma}
\begin{proof} A relator $r_i$ has the form $a_1^{\epsilon_1} a_2^{\epsilon_2} \cdots a_{2m}^{\epsilon_{2m}},$
where $\epsilon_1, \ldots, \epsilon_{2m} \in \{\pm 1\}$. Let $A_0, \ldots, A_{2m-1}, A_{2m}=A_0$ be the regions of the diagram that we encounter as we follow $x_i$. We must show that the image $f_{wd}(r_i)$ is trivial. This is clear since $f_{wd}(a_i^{\epsilon_i}) = A_{i-1}^{-1}A_i$ for all $i$, in both cases $\epsilon_i =1$ and $\epsilon_i=-1$. Thus $f_{wd}(a_1^{\epsilon_1}\cdots a_{2m}^{\epsilon_{2m}}) = A_0^{-1}A_0 = 1$.
A similar argument applies to the relators $s_i$.
\end{proof}
We come now to the lesson of our story. If we try to use integration to define $f_{dw}$ as we did in the previous section, then Lemma \ref{kernel} warns us that the result will not be well defined. Remember that path-independence of integration is equivalent to the requirement that the path integral around any closed loop is trivial. Integrating around $x_i, y_i$ generally produces nontrivial elements. However, if we replace $\pi_{wirt}$ by the quotient $\pi_{wirt}/\pi_{wirt}^S$, then path-independence is recovered. At the same time, we arrive at the relationship between $\pi_{dehn}$ and $\pi_{wirt}$.
\begin{theorem} \label{DW} \cite{By12} If $\ell$ is an oriented link in a thickened surface $S \times [0,1]$, then $\pi_{dehn} \cong \pi_{wirt}/\pi_{wirt}^S$.
\end{theorem}
\begin{proof} We begin by showing that integration is path-independent provided we take values in the quotient group $\pi_{wirt}/\pi_{wirt}^S$. For convenience we will assume that the base region $R_0$ contains the common point $*$ of the loops $x_i, y_i$. Consider a loop ${\gamma}$ beginning and ending at $*$.
We can write ${\gamma}$ up to homotopy fixing $*$ as a product ${\gamma}_1^{\epsilon_1}\cdots {\gamma}_n^{\epsilon_n}$, where each
${\gamma}_j \in \{x_1, y_1, \ldots, x_g, y_g\}$ and each $\epsilon_j \in \{\pm 1\}$. (This follows from the fact of algebraic topology that the fundamental group of $S$ is generated by the loops $x_i, y_i$. However, one can see this
directly by puncturing the $2g$-gon that forms the surface at some point not in ${\gamma}$, and then retracting the punctured $2g$-gon to its boundary.) Since each $\int_{x_i}{\cal D}, \int_{y_i}{\cal D}$ is in $\pi_{wirt}^S$, and integration is multiplicative under concatenation of paths, $\int_{\gamma}{\cal D}$ is in $\pi_{wirt}^S$ as well.
By Lemma \ref{kernel}, the homomorphism $f_{wd}: \pi_{wirt} \to \pi_{dehn}$ induces a homomorphism
$\bar f_{wd}: \pi_{wirt}/\pi_{wirt}^S \to \pi_{dehn}$.
Define $\bar f_{dw}: \pi_{dehn} \to \pi_{wirt}/\pi_{wirt}^S$ by path integration, as we did in the previous section, but taking values in the quotient group $\pi_{wirt}/\pi_{wirt}^S$. It is a simple matter to verify that
the composition of $\bar f_{wd}$ and $\bar f_{dw}$ in either direction is the identity homomorphism.
\end{proof}
\begin{remark}\label{DWremark} (i) Theorem \ref{DW} is the main result of \cite{By12}.
(ii) Theorem \ref{DW} implies that $\pi_{dehn}$ does not depend upon the choice of base region $R_0$.
(iii) If we do not choose a base region $R_0$, then the Dehn presentation that we get describes the free product $\hat\pi_{dehn} = \pi_{dehn} * \mathbb Z.$ The standard proof of this classical result for planar diagrams (see, for example, \cite{LS77}) can be adapted in the case of higher genus surfaces. \end{remark}
\begin{example} Our link diagrams ${\cal D}$ arise from the projection of $S \times [0,1]$ onto $S$ from above; that is, overcrossing arcs correspond to larger values of the second coordinate. In this sense, $\pi_{wirt}$ is the \textit{upper Wirtinger link group}. If instead we project from below, then another diagram of $\ell$ is obtained, and the resulting Wirtinger group, the \textit{lower Wirtinger link group}, can be different when $S$ has positive genus. Consider the knot in Figure \ref{topbottom} viewed from the two perspectives. The reader can verify that the upper Wirtinger group has presentation
$\langle a, b \mid aba=bab \rangle,$ while the lower Wirtinger group is infinite cyclic.
\begin{figure}[H]
\begin{center}
\includegraphics[height=2 in]{topbottom}
\caption{Distinct upper and lower Wirtinger knot groups (cf. \cite{GPV00}, Fig. 7)}
\label{topbottom}
\end{center}
\end{figure}
A reason for the difference can be found in algebraic topology. Recall that for a link in $\mathbb R^3$ or ${\mathbb S}^3$, its Wirtinger link group is the fundamental group of the link complement. Choosing the basepoint of the fundamental group above a diagram results in an upper group presentation while placing it below results in the lower group presentation. Since each group is the fundamental group of the link complement, the upper and lower Wirtinger presentations describe the same group.
For a link $\ell$ in $S \times [0,1]$, where $S$ is a surface of arbitrary genus, the upper Wirtinger group can be seen to be $\pi_1((S \times [0,1] \setminus \ell) / S \times \{1\})$, the fundamental group of $S\times [0,1] \setminus \ell$ with $S \times \{1\}$ coned to a point that serves as fundamental group basepoint.
Similarly, the lower group is $\pi_1((S \times [0,1] \setminus \ell) / S \times \{0\})$. Less obvious is that the Dehn link group is the fundamental group of $S \times [0,1] \setminus \ell$ with both $S \times \{0\}$ and $S \times \{1\}$ coned to separate points, and hence the ``upper" and ``lower" Dehn link groups are the same (trivial in the above example). These facts were previously observed by N. Kamada and S. Kamada \cite{Ka20}. They will not be used here and are mentioned only for the sake of motivation.
\end{example}
\section{Fox's Free Differential Calculus}\label{color}
In a series of papers beginning in 1953, R.H. Fox introduced the ``free differential calculus," a method for constructing invariants for groups from presentations \cite{Fo54}. Although inspired by homology calculations in covering spaces, it is a completely combinatorial method.
Let $H$ be a group. We will make use of the group ring $\mathbb Z[H]$.
It consists of all finite linear combinations $\sum n_h h$, where each $n_h\in \mathbb Z$ and $h \in H$.
Addition is defined coordinate-wise by:
$$\sum m_h h + \sum n_h h = \sum (m_h + n_h) h,$$
while multiplication is given by:
$$(\sum m_h h)(\sum n_{h} h) = \sum(\sum_{h=kk'} m_k n_{k'}) h.$$
Note that $H$ is embedded in $\mathbb Z[H]$ in a natural way.
We can think of elements of $\mathbb Z[H]$ as a linearization of $H$.
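The group ring operations above are easy to model directly. The following sketch (ours, not from the paper) stores an element of $\mathbb Z[H]$ as a dictionary mapping group elements to integer coefficients, with $H = \{\pm 1\}$ as a small test case; note that $(1+h)(1-h) = 1 - h^2 = 0$ in $\mathbb Z[\{\pm 1\}]$, so the group ring has zero divisors.

```python
def gr_add(p, q):
    """Add two group-ring elements, stored as {group element: integer coeff}."""
    out = dict(p)
    for h, n in q.items():
        out[h] = out.get(h, 0) + n
    return {h: n for h, n in out.items() if n != 0}

def gr_mul(p, q, op):
    """Multiply two group-ring elements; op is the group operation of H."""
    out = {}
    for k, m in p.items():
        for kp, n in q.items():
            h = op(k, kp)                 # collect terms with h = k k'
            out[h] = out.get(h, 0) + m * n
    return {h: n for h, n in out.items() if n != 0}

# H = {+1, -1} under multiplication; h denotes the group element -1
mult = lambda a, b: a * b
p = {1: 1, -1: 1}     # 1 + h
q = {1: 1, -1: -1}    # 1 - h
print(gr_mul(p, q, mult))   # {} : the zero element, since (1+h)(1-h) = 1 - h^2 = 0
```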
Let $F$ be the free group on generators $x_1, \ldots, x_n$. For each $j$, the partial derivative $\partial/\partial x_j$ is the additive map from $\mathbb Z[F]$ to itself determined by:
$${\partial x_i \over \partial x_j} = {\delta}_{ij}, \quad {\partial x_i^{-1} \over \partial x_j} = -{\delta}_{ij} x_i^{-1}$$
$${\partial (uv )\over \partial x_j} ={ \partial u \over \partial x_j }+ u {\partial v \over \partial x_j}.$$
The last equation is particularly useful when $v$ is the last symbol $x_i^{\pm 1}$ of a word.
Given a presentation $\langle x_1, \ldots, x_n \mid R_1, \ldots, R_m \rangle$ of a group $\pi$, its \textit{Jacobian matrix} $J$ is the $m \times n$ matrix with $ij$th entry $\partial R_i/\partial x_j$. (If a relation $R=S$ appears in a presentation instead of a relator, then we can
take a partial derivative of each side and subtract the results or we can form the relator $RS^{-1}$ and take its partial derivative. The outcomes will be the same.)
How can we build invariants of a presented group $\pi$ using the free differential calculus? Begin by choosing a homomorphism $\phi$ from $\pi$ to an abelian group $H$. It extends to a homomorphism
$\phi: \mathbb Z[\pi] \to \mathbb Z[H]$. Applying $\phi$ to each entry of the Jacobian matrix $J$, we get the \textit{specialized Jacobian matrix} $J^\phi$. The quotient $\mathbb Z[H]^n/J^{\phi}\mathbb Z[H]^n$, the \textit{cokernel} of $J^\phi$, describes a module over the group ring $\mathbb Z[H]$. It has generators $x_1, \ldots, x_n$, and relations corresponding to the rows of the matrix $J^\phi$. The $i$th row is the relation $\sum_j (\partial R_i/\partial x_j) x_j= 0$. By \cite{Fo54}, the module is independent of the presentation of the group $\pi$. The strategy of the proof is to show that Tietze transformations change the Jacobian matrix only up to elementary transformations.
Let $\pi= \pi_{wirt}$ be the Wirtinger group of a link in a thickened surface $S$, and consider the homomorphism $\phi$ from $\pi$ to the infinite cyclic group $\langle t \rangle$, mapping each generator to $t$. The entries of $J^\phi$ are integral polynomials in the variables $t, t^{-1}$. When $S$ is the 2-sphere, the module presented is well known to knot theorists: it is the first homology group of the infinite cyclic cover of the link complement.
\begin{example} \label{trefoilex} Consider the Wirtinger presentation $\pi= \langle a, b, c \mid ab=ca, bc=ab, ca=bc\rangle$ of the trefoil knot in ${\mathbb S}^3$ (Figure \ref{trefoil}). (Here it is convenient to denote generators by $a, b, c$, and avoid confusion with previously defined loops in $S$.) The reader can verify that
$$J = \begin{pmatrix} 1-c & a & -1\\ -1&1-a &b \\ c& -1 & 1-b \end{pmatrix}.$$
Let $\phi: \pi \to \langle t \rangle$ be the homomorphism that maps each generator to $t$. Then
$$J^\phi = \begin{pmatrix} 1-t & t & -1\\ -1&1-t &t \\ t& -1 & 1-t \end{pmatrix}.$$
\begin{figure}[H]
\begin{center}
\includegraphics[height=1.3 in]{trefoil}
\caption{Diagram of a trefoil knot with Wirtinger generators}
\label{trefoil}
\end{center}
\end{figure}
\end{example}
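The specialized Fox derivatives can be computed mechanically from the three rules above. The following sketch (our own illustration, not from the paper) reads a relator letter by letter, applying the product rule $\partial(uv) = \partial u + u\,\partial v$ with the prefix already specialized under $\phi$; applied to the trefoil relators $aba^{-1}c^{-1}$, $bcb^{-1}a^{-1}$, $cac^{-1}b^{-1}$ with every generator sent to $t$, it reproduces the matrix $J^\phi$.

```python
import sympy as sp

t = sp.symbols('t')

def fox_deriv(word, gen, phi):
    """Specialized Fox derivative d(word)/d(gen).

    word is a list of letters (generator, exponent) with exponent +1 or -1;
    phi maps each generator to its image in the coefficient ring.  Uses
    d(x_i)/d(x_j) = delta_ij, d(x_i^{-1})/d(x_j) = -delta_ij x_i^{-1},
    and the product rule d(uv) = du + u dv."""
    total = sp.Integer(0)
    prefix = sp.Integer(1)          # phi applied to the prefix read so far
    for g, e in word:
        if e == 1:
            if g == gen:
                total += prefix     # contribution of d(x)/dx = 1
            prefix *= phi[g]
        else:
            prefix /= phi[g]
            if g == gen:
                total -= prefix     # contribution of d(x^{-1})/dx = -x^{-1}
    return sp.expand(total)

phi = {'a': t, 'b': t, 'c': t}
# relators of the trefoil presentation: ab=ca, bc=ab, ca=bc
relators = [[('a', 1), ('b', 1), ('a', -1), ('c', -1)],
            [('b', 1), ('c', 1), ('b', -1), ('a', -1)],
            [('c', 1), ('a', 1), ('c', -1), ('b', -1)]]
J = [[fox_deriv(r, g, phi) for g in 'abc'] for r in relators]
print(J)   # [[1-t, t, -1], [-1, 1-t, t], [t, -1, 1-t]]
```

Setting $t = -1$ in the output recovers $J^\nu$, as noted in the next section.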
\section{The Dehn Coloring Group} \label{DCG}
Let $\nu$ be the homomorphism of $\pi= \pi_{wirt}$ to the multiplicative group $H=\{\pm 1\}$ of order 2, sending each Wirtinger generator to $-1$. The specialized Jacobian matrix $J^\nu$ has entries in $\mathbb Z$ (we identify the group element $-1$ with the integer $-1$). The partial derivatives of a Wirtinger relation $ab=ca$ (Figure \ref{WD}) contribute to $J^\nu$ a row corresponding to the relation $a-b=c-a$. Rewritten as $2a=b+c$, it is the well-known Fox coloring condition for arcs of a diagram, as in Figure \ref{fox}\footnote{Let $p$ be a prime. If we regard $a, b$ and $c$ as elements of $\mathbb Z/p$ (``colors"), then the condition says that twice the color of any overcrossing arc of the diagram is equal to the sum of the colors of the two arcs below it.}. (The reader should observe that $J^\nu$ can be recovered from $J^\phi$ in Example \ref{trefoilex} by replacing $t$ with $-1$.)
\begin{figure}[H]
\begin{center}
\includegraphics[height=1.3 in]{fox}
\caption{Fox coloring rule}
\label{fox}
\end{center}
\end{figure}
Any link diagram in the 2-sphere (or plane) can be \textit{checkerboard shaded}: some of its regions can be shaded so that whenever two regions share a common arc, exactly one of them is shaded. If a diagram admits a checkerboard shading, then it admits exactly two distinct checkerboard shadings.
What about diagrams in surfaces of higher genus? A diagram of a link $\ell$ in a surface $S$ can be checkerboard shaded if and only if $\ell$ represents the trivial element of $\mathbb Z/2$-homology $H_1(S;\mathbb Z/2)$. This condition is equivalent to the requirement that the diagram meets each loop $x_i, y_i$ in an even number of points. Proving this is a nice exercise.
Consider a checkerboard shaded link diagram in a surface. The homomorphism $\bar \nu = \nu \circ f_{dw}$ maps all unshaded Dehn generators of ${\cal D}$ to the same element of $\{\pm 1\}$, and all shaded generators to the other. The result of applying Fox calculus to a Dehn relation such as $A^{-1}B = C^{-1} D$ (Figure \ref{WD}(ii)) is $A+B = C+D$, which we call the \textit{Dehn coloring condition} for ${\cal D}$ (see Figure \ref{Dehn coloring}). The calculation depends only on the fact that $A, D$ map to the same value while $B, C$ map to the other. See \cite{CSW14'} for additional details.
\begin{figure}[H]
\begin{center}
\includegraphics[height=1.7 in]{Dehncoloring2}
\caption{Dehn coloring condition}
\label{Dehn coloring}
\end{center}
\end{figure}
\begin{definition} The \textit{Dehn coloring group} ${\cal C}$ of a link $\ell$ in a thickened surface $S \times [0,1]$ is the cokernel $\mathbb Z^n/J^{\bar \nu}\mathbb Z^n$, where $J^{\bar \nu}$ is the Jacobian $n \times n$ matrix of a Dehn presentation for the group of $\ell$ and $\bar \nu = \nu \circ f_{dw}$.
\end{definition}
\begin{remark} \label{homology} When $S = {\mathbb S}^2$ it is well known that the Dehn coloring group ${\cal C}$ is isomorphic to $H_1(M_2; \mathbb Z)\oplus \mathbb Z$, where $M_2$ is the 2-fold cyclic cover of ${\mathbb S}^3 \setminus \ell$ corresponding to the homomorphism that maps each meridian to a generator of $\mathbb Z/2$. \end{remark}
Given any link diagram ${\cal D}$ in a surface $S$, we can apply Reidemeister moves to assure that all regions are contractible. Then if the diagram is checkerboard shaded, we can construct a graph $G$, embedded in $S$ and unique up to isotopy, with a vertex in each shaded region and an edge through each crossing joining a pair of shaded regions. (An edge is allowed to join a vertex to itself.) Such a graph is called a \textit{Tait graph} in honor of the nineteenth-century Scottish pioneer of knot theory (and golf enthusiast) Peter Guthrie Tait. For each edge $e$ of $G$ we assign a weight $w_e= \pm 1$ according to the type of crossing involved, as in Figure \ref{medial4}. In order to avoid notational clutter, unlabeled edges are assumed to have weight $+1$.
We will use the Tait graph to determine all of the Dehn coloring relations. Note that unshaded regions of ${\cal D}$ become faces of $G$. We will refer to the Dehn generators corresponding to shaded and unshaded regions of ${\cal D}$ as \textit{vertex generators} and \textit{face generators}, respectively.
\begin{figure}[H]
\begin{center}
\includegraphics[height=1.5 in]{medial4}
\caption{Constructing a Tait graph from a checkerboard shaded diagram. Shaded (resp. unshaded) generators of ${\cal D}$ become vertex (resp. face) generators of ${\cal D}$.}
\label{medial4}
\end{center}
\end{figure}
Recall that there are two checkerboard shadings of ${\cal D}$. If we use the other shading, then we get a Tait graph $G^*$ that is dual to $G$. Each edge $e^*$ of $G^*$ meets an edge $e$ of $G$ transversely in a single point. The product $w_e w_{e^*}$ of weights is $-1$.
The \emph{adjacency matrix} of any edge-weighted graph $G$ with vertices $v_1, \ldots, v_n$ is the $n \times n$ matrix
$A = (a_{i, j})$ such that $a_{i, j}$ is the sum of the weights of edges between $v_i$ and $v_j$.
An edge joining a vertex $v_i$ to itself is counted twice, so it contributes $\pm 2$ to $a_{i,i}$. Define ${\delta}= ({\delta}_{i, j})$ to be the $n \times n$ diagonal matrix with ${\delta}_{i,i}$ equal to the sum of the weights of edges incident on $v_i$, again counting loops twice.
\begin{definition} \label{lapdef} The \emph{Laplacian matrix} $L_G$ of a finite graph $G$ is ${\delta} - A$. The \emph{Laplacian group} ${{\cal L}}_G$ is the cokernel $\mathbb Z^n/ L_G\mathbb Z^n$.
\end{definition}
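The cokernel $\mathbb Z^n/L_G\mathbb Z^n$ can be computed by diagonalizing $L_G$ over $\mathbb Z$ with elementary row and column operations. As an illustration (our own sketch, not from the paper), we take a triangle with all edge weights $+1$, a graph that arises, e.g., as a Tait graph of a standard trefoil diagram; the diagonalization below yields cokernel $\mathbb Z/3 \oplus \mathbb Z$, whose torsion matches the $\mathbb Z/3$ seen in Fox 3-colorings of the trefoil.

```python
def smith_diagonal(M):
    """Diagonal entries of an integer diagonalization of the matrix M
    (list of rows), via elementary row/column operations.  Entries are
    returned up to sign and need not satisfy the divisibility chain."""
    M = [row[:] for row in M]
    m, n = len(M), len(M[0])
    t = 0
    while t < min(m, n):
        pivot = next(((i, j) for i in range(t, m) for j in range(t, n)
                      if M[i][j] != 0), None)
        if pivot is None:
            break                       # remaining submatrix is zero
        i, j = pivot
        M[t], M[i] = M[i], M[t]         # move pivot to position (t, t)
        for row in M:
            row[t], row[j] = row[j], row[t]
        while True:
            done = True
            for i in range(t + 1, m):   # clear column t below the pivot
                q = M[i][t] // M[t][t]
                M[i] = [a - q * b for a, b in zip(M[i], M[t])]
                if M[i][t] != 0:        # smaller remainder becomes new pivot
                    M[t], M[i] = M[i], M[t]
                    done = False
            for j in range(t + 1, n):   # clear row t right of the pivot
                q = M[t][j] // M[t][t]
                for row in M:
                    row[j] -= q * row[t]
                if M[t][j] != 0:
                    for row in M:
                        row[t], row[j] = row[j], row[t]
                    done = False
            if done:
                break
        t += 1
    return [abs(M[k][k]) for k in range(t)]

# Laplacian L = delta - A of a triangle with all edge weights +1
L = [[2, -1, -1], [-1, 2, -1], [-1, -1, 2]]
print(smith_diagonal(L))   # [1, 3] -> cokernel Z/1 + Z/3 + Z (one free summand)
```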
Using Reidemeister moves it can be shown that the pair $\{{\cal L}_G, {\cal L}_{G^*}\}$ is an invariant of the link $\ell$. See \cite{SW19'} for details.
The reader might wonder why we have introduced yet another group. The answer is that there is
a relationship between the Laplacian group ${\cal L}_G$ and the Dehn coloring group ${\cal C}$. We see the relationship by using Dehn relations to eliminate face generators of ${\cal C}$.
Around any vertex $v$ of $G$, write the Dehn coloring conditions for all of the adjacent edges, always putting the face generator corresponding to the region to the left of the edge, as viewed from $v$, on the left side of the equation. (This puts $v$ on the right side of the equation if the edge carries a negative weight.) Define $R_v$ to be the sum of the relations. The face generators cancel in pairs, and so $R_v$ is a relation in the vertex generators. It is not difficult to see that $R_v$ is in fact the relation in ${\cal L}_G$ associated to $v$.
As an example, consider Figure \ref{holonomy}. Here
$$v+u_1 = v_1 + u_2$$
$$v+u_2 = v_2 + u_3$$
$$v_3 + u_3 = v + u_1$$
and so $R_v$ is the relation:
$$v+ v_3 = v_1 +v_2,$$
which can be written as the Laplacian group relation:
$$v = v_1 + v_2 - v_3.$$
\begin{figure}[H]
\begin{center}
\includegraphics[height=2 in]{holonomy}
\caption{Dehn coloring conditions produce the relation $R_v:$ $v=v_1+v_2-v_3$}
\label{holonomy}
\end{center}
\end{figure}
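The cancellation of face generators in this example is a purely formal matter and can be checked symbolically; here is a small sketch of ours using sympy.

```python
import sympy as sp

v, v1, v2, v3, u1, u2, u3 = sp.symbols('v v1 v2 v3 u1 u2 u3')

# The three Dehn coloring conditions around the vertex v, each written
# as (left-hand side) - (right-hand side):
relations = [(v + u1) - (v1 + u2),
             (v + u2) - (v2 + u3),
             (v3 + u3) - (v + u1)]

# Summing the relations cancels the face generators u1, u2, u3 in pairs,
# leaving the vertex relation R_v.
R_v = sp.expand(sum(relations))
print(R_v)   # v - v1 - v2 + v3, i.e. the relation v + v3 = v1 + v2
```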
We will define ${\cal C}^s$ to be the subgroup of ${\cal C}$ generated by the vertex generators. Then $R_v$ is a relation of ${\cal C}^s$. As we will see, such relations form a sufficient set for ${\cal L}_G$ when $S = {\mathbb S}^2$. For surfaces of positive genus, the additional needed relations are easy to describe. They are the result of rewriting as we go around the loops $x_i, y_i$.
Every Dehn relation can be written in the form $v_1-v_2= u_1 -u_2$, for some vertex generators $v_1, v_2$ and face generators $u_1, u_2$. Any relation in ${\cal C}^s$ is a sum of Dehn relations in which the face generators cancel in pairs. Such relations correspond to circuits in the dual graph $G^*$, and hence to closed paths in $S$. The relations $R_v$, as $v$ varies over all vertices of $G$, generate the relations arising from contractible closed paths. (The proof is similar to the suggested argument in Section \ref{IntDiff} for showing that integration is path independent.) Consequently, since every closed path in the 2-sphere is contractible, ${\cal C}^s \cong {{\cal L}}_G$ when $S={\mathbb S}^2$.
Now let's replace the checkerboard shading of our link diagram with the other checkerboard shading. This reverses the roles of shaded and unshaded regions, and it replaces the Tait graph $G$ with its dual $G^*$. If we define ${\cal C}^u$ to be the subgroup of ${\cal C}$ generated by face generators, then we find that ${\cal C}^u \cong {{\cal L}}_{G^*}$ when $S={\mathbb S}^2$.
Knot theorists recognize $L_G$ and $L_{G^*}$ as Goeritz matrices for the link $\ell$ described by the diagram. Both ${{\cal L}}_G$ and ${{\cal L}}_{G^*}$ are isomorphic to $H_1(M_2; \mathbb Z),$ where $M_2$ is the 2-fold cover of ${\mathbb S}^3$ branched over $\ell$ (see Remark \ref{homology}). In particular, ${{\cal L}}_G \cong {{\cal L}}_{G^*}$. We will give an alternative proof of this fact, independent of algebraic topology.
\begin{prop}\label{iso} If ${\cal D}$ is a link diagram in ${\mathbb S}^2$, then the Laplacian groups ${{\cal L}}_G, {{\cal L}}_{G^*}$ are isomorphic.
\end{prop}
\begin{proof} Consider the short exact sequence of abelian groups
$$0 \to {\cal C}^s \hookrightarrow {\cal C} \to {\cal C}/{\cal C}^s \to 0.$$
The relation $v_1-v_2= u_1 -u_2$ in ${\cal C}$ becomes $0 = u_1-u_2$ in the quotient ${\cal C}/{\cal C}^s$, and hence face generators become equal whenever they share an edge in the graph $G^*$. Since we assume that all regions of ${\cal D}$ are contractible, $G^*$ is connected. The quotient is infinite cyclic, and the short exact sequence splits. Hence ${\cal C} \cong {\cal C}^s \oplus \mathbb Z$. The same argument applied to ${\cal C}^u$ shows that ${\cal C} \cong {\cal C}^u \oplus \mathbb Z$. It follows from the structure theorem for finitely generated abelian groups that ${\cal C}^s$ and ${\cal C}^u$ are isomorphic. Since
${\cal C}^s$ and ${\cal C}^u$ are isomorphic to ${{\cal L}}_G$ and ${{\cal L}}_{G^*}$, respectively, the proof is complete.
\end{proof}
\begin{remark} Proposition \ref{iso} can be proved in yet a third way, using the fact that the Tait graph $G$ can be converted to its dual $G^*$ by a sequence of Reidemeister moves \cite{YK57}. (See also \cite{SW19'}.)
\end{remark}
\section{The Dehn Coloring Module} The Dehn coloring group ${\cal C}$ and the Laplacian group ${\cal L}_G$ associated to a checkerboard shaded diagram ${\cal D}$ of a link can be made stronger invariants, as done in \cite{SW19'}, using homological information from $S$. Then ${\cal C}$ and ${\cal L}_G$ become modules over the group ring of $H_1(S; \mathbb Z)$.
We think of $H_1(S; \mathbb Z)\cong \mathbb Z^{2g}$ as the multiplicative abelian group freely generated by $x_1, y_1, \ldots, x_g, y_g$. Recall that these generators are represented by a bouquet of simple oriented loops in $S$ (denoted by the same symbols). The universal abelian cover $\tilde S$ of $S$ has deck transformation group $A(\tilde S)$ that is isomorphic to $H_1(S; \mathbb Z)$. The Dehn coloring module and Laplacian module that we will define are modules over the ring ${\Lambda} = \mathbb Z[x_1^{\pm 1}, y_1^{\pm 1}, \ldots, x_g^{\pm 1}, y_g^{\pm 1}]$ of Laurent polynomials. In this section we will let ${\cal C}$ and ${\cal L}_G$ denote these modules.
Again, we view $S$ as a $2g$-gon with sides $x_1, y_1, \ldots, x_g, y_g$ identified. We may also think of the $2g$-gon as a fundamental region of the universal abelian cover $\tilde S$. In order to define the module ${\cal C}$,
label regions of the diagram by $A, B,C, \ldots$. If a region $A$ of ${\cal D}$ is divided into several subregions in the $2g$-gon, then we choose one subregion to receive the label $A$. Assume that $A'$ is another subregion. If $w$ is the element of $A(\tilde S)$ such that $A$ and $A'$ are in the same region of $\tilde S$,
then replace $A'$ by the label $wA$. (An example is seen in Figure \ref{WDex}.) The \textit{Dehn coloring module} ${\cal C}$ is a module over the group ring ${\Lambda}$ with generators $A, B, C, \ldots$. Defining relations are as for the Dehn coloring group (Figure \ref{Dehn coloring}). We will once more refer to shaded and unshaded generators.
The \textit{Laplacian matrix} is given by $L_G=\delta-A$ where $\delta$ is as before, but the adjacency matrix $A$ now has coefficients in ${\Lambda}$. Edge weights $\pm 1$ are replaced by $\pm w$, where $w$ is the element of $A(\tilde S)$ determined by following the edge from $v_i$ to $v_j$. (See Example \ref{example}.)
The \textit{Laplacian module} ${\cal L}_G$ is the cokernel of the matrix.
It is reassuring to note that if all generators $x_k, y_k$ are set equal to $1$, then ${\cal C}$, $L_G$ and ${\cal L}_G$ become the Dehn coloring group, Laplacian matrix and Laplacian group, respectively, of the previous section. Using Reidemeister moves we see that ${\cal C}$ and ${\cal L}_G$ are link invariants. The reader can verify this or see the proof given in \cite{SW19'}.
The \textit{Laplacian polynomial} $\Delta _G$ is defined to be the module order of ${\cal L}_G$, which is found as the determinant of $L_G$.
When $G$ is replaced by the dual graph $G^*$, we obtain the module order $\Delta_{G^*}$. The pair $\{\Delta_G, \Delta_{G^*}\}$ is an invariant of the link. While the two polynomials are not generally equal (see \cite{SW19'}) we have the following.
\begin{prop} \label{same} Let ${\cal D}$ be a checkerboard shaded diagram of a link in a torus. If $G, G^*$ are its Tait graphs, then $\Delta_G = \Delta_{G^*}$.
\end{prop}
\begin{proof} As in Section \ref{color}, we let ${\cal C}^s$ be the submodule of ${\cal C}$ generated by shaded generators.
The argument that follows Definition \ref{lapdef} can be modified to show that relations in ${\cal C}^s$ are obtained by eliminating unshaded generators along null-homologous closed paths in $S$. In particular, the relations $R_v$ corresponding to rows of $L_G$ arise from small loops encircling vertices of $G$. (To see this, it helps to view the $2g$-gon within the universal abelian cover $\tilde S$.)
Since $S$ is assumed to be a torus, all null-homologous closed paths are contractible. The relations $R_v$ suffice to generate the relations of ${\cal C}^s$. Hence ${\cal C}^s \cong {\cal L}_G$.
Consider the effect on the module ${\cal C}$ of setting all shaded generators equal to $0$. Using Dehn relations, we find that all of the unshaded generators are equal; that is, ${\cal C}/{\cal C}^s \cong \mathbb Z$.
We have the short exact sequence:
\begin{equation} 0 \to {\cal C}^s \to {\cal C} \to \mathbb Z \to 0 \end{equation}
The module order $\Delta_0({\cal C})$ is equal to the product of the orders of ${\cal C}^s$ and $\mathbb Z$. (See \cite{Mi68}, for example). The module order of ${\cal C}^s$ is $\Delta_G$ while that of $\mathbb Z$ is 1. Hence $\Delta_0({\cal C}) = \Delta_G$.
Replacing $G$ with its dual (or equivalently, reversing the checkerboard shading) yields $\Delta_0({\cal C}) = \Delta_{G^*}$. Hence $\Delta_G = \Delta_{G^*}.$
\end{proof}
\begin{example} \label{example} Consider the diagram ${\cal D}$ of a 3-component link in the thickened torus that appears in Figure \ref{WDex}(i) with generators indicated for the Wirtinger group $\pi_{wirt}$.
\begin{figure}[H]
\begin{center}
\includegraphics[height=4 in]{WDex}
\caption{3-component link: (i) generators for $\pi_{wirt}$; (ii) generators for $\pi_{dehn}$; (iii) module generators for ${\cal C}$; (iv) Tait graphs $G$ (black) and $G^*$ (purple). }
\label{WDex}
\end{center}
\end{figure}
One checks that
$$\pi_{wirt} \cong \langle a, b, c, d \mid ab=ba,\ ac =da, \ bc=db \rangle.$$
The group $\pi_{wirt}$ is more easily recognized by eliminating a generator and introducing a new generator via Tietze transformations. We use the last relation, written as $d = b c b^{-1}$, to eliminate $d$. Then introduce $e = a^{-1}b$, and eliminate $b$. Consequently,
$$\pi_{wirt} \cong \langle a, c, e \mid ae=ea,\ ce=ec \rangle,$$
which is the free product of two copies of $\mathbb Z^2$ amalgamated over the infinite cyclic subgroup generated by $e$. \smallskip
Theorem \ref{DW} implies that $\pi_{dehn}$ is isomorphic to the quotient of $\pi_{wirt}$ by the
relations $a=b, b=c$, and hence $\pi_{dehn} \cong \mathbb Z$.
We can see this directly. Using
the generators of Figure \ref{WDex}(ii), we have:
$$\pi_{dehn} \cong \langle A, B \mid A =A^{-1}B, \ B^{-1}A=A^{-1}, \ A^{-1}B=A \rangle \cong \langle A, B \mid B = A^2 \rangle \cong \langle A \mid \, \rangle \ \cong \mathbb Z.$$
With the labeled generators of Figure \ref{WDex}(iii), we obtain a presentation of the Dehn coloring module:
$${\cal C}\cong \langle A, B, C \mid A + C = B + xy^{-1} A,\ yB + xA= A + C,\ C + xA = xy^{-1} A + x B \rangle.$$
Its module order is
$$\Delta_0({\cal C}) = 2-x-x^{-1}+y+y^{-1} - x y^{-1}-x^{-1}y.$$
By Proposition \ref{same} the module order $\Delta_0({\cal C})$ agrees with both $\Delta_G$ and $\Delta_{G^*}$.
This is easily verified: the polynomial $\Delta_G$ is the determinant of
$$\begin{pmatrix} 1 & 1-x^{-1} -y^{-1} \\ 1-x -y& 1 \end{pmatrix}.$$ To see the first row of the matrix, for example, note that vertex $v_1$ in $G$ is incident to $x^{-1} v_2, y^{-1}v_2$ by edges of weight $1$, and it is incident to $v_2$ by an edge of weight $-1$.
Similarly, $\Delta_{G^*}$ is the determinant of the $1 \times 1$ matrix
$$(2-x-x^{-1}+y+y^{-1}-xy^{-1}-x^{-1}y).$$
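As a quick consistency check (our own sketch, not part of the paper), setting $x=y=1$ should recover the integer Laplacian, whose rows sum to zero; so both the Laplacian polynomial and the determinant of the $2\times 2$ matrix above must vanish there.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Laplacian polynomial of the 3-component link example
Delta_G = 2 - x - 1/x + y + 1/y - x/y - y/x

# The 2x2 Laplacian matrix over Laurent polynomials from the example
L = sp.Matrix([[1, 1 - 1/x - 1/y],
               [1 - x - y, 1]])

# At x = y = 1 the matrix becomes an integer graph Laplacian, whose
# rows sum to zero, so the polynomial and the determinant both vanish.
print(Delta_G.subs({x: 1, y: 1}))            # 0
print(sp.simplify(L.det().subs({x: 1, y: 1})))  # 0
```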
\end{example}
\begin{remark} (i) The modules ${\cal L}_G$ and ${\cal L}_{G^*}$ in Example \ref{example} are easily seen to be isomorphic. (Simply eliminate one of the generators from the presentation for ${\cal L}_G$.) For arbitrary links in a thickened torus, however, the two modules need not be isomorphic. See Example 3.8 of \cite{SW19'}. \smallskip
(ii) The graph $G$ in Example \ref{example} is sometimes called a ``theta graph." Note that one edge has weight $-1$. By changing the location of the weight to different edges, we obtain three theta graphs $G_1, G_2, G_3$ corresponding to 3-component links $\ell_1, \ell_2, \ell_3$ (respectively) in the thickened torus $S$. As discussed in \cite{SW19'}, each of the links can be transformed into any other by a homeomorphism of $S\times I$, but they cannot be transformed by isotopy. The latter claim is seen by computing the three polynomials $\Delta_{G_1}, \Delta_{G_2}, \Delta_{G_3}$, which are easily seen to be different. \smallskip

(iii) The Dehn coloring group and the Laplacian group are invariants of the link up to homeomorphism of the thickened surface. On the other hand, the Laplacian module and polynomial require a homology basis $x_i, y_i$ for $S$. Such a basis acts like a coordinate system. With it we can compare links in $S$.
However, if we regard ${\cal L}_G$ and $\Delta_G$ up to symplectic change of basis, then they become invariants that are independent of the choice of basis. See \cite{SW19'} for details. \end{remark}
| {
"timestamp": "2020-05-05T02:32:31",
"yymm": "2005",
"arxiv_id": "2005.01576",
"language": "en",
"url": "https://arxiv.org/abs/2005.01576",
"abstract": "Using a combinatorial argument, we prove the well-known result that the Wirtinger and Dehn presentations of a link in 3-space describe isomorphic groups. The result is not true for links $\\ell$ in a thickened surface $S \\times [0,1]$. Their precise relationship, as given in the 2012 thesis of R.E. Byrd, is established here by an elementary argument. When a diagram in $S$ for $\\ell$ can be checkerboard shaded, the Dehn presentation leads naturally to an abelian \"Dehn coloring group,\" an isotopy invariant of $\\ell$. Introducing homological information from $S$ produces a stronger invariant, $\\cal C$, a module over the group ring of $H_1(S; {\\mathbb Z})$. The authors previously defined the Laplacian modules ${\\cal L}_G,{ \\cal L}_{G^*}$ and polynomials $\\Delta_G, \\Delta_{G^*}$ associated to a Tait graph $G$ and its dual $G^*$, and showed that the pairs $\\{{\\cal L}_G, {\\cal L}_{G^*}\\}$, $\\{\\Delta_G, \\Delta_{G^*}\\}$ are isotopy invariants of $\\ell$. The relationship between $\\cal C$ and the Laplacian modules is described and used to prove that $\\Delta_G$ and $\\Delta_{G^*}$ are equal when $S$ is a torus.",
"subjects": "Geometric Topology (math.GT)",
"title": "Group Presentations for Links in Thickened Surfaces"
} |
https://arxiv.org/abs/0710.2020 | Valiron's construction in higher dimension | We consider holomorphic self-maps $\v$ of the unit ball $\B^N$ in $\C^N$ ($N=1,2,3,...$). In the one-dimensional case, when $\v$ has no fixed points in $\D\defeq \B^1$ and is of hyperbolic type, there is a classical renormalization procedure due to Valiron which allows to semi-linearize the map $\phi$, and therefore, in this case, the dynamical properties of $\phi$ are well understood. In what follows, we generalize the classical Valiron construction to higher dimensions under some weak assumptions on $\v$ at its Denjoy-Wolff point. As a result, we construct a semi-conjugation $\sigma$, which maps the ball into the right half plane of $\C$, and solves the functional equation $\sigma\circ \v=\lambda \sigma$, where $\lambda>1$ is the (inverse of the) boundary dilation coefficient at the Denjoy-Wolff point of $\v$. | \section{Introduction}
\subsection{The one-dimensional case}
Let $\varphi$ be a
holomorphic map on $\mathbb D$ with $\varphi(\mathbb D)\subset\mathbb D$. If $\varphi$ has no fixed points in $\mathbb D$, then by
the classical Wolff lemma (see, {\sl e.g.}, \cite{Abate}) there
exists a unique point $\tau\in\partial\mathbb D$, called {\sl the Denjoy-Wolff
point of $\varphi$}, such that the sequence of iterates $\{\varphi^{\circ n}\}$ of
$\varphi$ converges uniformly on compacta to the constant map
$\zeta\mapsto \tau$. Also, by the classical
Julia-Wolff-Caratheodory theorem, $\tau$ is a fixed point (as
nontangential limit) for $\varphi$ and the first derivative $\varphi'$
has nontangential limit $c\in (0,1]$ at $\tau$, moreover,
\[
c=\liminf_{\zeta\to\tau}\frac{1-|\varphi(\zeta)|}{1-|\zeta|}.
\]
The number $c$ is called the {\sl multiplier} of $\varphi$ or the
{\sl boundary dilatation coefficient} at $\tau$. The map $\varphi$
is called {\sl hyperbolic} if $c<1$ and {\sl parabolic} if
$c=1$.
Geometrically, one defines the horodisks $H(t)\mathrel{\mathop:}=\{z\in\mathbb D:
|\tau-z|^2/(1-|z|^2)<1/t\}$, which are disks in $\mathbb D$ internally tangent
to $\partial \mathbb D$ at $\tau$, and which get smaller as $t$ gets larger.
Then the following mapping property holds: $\varphi(H(t))\subset H(t/c)$. In formulas:
\[
\frac{|\tau-\varphi(z)|^2}{1-|\varphi(z)|^2}\leq c\frac{|\tau-z|^2}{1-|z|^2},
\]
for every $z\in \mathbb D$.
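For a concrete check (our own numerical illustration, not part of the paper), conjugating the hyperbolic half-plane automorphism $w\mapsto 2w$ back to $\mathbb D$ via the Cayley map gives a hyperbolic self-map with Denjoy-Wolff point $\tau=1$ and multiplier $c=1/2$; sampling random points confirms the horodisk inequality (in fact with equality, since the map is an automorphism).

```python
import math, random

tau = 1.0
C    = lambda z: (1 + z) / (1 - z)      # Cayley map: D -> H, tau = 1 -> infinity
Cinv = lambda w: (w - 1) / (w + 1)
phi  = lambda z: Cinv(2 * C(z))         # hyperbolic self-map of D, c = 1/2

def julia_quotient(z):
    # the quantity |tau - z|^2 / (1 - |z|^2) from the horodisk inequality
    return abs(tau - z) ** 2 / (1 - abs(z) ** 2)

random.seed(0)
for _ in range(1000):
    r = 0.95 * math.sqrt(random.random())        # random point of the disk
    th = random.uniform(0, 2 * math.pi)
    z = r * complex(math.cos(th), math.sin(th))
    assert julia_quotient(phi(z)) <= 0.5 * julia_quotient(z) * (1 + 1e-9)
print("horodisk inequality verified at 1000 sample points")
```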
In 1931 G. Valiron
\cite{va1} (see also \cite{va2} and \cite{Br-Po}) proved that
if $\varphi$ is hyperbolic then there exists a nonconstant
holomorphic map $\theta:\mathbb D\to \mathbb H:=\{w\in \mathbb C: {\sf Re}\, w>0\}$ which
solves the so-called {\sl Schr\"oder equation}:
\begin{equation}\label{schroder}
\theta \circ \varphi= \frac{1}{c} \theta.
\end{equation}
Valiron constructs the map $\theta$ as follows.
First, in order to simplify notation, one can move to the
right half-plane $\mathbb H$ via the Cayley map
$C(\zeta)=(\tau+\zeta)/(\tau-\zeta)$, which takes
$\tau$ to $\infty$ and conjugates $\varphi$ to a self-map $\phi:=C\circ
\varphi \circ C^{-1}$ of $\mathbb H$, with Denjoy-Wolff point $\infty$
and multiplier $1/c$. Then, one considers the orbit
$x_n+iy_n:=\phi^{\circ n}(1)$ of the point $w=1$, and studies
the sequence of renormalized iterates:
\begin{equation}\label{valiron-method}
\sigma_n(w):= \frac{\phi^{\circ n}(w)}{x_n}.
\end{equation}
Valiron showed that $\{\sigma_n\}$ converges to a holomorphic
map $\sigma:\mathbb H\to \mathbb H$ such that $\sigma\circ
\phi=\frac{1}{c}\sigma$. Thus $\theta:=\sigma \circ C$ solves
\eqref{schroder}.
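Valiron's renormalization is easy to watch numerically. Here is a sketch of ours (not from the paper), with the illustrative choice $\phi(w)=2w+1$, a hyperbolic self-map of $\mathbb H$ with Denjoy-Wolff point $\infty$ and multiplier $1/c=2$: the renormalized iterates converge to $\sigma(w)=(w+1)/2$, which indeed satisfies $\sigma\circ\phi=2\sigma$.

```python
def phi(w):
    # a hyperbolic self-map of the right half-plane H with multiplier 1/c = 2
    return 2 * w + 1

def sigma_n(w, n):
    """Valiron's renormalized iterate  phi^n(w) / x_n,  x_n = Re phi^n(1)."""
    zw, z1 = w, complex(1, 0)
    for _ in range(n):
        zw, z1 = phi(zw), phi(z1)
    return zw / z1.real

w = 0.3 + 0.7j
limit = (w + 1) / 2                           # the expected Valiron map sigma
print(abs(sigma_n(w, 60) - limit))            # ~ 0 : sigma_n -> sigma
print(abs(sigma_n(phi(w), 60) - 2 * limit))   # ~ 0 : sigma(phi(w)) = 2 sigma(w)
```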
After Valiron's construction, Ch. Pommerenke \cite{pom},
\cite{pom2}, C. Cowen \cite{Co} and P. Bourdon and J. Shapiro
\cite{bs} exploited other constructions to solve
\eqref{schroder} (and the corresponding Abel's equation for the
parabolic case). In particular, Pommerenke's approach in
\cite{pom} is based on a slightly different, but equivalent,
renormalization which replaces \eqref{valiron-method}.
The approach in \cite{pom2}, which works for random iteration
sequences, needs some regularity hypothesis. On the other hand,
Cowen's construction \cite{Co} is based on an abstract model
relying strongly on the Riemann uniformization theorem.
Finally, Bourdon and Shapiro's construction is based upon a
different renormalization process which works only with
some further regularity of $\varphi$ at $\tau$, but also guarantees
some stronger regularity properties for the semi-conjugation $\theta$.
In \cite[Prop. 6]{Br-Po} the first and last named authors
proved that actually all those different methods (when
applicable) provide essentially the same solution. Namely, if
$\tilde{\sigma}:\mathbb D\to\mathbb H$ is another (nonconstant) solution of
the functional equation \eqref{schroder} then there exists
$\lambda>0$ such that $\tilde{\sigma}=\lambda \sigma$.
Moreover, Valiron showed that $\sigma$ comes with some guaranteed, but
weak, regularity properties at $\tau$. In function theory language,
$\sigma$ is semi-conformal (or isogonal) at $\tau$, namely,
$\sigma$ fixes $\infty\in \partial\mathbb H$ non-tangentially and
${\sf Arg}\,\sigma$ has non-tangential limit $0$ at $\infty$.
As shown in \cite{Br-Po}, the semi-conformality of $\sigma$ is
essentially responsible for the uniqueness properties of $\sigma$ and
for the following dynamical properties of
$\phi$: for every orbit $z_n\mathrel{\mathop:}= \phi^{\circ n}(z_0)$, ${\sf Arg}\, z_n$ tends to a
limit $\alpha(z_0)\in (-\pi/2,\pi/2)$ which depends harmonically on
$z_0$, and conversely, given an angle $\alpha\in (-\pi/2,\pi/2)$ one
can always find an orbit whose limiting argument is $\alpha$.
\subsection{Valiron's method in higher dimensions}
In $\mathbb C^N$, $N=2,3,...$, we let $\pi_j:\mathbb C^N\rightarrow\mathbb C$,
$j=1,...,N$, be the coordinate mappings; the usual inner product is
$\langle z_1, z_2 \rangle\mathrel{\mathop:}=\sum_{j=1}^N z_{1,j} \overline{z_{2,j}}$, where
$z_{n,j}=\pi_j(z_n)$; the norm is $\|z\|^2\mathrel{\mathop:}=\langle z,z\rangle$. The unit
ball $\mathbb B^N$ is $\{z\in \mathbb C^N: \|z\|^2<1\}$.
Let $\varphi$ be a holomorphic self-map of $\mathbb B^N$.
If $\varphi$ has no fixed points in $\mathbb B^N$ then B. MacCluer
\cite{Mac} proved that the Denjoy-Wolff theorem still holds.
Namely, the sequence of iterates of $\varphi$, $\{\varphi^{\circ n}\}$, converges
uniformly on compacta to the constant map $z\mapsto \tau$, for a
(unique) point $\tau\in\partial{\mathbb B^N}$ (called again the {\sl
Denjoy-Wolff point} of $\varphi$). Like in the one-dimensional
case, the number
\[
c :=\liminf_{z\rightarrow \tau}\frac{1-\|\varphi (z)\|}{1-\|z\|},
\]
belongs to $(0,1]$ and is called the {\sl multiplier} of $\varphi$
or the {\sl boundary dilatation coefficient} of $\varphi$ at $\tau$.
Also, $\tau$ is a fixed point in the sense of non-tangential
limits (and actually in the sense of $K$-limits as we define
below). However, in this case the differential of $\varphi$ might
not have nontangential limit at $\tau$. The map $\varphi$ is called
{\sl hyperbolic} if $c<1$ and {\sl parabolic} if $c=1$.
Here too $\varphi$ preserves certain ellipsoids internally tangent to $\partial \mathbb B^N$
at $\tau$: defining
\begin{equation}\label{eq:horoball}
E(t)\mathrel{\mathop:}=\left\{z\in\mathbb B^N: \frac{|1-\langle z,
\tau\rangle|^2}{1-\|z\|^2}<1/t\right\},\end{equation}
then $\varphi(E(t))\subset E(t/c)$. In formulas,
\begin{equation}\label{eq:julia}
\frac{|1-\langle \varphi(z),\tau\rangle|^2}{1-\|\varphi(z)\|^2}\leq c\frac{|1-\langle
z,\tau\rangle|^2}{1-\|z\|^2},
\end{equation}
for every $z\in \mathbb B^N$.
Assuming some regularity for $\varphi$ at $\tau$, in the spirit of
Bourdon-Shapiro, in \cite{Br-Ge} the first and the second named
authors proved that, if $\varphi$ is hyperbolic, one can solve the
following functional equation:
\[
\sigma \circ \varphi=A \sigma,
\]
where $\sigma:\mathbb B^N\to\mathbb C^N$ is a nonconstant holomorphic map with good
regularity properties at $\tau$, and where
$A$ is the matrix $d\varphi_\tau$. Recently such a result has been improved in
$\mathbb B^2$ by F. Bayart assuming less regularity for $\varphi$ at $\tau$
(see \cite{Ba} where also the parabolic case is considered).
On the other hand, the first and third named authors in
\cite{Br-Po} have shown that, for every hyperbolic self-map
$\varphi$, i.e., with no regularity assumptions at $\tau$, and for
each orbit $z_n=\varphi^{\circ n}(z_0)$, there is a Koranyi region
$K(\tau,R)$ such that $z_n$ tends to $\tau$ while staying
in $K(\tau,R)$. Recall that, for $R>1/2$, the {\sl $R$-Koranyi
approach region} at $\tau$ is a region of the form
\begin{equation}\label{koranji}
K(\tau,R)\mathrel{\mathop:}=\{z\in\mathbb B^N: |1-\langle z,\tau\rangle|<R (1-\|z\|^2)\}.
\end{equation}
The original aim, when looking for semi-conjugations in the one-dimensional
case, was to show that general hyperbolic self-maps have a
dynamical behavior similar to that of the hyperbolic
automorphisms sharing the same attracting fixed point.
In higher dimensions, however, it is easy to construct maps whose image
lies in a subvariety of positive codimension, and thus automorphisms alone
do not seem to be enough to model the dynamics of such maps (although
one may try to consider automorphisms of lower dimensional balls).
Also, the fact that the differential of $\varphi$ does not in general have a
non-tangential limit at $\tau$ shows that, before trying to
semi-conjugate $\varphi$ to an automorphism on a higher-dimensional ball,
it is preferable to study
the following ``one-dimensional'' equation first.
\begin{problem}\label{prob:onedim}
Find a nonconstant holomorphic map
$\Theta:\mathbb B^N\to \mathbb H\subset\mathbb C$ such that
\begin{equation}\label{valiron-piu-dim}
\Theta\circ \varphi=\frac{1}{c} \Theta.
\end{equation}
\end{problem}
The aim of this paper is to try to solve Problem \ref{prob:onedim} by
generalizing the method of Valiron to higher dimensions.
As in the one-dimensional
case, it is more convenient to move to the Siegel domain
\begin{equation}\label{eq:siegel}
\mathbb H^N:=\{(z,w)\in\mathbb C\times\mathbb C^{N-1}: {\sf Re}\, z>\|w\|^2\}
\end{equation}
which is biholomorphic to $\mathbb B^N$ via the Cayley transform
$\mathcal C:\mathbb B^N\to \mathbb H^N$ defined as
\begin{equation}\label{eq:caley}
\mathcal C(\zeta_1, \zeta'):=\left(\frac{1+\zeta_1}{1-\zeta_1},
\frac{\zeta'}{1-\zeta_1}\right).
\end{equation}
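A direct computation confirms that $\mathcal C$ maps $\mathbb B^N$ onto $\mathbb H^N$: if $(z,w)=\mathcal C(\zeta_1,\zeta')$, then
\[
{\sf Re}\, z-\|w\|^2={\sf Re}\,\frac{1+\zeta_1}{1-\zeta_1}-\frac{\|\zeta'\|^2}{|1-\zeta_1|^2}
=\frac{1-|\zeta_1|^2-\|\zeta'\|^2}{|1-\zeta_1|^2}=\frac{1-\|(\zeta_1,\zeta')\|^2}{|1-\zeta_1|^2},
\]
which is positive precisely when $(\zeta_1,\zeta')\in\mathbb B^N$; moreover, $\mathcal C(\zeta_1,\zeta')\to\infty$ as $\zeta_1\to 1$.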
Thus, if
$\phi:\mathbb H^N\to \mathbb H^N$ is a hyperbolic holomorphic map with
Denjoy-Wolff point $\infty$ and multiplier $1/c$, we
define the following sequence
\begin{equation}\label{valivali}
\sigma_n(z,w):=\frac{\pi_1\circ \phi^{\circ n}(z,w)}{x_n},
\end{equation}
where $\pi_1(z,w):=z$ is the projection onto the first
component and $x_n={\sf Re}\, \pi_1 (\phi^{\circ n}(1,0))$. For short we will
say that the {\sl Valiron method works} whenever the sequence
$\{\sigma_n\}$ converges uniformly on compacta.
Our main result is the following:
\begin{maintheorem}
Let $\varphi:\mathbb B^N\to\mathbb B^N$ be a hyperbolic holomorphic self-map with
Denjoy-Wolff point $\tau\in\partial\mathbb B^N$ and multiplier $c<1$. If
\begin{enumerate}
\item there exists $z_0\in\mathbb B^N$ such that the sequence
$\{\varphi^{\circ n}(z_0)\}$ is {\sl special} and
\item $\displaystyle{\hbox{K-}\lim_{z\to \tau} \frac{1-\langle \varphi(z),\tau\rangle}{1-\langle z, \tau\rangle}}$ exists,
\end{enumerate}
then the Valiron method works and there exists a nonconstant
holomorphic function $\Theta:\mathbb B^N\to \mathbb H$ such that
$\Theta\circ \varphi=\frac{1}{c}\Theta$.
\end{maintheorem}
In order to explain our hypotheses (1) and (2), we recall that
a sequence $\{z_n\}\subset \mathbb B^N$ converging to a point
$\tau\in\partial\mathbb B^N$ is said to be {\sl special} if
\[
\lim_{n\to \infty}\frac{\|z_n-\langle z_n,\tau\rangle \tau\|^2}{1-|\langle
z_n,\tau\rangle|^2}=0,
\]
or, equivalently, the Kobayashi distance $k_{\mathbb B^N}(z_n, \langle
z_n,\tau\rangle\tau)$ between $z_n$ and its projection along
$\tau$ tends to zero as $n\to \infty$.
For the definition and properties of the Kobayashi distance we
refer to \cite{Kob} or \cite{Abate}; we will only use the
fact that the Kobayashi distance is invariant under biholomorphisms
and that $k_{\mathbb B^N}(0,z)=\tanh^{-1}(\|z\|)$.
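In fact, using the well-known explicit formula for the Kobayashi distance of $\mathbb B^N$ (see, {\sl e.g.}, \cite{Abate}), the equivalence above can be checked directly: for $w=\langle z,\tau\rangle\tau$ one has $1-\langle z,w\rangle=1-|\langle z,\tau\rangle|^2=1-\|w\|^2$, so that
\[
k_{\mathbb B^N}(z,\langle z,\tau\rangle\tau)=\tanh^{-1}\sqrt{1-\frac{(1-\|z\|^2)(1-\|w\|^2)}{|1-\langle z,w\rangle|^2}}
=\tanh^{-1}\sqrt{\frac{\|z-\langle z,\tau\rangle\tau\|^2}{1-|\langle z,\tau\rangle|^2}},
\]
because $\|z\|^2-|\langle z,\tau\rangle|^2=\|z-\langle z,\tau\rangle\tau\|^2$.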
Moreover,
a function $h:\mathbb B^N\to \mathbb C$ has {\sl K-limit} $L$ at
$\tau\in\partial\mathbb B^N$, K-$\lim_{z\to \tau} h(z)=L$, if for any $R>1/2$
and any sequence $\{z_n\}\subset K(\tau, R)$ converging to
$\tau$ it follows that $\lim_{n\to \infty}h(z_n)=L$ (see
\cite{Abate} or \cite{Ru}).
Notice that if $\varphi:\mathbb B^N\to\mathbb B^N$ is a hyperbolic holomorphic
self-map with Denjoy-Wolff point $\tau\in\partial\mathbb B^N$ and
multiplier $c<1$, then Rudin's version of the classical
Julia-Wolff-Caratheodory theorem (see \cite[Thm. 8.5.6]{Ru} or
\cite[Thm. 2.2.29]{Abate}) implies that
\begin{equation}\label{eq:rjwc}
\lim_{n\to\infty}\frac{1-\langle \varphi(z_n),\tau\rangle}{1-\langle z_n,
\tau\rangle}=c
\end{equation}
for all sequences $\{z_n\}\subset\mathbb B^N$ converging to $\tau$
such that $\{z_n\}$ is special and $\{\langle z_n, \tau\rangle\}$
converges to $1$ nontangentially in $\mathbb D$. Such a limit is
called {\sl restricted K-limit}. It is easy to show
that the existence of a K-limit implies that of a restricted K-limit,
but the converse fails in general. Thus,
hypothesis (2) is a non-trivial requirement.
Condition (1) is not always easy to verify, unless, say, the
map $\varphi$ happens to fix (as a set) a slice ending at $\tau$.
For instance, under the regularity assumptions of \cite{Br-Ge}
it follows that (2) holds, but it is not clear, {\em ex ante},
that (1) must also hold. On the other hand, once the
semi-conjugation is established in \cite{Br-Ge}, with good
regularity properties, then it is easy to verify that (1) had
to hold, {\em ex post}. In fact, we do not know of any explicit
example where (1) fails, so it could be the case that (1) is
actually a superfluous hypothesis for the Main Theorem.
\subsection{An example}\label{ssec:example}
The following is an example of a map as in
the Main Theorem satisfying condition (1) but not (2)
and for which the Valiron method still works.
Consider the map
\[
\phi:\mathbb H^{2}\ni(z,w)\mapsto (Az+Aw^2\psi(z), 0)
\]
where $\psi:\mathbb H\to \mathbb D$ is any holomorphic function and $A>1$.
Then clearly, $\phi(\mathbb H^2)\subset \mathbb H^2$, $\infty$ is the
Denjoy-Wolff point of $\phi$, the multiplier is $A>1$ and the
sequence $\{\phi^{\circ n}(1,0)\}=\{(A^n,0)\}$ is special.
Moreover,
\[
\phi^{\circ n}(z,w)=\left(A^n z+A^n w^2 \psi(z),\, 0\right).
\]
Hence
\[
\sigma_n(z,w):=\frac{\pi_1\circ \phi^{\circ n}(z,w)}{x_n}=z+w^2\psi(z).
\]
Therefore $\{\sigma_n\}$ does not depend on $n$, and one checks
directly that the map $\sigma(z,w):=z+w^2\psi(z)$ solves $\sigma
\circ \phi=A\sigma$: indeed, $\sigma(\phi(z,w))=Az+Aw^2\psi(z)=A\sigma(z,w)$.
Thus the Valiron method works. However, the
$K$-limit of $\frac{\phi_1(z,w)}{z}$ at $\infty$ does not exist
if $\psi$ does not have a non-tangential limit at $\infty$. In
particular, for such $\psi$, hypothesis (2) in the Main Theorem
is not satisfied.
It is interesting to note that for such an example, the crucial
equation \eqref{equi1} below becomes
\[
\frac{Ax_n z+Ax_n w^2\psi(x_nz)}{x_nz}=A+A\frac{w^2}{z}\psi(x_n
z),
\]
and the limit as $n\to \infty$ does not exist if $w\neq 0$.
In particular, the regularity hypothesis (2) in the Main
Theorem, while needed in our proof, is not necessary for
Valiron's method to work.
Our Main Theorem is proved in Section \ref{sec:proof}.
In order to prove it, in Section \ref{limiti} we introduce a
new characterization of K-limits for functions, which we then
develop in the Appendix into the notion of {\sl E-limits}. We
believe that the new understanding of K-limits which comes from
the study of our E-limits might be a useful tool for other
results. In the last section we include some further comments
and open questions.
\section{Preliminaries on K-Limits}\label{limiti}
As mentioned before, we work in the Siegel domain (\ref{eq:siegel}).
A direct computation using \eqref{koranji} and (\ref{eq:caley}) shows that the Koranyi
region $K(\tau,R)$ with vertex at $\tau$ and amplitude $R$ in $\mathbb B^N$
corresponds to one with vertex at $\infty$ and amplitude $M\mathrel{\mathop:}= 2R>1$ in $\mathbb H^N$
given by
\begin{equation}\label{eq:hkoranyi}
K(\infty,M)\mathrel{\mathop:}=\left\{(z,w)\in\mathbb H^N: \|w\|^2 < {\sf Re}\, z - \frac{|z+1|}{M}\right\}.
\end{equation}
To get a geometric feeling for these objects, notice that the
ellipsoids $E(t)$ defined in (\ref{eq:horoball}) correspond in $\mathbb H^N$
to the sets
\[
\mathcal E(T)\mathrel{\mathop:}=\{(z,w)\in \mathbb H^N: {\sf Re}\, z -\|w\|^2 > T\}
\]
for some $T>0$ fixed. So, in particular, a sequence in $K(\infty,M)$
tending to infinity will eventually be contained in every $\mathcal E(T)$:
indeed, inside $K(\infty,M)$ we have ${\sf Re}\, z-\|w\|^2>|z+1|/M$, and $z$
tends to infinity when $(z,w)\in \mathbb H^{N}$ tends to infinity.
Notice also that the property (\ref{eq:julia}) for a hyperbolic map
$\phi:\mathbb H^N\rightarrow\mathbb H^N$ with multiplier $A>1$ reads as follows:
\[
{\sf Re}\, \phi_1(z,w) -\|\phi^\prime(z,w)\|^2 \geq A({\sf Re}\, z-\|w\|^2)
\]
for every $(z,w)\in \mathbb H^N$.
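This reformulation follows from how the Julia quotient transforms under the Cayley transform \eqref{eq:caley}: taking $\tau=e_1$ and $(z,w)=\mathcal C(\zeta)$, one has
\[
\frac{|1-\langle \zeta,e_1\rangle|^2}{1-\|\zeta\|^2}=\frac{|1-\zeta_1|^2}{1-\|\zeta\|^2}=\frac{1}{{\sf Re}\, z-\|w\|^2},
\]
so that \eqref{eq:julia} yields ${\sf Re}\, \phi_1(z,w)-\|\phi'(z,w)\|^2\geq \frac{1}{c}\left({\sf Re}\, z-\|w\|^2\right)$, that is, the stated inequality with $A=1/c$.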
We will find it convenient to use an equivalent characterization of
K-limits. First we need a few definitions.
For $Z=(z,w)\in \mathbb H^N$, let $p(Z)\mathrel{\mathop:}= (z,0)$ be the projection of
$Z$ onto the complex line $L\mathrel{\mathop:}=\{(z,0): z\in \mathbb H\}\subset\mathbb H^N$.
\begin{definition}
Let $Z_{n}= (z_{n},w_{n})\in\mathbb H^N$ converge to $\infty$.
\begin{itemize}
\item[(i)] We say the
convergence is {\sl $C$-special} if there exists $0\leq C <\infty$ such that
\[
k_{\mathbb H^N}(Z_n, p(Z_n))\leq C, \qquad \forall n,
\]
where $k_{\mathbb H^{N}}$ is the Kobayashi distance on $\mathbb H^{N}$.
\item[(ii)]
We say the convergence is {\sl restricted} if $\{z_{n}\}$ converges
non-tangentially to $\infty$ in $\mathbb H$.
\end{itemize}
\end{definition}
\begin{remark}
The concepts just introduced of $C$-special and restricted
sequences are formulated using the complex geodesic $z\in
\mathbb H\mapsto (z,0)\in\mathbb H^{N}$ and the projection associated to
it. It turns out that being $C$-special and restricted
do not depend on the chosen complex geodesic with
$\infty$ in its boundary. This is used in the proof of the Main
Theorem and could be useful in domains other than $\mathbb H^{N}$ and
$\mathbb B^N$. For this reason, in the Appendix, Section
\ref{sec:appendix}, we provide a rigorous proof of this fact.
\end{remark}
\begin{remark}
A $0$-special sequence is simply referred to as {\sl special},
see also \cite{Abate} and \cite{Ru}.
\end{remark}
\begin{lemma}\label{lem:klimits}
Let $Z_{n}= (z_{n},w_{n})\in\mathbb H^N$ converge to $\infty$. Then, the
following are equivalent:
\begin{enumerate}
\item $Z_{n}$ stays inside a Koranyi region $K (\infty ,M)$ for some
$1<M<\infty$;
\item $Z_{n}$ is $C$-special, for some $C<\infty$, and is restricted;
\item There are $0<a<1$ and $0<T<\infty$ such that
\[
\|w_n\|^2\leq a{\sf Re}\, z_n\qquad\mbox{ and }\qquad |{\sf Im}\, z_n| \leq T {\sf Re}\, z_n.
\]
\end{enumerate}
\end{lemma}
The proof of Lemma \ref{lem:klimits} rests on the following
computation. For $Z=(z,w)\in\mathbb H^N$, we compute the Kobayashi distance in
$\mathbb H^N$ between $Z$ and $p(Z)$. Set $z=x+iy$ and
notice that the map $T(u,v)=(\frac{u-iy}{x},\frac{v}{\sqrt{x}})$ is
an automorphism of $\mathbb H^N$. Thus by invariance, we have
\begin{equation}\label{kob-iperplane}
\begin{split}
k_{\mathbb H^N}((z,0),(z,w))&=k_{\mathbb H^N}((1,0),
T(z,w))=k_{\mathbb B^N}(0, \mathcal
C^{-1}(T(z,w)))\\&=\tanh^{-1}
\|\mathcal
C^{-1}(T(z,w))\|=\tanh^{-1}\|(0,\frac{w}{\sqrt{x}})\|
=\tanh^{-1}\frac{\|w\|}{\sqrt{x}}.
\end{split}
\end{equation}
In other words, $k_{\mathbb H^N}(Z,p(Z))=\tanh^{-1}(\|w\|/\sqrt{{\sf Re}\, z})$ and
it is useful to
recall that $\tanh^{-1}(s)=\frac{1}{2}\log\frac{1+s}{1-s}$ is a positive increasing function on
$(0,1)$ with a vertical asymptote at $1$.
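In particular, by the monotonicity of $\tanh^{-1}$, for $Z=(z,w)\in\mathbb H^N$ and $C\geq 0$,
\[
k_{\mathbb H^N}(Z,p(Z))\leq C \quad\Longleftrightarrow\quad \|w\|^2\leq (\tanh C)^2\,{\sf Re}\, z,
\]
an equivalence which is used repeatedly in the proof below.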
\begin{proof}[Proof of Lemma \ref{lem:klimits}]
By \eqref{kob-iperplane}, a sequence $Z_n=(z_n,w_n)\in \mathbb H^N$ is $C$-special
for some $0<C<\infty$ if and only if
$\|w_n\|^2 \leq a {\sf Re}\, z_n$ for some $0<a<1$. In fact, $a=\tanh^2 C$.
Thus, since $|{\sf Im}\, z_n| \leq T{\sf Re}\, z_n$ is the usual formulation of
non-tangentiality in $\mathbb H$, we have that (2) and (3) are
equivalent.
Assuming (3) and writing $z_n=x_n+iy_n$, we have $|z_n+1|^2\leq
(1+T^2)x_n^2+2x_n+1$. Thus
\[
x_n-\frac{|z_n+1|}{M}\geq \left(1-\frac{\sqrt{1+T^2}}{M}\right)x_n+O(1),
\]
as $x_n$ tends to infinity. Choose $M$ large enough so that
$a<1-\sqrt{1+T^2}/M$. This
ensures that $Z_n\in K(\infty, M)$ for all $n$ large and, after
enlarging $M$ to include the finitely many remaining terms, for all $n$.
So (3) implies (1).
Conversely, assume that $Z_n\in K(\infty,M)$ for some
$1<M<\infty$. Then, since
\[
x_n-|z_n+1|/M\leq (1-1/M)x_n,
\]
by (\ref{eq:hkoranyi}), we have $\|w_n\|^2\leq
a{\sf Re}\, z_n$ with $a=1-1/M$. Also, $|z_n+1|\leq M{\sf Re}\, z_n$, so $|{\sf Im}\,
z_n|\leq M{\sf Re}\, z_n$. Hence, (1) implies (3).
\end{proof}
\section{The proof of the Main Theorem}\label{sec:proof}
We start by reformulating it in the context of $\mathbb H^{N}$.
\begin{maintheorem}[Siegel domain version]
Let $\phi=(\phi_1,\phi'):\mathbb H^N\to\mathbb H^N$ be holomorphic, with
Denjoy-Wolff point $\infty$ and multiplier $\lambda>1$. Assume
that
\begin{enumerate}
\item There exists $Z_0\in\mathbb H^N$ such that the sequence
$\{\phi^{\circ n}(Z_0)\}$ is special.
\item $\hbox{K-$\lim$}_{\mathbb H^N\ni (z,w)\to\infty}
\frac{\phi_1(z,w)}{z}$ exists.
\end{enumerate}
Then Valiron's method works and there exists a non-constant
holomorphic map $\sigma:\mathbb H^N\to\mathbb H$ such that
\[
\sigma \circ \phi=\lambda \sigma.
\]
\end{maintheorem}
\begin{remark}\label{rem:conj}
By considering $T\circ \phi \circ T^{-1}$, where $T$ is an
automorphism of $\mathbb H^N$ fixing $\infty$ and such that $T (Z_{0})=
(1,0)$, we can always assume that it is the sequence $\phi^{\circ n}
(1,0)$ that is special; see the proof of Lemma
\ref{specialspecial}. So we will make this assumption in the sequel.
\end{remark}
\begin{remark}\label{rem:valeVali}
The Valiron method is invariant under conjugation, namely, let
$\phi:\mathbb H^N\to\mathbb H^N$ be hyperbolic holomorphic with
Denjoy-Wolff point $\infty$, let $T$ be an automorphism of
$\mathbb H^N$ fixing $\infty$ and let $\tilde{\phi}:=T\circ \phi\circ T^{-1}$.
Then the sequence $\{\sigma_n\}:=\{(\pi_1\circ \phi^{\circ
n})/x_n\}$ given by \eqref{valivali} converges if and only if
the sequence $\{\tilde{\sigma}_n\}:=\{(\pi_1\circ \tilde{\phi}^{\circ
n})/\tilde{x}_n\}$ converges (here $\tilde{x}_n={\sf Re}\,
\pi_1(\tilde{\phi}^{\circ n}(1,0))={\sf Re}\, \pi_1(T(\phi^{\circ n}(
T^{-1}(1,0))))$). In fact, by a direct computation, it turns
out that if $\sigma_n\to \sigma$ as $n\to \infty$ then
$\tilde{\sigma}_n\to (x_0-\|w_0\|^2)\sigma \circ T^{-1}$, where
$(x_0+iy_0, w_0):=T^{-1}(1,0)$. We leave the details of such a
computation to the reader.
\end{remark}
We need a preliminary result.
\begin{lemma}\label{real and image}
Let $\phi=(\phi_1,\phi'):\mathbb H^N\to\mathbb H^N$ be holomorphic, with
Denjoy-Wolff point $\infty$ and multiplier $\lambda\geq 1$.
Assume the sequence $\{\phi^{\circ n}(1,0)\}$ is special. Write
$\phi^{\circ n}(1,0)=(z_n,w_n)$ and $z_{n}=x_{n}+iy_{n}$. Then
\begin{enumerate}
\item $\displaystyle{\lim_{n\to\infty}\frac{x_{n+1}}{x_n}=\lambda}$.
\item There exists $L\in \mathbb R$ such that $\displaystyle{ \lim_{n\to \infty}\frac{y_n}{x_n}=L.}$
\end{enumerate}
\end{lemma}
\begin{proof}
As proved in \cite[section
3.5]{Br-Po}, for any
fixed $Z\in\mathbb H^N$, the orbit $\{\phi^{\circ n}(Z)\}$ stays in a
Koranyi region with vertex at $\infty$ and so, in particular, it is
restricted. Therefore, there exists
$C>0$ such that for all $n\in\mathbb N$
\begin{equation}\label{restricted}
|y_n|\leq C x_n.
\end{equation}
By Rudin's version of the classical
Julia-Wolff-Caratheodory theorem (\ref{eq:rjwc}), reformulated in
$\mathbb H^{N}$ (see Theorem \ref{JWC} in the Appendix), since
$(z_{n},w_{n})$ is special and restricted, it follows that
\[
\lim_{n\to
\infty}\frac{z_{n+1}}{z_n}=\lim_{n\to\infty}\frac{\phi_1(z_n,w_n)}{z_n}=\lambda.
\]
In particular we can write
\begin{equation}\label{znuno}
z_{n+1}=\lambda z_n+ o(1)z_n.
\end{equation}
Dividing \eqref{znuno} by $x_n$
and taking the real part, we obtain $\frac{x_{n+1}}{x_n}=\lambda +
{\sf Re}\, o(1) - \frac{y_n}{x_n}{\sf Im}\, o(1)$. Taking the limit for $n\to
\infty$, by \eqref{restricted}, we get
\begin{equation}\label{limitxn}
\lim_{n\to\infty}\frac{x_{n+1}}{x_n}=\lambda,
\end{equation}
which proves (1).
In order to prove (2), let $\left\{\frac{y_{n_k}}{x_{n_k}}\right\}$ be any
convergent subsequence and let $L$ be its limit. By
\eqref{restricted}, $L$ is finite. Moreover,
\begin{equation}\label{samelimit}
\frac{z_{n+1}}{z_n}=\frac{x_{n+1}}{x_n}\frac{1+i\frac{y_{n+1}}{x_{n+1}}}{1+i\frac{y_{n}}{x_{n}}}
\end{equation}
and by \eqref{znuno} and \eqref{limitxn} we see that
$\left\{\frac{y_{n_k+1}}{x_{n_k+1}}\right\}$ is also
a convergent sequence with the
same limit $L$. Assume by contradiction that there exists a
converging subsequence $\left\{\frac{y_{m_k}}{x_{m_k}}\right\}$ with limit
$L'\neq L$. Let
\[
q_n:=\frac{x_{n+1}}{x_n}+i\frac{y_{n+1}-y_n}{x_n}.
\]
By \eqref{limitxn}, we have
\[
{\sf Im}\,
q_{n_k}=\frac{y_{n_k+1}-y_{n_k}}{x_{n_k}}=\frac{y_{n_k+1}}{x_{n_k+1}}\frac{x_{n_k+1}}{x_{n_k}}
-\frac{y_{n_k}}{x_{n_k}}\longrightarrow L(\lambda-1),
\]
and similarly ${\sf Im}\, q_{m_k}\to L'(\lambda-1)$. Therefore
$\{q_{n_k}\}$ converges to $\lambda+i L(\lambda-1)$ while
$\{q_{m_k}\}$ converges to $\lambda+i L'(\lambda-1)$.
We claim that $\{q_n\}$ can have at most two accumulation
points, say $a, a'$ (which must necessarily be
$a=\lambda+iL(\lambda-1)$ and $a'=\lambda+iL'(\lambda-1)$).
Assuming the claim is true, let $U, U'$ be two open
neighborhoods of $a$ and $a'$ respectively such that $U\cap
U'=\emptyset$. Since $\{q_n\}$ has only $a,a'$ as accumulation
points by our claim, there exists $n_0$ such that for all
$n>n_0$ either $q_n\in U$ or $q_n\in U'$. Moreover, since
$\{q_{n_k}\}\subset U$ for $n_k>n_0$ and $\{q_{m_k}\}\subset
U'$ for $m_k>n_0$, one can select a subsequence
$\{q_{l_k}\}\subset U$ such that $\{q_{l_k+1}\}\subset U'$. But
then, arguing as after \eqref{samelimit}, the inclusion
$\{q_{l_k}\}\subset U$ forces $\left\{\frac{y_{l_k}}{x_{l_k}}\right\}$,
and hence $\left\{\frac{y_{l_k+1}}{x_{l_k+1}}\right\}$, to converge to
$L$, so that ${\sf Im}\, q_{l_k+1}\to L(\lambda-1)\neq
L'(\lambda-1)$, contradicting $\{q_{l_k+1}\}\subset U'$.
We are left to show that $\{q_n\}$ can have at most two accumulation
points. We already know that ${\sf Re}\, q_n\to \lambda>1$. We are going to
show that the (real) sequence $\{k_{\mathbb H}(1,q_n)\}$ of hyperbolic
distances between $1$ and $q_n$ has a limit, say $d$. Thus the
accumulation points of $\{q_n\}$ must belong to the intersection
between the vertical line $\{\zeta\in \mathbb H: {\sf Re}\, \zeta=\lambda\}$ and the
boundary of the hyperbolic disc of center $1$ and radius $d$, and this
intersection
consists of at most two points.
To see that $\{k_{\mathbb H}(1,q_n)\}$ converges, let us introduce the
family of automorphisms of $\mathbb H^N$ given by
\begin{equation}\label{Tn}
T_n(z,w):=\left(\frac{z-iy_n}{x_n}, \frac{w}{\sqrt{x_n}}\right).
\end{equation}
Notice that $T_n(z_n,0)=(1,0)$ and, since $T_{n+1}^{-1}(1,0)=(z_{n+1},0)$,
$T_n\circ T_{n+1}^{-1}(1,0)=\left(\tfrac{x_{n+1}+i(y_{n+1}-y_n)}{x_n},0\right)=(q_n,0)$, from which we obtain that
\begin{equation}\label{ineq0}
\begin{split}
k_{\mathbb H}(1,q_n)&=k_{\mathbb H^N}((1,0), (q_n,0))=k_{\mathbb H^N}((1,0),T_n\circ
T_{n+1}^{-1}(1,0))\\&=k_{\mathbb H^N}(T_n^{-1}(1,0),
T_{n+1}^{-1}(1,0))=k_{\mathbb H^N}((z_n,0),(z_{n+1},0)).
\end{split}
\end{equation}
Now, by the contracting property of Kobayashi's distance,
\begin{equation}\label{ineq1}
\begin{split}
k_{\mathbb H^N}((z_n,0)&,(z_{n+1},0))=k_{\mathbb H}(z_n,z_{n+1})\\&=k_{\mathbb H}(\pi_1(z_n,w_n),\pi_1(z_{n+1},
w_{n+1}))\leq k_{\mathbb H^N}((z_n,w_n),(z_{n+1},w_{n+1})).
\end{split}
\end{equation}
On the other hand, by the triangle inequality,
\begin{equation}\label{ineq2}
\begin{split}
k_{\mathbb H^N}((z_n,0),(z_{n+1},0))&\geq
k_{\mathbb H^N}((z_n,w_n),(z_{n+1},w_{n+1}))\\&-k_{\mathbb H^N}((z_n,0),(z_{n},w_{n}))-
k_{\mathbb H^N}((z_{n+1},0),(z_{n+1},w_{n+1})).
\end{split}
\end{equation}
Since $\{(z_n, w_n)\}$ is special, both
$k_{\mathbb H^N}((z_n,0),(z_{n},w_{n}))$ and
$k_{\mathbb H^N}((z_{n+1},0),(z_{n+1},w_{n+1}))$ tend to $0$ as $n\to
\infty$. Therefore, from \eqref{ineq0}, \eqref{ineq1} and
\eqref{ineq2} it follows that
\[
\lim_{n\to \infty} k_\mathbb H(1,q_n)=\lim_{n\to
\infty}k_{\mathbb H^N}((z_n,0),(z_{n+1},0))=\lim_{n\to \infty}
k_{\mathbb H^N}((z_n,w_n),(z_{n+1},w_{n+1})),
\]
and the latter limit exists because the sequence
$\{k_{\mathbb H^N}((z_n,w_n),(z_{n+1},w_{n+1}))\}$ is non-increasing
in $n$ since the Kobayashi distance is contracted by
holomorphic maps.
\end{proof}
\begin{proof}[Proof of the Main Theorem]
As mentioned in Remark \ref{rem:conj} and Remark
\ref{rem:valeVali}, after conjugating $\phi$ with some
automorphism of $\mathbb H^N$ we can suppose that $Z_0=(1,0)$, and
as we saw in the proof of Lemma \ref{real and image}, the orbit
of $(1,0)$ is thus both special and restricted. Moreover, by
Proposition \ref{tech2} in the Appendix, the conjugation made
does not affect our regularity hypothesis, namely
\begin{equation}\label{regularity}
{K\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\phi_1(z,w)}{z}=\lambda.
\end{equation}
Letting
$(z_n,w_n):=\phi^{\circ n}(1,0),\ z_n=x_n+iy_n$,
and using \eqref{kob-iperplane}, we see that
\begin{equation}\label{speciale}
\lim_{n\to\infty} \frac{\|w_n\|}{\sqrt{x_n}}=0.
\end{equation}
Now we consider the Valiron-like sequence $\{\sigma_n\}$ of
holomorphic maps from $\mathbb H^N$ to $\mathbb H$ defined by
\[
\sigma_n(z,w):=\frac{\pi_1\circ \phi^{\circ n}(z,w)}{x_n},
\]
where, as usual, $\pi_1(z,w):=z$ is the projection on the first
component. Notice that
\begin{equation}\label{intert}
\sigma_n \circ \phi = \frac{ \pi_1\circ
\phi^{\circ (n+1)}}{x_n}=\frac{x_{n+1}}{x_n} \sigma_{n+1}.
\end{equation}
If we can prove that the sequence $\{\sigma_n\}$ converges
uniformly on compacta to a non-constant map
$\sigma:\mathbb H^N\to\mathbb H$ (which is necessarily holomorphic), then
by taking the limit for $n\to \infty$ in \eqref{intert}, and by
Lemma \ref{real and image}~(1), we obtain that $\sigma\circ
\phi=\lambda \sigma$.
We will now show that $\{\sigma_n\}$ is uniformly convergent
on compacta to a non-constant function.
First of all, we notice that by Lemma \ref{real and image}~(2),
\[
\sigma_n(1,0)=1+i\frac{y_n}{x_n}\longrightarrow 1+i L,
\]
as $n\to\infty$. On the other hand, again by Lemma \ref{real
and image},
\[
\sigma_n(\phi(1,0))=\frac{\pi_1\circ
\phi^{\circ (n+1)}(1,0)}{x_n}=\frac{x_{n+1}+iy_{n+1}}{x_n}=
\frac{x_{n+1}}{x_n}+i\frac{x_{n+1}}{x_n}\frac{y_{n+1}}{x_{n+1}}\to
\lambda+i \lambda L,
\]
as $n\to \infty$. Since $\lambda>1$, the above proves that any
limit of the sequence $\{\sigma_n\}$ cannot be constant.
Now we are going to prove that for any $(z,w)\in\mathbb H^N$
\begin{equation}\label{specialsigma}
\lim_{n\to\infty}k_\mathbb H(\sigma_n(z,w),\sigma_{n+1}(z,w))=0.
\end{equation}
To this aim, we first notice that the set $\{\sigma_n(z,w)\}$ is
relatively compact in $\mathbb H$. Indeed, let $\pi_w:\mathbb C^N\to \mathbb C^{N-1}$ be
the projection $\mathbb C\times \mathbb C^{N-1}\ni (z,w)\mapsto w\in \mathbb C^{N-1}$ and
define
\begin{equation}\label{Sn}
S_n(z,w):=\left(\sigma_n(z,w),
\frac{\pi_w(\phi^{\circ n}(z,w))}{\sqrt{x_n}}\right).
\end{equation}
Notice that $S_n=L_n \circ \phi^{\circ n}$, where $L_n$ is the
automorphism of $\mathbb H^N$ defined by $L_n(z,w)=(z/x_n,
w/\sqrt{x_n})$. Therefore $S_n:\mathbb H^N\to \mathbb H^N$. Moreover, by
Lemma \ref{real and image}~(2) and \eqref{speciale}
\begin{equation}\label{limitS}
S_n(1,0)=\left(\sigma_n(1,0), \frac{w_n}{\sqrt{x_n}}
\right)=(1+i\frac{y_n}{x_n}, \frac{w_n}{\sqrt{x_n}})\to (1+iL,0),
\end{equation}
as $n\to \infty$. In particular there exists $C>0$ such that
$k_{\mathbb H^N}(S_n(1,0), (1+iL,0))<C$ for all $n\in \mathbb N$. Therefore, by the
triangle inequality and the contraction property,
\begin{equation*}
\begin{split}
k_{\mathbb H^N}(S_n(z,w), (1+iL,0))&\leq k_{\mathbb H^N}(S_n(z,w),
S_n(1,0))+k_{\mathbb H^N}(S_n(1,0), (1+iL,0))\\&\leq
k_{\mathbb H^N}((z,w),(1,0))+C,
\end{split}
\end{equation*}
which proves that $\{S_n(z,w)\}$ is relatively compact in $\mathbb H^N$.
Now, notice that
\[
\sigma_{n+1}=\pi_1 \circ L_{n+1}\circ \phi \circ L_n^{-1}\circ S_n.
\]
Since we already proved that the sequence $\{S_n(z,w)\}$ is
relatively compact in $\mathbb H^N$, \eqref{specialsigma} will follow if
we prove that $\pi_1 \circ L_{n+1}\circ \phi \circ L_n^{-1}\to \pi_{1}$
as $n\to \infty$. A direct computation shows that
\begin{equation}\label{equi1}
\pi_1 \circ L_{n+1}\circ \phi \circ
L_n^{-1}(z,w)=\frac{\pi_1(\phi(x_nz,
\sqrt{x_n}w))}{x_{n}z}\frac{x_nz}{x_{n+1}}.
\end{equation}
Now for all $n\in \mathbb N$, by \eqref{kob-iperplane}
\[
k_{\mathbb H^N}((x_nz, \sqrt{x_n}w),
(x_nz,0))=\tanh^{-1}\frac{\|w\|}{\sqrt{{\sf Re}\, z}}<\infty,
\]
and clearly $\{x_nz\}$ converges to $\infty$ non-tangentially in $\mathbb H$.
Thus the sequence $\{(x_nz, \sqrt{x_n}w)\}$ is $C$-special and
restricted. Hence, by applying \eqref{regularity} and Lemma
\ref{real and image}~(1) to the limit as $n\to \infty$ in
\eqref{equi1}, we get $\pi_1 \circ L_{n+1}\circ \phi \circ
L_n^{-1}(z,w)\to z$ as $n\to \infty$, as needed.
At this point, let $\{\sigma_{n_k}\}$ be a convergent
subsequence of $\{\sigma_n\}$ and let $\sigma$ be its limit, which we
know is non-constant. By
\eqref{specialsigma}, $\{\sigma_{n_k+1}\}$ also converges to
$\sigma$. By \eqref{intert} and Lemma \ref{real and image}~(1)
we see that
\begin{equation}\label{functeq}
\sigma \circ \phi = \lambda \sigma.
\end{equation}
It remains to show that the Valiron method works, namely, that
the sequence $\{\sigma_n\}$ converges. By the very definition,
$\{\sigma_n\}$ converges if and only if $\{\pi_1\circ S_n\}$
does---with $S_n$ defined in \eqref{Sn}. We already proved that
$\{S_n\}$ is bounded on compacta of $\mathbb H^N$, thus it is a
normal family. Let $S$ be a limit of $\{S_n\}$. Let
$Z\in\mathbb H^N$. Since the Kobayashi distance is contracted by
holomorphic maps, the sequence $\{k_{\mathbb H^N}(S_n(1,0),S_n(Z))\}$ is
non-increasing in $n$ and must have a limit. Therefore, by
\eqref{limitS}, for all $Z\in \mathbb H^N$,
\[
\lim_{n\to \infty}k_{\mathbb H^N}(S_n(1,0),S_n(Z))=k_{\mathbb H^N}((1+iL,0), S(Z)).
\]
This implies that if $\tilde{S}$ is another limit of $\{S_n\}$
then $k_{\mathbb H^N}((1+iL,0), S(Z))=k_{\mathbb H^N}((1+iL,0), \tilde{S}(Z))$ for
all $Z\in \mathbb H^N$. Thus, conjugating both $S, \tilde{S}$ with a
Cayley map $\mathcal C'$ which maps $(1+iL,0)$ into $O\in
\mathbb B^N$, we find two holomorphic maps $S', \tilde{S}':\mathbb B^N\to
\mathbb B^N$ with the property that $\|S'(Z)\|=\|\tilde{S}'(Z)\|$ for
all $Z\in \mathbb B^N$. Hence (see, {\sl e.g.}, \cite[Prop. 3 p.
102]{Da}) there exists a unitary matrix $U$ such that
$S'=U\tilde{S}'$. Translating into $\mathbb H^N$ this means that
$\tilde{S}=T\circ S$ for some automorphism $T:\mathbb H^N\to\mathbb H^N$
fixing $(1+iL,0)$. We claim that $\pi_1\circ T(z,w)=z$, hence
$\pi_1\circ S=\pi_1\circ \tilde{S}$ which implies that
$\{\pi_1\circ S_n\}$---and hence $\{\sigma_n\}$---is
converging. In order to prove that $\pi_1\circ T(z,w)=z$, it is
enough to prove that $T(z,0)=(z,0)$ for some point $z\in \mathbb H\setminus
\{1+iL\}$,
because then by the classical theory of automorphisms (see
\cite{Abate} or \cite{Ru}) $T$ must fix pointwise the complex
geodesic $\mathbb H\times\{0\}$. To this aim, let $Z_1:=\phi(1,0)$. Let
$\{S_{n_k}\}$ be a sub-sequence of $\{S_n\}$ converging to $S$.
By \eqref{functeq},
\[
(\pi_1\circ S)(Z_1)= (\pi_1\circ S)(\phi(1,0))=\lambda
\sigma(1,0)=\lambda (1+iL).
\]
On the other hand, setting as before $(z_n,w_n):=\phi^{\circ n}(1,0)$, we
get
\begin{equation*}
\begin{split}
(\pi_w \circ
S)(Z_1)&=\lim_{k\to\infty}\frac{\pi_w(\phi^{\circ n_k}(Z_1))}{\sqrt{x_{n_k}}}
=\lim_{k\to\infty}\frac{\pi_w(\phi^{\circ (n_k+1)}(1,0))}{\sqrt{x_{n_k}}}\\
&=\lim_{k\to\infty}\frac{w_{n_k+1}}{\sqrt{x_{n_k}}}=
\lim_{k\to\infty}\frac{w_{n_k+1}}{\sqrt{x_{n_k+1}}}\sqrt{\frac{x_{n_k+1}}{x_{n_k}}}=0,
\end{split}
\end{equation*}
where the last equality follows from \eqref{speciale} and Lemma
\ref{real and image}~(1). Thus $S(Z_1)=(\lambda (1+iL),0)$.
Similarly, we have $\tilde{S}(Z_1)=(\lambda (1+iL),0)$.
Therefore
\[
T(\lambda (1+iL),0)=(T\circ S)(Z_1)=\tilde{S}(Z_1)=(\lambda
(1+iL),0),
\]
which proves that $T$ fixes the point $(\lambda (1+iL),0)\neq (1+iL,0)$ of the complex geodesic, as needed.
\end{proof}
\section{Further remarks and open questions}
{\bf 1.} In order to make the Valiron construction work, in
the Main Theorem we need the technical hypothesis (1),
namely that $\phi$ possesses a $0$-special orbit. We do not know
whether or not every hyperbolic holomorphic self-map of the ball
has such an orbit. Clearly, if the self-map has an
invariant complex geodesic (whose closure must necessarily
contain the Denjoy-Wolff point) then such a condition is
satisfied for all points on such a complex geodesic. For
instance, if $T:\mathbb B^N\to \mathbb B^N$ is a hyperbolic automorphism with
Denjoy-Wolff point $e_1$ and other fixed point $-e_1$, then
the orbit of any point $(z,0')$ is (obviously) special while,
on the other hand, the orbit of any point of the form $(z,z')$ with
$z'\neq 0'$ is not special.
{\bf 2.} As shown by the example in Section \ref{ssec:example},
hypothesis (2) in the Main Theorem is not necessary for
the Valiron construction to work in higher dimension.
{\bf 3.} Along the lines of the one-dimensional Valiron
construction (see, {\sl e.g.}, \cite[p. 47]{Br-Po}) one can
prove that if $\sigma$ is the intertwining map given by the Main
Theorem, then $\mathbb H\ni \zeta\mapsto \sigma(\zeta,0)$ is {\sl
semi-conformal} at $\infty$. However, no further regularity on
$\sigma$ at $\infty$ seems to follow from the construction.
{\bf 4.} Uniqueness (up to composition with linear fractional
maps) of intertwining mappings in higher dimension---without
assigning further conditions---does not hold. The main theoretical
reason is that in dimension one the centralizer of a given
hyperbolic automorphism consists of hyperbolic automorphisms
while in higher dimension this is no longer so (see
\cite{Ge-de}). For example, if $H:\mathbb B^N\to \mathbb B^N$ is a hyperbolic
automorphism, then any holomorphic self-map $F:\mathbb B^N\to \mathbb B^N$
such that $F\circ H=H\circ F$ solves the (trivial) Schr\"oder
equation $\sigma \circ H=H\circ \sigma$. By \cite{Ge-de}, if
$N>1$, then there exist mappings $F$ which are not linear
fractional maps.
\section{Appendix: E-limits}\label{sec:appendix}
In this appendix, we introduce the notion of $E$-limit in $\mathbb H^{N}$
and show that it is equivalent to that of $K$-limit. However, this new
definition might be useful in more general domains. We also prove a
couple of routine facts that were needed in the proof of the Main Theorem.
A {\sl complex geodesic} $f:\mathbb H\to \mathbb H^N$ is a holomorphic map
which is an isometry between the Poincar\'e distance on $\mathbb H$
and the Kobayashi distance on $\mathbb H^N$. It is well known (see,
{\sl e.g.}, \cite{Abate}) that for $\mathbb H^N$ the image of a
complex geodesic is the intersection of $\mathbb H^N$ with an affine
complex line. A {\sl linear projection} $\rho:\mathbb H^N\to \mathbb H^N$
is a holomorphic map such that $\rho^2=\rho$, the image
$\rho(\mathbb H^N)$ is the intersection of $\mathbb H^N$ with an affine
complex line (namely, the image of a complex geodesic) and
$\rho^{-1}(\rho(Z))$ is an affine hyperplane in $\mathbb H^N$ for all
$Z\in\mathbb H^N$. To any complex geodesic there is associated a unique
linear projection and, conversely, to any linear projection there
is associated a unique (up to parametrization) complex
geodesic.
Given any complex geodesic $f:\mathbb H\to \mathbb H^N$ there exists an
automorphism $G$ of $\mathbb H^N$ such that $f(\zeta)=G^{-1}(\zeta,
0)$. The linear projection associated to $f$ is then given by
$\rho(z,w)=G^{-1}(\pi_1(G(z,w)),0)$, where $\pi_1(z,w):=z$. The
map $\widetilde{\rho}:=f^{-1}\circ \rho:\mathbb H^N\to \mathbb H$ is called the {\sl
left inverse} of $f$.
If $\rho:\mathbb H^N\to \mathbb H^N$ is a linear projection such that
$\overline{\rho(\mathbb H^N)}$ contains $\infty$, for short we say
that $\rho$ is a linear projection at $\infty$.
We will denote by $p_1:\mathbb H^N\to \mathbb H^N$ the linear projection at
$\infty$ given by $p_1(z,w)=(z,0)$, associated to the complex
geodesic $f(\zeta)=(\zeta,0)$ and left inverse $\pi_1(z,w)=z$.
\begin{definition}
Let $\rho:\mathbb H^N\to \mathbb H^N$ be a linear projection at $\infty$. A
sequence $\{Z_k\}\subset\mathbb H^N$ converging to $\infty$ is said to be
{\sl $C$-special with respect to $\rho$}, where $C\geq
0$, if
\[
\limsup_{k\to \infty} k_{\mathbb H^N}(Z_k, \rho(Z_k))\leq C.
\]
The sequence $\{Z_k\}$ converging to $\infty$ is said to be {\sl
$\rho$-restricted} if $\{\rho(Z_k)\}$ converges non-tangentially to
$\infty$ in $\rho(\mathbb H^N)$.
\end{definition}
\begin{lemma}\label{specialspecial}
Let $\{Z_k\}\subset \mathbb H^N$ be a sequence converging to
$\infty$. Let $\rho_0:\mathbb H^N\to \mathbb H^N$ be a linear projection at
$\infty$. Then $\{Z_k\}$ is $C$-special ($C\geq 0$) with
respect to $\rho_0$ if and only if it is $C$-special (same $C$)
with respect to any linear projection $\rho$ at $\infty$. The
sequence $\{Z_k\}$ is $\rho_0$-restricted if and only if it is
$\rho$-restricted for any linear projection $\rho$ at
$\infty$.
\end{lemma}
\begin{proof}
Let $T_0$ be an automorphism of $\mathbb H^N$ fixing $\infty$ such
that $\rho_0(z,w)=T_0^{-1}(p_1(T_0(z,w)))$. Since $T_0$ is an
isometry for $k_{\mathbb H^N}$, the sequence $\{Z_k\}$ is $C$-special with
respect to $\rho_0$ ({\sl respectively} $\rho_0$-restricted) if
and only if $\{T_0(Z_k)\}$ is $C$-special with respect to $p_1$
({\sl respectively} $p_1$-restricted). Therefore it is enough to
prove that if $\{Z_k\}$ is $C$-special with respect to $p_1$
({\sl respectively} $p_1$-restricted), then it is $C$-special
with respect to any linear projection $\rho$ at $\infty$ ({\sl
respectively} $\rho$-restricted).
Given a linear projection $\rho$ at $\infty$, there exist
$a\in \mathbb C^{N-1}$ and an automorphism $T\in{\sf Aut}(\mathbb H^N)$ of the
type
\[
T(z,w)=(z+\|a\|^2+2\langle w,a\rangle, w+a)
\]
such that $\rho=T^{-1}\circ p_1\circ T$. A direct computation
shows that
\begin{equation}\label{rho}
\rho(z,w)=(z+2\|a\|^2+2\langle w, a\rangle, -a).
\end{equation}
Therefore, writing $Z_k=(z_k,w_k)=(x_k+iy_k, w_k)$ and arguing
similarly to \eqref{kob-iperplane}, we obtain
\begin{equation*}
\begin{split}
k_{\mathbb H^N}(p_1(Z_k),\rho(Z_k))&=k_{\mathbb H^N}((z_k,0),
(z_k+2\|a\|^2+2\langle w_k, a\rangle, -a))\\&=k_{\mathbb H^N}\left((1,0),
(\frac{z_k+2\|a\|^2+2\langle w_k, a\rangle-iy_k}{x_k},
\frac{-a}{\sqrt{x_k}})\right)\\&=\tanh^{-1}\sqrt{\frac{|2\|a\|^2+2\langle
w_k,
a\rangle|^2+4x_k\|a\|^2}{x_k^2\left|2+\frac{2\|a\|^2}{x_k}+2\frac{\langle
w_k, a\rangle}{x_k}\right|^2}}.
\end{split}
\end{equation*}
The last term tends to $0$ as $x_k\to \infty$, which is the
case if $k\to \infty$ because $Z_k\to \infty$ and $x_k={\sf Re}\,
z_k>\|w_k\|^2$. Thus
\begin{equation}\label{specikilo}
\lim_{k\to\infty}k_{\mathbb H^N}(p_1(Z_k),\rho(Z_k))=0.
\end{equation}
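As a purely numerical illustration of \eqref{specikilo} (not part of the proof; the choices of $a$, $w$, and the sequence $x_k=10^j$ are arbitrary), one can evaluate the last displayed expression for $N=2$ with real data and observe that it decreases to $0$:

```python
import math

def kob_term(x, w, a):
    """Last displayed expression above (N = 2, real a and w, y_k = 0):
    the Kobayashi distance between p_1(Z) and rho(Z) for Z = (x, w)."""
    num = abs(2 * a * a + 2 * w * a) ** 2 + 4 * x * a * a
    den = x * x * abs(2 + 2 * a * a / x + 2 * w * a / x) ** 2
    return math.atanh(math.sqrt(num / den))

a, w = 1.5, 0.7  # arbitrary choices with |w|^2 < Re z along the sequence
vals = [kob_term(10.0 ** j, w, a) for j in range(1, 7)]
print(vals)  # decreasing toward 0 as x -> infinity
```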
Now, using the triangle inequality and \eqref{specikilo} we see that
if $\{Z_k\}$ is $C$-special with respect to $p_1$, then
\[
\limsup_{k\to\infty}k_{\mathbb H^N}(Z_k,\rho(Z_k))\leq
\limsup_{k\to\infty}k_{\mathbb H^N}(Z_k,p_1(Z_k))+\limsup_{k\to\infty}k_{\mathbb H^N}(p_1(Z_k),\rho(Z_k))\leq
C,
\]
as stated.
On the other hand, if $\{Z_k\}$ is $p_1$-restricted (namely,
${\sf Re}\, z_k\geq c\, {\sf Im}\, z_k$ for some $c>0$), from \eqref{rho} and
since ${\sf Re}\, z_k
>\|w_k\|^2$, it follows that $\{Z_k\}$ is also $\rho$-restricted.
\end{proof}
\begin{remark}
It is worth noting explicitly that, by Lemma
\ref{specialspecial}, the condition of being $C$-special and
that of being restricted do not depend on the chosen linear
projection.
\end{remark}
\begin{definition}
Let $h:\mathbb H^N\to \mathbb C$ be holomorphic. We say that $h$ has $E$-limit
$A\in \mathbb C$ at $\infty$, and we write
\[
{E\hbox{-}\lim}_{\mathbb H^N\ni (z,w)\to\infty}h(z,w)=A,
\]
if for any sequence $\{Z_k\}\subset \mathbb H^N$ converging to $\infty$
which is $C$-special for some $C\geq 0$ ($C$ depending on
$\{Z_k\}$) and restricted, it follows that $\lim_{k\to
\infty}h(Z_k)=A$.
If the limit holds only for $0$-special, restricted sequences, we
write
\[
{E^0\hbox{-}\lim}_{\mathbb H^N\ni (z,w)\to\infty}h(z,w)=A.
\]
\end{definition}
Next we state a version of the
Julia-Wolff-Carath\'eodory theorem due to Rudin for the unit
ball $\mathbb B^N$ (\cite[Thm. 8.5.6]{Ru}), using our
previous notation:
\begin{theorem}\label{JWC}
Let $\phi=(\phi_1,\phi'):\mathbb H^N\to\mathbb H^N$ be holomorphic, with
Denjoy-Wolff point $\infty$ and multiplier $\lambda\geq 1$. Let
$\rho:\mathbb H^N\to\mathbb H^N$ be a linear projection at $\infty$ and let
$\widetilde{\rho}:\mathbb H^N\to \mathbb H$ be an associated left inverse. Then
\begin{enumerate}
\item $\displaystyle{{E^0\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\widetilde{\rho}\circ\phi(z,w)}{\widetilde{\rho}(z,w)}=\lambda}$,
\item $\displaystyle{{E^0\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\|\phi(z,w)-\rho\circ \phi(z,w)\|}{|\widetilde{\rho}(z,w)|}=0}$.
\end{enumerate}
\end{theorem}
As a corollary we have the following:
\begin{lemma}\label{tech1}
Let $\phi=(\phi_1,\phi'):\mathbb H^N\to\mathbb H^N$ be holomorphic, with
Denjoy-Wolff point $\infty$ and multiplier $\lambda\geq 1$.
Assume ${E\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\phi_1(z,w)}{z}$ exists. Then
\begin{enumerate}
\item ${E\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\phi_1(z,w)}{z}=\lambda$,
\item ${E\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\|\phi'(z,w)\|}{|z|}=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) follows directly from Theorem \ref{JWC}.(1).
(2) Since $\phi(\mathbb H^N)\subseteq\mathbb H^N$, we have ${\sf Re}\, \phi_1(z,w)\geq
\|\phi'(z,w)\|^2$ for all $(z,w)\in\mathbb H^N$. Thus, dividing by $|z|^2$
and taking limits, (2) follows from (1).
\end{proof}
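The two limits of Lemma \ref{tech1} can be checked numerically (a sanity check only, not part of the formal development) on the model hyperbolic self-map $\phi(z,w)=(\lambda z,\sqrt{\lambda}\,w)$ of $\mathbb H^2$, which has Denjoy-Wolff point $\infty$ and multiplier $\lambda$, along the restricted $0$-special sequence $Z_k=(k,a)$ with $a$ fixed (all parameter choices are arbitrary):

```python
import math

lam = 4.0       # multiplier lambda > 1 (arbitrary choice)
a = 0.8 + 0.3j  # fixed second coordinate, |a|^2 < Re z along the sequence

def phi(z, w):
    """Model hyperbolic self-map of H^2 = {Re z > |w|^2}."""
    return lam * z, math.sqrt(lam) * w

ratios1, ratios2 = [], []
for k in (10, 100, 1000, 10000):
    z, w = float(k), a               # Z_k = (k, a)
    p1, pp = phi(z, w)
    ratios1.append(p1 / z)           # should tend to lam
    ratios2.append(abs(pp) / abs(z)) # should tend to 0
print(ratios1[-1], ratios2[-1])
```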
The following technical proposition is needed in the proof of the Main Theorem.
\begin{proposition}\label{tech2}
Let $\phi=(\phi_1,\phi'):\mathbb H^N\to\mathbb H^N$ be holomorphic, with
Denjoy-Wolff point $\infty$ and multiplier $\lambda\geq 1$. Let
$\rho_0$ be a linear projection at $\infty$ with a left
inverse $\widetilde{\rho}_0$. Suppose the $E$-limit
${E\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\widetilde{\rho}_0(\phi(z,w))}{\widetilde{\rho}_0(z,w)}$ exists. Then
for any complex geodesic $f:\mathbb H\to \mathbb H^N$ with
$f(\infty)=\infty$ and left inverse $\widetilde{\rho}_f:\mathbb H^N\to\mathbb H$ it
follows that
\[{E\hbox{-}\lim}_{\mathbb H^N\ni (z,w)\to\infty}\frac{\widetilde{\rho}_f\circ
\phi(z,w)}{\widetilde{\rho}_f(z,w)}=\lambda.\]
\end{proposition}
\begin{proof}
Up to conjugation we can assume that $\rho_0=p_1$ and then by
hypothesis we know that the $E$-limit ${E\hbox{-}\lim}_{\mathbb H^N\ni
(z,w)\to\infty}\frac{\phi_1(z,w)}{z}$ exists and equals $\lambda$ by
Lemma \ref{tech1}. Given the complex geodesic $f$, there exist
$a\in \mathbb C^{N-1}$ and an automorphism $T\in{\sf Aut}(\mathbb H^N)$ of the type
\[
T(z,w)=(z+\|a\|^2+2\langle w,a\rangle, w+a)
\]
such that $T\circ f(\zeta)=(\zeta,0)$ and $\widetilde{\rho}_f(z,w)=\pi_1 \circ
T(z,w)=z+\|a\|^2+2\langle w, a\rangle$, where, as usual, $\pi_1(z,w)=z$. Thus
\[
\frac{\widetilde{\rho}_f\circ \phi(z,w)}{\widetilde{\rho}_f(z,w)}=\frac{\phi_1(z,w)+\|a\|^2+2\langle
\phi'(z,w), a\rangle}{z+\|a\|^2+2\langle w,a\rangle}=\frac{\phi_1(z,w)+\|a\|^2+2\langle
\phi'(z,w), a\rangle}{z(1+\|a\|^2/z+2\langle w,a\rangle/z)}.
\]
Taking into account that $1+\|a\|^2/z+2\langle w,a\rangle/z=1+o(1)$
as $|z|\to\infty$, since $|z|\geq {\sf Re}\, z\geq \|w\|^2$, the result follows from Lemma
\ref{tech1}.
\end{proof}
| {
"timestamp": "2007-10-10T16:13:43",
"yymm": "0710",
"arxiv_id": "0710.2020",
"language": "en",
"url": "https://arxiv.org/abs/0710.2020",
"abstract": "We consider holomorphic self-maps $\\v$ of the unit ball $\\B^N$ in $\\C^N$ ($N=1,2,3,...$). In the one-dimensional case, when $\\v$ has no fixed points in $\\D\\defeq \\B^1$ and is of hyperbolic type, there is a classical renormalization procedure due to Valiron which allows to semi-linearize the map $\\phi$, and therefore, in this case, the dynamical properties of $\\phi$ are well understood. In what follows, we generalize the classical Valiron construction to higher dimensions under some weak assumptions on $\\v$ at its Denjoy-Wolff point. As a result, we construct a semi-conjugation $\\sigma$, which maps the ball into the right half plane of $\\C$, and solves the functional equation $\\sigma\\circ \\v=\\lambda \\sigma$, where $\\lambda>1$ is the (inverse of the) boundary dilation coefficient at the Denjoy-Wolff point of $\\v$.",
"subjects": "Complex Variables (math.CV); Dynamical Systems (math.DS)",
"title": "Valiron's construction in higher dimension"
} |
https://arxiv.org/abs/1907.03635 | Distance from the Nucleus to a Uniformly Random Point in the 0-cell and the Typical Cell of the Poisson-Voronoi Tessellation | Consider the distances $\tilde{R}_o$ and $R_o$ from the nucleus to a uniformly random point in the 0-cell and the typical cell, respectively, of the $d$-dimensional Poisson-Voronoi (PV) tessellation. The main objective of this paper is to characterize the exact distributions of $\tilde{R}_o$ and $R_o$. First, using the well-known relationship between the 0-cell and the typical cell, we show that the random variable $\tilde{R}_o$ is equivalent in distribution to the contact distance of the Poisson point process. Next, we derive a multi-integral expression for the exact distribution of $R_o$. Further, we derive a closed-form approximate expression for the distribution of $R_o$, which is the contact distribution with a mean corrected by a factor equal to the ratio of the mean volumes of the 0-cell and the typical cell. An additional outcome of our analysis is a direct proof of the well-known spherical property of the PV cells having a large inball. |
\section{Introduction}\label{sec:Intro}
The Poisson point process (PPP) has found many applications in science and engineering due to its useful mathematical properties. Several of these applications specifically focus on the Poisson-Voronoi (PV) tessellation \cite{moller1989random}, which partitions space into disjoint {\em cells} whose nuclei are the points of the PPP. There is a rich literature focused on characterizing the statistical properties of the PV tessellation, such as the distributions of the contact and chord lengths \cite{muche1992contact}, the distributions of the radii of the circumcircle and the incircle of the 0-cell and the typical cell \cite{Calka2002}, the distribution of the number of edges of the typical cell \cite{calka2003explicit}, the limiting shape of the 0-cell and the typical cell \cite{hug2004large}, and the relationship between the 0-cell and the typical cell \cite{MECKE1999}.
However, it is quite surprising to note that the distributions of the distances from the nucleus to uniformly random points in the 0-cell and the typical cell of the $d$-dimensional PV tessellation have not yet been investigated, which is the main goal of this paper.
The motivation behind our investigation comes from wireless networks, where the PPP has been extensively used to model the locations of cell towers (also called base stations) in cellular networks such that the service region of each cell tower is simply the PV cell with the corresponding cell tower at its nucleus \cite{BacBla2009,AndBacJ2011,DhiGanJ2012,Haenggi2013,blaszczyszyn_haenggi_keeler_mukherjee_2018}.
If one assumes mobile users to be distributed uniformly at random in the service region of each cell tower (a popular model used by the wireless networks community), one of the crucial steps towards the performance characterization of this network is to understand the distribution of the distance between a mobile user and its associated cell tower. In the PV tessellation, this corresponds to the distribution of the distance of the nucleus of a PV cell to a point chosen uniformly at random from that cell \cite{Haenggi2017,Praful_TypicalCell_letter}. Note that while it is sufficient to focus on the $2$-dimensional case from the wireless networks perspective, all the mathematical results presented in this paper are for the general $d$-dimensional case. With this brief introduction, we now formally define the problem of interest for this paper.
Let $\Phi\triangleq\{\textbf{x}_1,\textbf{x}_2,\dots\}$ be a homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$. The PV cell with the nucleus at $\textbf{x}\in\Phi$ is defined as
\begin{equation}
V_\mathbf{x}=\{\mathbf{y}\in\mathbb{R}^d\mid \|\mathbf{y}-\mathbf{x}\|\leq\|\mathbf{x'}-\mathbf{y}\|, ~\forall \mathbf{x'}\in\Phi\},~~~~ \mathbf{x}\in\Phi.
\label{eq:PV_Cell_Definition}
\end{equation}
The set $\{V_\textbf{x}\}_{\textbf{x}\in\Phi}$ is known as the {\em PV tessellation}. For any (deterministic) $\textbf{y}\in\mathbb{R}^d$, almost surely there exists a unique nucleus $\mathbf{x}\in\Phi$ such that $\textbf{y}\in V_\textbf{x}$ \cite{Moller2012lectures}. The PV cell containing the origin $o$ is called the {\em 0-cell} and is denoted by $\tilde{V}_o$.
The statistical properties of the {\em typical cell} can be characterized using Palm theory, which formalizes the notion of conditioning on the presence of a point of a point process at a specific location. Since by Slivnyak's theorem, conditioning on a point is the same as adding a point to a PPP, we consider that the nucleus of the typical cell of the point process $\Phi\cup\{o\}$ is $o$, which is given by
\begin{equation}
V_o=\{\mathbf{y}\in\mathbb{R}^d\mid \|\mathbf{y}\|\leq\|\mathbf{x}-\mathbf{y}\|, ~\forall \mathbf{x}\in\Phi\}.
\end{equation}
Now, using $\tilde{V}_o$ and $V_o$, we define the main random variables of interest for this paper.
\begin{definition}
Let $\tilde{R}_o$ denote the distance from the nucleus to a uniformly random point in the 0-cell $\tilde{V}_o$.
\end{definition}
\begin{definition}
Let $R_o$ denote the distance from the nucleus to a uniformly random point in the typical cell $V_o$.
\end{definition}
We focus on the statistical characterization of $R_o$ and $\tilde{R}_o$ for the PPP with intensity $\lambda$. We derive the cumulative distribution function (CDF) of $\tilde{R}_o$ and $R_o$ in Sections \ref{sec:CroftonCell} and \ref{sec:TypicalCell}, respectively. In Section \ref{sec:CroftonCell}, a closed-form expression for the exact CDF of $\tilde{R}_o$ is derived based on the formula relating the 0-cell and the typical cell given in \cite{BacBla2009,MECKE1999}. It is well-known that the statistical properties of $R_o$ are hard to characterize for $d>1$. Before going into the $d>1$ case, we discuss the case of $d=1$ in Section \ref{subsec:1D}, for which the distribution of $R_o$ is far easier to characterize. In Section \ref{sec:GeneralCase_CDF_R}, we present an analytical approach to derive the distribution of $R_o$ for the $d>1$ case based on the analysis of the temporal evolution of the PV structure presented in \cite{Pineda}.
We also approximate the CDF of $R_o$ using a simple expression in Section \ref{sec:Approximate_CDF}. Therein, we also characterize the distribution of $R_o$ as $d$ tends to infinity. In addition, based on the formulation developed in Section \ref{sec:TypicalCell}, we provide a simpler proof for the well-known spherical nature of large PV cells in Section \ref{sec:LimitingShape}.
\section{Distribution of $\tilde{R}_o$}
\label{sec:CroftonCell}
In this section, we derive a closed-form expression for the CDF of the distance from the nucleus to a uniformly random point in the 0-cell $\tilde{V}_o$. It is well-known that the expected volume of the 0-cell is greater than the expected volume of the typical cell. In fact, all the moments of the volume of the 0-cell are known to be greater than the corresponding moments of the volume of the typical cell \cite{MECKE1999}. This is quite intuitive, as the origin (or, for that matter, any {\em fixed} point) is more likely to lie in a bigger cell.
Before presenting the CDF of $\tilde{R}_o$, we state the relationship of the distributions of the 0-cell and the typical cell from \cite[Corollary 4.2.4]{BacBla2009} as
\begin{equation}
\mathbb{E}[f(\tilde{V}_o)]=\lambda\mathbb{E}^{o}[\upsilon_d(V_o)f(V_o)],
\label{eq:Relation_Vo_V}
\end{equation}
where $\upsilon_d$ is the Lebesgue measure in $d$-dimensions, $\mathbb{E}^{o}$ is the expectation with respect to the Palm distribution, and $f$ is any translation-invariant non-negative function on compact sets.
We will use this expression along with an appropriately chosen function $f$ to derive the CDF of $\tilde{R}_o$ in Theorem~\ref{thm:CDF_R_Crofton}.
Let $\mathcal{B}_r(\mathbf{x})$ represent the $d$-dimensional ball of radius $r$ centered at $\mathbf{x}$.
Let $X$ be a random set in $\mathbb{R}^d$. Using the results of \cite{robbins1944} and \cite{robbins1945}, the $n$-th moment of the volume of $X$ can be evaluated as
\begin{equation}
\mathbb{E}[\upsilon_d(X)^n]=\int_{\mathbb{R}^d}\dots\int_{\mathbb{R}^d}\mathbb{P}(x_1,\dots, x_n\in X){\rm d}x_1\dots{\rm d}x_n.
\label{eq:Moment_Random_Set}
\end{equation}
Next, we restate a useful result from \cite[Lemma 4.2]{alishahi2008} on the mean volume of ${\mathcal{B}}_r(o)\cap V_o$, which directly follows from \eqref{eq:Moment_Random_Set}.
\begin{lemma}
\label{lemma:Mean_IntBall_PVCell}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the mean volume of the intersection of the ball $\mathcal{B}_r(o)$ with the typical cell $V_o$ is given by
\begin{equation}
\mathbb{E}^o[\upsilon_d(\mathcal{B}_r(o)\cap V_o)]=\frac{1}{\lambda}\left(1-\exp(-\lambda\kappa_d r^d)\right),
\label{eq:Mean_Intersection}
\end{equation}
where $\kappa_d=\frac{\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2}+1)}$ is the volume of the unit-radius ball in $\mathbb{R}^d$.
\end{lemma}
\begin{proof}
Using \eqref{eq:Moment_Random_Set}, the first moment of the volume of intersection of $\mathcal{B}_r(o)$ with the typical cell $V_o$ can be determined as
\begin{align*}
\mathbb{E}^o[\upsilon_d(\mathcal{B}_r(o)\cap V_o)]&=\int\limits_{\mathbb{R}^d}\mathbb{P}\left(x\in \mathcal{B}_r(o)\cap V_o\right){\rm d}x=\int\limits_{\mathcal{B}_r(o)}\mathbb{P}\left(x \in V_o\right){\rm d}x\stackrel{(a)}{=}d\kappa_d\int_{0}^r\exp(-\lambda\kappa_d v^d)v^{d-1}{\rm d}v,
\end{align*}
where (a) follows from the change of Cartesian coordinates to polar coordinates and the void probability of the homogeneous PPP. Evaluating the last integral yields \eqref{eq:Mean_Intersection}.
\end{proof}
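As an independent sanity check (illustrative only, with arbitrary parameter choices), the final integral in the proof can be compared numerically with the closed form \eqref{eq:Mean_Intersection}:

```python
import math

def mean_intersection_quadrature(lam, d, r, n=20000):
    """Midpoint rule for d*kappa_d * int_0^r exp(-lam*kappa_d*v^d) v^(d-1) dv."""
    kappa = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    h = r / n
    s = sum(math.exp(-lam * kappa * ((i + 0.5) * h) ** d) * ((i + 0.5) * h) ** (d - 1)
            for i in range(n))
    return d * kappa * h * s

def mean_intersection_closed(lam, d, r):
    """Right-hand side of the lemma: (1 - exp(-lam*kappa_d*r^d)) / lam."""
    kappa = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return (1 - math.exp(-lam * kappa * r ** d)) / lam

for d in (1, 2, 3):
    print(d, mean_intersection_quadrature(1.0, d, 1.5),
          mean_intersection_closed(1.0, d, 1.5))
```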
Now, we present the CDF of $\tilde{R}_o$ using the result given in Lemma \ref{lemma:Mean_IntBall_PVCell}.
\begin{thm}
\label{thm:CDF_R_Crofton}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the CDF of the distance $\tilde{R}_o$ from the nucleus to a uniformly random point in the 0-cell $\tilde{V}_o$ is
\begin{equation}
F_{\tilde{R}_o}(r)=1-\exp\left(-\lambda\kappa_d r^d\right),~~~r\geq 0.
\label{eq:CDF_R_Crofton}
\end{equation}
\end{thm}
\begin{proof}
Let $\mathbf{x}_o$ represent the nucleus of $\tilde{V}_o$ and let $\textbf{y}$ represent the uniformly distributed point in $\tilde{V}_o$.
We note that the distance $\tilde{R}_o=\|\mathbf{x}_o-\mathbf{y}\|$ is less than $r$ when $\mathbf{y}$ lies in the intersection of the ball $\mathcal{B}_r(\mathbf{x}_o)$ and $\tilde{V}_o$.
Therefore, the CDF of $\tilde{R}_o$ can be written as
\begin{align*}
F_{\tilde{R}_o}(r)&=\mathbb{P}(\tilde{R}_o\leq r)=\mathbb{E}\left[\frac{\upsilon_d(\mathcal{B}_r(\mathbf{x}_o)\cap \tilde{V}_o)}{\upsilon_d(\tilde{V}_o)}\right].
\end{align*}
Now, we define the function $f$ of the PV cell $V_\mathbf{x}$ as the ratio of the volumes of $\mathcal{B}_r(\mathbf{x})\cap V_\textbf{x}$ and $V_\mathbf{x}$. Thus, the function $f$ for the 0-cell and the typical cell, respectively, becomes
\begin{align*}
f(\tilde{V}_o)=\frac{\upsilon_d(\mathcal{B}_r(\mathbf{x}_o)\cap \tilde{V}_o)}{\upsilon_d(\tilde{V}_o)}~~~\text{and}~~~f(V_o)=\frac{\upsilon_d(\mathcal{B}_r(o)\cap V_o)}{\upsilon_d(V_o)}.
\end{align*}
By substituting the above function in \eqref{eq:Relation_Vo_V}, we obtain
\begin{align*}
\mathbb{E}\left[\frac{\upsilon_d(\mathcal{B}_r(\mathbf{x}_o)\cap \tilde{V}_o)}{\upsilon_d(\tilde{V}_o)}\right]=\lambda\mathbb{E}^{o}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)].
\end{align*}
Finally, we arrive at \eqref{eq:CDF_R_Crofton} by substituting the result of Lemma \ref{lemma:Mean_IntBall_PVCell} in the above equation.
\end{proof}
Using Theorem \ref{thm:CDF_R_Crofton}, we can calculate the $n$-th moment of the distance $\tilde{R}_o$.
\begin{cor}
\label{cor:Mean_Ro}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the $n$-th moment of the distance $\tilde{R}_o$ from the nucleus to a uniformly random point in the 0-cell $\tilde{V}_o$ is
\begin{align}
\mathbb{E}[\tilde{R}_o^n]=\frac{\Gamma\left(1+\frac{n}{d}\right)}{\left(\lambda\kappa_d\right)^{\frac{n}{d}}}.
\label{eq:Mean_Ro_crofton}
\end{align}
\end{cor}
\begin{remark}
Using the void probability, the distribution of the distance between the origin and the nucleus of $\tilde{V}_o$, {\em say $\mathbf{x}_o$}, can be simply determined as $\mathbb{P}(\|\mathbf{x}_o\|\leq r)=1-\exp(- \lambda\kappa_dr^d)$. However, it does not reveal any information about how the origin is distributed in the 0-cell. While one can intuitively expect the origin to be uniformly distributed in $\tilde{V}_o$, there does not appear to be a straightforward way to prove this. Using \eqref{eq:Relation_Vo_V}, we have presented a simple construction to establish that the distribution of the origin in $\tilde{V}_o$ is in fact that of a uniformly random point in $\tilde{V}_o$.
\end{remark}
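Theorem \ref{thm:CDF_R_Crofton} and Corollary \ref{cor:Mean_Ro} are easy to validate by simulation: by the above remark, $\tilde{R}_o$ is distributed as the contact distance $\|\mathbf{x}_o\|$, whose mean for $d=2$ and $\lambda=1$ is $\Gamma(3/2)/\sqrt{\pi}=1/2$ by \eqref{eq:Mean_Ro_crofton}. The sketch below (illustrative only; the window half-width and sample count are arbitrary choices) estimates this mean from a PPP simulated in a square window:

```python
import math, random

random.seed(3)

def poisson(mean):
    """Knuth's method; adequate for moderate means."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def contact_distance(lam=1.0, half_width=6.0):
    """Distance from the origin to the nearest point of a PPP of
    intensity lam simulated in [-W, W]^2; edge effects are negligible
    for W large compared to the typical nearest-point distance."""
    n = poisson(lam * (2 * half_width) ** 2)
    best = float("inf")
    for _ in range(n):
        x = random.uniform(-half_width, half_width)
        y = random.uniform(-half_width, half_width)
        best = min(best, math.hypot(x, y))
    return best

mean_hat = sum(contact_distance() for _ in range(4000)) / 4000
print(mean_hat)  # should be close to 1/2
```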
In the next section, we present our approach to the exact evaluation of the CDF of $R_o$.
\section{Distribution of $R_o$} \label{sec:TypicalCell}
We first characterize the CDF of $R_o$ for $d=1$ where the typical cell is completely characterized by the locations of the nearest points on either side of its nucleus. This allows us to explicitly describe the uniformly distributed point in the typical cell $V_o$ and, in turn, determine the CDF of $R_o$.
In contrast, the structure of the typical cell for $d>1$ is more complex, which makes the distribution of $R_o$ far more difficult to determine, as will be demonstrated in Section \ref{sec:GeneralCase_CDF_R}.
\subsection{Distribution of $R_o$ for $d=1$}
\label{subsec:1D}
Let $\Phi\triangleq\{x_1,x_2,\dots\}$ be a homogeneous PPP with intensity $\lambda$ on $\mathbb{R}$. Let $x\in\Phi\cap\mathbb{R}^-$ and $y\in\Phi\cap\mathbb{R}^+$ be the left and right neighboring points of the origin (i.e., the nucleus of $V_o$), respectively. Since $\Phi$ is a PPP, $|x|$ and $|y|$ are i.i.d.~random variables that follow an exponential distribution with mean $\lambda^{-1}$.
Denote by $R_1=\frac{1}{2}|x|$ and $R_2=\frac{1}{2}|y|$ the distances from the nucleus to the boundary points of the typical cell $V_o$.
Since $|x|$ and $|y|$ are i.i.d., $R_1$ and $R_2$ are also i.i.d.~and follow an exponential distribution with parameter $2\lambda$.
Let $\tilde{R}_{1}=\min(R_1,R_2)$ and $\tilde{R}_{2}=\max(R_1,R_2)$. The joint probability density function (pdf) of $\tilde{R}_{1}$ and $\tilde{R}_{2}$ is \cite[Chapter 2]{Mohammad2013}
\begin{align}
f_{\tilde{R}_{1}, \tilde{R}_{2}}(r_1, r_2) = 8\lambda^2 \exp\left(-2\lambda\left(r_1 + r_2\right)\right), \quad 0 \leq r_1\leq r_2.
\label{eq:JointPDF_1D}
\end{align}
The distribution of the distance $R_o$ from the nucleus to a uniformly random point in the typical cell $V_o$ conditioned on $\tilde{R}_{1}$ and $\tilde{R}_{2}$ is
\begin{equation}
\mathbb{P}(R_o \leq r\mid \tilde{R}_{1} = r_1,\tilde{R}_{2} = r_2) = \begin{cases} \frac{2r}{r_1 + r_2}, & 0 \leq r \leq r_1 \\
\frac{r + r_1}{r_1 + r_2}, & r_1 < r \leq r_2 \\
1, & r_2 < r. \\
\end{cases}
\label{eq:Cond_CDF_1D}
\end{equation}
By deconditioning the above expression with respect to the joint distribution of $\tilde{R}_{1}$ and $\tilde{R}_{2}$, the CDF of $R_o$ is presented in the following theorem.
\begin{figure}
\centering
\includegraphics[width=.55\textwidth]{CDF_R_1D_v4.eps}\vspace{-.3cm}
\caption{CDF of $R_o$ and $\tilde{R}_o$ for a unit-intensity Poisson point process for $d=1$.
}
\label{fig:PDF_R_1DPPP}
\end{figure}
\begin{theorem}\label{thm:CDF_R_1D}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}$, the CDF of the distance $R_o$ from the nucleus to a uniformly random point in the typical cell $V_o$ is
\begin{align}
F_{R_o}(r)&=1-\exp(-2\lambda r)+2\lambda r\exp(-2\lambda r)-4\lambda^2r^2\mathtt{E}_1(2\lambda r),~~ r> 0,\label{eq:CDF_R_1D}
\end{align}
where $\mathtt{E}_1(z)=\int_z^\infty \frac{1}{t}\exp(-t){\rm d}t$ is the exponential integral function.
\end{theorem}
\begin{proof}
Using the expression for the conditional CDF of $R_o$ given in \eqref{eq:Cond_CDF_1D} and the joint pdf of $\tilde{R}_1$ and $\tilde{R}_2$ given in \eqref{eq:JointPDF_1D}, the CDF of $R_o$ can be written as
\begin{align*}
F_{R_o}(r)=&\int_0^r\int_0^{r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2 +\int_r^\infty\int_0^{r}\frac{r+r_1}{r_1+r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2 \\
&+\int_r^\infty\int_r^{r_2}\frac{2r}{r_1+r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2. \numberthis
\label{eq:CDF_Ro_d1_Integrals}
\end{align*}
Further, using some mathematical simplifications, we obtain the result in \eqref{eq:CDF_R_1D}. Please refer to Appendix \ref{app:CDF_Ro_d1_Integrals} for more details on the manipulation of the integrals in \eqref{eq:CDF_Ro_d1_Integrals}.
\end{proof}
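Theorem \ref{thm:CDF_R_1D} can be validated by direct simulation, since for $d=1$ the typical cell is the interval $[-R_1,R_2]$ with $R_1,R_2$ i.i.d.~exponential with parameter $2\lambda$. The sketch below (illustrative only; parameters are arbitrary) compares the empirical CDF at a single point with \eqref{eq:CDF_R_1D}, computing $\mathtt{E}_1$ through its standard series expansion:

```python
import math, random

random.seed(7)

def exp1(z, terms=40):
    """E_1(z) = -gamma - ln z + sum_{n>=1} (-1)^(n+1) z^n / (n * n!),
    valid for moderate z."""
    gamma = 0.5772156649015329
    s, term = 0.0, 1.0
    for n in range(1, terms + 1):
        term *= z / n  # now equals z^n / n!
        s += (-1) ** (n + 1) * term / n
    return -gamma - math.log(z) + s

def cdf_Ro_1d(lam, r):
    """CDF of R_o for d = 1, as in the theorem above."""
    a = 2 * lam * r
    return 1 - math.exp(-a) + a * math.exp(-a) - a * a * exp1(a)

lam, r, trials = 1.0, 0.5, 200000
hits = 0
for _ in range(trials):
    R1 = random.expovariate(2 * lam)  # distance to the left boundary
    R2 = random.expovariate(2 * lam)  # distance to the right boundary
    u = random.uniform(-R1, R2)       # uniform point in the typical cell
    hits += abs(u) <= r
print(hits / trials, cdf_Ro_1d(lam, r))  # both should be close to 0.78
```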
In Fig.~\ref{fig:PDF_R_1DPPP}, we provide the plots for the CDFs of $R_o$ and $\tilde{R}_o$. From the figure, it can be seen that the distance $\tilde{R}_o$ stochastically dominates the distance $R_o$. In Section \ref{sec:Approximate_CDF}, we will demonstrate that this difference between the distributions of $\tilde{R}_o$ and $R_o$ diminishes with increasing $d$.
\subsection{Distribution of $R_o$ for $d>1$}
\label{sec:GeneralCase_CDF_R}
Just as the distribution of $R_o$ for $d=1$ was derived in Section \ref{subsec:1D} by conditioning on the nuclei of the neighboring PV cells, here we derive the distribution of $R_o$ for $d>1$ by conditioning on the points in a hypersphere centered at the origin that includes the nuclei of all neighboring PV cells of $V_o$. We refer to the conditional positions of the points in this sphere as the {\em domain configuration}.
The domain configuration enables the characterization of the shape and size of the PV cell $V_o$ which will be useful in the evaluation of the conditional distribution of $R_o$.
A similar construction is presented in \cite{Pineda,pineda2008temporal} to study the temporal evolution of the domain volume and the free-boundary distributions of a PV transformation for $d\in\{1,2,3\}$\footnote{Sets of randomly distributed nuclei (realized through a PPP) growing simultaneously at an equal isotropic rate are referred to as the {\em PV transformation}. These sets eventually transform into the PV cells.}.
In the following subsection, we define the domain configuration and discuss its use for the conditional PV cell characterization.
\subsubsection{Domain Configuration}
First, we define the domain configuration and obtain its probability. Next, we discuss its connection with the conditional shape and size of the PV cell $V_o$.
\begin{definition}
For $\ell>0$, we define the set $\mathcal{C}_\ell^k$ as the set of $k$ points with polar coordinates $(l_i,\boldsymbol{\theta}_i)$ such that
\begin{equation}
\mathcal{C}_\ell^k\equiv\frac{1}{2}\{\Phi\cap \mathcal{B}_{2\ell}(o)\mid \Phi(\mathcal{B}_{2\ell}(o))=k\},
\label{eq:Domain_Confg_Def}
\end{equation}
where $l_i$ is the radial coordinate and $\boldsymbol{\theta}_i=[\theta_{1i},\dots,\theta_{(d-1)i}]$ are the angular coordinates.
\end{definition}
Thus, the point $\tilde{\textbf{x}}_i\triangleq(l_i,\boldsymbol{\theta}_i)\in\mathcal{C}_\ell^k$ bisects the line segment joining $o$ and $\textbf{x}_i \in \Phi\cap \mathcal{B}_{2\ell}(o)$.
By construction, $l_i\in[0,\ell]$, $ \theta_{(d-1)i}\in[0,2\pi)$ and $\theta_{1i},\dots, \theta_{(d-2)i}\in[0,\pi]$.
Henceforth, the set ${\mathcal{C}}_\ell^k$ is referred to as the {\em domain configuration}.
Since $\Phi$ is a PPP, conditioned on $\Phi(\mathcal{B}_{2\ell}(o))=k$, the points $\textbf{x}_i\in\Phi\cap \mathcal{B}_{2\ell}(o)$, for $i\in\{1,\dots,k\}$, are distributed uniformly at random independently of each other in $\mathcal{B}_{2\ell}(o)$. Consequently, the $k$ points $\{\tilde{\textbf{x}}_i\}_{i=1}^k$ forming the domain configuration $\mathcal{C}_\ell^k$ are also distributed uniformly at random independently of each other in $\mathcal{B}_\ell(o)$. Using this fact, we can express the pdf of the domain configuration as done next.
The differential volume element in $d$ dimensions in polar coordinates is~\cite{blumenson1960derivation}
$$\Delta=v^{d-1}\sin^{d-2}(\alpha_{1})\dots\sin(\alpha_{d-2}){\rm d}v{\rm d}\alpha_{1}\dots{\rm d}\alpha_{d-1}.$$
Thus, the probability that a point distributed uniformly at random in $\mathcal{B}_\ell(o)$ lies in an infinitesimal region with volume $\Delta_i$ such that $v_i\leq \ell$ is equal to $\frac{\Delta_i}{\kappa_d \ell^d}$. Now, we obtain the pdf of the configuration $\mathcal{C}_\ell^k$ conditioned on $\Phi(\mathcal{B}_{2\ell}(o))=k$ as
\begin{align}
\mathbb{P}((l_1,\boldsymbol{\theta}_1)\in\Delta_1,&\dots,(l_k,\boldsymbol{\theta}_k)\in\Delta_k;\ell)\stackrel{(a)}{=}\prod_{i=1}^k\mathbb{P}((l_i,\boldsymbol{\theta}_i)\in\Delta_i)\nonumber \\
&\stackrel{(b)}{=}\prod\limits_{i=1}^k \frac{1}{\kappa_d \ell^d}v_i^{d-1}\sin^{d-2}(\alpha_{1i})\dots\sin(\alpha_{(d-2)i}){\rm d}v_i{\rm d}\alpha_{1i}\dots{\rm d}\alpha_{(d-1)i}, ~~\text{for}~0\leq v_i\leq \ell,
\label{eq:Conf_Prob}
\end{align}
where (a) follows from the independence of the elements of $\mathcal{C}_\ell^k$ and (b) follows from the uniform distribution of elements of $\mathcal{C}_\ell^k$ in $\mathcal{B}_\ell(o)$.
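The uniformity used in steps (a) and (b) can be illustrated by simulation for $d=2$ (a sketch only, with arbitrary parameters): sampling $\Phi\cap\mathcal{B}_{2\ell}(o)$ and halving each point should produce radial coordinates $l_i$ that are uniform in $\mathcal{B}_\ell(o)$, with CDF $(l/\ell)^2$ and hence mean $2\ell/3$:

```python
import math, random

random.seed(11)

def poisson(mean):
    """Knuth's method; adequate for moderate means."""
    L, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def sample_config(lam=1.0, ell=2.0):
    """One draw of the domain configuration C_ell^k for d = 2: the points
    of a PPP of intensity lam in B_{2 ell}(o), each scaled by 1/2."""
    R = 2 * ell
    pts = []
    for _ in range(poisson(lam * math.pi * R * R)):
        while True:  # rejection sampling of a uniform point in B_R(o)
            x, y = random.uniform(-R, R), random.uniform(-R, R)
            if x * x + y * y <= R * R:
                break
        pts.append((x / 2, y / 2))  # midpoint of the segment from o to x_i
    return pts

# radial coordinates l_i should have CDF (l/ell)^2, hence E[l_i] = 2*ell/3
radii = [math.hypot(x, y) for _ in range(500) for (x, y) in sample_config()]
print(sum(radii) / len(radii))  # should be close to 4/3 for ell = 2
```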
\begin{figure}
\centering
\includegraphics[trim={10cm 8cm 8cm 5cm},width=.9\textwidth]{Config_Dia.eps}\vspace{-.1cm}
\caption{Illustration of $V_o(\mathcal{C}_\ell^3)\cap \mathcal{B}_\ell(o)$ for $d=2$.}
\label{fig:Illustration}
\end{figure}
\subsubsection{Connections with the Typical Cell} For an empty domain configuration $\mathcal{C}_\ell^0$, $\mathcal{B}_\ell(o)$ is contained in the typical cell $V_o$. However, a non-empty domain configuration, i.e., $\mathcal{C}_\ell^k$ for $k>0$, contains the mid-points of the chords of $\mathcal{B}_\ell(o)$ formed by the intersection of the edges of the typical cell $V_o$ with $\mathcal{B}_\ell(o)$. In addition, the line segments connecting these mid-points to the origin are perpendicular to the corresponding edges. Therefore, the domain configuration provides useful information about the structure of $V_o$. We denote by $V_o(\mathcal{C}_\ell^k)$ the typical cell conditioned on the domain configuration $\mathcal{C}_\ell^k$.
As $k \rightarrow \infty$, it is easy to see that $V_o(\mathcal{C}_\ell^k)$ becomes deterministic.
However, for any finite $k$, $V_o(\mathcal{C}_\ell^k)$ is in general random because some of its edges may be defined by points of $\Phi$ lying outside $\mathcal{B}_{2\ell}(o)$. That said, conditioning on $\mathcal{C}_\ell^k$ is sufficient to uniquely determine the intersection of $V_o(\mathcal{C}_\ell^k)$ and the ball $\mathcal{B}_\ell(o)$.
Fig.~\ref{fig:Illustration} illustrates the intersection of $\mathcal{B}_\ell(o)$ with the cell $V_o(\mathcal{C}_\ell^3)$ for $d=2$.
Let us define $H_{\textbf{x}}$ as the half-space formed by the points in $\mathbb{R}^d$ that are closer to the point $\mathbf{x}\in\Phi$ than to the origin, i.e.,
\begin{equation}
H_{\textbf{x}}\triangleq\{\mathbf{y}\in\mathbb{R}^d\mid \|\mathbf{y}-\mathbf{x}\|< \|\mathbf{y}\|\}.
\end{equation}
Now, we denote by $L_i$ the surface (in $d-1$ dimensions) of the spherical cap of $\mathcal{B}_\ell(o)$ such that
\begin{equation}
L_i\triangleq H_{\textbf{x}_i}\cap \partial \mathcal{B}_\ell(o),
\label{eq:Arc_DomainCong}
\end{equation}
where $\partial \mathcal{B}_\ell(o)$ is the boundary of $\mathcal{B}_\ell(o)$.
Note that for $d=2$ the surface of the spherical cap is a circular arc. From the above definition, it is clear that $\tilde{\mathbf{x}}_i\in\mathcal{C}_\ell^k$ is the point of the supporting hyperplane of $H_{\textbf{x}_i}$ that is nearest to the origin, and that it is equidistant from the origin and $\mathbf{x}_i$. Further, the point $\tilde{\textbf{x}}_i$ is also the center of the $(d-1)$-dimensional chord that forms $L_i$.
This is illustrated in Fig.~\ref{fig:Illustration} for $d=2$.
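Both geometric facts are easy to check numerically: with $\tilde{\mathbf{x}}_i=\mathbf{x}_i/2$, the configuration point is equidistant from the origin and $\mathbf{x}_i$, and the segment from the origin to $\tilde{\mathbf{x}}_i$ points along the normal of the corresponding bisector hyperplane. A minimal sketch (ours), here for $d=3$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(3)   # a point x of Phi (here d = 3)
x_mid = x / 2.0              # the configuration point x~ = x/2

# x~ is equidistant from the origin and from x ...
assert np.isclose(np.linalg.norm(x_mid), np.linalg.norm(x_mid - x))

# ... and the segment o -> x~ is parallel to the normal of the bisector
# hyperplane {y : ||y - x|| = ||y||}, whose normal direction is x itself.
assert np.allclose(np.cross(x_mid, x), 0.0)
```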
Now, since $\{\tilde{\textbf{x}}_i\}_{i=1}^k$ are distributed uniformly at random in $\mathcal{B}_\ell(o)$ independently of each other, the corresponding surfaces of the spherical caps $\{L_i\}_{i=1}^k$ have i.i.d.~surface areas\footnote{The surface area in this case is the Lebesgue measure in $d-1$ dimensions.} and are placed uniformly at random on $\partial \mathcal{B}_\ell(o)$. As will be evident in the sequel, this construction will allow us to establish useful conditional geometric properties of the PV cell such as the volume of the intersection of the ball with the PV cell, the conditional distribution of uniformly distributed points within the PV cell, and the shape of large PV cells. We will now use this construction to derive the distribution of $R_o$.
\subsubsection{Distance Distribution} For a given domain configuration $\mathcal{C}_\ell^k$, we define
\begin{align}
g_k(r;\mathcal{C}_\ell^k) =\upsilon_d(V_o(\mathcal{C}_\ell^k)\cap \mathcal{B}_{r}(o)),
\end{align}
for $0\leq r\leq \ell$, as the volume of the intersection of $\mathcal{B}_{r}(o)$ and cell $V_o(\mathcal{C}_\ell^k)$. As discussed before, $V_o(\mathcal{C}_\ell^k)$ is the typical cell conditioned on the domain configuration $\mathcal{C}_\ell^k$.
\begin{figure}
\centering
\includegraphics[trim={3cm 1cm 3cm 1cm},width=.9\textwidth]{PVCell_Dia_v3.eps}\vspace{-.3cm}
\caption{Illustration of $g_5(r;\mathcal{C}_\ell^5)$ and $g_5(\ell;\mathcal{C}_\ell^5)$ for $d=2$.}
\label{fig:g_k_r}
\end{figure}
\begin{definition}
Let $R_\ell$ denote the distance from the nucleus of $V_o$ (i.e., the origin) to a uniformly random point in $V_o\cap \mathcal{B}_\ell(o)$.
\end{definition}
The first main goal is to characterize the CDF of $R_\ell$. Since $V_o$ is almost surely bounded, $V_o\subset \mathcal{B}_\ell(o)$ for all sufficiently large $\ell$, and hence the CDF of $R_o$ is simply
\begin{align}
F_{R_o} (z) = \lim_{\ell\to\infty} \mathbb{P}(R_\ell \leq z).
\label{eq:LimitingCase_Dist_of_R}
\end{align}
We first characterize the CDF of $R_\ell$ conditioned on the domain configuration $\mathcal{C}_\ell^k$. This conditional CDF of $R_\ell$ can be expressed as
\begin{align}
F_{R_\ell}(r;\mathcal{C}_\ell^k)&=\frac{\upsilon_d(V_o(\mathcal{C}_\ell^k)\cap \mathcal{B}_{r}(o))}{\upsilon_d(V_o(\mathcal{C}_\ell^k)\cap \mathcal{B}_\ell(o))}=\frac{ g_k(r;\mathcal{C}_\ell^k)}{ g_k(\ell;\mathcal{C}_\ell^k)}, ~~0 \leq r \leq \ell.
\label{eq:CDR_R_Cond_confg}
\end{align}
Fig.~\ref{fig:g_k_r} provides a visual interpretation of $g_k(r;\mathcal{C}_\ell^k)$ and $g_k(\ell;\mathcal{C}_\ell^k)$ for the typical cell for $d=2$.
The region $g_k(r;\mathcal{C}_\ell^k)$ is shaded in green and the region $g_k(\ell;\mathcal{C}_\ell^k)$ is shaded in brown for $k=5$. Naturally, our next goal is to characterize $g_k(\cdot;\mathcal{C}_\ell^k)$ for which we use $\{L_i\}_{i=1}^k$ given by \eqref{eq:Arc_DomainCong}.
Define the index set ${\mathcal{I}}(r)$ as the collection of indices $i$ for which $l_i\leq r$, i.e., the indices of the points $\tilde{\textbf{x}}_i$ of the domain configuration that lie inside $\mathcal{B}_r(o)$. It is easy to see that $\cup_{i\in {\mathcal{I}}(r)} L_i$ represents the portion of $\partial \mathcal{B}_r(o)$ that is outside the typical cell $V_o(\mathcal{C}_\ell^k)$. This can be seen from Fig.~\ref{fig:g_k_r} for $d=2$, where the arcs on $\mathcal{B}_r(o)$ corresponding to $\tilde{\textbf{x}}_1\equiv(l_1, \theta_1)$ and $\tilde{\textbf{x}}_2\equiv(l_2, \theta_2)$ do not lie in the cell. Using this insight, we will explicitly characterize the portion of $\partial \mathcal{B}_r(o)$ that lies in $V_o(\mathcal{C}_\ell^k)$, which will then be used to derive the CDF of $R_{\ell}$. This evaluation requires a careful consideration of the overlaps between the surfaces of the spherical caps $\{L_i\}_{i \in {\mathcal{I}}(r)}$.
Let $\textbf{y}\triangleq(r,\boldsymbol{\alpha})$ be a point on $\partial\mathcal{B}_r(o)$, where $\boldsymbol{\alpha} = [\alpha_1, \alpha_2, \ldots, \alpha_{d-1}]$.
The Euclidean distance between ${\mathbf{y}}\in\partial\mathcal{B}_r(o)$ and ${\mathbf{x}}_i \triangleq (2l_i, \boldsymbol{\theta}_i)\in\Phi$ is
\begin{align*}
d_2({\mathbf{y}}, {\mathbf{x}}_i) = \sqrt{\sum_{n=1}^d\left(y_n - x_{ni}\right)^2},
\end{align*}
where
\begin{flalign*}
\begin{aligned}
x_{ni} =
\begin{dcases}
2 l_i \cos(\theta_{1i}); & n = 1, \\
2 l_i \prod_{j=1}^{n-1} \sin(\theta_{ji}) \cos(\theta_{ni}); & 1<n < d, \\
2 l_i \prod_{j=1}^{n-1} \sin(\theta_{ji}); & n = d ,
\end{dcases}
\end{aligned}
~~~~~\text{and}~~~~~
\begin{aligned}
y_{n} =
\begin{dcases}
r \cos(\alpha_{1}); & n = 1, \\
r \prod_{j=1}^{n-1} \sin(\alpha_{j}) \cos(\alpha_{n}); & 1 < n < d, \\
r \prod_{j=1}^{n-1} \sin(\alpha_{j}); & n = d.\hspace{1cm}
\end{dcases}
\end{aligned}
\end{flalign*}
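The hyperspherical parametrisation can be coded and checked against its defining property $\|\mathbf{y}\|=r$. The helpers below are ours and assume the standard convention in which the first coordinate carries the cosine of the first angle and the last coordinate is a pure product of sines:

```python
import numpy as np

def spherical_to_cartesian(r, angles):
    """Map (r, a_1, ..., a_{d-1}) to Cartesian coordinates in R^d with
    x_1 = r cos(a_1), x_n = r sin(a_1)...sin(a_{n-1}) cos(a_n) for 1 < n < d,
    and x_d = r sin(a_1)...sin(a_{d-1})."""
    d = len(angles) + 1
    x = np.empty(d)
    sin_prod = 1.0
    for n in range(d - 1):
        x[n] = r * sin_prod * np.cos(angles[n])
        sin_prod *= np.sin(angles[n])
    x[d - 1] = r * sin_prod
    return x

def d2(y, x):
    """Euclidean distance between two points given in Cartesian coordinates."""
    return float(np.linalg.norm(np.asarray(y) - np.asarray(x)))
```

For $d=2$ this reduces to the familiar polar coordinates $(r\cos\alpha_1, r\sin\alpha_1)$.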
The points on $\partial \mathcal{B}_r(o)$ that lie in the typical cell $V_o(\mathcal{C}_\ell^k)$ must lie outside all of $\{L_i\}_{i=1}^k$. To capture this, let $D_i(l_i,\boldsymbol{\theta}_i,\textbf{y})$ be the indicator function taking value 1 if $\mathbf{y}\notin L_i$ or $l_i>r$ (the second condition simply means that $i \notin {\mathcal{I}}(r)$), and value 0 otherwise, i.e.,
\begin{equation}
D_i\left(l_i,\boldsymbol{\theta}_i,\mathbf{y}\right)\triangleq\begin{cases}
\mathbbm{1}\left(d_2({\mathbf{y}}, (2l_i, \boldsymbol{\theta}_i)) > r\right);& \text{for~}i \in {\mathcal{I}}(r) \\
1; &\text{for~}i \notin {\mathcal{I}}(r).
\end{cases}
\label{eq:Indicator_Fun}
\end{equation}
Let $\mathbb{D}=[0,2\pi)\times[0,\pi]^{d-2}$. Using \eqref{eq:Indicator_Fun}, we can now express the portion of $\partial \mathcal{B}_r(o)$ that belongs to the typical cell $V_o(\mathcal{C}_\ell^k)$ as
\begin{align}
&\int_{\mathbb{D}}\prod\limits_{i=1}^k D_i\left(l_i,\boldsymbol{\theta}_i,\textbf{y}\right)\Delta(\boldsymbol{\alpha}){\rm d}\boldsymbol{\alpha}=\frac{1}{r^{d-1}}\upsilon_d(\partial \mathcal{B}_r(o)\cap V_o(\mathcal{C}_\ell^k)),\nonumber
\end{align}
where $\Delta(\boldsymbol{\alpha})=\sin^{d-2}(\alpha_{1})\times\dots\times\sin(\alpha_{d-2})$.
Note that $\prod_{i=1}^{k}D_i\left(l_i,\boldsymbol{\theta}_i,\mathbf{y}\right)$ equals 1 at every point ${\textbf{y}}=(r,\boldsymbol{\alpha})$ with $0\leq r\leq z$ that lies inside $\mathcal{B}_z(o)\cap V_o(\mathcal{C}_\ell^k)$, and 0 elsewhere. Thus, integrating $\prod_{i=1}^{k} D_i\left(l_i,\boldsymbol{\theta}_i,\mathbf{y}\right)$ over all points $\mathbf{y}\in\mathcal{B}_z(o)$ gives the value of $g_k(z;\mathcal{C}_\ell^k)$ for the given domain configuration, i.e.,
\begin{equation}
g_k(z;\mathcal{C}_\ell^k)=\int_{\mathbb{D}}\int_{r=0}^z\prod\limits_{i=1}^k D_i\left(l_i,\boldsymbol{\theta}_i,\mathbf{y}\right)r^{d-1}\Delta(\boldsymbol{\alpha}){\rm d}r{\rm d}\boldsymbol{\alpha}.
\label{eq:Conditional_Volume_of_Int_Bz_Vo}
\end{equation}
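Since handling the overlaps of the caps analytically is delicate, the quantity in \eqref{eq:Conditional_Volume_of_Int_Bz_Vo} can also be estimated by plain Monte Carlo: sample $\mathbf{y}$ uniformly in $\mathcal{B}_z(o)$ and keep it iff $\|\mathbf{y}-\mathbf{x}_i\|>\|\mathbf{y}\|$ for every $i$ (for $l_i>r$ this holds automatically, consistent with \eqref{eq:Indicator_Fun}). The sketch below is ours and works directly in Cartesian coordinates:

```python
import math
import numpy as np

def g_k_mc(z, x_points, d, n, rng):
    """Monte Carlo estimate of g_k(z; C) = vol(V_o(C) ∩ B_z(o)).

    x_points are the Cartesian points x_i of Phi (so the configuration
    points are x_i / 2). A sample y survives iff ||y - x_i|| > ||y|| for
    every i, i.e. y is on the origin's side of every bisector."""
    y = rng.standard_normal((n, d))
    y /= np.linalg.norm(y, axis=1, keepdims=True)
    y *= z * rng.random((n, 1)) ** (1.0 / d)        # uniform in B_z(o)
    r = np.linalg.norm(y, axis=1)
    keep = np.ones(n, dtype=bool)
    for x in x_points:
        keep &= np.linalg.norm(y - np.asarray(x), axis=1) > r
    ball_vol = math.pi ** (d / 2) / math.gamma(d / 2 + 1) * z ** d
    return keep.mean() * ball_vol

rng = np.random.default_rng(3)
# Empty configuration: the whole ball B_z(o) lies inside the cell.
assert np.isclose(g_k_mc(1.0, [], 2, 10_000, rng), math.pi)
```

For a single point of $\Phi$ in $d=2$, the estimate can be checked against the exact disk-minus-circular-segment area.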
Using the above results, we present the distance distribution of a uniformly distributed point in $V_o\cap \mathcal{B}_\ell(o)$ conditioned on $\Phi(\mathcal{B}_{2\ell}(o))=k$ in the following lemma. Note that in this lemma, we condition on the number of points that form the domain configuration but not on their locations.
Let $\mathbf{y}_i=(u_i,\boldsymbol{\alpha}_i)$ and $\tilde{\mathbb{D}}^d=[0,\ell]\times[0,2\pi)\times[0,\pi]^{d-2}$.
\begin{lemma}
\label{lemma:CONd_CDF}
For given $\ell$, the CDF of $R_\ell$ conditioned on $\Phi(\mathcal{B}_{2\ell}(o))=k$ is
\begin{align}
F_{R_\ell}(z;k)=\int\limits_{\left(\tilde{\mathbb{D}}^d\right)^k}\frac{ g_k(z;(u_1,\boldsymbol{\alpha}_1),\dots,(u_k,\boldsymbol{\alpha}_k))}{g_k(\ell;(u_1,\boldsymbol{\alpha}_1),\dots,(u_k,\boldsymbol{\alpha}_k))}\prod\limits_{i=1}^k \frac{1}{\kappa_d \ell^d}u_i^{d-1}\Delta(\boldsymbol{\alpha}_i){\rm d}\mathbf{y}_i,
\label{eq:CDF_cond_K}
\end{align}
where $g_k(z;(u_1,\boldsymbol{\alpha}_1),\dots,(u_k,\boldsymbol{\alpha}_k))$ is given by \eqref{eq:Conditional_Volume_of_Int_Bz_Vo}.
\end{lemma}
\begin{proof}
The CDF of $R_\ell$ conditioned on $\Phi(\mathcal{B}_{2\ell}(o))=k$ is
$F_{R_\ell}(z;k)=\mathbb{E}_{\mathcal{C}_\ell^k}[F_{R_\ell}(z;\mathcal{C}_\ell^k)]$ where $F_{R_\ell}(z;\mathcal{C}_\ell^k)$ is given by \eqref{eq:CDR_R_Cond_confg}, and the pdf of $\mathcal{C}_\ell^k$ is given in \eqref{eq:Conf_Prob}.
\end{proof}
Using Lemma \ref{lemma:CONd_CDF}, we present the distance distribution of a uniformly distributed point in the typical cell in the following theorem.
\begin{thm}
\label{thm:DDistribution}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the CDF of the distance $R_o$ from the nucleus to a uniformly random point in the typical cell $V_o$ is
\begin{equation}
F_{R_o}(z)=\lim_{\ell\to\infty}\sum_{k=0}^\infty F_{R_\ell}(z;k)\mathbb{P}(\Phi(\mathcal{B}_{2\ell}(o))=k),
\label{eq:CDF_R}
\end{equation}
where $F_{R_\ell}(z;k)$ is given in Lemma \ref{lemma:CONd_CDF}.
\end{thm}
\begin{proof}
The proof follows in two steps. We first take the expectation of the conditional CDF of $R_\ell$, given in Lemma \ref{lemma:CONd_CDF}, over $k$. We then take the limit $\ell \to\infty$ under which this distance distribution of a uniformly distributed point in $V_o\cap \mathcal{B}_\ell(o)$ converges to that of a uniformly distributed point in $V_o$ per \eqref{eq:LimitingCase_Dist_of_R}.
\end{proof}
\begin{cor}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the mean of the distance $R_o$ from the nucleus to a uniformly random point in the typical cell $V_o$ is
\begin{align}
\mathbb{E}[R_o]=\lim_{\ell\to\infty}\int_{0}^\ell(1-\sum_{k=0}^\infty F_{R_\ell}(z;k)\mathbb{P}(\Phi(\mathcal{B}_{2\ell}(o))=k)){\rm d}z,
\label{eq:Mean_Distance}
\end{align}
where $F_{R_\ell}(z;k)$ is given in Lemma \ref{lemma:CONd_CDF}.
\end{cor}
\subsubsection{Numerical Results for $d=2$}
\label{sec:Numerical_Results}
\begin{figure}[t]
\centering
\includegraphics[width=.55\textwidth]{UniLoc_TypicalPVCell_CDF_v5.eps}\vspace{-.3cm}
\caption{CDF of $R_o$ and $\tilde{R}_o$ for unit-intensity PPP on $\mathbb{R}^2$.}
\label{fig:CDF_R}\vspace{.1cm}
\end{figure}
In Fig.~\ref{fig:CDF_R}, we plot the CDF of $\tilde{R}_o$ and the CDF of $R_o$ with $\ell = 1.6$ for $d=2$. This value of $\ell$ is selected because the farthest point of the typical cell in $\mathbb{R}^2$ lies within distance $1.6$ of the nucleus with probability $0.99$ \cite{Calka2002}. The integrals in \eqref{eq:CDF_cond_K} are evaluated numerically using a Monte Carlo integration method. The numerically evaluated mean values of $\tilde{R}_o$ and $R_o$ are $0.500$ and $0.445$, respectively. Given the complicated form of the exact CDF of $R_o$, it is desirable to construct closed-form approximations that could be used to obtain design insights in application-oriented studies. On that note, it has been empirically demonstrated in \cite{Haenggi2017} and \cite{Martin2017_Meta} for $d=2$ that the CDF of $R_o$ can be tightly approximated by $1-\exp(- \pi\rho\lambda r^2)$. This approximation is obtained by introducing a correction factor (c.f.)~$\rho$ in the CDF of $\tilde{R}_o$ given in \eqref{eq:CDF_R_Crofton}, which reduces to $1-\exp(-\pi\lambda r^2)$ for $d=2$. Furthermore, \cite{Haenggi2017} and \cite{Martin2017_Meta} empirically show that $\rho=13/10$ and $\rho=5/4$, respectively, provide a close match to the exact CDF of $R_o$. This is also illustrated in Fig.~\ref{fig:CDF_R}.
Building on these initial insights, we derive the aforementioned c.f.~$\rho$ for the general case of $d$ dimensions in the next section and provide a useful physical interpretation of the resulting value. \vspace{-.3cm}
\section{Approximation of the Distribution of $R_o$ }
\label{sec:Approximate_CDF}
As discussed in the previous section (and shown in Fig.~\ref{fig:CDF_R}), including an appropriate c.f.~$\rho$ in the CDF of $\tilde{R}_o$ provides a close approximation to the CDF of $R_o$ for $d=2$. Motivated by this, here we approximate the CDF of $R_o$ by the CDF of $\tilde{R}_o$ with a c.f.~$\rho_d$ for the $d$-dimensional case.
That is, the CDF of $R_o$ is approximated as $1-\exp(-\rho_d\lambda \kappa_d r^d)$. We determine the c.f.~$\rho_d$ by matching the $d$-th derivative of this expression with that of the second-order Taylor series expansion of the CDF of $R_o$ at $r=0$. Further, we show that $\rho_d$ is equal to the ratio of the mean volumes of the 0-cell $\tilde{V}_o$ and the typical cell $V_o$. Finally, we show that $\rho_d\to 1$ as $d\to\infty$. Note that since \cite{Haenggi2017} and \cite{Martin2017_Meta} obtained the c.f.~for $d=2$ through curve fitting, the mathematical treatment provided in this section is new even for the specific case of $d=2$.
For the second-order Taylor series expansion of the CDF of $R_o$, the moments and covariance of the volume of the typical cell $V_o$ and the volume of the intersection of $\mathcal{B}_r(o)$ with the typical cell $V_o$ are required. Therefore, before we determine the c.f.~$\rho_d$, we present these intermediate results in the following subsection.
\subsection{Some Useful Results}
The moments and covariance of the volumes of the typical cell and its intersection with a ball can be derived using \eqref{eq:Moment_Random_Set} along with the void probability of the homogeneous PPP. We first present the second moment of the volume of the typical cell $V_o$ in the following lemma.
\begin{lemma}
\label{lemma:Moments_Volume_PVCell}
The second moment of the volume of the typical cell $V_o$ is
\begin{align}
\mathbb{E}[\upsilon_d(V_o)^2]=4\pi C_{d,2}\int_{0}^{\pi}\int_{0}^\infty\int_{0}^\infty\exp(-\lambda U(v_1,v_2,u))(v_1v_2)^{d-1}(\sin u)^{d-2}{\rm d}v_2{\rm d}v_1{\rm d}u,
\label{eq:2ndMoment_Volume_PVCell}
\end{align}
where
\begin{align}
U(v_1,v_2,u)=\kappa_dv_1^d+\kappa_dv_2^d-\kappa_dv_1^d\int_0^{\psi_1}\alpha_d\sin^d\psi{\rm d}\psi-\kappa_dv_2^d\int_0^{\psi_2}\alpha_d\sin^d\psi{\rm d}\psi,
\label{eq:Volume_Union_TwoBalls}
\end{align}
$C_{d,2}=\frac{d!}{2(d-2)!}\frac{\kappa_d\kappa_{d-1}}{\kappa_2\kappa_1}$, $\alpha_d=\frac{\Gamma(\frac{d}{2}+1)}{\Gamma(\frac{1}{2})\Gamma(\frac{d+1}{2})}$, $\psi_1+\psi_2=\pi-u$ and $v_1^d\sin^d\psi_1=v_2^d\sin^d\psi_2$.
\end{lemma}
\noindent Note that $U(v_1,v_2,u)$ is the volume of the union of two balls of radii $v_1$ and $v_2$, each passing through the origin, whose centers subtend an angle $u$ at the origin.
\begin{proof}
Using \eqref{eq:Moment_Random_Set}, we obtain the second moment of $\upsilon_d(V_o)$ as
\begin{align*}
\mathbb{E}[\upsilon_d(V_o)^2]&=\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\mathbb{P}(x_1,x_2\in V_o){\rm d}x_1{\rm d}x_2 \\
&=\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\exp(-\lambda\upsilon_d(\mathcal{B}_{\|x_1\|}(x_1)\cup \mathcal{B}_{\|x_2\|}(x_2))){\rm d}x_2{\rm d}x_1. \numberthis
\end{align*}
Further, following the steps from the proof of \cite[Theorem 3.1]{alishahi2008}, we can obtain \eqref{eq:2ndMoment_Volume_PVCell}.
\end{proof}
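For $d=2$ one has $C_{2,2}=1$, and $U(v_1,v_2,u)$ is the area of a union of two disks, which can be computed with the standard circle-lens formula instead of the $(\psi_1,\psi_2)$ parametrisation. The quadrature sketch below (ours; the truncation $V=4$ and the node count are ad hoc choices) evaluates \eqref{eq:2ndMoment_Volume_PVCell} for $\lambda=1$:

```python
import numpy as np

def disk_union_area(v1, v2, u):
    """Area of the union of disks of radii v1, v2 whose centres lie at
    distance s = sqrt(v1^2 + v2^2 - 2 v1 v2 cos u) (angle u at the origin)."""
    s = np.sqrt(np.maximum(v1**2 + v2**2 - 2.0*v1*v2*np.cos(u), 1e-300))
    a1 = np.clip((s**2 + v1**2 - v2**2) / (2.0*s*v1), -1.0, 1.0)
    a2 = np.clip((s**2 + v2**2 - v1**2) / (2.0*s*v2), -1.0, 1.0)
    tri = np.maximum((-s+v1+v2)*(s+v1-v2)*(s-v1+v2)*(s+v1+v2), 0.0)
    lens = v1**2*np.arccos(a1) + v2**2*np.arccos(a2) - 0.5*np.sqrt(tri)
    return np.pi*(v1**2 + v2**2) - lens

# Gauss-Legendre quadrature; v is truncated at V since exp(-U) decays fast.
n, V = 80, 4.0
xg, wg = np.polynomial.legendre.leggauss(n)
v, wv = 0.5*V*(xg + 1.0), 0.5*V*wg
u, wu = 0.5*np.pi*(xg + 1.0), 0.5*np.pi*wg
V1, V2, U = np.meshgrid(v, v, u, indexing="ij")
W = wv[:, None, None] * wv[None, :, None] * wu[None, None, :]
second_moment = 4.0*np.pi*np.sum(np.exp(-disk_union_area(V1, V2, U))*V1*V2*W)
# second_moment ≈ 1.280
```

Since $\mathbb{E}[\upsilon_2(V_o)]=1$ for $\lambda=1$, this directly gives $\mathtt{Var}[\upsilon_2(V_o)]\approx 0.280$ and hence a c.f.~$\rho_2\approx 1.28$, consistent with the curve-fitted values $5/4$ and $13/10$ mentioned earlier.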
The $n$-th moment of the volume of the intersection of a ball of arbitrary radius with the typical cell is obtained in \cite[Lemma 4.2]{alishahi2008}. Using this result, we present the first and second moments of $\upsilon_d(\mathcal{B}_r(o)\cap V_o)$ in the following lemma.
\begin{lemma}
\label{lemma:Moments_Volume_Int_Ball_PVCell}
The first and second moments of the volume of the intersection of the ball $\mathcal{B}_r(o)$ with the typical cell $V_o$ are
\begin{align}
\mathbb{E}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)]=\frac{1}{\lambda}\left(1-\exp(-\lambda\kappa_d r^d)\right)
\label{eq:1stMoment_Volume_Int_Ball_PVCell}
\end{align}
and
\begin{align}
\mathbb{E}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)^2]=4\pi C_{d,2}\int_{0}^{\pi}\int_{0}^r\int_{0}^r\exp(-\lambda U(v_1,v_2,u))(v_1 v_2)^{d-1}(\sin u)^{d-2}{\rm d} v_2 {\rm d} v_1 {\rm d}u,
\label{eq:2ndMoment_Volume_Int_Ball_PVCell}
\end{align}
where $U(v_1,v_2,u)$ is given by \eqref{eq:Volume_Union_TwoBalls}.
\end{lemma}
\begin{proof}
The first moment in \eqref{eq:1stMoment_Volume_Int_Ball_PVCell} follows from Lemma \ref{lemma:Mean_IntBall_PVCell}. Similar to the second moment of the volume of the typical cell derived in Lemma \ref{lemma:Moments_Volume_PVCell}, the second moment of the volume of $\mathcal{B}_r(o)\cap V_o$ can be determined as
\begin{align*}
\mathbb{E}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)^2]&=\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\mathbb{P}(x_1,x_2\in \mathcal{B}_r(o)\cap V_o){\rm d}x_1{\rm d}x_2\nonumber\\
&=\int_{\mathbb{R}^d\cap \mathcal{B}_r(o)}\int_{\mathbb{R}^d\cap \mathcal{B}_r(o)}\exp\left(-\lambda\upsilon_d(\mathcal{B}_{\|x_1\|}(x_1)\cup \mathcal{B}_{\|x_2\|}(x_2))\right){\rm d}x_1{\rm d}x_2.
\end{align*}
Following similar steps as in Lemma \ref{lemma:Moments_Volume_PVCell} completes the proof.
\end{proof}
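Alternatively, \eqref{eq:1stMoment_Volume_Int_Ball_PVCell} follows from integrating the void probability $\mathbb{P}(x\in V_o)=\exp(-\lambda\kappa_d\|x\|^d)$ over $\mathcal{B}_r(o)$ in polar form. A quick numerical cross-check (ours; the grid size is arbitrary):

```python
import math
import numpy as np

def mean_intersection_volume(r, lam, d, m=20001):
    """E[vol(B_r(o) ∩ V_o)] by integrating the void probability
    exp(-lam * kappa_d * v^d) over B_r(o) in polar form (trapezoid rule)."""
    kappa_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    v = np.linspace(0.0, r, m)
    f = d * kappa_d * v ** (d - 1) * np.exp(-lam * kappa_d * v ** d)
    return float(np.sum((f[1:] + f[:-1]) / 2) * (v[1] - v[0]))

# Agreement with the closed form (1/lam)(1 - exp(-lam * kappa_d * r^d)):
for d in (1, 2, 3):
    kappa_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    closed_form = 1.0 - math.exp(-kappa_d)        # lam = 1, r = 1
    assert abs(mean_intersection_volume(1.0, 1.0, d) - closed_form) < 1e-6
```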
In \cite[Lemma~3.1]{Olsbo2007}, the correlation between the volume of the \textit{typical Stienen sphere} and the volume of the typical cell is derived. Using the approach of \cite{Olsbo2007}, we provide the covariance of the volumes of $\mathcal{B}_r(o)\cap V_o$ and $V_o$ in the following lemma.
\begin{lemma}
\label{lemma:Cov_Int_Ball_PVCell_1}
The covariance of the volume of the intersection of $\mathcal{B}_r(o)$ with the typical cell $V_o$ and the volume of the typical cell $V_o$ is
\begin{align}
\mathtt{Cov}[\upsilon_d(\mathcal{B}_r(o)\cap V_o),\upsilon_d(V_o)]=& \frac{1}{2}\mathtt{Var}[\upsilon_d(V_o)] - \frac{1}{2\lambda^2}\left(1-2\exp(-\lambda\kappa_d r^d)\right) \label{eq:Cov_Int_Ball_PVCell_Lem}\\
& + 2\pi C_{d,2}\int_{0}^{\pi}\int_{0}^r\int_{0}^r\exp(-\lambda U(v_1,v_2,u))(v_1v_2)^{d-1}(\sin u)^{d-2}{\rm d}v_2{\rm d}v_1{\rm d}u \nonumber\\
& - 2\pi C_{d,2}\int_{0}^{\pi}\int_{r}^\infty\int_{r}^\infty\exp(-\lambda U(v_1,v_2,u))(v_1v_2)^{d-1}(\sin u)^{d-2}{\rm d}v_2{\rm d}v_1{\rm d}u,\nonumber
\end{align}
where $U(v_1,v_2,u)$ is given by \eqref{eq:Volume_Union_TwoBalls}.
\end{lemma}
\begin{proof}
Let $\hat{V}_o(r)=V_o\setminus (V_o\cap \mathcal{B}_r(o))$. The variance of the volume of $\hat{V}_o(r)$ is
\begin{align*}
\mathtt{Var}[\upsilon_d(\hat{V}_o(r))]&=\mathtt{Var}[\upsilon_d({V_o})-\upsilon_d(\mathcal{B}_r(o)\cap V_o)]\nonumber\\
&=\mathtt{Var}[\upsilon_d(V_o)]+\mathtt{Var}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)]-2\mathtt{Cov}[\upsilon_d(\mathcal{B}_r(o)\cap V_o),\upsilon_d(V_o)].
\end{align*}
This implies
\begin{align}
\mathtt{Cov}[\upsilon_d(\mathcal{B}_r(o)\cap V_o),\upsilon_d(V_o)]=\frac{1}{2}\mathtt{Var}[\upsilon_d(V_o)]+\frac{1}{2}\mathtt{Var}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)]-\frac{1}{2}\mathtt{Var}[\upsilon_d(\hat{V}_o(r))].
\label{eq:Cov_Int_Ball_PVCell}
\end{align}
Using Lemma \ref{lemma:Moments_Volume_Int_Ball_PVCell}, the variance of $\upsilon_d(\mathcal{B}_r(o)\cap V_o)$ can be expressed as
\begin{align}
\mathtt{Var}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)]&=4\pi C_{d,2}\int_{0}^{\pi}\int_{0}^r\int_{0}^r\exp(-\lambda U(v_1,v_2,u))(v_1v_2)^{d-1}(\sin u)^{d-2}{\rm d}v_2{\rm d}v_1{\rm d}u\nonumber\\
&~~~~-\frac{1}{\lambda^2}\left(1-\exp(-\lambda\kappa_d r^d)\right)^2.
\label{eq:Var_Volume_Int_Ball_PVCell}
\end{align}
Now, we obtain the mean and variance of $\hat{V}_o(r)$. Using \eqref{eq:1stMoment_Volume_Int_Ball_PVCell}, the first moment becomes
\begin{equation}
\mathbb{E}[\upsilon_d(\hat{V}_o(r))]=\mathbb{E}[\upsilon_d(V_o)-\upsilon_d(\mathcal{B}_r(o)\cap V_o)]=\frac{1}{\lambda}\exp(-\lambda\kappa_d r^d).
\label{eq:1stMoment_Volume_Rem_Int_PVCell}
\end{equation}
Using \eqref{eq:Moment_Random_Set}, we can obtain the second moment as
\begin{align}
\mathbb{E}[\upsilon_d(\hat{V}_o(r))^2]&=\int_{\mathbb{R}^d}\int_{\mathbb{R}^d}\mathbb{P}(r< \|x_1\|,r< \|x_2\|,x_1,x_2\in V_o){\rm d}x_2{\rm d}x_1\nonumber\\
&=\int_{\mathbb{R}^d\setminus \mathcal{B}_r(o)}\int_{\mathbb{R}^d\setminus \mathcal{B}_r(o)}\mathbb{P}(x_1,x_2\in V_o){\rm d}x_2{\rm d}x_1\nonumber\\
&\stackrel{(a)}{=}4\pi C_{d,2}\int_{0}^{\pi}\int_{r}^\infty\int_{r}^\infty\exp(-\lambda U(v_1,v_2,u))(v_1v_2)^{d-1}(\sin u)^{d-2}{\rm d}v_2{\rm d}v_1{\rm d}u,
\label{eq:2ndMoment_Volume_Rem_Int_PVCell}
\end{align}
where (a) follows from the same steps as in Lemma \ref{lemma:Moments_Volume_PVCell} for the second moment of the volume of typical cell $V_o$.
Lastly, substituting \eqref{eq:Var_Volume_Int_Ball_PVCell}, \eqref{eq:1stMoment_Volume_Rem_Int_PVCell} and \eqref{eq:2ndMoment_Volume_Rem_Int_PVCell} in \eqref{eq:Cov_Int_Ball_PVCell} completes the proof.
\end{proof}
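The variance decomposition at the start of the proof rests on the identity $\mathtt{Cov}[A,B]=\tfrac{1}{2}\left(\mathtt{Var}[A]+\mathtt{Var}[B]-\mathtt{Var}[B-A]\right)$, valid for any square-integrable $A$ and $B$. A toy empirical check (ours, with stand-in random variables):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random(1000)                 # stand-in for vol(B_r(o) ∩ V_o)
B = A + rng.random(1000)             # stand-in for vol(V_o), correlated with A

cov = np.cov(A, B, ddof=0)[0, 1]
identity = 0.5 * (np.var(A) + np.var(B) - np.var(B - A))
assert np.isclose(cov, identity)     # exact up to floating point
```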
Since we use $1-\exp(-\rho_d\lambda\kappa_d r^d)$ to approximate the CDF of $R_o$, the c.f.~$\rho_d$ is determined by matching the $d$-th derivative of the second-order approximation of the CDF of $R_o$ at $r=0$. As the second-order Taylor series expansion of the CDF includes the covariance term given in Lemma \ref{lemma:Cov_Int_Ball_PVCell_1}, we first provide its $d$-th derivative at $r=0$ in the following lemma.
\begin{lemma}
\label{lemma:Cov_derivative}
The $d$-th derivative of the covariance of the volume of the intersection of $\mathcal{B}_r(o)$ with the typical cell $V_o$ and the volume of the typical cell $V_o$ w.r.t. $r$ is zero at $r=0$.
\end{lemma}
\begin{proof}
Using Lemma \ref{lemma:Cov_Int_Ball_PVCell_1}, we can write
\begin{align}
\frac{{\rm d}^d}{{\rm d}r^d}\mathtt{Cov}[\upsilon_d(\mathcal{B}_r(o)\cap V_o),\upsilon_d(V_o)]\bigg|_{r=0}=\frac{{\rm d}^d}{{\rm d}r^d}\frac{1}{\lambda^2}\exp(-\lambda\kappa_d r^d)\bigg|_{r=0} +\frac{{\rm d}^d}{{\rm d}r^d} \left( f_1(r) - f_2(r)\right)\bigg|_{r=0},
\label{eq:derivative_of_Cov_1}
\end{align}
where
\begin{align*}
f_1(r)= \int_{0}^r\int_{0}^rg(v_1,v_2){\rm d}v_2{\rm d}v_1,\text{~~and~~} f_2(r)=\int_{r}^\infty\int_{r}^\infty g(v_1,v_2){\rm d}v_2{\rm d}v_1,
\end{align*}
such that
\begin{align*}
g(v_1,v_2)=2\pi C_{d,2}\int_{0}^{\pi}\exp(-\lambda U(v_1,v_2,u))(v_1v_2)^{d-1}(\sin u)^{d-2}{\rm d}u.
\end{align*}
Further,
\begin{equation}
\frac{{\rm d}^d}{{\rm d}r^d}\frac{1}{\lambda^2}\exp(-\lambda\kappa_d r^d)\bigg|_{r=0}=-\frac{1}{\lambda}d!\kappa_d=-\frac{1}{\lambda}2\pi^{\frac{d}{2}}\frac{\Gamma(d)}{\Gamma(\frac{d}{2})}.
\label{eq:Derivetive_Cov_1stTerm}
\end{equation}
Now, differentiating $f_1$ w.r.t. $r$, we obtain
\begin{align*}
\frac{{\rm d}}{{\rm d}r}f_1(r)&=\frac{{\rm d}}{{\rm d}r} \int_{0}^r{\rm d}v_1\int_{0}^{r}g(v_1,v_2){\rm d}v_2\\
&\stackrel{(a)}{=}\int_{0}^{r}g(r,v_2){\rm d}v_2 + \int_{0}^r{\rm d}v_1\frac{{\rm d}}{{\rm d}r}\int_{0}^{r}g(v_1,v_2){\rm d}v_2\\
&\stackrel{(b)}{=} \int_{0}^{r}g(r,v_2){\rm d}v_2 + \int_{0}^rg(v_1,r){\rm d}v_1,
\end{align*}
where (a) and (b) are obtained using the successive application of Leibniz's integral rule. Again differentiating, we obtain
\begin{align*}
\frac{{\rm d^2}}{{\rm d}r^2}f_1(r)&= \frac{{\rm d}}{{\rm d}r} \int_{0}^{r}g(r,v_2){\rm d}v_2 + \frac{{\rm d}}{{\rm d}r}\int_{0}^rg(v_1,r){\rm d}v_1\\
&\stackrel{(a)}{=} 2g(r,r) + \int_0^r \frac{{\rm d}}{{\rm d}r} g(r,v_2){\rm d}v_2 + \int_0^r \frac{{\rm d}}{{\rm d}r}g(v_1,r){\rm d}v_1,
\end{align*}
where (a) is obtained using Leibniz's integral rule.
Similarly, we get
\begin{align*}
\frac{{\rm d}^3}{{\rm d}r^3}f_1(r)&= 4\frac{{\rm d}}{{\rm d}r}g(r,r) + \int_0^r \frac{{\rm d}^2}{{\rm d}r^2} g(r,v_2){\rm d}v_2 + \int_0^r \frac{{\rm d}^2}{{\rm d}r^2}g(v_1,r){\rm d}v_1, \\
\frac{{\rm d}^4}{{\rm d}r^4}f_1(r)&= 6\frac{{\rm d}^2}{{\rm d}r^2}g(r,r) + \int_0^r \frac{{\rm d}^3}{{\rm d}r^3} g(r,v_2){\rm d}v_2 + \int_0^r \frac{{\rm d}^3}{{\rm d}r^3}g(v_1,r){\rm d}v_1.
\end{align*}
Thus, in general, we have
\begin{align*}
\frac{{\rm d}^d}{{\rm d}r^d}f_1(r)=2(d-1)\frac{{\rm d}^{(d-2)}}{{\rm d}r^{(d-2)}}g(r,r)+ \int_0^r \frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}} g(r,v_2){\rm d}v_2 + \int_0^r \frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}}g(v_1,r){\rm d}v_1.
\end{align*}
Following similar steps, we obtain the $d$-fold derivative of $f_2$ w.r.t. $r$ as
\begin{align*}
\frac{{\rm d}^d}{{\rm d}r^d}f_2(r)=2(d-1)\frac{{\rm d}^{(d-2)}}{{\rm d}r^{(d-2)}}g(r,r)- \int_r^\infty \frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}} g(r,v_2){\rm d}v_2 -\int_r^\infty \frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}}g(v_1,r){\rm d}v_1.
\end{align*}
Subtracting $\frac{{\rm d}^d}{{\rm d}r^d}f_2(r)$ from $\frac{{\rm d}^d}{{\rm d}r^d}f_1(r)$, we get
\begin{align}
\frac{{\rm d}^d}{{\rm d}r^d} \left( f_1(r) - f_2(r)\right)=\int_0^\infty \frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}} g(r,v_2){\rm d}v_2 + \int_0^\infty \frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}}g(v_1,r){\rm d}v_1.
\label{eq:derivative_f1f2}
\end{align}
Now, we obtain the $(d-1)$-th derivative of $g(r,v_2)$ at $r=0$ as
\begin{align*}
\frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}} g(r,v_2)\bigg|_{r=0}&=2\pi C_{d,2}\frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}}\int_{0}^{\pi}\exp(-\lambda U(r,v_2,u))(rv_2)^{d-1}(\sin u)^{d-2}{\rm d}u\bigg|_{r=0}\\
&=2\pi C_{d,2}\int_{0}^{\pi}\frac{{\rm d}^{(d-1)}}{{\rm d}r^{(d-1)}}\exp(-\lambda U(r,v_2,u))(rv_2)^{d-1}\bigg|_{r=0}(\sin u)^{d-2}{\rm d}u\\
&=2\pi(d-1)! C_{d,2}\int_{0}^{\pi}\exp(-\lambda U(0,v_2,u))v_2^{d-1}(\sin u)^{d-2}{\rm d}u \\
&\stackrel{(a)}{=}2\pi(d-1)!C_{d,2}v_2^{d-1}\exp(-\lambda\kappa_d v_2^d)\int_{0}^{\pi}(\sin u)^{d-2}{\rm d}u\\
&\stackrel{(b)}{=}d\pi^d\frac{\Gamma(d)}{\Gamma(\frac{d}{2})}\frac{1}{\Gamma\left(\frac{d}{2}+1\right)}v_2^{d-1}\exp(-\lambda\kappa_d v_2^d),
\end{align*}
where (a) follows due to $U(0,v_2,u)=\kappa_dv_2^d$ and (b) follows using $\int_0^\pi(\sin u)^{d-2}{\rm d}u=\sqrt{\pi}\frac{\Gamma(\frac{d-1}{2})}{\Gamma(\frac{d}{2})}$ \cite[Eq. 3.62.5]{gradshteyn2014table}.
Now, using the above expression along with $g(r,x)=g(x,r)$ and
\begin{align*}
\int_0^\infty v^{d-1}\exp(-\lambda\kappa_d v^d){\rm d}v=\frac{1}{d\lambda\kappa_d}\int_0^\infty \exp(-t){\rm d}t=\frac{1}{d\lambda\kappa_d}=\frac{\Gamma(\frac{d}{2}+1)}{d\lambda\pi^{\frac{d}{2}}},
\end{align*}
we can write \eqref{eq:derivative_f1f2} at $r=0$ as
\begin{align}
\frac{{\rm d}^d}{{\rm d}r^d} \left( f_1(r) - f_2(r)\right)\bigg|_{r=0}&=\frac{1}{\lambda}2\pi^{\frac{d}{2}}\frac{\Gamma(d)}{\Gamma(\frac{d}{2})}.
\label{eq:Derivetive_Cov_2ndTerm}
\end{align}
Finally, the substitution of \eqref{eq:Derivetive_Cov_1stTerm} and \eqref{eq:Derivetive_Cov_2ndTerm} in \eqref{eq:derivative_of_Cov_1} completes the proof.
\end{proof}
\subsection{Approximate CDF of $R_o$}
\label{subsec:ApproximateCDF}
Now, in the following theorem, we determine the c.f.~for the approximate CDF of $R_o$, which is the main result of this section.
\begin{thm}
\label{thm:App_CDF_R_Typical}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the approximate CDF of the distance $R_o$ from the nucleus to a uniformly random point in the typical cell $V_o$ is
\begin{align}
F_{R_o}(r)\approx 1-\exp(-\rho_d\lambda\kappa_d r^d),
\label{eq:CDF_R_apprx}
\end{align}
where $\rho_d$ is the c.f.~obtained by matching the $d$-th derivative of \eqref{eq:CDF_R_apprx} with that of the second-order Taylor series expansion of the exact CDF of $R_o$ at $r=0$ and is given by
\begin{align}
\rho_d= 1+\frac{\mathtt{Var}[\upsilon_d(V_o)]}{\mathbb{E}[\upsilon_d(V_o)]^2}.
\label{eq:CorrectionFactor}
\end{align}
\end{thm}
\begin{proof}
The second-order Taylor series expansion of the bivariate function $f(Z_1,Z_2)=\frac{Z_1}{Z_2}$ around the mean $(\bar{z}_1,\bar{z}_2)$ can be written as
\begin{align*}
f(Z_1,Z_2)\approx\frac{\bar{z}_1}{\bar{z}_2} + \frac{1}{\bar{z}_2}(Z_1-\bar{z}_1) -\frac{\bar{z}_1}{\bar{z}_2^2}(Z_2-\bar{z}_2)-\frac{1}{\bar{z}_2^2}(Z_1-\bar{z}_1)(Z_2-\bar{z}_2) + \frac{\bar{z}_1}{\bar{z}_2^3}(Z_2-\bar{z}_2)^2.
\end{align*}
Taking the expectation of $f(Z_1,Z_2)$ w.r.t. $Z_1$ and $Z_2$, we get
\begin{equation}
\mathbb{E}[f(Z_1,Z_2)]\approx\frac{\bar{z}_1}{\bar{z}_2} - \frac{1}{\bar{z}_2^2}\mathtt{Cov}[Z_1,Z_2] + \frac{\bar{z}_1}{\bar{z}_2^3}\mathtt{Var}[Z_2].
\label{eq:TaylorExpansion}
\end{equation}
The CDF of $R_o$ is
$$F_{R_o}(r)=\mathbb{E}\left[\frac{\upsilon_d(\mathcal{B}_r(o)\cap V_o)}{\upsilon_d(V_o)}\right].$$
Therefore, using \eqref{eq:TaylorExpansion}, the second-order Taylor series expansion of $F_{R_o}(r)$ around the mean ($\mathbb{E}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)],\mathbb{E}[\upsilon_d(V_o)]$) can be written as
\begin{align*}
F_{R_o}(r)\approx\frac{\mathbb{E}[\upsilon_d(\mathcal{B}_r(o)\cap V_o)]}{\mathbb{E}[\upsilon_d(V_o)]}\left[1+\frac{\mathtt{Var}[\upsilon_d(V_o)]}{\mathbb{E}[\upsilon_d(V_o)]^2}\right]-\frac{\mathtt{Cov}[\upsilon_d(\mathcal{B}_r(o)\cap V_o),\upsilon_d(V_o)]}{\mathbb{E}[\upsilon_d(V_o)]^2}.
\end{align*}
Using Lemma \ref{lemma:Moments_Volume_PVCell} and Lemma \ref{lemma:Moments_Volume_Int_Ball_PVCell}, we obtain
\begin{align}
F_{R_o}(r) \approx & \left(1-\exp(-\lambda\kappa_d r^d)\right)\left[1+\frac{\mathtt{Var}[\upsilon_d(V_o)]}{\mathbb{E}[\upsilon_d(V_o)]^2}\right]-\frac{\mathtt{Cov}[\upsilon_d(\mathcal{B}_r(o)\cap V_o),\upsilon_d(V_o)]}{\mathbb{E}[\upsilon_d(V_o)]^2}.\label{eq:Cov_Int_Ball_PVCell_1}
\end{align}
Now, since $1-\exp(-\rho_d\lambda\kappa_d r^d)$ is used as the approximation, we determine the c.f.~$\rho_d$ by matching the $d$-th derivatives of $1-\exp(-\rho_d\lambda\kappa_d r^d)$ and $F_{R_o}(r)$ at $r=0$ as
\begin{align*}
\rho_d=\frac{1}{d!\lambda\kappa_d}\frac{{\rm d}^d}{{\rm d}r^d}F_{R_o}(r)\bigg|_{r=0}.
\end{align*}
Therefore, using \eqref{eq:Cov_Int_Ball_PVCell_1} and Lemma \ref{lemma:Cov_derivative} we have
\begin{align*}
\rho_d= 1+\frac{\mathtt{Var}[\upsilon_d(V_o)]}{\mathbb{E}[\upsilon_d(V_o)]^2}.
\end{align*}
This completes the proof.
\end{proof}
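The delta-method expansion \eqref{eq:TaylorExpansion} used in the proof can be sanity-checked on a toy discrete distribution concentrated near its mean, where the exact expectation of the ratio is computable directly (the distribution below is ours and purely illustrative):

```python
import numpy as np

# A toy joint distribution for (Z1, Z2), concentrated near its mean so that
# the second-order expansion of E[Z1/Z2] is accurate.
z1 = np.array([0.90, 1.00, 1.10, 1.05])
z2 = np.array([1.90, 2.00, 2.10, 2.05])
p = np.full(4, 0.25)

m1, m2 = p @ z1, p @ z2
cov = p @ ((z1 - m1) * (z2 - m2))
var2 = p @ (z2 - m2) ** 2

exact = p @ (z1 / z2)
approx = m1 / m2 - cov / m2**2 + m1 * var2 / m2**3
assert abs(exact - approx) < 1e-3    # agreement up to third-order terms
```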
Before giving the numerical validation of the approximate CDF of $R_o$, we present the approximate $n$-th moment of the distance $R_o$ and some useful observations about the c.f.~in the following corollaries.
\begin{cor}
\label{cor:Mean_R}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the $n$-th moment of the distance $R_o$ from the nucleus to a uniformly random point in the typical cell $V_o$ is approximately
\begin{align}
\mathbb{E}[R_o^n]\approx\frac{\Gamma\left(1+\frac{n}{d}\right)}{\left(\rho_d\lambda\kappa_d\right)^{\frac{n}{d}}}.
\label{eq:Mean_Ro_Typical}
\end{align}
\end{cor}
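As a numerical illustration of Corollary \ref{cor:Mean_R} (the sketch is ours; the input $\rho_2\approx 1.28$ is an assumed value, lying between the curve-fitted factors $5/4$ and $13/10$ discussed earlier):

```python
import math

def approx_moment(n, d, lam, rho_d):
    """Approximate E[R_o^n] = Gamma(1 + n/d) / (rho_d * lam * kappa_d)^(n/d)."""
    kappa_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return math.gamma(1.0 + n / d) / (rho_d * lam * kappa_d) ** (n / d)

# d = 2, unit intensity, assumed correction factor rho_2 ≈ 1.28:
mean_R = approx_moment(1, 2, 1.0, 1.28)
print(round(mean_R, 3))   # 0.442
```

The resulting mean matches the value $0.442$ reported in the numerical comparisons below.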
\begin{cor}
\label{cor:CorrectionFactor_ratio}
For the homogeneous PPP with intensity $\lambda$ on $\mathbb{R}^d$, the CDF of the distance $R_o$ from the nucleus to a uniformly random point in the typical cell $V_o$ can be approximated as $1-\exp(-\rho_d\lambda\kappa_d r^d)$, where the c.f.~$\rho_d$ is equal to the ratio of the mean volumes of the 0-cell and the typical cell, i.e.,
\begin{align}
\rho_d= \frac{\mathbb{E}[\upsilon_d(\tilde{V}_o)]}{\mathbb{E}[\upsilon_d(V_o)]}.
\label{eq:CorrectionFactor_RatioVolumes}
\end{align}
\end{cor}
\begin{proof}
From \cite[Equation 2.5]{MECKE1999}, we have
\begin{align*}
\mathbb{E}[\upsilon_d(\tilde{V}_o)]=\mathbb{E}[\upsilon_d(V_o)] + \frac{\mathtt{Var}[\upsilon_d(V_o)]}{\mathbb{E}[\upsilon_d(V_o)]}.
\end{align*}
Substituting the above expression in \eqref{eq:CorrectionFactor} gives \eqref{eq:CorrectionFactor_RatioVolumes}.
\end{proof}
\begin{cor}
\label{cor:CDF_R_d_Infinity}
The c.f.~$\rho_d$ approaches one as $d$ approaches infinity, i.e., $\lim\limits_{d\to\infty}\rho_d=1$.
\end{cor}
\begin{proof}
Using \cite[Theorem 3.1]{alishahi2008}, we can write
\begin{align*}
\lim_{d\to\infty}\mathtt{Var}[\upsilon_d(V_o)]=0.
\end{align*}
Since the mean volume of the typical cell is $\lambda^{-1}$ for any $d$, the proof follows directly from \eqref{eq:CorrectionFactor} and the above result.
\end{proof}
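Combining Corollary \ref{cor:CorrectionFactor_ratio} with \cite[Equation 2.5]{MECKE1999} gives $\rho_d = 1 + \mathtt{Var}[\upsilon_d(V_o)]/\mathbb{E}[\upsilon_d(V_o)]^2$. For $d=1$ the typical cell is the interval bounded by the midpoints to the nearest Poisson point on each side, so its length is the average of two independent $\mathrm{Exp}(\lambda)$ gaps, which gives $\rho_1 = 3/2$; a quick Monte Carlo sketch (ours, not from the paper) confirms this:

```python
import random

random.seed(7)
lam, n = 1.0, 200_000

# d = 1 typical cell: the interval between the midpoints to the nearest Poisson
# point on each side, so its length is the average of two iid Exp(lam) gaps.
lengths = [(random.expovariate(lam) + random.expovariate(lam)) / 2 for _ in range(n)]

mean = sum(lengths) / n                          # ~1/lam = 1
var = sum((x - mean) ** 2 for x in lengths) / n  # ~1/(2*lam**2) = 0.5
rho_1 = 1 + var / mean ** 2                      # ratio E[v(0-cell)] / E[v(typical cell)]
print(round(rho_1, 2))                           # ~1.5
```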
\begin{figure}
\centering
\includegraphics[width=.55\textwidth]{Approx_CDF_Ro_CrfTyp_1to10D_v1.eps}\vspace{-.3cm}
\caption{CDF of $R_o$ and $\tilde{R}_o$ for unit-intensity PPP on $\mathbb{R}^d$ where $d\in\{1,\dots,10\}$. The CDF of $\tilde{R}_o$ and approximate CDF of $R_o$ are given in Theorem \ref{thm:CDF_R_Crofton} and Theorem \ref{thm:App_CDF_R_Typical}, respectively.}
\label{fig:Approx_CDF_R}
\end{figure}
\begin{remark}
From \eqref{eq:Mean_Ro_crofton} and \eqref{eq:Mean_Ro_Typical}, it is clear that the ratio of the means of $\tilde{R}_o$ and $R_o$ is approximately $\sqrt[d]{\rho_d}$. Hence, using Corollary \ref{cor:CorrectionFactor_ratio}, the distance from the nucleus to a uniformly random point in the 0-cell scales with the corresponding distance in the typical cell by a factor approximately equal to the $d$-th root of the ratio of the mean volumes of the 0-cell $\tilde{V}_o$ and the typical cell $V_o$.
\end{remark}
\subsection{Numerical Comparisons}
\label{sec:Numerical_Results_Appx}
For the numerical evaluation of the approximated CDF of $R_o$, we obtain the c.f.~$\rho_d$ using \eqref{eq:CorrectionFactor} for which the mean and variance of the volume of the typical cell are evaluated using Lemma \ref{lemma:Moments_Volume_PVCell}. Fig. \ref{fig:Approx_CDF_R} validates the accuracy of the approximated CDF of $R_o$ by comparing it with the Monte Carlo simulations for the cases of $d\in\{1,\dots,10\}$.
Fig. \ref{fig:Approx_CDF_R} clearly indicates that the CDF of $R_o$ gradually approaches that of $\tilde{R}_o$ as $d$ increases.
Further, Table \ref{table:Accuracy_CDF_Ro} verifies the accuracy of the approximated mean and variance of $R_o$ (obtained using Corollary \ref{cor:Mean_R}) for $d\in\{1,\dots,10\}$.
For $d=2$, the obtained mean value of $R_o$ is $0.442$, which is also close to the mean values $0.438$ and $0.447$ obtained using the curve-fitted c.f.~values $13/10$ and $5/4$ of \cite{Haenggi2017} and \cite{Martin2017_Meta}, respectively.
\begin{table}
\centering
\caption{Accuracy of Approximated Mean and Variance of $R_o$.}\vspace{.1cm}
\label{table:Accuracy_CDF_Ro}
\scriptsize{
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{2}{|l|}{$d$} & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
\multicolumn{2}{|l|}{$\rho_d$} & 1.500 & 1.285 & 1.171 & 1.128 & 1.079 & 1.062 & 1.043 & 1.032 & 1.029 & 1.018 \\
\hline
\multirow{2}{*}{$\mathbb{E}[R_o]$} & Exact & 0.305 & 0.445 & 0.529 & 0.595 & 0.651 & 0.701 & 0.749 & 0.798 & 0.831 & 0.873 \\
\cline{2-12}
& Approx. & 0.333 & 0.442 & 0.524 & 0.591 & 0.648 & 0.698 & 0.745 & 0.789 & 0.829 & 0.862 \\
\hline
\multirow{2}{*}{$\mathtt{Var}[R_o]$} & Exact & 0.090 & 0.058 & 0.038 & 0.028 & 0.022 & 0.019 & 0.016 & 0.014 & 0.013 & 0.012 \\
\cline{2-12}
& Approx. & 0.111 & 0.053 & 0.036 & 0.028 & 0.022 & 0.018 & 0.015 & 0.013 & 0.012 & 0.011 \\
\hline
\end{tabular}}
\end{table}
\section{Limiting Shape of Large PV Cells}
\label{sec:LimitingShape}
Thus far, we have presented an exact characterization of the CDFs of $\tilde{R}_o$ and $R_o$ in Sections \ref{sec:CroftonCell} and \ref{sec:TypicalCell}, and a closed-form approximation of the exact multi-integral expression for the CDF of $R_o$ in Section \ref{sec:Approximate_CDF}.
It is worth noting that the conditioning on the $k$ points of $\Phi$ in $\mathcal{B}_{2\ell}(o)$, defined as the domain configuration $\mathcal{C}_\ell^k$ (see \eqref{eq:Domain_Confg_Def}), allowed us to construct the set of surfaces of the spherical caps $\{L_i\}_{i=1}^{k}$ on the ball $\mathcal{B}_\ell(o)$ as in \eqref{eq:Arc_DomainCong}. This helps in determining the conditional volume of the typical cell $V_o$ and thus the conditional CDF of $R_o$.
It is easy to observe that some points of the domain configuration $\mathcal{C}_\ell^k$ are the closest points on some boundaries of the typical cell $V_o$, and thus the lines joining them to the origin are perpendicular to the corresponding boundaries. Further, these points are also the midpoints of the chords formed by the corresponding spherical caps. This implies that these surfaces of spherical caps lie completely outside the typical cell $V_o$ (see Fig.~\ref{fig:Illustration} for $d=2$).
Therefore, it is quite straightforward to see that the typical cell is completely contained within $\mathcal{B}_\ell(o)$ only if the set $\{L_i\}_{i=1}^{k}$ completely covers the boundary of $\mathcal{B}_\ell(o)$. Using this fact, in this section, we provide an alternate proof to the well-known spherical property of $d$-dimensional PV cells containing a large inball.
Let the point $\tilde{\mathbf{x}}_0\triangleq(R,\boldsymbol{\theta}_0)$ denote the nearest point on the boundary of the typical cell $V_o$ to its nucleus. Therefore, $R$ is the radius of the largest ball $\mathcal{B}_{R}(o)$ contained within the typical cell $V_o$, henceforth called the inradius of the cell. In this construction, it is evident that the nearest point $\mathbf{x}_0$ in $\Phi$ from the nucleus of $V_o$ (i.e., the origin) is at $(2R,\boldsymbol{\theta}_0)$ such that $\|\tilde{\mathbf{x}}_0\|=\frac{1}{2}\|\mathbf{x}_0\|=R$. Note that the results presented in the following are conditioned on the inradius $R$.
Let $\mathcal{A}(r,\epsilon)$ denote the annulus formed by two balls of radii $r$ and $r+\epsilon$ co-centered at the origin. Now, consider the domain configuration $\mathcal{C}_{R}^k=\{\mathbf{\tilde{x}}_i\}_{i=1}^k$ as the set of midpoints of the lines joining the nucleus of $V_o$ to the points in $\Phi\cap \mathcal{A}(2R,2\epsilon)$, given $\Phi(\mathcal{A}(2R,2\epsilon))=k$. Fig.~\ref{fig:PointProb_PVCell} illustrates a potential configuration of $\mathcal{C}_{R}^2$ for the case of $d=2$. By the Poisson property, the $k$ points of $\mathcal{C}_R^{k}$ are distributed uniformly at random independently of each other in the annulus $\mathcal{A}(R,\epsilon)$ such that the CDF of $\|\tilde{\mathbf{x}}_i\|=l_i$, for all $i$, conditioned on $R$ is
\begin{align}
F_{l_i}(l)=\frac{l^d-R^d}{(R+\epsilon)^d-R^d},~~R\leq l\leq R+\epsilon.
\label{eq:CDF_li}
\end{align}
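Since \eqref{eq:CDF_li} inverts in closed form, the norms $l_i$ can be sampled by inverse transform; a small sketch (ours, with arbitrarily chosen $R$, $\epsilon$ and $d$) checks the empirical CDF against the formula:

```python
import random

def sample_li(R, eps, d):
    """Inverse-transform sample of l_i from the CDF (l^d - R^d)/((R+eps)^d - R^d)."""
    u = random.random()
    return (R ** d + u * ((R + eps) ** d - R ** d)) ** (1 / d)

random.seed(1)
R, eps, d = 2.0, 0.5, 3
samples = [sample_li(R, eps, d) for _ in range(100_000)]

l = 2.25  # compare the empirical CDF at l with the formula
emp = sum(s <= l for s in samples) / len(samples)
exact = (l ** d - R ** d) / ((R + eps) ** d - R ** d)
print(round(exact, 3), round(emp, 3))  # both ~0.445
```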
\begin{figure}[t]
\centering
\includegraphics[width=.6\textwidth]{Illustration_LargePVCell_v1.eps}\vspace{-.7cm}
\caption{Typical cell with inradius $R$ for the case of $d=2$.}
\label{fig:PointProb_PVCell}
\end{figure}
We define the set of $k+1$ spherical caps $\{L_i\}_{i=0}^k$ corresponding to the points $\{\tilde{\mathbf{x}}_i\}_{i=0}^k=\{\tilde{\mathbf{x}}_0\}\cup\mathcal{C}_R^k$ on $\mathcal{B}_{R+\epsilon}(o)$, with heights equal to $\epsilon$ for $i=0$ and $R+\epsilon-l_i$ for $i=1,\dots,k$.
The surface area of the spherical cap $L_i$ is \cite{li2011concise}
\begin{align}
S_i=\begin{cases}
\frac{1}{2}\chi_d (R+\epsilon)^{d-1} I_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right),& \text{for}~i=0\\
\frac{1}{2}\chi_d (R+\epsilon)^{d-1} I_{1-\frac{l_i^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right),& \text{for}~i=1,\dots,k,
\end{cases}
\end{align}
where $\chi_d=\frac{2\pi^{\frac{d}{2}}}{\Gamma(\frac{d}{2})}$ is the surface area of the unit radius ball in $\mathbb{R}^d$ and $I_z(a,b)=\frac{B_z(a,b)}{B(a,b)}$ such that $B(a,b)$ and $B_z(a,b)$ are the beta function and the incomplete beta function, respectively. Note that $0\leq S_i\leq S_0$ $\forall i$.
Since the points in $\mathcal{C}_R^k$ are i.i.d.~in $\mathcal{A}(R,\epsilon)$, the spherical caps $\{L_i\}_{i=1}^k$ of i.i.d.~surface areas are placed uniformly at random independently of each other on $\mathcal{B}_{R+\epsilon}(o)$.
Now, we evaluate the probability that a uniformly chosen point $(R+\epsilon,\boldsymbol{\alpha})$ on the surface of $\mathcal{B}_{R+\epsilon}(o)$ belongs to the spherical cap $L_i$, for $i\in\{1,\dots,k\}$, as
\begin{align}
p&=\mathbb{P}((R+\epsilon,\boldsymbol{\alpha})~\text{belongs to the cap~} L_i~\text{of area}~ S_i)\nonumber\\
&=\frac{1}{\chi_d (R+\epsilon)^{d-1}}\mathbb{E}[S_i]\nonumber\\
&\stackrel{(a)}{=}\frac{d}{2((R+\epsilon)^d-R^d)}\int_{R}^{R+\epsilon} I_{1-\frac{l^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right) l^{d-1}{\rm d}l\nonumber\\
& \stackrel{(b)}=\frac{1}{2((R+\epsilon)^d-R^d)}\Bigg[ \frac{(R+\epsilon)^d}{B\left(\frac{d-1}{2},\frac{1}{2}\right)}B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{d+1}{2}\right)- R^d I_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right)\Bigg],
\label{eq:probability_belongs_to_cap_Li}
\end{align}
where (a) follows using the pdf of $l_i$ which is obtained using \eqref{eq:CDF_li} and (b) follows using the steps given in Appendix \ref{app:probability_belongs_to_cap_Li}. Also note that the probability that the uniformly chosen point $(R+\epsilon,\boldsymbol{\alpha})$ on the surface of $\mathcal{B}_{R+\epsilon}(o)$ belongs to the spherical cap $L_0$ is
\begin{align}
p_0=\frac{1}{2}I_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right).
\label{eq:probability_belongs_to_cap_L0}
\end{align}
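For $d=3$ the incomplete beta functions in \eqref{eq:probability_belongs_to_cap_Li} reduce to elementary expressions ($I_x(1,\tfrac12)=1-\sqrt{1-x}$, $B_x(1,2)=x-x^2/2$, $B(1,\tfrac12)=2$), which allows a quick numerical cross-check of step (b) against a direct quadrature of step (a) (our own sanity check, with arbitrarily chosen $R$ and $\epsilon$):

```python
import math

def p_closed(R, eps):
    """Step (b) of the probability p for d = 3."""
    x = 1 - R ** 2 / (R + eps) ** 2
    term1 = (R + eps) ** 3 / 2 * (x - x ** 2 / 2)  # (R+eps)^d / B(a,b) * B_x(a, (d+1)/2)
    term2 = R ** 3 * (1 - math.sqrt(1 - x))        # R^d * I_x(a, b)
    return (term1 - term2) / (2 * ((R + eps) ** 3 - R ** 3))

def p_quadrature(R, eps, m=20_000):
    """Step (a): (d/(2*Delta)) * int_R^{R+eps} I_{1-l^2/(R+eps)^2}(1, 1/2) l^2 dl, d = 3."""
    delta = (R + eps) ** 3 - R ** 3
    h = eps / m
    total = 0.0
    for i in range(m):
        l = R + (i + 0.5) * h                      # midpoint rule
        total += (1 - l / (R + eps)) * l ** 2 * h  # I_x(1, 1/2) = 1 - l/(R+eps)
    return 3 * total / (2 * delta)

R, eps = 1.0, 0.5
print(p_closed(R, eps), p_quadrature(R, eps))      # both ~0.0724
```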
Let $K=\Phi\left(\mathcal{A}(2R,2\epsilon)\right)$. By definition, $K$ is Poisson with mean $\lambda\kappa_d((R+\epsilon)^d-R^d)$. Now to complete our argument, we evaluate the probability that the point on the boundary of $\mathcal{B}_{R+\epsilon}(o)$ does not belong to $V_o$ as
\begin{align}
Q_d(R,\epsilon) &=\mathbb{P}((R+\epsilon,\boldsymbol{\alpha})~\text{belongs to at least one of the caps}) \nonumber\\
&=1-(1-p_0)\mathbb{E}\left[(1-p)^K\right]\nonumber\\
&\stackrel{(a)}{=}1-\left(1-\frac{1}{2}I_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right)\right)\exp\Bigg(-\frac{1}{2}\lambda \kappa_d h(R,\epsilon)\Bigg),
\label{eq:Prob1_Out}
\end{align}
where
\begin{equation}
h(R,\epsilon)=\frac{(R+\epsilon)^d}{B\left(\frac{d-1}{2},\frac{1}{2}\right)}B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{d+1}{2}\right)-R^d I_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right),
\end{equation}
and
(a) directly follows using \eqref{eq:probability_belongs_to_cap_Li}, \eqref{eq:probability_belongs_to_cap_L0} and the probability generating function of the Poisson distribution with mean $\lambda\kappa_d((R+\epsilon)^d-R^d)$.
Now, in the following theorem we state the limiting case of \eqref{eq:Prob1_Out}.
\begin{thm}
\label{thm:LargePVCell_Circular}
Given the inradius $R$, the probability that a point on the boundary of $\mathcal{B}_{R+\epsilon}(o)$ does not belong to the PV cell $V_o$ approaches one as $R$ tends to infinity, i.e.,
\begin{equation}
\lim_{R\to\infty}Q_d(R,\epsilon)=1, ~~~~~~~~\forall \epsilon>0.
\label{eq:LargePVCell_Circular}
\end{equation}
\end{thm}
\begin{proof}
We note that, for $\epsilon>0$, $I_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right)\to 0$ as $R\to\infty$. Therefore, in order to prove \eqref{eq:LargePVCell_Circular}, it is sufficient to show that the exponential term in \eqref{eq:Prob1_Out} tends to 0 as $R\to\infty$ for $\epsilon>0$, i.e.,
\begin{align}
\lim_{R\to\infty}{h}(R,\epsilon)=\infty.\nonumber
\end{align}
To this end, we multiply ${h}(R,\epsilon)$ with $B\left(\frac{d-1}{2},\frac{1}{2}\right)$ to obtain
\begin{align}
\tilde{h}(R,\epsilon)=(R+\epsilon)^d B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{d+1}{2}\right)-R^d B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right).
\label{eq:Converegence_show}
\end{align}
We have
\begin{equation}
B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},a\right)=\int_0^{1-\frac{R^2}{(R+\epsilon)^2}}t^{\frac{d-1}{2}-1}(1-t)^{a-1} {\rm d}t.\nonumber
\end{equation}
Thus, using the binomial expansion of the term $(1-t)^{a-1}$, we get
\begin{align*}
B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},a\right)&=\int_0^{1-\frac{R^2}{(R+\epsilon)^2}}t^{\frac{d-1}{2}-1}\sum_{k=0}^{\infty} (-1)^k \frac{1}{k!}\prod_{l=0}^{k-1}(a-1-l) t^k {\rm d}t\\
&=\sum_{k=0}^{\infty}(-1)^k \frac{1}{k!}\prod_{l=0}^{k-1}(a-1-l) \int_0^{1-\frac{R^2}{(R+\epsilon)^2}}t^{k+\frac{d-1}{2}-1} {\rm d}t\\
&=\sum_{k=0}^{\infty}(-1)^k \frac{\prod_{l=0}^{k-1}(a-1-l)}{k!\left(k+\frac{d-1}{2}\right)} \left(1-\frac{R^2}{(R+\epsilon)^2}\right)^{k+\frac{d-1}{2}}.
\end{align*}
Let $A_k=\frac{1}{k!(k+\frac{d-1}{2})}\prod_{l=0}^{k-1}\left(\frac{d+1}{2}-1-l\right)$ and $B_k=\frac{1}{k!(k+\frac{d-1}{2})}\prod_{l=0}^{k-1}\left(\frac{1}{2}-1-l\right)$.
Using the above series expansion of the incomplete beta function, we can rewrite \eqref{eq:Converegence_show} as
\begin{align*}
\tilde{h}(R,\epsilon)&=\sum_{k=0}^{\infty}(-1)^k \left[A_k(R+\epsilon)^d-B_kR^d\right]\left(1-\frac{R^2}{(R+\epsilon)^2}\right)^{k+\frac{d-1}{2}}\nonumber\\
&=\sum_{k=0}^{\infty}(-1)^k \left[(A_k-B_k)R^d + A_k\sum_{n=0}^{d-1} {d\choose n} R^n\epsilon^{d-n}\right]\frac{(2R\epsilon+\epsilon^2)^{k+\frac{d-1}{2}}}{(R+\epsilon)^{2k+d-1}}\nonumber\\
&=\sum_{k=0}^{\infty}(-1)^k \left[(A_k-B_k)R^{\frac{d+1}{2}-k} + A_k\sum_{n=0}^{d-1} {d\choose n} R^{n+\frac{1-d}{2}-k}\epsilon^{d-n}\right]\frac{(2\epsilon+R^{-1}\epsilon^2)^{k+\frac{d-1}{2}}}{(1+R^{-1}\epsilon)^{2k+d-1}}.\nonumber
\end{align*}
Now note that $A_k-B_k\geq 0$ for $k\leq\frac{d-1}{2}$.
Therefore, the terms in the above summation tend to infinity as $R$ tends to infinity for $k<\frac{d+1}{2}$. In addition, the terms converge to a constant for $k=\frac{d+1}{2}$ (if $d$ is odd) and to zero for $k>\frac{d+1}{2}$. From this, it is clear that $\tilde{h}(R,\epsilon)\to\infty$ as $R\to\infty$. Therefore, we have ${h}(R,\epsilon)\to\infty$ as $R\to\infty$.
\end{proof}
From Theorem \ref{thm:LargePVCell_Circular}, it is easy to see that the boundary of a PV cell $V_o$ must be contained within the annulus $\mathcal{A}(R,\epsilon)$ as its inradius $R\to\infty$ for an arbitrarily small $\epsilon$. Hence PV cells with large inradii tend to be spherical. Therefore, the approach presented in this section provides an alternate proof for the well-known {\em spherical} nature of the PV cells having a large inball \cite{calka2005limit,Calka2002,miles1995heuristic}. A realization of a PV cell $V_o$ with large inradius is shown in Fig.~\ref{fig:Illustration_largPVCell} for the case of $d=2$.
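The convergence in Theorem \ref{thm:LargePVCell_Circular} is already visible numerically for moderate inradii. For $d=3$ the incomplete beta functions in \eqref{eq:Prob1_Out} are elementary, and the sketch below (ours, with the arbitrary choices $\lambda=1$ and $\epsilon=0.5$) shows $Q_3(R,\epsilon)$ increasing towards one as $R$ grows:

```python
import math

KAPPA_3 = 4 * math.pi / 3  # volume of the unit ball in R^3

def Q3(R, eps, lam=1.0):
    """Q_d(R, eps) from (eq:Prob1_Out) for d = 3, where I_x(1, 1/2) = 1 - sqrt(1-x)
    and B_x(1, 2) = x - x**2/2 are elementary."""
    x = 1 - R ** 2 / (R + eps) ** 2
    I_x = 1 - math.sqrt(1 - x)
    h = (R + eps) ** 3 / 2 * (x - x ** 2 / 2) - R ** 3 * I_x
    return 1 - (1 - 0.5 * I_x) * math.exp(-0.5 * lam * KAPPA_3 * h)

for R in (1, 5, 20, 100):
    print(R, round(Q3(R, 0.5), 6))  # increases towards 1 as R grows
```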
\begin{figure}[h]
\centering
\includegraphics[width=.55\textwidth]{Illustration_LargePVCell.eps}\vspace{-.6cm}
\caption{Illustration of a cell in $\mathbb{R}^2$ with large inradius.}
\label{fig:Illustration_largPVCell}
\end{figure}
\appendices
\section{Solution of Integrals in \eqref{eq:CDF_Ro_d1_Integrals}}
\label{app:CDF_Ro_d1_Integrals}
We have
\begin{align*}
F_{R_o}(r) = &\underbrace{\int_0^r\int_0^{r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2}_{\mathtt{Int}_1} +\underbrace{\int_r^\infty\int_0^{r}\frac{r+r_1}{r_1+r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2}_{\mathtt{Int}_2} \\
& +\underbrace{\int_r^\infty\int_r^{r_2}\frac{2r}{r_1+r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2}_{\mathtt{Int}_3}. \numberthis
\label{eq:CDF_Ro_d1_Integrals1}
\end{align*}
First of all, it is easy to show that $\mathtt{Int}_1$ reduces to
\begin{equation}
\mathtt{Int}_1=1+\exp(-4\lambda r)-2\exp(-2\lambda r).\label{eq:I_1}
\end{equation}
Now, we have
\begin{align*}
\mathtt{Int}_2 = & \underbrace{\int_r^\infty\int_0^{r}\frac{r}{r_1+r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2}_{\mathtt{Int}_{21}} +\underbrace{\int_r^\infty\int_0^{r}\frac{r_1}{r_1+r_2}8\lambda^2\exp(-2\lambda(r_1+r_2)){\rm d}r_1{\rm d}r_2}_{\mathtt{Int}_{22}}.
\end{align*}
By substituting $r_1+r_2=y$, we solve $\mathtt{Int}_{21}$ as
\begin{align*}
\mathtt{Int}_{21}&=8\lambda^2 r\int_r^\infty\int_{r_2}^{r+r_2}\frac{1}{y}\exp(-2\lambda y){\rm d}y{\rm d}r_2\\
&=8\lambda^2 r\int_r^\infty\int_{r_2}^{\infty}\frac{1}{y}\exp(-2\lambda y){\rm d}y{\rm d}r_2-8\lambda^2 r\int_r^\infty\int_{r+r_2}^{\infty}\frac{1}{y}\exp(-2\lambda y){\rm d}y{\rm d}r_2\\
&=8\lambda^2 r\int_r^\infty\int_{2\lambda r_2}^{\infty}\frac{1}{z}\exp(-z){\rm d}z{\rm d}r_2-8\lambda^2 r\int_r^\infty\int_{2\lambda (r+r_2)}^{\infty}\frac{1}{z}\exp(-z){\rm d}z{\rm d}r_2\\
&=8\lambda^2r\int_r^\infty\mathtt{E}_1(2\lambda r_2){\rm d}r_2-8\lambda^2 r\int_{r}^\infty\mathtt{E}_1(2\lambda(r+r_2)){\rm d}r_2\\
&=8\lambda^2r\int_r^\infty\mathtt{E}_1(2\lambda r_2){\rm d}r_2-8\lambda^2 r\int_{2 r}^\infty\mathtt{E}_1(2\lambda u){\rm d}u,
\end{align*}
where $\mathtt{E}_1$ denotes the exponential integral function, $\mathtt{E}_1(x)=\int_x^\infty \frac{\exp(-z)}{z}{\rm d}z$.
From \cite[Eq. 5.22.8]{gradshteyn2014table}, we have
\begin{equation}
\int_x^\infty\mathtt{E}_1(az){\rm d}z=\frac{1}{a}\exp(-ax)-x\mathtt{E}_1(ax).\label{eq:IntegralofE1}
\end{equation}
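As a sanity check (ours, not from the paper), \eqref{eq:IntegralofE1} can be verified by simple quadrature, evaluating $\mathtt{E}_1$ directly from its integral definition:

```python
import math

def E1(x, m=3000, cutoff=40.0):
    """E_1(x) = int_x^inf e^{-t}/t dt via the midpoint rule (adequate for x >= 1)."""
    h = cutoff / m
    return sum(math.exp(-(x + (i + 0.5) * h)) / (x + (i + 0.5) * h) * h
               for i in range(m))

a, x = 2.0, 1.0

# Left-hand side: int_x^inf E1(a z) dz, truncated at z = 20 (the tail is negligible).
m, zmax = 600, 20.0
hz = (zmax - x) / m
lhs = sum(E1(a * (x + (i + 0.5) * hz)) * hz for i in range(m))

rhs = math.exp(-a * x) / a - x * E1(a * x)
print(round(lhs, 5), round(rhs, 5))  # both ~0.01877
```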
Therefore, we get
\begin{align*}
\mathtt{Int}_{21}=4\lambda r\exp(-2\lambda r)-4\lambda r\exp(-4\lambda r)-8\lambda^2 r^2\mathtt{E}_1(2\lambda r)+16\lambda^2 r^2\mathtt{E}_1(4\lambda r).\numberthis
\label{eq:I_21}
\end{align*}
Similarly, by substituting $r_1+r_2=y$, we solve $\mathtt{Int}_{22}$ as
\begin{align*}
\mathtt{Int}_{22}&=8\lambda^2\int_r^\infty\int_{r_2}^{r+r_2}\frac{y-r_2}{y}\exp(-2\lambda y){\rm d}y{\rm d}r_2\\
&=8\lambda^2 \int_r^\infty \left(\int_{r_2}^{r+r_2}\exp(-2\lambda y){\rm d}y-r_2\int_{r_2}^{r+r_2}\frac{1}{y}\exp(-2\lambda y){\rm d}y\right){\rm d}r_2\\
&=8\lambda^2 \int_r^\infty \frac{1}{2\lambda}\left(\exp(-2\lambda r_2)-\exp(-2\lambda(r+r_2))\right)-r_2\left(\mathtt{E}_1(2\lambda r_2)-\mathtt{E}_1(2\lambda(r+r_2))\right){\rm d}r_2\\
&=2\left(1-\exp(-2\lambda r)\right)\exp(-2\lambda r)-8\lambda^2 \underbrace{\int_r^\infty r_2\mathtt{E}_1(2\lambda r_2){\rm d}r_2}_{\mathtt{Int}_{221}}+8\lambda^2\underbrace{\int_r^\infty r_2\mathtt{E}_1(2\lambda(r+r_2)){\rm d}r_2}_{\mathtt{Int}_{222}}.\numberthis
\label{eq:I_22}
\end{align*}
Now, using \eqref{eq:IntegralofE1} and the integration by parts, we solve $\mathtt{Int}_{221}$ as
\begin{align*}
\mathtt{Int}_{221}&=\int_r^\infty r_2\mathtt{E}_1(2\lambda r_2){\rm d}r_2\\
&=r_2\int\mathtt{E}_1(2\lambda r_2){\rm d}r_2\mid_r^\infty - \int_r^\infty\left(\int\mathtt{E}_1(2\lambda r_2){\rm d}r_2\right){\rm d}r_2\\
&=\frac{r}{2\lambda}\exp(-2\lambda r)-r^2\mathtt{E}_1(2\lambda r) - \int_r^\infty \left(r_2\mathtt{E}_1(2\lambda r_2)-\frac{1}{2\lambda}\exp(-2\lambda r_2)\right){\rm d}r_2\\
&=\frac{r}{2\lambda}\exp(-2\lambda r)-r^2\mathtt{E}_1(2\lambda r) +\frac{1}{4\lambda^2 }\exp(-2\lambda r)-\mathtt{Int}_{221}\\
&=\frac{r}{4\lambda}\exp(-2\lambda r)+\frac{1}{8\lambda^2}\exp(-2\lambda r)-\frac{r^2}{2}\mathtt{E}_1(2\lambda r).\numberthis\label{eq:I_221}
\end{align*}
Now,
\begin{align*}
\mathtt{Int}_{222}&=\int_r^\infty r_2\mathtt{E}_1(2\lambda(r+r_2)){\rm d}r_2\\
&=\frac{1}{4\lambda^2 }\int_{4\lambda r}^\infty(y-2\lambda r)\mathtt{E}_1(y){\rm d}y\\
&=\underbrace{\frac{1}{4\lambda^2 }\int_{4\lambda r}^\infty y\mathtt{E}_1(y){\rm d}y}_{A}-\underbrace{\frac{r}{2\lambda }\int_{4\lambda r}^\infty\mathtt{E}_1(y){\rm d}y}_{B}\\
&=\frac{1}{4\lambda^2}y\int\mathtt{E}_1(y){\rm d}y\mid_{4\lambda r}^\infty-\frac{1}{4\lambda^2}\int_{4\lambda r}^{\infty}\left(\int\mathtt{E}_1(y){\rm d}y\right){\rm d}y-B\\
&=\frac{r}{\lambda}\exp(-4\lambda r)-4r^2\mathtt{E}_1(4\lambda r)-\frac{1}{4\lambda^2}\int_{4\lambda r}^\infty\left(y\mathtt{E}_1(y)-\exp(-y)\right){\rm d}y-B\\
&=\frac{r}{\lambda}\exp(-4\lambda r)-4r^2\mathtt{E}_1(4\lambda r)+\frac{1}{4\lambda^2}\exp(-4\lambda r)-A-B\\
&\stackrel{(a)}{=}\frac{r}{2\lambda}\exp(-4\lambda r)+\frac{1}{8\lambda^2}\exp(-4\lambda r)-2r^2\mathtt{E}_1(4\lambda r)-\frac{r}{2\lambda}\left(\exp(-4\lambda r)-4\lambda r\mathtt{E}_1(4\lambda r)\right)\\
&=\frac{1}{8\lambda^2}\exp(-4\lambda r),
\numberthis\label{eq:I_222}
\end{align*}
where step (a) follows by substituting $A+B = \mathtt{Int}_{222}+2B$ and $B=\frac{r}{2\lambda}(\exp(-4\lambda r)-4\lambda r\mathtt{E}_1(4\lambda r))$.
Substituting \eqref{eq:I_221} and \eqref{eq:I_222} in \eqref{eq:I_22}, we get
\begin{align*}
\mathtt{Int}_{22}&=\exp(-2\lambda r)-2\lambda r\exp(-2\lambda r)-\exp(-4\lambda r)+4\lambda^2 r^2\mathtt{E}_1(2\lambda r).\numberthis
\label{eq:I_22a}
\end{align*}
Now, adding \eqref{eq:I_21} and \eqref{eq:I_22a}, we get
\begin{align*}
\mathtt{Int}_2&=\exp(-2\lambda r)+2\lambda r\exp(-2\lambda r)-\exp(-4\lambda r)-4\lambda r\exp(-4\lambda r)-4\lambda^2 r^2\mathtt{E}_1(2\lambda r)+16\lambda^2 r^2\mathtt{E}_1(4\lambda r).\numberthis
\label{eq:I_2}
\end{align*}
Again substituting $r_1+r_2=y$ and using \eqref{eq:IntegralofE1}, we evaluate $\mathtt{Int}_3$ as
\begin{align*}
\mathtt{Int}_3&=16\lambda^2 r\int_r^\infty\int_{r+r_2}^{2r_2}\frac{1}{y}\exp(-2\lambda y){\rm d}y{\rm d}r_2\\
&=16\lambda^2 r\int_r^\infty\mathtt{E}_1(2\lambda (r+r_2)){\rm d}r_2-16\lambda^2 r\int_r^\infty\mathtt{E}_1(4\lambda r_2){\rm d}r_2\\
&=8\lambda r\int_{4\lambda r}^\infty\mathtt{E}_1(u){\rm d}u-16\lambda^2 r \left(\frac{1}{4\lambda}\exp(-4\lambda r)-r\mathtt{E}_1(4\lambda r)\right)\\
&=8\lambda r\left(\exp(-4\lambda r)-4\lambda r\mathtt{E}_1(4\lambda r)\right)-4\lambda r\exp(-4\lambda r)+16\lambda^2r^2\mathtt{E}_1(4\lambda r)\\
&=4\lambda r\exp(-4\lambda r)-16\lambda^2r^2\mathtt{E}_1(4\lambda r).\numberthis\label{eq:I_3}
\end{align*}
Finally, adding \eqref{eq:I_1}, \eqref{eq:I_2} and \eqref{eq:I_3}, we get \eqref{eq:CDF_R_1D}.
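Summing \eqref{eq:I_1}, \eqref{eq:I_2} and \eqref{eq:I_3}, the $\exp(-4\lambda r)$ and $\mathtt{E}_1(4\lambda r)$ terms cancel, leaving $F_{R_o}(r)=1-\exp(-2\lambda r)+2\lambda r\exp(-2\lambda r)-4\lambda^2r^2\mathtt{E}_1(2\lambda r)$. A Monte Carlo sketch of the $d=1$ typical cell (ours, not from the paper) agrees with this closed form:

```python
import math, random

def E1(x, m=4000, cutoff=40.0):
    """E_1(x) = int_x^inf e^{-t}/t dt via the midpoint rule."""
    h = cutoff / m
    return sum(math.exp(-(x + (i + 0.5) * h)) / (x + (i + 0.5) * h) * h
               for i in range(m))

def F_closed(r, lam=1.0):
    # sum of the three integrals; the exp(-4*lam*r) and E1(4*lam*r) terms cancel
    return (1 - math.exp(-2 * lam * r) + 2 * lam * r * math.exp(-2 * lam * r)
            - 4 * lam ** 2 * r ** 2 * E1(2 * lam * r))

# Monte Carlo for d = 1, lam = 1: the typical cell is (-X/2, Y/2) with X, Y ~ Exp(1)
# (half the gaps to the nearest Poisson point on each side), and R_o = |U| for U
# uniform in the cell.
random.seed(3)
n, r = 200_000, 0.5
hits = 0
for _ in range(n):
    a, b = random.expovariate(1.0) / 2, random.expovariate(1.0) / 2
    hits += abs(random.uniform(-a, b)) <= r
print(round(F_closed(r), 4), round(hits / n, 4))  # both ~0.78
```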
\section{Solution of Integral in \eqref{eq:probability_belongs_to_cap_Li}}
\label{app:probability_belongs_to_cap_Li}
Let $a=\frac{d-1}{2}$ and $b=\frac{1}{2}$. From step (a) of \eqref{eq:probability_belongs_to_cap_Li} and using $I_x(a,b)=\frac{B_x(a,b)}{B(a,b)}$, we have
\begin{align*}
p&=\nu_R\int_{R}^{R+\epsilon} B_{{1-\frac{l^2}{(R+\epsilon)^2}}}\left(a,b\right) l^{d-1}{\rm d}l,
\end{align*}
where $\nu_R=\frac{{d((R+\epsilon)^d-R^d)^{-1}}}{2B\left(a,b\right)}$.
We solve the above integral using integration by parts as follows.
Let $v=l^{d-1}$ and $u=B_{1-\frac{l^2}{(R+\epsilon)^2}}\left(\frac{d-1}{2},\frac{1}{2}\right)$. We have $$\frac{{\rm d}}{{\rm d}l}u=-\frac{2l}{(R+\epsilon)^2}\left(\frac{l^{2}}{(R+\epsilon)^{2}}\right)^{b-1}\left(1-\frac{l^2}{(R+\epsilon)^2}\right)^{a-1},$$ and thus
\begin{align*}
p&=-\nu_R\frac{R^d}{d}B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(a,b\right) + \nu_R\int_R^{R+\epsilon} \frac{l^d}{d} \frac{2l}{(R+\epsilon)^2}\frac{l^{2(b-1)}}{(R+\epsilon)^{2(b-1)}}\left(1-\frac{l^2}{(R+\epsilon)^2}\right)^{a-1}{\rm d}l.
\end{align*}
Now, substituting $\frac{l^2}{(R+\epsilon)^2}=z$, we get
\begin{align*}
p&=-\nu_R\frac{R^d}{d}B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(a,b\right)+\nu_R\frac{(R+\epsilon)^d}{d}\int_\frac{R^2}{(R+\epsilon)^2}^1z^{b+\frac{d}{2}-1}(1-z)^{a-1}{\rm d}z\\
&=-\nu_R\frac{R^d}{d}B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(a,b\right)+ \nu_R\frac{(R+\epsilon)^d}{d}\left[B\left(b+\frac{d}{2},a\right) - B_{\frac{R^2}{(R+\epsilon)^2}}\left(b+\frac{d}{2},a\right)\right]\\
&=-\tilde{\nu}_R R^d I_{1-\frac{R^2}{(R+\epsilon)^2}}(a,b) +\tilde{\nu}_R(R+\epsilon)^d\frac{B\left(b+\frac{d}{2},a\right)}{B(a,b)}\left[1-I_{1-\frac{R^2}{(R+\epsilon)^2}}\left(b+\frac{d}{2},a\right)\right]\\
&=\tilde{\nu}_R(R+\epsilon)^d B_{1-\frac{R^2}{(R+\epsilon)^2}}\left(a,b+\frac{d}{2}\right)-\tilde{\nu}_R R^d I_{1-\frac{R^2}{(R+\epsilon)^2}}(a,b),
\end{align*}
where $\tilde{\nu}_R=\frac{1}{2((R+\epsilon)^d-R^d)}$. The last equality follows using $I_{x}(a,b)=1-I_{1-x}(b,a)$ and $B(a,b)=B(b,a)$.
% Source: arXiv:1907.03635 (https://arxiv.org/abs/1907.03635), 2019.
% Title: Distance from the Nucleus to a Uniformly Random Point in the 0-cell and the Typical Cell of the Poisson-Voronoi Tessellation.
% Subjects: Probability (math.PR); Information Theory (cs.IT).
% Source: arXiv:1301.2342 (https://arxiv.org/abs/1301.2342).
\title{A Linear Time Algorithm for the Feasibility of Pebble Motion on Graphs}
\begin{abstract}
Given a connected, undirected, simple graph $G = (V, E)$ and $p \le |V|$ pebbles labeled $1, \ldots, p$, a configuration of these $p$ pebbles is an injective map assigning the pebbles to vertices of $G$. Let $S$ and $D$ be two such configurations. From a configuration, pebbles can move on $G$ as follows: in each step, at most one pebble may move from the vertex it currently occupies to an adjacent unoccupied vertex, yielding a new configuration. A natural question in this setting is the following: is configuration $D$ reachable from $S$, and if so, how? We show that the feasibility of this problem can be decided in time $O(|V| + |E|)$.
\end{abstract}
\section{Introduction}
In Sam Loyd's 15-puzzle \cite{Loy59}, a player is asked to arrange square game pieces labeled 1-15, scrambled on a $4 \times 4$ grid, to a shuffled row major ordering, using one empty swap cell: In each step, one of the labeled pieces neighboring the empty cell may be moved to the empty cell (see, e.g., Fig. \ref{fig:15-puzzle}). As early as 1879, Story \cite{Sto1879} observed that the feasibility of a 15-puzzle instance is solely decided by the parity of the starting configuration (with respect to the fixed goal configuration).
\begin{figure}[htp]
\begin{center}
\begin{tabular}{ccc}
\includegraphics[width=0.8in]{15-puzzle.eps} & \hspace{10mm} &
\includegraphics[width=0.8in]{15-puzzle-solved.eps} \\
(a) & & (b)\\
\end{tabular}
\end{center}
\caption{\label{fig:15-puzzle} Two 15-puzzle instances. a) An unsolved instance. In the next step, one of the pieces labeled 5, 6, 14 may move to the vacant cell, leaving behind it another vacant cell for the next move. b) The solved instance.}
\end{figure}
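Story's parity observation is easy to state computationally: read the tiles in row-major order, count inversions among the numbered pieces, and combine this count with the row of the empty cell. A small sketch (ours; the invariant below is the usual modern formulation of the criterion for the $4\times4$ board) classifies Loyd's famous 14-15 swap as infeasible:

```python
def solvable_15(board):
    """board: 16 ints in row-major order, 0 = blank; goal = 1..15 with blank last.
    Standard even-width rule: solvable iff (#inversions among the tiles) plus the
    blank's row counted from the bottom is odd."""
    tiles = [t for t in board if t != 0]
    inversions = sum(tiles[i] > tiles[j]
                     for i in range(len(tiles)) for j in range(i + 1, len(tiles)))
    blank_row_from_bottom = 4 - board.index(0) // 4
    return (inversions + blank_row_from_bottom) % 2 == 1

solved = list(range(1, 16)) + [0]
loyd = list(range(1, 14)) + [15, 14, 0]  # the famous 14-15 swap
print(solvable_15(solved), solvable_15(loyd))  # True False
```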
Generalizing the 15-puzzle to having $n-1$ labeled pebbles on an arbitrary non-separable graph $G$ with $n$ vertices, Wilson \cite{Wil74} formalized the observation of Story by showing that if $G$ is bipartite, the reachable configurations from a given start configuration form an alternating group on $n-1$ letters, partitioning all possible configurations into two equivalence classes. If $G$ is non-bipartite (except for a special $\theta_0$ graph with seven vertices), then the reachable configurations form the symmetric group on $n - 1$ letters, implying that the problem is always feasible. The result by Wilson also yields an algorithm for solving a given instance. However, the number of moves involved may be exponential. Taking a further step, Kornhauser, Miller, and Spirakis \cite{KorMilSpi84} studied the {\em pebble motion problem} on an arbitrary connected $n$-vertex graph with up to $n - 1$ pebbles. For this problem, they gave a polynomial time algorithm that produces solutions with a $O(n^3)$ upper bound on the number of moves.
As pointed out in \cite{KorMilSpi84}, certain instances of the pebble motion problem require $\Theta(n^3)$ moves, suggesting an $\Omega(n^3)$ lower bound on any algorithm that computes a step-by-step plan for moving the pebbles. Since not all instances of the pebble motion problem can be solved, if the feasibility test can be performed faster than $\Theta(n^3)$, unnecessary computation on infeasible instances can be avoided. Auletta et al. \cite{AulMonParPer99} showed that for trees, deciding whether a given instance of the pebble motion problem is feasible can be done in linear time. Recently, Goraly and Hassin \cite{GorHas10} extended the result to graphs. We independently reach the same conclusion via a direct reduction to the tree approach proposed in \cite{AulMonParPer99}. The tree that we obtain shrinks the graph significantly, whereas the method in \cite{GorHas10} adds more vertices to the graph.
The pebble motion problem on graphs finds applications in multi-robot path planning, deflection routing in data networks, and memory management in distributed systems. Fast feasibility test for this problem can help eliminate infeasible instances, thus avoiding unnecessary computation on parts of these instances.
\subsection{Problem Statement and Main Result}
Let $G = (V, E)$ be a connected, undirected, simple graph with $|V| = n$. The presented results readily generalize to graphs with multiple connected components (via taking the direct product of the groups from the components). Generalizations to directed graphs and graphs with multiple edges are also straightforward. Let there be a set of $p \le n$ pebbles, numbered $1, \ldots, p$, residing on distinct vertices of $G$. A {\em configuration} of these pebbles is an injective map $S: \{1, \ldots, p\} \to V$. A configuration can also be viewed as a sequence of vertices, $S = \langle s_1, \ldots, s_p \rangle$. We use $V(S)$ to denote the range of $S$. A {\em move} is a pair of configurations, $\langle S, S' \rangle$, such that $S, S'$ differ at exactly one pebble $i$ and $(s_i, s_i') \in E$. That is, in a move, a single pebble may migrate from its current vertex to an empty neighboring vertex.
When two configurations $S$ and $S'$ are parts of a move, they are {\em connected}. Two configurations $S$ and $S'$ are also connected (and therefore {\em reachable} from each other) if there exists a sequence of configurations $\langle S=S_0, \ldots, S_t = S'\rangle$ such that every pair of consecutive configurations $S_i, S_{i+1}$ in the sequence are connected. The problem of {\em pebble motion on graphs} or $\pmg$ is defined as follows.
\begin{problem}[$\pmg$]\label{pm} Given an instance $I = (G, S, D)$ in which $G$ is a connected graph, and $S$ and $D$ are two pebble configurations on $G$, find a sequence of moves that connects configurations $S$ and $D$.
\end{problem}
It is clear that a given $\pmg$ instance may not have a solution. If it does, then it is {\em feasible}. When $G$ is a tree, $\pmg$ is also referred to as {\em pebble motion on trees} ($\pmt$). In this case, an instance is usually written as $I = (T, S, D)$. Auletta et al. \cite{AulMonParPer99} showed that the feasibility test of a $\pmt$ instance can be performed in $O(n)$ operations. We generalize this linear time result to graphs. The main result of the paper is presented in the following theorem.
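Before the linear-time test, it is instructive to see the naive approach: the feasibility of Problem \ref{pm} can always be decided by brute-force search over configurations, which is exponential in general. A minimal sketch (ours, for tiny instances only):

```python
from collections import deque

def reachable(adj, S, D):
    """BFS over all configurations; adj maps vertex -> set of neighbors,
    S and D are tuples giving the vertex of each pebble."""
    start, goal = tuple(S), tuple(D)
    seen, queue = {start}, deque([start])
    while queue:
        conf = queue.popleft()
        if conf == goal:
            return True
        occupied = set(conf)
        for i, v in enumerate(conf):      # move pebble i ...
            for u in adj[v] - occupied:   # ... to an adjacent unoccupied vertex
                nxt = conf[:i] + (u,) + conf[i + 1:]
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return False

path = {0: {1}, 1: {0, 2}, 2: {1}}                    # two pebbles on a path: no swap
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # a 4-cycle allows the swap
print(reachable(path, (0, 1), (1, 0)), reachable(cycle, (0, 1), (1, 0)))  # False True
```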
\begin{theorem}\label{t:main} The feasibility test of $\pmg$ can be performed in $O(|V| + |E|)$ time.
\end{theorem}
The key idea behind our linear time feasibility test is reducing a $\pmg$ instance to a $\pmt$-like instance, allowing many ideas from \cite{AulMonParPer99} to be adapted for proving Theorem \ref{t:main}. This leads to some intermediate results that look similar to those from \cite{AulMonParPer99} but require significantly different proofs. To make this paper self-contained, complete proofs are generally provided for these intermediate results.
\section{Reducing Pebble Motion on Graphs to Pebble Permutation on Graphs}
If $V(S) = V(D)$ in a $\pmg$ instance $(G, S, D)$, $D$ can be viewed as a {\em permutation} $\Pi$ of $S$, defined as $d_i = s_{\Pi(i)}$ for all $i$ (i.e., $\Pi$ permutes the pebbles). We call such a problem {\em pebble permutation on graphs}, or $\ppg$. The main goal of this section is to show that any $\pmg$ instance can be reduced to an equivalent $\ppg$ instance such that the $|V(S)|$ pebbles can occupy any set of $|V(S)|$ vertices on $G$.
If the underlying graph is a tree in a $\ppg$ instance, the problem becomes {\em pebble permutation on trees}, or $\ppt$. Reducing a $\pmg$ instance to an equivalent $\ppg$ instance can be done using Theorem 3 from \cite{AulMonParPer99}, a restatement of which is given below. We give a shorter constructive proof of this result.
\begin{theorem}\label{t:pmt-ppt} Let $(T, S, D)$ be a $\pmt$ instance. In $O(n)$ steps, an instance $(T, S', \Pi)$ of \ppt\, can be computed such that $S, S'$ are connected and for all $i$, $d_i = s_{\Pi(i)}'$ for a fixed permutation $\Pi$.
\end{theorem}
{\sc Proof.} We produce a mapping between $S$ and a new configuration $S'$ so that the requirements are satisfied. To start, all leaf vertices of $T$ are put into a queue $Q$ and {\em processed} in the order they are added. After a vertex $v$ from $Q$ is processed, its neighbors, $N(v)$, are examined. If a neighbor $u \in N(v)$ has not been added to $Q$, $u$ is added to $Q$ if $N(u)$ has at most one member which has not already been added to $Q$. It is straightforward to check that adding vertices to $Q$ this way guarantees that $Q$ will not be empty until all vertices of $T$ are processed.
The processed vertices form a forest, $F$, whose trees eventually combine to yield $T$. In this proof, $v$ is always assumed to be the current vertex from $Q$ that is being processed and is adjacent to the tree $T_i \in F$ (this does not prevent $v$ from being adjacent to other trees from $F$). As $v$ is being processed, $T_i$ will be examined. Depending on how $|V(S) \cap V(T_i)|$ and $|V(D) \cap V(T_i)|$ compare, there are three possibilities.
First, if $|V(S) \cap V(T_i)| = |V(D) \cap V(T_i)|$, then nothing additional needs to be done for $T_i$.
Next, if $|V(S) \cap V(T_i)| > |V(D) \cap V(T_i)|$, then some pebbles will need to be moved out of $T_i$ through its root so that the numbers of pebbles on $T_i$ from $S$ and $D$ are the same. For such a tree $T_i$, a {\em surplus} queue, $Q_i^+$, of pebbles will be maintained, so that pebbles at the front of the queue are readily moved out of the root of $T_i$. To maintain $Q_i^+$, the operations for removing and adding a pebble, as well as merging two queues, need to be specified (one more operation involving a surplus queue will be introduced in the next paragraph). Removing a pebble from $Q_i^+$ is needed when $v \in V(D)$ and $v \notin V(S)$; a pebble from $Q_i^+$ needs to move to $v$. For this, simply grab the pebble at the end of $Q_i^+$, since that pebble can be the last pebble to leave the current $T_i$. To add a pebble (needed when $v \in V(S), v \notin V(D)$), insert the pebble at the front of $Q_i^+$. Note that it is possible that $v \in (V(S) \cap V(D))$; in this case the removal is followed by the insertion. If $v$ will be the new root of two trees $T_i, T_j$ and both trees have surplus queues ($Q_i^+, Q_j^+$, respectively), before processing $v$, merge $Q_i^+, Q_j^+$ by attaching $Q_j^+$ at the end of $Q_i^+$. This works since pebbles from $Q_i^+, Q_j^+$ must all move through $v$; all pebbles in $Q_i^+$ can move through $v$ first. The same applies if there are three or more trees meeting at $v$.
Finally, if $|V(S) \cap V(T_i)| < |V(D) \cap V(T_i)|$, pebbles will need to be moved into $T_i$ through its root. A {\em deficit} queue $Q_i^-$ containing vertices of $V(D)$ is maintained in this case. Assume $Q_i^-$ is arranged so that the front vertex is closest to the root of $T_i$. The operations for removing, adding, and merging deficit queues mirror those for surplus queues. An extra queue operation is needed when a deficit queue $Q_i^-$ must be merged with a surplus queue $Q_j^+$; this can be done simply by filling $Q_i^-$ from the back with pebbles of $Q_j^+$ starting from the front.
~\qed
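The leaf-to-root processing order used in this proof can be written out concretely. Below is a minimal Python sketch (the function name and data layout are ours, not from the paper): leaves are queued first, and a vertex joins the queue once at most one of its neighbors remains unqueued.

```python
from collections import deque

def processing_order(adj):
    """Processing order over the vertices of a tree (sketch).

    adj: dict mapping each vertex of the tree to a list of neighbors.
    Leaves enter the queue first; a vertex enters the queue once at
    most one of its neighbors has not yet been queued.
    """
    remaining = {v: len(ns) for v, ns in adj.items()}  # unqueued neighbors
    queue, queued, order = deque(), set(), []

    def enqueue(v):
        queued.add(v)
        queue.append(v)
        for u in adj[v]:
            remaining[u] -= 1

    for v in adj:                      # seed with the leaves
        if len(adj[v]) <= 1:
            enqueue(v)
    while queue:                       # process in arrival order
        v = queue.popleft()
        order.append(v)
        for u in adj[v]:
            if u not in queued and remaining[u] <= 1:
                enqueue(u)
    return order
```

For a path $a$--$b$--$c$--$d$, the leaves $a, d$ are processed before the internal vertices, and the queue never empties before all vertices are processed.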
The constructive proof can be easily adapted to yield paths for actually moving the pebbles (note that doing this requires more than linear time). Theorem \ref{t:pmt-ppt} allows us to reduce a $\pmg$ to an equally feasible $\ppg$.
\begin{corollary}\label{c:reduce} An instance $I = (G, S, D)$ of $\pmg$ can be reduced to a $\ppg$ instance, $I' = (G, S', \Pi)$, in linear time.
\end{corollary}
{\sc Proof.} Given a $\pmg$ instance $I = (G, S, D)$, compute in linear time a spanning tree $T_G$ of $G$. From the $\pmt$ instance $(T_G, S, D)$, compute an equally feasible $\ppt$ instance $(T_G, S', \Pi)$ (via Theorem \ref{t:pmt-ppt}) in which $d_i = s_{\Pi(i)}'$ for all $i$. $I' = (G, S', \Pi)$ is the desired $\ppg$ instance. To see that $I$ and $I'$ are equally feasible, note that the moves connecting $S, S'$ on $T_G$ remain valid on $G$. ~\qed
Corollary \ref{c:reduce} yields another useful corollary, which says that a $\ppg$ instance can be converted to an equivalent one such that the pebbles occupy an arbitrary set of vertices.
\begin{corollary}\label{c:reduce2} Let $I = (G, S, \Pi)$ be an arbitrary $\ppg$ instance and let $V_A$ be an arbitrary set of $|V(S)|$ vertices of $G$. Then $I$ can be reduced to a $\ppg$ instance, $I' = (G, S', \Pi)$, in linear time, such that $V(S') = V_A$.
\end{corollary}
{\sc Proof.} Let $A$ be an arbitrary configuration of the $|V(S)|$ pebbles with $V(A) = V_A$. From the $\pmg$ instance $(G, S, A)$, a $\ppg$ instance $(G, S', \Pi')$ can be computed by Corollary \ref{c:reduce} such that the $\pmg$ instance $(G, S, S')$ is feasible. Let $D$ be the goal configuration of $I$ (i.e., $d_i = s_{\Pi(i)}$). Applying the same moves (that take $S$ to $S'$) to $D$ yields $D'$ satisfying $d_i' = s_{\Pi(i)}'$. Therefore, $S'$ occupies the same set of vertices as $A$, and $(G, S, \Pi)$ is feasible if and only if $(G, S', \Pi)$ is feasible. ~\qed
\section{Partitioning of $\ppg$ Instances}\label{sec:partition}
We now partition a $\ppg$ instance $I = (G, S, \Pi)$ based on the graph $G$, as some cases require relatively simple but special treatment. A {\em maximal $2$-edge-connected component} ($\tec$ for short) of $G$ is a $2$-edge-connected component of $G$ that is not contained in any other $2$-edge-connected component of $G$. We use $n_{M}$ to denote the total number of vertices over all $\tec$s of $G$. The $\tec$s of a graph can be found in $O(|V| + |E|)$ time \cite{Tar72}. The main goal of this section is to solve all $\ppg$ instances other than those with $n_M - 2 \le p \le n - 2$. The case of $p = n$ is trivial. For the discussion in this section, unless otherwise stated, let $(G, S, D)$ be the $\pmg$ instance identical to $I$ (that is, $d_i = s_{\Pi(i)}$).
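For concreteness, one standard way to find the $\tec$s within the cited $O(|V| + |E|)$ bound is to compute all bridges with a depth-first search and then take the connected components of the bridge-free graph. The sketch below is our own illustration (recursive, so only suitable for small graphs) and assumes $G$ is simple.

```python
from collections import deque

def bridges(adj):
    """Bridge edges of a simple undirected graph via DFS low-links."""
    disc, low, out, timer = {}, {}, set(), [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:                       # back edge
                low[u] = min(low[u], disc[v])
            else:                               # tree edge
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:            # no back edge over (u, v)
                    out.add(frozenset((u, v)))

    for s in adj:
        if s not in disc:
            dfs(s, None)
    return out

def tecs(adj):
    """Maximal 2-edge-connected components with at least two vertices."""
    cut = bridges(adj)
    seen, comps = set(), []
    for s in adj:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        seen.add(s)
        while q:                                # BFS avoiding bridges
            u = q.popleft()
            for v in adj[u]:
                if v not in seen and frozenset((u, v)) not in cut:
                    seen.add(v)
                    comp.add(v)
                    q.append(v)
        if len(comp) >= 2:
            comps.append(comp)
    return comps
```

On two triangles joined by a single edge, that edge is the unique bridge and each triangle is a $\tec$.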
The first special case is when $G$ is the $\theta_0$ graph with seven vertices, which is formed by connecting an extra vertex to two vertices of distance $3$ on a hexagon \cite{KorMilSpi84}. Any $\ppg$ instance in which $G$ is the $\theta_0$ graph can be solved in constant time since there are only a finite number of possible configurations.
The second special case is when $G$ is a cycle. In this case, $S$ and $D$, as sequences of vertices, induce natural cyclic orderings of the pebbles. This implies that $I$ is feasible if and only if $s_i = d_{(i + k) \textrm{ mod } p}$ for some fixed natural number $k$. The associated computation requires linear time.
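The cyclic-ordering test can be sketched as follows; this is our own illustration, assuming $p < n$ (so at least one vertex is empty and pebbles can rotate around the cycle), with configurations given as dictionaries mapping pebble labels to vertices numbered $0, \dots, n-1$ along the cycle.

```python
def cycle_feasible(S, D):
    """Feasibility of a pebble permutation on a cycle (sketch).

    S, D: dicts mapping pebble labels to vertex numbers 0..n-1 along
    the cycle.  With at least one empty vertex (p < n), pebbles can
    only rotate, so D is reachable from S exactly when the cyclic
    order of the labels around the cycle is the same in both.
    """
    a = [peb for _, peb in sorted((v, peb) for peb, v in S.items())]
    b = [peb for _, peb in sorted((v, peb) for peb, v in D.items())]
    if len(a) != len(b):
        return False
    if not a:
        return True
    # try every rotation of a against b (quadratic for clarity)
    return any(a[k:] + a[:k] == b for k in range(len(a)))
```

The rotation check shown is $O(p^2)$ for clarity; for the claimed linear bound one would instead search for $b$ inside the doubled sequence $a + a$ with a linear-time string-matching algorithm.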
The third special case is when $G$ is a $2$-connected (i.e., non-separable) graph that is not a cycle or the $\theta_0$ graph. As pointed out in \cite{Wil74}, if $G$ is bipartite and $p = n - 1$, all configurations of pebbles fall into two equivalence classes. Within each equivalence class, all configurations are connected, and two configurations from different classes are not connected. Deciding whether two configurations are connected can be performed in linear time by computing the {\em parity} of the two configurations (i.e., treating the configurations as permutations). Continuing with this special case, if $G$ is bipartite and $p \le n - 2$, or if $G$ is not bipartite and $p \le n - 1$, then all configurations are connected. We summarize the cases mentioned so far in the following lemma.
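The parity computation mentioned here amounts to counting the cycles of the permutation taking one configuration to the other: a permutation on $n$ elements with $c$ cycles factors into $n - c$ transpositions. A short Python sketch (ours):

```python
def parity(pi):
    """Parity of a permutation pi, given as a list with pi[i] the
    image of i.  Returns 0 (even) or 1 (odd): a permutation on n
    elements with c cycles is a product of n - c transpositions.
    """
    n, seen, cycles = len(pi), [False] * len(pi), 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:          # walk one cycle
                seen[j] = True
                j = pi[j]
    return (n - cycles) % 2
```

In the bipartite $p = n - 1$ case above, two configurations are connected exactly when their relative permutation is even, per \cite{Wil74}.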
\begin{lemma}\label{l:special} Let $I = (G, S, \Pi)$ be a $\ppg$ instance in which $G$ is a $2$-connected graph. The feasibility test of $I$ can be performed in linear time. Moreover, $I$ is always feasible if:
\begin{enumerate}
\item $G$ is not a cycle and $p \le n - 2$, or
\item $G$ is not a cycle, not the $\theta_0$ graph, not bipartite, and $p \le n - 1$.
\end{enumerate}\end{lemma}
If a $\ppg$ instance is always feasible, any pair of pebbles can switch locations without affecting the other pebbles. In general, when two pebbles can exchange locations without net effect on the locations of other pebbles, they are {\em equivalent} with respect to the specific configuration they are associated with. More formally, two pebbles $i, j$ are equivalent with respect to a configuration $S$ if $S$ is connected to a configuration $S'$ in which $s_i = s_j', s_j = s_i'$, and $s_k = s_k'$ for all $k \ne i, j$. A set of pebbles is equivalent if every pair of pebbles from the set is equivalent. It is clear that this type of equivalence is reflexive, symmetric, and transitive. Note that our definition of pebble equivalence coincides with the definition of vertex equivalence used in \cite{AulMonParPer99}.
\begin{corollary}\label{c:feasible-0} Let $I = (G, S, \Pi)$ be an instance of $\ppg$ in which $G$ is a single $2$-connected component plus a single degree one vertex attached to that component. If $p \le n - 2$, then $I$ is feasible.
\end{corollary}
{\sc Proof.} Let the component be $H$ and the degree one vertex be $v$. We may assume that in $S$, $v$ is occupied by a pebble $i$. If $H$ is a cycle, with two empty vertices on the cycle $H$, it is straightforward to check that $i$ can be exchanged with any other pebble on $H$. This implies that all pebbles are equivalent with respect to $S$. Thus, $I$ is feasible for an arbitrary $\Pi$.
If $H$ is not a cycle, with pebble $i$ on $v$, all pebbles on $H$ are equivalent with respect to $S$ by Lemma \ref{l:special}. Fixing any cycle $C$ on $H$ adjacent to $v$, moving the empty vertices to $C$ shows that pebble $i$ is equivalent to at least one pebble on $C$. By transitivity, all pebbles are again equivalent with respect to $S$. ~\qed
The fourth special case is when $G$ is $2$-edge-connected and separable (i.e., not $2$-connected), for which statements similar to that from Lemma \ref{l:special} can be made. We include all four cases in the following theorem.
\begin{theorem}\label{t:special}Let $I = (G, S, \Pi)$ be a $\ppg$ instance in which $G$ is a $2$-edge-connected graph. The feasibility test of $I$ can be performed in linear time. Moreover, $I$ is always feasible if $G$ is not a cycle and $p \le n - 2$.
\end{theorem}
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2in]{2c.eps}
\end{center}
\caption{\label{fig:2c} A $2$-edge-connected graph can be viewed as a tree of its maximal $2$-connected components.}
\end{figure}
{\sc Proof.} Since the case of $G$ being non-separable is covered by Lemma \ref{l:special}, assume that $G$ is separable. $G$ is then two or more $2$-connected graphs joined at articulation vertices, forming a tree-like structure (see, e.g., Fig. \ref{fig:2c}). If $p \le n - 2$, without loss of generality, assume (given configuration $S$) that there are two empty vertices $v_1, v_2$ on a ``leaf'' $2$-connected component $H$ of $G$. Let $H'$ be the $2$-connected component adjacent to $H$, and let $i_1, i_2$ be two pebbles on $H'$ occupying vertices closest to $H$. By Corollary \ref{c:feasible-0}, the current pebbles on $H$ in $S$ and $i_1, i_2$ are all equivalent. Restricting our attention to $H'$ and $v_2$, all pebbles on $H'$ are equivalent with respect to $S$ by Corollary \ref{c:feasible-0}. By transitivity of pebble equivalence, all pebbles on $H, H'$ in $S$ are equivalent. Inductively, all pebbles in $S$ are equivalent.
If $p = n - 1$, imagine in Fig. \ref{fig:2c} that (given configuration $S$) all vertices other than $v_1$ are occupied. Since no pebbles can cross the border between $H, H'$, if $I|_H$ ($I$ restricted to $H$ in the natural way) is not feasible, then $I$ cannot be feasible. The same applies to $I|_{H'}$. By shifting the empty vertex to other $2$-connected components, additional restricted instances can be obtained. $I$ is feasible if and only if all such restrictions are feasible. Since these restricted instances can be checked independently, the overall time needed is $O(|V| + |E|)$.~\qed
With Theorem \ref{t:special}, we can state a more useful version of Corollary \ref{c:feasible-0}.
\begin{corollary}\label{c:feasible} Let $I = (G, S, \Pi)$ be an instance of $\ppg$ in which $G$ is a single $\tec$ with a single degree one vertex attached to the $\tec$. If $p \le n - 2$, then $I$ is feasible.
\end{corollary}
Next, we look at the case of $p = n - 1$ for general graphs. The proof strategy is similar to that used for proving the $p = n - 1$ case of Theorem \ref{t:special}.
\begin{theorem}The feasibility of a $\ppg$ instance in which $p = n - 1$ can be decided in $O(|V| + |E|)$ time. \end{theorem}
{\sc Proof.} Let $I = (G, S, \Pi)$ be an instance of $\ppg$ in which $p = n-1$. The claim holds when $G$ is a tree or a $2$-edge-connected graph; assume $G$ is not such a graph. Let $H$ be an $\tec$ of $G$. Without loss of generality, we may assume that in configuration $S$, the only unoccupied vertex is within $H$, leaving $|V(H)| - 1$ pebbles on $H$. By Theorem \ref{t:special}, the feasibility of $I|_H$ can be decided in linear time.
If $I|_H$ is not feasible, then $I$ itself cannot be feasible. If $I|_H$ is feasible, then other $\tec$s of $G$ are examined next. For this, configuration $S$ is updated to $S'$ such that another $\tec$ (if any) of $G$ will now have one unoccupied vertex (note that pebbles do not need to be actually moved). Performing the same movements on $D$ gives $D'$ with the same unoccupied vertex (recall that $D$ is the goal configuration). This yields an equivalent $\ppg$ instance $I' = (G, S', \Pi)$. Let the $\tec$ with the unoccupied vertex be $H'$; the feasibility of $I'|_{H'}$ can also be checked in linear time. If any $I'|_{H'}$ is infeasible, then $I$ is infeasible. Checking the feasibility of all such restricted instances takes total time $O(|V| + |E|)$.
If $I$ remains feasible after the above checks, the rest of the pebbles (those that have not appeared in $I|_H$ or some $I'|_{H'}$) must be examined. For each of these pebbles, say pebble $i$, if $s_i \ne d_i$, then $I$ is not feasible. Otherwise, $I$ is feasible. ~\qed
The last special case is when $p \le n_{M} - 3$.
\begin{theorem}\label{t:nmminus3} A $\ppg$ instance is feasible when $p \le n_M - 3$ and $G$ is not a cycle. \end{theorem}
{\sc Proof.} Let $I = (G, S, \Pi)$ be an arbitrary instance of $\ppg$ in which $p \le n_M - 3$. This excludes $G$ from being a tree. If $G$ is a $2$-edge-connected graph and not a cycle, then the claim trivially holds by Theorem \ref{t:special}. The same is true if $G$ contains only one $\tec$, by Corollary \ref{c:feasible}. For the rest of the proof, assume that $G$ contains two or more $\tec$s. We only work with configuration $S$ and prove the case $p = n_M - 3$; the other cases then trivially follow.
By Corollary \ref{c:reduce2}, assume without loss of generality that all $p$ pebbles are on $\tec$s of $G$ and one $\tec$, say $H$, has three unoccupied vertices. This means that $\tec$s other than $H$ are fully occupied. Let $H'$ be another $\tec$ such that there are no other $\tec$s between $H$ and $H'$. Let $i_1, i_2$ be two pebbles on $H'$ that occupy vertices closest to $H$. Since there are no pebbles between $H$ and $H'$, pebbles $i_1, i_2$ can be moved such that $i_1$ is on $H$ and $i_2$ is on a vertex adjacent to $H$ (see, e.g., Fig. \ref{fig:nmminus3}). Let the vertex to which $i_2$ is moved be $v$; note that $v$ may be on $H'$. Let the new configuration be $S'$. By Corollary \ref{c:feasible}, the pebbles currently on $H$ plus $i_2$ are all equivalent with respect to $S'$ and therefore $S$.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=1.6in]{nmminus3.eps}
\end{center}
\caption{\label{fig:nmminus3} Moving two pebbles between two adjacent $\tec$s $H$ and $H'$. The two small dotted circles are the new locations of pebbles $i_1$ (left) and $i_2$ (right).}
\end{figure}
On the other hand, from the configuration $S$, $i_1$ can be moved away from $H'$ and $i_2$ can then be moved to a vertex adjacent to $H'$. This leaves two empty vertices on $H'$; let this configuration be $S''$. Following the same argument, all pebbles on $H'$ plus $i_2$ are equivalent with respect to $S''$ and therefore, $S$. By transitivity, all pebbles on $H, H'$ in configuration $S$ are equivalent. Inductively, this shows that all pebbles are equivalent with respect to $S$. Thus $I$ is feasible.
~\qed
\section{Reducing Pebble Permutations to Pebble Exchanges}
In this section, let $I = (G, S, \Pi)$ be a $\ppg$ instance in which $G$ contains an $\tec$, $G$ is not a single $\tec$, and $p \le n - 2$. We show that such an instance is feasible if and only if for all $i$, pebbles $i$ and $\Pi(i)$ are equivalent with respect to $S$. That is, pebbles $i$ and $\Pi(i)$ can be exchanged without net effects on other pebbles.
\begin{lemma}\label{l:ij} Let $S$ be an arbitrary configuration and let $i, j$ be two pebbles. If both $i$ and $j$ can reach two distinct vertices (not necessarily the same two vertices for $i, j$) of an $\tec$ $H$, then $i$ and $j$ are equivalent with respect to $S$.
\end{lemma}
{\sc Proof.} Assume without loss of generality that pebble $i$ first gets to a configuration $S'$ such that it is on $H$ and can move to a nearby empty vertex on $H$. Assume that pebble $j$ can do the same in a later configuration $S''$ (if both pebbles $i, j$ are already on $H$ with an empty vertex on $H$, $i, j$ are equivalent by Corollary \ref{c:feasible}).
Starting from $S'$, if there is only one empty vertex on $H$, move pebbles outside $H$ so that a vertex adjacent to $H$ is empty (possible since $G$ is not a single $\tec$). This setup satisfies the conditions of Corollary \ref{c:feasible}. Therefore, all pebbles on $H$, including $i$, are equivalent with respect to the current configuration and also $S'$. Since an $\tec$ contains at least $3$ vertices, $i$ is equivalent to at least one other pebble (note that this pebble may not be on $H$ in configuration $S'$).
From configuration $S'$, to move $j$ to $H$, some pebbles may need to be moved to $H$ and some pebbles, including $i$, may get moved out of $H$. We now augment the configurations between $S'$ and $S''$ so that pebble $i$ never leaves $H$. First note that if $i$ is the only pebble on $H$ and is supposed to leave $H$, then it must be for some other pebble to move into $H$. It is straightforward to verify that the next pebble entering $H$ must also be equivalent to $i$ and can take on pebble $i$'s role. If $i$ is not the only pebble on $H$ and is supposed to leave, again we can let some pebble equivalent to $i$ leave $H$. With this augmentation, when pebble $j$ eventually gets to $H$ and can move to another vertex of $H$, $i$ and $j$ are clearly equivalent with respect to the current configuration and therefore, $S$ (exchange $i, j$ and reverse all previous moves).
~\qed
For a given $G$, if a vertex $x$ is an articulation vertex, removing $x$ splits $G$ into two or more components; denote the set of these components by $C(x)$ (the graph $G$ is implicit in the notation). Denote the component from $C(x)$ containing a vertex $y$ as $C(x, y)$, and the union of the remaining components as $\overline{C(x, y)}$. We now give a generalization of Lemma 6 from \cite{AulMonParPer99}.
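The component $C(x, y)$ can be computed by a breadth-first search from $y$ that never enters $x$; a small Python sketch (ours):

```python
from collections import deque

def component(adj, x, y):
    """C(x, y): vertices of the component of G - x that contains y."""
    seen, q = {y}, deque([y])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w != x and w not in seen:   # never step onto x
                seen.add(w)
                q.append(w)
    return seen
```

$\overline{C(x, y)}$ is then simply the remaining vertices, \texttt{set(adj) - \{x\} - component(adj, x, y)}.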
\begin{lemma}\label{l:uvw} Let $S$ be an arbitrary configuration and suppose that $i, j$ are two pebbles occupying vertices $u, v$, respectively. Let $w$ be a vertex such that two shortest paths between $u, v$ and $v, w$ share at least one common edge. Suppose that there are moves that take $S$ to a configuration in which $i$ is at $v$ and $j$ is at $w$. Then $i, j$ are equivalent with respect to $S$.
\end{lemma}
\noindent {\sc Proof.} Lemma 6 from \cite{AulMonParPer99} shows that Lemma \ref{l:uvw} holds when $G$ is a tree. Our proof seeks to reduce our case to the tree case. Note that when $G$ is a tree, the shortest paths between a pair of vertices are unique, which is not the case for a general graph. The statement of Lemma \ref{l:uvw} only requires that a shortest path between some $u, v$ and a shortest path between $v, w$ share an edge. Also, pebbles other than $i, j$ can be treated as indistinguishable pebbles; their locations before and after the moves do not matter.
Let the sequence of moves that takes $i, j$ to $v, w$, respectively, be $X$. For convenience, let $u \leadsto v$ and $v \leadsto w$ denote two shortest paths that share at least one edge. These paths may not be unique on $G$. We require that $u\leadsto v$ and $v\leadsto w$ have only one intersection (which may contain multiple edges and vertices), denoted $y \leadsto v$. $u\leadsto y$ and $y \leadsto w$ denote the parts of $u\leadsto v$ and $v \leadsto w$, respectively, after removing $y \leadsto v$. Note that fixing a $u\leadsto v$, if some vertex $z$ on $u\leadsto v$ is on an $\tec$ then any path that $i$ takes to reach $v$ from $u$ must also reach the same $\tec$ even if $i$ does not pass $z$.
If $u, v$ belong to the same $\tec$ $H$, any $u \leadsto v$, as a shortest path, must fall entirely in $H$. This suggests that $i$ and $j$ can both reach two vertices of $H$. $i, j$ are equivalent by Lemma \ref{l:ij}. Assume for the rest of the proof that $u, v$ do not belong to the same $\tec$.
If any edge $e$ in $y \leadsto v$ appears in an $\tec$ $H$, then $i, j$ must visit at least two vertices of $H$ (not necessarily passing edge $e$) and must be equivalent. We may then assume that $y\leadsto v$ has no edges belonging to an $\tec$ of $G$. Furthermore, we may assume that no vertices on $y \leadsto v$ (including $y$ but not $v$) belong to any $\tec$ $H$; otherwise $i, j$ must be equivalent. To see this, assume without loss of generality that $y$ is on some $\tec$ $H$. At some point, $i$ must pass through $H$. If $i$ travels through an edge of $H$, we return to the earlier case; suppose not. This forces $i$ to pass $H$ through $y$. When $i$ has just arrived at $y$, there is an empty vertex on $u \leadsto y$ and there must also be an empty vertex in $\overline{C(y, u)}$. These two empty vertices allow $i$ to reach at least one more vertex of $H$ other than $y$. The same applies to $j$, making $i, j$ equivalent with respect to $S$ by Lemma \ref{l:ij}.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=2in]{pebble-equ.eps}
\end{center}
\caption{\label{fig:pebble-equ} If $i, j$ (on $u, v$, respectively) can reach $v, w$, then a pebble equivalent to $i$ can reach $v$ without using features of the $\tec$s as $j$ moves to $w$.}
\end{figure}
We now look at $u \leadsto y$. If any vertex inside $u \leadsto y$ (i.e., excluding $u, y$) belongs to some $\tec$, let $H$ be such an $\tec$ that is closest to $y$ ($H$ may not be unique). Since $i$ can reach $H$, it can reach two vertices of $H$ (by the argument in the previous paragraph). Pebble $i$ can then take any position on $H$. Let $x$ be the vertex of $H$ that is closest to $y$ (this vertex is unique); we may assume that $i$ is initially located on a vertex $u'$ of $H$ adjacent to $x$ (see, e.g., Fig. \ref{fig:pebble-equ}). Using the argument in the proof of Lemma \ref{l:ij}, $X$ can be modified so that $i$ never needs to move to vertices other than $u'$ in $\overline{C(x, y)}$. This means that we can effectively treat $x$ as a $T$-junction instead of a vertex of an $\tec$ for moving $i, j$ to $v, w$, respectively. If no vertex inside $u \leadsto y$ is on an $\tec$, it is possible that $X$ requires $i$ to travel in the opposite direction of $u \leadsto y$, further away from $u$. The same argument shows that a similar $u'$ exists (e.g., $v'$ in Fig. \ref{fig:pebble-equ}).
Following the same argument, $v', w'$ can be defined similarly so that $X$ can be modified to take $i$ from $u'$ to $v$ and $j$ from $v$ to $w'$, without $i, j$ ever reaching two vertices of any $\tec$ other than $u', v'$, and $w'$. Since no features of $\tec$s are used, we return to the tree case and $i, j$ are equivalent. ~\qed
With Lemma \ref{l:uvw}, Corollaries 1 and 2 from \cite{AulMonParPer99} can be extended to general graphs. We only need the extended version of Corollary 1, stated and proved below.
\begin{corollary}\label{c:exchange}Let $I = (G, S, \Pi)$ be a feasible $\ppg$ instance. Then there exists $1 \le i \le p$, such that pebbles $i$ and $\Pi(i)$ are equivalent with respect to $S$.
\end{corollary}
{\sc Proof.} Apply the proof of Corollary 1 from \cite{AulMonParPer99} to a spanning tree of $G$. ~\qed
The main result of this section is an extension of Theorem 4 in \cite{AulMonParPer99} to general graphs. We provide a shorter proof enabled by the transitivity of pebble equivalence.
\begin{theorem}\label{t:equiv}Let $I = (G, S, \Pi)$ be a feasible $\ppg$ instance. Then for all $1 \le i \le p$, pebbles $i$ and $\Pi(i)$ are equivalent with respect to $S$.
\end{theorem}
{\sc Proof.} After one application of Corollary \ref{c:exchange}, at least one pebble can be moved to its desired goal vertex (assuming that $\Pi$ is not the identity permutation). Repeated applications of pebble exchanges will eventually move pebble $i$ to vertex $s_{\Pi(i)}$. By transitivity of pebble equivalence, $i$ can be exchanged with a pebble occupying $s_{\Pi(i)}$ at some point. Similarly, any pebble exchanged to occupy $s_{\Pi(i)}$ at any point must be equivalent to $\Pi(i)$. Thus, $i$ and $\Pi(i)$ are equivalent with respect to $S$. ~\qed
An implication of Theorem \ref{t:equiv} is that to test the feasibility of a $\ppg$ instance, $I = (G, S, \Pi)$, we may work with $S$ and $\Pi$ separately by first computing pebble equivalence classes with respect to $S$. Two pebbles are put into the same class if and only if they are equivalent with respect to $S$. The instance $I$ is feasible if and only if pebbles $i$ and $\Pi(i)$ belong to the same equivalence class by Theorem \ref{t:equiv}.
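Given the equivalence classes, this final membership check is straightforward; a Python sketch (names are ours), with classes given as disjoint sets of pebble ids and $\Pi$ as a dictionary:

```python
def feasible_from_classes(classes, Pi):
    """Feasibility test given the pebble equivalence classes (sketch).

    classes: disjoint sets of pebble ids, the equivalence classes with
    respect to S; Pi: dict mapping each pebble i to Pi(i).  By the
    theorem above, the instance is feasible iff i and Pi(i) always
    share a class.
    """
    cid = {}
    for k, cls in enumerate(classes):      # label each class
        for peb in cls:
            cid[peb] = k
    return all(cid[i] == cid[j] for i, j in Pi.items())
```

For example, with classes $\{1, 2\}$ and $\{3\}$, the permutation swapping pebbles $1$ and $2$ is feasible, while any permutation moving pebble $3$ is not.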
\section{Linear Feasibility Test of Pebble Exchanges}
In this section, we reduce the feasibility test of pebble exchanges on general graphs to the feasibility test of pebble exchanges on trees in linear time. The linear time algorithm from \cite{AulMonParPer99} then applies. The key lemma enabling this reduction is as follows.
\begin{lemma}\label{l:shrunk} Let $I = (G, S, \Pi)$ be a $\ppg$ instance in which $p \le n - 2$ and $G$ contains an $\tec$ $H$ with one empty vertex. Contract $H$ into a single edge $(v_1, v_2)$ such that all vertices adjacent to $H$ are now adjacent to $v_1$. Let this new graph be $G'$. All pebbles already on $H$ are treated as a single pebble (equivalence class) staying on $v_2$, leaving $v_1$ empty. Then the pebble equivalence classes on $G$ with respect to $S$ are the same as the pebble equivalence classes computed on $G'$.
\end{lemma}
{\sc Proof.} We need to show that movements of pebbles can be done with $H$ if and only if equivalent movements can also be done using the new structure. Call the single pebble that represents the $|V(H)| - 1$ pebbles the {\em composite} pebble.
First, note that it is never necessary to have more than two empty vertices on $H$ in any planned moves of pebbles. To see this, suppose at some point more than two vertices of $H$ are to be emptied. The reason for doing this can only be to allow other pebbles to move into or through $H$. However, with two empty vertices on $H$, both objectives can already be achieved. To move a pebble $i$ through $H$, with two empty vertices, $i$ can enter $H$, leaving one empty vertex on $H$. This empty vertex then allows $i$ to move to any desired exit.
Next, observe that the only reason to fill $H$ with pebbles is when a pebble needs to ``pass by'' $H$ (i.e., the pebble enters and leaves $H$ without visiting other vertices of $H$). To see this, suppose a pebble $i$ enters $H$, making $H$ fully occupied. If $i$ is to move to other vertices of $H$, then some other vertices of $H$ must be emptied first, which can be done before $i$ enters $H$.
We may now assume that the planned moves never leave more than two empty vertices on $H$, and that $H$ is never full unless a pebble is to pass by it. This leaves three possible operations involving $H$:
\begin{enumerate}
\item moving a pebble out of $H$, leaving $|V(H)| - 2$ pebbles on $H$;
\item moving a pebble into $H$ when there are $|V(H)| - 2$ pebbles on $H$; and
\item moving a pebble through $H$ (not a ``pass by'').
\end{enumerate}
For the first case, with $|V(H)| - 1$ pebbles on $H$, any pebble $i$ on $H$ can be moved to a desired exit. Note that this requires a vertex adjacent to $H$, say $v$, to be empty. To carry out the same operation on the edge $(v_1, v_2)$, we split off a pebble from the composite pebble, moving it from $v_2$ to $v_1$, and let it represent pebble $i$ (the other pebbles represented by the composite pebble stay at $v_2$ and are ``invisible''). Pebble $i$ can then be moved to $v$ as well. It is clear that the only if part is also true. The second case is the reverse of the first case.
For the third case, suppose we want to move a pebble $i$ from a vertex $u$ adjacent to $H$ to a vertex $v$, also adjacent to $H$. For this to be doable through $H$, there must be at least two empty vertices in $H$ and $v$ combined. Without loss of generality, assume that $H$ has one empty vertex and $v$ is empty. We first move a pebble, say $j$, from $H$ to $v$. The two empty vertices now on $H$ allow $i$ to be exchanged with a pebble $k$ on $H$; $k$ now occupies $u$. This again leaves two empty vertices on $H$, allowing $i$ to exchange with $j$ on $v$. The $|V(H)| - 1$ pebbles on $H$ can then be returned to their initial configuration by Corollary \ref{c:feasible}. The same operation is straightforward to carry out on $(v_1, v_2)$ and $u, v$: with two empty vertices, $v_2$ and $v$ can be emptied, allowing $i$ to move to $v$ directly. ~\qed
For a $\tec$ $H$ that is fully occupied, it is also converted as outlined in the above lemma. The only difference is that a pebble now needs to be put on $v_1$. This pebble cannot always be arbitrarily selected from the pebbles on $H$. Since there are at least two empty vertices somewhere on $G$, at least two pebbles can be moved out of $H$. If there are two vertices adjacent to $H$ that can be emptied (without moving pebbles on $H$), then all pebbles on $H$ are equivalent (with respect to the current configuration) by Corollary \ref{c:feasible}. To shrink $H$ in this case, an arbitrary pebble on $H$ can be selected to occupy $v_1$, and the other $|V(H)| - 1$ pebbles are combined into a composite pebble that occupies $v_2$. On the other hand, if only one vertex adjacent to $H$, say $v$, can be emptied, the pebble occupying the vertex of $H$ adjacent to $v$ may not be equivalent to the other $|V(H)| - 1$ pebbles. These $|V(H)| - 1$ pebbles, however, are equivalent by Corollary \ref{c:feasible}. In this latter case, we let this not necessarily equivalent pebble occupy $v_1$ (in the shrunk graph $G'$) and combine the rest into a composite pebble occupying $v_2$. The equivalence between $H$ and the converted edge can be proven formally using the same proof as Lemma \ref{l:shrunk} (one needs to add a case that moves the pebble on $v_1$ away before any other operations).
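The graph side of the contraction in Lemma \ref{l:shrunk} can be sketched as a plain adjacency-list operation; a Python illustration (ours), assuming \texttt{v1}, \texttt{v2} are fresh vertex names not already in the graph:

```python
def contract_tec(adj, H, v1="v1", v2="v2"):
    """Contract the tec with vertex set H into the edge (v1, v2) (sketch).

    Every vertex formerly adjacent to H becomes adjacent to v1, and v2
    (which will carry the composite pebble) hangs off v1.  v1, v2 must
    be fresh vertex names not already present in adj.
    """
    # keep vertices outside H, dropping their edges into H
    new = {u: [w for w in ws if w not in H]
           for u, ws in adj.items() if u not in H}
    boundary = {u for h in H for u in adj[h] if u not in H}
    for u in boundary:
        new[u].append(v1)
    new[v1] = sorted(boundary) + [v2]
    new[v2] = [v1]
    return new
```

Contracting the triangle of a triangle-with-pendant graph leaves exactly the pendant vertex attached to $v_1$, with $v_2$ as a leaf.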
A reduction example is given in Fig. \ref{fig:reduce}.
\begin{lemma}\label{l:main} Let $I = (G, S, \Pi)$ be a $\ppg$ instance in which $n_M - 2 \le p \le n - 2$. The feasibility of $I$ can be decided in time $O(|V| + |E|)$.
\end{lemma}
{\sc Proof.} The case of $G$ being a tree or a single $\tec$ is already covered. If $G$ contains a single $\tec$ and $p = n_M - 2$, then $I$ is feasible by Corollary \ref{c:feasible}.
For the rest of the cases, by Corollary \ref{c:reduce2}, we may assume that in configuration $S$, any $\tec$ $H$ is occupied by at least $|V(H)| -1$ pebbles. Using the reduction from Lemma \ref{l:shrunk} (and the comments that follow), all $\tec$s can be converted into single edges (with the associated pebbles combined), yielding a tree in the end. From this we can obtain a $\ppt$ instance. Grouping pebbles into equivalence classes for this $\ppt$ instance can be performed in linear time using the algorithm from \cite{AulMonParPer99}.~\qed
\begin{figure}[htp]
\begin{center}
\includegraphics[width=3.2in]{reduce.eps}
\end{center}
\caption{\label{fig:reduce} A graph with three $\tec$s (top) and the converted tree (bottom).}
\end{figure}
Combining Lemma \ref{l:main} with the results from Section \ref{sec:partition} (covering the cases $p \le n_M - 3$ and $p \ge n - 1$) yields the main result of this paper, Theorem \ref{t:main}.
\bibliographystyle{plain}
https://arxiv.org/abs/2006.11612 | Unstable Modules with the Top $k$ Squares | Unstable modules over the Steenrod algebra with only the top $k$ operations are introduced in the language of ringoids. We prove that the category of such modules has homological dimension at most $k$. A practical method, which generalizes the $\Lambda$ complex, to compute the $\mathrm{Ext}$ group from such modules to spheres is proposed. We also establish several functors relating such modules to unstable modules over the Steenrod algebra, and describe the connections between the $\mathrm{Ext}$ groups in them. | \section*{Introduction}
Let $A$ be the Steenrod algebra over the field $\mathbb{F}_{2}$.
The purpose of this paper is to investigate the category $\mathcal{U}_{k}$ of unstable left $A$-modules where only the top $k$ Steenrod squares are allowed.
In general, the top $k$ Steenrod squares on a homogeneous element of degree $n$ are $\mathrm{Sq}^{n},\mathrm{Sq}^{n-1},\dots,\mathrm{Sq}^{n-k+1}$.
The cohomology of a topological space is naturally an unstable left $A$-module.
The category $\mathcal{U}$ of such modules has been studied extensively (see e.g. \cite{schwartz1994}).
It is a basic problem in algebraic topology to compute the $\mathrm{Ext}$ groups between spheres in this category, as they coincide with the $E_{2}$ page of the unstable Adams spectral sequence.
Unfortunately, those $\mathrm{Ext}$ groups are often difficult to compute and the category $\mathcal{U}$ is not of finite homological dimension.
Our category $\mathcal{U}_{k}$ is easier to work with than $\mathcal{U}$ in the sense that it is of finite homological dimension.
Computing $\mathrm{Ext}$ groups in $\mathcal{U}_{k}$ in turn contributes to computing $\mathrm{Ext}$ groups in $\mathcal{U}$: when a module $N$ in $\mathcal{U}$ is bounded above by degree $n$ and $k$ is large compared to $n$, the $\mathrm{Ext}$ groups into $N$ in the two categories agree.
Just as the $\Lambda$ complex computes the $\mathrm{Ext}$ groups into spheres in $\mathcal{U}$, our $\Lambda_{k}$ complex gives an algorithmic method to compute the $\mathrm{Ext}$ groups into spheres in $\mathcal{U}_{k}$.
Our arguments also give an alternative proof to the fact that the traditional $\Lambda$ complex computes the $\mathrm{Ext}$ groups into spheres in $\mathcal{U}$.
The paper consists of 6 sections, the first of which introduces the theory of ringoids and prepares for the formal definition of the category $\mathcal{U}_{k}$ in Section \ref{sec:U_k}.
Some examples of modules in $\mathcal{U}_{k}$ and the symmetric monoidal structure of $\mathcal{U}_{k}$ are also given in Section \ref{sec:U_k}.
Section \ref{sec:functors} is devoted to various functors between $\mathcal{U}_{k},\mathcal{U}_{k+1},\mathcal{U}$.
Our main theorem on the homological dimension of $\mathcal{U}_{k}$ is presented in Section \ref{sec:U_k_homodim}.
Section \ref{sec:Lambda_k} introduces the $\Lambda_{k}$-complex of a module in $\mathcal{U}_{k}$ and shows that it computes the $\mathrm{Ext}$ group from that module into spheres.
The convergence of an inverse system of $\mathrm{Ext}$ groups in $\mathcal{U}_{k}$'s to the $\mathrm{Ext}$ group in $\mathcal{U}$ is studied in Section \ref{sec:invsys}.
The author would like to thank Professor Haynes Miller without whom this paper would not exist.
Professor Miller's conjecture that the homological dimension of $\mathcal{U}_{k}$ is at most $k$ was the starting point of this project.
Professor Miller pointed the author to this topic and the author benefited a lot from weekly discussions with him.
\section{Ringoids}
\label{sec:ringoids}
In this section, we briefly review the theory of ringoids; none of this material is original to this paper.
Informally, a ringoid is a ring with several objects.
Similar to rings, ringoids have subringoids, ideals and quotients.
We refer interested readers to the more extensive literature, e.g. \cite{mitchell1972}.
\subsection{Ringoids and modules}
\begin{definition}[Preadditive category]
A category $\mathcal{A}$ is called preadditive if each morphism set $\mathcal{A}(x, y)$ is endowed with the structure of an abelian group in such a way that the compositions \[\mathcal{A}(x, y)\times\mathcal{A}(y, z)\to\mathcal{A}(x, z)\] are bilinear.
\end{definition}
\begin{definition}[Additive functor]
A functor $F:\mathcal{A}\to\mathcal{B}$ of preadditive categories is called additive if and only if \[F:\mathcal{A}(x, y)\to\mathcal{B}(F(x), F(y))\] is a homomorphism of abelian groups for all objects $x$ and $y$ in $\mathcal{A}$.
\end{definition}
\begin{definition}[Ringoid]
A ringoid is a small preadditive category and a morphism of ringoids is an additive functor.
Denote the category of ringoids by $\mathbf{Ringoid}$.
A ring is just a special ringoid --- a ringoid with a single object.
Many statements in ring theory can be generalized to ringoids.
\end{definition}
\begin{definition}[Left module over a ringoid]
\label{def:moduleRingoid}
If $\mathcal{A}$ is a ringoid, then a left $\mathcal{A}$-module is a covariant additive functor from $\mathcal{A}$ to the category $\mathbf{Ab}$ of abelian groups.
These modules form a category $\mathcal{A}\mathbf{Mod}$.
From now on, by an $\mathcal{A}$-module we mean a left $\mathcal{A}$-module.
Note that for any object $x$ in a ringoid $\mathcal{A}$, the covariant functor $\mathcal{A}(x,-):\mathcal{A}\to\mathbf{Ab}$ is an $\mathcal{A}$-module.
\end{definition}
\begin{proposition}
\label{prop:yoneda}
For any object $x$ in a ringoid $\mathcal{A}$ and any $\mathcal{A}$-module $M$, we have the following isomorphism of abelian groups
\[\mathcal{A}\mathbf{Mod}(\mathcal{A}(x,-),M) \cong M(x).\]
Moreover, this isomorphism is natural in $M$: given an $\mathcal{A}$-module map $f:M\to N$, we have the following commutative diagram
\[\begin{tikzcd}
\mathcal{A}\mathbf{Mod}(\mathcal{A}(x,-),M)\arrow[r]\arrow[d] &M(x)\arrow[d]\\
\mathcal{A}\mathbf{Mod}(\mathcal{A}(x,-),N)\arrow[r] &N(x).
\end{tikzcd}\]
That is to say, the functor $\mathcal{A}\mathbf{Mod}(\mathcal{A}(x,-), -)$ from $\mathcal{A}\mathbf{Mod}$ to $\mathbf{Ab}$ is equal to the functor sending $M$ to $M(x)$.
\end{proposition}
\begin{proof}
This follows from the Yoneda lemma.
\end{proof}
\begin{proposition}
For any object $x$ in the ringoid $\mathcal{A}$, the $\mathcal{A}$-module $\mathcal{A}(x,-)$ is projective.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:yoneda}, the functor $\mathcal{A}\mathbf{Mod}(\mathcal{A}(x,-),-)$ is equal to the functor $\mathcal{A}\mathbf{Mod}\to\mathbf{Ab}$ sending $M$ to $M(x)$.
So this functor is exact and the module $\mathcal{A}(x,-)$ is projective.
\end{proof}
\begin{proposition}
\label{prop:enoughProj}
$\mathcal{A}\mathbf{Mod}$ is a complete and cocomplete abelian category with enough projectives.
\end{proposition}
\begin{proof}
Since $\mathbf{Ab}$ is a complete and cocomplete abelian category which satisfies \emph{Ab}5, the same is true of $\mathcal{A}\mathbf{Mod}$.
See \cite{grothendieck1957} for more details on the axiom \emph{Ab}5.
To see that $\mathcal{A}\mathbf{Mod}$ has enough projectives, it suffices to construct for every $\mathcal{A}$-module $M$ an epimorphism $e:F\to M$ with $F$ projective.
For each object $x$ in the ringoid $\mathcal{A}$ and each element $m$ in the abelian group $M(x)$, we take the $\mathcal{A}$-module map $\mathcal{A}(x,-)\to M$ corresponding to $m\in M(x)$, according to Proposition \ref{prop:yoneda}.
Then the map $\mathcal{A}(x,-)\to M$ sends $\mathrm{id}_{x}\in\mathcal{A}(x,x)$ to $m\in M(x)$.
Taking the direct sum of all such maps, we get an epimorphism from a direct sum of projective modules to $M$.
\end{proof}
\begin{proposition}
\label{prop:projFree}
An $\mathcal{A}$-module $M$ is projective if and only if there exists another $\mathcal{A}$-module $N$ such that \[M\oplus N = \bigoplus_{i\in I}\mathcal{A}(x(i),-)\] for some index set $I$.
\end{proposition}
\begin{proof}
If $M\oplus N = \bigoplus_{i\in I}\mathcal{A}(x(i),-)$, then $M\oplus N$ is projective and $M$, as a retract of $M\oplus N$, is projective as well.
For the other direction, let $M$ be a projective $\mathcal{A}$-module.
According to the proof to Proposition \ref{prop:enoughProj}, we can construct an epimorphism $e:F\to M$ with $F=\bigoplus_{i\in I}\mathcal{A}(x(i),-)$.
Since $M$ is projective, we can construct $f:M\to F$ such that $e\circ f$ is equal to the identity map on $M$.
Denote the kernel of $e$ by $N$.
Then it is easy to check that the map $M\oplus N\to F$ induced by $f:M\to F$ and the monomorphism $N\to F$ is an isomorphism.
\end{proof}
\subsection{Subringoids and ideals}
A subring of a ring is an abelian subgroup that is closed under multiplication and contains the multiplicative identity of the ring.
We generalize this to the following definition of subringoid.
\begin{definition}[Subringoid]
A subringoid is a wide preadditive subcategory of the ringoid.
In other words, a subringoid $\mathcal{B}$ of a ringoid $\mathcal{A}$ is a subcategory of $\mathcal{A}$ such that
\begin{itemize}
\item
$\mathrm{Ob}(\mathcal{B}) = \mathrm{Ob}(\mathcal{A})$,
\item
$\mathcal{B}(x, y)$ is an abelian subgroup of $\mathcal{A}(x, y)$ for all objects $x, y$.
\end{itemize}
\end{definition}
An ideal of a ring is an abelian subgroup that is closed under multiplication with elements in the ring both from the left and right.
We generalize this to the following definition of an ideal of a ringoid.
\begin{definition}[Ideal and quotient]
An ideal $\mathcal{I}$ of a ringoid $\mathcal{A}$ consists of an abelian subgroup $\mathcal{I}(x,y)$ of $\mathcal{A}(x,y)$ for each pair of objects $x,y\in\mathrm{Ob}(\mathcal{A})$ such that for all objects $x,y,z\in\mathrm{Ob}(\mathcal{A})$,
\begin{itemize}
\item
the image of composition $\mathcal{I}(x, y)\times \mathcal{A}(y, z)\to \mathcal{A}(x, z)$ lies in $\mathcal{I}(x, z)$,
\item
the image of composition $\mathcal{A}(x, y)\times \mathcal{I}(y, z)\to \mathcal{A}(x, z)$ lies in $\mathcal{I}(x, z)$.
\end{itemize}
Given a ringoid $\mathcal{A}$ and an ideal $\mathcal{I}$ in it, we can form the quotient $\mathcal{Q} = \mathcal{A}/\mathcal{I}$ by setting the morphisms in $\mathcal{I}$ equal to zero.
We call $\mathcal{Q}$ the quotient ringoid of the ringoid $\mathcal{A}$ by the ideal $\mathcal{I}$.
More precisely, given a ringoid $\mathcal{A}$ and an ideal $\mathcal{I}$ in it, we define $\mathcal{Q}$ as the category with
\begin{itemize}
\item
$\mathrm{Ob}(\mathcal{Q}) = \mathrm{Ob}(\mathcal{A})$,
\item
$\mathcal{Q}(x, y) = \mathcal{A}(x,y) / \mathcal{I}(x, y)$ for all objects $x,y$.
\end{itemize}
The quotient maps $\mathcal{A}(x,y)\to\mathcal{Q}(x,y)$ on morphisms provide a ringoid map $\mathcal{A}\to\mathcal{Q}$.
\end{definition}
In ring theory, we have
\begin{itemize}
\item
any intersection of subrings of ring $R$ is again a subring of $R$,
\item
any intersection of ideals of ring $R$ is again an ideal of $R$,
\item
the intersection of an ideal and a subring is an ideal of the subring.
\end{itemize}
The next lemma gives their counterparts in ringoid theory.
Since the proof is straightforward, we omit it.
\begin{lemma}[Intersection]
Let $\mathcal{A}$ be a ringoid. Then we have
\begin{itemize}
\item
any intersection of subringoids of $\mathcal{A}$ is again a subringoid of $\mathcal{A}$,
\item
any intersection of ideals of $\mathcal{A}$ is again an ideal of $\mathcal{A}$,
\item
the intersection of an ideal $\mathcal{I}$ and a subringoid $\mathcal{B}$ is an ideal of the subringoid $\mathcal{B}$.
\end{itemize}
\end{lemma}
\begin{definition}[Subringoid generated by a set of morphisms]
Let $\mathcal{A}$ be a ringoid and $\mathcal{M}$ be a set of morphisms in it.
Then the subringoid of $\mathcal{A}$ generated by $\mathcal{M}$ is defined to be the smallest subringoid of $\mathcal{A}$ containing all the morphisms in $\mathcal{M}$, namely the intersection of all such subringoids.
More explicitly, the subringoid of $\mathcal{A}$ generated by $\mathcal{M}$ is the ringoid $\mathcal{B}$ such that $\mathrm{Ob}(\mathcal{B}) = \mathrm{Ob}(\mathcal{A})$, and for any two objects $x$ and $y$,
\[\mathcal{B}(x,y) = \left\{\sum_{i=1}^{n}a_{i}m_{i}\right\},\]
where $n\geq 0, a_{i}\in\mathbb{Z}$ and $m_{i}:x\to y$ is a composition of morphisms in $\mathcal{M}$.
Note that we allow the composition to be empty, in which case $m_{i}$ is the identity morphism $\mathrm{id}:x\to x$.
It is easy to see that $\mathcal{B}$ is a subringoid of $\mathcal{A}$ and it is the smallest one containing $\mathcal{M}$.
\end{definition}
\begin{definition}[Ideal generated by a set of morphisms]
Let $\mathcal{A}$ be a ringoid and $\mathcal{M}$ be a set of morphisms in it.
Then the ideal of $\mathcal{A}$ generated by $\mathcal{M}$ is defined to be the smallest ideal of $\mathcal{A}$ containing all the morphisms in $\mathcal{M}$, namely the intersection of all such ideals.
More explicitly, the ideal of $\mathcal{A}$ generated by $\mathcal{M}$ is $\mathcal{I}$ with \[\mathcal{I}(x,y) = \left\{\sum_{i=1}^{n}a_{i}\cdot(f_{i}\circ m_{i}\circ g_{i})\right\},\]
where $n\geq 0, a_{i}\in\mathbb{Z}, g_{i}\in\mathcal{A}(x, x_{i}), m_{i}\in\mathcal{M}(x_{i}, y_{i}), f_{i}\in\mathcal{A}(y_{i}, y)$.
It is easy to see that $\mathcal{I}$ is an ideal of $\mathcal{A}$ and that it is the smallest one containing $\mathcal{M}$.
\end{definition}
\subsection{Adjunction between quivers and ringoids}
In this subsection, we will develop a pair of adjoint functors between the category of quivers and the category of ringoids.
Then we will use this to give an alternative definition of subringoid generated by a set of morphisms.
\begin{definition}[Quiver]
A quiver $G$ consists of two sets $E$ and $V$ and two functions $s,t:E\rightrightarrows V$.
If $G = (E, V, s, t)$ and $G' = (E', V', s', t')$ are two quivers, a morphism $g:G\to G'$ is a pair of morphisms $g_{0}:V\to V'$ and $g_{1}:E\to E'$ such that $g_{0}\circ s = s'\circ g_{1}$ and $g_{0}\circ t = t'\circ g_{1}$.
Denote the category of quivers by $\mathbf{Quiver}$.
Intuitively, $E$ consists of oriented edges and $V$ consists of vertices.
\end{definition}
Denote the category of categories by $\mathbf{Cat}$.
\begin{definition}[Free functor from $\mathbf{Quiver}$ to $\mathbf{Cat}$]
We define the free functor $F:\mathbf{Quiver}\to\mathbf{Cat}$ to be the left adjoint to the forgetful functor $u:\mathbf{Cat}\to\mathbf{Quiver}$.
Below we give one construction of the free functor.
Given a quiver $G = (E, V)$, we construct a category $\mathcal{C} = FG$ such that
\begin{itemize}
\item
$\mathrm{Ob}(\mathcal{C}) = V$,
\item
the morphism set $\mathcal{C}(x,y)$ is the set of finite paths from $x$ to $y$ in the quiver $G$, where a path is a finite sequence of composable edges and the ``empty paths'' constitute the identity morphisms of $\mathcal{C}$,
\item
the composition law of $\mathcal{C}$ follows from concatenation of paths in the quiver $G$.
\end{itemize}
\end{definition}
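A small computational sketch may make the free category concrete. The following Python code (our own helper, with names and quiver encoding of our choosing) enumerates the morphism sets of $FG$ up to a path-length bound; the empty path at each vertex serves as the identity, and composition is concatenation of paths.

```python
def free_category_homs(vertices, edges, max_len):
    """Morphism sets of the free category on a quiver, truncated at
    paths of length max_len.  edges: list of (source, target, label);
    a morphism from x to y is a tuple of edge labels."""
    homs = {(x, y): set() for x in vertices for y in vertices}
    for x in vertices:
        homs[(x, x)].add(())                    # empty path = identity
    frontier = [((), x, x) for x in vertices]   # (path, start, end)
    for _ in range(max_len):
        new = []
        for path, x, y in frontier:
            for s, t, lbl in edges:
                if s == y:                      # extend the path by one edge
                    p = path + (lbl,)
                    if p not in homs[(x, t)]:
                        homs[(x, t)].add(p)
                        new.append((p, x, t))
        frontier = new
    return homs
```

For the quiver with one loop $a$ at vertex $0$ and one edge $e$ from $0$ to $1$, the morphisms $0\to 1$ of length at most $2$ are $e$ and $ea$, as expected.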
\begin{definition}[Free functor from $\mathbf{Cat}$ to $\mathbf{Ringoid}$]
We define the free functor $\mathbb{Z}:\mathbf{Cat}\to\mathbf{Ringoid}$ to be the left adjoint to the forgetful functor $u:\mathbf{Ringoid}\to\mathbf{Cat}$.
Here is one construction of the free functor $\mathbb{Z}$.
Given a category $\mathcal{C}$, we construct a ringoid $\mathcal{A} = \mathbb{Z}\mathcal{C}$ such that
\begin{itemize}
\item
$\mathrm{Ob}(\mathcal{A}) = \mathrm{Ob}(\mathcal{C})$,
\item
the morphism set $\mathcal{A}(x, y)$ is the free abelian group generated by $\mathcal{C}(x, y)$, where $x, y$ are any two objects in $\mathrm{Ob}(\mathcal{A}) = \mathrm{Ob}(\mathcal{C})$,
\item
the composition $\mathcal{A}(x,y)\times \mathcal{A}(y,z)\to \mathcal{A}(x, z)$ sends \[\left(\sum_{i=1}^{m}a_{i}f_{i}, \sum_{j=1}^{n}b_{j}g_{j}\right)\mapsto\sum_{i=1}^{m}\sum_{j=1}^{n}a_{i}b_{j}(g_{j}\circ f_{i}),\]
where $a_{i}, b_{j}$ are integers and $f_{i}\in\mathcal{C}(x,y), g_{j}\in\mathcal{C}(y, z)$.
\end{itemize}
\end{definition}
Composing those two pairs of adjoint functors above, we get a pair of adjoint functors between $\mathbf{Quiver}$ and $\mathbf{Ringoid}$.
Next, we will use that adjunction to give an alternative definition of the subringoid generated by a set of morphisms.
In general, the image of a functor is not necessarily a subcategory.
But as we will see in the following lemma, when the functor is ``nice'', the image will be a subcategory.
\begin{lemma}
If $F:\mathcal{A}\to\mathcal{B}$ is a functor of categories such that $F:\mathrm{Ob}(\mathcal{A})\to\mathrm{Ob}(\mathcal{B})$ is bijective, then the image $\mathcal{C}:=F\mathcal{A}$ is a wide subcategory of $\mathcal{B}$.
\end{lemma}
\begin{proof}
We know $\mathrm{Ob}(\mathcal{C}) = \mathrm{Ob}(\mathcal{B}) = \mathrm{Ob}(\mathcal{A})$.
We also know that the morphism set $\mathcal{C}(x, y)$ is equal to the image of $F:\mathcal{A}(x, y)\to\mathcal{B}(x, y)$ for all objects $x$ and $y$.
So the identity morphisms exist in $\mathcal{C}$.
The composition law in $\mathcal{C}$ follows the composition law in $\mathcal{B}$.
We only need to check that if $f\in\mathcal{C}(x,y)$ and $g\in\mathcal{C}(y,z)$, then $g\circ f\in\mathcal{C}(x, z)$.
Since $f = F(f')$ for some $f'\in\mathcal{A}(x,y)$ and $g = F(g')$ for some $g'\in\mathcal{A}(y,z)$, we have $g\circ f = F(g')\circ F(f') = F(g'\circ f')$ with $g'\circ f'\in\mathcal{A}(x, z)$.
Therefore, $g\circ f\in\mathcal{C}(x, z)$.
\end{proof}
Furthermore, the following lemma will give us a condition on when the image of a morphism of ringoids is a subringoid.
\begin{lemma}
If $F:\mathcal{A}\to\mathcal{B}$ is a morphism of ringoids such that $F:\mathrm{Ob}(\mathcal{A})\to\mathrm{Ob}(\mathcal{B})$ is bijective, then the image $\mathcal{C}:=F\mathcal{A}$ is a subringoid of $\mathcal{B}$.
\end{lemma}
\begin{proof}
By the lemma above, $\mathcal{C}$ is a wide subcategory of $\mathcal{B}$.
Since $F:\mathcal{A}(x, y)\to\mathcal{B}(x, y)$ is a homomorphism of abelian groups for all objects $x$ and $y$, we know that $\mathcal{C}(x,y)$ is an abelian subgroup of $\mathcal{B}(x,y)$.
Therefore, by definition $\mathcal{C}$ is a subringoid of $\mathcal{B}$.
\end{proof}
\begin{definition}[Subringoid generated by a set of morphisms]
Let $\mathcal{A}$ be a ringoid and $\mathcal{M}$ be a set of morphisms in it.
Denote by $\mathcal{M}'$ the union of $\mathcal{M}$ and $\{\mathrm{id}:x\to x \mid x\in\mathrm{Ob}(\mathcal{A})\}$.
Then we get a morphism of quivers $\mathcal{M}'\to u\mathcal{A}$ and, by adjunction, this gives rise to a morphism of ringoids $F\mathcal{M}'\to\mathcal{A}$.
Since this morphism of ringoids is bijective on objects, its image is a subringoid of $\mathcal{A}$ by the lemma above.
We define this subringoid as the subringoid of $\mathcal{A}$ generated by morphisms in $\mathcal{M}$.
\end{definition}
\section{Unstable $A$-modules with the top $k$ squares}
\label{sec:U_k}
\subsection{Steenrod algebra and unstable modules over it}
\begin{definition}[Steenrod algebra]
The mod 2 Steenrod algebra $A$ is the quotient of the free unital graded $\mathbb{F}_{2}$-algebra generated by elements $\mathrm{Sq}^{i}$ of degree $i$ by the relations
\[\mathrm{Sq}^{0} = 1, \qquad \mathrm{Sq}^{i} = 0\quad \textrm{when }i < 0,\]
and the Adem relations
\begin{equation}
\label{eq:steenrod_relation}
\mathrm{Sq}^{i}\mathrm{Sq}^{j} = \sum_{t = 0}^{\lfloor i/2\rfloor}{j - t - 1 \choose i - 2t}\mathrm{Sq}^{i + j - t}\mathrm{Sq}^{t}\quad\textrm{when }0<i<2j.
\end{equation}
We shall denote by $\mathcal{M}$ the category of graded left $A$-modules, whose morphisms are $A$-linear maps of degree zero.
From now on, by an $A$-module we mean a module in the category $\mathcal{M}$.
Note that the Adem relations actually hold for all integers $i, j$.
\end{definition}
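The Adem relations are easy to experiment with numerically. The sketch below (our own helper, not part of the paper) expands $\mathrm{Sq}^{i}\mathrm{Sq}^{j}$ for $0<i<2j$ via the mod-2 binomial coefficients, and iterates the relation to rewrite an arbitrary monomial in terms of admissible ones.

```python
from math import comb

def adem(i, j):
    """Mod-2 Adem expansion of Sq^i Sq^j for 0 < i < 2j, returned as a
    list of pairs (a, b) meaning Sq^a Sq^b (with Sq^0 = 1)."""
    assert 0 < i < 2 * j
    return [(i + j - t, t) for t in range(i // 2 + 1)
            if comb(j - t - 1, i - 2 * t) % 2]

def admissible_expand(word):
    """Iterate the Adem relations to rewrite Sq^{word[0]}...Sq^{word[-1]}
    as a mod-2 set of admissible monomials (tuples with w[s] >= 2*w[s+1])."""
    terms = {tuple(w for w in word if w != 0)}   # drop Sq^0 = 1
    while True:
        for w in terms:
            bad = next((s for s in range(len(w) - 1)
                        if w[s] < 2 * w[s + 1]), None)
            if bad is not None:
                break
        else:
            return terms                          # everything is admissible
        terms ^= {w}                              # replace w mod 2 ...
        for a, b in adem(w[bad], w[bad + 1]):     # ... by its expansion
            nw = w[:bad] + ((a, b) if b else (a,)) + w[bad + 2:]
            terms ^= {nw}
```

This reproduces, for instance, the familiar identities $\mathrm{Sq}^{1}\mathrm{Sq}^{1}=0$, $\mathrm{Sq}^{2}\mathrm{Sq}^{2}=\mathrm{Sq}^{3}\mathrm{Sq}^{1}$ and $\mathrm{Sq}^{2}\mathrm{Sq}^{3}=\mathrm{Sq}^{5}+\mathrm{Sq}^{4}\mathrm{Sq}^{1}$.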
The following is a standard fact.
\begin{lemma}
A basis of the Steenrod algebra $A$ as a graded vector space over $\mathbb{F}_{2}$ is given by the admissible monomials $\mathrm{Sq}^{i(1)}\mathrm{Sq}^{i(2)}\cdots\mathrm{Sq}^{i(m)}$ with $i(1)\geq 2i(2),i(2)\geq 2i(3),\dots,i(m-1)\geq 2i(m), i(m)> 0$.
Note that when $m = 0$, we have $\mathrm{Sq}^{0} = 1$.
\end{lemma}
\begin{notation}[Lower squares]
If $x$ is a homogeneous element in an $A$-module, then we denote \[\mathrm{Sq}_{i}x = \mathrm{Sq}^{|x| - i}x.\]
\end{notation}
\begin{proposition}[Adem relations in lower squares]
\label{prop:AdemLower}
Take any $A$-module $M$.
Let $i, j, n$ be any integers satisfying $n> j$ and $2n> i + j$.
Then for any homogeneous element $x$ of degree $n$ in $M$, we have
\[\mathrm{Sq}_{i}\mathrm{Sq}_{j}x = \sum_{s=\lceil (i+j)/2 \rceil}^{n}{s-j-1 \choose 2s-i-j}\mathrm{Sq}_{i+2j-2s}\mathrm{Sq}_{s}x.\]
\end{proposition}
\begin{proof}
Compute:
\[
\begin{split}
\mathrm{Sq}_{i}\mathrm{Sq}_{j}x
&= \mathrm{Sq}^{2n-i-j}\mathrm{Sq}^{n-j}x\\
&= \sum_{t=0}^{\lfloor n-(i+j)/2\rfloor}{n-j-t-1\choose 2n-i-j-2t}\mathrm{Sq}^{3n-i-2j-t}\mathrm{Sq}^{t}x\\
&= \sum_{t=0}^{\lfloor n-(i+j)/2\rfloor}{n-j-t-1\choose 2n-i-j-2t}\mathrm{Sq}_{-2n+i+2j+2t}\mathrm{Sq}_{n-t}x\\
&= \sum_{s=\lceil (i+j)/2 \rceil}^{n}{s-j-1 \choose 2s-i-j}\mathrm{Sq}_{i+2j-2s}\mathrm{Sq}_{s}x.
\end{split}
\]
The second equality comes from the Adem relations in upper squares.
The last equality comes from the substitution $s = n-t$.
\end{proof}
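This translation can be sanity-checked numerically. The sketch below (our own code; index conventions as in the proposition) expands $\mathrm{Sq}_{i}\mathrm{Sq}_{j}$ on degree $n$ both by the lower-index formula and by passing through the upper-index Adem relation; the two agree in the inadmissible range $i>j$, where the upper relation applies.

```python
from math import comb, ceil

def adem_upper(i, j):
    """Mod-2 Adem expansion of Sq^i Sq^j, 0 < i < 2j, as pairs (a, b)."""
    assert 0 < i < 2 * j
    return {(i + j - t, t) for t in range(i // 2 + 1)
            if comb(j - t - 1, i - 2 * t) % 2}

def lower_adem(n, i, j):
    """Right-hand side of the lower-index Adem relation on degree n,
    as pairs (i', s) meaning Sq_{i'} Sq_s.  Assumes i > j, n > j."""
    return {(i + 2 * j - 2 * s, s) for s in range(ceil((i + j) / 2), n + 1)
            if comb(s - j - 1, 2 * s - i - j) % 2}

def lower_via_upper(n, i, j):
    """Sq_i Sq_j on degree n equals Sq^{2n-i-j} Sq^{n-j}; expand with the
    upper-index Adem relation and translate back to lower indices."""
    return {(n + b - a, n - b) for a, b in adem_upper(2 * n - i - j, n - j)}
```

For example, on degree $10$ one finds $\mathrm{Sq}_{5}\mathrm{Sq}_{3} = \mathrm{Sq}_{3}\mathrm{Sq}_{4}$ by either route, in accordance with the observation below that the result is a sum of $\mathrm{Sq}_{i'}\mathrm{Sq}_{j'}$ with $i'\leq j'$.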
As in the case of upper squares, the Adem relations in lower squares hold for all integers $i,j,n$.
When $i>j$, all the terms of the summation on the right hand side satisfy $i+2j-2s\leq s$.
The reason is $s\geq (i+j)/2 \geq j$ and thus $(i+j-2s) + (j-s)\leq 0$.
So whenever a product $\mathrm{Sq}_{i}\mathrm{Sq}_{j}$ with $i>j$ appears, one can rewrite it as a sum of products $\mathrm{Sq}_{i'}\mathrm{Sq}_{j'}$ with $i'\leq j'$.
This observation agrees with Proposition \ref{prop:Gn} in a later section, which says whenever there is a sequence of lower squares, one can always rewrite it as a sum of $\mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}$'s such that $i(1)\leq i(2)\leq\dots\leq i(m)$.
\begin{definition}[Unstable $A$-module]
An $A$-module is said to be unstable if $\mathrm{Sq}^{i}x = 0$ for any $i>n$ and any homogeneous element $x$ of degree $n$.
Or in other words, $M$ is unstable if $\mathrm{Sq}_{i}M = 0$ for all $i<0$.
Note that if $M$ is an unstable $A$-module, then $M^{n} = 0$ for all $n < 0$ because $x = \mathrm{Sq}^{0}x = 0$ for any homogeneous element $x$ in $M$ of degree $n < 0$.
We shall denote by $\mathcal{U}$ the full subcategory of $\mathcal{M}$ with objects the unstable ones.
From now on, by an unstable $A$-module we mean a module in the category $\mathcal{U}$.
\end{definition}
\begin{example}[Sphere module $S(n)$]
For any integer $n$, we define the sphere module $S(n)$ to be the $A$-module with the degree $n$ part equal to $\mathbb{F}_{2}$ and the other parts equal to zero.
The sphere module is defined for any integer $n$, but it is unstable only when $n \geq 0$.
\end{example}
\subsection{Intuition and formal definition}
Let $k$ be any natural number.
We will define a new kind of ``unstable $A$-module'' in which the only generating Steenrod operations allowed are the $\mathrm{Sq}_{i}$ with $i<k$.
As indicated by ``unstable'', we require $\mathrm{Sq}_{i}=0$ when $i<0$, and thus $M^{n} = 0$ when $n < 0$.
For example, when $k=0$, the only Steenrod operations allowed are the trivial $\mathrm{Sq}_{i} = 0$ with $i <0$, so we simply have non-negatively graded $\mathbb{F}_{2}$-vector spaces.
When $k = 1$, we have one nontrivial Steenrod operation $\mathrm{Sq}_{0}$ which doubles the degree.
Note that $\mathrm{Sq}_{0}$ is equal to the identity on degree zero.
For another example, when $k = 3$, we have nontrivial Steenrod operations $\mathrm{Sq}_{2},\mathrm{Sq}_{1},\mathrm{Sq}_{0}$ and trivial Steenrod operations $\mathrm{Sq}_{i} = 0$ with $i<0$.
The Steenrod operations $\mathrm{Sq}_{2}$ and $\mathrm{Sq}_{1}$ are not available in all degrees --- $\mathrm{Sq}_{2}$ acts on degrees $\geq 2$ and $\mathrm{Sq}_{1}$ acts on degrees $\geq 1$.
We shall denote by $\mathcal{U}_{k}$ the category of such ``unstable $A$-modules''.
To make this idea clear, we are going to use the language of ringoids.
We are interested in
\begin{itemize}
\item
$\mathcal{M}$, the category of $A$-modules,
\item
$\mathcal{U}$, the category of unstable $A$-modules,
\item
$\mathcal{U}_{k}$, the category of ``unstable $A$-modules with only the top $k$ squares''.
\end{itemize}
Although $\mathcal{U}$ is not the category of all modules over any graded ring, it is the category of all modules over a certain ringoid.
In fact, we can formulate each of the three categories above as the category of all modules over some ringoid.
\begin{definition}[Ringoid $\mathcal{R}$]
Let $\mathcal{R}$ be the ringoid such that
\begin{itemize}
\item
the objects are all the integers,
\item
for any $a, b\in\mathbb{Z}$, the morphism set $\mathcal{R}(a, b)$ is the $\mathbb{F}_{2}$-vector space whose basis are all finite sequences of integers $(c_{1},c_{2},\dots,c_{m})$ such that $a = c_{1} < c_{2} <\dots< c_{m} = b$.
\end{itemize}
We write the sequence \[(a=c_{1},c_{2},\dots,c_{m}=b)\] as \[\mathrm{Sq}^{c_{2}-c_{1}}\mathrm{Sq}^{c_{3}-c_{2}}\cdots\mathrm{Sq}^{c_{m}-c_{m-1}}.\]
For example, the morphism set $\mathcal{R}(-1, 2)$ is a four-dimensional $\mathbb{F}_{2}$-vector space with basis
\[(-1,0,1,2), (-1,0,2),(-1,1,2),(-1,2)\]
or equivalently
\[\mathrm{Sq}^{1}\mathrm{Sq}^{1}\mathrm{Sq}^{1},\mathrm{Sq}^{1}\mathrm{Sq}^{2},\mathrm{Sq}^{2}\mathrm{Sq}^{1},\mathrm{Sq}^{3}.\]
The identity morphism $n\to n$ is also written as $\mathrm{Sq}^{0}$.
\end{definition}
\begin{definition}[Ringoid $\mathcal{A}$]
\label{def:ringoid_A}
Let $\mathcal{I}$ be the ideal of $\mathcal{R}$ generated by the Adem relations
\[\mathrm{Sq}^{i}\mathrm{Sq}^{j} - \sum_{t = 0}^{\lfloor i/2\rfloor}{j - t - 1 \choose i - 2t}\mathrm{Sq}^{i + j - t}\mathrm{Sq}^{t}\in\mathcal{R}(n, n + i + j)\textrm{ with }0<i<2j.\]
Define $\mathcal{A}$ as the quotient ringoid of $\mathcal{R}$ by the ideal $\mathcal{I}$.
This new ringoid $\mathcal{A}$ is the same as the ringoid such that
\begin{itemize}
\item
the objects are all the integers,
\item
for any $a, b\in\mathbb{Z}$, the morphism set $\mathcal{A}(a, b)$ is the degree $(b-a)$ part of the Steenrod algebra $A$.
\end{itemize}
The category of left modules over the ringoid $\mathcal{A}$ is exactly $\mathcal{M}$, the category of modules over the Steenrod algebra $A$.
\end{definition}
\begin{definition}[Ringoid $\mathcal{Q}$]
Let $\mathcal{J}$ be the ideal of $\mathcal{R}$ generated by the Adem relations (as in Definition \ref{def:ringoid_A}) and the instability conditions
\[\mathrm{Sq}^{i}:n\to n + i\textrm{ with }i > n.\]
Define $\mathcal{Q}$ as the quotient ringoid of $\mathcal{R}$ by the ideal $\mathcal{J}$.
Then the category of left modules over the ringoid $\mathcal{Q}$ is exactly $\mathcal{U}$, the category of unstable modules over the Steenrod algebra $A$.
Note that $\mathcal{Q}$ can also be seen as the quotient of the ringoid $\mathcal{A}$ by the ideal $\mathcal{L}$ generated by the instability conditions alone.
\end{definition}
\begin{definition}[Ringoid $\mathcal{Q}_{k}$]
Let $k\geq 0$.
Let $\mathcal{Q}_{k}$ be the subringoid of $\mathcal{Q}$ generated by the top $k$ squares
\[\mathrm{Sq}^{i}:n\to n + i\textrm{ with }i\geq 0, n-i < k.\]
Denote the category of left modules over the ringoid $\mathcal{Q}_{k}$ by $\mathcal{U}_{k}$ and we call it the category of unstable modules over the Steenrod algebra $A$ with only the top $k$ squares.
\end{definition}
\begin{notation}
When $M$ is a module in $\mathcal{M},\mathcal{U}$ or $\mathcal{U}_{k}$, we use notations $M^{n}$ and $M(n)$ interchangeably throughout this paper to denote the degree $n$ part of module $M$.
\end{notation}
\begin{example}[Sphere module $S_{k}(n)$]
The sphere module $S_{k}(n)$ is defined to be the module in $\mathcal{U}_{k}$ with the degree $n$ part equal to $\mathbb{F}_{2}$ and the other parts equal to zero.
Note that $n$ cannot be negative because the negative degree parts of a module in $\mathcal{U}_{k}$ must be zero.
\end{example}
\subsection{Example: free modules}
\begin{definition}[Free modules]
By Definition \ref{def:moduleRingoid}, $\mathcal{A}(x,-)$ is an $\mathcal{A}$-module for any ringoid $\mathcal{A}$ and any object $x$ in the ringoid $\mathcal{A}$.
So when $\mathcal{A},\mathcal{Q},\mathcal{Q}_{k}$ are ringoids as defined in the last subsection,
\begin{itemize}
\item
$\mathcal{A}(n,-)$ is an $\mathcal{A}$-module for any integer $n$,
\item
$\mathcal{Q}(n,-)$ is a $\mathcal{Q}$-module for any integer $n$,
\item
$\mathcal{Q}_{k}(n,-)$ is a $\mathcal{Q}_{k}$-module for any integers $n$ and $k$ with $k\geq 0$.
\end{itemize}
These modules and their direct sums are said to be \emph{free} modules in $\mathcal{M},\mathcal{U},\mathcal{U}_{k}$ respectively.
Denote $F(n):=\mathcal{Q}(n,-)$ and $F_{k}(n):=\mathcal{Q}_{k}(n,-)$.
Note that $F(n) = 0$ and $F_{k}(n) = 0$ for all $n<0$.
\end{definition}
From now on, we use $\iota_{n}$ to denote the universal element of degree $n$, i.e. the element corresponding to the identity morphism $\mathrm{id}_{n}$.
\begin{proposition}
\label{prop:Gn}
A basis of $\mathcal{A}(n,-)$ as a graded vector space over $\mathbb{F}_{2}$ is
\[\mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}\iota_{n}\] with \[i(1)\leq i(2)\leq\dots\leq i(m)<n.\]
Note that when $m = 0$, we have $\iota_{n}$.
\end{proposition}
\begin{proof}
The admissible basis of the Steenrod algebra $A$ is \[\mathrm{Sq}^{j(1)}\mathrm{Sq}^{j(2)}\cdots\mathrm{Sq}^{j(m)}\] with \[j(s)>0, j(s)\geq 2j(s+1).\]
Therefore, a basis of $\mathcal{A}(n,-)$ is \[\mathrm{Sq}^{j(1)}\mathrm{Sq}^{j(2)}\cdots\mathrm{Sq}^{j(m)}\iota_{n}\] with \[j(s)>0, j(s)\geq 2j(s+1).\]
It translates into \[\mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}\iota_{n}\] with \[i(1)\leq i(2)\leq\dots\leq i(m)<n.\qedhere\]
\end{proof}
We will write down the basis of $F(n)$ and $F_{k}(n)$ as vector spaces over $\mathbb{F}_{2}$ explicitly.
Before that, we present a lemma about the structure of $\mathcal{L}$, the ideal of the ringoid $\mathcal{A}$ generated by $\mathrm{Sq}^{i}:n\to n+i$ with $i>n$.
Remember that the ringoid $\mathcal{Q}$ is equal to the quotient of $\mathcal{A}$ by $\mathcal{L}$.
\begin{lemma}
\label{lem:Lna_basis}
The vector space $\mathcal{L}(n,a)$ is generated by the admissible Steenrod monomials \[\mathrm{Sq}^{j(1)}\mathrm{Sq}^{j(2)}\cdots\mathrm{Sq}^{j(m)}:n\to a,\]
where there exists at least one $s\in[1,m]$ such that \[j(s)>n+\sum_{t=s+1}^{m}j(t).\]
\end{lemma}
\begin{proof}
By definition, $\mathcal{L}(n,a)$ is generated as a vector space by the (not necessarily admissible) Steenrod monomials
\[\mathrm{Sq}^{j(1)}\mathrm{Sq}^{j(2)}\cdots\mathrm{Sq}^{j(m)}:n\to a,\]
where there exists at least one $s\in[1,m]$ such that \[j(s)>n+\sum_{t=s+1}^{m}j(t).\]
Using the Adem relations (\ref{eq:steenrod_relation}), we can rewrite the morphism above as a sum of some admissible Steenrod monomials.
Say we are applying the Adem relations on $\mathrm{Sq}^{i}\mathrm{Sq}^{j}:b\to b+i+j$ and get a sum of $\mathrm{Sq}^{i'}\mathrm{Sq}^{j'}:b\to c$.
If $i>j+b$, then $i'>j'+b$ because the Adem relations increase the first upper index and decrease the second one.
If $j>b$, then $i'>j'+b$ because $i' = i+j-j'$ and $i\geq 2j'$ together imply $i'\geq j'+j>j'+b$.
Therefore, for each admissible summand \[\mathrm{Sq}^{j'(1)}\mathrm{Sq}^{j'(2)}\cdots\mathrm{Sq}^{j'(m')}:n\to\ell\] on the right hand side of the Adem relations, there exists at least one $s\in[1,m']$ such that \[j'(s) > n + \sum_{t=s+1}^{m'}j'(t).\]
So these admissible monomials generate $\mathcal{L}(n,a)$.
\end{proof}
\begin{proposition}
\label{prop:Fn}
When $n\geq 0$, a basis of $F(n)$ as a graded vector space over $\mathbb{F}_{2}$ is
\[\mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}\iota_{n}\] with \[0\leq i(1)\leq i(2)\leq\dots\leq i(m)< n.\]
Note that when $m = 0$, we have $\iota_{n}$.
\end{proposition}
\begin{proof}
We have $F(n) = \mathcal{Q}(n,-)$ and $F(n)^{a} = \mathcal{Q}(n,a) = \mathcal{A}(n,a)/\mathcal{L}(n,a)$.
As a vector space over $\mathbb{F}_{2}$, $\mathcal{Q}(n,-)$ is generated by
\[\mathrm{Sq}^{j(1)}\mathrm{Sq}^{j(2)}\cdots\mathrm{Sq}^{j(m)}\iota_{n}\]
with \[j(s)>0, j(s)\geq 2j(s+1), j(s)\leq n + \sum_{t = s+1}^{m}j(t).\]
They are linearly independent because, by Lemma \ref{lem:Lna_basis}, $\mathcal{L}(n,a)$ is spanned by the complementary admissible monomials.
This translates into \[\mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}\iota_{n}\] with \[0\leq i(1)\leq i(2)\leq\dots\leq i(m)< n.\qedhere\]
\end{proof}
\begin{proposition}
\label{prop:Fkn}
When $n\geq 0$, a basis of $F_{k}(n)$ as a graded vector space over $\mathbb{F}_{2}$ is \[\mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}\iota_{n}\] with \[0\leq i(1)\leq i(2)\leq\dots\leq i(m)< \min(n, k).\]
Note that when $m = 0$, we have $\iota_{n}$.
\end{proposition}
\begin{proof}
We have $F_{k}(n) = \mathcal{Q}_{k}(n,-)$ and $F_{k}(n)^{a} = \mathcal{Q}_{k}(n,a)$.
For all integers $n$ and $a$, $\mathcal{Q}_{k}(n,a)$ is the abelian subgroup of $\mathcal{Q}(n,a)$ generated by composites of $\mathrm{Sq}_{0},\dots,\mathrm{Sq}_{k-1}$.
So we can obtain a basis of $\mathcal{Q}_{k}(n,-)$ from a basis of $\mathcal{Q}(n,-)$ by selecting the basis elements that involve only $\mathrm{Sq}_{0},\dots,\mathrm{Sq}_{k-1}$.
Proposition \ref{prop:Fn} gives a basis of $\mathcal{Q}(n,-)$.
Therefore, a basis of $\mathcal{Q}_{k}(n,-)$ is \[\mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}\iota_{n}\] with \[0\leq i(1)\leq i(2)\leq\dots\leq i(m)< \min(n, k).\qedhere\]
\end{proof}
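To make the proposition concrete, here is a small enumeration sketch of our own (the function name `basis_dimensions` is ours, not from the paper); it uses only the basis description above and the degree formula $|\mathrm{Sq}_{i}x| = 2|x|-i$, which follows from $\mathrm{Sq}_{i} = \mathrm{Sq}^{|x|-i}$:

```python
# Illustrative sketch: enumerate the monomial basis of F_k(n) given by the
# proposition -- Sq_{i(1)} ... Sq_{i(m)} iota_n with
# 0 <= i(1) <= ... <= i(m) < min(n, k) -- and tabulate dimensions by degree,
# using |Sq_i x| = 2|x| - i.

def basis_dimensions(n, k, max_degree):
    """Return {degree: dim F_k(n)^degree} for degrees <= max_degree."""
    bound = min(n, k)           # allowed lower indices are 0, ..., bound - 1
    dims = {n: 1} if n <= max_degree else {}
    # frontier: (degree, exclusive upper bound for the next prepended index);
    # prepending Sq_i on the left requires i <= current leftmost index,
    # which keeps the index sequence nondecreasing left to right.
    frontier = [(n, bound)]
    while frontier:
        nxt = []
        for d, top in frontier:
            for i in range(top):
                d2 = 2 * d - i  # degree after applying Sq_i
                if d2 <= max_degree:
                    dims[d2] = dims.get(d2, 0) + 1
                    nxt.append((d2, i + 1))
        frontier = nxt
    return dims

# F_1(1) has a single basis element (Sq_0)^m iota_1 in each degree 2^m:
print(basis_dimensions(1, 1, 16))   # {1: 1, 2: 1, 4: 1, 8: 1, 16: 1}
```

For example, the enumeration confirms that $F_{1}(1)$ is one-dimensional exactly in the degrees $2^{m}$, spanned by $(\mathrm{Sq}_{0})^{m}\iota_{1}$.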
\begin{remark}[Locally Noetherian]
The category $\mathcal{U}$ is locally noetherian (see \cite{schwartz1994} Chapter 1, Section 8).
But the categories $\mathcal{U}_{k}$ are not locally noetherian in general.
For example, $\mathcal{U}_{2}$ is not locally noetherian.
Although $F_{2}(2)$ is finitely generated, it has a submodule which is not: the submodule $M$ generated by the elements $\mathrm{Sq}_{0}(\mathrm{Sq}_{1})^{i}\iota_{2}$ with $i\geq 0$ is not finitely generated.
\end{remark}
\subsection{Symmetric monoidal category}
We will construct a functor $\otimes:\mathcal{U}_{k}\times\mathcal{U}_{k}\to\mathcal{U}_{k}$, and then prove that $\mathcal{U}_{k}$ is a symmetric monoidal category with this tensor product and that $S_{k}(0)$ is the unit object with respect to this tensor product.
Before constructing the tensor product, we propose an alternative construction of ringoid $\mathcal{Q}_{k}$ in terms of generators and relations.
Recall that $\mathcal{Q} = \mathcal{R} / \mathcal{J}$ and $\mathcal{Q}_{k}$ is the subringoid of $\mathcal{Q}$ generated by the top $k$ squares.
In other words, $\mathcal{Q}_{k}$ is a subringoid of a quotient ringoid of $\mathcal{R}$.
We will present $\mathcal{Q}_{k}$ as a quotient ringoid of a subringoid of $\mathcal{R}$, after the following lemma.
\begin{lemma}
\label{lem:subquotient_ringoid}
Let $\mathcal{A}$ be a ringoid and $\mathcal{I}$ an ideal in it.
Let $\mathcal{M}$ be a set of morphisms in $\mathcal{A}$.
Then the subringoid $\mathcal{B}$ of $\mathcal{A}/\mathcal{I}$ generated by $\mathcal{M}$ is equivalent to the quotient ringoid of $\mathcal{C}$ by the ideal $\mathcal{I}$, where $\mathcal{C}$ is defined as the subringoid of $\mathcal{A}$ generated by $\mathcal{M}$ and $\mathcal{I}$.
\end{lemma}
\begin{proof}
The bijection on objects is easy to see.
We need to construct a bijection between $\mathcal{B}(x,y)$ and $(\mathcal{C}/\mathcal{I})(x,y)$ for any two objects $x,y$ of the ringoid $\mathcal{A}$.
Let $\mathcal{D}$ be the subringoid of $\mathcal{A}$ generated by $\mathcal{M}$.
Then $\mathcal{B}(x,y) = (\mathcal{D}(x,y) + \mathcal{I}(x,y))/\mathcal{I}(x,y) = \mathcal{C}(x,y)/\mathcal{I}(x,y) = (\mathcal{C}/\mathcal{I})(x,y)$.
\end{proof}
\begin{proposition}
\label{prop:Q_k_gen_rel}
Let $k\geq 0$.
Let $\mathcal{R}_{k}$ be the subringoid of $\mathcal{R}$ generated by
\[\mathcal{M}:=\left\{\mathrm{Sq}^{i}:n\to n + i\textrm{ with }i\geq 0, 0\leq n-i<k\right\}.\]
Then $\mathcal{R}_{k} / (\mathcal{R}_{k}\cap \mathcal{J})$ is equivalent to the ringoid $\mathcal{Q}_{k}$.
\end{proposition}
\begin{proof}
According to Lemma \ref{lem:subquotient_ringoid}, $\mathcal{Q}_{k}$ is equivalent to the quotient ringoid of $\mathcal{C}$ by the ideal $\mathcal{J}$, where $\mathcal{C}$ is defined as the subringoid of $\mathcal{R}$ generated by $\mathcal{J}$ and $\mathrm{Sq}^{i}:n\to n+i$ with $i\geq 0, n-i<k$.
Note that if $i\geq 0$ and $n-i<0$, then $\mathrm{Sq}^{i}:n\to n+i$ is a morphism in the ideal $\mathcal{J}$.
Therefore, $\mathcal{C}$ is the subringoid of $\mathcal{R}$ generated by $\mathcal{J}$ and $\mathcal{M}$.
Recall that $\mathcal{R}_{k}$ is the subringoid generated by $\mathcal{M}$.
Then the morphism sets satisfy $\mathcal{C}(a, b) = \mathcal{J}(a, b) + \mathcal{R}_{k}(a,b)$ and \[\left(\frac{\mathcal{C}}{\mathcal{J}}\right)(a, b) = \frac{\mathcal{C}(a,b)}{\mathcal{J}(a,b)} = \frac{\mathcal{J}(a, b) + \mathcal{R}_{k}(a,b)}{\mathcal{J}(a,b)} = \frac{\mathcal{R}_{k}(a,b)}{\mathcal{J}(a,b)\cap\mathcal{R}_{k}(a,b)}.\]
Therefore, the ringoid $\mathcal{C}/\mathcal{J}$ is equivalent to $\mathcal{R}_{k} / (\mathcal{R}_{k}\cap \mathcal{J})$.
\end{proof}
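For instance (our own unrolling of the proposition), in the smallest nontrivial case $k=1$ the generating set reduces to the top squares alone:

```latex
% For k = 1 the condition 0 <= n - i < 1 forces i = n, so
\[
\mathcal{M} = \left\{\mathrm{Sq}^{n}:n\to 2n \;\middle|\; n\geq 0\right\}
            = \left\{\mathrm{Sq}_{0}:n\to 2n \;\middle|\; n\geq 0\right\},
\]
% and Q_1 = R_1/(R_1 cap J) is presented by the single family of
% top squares Sq_0.
```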
After describing the ringoid $\mathcal{Q}_{k}$ in terms of generators and relations, we are now ready to define tensor product structure on the module category $\mathcal{U}_{k}$.
\begin{definition}[Tensor product]
Given two modules $M, N$ in $\mathcal{U}_{k}$, define a new module $M\otimes N$ in $\mathcal{U}_{k}$ as
\[(M\otimes N)(n) := \bigoplus_{i + j = n}M(i)\otimes N(j).\]
This is a finite direct sum because $M(i) = 0$ when $i < 0$ and similarly for $N$.
Let $x$ and $y$ be any nonzero homogeneous elements in $M$ and $N$ respectively.
We define the top $k$ Steenrod squares on $x\otimes y$ as
\[\mathrm{Sq}^{n}(x\otimes y) := \sum_{i + j = n,\,0\leq i\leq |x|,\, 0\leq j\leq |y|}\left(\mathrm{Sq}^{i}x\right)\otimes\left(\mathrm{Sq}^{j}y\right)\]
when $n\geq 0$ and $0\leq |x| + |y| - n < k$.
Note that on the right hand side, we only have the top $k$ squares because $|x| - i < k + n - |y| - i = k + j - |y|\leq k$ and similarly $|y| - j < k$.
This check uses the unstable condition $j\leq|y|$ and $i\leq|x|$.
We need to verify that such defined $M\otimes N$ is a module over $\mathcal{Q}_{k}$.
According to Proposition \ref{prop:Q_k_gen_rel}, it suffices to verify that $M\otimes N$ is a left $\mathcal{R}_{k}$-module with $f(x\otimes y) = 0$ for any morphism $f\in \mathcal{R}_{k}\cap\mathcal{J}$ and any two nonzero homogeneous elements $x,y$ in $M,N$.
Since there is a similar tensor product structure on $\mathcal{U}$ and $\mathcal{Q} = \mathcal{R}/\mathcal{J}$, we know that $f(x\otimes y) = 0$ for any morphism $f\in\mathcal{J}$.
This finishes our proof that $M\otimes N$ is indeed a module over $\mathcal{Q}_{k}$.
\end{definition}
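As a quick consistency check (ours, not from the paper), for the top square $\mathrm{Sq}_{0} = \mathrm{Sq}^{|x|+|y|}$ the Cartan sum collapses to a single term:

```latex
% The constraints i + j = |x| + |y|, 0 <= i <= |x|, 0 <= j <= |y|
% force i = |x| and j = |y|, so
\[
\mathrm{Sq}_{0}(x\otimes y)
  = \left(\mathrm{Sq}_{0}x\right)\otimes\left(\mathrm{Sq}_{0}y\right).
\]
```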
This tensor product $M\otimes N$ is functorial in both $M$ and $N$, so we get the tensor product functor $\otimes:\mathcal{U}_{k}\times\mathcal{U}_{k}\to\mathcal{U}_{k}$.
The sphere module $S_{k}(0)$ is the unit object with respect to this tensor product because $\left(M\otimes S_{k}(0)\right)(n) = M(n)$ and $\mathrm{Sq}^{n}(x\otimes y) = \left(\mathrm{Sq}^{n}x\right)\otimes y$, where $x$ is any homogeneous element in $M$ and $y$ is the only nontrivial element in $S_{k}(0)$.
It is easy to further verify that $\left(\mathcal{U}_{k},\otimes, S_{k}(0)\right)$ is a symmetric monoidal category.
\section{Functors between categories $\mathcal{U}$ and $\mathcal{U}_{k}$}
\label{sec:functors}
\subsection{Forgetful functor}
\begin{definition}[Forgetful functor]
Inclusion morphisms of ringoids \[\mathcal{Q}_{0}\to\mathcal{Q}_{1}\to\dots\to\mathcal{Q}_{k-1}\to\mathcal{Q}_{k}\to\dots\to\mathcal{Q}\]
induce forgetful functors $u$ \[\mathcal{U}_{0}\leftarrow\mathcal{U}_{1}\leftarrow\dots\leftarrow\mathcal{U}_{k-1}\leftarrow\mathcal{U}_{k}\leftarrow\dots\leftarrow\mathcal{U}.\]
Those forgetful functors $u$ are additive.
\end{definition}
\begin{proposition}
The forgetful functors $u$ send free modules to free modules.
\end{proposition}
\begin{proof}
It suffices to prove that $u(F(n))$ and $u(F_{k+1}(n))$ are free modules in $\mathcal{U}_{k}$.
By Proposition \ref{prop:Fn}, $u(F(n))$ is a direct sum of $F_{k}(|x|)$ where $x = \mathrm{Sq}_{i(1)}\mathrm{Sq}_{i(2)}\cdots\mathrm{Sq}_{i(m)}\iota_{n}$ with $k\leq i(1)\leq\dots\leq i(m)<n$.
Similarly by Proposition \ref{prop:Fkn}, $u(F_{k+1}(n))$ is a direct sum of $F_{k}(|x|)$ where $x = (\mathrm{Sq}_{k})^{m}\iota_{n}$ with $m\geq 0$ if $k < n$ and $x = \iota_{n}$ if $k \geq n$.
\end{proof}
\begin{proposition}
\label{prop:prefor}
The forgetful functors $u$ send projective modules to projective modules.
\end{proposition}
\begin{proof}
Say the forgetful functor goes from category $\mathcal{U}$ to category $\mathcal{U}_{k}$.
Take any projective $M$ in $\mathcal{U}$.
By Proposition \ref{prop:projFree}, there is another module $N$ in $\mathcal{U}$ such that $M\oplus N$ is free in $\mathcal{U}$.
Applying the forgetful functor $u$, we get $u(M\oplus N)$ that is free in $\mathcal{U}_{k}$.
Since the forgetful functor $u$ is additive, $u(M)\oplus u(N)$ is free, and therefore by Proposition \ref{prop:projFree} the module $u(M)$ is projective in $\mathcal{U}_{k}$.
When the forgetful functor goes from $\mathcal{U}_{k+1}$ to $\mathcal{U}_{k}$, the proof is similar and omitted.
\end{proof}
\begin{proposition}
\label{prop:exfor}
The forgetful functor $u$ is exact.
\end{proposition}
\begin{proof}
Since the forgetful functor changes neither the underlying sets of modules nor the maps between them, it is exact.
\end{proof}
\subsection{Suspension functor}
\begin{definition}[Suspension morphism]
We define the suspension morphism $\sigma:\mathcal{A}\to\mathcal{A}$ of ringoids such that
\begin{itemize}
\item
$\sigma(n) = n - 1$,
\item
$\sigma\left(\mathrm{Sq}^{i}\right) = \mathrm{Sq}^{i}$.
\end{itemize}
The suspension morphism $\sigma:\mathcal{A}\to\mathcal{A}$ induces a suspension morphism of the quotient ringoids $\sigma':\mathcal{Q}\to\mathcal{Q}$ because $\sigma(\mathcal{I}(n,n+i))\subseteq\mathcal{I}(n-1,n+i-1)$.
Furthermore, the suspension morphism $\sigma':\mathcal{Q}\to\mathcal{Q}$ induces another suspension morphism of ringoids $\sigma_{k}:\mathcal{Q}_{k+1}\to\mathcal{Q}_{k}$ because $\sigma$ sends $\mathrm{Sq}_{i} = \mathrm{Sq}^{n-i}\in\mathcal{A}(n,2n-i)$ with $i < k + 1$ to $\mathrm{Sq}_{i-1} = \mathrm{Sq}^{n-i}\in\mathcal{A}(n-1,2n-i-1)$ with $i - 1 < k$.
\end{definition}
\begin{definition}[Suspension functor]
The suspension morphism $\sigma':\mathcal{Q}\to\mathcal{Q}$ of ringoids induces a suspension functor $\Sigma:\mathcal{U}\to\mathcal{U}$.
Similarly, the suspension morphism $\sigma_{k}:\mathcal{Q}_{k+1}\to\mathcal{Q}_{k}$ of ringoids induces a suspension functor $\Sigma:\mathcal{U}_{k}\to\mathcal{U}_{k+1}$.
The suspension functors are exact, because they just shift the underlying sets and maps.
Let $M$ be any module in $\mathcal{U}_{k}$.
Then the underlying sets and the Steenrod operations of $\Sigma M$ are
\[(\Sigma M)^{n + 1} = M^{n}\quad\forall n\]
and
\[\mathrm{Sq}_{i + 1}(\Sigma x) = \Sigma(\mathrm{Sq}_{i}x)\quad\forall i < k,\]
where $\Sigma x$ denotes the element in $(\Sigma M)^{n+1}$ corresponding to a homogeneous $x$ in $M^{n}$.
In particular, we have $\mathrm{Sq}_{0}(\Sigma x) = \Sigma(\mathrm{Sq}_{-1}x) = 0$.
\end{definition}
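For example, on sphere modules the suspension acts in the expected way; a sphere module is concentrated in a single degree with trivial operations, and suspension shifts that degree up by one, so

```latex
\[
\Sigma S_{k-1}(n-1) = S_{k}(n)\quad\textrm{for all }n\geq 1,
\]
% the identification used later in the proof of Proposition \ref{prop:hdsphere}.
```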
\begin{proposition}
\label{prop:invsus}
The suspension functor $\Sigma:\mathcal{U}_{k}\to\mathcal{U}_{k+1}$ restricts to an equivalence between $\mathcal{U}_{k}$ and the full subcategory of $\mathcal{U}_{k+1}$ with objects the ones with $\mathrm{Sq}_{0} = 0$.
\end{proposition}
\begin{proof}
Denote the full subcategory by $\mathcal{C}$.
It suffices to find a functor $F:\mathcal{C}\to\mathcal{U}_{k}$ such that $\Sigma F = 1$ and $F\Sigma = 1$.
Here is the construction of the functor $F$.
Given a module $M$ in $\mathcal{C}$, we observe that $M^{0} = 0$ because if $x$ is a nonzero homogeneous element of degree zero in $M$, then $x = \mathrm{Sq}_{0}x = 0$.
Construct $FM$ as the one-degree downward shift of $M$.
More precisely, let \[(FM)^{n} = M^{n+1}\quad\forall n\geq 0\] and
\[\begin{tikzcd}
(FM)^{n} \arrow[r, "\mathrm{Sq}^{i}"]\arrow[d, "="]& (FM)^{n+i}\arrow[d, "="]\\
M^{n+1} \arrow[r, "\mathrm{Sq}^{i}"] & M^{n + i + 1}
\end{tikzcd}\quad\forall 0\leq i\leq n, i > n - k.\]
It is easy to check that $FM$ is indeed a module in $\mathcal{U}_{k}$ and $F$ is a functor from $\mathcal{C}$ to $\mathcal{U}_{k}$.
It is also easy to check that $\Sigma F = 1$ and $F\Sigma = 1$.
\end{proof}
\subsection{Frobenius functor and loop functor}
We are going to introduce a functor $\Phi:\mathcal{U}_{k}\to\mathcal{U}_{2k}$, which is an analogue of the Frobenius functor $\Phi:\mathcal{U}\to\mathcal{U}$ described in Section 1.7 of \cite{schwartz1994}.
To do that, we need an alternative description of the ringoids $\mathcal{A},\mathcal{Q},\mathcal{Q}_{k}$.
\begin{definition}[Ringoid $\mathcal{A}^{+}$]
Let $\mathcal{A}^{+}$ be the ringoid with objects $\{+\}\cup\mathbb{Z}$.
The morphism set $\mathcal{A}^{+}(n, n+i)$ is defined to be the degree $i$ part of the Steenrod algebra.
The morphism sets $\mathcal{A}^{+}(+,+), \mathcal{A}^{+}(n,+), \mathcal{A}^{+}(+, n)$ are all zero.
If $M$ is a left $\mathcal{A}^{+}$-module, i.e. a covariant additive functor from $\mathcal{A}^{+}$ to $\mathbf{Ab}$, then $M(+) = 0$ because $\mathcal{A}^{+}(+,+) = 0$.
Therefore, $\mathcal{A}^{+}\mathbf{Mod} = \mathcal{A}\mathbf{Mod} = \mathcal{M}$, where $\mathcal{M}$ denotes the category of modules over the Steenrod algebra $A$.
\end{definition}
\begin{definition}[Ringoid $\mathcal{Q}^{+}$]
Let $\mathcal{I}^{+}$ be the ideal of $\mathcal{A}^{+}$ generated by \[\mathrm{Sq}^{i}:n\to n+i\textrm{ with }i\geq 0, i>n.\]
Let $\mathcal{Q}^{+}$ be the quotient ringoid of $\mathcal{A}^{+}$ by the ideal $\mathcal{I}^{+}$.
Just as above, we have $\mathcal{Q}^{+}\mathbf{Mod} = \mathcal{Q}\mathbf{Mod} = \mathcal{U}$.
\end{definition}
\begin{definition}[Ringoid $\mathcal{Q}_{k}^{+}$]
Let $k\geq 0$.
Let $\mathcal{Q}_{k}^{+}$ be the subringoid of $\mathcal{Q}^{+}$ generated by
\[\mathrm{Sq}^{i}:n\to n + i\textrm{ with }i\geq 0, n-i<k.\]
Just as above, we have $\mathcal{Q}_{k}^{+}\mathbf{Mod} = \mathcal{Q}_{k}\mathbf{Mod} = \mathcal{U}_{k}$.
\end{definition}
The following lemma prepares us for the definition of the Frobenius morphism $\mathcal{Q}^{+}\to\mathcal{Q}^{+}$ of ringoids.
\begin{lemma}
\label{lem:froMor}
There is a unique morphism of ringoids $\phi:\mathcal{A}^{+}\to\mathcal{A}^{+}$ satisfying
\begin{itemize}
\item
$\phi(2n) = n, \phi(2n+1) = +, \phi(+) = +$,
\item
$\phi\left(\mathrm{Sq}^{2i}\right) = \mathrm{Sq}^{i}$.
\end{itemize}
\end{lemma}
\begin{proof}
It suffices to prove that this assignment respects the Adem relations, i.e.\ that it sends
\begin{equation}
\label{eq:froMor}
\mathrm{Sq}^{i}\mathrm{Sq}^{j} - \sum_{t = 0}^{\lfloor i/2\rfloor}{j - t - 1 \choose i - 2t}\mathrm{Sq}^{i + j - t}\mathrm{Sq}^{t}
\end{equation}
to zero.
If $i+j$ is odd, every term is sent to zero, since either its source or its target object is sent to $+$.
So we assume that $i+j$ is even.
If both $i$ and $j$ are odd, then we need to prove that \[{j-t-1\choose i-2t}\equiv 0\mod 2\quad\textrm{ when }t\textrm{ is even}.\]
This holds by Lucas's theorem: the top entry $j-t-1$ is even while the bottom entry $i-2t$ is odd, so the last binary digits contribute ${0\choose 1} = 0$.
From now on, we assume that both $i$ and $j$ are even.
The Frobenius morphism sends (\ref{eq:froMor}) to \[\mathrm{Sq}^{i/2}\mathrm{Sq}^{j/2} - \sum_{s = 0}^{\lfloor i/4\rfloor}{j - 2s - 1 \choose i - 4s}\mathrm{Sq}^{i/2 + j/2 - s}\mathrm{Sq}^{s}.\]
We know that \[0 = \mathrm{Sq}^{i/2}\mathrm{Sq}^{j/2} - \sum_{s=0}^{\lfloor i/4\rfloor}{j/2 - s - 1 \choose i/2 - 2s}\mathrm{Sq}^{i/2 + j/2 - s}\mathrm{Sq}^{s}.\]
So it suffices to prove \[{j - 2s - 1 \choose i - 4s} \equiv {j/2 - s - 1 \choose i/2 - 2s}\mod 2,\]
or equivalently \[{2a+1 \choose 2b} \equiv {a\choose b}\mod 2.\]
This again follows from Lucas's theorem: the last binary digits contribute ${1\choose 0} = 1$, and the remaining digits are exactly those of $a$ and $b$.
\end{proof}
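As a quick numerical sanity check (our own sketch, not part of the argument), both congruences can be verified directly with Python's `math.comb`:

```python
# Numerical check of the two mod-2 binomial congruences used above,
# both consequences of Lucas's theorem.
from math import comb

# (1) C(even, odd) is even: the last binary digits give C(0, 1) = 0.
assert all(comb(a, b) % 2 == 0
           for a in range(0, 40, 2) for b in range(1, a + 1, 2))

# (2) C(2a+1, 2b) = C(a, b) mod 2: the last binary digits give
# C(1, 0) = 1, and the remaining digits are exactly those of a and b.
assert all(comb(2 * a + 1, 2 * b) % 2 == comb(a, b) % 2
           for a in range(30) for b in range(30))

print("both congruences hold")
```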
\begin{definition}[Frobenius morphisms]
We define the Frobenius morphism $\phi:\mathcal{A}^{+}\to\mathcal{A}^{+}$ of ringoids such that
\begin{itemize}
\item
$\phi(2n) = n, \phi(2n+1) = +, \phi(+) = +$,
\item
$\phi\left(\mathrm{Sq}^{2i}\right) = \mathrm{Sq}^{i}$.
\end{itemize}
As we have seen in Lemma \ref{lem:froMor}, there is a unique morphism $\phi$ satisfying these properties.
The Frobenius morphism $\mathcal{A}^{+}\to\mathcal{A}^{+}$ induces a Frobenius morphism of the quotient ringoids $\mathcal{Q}^{+}\to\mathcal{Q}^{+}$ because $\phi$ sends $\mathrm{Sq}^{i}\in\mathcal{A}^{+}(n, n+i)$ with $i>n$ to $\mathrm{Sq}^{i/2}\in\mathcal{A}^{+}(n/2, n/2+i/2)$ with $i/2>n/2$ if both $n$ and $i$ are even, and to zero otherwise.
Furthermore, the Frobenius morphism $\mathcal{Q}^{+}\to\mathcal{Q}^{+}$ induces another Frobenius morphism of ringoids $\mathcal{Q}_{2k}^{+}\to\mathcal{Q}_{k}^{+}$ because $\phi$ sends $\mathrm{Sq}_{i} = \mathrm{Sq}^{n-i}\in\mathcal{A}^{+}(n,2n-i)$ with $i < 2k$ to $\mathrm{Sq}_{i/2} = \mathrm{Sq}^{n/2-i/2}\in\mathcal{A}^{+}(n/2,n-i/2)$ with $i/2<k$ if both $n$ and $i$ are even, and zero otherwise.
\end{definition}
\begin{definition}[Frobenius functors]
The Frobenius morphism $\mathcal{Q}^{+}\to\mathcal{Q}^{+}$ of ringoids induces a Frobenius functor $\Phi:\mathcal{U}\to\mathcal{U}$.
Similarly, the Frobenius morphism $\mathcal{Q}^{+}_{2k}\to\mathcal{Q}^{+}_{k}$ of ringoids induces a Frobenius functor $\Phi:\mathcal{U}_{k}\to\mathcal{U}_{2k}$.
\end{definition}
\begin{remark}
If $M$ is a module in $\mathcal{U}_{k}$, then $\Phi M$ is a module in $\mathcal{U}_{2k}$ with \[(\Phi M)^{2n} = M^{n},\quad(\Phi M)^{\text{odd}} = 0.\]
We denote the element in $(\Phi M)^{2n}$ corresponding to $x$ in $M^{n}$ by $\Phi x$.
The Steenrod operations on $\Phi M$ are \[\mathrm{Sq}_{2i}(\Phi x) = \Phi(\mathrm{Sq}_{i}x),\quad\mathrm{Sq}_{\text{odd}}(\Phi x) = 0.\]
\end{remark}
\begin{proposition}
\label{prop:exfro}
The Frobenius functor is exact.
\end{proposition}
\begin{proof}
The Frobenius functor only shifts the underlying sets and maps.
\end{proof}
For any $k>0$, we have a natural transformation $\phi u\to\mathrm{id}$ between morphisms of ringoids $\mathcal{Q}_{k}^{+}\to\mathcal{Q}_{k}^{+}$, where $u$ is the inclusion morphism $\mathcal{Q}_{k}^{+}\to\mathcal{Q}_{2k}^{+}$ and $\phi$ is the Frobenius morphism $\mathcal{Q}_{2k}^{+}\to\mathcal{Q}_{k}^{+}$.
The natural transformation is given by
\begin{equation*}\begin{split}
&\mathrm{Sq}_{0}:\phi u(2n) = n\to 2n,\\
&0:\phi u(2n+1)=+\to 2n+1,\\
&0:\phi u(+) = +\to +.
\end{split}\end{equation*}
This natural transformation gives rise to another natural transformation $\lambda:u\Phi\to\mathrm{id}$ between functors $\mathcal{Q}_{k}^{+}\mathbf{Mod}\to\mathcal{Q}_{k}^{+}\mathbf{Mod}$, i.e.\ between functors $\mathcal{U}_{k}\to\mathcal{U}_{k}$.
The map $\lambda_{M}:u\Phi M\to M$ of modules in $\mathcal{U}_{k}$ sends $\Phi x\mapsto \mathrm{Sq}_{0}x$.
The kernel and cokernel of $\lambda_{M}:u\Phi M\to M$ are suspensions because $\mathrm{Sq}_{0}$ acts trivially on both of them.
We define functors $\Omega,\Omega_{1}:\mathcal{U}_{k}\to\mathcal{U}_{k - 1}$ such that $\Sigma\Omega M$ is the cokernel of $\lambda_{M}$ and $\Sigma\Omega_{1} M$ is the kernel of $\lambda_{M}$.
So we have an exact sequence in $\mathcal{U}_{k}$ \[0\to\Sigma\Omega_{1}M\to u\Phi M\to M\to\Sigma\Omega M\to 0.\]
\begin{proposition}
\label{prop:omeSig}
Let $k$ be any positive integer.
The loop functor $\Omega:\mathcal{U}_{k}\to\mathcal{U}_{k-1}$ is the left adjoint of the suspension functor $\Sigma:\mathcal{U}_{k-1}\to\mathcal{U}_{k}$.
The functor $\Omega_{1}:\mathcal{U}_{k}\to\mathcal{U}_{k-1}$ is the first left derived functor of $\Omega$, and all higher derived functors are trivial.
\end{proposition}
\begin{proof}
We will first prove that $\Omega$ is the left adjoint to $\Sigma$.
Given any morphism $M\to\Sigma N$, the composition $u\Phi M\to M\to \Sigma N$ is equal to zero because $\mathrm{Sq}_{0}(\Sigma x) = 0$.
So by the universal property of cokernel, we get $\Sigma\Omega M\to \Sigma N$ and thus get $\Omega M\to N$ by Proposition \ref{prop:invsus}.
Similarly, given any morphism $\Omega M\to N$, we get $\Sigma\Omega M\to \Sigma N$ and composing it with $M\to \Sigma \Omega M$, we get $M\to\Sigma N$.
Thus we get a bijection between the two morphism sets $\mathcal{U}_{k-1}(\Omega M, N)$ and $\mathcal{U}_{k}(M,\Sigma N)$.
Now let us prove $\Omega_{1}$ is the first left derived functor of $\Omega$ and all higher derived functors are trivial.
Let $M$ be any module in $\mathcal{U}_{k}$.
For now, let us denote the kernel of $u\Phi M\to M$ by $\Sigma FM$, and within this proof write $\Omega_{i}$ for the $i$-th left derived functor of $\Omega$.
So we are going to prove that $F = \Omega_{1}$ and $\Omega_{i} = 0$ for $i \geq 2$.
Consider a free resolution of $M$ in the category $\mathcal{U}_{k}$
\[\dots\to P_{1}\to P_{0}\to M\to 0.\]
Then by definition, $\Omega_{i}(M)$ is equal to the $i$-th homology of \[\dots\to \Omega(P_{2})\to \Omega(P_{1})\to \Omega(P_{0})\to 0.\]
Think about the double complex $K^{*,*}$
\[\begin{tikzcd}
\dots \arrow[r]\arrow[d] &u\Phi P_{2} \arrow[r]\arrow[d] &u\Phi P_{1} \arrow[d] \arrow[r] & u\Phi P_{0} \arrow[r]\arrow[d]& 0\\
\dots \arrow[r] &P_{2}\arrow[r] &P_{1} \arrow[r] & P_{0} \arrow[r] & 0.
\end{tikzcd}\]
We get two spectral sequences from the double complex $K^{*,*}$ above.
Since for every $n\in\mathbb{Z}$, there are only finitely many nonzero $K^{p,q}$ with $p+q = n$, the two spectral sequences converge to the same limit.
Now we are going to compute the limits of those two spectral sequences.
Since both the Frobenius functor $\Phi$ and the forgetful functor $u$ are exact by Propositions \ref{prop:exfro} and \ref{prop:exfor}, taking horizontal homology first leaves the two-term complex $u\Phi M\to M$.
Taking vertical homology then yields $\Sigma FM$ and $\Sigma\Omega M$.
If instead we take vertical homology first, we get \[\dots, 0, 0, 0\quad\mathrm{and}\quad\dots, \Sigma\Omega P_{2}, \Sigma\Omega P_{1}, \Sigma\Omega P_{0}.\]
Then taking horizontal homology, we get $\dots,\Sigma\Omega_{2}M, \Sigma\Omega_{1}M, \Sigma\Omega M$ because the suspension functors $\Sigma$ are exact.
Since the two spectral sequences converge to the same limit, we get $F = \Omega_{1}$ and $\Omega_{i} = 0$ for all $i\geq 2$.
\end{proof}
\begin{proposition}
\label{prop:omePro}
The loop functor $\Omega:\mathcal{U}_{k+1}\to\mathcal{U}_{k}$ preserves projectives.
\end{proposition}
\begin{proof}
It suffices to prove that its right adjoint $\Sigma:\mathcal{U}_{k}\to\mathcal{U}_{k+1}$ preserves epimorphisms.
Remember that the suspension functor $\Sigma$ is exact.
\end{proof}
\begin{proposition}
For the sphere modules in $\mathcal{U}_{k}$ with $k>0$, we have
\[
\Omega_{1}(S_{k}(n)) = \begin{cases}
S_{k-1}(2n-1)&\textrm{if }n > 0,\\
0&\textrm{if }n = 0,
\end{cases}
\qquad
\Omega(S_{k}(n)) = \begin{cases}
S_{k-1}(n-1)&\textrm{if }n > 0,\\
0&\textrm{if }n = 0.
\end{cases}
\]
\end{proposition}
\begin{proof}
We have $u\Phi S_{k}(n) = S_{k}(2n)$ and the map $u\Phi S_{k}(n)\to S_{k}(n)$ sends $\iota_{2n}$ to $\mathrm{Sq}_{0}\iota_{n}$.
When $n > 0$, we have $\mathrm{Sq}_{0}\iota_{n} = 0$ and thus \[\Sigma\Omega_{1}S_{k}(n) = S_{k}(2n),\quad \Sigma\Omega S_{k}(n) = S_{k}(n).\]
When $n = 0$, we have $\mathrm{Sq}_{0}\iota_{n} = \iota_{n}$ and thus \[\Sigma\Omega_{1}S_{k}(n) = \Sigma\Omega S_{k}(n) = 0.\qedhere\]
\end{proof}
\section{Homological dimension of category $\mathcal{U}_{k}$}
\label{sec:U_k_homodim}
\begin{notation}
We abbreviate $\mathrm{Ext}_{\mathcal{U}_{k}}^{*}(M,N)$ to $\mathrm{Ext}_{k}^{*}(M,N)$.
\end{notation}
In this section, we shall prove that the homological dimension of the category $\mathcal{U}_{k}$ is at most $k$.
Our goal is to prove that $\mathrm{Ext}_{k}^{s}(M, N) = 0$ for all $s>k\geq 0$.
Our strategy is to first prove it for $N$ a sphere module by induction on $k$, then for $N$ bounded above, and finally for $N$ a general module.
\subsection{EHP sequence}
\begin{lemma}
\label{lem:U_k_EHP}
Let $k>0$, let $M$ be any module in $\mathcal{U}_{k}$, and let $N$ be any module in $\mathcal{U}_{k-1}$.
Then we have the following long exact sequence of vector spaces over $\mathbb{F}_{2}$
\[\begin{tikzcd}
&&\dots\arrow[dll]\\
\mathrm{Ext}_{k-1}^{s}(\Omega M, N)\arrow[r] &\mathrm{Ext}_{k}^{s}(M, \Sigma N)\arrow[r] &\mathrm{Ext}_{k-1}^{s-1}(\Omega_{1}M, N)\arrow[dll]\\
\mathrm{Ext}_{k-1}^{s+1}(\Omega M, N)\arrow[r] &\mathrm{Ext}_{k}^{s+1}(M, \Sigma N)\arrow[r] &\mathrm{Ext}_{k-1}^{s}(\Omega_{1}M, N)\arrow[dll]\\
\dots&&
\end{tikzcd}\]
\end{lemma}
\begin{proof}
Recall from Proposition \ref{prop:omeSig} that the loop functor $\Omega:\mathcal{U}_{k}\to\mathcal{U}_{k-1}$ is the left adjoint of the suspension functor $\Sigma:\mathcal{U}_{k-1}\to\mathcal{U}_{k}$.
So \[\mathcal{U}_{k-1}(\Omega(-),N) = \mathcal{U}_{k}(-,\Sigma N)\] and its right derived functor is $\mathrm{Ext}_{k}^{*}(-,\Sigma N)$.
Since the inner functor $\Omega$ sends projectives to projectives by Proposition \ref{prop:omePro}, we have a Grothendieck spectral sequence with \[E_{2}^{s,t} = \mathrm{Ext}^{s}_{k-1}(\Omega_{t}M, N)\Rightarrow\mathrm{Ext}^{s+t}_{k}(M, \Sigma N).\]
Since $\Omega_{t} = 0$ for $t>1$, the $E_{2}$ page consists of only two nontrivial rows $t = 0, 1$.
We have the exact sequences
\[\mathrm{Ext}^{s-2}_{k-1}(\Omega_{1} M, N)\to \mathrm{Ext}^{s}_{k-1}(\Omega M, N)\to E_{3}^{s, 0}\to 0\]
and
\[0\to E_{3}^{s-1, 1}\to \mathrm{Ext}^{s-1}_{k-1}(\Omega_{1}M, N)\to \mathrm{Ext}_{k-1}^{s+1}(\Omega M, N).\]
Since all further differentials are trivial, we have $E_{3} = E_{\infty}$.
By convergence of the spectral sequence, we have a short exact sequence
\[0\to E_{\infty}^{s, 0}\to \mathrm{Ext}_{k}^{s}(M,\Sigma N)\to E_{\infty}^{s-1, 1}\to 0.\]
Combining these three exact sequences, we get the long exact sequence.
\end{proof}
\begin{proposition}
\label{prop:hdsphere}
$\mathrm{Ext}_{k}^{s}(-, S_{k}(n)) = 0$ for all $s > k\geq 0$.
\end{proposition}
\begin{proof}
Say $N$ is the sphere module $S_{k}(n)$.
We will proceed by double induction on $n$ and $k$.
The base case is $n = 0$ or $k = 0$.
When $n = 0$, $\mathcal{U}_{k}(-,S_{k}(0))$ is an exact functor so $\mathrm{Ext}^{s}_{k}(-,S_{k}(0)) = 0$ for all $s > 0$.
When $k = 0$, $\mathcal{U}_{0}(-,S_{0}(n))$ is an exact functor so $\mathrm{Ext}^{s}_{0}(-,S_{0}(n)) = 0$ for all $s > k = 0$.
Now assume $n>0$ and $k>0$.
Our goal is to prove that \[\mathrm{Ext}_{k}^{s}(-, S_{k}(n)) = 0\] for all $s>k$.
Our induction hypothesis is that \[\mathrm{Ext}_{k'}^{s'}(-, S_{k'}(n')) = 0\] for all $n',k',s'$ satisfying $0\leq n'<n, 0\leq k'<k, s'>k'$.
Observe that $S_{k}(n) = \Sigma S_{k-1}(n-1)$.
Take $N = S_{k-1}(n-1)$ in the lemma above and we get a long exact sequence.
When $s>k$, we have $\mathrm{Ext}^{s}_{k-1}(\Omega M, N) = 0$ and $\mathrm{Ext}^{s-1}_{k-1}(\Omega_{1}M, N) = 0$ by the induction hypothesis.
Thus $\mathrm{Ext}^{s}_{k}(M, \Sigma N) = 0$.
\end{proof}
\subsection{Bounded above modules}
\begin{proposition}
\label{prop:hdbdd}
If $N$ is bounded above, i.e. $N^{n} = 0$ for large enough $n$, then $\mathrm{Ext}_{k}^{s}(-, N) = 0$ for all $s > k\geq 0$.
\end{proposition}
\begin{proof}
If $N = 0$, then it is trivial.
Let us assume $N\neq 0$.
Say the highest nontrivial degree of $N$ is equal to $n$.
We will proceed by induction on $n$.
The base case is $n = 0$.
When $n = 0$, $N$ is concentrated in degree zero, hence is a direct sum of sphere modules, and the claim follows from Proposition \ref{prop:hdsphere}.
Let us assume $n > 0$ and that we have proven the cases $0,1,\dots, n-1$.
Then we have a short exact sequence of modules in $\mathcal{U}_{k}$
\[0\to N'\to N\to N''\to 0,\]
where $N'$ is the degree $n$ part of $N$ and $N''$ is the degree $<n$ part of $N$.
This short exact sequence induces a long exact sequence of $\mathrm{Ext}$ groups
\[\begin{tikzcd}
&&\dots\arrow[dll]\\
\mathrm{Ext}_{k}^{s}(M, N')\arrow[r] &\mathrm{Ext}_{k}^{s}(M, N)\arrow[r] &\mathrm{Ext}_{k}^{s}(M, N'')\arrow[dll]\\
\mathrm{Ext}_{k}^{s+1}(M, N')\arrow[r] &\mathrm{Ext}_{k}^{s+1}(M, N)\arrow[r] &\mathrm{Ext}_{k}^{s+1}(M, N'')\arrow[dll]\\
\dots&&
\end{tikzcd}\]
Since $N'$ is a direct sum of sphere modules, we know that $\mathrm{Ext}_{k}^{s}(M, N') = 0$ by Proposition \ref{prop:hdsphere}.
We also know that $\mathrm{Ext}^{s}_{k}(M, N'') = 0$ by the induction hypothesis.
Therefore, $\mathrm{Ext}^{s}_{k}(M, N) = 0$.
\end{proof}
\subsection{General modules}
In this subsection, we are going to prove $\mathrm{Ext}_{k}^{s}(-,-) = 0$ for all $s>k\geq 0$.
Preparing for that proof, we present without proof the following lemma on the Milnor exact sequence for $\mathrm{Ext}$ groups.
\begin{lemma}[Milnor exact sequence]
Let $M$ be any module in $\mathcal{U}_{k}$.
Let $N_{0}\leftarrow N_{1}\leftarrow N_{2}\leftarrow \cdots$ be an inverse system of modules in $\mathcal{U}_{k}$ such that all maps are surjective.
Denote its inverse limit by $N$.
Then we have a short exact sequence \[0\to{\varprojlim}^{1}\mathrm{Ext}^{s-1}_{k}(M,N_{i})\to\mathrm{Ext}^{s}_{k}(M,N)\to\varprojlim\mathrm{Ext}^{s}_{k}(M,N_{i})\to 0.\]
\end{lemma}
\begin{theorem}
\label{thm:hdall}
$\mathrm{Ext}_{k}^{s}(-, -) = 0$ for all $s > k\geq 0$.
\end{theorem}
\begin{proof}
Let $M$ and $N$ be any two modules in $\mathcal{U}_{k}$.
We are going to prove $\mathrm{Ext}_{k}^{s}(M, N) = 0$ for all $s>k\geq 0$.
For any $i\geq 0$, define $N_{i}$ as a module in $\mathcal{U}_{k}$ with the degree $\leq i$ part equal to $N$ and the degree $> i$ part being zero.
That is, $N_{i}^{j} = N^{j}$ if $j \leq i$ and $N_{i}^{j} = 0$ if $j > i$.
The Steenrod operations on $N_{i}$ are induced from those on $N$.
Then we get a surjective inverse system
\[N_{0}\leftarrow N_{1}\leftarrow N_{2}\leftarrow \cdots\]
and the inverse limit of the inverse system is exactly our module $N$.
So we get the Milnor exact sequence of $\mathrm{Ext}$ groups \[0\to{\varprojlim}^{1}\mathrm{Ext}^{s-1}_{k}(M,N_{i})\to\mathrm{Ext}^{s}_{k}(M,N)\to\varprojlim\mathrm{Ext}^{s}_{k}(M,N_{i})\to 0.\]
Since each $N_{i}$ is bounded above, we know $\mathrm{Ext}^{s}_{k}(M, N_{i}) = 0$ for all $i$ by Proposition \ref{prop:hdbdd} and thus the right term in the short exact sequence is zero.
It remains to prove that the left term in the short exact sequence is zero.
For all $i\geq 0$, we have a short exact sequence $0\to K\to N_{i+1}\to N_{i}\to 0$ with $K$ being a direct sum of sphere modules.
Therefore we have a long exact sequence, part of which looks like
\[\cdots\to\mathrm{Ext}_{k}^{s-1}(M,N_{i+1}) \to\mathrm{Ext}_{k}^{s-1}(M,N_{i}) \to\mathrm{Ext}_{k}^{s}(M,K) \to\cdots.\]
We know that $\mathrm{Ext}_{k}^{s}(M,K) = 0$ by Proposition \ref{prop:hdsphere}.
Therefore, the map $\mathrm{Ext}_{k}^{s-1}(M,N_{i+1})\to\mathrm{Ext}_{k}^{s-1}(M,N_{i})$ is a surjection for all $i$.
The left term is thus zero because it is $\varprojlim^{1}$ of a surjective inverse system.
\end{proof}
\begin{corollary}
Any module in $\mathcal{U}_{k}$ has a projective resolution of length $\leq k$.
In other words, the homological dimension of the category $\mathcal{U}_{k}$ is at most $k$.
\end{corollary}
\begin{proof}
By Proposition \ref{prop:enoughProj}, the abelian category $\mathcal{U}_{k}$ has enough projectives.
In an abelian category with enough projectives, if \[\mathrm{Ext}^{s}(M,-) = 0 \textrm{ for all }s>k\geq 0,\] then $M$ has a projective resolution of length $\leq k$.
So this corollary follows immediately from Theorem \ref{thm:hdall}.
\end{proof}
\section{$\Lambda$-complex for modules in $\mathcal{U}_{k}$}
\label{sec:Lambda_k}
In this section, we will introduce a contravariant functor $\Lambda_{k}$ from the category of unstable modules over the Steenrod algebra with only the top $k$ squares to the category of cochain complexes of graded vector spaces over $\mathbb{F}_{2}$, namely $\Lambda_{k}:\mathcal{U}_{k}^{\mathrm{op}}\to\mathrm{Ch}^{*}(\mathrm{Gr}(\mathbb{F}_{2}\mathrm{Mod}))$.
Note that we use the upper index to emphasize the \emph{cochain} complex.
The cohomological degree is denoted by $s$ and the degree in the graded vector space is denoted by $a$.
To motivate its study, we list two nice properties of this functor here:
\begin{itemize}
\item
The cohomology $H^{s,a}(\Lambda_{k}(M))$ is equal to $\mathrm{Ext}_{k}^{s}(M,S_{k}(a))$ for all $s,a$.
\item
The cochain complex $\Lambda_{k}(M)$ is relatively small and easy to compute.
To be more concrete, $\Lambda_{k}^{s}(M) = 0$ for all $s < 0$ or $s > k$.
Furthermore, when $M$ is finite, so is its cochain complex $\Lambda_{k}(M)$.
\end{itemize}
\subsection{Recall: $\Lambda$ algebra and $\Lambda$ functor}
In this subsection, we briefly review the $\Lambda$ algebra and the $\Lambda$ functor, following \cite{bousfield1966, priddy1970}.
Formally, $\Lambda$ is an associative differential bigraded $\mathbb{F}_{2}$-algebra with generators $\lambda_{i}\in\Lambda^{1, i+1}$ for $i\geq 0$ and relations
\begin{equation}
\label{eq:lambda_relation}
\lambda_{i}\lambda_{2i+1+j} = \sum_{t\geq 0}{j-t-1\choose t}\lambda_{i+j-t}\lambda_{2i+1+t}\quad\textrm{for }i,j\geq 0
\end{equation}
with differential
\[d(\lambda_{i}) = \sum_{j\geq 1}{i-j\choose j}\lambda_{i-j}\lambda_{j-1}.\]
We refer to the first grading in $\Lambda$ as the cohomological degree $s$ and the second as the internal degree $t$.
The differential $d$ in $\Lambda$ increases $s$ by one and preserves $t$.
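As an illustration (our own sketch, not part of the formal development), the differential on the generators can be computed mechanically mod 2. The following Python sketch implements the formula above, encoding a sum of length-two monomials $\lambda_{a}\lambda_{b}$ as a set of pairs $(a,b)$, with the convention ${n\choose j} = 0$ for $n < 0$ or $j < 0$.

```python
from math import comb

def binom2(n, k):
    """Binomial coefficient mod 2, with C(n, k) = 0 when n < 0, k < 0, or k > n."""
    if n < 0 or k < 0 or k > n:
        return 0
    return comb(n, k) % 2

def d_lambda(i):
    """d(lambda_i) = sum_{j >= 1} C(i - j, j) lambda_{i-j} lambda_{j-1},
    returned as the set of pairs (i - j, j - 1) with odd coefficient."""
    return {(i - j, j - 1) for j in range(1, i + 1) if binom2(i - j, j)}

# Note that d increases s by one and preserves the internal degree t:
# each lambda_{i-j} lambda_{j-1} has internal degree (i - j + 1) + j = i + 1.
```

For instance, this recovers $d(\lambda_{2}) = \lambda_{1}\lambda_{0}$ and $d(\lambda_{i}) = 0$ for $i \leq 1$.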
\begin{definition}[Admissible monomials]
A monomial \[\lambda_{I} := \lambda_{I(1)}\lambda_{I(2)}\cdots\lambda_{I(s)}\in\Lambda\] is said to be admissible if \[2I(r)\geq I(r+1)\quad\textrm{for all } 1\leq r < s.\]
The excess of $\lambda_{I}$ is defined as \[\textrm{excess}(I):=\sum_{r=1}^{s-1}(2I(r) - I(r+1)).\]
\end{definition}
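Both conditions are straightforward to check by machine. The following Python sketch (an illustration under our encoding of a monomial $\lambda_{I}$ as the tuple of its subscripts) mirrors the definition verbatim.

```python
def is_admissible(I):
    """lambda_I is admissible iff 2 I(r) >= I(r+1) for all 1 <= r < s."""
    return all(2 * I[r] >= I[r + 1] for r in range(len(I) - 1))

def excess(I):
    """excess(I) = sum_{r=1}^{s-1} (2 I(r) - I(r+1)); zero for length <= 1."""
    return sum(2 * I[r] - I[r + 1] for r in range(len(I) - 1))
```

For example, $\lambda_{2}\lambda_{4}$ is admissible of excess $0$, while $\lambda_{1}\lambda_{3}$ is not admissible. Note that each summand of the excess is nonnegative for an admissible monomial.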
\begin{proposition}
The admissible monomials form an additive basis for $\Lambda$.
\end{proposition}
\begin{proposition}
The internal degree $t$ part of the $s$-th cohomology of $\Lambda$ is equal to the $s$-th Ext group in the category of $\mathcal{M}$ from $S(0)$ to $S(t)$, i.e. \[H^{s,t}(\Lambda) = \mathrm{Ext}^{s}_{\mathcal{M}}(S(0),S(t)).\]
\end{proposition}
\begin{definition}[Subcomplex $\Lambda(m)$]
$\Lambda(m)$ is defined to be the sub-bigraded vector space of $\Lambda$ spanned by the admissible monomials $\lambda_{I}$ with $I(1) < m$.
The trivial monomial $1$ lives in all $\Lambda(m)$.
\end{definition}
$\Lambda(m)$ is a subcomplex of $\Lambda$ by virtue of Lemma \ref{lem:lambda_check}.
\begin{lemma}\label{lem:lambda_check}
\[d(\Lambda(m))\subseteq\Lambda(m),\quad\Lambda^{s,t}(m)\Lambda(m+t)\subseteq\Lambda(m).\]
\end{lemma}
\begin{proposition}
\label{prop:lambda_m_coho}
The internal degree $t$ part of the $s$-th cohomology of $\Lambda(m)$ is equal to the $s$-th Ext group in the category of $\mathcal{U}$ from $S(m)$ to $S(m+t)$, i.e. \[H^{s,t}(\Lambda(m)) = \mathrm{Ext}^{s}_{\mathcal{U}}(S(m),S(m+t)).\]
\end{proposition}
\begin{proposition}
\label{prop:lambda_EHP}
For each $m\geq 0$, there is a short exact sequence \[0\to\Lambda(m)\xrightarrow{e}\Lambda(m+1)\xrightarrow{h}\Sigma^{1,m+1}\Lambda(2m+1)\to 0,\]
where the suspension suspends $s$ and $t$ by 1 and $m+1$ respectively.
The map $e$ does not change the admissible monomials.
The map $h$ drops the leading $\lambda_{m}$ from admissible monomials beginning with $\lambda_{m}$ and sends all other admissible basis vectors to zero.
\end{proposition}
The short exact sequence in Proposition \ref{prop:lambda_EHP} leads to a long exact sequence, known as the EHP sequence, for each $m$.
One can generalize $\Lambda(m)$ to $\Lambda(M)$ in such a way that the cohomology of $\Lambda(M)$ is equal to the $\mathrm{Ext}$ group from $M$ to spheres.
\begin{definition}
Given any $M\in\mathcal{U}$, we construct a cochain complex
\[\Lambda(M)=\bigoplus_{m}\Lambda(m)\otimes M_{m}\] with differential
\[d(\lambda_{I}\otimes x_{m}) = d(\lambda_{I})\otimes x_{m} + \sum_{i\geq 1, m-2i\geq 0}\lambda_{i-1}\lambda_{I}\otimes x_{m}\mathrm{Sq}^{i}.\]
Here $M_{m}$ denotes the dual $\mathrm{Hom}(M^{m}, \mathbb{F}_{2})$ of the degree $m$ part of $M$.
So the Steenrod operation $\mathrm{Sq}^{i}$ acts from the right on $x_{m}$.
We enforce the condition $m-2i\geq 0$ because otherwise $x_{m}\mathrm{Sq}^{i} = x_{m}\mathrm{Sq}_{2m-i} = 0$ by the instability of the module $M$.
The complex $\Lambda(M)$, as a subspace of $\Lambda\otimes \left(M^{\vee}\right)$, is closed under the differential by Lemma \ref{lem:lambda_check}.
The relations (\ref{eq:lambda_relation}) in the $\Lambda$ algebra and the relations (\ref{eq:steenrod_relation}) in the Steenrod algebra $A$ are compatible in such a way that one can check $d^{2}(\lambda_{I}\otimes x_{m}) = 0$.
The cochain complex $\Lambda(M)$ is still bigraded: the first grading is still the cohomological degree $s$, but the second grading is the \emph{absolute} internal degree $a$.
The absolute internal degree $a$ of $\lambda_{I}\otimes x_{m}$ is equal to $m$ plus the internal degree of $\lambda_{I}$.
The differential $d$ in $\Lambda(M)$ increases $s$ by one and preserves $a$.
\end{definition}
\begin{proposition}
\label{prop:lambda_coho}
The absolute internal degree $a$ part of the $s$-th cohomology of $\Lambda(M)$ is equal to the $s$-th Ext group in the category of $\mathcal{U}$ from $M$ to $S(a)$, i.e. \[H^{s,a}(\Lambda(M)) = \mathrm{Ext}^{s}_{\mathcal{U}}(M,S(a)).\]
\end{proposition}
We can view $\Lambda$ as a contravariant functor from $\mathcal{U}$ to $\mathrm{Ch}^{*}(\mathrm{Gr}(\mathbb{F}_{2}\mathrm{Mod}))$.
\subsection{Cochain complex $\Lambda_{k}(m)$}
\begin{definition}
For all $m, k\geq 0$, $\Gamma(m, k)$ is defined to be the sub-bigraded vector space of $\Lambda(m)$ spanned by the admissible monomials $\lambda_{I}$ with $I(1)< m$ and \[\textrm{excess}(I) + (s-1) > I(1) - (m-k).\]
The trivial monomial 1 does not live in any $\Gamma(m,k)$.
\end{definition}
$\Gamma(m,k)$ is a subcomplex of $\Lambda(m)$ by virtue of the following lemma.
\begin{lemma}
$\Gamma(m,k)$ is closed under the differential.
\end{lemma}
\begin{proof}
Note that $\textrm{excess}(I) + (s-1) > I(1) - (m-k)$ if and only if $t+m-k-1 > 2I(s)$, where $t$ is the internal degree of $\lambda_{I}$.
Since the differential $d$ does not change $t, m, k$, it suffices to prove that the differential $d$ does not increase the last subscript.
In other words, we need to prove that $d(\lambda_{I})$ can be written as a sum of admissible monomials $\lambda_{J}$ with $J(s+1) \leq I(s)$.
This is true because neither the differential formula nor the relations in $\Lambda$ increase the last subscript.
\end{proof}
Observe that $\Gamma(m, k+1)$ is a subcomplex of $\Gamma(m, k)$.
\begin{definition}
For all $m,k\geq 0$, $\Lambda_{k}(m)$ is defined to be the quotient cochain complex of $\Lambda(m)$ by its subcomplex $\Gamma(m, k)$.
The differentials in $\Lambda_{k}(m)$ follow from those in $\Lambda(m)$.
Observe that all nontrivial admissible monomials $\lambda_{I}$ in $\Lambda_{k}(m)$ have $m-k\leq I(1)\leq m-1$, because $I(1)<m$ and $0\leq\textrm{excess}(I) + (s-1)\leq I(1) - (m-k)$.
Note that $\Lambda_{k}^{s}(m) = 0$ when $s > k$ or $s < 0$.
When $s > k$, we have $\textrm{excess}(I) + s - 1 < I(1) - m + s$ and thus $\textrm{excess}(I)\leq I(1) - m< 0$.
No admissible monomial $\lambda_{I}$ can have $\textrm{excess}(I) < 0$.
\end{definition}
In a later subsection, we will prove Theorem \ref{thm:lambda_k_coho}, a special case of which is that the cohomology $H^{s,t}(\Lambda_{k}(m))$ is equal to $\mathrm{Ext}_{k}^{s}(S_{k}(m),S_{k}(m+t))$ for all $s,t$.
This result is the analogue of Proposition \ref{prop:lambda_m_coho} in the world of $\mathcal{U}_{k}$.
\begin{proposition}
The short exact sequence in Proposition \ref{prop:lambda_EHP} induces a short exact sequence \[0\to\Lambda_{k}(m)\xrightarrow{e}\Lambda_{k+1}(m+1)\xrightarrow{h}\Sigma^{1,m+1}\Lambda_{k}(2m+1)\to 0,\]
where the suspension suspends $s$ and $t$ by 1 and $m+1$ respectively.
\end{proposition}
\begin{proof}
As a vector space over $\mathbb{F}_{2}$, $\Lambda_{k}(m)$ is spanned by the admissible monomials $\lambda_{I}$ satisfying $I(1)<m$ and \[\textrm{excess}(I) + (s-1) \leq I(1) - (m-k).\]
The map $e$ keeps the admissible monomials, and the map $h$ drops the leading $\lambda_{m}$ from admissible monomials beginning with $\lambda_{m}$ and sends all other basis vectors to zero.
The exactness of the short sequence is easy to check and we omit the verification.
\end{proof}
The short exact sequence above leads to a long exact sequence
\[\begin{tikzcd}
&&\ldots\arrow[dll, "P"']\\
\mathrm{Ext}_{k}^{s}(m,n)\arrow[r, "E"'] &\mathrm{Ext}_{k+1}^{s}(m+1,n+1)\arrow[r, "H"] &\mathrm{Ext}_{k}^{s-1}(2m+1,n)\arrow[dll, "P"]\\
\mathrm{Ext}_{k}^{s+1}(m,n)\arrow[r, "E"'] &\mathrm{Ext}_{k+1}^{s+1}(m+1,n+1)\arrow[r, "H"] &\mathrm{Ext}_{k}^{s}(2m+1,n)\arrow[dll, "P"]\\
\ldots&&
\end{tikzcd}\]
where $\mathrm{Ext}_{k}^{s}(m,n)$ is an abbreviation for $\mathrm{Ext}_{k}^{s}(S_{k}(m),S_{k}(n))$.
Note that we have seen this long exact sequence before in Lemma \ref{lem:U_k_EHP}.
We regard it as the analogue of the EHP sequence in the world of $\mathcal{U}_{k}$.
\begin{example}
We write down the structure of several $\Lambda_{k}(m)$'s explicitly.
\begin{enumerate}
\item
$\Lambda_{0}(m)$ has additive basis $\{1\}$ for all $m\geq 0$.
All differentials are trivial.
\item
$\Lambda_{1}(m)$ has additive basis
\[
\left\{
\begin{split}
&\{1\}&\textrm{ if }m = 0\\
&\{1,\lambda_{m-1}\}&\textrm{ if }m \geq 1
\end{split}\right.
\]
All differentials are trivial.
\item
$\Lambda_{2}(m)$ has additive basis
\[
\left\{
\begin{split}
&\{1\}&\textrm{ if }m = 0\\
&\{1,\lambda_{0},\lambda_{0}\lambda_{0}\}&\textrm{ if }m = 1\\
&\{1,\lambda_{m-2},\lambda_{m-1},\lambda_{m-1}\lambda_{2m-2}\}&\textrm{ if } m \geq 2
\end{split}\right.
\]
All differentials are trivial.
\item
$\Lambda_{3}(m)$ has additive basis
\[
\left\{
\begin{split}
&\{1\}&\textrm{ if }m = 0\\
&\{1,\lambda_{0},\lambda_{0}\lambda_{0},\lambda_{0}\lambda_{0}\lambda_{0}\}&\textrm{ if }m = 1\\
&\{1,\lambda_{0},\lambda_{1},\lambda_{0}\lambda_{0},\lambda_{1}\lambda_{1},\lambda_{1}\lambda_{2},\lambda_{1}\lambda_{2}\lambda_{4}\}&\textrm{ if }m = 2\\
&\{1,\lambda_{m-3},\lambda_{m-2},\lambda_{m-1},\lambda_{m-2}\lambda_{2m-4},\lambda_{m-1}\lambda_{2m-3},&\\
&\quad\lambda_{m-1}\lambda_{2m-2},\lambda_{m-1}\lambda_{2m-2}\lambda_{4m-4}\}&\textrm{ if } m \geq 3
\end{split}\right.
\]
All differentials are trivial.
\end{enumerate}
\end{example}
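The bases listed above can be generated mechanically from the definition of $\Lambda_{k}(m)$. The following Python sketch (an illustration under our reading of the definition, not code from the paper) enumerates the admissible monomials that are nonzero in $\Lambda_{k}(m)$, i.e. those with $I(1) < m$ and $\mathrm{excess}(I) + (s-1) \leq I(1) - (m-k)$. The pruning is valid because admissibility makes the excess, and of course the length, nondecreasing when a monomial is extended on the right.

```python
def excess(I):
    # excess(I) = sum_r (2 I(r) - I(r+1)); each term is >= 0 for admissible I
    return sum(2 * I[r] - I[r + 1] for r in range(len(I) - 1))

def survives(I, m, k):
    # lambda_I is nonzero in Lambda_k(m) iff excess(I) + (s - 1) <= I(1) - (m - k)
    return excess(I) + (len(I) - 1) <= I[0] - (m - k)

def basis(m, k):
    """Additive basis of Lambda_k(m) as tuples of subscripts; () is the monomial 1."""
    result = [()]
    def extend(I):
        result.append(I)
        # admissibility: the next subscript j must satisfy j <= 2 * I(s);
        # once the survival condition fails it fails for every extension,
        # since both excess(I) and s - 1 are nondecreasing.
        for j in range(0, 2 * I[-1] + 1):
            if survives(I + (j,), m, k):
                extend(I + (j,))
    for i in range(max(0, m - k), m):
        if survives((i,), m, k):
            extend((i,))
    return sorted(result)
```

For instance, `basis(3, 2)` recovers the basis $\{1,\lambda_{1},\lambda_{2},\lambda_{2}\lambda_{4}\}$ of $\Lambda_{2}(3)$, and `basis(2, 3)` recovers the listed basis of $\Lambda_{3}(2)$.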
\subsection{Functor $\Lambda_{k}:\mathcal{U}_{k}^{\mathrm{op}}\to\mathrm{Ch}^{*}(\mathrm{Gr}(\mathbb{F}_{2}\mathrm{Mod}))$}
\begin{definition}
Given any $M\in\mathcal{U}_{k}$, we construct a cochain complex
\[\Lambda_{k}(M)=\bigoplus_{m}\Lambda_{k}(m)\otimes M_{m}\] with differentials
\[d(\lambda_{I}\otimes x_{m}) = d(\lambda_{I})\otimes x_{m} + \sum_{i\geq 1, 0\leq m-2i< k}\lambda_{i-1}\lambda_{I}\otimes x_{m}\mathrm{Sq}^{i}.\]
We require $0\leq m-2i<k$ because $x_{m}\mathrm{Sq}^{i} = x_{m}\mathrm{Sq}_{2m-i}$ and, as a module in $\mathcal{U}_{k}$, $M$ only admits the operations $\mathrm{Sq}_{0},\ldots,\mathrm{Sq}_{k-1}$.
Note that we can omit the condition $m-2i\geq 0$ in the definition of the differential because $x_{m}\mathrm{Sq}^{i} = 0$ automatically when $m-2i < 0$.
\end{definition}
\begin{lemma}
The differential $d$ is well-defined.
\end{lemma}
\begin{proof}
It suffices to prove $\lambda_{i-1}\lambda_{I}\in\Gamma(m-i,k)$ if $\lambda_{I}\in\Gamma(m, k)$ when $k\geq 0, i\geq 1$ and $m\geq 2i$.
By Lemma \ref{lem:lambda_check}, $2i\leq m$ implies that $\lambda_{i-1}\lambda_{I}$ lives in $\Lambda(m-i)$.
Note that the condition for an admissible monomial $\lambda_{I}\in\Lambda(m)$ to be in $\Gamma(m,k)$ is $\textrm{excess}(I) + (s-1) > I(1) - (m-k)$, which is equivalent to $t(I) - 2I(s) > k - m + 1$.
Here $t(I)$ denotes the internal degree of $\lambda_{I}$.
Say $\lambda_{i-1}\lambda_{I} = \sum_{J}\lambda_{J}$ with $\lambda_{J}$ being admissible monomials.
Then $t(J) = i + t(I)$ and $t(J) - 2J(s+1) = i + t(I) - 2J(s+1) \ge i + t(I) - 2I(s) > i + k - m + 1$.
So $\lambda_{J}\in\Gamma(m-i, k)$.
\end{proof}
\begin{theorem}
\label{thm:lambda_k_dsq}
$d^{2} = 0$ in $\Lambda_{k}(M)$.
\end{theorem}
We first prove two lemmas which will come in handy when proving Theorem \ref{thm:lambda_k_dsq}.
Lemma \ref{lem:lambda_comp_0} is about the cochain complex $\Lambda_{k}(m)$ and Lemma \ref{lem:lambda_ineq} is about the standard $\Lambda$ algebra.
\begin{lemma}
\label{lem:lambda_comp_0}
If $i + 1 \leq m - k$ and $\lambda_{I}$ is any admissible monomial in $\Lambda(m+i+1)$, then $\lambda_{i}\lambda_{I} = 0 \in \Lambda_{k}(m)$.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:lambda_check}, $\lambda_{i}\lambda_{I}$ lives in $\Lambda(m)$.
If $\lambda_{I} = 1$, then $\lambda_{i}$ is trivial in $\Lambda_{k}(m)$ because $i<m-k$.
From now on, assume the length of $\lambda_{I}$ is $s\geq 1$.
Say $\lambda_{i}\lambda_{I} = \sum_{J}\lambda_{J}$ where each $\lambda_{J}$ is admissible of length $s + 1$ in $\Lambda(m)$.
It suffices to prove $\textrm{excess}(J) + s > J(1) - (m-k)$ or equivalently, $J(1) + \cdots + J(s) + s + (m-k) > J(s+1)$.
Since $i + 1\leq m - k$, it suffices to prove $J(1) + \cdots + J(s) + i + (s+1) > J(s+1)$, which follows from Lemma \ref{lem:lambda_ineq}.
\end{proof}
\begin{lemma}
\label{lem:lambda_ineq}
Let $\lambda_{I}$ be any admissible monomial of length $s\geq 1$.
Write $\lambda_{i}\lambda_{I}$ as the sum of admissible monomials $\lambda_{J}$'s.
Then $i + J(1) + \cdots + J(s)\geq J(s+1)$.
\end{lemma}
\begin{proof}
We will proceed by induction on $s$.
The base case is $s = 1$.
When $s = 1$, write $\lambda_{i}\lambda_{j}$ as the sum of admissible monomials $\lambda_{i'}\lambda_{j'}$'s.
If $2i\geq j$, then $i' = i$ and $j' = j$.
Otherwise, in the relation (\ref{eq:lambda_relation}), we have $i + (i + j -t ) \geq 2i + 1 + t$ because the binomial coefficient requires $j - t - 1\geq t$.
One can check that the right hand side of the relation (\ref{eq:lambda_relation}) is admissible.
Assume $s > 1$.
Write $\lambda_{i}\lambda_{I(1,2,\ldots,s-1)}$ as the sum of admissible monomials $\lambda_{P}$.
Then by the case $s-1$, we have $i + P(1) + \cdots + P(s-1)\geq P(s)$.
Now consider $\lambda_{P}\lambda_{I(s)} = \lambda_{P(1,\ldots,s-1)}\lambda_{P(s)}\lambda_{I(s)}$.
It is equal to a sum of $\lambda_{P(1,\ldots,s-1)}\lambda_{Q(1,2)}$ where both $\lambda_{P(1,\ldots,s-1)}$ and $\lambda_{Q(1,2)}$ are admissible monomials.
By the base case $s = 1$, we have $P(s) + Q(1)\geq Q(2)$.
Adding those two inequalities leads to $i + P(1) + \cdots + P(s-1) + Q(1) \geq Q(2)$.
Further applying the relations (\ref{eq:lambda_relation}) to $\lambda_{P(1,\ldots,s-1)}\lambda_{Q(1,2)}$ either preserves both sides or increases the left-hand side while decreasing the right-hand side.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:lambda_k_dsq}]
In this proof, we declare ${a\choose b} = 0$ if $a < 0$ or $b < 0$.
Let $x$ be any element in $M_{m}$.
\[
\begin{split}
d^{2}\left(\lambda_{I}\otimes x\right)
=&d^{2}(\lambda_{I})\otimes x\\
&\quad+\sum_{n\geq 1, m-2n< k}\lambda_{n-1}d(\lambda_{I})\otimes x\mathrm{Sq}^{n}\\
&\quad+\sum_{n\geq 1, m-2n< k}d\left(\lambda_{n-1}\lambda_{I}\right)\otimes x\mathrm{Sq}^{n}\\
&\quad+\sum_{i,j\geq 1, m-2i< k, m-i-2j<k}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i}\mathrm{Sq}^{j}\\
=&\sum_{n\geq 2, m-2n< k}d(\lambda_{n-1})\lambda_{I}\otimes x\mathrm{Sq}^{n}\\
&\quad+\sum_{i,j\geq 1, m-2i< k, m-i-2j<k}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i}\mathrm{Sq}^{j}\\
\end{split}
\]
For convenience, define $S(m, k)$ to be the set of indices $(i,j)\in\mathbb{Z}_{>0}^{2}$ satisfying \[m-2i< k, m-i-2j<k.\]
Define $A, B, C$ as
\[
\begin{split}
A:=&\sum_{(i, j)\in S(m, k), i\geq 2j}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i}\mathrm{Sq}^{j},\\
B:=&\sum_{(i, j)\in S(m, k), i< 2j}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i}\mathrm{Sq}^{j},\\
C:=&\sum_{n\geq 2, m-2n< k}d(\lambda_{n-1})\lambda_{I}\otimes x\mathrm{Sq}^{n}.
\end{split}
\]
Therefore, $d^{2}(\lambda_{I}\otimes x) = A + B + C$.
In $A$, the $\mathrm{Sq}^{i}\mathrm{Sq}^{j}$ is admissible but $\lambda_{j-1}\lambda_{i-1}$ is not.
Applying the relations (\ref{eq:lambda_relation}), we get
\[
\begin{split}
A =& \sum_{(i,j)\in S(m,k), i\geq 2j}\sum_{t\ge 0}{i-2j-t-1\choose t}\lambda_{i-j-t-1}\lambda_{2j+t-1}\lambda_{I}\otimes x\mathrm{Sq}^{i}\mathrm{Sq}^{j}\\
=& \sum_{(i,j)\in S(m,k), i\geq 2j}\sum_{v\ge 2j}{i-v-1\choose v-2j}\lambda_{i+j-v-1}\lambda_{v-1}\lambda_{I}\otimes x\mathrm{Sq}^{i}\mathrm{Sq}^{j}\\
=& \sum_{(i', j', s, t)\in A(m, k)}{s-i'-1\choose i'-2t}\lambda_{j'-1}\lambda_{i'-1}\lambda_{I}\otimes x\mathrm{Sq}^{s}\mathrm{Sq}^{t},
\end{split}
\]
where $A(m, k)$ is defined as the set of indices $(i, j, s, t) \in \mathbb{Z}_{>0}^{4}$ satisfying \[i< 2j, s\geq 2t, i+j = s+t, m-s-2t < k.\]
In the second equality, we used the substitution $v = 2j + t$.
In the third equality, we used the substitution $i' = v, j' = i+j-v, s = i, t = j$.
It is straightforward to check the third equality and the correspondence of the index sets.
In $B$, the $\lambda_{j-1}\lambda_{i-1}$ is admissible but $\mathrm{Sq}^{i}\mathrm{Sq}^{j}$ is not.
Applying the Adem relations, we get
\[
\begin{split}
B &= \sum_{(i,j)\in S(m, k), i < 2j}\sum_{t\geq 0}{j-t-1\choose i-2t}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i+j-t}\mathrm{Sq}^{t}\\
&= \sum_{(i,j)\in S(m, k), i < 2j}\sum_{t\geq 1}{j-t-1\choose i-2t}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i+j-t}\mathrm{Sq}^{t}\\
&\qquad+\sum_{(i,j)\in S(m, k), i < 2j}{j-1\choose i}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i+j}\\
&= D + E,
\end{split}
\]
where $D,E$ are defined as
\[
\begin{split}
D:=&\sum_{(i,j,s,t)\in D(m,k)}{s-i-1\choose i-2t}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{s}\mathrm{Sq}^{t},\\
E:=&\sum_{(i,j)\in E(m,k)}{j-1\choose i}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i+j},
\end{split}
\]
$D(m,k)$ is defined to be the set of indices $(i,j,s,t)\in\mathbb{Z}_{>0}^{4}$ satisfying \[i<2j,s\ge 2t, i+j = s+t, m-2i < k\]
and $E(m,k)$ is defined to be the set of indices $(i,j)\in\mathbb{Z}_{>0}^{2}$ satisfying \[i<2j, m-2i < k.\]
Therefore, $d^{2}(\lambda_{I}\otimes x) = A + D + E + C$.
Applying the differential formula to $d(\lambda_{n-1})$, we get
\[
\begin{split}
C =& \sum_{n\geq 2, m-2n< k}\sum_{i\geq 1}{n-i-1\choose i}\lambda_{n-i-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{n}\\
=& \sum_{(i,j)\in C(m, k)}{j-1\choose i}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i+j},
\end{split}
\]
where $C(m, k)$ is defined as the set of indices $(i, j) \in \mathbb{Z}_{>0}^{2}$ satisfying \[i<2j, m-2(i+j)< k.\]
In the second equality, we used the substitution $j = n - i$.
Observe that $A$ and $D$ are sums of the same expression over slightly different index sets, and the same holds for $E$ and $C$.
Take any $(i,j,s,t)\in D(m,k)$.
Then $(m-2i) - (m-s-2t) = s+2t-2i\geq 1$ because the binomial coefficient leads to $s-i-1\geq i-2t$.
So $(i,j,s,t)\in A(m,k)$ and $D(m,k)$ is a subset of $A(m,k)$.
The difference $A(m,k) - D(m,k)$ consists of the indices $(i,j,s,t)\in A(m,k)$ satisfying $m-2i \geq k$.
Take any $(i,j)\in E(m,k)$.
Then $(m-2i-2j) - (m-2i) = -2j < 0$.
So $(i,j)\in C(m,k)$ and $E(m,k)$ is a subset of $C(m,k)$.
The difference $C(m,k) - E(m,k)$ consists of the indices $(i,j)\in C(m,k)$ satisfying $m-2i\geq k$.
Therefore, \[A+D=\sum_{(i,j,s,t)\in A(m,k) - D(m,k)}{s-i-1\choose i-2t}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{s}\mathrm{Sq}^{t}\]
and \[E+C=\sum_{(i,j)\in C(m,k) - E(m,k)}{j-1\choose i}\lambda_{j-1}\lambda_{i-1}\lambda_{I}\otimes x\mathrm{Sq}^{i+j}.\]
The index $(i,j,s,t)$ in $A(m,k) - D(m,k)$ and the index $(i,j)$ in $C(m,k) - E(m,k)$ both satisfy $m-2i\geq k$.
By Lemma \ref{lem:lambda_comp_0}, $\lambda_{i-1}\lambda_{I} = 0$ when $m-2i\geq k$ and thus $A+D = 0, E+C = 0$.
Adding them up, we get $d^{2}(\lambda_{I}\otimes x) = A + D + E + C = 0$.
\end{proof}
\begin{remark}
$\Lambda_{k}$ is a contravariant exact functor from $\mathcal{U}_{k}$ to $\mathrm{Ch}^{*}(\mathrm{Gr}(\mathbb{F}_{2}\mathrm{Mod}))$.
\end{remark}
\subsection{Cohomology of the cochain complex $\Lambda_{k}(M)$}
The following theorem is our main result in this subsection.
It is the analogue of Proposition \ref{prop:lambda_coho} in the world of $\mathcal{U}_{k}$.
\begin{theorem}
\label{thm:lambda_k_coho}
For any module $M\in\mathcal{U}_{k}$, $\Lambda_{k}(M)$ is a cochain complex of length $\leq k$ with
\[H^{s,a}(\Lambda_{k}(M)) = \mathrm{Ext}_{k}^{s}(M,S_{k}(a))\quad\textrm{for all }s,a.\]
\end{theorem}
\begin{proof}
Let $M$ be any module in $\mathcal{U}_{k}$.
Consider a free resolution of $M$ \[\cdots\to P_{1}\to P_{0}\to M\to 0.\]
Applying the functor $\Lambda_{k}$ to the complex \[\cdots\to P_{2}\to P_{1}\to P_{0}\to 0,\] we get the following double complex $C^{*,*}$
\[\begin{tikzcd}
&\vdots&\vdots&\vdots&\\
\cdots&\Lambda_{k}^{2}(P_{2})\arrow[l]\arrow[u] &\Lambda_{k}^{2}(P_{1}) \arrow[l]\arrow[u] & \Lambda_{k}^{2}(P_{0}) \arrow[l] \arrow[u]& 0\arrow[l]\\
\cdots &\Lambda_{k}^{1}(P_{2})\arrow[l]\arrow[u] &\Lambda_{k}^{1}(P_{1}) \arrow[l]\arrow[u] & \Lambda_{k}^{1}(P_{0}) \arrow[l] \arrow[u]& 0\arrow[l]\\
\cdots &\Lambda_{k}^{0}(P_{2})\arrow[l]\arrow[u] &\Lambda_{k}^{0}(P_{1}) \arrow[l]\arrow[u] & \Lambda_{k}^{0}(P_{0}) \arrow[l] \arrow[u]& 0\arrow[l]\\
& 0\arrow[u]& 0\arrow[u]& 0\arrow[u]&
\end{tikzcd}\]
We get two spectral sequences from the double complex $C^{*,*}$ with $C^{s,i} = \Lambda_{k}^{s}(P_{i})$.
As we will see, both spectral sequences collapse at the second page.
Taking horizontal cohomology first, we get only one nontrivial column
\[\begin{tikzcd}
&\vdots&\vdots&\vdots&\\
\cdots&0\arrow[l]\arrow[u]&0\arrow[l]\arrow[u]&\Lambda_{k}^{2}(M)\arrow[l]\arrow[u]&0\arrow[l]\\
\cdots&0\arrow[l]\arrow[u]&0\arrow[l]\arrow[u]&\Lambda_{k}^{1}(M)\arrow[l]\arrow[u]&0\arrow[l]\\
\cdots&0\arrow[l]\arrow[u]&0\arrow[l]\arrow[u]&\Lambda_{k}^{0}(M)\arrow[l]\arrow[u]&0\arrow[l]\\
&0\arrow[u]&0\arrow[u]&0\arrow[u]&
\end{tikzcd}\]
because the functor $\Lambda_{k}^{s}:\mathcal{U}_{k}^{\mathrm{op}}\to\mathbb{F}_{2}\mathrm{Mod}$ is exact.
Then taking vertical cohomology, we get $H^{s}(\Lambda_{k}(M))$ at position $(s,0)$ and zero everywhere else.
Its absolute internal degree $a$ part is $H^{s,a}(\Lambda_{k}(M))$.
If we instead take vertical cohomology first, we get only one nontrivial row according to Proposition \ref{prop:lambda_k_coho}:
\[\begin{tikzcd}
&\vdots&\vdots&\\
\cdots&0 \arrow[l]\arrow[u] & 0 \arrow[l] \arrow[u]& 0\arrow[l]\\
\cdots&\bigoplus_{a\geq 0}\mathrm{Hom}_{k}(P_{1}, S_{k}(a)) \arrow[l]\arrow[u] & \bigoplus_{a\geq 0}\mathrm{Hom}_{k}(P_{0}, S_{k}(a)) \arrow[l] \arrow[u]& 0\arrow[l]\\
& 0\arrow[u]& 0\arrow[u]&
\end{tikzcd}\]
Then taking horizontal cohomology, we get $\bigoplus_{a\geq 0}\mathrm{Ext}_{k}^{s}(M,S_{k}(a))$ at position $(0, s)$ and zero everywhere else.
Its absolute internal degree $a$ piece is $\mathrm{Ext}_{k}^{s}(M,S_{k}(a))$.
Since those two limits of spectral sequences are the same and the absolute internal degrees should match, we have \[H^{s,a}(\Lambda_{k}(M)) = \mathrm{Ext}^{s}_{k}(M,S_{k}(a)).\qedhere\]
\end{proof}
\begin{proposition}
\label{prop:lambda_k_coho}
\[H^{s,a}(\Lambda_{k}F_{k}(n)) =
\left\{\begin{split}
&\mathbb{F}_{2}&\textrm{if } s = 0,a = n\\
&0&\textrm{otherwise}
\end{split}\right.
\]
Or equivalently,
\[H^{s,a}(\Lambda_{k}F_{k}(n)) =
\left\{\begin{split}
&\mathrm{Hom}_{k}(F_{k}(n), S_{k}(a))&\textrm{if } s = 0\\
&0&\textrm{if }s > 0
\end{split}\right.
\]
\end{proposition}
\begin{proof}
Construction \ref{cons:P^u} provides a decreasing filtration $\left(P^{u}\right)_{u\geq 0}$ of the cochain complex $\Lambda_{k}F_{k}(n)$.
Construction \ref{cons:Q^u_L} provides an increasing multi-filtration $\left(Q^{u}_{L}\right)_{L\in\mathbb{N}^{u}}$ of the cochain complex $P^{u}/P^{u+1}$.
By Lemma \ref{lem:Q^u_L_union}, their union is \[\bigcup_{L\in\mathbb{N}^{u}}Q^{u}_{L} = P^{u}/P^{u+1}.\]
Denote by $Q^{u}_{<L}$ the sum of cochain complexes $Q^{u}_{L'}$ with $L'< L$ lexicographically.
Lemma \ref{lem:hom_Q_ag} calculates the cohomology of the associated graded $Q^{u}_{L}/Q^{u}_{<L}$: it has zero cohomology when $u\geq 1$; the associated graded $Q^{0}_{\emptyset}/Q^{0}_{<\emptyset}$ has cohomology $\mathbb{F}_{2}$ at $(s,a) = (0,n)$ and zero everywhere else.
Adding them up, we get the cohomology of the original cochain complex $\Lambda_{k}F_{k}(n)$.
\end{proof}
\begin{remark}
If $\mathrm{Sq}^{J}\iota_{n}$ is an element of the basis of $F_{k}(n)$ by admissibles as in Proposition \ref{prop:Fkn} and is of degree $m$, then we use $\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ to denote the element in $F_{k}(n)_{m} = \mathrm{Hom}(F_{k}(n)^{m},\mathbb{F}_{2})$ which sends $\mathrm{Sq}^{J}\iota_{n}$ to 1 and all other admissible basis vectors to 0.
Every element in $\Lambda_{k}F_{k}(n)$ can be written uniquely as a sum of $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ with
\begin{itemize}
\item
$\lambda_{I}$ and $\mathrm{Sq}^{J}$ are both admissible,
\item
$\lambda_{I}$ is nonzero in $\Lambda_{k}(m)$ where $m$ is the degree of $\mathrm{Sq}^{J}\iota_{n}$.
\end{itemize}
We say $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ is \emph{admissible} if it satisfies the two conditions above.
In general, when $S$ forms an additive basis for $T$, we say $s\in S$ \emph{shows up} in $t\in T$ if $t = \sum_{s'\in S'}s'$ with $S'\subseteq S$ and $s\in S'$.
\end{remark}
\begin{construction}
\label{cons:P^u}
For any $u\geq 0$, define $P^{u}$ to be the sub-bigraded vector space of $\Lambda_{k}F_{k}(n)$ spanned by admissible $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ with $|I| + |J| \geq u$.
Here $|I|$ and $|J|$ denote the lengths of $I$ and $J$.
\end{construction}
$P^{u}$ is a subcomplex of $\Lambda_{k}F_{k}(n)$ by virtue of the following lemma.
\begin{lemma}
$P^{u}$ is closed under the differential.
\end{lemma}
\begin{proof}
We need to prove $d(P^{u})\subseteq P^{u}$.
Let $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ be admissible.
We have \[d(\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}) = \sum_{I',J'}\lambda_{I'}\otimes\left(\mathrm{Sq}^{J'}\iota_{n}\right)^{\vee}.\]
It suffices to prove $|I'| + |J'| \geq |I| +|J|$.
We know $|I'| = |I| + 1$.
So it suffices to prove $|J'|\geq |J| - 1$.
If $J' = J$, we are done.
Otherwise, $\lambda_{I'}\otimes\left(\mathrm{Sq}^{J'}\iota_{n}\right)^{\vee}$ must show up in $\lambda_{i-1}\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}\mathrm{Sq}^{i}$, which means that $\mathrm{Sq}^{J}\iota_{n}$ must show up in $\mathrm{Sq}^{i}\mathrm{Sq}^{J'}\iota_{n}$.
The Adem relations do not increase the length of the Steenrod squares.
So $|J|\leq 1 + |J'|$.
\end{proof}
Observe that $P^{u+1}$ is a subcomplex of $P^{u}$.
\begin{construction}
\label{cons:Q^u_L}
The associated graded $P^{u}/P^{u+1}$ is a cochain complex spanned by the admissible $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ with $|I| + |J| = u$.
For any $u\geq 0$ and any $L\in\mathbb{N}^{u}$, define $Q_{L}^{u}$ to be the sub-bigraded vector space of $P^{u}/P^{u+1}$ spanned by admissible $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ with \[(I(s) + 1, \ldots, I(1) + 1, J(1), \ldots,J(t))\leq L,\] where $s:=|I|, t:=|J|$.
The index set $\mathbb{N}^{u}$ is ordered lexicographically.
For example, when $u = 2$ and $L \in \mathbb{N}^{2}$, we have $(1,6) < (4,1)$ and $(1, 2) < (1, 4)$.
When $u = 0$, we have $\mathbb{N}^{0} = \{\emptyset\}$ and $Q_{\emptyset}^{0}$ is spanned by $1\otimes\iota_{n}$.
\end{construction}
$Q_{L}^{u}$ is a subcomplex of $P^{u}/P^{u+1}$ by virtue of the following lemma.
\begin{lemma}
\label{lem:Q^u_L}
$Q^u_L$ is closed under the differential.
\end{lemma}
\begin{proof}
When $u = 0$ and $L = \emptyset$, $Q^{0}_{\emptyset}$ is a cochain complex with only one nontrivial element $1\otimes\iota_{n}$.
Assume $u\geq 1$.
We need to prove $d(Q^{u}_{L})\subseteq Q^{u}_{L}$.
Let $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ be admissible in $P^{u}/P^{u+1}$.
We have \[d(\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}) = \sum_{I',J'}\lambda_{I'}\otimes\left(\mathrm{Sq}^{J'}\iota_{n}\right)^{\vee}.\]
It suffices to prove \[(I'(s+1)+1,\ldots,I'(1)+1,J')\leq (I(s)+1,\ldots,I(1)+1,J).\]
Since everything is in $P^{u}/P^{u+1}$, we have $|I'| + |J'| = |I| + |J|$, so $|I'| = |I| + 1$ and $|J'| = |J| - 1$.
So $\lambda_{I'}\otimes\left(\mathrm{Sq}^{J'}\iota_{n}\right)^{\vee}$ shows up in $\lambda_{i-1}\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}\mathrm{Sq}^{i}$.
That is, $\lambda_{I'}$ shows up in $\lambda_{i-1}\lambda_{I}$, and $\mathrm{Sq}^{J}\iota_{n}$ shows up in $\mathrm{Sq}^{i}\mathrm{Sq}^{J'}\iota_{n}$.
The Adem relations in the Steenrod algebra tell us that $J\geq (i, J')$.
So \[(I(s)+1,\ldots,I(1) + 1, i, J')\le (I(s)+1,\ldots,I(1) + 1,J).\]
If $\lambda_{i-1}\lambda_{I}$ is not admissible, then \[(I'(s+1) + 1, \ldots, I'(1) + 1) < (I(s) + 1, \ldots, I(1) + 1, i).\]
If $\lambda_{i-1}\lambda_{I}$ is admissible, then \[(I'(s+1) + 1, \ldots, I'(1) + 1) = (I(s) + 1, \ldots, I(1) + 1, i).\]
So in either case, we have \[(I'(s+1)+1,\ldots,I'(1)+1,J')\leq (I(s)+1,\ldots,I(1)+1,J).\qedhere\]
\end{proof}
Observe that $Q^{u}_{L}$ is a subcomplex of $Q^{u}_{L'}$ when $L\leq L'$ lexicographically.
\begin{lemma}
\label{lem:Q^u_L_union}
\[\bigcup_{L\in\mathbb{N}^{u}}Q^{u}_{L} = P^{u}/P^{u+1}.\]
\end{lemma}
\begin{proof}
When $u = 0$, we have $Q^{0}_{\emptyset} = P^{0}/P^{1}$.
When $u\geq 1$, $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ lives in $Q^{u}_{L}$ when $L = (I(s)+1,\ldots,I(1) + 1, J)$.
\end{proof}
\begin{lemma}
\label{lem:hom_Q_ag}
Define $Q^{u}_{<L}$ as the sum of cochain complexes $Q^{u}_{L'}$ with $L'< L$.
The associated graded $Q^{u}_{L}/Q^{u}_{<L}$ has zero cohomology when $u\geq 1$.
The associated graded $Q^{0}_{\emptyset}/Q^{0}_{<\emptyset}$ has cohomology $\mathbb{F}_{2}$ at $(s,a) = (0,n)$ and zero everywhere else.
\end{lemma}
\begin{proof}
The case $u = 0$ is obvious.
Assume $u\geq 1$.
The associated graded $Q^{u}_{L}/Q^{u}_{<L}$ is spanned by admissible $\lambda_{I}\otimes(\mathrm{Sq}^{J}\iota_{n})^{\vee}$ with \begin{equation}\label{eq:lem:hom_Q_ag}(I(s)+1,\ldots,I(1) + 1, J(1),\ldots,J(t)) = L.\end{equation}
When there exist no admissible $\lambda_{I}$ and $\mathrm{Sq}^{J}$ such that equation (\ref{eq:lem:hom_Q_ag}) holds, the associated graded is zero and we are done.
Now assume there exist admissible $\lambda_{I}$ and $\mathrm{Sq}^{J}$ such that equation (\ref{eq:lem:hom_Q_ag}) is true.
The admissibility requires that $I(r+1)\leq 2I(r)$ or equivalently $I(r+1)+1<2(I(r)+1)$ and $J(r)\ge 2J(r+1)$.
So at most two admissible basis elements satisfy equation (\ref{eq:lem:hom_Q_ag}).
They are $\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}$ and $\lambda_{J(1) - 1}\lambda_{I}\otimes\left(\mathrm{Sq}^{J(2)}\cdots\mathrm{Sq}^{J(t)}\iota_{n}
\right)^{\vee}$ with $2(J(1)-1)\geq I(1)$.
Note that $\lambda_{I} = 0$ in $\Lambda_{k}(m)$ if and only if $\lambda_{J(1) - 1}\lambda_{I} = 0$ in $\Lambda_{k}(m-J(1))$.
So the associated graded will either be zero or $\cdots\to0\to\mathbb{F}_{2}\to\mathbb{F}_{2}\to0\to\cdots$.
According to the proof of Lemma \ref{lem:Q^u_L}, the differential will be
\[d\left(\lambda_{I}\otimes\left(\mathrm{Sq}^{J}\iota_{n}\right)^{\vee}\right) = \lambda_{J(1) - 1}\lambda_{I}\otimes\left(\mathrm{Sq}^{J(2)}\cdots\mathrm{Sq}^{J(t)}\iota_{n}\right)^{\vee}\] if $I(1)\leq 2(J(1) - 1)$ and zero otherwise.
Therefore, the map $\mathbb{F}_{2}\to\mathbb{F}_{2}$ is the identity map.
So the cohomology of the associated graded is always zero when $u\geq 1$.
\end{proof}
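The dichotomy driving this proof (each adjacent pair of entries of $L$ satisfies exactly one of the admissibility-type inequalities, so at most two splittings of $L$ into an $I$-part and a $J$-part survive) can be checked mechanically. The following Python sketch is ours, not part of the paper; it assumes the conventions $I(r)\geq 0$ for $\lambda$-indices and $J(r)\geq 1$ for Steenrod indices.

```python
from itertools import product

def admissible_splits(L):
    """All ways to write L = (I(s)+1, ..., I(1)+1, J(1), ..., J(t))
    with lambda_I admissible (I(r+1) <= 2*I(r)) and Sq^J admissible
    (J(r) >= 2*J(r+1)); entries of I must be >= 0, entries of J >= 1."""
    splits = []
    for s in range(len(L) + 1):
        I = [l - 1 for l in reversed(L[:s])]   # I(1), ..., I(s)
        J = list(L[s:])                        # J(1), ..., J(t)
        ok = (all(i >= 0 for i in I) and all(j >= 1 for j in J)
              and all(I[r + 1] <= 2 * I[r] for r in range(len(I) - 1))
              and all(J[r] >= 2 * J[r + 1] for r in range(len(J) - 1)))
        if ok:
            splits.append((tuple(I), tuple(J)))
    return splits

# every short positive sequence L admits at most two admissible splittings
assert all(len(admissible_splits(L)) <= 2
           for m in range(1, 5) for L in product(range(1, 7), repeat=m))
```

For instance, $L = (2,4,2,1)$ admits exactly the two splittings $I=(1)$, $J=(4,2,1)$ and $I=(3,1)$, $J=(2,1)$, matching the pair of basis elements exhibited in the proof.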
\begin{remark}
The arguments in this subsection give another proof of Proposition \ref{prop:lambda_coho}.
\end{remark}
\section{Inverse system of $\mathrm{Ext}$ groups}
\label{sec:invsys}
Since the forgetful functor $u:\mathcal{U}_{k+1}\to\mathcal{U}_{k}$ preserves projectives by Proposition \ref{prop:prefor} and is exact by Proposition \ref{prop:exfor}, it induces a map of $\mathrm{Ext}$ groups $\mathrm{Ext}_{k+1}^{s}(M, N)\to \mathrm{Ext}_{k}^{s}(uM, uN)$, where $M$ and $N$ are any two modules in $\mathcal{U}_{k+1}$.
Therefore, given any two modules $M$ and $N$ in $\mathcal{U}$, we have an inverse system of $\mathrm{Ext}$ groups
\[\cdots\to\mathrm{Ext}^{s}_{2}(uM, uN)\to \mathrm{Ext}^{s}_{1}(uM, uN)\to \mathrm{Ext}^{s}_{0}(uM, uN).\]
In this section, we will study this inverse system and its inverse limit.
The main result of this section is summarized in the following theorem.
\begin{theorem}
\label{thm:is}
Let $M$ and $N$ be two nonzero modules in the category $\mathcal{U}$.
Let $s$ be any nonnegative integer.
If $N$ is bounded above with top nontrivial degree $n$, then the inverse system
\[\cdots\to\mathrm{Ext}^{s}_{2}(uM, uN)\to \mathrm{Ext}^{s}_{1}(uM, uN)\to \mathrm{Ext}^{s}_{0}(uM, uN)\]
stabilizes and its limit is equal to $\mathrm{Ext}^{s}_{\mathcal{U}}(M,N)$.
More specifically, when $k\geq n-1$, the maps
\[\mathrm{Ext}_{k+1}^{s}(uM,uN)\to\mathrm{Ext}_{k}^{s}(uM,uN)\]
and
\[\mathrm{Ext}_{\mathcal{U}}^{s}(M,N)\to\mathrm{Ext}_{k}^{s}(uM,uN)\]
are isomorphisms.
\end{theorem}
Before proving this theorem, we take some time defining the decomposables and indecomposables of modules in $\mathcal{U}$ or $\mathcal{U}_{k}$ and presenting several lemmas.
\begin{definition}[Decomposables and indecomposables]
Let $M$ be a module in $\mathcal{U}$.
Let $N$ be the submodule of $M$ generated by $\mathrm{Sq}^{i}x$ with $i\geq 1$ and $x\in M$.
The submodule $N$ is called the decomposables of $M$ and the quotient module $M/N$ is called the indecomposables of $M$.
Note that both $N$ and $M/N$ live in $\mathcal{U}$.
\end{definition}
Similar definitions apply for a module in $\mathcal{U}_{k}$.
\begin{definition}[Decomposables and indecomposables]
Let $k\geq 0$ and $M$ be a module in $\mathcal{U}_{k}$.
Let $N$ be the submodule of $M$ generated by $\mathrm{Sq}^{i}x$ where $i\geq 1, |x| - i < k$ and $x$ is any nonzero homogeneous element in $M$.
The submodule $N$ is called the decomposables of $M$ and the quotient module $M/N$ is called the indecomposables of $M$.
Note that both $N$ and $M/N$ live in $\mathcal{U}_{k}$.
\end{definition}
\begin{lemma}
\label{lem:sphere_indecom}
If $M$ is a module in $\mathcal{U}_{k}$, then $\mathcal{U}_{k}(M, S_{k}(n))$ is equal to the dual of the degree $n$ part of the indecomposables of $M$.
If $M$ is a module in $\mathcal{U}$, then $\mathcal{U}(M,S(n))$ is equal to the dual of the degree $n$ part of the indecomposables of $M$.
\end{lemma}
\begin{proof}
We assume $M$ is a module in $\mathcal{U}_{k}$.
The $\mathcal{U}$ case is quite similar and omitted.
The $\mathbb{F}_{2}$-module $\mathcal{U}_{k}(M, S_{k}(n))$ is equal to the set of $\mathbb{F}_{2}$-linear maps from $M^{n}$ to $\mathbb{F}_{2}$ such that \[f(\mathrm{Sq}^{i}x) = 0\quad\forall i\geq 1, n-2i<k, x\in M^{n-i}.\]
This is equivalent to the set of $\mathbb{F}_{2}$-linear maps from $(M/N)^{n}$ to $\mathbb{F}_{2}$, where $N$ is the decomposables of $M$.
\end{proof}
\begin{lemma}
\label{lem:Ext_iso_sphere}
Suppose that $M$ is a module in $\mathcal{U}_{k+1}$.
If $n\not\in\{k+2,k+4,k+6,\ldots\}$, then $\mathrm{Ext}_{k+1}^{s}(M, S_{k+1}(n))\to \mathrm{Ext}_{k}^{s}(uM, S_{k}(n))$ is an isomorphism for any $s$.
\end{lemma}
\begin{proof}
It suffices to prove that $\mathcal{U}_{k+1}(M, S_{k+1}(n))\to \mathcal{U}_{k}(uM, S_{k}(n))$ is an isomorphism.
By Lemma \ref{lem:sphere_indecom}, it suffices to prove that the degree $n$ part of the indecomposables of $M$ and $uM$ are the same.
Those two indecomposables only differ by the image of the lower Steenrod operations $\mathrm{Sq}_{k} = \mathrm{Sq}^{m-k}$ from degree $m$ to degree $2m-k$ with $m\geq k+1$.
But those $\mathrm{Sq}_{k}$'s do not have target degree $n$, because their target degrees are $k+2, k+4, k+6, \ldots$.
\end{proof}
\begin{lemma}
\label{lem:Ext_iso_single_deg}
Suppose that $M, N$ are two modules in $\mathcal{U}_{k+1}$.
If $n\not\in\{k+2,k+4,k+6,\ldots\}$ and $N^{i} = 0$ for any $i\neq n$, then $\mathrm{Ext}_{k+1}^{s}(M, N)\to \mathrm{Ext}_{k}^{s}(uM, uN)$ is an isomorphism for any $s$.
\end{lemma}
\begin{proof}
The module $N$ is equal to a direct sum of sphere modules $S_{k+1}(n)$.
Say $N = \bigoplus_{j\in J} S_{k+1}(n)$, where $J$ is an index set.
Therefore, \[\mathrm{Ext}_{k+1}^{s}(M, N) = \bigoplus_{j\in J}\mathrm{Ext}_{k+1}^{s}(M, S_{k+1}(n))\] and \[uN = \bigoplus_{j\in J} S_{k}(n),\quad\mathrm{Ext}_{k}^{s}(uM, uN) = \bigoplus_{j\in J}\mathrm{Ext}_{k}^{s}(uM, S_{k}(n)).\]The map $\mathrm{Ext}_{k+1}^{s}(M, S_{k+1}(n))\to \mathrm{Ext}_{k}^{s}(uM, S_{k}(n))$ is an isomorphism by Lemma \ref{lem:Ext_iso_sphere} and thus so is the map $\mathrm{Ext}_{k+1}^{s}(M, N)\to \mathrm{Ext}_{k}^{s}(uM, uN)$.
\end{proof}
\begin{proposition}
\label{prop:U_k_is}
Suppose that $M, N$ are two modules in $\mathcal{U}_{k+1}$.
If $N$ is bounded above with the top nontrivial degree $n \leq k+1$, then $\mathrm{Ext}_{k+1}^{s}(M, N)\to \mathrm{Ext}_{k}^{s}(uM, uN)$ is an isomorphism for any $s$.
\end{proposition}
\begin{proof}
We will proceed by induction on $n$.
The lemma above solves the base case $n = 0$.
Now assume $n > 0$.
We have the following short exact sequence in $\mathcal{U}_{k+1}$
\[0\to N'\to N\to N''\to 0,\]
where $N'$ is the degree $n$ part of $N$ and $N''$ is the degree $<n$ part of $N$.
This short exact sequence gives rise to a long exact sequence of $\mathrm{Ext}$ groups
\[\cdots\to \mathrm{Ext}_{k+1}^{s}(M, N')\to \mathrm{Ext}_{k+1}^{s}(M, N)\to \mathrm{Ext}_{k+1}^{s}(M, N'')\to \cdots\]
Since the forgetful functor $u:\mathcal{U}_{k+1}\to\mathcal{U}_{k}$ is exact, we have the following short exact sequence in $\mathcal{U}_{k}$
\[0\to uN'\to uN\to uN''\to 0\] and similarly we have the following long exact sequence of $\mathrm{Ext}$ groups
\[\cdots\to \mathrm{Ext}_{k}^{s}(uM, uN')\to \mathrm{Ext}_{k}^{s}(uM, uN)\to \mathrm{Ext}_{k}^{s}(uM, uN'')\to \cdots\]
The two long exact sequences above form a commutative diagram
\[\begin{tikzcd}
\cdots\arrow[r]&\mathrm{Ext}_{k+1}^{s}(M, N')\arrow[r]\arrow[d]&\mathrm{Ext}_{k+1}^{s}(M, N)\arrow[r]\arrow[d] &\mathrm{Ext}_{k+1}^{s}(M, N'')\arrow[r]\arrow[d]&\cdots\\
\cdots\arrow[r]&\mathrm{Ext}_{k}^{s}(uM,uN')\arrow[r]&\mathrm{Ext}_{k}^{s}(uM,uN)\arrow[r]&\mathrm{Ext}_{k}^{s}(uM, uN'')\arrow[r]&\cdots
\end{tikzcd}\]
By Lemma \ref{lem:Ext_iso_single_deg}, the map $\mathrm{Ext}^{s}_{k+1}(M,N')\to\mathrm{Ext}^{s}_{k}(uM,uN')$ is an isomorphism for any $s$.
By the induction hypothesis, the map $\mathrm{Ext}^{s}_{k+1}(M,N'')\to\mathrm{Ext}^{s}_{k}(uM,uN'')$ is an isomorphism for any $s$.
By the five lemma, the middle map $\mathrm{Ext}^{s}_{k+1}(M, N)\to\mathrm{Ext}^{s}_{k}(uM,uN)$ is an isomorphism for all $s$.
\end{proof}
\begin{lemma}
\label{lem:Ext_iso_single_deg_U}
Suppose that $M,N$ are two modules in $\mathcal{U}$.
If $n \leq k+1$ and $N^{i} = 0$ for any $i\neq n$, then the map $\mathrm{Ext}_{\mathcal{U}}^{s}(M,N)\to\mathrm{Ext}_{k}^{s}(uM,uN)$ is an isomorphism for all $s$.
\end{lemma}
\begin{proof}
The proof is similar to the proof of Lemma \ref{lem:Ext_iso_single_deg}.
The core observation is that when $n \leq k+1$, the degree $n$ parts of the indecomposables of $M\in\mathcal{U}$ and $uM\in\mathcal{U}_{k}$ are the same.
\end{proof}
\begin{proposition}
\label{prop:U_is}
Suppose that $M,N$ are two modules in $\mathcal{U}$.
If $N$ is bounded above and the top nontrivial degree $n \leq k+1$, then $\mathrm{Ext}_{\mathcal{U}}^{s}(M,N)\to\mathrm{Ext}_{k}^{s}(uM,uN)$ is an isomorphism for any $s$.
\end{proposition}
\begin{proof}
The proof is quite similar to the proof of Proposition \ref{prop:U_k_is}.
So we only give a sketch.
We split $N$ into the degree $n$ part $N'$ and the degree $<n$ part $N''$.
Then Lemma \ref{lem:Ext_iso_single_deg_U} and an induction on $n$ complete the proof.
\end{proof}
Proposition \ref{prop:U_k_is} and Proposition \ref{prop:U_is} together prove Theorem \ref{thm:is}.
% End of arXiv:2006.11612, "Unstable Modules with the Top $k$ Squares" (math.AT).

% arXiv:1405.5587, "Parking functions, Shi arrangements, and mixed graphs"
\begin{abstract}
The \emph{Shi arrangement} is the set of all hyperplanes in $\mathbb R^n$ of the form $x_j - x_k = 0$ or $1$ for $1 \le j < k \le n$. Shi observed in 1986 that the number of regions (i.e., connected components of the complement) of this arrangement is $(n+1)^{n-1}$. An unrelated combinatorial concept is that of a \emph{parking function}, i.e., a sequence $(x_1, x_2, \dots, x_n)$ of positive integers that, when rearranged from smallest to largest, satisfies $x_k \le k$. (There is an illustrative reason for the term \emph{parking function}.) It turns out that the number of parking functions of length $n$ also equals $(n+1)^{n-1}$, a result due to Konheim and Weiss from 1966. A natural problem consists of finding a bijection between the $n$-dimensional Shi arrangement and the parking functions of length $n$. Stanley and Pak (1996) and Athanasiadis and Linusson (1999) gave such (quite different) bijections. We will shed new light on the former bijection by taking a scenic route through certain mixed graphs.
\end{abstract}
\section{Introduction}
Our goal is to draw (bijective) connections between three seemingly unrelated concepts; their names
form the title of our paper, and we start by introducing them one by one.
\subsection{Parking Functions}
Imagine a one-way street with $n$ parking spots and a cliff at its end. We'll give the first parking spot the number
1, the next one number 2, etc., down to the last one, number $n$. Initially they're all free, but
there are $n$ cars approaching the street, and they'd all like to park.
To make life interesting, every car has a parking preference, and we record the preferences in a
sequence; e.g., if $n=3$, the sequence $(2, 1, 1)$ means that the first car would like to park at
spot number 2, the second car prefers parking spot number 1, and the last car would also like to
park at number 1. The street is narrow, so there is no way to back up. Now each car enters the
street and approaches its preferred parking spot; if it is free, it parks there, and if not, it
moves down the street to the first available spot. We call a sequence a \Def{parking function} (of
length $n$) if all cars end up happily finding a parking spot, i.e., none fall off the cliff.
For example, the sequence $(2, 1, 1)$ is a parking function (of length 3), whereas the sequence
$(1,3,3,4)$ is not (see Figure~\ref{fig:nonparkingfunction}).
\begin{figure}[h]
\includegraphics[scale=.3]{park}
\caption{A sequence that is not a parking function.}\label{fig:nonparkingfunction}
\end{figure}
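The parking process just described is easy to simulate directly; the following short Python sketch (ours, purely illustrative) reproduces the two examples above.

```python
def parks(x):
    """Simulate the one-way street: each car drives to its preferred
    spot and, if it is taken, rolls forward to the first free spot;
    return True iff every car parks (no one falls off the cliff)."""
    n = len(x)
    free = [True] * (n + 1)        # spots are numbered 1..n
    for pref in x:
        spot = pref
        while spot <= n and not free[spot]:
            spot += 1
        if spot > n:               # drove past the last spot
            return False
        free[spot] = False
    return True
```

As in the text, `parks((2, 1, 1))` is `True` while `parks((1, 3, 3, 4))` is `False`: the fourth car finds spots 3 and 4 occupied and falls off the cliff.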
The following enumerative result makes for a fun (nontrivial) exercise in any undergraduate
combinatorics class. The earliest reference we are aware of is \cite[Section 5]{pyke}, though its
explicit relation to parking functions first appeared in~\cite{konheimweiss}.
\begin{theorem}\label{thm:parkingfunctions}
There are precisely $(n+1)^{ n-1 }$ parking functions of length~$n$.
\end{theorem}
One possible proof of this theorem starts with the following equivalence, which we will need below:
\begin{lemma}\label{lem:parkingfunctionperm}
A sequence $\mathbf{x} \in \mathbb{Z}_{ >0 }^n$ is a parking function if and only if $\mathbf{x}$ is componentwise less than
or equal to some permutation of $(1, 2, \dots, n)$.
\end{lemma}
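The criterion of the lemma makes Theorem \ref{thm:parkingfunctions} checkable by brute force for small $n$; this Python sketch (our own sanity check, not part of the paper) counts the parking functions of length $3$.

```python
from itertools import product

def is_parking_function(x):
    """Lemma form: x is a parking function iff its increasing
    rearrangement satisfies x_(k) <= k, i.e. x is componentwise
    <= some permutation of (1, ..., n)."""
    return all(v <= k + 1 for k, v in enumerate(sorted(x)))

n = 3
count = sum(is_parking_function(x)
            for x in product(range(1, n + 1), repeat=n))
# the theorem predicts (n + 1)**(n - 1) = 16 parking functions
```

Indeed `count` equals $16 = 4^2$, as Theorem \ref{thm:parkingfunctions} predicts.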
This equivalent notion also explains why parking functions naturally appear in many contexts; these include tree inversions \cite{kreweras}, symmetric functions \cite{haimanparking}, Riemann--Roch theory for graphs (where
parking functions are also called \emph{reduced divisors} \cite{bakernorine}), Hopf algebras \cite{novellithibon}, chip-firing games (where parking functions go by the name of \emph{superstable configurations} \cite{holroydlevinemeszarosetal}), and vertex operators~\cite{dotsenko}.
\subsection{Shi Arrangements}
A \Def{hyperplane arrangement} is a finite collection of hyperplanes in some Euclidean space, i.e., sets of the form
\[
\left\{ \mathbf{x} \in \mathbb{R}^n : \, \mathbf{a} \, \mathbf{x} = b \right\}
\]
for some $\mathbf{a} \in \mathbb{R}^n \setminus \{ \mathbf{0} \}$ and $b \in \mathbb{R}$.
A famous example is the $n$-dimensional \Def{(real) braid arrangement} consisting of all hyperplanes of the form $x_j = x_k$
for $1 \le j < k \le n$; it has a natural connection to the symmetric group $S_n$, and indeed, the
braid arrangement forms a geometric bridge between various algebraic and combinatorial concepts.
A picture (in this case, Figure \ref{fig:braid}) is worth a thousand words.
\begin{figure}[htb]
\def1{.8}
\def${$}
\def${$}
\begin{center}
\input{braid3}
\end{center}
\caption{The bijection between permutations and regions of the braid arrangement illustrated for $n=3$.}\label{fig:braid}
\end{figure}
The $n$-dimensional \Def{Shi arrangement} is a close relative to the braid arrangement: it consists of all hyperplanes of the form
\[
x_j - x_k = 0
\qquad \text{ and } \qquad
x_j - x_k = 1
\qquad \text{ for all } 1 \le j < k \le n \, .
\]
A \Def{region} of the hyperplane arrangement $\H$ is a maximal connected component of $\mathbb{R}^n \setminus
\bigcup \H$. The following result was first proved by Shi~\cite{shi}.
\begin{theorem}\label{thm:shi}
The $n$-dimensional Shi arrangement has precisely $(n+1)^{ n-1 }$ regions.
\end{theorem}
One proof of this theorem (first given in \cite{headley}, see also \cite{athanasiadisfinitefieldmethod}) is through
the \emph{characteristic polynomial} of the Shi arrangement, which carries more information than the
number of its regions \cite{zaslavskythesis}; e.g., with it one can also prove that the $n$-dimensional
Shi arrangement has precisely $(n-1)^{ n-1 }$ \emph{bounded} regions.
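For small $n$ the finite field method can be verified by brute force. The sketch below (ours, assuming the standard statement of the method: for a large prime $q$, the number of points of $\mathbb{F}_q^n$ off the reduced hyperplanes equals the characteristic polynomial at $q$) recovers $\chi(q) = q(q-n)^{n-1}$; Zaslavsky's theorem then gives $(-1)^n\chi(-1) = (n+1)^{n-1}$ regions.

```python
from itertools import product

def shi_complement_count(n, q):
    """Number of points of (Z/q)^n lying on none of the (mod-q reductions
    of the) Shi hyperplanes x_j - x_k = 0 and x_j - x_k = 1 for j < k."""
    return sum(
        all((x[j] - x[k]) % q not in (0, 1)
            for j in range(n) for k in range(j + 1, n))
        for x in product(range(q), repeat=n))

# For a large enough prime q this equals chi(q) = q (q - n)^(n - 1).
n, q = 3, 7
value = shi_complement_count(n, q)
```

Here `value` equals $7\cdot(7-3)^2 = 112$, matching $\chi(7)$ for the $3$-dimensional Shi arrangement.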
Hyperplane arrangements are ubiquitous in several areas of mathematics and form a research area in their own right.
Some highlights include connections between combinatorics (starting with \cite{zaslavskythesis}), matroid theory \cite{orientedmatroids}, topology (starting with \cite{hattori}), commutative algebra \cite{orliksolomon}, and algebraic geometry \cite{saito}.
We refer to \cite{orlikterao,stanleyhyparr} for further study.
\subsection{Parking Graphs}
A \Def{mixed graph} $G$ is an amphibian: $G$ is between a graph and a directed graph, in the sense that $G$
may contain both undirected and directed edges (see, e.g., \cite{hararypalmer}).
Mixed graphs have various applications, e.g., to scheduling problems.
Here we are only interested in a fairly special class of mixed graphs.
As with directed graphs, we define the \Def{in-degree} of a vertex $v$ of $G$ to be the number of directed edges pointing into $v$; the \Def{out-degree} of $v$ is the number of
directed edges pointing away from $v$.
A \Def{parking graph} $P$ is a mixed graph with vertex set $[n] := \left\{ 1, 2, \dots, n \right\} $
whose underlying graph is the complete graph $K_n$ and whose edges
satisfy what we will call the \emph{source-sink condition}. To describe it, we use the following
nomenclature for $1 \le j < k \le n$:
\begin{itemize}
\item a directed edge $j \leftarrow k$ in $P$ is a \Def{down edge};
\item a directed edge $j \rightarrow k$ in $P$ is an \Def{up edge};
\item an undirected edge $jk$ in $P$ is a \Def{downish edge}.
\end{itemize}
(See Figure \ref{fig:example} for examples.)
One reason for the last terminology is that we will associate to each parking graph $P$ a directed
graph $\vec P$ which, in addition to the directed edges of $P$, also contains each undirected edge
$jk$ of $P$ as a directed edge $j \leftarrow k$.
For example, the mixed graph in Figure \ref{fig:example} gives rise to a coherently oriented 3-cycle.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=.4]{updowntri.pdf}
\caption{A down ($1 \leftarrow 2$), up ($1 \rightarrow 3$), and downish ($23$) edge.}\label{fig:example}
\end{center}
\end{figure}
The \Def{source-sink condition} says that $\vec P$ is acyclic, i.e., it contains no coherently oriented cycles,
and for any triangle $G$ (complete subgraph on 3 vertices) of $P$ that has both a down and a downish edge, the source (i.e., the
vertex with in-degree 0) and sink (the vertex with out-degree 0) of $\vec G$ cannot be connected by a downish edge.
\begin{theorem}\label{thm:parkinggraph}
There are precisely $(n+1)^{ n-1 }$ parking graphs on the vertex set~$[n]$.
\end{theorem}
Both the notion of a parking graph and Theorem \ref{thm:parkinggraph} seem to be new, though close relatives of parking graphs can be found in \cite{hopkinsperkinson}.
We will give their motivation next.
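To make the source-sink condition concrete, here is a brute-force check in Python (our own sketch, not part of the paper). It orients each downish edge $jk$ as $j \leftarrow k$ to form $\vec P$, tests acyclicity on triangles only (which suffices because $\vec P$ is a tournament; see the remark opening Section 2), and confirms Theorem \ref{thm:parkinggraph} for $n=3$ by enumerating all $3^3 = 27$ labeled complete mixed graphs.

```python
from itertools import combinations, product

def is_parking_graph(n, state):
    """state[(j, k)] for 1 <= j < k <= n is 'down' (j <- k), 'up'
    (j -> k), or 'downish' (undirected jk); in the digraph P-vec a
    downish edge is oriented j <- k, so only 'up' arcs point upward."""
    def head(j, k):       # head of the arc carried by the pair (j, k)
        return k if state[(j, k)] == 'up' else j

    for tri in combinations(range(1, n + 1), 3):
        tri_pairs = list(combinations(tri, 2))
        indeg = {v: 0 for v in tri}
        for j, k in tri_pairs:
            indeg[head(j, k)] += 1
        # a tournament triangle is a coherent 3-cycle iff all in-degrees are 1
        if sorted(indeg.values()) == [1, 1, 1]:
            return False
        labels = {state[p] for p in tri_pairs}
        if 'down' in labels and 'downish' in labels:
            source = next(v for v in tri if indeg[v] == 0)
            sink = next(v for v in tri if indeg[v] == 2)
            edge = (min(source, sink), max(source, sink))
            if state[edge] == 'downish':   # forbidden source-sink edge
                return False
    return True

n = 3
pairs = list(combinations(range(1, n + 1), 2))
count = sum(
    is_parking_graph(n, dict(zip(pairs, choice)))
    for choice in product(['down', 'up', 'downish'], repeat=len(pairs)))
```

Of the $27$ assignments, exactly `count` $= 16 = (3+1)^{3-1}$ satisfy the source-sink condition.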
\subsection{A natural question}
Theorems \ref{thm:parkingfunctions} and \ref{thm:shi} naturally suggest the problem of finding a bijection between the parking functions of length $n$ and the regions of the $n$-dimensional Shi arrangement.
The first such bijection, due to Pak and Stanley \cite{stanleyshiproceedings}, recursively labels the regions of the Shi arrangement with parking functions from the inside out,
starting by giving the label $(1, 1, \dots, 1)$ to the ``central'' region $\left\{ \mathbf{x} \in \mathbb{R}^n : \, x_n + 1
> x_1 > x_2 > \dots > x_n \right\}$,
and giving a rule how the labels change as one crosses any of the Shi hyperplanes.
A different bijection, due to Athanasiadis and Linusson \cite{ath}, constructs a diagram of nonnesting arcs on $[n]$ from a region of the Shi arrangement in $\mathbb{R}^n$,
and each diagram in turn determines a parking function.
Our goal is to give a third bijection by way of parking graphs, hence proving Theorem \ref{thm:parkinggraph}: we will
exhibit a bijection between Shi regions and parking graphs and a bijection between parking graphs and parking
functions. This implies, of course, a bijection between Shi regions and parking functions, and its forward
direction turns out to be equivalent to the Pak--Stanley bijection. (Pak and Stanley ``only'' gave
an injection from the set of Shi regions in $\mathbb{R}^n$ to the set of parking functions of length $n$
and then appealed to the fact that both sets are equinumerous.)
Hopkins and Perkinson \cite{hopkinsperkinson} recently extended the Pak--Stanley bijection to \emph{bigraphic arrangements}, using mixed graphs as a similar
intermediate tool. Our construction is not identical but equivalent---our parking graphs correspond to Hopkins--Perkinson's \emph{Shi-admissible
orientations} of a complete graph, and our source-sink condition is equivalent to their condition of ``having no bad cycles.'' At any rate, our approach seems more direct (which should not be surprising, since Hopkins and Perkinson proved a more general result).
Even more recent work of Backman also essentially contains parking graphs; his definition adds a $q$-rooted spanning tree~\cite[Lemma 5.6]{backman}.
\subsection{Cayley's formula}
As a historical aside, the arguably most famous counting instance with answer $(n+1)^{ n-1 }$ is the number of labeled trees with $n+1$ vertices.
Thus a similar natural question concerns possible bijections between labeled trees and, say, parking functions. The oldest such bijection
we are aware of is due to Sch\"utzenberger \cite{schutzenbergerparking}; the arguably simplest bijection is due to Foata and Riordan \cite{foatariordan} and illustrates that bijections between labeled trees and parking
functions are generally easier than those between parking functions and regions of the Shi arrangements: The Foata--Riordan bijection starts with encoding a labeled tree by its \emph{Pr\"ufer code}, a recursively constructed sequence
containing the label of the vertex incident to the lowest-labeled leaf (which then gets removed to yield the recursion; this procedure stops when a single edge is left).\footnote{Pr\"ufer's idea \cite{pruefer} immediately
gives a bijection between labeled trees with $n+1$ vertices and sequences of length $n-1$ containing numbers in $[n+1]$, thus confirming Cayley's formula.}
Foata and Riordan use the following map, which they attribute to Henry O.\ Pollak, from the set of all parking functions of length $n$ to the set of all Pr\"ufer codes encoding labeled trees with $n+1$ vertices:
\begin{equation}\label{eq:pollak}
\left( x_1, x_2, \dots, x_n \right) \mapsto \left( x_2 - x_1 , x_3 - x_2 , \dots, x_n - x_{n-1} \right) \bmod n+1 \, .
\end{equation}
This map can be inverted if $x_1$ is known independently, and the Foata--Riordan proof shows that for any Pr\"ufer code, there exists a unique $x_1$ that gives an
inverse of \eqref{eq:pollak} yielding a parking function.
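Pollak's map \eqref{eq:pollak} and its inversion are short enough to code directly. The sketch below is ours (the name `unpollak` is our invention); the inversion step relies on the Foata--Riordan fact just quoted, namely that exactly one starting value $x_1$ completes a given difference code to a parking function.

```python
def pollak(x):
    """Pollak's map: consecutive differences of the parking function
    (x_1, ..., x_n), taken mod n + 1; the result has length n - 1."""
    n = len(x)
    return tuple((x[i + 1] - x[i]) % (n + 1) for i in range(n - 1))

def unpollak(code):
    """Invert Pollak's map: of the n + 1 possible starting values x_1,
    exactly one completes the difference code to a parking function."""
    n = len(code) + 1
    def is_pf(x):
        return all(v <= k + 1 for k, v in enumerate(sorted(x)))
    for x1 in range(1, n + 2):
        x = [x1]
        for d in code:
            x.append((x[-1] - 1 + d) % (n + 1) + 1)
        if is_pf(x):
            return tuple(x)
```

For instance, the parking function $(3,1,1,2)$ maps to the code $(3,0,1)$ modulo $5$, and `unpollak` recovers it: the candidate starting values $x_1 = 1, 2$ fail the parking test, while $x_1 = 3$ succeeds.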
\section{The Bijections}
A directed complete graph $G$ (also called a \emph{tournament}) contains a cycle if and only if there is a triangle subgraph in $G$ that is a cycle. As a consequence,
it suffices to limit acyclicity in the source-sink condition to triangles.
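This reduction is easy to confirm exhaustively for small tournaments; the following Python check (ours, purely a sanity check) compares a full depth-first-search cycle test against the triangles-only test over all $2^6 = 64$ tournaments on four vertices.

```python
from itertools import combinations, product

def acyclic(n, up):
    """up[(j, k)] = True orients the edge j -> k, otherwise k -> j;
    search for a directed cycle in the resulting tournament by DFS."""
    adj = {v: [] for v in range(n)}
    for (j, k), u in up.items():
        adj[j].append(k) if u else adj[k].append(j)
    state = {v: 0 for v in range(n)}   # 0 = new, 1 = on stack, 2 = done
    def has_cycle(v):
        state[v] = 1
        for w in adj[v]:
            if state[w] == 1 or (state[w] == 0 and has_cycle(w)):
                return True
        state[v] = 2
        return False
    return not any(state[v] == 0 and has_cycle(v) for v in range(n))

def no_3cycle(n, up):
    """A triangle is a coherent 3-cycle iff its three arc heads are distinct."""
    for tri in combinations(range(n), 3):
        heads = {k if up[(j, k)] else j for j, k in combinations(tri, 2)}
        if len(heads) == 3:
            return False
    return True

n = 4
pairs = list(combinations(range(n), 2))
agree = all(
    acyclic(n, dict(zip(pairs, bits))) == no_3cycle(n, dict(zip(pairs, bits)))
    for bits in product([False, True], repeat=len(pairs)))
```

The two tests agree on every orientation, as the opening remark asserts.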
Now we tackle the actual bijections. We start with a map from parking graphs to regions of the Shi
arrangement: given a parking graph $P$ on the vertex set $[n]$, we define
\begin{equation}\label{eq:defpsi}
\psi(P) := \left\{ \mathbf{x} \in \mathbb{R}^n : \,
\begin{array}{rl}
x_j - x_k \ge 1 & \text{ if } \ j \leftarrow k \ \text{ is down, } \\
0 \le x_j - x_k \le 1 & \text{ if } \ j k \ \text{ is downish, } \\
x_j - x_k \le 0 & \text{ if } \ j \rightarrow k \ \text{ is up }
\end{array}
\right\} .
\end{equation}
Thus $\psi(P)$ is a potential region of the $n$-dimensional Shi arrangement. We will now show that it is
an honest region, i.e., full-dimensional, and that every such region comes from a parking graph.
Figure \ref{fig:3dimex} shows the case $n=3$.
\begin{figure}[htb]
\def1{1}
\def${$}
\def${$}
\begin{center}
\input{shi3mixed}
\end{center}
\caption{The bijection between parking graphs on three vertices (all labeled as in the graph in the bottom right) and regions of the 3-dimensional Shi arrangement.}\label{fig:3dimex}
\end{figure}
\begin{theorem}\label{thm:graphtoshi}
The map $\psi$ gives a bijection between the parking graphs on the vertex set $[n]$ and the regions of
the $n$-dimensional Shi arrangement.
\end{theorem}
\begin{proof}
For a region $R$ of the $n$-dimensional Shi arrangement, we define (by a slight abuse of notation but in
light of what's to come) $\psi^{ -1 }(R)$ to be the mixed graph one obtains by reversing the
recipe given by \eqref{eq:defpsi}. Our goal is to show that both $\psi$ and $\psi^{ -1 }$ are well-defined
maps from the set of all parking graphs on the vertex set $[n]$ to the set of all regions of the $n$-dimensional Shi
arrangement and back. Since $\psi$ and $\psi^{ -1 }$ are inverses by construction, this will prove Theorem~\ref{thm:graphtoshi}.
It is slightly easier to see that $\psi^{ -1 }$ is well defined, so let's start with that: namely, we
need to show that, for any region $R$, the mixed graph $\psi^{ -1 }(R)$ satisfies the source-sink
condition. If $\overrightarrow{\psi^{ -1 }(R)}$ contains a cycle, then for some $j<k<m$ the region $R$ satisfies either
\[
\left\{ \begin{array}{r} x_j-x_k>0\\ x_k-x_m>0\\ x_m-x_j>0 \end{array} \right\}
\qquad\text{ or }\qquad
\left\{ \begin{array}{r} x_j-x_k<0\\ x_k-x_m<0\\ x_m-x_j<0 \end{array} \right\} .
\]
The first set of inequalities gives rise to the contradiction $x_j>x_j$ and the second to $x_j<x_j$.
If $\psi^{ -1 }(R)$ violates the downish part of the source-sink condition, then the defining inequalities for $R$ include both $0
\le x_j - x_k \le 1$ (where $k$ is the source and $j$ the sink of some triangle in $\overrightarrow{\psi^{ -1 }(R)}$) and $x_j - x_k \ge 1$ (see Figure \ref{fig:sourcesinkcond}); but a region has nontrivial interior, so no such $R$ can exist.
\begin{figure}[htb]
\begin{center}\def\svgwidth{3cm}
\input{sscdel.pdf_tex}
\end{center}
\caption{Violation of the source-sink condition; here $k<m<j$ and at least one of the two potential down edges is present.}\label{fig:sourcesinkcond}
\end{figure}
To see that $\psi$ is well defined, we need to show, given a parking graph $P$, that $\psi(P)$ has
nontrivial interior. Let's look at the in-degree sequence of $\vec P$; since $\vec P$ is acyclic, the in-degree sequence is a permutation $\left(
\sigma(0), \sigma(1), \dots, \sigma(n-1) \right)$ of the vector $(0, 1, \dots, n-1)$. Note that this
in-degree sequence also means that $j \to k$ is an up edge if and only if $\sigma(j-1) < \sigma(k-1)$.
Now choose $\mathbf{x} \in \mathbb{R}^n$ whose coordinates satisfy
\[
x_{ \sigma^{ -1 } (0) + 1 } < x_{ \sigma^{ -1 } (1) + 1 } < \dots < x_{ \sigma^{ -1 } (n-1) + 1 } \, .
\]
We claim that $\mathbf{x}$ is in the interior of $\psi(P)$.
Indeed, $\mathbf{x}$ satisfies all inequalities $x_j \ge x_k$ in \eqref{eq:defpsi} strictly, by construction. In words,
$\mathbf{x}$ respects both the inequality constraints stemming from an up edge and those stemming from a
down/downish edge. What remains to be shown is that there is no potential contradiction from the
interplay of the constraints stemming from a down and a downish edge. But the only way such a
contradiction could occur is when
\[
\begin{array}{ccccccccc}
0 & < & x_{ \sigma^{ -1 } (i) + 1 } & & - & & x_{ \sigma^{ -1 } (m) + 1 } & < & 1 \\
& & & x_{ \sigma^{ -1 } (j) + 1 } & - & x_{ \sigma^{ -1 } (k) + 1 } & & > & 1
\end{array}
\]
for some $m <i,~k < j $ and $ \sigma^{ -1 } (i) \le \sigma^{ -1 } (j)< \sigma^{ -1 } (k) \le \sigma^{ -1 } (m) $. This means that $P$ has a subgraph $G$ of
the form in Figure \ref{fig:submixedgraph} (where $m$ and $k$ or $j$ and $i$ could coincide).
\begin{figure}[htb]
\begin{center}\def\svgwidth{3cm}
\input{sscquad.pdf_tex}
\end{center}
\caption{A sub-mixed graph violating the source-sink condition. Vertex $i$ is the source and vertex $m$ is the sink. The edges between $i$ or $m$ and $j$ or $k$ may be up, down, or downish edges in the direction of the dashed arrows.}\label{fig:submixedgraph}
\end{figure}
In the subgraph $G$, vertex $i$ is the source, $m$ is the sink, the edge $im$ is downish, and $k\leftarrow j$ is a down edge. We
will show that $G$ violates the source-sink condition. Notice that
$k$ is the sink of the triangle on vertices $i,j,$ and $k$, and that $j$ is the source of the triangle on vertices $j,k$ and $m$.
If the edge $ki$ is downish, then the triangle on vertices $i,j$, and $k$ violates the source-sink condition. If the edge $k\leftarrow i$ is a down edge, then the triangle on vertices $k, i$, and $m$ violates the source-sink condition. Finally, if $i\rightarrow k$ is an up edge, then $m<i<k<j$. Therefore the edge $m\leftarrow j$ must be a down edge, thus violating the source-sink condition for the triangle on vertices $i,j,$ and $m$.
\end{proof}
Our second map is between parking graphs and parking functions: given a parking graph $P$, let
$\mathbf{d}(P)$ be its in-degree sequence, and define
\[
\phi(P) := \mathbf{d}(P) + \mathbf{1}
\] where $\bf 1$ $\in\mathbb{R}^n$ is a vector all of whose coordinates are 1.
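The map $\phi$ is a one-liner once $\mathbf{d}(P)$ is computed; recall that $\mathbf{d}(P)$ counts only \emph{directed} edges, so downish edges contribute nothing. The Python sketch below is ours; the second example graph (down edge $1 \leftarrow 2$, up edge $2 \rightarrow 3$, edge $23$... rather, edge $13$ downish) is one we checked by hand to satisfy the source-sink condition.

```python
def phi(n, down, up):
    """phi(P) = d(P) + 1, where d(P) records, per vertex, the number of
    *directed* edges pointing into it (downish edges do not count).
    Edges are pairs (j, k) with j < k: 'down' means j <- k, 'up' j -> k."""
    indeg = [0] * n
    for j, k in down:
        indeg[j - 1] += 1     # a down edge j <- k points into j
    for j, k in up:
        indeg[k - 1] += 1     # an up edge j -> k points into k
    return tuple(d + 1 for d in indeg)

# the all-downish parking graph on [3] has no directed edges at all,
# so phi(P) = (1, 1, 1): the Pak-Stanley label of the central region
```

For the all-downish graph, `phi(3, down=[], up=[])` returns $(1,1,1)$; for the graph with down edge $1 \leftarrow 2$ and up edge $2 \rightarrow 3$ it returns $(2,1,2)$, which is indeed a parking function.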
\begin{theorem}\label{thm:bijparkgraphfunction}
The map $\phi$ gives a bijection between the parking graphs on the vertex set $[n]$ and the parking functions of length $n$.
\end{theorem}
\begin{proof}
To show that $\phi$ is well defined, note once more that, given a parking graph $P$, the in-degree sequence of $\vec{P}$ is a
permutation of $(0,1,\ldots,n-1)$, and thus $\mathbf{d}(P)$ is component-wise less than or equal to this permuted vector; this implies that $\mathbf{d}(P)+\mathbf{1}$ is a parking function.
To show that $\phi^{ -1 }$ exists, we provide an algorithm that constructs a parking graph $P(\mathbf{x})$ for the parking function $\mathbf{x}$ and show that the algorithm yields $\phi^{-1}$. Starting with the vertex set $[n]$, the algorithm determines when to add up edges from a particular vertex and when to add down edges. In contrast with the function
$\phi$, the algorithm is neither as easy to define nor as easy to understand. We start with a detailed example which will shed some light on the process behind the algorithm.\\
\noindent
{\it Example.} To see the algorithm in practice, we will construct $P(3,1,1,2)$, the parking graph associated to the parking function $(3,1,1,2)$.
The output of the algorithm must be a parking graph with vertex set $\{1,2,3,4\}$ and in-degree sequence $\mathbf{d}(P)=(2,0,0,1)$.
There are many graphs with such an in-degree sequence. Figure \ref{fig:algexnon} shows two different labeled complete mixed graphs on four vertices. The graph on the left violates the source-sink condition: triangle $\{1,2,4\}$ violates the downish part of the source-sink condition, and triangle $\{2,3,4\}$ creates a cycle. The graph on the right is the unique parking graph associated to $(3,1,1,2)$.
\begin{figure}[htb]
\begin{center}
\def\svgwidth{3cm}\input{nonalgexample3112.pdf_tex}
\hspace{2cm}
\def\svgwidth{3cm}\input{algexample3112.pdf_tex}
\end{center}
\caption{Two graphs with the same in-degree; the graph on the right is $P(3,1,1,2)$.}\label{fig:algexnon}
\end{figure}
So how can we create an algorithm that yields an actual parking graph? We begin the process by analyzing the entries of the input with smallest value. By virtue of being a
parking function, the input will always have at least one entry with a value of $1$. In our example, we have two such entries: the ones corresponding to vertex $2$ and vertex
$3$. These will be the two vertices with in-degree $0$ in our graph. Suppose the first up edge introduced was $2 \rightarrow 4$. Then the edge $34$ must be downish. There is
also a downish edge $23$ since both have in-degree $0$. Then the triangle $\{2,3,4\}$ creates a cycle, as seen on the left graph of Figure \ref{fig:algexnon}. Therefore, we instead begin by creating an up edge $3 \rightarrow 4$ as seen in the first graph in Figure \ref{fig:algex}. In general, in order to avoid creating cycles, up edges will always emerge from the available vertex corresponding to the entry with highest index and lowest value.
This process is formalized as the \emph{up step} of the algorithm below.
\begin{figure}[htb]
\begin{center}
\def\svgwidth{3.5cm}\input{step1algexample3112.pdf_tex}
\def\svgwidth{3.5cm}\input{step2algexample3112.pdf_tex}
\def\svgwidth{3.35cm}\input{algexample3112.pdf_tex}
\end{center}
\caption{The algorithm in practice: building the parking graph $P(3,1,1,2)$.}\label{fig:algex}
\end{figure}
So we now have a graph where vertex $4$ has in-degree 1, as desired. Since vertices $2$, $3$, and $4$ all have the correct in-degree, we no longer introduce up edges. Vertex $1$ must have in-degree $2$, so we need to introduce two down edges into vertex $1$ from two of the vertices $2$, $3$, and $4$.
If the down edge were $1 \leftarrow 2$, then either the edge between $1$ and $4$ or the edge between $1$ and $3$ will be downish. If $14$ is a downish edge, then the triangle
$\{1,2,4\}$ would violate the source-sink condition, as $4$ would be the source and $1$ would be the sink. This can also be seen in the left graph of Figure \ref{fig:algexnon}. An analogous argument applies if $13$ were the downish edge. It is
therefore necessary to have the down edges $1 \leftarrow 3$ and $1 \leftarrow 4$ as seen in the middle graph in Figure \ref{fig:algex}. In general, in order to avoid violating
the downish part of the source-sink condition, down edges will always emerge from the available vertex whose current entry value is lowest. This process
is formalized as the \emph{down step} of the algorithm below.
Each of the vertices has the desired in-degree, and so we add all other edges as downish edges, finally yielding $P(3,1,1,2)$.
We now formalize the ideas introduced in this example with an algorithm:
\begin{eqnarray*}
&{}& \text{Input: parking function } \mathbf{x} \in \mathbb{Z}_{ > 0 }^n \\
&{}& \mathbf{y} := \mathbf{x} - (1, 1, \dots, 1) \\
&{}& (\star) \text{ If there exists } y_k = 0 \text{ then } \\
&{}& \qquad \ \, j := \max \left\{ k : \, y_k = 0 \right\} \\
&{}& \qquad \left. \begin{array}{l}
\text{introduce } j \to k \\
y_k := y_k - 1
\end{array} \right\} \text{ for all } k > j \text{ with } y_k > 0 \qquad [{\bf up \hspace{1.5mm} step}] \\
&{}& \qquad \ \, y_j := y_j - 1 \\
&{}& \qquad \ \, y_k := y_k - 1 \text{ for all } k \ne j \text{ with } y_k < 0 \\
&{}& \qquad \ \, \text{go to } (\star) \\
&{}& \text{Else if there exists } y_k > 0 \text{ then } \\
&{}& \qquad \ \, \text{choose minimal } y_j \text{ such that there exists }k<j \text{ with } y_k>0 \text{ and } k\nleftarrow j\\
&{}& \qquad \left. \begin{array}{l}
\text{introduce } k \leftarrow j \\
y_k := y_k - 1
\end{array} \right\} \text{ for all } k < j \text{ with } y_k > 0 \text{ and } k \nleftarrow j \qquad [{\bf down \hspace{1.5mm} step}] \\
&{}& \qquad \ \, \text{go to } (\star) \\
&{}& \text{Else [all } y_k < 0 {]} \\
&{}& \qquad \ \, \text{introduce remaining edges as undirected and stop } \\
&{}& \text{Output: mixed graph } P(\mathbf{x}), \text{ \Def{source priority vector} } s(\mathbf{x}):=(y_1,\ldots,y_n)
\end{eqnarray*}
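To make the loop structure concrete, here is a minimal Python sketch of the algorithm (the function name \texttt{parking\_graph} and the 0-based vertex labels are our own conventions; up edges are stored as pairs $(j,k)$ meaning $j\rightarrow k$, and down edges as pairs $(k,j)$ meaning $k\leftarrow j$ with $k<j$):

```python
def parking_graph(x):
    """Sketch of the algorithm above: build P(x) from a parking function x.

    Vertices are 0-based; `up` holds pairs (j, k) for up edges j -> k,
    `down` holds pairs (k, j) for down edges k <- j with k < j, and the
    final vector y is the source priority vector s(x).
    """
    n = len(x)
    y = [v - 1 for v in x]          # initial y-hat = x - (1, ..., 1)
    up, down = set(), set()
    while True:
        zeros = [k for k in range(n) if y[k] == 0]
        if zeros:                   # -- up step --
            j = max(zeros)          # up feeder: largest index with y_j = 0
            for k in range(j + 1, n):
                if y[k] > 0:
                    up.add((j, k))
                    y[k] -= 1
            for k in range(n):      # previously negative entries drop by 1
                if k != j and y[k] < 0:
                    y[k] -= 1
            y[j] -= 1               # the feeder becomes -1
        elif any(v > 0 for v in y): # -- down step --
            cand = [j for j in range(n)
                    if any(y[k] > 0 and (k, j) not in down for k in range(j))]
            j = min(cand, key=lambda i: y[i])  # minimal y_j among candidates
            for k in range(j):
                if y[k] > 0 and (k, j) not in down:
                    down.add((k, j))
                    y[k] -= 1
        else:                       # all entries negative: finish
            und = {frozenset((i, k)) for i in range(n)
                   for k in range(i + 1, n)
                   if (i, k) not in up and (i, k) not in down}
            return up, down, und, y
```

On the running example $\mathbf{x}=(3,1,1,2)$ this produces the up edge $3\rightarrow 4$, down edges into vertex $1$ from $3$ and from $4$, and the source priority vector $(-1,-2,-4,-3)$, in agreement with Figure \ref{fig:algex}.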
Some additional terminology will be useful in order to talk about the algorithm:
We will refer to $\hat\mathbf{y}:=\mathbf{x}-(1,\ldots,1)$ as the \Def{initial} $\mathbf{y}$. At each iteration of the algorithm, we will refer to the current value of the $j^{th}$ entry of $y$ as $y_j$. If $j<k$, we say the $j$-entry is
\Def{before} the $k$-entry of $\mathbf{y}$, and the $k$-entry is \Def{after} the $j$-entry of $\mathbf{y}$.
If $y_j=0$ and $j=\max\{k:y_k=0\}$ we call $j$ the \Def{feeder} of the corresponding up step, or an \Def{up feeder}. Similarly, $j$ is the
\Def{feeder} of a down step, or a \Def{down feeder}, if a down edge $k\leftarrow j$ is introduced.
We will say $j$ is a \Def{down feeder candidate} for $k$ whenever the algorithm is at a down step, $y_k>0$, $k<j$ and $k \nleftarrow j$.
In Lemma \ref{lem:sourcepri}, we will show that the source priority vector $s(\mathbf{x})$ induces a total order on the vertices of $P(\mathbf{x})$ that records when each vertex was the feeder of an up step: if $|s_i(\mathbf{x})|>|s_j(\mathbf{x})|$, then $i$ became the feeder of an up step before~$j$ did.
We must check that this algorithm both runs without failing and indeed returns the parking graph associated to the input parking function $\mathbf{x}$. Since $\mathbf{x}$ is a parking function, there exists $k$ such that $x_k=1$, so $\hat{y}_k$ will be $0$; hence the algorithm always starts with an up step.
If there exists a $y_k$ with $y_k=0$, an up step will always occur. So the up step of the algorithm is well defined. However, the down step of the algorithm could cause problems. How do we know that when there exists a $y_k>0$ that triggers a down step, there also exists a down feeder candidate, $j$, for $k$?
We answer this question in the following lemma.
\begin{lemma}\label{lem:feeder}
If the algorithm is at a down step, then there exist an index $k$ with $y_k>0$ and a down feeder candidate $j$ for $k$. Furthermore, if $j$ is the down feeder for $k$, then $y_j<0$.
\end{lemma}
\begin{proof}
Suppose, by way of contradiction, that a down feeder candidate did not exist. By assumption, at least one $y_k>0$ and for every other $l \ne k$, either $y_l>0$ or $y_l<0$, but there is no entry with $y_l=0$ since the algorithm is at a down step. Let $k_0=\min\{k: y_k>0\}$. If there is no down feeder candidate, then either there is no $j$ with $k<j$ for any $y_k>0$, in which case $k_0=n$, or for all $j$ with $k_0<j$ we have that $k_0 \leftarrow j$ has already been introduced.
In the first case, $k_0=n$, $y_{k_0}=y_n>0$ and $y_l<0$ for all $1\le l<n$. It follows that at some point in the algorithm, each of the $l$ must have fed $n$ at the step when $y_l$ was $0$. Therefore, $n$ has been fed $n-1$ times during up steps, and yet $y_n \ge 1$. Since each up step redefines $y_n:=y_n-1$, if we trace back the steps of the algorithm, it follows that for the initial $\hat \mathbf{y}$, $\hat y_n \ge 1+(n-1)$. Therefore, $x_n \ge n+1 > n$, which means $\mathbf{x}$ was not a parking function by Lemma \ref{lem:parkingfunctionperm}, so this case is not possible.
In the second case, for all $j$ with $k_0<j$, $k_0\leftarrow j$ has already been introduced, so $k_0$ has been fed by every entry after $k_0$. Furthermore, for all $l<k_0$, $y_l<0$, so at some point in the algorithm $l$ fed $k_0$ during an up step, so $k_0$ has been fed by every entry before $k_0$. So again, $k_0$ has been fed $n-1$ times, so by the same argument as in the first case, tracing our steps back would give us that the initial input was not a parking function. So this case is also not possible.
This proves the existence of a down feeder candidate at a down step. We now show that if $j$ is the feeder, i.e., the minimal $y_j$ amongst the candidates, then $y_j<0$.
Recall that there are no 0 entries in $\mathbf{y}$ since we are at a down step. Again, by way of contradiction, suppose that there are no negative candidates for feeders. This means that for any $p$ with $y_p>0$, each $q>p$ with $y_q<0$ has already fed $p$; that is, the edge $p\leftarrow q$ has already been introduced in a previous down step. Furthermore, the algorithm gives us that for any $r<p$ with $y_r<0$, $r$ has fed $p$ in a previous up step. Let $k$ be the total number of positive entries in the current down step. Then for any of the $k$ possible entries $y_p$ with $y_p>0$, $p$ has been fed by all $n-k$ indices with negative entries.
Since the input $\mathbf{x}$ is a parking function and $y_p>0$, we have $1\le y_p\le n-1-(n-k)=k-1$.
Since we have gone through at least $n-k$ iterations of the algorithm, we can trace the algorithm back $n-k$ steps to see that for the initial $\hat\mathbf{y}$, $1+(n-k) \le\hat y_p\le
n-1$. Therefore, $n-k+2\le x_p\le n$. Since there are exactly $k$ entries $x_p$ satisfying this inequality, it follows that there are only $n-k$ entries of $\mathbf{x}$ that could be
less than $n-k+2$, and by Lemma \ref{lem:parkingfunctionperm}, $\mathbf{x}$ cannot be a parking function. This contradiction proves Lemma~\ref{lem:feeder}.
\end{proof}
Lemma \ref{lem:feeder} shows that the algorithm is well defined, i.e., it runs without failing. Since the input is a finite $n$-tuple with non-negative entries, at some point in
the algorithm each of these entries will eventually become 0 and later negative after a series of up steps. Therefore, the final output of the algorithm $s(\mathbf{x})$ is indeed an
$n$-tuple consisting of all negative entries. Furthermore, the following lemma tells us that these entries are all distinct and that the $n$-tuple gives us meaningful
information about the desired parking graph $P(\mathbf{x})$.
\begin{lemma}\label{lem:sourcepri}
The entries of the source priority vector $s(\mathbf{x})$ are distinct and induce a total order on the indices; this order indicates the reverse order in which the indices were the feeder of an up step.
\end{lemma}
\begin{proof}
The algorithm begins with an up step with the rightmost index $j$ of $\hat \mathbf{y}$ with $\hat y_j=y_j=0$ as the feeder. This up step of the algorithm instructs us to do as follows: for each $l>j$ with $y_l>0$, introduce the edge $j \rightarrow l$; redefine $y_j:=y_j-1$; and redefine $y_l:=y_l-1$. Since there are no negative entries in $\hat \mathbf{y}$, we need not do anything else in this step. So the new entries of $\mathbf{y}$ are $y_j=-1$, $y_l \ge 0$ for all $l \ne j$. Therefore, $j$ is the unique entry with $y_j=-1$ after the first step of the algorithm.
Suppose we continue with the algorithm and are now at another up step. Let $j$ be the rightmost index of $\mathbf{y}$ with $y_j=0$. The up step instructs us to change the entries of $\mathbf{y}$ as follows:
redefine $y_j:=-1$; for every $y_l>0$ with up edge $j \rightarrow l$ introduced, redefine $y_l := y_l -1$; for any $y_k<0$, redefine $y_k:=y_k-1$; and for any $i\ne j$ such
that $y_i=0$, no new adjacent edges are introduced, so $y_i$ remains $0$. Thus for $i,j,k,$ and $l$ playing the above roles, the new entries of $\mathbf{y}$ are $y_j=-1$, $y_l \ge 0$,
$y_k \le -2$, and $y_i=0$. Therefore, the $j$-entry is the unique entry with $y_j=-1$.
It follows inductively that the entries of $s(\mathbf{x})$ form a permutation of $\{-1,\ldots,-n\}$, and since $y_k<0$ gets redefined to be $y_k-1$ only during up steps, this ordering signifies the order in which each index was the feeder of an up step.
\end{proof}
It is now possible to understand exactly when the algorithm will create an up edge $i \rightarrow j$ in terms of the source priority vector. This is summarized in the following lemma.
\begin{lemma}\label{lem:sourceorderup}
Let $i<j$. Then $|s_i(\mathbf{x})|>|s_j(\mathbf{x})|$ if and only if $i\rightarrow j$ in $P(\mathbf{x})$.
\end{lemma}
\begin{proof}
Suppose $i\rightarrow j$ in $P(\mathbf{x})$. Then $i\rightarrow j$ is introduced when $y_i=0$ and $y_j>0$. Therefore, $y_i$ becomes $-1$ before $y_j$ becomes $-1$. Hence $|s_i(\mathbf{x})|>|s_j(\mathbf{x})|$.
Suppose $i\nrightarrow j$. Then, during the up step when $i$ was the feeder, $y_j$ was neither $0$ (otherwise $j$, having the larger index, would have been the feeder) nor positive (otherwise $i \rightarrow j$ would have been introduced), so $y_j<0$. Therefore $|s_i(\mathbf{x})|<|s_j(\mathbf{x})|$.
This proves Lemma \ref{lem:sourceorderup}.
\end{proof}
It is also useful to talk about the order in which the algorithm introduces down edges in terms of the source priority vector.
\begin{lemma}\label{lem:sourceorderdown}
Let $j$ and $k$ both be down feeder candidates for $i$ at a certain down step. Then $j$ becomes the down feeder for $i$ at this step if and only if $|s_j(\mathbf{x})|>|s_k(\mathbf{x})|$.
\end{lemma}
\begin{proof}
Lemma \ref{lem:feeder} gives us that if $j$ is a down feeder for $i$ then $y_j<0$. Since the down feeder is the minimum $y_j$ amongst the candidates, it follows that $j$ is the down feeder for $i$ if and only if $y_j<0$ and $y_j<y_k$. This is true if and only if $|s_j(\mathbf{x})|>|s_k(\mathbf{x})|$.
\end{proof}
It remains to show that if $\mathbf{x}$ is the input of the algorithm, then the output, $P(\mathbf{x})$, is indeed a parking graph (i.e., $P(\mathbf{x})$ satisfies the source-sink condition), and that $P(\mathbf{x})$ has in-degree sequence $\mathbf{x} -1$. Together these facts show that the algorithm yields $\phi^{-1}$.
To show that $\mathbf{d}(P(\mathbf{x}))=\mathbf{x}-1$, we will follow an entry $y_k$ through the algorithm. The initial value $\hat y_k=x_k-1$ is the desired in-degree of vertex
$k$ of $P(\mathbf{x})$. If $y_k>0$, then a directed edge is introduced towards $k$ if and only if $y_k$ is reduced by $1$.
Therefore, when $y_k=0$, vertex $k$ of $P(\mathbf{x})$ has in-degree $x_k-1$; and once $y_k\le 0$, the only edges incident with $k$ that are introduced are edges that are directed away
from $k$ or that are undirected, neither of which contributes to the in-degree. Therefore, $\mathbf{d}(P(\mathbf{x}))=\mathbf{x}-1$, as desired.
To show $P(\mathbf{x})$ is acyclic, suppose $1\le i<j<k\le n$ and $i\rightarrow j \rightarrow k$. When the edge $j\rightarrow k$ is introduced, $y_j=0$ while $y_k>0$.
When $i\rightarrow j$ is introduced, then $y_i=0$ and $y_j>0$. Therefore $y_k>0$, so $i\rightarrow k$ is
also introduced. Thus there is no cycle $i\rightarrow j\rightarrow k\rightarrow i$. Now suppose $i\rightarrow k$. When the edge $i\rightarrow k$ is introduced, $y_i=0$, $y_k>0$ and for all $l>i$, $y_l\ne0$. In particular, $y_j\ne0$. If $y_j>0$, then $i\rightarrow j$ is
introduced in the same step. Suppose $y_j<0$; then an up step previously occurred in which $j$ was the feeder and $y_k$ was positive, so $j\rightarrow k$ was introduced. Therefore there are no cycles involving $i\rightarrow k$.
Lastly, we show that $P(\mathbf{x})$ satisfies the downish part of the source-sink condition, and this will complete the proof that $P(\mathbf{x})$ is a parking graph with the correct in-degree sequence.
Consider a triangle $\Delta$ of $P(\mathbf{x})$ with vertices $i,j,k$ with $i<j<k$. If $P(\mathbf{x})$ has either an up edge or a down edge between each pair of vertices, then there is no violation of the source-sink condition and we're done.
So suppose for some pair of vertices of $\Delta$, $P(\mathbf{x})$ has a downish edge between them. We consider three cases.
Case 1: There is a downish edge $ij$. Suppose, by way of contradiction, that $\Delta$ violates the downish part of the source-sink condition. Then $\Delta$ must have an up edge $j \rightarrow k$ and a down edge $i \leftarrow k$. Since $j<k$, Lemma \ref{lem:sourceorderup} gives us that $|s_j(\mathbf{x})|>|s_k(\mathbf{x})|$. On the other hand, since $k \leftarrow i$ is introduced before $j \leftarrow i$, Lemma \ref{lem:sourceorderdown} gives us that $|s_k(\mathbf{x})|>|s_j(\mathbf{x})|$. So this cannot happen.
Case 2: There is a downish edge $jk$. Suppose, by way of contradiction, that $\Delta$ violates the downish part of the source-sink condition. Then $\Delta$ must have an up edge $i \rightarrow j$ and a down edge $i \leftarrow k$. Since $i<j$, Lemma \ref{lem:sourceorderup} gives us that $|s_i(\mathbf{x})|>|s_j(\mathbf{x})|$. In particular, when $y_i=0$, $y_j>0$, so when $y_i>0, y_j>0$. On the other hand, $k$ is a down feeder for $i$, but since $y_j$ was also positive when $y_i$ was positive, $k$ would have been a down feeder for $j$ during this same step. This contradicts the assumption that $jk$ was downish, so this cannot happen.
Case 3: There is a downish edge $ik$. Suppose, by way of contradiction, that $\Delta$ violates the downish part of the source-sink condition. Then all edges of $\Delta$ are either down or downish, and at least one of $i\leftarrow j$ or $j\leftarrow k$ is in $P(\mathbf{x})$. Suppose $i\leftarrow j$ is in $P(\mathbf{x})$. Then during a down step in the algorithm, both $j$ and $k$ were down feeder candidates for $i$, and $j$ became the feeder for $i$, so by Lemma \ref{lem:sourceorderdown}, $|s_j(\mathbf{x})|>|s_k(\mathbf{x})|$, and since $j<k$, by Lemma \ref{lem:sourceorderup}, $\Delta$ has up edge $j \rightarrow k$. This contradicts the assumption that $\Delta$ violates the source-sink condition. Now suppose $j\leftarrow k$ is in $P(\mathbf{x})$. Since $i \nrightarrow j$, by Lemma \ref{lem:sourceorderup}, $|s_i(\mathbf{x})|<|s_j(\mathbf{x})|$. Therefore, during the down step in the algorithm where $k$ was a down feeder for $j$, it would have also fed $i$, as both $y_i$ and $y_j$ were simultaneously positive at this step. This contradicts the assumption that $ik$ is downish. Therefore, this cannot happen either.
With these three cases exhausted, we conclude that $P(\mathbf{x})$ satisfies the downish part of the source-sink condition. Since $P(\mathbf{x})$ satisfies the source-sink condition and
$\mathbf{d}(P(\mathbf{x}))=\mathbf{x}-1$, we conclude that $P(\mathbf{x})$ is a parking graph and the algorithm yields $\phi^{-1}$, and this finishes the proof of Theorem~\ref{thm:bijparkgraphfunction}.
\end{proof}
\section{Closing Remarks}
As mentioned above, the composition $\phi \circ \psi^{ -1 }$ gives a bijection between the regions
of the $n$-dimensional Shi arrangement and the parking functions of length $n$, and the ``forward''
direction of this bijection gives the map introduced by Pak and Stanley
\cite{stanleyshiproceedings}. It is not clear to us how parking graphs interact with the other known
bijection by Athanasiadis and Linusson \cite{ath}.
In the same paper, Athanasiadis and Linusson study an arrangement that is situated between the braid and Shi arrangements, namely, the arrangement
\begin{align*}
x_j - x_k &= 0
\qquad \text{ for all } 1 \le j < k \le n \, , \\
x_j - x_k &= 1
\qquad \text{ for all } 1 \le j < k \le n \text{ with } jk \in E \, ,
\end{align*}
where $E$ is the edge set of a given fixed graph on $n$ vertices (see also
\cite{athanasiadisfinitefieldmethod}). It would be interesting if parking graphs could shed any further light upon these arrangements.
Finally, it could be interesting to study possible connections of this work with the chromatic theory for gain graphs initiated by
Berthom\'e, Cordovil, Forge, Ventos, and Zaslavsky; see, in particular, \cite[Section~8]{berthomeetal}.
\bibliographystyle{amsplain}
https://arxiv.org/abs/1306.3396 | Symmetry minimizes the principal eigenvalue: an example for the Pucci's sup operator | We explicitly evaluate the principal eigenvalue of the extremal Pucci's sup--operator for a class of special plane domains, and we prove that, for fixed area, the eigenvalue is minimal for the most symmetric set. |
\section{Introduction}\label{intro}
In 1951, P\'olya and Szego conjectured:
{\em Of all $n$-polygons with the same area, the regular $n$-polygon has the smallest first Dirichlet eigenvalue,}
referring to the Dirichlet eigenvalue of the Laplacian. It is very simple to see
that among all rectangles of same area, the one that minimizes the first Laplace Dirichlet
eigenvalue is the square. Using Steiner symmetrization, P\'olya and Szego proved the conjecture for $n=3$ and $n=4$,
but it is still an open problem for $n>4$. On the other hand, the well known Faber-Krahn inequality affirms that in any dimension,
among all domains of same volume, the euclidean ball has the smallest first Laplace Dirichlet eigenvalue.
The notion of the first Dirichlet eigenvalue for linear elliptic operators has been extended to fully nonlinear ones (see \cite{BD,BEQ,IY}).
Indeed, for linear operators, Berestycki, Nirenberg and Varadhan in \cite{BNV}
use the maximum principle to define the principal eigenvalue. Following their idea, it is possible to prove that,
if $\mathcal{M}^+_{\lambda, \Lambda}$ denotes the Pucci's supremum operator, with ellipticity constants $0<\lambda\leq\Lambda$ and
if $\Omega$ is a bounded Lipschitz domain, then there exists $\phi>0$ in $\Omega$ such that
$$
\left\{\begin{array}{lc}
\mathcal{M}^+_{\lambda, \Lambda}(D^2 \phi)+\mu^+(\Omega) \phi = 0 & \mbox{in}\ \Omega\\
\phi=0& \mbox{on}\ \partial\Omega
\end{array}
\right.
$$
for
$$\mu^+(\Omega)=\sup\{\mu\in{\mathbb R}\, :\ \exists\ \phi>0 \ \mbox{in}\ \Omega,\ \mathcal{M}^+_{\lambda, \Lambda}(D^2 \phi)+\mu \phi \leq 0\quad\mbox{in}\ \Omega\}.$$
For
$$\mu^-(\Omega)=\sup\{\mu\in{\mathbb R}\, :\ \exists\ \phi<0 \ \mbox{in}\ \Omega,\ \mathcal{M}^+_{\lambda, \Lambda}(D^2 \phi)+\mu \phi \geq 0\quad\mbox{in}\ \Omega\}$$
the existence of a negative eigenfunction is similarly proved.
It is hence quite natural to wonder if the Faber-Krahn inequality is valid for these ``eigenvalues'' associated
to $\mathcal{M}^+_{\lambda, \Lambda}$; precisely, given a ball $B$, is it true that
\begin{equation}\label{FK}
\mu^+(B)\leq\mu^+(\Omega), \ \mbox{for any}\ \Omega \ \mbox{such that}\ |\Omega|=|B|?
\end{equation}
Here $|\cdot|$ indicates the volume.
The Faber-Krahn inequality can be proved in several ways; the most classical one uses Steiner symmetrization
together with the Rayleigh quotient that defines the eigenvalue. Clearly these tools are not at all adapted to this non variational
fully nonlinear setting.
Another possible proof relies on a more geometrical understanding of the problem;
as it is well explained in \cite{SI}, a domain $\Omega$ is critical for the Laplace first eigenvalue functional under fixed volume variations if and only if the eigenfunction $\phi>0$ associated to $\mu(\Omega)$ has
constant Neumann boundary data, i.e.\ if it is a solution of an
overdetermined boundary value problem. This is proved using Hadamard's identity (we refer to \cite{SI} and references therein). But, by
Serrin's classical result, the only bounded domains which admit non trivial solutions satisfying overdetermined boundary conditions
are balls.
In \cite{BDov}, it is proved that at least for $\lambda$ and $\Lambda$ close enough, the only bounded domains for which
the overdetermined boundary value problem associated to $\mathcal{M}^+_{\lambda, \Lambda}$ admits a non trivial solution are the balls.
This suggests that (\ref{FK}) may be true.
Unfortunately, it is not known if, for the eigenvalue functional associated to $\mathcal{M}^+_{\lambda, \Lambda}$, the critical domains under fixed volume have
eigenfunctions with constant normal derivative.
Both the Faber-Krahn inequality and the P\'olya and Szego conjecture state that symmetry of the domain decreases
the Laplace first eigenvalue.
Whether this is true for the Pucci eigenvalue is not known, but the scope of this paper is to show that among a family of subsets of
${\mathbb R}^2$ of same area, which are in some sense deformations of rectangles,
the one that minimizes $\mu^+(\cdot)$ is the most symmetric one. This minimal domain will be denoted
$\Omega^\omega_1$ for $\omega=\frac{\Lambda}{\lambda}$ and it is, somehow, a deformation of a square.
The result is accomplished by explicitly computing the eigenvalue $\mu^+(\Omega^\omega_1)$ and the corresponding eigenfunction.
Observe that the square is not the right set to consider, since, as it
is proved in Proposition \ref{rect}, the eigenfunction associated to the square is not the product of
two functions of one variable.
Remarkably, an analogous explicit computation of $\mu^-(\cdot)$ leads to unbounded sets. In particular,
one can construct a symmetric unbounded set $D^\omega_1$ such that $\mu^-(D^\omega_1)=\lambda$.
\section{The principal eigenvalue of $\mathcal{M}^+_{\lambda, \Lambda}$ in some special domains}
In order to fix notations, we recall that the supremum Pucci operator is defined by
$$\displaystyle \mathcal{M}^+_{\lambda, \Lambda}(X)=\lambda \sum_{e_i<0} e_i+\Lambda \sum_{e_i>0} e_i$$
where $e_i$ are the eigenvalues of the symmetric matrix $X$ and $\Lambda \geq \lambda>0$ are fixed constants.
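For intuition, the operator is easy to evaluate numerically. The sketch below (the helper name \texttt{pucci\_sup} is ours) weights the negative eigenvalues of a symmetric matrix $X$ by $\lambda$ and the positive ones by $\Lambda$:

```python
import numpy as np

def pucci_sup(X, lam, Lam):
    """M^+_{lam,Lam}(X): lam * (sum of negative eigenvalues of X)
    plus Lam * (sum of positive eigenvalues of X)."""
    e = np.linalg.eigvalsh(np.asarray(X, dtype=float))
    return lam * e[e < 0].sum() + Lam * e[e > 0].sum()
```

For example, with $\lambda=1$, $\Lambda=3$ and $X=\mathrm{diag}(-1,2)$, the value is $1\cdot(-1)+3\cdot 2=5$.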
The starting point of our analysis is the following observation.
\begin{proposition}\label{rect}
For $\Lambda >\lambda>0$, no eigenfunction of $\mathcal{M}^+_{\lambda, \Lambda}$ associated with the positive principal eigenvalue in a square domain $Q\subset {\mathbb R}^2$ is a function of separable variables.
\end{proposition}
\begin{proof} Let $Q=\left( -\frac{\pi}{\sqrt{2}},\frac{\pi}{\sqrt{2}}\right)^2$ and let $u(x,y)$ be the principal eigenfunction of $\mathcal{M}^+_{\lambda, \Lambda}$ in $Q$ associated with the positive principal eigenvalue $\mu>0$, that is
\begin{equation}\label{eigen}
\left\{ \begin{array}{l}
-\mathcal{M}^+_{\lambda, \Lambda} (D^2u)= \mu \, u\quad \hbox{in }\ Q\, ,\\[2ex]
u>0 \quad \hbox{in } Q\, ,\ u=0 \quad \hbox{on } \partial Q\,.
\end{array} \right.
\end{equation}
Assume, by contradiction, that $u$ is a function of separable variables. Then, by symmetry and regularity results, $u$ can be written as
$$
u(x,y)= f(x)\, f(y)
$$
with $f:\left( -\frac{\pi}{\sqrt{2}},\frac{\pi}{\sqrt{2}}\right)\to {\mathbb R}$ smooth, positive, even, and, up to a normalization, satisfying $f(0)=1$. In particular, one has
$$
D^2u(0,y)=\left( \begin{array}{cc}
f^{''}(0)f(y) & 0\\
0 & f^{''}(y) \end{array}\right)
$$
and equation \refe{eigen} tested at $(0,0)$ yields
$$
f^{''}(0)=- \frac{\mu}{2\lambda } <0\, .
$$
Moreover, if for some $y_0\in \left( -\frac{\pi}{\sqrt{2}},\frac{\pi}{\sqrt{2}}\right)$ one has $f^{''}(y_0)=0$, then from equation \refe{eigen} written for $(x,y)=(0,y_0)$ we obtain the contradiction
$$
-\lambda\, f^{''}(0) = \mu =-2 \lambda\, f^{''}(0)\, .
$$
Therefore, we have $f^{''}<0$ in $\left(- \frac{\pi}{\sqrt{2}},\frac{\pi}{\sqrt{2}}\right)$ and, again from equation \refe{eigen}, we deduce that $f$ satisfies
$$
\left\{
\begin{array}{l}
f^{''}=-\frac{\mu}{2 \lambda} \, f\, ,\quad f>0 \quad \hbox{in } \left( -\frac{\pi}{\sqrt{2}},\frac{\pi}{\sqrt{2}}\right)\\[2ex]
f\left(-\frac{\pi}{\sqrt{2}}\right)=f\left(\frac{\pi}{\sqrt{2}}\right)=0\, ,\ f(0)=1
\end{array}
\right.
$$
Hence, $\mu= \lambda$ and $f(x)=\cos \left(\frac{x}{\sqrt{2}}\right)$. On the other hand, for the function $u(x,y)=\cos \left(\frac{x}{\sqrt{2}}\right)\, \cos \left(\frac{y}{\sqrt{2}}\right)$ one has, in particular,
$$
D^2u (x,x)=
\frac{1}{2}\left( \begin{array}{cc}
-\cos^2\left(\frac{x}{\sqrt{2}}\right) & \sin^2\left(\frac{x}{\sqrt{2}}\right) \\
\sin^2\left(\frac{x}{\sqrt{2}}\right) & -\cos^2\left(\frac{x}{\sqrt{2}}\right) \end{array}\right)\, ,
$$
and, for $\frac{\pi}{2\sqrt{2}}< |x|< \frac{\pi}{\sqrt{2}}$ we have
$$
-\mathcal{M}^+_{\lambda, \Lambda}(D^2u (x,x))= \Lambda \cos^2\left(\frac{x}{\sqrt{2}}\right) -\frac{\Lambda -\lambda}{2} \neq \lambda \, u(x,x)\, ,
$$
unless $\Lambda =\lambda$, since the difference of the two sides equals $(\Lambda-\lambda)\left(\cos^2\left(\frac{x}{\sqrt{2}}\right)-\frac{1}{2}\right)$, which is negative on this range.
\end{proof}
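As a sanity check on the diagonal computation, here is a small numerical sketch (the sample point, the test values $\lambda=1$, $\Lambda=3$, and the variable names are ours); it uses the eigenvalues of the Hessian displayed in the proof and confirms that $-\mathcal{M}^+_{\lambda,\Lambda}(D^2u(x,x))-\lambda\,u(x,x)$ is a nonzero multiple of $\Lambda-\lambda$:

```python
import math

lam, Lam = 1.0, 3.0                  # sample ellipticity constants lambda < Lambda
x = 0.9 * math.pi / math.sqrt(2)     # diagonal point with pi/(2*sqrt2) < x < pi/sqrt2
c2 = math.cos(x / math.sqrt(2))**2
s2 = 1.0 - c2                        # sin^2 = 1 - cos^2

# Hessian of u = cos(x/sqrt2)cos(y/sqrt2) at (x,x) is (1/2)[[-c2, s2],[s2, -c2]];
# its eigenvalues are (s2 - c2)/2 > 0 and -(s2 + c2)/2 = -1/2.
e_plus, e_minus = (s2 - c2) / 2, -0.5
pucci = Lam * e_plus + lam * e_minus # M+(D^2 u) on the diagonal
uu = c2                              # u(x,x) = cos^2(x/sqrt2)

# defect = -M+(D^2 u) - lam*u = (Lam - lam)*(cos^2 - 1/2), nonzero when Lam > lam
defect = -pucci - lam * uu
assert abs(defect - (Lam - lam) * (c2 - 0.5)) < 1e-12
assert defect != 0.0                 # u fails the eigenvalue equation at this point
```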
Let us remark that the function
$$
u(x,y)=\cos \left(\frac{x}{\sqrt{2}}\right)\, \cos \left(\frac{y}{\sqrt{2}}\right)=\frac{1}{2} \left[ \cos\left( \frac{x+y}{\sqrt{2}}\right) +\cos\left( \frac{x-y}{\sqrt{2}}\right)\right]
$$
is an eigenfunction for the Laplace operator in the square domain $Q$ relative to the first eigenvalue $\lambda_1(-\Delta, Q)=1$. Wherever $u$ is concave, it also satisfies the equation
$$
-\mathcal{M}^+_{\lambda, \Lambda} (D^2u)=-\lambda \, \Delta u= \lambda\, u\, .
$$
Actually this is the case for $(x,y)\in Q_1=\{ (x,y)\in {\mathbb R}^2\, : \, |x|+|y|< \frac{\pi}{\sqrt{2}}\}$, the rotated square with side $\pi$. Moreover, the same holds true for any function of the form
$$
u_\gamma (x,y)= \gamma\, \cos\left( \frac{x+y}{\sqrt{2}}\right) +\cos\left( \frac{x-y}{\sqrt{2}}\right)\, ,
$$
with $\gamma>0$. In the next result we suitably extend the function $u_\gamma |_{Q_1}$ in order to obtain an eigenfunction for $\mathcal{M}^+_{\lambda, \Lambda}$ relative to the eigenvalue $\lambda$.
\medskip
Let $\omega \geq 1$ be a parameter to be fixed in the sequel, and, for $\frac{1}{\sqrt{\omega}}\leq \gamma\leq \sqrt{\omega}$ let us introduce the positive even functions defined for $|x|\leq \frac{\pi}{2}+\sqrt{\omega} \arcsin \left(\frac{1}{\gamma \sqrt{\omega}}\right)$ as
$$
\phi^\omega_\gamma (x)=\left\{
\begin{array}{ll}
\frac{\pi}{2} + \sqrt{\omega} \arcsin \left(\frac{\gamma}{\sqrt{\omega}}\cos x\right) & \hbox{if } |x|\leq \frac{\pi}{2}\\[2ex]
\arccos \left(\gamma \sqrt{\omega} \sin \left( \frac{|x|-\pi/2}{\sqrt{\omega}} \right)\right) & \hbox{if } \frac{\pi}{2}<|x|\leq \frac{\pi}{2}+\sqrt{\omega} \arcsin \left(\frac{1}{\gamma \sqrt{\omega}}\right)
\end{array}
\right.
$$
Note that
\begin{equation}\label{fiog}
\phi^\omega_{\gamma^{-1}}= \left(\phi^\omega_\gamma\right)^{-1}
\end{equation}
so that, in particular, $\phi^\omega_1=\left(\phi^\omega_1\right)^{-1}$.
\begin{figure}
\includegraphics[height=40mm]{Puccisquare2.pdf}
\includegraphics[height=40mm]{Puccisquare-4.pdf}
\includegraphics[height=40mm]{Puccisquare-3.pdf}
\caption{$\Omega^\omega_1$, $\Omega^\omega_{\gamma}$, $\Omega^\omega_{\sqrt{\omega}}$, three domains for which the eigenvalue is $\lambda$; in the black square $u_\gamma^\omega$ is concave.}
\end{figure}
Next, let us consider the open bounded subsets
$$
\Omega^\omega_\gamma \,: = \left\{ (x,y)\in {\mathbb R}^2\, : \, |y|< \phi^\omega_\gamma (x)\right\}\, .
$$
Note that for $\omega =1$ we have $\gamma=1$ and $\Omega^1_1$ is nothing but the rotated square with side $\sqrt{2} \pi$.
In general, $\Omega^\omega_\gamma$ is a Lipschitz domain symmetric both with respect to the $x$ and $y$ axes, and, by \refe{fiog},
$$
\Omega^\omega_{\frac{1}{\gamma}}=\left\{ (x,y)\in {\mathbb R}^2\, :\, (y,x)\in \Omega^\omega_\gamma \right\}\, .
$$
In particular, $\Omega^\omega_1$ is symmetric also with respect to the diagonal $y=x$.
\begin{theorem} \label{simm} Given $\Lambda\geq \lambda>0$ let us set $\omega =\frac{\Lambda}{\lambda}\geq1$. Then, for any $\frac{1}{\sqrt{\omega}}\leq \gamma\leq \sqrt{\omega}$, the positive principal eigenvalue of $\mathcal{M}^+_{\lambda, \Lambda}$ in the domain $\Omega^\omega_\gamma$ is
$$
\mu \left( \Omega^\omega_\gamma\right) = \lambda
$$
and the principal eigenfunction is, up to positive constants,
$$
u^\omega_\gamma(x,y)=\left\{
\begin{array}{ll}
\gamma\, \cos x +\cos y & \hbox{if } |x|\leq \frac{\pi}{2}\,,\ |y|\leq \frac{\pi}{2}\\[2ex]
\gamma \, \sqrt{\omega} \cos \left( \frac{|x|-\pi/2}{\sqrt{\omega}}+\frac{\pi}{2}\right) + \cos y & \hbox{if } (x,y)\in \Omega^\omega_\gamma\,,\ |x|\geq \frac{\pi}{2}\\[2ex]
\gamma\, \cos x +\sqrt{\omega} \cos \left( \frac{|y|-\pi/2}{\sqrt{\omega}}+ \frac{\pi}{2}\right) & \hbox{if } (x,y)\in \Omega^\omega_\gamma\,,\ |y|\geq \frac{\pi}{2} \end{array} \right.
$$
\end{theorem}
\begin{proof}
The proof is a straightforward computation. We observe that $u^\omega_\gamma$ is smooth and positive in $\Omega^\omega_\gamma$, and it vanishes on $\partial \Omega^\omega_\gamma$. For $|x|\leq \frac{\pi}{2}\,,\ |y|\leq \frac{\pi}{2}$ one has
$$
D^2u^\omega_\gamma (x,y)= \left(
\begin{array}{cc}
-\gamma \, \cos x & 0\\
0 & -\cos y
\end{array}\right)
$$
Therefore, for $|x|\leq \frac{\pi}{2}$ and $|y|\leq \frac{\pi}{2}$, $u^\omega_\gamma$ is concave and it satisfies
$$
-\mathcal{M}^+_{\lambda, \Lambda} \left( D^2u^\omega_\gamma\right) =-\lambda\, \Delta u^\omega_\gamma =\lambda\, u^\omega_\gamma\, .
$$
For $(x,y)\in \Omega^\omega_\gamma$ and $|x|\geq \frac{\pi}{2}$, one has
$$
D^2u^\omega_\gamma (x,y)= \left(
\begin{array}{cc}
\frac{\gamma}{\sqrt{\omega}} \sin \left( \frac{|x|-\pi/2}{\sqrt{\omega}}\right)& 0\\
0 & -\cos y
\end{array}\right)
$$
Note that, if $(x,y)\in \Omega^\omega_\gamma$ and $|x|\geq \frac{\pi}{2}$, then $|y|\leq \frac{\pi}{2}$ and $0\leq \frac{|x|-\pi/2}{\sqrt{\omega}}<\arcsin \left( \frac{1}{\gamma \sqrt{\omega}}\right) \leq \frac{\pi}{2}$; therefore
$$
-\mathcal{M}^+_{\lambda, \Lambda} \left( D^2u^\omega_\gamma\right) =\lambda \, \cos y-\Lambda\, \frac{\gamma}{\sqrt{\omega}} \sin \left( \frac{|x|-\pi/2}{\sqrt{\omega}}\right)= \lambda\, u^\omega_\gamma\, .
$$
Analogously, for $(x,y)\in \Omega^\omega_\gamma$ and $|y|\geq \frac{\pi}{2}$, we have
$$
D^2u^\omega_\gamma (x,y)= \left(
\begin{array}{cc}
- \gamma\, \cos x & 0\\
0 & \frac{1}{\sqrt{\omega}} \sin \left( \frac{|y|-\pi/2}{\sqrt{\omega}}\right)
\end{array}\right)
$$
and, since $|x|\leq \frac{\pi}{2}$ and $0\leq \frac{|y|-\pi/2}{\sqrt{\omega}}<\arcsin \left( \frac{\gamma}{ \sqrt{\omega}}\right) \leq \frac{\pi}{2}$, we again conclude
$$
-\mathcal{M}^+_{\lambda, \Lambda} \left( D^2u^\omega_\gamma\right) =\lambda \, \gamma\, \cos x- \frac{\Lambda}{\sqrt{\omega}} \sin \left( \frac{|y|-\pi/2}{\sqrt{\omega}}\right)= \lambda\, u^\omega_\gamma\, .
$$ \end{proof}
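The piecewise computation in the proof can also be verified numerically. The sketch below is illustrative only; the values $\lambda=1$, $\Lambda=2$, $\gamma=1$ are sample choices of ours. It evaluates the Pucci sup-operator as $\mathcal{M}^+_{\lambda,\Lambda}(M)=\Lambda\sum_{e_i>0}e_i+\lambda\sum_{e_i<0}e_i$ over the Hessian eigenvalues and checks $-\mathcal{M}^+_{\lambda, \Lambda} ( D^2u^\omega_\gamma) =\lambda\, u^\omega_\gamma$ at one point in each of the three regions:

```python
import numpy as np

lam, Lam = 1.0, 2.0          # sample values with Lambda >= lambda (our assumption)
om = Lam / lam               # omega = Lambda / lambda
gam = 1.0                    # any gamma in [1/sqrt(om), sqrt(om)]
s = np.sqrt(om)

def pucci_plus(e1, e2):
    """Pucci sup-operator on a symmetric matrix with eigenvalues e1, e2."""
    return sum(Lam * e if e > 0 else lam * e for e in (e1, e2))

def u(x, y):
    """The eigenfunction u^omega_gamma, piecewise as in the theorem."""
    if abs(x) <= np.pi / 2 and abs(y) <= np.pi / 2:
        return gam * np.cos(x) + np.cos(y)
    if abs(x) > np.pi / 2:
        return gam * s * np.cos((abs(x) - np.pi / 2) / s + np.pi / 2) + np.cos(y)
    return gam * np.cos(x) + s * np.cos((abs(y) - np.pi / 2) / s + np.pi / 2)

def hess_eigs(x, y):
    """Diagonal entries of D^2 u, region by region as in the proof."""
    uxx = (-gam * np.cos(x) if abs(x) <= np.pi / 2
           else (gam / s) * np.sin((abs(x) - np.pi / 2) / s))
    uyy = (-np.cos(y) if abs(y) <= np.pi / 2
           else (1 / s) * np.sin((abs(y) - np.pi / 2) / s))
    return uxx, uyy

# one sample point in each region (all with u > 0, i.e. inside the domain)
for x, y in [(0.3, -0.8), (np.pi / 2 + 0.2, 0.1), (0.1, np.pi / 2 + 0.2)]:
    assert u(x, y) > 0
    assert abs(-pucci_plus(*hess_eigs(x, y)) - lam * u(x, y)) < 1e-12
```

Since the Hessian is diagonal in every region, its eigenvalues are just the two second derivatives, so the operator can be applied entrywise.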
\begin{remark}
{\rm
Let us remark that for $\omega=1$ the only admissible value for $\gamma$ is $\gamma=1$ and there is only one set $\Omega^1_1$.
In this case, up to a rotation, $\Omega^1_1$ is the square $\{ |x|< \pi/\sqrt{2}\, ,\ |y|<\pi/\sqrt{2}\}$ and $u^1_1(x,y)=\cos \left( \frac{x}{\sqrt{2}}\right)\, \cos \left( \frac{y}{\sqrt{2}}\right)$ is the first eigenfunction of the Laplace operator, associated with the first eigenvalue $\lambda_1 = \mu \left( -\Delta, \Omega^1_1\right)=1$.
For $\omega>1$, we have identified the family of bounded domains $\Omega^\omega_\gamma$, $\frac{1}{\sqrt{\omega}}\leq \gamma\leq \sqrt{\omega}$, in all of which the positive principal eigenvalue of $\mathcal{M}^+_{\lambda, \Lambda}$ is $\lambda$. Note that $\Omega^\omega_\gamma$ is a smooth set except for $\gamma=\sqrt{\omega}$ and the symmetric case $\gamma=1/\sqrt{\omega}$. $\partial \Omega^\omega_{\sqrt{\omega}}$ has singularity points at $ \left(0, \pm (1+\sqrt{\omega}) \frac{\pi}{2}\right)$, where an angle of amplitude $2\arctan \left(\frac{1}{\sqrt{\omega}}\right)$ occurs (see Figure 1). Moreover, for $(x,y)\in \Omega^\omega_{\sqrt{\omega}}\cap \{ |y|>\pi/2\}$, the eigenfunction $u^\omega_{\sqrt{\omega}}$ has the expression
$$
u^\omega_{\sqrt{\omega}}(x,y)=2\cos \left( \frac{ \frac{|y|-\pi/2}{\sqrt{\omega}}+\frac{\pi}{2}+x}{2}\right)\, \cos \left( \frac{ \frac{|y|-\pi/2}{\sqrt{\omega}}+\frac{\pi}{2}-x}{2}\right)
$$
showing that $u^\omega_{\sqrt{\omega}}(x,y)$ vanishes quadratically as $\Omega^\omega_{\sqrt{\omega}}\ni (x,y)\to \left( 0,\pm (1+\sqrt{\omega}) \frac{\pi}{2}\right)$. This property is consistent with the fact that the homogeneous problem
\begin{equation}\label{omo}
\left\{ \begin{array}{c}
\mathcal{M}^+_{\lambda, \Lambda}(D^2\Phi) =0\qquad \hbox{in } \mathcal{C}\\[1ex]
\Phi =0 \qquad \hbox{on } \partial \mathcal{C}
\end{array}\right.
\end{equation}
where $\mathcal{C}$ is the plane cone $\mathcal{C}=\{ y> \sqrt{\omega}|x|\}$, has the positive, degree 2 homogeneous solution $\Phi (x,y)=y^2-\omega\, x^2$ (see \cite{L}). Indeed, by the comparison principle, it immediately follows that
$$
\liminf_{\Omega^\omega_{\sqrt{\omega}}\ni (x,y)\to \left(0, \pm (1+\sqrt{\omega}) \frac{\pi}{2}\right)} \frac{u^\omega_{\sqrt{\omega}}(x,y)}
{\Phi\left( x,(1+\sqrt{\omega})\frac{\pi}{2}\mp y\right)} >0\, .
$$
}
\end{remark}
\bigskip
\begin{remark}{\rm The function $u^\omega_\gamma$ can be extended so as to obtain a sign-changing eigenfunction for $\mathcal{M}^+_{\lambda, \Lambda}$ in the whole ${\mathbb R}^2$.
Precisely, for any $\gamma>0$, let us define in the square $\left\{ |x|\, ,\ |y| \leq (1+\sqrt{\omega})\frac{\pi}{2}\right\}$
$$
u^\omega_\gamma (x,y)=\left\{
\begin{array}{ll}
\gamma\, \cos x +\cos y & \hbox{if } |x|\,,\ |y|\leq \frac{\pi}{2}\\[2ex]
-\gamma\, \sqrt{\omega} \sin \left( \frac{|x|-\pi/2}{\sqrt{\omega}}\right) +\cos y & \hbox{if } \frac{\pi}{2}< |x|\leq (1+\sqrt{\omega})\frac{\pi}{2}\, ,\ |y|\leq \frac{\pi}{2}\\[3ex]
\gamma\, \cos x -\sqrt{\omega} \sin \left( \frac{|y|-\pi/2}{\sqrt{\omega}} \right) & \hbox{if } |x| \leq \frac{\pi}{2}\, ,\ \frac{\pi}{2}< |y|\leq (1+\sqrt{\omega})\frac{\pi}{2}\\[3ex]
-\sqrt{\omega}\left( \gamma \sin \left( \frac{|x|-\pi/2}{\sqrt{\omega}}\right) +\sin \left( \frac{|y|-\pi/2}{\sqrt{\omega}} \right)\right) & \hbox{if } \frac{\pi}{2}< |x|\,,\ |y| \leq (1+\sqrt{\omega})\frac{\pi}{2}
\end{array} \right.
$$
and extend $u^\omega_\gamma$ periodically both with respect to $x$ and $y$. Then, by arguing as in Theorem \ref{simm}, it is easy to see that
$$
\mathcal{M}^+_{\lambda, \Lambda} (D^2u^\omega_\gamma )+\lambda\, u^\omega_\gamma =0 \qquad \hbox{in } {\mathbb R}^2\, .
$$
The set where $u^\omega_\gamma$ is positive has bounded connected components if and only if $\frac{1}{\sqrt{\omega}}\leq \gamma \leq \sqrt{\omega}$, and in this case they are nothing but translations of $\Omega^\omega_\gamma$. Conversely, the connected components of the set
$D^\omega_\gamma =\{ u^\omega_\gamma<0\}$ are unbounded for any $\gamma >0$. For $\frac{1}{\sqrt{\omega}}< \gamma < \sqrt{\omega}$, $D^\omega_\gamma$ is connected and unbounded in both the $x$ and $y$ directions, whereas for $\gamma \leq \frac{1}{\sqrt{\omega}}$ or $\gamma\geq \sqrt{\omega}$ the connected components of $D^\omega_\gamma$ are contained in unbounded horizontal or vertical strips respectively, see Figure 2. Since $u^\omega_\gamma$ is a negative eigenfunction for $\mathcal{M}^+_{\lambda, \Lambda}$ in each connected component of $D^\omega_\gamma$, for these sets one has $\mu^-=\lambda$. We finally remark that this construction does not yield a sign-changing eigenfunction on a bounded domain, so it does not allow us to compute eigenvalues other than the principal ones.}
\end{remark}
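On the fourth branch, where both $|x|$ and $|y|$ exceed $\pi/2$ and $u^\omega_\gamma<0$, the Hessian is diagonal with positive entries, and the identity $\mathcal{M}^+_{\lambda, \Lambda} (D^2u^\omega_\gamma )+\lambda\, u^\omega_\gamma =0$ can again be checked numerically. This is a sketch with sample values $\lambda=1$, $\Lambda=2$, $\gamma=0.8$ of our choosing:

```python
import numpy as np

lam, Lam = 1.0, 2.0                      # sample values (our assumption)
om, gam = Lam / lam, 0.8                 # on the extension, any gamma > 0 is allowed
s = np.sqrt(om)

def u4(x, y):
    """Fourth branch of the extended eigenfunction (pi/2 < |x|, |y|)."""
    return -s * (gam * np.sin((abs(x) - np.pi / 2) / s)
                 + np.sin((abs(y) - np.pi / 2) / s))

def hess4(x, y):
    """Diagonal Hessian entries of the fourth branch (both positive)."""
    return ((gam / s) * np.sin((abs(x) - np.pi / 2) / s),
            (1 / s) * np.sin((abs(y) - np.pi / 2) / s))

def pucci_plus(e1, e2):
    """Pucci sup-operator on a diagonal matrix with entries e1, e2."""
    return sum(Lam * e if e > 0 else lam * e for e in (e1, e2))

x, y = np.pi / 2 + 0.3, np.pi / 2 + 0.5
uxx, uyy = hess4(x, y)
assert u4(x, y) < 0 and uxx > 0 and uyy > 0
# M^+(D^2 u) + lam * u = 0 on this branch
assert abs(pucci_plus(uxx, uyy) + lam * u4(x, y)) < 1e-12
```

Here both Hessian entries are positive, so $\mathcal{M}^+$ acts as $\Lambda$ times the Laplacian of the branch, which cancels $\lambda u$ exactly because $\Lambda=\lambda\omega$.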
\begin{figure}
\includegraphics[height=40mm]{Pucciminus.pdf} \includegraphics[height=40mm]{Pucciminus2.pdf}\includegraphics[height=40mm]{Pucciminus3.pdf}
\caption{${ D}^{\omega}_1$, ${ D}^{\omega}_{\sqrt{\omega}}$ and ${ D}^{\omega}_{\gamma}$ for $\gamma >\sqrt{\omega}$.}
\end{figure}
\bigskip\bigskip
\noindent Let us now enlarge, by deforming the sets $\Omega^\omega_\gamma$, the class of domains for which we can evaluate the positive principal eigenvalue of $\mathcal{M}^+_{\lambda, \Lambda}$. For any $a\in {\mathbb R}$ with $|a|<\pi$ let us consider the nonsingular matrix
$$
C_a= \left(
\begin{array}{cc}
\sqrt{1 -\left(\frac{a}{\pi}\right)^2} & 0\\[2ex]
\frac{a}{\pi} & 1
\end{array}\right)
$$
and let us denote by $C_a:{\mathbb R}^2\to {\mathbb R}^2$ also the linear transformation induced by $C_a$. We observe that $C_a$ maps the square $Q=\{ |x|+|y|<\pi\}$ with side $\sqrt{2}\pi$ onto the rectangle $R=\left\{ |x| +\left| \frac{\sqrt{\pi^2-a^2}y-a\,x}{\pi}\right|<\sqrt{\pi^2-a^2}\right\}$ with sides $\sqrt{2\pi(\pi-a)}$ and $\sqrt{2\pi(\pi+a)}$, and the square $\{ |x|\,,\ |y|<\pi/2\}$ onto the rhombus $\left\{ |x|< \frac{\sqrt{\pi^2-a^2}}{2}\, ,\ \left| y-\frac{a}{\sqrt{\pi^2-a^2}}x\right| <\frac{\pi}{2}\right\}$. Let us further set
$$
\Omega^\omega_{\gamma, a} \, := C_a \, \left( \Omega^\omega_\gamma\right)
$$
and
$$
u^\omega_{\gamma, a} (x,y) \, : = u^\omega_\gamma \left( C_a^{-1} (x,y)\right) \, , \quad (x,y)\in \Omega^\omega_{\gamma, a}\, ,
$$
where $u^\omega_\gamma$ is defined in Theorem \ref{simm}, see Figure 3.
\begin{figure}
\includegraphics[height=40mm]{Puccirectangle.pdf}
\caption{The domain $\Omega^\omega_{\gamma, a}$; in the black part $u^\omega_{\gamma, a}$ is concave.}
\end{figure}
\begin{theorem}\label{nonsimm} Given $\Lambda\geq \lambda>0$ let us set $\omega =\frac{\Lambda}{\lambda}\geq1$. Then, for any $\frac{1}{\sqrt{\omega}}\leq \gamma\leq \sqrt{\omega}$ and $|a|<\pi$ the function $u^\omega_{\gamma, a}$ satisfies
\begin{equation}\label{super}
\left\{ \begin{array}{l}
-\mathcal{M}^+_{\lambda, \Lambda} (D^2 u^\omega_{\gamma, a} ) \geq \frac{\lambda \, \pi^2}{\pi^2-a^2} u^\omega_{\gamma, a} \quad \hbox{in } \Omega^\omega_{\gamma, a}\\[2ex]
u^\omega_{\gamma, a} >0\ \hbox{in } \Omega^\omega_{\gamma, a}\, ,\ u^\omega_{\gamma, a} =0\ \hbox{on } \partial \Omega^\omega_{\gamma, a}
\end{array}\right.
\end{equation}
As a consequence, the positive principal eigenvalue of $\mathcal{M}^+_{\lambda, \Lambda}$ in $\Omega^\omega_{\gamma, a}$ satisfies
\begin{equation}\label{lb}
\mu \left( \Omega^\omega_{\gamma, a} \right) \geq \frac{\lambda\, \pi^2}{\pi^2-a^2}\, ,
\end{equation}
and equality holds if and only if either $\omega=1$ or $a=0$.
\end{theorem}
\begin{proof}
Let us compute. We have
$$
D^2u^\omega_{\gamma, a} (x,y)= \left( C_a^{-1}\right)^t D^2u^\omega_\gamma \left( C_a^{-1} (x,y)\right) C_a^{-1}\, ,
$$
with
$$
C_a^{-1}=\left(
\begin{array}{cc}
\frac{\pi}{\sqrt{\pi^2-a^2}} & 0\\[2ex]
- \frac{a}{\sqrt{\pi^2-a^2}} & 1
\end{array}\right)\, .
$$
Since $D^2u^\omega_\gamma$ is diagonal, by setting
$$
\left\{ \begin{array}{l}
X=\frac{\pi}{\sqrt{\pi^2-a^2}} x\\[2ex]
Y=y- \frac{a}{\sqrt{\pi^2-a^2}} x
\end{array}
\right.
$$
we then obtain
$$
D^2u^\omega_{\gamma, a} (x,y)=\left(
\begin{array}{cc}
\frac{\pi^2}{\pi^2-a^2} (u^\omega_\gamma)_{xx}(X,Y) +\frac{a^2}{\pi^2-a^2}(u^\omega_\gamma)_{yy}(X,Y) & -\frac{a}{\sqrt{\pi^2-a^2}}(u^\omega_\gamma)_{yy}(X,Y)\\[2ex]
-\frac{a}{\sqrt{\pi^2-a^2}}(u^\omega_\gamma)_{yy}(X,Y) & (u^\omega_\gamma)_{yy}(X,Y)
\end{array} \right)\, .
$$
Note that, in particular,
$$
{\rm det}(D^2u^\omega_{\gamma, a} (x,y))= \frac{\pi^2}{\pi^2-a^2}{\rm det}(D^2u^\omega_\gamma(X,Y))\, .
$$
Therefore, for $(x,y)\in C_a\left( \left\{|X|\, ,\ |Y|\leq \frac{\pi}{2}\right\} \right)$, $u^\omega_{\gamma, a} (x,y)$ is concave like $u^\omega_\gamma(X,Y)$ and it follows that
$$
-\mathcal{M}^+_{\lambda, \Lambda}(D^2u^\omega_{\gamma, a} )=-\lambda\, \Delta u^\omega_{\gamma, a} =- \frac{\lambda\, \pi^2}{\pi^2-a^2} \Delta u^\omega_\gamma= \frac{\lambda\, \pi^2}{\pi^2-a^2} u^\omega_\gamma =\frac{\lambda\, \pi^2}{\pi^2-a^2} u^\omega_{\gamma, a}\, .
$$
Otherwise, for $(x,y)\in \Omega^\omega_{\gamma, a}$ such that either $|X|>\frac{\pi}{2}$ or $|Y|>\frac{\pi}{2}$, we have ${\rm det}(D^2u^\omega_{\gamma, a} (x,y))<0$, and, by computing the eigenvalues of $D^2u^\omega_{\gamma, a}$ and recalling the expressions of $(u^\omega_{\gamma, a})_{xx}$ and $(u^\omega_{\gamma, a})_{yy}$ from the proof of Theorem \ref{simm}, we get
$$
\begin{array}{ll}
-\mathcal{M}^+_{\lambda, \Lambda} (D^2u^\omega_{\gamma, a} ) & = -\frac{\lambda\, \pi^2}{2(\pi^2-a^2)} \left[ (\omega+1) \left( (u^\omega_\gamma)_{xx}+(u^\omega_\gamma)_{yy}\right) \right.\\[2ex]
& \qquad \qquad \quad \left. +(\omega -1) \sqrt{(u^\omega_\gamma)_{xx}^2
+(u^\omega_\gamma)_{yy}^2+2\left( \frac{2a^2}{\pi^2}-1\right) (u^\omega_\gamma)_{xx}(u^\omega_\gamma)_{yy}}\right]\\[2ex]
& \geq -\frac{\lambda\, \pi^2}{2(\pi^2-a^2)} \left[ (\omega+1) \left( (u^\omega_\gamma)_{xx}+(u^\omega_\gamma)_{yy}\right) +(\omega -1) \left| (u^\omega_\gamma)_{xx}-(u^\omega_\gamma)_{yy}\right|\right]\\[2ex]
& = \frac{\lambda\, \pi^2}{(\pi^2-a^2)} \, u^\omega_{\gamma, a}\, ,
\end{array}
$$
and equality holds in the above if and only if either $\omega=1$ or $a=0$. Therefore,
$u^\omega_{\gamma, a}$ satisfies \refe{super}, and \refe{lb} follows immediately from the definition of the positive principal eigenvalue for $\mathcal{M}^+_{\lambda, \Lambda}$. Moreover, equality holds in \refe{lb} if and only if $u^\omega_{\gamma, a}$ is the principal eigenfunction for $\mathcal{M}^+_{\lambda, \Lambda}$ in $\Omega^\omega_{\gamma, a}$, see Corollary 2.1 in \cite{BNV} or Theorem 4.4 in \cite{P}. Hence, equality holds in \refe{lb} if and only if either $\omega=1$ or $a=0$.
\end{proof}
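Inequality \refe{super} can also be checked pointwise. The sketch below is illustrative only, with sample values $\lambda=1$, $\Lambda=2$, $\gamma=1$, $a=1$ of our choosing; it forms the Hessian $\left( C_a^{-1}\right)^t D^2u^\omega_\gamma\, C_a^{-1}$ at points $(X,Y)=C_a^{-1}(x,y)$ in each of the three regions and verifies $-\mathcal{M}^+_{\lambda, \Lambda} (D^2 u^\omega_{\gamma, a} ) \geq \frac{\lambda\, \pi^2}{\pi^2-a^2}\, u^\omega_{\gamma, a}$:

```python
import numpy as np

lam, Lam = 1.0, 2.0                      # sample values (our assumption)
om, gam, a = Lam / lam, 1.0, 1.0         # shear parameter a with |a| < pi
s, rad = np.sqrt(om), np.sqrt(np.pi**2 - a**2)

Ca_inv = np.array([[np.pi / rad, 0.0],   # inverse of the shear matrix C_a
                   [-a / rad,    1.0]])

def u(X, Y):
    """u^omega_gamma; u_{gamma,a}(x,y) = u(C_a^{-1}(x,y)) = u(X,Y)."""
    if abs(X) <= np.pi / 2 and abs(Y) <= np.pi / 2:
        return gam * np.cos(X) + np.cos(Y)
    if abs(X) > np.pi / 2:
        return gam * s * np.cos((abs(X) - np.pi / 2) / s + np.pi / 2) + np.cos(Y)
    return gam * np.cos(X) + s * np.cos((abs(Y) - np.pi / 2) / s + np.pi / 2)

def hess(X, Y):
    """Diagonal Hessian D^2 u^omega_gamma at (X, Y)."""
    uxx = (-gam * np.cos(X) if abs(X) <= np.pi / 2
           else (gam / s) * np.sin((abs(X) - np.pi / 2) / s))
    uyy = (-np.cos(Y) if abs(Y) <= np.pi / 2
           else (1 / s) * np.sin((abs(Y) - np.pi / 2) / s))
    return np.diag([uxx, uyy])

def pucci_plus(M):
    """Pucci sup-operator via the eigenvalues of the symmetric matrix M."""
    return sum(Lam * e if e > 0 else lam * e for e in np.linalg.eigvalsh(M))

c = lam * np.pi**2 / (np.pi**2 - a**2)
# sample (X, Y) = C_a^{-1}(x, y) in each of the three regions
for X, Y in [(0.3, -0.8), (np.pi / 2 + 0.2, 0.1), (0.1, np.pi / 2 + 0.2)]:
    D2a = Ca_inv.T @ hess(X, Y) @ Ca_inv   # Hessian of the sheared function
    assert -pucci_plus(D2a) >= c * u(X, Y) - 1e-12
```

In the concave region the inequality is an equality (up to rounding), while at the other two sample points it is strict, consistent with the case $\omega>1$, $a\neq0$ of the theorem.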
As a consequence of Theorems \ref{simm} and \ref{nonsimm}, we can deduce that, among all sets $\Omega^\omega_{\gamma, a}$ and their rescalings $\delta \,\Omega^\omega_{\gamma, a}$ with $\delta>0$, for equal area the minimum of the principal eigenvalue of $\mathcal{M}^+_{\lambda, \Lambda}$ is achieved on the most symmetric domain, that is, on a rescaling of $\Omega^\omega_1$. We will denote by $|\Omega|$ the area (two dimensional Lebesgue measure) of any set $\Omega\subset {\mathbb R}^2$, and by $\mu (\Omega)$ the positive principal eigenvalue of $\mathcal{M}^+_{\lambda, \Lambda}$ in the domain $\Omega$.
\begin{corollary}
Given $\Lambda\geq \lambda>0$, let us set $\omega =\frac{\Lambda}{\lambda}\geq1$. Then
$$
\mu \left( \frac{\Omega^\omega_1}{\sqrt{\left| \Omega^\omega_1\right|}}\right)=\min \left\{ \mu \left( \frac{\Omega^\omega_{\gamma, a}}{\sqrt{\left| \Omega^\omega_{\gamma, a}\right|}}\right)\, :\ \frac{1}{\sqrt{\omega}}\leq \gamma\leq \sqrt{\omega}\, ,\ |a|<\pi\right\}\, .
$$
\end{corollary}
\begin{proof}
By the homogeneity of the principal eigenvalue and by Theorem \ref{nonsimm}, we have
$$
\mu \left( \frac{\Omega^\omega_{\gamma, a}}{\sqrt{\left| \Omega^\omega_{\gamma, a}\right|}}\right) = \left| \Omega^\omega_{\gamma, a}\right| \, \mu \left( \Omega^\omega_{\gamma, a}\right) \geq \frac{\lambda\, \pi^2}{\pi^2-a^2} \, \left| \Omega^\omega_{\gamma, a}\right| \, .
$$
Moreover, one has
$$
\left| \Omega^\omega_{\gamma, a}\right| =\left| C_a\left( \Omega^\omega_\gamma \right)\right| =\left| {\rm det}\left( C_a\right)\right|\, \left| \Omega^\omega_\gamma\right| = \frac{\sqrt{\pi^2-a^2}}{\pi}\, \left| \Omega^\omega_\gamma\right| \, ,
$$
so that
$$
\mu \left( \frac{\Omega^\omega_{\gamma, a}}{\sqrt{\left| \Omega^\omega_{\gamma, a}\right|}}\right) \geq \frac{\lambda\, \pi}{\sqrt{\pi^2-a^2}}\, \left| \Omega^\omega_\gamma\right|\geq \lambda\, \left| \Omega^\omega_\gamma\right|\, .
$$
On the other hand, by the definition of $\Omega^\omega_\gamma$, we get
$$
\left| \Omega^\omega_\gamma\right| = \pi^2 +4\sqrt{\omega} \int_{0}^{\pi/2} \left[ \arcsin \left( \frac{\gamma}{\sqrt{\omega}}\cos x\right) +\arcsin \left( \frac{1}{\gamma\, \sqrt{\omega}}\cos x\right) \right]\, dx\, ;
$$
hence,
$$
\frac{d}{d \gamma} \left| \Omega^\omega_\gamma\right|= \frac{4\sqrt{\omega}}{\gamma} \int_0^{\pi/2} \left[ \frac{1}{\sqrt{\frac{\omega}{\gamma^2}-\cos^2 x}}-
\frac{1}{\sqrt{\omega\, \gamma^2-\cos^2 x}}\right]\, \cos x\, dx \left\{ \begin{array}{l}
\geq 0 \quad \hbox{for } \gamma \geq 1\\[2ex]
\leq 0 \quad \hbox{for } \gamma \leq 1
\end{array} \right.
$$
which shows that $\left| \Omega^\omega_\gamma\right|$ is minimal for $\gamma=1$. In conclusion, by using also Theorem \ref{simm}, we deduce
$$
\mu \left( \frac{\Omega^\omega_{\gamma, a}}{\sqrt{\left| \Omega^\omega_{\gamma, a}\right|}}\right) \geq \lambda\, \left| \Omega^\omega_\gamma\right|\geq \lambda\, \left| \Omega^\omega_1\right| = \mu \left( \frac{\Omega^\omega_1}{\sqrt{\left| \Omega^\omega_1\right|}}\right)\, .
$$
\end{proof}
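The area formula and the fact that $\gamma\mapsto\left| \Omega^\omega_\gamma\right|$ is minimal at $\gamma=1$ can be verified numerically. The following sketch (with the sample value $\omega=2$, an assumption of ours) integrates the arcsin formula by the trapezoidal rule:

```python
import numpy as np

om = 2.0                                  # sample omega = Lambda / lambda (assumption)

def area(gam):
    """|Omega^omega_gamma| via the arcsin formula in the proof above."""
    x = np.linspace(0.0, np.pi / 2, 20001)
    f = (np.arcsin(gam / np.sqrt(om) * np.cos(x))
         + np.arcsin(np.cos(x) / (gam * np.sqrt(om))))
    dx = x[1] - x[0]
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * dx   # trapezoidal rule
    return np.pi**2 + 4 * np.sqrt(om) * integral

# the formula is symmetric under gamma -> 1/gamma ...
assert abs(area(1.3) - area(1 / 1.3)) < 1e-6
# ... and the area is minimal at gamma = 1
gams = np.linspace(1 / np.sqrt(om), np.sqrt(om), 41)
g_min = gams[int(np.argmin([area(g) for g in gams]))]
assert abs(g_min - 1.0) < 0.05
```

The symmetry reflects the relation $\Omega^\omega_{1/\gamma}$ being the reflection of $\Omega^\omega_\gamma$ noted at the beginning of the section.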
[Source: arXiv:1306.3396, ``Symmetry minimizes the principal eigenvalue: an example for the Pucci's sup operator,'' https://arxiv.org/abs/1306.3396.]
[Source: arXiv:1806.07260, ``The graphs with all but two eigenvalues equal to $2$ or $-1$,'' https://arxiv.org/abs/1806.07260. Abstract: In this paper, all graphs whose adjacency matrix has at most two eigenvalues (multiplicities included) different from $2$ and $-1$ are determined. These graphs include a class of generalized friendship graphs $F_{t,r,k}$, the graph of $k$ copies of the complete graph $K_t$ meeting in $r$ common vertices with $t-r=3$. Which of these graphs are determined by their spectrum is also obtained.]

\section{Introduction}
All graphs in this paper are simple, and all spectra are adjacency spectra. Let $G=(V,E)$ be a graph. The adjacency matrix $A(G)$ (or $A$) of $G$ is the $n\times n$ matrix whose $(i,j)$-entry is $1$ if vertex $v_{i}$ is adjacent to $v_{j}$ (denoted $v_i \sim v_j$), and $0$ otherwise. The polynomial $P_G(x)=\det(xI-A(G))$ is the characteristic polynomial of $G$, and the eigenvalues of $A$ are the adjacency eigenvalues of $G$. There are many results on the eigenvalues of graphs and their applications; see \cite{b1} for more details.
Connected graphs with a small number of distinct eigenvalues have attracted a lot of interest in the past several decades. This problem was first raised by Doob \cite{b13}. It is well known that a connected graph has just two distinct eigenvalues if and only if it is a complete graph, and that a connected regular graph has just three distinct eigenvalues if and only if it is strongly regular. It is difficult to characterise all non-regular connected graphs with three or four distinct eigenvalues. There are interesting results on regular graphs with four distinct eigenvalues \cite{b10}, non-regular graphs with three distinct eigenvalues \cite{b11}, biregular graphs with three distinct eigenvalues \cite{b9} and small regular graphs with four distinct eigenvalues \cite{b12}. Cioab\v{a} et al. \cite{b8} determined all connected graphs with at most two eigenvalues different from $-2$ or $0$.
For more results on graphs with few distinct eigenvalues, we refer the reader to \cite{b15,b14,b16}.
For $0\leq r \leq t$, denote by $F_{t,r,k}$ the generalized friendship graph on $kt-kr+r$ vertices, where $F_{t,r,k}$ is the graph of $k$ copies of the complete graph $K_t$ meeting in $r$ common vertices. Clearly $F_{t,r,1}=F_{t,t,k}=K_t,$ which is determined by its spectrum. For convenience we shall assume that $k\geq 2.$ The graph $F_{3,1,k}$ is the friendship graph, which is determined by its spectrum if $k\not=16$ \cite{b5}. It is not difficult to see that the spectrum of $F_{t,r,k}$ has at most two eigenvalues (multiplicities included) different from $t-r-1$ and $-1.$ It is an interesting problem whether $F_{t,r,k}$ is determined by its spectrum.
Very recently, Cioab\v{a} et al. \cite{b5} determined all connected graphs with at most two eigenvalues different from $\pm1,$ which corresponds to the case $t-r=2,$ and proved that the friendship graph $F_{3,1,k}$ is determined by its spectrum unless $k=16.$
In this paper, we consider the case $t-r=3$ and determine all connected graphs with at most two eigenvalues different from $2$ and $-1$; these graphs consist of four infinite families and twenty sporadic graphs. We also determine which of these graphs are determined by their spectrum.
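The claim that the spectrum of $F_{t,r,k}$ has at most two eigenvalues different from $t-r-1$ and $-1$ is easy to check numerically for small parameters. The following sketch is illustrative only; the construction and the sample parameters are ours, not from the paper:

```python
import itertools
import numpy as np

def friendship(t, r, k):
    """Adjacency matrix of F_{t,r,k}: k copies of K_t sharing r common vertices."""
    n = r + k * (t - r)
    A = np.zeros((n, n), dtype=int)
    common = list(range(r))
    for i in range(k):
        own = list(range(r + i * (t - r), r + (i + 1) * (t - r)))
        for u, v in itertools.combinations(common + own, 2):
            A[u, v] = A[v, u] = 1
    return A

def n_exceptional(t, r, k):
    """Number of eigenvalues of F_{t,r,k} different from t-r-1 and -1."""
    ev = np.linalg.eigvalsh(friendship(t, r, k))
    return sum(1 for e in ev
               if abs(e - (t - r - 1)) > 1e-8 and abs(e + 1) > 1e-8)

assert n_exceptional(6, 3, 4) == 2       # t - r = 3, the case of this paper
assert n_exceptional(7, 2, 3) == 2       # another sample, with t - r = 5
```

In both samples exactly two eigenvalues fall outside $\{t-r-1,-1\}$, matching the statement above.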
\section{Main tools}
\par We start with a well known result on equitable partitions (see for example \cite{b1}). Consider a partition $\mathcal{P} = \{V_1,\dots , V_m\}$ of the set $V= \{1, \dots , n\}$. The characteristic matrix $\mathcal{X}_\mathcal{P}$ of $\mathcal{P}$ is the $n \times m$ matrix whose columns are the characteristic vectors of $V_1, \dots, V_m$. Consider a symmetric matrix $A$ of order $n$, with rows and columns partitioned according to $\mathcal{P}$. The partition of $A$ is equitable if each submatrix $A_{i, j}$ formed by the rows of $V_{i}$ and the columns of $V_{j}$ has constant row sums $q_{ij}$. The $m\times m$ matrix $Q =(q_{ij})$ is called the quotient matrix of $A$ with respect to $\mathcal{P}$.
\begin{lem}\label{odd}\cite{b1}
The matrix $A$ has the following two kinds of eigenvectors and eigenvalues:
(1) The eigenvectors in the column space of $\mathcal{X}_\mathcal{P}$; the corresponding eigenvalues coincide with the eigenvalues of $Q$;
(2) The eigenvectors orthogonal to the columns of $\mathcal{X}_\mathcal{P}$; the corresponding eigenvalues of $A$ remain unchanged if some scalar multiple of the all-one block $J$ is added to block $A_{i,j}$ for each $i,j\in \{{1,\dots, m}\}$.\end{lem}
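As a numerical illustration of this lemma (a sketch under sample parameters of our choosing, not part of the paper): for the generalized friendship graph $F_{t,r,k}$ with $t-r=3$, the partition into the $r$ common vertices and the $3k$ remaining vertices is equitable with quotient matrix $Q=\left[\begin{smallmatrix} r-1 & 3k\\ r & 2\end{smallmatrix}\right]$, and in this example the eigenvalues of $Q$ are exactly the two eigenvalues of the graph different from $2$ and $-1$:

```python
import itertools
import numpy as np

r, k = 3, 4                              # sample parameters (assumption); t = r + 3
t = r + 3

# adjacency matrix of F_{t,r,k}: k copies of K_t sharing r common vertices
n = r + k * (t - r)
A = np.zeros((n, n), dtype=int)
for i in range(k):
    clique = list(range(r)) + list(range(r + 3 * i, r + 3 * i + 3))
    for u, v in itertools.combinations(clique, 2):
        A[u, v] = A[v, u] = 1

Q = np.array([[r - 1, 3 * k],            # quotient matrix of the equitable
              [r,     2]])               # partition {common vertices, the rest}
q_ev = np.sort(np.linalg.eigvals(Q).real)
full_ev = np.linalg.eigvalsh(A)
extra = sorted(e for e in full_ev if abs(e - 2) > 1e-8 and abs(e + 1) > 1e-8)
assert len(extra) == 2 and np.allclose(extra, q_ev, atol=1e-8)
```

The remaining eigenvalues, which belong to eigenvectors orthogonal to the columns of $\mathcal{X}_\mathcal{P}$, are all equal to $2$ or $-1$, as part (2) of the lemma predicts.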
\par The degree of a vertex $v$, denoted by $d_v$, is the number of vertices adjacent to $v$, and $d_{uv}$ is the number of common neighbors of $u$ and $v$. If the vertices $i$ and $j$ are adjacent we write $i\sim j$, and otherwise $i\nsim j$. Let $mK_{3}$ denote the disjoint union of $m$ triangles and $kK_{2}$ the disjoint union of $k$ edges, and let $T_{3m}$ be the adjacency matrix of $ mK_{3}$ and $R_{2k}$ the adjacency matrix of $ k K_2$. We denote the $m\times n$ all-ones matrix by $J_{m,n}$ (or just $J$) and the $m\times n$ all-zeros matrix by $0_{m}$ (or $0$). We define the $2k \times k$ matrix $S_{2k}$ as follows:
$$ S_{2k}=\begin{bmatrix}\begin{smallmatrix}
1&0&0&\cdots&0&0\\
1&0&0&\cdots&0&0\\
0&1&0&\cdots&0&0\\
0&1&0&\cdots&0&0\\
\vdots&\vdots&\vdots& &\vdots&\vdots\\
0&0&0&\cdots&0&1\\
0&0&0&\cdots&0&1\\
\end{smallmatrix}
\end{bmatrix}.
$$
\begin{lem}\label{even}\cite{b1}
Let $G$ be a graph with smallest eigenvalue at least $-1$; then $G$ is the disjoint union of complete graphs.
\end{lem}
\begin{lem}\label{clique}(\cite{b6})
The only connected graphs having the largest eigenvalue $2$ are the graphs in Figure 1 .\end{lem}
\begin{figure}[ht]
\centering
\includegraphics[width=10cm, height=3cm]{fig1.jpg}\\
\caption{Connected graphs with the largest eigenvalue 2. }\label{fig-1}
\end{figure}
\begin{prop} \label{p:wAPPROX}
Let $G$ be a graph with $n$ vertices, we have
(i) If $G$ has all its eigenvalues equal to $2$ and $-1$, then $G = \frac{n}{3} K_3$.
(ii) If $G$ has all but one eigenvalue equal to $2$ and $-1$, then $G$ is the disjoint union of complete
graphs with all but one connected components equal to $K_3$.
(iii) If $G$ has just two eigenvalues, $r$ and $s$ ($r \geq s$) different from $2$ and $-1$, then $r > 2$ and
$s < -1$, or $G$ is a disjoint union of complete graphs with two connected components different from $K_3$.
\end{prop}
\noindent{\bf Proof. } If $G$ has smallest eigenvalue $-1$, then by Lemma \ref{even} $G$ is the disjoint union of complete graphs, which leads to (i), (ii) and the second option of (iii). If $G$ has largest eigenvalue $2$, then by Lemma \ref{clique} $G$ is one of the graphs in Figure 1; computing the eigenvalues of these graphs shows that none of them has the required spectrum. Therefore $r>2$ and $s<-1$, which is the first option of (iii).
$ \hfill \Box$ \vspace*{0.2cm}
By Proposition \ref{p:wAPPROX}, in order to obtain the connected graphs with at most two eigenvalues different from $2$ and $-1,$ it is sufficient to determine the graphs with just two eigenvalues $r$ and $s$ ($r >2>-1 > s$) different from $2$ and $-1.$ Therefore, the spectrum of such a graph $G$ has two useful properties. The first is that the second largest eigenvalue of $A(G)$ is $2$ and the second smallest eigenvalue is $-1$; by eigenvalue interlacing, this gives a considerable reduction on the possible induced subgraphs (see Lemma \ref{main}). The second is that $(A(G)+I)(A(G)-2I)$ has rank $2$ and is positive semi-definite, which leads to conditions on the structure of $(A(G)+I)(A(G)-2I)$ (see Lemmas \ref{nonsigular}, \ref{nine}). Because of these observations, we take a more general approach and consider all graphs with these two properties. In what follows we determine all connected graphs with only two eigenvalues $r$ and $s$ $(r>2>-1>s)$ different from $2$ and $-1$.
\begin{lem}\label{nonsigular}
Let $G$ be a graph with only two eigenvalues $r>2$ and $s<-1$ (multiplicities included) different from $2$ and $-1$. Then

(i) One connected component of $G$ has all vertices of degree at least $3$, and all other connected components are isomorphic to $K_3$.

(ii) If $u\nsim v$ and each neighbor of $u$ is also a neighbor of $v$, then $d_v - d_u \geq 5$.
\end{lem}
\noindent{\bf Proof. } (i) We prove the result by contradiction. Suppose first that $u$ is a vertex of degree $1$, let $v$ be its neighbor, and assume that $v$ has another neighbor $w$, of degree $d_w$. The $2\times2$ principal submatrix of $A^2-A-2I$ corresponding to $u$ and $w$ equals
$$S=\left[
\begin{array}{cc}
-1 & 1 \\
1 & d_w-2 \\
\end{array}
\right].
$$
If instead $v$ is a vertex of degree $2$ with neighbor $w$, the $2\times2$ principal submatrix of $A^2-A-2I$ corresponding to $v$ and $w$ equals $$S'=\left[
\begin{array}{cc}
0 & -1 \\
-1 & d_w-2 \\
\end{array}
\right].
$$
We have $\det S<0$ and $\det S'<0$, contradicting the fact that $A^2-A-2I$ is positive semi-definite. Thus $d_x\geq 3$ for every vertex $x$ of $G$.
(ii) The $2\times2$ principal submatrix of $A^2-A-2I$ corresponding to $u$ and $v$ equals
$$S=\left[
\begin{array}{cccc}
d_u-2 & d_u \\
d_u & d_v-2 \\
\end{array}
\right].
$$
If $d_v\leq d_u + 4$, then $\det S\leq (d_u-2)(d_u + 2)-d_u^2<0$, a contradiction. $ \hfill \Box$ \vspace*{0.2cm}
\par Note that Lemma \ref{nonsigular} (ii) implies that two non-adjacent vertices cannot have the same set of neighbors.
\begin{lem}\label{five}\cite{b1} Let $G$ be a bipartite graph. If $\lambda$ is an eigenvalue of $G$ with multiplicity $k$, then $-\lambda$ is also an eigenvalue of $G$ with multiplicity $k$.
\end{lem}
\begin{lem}\label{prod} (Interlacing Theorem)\cite{b1}
Let $A$ be a symmetric $n\times n$ matrix and let $B$ be a principal submatrix of $A$ of order $n-1$. If $\lambda_1\geq\dots\geq\lambda_n$ and $\mu_1\geq\dots\geq\mu_{n-1}$ are the eigenvalues of $A$ and $B$, respectively, then
$$\lambda_1\geq\mu_1\geq\lambda_2\geq\dots\geq\lambda_{n-1}\geq\mu_{n-1}\geq\lambda_n.$$
\end{lem}
\begin{figure}[h]
\centering
\includegraphics[width=13cm, height=12cm]{fig2.jpg}\\
\caption{Forbidden induced subgraphs.}\label{fig_2}
\end{figure}
Define $\mathcal{F}$ to be the set of connected graphs with two eigenvalues $r > 2$ and $s < -1$ (multiplicities included), and all other eigenvalues equal to $2$ and $-1$. Lemmas \ref{even} and \ref{five} imply that no graph in $\mathcal{F}$ is bipartite. In order to find all graphs with only two eigenvalues different from $2$ and $-1$, we start with a list of forbidden induced subgraphs.
\begin{lem}\label{main} No graph in $\mathcal{F}$ has one of the graphs presented in Figure $2$ as an induced subgraph.
\end{lem}
\noindent{\bf Proof. } Each graph in Figure $2$ has its second largest eigenvalue $\lambda_2$ strictly greater than $2$, or its second smallest eigenvalue $\lambda_{n-1}$ strictly less than $-1$. Interlacing completes the proof.
$ \hfill \Box$ \vspace*{0.2cm}
\section{Main results}
We begin with the description of the graphs in $\mathcal{F} $. The proof will be given in the next section.
\begin{thm}\label{one} For each $G\in \mathcal{F} $, the adjacency matrices and the corresponding spectra of $G$ are one of the following forms:
\vskip 0.10 cm
(i). $\begin{bmatrix}
J-I_a & J \\
J & T_{3k} \\
\end{bmatrix} (a\geq 1,k\geq2)$
with spectrum $\{{\frac{(a+1)\pm\sqrt{(a-3)^2+12ak}}{2}, 2^{k-1}, -1^{2k+a-1}}\},$ \vskip 0.20 cm
(ii). $\begin{bmatrix}
T_{3k} & J \\
J & T_{3\ell} \\
\end{bmatrix}(k\geq \ell\geq2)$
with spectrum $\{{2\pm3\sqrt{k\ell}}, 2^{k+\ell-2}, -1^{2(k+\ell)}\},$ \vskip 0.20 cm
(iii). $\left[
\begin{array}{cc}
R_{2m }& J-S_{2m} \\
J-S_{2m}^{T} & 0 \\
\end{array}
\right] (m\geq3)$ with spectrum $\{{\frac{1\pm\sqrt{9-16m+8m^{2}}}{2}, 2^{m-1}, -1^{2m-1}}\},$ \vskip 0.20 cm
(iv).
$\begin{bmatrix}
J-I_6 & J & 0 \\
J & T_{3k} & J \\
0 & J & R_2 \\
\end{bmatrix} (k\geq2)
$ with spectrum $\{{{3\pm2\sqrt{1+6k}}, 2^{k}, -1^{2k+6}}\},$ \vskip 0.20 cm
(v). $\begin{bmatrix}
J-I_a & J & J \\
J & J-I_b & 0 \\
J & 0 & R_{2} \\
\end{bmatrix}$ where $(a,b)=(2,9), (3,6)$ and $(6,5)$,
with the corresponding spectra $\{{4\pm\sqrt{37}},2,-1^{10}\}$,$\{3\pm2\sqrt{7},2,-1^{8}\}$, $\{4\pm3\sqrt{5},2,-1^{10}\},$ \vskip 0.20 cm
(vi). $\begin{bmatrix}
J-I_a & J & J \\
J & J-I_b & 0 \\
J & 0 & 0 \\
\end{bmatrix}
$
where $(a,b)=(7,45),(8,27),(9,21),(10,18),
(12,15),$\vskip 0.25 cm $(15,13)$, $(18,12),(24,11)$ and $(42,10),$ with the corresponding spectra $$\{{24\pm\sqrt{730}},2,-1^{50}\},\{\frac{31\pm9\sqrt{17}}{2},2,-1^{33}\},\{13\pm\sqrt{259},2,-1^{28}\},$$
$$\{{12\pm\sqrt{229}},2,-1^{26}\},\{\frac{23\pm\sqrt{865}}{2},2,-1^{25}\},\{12\pm3\sqrt{26},2,-1^{26}\},$$
$$\{{13\pm2\sqrt{67}},2,-1^{28}\}, \{\frac{31\pm\sqrt{1441}}{2},2,-1^{33}\},\{24\pm3\sqrt{85},2,-1^{50}\},$$
(vii). $\begin{bmatrix}
J-I_a & J& 0 \\
J & 0 & J-S^{T}_{2m} \\
0 & J-S_{2m} & R_{2m} \\
\end{bmatrix}$
where $(a,m)=(4,4)$ and $(6,3),$ \\ with corresponding spectra $\{7,-5,2^4,-1^{10}\}$ and $\{2\pm \sqrt{33},2^3,-1^{10}\}$.
(viii). $\begin{bmatrix}
J-I_a & J & 0 \\
J & R_{2k} & J-S_{2k} \\
0 & J-S^{T}_{2k} & 0 \\
\end{bmatrix}$
where $(a,k)=(4,10),(5,7), (6,6)$ and $(9,5),$ \\ with the corresponding spectra
$\{1\pm2\sqrt{61},2^{10},-1^{22}\}$, $\{\frac{3\pm3\sqrt{65}}{2},2^{7},-1^{17}\},$\\ $\{2\pm\sqrt{129},2^{6},-1^{16}\}$ and $\{\frac{7\pm\sqrt{561}}{2},2^{5},-1^{17}\},$ \vskip 0.20 cm
(ix). $\begin{bmatrix}
J-I_a & J & 0 & 0 \\
J & R_{2k} & J-S_{2k} & 0 \\
0 & J-S^{T}_{2k} & 0 & J \\
0 & 0 & J & 0 \\
\end{bmatrix}
$ where $(a,k)=(3,4)$ and $(5,3)$\\
with spectra $\{1\pm3\sqrt{5},2^{4},-1^{10}\}$ and $\{2\pm\sqrt{43},2^{3},-1^{10}\},$ \vskip 0.20 cm
\end{thm}
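The cases of Theorem \ref{one} can be cross-checked numerically. As an illustration (a sketch with sample parameters of our choosing), the following builds the type (i) matrix $\left[\begin{smallmatrix} J-I_a & J\\ J & T_{3k}\end{smallmatrix}\right]$ and compares its spectrum with the stated one:

```python
import numpy as np

def type_i(a, k):
    """Adjacency matrix [[J - I_a, J], [J, T_{3k}]] of Theorem 1, case (i)."""
    n = a + 3 * k
    A = np.ones((n, n), dtype=int) - np.eye(n, dtype=int)   # complete graph
    for i in range(k):                   # delete edges between distinct triangles
        for j in range(k):
            if i != j:
                A[a + 3*i:a + 3*i + 3, a + 3*j:a + 3*j + 3] = 0
    return A

a, k = 4, 3                              # sample parameters (assumption)
ev = np.sort(np.linalg.eigvalsh(type_i(a, k)))
disc = np.sqrt((a - 3)**2 + 12 * a * k)
r1, s1 = (a + 1 + disc) / 2, (a + 1 - disc) / 2
expected = np.sort([s1] + [-1.0] * (2*k + a - 1) + [2.0] * (k - 1) + [r1])
assert np.allclose(ev, expected, atol=1e-8)
```

For $(a,k)=(4,3)$ this confirms the multiplicities $2^{k-1}$ and $-1^{2k+a-1}$ together with the two roots of $x^2-(a+1)x+(2a-2-3ak)$.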
From Theorem \ref{one}, we see that $\mathcal{F}$ contains four infinite families and twenty sporadic graphs. From the given
spectra it follows straightforwardly that
\begin{cor}\label{two} No two graphs in $ \mathcal{F}$ are cospectral.
\end{cor}
Given any two graphs $G$ and $H,$ let $G\cup H$ be the disjoint union of $G$ and $H,$ and $mG$ be the disjoint union of $m$ copies of $G.$
\begin{thm}\label{four} Suppose $G$ and $G'$ are nonisomorphic cospectral graphs with at most two eigenvalues different from $2$ and $-1$. Then $G=H \cup \beta K_{3}$ and $G'=H'\cup \beta' K_{3}$, where $H$ and $H'$ are
one of the following pairs of graphs in $\mathcal{F}:$

$(1). $ $H$ is of type (i) with $a=5$ and $k\geq2$, and $H'$ is of type (iv) with $k'\geq2$, where $5k=1+8k'$.

$(2). $ $H$ is of type (i) with $a=3$ and $k\geq2$, and $H'$ is of type (ii) with $k',\ell'\geq2$, where $k=k'\ell'$.

$(3). $ $H$ is of type (i) with $(a,k)=(1,81)$, and $H'$ is of type (viii) with $(a',k')=(4,10)$.

$(4). $ Both $H$ and $H'$ are of type (ii), with parameters $(k,\ell)$ and $(k',\ell')$ such that $k\ell = k'\ell'$.
\end{thm}
\noindent{\bf Proof. } The disjoint union of complete graphs is determined by its spectrum (see \cite{b2}). By Lemma \ref{nonsigular} (i), $G$ and $G'$ must have the described form. Observing that $H$ and $H'$ have the eigenvalues $r > 2$ and $s < -1$, we easily find the given possibilities for $H$ and $H'$.
$ \hfill \Box$ \vspace*{0.2cm}
If we take $\beta = 0$, we can find the graphs in $\mathcal{F}$ having a non-isomorphic cospectral mate by Theorem \ref{four}. Hence, we have
\begin{cor}\label{fi} A graph $G\in\mathcal{F}$ is determined by its spectrum, unless $G$ is one of the following
$\diamond$ $G$ is of type (i) and $(a, k) = (1, 81)$.
$\diamond$ $G$ is of type (i) with $a = 3$ and $k$ a composite number.
$\diamond$ $G$ is of type (i) with $a=5, k \equiv 5 \mod 8.$
$\diamond$ $G$ is of type (ii) and $k\ell$ has a divisor $d$ such that $\ell < d <k.$
\end{cor}
By Corollary \ref{fi}, the generalized friendship graph $F_{t,r,k}$ with $t-r=3$ is determined by its spectrum, except when $r=1$ and $k = 81$; or $r=3$ and $k$ is a composite number; or $r=5$ and $k \equiv 5 \mod 8.$
\section{The proof of Theorem \ref{one}}
In all cases in Theorem \ref{one}, we see that the corresponding quotient matrix has two eigenvalues different from $2$ and $-1$, and with Lemma \ref{odd} it straightforwardly follows that the remaining eigenvalues of the graph are all equal to $2$ and $-1$. So all graphs of Theorem \ref{one} are in ${\mathcal{F} }$.
\par We choose $C$ to be a clique of maximum size in $G\in \mathcal{F}$. Since $G$ is not bipartite it contains an odd cycle, and by Lemma \ref{main} (graphs $G_1$ and $G_2$) $G$ contains no induced odd cycles of length five or more; therefore $|C| \geq 3.$ If there is more than one clique of maximum size, we choose one for which the number of outgoing edges is minimal. The following lemmas and proposition are the key to our approach.
\begin{lem}\label{eight}
The vertex set of $C$ can be partitioned into two nonempty subsets $X$ and $Y$, such that the neighborhood of any vertex outside $C$ intersects $C$ in $X$, $Y$, or $\emptyset$.
\end{lem}
\noindent{\bf Proof. } The proof is analogous to the method in \cite{b5}. If $|C| = n - 1,$ the result is obvious. So assume $3 \leq |C| \leq n - 2$. Take vertices $x$ and $y$ outside $C$, and let $X$ and $Y$ consist of the neighbors of $x$ and $y$ in $C$, respectively.
Note that $X$ and $Y$ are proper subsets of $C$, since otherwise $C$ would not be maximal. Suppose that $X \cap Y \neq\emptyset$ but $X \nsubseteq Y$. Then there exist vertices $u \in X \cap Y$ and $v \in X\backslash Y$. Let $w$ be a vertex in $C \backslash X$. Then the subgraph induced by $\{u, v, w, x, y\}$ is a forbidden subgraph $G_3$, $G_4,$ or $G_5$. Therefore, if $X$ and $Y$ are not disjoint, then $X \subseteq Y$, and analogously $Y \subseteq X$. Thus $X \cap Y \neq \emptyset$ implies $X = Y.$ If $X \cap Y = \emptyset$ and there exist vertices $u \in X$, $v \in Y$, and $z \in C \backslash (X \cup Y)$, then $\{z, u, v, x, y\}$ induces a forbidden subgraph $G_6$ or $G_7$. This implies that if $X$ and $Y$ are disjoint and both nonempty, then $X \cup Y = C$.
$ \hfill \Box$ \vspace*{0.2cm}
\begin{lem}\label{nine}
If we take two vertices $x$ and $y$ with $x\nsim y$
and consider the corresponding $2\times2$ principal submatrix $S$ of $A^{2}-A-2I$,
$$S=\left[
\begin{array}{cc}
d_{x}-2 & d_{xy} \\
d_{xy} & d_{y}-2\\
\end{array}
\right],
$$
then $S$ is positive semi-definite and $\det S=(d_x-2)(d_y-2)-d_{xy}^{2}\geq0$.
\end{lem}
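Lemma \ref{nine} lends itself to a quick numerical sanity check. The sketch below (plain Python; the choice of test graph, the generalized friendship graph $F_{4,1,2}$ formed by two copies of $K_4$ sharing one vertex, is an illustrative assumption) builds the adjacency matrix, forms the $2\times2$ principal submatrix of $A^{2}-A-2I$ for a non-adjacent pair, and confirms that its entries are $d_x-2$, $d_y-2$ and the number of common neighbours, with nonnegative determinant.

```python
# Sanity check of Lemma 9 on a concrete member of F: the generalized
# friendship graph F_{4,1,2}, i.e. two copies of K_4 sharing one vertex
# (an illustrative assumption).  The entry formulas of the submatrix
# (d_x - 2 on the diagonal, the common-neighbour count d_{xy} off it)
# hold for any graph, since (A^2)_{xy} counts walks of length two.

def two_k4_sharing_a_vertex():
    """Adjacency matrix: vertex 0 is shared, 1-3 and 4-6 complete each K_4."""
    n = 7
    A = [[0] * n for _ in range(n)]
    for clique in ([0, 1, 2, 3], [0, 4, 5, 6]):
        for i in clique:
            for j in clique:
                if i != j:
                    A[i][j] = 1
    return A

def submatrix_S(A, x, y):
    """2x2 principal submatrix of A^2 - A - 2I on rows/columns {x, y}."""
    n = len(A)
    def m(i, j):
        walks2 = sum(A[i][k] * A[k][j] for k in range(n))
        return walks2 - A[i][j] - 2 * (i == j)
    return [[m(x, x), m(x, y)], [m(y, x), m(y, y)]]

A = two_k4_sharing_a_vertex()
deg = [sum(row) for row in A]
# Vertices 1 and 4 lie in different K_4's, so they are non-adjacent.
S = submatrix_S(A, 1, 4)
common = sum(A[1][k] * A[4][k] for k in range(7))
detS = S[0][0] * S[1][1] - S[0][1] * S[1][0]
```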
\par Let $\Gamma X$ and $\Gamma Y$ denote the sets of vertices outside $C$ adjacent to $X$ and $Y$, respectively. The set of vertices not adjacent to any vertex of $C$ will be denoted by $\Omega.$ Some of these sets may be empty, but clearly $\Gamma X$ or $\Gamma Y$ is nonempty (otherwise $G$ would be disconnected or complete). We may assume $\Gamma X \neq\emptyset$ and distinguish three cases: (1) both $\Gamma Y$ and $\Omega$ are empty; (2) only $\Omega$ is empty; (3) $\Omega$ is nonempty. For convenience we define $a = |X|$, $b = |Y|$, and $c = |C| = a + b$. Let $G[Z]$ denote the subgraph induced by $Z$.
\begin{prop} \label{there} Let $G$ be a graph with $|X|=a$ and $|Y|=b$, and let $G[\Gamma X]$ and $G[\Gamma Y]$ denote the subgraphs induced by $\Gamma X$ and $\Gamma Y$, respectively. Then
(i). If $b=1$ (resp., $ a=1$), then $G[\Gamma X]=lK_1$ (resp., $ G[\Gamma Y]=lK_1)$ ;
(ii). If $b=2$ (resp., $a=2)$, then $G[\Gamma X]=lK_1\cup kK_2$ (resp., $G[\Gamma Y]=lK_1\cup kK_2)$;
(iii). If $b=3$ (resp., $a=3)$, then $G[\Gamma X]=lK_1\cup kK_2\cup mK_3$ (resp., $G[\Gamma Y]=lK_1\cup kK_2\cup mK_3)$;
(iv). If $b\geq 4$ (resp., $a\geq 4)$, then $G[\Gamma X]=lK_1\cup kK_2$ (resp., $G[\Gamma Y]=lK_1\cup kK_2)$.
\end{prop}
\noindent{\bf Proof. } (i). If $ b = 1$, then $ \Gamma X $ contains no edges, otherwise $ C $ would not be maximal.
(ii). If $b=2$, choose $ u \in X $ and suppose $x \in \Gamma X$ has two neighbors $p$ and $q$ in $ \Gamma X$. If $p \nsim q$, then $\{ u, x, p, q, y\} $ (with $y\in Y$) induces the forbidden subgraph $G_3$ of Fig.~2; otherwise interchanging $\{x,p,q\}$ with $Y$ would give a larger clique. Therefore each vertex $x \in \Gamma X $ has at most one neighbor in $ \Gamma X $, and $ G[\Gamma X]= lK_1\cup kK_2.$
(iii). If $b=3$, choose $ u \in X $ and suppose $x \in \Gamma X$ has three neighbors $v$, $p$ and $q$ in $ \Gamma X$. If some pair of them, say $p$ and $q$, satisfies $p\nsim q$, then $\{ u, x, p, q, y\}$ (with $y\in Y$) induces the forbidden subgraph $G_3$; otherwise $v\sim p$, $v\sim q$, $p\sim q$, and interchanging $\{x,v,p,q\}$ with $Y$ would give a larger clique. Thus every vertex of $ \Gamma X $ has at most two neighbors in $ \Gamma X $, so each component of $G[\Gamma X]$ is a path or a cycle. A path on three vertices or a cycle of length four or more would induce the forbidden subgraph $G_3$, so every component is an isolated vertex, an edge, or a triangle, and $ G[\Gamma X]=lK_1\cup kK_2\cup mK_3.$
(iv). If $b\geq 4$, let $y, z, v, w$ be four distinct vertices in $Y$, take a vertex $ u \in X $, and suppose $x \in \Gamma X$ has two neighbors $p$ and $q$ in $ \Gamma X$. If $p\nsim q$, then $\{u, y, x, p, q \}$ induces the forbidden subgraph $G_3$; otherwise $\{u, y, z, v, w, x, p, q\}$ induces the forbidden subgraph $G_8$. Thus each vertex $x \in \Gamma X $ has at most one neighbor in $ \Gamma X $, and $ G[\Gamma X]= lK_1\cup kK_2.$
$ \hfill \Box$ \vspace*{0.2cm}
\subsection {$\Gamma Y$ and $\Omega$ are empty}
\par Assume first that $1\leq b\leq3$; then $G[\Gamma X]=lK_1\cup kK_2\cup mK_3$ by Proposition \ref{there}. If $x \in \Gamma X $ and $y\in Y$, then $d_{xy}=a$, $a\leq d_{x}\leq a+2$, $a\leq d_{y}\leq a+2$, and $\det S=(d_{x}-2)(d_{y}-2)-a^{2}\leq 0$. By Lemma \ref{nine}, $\det S=0$, thus $d_{x} = d_{y} = a+2$. Therefore $ G[\Gamma X]=mK_3$ and $b=3$. Let $Y'=Y\cup \Gamma X=m'K_3$ with $m'\geq 2$, since $Y$ and $\Gamma X$ are nonempty. We can write $A$ as:
$$ {\begin{matrix}
A=\begin{bmatrix}
J-I_a & J \\
J & T_{3m'}\\
\end{bmatrix}
\end{matrix}}
$$
where $3m'=|\Gamma X|+3,$ which leads to Case (i).
Now assume that $b\geq 4$; then $G[\Gamma X]=lK_1\cup kK_2$ by Proposition \ref{there}. By Lemma \ref{nonsigular} (ii), it is impossible that one vertex of $\Gamma X$ has a neighbor in $\Gamma X$ while another has none. We conclude that $G[ \Gamma X]=lK_{1} $ or $G[ \Gamma X]=kK_{2}$.
Case (1): $G[\Gamma X]=lK_{1}.$ If $l \geq 2$, then there are at least two vertices with the same neighbors, which contradicts Lemma \ref{nonsigular} (ii). So $ l=1$ and we find
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & J \\
J & J-I_b & 0 \\
J & 0 & 0 \\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & b & 1 \\
a & b-1 & 0\\
a & 0 & 0\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=a-ab-x+2ax+bx-2x^2+ax^2+bx^2-x^3$ shows that $Q$ has no eigenvalue $-1$ and has an eigenvalue $2$ if and only if $(a, b) = (7, 45)$, $(8, 27)$, $(9, 21)$, $(10, 18)$, $(12, 15)$, $(15, 13)$, $(18, 12)$, $(24, 11)$, or $(42, 10)$, which leads to Case (vi).
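These pairs can be double-checked by brute force: substituting $x=2$ into $P_Q$ and enumerating positive integer solutions, while $P_Q(-1)=-ab$ never vanishes. A quick sketch in Python (the search bound $100$ is an arbitrary assumption; all solutions satisfy $a,b\leq 45$):

```python
# Brute-force check of the pairs (a, b) for which the quotient matrix Q has
# eigenvalue 2 (P_Q(2) = 0); the bound 100 is an illustrative assumption.
# P_Q(-1) simplifies to -a*b, so Q never has eigenvalue -1.

def P_Q(a, b, x):
    return a - a*b - x + 2*a*x + b*x - 2*x**2 + a*x**2 + b*x**2 - x**3

pairs = [(a, b) for a in range(1, 100) for b in range(1, 100)
         if P_Q(a, b, 2) == 0]
```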
\par Case (2): $G[ \Gamma X]=kK_{2}.$ If $k \geq 2$, then $G$ has eigenvalue $1$, which contradicts Proposition \ref{p:wAPPROX}; thus $k=1$. $G$ has the following $A$ and $Q$:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & J \\
J & J-I_b & 0 \\
J & 0 & R_{2}\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & b & 2 \\
a& b-1 & 0 \\
a& 0 & 1 \\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=1+a-b-2ab+x+2ax-x^2+ax^2+bx^2-x^3$ shows that $Q$ has no eigenvalue $-1$ and has an eigenvalue $2$ if and only if $(a, b) = (2, 9)$, $(3, 6)$, or $(6, 5)$, which leads to Case (v).
\subsection{$\Gamma X$ and $\Gamma Y$ are nonempty, and $ \Omega $ is empty}
\subsubsection{Claim : $a \leq 3$ or $ b\leq 3$.}
\noindent{\bf Proof. } Suppose $ a \geq b \geq 4 $. By Proposition \ref{there}, we have $G[\Gamma X]=lK_1\cup kK_2$. By Lemma \ref{nonsigular} (ii) and the forbidden graphs $G_{20},G_{29},G_{30}$, it is impossible that one vertex of $\Gamma X$ has a neighbor in $\Gamma X$ while another has none. We conclude that $G[ \Gamma X]=kK_{2} $ or $G[ \Gamma X]=lK_{1}$. Forbidden graph $G_{28}$ implies that $k=1$. Similarly, we conclude that $G[ \Gamma Y]=K_{2} $ or $G[ \Gamma Y]=l'K_{1} $.
\par Case (1): $G[ \Gamma X]=K_{2}$, $G[ \Gamma Y]=K_{2} $.\\
Forbidden graph $ G_{20} $ implies that every vertex in $ \Gamma X $ is adjacent to all vertices in $ \Gamma Y $. We
find the following $ A $ and $ Q $:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & J & 0 \\
J & J-I_b & 0 & J \\
J & 0 & R_{2} & J\\
0 & J & J & R_2\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & b & 2 & 0 \\
a& b-1 & 0 & 2\\
a & 0 & 1 & 2 \\
0 & b & 2 & 1\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=-3+5a+5b-8ab-8x+5ax+5bx+4abx-6x^2-ax^2-bx^2-ax^3-bx^3+x^4$ shows that $Q$ has no eigenvalue $-1$ and has eigenvalue $2$ with multiplicity $1$ if and only if $(a, b) = (5, 4)$, but none of the other $ 3 $ eigenvalues is equal to $ 2 $ or $ -1 $. Thus the corresponding graphs are not in $\mathcal{F}$.
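This count can be double-checked directly (plain Python; the bound $100$ and the restriction $a\geq b\geq 4$ taken from the surrounding case analysis are assumptions): substituting $x=2$ collapses the quartic to $P_Q(2)=3(a+b-9)$, and at $(a,b)=(5,4)$ synthetic division by $x-2$ leaves the cubic $x^3-7x^2-29x+59$, which has neither $2$ nor $-1$ as a root.

```python
# Check of the quartic P_Q of the 4x4 quotient matrix.  Bounds and the
# constraint a >= b >= 4 are illustrative assumptions from the case analysis.

def P(a, b, x):
    return (-3 + 5*a + 5*b - 8*a*b - 8*x + 5*a*x + 5*b*x + 4*a*b*x
            - 6*x**2 - a*x**2 - b*x**2 - a*x**3 - b*x**3 + x**4)

# P(a, b, 2) simplifies to 3*(a + b - 9), so under a >= b >= 4 the only
# solution of P(a, b, 2) = 0 is (a, b) = (5, 4).
sols = [(a, b) for a in range(4, 100) for b in range(4, a + 1)
        if P(a, b, 2) == 0]

# Synthetic division of the quartic at (a, b) = (5, 4),
# x^4 - 9x^3 - 15x^2 + 117x - 118, by (x - 2).
a, b = 5, 4
coeffs = [1, -(a + b), -(6 + a + b), -8 + 5*a + 5*b + 4*a*b,
          -3 + 5*a + 5*b - 8*a*b]
quot, carry = [], 0
for c in coeffs[:-1]:
    carry = c + 2 * carry
    quot.append(carry)
remainder = coeffs[-1] + 2 * carry

def cubic(x):
    return quot[0]*x**3 + quot[1]*x**2 + quot[2]*x + quot[3]
```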
Case (2): $G[ \Gamma X]=K_{2} $, $ G[\Gamma Y]=l'K_{1} $ .
Forbidden graph $ G_{29} $ implies that every vertex in $ \Gamma Y $ is adjacent to all vertices in $ \Gamma X $. If $l' \geq 2,$ then there are at least two vertices with the same neighbors, which contradicts Lemma \ref{nonsigular} (ii). So $l'= 1$, and we find the following $ A $ and $ Q $:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & J & 0 \\
J & J-I_b & 0 & J \\
J & 0 & R_{2} & J\\
0 & J & J & 0\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & b & 2 & 0 \\
a& b-1 & 0 & 1\\
a & 0 & 1 & 1 \\
0 & b & 2 & 0\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=-2+2a+3b-3ab-5x+ax+3bx+3abx-3x^2-2ax^2-bx^2+x^3-ax^3-bx^3+x^4$ shows that $Q$ has no eigenvalue $-1$ and has eigenvalue $2$ with multiplicity $1$ if and only if $(a, b) = (5, 5)$, but none of the other $ 3 $ eigenvalues is equal to $ 2 $ or $ -1 $. Thus the corresponding graphs are not in $ \mathcal{F} $.
Case (3): $ G[\Gamma X]=lK_{1} $, $ G[\Gamma Y]=l'K_{1} $.\\
Now forbidden subgraph $ G_{30}$ implies that a vertex in $ \Gamma X $ is adjacent to all, all but one, or all but two vertices in $ \Gamma Y $ (and vice versa).
Let $x$ be a vertex in $\Gamma X$ and suppose $x$ is adjacent to all vertices of $\Gamma Y$; if $y$ is
another vertex in $\Gamma X$, then by Lemma \ref{nonsigular} (ii), $y$ has fewer than $|\Gamma Y | - 4 $ neighbors in $ \Gamma Y $, a contradiction. Similarly, if $ |\Gamma Y | \geq 2 $, then each vertex in $ \Gamma Y $ is adjacent to all but one vertex of $ \Gamma X $. This implies that the subgraph induced by $ \Gamma X \cup \Gamma Y $ is $ K_2 $ or a complete bipartite graph with the edges of a perfect matching deleted, and by Lemma \ref{nonsigular} (ii), $l=l'$. If $l=l'\geq2$, take two non-adjacent vertices $x'\in \Gamma X$ and $y'\in \Gamma Y$; then $d_{x'}=d_{x'y'}+1$, $d_{y'}=d_{x'y'}+1$, and $\det S=(d_{x'}-2)(d_{y'}-2)-d_{x'y'}^{2}<0$, which contradicts Lemma \ref{nine}; therefore the corresponding graphs are not in $ \mathcal{F} $. For $l=l'=1$ we find that $G$ has the following $ A $ and $ Q $:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & J & 0 \\
J & J-I_b & 0 & J \\
J & 0 & 0 & 1\\
0 & J & 1 & 0\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & b & 1 & 0 \\
a& b-1 & 0 & 1\\
a & 0 & 0 & 1 \\
0 & b & 1 & 0\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=-1+a+b-ab-2x+2abx-2ax^2-2bx^2+2x^3-ax^3-bx^3+x^4$ shows that $Q$ has no eigenvalue $-1$ and has eigenvalue $2$ with multiplicity $1$ if and only if $(a, b) = (9,9),(13,7),(21,6)$, but none of the other $ 3 $ eigenvalues is equal to $ 2 $ or $ -1 $. Thus the corresponding graphs are not in $ \mathcal{F} $. $ \hfill \Box$ \vspace*{0.2cm}
\subsubsection{Claim : $a \geq b=3.$}
\noindent{\bf Proof. } First assume $a > b = 1$. By Proposition \ref{there}, we have $G[\Gamma X]=lK_1$. If $y \in Y $ and $ x \in \Gamma X$, then $ x $ is adjacent to all vertices in $ \Gamma Y $, since otherwise interchanging $ x $ and $ y $ would give another maximal clique of size $c$ with fewer outgoing edges. This implies that $x$ and $ y$ have the same neighbors, which is a contradiction.
\par Next assume $a \geq b = 2 $; by Proposition \ref{there}, we have $G[\Gamma X]=lK_1\cup kK_2$.
Suppose $G[\Gamma X]$ contains a $K_2$; then every vertex in $ \Gamma Y $ is adjacent to the two vertices of a $K_2$ in $ \Gamma X $, since otherwise interchanging two vertices of a $K_{2}$ in $G[\Gamma X]$ with $ Y $ would give another maximal clique of size $c$ with fewer outgoing edges. Choose a vertex $x$ of a $K_2$ in $\Gamma X$ and a vertex $y$ of $Y$; then $d_{x}=d_{y}= d_{xy}+1$ and $\det S=(d_{x}-2)(d_{y}-2)-d_{xy}^{2} <0$, which contradicts Lemma \ref{nine}. Thus $G[\Gamma X]=lK_1.$
Choose an isolated vertex $x$ of $\Gamma X$; for any vertex $y\in Y$ we have $d_{x}=d_{xy}$, and by Lemma \ref{nonsigular} (ii), $d_{y}\geq d_{x}+5$. If $a=2$, then $G[\Gamma Y]=k'K_2\cup l'K_1 $ by Proposition \ref{there}, and by the same argument as above we obtain $G[\Gamma Y]=l'K_1.$ Forbidden subgraph $G_{12}$ shows that $d_{y}< d_{x}+5$, or we can find two vertices $p,q\in \Gamma Y$ with $d_p-d_q<5$, which is a contradiction. If $a\geq3$, then we have $G[\Gamma Y]=l'K_1\cup k'K_2 \cup m'K_3$ by Proposition \ref{there}. Forbidden subgraphs $G_{12},$ $G_{24}$, $G_{25}$, $G_{32}$ show that $d_{y}< d_{x}+5$, or we can find two vertices $p,q\in \Gamma Y$ with $p\nsim q$ and $d_p-d_q<5$, which is a contradiction. $ \hfill \Box$ \vspace*{0.2cm}
We thus have $a \geq b = 3 $, and $G[\Gamma X]=mK_3\cup kK_2\cup lK_1$ by Proposition \ref{there}.
Suppose $G[\Gamma X]$ contains a $K_2$, and choose a vertex $x$ of this $K_2$; for any vertex $y\in Y$ we have $d_{x}= d_{xy}+1$. By Lemma \ref{nine}, $\det S=(d_x-2)(d_y-2)-d_{xy}^{2}\geq 0$, so $d_{y}\geq d_{xy}+4$. Forbidden subgraphs $G_{18}$, $G_{20}$ show that $d_{y}< d_{x}+4$, which is a contradiction.
Suppose $G[\Gamma X]$ contains an isolated vertex $x$; for any vertex $y\in Y$ we have $d_{x}=d_{xy}$, and by Lemma \ref{nonsigular} (ii), $d_{y}\geq d_{x}+5$. But forbidden subgraphs $G_{12}, G_{17}, G_{18}, G_{32}$ show that $d_{y}< d_{x}+5$, or we can find two vertices $p,q\in \Gamma Y$ with $p\nsim q$ and $d_p-d_q\leq 4$, a contradiction.
Thus $ G[\Gamma X]=mK_3,$ and every vertex in $ \Gamma X $ is adjacent to all vertices of $ \Gamma Y $, since otherwise interchanging the three vertices of a $K_{3}$ in $G[\Gamma X]$ with $ Y $ would give another maximal clique of size $c$ with fewer outgoing edges. By Lemma \ref{nonsigular} (ii), it is impossible that one vertex of $\Gamma Y$ has no neighbor in $\Gamma Y$ while another has one or two neighbors in $\Gamma Y$. If $G[\Gamma Y]= m'K_3\cup k'K_2$, then $a=3$, for otherwise $a\geq4$, which is impossible by forbidden subgraph $G_{8}$; in that case, by the same argument as above, $ G[\Gamma Y ]=m'K_{3}$. Thus $G[\Gamma Y ]=l'K_{1}$, $G[\Gamma Y ]=k'K_{2}$, or $ G[\Gamma Y ]=m'K_{3}$. Let $Y' = Y\cup \Gamma X=m''K_3$; then $m''\geq 2$, where $3m'' = |\Gamma X| + 3$, since $Y$ and $\Gamma X$ are nonempty.
Case (1): $ G[\Gamma Y ]=l'K_{1}$. If $l'\geq 2$, then at least two vertices have the same neighbors, a contradiction. So $l'= 1$ and we find that $G$ has the following $ A $ and $ Q $:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & 0 \\
J & T_{3m''} & J\\
0 & J & 0\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & 3m'' & 0 \\
a& 2 & 1\\
0 & 3m'' & 0\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.$$
Computing $\det(Q+I)$ and $\det(Q-2I)$ shows that $Q$ has neither $-1$ nor $2$ as an eigenvalue. Therefore the corresponding graphs are not in $ \mathcal{F} $.
\par Case (2): $ G[\Gamma Y ]=k'K_{2}$, $G$ has the following $ A $ and $ Q $:$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & 0 \\
J & T_{3m''} & J\\
0 & J & R_{2k'}\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & 3m'' & 0 \\
a& 2 & 2k'\\
0 & 3m'' & 1\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.$$
Computing $\det(Q+I)$ and $\det(Q-2I)$ shows that $Q$ has no eigenvalue $-1$ and has an eigenvalue $2$ only for $(a,k')=(6,1)$ or $(4,2)$; but for $(a,k')=(4,2)$, $G$ has eigenvalue $1$, a contradiction. Thus $(a,k')=(6,1),$ which leads to Case (iv).
\par Case (3): $ G[\Gamma Y ]=m'K_{3}$, $a=3$. Let $X'=X\cup \Gamma Y =lK_3$; then $l\geq 2$, since $X$ and $\Gamma Y$ are nonempty. Thus $G$ has the following $ A $:
$$ A = \begin{bmatrix}
R_{3m''} & J \\
J & R_{3l} \\
\end{bmatrix} $$
with $m'',l\geq2$, where $3m'' = |\Gamma X| + 3$ and $ 3l = | \Gamma Y | + 3$, which leads to Case (ii).
\subsection{ $ \Omega $ is nonempty }
Since $ G $ is connected, there exists an edge $xz$ with $z \in \Omega $ and $x \in \Gamma X$ or $x \in \Gamma Y.$ Assume
$ x \in \Gamma X$, take $u \in X$, and let $y$ be a neighbor of $z$ different from $x$. If $y \in \Gamma Y$, then the
neighbor $v \in Y$ of $y$ together with $u,$ $x,$ $y,$ and $z$ induces a forbidden subgraph $G_1$ or $G_6$. Thus $y \notin \Gamma Y$, which means $y\in \Gamma X \cup \Omega$. Similarly, if $x\in \Gamma Y$, then $y\in \Gamma Y\cup \Omega$. Without loss of generality, we assume that $\Gamma X$ and $\Omega$ are nonempty.
\subsubsection{Claim : $a> b= 1$ or $a\geq b= 2$.}
\noindent{\bf Proof. } Assume $ a\geq b\geq 3$; it follows that $G[\Gamma X]= mK_3\cup kK_2\cup lK_1$ by Proposition \ref{there}. Forbidden subgraphs $G_{10},G_{19},G_{20}$ and Lemma \ref{nonsigular} imply that at most one vertex in $\Omega$ is adjacent to all vertices in $ \Gamma X $. Similarly, at most one vertex in $\Omega$ is adjacent to all vertices in $ \Gamma Y $. Suppose $z\in \Omega$; then by Lemma \ref{nonsigular} (i), $z$ has at least two neighbors in $\Gamma X$, so we can find two vertices $x,y\in \Gamma X$ with $x\sim z$ and $y\sim z$. Forbidden subgraphs $ G_{21}, G_{27}, G_{32}$ imply that every vertex in $ \Gamma X $ adjacent to a vertex of $\Omega$ has no neighbor in $ \Gamma X $; thus $x\nsim y$. If $G[\Gamma Y]=\emptyset$, then $d_x=d_y=d_{xy}$; if $G[\Gamma Y]=l'K_1\cup k'K_2\cup m'K_3$, then forbidden subgraph $G_{22}$ implies that $x$ and $y$ are adjacent to all vertices of the $K_2$'s and $K_3$'s in $ \Gamma Y $, forbidden subgraph $G_{11}$ implies that each isolated vertex in $\Gamma Y$ is adjacent to all, or all but one, of the vertices in $ \Gamma X $ adjacent to $z$, and forbidden subgraph $G_{13}$ implies that a vertex in $\Gamma X$ adjacent to $z$ is adjacent to all, or all but one, of the isolated vertices in $ \Gamma Y$; thus $d_{xy}\leq d_x\leq d_{xy}+1$ and $d_{xy}\leq d_y\leq d_{xy}+1$, but $\det S\leq(d_{xy}-1)^{2}-d_{xy}^{2}<0$, which contradicts Lemma \ref{nine}. Thus the corresponding graphs are not in $ \mathcal{F} $ for $a\geq b\geq 3$. $ \hfill \Box$ \vspace*{0.2cm}
We have $a > b=1$ or $a \geq b =2$.
\par If $a > b=1$, then $G[\Gamma X] $ contains no edges, since otherwise $ C $ would not be maximal. Consider the set $Y' = Y\cup \Gamma X$; then $|Y'|\geq 2$, since $Y$ and $\Gamma X$ are nonempty. Moreover $Y'$ contains no edges, since otherwise $C$ would not be maximal. Let $Z$ be the set of vertices that are in neither $X$ nor $Y'$. Then $X$, $Y'$, and $Z$ give the following block structure of $A$:
$$A=\left[
\begin{array}{ccc} \begin{smallmatrix}
J-I_a & J & 0 \\
J & 0 & N \\
0 & N^{T} & M \\
\end{smallmatrix}\end{array}
\right].$$
Take three vertices $u \in X$, $x \in Y'$ and $y\in Y'$, and consider the corresponding $3 \times 3$ principal submatrix $T$ of $A^{2}-A-2I$; then
$$ {\begin{matrix}
T=\begin{bmatrix} \begin{smallmatrix}
d_u-2&a-2&a-2&\\
a-2&d_x-2&d_{xy}\\
a-2&d_{xy}&d_y-2\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
Let $T = (a-2)J + T'$, then
$$ {\begin{matrix}
T'=\begin{bmatrix}
d_u-a&0^{T}\\
0&T''\\
\end{bmatrix} ,& T'' = \begin{bmatrix}
d_x-a&d_{xy}-a+2\\
d_{xy}-a+2&d_y-a\\
\end{bmatrix}
\end{matrix}}.
$$
Note that $d_u> a$, $d_x\geq a$ and $d_y\geq a$. Without loss of generality, we assume $d_y\geq d_x$. If $T''$ were positive definite, then so would be $T'$ and $T$, contradicting rank $T \leq2$. Therefore $\det T''= (d_x-a)(d_y-a)-(d_{xy}-a+2)^{2} \leq0$, and by Lemma \ref{nine}, $\det S=(d_x-2)(d_y-2)-d^{2}_{xy}\geq 0$. If $d_{x} = d_{xy}+1,$ then there exists $z$ with $z\sim x$ but $z\nsim y$, and suitable neighbors of $y$ together with $x, y, z$ and any two vertices in $X$ induce the forbidden subgraph $ G_{13}$; thus $d_{xy}+1 \leq d_{y} \leq d_{xy}+ 3$, whence $\det S\leq(d_{xy}-1)(d_{xy}+1)-d^{2}_{xy}< 0$, a contradiction. If $d_{x} \geq d_{xy}+2,$ then $\det T''> 0$ unless $d_{x} = d_{y} = d_{xy}+ 2$. If $d_{x} = d_{xy},$ then any two vertices $u, v $ of $Y'$ other than $x$ satisfy $d_{u} = d_{v} = d_{uv}+ 2$. If $|X|\geq 3$, then by Lemma \ref{nonsigular}, $d_y\geq d_x+5$, which is impossible by forbidden subgraph $G_{31}$. If $|X|= a=2$, then $\det T''= (d_x-a)(d_y-a)-(d_{xy}-a+2)^{2} =(d_x-2)(d_y-2)-d_{xy}^{2}\leq0$, and by Lemma \ref{nine}, $\det S=(d_x-2)(d_y-2)-d^{2}_{xy}\geq 0$; therefore $\det T''=\det S=(d_x-2)(d_y-2)-d^{2}_{xy}= 0$. Since $d_{x} = d_{xy},$ either $d_{x} =3$ and $d_{y} =11$, which is impossible by forbidden subgraph $G_9$; or $d_{x} =6$ and $d_{y} =11$, which is impossible by forbidden subgraph $G_{16}$; or $d_{x} =4$ and $d_{y} =10$, which is impossible by forbidden subgraph $G_{16}$, or else $G$ has eigenvalue $1$, a contradiction. Therefore every pair of vertices of $Y'$ satisfies $d_{x} = d_{y} = d_{xy}+ 2$, and we find the following two possible structures for $N$:
$$N=\left[
\begin{array}{ccc}
J-S_{2k}^{T} &0&J \\
\end{array}
\right](k\geq2),~~ \text{or}~~ \\ N=\left[
\begin{array}{ccc}
S_{2m}^{T} &0 &J\\
\end{array}
\right](m\geq3). $$
Partition $Z=Z_1\cup Z_2\cup Z_3$ according to the structure of $N$, so that the vertices in $Z_2$ are adjacent to no vertex of $Y'$, and the vertices in $Z_3$ are adjacent to all vertices of $Y'$. Forbidden subgraph $G_{13}$ implies that $ G[Z_1]=mK_{2} $. Suppose $z\in Z_2$ is adjacent to a vertex of $Z_1$; then either we can find $u\in X$, $x, y\in Z_1$ and $m,n\in Y'$ with $m\sim x$, $n\sim y$ and $x\nsim y$, so that $\{u, m, n, x, y, z\}$ induces the forbidden subgraph $G_{2}$, or we can find $u\in X$, $x, y\in Z_1$ and $m\in Y'$ with $m\sim x$, $m\sim y$, $x\sim y$ and $x\sim z$, so that $\{u, m, x, y, z\}$ induces the forbidden subgraph $G_{7}$; thus the vertices in $Z_2$ are adjacent to all vertices of $Z_1$. Forbidden subgraph $G_{26}$ implies that $Z_2$ contains at most one vertex. Suppose a vertex $z\in Z_3$ is adjacent to a vertex $p\in Z_1$; then we can find $u\in X$ and $m,n\in Y'$ with $p\sim n$ and $p\nsim m$, so that $\{u, m, n, z, p\}$ induces the forbidden subgraph $G_{6}$; thus no vertex in $Z_3$ is adjacent to a vertex of $Z_1.$ Forbidden subgraph $G_{14}$ implies that every vertex in $Z_2$ is adjacent to all vertices of $Z_3.$ Forbidden subgraph $G_8$ implies that every vertex of $ Z_3 $ has at most two neighbors in $Z_3 $. We can find two vertices $x'\in Z_1$ and $y'\in Z_3$ with $x'\nsim y'$, $d_{x'}=d_{x'y'}+1$ and $d_{x'y'}+1\leq d_{y'}\leq d_{x'y'}+3$, so that $\det S=(d_{x'}-2)(d_{y'}-2)-d_{x'y'}^{2}<0$, which contradicts Lemma \ref{nine}; therefore $Z_3$ is empty. Hence $N=[J-S_{2k}^{T}\ 0]$ or $N=[S_{2k}^{T}\ 0]$. Forbidden subgraph $G_{15}$ and Lemma \ref{nonsigular} imply that the second structure for $N$ is impossible. We find two possibilities for $Z_2$: either $Z_2$ is empty, or $|Z_2|=1$.
Case (1): If $Z_2$ is empty, then $G[Y']=lK_{1}$ and $ G[Z_1]=mK_{2} $ with $l=m$, and
$G$ has the following adjacency matrix $A$ with quotient matrix $Q$:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & 0 \\
J & 0 & J-S_{2m}^{T} \\
0 & J-S_{2m} & R_{2m} \\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & m & 0 \\
a & 0 & 2m-2 \\
0& m-1 &1 \\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=2-2a-4m+3am+2m^2-2am^2+3x-ax-4mx+amx+2m^2x+ax^2-x^3$ shows that $Q$ has no eigenvalue $-1$ and has an eigenvalue $2$ if and only if $(a, m) = (6, 3)$ or $(4, 4)$, which leads to Case (vii).
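As before, the pairs can be verified by brute force: at $x=2$ the polynomial factors as $m(5a+6m-2am-12)$, and $P_Q(-1)=2am(1-m)$ is nonzero for $m\geq 2$. A sketch in Python (the search bounds and the constraint $m\geq 2$, which holds here since $Y$ and $\Gamma X$ are nonempty, are illustrative assumptions):

```python
# Brute-force check of the pairs (a, m) with P_Q(2) = 0 for this case.
# Bounds and the constraint m >= 2 are illustrative assumptions.

def P(a, m, x):
    return (2 - 2*a - 4*m + 3*a*m + 2*m**2 - 2*a*m**2 + 3*x - a*x - 4*m*x
            + a*m*x + 2*m**2*x + a*x**2 - x**3)

pairs = [(a, m) for a in range(1, 100) for m in range(2, 100)
         if P(a, m, 2) == 0]
```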
Case (2): If $|Z_2|=1$, then $G[Y']=lK_{1}$ and $G[Z_1]=mK_{2}$ with $l=m$, and $G$ has the following adjacency matrix $A$ with quotient matrix $Q$:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & 0& 0 \\
J & 0 & J-S^{T}_{2m} & 0 \\
0 & J-S_{2m} & R_{2m} &J \\
0& 0 &J & 0\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & m & 0 & 0 \\
a & 0 & 2m-2 & 0 \\
0 & m-1 &1 & 1 \\
0& 0& 2m& 0& \\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=(1+x)(2am^2-2x+2ax+2mx-amx-2m^2x-x^2-ax^2+x^3)$ shows that $Q$ has an eigenvalue $-1$ and has an eigenvalue $2$ if and only if $a = 2$; we can then rewrite $A$ as $$A=\left[
\begin{array}{cc}
R_{2m} & S-J_{2m} \\
J-S_{2m}^{T} & 0 \\
\end{array}
\right]
$$
with $m\geq 3$, which leads to Case (iii).
If $a\geq b=2,$ then $G[\Gamma X]= kK_2\cup lK_1$ by Proposition \ref{there}. Forbidden subgraphs $G_{10},G_{19},G_{20}$ and Lemma \ref{nonsigular} imply that at most one vertex in $\Omega$ is adjacent to all vertices in $ \Gamma X $. Similarly, at most one vertex in $\Omega$ is adjacent to all vertices in $ \Gamma Y $.
\par Suppose $G[\Gamma X]$ contains an isolated vertex $x$, and suppose first that some $z\in \Omega$ is adjacent to $x$. Choose any vertex $y\in Y$; then $ d_{x}= d_{xy}+1$. If $G[\Gamma Y]= \emptyset$, then $d_y=d_{xy}+1$; if $G[\Gamma Y]= l'K_1\cup k'K_2 \cup m'K_3$, then forbidden subgraph $G_{13}$ implies that $d_{xy}+1\leq d_{y}\leq d_{xy}+3$; in either case $\det S=(d_x-2)(d_y-2) -d^{2}_{xy}< 0,$ a contradiction. Thus $z\nsim x$; choose a vertex $p\in \Gamma X$ of a $K_2$ such that $z\sim p$. Forbidden subgraph $G_{33}$ implies that $d_{x}= d_{xp}$. Therefore $d_x\geq 3$ by Lemma \ref{nonsigular}. But forbidden subgraphs $G_9, G_{16}$ imply that $d_p< d_x+5$, a contradiction. Thus the corresponding graphs are not in $ \mathcal{F} $ when $G[\Gamma X]$ contains an isolated vertex.
Thus $G[\Gamma X]=kK_2$. Consider the set $Y' = Y\cup \Gamma X=mK_2$; then $m\geq 2$, since $Y$ and $\Gamma X$ are nonempty. Let $Z$ be the set of vertices in neither $X$ nor $Y'$. Then $X$, $Y'$, and $Z$ give the following block structure of $A$:
$$A=\left[
\begin{array}{ccc}\begin{smallmatrix}
J-I_a & J & 0 \\
J & R_{2m} & N \\
0 & N^{T} & M \\
\end{smallmatrix}\end{array}
\right].$$
Take three vertices $u \in X$, $x \in Y'$ and $y\in Y'$ with $x\nsim y$, and consider the corresponding $3 \times 3$ principal submatrix $T$ of $A^{2}-A-2I$; then
$$ {\begin{matrix}
T=\begin{bmatrix}\begin{smallmatrix}
d_u-2&a-1&a-1&\\
a-1&d_x-2&d_{xy}\\
a-1&d_{xy}&d_y-2\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
Write $T = (a-1)J + T'$, then
$$ {\begin{matrix}
T'=\begin{bmatrix}
d_u-a-1&0^{T}\\
0&T''\\
\end{bmatrix} ,& T'' = \begin{bmatrix}
d_x-a-1&d_{xy}-a+1\\
d_{xy}-a+1&d_y-a-1\\
\end{bmatrix}
\end{matrix}}.
$$
Note that $d_u>a+1$, $d_x\geq a+1$, $d_y\geq a+1$. Without loss of generality, we assume $d_y\geq d_x$. If $T''$ were positive definite, then so would be $T'$ and $T$, contradicting rank $T \leq2$. Therefore $\det T''= (d_x-a-1)(d_y-a-1)-(d_{xy}-a+1)^{2} \leq0$, and by Lemma \ref{nine}, $\det S=(d_x-2)(d_y-2)-d^{2}_{xy}\geq 0$. If $d_{x} = d_{xy}+1,$ forbidden subgraphs $G_{20}, G_{23}$ show that $d_{xy}+1 \leq d_{y} \leq d_{xy}+ 3$, whence $\det S=(d_x-2)(d_y-2)-d^{2}_{xy}< 0$, a contradiction. If $d_{x} \geq d_{xy}+2,$ then $\det T''> 0$ unless $d_{x} = d_{y} = d_{xy}+ 2$. We conclude that $d_{x} = d_{y} = d_{xy}+ 2$, and we find the following two possible structures for $N$:
$$N=\left[
\begin{array}{ccc}
J-S_{2k} &0 &J \\
\end{array}
\right](k\geq2), ~~\text{or}~~ N=\left[
\begin{array}{ccc}
S_{2m}
&0&J\\
\end{array}
\right](m\geq3). $$
Partition $Z=Z_1\cup Z_2\cup Z_3$ according to the structure of $N$. Take five vertices $x,y\in Z_1$, $u\in X$ and $m,n\in Y'$ with $m\sim x$, $n\sim y$ and $m\nsim n$; if $x\sim y$, then $\{u,x,y,m,n\}$ induces the forbidden subgraph $G_1$ of Fig.~2; thus $ G[Z_1]=lK_{1} $. An argument similar to the one used in the case $a>b=1$ shows that $Z_3$ is empty and that the second structure for $N$ is impossible. We find two possibilities for $Z_2$: either $Z_2$ is empty, or $|Z_2|=1$.
Case (1): If $Z_2$ is empty, $G[Y']=kK_{2},$ $G[Z_1]=lK_{1},$ then $k=l$, and
$G$ has the following adjacency matrix $A$ with quotient matrix $Q$: $$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & 0 \\
J & R_{2k} & J-S_{2k} \\
0 & J-S^{T}_{2k} & 0 \\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & 2k & 0 \\
a& 1 & k-1 \\
0 & 2k-2 &0 \\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.$$
$P_Q(x)=2-2a-4k+4ak+2k^2-2ak^2+3x-ax-4kx+2akx+2k^2x+ax^2-x^3$ shows that $ Q$ has no eigenvalue $-1$ and has an eigenvalue $2$ if and only if $(a, k) = (4, 10),(5,7),(6,6)$, or $(9,5)$, which leads to Case (viii).
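The same brute-force check applies here: at $x=2$ the polynomial factors as $2k(4a+3k-ak-6)$, and $P_Q(-1)=2ak(1-k)$ is nonzero for $k\geq 2$. A sketch in Python (the search bounds and the constraint $k\geq 2$ are illustrative assumptions):

```python
# Brute-force check of the pairs (a, k) with P_Q(2) = 0 for this case.
# Bounds and the constraint k >= 2 are illustrative assumptions.

def P(a, k, x):
    return (2 - 2*a - 4*k + 4*a*k + 2*k**2 - 2*a*k**2 + 3*x - a*x - 4*k*x
            + 2*a*k*x + 2*k**2*x + a*x**2 - x**3)

pairs = [(a, k) for a in range(1, 100) for k in range(2, 100)
         if P(a, k, 2) == 0]
```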
\par Case (2): If $|Z_2| =1$, then $G[Y']=kK_{2}$ and $G[Z_1]=lK_{1}$ with $k=l$, and
$G$ has the following adjacency matrix $A$ with quotient matrix $Q$:
$$ {\begin{matrix}
A=\begin{bmatrix}\begin{smallmatrix}
J-I_a & J & 0 &0 \\
J &R_{2k}& J-S_{2k} &0 \\
0 & J-S^{T}_{2k} & 0 &J\\
0&0&J&0\\
\end{smallmatrix}\end{bmatrix} ,& Q = \begin{bmatrix}\begin{smallmatrix}
a-1 & 2k & 0 & 0 \\
a & 1& k-1 &0 \\
0 & 2k-2 &0 &1 \\
0&0&k&0\\
\end{smallmatrix}\end{bmatrix}
\end{matrix}}.
$$
$P_Q(x)=(1+x)(k-ak+2ak^2-2x+2ax+3kx-2akx-2k^2x-x^2-ax^2+x^3)$ shows that $Q$ has an eigenvalue $-1$ and has an eigenvalue $2$ if and only if $(a, k) = (3,4)$ or $(5,3)$, which leads to Case (viiii). $ \hfill \Box$ \vspace*{0.2cm}
\vskip 0.4 true cm
\begin{center}{\textbf{Acknowledgments}}
\end{center}
This project was supported by the National Natural Science Foundation of China (No. 11571101).
| {
"timestamp": "2018-06-20T02:11:23",
"yymm": "1806",
"arxiv_id": "1806.07260",
"language": "en",
"url": "https://arxiv.org/abs/1806.07260",
"abstract": "In this paper, all graphs whose adjacency matrix has at most two eigenvalues (multiplicities included) different from $2$ and $-1$ are determined. These graphs include a class of generalized friendship graphs $F_{t,r,k}$, the graph of $k$ copies of the complete graph $K_t$ meeting in $r$ common vertices, with $t-r=3$. Which of these graphs are determined by their spectrum is also established.",
"subjects": "Combinatorics (math.CO)",
"title": "The graphs with all but two eigenvalues equal to $2$ or $-1$"
} |
https://arxiv.org/abs/1805.02201 | RealCertify: a Maple package for certifying non-negativity | Let $\mathbb{Q}$ (resp. $\mathbb{R}$) be the field of rational (resp. real) numbers and $X = (X_1, \ldots, X_n)$ be variables. Deciding the non-negativity of polynomials in $\mathbb{Q}[X]$ over $\mathbb{R}^n$ or over semi-algebraic domains defined by polynomial constraints in $\mathbb{Q}[X]$ is a classical algorithmic problem for symbolic computation. The Maple package \textsc{RealCertify} tackles this decision problem by computing sum of squares certificates of non-negativity for inputs where such certificates hold over the rational numbers. It can be applied to numerous problems coming from engineering sciences, program verification and cyber-physical systems. It is based on hybrid symbolic-numeric algorithms relying on semi-definite programming. | \section{Introduction}
Let ${\mathbb{Q}}$ (resp.~${\mathbb{R}}$) be the field of rational (resp.~real) numbers and
$X = (X_1, \ldots, X_n)$ be a sequence of variables. We consider the problem of
deciding the non-negativity of $f \in {\mathbb{Q}}[X]$ either over ${\mathbb{R}}^n$ or over a
semi-algebraic set $S$ defined by some constraints
$g_1\geq 0, \ldots, g_m\geq 0$ (with $g_j \in {\mathbb{Q}}[X]$). We denote by $d$ the
maximum of the total degrees of these polynomials.
The Cylindrical Algebraic Decomposition (CAD) algorithm~\cite{Collins75} solves
this decision problem in time doubly exponential in $n$ (and polynomial in $d$).
This algorithm (and its further improvements) has been implemented in most of
computer algebra systems.
Later, the so-called critical point method was designed, allowing one to solve
this decision problem in time singly exponential in $n$ (and polynomial in $d$).
Recent variants of this method have also been implemented in the \href{http://www-polsys.lip6.fr/~safey/RAGLib/}{$\texttt{RAGLib}$} Maple
package.
All the aforementioned algorithms are ``root finding'' ones: they try to find a
point at which $f$ is negative over the considered domain. When $f$ is positive,
they return an empty list without a {\it certificate} that can be checked {\it a
posteriori}.
To compute certificates of non-negativity, an approach based on {\it sum of
squares} (SOS) decompositions (and their variants) has been popularized by
Lasserre~\cite{Las01sos} and Parrilo~\cite{phdParrilo}. The idea is as follows.
To ensure that a polynomial $f$ of degree $d = 2k$ is non-negative over ${\mathbb{R}}^n$, it suffices to
write it as a sum of squares $c_1 s_1^2+\cdots+c_r s_r^2$ where the $c_i$'s are
positive constants. When the $s_i$'s and $c_i$'s can be taken with
rational coefficients, one obtains a certificate of non-negativity
over the rationals. Such a decomposition can be obtained by finding a
semi-definite positive symmetric matrix $G$ such that
\[
f = v_k^T G v_k
\]
where $v_k$ is the vector of all monomials of degree $\leq k$. Obtaining such a
matrix $G$ boils down to solving a linear matrix inequality.
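As a toy illustration of the Gram-matrix formulation (a sketch relying on \texttt{sympy} only, independent of the \textsc{RealCertify} implementation; the polynomial and the matrix below are chosen by hand), one can check both the identity $f = v_k^T G v_k$ and the positive semi-definiteness of $G$:

```python
import sympy as sp

X = sp.symbols('X')
# Hand-picked example: f = (X^2 + X + 1)^2, so an exact rational Gram
# matrix exists (rank one, hence positive semidefinite).
f = X**4 + 2*X**3 + 3*X**2 + 2*X + 1

v = sp.Matrix([1, X, X**2])          # monomials of degree <= k = 2
G = sp.Matrix([[1, 1, 1],
               [1, 1, 1],
               [1, 1, 1]])           # candidate Gram matrix

# f = v^T G v must hold as a polynomial identity ...
assert sp.expand((v.T * G * v)[0] - f) == 0
# ... and G must be positive semidefinite (eigenvalues 3, 0, 0 here).
assert all(ev >= 0 for ev in G.eigenvals())
```

In general $G$ is not unique; the semi-definite programming step searches the affine set of Gram matrices of $f$ for a positive semi-definite one.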
This method is attractive because efficient numerical solvers are available for
solving {\em large} linear matrix inequalities. Besides, when $d$ is fixed and
$n$ grows, the size of the matrix $G$ grows only polynomially in $n$, so
numerical solvers can provide {\em approximations} of a sum of squares decomposition for $f$. It
can also be generalized to obtain certificates of non-negativity for constrained
problems, writing $f$ as
\[
f = \sigma_0+ \sigma_1 g_1+\cdots + \sigma_m g_m
\]
where the $\sigma_i$'s are sum of squares.
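For instance (a hand-computed toy certificate, not an output of \textsc{RealCertify}), the polynomial $X(1-X)$ is non-negative on $\{X \geq 0,\, 1 - X \geq 0\} = [0,1]$, with the weighted decomposition $X(1-X) = (1-X)^2\,X + X^2\,(1-X)$, which can be checked mechanically:

```python
import sympy as sp

X = sp.symbols('X')
# f is non-negative on the semi-algebraic set {g1 >= 0, g2 >= 0} = [0, 1].
f = X * (1 - X)
g1, g2 = X, 1 - X
# Hand-computed weighted certificate f = s0 + s1*g1 + s2*g2 with SOS
# multipliers s0 = 0, s1 = (1-X)^2, s2 = X^2.
s0, s1, s2 = sp.Integer(0), (1 - X)**2, X**2
assert sp.expand(s0 + s1*g1 + s2*g2 - f) == 0
```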
On the minus side, this method provides only {\em approximations} of
certificates of non-negativity. Besides, it is well-known that not all
non-negative polynomials can be written as sum of squares of polynomials.
Pioneering works of Parrilo/Peyrl \cite{PaPe08} and Kaltofen/Li/Yang/Zhi
\cite{KLYZ08} have opened the door to hybrid symbolic-numeric strategies for
computing certificates of non-negativity whenever such certificates exist over
the rational numbers.
In \cite{MaSa18}, we have designed hybrid symbolic-numeric algorithms for
computing certificates of non-negativity over the rationals in some ``easy''
situations (roughly speaking, those where the sought sum of squares
decomposition lies in the interior of the cone of polynomials that are
sums of squares). The package \textsc{RealCertify} implements these algorithms
and aims at providing a full suite of hybrid algorithms for computing
certificates of non-negativity based on numerical software for solving linear
matrix inequalities.
\section{Algorithmic background and overall description}
\subsection{The univariate case}
In the univariate case, every non-negative polynomial is a sum of squares. The
library includes two distinct algorithms:
\begin{itemize}
\item $\texttt{univsos1}$, which is a recursive procedure relying on root isolation
and quadratic under-approximations of positive polynomials.
The first step computes a rational approximation $t$ of the smallest global
minimizer $a$ of $f$ and a non-negative quadratic under-approximation $f_t$ of
$f$ such that $t$ is a root of $f - f_t$. The second step computes a square-free
decomposition $f - f_t = g h^2$. Then, we apply the same procedure to $g$
until the resulting degree is at most 2.
\item $\texttt{univsos2}$, which relies on root isolation of perturbed positive
polynomials. Given a univariate polynomial $f > 0$ of degree $d = 2k$, this
algorithm computes weighted SOS decompositions of $f$. The first numeric step
of $\texttt{univsos2}$ is to find $\varepsilon$ such that the perturbed polynomial
$f_\varepsilon := f - \varepsilon \sum_{i=0}^k X^{2 i} > 0$ and to compute its
complex roots, yielding an approximate SOS decomposition $l (s_1^2 + s_2^2)$,
where $l$ is the leading coefficient of $f_\varepsilon$. In the second
symbolic step, one considers the remainder polynomial
$u := f_\varepsilon - l s_1^2 - l s_2^2$ and tries to compute an exact SOS
decomposition of $\varepsilon \sum_{i=0}^k X^{2 i} + u$. This succeeds for
large enough precision of the root isolation procedure.
\end{itemize}
In both cases, the output is a list $[c_1,s_1,\dots,c_r,s_r]$, with
$c_i \in {\mathbb{Q}}^{> 0}$, $s_i \in {\mathbb{Q}}[X]$, such that
$f = c_1 s_1^2 + \dots + c_r s_r^2$.
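Such a certificate list is straightforward to validate independently of Maple; a minimal \texttt{sympy} checker (our own sketch, not shipped with the package; the helper name \texttt{check\_certificate} and the toy certificate are ours) could look as follows:

```python
import sympy as sp

def check_certificate(f, cert):
    """Validate a list [c1, s1, ..., cr, sr]: all weights c_i must be
    positive and f = c1*s1**2 + ... + cr*sr**2 must hold exactly."""
    weights, polys = cert[0::2], cert[1::2]
    if not all(sp.sympify(c) > 0 for c in weights):
        return False
    rebuilt = sum(sp.sympify(c) * sp.sympify(s)**2
                  for c, s in zip(weights, polys))
    return sp.expand(rebuilt - f) == 0

X = sp.symbols('X')
# Toy certificate chosen by hand: X^4 + 1 = 1*(X^2)^2 + 1*(1)^2.
assert check_certificate(X**4 + 1, [1, X**2, 1, 1])
assert not check_certificate(X**4 + 2, [1, X**2, 1, 1])
```

Because the check is purely symbolic, it accepts a certificate exactly when the weighted sum of squares reproduces $f$ as a polynomial identity over $\mathbb{Q}$.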
Let us illustrate the behavior of both algorithms on the input
$f = 1 + X + X^2 + X^3 + X^4$.
\begin{enumerate}
\item When running~$\texttt{univsos1}$, the algorithm first provides the value $t = -1$
as an approximation of the minimizer of $f$ together with a non-negative quadratic
under-approximation $f_{t}(X) = X^2$. Next, one obtains the square-free
decomposition $f - f_t = (X+1)^2 g(X)$ with
$g(X) = (X-\frac{1}{2})^2 + \frac{3}{4}$.
The Maple command: \verb?univsos1(1+X+X^2+X^3+X^4,X)? outputs the list
$[1, 0, 1, (X + 1) (X - \frac{1}{2}), \frac{3}{4}, X + 1, 1, -X]$,
corresponding to the weighted rational SOS decomposition
$f = (X+1)^2 (X-\frac{1}{2})^2 + \frac{3}{4} (X+1)^2 + (-X)^2$.
\item When running~$\texttt{univsos2}$, the algorithm performs the first loop and
provides the value $\varepsilon = \frac{1}{8}$ with the polynomial
$f_\varepsilon := f - \frac{1}{8} (1 + X^2 + X^4)$ which has no real root. The
leading coefficient of $f_\varepsilon$ is $l = \frac{7}{8}$.
After multiplying the precision of complex root isolation by 8, one obtains
$s_1 = X^2 + \frac{9}{16} X - \frac{3}{4}$,
$s_2 = \frac{23}{16} X+ \frac{11}{16}$ and
$u = \frac{1}{64} X^3 + \frac{105}{1024} X^2 + \frac{9}{1024} X -
\frac{63}{2048}$.
Using that $X^3 = \frac{1}{2} (X^2 + X)^2 - \frac{1}{2} (X^4 + X^2)$ and
$X = \frac{1}{2} (X + 1)^2 - \frac{1}{2} (X^2 + 1)$, one gets an SOS
decomposition for $u + \frac{1}{8} (1 + X^2 + X^4)$.
The Maple command \verb?univsos2(1+X+X^2+X^3+X^4,X)? outputs the decomposition
$f = \frac{7}{8}(s_1^2 + s_2^2) + \frac{377}{4096} + \frac{55}{256} X^2 +
\frac{7}{64} X^4 + \frac{9}{1024} (X+\frac{1}{2})^2 + \frac{1}{64}
X^2(X+\frac{1}{2})^2$.
\end{enumerate}
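Both decompositions above can be re-verified outside Maple as exact polynomial identities; the following \texttt{sympy} snippet (an independent verification sketch, not part of \textsc{RealCertify}) rebuilds each right-hand side and compares it with $f$:

```python
import sympy as sp

X = sp.symbols('X')
R = sp.Rational
f = 1 + X + X**2 + X**3 + X**4

# univsos1 output: f = (X+1)^2 (X-1/2)^2 + 3/4 (X+1)^2 + X^2
d1 = ((X + 1)*(X - R(1, 2)))**2 + R(3, 4)*(X + 1)**2 + X**2

# univsos2 output, with the s1, s2 and weights reported above
s1 = X**2 + R(9, 16)*X - R(3, 4)
s2 = R(23, 16)*X + R(11, 16)
d2 = (R(7, 8)*(s1**2 + s2**2) + R(377, 4096) + R(55, 256)*X**2
      + R(7, 64)*X**4 + R(9, 1024)*(X + R(1, 2))**2
      + R(1, 64)*X**2*(X + R(1, 2))**2)

# Both must agree with f term by term.
assert sp.expand(d1 - f) == 0 and sp.expand(d2 - f) == 0
```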
\subsection{The multivariate case}
In the multivariate case, the $\texttt{multivsos}$ library computes SOS decompositions
of multivariate non-negative polynomials with rational coefficients, in both the
unconstrained and the constrained case.
In the unconstrained case, $\texttt{multivsos}$ implements a hybrid numeric-symbolic
algorithm computing exact rational SOS decompositions for polynomials lying in
the interior $\mathring{\Sigma}[X]$ of the SOS cone $\Sigma[X]$. It computes an
approximate SOS decomposition for a perturbation of the input polynomial with an
arbitrary-precision semi-definite programming (SDP) solver. An exact SOS
decomposition is obtained thanks to the perturbation terms.
Given $f \in {\mathbb{Z}}[X] \cap \mathring{\Sigma}[X]$ of degree $d = 2k$, one first
computes its Newton polytope $P$. The support of the SOS involved in the
decomposition of $f$ lies in $Q = P/2 \cap {\mathbb{N}}^n$.
A first loop finds $\varepsilon \in {\mathbb{Q}}^{>0}$ such that the perturbed
polynomial $f_\varepsilon := f - \varepsilon \sum_{\alpha \in Q} X^{2 \alpha}$
is also in ${\mathbb{Z}}[X] \cap \mathring{\Sigma}[X]$. In the second loop, one computes
an approximate rational SOS decomposition $\tilde{\sigma}$ of $f_\varepsilon$
with an arbitrary-precision SDP solver ($\texttt{sdp}$ procedure). We obtain the
remainder
$u = f - \varepsilon \sum_{\alpha \in Q} X^{2 \alpha} - \tilde{\sigma}$. When
the precision is large enough, the last symbolic step retrieves an
exact rational SOS decomposition of
$u + \varepsilon \sum_{\alpha \in Q} X^{2 \alpha}$.
In the constrained case, $\texttt{multivsos}$ relies on a similar procedure to compute
weighted SOS decompositions for polynomials positive over basic compact
semi-algebraic sets.
We apply~$\texttt{multivsos}$ on
$f = 4 X_1^4 + 4 X_1^3 X_2 - 7 X_1^2 X_2^2 - 2 X_1 X_2^3 + 10 X_2^4$. The other
input parameters are $\varepsilon = 1$, $\delta = R = 60$ and $\delta_c = 10$.
Then $Q := \conv{(\spt{f})}/2 \cap {\mathbb{N}}^n = \{(2,0),(1,1),(0,2)\}$. At the end of
the first loop, we get
$f - \varepsilon t = f - (X_1^4 + X_1^2 X_2^2 + X_2^4) \in
\mathring{\Sigma}[X]$.
The $\texttt{sdp}$ and $\texttt{cholesky}$ procedures yield
$s_1 = 2 X_1^2+ X_1 X_2- \frac{8}{3} X_2^2$,
$s_2 = \frac{4}{3} X_1 X_2+ \frac{3}{2} X_2^2$ and $s_3 = \frac{2}{7} X_2^2$.
The remainder polynomial is
$u = f- \varepsilon t - s_1^2 - s_2^2 - s_3^2 = - X_1^4- \frac{1}{9} X_1^2
X_2^2- \frac{2}{3} X_1 X_2^3- \frac{781}{1764} X_2^4$.
At the end of the second loop, we obtain
$\varepsilon_{(2,0)} = \varepsilon - 1 = 0$, which is the coefficient of
$X_1^4$ in $\varepsilon t + u$. Then,
$\varepsilon (X_1^2 X_2^2 + X_2^4) - \frac{2}{3} X_1 X_2^3 = \frac{1}{3} (X_1
X_2 - X_2^2)^2 + (\varepsilon - \frac{1}{3}) (X_1^2 X_2^2 + X_2^4)$.
In the polynomial $\varepsilon t + u$, the coefficient of $X_1^2 X_2^2$ is
$\varepsilon_{(1,1)} = \varepsilon - \frac{1}{3} - \frac{1}{9} = \frac{5}{9}$
and the coefficient of $X_2^4$ is
$\varepsilon_{(0,2)} = \varepsilon - \frac{1}{3} - \frac{781}{1764} =
\frac{395}{1764}$.
The Maple command
\begin{verbatim}
multivsos(4 * X1^4 + 4 * X1^3 * X2 - 7 * X1^2 * X2^2 - 2 * X1 * X2^3 + 10 * X2^4):
\end{verbatim}
outputs the weighted rational SOS decomposition: $4 X_1^4 + 4 X_1^3 X_2 - 7 X_1^2 X_2^2 - 2 X_1 X_2^3 + 10 X_2^4 = \frac{1}{3} (X_1 X_2-X_2^2)^2 + \frac{5}{9} (X_1 X_2)^2 + \frac{395}{1764} X_2^4 + (2 X_1^2+ X_1 X_2- \frac{8}{3} X_2^2)^2 + (\frac{4}{3} X_1 X_2+ \frac{3}{2} X_2^2)^2+(\frac{2}{7} X_2^2)^2$.
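This multivariate decomposition can likewise be re-checked as an exact identity, e.g. with \texttt{sympy} (an independent verification sketch, not part of the package):

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2')
R = sp.Rational
f = 4*X1**4 + 4*X1**3*X2 - 7*X1**2*X2**2 - 2*X1*X2**3 + 10*X2**4

# Weighted SOS decomposition returned by multivsos for f.
sos = (R(1, 3)*(X1*X2 - X2**2)**2 + R(5, 9)*(X1*X2)**2
       + R(395, 1764)*X2**4
       + (2*X1**2 + X1*X2 - R(8, 3)*X2**2)**2
       + (R(4, 3)*X1*X2 + R(3, 2)*X2**2)**2
       + (R(2, 7)*X2**2)**2)

# The certificate must reproduce f exactly over the rationals.
assert sp.expand(sos - f) == 0
```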
\subsection{Dependencies}
The $\textsc{RealCertify}$ software is available and maintained in a repository hosted on
\href{https://gricad-gitlab.univ-grenoble-alpes.fr/magronv/RealCertify}{Gitlab}. The
$\texttt{univsos}$ and $\texttt{multivsos}$ libraries have been tested with Maple 2016.
$\texttt{univsos}$ requires the external
\href{http://pari.math.u-bordeaux.fr/download.html}{PARI/GP} software for
$\texttt{univsos2}$, as well as the external SDP
solvers~\href{https://sourceforge.net/projects/sdpa/files/sdpa/sdpa_7.3.8.tar.gz}{SDPA}
(double precision)
and~\href{https://sourceforge.net/projects/sdpa/files/sdpa-gmp/sdpa-gmp.7.1.3.src.20150320.tar.gz}{SDPA-GMP}~\cite{Nakata10GMP}
(arbitrary-precision).
In addition to SDPA and SDPA-GMP, used for the $\texttt{sdp}$ procedure, $\texttt{multivsos}$
requires the Maple
package~\href{http://www.math.uwo.ca/faculty/franz/convex}{Convex}, by M. Franz,
to compute Newton polytopes.
\section{Performance analysis and limitations}
Timings, which we report on below, were obtained on an Intel Core i7-5600U CPU
(2.60 GHz) with 16Gb of RAM. Most of the time is spent in the $\texttt{sdp}$ procedure
for all benchmarks. Those benchmarks are standard ones in the polynomial
optimization community. We report here only on multivariate problems. We refer
to \cite{univsos} for a performance analysis in the univariate case.
The table on the left below reports on unconstrained problems, while the one on
the right reports on constrained ones. It appears that, on this class of
problems, \textsc{RealCertify} scales better than the CAD implementation
available in Maple and than \textsc{RAGlib}. It should be observed that these
examples can actually be decomposed into sums of squares quite easily.
\vspace{-0.4cm}
{\tiny
\begin{center}
\begin{tabular}{ll}
\begin{minipage}{0.5\linewidth}
\begin{center}
\begin{tabular}{|lrr|rr|c|c|}
\hline
\multirow{2}{*}{Id} & \multirow{2}{*}{$n$} & \multirow{2}{*}{$d$} & \multicolumn{2}{c|}{$\texttt{multivsos}$} & $\texttt{RAGLib}$ & $\texttt{CAD}$ \\
& & & $\tau_1$ (bits) & $t_1$ (s) & $t_2$ (s) & $t_3$ (s) \\
\hline
$f_{12}$ & 2 & 12 & 162 861 & 5.96 & 0.15 & 0.07 \\
$f_{20}$ & 2 & 20 & 745 419 & 110. & 0.16 & 0.03 \\
$M_{20}$ & 3 & 8 & 4 695 & 0.18 & 0.13 & 0.05 \\
$M_{100}$ & 3 & 8 & 17 232 & 0.35 & 0.15 & 0.03 \\
$r_2$ & 2 & 4 & 1 866 & 0.03 & 0.09 & 0.01 \\
$r_4$ & 4 & 4 & 14 571 & 0.15 & 0.32 & $-$ \\
$r_6$ & 6 & 4 & 56 890 & 0.34 & 623. & $-$ \\
$r_8$ & 8 & 4 & 157 583 & 0.96 & $-$ & $-$ \\
$r_{10}$ & 10 & 4 & 344 347 & 2.45 & $-$ & $-$ \\
$r_6^2$ & 6 & 8 & 1 283 982 & 13.8 & 10.9 & $-$ \\
\hline
\end{tabular}
\label{table:bench2}
\end{center}
\end{minipage}
&
\begin{minipage}{0.5\linewidth}
\begin{center}
\begin{tabular}{|lrr|rrr|c|c|}
\hline
\multirow{2}{*}{Id} & \multirow{2}{*}{$n$} & \multirow{2}{*}{$d$} & \multicolumn{3}{c|}{$\texttt{multivsos}$} & $\texttt{RAGLib}$ & $\texttt{CAD}$ \\
& & & $k$ & $\tau_1$ (bits) & $t_1$ (s) & $t_2$ (s) & $t_3$ (s) \\
\hline
$p_{46}$ & 2 & 4 & 3 & 21 723 & 0.83 & 0.15 & 0.81 \\
$f_{260}$ & 6 & 3 & 2 & 114 642 & 2.72 & 0.12 & $-$ \\
$f_{491}$ & 6 & 3 & 2 & 108 359& 9.65 & 0.01 & 0.05 \\
$f_{752}$ & 6 & 2 & 2 & 10 204 & 0.26 & 0.07 & $-$ \\
$f_{859}$ & 6 & 7 & 4 & 6 355 724 & 303. & 5896. & $-$ \\
$f_{863}$ & 4 & 2 & 1 & 5 492 & 0.14 & 0.01 & 0.01 \\
$f_{884}$ & 4 & 4 & 3 & 300 784 & 25.1 & 0.21 & $-$ \\
$f_{890}$ & 4 & 4 & 2 & 60 787 & 0.59 & 0.08 & $-$ \\
butcher & 6 & 3 & 2 & 247 623 & 1.32 & 47.2 & $-$ \\
heart & 8 & 4 & 2 & 618 847 & 2.94 & 0.54 & $-$ \\
magn. & 7 & 2 & 1 & 9 622 & 0.29 & 434. & $-$ \\
\hline
\end{tabular}
\label{table:bench3}
\end{center}
\end{minipage}
\end{tabular}
\end{center}
}
The technique on which \textsc{RealCertify} relies also takes full advantage
of the fact that solving linear matrix inequalities at fixed precision can be
done in polynomial time when $d$ is fixed and $n$ increases.
However, we mention that for non-negative polynomials which are not sums of
squares, or whose coefficients have large magnitude, the practical
behaviour of \textsc{RealCertify} can be much less satisfactory and less
efficient than e.g. \textsc{RAGlib}.
Hence, one can see \textsc{RealCertify} as a pre-processing tool for
obtaining certificates of non-negativity, which can be complemented with other
symbolic computation tools.
| {
"timestamp": "2018-05-08T02:11:12",
"yymm": "1805",
"arxiv_id": "1805.02201",
"language": "en",
"url": "https://arxiv.org/abs/1805.02201",
"abstract": "Let $\\mathbb{Q}$ (resp. $\\mathbb{R}$) be the field of rational (resp. real) numbers and $X = (X_1, \\ldots, X_n)$ be variables. Deciding the non-negativity of polynomials in $\\mathbb{Q}[X]$ over $\\mathbb{R}^n$ or over semi-algebraic domains defined by polynomial constraints in $\\mathbb{Q}[X]$ is a classical algorithmic problem for symbolic computation.The Maple package \\textsc{RealCertify} tackles this decision problem by computing sum of squares certificates of non-negativity for inputs where such certificates hold over the rational numbers. It can be applied to numerous problems coming from engineering sciences, program verification and cyber-physical systems. It is based on hybrid symbolic-numeric algorithms based on semi-definite programming.",
"subjects": "Symbolic Computation (cs.SC); Mathematical Software (cs.MS)",
"title": "RealCertify: a Maple package for certifying non-negativity",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9820137858267907,
"lm_q2_score": 0.7217432003123989,
"lm_q1q2_score": 0.7087617725335226
} |
https://arxiv.org/abs/1907.00551 | Plateau's problem as a singular limit of capillarity problems | Soap films at equilibrium are modeled, rather than as surfaces, as regions of small total volume through the introduction of a capillarity problem with a homotopic spanning condition. This point of view introduces a length scale in the classical Plateau's problem, which is in turn recovered in the vanishing volume limit. This approximation of area minimizing hypersurfaces leads to an energy based selection principle for Plateau's problem, points at physical features of soap films that are unaccessible by simply looking at minimal surfaces, and opens several challenging questions. | \section{Introduction}
\subsection{Overview}\label{section overview} The theory of minimal surfaces with prescribed boundary data provides the basic model for soap films hanging from a wire frame: given an $(n-1)$-dimensional surface $\Gamma\subset\mathbb{R}^{n+1}$ without boundary, one seeks $n$-dimensional surfaces $M$ such that
\begin{equation}
\label{minimal surfaces}
H_M=0\,,\qquad\partial M=\Gamma\,,
\end{equation}
where $H_M$ is the mean curvature of $M$ (and $n=2$ in the physical case). A limitation of \eqref{minimal surfaces} as a physical model is that, in general, \eqref{minimal surfaces} may be non-uniquely solvable, including unstable (and thus, not related to observable soap films) solutions. Area minimization can be used to construct stable (and thus, physical) solutions, providing a strong motivation for the study of {\it Plateau's problem}; see \cite{coldingminiBOOK}. Here we are concerned with a more elementary physical limitation of \eqref{minimal surfaces}, namely, the absence of a length scale: if $M$ solves \eqref{minimal surfaces} for $\Gamma$, then $t\,M$ solves \eqref{minimal surfaces} for $t\,\Gamma$, no matter how large $t>0$ is.
\medskip
Following \cite{maggiscardicchiostuvard}, we introduce a length scale in the modeling of soap films by thinking of them as regions $E\subset\mathbb{R}^{n+1}$ with small volume $|E|=\varepsilon$. At equilibrium, the isotropic pressure at a point $y$ interior to the liquid but immediately close to its boundary $\partial E$ is
\begin{equation}
\label{balance of p 1}
p(y)=p_0+\sigma\,\vec{H}_{\partial E}(y)\cdot\nu_E(y)\,,
\end{equation}
where $p_0$ is the atmospheric pressure, $\sigma$ is the surface tension, $\nu_E$ the outer unit normal to $E$, and $\vec{H}_{\partial E}$ the mean curvature vector of $\partial E$; at the same time, for any two points $y,z$ inside the film we have
\begin{equation}
\label{balance of p 2}
p(y)-p(z)=\rho\,g\,(z-y)\cdot e_{n+1}\,,
\end{equation}
where $\rho$ is the density of the fluid, $g$ the gravity of Earth and $e_{n+1}$ is the vertical direction. In the absence of gravity, \eqref{balance of p 1} and \eqref{balance of p 2} imply that $H_E=\vec{H}_{\partial E}\cdot\nu_E$ is {\it constant} along $\partial E$. A heuristic analysis shows that if $\partial E$ is representable, locally, by the two graphs $\{x\pm (h(x)/2)\,\nu_M(x):x\in M\}$ defined by a positive function $h$ over an ideal mid-surface $M$, then $H_M$ should be {\it small, but non-zero} (even in the absence of gravity); see \cite[Section 2]{maggiscardicchiostuvard}. As it is well-known, one cannot prescribe non-vanishing mean curvature with arbitrarily large boundary data, see, e.g. \cite{giustiINV,duzaarfuchs}. Hence this point of view can potentially capture physical features of soap films that are not accessible by modeling them as minimal surfaces.
\medskip
The goal of this paper is starting the analysis of the variational problem playing for \eqref{balance of p 1} and \eqref{balance of p 2} the role that Plateau's problem plays for \eqref{minimal surfaces}. The new aspect is not in the energy minimized, but in the boundary conditions under which the minimization occurs. Indeed, the equivalence between the constancy of $H_E$ and the balance equations \eqref{balance of p 1} and \eqref{balance of p 2}, leads us to work in the classical framework of Gauss' capillarity model for liquid droplets in a container. Given an open set $\Omega\subset\mathbb{R}^{n+1}$ (the container), the surface tension energy\footnote{For simplicity, we are setting to zero the adhesion coefficient with the container; see, e.g. \cite{Finn}.} of a droplet occupying the open region $E\subset\Omega$ is given by
\[
\sigma\,\H^n(\Omega\cap\partial E)\,,
\]
where $\H^n$ denotes $n$-dimensional Hausdorff measure (surface area if $n=2$, length if $n=1$). In the case of soap films hanging from a wire frame $\Gamma$, we choose as container $\Omega$ the set
\[
\Omega=\mathbb{R}^{n+1}\setminus I_\delta(\Gamma)\,,
\]
corresponding to the complement of the ``solid wire'' $I_\delta(\Gamma)$, where $I_\delta$ denotes the closed $\delta$-neighborhood of a set. The minimization of $\H^n(\Omega\cap\partial E)$ among open sets $E\subset\Omega$ with $|E|=\varepsilon$ leads indeed to finding minimizers whose boundaries have constant mean curvature. However, these boundaries will not resemble soap films at all, but will rather consist of small ``droplets'' sitting at points of maximal curvature for $I_\delta(\Gamma)$; see
\begin{figure}
\input{span.pstex_t}\caption{{\small Minimizers of the capillarity problem in the unusual container $\Omega$ consisting of the complement of a $\delta$-neighborhood $I_\delta(\Gamma)$ of a curve $\Gamma$ (depicted in light gray). The shape of $E$ is drastically different depending on whether or not a homotopic spanning condition is prescribed: (a) without a $\mathcal{C}$-spanning condition, we observe tiny droplets sitting near points of maximal mean curvature of $\partial\Omega$; (b) with a $\mathcal{C}$-spanning condition, small round droplets will not be admissible, and a different region of the energy landscape is explored; minimizers are now expected to stretch out and look like soap films.}}\label{fig span}
\end{figure}
Figure \ref{fig span}, and \cite{baylerosales,fall,maggimihaila} for more information.
\medskip
To observe soap films, rather than droplets, we must require that $\partial E$ stretches out to span $I_\delta(\Gamma)$. To this end, we exploit a beautiful idea introduced by Harrison and Pugh in \cite{harrisonpughACV}, as slightly generalized in \cite{DLGM}. The idea is to fix a {\bf spanning class}, i.e. a homotopically closed\footnote{By this we mean that if $\gamma_0,\gamma_1$ are smooth embeddings of $\SS^1$ into $\Omega$, $\gamma_0\in\mathcal{C}$, and there exists a continuous map $f:[0,1]\times\SS^1\to\Omega$ with $f(t,\cdot)=\gamma_t$ for $t=0,1$, then $\gamma_1\in\mathcal{C}$.} family $\mathcal{C}$ of smooth embeddings of $\SS^1$ into $\Omega=\mathbb{R}^{n+1}\setminus I_\delta(\Gamma)$, and to say\footnote{Notice that, in stating condition \eqref{spanning condition}, the symbol $\gamma$ denotes the subset $\gamma(\SS^1)\subset\Omega$. We are following here the same convention set in \cite{DLGM}.} that a relatively closed set $S\subset\Omega$ is {\bf $\mathcal{C}$-spanning $I_\delta(\Gamma)$} if
\begin{equation}
\label{spanning condition}
S\cap\gamma\ne\emptyset\qquad\forall \gamma\in\mathcal{C}\,.
\end{equation}
Given a choice of $\mathcal{C}$, we have a corresponding version of Plateau's problem
\begin{equation}
\label{ell intro}
\ell=\inf\Big\{\H^n(S):\mbox{$S$ is relatively closed in $\Omega$, and $S$ is $\mathcal{C}$-spanning $I_\delta(\Gamma)$}\Big\}\,,
\end{equation}
as illustrated in Figure
\begin{figure}
\input{hpc.pstex_t}\caption{{\small The variational problem \eqref{ell intro} with $\Gamma$ given by two parallel circles centered on the same axis at a mutual distance smaller than their common radius. Different choices of $\mathcal{C}$ lead to different minimizers $S$ in $\ell$: (a) if $\mathcal{C}$ is generated by the loops $\gamma_1$ and $\gamma_2$, then $S$ is the area minimizing catenoid; (b) if we add to $\mathcal{C}$ the homotopy class of $\gamma_3$, then $S$ is the {\it singular} area minimizing catenoid, consisting of two catenoidal necks, meeting at equal angles along a circle of $Y$-points bounding a ``floating'' disk. Such singular catenoid cannot be approximated in energy by smooth surfaces: hence the choice of casting $\ell$ in a class of non-smooth surfaces.}}\label{fig hpc}
\end{figure}
\ref{fig hpc}. The variational problem $\psi(\varepsilon)$ studied here is thus a reformulation of $\ell$ as a capillarity problem with a homotopic spanning condition, namely:
\begin{equation}\nonumber
\psi(\varepsilon)=\inf\Big\{\H^n(\Omega\cap\partial E):\mbox{$E\subset\Omega$, $|E|=\varepsilon$, $\Omega\cap\partial E$ is $\mathcal{C}$-spanning $I_\delta(\Gamma)$}\Big\}\,,\qquad\varepsilon>0\,.
\end{equation}
We now give {\it informal} statements of our main results (e.g., we make no mention of singular sets or comment on reduced vs topological boundaries); see section \ref{section main statements} for the formal ones.
\medskip
\noindent {\bf Existence of generalized minimizers and Euler-Lagrange equations (Theorem \ref{thm lsc} and Theorem \ref{thm basic regularity})}:
{\it There always exists a generalized minimizer $(K,E)$ for $\psi(\varepsilon)$: that is, there exists a set $K\subset\Omega$, relatively closed in $\Omega$ and $\mathcal{C}$-spanning $I_\delta(\Gamma)$, and there exists an open set $E\subset\Omega$ with $\Omega\cap\partial E\subset K$ and $|E|=\varepsilon$, such that
\[
\psi(\varepsilon)=\mathcal F(K,E)=2\,\H^n(K\setminus\partial E)+\H^n(\Omega\cap\partial E)\,.
\]
Moreover, $(K,E)$ minimizes $\mathcal F$ with respect to all its diffeomorphic images: in particular, $\Omega\cap\partial E$ has constant mean curvature $\l\in\mathbb{R}$ and $K\setminus\partial E$ has zero mean curvature.}
\medskip
\noindent {\bf Convergence to the Plateau's problem (Theorem \ref{thm convergence as eps goes to zero})}:
{\it We always have $\psi(\varepsilon)\to 2\,\ell$ when $\varepsilon\to 0^+$, and if $(K_j,E_j)$ are generalized minimizers for $\psi(\varepsilon_j)$ with $\varepsilon_j\to 0^+$, then, up to extracting subsequences, we can find a minimizer $S$ for $\ell$ with
\[
2\,\int_{K_j\setminus\partial E_j}\varphi+\int_{\partial E_j}\varphi\to2\,\int_S\varphi\qquad\forall \varphi\in C^0_c(\Omega)\,,
\]
as $j\to\infty$; in other words, generalized minimizers in $\psi(\varepsilon_j)$ with $\varepsilon_j\to 0^+$ converge as Radon measures to minimizers in the Harrison-Pugh formulation of Plateau's problem.}
\begin{example}[Volume and thickness in the non-collapsed case]\label{example two points}
{\rm Let $\Gamma$ consist of two points at distance $r$ in the plane, or of an $(n-1)$-sphere of radius $r$ in $\mathbb{R}^{n+1}$. For $\varepsilon$ small enough, $\psi(\varepsilon)$ should admit a unique generalized minimizer $(K,E)$, consisting of two almost flat spherical caps meeting orthogonally along the torus $I_\delta(\Gamma)$ (so that $K=\partial E$ and collapsing does not occur); see Figure \ref{fig example23}-(a). In general, we expect that {\it when all the minimizers $S$ in $\ell$ are smooth, then generalized minimizers in $\psi(\varepsilon)$ are not collapsed, and, for small $\varepsilon$, $K=\partial E$ is a two-sided approximation of $S$, with $H_{E}=\psi'(\varepsilon)\to 0$ and
\begin{equation}
\label{psi eps quadro expected}
\psi(\varepsilon)=2\,\ell+C\,\varepsilon^2+{\rm o}(\varepsilon^2)\,,\qquad\mbox{as $\varepsilon\to 0^+$}\,,
\end{equation}
for a positive $C$}. This insight is consistent with the idea (see \cite{maggiscardicchiostuvard}) that {\it almost minimal surfaces} arise in studying soap films with a thickness. In particular, {\it volume and thickness will be directly related} in terms of the geometry of $\Gamma$. Sending $\varepsilon\to 0^+$ with $\Gamma$ fixed or, equivalently, considering $t\,\Gamma$ for large $t$ at $\varepsilon$ fixed, will make the thickness decrease until it reaches a threshold below which we do not expect soap films to be stable. A critical thickness can definitely be identified with the characteristic length scale of the molecules of surfactant, below which the model stops making sense. But depending on temperatures, actual soap films with even larger thicknesses should burst out due to the increased probability of fluctuations towards unstable configurations.}
\end{example}
\begin{example}[Volume and thickness in the collapsed case]\label{example tripe sing}
{\rm At small volumes, and in the presence of singularities in the minimizers of $\ell$, collapsing is energetically convenient, and allows $\psi(\varepsilon)$ to approximate $2\,\ell$ from below. If $\Gamma\subset\mathbb{R}^2$ consists of the three vertices of an equilateral triangle, for small $\delta$ the unique minimizer of $\ell$ consists of a $Y$-configuration. For small $\varepsilon$, we expect generalized minimizers $(K,E)$ of $\psi(\varepsilon)$ to be collapsed, see Figure
\begin{figure}
\input{example23.pstex_t}\caption{{\small (a) If $\Gamma$ consists of two points, then the minimizer is not collapsed, and is bounded by two very flat circular arcs; (b) when $\Gamma$ consists of the vertices of an equilateral triangle, the generalized minimizer is indeed collapsed. The three segments defining $K\setminus\partial E$ are depicted in bold, and $E$ is a negatively curved curvilinear triangle nested around the singular point of the unique minimizer of $\ell$.}}\label{fig example23}
\end{figure}
\ref{fig example23}-(b): there, $E$ is a curvilinear triangle made up of three circular arcs whose length is ${\rm O}(\sqrt\varepsilon)$, and whose (negative) curvature is ${\rm O}(1/\sqrt{\varepsilon})$. The thickness of an actual soap film in this configuration should thus be considerably larger near the singularity than along the collapsed region, and the volume and the thickness of the film are somehow {\it independent} geometric quantities. This suggests, in presence of singularities, the need for introducing a second length scale in the model. A possibility is replacing the sharp interface energy $\H^n(\Omega\cap\partial E)$ with a diffused interface energy, like the Allen-Cahn energy
\[
\mathcal{E}_\eta(u)=\eta\,\int_\Omega|\nabla u|^2+\frac1\eta\int_\Omega W(u)\,,\qquad \eta>0\,,
\]
for a double-well potential with $\{W=0\}=\{-1,1\}$. We expect $\{u>0\}$ to (approximately) coincide with the union of a curvilinear triangle of area $\varepsilon$ with three stripes having the collapsed segments as their mid-sections, and of width $\eta\,|\log\eta|$; cf. with \cite{delpino1}.}
\end{example}
\begin{example}[Capillarity as a selection principle for Plateau's problem]\label{example selection}
{\rm The following statement holds (as a heuristic principle): {\it Generalized minimizers of $\psi(\varepsilon)$ converge to those minimizers of Plateau's problem \eqref{ell intro} with larger singular set, and when no singular minimizers are present, they select those whose second fundamental form has maximal $L^2$-norm}. Since the second part of this selection principle is justified by standard second variation arguments, we illustrate the first part only. In
\begin{figure}
\input{example.pstex_t}\caption{{\small (a) and (b): a four points configuration $\Gamma$ with a choice of $\mathcal{C}$ such that $\ell$ admits two minimizers, one with and one without singularities; (c) and (d): a six points configuration $\Gamma$ with a choice of $\mathcal{C}$ such that $\ell$ admits many minimizers, possibly with a variable number of singularities; here we have depicted two of them, including the one with four singular points that is selected by the $\psi(\varepsilon)$ problems.}}\label{fig example}
\end{figure}
Figure \ref{fig example}, $\Gamma$ is either given by four or by six points, that are suitably spaced so that $\ell$ has different minimizers. As $\varepsilon\to 0^+$, $\psi(\varepsilon)$ selects those $\ell$-minimizers with singularities over the ones without singularities; and when more minimizers with singularities are present, it selects the ones with the largest number of singularities. Indeed, the approximation of a smooth minimizer in $\ell$ will require an energy cost larger than $2\,\ell$. At the same time, each time a singularity is present, minimizers of $\psi(\varepsilon)$ can save length in the approximation, thus paying less than $2\,\ell$ in energy, and the more the singularities, the bigger the gain. To check this claim, pick $N$ singularities, and denote by $\varepsilon_i$ the volume placed near the $i$-th singularity and by $r_i$ the radius of the three circular arcs enclosing $\varepsilon_i$. Each wetted singularity has area $c_1\,r_i^2$, while the total relaxed energy of the approximating configuration is $\mathcal F=2\,\ell-c_2\,\sum_{i=1}^N\,r_i$. Minimizing under the constraint $\varepsilon=c_1\,\sum_{i=1}^N\,r_i^2$, we must take $r_i=\sqrt{\varepsilon/N c_1}$, thus finding
\[
\psi(\varepsilon)=2\,\ell-c_2\,\sqrt{\frac{\varepsilon\,N_{{\rm max}}}{c_1}}\,,
\]
if $N_{{\rm max}}$ is the maximal number of singularities available among minimizers of $\ell$. This example suggests that (in every dimension) {\it in the presence of singular minimizers of $\ell$, one should have}
\begin{equation}
\label{psip primo meno infinito}
\mbox{$\psi'(\varepsilon)\to-\infty$ as $\varepsilon\to 0^+$}\,.
\end{equation}
This is of course markedly different from what we expect to be the situation when $\ell$ has only smooth minimizers, see \eqref{psi eps quadro expected}. We finally notice that a selection principle for the capillarity model (without homotopic spanning conditions) via its Allen-Cahn approximation has been recently obtained by Leoni and Murray, see \cite{leonimurray1,leonimurray2}. }
\end{example}
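The optimal choice of equal radii in the example above can be checked with a one-line computation; the following is a sketch, in the notation of the example (with $c_1,c_2$ and $N_{\rm max}$ as above), based on the Cauchy--Schwarz inequality.

```latex
% Maximize the saved length \sum_i r_i under the volume constraint
% \varepsilon = c_1 \sum_i r_i^2: by Cauchy--Schwarz,
\[
\sum_{i=1}^N r_i \,\le\, \sqrt{N}\,\Big(\sum_{i=1}^N r_i^2\Big)^{1/2}
\,=\, \sqrt{\frac{N\,\varepsilon}{c_1}}\,,
\]
% with equality if and only if r_1 = ... = r_N = \sqrt{\varepsilon/(N c_1)}.
% Hence the relaxed energy of the approximating configuration satisfies
\[
\mathcal F \,=\, 2\,\ell - c_2\,\sum_{i=1}^N r_i
\,\ge\, 2\,\ell - c_2\,\sqrt{\frac{N\,\varepsilon}{c_1}}\,,
\]
% which is decreasing in N, so the largest available N = N_max is preferred.
% Differentiating the resulting formula for \psi(\varepsilon) gives
\[
\psi'(\varepsilon) \,=\, -\,\frac{c_2}{2}\,\sqrt{\frac{N_{\rm max}}{c_1\,\varepsilon}}
\,\longrightarrow\, -\infty \qquad\text{as }\varepsilon\to 0^+\,,
\]
% consistently with \eqref{psip primo meno infinito}.
```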
\subsection{Statements of the results}\label{section main statements} We now give a more technical introduction to our paper, with precise statements, more bibliographical references, and comments on the proofs.
\medskip
\noindent {\bf Plateau's problem with homotopic spanning}: We fix a compact set $W\subset\mathbb{R}^{n+1}$ (the ``wire frame'') and denote the region accessible by the soap film as
\[
\Omega=\mathbb{R}^{n+1}\setminus W\,.
\]
The typical case we have in mind is $W=I_\delta(\Gamma)$, as discussed in section \ref{section overview}, but this is not necessary. We fix a {\bf spanning class} $\mathcal{C}$, that is, a non-empty family of smooth embeddings of $\SS^1$ into $\Omega$ which is closed under homotopy in $\Omega$. We assume that $W$ and $\mathcal{C}$ are such that the {\bf Plateau's problem defined by $\mathcal{C}$}
\begin{equation}
\label{plateau problem}
\ell=\inf\big\{\H^n(S):S\in\mathcal S\big\}
\end{equation}
satisfies\footnote{The condition $\ell<\infty$ clearly implies that no $\gamma\in\mathcal{C}$ is homotopic to a constant map.} $\ell<\infty$. Here, for the sake of brevity, we have introduced
\[
\mathcal S=\big\{S\subset\Omega:\mbox{$S$ is relatively closed in $\Omega$ and $S$ is $\mathcal{C}$-spanning $W$}\big\}\,.
\]
As proved in \cite{harrisonpughACV,DLGM}, if $\ell<\infty$, then there exists a compact, $\H^n$-rectifiable set $S$ such that $\H^n(S)=\ell$; see also \cite{harrisonJGA,davidshouldwe,fangAPISA,hpOPEN,hpCOHOMO,dPdRghira,delederosaghira,friedKP,harrisonpughGENMETH,FangKola,derosaSIAM} for related existence results. In addition, $S$ minimizes $\H^n$ with respect to Lipschitz perturbations of the identity localized in $\Omega$, so that: (i) $S$ is a classical minimal surface outside of an $\H^n$-negligible, relatively closed subset of $\Omega$ by \cite{Almgren76}; (ii) if $n=1$, $S$ consists of finitely many segments, possibly meeting in three equal angles at singular $Y$-points in $\Omega$; (iii) if $n=2$, $S$ satisfies {\bf Plateau's laws} by \cite{taylor76}: namely, $S$ is locally diffeomorphic either to a plane, or to a cone $Y=T^1\times\mathbb{R}$, or to a cone $T^2$, where $T^n$ is the cone with vertex at the origin spanned by the $(n-1)$-dimensional faces of a regular tetrahedron in $\mathbb{R}^{n+1}$. The validity of Plateau's laws in this context makes \eqref{plateau problem} more suitable when one is motivated by physical considerations: indeed, minimizers of the codimension one Plateau's problem in the class of rectifiable currents are necessarily smooth if $n\le 6$. Although smoothness is desirable for geometric applications, it creates an {\it a priori} limitation when studying actual soap films; see also \cite{davidshouldwe,harrisonpughACV,DLGM}.
\medskip
\noindent {\bf The capillarity problem and the relaxed energy}: Next, we give a precise formulation of the capillarity problem $\psi(\varepsilon)$ at volume $\varepsilon>0$, which is defined as
\begin{equation}
\label{psi eps}
\psi(\varepsilon)=\inf\Big\{\H^n(\Omega\cap\partial E):\mbox{$E\in\mathcal{E}$, $|E|=\varepsilon$, $\Omega\cap\partial E$ is $\mathcal{C}$-spanning $W$}\Big\}\,.
\end{equation}
Here we have introduced the family of sets
\begin{equation}\label{class E}
\mathcal{E}=\Big\{E\subset\Omega:\,\mbox{$E$ is an open set and $\partial E$ is $\H^n$-rectifiable}\Big\}\,.
\end{equation}
If $E\in\mathcal{E}$, then $\partial E$ is $\H^n$-finite and covered by countably many Lipschitz images of $\mathbb{R}^n$ into $\mathbb{R}^{n+1}$. Thus, $E$ is of finite perimeter in $\Omega$ by a classical result of Federer, and its (distributional) perimeter $P(E;U)$ in an open set $U\subset\Omega$ is equal to $\H^n(U\cap\partial^* E)$, where $\partial^*E$ is the reduced boundary of $E$ (notice that, in general, $P(E;U)\le\H^n(U\cap\partial E)$). The relaxed energy $\mathcal F$ is defined by
\begin{eqnarray*}
\mathcal F(K,E;U)&=&\H^n(U\cap\partial^*E)+2\,\H^n\big(U\cap(K\setminus\partial^*E)\big)\,,\qquad \mbox{$U\subset\Omega$ open}\,,
\\
\mathcal F(K,E)&=&\mathcal F(K,E;\Omega)\,,
\end{eqnarray*}
on every pair $(K,E)$ in the family $\mathcal{K}$ given by
\[
\begin{split}
\mathcal{K}=
\Big\{(K,E):\,&\mbox{$E\subset\Omega$ is open with $\Omega\cap\mathrm{cl}\,(\partial^*E)=\Omega\cap\partial E\subset K$\,,}
\\
&\mbox{$K\in\mathcal S$ and $K$ is $\H^n$-rectifiable in $\Omega$}\Big\}\,.
\end{split}
\]
By the requirement $K\in\mathcal S$, $K$ is $\mathcal{C}$-spanning $W$, while $\Omega\cap\partial E$, which is always a subset of $K$, may not be $\mathcal{C}$-spanning $W$; we expect this to happen when collapsing occurs, see Figure \ref{fig example23}.
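A standard two-dimensional example (not taken from the discussion above, and ignoring the spanning constraint) illustrates both the strict inequality $P(E;U)<\H^n(U\cap\partial E)$ recalled after \eqref{class E} and the multiplicity $2$ carried by the collapsed set $K\setminus\partial^*E$ in $\mathcal F$:

```latex
% n = 1: a disk with a closed slit removed
\[
E \,=\, B_1(0)\setminus\big\{(x_1,0):|x_1|\le \tfrac12\big\}\,\subset\,\mathbb{R}^2\,.
\]
% Here \partial E is the unit circle together with the slit, so that
% \H^1(\partial E) = 2\pi + 1, while the slit points have density 1 for E,
% so \partial^* E is the circle alone and P(E;\mathbb{R}^2) = 2\pi: the slit
% is invisible to the distributional perimeter. Taking K = \partial E, the
% relaxed energy charges the slit with the factor 2,
\[
\mathcal F(K,E;\mathbb{R}^2) \,=\, \H^1(\partial^*E) + 2\,\H^1(K\setminus\partial^*E)
\,=\, 2\pi + 2\,,
\]
% in accordance with its interpretation as two collapsed leaves of \partial E_j.
```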
\medskip
\noindent {\bf Assumptions on $\Omega$}: We make two main geometric assumptions on $W$ and $\mathcal{C}$. Firstly, in constructing a system of volume-fixing variations for a given minimizing sequence of $\psi(\varepsilon)$ (see step two of the proof of Theorem \ref{thm lsc}) we shall assume that
\begin{equation}
\label{hp on W and C 0}
\mbox{$\exists\,\tau_0>0$ such that, for every $\tau<\tau_0$, $\mathbb{R}^{n+1}\setminus I_{\tau}(W)$ is connected}\,.
\end{equation}
This is compatible with the idea that, in the physical case $n=2$, $W$ represents a ``solid wire''. Secondly, to verify the finiteness of $\psi(\varepsilon)$ (see step one in the proof of Theorem \ref{thm lsc}), we require that
\begin{eqnarray}
\label{hp on W and C}
\mbox{$\exists\,\eta_0>0$ and a minimizer $S$ in $\ell$ s.t. $\gamma\setminus I_{\eta_0}(S)\ne\emptyset$ for every $\gamma\in\mathcal{C}$}\,.
\end{eqnarray}
This is clearly a generic situation, which (thanks to the convex hull property of stationary varifolds) is implied, for example, by the much more stringent condition that $\gamma\setminus Z\ne\emptyset$ for every $\gamma\in\mathcal{C}$, where $Z$ is the closed convex hull of $W$. Finally, we shall also assume that ``$\partial\Omega=\partial W$ is smooth'': by this we mean that, locally near each $x\in\partial\Omega$, $\Omega$ can be described as the epigraph of a smooth function of $n$ variables.
\bigskip
\noindent {\bf Existence of minimizers and Euler-Lagrange equations}: Our first main result is the existence of generalized minimizers of $\psi(\varepsilon)$.
\begin{theorem}[Existence of generalized minimizers]\label{thm lsc}
Let $\ell<\infty$, $\partial W$ be smooth and let \eqref{hp on W and C 0} and \eqref{hp on W and C} hold. If $\{E_j\}_j$ is a minimizing sequence for $\psi(\varepsilon)$,
then there exists a pair $(K,E)\in\mathcal{K}$ with $|E|=\varepsilon$ such that, up to possibly extracting subsequences, and up to possible modifications of each $E_j$ outside a large ball containing $W$ (with both operations resulting in defining a new minimizing sequence for $\psi(\varepsilon)$, still denoted by $\{E_j\}_j$), we have that
\begin{equation}\label{mininizing seq conv to gen minimiz}
\begin{split}
&\mbox{$E_j\to E$ in $L^1(\Omega)$}\,,
\\
&\H^n\llcorner(\Omega\cap\partial E_j)\stackrel{*}{\rightharpoonup} \theta\,\H^n\llcorner K\qquad\mbox{as Radon measures in $\Omega$}\,,
\end{split}
\end{equation}
as $j\to\infty$, where $\theta:K\to\mathbb{R}$ is an upper semicontinuous function with
\begin{equation}
\label{theta density}
\mbox{$\theta= 2$ $\H^n$-a.e. on $K\setminus\partial^*E$},\qquad\mbox{$\theta=1$ on $\Omega\cap\partial^*E$}\,.
\end{equation}
Moreover, $\psi(\varepsilon)=\mathcal F(K,E)$ and, for a suitable constant $C$, $\psi(\varepsilon)\le 2\,\ell+C\,\varepsilon^{n/(n+1)}$.
\end{theorem}
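The exponent $n/(n+1)$ in the last bound of Theorem \ref{thm lsc} can be guessed by a scaling heuristic (this is not the actual proof, which must also preserve the spanning constraint via \eqref{hp on W and C}; see step one of the proof): append to a minimizer $S$ of $\ell$ a small ball of volume $\varepsilon$ and count its area.

```latex
% A ball of volume \varepsilon in R^{n+1}: if |B_r| = \omega_{n+1} r^{n+1} = \varepsilon,
% then
\[
r \,=\, \Big(\frac{\varepsilon}{\omega_{n+1}}\Big)^{1/(n+1)}\,,\qquad
\H^n(\partial B_r) \,=\, (n+1)\,\omega_{n+1}\,r^n
\,=\, (n+1)\,\omega_{n+1}^{1/(n+1)}\,\varepsilon^{n/(n+1)}\,.
\]
% The collapsed pair (K,E) = (S \cup \partial B_r, B_r) then has relaxed energy
\[
\mathcal F(K,E) \,=\, \H^n(\partial B_r) + 2\,\H^n(S)
\,=\, 2\,\ell + C\,\varepsilon^{n/(n+1)}\,,\qquad C = (n+1)\,\omega_{n+1}^{1/(n+1)}\,,
\]
% matching the upper bound \psi(\varepsilon) \le 2\ell + C\,\varepsilon^{n/(n+1)}.
```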
\begin{remark}
{\rm Whenever $(K,E)\in\mathcal{K}$ is such that $|E|=\varepsilon$, $\mathcal F(K,E)=\psi(\varepsilon)$ and there exists a minimizing sequence $\{E_j\}_j$ for $\psi(\varepsilon)$ which converges to $(K,E)$ as in \eqref{mininizing seq conv to gen minimiz}, we say that $(K,E)$ is a {\bf generalized minimizer of $\psi(\varepsilon)$}. We say that $(K,E)$ is {\bf collapsed} if $K\setminus\partial E\ne\emptyset$. If $(K,E)$ is not collapsed, then $E$ is a (standard) minimizer of $\psi(\varepsilon)$.}
\end{remark}
Next, we derive the Euler-Lagrange equations for a generalized minimizer and apply Allard's theorem.
\begin{theorem}[Euler-Lagrange equation for generalized minimizers]\label{thm basic regularity}
Let $\ell<\infty$, $\partial W$ be smooth and let \eqref{hp on W and C 0} and \eqref{hp on W and C} hold. If $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$ and $f:\Omega\to\Omega$ is a diffeomorphism such that $|f(E)|=|E|$, then
\begin{equation}
\label{minimality KE against diffeos}
\mathcal F(K,E)\le\mathcal F(f(K),f(E))\,.
\end{equation}
In particular:
\begin{enumerate}
\item[(i)] there exists $\l\in\mathbb{R}$ such that
\begin{equation}
\label{stationary main}
\l\,\int_{\partial^*E}X\cdot\nu_E\,d\H^n=\int_{\partial^*E}{\rm div}\,^K\,X\,d\H^n+2\,\int_{K\setminus\partial^*E}{\rm div}\,^K\,X\,d\H^n\,,
\end{equation}
for every $X\in C^1_c(\mathbb{R}^{n+1};\mathbb{R}^{n+1})$ with $X\cdot\nu_\Omega=0$ on $\partial\Omega$, where ${\rm div}\,^K$ denotes the tangential divergence along $K$;
\item[(ii)] there exists $\Sigma\subset K$, closed and with empty interior in $K$, such that $K\setminus\Sigma$ is a smooth hypersurface, $K\setminus(\Sigma\cup\partial E)$ is a smooth embedded minimal hypersurface, $\H^n(\Sigma\setminus\partial E)=0$, $\Omega\cap(\partial E\setminus\partial^*E)\subset \Sigma$ has empty interior in $K$, and $\Omega\cap\partial^*E$ is a smooth embedded hypersurface with constant scalar (w.r.t. $\nu_E$) mean curvature $\l$.
\end{enumerate}
\end{theorem}
\begin{remark}\label{remark mink}
{\rm Although we do not pursue this point here, we mention that we would expect $(K,E)$ to be a proper minimizer of $\mathcal F$ among pairs $(K',E')\in\mathcal{K}$ with $|E'|=\varepsilon$ (and not just when $K'=f(K)$ for a diffeomorphism $f$, as proved in \eqref{minimality KE against diffeos}). To show this we would need to approximate in energy a generic $(K',E')$ by competitors $\{F_j\}_j$ for $\psi(\varepsilon)$. The natural {\it ansatz} for this approximation would be taking $F_j=U_{\eta_j}(K'\cup E')\setminus I_{\eta_j}(K'\cap E')$ for $\eta_j\to 0^+$, where $U_\eta$ denotes the {\it open} $\eta$-neighborhood of a set. The convergence of this approximation is delicate, and can be made to work by elaborating on the ideas contained in \cite{ambrosiocolevilla,villa} at least for $(K',E')$ in certain subclasses of $\mathcal{K}$.}
\end{remark}
\begin{remark}
{\rm Theorem \ref{thm basic regularity} points at two interesting free boundary problems. The first problem concerns the size and properties of $\partial E\setminus\partial^*E$, which is the transition region between constant and zero mean curvature; similar free boundary problems (on graphs rather than on unconstrained surfaces) have been considered, e.g., in \cite{cjk,caffasilvasavinDOUBLE,caffasilvasavinDOUBLE2}. The second problem concerns the wetted region $\partial\Omega\cap\partial E$, which could either be $\H^n$-negligible or not, recall Figure \ref{fig example23}: in the former case, $\partial\Omega\cap\partial E$ should be $(n-1)$-dimensional, while in the latter case $\partial\Omega\cap\partial E$ should be a set of finite perimeter inside $\partial\Omega$, and Young's law $\nu_\Omega\cdot\nu_E=0$ should hold at generic boundary points of $\partial\Omega\cap\partial E$ relative to $\partial\Omega$; see for example \cite{dephilippismaggiCAP-ARMA,dephilippismaggiCAP-CRELLE}.}
\end{remark}
\noindent {\bf Convergence towards Plateau's problem}: The next theorem establishes the nature of Plateau's problem $\ell$ as the singular limit of the capillarity problems $\psi(\varepsilon)$ as $\varepsilon\to0^+$.
\begin{theorem}
[Plateau's problem as a singular limit of capillarity problems]\label{thm convergence as eps goes to zero} If $\ell<\infty$, $\partial W$ is smooth, and \eqref{hp on W and C 0} and \eqref{hp on W and C} hold, then $\psi$ is lower semicontinuous on $(0,\infty)$ and
\begin{equation}
\label{convergence of optimal value}
\lim_{\varepsilon\to 0^+}\psi(\varepsilon)=2\,\ell\,.
\end{equation}
In addition, if $\{(K_h,E_h)\}_h$ is a sequence of generalized minimizers of $\psi(\varepsilon_h)$ for $\varepsilon_h\to 0^+$ as $h\to\infty$, then there exists a minimizer $S$ in $\ell$ such that, up to extracting subsequences and as $h\to\infty$,
\begin{equation}
\label{weak star convergence of gen minimizers}
\H^n\llcorner(\Omega\cap\partial^*E_h)+2\,\H^n\llcorner(K_h\setminus\partial^*E_h)\stackrel{*}{\rightharpoonup}\,2\,\H^n\llcorner S\,,\qquad\mbox{as Radon measures in $\Omega$}\,.
\end{equation}
\end{theorem}
\begin{remark}
{\rm The behavior of $\psi(\varepsilon)-2\,\ell$ as $\varepsilon\to 0^+$ is expected to depend heavily on whether minimizers of $\ell$ have or do not have singularities, as noticed in \eqref{psi eps quadro expected} and \eqref{psip primo meno infinito}. In particular, we expect $\psi'(\varepsilon)\to 0^+$ only in special situations: when this happens, we have a vanishing mean curvature approximation of Plateau's problem which is related to Rellich's conjecture, see e.g. \cite{breziscoronRELL}.}
\end{remark}
\begin{remark}\label{remark L1}
{\rm The Hausdorff convergence of $K_h$ to $S$ is not immediate (nor is the convergence in the sense of varifolds). Given \eqref{weak star convergence of gen minimizers}, Hausdorff convergence would follow from an area lower bound on $K_h$. In turn, this could be deduced (thanks to area monotonicity) from a uniform $L^p$-bound, for some $p>n$, on the mean curvature vectors $\vec{H}_{V_h}$ of the integer varifolds $V_h$ supported on $K_h$, with multiplicity $2$ on $K_h\setminus\partial^*E_h$ and multiplicity $1$ on $\partial^*E_h$. Notice however that, by \eqref{stationary main}, if $\l_h$ is the Lagrange multiplier of $(K_h,E_h)$, then $\vec{H}_{V_h}=\l_h\,\nu_{E_h}\,1_{\partial^*E_h}$, so that, even when $n=1$, the only uniform $L^p$-bound that can hold is the one with $p=1$; see Example \ref{example tripe sing}.}
\end{remark}
\noindent {\bf Proofs}: We approach Theorem \ref{thm lsc} with the method introduced in \cite{DLGM} to solve \eqref{plateau problem}, which we now briefly summarize. The idea in \cite{DLGM} is to consider a minimizing sequence $\{S_j\}_j$ for $\ell$, which (up to extracting subsequences) immediately gives Radon measures $\mu_j=\H^n\llcorner S_j$ with $\mu_j\stackrel{*}{\rightharpoonup}\mu$ as Radon measures in $\Omega$, and with $S={\rm spt}\,\mu$ $\mathcal{C}$-spanning $W$. By comparing $S_j$ with its cup competitors $S_j'$, see
\begin{figure}
\input{dlgm.pstex_t}\caption{{\small (a) the cup competitor of a set $S$ in $B_r(x)$ relative to an $\H^n$-maximal connected component $A$ of $\partial B_r(x)\setminus S$; (b) the cone competitor of $S$ in $B_r(x)$.}}\label{fig dlgm}
\end{figure}
Figure \ref{fig dlgm}-(a), and then letting $j\to\infty$, it is shown that $\mu(B_r(x))\ge \theta_0(n)\,r^n$ for every $x\in{\rm spt}\,\mu$; by comparing $S_j$ with its cone competitors $S_j'$, and then letting $j\to\infty$, it is proved that $r^{-n}\,\mu(B_r(x))$ is increasing in $r$. By Preiss' theorem \cite{preiss,DeLellisNOTES} it follows that $\mu=\theta\,\H^n\llcorner S$ and that $S$ is $\H^n$-rectifiable. Finally, spherical isoperimetry and a geometric argument imply that $\theta\ge 1$ $\H^n$-a.e. on $S$, which in turn suffices to conclude that $S$ is a minimizer in $\ell$ since, by lower semicontinuity, $\H^n(S)\le\mu(\Omega)\le\liminf_j\mu_j(\Omega)=\ell$, and because $S$ is in the competition class of $\ell$.
Adapting this approach to a minimizing sequence $\{E_j\}_j$ for $\psi(\varepsilon)$ requires the introduction of new ideas. First, cup and cone competitors for $\{E_j\}_j$ have to be defined as {\it boundaries}, a feature that requires taking into consideration two kinds of cup competitors, and that also leads to other difficulties. Second, local variations need to be compensated by volume-fixing variations, which must be uniform along the elements of the minimizing sequence. At this stage, we can prove that $\mu_j=\H^n\llcorner(\Omega\cap\partial E_j)\stackrel{*}{\rightharpoonup}\mu=\theta\,\H^n\llcorner K$ for an $\H^n$-rectifiable set $K$ which is $\mathcal{C}$-spanning $W$. The same argument as in \cite{DLGM} shows that $\theta\ge 1$, while the lower bound $\theta\ge 2$ $\H^n$-a.e. on $K\setminus\partial^*E$ requires a further elaboration, which takes into account that we are considering the convergence of boundaries. We cannot conclude that $\mathcal F(K,E)=\psi(\varepsilon)$ just by lower semicontinuity, because clearly $(K,E)$ is not in the competition class of $\psi(\varepsilon)$. We thus improve lower semicontinuity by some non-concentration estimates: at infinity, at the boundary, and by folding against $K$. The latter are the most interesting ones, and they require a careful comparison argument based on the introduction of a third kind of competitors, called slab competitors. The construction of the various competitors is discussed in section \ref{section five competitors}, while the proof of Theorem \ref{thm lsc} is contained in section \ref{section existence of generalized minimizers}. Slab competitors are also used in the delicate proof of \eqref{minimality KE against diffeos}, whose starting point is a set of ideas originating in \cite{depauwHardt}, as further developed in \cite{DLGM} when addressing the formulation of Plateau's problem for David's sliding minimizers; see section \ref{section theorem basic regularity}.
Finally, in section \ref{section convergence to plateau} we prove Theorem \ref{thm convergence as eps goes to zero}: the main difficulty, explained there in more detail, is that, at vanishing volume, we have no non-trivial local limit sets to be used for constructing uniform volume-fixing variations.
\bigskip
\noindent {\bf Structure of generalized minimizers:} Theorem \ref{thm lsc}, Theorem \ref{thm basic regularity} and Theorem \ref{thm convergence as eps goes to zero} lay the foundations to study the properties of generalized minimizers of $\psi(\varepsilon)$. The most intriguing questions are concerned with the relations between the properties of minimizers in Plateau's problem $\ell$, like the presence or the absence of singularities, and the properties of minimizers in $\psi(\varepsilon)$ at small $\varepsilon$: collapsing vs non-collapsing and the sign of $\l$, limiting behavior of $\l$ as $\varepsilon\to 0^+$, dimensionality of the wetted part of the wire, etc. This is of course a very large set of problems, which will require further investigations. In the companion paper \cite{kms2}, we start this kind of study by proving that collapsed minimizers have non-positive Lagrange multipliers, deduce from this property that they satisfy the convex hull property, and lay the ground for the forthcoming paper \cite{kms3}, where we further investigate the regularity of the collapsed set $K\setminus\partial^*E$.
\bigskip
\noindent {\bf Acknowledgement:} We thank an anonymous referee for several useful remarks that helped us improve the quality of the paper. Antonello Scardicchio has contributed with many inspiring discussions to the physical background of this work. This work was completed during Spring 2019, while FM was first a member of the IAS in Princeton, through support from the Charles Simonyi Endowment, and then a visitor of the ICTP in Trieste. All the authors were supported by the NSF grants DMS-1565354, DMS-RTG-1840314 and DMS-FRG-1854344.
\section{Cone, cup and slab competitors, nucleation and collapsing}\label{section five competitors} Section \ref{section notation} contains the notation and terminology used in the paper. Section \ref{section preliminaries} collects some basic properties of $\mathcal{C}$-spanning sets. Sections \ref{section cup first}, \ref{section slab} and \ref{section cone} deal with cup, slab and cone competitors. Section \ref{section nucleation} contains the nucleation lemma for volume-fixing variations, and section \ref{section spherical collapsing} concerns density lower bounds for collapsing sequences of sets of finite perimeter.
\subsection{Notation and terminology}\label{section notation} We denote by $|A|$ and $\H^s(A)$ the Lebesgue and the $s$-dimensional Hausdorff measures of $A\subset\mathbb{R}^{n+1}$, by $I_\eta(A)$ and $U_\eta(A)$ the closed and open $\eta$-neighborhoods of $A$, and by $B_r(x)$ the open ball with center $x$ and radius $r$. We work in the framework of \cite{SimonLN,AFP,maggiBOOK}. Given $k\in\mathbb{N}$, $1\le k\le n$, a Borel set $M\subset\mathbb{R}^{n+1}$ is {\bf countably $\H^k$-rectifiable} if it is covered by countably many Lipschitz images of $\mathbb{R}^k$; it is {\bf (locally) $\H^k$-rectifiable} if, in addition, $M$ is (locally) $\H^k$-finite. If $M$ is locally $\H^k$-rectifiable, then for $\H^k$-a.e. $x\in M$ there exists a unique $k$-plane $T_xM$ such that, as $r\to 0^+$, $\H^k\llcorner(M-x)/r\stackrel{*}{\rightharpoonup}\H^k\llcorner T_xM$ as Radon measures in $\mathbb{R}^{n+1}$; $T_xM$ is called the {\bf approximate tangent plane to $M$ at $x$}. Given a Lipschitz map $f:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$, we denote by $J^Mf$ its {\bf tangential Jacobian along $M$}, so that if $f$ is smooth and $f(x)=x+t\,X(x)+{\rm o}(t)$ in $C^1$ as $t\to 0^+$, then $J^Mf=1+t\,{\rm div}\,^MX+{\rm o}(t)$, where ${\rm div}\,^M X$ is the {\bf tangential divergence of $X$ along $M$}; moreover, $M$ has {\bf distributional mean curvature vector $\vec{H}\in L^1_{\rm loc}(U;\H^k\llcorner M)$ in $U$} open, if
\[
\int_M\,{\rm div}\,^M\,X\,d\H^k=\int_M\,X\cdot\vec{H}\,d\H^k\,,\qquad\forall X\in C^\infty_c(U;\mathbb{R}^{n+1})\,,
\]
see \cite[Sections 8 and 9]{SimonLN}. A Borel set $E\subset\mathbb{R}^{n+1}$ has {\bf finite perimeter} if there exists an $\mathbb{R}^{n+1}$-valued Radon measure on $\mathbb{R}^{n+1}$, denoted by $\mu_E$, such that $\langle\mu_E,X\rangle=\int_E{\rm div}\, X$ whenever $X\in C^1_c(\mathbb{R}^{n+1};\mathbb{R}^{n+1})$, and $P(E;\mathbb{R}^{n+1})=|\mu_E|(\mathbb{R}^{n+1})<\infty$. The set of points $x\in\mathbb{R}^{n+1}$ such that $|\mu_E|(B_r(x))^{-1}\,\mu_E(B_r(x))\to\nu_E(x)\in\SS^n$ as $r\to 0^+$ is denoted by $\partial^*E$, and called the {\bf reduced boundary} of $E$. Then $\mu_E=\nu_E\,\H^n\llcorner\partial^*E$, $\partial^*E$ is $\H^n$-rectifiable in $\mathbb{R}^{n+1}$, and $T_x\partial^*E=\nu_E(x)^\perp$ for every $x\in\partial^*E$. The {\bf set $E^{(t)}$ of points of density $t\in[0,1]$ of $E$} is given by those $x\in \mathbb{R}^{n+1}$ with $|E\cap B_r(x)|/|B_r(x)|\to t$ as $r\to 0^+$, and (see, e.g., \cite[Theorem 16.2]{maggiBOOK}),
\begin{equation}
\label{federers theorem}
\mbox{$\{\partial^*E,E^{(0)},E^{(1)}\}$ is a partition of $\mathbb{R}^{n+1}$ modulo $\H^n$}\,.
\end{equation}
Federer's criterion \cite[4.5.11]{FedererBOOK} states that if the {\bf essential boundary} $\partial^\mathrm{e} E=\mathbb{R}^{n+1}\setminus(E^{(0)}\cup E^{(1)})$ is $\H^n$-finite, then $E$ is of finite perimeter in $\mathbb{R}^{n+1}$. If $E$ is open, then $\partial^\mathrm{e} E\subset\partial E$: hence, if $E\in\mathcal{E}$ and $\H^n(\partial\Omega)<\infty$, then $E$ is of finite perimeter.
\subsection{Some preliminary results}\label{section preliminaries} In the following, $W$ is a compact set, $\mathcal{C}$ a spanning class for $W$ and $\Omega=\mathbb{R}^{n+1}\setminus W$.
\begin{lemma}\label{statement K spans}
If $\{K_j\}_j$ are relatively closed sets in $\Omega$, such that each $K_j$ is $\mathcal{C}$-spanning $W$ and $\H^n\llcorner K_j\stackrel{*}{\rightharpoonup}\mu$ as Radon measures in $\Omega$, then $K=\Omega \cap {\rm spt}\mu$ is $\mathcal{C}$-spanning $W$.
\end{lemma}
\begin{proof} See \cite[Step 2, proof of Theorem 4]{DLGM}.
\end{proof}
\begin{lemma}\label{lemma 10}
Let $K$ be relatively closed in $\Omega$ and let $B_r(x)\subset\subset\Omega$. Then $K$ is $\mathcal{C}$-spanning $W$ if and only if, whenever $\gamma\in\mathcal{C}$ is such that $\gamma\cap K\setminus B_r(x)=\emptyset$, then there exists a connected component of $\gamma\cap \mathrm{cl}\,(B_r(x))$ which is diffeomorphic to an interval, and whose end-points belong to distinct connected components of $\mathrm{cl}\,(B_r(x))\setminus K$, as well as to distinct components of $\partial B_r(x)\setminus K$.
\end{lemma}
\begin{proof}
This is \cite[Lemma 10]{DLGM}.
\end{proof}
\begin{lemma}\label{statement spanning is close by Lipschitz maps}
If $K$ is $\mathcal{C}$-spanning $W$, $B_r(x)\subset\subset \Omega$, and $f:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ is a bi-Lipschitz map with $\{f\ne{\rm id}\,\}\subset\subset B_r(x)$ and $f(B_r(x))\subset B_r(x)$, then $f(K)$ is $\mathcal{C}$-spanning $W$.
\end{lemma}
\begin{proof}
Since $f(K)\setminus B_r(x)=K\setminus B_r(x)$, if $f(K)$ is not $\mathcal{C}$-spanning $W$, then there exists $\gamma\in\mathcal{C}$ with $\gamma\cap K\setminus B_r(x)=\emptyset$ such that $\gamma\cap f(K)=\emptyset$. Hence, the curve $\tilde \gamma := f^{-1} \circ \gamma$ is a continuous embedding of $\mathbb{S}^1$ in $\Omega$, homotopic to $\gamma$ in $\Omega$, and such that $\tilde \gamma \cap K = \emptyset$. Since $\tilde \gamma$ and $W$ are compact and $K$ is closed, $\tilde \gamma$ has positive distance from $K\cup W$, and by smoothing out $\tilde\gamma$ we define a smooth embedding $\hat \gamma$ of $\mathbb{S}^1$ into $\Omega$, disjoint from $K$, and homotopic to $\tilde \gamma$ (and therefore to $\gamma$) in $\Omega$: a contradiction.
\end{proof}
\begin{lemma}
\label{lemma close by Lipschitz at boundary}
If $\partial\Omega$ is smooth, then there exists $r_0>0$ with the following property. If $x\in\partial \Omega$, $\Omega \subset \Omega'$, $f:\mathrm{cl}\,(\Omega)\to\mathrm{cl}\,(\Omega')=f(\mathrm{cl}\,(\Omega))$ is a homeomorphism with $f(\partial\Omega)=\partial\Omega'$, $\{f\ne{\rm id}\,\}\subset\subset B_{r_0}(x)$, and $f(B_{r_0}(x)\cap\mathrm{cl}\,(\Omega))=B_{r_0}(x)\cap\mathrm{cl}\,(\Omega')$, and if $K$ is $\mathcal{C}$-spanning $W$, then $K'=f(K\cap\Omega^*)$ is relatively closed in $\Omega$ and is $\mathcal{C}$-spanning $W$, where $\Omega^*=f^{-1}(\Omega)$.
\end{lemma}
\begin{proof}
{\it Step one}: We show that, for $K$ relatively closed in $\Omega$ and $B_{r_0}(x)$ as in the statement, $K$ is $\mathcal{C}$-spanning $W$ if and only if, whenever $\gamma\in\mathcal{C}$ is such that $\gamma\cap K\setminus B_{r_0}(x)=\emptyset$, then there exists a connected component of $\gamma\cap \mathrm{cl}\,(B_{r_0}(x))$, diffeomorphic to an interval, and whose end-points belong to distinct connected components of $\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus K$. We only prove the ``only if'' part. First of all, we notice that $\gamma$ cannot be contained in $\Omega\cap B_{r_0}(x)$, because $r_0$ can be chosen small enough to ensure that $\Omega\cap B_{r_0}(x)$ is simply connected, and because $\ell<\infty$ implies that no element of $\mathcal{C}$ is homotopic to a constant. Arguing as in \cite[Step two, proof of Lemma 10]{DLGM}, we can assume that $\gamma$ and $\partial B_{r_0}(x)$ intersect transversally, so that there exist finitely many disjoint $I_i=[a_i,b_i]\subset\SS^1$ such that $\gamma\cap\mathrm{cl}\,(B_{r_0}(x))=\bigcup_i\gamma(I_i)$ with $\gamma\cap\partial B_{r_0}(x)=\bigcup_i\{\gamma(a_i),\gamma(b_i)\}$ and $\gamma\cap B_{r_0}(x)=\bigcup_i\gamma((a_i,b_i))$. Assume by contradiction that for each $i$ there exists a connected component $A_i$ of $\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus K$ such that $\gamma(a_i),\gamma(b_i)\in A_i$. If $r_0$ is small enough, then $\mathrm{cl}\,(\Omega\cap B_{r_0}(x))$ is diffeomorphic to $\mathrm{cl}\,(B_1(0)\cap\{x_1>0\})$ through a diffeomorphism mapping $B_{r_0}(x)\cap\partial\Omega$ into $B_1(0)\cap\{x_1=0\}$. Using this fact and the connectedness of each $A_i$, we define smooth embeddings $\tau_i:I_i\to A_i$ with $\tau_i(a_i)=\gamma(a_i)$, $\tau_i(b_i)=\gamma(b_i)$ and $\tau_i$ homotopic in $\Omega\cap B_{r_0}(x)$ to the restriction of $\gamma$ to $I_i$. Moreover, this can be done so that $\tau_i(I_i)\cap\tau_j(I_j)=\emptyset$ for $i\ne j$.
The new embedding $\bar\gamma$ of $\SS^1$ obtained by replacing $\gamma$ with $\tau_i$ on $I_i$ is thus homotopic to $\gamma$ in $\Omega$, and such that $\bar\gamma\cap K=\emptyset$, a contradiction.
\medskip
\noindent {\it Step two}: Since $K\cap\Omega^*$ is relatively closed in $\Omega^*$, $K'=f(K\cap\Omega^*)$ is relatively closed in $\Omega=f(\Omega^*)$. Should $K'$ not be $\mathcal{C}$-spanning $W$, given that $K'\setminus B_{r_0}(x)=K\setminus B_{r_0}(x)$, we could find $\gamma\in\mathcal{C}$ with $\gamma\cap K\setminus B_{r_0}(x)=\emptyset$ and $\gamma\cap K'=\emptyset$. By step one, there would be a connected component $\sigma$ of $\gamma\cap \mathrm{cl}\,(B_{r_0}(x))$, diffeomorphic to an interval, and such that: (i) the end-points $p$ and $q$ of $\sigma$ (which lie on $\partial B_{r_0}(x)$) belong to distinct connected components of $\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus K$; and (ii) $p$ and $q$ belong to the same connected component of $\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus K'$. Since $f$ is a homeomorphism, $f(p)=p$, and $f(q)=q$, by (i) we would find that $p$ and $q$ belong to {\it distinct} connected components of
\[
f\big(\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus K\big)=\Omega'\cap\mathrm{cl}\,(B_{r_0}(x))\setminus f(K)\,,
\]
while, by (ii), there would be an arc connecting $p$ and $q$ in $\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus K'$, where
\begin{eqnarray*}
\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus K'&=&\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus f(K\cap\Omega^*)
\\
&=&\Omega\cap\mathrm{cl}\,(B_{r_0}(x))\setminus f(K)
\,\,\subset\,\,\Omega'\cap\mathrm{cl}\,(B_{r_0}(x))\setminus f(K)\,,
\end{eqnarray*}
and hence $p$ and $q$ would belong to a {\it same} component of $\Omega'\cap\mathrm{cl}\,(B_{r_0}(x))\setminus f(K)$.
\end{proof}
\subsection{Cup competitors}\label{section cup first} Given $E\in\mathcal{E}$, $B_r(x)\subset\subset\Omega$ and a connected component $A$ of $\partial B_r(x)\setminus\partial E$, cup competitors are used to compare $\H^n(B_r(x)\cap\partial E)$ with $\H^n(\partial B_r(x)\setminus A)$. The construction is more involved than in the case of Plateau's problem considered in \cite{DLGM} as we need to construct cup competitors as {\it boundaries}, and we have to argue differently depending on whether $A\cap E=\emptyset$ or $A\subset E$.
\begin{lemma}[Cup competitors]\label{lemma cup competitor first kind}
Let $E\in\mathcal{E}$ be such that $\Omega \cap \partial E$ is $\mathcal{C}$-spanning $W$, let $x\in\Omega$, $0<r<{\rm dist}(x,\partial\Omega)$, and let $A$ be a connected component of $\partial B_r(x)\setminus\partial E$. Assume that $\partial E\cap\partial B_r(x)$ is $\H^{n-1}$-rectifiable. Then, for every $\eta \in \left( 0, r/2 \right)$ there exists a set $F = F_\eta \in \mathcal{E}$ so that $\Omega \cap \partial F$ is $\mathcal{C}$-spanning $W$, and
\begin{eqnarray}\label{cup fuori da br chiusa}
&& \partial F\setminus\mathrm{cl}\,(B_r(x))=\partial E\setminus\mathrm{cl}\,(B_r(x))\,,
\\
\label{cup buccia}
&& \lim_{\eta\to 0^+} \H^n \big( (\partial B_r (x) \cap \partial F) \, \Delta \, (\partial B_r (x) \setminus A) \big) = 0\,,
\\
\label{cup area totale}
&& \limsup_{\eta\to 0^+}\H^n(\Omega\cap\partial F)\le\H^n\big(\Omega \cap \partial E\setminus B_r(x)\big)+2\,\H^n(\partial B_r(x)\setminus A)\,.
\end{eqnarray}
Moreover,
\begin{itemize}
\item[(i)] If $A\cap E=\emptyset$, then
\begin{eqnarray}
\label{cup first br area}
\limsup_{\eta\to 0^+}\H^n(B_r(x)\cap\partial F)\le\H^n\Big(\partial B_r(x)\setminus \big(A\cup (E\cap\partial B_r)\big)\Big)\,;
\end{eqnarray}
\item[(ii)] If $A\subset E$, then
\begin{eqnarray}
\label{cup second br area}
\limsup_{\eta\to 0^+}\H^n(B_r(x)\cap\partial F)\le\H^n\big(E\cap\partial B_r(x)\setminus A\big)\,.
\end{eqnarray}
\end{itemize}
\end{lemma}
\begin{remark} \label{rmk cup}
Before proceeding with the proof of the lemma, let us first provide some additional details on the construction of the competitors $F=F_\eta$, which, as anticipated, is different depending on whether $A \cap E = \emptyset$ or $A \subset E$. In what follows, given $Y\subset\partial B_r(x)$, we set
\[
N_\eta(Y)=\Big\{y-t\,\nu_{B_r(x)}(y):y\in Y\,,t\in(0,\eta)\Big\}\,,\qquad 0<\eta<r\,.
\]
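For later use in the area estimates, notice that, with the convention that $\nu_{B_r(x)}$ denotes the outer unit normal to $B_r(x)$, the inward normal map $g_t(y)=y-t\,\nu_{B_r(x)}(y)$ is simply the dilation $g_t(y)=x+(1-t/r)\,(y-x)$ of $\partial B_r(x)$, so that, for every $\H^n$-measurable $Y\subset\partial B_r(x)$ and $t\in(0,\eta)$,
\[
\H^n\big(g_t(Y)\big)=\Big(1-\frac{t}{r}\Big)^{n}\,\H^n(Y)\,\le\,\H^n(Y)\,;
\]
in particular, the slices $g_t(Y)$ foliating $N_\eta(Y)$ have area controlled by $\H^n(Y)$.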
{\it The case when $A \cap E = \emptyset$:} In this case, we define
\begin{equation} \label{who is Y}
Y=\partial B_r(x)\setminus\big(\mathrm{cl}\,(E\cap\partial B_r(x))\cup\mathrm{cl}\,(A)\big)\,,
\end{equation}
and then we further distinguish two scenarios, depending on whether the set
\begin{equation} \label{who is S}
S = \partial E \cap \mathrm{cl}\, (A) \setminus \left[ \mathrm{cl}\,\left(E \cap \partial B_r (x) \right) \cup \mathrm{cl}\,(Y) \right]
\end{equation}
is empty or not. When $S=\emptyset$ the cup competitor defined by $E$ and $A$ is given by
\begin{equation}
\label{cup competitor first case lemma}
F=\big(E\setminus\mathrm{cl}\,(B_r(x))\big)\cup\,N_\eta(Y)\,,
\end{equation}
see
\begin{figure}
\input{cup00.pstex_t}\caption{{\small Cup competitors when: (a) $A\cap E=\emptyset$ and $S = \emptyset$; (b) $A \cap E = \emptyset$ and $S \neq \emptyset$; (c) $A\subset E$. Picture (b) pertains to the case $n\ge 2$, in which the component $A$ in the picture is not necessarily disconnected by the presence of $S$. In the situation of picture (b), the set $F$ defined by \eqref{cup competitor first case lemma} may fail to intersect a test curve $\gamma$ that intersected $\Omega\cap\partial E$ only at points in $S$.}}\label{fig cup00}
\end{figure}
Figure \ref{fig cup00}-(a), and step one of the proof. When $S \neq \emptyset$, see Figure \ref{fig cup00}-(b), if we define $F$ as in \eqref{cup competitor first case lemma}, then $\Omega \cap \partial F$ may fail to be $\mathcal{C}$-spanning $W$; we thus need to modify \eqref{cup competitor first case lemma}, and to this end, denoting by ${\rm d}_S$ the distance function from $S$ and by $U_\eta (S) = \partial B_r (x) \cap \{{\rm d}_S (y) < \eta\}$, we set
\begin{equation} \label{cup competitor first case difficult lemma}
F=\big(E\setminus\mathrm{cl}\,(B_r(x))\big)\cup\,N_\eta(Z)\,, \qquad Z = Y \cup \left( U_\eta(S) \setminus \mathrm{cl}\, (E \cap \partial B_r (x)) \right )\,,
\end{equation}
see, again, Figure \ref{fig cup00}-(b). This situation, discussed in detail in step two of the proof, is more delicate, since we can prove that the sets defined in \eqref{cup competitor first case difficult lemma} are well-behaved in the limit as $\eta\to 0^+$ only along a suitable sequence $\eta_k\downarrow 0^+$. For this reason, we will actually define $F_\eta$ as in \eqref{cup competitor first case difficult lemma} only when $\eta=\eta_k$, and then extend the definition by setting $F_\eta=F_{\eta_k}$ for all $\eta \in \left(\eta_{k+1}, \eta_{k}\right)$ (so that, for the sake of homogeneity, \eqref{cup area totale} can be stated as an $\eta\to 0^+$-limit in all three cases).
\medskip
\noindent {\it The case when $A \subset E$:} Finally, when $A \subset E$ the cup competitor defined by $E$ and $A$ is given by
\begin{equation}
\label{cup competitor second case lemma}
F=\big(E\cup B_r(x)\big)\setminus\mathrm{cl}\,\big(N_\eta(Y)\big)\,,\qquad Y=(E\cap\partial B_r(x))\setminus\mathrm{cl}\,(A)\,,
\end{equation}
see Figure \ref{fig cup00}-(c). We treat this case in step three of the proof.
\end{remark}
\begin{proof}
{\it Step one}: We assume that $A \cap E = \emptyset$ and, after defining $Y$ as in \eqref{who is Y} and $S$ as in \eqref{who is S}, we suppose first that
\begin{equation} \label{cc1_easy}
S = \emptyset\,.
\end{equation}
We then define $F$ by \eqref{cup competitor first case lemma}. For the sake of brevity we set $B_r=B_r(x)$. We claim that \eqref{cup fuori da br chiusa} holds, and that we have
\begin{eqnarray}
\label{posse 3 1}
B_r\cap\partial F&=&B_r\cap\partial N_\eta(Y)\,,
\\
\label{posse 0 1}
Y&\subset&\partial F\cap\partial B_r\,,
\\\label{posse 0 2}
E\cap\partial B_r&\subset&\partial F\cap\partial B_r\,,
\\\label{posse 0 3 1}
\partial B_r\setminus\mathrm{cl}\,(A)&\subset&\,\partial F\cap\partial B_r\,,
\\ \label{posseno ammazzarlo}
\partial E \cap \partial B_r & \subset & \partial F \cap \partial B_r\,,
\\\label{posse 0 4 1}
\mbox{$A$, $E\cap\partial B_r$, $Y$ are open and disjoint in $\partial B_r\,$,}\hspace{-4cm}
\\\label{posse 0 3 2}
\partial F\cap\partial B_r&\subset&\partial B_r\setminus A\,,
\\\label{posse 0 4 2}
\partial B_r\setminus\mathrm{cl}\,(E)&\subset& A\cup Y\,,
\\\label{posse 0 4}
\mathrm{cl}\,(Y)\setminus Y&\subset&\partial B_r\cap\partial E\,,
\\\label{posse 0 4 A}
\mathrm{cl}\,(A)\setminus A&\subset&\partial B_r\cap\partial E\,,
\\\label{posse 0 4 E}
\mathrm{cl}\,(E\cap\partial B_r)\setminus (E\cap\partial B_r)&=&\partial B_r\cap\partial E\,.
\end{eqnarray}
Indeed, \eqref{cup fuori da br chiusa} and \eqref{posse 3 1} follow from $F\cap B_r=N_\eta(Y)\cap B_r$ and $F\setminus\mathrm{cl}\,(B_r)=E\setminus\mathrm{cl}\,(B_r)$. To prove \eqref{posse 0 1}: $Y\subset\mathrm{cl}\,(N_\eta(Y))$ gives $Y\subset\mathrm{cl}\,(F)$, and $F\cap\partial B_r=\emptyset$ implies $Y\cap F=\emptyset$. To prove \eqref{posse 0 2}: $E\cap\partial B_r\subset\mathrm{cl}\,(E\setminus\mathrm{cl}\,(B_r))$, so that $E\cap\partial B_r\subset\mathrm{cl}\,(F)$, while $F\cap\partial B_r=\emptyset$ gives $(E\cap\partial B_r)\cap F=\emptyset$. \eqref{posse 0 4 1} is obvious, and \eqref{posse 0 3 1} follows from \eqref{posse 0 1} and \eqref{posse 0 2}. \eqref{posseno ammazzarlo} is then an immediate consequence of \eqref{posse 0 1}, \eqref{posse 0 2}, \eqref{posse 0 3 1}, and the condition in \eqref{cc1_easy}. To prove \eqref{posse 0 3 2}: $A$ is open in $\partial B_r\setminus\partial E$ and $A\cap E=\emptyset$, thus $A\cap\mathrm{cl}\,(E)=\emptyset$; moreover, $A\cap\mathrm{cl}\,(Y)=\emptyset$ by \eqref{posse 0 4 1}, hence
\[
\partial F\cap\partial B_r\,\,\subset\,\,\mathrm{cl}\,(F)\cap\partial B_r\,\,\subset\,\,\mathrm{cl}\,(E)\cup\Big(\mathrm{cl}\,(N_\eta(Y))\cap\partial B_r\Big)\,\,=\mathrm{cl}\,(E)\cup\mathrm{cl}\,(Y)\,,
\]
and we deduce \eqref{posse 0 3 2}. To prove \eqref{posse 0 4 2}: if $y\in\partial B_r\setminus\mathrm{cl}\,(E)$, then $y$ belongs to one of the open connected components of $\partial B_r\setminus\partial E$, so it is either $y\in A$, or $y\in \partial B_r\setminus\mathrm{cl}\,(A)\subset Y$. To prove \eqref{posse 0 4}: by \eqref{posse 0 4 1} we have $A\cap\mathrm{cl}\,(Y)=\emptyset$, so that by \eqref{posse 0 4 2}
\[
\mathrm{cl}\,(Y)\setminus Y\,\subset\,\partial B_r\setminus(A\cup Y)\,\subset\,\partial B_r\cap\mathrm{cl}\,(E)\,,
\]
and we conclude by $(E\cap\partial B_r)\cap\mathrm{cl}\,(Y)=\emptyset$ (again, thanks to \eqref{posse 0 4 1}). Finally, \eqref{posse 0 4 A} and the inclusion ``$\subset$'' in \eqref{posse 0 4 E} are obvious, while the other inclusion in \eqref{posse 0 4 E} follows from \eqref{cc1_easy}. Having proved the claim, we now complete the proof. By definition, $F \subset \Omega$ is open. We show that $\Omega\cap\partial F$ is $\mathcal{C}$-spanning $W$. Given $\gamma\in\mathcal{C}$, if $\gamma\cap\partial E\setminus\mathrm{cl}\,(B_r)\ne\emptyset$, then $\gamma\cap\partial F\ne\emptyset$ by \eqref{cup fuori da br chiusa}; if instead $\gamma\cap\partial E\setminus\mathrm{cl}\,(B_r)=\emptyset$, then necessarily $\gamma \cap \partial E \cap \mathrm{cl}\, (B_r) \neq \emptyset$. Now, if $\gamma \cap \partial E \cap \partial B_r \neq \emptyset$ then $\gamma \cap \partial F \neq \emptyset$ by \eqref{posseno ammazzarlo}; otherwise we actually have $\gamma \cap \partial E \setminus B_r = \emptyset$, and thus, by Lemma \ref{lemma 10}, $\gamma$ intersects two distinct connected components of $\partial B_r\setminus\partial E$, and at least one of them is contained in $\partial F\cap\partial B_r$: indeed, $\partial F\cap\partial B_r$ contains $\partial B_r\setminus\mathrm{cl}\,(A)$ by \eqref{posse 0 3 1}, where $\mathrm{cl}\,(A)$ is disjoint from all the connected components of $\partial B_r\setminus\partial E$ that are different from $A$.
Now, we prove \eqref{cup buccia}, \eqref{cup area totale}, and \eqref{cup first br area}. First notice that \eqref{posse 0 3 1}, \eqref{posse 0 3 2}, \eqref{posse 0 4 A}, and $\H^n (\partial B_r \cap \partial E)=0$ imply that
\begin{eqnarray} \label{cup buccia stronger}
\partial F \cap \partial B_r &=& \partial B_r \setminus A \qquad \mbox{modulo $\H^n$}\,,
\end{eqnarray}
which in turn implies \eqref{cup buccia}. Next, we claim that
\begin{eqnarray}\label{cup area total eta first}
\H^n(\Omega\cap\partial F)&\le&\H^n\big(\Omega \cap \partial E\setminus B_r\big)+\H^n(E\cap\partial B_r)
\\\nonumber
&&+\big(2+C(n)\,\eta\big)\,\H^n\Big(\partial B_r\setminus \big(A\cup(E\cap\partial B_r)\big)\Big)
+C(n)\,\eta\,\H^{n-1}\big(\partial E\cap\partial B_r\big)\,.
\end{eqnarray}
To prove the claim, first by $\H^n(\partial E\cap\partial B_r)=0$, \eqref{cup fuori da br chiusa} and \eqref{posse 0 3 2} we have
\begin{eqnarray}
\nonumber
\H^n(\Omega\cap\partial F)&=&\H^n(\Omega \cap \partial E\setminus B_r)+\H^n(\mathrm{cl}\,(B_r)\cap\partial F)
\\
\label{posse c1}
&\le&\H^n(\Omega \cap \partial E\setminus B_r)+\H^n(\partial B_r\setminus A)+\H^n(B_r\cap\partial F)\,.
\end{eqnarray}
If $g(y,t)=y-t\,\nu_{B_r}(y)$, then by \eqref{posse 3 1}
\begin{eqnarray}\nonumber
B_r\cap\partial F=B_r\cap \partial N_\eta(Y)=g(Y,\eta)\,\cup\, g\Big(\big(\mathrm{cl}\,(Y)\setminus Y\big)\times[0,\eta]\Big)\,,
\end{eqnarray}
so that \eqref{posse 0 4}, the $\H^{n-1}$-rectifiability of $\partial E\cap\partial B_r$, and the area formula give us
\begin{eqnarray}\label{posse c3}
\H^n(B_r\cap \partial F)\le (1+C(n)\,\eta)\,\H^n(Y)+C(n)\,\eta\,\H^{n-1}(\partial E\cap\partial B_r)\,.
\end{eqnarray}
By $\H^n(\partial E\cap\partial B_r)=0$, \eqref{posse 0 4 A} and \eqref{posse 0 4 E} we have
\begin{equation}
\label{posse c4}
\H^n(Y)=\H^n\big(\partial B_r\setminus \big(A\cup(E\cap\partial B_r)\big)\big)\,,
\end{equation}
so that \eqref{posse c1}, \eqref{posse c3} and \eqref{posse c4} imply \eqref{cup area total eta first}. Letting $\eta\to 0^+$ in \eqref{cup area total eta first} we find \eqref{cup area totale}, and doing the same in \eqref{posse c3} and \eqref{posse c4}, we deduce \eqref{cup first br area}.
\medskip
\noindent {\it Step two:} In the case $A \cap E = \emptyset$, we now allow for the set $S$ defined in \eqref{who is S} to be non-empty. In this case, if $F$ is defined as in \eqref{cup competitor first case lemma} then the inclusion \eqref{posseno ammazzarlo} is not true in general, and $\Omega \cap \partial F$ may fail to be $\mathcal{C}$-spanning $W$. We then modify the construction as detailed in Remark \ref{rmk cup}, defining $F$ as in \eqref{cup competitor first case difficult lemma}. We notice that $F \subset \Omega$ is open, and that \eqref{cup fuori da br chiusa} holds true, since once again $F \setminus \mathrm{cl}\, (B_r) = E \setminus\mathrm{cl}\, (B_r)$. Moreover, we have
\begin{eqnarray}
\label{fix:in Br}
B_r \cap \partial F &=& B_r \cap \partial N_\eta (Z)\,,
\\ \label{fix:Z}
Z &\subset & \partial F \cap \partial B_r\,,
\\ \label{fix:E on the sphere}
E \cap \partial B_r &\subset & \partial F \cap \partial B_r\,,
\\\label{fix: all except clA}
\partial B_r \setminus \mathrm{cl}\, (A) &\subset& \partial F \cap \partial B_r\,,
\\ \label{fix:key for spanning}
\partial E \cap \partial B_r &\subset & \partial F \cap \partial B_r\,,
\\ \label{fix:disjoint comp}
\mbox{$A$, $E\cap\partial B_r$, $Y$ are open and disjoint in $\partial B_r\,$,}\hspace{-4cm}
\\ \label{fix:competitor shell}
\partial F \cap \partial B_r &\subset & \left[\partial B_r \setminus A\right] \cup \left[ \partial B_r \cap \{{\rm d}_S \le \eta\} \right]\,,
\\ \label{fix:outside of clE}
\partial B_r \setminus \mathrm{cl}\, (E) & \subset & A \cup Y\,,
\\ \label{fix:bdry Y}
\mathrm{cl}\, (Y) \setminus Y & \subset & \partial B_r \cap \partial E\,,
\\ \label{fix:bdry A}
\mathrm{cl}\, (A) \setminus A &\subset & \partial B_r \cap \partial E\,,
\\ \label{fix:cl E on the sphere}
\mathrm{cl}\, (E \cap \partial B_r) \setminus (E \cap \partial B_r) & \subset & \partial B_r \cap \partial E\,.
\end{eqnarray}
The proofs of \eqref{fix:in Br}, \eqref{fix:Z}, \eqref{fix:E on the sphere}, \eqref{fix: all except clA} are identical to the proofs of the corresponding statements in step one with $Z$ replacing $Y$; \eqref{fix:key for spanning} then follows from \eqref{fix:Z}, \eqref{fix:E on the sphere}, and \eqref{fix: all except clA}, since $S \subset U_\eta(S) \setminus \mathrm{cl}\,(E \cap \partial B_r) \subset Z$; \eqref{fix:disjoint comp} is obvious. To prove \eqref{fix:competitor shell}: as in step one, $A \cap \mathrm{cl}\, (E) = \emptyset$ and $A \cap \mathrm{cl}\, (Y) = \emptyset$ by \eqref{fix:disjoint comp}, and
\begin{eqnarray*}
\partial F \cap \partial B_r &&\subset\,\, \mathrm{cl}\, (F) \cap \partial B_r \,\,\subset \,\,\mathrm{cl}\, (E) \cup \left( \mathrm{cl}\, (N_\eta (Z)) \cap \partial B_r \right)
\\
&&\subset\,\, \mathrm{cl}\, (E) \cup \mathrm{cl}\, (Y) \cup \mathrm{cl}\, (U_\eta(S))\,,
\end{eqnarray*}
so that \eqref{fix:competitor shell} follows from the fact that $\mathrm{cl}\, (U_\eta(S)) \subset \partial B_r \cap \{{\rm d}_S \leq \eta\}$. Next, we notice that \eqref{fix:outside of clE}, \eqref{fix:bdry Y}, \eqref{fix:bdry A}, and \eqref{fix:cl E on the sphere} are shown analogously to step one (with the identity in \eqref{posse 0 4 E} becoming an inclusion in \eqref{fix:cl E on the sphere}, since $S$ may be non-empty). With the above at our disposal, we now proceed to verify the claims of the lemma. First, the proof that $\Omega \cap \partial F$ is $\mathcal{C}$-spanning $W$ follows \emph{verbatim} the argument from step one. Next, \eqref{fix: all except clA}, \eqref{fix:competitor shell}, \eqref{fix:bdry A}, and $\mathcal{H}^n (\partial E \cap \partial B_r)=0$ imply that
\begin{equation}\label{cup buccia eta level}
\mathcal{H}^n\left((\partial F \cap \partial B_r) \, \Delta \, (\partial B_r \setminus A)\right) \leq \mathcal{H}^n (\partial B_r \cap \{{\rm d}_S\leq \eta\})\,.
\end{equation}
In particular, since $\mathcal{H}^{n-1} (S) < \infty$, it holds
\begin{equation} \label{fix:cup competitor shell}
\lim_{\eta \to 0^+} \mathcal{H}^n \left( (\partial F \cap \partial B_r) \, \Delta \, (\partial B_r \setminus A) \right) = 0\,,
\end{equation}
which is \eqref{cup buccia}. Next, we estimate $\H^n (\Omega \cap \partial F)$. We first notice that, by \eqref{cup fuori da br chiusa} and $\H^n (\partial E \cap \partial B_r) = 0$,
\begin{eqnarray}
\nonumber
\H^n(\Omega\cap\partial F)&=&\H^n(\Omega \cap \partial E\setminus B_r)+\H^n(\mathrm{cl}\,(B_r)\cap\partial F)
\\
\label{fix:est1}
&\le&\H^n(\Omega \cap \partial E\setminus B_r)+\H^n(\partial F \cap \partial B_r)+\H^n(B_r\cap\partial F)\,.
\end{eqnarray}
Setting, as in step one, $g(y,t) = y - t\, \nu_{B_r} (y)$, we then have from \eqref{fix:in Br} that
\begin{equation} \label{fix rect 1}
B_r \cap \partial F = B_r \cap \partial N_\eta (Z) = g (Z,\eta) \cup g\left( \left( \mathrm{cl}\, (Z) \setminus Z \right) \times \left[0,\eta\right] \right)\,.
\end{equation}
By the area formula, we can easily estimate
\begin{align}
\H^n (g(Z,\eta)) &\leq (1 + C(n)\,\eta)\, \H^n (Z) \nonumber \\
& \leq (1 + C(n)\,\eta)\, \Big( \H^n (Y) + \H^n (\partial B_r \cap \{{\rm d}_S < \eta\}) \Big) \nonumber \\ \label{fix:est2}
& \leq (1 + C(n)\,\eta)\, \Big( \H^n (\partial B_r \setminus (A\cup (E \cap \partial B_r))) + \H^n (\partial B_r \cap \{{\rm d}_S < \eta\}) \Big) \,.
\end{align}
On the other hand, it holds
\begin{equation} \label{fix rect 2}
\mathrm{cl}\, (Z) \setminus Z \subset \left[ \mathrm{cl}\, (Y) \setminus Y \right] \cup \left[ \mathrm{cl}\, (\hat U) \setminus \hat U \right]\,,
\end{equation}
where $\hat U = U_\eta(S) \setminus \mathrm{cl}\, (E \cap \partial B_r)$. Since $\mathrm{cl}\, (\hat U) \subset \mathrm{cl}\, (U_\eta(S)) \setminus (E \cap \partial B_r)$, \eqref{fix:cl E on the sphere} implies that
\begin{equation} \label{fix rect 3}
\mathrm{cl}\,(\hat U) \setminus \hat U \subset \left( \partial B_r \cap \{{\rm d}_S = \eta\} \right) \cup \left( \partial B_r \cap \partial E \right)\,,
\end{equation}
and thus \eqref{fix:bdry Y} yields
\begin{equation} \label{fix:est3}
\H^n \left( g ( (\mathrm{cl}\,(Z) \setminus Z) \times \left[0,\eta\right] ) \right) \leq C(n)\, \eta\, \Big( \H^{n-1} (\partial B_r \cap \partial E) + \H^{n-1} (\partial B_r \cap \{{\rm d}_S = \eta\}) \Big)\,.
\end{equation}
Applying the coarea formula to ${\rm d}_S$, we find, for every $0 < \sigma < r/2$,
\begin{equation} \label{fix:coarea}
\int_0^\sigma \H^{n-1} (\partial B_r \cap \{{\rm d}_S = \eta\}) \, d\eta = \H^n (\partial B_r \cap \{{\rm d}_S \leq \sigma \}) < \infty\,,
\end{equation}
and thus there exists a decreasing sequence $\{\eta_k\}_{k=1}^\infty$ with $\lim_{k \to \infty} \eta_k =0$ such that $\partial B_r \cap \{{\rm d}_S = \eta_k\}$ is $\H^{n-1}$-rectifiable and
\begin{equation} \label{fix:coarea trick}
\lim_{k \to \infty} \eta_k \, \H^{n-1} (\partial B_r \cap \{{\rm d}_S =\eta_k\}) = 0\,.
\end{equation}
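A one-line justification of the existence of such a sequence: if there were $c>0$ and $\sigma>0$ with $\eta\,\H^{n-1}(\partial B_r\cap\{{\rm d}_S=\eta\})\ge c$ for a.e. $\eta\in(0,\sigma)$, then
\[
\int_0^\sigma\H^{n-1}(\partial B_r\cap\{{\rm d}_S=\eta\})\,d\eta\,\ge\,\int_0^\sigma\frac{c}{\eta}\,d\eta\,=\,+\infty\,,
\]
against \eqref{fix:coarea}; the $\H^{n-1}$-rectifiability of the level set $\partial B_r\cap\{{\rm d}_S=\eta\}$ for a.e. $\eta$ also follows from the coarea formula.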
If $\{F_k\}$ denotes the sequence of cup competitors defined by \eqref{cup competitor first case difficult lemma} corresponding to the choices $\eta=\eta_k$, we then have from \eqref{fix rect 1}, \eqref{fix rect 2}, \eqref{fix:bdry Y}, and \eqref{fix rect 3} that $\Omega \cap \partial F_k$ is $\H^n$-rectifiable, and from \eqref{fix:est1}, \eqref{fix:cup competitor shell}, \eqref{fix:est2}, \eqref{fix:est3}, and \eqref{fix:coarea trick} that
\begin{eqnarray} \label{fix: est final1}
\limsup_{k \to \infty} \H^n (B_r \cap \partial F_k) &\leq& \H^n ( \partial B_r \setminus (A \cup (E \cap \partial B_r)))\,, \\ \label{fix: est final2}
\limsup_{k \to \infty} \H^n (\Omega \cap \partial F_k) &\leq& \H^n (\Omega \cap \partial E \setminus B_r) + 2\, \H^n (\partial B_r \setminus A)\,.
\end{eqnarray}
Defining $F_\eta=F_{\eta_k}$ for all $\eta \in \left( \eta_{k+1}, \eta_{k}\right)$ then allows us to conclude both \eqref{cup area totale} and \eqref{cup first br area}.
\medskip
\noindent {\it Step three}: We now assume that $A\subset E$, and define $F$ by \eqref{cup competitor second case lemma}, that is
\begin{equation}
\label{cup competitor second case lemma repeat}
F=\big(E\cup B_r\big)\setminus\mathrm{cl}\,\big(N_\eta(Y)\big)\,,\qquad Y=(E\cap\partial B_r)\setminus\mathrm{cl}\,(A)\,.
\end{equation}
We claim that \eqref{cup fuori da br chiusa} holds, as well as
\begin{eqnarray}
\label{cc2_Y}
Y & \subset & \partial F \cap \partial B_r\,, \\
\label{cc2_restoftheworld}
\partial B_r \setminus E & \subset & \partial F \cap \partial B_r\,,\\
\label{cc2_keyforspanning}
\partial B_r \setminus \mathrm{cl}\,(A) & \subset & \partial F \cap \partial B_r\,,\\
\label{cc2_in}
B_r\cap\partial F & \subset & B_r \cap \partial N_\eta(Y)\,,\\
\label{cc2_partition}
A, \partial B_r \setminus \mathrm{cl}\,(E), Y \mbox{ are open and disjoint in $\partial B_r$}\,,\hspace{-4cm}\\
\label{cc2_bdry}
\partial F \cap \partial B_r & \subset & \partial B_r \setminus A\,,\\
\label{cc2 AAA}
\mathrm{cl}\,(A)\setminus A &\subset &\partial B_r\cap\partial E\,,
\\
\label{cc2_Ybdry}
\mathrm{cl}\,(Y) \setminus Y & \subset & \partial B_r \cap \partial E\,.
\end{eqnarray}
First, $F \setminus \mathrm{cl}\,(B_r) = E \setminus \mathrm{cl}\,(B_r)$ implies \eqref{cup fuori da br chiusa}. To prove \eqref{cc2_Y}: since $E$ is open we have $E \cap \partial B_r\subset\mathrm{cl}\,(E\setminus\mathrm{cl}\,(B_r))=\mathrm{cl}\,(F\setminus\mathrm{cl}\,(B_r))$ (by \eqref{cup competitor second case lemma repeat}), thus $Y\subset\mathrm{cl}\,(F)$; we conclude as $Y \cap F = \emptyset$. As $F\cap\partial B_r\subset E\cap\partial B_r$, to prove \eqref{cc2_restoftheworld} we just need to show that $\partial B_r\setminus E\subset\mathrm{cl}\,(F)$: since $\mathrm{cl}\,(U)\setminus\mathrm{cl}\,(V)\subset\mathrm{cl}\,(U\setminus\mathrm{cl}\,(V))$ for every $U,V\subset\mathbb{R}^{n+1}$, by $\partial B_r\cap\mathrm{cl}\,(N_\eta(Y))\subset\mathrm{cl}\,(E)$,
\begin{eqnarray*}
\partial B_r\setminus\mathrm{cl}\,(E)&\subset&\mathrm{cl}\,(B_r)\setminus\mathrm{cl}\,(N_\eta(Y))\subset\mathrm{cl}\,\big(B_r\setminus\mathrm{cl}\,(N_\eta(Y))\big)\subset\mathrm{cl}\,(F)\,,
\\
(\partial B_r\cap\partial E)\setminus\mathrm{cl}\,(N_\eta(Y))&\subset&\mathrm{cl}\,(E)\setminus\mathrm{cl}\,(N_\eta(Y))\,\,\subset\,\,\mathrm{cl}\,(E\setminus\mathrm{cl}\,(N_\eta(Y)))\,\,\subset\,\,\mathrm{cl}\,(F)\,,
\\
\partial B_r\cap\partial E\cap\mathrm{cl}\,(N_\eta(Y))&\subset&\partial E\cap\mathrm{cl}\,(Y)\,\,\subset\,\,\partial F\,,
\end{eqnarray*}
where the last inclusion follows by \eqref{cc2_Y}. Next, \eqref{cc2_keyforspanning} follows by \eqref{cc2_Y}, \eqref{cc2_restoftheworld} and
\[
\partial B_r \setminus \mathrm{cl}\,(A) = \left[ (E\cap\partial B_r) \setminus \mathrm{cl}\,(A) \right] \cup \left[ \partial B_r \setminus (E \cup \mathrm{cl}\,(A)) \right]
\subset Y \cup (\partial B_r \setminus E)\,.
\]
To prove \eqref{cc2_in}: setting $V^c=\mathbb{R}^{n+1}\setminus V$, by $B_r\cap F=B_r\cap \mathrm{cl}\,(N_\eta(Y))^c$ we find $B_r\cap\partial F= B_r \cap \partial \left[ \mathrm{cl}\,(N_\eta(Y))^c \right]$, where, as a general fact valid for every open set $U\subset\mathbb{R}^{n+1}$, we have
\[
\partial[\mathrm{cl}\,(U)^c]=\mathrm{cl}\,(\mathrm{cl}\,(U)^c)\setminus\mathrm{cl}\,(U)^c=\mathrm{cl}\,(U)\cap\mathrm{cl}\,(\mathrm{cl}\,(U)^c)\,,\qquad\mathrm{cl}\,(\mathrm{cl}\,(U)^c)\subset U^c\,,
\]
and thus $\partial[\mathrm{cl}\,(U)^c]\subset\partial U$. Next, \eqref{cc2_partition} is obvious, and implies $A\cap\mathrm{cl}\,(Y)=\emptyset$ where $\mathrm{cl}\,(Y)=\mathrm{cl}\,(N_\eta(Y))\cap\partial B_r$, so that $A \cap \partial B_r \subset E \cap \partial B_r \setminus \mathrm{cl}\,(N_\eta(Y))= F \cap \partial B_r$, and \eqref{cc2_bdry} follows. To prove \eqref{cc2 AAA}, just notice that $A\subset E$ and $A$ is a connected component of $\partial B_r\setminus\partial E$. To prove \eqref{cc2_Ybdry}: trivially, $\mathrm{cl}\,(Y)\setminus Y\subset\mathrm{cl}\,(Y) \subset \partial B_r \cap \mathrm{cl}\,(E)$, while by definition of $Y$ and by $\mathrm{cl}\,(Y)\cap A=\emptyset$
\begin{eqnarray*}
E\cap(\mathrm{cl}\,(Y)\setminus Y)&=&\big(\mathrm{cl}\,(Y)\cap(E\cap\partial B_r)\big)\setminus Y=\mathrm{cl}\,(Y)\cap(E\cap\partial B_r)\cap\mathrm{cl}\,(A)
\\
&=&(E\cap\partial B_r)\cap\mathrm{cl}\,(Y)\cap\partial A\subset E\cap(\mathrm{cl}\,(A)\setminus A)=\emptyset\,,
\end{eqnarray*}
thanks to \eqref{cc2 AAA}. This completes the proof of the claim. Next, by \eqref{cc2_keyforspanning}, \eqref{cc2_bdry}, \eqref{cc2 AAA}, and by $\H^n(\partial B_r\cap\partial E)=0$, we deduce \eqref{cup buccia stronger} and thus \eqref{cup buccia}, while $\Omega \cap \partial F$ is $\mathcal{C}$-spanning $W$ thanks to \eqref{cup fuori da br chiusa}, Lemma \ref{lemma 10}, \eqref{cc2_keyforspanning}, and \eqref{cc2_restoftheworld}. Finally,
\begin{eqnarray}\label{cup area total eta second}
\H^n(\Omega\cap\partial F)&\le&\H^n\big(\partial E\setminus B_r\big)+\H^n(\partial B_r\setminus E)
\\\nonumber
&&+\big(2+C(n)\,\eta\big)\,\H^n(E\cap\partial B_r\setminus A)
+C(n)\,\eta\,\H^{n-1}\big(\partial E\cap\partial B_r\big)\,.
\end{eqnarray}
Indeed, by $\H^n(\partial E \cap \partial B_r) = 0$, \eqref{cup fuori da br chiusa}, and \eqref{cc2_bdry}
\begin{eqnarray}
\label{cc2_a_ext1}
\H^n(\Omega \cap \partial F) & \leq& \H^n(\partial E \setminus B_r) + \H^n(\partial F \cap \mathrm{cl}\,(B_r))
\\\nonumber
&\leq& \H^n(\partial E \setminus B_r) + \H^n(\partial B_r \setminus A)+ \H^n(B_r \cap \partial F)
\\\nonumber
&\le& \H^n(\partial E \setminus B_r) + \H^n(\partial B_r\setminus E)+\H^n((E\cap\partial B_r)\setminus A)+\H^n(B_r \cap \partial F)\,;
\end{eqnarray}
by \eqref{cc2_in}, \eqref{cc2_Ybdry}, the $\H^{n-1}$-rectifiability of $\partial E \cap \partial B_r$, and the area formula
\begin{eqnarray}\label{cc2_a_ext2}
\H^n(B_r\cap\partial F)&\le&\H^n(B_r \cap \partial N_\eta(Y))
\\\nonumber
&\leq&(1 + C(n) \, \eta) \H^n(Y) + C(n) \, \eta \, \H^{n-1}(\partial E \cap \partial B_r)\,,
\end{eqnarray}
while \eqref{cc2 AAA} and $\H^n(\partial B_r\cap\partial E)=0$ give
\[
\H^n(Y)=\H^n((E\cap\partial B_r)\setminus\mathrm{cl}\,(A))=\H^n((E\cap\partial B_r)\setminus A)\,.
\]
We thus deduce \eqref{cup area total eta second}. Letting $\eta\to 0^+$ in \eqref{cup area total eta second} and in \eqref{cc2_a_ext2}, we obtain \eqref{cup area totale} and \eqref{cup second br area}.
\end{proof}
In the following lemma we introduce the notion of exterior cup competitor. We set
\[
M_\eta(Y)=\Big\{y+t\,\nu_B(y):y\in Y\,,t\in(0,\eta)\Big\}\,,\qquad\eta>0\,,
\]
whenever $B$ is an open ball and $Y\subset\partial B$.
\begin{lemma}[Exterior cup competitor]
\label{lemma cup exterior}
Let $E\in\mathcal{E}$ be such that $\Omega \cap \partial E$ is $\mathcal{C}$-spanning $W$, let $R>0$ be such that $W\subset\subset B_R(0)$ and $\partial E\cap\partial B_R(0)$ is $\H^{n-1}$-rectifiable, and let $A$ be a connected component of $\partial B_R(0)\setminus\partial E$ such that $A\cap E=\emptyset$. For every $\eta \in \left(0,1 \right)$ there exists a set $F=F_\eta \in \mathcal{E}$ such that $\Omega\cap\partial F$ is $\mathcal{C}$-spanning $W$ and
\begin{eqnarray}
\label{area of external cup competitors}
\limsup_{\eta\to 0^+}\H^n(\Omega\cap\partial F)\le \H^n\big(\Omega\cap B_R(0)\cap\partial E)+2\,\H^n(\partial B_R(0)\setminus A)\,.
\end{eqnarray}
\end{lemma}
\begin{proof}
The proof consists of a minor modification of step one and step two in the proof of Lemma \ref{lemma cup competitor first kind}. Precisely, the exterior cup competitor defined by $E$ and $A$ is given by
\begin{equation}
\label{cup exterior}
F=\big(E\cap B_R(0)\big)\cup\,M_\eta(Z)\,,
\end{equation}
where
\begin{eqnarray*}
Z&=&Y \cup \Big( U_\eta(S) \setminus \mathrm{cl}\, (E \cap \partial B_R(0)) \Big)\,,
\\
Y&=&\partial B_R(0)\setminus\Big(\mathrm{cl}\,(E\cap\partial B_R(0))\cup\mathrm{cl}\,(A)\Big)\,,
\\
U_\eta(S)&=&\partial B_R(0) \cap \{{\rm d}_S < \eta\}\,,
\\
S&=&\partial E \cap \mathrm{cl}\, (A) \setminus \Big( \mathrm{cl}\, (E \cap \partial B_R(0)) \cup \mathrm{cl}\, (Y)\Big)\,;
\end{eqnarray*}
see
\begin{figure}
\input{cupext.pstex_t}\caption{{\small An exterior cup competitor. Notice that for $S$ to be non-empty, without disconnecting $A$, it must be that $n\ge 2$.}}\label{fig cupext}
\end{figure}
Figure \ref{fig cupext}. If $\gamma\in\mathcal{C}$ is such that $\gamma\cap\partial E\cap \mathrm{cl}\,(B_R(0))=\emptyset$, then an adaptation of step one in the proof of Lemma \ref{lemma close by Lipschitz at boundary} shows that there exists a connected component of $\gamma\setminus B_R(0)$ which is diffeomorphic to an interval, and whose end-points belong to distinct connected components of $(\mathbb{R}^{n+1}\setminus B_R(0))\setminus\partial E$. Using this fact, and since $\partial F \cap B_R(0) = \partial E \cap B_R(0)$, we just need to show that $\partial B_R(0)\cap\partial F$ contains $\partial B_R(0) \cap \partial E$ as well as $\partial B_R(0) \setminus \mathrm{cl}\, (A)$ in order to conclude that $\Omega \cap \partial F$ is $\mathcal{C}$-spanning $W$. This is done by repeating with minor variations the considerations contained in step two of the proof of Lemma \ref{lemma cup competitor first kind}. The proof of \eqref{area of external cup competitors} is obtained in a similar way, and the details are omitted.
\end{proof}
\subsection{Slab competitors}\label{section slab} Bi-Lipschitz deformations of cup competitors can be used to generate new competitors thanks to Lemma \ref{statement spanning is close by Lipschitz maps}. We will make crucial use of this observation to replace balls with ``slabs'' (see Figures \ref{fig slab1}, \ref{fig slab2} and \ref{fig slab3}) and obtain sharp area concentration estimates in step five of the proof of Theorem \ref{thm lsc}, as well as in the proof of Theorem \ref{thm basic regularity}, see e.g. \eqref{the important remark citato}. Given $\tau\in(0,1)$, $x\in\mathbb{R}^{n+1}$, $r > 0$, and $\nu\in\SS^n$, we set
\[
S_{\tau,r}^{\,\nu}(x)=\big\{y\in B_r(x):|(y-x)\cdot\nu|<\tau\,r\big\}\,,
\]
and we claim the existence of a bi-Lipschitz map $\Phi:\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ with
\[
\{\Phi\ne{\rm id}\,\}\subset\subset B_{2\,r}(x)\,,\qquad \Phi\big(B_{2\,r}(x)\big)= B_{2\,r}(x)\,,\qquad
\Phi\big(\partial S_{\tau,t}^{\,\nu}(x)\big)=\partial B_t(x)\quad\forall t\in(0,r)\,,
\]
and such that ${\rm Lip}\,\Phi$ and ${\rm Lip}\,\Phi^{-1}$ depend only on $n$ and $\tau$. Indeed, assuming without loss of generality that $x=0$, there is a convex function $\varphi:\mathbb{R}^{n+1}\to[0,\infty)$, positively homogeneous of degree one, such that $S_{\tau,t}^\nu(0)=\{\varphi<t\}$ for every $t>0$. Taking $\eta_r:[0,\infty)\to[0,\infty)$ smooth, decreasing, and such that $\eta_r=1$ on $[0,4r/3]$ and $\eta_r=0$ on $[5r/3,\infty)$, we set
\[
\Phi(x)=\eta_r(|x|)\,\frac{\varphi(x)}{|x|}\,x+(1-\eta_r(|x|))\,x\,.
\]
Noticing that $\Phi$ is a smooth interpolation between linear maps on each half-line $\{t\,x:t\ge0\}$, and observing that the slopes of these linear maps change in a Lipschitz way with respect to the angular variable, one sees that $\Phi$ has the required properties.
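As an illustration, the key property $\Phi(\partial S_{\tau,t}^{\,\nu}(0))=\partial B_t(0)$ for $t\in(0,r)$ can be checked directly: if $\varphi(x)=t<r$, then $x\in\mathrm{cl}\,(S_{\tau,t}^{\,\nu}(0))\subset\mathrm{cl}\,(B_t(0))$, so that $|x|\le t<4r/3$ and $\eta_r(|x|)=1$; hence
\[
\Phi(x)=\frac{\varphi(x)}{|x|}\,x\,,\qquad |\Phi(x)|=\varphi(x)=t\,,
\]
and since $\varphi$ is positive away from the origin and positively homogeneous of degree one, $\Phi$ maps each half-line from the origin bijectively onto itself, so that $\Phi(\{\varphi=t\})=\partial B_t(0)$.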
\begin{lemma}[Slab competitors]
\label{lemma slab competitor}
Let $E\in\mathcal{E}$ be such that $\Omega\cap\partial E$ is $\mathcal{C}$-spanning $W$, and let $B_{2r}(x)\subset\subset\Omega$, $\nu\in\SS^n$, $\tau\in(0,1)$ with $\partial S_{\tau,r}^{\,\nu}(x)\cap\partial E$ $\H^{n-1}$-rectifiable. Let $A$ be an open connected component of $\partial S_{\tau,r}^{\,\nu}(x)\setminus\partial E$.
Then for every $\eta\in(0,r/2)$, there exists $F\in\mathcal{E}$ such that $\Omega\cap\partial F$ is $\mathcal{C}$-spanning $W$,
\begin{eqnarray}
\label{slab competitor exterior}
&& F\setminus\mathrm{cl}\,(S_{\tau,r}^{\,\nu}(x))=E\setminus\mathrm{cl}\,(S_{\tau,r}^{\,\nu}(x))\,,
\\
\label{slab competitor buccia}
&&\lim_{\eta \to 0^+} \H^n \left( (\partial F\cap\partial S_{\tau,r}^{\,\nu}(x)) \, \Delta \, ( \partial S_{\tau,r}^{\,\nu}(x)\setminus A ) \right) = 0\,,
\end{eqnarray}
and such that if $A\cap E=\emptyset$, then
\begin{eqnarray}
\label{area of slab competitors first 0}
\limsup_{\eta\to 0^+}\H^n(S_{\tau,r}^{\,\nu}(x)\cap\partial F)
\le\,C(n,\tau)\,\H^n\big(\partial S_{\tau,r}^{\,\nu}(x)\setminus (A\cup E)\big)\,;
\end{eqnarray}
while, if $A\subset E$, then
\begin{eqnarray} \label{area of slab competitors second 0}
\limsup_{\eta\to 0^+}\H^n(S_{\tau,r}^{\,\nu}(x)\cap\partial F)
\le\,C(n,\tau)\,\H^n\big(E\cap\partial S_{\tau,r}^{\,\nu}(x)\setminus A\big)\,.
\end{eqnarray}
\end{lemma}
\begin{proof}
Let us set for brevity $S_r=S_{\tau,r}^{\,\nu}(x)$ and $B_r=B_r(x)$. By Lemma \ref{statement spanning is close by Lipschitz maps}, $\Phi(E)\in\mathcal{E}$ and $\Omega\cap\partial\Phi(E)$ is $\mathcal{C}$-spanning $W$. Since $\Phi$ is a homeomorphism between $\partial S_r$ and $\partial B_r$, $\Phi(A)$ is an open connected component of $\partial B_r\setminus \partial \Phi(E)$. Depending on whether $A\cap E=\emptyset$ or $A\subset E$, and thus, respectively, depending on whether $\Phi(A)\cap\Phi(E)=\emptyset$ or $\Phi(A)\cap\Phi(E)\ne\emptyset$, we consider the cup competitor $G$ defined by $\Phi(E)$ and $\Phi(A)$, so that
\[
G=\big(\Phi(E)\setminus\mathrm{cl}\,(B_r)\big)\cup\,N_\eta(Z)\,,\qquad Z = Y \cup \Big( U_\eta(S) \setminus \mathrm{cl}\, (\Phi (E) \cap \partial B_r) \Big)\,,
\]
where
\[
Y=\partial B_r\setminus\big(\mathrm{cl}\,(\Phi(E)\cap\partial B_r)\cup\mathrm{cl}\,(\Phi(A))\big)\,, \qquad U_\eta(S) = \partial B_r \cap \{{\rm d}_S < \eta\}\,,
\]
with
\[
S = \partial \Phi(E) \cap \mathrm{cl}\, (\Phi (A)) \setminus \left[ \mathrm{cl}\, (\Phi (E) \cap \partial B_r) \cup \mathrm{cl}\, (Y) \right],
\]
if $A\cap E=\emptyset$, see \eqref{cup competitor first case difficult lemma}, and
\[
G=\big(\Phi(E)\cup B_r\big)\setminus\mathrm{cl}\,\big(N_\eta(Y)\big)\,,\qquad Y=\big(\Phi(E)\cap\partial B_r\big)\setminus\mathrm{cl}\,(\Phi(A))\,,
\]
if $A\subset E$, see \eqref{cup competitor second case lemma}. Finally, we set $F=\Phi^{-1}(G)$. Since $G\in\mathcal{E}$ and $\Omega\cap\partial G$ is $\mathcal{C}$-spanning $W$, by Lemma \ref{statement spanning is close by Lipschitz maps} we find that $F\in\mathcal{E}$ and that $\Omega\cap\partial F$ is $\mathcal{C}$-spanning $W$. By construction $G\setminus\mathrm{cl}\,(B_r)=\Phi(E)\setminus\mathrm{cl}\,(B_r)$, so that \eqref{slab competitor exterior} follows from
\[
F\setminus\mathrm{cl}\,(S_r)=\Phi^{-1}\big(G\setminus\mathrm{cl}\,(B_r)\big)=\Phi^{-1}\big(\Phi(E)\setminus\mathrm{cl}\,(B_r)\big)=E\setminus\mathrm{cl}\,(S_r)\,.
\]
By \eqref{cup buccia}, $\H^n \left( (\partial B_r\cap\partial G) \, \Delta \, (\partial B_r\setminus\Phi(A)) \right) \to 0$ as $\eta \to 0^+$, which gives \eqref{slab competitor buccia} by the area formula. Finally, \eqref{area of slab competitors first 0} and \eqref{area of slab competitors second 0} follow from the area formula together with \eqref{cup first br area} and \eqref{cup second br area}.
\end{proof}
\subsection{Cone competitors}\label{section cone}
As customary in the analysis of area minimization problems, we want to compare $\H^n(B_r(x)\cap\partial E)$ with $\H^n(B_r(x)\cap\partial F)$, where $F$ is the cone spanned by $E\cap\partial B_r(x)$ over $x$,
\begin{equation}
\label{cone competitor}
F=\big(E\setminus\mathrm{cl}\,(B_r(x))\big)\cup \big\{(1-t)\,x+t\,y:y\in E\cap \partial B_r(x)\,,t\in(0,1]\big\}\,.
\end{equation}
Following the terminology of \cite{DLGM}, given $K\in\mathcal S$, the cone competitor $K'$ of $K$ in $B_r(x)$ is similarly defined as
\[
K'=(K\setminus B_r(x) )\cup \big\{(1-t)\,x+t\,y:y\in K\cap\partial B_r(x)\,,t\in[0,1]\big\}\,,
\]
and is indeed $\mathcal{C}$-spanning $W$ (since $K$ was). However, for some values of $r$, $\partial F\cap B_r(x)$ may be strictly smaller than the cone competitor $K'$ defined by the choice $K=\Omega\cap\partial E$ in $B_r(x)$, and thus it may fail to be $\mathcal{C}$-spanning; see Figure \ref{fig cone}.
\begin{figure}
\input{cone.pstex_t}\caption{{\small In this picture, the cone competitor $F$ defined by $E\cap\partial B_r$ as in \eqref{cone competitor} may fail to be $\mathcal{C}$-spanning $W$. Notice that the dashed lines are part of the cone competitor $K'$ defined by $K=\Omega\cap\partial E$ in $B_r(x)$, which is indeed strictly larger than $\Omega\cap\partial F$.
}}\label{fig cone}
\end{figure}
By Sard's lemma, if $E$ has smooth boundary in $\Omega$ this issue can be avoided: for a.e. $r$, $\partial E$ and $\partial B_r$ intersect transversally, so that $\partial E\cap\partial B_r(x)$ is the boundary of $E\cap\partial B_r(x)$ relative to $\partial B_r(x)$; but working with smooth boundaries leads to other difficulties when constructing cup competitors. We thus approximate $F$ (as defined in \eqref{cone competitor}) in energy by means of diffeomorphic images of $E$.
\begin{lemma}[Cone competitors]
\label{lemma cone competitor}
Let $E\in\mathcal{E}$ be such that $\Omega\cap\partial E$ is $\mathcal{C}$-spanning $W$, and let $B_r(x)\subset\subset\Omega$ be such that $E \cap \partial B_r(x)$ is $\H^n$-rectifiable, $\partial E\cap\partial B_r(x)$ is $\H^{n-1}$-rectifiable and $r$ is a Lebesgue point of the maps $t \mapsto \H^{n}(E \cap \partial B_t(x))$ and $t \mapsto \H^{n-1}(\partial E \cap \partial B_t(x))$. Then for each $\eta\in(0,r/2)$ there exists $F=F_\eta\in\mathcal{E}$ such that $F \Delta E \subset B_r(x)$, $\Omega\cap\partial F$ is $\mathcal{C}$-spanning $W$, and
\begin{align}
\label{cone competitor area inequality}
\limsup_{\eta\to 0^+}\H^n(\Omega\cap\partial F)&\le\H^n(\partial E\setminus B_r(x))+\frac{r}n\,\H^{n-1}(\partial E\cap\partial B_r(x))\,,\\
\label{cone competitor volume inequality}
\liminf_{\eta \to 0^+} |F| & \geq |E \setminus B_r(x)| + \frac{r}{n+1} \, \H^n(E \cap \partial B_r(x)) \,.
\end{align}
\end{lemma}
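It may be useful to keep in mind that the right-hand sides of \eqref{cone competitor area inequality} and \eqref{cone competitor volume inequality} are exactly the area and the volume one expects for the cone $F$ in \eqref{cone competitor}. Indeed, if $C$ denotes the cone with vertex $x$ over an $\H^{n-1}$-rectifiable set $\Sigma'\subset\partial B_r(x)$, then slicing by the spheres $\partial B_s(x)$, whose intersections with $C$ are scaled copies of $\Sigma'$, gives
\[
\H^n(C)=\int_0^r\Big(\frac{s}r\Big)^{n-1}\,\H^{n-1}(\Sigma')\,ds=\frac{r}n\,\H^{n-1}(\Sigma')\,,
\]
while, similarly, the cone over an $\H^n$-rectifiable set $\Sigma\subset\partial B_r(x)$ has Lebesgue measure $\int_0^r(s/r)^n\,\H^n(\Sigma)\,ds=(r/(n+1))\,\H^n(\Sigma)$; here these identities are applied with $\Sigma'=\partial E\cap\partial B_r(x)$ and $\Sigma=E\cap\partial B_r(x)$.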
\begin{proof}
Without loss of generality we assume $x=0$ and $r=1$, and write $B_r=B_r(0)$. Define a bi-Lipschitz map $f_\eta$ by $f_\eta(0)=0$ and $f_\eta(x)=u_\eta(|x|)\,\hat{x}$ if $x\ne 0$, where $\hat x = x / |x|$ and $u_\eta:\mathbb{R}\to[0,\infty)$ is given by
\begin{equation} \label{u_eta}
u_\eta(t) :=
\begin{cases}
\max\{0,\eta\,t\}\,, &\mbox{for $t \leq 1-\eta$}\,,
\\
\eta(1-\eta)+\frac{t-(1-\eta)}{\eta}\,\left(1-\eta(1-\eta)\right)\,, &\mbox{for $t \in [ 1-\eta,1 ]$}\,,
\\
t\,, & \mbox{for $t \geq 1$}\,,
\end{cases}
\end{equation}
so that $u_\eta(t)\le t$ for $t\ge0$. Clearly, $\{f_\eta\ne{\rm id}\,\} \subset B_1$ and $f_\eta(B_1)\subset B_1$. The open set $F=f_\eta(E)$ is such that $\Omega\cap\partial F=f_\eta(\Omega\cap\partial E)$, so that $\Omega\cap\partial F$ is $\H^n$-rectifiable and, by Lemma \ref{statement spanning is close by Lipschitz maps}, $\mathcal{C}$-spanning $W$. Thanks to the area formula, \eqref{cone competitor area inequality} will follow by showing
\begin{equation} \label{wanted estimate}
\limsup_{\eta\to 0^+}\int_{B_1\cap\partial E}J^{\partial E}f_\eta \,d\H^n\le\frac{1}{n}\,\H^{n-1}(\partial E\cap\partial B_1)\,.
\end{equation}
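Before estimating the integral in \eqref{wanted estimate}, we record two elementary values that follow directly from \eqref{u_eta}:
\[
u_\eta(1-\eta)=\eta\,(1-\eta)\,,\qquad u_\eta(1)=\eta\,(1-\eta)+\big(1-\eta\,(1-\eta)\big)=1\,;
\]
in particular, $f_\eta$ fixes $\partial B_1$, while it maps $B_{1-\eta}$ onto the small ball $B_{\eta(1-\eta)}$.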
Trivially, the integral over $B_{1-\eta}\cap\partial E$ is bounded by $C(n)\,\eta^n\,\H^n(\Omega\cap\partial E)$. The integral over $B_1\setminus B_{1-\eta}$ is treated as in \cite[Step two, Theorem 7]{DLGM}; by the coarea formula,
\begin{equation} \label{ciccia}
\begin{split}
\int_{(B_1 \setminus B_{1-\eta} )\cap\partial E}J^{\partial E}f_\eta \,d\H^n = & \int_{1-\eta}^1 dt \int_{\partial B_t \cap \partial E \cap \{|\nu_E \cdot \hat x| < 1\}} \frac{J^{\partial E} f_\eta}{\sqrt{1-(\nu_E \cdot \hat x)^2}} \, d\H^{n-1} \\ &+ \int_{(B_1 \setminus B_{1-\eta}) \cap \partial E \cap \{|\nu_E \cdot \hat x| =1\} } J^{\partial E}f_\eta \, d\H^n\,,
\end{split}
\end{equation}
where $\nu_E(x) \in T_x(\partial E)^\perp \cap \mathbb{S}^n$ denotes a unit normal vector to $\partial E$, defined at $\H^n$-a.e. $x \in \partial E$. By
\begin{equation} \label{derivative}
\nabla f_\eta(x) = \frac{u_\eta(|x|)}{|x|} \, {\rm Id} + \left( u_\eta'(|x|) - \frac{u_\eta(|x|)}{|x|} \right) \, \hat x \otimes \hat x\,,
\end{equation}
if $|\nu_E(x) \cdot \hat x| = 1$, then $J^{\partial E}f_\eta =(u_\eta(|x|)/|x|)^n \leq 1$. Since
\begin{equation} \label{annulus shrinks}
\lim_{\eta \to 0^+} \H^n(\partial E \cap (B_1 \setminus B_{1-\eta})) = 0\,,
\end{equation}
the second term on the right-hand side of \eqref{ciccia} converges to $0$ as $\eta\to 0^+$. As for the first term, by \eqref{derivative}, we have, as explained later on,
\begin{equation}
\label{all}
J^{\partial E}f_\eta(x) \leq 1 + \sqrt{1-(\nu_E(x) \cdot \hat x)^2} \, u_\eta'(|x|) \, \Big( \frac{u_\eta(|x|)}{|x|} \Big)^{n-1} \quad \mbox{for $\H^n$-a.e. $x \in \partial E$}\,.
\end{equation}
The term corresponding to $1$ in \eqref{all} converges to $0$ as $\eta\to 0^+$ by \eqref{annulus shrinks}. At the same time,
\[
\limsup_{\eta\to 0^+}\Big|\int_{1-\eta}^{1} \Big(\H^{n-1}(\partial E \cap \partial B_t)-\H^{n-1}(\partial E \cap \partial B_1)\Big) \, u_\eta' \, \Big( \frac{u_\eta}{t} \Big)^{n-1} \, dt\Big|=0
\]
since $t=1$ is a Lebesgue point of $t\mapsto\H^{n-1}(\partial B_t\cap\partial E)$, and since $u'_\eta(t)\le 1/\eta$ and $(u_\eta(t)/t)\leq 1$ for $t\ge0$. Finally,
\begin{eqnarray*}
\int_{1-\eta}^{1} u_\eta' \, \Big( \frac{u_\eta}{t} \Big)^{n-1} \, dt\le\frac1{(1-\eta)^{n-1}}\,\frac{u_\eta(1)^n-u_\eta(1-\eta)^n}n=\frac1{(1-\eta)^{n-1}}\,\frac{1-\eta^n(1-\eta)^n}n\to \frac1n
\end{eqnarray*}
as $\eta\to 0^+$, thus completing the proof of \eqref{cone competitor area inequality}. The proof of \eqref{cone competitor volume inequality} follows an analogous argument. The goal is to show that
\begin{equation} \label{volume wanted estimate}
\liminf_{\eta \to 0^+} \int_{E \cap B_1} Jf_\eta \, dx \geq \frac{1}{n+1} \, \H^n(E \cap \partial B_1)\,,
\end{equation}
and by the coarea formula and \eqref{derivative} it is immediate to see that
\[
\int_{E \cap B_1} Jf_\eta \, dx \geq \int_{1-\eta}^1 u_\eta'(t) \, \left( \frac{u_\eta(t)}{t} \right)^n \, \H^n(E \cap \partial B_t) \, dt\,.
\]
The estimate in \eqref{volume wanted estimate} then readily follows using that $t=1$ is a Lebesgue point for the map $t \mapsto \H^n(E \cap \partial B_t)$, together with
\[
\int_{1-\eta}^1 u_\eta'(t) \, \left( \frac{u_\eta(t)}{t} \right)^n \, dt \geq \frac{1 -\eta^{n+1}(1-\eta)^{n+1} }{n+1} \to \frac{1}{n+1} \qquad \mbox{as $\eta \to 0^+$}\,.
\]
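The last inequality follows by an explicit computation: since $t\le 1$ on the domain of integration,
\[
\int_{1-\eta}^1 u_\eta'(t)\,\Big(\frac{u_\eta(t)}{t}\Big)^n\,dt\ge\int_{1-\eta}^1 u_\eta'(t)\,u_\eta(t)^n\,dt=\frac{u_\eta(1)^{n+1}-u_\eta(1-\eta)^{n+1}}{n+1}=\frac{1-\eta^{n+1}\,(1-\eta)^{n+1}}{n+1}\,,
\]
where we have used $u_\eta(1)=1$ and $u_\eta(1-\eta)=\eta\,(1-\eta)$, which follow directly from \eqref{u_eta}.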
We finally explain how to deduce \eqref{all} from \eqref{derivative}. For $x\in\partial^*E$, let $\{\tau_i\}_{i=1}^n$ be an orthonormal basis of $T_x\partial^*E$ such that $\{\tau_i\}_{i=1}^{n-1}\subset x^\perp$. In this way, we can take
\[
\tau_n=\frac{\hat x-(\hat{x}\cdot\nu_E(x))\nu_E(x)}{\sqrt{1-(\hat{x}\cdot\nu_E(x))^2}}\,,
\]
and therefore compute by \eqref{derivative} that
\begin{eqnarray*}
\nabla^{\partial E}f_\eta(x)[\tau_i]&=& \frac{u_\eta(|x|)}{|x|}\,\tau_i\,,\qquad\forall i=1,...,n-1\,,
\\
\nabla^{\partial E}f_\eta(x)[\tau_n]&=&u_\eta'(|x|)\,\sqrt{1-(\hat{x}\cdot\nu_E)^2}\,\hat{x}-\frac{u_\eta(|x|)}{|x|}\,(\hat{x}\cdot\nu_E)\,
\frac{\nu_E-(\hat{x}\cdot\nu_E)\hat{x}}{\sqrt{1-(\hat{x}\cdot\nu_E)^2}}\,,
\end{eqnarray*}
where we have set for brevity $\nu_E$ in place of $\nu_E(x)$. Therefore
\begin{eqnarray*}
J^{\partial E}f_\eta(x)^2&=&\Big|\bigwedge_{i=1}^n\nabla^{\partial E}f_\eta(x)[\tau_i]\Big|^2
\\
&=&\Big(\frac{u_\eta(|x|)}{|x|}\Big)^{2n}\,(\hat{x}\cdot\nu_E)^2\,\Big|\tau_1\wedge\cdots\wedge\tau_{n-1}\wedge\Big(\frac{\nu_E-(\hat{x}\cdot\nu_E)\hat{x}}{\sqrt{1-(\hat{x}\cdot\nu_E)^2}}\Big)\Big|^2
\\
&&+\Big(\frac{u_\eta(|x|)}{|x|}\Big)^{2(n-1)}\,u_\eta'(|x|)^2\,\big(1-(\hat{x}\cdot\nu_E)^2\big)\,
\Big|\tau_1\wedge\cdots\wedge\tau_{n-1}\wedge\hat{x}\Big|^2
\\
&\le&1+\Big(\frac{u_\eta(|x|)}{|x|}\Big)^{2(n-1)}\,u_\eta'(|x|)^2\,\big(1-(\hat{x}\cdot\nu_E)^2\big)\,,
\end{eqnarray*}
from which \eqref{all} follows thanks to $\sqrt{1+a}\le1+\sqrt{a}$ for $a\ge 0$.
\end{proof}
\subsection{Nucleation lemma}\label{section nucleation} The following nucleation lemma can be found, with slightly different statements, in \cite[VI(13)]{Almgren76} or in \cite[Lemma 29.10]{maggiBOOK}.
\begin{lemma}
\label{statement nucleation} Let $\xi(n)$ be the constant of Besicovitch's covering theorem in $\mathbb{R}^{n+1}$. If $T$ is closed, $A=\mathbb{R}^{n+1}\setminus T$, $0<|E|<\infty$, $P(E;A)<\infty$, $\tau>0$, and
\[
\sigma=\min\Big\{\frac{|E\setminus I_\tau(T)|}{\tau\,P(E;A)},\frac{\xi(n)}{n+1}\Big\}>0
\]
then there exists $x\in E^{(1)}\setminus I_\tau(T)$ such that
\[
|E\cap B_\tau(x)|\ge\Big(\frac{\sigma}{2\xi(n)}\Big)^{n+1}\,\tau^{n+1}\,.
\]
\end{lemma}
\begin{proof}
We argue by contradiction, assuming that
\begin{equation}
\label{nucl 1}
|E\cap B_\tau(x)|<\Big(\frac{\sigma}{2\xi(n)}\Big)^{n+1}\,\tau^{n+1}\qquad\forall x\in E^{(1)}\setminus I_\tau(T)\,.
\end{equation}
Setting $\a=\xi(n)/\sigma$, so that $\a\ge n+1$, we claim that \eqref{nucl 1} implies the existence, for each $x\in E^{(1)}\setminus I_\tau(T)$, of $\tau_x\in(0,\tau)$ such that
\begin{equation}
\label{nucl 2}
P(E;B_{\tau_x}(x))>\frac\a\tau\,|E\cap B_{\tau_x}(x)|\,.
\end{equation}
In turn \eqref{nucl 2} leads to a contradiction: indeed, by applying Besicovitch's theorem to $\{\mathrm{cl}\,(B_{\tau_x}(x)):x\in E^{(1)}\setminus I_\tau(T)\}$ we find an at most countable subset $I$ of $E^{(1)}\setminus I_\tau(T)$ such that $\{\mathrm{cl}\,(B_{\tau_x}(x))\}_{x\in I}$ is a disjoint family and
\begin{eqnarray*}
|E\setminus I_\tau(T)|&\le&\xi(n)\,\sum_{x\in I}|E\cap B_{\tau_x}(x)|
<\frac{\xi(n)\,\tau}\a\,\sum_{x\in I}P(E;B_{\tau_x}(x))
\\
&\le&\frac{\xi(n)\,\tau\,P(E;A)}\a
=\tau\,\sigma\,P(E;A)\le|E\setminus I_\tau(T)|\,,
\end{eqnarray*}
a contradiction. We show that \eqref{nucl 1} implies \eqref{nucl 2}: indeed, if \eqref{nucl 1} holds but \eqref{nucl 2} fails, then there exists $x\in E^{(1)}\setminus I_\tau(T)$ such that, setting $m(r)=|E\cap B_r(x)|$ for $r>0$,
\begin{equation}
\label{nucl 3}
\mbox{$m>0$ on $(0,\infty)$}\,,\qquad m(\tau)<\Big(\frac{\tau}{2\a}\Big)^{n+1}
\end{equation}
and $(\a/\tau)\,m(r)\ge P(E;B_r(x))$ for every $r\in(0,\tau)$. Adding to both sides $\H^n(\partial B_r(x)\cap E)$, which equals $m'(r)$ for a.e. $r>0$ by the coarea formula, we obtain
\begin{equation}
\label{nucl 4}
m'(r)+\frac\a\tau\,m(r)\ge P(E\cap B_r(x))\ge m(r)^{n/(n+1)}\,,\qquad\mbox{for a.e. $r\in(0,\tau)$}\,,
\end{equation}
where in the last inequality we have used that $P(F)\ge |F|^{n/(n+1)}$ whenever $0<|F|<\infty$; see e.g. \cite[Proposition 12.35]{maggiBOOK}. Since $m$ is non-decreasing, the second condition in \eqref{nucl 3} gives $m(r)\le m(\tau)\le(\tau/2\a)^{n+1}$ for every $r\in(0,\tau)$, which (as $m>0$ on $(0,\infty)$) is equivalent to
\begin{equation}\nonumber
\frac\a\tau\,m(r)\le\frac12\,m(r)^{n/(n+1)}\qquad\forall r\in(0,\tau)\,.
\end{equation}
Thus \eqref{nucl 4} gives $m'(r)\ge(1/2)\,m(r)^{n/(n+1)}$ for a.e. $r\in(0,\tau)$, and integrating this differential inequality yields
$m(\tau)\ge(\tau/2(n+1))^{n+1}\ge(\tau/2\a)^{n+1}$ as $\a\ge n+1$, a contradiction with \eqref{nucl 3}.
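For completeness, we spell out the integration of the last differential inequality: since $m>0$ on $(0,\infty)$, for a.e. $r\in(0,\tau)$ we have
\[
\frac{d}{dr}\,m(r)^{1/(n+1)}=\frac{m'(r)}{(n+1)\,m(r)^{n/(n+1)}}\ge\frac1{2\,(n+1)}\,,
\]
and since $m$ is non-decreasing with $m(0^+)=0$, this gives $m(\tau)^{1/(n+1)}\ge\tau/2(n+1)$, that is, $m(\tau)\ge(\tau/2(n+1))^{n+1}$.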
\end{proof}
\subsection{Isoperimetry, lower bounds and collapsing}\label{section spherical collapsing} Given an $L^1$-converging sequence of sets of finite perimeter $\{E_j\}_j$, the boundary of the $L^1$-limit set $E$ will be (in general) strictly included in $K={\rm spt}\,\mu$, where $\mu$ is the weak-star limit of the Radon measures defined by the boundaries of the $E_j$'s. In the next lemma we show that, under some mild bounds on $\mu$ and $E_j$, if $\mu$ is absolutely continuous with respect to $\H^n \llcorner K$ then the Radon-Nikod\'ym density $\theta$ of $\mu$ is at least $1$ at $\H^n$-a.e. point of $K$, and is actually at least $2$ at $\H^n$-a.e. point of $K\setminus\partial^*E$ (that is, cancellation can happen only where boundaries are collapsing).
\begin{lemma}[Collapsing lemma]\label{lemma llb}
Let $K$ be a relatively compact and $\H^n$-rectifiable set in $\Omega$, let $E\subset\Omega$ be a set of finite perimeter with $\Omega\cap\partial^*E\subset K$, and let $\{E_j\}_j\subset\mathcal{E}$ be such that $E_j\to E$ in $L^1_{{\rm loc}}(\Omega)$, and $\mu_j\stackrel{*}{\rightharpoonup}\mu$ as Radon measures in $\Omega$, where $\mu_j=\H^n\llcorner(\Omega\cap\partial E_j)$ and $\mu=\theta\,\H^n\llcorner K$ for a Borel function $\theta$. If $\Omega'\subset\Omega$ and $r_*>0$ are such that for every $x\in K\cap\Omega'$ and a.e. $r<r_*$ with $B_r(x)\subset\subset\Omega'$ we have
\begin{eqnarray}
\label{llb1}
\mu(B_r(x))&\ge&c(n)\,r^n\,,
\\
\label{llb2}
\liminf_{j\to\infty}\H^n(B_r(x)\cap\partial E_j)&\le& C(n)\,\liminf_{j\to\infty}\,\H^n(\partial B_r(x)\setminus A_{r,j}^0)\,,
\end{eqnarray}
where $A_{r,j}^0$ denotes an $\H^n$-maximal connected component of $\partial B_r(x)\setminus\partial E_j$, then $\theta(x)\ge 1$ for $\H^n$-a.e. $x\in K\cap\Omega'$, and $\theta(x)\ge 2$ for $\H^n$-a.e. $x\in (K\setminus\partial^*E)\cap\Omega'$.
\end{lemma}
The bound $\theta\ge 1$ follows by arguing exactly as in \cite[Proof of Theorem 2, Step three]{DLGM}, and has nothing to do with the fact that the measures $\mu_j$ are defined by boundaries; the latter information is in turn crucial in obtaining the bound $\theta\ge 2$, and requires a new argument. For the sake of clarity, we also give the details of the $\theta\ge 1$ bound, which in turn is based on spherical isoperimetry.
\begin{lemma}[Spherical isoperimetry]
\label{statement isoperimetry on spheres} Let $\Sigma\subset\mathbb{R}^{n+1}$ denote a spherical cap\footnote{That is, $\Sigma=\SS^n\cap H$ where $H$ is an open half-space of $\mathbb{R}^{n+1}$.} in the $n$-dimensional unit sphere $\SS^n$, possibly with $\Sigma=\SS^n$. If $K$ is a compact set in $\mathbb{R}^{n+1}$ and $\{A^h\}_{h=0}^\infty$ is the family of the open connected components of $\Sigma\setminus K$, ordered so to have $\H^n(A^h)\ge\H^n(A^{h+1})$, then
\begin{equation}
\label{spherical isoperimetry}
\H^n(\Sigma\setminus A^0)\le C(n)\,\H^{n-1}(\Sigma\cap K)^{ n/(n-1)}\,.
\end{equation}
Moreover, if $\Sigma=\SS^n$, $\sigma_n=\H^n(\SS^n)$ and $\H^{n-1}(\SS^n\cap K)<\infty$, then each $A^h$ is a set of finite perimeter in $\SS^n$ and for every $\tau>0$ there exists $\sigma>0$ such that
\begin{equation}
\label{quant isop hp}
\min\Big\{\H^n(A^0),\H^n(A^1)\Big\}=\H^n(A^1)\ge\frac{\sigma_n}2-\sigma
\end{equation}
implies
\begin{equation}
\label{quant isop tesis}
\min\Big\{\H^{n-1}(\partial^* A^0),\H^{n-1}(\partial^* A^1)\Big\}\ge\sigma_{n-1}-\tau\,.
\end{equation}
Here $\partial^*A^h$ denotes the reduced boundary of $A^h$ in $\SS^n$.
\end{lemma}
\begin{proof}
This is \cite[Lemma 9]{DLGM}. However, \eqref{quant isop tesis} is stated in a weaker form in \cite[Lemma 9]{DLGM}, so we give the details. Arguing by contradiction, we can find $\tau>0$ and $\{K_j\}_j$ such that, for $\a=0,1$, $\H^{n-1}(\partial^*A^\a_j)\le\sigma_{n-1}-\tau$ for every $j$, but $\H^n(A_j^\a)\to\sigma_n/2$ as $j\to\infty$. Since $\sigma_n=\H^n(\SS^n)$ and $A_j^0\cap A_j^1=\emptyset$, we find that, up to extracting subsequences and for $\a=0,1$, $A_j^\a\to A^\a$ in $L^1(\SS^n)$, where $A^0\cap A^1=\emptyset$ and $A^0\cup A^1$ is $\H^n$-equivalent to $\SS^n$. By lower semicontinuity of the perimeter, $\H^{n-1}(\partial^*A^0)=\H^{n-1}(\partial^*A^1)\le\sigma_{n-1}-\tau$. Since the infimum of $\H^{n-1}(\partial^*A)$ among sets $A\subset\SS^n$ with $\H^n(A)=\sigma_n/2$ is equal to $\sigma_{n-1}$, we have reached a contradiction.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma llb}] {\it Step one}: We fix $x\in K\cap\Omega'$ such that $\H^n\llcorner(K-x)/r\stackrel{*}{\rightharpoonup} \H^n\llcorner T_xK$ as $r\to 0^+$. Setting $\nu(x)^\perp=T_xK$ for $\nu(x)\in\SS^n$, by the lower density estimate \eqref{llb1} we easily find that for every $\sigma>0$ there exists $r_0=r_0(\sigma,x)\in(0,\min\{r_*,{\rm dist}(x,\partial\Omega')\})$ such that $|(y-x)\cdot\nu(x)|<\sigma\,r$ for every $y\in K\cap B_r(x)$ and every $r<r_0$. In particular,
\[
\lim_{j \to \infty} \H^n(\partial E_j \cap \{ y \in B_r(x) \, \colon \, |(y-x) \cdot \nu(x)| > \sigma \, r\}) = 0 \qquad \mbox{for every $r \leq r_0$}\,,
\]
and thus by the coarea formula (see \cite[Equation (2.13)]{DLGM})
\begin{equation}
\label{from dlgm}
\lim_{j\to\infty}\H^{n-1}(\Sigma_{r,\sigma}^\pm\cap \partial E_j)=0\qquad\mbox{for a.e. $r\le r_0$}\,,
\end{equation}
where we have set
\begin{eqnarray*}
\Sigma_{r,\sigma}^+&=&\big\{y\in \partial B_r(x):(y-x)\cdot\nu(x)>\sigma\,r\big\}\,,
\\
\Sigma_{r,\sigma}^-&=&\big\{y\in \partial B_r(x):(y-x)\cdot\nu(x)<-\sigma\,r\big\}\,.
\end{eqnarray*}
Let $A_{r,j}^+$ be an $\H^n$-maximal connected component of $\Sigma_{r,\sigma}^+\setminus \partial E_j$, and define similarly $A_{r,j}^-$. Equations \eqref{from dlgm} and \eqref{spherical isoperimetry} imply that, for a.e. $r<r_0$,
\begin{equation}
\label{from dlgm2}
\lim_{j\to\infty}\H^n(A_{r,j}^\pm)=\H^n(\Sigma_{r,\sigma}^\pm)\,.
\end{equation}
Now let $\{A^h_{r,j}\}_{h=0}^\infty$ denote the open connected components of $\partial B_r(x)\setminus\partial E_j$, ordered by decreasing $\H^n$-measure. We claim that
\begin{equation}
\label{llb3}
\mbox{if \eqref{from dlgm2} holds, then either $A_{r,j}^+$ or $A_{r,j}^-$ is not contained in $A_{r,j}^0$}\,.
\end{equation}
Indeed, if for some $r$ we have $A_{r,j}^+\cup A_{r,j}^-\subset A_{r,j}^0$, then by \eqref{llb2} and \eqref{from dlgm2} we find
\begin{equation}
\label{ias later}
\mu(B_r(x))\le\liminf_{j\to\infty}\mu_j(B_r(x))\le C(n)\,\liminf_{j\to\infty}\H^n(\partial B_r\setminus A_{r,j}^0)\le C(n)\,r^n\,\sigma\,,
\end{equation}
a contradiction to \eqref{llb1} if $\sigma\le\sigma_0(n)$ for a suitable $\sigma_0(n)$. By \eqref{llb3} and \eqref{from dlgm2},
\begin{equation}
\label{density 1 1}
\min\Big\{\H^n(A^0_{r,j}),\H^n(A^1_{r,j})\Big\}\ge \Big(\frac{\sigma_n}2-C(n)\,\sigma\Big)\,r^n\qquad\mbox{for a.e. $r<r_0$}\,.
\end{equation}
By Lemma \ref{statement isoperimetry on spheres} and \eqref{density 1 1}, given $\tau>0$, if $\sigma$ is small enough in terms of $n$ and $\tau$, then
\begin{equation}
\label{density 1 2}
\min\Big\{\H^{n-1}(\partial^*A^0_{r,j}),\H^{n-1}(\partial^*A^1_{r,j})\Big\}\ge \big(\sigma_{n-1}-\tau\big)\,r^{n-1}\qquad\mbox{for a.e. $r<r_0$}\,,
\end{equation}
where $\partial^*A^{\a}_{r,j}$ is the reduced boundary of $A^{\a}_{r,j}$ as a subset of $\partial B_r(x)$. Since $A_{r,j}^0$ is a connected component of $\partial B_r(x)\setminus \partial E_j$ we have
\begin{equation}
\label{llb4}
\big(\sigma_{n-1}-\tau\big)\,r^{n-1}\le\H^{n-1}(\partial^*A^0_{r,j})\le \H^{n-1}(\partial B_r(x)\cap \partial E_j)\,.
\end{equation}
Now if $f_j(r)=\mu_j(B_r(x))$ and $f(r)=\mu(B_r(x))$ then by the coarea formula we easily find that $f_j\to f$ a.e. with $\liminf_{j\to\infty}f_j'(r)\le f'(r)\le Df$, where $Df$ denotes the distributional derivative of $f$. Hence, letting $j\to\infty$ and $\tau\to 0^+$ in \eqref{llb4} we obtain $Df\ge \sigma_{n-1}\,r^{n-1}\,dr$ on $(0,r_0)$. As $\sigma_{n-1}=n\,\omega_n$, we conclude that $\theta(x)\ge 1$. We stress once more that so far we have just followed the argument of \cite[Proof of Theorem 2, Step three]{DLGM}.
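For the reader's convenience, we detail the last implication: integrating $Df\ge\sigma_{n-1}\,r^{n-1}\,dr$ on $(0,r_0)$ gives
\[
f(r)=\mu(B_r(x))\ge\frac{\sigma_{n-1}}n\,r^n=\omega_n\,r^n\qquad\forall r\in(0,r_0)\,,
\]
where we have used that $\sigma_{n-1}=\H^{n-1}(\SS^{n-1})=n\,\omega_n$, $\omega_n$ being the volume of the unit ball of $\mathbb{R}^n$. Since $\mu=\theta\,\H^n\llcorner K$ with $K$ $\H^n$-rectifiable, $\theta(x)=\lim_{r\to0^+}\mu(B_r(x))/(\omega_n\,r^n)$ at $\H^n$-a.e. $x\in K$, and thus $\theta(x)\ge1$.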
\medskip
\noindent {\it Step two}: We use the boundary structure to show that $\theta\ge2$ $\H^n$-a.e. on $\Omega'\cap(K\setminus\partial^*E)$. Since $\{E^{(0)}\,,E^{(1)}\,,\partial^*E\}$ is an $\H^n$-a.e. partition of $\mathbb{R}^{n+1}$, we can assume that $x\in (E^{(0)}\cup E^{(1)})\cap K\cap\Omega'$.
We consider first the case $x\in E^{(0)}$. Given $\sigma>0$, up to decreasing $r_0$,
\begin{equation}
\label{density 2 1}
\sigma\, r_0^{n+1}\ge\lim_{j\to\infty}|E_j\cap B_{r_0}(x)|=\lim_{j\to\infty}\int_0^{r_0}\,\H^n(E_j\cap\partial B_r(x))\,dr\,.
\end{equation}
Let us consider the measurable set $I_j\subset(0, r_0)$ given by
\[
I_j=\big\{r\in(0,r_0):A_{r,j}^0\cup A_{r,j}^1\subset\partial B_r(x)\setminus\mathrm{cl}\,(E_j)\big\}\,.
\]
We claim that
\begin{equation}
\label{density 2 2}
\H^{n-1}(\partial^*A_{r,j}^0\cap \partial^*A_{r,j}^1)=0\qquad\forall r\in I_j\,.
\end{equation}
Indeed, if $r\in I_j$, then $A_{r,j}^0$, $A_{r,j}^1$, and $\partial B_r(x)\cap E_j$ are disjoint sets of finite perimeter in $\partial B_r(x)$, and in particular
\begin{eqnarray*}
\nu_{A_{r,j}^0}&=&-\nu_{A_{r,j}^1}\,,\hspace{0.8cm}\qquad\mbox{$\H^{n-1}$-a.e. on $\partial^*A_{r,j}^0\cap\partial^*A_{r,j}^1$}\,,
\\
\nu_{A_{r,j}^0}&=&-\nu_{\partial B_r(x)\cap E_j}\,\qquad\mbox{$\H^{n-1}$-a.e. on $\partial^*A_{r,j}^0\cap\partial^*[\partial B_r(x)\cap E_j]$}\,,
\\
\nu_{A_{r,j}^1}&=&-\nu_{\partial B_r(x)\cap E_j}\,\qquad\mbox{$\H^{n-1}$-a.e. on $\partial^*A_{r,j}^1\cap\partial^*[\partial B_r(x)\cap E_j]$}\,.
\end{eqnarray*}
At the same time, since $\{A_{r,j}^h\}_{h=0}^\infty$ are connected components of $\partial B_r(x)\setminus \partial E_j$,
\[
\partial^*A_{r,j}^h\subset\partial^*[\partial B_r(x)\cap E_j]\qquad\mbox{modulo $\H^n$}
\]
and thus $\H^{n-1}$-a.e. on $\partial^*A_{r,j}^0\cap\partial^*A_{r,j}^1$ we have
\[
\nu_{\partial B_r(x)\cap E_j}=-\nu_{A_{r,j}^0}=\nu_{A_{r,j}^1}=-\nu_{\partial B_r(x)\cap E_j}
\]
a contradiction. By \eqref{density 1 2} and \eqref{density 2 2}, given $\tau>0$ and provided $\sigma$ is small enough in terms of $n$ and $\tau$, for a.e. $r\in I_j$ we find
\begin{eqnarray*}
f_j'(r)&\ge&\H^{n-1}(\partial B_r(x)\cap \partial E_j)\ge\H^{n-1}(\partial^*A_{r,j}^0\cup\partial^*A_{r,j}^1)
\\
&=&\H^{n-1}(\partial^*A_{r,j}^0)+\H^{n-1}(\partial^*A_{r,j}^1)
\ge2\,\big(\sigma_{n-1}-\tau\big)\,r^{n-1}\,.
\end{eqnarray*}
Hence,
\begin{eqnarray}\nonumber
f_j( r_0)&\ge& 2\,\big(\sigma_{n-1}-\tau\big)\,\frac{ r_0^n}n-C(n)\int_{(0, r_0)\setminus I_j}r^{n-1}\,dr
\\\label{density 2 3}
&\ge&2\,\big(\sigma_{n-1}-\tau\big)\,\frac{ r_0^n}n-C(n)\, r_0^{1/n}\,\Big(\int_{(0, r_0)\setminus I_j}r^n\,dr\Big)^{(n-1)/n}\,.
\end{eqnarray}
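In \eqref{density 2 3}, the second inequality is H\"older's inequality with exponents $n/(n-1)$ and $n$:
\[
\int_{(0, r_0)\setminus I_j}r^{n-1}\,dr\le\big|(0, r_0)\setminus I_j\big|^{1/n}\,\Big(\int_{(0, r_0)\setminus I_j}r^{n}\,dr\Big)^{(n-1)/n}\le r_0^{1/n}\,\Big(\int_{(0, r_0)\setminus I_j}r^{n}\,dr\Big)^{(n-1)/n}\,.
\]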
We notice that for a.e. $r \in \left( 0, r_0 \right) \setminus I_j$, \eqref{density 1 1} gives
\[
\H^n(E_j\cap\partial B_r(x))\ge\min\Big\{\H^n(A^0_{r,j}),\H^n(A^1_{r,j})\Big\}\ge \Big(\frac{\sigma_n}2-C(n)\,\sigma\Big)\,r^n\,,
\]
so that \eqref{density 2 1} implies
\begin{equation}
\label{density 2 4}
\sigma\, r_0^{n+1}\ge c(n)\,\limsup_{j\to\infty}\int_{(0, r_0)\setminus I_j}\,r^n\,dr\,.
\end{equation}
If we combine \eqref{density 2 3} and \eqref{density 2 4} and let $j\to\infty$, then we find
\[
f(r_0)=\lim_{j\to\infty}f_j( r_0)\ge 2\,\big(\sigma_{n-1}-\tau\big)\,\frac{ r_0^n}n-C(n)\, r_0^{1/n}\,\Big(\sigma\, r_0^{n+1}\Big)^{(n-1)/n}\,.
\]
Dividing by $ r_0^n$ and letting $ r_0\to 0^+$, $\sigma\to 0^+$ and $\tau\to 0^+$ we find $\theta(x)\ge 2$ whenever $x\in E^{(0)}\cap K\cap\Omega'$. The case when $x\in E^{(1)}$ is analogous and the details are omitted.
\end{proof}
\section{Existence of generalized minimizers: Proof of Theorem \ref{thm lsc}}\label{section existence of generalized minimizers} Given the length of the proof, we provide a short overview. In step one, we check that $\psi(\varepsilon)<\infty$ by using the open neighborhoods of a minimizer $S$ of $\ell$ as comparison sets for $\psi(\varepsilon)$. We remark that this is the only point of the proof where \eqref{hp on W and C} is used. It is important here to allow for sufficiently non-smooth sets in the competition class $\mathcal{E}$: indeed, minimizers of $\ell$ are known to be smooth only outside of a closed $\H^n$-negligible set in arbitrary dimension. Once $\psi(\varepsilon)<\infty$ is established, we consider a minimizing sequence $\{E_j\}_j$ for $\psi(\varepsilon)$, so that $E_j\in\mathcal{E}$, $|E_j|=\varepsilon$, $\Omega\cap\partial E_j$ is $\mathcal{C}$-spanning $W$ and
\begin{equation}
\label{minimizing sequence}
\H^n(\Omega\cap\partial E_j)\le \H^n(\Omega\cap\partial F)+\frac1j\qquad\forall F\in\mathcal{E}\,,\,|F|=\varepsilon\,,\,\mbox{$\Omega\cap\partial F$ is $\mathcal{C}$-spanning $W$}\,.
\end{equation}
We want to apply \eqref{minimizing sequence} to the comparison sets constructed in section \ref{section five competitors}, but, in general, those local variations do not preserve the volume constraint. A family of volume-fixing variations acting uniformly on $\{E_j\}_j$ is constructed through the nucleation lemma (Lemma \ref{statement nucleation}) following some ideas introduced by Almgren in the existence theory of minimizing clusters \cite{Almgren76}; see steps two and three. In step four we exploit cup and cone competitors to show that, up to extracting subsequences, $\H^n\llcorner(\Omega\cap\partial E_j)\stackrel{*}{\rightharpoonup}\mu=\theta\,\H^n\llcorner K$ as Radon measures in $\Omega$, and $E_j\to E$ in $L^1_{{\rm loc}}(\Omega)$, for a pair $(K,E)\in\mathcal{K}$ and for an upper semicontinuous function $\theta\ge1$ on $K$. An application of Lemma \ref{lemma llb} shows that $\theta\ge 2$ $\H^n$-a.e. on $K\setminus\partial^*E$, thus proving $\psi(\varepsilon)\ge\mathcal F(K,E)$. In order to show that $\psi(\varepsilon)=\mathcal F(K,E)$, and thus that $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$, we need to exclude that $\Omega\cap\partial E_j$ concentrates area by folding against $K$, at infinity, or against the wire frame. By using slab competitors we prove that $\Omega\cap\partial E_j$, in its convergence towards $K$, cannot fold at all near points in $\partial^*E$, and can fold at most twice near points in $K\cap(E^{(0)}\cup E^{(1)})$ (step five). In step six, concentration of area at the boundary is ruled out by a deformation argument based on Lemma \ref{lemma close by Lipschitz at boundary}. Finally, in step seven, we exclude area (and volume) concentration at infinity by using exterior cup competitors to construct a uniformly bounded minimizing sequence.
\begin{proof}[Proof of Theorem \ref{thm lsc}] \noindent {\it Step one}: We show that
\begin{equation}
\label{psi eps basic bounds}
\psi(\varepsilon)\le 2\,\ell+C(n)\,\varepsilon^{n/(n+1)}\qquad\forall\varepsilon>0\,.
\end{equation}
Let $S$ be a minimizer of $\ell$, and let $\eta_0>0$ be such that \eqref{hp on W and C} holds. If $\eta\in(0,\eta_0)$, then the open $\eta$-neighborhood $U_\eta(S)$ of $S$ is such that $\Omega\cap\partial U_\eta(S)$ is $\mathcal{C}$-spanning $W$: otherwise we could find $\eta\in(0,\eta_0)$ and $\gamma\in\mathcal{C}$ such that $\gamma\cap \partial U_\eta(S)=\emptyset$. Since $\gamma$ is connected, we would either have $\gamma\subset\{x:{\rm dist}(x,S)>\eta\}$, against the fact that $S$ is $\mathcal{C}$-spanning; or we would have $\gamma\subset U_{\eta}(S)$, against \eqref{hp on W and C}. Hence $\Omega\cap\partial U_\eta(S)$ is $\mathcal{C}$-spanning $W$.
As proved in \cite{DLGM}, $S$ is $\H^n$-rectifiable. Moreover, as shown in Theorem \ref{thm density for S fine} in the appendix, we have
\begin{equation}
\label{mememe}
\H^n(S\cap B_r(x))\ge c(n)\,r^n\qquad\forall x\in\mathrm{cl}\,(S)\,,r<\rho_0
\end{equation}
where $\rho_0$ depends on $W$, so that $\H^n(S)<\infty$ implies that $\mathrm{cl}\,(S)$ is compact. This density estimate has two more consequences: first, combined with \cite[Corollary 6.5]{maggiBOOK}, it implies $\H^n(\mathrm{cl}\,(S)\setminus S)=0$; second, it allows us to exploit \cite[Theorem 2.104]{AFP} to find
\begin{equation}
\label{mink step1}
|U_\eta(S)|=2\,\eta\,\H^n(\mathrm{cl}\,(S))+{\rm o}(\eta)=2\,\eta\,\H^n(S)+{\rm o}(\eta)\qquad\mbox{as $\eta\to 0^+$}\,.
\end{equation}
By the coarea formula for Lipschitz maps applied to the distance function from $S$, see \cite[Theorem 18.1, Remark 18.2]{maggiBOOK}, we have
\[
|U_{\eta}(S)\cap A|=\int_0^\eta\,P(U_t(S);A)\,dt=\int_0^\eta\,\H^n(A\cap\partial U_t(S))\,dt\,,\qquad\mbox{$\forall A\subset\mathbb{R}^{n+1}$ open}\,,
\]
so that $U_\eta(S)$ is a set of finite perimeter in $\mathbb{R}^{n+1}$ and $\H^n(\partial U_\eta(S)\setminus\partial^*U_\eta(S))=0$ for a.e. $\eta>0$. Summarizing, we have proved that, for a.e. $\eta\in(0,\eta_0)$,
\[
F_\eta=\Omega\cap U_\eta(S)\in\mathcal{E}\,,\qquad\mbox{$\Omega\cap\mathrm{cl}\,(\partial^*F_\eta)=\Omega\cap\partial F_\eta$ is $\mathcal{C}$-spanning $W$}\,,
\]
and, by \eqref{mink step1},
\[
f(\eta)=|F_\eta|=\int_0^\eta P(F_t;\Omega)\,dt=\int_0^\eta\,P(U_t(S);\Omega)\,dt\le 2\,\eta\,\H^n(S)+{\rm o}(\eta)\,.
\]
Notice that $f$ is absolutely continuous with $f(\eta)=\int_0^\eta\,f'(t)\,dt$ and $f'(t)=P(F_t;\Omega)$ for a.e. $t\in(0,\eta)$. Hence, for every $\eta>0$ there exist $t_1(\eta),t_2(\eta)\in(0,\eta)$ such that $f'(t_1(\eta))\le f(\eta)/\eta\,\le f'(t_2(\eta))$. Setting $F_j=F_{t_1(\eta_j)}$ for a suitable $\eta_j\to 0^+$, we get
\[
\limsup_{j\to\infty}P(F_j;\Omega)\le2\,\ell\,,
\]
where $|F_j|\to 0^+$. Finally, given $\varepsilon>0$, we pick $j$ such that $|F_j|<\varepsilon$, and construct a competitor for $\psi(\varepsilon)$ by adding to $F_j$ a disjoint ball of volume $\varepsilon-|F_j|$. In this way, $\psi(\varepsilon)\le P(F_j;\Omega)+C(n)\,\big(\varepsilon-|F_j|\big)^{n/(n+1)}$, and \eqref{psi eps basic bounds} is found by letting $j\to\infty$.
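Here the constant $C(n)$ can be taken equal to $(n+1)\,\omega_{n+1}^{1/(n+1)}$, where $\omega_{n+1}$ denotes the volume of the unit ball of $\mathbb{R}^{n+1}$: indeed, a ball $B$ of volume $v$ has radius $\rho=(v/\omega_{n+1})^{1/(n+1)}$, and thus perimeter
\[
\H^n(\partial B)=(n+1)\,\omega_{n+1}\,\rho^n=(n+1)\,\omega_{n+1}^{1/(n+1)}\,v^{n/(n+1)}\,.
\]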
\medskip
Since $\psi(\varepsilon) < \infty$, we can now consider a minimizing sequence $\{E_j\}_{j=1}^{\infty}$ for $\psi(\varepsilon)$. Given that $P(E_j)\le\H^n(\partial\Omega)+\H^n(\Omega\cap\partial E_j)\le\H^n(\partial\Omega)+\psi(\varepsilon)+1$ for $j$ large, and that $|E_j| = \varepsilon$ for every $j$, there exist a set of finite perimeter $E\subset\Omega$ and a Radon measure $\mu$ in $\Omega$ such that, up to extracting subsequences,
\begin{eqnarray}\label{almostthere}
E_j\to E\quad\mbox{in $L^1_{{\rm loc}}(\Omega)$}\,,\quad
\mu_j=\H^n\llcorner(\Omega\cap\partial E_j)\stackrel{*}{\rightharpoonup}\mu\quad\mbox{as Radon measures on $\Omega$}\,,
\end{eqnarray}
as $j\to\infty$, see e.g. \cite[Section 12.4]{maggiBOOK}. We consider the set, relatively closed in $\Omega$, defined by
\[
K=\Omega \cap {\rm spt}\mu=\big\{x\in\Omega:\mu(B_r(x))>0\quad\forall r>0\big\}\,,
\]
and claim that
\begin{eqnarray}\label{proof that KE belongs to KK 1}
\mbox{$K$ is $\mathcal{C}$-spanning $W$}\,,\qquad \Omega\cap\partial^*E\subset K\,.
\end{eqnarray}
Indeed, the first claim in \eqref{proof that KE belongs to KK 1} is obtained by applying Lemma \ref{statement K spans} to $K_j=\Omega\cap\partial E_j$; and if $x\in\Omega\cap \partial^*E$ and $B_r(x)\subset\Omega$, then
\[
0<P(E;B_r(x))\le\liminf_{j\to\infty}P(E_j;B_r(x))\le\liminf_{j\to\infty}\mu_j(B_r(x))\le\mu(\mathrm{cl}\,(B_{r}(x)))
\]
so that $x\in K$. Notice that, at this stage, we still do not know if $(K,E)\in\mathcal{K}$: we still need to show that $K$ is $\H^n$-rectifiable and, possibly up to Lebesgue negligible modifications, that $E$ is open with $\Omega\cap\mathrm{cl}\,(\partial^*E)=\Omega\cap\partial E$. Moreover, we just have $|E|\le\varepsilon$ (possible volume loss at infinity), and we know nothing about the structure of $\mu$.
\bigskip
\noindent {\it Step two}: We show the existence of $\tau>0$ such that for every $E_j$ there exist $x_j^1,x_j^2\in\mathbb{R}^{n+1}$ such that $\{\mathrm{cl}\,(B_{2\tau}(x_j^1)),\mathrm{cl}\,(B_{2\tau}(x_j^2)),W\}$ is disjoint and
\begin{equation}\label{lsc good balls}
|E_j\cap B_\tau(x_j^1)|=\k_1\,,\qquad |E_j\cap B_\tau(x_j^2)|=\k_2\,,
\end{equation}
for some $\k_1,\k_2\in(0,|B_\tau|/2]$ depending on $n$, $\tau$, $\varepsilon$ and $\ell$ only. With $\tau_0$ as in \eqref{hp on W and C 0}, for $M\in\mathbb{N} \setminus \{0\}$ to be chosen later on, and by compactness of $W$, we can pick $\tau>0$ so that
\begin{equation}
\label{lsc tau req}
(M+1)\,\tau<\tau_0\,,\qquad |B_{M\,\tau}|<\frac\e4\,,\qquad |I_{(M+1)\tau}(W)\setminus W|<\frac\e2\,.
\end{equation}
The value $\sigma$ in Lemma \ref{statement nucleation} corresponding to $E_j$ and $T=I_{M\,\tau}(W)$ is given by
\[
\min\Big\{\frac{|E_j\setminus I_\tau(T)|}{\tau\,P(E_j;\mathbb{R}^{n+1}\setminus T)},\frac{\xi(n)}{n+1}\Big\}
\ge\min\Big\{\frac{\varepsilon/2}{\tau\,(\psi(\varepsilon)+1)},\frac{\xi(n)}{n+1}\Big\}>0\,,
\]
since $|E_j\setminus I_\tau(T)|\ge\varepsilon/2$ by \eqref{lsc tau req}, and since $P(E_j;\Omega)\le\psi(\varepsilon)+1$. Therefore, setting
\[
\sigma_1 = \min\Big\{\frac{\varepsilon/2}{\tau\,(\psi(\varepsilon)+1)},\frac{\xi(n)}{n+1}\Big\}\,,
\]
an application of Lemma \ref{statement nucleation} yields $y_j\in \mathbb{R}^{n+1}\setminus I_{(M+1)\tau}(W)$ such that
\begin{eqnarray*}
|E_j\cap B_\tau(y_j)|\ge\min\Big\{\Big(\frac{\sigma_1}{2\xi(n)}\Big)^{n+1}\tau^{n+1}\,,\frac{|B_{\tau}|}2\Big\}=\k_1\,,
\end{eqnarray*}
so that $\k_1\in(0,|B_\tau|/2]$ depends on $n$, $\ell$, $\varepsilon$, and $\tau$ only (observe that this is a consequence of \eqref{psi eps basic bounds}). The continuous map $x\mapsto|E_j\cap B_\tau(x)|$ takes a value larger than $\k_1$ at $y_j\in \mathbb{R}^{n+1}\setminus I_{(M+1)\,\tau}(W)$; at the same time, by \eqref{hp on W and C 0}, $\mathbb{R}^{n+1}\setminus I_{(M+1)\,\tau}(W)$ is open and connected, therefore it is pathwise connected \cite[Corollary 5.6]{topa}, and $|E_j\cap B_\tau(x)|\to 0$ as $|x|\to\infty$ in $\mathbb{R}^{n+1}\setminus I_{(M+1)\,\tau}(W)$. Therefore we can find $x_j^1\in \mathbb{R}^{n+1}\setminus I_{(M+1)\,\tau}(W)$ such that the first identity in \eqref{lsc good balls} holds and $\{\mathrm{cl}\,(B_{(M+1)\,\tau}(x_j^1)),W\}$ is disjoint. Setting $B=\mathrm{cl}\,(B_{(M-2)\tau}(x_j^1))$, the value $\sigma$ in Lemma \ref{statement nucleation} corresponding to $E_j$ and $T=I_{\tau}(W)\cup B$ is given by
\[
\min\Big\{\frac{|E_j\setminus I_\tau(T)|}{\tau\,P(E_j;\mathbb{R}^{n+1}\setminus T)},\frac{\xi(n)}{n+1}\Big\}
\ge\min\Big\{\frac{\varepsilon/4}{\tau\,(\psi(\varepsilon)+1)},\frac{\xi(n)}{n+1}\Big\}>0\,,
\]
so that, after setting
\[
\sigma_2 = \min\Big\{\frac{\varepsilon/4}{\tau\,(\psi(\varepsilon)+1)},\frac{\xi(n)}{n+1}\Big\}\,,
\]
we can find $z_j\in\mathbb{R}^{n+1}\setminus( I_{2\tau}(W)\cup \mathrm{cl}\,(B_{(M-1)\,\tau}(x_j^1)))$ such that
\begin{eqnarray*}
|E_j\cap B_\tau(z_j)|\ge
\min\Big\{\Big(\frac{\sigma_2}{2\xi(n)}\Big)^{n+1}\tau^{n+1}\,,\frac{|B_{\tau}|}2\Big\}=\k_2\,,
\end{eqnarray*}
with $\k_2\in(0,|B_\tau|/2]$ depending on $n$, $\ell$, $\varepsilon$, and $\tau$ only.
Since $I_{2\tau}(W)$ and $\mathrm{cl}\,(B_{(M-1)\tau}(x_j^1))$ are disjoint and since $\mathbb{R}^{n+1}\setminus I_{2\tau}(W)$ is pathwise connected by \eqref{hp on W and C 0}, we easily check that $\mathbb{R}^{n+1}\setminus( I_{2\tau}(W)\cup \mathrm{cl}\,(B_{(M-1)\tau}(x_j^1)))$ is pathwise connected. By continuity,
\begin{equation}
\label{lsc automatic}
\exists \, x_j^2\in\mathbb{R}^{n+1}\setminus( I_{2\tau}(W)\cup \mathrm{cl}\,(B_{(M-1)\,\tau}(x_j^1)))
\end{equation}
such that the second identity in \eqref{lsc good balls} holds. Finally, \eqref{lsc automatic} implies that the family of sets $\{\mathrm{cl}\,(B_{(M-3)\tau}(x_j^1)),\mathrm{cl}\,(B_{2\tau}(x_j^2)),W\}$ is disjoint. We pick $M=5$ to conclude the proof.
\bigskip
\noindent {\it Step three}: In this step we show that \eqref{minimizing sequence} can be modified to allow for comparison with local variations $F_j$ of $E_j$ that do not necessarily preserve the volume constraint. More precisely, we prove the existence of positive constants $r_*$ and $C_*$ (depending on the whole sequence $\{E_j\}_j$, and thus uniform in $j$) such that if $x\in\Omega$, $r<r_*$ and $\{F_j\}_j$ is an {\bf admissible local variation of $\{E_j\}_j$ in $B_r(x)$}, in the sense that
\begin{equation}
\label{admissible variation}
F_j\in\mathcal{E}\,,\qquad F_j\Delta E_j\subset\subset B_r(x)\,,
\qquad
\mbox{$\Omega\cap\partial F_j$ is $\mathcal{C}$-spanning $W$}\,,
\end{equation}
(notice that we do not require $B_r(x)\subset\Omega$), then
\begin{equation}
\label{volume fixing variation inequality}
\H^n(\Omega\cap\partial E_j)\le \H^n(\Omega\cap\partial F_j)+C_*\,\Big||E_j|-|F_j|\Big|+\frac1j\,.
\end{equation}
We first claim that if $B_j \subset \Omega$ is a ball with ${\rm dist}(B_j,B_r(x))>0$, $\zeta:\Omega\to\Omega$ is a diffeomorphism with $\zeta(B_j)\subset B_j$ and $\{\zeta\ne{\rm id}\,\}\subset\subset B_j$, and if
\begin{equation}
\label{form of Gj}
G_j=\Big(F_j\cap B_r(x)\Big)\cup\Big(\zeta(E_j)\cap B_j\Big)\cup \Big(E_j\setminus (B_j\cup B_r(x))\Big)
\end{equation}
then $G_j\in\mathcal{E}$ and $\Omega\cap\partial G_j$ is $\mathcal{C}$-spanning $W$. The fact that $G_j$ is open is obvious since $G_j$ is equal to $E_j$ in a neighborhood of $\Omega\setminus (B_r(x)\cup B_j)$, to $F_j$ in a neighborhood of $B_r(x)$, and to $\zeta(E_j)$ in a neighborhood of $B_j$, where $E_j$, $F_j$ and $\zeta(E_j)$ are open, and where ${\rm dist}(B_j,B_r(x))>0$; this also shows that $\partial G_j$ is equal to $\partial E_j$ in a neighborhood of $\Omega\setminus(B_r(x)\cup B_j)$, to $\partial F_j$ in a neighborhood of $B_r(x)$, and to $\partial\zeta(E_j)=\zeta(\partial E_j)$ in a neighborhood of $B_j$, so that $\Omega\cap\partial G_j$ is $\H^n$-rectifiable and, thanks to \eqref{admissible variation} and Lemma \ref{statement spanning is close by Lipschitz maps}, that $\Omega\cap\partial G_j$ is $\mathcal{C}$-spanning $W$. Having proved the claim, we only have to construct sets $G_j$ as in \eqref{form of Gj} and such that
\begin{equation}
\label{lsc Gj properties 1}
|G_j|=\varepsilon\,,\qquad
\H^n(\Omega\cap\partial G_j)\le\H^n(\Omega\cap\partial F_j)+C_*\,\big||E_j|-|F_j|\big|\,,
\end{equation}
in order to deduce \eqref{volume fixing variation inequality} from \eqref{minimizing sequence}. To this aim, let $\{x_j^k\}_{k=1,2}$ be as in step two: the sets $\{(E_j-x_j^k)\cap B_\tau(0)\}_j$ are contained in $B_\tau(0)$ and have uniformly bounded perimeters, so that, up to extracting a subsequence, for each $k=1,2$ there exists a set of finite perimeter $E_*^k\subset B_\tau(0)$ such that $(E_j-x_j^k)\cap B_\tau(0)\to E_*^k$ in $L^1(\mathbb{R}^{n+1})$. The crucial point is that, by \eqref{lsc good balls} and since $\k_k\in(0,|B_\tau(0)|/2]$, we must have
\[
B_\tau(0)\cap\partial^*E_*^k\ne\emptyset\,.
\]
Hence, by arguing as in \cite[Section 29.6]{maggiBOOK}, we can find positive constants $C_*'$ and $\varepsilon_*$ such that for every set of finite perimeter $E'\subset B_\tau(0)$ with
\[
|E'\Delta E_*^k|<\varepsilon_*\,,
\]
there exists a $C^1$-map $\Phi_k:(-\varepsilon_*,\varepsilon_*)\times B_\tau(0)\to B_\tau(0)$ such that, for each $|v|<\varepsilon_*$: (i) $\Phi_k(v,\cdot)$ is a diffeomorphism with $\{\Phi_k(v,\cdot)\ne{\rm Id}\,\}\subset\subset B_\tau(0)$; (ii) $|\Phi_k(v,E')|=|E'|+v$; (iii) if $\Sigma$ is an $\H^n$-rectifiable set in $B_\tau(0)$, then
\[
\Big|\H^n(\Phi_k(v,\Sigma))-\H^n(\Sigma)\Big|\le C_*'\,\H^n(\Sigma)\,|v|\,.
\]
By taking $E'=(E_j-x_j^k)\cap B_\tau(0)$ (for $j$ large enough), by composing the maps $\Phi_k$ with a translation by $x_j^k$, and then by extending the resulting maps as the identity map outside of $B_\tau(x_j^k)$, we prove the existence of $C^1$-maps $\Psi_k:(-\varepsilon_*,\varepsilon_*)\times \mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$ such that, for each $|v|<\varepsilon_*$: (i) $\Psi_k(v,\cdot)$ is a diffeomorphism with $\{\Psi_k(v,\cdot)\ne{\rm Id}\,\}\subset\subset B_\tau(x_j^k)$; (ii) $|\Psi_k(v,E_j)|=|E_j|+v$; (iii) if $\Sigma$ is an $\H^n$-rectifiable set in $\mathbb{R}^{n+1}$, then
\[
\Big|\H^n(\Psi_k(v,\Sigma))-\H^n(\Sigma)\Big|\le C_*'\,\H^n(\Sigma)\,|v|\,.
\]
Finally, we set
\[
r_*=\min\Big\{\tau,\Big(\frac{\varepsilon_*}{2\,\omega_{n+1}}\Big)^{1/(n+1)}\Big\}\,,\qquad B_j=B_{\tau}(x_j^{k(j)})
\]
where $k=k(j)\in\{1,2\}$ is selected so that ${\rm dist}(B_r(x),B_j)>0$ (this is possible because $r_*\le\tau$ and $\{\mathrm{cl}\,(B_{2\tau}(x_j^1)),\mathrm{cl}\,(B_{2\tau}(x_j^2))\}$ is disjoint). We finally define $G_j$ by \eqref{form of Gj} with
\[
\zeta= \Psi_{k(j)}(v_j,\cdot)\,,\qquad v_j=|E_j\cap B_r(x)|-|F_j\cap B_r(x)|\,,
\]
as we are allowed to do since $E_j\Delta F_j\subset\subset B_r(x)$ and thus $|v_j|\le \omega_{n+1}\,r_*^{n+1}\le\varepsilon_*/2$. To prove \eqref{lsc Gj properties 1}: first, we have $G_j\Delta F_j\subset\subset\Omega\setminus\mathrm{cl}\,(B_r(x))$, while property (ii) of $\Psi_{k(j)}$ gives
\begin{eqnarray*}
|G_j|-|E_j|&=&|\Psi_{k(j)}(v_j,E_j)\cap B_j|+|F_j\cap B_r(x)|-|E_j\cap B_j|-|E_j\cap B_r(x)|
\\
&=&|\Psi_{k(j)}(v_j,E_j)\cap B_j|-v_j-|E_j\cap B_j|=0\,;
\end{eqnarray*}
second, property (iii) applied to the $\H^n$-rectifiable set $\S=B_j\cap\partial E_j$ gives
\begin{eqnarray*}
&&\H^n(\Omega\cap\partial G_j)-\H^n(\Omega\cap\partial F_j)
\\
&=&\H^n\Big(\Psi_{k(j)}(v_j,B_j\cap\partial E_j)\Big)-\H^n(B_j\cap\partial E_j)\le C_*'\,|v_j|\,
\H^n(B_j\cap\partial E_j)
\end{eqnarray*}
so that \eqref{lsc Gj properties 1} follows by taking $C_*=C_*'\,(\psi(\varepsilon)+1)$.
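For the reader's convenience, we record explicitly how \eqref{volume fixing variation inequality} follows: since $|G_j|=\varepsilon$ and $\Omega\cap\partial G_j$ is $\mathcal{C}$-spanning $W$, $G_j$ is admissible in the infimum defining $\psi(\varepsilon)$, so that, with \eqref{minimizing sequence} understood in the form $\H^n(\Omega\cap\partial E_j)\le\psi(\varepsilon)+1/j$,
\[
\H^n(\Omega\cap\partial E_j)\le\psi(\varepsilon)+\frac1j\le\H^n(\Omega\cap\partial G_j)+\frac1j
\le\H^n(\Omega\cap\partial F_j)+C_*\,\big||E_j|-|F_j|\big|+\frac1j\,,
\]
where the last inequality is \eqref{lsc Gj properties 1}.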
\bigskip
\noindent {\it Step four}: In this step we apply \eqref{volume fixing variation inequality} to the cup and cone competitors constructed in section \ref{section five competitors} and show that $K = \Omega \cap {\rm spt}\mu$ is relatively compact in $\Omega$ and $\H^n$-rectifiable, that $\mu=\theta\,\H^n\llcorner K$ with $\theta\ge1$ on $K$ and $\theta\ge 2$ $\H^n$-a.e. on $K\setminus\partial^*E$, and, finally, that $(K,E)\in\mathcal{K}$. To this end, pick $x\in K$, set $d(x)={\rm dist}(x,W)>0$, and let
\[
f_j(r)=\mu_j(B_r(x))=\H^n(B_r(x)\cap\partial E_j)\,,\qquad f(r)=\mu(B_r(x))\,,\qquad \mbox{for every $r\in(0,d(x))$}\,.
\]
Denoting by $Df$ the distributional derivative of $f$, and by $f'$ its classical derivative, the coarea formula (see \cite[Step one, proof of Theorem 2]{DLGM} and \cite[Theorem 2.9.19]{FedererBOOK}) gives
\begin{eqnarray}\label{fj f g}
\mbox{$f_j\to f$ a.e. on $(0,d(x))$}\,,\quad
Df_j\ge f_j'\,dr\,,\quad Df\ge f'\,dr\,,\quad f'\ge g=\liminf_{j\to\infty}f_j'\,,
\\
\label{lb fj'}
f_j'(r)\ge\H^{n-1}(\partial B_r(x)\cap \partial E_j)\qquad\mbox{$\forall j$ and for a.e. $r\in(0,d(x))$}\,.\hspace{1cm}
\end{eqnarray}
Now let $\eta\in(0,r/2)$, let $A_j$ denote an $\H^n$-maximal open connected component of $\partial B_r(x)\setminus\partial E_j$, and let $F_j$ be the cup competitor defined by $E_j$ and $A_j$ as in Lemma \ref{lemma cup competitor first kind}. More precisely, when $E_j \cap A_j = \emptyset$, we let $\{\eta^j_k\}_{k=1}^\infty$ be the decreasing sequence with $\lim_{k \to \infty} \eta^j_k = 0$ defined in step two of the proof of Lemma \ref{lemma cup competitor first kind}, and setting, for $\eta^j_k$ such that $\eta \in \left( \eta^j_{k+1}, \eta^j_k \right]$,
\begin{eqnarray*}
Y_j&=&\partial B_r(x)\setminus\big(\mathrm{cl}\,(E_j\cap\partial B_r(x))\cup\mathrm{cl}\,(A_j)\big)\,, \\
S_j &=& \partial E_j \cap \mathrm{cl}\,(A_j) \setminus \left( \mathrm{cl}\, (E_j \cap \partial B_r (x)) \cup \mathrm{cl}\, (Y_j) \right)\,, \\
U_j &=& \partial B_r (x) \cap \{{\rm d}_{S_j} < \eta^j_k\}\,,
\end{eqnarray*}
we define
\begin{equation}
\label{cup competitor first case}
F_j=\big(E_j\setminus\mathrm{cl}\,(B_r(x))\big)\cup\,N_{\eta^j_k}(Z_j)\,, \qquad Z_j = Y_j \cup \left( U_j \setminus \mathrm{cl}\, (E_j \cap \partial B_r (x)) \right)\,.
\end{equation}
When $A_j \subset E_j$, instead, we define
\begin{equation} \label{cup competitor second case}
F_j=\big(E_j\cup B_r(x)\big)\setminus\mathrm{cl}\,\big(N_\eta(Y_j)\big)\,, \qquad Y_j=(E_j\cap\partial B_r(x))\setminus\mathrm{cl}\,(A_j)\,;
\end{equation}
see Figure \ref{fig cup00}. In both cases, $\{F_j\}_j$ is an admissible local variation of $\{E_j\}_j$ in $B_{r'}(x)$ for some $r'>r$, and by \eqref{cup area totale}, for a.e. $r<d(x)$ we have
\[
\limsup_{\eta\to 0^+}\H^n(\Omega\cap\partial F_j)\le \H^n(\partial E_j\setminus B_r(x))+2\,\H^n(\partial B_r(x)\setminus A_j)
\]
so that, by \eqref{volume fixing variation inequality}, for a.e. $r<\min\{d(x),r_*\}$, we have
\begin{equation}\label{cup analysis 1}
f_j(r)\le2\,\H^n(\partial B_r(x)\setminus A_j)+C_*\,\limsup_{\eta\to 0^+}\big||E_j|-|F_j|\big|+\frac1j\,.
\end{equation}
The estimate of $||E_j|-|F_j||$ is different depending on whether $F_j$ is given by \eqref{cup competitor first case} or by \eqref{cup competitor second case}. In both cases we make use of the Euclidean isoperimetric inequality
\[
(n+1)\,|B_1|^{1/(n+1)}\,|U|^{n/(n+1)}\le P(U)\,,\qquad\mbox{for every set of finite perimeter $U\subset\mathbb{R}^{n+1}$ with $|U|<\infty$}\,,
\]
and we also need the perimeter identities
\begin{equation}
\label{choice of r 2}
\begin{split}
&P(E_j\cap B_r(x))=P(E_j;B_r(x))+\H^n(E_j\cap\partial B_r(x))\,,
\\
&P(B_r(x)\setminus E_j)=P(E_j;B_r(x))+\H^n(\partial B_r(x)\setminus E_j)\,,
\end{split}
\end{equation}
which hold for a.e. $r>0$, with the exceptional set of $r$-values that can be made independent of $j$. We now take $F_j$ as in \eqref{cup competitor first case}: up to further decreasing the value of $r_*$ so as to ensure $C_*\,r_*/(n+1)\le 1/2$, and assuming that $r<r_*$, we have
\begin{eqnarray}\nonumber
C_*\,\Big||E_j|-|F_j|\Big|&\le& C_*\,|E_j\cap B_r(x)|+C_*\,c(n)\,r^n\,\eta^j_k
\\\nonumber
&\le& C_*\,|B_1|^{1/(n+1)}\,r\,|E_j\cap B_r(x)|^{n/(n+1)}+C_*\,c(n)\,r^n\,\eta^j_k
\\\nonumber
&\le& \frac{C_*}{n+1}\,r_*\,P(E_j\cap B_r(x))+C_*\,c(n)\,r^n\,\eta^j_k
\\\nonumber
&\le&\frac12\Big\{P(E_j;B_r(x))+\H^n(E_j\cap\partial B_r(x))\Big\}+C_*\,c(n)\,r^n\,\eta^j_k
\\\label{volume error estimate 1}
&\le&\frac12\Big\{f_j(r)+\H^n(\partial B_r(x)\setminus A_j)\Big\}+C_*\,c(n)\,r^n\,\eta^j_k\,,
\end{eqnarray}
where in the last inequality we have used $\partial^*E_j\subset\partial E_j$ and $A_j\cap E_j=\emptyset$ (the assumption under which $F_j$ is chosen as in \eqref{cup competitor first case}). If instead we take $F_j$ as in \eqref{cup competitor second case}, then
\begin{eqnarray}\nonumber
C_*\,\Big||E_j|-|F_j|\Big|&=&C_*\,\Big||E_j\cap B_r(x)|-|F_j\cap B_r(x)|\Big|=C_*\,\Big||B_r(x)\setminus E_j|-|B_r(x)\setminus F_j|\Big|
\\\nonumber
&\le&C_*\,|B_1|^{1/(n+1)}\,r\,|B_r(x)\setminus E_j|^{n/(n+1)}+C_*\,|N_\eta(\partial B_r(x)\cap E_j\setminus\mathrm{cl}\,(A_j))|
\\\nonumber
&\le&\frac12\Big\{P(E_j;B_r(x))+\H^n(\partial B_r(x)\setminus E_j)\Big\}+C_*\,c(n)\,r^n\,\eta
\\\label{volume error estimate 2}
&\le&\frac12\Big\{f_j(r)+\H^n(\partial B_r(x)\setminus A_j)\Big\}+C_*\,c(n)\,r^n\,\eta\,,
\end{eqnarray}
where in the last inequality we have used $\partial^*E_j\subset\partial E_j$ and $A_j\subset E_j$ (the assumption corresponding to \eqref{cup competitor second case}). By combining \eqref{cup analysis 1} with \eqref{volume error estimate 1} and \eqref{volume error estimate 2}, we conclude that
\begin{equation}
\label{key inequality}
\frac{f_j(r)}2\le 3\,\H^n(\partial B_r(x)\setminus A_j)+\frac1j\,,\qquad\mbox{for a.e. $r<\min\{r_*,d(x)\}$}\,.
\end{equation}
By the spherical isoperimetric inequality, Lemma \ref{statement isoperimetry on spheres}, and by \eqref{lb fj'}, for a.e. $r<d(x)$,
\[
\H^n(\partial B_r(x)\setminus A_j)\le C(n)\,\H^{n-1}(\partial B_r(x)\cap \partial E_j)^{n/(n-1)}\le C(n)\,f_j'(r)^{n/(n-1)}\,,
\]
which combined with \eqref{key inequality} and \eqref{fj f g}, allows us to conclude (letting $j\to\infty$), that
\begin{eqnarray}\label{cup analysis 3}
f(r)\le C(n)\,f'(r)^{n/(n-1)}\,,\qquad\mbox{for a.e. $r<\min\{r_*,d(x)\}$}\,.
\end{eqnarray}
Since $x\in{\rm spt}\mu$, $f$ is positive, and thus \eqref{cup analysis 3} implies the existence of $\theta_0(n)>0$ such that
\begin{equation}
\label{mu lower bound basic}
\mu(B_r(x))\ge \theta_0\,\omega_n\,r^n\qquad\forall x\in K\,,r<r_*\,,B_r(x)\subset\subset\Omega\,.
\end{equation}
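The passage from \eqref{cup analysis 3} to \eqref{mu lower bound basic} is the standard integration of the resulting differential inequality, which we sketch: rewriting \eqref{cup analysis 3} as $f'(r)\ge c(n)\,f(r)^{(n-1)/n}$ for a.e. $r<\min\{r_*,d(x)\}$ (with $c(n)=C(n)^{-(n-1)/n}$), using that $f>0$ on $(0,d(x))$ since $x\in{\rm spt}\mu$, and that $f^{1/n}$ is non-decreasing, we find
\[
f(r)^{1/n}\ge\int_0^r\frac{f'(t)}{n\,f(t)^{(n-1)/n}}\,dt\ge\frac{c(n)}{n}\,r\,,
\]
that is $\mu(B_r(x))\ge(c(n)/n)^n\,r^n$, which is \eqref{mu lower bound basic} with $\theta_0=(c(n)/n)^n/\omega_n$.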
Since $K=\Omega\cap{\rm spt}\mu$, by \cite[Theorem 6.9]{mattila} and \eqref{mu lower bound basic} we obtain
\begin{equation}
\label{mu lb HnK}
\mu\ge\theta_0\,\H^n\llcorner K\qquad\mbox{on $\Omega$}\,.
\end{equation}
As a consequence of $\mu(\Omega)<\infty$ and of \eqref{mu lower bound basic} we deduce that $K$ is bounded, thus relatively compact in $\Omega$. In turn, $\partial^* E\subset K$ implies the boundedness of $E$. Notice that we have not excluded $|E|<\varepsilon$ yet.
To further progress in the analysis of $\mu$, given $\eta\in(0,r/2)$, let us now denote by $F_j$ the set corresponding to $\eta$ constructed in Lemma \ref{lemma cone competitor}, so that, by \eqref{cone competitor area inequality}, for a.e. $r<d(x)$,
\begin{equation}
\label{cone competitor area inequality proof}
\limsup_{\eta\to 0^+}\H^n(\Omega\cap\partial F_j)\le\H^n(\partial E_j\setminus B_r(x))+\frac{r}n\,\H^{n-1}(\partial E_j\cap\partial B_r(x))\,.
\end{equation}
Using that $\{F_j\}_j$ is an admissible local variation of $\{E_j\}_j$ in $B_r(x)$, and combining \eqref{volume fixing variation inequality} and \eqref{cone competitor area inequality proof} with $||E_j|-|F_j||\le C(n)\,r^{n+1}$, we find that
\[
\H^n(B_r(x)\cap\partial E_j)\le \frac{r}n\,f_j'(r)+C_*\,r^{n+1}+\frac1j\,,
\]
so that, as $j\to\infty$, $f(r)\le (r/n)\,f'(r)+C_*\,r^{n+1}$. By combining this last inequality with $Df\ge f'(r)\,dr$ and \eqref{mu lower bound basic} we find that
\begin{eqnarray*}
D\,(e^{\Lambda\,r}f(r)/r^n)&=&\frac{n\,e^{\Lambda\,r}}{r^{n+1}}\Big\{\frac{r}n\,Df+\Big(\frac{r\,\Lambda}n\,f(r)-\,f(r)\Big)\,dr\Big\}
\\
&\ge&
\frac{n\,e^{\Lambda\,r}}{r^{n+1}}\Big\{f(r)-C_*\,r^{n+1}+\frac{r\,\Lambda}n\,f(r)-\,f(r)\Big\}\,dr
\\
&=&
\frac{n\,e^{\Lambda\,r}}{r^n}\Big\{-C_*\,r^{n}+\frac{\Lambda\,f(r)}n\Big\}\,dr
\ge
n\,\,e^{\Lambda\,r}\Big\{-C_*+\frac{\Lambda\,\theta_0\,\omega_n}n\Big\}\,dr
\end{eqnarray*}
so that, choosing $\Lambda\ge n\,C_*/(\theta_0\,\omega_n)$, we have proved
\begin{equation}
\label{monotonicity}
e^{\Lambda\,r}\,\frac{\mu(B_r(x))}{r^n}\qquad\mbox{is non-decreasing on $r<\min\{r_*,d(x)\}$}\,.
\end{equation}
By \eqref{monotonicity} and \eqref{mu lb HnK} we find that
\[
\theta(x)=\lim_{r\to 0^+}\frac{\mu(B_r(x))}{\omega_n\,r^n}\quad\mbox{exists in $(0,\infty)$ for every $x\in K$}\,.
\]
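For completeness, we notice that the existence of this limit is an immediate consequence of \eqref{monotonicity}: since $r\mapsto e^{\Lambda\,r}\,\mu(B_r(x))/r^n$ is non-decreasing and $e^{\Lambda\,r}\to1$ as $r\to0^+$,
\[
\theta(x)=\frac1{\omega_n}\,\inf_{0<r<\min\{r_*,d(x)\}}e^{\Lambda\,r}\,\frac{\mu(B_r(x))}{r^n}\,,
\]
which is finite, while $\theta(x)\ge\theta_0>0$ by \eqref{mu lower bound basic}.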
By Preiss' theorem, $\mu=\theta\,\H^n\llcorner K^*$ for a Borel function $\theta$ and a countably $\H^n$-rectifiable set $K^*\subset\Omega$. Since $K=\Omega \cap {\rm spt}\mu$, we have $\H^n(K^*\setminus K)=0$, while \eqref{mu lb HnK} gives $\H^n(K\setminus K^*)=0$. Thus $K$ is countably $\H^n$-rectifiable and $\mu=\theta\,\H^n\llcorner K$. Moreover, $\theta$ is upper semicontinuous on $K$ thanks to \eqref{monotonicity}. Finally, consider the open set
\[
E^*=\big\{x\in\Omega:\mbox{$\exists r>0$ s.t. $|B_r(x)|=|E\cap B_r(x)|$}\big\}\,.
\]
The topological boundary of $E^*$ is equal to
\[
\partial E^*=\big\{x\in\mathrm{cl}\,(\Omega):0<|E\cap B_r(x)|<|B_r(x)|\quad\forall r>0\big\}\,,
\]
so that $\Omega\cap\mathrm{cl}\,(\partial^*E)=\Omega\cap\partial E^*$ by \cite[Proposition 12.19]{maggiBOOK}. Clearly $E^*\subset E^{(1)}$: moreover, if $x\in E^{(1)}\setminus E^*$, then $0<|E\cap B_r(x)|<|B_r(x)|$ for every $r>0$, and thus $x\in\partial E^*$. In particular,
\[
\Omega\cap(E^{(1)}\setminus E^*)\subset\Omega\cap\partial E^*=\Omega\cap\mathrm{cl}\,(\partial^*E)\subset K\,,
\]
where $K$ is $\H^n$-rectifiable, and thus Lebesgue negligible. Since $\H^n(\partial\Omega)<\infty$, we have proved $\H^n(E^{(1)}\setminus E^*)<\infty$, and thus $|E^{(1)}\Delta E^*|=0$. By the Lebesgue points theorem, $E^*$ is equivalent to $E$, so that $\partial^*E=\partial^*E^*$. Replacing $E$ with $E^*$ we find $(K,E)\in\mathcal{K}$. Finally, the lower bounds $\theta\ge 1$ $\H^n$-a.e. on $K$ and $\theta\ge2$ $\H^n$-a.e. on $K\setminus\partial^*E$ follow by applying Lemma \ref{lemma llb} with $\Omega'=\Omega$: notice indeed that assumptions \eqref{llb1} and \eqref{llb2} in Lemma \ref{lemma llb} hold by \eqref{mu lower bound basic} and by \eqref{key inequality}.
\medskip
\noindent {\it Step five}: We show that $\theta(x)\le1$ at every $x\in\Omega\cap\partial^*E$ and that $\theta(x)\le2$ at every $x\in K\cap (E^{(0)}\cup E^{(1)})$ such that $K$ admits an approximate tangent plane at $x$ (thus, that $\theta\le 2$ $\H^n$-a.e. on $K\setminus\partial^*E$). We choose $\nu(x)\in\SS^n$ such that $T_xK=\nu(x)^\perp$ (notice that, necessarily, $\nu(x)=\nu_E(x)$ or $\nu(x)=-\nu_E(x)$ when, in addition, $x\in\partial^*E$), and let $B_{2\,r}(x)\subset\subset\Omega$. For $\tau\in(0,1)$ and $\sigma\in(0,\tau)$ we set
\begin{eqnarray}\label{all the young dudes}
S_{\tau,r}&=&\big\{y\in B_r(x):|(y-x)\cdot\nu(x)|<\tau\,r\big\}\,,
\\\nonumber
V_{\sigma,r}&=&\big\{y\in B_r(x):|(y-x)\cdot\nu(x)|<\sigma\,|y-x|\big\}\,\subset\, S_{\sigma,r}\,\subset\, S_{\tau,r}\,,
\\\nonumber
W_{\tau,\sigma,r}^\pm&=&\big(S_{\tau,r}\setminus\mathrm{cl}\,(V_{\sigma,r})\big)\cap\{y:(y-x)\cdot\nu(x)\gtrless0\}\,,
\\\nonumber
\Gamma_{\tau,\sigma,r}^\pm&=&\partial S_{\tau,r}\cap\partial W_{\tau,\sigma,r}^\pm\,,
\end{eqnarray}
that are depicted in
\begin{figure}
\input{step8geometry.pstex_t}\caption{{\small The sets defined in \eqref{all the young dudes}. Here $\sigma<\tau<1$, and $S_{\tau,r}$ is decomposed into a central open cone $V_{\sigma,r}$ of small amplitude $\sigma$, the upper and lower open regions $W_{\tau,\sigma,r}^\pm$, and the closed cone $S_{\tau,r}\cap\partial V_{\sigma,r}$. For $r\le r_0(\sigma,x)$, $B_r(x)\cap K$ lies inside $V_{\sigma,r}$ by approximate differentiability of $K$ at $x$ and by the density estimate \eqref{mu lower bound basic}. When $x\in\partial^*E$, if we choose $\nu(x)=\nu_E(x)$, then the divergence theorem implies that $E$ fills up the whole $W_{\tau,\sigma,r}^-$, and leaves empty $W_{\tau,\sigma,r}^+$.
}}\label{fig step8geometry}
\end{figure}
Figure \ref{fig step8geometry}. By \eqref{mu lower bound basic} and since $\H^n\llcorner(K-x)/\rho\stackrel{*}{\rightharpoonup}\H^n\llcorner T_xK$ as $\rho\to 0^+$,
the approximate tangent plane $T_xK$ is a classical tangent plane, and thus there exists $r_0=r_0(\sigma,x)>0$ such that $K\cap B_r(x)\subset S_{\sigma,r}$ for every $r<r_0$, or, equivalently,
\begin{equation}\label{theta 1 bordo nw}
K\cap B_{r_0}(x)\subset V_{\sigma,r_0}\cup\{x\}\,.
\end{equation}
In particular
\begin{equation}
\label{theta 1 fine 00}
\mu(S_{\tau,r})=\mu(B_r(x))\,,\qquad\forall r<r_0\,.
\end{equation}
We also notice that for a.e. value of $r$ we have
\begin{equation}\label{slab null}
\mbox{$\partial S_{\tau,r}\cap\partial E_j$ is $\H^{n-1}$-rectifiable}\qquad\forall j\,.
\end{equation}
We now introduce the family of open sets
\begin{eqnarray}\nonumber
\mathcal A_{r,j}^{{\rm out}}&=&\Big\{A\subset \partial S_{\tau,r}:\mbox{$A$ is an open connected component}
\\\nonumber
&&\hspace{4cm}\mbox{of $\partial S_{\tau,r}\setminus\partial E_j$ and $A$ is {\bf disjoint} from $E_j$}\Big\}\,,
\\
\nonumber
\mathcal A_{r,j}^{{\rm in}}&=&\Big\{A\subset \partial S_{\tau,r}:\mbox{$A$ is an open connected component}
\\\nonumber
&&\hspace{4cm}\mbox{of $\partial S_{\tau,r}\setminus\partial E_j$ and $A$ is {\bf contained} in $E_j$}\Big\}\,,
\end{eqnarray}
and denote by $A_{r,j}^{{\rm out}}$ and $A_{r,j}^{{\rm in}}$ $\H^n$-maximal elements of $\mathcal A_{r,j}^{{\rm out}}$ and $\mathcal A_{r,j}^{{\rm in}}$ respectively. Finally, given $\eta\in(0,r/2)$, we let $F_j^\star$ be the slab competitor defined by $E_j$, $A_{r,j}^\star$ and $\tau$ in $B_{2r}(x)$ for $\star\in\{{\rm out},{\rm in}\}$ as in Lemma \ref{lemma slab competitor}: accordingly, $F_j^\star\in\mathcal{E}$, $\Omega\cap\partial F_j^\star$ is $\mathcal{C}$-spanning $W$,
\begin{eqnarray}
\label{new slabs 1}
&&F_j^\star\setminus\mathrm{cl}\,(S_{\tau,r})=E_j\setminus\mathrm{cl}\,(S_{\tau,r})\,,
\\
\label{new slabs 2}
&& \lim_{\eta \to 0^+} \H^n \left( (\partial S_{\tau,r}\cap\partial F_j^\star) \, \Delta \, (\partial S_{\tau,r}\setminus A_{r,j}^\star) \right) = 0\,,
\end{eqnarray}
and
\begin{eqnarray}
\label{new slabs 3}
\limsup_{\eta\to 0^+}\H^n(S_{\tau,r}\cap\partial F_j^\star)\le C(n,\tau)\,\left\{
\begin{split}
&\H^n\big(\partial S_{\tau,r}\setminus(A_{r,j}^{{\rm out}}\cup E_j)\big)\,,\hspace{1cm}\mbox{if $\star={\rm out}$}\,,
\\
&\H^n\big((E_j\cap\partial S_{\tau,r})\setminus A_{r,j}^{{\rm in}}\big)\,,\hspace{1.2cm}\mbox{if $\star={\rm in}$}\,;
\end{split}
\right .
\end{eqnarray}
see \eqref{slab competitor exterior}, \eqref{slab competitor buccia}, \eqref{area of slab competitors first 0} and \eqref{area of slab competitors second 0}. By \eqref{volume fixing variation inequality}, by $\H^n(\partial S_{\tau,r}\cap\partial E_j)=0$ (which follows from \eqref{slab null}), and by \eqref{new slabs 1},
\[
\H^n(S_{\tau,r}\cap\partial E_j)\le\H^n(\mathrm{cl}\,(S_{\tau,r})\cap\partial F_j^\star)+C_*\,c(n)\,r^{n+1}+\frac1j\,,\qquad\forall\star\in\{{\rm out},{\rm in}\}\,.
\]
By \eqref{new slabs 2} and \eqref{new slabs 3}, taking the limit first as $\eta\to 0^+$ and then as $j\to\infty$, and by taking also into account that $\mu_j\stackrel{*}{\rightharpoonup}\mu$ and that \eqref{theta 1 fine 00} holds, we find, in the case $\star={\rm out}$, that
\begin{eqnarray}\label{area of slab competitors first proof}
\mu(B_r(x))&\le&\limsup_{j\to\infty}\H^n(E_j\cap\partial S_{\tau,r})
\\\nonumber
&&+C(n,\tau)\,\limsup_{j\to\infty}\H^n\big(\partial S_{\tau,r}\setminus(A_{r,j}^{{\rm out}}\cup E_j)\big)+C_*\,c(n)\,r^{n+1}\,,
\end{eqnarray}
and, in the case $\star={\rm in}$, that
\begin{eqnarray}\label{area of slab competitors second proof}
\mu(B_r(x))&\le&\limsup_{j\to\infty}\H^n(\partial S_{\tau,r}\setminus E_j)
\\\nonumber
&&+C(n,\tau)\,\limsup_{j\to\infty}\H^n\big((E_j\cap\partial S_{\tau,r})\setminus A_{r,j}^{{\rm in}}\big)+C_*\,c(n)\,r^{n+1}\,.
\end{eqnarray}
We now discuss the cases $x\in\partial^*E$, $x\in K\cap E^{(0)}$ and $x\in K\cap E^{(1)}$ separately.
\medskip
\noindent {\it The case $x\in\partial^*E$}: We claim that, in this case, for every $\sigma\in(0,\tau)$ and for a.e. $r<r_0(\sigma,x)$,
\begin{eqnarray}
\label{theta 1 fine 0}
\limsup_{j\to\infty}\H^n\Big(\partial S_{\tau,r}\setminus \big(A_{r,j}^{{\rm out}}\cup E_j\big)\Big)&\le&C(n)\,\sigma\,r^n\,,
\\ \label{theta 1 fine}
\limsup_{j\to\infty}\Big|\H^n\big(E_j\cap\partial S_{\tau,r}\big)-\omega_n\,r^n\Big|&\le& C(n)\,\tau\,r^n\,;
\end{eqnarray}
see
\begin{figure}
\input{slab1.pstex_t}\caption{{\small The slab competitor $F_j^{{\rm out}}$ is used in proving that $\theta(x)\le 1$. The fact that $x\in\partial^*E$ is used to show that $E_j\cap\partial S_{\tau,r}$ consists of a large connected component whose area is close to $\omega_n\,r^n$ up to a ${\rm o}(r^n)$ error as $r\to 0^+$.}}\label{fig slab1}
\end{figure}
Figure \ref{fig slab1}. We notice that \eqref{theta 1 fine 0} and \eqref{theta 1 fine} combined with \eqref{area of slab competitors first proof} imply
\[
\frac{\mu(B_r(x))}{r^n}\le \omega_n+C(n)\,\tau+ C(n,\tau)\,\sigma+C_*\,c(n)\,r\,,\qquad\mbox{for a.e. $r<r_0$}\,,
\]
which gives $\theta(x)\le 1$ by letting, in the order, $r\to 0^+$, $\sigma\to 0^+$ and then $\tau\to 0^+$. We now prove \eqref{theta 1 fine 0} and \eqref{theta 1 fine}. Since $x\in\partial^*E$, we can set $\nu(x)=\nu_E(x)$. As $\nu_E(x)$ is the outer normal to $E$, by $\partial^*E\subset K$, \eqref{theta 1 bordo nw} and the divergence theorem, we obtain
\[
|W_{\tau,\sigma,r_0}^-\setminus E|=|W_{\tau,\sigma,r_0}^+\cap E|=0\,.
\]
By $|W_{\tau,\sigma,r_0}^-\setminus E|=0$, the coarea formula and Fatou's lemma, we deduce
\begin{eqnarray*}
0&=&\lim_{j\to\infty}|W_{\tau,\sigma,r_0}^-\setminus E_j|=\lim_{j\to\infty}\int_0^{r_0}\H^n\Big(\partial S_{\tau,r}\cap \big(W_{\tau,\sigma,r_0}^-\setminus E_j\big)\Big)\,dr
\\
&\ge&\int_0^{r_0}
\liminf_{j\to\infty}\H^n\Big(\Gamma_{\tau,\sigma,r}^-\setminus E_j\Big)\,dr\,,
\end{eqnarray*}
and by arguing similarly with $|W_{\tau,\sigma,r_0}^+\cap E|=0$ we conclude that, for a.e. $r<r_0$,
\begin{eqnarray}
\label{theta 1 bordo nw UP}
&&\lim_{j\to\infty}\H^n\big(\Gamma_{\tau,\sigma,r}^+\cap E_j\big)=0\,,
\\
\label{theta 1 bordo nw DOWN}
&&\lim_{j\to\infty}\H^n\big(\Gamma_{\tau,\sigma,r}^-\setminus E_j\big)=0\,.
\end{eqnarray}
By \eqref{theta 1 bordo nw UP}, \eqref{theta 1 bordo nw DOWN}, and since
\begin{equation}
\label{slab decomp}
\partial S_{\tau,r}=\Gamma_{\tau,\sigma,r}^+\cup\Gamma_{\tau,\sigma,r}^-\cup\big(\partial S_{\tau,r}\cap\partial S_{\sigma,r}\big)
\end{equation}
we find that, as $j\to\infty$,
\begin{eqnarray*}
\big|\H^n(\partial S_{\tau,r}\cap E_j)-\omega_n\,r^n\big|&\le&\H^n(\partial S_{\tau,r}\cap\partial S_{\sigma,r})
+\big|\H^n(\Gamma_{\tau,\sigma,r}^-\cap E_j)-\omega_n\,r^n\big|+{\rm o}(1)
\\
&\le&C(n)\,\sigma\,r^n+\big|\H^n(\Gamma_{\tau,\sigma,r}^-)-\omega_n\,r^n\big|+{\rm o}(1)
\\
&\le&C(n)\,\tau\,r^n+{\rm o}(1)\,,
\end{eqnarray*}
that is \eqref{theta 1 fine}. At the same time, again by \eqref{theta 1 bordo nw} and by the coarea formula, assuming without loss of generality that $r_0=r_0(\sigma,x)$ also satisfies $\H^n(K\cap\partial B_{r_0}(x))=0$ in addition to \eqref{theta 1 bordo nw}, we get
\begin{eqnarray}\nonumber
0&=&\mu(K\cap \mathrm{cl}\,(B_{r_0}(x))\setminus V_{\sigma,r_0})=\lim_{j\to\infty}\H^n\big(B_{r_0}(x)\cap\partial E_j\setminus V_{\sigma,r_0}\big)
\\\nonumber
&\ge&\lim_{j\to\infty}\H^n\big(S_{\tau,r_0}\cap\partial E_j\setminus V_{\sigma,r_0}\big)
\\\nonumber
&\ge&\lim_{j\to\infty}\int_0^{r_0}\,\H^{n-1}\big(\partial S_{\tau,r}\cap\partial E_j\setminus V_{\sigma,r_0}\big)\,dr\,,
\end{eqnarray}
that is
\begin{equation}
\label{theta 1 bordo nw 1}
\lim_{j\to\infty}\H^{n-1}\big(\partial S_{\tau,r}\cap\partial E_j\setminus V_{\sigma,r_0}\big)=0\qquad\mbox{for a.e. $r<r_0$}\,.
\end{equation}
Notice that \eqref{theta 1 bordo nw 1} implies in particular that
\begin{equation}
\label{theta 1 bordo nw 2}
\lim_{j\to\infty}\H^{n-1}\big(\Gamma_{\tau,\sigma,r}^+\cap\partial E_j\big)=0\qquad\mbox{for a.e. $r<r_0$}\,.
\end{equation}
Since $\Gamma_{\tau,\sigma,r}^+$ is a bi-Lipschitz image of a hemisphere, by Lemma \ref{statement isoperimetry on spheres},
\begin{equation}
\label{isoperimetric on half cylinder}
\H^{n-1}(\Gamma_{\tau,\sigma,r}^+\cap J)^{n/(n-1)}\ge c(n,\tau,\sigma)\,\H^n(\Gamma_{\tau,\sigma,r}^+\setminus A)\,,
\end{equation}
whenever $J$ is relatively closed in $\Gamma_{\tau,\sigma,r}^+$, and $A$ is an $\H^n$-maximal connected component of $\Gamma_{\tau,\sigma,r}^+\setminus J$. By \eqref{theta 1 bordo nw 2} and \eqref{isoperimetric on half cylinder} we find that, if
\[
\mbox{$A_{r,j}^+$ is a maximal $\H^n$-component of $\Gamma_{\tau,\sigma,r}^+\setminus\partial E_j$}\,,
\]
then
\begin{equation}
\label{theta 1 bordo nw 3}
\lim_{j\to\infty}\H^n(\Gamma_{\tau,\sigma,r}^+\setminus A_{r,j}^+)=0\,,\qquad\mbox{for a.e. $r<r_0$}\,.
\end{equation}
By connectedness, $A_{r,j}^+$ is either contained in $A_{r,j}^{{\rm out}}$, or in $E_j$, or in
\[
Y_{r,j}=\bigcup\big\{A:A\in\mathcal A_{r,j}^{{\rm out}}\,,A\ne A_{r,j}^{{\rm out}}\big\}\,.
\]
By combining \eqref{theta 1 bordo nw UP} with \eqref{theta 1 bordo nw 3} we find that for a.e. $r<r_0$, if $j$ is large enough, then
\[
A_{r,j}^+\cap E_j=\emptyset\,.
\]
Similarly, should there be a non-negligible set of values of $r$ such that for infinitely many values of $j$ the inclusion $A_{r,j}^+\subset Y_{r,j}$ holds, then by \eqref{theta 1 bordo nw DOWN} and \eqref{theta 1 bordo nw 3} there would be an element of $\mathcal A_{r,j}^{{\rm out}}$ different from $A_{r,j}^{{\rm out}}$ with $\H^n$-measure arbitrarily close to $\H^n(\Gamma_{\tau,\sigma,r}^+)$; thanks to \eqref{theta 1 bordo nw DOWN}, we would then have $\H^n(A_{r,j}^{{\rm out}})\to 0$, against the $\H^n$-maximality of $A_{r,j}^{{\rm out}}$ itself. In conclusion, we must have
\begin{equation}
\label{theta 1 bordo nw 4}
\mbox{$A_{r,j}^+\subset A_{r,j}^{{\rm out}}$ for a.e. $r<r_0$ and for $j$ large enough}\,.
\end{equation}
By combining \eqref{theta 1 bordo nw 4} and \eqref{theta 1 bordo nw 3} we conclude that
\begin{equation}
\label{theta 1 bordo nw 5}
\lim_{j\to\infty}\H^n\big(\Gamma_{\tau,\sigma,r}^+\setminus A_{r,j}^{{\rm out}}\big)=0\,.
\end{equation}
By \eqref{slab decomp}, \eqref{theta 1 bordo nw DOWN} and \eqref{theta 1 bordo nw 5} we conclude that
\[
\limsup_{j\to\infty}\H^n\Big(\partial S_{\tau,r}\setminus \big(A_{r,j}^{{\rm out}}\cup E_j\big)\Big)
\le \H^n(\partial S_{\tau,r}\cap \partial S_{\sigma,r})\le C(n)\,\sigma\,r^n\,,
\]
that is \eqref{theta 1 fine 0}. This completes the proof of $\theta(x)\le1$ for $x\in\partial^*E$.
\medskip
\noindent {\it The case $x\in E^{(0)}$}: We claim that, in this case, for every $\sigma\in(0,\tau)$,
\begin{eqnarray}
\label{coney island 1}
\limsup_{j\to\infty}\H^n(E_j\cap \partial S_{\tau,r})\le C(n)\,\sigma\,r^n\,,
\\
\label{coney island 1 star}
\limsup_{j\to\infty}\big|\H^n\big(\partial S_{\tau,r}\setminus E_j\big)-2\,\omega_n\,r^n\big|\le C(n)\,\tau\,r^n\,,
\end{eqnarray}
for a.e. $r<r_0(\sigma,x)$, see
\begin{figure}
\input{slab2.pstex_t}\caption{{\small The slab competitor used in proving that $\theta(x)\le 2$ when $x\in E^{(0)}$ is the one defined by $A_{r,j}^{{\rm in}}$. Since $x\in E^{(0)}$ we can show that $E_j\cap\partial S_{\tau,r}$ is ${\rm o}(r^n)$ as $r\to 0^+$.}}\label{fig slab2}
\end{figure}
Figure \ref{fig slab2}. The idea is to use the competitor defined by $A_{r,j}^{{\rm in}}$: indeed, \eqref{coney island 1}, \eqref{coney island 1 star}, and \eqref{area of slab competitors second proof} give
\begin{eqnarray*}
\frac{\mu(B_r(x))}{r^n}
&\le&\limsup_{j\to\infty}\frac{\H^n(\partial S_{\tau,r}\setminus E_j)}{r^n}
\\\nonumber
&&+C(n,\tau)\,\limsup_{j\to\infty}\frac{\H^n\big((E_j\cap\partial S_{\tau,r})\setminus A_{r,j}^{{\rm in}}\big)}{r^n}+C_*\,c(n)\,r
\\
&\le&2\,\omega_n+C(n)\,\tau+C(n,\tau)\,\sigma+C_*\,c(n)\,r\,,
\end{eqnarray*}
and then $\theta(x)\le 2$ by letting, in the order, $r\to 0^+$, $\sigma\to 0^+$ and then $\tau\to 0^+$. The proof of \eqref{coney island 1} and \eqref{coney island 1 star} is simple: since $x\in E^{(0)}$ and $\partial^*E\subset K$, by \eqref{theta 1 bordo nw} and by the divergence theorem we find that
\[
|E\cap B_{r_0}(x)\setminus V_{\sigma,r_0}|=0\,.
\]
In particular, by the coarea formula we find that for a.e. $r<r_0$,
\[
0=\lim_{j\to\infty}\H^n\Big((E_j\setminus V_{\sigma,r_0})\cap\partial S_{\tau,r}\Big)=
\lim_{j\to\infty}\H^n\Big(E_j\cap\big(\Gamma_{\tau,\sigma,r}^+\cup \Gamma_{\tau,\sigma,r}^-\big)\Big)\,,
\]
so that, by \eqref{slab decomp},
\begin{eqnarray*}
\H^n(E_j\cap\partial S_{\tau,r})=\H^n(\partial S_{\tau,r}\cap\partial S_{\sigma,r})+{\rm o}(1)\le C(n)\,\sigma\,r^n+{\rm o}(1)\,,
\end{eqnarray*}
as $j\to\infty$, that is \eqref{coney island 1}, and
\begin{eqnarray*}
\big|\H^n(\partial S_{\tau,r}\setminus E_j)-2\,\omega_n\,r^n\big|&\le&\H^n(\partial S_{\tau,r}\cap\partial S_{\sigma,r})+
\big|\H^n(\Gamma_{\tau,\sigma,r}^+\cup \Gamma_{\tau,\sigma,r}^-)-2\,\omega_n\,r^n\big|+{\rm o}(1)
\\
&\le&C(n)\,\tau\,r^n+{\rm o}(1)
\end{eqnarray*}
as $j\to\infty$, that is \eqref{coney island 1 star}.
\medskip
\noindent {\it The case $x\in E^{(1)}$}: We claim that for every $\sigma\in(0,\tau)$,
\begin{eqnarray}
\label{coney island 2}
\limsup_{j\to\infty}\big|\H^n(E_j\cap \partial S_{\tau,r})-2\,\omega_n\,r^n\big|\le C(n)\,\tau\,r^n\,,
\\
\label{coney island 2 star}
\limsup_{j\to\infty}\H^n\big(\partial S_{\tau,r}\setminus E_j\big)\le C(n)\,\sigma\,r^n\,,
\end{eqnarray}
for a.e. $r<r_0(\sigma,x)$, see
\begin{figure}
\input{slab3.pstex_t}\caption{{\small The slab competitor used in proving that $\theta(x)\le 2$ when $x\in E^{(1)}$ is the one defined by $A_{r,j}^{{\rm out}}$.}}\label{fig slab3}
\end{figure}
Figure \ref{fig slab3}. Indeed, using the competitor defined by $A_{r,j}^{{\rm out}}$ as in the case $x\in\partial^*E$, we combine \eqref{coney island 2} and \eqref{coney island 2 star} with \eqref{area of slab competitors first proof} to obtain
\begin{eqnarray}\nonumber
\frac{\mu(B_r(x))}{r^n}&\le&\limsup_{j\to\infty}\frac{\H^n(E_j\cap\partial S_{\tau,r})}{r^n}
\\\nonumber
&&+C(n,\tau)\,\limsup_{j\to\infty}\frac{\H^n\big(\partial S_{\tau,r}\setminus(A_{r,j}^{{\rm out}}\cup E_j)\big)}{r^n}+C_*\,c(n)\,r
\\\nonumber
&\le& 2\,\omega_n+C(n)\,\tau+C(n,\tau)\,\sigma+C_*\,c(n)\,r\,,
\end{eqnarray}
which gives $\theta(x)\le 2$ by letting once again $r\to 0^+$, $\sigma\to 0^+$ and finally $\tau\to 0^+$. To prove \eqref{coney island 2} and \eqref{coney island 2 star}, we notice that by $x\in E^{(1)}$, $\partial^*E\subset K$, \eqref{theta 1 bordo nw} and the divergence theorem, we have
\[
\big|B_{r_0}(x)\setminus\big(V_{\sigma,r_0}\cup E\big)\big|=0\,.
\]
By the coarea formula, for a.e. $r<r_0$ we find
\[
0=\lim_{j\to\infty}\H^n\big(\big(\Gamma_{\tau,\sigma,r}^+\cup \Gamma_{\tau,\sigma,r}^-\big)\setminus E_j\big)\,,
\]
and conclude as in the previous case by exploiting \eqref{slab decomp}.
\medskip
\noindent {\it Remark}: We make an important remark on the constructions of step five, which will be needed in the proof of Theorem \ref{thm basic regularity}. We claim that, under the assumptions on $x$ considered in step five, for a.e. $r<r_0(\sigma,x)$ we have
\begin{eqnarray}
\label{the important remark}
&&\limsup_{\eta\to 0^+}
\Big|\H^n\Big(\Big\{y\in \mathrm{cl}\,(S_{\tau,r})\cap\partial F_j^\star:T_y(\partial F_j^{\star})=T_xK\Big\}\Big)-\theta(x)\,\omega_n\,r^n\Big|
\\\nonumber
&&\hspace{3cm}\le C(n)\,\tau\,r^n+C(n,\tau)\,\sigma\,r^n+{\rm o}(1)\,, \qquad\mbox{as $j\to\infty$}\,.
\end{eqnarray}
Here $\star={\rm out}$ if $x\in\partial^*E\cup(K\cap E^{(1)})$, $\star={\rm in}$ if $x\in K\cap E^{(0)}$, and $\theta(x)=1$ if $x\in\partial^*E$ and $\theta(x)=2$ if $x\in K\cap(E^{(0)}\cup E^{(1)})$. Consider, for example, the case when $x\in \partial^*E$. By \eqref{new slabs 2}, $\partial S_{\tau,r}\cap\partial F_j^{{\rm out}} \subset (\partial S_{\tau,r} \setminus A_{r,j}^{{\rm out}}) \cup N_j$ with $\lim_{\eta\to 0^+}\H^n (N_j) = 0$: thus, by taking into account that
\[
T_y(\partial F_j^{{\rm out}})=T_y(\partial S_{\tau,r})\qquad\mbox{$\H^n$-a.e. on $\partial F_j^{{\rm out}}\cap \partial S_{\tau,r}$}
\]
and that
\[
\big\{y\in\partial S_{\tau,r}:T_y(\partial S_{\tau,r})=T_xK\big\}= \partial S_{\tau,r}\setminus\partial B_r(x)\,,
\]
(recall that $T_xK=\nu(x)^\perp$), we have
\begin{eqnarray*}
&&\Big|\H^n\Big(\Big\{y\in \mathrm{cl}\,(S_{\tau,r})\cap\partial F_j^{{\rm out}}:T_y(\partial F_j^{{\rm out}})=T_xK\Big\}\Big)-\omega_n\,r^n\Big|
\\
&\le&\Big|\H^n\Big(\Big\{y\in \partial S_{\tau,r}\cap\partial F_j^{{\rm out}} :T_y(\partial F_j^{{\rm out}})=T_xK\Big\}\Big)-\omega_n\,r^n\Big|
+\H^n(S_{\tau,r}\cap\partial F_j^{{\rm out}})
\\
&\le&\Big|\H^n\Big(\Big\{y\in \partial S_{\tau,r}\setminus A_{r,j}^{{\rm out}}:T_y(\partial S_{\tau,r})=T_xK\Big\}\Big)-\omega_n\,r^n\Big|
+\H^n (N_j) + \H^n(S_{\tau,r}\cap\partial F_j^{{\rm out}})
\\
&=&\Big|\H^n\big(\partial S_{\tau,r}\setminus(\partial B_r(x)\cup A_{r,j}^{{\rm out}})\big)-\omega_n\,r^n\Big|
+\H^n (N_j) + \H^n(S_{\tau,r}\cap\partial F_j^{{\rm out}})
\end{eqnarray*}
so that, by \eqref{new slabs 3}, \eqref{theta 1 fine 0}, and $\H^n(\partial S_{\tau,r}\cap\partial B_r(x))\le C(n)\,\tau\,r^n$,
\begin{eqnarray*}
&&
\limsup_{\eta\to 0^+}\Big|\H^n\Big(\Big\{y\in \mathrm{cl}\,(S_{\tau,r})\cap\partial F_j^{{\rm out}}:T_y(\partial F_j^{{\rm out}})=T_xK\Big\}\Big)-\omega_n\,r^n\Big|
\\
&\le&\Big|\H^n(\partial S_{\tau,r}\cap E_j)-\omega_n\,r^n\Big|+ C(n,\tau)\H^n\big(\partial S_{\tau,r}\setminus(A_{r,j}^{{\rm out}}\cup E_j)\big)+C(n)\,\tau\,r^n\,.
\end{eqnarray*}
By \eqref{theta 1 fine 0} and \eqref{theta 1 fine} we deduce \eqref{the important remark} when $x\in\partial^*E$. The case when $x\in K\cap(E^{(0)}\cup E^{(1)})$ is treated analogously and the details are omitted.
\medskip
\noindent {\it Step six}: We exclude area concentration near $\partial\Omega$ by showing that
\begin{equation}
\label{no area concentration at the boundary}
\limsup_{\eta\to 0^+}\limsup_{j\to\infty}\mu_j(\Omega\cap U_\eta(\partial\Omega))=0\,.
\end{equation}
Exploiting the smoothness and boundedness of $\partial\Omega$, we can find $r_0>0$ such that Lemma \ref{lemma close by Lipschitz at boundary} holds, and such that for every $x\in\partial\Omega$ there exist an open set $\Omega'$ with $\Omega\subset\Omega'$ and a homeomorphism $f:\mathrm{cl}\,(\Omega)\to\mathrm{cl}\,(\Omega')=f(\mathrm{cl}\,(\Omega))$ with $f(\partial\Omega)=\partial\Omega'$, $\{f\ne{\rm id}\,\}\subset\subset B_{r_0}(x)$, and $f(B_{r_0}(x)\cap\mathrm{cl}\,(\Omega))=B_{r_0}(x)\cap\mathrm{cl}\,(\Omega')$, which restricts to a diffeomorphism $f:\Omega\to\Omega'$, and such that
\begin{equation}
\label{boundary diffeo f}
f\Big(\Omega\cap U_\eta(\partial\Omega)\cap B_{r_0/2}(x)\Big)\subset \Omega'\setminus\Omega\,,\qquad \|f-{\rm id}\,\|_{C^1(\Omega)}\le C\,\eta\,;
\end{equation}
see
\begin{figure}
\input{push.pstex_t}
\caption{{\small The boundary diffeomorphism $f$ pushes out $\Omega$ into a larger open set $\Omega'$. Regions depicted with the same color are mapped one into the other. Notice that the dark region on the left contains $\Omega\cap U_\eta(\partial\Omega)\cap B_{r_0/2}(x)$, and is mapped outside of $\Omega$. The diffeomorphism $f$ can be formally constructed by exploiting the local graphicality of $\Omega$, and the simple details are omitted.}}
\label{fig push}
\end{figure}
Figure \ref{fig push}. Let $\Omega^*=f^{-1}(\Omega)$ and let $F_j=f(E_j\cap\Omega^*)=f(E_j)\cap\Omega$. Clearly $F_j\in\mathcal{E}$, and $f(\partial\Omega^*)=\partial\Omega$ and $\Omega^*\cap\partial(E_j\cap\Omega^*)=\Omega^*\cap\partial E_j$ give
\[
\Omega\cap\partial F_j=f(\Omega^*)\cap\,f\big(\partial(E_j\cap\Omega^*)\big)=f\big(\Omega^*\cap\partial E_j\big)\,,
\]
so that $\Omega\cap\partial F_j$ is $\mathcal{C}$-spanning $W$ by Lemma \ref{lemma close by Lipschitz at boundary}. Assuming without loss of generality that $r_0<r_*$, by \eqref{volume fixing variation inequality}, $\{f\ne{\rm id}\,\}\subset\subset B_{r_0}(x)$ and $f(B_{r_0}(x)\cap\mathrm{cl}\,(\Omega))=B_{r_0}(x)\cap\mathrm{cl}\,(\Omega')$ we have
\begin{eqnarray*}
\H^n(\Omega\cap B_{r_0}(x)\cap \partial E_j)&\le&\H^n\big(f(B_{r_0}(x)\cap\Omega^*\cap\partial E_j)\big)+C_*\,\big||F_j|-|E_j|\big|+\frac1j
\\
&\le&\big(1+C\,\eta\big)\,\H^n\big(B_{r_0}(x)\cap\Omega^*\cap\partial E_j\big)+C_*\,\big||F_j|-|E_j|\big|+\frac1j\,,
\end{eqnarray*}
where
\begin{eqnarray*}
\big||F_j|-|E_j|\big|\le\big||E_j\cap\Omega^*|-|E_j|\big|+\int_{E_j\cap\Omega^*}|Jf-1|
\le\big|\Omega\setminus\Omega^*\big|+C\,\varepsilon\,\eta\le C\,\eta\,,
\end{eqnarray*}
so that
\begin{eqnarray*}
\H^n(\Omega\cap B_{r_0}(x)\cap \partial E_j\setminus\Omega^*)\le C\,\eta\,\Big\{\H^n\big(\Omega\cap\partial E_j\big)+1\Big\}+\frac1j
\le C\,\eta\,\Big\{\psi(\varepsilon)+2 \Big\}+\frac1j\,.
\end{eqnarray*}
Since $\Omega\cap U_\eta(\partial\Omega)\cap B_{r_0/2}(x)\subset\Omega\setminus\Omega^*$, by letting $j\to\infty$ we conclude that
\[
\mu\big(B_{r_0/2}(x)\cap U_\eta(\partial\Omega)\big)\le C\,\eta\,,\qquad\forall x\in\partial\Omega\,.
\]
By a covering argument we find $\mu(\Omega\cap U_\eta(\partial\Omega))\le C\,\eta$, and thus \eqref{no area concentration at the boundary} follows.
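One way to make the covering argument explicit, for instance (the finite family $x_1,\dots,x_N$ below is introduced only for this sketch, with $N$ depending on $\Omega$ and $r_0$ alone), is the following:

```latex
% \partial\Omega is compact, so we may pick x_1,\dots,x_N\in\partial\Omega
% with \partial\Omega\subset\bigcup_{k=1}^N B_{r_0/4}(x_k); then, for \eta<r_0/4,
\[
\Omega\cap U_\eta(\partial\Omega)\subset\bigcup_{k=1}^N B_{r_0/2}(x_k)\,,
\qquad\mbox{so that}\qquad
\mu\big(\Omega\cap U_\eta(\partial\Omega)\big)
\le\sum_{k=1}^N\mu\big(B_{r_0/2}(x_k)\cap U_\eta(\partial\Omega)\big)\le N\,C\,\eta\,.
\]
```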
\bigskip
\noindent {\it Step seven}: Let us now pick $R>0$ such that $W\cup K\cup E\subset\subset B_R(0)$. If $E_j\subset B_{R+1}(0)$ for infinitely many values of $j$, then $|E|=\varepsilon$ and $\mu_j(\Omega\setminus B_{R+1}(0))=0$, which combined with \eqref{no area concentration at the boundary} implies $\mu_j(\Omega)\to\mu(\Omega)=\mathcal F(K,E)$ as $j\to\infty$, and thus $\psi(\varepsilon)=\mathcal F(K,E)$ with $(K,E)\in\mathcal{K}$ and $|E|=\varepsilon$: thus $(K,E)$ is a generalized minimizer of $\psi(\varepsilon)$, as desired. We now assume without loss of generality that $|E_j\setminus B_{R+1}(0)|>0$ for every $j$. By \eqref{almostthere},
\[
\limsup_{j\to\infty}|E_j\cap(B_{R+1}(0)\setminus B_R(0))|=\limsup_{j\to\infty}\H^n((B_{R+1}(0)\setminus B_R(0))\cap\partial E_j)=0\,.
\]
By the coarea formula, this implies that for a.e. $s\in(R,R+1)$,
\begin{equation}
\label{infinity 1}
\limsup_{j\to\infty}\H^n(E_j\cap\partial B_s(0))=\limsup_{j\to\infty}\H^{n-1}(\partial E_j\cap\partial B_s(0))=0\,.
\end{equation}
We fix a value of $s$ such that \eqref{infinity 1} holds, and we let $A_j$ denote an $\H^n$-maximal connected component of $\partial B_s(0)\setminus\partial E_j$. We must have $A_j\cap E_j=\emptyset$: otherwise, by the spherical isoperimetric inequality, $A_j\subset E_j$ would imply
\begin{eqnarray*}
C(n)\,\H^{n-1}(\partial B_s(0)\cap \partial E_j)^{n/(n-1)}&\ge&\H^n(\partial B_s(0)\setminus A_j)\ge\H^n(\partial B_s(0)\setminus E_j)
\\
&\ge& c(n)\,R^n-\H^n(E_j\cap\partial B_s(0))\,,
\end{eqnarray*}
a contradiction to \eqref{infinity 1}, since the left-hand side vanishes as $j\to\infty$ while the right-hand side stays bounded from below by $c(n)\,R^n/2$ for $j$ large. Since $A_j\cap E_j=\emptyset$, we can consider the exterior cup competitor defined by $E_j$ and $A_j$. More precisely, for every $j$ there exists a decreasing sequence $\{\eta^j_k\}_{k=1}^\infty$ with $\lim_{k \to \infty} \eta^j_k = 0$ such that, setting
\begin{eqnarray*}
Y_j = \partial B_s(0) \setminus \mathrm{cl}\, ((E_j \cap \partial B_s (0)) \cup A_j)\,, &\quad& S_j = \partial E_j \cap \mathrm{cl}\, (A_j) \setminus \left( \mathrm{cl}\, ((E_j \cap \partial B_s(0)) \cup Y_j) \right)\,,\\
U_{j,k} = \partial B_s (0) \cap \{{\rm d}_{S_j} < \eta^j_k\}\,,& \quad & Z_{j,k} = Y_j \cup \left( U_{j,k} \setminus \mathrm{cl}\, (E_j \cap \partial B_s (0)) \right)\,,
\end{eqnarray*}
the sets
\[
F_{j,k}=\big(E_j\cap B_s(0)\big)\cup M_{\eta^j_k}(Z_{j,k})
\]
satisfy $F_{j,k} \in \mathcal{E}$, with $\Omega\cap\partial F_{j,k}$ $\mathcal{C}$-spanning $W$, $F_{j,k}\subset B_{R+1}(0)$ and
\begin{eqnarray} \label{infinity 2}
\limsup_{k \to \infty}\H^n(\Omega\cap\partial F_{j,k})&\le&\H^n(\Omega\cap B_s(0)\cap\partial E_j)+2\,\H^n(\partial B_s(0)\setminus A_j)
\\
&\le&\nonumber
\H^n(\Omega\cap B_s(0)\cap\partial E_j)+C(n)\,\H^{n-1}(\partial B_s(0)\cap\partial E_j)^{n/(n-1)}\,.
\end{eqnarray}
Since $|E_j\setminus B_{R+1}(0)|>0$ for every $j$, we can select $k(j)$ sufficiently large so that
\begin{equation} \label{infinity fix}
\H^n (\Omega \cap \partial F_{j,k(j)}) \leq \H^n(\Omega\cap B_s(0)\cap\partial E_j)+C(n)\,\H^{n-1}(\partial B_s(0)\cap\partial E_j)^{n/(n-1)} + \frac{1}{j}\,,
\end{equation}
as well as $|E_j\setminus B_s(0)|>|M_{\eta^j_{k(j)}}(Z_{j,k(j)})|$; then, after setting $F_j = F_{j,k(j)}$, we define $\rho_j>0$ by the equation
\[
|B_{\rho_j}|=|E_j|-|F_j|=|E_j\setminus B_s(0)|-|M_{\eta^j_{k(j)}}(Z_{j,k(j)})|\,.
\]
In particular, $|B_{\rho_j}|\le\varepsilon$, so that we can find $x\in\Omega$ such that $\mathrm{cl}\,(B_{\rho_j}(x))\cap\mathrm{cl}\,(F_j)=\emptyset$ and
\[
E_j^*=F_j\cup B_{\rho_j}(x)\subset B_{R+1+C(n)\,\varepsilon^{1/(n+1)}}(0)\qquad\forall j\,.
\]
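The radius bound implicit in the last inclusion is elementary; denoting by $\omega_{n+1}$ the volume of the unit ball of $\mathbb{R}^{n+1}$ (a normalization assumed only in this sketch), the definition of $\rho_j$ gives:

```latex
\[
\omega_{n+1}\,\rho_j^{n+1}=|B_{\rho_j}|=|E_j|-|F_j|\le|E_j|=\varepsilon\,,
\qquad\mbox{hence}\qquad
\rho_j\le\Big(\frac{\varepsilon}{\omega_{n+1}}\Big)^{1/(n+1)}\le C(n)\,\varepsilon^{1/(n+1)}\,.
\]
```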
We notice that $E_j^*\in\mathcal{E}$ with $|E_j^*|=\varepsilon$ and $\Omega\cap\partial F_j\subset\Omega\cap\partial E_j^*$, so that $\Omega\cap\partial E_j^*$ is $\mathcal{C}$-spanning $W$: in particular, $\psi(\varepsilon)\le\H^n(\Omega\cap\partial E_j^*)$. By the Euclidean isoperimetric inequality, and since $|B_{\rho_j}|\le |E_j\setminus B_s(0)|$ by definition of $\rho_j$, we have
\[
P(B_{\rho_j})\le P(E_j\setminus B_s(0))=\H^n(\partial E_j\setminus B_s(0))+\H^n(E_j\cap\partial B_s(0))\,,
\]
so that by \eqref{infinity 1} and \eqref{infinity fix} we get
\begin{eqnarray*}
\psi(\varepsilon)&\le& \limsup_{j\to\infty}\H^n(\Omega\cap\partial E_j^*)
\le\limsup_{j\to\infty}\H^n(\Omega\cap\partial F_j)+P(B_{\rho_j})
\\
&\le&\limsup_{j\to\infty}\H^n(\Omega\cap \partial E_j)+2\,C(n)\,\limsup_{j\to\infty}\H^{n-1}(\partial B_s(0)\cap\partial E_j)^{n/(n-1)}=\psi(\varepsilon)\,.
\end{eqnarray*}
We have thus proved that $\{E_j^*\}_j$ is a minimizing sequence for $\psi(\varepsilon)$, with $E_j^*\subset B_{R^*}(0)$ for some $R^*$ depending only on $R$, $n$ and $\varepsilon$. By repeating the argument of the first six steps with $E_j^*$ in place of $E_j$ we see that $E_j^*\to E^*$ in $L^1(\Omega)$ and $\mu_j^*=\H^n\llcorner(\Omega\cap\partial E_j^*)\stackrel{*}{\rightharpoonup} \mu^*$ where $\mu^*=2\,\H^n\llcorner (K^*\setminus\partial^*E^*)+\H^n\llcorner\partial^*E^*$, and where $(K^*,E^*)\in\mathcal{K}$ with $|E^*|=\varepsilon$ and with
\[
\limsup_{\eta\to 0^+}\limsup_{j\to\infty}\mu_j^*(\Omega\cap U_\eta(\partial\Omega))=0\,.
\]
Therefore $\mu_j^*(\Omega)\to\mu^*(\Omega)=\mathcal F(K^*,E^*)$ and in conclusion
\[
\mathcal F(K^*,E^*)=\mu^*(\Omega)=\lim_{j\to\infty}\mu_j^*(\Omega)=\psi(\varepsilon)
\]
so that, by $|E^*|=\varepsilon$, $(K^*,E^*)$ is indeed a generalized minimizer of $\psi(\varepsilon)$. This concludes the proof of the theorem.
\end{proof}
\section{The Euler-Lagrange equation: Proof of Theorem \ref{thm basic regularity}}\label{section theorem basic regularity}
\begin{proof}
[Proof of Theorem \ref{thm basic regularity}] Let $(K,E)$ be a generalized minimizer of $\psi(\varepsilon)$ and $f:\Omega\to\Omega$ be a diffeomorphism such that $|f(E)|=|E|$. We want to prove that
\begin{equation}
\label{basic tesi}
\mathcal F(K,E)\le\mathcal F(f(K),f(E))\,.
\end{equation}
Let $K'$ denote the set of points of approximate differentiability of $K$, so that $\H^n(K\setminus K')=0$, and for $x\in K'$ denote by $T_x=T_xK=\nu_x^\perp$ the approximate tangent plane to $K$ at $x$, where $\nu_x\in\SS^n$ is chosen so that $\nu_x=\nu_E(x)$ if $x\in \partial^*E$. As in step five of the proof of Theorem \ref{thm lsc}, for every $\sigma>0$ we introduce $r_0=r_0(\sigma,x)$ such that
\begin{equation}
\label{basic striscia 0}
K\cap B_r(x)\subset S_{\sigma,r}^x=\Big\{y\in B_r(x):|(y-x)\cdot\nu_x|<\sigma\,r\Big\}\qquad\forall r<r_0(\sigma,x)\,,
\end{equation}
see \eqref{theta 1 bordo nw}. In fact, by Egoroff's theorem, we can find a compact set $K^*\subset K'$ with $\H^n(K\setminus K^*)<\sigma$ such that $r_*(\sigma)=\max\{r_0(\sigma,x):x\in K^*\}\to 0^+$ as $\sigma\to 0^+$, that is, such that \eqref{basic striscia 0} holds uniformly on $K^*$,
\begin{equation}
\label{basic striscia}
K\cap B_r(x)\subset S_{\sigma,r}^x\qquad\forall x\in K^*\,,\forall r<r_*(\sigma)\,.
\end{equation}
Similarly, if $G_n$ denotes the family of the $n$-planes in $\mathbb{R}^{n+1}$, endowed with a distance $d$, by Lusin's theorem and up to further decreasing the size of $K^*$ while keeping $\H^n(K\setminus K^*)<\sigma$, we can make sure that
\begin{equation}
\label{basic omega}
\sup_{x,y\in K^*\,,|x-y|<r}d(T_x,T_y)+\sup_{x,y\in K^*\,,|y-x|<r}|\nabla f(x)-\nabla f(y)|\le \omega_*(r)\,,
\end{equation}
for a function $\omega_*(r)\to 0^+$ as $r\to 0^+$. Finally, since
\begin{equation*}
\hspace{2.1cm}\left\{
\begin{split}
&\H^n(B_r(x)\cap\partial^*E)={\rm o}(r^n)\,,
\\
&\H^n\big(B_r(x)\cap(K\setminus\partial^*E)\big)=\omega_n\,r^n+ {\rm o}(r^n)\,,\qquad\mbox{for $\H^n$-a.e. $x\in K\setminus\partial^*E$}\,,
\end{split}
\right .
\end{equation*}
\begin{equation*}
\left\{
\begin{split}
&\H^n(B_r(x)\cap\partial^*E)=\omega_n\,r^n+{\rm o}(r^n)\,,
\\
&\H^n\big(B_r(x)\cap(K\setminus\partial^*E)\big)= {\rm o}(r^n)\,, \qquad\mbox{for $\H^n$-a.e. $x\in \partial^*E$}\,,
\end{split}
\right .
\end{equation*}
as $r\to 0^+$, by Egoroff's theorem, up to decreasing $K^*$ and increasing $\omega_*$, we can also ensure that
\begin{eqnarray}\label{basic density 2}
\sup_{x\in K^*\setminus\partial^*E}\H^n(B_r(x)\cap\partial^*E)+\Big|\H^n\big(B_r(x)\cap(K\setminus\partial^*E)\big)-\omega_n\,r^n\Big|\le\omega_*(r)\,r^n\,,
\\\label{basic density 1}
\sup_{x\in K^*\cap \partial^*E}\Big|\H^n(B_r(x)\cap\partial^*E)-\omega_n\,r^n\Big|+\H^n\big(B_r(x)\cap(K\setminus\partial^*E)\big)\le\omega_*(r)\,r^n\,,
\end{eqnarray}
while still keeping $\H^n(K\setminus K^*)<\sigma$ and $\omega_*(r)\to 0^+$ as $r\to 0^+$.
\medskip
Let $\{E_j\}_j$ be a minimizing sequence for $\psi(\varepsilon)$ converging to $(K,E)$ as in \eqref{mininizing seq conv to gen minimiz}, and consider a point $x\in K^*$. Given $\tau\in(0,1)$ and $\sigma\in(0,\tau)$, for a.e. $r<r_*(\sigma)$ such that $B_{2r}(x)\subset\subset\Omega$, we have that $\partial S_{\tau,r}^x\cap\partial E_j$ is $\H^{n-1}$-rectifiable for every $j$ (with the exceptional set depending on $x$). For such values of $r$ and for every $\eta\in(0,r/2)$, we can set
\[
F_j^x=\left\{
\begin{split}
& F_j^{{\rm out}}\,,\qquad\mbox{if $x\in\partial^*E\cup(K^*\cap E^{(1)})$}\,,
\\
& F_j^{{\rm in}}\,,\qquad\hspace{0.2cm}\mbox{if $x\in K^*\cap E^{(0)}$}\,,
\end{split}
\right .
\]
with $F_j^{{\rm out}}$ and $F_j^{{\rm in}}$ defined as in step five of the proof of Theorem \ref{thm lsc}. In particular, $F_j^x\in\mathcal{E}$, $\Omega\cap\partial F_j^x$ is $\mathcal{C}$-spanning $W$, $F_j^x\setminus\mathrm{cl}\,(S_{\tau,r}^x)=E_j\setminus\mathrm{cl}\,(S_{\tau,r}^x)$ and, as proved in \eqref{the important remark}, for a.e. $r<r_*(\sigma)$ we have
\begin{eqnarray}
\label{the important remark citato}
&&\limsup_{\eta\to 0^+}
\Big|\H^n\Big(\Big\{y\in \mathrm{cl}\,(S_{\tau,r}^x)\cap\partial F_j^x:T_y(\partial F_j^x)=T_x\Big\}\Big)-\theta(x)\,\omega_n\,r^n\Big|
\\\nonumber
&&\hspace{3cm}\le C(n)\,\tau\,r^n+C(n,\tau)\,\sigma\,r^n+{\rm o}(1) \qquad\mbox{as $j\to\infty$}\,,
\end{eqnarray}
where $\theta(x)=1$ if $x\in\partial^*E$ and $\theta(x)=2$ if $x\in K\cap (E^{(0)}\cup E^{(1)})$, as well as
\begin{equation}
\label{poca roba}
\limsup_{j\to\infty}\limsup_{\eta\to 0^+}\H^n(S_{\tau,r}^x\cap\partial F_j^x)\le C(n,\tau)\,\sigma\,r^n\,,
\end{equation}
see \eqref{new slabs 3}, \eqref{theta 1 fine 0}, \eqref{coney island 1}, and \eqref{coney island 2 star}. By Besicovitch-Vitali's covering theorem and by Federer's theorem \eqref{federers theorem}, we can find a {\it finite} disjoint family of closed balls $\{B_i=\mathrm{cl}\,(B_{r_i}(x_i))\}_i$ such that $B_i\subset\subset\Omega$ and
\begin{equation}
\label{basic covering}
\H^n\Big(K^*\setminus\bigcup B_{r_i}(x_i)\Big)<\sigma\,,\qquad x_i\in K^*\cap \big(E^{(0)}\cup E^{(1)}\cup\partial^*E\big)\,,\qquad r_i<r_*(\sigma)\,.
\end{equation}
We let $\eta<\min_i\{r_i/2\}$, define $F_j^{x_i}$ accordingly, and set
\[
S_i=S_{\tau,r_i}^{x_i}\subset\subset B_i\,,\qquad T_i=T_{x_i}\,,\qquad F_j^i=F_j^{x_i}\,.
\]
Correspondingly, we define a sequence $\{F_j\}_j\subset\mathcal{E}$ with $\Omega\cap\partial F_j$ $\mathcal{C}$-spanning $W$ by setting
\begin{equation}
\label{basic Fj Ej fuori da Bi}
F_j\setminus \bigcup_i B_i=E_j\setminus\bigcup_i B_i\,,\qquad F_j\cap B_i=F_j^i\cap B_i\,.
\end{equation}
Since $F_j^i\setminus \mathrm{cl}\,(S_i)=E_j\setminus\mathrm{cl}\,(S_i)$ we find that
\begin{equation}
\label{basic Fj Ej fuori da Si}
F_j\setminus \bigcup_i \mathrm{cl}\,(S_i)=E_j\setminus\bigcup_i\mathrm{cl}\,(S_i)\,,
\end{equation}
and, setting
\begin{equation}
\label{theta i def}
\theta_i=1\quad
\mbox{if $x_i\in\partial^*E$}\,,\qquad
\theta_i=2\quad \mbox{if $x_i\in E^{(0)}\cup E^{(1)}$}
\end{equation}
we deduce from \eqref{the important remark citato} and \eqref{poca roba} that, for each $i$,
\begin{eqnarray}
\label{basic Fj has right tangent plane}
&&\limsup_{\eta\to 0^+}\Big|\H^n\big(\big\{y\in \mathrm{cl}\,(S_i)\cap\partial F_j:T_y(\partial F_j)=T_i\big\}\big)-\theta_i\,\omega_n\,r_i^n\Big|
\\\nonumber
&&\hspace{5cm}\le C(n)\,\tau\,r_i^n+C(n,\tau)\,\sigma\,r_i^n+{\rm o}(1)
\\\nonumber
\\
\label{basic Fj xi first kind}
&&\hspace{1.4cm}\limsup_{\eta\to 0^+}\H^n(S_i\cap\partial F_j)\le C(n,\tau)\,\sigma\,r_i^n+{\rm o}(1)
\end{eqnarray}
as $j\to\infty$. Now let $C_*$ and $\varepsilon_*$ be the volume-fixing variation constants defined by $f(E)$. By the monotonicity formula \eqref{monotonicity}, which can be applied to $B_{r_i}(x_i)$ since $x_i\in K$, we have
\begin{equation}\label{basic sum rin 0}
e^{-\Lambda\,r_*(\sigma)}\,\theta_i\,\omega_n\,r_i^n\le e^{-\Lambda\,r_i}\,\theta_i\,\omega_n\,r_i^n\le \mu(B_{r_i}(x_i))=\mu(S_i)\,,
\end{equation}
where in the last identity we have used \eqref{basic striscia}, and where $\Lambda$ depends on $E$. By \eqref{basic sum rin 0}, $\theta_i \geq 1$, and $\mu = \theta \, \H^n \llcorner K$ with $\theta \leq 2$,
\begin{equation}
\label{basic sum rin}
\sum_i\,r_i^n\le C(n,E)\,\sum_i\H^n(K\cap B_i)\le C(n,E)\, \H^n(K)=C(n,E,K)\,,
\end{equation}
so that, by \eqref{basic Fj Ej fuori da Si}, $|S_i|\le C(n)\,\tau\,r_i^{n+1}$ and $r_i\le r_*(\sigma)\le 1$, we find
\[
|F_j\Delta E_j|\le\sum_i|S_i|\le C(n,E,K)\,\tau\,.
\]
Therefore,
\begin{eqnarray*}
|f(F_j)\Delta f(E)|\le C\big(n,E,{\rm Lip} (f),\H^n(K)\big)\,\Big\{\tau+|E_j\Delta E|\Big\}<\varepsilon_*\,,
\end{eqnarray*}
provided $j$ is large enough and $\tau$ is small enough depending on $\varepsilon_*$. By the volume-fixing variations construction, for each $j$ large enough there exists a smooth map $\Phi_j:(-\varepsilon_*,\varepsilon_*)\times\mathbb{R}^{n+1}\to\mathbb{R}^{n+1}$, such that, for every $|v|<\varepsilon_*$, $\Phi_j(v,\cdot)$ is a diffeomorphism with $\Phi_j(v,\Omega)=\Omega$ and
\[
|\Phi_j(v,f(F_j))|=v+|f(F_j)|\,,\qquad \H^n\big(\Phi_j(v,\Sigma)\big)\le\H^n(\Sigma)+ C_*\,|v|\,\H^n(\Sigma)\,,
\]
for every $\H^n$-rectifiable set $\Sigma\subset\Omega$. In particular, if we set
\[
G_j=\Phi_j(v_j,f(F_j))\,,\qquad v_j=|f(E)|-|f(F_j)|=|E|-|f(F_j)|\,,
\]
then we find that $G_j\in\mathcal{E}$, $|G_j|=|E|=\varepsilon$ and
\[
\H^n(\Omega\cap\partial G_j)\le \left( 1 + C\big(n,E,{\rm Lip}(f),\H^n(K)\big)\,\Big\{\tau+|E_j\Delta E|\Big\} \right) \H^n(\Omega\cap\partial f(F_j))\,.
\]
Since $\Omega\cap\partial F_j$ is $\mathcal{C}$-spanning $W$, so is $\Omega\cap\partial G_j$ thanks to Lemma \ref{statement spanning is close by Lipschitz maps}, so that the minimizing sequence property of $E_j$ implies
\begin{equation}
\label{basic Ej prima}
\H^n(\Omega\cap \partial E_j)\le \left( 1 + C\,\Big\{\tau+|E_j\Delta E|\Big\} \right) \H^n(\Omega\cap\partial f(F_j)) + \frac1j\,,
\end{equation}
where, here and for the rest of the proof, $C$ is a generic constant depending on $K$, $E$, $f$ and $n$. We now claim that
\begin{equation}
\label{basic Fj stima}
\limsup_{\sigma\to 0^+}\limsup_{j\to\infty}\,\limsup_{\eta\to0^+}\H^n(\Omega\cap\partial f(F_j))\le\mathcal F(f(K),f(E))+C\,\tau\,.
\end{equation}
Notice that by combining \eqref{basic Ej prima} and \eqref{basic Fj stima}, and by finally letting $\tau\to 0^+$, we complete the proof of \eqref{basic tesi}.
\medskip
To prove \eqref{basic Fj stima}, we notice that $f(\Omega)=\Omega$, $\Omega\cap\partial f(F_j)=f(\Omega\cap\partial F_j)$, and \eqref{basic Fj Ej fuori da Si} yield
\begin{eqnarray*}
\H^n(\Omega\cap \partial f(F_j))\le
\H^n\Big(f\Big(\Omega\cap\partial E_j\setminus\bigcup_i\mathrm{cl}\,(S_i)\Big)\Big)
+\sum_i\int_{\mathrm{cl}\,(S_i)\cap\partial F_j}J^{\partial F_j}f\,d\H^n\,,
\end{eqnarray*}
where
\[
\limsup_{j \to \infty} \, \limsup_{\eta \to 0^+}\H^n\Big(f\Big(\Omega\cap\partial E_j\setminus\bigcup_i \mathrm{cl}\,(S_i)\Big)\Big)
\le C\,\H^n\Big(K\setminus\bigcup_i S_i\Big)\le C\,\sigma
\]
by \eqref{basic striscia}, \eqref{basic covering}, and $\H^n(K\setminus K^*)<\sigma$. Hence,
\begin{eqnarray}\label{basic 1}
\H^n(\Omega\cap \partial f(F_j))\le\sum_i\int_{\mathrm{cl}\,(S_i)\cap\partial F_j}J^{\partial F_j}f\,d\H^n
+C\,\sigma + {\rm o}(1)\,,
\end{eqnarray}
where ${\rm o}(1) \to 0^+$ if we let first $\eta \to 0^+$ and then $j \to \infty$.\\
If we set
\[
Z_i=\big\{y\in \partial S_i\cap\partial F_j:T_y(\partial F_j)=T_i\big\}\,,
\]
then by \eqref{basic Fj has right tangent plane} and \eqref{basic Fj xi first kind} we find
\begin{eqnarray*}
\H^n\big((\mathrm{cl}\,(S_i)\cap\partial F_j)\Delta Z_i\big)\le C(n)\,\tau\,r_i^n+C(n,\tau)\,\sigma\,r_i^n+{\rm o}(1)\,,
\\
\big|\H^n(Z_i)-\theta_i\,\omega_n\,r_i^n\big| \le C(n)\,\tau\,r_i^n+C(n,\tau)\,\sigma\,r_i^n+{\rm o}(1)\,,
\end{eqnarray*}
where ${\rm o}(1)\to 0^+$ if we let first $\eta\to 0^+$ and then $j\to\infty$. Also, it follows from \eqref{basic sum rin 0}, the characterization of $\mu$, and \eqref{basic density 1} that
\begin{equation} \label{basic sum rin 0 nuova}
e^{-\Lambda \, r_*(\sigma)} \, \theta_i \, \omega_n \, r_i^n \leq \theta_i \, \H^n(S_i \cap K) + \omega_*(r_i) \, r_i^n\,.
\end{equation}
By \eqref{basic omega}, \eqref{basic sum rin 0 nuova}, and $r_i < r_*(\sigma)$, we thus find
\begin{eqnarray}\nonumber
\int_{\mathrm{cl}\,(S_i)\cap\partial F_j}J^{\partial F_j}f&\le&
\int_{Z_i}J^{T_i}f+({\rm Lip}\,f)^n\big\{C(n)\,\tau+C(n,\tau)\,\sigma\big\}\,r_i^n+{\rm o}(1)
\\\nonumber
&\le&
\theta_i\,\omega_n\,r_i^n\,\big\{J^{T_i}f(x_i)+C(n)\,\omega_*(r_i)\big\}
+C\,\big\{\tau+C(n,\tau)\,\sigma\big\}\,r_i^n+{\rm o}(1)
\\\nonumber
&\le&\Big\{J^{T_i}f(x_i)+\,C\,\Big(\omega_*(r_*(\sigma))+\tau+C(n,\tau)\,\sigma\Big)\,\Big\}\times
\\\nonumber
&&\times\,\left( \theta_i \, \H^n(S_i\cap K) + \omega_*(r_*(\sigma)) \, r_i^n \right)\,e^{\Lambda\,r_*(\sigma)}+{\rm o}(1)
\\\label{basic 2}
&=&J^{T_i}f(x_i)\big(\theta_i \, \H^n(S_i\cap K^*)+\a_i + \omega_*(r_*(\sigma)) \, r_i^n\big)\,e^{\Lambda\,r_*(\sigma)}
\\\nonumber
&&+C\,\Big\{\omega_*(r_*(\sigma))+\tau+C(n,\tau)\,\sigma\Big\}\,\left( \H^n(S_i\cap K) + \omega_*(r_*(\sigma)) \, r_i^n \right)\,e^{\Lambda\,r_*(\sigma)}
\\\nonumber
&&+{\rm o}(1)\,,
\end{eqnarray}
where we have set
\begin{equation}
\label{basic sum ai}
\a_i=\theta_i \,\H^n(S_i\cap(K\setminus K^*))\quad\mbox{so that}\quad
\sum_i\a_i<2\,\sigma\,.
\end{equation}
Now, again by \eqref{basic omega} we see that
\begin{eqnarray*}
\theta_i\,J^{T_i}f(x_i)\,\H^n(S_i\cap K^*)&\le&\theta_i\int_{S_i\cap K^*}J^Kf\,d\H^n+C(n)\,\omega_*(r_i)\,\H^n(S_i\cap K^*)
\\
&=&\theta_i\,\H^n(f(S_i\cap K^*))+C(n)\,\omega_*(r_i)\,\H^n(S_i\cap K^*)\,.
\end{eqnarray*}
By combining this last relation with \eqref{basic sum rin}, \eqref{basic 1}, \eqref{basic 2} and $r_i<r_*(\sigma)$, we find that
\begin{eqnarray}\label{basic 3}
\H^n(\Omega\cap \partial f(F_j))&\le&e^{\Lambda\,r_*(\sigma)}\,\sum_i\theta_i\,\H^n(f(S_i\cap K^*))
\\\nonumber
&&+
C\,\Big\{\omega_*(r_*(\sigma))+\tau+C(n,\tau)\,\sigma\Big\}\,e^{\Lambda\,r_*(\sigma)}+{\rm o}(1)\,,
\end{eqnarray}
with ${\rm o}(1)\to 0$ as first $\eta\to 0^+$ and then $j\to\infty$. If $x_i\in K^*\setminus\partial^*E$, then $\theta_i=2$ and by \eqref{basic density 2} we have
\begin{eqnarray*}
\theta_i\,\H^n(f(S_i\cap K^*))&\le& 2\,\H^n\big(f\big(S_i\cap (K^*\setminus\partial^*E)\big)\big)+2\,{\rm Lip}(f)^n\,\omega_*(r_i)\,r_i^n
\\
&\le&2\H^n\big(f\big(S_i\cap (K\setminus\partial^*E)\big)\big)+C\,\omega_*(r_*(\sigma))\,r_i^n\,;
\end{eqnarray*}
if, instead, $x_i\in \partial^*E$, then $\theta_i=1$ and \eqref{basic density 1} gives
\begin{eqnarray*}
\theta_i\,\H^n(f(S_i\cap K^*))&\le& \H^n\big(f\big(S_i\cap K^*\cap \partial^*E\big)\big)+{\rm Lip}(f)^n\,\omega_*(r_i)\,r_i^n
\\
&\le&\H^n\big(f(S_i\cap \partial^*E)\big)+C\,\omega_*(r_*(\sigma))\,r_i^n\,;
\end{eqnarray*}
combining these last two estimates with \eqref{basic sum rin}, we find
\begin{eqnarray*}
\sum_i\theta_i\,\H^n(f(S_i\cap K^*))&\le& \sum_i\,2\,\H^n\big(f\big(S_i\cap (K\setminus\partial^*E)\big)\big)
+\H^n\big(f(S_i\cap \partial^*E)\big)
\\
&&+C\,\omega_*(r_*(\sigma))\,\sum_i\,r_i^n
\\
&\le& \mathcal F\Big(f(K),f(E);\bigcup_if(S_i)\Big)+C\,\omega_*(r_*(\sigma))\,,
\end{eqnarray*}
where $f(\partial^*E)=\partial^*f(E)$ by Lemma \ref{lemma redb}. Combining this last estimate with \eqref{basic 3} we find
\[
\H^n(\Omega\cap \partial f(F_j))\le
\,e^{\Lambda\,r_*(\sigma)}\,\Big\{\mathcal F(f(K),f(E))+
C\,\big\{\omega_*(r_*(\sigma))+\tau+C(n,\tau)\,\sigma\big\}\Big\}+{\rm o}(1)\,,
\]
where ${\rm o}(1)\to 0$ as first $\eta\to 0^+$ and then $j\to\infty$; in particular, \eqref{basic Fj stima} holds.
\medskip
We conclude the proof. As explained, \eqref{basic Fj stima} implies \eqref{basic tesi}. By a classical first variation argument, see Appendix \ref{memme}, we deduce the existence of $\l\in\mathbb{R}$ such that
\begin{equation}
\label{basic stationary main}
\l\,\int_{\partial^*E}X\cdot\nu_E\,d\H^n=\int_{\partial^*E}{\rm div}\,^K\,X\,d\H^n+2\,\int_{K\setminus\partial^*E}{\rm div}\,^K\,X\,d\H^n\,,
\end{equation}
for every $X\in C^1_c(\mathbb{R}^{n+1};\mathbb{R}^{n+1})$ with $X\cdot\nu_\Omega=0$ on $\partial\Omega$. Let us now consider the integer rectifiable varifold $V$ supported on $K$, with density $2$ on $K\setminus\partial^*E$ and $1$ on $\partial^*E$. By \eqref{basic stationary main}, we can compute the first variation of $V$ as
\[
\delta V(X)=\int\,\vec{H}\cdot X\,d\,\|V\|\qquad\forall X\in C^1_c(\Omega;\mathbb{R}^{n+1})
\]
where $\vec{H}=0$ on $K\setminus\partial^*E$ and $\vec{H}=\lambda\,\nu_E$ on $\partial^*E$. In particular, $\vec{H}\in L^\infty(\|V\|)$, and by Allard's regularity theorem \cite[Chapter 5]{SimonLN}, we have $K=\Sigma\cup{\rm Reg}$, where $\Sigma\subset K$ is closed and has empty interior in $K$, and where
for every $x\in {\rm Reg}$ there exist $r_x>0$ and a $C^{1,\alpha}$-function $u$ defined on $\mathbb{R}^n$ such that
\begin{equation}
\label{allard local graph}
B_{r_x/2}(x)\cap K=B_{r_x/2}(x)\cap{\rm Reg}=B_{r_x/2}(x)\cap{\rm graph}(u)\,.
\end{equation}
If $x\in{\rm Reg}\cap\partial E$, then, by the divergence theorem, by \eqref{allard local graph}, and by the inclusion $\Omega\cap\partial E\subset K$,
\begin{eqnarray}\label{allard local epigraph}
&&\mbox{$E={\rm epigraph}(u)$}\qquad\hspace{0.6cm}\mbox{inside $B_{r_x/2}(x)$}\,,
\\\label{allard local red bound}
&&\mbox{$K=\partial E={\rm graph}(u)$}\qquad\mbox{inside $B_{r_x/2}(x)$}\,,
\end{eqnarray}
which imply ${\rm Reg}\cap\partial E\subset\Omega\cap\partial^*E$. Vice versa, if $x\in\Omega\cap\partial^*E$, then $\H^n(B_r(x)\cap(K\setminus\partial^*E))={\rm o}(r^n)$ and $\H^n(B_r(x)\cap\partial^*E)=\omega_n\,r^n+{\rm o}(r^n)$ as $r\to 0^+$, so that Allard's regularity theorem implies $\Omega\cap\partial^*E\subset{\rm Reg}\cap\partial E$. Thus ${\rm Reg}\cap\partial E=\Omega\cap\partial^*E$, and, in particular, $\Omega\cap(\partial E\setminus\partial^*E)\subset\Sigma$, so that $\Omega\cap(\partial E\setminus\partial^*E)$ has empty interior in $K$. Moreover, by \eqref{allard local epigraph}, \eqref{basic stationary main} implies that the graph of $u$ has constant mean curvature in $B_{r_x/2}(x)$, and thus that $\partial^*E$ is a smooth hypersurface, see e.g. \cite[Section 8.2]{GiMa}. Finally, \eqref{basic stationary main} implies that $K\setminus\partial E$ is the support of a multiplicity one stationary varifold in the open set $\Omega\setminus\partial E$, so that $K\setminus(\Sigma\cup\partial E)$ is a smooth hypersurface with zero mean curvature, and $\H^n(\Sigma\setminus\partial E)=0$. The proof of Theorem \ref{thm basic regularity} is complete.
\end{proof}
\section{Convergence to Plateau's problem: Proof of Theorem \ref{thm convergence as eps goes to zero}}\label{section convergence to plateau} This section is devoted to showing that $\psi(\varepsilon)\to 2\,\ell$ as $\varepsilon\to 0^+$ and that a sequence $\{(K_h,E_h)\}_h$ of generalized minimizers for $\psi(\varepsilon_h)$ with $\varepsilon_h\to 0^+$ as $h\to\infty$ has to converge to a minimizer $S$ of Plateau's problem $\ell$, counted with multiplicity $2$, in the sense of Radon measures. If one could prove the latter assertion directly, then the former would follow at once by the lower semicontinuity of the total mass along weak-star converging sequences of Radon measures and by the upper bound $\psi(\varepsilon)\le 2\,\ell+C\,\varepsilon^{n/(n+1)}$ proved in \eqref{psi eps basic bounds}. A possible direct approach to the convergence of $(K_h,E_h)$ to a minimizer of Plateau's problem may be attempted using White's compactness theorem \cite{whiteMULT2limit}. That would require proving an $L^1$-bound on the first variations of the varifolds $V_h$ supported on $K_h$ with density $1$ on $\Omega\cap\partial^*E_h$ and with density $2$ on $K_h\setminus\partial^*E_h$. The validity of such a bound is supported by the analysis of simple examples like Example \ref{example two points} and Example \ref{example tripe sing}. However, Example \ref{example tripe sing} also indicates that when singularities are present in the limiting Plateau minimizers $S$, then an $L^1$-bound for the mean curvatures of the varifolds $V_h$ would result from a quantitative balance between the rate of divergence towards $-\infty$ of the constant mean curvatures of the reduced boundaries $\partial^*E_h$, and the rate of vanishing of the areas $\H^n(\Omega\cap\partial^*E_h)$. Validating a quantitative analysis of this kind in some generality would of course be very interesting per se as a way to describe the behavior of generalized minimizers; nonetheless, completing this analysis has so far eluded our attempts.
Coming back to the proof of Theorem \ref{thm convergence as eps goes to zero}, we adopt a different approach. We prove directly that $\psi(\varepsilon)\to 2\,\ell$ as $\varepsilon\to 0^+$ by exploiting the same ``compactness-by-comparison'' strategy adopted in the proof of Theorem \ref{thm lsc}. An interesting point here is that because $|E_h|=\varepsilon_h\to 0^+$, we do not have a limit set that we can use to uniformly adjust volumes among local competitors of the elements of the minimizing sequence, and have to use a sort of ``absolute minimality at vanishing volumes'' of any sequence $\{(K_h,E_h)\}_h$ of generalized minimizers such that $\lim_{h\to\infty}\mathcal F(K_h,E_h)$ is equal to $\liminf_{\varepsilon\to 0^+}\psi(\varepsilon)$.
\begin{proof}[Proof of Theorem \ref{thm convergence as eps goes to zero}]
{\it Step one}: We start proving that $\psi$ is lower semicontinuous on $(0,\infty)$. Given $\varepsilon_0>0$, let $\varepsilon_j\to\varepsilon_0>0$ as $j\to\infty$ be such that
\[
\lim_{j\to\infty}\psi(\varepsilon_j)=\liminf_{\varepsilon\to\varepsilon_0}\psi(\varepsilon)\,,
\]
and let $E_j\in\mathcal{E}$ be such that $|E_j|=\varepsilon_j$ and $\H^n(\Omega\cap\partial E_j)\le\psi(\varepsilon_j)+1/j$. By \eqref{psi eps basic bounds}, $\psi(\varepsilon_j)$ is bounded in $j$, and thus by the compactness criteria for sets of finite perimeter and for Radon measures we have that, up to extracting subsequences, $\mu_j=\H^n\llcorner(\Omega\cap\partial E_j)\stackrel{*}{\rightharpoonup} \mu$ as Radon measures in $\Omega$ and $E_j\to E$ in $L^1_{{\rm loc}}(\Omega)$, where $\mu$ is a Radon measure in $\Omega$, and where $E\subset\Omega$ is a set of finite perimeter. We now repeat the proof of Theorem \ref{thm lsc}, with the only difference that while $|E_j|$ was constant in that proof, we now have that $|E_j|=\varepsilon_j\to\varepsilon_0$ with $\varepsilon_0>0$. The modifications are minimal. In step two (nucleation of the sequence $E_j$), we repeat {\it verbatim} the argument, using the facts that $|E_j|\ge\varepsilon_0/2$ and that $\H^n(\Omega\cap\partial E_j)\le 2\,\ell+C\,\varepsilon_0^{n/(n+1)}+1$ in place of $|E_j|=\varepsilon$ and $\H^n(\Omega\cap\partial E_j)\le\psi(\varepsilon)+1$. Based on step two, in step three we construct volume-fixing variations with uniform constants $\varepsilon_*$ and $C_*$, and then repeat the rest of the argument without modifications. As a consequence, we can show that $\mu=\theta\,\H^n\llcorner K$ and $(K,E)\in\mathcal{K}$ is a generalized minimizer of $\psi(\varepsilon_0)$, with
\[
\psi(\varepsilon_0)=\mu(\Omega)=\lim_{j\to\infty}\mu_j(\Omega)\leq\lim_{j\to\infty}\psi(\varepsilon_j)=\liminf_{\varepsilon\to\varepsilon_0}\psi(\varepsilon)\,,
\]
as claimed. The key information here is of course that $|E_j|\ge\varepsilon_0/2$ where $\varepsilon_0>0$. If $\varepsilon_0=0$, then the nucleation lemma is inconsequential, and the argument cannot be used.
\medskip
\noindent {\it Step two}: Thanks to \eqref{psi eps basic bounds}, to prove $\psi(\varepsilon)\to 2\,\ell$ as $\varepsilon\to 0^+$ we just need to show that
\begin{equation}
\label{psi eps lb 2 ell}
\liminf_{\varepsilon\to 0^+}\psi(\varepsilon)\ge2\,\ell\,.
\end{equation}
To this end, we pick a sequence $\varepsilon_h\to 0^+$ such that
\begin{equation}
\label{ias1}
\liminf_{\varepsilon\to 0^+}\psi(\varepsilon)=\lim_{h\to\infty}\psi(\varepsilon_h)\,.
\end{equation}
Notice that, in this way, given an arbitrary sequence $\sigma_h\to 0^+$, we have
\begin{equation}
\label{ias2}
\limsup_{h\to\infty}\left[\psi(\varepsilon_h)-\psi(\sigma_h)\right]\le 0\,.
\end{equation}
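Indeed, since $\sigma_h\to 0^+$ implies $\liminf_{h\to\infty}\psi(\sigma_h)\ge\liminf_{\varepsilon\to 0^+}\psi(\varepsilon)$, and since $\psi$ is bounded thanks to \eqref{psi eps basic bounds}, we can estimate
\[
\limsup_{h\to\infty}\left[\psi(\varepsilon_h)-\psi(\sigma_h)\right]
\le\lim_{h\to\infty}\psi(\varepsilon_h)-\liminf_{h\to\infty}\psi(\sigma_h)
\le\liminf_{\varepsilon\to 0^+}\psi(\varepsilon)-\liminf_{\varepsilon\to 0^+}\psi(\varepsilon)=0\,.
\]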
Let $\{E_{h,j}\}_j$ be a minimizing sequence in $\psi(\varepsilon_h)$. By Theorem \ref{thm lsc}, there exists a generalized minimizer $(K_h,E_h)$ in $\psi(\varepsilon_h)$ such that, up to extracting subsequences,
\begin{eqnarray*}
&&E_{h,j}\to E_h\qquad\hspace{5.8cm}\mbox{in $L^1(\Omega)$ as $j\to\infty$}\,,
\\
&&\mu_{h,j}=\H^n\llcorner(\Omega\cap\partial E_{h,j})\stackrel{*}{\rightharpoonup} \mu_h\qquad\hspace{2.8cm}\mbox{as Radon measures in $\Omega$ as $j\to\infty$}\,,
\\
&&\mbox{$|E_{h,j}|=\varepsilon_h$ and}\,\, \H^n(\Omega\cap\partial E_{h,j})\le \psi(\varepsilon_h)+\frac1j\,,\qquad\forall j\in\mathbb{N}\,,
\end{eqnarray*}
where, by \eqref{psi eps basic bounds} and up to extracting a further subsequence,
\begin{eqnarray}\label{ias def muh}
\mu_h=2\,\H^n\llcorner(K_h\setminus\partial^*E_h)+\H^n\llcorner(\Omega\cap\partial^*E_h)\stackrel{*}{\rightharpoonup}\mu\qquad\mbox{as Radon measures in $\Omega$}
\end{eqnarray}
for some Radon measure $\mu$ in $\Omega$. Given $x\in\Omega \cap {\rm spt}\,\mu$, we set $d(x)={\rm dist}(x,\partial\Omega)$, and let
\begin{equation}
\label{ias5}
H_{x,r}=\big\{h\in\mathbb{N}:|E_h\setminus B_r(x)|>0\big\}\,,\qquad I_x=\big\{r\in(0,d(x)):\mbox{$H_{x,r}$ is infinite}\big\}\,.
\end{equation}
We now look at local variations $F_{h,j}$ of $E_{h,j}$ such that $|F_{h,j}|$ has a positive limit volume $\sigma_h$ as $j\to\infty$, which in turn satisfies $\sigma_h\to 0^+$ as $h\to\infty$. The idea is that, by \eqref{ias2}, we will be able to use such variations to gather information on $\mu$.
\medskip
\noindent {\it Claim}: for every $r\in I_x$, if $\{F_{h,j}\}_{h\in H_{x,r},\,j \in \mathbb{N}}\subset\mathcal{E}$ is such that $\Omega\cap\partial F_{h,j}$ is $\mathcal{C}$-spanning $W$ and $F_{h,j}\Delta E_{h,j}\subset\mathrm{cl}\,(B_r(x))$ for every $h\in H_{x,r}$ and every $j\in\mathbb{N}$, and if
\begin{equation}
\label{ias sh to zero}
\exists\,\,\sigma_h=\lim_{j\to\infty}|F_{h,j}|>0\,,\qquad\mbox{and}\quad\lim_{h\in H_{x,r}\,,h\to\infty}\sigma_h=0\,,
\end{equation}
then
\begin{equation}
\label{ias7}
\mu(B_r(x))\le \liminf_{h\in H_{x,r}\,,h\to\infty}\liminf_{j\to\infty}\H^n\big(\mathrm{cl}\,(B_r(x))\cap\partial F_{h,j}\big)\,.
\end{equation}
\medskip
\noindent To prove this claim, we first notice that, for every $h\in H_{x,r}$,
\begin{equation}
\label{ias5starstar}
\sigma_h=\lim_{j\to\infty}|F_{h,j}|\ge|E_h\setminus B_r(x)|>0\,.
\end{equation}
In particular, for $j$ large enough, $|F_{h,j}|>0$, $\psi(|F_{h,j}|)$ is well-defined, and $F_{h,j}$ is a competitor for $\psi(|F_{h,j}|)$, so that
\begin{eqnarray}\nonumber
\psi\big(|F_{h,j}|\big)&\le&\H^n(\Omega\cap\partial F_{h,j})=\H^n(\mathrm{cl}\,(B_r(x))\cap\partial F_{h,j})+\H^n(\partial E_{h,j}\cap\Omega\setminus\mathrm{cl}\,(B_r(x)))
\\
\nonumber
&\le&\H^n(\mathrm{cl}\,(B_r(x))\cap\partial F_{h,j})+\psi(\varepsilon_h)+\frac1{j}-\H^n(\partial E_{h,j}\cap B_r(x))
\end{eqnarray}
which can be recombined into
\[
\mu_{h,j}(B_r(x))\le \H^n(\mathrm{cl}\,(B_r(x))\cap\partial F_{h,j})+\psi(\varepsilon_h)-\psi\big(|F_{h,j}|\big)+\frac1{j}\,.
\]
Letting $j\to\infty$, by $\mu_{h,j}\stackrel{*}{\rightharpoonup}\mu_h$, $|F_{h,j}|\to \sigma_h>0$, and the lower semicontinuity of $\psi$ on $(0,\infty)$, we find that
\[
\mu_h(B_r(x))\le\liminf_{j\to\infty}\H^n(\mathrm{cl}\,(B_r(x))\cap\partial F_{h,j})+\psi(\varepsilon_h)-\psi(\sigma_h)\,.
\]
Since $\sigma_h\to 0^+$ as $h\to\infty$ with $h\in H_{x,r}$, by $\mu_h\stackrel{*}{\rightharpoonup}\mu$ and \eqref{ias2} we deduce \eqref{ias7}, and thus prove the claim.
\medskip
\noindent {\it Step three}: We now fix $x\in{\rm spt}\mu$, set $f(r)=\mu(B_r(x))$, and prove that, for a.e. $r\in I_x$,
\begin{eqnarray}\label{ias late 1}
&&\left\{\begin{split}
&\mbox{either $f'(r)\ge c(n)\,r^{n-1}$}\,,
\\
&\mbox{or $(f^{1/n})'(r)\ge c(n)$}\,,
\end{split}
\right .
\\\label{ias9}
&&f(r)\le \frac{r}n\,f'(r)\,.
\end{eqnarray}
By using the coarea formula together with $|E_h|\to 0$ as $h\to\infty$ and $E_{h,j}\to E_h$ as $j\to\infty$, we find that for a.e. $r<d(x)$,
\begin{eqnarray}\label{ias rect}
\mbox{$\partial E_{h,j}\cap\partial B_r(x)$ is $\H^{n-1}$-rectifiable}\,,
\\\label{ias needed for}
\lim_{j\to\infty}\H^n(E_{h,j}\cap\partial B_r(x))=\H^n(E_h\cap\partial B_r(x))\,,
\\\label{ias needed for Ahj to zero}
\lim_{h\to\infty}\lim_{j\to\infty}\H^n(E_{h,j}\cap\partial B_r(x))=0\,,
\end{eqnarray}
for every $h,j\in\mathbb{N}$. Moreover, if we set
\[
f_{h,j}(r)=\mu_{h,j}(B_r(x))\,,\qquad f_h(r)=\mu_h(B_r(x))\,,
\]
then, again by the coarea formula and by Fatou's lemma, for a.e. $r<d(x)$ we find
\begin{equation}\label{ias derivate}
\begin{split}
\H^{n-1}(\partial E_{h,j}\cap\partial B_r(x))&\le f_{h,j}'(r)\,,
\\
g_h(r)=\liminf_{j\to\infty}f_{h,j}'(r)&\le f_h'(r)\,,
\\
g(r)=\liminf_{h\in H_{x,r},h\to\infty}f_h'(r)&\le f'(r)\,,
\end{split}
\end{equation}
for every $h,j\in\mathbb{N}$. We first prove \eqref{ias late 1}. Let $r\in I_x$ be such that \eqref{ias rect}, \eqref{ias needed for}, \eqref{ias needed for Ahj to zero} and \eqref{ias derivate} hold, and let $A_{h,j}$ denote an $\H^n$-maximal connected component of $\partial B_r(x)\setminus\partial E_{h,j}$. If $A_{h,j}\subset E_{h,j}$, then, by spherical isoperimetry, by \eqref{ias derivate}, and since the relative boundary of $A_{h,j}$ in $\partial B_r(x)$ is contained in $\partial B_r(x)\cap\partial E_{h,j}$, we find
\begin{eqnarray*}
f_{h,j}'(r)\ge c(n)\,\H^n(\partial B_r(x)\setminus A_{h,j})^{(n-1)/n}\,,
\end{eqnarray*}
where the lower bound converges to $c(n)\,r^{n-1}$ if we let first $j\to\infty$ and then $h\to\infty$ thanks to \eqref{ias needed for Ahj to zero}; hence, if $A_{h,j}\subset E_{h,j}$, the first alternative in \eqref{ias late 1} holds. We now assume that $A_{h,j}\cap E_{h,j}=\emptyset$, and consider the corresponding cup competitor $F_{h,j}$ as defined in Lemma \ref{lemma cup competitor first kind} starting from $E_{h,j}$, $A_{h,j}$. More precisely, if $\{\eta^{h,j}_k\}_{k=1}^\infty$ denotes the corresponding sequence as in \eqref{fix:coarea trick}, we choose $k(h,j)$ so that, setting
\begin{eqnarray*}
Y_{h,j} &=& \partial B_r(x) \setminus \mathrm{cl}\, ((E_{h,j} \cap \partial B_r (x)) \cup A_{h,j})\,, \\
S_{h,j} &=& \partial E_{h,j} \cap \mathrm{cl}\, (A_{h,j}) \setminus \left( \mathrm{cl}\, ((E_{h,j} \cap \partial B_r(x)) \cup Y_{h,j}) \right)\,,
\end{eqnarray*}
we have that $\eta_j = \eta^{h,j}_{k(h,j)}$ satisfies $\eta_j\le r/2j$, with
\begin{eqnarray}
\label{sequence2}
&&\H^n (\partial B_r (x) \cap \{{\rm d}_{S_{h,j}} \leq \eta_j\}) \leq \frac1{j}\,,
\\\label{sequence3}
&&\eta_j \,\H^{n-1} (\partial B_r (x) \cap \{{\rm d}_{S_{h,j}} = \eta_j\}) \leq\frac{1}j\,.
\end{eqnarray}
Then, with the usual notation
\begin{eqnarray*}
U_{h,j} = \partial B_r (x) \cap \{{\rm d}_{S_{h,j}} < \eta_j\}\,, &\qquad& Z_{h,j} = Y_{h,j} \cup \left( U_{h,j} \setminus \mathrm{cl}\, (E_{h,j}\cap \partial B_r (x)) \right)\,,
\end{eqnarray*}
we define
\[
F_{h,j}=\big(E_{h,j}\setminus\mathrm{cl}\,(B_r(x))\big)\cup N_{\eta_j}(Z_{h,j})\,.
\]
By Lemma \ref{lemma cup competitor first kind}, $F_{h,j}\in\mathcal{E}$, $\Omega\cap\partial F_{h,j}$ is $\mathcal{C}$-spanning $W$ and $E_{h,j}\Delta F_{h,j}\subset \mathrm{cl}\,(B_r(x))$. Since $\eta_j\to 0$ as $j\to\infty$, we find
\[
\sigma_h=\lim_{j\to\infty}|F_{h,j}|=\lim_{j\to\infty}|E_{h,j}\setminus B_r(x)|=|E_h\setminus B_r(x)|\,,
\]
so that $\sigma_h>0$ if $h\in H_{x,r}$, and $\sigma_h\to 0^+$ if we let $h\to\infty$. Thus $F_{h,j}$ satisfies \eqref{ias sh to zero}, and we can apply \eqref{ias7} to $F_{h,j}$. To estimate the upper bound in \eqref{ias7}, we look back at \eqref{cup buccia eta level}, \eqref{fix rect 1}, \eqref{fix:est2}, and \eqref{fix:est3}, and find that
\begin{equation}\label{ias festa}
\begin{split}
\H^n(\mathrm{cl}\,(B_r(x))\cap\partial F_{h,j})&\le(2+C(n)\,\eta_j)\,\H^n(\partial B_r(x)\setminus A_{h,j})
\\
&+ (2 + C(n)\,\eta_j)\, \H^n (\partial B_r (x) \cap \{{\rm d}_{S_{h,j}} \leq \eta_j\}) \\
&+C(n)\,\eta_j\,\left(\H^{n-1}(\partial B_r(x)\cap\partial E_{h,j}) + \H^{n-1}(\partial B_r(x) \cap \{{\rm d}_{S_{h,j}} = \eta_j\})\right)\,.
\end{split}
\end{equation}
By \eqref{ias7}, \eqref{sequence2}, \eqref{sequence3}, and \eqref{ias festa} we deduce that
\begin{eqnarray}\nonumber
f(r)=\mu(B_r(x))&\le&\liminf_{h\in H_{x,r}\,,h\to\infty}\liminf_{j\to\infty}\H^n\big(\mathrm{cl}\,(B_r(x))\cap\partial F_{h,j}\big)
\\\label{ias played by}
&\le&\liminf_{h\in H_{x,r}\,,h\to\infty}\liminf_{j\to\infty}
2\,\H^n(\partial B_r(x)\setminus A_{h,j})
\\\nonumber
&\le&C(n)\,\liminf_{h\in H_{x,r}\,,h\to\infty}\liminf_{j\to\infty}
f_{h,j}'(r)^{n/(n-1)}\le C(n)\,f'(r)^{n/(n-1)}\,.
\end{eqnarray}
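In particular, since $f(r)=\mu(B_r(x))>0$ for $x\in{\rm spt}\,\mu$, the last bound forces $f'(r)>0$, and can be rewritten as
\[
(f^{1/n})'(r)=\frac{f'(r)}{n\,f(r)^{(n-1)/n}}
\ge\frac{f'(r)}{n\,\big(C(n)\,f'(r)^{n/(n-1)}\big)^{(n-1)/n}}
=\frac1{n\,C(n)^{(n-1)/n}}\,,
\]
which is the second alternative in \eqref{ias late 1}, up to the value of the dimensional constant $c(n)$.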
We have thus proved that the second alternative in \eqref{ias late 1} holds, as claimed. We now prove \eqref{ias9}: let $F_{h,j}$ denote the set defined by Lemma \ref{lemma cone competitor} as an approximation of the cone competitor corresponding to $E_{h,j}$ in $B_r(x)$ with $\eta = \eta_j = r/2j$. We have that $F_{h,j} \in \mathcal{E}$ and that $\Omega \cap \partial F_{h,j}$ is $\mathcal{C}$-spanning $W$; furthermore,
by \eqref{cone competitor volume inequality} and \eqref{ias needed for} we find
\[
\sigma_h=\lim_{j\to\infty}|F_{h,j}|\geq|E_h\setminus B_r(x)|+\frac{r}{n+1}\,\H^n(E_h\cap \partial B_r(x))
\]
(in particular, $\sigma_h>0$ if $h\in H_{x,r}$) and, by \eqref{ias needed for Ahj to zero}, $\sigma_h\to 0^+$ as $h\to\infty$. Thus \eqref{ias sh to zero} holds, and we can deduce from \eqref{ias7} and \eqref{cone competitor area inequality} that
\[
f(r)=\mu(B_r(x))\le\liminf_{h\in H_{x,r}\,,h\to\infty}\liminf_{j\to\infty}\frac{r}n\,\H^{n-1}(\partial E_{h,j}\cap\partial B_r(x))\le \frac{r}n\,f'(r)\,,
\]
that is \eqref{ias9}.
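For later use, we also observe that \eqref{ias9} exactly encodes the monotonicity of the density ratio $r^{-n}f(r)$: at every differentiability point of $f$,
\[
\frac{d}{dr}\,\frac{f(r)}{r^n}=\frac{r\,f'(r)-n\,f(r)}{r^{n+1}}\ge 0\,.
\]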
\medskip
\noindent {\it Step four}: We now define a function $g:\Omega\to(0,\infty)\cup\{-\infty\}$ by letting
\begin{eqnarray*}
g(x)&=&\sup\Big\{s>0:(0,s)\subset I_x\Big\}
\\
&=&\sup\Big\{t>0:\mbox{if $s<t$, then $|E_h\setminus B_s(x)|>0$ for infinitely many $h$}\Big\}\,.
\end{eqnarray*}
We notice that
\begin{eqnarray}
\label{g 1}
&&\mbox{$g$ is lower semicontinuous on $\Omega$}\,,
\\
\label{g 2}
&&\mbox{$\{g=-\infty\}$ contains at most one point}\,.
\end{eqnarray}
(Notice that $\{g=-\infty\}$ may indeed contain one point: this is the case of the singular point of a triple junction, see Figure \ref{fig example23}-(b).) To prove \eqref{g 1}: if $g(x)\ne-\infty$, then $g(x)>0$, and for every $s\in(0,g(x))$, $|E_h\setminus B_s(x)|>0$ for infinitely many $h$. Thus, if $x_m\to x$ in $\Omega$, $\eta\in(0,g(x))$, and $m_\eta$ is such that $|x-x_m|<\eta$ for every $m\ge m_\eta$, then, for every $m\ge m_\eta$ and $s\in(0,g(x)-\eta)$,
\[
|E_h\setminus B_s(x_m)|\ge |E_h\setminus B_{s+\eta}(x)|>0\,,\qquad\mbox{for infinitely many $h$}\,,
\]
that is, $g(x)-\eta\le g(x_m)$ for every $m\ge m_\eta$, and thus $\liminf_{m\to\infty}g(x_m)\ge g(x)$; this proves \eqref{g 1}. Next, if $g(x_1)=g(x_2)=-\infty$, then for every $s>0$ there exists $h(s)$ such that
\[
|E_h\setminus B_s(x_1)|= |E_h\setminus B_s(x_2)|=0\qquad\forall h\ge h(s)\,.
\]
If $x_1\ne x_2$, we can take $s=|x_1-x_2|/2$ and deduce that $|E_h|=0$ for every $h\ge h(s)$, contradicting $|E_h|=\varepsilon_h>0$; thus \eqref{g 2} holds. Let us now consider the open set $\{g>s\}\subset\Omega$, $s>0$, and set
\[
Z(s)={\rm spt}\mu\cap\{g>s\}\,,\qquad Z={\rm spt}\mu\cap\{g>0\}\,.
\]
We claim that if $x\in Z(s)$, then
\begin{equation}
\label{ias replaced}
f(r)\ge c_0(n)\,r^n\qquad\forall r\in(0,s)\,,\qquad\mbox{$r^{-n}\,f(r)$ is increasing over $r\in(0,s)$}\,.
\end{equation}
The second assertion is immediate from \eqref{ias9}. To prove the first one, set
\[
L_1=\big\{r\in(0,s):f'(r)\ge c(n)\,r^{n-1}\big\}\,,\qquad L_2=(0,s)\setminus L_1\,,
\]
with $c(n)$ as in \eqref{ias late 1}. If $x\in Z(s)$ is such that $\H^1(L_1)\ge s/2$, then for every $r\in(0,s)$
\begin{eqnarray*}
f(r)\ge\int_{L_1\cap(0,r)}f'\ge c(n) \int_{L_1\cap(0,r)}t^{n-1}\,dt\ge c(n)\,\int_0^{\min\{r,s/2\}}\,t^{n-1}\,dt\ge \frac{c(n)}{n\,2^n}\,r^n\,;
\end{eqnarray*}
if instead $\H^1(L_2)\ge s/2$, then for every $r\in(0,s)$,
\begin{eqnarray*}
f(r)^{1/n}\ge\int_{L_2\cap(0,r)}(f^{1/n})'\ge c(n)\, \H^1(L_2\cap(0,r))\ge c(n)\,\min\big\{r,\frac{s}2\big\}\ge\frac{c(n)}2\,r\,,
\end{eqnarray*}
where we have used the fact that, by \eqref{ias late 1}, we have $(f^{1/n})'\ge c(n)$ on $L_2$. Thanks to \eqref{ias replaced}, we are in a position to use \cite[Theorem 6.9]{mattila} and Preiss' theorem (as done in step four of the proof of Theorem \ref{thm lsc}) on each $Z(s)$, to find that $Z$ is $\H^n$-rectifiable with
\begin{equation}
\label{ias101}
\mu\llcorner Z=\theta\,\H^n\llcorner Z\,,
\end{equation}
where the density
\[
\theta(x)=\lim_{r\to 0^+}\frac{\mu(B_r(x))}{\omega_n\,r^n}\quad\mbox{exists in $[c_0(n),\infty)$ for every $x\in Z$}\,.
\]
Moreover, by \eqref{g 2},
\begin{equation}
\label{ias10}
\H^0({\rm spt}\mu\setminus Z)\le 1\,.
\end{equation}
By combining \eqref{ias101} and \eqref{ias10} we find that $K={\rm spt}\mu$ is $\H^n$-rectifiable and such that $\mu=\theta\,\H^n\llcorner K$. Since $K_h={\rm spt}\,\mu_h$ is $\mathcal{C}$-spanning $W$ and $\mu_h\stackrel{*}{\rightharpoonup}\mu$, by Lemma \ref{statement K spans} we find that $K$ is $\mathcal{C}$-spanning $W$, and thus admissible in $\ell$, so that
\begin{equation}
\label{ias12}
\liminf_{\varepsilon\to 0^+}\psi(\varepsilon)
=\lim_{h\to\infty}\mu_h(\Omega)\ge\mu(\Omega)=\int_K\,\theta\,d\H^n\ge\,\min_K\,\theta\,\H^n(K)\ge\,\ell\,\min_K\,\theta\,.
\end{equation}
Thus, to complete the proof of \eqref{psi eps lb 2 ell} we just need to show that
\begin{equation}
\label{ias11}
\mbox{$\theta\ge2$ $\H^n$-a.e. on $K$}\,.
\end{equation}
Since $\mu_{h,j}=\H^n\llcorner(\Omega\cap\partial E_{h,j})\stackrel{*}{\rightharpoonup}\mu_h$ as $j\to\infty$, with $\mu_h\stackrel{*}{\rightharpoonup} \theta\,\H^n\llcorner K$ as $h\to\infty$, we can extract a diagonal subsequence $j=j(h)$ so that, setting $E_h^* = E_{h,j(h)}$, we have $\{E_h^*\}_h\subset\mathcal{E}$, $\Omega\cap\partial E_h^*$ is $\mathcal{C}$-spanning $W$, and
\[
\mu_h^*=\H^n\llcorner(\Omega\cap\partial E_h^*)\stackrel{*}{\rightharpoonup} \theta\,\H^n\llcorner K\,,\qquad\mbox{as $h\to\infty$}\,.
\]
Moreover, $\mu(B_r(x))\ge c(n)\,r^n$ for every $r\in(0,s)$ if $x\in K\cap\{g>s\}$ and, thanks to \eqref{ias festa},
\[
\liminf_{h\to\infty}\H^n(B_r(x)\cap\partial E_h^*)\le C(n)\,\liminf_{h\to\infty}\,\H^n(\partial B_r(x)\setminus A_{r,h}^0)\,,
\]
where $A_{r,h}^0$ denotes an $\H^n$-maximal connected component of $\partial B_r(x)\setminus\partial E_h^*$, this time for every $x\in K$ and $B_r(x)\subset\subset\Omega$. We can thus apply Lemma \ref{lemma llb} with the open set $\Omega'=\{g>s\}$ to deduce that
\[
\mbox{$\theta\ge 2$ $\H^n$-a.e. on $\{g>s\}\cap K\setminus \partial^*E^*$}
\]
where $E^*=\emptyset$ is the $L^1$-limit of the sets $E_h^*$. Since $\partial^*E^*=\emptyset$, taking the union over $s>0$ and recalling \eqref{ias10}, we conclude that \eqref{ias11} holds.
\medskip
\noindent {\it Step five}: Now that $\psi(\varepsilon)\to 2\,\ell$ as $\varepsilon\to 0^+$ has been proved, let $(K_h,E_h)$ be a sequence of generalized minimizers of $\psi(\varepsilon_h)$ for an arbitrary sequence $\varepsilon_h\to 0^+$. Since the limit of $\psi(\varepsilon)$ as $\varepsilon\to0^+$ exists, $\varepsilon_h$ automatically satisfies \eqref{ias1}, and the arguments of steps two to four can be repeated {\it verbatim}. Correspondingly, up to extracting subsequences, \eqref{ias def muh} holds with $\mu=\theta\,\H^n\llcorner K$, $\theta\ge 2$ $\H^n$-a.e. on $K$, and $K$ a relatively compact subset of $\Omega$, $\H^n$-rectifiable, and $\mathcal{C}$-spanning $W$. By plugging $\psi(\varepsilon)\to 2\,\ell$ as $\varepsilon\to 0^+$ into \eqref{ias12}, we find that $\theta=2$ $\H^n$-a.e. on $K$ and $2\,\H^n(K)=2\,\ell$, so that $K$ is a minimizer of $\ell$, and thus, looking back at \eqref{ias def muh}, we conclude that \eqref{weak star convergence of gen minimizers} holds.
\end{proof}
% --------------------------------------------------------------------------
% End of arXiv:1907.00551, "Plateau's problem as a singular limit of
% capillarity problems" (https://arxiv.org/abs/1907.00551), Analysis of PDEs
% (math.AP); Mathematical Physics (math-ph).
% Abstract: Soap films at equilibrium are modeled, rather than as surfaces,
% as regions of small total volume through the introduction of a capillarity
% problem with a homotopic spanning condition. This point of view introduces
% a length scale in the classical Plateau's problem, which is in turn
% recovered in the vanishing volume limit. This approximation of area
% minimizing hypersurfaces leads to an energy based selection principle for
% Plateau's problem, points at physical features of soap films that are
% unaccessible by simply looking at minimal surfaces, and opens several
% challenging questions.
% --------------------------------------------------------------------------
% --------------------------------------------------------------------------
% arXiv:2207.06074, "Optimal Reach Estimation and Metric Learning"
% (https://arxiv.org/abs/2207.06074).
% Abstract: We study the estimation of the reach, a ubiquitous regularity
% parameter in manifold estimation and geometric data analysis. Given an
% i.i.d. sample over an unknown $d$-dimensional $\mathcal{C}^k$-smooth
% submanifold of $\mathbb{R}^D$, we provide optimal nonasymptotic bounds for
% the estimation of its reach. We build upon a formulation of the reach in
% terms of maximal curvature on one hand, and geodesic metric distortion on
% the other hand. The derived rates are adaptive, with rates depending on
% whether the reach of $M$ arises from curvature or from a bottleneck
% structure. In the process, we derive optimal geodesic metric estimation
% bounds.
% --------------------------------------------------------------------------

\subsection{Geometric Inference}
Topological data analysis and geometric methods now constitute a standard toolbox in statistics and machine learning~\cite{Wasserman18,Chazal21}.
In this family of methods, data $\mathbb{X}_n := \{X_1,\dots,X_n\}$ are usually seen as point clouds in high dimension, for which complex structural correlations give rise to an underlying structure that is neither full-dimensional, nor even linear.
Dealing with non-linearity is very well understood through the prism of \emph{non-parametric regression}.
However, in the absence of distinguished ``covariate'' and ``response'' variables (i.e.\ coordinates), regression no longer makes sense.
Hence, one needs to adopt a more global and coordinate-free approach: data are naturally viewed as lying on a submanifold $M \subset \mathbb{R}^D$ of dimension $d \ll D$, where $d$ corresponds to its \emph{true} number of degrees of freedom.
This approach opens the way to the estimation of numerous geometric and topological quantities to describe data.
Central to it is the manifold itself~\cite{Genovese12,Genovese12b,Kim15,Fefferman19,Divol21,Aizenbud21,Puchkin22}, where error is most commonly measured in Hausdorff distance.
Among many others, let us also mention the homology~\cite{Balakrishnan12}, persistent homology~\cite{Chazal14}, differential quantities~\cite{Aamari19}, intrinsic metric~\cite{Arias20} and regularity~\cite{Aamari19b}.
\subsection{Reach and Regularity}
Similarly to functional estimation, the theoretical study of nonparametric geometric problems naturally comes with regularity conditions.
By far, the most ubiquitous regularity and scale parameter in this context is the \emph{reach}.
First introduced in H. Federer's seminal paper~\cite{Federer59} on geometric measure theory, the reach $\tau(K) \in \mathbb{R}_+$ of a set $K \subset \mathbb{R}^D$ measures how far $K$ is from being convex~\cite{Attali13}.
It hence provides a typical scale at which it shares most of the properties of a convex set.
These properties include -- among others -- uniqueness of the projection map, contractibility of balls, and explicit formulas for the volume of thickenings (see~\cite{Federer59}).
When $K = M$ is a submanifold, the reach also assesses quantitatively how it deviates from its tangent spaces.
Therefore, the reach also provides an upper bound on curvature (that is, a bound in $\mathcal{C}^2$) and a minimal scale of possible quasi self-intersections~\cite{Aamari19}.
For all these reasons, the reach appears in practice in \emph{all} geometric inference methods as a natural scale parameter, which either drives a bandwidth used in a localization method~\cite{Genovese12,Aamari19b}, sets a minimal regularity scale in a minimax study~\cite{Kim15}, or acts as a \emph{signal} part in a signal-to-noise ratio~\cite{Genovese12b,Fefferman19,Aizenbud21}. See~\cite{Aamari19b} for more examples of its use.
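For the reader's convenience, let us also record the quantitative statement behind the last two remarks (see \cite[Theorem~4.18]{Federer59} and \cite{Aamari19}): for a closed submanifold $M\subset\mathbb{R}^D$,
\[
\tau(M)=\inf_{\substack{p,q\in M\\ p\ne q}}\frac{\|q-p\|^2}{2\,\mathrm{dist}(q-p,T_pM)}\,,
\]
where $T_pM$ denotes the tangent space of $M$ at $p$. A small reach thus comes either from nearby points deviating quickly from tangent spaces (high curvature), or from far-away points of $M$ that are close in $\mathbb{R}^D$ (quasi self-intersections).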
On the estimation side, the reach has already been studied under several angles.
\begin{itemize}
\item
The formulation of $\tau(M)$ in terms of deviation to tangent spaces from~\cite[Theorem~4.18]{Federer59} has been put to use through a plugin in~\cite{Aamari19}.
The authors derived non-matching upper and lower bounds for the estimation of $\tau(M)$ over $\mathcal{C}^3$ submanifolds.
In addition to being suboptimal, the method of~\cite{Aamari19} requires the knowledge of tangent spaces, and is very sensitive to uncertainty on them (see~\cite[Section~6]{Aamari19}).
\item
Extending the minimax study of~\cite{Aamari19},~\cite{Berenfeld22} took advantage of the so-called \emph{convexity defect function} introduced by~\cite{Attali13} to propose another plugin strategy, with rates obtained over more general $\mathcal{C}^k$-smooth manifold classes.
Despite still deriving non-matching upper and lower bounds, \cite{Berenfeld22} managed to exhibit two different estimation rates, depending on whether the reach arises from a high-curvature zone (the so-called local case, with slow rates) or from a narrow bottleneck structure (global case, with faster rates).
In that work, the derived rates are only suboptimal when the reach is attained by curvature.
\item
More recently,~\cite[Theorem~1]{Boissonnat19} gave a new formulation of the reach in terms of geodesic distortion. Informally, they showed that $\tau(K)$ is the largest radius $r \geq 0$ for which the geodesic distance $\d_K$ is dominated by the geodesic distance $\d_{\mathcal{S}(r)}$ of a Euclidean sphere of radius $r$.
Based on this purely metric statement, \cite{Cholaquidis21} proposed to plug a nearest-neighbor graph distance of the data into this formulation. This method provides a consistent estimator under very weak assumptions. Unfortunately, it fails to take advantage of higher-order regularity when, again, the reach is attained by curvature.
\end{itemize}
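Explicitly, the metric characterization of \cite[Theorem~1]{Boissonnat19} used in the last two items asserts that for all $x,y\in K$ with $\|x-y\|<2\,\tau(K)$,
\[
\d_K(x,y)\le 2\,\tau(K)\arcsin\Big(\frac{\|x-y\|}{2\,\tau(K)}\Big)\,,
\]
the right-hand side being the geodesic distance between two points at Euclidean distance $\|x-y\|$ on a sphere of radius $\tau(K)$, and that $\tau(K)$ is the largest radius for which this comparison holds.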
With this analysis of possible estimation flaws in mind, this article proposes a two-step method.
In short, we decouple the estimation of the local and global reaches \cite{Aamari19b}, and estimate them separately via max-curvature estimation and geodesic distance estimation respectively.
\subsection{Metric Learning}
In the data analysis area, \emph{metric learning} refers to the problem of finding a distance $\wh \d$ over the space of observations $\mathbb{X}_n \times \mathbb{X}_n$ that is relevant for a given task at stake~\cite{yang2006distance, suarez2021tutorial}.
For instance, in a supervised framework where one is provided with tuples of allegedly \emph{similar} or \emph{dissimilar} observations, the goal is to find a distance that is small on the similar tuples and large on the dissimilar ones.
There is a wide range of existing methods in the literature, ranging from parametric (LSI~\cite{xing2002distance}, MCML~\cite{globerson2005metric}, LDML~\cite{guillaumin2009you} among others) to nonparametric (DMLMJ~\cite{nguyen2017supervised}, kernel methods~\cite{kwok2003learning,chatpatanasiri2010new}, to cite a few).
In an unsupervised setting, metric learning aims at finding a metric that takes into account the underlying geometry of the data.
That is, it amounts to estimating the shortest-path (or \emph{geodesic}) distance.
Often, this is done via a dimension reduction technique: any low-dimensional embedding of the data gives rise to a new distance over the data in the embedded space.
Existing algorithms include PCA, t-SNE~\cite{hinton2002stochastic}, MDS~\cite{cox2008multidimensional}, Isomap~\cite{tenenbaum2000global}, or MVU~\cite{Arias14}.
See \cite{suarez2021tutorial} for a thorough overview of the field.
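In this unsupervised setting, the target is thus the intrinsic shortest-path distance
\[
\d_M(x,y)=\inf\big\{\mathrm{Length}(\gamma)\,:\,\gamma:[0,1]\to M\ \text{rectifiable},\ \gamma(0)=x,\ \gamma(1)=y\big\}\,,\qquad x,y\in M\,,
\]
which graph-based methods such as Isomap~\cite{tenenbaum2000global} approximate by shortest paths in a neighborhood graph built on the sample $\mathbb{X}_n$.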
Astonishingly, despite the variety of existing methods, we are not aware of any general minimax study of geodesic metric learning.
Still, two major theoretical references stand out:
\begin{itemize}
\item
In \cite{Trillos2019}, the authors use a neighborhood graph to estimate distances and derive convergence rates in the $\mathcal{C}^2$ case, but only for nearby points.
\item
In~\cite{Arias20}, estimation rates of geodesic distances are derived in the $\mathcal{C}^2$ case using a reconstructing mesh.
Lower bounds are also obtained, but in a fixed-design setting only.
\end{itemize}
We propose a simple plugin method, and show that estimating the geodesic metric is no harder than estimating the manifold itself in Hausdorff distance.
This general strategy is also supported by a matching minimax lower bound.
\subsection{Contribution and Outline}
This article deals with the framework where data lies on an unknown $d$-dimensional $\mathcal{C}^k$-submanifold of $\mathbb{R}^D$ (Section~\ref{sec:models}).
The main contribution consists of nearly-tight minimax bounds for reach estimation (Section~\ref{sec:optimal_reach_estimation}).
Along the way, three major building blocks, interesting in their own right, are developed thoroughly:
\begin{itemize}
\item[(Section~\ref{sec:reach})]
We propose a general plug-in strategy for estimating the reach of a manifold. It is based on curvature estimation on the one hand, and on the estimation of an intermediate scale (framed between the reach and the weak feature size) on the other hand.
\item[(Section~\ref{sec:sdr})]
We define the so-called \emph{spherical distortion radius} at scale $\delta > 0$ and study its estimation.
Building on the metric characterization of the reach from~\cite{Boissonnat19}, we notice that this purely metric quantity can serve as an intermediate scale for reach estimation, and we show that its stability properties make it well-suited to play this role in the strategy of Section~\ref{sec:reach}.
\item[(Section~\ref{sec:metric})]
We propose a general plugin strategy for metric learning, and derive optimal geodesic metric estimation upper and lower bounds.
\end{itemize}
The proofs and the most technical points are deferred to the Appendix.
\subsection{General Notation}
In what follows, $\mathbb{R}^D$ ($D\geq 2$) is endowed with the Euclidean norm $\norm{\cdot}$.
The closed ball of radius $r\geq 0$ centered at $x \in \mathbb{R}^D$ is denoted by $\ball(x,r)$. If $T \subset \mathbb{R}^D$ is a linear subspace and $x \in T$, we write $\ball_T(x,r) := T \cap \ball(x,r)$ for the corresponding ball in $T$.
Throughout, $c_\square,c'_\square,C_\square,C'_\square \geq 0$ denote generic constants that depend on $\square$, and that shall change from line to line to shorten notation.
Similarly, universal constants shall generically be denoted by $c,c',C,C' \geq 0$.
\subsection{Characterizations and Relaxations of the Reach}\label{sec:reach_wfs_defi}
Let $K$ be a compact subset of $\mathbb{R}^D$. Following the original definition of~\cite{Federer59}, the \emph{reach} of $K$, denoted by $\tau(K)$, may be thought of as the largest radius of a neighborhood of $K$ onto which the projection map $\pi_K$ onto $K$ is well-defined. More formally, define the \emph{medial axis} of $K$ by
\begin{align*}
\mathrm{Med}(K) := \{ u \in \mathbb{R}^D \mid \exists x_1 \neq x_2 \in K \quad \|u - x_1\| = \|u-x_2\| = \d(u,K) \}.
\end{align*}
The reach of $K$ is then defined as the smallest distance between $K$ and $\mathrm{Med}(K)$.
\begin{definition}\label{defi:reach}
For all closed $K \subset \mathbb{R}^D$, the \emph{reach} of $K$ is defined by
\begin{align*}
\tau(K)
:=
\min_{x \in K} \d(x,\mathrm{Med}(K))
=
\inf_{u \in \mathrm{Med}(K)} \d(u,K)
.
\end{align*}
\end{definition}
Note that in full generality, the medial axis might not be a closed set, so that the infimum in Definition~\ref{defi:reach} may not be attained (for instance in the case where $K$ is one-dimensional with a sharp edge). From a topological viewpoint, a key property of sets with positive reach is that the projection onto $K$ induces continuous retractions from the \emph{offset} $K^r := \{ u \in \mathbb{R}^D \mid \d(u,K) \leq r\}$ onto $K$, whenever $r < \tau(K)$~\cite[Theorem 4.8]{Federer59}.
This property is at the core of topologically consistent reconstruction procedures such as that of~\cite{Boissonnat14}.
Sets with positive reach can also be thought of as generalizations of convex sets, characterized by the smoothness of their distance function. Indeed, based on the remark that $x \mapsto \d(x,K)$ is $\mathcal{C}^1$ on $\mathbb{R}^D \setminus K$ whenever $K$ is convex,~\cite{Clarke95} define $r$-\textit{proximally smooth} sets as the sets $K$ such that $\d(\cdot,K)$ is $\mathcal{C}^1$ over $\{u \in \mathbb{R}^D \mid 0 < \d(u,K) < r\}$. Interestingly, for subsets of $\mathbb{R}^D$, $r$-proximally smooth sets are exactly the sets with reach $\tau(K) \geq r$~\cite{Poliquin00}, so that the reach may alternatively be defined in terms of gradients of the distance function.
To this aim, following~\cite{Chazal05}, a generalized gradient function can be defined over $\mathbb{R}^D \setminus K$.
For all $x \in \mathbb{R}^D \setminus K$, we write
\begin{align}\label{eq:defi_gradient_distance}
\nabla \d(x,K) := \frac{x-c_K(x)}{\d(x,K)},
\end{align}
where $c_K(x)$ is the center of the smallest enclosing ball of the set $\pi_K(\{x\})$ of nearest neighbors of $x$ on $K$.
Since $c_K(x) = \pi_K(x)$ whenever $x \notin \mathrm{Med}(K)$, the medial axis can actually be characterized as
\begin{align*}
\mathrm{Med}(K) = \{ x \in \mathbb{R}^D \setminus K \mid \| \nabla \d(x,K) \| < 1 \},
\end{align*}
and the reach as
$$
\tau(K)
=
\sup \{ r >0 \mid 0 < \d(x,K) < r \Rightarrow \| \nabla \d(x,K) \| =1 \}
.$$
This characterization of the reach allows for a straightforward relaxation. Namely, for a parameter $\mu \in [0,1]$, the seminal paper~\cite{Chazal06} introduces the so-called \emph{$\mu$-medial axis} as being
\begin{align*}
\mathrm{Med}_\mu(K) := \{ x \in \mathbb{R}^D \setminus K \mid \| \nabla \d(x,K) \| \leq \mu \},
\end{align*}
and the $\mu$-reach as
\begin{align}
\label{eq:mu-reach}
\tau_\mu(K) := \inf_{u \in \mathrm{Med}_\mu(K)} \d(u,K).
\end{align}
It is clear that for all $\mu<1$, $\tau(K) \leq \tau_\mu(K)$, with $\tau(K)$ corresponding to the limit $\tau_{1^-}(K)$.
Furthermore, this relaxation of the reach still yields enough regularity guarantees that the offsets $K^r = \{ u \in \mathbb{R}^D \mid \d(u,K) \leq r\}$ are isotopic for all $r \in (0, \tau_\mu(K))$~\cite[Lemma~2.1]{Chazal06}.
Hence, the condition that $\tau_\mu(K) > 0$ conveys enough regularity properties for many topological estimators to work~\cite{Chazal09}.
Through this lens, the largest radius that ensures the topological stability of the offsets is the $0$-reach, also called \textit{weak-feature size},
\begin{align}\label{eq:wfs_defi}
\wfs(K) := \inf_{ u \in \mathrm{Med}_0(K) } \d(u,K),
\end{align}
that is the distance from $K$ to the set of critical points of $\d(\cdot,K)$. As detailed in the following section, the weak-feature size plays a special role in the case where $K$ is a manifold.
We now gather a few elementary properties of the weak feature size that will be used later on.
\begin{proposition}\label{prop:wfs_properties}
Let $K \subset \mathbb{R}^D$ be compact.
\begin{itemize}\setlength{\itemsep}{.05in}
\item[(i)] If $K$ is a closed submanifold of $\mathbb{R}^D$, then $\wfs(K) < +\infty$;
\item[(ii)] If $\wfs(K) < +\infty$, then for all $\mu \in [0,1)$,
\begin{align*}
\tau(K)
\leq
\tau_\mu(K)
\leq
\wfs(K)
\leq
\sqrt{\frac{D}{2(D+1)}}
\diam(K)
.
\end{align*}
\end{itemize}
\end{proposition}
A proof is given in Section \ref{sec:proof_of_prp_wfs_properties}. Proposition~\ref{prop:wfs_properties} thus ensures that $\wfs(M)$ is uniformly bounded over the classes $\manifolds{k}{\tau_{\min}} {\mathbf{L}}$ introduced in Section~\ref{sec:models}.
Since $\wfs(K)$ and $\tau_\mu(K)$ both measure a typical scale for topological stability, estimating them from a sample could be of practical interest for topological inference.
Unfortunately, the following negative result shows that this estimation problem is intractable, even over a well-behaved model of closed $\mathcal{C}^k$-submanifolds such as $\distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$.
\begin{theorem}\label{thm:wfs_inconsistency}
\label{thm:mu-reach-inconsistency}
Assume that $f_{\min} \leq c_{d,k}/\tau_{\min}^d$ and $f_{\max} \geq C_{d,k}/\tau_{\min}^d$, and $L_j \geq C_{d,k}/\tau_{\min}^{j-1}$ for all $j \in \{2,\ldots,k\}$.
Then there exists $\tilde{c}_{d,k}>0$ such that for all $n \geq 1$ and $\mu \in [0,1)$,
\begin{align*}
\inf_{\widehat{r}_\mu} \sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}}
\mathbb{E} _{P^{\otimes n}}\left[
| \widehat{r}_\mu - \tau_\mu(M)|
\right]
\geq
\tilde{c}_{d,k}\tau_{\min}
>
0
,
\end{align*}
where $\widehat{r}_\mu$ ranges among all the possible estimators based on $n$ samples.
\end{theorem}
An intuition behind Theorem \ref{thm:mu-reach-inconsistency} is that for all $\mu < 1$, the $\mu$-medial axis is an unstable structure.
For certain manifolds $M_0 \in \manifolds{k}{\tau_{\min}}{\mathbf{L}}$, one can find arbitrarily small perturbations of $M_0$ whose $\mu$-medial axes remain at a fixed Hausdorff distance from $\mathrm{Med}_\mu(M_0)$.
See the proof of Theorem \ref{thm:mu-reach-inconsistency} in Section~\ref{sec:proof-of-lower-bound-mu-reach} for a precise statement of this intuition.
Despite the fact that $\tau(K) = \tau_{1^-}(K)$, this negative result indicates that we cannot leverage $\mu$-reach estimation to obtain quantitative bounds for reach estimation.
We shall hence turn towards other reach-related quantities. In fact, the particular case where $K= M$ is a manifold offers us several other characterizations of the reach, which suggest other estimation strategies.
\subsection{Reach of Submanifolds}
\label{sec:reach-of-submanifolds}
In what follows, $M$ stands for a $d$-dimensional closed submanifold of $\mathbb{R}^D$.
Note that~\cite[Remarks~4.20 and~4.21]{Federer59} and~\cite{Boissonnat19} assert that a closed submanifold with positive reach is at least of regularity $\mathcal{C}^{1,1}$, so that geodesics and tangent spaces are always defined in the usual differential sense.
For the manifold case, the intuition of $\tau(M)$ as a generalized convexity parameter is further backed by~\cite[Theorem~4.8]{Federer59}.
Indeed, the inequality $\left\langle x - \pi_C(x), \pi_C(x) - c \right\rangle \geq 0$ valid for all $c \in C$ and $x \in \mathbb{R}^D$ whenever $C$ is convex, translates to $\left\langle x - \pi_M(x), \pi_M(x) - y \right\rangle \geq - \|\pi_M(x)-y\|^2 \|x-\pi_M(x)\|/(2\tau(M))$ being valid for all $y \in \mathbb{R}^D$ and $x \in \mathbb{R}^D$ such that $\d(x,M) < \tau(M)$.
This leads to the following characterization of the reach, in the manifold case.
\begin{theorem}[{\cite[Theorem 4.18]{Federer59}}]\label{thm:Fed_reach_tangentspace}
For a submanifold $M \subset \mathbb{R}^D$ without boundary,
\[
\tau(M) = \inf_{p \neq q \in M} \frac{\|p-q\|^2}{2 \d(q-p,T_p M)},
\]
where $T_p M$ denotes the tangent space of $M$ at $p$.
\end{theorem}
This result provides a natural plugin estimator, proposed by \cite{Aamari19b}, which consists in replacing $M$ and $T_p M$ by suitable estimators of them.
A key result from~\cite{Aamari19b} is a description of how the infimum in Theorem~\ref{thm:Fed_reach_tangentspace} is achieved, possibly asymptotically.
\begin{theorem}[{\cite[Theorem 3.4]{Aamari19b}}]
\label{thm:reach_wfs_local}
Let $M \subset \mathbb{R}^D$ be a compact $\mathcal{C}^2$ submanifold without boundary.
Then,
\begin{align*}
\tau(M) = \wfs(M) \wedge R_{\ell}(M),
\end{align*}
where denoting by $\II_p: T_p M \times T_p M \to T_p M^\perp$ the \emph{second fundamental form} of $M$ at $p \in M$,
$$R_{\ell}(M) := \min_{p \in M} \| \II_p\|_{\mathrm{op}}^{-1}
$$ stands for the minimal curvature radius of $M$.
\end{theorem}
In the manifold case, this result conveys the following intuition: the infimum on the right-hand side of Theorem~\ref{thm:Fed_reach_tangentspace} may be attained in two ways.
\begin{itemize}
\item[(Local case)]
Asymptotically, for pairs of points $(p,q)$ converging to a maximal curvature point in some direction, so that $\tau(M) = R_\ell(M)$.
\item[(Global case)]
For a pair of points $(p,q)$ belonging to parallel areas of $M$, forming a bottleneck zone, so that $\tau(M) = \wfs(M)$.
\end{itemize}
This local/global dichotomy of the reach may also be retrieved in the recent characterization given by~\cite{Boissonnat19} in terms of metric distortion.
\begin{theorem}[{\cite[Theorem 1]{Boissonnat19}}]\label{thm:reach_characterization_metric_distortion}
Let $K \subset \mathbb{R}^D$ be a closed subset.
Then
\begin{align*}
\tau(K) = \sup \left\lac {r >0 \mid \forall p,q \in K, \|p-q\| < 2r \Rightarrow \d_K(p,q) \leq 2r \arcsin \left ( \frac{\|p-q\|}{2r} \right )} \right\rac
,
\end{align*}
where $\d_K: K \times K \to \bar{\mathbb{R}}_+$ stands for the \emph{shortest-path} (or \emph{geodesic}) distance on $K$.
\end{theorem}
Recall that, for all $p,q \in K$, the distance $\d_K(p,q)$ is the infimum of the lengths of all continuous paths in $K$ joining $p$ and $q$.
As will be detailed in Section~\ref{sec:sdr}, the above result makes it possible to characterize the reach in terms of metric distortion with respect to metrics on spheres of radii $r$.
In the same spirit as Theorem~\ref{thm:reach_wfs_local}, when $K=M$ is a submanifold, the configurations of $(p,q,r)$ in the supremum of Theorem~\ref{thm:reach_characterization_metric_distortion} are limited by the same two local and global layouts:
\begin{itemize}
\item[(Local case)]
When $p$ and $q$ tend to a maximal curvature point in some direction, the geodesic distance $\d_K$ behaves like that of a sphere of radius $R_{\ell}(M)$ at this point in this direction.
\item[(Global case)]
When $p$ and $q$ are in parallel areas, their geodesic distance must be larger than the spherical distance of radius $\|p-q\|/2$.
\end{itemize}
\subsection{Plug-in Methods for Reach Estimation}
\label{sec:plug-in-for-reach}
The characterizations of the reach given in Section~\ref{sec:reach-of-submanifolds} each lead to an associated plug-in estimator:
\begin{itemize}
\item
Studying a $\mathcal{C}^3$ model similar to $\distributions{3}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$,~\cite{Aamari19b} took advantage of the characterization with tangent spaces (Theorem~\ref{thm:Fed_reach_tangentspace}) to conceive a reach estimator that converges at rate $O(n^{-2/(3d-1)})$ in the local case ($\tau(M) = R_{\ell}(M)$), and $O(n^{-1/d})$ in the global case ($\tau(M) = \wfs(M)$).
\item
Based on the metric distortion characterization of Theorem~\ref{thm:reach_characterization_metric_distortion},~\cite{Cholaquidis21} propose a reach estimator that is consistent whenever $M$ has positive reach.
\end{itemize}
In light of Theorem~\ref{thm:reach_wfs_local}, differences of convergence rates between the local and global case are to be expected.
To quantify this intuition,~\cite{Berenfeld22} introduce subclasses of the model $\distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$, parametrized by the gap between $R_\ell(M)$ and $\wfs(M)$, and obtain the following lower bounds.
\begin{theorem}[{\cite[Theorem 7.1]{Berenfeld22} and~\cite[Proposition 2.9]{Aamari19b}}]
\label{thm:lwr_bounds_clementb}
Let $\alpha \in \mathbb{R}$, $k \geq 2$, and write
\begin{align*}
\distributions{k}{\tau_{\min}}{\mathbf{L}, \alpha}{f_{\min}}{f_{\max}}
:=
\left\lac P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}} \mid R_{\ell}(M) \geq \wfs(M) + \alpha \right\rac,
\end{align*}
where $M$ denotes $\support(P)$. Then, for all $\tau_{\min} > 0$, there exist a small enough $f_{\min}$ and large enough $f_{\max}$ and $\mathbf{L}$ such that
\begin{align*}
\inf_{\widehat{\tau}} \sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}, \alpha}{f_{\min}}{f_{\max}}} \mathbb{E} | \widehat{\tau} - \tau(M)| & \geq c_{\tau_{\min},d,k} \left(\frac{1}{n} \right)^{(k-2)/d}, \quad \mbox{\text{if} $\alpha \leq 0$}, \\
\inf_{\widehat{\tau}} \sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}, \alpha}{f_{\min}}{f_{\max}}} \mathbb{E} | \widehat{\tau} - \tau(M)| & \geq c_{\tau_{\min},d,k, \alpha} \left(\frac{1}{n} \right)^{k/d}, \quad \mbox{\text{if} $\alpha > 0$}.
\end{align*}
\end{theorem}
These bounds indicate that estimating the reach is at least as hard as estimating the curvature in the local case ($\tau(M)=R_{\ell}(M)$), and at least as hard as estimating the manifold in the global case ($\tau(M) = \wfs(M)$).
We will prove in Section~\ref{sec:optimal_reach_estimation} that these rates are in fact minimax optimal up to $\log n$ factors.
This means that reducing reach estimation to curvature \emph{and} manifold estimation is a sound strategy, as it leads to optimal rates.
Following the idea behind Theorem~\ref{thm:reach_wfs_local}, a sensible approach is therefore to estimate $R_{\ell}(M)$ -- or some notion of local reach -- and $\wfs(M)$ -- or some notion of global reach -- separately.
\subsubsection{Local Reach Estimation}
For (max-)curvature estimation, the strategy that we adopt builds on the polynomial patches estimator proposed in~\cite{Aamari19}.
Given a localization bandwidth $h>0$, and a parameter $t > 0$, for all $i \in \{1,\ldots,n\}$,
we let $\hat{\pi}_i : \mathbb{R}^D \to \mathbb{R}^D$ be an orthogonal projector of rank $d$ and $\hat{\mathbb{T}}^{(j)}_{i} : \bigl( \mathbb{R}^D \bigr)^{\otimes j} \to \mathbb{R}^D$ be symmetric tensors solutions of the least squares problem
\begin{align}\label{eq:defi_pol_fit}
\min_{
\substack{
\pi,\ \mathbb{T}^{(2)},\ldots,\mathbb{T}^{(k-1)}
\\
\max_{2\leq j \leq k-1} \|\mathbb{T}^{(j)}\|^{\frac{1}{j-1}} \leq t
}
} P_{n-1}^{(i)} \left [ \left \| x - \pi(x) - \sum_{j=2}^{k-1} \mathbb{T}^{(j)}(\pi(x)^{\otimes j}) \right \|^2 \mathbbm 1_{\ball(0,h)}(x) \right],
\end{align}
where $P_{n-1}^{(i)} := \frac{1}{n-1} \sum_{p \neq i} \delta_{X_p-X_i}$ denotes the empirical measure centered at point $X_i$.
Following~\cite[Section 3]{Aamari19}, if $h$ is taken of order $\Theta\( (\log n / n)^{1/d} \)$, $t$ is chosen such that $t^k h \leq 1$, and $\hat{T}_i := \Im(\hat{\pi}_i)$ denotes the image of $\hat{\pi}_i$ -- which is a $d$-dimensional vector space by construction -- then the local patches
\begin{align}
\label{eq:polynomial_expansion}
\widehat{\Psi}_i \colon \ball_{\hat{T}_i}(0,7h/8) &\longrightarrow \mathbb{R}^D \notag
\\
v &\longmapsto X_i + v + \sum_{j=2}^{k-1} \hat{\mathbb{T}}_{i}^{(j)}(\tens{v}{j})
\end{align}
are local $O(h^k)$ approximations of $M$ whenever $n$ is large enough.
Furthermore, for $v \in \ball_{\hat{T}_i}(0,h/4)$, we can estimate the curvature tensor at $\pi_M(\widehat{\Psi}_i(v))$ via the second derivative of $\widehat{\Psi}_i$ at $v$, expressed in local coordinates around $\widehat{\Psi}_i(v)$ given by a basis of $\mathrm{Im}(\d_v \widehat{\Psi}_i)$.
To summarize, for all $v \in \ball_{\hat{T}_i}(0,h/4)$,~\eqref{eq:polynomial_expansion} provides a $d$-dimensional space $\hat{T}_{i,v} := \mathrm{Im}(\d_v \widehat{\Psi}_i)$, as well as a symmetric bilinear map
\begin{align*}
\hat{\mathbb{T}}_{i,v}^{(2)}: \hat{T}_{i,v} \times \hat{T}_{i,v} \rightarrow \hat{T}_{i,v}^\perp,
\end{align*}
that is provably close to $\II_{\pi_M(\widehat{\Psi}_i(v))}$. The precise definition of $\hat{\mathbb{T}}_{i,v}^{(2)}$ is given in Section \ref{sec:proof_thm_cvrates_curvature_max}. A minimal curvature radius (i.e. maximal curvature) estimator may then be computed as the minimal curvature radius of all the polynomial patches around sample points, that is
\begin{align}\label{eq:defi_curvature_radius_estimate}
\wh R_\ell := \min_{1 \leq i \leq n} \min_{ v \in \ball_{\hat{T}_i}(0,h/4)} \|\hat{\mathbb{T}}_{i,v}^{(2)}\|_{\mathrm{op}}^{-1}.
\end{align}
Provided $M$ is uniformly well approximated by $\bigcup_{i=1}^n \widehat{\Psi}_i(\ball_{\hat{T}_i}(0,h/4))$, the convergence rate of $\wh R_\ell$ towards $R_\ell(M)$ will follow from uniform curvature bounds, similar to the pointwise ones from~\cite[Theorem~4]{Aamari19}.
We are able to prove the following.
\begin{theorem}\label{thm:cv_rates_curvature_max}
Let $k \geq 3$ and $P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$. Write $h = \left ( C_{d,k} \frac{f^2_{\max} \log n}{f^3_{\min}n} \right )^{1/d}$.
Then for $n$ large enough, with probability larger than $1- 2n^{-k/d}$, we have
\begin{align*}
\bigl| \wh R_\ell - R_{\ell}(M) \bigr| \leq C_{d,k,\mathbf{L}, \tau_{\min}} R_{\ell}^2(M) \sqrt{\frac{f_{\max}}{f_{\min}}} h^{k-2}.
\end{align*}
\end{theorem}
We refer to \secref{proof_thm_cvrates_curvature_max} for a proof of this result. In particular, the estimator $\wh R_\ell$ achieves the rate of the lower bound from Theorem~\ref{thm:lwr_bounds_clementb} in the case where $\tau(M) = R_{\ell}(M)$ (i.e. $\alpha \leq 0$), up to $\log n$ factors.
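To make the patch-based strategy concrete, here is a minimal numerical sketch of the idea in the simplest setting $d=1$, $D=2$. It is not the constrained estimator~\eqref{eq:defi_pol_fit}: it fits an unconstrained local quadratic in a PCA-based frame around each sample of a circle of radius $2$, and returns the minimal curvature radius over all patches. The bandwidth and sample size below are illustrative choices, not the theoretically tuned ones.

```python
import numpy as np

def local_curvature_radius(points, h):
    """Simplified 1-D analogue of the patch estimator: around each
    sample, fit a quadratic in a PCA-based local frame and read the
    curvature off the second-order coefficient."""
    radii = []
    for x in points:
        nbrs = points[np.linalg.norm(points - x, axis=1) <= h]
        if len(nbrs) < 5:
            continue
        centered = nbrs - x
        # The first principal direction plays the role of T_i, the
        # second that of its normal complement.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        u, v = centered @ vt[0], centered @ vt[1]
        # Least-squares fit v ~ (kappa/2) u^2: curvature |kappa|.
        kappa = 2.0 * np.polyfit(u, v, 2)[0]
        if abs(kappa) > 1e-12:
            radii.append(1.0 / abs(kappa))
    # Analogue of R_hat_ell: minimum over all local patches.
    return min(radii)

rng = np.random.default_rng(0)
R = 2.0
theta = rng.uniform(0, 2 * np.pi, 2000)
circle = R * np.stack([np.cos(theta), np.sin(theta)], axis=1)
R_hat = local_curvature_radius(circle, h=0.3)
```

On this noise-free circle, every patch sees the same constant curvature $1/R$, so the minimum over patches recovers $R$ up to the higher-order terms neglected by the quadratic fit.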
\subsubsection{Global Reach Estimation}
To complete the construction of an estimator of $\tau(M)$, building an estimator of $\wfs(M)$ could be a possibility.
However, Theorem~\ref{thm:wfs_inconsistency} shows that building an estimator of the weak feature size with a uniform convergence rate over $\distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$ is hopeless.
Nonetheless, it is important to note that a uniform estimation rate of $\wfs(M)$ over $\mathcal{P}^k$ is not necessary to obtain uniform convergence rate for $\tau(M)$.
Indeed, an estimator $\widehat{\wfs}$ of $\wfs(M)$ that exhibits an optimal uniform convergence rate whenever $\wfs(M) \leq R_{\ell}(M)$, and that is provably larger than $R_{\ell}(M)$ otherwise, is enough to build an optimal reach estimator when combined with $\wh R_\ell$.
This is the case, for instance, of the weak feature size estimator of~\cite{Berenfeld22} based on the so-called \emph{convexity defect function}.
Based on this remark, we adopt a more general strategy, seeking an intermediate geometric scale $\theta(M)$ (or \emph{feature size}) such that for all $M \in \manifolds{k}{\tau_{\min}}{\mathbf{L}}$,
$$
\tau(M) \leq \theta(M) \leq \wfs(M)
.
$$
In such a case, Theorem~\ref{thm:reach_wfs_local} extends trivially, with $\wfs(M)$ replaced by $\theta(M)$.
\begin{proposition}
\label{prop:reach_intermediate_local}
Assume that $\theta : \mathcal{C}^{2}_{\tau_{\min}}{} \to \mathbb{R}_+$ is such that $\tau(M) \leq \theta(M) \leq \wfs(M)$ for all $M \in \mathcal{C}^{2}_{\tau_{\min}}{}$.
Then,
\begin{align*}
\tau(M) = \theta(M) \wedge R_{\ell}(M).
\end{align*}
\end{proposition}
Given such an intermediate scale parameter of interest $\theta(M)$, and assuming that a consistent estimator $\wh \theta $ of $\theta(M)$ is available, one can naturally consider the plugin $\widehat{\tau} := \wh{R}_\ell \wedge \wh{\theta}$. For free, Proposition~\ref{prop:reach_intermediate_local} yields that $\theta(M)\mathbbm 1_{R_{\ell}(M) > \tau(M)} = \tau(M)\mathbbm 1_{R_{\ell}(M) > \tau(M)}$, so that
\begin{align}
\label{eq:double-plugin-precision}
|\tau(M) - \widehat{\tau}| \leq | \wh{R}_\ell - R_{\ell}(M)| \mathbbm 1_{R_{\ell}(M) \leq \tau(M)} + | \wh{\theta} - \theta(M)| \mathbbm 1_{R_{\ell}(M) > \tau(M)}
,
\end{align}
as soon as $|R_{\ell}(M) - \wh{R}_\ell| + |\theta(M) - \wh{\theta}| \leq |R_{\ell}(M) - \theta(M)|$.
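As a quick sanity check of~\eqref{eq:double-plugin-precision}, the following sketch draws hypothetical values of $R_\ell(M)$, $\theta(M)$ and their estimates (all numbers are illustrative, not data-driven), and verifies that whenever the combined estimation error is below the gap $|R_\ell(M)-\theta(M)|$, the plug-in error is indeed controlled by the error of the active branch.

```python
import random

def plugin_error_bound_holds(R_ell, theta, R_hat, th_hat):
    """Check the oracle inequality behind the plug-in tau_hat =
    min(R_hat, th_hat), under the gap condition of the text."""
    tau, tau_hat = min(R_ell, theta), min(R_hat, th_hat)
    if abs(R_ell - R_hat) + abs(theta - th_hat) > abs(R_ell - theta):
        return True  # hypothesis of the bound not met: nothing to check
    # Active branch: curvature term if R_ell <= tau, scale term otherwise.
    active = abs(R_hat - R_ell) if R_ell <= tau else abs(th_hat - theta)
    return abs(tau - tau_hat) <= active + 1e-12

random.seed(1)
trials = []
for _ in range(10_000):
    R_ell = random.uniform(0.5, 3.0)
    theta = random.uniform(0.5, 3.0)
    R_hat = R_ell + random.uniform(-0.2, 0.2)
    th_hat = theta + random.uniform(-0.2, 0.2)
    trials.append(plugin_error_bound_holds(R_ell, theta, R_hat, th_hat))
ok = all(trials)
```

The gap condition guarantees that $\widehat{\tau}$ selects the same branch of the minimum as $\tau(M)$, which is exactly why the two error terms in~\eqref{eq:double-plugin-precision} never mix.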
In addition, such a quantity would provide a local scale that is of interest for further topological inference, as exposed in Section~\ref{sec:reach_wfs_defi}.
According to Theorem~\ref{thm:wfs_inconsistency}, taking $\theta(M)$ to be related to the medial axis characterization of the reach -- such as the $\mu$-reach, or the $\lambda$-reach defined in~\cite{Chazal05} -- is likely to lead to an unsolvable statistical problem, because of the inherent instability of the medial axis.
Hence, we rather build upon the metric distortion characterization of the reach given by Theorem~\ref{thm:reach_characterization_metric_distortion}, and provide a better-behaved intermediate scale $\theta(M)$: the \emph{spherical distortion radius}.
\subsection{Motivation and Definition}
Based on Theorem~\ref{thm:reach_characterization_metric_distortion},
we now build a geometrically stable feature size that measures the maximum radius (or scale) at which the geodesic distance can be compared to the corresponding spherical distance.
To be more precise, for $x,y \in \mathbb{R}^D$ and $r > 0$, we define the \emph{spherical distance} $\d_{\mathcal{S}(r)}(x,y)$ -- or \emph{great-circle distance} -- as the distance between $x$ and $y$ when seen as both lying on a sphere of radius $r$. That is,
\[
\d_{\mathcal{S}(r)}(x,y)
:=
\begin{cases}
2 r \arcsin\left(\frac{\|x-y\|}{2r}\right)
&
\text{if } \norm{x-y} \leq 2r,
\\
+\infty
&
\text{otherwise}
\end{cases}
\]
Note that the map $r \mapsto \d_{\mathcal{S}(r)}(x,y)$ is decreasing on $[\|x-y\|/2,\infty)$ and that
$$
\d_{\mathcal{S}(r)}(x,y) = \frac12 \pi \|x-y\| ~~\text{for}~~r = \frac{\|x-y\|}{2}~~~\text{and}~~~\d_{\mathcal{S}(r)}(x,y) \xrightarrow[r \to \infty]{} \|x-y\|.
$$
Then, Theorem~\ref{thm:reach_characterization_metric_distortion} can be rewritten as
\begin{align*}
\tau(K) = \sup \left \lac r >0 \mid \forall x,y \in K, \|x-y\| < 2r \Rightarrow \d_K(x,y) \leq \d_{\mathcal{S}(r)}(x,y) \right\rac.
\end{align*}
It should be noted that $\d_{\mathcal{S}(r)}$ is not formally a distance on $K$ (unless $K$ is a subset of a sphere of radius $r$), but this is of little importance in what follows.
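For concreteness, the definition of $\d_{\mathcal{S}(r)}$ and its limiting values can be checked numerically; the following self-contained sketch simply transcribes the formula above.

```python
import math

def spherical_distance(x, y, r):
    """Great-circle distance between x and y viewed as points on a
    sphere of radius r; +inf when the chord ||x-y|| exceeds 2r."""
    chord = math.dist(x, y)
    if chord > 2 * r:
        return math.inf
    return 2 * r * math.asin(chord / (2 * r))

x, y = (0.0, 0.0), (1.0, 0.0)          # chord length ||x-y|| = 1
d_min = spherical_distance(x, y, 0.5)  # r = ||x-y||/2: half great circle
d_big = spherical_distance(x, y, 1e6)  # r -> infinity: Euclidean limit
```

One recovers $\d_{\mathcal{S}(\|x-y\|/2)}(x,y) = \pi\|x-y\|/2$, the monotone decrease in $r$, and the Euclidean distance in the large-$r$ limit.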
Based on the same idea that motivates the introduction of the $\mu$-reach, we intend to discard curvature effects to obtain some notion of global reach.
In the metric characterization of the reach from Theorem~\ref{thm:reach_characterization_metric_distortion}, this can be done by restricting the supremum to points that are not too close.
\begin{definition}\label{defi:SDR}
Let $K$ be a compact subset of $\mathbb{R}^D$, $\d$ a distance on $K$ and $\delta >0$. The \emph{spherical distortion radius} of the metric space $(K,\d)$ at scale $\delta$ is defined by
$$
\sdr_\delta(K,\d) := \sup\left\{r > 0~\middle|~\forall x,y \in K,~ \delta \leq \|x-y\| < 2r \ \Rightarrow \ \d(x,y) \leq \d_{\mathcal{S}(r)}(x,y) \right\}.
$$
\end{definition}
In words, the spherical distortion radius at scale $\delta > 0$ is the largest radius $r$ for which the distance $\d$ is bounded above by the spherical distance at radius $r$, when restricted to points that are at least $\delta$-apart for the Euclidean distance.
\begin{figure}[!htbp]
\centering
\includegraphics[width = 0.5\linewidth]{sdr-ex-curve}
\caption{
A curve $K$ in the plane. In blue is the shortest path between two points $x$ and $y$, whose length is $\d_K(x,y)$.
In green (resp. grey) is the circle portion of radius $r_0$ (resp. $r_1$) going through $x$ and $y$.
The layout is chosen so that $r_0 \leq r_1$ and $\d_{\mathcal{S}(r_1)}(x,y) \leq \d_{K}(x,y) \leq \d_{\mathcal{S}(r_0)}(x,y)$.
}
\label{fig:sdr}
\end{figure}
By construction, $\sdr_\delta(K,\d) \geq \delta/2$ for all $\delta > 0$.
Furthermore, whenever $\delta$ is strictly greater than $\diam K$, no pair of points $x,y \in K$ satisfies $\|x-y\| \geq \delta$, so that $\sdr_\delta(K,\d) = +\infty$.
On the other hand, if $\delta = 0$, then the spherical distortion radius of $(K,\d_K)$ coincides with the reach of $K$ (Theorem~\ref{thm:reach_characterization_metric_distortion}). In fact, Proposition~\ref{prp:interpolate} below confirms that the spherical distortion radius interpolates between the reach and the weak feature size.
\begin{proposition} \label{prp:interpolate}
For all closed $K \subset \mathbb{R}^D$ and every metric $\d$ on $K$, the map $\delta \mapsto \sdr_\delta(K,\d)$ is non-decreasing.
Furthermore, for $\d = \d_K$,
$$\tau(K) \leq \sdr_\delta(K,\d_K) \leq \wfs(K) ~~~\text{for all} ~~~0 \leq \delta \leq \sqrt{\frac{2(D+1)}{D}}\wfs(K).$$
\end{proposition}
A proof of Proposition~\ref{prp:interpolate} is given in Appendix \ref{sec:proof-compare-reach-sdr}.
\begin{example}\label{ex:sdr_polygon} As a toy example, let us study the spherical distortion radius of the wedge shape $K_\alpha = \mathcal{L}_1 \cup \mathcal{L}_2$, where $\mathcal{L}_1$ and $\mathcal{L}_2$ are two half-lines emanating from a common point $z \in \mathbb{R}^D$ (see \figref{examplesdr}). We let $\alpha \in (0,\pi)$ be the angle between these two lines. In this context, we have $\tau(K_\alpha) = 0$, and it is easy to see that $\wfs(K_\alpha) = \infty$. Furthermore, the usual interpolations between the reach and the weak feature size exhibit a very degenerate behavior in the presence of an angular configuration such as this one, with for instance
$$
\tau_\mu(K_\alpha) = \begin{cases} 0~~&\text{if}~~ \mu \geq \sin(\alpha/2),\\
\infty ~~&\text{if}~~ \mu < \sin(\alpha/2).
\end{cases}
$$
On the contrary, we show hereafter that the spherical distortion radius interpolates non-trivially between $\tau(K_\alpha)$ and $\wfs(K_\alpha)$ in this case, giving rise to a new family of relevant characteristic scales even for non-smooth subsets $K_\alpha$.
To see this, take $x \in \mathcal{L}_1$ and $y \in \mathcal{L}_2$, and denote by $a := \|x-z\|$ and $b := \|y-z\|$. The intrinsic distance $\d_{K_\alpha}(x,y)$ is given by $a+b$ while $\|x-y\|^2 = a^2 + b^2 - 2 ab \cos(\alpha)$.
Now the solution of the minimization problem
$$
\min\{a^2+b^2-2ab \cos(\alpha)~|~ a + b = \d_{K_{\alpha}}(x,y)\}
$$
is given by $a = b = \d_{K_{\alpha}}(x,y)/2$ and equals $\d^2_{K_{\alpha}}(x,y) \sin^2(\alpha/2)$. The spherical distortion radius of $K_\alpha$ at scale $\delta$ is thus the largest $r$ such that
\beq \label{exsdr}
\frac{\delta}{\sin(\alpha/2)} \leq 2r \arcsin\(\frac{\delta}{2r}\).
\end{equation}
Since the right-hand side above ranges between $\delta$ and $\delta\pi/2$, we distinguish two cases:
\begin{itemize}
\item
If $\sin(\alpha/2) < 2/\pi$, then no $r$ can fulfill \eqref{exsdr}. Hence, $\sdr_\delta(K_\alpha,\d_{K_\alpha}) = \delta/2$.
\item
Otherwise $\sin(\alpha/2) \geq 2/\pi$, in which case the largest $r$ is given by the equality $\varphi(2r/ \delta) = 1/\sin(\alpha/2)$, where $\varphi(u) := u \arcsin(1/u)$ is a bijection between $[1,\infty)$ and $(1,\pi/2]$.
\end{itemize}
All in all, we obtain
$$
\sdr_{\delta}(K_\alpha,\d_{K_\alpha}) = \begin{cases}
\delta/2~~~~&\text{if}~~~\alpha < \alpha_* \\
(\delta/2) \varphi^{-1}(1/\sin(\alpha/2)) ~~~~&\text{if}~~~\alpha \geq \alpha_*
\end{cases}
$$
where $\alpha_* = 2 \arcsin(2/\pi) < \pi/2$.
Note that compared to $\tau_\mu(K_\alpha)$, there is no discontinuity in $\sdr_\delta(K_\alpha,\d_{K_\alpha})$ as $\alpha$ varies.
\end{example}
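The closed-form expression above can be checked numerically. The sketch below transcribes it, inverting $\varphi$ by bisection (the bisection bounds and tolerances are implementation choices), and verifies the threshold $\alpha_* = 2\arcsin(2/\pi)$, the continuity near $\alpha_*$, and the monotonicity in $\alpha$.

```python
import math

def phi(u):
    # phi(u) = u * arcsin(1/u): decreasing bijection [1, inf) -> (1, pi/2]
    return u * math.asin(1.0 / u)

def sdr_wedge(alpha, delta):
    """Spherical distortion radius of the wedge K_alpha at scale delta,
    transcribing the closed form of the example."""
    s = math.sin(alpha / 2)
    if s < 2 / math.pi:
        return delta / 2
    # Invert phi by bisection: find u with phi(u) = 1/sin(alpha/2).
    target, lo, hi = 1.0 / s, 1.0, 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if phi(mid) > target:
            lo = mid
        else:
            hi = mid
    return (delta / 2) * lo

alpha_star = 2 * math.asin(2 / math.pi)
```

One checks that below $\alpha_*$ the trivial lower bound $\delta/2$ is returned, that the map is continuous at $\alpha_*$, and that it increases with $\alpha$, blowing up as $\alpha \to \pi$ (the wedge flattens to a line, whose weak feature size is infinite).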
\begin{figure}[!htbp]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height = 0.5\linewidth]{examplesdr}
\caption{}
\label{fig:examplesdr}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height = 0.5\linewidth]{examplesdr2}
\caption{}
\label{fig:examplesdr2}
\end{subfigure}
\caption{(\subref{fig:examplesdr}) Diagram of $K_\alpha = \mathcal{L}_1 \cup \mathcal{L}_2$ with an angle $\alpha$ between the two half-lines. The shortest path between $x$ and $y$ is drawn in blue. In dashed the $\mu$-medial axis for $\mu > \sin(\alpha/2)$, showing in particular that $\tau_\mu(K_\alpha) = 0$ in this case.
(\subref{fig:examplesdr2}) Plot of the function $\alpha \mapsto \sdr_\delta(K_\alpha,\d_{K_\alpha})$, which operates a smooth interpolation between $\delta/2$ and $\infty$.}
\label{fig:examplesdr}
\end{figure}
Example~\ref{ex:sdr_polygon} above carries the intuition that the spherical distortion radius is somewhat stable with respect to Hausdorff perturbations, contrary to the $\mu$-reach. We quantify this intuition in the following section.
\subsection{Stability Properties}
In this section, we will be comparing different metric spaces on subsets of $\mathbb{R}^D$.
Let $K$ and $K'$ be two subsets of $\mathbb{R}^D$, endowed with distances $\d$ and $\d'$ respectively.
We intend to prove that $\sdr_\delta(K,\d)$ and $\sdr_\delta(K',\d')$ are close whenever $(K,\d)$ and $(K',\d')$ are close, and that $(K,\d)$ has good properties.
The notion of proximity between $K$ and $K'$ will be measured in Hausdorff distance (see~\eqref{eq:hausdorff}).
It remains to define a notion of proximity between $\d$ and $\d'$, which we call the \emph{mutual distortion}.
\begin{definition}\label{def:metric_distortion}
Let $(K,\d)$ and $(K',\d')$ be two metric subspaces of $\mathbb{R}^D$. The \emph{metric distortion} of $\d'$ relative to $\d$ at scale $\delta > 0$ is
$$
\D_\delta(\d' | \d) := \sup_{\substack{x',y' \in K' \\ \|x'-y'\| \geq \delta}} \frac{\d'(x',y')}{\d(\pr_K(\{x'\}), \pr_K(\{y'\}))},
$$
where $\pr_K$ is the (possibly multivalued) closest-point projection onto $K$ for the ambient Euclidean distance, and where
$$
\d(\pr_K(\{x'\}), \pr_K(\{y'\}))
:=
\inf
\{\d(x,y)~|~x\in \pr_K(\{x'\}),~y \in \pr_K(\{y'\}) \}
.
$$
We adopt the convention $\D_\delta(\d'|\d) = 0$ if $\delta > \diam(K')$.
The \emph{mutual distortion} of $\d$ and $\d'$ is then defined as
\begin{align*}
\D_\delta(\d,\d') := \max\{\D_\delta(\d'|\d), \D_\delta(\d|\d')\}.
\end{align*}
\end{definition}
The mutual distortion defined above allows to compare distances on different spaces, while taking into account their respective embeddings in $\mathbb{R}^D$.
A small distortion $\D_\delta(\d,\d')$ means that, if $a,b \in K$ and $x,y \in K'$ are two pairs of points that are $\delta$-separated, and such that $x$ and $a$, and $y$ and $b$, are respectively close to each other, then $\d(a,b)$ and $\d'(x,y)$ should be close as well. This definition of \emph{mutual distortion} between metric subspaces of $\mathbb{R}^D$ is related to the existing notion of \emph{metric distortion} of an embedding.
See for instance~\cite{bourgain1985lipschitz} or more recently~\cite{chennuru2018measures} which deals with distortion measures in a statistical framework.
It is nonetheless significantly different, in particular because the usual notion of distortion is invariant through re-scaling of either $\d$ or $\d'$. In our framework, invariance with respect to scaling is an undesirable property, since we want to estimate the reach, which is itself a scale factor (or feature size).
\begin{rem} \label{rem:bilip} When $K = K'$, the mutual distortion can be seen as the bi-Lipschitz coefficient of $\Id : (K,\d) \to (K,\d')$ at \emph{scale} $\delta$, meaning that for all $x,y \in K$
$$
\|x-y\| \geq \delta \ \Rightarrow \ \frac1L \d'(x,y) \leq \d(x,y) \leq L \d'(x,y),
$$
where $L = \D_\delta(\d,\d')$. In particular, a mutual distortion that is close to $1$ means that $(K,\d)$ is quasi-isometric to $(K,\d')$, at scale $\delta$.
\end{rem}
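For concreteness, the mutual distortion of Definition~\ref{def:metric_distortion} is directly computable on finite point clouds, where the closest-point projection reduces to a nearest-neighbor search and the infimum over projections is attained. The following Python sketch (an illustration of ours, not part of any pipeline described here; the function names are ours) implements the two one-sided distortions and their maximum:

```python
import numpy as np

def one_sided_distortion(Kp, dp, K, d, delta):
    """D_delta(d' | d) for finite point clouds (illustrative implementation).

    Kp, K : (n', D) and (n, D) arrays holding K' and K.
    dp, d : distance matrices of d' on K' and of d on K.
    On finite clouds the closest-point projection onto K is a nearest
    sample point, so the infimum over projections is attained.
    """
    # Euclidean nearest-neighbor projection of each point of K' onto K
    proj = np.argmin(np.linalg.norm(Kp[:, None, :] - K[None, :, :], axis=-1), axis=1)
    ratios = []
    for i in range(len(Kp)):
        for j in range(len(Kp)):
            if np.linalg.norm(Kp[i] - Kp[j]) >= delta:
                ratios.append(dp[i, j] / d[proj[i], proj[j]])
    return max(ratios) if ratios else 0.0  # convention when delta > diam(K')

def mutual_distortion(K, d, Kp, dp, delta):
    """D_delta(d, d'): maximum of the two one-sided distortions."""
    return max(one_sided_distortion(Kp, dp, K, d, delta),
               one_sided_distortion(K, d, Kp, dp, delta))
```

With $K = K'$ and $\d = \d'$, the routine returns $1$ for any $\delta \leq \diam(K)$, consistently with Remark~\ref{rem:bilip}.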
If the two subspaces $K$ and $K'$ are too far apart, then it makes no sense to compare two distances $\d$ and $\d'$ defined on them, and one can expect the mutual distortion to explode. This will typically be the case when $\dh(K,K') \geq \delta$.
It is clear from the definition that, using the relative metric distortion defined above, the spherical distortion radius of $K$ may be expressed as
$$
\sdr_\delta(K,\d) = \sup\left\{r > 0~\middle|~\D_\delta(\d | \d_{\mathcal{S}(r)}) \leq 1\right\}.
$$
This point supports the idea that the relative metric distortion we defined is a suitable notion of proximity to assess the stability of the spherical distortion radius, as demonstrated by the following proposition.
\begin{proposition} \label{prp:stab1}
Let $\delta_0 > 0$ and $\varepsilon,\nu > 0$. Assume that both $\dh(K',K) \leq \varepsilon$ and $\D_{\delta_0}(\d'|\d) \leq 1+\nu$.
Define
$$
\xi(r) := 384 (1+\pi)\frac{r^4}{\delta_0^4}
\text{~for all~} r \geq 0.
$$
Then, for all $\delta \geq \delta_0$, letting $\Upsilon := (\delta \nu)\vee \varepsilon$ and $\mathrm{r}_{1} := \sdr_{\delta+2\varepsilon}(K',\d')$, if $ \xi(\mathrm{r}_{1}) \Upsilon < \mathrm{r}_{1}$, then
$$
\sdr_{\delta}(K,\d) \leq \sdr_{\delta+2\varepsilon}(K',\d')+\xi(\mathrm{r}_{1}) \Upsilon.
$$
\end{proposition}
A proof of \prpref{stab1} is given in Appendix \ref{sec:proof-sdr-properties}.
Note that the condition $\dh(K',K) \leq \varepsilon$ may be relaxed to $\dh(K'|K) \leq \varepsilon$, where $\dh(K'|K) := \sup_{x \in K'} \d(x,K)$. Also, under the assumptions of \prpref{stab1}, let us remark that if $\sdr_{\delta+2\varepsilon}(K',\d')$ is finite, then so is $\sdr_{\delta}(K,\d)$, with $\sdr_{\delta}(K,\d) \leq 2 \sdr_{\delta+2\varepsilon}(K',\d')$.
Proposition~\ref{prp:stab1} can be symmetrized to get the following two-sided control.
\begin{corollary} \label{cor:stab} Let $0 < \delta_0 < \delta_1$ and $\varepsilon,\nu > 0$. Assume that both $\dh(K',K) \leq \varepsilon$ and $\D_{\delta_0}(\d',\d) \leq 1+\nu$. Then, for any $\delta \in (\delta_0+2\varepsilon,\delta_1 - 2\varepsilon)$, it holds
$$
\sdr_{\delta-2\varepsilon}(K,\d)-\xi_0 \Upsilon \leq \sdr_{\delta}(K',\d') \leq \sdr_{\delta+2\varepsilon}(K,\d)+\xi_0 \Upsilon
$$
with $\xi_0 := \xi(2\sdr_{\delta_1}(K,\d))$ and $\Upsilon := (\nu \delta) \vee \varepsilon$, provided that $\xi_0 \Upsilon \leq 2\sdr_{\delta_1}(K,\d)$.
\end{corollary}
Corollary~\ref{cor:stab} is proven in Appendix \ref{sec:proof-sdr-properties}.
It ensures that the spherical distortion radius enjoys an interleaving property.
That is, the SDR of $(K,\d)$ at scale $\delta$ may be framed by the SDR of an approximation $(K',\d')$ at scales $\delta\pm 2\varepsilon$. This interleaving property is a common thread with the $\mu$-reach (see, e.g.,~\cite[Theorem~3.4]{Chazal06}) and the $\lambda$-reach (\cite[Theorem~3]{Chazal05}), but it is not enough to ensure consistent estimation. In fact, for the two aforementioned quantities, consistency may be proved under the additional assumption that $\mu \mapsto \tau_\mu(K)$ (resp. $\lambda \mapsto \lambda$-reach) is continuous at the targeted $\mu$ (resp. $\lambda$).
As opposed to the $\mu$-reach and the $\lambda$-reach, the SDR is also stable with respect to its scale parameter $\delta$.
Next, we prove that $\delta \mapsto \sdr_\delta(K,\d)$ is continuous over a fixed range $(0,\Delta^*)$ under mild structural assumptions on $(K,\d)$. These assumptions will be easily checked in the model $\manifolds{k}{\tau_{\min}}{\mathbf{L}}$, hence ensuring consistency of the subsequent reach estimator.
\begin{ass}\label{ass:spread}
We say that $K \subset \mathbb{R}^D$ is \emph{spreadable} if there exist $\Delta_0 > 0$, $\varepsilon_0 > 0$, and $C_0 > 0$ such that for all $x,y \in K$ such that $\|x-y\| \leq \Delta_0$ and all $\varepsilon \leq \varepsilon_0$, there exists a point $a \in K$ such that either
\begin{itemize}\setlength{\itemsep}{.05in}
\item $\|a-y\| \leq \varepsilon$ and $\|x-a\| \geq \|x-y\| + C_0 \varepsilon$, or
\item $\|a-x\| \leq \varepsilon$ and $\|y-a\| \geq \|x-y\| + C_0 \varepsilon$.
\end{itemize}
\end{ass}
Assumption~\ref{ass:spread} requires that every point $y$ of $K$ may be locally pushed away from any (close enough) point $x \in K$. In particular, this means that $K$ is nowhere discrete.
In the manifold case, this pushing may be carried out using the exponential map (see Proposition~\ref{prp:assreach}).
\begin{ass}\label{ass:subeuc}
We say that $(K,\d)$ is \emph{sub-Euclidean} if there exist $C_1 > 0$ and $\Delta_1 > 0$ such that for all $x,y \in K$ such that $\|x-y\| \leq \Delta_1$, we have $\d(x,y) \leq C_1 \|x-y\|$.
\end{ass}
Assumption~\ref{ass:subeuc} requires that the distance locally compares with the ambient Euclidean distance.
This essentially means that the identity map $(K,\d) \to (K,\norm{\cdot})$ is locally Lipschitz.
Such an assumption is automatically fulfilled whenever $K$ has positive reach and $\d = \d_K$ (see~\cite{Federer59}), with explicit constants in the manifold case (see Proposition~\ref{prp:assreach}).
Whenever these two conditions are met, the spherical distortion radius of $(K,\d)$ can be proved to be locally Lipschitz in $\delta$.
\begin{theorem} \label{thm:lip} Assume that the metric space $(K,\d)$ fulfills Assumptions~\ref{ass:spread} and~\ref{ass:subeuc}. Then $\delta \mapsto \sdr_\delta(K,\d)$ is locally Lipschitz on $(0,\Delta^*)$ where
$$
\Delta^* := \min\{ \Delta_0, \Delta_1, \sup\{\delta \geq 0~|~\sdr_{\delta}(K,\d) < \infty\} \}.
$$
More precisely, for all $0 < \delta_0 < \delta_1 < \Delta^*$, the map $\delta \mapsto \sdr_\delta(K,\d)$ is $L_0$-Lipschitz on $[\delta_0,\delta_1]$ with
\[
L_0 := \frac{192 \mathrm{r}_1^3}{C_0 \delta_0^3}\(C_1 + \pi\frac{\mathrm{r}_1}{\delta_0}\),
\]
where $\mathrm{r}_1 := \sdr_{\delta_1}(K,\d)$.
\end{theorem}
A proof of \thmref{lip} can be found in Appendix \ref{sec:proof-sdr-properties}.
Not only does it ensure that the spherical distortion radius at scale $\delta$ is continuous with respect to $\delta$, which is enough to guarantee consistency, but it also allows to control its variation via an explicit local Lipschitz constant.
Combined with \corref{stab}, this allows to convert a bound between $(K,\d)$ and $(K',\d')$ in terms of Hausdorff distance and metric distortion into a bound on the SDRs at scale $\delta$.
\begin{theorem} \label{thm:stab} Let $(K,\d)$ fulfill Assumptions~\ref{ass:spread} and~\ref{ass:subeuc}, and let $(K',\d')$ be such that $\dh(K,K') \leq \varepsilon$ and $\D_{\delta_0}(\d,\d') \leq 1+\nu$ for some $\delta_0 < \Delta^*$.
Then, for all $\delta_1 \in (\delta_0,\Delta^*)$ and $\delta \in (\delta_0+2\varepsilon,\delta_1-2\varepsilon)$, provided that $\xi_0 \Upsilon \leq 2\sdr_{\delta_1}(K,\d)$, we have
$$
\left|\sdr_{\delta}(K,\d) - \sdr_{\delta}(K',\d')\right| \leq \zeta_0 \Upsilon,
$$
with $\Upsilon = (\delta \nu) \vee \varepsilon$ and $\zeta_0 = \xi_0 + 2L_0$, where $\xi_0$ is defined in \corref{stab} and $L_0$ is defined in \thmref{lip}.
\end{theorem}
We refer to Appendix \ref{sec:proof-sdr-properties} for a proof of this result and to Figure \ref{fig:thm49} for a diagram of the scales at play. Note that the constant $\zeta_0$ only depends on $\delta_0$ and features of $(K,\d)$, that the assumptions are required on $(K,\d)$ only, and that the constraint on $\varepsilon$ depends only on $(K,\d)$ as well.
\begin{figure}[h!]
\centering
\includegraphics[width = 0.45\textwidth]{thm49}
\caption{Plot of $\delta \mapsto \sdr_\delta$ for $(K,\d)$ and $(K',\d')$ in the context of Theorem \ref{thm:stab}. On the interval $(\delta_0+\varepsilon,\delta_1-\varepsilon)$, the two functions do not differ by more than $\zeta_0 \Upsilon$.
Even though $(K',\d')$ might not be well-behaved, the regularity of $\delta \mapsto \sdr_\delta(K,\d)$ (Theorem \ref{thm:lip}) is sufficient to ensure stability.}
\label{fig:thm49}
\end{figure}
The estimation of $K$ is by now well understood in the manifold case (see~\cite{Aamari19b}). To obtain guarantees on the estimation of $\sdr_\delta(K,\d_K)$, it hence remains to investigate the estimation of $\d_K$. This is the aim of the following section.
\subsection{Unsupervised Distance Metric Learning}
As explained in the introduction, various learning tasks lead to the problem of estimating the shortest-path distance $\d_K$ via an estimator $\wh \d$ based on a sample of $K \subset \mathbb{R}^D$.
However, there is no canonical choice of loss for measuring the proximity of $\wh \d$ to $\d_K$.
One could consider for instance the empirical $\sup$-loss
$$\ell_n(\wh \d | \d_K) := \sup_{x \neq y \in \mathbb{X}_n} \left|1- \frac{\wh \d(x,y) }{\d_K(x,y)}\right|,
$$
or the global $\sup$-loss
$$
\ell_\infty(\wh \d | \d_K) := \sup_{x \neq y \in K} \left|1- \frac{\wh \d(x,y) }{\d_K(x,y)}\right|.
$$
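On a finite sample with known distance matrices, the empirical sup-loss $\ell_n$ amounts to a maximum over off-diagonal entries. A minimal Python sketch (illustrative only; the function name is ours):

```python
import numpy as np

def sup_loss(d_hat, d_true):
    """Empirical multiplicative sup-loss ell_n(d_hat | d_true) on a sample.

    d_hat, d_true : (n, n) distance matrices on the same sample X_n;
    the supremum runs over pairs x != y, i.e. off-diagonal entries.
    """
    off = ~np.eye(d_true.shape[0], dtype=bool)
    return np.max(np.abs(1.0 - d_hat[off] / d_true[off]))
```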
It might seem counter-intuitive to ask an estimator $\wh \d$ of $\d_K : K \times K \to \mathbb{R}_+$ to be defined on the whole set $K\times K$, while this domain is unknown.
It is actually easy to extend any metric estimator to the whole space $\mathbb{R}^{D} \times \mathbb{R}^D$.
Indeed, given a metric estimation procedure $\wh \d_n : \mathbb{X}_n \times \mathbb{X}_n \to \mathbb{R}_+$ that outputs a distance $\wh \d_n[\mathbb{X}_n](x,y)$ between any pair of points of $\mathbb{X}_n$, we can define $\wt \d_n(x,y) := \wh \d_{n+2}[\mathbb{X}_n,x,y](x,y)$ for all $(x,y) \in \mathbb{R}^D \times \mathbb{R}^D$.
Informally, this means that one can treat any given pair of points $(x,y)$ as actual data points in the estimation process, and that we are only interested in the behavior of the latter when $x$ and $y$ are in fact points of $K$.
The losses $\ell_n$ and $\ell_\infty$ are naturally multiplicative, in particular because the usual notions of distortion are multiplicative by nature (see \secref{sdr}).
Indeed, the sup-loss $\ell_\infty(\wh \d | \d_K)$ being smaller than $\nu$ means that
$$
\forall x,y \in K,~~~(1-\nu)\d_K(x,y) \leq \wh \d(x,y) \leq (1+\nu) \d_K(x,y),
$$
which is the usual way to quantify whether the intrinsic metric is well estimated. See for instance~\cite{tenenbaum2000global, Arias20}.
When $\nu$ is small, this means that $(K,\wh\d)$ is quasi-isometric to $(K,\d_K)$.
\begin{rem} We emphasize the fact that the global sup-loss $\ell_\infty$ and the mutual metric distortion $\D_\delta$ from Definition~\ref{def:metric_distortion} are different in essence.
Indeed, while the mutual metric distortion $\D_\delta$ allows to compare different metrics on \emph{different} subsets of $\mathbb{R}^D$, the sup-loss $\ell_\infty$ compares two distances defined on the \emph{same} subset.
However, the global sup-loss and the mutual metric distortion may be related as follows.
Consider $K$ endowed with either $\wh \d$ or $\d_K$.
Write $\D_{0^+}(\d_K,\wh{\d}) := \lim_{\delta \rightarrow 0}\D_{\delta}(\d_K,\wh{\d})$.
Then, a straightforward computation yields \begin{align*}
\ell_\infty(\wh{\d} | \d_K) +1 \leq \D_{0^+}(\d_K,\wh{\d}) \leq (1-\ell_\infty(\wh{\d}|\d_K))_+^{-1}.
\end{align*}
Hence, the global sup-loss $\ell_\infty(\wh{\d}|\d_K)$ is somehow an additive counterpart to the mutual distortion $\D_{0^+}(\wh{\d},\d_K)$ in the case where $K=K'$, that is, when the supports of the two metrics coincide in Definition~\ref{def:metric_distortion}, as already noticed in Remark \ref{rem:bilip}.
\end{rem}
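The sandwich above can be checked numerically on a toy example in which $\wh\d$ is a uniform rescaling of $\d_K$, a case where the left inequality is attained. A short Python illustration (ours, with illustrative values):

```python
import numpy as np

# Toy check of the sandwich between sup-loss and vanishing-scale distortion,
# with d_hat a uniform rescaling of d_K (values are illustrative).
pts = np.array([0.0, 1.0, 3.0])
d_K = np.abs(pts[:, None] - pts[None, :])   # Euclidean distance on the line
d_hat = 1.05 * d_K                          # uniformly inflated estimate

off = ~np.eye(len(pts), dtype=bool)         # pairs x != y
ell = np.max(np.abs(1.0 - d_hat[off] / d_K[off]))                      # ell_infty
D0 = max(np.max(d_hat[off] / d_K[off]), np.max(d_K[off] / d_hat[off]))  # D_{0+}
```

Here $\ell_\infty = 0.05$ and $\D_{0^+} = 1.05 = 1+\ell_\infty$, while $(1-\ell_\infty)^{-1} \approx 1.0526$, so both inequalities hold.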
When $K = M$ is a $\mathcal{C}^2$ submanifold of $\mathbb{R}^D$ of dimension $d$ with reach bounded below, methods using neighborhood graphs such as Isomap provably estimate $\d_M$ at rate $O(n^{-2/3d})$~\cite{arias2019unconstrained}.
As we will show in Theorem~\ref{thm:metric-minimax-ub}, this rate is far from being optimal.
To date, the best minimax lower bound in this setting is due to~\cite{Arias20}, who obtain a rate of order $\Omega(n^{-2/d})$ in the particular case of a deterministic design on $\mathcal{C}^2$ submanifolds.
Actually, we can extend the result of~\cite{Arias20} to our random design setting, and to general $\mathcal{C}^k$ submanifolds with $k \geq 2$.
\begin{theorem}\label{thm:metriclb}
Assume that $f_{\min} \leq c_{d,k}/\tau_{\min}^d$ and $f_{\max} \geq C_{d,k}/\tau_{\min}^d$, and $L_j \geq C_{d,k}/\tau_{\min}^{j-1}$ for all $j \in \{2,\ldots,k\}$.
Then for $n$ large enough,
\begin{align*}
\inf_{\wh \d}
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}
} \mathbb{E}_{P^{\otimes n}}[\ell_\infty(\wh \d | \d_M)]
\geq
\tilde{c}_{d,k,\tau_{\min}} \left(\frac{1}{n}\right)^{k/d},
\end{align*}
where the infimum is taken over all measurable estimators $\wh \d$ of $\d_M$ based on $n$ samples.
\end{theorem}
This theorem is proved in Appendix \ref{sec:proof-lower-bound-metric}. As we shall prove shortly in Section~\ref{sec:optimal-metric-estimation}, this lower bound is matched by an upper bound up to $\log n$ factors (\thmref{metricub}), and is thus optimal.
\subsection{An Optimal Approach to Metric Estimation}
\label{sec:optimal-metric-estimation}
The existing unsupervised methods for metric learning are known either to have no theoretical guarantees, or to achieve a sub-optimal rate for estimating the intrinsic metric.
As stated before, Isomap reaches a rate of $n^{-2/3d}$, which is very far from the theoretical lower bound $n^{-k/d}$ shown in \thmref{metriclb}. Other methods, such as taking the shortest-path distance over a Delaunay triangulation~\cite{Arias20}, are shown to attain a precision of $n^{-2/d}$, which is optimal for the $\mathcal{C}^2$ model but not for $k \geq 3$. We propose here a fairly general approach that can output a family of minimax-optimal metric estimators. It relies on the following bound.
\begin{proposition} \label{prp:metric}
Let $K \subset \mathbb{R}^D$ be a set of positive reach $\tau(K) > 0$, and $K' \subset \mathbb{R}^D$ be any set such that $\dh(K',K) < \varepsilon \leq \tau(K)/2$. Then,
$$
\ell_\infty(\d_{(K')^\varepsilon} | \d_{K}) \leq \frac{2\varepsilon}{\tau(K)}
,
$$
where
we recall that
$(K')^\varepsilon = \{ u \in \mathbb{R}^D \mid \d(u,K') \leq \varepsilon\}$, so that $K \subset (K')^\varepsilon$.
\end{proposition}
\prpref{metric} is proved in Appendix \ref{sec:proof-plugin-metric}.
It asserts that estimating geodesic distances of sets of positive reach is never harder than estimating the sets themselves in Hausdorff distance.
Beyond the framework of closed manifolds developed here, note that in the convex case $\tau(K) = \infty$, $\d_K$ coincides with the Euclidean metric, so that estimating $\d_K$ becomes trivial.
A significant consequence of \prpref{metric} is that we can derive a consistent estimator of the intrinsic distance from any consistent estimator of the support, and with the same rate of convergence.
In what follows, we write
\begin{align}
\label{eq:dmax}
\d_{\max} := \frac{5^d}{\omega_d f_{\min} \tau_{\min}^{d-1}},
\end{align}
where $\omega_d$ is the volume of the $d$-dimensional unit ball. In \lemref{boundgeo}, the length $\d_{\max}$ is proved to be an upper bound on the geodesic diameter of the supports of any distribution in the model $\distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$.
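For concreteness, $\omega_d = \pi^{d/2}/\Gamma(d/2+1)$, so $\d_{\max}$ is explicitly computable; for instance, $d=2$, $f_{\min}=1$ and $\tau_{\min}=1$ give $\d_{\max} = 25/\pi$. A small Python helper (illustrative; the function names are ours):

```python
import math

def unit_ball_volume(d):
    """omega_d: volume of the d-dimensional Euclidean unit ball."""
    return math.pi ** (d / 2) / math.gamma(d / 2 + 1)

def d_max(d, f_min, tau_min):
    """The bound d_max = 5^d / (omega_d * f_min * tau_min^(d-1)) of eq. (dmax)."""
    return 5 ** d / (unit_ball_volume(d) * f_min * tau_min ** (d - 1))
```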
\begin{theorem} \label{thm:metricub} Let $k \geq 2$ and let $\wh M$ be an estimator satisfying
$$
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} P^{\otimes n}(\dh(\wh M, M) \geq \varepsilon_n) \leq \eta_n,
$$
for some positive sequences $\varepsilon_n$ and $\eta_n$ converging to $0$.
Then the metric estimator
$$\wh \d(x,y) := \d_{\max} \wedge \d_{(\wh M_{x,y})^{\varepsilon_n}}(x,y)~~~\text{with}~~~\wh M_{x,y} := \wh M \cup \{x,y\},
$$
which is defined for all $x,y \in \mathbb{R}^D$, satisfies
$$
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}}[\ell_\infty(\wh \d | \d_M)] \leq \frac{2}{\tau_{\min}}\varepsilon_n+\left(1+\frac{\d_{\max}}{\varepsilon_n}\right)\eta_n.
$$
\end{theorem}
\thmref{metricub} is proved in Appendix \ref{sec:proof-plugin-metric}.
A particular advantage of this result is that it does not require the estimator $\wh M$ to have any geometric structure, nor to be regular in any sense.
This contrasts sharply with \cite{Arias20}, which extensively uses the structural properties of the intermediate estimator $\wh M$.
\thmref{metricub} is much more versatile, since here, $\wh M$ could just as easily be anything as a point cloud, a metric graph, a triangulation, or a union of polynomial patches.
For instance, taking $\wh M = \{X_1,\dots,X_n\}$ to be the observed data and $\varepsilon_n = C(\log n/n)^{1/d}$ for $C$ large enough yields $\eta_n \leq \varepsilon_n^2$, so that
$$
\sup_{P \in \distributions{2}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}} [\ell_\infty(\wh \d | \d_M)] \leq C_{\tau_{\min}, d, f_{\min}} \left(\frac{\log n}{n}\right)^{1/d},
$$
which is faster than the known rate of order $O(n^{-2/3d})$ for Isomap (see for instance \cite[Eq (1.2)]{Arias20}).
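In practice, the offset distance $\d_{(\wh M_{x,y})^{\varepsilon_n}}$ is not directly computable; when $\wh M$ is a point cloud, a standard surrogate (an assumption of this sketch, not the exact construction of the theorem) is the shortest-path distance in the graph joining points at Euclidean distance at most $2\varepsilon_n$, since two $\varepsilon_n$-balls of the offset overlap exactly when their centers are that close. A Python sketch using a hand-rolled Dijkstra:

```python
import heapq
import numpy as np

def plugin_metric(M_hat, x, y, eps, d_max):
    """Illustrative surrogate for the plug-in metric estimator.

    Approximates the intrinsic distance in the eps-offset of M_hat U {x, y}
    by the shortest-path (Dijkstra) distance in the graph whose edges join
    points at Euclidean distance <= 2*eps, capped at d_max. This
    neighborhood-graph surrogate is our simplification, not the exact
    offset distance of the theorem.
    """
    pts = np.vstack([M_hat, x, y])
    n = len(pts)
    D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    src, dst = n - 2, n - 1                  # indices of x and y
    best = np.full(n, np.inf)
    best[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        du, u = heapq.heappop(heap)
        if du > best[u]:
            continue
        for v in range(n):
            if v != u and D[u, v] <= 2 * eps and du + D[u, v] < best[v]:
                best[v] = du + D[u, v]
                heapq.heappush(heap, (best[v], v))
    return min(d_max, best[dst])
```

On a sample of a segment, the routine recovers the intrinsic (here, Euclidean) distance between the endpoints, capped at `d_max`.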
Now, taking $\wh M$ to be a minimax optimal estimator of $M$ for the Hausdorff loss --- as that of~\cite{Aamari19}, for instance --- and $\varepsilon_n = C (\log n/n)^{k/d}$ for some large constant $C > 0$ yields $\eta_n \leq \varepsilon_n^2$ (see \lemref{hausdorff_and_covering}), and a metric estimator $\wh \d$ that achieves the following rate.
\begin{theorem} \label{thm:metric-minimax-ub}
Let $\wh \d$ be the estimator described in \thmref{metricub} built on top of $\wh M$ described in \lemref{hausdorff_and_covering}. Then for $n$ large enough,
$$
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}} [\ell_\infty(\wh \d | \d_M)] \leq C_{\tau_{\min}, d, f_{\max}, f_{\min}, \mathbf{L},k} \left(\frac{\log n}{n}\right)^{k/d}.
$$
\end{theorem}
By virtue of \thmref{metriclb}, this rate is minimax optimal up to $\log n$ factors.
\subsection{Optimal Spherical Distortion Radius Estimation}
Interesting as metric estimation is in its own right, we now investigate the estimation rates of the spherical distortion radius at scale $\delta > 0$.
To obtain a minimax lower bound, we simply note that $\sdr_\delta(M,\d_M)$ coincides with $\tau(M)$ whenever $\tau(M) = \wfs(M)$ (Proposition~\ref{prp:interpolate}).
Hence, any lower bound for the estimation of $\tau(M)$ on a model over which $\tau(M) = \wfs(M)$ yields a lower bound for the estimation of $\sdr_\delta(M,\d_M)$.
Applying \thmref{lwr_bounds_clementb} with $\alpha \geq 0$ immediately gives the following lower bound.
\begin{theorem}
\label{thm:sdrlb}
Assume that $f_{\min} \leq c_{d,k}/\tau_{\min}^d$ and $f_{\max} \geq C_{d,k}/\tau_{\min}^d$, and $L_j \geq C_{d,k}/\tau_{\min}^{j-1}$ for all $j \in \{2,\ldots,k\}$.
Then for $n$ large enough,
for all $\delta \in (0,\tau_{\min})$,
$$
\inf_{\wh\sdr_\delta} \sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}}[|\wh\sdr_\delta - \sdr_\delta(M,\d_M)|] \geq \tilde{c}_{\tau_{\min},d,k} n^{-k/d},
$$
where the infimum is taken over all measurable estimators $\wh\sdr_\delta$ of $\sdr_\delta(M,\d_M)$ based on $n$ samples.
\end{theorem}
It turns out that this bound is optimal.
To exhibit an estimator that achieves this rate, we take advantage of the Hausdorff and metric stability of the spherical distortion radius shown in \thmref{stab}.
In order to apply it, we first need to check that Assumptions~\ref{ass:spread} and~\ref{ass:subeuc} are fulfilled for every manifold in our models $\mathcal{C}^k_{\tau_{\min},\mathbf{L}}$.
\begin{proposition} \label{prp:assreach} Let $M \subset \mathbb{R}^D$ be a submanifold with positive reach $\tau(M) > 0$. Then $M$ satisfies Assumptions \ref{ass:spread} and \ref{ass:subeuc} with parameters
$$\varepsilon_0 = \tau(M)/4, ~~~~\Delta_0 = \tau(M) ,~~~~C_0 = 3/16,~~~~\Delta_1 = \tau(M)/2 ~~~~\text{and} ~~~~C_1 = 2.
$$
\end{proposition}
\prpref{assreach} is proven in Appendix \ref{sec:proofoptireach}. In the vein of \thmref{metricub}, and using the stability of the spherical distortion radius with respect to the pair $(K,\d)$, we can now build an estimator of $\sdr_\delta(M,\d_M)$ in a plug-in fashion over $\mathcal{C}^k$ submanifolds.
Recall that when $M$ is in $\mathcal{C}^k_{\tau_{\min},\mathbf{L}}$, and $\delta \in (0,\sqrt{2(D+1)/D}\wfs(M))$, then according to Propositions \ref{prp:interpolate} and \ref{prop:wfs_properties}, and to \lemref{boundgeo},
$$
0 < \tau_{\min} \leq \tau(M) \leq \sdr_\delta(M,\d_M) \leq \wfs(M) \leq \sqrt{\frac{D}{2(D+1)}} \diam(M) \leq \mathrm{s}_{\max} < \infty
,$$
where $\mathrm{s}_{\max} := \sqrt{D/(2(D+1))} \d_{\max}$, with $\d_{\max}$ being the constant introduced in \eqref{eq:dmax}.
\begin{theorem} \label{thm:sdrub} Given $k \geq 2$, let $\wh M$ be an estimator satisfying
$$
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} P^{\otimes n}(\dh(M,\wh M) \geq \varepsilon_n) \leq \eta_n
$$
for some positive sequences $\varepsilon_n,\eta_n$ converging to $0$. Then, for any $\delta \in (0,\tau_{\min})$, the estimator $\wh\sdr_\delta := \sdr_\delta(\wh M,\wh\d) \wedge \mathrm{s}_{\max}$, where $\wh \d$ is defined in \thmref{metricub}, satisfies
$$
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}} |\wh\sdr_\delta -\sdr_\delta(M,\d_M)| \leq C \left(\frac{\mathrm{s}_{\max}^4}{\delta^4} \varepsilon_n + \mathrm{s}_{\max}\eta_n\right).
$$
\end{theorem}
We refer to Appendix \ref{sec:proofoptireach} for a proof of this result.
\begin{rem}
In place of $\wh \d = \d_{\wh M^{\varepsilon_n}}$, one could actually plug any estimator $\wh \d$ of the metric into \thmref{sdrub}.
In light of the stability result of \thmref{stab}, as long as $\wh \d$ satisfies
\begin{align*}
\sup_{P \in \mathcal{P}^k} P^{\otimes n}\(\D_\delta(\wh \d,\d_M) \geq 1+\frac{\varepsilon_n}{\delta}\) \leq \eta_n,
\end{align*}
the conclusion of \thmref{sdrub} would still hold.
This comes in handy, especially if one wants to input a computationally efficient distance estimator, such as the shortest-path distance on a neighborhood graph~\cite{tenenbaum2000global} or on Delaunay triangulations~\cite{Arias20}.
\end{rem}
Again, taking $\wh M$ to be a minimax optimal estimator for the Hausdorff loss~\cite{Aamari19} yields an estimator $\wh\sdr_\delta$ of the spherical distortion radius with the following guarantee. \begin{theorem}
For all $\delta \in (0,\tau_{\min})$, the estimator $\wh \sdr_\delta$ constructed above satisfies, for $n$ large enough,
$$
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}} |\wh\sdr_\delta -\sdr_\delta(M,\d_M)| \leq C_{\tau_{\min}, d, f_{\max}, f_{\min}, \mathbf{L},k} \frac{1}{\delta^{4}} \left(\frac{\log n}{n}\right)^{k/d},
$$
and this rate is optimal in view of \thmref{sdrlb}.
\end{theorem}
Note the presence of the factor $1/\delta^{4}$ in the bound, which makes the rate diverge as $\delta \to 0$.
This blowup is to be expected for the following reason. As $\delta$ goes to $0$, the spherical distortion radius goes to the reach $\tau(M)$ (Proposition~\ref{prp:interpolate}).
Since the estimation of $\tau(M)$ cannot be faster than $n^{-(k-2)/d}$ (\thmref{lwr_bounds_clementb}), the estimation rate of $\sdr_\delta(M,\d_M)$ must deteriorate in some way as $\delta\to 0$.
\subsection{Optimal Reach Estimation}
In light of Proposition~\ref{prop:reach_intermediate_local} and~\eqref{eq:double-plugin-precision}, it only remains to combine the maximal curvature estimator and the spherical distortion radius estimator to obtain an estimator of the reach.
Namely, we let $\wh M$ be the minimax-Hausdorff estimator of \lemref{hausdorff_and_covering}. According to the very same \lemref{hausdorff_and_covering}, there exists $c_{\tau_{\min}, d, f_{\max}, f_{\min}, \mathbf{L},k} > 0$ such that, denoting by
\beq \label{eq:tune_ven}
\varepsilon_n := c_{\tau_{\min}, d, f_{\max}, f_{\min}, \mathbf{L},k} \(\frac{\log n}{n}\)^{k/d},
\end{equation}
there holds
\beq \label{eq:proba_ven}
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}}P^{\otimes n}(\dh(\wh M,M) \geq \varepsilon_n) \leq \varepsilon_n^2.
\end{equation}
We also let $\wh \d$ be the estimator of the intrinsic distance of \thmref{metricub} from $\wh M$ and $\varepsilon_n$.
We let $\wh\sdr_\delta := \sdr_\delta(\wh M,\wh \d) \wedge \mathrm{s}_{\max}$ for some $\delta \in (0,\tau_{\min})$ as in Theorem~\ref{thm:sdrub}.
Finally, we write
$$
\wh \tau := \wh R_\ell \wedge \wh\sdr_\delta.
$$
The following \thmref{reachub} is a straightforward consequence of Theorems \ref{thm:cv_rates_curvature_max} and \ref{thm:sdrub}, inserted in the plug-in strategy of Proposition~\ref{prop:reach_intermediate_local} and~\eqref{eq:double-plugin-precision}.
\begin{theorem} \label{thm:reachub}
The estimator $\wh\tau$ described above with $\delta = \tau_{\min}/2$ satisfies
\begin{align*}
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}} | \wh \tau - \tau(M) | \leq C_{\tau_{\min}, d, f_{\max}, f_{\min}, \mathbf{L},k} \left(\frac{\log n}{n}\right)^{(k-2)/d},
\end{align*}
and, for all $\alpha > 0$,
\begin{align*}
\sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}, \alpha}{f_{\min}}{f_{\max}}} \mathbb{E}_{P^{\otimes n}} | \wh \tau - \tau(M) | \leq C_{\tau_{\min}, d, f_{\max}, f_{\min}, \mathbf{L},k,\alpha} \left(\frac{\log n}{n}\right)^{k/d}.
\end{align*}
\end{theorem}
As a conclusion, Theorems~\ref{thm:lwr_bounds_clementb} and~\ref{thm:reachub} assert that $\wh \tau$ is minimax optimal, and that its rate of convergence adapts to whether $\tau(M)$ is attained by curvature (yielding the slower rate $O(n^{-(k-2)/d})$) or by a bottleneck (yielding the faster rate $O(n^{-k/d})$).
The computation of $\wh\tau$ depends explicitly on the parameters of the models at two levels.
First, in tuning the value of $\varepsilon_n$ as in \eqref{eq:tune_ven}.
Second, in choosing $\delta \in (0,\tau_{\min})$. These two dependencies may be circumvented by picking
$$\wt \varepsilon_n = {\log n}\left(\frac{\log n}{n}\right)^{k/d},$$
and $\delta_n = 1/\log n$. Then, for $n$ large enough, both \eqref{eq:proba_ven} and $\delta_n \in (0,\tau_{\min})$ will be fulfilled.
The price to pay for this workaround to constant calibration is limited to multiplicative $\log n$ factors in the upper bound of \thmref{reachub}.
\section{Proofs of \secref{reach}} \label{sec:proof3}
\subsection{Comparing Reaches, Weak Feature Size and Diameter} \label{sec:proof_of_prp_wfs_properties}
This section is devoted to the proof of Proposition~\ref{prop:wfs_properties}, which goes as follows.
\begin{proof}[Proof of Proposition~\ref{prop:wfs_properties}]
For (i), recall that no closed compact submanifold can be contractible~\cite[Theorem~3.26]{hatcher2002algebraic}. Furthermore,~\cite[Theorem~4.8]{Federer59} and~\cite[Lemma~2.1]{Chazal06} combined together yield that $K^r$ is isotopic to $K$ for all $r < \wfs(K)$.
On the other hand, whenever $r > \rad(K)$, where $\rad(K)$ is the radius of the smallest ball enclosing $K$, $K^r$ is star-shaped with respect to any point of the non-empty intersection $\cap_{x \in K} \ball(x,r)$.
We conclude that $\wfs(K) \leq \rad(K)$. Since $\rad(K) < \infty$ because $K$ is compact, we obtain $\wfs(K) < \infty$.
For (ii), the first two inequalities come from the definition of $\tau_\mu(K)$ (see~\eqref{eq:mu-reach}).
The rightmost one comes from Jung's theorem~\cite[Theorem~2.10.41]{Federer69}, which asserts that
$
\rad(K) \leq \sqrt{\frac{D}{2(D+1)}}\diam(K)
$,
and the fact that $\wfs(K) \leq \rad(K)$ whenever $\wfs(K)$ is finite (same argument as for (i)).
\end{proof}
\subsection{Minimax Lower Bound for $\mu$-Reach Estimation}
\label{sec:proof-of-lower-bound-mu-reach}
This section is devoted to the proof of Theorem~\ref{thm:mu-reach-inconsistency}. It builds upon the possible discontinuities of the map $M \mapsto \mathrm{Med}_\mu(M)$ in Hausdorff distance.
The exhibition of such a discontinuity can be done in dimension $d=1$ and $D=2$, and can then be generalized to arbitrary $1 \leq d < D$ by using symmetry and rotation arguments.
The building block of the construction is the following arc of curve.
For all $\alpha\in (0,\pi/4]$, write $R_\alpha := 1/\sin(\alpha)$. Let also $\mathsf{C}_\alpha : [0,1] \to \mathbb{R}_+$ be defined as $\mathsf{C}_\alpha(t) := R_\alpha - \sqrt{R_\alpha^2 - t^2}$, whose graph is an arc of circle of radius $R_\alpha$ and aperture $\alpha$ (see Figure~\ref{fig:galpha}).
To glue $\alpha$-turns such as $\mathsf{C}_\alpha$ smoothly with straight lines, we smooth it as follows.
\begin{lemma}
\label{lem:turn-widget}
There exists $G_\alpha : [0,1] \to \mathbb{R}_+$ infinitely differentiable such that:
\begin{enumerate}
\item $G_\alpha^{(\ell)}(0) = 0$ for all $\ell \geq 0$;
\item $G_\alpha(1) = \mathsf{C}_\alpha(1)$, $G_\alpha'(1) = \mathsf{C}_\alpha'(1)$ and $G_\alpha^{(\ell)}(1) = 0$ for all $\ell \geq 2$;
\item $\|G_\alpha^{(\ell)}\|_\infty \leq C_{\ell}/R_\alpha$ for all $\ell \geq 1$;
\item $G_\alpha(t) < \mathsf{C}_\alpha(t)$ for all $t \in (0,1)$;
\item $G_\alpha$ is convex.
\end{enumerate}
\end{lemma}
See \figref{galpha} for a diagram of such a $G_\alpha$.
Let us first comment on the requirements on $G_\alpha$. Items 1 and 2 say that $G_\alpha$ is a $\mathcal{C}^k$ interpolation between the tangent lines at two points of $\mathsf{C}_\alpha$ that are $\alpha$-apart in terms of polar angle.
Item 3 says that the graph of $G_\alpha$, once rescaled by $1/R_\alpha$, will be bounded in $\mathcal{C}^k$-norm for all $k$.
Items 4 and 5 ensure that the medial axis of our forthcoming construction is well behaved (see Figure~\ref{fig:malpha}).
\begin{figure}[h!]
\centering
\includegraphics[width=0.35\textwidth]{turn-bump}
\caption{Construction for Lemma~\ref{lem:turn-widget}: curves associated to $\mathsf{C}_\alpha$, $A_\alpha$, and $G_\alpha$.}
\label{fig:galpha}
\end{figure}
\begin{proof}[Proof of Lemma~\ref{lem:turn-widget}]
The following construction applies to general convex functions, although we carry it out for $\mathsf{C}_\alpha$ only, for simplicity.
Consider the piecewise linear map $A_\alpha$ given by the tangent lines of $\mathsf{C}_\alpha$ at $t=0$ and $t=1$. That is, define $A_\alpha(t)$ for all $t \in \mathbb{R}$ by
\begin{align*}
A_\alpha(t)
:=&
\max\{ \mathsf{C}_\alpha(0) + \mathsf{C}_\alpha'(0)t , \mathsf{C}_\alpha(1) + (t-1)\mathsf{C}_\alpha'(1)\}
\\
=&
\max\{ 0 , \mathsf{C}_\alpha(1) + (t-1)\mathsf{C}_\alpha'(1) \}
.
\end{align*}
As $\mathsf{C}_\alpha$ is strictly convex, $A_\alpha < \mathsf{C}_\alpha$ on $\mathbb{R}\setminus \{0,1\}$.
We also denote by $t_\alpha^\ast$ the (unique) point of non-differentiability of $A_\alpha$, that is
\begin{align*}
t_\alpha^\ast
:=
1 - \frac{\mathsf{C}_\alpha(1)}{\mathsf{C}_\alpha'(1)}
=
R_\alpha \tan(\alpha/2)
.
\end{align*}
Note for later use that for all $\alpha \in (0,\pi/4)$, $1/2 \leq t_\alpha^* \leq 2-\sqrt{2} \leq 6/10$.
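These bounds on $t_\alpha^\ast$ can be checked numerically; the closed form $t_\alpha^\ast = 1/(2\cos^2(\alpha/2))$ used below follows from $\tan(\alpha/2)/\sin(\alpha) = 1/(2\cos^2(\alpha/2))$ and is stated here as an elementary identity, not taken from the text:

```python
import math

def t_star(alpha):
    # t*_alpha = R_alpha * tan(alpha/2) with R_alpha = 1/sin(alpha)
    return math.tan(alpha / 2) / math.sin(alpha)

# closed form: t* = 1/(2 cos^2(alpha/2)), increasing from 1/2 to 2 - sqrt(2)
for j in range(1, 100):
    a = (j / 100) * (math.pi / 4)
    assert abs(t_star(a) - 1.0 / (2 * math.cos(a / 2) ** 2)) < 1e-12
    assert 0.5 <= t_star(a) <= 2 - math.sqrt(2) <= 0.6
```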
Given $h>0$ to be chosen later, write $K_h(t) := h^{-1}K(t/h)$, where $K(t) := c_0 \exp(-1/(1-t^2)) \mathbbm{1}_{|t|< 1}$ is a non-negative $\mathcal{C}^\infty$ kernel, and $c_0$ is chosen so that $\int_{\mathbb{R}} K = 1$.
Finally, consider the convolution
\begin{align*}
G_\alpha(t) := \int_{\mathbb{R}} K_h(x) A_\alpha(t-x) \diff x
.
\end{align*}
By smoothness of $K_h$ and non-negativity of both $K_h$ and $A_\alpha$, $G_\alpha = K_h \ast A_\alpha$ is infinitely differentiable and non-negative.
Also, since $A_\alpha$ is convex and $K_h$ non-negative, $G_\alpha$ is convex (Item 5). Furthermore, one easily checks that outside the interval $[t_\alpha^\ast-h,t_\alpha^\ast+h]$, $G_\alpha$ coincides with $A_\alpha$.
Hence, if $h \leq 1/4$, we have $[t_\alpha^\ast-h,t_\alpha^\ast+h] \subset [1/2-1/4,6/10+1/4]= [1/4,17/20]$, so that Items 1 and 2 hold directly.
To check that $G_\alpha < \mathsf{C}_\alpha$ on $(0,1)$, fix $t \in (0,1)$.
If $t \notin [t_\alpha^\ast-h,t_\alpha^\ast+h]$, $G_\alpha(t) = A_\alpha(t) < \mathsf{C}_\alpha(t)$ by construction.
If $t \in [t_\alpha^\ast-h,t_\alpha^\ast+h]$, we have $G_\alpha(t) \leq G_\alpha(t_\alpha^\ast + h) = h \mathsf{C}_\alpha'(1)$.
But on the other hand, $\mathsf{C}_\alpha(t) \geq \mathsf{C}_\alpha(t_\alpha^\ast-h) > \mathsf{C}_\alpha(1/4)$. Hence, we do have $G_\alpha(t) < \mathsf{C}_\alpha(t)$ as soon as $h \leq 1/100$, since $\mathsf{C}_\alpha(1/4)/\mathsf{C}_\alpha'(1) > 1/100$ for all $\alpha \in (0,\pi/4)$. This yields Item 4.
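The margin $\mathsf{C}_\alpha(1/4)/\mathsf{C}_\alpha'(1) > 1/100$ behind the choice $h = 1/100$ can be swept numerically over the whole range of apertures (a sanity check, not part of the proof); here $\mathsf{C}_\alpha'(1) = 1/\sqrt{R_\alpha^2-1}$:

```python
import math

# sweep alpha over (0, pi/4) and check the margin used to pick h = 1/100
for j in range(1, 200):
    alpha = (j / 200) * (math.pi / 4)
    R = 1.0 / math.sin(alpha)
    C_quarter = R - math.sqrt(R * R - 1.0 / 16)   # C_alpha(1/4)
    Cp_one = 1.0 / math.sqrt(R * R - 1.0)         # C_alpha'(1)
    assert C_quarter / Cp_one > 1.0 / 100
```

The ratio is smallest as $\alpha \to \pi/4$, where it is still about $0.022 > 1/100$.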
Finally, letting $h = h_0 = 1/100$, we obtain for all $\ell \geq 1$ and $t \in [0,1]$,
\begin{align*}
|G_\alpha^{(\ell)}(t)|
=
\bigl|
K_h^{(\ell)}
\ast
A_\alpha(t)
\bigr|
\leq
\bigl\Vert
K_h^{(\ell)}
\bigr\Vert_1
\sup_{[-h,1+h]} A_\alpha
\leq
C_{\ell}\, A_\alpha(1+h)
\leq
C_{\ell}/R_{\alpha},
\end{align*}
which yields Item 3 and concludes the proof.
\end{proof}
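The construction of $G_\alpha$ can be reproduced numerically by discretizing the convolution $K_h \ast A_\alpha$ and checking Items 1, 2, 4 and 5 on a grid (an illustrative sketch under the proof's choice $h = 1/100$; the grid sizes are arbitrary):

```python
import math

alpha = math.pi / 8            # example aperture in (0, pi/4]
R = 1.0 / math.sin(alpha)      # R_alpha
h = 1.0 / 100                  # the bandwidth h_0 = 1/100 fixed in the proof

def C(t):                      # the arc C_alpha
    return R - math.sqrt(R * R - t * t)

Cp1 = 1.0 / math.sqrt(R * R - 1.0)   # C_alpha'(1)

def A(t):                      # tangent-line envelope A_alpha
    return max(0.0, C(1.0) + (t - 1.0) * Cp1)

def K_raw(u):                  # unnormalized bump kernel, supported on (-1, 1)
    return math.exp(-1.0 / (1.0 - u * u)) if abs(u) < 1 else 0.0

m = 400                        # kernel discretization: nodes u = i/m, |i| <= m
ks = [K_raw(i / m) for i in range(-m, m + 1)]
total = sum(ks) / m            # discrete integral of K_raw (fixes c_0)

def G(t):                      # G_alpha = K_h * A_alpha, by discrete convolution
    s = sum(ks[i + m] * A(t - (i / m) * h) for i in range(-m, m + 1))
    return s / (m * total)

# Items 1-2 (boundary matching) and Item 4 (G < C inside (0, 1)):
assert abs(G(0.0)) < 1e-9 and abs(G(1.0) - C(1.0)) < 1e-9
assert all(G(j / 50) < C(j / 50) for j in range(1, 50))
# Item 5 (convexity), tested through second-order differences:
assert all(G((j - 1) / 50) + G((j + 1) / 50) >= 2 * G(j / 50) - 1e-12
           for j in range(1, 50))
```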
Given $R>0$, we now let $G_{\alpha,R}$ be the curve obtained by rescaling the graph of $G_\alpha$ homothetically by a factor $R/R_\alpha$.
We extend the construction of these smooth $\alpha$-turns to $\alpha \in (\pi/4,\pi]$: to do so, we glue together two copies of $G_{\alpha/2,R}$ or four copies of $G_{\alpha/4,R}$ to define $G_{\alpha,R}$.
\begin{proposition}
\label{prop:hypotheses}
Assume that for all $j \in \{2,\ldots,k\}$, $L_j \geq C_{d,k}/\tau_{\min}^{j-1}$ for $C_{d,k}>0$ large enough.
Then for all $\mu \in [0,1)$ and $\varepsilon >0$ small enough, there exist $M,M' \in \manifolds{k}{\tau_{\min}}{\mathbf{L}}$ such that:
\begin{itemize}
\item
$
|\tau_\mu(M) - \tau_\mu(M')| \geq c_{d,k}\tau_{\min}
$
;
\item
$c'_{d,k} \tau_{\min}^d
\leq
\vol_d(M) \wedge \vol_d(M')
\leq
\vol_d(M) \vee \vol_d(M')
\leq C''_{d,k} \tau_{\min}^d$;
\item
$\vol_d(M \triangle M') \leq C'''_{d,k} \tau_{\min}^{d} \varepsilon$
.
\end{itemize}
\end{proposition}
\begin{proof}[Proof of Proposition~\ref{prop:hypotheses}]
For small enough (and arbitrarily small) $\varepsilon >0$, we let $\alpha \in [0,\pi]$ be such that $\sin\bigl((\alpha+\varepsilon)/2\bigr)^2 = 1-\mu^2$. Such an $\alpha$ always exists since $\mu^2 < 1$.
Given $\Delta,R_0,R_1> 0$ to be chosen later, we glue smooth turns from Lemma~\ref{lem:turn-widget} with straight lines to create a $\mathcal{C}^k$ closed curve in $\mathbb{R}^2$, as shown in Figure~\ref{fig:malpha}.
Then, we obtain a $\mathcal{C}^k$ closed $d$-dimensional submanifold $M_\alpha$ of $\mathbb{R}^{d+1}$, with a symmetry of revolution with respect to the horizontal axis of \figref{malpha}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{bumps_glued}
\caption{Construction of $M_\alpha$ in the proof of Proposition~\ref{prop:hypotheses}.}
\label{fig:malpha}
\end{figure}
By construction, if $\Delta \geq 8R_0$, then $M_\alpha$ has local parametrizations on top of its tangent spaces (see Definition~\ref{def:geometric-model}) with $L_j \leq C_{d,k}/(\Delta \wedge R_0 \wedge R_1)^{j-1}$ for all $j \geq 2$, and has volume $\vol_d(M_\alpha) \leq C_{d,k} (\Delta \vee R_0 \vee R_1)^d$ and $\vol_d(M_\alpha) \geq c_{d,k} (\Delta \wedge R_0 \wedge R_1)^d$.
We now examine the structure of the medial axis and the reach of $M_\alpha$.
If $u \in \mathrm{Med}(M_\alpha)$ is a point on the medial axis, rotational symmetry yields that two of its projection points must lie either:
\begin{itemize}
\item
In a plane containing the horizontal axis of symmetry of $M_\alpha$ (i.e. the plane of Figure~\ref{fig:malpha}).
As a result, its distance to $M_\alpha$ cannot be smaller than the smallest of the reaches of the parts $G_{\pi/2,R_1}$, $G_{\alpha/2,R_0}$ and $G_{\alpha,R_0}$, so that $\d(u,M_\alpha) \geq c_{d,k} R_0 \wedge R_1$.
\item
In a $d$-plane orthogonal to the horizontal axis. By rotational invariance, this forces $u$ to be on this axis of symmetry. As a result, $\d(u,M_\alpha) \geq \Delta/2 - 3R_0 \geq c_{d,k} \Delta$ since $\Delta \geq 8R_0$.
\end{itemize}
In all, we get
$\tau(M_\alpha) \geq c_{d,k} (\Delta \wedge R_0 \wedge R_1).$
We now examine the $\mu$-reach of $M_\alpha$. By definition, if $u \in \mathrm{Med}_\mu(M_\alpha)$ has two nearest neighbors $x,y \in M_\alpha$, the angle between $(u-x)$ and $(u-y)$ must be at least $2 \arcsin(\sqrt{1-\mu^2})$. As a result, a single branch of $M_\alpha$ between the two arcs of $G_{\alpha/2,R_0}$ cannot generate any point of the $\mu$-medial axis, since $\alpha$ has been chosen so that $\alpha < 2 \arcsin(\sqrt{1-\mu^2})$.
Hence, for $\Delta,R_1$ large enough compared to $R_0$, we have $\tau_\mu(M_\alpha) \geq c'_{d,k} (\Delta \wedge R_1)$.
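The angle bookkeeping can be checked numerically. We use the standard two-nearest-neighbour formula $\|\nabla \d_M(u)\| = \cos\theta$ when the two projections subtend an angle $2\theta$ (a known fact, stated here as an assumption rather than taken from the text):

```python
import math

mu = 0.6                       # any mu in [0, 1)
eps = 1e-3
threshold = 2 * math.asin(math.sqrt(1 - mu ** 2))  # minimal angle on Med_mu
alpha = threshold - eps        # i.e. sin((alpha + eps)/2)^2 = 1 - mu^2
assert 0 < alpha < threshold

# two equidistant nearest neighbours at angle 2*theta give |grad d_M(u)| = cos(theta);
# u lies on Med_mu iff cos(theta) <= mu, i.e. iff 2*theta >= threshold
theta = alpha / 2              # angle produced by the alpha-turn of M_alpha
assert math.cos(theta) > mu    # the turn generates no mu-medial-axis point

theta_bump = (alpha + eps) / 2 # angle created by the bump in M_alpha'
assert abs(math.cos(theta_bump) - mu) < 1e-12
```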
Finally, we build $M_\alpha'$ from $M_\alpha$ by bumping the curve near $G_{\alpha,R_0}$ as shown in Figure~\ref{fig:bump} (while still preserving the radial symmetry as before).
The manifold $M_\alpha'$ satisfies the same regularity conditions as $M_\alpha$.
Furthermore, $M_\alpha$ and $M_\alpha'$ only differ on a set of volume $\vol_d(M_\alpha \triangle M_\alpha') \leq C_{d,k} (\Delta\vee R_1)^{d-1} (R_0 \varepsilon)$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\textwidth]{bump_local}
\caption{
Local bump of $M_\alpha'$ for Proposition~\ref{prop:hypotheses},
in the boxed area of Figure~\ref{fig:malpha}.
}
\label{fig:bump}
\end{figure}
With this extra bump, we create a point $u_0 \in \mathrm{Med}(M_\alpha')$ that has two nearest neighbors $x_0,y_0 \in M_\alpha'$ at distance $R_0$, with angle between $(u_0-x_0)$ and $(u_0-y_0)$ equal to $\alpha' = \alpha + \varepsilon$, which satisfies $\sin(\alpha'/2)^2 = 1-\mu^2$. As a result, $u_0 \in \mathrm{Med}_\mu(M_\alpha')$, so that $\tau_\mu(M_\alpha') \leq \|u_0 - y_0\|=R_0$.
In particular, we have
\begin{align*}
|\tau_\mu(M_\alpha) - \tau_\mu(M_\alpha')| \geq c'_{d,k} (\Delta \wedge R_1) - R_0
.
\end{align*}
The proof is complete upon setting $M = M_\alpha$ and $M'=M_\alpha'$, with $R_1 = \Delta = R_0 /c'_{d,k}$ and $R_0 = \tau_{\min} / c_{d,k}$ for small enough constants $c_{d,k},c'_{d,k}>0$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:mu-reach-inconsistency}]
From Proposition~\ref{prop:hypotheses}, for $\varepsilon>0$ small enough, take $M,M' \in \manifolds{k}{\tau_{\min}}{\mathbf{L}}$ such that
$
|\tau_\mu(M) - \tau_\mu(M')| \geq c_{d,k} \tau_{\min}
$
, $c'_{d,k} \tau_{\min}^d \leq \vol_d(M),\vol_d(M') \leq C_{d,k} \tau_{\min}^d$, and
$\vol_d(M \triangle M') \leq C'_{d,k} \tau_{\min}^{d} \varepsilon$
.
Let us denote by $P$ and $P'$ the uniform distributions over $M$ and $M'$ respectively.
Elementary calculations directly yield that
$$
\TV(P,P') \leq \frac{\vol_d(M \triangle M')}{\vol_d(M) \vee \vol_d(M')} \leq C''_{d,k} \varepsilon
.
$$
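The total-variation bound above can be sanity-checked in a discrete analogue where counting measure plays the role of $\vol_d$ (the sets `M`, `Mp` below are illustrative):

```python
from fractions import Fraction

# uniform distributions over finite "manifolds" M, M' (counting measure
# plays the role of vol_d)
M = set(range(0, 10))          # "volume" 10
Mp = set(range(2, 11))         # "volume" 9, symmetric difference of size 3

def tv(Asupp, Bsupp):
    # exact total variation between the two uniform distributions
    pts = Asupp | Bsupp
    return Fraction(1, 2) * sum(
        abs(Fraction(x in Asupp, len(Asupp)) - Fraction(x in Bsupp, len(Bsupp)))
        for x in pts)

bound = Fraction(len(M ^ Mp), max(len(M), len(Mp)))
assert tv(M, Mp) <= bound
```

Indeed, for uniform distributions, $\TV(P,P') = 1 - \vol(M \cap M')/(\vol(M) \vee \vol(M'))$, which is at most $\vol(M \triangle M')/(\vol(M) \vee \vol(M'))$.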
Furthermore, since $c'_{d,k} \tau_{\min}^d \leq \vol_d(M) \wedge \vol_d(M') \leq \vol_d(M) \vee \vol_d(M') \leq C_{d,k} \tau_{\min}^d$, we obtain that $P,P' \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$ as soon as $f_{\min} \leq 1/(C_{d,k} \tau_{\min}^d)$ and $f_{\max} \geq 1/(c'_{d,k} \tau_{\min}^d)$.
As a result, for all $n \geq 1$, Le Cam's Lemma \cite{yu1997assouad} yields
\begin{align*}
\inf_{\widehat{r}_\mu} \sup_{P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}}
&\mathbb{E} _{P^{\otimes n}}\left[
| \widehat{r}_\mu - \tau_\mu(M)|
\right]
\\
&\geq
\frac{1}{2}
|\tau_\mu(M) - \tau_\mu(M')|
\bigl(1 - \TV(P,P')\bigr)^n
\\
&
\geq
c_{d,k} \tau_{\min} (1 - \varepsilon)^n
.
\end{align*}
As this construction is valid for all $\varepsilon>0$ small enough, we obtain the result by letting $\varepsilon$ tend to zero.
\end{proof}
\subsection{Maximal Curvature Estimation}\label{sec:proof_thm_cvrates_curvature_max}
This section is devoted to the proof of Theorem~\ref{thm:cv_rates_curvature_max}. It is based on a careful investigation of the local polynomial fitting procedure described in~\cite{Aamari19b}. First, recall that from~\cite[Lemma~2]{Aamari19b}, if $M \in \manifolds{k}{\tau_{\min}}{\mathbf{L}}$, $y \in M$ and $y' \in \ball \left (y, \frac{L_2^{-1} \wedge \tau_{\min}}{4} \right ) \cap M$, we may write
\begin{align}\label{eq:true_pol_decomposition}
y'-y = \pi^*_y&(y'-y) + \mathbb{T}_y^{(2),*}( \tens{\pi^*_y(y'-y)}{2}) + \hdots + \mathbb{T}_y^{(k-1),*}(\tens{\pi^*_y(y'-y)}{k-1}) \notag
\\
&+ R_{y}^{(k)}(y'-y),
\end{align}
where $\pi^*_y := \pi_{T_y M}$, $\mathbb{T}_y^{(j),*}$ are $j$-multilinear maps from $T_y M$ to $\mathbb{R}^D$, and $R_y^{(k)}$ satisfies
\begin{align*}
\norm{R_y^{(k)} (y'-y)} \leq C t_*^{k-1} \norm{y'-y}^{k},
\end{align*}
where $t_* = \max_{2 \leq j \leq k, y \in M}\|\mathbb{T}^{(j),*}_y\|_{\mathrm{op}}^{\frac{1}{j-1}} \leq C_{k,d,\tau_{\min},\mathbf{L}}$. As assessed by~\cite[Lemma~2]{Aamari19b}, the polynomial decomposition expressed in~\eqref{eq:true_pol_decomposition} allows one to recover the curvature tensor via $\II_y M = \mathbb{T}_{y}^{(2),*}$. Following~\cite{Aamari19b}, we estimate this curvature tensor via the second-order term of the polynomial decomposition provided by the local fit to the data points~\eqref{eq:defi_pol_fit}.
To this aim, a slight adaptation of~\cite[Lemma~3]{Aamari19b} is needed, which translates approximation guarantees in Hausdorff distance into guarantees on the monomial terms.
\begin{lemma}\label{lem:pol_expression}
Set $h_0= (\tau_{\min} \wedge L_2^{-1})/8$ and $h \leq h_0$. Let $M \in \manifolds{k}{\tau_{\min}}{\mathbf{L}}$, $x_0 = y_0 + z_0$, with $y_0 \in M$ and $\|z_0\| \leq \sigma \leq h/4$. Denote by $\pi^*_{y_0}$ the orthogonal projection onto $T_{y_0}M$, and by $\mathbb{T}_{y_0}^{(2),*}, \hdots, \mathbb{T}_{y_0}^{(k-1),*}$ the multilinear maps given by~\eqref{eq:true_pol_decomposition}.
Let $x = y+z$ be such that $y \in M$, $\|z\| \leq \sigma \leq h/4$ and $x \in \ball(x_0,h)$. We also let $\pi$ be an orthogonal projection, and $\mathbb{T}^{(2)}, \hdots, \mathbb{T}^{(k-1)}$ be multilinear maps that satisfy
\begin{align*}
\left (\max_{2 \leq j \leq k-1}\|\mathbb{T}^{(j)}\|_{\mathrm{op}}^{\frac{1}{j-1}} \right ) \vee t_* \leq t, \\
th \leq \frac{1}{4},
\end{align*}
for some $t \geq 0$. Then it holds
\begin{multline*}
x-x_0 - \pi(x-x_0) - \sum_{j=2}^{k-1}{ \mathbb{T}^{(j)}(\tens{\pi(x-x_0)}{j})} = \sum_{j=1}^{k}\mathbb{T}^{(j),'}_{y_0}(\tens{\pi^*_{y_0}(y-y_0)}{j}) + R^{(k)}_{y_0}{(x-x_0)},
\end{multline*}
where $\mathbb{T}_{y_0}^{(j),'}$ are $j$-linear maps, and $\|R_{y_0}^{(k)}(x-x_0)\| \leq C \left ( \sigma + h^k (t_*^{k-1} + t^k h) \right )$, where $C$ depends on $d$, $k$, $\tau_{\min}$, $L_2, \ldots, L_k$. Moreover, we have
\[
\begin{array}{@{}rl}
\mathbb{T}^{(1),'}_{y_0} &= (\pi^*_{y_0}-{\pi}),\\
\mathbb{T}^{(2),'}_{y_0} &= (\pi^*_{y_0}-{\pi})\circ \mathbb{T}_{y_0}^{(2),*} + (\mathbb{T}_{y_0}^{(2),*} \circ \pi_{y_0}^* - {\mathbb{T}}^{(2)} \circ {\pi}),
\end{array}
\]
and, if $\pi = \pi^*_{y_0}$ and $\mathbb{T}^{(j)} = \mathbb{T}_{y_0}^{(j),*}$ for all $j \in \{2, \hdots, k-1\}$, then $\mathbb{T}_{y_0}^{(j),'}=0$ for all $j \in \{1, \hdots, k\}$.
\end{lemma}
The proof of Lemma~\ref{lem:pol_expression} is deferred to Section~\ref{sec:proof_lem_pol_expression}. To ensure that our local curvature estimators approximate the maximal curvature of $M$, we have to ensure that the sample covers $M$ well enough. That is the aim of the following lemma.
\begin{lemma}[{\cite[Appendix, Lemma~B.7 \& Section 5.1.4]{Aamari19b}}]\label{lem:hausdorff_and_covering}
Let $P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$. Write $\mathbb{X}_n$ for an i.i.d. $n$-sample drawn from $P$.
Let $h = \left( {C_{d,k}}\frac{f_{\max}^2}{f_{\min}^3} \frac{\log n}{n}\right)^{1/d}$, for $C_{d,k}$ large enough. Then, for $n$ large enough so that $h \leq \tau_{\min}/4$, with probability at least $1 - 2\left( \frac{1}{n}\right)^{2k/d}$, it holds
\begin{align*}
\dh\left(M,\mathbb{X}_n\right)
& \leq
h/4, \\
\dh (M,\hat{M}) & \leq C_{d,k,\tau_{\min}, \mathbf{L}} (t_*)^{k-1} \left ( \frac{f_{\max}^{2 + \frac{d}{2k}} \log n}{f_{\min}^{3 + \frac{d}{2k}} n} \right )^{k/d},
\end{align*}
where $\hat{M}$ denotes the union of local polynomial patches
\[
\hat{M} := \bigcup_{i=1}^n \hat{\Psi}_i \left ( \ball_{\hat{T}_i}(0,7h/8) \right )
\]
defined by \eqref{eq:defi_pol_fit} and \eqref{eq:polynomial_expansion}, and $t_* = \max_{y \in M, 2 \leq j \leq k}\|\mathbb{T}^{(j),*}_y\|_{\mathrm{op}}^{\frac{1}{j-1}} \leq C_{k,d,\tau_{\min},\mathbf{L}}$ as in Lemma~\ref{lem:pol_expression}.
\end{lemma}
Equipped with these two lemmas, we are now in a position to prove Theorem~\ref{thm:cv_rates_curvature_max}.
\begin{proof}[Proof of Theorem~\ref{thm:cv_rates_curvature_max}]
Based on Lemma \ref{lem:hausdorff_and_covering}, for $h = \left ( C_{d,k} \frac{f^2_{\max} \log n}{f^3_{\min}n} \right )^{\frac{1}{d}}$, given $i \in \{1,\ldots,n\}$, we denote by $\hat{\Psi}_i$ the polynomial estimator around $X_i$ defined by
\begin{align*}
\hat{\Psi}_i(v) := X_i + v + \sum_{j=2}^{k-1} \hat{\mathbb{T}}_{i}^{(j)}(\tens{v}{j}),
\end{align*}
for all $v \in \hat{T}_i$.
Setting
$$
\hat{M} := \bigcup_{i=1}^n \hat{\Psi}_i \left ( \ball_{\hat{T}_i}(0,7h/8) \right )
,
$$
we have that with probability larger than $1-2 \left( \frac{1}{n} \right )^{\frac{2k}{d}}$,
\begin{align}\label{eq:bound_hausdorff_patch}
\dh (\hat{M},M) \leq C_{d,k,\tau_{\min}, \mathbf{L}} (t_*)^{k-1} \left ( \frac{f_{\max}^{2 + \frac{d}{2k}} \log n}{f_{\min}^{3 + \frac{d}{2k}} n} \right )^{\frac{k}{d}} =: \varepsilon_1,
\end{align}
for $n$ large enough, according to Lemma~\ref{lem:hausdorff_and_covering}. In what follows, we place ourselves on the high-probability event of Lemma~\ref{lem:hausdorff_and_covering}. In particular, denoting by
$$
\hat{t} = \max_{1 \leq i \leq n} \max_{2 \leq j \leq k-1} \|\hat{\mathbb{T}}_{i}^{(j)}\|_{\mathrm{op}}^\frac{1}{j-1},
$$
note that~\cite[Section 5.1.2]{Aamari19b} ensures that $\hat{t}\vee t_* \leq t \leq 1/(4h)$, for some fixed $t$, provided $n$ is large enough.
We let $i \in \{1,\ldots,n\}$, $v \in \ball_{\hat{T}_i}(0,h/4)$, and intend to approximate $\II_{\pi_M(\hat{\Psi}_i(v))}$. To do so, we consider the following polynomial expansion centered at $v$: for $u \in \ball_{\hat{T}_i}(0,h/4)$,
\begin{align}\label{eq:pol_dec_1}
\hat{\Psi}_i(v+u) - \hat{\Psi}_i(v) & = u + \sum_{j=2}^{k-1} j \hat{\mathbb{T}}_{i}^{(j)}\left ( \tens{v}{j-1} \otimes u\right ) + \sum_{j=2}^{k-1} \sum_{r=j}^{k-1} \binom{r}{j} \hat{\mathbb{T}}_{i}^{(r)} \left ( \tens{v}{r-j} \otimes \tens{u}{j}\right ).
\end{align}
First we deduce from \eqref{eq:pol_dec_1} an estimate for the tangent space at $\pi_M(X_i+v)$, as well as a coordinate system. Namely, we let
\begin{align*}
\hat{J}_{i,v}\colon \hat{T}_i &\longrightarrow \hat{J}_{i,v}(\hat{T}_i)
\\
u &\longmapsto u + \sum_{j=2}^{k-1} j \hat{\mathbb{T}}_{i}^{(j)} \left( \tens{v}{j-1} \otimes u \right).
\end{align*}
Note that since $th \leq 1/4$, we have
\begin{align*}
\| \hat{J}_{i,v}(u) - u \| & \leq \sum_{j=2}^{k-1} j \left(\frac{th}{4} \right )^{j-1}\|u\| \\
& \leq \left (\sum_{j=1}^{\infty}j \left (\frac{th}{4} \right )^{j-1} -1 \right) \|u\| \\
& \leq \left ( \left ( \frac{1}{1-\frac{th}{4}} \right ) ^2 -1 \right ) \|u\| \leq \frac{\|u\|}{2},
\end{align*}
so that $\hat{J}_{i,v}$ is full-rank. In what follows, we write $\hat{T}_{i,v} := \Im(\hat{J}_{i,v})$ and $\hat{\pi}_{i,v} := \pi_{\hat{T}_{i,v}}$. We may now express \eqref{eq:pol_dec_1} in terms of the coordinate system given by $\hat{T}_{i,v}$:
\begin{align}\label{eq:pol_dec_2}
\hat{\Psi}_i(v+u) - \hat{\Psi}_i(v) = \hat{J}_{i,v}(u) + \sum_{j=2}^{k-1} \widetilde{\mathbb{T}}_{i,v}^{(j)}(\tens{\hat{J}_{i,v}(u)}{j}),
\end{align}
where the symmetric tensor of order $j$ centered at $v$, $\widetilde{\mathbb{T}}_{i,v}^{(j)}$, is defined by
\begin{align*}
\widetilde{\mathbb{T}}_{i,v}^{(j)}(\tens{w}{j}) := \sum_{r=j}^{k-1} \binom{r}{j} \hat{\mathbb{T}}_{i}^{(r)} \left ( \tens{v}{r-j} \otimes \tens{\hat{J}_{i,v}^{-1}(w)}{j} \right ),
\end{align*}
for $w \in \hat{T}_{i,v}$. Likewise, since $th \leq \frac{1}{4}$, we may write
\begin{align*}
\norm{\widetilde{\mathbb{T}}_{i,v}^{(j)}}_{\mathrm{op}} & \leq \sum_{r=j}^{k-1} \binom{r}{j} (3/2)^j t^{r-1} \left ( \frac{h}{4} \right )^{r-j} \\
& \leq \left ( \sum_{r=j}^\infty \binom{r}{j} \left ( \frac{th}{4} \right )^{r-j} \right ) (3/2)^j t^{j-1} \\
&\leq \left ( \frac{1}{1-\frac{th}{4}} \right )^{j+1} (3/2)^j t^{j-1} \leq (3/2)^{2j} t^{j-1},
\end{align*}
so that $\max_{2 \leq j \leq k-1} \norm{\widetilde{\mathbb{T}}_{i,v}^{(j)}}_{\mathrm{op}}^{\frac{1}{j-1}} \leq \widetilde{t} \leq \left ( \frac{3}{2} \right )^4 t$.
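The two geometric-series estimates used above can be verified numerically for $x = th/4 \leq 1/16$; the closed form $\sum_{r \geq j} \binom{r}{j} x^{r-j} = (1-x)^{-(j+1)}$ is the standard negative binomial series (stated here as a known identity):

```python
from math import comb

x = 1 / 16                     # x = t*h/4 <= 1/16 since th <= 1/4

# full-rank bound: sum_{j >= 2} j x^(j-1) = 1/(1-x)^2 - 1 <= 1/2
s = sum(j * x ** (j - 1) for j in range(2, 300))
assert abs(s - (1 / (1 - x) ** 2 - 1)) < 1e-12
assert s <= 0.5

# tensor bound: sum_{r >= j} binom(r, j) x^(r-j) = (1-x)^(-(j+1)) <= (3/2)^j
for j in range(2, 10):
    g = sum(comb(r, j) * x ** (r - j) for r in range(j, 300))
    assert abs(g - (1 - x) ** (-(j + 1))) < 1e-9
    assert (3 / 2) ** j * g <= (3 / 2) ** (2 * j)
```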
In particular, the bilinear form $\widetilde{\mathbb{T}}^{(2)}_{i,v} : \hat{T}_{i,v} \times \hat{T}_{i,v} \to \mathbb{R}^D$ may be expressed by
\begin{align*}
\widetilde{\mathbb{T}}_{i,v}^{(2)}(\tens{w}{2}) & := \sum_{j=2}^{k-1} \binom{j}{2} \hat{\mathbb{T}}_{i}^{(j)}\left ( \tens{v}{j-2} \otimes \tens{\hat{J}_{i,v}^{-1}(w)}{2} \right )
\end{align*}
for all $w \in \hat{T}_{i,v}$. Our second fundamental form estimator at $\pi_M(\hat{\Psi}_i(v))$ is then defined by
\begin{align*}
\hat{\mathbb{T}}_{i,v}^{(2)}: & = \widetilde{\mathbb{T}}_{i,v}^{(2)} \circ \hat{\pi}_{i,v} - \hat{\pi}_{i,v} \circ \widetilde{\mathbb{T}}_{i,v}^{(2)} \circ \hat{\pi}_{i,v},
\end{align*}
where with a slight abuse of notation, $\mathbb{T}\circ \pi (u) := \mathbb{T}\bigl(\tens{\pi(u)}{2}\bigr)$. Note that composition with $\hat{\pi}_{i,v}$ is performed to ensure that $\hat{\mathbb{T}}_{i,v}^{(2)}$ ranges into $\hat{T}_{i,v}^\perp$.
Our final max-curvature estimator can now be defined as
\begin{align*}
\hat{R}_\ell^{-1}
:=
\max_{1 \leq i \leq n} \max_{v \in \ball_{\hat{T}_i}(0,h/4)} \norm{\hat{\mathbb{T}}_{i,v}^{(2)}}_{\mathrm{op}}.
\end{align*}
First, we intend to show that, for a given $v \in \ball_{\hat{T}_i}(0,h/4)$, $\hat{\mathbb{T}}_{i,v}^{(2)}$ is close to $\II_{y_0}$, for some $y_0 \in M$. To do so, we let $u \in \ball_{\hat{T}_i}(0,h/4)$, $x := \hat{\Psi}_i(v+u)$, $x_0 := \hat{\Psi}_i(v)$, and $\widetilde{P}^{(r:k-1)}_{i,v} := \sum_{j=r}^{k-1} \widetilde{\mathbb{T}}^{(j)}_{i,v}$. Then, we have the decomposition
\begin{align*}
\hat{J}_{i,v}(u) & = \hat{\pi}_{i,v}(x-x_0) - \sum_{j=2}^{k-1} \hat{\pi}_{i,v} \circ \widetilde{\mathbb{T}}^{(j)}_{i,v}(\tens{\hat{J}_{i,v}(u)}{j}) \\
& = \hat{\pi}_{i,v}(x-x_0) - \sum_{j=2}^{k-1} \hat{\pi}_{i,v} \circ \widetilde{\mathbb{T}}^{(j)}_{i,v}\left [\tens{\left (\hat{\pi}_{i,v}(x-x_0) - \hat{\pi}_{i,v} \circ \widetilde{P}^{(2:k-1)}_{i,v}(\hat{J}_{i,v}(u)) \right )}{j} \right ] \\
& = \hat{\pi}_{i,v}(x-x_0) + \sum_{j=2}^{k} \mathbb{T}^{(j),''}_{i,v}( \tens{\hat{\pi}_{i,v}(x-x_0)}{j}) + R_{i,v}^{(k)}(x-x_0),
\end{align*}
with $\mathbb{T}^{(2),''}_{i,v} = - \hat{\pi}_{i,v} \circ \widetilde{\mathbb{T}}^{(2)}_{i,v}$, higher order tensors satisfying $\norm{\mathbb{T}^{(j),''}_{i,v}}_{\mathrm{op}} \leq C_k \widetilde{t}^{j-1} \leq C_k t^{j-1}$, and remainder term $\|R^{(k)}_{i,v}\| \leq C_k t^k h^{k+1}$.
Plugging the above inequalities into~\eqref{eq:pol_dec_2} yields
\begin{align}\label{eq:pol_dec_decentre}
x-x_0 = \hat{\pi}_{i,v}(x-x_0) + \mathbb{T}^{(2)}_{i,v}\left(\tens{\hat{\pi}_{i,v}(x-x_0)}{2} \right ) + \sum_{j=3}^k \mathbb{T}^{(j)}_{i,v}(\tens{\hat{\pi}_{i,v}(x-x_0)}{j}) + R^{(k),'}_{i,v}(x-x_0),
\end{align}
with $\mathbb{T}^{(2)}_{i,v} = \widetilde{\mathbb{T}}^{(2)}_{i,v} - \hat{\pi}_{i,v} \circ \widetilde{\mathbb{T}}^{(2)}_{i,v}$, $\norm{\mathbb{T}^{(j)}_{i,v}}_{\mathrm{op}} \leq C_k t^{j-1}$, and $\norm{R^{(k),'}_{i,v}(x-x_0)} \leq C_k t^k h^{k+1}$.
Then, according to Lemma \ref{lem:hausdorff_and_covering}, there exists $y_0 \in \ball(X_i, \frac{8}{7 \times 4}h) \cap M$ such that $\norm{y_0-x_0} \leq \varepsilon_1$, where $\varepsilon_1$ is defined by~\eqref{eq:bound_hausdorff_patch}.
We further have
\begin{align*}
\norm{v - \hat{\pi}_{i}(y_0-X_i)} & \leq \varepsilon_1 + \norm{\hat{\Psi}_i(v) - (X_i + v)} \\
& \leq \varepsilon_1 + \norm{\sum_{j=2}^{k-1} \hat{\mathbb{T}}_{i}^{(j)}(\tens{v}{j})} \\
& \leq \varepsilon_1 + h/16 \\
& \leq h/8,
\end{align*}
since $\hat{t}h \leq \frac{1}{4}$, provided that $\varepsilon_1 \leq h/16$ (satisfied for $n$ large enough).
Next, if $z \in \ball \left (y_0,\frac{h}{8} \right ) \cap M$, we have
\begin{align*}
\norm{\hat{\pi}_{i}(z-X_i) - v } & \leq \norm{\hat{\pi}_{i}(z-y_0)} + \norm{v - \hat{\pi}_{i}(y_0-X_i)} \\
& \leq h/4,
\end{align*}
so that, writing $x_z:= \hat{\Psi}_i(\hat{\pi}_{i}(z-X_i))$, it holds $\norm{z-x_z} \leq \varepsilon_1$ and~\eqref{eq:pol_dec_decentre} applies.
Next, provided $C_kth < 1/4$ and $C_kt \geq t_*$ (satisfied whenever $n$ is large enough), Lemma~\ref{lem:pol_expression} yields that
\begin{multline*}
x_z - x_0 - \left ( \hat{\pi}_{i,v}(x_z-x_0) + \mathbb{T}^{(2)}_{i,v}(\tens{\hat{\pi}_{i,v}(x_z-x_0)}{2}) + \sum_{j=3}^k \mathbb{T}^{(j)}_{i,v}(\tens{\hat{\pi}_{i,v}(x_z-x_0)}{j}) \right ) \\
= \sum_{j=1}^k \mathbb{T}^{(j),'}_{i,v}(\tens{\pi_{y_0}^*(z-y_0)}{j}) + R^{(k)}_{y_0}(x_z-x_0),
\end{multline*}
so that
\begin{align*}
\norm{\sum_{j=1}^k \mathbb{T}^{(j),'}_{i,v}(\tens{\pi_{y_0}^*(z-y_0)}{j})} = \norm{R^{(k),'}_{i,v}(x_z-x_0) - R^{(k)}_{y_0}(x_z-x_0)} \leq C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1,
\end{align*}
according to \eqref{eq:pol_dec_decentre} and Lemma~\ref{lem:pol_expression}, since $t^kh \leq C_{k,d,\tau_{\min},\mathbf{L}}$.
Combining the expansion~\eqref{eq:pol_dec_1} with the inclusion $\ball_{T_{y_0}M}(0,h/16) \subset \pi^*_{y_0} \left ( \ball(y_0,h/8) \cap M - y_0 \right )$ from~\cite[Lemma~2]{Aamari19b} then entails
\begin{align*}
\norm{\sum_{j=1}^k \mathbb{T}^{(j),'}_{i,v}(\tens{\pi^*_{y_0}(w)}{j})} \leq C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1,
\end{align*}
for all $w \in \ball_{T_{y_0}M}(0,h/16)$. Proceeding as in~\cite[Proof of Theorem~2]{Aamari19b}, we get
\begin{align*}
\norm{\mathbb{T}^{(1),'}_{i,v}}_{\mathrm{op}} & \leq C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1 h^{-1}, \\
\text{and~}\norm{\mathbb{T}^{(2),'}_{i,v}}_{\mathrm{op}} & \leq C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1 h^{-2}.
\end{align*}
In turn, following~\cite[Proof of Theorem~4]{Aamari19b} entails
\begin{align*}
\norm{\hat{\mathbb{T}}^{(2)}_{i,v} \circ \hat{\pi}_{i,v} - \mathbb{T}^{(2),*}_{y_0} \circ \pi^*_{y_0}}_{\mathrm{op}} \leq C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1 h^{-2}.
\end{align*}
Since $\II_{y_0} = \mathbb{T}^{(2),*}_{{y_0}}$ (\cite[Lemma~2]{Aamari19b}), we deduce that
\begin{align}\label{eq:sup_curv_1}
\max_{1 \leq i \leq n} \max_{v \in \ball_{\hat{T}_i}(0,h/4)} \norm{\hat{\mathbb{T}}^{(2)}_{i,v} \circ \hat{\pi}_{i,v}}_{\mathrm{op}} \leq \max_{y \in M} \norm{\II_y}_{\mathrm{op}} + C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1 h^{-2}.
\end{align}
Conversely, since $X_1, \hdots, X_n$ is an $(h/4)$-covering of $M$ on the probability event of Lemma~\ref{lem:hausdorff_and_covering}, we deduce that for all $y \in M$, there exists $i_0 \in \{1,\ldots,n\}$ such that $\|X_{i_0} - y \| \leq h/4$. In particular, we have
\begin{align*}
v:= \hat{\pi}_{i_0}(y - X_{i_0}) \in \ball_{\hat{T}_{i_0}}(0,h/4).
\end{align*}
Proceeding as above then leads to
\begin{align*}
\norm{\hat{\mathbb{T}}^{(2)}_{i_0,v} \circ \hat{\pi}_{i_0,v} - \II_y \circ \pi^*_y}_{\mathrm{op}} \leq C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1 h^{-2},
\end{align*}
so that
\begin{align}\label{eq:sup_curv_2}
\max_{y \in M} \norm{\II_y}_{\mathrm{op}} \leq \max_{1 \leq i \leq n} \max_{v \in \ball_{\hat{T}_i}(0,h/4)} \norm{\hat{\mathbb{T}}^{(2)}_{i,v} \circ \hat{\pi}_{i,v}}_{\mathrm{op}} + C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1 h^{-2}.
\end{align}
Combining~\eqref{eq:sup_curv_1} and~\eqref{eq:sup_curv_2} yields that for $n$ large enough,
\begin{align*}
\bigl| \hat{R}_\ell - R_\ell(M) \bigr| \leq R_\ell(M)^2 C_{k,d,\tau_{\min},\mathbf{L}} \varepsilon_1 h^{-2}
,
\end{align*}
which concludes the proof.
\end{proof}
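The final rate can be read off from the exponents involved: $\varepsilon_1 \propto (\log n/n)^{k/d}$, $h \propto (\log n/n)^{1/d}$, so the bound $\varepsilon_1 h^{-2}$ scales as $(\log n/n)^{(k-2)/d}$. A trivial exact-arithmetic check (the helper `rate_exponent` is ours, for illustration only):

```python
from fractions import Fraction

def rate_exponent(k, d):
    # eps_1 ~ (log n / n)^(k/d) and h ~ (log n / n)^(1/d), hence the final
    # bound eps_1 * h^(-2) scales as (log n / n)^((k-2)/d)
    return Fraction(k, d) - 2 * Fraction(1, d)

assert rate_exponent(3, 1) == Fraction(1, 1)
assert all(rate_exponent(k, d) == Fraction(k - 2, d)
           for k in range(3, 8) for d in range(1, 5))
```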
\subsection{Proof of Lemma~\ref{lem:pol_expression}}\label{sec:proof_lem_pol_expression}
\begin{proof}[Proof of Lemma~\ref{lem:pol_expression}] We follow the proof of~\cite[Lemma~3]{Aamari19b}. Without loss of generality we take $y_0=0$, so that $\|y\| \leq 3h/2$. Let $z' = z-z_0$, so that $\|z'\| \leq h/2$. We write
\begin{align*}
& x-x_0 - \pi(x-x_0) - \sum_{j=2}^{k}{ \mathbb{T}^{(j)}(\tens{\pi(x-x_0)}{j})} \\
& \qquad = y + z' - \pi(y+z') - \sum_{j=2}^{k}\mathbb{T}^{(j)} (\tens{ (\pi(y) + \pi(z')) }{j}) \\
& \qquad = y + z' - \pi(y+z') - \sum_{j=2}^{k} \left [ \mathbb{T}^{(j)}(\tens{\pi(y)}{j}) + \sum_{r=0}^{j-1} \binom{j}{r} \mathbb{T}^{(j)} \left ( \tens{\pi(y)}{r} \otimes \tens{\pi(z')}{j-r} \right ) \right ].
\end{align*}
Since, for any $j \geq 2$ and $r \in \{0,\ldots,j-1\}$,
\begin{align*}
\norm{ \mathbb{T}^{(j)} \left ( \tens{\pi(y)}{r} \otimes \tens{\pi(z')}{j-r} \right )} & \leq t^{j-1} (3h/2)^{r} (2 \sigma)^{j-r} \\
& \leq C_k \sigma t^{j-1}h^{j-1} \leq C_k \sigma,
\end{align*}
we may write
\begin{multline}\label{eq:dec_pol_noise}
x-x_0 - \pi(x-x_0) - \sum_{j=2}^{k}{ \mathbb{T}^{(j)}(\tens{\pi(x-x_0)}{j})} \\
= y - \pi(y) - \sum_{j=2}^{k} \mathbb{T}^{(j)}(\tens{\pi(y)}{j}) + R^{(k),'}(x-x_0),
\end{multline}
where $\norm{R^{(k),'}(x-x_0)} \leq C_k \sigma$. Next,~\eqref{eq:true_pol_decomposition} entails
\begin{align*}
y = \pi_{y_0}^*&(y) + \mathbb{T}^{(2),*}_{y_0}( \tens{\pi_{y_0}^*(y)}{2}) + \hdots + \mathbb{T}_{y_0}^{(k-1),*}(\tens{\pi^*_{y_0}(y)}{k-1})
\\
&+ R^{(k),''}_{y_0}(y),
\end{align*}
with $\| R^{(k),''}_{y_0}(y) \| \leq C_{k,d,\tau_{\min}, \mathbf{L}} t_*^{k-1} h^k$. Denoting by
\[
{P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) := \pi^*_{y_0}(y) + \sum_{r=2}^{k-1} \mathbb{T}_{y_0}^{(r),*}(\tens{\pi^*_{y_0}(y)}{r}),
\]
we deduce that
\begin{multline*}
y - \pi(y) - \sum_{j=2}^{k}{ \mathbb{T}^{(j)}(\tens{\pi(y)}{j})} = \\
{P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) + R^{(k),''}_{y_0}(y) - \pi \left ( {P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) + R^{(k),''}_{y_0}(y) \right ) \\
- \sum_{j=2}^{k} \mathbb{T}^{(j)} \left ( \tens{ \pi \left [ \left ({P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) + R^{(k),''}_{y_0}(y) \right )\right ]}{j} \right )
.
\end{multline*}
Note that
$$
\norm{\pi(R^{(k),''}_{y_0}(y))} \leq \norm{R^{(k),''}_{y_0}(y)} \leq C_{k,d,\tau_{\min}, \mathbf{L}} t_*^{k-1} h^k
.
$$
Next, since $\|y\| \leq 3h/2$, it holds
\begin{align*}
\norm{ {P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) } & \leq \sum_{r=1}^{k-1} t_*^{r-1} \left ( \frac{3h}{2} \right )^r \\
& \leq \frac{3h}{2} \frac{1}{1-\frac{3t_*h}{2}} \leq 3h,
\end{align*}
so that, for all $j \in \{2,\ldots,k\}$,
\begin{multline*}
\left\Vert
\mathbb{T}^{(j)} \left ( \tens{ \pi \left [ \left ( {P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) + R^{(k),''}_{y_0}(y) \right ) \right ]}{j} \right)
-
\mathbb{T}^{(j)} \left ( \tens{ \pi \left [ \left ( {P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) \right ) \right ]}{j} \right)
\right\Vert \\
\begin{aligned}[t]
& \qquad \leq t^{j-1} \sum_{r=1}^j \binom{j}{r} \norm{R^{(k),''}_{y_0}}^r (3h)^{j-r} \\
& \qquad \leq C_{k,d,\tau_{\min}, \mathbf{L}} t^{j-1}h^j\max_{1 \leq r \leq j} t_*^{(k-1)r}h^{(k-1)r} \\
& \qquad \leq C_{k,d,\tau_{\min}, \mathbf{L}} t^k h^{k+1}.
\end{aligned}
\end{multline*}
Thus, we may write
\begin{multline*}
y - \pi(y) - \sum_{j=2}^{k}{ \mathbb{T}^{(j)}(\tens{\pi(y)}{j})} = {P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) - \pi \left ( {P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y)) \right ) \\
- \sum_{j=2}^{k} \mathbb{T}^{(j)} \left ( \tens{ \pi \left [ \left ( {P}^{*,(1:k-1)}_{y_0}(\pi^*_{y_0}(y))\right ) \right ]}{j} \right ) + R^{(k),'''}_{y_0}(y),
\end{multline*}
where $\norm{R^{(k),'''}_{y_0}(y)} \leq C_{k,d,\tau_{\min}, \mathbf{L}} h^k (t_*^{k-1} + t^k h)$. Lastly, for $j \in \{2,\ldots,k\}$ and $r_1, \hdots, r_j \in \{1,\ldots,k-1\}$ such that $\sum_{s=1}^{j} r_s \geq k+1$, we have
\begin{align*}
\norm{\mathbb{T}^{(j)} \left ( \bigotimes_{s=1}^j \pi \left ( \mathbb{T}_{y_0}^{(r_s),*} \left( \tens{\pi^*_{y_0}(y)}{r_s}\right ) \right ) \right )} & \leq t^{j-1} \prod_{s=1}^j t_*^{r_s-1}h^{r_s} \\
& \leq (th)^{\left ( \sum_{s=1}^j r_s \right )-1} h \leq t^k h^{k+1},
\end{align*}
where $\mathbb{T}_{y_0}^{(1),*} = \pi_{y_0}^*$, with a slight abuse of notation. Hence, it holds
\begin{multline*}
y - \pi(y) - \sum_{j=2}^{k}{ \mathbb{T}^{(j)}(\tens{\pi(y)}{j})} =
( \pi^*_{y_0} - \pi\circ \pi^*_{y_0})(y) + \mathbb{T}_{y_0}^{(2),*}(\tens{\pi_{y_0}^*(y)}{2}) - \pi \left ( \mathbb{T}_{y_0}^{(2),*}(\tens{\pi^*_{y_0}(y)}{2}) \right ) \\
- \mathbb{T}^{(2)} \left ( \tens{\left ( \pi \circ \pi_{y_0}^*(y) \right )}{2} \right ) + \sum_{j=3}^k \mathbb{T}^{(j),'}_{y_0} \left ( \tens{\pi_{y_0}^*(y)}{j} \right ) + R^{(k),''''}_{y_0}(y),
\end{multline*}
where $\norm{R^{(k),''''}_{y_0}(y)} \leq C_{k,d,\tau_{\min}, \mathbf{L}} h^k (t_*^{k-1} + t^k h)$. Plugging the above equation into \eqref{eq:dec_pol_noise} gives the result.
\end{proof}
\section{Proofs of \secref{sdr}} \label{sec:proofsdr}
\subsection{Comparing Reach, Weak Feature Size and Spherical Distortion Radius}
\label{sec:proof-compare-reach-sdr}
Let us prove \prpref{interpolate}.
\begin{proof}[Proof of \prpref{interpolate}]
The monotonicity follows trivially from the definition, and since $\sdr_0(K,\d_K) = \tau(K)$ by~\cite[Theorem~1]{boissonnat2019reach}, there immediately holds $\sdr_\delta(K,\d_K) \geq \tau(K)$ for any $\delta \geq 0$.
Now take $\delta \leq \sqrt{2(D+1)/D}\wfs(K)$, and let $z$ be a critical point of $K$, so that $z \in \conv \Gamma$, where $\Gamma := \{x \in K~|~\|x-z\| = \d(z,K)\}$. By Jung's theorem~\cite[Theorem~2.10.41]{Federer69}, there holds
$$
\diam(\Gamma) \geq \sqrt{\frac{2(D+1)}{D}} \rad(\Gamma) = \sqrt{\frac{2(D+1)}{D}} \d(z,K) \geq \sqrt{\frac{2(D+1)}{D}} \wfs(K) \geq \delta
$$
so that there exist two points $x,y \in \Gamma$ such that $\|x-y\| \geq \delta$. Furthermore, since the interior of $\ball(z,\wfs(K))$ contains no point of $K$, there holds
$$
\d_K(x,y) \geq \d_{\mathcal{S}(\wfs(K))}(x,y) > \d_{\mathcal{S}(r)}(x,y),\text{~for all~} r > \wfs(K),
$$
so that indeed $\sdr_\delta(K,\d_K) \leq \wfs(K)$.
\end{proof}
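Jung's inequality $\diam(\Gamma) \geq \sqrt{2(D+1)/D}\,\rad(\Gamma)$ used above is tight for the regular simplex, whose circumradius with unit edge length is $\sqrt{D/(2(D+1))}$ (a classical fact, stated here as an assumption); a one-line numerical check:

```python
import math

# Jung's bound is attained by the regular simplex: in R^D, a regular simplex
# with unit edge length has circumradius sqrt(D / (2 (D + 1)))
for D in range(1, 10):
    diam = 1.0
    rad = math.sqrt(D / (2.0 * (D + 1)))
    # diam >= sqrt(2(D+1)/D) * rad, with equality in this extremal case
    assert abs(diam - math.sqrt(2.0 * (D + 1) / D) * rad) < 1e-12
```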
\subsection{Stability Properties of the Spherical Distortion Radius}
\label{sec:proof-sdr-properties}
We now move to the proofs of the stability properties of the SDR. As a first step, we will need the following lemma on geodesic distances over spheres.
\begin{lemma} \label{lem:dstech} Let $r,\varepsilon > 0$ and take $x,y,a,b \in K$ such that $\|x-y\| < 2r$ and
$$
\|a-b\| \leq \left(1+\frac{A\varepsilon}{r}\right) \|x-y\|
$$
for some $A > 0$. For all $\lambda > 0$, define
$$
\zeta_\lambda = \max\{\frac{192 r^3}{\|a-b\|^3}(\lambda + A\pi), 4A \}.
$$
Then, for all $\zeta \geq \zeta_\lambda$ such that $\zeta \varepsilon \leq r$, there holds
\[
\d_{\mathcal{S}(r+ \zeta \varepsilon)}(a,b) \leq \d_{\mathcal{S}(r)}(x,y) - \lambda \varepsilon.
\]
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:dstech}] Notice that, setting $\rho := \|x-y\|$,
\[
\d_{\mathcal{S}(r)}(x,y) = 2 r \arcsin\(\frac{\rho}{2r}\) = \rho \times \varphi(2r/\rho)~~\text{with}~~\varphi(u) := u \arcsin(1/u).
\]
The map $\varphi$ is decreasing on $[1,\infty)$ and, using the series expansion
\[
\arcsin(s) = \sum_{n = 0}^\infty \frac{(2n)!}{2^{2n}n!^2(2n+1)} s^{2n+1},
\]
we find that
$$
\varphi'(u) = - \sum_{n = 1}^\infty \frac{(2n)! \times 2n }{2^{2n}n!^2(2n+1)}\frac{1}{u^{2n+1}} \leq - \frac{1}{3u^3}.
$$
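For the reader's convenience, note that $\varphi'$ also admits the closed form
$$
\varphi'(u) = \arcsin(1/u) - \frac{1}{\sqrt{u^2-1}}, \qquad u > 1,
$$
which is indeed negative, since $\arcsin(s) \leq s/\sqrt{1-s^2}$ for $s \in (0,1)$.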
Notice furthermore that, by assumption
\begin{align*}
\frac{2(r+\zeta\varepsilon)}{\|a-b\|} &\geq \frac{2(r+\zeta\varepsilon)}{(1+A\varepsilon/r)\|x-y\|} = \frac{1+\zeta\varepsilon/r}{1+A\varepsilon/r} \frac{2r}{\|x-y\|} \\
&\geq \left(1+\frac{\zeta\varepsilon}{2r}\right) \frac{2r}{\|x-y\|}
\end{align*}
where we used that $A \leq \zeta/4$, $\zeta \varepsilon \leq r$, and that $(1+u)/(1+u/4) \geq 1+u/2$ for $|u| \leq 2$.
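The last elementary inequality can be checked directly: for $u \in [0,2]$,
$$
(1+u) - \left(1+\frac{u}{2}\right)\left(1+\frac{u}{4}\right) = \frac{u}{4} - \frac{u^2}{8} = \frac{u(2-u)}{8} \geq 0.
$$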
Now, as $\varphi \leq \pi/2$ and $|\varphi'|$ is decreasing, we can write
\begin{align*}
\d_{\mathcal{S}(r+\zeta\varepsilon)}(a,b)
&\leq
\frac{A\varepsilon}{r}\|x-y\| \varphi\left(2(r+\zeta\varepsilon)/\|a-b\|\right)+ \|x-y\|\varphi\left(2(r+\zeta\varepsilon)/\|a-b\|\right) \\
&\leq A\pi\varepsilon+ \d_{\mathcal{S}(r)}(x,y)
\\
&\hspace{3em}
-\|x-y\| \times | \varphi' | \(\frac{2(r+\zeta\varepsilon)}{\|a-b\|}\)\times \left(\frac{2(r+\zeta\varepsilon)}{\|a-b\|} -\frac{2r}{\|x-y\|} \right) \\
&\leq \d_{\mathcal{S}(r)}(x,y)+A\pi\varepsilon-\frac{\|a-b\|^3}{3(2(r+\zeta\varepsilon))^3} \zeta\varepsilon\\
&\leq \d_{\mathcal{S}(r)}(x,y)+\left(A\pi-\frac{\|a-b\|^3}{192 r^3} \zeta\right)\varepsilon,
\end{align*}
and using $\zeta \geq \zeta_\lambda$ ends the proof.
\end{proof}
We are now in position to prove \prpref{stab1} and \thmref{lip}.
\begin{proof}[Proof of \prpref{stab1}] If $\mathrm{r}_1 = \infty$ there is nothing to show. Otherwise, notice that, since $\mathrm{r}_1 \geq \delta_0/2$ by definition, there holds
\[
\xi(R) \geq \max\{\frac{192 R^3}{\delta_0^3}\(1+\pi\frac{2R}{\delta_0}\), \frac{8 R}{\delta_0}\} \geq 1
\]
for all $R > \mathrm{r}_1$. Now, since $\xi(\mathrm{r}_1) \Upsilon < \mathrm{r}_1$, one can find $R > \mathrm{r}_1$ such that $\xi(R) \Upsilon < R$. By definition of $\mathrm{r}_1$, there exist $x,y \in K'$ such that $\delta+2\varepsilon \leq \|x-y\| < 2R$ and $\d_{\mathcal{S}(R)}(x,y) < \d'(x,y)$.
Now, let $a,b \in K$ be closest points (in Euclidean distance) to $x$ and $y$ respectively, such that $\d(a,b) = \d\(\pr_K(\{x\}), \pr_K(\{y\})\)$. Then
\[
\delta \leq \|a-b\| \leq \|x-y\|+2\varepsilon < 2R + 2\Upsilon \leq 2(R+\xi(R)\Upsilon)
\]
and
\[
\|a-b\| \leq \|x-y\|+2\varepsilon \leq \left(1 + \frac{2\Upsilon R}{\delta_0 R}\right) \|x-y\|.
\]
We can now apply \lemref{dstech} with $A = 2R/\delta_0$ and $\lambda = 1$ to find that
\begin{align*}
\d(a,b)
&\geq
\frac{1}{1+\nu} \d'(x,y)
\\
&> \frac{1}{1+\nu}\d_{\mathcal{S}(R)}(x,y)
\\
&\geq
\frac{1}{1+\nu}\left(\d_{\mathcal{S}(R+\xi(R)\Upsilon)}(a,b) + \Upsilon\right)
\\
&\geq
\frac{1+\Upsilon/\delta}{1+\nu} \d_{\mathcal{S}(R+\xi(R)\Upsilon)}(a,b),
\end{align*}
where the last inequality uses that $\d_{\mathcal{S}(R+\xi(R)\Upsilon)}(a,b) \geq \|a-b\| \geq \delta$. All in all, since $\Upsilon \geq \delta\nu$, we have $\d(a,b) > \d_{\mathcal{S}(R+\xi(R)\Upsilon)}(a,b)$, so that $\sdr_{\delta}(K,\d) < R + \xi(R)\Upsilon$. Letting $R$ tend to $\mathrm{r}_1$ yields the result.
\end{proof}
\begin{proof}[Proof of \thmref{lip}] We take $\varepsilon > 0$ such that
$$\varepsilon < C_0\varepsilon_0, ~~\varepsilon < (\delta_1-\delta_0)/2,~~\text{and}~~\varepsilon < r_0/L_0,$$
and take $\delta \in [\delta_0,\delta_1-\varepsilon)$. We write $r_\delta := \sdr_\delta(K,\d)$ and $r_{\delta+\varepsilon} := \sdr_{\delta+\varepsilon}(K,\d)$ for short. Recall that $r_\delta \leq r_{\delta+\varepsilon}$. Now take $r \leq r_{\delta+\varepsilon} - L_0\varepsilon$, and two points $x,y \in K$ such that $\delta \leq \|x-y\| < 2r$ (if there are none, then $r \leq r_\delta$ automatically). If $\|x-y\| \geq \delta + \varepsilon$, then $\d(x,y) \leq \d_{\mathcal{S}(r)}(x,y)$ because $r \leq r_{\delta+\varepsilon}$. If now $\|x-y\| < \delta+\varepsilon$, since $\|x-y\| \leq \Delta_0$, we can use \assref{spread} and find a point $a \in K$ such that $\|a-y\| \leq \varepsilon/C_0$ and $\|x-a\| \geq \|x-y\|+ \varepsilon \geq \delta+\varepsilon$. Now, since $r + L_0 \varepsilon \leq r_{\delta+\varepsilon}$, it holds $\d(x,a) \leq \d_{\mathcal{S}(r+L_0\varepsilon)}(x,a)$. Furthermore, notice that
$$
\|x-a\| \leq \|x-y\| + \frac1{C_0}\varepsilon \leq \left(1 + \frac{r_1 \varepsilon}{C_0 \delta_0 r}\right) \|x-y\|
.
$$
Using \assref{subeuc} and \lemref{dstech} with $A = r_1 /(C_0 \delta_0)$ and $\lambda = C_1/C_0$, we find
\begin{align*}
\d(x,y) \leq \d(x,a) + \d(a,y) \leq \d_{\mathcal{S}(r+L_0\varepsilon)}(x,a)+ \frac{C_1}{C_0} \varepsilon \leq \d_{\mathcal{S}(r)}(x,y)
,
\end{align*}
so that in the end $r \leq r_\delta$. Taking $r = r_{\delta+\varepsilon} -L_0\varepsilon$ yields that $r_{\delta+\varepsilon} \leq r_{\delta} + L_0\varepsilon$, ending the proof.
\end{proof}
Finally, \corref{stab} follows as a direct corollary of \prpref{stab1}.
\begin{proof}[Proof of \corref{stab}] Since $\xi_0 \varepsilon \leq 2\sdr_{\delta_1}(K,\d)$, the radius $\sdr_{\delta_1}(K,\d)$ is in particular finite so that, according to \prpref{stab1}, $\sdr_{\delta}(K',\d') \leq 2 \sdr_{\delta_1}(K,\d)$ and, consequently, $\xi_1 \Upsilon \leq \sdr_{\delta}(K',\d')$ and $\xi_2 \Upsilon \leq \sdr_{\delta+2\varepsilon}(K,\d)$, where $\xi_1 = \xi(\sdr_{\delta}(K',\d'))$ and $\xi_2 = \xi( \sdr_{\delta+2\varepsilon}(K,\d))$. Applying \prpref{stab1} twice -- which is possible, since $\Upsilon \geq ((\delta-2\varepsilon)\nu) \vee \varepsilon$ --, we thus find
$$
\sdr_{\delta-2\varepsilon}(K,\d)-\xi_1 \Upsilon \leq \sdr_{\delta}(K',\d') \leq \sdr_{\delta+2\varepsilon}(K,\d)+\xi_2 \Upsilon,
$$
and we conclude by noticing that both $\xi_1$ and $\xi_2$ are less than $\xi_0$.
\end{proof}
\begin{proof}[Proof of \thmref{stab}] Using \corref{stab} and \thmref{lip}, one finds that
\begin{align*}
\sdr_{\delta}(K',\d')
&\leq
\sdr_{\delta+2\varepsilon}(K,\d)+\xi_0 \Upsilon
\\
&\leq
\sdr_{\delta}(K,\d)+2L_0 \varepsilon + \xi_0 \Upsilon
\\
&\leq
\sdr_{\delta}(K,\d)+ \zeta_0 \Upsilon,
\end{align*}
and likewise for the lower bound.
\end{proof}
\section{Proofs of \secref{metric}} \label{sec:proofmetric}
\subsection{Minimax Lower Bound for Metric Learning}
\label{sec:proof-lower-bound-metric}
We now turn towards the proof of \thmref{metriclb}.
It relies on an adaptation of Le Cam's classical two-point argument~\cite{yu1997assouad} to the asymmetric loss $\ell_\infty$.
\begin{lemma} \label{lem:lecammet} Let $x,y \in \mathbb{R}^D$ and let $M_0$ and $M_1$ be two submanifolds of $\mathbb{R}^D$ such that $x, y \in M_0 \cap M_1$ and the uniform distribution $P_0$ (resp. $P_1$) on $M_0$ (resp. $M_1$) is in $\distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$. Then if $\d_{M_0}(x,y) \leq \d_{M_1}(x,y)$,
\beq
\inf_{\wh\d} \sup_{P \in \mathcal{P}^k} \mathbb{E}_{P^{\otimes n}}[\ell_\infty(\wh \d|\d_M)] \geq \frac12 \times \left|1- \frac{\d_{M_0}(x,y) }{\d_{M_1}(x,y)}\right| \times (1 - \tv(P_0^{\otimes n},P_1^{\otimes n})). \label{eq:lecammet}
\end{equation}
\end{lemma}
\begin{proof}[Proof of \lemref{lecammet}] For brevity, we let $\mathcal{R}_n$ denote the minimax risk appearing in the left-hand side of~\eqref{eq:lecammet}. First, we write
\begin{align*}
\mathcal{R}_n
&\geq
\inf_{\wh \d} \sup_{P \in \{P_0, P_1\}} \mathbb{E}_{P^{\otimes n}}[\ell_\infty(\wh \d | \d_M)] \\
&\geq
\inf_{\wh \d} \sup_{P \in \{P_0, P_1\}} \mathbb{E}_{P^{\otimes n}} \left[\left|1- \frac{\wh \d(x,y) }{\d_M(x,y)}\right|\right] \\
&\geq \frac12 \inf_{\wh \d}\{\mathbb{E}_{P_0^{\otimes n}} \left[ \left|1- \frac{\wh \d(x,y) }{\d_{M_0}(x,y)}\right|\right] +\mathbb{E}_{P_1^{\otimes n}} \left[ \left|1- \frac{\wh \d(x,y) }{\d_{M_1}(x,y)}\right|\right] \}\\
&\geq \frac12 \inf_{\wh \d} \mathbb{E}_{P_0^{\otimes n}}\left[\left(\left|1- \frac{\wh \d(x,y) }{\d_{M_0}(x,y)}\right| + \left|1- \frac{\wh \d(x,y) }{\d_{M_1}(x,y)}\right|\right) \times \left(1 \wedge \frac{\diff P_1^{\otimes n}}{\diff P_0^{\otimes n}}\right)\right].
\end{align*}
But now, using that $\d_{M_0}(x,y) \leq \d_{M_1}(x,y)$, a simple computation shows that the functional
$$
\delta \mapsto \left|1- \frac{\delta}{\d_{M_0}(x,y)}\right| + \left|1- \frac{\delta}{\d_{M_1}(x,y)}\right|
$$
is minimal for $\delta = \d_{M_0}(x,y)$ so that
\begin{align*}
\mathcal{R}_n &\geq \frac12 \mathbb{E}_{P_0^{\otimes n}}\left[\left|1- \frac{\d_{M_0}(x,y) }{\d_{M_1}(x,y)}\right| \times \left(1 \wedge \frac{\diff P_1^{\otimes n}}{\diff P_0^{\otimes n}}\right)\right] \nonumber\\
&= \frac12 \times \left|1- \frac{\d_{M_0}(x,y) }{\d_{M_1}(x,y)}\right| \times (1 - \tv(P_0^{\otimes n},P_1^{\otimes n})),
\end{align*}
which ends the proof.
\end{proof}
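For completeness, the minimality claim used in the proof above can be checked directly: writing $d_0 := \d_{M_0}(x,y) \leq d_1 := \d_{M_1}(x,y)$, the functional $\delta \mapsto |1-\delta/d_0| + |1-\delta/d_1|$ equals
$$
\begin{cases}
2 - \delta(1/d_0+1/d_1) & \text{on~} [0,d_0],\\
\delta(1/d_0-1/d_1) & \text{on~} [d_0,d_1],\\
\delta(1/d_0+1/d_1) - 2 & \text{on~} [d_1,\infty),
\end{cases}
$$
which is nonincreasing on $[0,d_0]$ and nondecreasing afterwards, hence minimal at $\delta = d_0$, with minimal value $1 - d_0/d_1$.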
\begin{proof}[Proof of \thmref{metriclb}]
Without loss of generality, we carry out the analysis in $\mathbb{R}^{d+1} \simeq \mathbb{R}^{d+1} \times \{0\}^{D-(d+1)} \subset \mathbb{R}^D$.
\subsubsection{Submanifolds Construction} We let $M_0 \subset \mathbb{R}^{d+1}$ be a submanifold of $\mathcal{C}^k_{2\tau_{\min},\mathbf{L}/2}$ that contains the cylinder
$$
\{(s,z) \in \mathbb{R}^{2} \times \mathbb{R}^{d-1}~|~\|s\| = R \text{~and~} \|z\| \leq 3R\}
.
$$
Such a manifold always exists as soon as $R \geq 2 \tau_{\min}$ and $L_j$ is large enough compared to $1/R^{j-1}$.
For instance, one can design $M_0$ as a hypersurface of revolution obtained by patching together the interpolating curves of \lemref{turn-widget}.
In what follows, we denote any $x \in \mathbb{R}^{d+1} = \mathbb{R}^{d} \times \mathbb{R}$ as $x = (w,h) \in \mathbb{R}^{d}\times \mathbb{R}$.
With this notation, we define, for $\varepsilon>0$ and $c>0$ to be chosen later,
$$
\Phi_{\varepsilon}(x) := x + c \varepsilon^{k} K(w/\varepsilon) e_{d+1}~~~\text{where}~~~ e_{d+1} = (0,\dots,0,1) \in \mathbb{R}^{d+1},
$$
where $K(w)$ equals $\exp(-1/(1-\|w\|^2)_{+})$ for $\|w\|<1$ and $0$ otherwise.
For $\varepsilon \leq 1$ and $c$ small enough, $\Phi_{\varepsilon}$ is a diffeomorphism of $\mathbb{R}^{d+1}$ with derivatives bounded up to order $k$.
Using~\cite[Proposition~A.4]{Aamari19}, we get that $M_\varepsilon := \Phi_{\varepsilon}(M_0)$, the image of $M_0$ by $\Phi_{\varepsilon}$, belongs to $\mathcal{C}^k_{\tau_{\min},\mathbf{L}}$ provided that $c$ is small enough (depending on $R$) and $\varepsilon \leq c R$.
Locally around the apex $(0,R) \in \mathbb{R}^{d+1}$, $M_0$ can be seen as the graph of $
\Psi_0(w) := \sqrt{R^2-w_1^2}$, defined on $(-R,R) \times \ball_{\mathbb{R}^{d-1}}(0,3R)$, while $M_\varepsilon$ is the graph of
$$
\Psi_\varepsilon(w) := \Psi_0(w) + c\varepsilon^{k} K(w/\varepsilon).
$$
Finally, we let $\bar\Psi_\varepsilon(w) := (w,\Psi_\varepsilon(w))$ and similarly define $\bar\Psi_0$. We refer to \figref{cyl} for a diagram of the situation.
\begin{figure}[h!]
\centering
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height = 0.7\linewidth,page=1]{Mgeo}
\caption{}
\label{fig:Mgeo0}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering
\includegraphics[height = 0.7\linewidth,page=2]{Mgeo}
\caption{}
\label{fig:Mgeo1}
\end{subfigure}
\caption{(\subref{fig:Mgeo0}) The cylindrical section of $M_0$ used in the proof of \thmref{metriclb}, and (\subref{fig:Mgeo1}) the perturbed submanifold $M_\varepsilon$.}
\label{fig:cyl}
\end{figure}
\subsubsection{Shortest-Path Properties}
In this section, we seek to derive a lower bound on $\left|1 - \d_{M_0}(x,y)/\d_{M_\varepsilon}(x,y)\right|$, so as to apply Lemma~\ref{lem:lecammet}. For this, we will consider well-chosen $x,y \in M_0 \cap M_\varepsilon$ and derive a lower bound on $\d_{M_\varepsilon}(x,y) - \d_{M_0}(x,y)$.
We let $\ell < R$, and we pick $x := \bar\Psi_0(-\ell e_1)$ and $y := \bar\Psi_0(\ell e_1)$ where $e_1 = (1,0,\dots,0) \in \mathbb{R}^d$. By construction, $x$ and $y$ belong to $M_0$.
Furthermore, provided that $\ell \geq \varepsilon$, there holds that $x = \bar\Psi_\varepsilon(-\ell e_1)$ and $y = \bar\Psi_\varepsilon(\ell e_1)$ so that $x$ and $y$ are also in $M_\varepsilon$.
We let $\gamma_\varepsilon : [-1,1] \to M_{\varepsilon}$ be a shortest path in $M_\varepsilon$ between $x$ and $y$, parametrized at constant speed, and define the paths
$$
w_\varepsilon := a_\varepsilon e_1 + b_\varepsilon := \pr_{\mathbb{R}^{d}\times\{0\}}(\gamma_\varepsilon)
,
$$
where $b_\varepsilon \in \{0\} \times \mathbb{R}^{d-1}$. We refer to \figref{ab} for a diagram of the situation. Several observations are in order.
\begin{figure}[h!]
\centering
\begin{subfigure}[b]{0.75\textwidth}
\includegraphics[width = 1\linewidth,page=1]{topMve}
\caption{}
\label{fig:topMve0}
\end{subfigure}
\begin{subfigure}[b]{0.75\textwidth}
\includegraphics[width = 1\linewidth,page=2]{topMve}
\caption{}
\label{fig:topMve1}
\end{subfigure}
\caption{(\subref{fig:topMve0}) Top view of $M_\varepsilon$ and of one of the shortest paths between $x$ and $y$, in blue. The bump of size $\varepsilon$ is represented in light grey.
(\subref{fig:topMve1}) Same view of $M_\varepsilon$ as in (\subref{fig:topMve0}), illustrating the fact that any shortest path must go from left to right (otherwise one could construct a shorter path, through $s_1$ in the figure) and cannot leave the shaded area (otherwise one could construct a shorter path, through $s_2$ in the figure).}
\label{fig:ab}
\end{figure}
\begin{itemize}
\item Since $w_\varepsilon(\pm 1) = \pm\ell e_1$, we have $a_\varepsilon(\pm1) = \pm \ell$ and $b_\varepsilon(\pm1) = 0$. Also, because $\gamma_\varepsilon$ is a minimizing path, $a_\varepsilon$ is nondecreasing, and $\|b_\varepsilon\|_\infty \leq \varepsilon$ (see \figref{ab}).
\item Because $\gamma_\varepsilon$ has constant speed on $[-1,1]$, there holds
\beq \label{eq:gamma}
\|\gamma'_\varepsilon(t)\| = \frac12 \d_{M_\varepsilon}(x,y) \in [A_1 \ell, A_2 \ell],~~~\text{for all}~~~t \in [-1,1],
\end{equation}
with $A_1,A_2$ depending on $R$ only, uniformly on small $\varepsilon$.
\item $a_\varepsilon$ and $b_\varepsilon$ are smooth and $\gamma_\varepsilon = \bar\Psi_\varepsilon(w_\varepsilon)$.
\item Since $M_\varepsilon$ is symmetric with respect to $\{0\} \times \mathbb{R}^d$, the shortest path between $x$ and $y$ may be chosen to be symmetric as well. This entails in particular that $b_\varepsilon$ is even and that $a_\varepsilon$ is odd;
\item As $\gamma_\varepsilon$ has constant speed and has a curvature bounded from above (as a shortest path in a bounded-curvature space), the ratio $\|\gamma''_\varepsilon \| / \|\gamma_\varepsilon'\|^2$ is bounded in sup-norm by a constant depending on $R$ only. Therefore, there exists a constant $B > 0$ depending on $R$ only such that, uniformly on $\varepsilon$ small enough,
\beq \label{eq:abder}
\max\{\|a_\varepsilon'\|_\infty / \ell, \|a_\varepsilon''\|_\infty/ \ell^2,\|b_\varepsilon'\|_\infty/\ell,\|b_\varepsilon''\|_\infty/\ell^2\} \leq B.
\end{equation}
\item By symmetry also, $\gamma_\varepsilon$ crosses the hyperplane $\{0\} \times \mathbb{R}^d$ orthogonally. As a consequence $\inner{\gamma'_\varepsilon(0)}{e_{d+1}} = 0$, $b_\varepsilon'(0) = 0$ and
$$
a_\varepsilon'(0) = \|w_\varepsilon'(0)\| = \| \gamma'_\varepsilon(0) \| \in \left[ A_1 \ell, A_2\ell\right],
$$
where $A_1$ and $A_2$ were introduced in~\eqref{eq:gamma}.
\item Finally, using~\eqref{eq:abder}, we deduce that there exists $C > 0$ depending on $R$ only such that for all $t \in [-1,1]$,
\beq \label{eq:dera}
\begin{cases}
|a_\varepsilon(t) - a_\varepsilon'(0) t | &\leq C \ell^2 t^2, \\
|a_\varepsilon'(t) - a_\varepsilon'(0) | &\leq C \ell^2 t, \\
|a_\varepsilon'(t)a_\varepsilon(t) - a_\varepsilon'(0)^2 t | &\leq C \ell^3 t^2, \\
|b_\varepsilon(t) - b_\varepsilon(0)| &\leq C \ell t.
\end{cases}
\end{equation}
\end{itemize}
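Let us briefly justify~\eqref{eq:dera}. Since $a_\varepsilon$ is odd, $a_\varepsilon(0) = 0$, so Taylor expansions at $0$ combined with~\eqref{eq:abder} give $|a_\varepsilon(t) - a_\varepsilon'(0) t| \leq \|a_\varepsilon''\|_\infty t^2/2 \leq B \ell^2 t^2/2$ and $|a_\varepsilon'(t) - a_\varepsilon'(0)| \leq B \ell^2 |t|$. For the third line, using $|a_\varepsilon(t)| \leq \|a_\varepsilon'\|_\infty |t| \leq B \ell |t|$ and $a_\varepsilon'(0) \leq A_2 \ell$, we write
$$
|a_\varepsilon' a_\varepsilon - a_\varepsilon'(0)^2 t|(t) \leq |a_\varepsilon(t)| \, |a_\varepsilon'(t) - a_\varepsilon'(0)| + a_\varepsilon'(0) \, |a_\varepsilon(t) - a_\varepsilon'(0) t| \leq \left(B^2 + \frac{A_2 B}{2}\right) \ell^3 t^2,
$$
and the fourth line follows from $\|b_\varepsilon'\|_\infty \leq B \ell$, up to enlarging the constant $C$.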
\subsubsection{Perturbative Expansion of the Geodesic Length} We let $\gamma_0(t) := \bar\Psi_0(a_\varepsilon(t) e_{1})$. Although $\gamma_0$ is not parametrized at constant speed, the monotonicity of $a_\varepsilon$ implies that it is a shortest path in $M_0$ between $x$ and $y$, and we get, using~\eqref{eq:abder} and~\eqref{eq:dera}, that for some constant $A_3$ depending on $R$,
\beq \label{eq:gamma0}
\frac12 A_1\ell \leq \|\gamma'_0(t)\| \leq A_3 \ell ~~~~\text{if}~~\ell \leq \frac{A_1}{2C},
\end{equation}
which we will assume henceforth. Furthermore, the velocity of $\gamma_\varepsilon$ reads
\begin{align*}
\gamma_\varepsilon' &= \d\bar\Psi_\varepsilon(w_\varepsilon)[w_\varepsilon'] = \d\bar\Psi_0(w_\varepsilon)[w_\varepsilon'] +c \varepsilon^{k-1} \inner{\nabla K(w_\varepsilon/\varepsilon)}{w_\varepsilon'} e_{d+1} \\
&= w_\varepsilon' + \inner{\nabla \Psi_0(w_\varepsilon)}{w_\varepsilon'}e_{d+1}+c\varepsilon^{k-1} \inner{\nabla K(w_\varepsilon/\varepsilon)}{w_\varepsilon'} e_{d+1} \\
&=
a_\varepsilon' e_1 +b_\varepsilon'
+
\bigl(
\underbrace{\inner{\nabla \Psi_0(a_\varepsilon)}{a_\varepsilon'}}_{:=\nabla_0}
+
\underbrace{c\varepsilon^{k-1}\inner{\nabla K(w_\varepsilon/\varepsilon)}{w_\varepsilon'}
}_{:=\nabla_1}
\bigr)
e_{d+1},
\end{align*}
where we used the fact that $\Psi_0$ depends only on its first variable.
We write the last term as $(\nabla_0 + \nabla_1)e_{d+1}$.
Using that the three terms in the preceding display are pairwise orthogonal, we obtain
\beq \label{eq:devgv}
\|\gamma_\varepsilon'\|^2 = a_\varepsilon'^2 + \|b_\varepsilon'\|^2+(\nabla_0 + \nabla_1)^2 = \underbrace{a_\varepsilon'^2 +\nabla_0^2}_{= \|\gamma_0'\|^2}+ \underbrace{\|b_\varepsilon'\|^2+2 \nabla_0 \nabla_1 + \nabla_1^2}_{:=Q_\varepsilon},
\end{equation}
and it only remains to study the last three terms, denoted by $Q_\varepsilon$. First, notice that using~\eqref{eq:abder}, one can find a constant $D_0$ depending on $R$ such that $Q_\varepsilon \geq -D_0 \varepsilon^2 \ell^2$. Together with~\eqref{eq:gamma0}, this yields that $Q_\varepsilon/\|\gamma_0'\|^2 \geq -1$ for $\varepsilon$ small enough (depending on $R$). Likewise, we can show that $Q_\varepsilon \leq D_1 (\ell^2+\ell^2\varepsilon^2+\varepsilon^4)$, for some constant $D_1$ depending on $R$. This again yields
\beq \label{eq:qe}
\frac{Q_\varepsilon}{\|\gamma_0'\|^2} \leq D_2 ~~~~\text{if}~~~~\varepsilon \leq D_3 \ell,
\end{equation}
for some constants $D_2$ and $D_3$ depending on $R$ only. All in all, we have that $Q_\varepsilon/\|\gamma_0'\|^2 \in [-1,D_2]$. Using that
$$
\sqrt{1+z} \geq \begin{cases}
1 + z~~~~~&\text{if}~~~z \in [-1,0], \\
1 + D_4 z~~~~~&\text{if}~~~ z\in [0,D_2],~\text{with}~~D_4 = \frac1{D_2}(\sqrt{1+D_2}-1),
\end{cases}
$$
we can finally derive from~\eqref{eq:devgv} and~\eqref{eq:qe} the following bound
\begin{align} \label{eq:newdev}
\|\gamma_\varepsilon'\| = \|\gamma_0'\| \sqrt{1+\frac{Q_\varepsilon}{\|\gamma_0'\|^2}} \geq \|\gamma_0'\| + \tau(Q_\varepsilon) Q_\varepsilon,
\end{align}
where
$$\tau(z) := \frac{2}{A_1\ell} \mathbbm{1}_{z < 0} + \frac{D_4}{A_3\ell} \mathbbm{1}_{z \geq 0},
$$
and where we also used~\eqref{eq:gamma0} to bound $1/\|\gamma_0'\|$.
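In detail, if $Q_\varepsilon < 0$, then $\sqrt{1+z} \geq 1+z$ on $[-1,0]$ gives
$$
\|\gamma_\varepsilon'\| \geq \|\gamma_0'\| + \frac{Q_\varepsilon}{\|\gamma_0'\|} \geq \|\gamma_0'\| + \frac{2}{A_1 \ell} Q_\varepsilon,
$$
the last inequality using $\|\gamma_0'\| \geq A_1 \ell/2$ and $Q_\varepsilon < 0$. If $Q_\varepsilon \geq 0$, the bound $\sqrt{1+z} \geq 1 + D_4 z$ gives similarly $\|\gamma_\varepsilon'\| \geq \|\gamma_0'\| + D_4 Q_\varepsilon/\|\gamma_0'\| \geq \|\gamma_0'\| + \frac{D_4}{A_3 \ell} Q_\varepsilon$, since $\|\gamma_0'\| \leq A_3 \ell$ by~\eqref{eq:gamma0}.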
In particular, integrating \eqref{eq:newdev} over $[-1,1]$ yields that
\begin{align*}
\d_{M_\varepsilon}(x,y) \geq \d_{M_0}(x,y) +
\int_{-1}^1 \tau(Q_\varepsilon) Q_\varepsilon
.
\end{align*}
To obtain a more explicit bound, let us now study $Q_\varepsilon$.
For this, first rewrite $\nabla_0$ and $\nabla_1$ more explicitly as
$$
\nabla_0 = -\frac{a_\varepsilon a'_\varepsilon}{\sqrt{R^2-a_\varepsilon^2}}~~~~\text{and}~~~~\nabla_1 = -2c \varepsilon^{k-2}\frac{K(w_\varepsilon/\varepsilon)}{(1-\|w_\varepsilon/\varepsilon\|^2)^2} \inner{w_\varepsilon}{w_\varepsilon'}
.
$$
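The expression of $\nabla_1$ follows from the chain rule: on $\{\|u\| < 1\}$,
$$
\nabla K(u) = -\frac{2u}{(1-\|u\|^2)^2} K(u),
$$
so that $c\varepsilon^{k-1} \inner{\nabla K(w_\varepsilon/\varepsilon)}{w_\varepsilon'} = -2c \varepsilon^{k-2}\frac{K(w_\varepsilon/\varepsilon)}{(1-\|w_\varepsilon/\varepsilon\|^2)^2} \inner{w_\varepsilon}{w_\varepsilon'}$.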
Hence, noticing that $\inner{w_\varepsilon}{w_\varepsilon'} = a_\varepsilon a_\varepsilon'+\inner{b_\varepsilon}{b_\varepsilon'}$, one can write $2\nabla_0 \nabla_1$ as $P_0+P_1$ with
\begin{align*}
\begin{cases}
P_0 &= \varepsilon^{k-2} (a_\varepsilon a'_\varepsilon)^2 T_\varepsilon \\
P_1 &= \varepsilon^{k-2} T_\varepsilon a_\varepsilon a'_\varepsilon \inner{b_\varepsilon}{b'_\varepsilon}
\end{cases}
~~~~ \text{with}~~~T_\varepsilon:= \frac{4c K(w_\varepsilon/\varepsilon)}{\sqrt{R^2-a_\varepsilon^2} (1-\|w_\varepsilon/\varepsilon\|^2)^2}.
\end{align*}
For $\ell \leq A_1/4C$, condition~\eqref{eq:dera} together with $a'_\varepsilon(0) \geq A_1 \ell/2$ imply that
$$
\|w_\varepsilon(t) \| \geq |a_\varepsilon(t)| \geq \varepsilon~~~\text{for all~}|t| \geq t_\varepsilon~~\text{with}~~t_\varepsilon := \frac{4\varepsilon}{A_1 \ell},
$$
so that in particular, $T_\varepsilon(t) = 0$ for $|t| \geq t_\varepsilon$. Furthermore, notice that, provided that $\ell$ is small compared to $R$, $T_\varepsilon$ is bounded by some constant $E > 0$ depending on $R$ only. Using again~\eqref{eq:dera}, we find that for $\ell \leq A_1^2/8C$, there holds
\begin{align}
&(a_\varepsilon' a_\varepsilon)^2(t) \geq \frac{1}{2} a_\varepsilon'(0)^4 t^2 - C^2 \ell^6 t^4 \geq \frac{1}{32} A_1^4 \ell^4 t^2, \label{eq:aaprime}\\
\text{and}~~~~~~~~
&|a_\varepsilon' a_\varepsilon |(t) \leq a_\varepsilon'(0)^2 |t| + C \ell^3 t^2 \leq 5 A_1^2 \ell^2 |t|~~~~~\text{for all}~~t\in [-1,1]. \nonumber
\end{align}
In particular, we find that
\begin{align*}
\int_{-1}^1 |P_1(t)| \diff t
&\leq
5 \varepsilon^{k-2} E A_1^2 \ell^2 \|b_\varepsilon\|_\infty \|b'_\varepsilon\|_\infty \int_{-t_\varepsilon}^{t_\varepsilon} |t| \diff t
\\
&= 5 \varepsilon^{k-2} E A_1^2 \ell^2 \|b_\varepsilon\|_\infty \|b'_\varepsilon\|_\infty t_\varepsilon^2
\\
&\leq
80 B E \|b_\varepsilon\|_\infty \ell^2 \varepsilon^{k}
,
\end{align*}
where we used~\eqref{eq:abder} in the last inequality. On the other hand, letting $t_0 \in (-1,1)$ be a time at which $\|b_\varepsilon(t_0)\| =\|b_\varepsilon\|_\infty$,
notice that
\begin{align*}
\int_{-1}^1 \|b_\varepsilon'\|^2
&=
\int_{-1}^{t_0} \|b_\varepsilon'\|^2 + \int_{t_0}^1 \|b_\varepsilon'\|^2
\\
&\geq
\frac{1}{1+t_0} \left\|\int_{-1}^{t_0} b_\varepsilon' \right\|^2 + \frac{1}{1-t_0} \left\| \int_{t_0}^1 b_\varepsilon'\right\|^2
\\
&=
\left(\frac{1}{1+t_0}+\frac{1}{1-t_0}\right) \|b_\varepsilon(t_0)\|^2 \geq 2 \|b_\varepsilon\|^2_\infty.
\end{align*}
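In the above display, the second line follows from the Cauchy--Schwarz inequality $\| \int_s^t b_\varepsilon' \|^2 \leq (t-s) \int_s^t \|b_\varepsilon'\|^2$ applied on each of the two intervals, and the last one from
$$
\frac{1}{1+t_0} + \frac{1}{1-t_0} = \frac{2}{1-t_0^2} \geq 2.
$$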
Combining the two preceding bounds with the decomposition~\eqref{eq:devgv} of $Q_\varepsilon$, and using that $\nabla_1^2 \geq 0$, thus yields
\beq \label{eq:finaldev}
\int_{-1}^1 \tau(Q_\varepsilon) Q_\varepsilon
\geq
2 \|b_\varepsilon\|_\infty\( \tau_1 \|b_\varepsilon\|_\infty -40 \tau_2 B E \ell^2 \varepsilon^{k}\) + \tau_1 \varepsilon^{k-2} \int_{-1}^1 (a_\varepsilon a'_\varepsilon)^2 T_\varepsilon,
\end{equation}
where $\tau_1$ is the smallest value of $\tau$, and $\tau_2$ its greatest value.
We now distinguish two cases, according to the value of $\|b_\varepsilon(0)\|$:
\begin{itemize}
\item
If $\|b_\varepsilon(0)\| \geq \varepsilon/2$, then $\|b_\varepsilon\|_\infty \geq \varepsilon/2$ and for $\varepsilon$ small enough, we get, noticing that the last term in~\eqref{eq:finaldev} is non-negative,
$$
\int_{-1}^1 \tau(Q_\varepsilon) Q_\varepsilon \geq c_R\varepsilon(\varepsilon/2 - \varepsilon/4)/\ell \geq c_R \varepsilon^2/\ell.
$$
\item
Otherwise, if $\|b_\varepsilon(0)\| \leq \varepsilon/2$, then, using~\eqref{eq:dera}, we find that
$$
\begin{cases}\| b_\varepsilon(t) \| &\leq 3\varepsilon/4, \\
|a_\varepsilon(t)| &\leq \varepsilon/2, \\
\end{cases}
~~~\text{for all~} |t| \leq t^*_\varepsilon~~~\text{with}~~ t^*_\varepsilon := \min\{\frac{\varepsilon}{4C\ell},\frac{\varepsilon}{8A_2\ell},\frac{2A_2}{C\ell}\}.
$$
For $\varepsilon$ small compared to $R$, $t^*_\varepsilon$ is of the form $t^*_\varepsilon = G\varepsilon/\ell$ with $G$ depending on $R$ only. Furthermore, notice that for $|t| \leq t_\varepsilon^*$, there holds $\|w_\varepsilon(t)\|^2 = |a_\varepsilon(t)|^2 + \|b_\varepsilon(t)\|^2 \leq 13 \varepsilon^2/16$.
In particular, $T_\varepsilon$ is lower-bounded on $[-t_\varepsilon^*,t_\varepsilon^*]$ by a constant $H$ depending on $R$ only.
Noticing that $t^*_\varepsilon \leq t_\varepsilon$, we can use the inequality in~\eqref{eq:aaprime} to obtain
\begin{align*}
\int_{-1}^1 (a_\varepsilon a'_\varepsilon)^2 T_\varepsilon
\geq
\frac1{32} A_1^4 \ell^4 H \int_{-t_\varepsilon^*}^{t_\varepsilon^*} t^2 \diff t
=
\frac{1}{48} A_1^4 H G^3 \ell \varepsilon^3.
\end{align*}
Finally, since $z \mapsto z(z-\nu)$ is minimal on $\mathbb{R}_+$ at $z = \nu/2$ with minimal value $-\nu^2/4$, we find the bound
\begin{align*}
\int_{-1}^1 \tau(Q_\varepsilon) Q_\varepsilon \geq c_R \varepsilon^{k+1} - c'_R \ell^3 \varepsilon^{2k} \geq c_R \varepsilon^{k+1},
\end{align*}
provided that $\varepsilon$ is small enough compared to $R$.
\end{itemize}
In both cases, we find that $\int_{-1}^1 \tau(Q_\varepsilon) Q_\varepsilon \geq c_R \varepsilon^{k+1}$. Plugging this bound into the integrated version of~\eqref{eq:newdev} above gives
$$
\d_{M_\varepsilon}(x,y) \geq \d_{M_0}(x,y) + c'_R \varepsilon^{k+1} > \d_{M_0}(x,y).
$$
Finally,~\eqref{eq:gamma} yields $\d_{M_\varepsilon}(x,y) \leq 2A_2 \ell$, and letting $\ell := (1 \vee D_3^{-1}) \varepsilon$, which is allowed by~\eqref{eq:qe}, gives
\beq
\label{eq:vek}
\left|1 - \frac{\d_{M_0}(x,y)}{\d_{M_\varepsilon}(x,y)} \right| \geq c_R \varepsilon^{k}.
\end{equation}
\subsubsection{Concluding with Le Cam's Lemma} We apply \lemref{lecammet} with $M_0$ and $M_1 := M_\varepsilon$ for $\varepsilon$ properly chosen.
Their volumes are bounded from above and below by quantities depending on $R$ and $d$ only, so that the uniform distributions on $M_0$ and $M_\varepsilon$ are in $\distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$ provided that $f_{\min}$ and $f_{\max}$ are respectively small enough and large enough compared to $1/R^d$. Finally, we set $R = 2\tau_{\min}$ and $\varepsilon = (C_{\tau_{\min},d} n)^{-1/d}$. For $n$ large enough so that all the previous conditions hold, \lemref{lecammet} finally yields
$$
\inf_{\wh\d} \sup_{P \in \mathcal{P}^k} \mathbb{E}_{P^{\otimes n}}[\ell_\infty(\wh \d|\d_M)] \geq \frac12 c_{\tau_{\min}} \varepsilon^{k}(1- C_{\tau_{\min},d} n \varepsilon^d) \geq c_{\tau_{\min},d,k} n^{-k/d},
$$
where the total variation was bounded using~\cite[Lemma~7]{Berenfeld22}.
\end{proof}
\subsection{Plug-in Estimation for Metric Learning}
\label{sec:proof-plugin-metric}
We start by giving the proof of \prpref{metric}.
\begin{proof}[Proof of \prpref{metric}]
Let $x,y \in K$. Notice that, since $K \subset (K')^\varepsilon$, there holds trivially that $\d_{(K')^\varepsilon}(x,y) \leq \d_{K}(x,y)$. For the converse inequality, let $\gamma : [0,1] \to \mathbb{R}^D$ be a continuous path in $(K')^\varepsilon$ between $x$ and $y$. Since $\varepsilon < \tau(K)/2$, the closest-point projection on $K$ is well-defined on $(K')^\varepsilon \subset K^{2\varepsilon}$ and we can consider $\gamma_0 = \pr_K \circ \gamma$, which is a continuous path in $K$. For any subdivision $0 = t_0 < t_1 < \dots < t_k = 1$, there holds
$$
\sum_{i=0}^{k-1} \|\gamma_0(t_{i+1}) - \gamma_0(t_i) \| \leq \frac{\tau(K)}{\tau(K)-2\varepsilon} \sum_{i=0}^{k-1} \|\gamma(t_{i+1}) - \gamma(t_i) \|
$$
where we used the fact that $\pr_{K}$ is $ \tau(K)/(\tau(K)-2\varepsilon)$-Lipschitz on $K^{2\varepsilon}$~\citep[Theorem~4.8~(8)]{Federer59}. Taking the supremum over all subdivisions yields
$$
\d_{K}(x,y) \leq L(\gamma_0) \leq \frac{\tau(K)}{\tau(K)-2\varepsilon} L(\gamma)
$$
and then taking the infimum over all continuous paths $\gamma$ finally gives
$$
\d_{(K')^{\varepsilon}}(x,y) \geq \left(1 - \frac{2\varepsilon}{\tau(K)}\right) \d_{K}(x,y)
$$
ending the proof.
\end{proof}
To prove \thmref{metricub}, we need an intermediate result that bounds the intrinsic diameter of the supports in our statistical model.
\begin{lemma} \label{lem:boundgeo} For any $P \in \distributions{k}{\tau_{\min}}{\mathbf{L}}{f_{\min}}{f_{\max}}$, if $M = \support(P)$, then
$$
\sup_{x,y \in M} \d_M(x,y) \leq \d_{\max},
$$
where $\d_{\max}$ is defined in \thmref{metricub}.
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:boundgeo}] We let $x_1,\dots,x_N$ be a $\tau_{\min}/4$-packing of $M$. We let $x,y \in M$, and $G$ be the neighborhood graph built on top of $x,y,x_1,\dots,x_N$ with connectivity radius $\tau_{\min} / 2$. Using~\cite[Theorem~6.3]{niyogi2008finding}, denoting by $z_0 = x, z_1,\dots, z_k = y$ the vertices of a shortest path between $x$ and $y$ in $G$, there holds
\[
\d_M(x,y) \leq \sum_{i=0}^{k-1} \d_M(z_i,z_{i+1}) \leq \sum_{i=0}^{k-1} 2 \|z_i - z_{i+1}\| \leq k \tau_{\min}.
\]
But now $k \leq N-1$ and
\[
N \leq \frac{\vol_d(M)}{\min_{x \in M} \vol_d(M \cap \ball(x,\tau_{\min}/4))} \leq \frac{\vol_d(M)}{\(1-1/8^2\)^{d/2} \omega_d \tau_{\min}^d/4^d},
\]
where we used~\cite[Lemma~5.3]{niyogi2008finding}. Noticing that $\vol_d(M) \leq 1/f_{\min}$, we easily conclude.
\end{proof}
We are now in position to prove \thmref{metricub}.
\begin{proof}[Proof of \thmref{metricub}] We let $\mathcal{A}_n := \{\dh(\wh M, M) \leq \varepsilon_n\}$ denote the event where $\wh M$ is $\varepsilon_n$-precise in Hausdorff distance, and we take $x, y \in M$. On the event $\mathcal{A}_n$, for $n$ large enough such that $\varepsilon_n \leq \tau_{\min}/2$, \prpref{metric} applies to $K' = \wh M \cup \{x,y\}$ and, together with \lemref{boundgeo}, yields
$$\left| 1 - \frac{\wh \d(x,y)}{\d_M(x,y)}\right| \leq \frac{2\varepsilon_n}{\tau_{\min}}.$$
On $\mathcal{A}_n^c$, we distinguish whether $\|x- y\| \leq \varepsilon_n$ or not. If so, then $\wh\d(x,y) = \|x-y\| \leq \d_M(x,y)$. In the other case, $\d_M(x,y) \geq \|x-y\| \geq \varepsilon_n$ and $\wh \d(x,y) \leq \d_{\max}$ so that, in any case
$$
\left| 1 - \frac{\wh \d(x,y)}{\d_M(x,y)}\right| \leq 1 + \frac{\wh \d(x,y)}{\d_M(x,y)} \leq 1 + \frac{\d_{\max}}{\varepsilon_n},
$$
for $n$ large enough such that $\varepsilon_n \leq \d_{\max}$. Patching these two bounds together yields
$$
\mathbb{E}_{P^{\otimes n}}[\ell_\infty(\wh \d| \d_M)] \leq \frac{2\varepsilon_n}{\tau_{\min}} P^{\otimes n}(\mathcal{A}_n)+ \left(1 + \frac{\d_{\max}}{\varepsilon_n}\right) P^{\otimes n}(\mathcal{A}_n^c),
$$
ending the proof.
\end{proof}
\section{Proofs of \secref{optimal_reach_estimation}} \label{sec:proofoptireach}
We first prove that submanifolds of the model do fulfill \assref{spread} and \assref{subeuc}.
\begin{proof}[Proof of \prpref{assreach}]
\assref{subeuc} is a simple consequence of~\cite[Proposition~6.3]{niyogi2008finding}, which yields that it is fulfilled with $\Delta_1 = \tau(M)/2$ and $C_1 = 2$.
For \assref{spread}, take $x,y \in M$ such that $\|x-y\| \leq \tau(M)$ and take $\varepsilon < \tau(M)/4$.
We consider $a = \exp_y(v)$, where
$$
v = - \varepsilon \frac{\pr_{T_y M}(x-y)}{\|\pr_{T_y M}(x-y)\|}.
$$
Thanks to~\cite[Theorem~4.8 (7)]{Federer59}, there holds
\begin{align*}
\|\pr_{T_y M}(x-y)\|^2
&=
\|x-y\|^2 - \d^2(x-y,T_y M)
\\
&\geq \|x-y\|^2- \frac{\|x-y\|^4}{4 \tau^2(M)} \\
&\geq \frac34 \|x-y\|^2,
\end{align*}
and
$$
\inner{v}{y-x} = \varepsilon \frac{\inner{x-y}{\pr_{T_y M}(x-y)}}{\|\pr_{T_y M}(x-y)\|} = \varepsilon \|\pr_{T_y M}(x-y)\| \geq \frac12 \varepsilon \|x-y\|,
$$
so that
\[
\|x-y-v\|^2 \geq \|x-y\|^2+ \varepsilon \|x-y\| +\varepsilon^2 \geq \(\|x-y\|+\frac12\varepsilon\)^2,
\]
and thus $\|x-y-v\| \geq \|x-y\|+\varepsilon/2$. But now $\|x-a\| \geq \|x-y-v\|-\|a-y-v\|$ and $\|a-y-v\| \leq 5\varepsilon^2/(4\tau(M))$ according to~\cite[Lemma~1]{Aamari19}. All in all, we get that
$$
\|x-a\| \geq \|x-y\|+\frac12 \varepsilon-\frac{5}{4\tau(M)}\varepsilon^2 \geq \|x-y\|+\frac{3}{16} \varepsilon,
$$
ending the proof.
\end{proof}
To prove \thmref{sdrub}, we need a bound on the metric distortion between our distance estimator and $\d_M$, which follows easily from Proposition~\ref{prp:metric}.
\begin{proposition} \label{prp:metdisto} In the context of \prpref{metric}, we have that for all $\delta > 4\varepsilon$,
$$
\D_\delta(\d_{K}, \d_{(K')^\varepsilon}) \leq 1 + \frac{4\varepsilon}{(\delta-4\varepsilon) \wedge \tau(K)}.
$$
\end{proposition}
\begin{proof}[Proof of \prpref{metdisto}] \prpref{metric} already gives that $\D_\delta(\d_{K}|\d_{(K')^{\varepsilon}}) \leq 1 + 2\varepsilon/\tau(K)$. For the other control, notice that for any two $x,y \in (K')^\varepsilon$ that are $\delta$-apart for the Euclidean distance, writing $x_0 = \pr_{K}(x)$ and $y_0 = \pr_{K}(y)$, there holds
$$\d_{(K')^\varepsilon}(x,y) \leq 4\varepsilon + \d_{K}(x_0,y_0)
$$
because the piecewise-defined path consisting of the segment $[x,x_0]$, of a shortest (or near-minimizing) path between $x_0$ and $y_0$ in $K$, and of the segment $[y_0,y]$, is a continuous path in $(K')^\varepsilon$ between $x$ and $y$ whose length is at most the right-hand side of the display above. Now notice that
$$
\d_{K}(x_0,y_0) \geq \|x_0 - y_0\| \geq \delta - 4\varepsilon,
$$
which immediately yields $ \D_\delta(\d_{(K')^\varepsilon}|\d_{K}) \leq 1 + \frac{4\varepsilon}{\delta-4\varepsilon}$.
\end{proof}
The rate of the plug-in SDR estimator follows straightforwardly.
\begin{proof}[Proof of \thmref{sdrub}] Let $\mathcal{A}_n := \{\dh(M,\wh M) \leq \varepsilon_n\}$. On this event, we have $\D_{\delta}(\wh\d,\d_M) \leq 1 + 8\varepsilon_n/\delta$ according to \prpref{metdisto}, so that applying \thmref{stab} with $\delta_0 = \delta/2$, $\varepsilon= \varepsilon_n$ and $\nu = 8\varepsilon_n/\delta$ yields $|\wh\sdr_\delta -\sdr_\delta(M,\d_M)| \leq \zeta_0 \varepsilon_n$ with $
\zeta_0 \leq C \mathrm{s}_{\max}^4/\delta^4$. We conclude that
$$
\mathbb{E}_{P^{\otimes n}} |\wh\sdr_\delta -\sdr_\delta(M,\d_M)| \leq \zeta_0 \varepsilon_n P^{\otimes n }(\mathcal{A}_n) + 2 \mathrm{s}_{\max} P^{\otimes n }(\mathcal{A}_n^c),
$$
which ends the proof.
\end{proof}
\section{Introduction}
\input 1.introduction.tex
\section{Geometric and Statistical Model}
\label{sec:models}
\input 2.geo_stat_model.tex
\section{Reach and Related Quantities}
\label{sec:reach}
\input 3.reach_related.tex
\section{Spherical Distortion Radius}
\label{sec:sdr}
\input 4.sdr.tex
\section{Optimal Metric Learning}
\label{sec:metric}
\input 5.opti_metric_learn.tex
\section{Optimal Reach Estimation}
\label{sec:optimal_reach_estimation}
\input 6.opti_reach_est.tex
\section{Conclusion and Further Prospects}
\label{sec:conclusion}
\input 7.conclusion.tex
\section*{Acknowledgments}
The authors would like to thank heartily \emph{Chez Adel} for its unconditional warmth and creative atmosphere, and Vincent Divol for helpful discussions.
\bibliographystyle{chicago}
% arXiv:2207.06074 -- "Optimal Reach Estimation and Metric Learning" (Statistics Theory; Metric Geometry).

% --------------------------------------------------------------------------
% arXiv:1707.04247 -- "On the maximum diameter of path-pairable graphs".
% Abstract: A graph is path-pairable if for any pairing of its vertices there
% exist edge-disjoint paths joining the vertices in each pair. We obtain sharp
% bounds on the maximum possible diameter of path-pairable graphs which either
% have a given number of edges, or are $c$-degenerate. Along the way we show
% that a large family of graphs obtained by blowing up a path is path-pairable,
% which may be of independent interest.
% --------------------------------------------------------------------------
\section{Introduction}
\emph{Path-pairability} is a graph theoretical notion that emerged from a practical networking problem introduced by Csaba, Faudree, Gy\'arf\'as, Lehel, and Schelp \cite{CS}, and further studied by Faudree, Gy\'arf\'as, and Lehel \cite{mpp,F,pp} and by Kubicka, Kubicki and Lehel \cite{grid}. Given a fixed integer $k$ and a simple undirected graph $G$ on at least $2k$ vertices, we say that $G$ is {\it $k$-path-pairable} if, for any pair of disjoint sets of distinct vertices $\{x_1,\dots,x_k\}$ and $\{y_1,\dots,y_k\}$ of $G$, there exist $k$ edge-disjoint paths $P_1,P_2,\dots,P_k$, such that $P_i$ is a path from $x_i$ to $y_i$, $1\leq i\leq k$. The path-pairability number of a graph $G$ is the largest positive integer $k$ for which $G$ is $k$-path-pairable, and it is denoted by $\pp(G)$. A $k$-path-pairable graph on $2k$ or $2k+1$ vertices is simply said to be {\it path-pairable}.
Path-pairability is related to the notion of \textit{linkedness}. A graph is $k$-\emph{linked} if for any choice of $2k$ vertices $\{s_1, \ldots ,
s_k, t_1, \ldots , t_k\}$ (not necessarily distinct), there are internally vertex disjoint paths $P_1, \ldots , P_k$ with $P_i$ joining $s_i$ to $t_i$ for $1 \le i \le k$. Bollob{\'a}s and Thomason~\cite{BollobasThomason} showed that any $2k$-connected graph with a lower bound on its edge density is $k$-linked. On the other hand, a graph being path-pairable imposes no constraint on the connectivity or edge-connectivity of the graph. The most illustrative examples of this phenomenon are the stars $K_{1, n-1}$. Indeed, it is easy to see that stars are path-pairable, while they are neither $2$-connected nor $2$-edge-connected. Note that, for any pairing of the vertices of $K_{1, n-1}$, joining two vertices in a pair is straightforward due to the presence of a vertex of high degree, and the fact that the diameter is small. This example motivates the study of two natural questions about path-pairable graphs: given a path-pairable graph $G$ on $n$ vertices, how small can its maximum degree $\Delta(G)$ be, and how large can its diameter $d(G)$ be? This note addresses some aspects of the second question. To be precise, for a family of graphs $\mathcal{G}$ let us define $d(n, \mathcal{G})$ as follows:
\[
d(n, \mathcal{G}) = \max\{d(G): G \in \mathcal{G} \text{ and } G \text{ is path-pairable on } n \text{ vertices}\}.
\]
When $\mathcal{G}$ is the family of path-pairable graphs, we shall simply write $d(n)$ instead of $d(n, \mathcal{G})$.
\begin{comment}
It was proved by Faudree et al. that $\Delta_\text{min}(n)$ has to grow with the size of the graph; in particular, if $G$ is a path-pairable graph on $n$ vertices with maximum degree $\Delta$, then $n\leq 2\Delta^\Delta$ holds. The result places sublogarithmic lower bound on $\Delta_\text{min}(n)$, that is, $\Delta_\text{min}(n) = \Omega\left(\frac{\log n}{\log\log n}\right)$. To date the best known asymptotic upper bound is $\Delta_\text{min}(n) = O(\log n)$ due to Gy\H ori et al \cite{ntp}.
\end{comment}
The maximum diameter of arbitrary path-pairable graphs was investigated by M\'esz\'aros \cite{me_diam} who proved that $d(n) \le 6 \sqrt{2} \sqrt{n}$.
Our aim in this note is to investigate the maximum diameter of path-pairable graphs when we impose restrictions on the number of edges and on how the edges are distributed.
To state our results, let us denote by $\mathcal{G}_m$ the family of graphs with at most $m$ edges.
The following result determines $d(n, \mathcal{G}_m)$ for a certain range of $m$.
\begin{thm}\label{diam_m}
If $2n \le m \le \frac{1}{4}n^{3/2}$ then
\[
\sqrt[3]{\frac{1}{2}m-n} \le d(n, \mathcal{G}_m) \le 16 \sqrt[3]{m}.
\]
\end{thm}
We remark that the upper bound in Theorem~\ref{diam_m} holds for $m$ in any range, but when $m \ge \frac{1}{4}n^{3/2}$ the bound obtained by M\'esz\'aros \cite{me_diam} is sharper.
Determining the behaviour of the maximum diameter among path-pairable graphs on $n$ vertices with fewer than $2n$ edges remains an open problem.
In particular, we do not know if the maximum diameter must be bounded (see Section \ref{sec:final}).
Following this line of research, it is very natural to consider the problem of determining the maximum attainable diameter for other classes of graphs. For example, what is the behaviour of the maximum diameter of path-pairable \emph{planar} graphs? Although we could not give a satisfactory answer to this particular question, we were able to do so for graphs which are $c$-\emph{degenerate}.
As usual, we say that an $n$-vertex graph $G$ is $c$-\emph{degenerate} if there exists an ordering $v_1,\ldots,v_n$ of its vertices such that $|\{v_j: j > i, v_iv_j\in E(G) \}|\leq c$ holds for all $i=1,2,\ldots,n$. We let $\mathcal{G}_{c\text{-deg}}$ denote the family of $c$-degenerate graphs. Clearly all $c$-degenerate graphs have a linear number of edges, so Theorem~\ref{diam_m} implies that $d(n, \mathcal{G}_{c\text{-deg}}) = O(\sqrt[3]{n})$. However, as the next result shows, this bound is far from the truth.
\begin{thm}\label{diam_cdeg}
Let $c \ge 5$ be an integer. Then
\[
(2+o(1)) \frac{\log(n)}{\log({\frac{c}{c-2}})} \leq d(n, \mathcal{G}_{c\text{-deg}}) \leq (12+o(1)) \frac{\log(n)}{\log(\frac{c}{c-2})}
\]
as $n \rightarrow \infty$.
\end{thm}
We remark that we have not made an effort to optimize the constants appearing in the upper and lower bounds of Theorems~\ref{diam_m} and~\ref{diam_cdeg}.
\subsection{The Cut-Condition}
While path-pairable graphs need not be highly connected or edge-connected, they must satisfy certain `connectivity-like' conditions that we shall need in the remainder of the paper. We say a graph $G$ on $n$ vertices satisfies the \emph{cut-condition} if for every $X \subset V(G)$, $|X| \le n/2$, there are at least $|X|$ edges between $X$ and $V(G)\setminus X$. Clearly, a path-pairable graph has to satisfy the cut-condition. On the other hand, satisfying the cut-condition is not sufficient to guarantee path-pairability in a graph; see \cite{me_pp} for additional details.
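For concreteness, the cut-condition can be verified by brute force on small graphs. The sketch below (our own illustration, exponential in the number of vertices and therefore only for toy sizes) checks the star example from the introduction against a path, which fails the condition:

```python
from itertools import combinations

def satisfies_cut_condition(vertices, edges):
    """Return True iff every X with |X| <= n/2 has at least |X| edges
    leaving it.  Brute force over all subsets: toy sizes only."""
    vs = list(vertices)
    n = len(vs)
    for size in range(1, n // 2 + 1):
        for X in map(set, combinations(vs, size)):
            crossing = sum((u in X) != (v in X) for u, v in edges)
            if crossing < size:
                return False
    return True

star = [(0, i) for i in range(1, 6)]            # K_{1,5}, path-pairable
path = [(0, 1), (1, 2), (2, 3)]                 # P_4
print(satisfies_cut_condition(range(6), star),  # True
      satisfies_cut_condition(range(4), path))  # False: X = {0, 1}
```

The star passes because every subset of leaves sends one edge each to the centre, while $P_4$ fails already at the cut $\{0,1\}$, which is crossed by a single edge.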
\subsection{Organization and Notation}
The proofs of the lower bounds in Theorems~\ref{diam_m} and~\ref{diam_cdeg} require constructions of path-pairable graphs with large diameter. In Section~\ref{sec:blowup}, we
show how to obtain such graphs by proving that a more general class of graphs is path-pairable. In Sections~\ref{sec:proofdiam_m} and~\ref{sec:proofdiam_cdeg} we shall complete the proofs of
Theorems~\ref{diam_m} and~\ref{diam_cdeg}, respectively. Finally, we mention some open problems in Section~\ref{sec:final}.
Our notation is standard. Thus, for a (simple, undirected) graph $G$ we shall denote by $V(G)$ and $E(G)$ the vertex set and edge set of $G$, respectively. We also let $|G|$ and $d(G)$ denote the number of vertices and diameter of $G$, respectively. For a vertex $x \in V(G)$ we let $N_{G}(x)$ denote the neighbourhood of $x$ in $G$, and we shall omit the subscript `$G$' when no ambiguity arises.
\section{Path-pairable graphs from blowing up paths}\label{sec:blowup}
In this section, we will show how to construct a quite general class of graphs which have high diameter and are path-pairable.
Let $G$ be a graph with vertex set $V(G)=\{v_1,\ldots,v_k\}$, and let $G_1,\ldots, G_k$ be graphs. We define the \textit{blown-up graph} $G(G_1,\ldots, G_k)$ as follows: replace every vertex $v_i$ in $G$ by the corresponding graph $G_i$, and for every edge $v_iv_j \in E(G)$ insert a complete bipartite graph between the vertex sets of $G_i$ and $G_j$.
Let $P_k$ denote the path on $k$ vertices. The following lemma asserts that if we blow-up a path with graphs $G_1, \ldots , G_k$, such that $G_{i}$ is path-pairable for $i \le k-1$, and certain properties inherited from the cut-condition hold, then the resulting blow-up is path-pairable.
\begin{lemma}\label{blown-up}
Suppose that $G_1, \ldots ,G_k$ are graphs on $n_1, \ldots , n_k$ vertices, respectively, where $G_{i}$ is path-pairable for $i \le k-1$.
Let $n = \sum_{i =1}^k n_i$ and let $u_i = \sum_{j=1}^i n_j$ for $i=1, \dots , k-1$. Then $P_k(G_1,\ldots, G_k)$ is path-pairable if and only if
\begin{equation}\label{eq_1}
n_i\cdot n_{i+1}\geq \min(u_i,n-u_i)
\end{equation}
holds for $i=1,\ldots,k-1$.
\end{lemma}
\begin{proof}
For each $i = 1, \ldots , k$, let $U_i = \bigcup_{j=1}^i V(G_j)$ so that $u_i = |U_i|$. Now, if $P_k(G_1, \ldots , G_k)$ is path-pairable, then we may apply the cut-condition to the cut $\{U_i, V(G)\setminus U_i\}$. This implies $n_i\cdot n_{i+1}\geq \min(u_i,n-u_i)$ must hold for $i=1,\ldots , k-1$. In the remainder, we show that this simple condition is enough to yield the path-pairability of
$G := P_k(G_1, \ldots , G_k)$. Assume that a pairing $\mathcal{P}$ of the vertices of $G$ is given. If $\{u, v\} \in \mathcal{P}$ we shall say that $u$ is a \emph{sibling} of $v$ (and vice-versa).
We shall define an algorithm that sweeps through the classes $G_1,G_2,\ldots, G_k$ and joins each pair of siblings via edge-disjoint paths.
First we give an overview of the algorithm.
We proceed by first joining pairs $\{u, v\} \in \mathcal{P}$ via edge-disjoint paths such that $u$ and $v$ belong to different $G_i$'s, and then afterwards joining pairs that remain inside some $G_j$ (using the path-pairability of $G_j$).
Before round $1$ we use the path-pairability property of $G_{1}$ to join those siblings which belong to $G_{1}$.
In round $1$ we assign to every vertex $u$ of $G_1$ a vertex $v$ of $G_2$.
If $\{u, v\} \in \mathcal{P}$ are siblings, then we simply choose the edge $uv$.
Then we join the siblings which are in $G_{2}$ again using the path-pairability property of $G_{2}$.
For those paths $uv$ that have not ended (because $\{u, v\} \notin \mathcal{P}$) we shall continue by choosing a new vertex $w$ in $G_3$ and continue the path with edge $vw$, and so on.
Paths which have not finished joining a pair of siblings we shall call \emph{unfinished}; otherwise, we say the path is \emph{finished}.
The last edge which completes a finished path we shall call a \emph{path-ending edge}.
During round $i$ we shall first choose those vertices in $G_{i+1}$ which, together with some vertex of $G_i$, form path-ending edges.
At the end of round $i$, in $G_{i+1}$ we will have endpoints of unfinished paths and perhaps also some endpoints of finished paths.
Note that the vertices of $G_{i+1}$ might be endpoints of several unfinished paths.
For $x \in G_{i+1}$ let $w(x)$ denote the number of unfinished paths $P\cup \{x\}$ with $P \subset U_i$ at the end of round $i$ which are to be extended by a vertex of $G_{i+2}$ (including the single-vertex path $x$ in the case when $x$ was not joined to its sibling in the latest round).
Note that every such path corresponds to a yet not joined vertex in $U_{i+1}$ as well as to another vertex yet to be joined lying in $V(G)\setminus U_{i+1}$.
It follows that
\begin{equation}\label{eq:weights}
\sum_{x \in G_{i+1}}w(x) \le \min(u_{i+1}, n-u_{i+1}).
\end{equation}
Let us now be more explicit in how we make choices in each round.
We shall maintain the following two simple conditions throughout our procedure (the first of which has been mentioned above):
\begin{itemize}
\item[(a)] During round $i$ ($1\le i \le k-1$), if $w \in G_i$ is the current endpoint of the path which began at some vertex $u\in U_i$ (possibly $u=w$), and $\{u, v\} \in \mathcal{P}$ for $v \in G_{i+1}$, then we join $w$ to $v$. Informally, we choose path-ending edges when we can.
\item[(b)] $w(x) \leq n_{i+1}$ for all $x \in G_i$, for $i =1, \ldots , k-1$.
\end{itemize}
The second condition above is clearly necessary in order to proceed during round $i$, as ${|N(x)\cap G_{i+1}| = n_{i+1}}$ for every $x \in G_i$, and hence we cannot continue more than $n_{i+1}$ unfinished paths through $x$.
We claim that as long as both of the above conditions are maintained, the proposed algorithm finds a collection of edge-disjoint paths joining every pair in $\mathcal{P}$.
Both conditions are clearly satisfied for $i=1$ as $w(x) \le 1\leq n_2$ for all $x\in G_1$.
Let $i \ge 2$ and suppose both conditions hold for rounds $1, \ldots , i-1$.
Our aim is to show that an appropriate selection of edges between $G_i$ and $G_{i+1}$ exists in round $i$ to maintain the conditions.
We start round $i$ by choosing all path-ending edges with endpoints in $G_i$ and $G_{i+1}$; this can be done since, by induction, $w(x) \le n_{i+1}$ for every $x \in G_i$.
Observe that if $i = k-1$ then the only remaining siblings are in $G_{k}$.
Then for every $\left\{ u, v \right\} \in \mathcal{P}$ such that $u,v \in G_{k}$ we can find a vertex $w$ in $G_{k-1}$ and join $u, v$ with the path $uwv$.
When $i < k-1$ then the remaining paths can be continued by assigning arbitrary vertices from $G_{i+1}$ (without using any edge multiple times).
We choose an assignment that balances the `weights' in $G_{i+1}$.
More precisely, let us choose an assignment of the vertices that minimizes
\[
\sum\limits_{a\in G_{i+1}}w(a)^2.
\]
If for every $x \in G_{i+1}$ we have that $w(x) \le n_{i+2}$ we are basically done.
It remains to find edge-disjoint paths inside $G_{i+1}$ for those pairs $\{x, y\} \in \mathcal{P}$ whose vertices belong to $G_{i+1}$.
But this is possible because of the assumption that $G_{i+1}$ is path-pairable.
Suppose then that in the above assignment there exists $x \in G_{i+1}$ with $w(x) \geq n_{i+2}+1$.
We first claim that, under this assignment, no other vertex of $G_{i+1}$ has small weight.
\begin{claim}\label{claim:y-big}
Every vertex $y\in G_{i+1}$ satisfies $w(y)\geq n_{i+2}-1$.
\end{claim}
\begin{proof}
Suppose there is $y \in G_{i+1}$ such that $w(y) \le n_{i+2}-2$.
Then, as $w(x)>w(y)+2$, there exist vertices $v_1,v_2\in G_i$ such that certain paths ending at $v_1$ and $v_2$ were joined in round $i$ to $x$ ($x$ was assigned as the next vertex of these paths) but no paths at $v_1$ or $v_2$ were assigned $y$ as their next vertex.
Observe that at least one of the edges $v_1x$ and $v_2x$ is not a path-ending edge, and it could have been replaced by the corresponding edge $v_1y$ or $v_2y$, respectively.
That operation would result in a new assignment with a smaller square sum $\sum_{a\in G_{i+1}}w(a)^2$, which is a contradiction.
\end{proof}
Therefore, we may assume $w(y)\geq n_{i+2} - 1$ for all $y\in G_{i+1}$.
In this case, partition the vertices of $G_{i+1}$ into three classes:
\begin{align*}
X &= \{v\in G_{i+1}: w(v) \geq n_{i+2} +1\}\\
Y &= \{v\in G_{i+1}: w(v) = n_{i+2} - 1\} \\
Z &= \{v\in G_{i+1}: w(v) = n_{i+2}\}.
\end{align*}
Observe first that $1 \le |X| \leq |Y|$, since otherwise using~(\ref{eq:weights}) we have \[n_{i+1}n_{i+2}+1\leq \sum\limits_{s\in G_{i+1}}w(s)\leq \min(u_{i+1},n-u_{i+1}),\] contradicting condition~(\ref{eq_1}).
Notice also that the same argument as in Claim~\ref{claim:y-big} shows that $w(v) \le n_{i+2}+1$ for every $v \in G_{i+1}$, hence we can actually write
\[
X = \left\{ v \in G_{i+1}: w(v) = n_{i+2}+1 \right\}.
\]
We will need the following claim which asserts that if there are siblings in $G_{i+1}$ then they must belong to $Z$.
\begin{claim}\label{claim:Z_pairs}
If $\{u,v\} \in \mathcal{P}$ and $u, v \in G_{i+1}$, then $u, v \in Z$.
\end{claim}
\begin{proof}
We first show that every $y \in Y$ is incident to a path-ending edge. Suppose, to the contrary, that there is $y \in Y$ such that there is no path-ending edge which ends at $y$.
It follows that there are at most $w(y)$ vertices in $G_{i}$ which had been joined to $y$.
Hence we can take any $x \in X$ and find $z \in G_{i}$ which was not joined to $y$, and such that $xz$ is not a path-ending edge.
Replacing $zx$ by $zy$ would result in a smaller square sum $\sum_{a \in G_{i+1}}w(a)^{2}$, which gives a contradiction.
Now, let $\{u, v\} \in \mathcal{P}$ such that $u, v \in G_{i+1}$.
Since every $y \in Y$ is incident to a path-ending edge, we have that $u, v \not\in Y$.
Suppose, for contradiction, that $u \in X$.
Then $u$ was joined to $w(u) = n_{i+2}+1$ vertices in $G_{i}$, and hence for every $y \in Y$, there is $z \in G_{i}$ which was joined to $u$ but not $y$.
Replacing $zu$ by $zy$ would result in a smaller square sum $\sum_{a \in G_{i+1}}w(a)^{2}$, which again gives a contradiction.
\end{proof}
Finally, we shall show that we can reduce the weights of the vertices in $X$ (and pair the siblings inside $G_{i+1}$) using the path-pairable property of $G_{i+1}$.
For every $x \in X$ pick a different vertex $y_{x} \in Y$ (which we can do, since $|Y| \ge |X|$) and let $\mathcal{P'} = \left\{ \{u, v\} \in \mathcal{P} : u, v \in G_{i+1} \right\} \cup \left\{ \{x, y_{x}\} : x \in X \right\}$.
Since $G_{i+1}$ is path-pairable, we can find edge-disjoint paths joining the siblings in $\mathcal{P'}$ (note that by Claim~\ref{claim:Z_pairs} none of the pairs $\{x, y_x\}$ interfere with any siblings $\{u, v\} \in \mathcal{P}$ with $u, v \in G_{i+1}$).
Observe now that for every $x \in X$ one path has been channeled to a vertex $y\in Y$, thus the number of unfinished path endpoints at $x$ has dropped to $n_{i+2}$, and so condition (b) is maintained.
\end{proof}
We close the section by pointing out that the condition that the graphs $G_{i}$ are path-pairable is necessary.
We do this by giving an example of a blown-up path $P_k(G_{n_1},\ldots,G_{n_k})$ that satisfies the cut-condition~(\ref{eq_1}) of Lemma~\ref{blown-up}, yet is not path-pairable unless some of the $G_i$'s are path-pairable as well.
For the sake of simplicity we set $k = 5$ and prove that $G_3$ has to be path-pairable.
Let $n = 2t^2 + t$ for some even $t \in \mathbb{N}$ and let $n_1 = n_5 = t^2-t$, $n_2=n_3 = n_4 = t$.
Clearly $P_5(G_{n_1},\ldots,G_{n_5})$ satisfies Condition~(\ref{eq_1}) of Lemma~\ref{blown-up}.
Observe that any pairing of the vertices in $G_{1} \cup G_{2}$ with the vertices in $G_{4} \cup G_{5}$ has to use all the edges between $G_{3}$ and $G_{2} \cup G_{4}$.
Therefore if we additionally pair the vertices inside $G_{3}$, then the paths joining those vertices can only use the edges in $G_{3}$, therefore $G_{3}$ has to be path-pairable.
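The numerical condition~(\ref{eq_1}) of Lemma~\ref{blown-up} depends only on the block sizes, so it is easy to check mechanically. A small sketch (the function name is ours; the size vector for $t=4$ instantiates the five-block example just discussed):

```python
def blowup_condition(sizes):
    """Check n_i * n_{i+1} >= min(u_i, n - u_i) for i = 1, ..., k-1,
    i.e. condition (1) for the path blow-up with the given block sizes."""
    n, u = sum(sizes), 0
    for a, b in zip(sizes, sizes[1:]):
        u += a
        if a * b < min(u, n - u):
            return False
    return True

t = 4  # the example: n_1 = n_5 = t^2 - t, n_2 = n_3 = n_4 = t
print(blowup_condition([t * t - t, t, t, t, t * t - t]))  # True
```

Note that passing this check is only the necessary half of the lemma; path-pairability of the inner blocks is still required, as the example above shows.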
\section{Proof of Theorem \ref{diam_m}}\label{sec:proofdiam_m}
We first prove the upper bound. Let $G$ be a path-pairable graph on $n$ vertices with at most $m$ edges. Take $x,y\in V(G)$ such that $d(x,y) = d(G)$ and let $V_{i}$ be the set of vertices at distance exactly $i$ from $x$, for every $i$.
Observe that $V_0 = \{x\}$ and $y\in V_{d(G)}$.
For $i \in \left\{ 1, \dots, d(G) \right\}$ define $n_{i}$ to be the size of $V_{i}$ and let $u_{i} = \sum_{j=0}^{i} n_{j}$.
We need the following claim.
\begin{claim}
$u_{2k+1} \geq \binom{k+2}{2}$ as long as $u_{2k+1}\leq\frac{n}{2}$.
\end{claim}
\begin{proof}
We shall use induction on $k$.
For $k = 0$ it is clear.
Assume that $u_{2k-1} \ge \binom{k+1}{2}$.
By the cut-condition we have that the number of edges between $V_{2k}$ and $V_{2k+1}$ is at least $u_{2k-1}$, hence $n_{2k}\cdot n_{2k+1} \ge u_{2k-1} \ge \binom{k+1}{2}$.
By the arithmetic-geometric mean inequality, $n_{2k} + n_{2k+1} \ge 2\sqrt{\binom{k+1}{2}} \ge k+1$.
As $u_{2k+1} = u_{2k-1} + n_{2k} + n_{2k+1}$, we have $u_{2k+1} \ge \binom{k+2}{2}$.
\end{proof}
Now, let $A = \bigcup_{i=0}^{\lfloor d / 3 \rfloor}V_{i}$, $B = \bigcup_{i = \lfloor d / 3 \rfloor +1}^{\lfloor 2d/3 \rfloor} V_{i}$, and $C = \bigcup_{i = \lfloor 2d/3 \rfloor + 1}^{d} V_{i}$.
Observe that $|A|, |C| \ge \min\left\{\frac{n}{2}, \frac{d^{2}}{100}\right\}$, so joining vertices in $A$ with vertices in $C$ requires at least $ \min\left\{\frac{n}{2},\frac{d^2}{100}\right\}\cdot\frac{d}{3}$ edges.
Hence,
\[ \min\left\{\frac{n}{2},\frac{d^2}{100}\right\}\cdot\frac{d}{3}\leq m,\]
which implies
\[d\leq \max\left\{\frac{6m}{n},16\sqrt[3]{m}\right\}.\]
Notice that whenever $m \le 4n^{3/2}$ we have $d \le 16\sqrt[3]{m}$.
Let us remark that if $m \ge \frac{1}{4}n^{3/2}$ then the upper bound is trivially satisfied by the general upper bound obtained in \cite{me_diam}.
For the lower bound, let $n$ and $2n \le m \le \frac{1}{4}n^{3/2}$ be given.
For any natural number $\ell$ we shall denote by $S_\ell$ the star $K_{1, \ell-1}$ on $\ell$ vertices.
Consider the graph $G = P_{k}(G_{1},\dots,G_{k})$ on $n$ vertices,
where $k = \left\lfloor \sqrt[3]{\frac{m}{2}-n} \right\rfloor$ and $G_{1} = G_{2} = \dots = G_{k} = S_{k}$, $G_{k+1} = S_{k^{2}}$, $G_{k+2} = S_{2}$, and $G_{k+3}$ is an empty graph on $n-2k^{2}-2$ vertices.
Straightforward calculation shows that $u_i = i\cdot k$ for $i\leq k$, $u_{k+1}= 2k^2$, and $u_{k+2} = 2k^2 + 2$.
Also $n_1n_2=n_2n_3=\ldots=n_{k-1}n_{k}=k^2$, $n_{k}n_{k+1}=k^3$, $n_{k+1}n_{k+2}=2k^2$, and $n_{k+2}n_{k+3}=2n-4k^2-4$.
Therefore, for $i \in \left\{ 1, \dots, k+1 \right\}$ we have $n_{i} \cdot n_{i+1} \ge u_{i} \ge \min(u_{i}, n-u_{i})$ and $n_{k+2} \cdot n_{k+3} \ge n_{k+3} \ge \min(u_{k+2}, n-u_{k+2})$.
Hence it follows from Lemma~\ref{blown-up} that $G$ is path-pairable.
It is easy to check that the number of edges in $G$ is at most $2n + 2k^{3} \le m$.
On the other hand, the diameter of $G$ is $k+2 \ge \sqrt[3]{\frac{m}{2} - n}$.
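The bookkeeping for this construction (block sizes summing to $n$, condition~(\ref{eq_1}), and the edge count $\le 2n + 2k^3 \le m$) can be verified programmatically. A sketch, with the edge count written out block by block (function and variable names are our own):

```python
def construction(n, m):
    """Block sizes and edge count of the lower-bound construction:
    k copies of S_k, then S_{k^2}, S_2, and an empty block."""
    k = round((m / 2 - n) ** (1 / 3))
    while k ** 3 > m / 2 - n:          # guard against float rounding
        k -= 1
    sizes = [k] * k + [k * k, 2, n - 2 * k * k - 2]
    # complete bipartite graphs between consecutive blocks:
    edges = (k - 1) * k * k + k ** 3 + 2 * k * k + 2 * (n - 2 * k * k - 2)
    # star edges inside the k copies of S_k, inside S_{k^2}, and inside S_2:
    edges += k * (k - 1) + (k * k - 1) + 1
    return k, sizes, edges

def condition_1(sizes):                 # condition (1) of the lemma
    n, u, ok = sum(sizes), 0, True
    for a, b in zip(sizes, sizes[1:]):
        u += a
        ok = ok and a * b >= min(u, n - u)
    return ok

n, m = 10 ** 4, 10 ** 5                 # satisfies 2n <= m <= n^{3/2}/4
k, sizes, e = construction(n, m)
print(sum(sizes) == n, e <= 2 * n + 2 * k ** 3 <= m, condition_1(sizes))
```

Running this for other admissible pairs $(n,m)$ in the range $2n \le m \le \frac14 n^{3/2}$ gives the same three checks.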
\section{Proof of Theorem \ref{diam_cdeg}}\label{sec:proofdiam_cdeg}
In this section, we investigate the maximum diameter a path-pairable $c$-degenerate graph on $n$ vertices can have.
We shall assume that $c$ is an integer and $c\geq 5$.
Let $G$ be a $c$-degenerate graph on $n$ vertices with diameter $d$.
We shall show first that $d \le 4\log_{\frac{c+1}{c}}(n)+3$.
Let $x \in G$ be such that there is $y \in G$ with $d(x,y) = d$.
For $i \in \left\{ 0, \dots, d \right\}$, write $V_{i}$ for the set of vertices at distance $i$ from $x$.
Let $n_{i} = |V_{i}|$ and $u_{i} = \sum_{j = 0}^{i}n_{j}$.
Observe that $|V_{i}| \ge 1$ for every $i \in \left\{ 0, \dots, d \right\}$.
We can assume that $u_{\lfloor \frac{d}{2} \rfloor} \le \frac{n}{2}$ (otherwise we repeat the argument below with $V'_{i} = V_{d-i}$).
The result will easily follow from the following claim.
\begin{claim}
$u_{2k+1} \ge \left( \frac{c+1}{c} \right)^{k}$ as long as $u_{2k+1} \le \frac{n}{2}$.
\end{claim}
Let us assume the claim and prove the result.
Letting $k = \frac{\lfloor\frac{d}{2}\rfloor -1}{2}$, we have that $n/2 \ge u_{2k+1} \ge \left( \frac{c+1}{c} \right)^{\frac{\lfloor \frac{d}{2} \rfloor-1}{2}}$.
Hence $d \leq 4\log_{\frac{c+1}{c}}(n)+3 = 4 \frac{\log(n)}{\log(\frac{c+1}{c})} + 3 \le 4 \frac{\log(n)}{\log(\frac{c}{c-2})}\frac{\log(\frac{c}{c-2})}{\log(\frac{c+1}{c})} + 3\le 12 \frac{\log(n)}{\log(\frac{c}{c-2})}+3$, where the last inequality follows from the easy to check fact that $\frac{\log(\frac{c}{c-2})}{\log(\frac{c+1}{c})} \le 3$, for all $c \ge 5$.
\begin{proof}[Proof of the Claim]
We shall prove the claim by induction on $k$.
The base case when $k=0$ is trivial as $u_{1} \ge 2$.
Suppose the claim holds for every $l \le k-1$.
Since $G$ is $c$-degenerate we have that $e(V_{2k}, V_{2k+1}) \le c\left( n_{2k} + n_{2k+1} \right)$.
On the other hand, it follows from the cut-condition that $e(V_{2k}, V_{2k+1}) \ge u_{2k} = u_{2k-1}+ n_{2k}$.
Therefore, by the induction hypothesis, we have
$n_{2k} + n_{2k+1} \ge
\frac{1}{c}\left( u_{2k-1}+n_{2k} \right)
\ge \frac{1}{c} \left( \left( {\frac{c+1}{c}}\right)^{k-1} +n_{2k} \right)
\ge \frac{1}{c}\left( \frac{c+1}{c} \right)^{k-1}$. Hence, $u_{2k+1}=u_{2k-1}+n_{2k}+n_{2k+1}\geq \left( \frac{c+1}{c} \right)^{k-1}+ \frac{1}{c}\left( \frac{c+1}{c} \right)^{k-1}= \left( 1+\frac{1}{c} \right) \left( \frac{c+1}{c} \right)^{k-1} \ge \left( \frac{c+1}{c} \right)^{k}$, which proves the claim.
\end{proof}
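The ``easy to check fact'' used above, namely $\log(\frac{c}{c-2})/\log(\frac{c+1}{c}) \le 3$ for all $c \ge 5$, can also be confirmed numerically; the ratio is maximal at $c=5$ (about $2.80$) and decreases towards $2$:

```python
import math

def ratio(c):
    return math.log(c / (c - 2)) / math.log((c + 1) / c)

print(max(ratio(c) for c in range(5, 10 ** 4)) < 3)  # True; max is ratio(5)
```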
We shall prove the lower bound assuming $c$ is an odd integer; when $c$ is even we apply the same argument for $c-1$.
To do so, consider the blown-up path $G = P_{2m^{\prime}-1}(G_{1},\ldots, G_{2m^{\prime}-1})$ for some $m^{\prime}\in \mathbb{N}$, which we specify later.
Firstly, we shall define the sizes of $G_i$ for $i \in \{1,\ldots, 2m'-1\}$.
To do so, let us define a sequence $\{n_i\}_{i\in\mathbb{N}}$ where $n_{2i} =\frac{c-1}{2} $ and $n_{2i+1}$ is defined recursively in the following way:
\begin{equation}
n_{2i+1}= \left \lceil \frac{2}{c-1} \cdot \sum_{j=1}^{2i} n_j \right \rceil \leq \frac{2}{c-1}\sum_{j=1}^{2i} n_j +1.
\end{equation}
Let $m$ be the largest integer such that $\sum_{j=1}^{m} n_j \leq n/2$.
We let $m'=m$ when $m$ is odd and $m'=m-1$ when $m$ is even.
Moreover, let $|G_{m'}|=n-2\sum_{j=1}^{m'-1} n_j$, and let $|G_i|=n_i$ for $1\leq i < m'$ and $|G_{m'+j}|= |G_{m'-j}|$ for $ j \in \{1,\ldots, m'-1\}$.
For all $i\in \{1,\ldots, 2m'-1\}$ let $G_i = S_{|G_i|}$ be a star on $|G_i|$ vertices.
It is easy to check that the graph $P_{2m'-1}(G_1,\ldots,G_{2m'-1}) $ is path-pairable by Lemma \ref{blown-up}.
It has diameter at least $2m-4$, and $m \geq (1+o(1))\log_{\frac{c+1}{c-1}}(n)$.
Again an easy verification shows that the graph $G$ is $c$-degenerate.
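The recursion defining the block sizes is straightforward to simulate. A sketch below, where the seed $n_1 = 1$ is our own choice (the text leaves the first term implicit), checking that the number of blocks, and hence the diameter, grows logarithmically in $n$:

```python
from math import ceil, log

def cdeg_sizes(n, c):
    """Sizes n_1, ..., n_m with n_{2i} = (c-1)/2 and
    n_{2i+1} = ceil(2/(c-1) * (n_1 + ... + n_{2i})), total kept <= n/2."""
    assert c % 2 == 1 and c >= 5
    seq, total = [1], 1                 # assumed seed n_1 = 1
    while True:
        even = (c - 1) // 2
        if total + even > n // 2:
            break
        seq.append(even); total += even
        odd = ceil(2 * total / (c - 1))
        if total + odd > n // 2:
            break
        seq.append(odd); total += odd
    return seq

n, c = 10 ** 6, 5
m = len(cdeg_sizes(n, c))
# the running sum roughly multiplies by (c+1)/(c-1) every two blocks, so
# m is of order 2 log_{(c+1)/(c-1)}(n)
print(m >= log(n) / log((c + 1) / (c - 1)))  # True for c = 5
```

The same simulation for larger odd $c$ shows the block count shrinking like $1/\log(\frac{c+1}{c-1})$, in line with the theorem.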
\section{Final remarks and open problems}\label{sec:final}
We obtained tight bounds on the parameter $d(n, \mathcal{G}_{m})$ when $(2+\epsilon)n \leq m \leq \frac{1}{4} n^{3/2}$, for any fixed $\epsilon >0$.
It is an interesting open problem to investigate what happens when the number of edges in a path-pairable graph on $n$ vertices is around $2n$. We ask the following:
\begin{question}
Is there a function $f$ such that for every $\epsilon>0$ and for every path-pairable graph $G$ on $n$ vertices with at most $(2-\epsilon)n$ edges, the diameter of $G$ is bounded by $f(\epsilon)$?
\end{question}
Another line of research concerns determining the behaviour of $d(n, \mathcal{P})$, where $\mathcal{P}$ is the family of planar graphs. Since planar graphs are $5$-degenerate, it follows from Theorem~\ref{diam_cdeg} that the diameter of a path-pairable planar graph on $n$ vertices cannot be larger than $C \log{n}$ for an absolute constant $C$.
This fact makes us wonder whether there are path-pairable planar graphs with unbounded diameter.
\begin{question}
Is there a family of path-pairable planar graphs with arbitrarily large diameter?
\end{question}
The graph constructed in the proof of the lower bound in Theorem~\ref{diam_cdeg} when $c=5$ is not planar since it contains a copy of $K_{3,3}$. Therefore, it cannot be used to show that the diameter of a path-pairable planar graph can be arbitrarily large (note, however, that this graph does not contain a $K_7$-minor nor a $K_{6,6}$-minor). We end by remarking that we were able to construct an infinite family of path-pairable planar graphs with diameter $6$, but not larger.
\begin{comment}
\begin{thm} \label{diameter_thm}
For every even $n\geq 6$ there is a path-pairable planar graph on $n$ vertices and with diameter equal $6$.
\end{thm}
\begin{proof}
\textbf{There are few errors here.
First, the diameter is $5$.
When $k_{1} = 1$ the graph seems not to be path-pairable.
Also, when applying the induction we have to make sure that $A_{3}$ doesn't have too few vertices\dots}
Consider the following planar graph on $n=k_1+k_2+6$ vertices, where $k_1,k_2\geq 1$ and $k_1+k_2$ is even.
Partition the vertex set of $G$ into $7$ non-empty subsets $A_1, A_2,A_3,A_4,A_5,A_6$ and $A_7$, namely $A_1=\{a_1\}$,$A_2=\{a_2\}$, $A_3=\{v_1,v_2,...,v_{k_1}\}$, $A_4=\{a_3,a_4\}$ and $A_5=\{w_1,w_2,...,w_{k_2}\}$, $A_6=\{a_5\}$ and $A_7=\{a_6\}$.
Connect $a_1$ to $a_2$ and $a_2$ to every vertex in $A_3$. Let $a_3$ be joined to every vertex in $A_3\cup A_5$ and $a_4$ to the set $\{v_1,v_k,a_3,w_1,w_{k_2}\}$. Symmetrically, let $a_5$ be connected to every vertex in $A_4$ and finally let $a_6$ be connected to $a_5$. We also add the edges of respective paths inside $A_3, A_4,A_5$, that is, we join consecutive vertices of the sequences $(v_1, v_2,...,v_{k_1})$, $(a_3,a_4)$, and $(w_1,w_2,...,w_{k_2})$.
Clearly $d(a_1,a_6)=6$, so the diameter of $G$ is $6$. It is also easy to see that the graph is planar. We need to show it is path-pairable. The statement is fairly obvious for small values of $n$; we leave the verification of these cases to the reader. On the other hand, if $k_1+k_2>6$ (i.e. $n>12$), then every pairing $\mathcal{P}$ of the vertices contains at least one pair of terminals $(u,v)$ such that their corresponding vertices lie in $A_3\cup A_5$. We can join this pair by a path of length 2 through the vertex $a_3$ and complete the pairing process via induction that completes our proof.
\end{proof}
\end{comment}
\begin{comment}
-------------------------------------Need to re-write this proof------------------\\
For every pair between vertices in $\{a_3\}\cup A_2\cup A_3$, use the vertex $a_2$ to join them. If $(a_3,a_2) \in \mathcal{P}$, then simply join them via the edge between them. Also, whenever $a_1$ is paired with some vertex $v_t$ in $A_2$, use the edge $(a_1,v_t)$, and similarly if $a_4$ is paired with a vertex in $A_4$.
We split our analysis into three cases, according to how $\mathcal{P}$ pairs the vertices in $\{a_1,a_2,a_3,a_4\}$.
\begin{itemize}
\item[i)]$(a_1,a_4)\in \mathcal{P}$; \\
To join $(a_1,a_4)$ use the path $(a_1,v_1,a_3,w_1,a_4)$.
Then either $a_3$ is paired with $a_2$ (which we can solve) or $a_3$ is paired with some vertex in $A_2\cup A_3$, say $v_i$; in that case use the path $(a_3,v_k,v_{k-1},\ldots,v_i)$.
\item[ii)] $(a_1,a_2) \in \mathcal{P}$; (when $(a_4,a_2) \in \mathcal{P}$, it is symmetric)\\
Firstly assume that $a_3$ and $a_4$ are paired with vertices $v_{j_1},v_{j_2} \in A_2$ with $j_1\leq j_2$, respectively (the other case is symmetric). Then use the path $(a_1,v_j,a_2)$ to join $a_1$ to $a_2$, the path $(a_3,v_1,v_2,\ldots,v_{j_1})$ to join $a_3$ to $v_{j_1}$, and the path $(a_4,w_1,a_3,v_k,v_{k-1},\ldots,v_{j_2})$. When $a_3$ is paired with a vertex $w_j \in A_4$, the same argument works. If $a_3$ is paired with $a_4$, then use the path $(a_1,v_1,a_3,a_2)$ to join $a_1$ to $a_2$ and the path $(a_3,w_1,a_4)$. When $a_3$ is paired with some vertex $w_l \in A_4$ and $a_4$ is paired with some vertex $v_j \in A_2$, then, as before, use the path $(a_1,v_j,a_2)$ to join $a_1$ to $a_2$, the path $(a_3,w_1,w_2,\ldots,w_l)$ to join $a_3$ to $w_l$, and the path $(a_4,w_k,a_3,v_1,v_2,\ldots,v_j)$.
\item[iii)] $a_1$ and $a_4$ are paired with some vertices $w_j \in A_4$ and $v_j \in A_2$, respectively.
Then use the paths $(a_1,v_1,a_3,w_1,a_2,w_j)$ and $(a_2,w_k,a_3,v_k,a_1,w_j)$ to join $a_1$ to $w_j$ and $a_4$ to $v_j$, respectively. Now if $a_3$ is paired with $w_l$ (or $v_l$), then join them via $(a_3,a_2,w_l)$ (or $(a_3,a_2,v_l)$).
\end{itemize}
Our case analysis exhausts all possibilities, so the graph is path-pairable.
\end{comment}
\bibliographystyle{acm}
| {
"timestamp": "2017-07-14T02:07:51",
"yymm": "1707",
"arxiv_id": "1707.04247",
"language": "en",
"url": "https://arxiv.org/abs/1707.04247",
"abstract": "A graph is path-pairable if for any pairing of its vertices there exist edge disjoint paths joining the vertices in each pair. We obtain sharp bounds on the maximum possible diameter of path-pairable graphs which either have a given number of edges, or are c- degenerate. Along the way we show that a large family of graphs obtained by blowing up a path is path-pairable, which may be of independent interest.",
"subjects": "Combinatorics (math.CO)",
"title": "On the maximum diameter of path-pairable graphs"
} |
https://arxiv.org/abs/1812.11169 | Spherical harmonic d-tensors | Tensor harmonics are a useful mathematical tool for finding solutions to differential equations which transform under a particular representation of the rotation group $\mathrm{SO}(3)$. The aim of this work is to make use of this tool also in the setting of Finsler geometry, or more general geometries on the tangent bundle, where the objects of relevance are d-tensors on the tangent bundle, or tensors in a pullback bundle, instead of ordinary tensors. For this purpose, we construct a set of d-tensor harmonics for spherical symmetry and show how these can be used for calculations in Finsler geometry. | \section{Introduction}\label{sec:intro}
It is commonly understood that problems in differential geometry and its applications in physics simplify if they exhibit symmetries, such as spherical or planar symmetry, which most commonly enter physics via the action of a corresponding symmetry group on some underlying space or spacetime manifold. Symmetries under such a group action are commonly encoded in geometric objects defined on these manifolds which are invariant under the group action, or transform under particular representations of the symmetry group. A common example is the hydrogen atom, whose wave function is a complex function on the space manifold \(M = \mathbf{R}^3\), and where the spherical symmetry of the potential allows one to express the angular part of the eigenfunctions of the Hamiltonian in terms of spherical harmonics, which are the eigenfunctions of the angular momentum operator. Another example is given by electromagnetic and gravitational radiation from a point-like source, which is described by tensor fields, and whose multipole expansion can be described by suitable representations of the rotation group on the space of tensor fields.
While the geometric objects in the aforementioned examples have in common that they are defined on the underlying manifold itself, one finds a different situation in Finsler and spray geometry. In these cases one rather deals with geometric objects defined on the tangent bundle of the underlying manifold. In the simplest case, such as that of a Finsler function, the relevant object is simply a real or complex function on the tangent bundle. Another very commonly encountered class of objects is given by d-tensors. These objects have in common that they have a well-defined transformation behavior under the action of a symmetry group on the underlying base manifold via diffeomorphisms. This naturally raises the question of suitable bases for the spaces containing these objects, which can be decomposed into irreducible representations of the symmetry group, in analogy to the well-known spherical harmonics.
The aim of this article is to construct a suitable generalization of spherical harmonics to the space of d-tensors of arbitrary rank over the manifold \(M = \mathbf{R}^3\), which allows for a decomposition into irreducible representations of the rotation group. Note that this in particular includes the case of d-tensors of rank \(0\), which are simply functions on the tangent bundle, and which will be the starting point of our construction. We essentially follow the same steps as done for tensor spherical harmonics~\cite{James:1976}, and so the d-tensors we construct will inherit several properties of the tensor spherical harmonics on which our construction is based.
The article is structured as follows. We start with a brief review of the relevant mathematical notions and introduce the necessary objects for the description of spherical symmetry in section~\ref{sec:pre}. We then construct the spherical harmonic d-tensors in two steps. The simplest case of rank \(0\), which is simply given by functions on the tangent bundle \(TM\), will be discussed in section~\ref{sec:scalar}. This will turn out to be a necessary building block for the case of general rank discussed in section~\ref{sec:dtensor}. As an illustrative example, we calculate the Finsler metric of the most general spherically symmetric Finsler space in terms of spherical harmonic d-tensors in section~\ref{sec:app}. We end with a conclusion in section~\ref{sec:conclusion}.
\section{Preliminaries}\label{sec:pre}
Before we present our construction of spherical harmonic d-tensors, we clarify a few necessary notions. We start with the definition of d-tensors using the pullback formalism in section~\ref{ssec:pullback}. We then show how these d-tensors transform under diffeomorphisms of the base manifold in section~\ref{ssec:sym}. In order to be able to discuss spherical symmetry, we introduce suitable coordinates on the tangent bundle in section~\ref{ssec:coord}. Finally, in section~\ref{ssec:spher} we display the generators of rotations in our chosen coordinates.
\subsection{Pullback bundle formalism and d-tensors}\label{ssec:pullback}
One of the most important classes of objects in Finsler or spray geometry is given by \emph{distinguished tensors}, or simply d-tensors. These can be defined in different ways, the most common definition being as tensors over the tangent bundle \(TM\) of a manifold \(M\), whose components are purely horizontal with respect to some given splitting \(TTM = HTM \oplus VTM\) of the double tangent bundle and a corresponding splitting of \(T^*TM\). For our purposes, however, a different definition in terms of pullback bundles~\cite{Szilasi:2003} is more convenient. For this purpose we introduce the pullback bundle \(\pi: PM \to TM\), where \(PM = TM \times_M TM\) is a fibered product and \(\pi\) is the projection onto the first factor of this product. It is a vector bundle over \(TM\), whose fibers are isomorphic to the fibers of \(TM\), and whose dual is given by \(P^*M = TM \times_M T^*M\).
The pullback bundle is closely related to the tangent bundle \(\tau: TM \to M\) and the double tangent bundle \(\varpi: TTM \to TM\). This can be seen after defining the two maps
\begin{equation}
\func{\mathbf{i}}{PM}{TTM}{(v,w)}{\left.\frac{d}{dt}(v + tw)\right|_{t = 0}}
\end{equation}
and
\begin{equation}
\func{\mathbf{j}}{TTM}{PM}{\xi}{\left(\varpi(\xi), \tau_*(\xi)\right)}\,,
\end{equation}
called the \emph{vertical} and \emph{horizontal morphism}, respectively. One can show that they form an exact sequence
\begin{equation}\label{eq:exseq}
0 \rightarrow PM \xrightarrow{\mathbf{i}} TTM \xrightarrow{\mathbf{j}} PM \rightarrow 0
\end{equation}
of vector bundle morphisms over \(TM\).
From the pullback bundle and its dual one can now construct the tensor bundles
\begin{equation}
P^r_sM = \underbrace{PM \otimes \cdots \otimes PM}_{r \text{ times}} \otimes \underbrace{P^*M \otimes \cdots \otimes P^*M}_{s \text{ times}}\,,
\end{equation}
which are again bundles over \(TM\). We then call a \emph{d-tensor} of rank \((r,s)\) a section of the corresponding bundle \(\pi^r_s: P^r_sM \to TM\).
Given coordinates \((x^a)\) on \(M\), one has a coordinate basis \((\partial_a)\) on \(TM\), which allows to write any tangent vector \(v \in T_xM\) in the form \(v = \dot{x}^a\partial_a\), where we introduced coordinates \((\dot{x}^a)\) on \(T_xM\). This yields a set of induced coordinates \((x^a, \dot{x}^a)\) on \(TM\). Further, recall that the fiber \(P_vM\) of \(PM\) over \(v \in TM\) is canonically isomorphic to the fiber \(T_{\tau(v)}M\). One may therefore canonically lift the basis vector fields \(\partial_a\) on \(M\) to sections of \(PM\), which likewise have the property to form a basis at each \(v \in TM\). The same holds for \(P^*M\) and the basis covector fields \(dx^a\). This allows us to write any d-tensor \(A\) of rank \((r,s)\) in components as
\begin{equation}
A(x,\dot{x}) = A^{a_1 \cdots a_r}{}_{b_1 \cdots b_s}(x,\dot{x})\partial_{a_1} \otimes \cdots \otimes \partial_{a_r} \otimes dx^{b_1} \otimes \cdots \otimes dx^{b_s}\,.
\end{equation}
This component expression is also helpful for calculating the transformation behavior under symmetry transformations, as we discuss next.
\subsection{Transformation of d-tensors under diffeomorphisms}\label{ssec:sym}
We now briefly review the transformation behavior of d-tensors under finite and infinitesimal diffeomorphisms from the manifold \(M\) to itself. Let \(\varphi: M \to M\) be a diffeomorphism. Its differential \(\varphi_*: TM \to TM\) is a diffeomorphism from \(TM\) to itself. This lift of \(\varphi\) to the tangent bundle is a functorial lift; it shows that \(TM\) is an example of a natural bundle~\cite{Kolar:1993}. In induced coordinates \((x^a, \dot{x}^a)\) we may define \(x' = \varphi(x)\) and find that \(\varphi_*\) is expressed as
\begin{equation}
\dot{x}'^a = \frac{\partial x'^a}{\partial x^b}\dot{x}^b\,.
\end{equation}
Given a d-tensor \(A\) of rank \((r,s)\), we may calculate its pullback \(\varphi^*A\) as~\cite{Tashiro:1959}
\begin{equation}
(\varphi^*A)^{a_1 \cdots a_r}{}_{b_1 \cdots b_s}(x,\dot{x}) = A^{c_1 \cdots c_r}{}_{d_1 \cdots d_s}(x',\dot{x}')\frac{\partial x^{a_1}}{\partial x'^{c_1}} \cdots \frac{\partial x^{a_r}}{\partial x'^{c_r}}\frac{\partial x'^{d_1}}{\partial x^{b_1}} \cdots \frac{\partial x'^{d_s}}{\partial x^{b_s}}\,.
\end{equation}
If we are given a one-parameter group \(t \mapsto \varphi_t\) of diffeomorphisms instead, which is generated by the vector field
\begin{equation}
\func{\xi}{M}{TM}{x}{\left.\frac{d}{dt}\varphi_t(x)\right|_{t = 0}}\,,
\end{equation}
then we obtain the Lie derivative of a d-tensor as
\begin{equation}
\begin{split}
(\mathcal{L}_{\xi}A)^{a_1 \cdots a_r}{}_{b_1 \cdots b_s} &= \left.\frac{d}{dt}(\varphi_t^*A)^{a_1 \cdots a_r}{}_{b_1 \cdots b_s}\right|_{t = 0}\\
&= \xi^c\partial_cA^{a_1 \cdots a_r}{}_{b_1 \cdots b_s} + \dot{x}^d\partial_d\xi^c\dot{\partial}_cA^{a_1 \cdots a_r}{}_{b_1 \cdots b_s}\\
&\phantom{=}- \partial_c\xi^{a_1}A^{ca_2 \cdots a_r}{}_{b_1 \cdots b_s} - \cdots - \partial_c\xi^{a_r}A^{a_1 \cdots a_{r-1}c}{}_{b_1 \cdots b_s}\\
&\phantom{=}+ \partial_{b_1}\xi^cA^{a_1 \cdots a_r}{}_{cb_2 \cdots b_s} + \cdots + \partial_{b_s}\xi^cA^{a_1 \cdots a_r}{}_{b_1 \cdots b_{s-1}c}\,.
\end{split}
\end{equation}
The vector field \(\xi^c\partial_c + \dot{x}^d\partial_d\xi^c\dot{\partial}_c\) on \(TM\), which generates the one-parameter group of diffeomorphisms \(\varphi_{t*}\), is called the complete lift~\cite{Yano:1973} of \(\xi\).
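The complete lift can be checked mechanically. The following sympy snippet (an added illustration, not part of the original text) differentiates the lifted flow of the rotation generator \(\xi = -x^1\partial_2 + x^2\partial_1\) at \(t = 0\) and recovers exactly the components \(\xi^c\partial_c + \dot{x}^d\partial_d\xi^c\dot{\partial}_c\):

```python
import sympy as sp

t, x1, x2, x3, dx1, dx2, dx3 = sp.symbols('t x1 x2 x3 dx1 dx2 dx3', real=True)

# Flow of xi = -x^1 d_2 + x^2 d_1: rotation about the x^3-axis by angle -t.
R = sp.Matrix([[sp.cos(t), sp.sin(t), 0],
               [-sp.sin(t), sp.cos(t), 0],
               [0, 0, 1]])

# The lift phi_{t*} acts on induced coordinates as (x, xdot) -> (R x, R xdot).
flow = sp.Matrix.vstack(R * sp.Matrix([x1, x2, x3]), R * sp.Matrix([dx1, dx2, dx3]))

# Differentiating at t = 0 gives the components of the complete lift of xi.
lift = flow.diff(t).subs(t, 0)

# Expected from xi^c d_c + xdot^d (d_d xi^c) ddot_c with xi = (x2, -x1, 0):
expected = sp.Matrix([x2, -x1, 0, dx2, -dx1, 0])
assert lift == expected
```

The same computation with a general linear flow reproduces the complete-lift formula verbatim; the rotation is chosen only because it reappears in the discussion of spherical symmetry below.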
\subsection{Coordinates}\label{ssec:coord}
In this article we make use of different sets of coordinates on \(TM\). We start by introducing Cartesian coordinates \((x^a, a = 1, \ldots, 3)\) on \(M\). These induce a set of canonical coordinate basis vector fields, which we denote by \(\partial_a\), and which allow us to introduce coordinates \((\dot{x}^a, a = 1, \ldots, 3)\) on every fiber of \(TM\) by expanding each tangent vector as \(\dot{x}^a\partial_a\); together with \((x^a)\) these yield the induced coordinates on \(TM\). The corresponding coordinate basis of \(TTM\) will be denoted by \((\partial_a, \dot{\partial}_a)\).
Starting from the Cartesian coordinates, we define two more sets of coordinates on \(TM\). The first set \((r, \theta, \phi, \bar{r}, \alpha, \beta)\), which we call co-rotated spherical coordinates, is defined by
\begin{subequations}
\begin{align}
\left(\begin{array}{c}
x^1\\
x^2\\
x^3
\end{array}\right) &= \left(\begin{array}{ccc}
\cos\phi & -\sin\phi & 0\\
\sin\phi & \cos\phi & 0\\
0 & 0 & 1
\end{array}\right) \cdot \left(\begin{array}{ccc}
\cos\theta & 0 & \sin\theta\\
0 & 1 & 0\\
-\sin\theta & 0 & \cos\theta
\end{array}\right) \cdot \left(\begin{array}{c}
0\\
0\\
r
\end{array}\right)\\
\left(\begin{array}{c}
\dot{x}^1\\
\dot{x}^2\\
\dot{x}^3
\end{array}\right) &= \left(\begin{array}{ccc}
\cos\phi & -\sin\phi & 0\\
\sin\phi & \cos\phi & 0\\
0 & 0 & 1
\end{array}\right) \cdot \left(\begin{array}{ccc}
\cos\theta & 0 & \sin\theta\\
0 & 1 & 0\\
-\sin\theta & 0 & \cos\theta
\end{array}\right)\nonumber\\
&\phantom{=}\cdot \left(\begin{array}{ccc}
\cos\beta & -\sin\beta & 0\\
\sin\beta & \cos\beta & 0\\
0 & 0 & 1
\end{array}\right) \cdot \left(\begin{array}{ccc}
\cos\alpha & 0 & \sin\alpha\\
0 & 1 & 0\\
-\sin\alpha & 0 & \cos\alpha
\end{array}\right) \cdot \left(\begin{array}{c}
0\\
0\\
\bar{r}
\end{array}\right)
\end{align}
\end{subequations}
The second set \((r, \theta, \phi, \bar{\rho}, \bar{z}, \beta)\) will be called co-rotated cylindrical coordinates and defined as
\begin{subequations}
\begin{align}
\left(\begin{array}{c}
x^1\\
x^2\\
x^3
\end{array}\right) &= \left(\begin{array}{ccc}
\cos\phi & -\sin\phi & 0\\
\sin\phi & \cos\phi & 0\\
0 & 0 & 1
\end{array}\right) \cdot \left(\begin{array}{ccc}
\cos\theta & 0 & \sin\theta\\
0 & 1 & 0\\
-\sin\theta & 0 & \cos\theta
\end{array}\right) \cdot \left(\begin{array}{c}
0\\
0\\
r
\end{array}\right)\\
\left(\begin{array}{c}
\dot{x}^1\\
\dot{x}^2\\
\dot{x}^3
\end{array}\right) &= \left(\begin{array}{ccc}
\cos\phi & -\sin\phi & 0\\
\sin\phi & \cos\phi & 0\\
0 & 0 & 1
\end{array}\right) \cdot \left(\begin{array}{ccc}
\cos\theta & 0 & \sin\theta\\
0 & 1 & 0\\
-\sin\theta & 0 & \cos\theta
\end{array}\right) \cdot \left(\begin{array}{c}
\bar{\rho}\cos\beta\\
\bar{\rho}\sin\beta\\
\bar{z}
\end{array}\right)
\end{align}
\end{subequations}
They are obviously related to each other by
\begin{equation}\label{eq:sphercyltrans}
\bar{\rho} = \bar{r}\sin\alpha\,, \quad
\bar{z} = \bar{r}\cos\alpha\,,
\end{equation}
similar to the usual spherical and cylindrical coordinates. The usefulness of these coordinates will become apparent later.
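As a quick sanity check (not from the paper), the relation~\eqref{eq:sphercyltrans} can be verified with sympy by comparing the two expressions for \((\dot{x}^a)\):

```python
import sympy as sp

r, theta, phi, rbar, alpha, beta = sp.symbols('r theta phi rbar alpha beta', real=True)
rhobar, zbar = sp.symbols('rhobar zbar', real=True)

def Rz(a):  # rotation about the 3-axis
    return sp.Matrix([[sp.cos(a), -sp.sin(a), 0],
                      [sp.sin(a), sp.cos(a), 0],
                      [0, 0, 1]])

def Ry(a):  # rotation about the 2-axis
    return sp.Matrix([[sp.cos(a), 0, sp.sin(a)],
                      [0, 1, 0],
                      [-sp.sin(a), 0, sp.cos(a)]])

# xdot in co-rotated spherical and co-rotated cylindrical coordinates:
xdot_spher = Rz(phi) * Ry(theta) * Rz(beta) * Ry(alpha) * sp.Matrix([0, 0, rbar])
xdot_cyl = Rz(phi) * Ry(theta) * sp.Matrix([rhobar*sp.cos(beta), rhobar*sp.sin(beta), zbar])

# They agree under rhobar = rbar*sin(alpha), zbar = rbar*cos(alpha):
residual = xdot_cyl.subs({rhobar: rbar*sp.sin(alpha), zbar: rbar*sp.cos(alpha)}) - xdot_spher
assert sp.simplify(residual) == sp.zeros(3, 1)
```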
We finally remark that by introducing Cartesian coordinates on \(M\) together with bases \((\partial_a)\) of \(TM\) and \((dx^a)\) of \(T^*M\) we have also introduced bases of the corresponding pullback bundles over \(TM\), which are simply obtained by pullback via the fiber isomorphisms. We will use the same notation for these bases. Hence, we have also defined bases for d-tensors, and we will use them throughout the remainder of this article.
\subsection{Spherical symmetry}\label{ssec:spher}
We now use the previously defined coordinates for the description of spherical symmetry, i.e., symmetry under rotations. Recall that rotations in Euclidean geometry can be defined as diffeomorphisms that keep the origin fixed and preserve distances measured with the canonical Euclidean metric \(\delta_{ab} = \mathrm{diag}(1,1,1)\). For our purpose it is sufficient to consider the generating vector fields of rotations, which in Cartesian coordinates take the simple forms
\begin{equation}
\mathbf{r}_1 = -x^2\partial_3 + x^3\partial_2\,, \quad \mathbf{r}_2 = -x^3\partial_1 + x^1\partial_3\,, \quad \mathbf{r}_3 = -x^1\partial_2 + x^2\partial_1\,.
\end{equation}
In order to apply them to d-tensors and functions on \(TM\), we also need their complete lifts. In Cartesian induced coordinates they are given by
\begin{subequations}
\begin{align}
\hat{\mathbf{r}}_1 &= -x^2\partial_3 + x^3\partial_2 - \dot{x}^2\dot{\partial}_3 + \dot{x}^3\dot{\partial}_2\,,\\
\hat{\mathbf{r}}_2 &= -x^3\partial_1 + x^1\partial_3 - \dot{x}^3\dot{\partial}_1 + \dot{x}^1\dot{\partial}_3\,,\\
\hat{\mathbf{r}}_3 &= -x^1\partial_2 + x^2\partial_1 - \dot{x}^1\dot{\partial}_2 + \dot{x}^2\dot{\partial}_1\,.
\end{align}
\end{subequations}
These expressions are still simple, but not the most useful for the construction of harmonics. We therefore express the complete lifts in the other sets of coordinates we have introduced, and find that they take the same form
\begin{subequations}
\begin{align}
\hat{\mathbf{r}}_1 &= \sin\phi\partial_{\theta} + \frac{\cos\phi}{\tan\theta}\partial_{\phi} - \frac{\cos\phi}{\sin\theta}\partial_{\beta}\,,\\
\hat{\mathbf{r}}_2 &= -\cos\phi\partial_{\theta} + \frac{\sin\phi}{\tan\theta}\partial_{\phi} - \frac{\sin\phi}{\sin\theta}\partial_{\beta}\,,\\
\hat{\mathbf{r}}_3 &= -\partial_{\phi}
\end{align}
\end{subequations}
in both co-rotated spherical and co-rotated cylindrical coordinates. Note that they act only on the coordinates \(\theta, \phi, \beta\) and leave the other coordinates invariant. This will significantly simplify our task of constructing harmonics in the following sections. We finally define the operators \(\mathcal{R}_j\), which act on d-tensors \(A\) as
\begin{equation}
\mathcal{R}_jA = i\mathcal{L}_{\mathbf{r}_j}A\,,
\end{equation}
and we will use this notation throughout the remainder of this article.
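The commutation relations of the lifted generators can be verified symbolically. The following sympy sketch (added for illustration) checks \([\hat{\mathbf{r}}_j, \hat{\mathbf{r}}_k] = \epsilon_{jkl}\hat{\mathbf{r}}_l\) in the co-rotated coordinates, which is equivalent to \([\mathcal{R}_j, \mathcal{R}_k] = i\epsilon_{jkl}\mathcal{R}_l\) for the operators \(\mathcal{R}_j = i\mathcal{L}_{\mathbf{r}_j}\) acting on functions:

```python
import sympy as sp

theta, phi, beta = sp.symbols('theta phi beta', real=True)
F = sp.Function('F')(theta, phi, beta)

def vec(a, b, c):
    """First-order operator a*d_theta + b*d_phi + c*d_beta acting on expressions."""
    return lambda f: a*f.diff(theta) + b*f.diff(phi) + c*f.diff(beta)

rhat = {1: vec(sp.sin(phi), sp.cos(phi)/sp.tan(theta), -sp.cos(phi)/sp.sin(theta)),
        2: vec(-sp.cos(phi), sp.sin(phi)/sp.tan(theta), -sp.sin(phi)/sp.sin(theta)),
        3: vec(0, -1, 0)}

# [r_j, r_k] = eps_{jkl} r_l on an arbitrary function F(theta, phi, beta):
for j, k, l in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
    comm = rhat[j](rhat[k](F)) - rhat[k](rhat[j](F))
    assert sp.simplify(comm - rhat[l](F)) == 0
```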
\section{Tangent bundle spherical harmonics}\label{sec:scalar}
We now continue with the construction of functions on the tangent bundle \(TM\), which form irreducible representations of the algebra of rotation operators \(\mathcal{R}_j\) introduced in the previous section. This will be done in several steps. We start by enlarging the algebra by introducing another set of operators in section~\ref{ssec:corot}. We then choose a particular set of commuting operators, and construct their eigenfunctions in section~\ref{ssec:eigen}. This will lead us to the definition of tangent bundle spherical harmonics in section~\ref{ssec:scalharm}. We list a few special cases in section~\ref{ssec:scalspecial}, and discuss their properties in section~\ref{ssec:scalprop}.
\subsection{Co-rotation operators}\label{ssec:corot}
We start our discussion of harmonic functions by introducing another set of operators, which act on functions on the (slit) tangent bundle \(TM\). For this purpose we introduce the vector fields
\begin{subequations}
\begin{align}
\mathbf{b}_1 &= \sin\beta\partial_{\theta} + \frac{\cos\beta}{\tan\theta}\partial_{\beta} - \frac{\cos\beta}{\sin\theta}\partial_{\phi}\,,\\
\mathbf{b}_2 &= -\cos\beta\partial_{\theta} + \frac{\sin\beta}{\tan\theta}\partial_{\beta} - \frac{\sin\beta}{\sin\theta}\partial_{\phi}\,,\\
\mathbf{b}_3 &= -\partial_{\beta}
\end{align}
\end{subequations}
on \(TM\). Note that their expressions are identical in co-rotated spherical and co-rotated cylindrical coordinates, so that we do not have to distinguish between these two sets of coordinates at this point. It is important to note that they are \emph{not} the complete lifts of vector fields on the base manifold \(M\), and so they cannot be applied to d-tensors as discussed in section~\ref{ssec:sym}, but only to functions (or ordinary tensor fields) on \(TM\). Their action on functions \(f \in C^{\infty}\left(TM\right)\) is given by the usual Lie derivative, which allows us to define the operators
\begin{equation}
\mathcal{B}_jf = i\mathcal{L}_{\mathbf{b}_j}f\,,
\end{equation}
which we call \emph{co-rotation operators}. These operators satisfy the algebra relations
\begin{equation}
[\mathcal{B}_j,\mathcal{B}_k] = i\epsilon_{jkl}\mathcal{B}_l\,, \quad [\mathcal{B}_j,\mathcal{R}_k] = 0\,,
\end{equation}
where in the latter the restriction of \(\mathcal{R}_j\) to functions on \(TM\) is understood. We further define the operators (and analogously \(\mathcal{R}_{\pm} = \mathcal{R}_1 \pm i\mathcal{R}_2\), \(\mathcal{R}_z = \mathcal{R}_3\) and \(\mathcal{R}^2 = \mathcal{R}_1^2 + \mathcal{R}_2^2 + \mathcal{R}_3^2\) for the rotation operators)
\begin{equation}
\mathcal{B}_{\pm} = \mathcal{B}_1 \pm i\mathcal{B}_2\,, \quad \mathcal{B}_z = \mathcal{B}_3\,, \quad \mathcal{B}^2 = \mathcal{B}_1^2 + \mathcal{B}_2^2 + \mathcal{B}_3^2 = \mathcal{R}^2\,.
\end{equation}
They satisfy the algebra relations
\begin{equation}
[\mathcal{B}_z,\mathcal{B}_{\pm}] = \pm\mathcal{B}_{\pm}\,, \quad [\mathcal{B}_+,\mathcal{B}_-] = 2\mathcal{B}_z\,, \quad [\mathcal{B}_{\pm},\mathcal{B}^2] = [\mathcal{B}_z,\mathcal{B}^2] = 0\,.
\end{equation}
We see that the operators \(\mathcal{B}_j\) and \(\mathcal{R}_j\) together satisfy the algebra of rotations of a rigid body.
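The stated algebra can again be checked symbolically. The sympy sketch below (illustration only, not part of the text) verifies \([\mathbf{b}_j, \mathbf{b}_k] = \epsilon_{jkl}\mathbf{b}_l\) and that the \(\mathbf{b}_j\) commute with the lifted rotation generators, which implies \([\mathcal{B}_j,\mathcal{B}_k] = i\epsilon_{jkl}\mathcal{B}_l\) and \([\mathcal{B}_j,\mathcal{R}_k] = 0\) on functions:

```python
import sympy as sp

theta, phi, beta = sp.symbols('theta phi beta', real=True)
F = sp.Function('F')(theta, phi, beta)

def vec(a, b, c):
    return lambda f: a*f.diff(theta) + b*f.diff(phi) + c*f.diff(beta)

# Lifted rotation generators, and co-rotation generators (phi and beta swapped):
rhat = {1: vec(sp.sin(phi), sp.cos(phi)/sp.tan(theta), -sp.cos(phi)/sp.sin(theta)),
        2: vec(-sp.cos(phi), sp.sin(phi)/sp.tan(theta), -sp.sin(phi)/sp.sin(theta)),
        3: vec(0, -1, 0)}
b = {1: vec(sp.sin(beta), -sp.cos(beta)/sp.sin(theta), sp.cos(beta)/sp.tan(theta)),
     2: vec(-sp.cos(beta), -sp.sin(beta)/sp.sin(theta), sp.sin(beta)/sp.tan(theta)),
     3: vec(0, 0, -1)}

# [b_j, b_k] = eps_{jkl} b_l:
for j, k, l in [(1, 2, 3), (2, 3, 1), (3, 1, 2)]:
    assert sp.simplify(b[j](b[k](F)) - b[k](b[j](F)) - b[l](F)) == 0

# The b_j commute with all lifted rotations:
for j in (1, 2, 3):
    for k in (1, 2, 3):
        assert sp.simplify(b[j](rhat[k](F)) - rhat[k](b[j](F))) == 0
```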
\subsection{Eigenvalues and eigenfunctions}\label{ssec:eigen}
We now construct irreducible representations of the algebra of rigid rotations spanned by the operators \(\mathcal{B}_j\) and \(\mathcal{R}_j\) above. Since \(\mathcal{R}^2, \mathcal{R}_z, \mathcal{B}_z\) mutually commute, we can find simultaneous eigenfunctions \(f\) which satisfy
\begin{equation}\label{eq:eigenfunc}
\mathcal{R}^2f = l(l + 1)f\,, \quad \mathcal{R}_zf = mf\,, \quad \mathcal{B}_zf = nf\,.
\end{equation}
For this purpose we write a function \(f \in C^{\infty}\left(TM\right)\) using a separation ansatz in the form
\begin{equation}
f(r, \bar{r}, \alpha, \beta, \theta, \phi) = \tilde{f}(r, \bar{r}, \alpha)\Theta(\theta)\Phi(\phi)B(\beta)
\end{equation}
in co-rotated spherical coordinates into a ``radial'' part \(\tilde{f}\) and three functions constituting the ``angular'' part. Note that the operators we consider act only on the angular part, so that we could equally well use co-rotated cylindrical coordinates instead and introduce the separation ansatz
\begin{equation}
f(r, \bar{\rho}, \bar{z}, \beta, \theta, \phi) = \check{f}(r, \bar{\rho}, \bar{z})\Theta(\theta)\Phi(\phi)B(\beta)\,,
\end{equation}
where the functions \(\tilde{f}\) and \(\check{f}\) are related through the coordinate transformation~\eqref{eq:sphercyltrans} by
\begin{equation}
\tilde{f}(r, \bar{r}, \alpha) = \check{f}(r, \bar{r}\sin\alpha, \bar{r}\cos\alpha)\,.
\end{equation}
From this separation ansatz we then obtain the solutions
\begin{equation}
\Phi(\phi) = e^{im\phi}\,, \quad B(\beta) = e^{in\beta}
\end{equation}
for two of the eigenvalue equations. Since the coordinates \(\phi\) and \(\beta\) are $2\pi$-periodic, \(m\) and \(n\) must be integers. We can insert this solution into the equation for \(\mathcal{R}^2\) and obtain
\begin{equation}
\Theta''(\theta) + \frac{\Theta'(\theta)}{\tan\theta} + \left(\frac{2mn\cos\theta - m^2 - n^2}{\sin^2\theta} + l(l + 1)\right)\Theta(\theta) = 0\,.
\end{equation}
This equation can be solved more easily by introducing a new variable \(z = \cos\theta\) and a function \(Z(\cos\theta) = \Theta(\theta)\). The differential equation then reads
\begin{equation}
(1 - z^2)Z''(z) - 2zZ'(z) + \left(\frac{2mnz - m^2 - n^2}{1 - z^2} + l(l + 1)\right)Z(z) = 0\,.
\end{equation}
This equation has the general solution
\begin{multline}
Z(z) = \left(\frac{1 + z}{1 - z}\right)^{\frac{m + n}{2}}\bigg[C_1(1 - z)^m\vphantom{F}_2F_1\left(m - l, l + m + 1; 1 + m - n; \frac{1 - z}{2}\right)\\
+ C_2(1 - z)^n\vphantom{F}_2F_1\left(n - l, l + n + 1; 1 + n - m; \frac{1 - z}{2}\right)\bigg]
\end{multline}
with integration constants \(C_1, C_2\). Here we are interested in regular solutions on the interval \(z \in [-1,1]\). The regular solution is given by
\begin{multline}
Z(z) = C\sqrt{1 + z}^{m + n}\sqrt{1 - z}^{|m - n|}\\
\vphantom{F}_2F_1\left(\max(m,n) - l, \max(m,n) + l + 1; |m - n| + 1; \frac{1 - z}{2}\right)
\end{multline}
with \(l \in \mathbb{N}\) and \(m, n \in \{-l, -l + 1, \ldots, l\}\), where \(C\) is an integration constant. After substituting back to \(\theta\) we thus obtain the solution
\begin{multline}
\Theta(\theta) = C'\cos^{m + n}\frac{\theta}{2}\sin^{|m - n|}\frac{\theta}{2}\\
\vphantom{F}_2F_1\left(\max(m,n) - l, \max(m,n) + l + 1; |m - n| + 1; \sin^2\frac{\theta}{2}\right)\,,
\end{multline}
where \(l, m, n\) take the same values as above. Note that up to the integration constant the functions \(\Theta, \Phi, B\) are uniquely defined by the eigenvalue equations~\eqref{eq:eigenfunc} and the requirement that they are regular.
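One can confirm with sympy that the claimed regular solution indeed satisfies the differential equation for \(Z\). The snippet below is an added check, with the terminating \({}_2F_1\) series written out by hand; it tests a few admissible triples \((l,m,n)\):

```python
import sympy as sp

z = sp.symbols('z', real=True)

def hyp2f1_poly(a, b, c, x):
    """Terminating 2F1 series, valid for a non-positive integer first parameter a."""
    return sum(sp.rf(a, k)*sp.rf(b, k)/(sp.rf(c, k)*sp.factorial(k)) * x**k
               for k in range(-a + 1))

def Z_reg(l, m, n):
    """Regular solution from the text, up to the integration constant."""
    return (sp.sqrt(1 + z)**(m + n) * sp.sqrt(1 - z)**abs(m - n)
            * hyp2f1_poly(max(m, n) - l, max(m, n) + l + 1, abs(m - n) + 1, (1 - z)/2))

def ode_lhs(l, m, n, Z):
    return ((1 - z**2)*Z.diff(z, 2) - 2*z*Z.diff(z)
            + ((2*m*n*z - m**2 - n**2)/(1 - z**2) + l*(l + 1))*Z)

for (l, m, n) in [(1, 0, 0), (2, 1, 0), (2, 1, 1), (2, -1, 1)]:
    assert sp.simplify(ode_lhs(l, m, n, Z_reg(l, m, n))) == 0
```

For instance, \((l,m,n) = (2,1,0)\) gives \(Z(z) \propto z\sqrt{1-z^2}\), the associated Legendre function \(P_2^1\), as expected since for \(n = 0\) the equation reduces to the associated Legendre equation.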
\subsection{Definition and explicit formula}\label{ssec:scalharm}
We can now define a family of functions
\begin{equation}
\begin{split}
\mathcal{Y}_{l,m,n}(\theta,\phi,\beta) &= N_{l,m,n}e^{im\phi}e^{in\beta}\cos^{m + n}\frac{\theta}{2}\sin^{|m - n|}\frac{\theta}{2}\\
&\phantom{=}\cdot \vphantom{F}_2F_1\left(\max(m,n) - l, \max(m,n) + l + 1; |m - n| + 1; \sin^2\frac{\theta}{2}\right)\,,
\end{split}
\end{equation}
where the normalization constants
\begin{equation}
N_{l,m,n} = (-1)^{\max(m,n)}\frac{\sqrt{(2l + 1)}}{|m - n|!}\sqrt{\frac{(l - \min(m,n))!(l + \max(m,n))!}{(l - \max(m,n))!(l + \min(m,n))!}}
\end{equation}
are chosen so that
\begin{equation}
\int_{0}^{2\pi}\int_{0}^{2\pi}\int_{0}^{\pi}|\mathcal{Y}_{l,m,n}(\theta,\phi,\beta)|^2\sin\theta\,d\theta\,d\phi\,d\beta = 8\pi^2\,.
\end{equation}
We call these functions the tangent bundle spherical harmonics. Note that up to the choice of the normalization constants and notation they are the same as the Wigner D-matrix elements, which are defined as
\begin{multline}
D^l_{m,n}(\phi, \theta, \beta) = (-1)^{m-n}\sqrt{(l+m)!(l-m)!(l+n)!(l-n)!}e^{-im\phi}e^{-in\beta}\\
\sum_s\frac{(-1)^s}{s!(l+n-s)!(l-m-s)!(m-n+s)!}\cos^{2l+n-m-2s}\frac{\theta}{2}\sin^{m-n+2s}\frac{\theta}{2}\,.
\end{multline}
One finds that \(\mathcal{Y}_{l,m,n} = (-1)^m\sqrt{2l + 1}D^l_{-m,-n}\).
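The explicit formula can be exercised directly. The sympy sketch below (added here as a sanity check, with the terminating \({}_2F_1\) series written out by hand) assembles \(\mathcal{Y}_{l,m,n}\), confirms the special value \(\mathcal{Y}_{1,0,0} = \sqrt{3}\cos\theta\), the normalization to \(8\pi^2\), and the eigenvalue equations for \(\mathcal{R}_z = -i\partial_\phi\) and \(\mathcal{B}_z = -i\partial_\beta\):

```python
import sympy as sp

theta, phi, beta = sp.symbols('theta phi beta', real=True)

def hyp2f1_poly(a, b, c, x):
    return sum(sp.rf(a, k)*sp.rf(b, k)/(sp.rf(c, k)*sp.factorial(k)) * x**k
               for k in range(-a + 1))

def Y(l, m, n):
    """Tangent bundle spherical harmonic, assembled from the explicit formula."""
    N = (sp.Integer(-1)**max(m, n) * sp.sqrt(2*l + 1) / sp.factorial(abs(m - n))
         * sp.sqrt(sp.factorial(l - min(m, n)) * sp.factorial(l + max(m, n))
                   / (sp.factorial(l - max(m, n)) * sp.factorial(l + min(m, n)))))
    return (N * sp.exp(sp.I*m*phi) * sp.exp(sp.I*n*beta)
            * sp.cos(theta/2)**(m + n) * sp.sin(theta/2)**abs(m - n)
            * hyp2f1_poly(max(m, n) - l, max(m, n) + l + 1, abs(m - n) + 1,
                          sp.sin(theta/2)**2))

# Special value: Y_{1,0,0} = sqrt(3) cos(theta).
assert sp.simplify(Y(1, 0, 0) - sp.sqrt(3)*sp.cos(theta)) == 0

for (l, m, n) in [(1, 0, 0), (1, 1, 1), (2, 1, -1)]:
    Ylmn = Y(l, m, n)
    # Eigenvalue equations for R_z = -i d_phi and B_z = -i d_beta:
    assert sp.simplify(-sp.I*Ylmn.diff(phi) - m*Ylmn) == 0
    assert sp.simplify(-sp.I*Ylmn.diff(beta) - n*Ylmn) == 0
    # Normalization: |Y|^2 is independent of phi and beta, so the triple
    # integral reduces to (2 pi)^2 times the theta integral.
    density = sp.simplify(Ylmn * sp.conjugate(Ylmn))
    norm = (2*sp.pi)**2 * sp.integrate(density * sp.sin(theta), (theta, 0, sp.pi))
    assert sp.simplify(norm - 8*sp.pi**2) == 0
```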
\subsection{Special cases}\label{ssec:scalspecial}
We note that there are a number of special cases. For \(n = 0\) the functions do not depend on the fiber coordinate \(\beta\), and reduce to the usual spherical harmonics
\begin{equation}
\mathcal{Y}_{l,m,0}(\theta,\phi,\beta) = \sqrt{4\pi}Y_{lm}(\theta,\phi)\,.
\end{equation}
By the symmetry \(\phi \leftrightarrow \beta\), \(m \leftrightarrow n\), it analogously holds that
\begin{equation}
\mathcal{Y}_{l,0,n}(\theta,\phi,\beta) = \sqrt{4\pi}Y_{ln}(\theta,\beta)
\end{equation}
for \(m = 0\).
\subsection{Properties}\label{ssec:scalprop}
\subsubsection{Operator relations}
The harmonics satisfy the relations
\begin{equation}
\mathcal{R}^2\mathcal{Y}_{l,m,n} = l(l + 1)\mathcal{Y}_{l,m,n}\,, \quad \mathcal{R}_z\mathcal{Y}_{l,m,n} = m\mathcal{Y}_{l,m,n}\,, \quad \mathcal{B}_z\mathcal{Y}_{l,m,n} = n\mathcal{Y}_{l,m,n}\,.
\end{equation}
Functions with identical \(l\) are related by application of the ladder operators
\begin{subequations}
\begin{align}
\mathcal{R}_{\pm}\mathcal{Y}_{l,m,n} &= \sqrt{(l \mp m)(l \pm m + 1)}\mathcal{Y}_{l,m \pm 1,n}\,,\\
\mathcal{B}_{\pm}\mathcal{Y}_{l,m,n} &= \sqrt{(l \mp n)(l \pm n + 1)}\mathcal{Y}_{l,m,n \pm 1}\,.
\end{align}
\end{subequations}
\subsubsection{Complex conjugate}
The complex conjugate is given by\begin{equation}
\overline{\mathcal{Y}_{l,m,n}}(\theta,\phi,\beta) = (-1)^{m + n}\mathcal{Y}_{l,-m,-n}(\theta,\phi,\beta)\,.
\end{equation}
\subsubsection{Orthogonality}
The harmonics are orthogonal,
\begin{equation}
\int_{0}^{2\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\mathcal{Y}_{l,m,n}(\theta,\phi,\beta)\overline{\mathcal{Y}_{l',m',n'}}(\theta,\phi,\beta)\sin\theta\,d\theta\,d\phi\,d\beta = 8\pi^2\delta_{ll'}\delta_{mm'}\delta_{nn'}\,.
\end{equation}
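These relations can be spot-checked in sympy. In the snippet below (added for illustration), the closed-form special values are evaluated by hand from the explicit formula of the harmonics; the nontrivial case is orthogonality in \(l\) at equal \((m,n)\), since distinct \(m\) or \(n\) already vanish through the \(\phi\) and \(\beta\) integrals:

```python
import sympy as sp

theta, phi, beta = sp.symbols('theta phi beta', real=True)

# Closed forms evaluated by hand from the explicit formula:
Y100 = sp.sqrt(3)*sp.cos(theta)
Y200 = sp.sqrt(5)/2*(3*sp.cos(theta)**2 - 1)
Y111 = -sp.sqrt(3)*sp.exp(sp.I*(phi + beta))*sp.cos(theta/2)**2
Y211 = -sp.sqrt(5)*sp.exp(sp.I*(phi + beta))*sp.cos(theta/2)**2*(1 - 4*sp.sin(theta/2)**2)

def inner(f, g):
    integrand = sp.simplify(f*sp.conjugate(g)*sp.sin(theta))
    return sp.integrate(integrand, (theta, 0, sp.pi), (phi, 0, 2*sp.pi), (beta, 0, 2*sp.pi))

assert inner(Y100, Y100) == 8*sp.pi**2   # normalization
assert inner(Y100, Y200) == 0            # same (m, n), different l
assert inner(Y111, Y211) == 0            # same (m, n), different l
assert inner(Y111, Y100) == 0            # phases in phi, beta integrate to zero
```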
\subsubsection{Product rule}
The product of two harmonics can be written as
\begin{equation}
\begin{split}
\mathcal{Y}_{l,m,n}\mathcal{Y}_{l',m',n'} &= \sum_{l''}\sqrt{\frac{(2l + 1)(2l' + 1)}{2l'' + 1}}C^{l,l',l''}_{m,m',m+m'}C^{l,l',l''}_{n,n',n+n'}\mathcal{Y}_{l'',m+m',n+n'}\\
&= (-1)^{m+m'+n+n'}\sum_{l''}\sqrt{(2l + 1)(2l' + 1)(2l'' + 1)}\\
&\phantom{=}\cdot \left(\begin{array}{ccc}
l & l' & l''\\
m & m' & -m - m'
\end{array}\right)\left(\begin{array}{ccc}
l & l' & l''\\
n & n' & -n - n'
\end{array}\right)\mathcal{Y}_{l'',m+m',n+n'}\,,
\end{split}
\end{equation}
where \(C^{l,l',l''}_{m,m',m''}\) denotes the Clebsch-Gordan coefficients and the terms in the sum are non-vanishing only for
\begin{equation}
\max(|l - l'|, |m + m'|, |n + n'|) \leq l'' \leq l + l'\,.
\end{equation}
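As an added consistency check of the product rule (not part of the paper), sympy's Clebsch-Gordan coefficients reproduce, e.g., \(\mathcal{Y}_{1,0,0}^2 = 3\cos^2\theta\), using the special values \(\mathcal{Y}_{l,0,0} = \sqrt{4\pi}Y_{l0}\):

```python
import sympy as sp
from sympy.physics.quantum.cg import CG

theta = sp.symbols('theta', real=True)

# Special values Y_{l,0,0} = sqrt(4 pi) Y_{l0}:
Yl00 = {0: sp.Integer(1),
        1: sp.sqrt(3)*sp.cos(theta),
        2: sp.sqrt(5)/2*(3*sp.cos(theta)**2 - 1)}

# Product rule for Y_{1,0,0} * Y_{1,0,0} (l = l' = 1, m = m' = n = n' = 0):
lhs = Yl00[1]**2
rhs = sum(sp.sqrt(sp.Rational(9, 2*lpp + 1)) * CG(1, 0, 1, 0, lpp, 0).doit()**2 * Yl00[lpp]
          for lpp in range(3))
assert sp.simplify(lhs - rhs) == 0   # both equal 3 cos(theta)^2
```

Only \(l'' = 0\) and \(l'' = 2\) contribute here, since \(C^{1,1,1}_{0,0,0} = 0\), in agreement with the selection rules above.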
\section{Spherical harmonic d-tensors}\label{sec:dtensor}
Using the tangent bundle spherical harmonics introduced in the previous section, we now finally come to the construction of spherical harmonic d-tensors. Again we proceed in several steps. We start by choosing suitable basis vectors and covectors, which form a representation of the rotation algebra, in section~\ref{ssec:basis}. From these and their tensor products we then construct the spherical harmonic d-tensors in section~\ref{ssec:tensharm}. Some special cases are listed in section~\ref{ssec:tensspecial}, and their properties are discussed in section~\ref{ssec:tensprop}.
\subsection{Basis vectors and covectors}\label{ssec:basis}
We start our construction by examining the action of the rotation operators \(\mathcal{R}_a\) on the coordinate basis elements \(\partial_a\) and \(dx^a\) of the pullback bundles \(TM \times_M TM\) and \(TM \times_M T^*M\) derived from the Cartesian coordinates in section~\ref{ssec:coord}. Starting with the former, one finds that they are given by
\begin{equation}
\mathcal{R}_a\partial_b = i\epsilon_{abc}\partial_c\,,
\end{equation}
as usual for an oriented right-handed vector basis. In particular, it follows that they satisfy the relations
\begin{equation}
\mathcal{R}^2\partial_a = 2\partial_a\,, \quad
\mathcal{R}_z\partial_1 = i\partial_2\,, \quad
\mathcal{R}_z\partial_2 = -i\partial_1\,, \quad
\mathcal{R}_z\partial_3 = 0\,.
\end{equation}
This motivates the definition of the basis d-tensors
\begin{equation}
\mathbf{e}_0 = \partial_3\,, \quad \mathbf{e}_1 = -\frac{\partial_1 + i\partial_2}{\sqrt{2}}\,, \quad \mathbf{e}_{-1} = \frac{\partial_1 - i\partial_2}{\sqrt{2}}\,.
\end{equation}
The d-tensors defined above satisfy the relations
\begin{equation}\label{eq:basvectrel}
\mathcal{R}^2\mathbf{e}_m = 2\mathbf{e}_m\,, \quad \mathcal{R}_z\mathbf{e}_m = m\mathbf{e}_m\,, \quad \mathcal{R}_{\pm}\mathbf{e}_m = \sqrt{(1 \mp m)(2 \pm m)}\mathbf{e}_{m \pm 1}\,.
\end{equation}
Note that the same construction can be applied to the covector basis \((dx^a)\). In this case we define
\begin{equation}
\mathbf{e}^0 = dx^3\,, \quad \mathbf{e}^1 = -\frac{dx^1 + idx^2}{\sqrt{2}}\,, \quad \mathbf{e}^{-1} = \frac{dx^1 - idx^2}{\sqrt{2}}\,.
\end{equation}
They satisfy the same relations~\eqref{eq:basvectrel} as the corresponding vector basis elements.
\subsection{Definition and explicit formula}\label{ssec:tensharm}
We now recursively construct the rank \(k\) d-tensors from the rank \(1\) basis tensors as follows. We start with the rank \(0\) tensors, which are simply given by
\begin{equation}
\ytens{l}{m}{n} = \mathcal{Y}_{l,m,n}\,.
\end{equation}
Note that we have vertically centered the index \(l\) for this scalar; the reason for this will become clear in the following. From this we construct the vectors
\begin{equation}
\ytens{l'}{m}{n}{}_{l} = (-1)^{l - m}\sqrt{2l + 1}\sum_{m',\mu}\left(\begin{array}{ccc}
l & l' & 1\\
m & -m' & -\mu
\end{array}\right)\mathcal{Y}_{l',m',n}\mathbf{e}_{\mu}\,,
\end{equation}
and analogously the covectors
\begin{equation}
\ytens{l'}{m}{n}{}^{l} = (-1)^{l - m}\sqrt{2l + 1}\sum_{m',\mu}\left(\begin{array}{ccc}
l & l' & 1\\
m & -m' & -\mu
\end{array}\right)\mathcal{Y}_{l',m',n}\mathbf{e}^{\mu}\,,
\end{equation}
where the position of the new index reflects the tensor type. Higher order tensors are then constructed using the recursion formula
\begin{equation}
\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k} = (-1)^{l_k - m}\sqrt{2l_k + 1}\sum_{m',\mu}\left(\begin{array}{ccc}
l_k & l_{k-1} & 1\\
m & -m' & -\mu
\end{array}\right)\ytens{l_0}{m'}{n}{}_{l_1 \cdots l_{k-1}} \otimes \mathbf{e}_{\mu}
\end{equation}
and analogously for \(\mathbf{e}^m\) and mixed tensors. Here the appearance of the Wigner $3j$-symbols implies that the indices must obey the conditions
\begin{equation}
l_0 = 0, 1, \ldots\,, \quad l_i = |l_{i - 1} - 1|, \ldots, l_{i - 1} + 1\,, \quad m = -l_k, \ldots, l_k\,, \quad n = -l_0, \ldots, l_0\,,
\end{equation}
since otherwise they would vanish identically. Note that any tensor of rank \(k\) now carries \(k\) indices \(l_1, \ldots, l_k\) which are either in lower or upper position, and whose position reflects the position of the indices on the basis elements. For example, a tensor \(\ytens{l_0}{m}{n}_{l_1}{}^{l_2}\) would have a component expression of the form \(A^a{}_b\partial_a \otimes dx^b\). The reason for this choice is that the harmonic tensors form an adapted basis of the space of d-tensors, in analogy to the coordinate basis elements such as \(\partial_a \otimes dx^b\), and so should carry the indices in the same positions as is the case for the coordinate basis.
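These selection rules are just the triangle and projection conditions of the Wigner $3j$-symbols, and can be checked numerically, e.g. with \texttt{sympy} (assuming its $3j$-symbol convention matches the one used here, which is the standard one):

```python
from sympy.physics.wigner import wigner_3j

# Triangle rule for (l_k, l_{k-1}, 1): l_i may only step by -1, 0, +1
# relative to l_{i-1}; otherwise the 3j-symbol vanishes identically.
assert wigner_3j(3, 1, 1, 0, 0, 0) == 0   # step |l_i - l_{i-1}| = 2: forbidden
assert wigner_3j(2, 1, 1, 0, 0, 0) != 0   # step from l = 1 to l = 2: allowed
# A closed-form value entering the rank-1 construction:
# (1 0 1; 0 0 0) = (-1)^1 / sqrt(3)
assert abs(float(wigner_3j(1, 0, 1, 0, 0, 0)) + 1 / 3 ** 0.5) < 1e-12
```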
From the recursive definition one easily derives the explicit formula
\begin{multline}
\ytens{l_0}{m_k}{n}{}_{l_1 \cdots l_k} = \sum_{\substack{m_0, \ldots, m_{k-1}\\\mu_1, \ldots, \mu_k}}\mathcal{Y}_{l_0,m_0,n}\mathbf{e}_{\mu_1} \otimes \ldots \otimes \mathbf{e}_{\mu_k}\\
\cdot \prod_{i = 1}^k(-1)^{l_i - m_i}\sqrt{2l_i + 1}\left(\begin{array}{ccc}
l_i & l_{i-1} & 1\\
m_i & -m_{i-1} & -\mu_i
\end{array}\right)
\end{multline}
and analogously for \(\mathbf{e}^m\) and mixed tensors, by raising the indices \(\mu_i\) corresponding to the indices \(l_i\) for \(i = 1, \ldots, k\).
\subsection{Special cases}\label{ssec:tensspecial}
A few special cases are worth mentioning. First note that the basis vectors and covectors are recovered as
\begin{equation}
\mathbf{e}_m = \ytens{0}{m}{0}_1\,, \quad
\mathbf{e}^m = \ytens{0}{m}{0}^1\,.
\end{equation}
The Kronecker tensor is given by
\begin{equation}
\boldsymbol{\delta} = \partial_a \otimes dx^a = \sum_m\overline{\mathbf{e}_m} \otimes \mathbf{e}^m = -\sqrt{3}\ytens{0}{0}{0}{}_1{}^0\,.
\end{equation}
Finally, the totally antisymmetric tensor is given by
\begin{equation}
\boldsymbol{\epsilon} = \epsilon_{abc}dx^a \otimes dx^b \otimes dx^c = i\sqrt{6}\ytens{0}{0}{0}{}^{1\,1\,0}\,.
\end{equation}
Note that the latter two have \(l_k = 0\), and thus are invariant under rotation.
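The middle equality for the Kronecker tensor can be verified directly in Cartesian components, where \(\mathbf{e}^m\) carries the same entries as \(\mathbf{e}_m\): the sum of outer products \(\overline{\mathbf{e}_m} \otimes \mathbf{e}^m\) reproduces the identity matrix. A short numerical check (our notation):

```python
import numpy as np

s = 1 / np.sqrt(2)
e = {0: np.array([0, 0, 1], dtype=complex),
     1: np.array([-s, -1j * s, 0]),
    -1: np.array([s, -1j * s, 0])}

# delta^a_b = sum_m conj(e_m)^a (e^m)_b should be the 3x3 identity.
delta = sum(np.outer(np.conj(e[m]), e[m]) for m in (-1, 0, 1))
assert np.allclose(delta, np.eye(3))
```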
\subsection{Properties}\label{ssec:tensprop}
\subsubsection{Operator relations}
The harmonic d-tensors satisfy
\begin{gather}
\mathcal{R}^2\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k} = l_k(l_k + 1)\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}\,, \quad \mathcal{R}_z\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k} = m\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}\,,\nonumber\\
\mathcal{R}_{\pm}\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k} = \sqrt{(l_k \mp m)(l_k \pm m + 1)}\ytens{l_0}{m \pm 1}{n}{}_{l_1 \cdots l_k}\,.
\end{gather}
\subsubsection{Complex conjugate}
The complex conjugate is given by
\begin{equation}
\overline{\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}} = (-1)^{l_0 + l_k + m + n + k}\ytens{l_0}{-m}{-n}{}_{l_1 \cdots l_k}\,.
\end{equation}
\subsubsection{Orthogonality}
The spherical harmonic d-tensors satisfy
\begin{equation}
\int_{0}^{2\pi}\int_{0}^{2\pi}\int_{0}^{\pi}\left\langle\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}, \ytens{l_0'}{m'}{n'}{}^{l_1 \cdots l_k}\right\rangle(\theta,\phi,\beta)\sin\theta\,d\theta\,d\phi\,d\beta = 8\pi^2\delta_{mm'}\delta_{nn'}\prod_{i = 0}^k\delta_{l_il_i'}\,,
\end{equation}
where
\begin{equation}
\langle A,B \rangle = \overline{A_{a_1 \cdots a_k}}B^{a_k \cdots a_1}\,.
\end{equation}
\subsubsection{Permutation of tensor indices}
The easiest relation one may derive is the transpose of tensors of rank \(2\). From the relation between $3j$-symbols and $6j$-symbols it follows that
\begin{equation}
\left(\ytens{l_0}{m}{n}{}_{l_1l_2}\right)^t = \sum_l(-1)^{l + l_1}\sqrt{2l + 1}\sqrt{2l_1 + 1}\left\{\begin{array}{ccc}
l_0 & l_1 & 1\\
l_2 & l & 1
\end{array}\right\}\ytens{l_0}{m}{n}{}_{ll_2}\,.
\end{equation}
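Since transposing twice returns the original tensor, the coefficient matrix \(T_{l l_1} = (-1)^{l+l_1}\sqrt{(2l+1)(2l_1+1)}\{\cdots\}\) appearing here must square to the identity; this follows from the orthogonality of the $6j$-symbols and provides a sanity check of the formula. For \(l_0 = l_2 = 1\), using \texttt{sympy}'s standard $6j$ convention (which we assume matches the one used here):

```python
import numpy as np
from sympy.physics.wigner import wigner_6j

l0, l2 = 1, 1
ls = [0, 1, 2]  # admissible values of the transposed index for l0 = l2 = 1
T = np.array([[(-1) ** (l + l1)
               * np.sqrt((2 * l + 1) * (2 * l1 + 1))
               * float(wigner_6j(l0, l1, 1, l2, l, 1))
               for l1 in ls] for l in ls])

# Transposing twice is the identity, so T must be an involution.
assert np.allclose(T @ T, np.eye(len(ls)))
```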
From this formula one can then derive the transposition of neighboring indices. It can be constructed by making use of the recursive definition of the harmonic d-tensors. Denoting
\begin{multline}
\left(\mathbf{e}_{m_1} \otimes \ldots \otimes \mathbf{e}_{m_k}\right)^{t(i,j)} =\\
\mathbf{e}_{m_1} \otimes \ldots \otimes \mathbf{e}_{m_{i - 1}} \otimes \mathbf{e}_{m_j} \otimes \mathbf{e}_{m_{i + 1}} \otimes \ldots \otimes \mathbf{e}_{m_{j - 1}} \otimes \mathbf{e}_{m_i} \otimes \mathbf{e}_{m_{j + 1}} \otimes \ldots \otimes \mathbf{e}_{m_k}
\end{multline}
the transposition of indices \(i, j\) with \(i < j\), we find
\begin{equation}\label{eq:permneigh}
\left(\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}\right)^{t(i,i+1)} = \sum_l(-1)^{l + l_i}\sqrt{2l + 1}\sqrt{2l_i + 1}\left\{\begin{array}{ccc}
l_{i - 1} & l_i & 1\\
l_{i + 1} & l & 1
\end{array}\right\}\ytens{l_0}{m}{n}{}_{l_1 \cdots l_{i - 1}ll_{i + 1} \cdots l_k}\,.
\end{equation}
This can finally be generalized to arbitrary index permutations: an arbitrary permutation of indices can be decomposed into a composition of transpositions of neighboring indices, to which the formula above is applied recursively. The result is rather lengthy, and so we omit it here for brevity.
\subsubsection{Contraction}
Again we start with the simplest case, given by tensors of rank \(2\). From the traces
\begin{equation}
\tr\left(\mathbf{e}_{m_1} \otimes \mathbf{e}^{m_2}\right) = \tr(\mathbf{e}^{m_1} \otimes \mathbf{e}_{m_2}) = (-1)^{m_1}\delta_{m_1,-m_2}
\end{equation}
and the properties of the $3j$-symbols, it follows that
\begin{equation}
\tr\ytens{l_0}{m}{n}{}_{l_1}{}^{l_2} = \tr\ytens{l_0}{m}{n}{}^{l_1}{}_{l_2} = (-1)^{l_0 - l_1}\sqrt{\frac{2l_1 + 1}{2l_0 + 1}}\delta_{l_0l_2}\mathcal{Y}_{l_0,m,n}\,.
\end{equation}
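The trace relation for the basis elements used above can be verified directly in Cartesian components (where \(\mathbf{e}^m\) has the same entries as \(\mathbf{e}_m\)):

```python
import numpy as np

s = 1 / np.sqrt(2)
e = {0: np.array([0, 0, 1], dtype=complex),
     1: np.array([-s, -1j * s, 0]),
    -1: np.array([s, -1j * s, 0])}

# tr(e_{m1} (x) e^{m2}) is the plain contraction e_{m1} . e^{m2},
# without complex conjugation; it equals (-1)^{m1} delta_{m1,-m2}.
for m1 in (-1, 0, 1):
    for m2 in (-1, 0, 1):
        expected = (-1.0) ** m1 if m1 == -m2 else 0.0
        assert np.isclose(np.dot(e[m1], e[m2]), expected)
```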
This formula generalizes directly to contractions over neighboring indices. Denoting this contraction by \(\tr_{i,j}\), we find
\begin{multline}
\tr_{i,i+1}\ytens{l_0 \cdots l_{i - 1}}{m}{n}{}_{l_i}{}^{l_{i + 1}}\vcenter{\hbox{\scriptsize$l_{i + 2} \cdots l_k$}} = \tr_{i,i+1}\ytens{l_0 \cdots l_{i - 1}}{m}{n}{}^{l_i}{}_{l_{i + 1}}\vcenter{\hbox{\scriptsize$l_{i + 2} \cdots l_k$}}\\
= (-1)^{l_{i + 1} - l_i}\sqrt{\frac{2l_i + 1}{2l_{i + 1} + 1}}\delta_{l_{i - 1}l_{i + 1}}\ytens{l_0 \cdots l_{i - 2}l_{i + 1} \cdots l_k}{m}{n}\,.
\end{multline}
Arbitrary contractions can then be calculated by first performing a permutation of indices by recursively applying the formula~\eqref{eq:permneigh} and then a contraction of neighboring indices.
\subsubsection{Tensor product}
The calculation of the tensor product is very lengthy, as it involves the evaluation of numerous $3j$-symbols. Nevertheless, it is possible to derive a general formula for the result, which is given by
\begin{multline}
\ytens{l_0}{m_k}{n}{}_{l_1 \cdots l_k} \otimes \ytens{l_0'}{m_{k'}'}{n'}{}_{l_1' \cdots l_{k'}'} = \sum_{\substack{l_0'',\ldots,l_{k+k'}''\\m_{k+k'}'',n''}}\left[\sum_{\substack{m_0,\ldots,m_{k-1}\\\mu_0,\ldots,\mu_{k-1}}}\sum_{\substack{m_0',\ldots,m_{k'-1}'\\\mu_0',\ldots,\mu_{k'-1}'}}\sum_{m_0'',\ldots,m_{k+k'-1}''}\right.\\
(-1)^{-m_0 - m_0' - n - n'}\left(\begin{array}{ccc}
l_0 & l_0' & l_0''\\
m_0 & m_0' & -m_0''
\end{array}\right)\left(\begin{array}{ccc}
l_0 & l_0' & l_0''\\
n & n' & -n''
\end{array}\right)\\
\cdot \prod_{i = 0}^{k-1}(-1)^{l_{i+1} + l_{i+1}'' - m_{i+1} - m_{i+1}''}\left(\begin{array}{ccc}
l_{i+1} & l_i & 1\\
m_{i+1} & -m_i & -\mu_i
\end{array}\right)\left(\begin{array}{ccc}
l_{i+k'+1}'' & l_{i+k'}'' & 1\\
m_{i+k'+1}'' & -m_{i+k'}'' & -\mu_i
\end{array}\right)\\
\cdot \prod_{i = 0}^{k' - 1}(-1)^{l_{i+1}' + l_{i+k+1}'' - m_{i+1}' - m_{i+k+1}''}\left(\begin{array}{ccc}
l_{i+1}' & l_i' & 1\\
m_{i+1}' & -m_i' & -\mu_i'
\end{array}\right)\left(\begin{array}{ccc}
l_{i+1}'' & l_i'' & 1\\
m_{i+1}'' & -m_i'' & -\mu_i'
\end{array}\right)\\
\cdot \left.\sqrt{\prod_{i = 0}^k(2l_i + 1)\prod_{i = 0}^{k'}(2l_i' + 1)\prod_{i = 0}^{k + k'}(2l_i'' + 1)}\right]\ytens{l_0''}{m_{k+k'}''}{n''}{}_{l_1'' \cdots l_{k+k'}''}\,,
\end{multline}
where all sums contain only a finite number of terms for which the $3j$-symbols are non-vanishing.
\subsubsection{Scalar product}
To calculate the scalar product of a vector and a covector, one may first calculate their tensor product, which follows from the general tensor product formula and takes the form
\begin{multline}
\ytens{l_0}{m}{n}{}_{l_1} \otimes \ytens{l_0'}{m'}{n'}{}^{l_1'} = \sum_{l_0'',l_1'',l_2''}(-1)^{m + m' + n + n'}\\
\cdot \sqrt{(2l_0 + 1)(2l_1 + 1)(2l_0' + 1)(2l_1' + 1)(2l_0'' + 1)(2l_1'' + 1)(2l_2'' + 1)}\\
\cdot \left(\begin{array}{ccc}
l_0'' & l_0' & l_0\\
-n - n' & n' & n
\end{array}\right)\left(\begin{array}{ccc}
l_2'' & l_1' & l_1\\
-m - m' & m' & m
\end{array}\right)\left\{\begin{array}{ccc}
1 & l_0 & l_1\\
l_0' & l_1'' & l_0''
\end{array}\right\}\left\{\begin{array}{ccc}
1 & l_0' & l_1'\\
l_1 & l_2'' & l_1''
\end{array}\right\}\ytens{l_0''}{m + m'}{n + n'}{}_{l_1''}{}^{l_2''}\,.
\end{multline}
By taking the trace we can read off the scalar product
\begin{multline}
\ytens{l_0}{m}{n}{}_{l_1} \cdot \ytens{l_0'}{m'}{n'}{}^{l_1'} = \sum_{l''}(-1)^{l_0 + l_1' + l'' + m + m' + n + n'}\\
\cdot \sqrt{(2l_0 + 1)(2l_1 + 1)(2l_0' + 1)(2l_1' + 1)(2l'' + 1)}\\
\cdot \left(\begin{array}{ccc}
l'' & l_0' & l_0\\
-n - n' & n' & n
\end{array}\right)\left(\begin{array}{ccc}
l'' & l_1' & l_1\\
-m - m' & m' & m
\end{array}\right)\left\{\begin{array}{ccc}
1 & l_0 & l_1\\
l'' & l_1' & l_0'
\end{array}\right\}\mathcal{Y}_{l'',m + m',n + n'}\,,
\end{multline}
where the two $6j$-symbols collapse into one by using their summation rules.
\subsubsection{Vertical differential}
The vertical differential \(\nabla^v\) of a harmonic d-tensor can most easily be written using co-rotated cylindrical coordinates \(r,\theta,\phi,\bar{\rho},\beta,\bar{z}\). For a function \(f(r,\bar{\rho},\bar{z})\) it takes the form
\begin{equation}
\begin{split}
\nabla^v[f\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}] &= dx^a \otimes \dot{\partial}_a[f\ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}]\\
&= \left[\frac{1}{\sqrt{2}}\left(n\frac{f}{\bar{\rho}} - f_{\bar{\rho}}\right)\ytens{1}{0}{1}{}^0 + \frac{1}{\sqrt{2}}\left(n\frac{f}{\bar{\rho}} + f_{\bar{\rho}}\right)\ytens{1}{0}{-1}{}^0 - f_{\bar{z}}\ytens{1}{0}{0}{}^0\right] \otimes \ytens{l_0}{m}{n}{}_{l_1 \cdots l_k}\,.
\end{split}
\end{equation}
The vertical differential appears in various calculations, in particular in Finsler geometry. A possible application will be shown in the following section.
\section{Application: spherically symmetric Finsler metric}\label{sec:app}
As an example, we now assume that our manifold \(M\) is equipped with a spherically symmetric Finsler Lagrangian \(L = L(r, \bar{\rho}, \bar{z})\) of homogeneity \(2\). We first calculate the momenta, given by the one-form
\begin{equation}
p_adx^a = \frac{1}{2}\bar{\partial}_aL\,dx^a = \frac{1}{2}\nabla^vL = -\frac{1}{2}L_{\bar{z}}\ytens{1}{0}{0}{}^0 - \frac{1}{2\sqrt{2}}L_{\bar{\rho}}\left(\ytens{1}{0}{1}{}^0 - \ytens{1}{0}{-1}{}^0\right)\,.
\end{equation}
Note that the appearing harmonic d-tensors have the property that \(l_k = 0\), which means that they are invariant under rotation, as one would expect, since we started from a Finsler Lagrangian with the same symmetry property. The same also holds for the Finsler metric
\begin{equation}
\begin{split}
g^L_{ab}dx^a \otimes dx^b &= \frac{1}{2}\bar{\partial}_a\bar{\partial}_bL\,dx^a \otimes dx^b = \frac{1}{2}\nabla^v\nabla^vL\\
&= -\frac{1}{2\sqrt{3}}\left(\frac{L_{\bar{\rho}}}{\bar{\rho}} + L_{\bar{\rho}\bar{\rho}} + L_{\bar{z}\bar{z}}\right)\ytens{0}{0}{0}{}^{1\,0} - \frac{1}{2\sqrt{6}}\left(\frac{L_{\bar{\rho}}}{\bar{\rho}} + L_{\bar{\rho}\bar{\rho}} - 2L_{\bar{z}\bar{z}}\right)\ytens{2}{0}{0}{}^{1\,0}\\
&\phantom{=}+ \frac{1}{2}L_{\bar{\rho}\bar{z}}\left(\ytens{2}{0}{1}{}^{1\,0} - \ytens{2}{0}{-1}{}^{1\,0}\right) + \frac{1}{4}\left(L_{\bar{\rho}\bar{\rho}} - \frac{L_{\bar{\rho}}}{\bar{\rho}}\right)\left(\ytens{2}{0}{2}{}^{1\,0} + \ytens{2}{0}{-2}{}^{1\,0}\right)\,.
\end{split}
\end{equation}
As a final illustration we also calculate its inverse. We remark that the derivation of a general formula for the inverse of a non-degenerate tensor of rank \(2\) is rather lengthy. However, one may use the tensor product and contraction formulas in order to derive a system of linear equations for the particular case we discuss here, and solve it explicitly. We then find that the inverse takes the form
\begin{equation}
\begin{split}
g^{L\,ab}\bar{\partial}_a \otimes \bar{\partial}_b &= \frac{2}{\sqrt{3}}\frac{L_{\bar{\rho}}\left(L_{\bar{\rho}\bar{\rho}} + L_{\bar{z}\bar{z}}\right) - \bar{\rho}\left(L_{\bar{\rho}\bar{z}}^2 - L_{\bar{\rho}\bar{\rho}}L_{\bar{z}\bar{z}}\right)}{L_{\bar{\rho}}\left(L_{\bar{\rho}\bar{z}}^2 - L_{\bar{\rho}\bar{\rho}}L_{\bar{z}\bar{z}}\right)}\ytens{0}{0}{0}{}_{1\,0}\\
&\phantom{=}- \frac{\sqrt{2}}{\sqrt{3}}\frac{L_{\bar{\rho}}\left(2L_{\bar{\rho}\bar{\rho}} - L_{\bar{z}\bar{z}}\right) + \bar{\rho}\left(L_{\bar{\rho}\bar{z}}^2 - L_{\bar{\rho}\bar{\rho}}L_{\bar{z}\bar{z}}\right)}{L_{\bar{\rho}}\left(L_{\bar{\rho}\bar{z}}^2 - L_{\bar{\rho}\bar{\rho}}L_{\bar{z}\bar{z}}\right)}\ytens{2}{0}{0}{}_{1\,0}\\
&\phantom{=}+ \frac{2L_{\bar{\rho}\bar{z}}}{L_{\bar{\rho}\bar{z}}^2 - L_{\bar{\rho}\bar{\rho}}L_{\bar{z}\bar{z}}}\left(\ytens{2}{0}{1}{}_{1\,0} - \ytens{2}{0}{-1}{}_{1\,0}\right)\\
&\phantom{=}- \frac{L_{\bar{\rho}}L_{\bar{z}\bar{z}} + \bar{\rho}\left(L_{\bar{\rho}\bar{z}}^2 - L_{\bar{\rho}\bar{\rho}}L_{\bar{z}\bar{z}}\right)}{L_{\bar{\rho}}\left(L_{\bar{\rho}\bar{z}}^2 - L_{\bar{\rho}\bar{\rho}}L_{\bar{z}\bar{z}}\right)}\left(\ytens{2}{0}{2}{}_{1\,0} + \ytens{2}{0}{-2}{}_{1\,0}\right)\,.
\end{split}
\end{equation}
One may then proceed with calculating further geometric objects, such as the Cartan tensor, in the same fashion. However, we stop at this point, as it should be clear now how these calculations are performed, and deriving all relevant quantities for a spherically symmetric Finsler space would exceed the scope of this article.
\section{Conclusion}\label{sec:conclusion}
We presented a set of d-tensors on the tangent bundle of a three-dimensional manifold, which decomposes into irreducible representations of the group of rotations acting on the underlying manifold. These d-tensors were constructed using suitable basis vectors and covectors as well as a set of spherical harmonic functions on the tangent bundle. We gave both a recursive definition and an explicit formula, and discussed various algebraic and differential properties. As a potential application, we calculated the Finsler metric of the most general spherically symmetric Finsler space and expressed it in terms of harmonic d-tensors.
Since d-tensors are abundant in spray and Finsler geometry, there are various possible applications of the harmonic d-tensors we presented here. Note that these applications are not restricted to three-dimensional manifolds. Spherical symmetry plays a role also in higher dimensions, for example in four dimensions with Lorentzian signature, where one of the dimensions carries the interpretation of time instead of space, and where a suitable decomposition into time and space can be performed. Note that this decomposition applies to d-tensors in the same way as to ordinary tensors on the manifold.
Various possible extensions of this work may be considered. The most interesting would be to extend our notion of harmonic objects also to connections on the tangent bundle instead of d-tensors, where two notions must be distinguished. The first notion is that of a non-linear connection on the tangent bundle, which can be expressed in terms of a tensor field of rank \((1,1)\) on \(TM\), and could therefore be addressed by decomposing tensors over \(TM\) into irreducible representations of the rotation group. An alternative definition, which is more closely related to the pullback bundle approach we used here, is to define a non-linear connection as a splitting of the exact sequence~\eqref{eq:exseq}. The other notion is that of an $N$-linear connection, which allows taking the covariant derivative of a d-tensor field, and whose components could be expressed in terms of the adapted basis shown in section~\ref{ssec:basis}.
Another possible extension of our work would be to consider other symmetry groups, such as the groups \(\mathrm{SO}(4)\), \(\mathrm{ISO}(3)\) and \(\mathrm{SO}(3,1)\) relevant in cosmology. For example, in the case of \(\mathrm{SO}(4)\) symmetry, one may use the coordinates introduced in~\cite{Hohmann:2015duq}, which have properties similar to the ones we have used here. Another possible line of research is to exploit the relation between Finsler and Cartan geometry discussed in~\cite{Hohmann:2013fca,Hohmann:2015pva} and to discuss Cartan geometries with particular symmetry groups.
\section*{Acknowledgments}
The author gratefully acknowledges the full support by the Estonian Ministry for Education and Science through the Institutional Research Support Project IUT02-27 and Startup Research Grant PUT790, as well as the European Regional Development Fund through the Center of Excellence TK133 ``The Dark Side of the Universe''.
% https://arxiv.org/abs/1812.11169 -- Spherical harmonic d-tensors
% https://arxiv.org/abs/0801.4726 -- Stochastic extrema as stationary phases of characteristic functions
\section{INTRODUCTION}
The extremum of a stochastic process admits a transparent numerical presentation in terms of the limit set of its characteristic function, which is treated as a high-frequency integral \cite{Guillemin}, \cite{McClure}, \cite{Maslov}, \cite{Maslov_Fedoriuk}. The proposed concept of stochastic extremum is compatible with other known methods of assessing extrema of stochastic functions (see, e.g., \cite{Bohachevsky}, \cite{Brooks}, \cite{Burry}, \cite{Drees}, \cite{Gumbel}, \cite{PeterHall}). However, the ideology of high-frequency integrals, though different, is close to the simulated annealing technique \cite{Bohachevsky}, \cite{Brooks}. In our approach the role of the parameter analogous to the inverse ``temperature'' (from the annealing process) is played by the frequency, and in order to calculate the extremum we increase the frequency. The method of high-frequency integrals is able to calculate extrema of stochastic processes with several variables, and therefore can be applied to analyze images and three-dimensional data samples. When it is possible to apply the method of assessing extrema with principal component functions (in the sense of the Karhunen-Lo\`eve representation) \cite{PeterHall}, our approach works as well and practically leads to the same results. However, it does not rely on any type of Karhunen-Lo\`eve representation, and in this sense our concept of stochastic extremum is of equal or more general nature than the Karhunen-Lo\`eve representation itself. Moreover, treating stochastic extrema as stationary phases of the characteristic function leads to a transparent numerical procedure that allows efficient estimation of the stochastic extremum and evaluation of its statistical significance.
\section{STATIONARY PHASE}
\vspace{0.1cm}
Our goal is to introduce the concept of extremum for a stochastic process. In order to do that we employ the high-frequency integrals:
$$
I(k,\omega) = \int_{-\infty}^{\infty} \varphi (t,\omega) e^{ik\cdot f (t,\omega)} dt,
$$
where $\omega \in \Omega$ is a parameter; both $f$ and $\varphi$ are real functions that are infinitely many times differentiable with respect to $t.$ Moreover, $\varphi (t,\omega)$ has a finite time support for any fixed $\omega \in \Omega$: $supp_t (\varphi)$ is a subset of a closed interval of $\R.$ Throughout the paper $\R$ denotes the set of real numbers. For the reader's convenience, we recall the basic properties of high-frequency integrals (for further reading on this subject see, e.g., \cite{McClure}, \cite{Maslov}, \cite{Maslov_Fedoriuk}).
\vspace{0.1cm}
If $supp_t (\varphi) \subset [ a, b]$ and
$$
\frac{d}{dt} f(t,\omega) \not= 0\;\;\forall\;t \in [ a, b]\;\;\mbox{ and } \;\;\forall\; \omega \in \Omega
$$
then
$$
I(k,\omega) = \int_{-\infty}^{\infty} \varphi (t,\omega) e^{ik\cdot f (t,\omega)} dt=\int_{a}^{b} \varphi (t,\omega) e^{ik\cdot f (t,\omega)}dt
$$
and integrating by parts $n$ times yields
$$
I(k,\omega) =(\frac{1}{ik})^n\int_{a}^{b} L^n(\varphi) (t,\omega) e^{ik\cdot f (t,\omega)}dt
$$
where the linear operator $L$ is defined as
$$
L(\varphi) = -\frac{d}{dt}(\frac{\varphi}{f_t} )
$$
and $f_t$ denotes the derivative of $f(t,\omega)$ with respect to time,
$$
f_t (t,\omega)= \frac{d}{dt} f(t,\omega).
$$
As one can see, if the phase $f(t,\omega)$ does not have critical points in $supp_t(\varphi)$ then
$$
I(k,\omega) = O(\frac{1}{k^n}) \;\;\forall\;n \in \N \;\;\mbox{ and }\;\;\forall \omega \in \Omega,
$$
where $\N$ denotes the set of natural numbers. This fact is often represented as
$$
I(k,\omega) = O(\frac{1}{k^\infty}) \;\;\forall \omega \in \Omega.
$$
The critical (or stationary) points of the phase $f(t,\omega)$ make the main contribution to the high-frequency integral $I(k,\omega)$ as $k\to\infty .$
\begin{definition}
A point $(t^\star, \omega^\star) \in \R \times \Omega$ is called a stationary phase (point) if
$$
\frac{d}{dt}f(t^\star , \omega^\star ) = 0.
$$
\end{definition}
The set of all stationary phase points of $f$ is denoted by $St(f) \subset \R \times \Omega.$ Now let us turn our attention to calculating the contribution of a stationary phase point $(t^\star, \omega^\star) \in St(f)$ to the high-frequency integral $I(k,\omega).$ A stationary phase point $(t^\star, \omega^\star) \in St(f)$ is said to have order $m\in \N$ if $m$ is the first natural number for which
$$
(\frac{d}{dt})^m f(t^\star, \omega^\star ) \not= 0.
$$
The set of such stationary phase points is denoted by $St^m(f).$
The Taylor expansion near $t^\star$ is
$$
f(t,\omega^\star ) - f(t^\star,\omega^\star ) = \frac{1}{m!}(\frac{d}{dt})^m f(t^\star ,\omega^\star) (t-t^\star)^m + O( (t-t^\star)^{m+1} ) \mbox{ as } \;t\to t^\star.
$$
Consider the change of coordinates
$$
x (t,\omega^\star) = (sign(f^{(m)}_t(t^\star ,\omega^\star) ) \cdot (f(t,\omega^\star ) - f(t^\star,\omega^\star ) ) )^{\frac{1}{m}},
$$
where $sign(f^{(m)}_t(t^\star ,\omega^\star))$ denotes the sign of
$$
(\frac{d}{dt})^m f(t^\star ,\omega^\star) .
$$
Since
$$
\frac{d}{dt} x (t,\omega^\star) = \vert \frac{1}{m!}(\frac{d}{dt})^m f(t^\star ,\omega^\star) \vert ^{\frac{1}{m}} + O(t-t^\star)
$$
the change of coordinates is not degenerate on some interval $Q_\varepsilon ,$
$$
t^\star - \varepsilon < t < t^\star + \varepsilon
$$
Let us take an infinitely differentiable function $h$ with $supp_t(h) \subset Q_\varepsilon$ (the set of such functions is denoted by $C_0^\infty (Q_\varepsilon)$). Assume also that $h( t, \omega^\star)=1$ in a neighborhood of $t^\star.$ Then
$$
I(k,\omega^\star ) = \int_{-\infty}^{\infty} \varphi (t,\omega^\star ) h(t,\omega^\star ) e^{ik\cdot f (t,\omega^\star)} dt + \int_{-\infty }^{\infty } \varphi (t,\omega^\star ) (1 - h(t,\omega^\star )) e^{ik\cdot f (t,\omega^\star)}dt
$$
and in order to find the contribution of the stationary phase $ (t^\star , \omega^\star )$ we need to calculate asymptotics for
$$
\int_{-\infty}^{\infty} \varphi (t,\omega^\star ) h(t,\omega^\star ) e^{ik\cdot f (t,\omega^\star)} dt.
$$
After making the change of coordinates $x= x (t,\omega^\star)$ in the integral
$$
I(k,\omega^\star )= e^{ikf(t^\star,\omega^\star)} \cdot \int_{-\infty}^{\infty} \varphi (t,\omega^\star ) h(t,\omega^\star) e^{ik(f(t,\omega^\star) -f(t^\star,\omega^\star)) }dt
$$
we have
$$
I(k,\omega^\star ) = e^{ikf(t^\star,\omega^\star)} \cdot \int_{-\infty}^{\infty} \varphi (x,\omega^\star ) \frac{h(x,\omega^\star) }{x_t}e^{sign(f^{(m)}_t(t^\star ,\omega^\star)) \cdot ikx^m}dx,
$$
where $x_t$ denotes $\frac{d}{dt} x(t,\omega^\star).$ The integral
$$
\int_{-\infty}^{\infty} e^{\pm ikx^m}dx
$$
can be calculated by reducing it to a linear combination of integrals of the form
$$
\int_{0}^{\infty} e^{\pm ikx^m}dx
$$
and then evaluating the latter with the help of an integral along a curve in the complex plane \cite{Maslov_Fedoriuk}. The curve consists of the segment $0\le x \le \rho$ of the $x$-axis, the arc of the circle
$$
\rho \cdot e^{\pm i\tau}\;\;(0\le \tau \le \frac{\pi}{2m})
$$
and the segment of the straight line
$$
r\cdot e^{\pm i\frac{\pi}{2m}}\;\;(\rho \ge r \ge 0).
$$
Taking $\rho \to \infty$ yields that
$$
\int_{-\infty}^{\infty} e^{\pm ikx^m}dx = \cos(\frac{\pi}{2m})\frac{1}{k^\frac{1}{m}} \cdot C_m \;\;\mbox{ for odd } m
$$
and
$$
\int_{-\infty}^{\infty} e^{\pm ikx^m}dx = \frac{e^{\pm i\frac{\pi}{2m} }}{k^\frac{1}{m}} \cdot C_m\;\;\mbox{ for even } m
$$
where
$$
C_m = 2\cdot \int_{0}^\infty e^{-x^m} dx.
$$
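The substitution $u = x^m$ gives the closed form $C_m = 2\Gamma(1 + 1/m)$; in particular $C_2 = \sqrt{\pi}.$ A quick numerical confirmation (grid parameters are our choices):

```python
import numpy as np
from math import gamma

# C_m = 2 * int_0^inf exp(-x^m) dx = 2 * Gamma(1 + 1/m),
# approximated here by a rectangle rule on a truncated grid.
for m in (2, 3, 4):
    x = np.linspace(0.0, 20.0, 400001)
    dx = x[1] - x[0]
    C_m = 2.0 * np.exp(-x ** m).sum() * dx
    assert abs(C_m - 2.0 * gamma(1.0 + 1.0 / m)) < 1e-3
```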
Taking into account that $h(t^\star,\omega^\star)=1,$ for even $m$ we have
$$
I(k,\omega^\star ) = \varphi(t^\star,\omega^\star) \cdot C_m \cdot \big(\frac{m!}{k \cdot \vert f^{(m)}_t(t^\star ,\omega^\star) \vert} \big) ^{\frac{1}{m}}\cdot e^{sign(f^{(m)}_t(t^\star ,\omega^\star)) i\frac{\pi}{2m} } \cdot e^{ikf(t^\star,\omega^\star)} +
$$
$$
e^{ikf(t^\star,\omega^\star)} \int_{-\infty}^{\infty} (\varphi (x,\omega^\star ) \cdot \frac{h(x,\omega^\star) }{x_t} - \big( \varphi (x,\omega^\star ) \frac{h(x,\omega^\star) }{x_t} \big)\big\vert_{x=0} ) e^{sign(f^{(m)}_t(t^\star ,\omega^\star)) \cdot ikx^m}dx
$$
and the latter integral has the asymptotic
$$
O(\frac{1}{k^\frac{2}{m}}) \;\;\mbox{ as }\;\;k\;\to \; \infty
$$
as long as there are no other stationary phase points present.
If $m$ is odd then
$$
I(k,\omega^\star ) = \varphi(t^\star,\omega^\star) \cdot C_m \cdot \big(\frac{m!}{k \cdot \vert f^{(m)}_t(t^\star ,\omega^\star) \vert} \big) ^{\frac{1}{m}}\cdot \cos(\frac{\pi}{2m} ) \cdot e^{ikf(t^\star,\omega^\star)} + O(\frac{1}{k^\frac{2}{m}})
$$
as $\;\;k\;\to \; \infty.$
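As an illustration of the even case, take $f(t) = t^2$ (so $m = 2,$ $f''(t^\star) = 2,$ $C_2 = \sqrt{\pi}$) and a smooth bump $\varphi$ with $supp(\varphi) \subset (-1,1)$; the leading term is then $\varphi(0)\sqrt{\pi/k}\,e^{i\pi/4}.$ A numerical sketch (our choice of bump function and grid):

```python
import numpy as np

k = 200.0
t = np.linspace(-1.0, 1.0, 200001)
dt = t[1] - t[0]

# Smooth bump with support in (-1, 1): phi(t) = exp(-1/(1 - t^2)).
inner = 1.0 - t ** 2
phi = np.zeros_like(t)
mask = inner > 1e-12
phi[mask] = np.exp(-1.0 / inner[mask])

# I(k) = int phi(t) exp(i k t^2) dt; single stationary point t* = 0 of order 2.
I = (phi * np.exp(1j * k * t ** 2)).sum() * dt

# Leading term: phi(0) * C_2 * (2!/(2k))^(1/2) * e^{i pi/4} = phi(0) sqrt(pi/k) e^{i pi/4}.
leading = np.exp(-1.0) * np.sqrt(np.pi / k) * np.exp(1j * np.pi / 4)
assert abs(I - leading) / abs(leading) < 0.02
```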
\vspace{0.1cm}
To conclude this section, we summarise the basic results of the stationary phase method in the following formal statement.
\begin{theorem}
\label{stateionaryPhase}
Let $\varphi (t, \omega),\;\;f(t, \omega)$ be real functions that are infinitely many times differentiable with respect to $t$ for any $\omega \in \Omega.$ Moreover, for any $\omega \in \Omega$ one can find an interval $[a(\omega),b(\omega)]\subset \R$ such that
$$
\varphi (t, \omega) \in C^\infty_0([a(\omega),b(\omega)]).
$$
Then the following statements hold.
\begin{itemize}
\item[i.] If $[a(\omega),b(\omega)] \cap St(f) = \emptyset$ then
$$
I(k,\omega) = O(\frac{1}{k^\infty})\;\;\mbox{ as } \;\;k\to\infty
$$
\item[ii.] If $[a(\omega),b(\omega)] \cap St(f) = \{t_j\}$ then $I(k,\omega)$ has the asymptotic
$$
\sum_{\mbox{even } m_j }\left(\varphi(t_j,\omega) \cdot C_{m_j} \cdot \big(\frac{m_j!}{k \cdot \vert f^{(m_j)}_t(t_j ,\omega) \vert} \big) ^{\frac{1}{m_j}}\cdot e^{sign(f^{(m_j)}_t(t_j ,\omega)) i\frac{\pi}{2m_j} } \cdot e^{ikf(t_j,\omega)} \right. +
$$
$$
\left. O(\frac{1}{k^\frac{2}{m_j}})\right) + \sum_{\mbox{odd } m_j } \left(\varphi(t_j,\omega) \cdot C_{m_j} \cdot \big(\frac{m_j!}{k \cdot \vert f^{(m_j)}_t(t_j ,\omega) \vert} \big) ^{\frac{1}{m_j}}\cdot \cos(\frac{\pi}{2m_j} ) \cdot e^{ikf(t_j,\omega)} \right. +
$$
$$
\left. O(\frac{1}{k^\frac{2}{m_j}}) \right)
$$
as $k\to \infty .$
\end{itemize}
\end{theorem}
\section{STOCHASTIC EXTREMUM}
Consider a real-valued stochastic process $\xi(t) $ defined on the probability space $(\Omega, A(\Omega), P),$ where $A(\Omega)$ is a $\sigma$-algebra of subsets of $\Omega$ and $P$ is a probability measure. Throughout the paper we assume that $\xi(t)$ takes only non-negative real values. In the context of this paper it is tacitly assumed that
$$
\xi(t, \omega) = g(t,\omega),
$$
where $\omega$ is a stochastic variable which does not depend on time and $g(t,x)$ is a smooth non-negative function of its arguments.
\vspace{0.1cm}
In order to examine whether $\xi(t)$ has extrema on an interval $[a,b]\subset \R$ we take a function $\varphi_\varepsilon (t)\in C^\infty_0([a-\varepsilon , b+\varepsilon])$ such that $\varepsilon > 0,\;\;\;\varphi_\varepsilon (t)\ge 0\;\;\forall \;t\in \R$ and
$$
\varphi_\varepsilon (t) =1 \;\;\forall\; t\in [a,b].
$$
Then we analyze the asymptotics of the high-frequency integral
$$
\int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega)}dtdP(\omega) .
$$
The formal description of the situation when $\xi(t)$ does not have stochastic extrema on $[a,b]$ reads as follows.
\begin{definition}
\label{noExtrema}
Let $\ln(z)$ denote a fixed branch of the complex logarithm. Then a real-valued stochastic process $\xi(t)$ does not have stochastic extrema on $[a,b]$ if one can find a positive real number $\varepsilon$ such that
$$
\lim_{k\to \infty } Re\{\frac{1}{ik} \ln( \int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t, \omega )}dt dP(\omega) )\}= 0.
$$
\end{definition}
The logical negation of this statement describes the intervals where stochastic extrema for $\xi(t)$ occur. In other words, the interval $[a,b]$ contains stochastic extrema if $\forall\;\varepsilon >0$
$$
\lim_{k\to \infty }Re\{\frac{1}{ik} \ln(\int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega )}dt dP(\omega) )\}\not= 0.
$$
Sometimes it is possible to estimate the extremal values of $\xi(t)$ on $[a,b]$ with the help of the following quantities:
\begin{equation}
\label{Smax}
Smax_\varepsilon ([a,b]) = \overline{\lim}_{k\to \infty }Re \left\{\frac{1}{ik} \ln( \int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega)}dt dP(\omega) )\right\}
\end{equation}
and
\begin{equation}
\label{Smin}
Smin_\varepsilon ([a,b]) = {\underline \lim}_{k\to \infty }Re\left\{\frac{1}{ik} \ln(\int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega)}dt dP(\omega) )\right\},
\end{equation}
where $\overline{\lim}$ and ${\underline \lim}$ denote upper and lower limits, respectively.
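As a minimal numerical sketch of these quantities (every concrete choice below — the deterministic process, the bump function, the grids and tolerances — is an assumption made purely for illustration, not data from the paper), one can track a continuous branch of the complex logarithm by accumulating phase increments along a fine grid of frequencies. For a deterministic $\xi(t,\omega)=g(t)$ with a single interior minimum, the limit exists, so $Smax_\varepsilon$ and $Smin_\varepsilon$ coincide with the extremal value.

```python
import numpy as np

# Deterministic toy "process" xi(t, omega) = g(t) = 1 + (t - 1/2)^2, whose only
# extremum inside the bump's support is the minimum g(1/2) = 1.  The bump
# phi(t) = exp(-1/(t(1-t))) on (0, 1) stands in for phi_eps; it is not
# identically 1 on [a, b], but the stationary-phase asymptotics are the same.
t = np.linspace(0.0, 1.0, 4001)[1:-1]          # open interval (0, 1)
dt = t[1] - t[0]
g = 1.0 + (t - 0.5) ** 2
phi = np.exp(-1.0 / (t * (1.0 - t)))           # vanishes to all orders at 0, 1

# Sample I(lambda) = int phi(t) exp(i lambda g(t)) dt on a fine frequency grid
# and accumulate the phase increments; their sum is Im ln I(K) - Im ln I(0)
# for a continuous branch of the logarithm.
K, dlam = 400.0, 0.05
lams = np.arange(0.0, K + dlam, dlam)
I = np.array([(phi * np.exp(1j * lam * g)).sum() * dt for lam in lams])
winding = np.angle(I[1:] / I[:-1]).sum()

# For the continuous branch, Re{(1/(ik)) ln I(k)} = Im(ln I(k))/k.
estimate = winding / lams[-1]
print(estimate)   # approaches the extremal value g(1/2) = 1 as K grows
```

Here the fixed branch of $\ln$ required by the definition is realised numerically by summing small phase increments rather than by evaluating a principal branch, which would discard the winding of the integral around the origin.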
\vspace{0.1cm}
For the majority of applications, $Smax_\varepsilon ([a,b])$ and $Smin_\varepsilon ([a,b])$ provide bounds for the extremal values of a real stochastic process $\xi(t).$ This can be justified when the stochastic process $\xi(t)$ has additional properties, for example, when $\Omega$ is a smooth manifold and $\xi(t,\omega )$ is a smooth function of $t$ and $\omega.$ Let $\frac{dP}{d\omega}$ denote the probability density function of $P(\omega).$ Then the following statement holds.
\begin{theorem}
\label{smoothExpectation}
Let $\Omega$ be a smooth manifold. Assume also that the density $\frac{dP}{d\omega}$ exists, that $\frac{dP}{d\omega} \in C^\infty (\Omega)$ and $\xi(t,\omega )\in C^\infty(\R \times \Omega ),$ and that there is only one stationary phase point $( t^\star, \omega^\star) \in St^2(\xi)\cap \left( [a-\varepsilon ,b + \varepsilon ]\times \Omega \right)$ with $\frac{\partial}{\partial \omega} \xi(t^\star, \omega^\star ) = 0 $ and such that the matrix of second derivatives $\frac{\partial^2 \xi }{\partial t \partial \omega}$ has non-zero determinant
\begin{equation}
\label{nonZero2nd}
J(t^\star, \omega^\star)=\det \left( \frac{\partial^2 \xi }{\partial t \partial \omega} (t^\star, \omega^\star) \right) \not= 0
\end{equation}
at the stationary phase point $(t^\star, \omega^\star).$
If $\frac{d}{d\omega}P(\omega^\star) >0$ then the following is true:
$$
Re\left\{ \frac{1}{ik} \cdot \ln \left( \int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega)}dt dP(\omega) \right) \right\} = \xi(t^\star,\omega^\star ) + O(\frac{1}{k})\;\;\;\mbox{ as }\;\;k\to \infty.
$$
\end{theorem}
\vspace{0.1cm}
{\bf Proof.}
\vspace{0.1cm}
Let $n$ denote the dimension of $\Omega.$ Then applying the stationary phase method \cite{Maslov}, \cite{Maslov_Fedoriuk} to
$$
\int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega )}dt dP(\omega)
$$
yields
$$
\int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega )}dt dP(\omega) =
$$
$$
\frac{d}{d\omega}P(\omega^\star) \cdot (\frac{2\pi}{k})^{\frac{n+1}{2}} \cdot \frac{1}{\sqrt{\vert J(t^\star,\omega^\star) \vert}} \cdot e^{sign(\frac{\partial^2 \xi }{\partial t \partial \omega}(t^\star,\omega^\star )) i\frac{\pi}{4} } \cdot e^{ik\xi(t^\star,\omega^\star )} + O(\frac{1}{k^{1+\frac{n+1}{2}}})
$$
where
$$
sign(\frac{\partial^2 \xi }{\partial t \partial \omega}(t^\star,\omega^\star ))
$$
denotes the difference between the number of positive and the number of negative eigenvalues of the corresponding quadratic form.
Taking the complex logarithm of
$$
\frac{d}{d\omega}P(\omega^\star) (\frac{2\pi}{k})^{\frac{n+1}{2}} \cdot \frac{1}{\sqrt{\vert J(t^\star,\omega^\star) \vert}} \cdot e^{sign(\frac{\partial^2 \xi }{\partial t \partial \omega}(t^\star,\omega^\star )) i\frac{\pi}{4} } \cdot e^{ik\xi(t^\star,\omega^\star )} \cdot(1 + O(\frac{1}{k}))
$$
we obtain
$$
\ln \left( \int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{ik \xi(t,\omega )}dt dP(\omega) \right) = ik\xi(t^\star,\omega^\star) - \frac{n+1}{2}\ln (k) + \ln (\frac{d}{d\omega}P(\omega^\star) ) + \frac{n+1}{2} \cdot \ln (2\pi) -
$$
$$
\frac{1}{2} \ln\big( \vert J(t^\star,\omega^\star) \vert \big) + i\frac{\pi}{4} \cdot sign(\frac{\partial^2 \xi }{\partial t \partial \omega}(t^\star,\omega^\star ))+ O(\frac{1}{k})
$$
as $\;\;k\to \infty.$ Dividing by $ik$ and taking the real part completes the proof.
{\bf Q.E.D.}
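The leading-order stationary-phase formula used in the proof can be checked numerically on a toy example. All concrete data below are my own assumptions, not taken from the paper: $\Omega=(-1,1)$ carries a smooth normalised bump density, $\xi(t,\omega)=2+t^2+t\omega$ has the unique stationary phase point $(t^\star,\omega^\star)=(0,0)$ with $\xi(t^\star,\omega^\star)=2$ and $J(t^\star,\omega^\star)=1$, so with $n=1$ the modulus of the integral should behave like $(2\pi/k)\,\frac{dP}{d\omega}(0)$ and its phase should grow at rate $\xi(t^\star,\omega^\star)=2$.

```python
import numpy as np

# Toy verification (my own example): phase xi(t, omega) = 2 + t^2 + t*omega,
# smooth cutoff phi with phi(0) = 1, and a smooth normalised density psi on
# Omega = (-1, 1).  Both bumps vanish to all orders at the boundary, so the
# only contribution comes from the stationary phase point (0, 0).
t = np.linspace(-1.0, 1.0, 3001)[1:-1][:, None]
w = np.linspace(-1.0, 1.0, 1201)[1:-1][None, :]
dt, dw = t[1, 0] - t[0, 0], w[0, 1] - w[0, 0]

phi = np.exp(1.0 - 1.0 / (1.0 - t ** 2))       # cutoff in t, phi(0) = 1
psi = np.exp(-1.0 / (1.0 - w ** 2))
psi /= psi.sum() * dw                          # probability density dP/domega
psi0 = psi[0, (psi.shape[1] - 1) // 2]         # dP/domega at omega* = 0

def I(k):
    """Riemann sum for the double oscillatory integral at frequency k."""
    return (phi * psi * np.exp(1j * k * (2.0 + t ** 2 + t * w))).sum() * dt * dw

k = 150.0
# Stationary phase predicts |I(k)| ~ (2 pi / k) * phi(0) * psi(0) / sqrt|det J|.
amp_ratio = abs(I(k)) * k / (2.0 * np.pi * psi0)
# The phase grows like k * xi(t*, omega*), so a small increment in k recovers
# the extremal value xi(t*, omega*) = 2.
slope = np.angle(I(k + 0.01) * np.conj(I(k))) / 0.01
print(amp_ratio, slope)   # near 1 and near 2, respectively
```

The phase slope is extracted from a finite difference of arguments rather than from a single evaluation, because the principal branch of $\arg$ alone only determines $k\,\xi(t^\star,\omega^\star)$ modulo $2\pi$.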
\vspace{0.1cm}
Formulas (\ref{Smax}) and (\ref{Smin}), together with Theorem \ref{smoothExpectation}, give us a recipe for assessing the extrema of a real stochastic process $\xi(t)$ on an arbitrary time interval $[a,b].$ In order to do that, one needs to analyze the limit behaviour of the following integral for large values of $k:$
$$
Re\left\{\frac{1}{ik} \int_{\gamma_k} \frac{dz}{z} \right\},
$$
where the complex curve $\gamma_k$ is defined as
$$
\gamma_k = \left\{ \int_\Omega \int_{-\infty }^{\infty} \varphi_\varepsilon (t) e^{i \lambda \xi(t,\omega )}dt dP(\omega);\;\; 0 \le \lambda \le k \right\}.
$$
Notice that $\gamma_k$ can be interpreted as the curve traced out by the characteristic function of $ \xi(t,\omega ),$ evaluated at frequencies $0 \le \lambda \le k$, where $\varphi_\varepsilon (t)$ is chosen so that
$$
\int_{-\infty }^{\infty} \varphi_\varepsilon (t) dt =1.
$$
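The recipe above can be sketched directly by discretizing the curve $\gamma_k$ (the process and all step sizes below are illustrative assumptions): sampling $z_j$ along $\gamma_k$, approximating $\int_{\gamma_k} dz/z$ by midpoint sums, and using $Re\{(1/(ik))\int_{\gamma_k} dz/z\} = \mathrm{Im}\big(\int_{\gamma_k} dz/z\big)/k$ recovers the extremal value.

```python
import numpy as np

# Same deterministic toy process as before: xi(t, omega) = g(t) = 1 + (t-1/2)^2
# with a smooth bump phi on (0, 1); its extremal value in the support is 1.
t = np.linspace(0.0, 1.0, 4001)[1:-1]
dt = t[1] - t[0]
g = 1.0 + (t - 0.5) ** 2
phi = np.exp(-1.0 / (t * (1.0 - t)))

# Sample the curve gamma_k at frequencies lambda in [0, K].
K, dlam = 400.0, 0.05
lams = np.arange(0.0, K + dlam, dlam)
z = np.array([(phi * np.exp(1j * lam * g)).sum() * dt for lam in lams])

# Midpoint discretization of the contour integral int_{gamma_k} dz/z,
# which equals ln I(K) - ln I(0) along a continuous branch of the logarithm.
contour = np.sum((z[1:] - z[:-1]) / (0.5 * (z[1:] + z[:-1])))
estimate = contour.imag / lams[-1]
print(estimate)   # close to the extremal value g(1/2) = 1
```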
In conclusion, we note that condition (\ref{nonZero2nd}) can be relaxed with the help of Theorem \ref{stateionaryPhase} (see also \cite{Atiyah}, \cite{Malgrange}). In some applications (e.g., when estimating the tails of distributions) it is beneficial to let the truncation function $\varphi_\varepsilon (t)$ depend on $\omega$ as well.
\vspace{0.1cm}
% Source: arXiv:0801.4726, ``Stochastic extrema as stationary phases of characteristic functions''.
% Source: arXiv:1204.6530, ``Independent sets in hypergraphs''.
\begin{abstract}
Many important theorems in combinatorics, such as Szemer{\'e}di's theorem on arithmetic progressions and the Erd{\H o}s-Stone Theorem in extremal graph theory, can be phrased as statements about independent sets in uniform hypergraphs. In recent years, an important trend in the area has been to extend such classical results to the so-called sparse random setting. This line of research culminated recently in the breakthroughs of Conlon and Gowers and of Schacht, who developed general tools for solving problems of this type. In this paper, we provide a third, completely different approach to proving extremal and structural results in sparse random sets. We give a structural characterization of the independent sets in a large class of uniform hypergraphs by showing that every independent set is almost contained in one of a small number of relatively sparse sets. We then derive many interesting results as fairly straightforward consequences of this abstract theorem. In particular, we prove the well-known conjecture of Kohayakawa, \L uczak and R{\"o}dl, a probabilistic embedding lemma for sparse graphs. We also give alternative proofs of many of the results of Conlon and Gowers and Schacht, and obtain their natural counting versions, which in some cases are considerably stronger. We moreover prove a sparse version of the Erd{\H o}s-Frankl-R{\"o}dl Theorem on the number of $H$-free graphs and extend a result of R{\"o}dl and Ruci{\'n}ski on Ramsey properties in sparse random graphs to the general, non-symmetric setting. We remark that similar results have been discovered independently by Saxton and Thomason, and that, in parallel to this work, Conlon, Gowers, Samotij and Schacht have proved a sparse analogue of the counting lemma for subgraphs of the random graph $G(n,p)$, which may be viewed as a version of the K\L R conjecture that is stronger in some ways and weaker in others.
\end{abstract}
\section{Introduction}
A great many of the central questions in combinatorics fall into the following general framework: Given a finite set $V$ and a collection $\mathcal{H} \subseteq \mathcal{P}(V)$ of \emph{forbidden structures}, what can be said about sets $I \subseteq V$ that do not contain any member of $\mathcal{H}$? For example, the celebrated theorem of Szemer{\'e}di~\cite{Sz} states that if $V = \{1, \ldots, n\}$ and $\mathcal{H}$ is the collection of $k$-term arithmetic progressions in $\{1, \ldots, n\}$, then every set $I$ that contains no member of $\mathcal{H}$ satisfies $|I| = o(n)$. The archetypal problem studied in extremal graph theory, dating back to the work of Tur{\'a}n~\cite{Turan} and Erd{\H o}s and Stone~\cite{ErSt}, is the problem of characterizing such sets $I$ when $V$ is the edge set of the complete graph on $n$ vertices and $\mathcal{H}$ is the collection of copies of some fixed graph $H$ in $K_n$. In this setting, a great deal is known, not only about the maximum size of $I$ that contains no member of $\mathcal{H}$, but also what the largest such sets look like, how many such sets there are, and what the structure of a typical such set is.
A collection $\mathcal{H} \subseteq \mathcal{P}(V)$ as above is usually referred to as a \emph{hypergraph} on the vertex set $V$ and any set $I \subseteq V$ that contains no element (\emph{edge}) of $\mathcal{H}$ is called an \emph{independent set}. Therefore, one might say that a large part of extremal combinatorics is concerned with studying independent sets in various specific hypergraphs. We might add here that in many natural settings, such as the two mentioned above, the hypergraphs considered are \emph{uniform}, that is, all edges of $\mathcal{H}$ have the same size.
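The abstract setup can be made concrete in a few lines of code (the encoding below — edges as frozensets and a brute-force membership test — is just one possible choice, not anything prescribed by the paper):

```python
def is_independent(I, H):
    """I is an independent set of the hypergraph H (an iterable of edges,
    each a frozenset of vertices) if I contains no edge of H entirely."""
    I = set(I)
    return not any(e <= I for e in H)

# Example: V = {1,...,6} and H = the 3-uniform hypergraph whose edges are
# the 3-term arithmetic progressions in {1,...,6}.
H = [frozenset({a, a + d, a + 2 * d})
     for d in (1, 2) for a in range(1, 7 - 2 * d)]
```

With this encoding, $\{1,2,5,6\}$ is independent while $\{4,5,6\}$ is not, since the latter is itself a $3$-term progression.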
Although it might at first seem somewhat artificial to study concrete questions in such an abstract setting, the past few years have proved that taking such a general approach can be highly beneficial. The recently-proved general transference theorems of Conlon and Gowers~\cite{CG} and Schacht~\cite{Sch} (see also~\cite{FRS}), which imply, among other things, sparse random analogues of the classical theorems of Szemer{\'e}di and of Erd{\H o}s and Stone, were stated in the language of hypergraphs. Roughly speaking, these transference theorems say the following: Let $\mathcal{H}$ be a hypergraph whose edges are sufficiently `uniformly distributed'. Then the independence number of $\mathcal{H}$ is `well-behaved' with respect to taking subhypergraphs induced by (sufficiently dense) random subsets of the vertex set. More precisely, given $p \in [0,1]$ and a finite set $V$, we shall write $V_p$ to denote the \emph{$p$-random subset} of $V$, that is, the random subset of $V$ in which each element of $V$ is included with probability $p$, independently of all other elements. We write $\alpha(\mathcal{H})$ and $v(\mathcal{H})$ to denote the size of the largest independent set and the number of vertices in a hypergraph $\mathcal{H}$, respectively. The results of Conlon and Gowers~\cite{CG} and Schacht~\cite{Sch} imply, in particular, that if the distribution of the edges of some uniform hypergraph $\mathcal{H}$ is sufficiently `balanced', then with probability tending to $1$ as $v(\mathcal{H}) \to \infty$,
\[
\alpha\big( \mathcal{H}[V(\mathcal{H})_p] \big) \le p \alpha(\mathcal{H}) + o\big(p v(\mathcal{H})\big),
\]
provided that $p$ is sufficiently large.
In this work, we give an approximate structural characterization of the family of all independent sets in uniform hypergraphs whose edge distribution satisfies a certain natural boundedness condition. More precisely, we shall prove that the family $\mathcal{I}(\mathcal{H})$ of independent sets of such a hypergraph $\mathcal{H}$ exhibits a certain clustering phenomenon. Our main result (Theorem~\ref{thm:main}, below) states that $\mathcal{I}(\mathcal{H})$ admits a partition into relatively few classes with the following property: all members of each class are essentially contained in a single `almost independent' subset of $V(\mathcal{H})$ (i.e., one which contains only a tiny proportion of all the edges of $\mathcal{H}$). This somewhat abstract statement has surprisingly many deep and interesting consequences, some of which we list in the remainder of this section. We remark that Theorem~\ref{thm:main} was partly inspired by the work of Kleitman and Winston~\cite{KW}, who implicitly considered a statement of this type in the setting of graphs ($2$-uniform hypergraphs) and subsequently used it to bound the number of $n$-vertex graphs without a $4$-cycle. We also note that a result similar to Theorem~\ref{thm:main} was independently proved by Saxton and Thomason~\cite{SaTh}, who also use it to derive many of the statements that we present in Sections~\ref{sec:intro-APs}--\ref{sec:KLR-intro}.
\subsection{The number of sets with no $k$-term arithmetic progression}
\label{sec:intro-APs}
The celebrated theorem of Szemer{\'e}di~\cite{Sz} says that for every $k \in \mathbb{N}$, the largest subset of $\{1, \ldots, n\}$ that contains no $k$-term arithmetic progression (AP) has $o(n)$ elements. It immediately follows that there are only $2^{o(n)}$ subsets of $\{1, \ldots, n\}$ with no $k$-term AP. Our first result can be viewed as a sparse analogue of this statement.
\begin{thm}
\label{thm:Sz}
For every positive $\beta$ and every $k \in \mathbb{N}$, there exist constants $C$ and $n_0$ such that the following holds. For every $n \in \mathbb{N}$ with $n \ge n_0$, if $m \ge Cn^{1-1/(k-1)}$, then there are at most
\[
\binom{\beta n}{m}
\]
$m$-subsets of $\{1, \ldots, n\}$ that contain no $k$-term AP.
\end{thm}
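The objects counted by the theorem can be explored by brute force for very small $n$ (the sketch below is purely illustrative and not an algorithm from the paper; the function names are mine):

```python
from itertools import combinations

def has_k_ap(s, k):
    """Does the set s of positive integers contain a k-term AP?"""
    s = set(s)
    for a in s:
        for d in range(1, max(s)):
            if all(a + i * d in s for i in range(k)):
                return True
    return False

def max_ap_free(n, k):
    """Largest size of a subset of {1,...,n} with no k-term AP, by exhaustive
    search over all subsets (feasible only for very small n)."""
    for size in range(n, 0, -1):
        if any(not has_k_ap(c, k) for c in combinations(range(1, n + 1), size)):
            return size
    return 0
```

For instance, `max_ap_free(9, 3)` returns $5$, witnessed by $\{1,2,4,8,9\}$, while every $5$-element subset of $\{1,\ldots,8\}$ contains a $3$-term AP.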
We shall deduce Theorem~\ref{thm:Sz} from our main theorem, Theorem~\ref{thm:main}, and a robust version of Szemer{\'e}di's theorem, see Section~\ref{sec:Sz}. The sparse random analogue of Szemer{\'e}di's theorem, proved by Schacht~\cite{Sch} and independently by Conlon and Gowers~\cite{CG}, follows as an easy corollary of Theorem~\ref{thm:Sz}. Following~\cite{CG}, we shall say that a set $A \subseteq \mathbb{N}$ is \emph{$(\delta,k)$-Szemer{\'e}di} if every subset $B \subseteq A$ with at least $\delta|A|$ elements contains a $k$-term AP. For the sake of brevity, let $[n] = \{1, \ldots, n\}$ and recall that $[n]_p$ denotes the $p$-random subset of $[n]$.
\begin{cor}
\label{cor:Sz}
For every $\delta \in (0,1)$ and every $k \in \mathbb{N}$, there exists a constant $C$ such that the following holds. If $p_n \ge Cn^{-1/(k-1)}$ for all sufficiently large $n$, then
\[
\lim_{n \to \infty} \Pr\big( \text{$[n]_{p_n}$ is $(\delta,k)$-Szemer{\'e}di} \big) = 1.
\]
\end{cor}
We remark that Theorem~\ref{thm:Sz} and Corollary~\ref{cor:Sz} are both sharp up to the value of the constant $C$, see the discussion in Section~\ref{sec:Sz}, where both of these statements are proved.
Our main result has a variety of other applications in additive combinatorics, see for example~\cite{ABMS1,ABMS2} where, jointly with Alon, we used a much simpler version of it to count sum-free sets of fixed size in various Abelian groups and the set $[n]$. In Section~\ref{sec:Sz}, we shall mention two other applications: generalizations of Theorem~\ref{thm:Sz} to higher dimensions and to $k$-term APs whose common difference is of the form $d^r$. In each case, the random version (which was proved in~\cite{CG,Sch}) follows as an easy corollary.
\subsection{Tur{\'a}n's problem in random graphs}
\label{sec:turan-problem}
The famous theorem of Erd{\H o}s and Stone~\cite{ErSt} states that the maximum number of edges in an $H$-free graph on $n$ vertices, the \emph{Tur{\'a}n number for $H$}, denoted $\ex(n,H)$, satisfies
\begin{equation}
\label{eq:exnH}
\ex(n,H) = \left( 1 - \frac{1}{\chi(H) - 1} + o(1) \right) \binom{n}{2},
\end{equation}
where $\chi(H)$ is the chromatic number of $H$. The analogue of this theorem for the Erd{\H o}s-R{\'e}nyi random graph $G(n,p)$ was first studied by Babai, Simonovits, and Spencer~\cite{BaSiSp}, who proved that \emph{asymptotically almost surely} (a.a.s.~for short), i.e., with probability tending to~$1$ as $n \to \infty$, the largest triangle-free subgraph of $G(n,1/2)$ is bipartite, and by Frankl and R{\"o}dl~\cite{FrRo}, who proved that if $p \ge n^{-1/2 + \varepsilon}$ then a.a.s.~the largest triangle-free subgraph of $G(n,p)$ has $pn^2/8 + o(pn^2)$ edges. The systematic study of the Tur\'an problem in $G(n,p)$ was initiated by Haxell, Kohayakawa, and \L uczak~\cite{HaKoLu95, HaKoLu96} and by Kohayakawa, \L uczak, and R\"odl~\cite{KLR97}, who posed the following problem. For a fixed graph $H$, determine necessary and sufficient conditions on a sequence $\mathbf{p} \in [0,1]^\mathbb{N}$ of probabilities such that, a.a.s.,
\begin{equation}
\label{eq:exGnpH}
\ex\big(G(n,p_n), H\big) = \left( 1 - \frac{1}{\chi(H)-1} + o(1) \right) \binom{n}{2} p_n,
\end{equation}
where $\ex(G,H)$ denotes the maximum number of edges in an $H$-free subgraph of $G$.
By considering a random $(\chi(H)-1)$-partition of the vertex set of $G(n,p)$, it is straightforward to show that the inequality $\ex\big(G(n,p), H \big) \ge \left(1 - \frac{1}{\chi(H) - 1} + o(1)\right) \binom{n}{2} p$ holds for every $p \in [0,1]$. On the other hand, if the number of copies of some subgraph $H' \subseteq H$ in $G(n,p)$ is much smaller than the number of edges in $G(n,p)$, then the converse inequality cannot hold, since one can make any graph $H$-free by removing from it one edge from each copy of $H'$. This observation motivates the notion of \emph{$2$-density} of $H$, denoted by $m_2(H)$, which is defined by
\begin{equation}
\label{eq:m2H}
m_2(H) = \max \left\{ \frac{e(H') - 1}{v(H') - 2} \colon H' \subseteq H \text{ with } v(H') \ge 3 \right\}.
\end{equation}
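The $2$-density is directly computable for small graphs. The following sketch (my own helper, with a graph encoded as vertex and edge lists) scans induced subgraphs only, which suffices because for a fixed vertex set the ratio in the definition is maximized by keeping all available edges:

```python
from itertools import combinations

def m2(vertices, edges):
    """2-density m_2(H): the maximum of (e(H') - 1)/(v(H') - 2) over
    subgraphs H' of H with at least 3 vertices, computed by scanning
    induced subgraphs of every vertex subset of size >= 3."""
    best = 0.0
    for r in range(3, len(vertices) + 1):
        for W in combinations(vertices, r):
            Wset = set(W)
            e = sum(1 for u, v in edges if u in Wset and v in Wset)
            best = max(best, (e - 1) / (r - 2))
    return best
```

For example, $m_2(K_3) = 2$, $m_2(C_4) = 3/2$ (achieved by the whole cycle, not by any triangle-free proper subgraph), and $m_2(K_4) = 5/2$.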
It now follows easily that for every graph $H$ with maximum degree at least $2$ and every $\delta \in \big(0, 1/(\chi(H)-1)\big)$, there exists a positive constant $c$ such that if $p_n \le c n^{-1/m_2(H)}$, then a.a.s.
\[
\ex\big(G(n,p_n), H\big) > \left( 1 - \frac{1}{\chi(H) - 1} + \delta \right) \binom{n}{2} p_n.
\]
It was conjectured by Haxell, Kohayakawa, and \L uczak~\cite{HaKoLu95} and Kohayakawa, \L uczak, and R{\"o}dl~\cite{KLR97} that the above simple argument, removing an arbitrary edge from each copy of $H'$ in $G(n,p)$, is the main obstacle that prevents~\eqref{eq:exGnpH} from holding asymptotically almost surely. The conjecture, often referred to as Tur{\'a}n's theorem for random graphs, has attracted considerable attention in the past fifteen years. Numerous partial results and special cases had been established by various researchers~\cite{Fu, Ge, GeScSt, HaKoLu95, HaKoLu96, KLR97, KRS04, SzVu} before the conjecture was finally proved by Conlon and Gowers~\cite{CG} (under the assumption that $H$ is \emph{strictly $2$-balanced}\footnote{A graph $H$ is $2$-balanced if the maximum in~\eqref{eq:m2H} is achieved with $H' = H$, that is, if $m_2(H) = \frac{e(H)-1}{v(H)-2}$. It is strictly $2$-balanced if $m_2(H) > m_2(H')$ for every proper subgraph $H' \subsetneq H$.}) and by Schacht~\cite{Sch}.
\begin{thm}
\label{thm:Turan-Gnp}
For every graph $H$ with $\Delta(H) \ge 2$ and every positive $\delta$, there exists a positive constant $C$ such that if $p_n \ge C n^{-1/m_2(H)}$, then a.a.s.
\[
\ex\big(G(n,p_n), H\big) \le \left( 1 - \frac{1}{\chi(H) - 1} + \delta \right) \binom{n}{2} p_n.
\]
\end{thm}
Our methods give yet another proof of Theorem~\ref{thm:Turan-Gnp}. In fact, we shall deduce from our main result, Theorem~\ref{thm:main}, a version of the general transference theorem of Schacht~\cite[Theorem~3.3]{Sch}, which easily implies Theorem~\ref{thm:Turan-Gnp} for such graphs $H$. Our version of Schacht's transference theorem, Theorem~\ref{thm:Sch-weak}, is stated and proved in Section~\ref{sec:extremal-results}. We then, in Section~\ref{sec:Turan}, use it to derive a natural generalization of Theorem~\ref{thm:Turan-Gnp} to $t$-uniform hypergraphs, Theorem~\ref{thm:t-Turan-Gnp}, which was also first proved in~\cite{CG} and~\cite{Sch}.
\begin{remark}
In the original version of this paper, we only proved the results concerning $H$-free graphs under the additional assumption that $H$ is $2$-balanced. However, a simple modification of our method (permitting multiple edges in our hypergraphs, as in~\cite{SaTh}) allowed us to remove this condition. We would like to thank David Saxton for pointing this out.
\end{remark}
Our methods also yield the following sparse random analogue of the famous stability theorem of Erd{\H o}s and Simonovits~\cite{ES1, ES2}, originally proved by Conlon and Gowers~\cite{CG} in the case when $H$ is strictly $2$-balanced and then extended to arbitrary $H$ by Samotij~\cite{Sam}, who adapted the argument of Schacht~\cite{Sch} for this purpose.
\begin{thm}
\label{thm:stability-Gnp}
For every graph $H$ with $\Delta(H) \ge 2$ and every positive $\delta$, there exist positive constants $C$ and $\varepsilon$ such that if $p_n \ge Cn^{-1/m_2(H)}$, then a.a.s.~the following holds. Every $H$-free subgraph of $G(n,p_n)$ with at least
\[
\left(1 - \frac{1}{\chi(H)-1} -\varepsilon\right) \binom{n}{2} p_n
\]
edges may be made $(\chi(H)-1)$-partite by removing from it at most $\delta n^2p_n$ edges.
\end{thm}
As with Theorem~\ref{thm:Turan-Gnp}, we shall in fact deduce Theorem~\ref{thm:stability-Gnp} from a more general statement, Theorem~\ref{thm:stability-random}, which is a version of the general transference theorem for stability results proved in~\cite{Sam}. Theorem~\ref{thm:stability-random} is stated and proved in Section~\ref{sec:stability-results}; in Section~\ref{sec:Turan}, we use it to derive Theorem~\ref{thm:stability-Gnp}.
\subsection{The typical structure of $H$-free graphs}
\label{sec:typical-structure}
Let $H$ be an arbitrary non-empty graph. For an integer $n$, denote by $f_n(H)$ the number of labelled $H$-free graphs on the vertex set $[n]$. Since every subgraph of an $H$-free graph is also $H$-free, it follows that $f_n(H) \ge 2^{\ex(n,H)}$. Erd{\H o}s, Frankl, and R{\"o}dl~\cite{ErFrRo} proved that this crude lower bound is in a sense tight, namely that
\begin{equation}
\label{eq:fnH-upper}
f_n(H) = 2^{\ex(n,H) + o(n^2)}.
\end{equation}
Our next result can be viewed as a `sparse version' of~\eqref{eq:fnH-upper}. Such a statement was already considered by \L uczak~\cite{Lu}, who derived it from the so-called K\L R conjecture, which we discuss in the next subsection. For integers $n$ and $m$ with $0 \le m \le \binom{n}{2}$, let $f_{n,m}(H)$ be the number of labelled $H$-free graphs on the vertex set $[n]$ that have exactly $m$ edges. The following theorem refines~\eqref{eq:fnH-upper} to $n$-vertex graphs with $m$ edges.
\begin{thm}
\label{thm:ErFrRo-sparse}
For every graph $H$ and every positive $\delta$, there exists a positive constant $C$ such that the following holds. For every $n \in \mathbb{N}$, if $m \ge Cn^{2-1/m_2(H)}$, then
\[
\binom{\ex(n,H)}{m} \le f_{n,m}(H) \le \binom{\ex(n,H) + \delta n^2}{m}.
\]
\end{thm}
In fact, we shall deduce from our main result, Theorem~\ref{thm:main}, a `counting version' of the general transference theorem of Schacht~\cite[Theorem~3.3]{Sch}, which easily implies Theorem~\ref{thm:ErFrRo-sparse}. This `counting version' of Schacht's theorem (which refines and, in some respects, strengthens the main results of~\cite{CG,Sch}) is stated and proved in Section~\ref{sec:extremal-results}. We then use it to derive Theorem~\ref{thm:ErFrRo-sparse} in Section~\ref{sec:Turan-counting}. We remark that~\eqref{eq:fnH-upper} was refined in a different sense by Balogh, Bollob{\'a}s, and Simonovits~\cite{BaBoSi04}, who showed that $f_n(H) = 2^{\ex(n,H) + O(n^{2-c(H)})}$, where $c(H)$ is some positive constant, and also gave a very precise structural description of almost all $H$-free graphs. We would also like to point out that our proof of Theorem~\ref{thm:ErFrRo-sparse} does not use Szemer{\'e}di's regularity lemma, unlike the proof given in~\cite{Lu} or the proofs of Erd{\H o}s, Frankl, and R{\"o}dl~\cite{ErFrRo} and Balogh, Bollob{\'a}s, and Simonovits~\cite{BaBoSi04}.
The result of Erd{\H o}s, Frankl, and R{\"o}dl has, in some cases, a structural counterpart that significantly strengthens~\eqref{eq:fnH-upper}. For example, Erd{\H o}s, Kleitman, and Rothschild~\cite{ErKlRo} proved that \emph{almost all} triangle-free graphs are bipartite, that is, that with probability tending to $1$ as $n \to \infty$, a graph selected uniformly at random from the family of all triangle-free graphs on the vertex set $[n]$ is bipartite or, in other words (since clearly every bipartite graph is triangle-free), $f_n(K_3)$ is asymptotic to the number of bipartite graphs on the vertex set $[n]$. Extending this result, Osthus, Pr{\"o}mel, and Taraz~\cite{OsPrTa} proved that if $m \ge Cn^{3/2}\sqrt{\log n}$ for some $C > \sqrt{3}/4$, then almost all $n$-vertex triangle-free graphs with $m$ edges are bipartite. The corresponding result for $K_{r+1}$-free graphs was proved recently in~\cite{BMSW}.
Our next result, which is a strengthening of Theorem~\ref{thm:ErFrRo-sparse}, is an approximate version of this statement for an arbitrary graph $H$. Such a statement was also considered by \L uczak~\cite{Lu}, who derived it from the K\L R conjecture. Following~\cite{Lu}, given a positive real $\delta$ and an integer $k$, let us say that a graph $G$ is $(\delta,k)$-partite if $G$ can be made $k$-partite by removing from it at most $\delta e(G)$ edges.
\begin{thm}
\label{thm:H-free-structure}
For every graph $H$ with $\chi(H) \ge 3$, and every positive $\delta$, there exists a positive constant $C$ such that the following holds. If $m \ge Cn^{2 - 1/m_2(H)}$, then almost all $H$-free graphs with $n$ vertices and $m$ edges are $\big(\delta,\chi(H)-1\big)$-partite.
\end{thm}
As with Theorem~\ref{thm:ErFrRo-sparse}, we shall in fact deduce Theorem~\ref{thm:H-free-structure} from a `counting version' of the general transference theorem for stability results proved in~\cite{Sam}. Our version of it, Theorem~\ref{thm:stability-counting}, is stated and proved in Section~\ref{sec:stability-results}. In Section~\ref{sec:Turan-counting}, we use it to derive Theorem~\ref{thm:H-free-structure}. Once again, our proof does not use the regularity lemma, unlike that in~\cite{Lu}. Finally, we would like to mention that, as observed by \L uczak~\cite{Lu}, Theorem~\ref{thm:H-free-structure} has the following elegant corollary.
\begin{cor}
\label{cor:H-free-prob}
For every graph $H$ with $\chi(H) \ge 3$ and every positive $\varepsilon$, there exist positive constants $C$ and $n_0$ such that the following holds. For every $n \in \mathbb{N}$ with $n \ge n_0$ and every $m \in \mathbb{N}$ with $Cn^{2-1/m_2(H)} \le m \le n^2/C$,
\begin{equation}\label{eq:cor:luczak}
\left(\frac{\chi(H) - 2}{\chi(H) - 1} - \varepsilon\right)^m \le \, \Pr\big(G_{n,m} \nsupseteq H\big) \le \left(\frac{\chi(H) - 2}{\chi(H) - 1} + \varepsilon\right)^m,
\end{equation}
where $G_{n,m}$ is a uniformly selected random $n$-vertex graph with $m$ edges.
\end{cor}
Note that~\eqref{eq:cor:luczak} does not hold if $m$ is too large; for example, if $m > n^2/4$ then $\Pr\big(G_{n,m} \nsupseteq K_3 \big) = 0$. We remark that a great deal more is known about the structure of a typical $H$-free graph (drawn uniformly at random from the set of all $n$-vertex $H$-free graphs), see~\cite{BaBoSi09} and the references therein for more details.
\subsection{The K\L R conjecture}
\label{sec:KLR-intro}
The celebrated Szemer{\'e}di regularity lemma~\cite{Sz78}, which is considered to be one of the most important and powerful tools in extremal graph theory, says that the vertex set of every graph may be divided into a bounded number of parts of approximately the same size in such a way that most of the bipartite subgraphs induced between pairs of parts of the partition satisfy a certain pseudo-randomness condition termed \emph{$\varepsilon$-regularity}. The strength of the regularity lemma lies in the fact that it may be combined with the so-called \emph{embedding lemma} to show that a graph contains particular subgraphs. The combination of the regularity and embedding lemmas allows one to prove many well-known theorems in extremal graph theory, such as the theorem of Erd{\H o}s and Stone~\cite{ErSt} and the stability theorem of Erd{\H o}s and Simonovits~\cite{ES1,ES2}, both mentioned in Section~\ref{sec:turan-problem}.
For sparse graphs, that is, $n$-vertex graphs with $o(n^2)$ edges, the original version of the regularity lemma is vacuous since if the vertex set of a sparse graph is partitioned into a bounded number of parts, then all induced bipartite subgraphs thus obtained are trivially $\varepsilon$-regular, provided that $n$ is sufficiently large. However, it was independently observed by Kohayakawa~\cite{Ko97} and R{\"o}dl (unpublished) that the notion of $\varepsilon$-regularity may be extended in a meaningful way to graphs with density tending to zero. Moreover, with this more general notion of regularity, they were also able to prove an associated regularity lemma which applies to a large class of sparse graphs, including (a.a.s.) the random graph $G(n,p)$.
Given a $p \in [0,1]$ and a positive $\varepsilon$, we say that a bipartite graph between sets $V_1$ and $V_2$ is \emph{$(\varepsilon,p)$-regular} if for every $W_1 \subseteq V_1$ and $W_2 \subseteq V_2$ with $|W_1| \ge \varepsilon|V_1|$ and $|W_2| \ge \varepsilon |V_2|$, the density $d(W_1, W_2)$ of edges between $W_1$ and $W_2$ satisfies
\[
\big| d(W_1, W_2) - d(V_1, V_2) \big| \le \varepsilon p.
\]
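For small bipartite graphs, this definition can be checked by exhaustive search (an illustrative sketch with my own encoding; the search is exponential in $|V_1| + |V_2|$ and only feasible for toy examples):

```python
import math
from itertools import chain, combinations

def subsets_of_size_at_least(V, m):
    """All subsets of V with at least m elements."""
    return chain.from_iterable(combinations(V, r) for r in range(m, len(V) + 1))

def density(E, W1, W2):
    """Edge density d(W1, W2) for the bipartite edge list E."""
    W1, W2 = set(W1), set(W2)
    e = sum(1 for u, v in E if u in W1 and v in W2)
    return e / (len(W1) * len(W2))

def is_eps_p_regular(V1, V2, E, eps, p):
    """Brute-force check of the displayed definition: every pair of subsets
    W1, W2 with |Wi| >= eps|Vi| has density within eps*p of d(V1, V2)."""
    d = density(E, V1, V2)
    m1, m2 = math.ceil(eps * len(V1)), math.ceil(eps * len(V2))
    return all(abs(density(E, W1, W2) - d) <= eps * p
               for W1 in subsets_of_size_at_least(V1, m1)
               for W2 in subsets_of_size_at_least(V2, m2))
```

A complete bipartite graph is $(\varepsilon,p)$-regular for any $\varepsilon > 0$ (every pair of subsets has density exactly $1$), whereas a bipartite graph whose edges all leave a small part of $V_1$ fails the test once $\varepsilon$ is small enough to force a look at the empty part.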
A partition of the vertex set of a graph into $r$ parts $V_1, \ldots, V_r$ is said to be $(\varepsilon, p)$-regular if $\big| |V_i| - |V_j| \big| \le 1$ for all $i$ and $j$ and for all but at most $\varepsilon r^2$ pairs $(V_i, V_j)$, the graph induced between $V_i$ and $V_j$ is $(\varepsilon, p)$-regular. The class of graphs to which the Kohayakawa-R{\"o}dl regularity lemma applies are the so-called upper-uniform graphs. Given positive $\eta$ and $K$, we say that an $n$-vertex graph $G$ is \emph{$(\eta, p, K)$-upper-uniform} if for all $W \subseteq V(G)$ with $|W| \ge \eta n$, the density of edges within $W$ satisfies $d(W) \le Kp$. This condition is satisfied by many natural classes of graphs, including (a.a.s.) all subgraphs of random graphs of density $p$. The sparse regularity lemma of Kohayakawa~\cite{Ko97} and R{\"o}dl says the following.
\begin{SpSzRL}
For all positive $\varepsilon$, $K$, and $r_0$, there exist a positive constant $\eta$ and an integer $R$ such that for every $p \in [0,1]$, the following holds. Every $(\varepsilon,p,K)$-upper-uniform graph with at least $r_0$ vertices admits an $(\varepsilon,p)$-regular partition of its vertex set into $r$ parts, for some $r \in \{r_0, \ldots, R\}$.
\end{SpSzRL}
We remark that a version of this theorem avoiding the need for the upper-uniformity assumption was recently proved by Scott~\cite{Sc11}.
The aforementioned embedding lemma roughly says that if we start with an arbitrary graph $H$, replace its vertices by large independent sets and its edges by $\varepsilon$-regular bipartite graphs with density much larger than $\varepsilon$, then this blown-up graph will contain a copy of $H$. To make it more precise, let $H$ be a graph on the vertex set $\{1, \ldots, v(H)\}$, let $\varepsilon$ and $p$ be as above, and let $n$ and $m$ be integers satisfying $0 \le m \le n^2$. Let us denote by $\mathcal{G}(H,n,m,p,\varepsilon)$ the collection of all graphs $G$ constructed in the following way. The vertex set of $G$ is a disjoint union $V_1 \cup \ldots \cup V_{v(H)}$ of sets of size $n$, one for each vertex of $H$. For each edge $\{i,j\}$ of $H$, we add to $G$ an $(\varepsilon,p)$-regular bipartite graph with $m$ edges between the sets $V_i$ and $V_j$. These are the only edges of $G$. With this notation in hand, we can state the embedding lemma. Given any graph $G$ as above, we define \emph{canonical copies of $H$} to be all copies of $H$ in $G$ in which (the image of) each vertex $i \in V(H)$ lies in the set $V_i \subseteq V(G)$.
\begin{EL}
For every graph $H$ and every positive $d$, there exist a positive $\varepsilon$ and an integer $n_0$ such that for every $n$ and $m$ with $n \ge n_0$ and $m \ge dn^2$, every $G \in \mathcal{G}(H,n,m,1,\varepsilon)$ contains a canonical copy of $H$.
\end{EL}
One might hope that a similar statement holds when one replaces $1$ by an arbitrary $p$ and the assumption $m \ge dn^2$ by $m \ge pdn^2$, even if $p$ is a decreasing function of $n$. However, for an arbitrary function $p$, this is too much to hope for. Indeed, consider the random `blow-up' of $H$, that is, the random graph $G$ obtained from $H$ by replacing each vertex of $H$ by an independent set of size $n$ and each edge of $H$ by a random bipartite graph with $pn^2$ edges. With high probability, the number of canonical copies of $H$ in $G$ will be about $p^{e(H)}n^{v(H)}$ and hence if $p^{e(H)}n^{v(H)} \ll pn^2$, then one can remove all copies of $H$ from $G$ by deleting a tiny proportion of all edges. Since in the above argument one may replace $H$ with an arbitrary subgraph $H' \subseteq H$, it follows easily\footnote{Note that we also replace $p$ with some $p' = (1 + o(1))p$, and that the removal of $o(pn^2)$ edges does not affect the $\varepsilon$-regularity conditions.} that if $p \ll n^{-1/m_2(H)}$, then there are graphs in $\mathcal{G}(H,n,pn^2,p,\varepsilon)$ that do not contain any canonical copies of~$H$.
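The threshold appearing here can be extracted from the heuristic directly. Recall that $m_2(H) = \max \big\{ (e(H')-1)/(v(H')-2) \colon H' \subseteq H \text{ with } v(H') \ge 3 \big\}$. For a fixed subgraph $H' \subseteq H$ with $v(H') \ge 3$ and $e(H') \ge 2$,
\[
p^{e(H')} n^{v(H')} \ll p n^2
\quad\Longleftrightarrow\quad
p^{\,e(H')-1} \ll n^{\,2-v(H')}
\quad\Longleftrightarrow\quad
p \ll n^{-\frac{v(H')-2}{e(H')-1}}.
\]
Thus $p \ll n^{-1/m_2(H)}$ holds exactly when $p \ll n^{-(v(H')-2)/(e(H')-1)}$ for a subgraph $H'$ attaining the maximum in the definition of $m_2(H)$, and for this $H'$ the deletion argument above applies.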
\enlargethispage{\baselineskip}
As in the case of Tur{\'a}n's theorem for random graphs, see Section~\ref{sec:turan-problem}, one might still hope that if $p \ge Cn^{-1/m_2(H)}$ for some large constant $C$, then the natural sparse analogue of the embedding lemma discussed above holds. However, it was observed by \L uczak (see~\cite{GeSt,KoRo}) that, somewhat surprisingly, for any graph $H$ which contains a cycle and any function $p$ satisfying $p = o(1)$, there are graphs in $\mathcal{G}(H,n,pn^2,p,\varepsilon)$ with no canonical copy of $H$. Nevertheless, it still seemed likely that such atypical graphs comprise so tiny a proportion of $\mathcal{G}(H,n,m,p,\varepsilon)$ that they do not appear in $G(n,p)$ asymptotically almost surely.
This was formalized in the following conjecture of Kohayakawa, \L uczak, and R{\"o}dl~\cite{KLR97}, usually referred to as the \emph{K\L R conjecture}. Given a graph $H$, integers $m$ and $n$, a $p \in [0,1]$, and a positive $\varepsilon$, let $\mathcal{G}^*(H,n,m,p,\varepsilon)$ denote the collection of graphs in $\mathcal{G}(H,n,m,p,\varepsilon)$ that contain no canonical copy of $H$. We will prove the conjecture in Section~\ref{sec:KLR}.
\begin{thm}[The K\L R conjecture]\label{thm:KLR}
For every graph $H$ and every positive $\beta$, there exist positive constants $C$, $n_0$, and $\varepsilon$ such that the following holds. For every $n \in \mathbb{N}$ with $n \ge n_0$ and $m \in \mathbb{N}$ with $m \ge Cn^{2-1/m_2(H)}$,
\[
\big| \mathcal{G}^*(H,n,m,m/n^2,\varepsilon) \big| \le \beta^m \binom{n^2}{m}^{e(H)}.
\]
\end{thm}
The K\L R conjecture has been one of the central open questions in extremal graph theory and has attracted substantial attention from many researchers over the past fifteen years. It has been verified in several special cases. It is easy to see that it holds for all graphs $H$ which do not contain a cycle. The cases $H = K_3$, $K_4$, and $K_5$ were resolved in~\cite{KLR96}, \cite{GePrScStTa}, and~\cite{GeScSt}, respectively. The case $H = C_\ell$ has also been resolved, but here the history is somewhat more complex. A proof under some extra technical assumptions was given in~\cite{KoKr}. Those extra assumptions were later removed in~\cite{GeKoRoSt} and, independently, in \cite{Be}. We remark here that in parallel to this work, Conlon, Gowers, Samotij, and Schacht~\cite{CoGoSaSc} have proved a sparse analogue of the \emph{counting lemma} for subgraphs of the random graph $G(n,p)$, which may be viewed as a version of the K\L R conjecture that is stronger in some aspects and weaker in other aspects.
It is well-known that Theorem~\ref{thm:KLR} easily implies Tur{\'a}n's theorem for random graphs, Theorem~\ref{thm:Turan-Gnp}, and also its stability version, Theorem~\ref{thm:stability-Gnp}. In fact, this was the original motivation behind the K\L R conjecture, see~\cite{KLR97}. Moreover, it was proved by \L uczak~\cite{Lu} that Theorem~\ref{thm:KLR} implies Theorems~\ref{thm:ErFrRo-sparse} and~\ref{thm:H-free-structure}. The work of Conlon and Gowers~\cite{CG} and Schacht~\cite{Sch} (see also~\cite{Sam}), as well as this work, have shown that one does not need to appeal to the sparse regularity lemma and to the K\L R conjecture in order to prove such extremal statements in random graphs. Nevertheless, there are still many beautiful corollaries of the conjecture that cannot (yet) be proved by other means. For discussion and derivation of some of them, we refer the reader to~\cite{CoGoSaSc}. Here, we present only one corollary of the K\L R conjecture, the threshold for asymmetric Ramsey properties of random graphs, which does not follow from the version of the conjecture proved in~\cite{CoGoSaSc}. The deduction of this result from the K\L R conjecture is essentially due to Kohayakawa and Kreuter~\cite{KoKr}.
\subsection{Ramsey properties of random graphs}
Let $H$ be a fixed graph and let $r$ be a positive integer. For an arbitrary graph $G$, we write $G \to (H)_r$ if every $r$-coloring of the edges of $G$ contains a monochromatic copy of $H$. It follows from the classical result of Ramsey~\cite{Ra} that $K_n \to (H)_r$, provided that $n$ is sufficiently large. Ramsey properties of random graphs were first investigated by Frankl and R{\"o}dl~\cite{FrRo} and since then much effort has been devoted to their study. Most notably, R\"odl and Ruci\'nski~\cite{RoRu93, RoRu95} established the following general threshold result.
\begin{thm}
\label{thm:RoRu}
For every graph $H$ that is not a forest, and every positive integer $r$, there exist positive constants $c$ and $C$ such that
\[
\lim_{n \to \infty} \Pr\big( G(n,p_n) \to (H)_r \big) =
\begin{cases}
1 & \text{if $p_n \ge Cn^{-1/m_2(H)}$}, \\
0 & \text{if $p_n \le cn^{-1/m_2(H)}$}.
\end{cases}
\]
\end{thm}
In the above discussion, a copy of the same graph $H$ is forbidden in each of the $r$ color classes. A natural generalization of Theorem~\ref{thm:RoRu} would determine thresholds for so-called \emph{asymmetric Ramsey properties}. For any graphs $G$, $H_1, \ldots, H_r$, we write $G \to (H_1, \ldots, H_r)$ if for every coloring of the edges of $G$ with colors $1, \ldots, r$, there exists, for some $i \in [r]$, a copy of $H_i$ all of whose edges have color $i$. In the context of asymmetric Ramsey properties of random graphs, the following generalization of the $2$-density $m_2(\cdot)$ was introduced in~\cite{KoKr}. For two graphs $H_1$ and $H_2$, define\footnote{To motivate this definition, set $p = n^{-1 / m_2(H_1,H_2)}$ and observe that the edges of $G(n,p)$ which are contained in a copy of each subgraph $H_1' \subseteq H_1$ have density roughly $n^{-1/m_2(H_2)}$.}
\begin{equation}
\label{eq:m2H2H1}
m_2(H_1, H_2) = \max \left\{ \frac{e(H_1') }{v(H_1') - 2 + 1/m_2(H_2)} \,\colon H_1' \subseteq H_1 \text{ with } v(H_1') \ge 3 \right\}.
\end{equation}
Kohayakawa and Kreuter~\cite{KoKr} formulated the following conjecture and proved it in the case when all $H_i$ are cycles.
\begin{conj}
\label{conj:KoKr}
Let $H_1, \ldots, H_r$ be graphs with $1 < m_2(H_r) \le \ldots \le m_2(H_1)$. Then there exist constants $c$ and $C$ such that
\[
\lim_{n \to \infty} \Pr\big( G(n,p_n) \to (H_1, \ldots, H_r) \big) =
\begin{cases}
1 & \text{if $p_n \ge Cn^{-1/m_2(H_1, H_2)}$}, \\
0 & \text{if $p_n \le cn^{-1/m_2(H_1, H_2)}$}.
\end{cases}
\]
\end{conj}
More accurately, the above conjecture was stated in~\cite{KoKr} only in the case $r=2$, but the above generalization is quite natural.\footnote{To see why the graphs $H_3,\ldots,H_r$ do not appear in the threshold, replace each of $H_2,\ldots,H_r$ by the disjoint union $H' = H_2 \cup \cdots \cup H_r$, and note that $m_2(H') = m_2(H_2)$, see~\cite{MaSkSpSt}.} There had been little progress on Conjecture~\ref{conj:KoKr} until quite recently, when the $0$-statement was proved by Marciniszyn, Skokan, Sp\"ohel, and Steger~\cite{MaSkSpSt} in the case where all of the $H_i$ are cliques, and the $1$-statement in the case $r = 2$ was established\footnote{In their concluding remarks, the authors of~\cite{KoSchSp} moreover claim that their method can be extended to the setting with more than two colors, using ideas from~\cite{RoRu95}.} by Kohayakawa, Schacht, and Sp\"ohel~\cite{KoSchSp} under very mild extra assumptions on $H_1$ and $H_2$. It was observed in~\cite[Theorem~31]{MaSkSpSt} that, using Theorem~\ref{thm:KLR}, the approach of Kohayakawa and Kreuter~\cite{KoKr}, which employs the sparse regularity lemma, can be adapted to yield a proof of the $1$-statement in Conjecture~\ref{conj:KoKr} for the following class of graphs.
\begin{thm}
\label{thm:Ramsey-Gnp}
Let $H_1, \ldots, H_r$ be graphs with $1 < m_2(H_r) \le \ldots \le m_2(H_1)$ and such that $H_1$ is strictly $2$-balanced. Then there exists a constant $C$ such that if $p_n \ge Cn^{-1/m_2(H_1,H_2)}$, then a.a.s.
\[
G(n,p_n) \to (H_1, \ldots, H_r).
\]
\end{thm}
For the deduction of Theorem~\ref{thm:Ramsey-Gnp} from Theorem~\ref{thm:KLR}, see~\cite{KoKr} and~\cite[Section~4]{MaSkSpSt}.
\subsection{Outline of the paper}
The remainder of this paper is organized as follows. In Section~\ref{sec:main}, we state and discuss our main result, Theorem~\ref{thm:main}, which we then prove in Section~\ref{sec:proof-thm-main}. In Section~\ref{sec:Sz}, we discuss the applications of Theorem~\ref{thm:main} in the context of subsets of $[n]$ with no $k$-term arithmetic progressions. In particular, we prove Theorem~\ref{thm:Sz} and use it to derive Corollary~\ref{cor:Sz}. In Section~\ref{sec:extremal-results}, we prove two versions of the general transference theorem of Schacht~\cite[Theorem~3.3]{Sch} (obtained independently, in a slightly different form, by Conlon and Gowers~\cite{CG}) -- a `random' version suited for extremal problems in sparse random discrete structures and its `counting' counterpart that generalizes Theorem~\ref{thm:Sz}. In Section~\ref{sec:stability-results}, we prove `random' and `counting' versions of the general stability result of Conlon and Gowers~\cite{CG} in a form that is easily comparable with~\cite[Theorem~3.4]{Sam}. In Section~\ref{sec:Turan}, we discuss several applications of Theorem~\ref{thm:main} in the context of the Tur{\'a}n problem in sparse random graphs. In particular, using the results of Sections~\ref{sec:extremal-results} and~\ref{sec:stability-results} we give new proofs of the sparse random analogues (stated above) of the classical theorems of Erd{\H o}s and Stone, and Erd{\H o}s and Simonovits, see Section~\ref{sec:turan-problem}. In Section~\ref{sec:Turan-counting}, we discuss applications of Theorem~\ref{thm:main} to the problem of describing the typical structure of a sparse graph without a forbidden subgraph. In particular, we prove sparse analogues of classical theorems of Erd{\H o}s, Frankl, and R{\"o}dl and Erd{\H o}s, Kleitman, and Rothschild, see Section~\ref{sec:typical-structure}. Finally, in Section~\ref{sec:KLR}, we use Theorem~\ref{thm:main} to prove the K\L R conjecture for every graph~$H$.
\section{The Main Theorem}
\label{sec:main}
In this section, we present the main result of this paper, Theorem~\ref{thm:main}, which gives a structural characterization of the collection of all independent sets in a large class of uniform hypergraphs. Let us stress here that all of the hypergraphs we consider are allowed to have multiple edges; moreover, we shall always count edges with multiplicities.
We start with an important definition. Recall that a family of sets $\mathcal{F} \subseteq \mathcal{P}(V)$ is called \emph{increasing} (or an \emph{upset}) if it is closed under taking supersets, that is, if for every $A, B \subseteq V$, $A \in \mathcal{F}$ and $A \subseteq B$ imply that $B \in \mathcal{F}$.
\begin{defn}
\label{defn:Feps-dense}
Let $\mathcal{H}$ be a uniform hypergraph with vertex set $V$, let $\mathcal{F}$ be an increasing family of subsets of $V$ and let $\varepsilon \in (0,1]$. We say that $\mathcal{H}$ is \emph{$(\mathcal{F},\varepsilon)$-dense} if
\[
e(\mathcal{H}[A]) \ge \varepsilon e(\mathcal{H})
\]
for every $A \in \mathcal{F}$.
\end{defn}
A moment of thought reveals that for an arbitrary hypergraph $\mathcal{H}$ and $\varepsilon \in (0,1]$, it is extremely simple to find families $\mathcal{F} \subseteq \mathcal{P}(V(\mathcal{H}))$ for which $\mathcal{H}$ is $(\mathcal{F}, \varepsilon)$-dense. To this end, let
\[
\mathcal{F}_{\varepsilon} = \big\{ A \subseteq V(\mathcal{H}) \colon e(\mathcal{H}[A]) \ge \varepsilon e(\mathcal{H}) \big\}
\]
and note that $\mathcal{F}_{\varepsilon}$ is increasing (if $A \subseteq B$, then $e(\mathcal{H}[A]) \le e(\mathcal{H}[B])$) and $\mathcal{H}$ is $(\mathcal{F}_{\varepsilon}, \varepsilon)$-dense. In fact, the families $\mathcal{F}$ for which $\mathcal{H}$ is $(\mathcal{F}, \varepsilon)$-dense are precisely all increasing subfamilies of $\mathcal{F}_\varepsilon$.
In this work, we will be interested in upsets that admit a much more `constructive' description than that of $\mathcal{F}_\varepsilon$. Many such families arise naturally in the study of extremal and structural problems in combinatorics. For example, consider the $k$-uniform hypergraph $\mathcal{H}_1$ on the vertex set $[n]$ whose edges are all $k$-term arithmetic progressions in $[n]$ and let $\mathcal{F}_1$ be the collection of all subsets of $[n]$ with at least $\delta n$ elements. Clearly, $\mathcal{F}_1$ is an upset and it follows from the famous theorem of Szemer{\'e}di~\cite{Sz} that $\mathcal{H}_1$ is $(\mathcal{F}_1, \varepsilon)$-dense for some positive $\varepsilon$ depending only on $\delta$ and $k$, see Section~\ref{sec:Sz}. Similarly, consider the $3$-uniform hypergraph $\mathcal{H}_2$ on the vertex set $E(K_n)$ whose edges are edge sets of all copies of $K_3$ in the complete graph $K_n$ and let $\mathcal{F}_2$ be the family of all $n$-vertex graphs (subgraphs of $K_n$) with at least $(1/2 - \varepsilon)\binom{n}{2}$ edges such that every $2$-coloring of their vertices yields at least $\delta n^2$ monochromatic edges. Again, $\mathcal{F}_2$ is increasing and it follows from the stability theorem of Erd{\H o}s and Simonovits~\cite{ES1,ES2} and the triangle removal lemma of Ruzsa and Szemer{\'e}di~\cite{RuSz} that $\mathcal{H}_2$ is $(\mathcal{F}_2, \varepsilon)$-dense, provided that $\varepsilon$ is sufficiently small as a function of $\delta$.
Our main result roughly says the following. If $\mathcal{H}$ is a uniform hypergraph that is $(\mathcal{F}, \varepsilon)$-dense for some family $\mathcal{F}$ and whose edge distribution satisfies certain natural boundedness conditions, then the collection $\mathcal{I}(\mathcal{H})$ of all independent sets in $\mathcal{H}$ admits a partition into relatively few classes such that all independent sets in one class are essentially contained in a single set $A \not\in \mathcal{F}$. Before we state the result, we first need to quantify the above boundedness conditions for the edge distribution of a hypergraph. Given a hypergraph $\mathcal{H}$, for each $T \subseteq V(\mathcal{H})$, we define\footnote{We emphasize that if $\mathcal{H}$ has multiple edges, then $\{ e \in \mathcal{H} \colon T \subseteq e \}$ should be thought of as a multi-set. In other words, $\deg_\mathcal{H}(T)$ is the number of edges of $\mathcal{H}$, counted with multiplicities, which contain $T$.}
\[
\deg_\mathcal{H}(T) = | \{ e \in \mathcal{H} \colon T \subseteq e \} |,
\]
and let
\[
\Delta_\ell(\mathcal{H}) = \max \big\{ \deg_\mathcal{H}(T) \colon T \subseteq V(\mathcal{H}) \text{ and } |T| = \ell \big\}.
\]
Recall that $\mathcal{I}(\mathcal{H})$ denotes the family of all independent sets in $\mathcal{H}$. The following theorem is our main result.
\begin{thm}
\label{thm:main}
For every $k \in \mathbb{N}$ and all positive $c$ and $\varepsilon$, there exists a positive constant $C$ such that the following holds. Let $\mathcal{H}$ be a $k$-uniform hypergraph and let $\mathcal{F} \subseteq \mathcal{P}(V(\mathcal{H}))$ be an increasing family of sets such that $|A| \ge \varepsilon v(\mathcal{H})$ for all $A \in \mathcal{F}$. Suppose that $\mathcal{H}$ is $(\mathcal{F},\varepsilon)$-dense and $p \in (0,1)$ is such that, for every $\ell \in [k]$,
\[
\Delta_\ell(\mathcal{H}) \le c \cdot p^{\ell-1} \frac{e(\mathcal{H})}{v(\mathcal{H})}.
\]
Then there exists a family $\S \subseteq \binom{V(\mathcal{H})}{\le Cp \cdot v(\mathcal{H})}$ and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}) \to \S$ such that for every $I \in \mathcal{I}(\mathcal{H})$,
\[
g(I) \subseteq I \qquad \text{and} \qquad I \setminus g(I) \subseteq f( g(I) ).
\]
\end{thm}
Roughly speaking, if $\mathcal{H}$ satisfies certain technical conditions, then each independent set $I$ in $\mathcal{H}$ can be labelled with a small subset $g(I)$ in such a way that all sets labelled with some $S \in \S$ are essentially contained in a single set $f(S)$ that contains very few edges of $\mathcal{H}$. We remark that the constant $C$ in the theorem has only a polynomial dependence on $\varepsilon$. Unfortunately, however, in most of our applications $\varepsilon$ will have a tower-type dependence on some other parameter.
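To get a feel for the boundedness condition in Theorem~\ref{thm:main}, consider again the hypergraph $\mathcal{H}_1$ of $k$-term APs in $[n]$ introduced above, for which $v(\mathcal{H}_1) = n$ and $e(\mathcal{H}_1) = \Theta(n^2)$. Each element of $[n]$ lies in $O(n)$ progressions and each pair of elements lies in $O(1)$ of them, so $\Delta_1(\mathcal{H}_1) = O(n)$ and $\Delta_\ell(\mathcal{H}_1) = O(1)$ for all $\ell \in \{2, \ldots, k\}$. For such $\ell$, the hypothesis of Theorem~\ref{thm:main} therefore requires
\[
O(1) \,\le\, c \cdot p^{\ell-1} \frac{e(\mathcal{H}_1)}{v(\mathcal{H}_1)} \,=\, \Theta\big( p^{\ell-1} n \big),
\]
and the binding case $\ell = k$ forces $p \ge c' n^{-1/(k-1)}$ for some constant $c' > 0$; this is the source of the exponent $1 - 1/(k-1)$ appearing below.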
Theorem~\ref{thm:main} will be proved in Section~\ref{sec:proof-thm-main}. We end this section with a short informal discussion of its consequences. As we have already mentioned, Theorem~\ref{thm:main} combined with some classical extremal results on discrete structures has strikingly strong implications. Let us briefly explain why this is so. Many classical extremal problems ask for an estimate on the number of independent sets (of a certain size) in some auxiliary uniform hypergraph. If applicable, Theorem~\ref{thm:main} implies that all such independent sets are almost contained in one of very few sets that are almost independent, that is, contain a small number of copies of some forbidden substructure. If we know a good characterization of sets that are almost independent in the above sense, which is often the case, we can easily obtain an upper bound on the number of independent sets. For example, consider the problem of counting subsets of $[n]$ with no $k$-term AP and recall the definition of $\mathcal{H}_1$ and $\mathcal{F}_1$ from the beginning of this section. Theorem~\ref{thm:main}, applied to this pair, implies that every subset of $[n]$ with no $k$-term AP is essentially contained in one of at most $\binom{n}{O(n^{1-1/(k-1)})}$ sets of size at most $\delta n$ each, where $\delta$ is an arbitrarily small positive constant. This easily implies that if $m \gg n^{1-1/(k-1)}$, then there are at most $\binom{2\delta n}{m}$ sets of size $m$ with no $k$-term AP. For more details, we refer the reader to Section~\ref{sec:Sz}.
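The counting step at the end of the preceding paragraph can be made explicit. With $p = n^{-1/(k-1)}$ and $\mathcal{S}$, $f$, and $g$ as provided by Theorem~\ref{thm:main}, every $m$-element set $I \subseteq [n]$ with no $k$-term AP satisfies $g(I) \subseteq I$ and $I \setminus g(I) \subseteq f(g(I))$, whence
\[
\#\left\{ I \in \binom{[n]}{m} \colon I \text{ has no } k\text{-term AP} \right\}
\,\le\, \sum_{S \in \mathcal{S}} \binom{|f(S)|}{m - |S|}
\,\le\, \sum_{s \le Cn^{1-1/(k-1)}} \binom{n}{s} \binom{\delta n}{m - s},
\]
using $|f(S)| \le \delta n$ and the trivial bound $|\{ S \in \mathcal{S} \colon |S| = s \}| \le \binom{n}{s}$. A routine computation with binomial coefficients then bounds the right-hand side by $\binom{2\delta n}{m}$ whenever $m \gg n^{1-1/(k-1)}$.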
\section{Proof of the main theorem}
\label{sec:proof-thm-main}
In this section, we shall prove Theorem~\ref{thm:main}. The main ingredient in the proof is the following proposition, which (roughly) says that Theorem~\ref{thm:main} holds in the special case when $\mathcal{F}$ is the family of all subsets of $V(\mathcal{H})$ with at least $(1-\delta)v(\mathcal{H})$ elements. Theorem~\ref{thm:main} follows by applying Proposition~\ref{prop:main} a constant number of times.
\begin{prop}
\label{prop:main}
For every integer $k$ and positive $c$, there exists a positive $\delta$ such that the following holds. Let $p \in (0,1)$ and suppose that $\mathcal{H}$ is a $k$-uniform hypergraph such that, for every $\ell \in [k]$,
\[
\Delta_\ell(\mathcal{H}) \le c \cdot p^{\ell-1} \frac{e(\mathcal{H})}{v(\mathcal{H})}.
\]
Then there exist a family $\S \subseteq \binom{V(\mathcal{H})}{\le (k-1)p \cdot v(\mathcal{H})}$ and functions $f_0 \colon \S \to \mathcal{P}(V(\mathcal{H}))$ and $g_0 \colon \mathcal{I}(\mathcal{H}) \to \S$ such that for every $I \in \mathcal{I}(\mathcal{H})$,
\[
g_0(I) \subseteq I \subseteq f_0(g_0(I)) \cup g_0(I) \qquad \text{and} \qquad \big| f_0( g_0(I) ) \big| \le (1-\delta)v(\mathcal{H}).
\]
Moreover, if for some $I, I' \in \mathcal{I}(\mathcal{H})$, $g_0(I) \subseteq I'$ and $g_0(I') \subseteq I$, then $g_0(I) = g_0(I')$.
\end{prop}
The final line of Proposition~\ref{prop:main} states that the labelling function $g_0$ exhibits a certain consistency. This property of $g_0$, which may look somewhat puzzling, will be crucial in the proof of Theorem~\ref{thm:main}.
In order to prove Proposition~\ref{prop:main}, given an independent set $I \in \mathcal{I}(\mathcal{H})$, we shall construct a sequence $(B_{k-1}, \ldots, B_q)$ of subsets of $I$ with $|B_{k-1}|, \ldots, |B_q| \le p v(\mathcal{H})$, for some $q \in [k-1]$, and use it to define a sequence $(\mathcal{H}_{k-1}, \ldots, \mathcal{H}_{r})$, where $r \in \{q, q+1\}$, of hypergraphs such that the following holds for each $i \in \{r, \ldots, k-1\}$:
\begin{enumerate}[(a)]
\item
$\mathcal{H}_i$ is an $i$-uniform hypergraph on the vertex set $V(\mathcal{H})$,
\item
$I$ is an independent set in $\mathcal{H}_i$,
\item
\label{item:DeltaHHi}
$\Delta_1(\mathcal{H}_i) \le O\big(e(\mathcal{H}_i) / v(\mathcal{H}_i)\big)$, and
\item
\label{item:eHHi}
$e(\mathcal{H}_i) \ge \Omega(p^{k-i} e(\mathcal{H}))$.
\end{enumerate}
We shall be able to do this in such a way that in the end, there will be a set $A \subseteq V(\mathcal{H})$ of size at most $(1-\delta)v(\mathcal{H})$ such that the remaining elements of $I$ (i.e., the set $I \setminus S$, where $S = B_{k-1} \cup \cdots \cup B_q$) must all lie inside $A$. If $r = 1$, then we will simply let $A$ be the set of non-edges of the $1$-uniform hypergraph $\mathcal{H}_1$; in this case, the upper bound on $|A|$ will follow from~(\ref{item:DeltaHHi}) and~(\ref{item:eHHi}). If $r > 1$, then we will obtain an appropriate $A$ while trying (and failing) to construct the hypergraph $\mathcal{H}_{r-1}$ using the hypergraph $\mathcal{H}_r$ and the set $B_r$. Crucially, this set $A$ will depend solely on $S$, that is, if for some pair $I, I' \in \mathcal{I}(\mathcal{H})$ our procedure generates $(S, A)$ and $(S', A')$, respectively, and if $S = S'$, then also $A = A'$. This will allow us to set $g_0(I) = S$ and $f_0(S) = A$.
\subsection{The Algorithm Method}
\label{sec:WSAlg}
For the remainder of this section, let us fix $k$, $c$, $p$, and $\mathcal{H}$ as in the statement of Proposition~\ref{prop:main}. Without loss of generality, we may assume that $c \ge 1$. Let $I$ be an independent set in $\mathcal{H}$. We shall describe a procedure of choosing the sets $B_i \subseteq I$ and constructing the hypergraphs $\mathcal{H}_i$ as above. This procedure, which we shall term the \emph{Scythe Algorithm}, lies at the heart of the proof of Proposition~\ref{prop:main}.
The general strategy used in the Scythe Algorithm, that of selecting a small set $S$ of high-degree vertices and using it to define a set $A$ such that $S \subseteq I \subseteq A \cup S$, dates back to the work of Kleitman and Winston~\cite{KW}, who used it to bound the number of independent sets in graphs satisfying the following local density condition: all sufficiently large vertex sets induce subgraphs with many edges. Recently, Balogh and Samotij~\cite{BS1,BS2} refined the ideas of Kleitman and Winston and obtained a bound on the number of independent sets in uniform hypergraphs satisfying a similar local density condition. Even more recently, Alon, Balogh, Morris, and Samotij~\cite{ABMS1} used similar ideas to bound the number of independent sets in `almost linear' $3$-uniform hypergraphs satisfying a more general density condition termed $(\alpha,\mathcal{B})$-stability, see Definition~\ref{defn:aB-stable}. Here, we combine, generalize, and refine all of the above approaches and make them work in the general setting of $(\mathcal{F}, \varepsilon)$-dense uniform hypergraphs.
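For readers unfamiliar with the Kleitman--Winston method, the following toy Python sketch (our own illustration, not code from~\cite{KW}) shows the graph case of the fingerprint/container idea that the Scythe Algorithm generalizes: scan the candidate vertices in max-degree order, record the first vertex of $I$ encountered, and delete the scanned prefix together with that vertex's neighbourhood. The function names and the fingerprint-size parameter $q$ are ours.

```python
def max_degree_order(adj, avail):
    """Max-degree order on G[avail]: repeatedly take a maximum-degree
    vertex of the graph induced on the not-yet-ordered vertices,
    breaking ties by vertex label (cf. the definition below)."""
    rest, order = set(avail), []
    while rest:
        u = min(rest, key=lambda x: (-len(adj[x] & rest), x))
        order.append(u)
        rest.remove(u)
    return order

def kw_fingerprint(adj, vertices, ind_set, q):
    """Given an independent set ind_set, return (S, A) with
    S a subset of ind_set, ind_set contained in S ∪ A, and |S| <= q.
    Replaying the scans shows A is determined by S alone, which is
    the consistency property the hypergraph argument also needs."""
    I, A, S = set(ind_set), set(vertices), []
    for _ in range(q):
        if not A & I:            # degenerate case: I is contained in S
            return S, set()
        order = max_degree_order(adj, A)
        j = next(i for i, u in enumerate(order) if u in I)
        S.append(order[j])
        A -= set(order[:j + 1])  # the initial segment W(u_j)
        A -= adj[order[j]]       # neighbours of u_j cannot lie in I
    return S, A
```

For example, on the $6$-cycle with $I = \{0, 2, 4\}$ and $q = 2$, the sketch produces a fingerprint $S \subseteq I$ of size at most $2$ and a container $A$ with $I \subseteq S \cup A$.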
At each step of the Scythe Algorithm, we shall order the vertices of a certain subhypergraph of $\mathcal{H}$ with respect to their degrees in that subhypergraph. For the sake of brevity and clarity of the presentation, let us make the following definition.
\begin{defn}[Max-degree order]
Given a hypergraph $\mathcal{G}$, we define the \emph{max-degree order} on $V(\mathcal{G})$ as follows:
\begin{enumerate}
\item
\label{item:mdo-a}
Fix an arbitrary total ordering of $V(\mathcal{G})$.
\item
For each $j \in \{1, \ldots, v(\mathcal{G})\}$, let $u_j$ be the maximum-degree vertex in the hypergraph $\mathcal{G}\big[ V(\mathcal{G}) \setminus \{u_1, \ldots, u_{j-1}\} \big]$; ties are broken by giving preference to vertices which come earlier in the order chosen in (\ref{item:mdo-a}).
\item
The max-degree order on $V(\mathcal{G})$ is $(u_1,\ldots,u_{v(\mathcal{G})})$.
\end{enumerate}
Finally, we write $W(u)$ to denote the initial segment of the max-degree order on $V(\mathcal{G})$ that ends with $u$, i.e., for every $j$, we let $W(u_j) = \{u_1, \ldots, u_j\}$.
\end{defn}
We remark here that the only property of the max-degree order that will be important for us is that for every $j \in \{1, \ldots, v(\mathcal{G})\}$, the degree of the vertex $u_j$ in the hypergraph $\mathcal{G}[V(\mathcal{G}) \setminus W(u_{j-1})]$ is at least as large as the average degree of this hypergraph.
We next define the numbers $\Delta_\ell^i$, where $1 \le \ell \le i \le k$, which will play a crucial role in the description and the analysis of the algorithm.
\begin{defn}\label{def:delta:is}
For every $\ell \in [k]$, let $\Delta_\ell^k = \Delta_\ell(\mathcal{H})$ and for all $i \in [k-1]$ and $\ell \in [i]$, let
\begin{equation}
\label{def:delta}
\Delta_\ell^i = \max \left\{ 2 \cdot \Delta_{\ell+1}^{i+1}, \, p \cdot \Delta_{\ell}^{i+1} \right\}.
\end{equation}
\end{defn}
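Unwinding the recursion in~\eqref{def:delta} gives a closed form which may help the reader keep track of these quantities: each of the $k - i$ steps from level $k$ down to level $i$ either increases $\ell$ at the cost of a factor $2$ or keeps $\ell$ at the cost of a factor $p$, so an easy induction yields
\[
\Delta_\ell^i \,=\, \max\Big\{\, 2^{\,j}\, p^{\,k-i-j}\, \Delta_{\ell+j}(\mathcal{H}) \,\colon\, 0 \le j \le k-i \,\Big\}.
\]
In particular, $\Delta_\ell^i \ge p^{\,k-i} \Delta_\ell(\mathcal{H})$ and $\Delta_\ell^i \ge 2^{\,k-i} \Delta_{\ell+k-i}(\mathcal{H})$.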
We use the numbers $\Delta_\ell^i$ to define the following families of sets with high degree.
\begin{defn}\label{def:Msets}
Given an $i \in [k]$, an $i$-uniform hypergraph $\mathcal{G}$ and an $\ell \in [i]$, let
\[
M_\ell^i(\mathcal{G}) = \left\{ T \in \binom{V(\mathcal{G})}{\ell} \colon \deg_\mathcal{G}(T) \ge \frac{\Delta_\ell^i}{2} \right\}.
\]
\end{defn}
Let $b = p v(\mathcal{H})$ and for each $i \in [k]$, let $c_i = (ck2^{k+1})^{i-k}$.
\begin{Prop}
The key properties that we would like the constructed hypergraph $\mathcal{H}_i$ to possess are:
\begin{enumerate}[(P1)]
\item
\label{item:WAlg-prop-1}
$\mathcal{H}_i$ is $i$-uniform and $V(\mathcal{H}_i) = V(\mathcal{H})$,
\item
\label{item:WAlg-prop-2}
$I$ is an independent set in $\mathcal{H}_i$,
\item
\label{item:WAlg-prop-3}
$\Delta_\ell(\mathcal{H}_i) \le \Delta_\ell^i$ for each $\ell \in [i]$,
\item
\label{item:WAlg-prop-4}
$e(\mathcal{H}_i) \ge c_i p^{k-i} e(\mathcal{H})$.
\end{enumerate}
\end{Prop}
Set $\mathcal{H}_k = \mathcal{H}$ and note that (P\ref{item:WAlg-prop-1})--(P\ref{item:WAlg-prop-4}) are trivially satisfied for $i = k$. The main step of the Scythe Algorithm will be a procedure that, given $\mathcal{H}_{i+1}$ and $I$ satisfying (P\ref{item:WAlg-prop-1})--(P\ref{item:WAlg-prop-4}), outputs a set $B_i \subseteq I$ of cardinality at most $b$, a set $A_i \subseteq V(\mathcal{H})$ with the property that $I \setminus B_i \subseteq A_i$, and a hypergraph $\mathcal{H}_i$ satisfying (P\ref{item:WAlg-prop-1})--(P\ref{item:WAlg-prop-3}). Moreover, if the constructed $\mathcal{H}_i$ does not satisfy (P\ref{item:WAlg-prop-4}), then we have $|A_i| \le (1-c_i)v(\mathcal{H})$. Crucially, these $A_i$ and $\mathcal{H}_i$ depend solely on $B_i$ and $\mathcal{H}_{i+1}$, that is, if on two inputs $(\mathcal{H}_{i+1}, I)$ and $(\mathcal{H}_{i+1}, I')$, the procedure outputs the same set $B_i$, it also outputs the same $A_i$ and $\mathcal{H}_i$.
\begin{WAlg}
Given an $(i+1)$-uniform hypergraph $\mathcal{H}_{i+1}$ and an independent set $I \in \mathcal{I}(\mathcal{H}_{i+1})$, set $\mathcal{A}_{i+1}^{(0)} = \mathcal{H}_{i+1}$ and let $\mathcal{H}_i^{(0)}$ be the empty hypergraph on the vertex set $V(\mathcal{H})$. For $j = 0, \ldots, b-1$, do the following:
\begin{enumerate}[(1)]
\item
\label{item:WAlg-0}
If $I \cap V\big( \mathcal{A}_{i+1}^{(j)} \big) = \emptyset$, then set $\mathcal{H}_i = \mathcal{H}_i^{(0)}$, $A_i = \emptyset$, and $B_i = \{u_0, \ldots, u_{j-1}\}$ and STOP.
\item
\label{item:WAlg-1}
Let $u_j$ be the first vertex of $I$ in the max-degree order on $V\big( \mathcal{A}_{i+1}^{(j)} \big)$.
\item \label{item:WAlg-2}
Let $\mathcal{H}_{i}^{(j+1)}$ be the hypergraph on the vertex set $V(\mathcal{H})$ defined by:
\[
\mathcal{H}_{i}^{(j+1)} = \mathcal{H}_{i}^{(j)} \cup \left\{D \in \binom{V(\mathcal{H})}{i} \colon D \cup \{u_j\} \in \mathcal{A}_{i+1}^{(j)} \right\}.
\]
\item
\label{item:WAlg-3}
Let $\mathcal{A}_{i+1}^{(j+1)}$ be the hypergraph on the vertex set $V\big( \mathcal{A}_{i+1}^{(j)} \big) \setminus W(u_j)$ defined by:\footnote{We emphasize that $W(u_j)$ is defined relative to the max-degree order on $V(\mathcal{A}_{i+1}^{(j)})$.}
\[
\mathcal{A}_{i+1}^{(j+1)} = \left\{ D \in \mathcal{A}_{i+1}^{(j)} \colon D \cap W(u_j) = \emptyset \text{ and } T \nsubseteq D \text{ for every } T \in \bigcup_{\ell=1}^{i} M_\ell^i \big( \mathcal{H}_{i}^{(j+1)} \big) \right\}.
\]
\end{enumerate}
Finally, set $\mathcal{H}_{i} = \mathcal{H}_{i}^{(b)}$, $A_i = V\big( \mathcal{A}_{i+1}^{(b)} \big)$, and $B_i = \{u_0,\ldots,u_{b-1}\}$.
\end{WAlg}
We shall now establish various properties of the Scythe Algorithm. We begin by making some basic (but key) observations.
\begin{lemma}
\label{lemma:WAlg-basics}
The following hold for every $i \in [k-1]$:
\begin{enumerate}[(a)]
\item
\label{item:WAlg-basics-0}
$\mathcal{H}_i$ is $i$-uniform and $V(\mathcal{H}_i) = V(\mathcal{H})$.
\item
\label{item:WAlg-basics-1}
If $I \in \mathcal{I}(\mathcal{H}_{i+1})$, then $I \in \mathcal{I}(\mathcal{H}_i)$.
\item
\label{item:WAlg-basics-2}
$B_i \subseteq I \subseteq A_i \cup B_i$.
\item
\label{item:WAlg-basics-3}
The hypergraph $\mathcal{H}_i$ and the set $A_i$ depend only on $\mathcal{H}_{i+1}$ and the set $B_i$.
\end{enumerate}
\end{lemma}
\begin{proof}
Property (\ref{item:WAlg-basics-0}) is trivial. To see (\ref{item:WAlg-basics-1}), simply observe that each edge of $\mathcal{H}_i$ is of the form $D \setminus \{u\}$ for some $D \in \mathcal{H}_{i+1}$ and $u \in I$. Thus, if $I$ contains an edge of $\mathcal{H}_i$, it must also contain an edge of $\mathcal{H}_{i+1}$. To see (\ref{item:WAlg-basics-2}), observe that for each $j$, $u_j$ is the first vertex of $I$ in the max-degree order on $V\big( \mathcal{A}_{i+1}^{(j)} \big)$ and hence $W(u_j) \cap I = \{u_j\}$. It follows that $B_i \subseteq I$ and that $I \setminus A_i = B_i$. Note in particular that if $A_i = \emptyset$, then $I \cap V\big( \mathcal{A}_{i+1}^{(j)}\big) = \emptyset$ for some $j \in \{0, \ldots, b\}$, which implies that $B_i = I$. Finally, to prove (\ref{item:WAlg-basics-3}), observe that all steps of the Scythe Algorithm are deterministic and that every element of $I$ that we need to observe in order to define $A_i$ and $\mathcal{H}_i$ is placed in $B_i$. More precisely, note that while choosing the vertex $u_j$, we only need to know the first vertex of $I$ in the max-degree order on $V\big(\mathcal{A}_{i+1}^{(j)}\big)$; the remaining vertices stay unobserved. Since we have $W(u_j) \cap B_i = W(u_j) \cap I = \{u_j\}$, this information can be recovered from $B_i$. Thus, at each step, the hypergraph $\mathcal{H}_i^{(j+1)}$ can be recovered from $\mathcal{H}_i^{(j)}$ and $B_i$, and the hypergraph $\mathcal{A}_{i+1}^{(j+1)}$ can be recovered from $\mathcal{A}_{i+1}^{(j)}$, $\mathcal{H}_i^{(j+1)}$ and $B_i$. Hence, a trivial inductive argument proves that, if the algorithm does not stop in step~(\ref{item:WAlg-0}), then for each $j \in \{0, \ldots, b\}$, the hypergraphs $\mathcal{H}_i^{(j)}$ and $\mathcal{A}_{i+1}^{(j)}$ are determined by $\mathcal{H}_{i+1}$ and the set $B_i$, as required. Note also that the algorithm stops in step~(\ref{item:WAlg-0}) if and only if $|B_i| < b$. If this happens, then $\mathcal{H}_i$ and $A_i$ are empty.
\end{proof}
We next show that the Scythe Algorithm exhibits a certain `consistency' while generating its output. This property will be important in the proof of Proposition~\ref{prop:main}.
\begin{lemma}
\label{lemma:WAlg-consistency}
Suppose that on inputs $(\mathcal{H}_{i+1}, I)$ and $(\mathcal{H}_{i+1}, I')$, the Scythe Algorithm outputs $(A_i, B_i, \mathcal{H}_i)$ and $(A_i', B_i', \mathcal{H}_i')$, respectively. If $B_i \subseteq I'$ and $B_i' \subseteq I$, then $(A_i, B_i, \mathcal{H}_i) = (A_i', B_i', \mathcal{H}_i')$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma:WAlg-basics}, it suffices to show that $B_i = B_i'$. Let us first consider the (degenerate) case when $\min\{|B_i|, |B_i'|\} < b$. Without loss of generality, we may assume that $|B_i| < b$. This means that, while running on $(\mathcal{H}_{i+1}, I)$, the Scythe Algorithm stopped in step~(\ref{item:WAlg-0}). By Lemma~\ref{lemma:WAlg-basics}, it follows that $B_i = I$ and hence $B_i' \subseteq B_i$, which means that $|B_i'| < b$ and therefore $B_i' = I'$. Hence, $B_i = B_i'$, as claimed. On the other hand, if $|B_i| = |B_i'| = b$ and $B_i \neq B_i'$, then there must exist some $j$ such that $u_j \neq u_j'$. Let $j$ be the smallest such index. Note that by the minimality of $j$, we have $\mathcal{A}_{i+1}^{(j)} = \big(\mathcal{A}_{i+1}^{(j)}\big)' = \mathcal{A}$. Since $u_j \neq u_j'$, one of these vertices comes earlier in the max-degree order on $V(\mathcal{A})$; without loss of generality, we may suppose that it is $u_j$. Since $B_i \subseteq I'$, it follows that $u_j \in I'$ and hence the Algorithm, while running on the input $(\mathcal{H}_{i+1}, I')$, would not pick $u_j'$ in step $j$, a contradiction. This shows that in fact $B_i = B_i'$, as required.
\end{proof}
The next lemma shows that if $\mathcal{H}_{i+1}$ satisfies (P\ref{item:WAlg-prop-3}), then so does $\mathcal{H}_i$. The lemma follows easily from the definitions of $\Delta_\ell^i$ and $M_\ell^i(\mathcal{G})$.
\begin{lemma}
\label{lemma:Delta}
If $\Delta_{\ell+1}(\mathcal{H}_{i+1}) \le \Delta_{\ell+1}^{i+1}$ for some $\ell \in [i]$, then $\Delta_\ell(\mathcal{H}_{i}) \le \Delta_\ell^i$.
\end{lemma}
\begin{proof}
The crucial observation is that if
\[
\deg_{\mathcal{H}_i^{(j)}}(T) \ge \frac{\Delta_\ell^i}{2}
\]
for some $T \in \binom{V(\mathcal{H})}{\ell}$ and $j \in [b]$, then all edges containing $T$ are removed from $\mathcal{A}_{i+1}^{(j)}$ and hence no more such edges are added to $\mathcal{H}_i$. It follows that $\deg_{\mathcal{H}_i}(T) = \deg_{\mathcal{H}_i^{(j)}}(T)$. Moreover, when we extend $\mathcal{H}_{i}^{(j-1)}$ to $\mathcal{H}_{i}^{(j)}$, we only add to it sets $D$ such that $D \cup \{u_j\} \in \mathcal{A}_{i+1}^{(j-1)} \subseteq \mathcal{H}_{i+1}$ and hence
\[
\deg_{\mathcal{H}_i^{(j)}}(T) - \deg_{\mathcal{H}_i^{(j-1)}}(T) \le \deg_{\mathcal{H}_{i+1}}(T \cup \{u_j\}) \le \Delta_{|T|+1}(\mathcal{H}_{i+1}).
\]
It follows that
\[
\Delta_\ell(\mathcal{H}_i) \le \frac{\Delta_\ell^i}{2} + \Delta_{\ell+1}(\mathcal{H}_{i+1}) \le \frac{\Delta_\ell^i}{2} + \Delta_{\ell+1}^{i+1} \le \Delta_\ell^i
\]
where the last inequality follows from~\eqref{def:delta}.
\end{proof}
Next, let us establish an easy bound on the numbers $\Delta_1^i$.
\begin{lemma}
\label{lemma:Delta-facts}
$\Delta_1^i \le c 2^k p^{k-i} \frac{e(\mathcal{H})}{v(\mathcal{H})}$ for every $i \in \{1, \ldots, k\}$.
\end{lemma}
\begin{proof}
To prove the lemma, simply note that, by the definition of $\Delta_\ell^i$, for every $i \in [k]$ and every $\ell \in [i]$,
\begin{equation}
\label{eq:Deltali}
\Delta_\ell^i = 2^dp^{k-i-d}\Delta_{d+\ell}(\mathcal{H}) \quad \text{for some $d \in \{0, \ldots, k-i\}$}.
\end{equation}
One easily proves~\eqref{eq:Deltali} by induction on $k-i$. Intuitively, $d$ in~\eqref{eq:Deltali} is the number of times that the first term in the maximum in~\eqref{def:delta} is larger than the second term when following the recursive definition of $\Delta_\ell^i$ back to $\Delta_{d+\ell}^k$.
Since $\Delta_\ell(\mathcal{H}) \le c \cdot p^{\ell - 1} \frac{e(\mathcal{H})}{v(\mathcal{H})}$, as in the statement of Proposition~\ref{prop:main}, it follows from~\eqref{eq:Deltali} that
\[
\Delta_1^i \le \max_{0 \le d \le k - i}\left\{ 2^d p^{k-i-d} \Delta_{d+1}(\mathcal{H}) \right\} \le \max_{0 \le d \le k - i}\left\{ 2^d p^{k-i-d} \cdot c p^{d} \cdot \frac{e(\mathcal{H})}{v(\mathcal{H})} \right\} \le c \cdot 2^k p^{k-i}\frac{e(\mathcal{H})}{v(\mathcal{H})},
\]
as required.
\end{proof}
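The closed form~\eqref{eq:Deltali} can also be confirmed mechanically. The following sketch (Python, for illustration only) assumes that the recursion~\eqref{def:delta} reads $\Delta_\ell^i = \max\big\{ 2\Delta_{\ell+1}^{i+1},\, p\Delta_\ell^{i+1} \big\}$ with base case $\Delta_\ell^k = \Delta_\ell(\mathcal{H})$; this reconstruction is consistent with the two bounds $\Delta_\ell^i \ge 2\Delta_{\ell+1}^{i+1}$ and $\Delta_\ell^i \ge p\Delta_\ell^{i+1}$ used in the proofs above, but the exact form of~\eqref{def:delta} should be taken from its definition.

```python
# Numerical sanity check of (eq:Deltali).  We assume here that (def:delta)
# has the form Delta_l^i = max{2 * Delta_{l+1}^{i+1}, p * Delta_l^{i+1}},
# with base case Delta_l^k = Delta_l(H); this is a reconstruction.

def delta_recursive(l, i, k, p, Delta):
    """Delta_l^i obtained by unrolling the recursion; Delta[l] = Delta_l(H)."""
    if i == k:
        return Delta[l]
    return max(2 * delta_recursive(l + 1, i + 1, k, p, Delta),
               p * delta_recursive(l, i + 1, k, p, Delta))

def delta_closed_form(l, i, k, p, Delta):
    """The maximum over d of 2^d p^(k-i-d) Delta_{d+l}(H), cf. (eq:Deltali)."""
    return max(2 ** d * p ** (k - i - d) * Delta[d + l]
               for d in range(k - i + 1))
```

Unrolling the recursion, every branch multiplies a base value $\Delta_{d+\ell}(\mathcal{H})$ by $d$ factors of $2$ and $k-i-d$ factors of $p$, so the recursive value equals the maximum over $d$; the maximising $d$ is precisely the one described after~\eqref{eq:Deltali}.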
Finally, we show that if $\mathcal{H}_{i+1}$ satisfies (P\ref{item:WAlg-prop-3}) and (P\ref{item:WAlg-prop-4}), then either $\mathcal{H}_i$ also satisfies (P\ref{item:WAlg-prop-4}) or we have $|A_i| \le (1-c_i)v(\mathcal{H})$. Recall that $c_i = (ck2^{k+1})^{i-k}$.
\begin{lemma}
\label{lemma:P4Ai}
Let $i \in [k-1]$ and suppose that $e(\mathcal{H}_{i+1}) \ge c_{i+1} p^{k - (i+1)} e(\mathcal{H})$ and that $\Delta_\ell(\mathcal{H}_{i+1}) \le \Delta_\ell^{i+1}$ for every $\ell \in [i+1]$. Then either
\begin{equation}
\label{eq:eHi-large}
e(\mathcal{H}_i) \ge \frac{p}{c \cdot 2^{k+1} k} e(\mathcal{H}_{i+1}) \ge c_i p^{k-i} e(\mathcal{H})
\end{equation}
or $|A_i| \le (1 - c_i)v(\mathcal{H})$.
\end{lemma}
\begin{proof}
If the Scythe Algorithm stops in step~(\ref{item:WAlg-0}), then $|A_i| = 0$ and there is nothing to prove. Hence, we may assume that steps~(\ref{item:WAlg-1})--(\ref{item:WAlg-3}) are executed $b$ times. Note that, for each $j \in \{0, \ldots, b-1\}$, we have
\begin{equation}
\label{eq:eHi-change}
e\big( \mathcal{H}_i^{(j+1)} \big) - e\big(\mathcal{H}_i^{(j)} \big) = \deg_{\mathcal{A}_{i+1}^{(j)}}(u_j).
\end{equation}
By the definition of the max-degree order, the right-hand side of~\eqref{eq:eHi-change} is at least the average degree of the hypergraph $\tilde{\mathcal{A}}_{i+1}^{(j)}$, the subhypergraph of $\mathcal{A}_{i+1}^{(j)}$ induced by the set $\big(V\big(\mathcal{A}_{i+1}^{(j)}\big) \setminus W(u_j)\big) \cup \{u_j\}$. Therefore, by the definition of $\mathcal{A}_{i+1}^{(j+1)}$, we have
\[
e\big( \mathcal{H}_i^{(j+1)} \big) - e\big(\mathcal{H}_i^{(j)} \big) \ge \frac{(i+1)e\big( \tilde{\mathcal{A}}_{i+1}^{(j)} \big)}{v\big( \tilde{\mathcal{A}}_{i+1}^{(j)} \big)} \ge \frac{(i+1)e\big( \mathcal{A}_{i+1}^{(j+1)} \big)}{v\big( \mathcal{H} \big)}.
\]
Hence, if $(i+1)e\big( \mathcal{A}_{i+1}^{(j+1)} \big) \ge e\big( \mathcal{H}_{i+1} \big)$ for every $j \in \{0, \ldots, b-1\}$, then
\[
e(\mathcal{H}_i) \ge \sum_{j = 0}^{b-1} \frac{(i+1)e\big( \mathcal{A}_{i+1}^{(j+1)} \big)}{v\big( \mathcal{H} \big)} \ge b \cdot \frac{e(\mathcal{H}_{i+1})}{v(\mathcal{H})} = p \cdot e(\mathcal{H}_{i+1}),
\]
since $b = p \cdot v(\mathcal{H})$, as required. Thus, we may assume that for some $j$,
\begin{equation}
\label{eq:eAb-small}
e\big( \mathcal{A}_{i+1}^{(b)} \big) \le e\big( \mathcal{A}_{i+1}^{(j+1)} \big) < \frac{e\big( \mathcal{H}_{i+1} \big)}{i+1}.
\end{equation}
Intuitively, \eqref{eq:eAb-small} means that while running the Scythe Algorithm on $\mathcal{H}_{i+1}$ and $I$, many edges are removed from $\mathcal{A}_{i+1}$ (that is, $\mathcal{H}_{i+1}$) in step~(\ref{item:WAlg-3}). This may happen for one of the following two reasons: either many of the initial segments $W(u_j)$ are long or one of the families $M_\ell^i(\mathcal{H}_i)$ of sets with high degree in $\mathcal{H}_i$ is large.
\begin{claim*}
Either
\[
\sum_{j=0}^{b-1} | W(u_j) | \ge \frac{1}{4\Delta_1^{i+1}} \cdot e(\mathcal{H}_{i+1})
\]
or for some $\ell \in [i]$,
\[
\left| M_\ell^i \big( \mathcal{H}_i \big) \right| \ge \frac{1}{2(i+1)\Delta_\ell^{i+1}} \cdot e(\mathcal{H}_{i+1}).
\]
\end{claim*}
\begin{proof}[Proof of claim]
Recall that $\mathcal{A}_{i+1}^{(0)} = \mathcal{H}_{i+1}$ and observe that for every $j \in \{0, \ldots, b-1\}$,
\begin{equation}
\label{eq:eAi-change}
e\big( \mathcal{A}_{i+1}^{(j)} \big) - e\big( \mathcal{A}_{i+1}^{(j+1)} \big) \le | W(u_j) | \cdot \Delta_1(\mathcal{H}_{i+1}) + \sum_{\ell=1}^i \left| M_\ell^i \big( \mathcal{H}_{i}^{(j+1)} \big) \setminus M_\ell^i \big( \mathcal{H}_{i}^{(j)} \big) \right| \cdot \Delta_\ell(\mathcal{H}_{i+1}).
\end{equation}
Inequality~\eqref{eq:eAi-change} follows since in step (\ref{item:WAlg-3}) of the Scythe Algorithm, we remove from $\mathcal{A}_{i+1}^{(j)}$ only the edges that contain either a vertex of $W(u_j)$ or a member of $M_\ell^i \big( \mathcal{H}_{i}^{(j+1)} \big)$ for some $\ell \in [i]$. Thus, since $\Delta_\ell(\mathcal{H}_{i+1}) \le \Delta_\ell^{i+1}$ for every $\ell \in [i]$, summing~\eqref{eq:eAi-change} over all $j$, we get
\[
e\big( \mathcal{H}_{i+1} \big) - e(\mathcal{A}_{i+1}^{(b)}) \le \sum_{j=0}^{b-1} |W(u_j)| \cdot \Delta_1^{i+1} + \sum_{\ell = 1}^i \left| M_\ell^i \big( \mathcal{H}_{i}^{(b)} \big) \right| \cdot \Delta_\ell^{i+1}.
\]
Since we assumed that $e(\mathcal{A}_{i+1}^{(b)}) < e\big( \mathcal{H}_{i+1} \big) / (i+1)$, see~\eqref{eq:eAb-small}, and $\mathcal{H}_i = \mathcal{H}_i^{(b)}$, it follows that if
\[
\sum_{j=0}^{b-1} \big| W(u_j) \big| \cdot \Delta_1^{i+1} < \frac{e(\mathcal{H}_{i+1})}{4} \le \frac{i}{2(i+1)} \cdot e(\mathcal{H}_{i+1}),
\]
then
\[
\left| M_\ell^i \big( \mathcal{H}_i \big) \right| \cdot \Delta_\ell^{i+1} \ge \frac{1}{2(i+1)} \cdot e(\mathcal{H}_{i+1}) \quad \text{for some $\ell \in [i]$},
\]
as claimed.
\end{proof}
Finally, let us deal with the two cases implied by the claim. In the remainder of the proof, we will show that if $M_\ell^i \big( \mathcal{H}_i \big)$ is large for some $\ell \in [i]$, then $e(\mathcal{H}_i)$ is large, whereas if $\sum_{j=0}^{b-1} | W(u_j) |$ is large, then $|A_i|$ is small.
\medskip
\noindent
\textbf{Case 1:} $\left| M_\ell^i \big( \mathcal{H}_i \big) \right| \ge \frac{1}{2(i+1)\Delta_\ell^{i+1}} \cdot e(\mathcal{H}_{i+1})$ for some $\ell \in [i]$.
\smallskip
Since $\deg_{\mathcal{H}_i}(T) \ge \Delta_\ell^i/2$ for every $T \in M_\ell^i \big( \mathcal{H}_i \big)$, it follows by the handshaking lemma that
\begin{equation}
\label{eq:edges2}
e(\mathcal{H}_i) = \binom{i}{\ell}^{-1} \sum_{T \in \binom{V(\mathcal{H})}{\ell}} \deg_{\mathcal{H}_{i}}(T) \ge \frac{\big| M_\ell^i(\mathcal{H}_i) \big| \cdot \Delta_\ell^i}{2\binom{i}{\ell}}.
\end{equation}
Recalling that $\Delta_\ell^i \ge p \Delta_{\ell}^{i+1}$, see~\eqref{def:delta}, we have
\[
e(\mathcal{H}_{i}) \ge \frac{e(\mathcal{H}_{i+1})}{4(i+1) \binom{i}{\ell}} \cdot \frac{\Delta_\ell^i}{\Delta_\ell^{i+1}} \ge \frac{p}{2^{i+2}(i+1)} \cdot e(\mathcal{H}_{i+1}) \ge \frac{p}{2^{k+1} k } \cdot e(\mathcal{H}_{i+1}),
\]
as required.
\medskip
\noindent
\textbf{Case 2:} $\sum_{j=0}^{b-1} |W(u_j)| \ge \frac{1}{4\Delta_1^{i+1} } \cdot e(\mathcal{H}_{i+1})$.
\medskip
We claim that in this case, $|A_i| \le (1 - c_i) v(\mathcal{H})$. Indeed, we have
\[
v(\mathcal{H}) - |A_i| = v\big( \mathcal{A}_{i+1}^{(0)} \big) - v\big( \mathcal{A}_{i+1}^{(b)} \big) = \sum_{j=0}^{b-1} |W(u_j)| \ge \frac{e(\mathcal{H}_{i+1})}{4\Delta_1^{i+1}}.
\]
Recall that $\Delta_1^{i+1} \le c 2^k p^{k-i-1} \frac{e(\mathcal{H})}{v(\mathcal{H})}$ by Lemma~\ref{lemma:Delta-facts}. Thus,
\[
v(\mathcal{H}) - |A_i| \ge \frac{p^{i+1-k}}{c 2^{k+2}} \cdot \frac{v(\mathcal{H})}{e(\mathcal{H})} \cdot e(\mathcal{H}_{i+1}) \ge c_i v(\mathcal{H}),
\]
since $e(\mathcal{H}_{i+1}) \ge c_{i+1} p^{k - (i+1)} e(\mathcal{H})$ and $c_{i+1} / (c2^{k+2}) \ge c_i$.
\end{proof}
\subsection{The proof of Proposition~\ref{prop:main} and Theorem~\ref{thm:main}}
\begin{proof}[Proof of Proposition~\ref{prop:main}]
Let $k$ be an integer and let $c$ be a positive constant. Furthermore, let $p \in (0,1)$ and let $\mathcal{H}$ be a $k$-uniform hypergraph that satisfies the assumptions of Proposition~\ref{prop:main}. Let $\delta = (ck2^{k+1})^{-k}$ and $b = pv(\mathcal{H})$. We will use the Scythe Algorithm, described in Section~\ref{sec:WSAlg}, to construct a family $\S$ and functions $f_0$ and $g_0$ as in the statement of Proposition~\ref{prop:main}. We obtain them by running the following algorithm (with $\mathcal{H}_k = \mathcal{H}$) on every independent set $I \in \mathcal{I}(\mathcal{H})$. We shall define $f_0$ somewhat implicitly by defining a function $f_0^* \colon \mathcal{I}(\mathcal{H}) \to \mathcal{P}(V(\mathcal{H}))$ that is constant on the set $g_0^{-1}(S)$ for every $S \in \S$.
\begin{Proc}
Given an $I \in \mathcal{I}(\mathcal{H})$, set $i = k - 1$ and repeat the following:
\begin{enumerate}[(1)]
\item
Apply the Scythe Algorithm to $\mathcal{H}_{i+1}$ and $I$. Suppose that it outputs $\mathcal{H}_i$, $A_i$ and $B_i$.
\item
\label{item:Alg-2}
If $|A_i| \le (1 - \delta) v(\mathcal{H})$, then set $q = i$, $r = i+1$ and STOP.
\item
If $i > 1$, then set $i = i - 1$. Otherwise, set $q = r = 1$ and STOP.
\end{enumerate}
\end{Proc}
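To illustrate the mechanism behind this procedure, here is a toy sketch for the graph case $k = 2$, with a simplified deletion rule; it is not the Scythe Algorithm, and the function name and interface are invented for this illustration. Scanning the vertices in a fixed max-degree order, it places in a fingerprint $S$ the first $b$ vertices of $I$ that it meets, discarding observed non-members of $I$ and neighbours of $S$; the surviving vertices form a set $A$ with $S \subseteq I \subseteq S \cup A$.

```python
# A toy, k = 2 analogue of the fingerprint/container mechanism (an
# illustration only; the deletion rule is a simplified invention,
# not the Scythe Algorithm).

def fingerprint(adj, I, b):
    """adj: dict mapping each vertex to its set of neighbours; I: an
    independent set of the graph; b: number of fingerprint vertices.
    Returns (S, A) with S <= I <= S | A, where S alone determines A."""
    order = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    S, excluded = [], set()
    for v in order:
        if len(S) == b:
            break
        if v in excluded:
            continue
        if v in I:
            S.append(v)
            excluded |= adj[v]   # neighbours of S lie in no independent set containing S
        else:
            excluded.add(v)      # v was observed to lie outside I
    A = {v for v in adj if v not in excluded and v not in S}
    return set(S), A
```

As in Lemma~\ref{lemma:WAlg-basics}(\ref{item:WAlg-basics-3}), the run can be replayed from $S$ alone, because an unexcluded scanned vertex belongs to $I$ exactly when it belongs to $S$; hence two independent sets with the same fingerprint receive the same set $A$.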
Let $I$ be an independent set and let us execute the above procedure (with $\mathcal{H}_k = \mathcal{H}$) on $I$. We claim that for every $i \in \{r, \ldots, k\}$, the hypergraph $\mathcal{H}_i$ satisfies properties (P\ref{item:WAlg-prop-1})--(P\ref{item:WAlg-prop-4}) defined in Section~\ref{sec:WSAlg}. This follows by induction on $k - i$. The base of the induction, the case $i = k$, follows vacuously from the definitions of $c_k$ and $\Delta_\ell^k$ for $\ell \in [k]$. The inductive step follows from Lemmas~\ref{lemma:WAlg-basics}, \ref{lemma:Delta}, and~\ref{lemma:P4Ai}. To see this, note that $|A_i| > (1-\delta)v(\mathcal{H}) \ge (1-c_i)v(\mathcal{H})$ for all $i \in \{r, \ldots, k-1\}$, so~\eqref{eq:eHi-large} in Lemma~\ref{lemma:P4Ai} always holds.
Now, let us define $g_0(I)$ and $f_0^*(I)$. Suppose first that $r > 1$ and note that in this case, the algorithm stopped in step (\ref{item:Alg-2}), which means that $|A_q| \le (1 - \delta) v(\mathcal{H})$; we set
\[
g_0(I) = B_{k-1} \cup \ldots \cup B_q \quad \text{and} \quad f_0^*(I) = A_q.
\]
On the other hand, if $r = 1$, then we set
\[
g_0(I) = B_{k-1} \cup \ldots \cup B_1 \quad \text{and} \quad f_0^*(I) = \big\{ v \in V(\mathcal{H}_1) \colon \{v\} \not\in \mathcal{H}_1 \big\}.
\]
Finally, we let
\[
\S = \{ g_0(I) \colon I \in \mathcal{I}(\mathcal{H}) \}.
\]
We will define $f_0$ by letting $f_0(S) = f_0^*(I)$ for some $I \in g_0^{-1}(S)$. We first show that this definition will not depend on the choice of $I$. In fact, we shall prove a slightly stronger statement, which also establishes the consistency property of $g_0$ stated in the final line of Proposition~\ref{prop:main}.
\begin{claim*}
Suppose that for some $I, I' \in \mathcal{I}(\mathcal{H})$, $g_0(I) \subseteq I'$ and $g_0(I') \subseteq I$. Then $g_0(I) = g_0(I')$ and $f_0^*(I) = f_0^*(I')$.
\end{claim*}
\begin{proof}[Proof of claim]
Suppose that while running the algorithm on some $I$, we obtain a sequence $(B_{k-1}, \ldots, B_q)$. Since $g_0(I)$ depends solely on $(B_{k-1}, \ldots, B_q)$ and, by Lemma~\ref{lemma:WAlg-basics}, for each $i$, the hypergraph $\mathcal{H}_i$ and the set $A_i$ depend only on $(B_{k-1}, \ldots, B_i)$, it follows that $f_0^*(I)$ also depends solely on $(B_{k-1}, \ldots, B_q)$. Hence, it suffices to show that if, while running the algorithm on some $I'$ with $B_{k-1} \cup \ldots \cup B_q \subseteq I'$, we obtain a sequence $(B_{k-1}', \ldots, B_{q'}')$ with $B_{k-1}' \cup \ldots \cup B_{q'}' \subseteq I$, then $(B_{k-1}', \ldots, B_{q'}') = (B_{k-1}, \ldots, B_q)$. To this end, let us first observe that, under the above assumptions, for every $i \in [k-1]$, if $\mathcal{H}_{i+1} = \mathcal{H}_{i+1}'$, then $B_i = B_i'$. Indeed, note that $B_i$ and $B_i'$ are the outputs of the Scythe Algorithm executed on the inputs $(\mathcal{H}_{i+1}, I)$ and $(\mathcal{H}_{i+1}', I')$, respectively. Hence, if $\mathcal{H}_{i+1} = \mathcal{H}_{i+1}'$, then since
\[
B_i \subseteq B_{k-1} \cup \ldots \cup B_q \subseteq I' \quad \text{and} \quad B_i' \subseteq B_{k-1}' \cup \ldots \cup B_{q'}' \subseteq I,
\]
Lemma~\ref{lemma:WAlg-consistency} implies that $B_i = B_i'$. Since clearly $\mathcal{H}_k = \mathcal{H}_k' = \mathcal{H}$ and, as noted before, for each $i$, $\mathcal{H}_{i+1}$ depends only on $(B_{k-1}, \ldots, B_{i+1})$, it follows that $B_i = B_i'$ for all $i$, as required.
\end{proof}
By the above claim, we can define $f_0$ by letting, for every $S \in \S$, $f_0(S) = f_0^*(I)$ for any $I \in g_0^{-1}(S)$. Finally, let us show that $\S$, $g_0$, and $f_0$, which we have just defined, satisfy the required conditions, that is, for all $I, I' \in \mathcal{I}(\mathcal{H})$,
\begin{enumerate}[(i)]
\item
\label{item:cond-S}
$|S| \le (k-1)pv(\mathcal{H})$ for every $S \in \S$,
\item
\label{item:cond-gfI}
$g_0(I) \subseteq I \subseteq f_0(g_0(I)) \cup g_0(I)$,
\item
\label{item:cond-f-size}
$|f_0(g_0(I))| \le (1-\delta)v(\mathcal{H})$,
\item
\label{item:cond-g-consistency}
$g_0(I) \subseteq I'$ and $g_0(I') \subseteq I$ imply that $g_0(I) = g_0(I')$.
\end{enumerate}
To see (\ref{item:cond-S}), simply recall that $|B_i| \le p v(\mathcal{H})$ for every $i \in [k-1]$. To see (\ref{item:cond-gfI}), note that $B_i \subseteq I \subseteq A_i \cup B_i$ for every $i \in \{q, \ldots, k-1\}$, by Lemma~\ref{lemma:WAlg-basics}, that $I$ is an independent set in $\mathcal{H}_1$ (if $r = 1$) and, crucially, that $f_0(g_0(I)) = f_0^*(I)$. To see (\ref{item:cond-f-size}), note that if $r > 1$, then $|A_q| \le (1 - \delta)v(\mathcal{H})$, see step (\ref{item:Alg-2}) of the algorithm; if $r = 1$, then since $\Delta_1(\mathcal{H}_1) \le c \cdot 2^k p^{k-1} e(\mathcal{H}) / v(\mathcal{H})$, by Lemma~\ref{lemma:Delta-facts} and property~(P\ref{item:WAlg-prop-3}), we have
$$\left| \big\{ v \in V(\mathcal{H}_1) \colon \{v\} \in \mathcal{H}_1 \big\} \right| \ge \frac{e(\mathcal{H}_1)}{\Delta_1(\mathcal{H}_1)} \ge \frac{c_1p^{k-1}e(\mathcal{H})}{\Delta_1(\mathcal{H}_1)} \ge \delta v(\mathcal{H}),$$
since $\delta \le c_1 / ( c \cdot 2^k)$ and, as $\mathcal{H}_1$ satisfies property (P\ref{item:WAlg-prop-4}), $e(\mathcal{H}_1) \ge c_1p^{k-1}e(\mathcal{H})$. Finally, (\ref{item:cond-g-consistency}) follows directly from the claim.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:main}]
The theorem follows by applying Proposition~\ref{prop:main} a bounded number of times. Given an integer $k$ and positive reals $c$ and $\varepsilon$, let $\delta = \delta_{\ref{prop:main}}(c/\varepsilon)$ and let
\[
C = (k-1) \cdot \left(\frac{1}{\delta} \log \frac{1}{\varepsilon} + 1\right).
\]
Let $V$ be a finite set and let $\mathcal{F}$ be an increasing family of subsets of $V$ such that $|A| \ge \varepsilon |V|$ for every $A \in \mathcal{F}$. Let $p \in (0,1)$ and suppose that $\mathcal{H}$ is a $k$-uniform hypergraph on the vertex set $V$ that is $(\mathcal{F}, \varepsilon)$-dense and satisfies the assumptions of the theorem, that is,
\[
\Delta_\ell(\mathcal{H}) \le c p^{\ell-1}\frac{e(\mathcal{H})}{v(\mathcal{H})}
\]
for every $\ell \in [k]$. We now show how to construct a family $\S \subseteq \binom{V(\mathcal{H})}{\le C p v(\mathcal{H})}$ and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}) \to \S$ such that
\begin{equation}
\label{eq:fg-cond}
g(I) \subseteq I \quad \text{and} \quad I \setminus g(I) \subseteq f(g(I))
\end{equation}
for every $I \in \mathcal{I}(\mathcal{H})$. Similarly as in the proof of Proposition~\ref{prop:main}, we shall define $f$ via a function $f^* \colon \mathcal{I}(\mathcal{H}) \to \mathcal{P}(V)$ that is constant on each set $g^{-1}(S)$ with $S \in \S$.
Fix some $I \in \mathcal{I}(\mathcal{H})$. Using Proposition~\ref{prop:main}, we shall construct (for some $J \le \frac{1}{\delta} \log \frac{1}{\varepsilon} + 1$) a sequence $(A_j, S_j)_{j=1}^J$ of pairs of subsets of $V$ such that for each $j \in [J]$,
\[
S_1 \cup \ldots \cup S_j \subseteq I \subseteq A_j \cup S_1 \cup \ldots \cup S_j.
\]
Moreover, $A_J \in \overline{\mathcal{F}}$ while $|S_1 \cup \ldots \cup S_J| \le Cpv(\mathcal{H})$. Crucially, the set $A_J$ will depend solely on $S_1 \cup \ldots \cup S_J$. We will let $g(I) = S_1 \cup \ldots \cup S_J$ and $f^*(I) = A_J$.
\begin{Const}
Let $S_0 = \emptyset$ and let $A_0 = V$. For $j = 0, 1, \ldots$, do the following:
\begin{enumerate}[(1)]
\item
\label{item:Const-1}
If $A_j \in \mathcal{F}$, then let $I_j = I \cap A_j$ and apply Proposition~\ref{prop:main} with $c_{\ref{prop:main}} = c/\varepsilon$ and $p_{\ref{prop:main}} = p$ to the hypergraph $\mathcal{H}[A_j]$ and the set $I_j$ to obtain sets $g_0(I_j)$ and $f_0(g_0(I_j))$ such that $g_0(I_j) \subseteq I_j$ and $I_j \setminus g_0(I_j) \subseteq f_0(g_0(I_j))$. Otherwise, if $A_j \in \overline{\mathcal{F}}$, then STOP.
\item
Let $S_{j+1} = g_0(I_j)$ and let $A_{j+1} = f_0(g_0(I_j))$.
\end{enumerate}
\end{Const}
Let us first show that the above procedure is well-defined, that is, that the assumptions of Proposition~\ref{prop:main} are satisfied each time we are in~(\ref{item:Const-1}). To this end, fix some $A \subseteq V$ and note that if $A \in \mathcal{F}$, then, since $\mathcal{H}$ is $(\mathcal{F}, \varepsilon)$-dense,
\[
\Delta_\ell(\mathcal{H}[A]) \le \Delta_\ell(\mathcal{H}) \le c p^{\ell-1}\frac{e(\mathcal{H})}{v(\mathcal{H})} \le \frac{c}{\varepsilon} \cdot p^{\ell-1}\frac{e(\mathcal{H}[A])}{v(\mathcal{H}[A])},
\]
where the last step follows since $e(\mathcal{H}[A]) \ge \varepsilon \cdot e(\mathcal{H})$ and $v(\mathcal{H}[A]) \le v(\mathcal{H})$.
Next, let us show that the above procedure terminates, thus producing a finite sequence $(A_j, S_j)$ with $j \in [J]$. To this end, simply note that by Proposition~\ref{prop:main}, we have $|A_{j+1}| \le (1-\delta)|A_j|$ for all $j$, that $A_0 = V$, and that $|A| \ge \varepsilon|V|$ for every $A \in \mathcal{F}$. Moreover, since $A_{J-1} \in \mathcal{F}$,
\[
\varepsilon |V| \le |A_{J-1}| \le (1-\delta)^{J-1} |A_0| \le \exp(-(J-1)\delta)|V|
\]
and hence $J \le \frac{1}{\delta} \log \frac{1}{\varepsilon} + 1$. It immediately follows that
\[
|g(I)| \le \sum_{j=1}^{J} |S_j| \le \sum_{j=1}^J (k-1)pv(\mathcal{H}[A_j]) \le J(k-1)pv(\mathcal{H}) \le Cpv(\mathcal{H}).
\]
Finally, let $\S = \{g(I) \colon I \in \mathcal{I}(\mathcal{H})\}$. It remains to show that for every $S \in \S$, $f^*$ is constant on $g^{-1}(S)$. Similarly as in the proof of Proposition~\ref{prop:main}, we shall prove a somewhat stronger statement.
\begin{claim*}
Suppose that for some $I, I' \in \mathcal{I}(\mathcal{H})$, $g(I) \subseteq I'$ and $g(I') \subseteq I$. Then $g(I) = g(I')$ and $f^*(I) = f^*(I')$.
\end{claim*}
\begin{proof}[Proof of claim]
Suppose that while running the above procedure on some $I$, we generate a sequence $(A_j, S_j)_{j=1}^J$. Since $A_0 = V$ and, for each $j$, $A_{j+1}$ depends solely on $A_j$ and $S_{j+1}$, both $g(I)$ and $f^*(I)$ depend solely on $(S_1, \ldots, S_J)$. Hence, it suffices to show that if, while running the above procedure on some $I'$ with $S_1 \cup \ldots \cup S_J \subseteq I'$, we generate a sequence $(A_j', S_j')_{j=1}^{J'}$ with $S_1' \cup \ldots \cup S_{J'}' \subseteq I$, then $(S_1, \ldots, S_J) = (S_1', \ldots, S_{J'}')$. To this end, it suffices to note that if $A_j = A_j'$, then, since
\[
S_{j+1} \subseteq S_1 \cup \ldots \cup S_J \subseteq I' \quad \text{and} \quad S_{j+1}' \subseteq S_1' \cup \ldots \cup S_{J'}' \subseteq I,
\]
by the consistency property of $g_0$ stated in the final line of Proposition~\ref{prop:main}, $S_{j+1} = S_{j+1}'$. Since $A_0 = A_0' = V$ and for each $j$, $A_j$ depends only on $(S_1, \ldots, S_j)$, it follows that $S_j = S_j'$ for all $j$, as required.
\end{proof}
Finally, for every $S \in \S$, we let $f(S) = f^*(I)$ for some $I \in g^{-1}(S)$. This completes the proof of Theorem~\ref{thm:main}.
\end{proof}
\section{Szemer\'edi's theorem for sparse sets}
\label{sec:Sz}
In this section, we prove Theorem~\ref{thm:Sz} and derive from it Corollary~\ref{cor:Sz}. Before we get to the proofs, let us first remark that Theorem~\ref{thm:Sz} and Corollary~\ref{cor:Sz} are both sharp up to the value of the constant $C$ in the lower bounds for $p$ and $m$. More precisely, let us make the following two observations.
\begin{enumerate}
\item
For every $\beta \in (0,1)$, there is a positive $c$ such that if $m \le cn^{1-1/(k-1)}$, then the number of $m$-subsets of~$[n]$ that contain no $k$-term AP is at least $(1-\beta)^m\binom{n}{m}$. To see this, let $\varepsilon = \beta^2$ and observe that if $c$ is sufficiently small and $m \le cn^{1-1/(k-1)}$, then the expected number of $k$-term APs in a random $(1+\varepsilon)m$-subset of~$[n]$ is smaller than $\varepsilon m/2$ and hence by Markov's inequality, at least half of all $(1+\varepsilon)m$-subsets of~$[n]$ contain a subset of size $m$ with no $k$-term AP. Hence\footnote{We assume here, without loss of generality, that $\beta$ (and hence also $\varepsilon$) is sufficiently small.}
\[
\text{\#\{$m$-subsets of $[n]$ with no $k$-term AP\}} \ge \frac{\binom{n}{(1+\varepsilon)m}}{2\binom{n}{\varepsilon m}} \ge \left(1-\sqrt{\varepsilon}\right)^m \binom{n}{m},
\]
where the final inequality holds since $\binom{n}{(1+\varepsilon)m} \ge \big( \frac{n}{2m} \big)^{\varepsilon m} \binom{n}{m}$ and $\binom{n}{\varepsilon m} \le \big( \frac{en}{\varepsilon m} \big)^{\varepsilon m}$.
\item
There is a positive constant $c$ such that if $p_n \le cn^{-1/(k-1)}$, then
\[
\Pr\big( \text{$[n]_{p_n}$ is $(\delta,k)$-Szemer{\'e}di} \big) \to 0 \text{ as $n \to \infty$}.
\]
For a (simple) proof of this statement, we refer the reader to~\cite{Sch}.
\end{enumerate}
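The two binomial estimates invoked at the end of observation (1) can be spot-checked numerically. The sketch below (Python; the sample parameters are arbitrary, and $\varepsilon m$ is assumed to be an integer) compares both sides on a logarithmic scale, to avoid astronomically large intermediate values.

```python
# Spot-check, at sample parameters, of the two estimates used above:
#   C(n, (1+eps)m) >= (n/(2m))^(eps*m) * C(n, m)   and
#   C(n, eps*m)    <= (e*n/(eps*m))^(eps*m),
# compared on a logarithmic scale.

from math import comb, log, e

def check(n, m, eps):
    em = int(eps * m)                      # eps*m, assumed integral here
    lhs = log(comb(n, m + em))             # log C(n, (1+eps)m)
    rhs = em * log(n / (2 * m)) + log(comb(n, m))
    upper = em * log(e * n / em)           # log of (e*n/(eps*m))^(eps*m)
    return lhs >= rhs and log(comb(n, em)) <= upper
```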
We shall in fact prove the following somewhat stronger version of Corollary~\ref{cor:Sz}, originally proved by Schacht~\cite{Sch} (the approach of Conlon and Gowers~\cite{CG} yields a somewhat weaker probability estimate).
\begin{cor}
\label{cor:Sz-strong}
For every $k \in \mathbb{N}$ and every $\delta \in (0,1)$, there exists a constant $C$ such that for all sufficiently large $n$, if $p \ge Cn^{-1/(k-1)}$, then
\[
\Pr\big( \text{$[n]_p$ is $(\delta,k)$-Szemer{\'e}di} \big) \ge 1 - 2\exp(-pn/8).
\]
\end{cor}
In the proofs of Theorem~\ref{thm:Sz} and Corollary~\ref{cor:Sz-strong}, and frequently in later sections, we shall need various estimates on binomial coefficients, which we list here for future reference. Let $a$, $b$, and $c$ be integers satisfying $a \ge b \ge c \ge 0$. Then the following inequalities hold:
\begin{minipage}{0.45\textwidth}
\begin{align}
\label{eq:binomial-1}
\binom{a}{b} & \le \bigg( \frac{ea}{b} \bigg)^b, \\
\addtocounter{equation}{1}
\label{eq:binomial-2}
\binom{a}{b-c} & \le \left(\frac{b}{a-b}\right)^c\binom{a}{b},
\end{align}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\begin{align}
\addtocounter{equation}{-2}
\label{eq:binomial-3}
\binom{b}{c} & \le \left(\frac{b}{a}\right)^c\binom{a}{c}, \\
\addtocounter{equation}{1}
\label{eq:binomial-4}
\binom{a}{c} & \le \left(\frac{a-c}{b-c}\right)^c\binom{b}{c}.
\end{align}
\end{minipage}
\smallskip
\noindent
We remark that each inequality above follows easily from the definition of $\binom{a}{b}$.
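Since the four inequalities are used repeatedly later on, a mechanical check over all small admissible triples may reassure the reader; the sketch below (Python) skips exactly the degenerate denominators (e.g.\ $a = b$ in \eqref{eq:binomial-2}).

```python
# Exhaustive check of inequalities (eq:binomial-1)-(eq:binomial-4) over all
# small triples a >= b >= c >= 0, avoiding the degenerate denominators.

from math import comb, e

def binomial_inequalities_hold(max_a):
    for a in range(1, max_a + 1):
        for b in range(a + 1):
            for c in range(b + 1):
                if b > 0 and comb(a, b) > (e * a / b) ** b:
                    return False                                    # (1)
                if a > b and comb(a, b - c) > (b / (a - b)) ** c * comb(a, b):
                    return False                                    # (2)
                if comb(b, c) > (b / a) ** c * comb(a, c):
                    return False                                    # (3)
                if b > c and comb(a, c) > ((a - c) / (b - c)) ** c * comb(b, c):
                    return False                                    # (4)
    return True
```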
\begin{proof}[Proof of Corollary~\ref{cor:Sz-strong}]
Fix $k \in \mathbb{N}$ and $\delta \in (0,1)$, let $\beta = \delta/(2e) \cdot e^{-1/\delta}$, and set $C = 2C_{\ref{thm:Sz}}(\beta,k)/\delta$. Assume that $p \ge Cn^{-1/(k-1)}$, let $m = \delta pn/2$, and let $X_m$ denote the number of $m$-subsets of $[n]_p$ that contain no $k$-term AP. By Theorem~\ref{thm:Sz} and~\eqref{eq:binomial-1}, we have
\begin{equation}
\label{eq:ExXm}
\Pr( X_m > 0 ) \le \mathbb{E}[X_m] \le \binom{\beta n}{m}p^m \le \left( \frac{\beta e p n}{m} \right)^m = \left( \frac{2\beta e}{\delta} \right)^m = e^{-m/\delta}.
\end{equation}
Let $\mathcal{A}$ denote the event that $[n]_p$ is \emph{not} $(\delta,k)$-Szemer{\'e}di, i.e., that $[n]_p$ contains a subset with at least $\delta |[n]_p|$ elements and no $k$-term AP. By~\eqref{eq:ExXm} and Chernoff's inequality (see, e.g., \cite[Appendix~A]{AlSp}), it follows that
\[
\Pr(\mathcal{A}) \le \Pr\left(\mathcal{A} \wedge |[n]_p| \ge \frac{pn}{2}\right) + \Pr\left( |[n]_p| < \frac{pn}{2} \right) \le \Pr(X_m > 0) + e^{-pn/8} \le 2e^{-pn/8},
\]
as required.
\end{proof}
Finally, let us show how to deduce Theorem~\ref{thm:Sz} from Theorem~\ref{thm:main}. Our proof will use the following robust version of Szemer{\'e}di's theorem, which can be proved by a simple averaging argument, originally observed by Varnavides~\cite{Varn}.
\begin{lemma}
\label{lemma:Varn}
For every positive $\delta$ and every $k \in \mathbb{N}$, there exists a positive $\varepsilon$ such that the following holds for all sufficiently large $n$. Every subset of $[n]$ with at least $\delta n$ elements contains at least $\varepsilon n^2$ $k$-term APs.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:Sz}]
Given $k \in \mathbb{N}$ and positive $\beta$, let $\delta = \min\{ \beta/2, 1/10 \}$ and let $n \in \mathbb{N}$ be sufficiently large. Let $\mathcal{H}$ be the $k$-uniform hypergraph of $k$-term APs in $[n]$, i.e., the hypergraph on the vertex set $[n]$ whose edges are all $k$-term APs in $[n]$, let $\mathcal{F}$ denote the family of subsets of $[n]$ with at least $\delta n$ elements, and let $\varepsilon = \varepsilon_{\ref{lemma:Varn}}(\delta, k)$. By Lemma~\ref{lemma:Varn}, the hypergraph $\mathcal{H}$ is $(\mathcal{F}, \varepsilon)$-dense, provided that $n$ is sufficiently large. Let $p = n^{-1/(k-1)}$ and let $c = 2k^2$. Since $e(\mathcal{H}) \ge n^2/k^2 \ge 2n^2/c$, it follows that
\[
\Delta_1(\mathcal{H}) \le k \cdot \frac{n}{k-1} \le 2n \le c \cdot p^{1-1} \frac{e(\mathcal{H})}{v(\mathcal{H})},
\]
for every $\ell \in \{2, \ldots, k-1\}$,
\[
\Delta_\ell(\mathcal{H}) \le \Delta_2(\mathcal{H}) \le \binom{k}{2} \le 2n^{1/(k-1)} \le c \cdot p^{\ell-1}\frac{e(\mathcal{H})}{v(\mathcal{H})},
\]
and $\Delta_k(\mathcal{H}) = 1 \le c \cdot p^{k-1}\frac{e(\mathcal{H})}{v(\mathcal{H})}$.
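For small parameters, these degree and edge-count estimates can also be confirmed by brute force. The sketch below (Python; $n = 60$ and $k \in \{3, 4\}$ are arbitrary small choices for illustration) builds the hypergraph of $k$-term APs in $[n]$ explicitly and checks that $e(\mathcal{H}) \ge n^2/k^2$, $\Delta_1(\mathcal{H}) \le kn/(k-1)$, and $\Delta_2(\mathcal{H}) \le \binom{k}{2}$.

```python
# Brute-force confirmation, for small n and k, of the estimates
# e(H) >= n^2/k^2, Delta_1(H) <= k*n/(k-1), Delta_2(H) <= C(k,2)
# for the hypergraph H of k-term APs in [n].

from itertools import combinations

def ap_edges(n, k):
    """All k-term arithmetic progressions in [n], as frozensets."""
    return [frozenset(a + j * d for j in range(k))
            for d in range(1, n) for a in range(1, n + 1 - (k - 1) * d)]

def ap_degree_bounds_hold(n, k):
    E = ap_edges(n, k)
    if len(E) < n * n / k ** 2:
        return False
    deg1 = max(sum(1 for edge in E if v in edge) for v in range(1, n + 1))
    if deg1 > k * n / (k - 1):
        return False
    deg2 = max(sum(1 for edge in E if {u, v} <= edge)
               for u, v in combinations(range(1, n + 1), 2))
    return deg2 <= k * (k - 1) // 2
```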
Let $C' = C_{\ref{thm:main}}(k,\varepsilon,c)$, let $C = C'/\delta$, and assume that $m \ge Cn^{1-1/(k-1)} = Cpn$. Note that if $m > \delta n/2$, then $\mathcal{I}(\mathcal{H}, m) = \emptyset$ by Szemer{\'e}di's theorem, so we may assume that $m \le \delta n /2$. Since $C'pn \le \delta m$, Theorem~\ref{thm:main} yields a family $\S \subseteq \binom{[n]}{\le C'pn} \subseteq \binom{[n]}{\le \delta m}$ and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}) \to \S$, such that for every $I \in \mathcal{I}(\mathcal{H})$,
\[
g(I) \subseteq I \quad \text{and} \quad I \setminus g(I) \subseteq f(g(I)).
\]
Therefore, using \eqref{eq:binomial-1} and \eqref{eq:binomial-2}, the number of independent sets of size $m$ in $\mathcal{H}$ can be estimated as follows:
\begin{align*}
|\mathcal{I}(\mathcal{H}, m)| & \, = \, \sum_{S \in \S} |\{ I \in \mathcal{I}(\mathcal{H}, m) \colon g(I) = S \}| \le \sum_{S \in \S} \binom{|f(S)|}{m-|S|} \\
&\, \le \, \sum_{s \le \delta m} \binom{n}{s} \binom{\delta n}{m - s} \le \sum_{s \le \delta m} \left( \frac{en}{s} \right)^s \left( \frac{m}{\delta n - m} \right)^s \binom{\delta n}{m}.
\end{align*}
Since $m \le \delta n /2$ and the function $x \mapsto (y/x)^x$ is increasing on $(0,y/e)$, it follows that
\[
|\mathcal{I}(\mathcal{H},m)| \le \sum_{s \le \delta m} \left( \frac{2em}{\delta s} \right)^s \binom{\delta n}{m} \le m \left( \frac{2e}{\delta^2} \right)^{\delta m} \binom{\delta n}{m} \le \binom{\beta n}{m},
\]
where the final inequality follows since $\binom{\delta n}{m} \le 2^{-m} \binom{2\delta n}{m}$, by~\eqref{eq:binomial-3}, and since $2^{1/\delta} > 2e / \delta^2$ if $\delta \le 1/10$. This proves Theorem~\ref{thm:Sz}.
\end{proof}
The same proof, combined with an analogue of Lemma~\ref{lemma:Varn} due to Furstenberg and Katznelson~\cite{FK}, yields the following generalization of Theorem~\ref{thm:Sz}, which strengthens both~\cite[Theorem~10.4]{CG} and~\cite[Theorem~2.3]{Sch}. Given a set $F \subseteq \mathbb{N}^\ell$, we call a set of the form $a + bF = \{a + bx \colon x \in F\}$, with $a \in \mathbb{N}^\ell$ and $b \in \mathbb{Z} \setminus \{0\}$, a \emph{homothetic copy} of $F$.
\begin{thm}
For every positive $\beta$, every $\ell \in \mathbb{N}$, and every finite configuration $F \subseteq \mathbb{N}^\ell$, there exist constants $C$ and $n_0$ such that the following holds. For every $n \in \mathbb{N}$ with $n \ge n_0$, if $m \ge Cn^{\ell - 1/(|F| - 1)}$, then there are at most
\[
\binom{\beta n^\ell}{m}
\]
$m$-subsets of $[n]^\ell$ that contain no homothetic copy of $F$.
\end{thm}
Finally, using the famous polynomial Szemer\'edi theorem of Bergelson and Leibman~\cite{BL}, the same argument gives a counting version of~\cite[Theorem~10.7]{CG}.
\begin{thm}
For every positive $\beta$ and integers $k$ and $r$, there exist constants $C$ and $n_0$ such that the following holds. For every $n \in \mathbb{N}$ with $n \ge n_0$, if $m \ge C n^{1 - 1/kr}$, then there are at most
\[
\binom{\beta n}{m}
\]
$m$-subsets of $[n]$ that contain no set of the form $\{a, a + d^r, \ldots, a + k d^r \}$.
\end{thm}
\section{Extremal results for sparse sets}
\label{sec:extremal-results}
In this section, we shall deduce from Theorem~\ref{thm:main} two versions of the general transference theorem of Schacht~\cite[Theorem~3.3]{Sch}. We remind the reader that a statement very similar to Schacht's theorem was proved independently by Conlon and Gowers~\cite{CG}. For the benefit of readers familiar with~\cite{Sch}, we state our results in the terminology of that paper.
\begin{defn}
\label{defn:alpha-dense}
Let $\mathcal{H} = (\mathcal{H}_n)_{n \in \mathbb{N}}$ be a sequence of $k$-uniform hypergraphs and let $\alpha \in [0,1)$. We say that $\mathcal{H}$ is \emph{$\alpha$-dense} if the following is true: For every positive $\delta$, there exist positive $\varepsilon$ and $n_0$ such that for every $n$ with $n \ge n_0$ and every $U \subseteq V(\mathcal{H}_n)$ with $|U| \ge (\alpha + \delta)v(\mathcal{H}_n)$, we have
\[
e(\mathcal{H}_n[U]) \ge \varepsilon e(\mathcal{H}_n).
\]
\end{defn}
Let us remark here that Definition~\ref{defn:Feps-dense} is a generalization of Definition~\ref{defn:alpha-dense}. Indeed, if $\mathcal{F}_\delta$ denotes the collection of all subsets of $V(\mathcal{H}_n)$ with at least $(\alpha + \delta)v(\mathcal{H}_n)$ elements, then a sequence $\mathcal{H}$ of hypergraphs is $\alpha$-dense if and only if for every positive $\delta$, there exists a positive $\varepsilon$ such that for all sufficiently large $n$, the hypergraph $\mathcal{H}_n$ is $(\mathcal{F}_\delta, \varepsilon)$-dense.
We start with the `random' version of our extremal result, which was originally proved by Schacht~\cite[Theorem~3.3]{Sch}.
\begin{thm}
\label{thm:Sch-weak}
Let $\mathcal{H}$ be a sequence of $k$-uniform hypergraphs, let $\alpha \in [0,1)$, and let $c$ be a positive constant. Suppose that $\mathbf{p} \in [0,1]^\mathbb{N}$ is a sequence of probabilities such that for all sufficiently large $n \in \mathbb{N}$, and for every $\ell \in [k]$, we have
\begin{equation}
\label{eq:Delta-ell}
\Delta_\ell(\mathcal{H}_n) \le c \cdot p_n^{\ell-1} \frac{e(\mathcal{H}_n)}{v(\mathcal{H}_n)}.
\end{equation}
If $\mathcal{H}$ is $\alpha$-dense, then the following holds. For every positive $\delta$, there exists a constant $C$ such that if $q_n \ge Cp_n$ and $q_nv(\mathcal{H}_n) \to \infty$ as $n \to \infty$, then a.a.s.
\[
\alpha\big( \mathcal{H}_n[V(\mathcal{H}_n)_{q_n}] \big) \le (\alpha + \delta) q_n v(\mathcal{H}_n).
\]
\end{thm}
We note that the probability bounds implicit in the `asymptotically almost surely' statement that we obtain are, as in~\cite{Sch}, optimal: they decay exponentially in $p_n v(\mathcal{H}_n)$.
\begin{remark}
We remark that the only difference between Theorem~\ref{thm:Sch-weak} and \cite[Theorem~3.3]{Sch} is the set of assumptions placed on the hypergraph sequence $\mathcal{H}$. It turns out that this difference is only superficial, since condition~\eqref{eq:Delta-ell} is essentially equivalent to the condition that $\mathcal{H}$ is $(K,\mathbf{p})$-bounded (see~\cite{Sch}). One easily checks that if $\mathcal{H}_n$ satisfies~\eqref{eq:Delta-ell} for sufficiently large $n$, then $\mathcal{H}$ is $(K,\mathbf{p})$-bounded for some constant $K$ that depends only on $c$ and $k$. Conversely, if $\mathcal{H}$ is $(K,\mathbf{p})$-bounded, then for all sufficiently large $n$, there is an $\mathcal{H}_n' \subseteq \mathcal{H}_n$ with at least $(1-\varepsilon)e(\mathcal{H}_n)$ edges that satisfies~\eqref{eq:Delta-ell} for some constant $c$ that depends only on $\varepsilon$, $k$, and $K$. One obtains such an $\mathcal{H}_n'$ by repeatedly deleting from $\mathcal{H}_n$ edges that contain an $\ell$-set $T$ with $\deg_{\mathcal{H}_n}(T) > c \cdot p_n^{\ell-1} e(\mathcal{H}_n)/v(\mathcal{H}_n)$. Finally, note that, trivially, if $\mathcal{H}_n$ is $(\mathcal{F},2\varepsilon)$-dense for some family $\mathcal{F} \subseteq \mathcal{P}(V(\mathcal{H}_n))$, then every $\mathcal{H}_n' \subseteq \mathcal{H}_n$ with $e(\mathcal{H}_n') \ge (1-\varepsilon)e(\mathcal{H}_n)$ is $(\mathcal{F},\varepsilon)$-dense.
\end{remark}
Our methods also yield the following `counting' analogue of Theorem~\ref{thm:Sch-weak}. This generalizes Theorem~\ref{thm:Sz}, and does not follow from the methods of~\cite{CG} or~\cite{Sch}. In the case $\alpha = 0$, it can be thought of as a strengthening of Theorem~\ref{thm:Sch-weak}, see Corollary~\ref{cor:Sz}.
\begin{thm}
\label{thm:Sch-counting}
Let $\mathcal{H}$ be a sequence of $k$-uniform hypergraphs, let $\alpha \in [0,1)$, and let $c$ be a positive constant. Suppose that $\mathbf{p} \in [0,1]^\mathbb{N}$ is a sequence of probabilities such that for all sufficiently large $n \in \mathbb{N}$, and for every $\ell \in [k]$, we have
\[
\Delta_\ell(\mathcal{H}_n) \le c \cdot p_n^{\ell-1} \frac{e(\mathcal{H}_n)}{v(\mathcal{H}_n)}.
\]
If $\mathcal{H}$ is $\alpha$-dense, then the following holds. For every positive $\delta$, there exists a constant $C$ such that for all sufficiently large $n$, if $m \ge C p_n v(\mathcal{H}_n)$, then
\[
|\mathcal{I}( \mathcal{H}_n, m )| \le \binom{(\alpha + \delta) v(\mathcal{H}_n)}{m}.
\]
\end{thm}
\begin{proof}[Proof of Theorem~\ref{thm:Sch-weak}]
Let $\alpha \in [0,1)$, let $k \in \mathbb{N}$, let $\mathbf{p} \in [0,1]^\mathbb{N}$, let $c \in (0,\infty)$, and let $\mathcal{H}$ be a sequence of $k$-uniform hypergraphs as in the statement of Theorem~\ref{thm:Sch-weak}. Furthermore, suppose that $\mathcal{H}$ is $\alpha$-dense and fix some positive $\delta$; without loss of generality, we may assume that $\delta$ is sufficiently small. Let $n \in \mathbb{N}$ be sufficiently large, let $\delta' = \delta/3$, and let $\mathcal{F}$ denote the family of all subsets of $V(\mathcal{H}_n)$ with at least $(\alpha + \delta')v(\mathcal{H}_n)$ elements. Since $\mathcal{H}_n$ is $\alpha$-dense, it follows that $\mathcal{H}_n$ is $(\mathcal{F}, \varepsilon)$-dense for some small positive $\varepsilon$ that does not depend on $n$. Let $C' = C_{\ref{thm:main}}(k,\varepsilon,c)$. By Theorem~\ref{thm:main}, there exist a family $\S \subseteq \binom{V(\mathcal{H}_n)}{\le C'p_nv(\mathcal{H}_n)}$ and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}_n) \to \S$ such that
\[
g(I) \subseteq I \qquad \text{and} \qquad I \setminus g(I) \subseteq f(g(I))
\]
for every $I \in \mathcal{I}(\mathcal{H}_n)$. Let $C = C'/\delta^3$ and assume that $q_n \ge Cp_n$. Let $m = (\alpha+\delta) q_n v(\mathcal{H}_n)$ and, for the sake of brevity, let us write $V = V(\mathcal{H}_n)$ and $q = q_n$. Observe that
\begin{align}
\label{eq:Pr-alpha-H-large}
\Pr\Big(\alpha(\mathcal{H}_n[V_q]) \ge m\Big) & = \Pr\Big(\text{$I \subseteq V_q$ for some $I \in \mathcal{I}(\mathcal{H}_n,m)$}\Big) \\
\nonumber
& \le \sum_{S \in \S} \Pr\Big(\text{$I \subseteq V_{q}$ for some $I \in \mathcal{I}(\mathcal{H}_n,m)$ such that $g(I) = S$}\Big).
\end{align}
Fix an $S \in \S$ and let $\mathcal{I}'_S = \{I \in \mathcal{I}(\mathcal{H}_n,m) \colon g(I) = S\}$. We estimate each summand on the right-hand side of~\eqref{eq:Pr-alpha-H-large} as follows:
\begin{equation}
\label{eq:Pr-IS}
\Pr\Big(\text{$I \subseteq V_q$ for some $I \in \mathcal{I}'_S$}\Big) \le \Pr\big(S \subseteq V_q\big) \cdot \Pr\left(\big|V_q \cap f(S)\big| \ge m - |S| \right).
\end{equation}
To see the above inequality, simply note that for every $I \in \mathcal{I}'_S$, we have $I \setminus S \subseteq f(S)$.
Now, since $m = (\alpha+3\delta') q v(\mathcal{H}_n)$ and $S \in \S$, we have
\[
|S| \le C'p_nv(\mathcal{H}_n) \le \delta^3 q v(\mathcal{H}_n) \le \delta' q v(\mathcal{H}_n)
\]
and hence $m - |S| \ge (\alpha+2\delta')qv(\mathcal{H}_n)$. On the other hand, since $|f(S)| < (\alpha + \delta')v(\mathcal{H}_n)$, because $f(S) \in \overline{\mathcal{F}}$, we have
\[
\mathbb{E}\big[|V_q \cap f(S)|\big] \le (\alpha + \delta') q v(\mathcal{H}_n).
\]
Hence, by Chernoff's inequality, we have
\begin{equation}
\label{eq:Pr-Vq-fS}
\Pr\left(\big|V_q \cap f(S)\big| \ge m - |S| \right) \le \exp\left(-\frac{(\delta')^2qv(\mathcal{H}_n)}{4}\right) = \exp\left(-\frac{\delta^2qv(\mathcal{H}_n)}{36}\right).
\end{equation}
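Here and below we use Chernoff's inequality in the following standard form: if $X$ is a binomial random variable and $t > 0$, then
\[
\Pr\big( X \ge \mathbb{E}[X] + t \big) \le \exp\left( - \frac{t^2}{2\big(\mathbb{E}[X] + t/3\big)} \right).
\]
In~\eqref{eq:Pr-Vq-fS}, we apply it to $X = |V_q \cap f(S)|$ with $t = \delta' q v(\mathcal{H}_n)$, noting that $\mathbb{E}[X] + t/3 \le 2qv(\mathcal{H}_n)$.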
Finally, note that since $|S| \le \delta^3 q v(\mathcal{H}_n)$ for every $S \in \S$, and using~\eqref{eq:binomial-1},
\begin{equation}
\label{eq:Pr-Ssum}
\sum_{S \in \S} \Pr\big( S \subseteq V_q \big) \le \sum_{s = 0}^{\delta^3 q v(\mathcal{H}_n)} \binom{v(\mathcal{H}_n)}{s} q^s \le v(\mathcal{H}_n) \cdot \left( \frac{e}{\delta^3} \right)^{\delta^3 q v(\mathcal{H}_n)}.
\end{equation}
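In more detail, the second inequality in~\eqref{eq:Pr-Ssum} can be seen as follows: by~\eqref{eq:binomial-1}, each summand satisfies
\[
\binom{v(\mathcal{H}_n)}{s} q^s \le \left( \frac{e q v(\mathcal{H}_n)}{s} \right)^s,
\]
and the right-hand side is increasing in $s$ on $\big(0, qv(\mathcal{H}_n)\big)$, so every term is at most its value at $s = \delta^3 q v(\mathcal{H}_n)$; since the sum has at most $v(\mathcal{H}_n)$ terms, the claimed bound follows.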
Putting~\eqref{eq:Pr-alpha-H-large}, \eqref{eq:Pr-IS}, \eqref{eq:Pr-Vq-fS}, and~\eqref{eq:Pr-Ssum} together, we obtain
\[
\Pr\Big(\alpha(\mathcal{H}_n[V_q]) \ge m\Big) \le \sum_{S \in \S} \Pr\big(S \subseteq V_q\big) \exp\left( -\frac{\delta^2qv(\mathcal{H}_n)}{36} \right) \le \exp\big( - \delta^3 q v(\mathcal{H}_n) \big),
\]
as required.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Sch-counting}]
Let $\alpha \in [0,1)$, let $k \in \mathbb{N}$, let $\mathbf{p} \in [0,1]^\mathbb{N}$, let $c \in (0,\infty)$, and let $\mathcal{H}$ be a sequence of $k$-uniform hypergraphs as in the statement of Theorem~\ref{thm:Sch-weak}. Furthermore, suppose that $\mathcal{H}$ is $\alpha$-dense and fix some positive $\delta$. Let $n$ be sufficiently large, let $\delta' = \delta/2$, and let $\mathcal{F}$ denote the family of all subsets of $V(\mathcal{H}_n)$ with at least $(\alpha + \delta')v(\mathcal{H}_n)$ elements. Since $\mathcal{H}_n$ is $\alpha$-dense, it follows that $\mathcal{H}_n$ is $(\mathcal{F}, \varepsilon)$-dense for some small positive $\varepsilon$ that does not depend on $n$. Let $C' = C_{\ref{thm:main}}(k,\varepsilon,c)$. By Theorem~\ref{thm:main}, there exist a family $\S \subseteq \binom{V(\mathcal{H}_n)}{\le C'p_nv(\mathcal{H}_n)}$ and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}_n) \to \S$ such that
\[
g(I) \subseteq I \qquad \text{and} \qquad I \setminus g(I) \subseteq f(g(I))
\]
for every $I \in \mathcal{I}(\mathcal{H}_n)$. Let $C = C'/\delta^2$ and assume that $m \ge C p_n v(\mathcal{H}_n)$. Fix an $S \in \S$, let $\mathcal{I}_S = \{I \in \mathcal{I}(\mathcal{H}_n, m) \colon g(I) = S\}$, and note for future reference that
\begin{equation}
\label{eq:S-Sch-count}
|S| \le C'p_nv(\mathcal{H}_n) \le \delta^2 m.
\end{equation}
Since $f(S) \in \overline{\mathcal{F}}$, we have $|f(S)| < (\alpha + \delta')v(\mathcal{H}_n)$. Therefore,
\[
|\mathcal{I}_S| \le \binom{|f(S)|}{m - |S|} \le \binom{(\alpha + \delta')v(\mathcal{H}_n)}{m - |S|}.
\]
To see the above inequality, simply note that for every $I \in \mathcal{I}_S$, we have $I \setminus S \subseteq f(S)$.
It follows, using~\eqref{eq:binomial-3} and~\eqref{eq:binomial-2}, that
\begin{equation} \label{eq:IS-Sch-count}
|\mathcal{I}_S| \le \binom{(\alpha + \delta')v(\mathcal{H}_n)}{m - |S|} \le \bigg( \frac{\alpha + \delta'}{\alpha + \delta} \bigg)^{m - |S|} \left( \frac{m}{(\alpha + \delta) v(\mathcal{H}_n) - m} \right)^{|S|} \binom{(\alpha + \delta)v(\mathcal{H}_n)}{m}.
\end{equation}
Now, if $m \ge (\alpha+\delta')v(\mathcal{H}_n)$, then every $m$-subset of $V(\mathcal{H}_n)$ belongs to $\mathcal{F}$ and hence there is no independent set of size $m$. We may therefore assume that $m < (\alpha + \delta')v(\mathcal{H}_n) = (\alpha + \delta/2)v(\mathcal{H}_n)$. Setting $s = |S|$, we obtain
\[
\binom{v(\mathcal{H}_n)}{s} \cdot |\mathcal{I}_S| \le \bigg( \frac{\alpha + \delta'}{\alpha + \delta} \bigg)^{m/2} \left( \frac{e v(\mathcal{H}_n)}{s} \cdot \frac{2m}{\delta v(\mathcal{H}_n)} \right)^{s} \binom{(\alpha + \delta)v(\mathcal{H}_n)}{m} \le e^{-\delta^2 m} \binom{(\alpha+\delta) v(\mathcal{H}_n)}{m},
\]
since $s \le \delta^2 m$, by~\eqref{eq:S-Sch-count}, and provided that $\delta$ is sufficiently small. It follows that
\[
|\mathcal{I}(\mathcal{H}_n,m)| = \sum_{S \in \S} |\mathcal{I}_S| \le \sum_{s = 0}^{\delta^2 m} \binom{v(\mathcal{H}_n)}{s} \max\big\{ |\mathcal{I}_S| \colon |S| = s \big\} \le \binom{(\alpha+\delta) v(\mathcal{H}_n)}{m},
\]
as claimed.
\end{proof}
\section{Stability results for sparse sets}
\label{sec:stability-results}
In this section, we shall deduce from Theorem~\ref{thm:main} two versions of the general transference theorem for stability results proved by Conlon and Gowers~\cite{CG}. As in Section~\ref{sec:extremal-results}, we state our results in the terminology of Schacht~\cite{Sch}. We remark here that in parallel to this work, Schacht's method was adapted to yield sparse random analogues of stability statements by Samotij~\cite{Sam}. The main result of this section is most easily compared with~\cite[Theorem~3.4]{Sam}. We begin by recalling the following definition from~\cite{ABMS1}.
\begin{defn}
\label{defn:aB-stable}
Let $\mathcal{H}$ be a sequence of $k$-uniform hypergraphs, let $\alpha$ be a positive real, and let $\mathcal{B}$ be a sequence of sets with $\mathcal{B}_n \subseteq \mathcal{P}(V(\mathcal{H}_n))$. We say that $\mathcal{H}$ is \emph{$(\alpha, \mathcal{B})$-stable} if for every positive $\delta$, there exist positive $\varepsilon$ and $n_0$ such that the following holds. For every $n$ with $n \ge n_0$ and every $U \subseteq V(\mathcal{H}_n)$ with $|U| \ge (\alpha - \varepsilon)v(\mathcal{H}_n)$, we have either $e(\mathcal{H}_n[U]) \ge \varepsilon e(\mathcal{H}_n)$ or $|U \setminus B| \le \delta v(\mathcal{H}_n)$ for some $B \in \mathcal{B}_n$.
\end{defn}
Roughly speaking, a sequence $\mathcal{H}$ of hypergraphs is $(\alpha, \mathcal{B})$-stable if for every $A \subseteq V(\mathcal{H}_n)$ that is almost as large as $\alpha v(\mathcal{H}_n)$, the set $A$ is either very `close' to some extremal set $B \in \mathcal{B}_n$ or it contains `many' (a positive fraction of all) edges of $\mathcal{H}_n$. Note that in many natural settings, such a property does hold, for example, as a~consequence of the~Erd{\H o}s-Simonovits stability theorem~\cite{ES1, ES2} and the~removal lemma for graphs.
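For illustration, one standard instance of Definition~\ref{defn:aB-stable} is the triangle case: let $\mathcal{H}_n$ be the $3$-uniform hypergraph with vertex set $E(K_n)$ whose edges are the edge sets of the triangles of $K_n$, let $\alpha = 1/2$, and let
\[
\mathcal{B}_n = \big\{ E(B) \colon B \text{ is a complete bipartite subgraph of } K_n \big\}.
\]
Then $\mathcal{H}$ is $(1/2, \mathcal{B})$-stable: a set $U \subseteq E(K_n)$ with $|U| \ge (1/2 - \varepsilon)\binom{n}{2}$ that spans fewer than $\varepsilon e(\mathcal{H}_n)$ triangles can be made triangle-free by removing $o(n^2)$ edges, by the removal lemma, and the resulting graph is then within $o(n^2)$ edges of a complete bipartite graph, by the Erd{\H o}s--Simonovits stability theorem.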
We again start with the `random' version of our stability result, which was originally proved in~\cite{Sam}.
\begin{thm}
\label{thm:stability-random}
Let $\mathcal{H}$ be a sequence of $k$-uniform hypergraphs, let $\alpha \in (0,1)$, and let $c$ be a positive constant. Let $\mathbf{p}$ be a sequence of probabilities such that for all sufficiently large $n \in \mathbb{N}$ and every $\ell \in [k]$,
\[
\Delta_\ell(\mathcal{H}_n) \le c \cdot p_n^{\ell-1} \frac{e(\mathcal{H}_n)}{v(\mathcal{H}_n)}
\]
and let $\mathcal{B}$ be a sequence of sets with $\mathcal{B}_n \subseteq \mathcal{P}(V(\mathcal{H}_n))$.
If $\mathcal{H}$ is $(\alpha, \mathcal{B})$-stable, then the following holds. For every positive $\delta$, there exist $\varepsilon$ and $C$ such that if $q_n \ge Cp_n$ and $q_n v(\mathcal{H}_n) \to \infty$ as $n \to \infty$, then a.a.s.~every independent set $I \subseteq V(\mathcal{H}_n)_{q_n}$ with $|I| \ge (\alpha - \varepsilon) q_n v(\mathcal{H}_n)$ satisfies $|I \setminus B| < \delta q_n v(\mathcal{H}_n)$ for some $B \in \mathcal{B}_n$.
\end{thm}
The following theorem, a `counting' analogue of Theorem~\ref{thm:stability-random}, is our main stability result. A simple version of it, applicable to $3$-uniform hypergraphs with $\Delta_2(\mathcal{H}_n) = O(1)$, was proved in~\cite{ABMS1} and used in~\cite{ABMS1,ABMS2} to count sum-free subsets in Abelian groups and in the set $[n]$.
\begin{thm}
\label{thm:stability-counting}
Let $\mathcal{H}$ be a sequence of $k$-uniform hypergraphs, let $\alpha \in (0,1)$, and let $c$ be a positive constant. Let $\mathbf{p}$ be a sequence of probabilities such that for all sufficiently large $n \in \mathbb{N}$ and every $\ell \in [k]$,
\[
\Delta_\ell(\mathcal{H}_n) \le c \cdot p_n^{\ell-1} \frac{e(\mathcal{H}_n)}{v(\mathcal{H}_n)}
\]
and let $\mathcal{B}$ be a sequence of sets with $\mathcal{B}_n \subseteq \mathcal{P}(V(\mathcal{H}_n))$.
If $\mathcal{H}$ is $(\alpha, \mathcal{B})$-stable, then the following holds. For every positive $\delta$, there exist $\varepsilon$ and $C$ such that if $m \ge Cp_nv(\mathcal{H}_n)$, then there are at most
\[
(1-\varepsilon)^m \binom{\alpha v(\mathcal{H}_n)}{m}
\]
independent sets $I \in \mathcal{I}(\mathcal{H}_n,m)$ such that $|I \setminus B| \ge \delta m$ for every $B \in \mathcal{B}_n$.
\end{thm}
\begin{proof}[Proof of Theorem~\ref{thm:stability-random}]
The proof is similar to the proof of Theorem~\ref{thm:Sch-weak}. Let $k \in \mathbb{N}$, $\alpha \in (0,1)$, $\mathbf{p} \in [0,1]^\mathbb{N}$, $c \in (0,\infty)$, and $\mathcal{H}$ and $\mathcal{B}$ be as in the statement of Theorem~\ref{thm:stability-random}. Furthermore, suppose that $\mathcal{H}$ is $(\alpha, \mathcal{B})$-stable and fix some small positive $\delta$. Let $\varepsilon$ be a small positive constant, let $n$ be sufficiently large, let $\delta' = \delta/3$ and $\varepsilon' = 3\varepsilon$, and set
\[
\mathcal{F} = \big\{ A \subseteq V(\mathcal{H}_n) \colon |A| \ge (\alpha - \varepsilon')v(\mathcal{H}_n) \text{ and } |A \setminus B| \ge \delta' v(\mathcal{H}_n) \text{ for every } B \in \mathcal{B}_n \big\}.
\]
Since $\mathcal{H}$ is $(\alpha, \mathcal{B})$-stable, it follows that $\mathcal{H}_n$ is $(\mathcal{F}, \varepsilon)$-dense, provided that $\varepsilon$ is sufficiently small. Let $C' = C_{\ref{thm:main}}(k,\varepsilon,c)$. By Theorem~\ref{thm:main}, there exist a family $\S \subseteq \binom{V(\mathcal{H}_n)}{\le C'p_nv(\mathcal{H}_n)}$ and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}_n) \to \S$ such that
\[
g(I) \subseteq I \qquad \text{and} \qquad I \setminus g(I) \subseteq f(g(I))
\]
for every $I \in \mathcal{I}(\mathcal{H}_n)$. Let $C = C'/\varepsilon^3$ and assume that $q_n \ge Cp_n$. Let $m = (\alpha-\varepsilon) q_n v(\mathcal{H}_n)$ and, for the sake of brevity, let us write $V = V(\mathcal{H}_n)$ and $q = q_n$. Let
\[
\mathcal{I}' = \big\{ I \in \mathcal{I}(\mathcal{H}_n) \colon |I| \ge m \text{ and } |I \setminus B| \ge \delta q v(\mathcal{H}_n) \text{ for every } B \in \mathcal{B}_n \big\}
\]
and let $\mathcal{A}$ denote the event that $\mathcal{H}_n[V_q]$ contains an independent set $I \in \mathcal{I}'$. We are required to prove that $\Pr(\mathcal{A})$ tends to $0$ as $n \to \infty$.
Observe first that
\begin{equation}
\label{eq:Pr-not-stable}
\Pr(\mathcal{A}) \le \sum_{S \in \S} \Pr\Big(\text{$I \subseteq V_q$ for some $I \in \mathcal{I}'$ such that $g(I) = S$}\Big).
\end{equation}
Fix an $S \in \S$, let $\mathcal{I}'_S = \{I \in \mathcal{I}' \colon g(I) = S\}$, and note for future reference that
\begin{equation}
\label{eq:S}
|S| \le C'p_nv(\mathcal{H}_n) \le \varepsilon^3 q v(\mathcal{H}_n).
\end{equation}
We claim that
\begin{equation}
\label{eq:Pr-not-stable-S}
\Pr\Big(\text{$I \subseteq V_q$ for some $I \in \mathcal{I}'_S$}\Big) \le \Pr\big(S \subseteq V_q\big) \cdot \exp\left( -\frac{\varepsilon^2 q v(\mathcal{H}_n)}{4} \right).
\end{equation}
In order to prove~\eqref{eq:Pr-not-stable-S}, recall that since $f(S) \in \overline{\mathcal{F}}$, we either have $|f(S)| < (\alpha - \varepsilon')v(\mathcal{H}_n)$ or $|f(S) \setminus B| < \delta' v(\mathcal{H}_n)$ for some $B \in \mathcal{B}_n$. We therefore consider two cases.
\medskip
\noindent
\textbf{Case 1:} $|f(S)| < (\alpha - \varepsilon')v(\mathcal{H}_n)$.
\medskip
\noindent
We bound the left-hand side of~\eqref{eq:Pr-not-stable-S} as follows:
\begin{equation}
\label{eq:Pr-I'S-1}
\Pr\Big(\text{$I \subseteq V_q$ for some $I \in \mathcal{I}'_S$}\Big) \le \Pr\big(S \subseteq V_q\big) \cdot \Pr\left(\big|V_q \cap f(S)\big| \ge m - |S| \right).
\end{equation}
In order to justify the above inequality, note that for every $I \in \mathcal{I}'_S$, we have $I \setminus S \subseteq f(S)$. Recall that $\varepsilon' = 3\varepsilon$. Since $m - |S| \ge (\alpha - 2\varepsilon)qv(\mathcal{H}_n)$, by~\eqref{eq:S}, and
\[
\mathbb{E}[|V_q \cap f(S)|] \le (\alpha - \varepsilon')qv(\mathcal{H}_n) = (\alpha - 3\varepsilon) q v(\mathcal{H}_n),
\]
Chernoff's inequality gives
\begin{equation}
\label{eq:Pr-Vq-fS-stab-1}
\Pr\left(\big|V_q \cap f(S)\big| \ge m - |S| \right) \le \exp\left(-\frac{\varepsilon^2 q v(\mathcal{H}_n)}{4}\right).
\end{equation}
Combining~\eqref{eq:Pr-I'S-1} and~\eqref{eq:Pr-Vq-fS-stab-1}, we obtain~\eqref{eq:Pr-not-stable-S}, as required.
\medskip
\noindent
\textbf{Case 2:} $|f(S) \setminus B| < \delta' v(\mathcal{H}_n)$ for some $B \in \mathcal{B}_n$.
\medskip
\noindent
We estimate the left-hand side of~\eqref{eq:Pr-not-stable-S} as follows:
\[
\Pr\Big(\text{$I \subseteq V_q$ for some $I \in \mathcal{I}'_S$}\Big) \le \Pr\big(S \subseteq V_q\big) \cdot \Pr\Big(\big|V_q \cap (f(S) \setminus B)\big| \ge \delta q v(\mathcal{H}_n) - |S| \Big).
\]
This follows from the definition of $\mathcal{I}'$ and the fact that $I \setminus S \subseteq f(S)$ for every $I \in \mathcal{I}'_S$. Since $|f(S) \setminus B| < \delta' v(\mathcal{H}_n)$, we have
\[
\mathbb{E}\big[|V_q \cap (f(S) \setminus B)|\big] < \delta' q v(\mathcal{H}_n),
\]
whereas $\delta q v(\mathcal{H}_n) - |S| \ge 2\delta' q v(\mathcal{H}_n)$ by~\eqref{eq:S} and since $\delta = 3\delta'$. By Chernoff's inequality, it follows that
\[
\Pr\left(\big|V_q \cap (f(S) \setminus B)\big| \ge 3\delta' q v(\mathcal{H}_n) - |S| \right) \le \exp\left(-\frac{(\delta')^2 q v(\mathcal{H}_n)}{4}\right) \le \exp\left(- \frac{\varepsilon^2 q v(\mathcal{H}_n)}{4}\right)
\]
since $\varepsilon$ was chosen sufficiently small. Thus~\eqref{eq:Pr-not-stable-S} follows in this case as well.
\medskip
Finally, note that, since $|S| \le \varepsilon^3 q v(\mathcal{H}_n)$ for every $S \in \S$, as in~\eqref{eq:Pr-Ssum}, we have
\begin{equation}
\label{eq:Pr-Ssum-stable}
\sum_{S \in \S} \Pr\big(S \subseteq V_q\big) \le \sum_{s = 0}^{\varepsilon^3 q v(\mathcal{H}_n)} \binom{v(\mathcal{H}_n)}{s} q^s \le v(\mathcal{H}_n) \cdot \left(\frac{e}{\varepsilon^3} \right)^{\varepsilon^3 q v(\mathcal{H}_n)} .
\end{equation}
Putting~\eqref{eq:Pr-not-stable},~\eqref{eq:Pr-not-stable-S}, and~\eqref{eq:Pr-Ssum-stable} together, we obtain
\[
\Pr(\mathcal{A}) \le \sum_{S \in \S} \Pr\big(S \subseteq V_q\big) \exp\left(-\frac{\varepsilon^2 q v(\mathcal{H}_n)}{4}\right) \le \exp(-\varepsilon^3 q v(\mathcal{H}_n)),
\]
as required.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:stability-counting}]
Let $k \in \mathbb{N}$, $\alpha \in (0,1)$, $\mathbf{p} \in [0,1]^\mathbb{N}$, $c \in (0,\infty)$, and $\mathcal{H}$ and $\mathcal{B}$ be as in the statement of Theorem~\ref{thm:stability-counting}. Furthermore, suppose that $\mathcal{H}$ is $(\alpha, \mathcal{B})$-stable and fix some positive $\delta$. Let $\delta'$ be a sufficiently small positive constant (depending only on $\alpha$ and $\delta$), let $\varepsilon$ be a small positive constant, and let $n$ be sufficiently large. Let $\varepsilon' = 2\varepsilon$, and set
\[
\mathcal{F} = \big\{ A \subseteq V(\mathcal{H}_n) \colon |A| \ge (\alpha - \varepsilon')v(\mathcal{H}_n) \text{ and } |A \setminus B| \ge \delta' v(\mathcal{H}_n) \text{ for every } B \in \mathcal{B}_n \big\}.
\]
Since $\mathcal{H}$ is $(\alpha, \mathcal{B})$-stable, it follows that $\mathcal{H}_n$ is $(\mathcal{F}, \varepsilon)$-dense, provided that $\varepsilon$ is sufficiently small (as a function of $\delta'$). Let $C' = C_{\ref{thm:main}}(k,\varepsilon,c)$. By Theorem~\ref{thm:main}, there exist a family $\S \subseteq \binom{V(\mathcal{H}_n)}{\le C'p_nv(\mathcal{H}_n)}$ and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}_n) \to \S$ such that
\[
g(I) \subseteq I \qquad \text{and} \qquad I \setminus g(I) \subseteq f(g(I))
\]
for every $I \in \mathcal{I}(\mathcal{H}_n)$. Let $C = C'/\varepsilon^2$, assume that $m \ge Cp_nv(\mathcal{H}_n)$, and set
\[
\mathcal{I}' = \big\{ I \in \mathcal{I}(\mathcal{H}_n, m) \colon |I \setminus B| \ge \delta m \text{ for every } B \in \mathcal{B}_n \big\}.
\]
Our task is to bound the size of $\mathcal{I}'$ from above. To this end, fix an $S \in \S$ and let $\mathcal{I}'_S = \{I \in \mathcal{I}' \colon g(I) = S\}$. Note for future reference that
\begin{equation}
\label{eq:S-count}
|S| \le C'p_nv(\mathcal{H}_n) \le \varepsilon^2 m.
\end{equation}
Since $f(S) \in \overline{\mathcal{F}}$, we either have $|f(S)| < (\alpha - \varepsilon')v(\mathcal{H}_n)$ or $|f(S) \setminus B| < \delta' v(\mathcal{H}_n)$ for some $B \in \mathcal{B}_n$. We therefore consider two cases.
\medskip
\noindent
\textbf{Case 1:} $|f(S)| < (\alpha - \varepsilon')v(\mathcal{H}_n)$.
\medskip
\noindent
We claim that in this case
\begin{equation}
\label{eq:stab-count-case1}
\binom{v(\mathcal{H}_n)}{|S|} \cdot |\mathcal{I}_S'| \le \frac{(1-\varepsilon)^m}{2m} \binom{\alpha v(\mathcal{H}_n)}{m}.
\end{equation}
To prove~\eqref{eq:stab-count-case1}, we first estimate the size of $\mathcal{I}'_S$ as follows:
\[
|\mathcal{I}_S'| \le \binom{|f(S)|}{m - |S|} \le \binom{(\alpha - \varepsilon')v(\mathcal{H}_n)}{m - |S|}.
\]
The above inequality follows since $I \setminus S \subseteq f(S)$ for every $I \in \mathcal{I}'_S$.
It follows, using~\eqref{eq:binomial-3} and~\eqref{eq:binomial-2}, as in~\eqref{eq:IS-Sch-count}, that
\[
|\mathcal{I}'_S| \le \binom{(\alpha - \varepsilon')v(\mathcal{H}_n)}{m - |S|} \le \bigg( \frac{\alpha-\varepsilon'}{\alpha} \bigg)^{m - |S|} \left( \frac{m}{\alpha v(\mathcal{H}_n) - m} \right)^{|S|} \binom{\alpha v(\mathcal{H}_n)}{m}.
\]
Now, if $m \ge (\alpha-\varepsilon')v(\mathcal{H}_n)$, then $\mathcal{I}' \subseteq \mathcal{F}$ and hence $\mathcal{I}' = \emptyset$, since $\mathcal{H}_n$ is $(\mathcal{F}, \varepsilon)$-dense. We may therefore assume that $m < (\alpha - \varepsilon')v(\mathcal{H}_n) = (\alpha - 2\varepsilon)v(\mathcal{H}_n)$. We obtain
\[
\binom{v(\mathcal{H}_n)}{|S|} |\mathcal{I}_S'| \le \bigg( \frac{\alpha-\varepsilon'}{\alpha} \bigg)^{m/2} \left( \frac{e v(\mathcal{H}_n)}{|S|} \cdot \frac{m}{2\varepsilon v(\mathcal{H}_n)} \right)^{|S|} \binom{\alpha v(\mathcal{H}_n)}{m} \le \frac{(1-\varepsilon)^m}{2m} \binom{\alpha v(\mathcal{H}_n)}{m},
\]
since $|S| \le \varepsilon^2 m$ and $\varepsilon' = 2\varepsilon$, as claimed.
\medskip
\noindent
\textbf{Case 2:} $|f(S) \setminus B| < \delta' v(\mathcal{H}_n)$ for some $B \in \mathcal{B}_n$.
\medskip
\noindent
We claim that in this case
\begin{equation}
\label{eq:stab-count-case2}
\binom{v(\mathcal{H}_n)}{|S|} \cdot |\mathcal{I}_S'| \le \delta^m \binom{\alpha v(\mathcal{H}_n)}{m}.
\end{equation}
To prove~\eqref{eq:stab-count-case2}, we first estimate the size of $\mathcal{I}'_S$ as follows:
\begin{equation}
\label{eq:IS'-count-2}
|\mathcal{I}_S'| \le \binom{|f(S) \setminus B|}{\delta m - |S|} \binom{|f(S)|}{m - \delta m} \le \binom{\delta' v(\mathcal{H}_n)}{\delta m - |S|}\binom{v(\mathcal{H}_n)}{m - \delta m}.
\end{equation}
To see the first inequality, recall that every $I \in \mathcal{I}'_S$ contains at least $\delta m - |S|$ elements of $f(S) \setminus B$ for every $B \in \mathcal{B}_n$. Recall that $|S| \le \varepsilon^2m$ and note that therefore, if $m \ge (\alpha/2)v(\mathcal{H}_n)$, then $\delta m - |S| \ge \delta' v(\mathcal{H}_n)$ and hence $\mathcal{I}'_S = \emptyset$. Thus, we may assume that $m < (\alpha/2)v(\mathcal{H}_n)$. It follows, using~\eqref{eq:binomial-2} and \eqref{eq:binomial-4}, that
\begin{equation}
\label{eq:IS'-count-2-2}
\binom{v(\mathcal{H}_n)}{m - \delta m} \le \left(\frac{m}{v(\mathcal{H}_n) - m} \right)^{\delta m} \binom{v(\mathcal{H}_n)}{m} \le \left(\frac{2m}{v(\mathcal{H}_n)} \right)^{\delta m} \left(\frac{2}{\alpha}\right)^m \binom{ \alpha v(\mathcal{H}_n)}{m}.
\end{equation}
Hence, by~\eqref{eq:S-count},~\eqref{eq:IS'-count-2}, and~\eqref{eq:IS'-count-2-2}, using~\eqref{eq:binomial-1}, we have
\begin{align*}
\binom{v(\mathcal{H}_n)}{|S|} |\mathcal{I}_S'| & \le \left( \frac{ev(\mathcal{H}_n)}{|S|} \right)^{|S|} \left( \frac{2e\delta' v(\mathcal{H}_n)}{\delta m}\right)^{\delta m - |S|} \left(\frac{2m}{v(\mathcal{H}_n)} \right)^{\delta m} \left(\frac{2}{\alpha}\right)^m \binom{ \alpha v(\mathcal{H}_n)}{m}\\
& \le \left( \frac{1}{|S|} \cdot \frac{\delta m}{2\delta'} \right)^{|S|} \left( \frac{4e\delta'}{\delta} \right)^{\delta m} \left(\frac{2}{\alpha}\right)^m \binom{ \alpha v(\mathcal{H}_n)}{m} \le \delta^m \binom{\alpha v(\mathcal{H}_n)}{m},
\end{align*}
as claimed, since $|S| \le\varepsilon^2 m$ and $\delta'$ and $\varepsilon$ were chosen to be sufficiently small. Indeed, note that (for this calculation, and assuming that $\delta$ is sufficiently small) $\delta' = \delta^3 \cdot (\delta \alpha / 2e)^{1/\delta}$ and $\varepsilon < \delta'$ suffice.
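To verify that this choice of $\delta'$ suffices, one can check directly that
\[
\left( \frac{4e\delta'}{\delta} \right)^{\delta m} \left( \frac{2}{\alpha} \right)^m = \big( 4e\delta^2 \big)^{\delta m} \left( \frac{\delta\alpha}{2e} \right)^{m} \left( \frac{2}{\alpha} \right)^m = \big( 4e\delta^2 \big)^{\delta m} \, \delta^m e^{-m} \le \delta^m,
\]
since $(4e\delta^2)^\delta \le e$ for sufficiently small $\delta$; the remaining factor $\big( \frac{1}{|S|} \cdot \frac{\delta m}{2\delta'} \big)^{|S|}$ is absorbed by the slack $e^{-m}$, using $|S| \le \varepsilon^2 m$ and $\varepsilon < \delta'$.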
\medskip
Finally, by~\eqref{eq:stab-count-case1} and~\eqref{eq:stab-count-case2}, we obtain
\[
|\mathcal{I}'| = \sum_{S \in \S} |\mathcal{I}_S'| \le \sum_{s = 0}^{\varepsilon^2 m} \binom{v(\mathcal{H}_n)}{s} \max\big\{ |\mathcal{I}_S'| \colon |S| = s \big\} \le (1-\varepsilon)^m \binom{\alpha v(\mathcal{H}_n)}{m},
\]
as claimed.
\end{proof}
\section{Tur{\'a}n's problem in random graphs}
\label{sec:Turan}
In this section, we shall deduce from Theorems~\ref{thm:Sch-weak} and~\ref{thm:stability-random} the sparse random analogues of the classical theorems of Erd{\H o}s and Stone~\cite{ErSt} and Tur{\'a}n~\cite{Turan} and of Erd{\H o}s and Simonovits~\cite{ES1,ES2}, Theorems~\ref{thm:Turan-Gnp} and~\ref{thm:stability-Gnp}. In fact, we will prove a natural generalization of Theorem~\ref{thm:Turan-Gnp} to $t$-uniform hypergraphs, Theorem~\ref{thm:t-Turan-Gnp} below, which was already proved by Conlon and Gowers~\cite{CG} and Schacht~\cite{Sch}. We first recall the following generalization of the notion of $2$-density of a graph to $t$-uniform hypergraphs.
\begin{defn}
Let $H$ be a $t$-uniform hypergraph with at least $t + 1$ vertices. We define the \emph{$t$-density} of $H$, denoted by $m_t(H)$, by
\[
m_t(H) = \max \left\{ \frac{e(H') - 1}{v(H') - t} \colon H' \subseteq H \text{ with } v(H') \ge t + 1 \right\}.
\]
\end{defn}
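For example, in the graph case $t = 2$, if $H = K_r$ with $r \ge 3$, then the maximum in the definition is attained by $H' = K_r$ itself, so
\[
m_2(K_r) = \frac{\binom{r}{2} - 1}{r - 2};
\]
in particular, $m_2(K_3) = 2$ and $m_2(K_4) = 5/2$.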
We also recall that the \emph{Tur{\'a}n density} of a $t$-uniform hypergraph $H$, denoted $\pi(H)$, is defined by
\begin{equation}
\label{eq:piH}
\pi(H) = \lim_{n \to \infty} \frac{\ex\big(K_n^{t}, H\big)}{\binom{n}{t}},
\end{equation}
where, as usual, $\ex\big(K_n^{t}, H \big)$ is the Tur{\'a}n number for $H$, that is, the maximum number of edges in an $H$-free $t$-uniform hypergraph with $n$ vertices.
\begin{thm}
\label{thm:t-Turan-Gnp}
For every $t$-uniform hypergraph $H$ with $\Delta(H) \ge 2$ and every positive $\delta$, there exists a positive constant $C$ such that if $q_n \ge Cn^{-1/m_t(H)}$, then
\[
\Pr\left( \ex\big(G^t(n,q_n), H\big) \le (\pi(H) + \delta) q_n \binom{n}{t} \right) \to 1
\]
as $n \to \infty$.
\end{thm}
Once again, we emphasize that we actually obtain essentially optimal bounds on the probability in the above statement, i.e., bounds of the form $1 - \exp(-bq_nn^t)$ for some positive constant $b$ that depends only on $H$ and $\delta$.
Theorems~\ref{thm:t-Turan-Gnp} and~\ref{thm:stability-Gnp}, and hence also Theorem~\ref{thm:Turan-Gnp}, will follow easily from our general transference results, Theorems~\ref{thm:Sch-weak} and~\ref{thm:stability-random}, the classical supersaturation results of Erd{\H o}s and Simonovits~\cite{ES83} (for Theorem~\ref{thm:t-Turan-Gnp}), and the stability theorem of Erd{\H o}s and Simonovits~\cite{ES1,ES2} together with the so-called graph removal lemma (for Theorem~\ref{thm:stability-Gnp}). We only need to check that the hypergraph of copies of $H$ in the complete hypergraph $K_n^t$, to which we would like to apply our transference theorems, satisfies the assumptions of Theorems~\ref{thm:Sch-weak} and \ref{thm:stability-random}. Since we are going to use this fact several times in this and later sections, we state it as a separate proposition.
Let $H$ be an arbitrary $t$-uniform hypergraph. The \emph{hypergraph of copies of $H$ in $K_n^t$} is the $e(H)$-uniform hypergraph on the vertex set $E(K_n^t)$ whose edges are the edge sets of all copies of $H$ in $K_n^t$.
\begin{prop}
\label{prop:Delta-bal-hyp}
Let $n$ and $t$ be integers with $t \ge 2$ and let $H$ be a $t$-uniform hypergraph. Set $k = e(H)$ and let $\mathcal{H}$ be the $k$-uniform hypergraph of copies of $H$ in $K_n^t$. There exists a positive constant $c$ such that, letting $p = n^{-1/m_t(H)}$,
\begin{equation} \label{item:Delta-bal-hyp-b}
\Delta_\ell(\mathcal{H}) \le c \cdot p^{\ell-1} \frac{e(\mathcal{H})}{v(\mathcal{H})}
\end{equation}
for every $\ell \in [k]$.
\end{prop}
\begin{proof}
Note that $v(\mathcal{H}) = \binom{n}{t} = \Theta(n^t)$ and that $e(\mathcal{H}) = \frac{(v(H))!}{|\Aut(H)|} \cdot \binom{n}{v(H)} = \Theta\big( n^{v(H)} \big)$. By the definition of $p$ and $m_t(H)$, we have
\begin{equation}\label{eq:pbound:def:mtH}
p^{e(H')-1}n^{v(H')-t} \ge 1
\end{equation}
for every $H' \subseteq H$. Now, for each $\ell \in [k]$,
\[
\Delta_\ell(\mathcal{H}) \le c' \cdot \max\left\{ n^{v(H) - v(H')} \colon H' \subseteq H \text{ with } e(H') = \ell \right\}
\]
for some positive constant $c'$. Since $e(\mathcal{H}) / v(\mathcal{H}) \ge c'' \cdot n^{v(H)-t}$ for some constant $c''$, it follows that
\begin{align*}
\Delta_\ell(\mathcal{H}) \cdot \left( p^{\ell-1} \frac{e(\mathcal{H})}{v(\mathcal{H})} \right)^{-1} & \le c' \cdot \frac{v(\mathcal{H})}{e(\mathcal{H})} \cdot \max_{H' \subseteq H \colon e(H') = \ell} \left( \frac{n^{v(H)}}{p^{e(H')-1}n^{v(H')}} \right) \\
& \le \frac{c'}{c''} \cdot \max_{H' \subseteq H \colon e(H') = \ell} \left( \frac{1}{p^{e(H')-1}n^{v(H')-t}} \right) \le \frac{c'}{c''},
\end{align*}
where the last inequality follows by~\eqref{eq:pbound:def:mtH}.
\end{proof}
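As an illustrative sanity check on the quantities in this proof (not part of the argument; the helper names are ours), take $H = K_3$ and $t = 2$: the hypergraph $\mathcal{H}$ of triangles in $K_n$ has $\Delta_1(\mathcal{H}) = n - 2$, since each edge lies in $n-2$ triangles, and $\Delta_2(\mathcal{H}) = \Delta_3(\mathcal{H}) = 1$, since two distinct edges lie in at most one common triangle. A brute-force computation confirms this for $n = 6$:

```python
from itertools import combinations
from collections import Counter

n = 6
# Each triangle of K_n, encoded as the frozenset of its three edges.
triangles = [frozenset(combinations(tri, 2)) for tri in combinations(range(n), 3)]

def Delta(ell):
    # Delta_ell(H): maximum number of triangles whose edge sets all contain
    # a fixed set of ell edges; only ell-subsets of some triangle's edge
    # set can have nonzero degree, so it suffices to count those.
    cnt = Counter()
    for T in triangles:
        for sub in combinations(sorted(T), ell):
            cnt[sub] += 1
    return max(cnt.values())

print(Delta(1), Delta(2), Delta(3))  # 4 1 1
```

With $p = n^{-1/2}$ and $e(\mathcal{H})/v(\mathcal{H}) = \binom{n}{3}/\binom{n}{2} = (n-2)/3$, these values are consistent with the bound~\eqref{item:Delta-bal-hyp-b}.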
\begin{proof}[Proof of Theorem~\ref{thm:t-Turan-Gnp}]
Let $H$ be a $t$-uniform hypergraph, let $k = e(H)$, and let $(\mathcal{H}_n)_{n \in \mathbb{N}}$ be the sequence of $k$-uniform hypergraphs of copies of $H$ in $K_n^t$. Let $\alpha = \pi(H)$, let $\delta$ be a positive constant, and let $p_n = n^{-1/m_t(H)}$. It follows easily from the supersaturation theorem of Erd{\H o}s and Simonovits~\cite{ES83} that $\mathcal{H}$ is $\alpha$-dense, see~\cite{Sch}. Let $C = C_{\ref{thm:Sch-weak}}(\mathcal{H}, \delta)$ and assume that $q_n \ge Cp_n = Cn^{-1/m_t(H)}$. Note that the assumption that $H$ contains a vertex of degree at least $2$ implies that $m_t(H) > 1/t$ and hence $q_n v(\mathcal{H}_n) \to \infty$ as $n \to \infty$. Together with Proposition~\ref{prop:Delta-bal-hyp}, this implies that $\mathcal{H}$ satisfies the assumptions of Theorem~\ref{thm:Sch-weak} and hence with probability tending to $1$ as $n \to \infty$,
\[
\ex\big( G^t(n,q_n), H \big) = \alpha\left( \mathcal{H}_n\big[E(G^t(n,q_n))\big] \right) \le ( \pi(H) + \delta ) q_n \binom{n}{t},
\]
as required.
\end{proof}
In the proof of Theorems~\ref{thm:stability-Gnp} and~\ref{thm:H-free-structure}, we shall need the following proposition, which is a fairly straightforward consequence of the Erd{\H o}s-Simonovits stability theorem~\cite{ES1,ES2} and the graph removal lemma~\cite{ErFrRo}. A proof of this statement can be found in~\cite{Sam}. We remark that a new proof of the graph removal lemma, which avoids the use of the Szemer\'edi regularity lemma, was given recently by Fox~\cite{Fox}.
\begin{prop}
\label{prop:stability-removal}
For every graph $H$ and every positive $\delta$, there exists a positive $\varepsilon$ such that the following holds for every $n \in \mathbb{N}$. If $G$ is an $n$-vertex graph with
\[
e(G) \ge \left( 1 - \frac{1}{\chi(H) - 1} - \varepsilon \right) \binom{n}{2},
\]
then either $G$ may be made $(\chi(H) - 1)$-partite by removing from it at most $\delta n^2$ edges or $G$ contains at least $\varepsilon n^{v(H)}$ copies of $H$.
\end{prop}
\begin{proof}[Proof of Theorem~\ref{thm:stability-Gnp}]
Let $H$ be a graph, let $k = e(H)$, and let $(\mathcal{H}_n)_{n \in \mathbb{N}}$ be the sequence of $k$-uniform hypergraphs of copies of $H$ in $K_n$. Let $\alpha = \pi(H) = \left(1 - \frac{1}{\chi(H)-1}\right)$, let $\delta$ be a positive constant, and let $p_n = n^{-1/m_2(H)}$. Moreover, let $\mathcal{B}_n$ be the family of all complete $(\chi(H)-1)$-partite subgraphs of $K_n$. By Proposition~\ref{prop:stability-removal}, $\mathcal{H}$ is $(\alpha,\mathcal{B})$-stable. Let $C = C_{\ref{thm:stability-random}}(\mathcal{H}, \delta)$, let $\varepsilon = \varepsilon_{\ref{thm:stability-random}}(\mathcal{H}, \delta)$, and assume that $q_n \ge Cp_n = Cn^{-1/m_2(H)}$. Note that the assumption that $H$ contains a vertex of degree at least $2$ implies that $m_2(H) > 1/2$ and hence $q_n v(\mathcal{H}_n) \to \infty$ as $n \to \infty$. Together with Proposition~\ref{prop:Delta-bal-hyp}, the discussion above implies that $\mathcal{H}$ satisfies the assumptions of Theorem~\ref{thm:stability-random} and hence with probability tending to $1$ as $n \to \infty$, every independent set $G' \subseteq G(n,q_n)$ with $|G'| \ge (\alpha-\varepsilon) q_n v(\mathcal{H}_n)$ satisfies $|G' \setminus B| \le \delta q_n v(\mathcal{H}_n)$ for some $B \in \mathcal{B}_n$. In other words, with probability tending to $1$ as $n \to \infty$, every $H$-free subgraph of $G(n,q_n)$ with at least $\left(1-\frac{1}{\chi(H)-1}-\varepsilon\right)\binom{n}{2}q_n$ edges can be made $(\chi(H)-1)$-partite by removing from it at most $\delta q_n \binom{n}{2}$ edges, as required.
\end{proof}
\section{The typical structure of $H$-free graphs}
\label{sec:Turan-counting}
In this section, we shall deduce from Theorems~\ref{thm:Sch-counting} and~\ref{thm:stability-counting} the sparse analogue of the theorem of Erd{\H o}s, Frankl, and R{\"o}dl~\cite{ErFrRo}, Theorem~\ref{thm:ErFrRo-sparse}, and an approximate sparse analogue of the result of Erd{\H o}s, Kleitman, and Rothschild~\cite{ErKlRo}, Theorem~\ref{thm:H-free-structure}. We stress once again that neither proof employs Szemer{\'e}di's regularity lemma. In order to prove Theorem~\ref{thm:ErFrRo-sparse}, we are actually going to prove the following natural generalization of it to $t$-uniform hypergraphs. Generalizing the definition stated in Section~\ref{sec:typical-structure}, given integers $n$ and $m$ with $0 \le m \le \binom{n}{t}$ and a $t$-uniform hypergraph $H$, let us denote by $f_{n,m}(H)$ the number of $H$-free $t$-uniform hypergraphs on the vertex set $[n]$ that have exactly $m$ edges.
\begin{thm}
\label{thm:t-ErFrRo-sparse}
For every $t$-uniform hypergraph $H$ and every positive $\delta$, there exists a positive constant $C$ such that the following holds. For every $n \in \mathbb{N}$, if $m \ge Cn^{t-1/m_t(H)}$, then
\[
\binom{\ex(n,H)}{m} \le f_{n,m}(H) \le \binom{\ex(n,H) + \delta n^t}{m}.
\]
\end{thm}
We remark that Theorem~\ref{thm:t-ErFrRo-sparse} refines a result of Nagle, R\"odl, and Schacht~\cite{NRS}, who, using the hypergraph regularity lemma, generalized~\eqref{eq:fnH-upper} to $t$-uniform hypergraphs.
\begin{proof}[Proof of Theorem~\ref{thm:t-ErFrRo-sparse}]
Let $H$ be a $t$-uniform hypergraph, let $k = e(H)$, and let $(\mathcal{H}_n)_{n \in \mathbb{N}}$ be the sequence of $k$-uniform hypergraphs of copies of $H$ in $K_n^t$. Let $\alpha = \pi(H)$, see~\eqref{eq:piH}, let $\delta$ be a positive constant, and let $p_n = n^{-1/m_t(H)}$. It follows easily from the supersaturation theorem of Erd{\H o}s and Simonovits~\cite{ES83} that $\mathcal{H}$ is $\alpha$-dense, see~\cite{Sch}. Let $C = C_{\ref{thm:Sch-counting}}(\mathcal{H}, \delta)$ and assume that $m \ge C n^{t-1/m_t(H)} \ge C p_n v(\mathcal{H}_n)$. Note that Proposition~\ref{prop:Delta-bal-hyp} implies that $\mathcal{H}$ satisfies the assumptions of Theorem~\ref{thm:Sch-counting} and hence
\[
f_{n,m}(H) = |I(\mathcal{H}_n, m)| \le \binom{(\pi(H) + \delta)\binom{n}{t}}{m} \le \binom{\ex(n,H) + \delta n^t}{m},
\]
as required. The claimed lower bound on $f_{n,m}(H)$ is trivial.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:H-free-structure}]
Let $H$ be a graph, let $k = e(H)$, and let $(\mathcal{H}_n)_{n \in \mathbb{N}}$ be the sequence of $k$-uniform hypergraphs of copies of $H$ in $K_n$. Let $\alpha = \pi(H) = 1 - \frac{1}{\chi(H)-1}$, let $\delta$ be a positive constant, and let $p_n = n^{-1/m_2(H)}$. Moreover, let $\mathcal{B}_n$ be the family of all complete $(\chi(H)-1)$-partite subgraphs of $K_n$. By Proposition~\ref{prop:stability-removal}, $\mathcal{H}$ is $(\alpha,\mathcal{B})$-stable. Let $C = C_{\ref{thm:stability-counting}}(\mathcal{H}, \delta)$, let $\varepsilon = \varepsilon_{\ref{thm:stability-counting}}(\mathcal{H}, \delta)$, and assume that $m \ge Cn^{2-1/m_2(H)} \ge Cp_n v(\mathcal{H}_n)$. Together with Proposition~\ref{prop:Delta-bal-hyp}, this implies that $\mathcal{H}$ satisfies the assumptions of Theorem~\ref{thm:stability-counting} and hence, letting $f_{n,m}^\delta(H)$ denote the number of $H$-free graphs on the vertex set $[n]$ that have exactly $m$ edges and that are not $(\delta, \chi(H)-1)$-partite,
\[
f_{n,m}^\delta(H) \le (1-\varepsilon)^m \binom{\pi(H)\binom{n}{2}}{m}.
\]
Finally, note that (trivially),
\[
f_{n,m}(H) \ge \binom{\pi(H)\binom{n}{2}}{m}
\]
and hence $f_{n,m}^\delta(H) = o\big(f_{n,m}(H)\big)$, as claimed.
\end{proof}
For $t$-uniform hypergraphs, there is no general stability theorem known; however, such results have been proved for a few specific hypergraphs (see~\cite{FPS05, FuSi05, KeMu04, KeSu05}), and in each case we obtain a corresponding result for sparse hypergraphs. For example, following~\cite{BaMu12}, let $F_5$ denote the `3-uniform triangle', i.e., the $3$-uniform hypergraph with vertex set $\{1,\ldots,5\}$ and edge set $\{123,124,345\}$, and say that a 3-uniform hypergraph is \emph{triangle-free} if it contains no copy of $F_5$. The following theorem follows easily, as above, from Theorem~\ref{thm:stability-counting} combined with the hypergraph removal lemma of Gowers~\cite{Gow07} and R\"odl and Skokan~\cite{RoSk06} and the stability theorem for 3-uniform triangle-free hypergraphs, which was proved by Keevash and Mubayi~\cite{KeMu04}.
\begin{thm}\label{3trianglestab}
For every positive $\delta$, there exists a constant $C$ such that the following holds. If $m \ge Cn^2$, then almost every triangle-free $3$-uniform hypergraph with $n$ vertices and $m$ edges can be made tripartite by removing from it at most $\delta m$ edges.
\end{thm}
\begin{proof}
Let $(\mathcal{H}_n)_{n \in \mathbb{N}}$ be the sequence of $3$-uniform hypergraphs of copies of $F_5$ in $K_n^3$, set $\alpha = 2/9$, and let $\mathcal{B}_n$ denote the collection of all complete tripartite subhypergraphs of $K_n^3$. By the hypergraph removal lemma~\cite[Theorem~1.3]{RoSk06}, combined with the stability theorem for triangle-free 3-uniform hypergraphs~\cite[Theorem~1.6]{KeMu04}, it follows that $\mathcal{H}$ is $(\alpha,\mathcal{B})$-stable.
It follows by Proposition~\ref{prop:Delta-bal-hyp} that $\mathcal{H}$ satisfies the conditions of Theorem~\ref{thm:stability-counting} with $p_n = n^{-1}$. Hence the number of triangle-free $3$-uniform hypergraphs with $n$ vertices and $m$ edges that cannot be made tripartite by removing at most $\delta m$ edges is at most
$$(1-\varepsilon)^m \binom{\pi(F_5) {n \choose 3}}{m},$$
which easily implies the theorem.
\end{proof}
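The value $p_n = n^{-1}$ used above reflects the fact that $m_3(F_5) = 1$, which can be checked by brute force over the edge subsets of $F_5$ (an illustrative Python sketch, not part of the paper; the maximum is attained, e.g., by the pair of edges $\{123, 124\}$, which span $4$ vertices):

```python
from itertools import combinations

def t_density(edges, t):
    # Maximize (e(H')-1)/(v(H')-t) by brute force over nonempty edge
    # subsets spanning at least t+1 vertices; isolated vertices never
    # increase the ratio and can be ignored.
    best = 0.0
    for k in range(1, len(edges) + 1):
        for sub in combinations(edges, k):
            v = len(set().union(*map(set, sub)))
            if v >= t + 1:
                best = max(best, (k - 1) / (v - t))
    return best

F5 = [(1, 2, 3), (1, 2, 4), (3, 4, 5)]
print(t_density(F5, 3))  # 1.0, so n^{-1/m_3(F_5)} = n^{-1}
```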
Finally, we remark that Theorem~\ref{3trianglestab} can be seen as an approximate sparse analogue of a result of Balogh and Mubayi~\cite{BaMu12}, who used the hypergraph regularity lemma and~\cite[Theorem~1.6]{KeMu04} to show that almost all triangle-free 3-uniform hypergraphs are tripartite. For similar results for other forbidden hypergraphs, see~\cite{BaMu11} and~\cite{PS09}.
\section{The K\L R Conjecture}
\label{sec:KLR}
In this section, we shall deduce from Theorem~\ref{thm:main} the K\L R conjecture, Theorem~\ref{thm:KLR}. As in the preceding sections, the proof will be a fairly straightforward application of Theorem~\ref{thm:main} to an appropriately defined hypergraph $\mathcal{H}$ and family $\mathcal{F} \subseteq \mathcal{P}(V(\mathcal{H}))$. Let $H$ be an arbitrary graph and let $\mathcal{H}$ be the $e(H)$-uniform hypergraph of canonical copies of $H$ in the complete blow-up of $H$. Defining an appropriate family $\mathcal{F}$ and showing that $\mathcal{H}$ is $(\mathcal{F},\varepsilon)$-dense will require some work.
Given a graph $H$ and integers $n_1, \ldots, n_{v(H)}$, let us denote by $\mathcal{G}(H;n_1,\ldots,n_{v(H)})$ the collection of all graphs $G$ constructed in the following way. The vertex set of $G$ is a disjoint union $V_1 \cup \ldots \cup V_{v(H)}$ of sets of sizes $n_1, \ldots, n_{v(H)}$, respectively, one for each vertex of $H$. The only edges of $G$ lie between those pairs of sets $(V_i, V_j)$ such that $\{i,j\}$ is an edge of $H$. Recall the definition of $\mathcal{G}(H,n,m,p,\varepsilon)$ from Section~\ref{sec:KLR-intro} and observe that $\mathcal{G}(H,n,m,p,\varepsilon) \subseteq \mathcal{G}(H;n,\ldots,n)$ for all $m$, $p$, and $\varepsilon$.
The following lemma, which is a robust version of the embedding lemma, stated in Section~\ref{sec:KLR-intro}, suggests the right choice of $\mathcal{F}$. The lemma is well-known, and so we omit the (standard) proof.
\begin{lemma}
\label{lemma:hole}
Let $H$ be a graph and let $\delta \colon (0,1] \to (0,1)$ be an arbitrary function. There exist positive constants $\alpha_0$, $\xi$, and $N$ such that for every collection of integers $n_1, \ldots, n_{v(H)}$ satisfying $n_1, \ldots, n_{v(H)} \ge N$ and every graph $G \in \mathcal{G}(H;n_1, \ldots, n_{v(H)})$, one of the following holds:
\begin{enumerate}[(a)]
\item
\label{item:hole-a}
$G$ contains at least $\xi n_1 \dots n_{v(H)}$ canonical copies of $H$.
\item
\label{item:hole-b}
There exist a positive constant $\alpha$ with $\alpha \ge \alpha_0$, an edge $\{i,j\} \in E(H)$, and sets $A_i \subseteq V_i$, $A_j \subseteq V_j$ such that $|A_i| \ge \alpha n_i$, $|A_j| \ge \alpha n_j$, and
$d_G(A_i,A_j) < \delta(\alpha)$.
\end{enumerate}
\end{lemma}
Our next lemma is also straightforward. It allows us to count $(\varepsilon, p)$-regular subgraphs of a graph that has a `hole', as in Lemma~\ref{lemma:hole}(\ref{item:hole-b}). Recall that $\mathcal{G}(K_2,n,m,p,\varepsilon)$ denotes the collection of all $(\varepsilon, p)$-regular bipartite graphs with $m$ edges and $n$ vertices in each part. Given such $G$, let $V_1(G)$ and $V_2(G)$ denote the two parts. For each $\beta \in (0,1)$, define a function $\delta \colon (0,1] \to (0,1)$ by setting
\begin{equation}
\label{eq:delta-def}
\delta(x) = \frac{1}{4e} \left( \frac{\beta}{2} \right)^{2/x^2}
\end{equation}
for each $x \in (0,1]$. The following lemma says that a graph $\tilde{G}$ that has a hole of size $\alpha n$ and density at most $\delta(\alpha)$ has very few subgraphs in $\mathcal{G}(K_2,n,m,m/n^2,\varepsilon)$.
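The constant $1/4e$ and the exponent in~\eqref{eq:delta-def} are chosen precisely so that $\big(4e\,\delta(\alpha)\big)^{\alpha^2/2} = \beta/2$ for every $\alpha$, which is the identity invoked at the end of the proof of the next lemma. A quick numeric check (illustrative Python, not part of the argument):

```python
from math import e, isclose

def delta(x, beta):
    # delta(x) = (1/4e) * (beta/2)^(2/x^2), as in the display above.
    return (beta / 2) ** (2 / x ** 2) / (4 * e)

# The defining property: (4e * delta(alpha))^(alpha^2 / 2) == beta / 2,
# since the exponents 2/alpha^2 and alpha^2/2 cancel exactly.
for alpha in (0.1, 0.5, 1.0):
    assert isclose((4 * e * delta(alpha, 0.3)) ** (alpha ** 2 / 2), 0.15)
print("ok")
```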
\begin{lemma}\label{lemma:holecount}
For every positive $\alpha_0$ and $\beta$, there exists a positive constant $\varepsilon$ such that the following holds. Let $\tilde{G} \subseteq K_{n,n}$ be such that there exist subsets $A \subseteq V_1(\tilde{G})$ and $B \subseteq V_2(\tilde{G})$ with
\[
\min\{ |A|,|B|\} \ge \alpha n \quad \text{and} \quad d_{\tilde{G}}(A,B) < \delta(\alpha)
\]
for some $\alpha \in [\alpha_0,1]$, and let $S \subseteq \tilde{G}$. Then, for every $m$ with $|S|/\varepsilon \le m \le n^2$, there are at most
\[
\beta^{m} \binom{n^2}{m - |S|}
\]
subgraphs of $\tilde{G}$ that belong to $\mathcal{G}(K_2,n,m,m/n^2,\varepsilon)$ and contain $S$.
\end{lemma}
\begin{proof}
We begin by noting that, by choosing random subsets of $A$ and $B$ if necessary, we may assume that $|A| = |B| = \alpha n$. Set $\varepsilon = \min\{\alpha_0^2/4, 1/4\}$, write $\mathcal{G}^*$ for the family of all subgraphs of $\tilde{G}$ that belong to $\mathcal{G}(K_2,n,m,m/n^2,\varepsilon)$ and contain $S$, and let $G \in \mathcal{G}^*$. In particular, $G$ is $(\varepsilon,p)$-regular, where $p = m/n^2$. Since $\varepsilon \le \alpha$, the pair $(A,B)$ must have density at least $(1-\varepsilon)p$ in $G$, and hence, since $|S| \le \varepsilon m$, it must contain at least $(1 - \varepsilon - \varepsilon/\alpha^2)p|A||B|$ edges of $E(G) \setminus S$. Set $m' = m - |S|$ and $\varepsilon' = \varepsilon( 1 + 1/\alpha^2) \le 1/2$, and write $e_{\tilde{G}}(A,B)$ for the number of edges of $\tilde{G}$ that lie between the sets $A$ and $B$. Since $d_{\tilde{G}}(A,B) < \delta(\alpha)$, the number of choices for $G$ can be estimated as follows:
\begin{equation}
\label{eq:holecount-1}
|\mathcal{G}^*| \le \sum_{\ell \ge (1- \varepsilon')p|A||B|} \binom{e_{\tilde{G}}(A,B)}{\ell} \binom{e(\tilde{G}) - e_{\tilde{G}}(A,B)}{m' - \ell} \le \sum_{\ell \ge \alpha^2 m/2} \binom{\delta(\alpha) \alpha^2 n^2}{\ell} \binom{n^2}{m' - \ell}.
\end{equation}
Note that the right-hand side of~\eqref{eq:holecount-1} is zero if $m > 2\delta(\alpha) n^2$, so we may assume that $m' \le m \le 2\delta(\alpha) n^2 \le n^2/2$. Thus, using~\eqref{eq:binomial-1} and~\eqref{eq:binomial-2}, \eqref{eq:holecount-1} implies that
\begin{equation}
\label{eq:holecount-2}
|\mathcal{G}^*| \le \sum_{\ell \ge \alpha^2 m/2} \left( \frac{e \delta(\alpha) \alpha^2 n^2}{\ell} \right)^\ell \left( \frac{m'}{n^2 - m'} \right)^\ell \binom{n^2}{m'} \le \sum_{\ell \ge \alpha^2 m/2} \left( \frac{2e \delta(\alpha) \alpha^2 m}{\ell} \right)^\ell \binom{n^2}{m'}.
\end{equation}
Since $\delta(\alpha) < 1/4e$, the summand in the right-hand side of~\eqref{eq:holecount-2} is decreasing in $\ell$ on $(\alpha^2 m/2,\infty)$ and hence
\[
|\mathcal{G}^*| \le m \big( 4e \delta(\alpha) \big)^{\alpha^2 m/2} \binom{n^2}{m'} \le \beta^m \binom{n^2}{m'},
\]
as required, since $\big( 4e \delta(\alpha) \big)^{\alpha^2/2} = \beta/2$.
\end{proof}
We can now easily deduce Theorem~\ref{thm:KLR} from Theorem~\ref{thm:main}.
\begin{proof}[Proof of Theorem~\ref{thm:KLR}]
Let $H$ be a fixed graph, let $n \in \mathbb{N}$, and let $H(n)$ be the largest graph in the family $\mathcal{G}(H;n, \ldots, n)$, i.e., the complete blow-up of $H$, where each vertex of $H$ is replaced by an independent set of size $n$ and each edge of $H$ is replaced by the complete bipartite graph $K_{n,n}$. Let $\mathcal{H}$ be the $e(H)$-uniform hypergraph on the vertex set $E(H(n))$ whose edges are all $n^{v(H)}$ canonical copies of $H$ in $H(n)$.
Fix an arbitrary positive constant $\beta$, let $\delta \colon (0,1] \to (0,1)$ be the function defined in~\eqref{eq:delta-def} with $\beta$ replaced by $\beta/2$, i.e., set
\[
\delta(x) = \frac{1}{4e} \left( \frac{\beta}{4} \right)^{2/x^2}
\]
for each $x \in (0,1]$, and let $\alpha_0 = (\alpha_0)_{\ref{lemma:hole}}(H, \delta)$, $\xi = \xi_{\ref{lemma:hole}}(H,\delta)$, and $N = N_{\ref{lemma:hole}}(H,\delta)$. Let $\mathcal{F}$ be the family of all subgraphs of $H(n)$, i.e., graphs in $\mathcal{G}(H;n, \ldots, n)$, for which (\ref{item:hole-b}) in Lemma~\ref{lemma:hole} is \emph{not} satisfied. Clearly $\mathcal{F}$ is an upset, and so, by Lemma~\ref{lemma:hole}, $\mathcal{H}$ is $(\mathcal{F},\xi)$-dense provided that $n \ge N$.
Now, since $\mathcal{H}$ is contained in the hypergraph of all copies of $H$ in the complete graph on $v(H)n$ vertices and contains a positive proportion of those copies, it follows from Proposition~\ref{prop:Delta-bal-hyp} that $\mathcal{H}$ satisfies the assumptions of Theorem~\ref{thm:main} with $p = n^{-1/m_2(H)}$ and $\varepsilon = \xi$, for some constant $c$ depending only on $H$. Therefore, there is a constant $C'$, a family $\S \subseteq \binom{E(H(n))}{\le C'n^{2-1/m_2(H)}}$, and functions $f \colon \S \to \overline{\mathcal{F}}$ and $g \colon \mathcal{I}(\mathcal{H}) \to \S$ such that
\[
g(I) \subseteq I \qquad \text{and} \qquad I \setminus g(I) \subseteq f(g(I))
\]
for every $I \in \mathcal{I}(\mathcal{H})$.
Let $\varepsilon$ be a sufficiently small positive constant such that, in particular, $\varepsilon \le \varepsilon_{\ref{lemma:holecount}}(\alpha_0, \beta/2)$, let $C = C' / \varepsilon$, and suppose that $m \ge Cn^{2-1/m_2(H)}$. Let $\mathcal{G}^* = \mathcal{G}^*(H,n,m,m/n^2,\varepsilon)$ and note that $\mathcal{G}^* \subseteq \mathcal{I}(\mathcal{H})$. We are required to bound from above the number of graphs in $\mathcal{G}^*$.
To this end, fix an $S \in \S$, let
\[
\mathcal{G}_S^* = \big\{ G \in \mathcal{G}^* \colon g(G) = S \big\},
\]
and let $G_S = f(S)$. For each $\{i,j\} \in E(H)$, let $s(i,j) = e_S(V_i, V_j)$ and note that $\sum_{ij \in E(H)} s(i,j) = |S|$. Since
\[
|S| \le C'n^{2-1/m_2(H)} \le \varepsilon \cdot Cn^{2-1/m_2(H)} \le \varepsilon m,
\]
it follows that $s(i,j) \le \varepsilon m$ for every $\{i,j\} \in E(H)$.
Now, since $G_S \in \overline{\mathcal{F}}$, it follows that there exist an $\alpha \in [\alpha_0,1]$, an edge $\{i, j\} \in E(H)$, and sets $A_i \subseteq V_i$, $A_j \subseteq V_j$ such that $|A_i|, |A_j| \ge \alpha n$ and $d_{G_S}(A_i,A_j) < \delta(\alpha)$. By Lemma~\ref{lemma:holecount}, it follows that there are at most
\[
\left( \frac{\beta}{2} \right)^{m} \binom{n^2}{m - s(i,j)}
\]
choices for the edges between $V_i$ and $V_j$ such that $G[V_i,V_j] \in \mathcal{G}(K_2,n,m,m/n^2,\varepsilon)$ and $S[V_i,V_j] \subseteq G[V_i,V_j] \subseteq S \cup G_S[V_i,V_j]$. It follows immediately that
\[
|\mathcal{G}_S^*| \le \left(\frac{\beta}{2}\right)^m \prod_{ij \in E(H)} {n^2 \choose m - s(i,j)}.
\]
Summing over sets $S \in \S$, and using~\eqref{eq:binomial-1} and \eqref{eq:binomial-2}, we obtain
\begin{align*}
|\mathcal{G}^*| & \le \sum_{S \in \S} \left(\frac{\beta}{2}\right)^m \prod_{ij \in E(H)} \left( \frac{m}{n^2 - m} \right)^{s(i,j)} \binom{n^2}{m} = \left(\frac{\beta}{2}\right)^m \binom{n^2}{m}^{e(H)} \sum_{S \in \S} \left( \frac{m}{n^2 - m} \right)^{|S|} \\
& \le \left(\frac{\beta}{2}\right)^m \binom{n^2}{m}^{e(H)} \sum_{s \le \varepsilon m} \binom{e(H)n^2}{s} \left( \frac{2m}{n^2} \right)^{s} \le \left(\frac{\beta}{2}\right)^m \binom{n^2}{m}^{e(H)} \sum_{s \le \varepsilon m} \left( \frac{2e \cdot e(H)m}{s} \right)^s.
\end{align*}
Now, since $\varepsilon$ was chosen to be sufficiently small, it follows that the summand above is increasing in $s$ on $(0,\varepsilon m]$ and hence
\[
|\mathcal{G}^*| \le \left(\frac{\beta}{2}\right)^m \binom{n^2}{m}^{e(H)} m \left( \frac{2e \cdot e(H)}{\varepsilon} \right)^{\varepsilon m} \le \beta^m {n^2 \choose m}^{e(H)},
\]
as required.
\end{proof}
\noindent
{\bf Acknowledgement.}
The third author would like to thank Noga Alon and David Conlon for stimulating discussions. The authors would also like to thank David Conlon and Yoshiharu Kohayakawa for helpful comments on the manuscript, and David Saxton for pointing out the usefulness of allowing multiple edges. Finally, we would like to thank the anonymous referee for a very careful reading of the proof, and a plenitude of helpful suggestions.
\bibliographystyle{amsplain}
% ======================================================================
% https://arxiv.org/abs/1403.7920
% Computing the dimension of ideals in group algebras, with an application
% to coding theory
%
% Abstract: The problem of computing the dimension of a left/right ideal
% in a group algebra F[G] of a finite group G over a field F is
% considered. The ideal dimension is related to the rank of a matrix
% originating from a regular left/right representation of G; in
% particular, when F[G] is semisimple, the dimension of a principal ideal
% is equal to the rank of the matrix representing a generator. From this
% observation, a bound and an efficient algorithm to compute the
% dimension of an ideal in a group ring are established. Since group
% codes are ideals in finite group rings, the algorithm allows efficient
% computation of their dimension.
% ======================================================================
\section{Introduction and preliminaries}\label{sect1}
Let $\mathcal G=\{g_1,g_2, \ldots,g_n\}$ be a finite multiplicative group of
order $n=|\mathcal G|$, with neutral element $g_1=1$. Let $\mathbb F$ be a field of characteristic $p$.
Finite fields of order $q=p^m$ are denoted as $\mathbb F_q$.
The group algebra $\mathbb F[\mathcal G]$
of $\mathcal G$ over $\mathbb F$ consists of the formal sums
\begin{equation}
\label{dede1}
\sum_{i=1}^{n} \alpha_i g_i ~~~~\alpha_i \in \mathbb F, ~~g_i \in \mathcal G,
\end{equation}
where the $\alpha_i$'s are called coefficients. The sum in $\FF[\mathcal G]$ is
defined coefficientwise, that is, the coefficients of the same $g_i$ are added according
to the addition in $\mathbb F$. The product is performed by applying the distributive law,
and the group elements are multiplied according to the rule in $\mathcal G$.
The group algebra $\mathbb F[\mathcal G]$ is a vector space of dimension $n$
over the field $\mathbb F$, and has the structure of an associative ring with identity.
It is commutative if and only if $\G$ is commutative.
It turns out that the structure of the ideals
depends on the group $\mathcal G$ and the field characteristic.
If the field characteristic does not divide the group order, or is $0$, the group
ring is semisimple by Maschke's Theorem (see~\cite{maschke}).
In addition, every ideal is principal and generated by an idempotent~\cite{curtis}.
If the field characteristic divides the group order, then the group ring
is not semisimple in general \cite{navarro}.
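The coefficientwise operations above are straightforward to implement for a cyclic group. The sketch below (illustrative Python; the helper name is ours) also exhibits the dichotomy just described: in $\FF_2[C_2]$, where the characteristic divides the group order, the element $1 + t$ squares to zero, so the group algebra contains a nonzero nilpotent element and cannot be semisimple, whereas over $\FF_3$ the same element is not nilpotent.

```python
from itertools import product

def gmul(f, g, n, p):
    # Product in F_p[C_n], with C_n = <t | t^n = 1>; elements are dicts
    # mapping the exponent of t to a coefficient in F_p.
    h = {}
    for (i, a), (j, b) in product(f.items(), g.items()):
        k = (i + j) % n
        h[k] = (h.get(k, 0) + a * b) % p
    return {k: c for k, c in h.items() if c}

f = {0: 1, 1: 1}             # the element 1 + t
print(gmul(f, f, n=2, p=2))  # {}  -- (1+t)^2 = 1 + 2t + t^2 = 0 in F_2[C_2]
print(gmul(f, f, n=2, p=3))  # {0: 2, 1: 2}  -- nonzero when char(F) does not divide |C_2|
```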
A group code of length $n$ is a linear code which is the image of an ideal $I\subseteq\FF[\G]$ via
an isomorphism $\phi:\FF[\G]\rightarrow\FF^n$. In other words, an ideal of $\FF[\G]$ is a
group code in $\FF^n$, for a given choice of a basis of $\FF[\G]$. Denote by $C=\phi(I)$ the group code
corresponding to the ideal $I$, then $C$ is an $[n,k]$-code, where $k=\dim_{\FF} I$ (see~\cite{pless} for more details). In this paper, we study
the problem of how to efficiently compute the dimension of $I$. Our approach uses some elementary tools
from representation theory.
Every representation $D:\G\longrightarrow GL_m(\KK)$ of $\mathcal G$
over an extension field $\mathbb K$ of $\mathbb F$ induces a representation
of the group algebra $\mathbb F[\mathcal G]$, which we denote again by $D$
\begin{equation}\label{dede1rap}
\begin{array}{rcl}
D: \FF[\G] & \longrightarrow & \M_m(\KK) \\
\sum_{j=1}^{n} \alpha_j g_j & \longmapsto & \sum_{j=1}^{n} \alpha_j D(g_j).
\end{array}
\end{equation}
Here $\M_m(\KK)$ is the ring of $m\times m$ matrices with entries in $\KK$.
In particular a regular representation of $\G$ induces a representation of $\FF[\G]$ over $\mathbb F$.
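To make the induced representation~\eqref{dede1rap} concrete, here is an illustrative Python sketch (helper names are ours) for $\G = C_3$ with its regular right representation by permutation matrices. The induced map is linear by construction, and it is multiplicative on the algebra because $\rho$ is a homomorphism on the group; the sketch checks both on a small example in ${\rm\bf Q}[C_3]$.

```python
def rho(k, n=3):
    # Regular right representation of C_n = <g>: row i of rho(k) holds
    # the coefficients of g^i * g^k in the basis g^0, ..., g^{n-1}.
    return [[1 if j == (i + k) % n else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def induced(coeffs, n=3):
    # The induced map D: sum_k c_k g^k  |->  sum_k c_k rho(k).
    return [[sum(c * rho(k, n)[i][j] for k, c in enumerate(coeffs))
             for j in range(n)] for i in range(n)]

# rho is multiplicative on the group: g * g^2 = 1 ...
assert matmul(rho(1), rho(2)) == rho(0)
# ... and the induced map is multiplicative on the algebra:
# (2 + 5g) * g^2 = 5 + 2g^2 in Q[C_3].
assert matmul(induced([2, 5, 0]), induced([0, 0, 1])) == induced([5, 0, 2])
print(induced([2, 5, 0]))  # [[2, 5, 0], [0, 2, 5], [5, 0, 2]]
```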
The main results of the paper are contained in Section~\ref{main}:
In Proposition~\ref{theorem1} we relate the dimension of a proper left/right ideal $I\subset\FF[\G]$
to the rank of a matrix constructed using a regular right/left representation of $\G$.
In Proposition~\ref{id_gen} and Theorem~\ref{corol} we discuss how to compute an idempotent generator
of a given left/right ideal and show that the dimension of the left/right ideal that it generates
can be obtained from the characteristic polynomial of the matrix from Proposition~\ref{theorem1}.
In Corollary~\ref{charpoly} we
reduce the computation of the dimension of a principal left/right ideal to the computation of a
characteristic polynomial, even in the case when the ideal is not generated by an idempotent.
In Section~\ref{exs} we discuss the applications to coding theory and present some examples.
\section{Ideal dimension}\label{main}
Computing the dimension of a left/right ideal in $\mathbb F[\mathcal G]$
has both theoretical relevance and practical applications.
The solution that we propose is based on the representation of
$\mathbb F[\mathcal G]$ induced by a regular representation of
$\mathcal G$. In the next proposition, we relate the dimension of a left/right ideal
to the rank of a matrix built using the regular right/left representation of $\G$.
In the rest of the section, we discuss how to efficiently compute this rank.
In particular, we relate it to the characteristic polynomial of a suitable matrix.
\begin{proposition}\label{theorem1}
Let $\G=\{g_1,\ldots,g_n\}$ be a group, $\FF$ be a field, and let $A=\FF[\G]$.
Let $f_1,\ldots,f_t\in A$, $f_i=\sum_{j=1}^n \alpha_{ij} g_j$ with $\alpha_{ij}\in\FF$.
Let $I=Af_1+\ldots+Af_t$ and $J=f_1A+\ldots+f_tA$.
Let $\rho(f_1),\ldots,\rho(f_t)$
be the matrices corresponding to $f_1,\ldots,f_t$ in the representation of $A$
induced by the regular right representation $\rho$ of $\G$. Let
$\lambda(f_1),\ldots,\lambda(f_t)$ be the matrices corresponding to $f_1,\ldots,f_t$
in the representation of $A$ induced by the regular left representation $\lambda$
of $\G$.
Define block matrices $$\rho(I)=\left[\begin{array}{c} \rho(f_1) \\ \vdots \\ \rho(f_t)
\end{array}\right]\;\;\; \mbox{ and }\;\;\;
\lambda(J)=\left[\begin{array}{c} \lambda(f_1) \\ \vdots \\ \lambda(f_t)\end{array}\right].$$
Then $$\dim_{\FF} I=\rk \rho(I)\;\;\; \mbox{ and }\;\;\; \dim_{\FF} J=\rk \lambda(J).$$
\end{proposition}
\begin{proof}
For each $f=\sum_{i=1}^n a_ig_i\in A$, define $f^*=\sum_{i=1}^n a_ig_i^{-1}.$
We have $\rho(f^*)=(m_{ij})\in\M_n(\FF)$, where
$$g_if=m_{i1}g_1+\cdots+m_{in}g_n.$$ In other words, the entries
in the $i$-th row of $\rho(f_k^*)$ are the coefficients of $g_if_k$
with respect to the $\FF$-basis $g_1,\ldots,g_n$ of $A$. The elements
$g_if_k$ for $i=1,\ldots,n$ and $k=1,\ldots,t$ generate
$I$ as $\FF$-vector space, hence
$$\dim_{\FF} I=\rk\left[\begin{array}{c} \rho(f_1^*) \\ \vdots \\ \rho(f_t^*)\end{array}\right].$$
The permutation of $\G$ that exchanges $g_i$ and $g_i^{-1}$
for each $i$ induces a permutation of the columns of $\rho(f^*)$ for all $f\in A$,
which sends $\rho(f^*)$ to $\rho(f)$. Hence $$\dim_{\FF}I=\rk\left[\begin{array}{c} \rho(f_1^*) \\ \vdots \\ \rho(f_t^*)\end{array}\right]=\rk\left[\begin{array}{c} \rho(f_1) \\ \vdots \\ \rho(f_t)
\end{array}\right]=\rk\rho(I).$$
Similarly, for the right ideal $J=f_1A+\cdots+f_tA$ we consider the regular left
representation $\lambda$ of $\G$. The entries in the $i$-th row of $\lambda(f_k)$ are the
coefficients of $f_kg_i$ with respect to the $\FF$-basis
$g_1,\ldots,g_n$. Since the elements $f_kg_i$ generate $J$ as an $\FF$-vector space,
the rank of $\lambda(J)$ equals the dimension of $J$.
\end{proof}
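As a concrete illustration of Proposition~\ref{theorem1}, the following Python sketch (ours, not part of the paper; the ordering of the elements of $S_3$ and the cycle convention are our own choices and affect only the labelling, not the ranks) builds the rows $g_if$ used in the proof and compares the resulting ranks with the dimensions of the ideals $I_1$, $I_2$, $I_3$, $J_1$ of $\FF[S_3]$ computed in Section~\ref{exs}.

```python
from fractions import Fraction
from itertools import permutations

def rank(rows):
    """Exact rank over Q by Gaussian elimination."""
    rows = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c]:
                t = rows[i][c] / rows[r][c]
                rows[i] = [a - t * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

# S_3 as permutation tuples; the product mul(p, q) applies q first, then p.
# With this ordering, G = [id, (23), (12), (123), (132), (13)].
G = sorted(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}
mul = lambda p, q: tuple(p[q[i]] for i in range(3))

def rows_of_Af(f):
    """Rows are the coefficient vectors of g_i*f, as in the proof above;
    their rank equals dim_F(Af)."""
    rows = []
    for g in G:
        row = [0] * len(G)
        for k, c in enumerate(f):
            row[idx[mul(g, G[k])]] += c
        rows.append(row)
    return rows

# Generators of the ideals of Q[S_3] from the example in Section 3,
# as coefficient vectors in the order of G above:
I1 = [1, 1, 1, 1, 1, 1]        # 1+(12)+(13)+(23)+(123)+(132)
I2 = [1, -1, -1, 1, 1, -1]     # 1-(12)-(13)-(23)+(123)+(132)
I3 = [2, 0, 0, -1, -1, 0]      # 2*1-(123)-(132)
J1 = [1, -1, 1, -1, 0, 0]      # 1+(12)-(23)-(123)
dims = [rank(rows_of_Af(f)) for f in (I1, I2, I3, J1)]
print(dims)
```

Over $\mathbb{Q}$ the computed ranks are $1,1,4,2$, matching the dimensions stated in the $S_3$ example.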
The computation of characteristic polynomials is straightforward, and can be used to compute
or bound the rank of a matrix. In this context, Proposition~\ref{theorem1} has interesting consequences.
We start by establishing a preliminary result. This result is essentially contained in Theorem~24.2 of~\cite{curtis}, but we present it here in the form that we will need.
\begin{lemma}\label{idemp}
Let $A$ be a semisimple ring. Let $I\subset A$ be a proper left (resp., right) ideal.
Then $I=Ae$ (resp., $I=eA$), where $e\in A$ is an idempotent and $a=ae$
(resp. $a=ea$) for all $a\in I$.
Further, any idempotent generator of $I$ is of the form $e=1-\epsilon$,
where $\epsilon$ is an idempotent and $e\epsilon=\epsilon e=0$.
If in addition $I$ is a two-sided ideal, then $I=(e)$ for some $e\in A$ idempotent. Moreover
$A=I\oplus J$ where $J=0:_A I$ is the annihilator of $I$, and $J=(\epsilon)$.
\end{lemma}
\begin{proof}
We give the proof for the case of left ideals. The proof for right ideals is analogous.
Since $A$ is semisimple, $A=I\oplus J$, where $J\subset A$ is a left ideal.
Write $1=e+\epsilon$, where $e\in I$ and $\epsilon\in J$.
Multiplying the identity on the left by $e$, and using the fact that $J$ is a left ideal,
we obtain that $$e\epsilon=e-e^2\in I\cap J=0.$$ Therefore $e=e^2\in I$ is an idempotent,
and the same holds for $\epsilon\in J$.
Clearly $Ae\subseteq I$. In order to show the reverse inclusion, observe that for any $a\in I$
we have $a\epsilon=a-ae\in I\cap J=0$, therefore $a=ae\in Ae.$
It follows that $I=Ae$ and $a=ae$ for all $a\in I$. This also shows that $J=A\epsilon$ and
$a\epsilon=a$ for all $a\in J$.
If in addition $I$ is a two-sided ideal, then $A=I\oplus J$ where $J$ is also a two-sided ideal.
Hence $I=(e)$
and $J=(\epsilon)$ with $e\epsilon=\epsilon e=0$. Therefore $IJ=JI=0$ and
$$J\subseteq 0:_A I=\{a\in A\mid aI=Ia=0\}.$$ Conversely, let $a\in 0:_A I$.
Then $a=ae+a\epsilon=a\epsilon\in J$. Therefore $J=0:_A I$, as claimed.
\end{proof}
\begin{remarks}\label{idemp_gen}
\begin{enumerate}
\item In general, an ideal of a semisimple ring may be presented with a generator
which is not idempotent.
However, the lemma implies that it also admits an idempotent generator.
E.g., let $A=\FF_5[S_3]$ and let $f=1+(12)\in A$. Then $f^2=2f\neq f$,
however $Af=A(3f)$ and $e=3f$ is idempotent.
\item An idempotent matrix $M\in\M_n(\FF)$ has minimal polynomial $z,z-1$ or $z^2-z$.
Hence its characteristic polynomial has the form $z^k(z-1)^{n-k}$ for some $0\leq k\leq n$.
In the previous example, following the notation of Proposition~\ref{theorem1} we have
$$\rho(f)=\left[\begin{array}{cccccc}
1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1
\end{array}\right].$$ The matrix $\rho(f)$ is not idempotent,
and has characteristic polynomial $z^3(z-2)^3$.
Notice that $2\in\FF_5^*$ has order $4$, and $\rho(f)^4=3\rho(f)$ has characteristic
polynomial $z^3(z-1)^3$.
\end{enumerate}
\end{remarks}
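These claims can be checked mechanically. The following Python sketch (ours, for illustration) verifies over $\FF_5$ that $\rho(f)^2=2\rho(f)$, that $3\rho(f)$ is idempotent, and that $\rk\rho(f)=3$, using the matrix displayed in the remark.

```python
p = 5  # we work over F_5, as in the remark

# rho(f) for f = 1+(12) in F_5[S_3], as displayed above
M = [[1, 1, 0, 0, 0, 0],
     [1, 1, 0, 0, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 1, 1, 0, 0],
     [0, 0, 0, 0, 1, 1],
     [0, 0, 0, 0, 1, 1]]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*B)] for row in A]

def scal(c, A):
    return [[(c * a) % p for a in row] for row in A]

def rank_mod_p(A):
    A = [row[:] for row in A]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)  # modular inverse of the pivot
        for i in range(len(A)):
            if i != r and A[i][c]:
                t = (A[i][c] * inv) % p
                A[i] = [(u - t * v) % p for u, v in zip(A[i], A[r])]
        r += 1
    return r

E = scal(3, M)                       # rho(3f) = 3 rho(f)
print(matmul(M, M) == scal(2, M))    # f^2 = 2f
print(matmul(E, E) == E)             # 3f is idempotent
print(rank_mod_p(M))                 # dim_F Af = 3
```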
Remark~\ref{idemp_gen}.1 raises the question of how to compute an idempotent generator of a
given ideal $I$. For example, let $I=Af$ be a left ideal with a given generator $f\in A$.
If $e$ is an idempotent generator of $I$, then $f=fe$, so $e-1$ lies in the right annihilator $\{a\in A\mid fa=0\}$ of $f$; however, this annihilator is not always easy to compute directly.
However, the problem has a simple solution if we consider a representation of $A$, since it reduces
to a simple linear algebra problem. In the next proposition we discuss the case of left ideals.
Right ideals can be treated similarly.
\begin{proposition}\label{id_gen}
Let $D$ be a representation of $\G=\{g_1,\ldots,g_n\}$ such that $D(g_1),\ldots, D(g_n)$ are
linearly independent.
Then $D$ induces an injective representation $D$ of $A=\FF[\G]$, and the right annihilators
of $f\in A$ correspond via $D$ to the solutions over $\FF$ of the linear system
$$D(f)\left(\sum_{i=1}^n x_i D(g_i)\right)=0$$
in the variables $x_1,\ldots,x_n$. In other words, solutions $(a_1,\ldots,a_n)\in\FF^n$
of the system correspond to right annihilators $a=a_1g_1+\ldots+a_ng_n$ of $f$.
\end{proposition}
\begin{proof}
A representation $D:\G\rightarrow GL_m(\FF)$ extends linearly to a representation $D:A\rightarrow \M_m(\FF)$, which is injective since $D(g_1),\ldots, D(g_n)$ are linearly independent. Hence $D$ is an isomorphism between $A$ and its image; in particular, $fa=0$ if and only if $D(fa)=D(f)D(a)=0$. Writing $a=a_1g_1+\ldots+a_ng_n$, we have $D(a)=a_1D(g_1)+\ldots+a_nD(g_n)$, and $D(f)D(a)=0$ if and only if $(a_1,\ldots,a_n)\in\FF^n$ is a solution of the linear system $$D(f)\left(\sum_{i=1}^n x_i D(g_i)\right)=0.$$
\end{proof}
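As an illustration (ours, not from the paper), the next sketch applies Proposition~\ref{id_gen} with $D$ the right regular representation, which is injective, to the element $f=1+(12)\in\FF_5[S_3]$ of Remark~\ref{idemp_gen}: the solution space of the linear system is the right annihilator of $f$, and $e-1=3f-1$ is one of its elements.

```python
from itertools import permutations

p = 5
G = sorted(permutations(range(3)))   # S_3; G[0] is the identity, G[2] is (12)
n = len(G)
idx = {g: i for i, g in enumerate(G)}
mul = lambda a, b: tuple(a[b[i]] for i in range(3))

def D(g):
    # right regular representation: the basis vector at g_i is sent to g_i*g
    M = [[0] * n for _ in range(n)]
    for i, h in enumerate(G):
        M[i][idx[mul(h, g)]] = 1
    return M

def lin(coeffs):
    # extend D linearly to F_5[S_3]
    mats = [D(g) for g in G]
    return [[sum(c * m[i][j] for c, m in zip(coeffs, mats)) % p
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*B)] for row in A]

def rank(A):
    A = [row[:] for row in A]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)
        for i in range(len(A)):
            if i != r and A[i][c]:
                t = (A[i][c] * inv) % p
                A[i] = [(x - t * y) % p for x, y in zip(A[i], A[r])]
        r += 1
    return r

f = [1, 0, 1, 0, 0, 0]               # f = 1 + (12)
Df = lin(f)

# The system D(f)(sum_k x_k D(g_k)) = 0: column k is vec(D(f) D(g_k)).
cols = [[entry for row in matmul(Df, D(g)) for entry in row] for g in G]
system = [list(col) for col in zip(*cols)]   # 36 equations, 6 unknowns
nullity = n - rank(system)
print(nullity)                       # dimension of the right annihilator of f

e_minus_1 = [(3 * c) % p for c in f]
e_minus_1[0] = (e_minus_1[0] - 1) % p        # e - 1 = 3f - 1
check = matmul(Df, lin(e_minus_1))
print(all(v == 0 for v in sum(check, [])))   # f*(3f-1) = 0, as expected
```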
We can now state the first consequence of Proposition~\ref{theorem1}. Notice that if $p=\ch\FF$ does not divide $n=|\G|$, then $\FF[\G]$ is semisimple by Maschke's Theorem (see \cite{maschke}). In particular, by Lemma~\ref{idemp} every left/right ideal of $A$ has an idempotent generator, which one can compute using Proposition~\ref{id_gen}. In that case, the next result allows us to reduce the computation of the dimension of a left/right ideal to the computation of a characteristic polynomial.
\begin{theorem}\label{corol}
Let $\G=\{g_1,\ldots,g_n\}$ be a group, $\FF$ be a field, and let $A=\FF[\G]$.
Let $f=\alpha_1 g_1+\ldots+\alpha_n g_n\in A$ and let
$I=Af$ be a proper left ideal (resp., $I=fA$ be a proper right ideal) of $A$. Let $F=\rho(f)$ be
the matrix associated to $f$ in the regular right representation $\rho$ of $\G$
(resp., let $F=\lambda(f)$ be the matrix associated to $f$ in the regular left representation
$\lambda$ of $\G$). Let $z^k g(z)$ be the characteristic polynomial of $F$, where $z\nmid g(z)$.
Then $$n-k\leq\dim_{\FF} I\leq n-1.$$
If in addition $f$ is idempotent, then $\dim_{\FF} I=n-k$.
\end{theorem}
\begin{proof}
It follows from Proposition~\ref{theorem1} that $$\dim_{\FF} I=\rk(F)\geq n-k.$$ Since $I$ is proper,
$\dim_{\FF} I\leq n-1$.
If in addition $f$ is idempotent, then $F$ is an idempotent matrix, hence
$$\ker F=\ker F^m\;\; \mbox{for any $m\geq 1$.}$$
Therefore, the algebraic and geometric multiplicities of $0$ as an eigenvalue of $F$ coincide, and
$$\dim_{\FF} I=\rk(F)=n-k,$$ where the first equality again follows from Proposition~\ref{theorem1}.
\end{proof}
In the case that $p=\ch\FF$ divides $n$, there may be ideals of $\FF[\G]$ which have no idempotent generator.
If $f$ is not idempotent, it is possible that $\rk(F)>n-k$.
Next we give some examples where $A$ is commutative and $I=(f)$ satisfies $\dim_{\FF} I=\rk(F)>n-k$.
In particular, we provide examples where $n-k=0$ and $\dim_{\FF} I=\ell$ for any $\ell$
which divides $n/p$.
\begin{example}
We follow the notation of Theorem~\ref{corol}. Let $\G$ be a cyclic group of order $n=\ell m$
and let $p$ be a prime, $p\mid m$.
Let $H$ be the subgroup of $\G$ of order $m$, and let $$f=\sum_{h\in H} h\in\FF[\G]$$ where $\FF$ is a field.
For a suitable reordering of the elements of $\G$, the matrix $F\in\M_n(\FF)$ associated to $f$ in a regular representation of $\G$
is the block matrix $$F=\left.\left[\begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0 \\
0 & 1 & 0 & \cdots & 0 \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & 1 & 0 \\
0 & 0 & \cdots & 0 & 1
\end{array}\right]\right\}\mbox{$\ell$ blocks}$$
where $0$ and $1$ denote the matrix of size $m\times m$ filled with $0$ and $1$ respectively.
If $I=(f)\subset A$, then $$\dim_{\FF} I=\rk(F)=\ell$$ by Proposition~\ref{theorem1}.
It is easy to check that the characteristic polynomial of $F$ is $z^{n-\ell}(z-m)^{\ell}$.
Hence $k=n-\ell$ if $\ch\FF\nmid m$, and $k=n$ if $\ch\FF\mid m$.
In particular, if $\ch\FF=p$, then $$\dim_{\FF} I=\ell>n-k=0.$$
\end{example}
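The example is easy to reproduce in the smallest case $n=6$, $\ell=3$, $m=2$, $p=2=\ch\FF$ (a sketch of ours): here $f=1+g^{3}$, $\rho(f)$ is a circulant matrix, and $f^{2}=0$ in characteristic $2$, so the characteristic polynomial is $z^{6}$ and $n-k=0$, while $\dim_{\FF}I=\ell=3$.

```python
p = 2
n = 6                       # n = l*m with l = 3, m = 2, and char F = p = 2

# f = 1 + g^3 in F_2[Z_6], where H = {0, 3} is the subgroup of order m = 2
x = [1, 0, 0, 1, 0, 0]
# circulant matrix of f in the regular representation:
# entry (i, j) is the coefficient of g^(j-i) in f
F = [[x[(j - i) % n] for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) % p
             for col in zip(*B)] for row in A]

def rank(A):
    A = [row[:] for row in A]
    r = 0
    for c in range(n):
        piv = next((i for i in range(r, n) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(n):
            if i != r and A[i][c]:
                A[i] = [(u - v) % p for u, v in zip(A[i], A[r])]
        r += 1
    return r

dim_I = rank(F)
F2 = matmul(F, F)
print(dim_I)                                   # = l = 3
print(all(v == 0 for row in F2 for v in row))  # F^2 = 0, so the char poly is z^6
```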
Combining Proposition~\ref{theorem1} with a result by Mulmuley \cite{mu87}, we can reduce the computation of the dimension of a principal ideal to the computation of a characteristic polynomial, even in the case that the generator of the ideal is not idempotent.
\begin{corollary}\label{charpoly}
Let $f\in A=\FF[\G]$ and let $I=Af$ (resp., $I=fA$).
Let $F$ be the matrix associated to $f$ in the regular right representation
(resp., in the regular left representation) of $\G$. Let
$$M=F\;\;\; \mbox{if $\G$
is commutative,}$$ and let
$$M=\left[\begin{array}{cc} 0 & F \\
F^t & 0
\end{array}\right] \;\;\; \mbox{if $\G$ is not commutative.}$$
Let $x$ be a transcendental element over $\FF$ and let $X$ be the
diagonal matrix with diagonal entries $1,x,\ldots,x^{m-1}$, where $m$ is
the size of the matrix $M$. Let $z^k g(z,x)$ be the
characteristic polynomial of the matrix $XM$, where $z\nmid g(z,x)$. Then
$$\dim_{\FF} I=\left\{\begin{array}{ll}
\deg_z g(z,x) & \mbox{if $\G$ is commutative,} \\
\deg_z g(z,x)/2 & \mbox{if $\G$ is not commutative.}
\end{array}\right.$$
\end{corollary}
\begin{remark}
If in Corollary~\ref{charpoly} we let $x$ be a random element in $\FF$
(or a suitable algebraic extension field),
we obtain a faster randomized algorithm to compute the dimension of $I$.
Such an approach works well in practice, and we can increase the probability
of obtaining the correct result by
repeating the computation for different random values of $x$.
\end{remark}
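The following Python sketch (ours; it uses the fixed value $x=3$ in place of a random one, and an exact Faddeev--LeVerrier computation of the characteristic polynomial) illustrates Corollary~\ref{charpoly} for the commutative group $\K_4$ and $f=e+\alpha$ over $\mathbb{Q}$, whose matrix appears in the first example of Section~\ref{exs}.

```python
from fractions import Fraction

def charpoly(A):
    """Faddeev-LeVerrier: returns [1, c1, ..., cn] with
    det(zI - A) = z^n + c1 z^(n-1) + ... + cn, exactly over Q."""
    n = len(A)
    A = [[Fraction(v) for v in row] for row in A]
    N = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        AN = [[sum(A[i][q] * N[q][j] for q in range(n))
               for j in range(n)] for i in range(n)]
        c = -sum(AN[i][i] for i in range(n)) / k
        coeffs.append(c)
        N = [[AN[i][j] + (c if i == j else 0) for j in range(n)]
             for i in range(n)]
    return coeffs

# rho(f) for f = e + alpha in Q[K_4] (a = b = 1, c = d = 0); G is commutative,
# so M = F in the corollary.
M = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1],
     [0, 0, 1, 1]]
x = 3                      # a fixed substitute for the random value of x
X = [[x**i if i == j else 0 for j in range(4)] for i in range(4)]
XM = [[sum(X[i][q] * M[q][j] for q in range(4)) for j in range(4)]
      for i in range(4)]

cp = charpoly(XM)          # z^4 - 40 z^3 + 144 z^2, i.e. z^2 g(z, 3)
k = 0
while cp[-1 - k] == 0:     # multiplicity of the eigenvalue 0
    k += 1
dim_I = 4 - k              # deg_z g(z, x), as in the corollary
print(cp, dim_I)
```

For this $f$ the rank of $M$ is $2$, and indeed the procedure returns $\dim_{\FF}I=2$.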
\section{Application to coding theory and examples}\label{exs}
A group code is an $\FF_q$-linear code which is the image of an ideal $I\subseteq\FF_q[\G]$
under the isomorphism $\phi:\FF_q[\G]\rightarrow\FF_q^n$ which sends $g_i\in\G$
to the $i$-th element of the canonical basis of $\FF_q^n$.
The length of $\phi(I)$ is equal to $n$, the cardinality of $\G$, and the dimension of the code is
the dimension of $I$ over $\FF_q$. The method that we propose therefore allows us to efficiently
compute the dimension of group codes.
See \cite{ADR1,markov} for an overview of the properties of group codes,
and \cite{elia} for efficient encoding and the corresponding syndrome decoding.
Our method will be illustrated by two simple examples.
We observe that the right regular representation of a group $\G=\{g_1,\ldots,g_n\}$
can be easily obtained from a modified Cayley table:
the element in position $(i,j)$ of the table is $g_i^{-1}g_j$. Letting $g_1$ be the neutral element,
the entries on the diagonal are all equal to $g_1$.
For a given $f=\sum_{i=1}^n x_i g_i \in A=\mathbb F[\mathcal G]$, the matrix $\rho(f)$ associated to $f$ in
the right regular representation $\rho$ of $A$ is obtained by substituting $g_i$ by $x_i$
in the Cayley table for $i=1,\ldots,n$.
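For instance, for the Klein group $\K_4$ used in the first example below, the modified Cayley table can be generated as follows (a sketch of ours; the element order $e,\alpha,\beta,\alpha\beta$ matches the matrix displayed there).

```python
# K_4 = Z_2 x Z_2; every element is its own inverse, so g_i^{-1} g_j = g_i + g_j.
G = [(0, 0), (0, 1), (1, 0), (1, 1)]        # e, alpha, beta, alpha*beta
sym = {(0, 0): 'a', (0, 1): 'b', (1, 0): 'c', (1, 1): 'd'}
add = lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2)

# modified Cayley table = rho(f) with each g_k replaced by its coefficient symbol
table = [[sym[add(u, v)] for v in G] for u in G]
for row in table:
    print(row)
```

The output reproduces the $4\times 4$ matrix $\rho(f)$ displayed in the example below.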
\begin{example}
Consider the Klein group
$\K_4=\mathbb{Z}_2 \times \mathbb{Z}_2=\{e,\alpha, \beta, \alpha\beta \}$,
where $e$ is the neutral element, $\alpha^2=\beta^2=e$, and $\alpha \beta =\beta \alpha$.
Let $a,b,c,d\in\FF$; then the matrix associated to
$f=a e + b \alpha+ c \beta + d \alpha \beta\in\FF[\K_4]$ in the right regular representation $\rho$ of $\FF[\K_4]$ is
$$\rho(f)= \left[ \begin{array}{cccc}
a & b & c & d \\
b & a & d & c \\
c & d & a & b \\
d & c & b & a \\
\end{array} \right].
$$
If $\ch(\FF)\neq 2$, then $\rho(f)$ is diagonalizable:
$$\rho(f)\sim \left[ \begin{array}{cccc}
a+b+c+d & 0 & 0 & 0 \\
0 & a-b-c+d & 0 & 0 \\
0 & 0 & a+b-c-d & 0 \\
0 & 0 & 0 & a-b+c-d \\
\end{array} \right].
$$
In particular, the characteristic polynomial of $\rho(f)$ is $$\phi_f(z)=(z-a-b-c-d)(z-a+b+c-d)(z-a-b+c+d)(z-a+b-c+d)=z^k g(z)$$
with $g(0)\neq 0$, and $$4-k=\deg g(z)=\rk\rho(f)=\dim_{\FF}(f).$$ Notice that this equality holds for every $f\in\FF[\K_4]$, not only for idempotent $f$, since $\rho(f)$ is diagonalizable.
In this case $\FF[\K_4]$ is semisimple, and the diagonal form of $\rho(f)$ corresponds to the decomposition of $\FF[\K_4]$
as a direct sum of ideals
$$\FF[\K_4]=(e+\alpha+\beta+\alpha\beta)\oplus (e-\alpha-\beta+\alpha\beta)\oplus (e+\alpha-\beta-\alpha\beta)\oplus (e-\alpha+\beta-\alpha\beta),$$
where each of the ideals appearing as direct summands is minimal and generated by a scalar multiple of an idempotent.
If $\ch(\FF)=2$, then $\rho(f)$ can be brought into the form
$$ \rho(f)\sim\left[ \begin{array}{cccc}
a+b+c+d & 0 & 0 & 0 \\
b & a+b+c+d & 0 & 0 \\
c & c+d & a+d & b+c \\
d & c+d & b+c & a+d \\
\end{array} \right].$$
The characteristic polynomial of $\rho(f)$ is $$\phi_f(z)=(z+a+b+c+d)^4.$$
If $f$ is not invertible, then $a+b+c+d=0$; in particular, $\phi_f(z)=z^4$.
Hence $\rk \rho(f)\in\{0,1,2\}$. In particular, $\rk\rho(f)=0$ if and only if $f=0$.
Moreover $\rk\rho(f)=1$ if and only if $a=b=c=d\neq 0$; in this case we have
$(f)=(e+\alpha+\beta+\alpha\beta)$ and $\dim_{\FF}(f)=1$. Finally, if $f\neq 0$ and
$a,b,c,d$ are not all equal, then $\rk\rho(f)=2$; over $\FF_2$, in this case $(f)$ is one of the ideals
$(e+\alpha)$, $(e+\beta)$, $(e+\alpha\beta)$, and $\dim_{\FF}(f)=2$.
Notice that if $\ch(\FF)=2$, then $\FF[\K_4]$ is not semisimple, and every principal ideal of dimension $2$ equals its annihilator.
\end{example}
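The characteristic-$2$ ranks claimed above are easy to confirm (a sketch of ours).

```python
def rho(a, b, c, d):
    # rho(f) for f = a*e + b*alpha + c*beta + d*alpha*beta, as displayed above
    return [[a, b, c, d], [b, a, d, c], [c, d, a, b], [d, c, b, a]]

def rank_mod2(A):
    A = [[v % 2 for v in row] for row in A]
    r = 0
    for c in range(4):
        piv = next((i for i in range(r, 4) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(4):
            if i != r and A[i][c]:
                A[i] = [(u + v) % 2 for u, v in zip(A[i], A[r])]
        r += 1
    return r

ranks = [rank_mod2(rho(1, 1, 1, 1)),   # f = e+alpha+beta+alpha*beta: dim 1
         rank_mod2(rho(1, 1, 0, 0)),   # f = e+alpha: dim 2
         rank_mod2(rho(1, 0, 1, 0))]   # f = e+beta: dim 2
print(ranks)
```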
\begin{example}
Consider the symmetric group
$S_3=\{ 1, (12), (13), (23), (123), (132)\}.$
Each element $g\in\FF[S_3]$ is of the form
$$g=a1+b(12)+c(13)+d(23)+e(123)+f(132), ~~~ \mbox{where}~~a,b,c,d,e,f\in \FF,$$
and is represented by the matrix
$$ \rho(g) = \left[ \begin{array}{cccccc}
a & b & c & d & e & f \\
b & a & e & f & c & d \\
c & f & a & e & d & b \\
d & e & f & a & b & c \\
f & c & d & b & a & e \\
e & d & b & c & f & a \\
\end{array} \right] ~~,
$$
where $\rho$ denotes the right regular representation of $\G$.
If $\ch(\FF)\neq 2,3$, then $\rho(g)$ can be brought into the following form:
\begin{equation}\label{diagg} \rho(g)\sim\left[ \begin{array}{cccccc}
a+b+c+d+e+f & 0 & 0 & 0 & 0 & 0 \\
0 & a-b-c-d+e+f & 0 & 0 & 0 & 0 \\
0 & 0 & a-f & e-f & -c+d & b-c \\
0 & 0 & -e+f & a-e & b-d & c-d \\
0 & 0 & -c+d & b-c & a-f & e-f \\
0 & 0 & b-d & c-d & -e+f & a-e \\
\end{array} \right].
\end{equation}
Therefore $A=\FF[S_3]= I_1\oplus I_2\oplus I_3= I_1\oplus I_2\oplus J_1\oplus J_2$,
where $I_1,I_2,I_3$ are minimal two-sided ideals
$$I_1= A(1+(12)+(13)+(23)+(123)+(132)),~~ I_2= A(1-(12)-(13)-(23)+(123)+(132)), $$
$$I_3=A(1-(123))+A((12)-(23))+A((13)-(23))+A((123)-(132))=A(2\cdot 1-(123)-(132)).$$
We have $\dim_{\FF} I_1=\dim_{\FF} I_2=1$ and $\dim_{\FF} I_3=4$.
Moreover $I_3$ can be decomposed as the direct sum of two left ideals:
$I_3= J_1\oplus J_2$ with
$$ J_1 =A(1+(12)-(23)-(123)),~~ J_2=A(1-(12)+(23)-(132))$$ and $\dim_{\FF} J_1=\dim_{\FF} J_2=2$.
Using (\ref{diagg}) it is easy to check that in each case the dimension of the ideal
is as predicted by Theorem~\ref{corol}.
It is also easy to check that $I_1,I_2,J_1,J_2$ are minimal left ideals, hence any
nonzero $I=Af\subseteq A$ is a sum of one or more of them.
In particular $\dim_{\FF} I$ is the sum of the dimensions of the corresponding ideals, again
as predicted by Theorem~\ref{corol}.
If $\ch(\FF)=2$, then
$$\rho(g)\sim\left[ \begin{array}{cccccc}
a+b+c+d+e+f & 0 & 0 & 0 & 0 & 0 \\
0 & a+e+f & a+b+c+d+e+f & 0 & 0 & 0 \\
0 & a+b+c+d+e+f & a+e & e+f & b+c & c+d \\
0 & 0 & e+f & a+f & b+d & b+c \\
0 & 0 & b+c & c+d & a+e & e+f \\
0 & 0 & b+d & b+c & e+f & a+f \\
\end{array} \right].$$
Denoting by $I_1,I_2,I_3,J_1,J_2$ the same ideals as before, we have $I_1=I_2$ and an easy
computation involving the above matrix yields
$$\dim_{\FF} I_1=1,~~\dim_{\FF} I_3=4,~~ \dim_{\FF} J_1=\dim_{\FF} J_2=2.$$
Notice that if $\ch(\FF)=2$, then $\FF[S_3]$ is no longer semisimple.
If $\ch(\FF)=3$, then $I_1,I_2\subset I_3$ where
$$I_3=A(1-(123))+A((12)-(23))+A((13)-(23))+A((123)-(132))=A(1+(123)+(132))$$ and
$$\dim_{\FF} I_1=\dim_{\FF} I_2=1,~~\dim_{\FF} I_3=2.$$ Again $\FF[S_3]$ is not semisimple.
\end{example}
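The dimensions in characteristics $2$ and $3$ can again be confirmed by rank computations (a sketch of ours; the chosen ordering of the elements of $S_3$ and the cycle convention affect only the labelling, not the ranks).

```python
from itertools import permutations

G = sorted(permutations(range(3)))   # [id, (23), (12), (123), (132), (13)]
idx = {g: i for i, g in enumerate(G)}
mul = lambda a, b: tuple(a[b[i]] for i in range(3))

def rows_of_Af(f, p):
    # rows are the coefficient vectors of g*f mod p; their rank is dim_F(Af)
    out = []
    for g in G:
        row = [0] * 6
        for k, c in enumerate(f):
            j = idx[mul(g, G[k])]
            row[j] = (row[j] + c) % p
        out.append(row)
    return out

def rank(A, p):
    A = [row[:] for row in A]
    r = 0
    for c in range(6):
        piv = next((i for i in range(r, 6) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        inv = pow(A[r][c], p - 2, p)
        for i in range(6):
            if i != r and A[i][c]:
                t = (A[i][c] * inv) % p
                A[i] = [(u - t * v) % p for u, v in zip(A[i], A[r])]
        r += 1
    return r

I1 = [1, 1, 1, 1, 1, 1]
I2 = [1, -1, -1, 1, 1, -1]
I3 = [2, 0, 0, -1, -1, 0]            # 2*1-(123)-(132)
J1 = [1, -1, 1, -1, 0, 0]            # 1+(12)-(23)-(123)
J2 = [1, 1, -1, 0, -1, 0]            # 1-(12)+(23)-(132)
T  = [1, 0, 0, 1, 1, 0]              # 1+(123)+(132), the char-3 generator of I_3

char2 = [rank(rows_of_Af(f, 2), 2) for f in (I1, I3, J1, J2)]
char3 = [rank(rows_of_Af(f, 3), 3) for f in (I1, I2, T)]
print(char2, char3)
```

The computed dimensions are $1,4,2,2$ in characteristic $2$ and $1,1,2$ in characteristic $3$, as stated above.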
\section*{Acknowledgments}
The authors are grateful to David Conti for several useful discussions during the preparation of this paper.
The second author was partially supported by the Swiss National Science Foundation under grant no. 123393.
| {
    "arxiv_id": "1403.7920",
    "url": "https://arxiv.org/abs/1403.7920",
    "title": "Computing the dimension of ideals in group algebras, with an application to coding theory",
    "subjects": "Information Theory (cs.IT); Rings and Algebras (math.RA)",
    "abstract": "The problem of computing the dimension of a left/right ideal in a group algebra F[G] of a finite group G over a field F is considered. The ideal dimension is related to the rank of a matrix originating from a regular left/right representation of G; in particular, when F[G] is semisimple, the dimension of a principal ideal is equal to the rank of the matrix representing a generator. From this observation, a bound and an efficient algorithm to compute the dimension of an ideal in a group ring are established. Since group codes are ideals in finite group rings, the algorithm allows efficient computation of their dimension."
} |
https://arxiv.org/abs/1608.01596 | Heat kernel estimates on connected sums of parabolic manifolds | We obtain matching two sided estimates of the heat kernel on a connected sum of parabolic manifolds, each of them satisfying the Li-Yau estimate. The key result is the on-diagonal upper bound of the heat kernel at a central point. Contrary to the nonparabolic case (which was settled in [15]), the on-diagonal behavior of the heat kernel in our case is determined by the end with the maximal volume growth function. As examples, we give explicit heat kernel bounds on the connected sums $\mathbb{R}^2\#\mathbb{R}^2$ and $\mathcal{R}^1\#\mathbb{R}^2$, where $\mathcal{R}^1 = \mathbb{R}_+\times \mathbb{S}^1$. |
\section{Introduction}
\label{Introduction}
Let $M$ be a Riemannian manifold. The heat kernel $p(t,x,y)$ on $M$ is the
minimal positive fundamental solution of the heat equation $\partial
_{t}u=\Delta u$ on $M$ where $u=u\left( t,x\right) $, $t>0$, $x\in M$ and $%
\Delta $ is the (negative definite) Laplace-Beltrami operator on $M$. For
example, in $\mathbb{R}^{n}$ the heat kernel is given by the classical
Gauss-Weierstrass formula%
\begin{equation*}
p(t,x,y)=\frac{1}{(4\pi t)^{n/2}}\exp \left( -\frac{|x-y|^{2}}{4t}\right) .
\end{equation*}%
The heat kernel is sensitive to the geometry of the underlying manifold $M$,
which results in numerous applications of this notion in differential
geometry. On the other hand, the heat kernel has a probabilistic meaning: $%
p(t,x,y)$ is the transition density of Brownian motion $(\{X_{t}\}_{t\geq0},%
\{\mathbb{P}_{x}\}_{x \in M})$ on $M$. Namely, for any Borel set $A\subset M$%
, we have%
\begin{equation*}
\mathbb{P}_{x}(X_{t}\in A)=\int_{A}p(t,x,y)dy,
\end{equation*}%
where $\mathbb{P}_{x}(X_{t}\in A)$ is the probability that a Brownian particle
starting at the point $x$ will be found in the set $A$ at time $t$.
From now on let us assume that the manifold $M$ is non-compact and
geodesically complete. Dependence of the long time behavior of the heat
kernel on the large scale geometry of $M$ is an interesting and important
problem that has been intensively studied during the past few decades by
many authors (see, for example, \cite{Davies CTM}, \cite{G AMS}, \cite{SC
LNS} and references therein). In the case when the Ricci curvature of $M$ is
non-negative, P.Li and S.-T.Yau proved in their pioneering work \cite{Li-Yau}
the following estimate, for all $x,y\in M$ and $t>0$:%
\begin{equation}
p(t,x,y)\asymp \frac{C}{V(x,\sqrt{t})}\exp \left( -b\frac{d^{2}(x,y)}{t}%
\right) , \tag{$LY$} \label{LY type}
\end{equation}%
where the sign $\asymp $ means that both $\leq $ and $\geq $ hold but with
different values of positive constants $C$ and $b$, $V(x,r)$ is the
Riemannian volume of the geodesic ball of radius $r$ centered at $x\in M$,
and $d\left( x,y\right) $ is the geodesic distance between the points $x,y$.
The estimate $($\ref{LY type}$)$ is satisfied also for the heat kernel of
uniformly elliptic operators in divergence form in $\mathbb{R}^{n}$ as was
proved by Aronson \cite{Aronson}. It was proved by Fabes and Stroock \cite%
{Fabes-Stroock}, that the estimate $($\ref{LY type}$)$ is equivalent to the
uniform parabolic Harnack inequality (see also \cite{SC LNS}). Grigor'yan
\cite{Grigoryan 1991} and Saloff-Coste \cite{SC 1992}, \cite{SC LNS} proved
that $($\ref{LY type}$)$ is equivalent to the conjunction of the Poincar\'{e}
inequality and the volume doubling property.
One of the simplest examples of a manifold where $($\ref{LY type}$)$ fails is
the hyperbolic space $\mathbb{H}^{n}.$ A more interesting counterexample was
constructed by Kuz'menko and Molchanov \cite{Kuz'menko-Molchanov}: they
showed that the connected sum $\mathbb{R}^{n}\#\mathbb{R}^{n}$ of two copies
of $\mathbb{R}^{n}$, $n\geq 3$, admits a non-trivial bounded harmonic
function, which implies that the Harnack inequality and, hence, $($\ref{LY
type}$)$ cannot be true. Benjamini, Chavel and Feldman \cite%
{Benjamini-Chavel-Feldman} explained this phenomenon by a bottleneck-effect:
if $x$ and $y$ belong to the different ends of the manifold $\mathbb{R}^{n}\#%
\mathbb{R}^{n}$ and $\left\vert x\right\vert \approx \left\vert y\right\vert
\approx \sqrt{t}\rightarrow \infty $ then $p\left( t,x,y\right) \ll t^{-n/2}$
where $t^{-n/2}$ is predicted by the right hand side of $($\ref{LY type}$)$.
This phenomenon is especially transparent from the probabilistic viewpoint:
a Brownian particle can go from $x$ to $y$ only through the central part,
which reduces drastically the transition density (see Fig. \ref{rn+rn}). A
similar phenomenon was observed by B.Davies \cite{Davies 97} on a model case
of one-dimensional line complex.
\begin{figure}[tbph]
\begin{center}
\includegraphics{rn+rn.pdf}
\end{center}
\caption{Brownian path goes from $x$ to $y$ via the bottleneck.}
\label{rn+rn}
\end{figure}
Based on these early works, the first and the third authors of the present
paper started a project on heat kernel bounds on connected sums of
manifolds, provided each of them satisfies the Li-Yau estimate $($\ref{LY
type}$)$. The results of this study are published in a series \cite{G-SC
letter}, \cite{G-SC Dirichlet}, \cite{G-SC hitting}, \cite{G-SC ends}, and
\cite{G-SC FK}. In particular, they obtained in \cite{G-SC ends} matching
upper and lower estimates of heat kernels on connected sums of manifolds
when at least one of them is \textit{non-parabolic}. Recall that a manifold $%
M$ is called \textit{parabolic} if Brownian motion on $M$ is recurrent, and
\textit{non-parabolic} otherwise. There are several equivalent definitions
of parabolicity in different terms (see, for example, \cite{G 1999}).
In this paper we complement the results of \cite{G-SC ends} by proving
two-sided estimates of heat kernels on connected sums of \textit{parabolic}
manifolds. The detailed statements are given in the next section. We
illustrate our results on the following two examples.
Consider first the manifold $M=\mathcal{R}^{1}\#\mathbb{R}^{2}$, where $%
\mathcal{R}^{1}=\mathbb{R}_{+}\times \mathbb{S}^{1}$ (see Fig. \ref{r1+r2}%
).
\begin{figure}[tbph]
\begin{center}
\includegraphics{jpic8.pdf}
\end{center}
\caption{Connected sum $\mathcal{R}^{1}\#\mathbb{R}^{2}$.}
\label{r1+r2}
\end{figure}
For $x\in M$, define $|x|:=d(x,K)+e$, where $K\subset M$ is the
central part of $M$. Then we obtain that for $x\in \mathcal{R}^{1}$, $y\in
\mathbb{R}^{2}$ and $t>1$
\begin{equation*}
p(t,x,y)\asymp \left\{
\begin{array}{ll}
\frac{1}{t}e^{-b\frac{d^{2}(x,y)}{t}} & \mbox{if }\left\vert y\right\vert >%
\sqrt{t}, \\
\frac{1}{t}\left( 1+\frac{|x|}{\sqrt{t}}\log \frac{e\sqrt{t}}{|y|}\right) & %
\mbox{if }\left\vert x\right\vert ,\left\vert y\right\vert \leq \sqrt{t}, \\
\frac{1}{t}\log \frac{e\sqrt{t}}{|y|} & \mbox{if }\left\vert x\right\vert >%
\sqrt{t}\geq \left\vert y\right\vert .%
\end{array}%
\right.
\end{equation*}%
In particular, if $|x|$, $|y|$ are bounded and $t\rightarrow \infty $, then
\begin{equation*}
p(t,x,y)\approx \frac{1}{t}.
\end{equation*}%
If $\left\vert x\right\vert \approx \sqrt{t}\rightarrow \infty $ and $%
\left\vert y\right\vert $ remains bounded, then
\begin{equation*}
p(t,x,y)\approx \frac{\log t}{t}.
\end{equation*}
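Both asymptotics can be read off directly from the three cases above. If $\left\vert x\right\vert ,\left\vert y\right\vert $ stay bounded, then the second case gives
\begin{equation*}
p(t,x,y)\asymp \frac{1}{t}\left( 1+\frac{|x|}{\sqrt{t}}\log \frac{e\sqrt{t}}{%
|y|}\right) =\frac{1}{t}\left( 1+O\left( \frac{\log t}{\sqrt{t}}\right)
\right) \approx \frac{1}{t},
\end{equation*}%
while if $\left\vert x\right\vert \approx \sqrt{t}$ and $\left\vert
y\right\vert $ remains bounded, then $\log \frac{e\sqrt{t}}{|y|}\approx \log
t$, and both the second and the third cases yield $p(t,x,y)\approx \frac{%
\log t}{t}$.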
Consider now the manifold $M=\mathbb{R}^{2}\#\mathbb{R}^{2}$, or,
equivalently, a catenoid (see Fig. \ref{catenoid}).
\begin{figure}[tbph]
\begin{center}
\includegraphics{catenoid.pdf}
\end{center}
\caption{Catenoid.}
\label{catenoid}
\end{figure}
Then we have the following estimate for all $x,y$ lying in different sheets
and for $t>1$:
\begin{equation*}
p(t,x,y)\asymp \left\{
\begin{array}{ll}
\frac{1}{t\log ^{2}t}\left( \log t+\log ^{2}\sqrt{t}-\log |x|\log |y|\right)
& \mbox{if }|x|,|y|\leq \sqrt{t}, \\
\frac{1}{t\log t}\log \frac{e\sqrt{t}}{|y|}e^{-b\frac{d^{2}(x,y)}{t}} & %
\mbox{if }|y|\leq \sqrt{t}<|x|, \\
\frac{1}{t\log t}\log \frac{e\sqrt{t}}{|x|}e^{-b\frac{d^{2}(x,y)}{t}} & %
\mbox{if }|x|\leq \sqrt{t}<|y|, \\
\frac{1}{t}\left( \frac{1}{\log |x|}+\frac{1}{\log |y|}\right) e^{-b\frac{%
d^{2}(x,y)}{t}} & \mbox{if }|x|,|y|>\sqrt{t}.%
\end{array}%
\right.
\end{equation*}
In particular, if $\vert x \vert$, $\vert y \vert$ are bounded and $t
\rightarrow \infty$, then
\begin{equation*}
p(t,x,y) \approx \frac{1}{t}.
\end{equation*}
If $|x|\approx |y|\approx \sqrt{t}\rightarrow \infty $ then
\begin{equation*}
p(t,x,y)\approx \frac{1}{t\log t}.
\end{equation*}%
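Indeed, if $|x|,|y|$ stay bounded, then the first case applies and, since the
term $\log ^{2}\sqrt{t}=\frac{1}{4}\log ^{2}t$ dominates,
\begin{equation*}
p(t,x,y)\asymp \frac{1}{t\log ^{2}t}\left( \log t+\tfrac{1}{4}\log
^{2}t+O(1)\right) \approx \frac{1}{t}.
\end{equation*}%
If $|x|\approx |y|\approx \sqrt{t}$, then $\log ^{2}\sqrt{t}-\log |x|\log
|y|=O(\log t)$, so the first case gives $p(t,x,y)\approx \frac{\log t}{%
t\log ^{2}t}=\frac{1}{t\log t}$.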
The heat kernel estimates on $\mathbb{R}^{2}\#\mathbb{R}^{2}$ were also
obtained in \cite{G-SC ends} by an ad hoc method. In the present paper these
estimates are part of our general Theorem \ref{T1}. We also give further
examples, in particular, the heat kernel estimates on $\mathcal{R}^{1}\#%
\mathcal{R}^{1}\#\mathbb{R}^{2}.$
In the next section we introduce necessary definitions and state our main
results. In Section \ref{SecGeneral} we prove some auxiliary results about
the integrated resolvent. In Section \ref{SecOndiag} we prove the main
technical result of this paper -- Theorem \ref{main theorem} about
on-diagonal upper bound of the heat kernel on the connected sum of parabolic
manifolds. Finally, in Section \ref{SecOff} we use Theorem \ref{main theorem}
and the gluing techniques from \cite{G-SC ends} to obtain full off-diagonal
estimates of the heat kernels; they are stated in Theorems \ref{T1}-\ref{T3}
and Corollaries \ref{power} and \ref{Corcases}.
\begin{notation}
Throughout this article, the letters $c,C,b,...$ denote positive constants
whose values may be different at different instances. When the value of a
constant is significant, it will be explicitly stated. The notation $%
f\approx g$ for two non-negative functions $f,g$ means that there are two
positive constants $c_{1},c_{2}$ such that $c_{1}g\leq f\leq c_{2}g$ for
the specified range of the arguments of $\,f$ and $g$.
\end{notation}
\section{Statement of main results and examples}
\label{section 2}
\setcounter{equation}{0}%
The main result will be stated in the more general
setting of weighted manifolds, which is explained below.
\subsection{Weighted manifolds}
Let $M$ be a connected Riemannian manifold of dimension $N$. The Riemannian
metric of $M$ induces the geodesic distance $d(x,y)$ between points $x,y\in
M $ and the Riemannian measure $d\mathrm{vol}.$ Given a smooth positive
function $\sigma $ on $M$, let $\mu $ be the measure on $M$ given by $d\mu
(x)=\sigma (x)d\mathrm{vol}(x)$. The pair $(M,\mu )$ is called a \textit{%
weighted manifold}. Any Riemannian manifold can be considered also as a
weighted manifold with $\sigma \equiv 1$.
The Laplace operator $\Delta $ of the weighted manifold $\left( M,\mu
\right) $ is defined by%
\begin{equation*}
\Delta =\frac{1}{\sigma }\func{div}\left( \sigma \nabla \right) ,
\end{equation*}%
where $\func{div}$ and $\nabla $ are the divergence and the gradient of the
Riemannian metric of $M$. It is easy to see that $\Delta $ is the generator
of the following Dirichlet form%
\begin{equation*}
D\left( f,f\right) =\int_{M}\left\vert \nabla f\right\vert ^{2}d\mu
\end{equation*}%
in $W^{1,2}\left( M,\mu \right) $. The associated heat semigroup $e^{t\Delta }$ always has a smooth positive kernel $p\left( t,x,y\right) $, which is
called the heat kernel of $\left( M,\mu \right) $. At the same time, $%
p\left( t,x,y\right) $ is the minimal positive fundamental solution of the
corresponding heat equation $\partial _{t}u=\Delta u$ on $M\times \mathbb{R}%
_{+}$ (see \cite{G AMS}). The heat kernel is also the transition probability
density of Brownian motion $\left( \left\{ X_{t}\right\} ,\left\{ \mathbb{P}%
_{x}\right\} \right) $ on $M$ that is generated by $\Delta $.
A weighted manifold $\left( M,\mu \right) $ is called \textit{parabolic} if
any positive superharmonic function on $M$ is constant, and \textit{%
non-parabolic} otherwise. The parabolicity is equivalent to each of the
following properties, that can be regarded as equivalent definitions (see,
for example, \cite{G 1999}):
\begin{enumerate}
\item There exists no positive fundamental solution of $-\Delta .$
\item $\int^{\infty }p\left( t,x,y\right) dt=\infty $ for all/some $x,y\in M$%
.
\item Brownian motion on $M$ is recurrent.
\end{enumerate}
\subsection{Notion of connected sum}
\label{notion}
Let $(M,\mu )$ be a geodesically complete non-compact weighted manifold. Let
$K\subset M$ be a connected compact subset of $M$ with non-empty interior
and smooth boundary such that $M\setminus K$ has $k$ non-compact connected
components $E_{1},\ldots ,E_{k}$; moreover, assume also that the closures $%
\overline{E}_{i}$ are disjoint. We refer to each $E_{i}$ as an \textit{end}
of $M$. Clearly, $\partial K$ is a disjoint union of $\partial E_{i}$, $%
i=1,...,k$.
Assume also that $E_{i}$ is isometric to the exterior of a compact set $%
K_{i} $ in another weighted manifold $(M_{i},\mu _{i})$. Then we refer to $M$
as the connected sum of $M_{1},...,M_{k}$ and write
\begin{equation*}
M=M_{1}\#M_{2}\#\cdots \#M_{k}
\end{equation*}%
(see Fig. \ref{figure: connectedsum1}).
\begin{figure}[tbph]
\begin{center}
\scalebox{0.9}{
\includegraphics{connectedsum1.PDF}
}
\end{center}
\caption{Connected sum $M=M_{1}\#M_{2}\cdots \#M_{k}$.}
\label{figure: connectedsum1}
\end{figure}
Denote by $d_{i}$ the geodesic distance on $M_{i}$ and by $B_{i}\left(
x,r\right) $ the geodesic ball in $M_{i}$ of radius $r$ centered at $x\in
M_{i}$. Set also $V_{i}\left( x,r\right) =\mu _{i}\left( B_{i}\left(
x,r\right) \right) $. Fix a reference point $o_{i}\in K_{i}$ and set
\begin{equation*}
V_{i}(r)=V_{i}(o_{i},r).
\end{equation*}%
In this paper we always assume that every manifold $M_{i}$, $i=1,\ldots ,k$,
satisfies the following four conditions.
\begin{enumerate}
\item[$\left( a\right) $] The heat kernel $p_{i}\left( t,x,y\right) $ of $%
\left( M_{i},\mu _{i}\right) $ satisfies the Li-Yau estimate $($\ref{LY type}%
$)$, that is,
\begin{equation}
p_{i}\left( t,x,y\right) \asymp \frac{C}{V_{i}\left( x,\sqrt{t}\right) }\exp
\left( -b\frac{d_{i}^{2}\left( x,y\right) }{t}\right) . \label{LYi}
\end{equation}
\item[$\left( b\right) $] $M_{i}$ is parabolic; under the standing
assumption (\ref{LYi}), the parabolicity of $M_{i}$ is equivalent to
\begin{equation}
\int^{\infty }\frac{rdr}{V_{i}\left( r\right) }=\infty . \label{Vpar}
\end{equation}
\item[$\left( c\right) $] $M_{i}$ has \textit{relatively connected annuli},
that is, there exists a positive constant $A>1$ such that for any $r>A^{2}$
and all $x,y\in M_{i}$ with $d_{i}(o_{i},x)=d_{i}(o_{i},y)=r$, there exists
a continuous path from $x$ to $y$ staying in $B_{i}(o_{i},Ar)\setminus
B_{i}(o_{i},A^{-1}r)$. We denote this condition shortly by $\left( RCA\right) $.
\item[$\left( d\right) $] $M_{i}$ is either \textit{critical} or \textit{%
subcritical}; here $M_{i}$ is called critical if, for all large enough $r$,%
\begin{equation*}
V_{i}(r)\approx r^{2},
\end{equation*}%
and subcritical if, for all large enough $r$,
\begin{equation}
\int_{1}^{r}\frac{sds}{V_{i}(s)}\leq \frac{Cr^{2}}{V_{i}(r)}.
\label{subcritical}
\end{equation}
\end{enumerate}
For example, if $V_{i}(r)\approx r^{\alpha }\log ^{\beta }r$ for some $%
0<\alpha <2$ and $\beta \in \mathbb{R}$, then $M_{i}$ is subcritical. On the
other hand, in the case $V_{i}\left( r\right) \approx \frac{r^{2}}{\log
^{\beta }r}$ with $\beta >0$ the manifold $M_{i}$ is neither critical nor
subcritical, although still parabolic.
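Let us verify these two claims, assuming that the stated asymptotics of $V_{i}(r)$ hold for all large $r$ (the range of bounded $s$ contributes only $O(1)$ to the integrals below). In the first case,
\begin{equation*}
\int_{1}^{r}\frac{sds}{V_{i}(s)}\approx \int_{2}^{r}\frac{ds}{s^{\alpha
-1}\log ^{\beta }s}\approx \frac{r^{2-\alpha }}{\log ^{\beta }r}\approx
\frac{r^{2}}{V_{i}(r)},
\end{equation*}%
so that (\ref{subcritical}) is satisfied, whereas in the second case
\begin{equation*}
\int_{1}^{r}\frac{sds}{V_{i}(s)}\approx \int_{2}^{r}\frac{\log ^{\beta }s}{s}%
ds\approx \log ^{\beta +1}r\gg \log ^{\beta }r\approx \frac{r^{2}}{V_{i}(r)},
\end{equation*}%
so that (\ref{subcritical}) fails; on the other hand, $\int^{\infty }\frac{%
rdr}{V_{i}(r)}\approx \int^{\infty }\frac{\log ^{\beta }r}{r}dr=\infty $, so
that (\ref{Vpar}) is still satisfied.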
Let us describe a class of manifolds satisfying all the hypotheses $\left(
a\right) -\left( d\right) $. For any $0<\alpha \leq 2$ consider a Riemannian
\textit{model manifold} $\mathcal{R}^{\alpha }:=(\mathbb{R}^{2},g_{\alpha })$%
, where $g_{\alpha }$ is a Riemannian metric on $\mathbb{R}^{2}$ such that,
in the polar coordinates $\left( \rho ,\theta \right) $, it is given for $%
\rho >1$ by
\begin{equation*}
g_{\alpha }=d\rho ^{2}+\rho ^{2(\alpha -1)}d\theta ^{2}.
\end{equation*}%
For example, if $\alpha =2$ then $g_{2}$ can be taken to be the Euclidean
metric of $\mathbb{R}^{2}$ so that in this case $\mathcal{R}^{2}=\mathbb{R}%
^{2}$. If $\alpha =1$ then $g_{1}=d\rho ^{2}+d\theta ^{2}$ so that the
exterior domain $\left\{ \rho >1\right\} $ of $\mathcal{R}^{1}$ is isometric
to the cylinder $\mathbb{R}_{+}\times \mathbb{S}$ (see Fig. \ref{pic5}).
\begin{figure}[tbph]
\begin{center}
\includegraphics{pic5.pdf}
\end{center}
\caption{Model manifold $\mathcal{R}^{1}$.}
\label{pic5}
\end{figure}
For a general $0<\alpha <2$, the exterior domain $\left\{ \rho >1\right\} $
of $\mathcal{R}^{\alpha }$ is isometric to a certain surface of revolution
in $\mathbb{R}^{3}$.
Observe that the volume function $V(x,r)$ on $\mathcal{R}^{\alpha }$ admits
for $r>1$ the estimate%
\begin{equation}
V(x,r)\approx \left\{
\begin{array}{ll}
r^{\alpha }, & \left\vert x\right\vert <r \\
\min \left( r^{2},r\left\vert x\right\vert ^{\alpha -1}\right) , &
\left\vert x\right\vert \geq r%
\end{array}%
\right. \approx \frac{r^{2}}{1+\frac{r}{(\left\vert x\right\vert +r)^{\alpha
-1}}} \label{model}
\end{equation}%
(see \cite[Sec. 4.4]{G-SC stability}). In particular, if $x=o$, where $o$ is
the origin of $\mathbb{R}^{2}$, then
\begin{equation}
V\left( o,r\right) \approx r^{\alpha }. \label{Va}
\end{equation}%
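
This estimate can also be obtained directly from the form of the metric: for $\rho >1$ the Riemannian area element of $g_{\alpha }$ is $\rho ^{\alpha -1}d\rho \,d\theta $, and the geodesic ball $B\left( o,r\right) $ is comparable to the region $\left\{ \rho <r\right\} $, whence, for $r>2$,
\begin{equation*}
V\left( o,r\right) \approx \int_{0}^{2\pi }\int_{1}^{r}\rho ^{\alpha
-1}d\rho \,d\theta +O(1)=\frac{2\pi }{\alpha }\left( r^{\alpha }-1\right)
+O(1)\approx r^{\alpha }.
\end{equation*}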
By \cite[Prop. 4.10]{G-SC stability}, $\mathcal{R}^{\alpha }$ satisfies the
parabolic Harnack inequality and, hence, the Li-Yau estimate $($\ref{LY type}%
$)$. Obviously, $\mathcal{R}^{\alpha }$ satisfies (\ref{Vpar}) and, hence, $%
\mathcal{R}^{\alpha }$ is parabolic. It is easy to see that $\mathcal{R}%
^{\alpha }$ satisfies $\left( RCA\right) $. Note also that $\mathcal{R}%
^{\alpha }$ is critical if $\alpha =2$ and subcritical if $\alpha <2$.
Hence, $\mathcal{R}^{\alpha }$ satisfies all hypotheses $\left( a\right)
-\left( d\right) $.
One can construct a similar family of examples also in the class of weighted
manifolds. Indeed, for any $\alpha >0$ consider on $\mathbb{R}^{2}$ the
measure
\begin{equation*}
d\mu _{\alpha }=\left( 1+\left\vert x\right\vert ^{2}\right) ^{\frac{\alpha
}{2}-1}dx.
\end{equation*}%
It is easy to see that $\left( \mathbb{R}^{2},\mu _{\alpha }\right) $
satisfies (\ref{Va}). The Li-Yau estimate on $\left( \mathbb{R}^{2},\mu
_{\alpha }\right) $ holds by \cite[Prop. 4.9]{G-SC stability}. Hence, $%
\left( \mathbb{R}^{2},\mu _{\alpha }\right) $ satisfies all the hypotheses $%
\left( a\right) -\left( d\right) $ provided $0<\alpha \leq 2$.
Returning to the general setting, let us mention that the hypotheses $\left(
a\right) ,\left( b\right) ,\left( c\right) $ are essential for our main
result, whereas $\left( d\right) $ is technical. The method of proof will
probably work also without assuming $\left( d\right) $; but, even if that is
the case, the necessary computations would become much more technical and
complicated. Hence, we prefer to impose here the additional condition $\left(
d\right) $ in order to simplify the computational part of the proof, which
even under $\left( d\right) $ remains quite involved.
Observe also that the condition $\left( b\right) $ follows from $\left(
d\right) $. Indeed, in the critical case $V_{i}\left( r\right) \approx r^{2}$,
so that the integral in (\ref{Vpar}) diverges. In the subcritical case,
since the left hand side of (\ref{subcritical}) is bounded below by a
positive constant for $r\geq 2$, we obtain from (\ref{subcritical}) that
$V_{i}\left( r\right) \leq Cr^{2}$, which again implies the divergence of
the integral in (\ref{Vpar}). However, for the aforementioned reason, we
state $\left( b\right) $ independently of $\left( d\right) $.
In fact, in the subcritical case we have%
\begin{equation}
V_{i}\left( r\right) =o\left( r^{2}\right) \ \ \text{as }r\rightarrow \infty
, \label{or2}
\end{equation}%
as follows from (\ref{Vpar}) and (\ref{subcritical}). Moreover,
substituting (\ref{or2}) into the left hand side of (\ref{subcritical}), we
obtain that, in the subcritical case,
\begin{equation}
V_{i}\left( r\right) =o\left( \frac{r^{2}}{\log r}\right) \ \text{as }%
r\rightarrow \infty . \label{or2logr}
\end{equation}
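Let us include the short computation behind (\ref{or2}) and (\ref{or2logr}).
By (\ref{Vpar}), the left hand side of (\ref{subcritical}) tends to $\infty $
as $r\rightarrow \infty $; hence, so does the right hand side $\frac{Cr^{2}}{%
V_{i}\left( r\right) }$, which is exactly (\ref{or2}). Next, given $%
\varepsilon >0$, by (\ref{or2}) we have $V_{i}\left( s\right) \leq
\varepsilon s^{2}$ for all $s\geq r_{\varepsilon }$, whence, for $%
r>r_{\varepsilon }$,
\begin{equation*}
\frac{Cr^{2}}{V_{i}\left( r\right) }\geq \int_{1}^{r}\frac{sds}{V_{i}\left(
s\right) }\geq \int_{r_{\varepsilon }}^{r}\frac{ds}{\varepsilon s}=\frac{1}{%
\varepsilon }\log \frac{r}{r_{\varepsilon }},
\end{equation*}%
and (\ref{or2logr}) follows by letting $r\rightarrow \infty $.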
\subsection{On-diagonal estimates}
Denote by $d\left( x,y\right) $ the geodesic distance between points $x,y\in
M$ and by $V\left( x,r\right) =\mu \left( B\left( x,r\right) \right) $ the
measure of the geodesic ball in $M$ of radius $r$ centered at $x\in M$. Fix
a reference point $o\in K$ and set $V(r)=V(o,r)$. Set also
\begin{equation*}
V_{\max }(r)=\max_{1\leq i\leq k}V_{i}(r).
\end{equation*}%
It is easy to see that, for all $r>0$,
\begin{equation*}
V(r)\approx V_{1}(r)+V_{2}(r)+\cdots +V_{k}(r)\approx V_{\max }(r).
\end{equation*}%
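
Indeed, $M$ is covered by $K$ and the ends $E_{i}$, and each $E_{i}$ is isometric to $M_{i}\setminus K_{i}$. Hence, with a constant $C$ depending on the diameters of $K$ and of the sets $K_{i}$, we have, for large $r$,
\begin{equation*}
V_{i}\left( r-C\right) -\mu _{i}\left( K_{i}\right) \leq V\left( r\right)
\leq \mu \left( K\right) +\sum_{i=1}^{k}V_{i}\left( r+C\right) ,
\end{equation*}%
which, together with the volume doubling property of each $V_{i}$ (a
consequence of (\ref{LYi})) and the trivial inequalities $V_{\max }\leq
\sum_{i}V_{i}\leq kV_{\max }$, yields the claim for large $r$; the case of
bounded $r$ follows from the local comparability of all these volume
functions.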
The first main result of this paper is as follows.
\begin{theorem}
\label{main theorem} Let $M=M_{1}\#\cdots \#M_{k}$ be a connected sum of
non-compact complete manifolds $M_{1},\ldots ,M_{k}$. Assume that each $%
M_{i} $ is parabolic and satisfies {$($\ref{LY type}$)$} and $\left(
RCA\right) $. We also assume that each $M_{i}$ is either critical or
subcritical. Then we have
\begin{equation}
p(t,o,o)\approx \frac{1}{V_{\max }(\sqrt{t})}\approx \frac{1}{V(\sqrt{t})},
\label{ptoo}
\end{equation}%
for all $t>0$.
\end{theorem}
Let us mention for comparison the following result of \cite{G-SC ends}: if
all manifolds $M_{i}$ are non-parabolic and satisfy $($\ref{LY type}$)$ and $%
\left( RCA\right) $, then the heat kernel on $M=$ $M_{1}\#\cdots \#M_{k}$
satisfies
\begin{equation}
p(t,o,o)\approx \frac{1}{V_{\min }(\sqrt{t})}, \label{non-parabolic}
\end{equation}%
where
\begin{equation*}
V_{\min }(r):=\min_{1\leq i\leq k}V_{i}(r).
\end{equation*}%
The proof of the upper bound in (\ref{non-parabolic}), that is, of the
inequality%
\begin{equation}
p\left( t,o,o\right) \leq \frac{C}{V_{\min }\left( \sqrt{t}\right) },
\label{ptmin}
\end{equation}%
goes as follows. By \cite[Prop. 5.2]{G Revista}, the upper bound in $($\ref%
{LY type}$)$ on $M_{i}$ is equivalent to a certain \textit{Faber-Krahn type}
inequality on $M_{i}$. Using a technique for merging such inequalities,
developed in \cite[Thm. 3.5]{G-SC FK}, one obtains a similar Faber-Krahn
inequality on $M$, which then implies the heat kernel upper bound (\ref%
{ptmin}) by \cite[Thm. 5.2]{G Revista} (see \cite[Thm. 4.5]{G-SC FK} and
\cite[Cor. 4.7]{G-SC ends} for the details). The reason for the appearance
of $V_{\min }$ in (\ref{ptmin}) is that the Faber-Krahn inequality on $M$
cannot be stronger than that of each end $M_{i}$ and, hence, is determined
by the end with the smallest volume function $V_{i}\left( r\right) $.
The proof of the lower bound in (\ref{non-parabolic}), that is, of the
inequality
\begin{equation}
p\left( t,o,o\right) \geq \frac{c}{V_{\min }\left( \sqrt{t}\right) }
\label{ptlow}
\end{equation}%
uses the comparison
\begin{equation*}
p(t,x,y)\geq p_{E_{i}}(t,x,y)
\end{equation*}%
on each end $E_{i}$, where $p_{E_{i}}(t,x,y)$ is the Dirichlet heat kernel
on $E_{i}$ vanishing on $\partial E_{i}$. By \cite[Thm 3.1]{G-SC Dirichlet},
non-parabolicity of $M_{i}$ and $($\ref{LY type}$)$ imply that, away from $%
\partial E_{i}$,%
\begin{equation}
p_{E_{i}}(t,x,y)\geq cp_{i}\left( Ct,x,y\right) . \label{pE}
\end{equation}%
It follows that, for any $i=1,...,k$,
\begin{equation*}
p\left( t,o,o\right) \geq \frac{c}{V_{i}\left( \sqrt{t}\right) },
\end{equation*}%
which is equivalent to (\ref{ptlow}).
In the present setting, when all the manifolds $M_{i}$ are parabolic, both
arguments described above still work but give non-optimal results. For
example, one obtains as above the upper bound (\ref{ptmin}), which in
general is weaker than the upper bound in (\ref{ptoo}). As far as the lower
bound is concerned, the estimate (\ref{pE}) fails in the parabolic case and
has to be replaced by a weaker one (cf. \cite[Thm 4.9]{G-SC Dirichlet}),
which does not yield an optimal lower bound for $p\left( t,o,o\right) $.
This explains why we have to develop an entirely new method for obtaining
optimal bounds for $p\left( t,o,o\right) $ in the case when all the
manifolds $M_{i}$ are parabolic.
The most significant part of the estimate (\ref{ptoo}) is the upper bound%
\begin{equation}
p\left( t,o,o\right) \leq \frac{C}{V_{\max }\left( \sqrt{t}\right) }.
\label{pt<}
\end{equation}%
The proof of (\ref{pt<}) is the main achievement of the present paper. We
use for that a new method involving the \textit{integrated resolvent}%
\begin{equation*}
\gamma _{\lambda }\left( x\right) =\int_{K}\int_{0}^{\infty }e^{-t\lambda
}p\left( t,x,y\right) dtd\mu \left( y\right)
\end{equation*}%
defined for $\lambda >0.$ The parabolicity of $M$ implies that $\gamma
_{\lambda }\left( x\right) \rightarrow \infty $ as $\lambda \rightarrow 0,$
and the rate of increase of $\gamma _{\lambda }\left( x\right) $ as $\lambda
\rightarrow 0$ is related to the rate of decay of $p\left( t,o,o\right) $ as
$t\rightarrow \infty .$ In fact, the integrated resolvent $\gamma _{\lambda
} $ on the connected sum $M$ satisfies a certain integral equation involving
as coefficients the Laplace transforms of the exit probabilities at each
end. This allows us to estimate the rate of growth of $\gamma _{\lambda }$ as $%
\lambda \rightarrow 0$ and then to recover the upper bound (\ref{pt<}) in
the subcritical case. In the critical case one has to use instead $\partial
_{\lambda }\gamma _{\lambda }$.
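Let us indicate, heuristically, how $\gamma _{\lambda }$ is related to the
on-diagonal heat kernel. The inner integral $\int_{K}p\left( t,x,y\right)
d\mu \left( y\right) $ is bounded by $1$, and for $x=o$ and large $t$ it is
comparable to $\mu \left( K\right) p\left( t,o,o\right) $ by a local Harnack
inequality, whence one expects, as $\lambda \rightarrow 0$,
\begin{equation*}
\gamma _{\lambda }\left( o\right) \approx 1+\int_{1}^{1/\lambda }p\left(
t,o,o\right) dt.
\end{equation*}%
In particular, the estimate (\ref{ptoo}) corresponds to $\gamma _{\lambda
}\left( o\right) \approx 1+\int_{1}^{1/\lambda }\frac{dt}{V(\sqrt{t})}$,
which tends to $\infty $ in accordance with the parabolicity of $M$.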
Since $V_{\max }\left( r\right) \approx V\left( o,r\right) $ and $V\left(
o,r\right) $ satisfies the volume doubling property, the upper bound (\ref%
{pt<}) implies automatically a matching lower bound of $p\left( t,o,o\right)
$ by \cite[Thm. 7.2]{Coulhon-Grigoryan} (see Section \ref{lower bound} for
the details).
\begin{remark}
\rm Kasahara and Kotani recently obtained in \cite[Example 6.1]{Kasahara}
the same on-diagonal heat kernel estimates for a connected sum of two Bessel
processes on the half line $[0,\infty )$ by using the Stieltjes transforms.
\end{remark}
\subsection{Off-diagonal estimates}
In order to state the estimates for $p\left( t,x,y\right) $ for arbitrary $%
x,y\in M$, we need some notation. For any $x\in M$ set%
\begin{equation*}
\left\vert x\right\vert :=d\left( x,K\right) +e.
\end{equation*}%
For all $x\in M$ and for all $t>2$, define the following functions:
\begin{equation}
D(x,t):=\left\{
\begin{array}{ll}
1, & \text{if }\left\vert x\right\vert >\sqrt{t}\ \text{and }x\in E_{i}, \\
\frac{\left\vert x\right\vert ^{2}V_{i}(\sqrt{t})}{tV_{i}(\left\vert
x\right\vert )}, & \text{if }\left\vert x\right\vert \leq \sqrt{t}\text{ and
}x\in E_{i}, \\
0, & \text{if }x\in K,%
\end{array}%
\right. \label{function D}
\end{equation}%
\begin{equation}
U\left( x,t\right) :=\left\{
\begin{array}{ll}
\frac{1}{\log \left\vert x\right\vert }, & \text{if\ }\left\vert
x\right\vert >\sqrt{t} \\
\frac{1}{\log \sqrt{t}}\log \frac{e\sqrt{t}}{|x|}, & \text{if }\left\vert
x\right\vert \leq \sqrt{t},%
\end{array}%
\right. \label{function U}
\end{equation}%
\begin{equation}
W(x,t):=\left\{
\begin{array}{ll}
1, & \text{if }\left\vert x\right\vert >\sqrt{t} \\
\frac{\log \left\vert x\right\vert }{\log \sqrt{t}}, & \text{if }\left\vert
x\right\vert \leq \sqrt{t}.%
\end{array}%
\right. \label{function W}
\end{equation}
It is clear that $U\left( x,t\right) \leq 1$ and $U\left( x,t\right)
\nearrow 1$ as $t\rightarrow \infty $, while $W\left( x,t\right) \leq 1$ and
$W\left( x,t\right) \searrow 0$ as $t\rightarrow \infty $. It is also useful
to observe that%
\begin{equation}
1\leq U\left( x,t\right) +W\left( x,t\right) \leq 2. \label{U+W}
\end{equation}%
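
The bound (\ref{U+W}) can be verified directly. Note that $\left\vert x\right\vert \geq e$ by definition, so that $\log \left\vert x\right\vert \geq 1$. If $\left\vert x\right\vert >\sqrt{t}$ then
\begin{equation*}
U\left( x,t\right) +W\left( x,t\right) =\frac{1}{\log \left\vert
x\right\vert }+1\in (1,2],
\end{equation*}%
while if $\left\vert x\right\vert \leq \sqrt{t}$ then also $\sqrt{t}\geq e$
and
\begin{equation*}
U\left( x,t\right) +W\left( x,t\right) =\frac{\log \frac{e\sqrt{t}}{%
\left\vert x\right\vert }+\log \left\vert x\right\vert }{\log \sqrt{t}}=%
\frac{1+\log \sqrt{t}}{\log \sqrt{t}}=1+\frac{1}{\log \sqrt{t}}\in (1,2].
\end{equation*}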
If $M_{i}$ is either critical or subcritical, then it is possible to show
that $D\left( x,t\right) $ is bounded.
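In the subcritical case this can be seen as follows: for $e\leq \left\vert
x\right\vert \leq \sqrt{t}$, the monotonicity of $V_{i}$ and (\ref{subcritical})
yield
\begin{equation*}
\frac{\left\vert x\right\vert ^{2}-1}{2V_{i}\left( \left\vert x\right\vert
\right) }\leq \int_{1}^{\left\vert x\right\vert }\frac{sds}{V_{i}\left(
s\right) }\leq \int_{1}^{\sqrt{t}}\frac{sds}{V_{i}\left( s\right) }\leq
\frac{Ct}{V_{i}(\sqrt{t})},
\end{equation*}%
whence $D\left( x,t\right) =\frac{\left\vert x\right\vert ^{2}V_{i}(\sqrt{t}%
)}{tV_{i}\left( \left\vert x\right\vert \right) }\leq C^{\prime }$, while in
the critical case $V_{i}\left( r\right) \approx r^{2}$ gives directly $%
D\left( x,t\right) \approx 1$ for $\left\vert x\right\vert \leq \sqrt{t}$.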
The next three theorems constitute our second main result. It is obtained by
combining Theorem \ref{main theorem} with several results from \cite{G-SC
Dirichlet}, \cite{G-SC hitting} and \cite{G-SC ends}.
In the first theorem we consider the case when $x$ and $y$ lie at different
ends.
\begin{theorem}
\label{T1}In the setting of Theorem {\ref{main theorem}}, the following
estimates are true for all $x\in E_{i}$, $y\in E_{j}$ with $i\neq j$ and $%
t>t_{0}$, where $t_{0}$ is large enough.
\begin{enumerate}
\item[$\left( i\right) $] If all the manifolds $M_{l}$, $l=1,...,k$, are
subcritical then%
\begin{equation}
p(t,x,y)\asymp \frac{C}{V_{\max }(\sqrt{t})}e^{-b\frac{d^{2}\left(
x,y\right) }{t}}. \label{T1i}
\end{equation}
\item[$\left( ii\right) $] Suppose that at least one of the manifolds $M_{l}$%
, $l=1,...,k$, is critical.
$\left( ii\right) _{1}$ If both $M_{i}$ and $M_{j}$ are subcritical, then
\begin{equation}
p(t,x,y)\asymp \frac{C}{t}\left( 1+\left( D(x,t)+D(y,t)\right) \log t\right)
e^{-b\frac{d^{2}\left( x,y\right) }{t}}. \label{T1ii1}
\end{equation}
$\left( ii\right) _{2}$ If both $M_{i}$ and $M_{j}$ are critical, then
\begin{equation}
p(t,x,y)\asymp \frac{C}{t}\left(
U(x,t)U(y,t)+W(x,t)U(y,t)+U(x,t)W(y,t)\right) e^{-b\frac{d^{2}\left(
x,y\right) }{t}}. \label{T1ii2}
\end{equation}%
$\left( ii\right) _{3}$ If $M_{i}$ is subcritical and $M_{j}$ is critical,
then
\begin{equation}
p(t,x,y)\asymp \frac{C}{t}\left( 1+D(x,t)U(y,t)\log t\right) e^{-b\frac{%
d^{2}\left( x,y\right) }{t}}. \label{T1ii3}
\end{equation}
\end{enumerate}
\end{theorem}
The next two theorems cover the case when $x,y$ lie at the same end.
\begin{theorem}
\label{T2}In the setting of Theorem \ref{main theorem}, assume that $x,y\in
E_{i}$ and $t>t_{0}$.
\begin{enumerate}
\item[$\left( a\right) $] If $\sqrt{t}\leq \min \left( \left\vert
x\right\vert ,\left\vert y\right\vert \right) $ then%
\begin{equation}
p(t,x,y)\asymp \frac{C}{V_{i}(x,\sqrt{t})}e^{-b\frac{d^{2}\left( x,y\right)
}{t}}. \label{Vi}
\end{equation}
\item[$\left( b\right) $] Moreover, if $V_{i}\left( r\right) \approx V_{\max
}\left( r\right) $ for all large $r$, then (\ref{Vi}) holds for all $t>t_{0}$%
. In particular, this is the case when $M_{i}$ is critical.
\end{enumerate}
\end{theorem}
Estimate (\ref{Vi}) means that, for a restricted time, Brownian motion on
each end does not see the other ends, which is natural to expect. Note that
the same phenomenon holds also in the case when all $M_{i}$ are
non-parabolic.
The second claim of Theorem \ref{T2} means that, on the maximal end,
Brownian motion does not see the other ends for all times. It is interesting
to observe that in the case when all $M_{i}$ are non-parabolic, a similar
statement holds for the minimal end.
\begin{theorem}
\label{T3}In the setting of Theorem \ref{main theorem}, assume that $M_{i}$
is subcritical, $x,y\in E_{i}$ and $t>t_{0}$. If $\sqrt{t}\geq \min \left(
\left\vert x\right\vert ,\left\vert y\right\vert \right) $ then the
following is true.
\begin{enumerate}
\item[$\left( i\right) $] If all the manifolds $M_{l}$, $l=1,...,k$, are
subcritical, then%
\begin{equation}
p(t,x,y)\asymp C\left( \frac{D(x,t)D(y,t)}{V_{i}(\sqrt{t})}+\frac{1}{V_{\max
}(\sqrt{t})}\right) e^{-b\frac{d^{2}\left( x,y\right) }{t}}. \label{T3i}
\end{equation}
\item[$\left( ii\right) $] If at least one of the manifolds $M_{l}$, $%
l=1,...,k$, is critical then
\begin{equation}
p(t,x,y)\asymp C\left( \frac{D(x,t)D(y,t)}{V_{i}(\sqrt{t})}+\frac{1}{t}%
\left( 1+\left( D(x,t)+D(y,t)\right) \log t\right) \right) e^{-b\frac{%
d^{2}\left( x,y\right) }{t}}. \label{T3ii}
\end{equation}
\end{enumerate}
\end{theorem}
\begin{remark}
\rm All the estimates of Theorems \ref{T1}-\ref{T3} can be extended to all
$x,y\in M$, including also the possibility that $x\in K$ or $y\in K$. This
follows from the local Harnack inequality for the heat kernel $p(t,x,y)$ and
from a careful analysis of the estimates. The latter shows that in all cases
when $\left\vert x\right\vert $ (or $\left\vert y\right\vert $) remains
bounded, the terms containing $D\left( x,t\right) $ are dominated by the
others and, hence, can be eliminated, which is equivalent to setting
$D\left( x,t\right) =0$ as in (\ref{function D}). A graphical summary of the
estimates of
Theorems \ref{T1}-\ref{T3} can be found at the following location:
\href{https://www.math.uni-bielefeld.de/~grigor/tables.pdf}{%
https://www.math.uni-bielefeld.de/\symbol{126}grigor/tables.pdf}
\end{remark}
\begin{remark}
\rm By \cite[Lemma 5.9]{G-SC ends}, for all $x,y\in M$ and $0<t\leq t_{0}$,
the heat kernel on $M$ satisfies the Li-Yau estimate $($\ref{LY type}$)$
with constants depending on $t_{0}$. For this result it suffices to assume
that each end $M_{i}$ satisfies the Li-Yau estimate. Hence, in Theorems \ref%
{T1}-\ref{T3} we do not worry about the estimates for $t\leq t_{0}$.
\end{remark}
If $V_{i}(r)$ is a power function for each $i=1,\ldots ,k$, then we can
simplify the heat kernel estimates of Theorems \ref{T1}-\ref{T3} as follows.
In the next statement $x,y$ lie at different ends.
\begin{corollary}
\label{power} Suppose that $V_{i}(r)\approx r^{\alpha _{i}}$ for all $%
i=1,\ldots ,k$ and $r\geq 1$, where $0<\alpha _{i}\leq 2$.
\begin{enumerate}
\item[$\left( i\right) $] Assume that $0<\alpha _{i}<2$ for all $i=1,...,k$
and set
\begin{equation*}
\alpha =\max_{1\leq i\leq k}\alpha _{i}~.
\end{equation*}%
Then, for all $x,y$ lying at different ends and for all $t>2$, we have%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{t^{\alpha /2}}e^{-b\frac{d^{2}\left(
x,y\right) }{t}}.
\end{equation*}
\item[$\left( ii\right) $] Assume that $\alpha _{l}=2$ for some $1\leq l\leq
k$. Then the following estimates hold for $i\neq j$, $x\in E_{i}$, $y\in
E_{j}$, $t>2$.
\end{enumerate}
$\left( ii\right) _{1}$ Let $\alpha _{i}<2$ and $\alpha _{j}<2$. If $\min
(\left\vert x\right\vert ,\left\vert y\right\vert )\geq \sqrt{t}$ then%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C\log t}{t}e^{-b\frac{d^{2}\left(
x,y\right) }{t}},
\end{equation*}%
and if $\min (\left\vert x\right\vert ,\left\vert y\right\vert )\leq \sqrt{t}
$ then%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{t}\left( 1+\log t~\left[ \left( \frac{%
\left\vert x\right\vert }{\sqrt{t}}\right) ^{2-\alpha _{i}}+\left( \frac{%
\left\vert y\right\vert }{\sqrt{t}}\right) ^{2-\alpha _{j}}\right] \right) .
\end{equation*}
$\left( ii\right) _{2}$ If $\alpha _{i}=\alpha _{j}=2$ then%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{t}\left( U\left( x,t\right) U\left(
y,t\right) +U\left( x,t\right) \frac{\log |y|}{\log |y|+\log t}+U\left(
y,t\right) \frac{\log |x|}{\log |x|+\log t}\right) e^{-b\frac{d^{2}\left(
x,y\right) }{t}}.
\end{equation*}%
Consequently, if $\left\vert x\right\vert ,\left\vert y\right\vert \geq
\sqrt{t}$ then%
\begin{equation}
p\left( t,x,y\right) \asymp \frac{C}{t}\left( \frac{1}{\log \left\vert
x\right\vert }+\frac{1}{\log \left\vert y\right\vert }\right) e^{-b\frac{%
d^{2}\left( x,y\right) }{t}}, \label{power (ii)_2}
\end{equation}%
if $\left\vert x\right\vert ,\left\vert y\right\vert \leq \sqrt{t}$ then%
\begin{equation}
p\left( t,x,y\right) \asymp \frac{C}{t\log ^{2}t}\left( \log t+\log ^{2}%
\sqrt{t}-\log |x|\log |y|\right) , \label{power (ii)_3}
\end{equation}%
and if $|x|\geq \sqrt{t}\geq |y|$ then%
\begin{equation}
p(t,x,y)\asymp \frac{C}{t\log t}\log \frac{e\sqrt{t}}{|y|}e^{-b\frac{%
d^{2}(x,y)}{t}}. \label{x>y}
\end{equation}%
Similarly, if $|y|\geq \sqrt{t}\geq |x|$ then%
\begin{equation}
p(t,x,y)\asymp \frac{C}{t\log t}\log \frac{e\sqrt{t}}{|x|}e^{-b\frac{%
d^{2}(x,y)}{t}}. \label{y>x}
\end{equation}
$\left( ii\right) _{3}$ If $\alpha _{i}<2$ and $\alpha _{j}=2$ then%
\begin{equation}
p\left( t,x,y\right) \asymp \frac{C}{t}\left( 1+\left( \frac{\left\vert
x\right\vert }{\left\vert x\right\vert +\sqrt{t}}\right) ^{2-\alpha
_{i}}U\left( y,t\right) \log t~\right) e^{-b\frac{d^{2}\left( x,y\right) }{t}%
}. \label{Corii3}
\end{equation}%
Consequently, if $\left\vert y\right\vert \geq \sqrt{t}$ then%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{t}e^{-b\frac{d^{2}\left( x,y\right) }{t}%
},
\end{equation*}%
if $\left\vert x\right\vert ,\left\vert y\right\vert \leq \sqrt{t}$ then%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{t}\left( 1+\left( \frac{\left\vert
x\right\vert }{\sqrt{t}}\right) ^{2-\alpha _{i}}\log \frac{e\sqrt{t}}{%
\left\vert y\right\vert }\right) ,
\end{equation*}%
and if $|x|\geq \sqrt{t}\geq |y|$ then%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\log \frac{e\sqrt{t}}{|y|}e^{-b\frac{d^{2}(x,y)}{t}%
}.
\end{equation*}
\end{corollary}
\begin{proof}
All the estimates of Corollary \ref{power} follow immediately from those of
Theorem \ref{T1} and the definitions of functions $D$ and $W$. In the case $%
\left( ii\right) _{2}$, in the range $\left\vert x\right\vert ,\left\vert
y\right\vert \leq \sqrt{t}$, Theorem \ref{T1} gives the estimate
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{t\log ^{2}\sqrt{t}}\left( \log \frac{e%
\sqrt{t}}{\left\vert x\right\vert }\log \frac{e\sqrt{t}}{\left\vert
y\right\vert }+\log \left\vert y\right\vert \log \frac{e\sqrt{t}}{\left\vert
x\right\vert }+\log \left\vert x\right\vert \log \frac{e\sqrt{t}}{\left\vert
y\right\vert }\right) .
\end{equation*}%
Since the sum in the brackets is equal to
\begin{equation*}
\left( \log \left\vert x\right\vert +\log \frac{e\sqrt{t}}{\left\vert
x\right\vert }\right) \left( \log \left\vert y\right\vert +\log \frac{e\sqrt{%
t}}{\left\vert y\right\vert }\right) -\log \left\vert x\right\vert \log
\left\vert y\right\vert =\left( 1+\log \sqrt{t}\right) ^{2}-\log \left\vert
x\right\vert \log \left\vert y\right\vert ,
\end{equation*}%
we obtain (\ref{power (ii)_3}).
\end{proof}
Let us state some consequences of Theorems \ref{T1}-\ref{T3} in the general
setting, but under some specific restrictions on the variables $x,y,t$.
\begin{corollary}
\label{Corcases}Under the hypotheses of Theorems \emph{\ref{T1}-\ref{T3}},
we have the following estimates.
\begin{enumerate}
\item[$\left( a\right) $] (Long time regime) For fixed $x,y\in M$ and $%
t\rightarrow \infty $,
\begin{equation}
p\left( t,x,y\right) \approx \frac{1}{V_{\max }\left( \sqrt{t}\right) }.
\label{ptmax}
\end{equation}
\item[$\left( b\right) $] (Medium time regime) Let $x\in E_{i}$ and $y\in
E_{j}$ with $i\neq j$. If $\left\vert x\right\vert \approx \left\vert
y\right\vert \approx \sqrt{t}$ then in the cases $\left( i\right) $ and $%
\left( ii\right) _{3}$ we have (\ref{ptmax}), in the case $\left( ii\right)
_{1}$ we have%
\begin{equation}
p\left( t,x,y\right) \approx \frac{\log t}{t}, \label{logt/t}
\end{equation}%
and in the case $\left( ii\right) _{2}$%
\begin{equation}
p\left( t,x,y\right) \approx \frac{1}{t\log t}. \label{tlogt}
\end{equation}
\end{enumerate}
\end{corollary}
\begin{proof}
$\left( a\right) $ The estimate (\ref{ptmax}) follows easily from Theorem
\ref{main theorem} by using a local Harnack inequality. However, we show
here how it follows from Theorems \ref{T1} and \ref{T3}. Observe that, for a
fixed $x\in E_{i}$ and large $t$, we have
\begin{equation}
D\left( x,t\right) \approx \frac{V_{i}\left( \sqrt{t}\right) }{t},\text{\ \ }%
U\left( x,t\right) \approx 1,\ \ W\left( x,t\right) \approx \frac{1}{\log t}.
\label{DUW}
\end{equation}%
Assume that $x\in E_{i}$, $y\in E_{j}$ and consider the cases $\left(
i\right) ,\left( ii\right) _{1},\left( ii\right) _{2}$ and $\left( ii\right)
_{3}$ as in Theorem \ref{T1}.
Case $\left( i\right) $. Using (\ref{T1i}), (\ref{T3i}), (\ref{DUW}) and $%
V_{i}\left( x,\sqrt{t}\right) \approx V_{i}\left( \sqrt{t}\right) \ $as $%
t\rightarrow \infty $ we obtain
\begin{equation*}
p\left( t,x,y\right) \approx \frac{V_{i}\left( \sqrt{t}\right) }{t^{2}}%
\delta _{ij}+\frac{1}{V_{\max }\left( \sqrt{t}\right) }\approx \frac{1}{%
V_{\max }\left( \sqrt{t}\right) },
\end{equation*}
where we have also used that $V_{j}\left( r\right) V_{\max }\left( r\right)
=o\left( r^{4}\right) .$
Case $\left( ii\right) _{1}$. By (\ref{T1ii1}), (\ref{T3ii}) and (\ref{DUW})
we have%
\begin{eqnarray*}
p\left( t,x,y\right) &\approx &\frac{V_{i}\left( \sqrt{t}\right) }{t^{2}}%
\delta _{ij}+\frac{1}{t}\left\{ 1+\left( \frac{V_{i}(\sqrt{t})}{t}+\frac{%
V_{j}(\sqrt{t})}{t}\right) \log t\right\} \\
&\approx &\frac{1}{t}\approx \frac{1}{V_{\max }\left( \sqrt{t}\right) },
\end{eqnarray*}%
because of $V_{\max }\left( r\right) \approx r^{2}$ and (\ref{or2logr})$.$
Case $\left( ii\right) _{2}.$ If $i\neq j$ then by (\ref{T1ii2}) and (\ref%
{DUW})%
\begin{equation*}
p(t,x,y)\approx \frac{1}{t}\left( 1+\frac{1}{\log t}\right) \approx \frac{1}{%
t}\approx \frac{1}{V_{\max }\left( \sqrt{t}\right) }.
\end{equation*}%
If $i=j$ then (\ref{ptmax}) follows trivially from (\ref{Vi}).
Case $\left( ii\right) _{3}.$ In this case necessarily $i\neq j$, and we
obtain by (\ref{T1ii3})
\begin{equation*}
p\left( t,x,y\right) \approx \frac{1}{t}\left\{ 1+\frac{V_{i}(\sqrt{t})}{t}%
\log t\right\} \approx \frac{1}{t}\approx \frac{1}{V_{\max }\left( \sqrt{t}%
\right) }.
\end{equation*}
$\left( b\right) $ In the case $\left\vert x\right\vert \approx \left\vert
y\right\vert \approx \sqrt{t}$ we have $d^{2}\left( x,y\right) \approx t$ and%
\begin{equation*}
D\left( x,t\right) \approx 1,\ \ \ \ U\left( x,t\right) \approx \frac{1}{%
\log t},\ \ \ W\left( x,t\right) \approx 1.
\end{equation*}%
Then the required estimates follow directly from those stated in Theorem \ref%
{T1}.
\end{proof}
Let us observe the following. In the medium time regime, that is, when $x$
and $y$ lie at different ends and $\left\vert x\right\vert \approx
\left\vert y\right\vert \approx \sqrt{t}$, we have by $\left( b\right) $: in
the cases $\left( i\right) $ and $\left( ii\right) _{3}$
\begin{equation*}
p\left( t,x,y\right) \approx \frac{1}{V_{\max }\left( \sqrt{t}\right) },
\end{equation*}%
that is, $p\left( t,x,y\right) $ behaves as in the long time regime,
whereas in the case $\left( ii\right) _{1}$%
\begin{equation*}
p\left( t,x,y\right) \approx \frac{\log t}{t}\gg \frac{1}{V_{\max }\left(
\sqrt{t}\right) },
\end{equation*}%
and in the case $\left( ii\right) _{2}$%
\begin{equation*}
p\left( t,x,y\right) \approx \frac{1}{t\log t}\ll \frac{1}{V_{\max }\left(
\sqrt{t}\right) }.
\end{equation*}
Hence, in the case $\left( ii\right) _{2}$ we observe the \textit{bottleneck
effect}: the heat kernel value $\frac{1}{t\log t}$ in the medium time regime
is significantly \textit{smaller} than the value $\frac{1}{t}$ in the long
time regime. For example, this case occurs for $M=\mathbb{R}^{2}\#\mathbb{R}%
^{2}$ (see Fig. \ref{rn+rn}). A similar bottleneck effect was observed in
\cite{G-SC ends} for $M=\mathbb{R}^{n}\#\mathbb{R}^{n}$ with $n\geq 3$: the
heat kernel of $M$ in the long time regime is comparable to $\frac{1}{t^{n/2}%
}$, whereas in the medium time regime it is comparable to $\frac{1}{t^{n-1}}$.
In the case $n=2$ the bottleneck effect is quantitatively weaker, as the
distinction between the two regimes is determined by $\log t$ in contrast to
a power of $t$ in the case $n\geq 3$.
On the contrary, in the case $\left( ii\right) _{1}$ we observe an
interesting \textit{anti-bottleneck effect}: the heat kernel value $\frac{%
\log t}{t}$ in the medium time regime is significantly \textit{larger} than
that of the long time regime $\frac{1}{t}$. This effect occurs only when
there are at least three ends, one of them being critical and two of them
subcritical. For example, this is the case for $M=\mathcal{R}^{1}\#\mathcal{R%
}^{1}\#\mathcal{R}^{2}$ (see Fig. \ref{r2+r1}).
\begin{figure}[tbph]
\begin{center}
\includegraphics{r2+r1.pdf}
\end{center}
\caption{Connected sum $\mathcal{R}^{1}\#\mathcal{R}^{1}\#\mathcal{R}^{2}$.}
\label{r2+r1}
\end{figure}
\subsection{Examples}
\label{examples}
We present here heat kernel bounds on some specific examples using Theorems %
\ref{T1}-\ref{T3} and Corollary \ref{power}.
\begin{example}[Heat kernel on $\mathcal{R}^{\protect\alpha _{1}}\#\mathcal{R%
}^{\protect\alpha _{2}}$]
\rm Let us write down the heat kernel bounds on the connected sum
\begin{equation*}
M=M_{1}\#M_{2}=\mathcal{R}^{\alpha _{1}}\#\mathcal{R}^{\alpha _{2}},
\end{equation*}%
where $1\leq \alpha _{1}\leq \alpha _{2}<2$. In this case both $M_{1}$ and $%
M_{2}$ are subcritical so that Theorem \ref{T1}$\left( i\right) $, Theorem %
\ref{T2} and Theorem \ref{T3}$\left( i\right) $ apply. Observe that
\begin{equation}
D(x,t)=\left\{
\begin{array}{ll}
1, & \mbox{if }\left\vert x\right\vert >\sqrt{t}, \\
\left( \frac{\left\vert x\right\vert }{\sqrt{t}}\right) ^{2-\alpha _{i}}, & %
\mbox{if }\left\vert x\right\vert \leq \sqrt{t},%
\end{array}%
\right. \label{Di}
\end{equation}%
and%
\begin{equation*}
V_{\max }\left( r\right) \approx r^{\alpha _{2}},\ \ \ r>1.
\end{equation*}%
In the case $x\in E_{1}$ and $y\in E_{2}$, we obtain by (\ref{T1i}) or by
Corollary \ref{power}$\left( i\right) $,
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t^{\alpha _{2}/2}}e^{-b\frac{d^{2}\left( x,y\right)
}{t}}.
\end{equation*}%
Assume now that $x,y\in E_{1}$. If $\left\vert x\right\vert ,\left\vert
y\right\vert >\sqrt{t}$, then by (\ref{Vi}) we have%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{V_{1}(x,\sqrt{t})}e^{-b\frac{%
d^{2}\left( x,y\right) }{t}}.
\end{equation*}
If $\left\vert x\right\vert ,\left\vert y\right\vert \leq \sqrt{t}$ then by (%
\ref{T3i}) and (\ref{Di}) we obtain%
\begin{equation}
p(t,x,y)\approx \frac{1}{t^{\alpha _{1}/2}}\left( \frac{\left\vert
x\right\vert \left\vert y\right\vert }{t}\right) ^{2-\alpha _{1}}+\frac{1}{%
t^{\alpha _{2}/2}}. \label{a1a2}
\end{equation}%
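
More explicitly, substituting (\ref{Di}) into the first term of (\ref{T3i}) gives
\begin{equation*}
\frac{D(x,t)D(y,t)}{V_{1}(\sqrt{t})}\approx \frac{1}{t^{\alpha _{1}/2}}%
\left( \frac{\left\vert x\right\vert }{\sqrt{t}}\right) ^{2-\alpha
_{1}}\left( \frac{\left\vert y\right\vert }{\sqrt{t}}\right) ^{2-\alpha
_{1}}=\frac{1}{t^{\alpha _{1}/2}}\left( \frac{\left\vert x\right\vert
\left\vert y\right\vert }{t}\right) ^{2-\alpha _{1}},
\end{equation*}%
while the second term is $\approx \frac{1}{V_{\max }(\sqrt{t})}\approx
t^{-\alpha _{2}/2}$, and the exponential factor is comparable to $1$ since $%
d\left( x,y\right) \leq C\sqrt{t}$ in this range.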
In particular, in the long time regime $t\rightarrow \infty $ we obtain%
\begin{equation*}
p\left( t,x,y\right) \approx \frac{1}{t^{\alpha _{2}/2}},
\end{equation*}%
which, of course, matches (\ref{ptmax}). Assume now that $\left\vert
x\right\vert >\sqrt{t}\geq \left\vert y\right\vert $. Substituting (\ref{Di}%
) into (\ref{T3i}), we obtain%
\begin{equation*}
p\left( t,x,y\right) \asymp C\left( \frac{1}{t^{\alpha_1/2}}\left( \frac{%
\left\vert y\right\vert }{\sqrt{t}}\right) ^{2-\alpha _{1}}+\frac{1}{%
t^{\alpha _{2}/2}}\right) e^{-b\frac{d^{2}(x,y)}{t}}.
\end{equation*}%
A similar estimate holds in the case $|y|>\sqrt{t}\geq |x|$.
Finally, if $x,y\in E_{2}$ then we have by Theorem \ref{T2} that for all $%
t>1 $%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{V_{2}(x,\sqrt{t})}e^{-b\frac{d^{2}\left( x,y\right)
}{t}}.
\end{equation*}
\end{example}
\begin{example}[Heat kernel on $\mathcal{R}^{1}\#\mathcal{R}^{2}$]
\rm Consider $M=M_{1}\#M_{2}=\mathcal{R}^{1}\#\mathcal{R}^{2}$ (see Fig. \ref%
{r1+r2}). Suppose that $x\in E_{1}$, $y\in E_{2}$. Then by Theorem \ref{T1}$%
\left( ii\right) _{3}$ or by the estimate (\ref{Corii3}) of Corollary \ref%
{power}%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\left( 1+\frac{|x|}{|x|+\sqrt{t}}U(y,t)\log
t\right) e^{-b\frac{d^{2}\left( x,y\right) }{t}}.
\end{equation*}%
Using (\ref{function U}) we obtain: if $|y|>\sqrt{t}$, then
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}e^{-b\frac{d^{2}(x,y)}{t}};
\end{equation*}%
if $|x|,|y|\leq \sqrt{t}$, then%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\left( 1+\frac{|x|}{\sqrt{t}}\log \frac{e\sqrt{t}}{%
|y|}\right) ,
\end{equation*}%
and if $|x|>\sqrt{t}\geq |y|$, then%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\log \frac{e\sqrt{t}}{|y|}e^{-b\frac{d^{2}(x,y)}{t}%
}.
\end{equation*}%
\newline
Assume that $x,y\in E_{1}$. If $\min (\left\vert x\right\vert ,\left\vert
y\right\vert )\leq \sqrt{t}$, then we obtain by (\ref{T3ii}) and (\ref{Di})%
\begin{equation*}
p(t,x,y)\approx \frac{1}{t}\left( 1+\frac{\left\vert x\right\vert \left\vert
y\right\vert }{\sqrt{t}}+\frac{\left\vert x\right\vert +\left\vert
y\right\vert }{\sqrt{t}}\log t\right) e^{-b\frac{d^{2}(x,y)}{t}}.
\end{equation*}%
In particular, if $|x|>\sqrt{t}\geq |y|$, we obtain
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\left( |y|+\log t\right) e^{-b\frac{d^{2}(x,y)}{t}%
}.
\end{equation*}%
A similar estimate holds when $|y|>\sqrt{t}\geq |x|$. If $\min (|x|,|y|)>%
\sqrt{t}$, we obtain by Theorem \ref{T2}
\begin{equation*}
p(t,x,y)\asymp \frac{C}{\sqrt{t}}e^{-b\frac{d^{2}(x,y)}{t}}.
\end{equation*}
In the case $x,y\in E_{2}$, we obtain by Theorem \ref{T2}
\begin{equation}
p(t,x,y)\asymp \frac{C}{t}e^{-b\frac{d^{2}(x,y)}{t}} . \label{ptcb}
\end{equation}
\end{example}
\begin{example}[Heat kernel on $\mathbb{R}^{2}\#\mathbb{R}^{2}$]
\rm Suppose that $x\in E_{1}$ and $y\in E_{2}$. If $\left\vert x\right\vert
,\left\vert y\right\vert \leq \sqrt{t}$, then by (\ref{T1ii2}), or by (\ref%
{power (ii)_3})
\begin{equation*}
p(t,x,y)\approx \frac{1}{t\log ^{2}t}\left( \log t+\log ^{2}\sqrt{t}-\log
|x|\log |y|\right) .
\end{equation*}%
In particular, in the long time regime $\left\vert x\right\vert \approx
\left\vert y\right\vert \approx 1$ we obtain%
\begin{equation*}
p\left( t,x,y\right) \approx \frac{1}{t},
\end{equation*}%
and in the medium time regime $\left\vert x\right\vert \approx \left\vert
y\right\vert \approx \sqrt{t}$ we have%
\begin{equation*}
p(t,x,y)\approx \frac{1}{t\log t},
\end{equation*}%
which indicates a mild bottleneck effect on $\mathbb{R}^{2}\#\mathbb{R}^{2}$.
\newline
If $\left\vert x\right\vert ,\left\vert y\right\vert \geq \sqrt{t}$ then the
heat kernel on $\mathbb{R}^{2}\#\mathbb{R}^{2}$ satisfies (\ref{power (ii)_2}%
), that is,
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{t}\left( \frac{1}{\log \left\vert
x\right\vert }+\frac{1}{\log \left\vert y\right\vert }\right) e^{-b\frac{%
d^{2}\left( x,y\right) }{t}}.
\end{equation*}%
The cases $\left\vert x\right\vert >\sqrt{t}\geq \left\vert y\right\vert $
and $\left\vert y\right\vert >\sqrt{t}\geq \left\vert x\right\vert $ are
covered by (\ref{x>y}) and (\ref{y>x}), respectively.
If $x,y\in E_{1}$ or $x,y\in E_{2}$ then $p\left( t,x,y\right) $ satisfies (%
\ref{ptcb}) by Theorem \ref{T2}.
\end{example}
\begin{example}[Heat kernel on $\mathcal{R}^{1}\#\mathcal{R}^{1}\#\mathcal{R}%
^{2}$]
\rm Let $M=M_{1}\#M_{2}\#M_{3}=\mathcal{R}^{1}\#\mathcal{R}^{1}\#\mathcal{R}%
^{2}$ (see Fig. \ref{r2+r1}). If $x$ and $y$ are at the same end, or $x\in
\mathcal{R}^{1}$ and $y\in \mathcal{R}^{2}$, then the heat kernel $p(t,x,y)$
satisfies the same estimates as in the previous case $\mathcal{R}^{1}\#\mathcal{R}%
^{2}$.
Assume now that $x\in E_{1}$ and $y\in E_{2}$. Then by Corollary \ref{power}$%
\left( ii\right) _{1}$ we obtain the following estimates: if $\min
(\left\vert x\right\vert ,\left\vert y\right\vert )\leq \sqrt{t}$ then
\begin{equation*}
p(t,x,y)\approx \frac{1}{t}\left( 1+\frac{\log t}{\sqrt{t}}(\left\vert
x\right\vert +\left\vert y\right\vert )\right) ,
\end{equation*}%
and if $\min (|x|,|y|)>\sqrt{t}$, then
\begin{equation*}
p(t,x,y)\asymp \frac{\log t}{t}e^{-b\frac{d^{2}(x,y)}{t}}.
\end{equation*}%
In particular, if $\left\vert x\right\vert \approx \left\vert y\right\vert
\approx \sqrt{t}$, then
\begin{equation*}
p(t,x,y)\approx \frac{\log t}{t}.
\end{equation*}
\end{example}
\section{Some auxiliary estimates}
\setcounter{equation}{0}\label{SecGeneral}In this section we prove some
auxiliary results to be used in the proof of Theorem \ref{main theorem}.
Let $(M,\mu )$ be a geodesically complete non-compact weighted manifold
(parabolicity of $M$ is not assumed unless explicitly stated). For
any open set $\Omega \subset M$, denote by $p_{\Omega }\left( t,x,y\right) $
the Dirichlet heat kernel in $\Omega $. Assume from now on that $\Omega $
has smooth boundary. Then $p_{\Omega }\left( t,x,y\right) =0$ whenever $x$
or $y$ belongs to $\partial \Omega $. Denote also by $P_{t}^{\Omega }$ the
associated heat semigroup. Denote as before by $(\{X_{t}\}_{t\geq 0},\{%
\mathbb{P}_{x}\}_{x\in M})$ Brownian motion on $M$. Let $\tau _{\Omega }$ be
the first exit time of $X_{t}$ from $\Omega $, that is,%
\begin{equation*}
\tau _{\Omega }=\inf \left\{ t>0:X_{t}\notin \Omega \right\} .
\end{equation*}%
Then, for any bounded continuous function $f$ on $M$,%
\begin{equation}
P_{t}^{\Omega }f\left( x\right) =\mathbb{E}_{x}\left( f\left( X_{t}\right)
1_{\left\{ \tau _{\Omega }>t\right\} }\right) . \label{PE}
\end{equation}
\subsection{Integrated resolvent}
The resolvent operator $G_{\lambda }^{\Omega }$ is defined for any $\lambda
>0$ as an operator on non-negative measurable functions $f$ on $\Omega $ by%
\begin{equation*}
G_{\lambda }^{\Omega }f\left( x\right) =\int_{0}^{\infty }e^{-\lambda
t}P_{t}^{\Omega }f\,dt.
\end{equation*}%
Clearly, $G_{\lambda }^{\Omega }$ is a linear operator that preserves
non-negativity. Note that by definition $G_{\lambda }^{\Omega }f$ vanishes
in $\Omega ^{c}$. If $\Omega =M$ then we write $G_{\lambda }\equiv
G_{\lambda }^{M}$. Clearly, $G_{\lambda }^{\Omega }$ is an integral operator
whose kernel%
\begin{equation*}
g_{\lambda }^{\Omega }\left( x,y\right) =\int_{0}^{\infty }e^{-\lambda
t}p_{\Omega }\left( t,x,y\right) dt
\end{equation*}%
is called the \textit{resolvent kernel}. In general, $G_{\lambda }^{\Omega
}f$ may take the value $+\infty $. However, if $f$ is bounded and continuous
then the function $u=G_{\lambda }^{\Omega }f$ is finite and, moreover, is
the minimal non-negative solution of the equation $\Delta u-\lambda u=-f$
(see \cite{G AMS}). It follows from (\ref{PE}) that%
\begin{equation}
G_{\lambda }^{\Omega }f\left( x\right) =\mathbb{E}_{x}\left( \int_{0}^{\tau
_{\Omega }}f\left( X_{t}\right) e^{-\lambda t}dt\right) . \label{FC}
\end{equation}%
If in addition $\Omega $ is precompact then the function $u=G_{\lambda
}^{\Omega }f$ solves the Dirichlet problem%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta u-\lambda u=-f & \text{in }\Omega , \\
u=0 & \text{on }\partial \Omega .%
\end{array}%
\right.
\end{equation*}%
For the proof of Theorem \ref{main theorem} we need the notion of \textit{%
integrated resolvent.} Fix a compact set $K\subset M$ with non-empty
interior $\mathring{K}$ such that $K$ is the closure of $\mathring{K}$ and
the boundary $\partial K$ is smooth. Fix also once and for all a reference
point $o\in K$.
For any $\lambda >0$, define the function $\gamma _{\lambda }$ on $M$ by%
\begin{equation}
\gamma _{\lambda }(x):=G_{\lambda }1_{K}\left( x\right)
=\int_{K}g_{\lambda }\left( x,z\right) d\mu \left( z\right)
=\int_{K}\int_{0}^{\infty }e^{-\lambda t}p\left( t,x,z\right) dt\,d\mu
\left( z\right) .
\label{Gla}
\end{equation}%
The function $\gamma _{\lambda }$ is called the \textit{integrated resolvent}%
. Set also%
\begin{equation}
\dot{\gamma}_{\lambda } =G_{\lambda } \gamma _{\lambda } . \label{gadot}
\end{equation}%
It follows from the resolvent equation $G_{\alpha }-G_{\beta }=\left( \beta
-\alpha \right) G_{\alpha }G_{\beta }$ that
\begin{equation}
\dot{\gamma}_{\lambda }=-\frac{\partial }{\partial \lambda }\gamma _{\lambda
}=\int_{K}\int_{0}^{\infty }te^{-\lambda t}p\left( t,x,z\right) dt\,d\mu
\left( z\right) .
\label{Gladot}
\end{equation}
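Indeed, dividing the resolvent equation by $\beta -\alpha $ and letting $%
\beta \rightarrow \alpha =\lambda $ gives $\frac{\partial }{\partial \lambda
}G_{\lambda }=-G_{\lambda }G_{\lambda }$, whence%
\begin{equation*}
\dot{\gamma}_{\lambda }=G_{\lambda }G_{\lambda }1_{K}=-\frac{\partial }{%
\partial \lambda }G_{\lambda }1_{K}=-\frac{\partial }{\partial \lambda }%
\gamma _{\lambda },
\end{equation*}%
while the last expression in (\ref{Gladot}) follows by differentiating (\ref%
{Gla}) in $\lambda $ under the integral sign.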
\begin{lemma}
\label{Lemma of G}
\begin{itemize}
\item[$\left( i\right) $] If there exist positive constants $C,\lambda _{0}$
and a function $F:\mathbb{R}_{+}\rightarrow \mathbb{R}_{+}$ such that, for
some $x\in K$,
\begin{equation}
\gamma _{\lambda }(x)\leq \frac{C}{\lambda F(\frac{1}{\sqrt{\lambda }})}%
\quad \text{for all }\lambda \in (0,\lambda _{0}], \label{Gl<}
\end{equation}%
then there exist positive constants $C^{\prime },t_{0}$ such that
\begin{equation}
p(t,o,o)\leq \frac{C^{\prime }}{F(\sqrt{t})}\quad \text{for all }t\geq t_{0}.
\label{on-diagonal from resolvent}
\end{equation}
\item[$\left( ii\right) $] If there exist positive constants $C,\lambda _{0}$
such that, for some $x\in K$,%
\begin{equation}
\dot{\gamma}_{\lambda }(x)\leq \frac{C}{\lambda }\quad \text{for all }%
\lambda \in (0,\lambda _{0}], \label{Gdot<}
\end{equation}%
then there exist positive constants $C^{\prime },t_{0}$ such that
\begin{equation*}
p(t,o,o)\leq \frac{C^{\prime }}{t}\quad \text{for all }t\geq t_{0}.
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
$\left( i\right) $ Set $\delta =(\mathrm{diam}K)^{2}$. By the local Harnack
inequality, there exist positive constants $c_{1},c_{2}$ such that, for all $%
x,z\in K$ and $s>2c_{2}\delta $,
\begin{equation}
p(s,x,z)\geq c_{1}p\left( s-c_{2}\delta ,o,o\right) , \label{local Harnack}
\end{equation}%
which implies by (\ref{Gla}), for all $x\in K$,%
\begin{equation*}
\gamma _{\lambda }(x)\geq c_{1}\mathrm{vol}(K)\int_{2c_{2}\delta }^{\infty
}e^{-\lambda s}p(s-c_{2}\delta ,o,o)ds.
\end{equation*}%
Using the monotonicity of $p(s,o,o)$ with respect to $s$ (see \cite[%
Exercise 7.22]{G AMS}), we obtain, for $t\geq 4c_{2}\delta ,$%
\begin{align}
\gamma _{\lambda }\left( x\right) & \geq c_{1}\mathrm{vol}%
(K)\int_{t/2}^{t}e^{-\lambda s}p(s-c_{2}\delta ,o,o)ds \notag \\
& \geq c_{1}\mathrm{vol}(K)\int_{t/2}^{t}e^{-\lambda s}p(t,o,o)ds\geq
cte^{-\lambda t}p(t,o,o). \label{G2}
\end{align}%
Set $t_{0}:=\max \{4c_{2}\delta ,\lambda _{0}^{-1}\}$. For any $t\geq t_{0}$%
, using (\ref{Gl<}) and (\ref{G2}) with $\lambda =t^{-1}$, we obtain%
\begin{equation*}
\frac{C}{\lambda F(\frac{1}{\sqrt{\lambda }})}\geq cte^{-1}p(t,o,o),
\end{equation*}%
which implies%
\begin{equation*}
p(t,o,o)\leq \frac{C^{\prime }}{F(\sqrt{t})}.
\end{equation*}
$\left( ii\right) $ Arguing as in $\left( i\right) $ and using (\ref{local
Harnack}) and (\ref{Gladot}), we obtain, for $t\geq 4c_{2}\delta $ and $x\in
K$,%
\begin{eqnarray}
\dot{\gamma}_{\lambda }(x) &=&\int_{K}\int_{0}^{\infty }se^{-\lambda
s}p(s,x,z)ds\,d\mu (z) \notag \\
&\geq &c_{1}\mathrm{vol}(K)\int_{t/2}^{t}se^{-\lambda s}p(t,o,o)ds\geq
ct^{2}e^{-\lambda t}p(t,o,o). \label{Gdot2}
\end{eqnarray}%
Assuming $t\geq t_{0}:=\max \{4c_{2}\delta ,\lambda _{0}^{-1}\}$ and using (%
\ref{Gdot<}) and (\ref{Gdot2}) with $\lambda =t^{-1}$, we obtain%
\begin{equation*}
\frac{C}{\lambda }\geq ct^{2}e^{-1}p(t,o,o),
\end{equation*}%
which implies%
\begin{equation*}
p(t,o,o)\leq \frac{C^{\prime }}{t}.
\end{equation*}
\end{proof}
\begin{remark}
\rm Lemma \ref{Lemma of G} will be used in the proof of Theorem \ref{main
theorem} in Section \ref{SecUpper} as follows. In the case when all the ends
are subcritical, we will prove the following upper bound for the integrated
resolvent:%
\begin{equation}
\sup_{\partial K}\gamma _{\lambda }\leq \frac{C}{\lambda V_{\max }(\frac{1}{%
\sqrt{\lambda }})}, \label{supga}
\end{equation}%
which then implies by Lemma \ref{Lemma of G}$\left( i\right) $ the desired
upper bound
\begin{equation*}
p(t,o,o)\leq \frac{C}{V_{\max }(\sqrt{t})}.
\end{equation*}%
However, in the case when one of the ends is critical, we obtain instead of (%
\ref{supga}) a weaker inequality%
\begin{equation}
\sup_{\partial K}\gamma _{\lambda }\leq C\log \frac{1}{\lambda },
\label{logla}
\end{equation}%
which yields
\begin{equation*}
p(t,o,o)\leq C\frac{\log t}{t}
\end{equation*}%
instead of the desired estimate
\begin{equation}
p(t,o,o)\leq \frac{C}{t}. \label{ptt}
\end{equation}%
In order to be able to prove the latter, we will use the second part of
Lemma \ref{Lemma of G}. Namely, we will prove that in the critical case%
\begin{equation}
\sup_{\partial K}\dot{\gamma}_{\lambda }\leq \frac{C}{\lambda },
\label{1/la}
\end{equation}%
which then will imply (\ref{ptt}) by Lemma \ref{Lemma of G}$\left( ii\right)
$.
Note that the estimate (\ref{logla}) of $\gamma _{\lambda }$ is already
optimal as it is matched by the estimate (\ref{1/la}) of $\dot{\gamma}%
_{\lambda }=-\frac{\partial }{\partial \lambda }\gamma _{\lambda }$.
However, the function $\gamma _{\lambda }$ alone does not allow one to
recover an optimal estimate of the heat kernel, while its $\lambda $%
-derivative $\dot{\gamma}_{\lambda }$ does.
\end{remark}
\subsection{Comparison principles}
Fix an open set $\Omega \subset M$ and $\lambda >0$. We say that a function $%
u$ is $\lambda $-harmonic in $\Omega $ if it satisfies in $\Omega $ the
equation $\Delta u-\lambda u=0.$ A function $u$ is called $\lambda $%
-superharmonic if $\Delta u-\lambda u\leq 0$. We will frequently use the
following minimum principle: if $\Omega $ is precompact, $u\in C\left(
\overline{\Omega }\right) $ is $\lambda $-superharmonic in $\Omega $ and $%
u\geq 0$ on $\partial \Omega $ then $u\geq 0$ in $\Omega $. It implies the
comparison principle: if $u,v\in C\left( \overline{\Omega }\right) $, $u$ is
$\lambda $-superharmonic in $\Omega $ and $v$ is $\lambda $-harmonic in $%
\Omega $ then%
\begin{equation}
u\geq v\ \text{on }\partial \Omega \ \ \Rightarrow \ u\geq v\ \text{in }%
\Omega . \label{u>v}
\end{equation}
Let now $\Omega $ be an exterior domain, that is, $\Omega =F^{c}$ where $F$
is a compact subset of $M$. Let $v\in C\left( \overline{\Omega }\right) $ be
non-negative and $\lambda $-harmonic in $\Omega $.\ We say that $v$ is
\textit{minimal} in $\Omega $ if there exists an exhaustion $\left\{
U_{k}\right\} $ of $M$ by precompact open sets $U_{k}\supset F$ and a
sequence $\left\{ v_{k}\right\} $ of functions $v_{k}\in C\left( \overline{%
U_{k}\setminus F}\right) $ that are non-negative and $\lambda $-harmonic in $%
U_{k}\setminus F$ and such that $v_{k}|_{\partial U_{k}}=0$ and $%
v_{k}\uparrow v$ in $\overline{\Omega }$. Then the following modification of
the comparison principle holds in $\Omega $: if $u,v\in C\left( \overline{%
\Omega }\right) $, $u$ is non-negative $\lambda $-superharmonic in $\Omega $
and $v$ is non-negative minimal $\lambda $-harmonic in $\Omega $ then (\ref%
{u>v}) is satisfied. Indeed, by the comparison principle in $U_{k}\setminus
F $ we obtain $u\geq v_{k}$ whence the claim follows.
It remains to mention that, for any non-negative bounded function $f$ with
compact support, the function $G_{\lambda }f$ is non-negative, minimal, $%
\lambda $-harmonic outside $\limfunc{supp}f$, since $G_{\lambda
}^{U_{k}}f\uparrow G_{\lambda }f$.
\subsection{Functions $\Phi _{\protect\lambda }^{\Omega }$ and $\Psi _{%
\protect\lambda }^{\Omega }$}
In any open set $\Omega \subset M$, consider a function%
\begin{equation}
\Phi _{\lambda }^{\Omega }:=\lambda G_{\lambda }^{\Omega }1=\int_{0}^{\infty
}\lambda e^{-\lambda t}P_{t}^{\Omega }1\,dt. \label{Fidef}
\end{equation}%
Since $0\leq P_{t}^{\Omega }1\leq 1$, we see that
\begin{equation}
0\leq \Phi _{\lambda }^{\Omega }\leq 1. \label{Fi1}
\end{equation}%
It follows from (\ref{PE}) that
\begin{equation}
\Phi _{\lambda }^{\Omega }(x)=\int_{0}^{\infty }\lambda e^{-\lambda t}%
\mathbb{P}_{x}(\tau _{\Omega }>t)dt. \label{Fi}
\end{equation}
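Indeed, applying (\ref{PE}) to $f\equiv 1$ gives $P_{t}^{\Omega }1\left(
x\right) =\mathbb{P}_{x}\left( \tau _{\Omega }>t\right) $, so that (\ref{Fi}%
) follows from (\ref{Fidef}).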
Let $A$ be a precompact open subset of $M$ with smooth boundary and let $%
K\subset A$. Set
\begin{equation}
\gamma _{\lambda }^{A}(x):=G_{\lambda }^{A}1_{K}\left( x\right)
=\int_{K}g_{\lambda }^{A}\left( x,z\right) d\mu \left( z\right)
=\int_{K}\int_{0}^{\infty }e^{-\lambda t}p_{A}(t,x,z)dtd\mu (z) . \label{GA}
\end{equation}
\begin{lemma}
\label{Lemga}$\left( a\right) $ The following inequality holds in $A$:%
\begin{equation}
\gamma _{\lambda }-\gamma _{\lambda }^{A}\leq (\sup_{\partial A}\gamma
_{\lambda })\left( 1-\Phi _{\lambda }^{A}\right) . \label{gaA}
\end{equation}%
$\left( b\right) $ The following inequality holds in $K^{c}$:%
\begin{equation}
\gamma _{\lambda }\leq (\sup_{\partial K}\gamma _{\lambda })\left( 1-\Phi
_{\lambda }^{K^{c}}\right) . \label{gaKc}
\end{equation}
\end{lemma}
\begin{proof}
$\left( a\right) $ By (\ref{Fidef}), the function $\Phi _{\lambda }^{A}$
satisfies%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta \Phi _{\lambda }^{A}-\lambda \Phi _{\lambda }^{A}=-\lambda & \text{in
}A \\
\Phi _{\lambda }^{A}=0 & \text{on }\partial A.%
\end{array}%
\right.
\end{equation*}
It follows that the function $u:=1-\Phi _{\lambda }^{A}$ solves the boundary
value problem%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta u-\lambda u=0 & \text{in }A \\
u=1 & \text{on }\partial A.%
\end{array}%
\right.
\end{equation*}%
Note that $\gamma _{\lambda }-\gamma _{\lambda }^{A}=G_{\lambda
}1_{K}-G_{\lambda }^{A}1_{K}$ is $\lambda $-harmonic in $A$ and is equal to $%
\gamma _{\lambda }$ on $\partial A$ (since $\gamma _{\lambda }^{A}$ vanishes
on $\partial A$), which implies by the comparison principle in $A$ that%
\begin{equation*}
\gamma _{\lambda }-\gamma _{\lambda }^{A}\leq (\sup_{\partial A}\gamma
_{\lambda })u\ \ \ \text{in }A,
\end{equation*}%
which proves (\ref{gaA}).
$\left( b\right) $ Set $\Omega =K^{c}$. As in $\left( a\right) $, the
function $u:=1-\Phi _{\lambda }^{\Omega }$ solves the following boundary
value problem:%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta u-\lambda u=0 & \text{in }\Omega \\
u=1 & \text{on }\partial \Omega .%
\end{array}%
\right.
\end{equation*}%
The function $\gamma _{\lambda }=G_{\lambda }1_{K}$ is non-negative, $%
\lambda $-harmonic, and minimal in $\Omega $. On $\partial \Omega =\partial
K $ we have%
\begin{equation}
\gamma _{\lambda }\leq \sup_{\partial K}\gamma _{\lambda }=(\sup_{\partial
K}\gamma _{\lambda })u. \label{gau}
\end{equation}%
Since $u$ is non-negative and $\lambda $-harmonic in $\Omega ,$ it follows
by the comparison principle in $\Omega $ that (\ref{gau}) holds also in $%
\Omega $, which proves (\ref{gaKc}).
\end{proof}
Set
\begin{equation}
\Psi _{\lambda }^{\Omega }:=G_{\lambda }^{\Omega }\left( 1-\Phi _{\lambda
}^{\Omega }\right) \label{Psidef}
\end{equation}%
and observe that $\Psi _{\lambda }^{\Omega }\geq 0$ by (\ref{Fi1}).
\begin{lemma}
\label{Lemma of Psi} Assume that $M$ is parabolic. Then we have the
following identity for all $x\in \Omega $:%
\begin{equation}
\Psi _{\lambda }^{\Omega }(x)=\int_{0}^{\infty }te^{-\lambda t}\partial _{t}%
\mathbb{P}_{x}(\tau _{\Omega }\leq t)dt. \label{Psi}
\end{equation}
\end{lemma}
\begin{proof}
Integrating by parts in (\ref{Fi}) and using the parabolicity of $M$, we
obtain%
\begin{eqnarray}
\Phi _{\lambda }^{\Omega }(x) &=&-\int_{0}^{\infty }\mathbb{P}_{x}(\tau
_{\Omega }>t)de^{-\lambda t}=1+\int_{0}^{\infty }e^{-\lambda t}\partial _{t}%
\mathbb{P}_{x}(\tau _{\Omega }>t) \notag \\
&=&1-\int_{0}^{\infty }e^{-\lambda t}\partial _{t}\mathbb{P}_{x}(\tau
_{\Omega }\leq t). \label{1-i}
\end{eqnarray}%
On the other hand, we have%
\begin{eqnarray*}
\Psi _{\lambda }^{\Omega } &=&G_{\lambda }^{\Omega }1-G_{\lambda }^{\Omega
}\Phi _{\lambda }^{\Omega }=G_{\lambda }^{\Omega }1-\lambda G_{\lambda
}^{\Omega }G_{\lambda }^{\Omega }1 \\
&=&G_{\lambda }^{\Omega }1+\lambda \frac{\partial }{\partial \lambda }%
G_{\lambda }^{\Omega }1=\frac{\partial }{\partial \lambda }\left( \lambda
G_{\lambda }^{\Omega }1\right) =\frac{\partial }{\partial \lambda }\Phi
_{\lambda }^{\Omega }.
\end{eqnarray*}%
Hence, differentiating (\ref{1-i}) in $\lambda $, we obtain (\ref{Psi}).
\end{proof}
\subsection{Some local estimates}
Recall that, for any open set $A$ containing $K$, we have defined
\begin{equation*}
\gamma _{\lambda }^{A}(x)=G_{\lambda }^{A}1_{K}\left( x\right)
=\int_{K}\int_{0}^{\infty }e^{-\lambda t}p_{A}(t,x,z)dtd\mu (z).
\end{equation*}%
Set also
\begin{equation}
\dot{\gamma}_{\lambda }^{A}(x):=G_{\lambda }^{A}\gamma _{\lambda }^{A}\left(
x\right) =-\frac{\partial }{\partial \lambda }\gamma _{\lambda
}^{A}(x)=\int_{K}\int_{0}^{\infty }te^{-\lambda t}p_{A}(t,x,z)dtd\mu (z).
\label{GAdot}
\end{equation}%
Note that $\gamma _{\lambda }^{A}$ and $\dot{\gamma}_{\lambda }^{A}$ vanish
outside $A$. Note also that $\gamma _{\lambda }=\gamma _{\lambda }^{M}$ and $%
\dot{\gamma}_{\lambda }=\dot{\gamma}_{\lambda }^{M}$.
In what follows we fix a precompact open set $A\supset K$ with smooth
boundary.
\begin{lemma}
\label{Lemma iii}There exists a positive constant $C=C\left( A\right) $ such
that, for all $\lambda >0$,
\begin{equation}
\sup_{A}\gamma _{\lambda }^{A}\leq C, \label{GC}
\end{equation}%
\begin{equation}
\sup_{A}\dot{\gamma}_{\lambda }^{A}\leq C^{2}, \label{estimate of G^A1}
\end{equation}%
and%
\begin{equation}
\sup_{A}\Psi _{\lambda }^{A}\leq C. \label{estimate of R_A^1}
\end{equation}
\end{lemma}
\begin{proof}
It follows from (\ref{GA}) that
\begin{equation*}
\gamma _{\lambda }^{A}\left( x\right) \leq \int_{A}\int_{0}^{\infty
}p_{A}(t,x,z)dt\,d\mu (z)=\int_{A}g^{A}\left( x,z\right) d\mu \left(
z\right) ,
\end{equation*}%
where $g^{A}=g_{0}^{A}$ is the Green function of $\Delta $ in $A$. The
function%
\begin{equation*}
u\left( x\right) =\int_{A}g^{A}(x,z)d\mu (z)
\end{equation*}%
solves the following boundary value problem%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta u=-1 & \text{in }A, \\
u=0 & \text{on }\partial A,%
\end{array}%
\right.
\end{equation*}%
which implies that $u\left( x\right) $ is bounded. Hence, (\ref{GC}) holds
with $C=\sup u$.
By (\ref{GAdot}) we have%
\begin{equation*}
\dot{\gamma}_{\lambda }^{A}\left( x\right) =\int_{A}g_{\lambda }^{A}\left(
x,z\right) \gamma _{\lambda }^{A}\left( z\right) d\mu \left( z\right) ,
\end{equation*}%
which implies by (\ref{GC}), for any $x\in A$,
\begin{equation*}
\dot{\gamma}_{\lambda }^{A}\left( x\right) \leq \sup_{A}\gamma _{\lambda
}^{A}\int_{A}g^{A}\left( x,z\right) d\mu \left( z\right) \leq C\sup u=C^{2},
\end{equation*}%
which proves (\ref{estimate of G^A1}).
Finally, it follows from (\ref{Psidef}) that%
\begin{equation*}
\Psi _{\lambda }^{A}\left( x\right) \leq G_{\lambda }^{A}1\left( x\right)
=\int_{A}g^{A}\left( x,z\right) d\mu \left( z\right) \leq C,
\end{equation*}%
which proves (\ref{estimate of R_A^1}).
\end{proof}
\subsection{Global estimates of $\Phi _{\protect\lambda }^{\Omega }$ and $%
\Psi _{\protect\lambda }^{\Omega }$}
So far we have used a compact set $K$ and a precompact open set $A\supset K$%
. We have also assumed that $K$ and $A$ have smooth boundaries.
In the next lemma we estimate $\inf_{\partial A}\Phi _{\lambda }^{K^{c}} $
from below using additional geometric assumptions. Denote by $K_{\epsilon }$
the $\epsilon $-neighborhood of $K$. We will assume in addition that $%
K_{\epsilon }\subset A$ for some large enough $\epsilon $ specified below.
\begin{lemma}
\label{lemma estimate of important integral} Let $M$ be a geodesically
complete, non-compact parabolic manifold satisfying $($\ref{LY type}$)$, $%
\left( RCA\right) $. Fix a reference point $o\in K$ and set $V(r)=V(o,r)$.
Assume in addition that $K_{\epsilon }\subset A$ for sufficiently large $%
\epsilon =\epsilon \left( K\right) >0$. Then there exists a constant $c>0$
such that
\begin{equation}
\inf_{\partial A}\Phi _{\lambda }^{K^{c}}\geq c\int_{(\mathrm{diam}%
A)^{2}}^{\infty }(1-e^{-\lambda s})\frac{1}{V(\sqrt{s})H(\sqrt{s})^{2}}ds,
\label{resolvent dirichlet}
\end{equation}%
where%
\begin{equation}
H(r):=1+\left( \int_{1}^{r}\frac{s}{V(s)}ds\right) _{+}. \label{H}
\end{equation}%
In addition, we have:
\begin{enumerate}
\item[$\left( i\right) $] if $V\left( r\right) $ is subcritical then, for $%
0<\lambda \leq \frac{1}{(\mathrm{diam}A)^{2}}$,
\begin{equation}
\inf_{\partial A}\Phi _{\lambda }^{K^{c}}\geq c\lambda V(\frac{1}{\sqrt{%
\lambda }}). \label{resolvent estimate (i)}
\end{equation}
\item[$\left( ii\right) $] If $V\left( r\right) $ is critical then, for $%
0<\lambda \leq \frac{1}{(\mathrm{diam}A)^{2}}$,
\begin{equation}
\inf_{\partial A}\Phi _{\lambda }^{K^{c}}\geq \frac{c}{\log \frac{1}{\lambda
}}. \label{resolvent estimate (ii)}
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Denote $\Omega =K^{c}$. By \cite[Theorem 4.9 and (4.23)]{G-SC Dirichlet}, if
$\epsilon $ is large enough then, for all $x,y$ outside $K_{\epsilon /2}$ and
for all $s>0$, the following estimate holds:
\begin{equation}
p_{\Omega }(s,x,y)\geq C\frac{D(s,x,y)}{V(x,\sqrt{s})}\exp \left( -c\frac{%
d^{2}(x,y)}{s}\right) , \label{pOM}
\end{equation}%
where
\begin{equation*}
D(s,x,y)=\frac{H(|x|)H(\left\vert y\right\vert )}{\left( H(|x|)+H(\sqrt{s}%
)\right) \left( H(\left\vert y\right\vert )+H(\sqrt{s})\right) }.
\end{equation*}%
By \cite[(3.29)]{G-SC hitting}, we have, for any $x\notin K_{\epsilon }$,
\begin{equation*}
\mathbb{P}_{x}\left( \tau _{\Omega }>t\right) \geq c\int_{t}^{\infty
}\inf_{y\in K_{\epsilon }\backslash K_{\epsilon /2}}p_{\Omega }(s,x,y)ds,
\end{equation*}%
where $c=c\left( K,\epsilon \right) >0$, which implies by (\ref{Fi})
\begin{align}
\Phi _{\lambda }^{\Omega }(x)\geq & c\int_{0}^{\infty }\lambda e^{-\lambda
t}\left( \int_{t}^{\infty }\inf_{y\in K_{\epsilon }\backslash K_{\epsilon
/2}}p_{\Omega }(s,x,y)ds\right) dt \notag \\
=& c\int_{0}^{\infty }\left( \int_{0}^{s}\lambda e^{-\lambda t}\inf_{y\in
K_{\epsilon }\backslash K_{\epsilon /2}}p_{\Omega }(s,x,y)dt\right) ds
\notag \\
=& c\int_{0}^{\infty }(1-e^{-\lambda s})\inf_{y\in K_{\epsilon }\backslash
K_{\epsilon /2}}p_{\Omega }(s,x,y)ds. \label{change of variable}
\end{align}%
Assume that $x\in \partial A$. Since $y\in K_{\epsilon }$, we see that $%
d\left( x,y\right) \leq \mathrm{diam}A$. Also, $\left\vert x\right\vert
,\left\vert y\right\vert $ are bounded by $\mathrm{diam}A+e$. It follows
from (\ref{pOM}) that if $s\geq (\mathrm{diam}A)^{2}$ then%
\begin{equation*}
p_{\Omega }(s,x,y)\geq \frac{c}{V(\sqrt{s})H(\sqrt{s})^{2}}.
\end{equation*}%
Substituting into (\ref{change of variable}) yields (\ref{resolvent
dirichlet}).
In the case $\left( i\right) $, when $V$ is subcritical, we obtain from (\ref%
{H})
\begin{equation}
H(r)\approx \frac{r^{2}}{V(r)}. \label{Hsub}
\end{equation}%
Substituting into (\ref{resolvent dirichlet}), we obtain, for $0<\lambda
\leq \frac{1}{(\mathrm{diam}A)^{2}}$,
\begin{equation*}
\inf_{\partial A}\Phi _{\lambda }^{\Omega }\geq c\int_{1/\lambda }^{\infty
}(1-e^{-\lambda s})\frac{V(\sqrt{s})}{s^{2}}ds\geq c\lambda V(\frac{1}{\sqrt{%
\lambda }}),
\end{equation*}%
which proves (\ref{resolvent estimate (i)}).
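Here the last inequality can be verified by restricting the integration to $%
[\frac{1}{\lambda },\frac{2}{\lambda }]$, where $1-e^{-\lambda s}\geq
1-e^{-1}$ and, by the monotonicity of $V$, $V(\sqrt{s})\geq V(\frac{1}{\sqrt{%
\lambda }})$:%
\begin{equation*}
\int_{1/\lambda }^{\infty }(1-e^{-\lambda s})\frac{V(\sqrt{s})}{s^{2}}ds\geq
(1-e^{-1})V(\frac{1}{\sqrt{\lambda }})\int_{1/\lambda }^{2/\lambda }\frac{ds%
}{s^{2}}=\frac{1-e^{-1}}{2}\lambda V(\frac{1}{\sqrt{\lambda }}).
\end{equation*}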
In the case $\left( ii\right) $, when $V$ is critical, we have
\begin{equation}
H(r)\approx \log r, \label{Hc}
\end{equation}%
which implies, for $0<\lambda \leq \frac{1}{(\mathrm{diam}A)^{2}}$,
\begin{align*}
\inf_{\partial A}\Phi _{\lambda }^{\Omega }\geq & c\int_{1/\lambda }^{\infty
}(1-e^{-\lambda s})\frac{ds}{s\log ^{2}s} \\
\geq & c(1-e^{-1})\int_{1/\lambda }^{\infty }\frac{d\log s}{\log ^{2}s} \\
=& c(1-e^{-1})\frac{1}{\log \frac{1}{\lambda }},
\end{align*}%
which proves (\ref{resolvent estimate (ii)}).
\end{proof}
\begin{lemma}
\label{Lemma derivative RHS}Let $M$ be a geodesically complete, non-compact
parabolic manifold satisfying $($\ref{LY type}$)$, $\left( RCA\right) $.
Assume in addition that $K_{\epsilon }\subset A$ for sufficiently large $%
\epsilon =\epsilon \left( K\right) >0$. Assume also that $V\left( r\right)
:=V\left( o,r\right) $ is either critical or subcritical. Then there exists
a constant $C>0$ such that, for small enough $\lambda >0$,
\begin{equation}
\sup_{\partial A}\Psi _{\lambda }^{K^{c}}\leq \frac{C}{\lambda \log ^{2}%
\frac{1}{\lambda }}. \label{estimate of R_K^1}
\end{equation}
\end{lemma}
\begin{proof}
Set $\Omega =K^{c}$. Fix $a\in \partial A$ and set
\begin{equation*}
T=\frac{1}{\lambda \log ^{2}\frac{1}{\lambda }}.
\end{equation*}%
In the identity (\ref{Psi}) for $\Psi _{\lambda }^{\Omega }$, let us
decompose the integration into two intervals: $\left[ 0,T\right] $ and $%
[T,\infty )$. For the first interval, we have by integration by parts%
\begin{equation*}
\int_{0}^{T}te^{-\lambda t}\partial _{t}\mathbb{P}_{a}(\tau _{\Omega }\leq
t)dt=Te^{-\lambda T}\mathbb{P}_{a}\left( \tau _{\Omega }\leq T\right)
-\int_{0}^{T}e^{-\lambda t}(1-\lambda t)\mathbb{P}_{a}(\tau _{\Omega }\leq
t)dt.
\end{equation*}%
Assume that $\lambda <e^{-1}$ so that $\log ^{2}\frac{1}{\lambda }>1$ and,
hence, $\lambda T<1$. It follows that $1-\lambda t\geq 0$ on $\left[ 0,T%
\right] $ and, therefore, the integral on the right hand side of the above
identity is non-negative. Hence,%
\begin{equation*}
\int_{0}^{T}te^{-\lambda t}\partial _{t}\mathbb{P}_{a}(\tau _{\Omega }\leq
t)dt\leq T,
\end{equation*}%
which matches the required estimate (\ref{estimate of R_K^1}).
Let us estimate the integral (\ref{Psi}) over $[T,\infty )$. By \cite[Remark
4.3]{G-SC hitting}, if $\epsilon $ is large enough then, for all $a\in
\partial A\subset \Omega $ and for all $t\geq t_{0}$ (where $t_{0}$ depends
on $\mathrm{diam}A$), we have
\begin{equation}
\partial _{t}\mathbb{P}_{a}(\tau _{\Omega }\leq t)\leq \frac{C}{V\left(
\sqrt{t}\right) H^{2}\left( \sqrt{t}\right) }, \label{Hest}
\end{equation}%
where $H$ is defined by (\ref{H}). Assuming that $\lambda $ is so small that
$T>t_{0}$ and using (\ref{Hest}), we obtain
\begin{equation}
\int_{T}^{\infty }te^{-\lambda t}\partial _{t}\mathbb{P}_{a}(\tau _{\Omega
}\leq t)dt\leq C\int_{T}^{\infty }\frac{te^{-\lambda t}dt}{V\left( \sqrt{t}%
\right) H^{2}\left( \sqrt{t}\right) }. \label{sH}
\end{equation}%
Consider first the case when $V\left( r\right) $ is critical, that is, $%
V\left( r\right) \approx r^{2}$. Then $H\left( r\right) \approx \log r$ and
we obtain%
\begin{equation*}
\int_{T}^{\infty }te^{-\lambda t}\partial _{t}\mathbb{P}_{a}(\tau _{\Omega
}\leq t)dt\leq C\int_{T}^{\infty }\frac{e^{-\lambda t}dt}{\log ^{2}t}\leq
\frac{C}{\log ^{2}T}\int_{0}^{\infty }e^{-\lambda t}dt=\frac{C}{\lambda \log
^{2}T}.
\end{equation*}%
Taking $\lambda >0$ sufficiently small so that $\log ^{2}\frac{1}{\lambda }%
\leq \frac{1}{\sqrt{\lambda }}$, we obtain $T\geq \frac{1}{\sqrt{\lambda }}$
and $\log T\geq \frac{1}{2}\log \frac{1}{\lambda }$, whence%
\begin{equation*}
\int_{T}^{\infty }te^{-\lambda t}\partial _{t}\mathbb{P}_{a}(\tau _{\Omega
}\leq t)dt\leq 4CT,
\end{equation*}%
which proves (\ref{estimate of R_K^1}) in the critical case.
Assume now that $V\left( r\right) $ is subcritical. Then, for $r>2$, we have%
\begin{equation*}
\frac{r^{2}}{V\left( r\right) }\leq 3\int_{r/2}^{r}\frac{tdt}{V\left(
t\right) }\leq 3H\left( r\right) .
\end{equation*}%
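Indeed, since $V$ is non-decreasing,%
\begin{equation*}
\int_{r/2}^{r}\frac{tdt}{V\left( t\right) }\geq \frac{1}{V\left( r\right) }%
\int_{r/2}^{r}tdt=\frac{3}{8}\frac{r^{2}}{V\left( r\right) },
\end{equation*}%
which gives the first inequality, while the second one follows from $%
\int_{r/2}^{r}\frac{tdt}{V\left( t\right) }\leq \int_{1}^{r}\frac{tdt}{%
V\left( t\right) }\leq H\left( r\right) $ for $r>2$.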
Substituting into (\ref{sH}), we obtain
\begin{equation*}
\int_{T}^{\infty }te^{-\lambda t}\partial _{t}\mathbb{P}_{a}(\tau _{\Omega
}\leq t)dt\leq C\int_{T}^{\infty }\frac{e^{-\lambda t}dt}{H\left( \sqrt{t}%
\right) }\leq \frac{C}{\lambda H(\sqrt{T})}\leq \frac{CV(\sqrt{T})}{\lambda T%
},
\end{equation*}%
where in the last inequality we have used (\ref{Hsub}). In order to prove
that the right hand side is bounded by $CT$, it suffices to verify that%
\begin{equation*}
V(\sqrt{T})\leq C\lambda T^{2}.
\end{equation*}%
Since $\log \frac{1}{\lambda }\approx \log T$ and, hence, $\lambda \approx
\frac{1}{T\log ^{2}T}$, it suffices to prove that%
\begin{equation*}
V(\sqrt{T})\leq \frac{CT}{\log ^{2}T}
\end{equation*}%
for large enough $T$. Putting $T=r^{2}$, this inequality is equivalent to
\begin{equation}
\log ^{2}r\leq C\frac{r^{2}}{V(r)}. \label{subcritical bound critical}
\end{equation}%
Since $M$ is subcritical, there exists a constant $b>0$ such that, for large
enough $r$,%
\begin{equation}
b\leq \int_{1}^{r}\frac{tdt}{V(t)}\leq C\frac{r^{2}}{V(r)}.
\label{subcritical bound}
\end{equation}%
Since
\begin{equation}
\int_{1}^{r}\frac{tdt}{V(t)}=\int_{1}^{r}\frac{t^{2}}{V(t)}d\log t,
\label{subcritical bound 2}
\end{equation}%
substituting (\ref{subcritical bound}) into the right hand side of (\ref%
{subcritical bound 2}), we obtain
\begin{equation*}
\log r=\int_{1}^{r}d\log t\leq \int_{1}^{r}\frac{C}{b}\frac{t^{2}}{V(t)}%
d\log t=\int_{1}^{r}\frac{C}{b}\frac{tdt}{V(t)}\leq \frac{C^{2}}{b}\frac{%
r^{2}}{V(r)}.
\end{equation*}%
Using this bound on $\log t$ in the integrand, together with (\ref{subcritical bound}), we obtain, for large enough $r$,
\begin{equation*}
\log ^{2}r=2\int_{1}^{r}\log td\log t\leq 2\int_{1}^{r}\frac{C^{2}}{b}\frac{%
t^{2}}{V(t)}d\log t\leq \frac{2C^{3}}{b}\frac{r^{2}}{V(r)},
\end{equation*}%
whence (\ref{subcritical bound critical}) follows.
\end{proof}
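The inequality (\ref{subcritical bound critical}) can be sanity-checked numerically. The snippet below is an illustration only: it assumes the hypothetical model volume $V(r)=r^{\alpha }$ with $\alpha <2$ (a subcritical example) and verifies that $\log ^{2}r$ is eventually dominated by $r^{2}/V(r)$.

```python
import math

# Hypothetical subcritical model: V(r) = r**alpha with alpha < 2, so that
# r**2 / V(r) = r**(2 - alpha) grows polynomially while log(r)**2 grows
# only logarithmically; hence the bound holds for all large enough r.
def ratio(r: float, alpha: float) -> float:
    """log(r)**2 divided by r**2/V(r); tends to 0 as r -> infinity."""
    return math.log(r) ** 2 / r ** (2 - alpha)

alpha = 1.5
ratios = [ratio(r, alpha) for r in (1e4, 1e6, 1e9, 1e12)]
print(ratios)  # strictly decreasing towards 0
```

For any fixed $\alpha <2$ the ratio tends to $0$, so the constant $C$ in (\ref{subcritical bound critical}) can even be taken uniform once $r$ is large.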
\section{On-diagonal estimates at center}
\setcounter{equation}{0}\label{SecOndiag}In this section we prove Theorem %
\ref{main theorem}. In order to obtain the upper bound of $p\left(
t,o,o\right) $ on $M=M_{1}\#...\#M_{k}$, we use the integrated resolvent
introduced in the previous section. The idea of using the resolvent on a
connected sum goes back to Woess \cite[p.\ 96]{Woess}, where it was used in
the setting of connected sums of graphs; its implementation in the present
setting of manifolds, however, requires considerably more technique.
\subsection{Estimates of integrated resolvent on connected sums}
From now on let $M=M_{1}\#M_{2}\#\cdots \#M_{k}$ be a connected sum of
parabolic manifolds $M_{1},\ldots ,M_{k}$ with a central part $K$. Let $A$
be a connected, precompact open subset of $M$ with smooth boundary and such
that $K\subset A$. In fact, we will need that $K_{\epsilon }\subset A$ for
large enough $\epsilon $. Set
\begin{equation*}
\partial A_{i}:=\partial A\cap E_{i},\ \ \ 1\leq i\leq k
\end{equation*}%
so that $\partial A=\sqcup _{i}\partial A_{i}$ (see Fig. \ref{figure:
connectedsum2}).
\begin{figure}[tbph]
\begin{center}
\scalebox{0.9}{
\includegraphics{connectedsum2.PDF}
}
\end{center}
\caption{Sets $K$ and $A$ in the connected sum $M$.}
\label{figure: connectedsum2}
\end{figure}
\begin{lemma}
\label{Lemma estimate of G sum}There is a constant $h=h\left( A,K\right) >0$
such that, for any $\lambda >0$,%
\begin{equation}
h(\sup_{\partial K}\gamma _{\lambda })\sum_{i=1}^{k}\inf_{\partial
A_{i}}\Phi _{\lambda }^{E_{i}}\leq \sup_{\partial K}\gamma _{\lambda }^{A}.
\label{estimate of G sum (i)}
\end{equation}
\end{lemma}
\begin{proof}
As follows from (\ref{Gla}) and (\ref{GA}), the function
\begin{equation*}
u:=\gamma _{\lambda }-\gamma _{\lambda }^{A}=G_{\lambda }1_{K}-G_{\lambda
}^{A}1_{K}
\end{equation*}%
is $\lambda $-harmonic in $A$. Consider the function $h_{i}$ in $A$ that
solves the Dirichlet problem%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta h_{i}=0 & \text{in }A \\
h_{i}=1_{\partial A_{i}} & \text{on }\partial A.%
\end{array}%
\right.
\end{equation*}%
Since on $\partial A_{i}$ we have%
\begin{equation*}
u\leq \sup_{\partial A_{i}}\gamma _{\lambda }=(\sup_{\partial A_{i}}\gamma
_{\lambda })h_{i},
\end{equation*}%
it follows that on $\partial A$%
\begin{equation}
u\leq \sum_{i=1}^{k}(\sup_{\partial A_{i}}\gamma _{\lambda })h_{i}.
\label{uh}
\end{equation}%
Since $h_{i}$ is $\lambda $-superharmonic in $A$, we conclude by the
comparison principle in $A$ that (\ref{uh}) holds in $A$. Let us also
observe that on $\partial A$
\begin{equation}
\sum_{i=1}^{k}h_{i}=1, \label{h1}
\end{equation}%
which implies, by the uniqueness of the solution of the Dirichlet problem,
that (\ref{h1}) holds in all of $A$.
Since in $E_{i}$ we have $\Phi _{\lambda }^{K^{c}}=\Phi _{\lambda }^{E_{i}}$%
, we obtain by Lemma \ref{Lemga}$\left( b\right) $ that in $E_{i}$%
\begin{equation*}
\gamma _{\lambda }\leq (\sup_{\partial K}\gamma _{\lambda })(1-\Phi
_{\lambda }^{E_{i}}),
\end{equation*}%
which implies%
\begin{equation*}
\sup_{\partial A_{i}}\gamma _{\lambda }\leq (\sup_{\partial K}\gamma
_{\lambda })\sup_{\partial A_{i}}(1-\Phi _{\lambda
}^{E_{i}})=(\sup_{\partial K}\gamma _{\lambda })(1-\inf_{\partial A_{i}}\Phi
_{\lambda }^{E_{i}}).
\end{equation*}%
Substituting into (\ref{uh}) and recalling the definition of $u$, we obtain
that on $A$%
\begin{equation}
\gamma _{\lambda }\leq \gamma _{\lambda }^{A}+(\sup_{\partial K}\gamma
_{\lambda })\sum_{i=1}^{k}(1-\inf_{\partial A_{i}}\Phi _{\lambda
}^{E_{i}})h_{i}. \label{gaga}
\end{equation}%
Let $x\in \partial K$ be a point where $\gamma _{\lambda }$ attains its
maximum on $\partial K$. Considering (\ref{gaga}) at this point $x$ we obtain%
\begin{equation*}
\gamma _{\lambda }\left( x\right) \leq \gamma _{\lambda }^{A}\left( x\right)
+\gamma _{\lambda }\left( x\right) \sum_{i=1}^{k}(1-\inf_{\partial
A_{i}}\Phi _{\lambda }^{E_{i}})h_{i}\left( x\right) ,
\end{equation*}%
whence by (\ref{h1})%
\begin{equation*}
\gamma _{\lambda }\left( x\right) \sum_{i=1}^{k}(\inf_{\partial A_{i}}\Phi
_{\lambda }^{E_{i}})h_{i}\left( x\right) \leq \gamma _{\lambda }^{A}\left(
x\right) .
\end{equation*}%
This implies (\ref{estimate of G sum (i)}) with $h:=\min_{i}\inf_{\partial
K}h_{i}>0$.
\end{proof}
\begin{lemma}
There exists a constant $h=h\left( A,K\right) >0$ such that
\begin{equation}
h(\sup_{\partial K}\dot{\gamma}_{\lambda })\sum_{i=1}^{k}\inf_{\partial
A_{i}}\Phi _{\lambda }^{E_{i}}\leq \sup_{\partial K}\dot{\gamma}_{\lambda
}^{A}+(\sup_{\partial K}\gamma _{\lambda })\left( \sup_{\partial K}\Psi
_{\lambda }^{A}+\sum_{i=1}^{k}\sup_{\partial A_{i}}\Psi _{\lambda
}^{E_{i}}\right) . \label{estimate of G sum (ii)}
\end{equation}
\end{lemma}
\begin{proof}
By (\ref{gadot}) and (\ref{GAdot}), the function
\begin{equation*}
v:=\dot{\gamma}_{\lambda }-\dot{\gamma}_{\lambda }^{A}=G_{\lambda }\gamma
_{\lambda }-G_{\lambda }^{A}\gamma _{\lambda }^{A}
\end{equation*}%
solves in $A$ the following boundary value problem:%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta v-\lambda v=-\left( \gamma _{\lambda }-\gamma _{\lambda }^{A}\right)
& \text{in }A \\
v=\dot{\gamma}_{\lambda } & \text{on }\partial A.%
\end{array}%
\right.
\end{equation*}%
Consider also the function $w$ that solves the problem%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta w-\lambda w=0 & \text{in }A \\
w=\dot{\gamma}_{\lambda } & \text{on }\partial A.%
\end{array}%
\right.
\end{equation*}%
Then we have%
\begin{equation}
v=G_{\lambda }^{A}\left( \gamma _{\lambda }-\gamma _{\lambda }^{A}\right) +w.
\label{vw}
\end{equation}%
Using the estimate (\ref{gaA}) of Lemma \ref{Lemga}$\left( a\right) $ and (%
\ref{Psidef}), we obtain that in $A$
\begin{equation}
G_{\lambda }^{A}\left( \gamma _{\lambda }-\gamma _{\lambda }^{A}\right) \leq
(\sup_{\partial A}\gamma _{\lambda })G_{\lambda }^{A}\left( 1-\Phi _{\lambda
}^{A}\right) \ =(\sup_{\partial A}\gamma _{\lambda })\Psi _{\lambda }^{A}.
\label{Gaga1}
\end{equation}%
Observe that%
\begin{equation*}
\gamma _{\lambda }\leq \sup_{\partial K}\gamma _{\lambda }\ \text{in\ }K^{c}
\end{equation*}%
because the constant function $\sup_{\partial K}\gamma _{\lambda }$ is
$\lambda $-superharmonic in $K^{c}$, while $\gamma _{\lambda }$ is a minimal
$\lambda $-harmonic function in $K^{c}$ that is bounded by $\sup_{\partial
K}\gamma _{\lambda }$ on $\partial K^{c}$. Hence, we obtain from
(\ref{Gaga1}) that%
\begin{equation}
G_{\lambda }^{A}\left( \gamma _{\lambda }-\gamma _{\lambda }^{A}\right) \leq
(\sup_{\partial K}\gamma _{\lambda })\Psi _{\lambda }^{A}\ \ \text{in }A.
\label{Gaga}
\end{equation}%
In order to estimate $w$, let us represent this function in the form%
\begin{equation*}
w=\sum_{i=1}^{k}w_{i},
\end{equation*}%
where $w_{i}$ solves the Dirichlet problem%
\begin{equation*}
\left\{
\begin{array}{ll}
\Delta w_{i}-\lambda w_{i}=0 & \text{in }A \\
w_{i}=\dot{\gamma}_{\lambda }1_{\partial A_{i}} & \text{on }\partial A.%
\end{array}%
\right.
\end{equation*}%
Let $h_{i}$ be the same as in the proof of Lemma \ref{Lemma estimate of G
sum}. By the comparison principle, we have that in $A$%
\begin{equation}
w_{i}\leq (\sup_{\partial A_{i}}\dot{\gamma}_{\lambda })h_{i}. \label{wi}
\end{equation}%
Let us prove further that%
\begin{equation}
\dot{\gamma}_{\lambda }-G_{\lambda }^{E_{i}}\gamma _{\lambda }\leq
(\sup_{\partial E_{i}}\dot{\gamma}_{\lambda })(1-\Phi _{\lambda }^{E_{i}})\
\ \text{in }E_{i}. \label{Ei}
\end{equation}%
Indeed, by (\ref{gadot}), the function
\begin{equation*}
\dot{\gamma}_{\lambda }-G_{\lambda }^{E_{i}}\gamma _{\lambda }=G_{\lambda
}\gamma _{\lambda }-G_{\lambda }^{E_{i}}\gamma _{\lambda }
\end{equation*}%
is non-negative, $\lambda $-harmonic, and minimal in $E_{i}$. Besides, it is
bounded by $\sup_{\partial E_{i}}\dot{\gamma}_{\lambda }$ on $\partial E_{i}$%
. The function $1-\Phi _{\lambda }^{E_{i}}$ is non-negative and $\lambda $%
-harmonic in $E_{i}$, and is equal to $1$ on $\partial E_{i}$. The estimate (%
\ref{Ei}) follows by the comparison principle in $E_{i}$.
Similarly, we have%
\begin{equation*}
\gamma _{\lambda }\leq (\sup_{\partial E_{i}}\gamma _{\lambda })(1-\Phi
_{\lambda }^{E_{i}})\ \text{in }E_{i},
\end{equation*}%
because $\gamma _{\lambda }$ is non-negative, $\lambda $-harmonic and
minimal in $E_{i}$, and is bounded by $\sup_{\partial E_{i}}\gamma _{\lambda
}$ on $\partial E_{i}$. It follows that in $E_{i}$%
\begin{equation*}
G_{\lambda }^{E_{i}}\gamma _{\lambda }\leq (\sup_{\partial E_{i}}\gamma
_{\lambda })G_{\lambda }^{E_i}(1-\Phi _{\lambda }^{E_{i}})=(\sup_{\partial
E_{i}}\gamma _{\lambda })\Psi _{\lambda }^{E_{i}}.
\end{equation*}%
Combining with (\ref{Ei}), we obtain that in $E_{i}$%
\begin{equation*}
\dot{\gamma}_{\lambda }\leq (\sup_{\partial E_{i}}\dot{\gamma}_{\lambda
})(1-\Phi _{\lambda }^{E_{i}})+(\sup_{\partial E_{i}}\gamma _{\lambda })\Psi
_{\lambda }^{E_{i}}.
\end{equation*}%
Substituting into (\ref{wi}), we obtain that in $A$%
\begin{equation*}
w\leq \sum_{i=1}^{k}(\sup_{\partial A_{i}}\dot{\gamma}_{\lambda })h_{i}\leq
\sum_{i=1}^{k}\left( (\sup_{\partial E_{i}}\dot{\gamma}_{\lambda
})(1-\inf_{\partial A_{i}}\Phi _{\lambda }^{E_{i}})+(\sup_{\partial
E_{i}}\gamma _{\lambda })\sup_{\partial A_{i}}\Psi _{\lambda
}^{E_{i}}\right) h_{i}.
\end{equation*}%
Combining with (\ref{vw}) and (\ref{Gaga}), we obtain the following estimate
of the function $v=\dot{\gamma}_{\lambda }-\dot{\gamma}_{\lambda }^{A}$ in $%
A $:%
\begin{equation*}
\dot{\gamma}_{\lambda }-\dot{\gamma}_{\lambda }^{A}\leq (\sup_{\partial
K}\gamma _{\lambda })\Psi _{\lambda }^{A}+\sum_{i=1}^{k}\left(
(\sup_{\partial E_{i}}\dot{\gamma}_{\lambda })(1-\inf_{\partial A_{i}}\Phi
_{\lambda }^{E_{i}})+(\sup_{\partial E_{i}}\gamma _{\lambda })\sup_{\partial
A_{i}}\Psi _{\lambda }^{E_{i}}\right) h_{i}.
\end{equation*}%
Let $x$ be a point of maximum of $\dot{\gamma}_{\lambda }$ on $\partial K$.
It follows that%
\begin{equation*}
\dot{\gamma}_{\lambda }\left( x\right) \leq \dot{\gamma}_{\lambda
}^{A}\left( x\right) +(\sup_{\partial K}\gamma _{\lambda })\Psi _{\lambda
}^{A}\left( x\right) +\sum_{i=1}^{k}\left( \dot{\gamma}_{\lambda }\left(
x\right) (1-\inf_{\partial A_{i}}\Phi _{\lambda }^{E_{i}})+(\sup_{\partial
E_{i}}\gamma _{\lambda })\sup_{\partial A_{i}}\Psi _{\lambda
}^{E_{i}}\right) h_{i}\left( x\right) .
\end{equation*}%
Since $\sum h_{i}\equiv 1$, we see that $\dot{\gamma}_{\lambda }\left(
x\right) $ cancels out on both sides, and we obtain
\begin{equation*}
\dot{\gamma}_{\lambda }\left( x\right) \sum_{i=1}^{k}(\inf_{\partial
A_{i}}\Phi _{\lambda }^{E_{i}})h_{i}\left( x\right) \leq \dot{\gamma}%
_{\lambda }^{A}\left( x\right) +(\sup_{\partial K}\gamma _{\lambda })\Psi
_{\lambda }^{A}\left( x\right) +\sum_{i=1}^{k}(\sup_{\partial E_{i}}\gamma
_{\lambda })(\sup_{\partial A_{i}}\Psi _{\lambda }^{E_{i}})h_{i}\left(
x\right) .
\end{equation*}%
Since $h\leq h_{i}\left( x\right) \leq 1$ where $h:=\min_{i}\inf_{K}h_{i}>0$%
, we obtain from here (\ref{estimate of G sum (ii)}).
\end{proof}
\subsection{Proof of Theorem \protect\ref{main theorem}: Upper bound}
\label{SecUpper} As in the statement of Theorem \ref{main theorem}, let $M$
be a connected sum of parabolic manifolds $M_{1},\ldots ,M_{k}$, where all $%
M_{i}$, $i=1,\ldots ,k$ satisfy $($\ref{LY type}$)$ and $\left( RCA\right) $%
. Let $V_{i}(r)=V_{i}\left( o_{i},r\right) $ be the volume function on $%
M_{i} $ at $o_{i}\in K_{i}=M_{i}\setminus E_{i}$. We also assume that every $%
V_i(r)$ is either critical or subcritical, that is, condition (d) of Section %
\ref{notion}. Let $V(r)=V\left(o,r\right) $ be the volume function on $M$ at
a reference point $o\in K$.
It suffices to prove the main estimate (\ref{ptoo}) for large enough $t$
because for small $t$ we have $p(t,o,o)\asymp t^{-N/2}$ and $V(\sqrt{t}%
)\asymp t^{N/2}$.
Fix a connected precompact open set $A$ with smooth boundary such that $%
A\supset K_{\epsilon }$ for large enough $\epsilon >0$ as in Lemmas \ref%
{lemma estimate of important integral} and \ref{Lemma derivative RHS}
applied to all ends $M_{i}$.
Recall that the integrated resolvent $\gamma _{\lambda }$ is defined by (\ref%
{Gla}). By Lemmas \ref{Lemma iii} and \ref{Lemma estimate of G sum}, we
have, for any $\lambda >0$ and any $i=1,\ldots ,k$,
\begin{equation}
\sup_{\partial K}\gamma _{\lambda }\leq \frac{C}{\inf_{\partial A_{i}}\Phi
_{\lambda }^{E_{i}}}, \label{estimate of G sum}
\end{equation}%
where $C=C\left( K,A\right) $.
Assume first that all manifolds $M_{i}$ are subcritical. Applying (\ref%
{resolvent estimate (i)}) on each end $M_{i}$ we obtain that%
\begin{equation*}
\inf_{\partial A_{i}}\Phi _{\lambda }^{E_{i}}\geq c\lambda V_{i}(\frac{1}{%
\sqrt{\lambda }})
\end{equation*}%
provided $\lambda \leq \lambda _{0}=\lambda _{0}\left( A\right) $.
Substituting into (\ref{estimate of G sum}), we obtain that, for $\lambda
\leq \lambda _{0}$,
\begin{equation*}
\sup_{\partial K}\gamma _{\lambda }\leq \frac{C}{\lambda V_{\max }(\frac{1}{%
\sqrt{\lambda }})},
\end{equation*}%
where $V_{\max }(r)=\max_{1\leq i\leq k}V_{i}(r)$. By Lemma \ref{Lemma of G}$%
\left( i\right) $, we conclude that, for all $t\geq t_{0}=t_{0}\left(
\lambda _{0}\right) $,
\begin{equation}
p(t,o,o)\leq \frac{C}{V_{\max }(\sqrt{t})} \label{on-diagonal subcritical}
\end{equation}%
which proves the on-diagonal upper bound in (\ref{ptoo}) in the subcritical
case.
Assume now that there exists at least one critical end. Let it be $M_{j}$.
Applying (\ref{resolvent estimate (ii)}) in $M_{j}$, we have%
\begin{equation}
\inf_{\partial A_{j}}\Phi _{\lambda }^{E_{j}}\geq \frac{c}{\log \frac{1}{\lambda
}}, \label{Ej}
\end{equation}
which together with (\ref{estimate of G sum}) yields, for all $\lambda \leq
\lambda _{0}$,%
\begin{equation}
\sup_{\partial K}\gamma _{\lambda }\leq C\log \frac{1}{\lambda }.
\label{estimate of G critical sum}
\end{equation}%
However, as we have pointed out before, in order to obtain the upper bound
in (\ref{ptoo}) in the critical case, we need an additional argument about
$\dot{\gamma}_{\lambda }$.
For that, let us use the estimate (\ref{estimate of G sum (ii)}) of $%
\sup_{\partial K}\dot{\gamma}_{\lambda }$. Substituting into (\ref{estimate
of G sum (ii)}) the estimates (\ref{estimate of G^A1}) and (\ref{estimate of
R_A^1}), we obtain%
\begin{equation*}
(\sup_{\partial K}\dot{\gamma}_{\lambda })\inf_{\partial A_{j}}\Phi
_{\lambda }^{E_{j}}\leq C+C\sup_{\partial K}\gamma _{\lambda }\left(
1+\sum_{i=1}^{k}\sup_{\partial A_{i}}\Psi _{\lambda }^{E_{i}}\right) .
\end{equation*}%
Substituting here (\ref{Ej}), (\ref{estimate of G critical sum}), (\ref%
{estimate of R_K^1}), we obtain, for all $\lambda \leq \lambda _{0}$,
\begin{equation*}
\sup_{\partial K}\dot{\gamma}_{\lambda }\frac{1}{\log \frac{1}{\lambda }}%
\leq C+C\log \frac{1}{\lambda }\left( 1+\frac{1}{\lambda \log ^{2}\frac{1}{%
\lambda }}\right) \leq \frac{C^{\prime }}{\lambda \log \frac{1}{\lambda }},
\end{equation*}%
which implies%
\begin{equation*}
\sup_{\partial K}\dot{\gamma}_{\lambda }\leq \frac{C}{\lambda }\text{ for
all }\lambda \leq \lambda _{0}.
\end{equation*}%
By Lemma \ref{Lemma of G} $\left( ii\right) $, we conclude that%
\begin{equation}
p(t,o,o)\leq \frac{C}{t}\quad \text{for all }t\geq t_{0}
\label{on-diagonal critical}
\end{equation}%
which finishes the proof of the upper bound in (\ref{ptoo}) in the critical
case.
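The chain of inequalities above for $\sup_{\partial K}\dot{\gamma}_{\lambda }$ rests on the elementary fact that $\lambda \log ^{2}\frac{1}{\lambda }\leq 1$ for small $\lambda $, so that $\log \frac{1}{\lambda }\big( 1+\frac{1}{\lambda \log ^{2}\frac{1}{\lambda }}\big) \leq \frac{C}{\lambda \log \frac{1}{\lambda }}$. A quick numerical check, illustrative only; the constant $C=2$ below is an arbitrary choice:

```python
import math

# Elementary inequality behind the last step: for small lam,
# lam * log(1/lam)**2 <= 1, hence
# log(1/lam) * (1 + 1/(lam * log(1/lam)**2)) <= C / (lam * log(1/lam))
# for a suitable constant; C = 2 is an arbitrary illustrative choice.
def lhs(lam: float) -> float:
    L = math.log(1 / lam)
    return L * (1 + 1 / (lam * L ** 2))

def rhs(lam: float, C: float = 2.0) -> float:
    L = math.log(1 / lam)
    return C / (lam * L)

ok = all(lhs(lam) <= rhs(lam) for lam in (1e-3, 1e-5, 1e-8))
print(ok)  # True
```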
\subsection{Proof of Theorem \protect\ref{main theorem}: Lower bound}
\label{lower bound} Let $M$ be a connected sum satisfying the assumption of
Theorem \ref{main theorem}. Let us observe that%
\begin{equation}
V(r)\approx V_{1}(r)+V_{2}(r)+\cdots +V_{k}(r)\approx V_{\max }(r)
\label{VVmax}
\end{equation}%
for all $r>0$. By (\ref{on-diagonal subcritical}) and (\ref{on-diagonal
critical}), we obtain that, for all $t>0,$
\begin{equation}
p(t,o,o)\leq \frac{C}{V(\sqrt{t})}. \label{on-diagonal upper bound}
\end{equation}%
Since each $V_{i}\left( r\right) $ satisfies the doubling condition, so does
$V\left( r\right) $ by (\ref{VVmax}). By \cite[Theorem 7.2]%
{Coulhon-Grigoryan}, the upper bound (\ref{on-diagonal upper bound})
together with the doubling property of $V\left( r\right) $ implies the
matching lower bound%
\begin{equation*}
p(t,o,o)\geq \frac{c}{V(\sqrt{t})}.
\end{equation*}%
Replacing here $V$ by $V_{\max }$, we finish the proof of the lower bound in
(\ref{ptoo}) and, hence, the proof of Theorem \ref{main theorem}.
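The comparison (\ref{VVmax}) reduces to the elementary bounds $V_{\max }(r)\leq \sum_{i}V_{i}(r)\leq k\,V_{\max }(r)$. The following sketch illustrates this with hypothetical power-law volume functions standing in for the $V_{i}$:

```python
# Elementary comparison behind (VVmax): for any functions V_1, ..., V_k,
# max_i V_i(r) <= sum_i V_i(r) <= k * max_i V_i(r).
# The power-law volume functions below are hypothetical stand-ins
# (two subcritical exponents and one critical exponent alpha = 2).
def total_and_max(r, alphas):
    vols = [r ** a for a in alphas]
    return sum(vols), max(vols), len(vols)

for r in (10.0, 1e3, 1e6):
    total, vmax, k = total_and_max(r, (1.2, 1.7, 2.0))
    assert vmax <= total <= k * vmax
print("VVmax comparison holds")
```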
\section{Off-diagonal estimates}
\setcounter{equation}{0}\label{SecOff}In this section, we prove Theorems \ref%
{T1}-\ref{T3} by combining Theorem \ref{main theorem} with some results from
\cite{G-SC Dirichlet}, \cite{G-SC hitting} and \cite{G-SC ends}.
For any open set $\Omega $ in any weighted manifold $M$, define the \textit{%
exit probability function} in $\Omega $: for all $x\in \Omega $ and $t>0$,%
\begin{equation*}
\psi _{\Omega }\left( x,t\right) =\mathbb{P}_{x}(\tau _{\Omega }\leq t).
\end{equation*}%
Equivalently, $\psi _{\Omega }\left( x,t\right) $ is the minimal
non-negative solution of the heat equation $\partial _{t}u=\Delta u$ in $%
\Omega \times \mathbb{R}_{+}$ with the initial condition $u|_{t=0}=0$ and
the boundary condition $u|_{\partial \Omega }=1$.
We will use the abstract upper and lower off-diagonal estimates of \cite[%
Theorem 3.5]{G-SC ends} for the heat kernel $p\left( t,x,y\right) $ on an
arbitrary manifold $M$ for $x\in A$ and $y\in B$ where $A,B$ are open
subsets of $M$ such that either $\overline{A}$ and $\overline{B}$ are disjoint or
$\overline{B}\subset A$. These estimates use the exit probabilities $\psi
_{A}\left( x,t\right) $\ and $\psi _{B}\left( y,t\right) $ and their time
derivatives. Besides, they use the quantities%
\begin{equation*}
P^{+}\left( t\right) =\sup_{s\in \left[ t/4,t\right] }\sup_{z_{1}\in
\partial A,\ z_{2}\in \partial B}p\left( s,z_{1},z_{2}\right) \text{ \ and \
}P^{-}\left( t\right) =\inf_{s\in \left[ t/4,t\right] }\inf_{z_{1}\in
\partial A,\ z_{2}\in \partial B}p\left( s,z_{1},z_{2}\right)
\end{equation*}%
and%
\begin{equation*}
G^{+}\left( t\right) =\int_{0}^{t}\sup_{z_{1}\in \partial A,\ z_{2}\in
\partial B}p\left( s,z_{1},z_{2}\right) ds\ \text{and}\ G^{-}\left( t\right)
=\int_{0}^{t}\inf_{z_{1}\in \partial A,\ z_{2}\in \partial B}p\left(
s,z_{1},z_{2}\right) ds.
\end{equation*}%
With these notations, the estimates of \cite[Theorem 3.5]{G-SC ends} read as
follows: for all $x\in A,y\in B$ and $t>0$,%
\begin{eqnarray}
p(t,x,y) &\approx &p_{A}\left( t,x,y\right) +P^{\pm }\left( t\right) \psi
_{A}\left( x,\tilde{t}\right) \psi _{B}\left( y,\tilde{t}\right) \notag \\
&&+G^{\pm }\left( \tilde{t}\right) \left[ \partial _{t}\psi _{A}\left(
x,\xi \right) \psi _{B}\left( y,\tilde{t}\right) +\partial _{t}\psi
_{B}\left( y,\zeta \right) \psi _{A}\left( x,\tilde{t}\right) \right] ,
\label{general full estimate}
\end{eqnarray}%
where the index \textquotedblleft $+$\textquotedblright\ is used for the
upper bound, \textquotedblleft $-$\textquotedblright\ is used for the lower
bound, $\tilde{t}=t$ for the upper bound, $\tilde{t}=\frac{1}{4}t$ for the
lower bound, $\xi $ and $\zeta $ are some values from $\left[ t/4,t\right] $
that may be different for upper and lower bounds.
\begin{proof}[Proof of Theorem \protect\ref{T1}]
Recall that $M$ is a connected sum of $M_{1},\ldots ,M_{k}$ with a central
part $K$, where each $M_{i}$ satisfies conditions $\left( a\right) $-$\left(
d\right) $ in Subsection \ref{notion}. We apply (\ref{general full estimate}%
) with $A=E_{i}$ and $B=E_{j}$ where $i\neq j$. Since $A$ and $B$ are
disjoint, we have $p_{A}\left( t,x,y\right) =0$ for all $x\in A$ and $y\in B$%
.
Note that, for all $z_{1}\in \partial E_{i}$ and $z_{2}\in \partial E_{j}$,
the distance $d\left( z_{1},z_{2}\right) $ is bounded from above and below
by positive constants. Therefore, assuming $t>1$, we obtain by the local
Harnack inequality and Theorem \ref{main theorem} that%
\begin{equation}
P^{\pm }\left( t\right) \asymp Cp\left( ct,o,o\right) \approx \frac{1}{%
V\left( \sqrt{t}\right) }. \label{Ppm}
\end{equation}%
Let us estimate similarly $G^{\pm }\left( t\right) $. Assuming $t>1$, we can
split the integrals in the definition of $G^{\pm }\left( t\right) $ into the
sum of two integrals: over $(0,1]$ and over $(1,t]$. The first integral is
bounded, while in the second integral we can apply the local Harnack
inequality to the heat kernel and, hence, replace $z_{1},z_{2}$ by $o$.
Using further the estimate (\ref{ptoo}) of Theorem \ref{main theorem}, we
obtain that, for large $t$,%
\begin{equation}
G^{\pm }\left( t\right) \approx \int_{1}^{t}\frac{1}{V\left( \sqrt{s}\right)
}ds. \label{Gint}
\end{equation}%
If all ends are subcritical, then by (\ref{subcritical}) we have, for large $%
t$,
\begin{equation*}
\int_{1}^{t}\frac{ds}{V(\sqrt{s})}\leq \frac{Ct}{V(\sqrt{t})}.
\end{equation*}%
Since also%
\begin{equation*}
\int_{1}^{t}\frac{ds}{V(\sqrt{s})}\geq \int_{t/2}^{t}\frac{ds}{V(\sqrt{s})}%
\geq \frac{t}{2V(\sqrt{t})},
\end{equation*}%
we obtain that
\begin{equation}
G^{\pm }\left( \tilde{t}\right) \approx \frac{t}{V(\sqrt{t})}.
\label{central integral subcritical}
\end{equation}%
If there exists at least one critical end, then $V\left( \sqrt{t}\right)
\approx t$, and (\ref{Gint}) implies, for large $t$,%
\begin{equation}
G^{\pm }\left( \tilde{t}\right) \approx \log t.
\label{central integral critical}
\end{equation}%
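Both asymptotics for $G^{\pm }$ can be checked numerically on the hypothetical model volume $V(r)=r^{\alpha }$: $\alpha <2$ for a subcritical end, $\alpha =2$ for a critical one.

```python
import math

# Numerical check of (Gint): with the model V(r) = r**alpha,
# int_1^t ds / V(sqrt(s)) = int_1^t s**(-alpha/2) ds, comparable to
# t / V(sqrt(t)) when alpha < 2 and to log(t) when alpha = 2.
def integral(t, alpha, n=100000):
    # simple midpoint rule on [1, t]
    h = (t - 1) / n
    return sum(h * (1 + (i + 0.5) * h) ** (-alpha / 2) for i in range(n))

t = 1e4
sub = integral(t, 1.5) / (t / t ** 0.75)   # bounded ratio (about 3.6 here)
crit = integral(t, 2.0) / math.log(t)      # bounded ratio (about 1.0 here)
print(round(sub, 2), round(crit, 2))
```

In the subcritical example the ratio equals $4(1-t^{-1/4})$ exactly, consistent with (\ref{central integral subcritical}); in the critical one the integral is exactly $\log t$, consistent with (\ref{central integral critical}).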
Note that the exit probability $\psi _{E_{i}}\left( x,t\right) $ depends only on
the intrinsic geometry of $E_{i}$. Since each $M_{i}$ satisfies $($\ref{LY
type}$)$ and $\left( RCA\right) $, we can use the result of \cite[Theorem
4.6]{G-SC hitting}, which gives the following: for all $x\in E_{i}$ with large
enough $\left\vert x\right\vert $,%
\begin{equation}
\psi _{E_{i}}\left( x,t\right) \asymp \left\{
\begin{array}{ll}
\frac{C\left\vert x\right\vert ^{2}\exp \left( -b\left\vert x\right\vert
^{2}/t\right) }{V_i\left( \left\vert x\right\vert \right) H\left( \left\vert
x\right\vert \right) } & t<2\left\vert x\right\vert ^{2}, \\
\frac{C}{H\left( \sqrt{t}\right) }\int_{\left\vert x\right\vert }^{\sqrt{t}}%
\frac{sds}{V_i\left( s\right) }, & t\geq 2\left\vert x\right\vert ^{2}%
\end{array}%
\right. \label{psiEi}
\end{equation}%
and, for large enough $\left\vert x\right\vert $ and $t$,
\begin{equation}
\partial _{t}\psi _{E_{i}}\left( x,t\right) \asymp \frac{CH\left(
\left\vert x\right\vert \right) \exp \left( -b\left\vert x\right\vert
^{2}/t\right) }{V_{i}\left( \sqrt{t}\right) \left( H\left( \left\vert
x\right\vert \right) +H\left( \sqrt{t}\right) \right) H\left( \sqrt{t}%
\right) }, \label{PsiEider}
\end{equation}%
where $H$ is the function defined in (\ref{H}). Note that in the case of
bounded $\left\vert x\right\vert $ the estimate (\ref{PsiEider}) matches the
estimate (\ref{Hest}) used in the proof of Lemma \ref{Lemma derivative RHS}.
If $M_{i}$ is subcritical then $H\left( r\right) \approx r^{2}/V_{i}\left(
r\right) $. Substituting this into (\ref{psiEi}) and (\ref{PsiEider}),
we obtain, for all large enough $t$ and $\left\vert x\right\vert $,%
\begin{align}
\psi _{E_{i}}\left( x,t\right) \asymp & Ce^{-b\frac{\left\vert
x\right\vert ^{2}}{t}}, \label{hitting subcritical} \\
\partial _{t}\psi _{E_{i}}\left( x,t\right) \asymp & \frac{C}{t}%
D(x,t)e^{-b\frac{\left\vert x\right\vert ^{2}}{t}},
\label{hitting derivative subcritical}
\end{align}%
where $D$ is defined in (\ref{function D}).
If $M_{i}$ is critical then $H\left( r\right) \approx \log r$ which yields%
\begin{align}
\psi _{E_{i}}\left( x,t\right) \asymp & CU(x,t)e^{-b\frac{%
\left\vert x\right\vert ^{2}}{t}}, \label{hitting critical} \\
\partial _{t}\psi _{E_{i}}\left( x,t\right) \asymp & \frac{C}{t\log
t}W(x,t)e^{-b\frac{\left\vert x\right\vert ^{2}}{t}},
\label{hitting derivative critical}
\end{align}%
where $U$ is defined in (\ref{function U}) and $W$ is defined in (\ref%
{function W}).
Now we are in a position to verify all the heat kernel estimates claimed in
Theorem \ref{T1} for $x\in E_{i},y\in E_{j}$ with $i\neq j$. It suffices to
prove all the estimates for large enough $\left\vert x\right\vert
,\left\vert y\right\vert $ and $t$. Then the estimates for all $x\in E_{i}$
and $y\in E_{j}$ (while $t$ is still large enough) follow by application of
the local Harnack inequality.
$\left( i\right) $ If all ends are subcritical, then (\ref{general full
estimate}), (\ref{Ppm}), (\ref{central integral subcritical}), (\ref{hitting
subcritical}), (\ref{hitting derivative subcritical}) yield:%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{V(\sqrt{t})}\left[ 1+D(x,t)+D(y,t)\right] e^{-b\frac{%
\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}.
\end{equation*}%
Observing that, by (\ref{function D}), $D\left( x,t\right) $ is bounded
and that
\begin{equation*}
\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}\approx d^{2}\left(
x,y\right)
\end{equation*}%
we obtain (\ref{T1i}).
$\left( ii\right) $ Now let at least one of the ends be critical, so that $%
V\left( r\right) \approx r^{2}$.
$\left( ii\right) _{1}$ Let both $M_{i}$ and $M_{j}$ be subcritical. Then (\ref%
{general full estimate}), (\ref{Ppm}), (\ref{central integral critical}), (%
\ref{hitting subcritical}), (\ref{hitting derivative subcritical}) yield:%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\left( 1+\left( D(x,t)+D(y,t)\right) \log t\right)
e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}},
\end{equation*}%
which proves (\ref{T1ii1}).
$\left( ii\right) _{2}$ Let both $M_{i}$ and $M_{j}$ be critical. Then we
obtain from (\ref{general full estimate}), (\ref{Ppm}), (\ref{central
integral critical}), (\ref{hitting critical}), (\ref{hitting derivative
critical}) that%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\left[ U(x,t)U(y,t)+W(x,t)U\left( y,t\right)
+U(x,t)W\left( y,t\right) \right] e^{-b\frac{\left\vert x\right\vert
^{2}+\left\vert y\right\vert ^{2}}{t}},
\end{equation*}%
that is, (\ref{T1ii2}).
$\left( ii\right) _{3}$ Let $M_{i}$ be subcritical and $M_{j}$ be critical.
Then we obtain similarly%
\begin{equation*}
p(t,x,y)\asymp \frac{C}{t}\left[ U\left( x,t\right) +D(x,t)U\left(
y,t\right) \log t+W(x,t)\right] e^{-b\frac{\left\vert x\right\vert
^{2}+\left\vert y\right\vert ^{2}}{t}}.
\end{equation*}%
By (\ref{U+W}) we can replace here $U+W$ by $1$, which yields (\ref{T1ii3}).
\end{proof}
For the proof of Theorems \ref{T2} and \ref{T3}, we will again use the
estimate (\ref{general full estimate}), but this time we take $A=E_{i}$ and $%
B=E_{i}^{\prime }$ where $E_{i}^{\prime }=E_{i}\setminus K^{\prime }$ and $%
K^{\prime }$ is a closed $\epsilon $-neighborhood of $K$ for large enough $%
\epsilon $. In this case we have $\overline{B}\subset A$.
Note that, for all $z_{1}\in \partial E_{i}$ and $z_{2}\in \partial
E_{i}^{\prime }$, the distance $d\left( z_{1},z_{2}\right) $ is bounded from
above and below by positive constants. Hence, arguing as above, we obtain
the same estimates of $P^{\pm }\left( t\right) ,G^{\pm }\left( t\right) $ as
stated in the proof of Theorem \ref{T1}. The estimates of $\psi _{E_{i}}$
and $\partial _{t}\psi _{E_{i}}$ also remain the same. Clearly, $\psi
_{E_{i}^{\prime }}$ and $\partial _{t}\psi _{E_{i}^{\prime }}$ satisfy
similar estimates.
To handle the term $p_{A}\left( t,x,y\right) =p_{E_{i}}(t,x,y)$ in (\ref%
{general full estimate}), we use the result of \cite[Theorem 4.9]{G-SC
Dirichlet} that says the following: for all $t>0$ and all $x,y\in E_{i}$
with large $\left\vert x\right\vert ,\left\vert y\right\vert $,
\begin{equation*}
p_{E_{i}}(t,x,y)\asymp \frac{C}{V_{i}(x,\sqrt{t})}\left( \frac{H(\left\vert
x\right\vert )}{H(\left\vert x\right\vert )+H(\sqrt{t})}\right) \left( \frac{%
H(\left\vert y\right\vert )}{H(\left\vert y\right\vert )+H(\sqrt{t})}\right)
e^{-b\frac{d^{2}}{t}},
\end{equation*}%
where $d=d\left( x,y\right) $. If $M_{i}$ is subcritical, then $H(r)\approx
r^{2}/V_{i}(r)$, which gives
\begin{equation}
p_{E_{i}}(t,x,y)\asymp C\frac{D(x,t)D(y,t)}{V_{i}(x,\sqrt{t})}e^{-b\frac{%
d^{2}}{t}}. \label{Dirichlet subcritical}
\end{equation}%
If $M_{i}$ is critical, then $H(r)\approx \log r$, which gives
\begin{equation}
p_{E_{i}}(t,x,y)\asymp C\frac{W(x,t)W(y,t)}{V_{i}(x,\sqrt{t})}e^{-b\frac{%
d^{2}}{t}}. \label{Dirichlet critical}
\end{equation}
For the proof of Theorems \ref{T2} and \ref{T3} we need the following lemma.
\begin{lemma}
\label{lemma e} For all $x,y\in E_{i}$ and $\sqrt{t}\geq \min (|x|,|y|)$, we
have%
\begin{equation}
Ce^{-b\frac{|x|^{2}+|y|^{2}}{t}}\asymp C^{\prime }e^{-b^{\prime }\frac{%
d^{2}(x,y)}{t}}. \label{equivalence of e}
\end{equation}%
Moreover, if $\sqrt{t}\geq \left\vert x\right\vert $ then%
\begin{equation}
\frac{C}{V_{i}\left( x,\sqrt{t}\right) }e^{-b\frac{d^{2}(x,y)}{t}}\asymp
\frac{C^{\prime }}{V_{i}\left( \sqrt{t}\right) }e^{-b^{\prime }\frac{%
d^{2}(x,y)}{t}}. \label{Vixo}
\end{equation}
\end{lemma}
\begin{proof}
Set $\delta =\mathrm{diam}K$. The triangle inequality $\left\vert
x\right\vert +\left\vert y\right\vert +\delta \geq d(x,y)$ implies%
\begin{equation}
e^{-b\frac{|x|^{2}+|y|^{2}}{t}}\leq e^{-b^{\prime }\frac{d^{2}(x,y)-\delta
^{2}}{t}}\leq C^{\prime }e^{-b^{\prime }\frac{d^{2}(x,y)}{t}}.
\label{upper e}
\end{equation}%
To prove the opposite inequality, assume that $\left\vert x\right\vert \leq
\sqrt{t}$ (the case $\left\vert y\right\vert \leq \sqrt{t}$ is similar). The
triangle inequality%
\begin{equation*}
\left\vert y\right\vert \leq \left\vert x\right\vert +\delta +d(x,y)
\end{equation*}%
implies%
\begin{equation*}
\left\vert x\right\vert +\left\vert y\right\vert \leq 2\left\vert
x\right\vert +\delta +d(x,y)\leq 2\sqrt{t}+\delta +d(x,y),
\end{equation*}%
whence it follows that%
\begin{equation*}
\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}\leq
b^{\prime }\frac{d^{2}(x,y)}{t}+const,
\end{equation*}%
which completes the proof of (\ref{equivalence of e}).
To prove (\ref{Vixo}), observe first that, by (\ref{equivalence of e}), the
term $d^{2}\left( x,y\right) $ on both sides of (\ref{Vixo}) can be
replaced by $\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}$. The
doubling property of\ $V_{i}\left( x,r\right) $ yields%
\begin{equation*}
\frac{V_{i}\left( o_{i},\sqrt{t}\right) }{V_{i}\left( x,\sqrt{t}\right) }%
\leq C\left( 1+\frac{\left\vert x\right\vert }{\sqrt{t}}\right) ^{\beta
}\leq Ce^{\varepsilon \frac{\left\vert x\right\vert ^{2}}{t}},
\end{equation*}%
for arbitrarily small $\varepsilon >0$, which implies that%
\begin{eqnarray}
\frac{C}{V_{i}\left( x,\sqrt{t}\right) }e^{-b\frac{\left\vert x\right\vert
^{2}+\left\vert y\right\vert ^{2}}{t}} &\leq &\frac{C^{\prime }}{V_{i}\left(
o,\sqrt{t}\right) }e^{\varepsilon \frac{\left\vert x\right\vert ^{2}}{t}%
}e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}
\notag \\
&\leq &\frac{C^{\prime }}{V_{i}\left( o,\sqrt{t}\right) }e^{-b^{\prime }%
\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}.
\label{Vo}
\end{eqnarray}%
The opposite inequality is proved similarly.
\end{proof}
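The key elementary inequality in this proof, $(1+u)^{\beta }\leq C_{\varepsilon }e^{\varepsilon u^{2}}$ with $u=\left\vert x\right\vert /\sqrt{t}$, holds for every $\varepsilon >0$ because $\beta \log (1+u)-\varepsilon u^{2}$ is bounded above on $[0,\infty )$. A numerical sketch; the values $\beta =3$ and $\varepsilon =0.01$ below are arbitrary illustrative choices:

```python
import math

# The doubling step uses (1 + u)**beta <= C_eps * exp(eps * u**2), u >= 0,
# valid for every eps > 0 with C_eps = exp(sup_u [beta*log(1+u) - eps*u**2]);
# the supremum is finite since the quadratic term eventually dominates.
def sup_gap(beta, eps, n=100000, umax=1000.0):
    return max(beta * math.log(1 + i * umax / n) - eps * (i * umax / n) ** 2
               for i in range(n + 1))

gap = sup_gap(beta=3.0, eps=0.01)
C_eps = math.exp(gap)  # then (1+u)**beta <= C_eps * e^{eps u^2} on the grid
print(round(gap, 2))
```

The supremum is attained at the root of $\beta /(1+u)=2\varepsilon u$, so $C_{\varepsilon }$ grows as $\varepsilon \downarrow 0$ but is finite for each fixed $\varepsilon $, which is all the proof requires.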
\begin{proof}[Proof of Theorem \protect\ref{T2}$\left( a\right) $]
We consider the same cases as in Theorem \ref{T1} and use the same estimates
of all the terms in (\ref{general full estimate}), except for the Dirichlet
heat kernel. Note that the case $\left( ii\right) _{3}$ cannot occur because
$x,y$ lie in the same end $E_{i}$.
$\left( i\right) $ Assume that all ends are subcritical. Substituting (\ref%
{Dirichlet subcritical}), (\ref{Ppm}), (\ref{central integral subcritical}),
(\ref{hitting subcritical}) and (\ref{hitting derivative subcritical}) into (%
\ref{general full estimate}), we obtain
\begin{align}
p(t,x,y)\asymp & C\frac{D(x,t)D(y,t)}{V_{i}(x,\sqrt{t})}e^{-b\frac{d^{2}}{t}}
\notag \\
& +\frac{C}{V(\sqrt{t})}\left( 1+D(x,t)+D(y,t)\right) e^{-b\frac{\left\vert
x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}. \label{D x+y}
\end{align}%
By (\ref{function D}) and the assumption $\sqrt{t}\leq \min \left(
\left\vert x\right\vert ,\left\vert y\right\vert \right) $ we have
\begin{equation*}
D\left( x,t\right) =D\left( y,t\right) =1
\end{equation*}%
and, hence,
\begin{equation}
p\left( t,x,y\right) \asymp \frac{C}{V_{i}(x,\sqrt{t})}e^{-b\frac{%
d^{2}\left( x,y\right) }{t}}+\frac{C}{V\left( \sqrt{t}\right) }e^{-b\frac{%
\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}. \label{x+y}
\end{equation}%
Using the volume doubling property of $V_{i}$, we obtain
\begin{align}
\frac{1}{V(\sqrt{t})}e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert
y\right\vert ^{2}}{t}}=& \frac{V_{i}(o_{i},\sqrt{t})}{V_{\max }(\sqrt{t})}%
\frac{V_{i}(x,\sqrt{t})}{V_{i}(o_{i},\sqrt{t})}\frac{1}{V_{i}(x,\sqrt{t})}%
e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}
\notag \\
\leq & C\left( 1+\frac{\left\vert x\right\vert }{\sqrt{t}}\right) ^{\beta }%
\frac{1}{V_{i}(x,\sqrt{t})}e^{-b\frac{\left\vert x\right\vert
^{2}+\left\vert y\right\vert ^{2}}{t}} \notag \\
\leq & \frac{C^{\prime }}{V_{i}(x,\sqrt{t})}e^{-b^{\prime }\frac{d^{2}\left(
x,y\right) }{t}}, \label{vd max}
\end{align}%
which shows that the first term in (\ref{x+y}) is dominant, hence yielding (%
\ref{Vi}).
$\left( ii\right) $ Let at least one of the ends be critical.
$\left( ii\right) _{1}$ Let $M_{i}$ be subcritical. In this case we have as
above%
\begin{equation}
p(t,x,y)\asymp \frac{C}{V_{i}(x,\sqrt{t})}e^{-b\frac{d^{2}}{t}}+C\frac{\log t%
}{t}e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}%
}. \label{caseii1 mod}
\end{equation}%
By (\ref{or2logr}) and the volume doubling property of $M_{i}$, we obtain
\begin{align}
\frac{\log t}{t}e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert
y\right\vert ^{2}}{t}}=& \frac{\log t}{t}V_{i}(o_{i},\sqrt{t})\frac{1}{%
V_{i}(x,\sqrt{t})}\frac{V_{i}(x,\sqrt{t})}{V_{i}(o_{i},\sqrt{t})}e^{-b\frac{%
\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}} \notag \\
\leq & \frac{C}{V_{i}(x,\sqrt{t})}\left( 1+\frac{|x|}{\sqrt{t}}\right)
^{\beta }e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}%
}{t}} \notag \\
\leq & \frac{C^{\prime }}{V_{i}(x,\sqrt{t})}e^{-b^{\prime }\frac{d^{2}\left(
x,y\right) }{t}}. \label{vd log}
\end{align}%
Substituting (\ref{vd log}) into (\ref{caseii1 mod}), we obtain (\ref{Vi}).
$\left( ii\right) _{2}$ Let $M_{i}$ be critical. Substituting (\ref%
{Dirichlet critical}), (\ref{Ppm}), (\ref{central integral critical}), (\ref%
{hitting critical}) and (\ref{hitting derivative critical}) into (\ref%
{general full estimate}), we obtain
\begin{eqnarray}
p(t,x,y) &\asymp &C\frac{W(x,t)W(y,t)}{V_{i}(x,\sqrt{t})}e^{-b\frac{d^{2}}{t}%
} \notag \\
&&+\frac{C}{t}\left[ U(x,t)U(y,t)+W(x,t)U\left( y,t\right) +W(y,t)U(x,t)%
\right] e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}%
}{t}}. \label{same end ii2}
\end{eqnarray}%
By (\ref{function W}) and $\sqrt{t}\leq \min \left( \left\vert x\right\vert
,\left\vert y\right\vert \right) $, we have
\begin{equation*}
W(x,t)=W(y,t)=1.
\end{equation*}%
Substituting into (\ref{same end ii2}) we obtain
\begin{eqnarray*}
p(t,x,y) &\asymp &\frac{C}{V_{i}(x,\sqrt{t})}e^{-b\frac{d^{2}}{t}} \\
&&+\frac{C}{t}\left[ U(x,t)U(y,t)+U\left( y,t\right) +U(x,t)\right] e^{-b%
\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}.
\end{eqnarray*}%
Since $U$ is bounded, (\ref{vd max}) implies that the second term is
dominated by the first one, which yields (\ref{Vi}).
\end{proof}
\begin{proof}[Proof of Theorem \protect\ref{T2}$\left( b\right) $]
Let $V_{i}\left( r\right) \approx V_{\max }\left( r\right) $. In view of
part $\left( a\right) $, we can assume that $\sqrt{t}>\min \left( \left\vert
x\right\vert ,\left\vert y\right\vert \right) $. Since by the doubling
property of $V_{i}$%
\begin{equation*}
\frac{C}{V_{i}\left( x,\sqrt{t}\right) }e^{-b\frac{d^{2}\left( x,y\right) }{t%
}}\asymp \frac{C^{\prime }}{V_{i}\left( y,\sqrt{t}\right) }e^{-b^{\prime }%
\frac{d^{2}\left( x,y\right) }{t}}
\end{equation*}%
(cf. (\ref{Vo})), the estimate (\ref{Vi}) is symmetric in $x,y$. Hence, we
can assume that $\sqrt{t}>\left\vert x\right\vert $. As in Theorem \ref{T1},
we can also assume that $\left\vert x\right\vert ,\left\vert y\right\vert $
are large enough.
$\left( i\right) $ Let all the ends be subcritical. Then we have again (\ref%
{D x+y}). Using $\sqrt{t}>\left\vert x\right\vert $ and (\ref{equivalence of
e}), we can replace $e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert
y\right\vert ^{2}}{t}}$ in the right hand side of (\ref{D x+y}) by $e^{-b%
\frac{d^{2}\left( x,y\right) }{t}}$. Using further (\ref{Vixo}), we can
replace $V_{i}\left( x,\sqrt{t}\right) $ by $V_{i}\left( \sqrt{t}\right) $
and, hence, by $V\left( \sqrt{t}\right) $, which yields%
\begin{equation*}
p\left( t,x,y\right) \asymp \frac{C}{V(\sqrt{t})}\left(
D(x,t)D(y,t)+1+D(x,t)+D(y,t)\right) e^{-b\frac{d^{2}\left( x,y\right) }{t}},
\end{equation*}%
which implies (\ref{Vi}), since $D(x,t)$ and $D(y,t)$ are bounded.
$\left( ii\right) $ Let at least one of the ends be critical. Then by $%
V_{i}\left( r\right) \approx V\left( r\right) $, the end $M_{i}$ has to be
critical, too. As in the case $\left( ii\right) _{2}$ of the proof of
Theorem \ref{T2}$\left( a\right) $, we obtain again (\ref{same end ii2}),
where by (\ref{equivalence of e}) we can replace $e^{-b\frac{\left\vert
x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}}$ in the right hand side
of (\ref{same end ii2}) by $e^{-b\frac{d^{2}}{t}}.$ Using further (\ref{Vixo}%
), we replace $V_{i}\left( x,\sqrt{t}\right) $ by $V_{i}\left( \sqrt{t}%
\right) \approx V\left( \sqrt{t}\right) \approx t$, which yields
\begin{eqnarray*}
p(t,x,y) &\asymp &\frac{C}{t}\left[ W(x,t)W(y,t)+U(x,t)U(y,t)+W(x,t)U\left(
y,t\right) +W(y,t)U(x,t)\right] e^{-b\frac{d^{2}}{t}} \\
&=&\frac{C}{t}\left\{ W(x,t)+U(x,t)\right\} \left\{ W(y,t)+U(y,t)\right\}
e^{-b\frac{d^{2}}{t}}.
\end{eqnarray*}%
Using (\ref{U+W}), we conclude (\ref{Vi}).
\end{proof}
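The passage from the bracketed sum to the product in the last display is the elementary factorization $WW'+UU'+WU'+W'U=(W+U)(W'+U')$. A quick numeric sanity check (illustrative only, our own script):

```python
import random

# Factorization used in the last display:
#   W(x)W(y) + U(x)U(y) + W(x)U(y) + W(y)U(x) = (W(x)+U(x)) (W(y)+U(y)).
# Numeric sanity check with random nonnegative values:
random.seed(0)
for _ in range(100):
    Wx, Wy, Ux, Uy = (random.uniform(0.0, 2.0) for _ in range(4))
    lhs = Wx * Wy + Ux * Uy + Wx * Uy + Wy * Ux
    rhs = (Wx + Ux) * (Wy + Uy)
    assert abs(lhs - rhs) < 1e-12
```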
\begin{proof}[Proof of Theorem \protect\ref{T3}]
As in Theorem \ref{T1}, we can assume that $\left\vert x\right\vert
,\left\vert y\right\vert $ are large enough. Since $\sqrt{t}\geq \min \left(
\left\vert x\right\vert ,\left\vert y\right\vert \right) $ and both
estimates (\ref{T3i}) and (\ref{T3ii}) are symmetric in $x$, $y$, we can
assume without loss of generality that $\sqrt{t}\geq \left\vert x\right\vert
$. Then, by Lemma \ref{lemma e}, the function $V_{i}\left( x,\sqrt{t}\right)
$ in the estimates (\ref{Dirichlet subcritical}) and (\ref{Dirichlet
critical}) can be replaced by $V_{i}\left( \sqrt{t}\right) $.
$\left( i\right) $ Assume that all ends are subcritical. Applying (\ref%
{equivalence of e}) to (\ref{D x+y}) and observing that the function $D$ is
bounded, we obtain (\ref{T3i}).
$\left( ii\right) $ Let at least one of the ends be critical. Since $M_{i}$
is subcritical, substituting (\ref{Dirichlet subcritical}), (\ref{Ppm}), (%
\ref{central integral critical}), (\ref{hitting subcritical}) and (\ref%
{hitting derivative subcritical}) into (\ref{general full estimate}), we
obtain
\begin{align*}
p(t,x,y)\asymp & C\frac{D(x,t)D(y,t)}{V_{i}(\sqrt{t})}e^{-b\frac{d^{2}}{t}}+%
\frac{C}{t}e^{-b\frac{\left\vert x\right\vert ^{2}+\left\vert y\right\vert
^{2}}{t}} \notag \\
& +C\frac{\log t}{t}\left( D(x,t)+D(y,t)\right) e^{-b\frac{\left\vert
x\right\vert ^{2}+\left\vert y\right\vert ^{2}}{t}},
\end{align*}%
which together with (\ref{equivalence of e}) implies (\ref{T3ii}).
\end{proof}
\begin{acknowledgement}
This work was completed during the stay of the first and second authors at
the Institute of Mathematical Sciences of the Chinese University of Hong
Kong. The authors are grateful to CUHK for its hospitality and support. The
authors would like to thank Professor Yuji Kasahara for sending his preprint.
\end{acknowledgement}
% arXiv:1608.01596 (math.PR), "Heat kernel estimates on connected sums of parabolic manifolds".

% arXiv:2212.10759, "The construction of $\epsilon$-splitting map".
\begin{abstract}
For a geodesic ball with non-negative Ricci curvature and almost maximal volume, without using a compactness argument, we construct an $\epsilon$-splitting map on a concentric geodesic ball of uniformly small radius. There are two new technical points in our proof. The first is the way of finding $n$ directional points by induction and a stratified almost Gou-Gu Theorem. The other is the error estimates of projections, which guarantee that the $n$ directional points we find really determine $n$ different directions.
\end{abstract}
\section{Introduction}
For a compact $n$-dimensional Riemannian manifold with $Rc\geq (n- 1)$, if its volume is close to the volume of the unit round sphere $\mathbb{S}^{n}$, Colding \cite{Colding-shape} proved that the manifold is Gromov-Hausdorff close to $\mathbb{S}^{n}$. Analogously to the positive Ricci curvature case, for a geodesic ball with $Rc\geq 0$ and almost maximal volume, Colding \cite[Theorem $0.8$]{Colding-volume} pointed out that such a geodesic ball is Gromov-Hausdorff close to the Euclidean ball, which can be proved in a similar way as \cite{Colding-shape}.
Later, Cheeger \cite[Theorem $9.69$]{Cheeger-note} gave a complete proof of \cite[Theorem $0.8$]{Colding-volume}. Cheeger's proof is by a compactness argument, which relies on the almost cone property and the almost splitting property established in Ricci limit space theory (see \cite{CC-Ann}, \cite{CC1}, \cite{CC2} and \cite{CC3}).
We recall the definition of $\epsilon$-Gromov-Hausdorff approximation. Let $(\mathbf{X}, d_\mathbf{X})$ and $(\mathbf{Y}, d_\mathbf{Y})$ be two metric spaces. For $\epsilon> 0$, a map $f: \mathbf{X}\rightarrow \mathbf{Y}$ is called an \textbf{$\epsilon$-Gromov-Hausdorff approximation} if
\begin{equation}\nonumber
\left\{
\begin{array}{rl}
&\mathbf{Y}\subset \mathbf{U}_{\epsilon}\big(f(\mathbf{X})\big) \\
&\Big|d_\mathbf{Y}\big(f(x_1), f(x_2)\big)- d_\mathbf{X}(x_1, x_2)\Big|< \epsilon \ , \quad \quad \quad \quad \forall x_1, x_2\in \mathbf{X}
\end{array} \right.
\end{equation}
where $\mathbf{U}_{\epsilon}\big(f(\mathbf{X})\big)= \Big\{z\in \mathbf{Y}: d\big(z, f(\mathbf{X})\big)\leq \epsilon\Big\}$. For simplicity, we write G-H approximation for Gromov-Hausdorff approximation in the rest of the paper.
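For finite metric spaces, the two defining conditions can be checked mechanically. The helper below is an illustrative sketch (function and variable names are ours, not from the paper):

```python
import itertools

# Illustrative checker of the eps-Gromov-Hausdorff approximation definition
# for *finite* metric spaces: dX, dY are distance functions, f a dict X -> Y.
def is_eps_GH_approximation(X, Y, dX, dY, f, eps):
    # (1) every z in Y lies within eps of the image f(X)
    net = all(min(dY(z, f[x]) for x in X) <= eps for z in Y)
    # (2) f distorts pairwise distances by less than eps
    dist = all(abs(dY(f[x1], f[x2]) - dX(x1, x2)) < eps
               for x1, x2 in itertools.product(X, X))
    return net and dist

# Two-point example on the real line:
X = [0.0, 1.0]
Y = [0.0, 0.9]
d = lambda a, b: abs(a - b)
f = {0.0: 0.0, 1.0: 0.9}
assert is_eps_GH_approximation(X, Y, d, d, f, eps=0.2)
assert not is_eps_GH_approximation(X, Y, d, d, f, eps=0.05)
```

In the example the only distortion is $|0.9-1|=0.1$, so the map is an $\epsilon$-G-H approximation exactly when $\epsilon >0.1$.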
In the $Rc\geq (n- 1)$ case \cite{Colding-shape}, the corresponding G-H approximation is explicitly constructed, although some topological results are used. On the other hand, Cheeger's proof for the $Rc\geq 0$ case does not provide a way of constructing the corresponding G-H approximation. We are interested in constructing an explicit G-H approximation in the $Rc\geq 0$ case. Looking at the model space, the exponential map, the linear harmonic map or some optimal transportation map seem to be possible candidates. But it is not easy to judge which one is canonical from the compactness argument, since we do not know whether the estimates related to these maps are stable under the Gromov-Hausdorff topology. We note that the stable version of the splitting property of Ricci limit spaces is the quantitative version of the almost splitting property in \cite{CC-Ann} (see Lemma \ref{lem property 2 of G-H appr}), which is well known to experts in this field. This result is our starting point.
Now we recall the definition of $\epsilon$-splitting map introduced in \cite{CN} as follows. If $Rc(M^n)\geq 0$ and $B_r(p)\subseteq M^n$ is a geodesic ball, then the harmonic map $\mathbf{b}= \big\{\mathbf{b}_i\big\}_{i= 1}^{n}: B_{r}(p)\rightarrow \mathbb{R}^n$ is called an \textbf{$\epsilon$-splitting map}, if $\mathbf{b}$ satisfies
\begin{align}
\sup_{B_{r}(p)\atop i= 1, \cdots, n} |\nabla \mathbf{b}_i|\leq 1+ \epsilon ,\quad \quad and \quad \quad \fint_{B_{r}(p)} \big|\langle \nabla \mathbf{b}_i, \nabla \mathbf{b}_j\rangle- \delta_{ij}\big|^2\leq \epsilon .\nonumber
\end{align}
From \cite{CC-Ann} and \cite{CC1} (see also \cite{CJN}), the existence of an $\epsilon$-G-H approximation from such a geodesic ball to the corresponding Euclidean ball is equivalent to the existence of an $\epsilon'$-splitting map between them. In fact, from \cite{Ding} and \cite{Cheeger}, an $\epsilon'$-splitting map comes from pulling back harmonic functions on Euclidean space by an $\epsilon$-G-H approximation. We can view an $\epsilon'$-splitting map as a `canonical' G-H approximation, although there is no uniqueness restriction. Hence we can reduce the construction of an $\epsilon$-G-H approximation in the $Rc\geq 0$ case to the construction of an $\epsilon'$-splitting map.
In this paper we construct an $\epsilon$-splitting map on a concentric geodesic ball of uniformly small radius, and establish the corresponding quantitative estimate in terms of the volume ratio between the geodesic ball and the Euclidean ball. Although the quantitative version of the almost splitting property in \cite{CC-Ann} (see Lemma \ref{lem property 2 of G-H appr}) is our starting point, it is not enough for our purpose. One new ingredient of our construction is finding $n$ direction points by combining the stratified Gou-Gu Theorem with the induction method, which is the content of Theorem \ref{thm dist of points in geodesic balls induc-dim}. The $(k+ 1)$th direction point $q_{k+ 1}$ is chosen from the image of $\mathscr{P}_k$, where $\mathscr{P}_k$ is the composition of the first $k$ projection maps with respect to the first $k$ direction points, in order. To establish the stratified Gou-Gu Theorem for $(k+ 1)$ direction points, the distance between $q_{k+ 1}$ and the origin $p$ needs to be on a larger scale than the scale of the error estimate in the stratified Gou-Gu Theorem. In other words, we need a lower bound on the radius of the image of $\mathscr{P}_k$ with respect to the origin $p$. This lower bound is proved by the stratified Gou-Gu Theorem for $k$ direction points and the volume comparison theorem (see Step (1) of the proof of Theorem \ref{thm dist of points in geodesic balls induc-dim}).
To prove that the $n$ direction points we find really determine $n$ almost orthogonal directions, we need to show that the upper bound of the discrete Lipschitz constant of the distance map is close to $1$, and to use the integral Toponogov Theorem. The first ingredient concerns the distance aspect, the second the angle aspect.
One technical difficulty we overcome is proving the corresponding error estimates of projections, which yield that the upper bound of the discrete Lipschitz constant of the distance map is close to $1$. These estimates of projections are trivial in the Euclidean case; to obtain them on manifolds, we need to use the stratified Gou-Gu Theorem carefully, and the induction method is used again. These estimates are provided in Section \ref{sec quasi-isom}. After this preparatory work, we prove that the distance map determined by the $n$ direction points is in fact a quasi-isometry.
The reason that we can only deal with geodesic balls of uniformly small radius is that we do not have a quantitative version of the almost cone property established in \cite{CC-Ann}. Lacking the almost cone property, we cannot relate the cases at different scales in the expected quantitative way.
In Section \ref{sec existence of splitting map}, we use the fact that the upper bound of the discrete Lipschitz constant is close to $1$, combined with the integral Toponogov Theorem for $Rc\geq 0$, to establish the almost orthogonality of the distance maps.
The integral Toponogov Theorem for $Rc\geq 0$ was proved in \cite{Colding-volume} by approximating the distance function by harmonic functions and a covering argument. Motivated by the cosine law for triangles (the terms with distance functions appear as squares of distance functions), we consider the model function $f$ defined in Subsection \ref{subsec integral est of diff}, related to the square of the distance function, and present a different proof of this result (Corollary \ref{cor Colding-1.28}), which avoids the covering argument. Our argument is consistent with the earlier argument for the almost Gou-Gu Theorem of distance functions. Finally, we show that the harmonic map constructed from the distance map is an $\epsilon$-splitting map.
\section{Almost Gou-Gu Theorem of distance functions}\label{sec general Pytha}
We always assume $\beta\geq 3$ in this section unless otherwise mentioned.
\subsection{Distance estimate by multiple line integral of difference}
For $B_r(p)\subseteq (M^n, g)$ and $\beta\geq 3$, let $d(p, q)= \beta r$. For $x\in B_r(p)$, let $\rho(x)= d(q, x)$. If there exists $\theta_x\in S(T_qM)$ such that $x=\exp_q(\rho(x)\theta_x)$ and $\exp_q(t\theta_x)$ is a segment up to $t=\beta r$, then define $\pi(x)=\exp_q(\beta r \theta_x)\in \partial B_{\beta r}(q)$.
In the following, we first estimate the distance $d(x,y)$ of $x,y\in B_r(p)$ when $\pi(x), \pi(y)$ are well defined. In this case, we define $\displaystyle \sigma_x(t)=\gamma_{\pi(x),x}(t)$, for $t\in [0,d(x,\pi(x))]$, where $\gamma_{\pi(x),x}$ is a geodesic segment from $\pi(x)$ to $x$; and the relative velocity $r_x^y$ (for $\rho(x)\neq \beta r$) by $\displaystyle r_x^y:=\frac{d(\pi(y),y)}{d(\pi(x),x)}= \Big|\frac{\beta r-\rho(y)}{\beta r-\rho(x)}\Big|$. For $t\in [0,d(x,\pi(x))]$, denote
\begin{align*}
\tilde{\sigma}_y(t)=\sigma_y(r_x^yt),\ \ \ \tau_t(s)= \gamma_{\sigma_x(t), \tilde{\sigma}_y(t)}(s) \text{ for } s\in [0, l_t],
\end{align*}
where $\displaystyle l_t:= d\big(\sigma_x(t), \tilde{\sigma}_y(t)\big)$. Denote $\mathfrak{S}(x)\vcentcolon = \mathcal{S}(\rho(x))$, where
\begin{align*}
\mathcal{S}(t)=\left\{
\begin{aligned}
1&, & \text{ if }t-\beta r>0, \\
-1&, & \text{ if } t-\beta r<0.
\end{aligned}
\right.
\end{align*}
One general philosophy of distance and derivative estimates for distance functions on manifolds is to reduce them to the $C^0$ estimate and the integral gradient estimate of the error function between the original function and an auxiliary model function $f$, together with the Hessian estimate of $f$. Although the auxiliary function $f$ can be freely chosen, to get the Hessian estimate of $f$ we usually need to choose $f$ as the solution of a suitable elliptic PDE. One typical technical tool is the following estimate of distance functions.
\begin{lemma}\label{lem property 2 of G-H appr}
{For $x, y\in B_r(p)$ with well-defined $\pi(x), \pi(y)$, we have
\begin{align}
&\quad \quad \Big|d(x, y)^2- \Big(\big[\rho(x)- \rho(y)\big]^2+ d(\pi(x), \pi(y))^2\Big)\Big| \nonumber\\
&\leq C(n)\Big\{\int_0^{d(x,\pi(x))} \big|\nabla (\rho^2- f)\big|(\tilde{\sigma}_y(t))+ \big|\nabla (\rho^2- f)\big|(\sigma_x(t))dt \nonumber \\
&\quad + \sup_{B_{2r}(p)} |\rho^2- f|+ \int_0^{d(x,\pi(x))} dt\int_{\gamma_{\sigma_x(t), \tilde{\sigma}_y(t)}(s)} |\nabla^2 f- 2g| ds+ r^2\beta^{-1} \Big\}.\nonumber
\end{align}
}
\end{lemma}
\begin{remark}\label{rem almost G-G in Rn}
{Note that if $M=\mathbb{R}^n$, then it is straightforward to verify that
$$\sup_{x,y\in B_r(p)}\Big|d(x,y)^2-d(\pi(x),\pi(y))^2-\big|\rho(x)-\rho(y)\big|^2\Big|\le \frac{12r^2}{\beta}.$$
The above lemma is the Riemannian manifold version of this almost Gou-Gu inequality in $\mathbb{R}^n$.
}
\end{remark}
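The Euclidean inequality in the remark can be probed numerically: in $\mathbb{R}^n$ one has $d(x,y)^2-(\rho(x)-\rho(y))^2-d(\pi(x),\pi(y))^2 = 2(1-\cos\theta)\big(\rho(x)\rho(y)-(\beta r)^2\big)$, where $\theta$ is the angle at $q$. The Monte-Carlo sketch below (our own script; illustrative parameters $n=3$, $\beta=5$, $r=1$) samples points of $B_r(p)$ and checks the claimed bound:

```python
import math, random

# Monte-Carlo probe of the Euclidean almost Gou-Gu inequality
# (illustrative parameters n=3, beta=5, r=1; not part of the paper).
random.seed(1)
n, beta, r = 3, 5.0, 1.0
q = [0.0] * n
p = [beta * r] + [0.0] * (n - 1)      # so that d(p, q) = beta * r

def sub(a, b):
    return [a[i] - b[i] for i in range(n)]

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def rand_in_ball(center, rad):
    # rejection sampling of a uniform point in the ball B_rad(center)
    while True:
        v = [random.uniform(-rad, rad) for _ in range(n)]
        if norm(v) <= rad:
            return [center[i] + v[i] for i in range(n)]

worst = 0.0
for _ in range(20000):
    x, y = rand_in_ball(p, r), rand_in_ball(p, r)
    rho_x, rho_y = norm(sub(x, q)), norm(sub(y, q))
    # pi(x): radial projection from q onto the sphere of radius beta*r
    pix = [beta * r * c / rho_x for c in sub(x, q)]
    piy = [beta * r * c / rho_y for c in sub(y, q)]
    err = abs(norm(sub(x, y)) ** 2
              - (rho_x - rho_y) ** 2 - norm(sub(pix, piy)) ** 2)
    worst = max(worst, err)

assert worst <= 12 * r * r / beta    # the claimed bound 12 r^2 / beta
```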
\begin{figure}[H]
\begin{center}
\includegraphics{p1.eps}
\caption{Lemma \ref{lem property 2 of G-H appr}}
\label{figure: lemma2.1}
\end{center}
\end{figure}
{\it Proof:}~
{\textbf{Step (1)}. Since
\begin{align*}
\int_0^{d(x,\pi(x))} \big|\nabla (\rho^2- f)\big|(\tilde{\sigma}_y(t))+ \big|\nabla (\rho^2- f)\big|(\sigma_x(t))dt =\int_{\gamma_{\pi(y),y}\cup \gamma_{\pi(x),x}} \big|\nabla (\rho^2- f)\big|,
\end{align*}
we know the right-hand side is symmetric with respect to $x$ and $y$. Thus we can assume $d(x,\pi(x))\ge d(y,\pi(y))$ without loss of generality.
For $\zeta\in (0, 1)$ to be determined later, we first assume $\displaystyle d(x,\pi(x))=|\rho(x)- \beta r| \geq \zeta r$. Define $\displaystyle \alpha= \frac{\rho(y)- \rho(x)}{d(x,\pi(x))}$. Then $|\alpha|\le \frac{2}{\zeta}$ and
\begin{align}
\rho\big(\tau_t(l_t)\big)=\beta r+\big(\alpha+\mathfrak{S}(x)\big)t \ ,\quad \quad
\rho\big(\tau_t(0)\big)=\beta r+\mathfrak{S}(x)t. \nonumber
\end{align}
Let $\mathcal{U}_t$ be the solution of
\begin{equation}\nonumber
\left\{
\begin{array}{rl}
\mathcal{U}_t''&= 1 \\
\mathcal{U}_t(0)&= \frac{1}{2}\big(\beta r+\mathfrak{S}(x)t\big)^2, \quad \quad and \quad \quad \mathcal{U}_t(l_t)= \frac{1}{2}\bigg(\beta r+\big(\alpha+\mathfrak{S}(x)\big)t\bigg)^2 . \\
\end{array} \right.
\end{equation}
Then for any $s_1, s_2\in [0, l_t]$, we have
\begin{align}
\big|(\frac{1}{2} f(\tau_t)- \mathcal{U}_t)'(s_2)- (\frac{1}{2} f(\tau_t)- \mathcal{U}_t)'(s_1)\big|\leq \int_0^{l_t} |\nabla^2\frac{1}{2} f- g|(\tau_t(s))ds \label{1st deri diff}
\end{align}
and
\begin{align}
\mathcal{U}_t'(l_t)- \mathcal{U}_t'(0)= l_t . \label{diff of 1st derivative}
\end{align}
From the mean value theorem, there is some $\xi\in [0, l_t]$ such that
\begin{align}
(\frac{1}{2} f(\tau_t)- \mathcal{U}_t)(l_t)- (\frac{1}{2} f(\tau_t)- \mathcal{U}_t)(0)= l_t\cdot (\frac{1}{2} f(\tau_t)- \mathcal{U}_t)'(\xi), \nonumber
\end{align}
which implies
\begin{align}
\big|(\frac{1}{2} f(\tau_t)- \mathcal{U}_t)'(\xi)\big|\leq \frac{1}{l_t}\sup_{\sigma_x[0, t]\cup \tilde{\sigma}_y[0, t]} |\rho^2- f| . \label{one point deri bound}
\end{align}
From (\ref{1st deri diff}) and (\ref{one point deri bound}), we get
\begin{align}
\sup_{s\in [0, l_t]}\big|(\frac{1}{2} f(\tau_t)- \mathcal{U}_t)'(t)\big|\leq \frac{\sup\limits_{\sigma_x[0, t]\cup \tilde{\sigma}_y[0, t]} |\rho^2- f|}{l_t}+ \int_0^{l_t} |\nabla^2\frac{1}{2} f- g|(\tau_t(s))ds . \label{crucial tilde U est}
\end{align}
\textbf{Step (2)}. Since $l_t$ is almost everywhere differentiable on $[0, d(x, \pi(x))]$, from the extension of the first variation formula (for example, see \cite{Liu}) and (\ref{diff of 1st derivative}), we have
\begin{align}
\frac{d}{dt}l_t&= (\alpha+ \mathfrak{S}(x))\langle \nabla \rho, \tau_t'\rangle(l_t)- \mathfrak{S}(x)\langle \nabla \rho, \tau_t'\rangle(0) \nonumber \\
&= \frac{(\alpha+ \mathfrak{S}(x))t+ \mathfrak{S}(x)\beta r+ \frac{1}{2}\alpha \beta r}{\big[\beta r+ (\alpha+ \mathfrak{S}(x))t\big](\beta r+\mathfrak{S}(x)t)}l_t+ \frac{\alpha^2\beta r t\big[\beta r+ \frac{1}{2}(\alpha+ 2\mathfrak{S}(x))t\big]}{\big[\beta r+ (\alpha+ \mathfrak{S}(x))t\big](\beta r+ \mathfrak{S}(x)t)}\frac{1}{l_t}
\nonumber \\
&\quad + (I)+ (II) , \label{eq need 1.1.1}
\end{align}
where
\begin{align}
(I)& = \frac{\alpha+ \mathfrak{S}(x)}{\beta r+ (\alpha+ \mathfrak{S}(x))t}\Big[(\frac{1}{2} \rho^2\circ \tau_t)'(l_t)- (\frac{1}{2} f\circ \tau_t)'(l_t)\Big] \nonumber \\
&\quad - \frac{\mathfrak{S}(x)}{\beta r+ \mathfrak{S}(x)t}\Big[(\frac{1}{2} \rho^2\circ \tau_t)'(0)- (\frac{1}{2} f\circ \tau_t)'(0)\Big] \nonumber \\
(II)& = \frac{\alpha+ \mathfrak{S}(x)}{\beta r+ (\alpha+ \mathfrak{S}(x))t}\big[(\frac{1}{2} f\circ \tau_t)'- \mathcal{U}_t'\big](l_t)- \frac{\mathfrak{S}(x)}{\beta r+ \mathfrak{S}(x)t}\big[(\frac{1}{2} f\circ \tau_t)'- \mathcal{U}_t'\big](0) .\nonumber
\end{align}
To simplify (\ref{eq need 1.1.1}), set $h(t)= \frac{\beta r+ \mathfrak{S}(x)t}{\beta r+ (\alpha+\mathfrak{S}(x))t}+ \frac{\beta r+ (\alpha+ \mathfrak{S}(x))t}{\beta r+\mathfrak{S}(x)t}$;
then we have
\begin{align}
&\frac{\big[\beta r+ (\alpha+ \mathfrak{S}(x))t\big](\beta r+\mathfrak{S}(x)t)}{2l_t}\cdot \frac{d}{dt}\Big\{\frac{l_t^2}{\big[\beta r+ (\alpha+ \mathfrak{S}(x))t\big](\beta r+ \mathfrak{S}(x)t)}-h(t)\Big\} \nonumber \\
&= (I)+ (II) \label{first diff equa 1.1}
\end{align}
By $d(\pi(x),x)=|\rho(x)-\beta r|=|\rho(x)-\rho(p)|\le d(x,p)\le r$, we know $\pi(x)\in B_{2r}(p)$ and hence $\sigma_x(t)\in B_{2r}(p)$ for any $t\in [0,d(x,\pi(x))]$. Similarly, $d(\tilde{\sigma}_y(t),p)\le 2r$. Noting that $\beta \ge 3$, $|(\alpha+\mathfrak{S}(x))t|, |\mathfrak{S}(x)t|\le 2r$ and $l_t\le 4r$, from (\ref{first diff equa 1.1}) and (\ref{crucial tilde U est}) we obtain
\begin{align}
&\quad \Big|\frac{d}{dt}\Big\{\frac{l_t^2}{\big[\beta r+ (\alpha+ \mathfrak{S}(x))t\big](\beta r+\mathfrak{S}(x) t)}-h(t)\Big\}\Big| \nonumber \\
&\leq \frac{216(|\alpha|+ 1)r}{(\beta r)^3}\Big\{\big|\nabla (\frac{1}{2} \rho^2- \frac{1}{2} f)\big|(\tau_t(l_t))+ \big|\nabla (\frac{1}{2} \rho^2- \frac{1}{2} f)\big|(\tau_t(0))\Big\}\nonumber \\
&\quad + \frac{216(|\alpha|+ 1)}{(\beta r)^3}\Big\{2\sup_{\sigma_x[0, t]\cup \tilde{\sigma}_y[0, t]} |\rho^2- f|+ 4r\int_0^{l_t} |\nabla^2\frac{1}{2} f- g|(\tau_t(s))ds\Big\}
\end{align}
\textbf{Step (3)}. Since for any $t_1, t_2\in [0,d(x,\pi(x))]$,
$$|l_{t_1}-l_{t_2}|\le d(\sigma_x(t_1),\sigma_x(t_2))+d(\tilde{\sigma}_{y}(t_1),\tilde{\sigma}_{y}(t_2))\le (1+r_x^y)|t_2-t_1|,$$ we know $l_t$ is a Lipschitz function. Then we get
\begin{align}
&\quad \Big|\Big\{\frac{l_t^2}{\big[\beta r+ (\alpha+ \mathfrak{S}(x))t\big](\beta r+ \mathfrak{S}(x)t)}-h(t)\Big\}- \Big\{\frac{l_0^2}{(\beta r)^2}-h(0)\Big\}\Big| \nonumber \\
&\leq \frac{C(n)}{(\beta r)^3\zeta}\Big\{r\int_0^t \big|\nabla (\rho^2- f)\big|(\tau_v(l_v))+ \big|\nabla (\rho^2- f)\big|(\tau_v(0))dv \nonumber \\
&\quad \quad \quad + t\sup_{\sigma_x[0, t]\cup \tilde{\sigma}_y[0, t]} |\rho^2- f|+ r\int_0^t \int_0^{l_v} |\nabla^2 f- 2g|(\tau_v(s))dsdv\Big\}.\nonumber
\end{align}
Taking $t= d(x,\pi(x))\leq r$ in the above, and noting that $l_{d(x,\pi(x))}= d(x, y)$ and $\displaystyle \Big(\sigma_x[0, d(x,\pi(x))]\cup \tilde{\sigma}_y[0, d(x,\pi(x))]\Big)\subseteq B_{2r}(p)$, we have
\begin{align}
&\quad \Big|\Big\{\frac{d(x, y)^2}{\rho(y)\rho(x)}-\frac{\rho(x)}{\rho(y)}-\frac{\rho(y)}{\rho(x)}\Big\}- \Big\{\frac{l_0^2}{(\beta r)^2}-2\Big\}\Big|\nonumber \\
&\leq \frac{C(n)}{\beta^3 r^2\zeta}\Big\{\int_0^{d(x,\pi(x))} \big|\nabla (\rho^2- f)\big|(\tilde{\sigma}_y(t))+ \big|\nabla (\rho^2- f)\big|(\sigma_x(t))dt \nonumber \\
&\quad + \sup_{B_{2r}(p)} |\rho^2- f|+ \int_0^{d(x,\pi(x))} \int_{\gamma_{\sigma_x(t), \tilde{\sigma}_y(t)}} |\nabla^2 f- 2g| dt\Big\} . \nonumber
\end{align}
Noting that $\displaystyle \Big|\frac{\rho(x)\rho(y)}{(\beta r)^2}- 1\Big|\, l_0^2\leq C(n)\frac{r^2}{\beta}$, we obtain
\begin{align}
&\quad \quad \Big|d(x, y)^2- \Big(\big[\rho(x)- \rho(y)\big]^2+ l_0^2\Big)\Big| \nonumber\\
&\leq \frac{C(n)}{\beta\zeta}\Big\{\int_0^{d(x,\pi(x))} \big|\nabla (\rho^2- f)\big|(\tilde{\sigma}_y(s))+ \big|\nabla (\rho^2- f)\big|(\sigma_x(s))ds \nonumber \\
&\quad + \sup_{B_{2r}(p)} |\rho^2- f|+ \int_0^{d(x,\pi(x))} \int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} |\nabla^2 f- 2g| ds\Big\}\nonumber \\
&\quad + C(n)r^2(\zeta + \beta^{-1}) .\label{needed in lemma}
\end{align}
If $|\rho(x)-\beta r|< \zeta r$, then by assumption, $|\rho(y)- \beta r|=d(y,\pi(y))\le d(x,\pi(x))< \zeta r$. Thus $|l_0- d(x, y)|\leq 2\zeta r$ and we get
\begin{align}
\Big|d(x, y)^2- \Big(\big[\rho(x)- \rho(y)\big]^2+ l_0^2\Big)\Big|\leq C(n)\zeta r^2. \nonumber
\end{align}
Combining this with (\ref{needed in lemma}) and taking $\zeta= \beta^{-1}$, the conclusion follows.
}
\qed
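The comparison function $\mathcal{U}_t$ in Step (1) is an explicit quadratic: $\mathcal{U}_t''\equiv 1$ forces $\mathcal{U}_t(s)=\frac{s^2}{2}+As+B$ with $A, B$ fixed by the boundary values, and then (\ref{diff of 1st derivative}) holds regardless of those values. A numeric sketch with hypothetical values (ours, for illustration only):

```python
# U_t'' = 1 forces U_t(s) = s^2/2 + A s + B; boundary values fix A and B,
# and U_t'(l_t) - U_t'(0) = l_t holds no matter what they are.
# Hypothetical values for l_t, beta*r, t, alpha, S(x) (ours, illustrative):
l, beta_r, t, alpha, S = 2.0, 30.0, 1.5, 0.3, -1.0
U0 = 0.5 * (beta_r + S * t) ** 2                 # U_t(0)
Ul = 0.5 * (beta_r + (alpha + S) * t) ** 2       # U_t(l_t)
A = (Ul - U0) / l - l / 2.0                      # from U(l) = l^2/2 + A l + U0
U = lambda s: 0.5 * s * s + A * s + U0
Uprime = lambda s: s + A
assert abs(U(l) - Ul) < 1e-9
assert abs((Uprime(l) - Uprime(0)) - l) < 1e-12  # identity (diff of 1st derivative)
```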
\subsection{Integral estimates of model functions on balls}\label{subsec integral est of diff}
In this subsection, we denote by $V(s)= V(\rho^{-1}(s))$ the volume of the geodesic sphere $\rho^{-1}(s)$, where $\rho(x)= d(x, q)$.
\begin{definition}\label{def F and G}
{We define $f$ satisfying
\begin{equation}\nonumber
\left\{
\begin{array}{rl}
&\Delta f= 2n, \quad \quad \quad \quad on\ B_r(q), \\
&f\big|_{\partial B_r(q)}= r^2 .
\end{array} \right.
\end{equation}
}
\end{definition}
\begin{remark}This definition is similar to the one in Cheeger-Colding's almost metric cone theorem, and the following Propositions \ref{prop L2 diff of gadient of F}-\ref{prop integral Hessian small} are, in some sense, an adaptation of the proof of the almost metric cone theorem to the case of almost maximal volume growth. The equation above is the one satisfied by the function occurring in the cosine law of the model space, and hence contains much geometric information.
\end{remark}
\begin{lemma}\label{lem C0 bound of G}
{We have
\begin{align}
&-C(n)r^2\leq f(x)\leq \rho^2(x) \ , \quad \quad \quad \quad \forall x\in B_r(q) .\nonumber
\end{align}
}
\end{lemma}
{\it Proof:}~
{By the Laplacian Comparison Theorem, $\Delta(f- \rho^2)\geq 0$. By the Maximum Principle, we have $f(x)\leq \rho^2(x)$. Let $\phi(x)= d(z, x)$, where $d(z, q)= 2r$. Then, by the Laplacian Comparison Theorem, $\Delta(\phi^{4- 2n})\geq 2(n- 2)^2\phi^{2- 2n}\geq 2(n- 2)^2 3^{2- 2n}r^{2- 2n}$ for any $x\in B_r(q)$. Now we get
\begin{align}
\Delta\Big(\frac{n \phi^{4- 2n}}{(n- 2)^23^{2- 2n}r^{2- 2n}}- f\Big)\geq 0. \nonumber
\end{align}
Applying the Maximum Principle again, we obtain
\begin{align}
\sup_{B_r(q)} \Big(\frac{n \phi^{4- 2n}}{(n- 2)^23^{2- 2n}r^{2- 2n}}- f\Big)\leq \max_{\partial B_r(q)} \Big(\frac{n \phi^{4- 2n}}{(n- 2)^23^{2- 2n}r^{2- 2n}}- f\Big)\leq \frac{n r^{4- 2n}}{(n- 2)^23^{2- 2n}r^{2- 2n}}- r^2, \nonumber
\end{align}
which implies the conclusion by $\displaystyle \sup_{B_r(q)}\phi\leq 3r$.
}
\qed
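The barrier computation above uses the model identity $\Delta(r^{k})=k(k+n-2)r^{k-2}$ for radial powers in $\mathbb{R}^n$; with $k=4-2n$ this gives exactly $2(n-2)^2 r^{2-2n}$, the equality case of the comparison. A numeric sketch (our own script; the finite-difference check is at a hypothetical point):

```python
import math

# Model identity: in R^n, Lap(r^k) = k (k + n - 2) r^{k-2}; with k = 4 - 2n
# this equals 2 (n-2)^2 r^{2-2n}, the equality case behind the comparison.
for m in range(3, 8):
    k = 4 - 2 * m
    assert k * (k + m - 2) == 2 * (m - 2) ** 2

# Finite-difference check at a hypothetical point for n = 3:
n = 3
def lap_fd(f, x, h=1e-4):
    # central second differences in each coordinate direction
    s = 0.0
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        s += (f(xp) - 2.0 * f(x) + f(xm)) / (h * h)
    return s

r = lambda x: math.sqrt(sum(c * c for c in x))
u = lambda x: r(x) ** (4 - 2 * n)
x0 = [0.7, 0.2, 0.4]
expected = 2.0 * (n - 2) ** 2 * r(x0) ** (2 - 2 * n)
assert abs(lap_fd(u, x0) - expected) < 1e-3 * expected
```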
\begin{prop}\label{prop L2 diff of gadient of F}
{If $\displaystyle Rc\geq 0$ and $\displaystyle \frac{V(\partial B_r(q))}{r^{n- 1}}\geq (1- \omega)n\omega_n$, where $\omega\in (0, 1)$, then
\begin{align}
\fint_{B_r(q)} |\nabla (f- \rho^2)|^2 \leq C(n)\omega r^2. \nonumber
\end{align}
}
\end{prop}
{\it Proof:}~
{From Lemma \ref{lem C0 bound of G}, we get
\begin{align}
\int_{B_r(q)} |\nabla (f- \rho^2)|^2&= \int_{B_r(q)}(-\Delta f+ \Delta \rho^2)(f- \rho^2)\leq \sup_{B_r(q)}|\rho^2- f|\int_{B_r(q)}(2n- \Delta \rho^2) \nonumber \\
&\leq C(n)r^2\Big(2n\mathrm{V}(B_r(q))- \int_{\partial B_r(q)} 2\rho\Big)= C(n)r^2\mathrm{V}(B_r(q))\Big(n- r\frac{\mathrm{V}(\partial B_r(q))}{\mathrm{V}(B_r(q))}\Big) \nonumber \\
&\leq C(n)\omega r^2\, \mathrm{V}\big(B_r(q)\big), \nonumber
\end{align}
where the last inequality uses $\mathrm{V}(B_r(q))\leq \omega_n r^n$ (Bishop-Gromov) together with the assumption on $V(\partial B_r(q))$; dividing by $\mathrm{V}(B_r(q))$ implies the conclusion.
}
\qed
\begin{prop}\label{prop C0 diff of F}
{Assume $\displaystyle Rc\geq 0$ and $\displaystyle \frac{V(\partial B_r(q))}{n\omega_nr^{n- 1}}\geq (1- \omega)$, where $\omega\in (0, 1)$, then
\begin{align}
\sup_{B_{(1- 2\omega^{\frac{1}{2n+ 4}})r}(q)}\Big|f- \rho^2\Big|\leq C(n)r^2\omega^{\frac{1}{2n+ 4}} . \nonumber
\end{align}
}
\end{prop}
{\it Proof:}~
{By the scaling invariance of $Rc\geq 0$, we only need to show the conclusion for $r= 1$. From the Poincar\'e inequality and Proposition \ref{prop L2 diff of gadient of F}, we have
\begin{align}
\fint_{B_1(q)} \big|f- \rho^2\big|^2\leq C(n)\cdot \fint_{B_1(q)} \big|\nabla(f- \rho^2)\big|^2 \leq C(n)\omega .\label{L2 diff of F}
\end{align}
For any $x$ with $B_{2\ell}(x)\subset B_{2\sqrt{\ell}}(x)\subset B_1(q)$, where $\ell\in (0, 1)$ is to be determined later, from (\ref{L2 diff of F}) we have
\begin{align}
\frac{V\big(B_{\ell}(x)\big)}{V(B_1(q))}\min_{B_{\ell}(x)} \big|f- \rho^2\big|^2\leq C(n)\omega . \label{4.50.1}
\end{align}
From Bishop-Gromov Comparison Theorem, we have
\begin{align}
V\big(B_{\ell}(x)\big)\geq \big(\frac{\ell}{2}\big)^n\cdot V\big(B_{2}(x)\big)\geq \big(\frac{\ell}{2}\big)^n\cdot V\big(B_1(q)\big) . \label{4.50.2}
\end{align}
By (\ref{4.50.1}) and (\ref{4.50.2}),
\begin{align}
\min_{B_{\ell}(x)} \big|f- \rho^2\big|&\leq C(n)\big(\frac{1}{\ell}\big)^{\frac{n}{2}}\omega^{\frac{1}{2}}. \label{4.50.3}
\end{align}
From Cheng-Yau's gradient estimate and Lemma \ref{lem C0 bound of G}, we get
\begin{align}
\sup_{B_{\ell}(x)}|\nabla f|\leq \sup_{B_{\sqrt{\ell}}(x)}|\nabla f|\leq C(n)(\sqrt{\ell})^{-1}(\sup_{B_{2\sqrt{\ell}}(x)} |f|+ 2n) \leq \frac{C(n)}{\sqrt{\ell}}. \label{gradient of b has a bound}
\end{align}
Now from (\ref{4.50.3}) and (\ref{gradient of b has a bound}),
\begin{align}
&\quad \max_{B_{\ell}(x)} \Big|f- \rho^2\Big|\leq \min_{B_{\ell}(x)} \big|f- \rho^2\big|+ 2\ell \sup_{B_{\ell}(x)} \big|\nabla (f- \rho^2)\big| \nonumber \\
&\leq C(n)\big(\frac{1}{\ell}\big)^{\frac{n}{2}}\omega^{\frac{1}{2}}+ 2\ell\cdot \big(2+ \sup_{B_{\ell}(x)} \big|\nabla f\big|\big) \leq C(n)\big(\frac{1}{\ell}\big)^{\frac{n}{2}}\omega^{\frac{1}{2}}+ C(n)\sqrt{\ell} .\nonumber
\end{align}
Taking $\ell= \omega^{\frac{1}{n+ 2}}$ in the above, since $x$ ranges over all points with $B_{2\sqrt{\ell}}(x)\subset B_1(q)$, we have
\begin{align}
\sup_{B_{1- 2\omega^{\frac{1}{2n+ 4}}}(q)}\Big|f- \rho^2\Big|\leq C(n)\omega^{\frac{1}{2n+ 4}} . \nonumber
\end{align}
}
\qed
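For convenience we record the exponent bookkeeping behind the choice $\ell= \omega^{\frac{1}{n+ 2}}$, which balances the two terms of the last display of the proof:
\begin{align}
\big(\tfrac{1}{\ell}\big)^{\frac{n}{2}}\omega^{\frac{1}{2}}= \omega^{\frac{1}{2}- \frac{n}{2(n+ 2)}}= \omega^{\frac{1}{n+ 2}}\leq \omega^{\frac{1}{2n+ 4}}, \quad \quad \sqrt{\ell}= \omega^{\frac{1}{2n+ 4}}, \nonumber
\end{align}
and the constraint $B_{2\sqrt{\ell}}(x)\subset B_1(q)$ is what restricts $x$ to $B_{1- 2\omega^{\frac{1}{2n+ 4}}}(q)$.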
Now we derive the integral Hessian estimate of the difference.
\begin{prop}\label{prop integral Hessian small}
{If $\displaystyle Rc\geq 0$ and $\displaystyle \frac{V(\partial B_r(q))}{r^{n- 1}}\geq (1- \omega)n\omega_n$, where $\omega\in (0, \frac{1}{2})$, then
\begin{align*}
\fint_{B_{(1- \omega^{\frac{1}{32}})r}(q)} \big|\nabla^2 f- 2g\big|^2 \leq C(n)\omega^{\frac{1}{4}}.
\end{align*}
}
\end{prop}
{\it Proof:}~
{From \cite[Lemma $2.3$]{Xu-group}, for any $\tau\in (0, \frac{1}{2})$ we can choose $\phi\in C^\infty(M^n; [0, 1])$ with $\mathrm{supp}(\phi)\subseteq B_r(q)$ and $\phi\big|_{B_{(1- \tau)r}(q)}\equiv 1$; furthermore $|\Delta \phi|\leq C(n)\tau^{-8}r^{-2}$. Then from the Bochner formula and $\Delta f= 2n$, we have
\begin{align}
&\quad \int_{B_r}\Delta\phi\cdot (|\nabla f|^2- 4f)= \int_{B_r}\phi\cdot \Delta(|\nabla f|^2- 4f) \nonumber \\
&= \int_{B_r}\phi\cdot \Big(2|\nabla^2 f|^2+ 2\langle\nabla \Delta f, \nabla f\rangle+ 2Rc(\nabla f, \nabla f)- 8n\Big)\nonumber \\
&\geq 2\int_{B_r}\phi\cdot(|\nabla^2 f|^2- 4n) = 2\int_{B_r}\phi\cdot \big|\nabla^2 f- 2g\big|^2 \nonumber \\
&\geq 2\int_{B_{(1- \tau)r}} \big|\nabla^2 f- 2g\big|^2 .\nonumber
\end{align}
On the other hand, from (\ref{L2 diff of F}) and Proposition \ref{prop L2 diff of gadient of F},
\begin{align}
&\int_{B_r}\Delta\phi\cdot (|\nabla f|^2- 4f)\leq \sup_{B_r(q)}|\Delta\phi|\cdot \Big\{\int_{B_r} \big||\nabla f|^2- 4\rho^2\big|+ 4\int_{B_r}|\rho^2- f|\Big\} \nonumber \\
&\leq C(n)\tau^{-8}r^{-2}\cdot \Big\{\int_{B_r} |\nabla (f- \rho^2)|\cdot |\nabla(f+ \rho^2)|+ C(n)r^2\sqrt{\omega}\mathrm{V}(B_r)\Big\} \nonumber \\
&\leq C(n)\tau^{-8}r^{-2}\cdot \Big\{\sqrt{r^2\omega\mathrm{V}(B_r)}\cdot \Big(\sqrt{\int_{B_r} |\nabla(f- \rho^2)|^2}+ \sqrt{\int_{B_r} |\nabla(2\rho^2)|^2}\Big)+ r^2\sqrt{\omega}\mathrm{V}(B_r)\Big\} \nonumber \\
&\leq C(n)\tau^{-8}\cdot \sqrt{\omega}\mathrm{V}(B_r). \nonumber
\end{align}
Let $\tau= \omega^{\frac{1}{32}}$, then we have
\begin{align}
\int_{B_r}\Delta\phi\cdot (|\nabla f|^2- 4f)\leq C(n) \omega^{\frac{1}{4}}\mathrm{V}(B_r)\leq C(n)\omega^{\frac{1}{4}}\mathrm{V}(B_{(1- \tau)r}). \nonumber
\end{align}
The conclusion follows from the above.
}
\qed
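Two elementary identities used tacitly in the proof above are worth recording: the pointwise identity behind $|\nabla^2 f|^2- 4n= |\nabla^2 f- 2g|^2$, which uses $\Delta f= 2n$ and $\langle \nabla^2 f, g\rangle= \Delta f$, and the exponent computation for $\tau= \omega^{\frac{1}{32}}$:
\begin{align}
\big|\nabla^2 f- 2g\big|^2= |\nabla^2 f|^2- 4\langle \nabla^2 f, g\rangle+ 4|g|^2= |\nabla^2 f|^2- 4\Delta f+ 4n= |\nabla^2 f|^2- 4n, \quad \quad \tau^{-8}\sqrt{\omega}= \omega^{-\frac{1}{4}}\cdot \omega^{\frac{1}{2}}= \omega^{\frac{1}{4}}. \nonumber
\end{align}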
\begin{lemma}\label{lem volume conv imply level set conv}
{If $Rc(M^n)\geq 0$ and $V\big(B_r(p)\big)\geq (1- \delta)V\big(B_r(0)\big)$, then $V(s)\vcentcolon= V\big(\partial B_s(p)\big)$ satisfies $\displaystyle V(s)\geq (1- \sqrt{\delta})n\omega_n s^{n- 1}$ for any $s\in (0, (1- \sqrt{\delta})^{\frac{1}{n}}r)$.
}
\end{lemma}
{\it Proof:}~
{Let $s_0= (1- \sqrt{\delta})^{\frac{1}{n}}r$. If $V(s_0)< (1- \sqrt{\delta})V(\mathbb{S}^{n- 1})s_0^{n- 1}$, then from the Bishop--Gromov comparison theorem,
\begin{align}
V(s)< (1- \sqrt{\delta})V(\mathbb{S}^{n- 1})s^{n- 1} \ , \quad \quad \quad \quad \forall s\in [s_0, r]. \nonumber
\end{align}
Then we have
\begin{align}
V\big(B_r(p)\big)&= V\big(B_{s_0}(p)\big)+ V(A_{s_0, r})\leq V\big(B_{s_0}(0)\big)+ \int_{s_0}^r V(t)dt \nonumber \\
&< \frac{V(\mathbb{S}^{n- 1})}{n}s_0^n+ (1- \sqrt{\delta})V(\mathbb{S}^{n- 1})\int_{s_0}^r t^{n- 1} dt \nonumber \\
&= \frac{V(\mathbb{S}^{n- 1})}{n}\big[r^n- (r^n- s_0^n)\sqrt{\delta}\big]\leq \frac{V(\mathbb{S}^{n- 1})}{n}(1- \delta)r^n \nonumber \\
&= (1- \delta)V\big(B_r(0)\big) \nonumber
\end{align}
which contradicts the assumption. Hence $\displaystyle V(s_0)\geq (1- \sqrt{\delta})V(\mathbb{S}^{n- 1})s_0^{n- 1}$, and the conclusion for all $s\in (0, s_0)$ follows from the monotonicity of $V(s)/s^{n- 1}$ in the Bishop--Gromov comparison theorem.
}
\qed
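We record the arithmetic behind the last two steps of the volume computation above; with $s_0^n= (1- \sqrt{\delta})r^n$, the term $r^n- s_0^n$ equals $\sqrt{\delta}r^n$, so
\begin{align}
\frac{V(\mathbb{S}^{n- 1})}{n}\big[r^n- (r^n- s_0^n)\sqrt{\delta}\big]= \frac{V(\mathbb{S}^{n- 1})}{n}(1- \delta)r^n= (1- \delta)V\big(B_r(0)\big). \nonumber
\end{align}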
\begin{definition}\label{def choice of f wrt q p r}
{Assume there is $q\in M^n$ with $d(p, q)= \beta r_1$. We define $f$ by
\begin{equation}\nonumber
\left\{
\begin{array}{rl}
&\Delta f= 2n \quad \quad \quad \quad \text{on} \ B_{(\beta+ 8)r_1}(q), \\
&f\big|_{\partial B_{(\beta+ 8)r_1}(q)}= (\beta+ 8)^2r_1^2 .
\end{array} \right.
\end{equation}
We call $f$ the \textbf{model function with respect to $\{p, q, \beta\}$}.
}
\end{definition}
Given the volume ratio lower bound and the existence of $q$, combining the results obtained earlier in this section, we obtain the following integral estimate of the difference.
\begin{lemma}\label{lem difference controlled by volume ratio}
{For $\tau\in \big(0,\frac{1}{6(n+1)}\big)$, $3\leq \beta\leq \tau^{-c(n)}$ and $r>0$, assume $B_{r}(p)\subset (M,g)$ satisfies $Rc\ge 0$ and $\frac{V(B_{r}(p))}{V(B_{r})}\ge 1-\tau$. Assume there is $q\in M^n$ with $d(p, q)\leq \beta^{-C(n)}r$, set $r_q\vcentcolon = \beta^{-1} d(p, q)$, and define $\rho(x)= d(q, x)$. Then the model function $f$ with respect to $\{p, q, \beta\}$ satisfies
\begin{align}
r_q^{-1}\fint_{B_{2r_q}(p)} |\nabla (f- \rho^2)|+ r_q^{-2}\sup_{B_{2r_q}(p)}\Big|f- \rho^2\Big|+ \fint_{B_{4r_q}(p)} \big|\nabla^2 f- 2g\big| \leq C(n)\beta^{-C(n)} . \nonumber
\end{align}
}
\end{lemma}
{\it Proof:}~
{Note that $r\geq (2\beta+ 16) r_q$; from the Bishop--Gromov volume comparison theorem, we obtain
\begin{align}
\frac{V(B_{(2\beta+ 16) r_q}(q))}{V(B_{(2\beta+ 16) r_q}(0))}&\geq \frac{V(B_{r}(q))}{V(B_{r}(0))}\geq \frac{V(B_{r- \beta r_q}(p))}{V(B_{r}(0))}\geq (1- \tau)(1- \frac{\beta r_q}{r})^n \nonumber \\
&\geq (1- \beta^{-C(n)})^{n+ 1}.\nonumber
\end{align}
In the last inequality above, we use $\displaystyle d(p, q)\leq \beta^{-C(n)}r$ and $\displaystyle \beta\leq \tau^{-c(n)}$.
From the above inequality, we have
\begin{align}
\frac{V(B_{(2\beta+ 16) r_q}(q))}{V(B_{(2\beta+ 16) r_q}(0))}\geq 1- (n+ 1)\beta^{-C(n)}. \nonumber
\end{align}
Let $\delta= (n+ 1)\beta^{-C(n)}$. Applying Lemma \ref{lem volume conv imply level set conv} on $B_{(2\beta+ 16)r_q}(q)$ and using $\delta\leq (1- 2^{-n})^2$, we obtain
\begin{align}
\frac{V(\partial B_s(q))}{s^{n- 1}}\geq (1- \sqrt{\delta})n\omega_n, \quad \quad \quad \quad \forall s\in (0, (\beta+ 8)r_q) . \label{boundary volume ineq}
\end{align}
From (\ref{boundary volume ineq}), we can apply Proposition \ref{prop L2 diff of gadient of F}, Proposition \ref{prop C0 diff of F} and Proposition \ref{prop integral Hessian small} on $B_{(\beta+ 8)r_q}(q)$ to get
\begin{align}
&\fint_{B_{(\beta+ 8)r_q}(q)} |\nabla (f- \rho^2)|^2 \leq C(n)\sqrt{\delta} \big((\beta+8)r_q\big)^2\le C(n)\beta^{- C(n)}r_q^2, \nonumber \\
&\sup_{B_{(1- 2\delta^{\frac{1}{4n+ 8}})(\beta+ 8)r_q}(q)}\Big|f- \rho^2\Big|\leq C(n)\big((\beta+8)r_q\big)^2\delta^{\frac{1}{4n+ 8}}\le C(n)\beta^{- C(n)}r_q^2 , \nonumber \\
&\fint_{B_{(1- \delta^{\frac{1}{64}})(\beta+ 8)r_q}(q)} \big|\nabla^2 f- 2g\big|^2 \leq C(n)\delta^{\frac{1}{8}}\leq C(n)\beta^{- C(n)}. \nonumber
\end{align}
Since $\delta= (n+ 1)\beta^{-C(n)}$, we have $\displaystyle \frac{\beta+ 4}{\beta+ 8}\leq \min\{1- 2\delta^{\frac{1}{4n+ 8}}, 1- \delta^{\frac{1}{64}}\}$. Now we obtain
\begin{align}
B_{4r_q}(p)\subseteq B_{(\beta+ 4)r_q}(q)\subseteq B_{(1- 2\delta^{\frac{1}{4n+ 8}})(\beta+ 8)r_q}(q)\cap B_{(1- \delta^{\frac{1}{64}})(\beta+ 8)r_q}(q) . \nonumber
\end{align}
Hence by Bishop-Gromov volume comparison Theorem, we get
\begin{align}
&\fint_{B_{2r_q}(p)} |\nabla (f- \rho^2)|^2 \leq C(n)\beta^{-C(n)} r_q^2\frac{V(B_{(\beta+ 8)r_q}(q))}{V(B_{2r_q}(p))}\leq C(n)\beta^{-C(n)}r_q^2, \nonumber \\
&\sup_{B_{2r_q}(p)}\Big|f- \rho^2\Big|\leq C(n)\beta^{-C(n)}r_q^2, \nonumber \\
&\fint_{B_{4r_q}(p)} \big|\nabla^2 f- 2g\big|^2 \leq C(n)\beta^{-C(n)} \frac{V(B_{(1- \delta^{\frac{1}{64}})(\beta+ 8)r_q}(q))}{V(B_{4r_q}(p))}\leq C(n)\beta^{-C(n)} .\nonumber
\end{align}
From the H\"older inequality, we get
\begin{align}
\fint_{B_{2r_q}(p)} |\nabla (f- \rho^2)| \leq C(n)\beta^{-C(n)}r_q, \quad \quad \quad \fint_{B_{4r_q}(p)} \big|\nabla^2 f- 2g\big| \leq C(n)\beta^{-C(n)}. \nonumber
\end{align}
}
\qed
\subsection{The measure of points with integral control}
Let $r_q= \beta^{-1}d(p, q)> 0$. For $x\in B_{r_q}(p)$, let $\pi(x)= \overline{q,x}\cap \partial B_{\beta r_q}(q)$, where $\overline{q,x}$ denotes the minimizing geodesic from $q$ through $x$, extended minimally when $\rho(x)< \beta r_q$; and let $\mathcal{B}(q)\subset B_{r_q}(p)$ be the set of points $x$ where $\pi(x)$ is not well defined. When the context is clear, we use $\mathcal{B}$ instead of $\mathcal{B}(q)$ for simplicity.
\begin{lemma}\label{lem dist between points and segment}
For $\tau\in (0,\frac{1}{6(n+1)})$, $3\leq \beta\leq \tau^{-c(n)}$, $\epsilon>0$ and $r>0$, assume $B_{r}(p)\subset (M,g)$ satisfies $Rc\ge 0$ and $\displaystyle \frac{V(B_{r}(p))}{V(B_r(0))}\ge 1-\tau$. Then for $q\in B_{\beta^{-C(n)} r}(p)\backslash \{p\}$ and $x\in B_{r_q}(p)$, we have
\begin{align*}
\frac{V(B_{\epsilon r_q}(x)\cap \mathcal{B}(q))}{V(B_{\epsilon r_q}(x))}\le C(n)\frac{\beta^{-C(n)}}{\epsilon^{n-1}}.
\end{align*}
\end{lemma}
\begin{proof}
Let $\delta= (n+1)\beta^{-C(n)}$, then we have
\begin{align}
\frac{V(B_{r}(q))}{V(B_{r}(0))}&\geq \frac{V(B_{(1- \beta^{-C(n)})r}(p))}{V(B_{r}(0))}\geq (1- \tau)(1- \beta^{-C(n)})^n\ge 1-(n+1)\beta^{-C(n)}=1-\delta. \nonumber
\end{align}
For $\theta\in S(T_qM)$, we define
$\displaystyle l_\theta=\sup\{l\ |\ \exp_q(t\theta)\text{ is a segment on } [0,l]\}$, let $\mathcal{A}=\{\theta\in S(T_{q} M)| \ l_{\theta}\le 3\beta r_q\}$ and set
$\alpha:=\frac{\mathcal{H}^{n-1}(\mathcal{A})}{\mathcal{H}^{n-1}(S^{n-1})}=\frac{\mathcal{H}^{n-1}(\mathcal{A})}{n\omega_n}.$ Then by Bishop-Gromov volume comparison theorem, we have
\begin{align*}
(1- \delta)\omega_nr^{n}&\le V(B_{r}(q))\le \int_{0}^{r}\int_{S(T_{q}M)\backslash \mathcal{A}}t^{n-1}dtd\theta+\int_{0}^{3\beta r_q}\int_{\mathcal{A}}t^{n-1}dtd\theta\\
&\le \omega_nr^{n}-\alpha \omega_n(r^{n}- (3\beta r_q)^n).
\end{align*}
So by $\beta r_q=d(p,q)\le \beta^{-C(n)} r$, we have
$$\alpha\le \frac{\delta}{1-(3\beta^{-C(n)} )^n}\le \frac{(n+1)\beta^{-C(n)}}{1-3\beta^{-C(n)} }\le C(n)\beta^{-C(n)} .$$
Now, for $r_2\geq r_1\geq 0$, we define
\begin{align*}
&A_{r_1, r_2}(q)= \{y\in M^n|\ r_1< d(y, q)< r_2\} ,\nonumber \\
&\mathcal{C}=\{y\in A_{(\beta-\epsilon)r_q, (\beta+\epsilon)r_q}(q)|\ \exists\, \theta(y)\in S(T_{q}M) \text{ s.t. } y=\exp_q(\rho(y)\theta(y)),\ l_{\theta(y)}\ge 3\beta r_q\}.
\end{align*}
Note $\rho(x)\le d(x,p)+d(p,q)\le (\beta+1)r_q\le 2\beta r_q$. Thus by Bishop-Gromov volume comparison theorem again, we get
\begin{align*}
\frac{V(A_{\rho(x)-\epsilon r_q,\ \rho(x)+\epsilon r_q}(q)\backslash \mathcal{C})}{V(B_{\epsilon r_q}(x))}&\leq \frac{\int_{\rho(x)-\epsilon r_q}^{\rho(x)+\epsilon r_q}\int_{\mathcal{A}}t^{n-1}dtd\theta}{V(B_{\epsilon r_q}(x))}\nonumber \\
& \le C(n)\alpha\omega_n \frac{(\rho(x)+\epsilon r_q)^n-(\rho(x)-\epsilon r_q)^n}{(1-(n+1)\delta)\omega_n (\epsilon r_q)^n}\\
&\le C(n)\beta^{-C(n)} \frac{n\cdot 2^n \big(\frac{\rho(x)}{r_q}\big)^{n-1}}{\epsilon^{n-1}}\le C(n)\frac{\beta^{-C(n)}}{\epsilon^{n-1}}.
\end{align*}
Now the conclusion follows from the fact $\displaystyle B_{\epsilon r_q}(x)\cap \mathcal{B}(q)\subseteq A_{\rho(x)-\epsilon r_q,\ \rho(x)+\epsilon r_q}(q)\backslash \mathcal{C}$.
\end{proof}
For $d(p, q)= \beta r_q$, we define the points of the ball $B_{r_q}(p)\subseteq B_{(\beta+ 1)r_q}(q)$ with integral control as follows.
\begin{definition}\label{def local integral has good control inner ball}
{For $0< \eta< \frac{1}{2}$ and $L^\infty$ function $h\geq 0$ in $B_{4r_q}(p)$, we define
\begin{align}
Q_{\eta}^{r_q}(h)&= \Big\{x\in B_{r_q}(p)\backslash \mathcal{B}: \int_{0}^{|\rho(x)- \beta r_q|} h\big(\sigma_x(s)\big)ds\leq \eta r_q^2\Big\}\ , \nonumber \\
\check{Q}_{\eta}^{r_q}(h)&= \Big\{x\in B_{r_q}(p)\backslash \mathcal{B}: \int_{0}^{|\rho(x)- \beta r_q|} h\big(\sigma_x(s)\big)ds> \eta r_q^2\Big\}\ , \nonumber \\
T_{\eta}^{r_q}(h)&= \Big\{x\in B_{r_q}(p)\backslash \mathcal{B}: \frac{1}{V(B_{r_q}(p))}\int_{B_{r_q}(p)\backslash \mathcal{B}} dy\Big(\int_0^{|\rho(x)- \beta r_q|} \big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\big) ds\Big)\leq \eta r_q^2\Big\} \ , \nonumber \\
\check{T}_{\eta}^{r_q}(h)&= \Big\{x\in B_{r_q}(p)\backslash \mathcal{B}: \frac{1}{V(B_{r_q}(p))}\int_{B_{r_q}(p)\backslash \mathcal{B}} dy\Big(\int_0^{|\rho(x)- \beta r_q|} \big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\big) ds\Big)> \eta r_q^2\Big\} \ , \nonumber \\
T_{\eta}^{r_q}(h, x)&= \Big\{y\in B_{r_q}(p)\backslash \mathcal{B}: \int_0^{|\rho(x)- \beta r_q|} \big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\big) ds\leq \sqrt{\eta}r_q^2 \Big\}\ , \quad \quad \quad \quad \forall x\in T_{\eta}^{r_q}(h) ,\nonumber\\
\check{T}_{\eta}^{r_q}(h, x)&= \Big\{y\in B_{r_q}(p)\backslash \mathcal{B}: \int_0^{|\rho(x)- \beta r_q|} \big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\big) ds> \sqrt{\eta}r_q^2 \Big\}\ , \quad \quad \quad \quad \forall x\in T_{\eta}^{r_q}(h) .\nonumber
\end{align}
}
\end{definition}
We recall the following segment inequality due to Cheeger and Colding \cite{CC-Ann}.
\begin{lemma}[Segment Inequality]\label{lem segment ineq}
{Assume $(M^n, g)$ is a complete Riemannian manifold with $Rc\geq 0$, then for any nonnegative function $h$ defined on $B_{2r}(q)\subset M^n$,
\begin{align}
\int_{B_r(q)\times B_r(q)} \Big(\int_0^{d(y_1, y_2)} h\big(\gamma_{y_1, y_2}(s)\big) ds\Big) dy_1 dy_2\leq 2^{n+ 1}r\cdot V\big(B_r(q)\big)\cdot \int_{B_{2r}(q)} h. \nonumber
\end{align}
}
\end{lemma}\qed
Now we use the segment inequality to show that the set of points with integral control occupies a large part of the inner ball.
\begin{lemma}\label{lem lower bound of good local integra set inner ball}
{For $\beta\ge 3$ and $r>0$, assume $B_{r}(p)\subset (M,g)$ satisfies $Rc\ge 0$, and let $r_q=\beta^{-1}d(p,q)$. Then we have
\begin{align}
&\frac{V(\check{Q}_{\eta}^{r_q}(h))}{V\big(B_{r_q}(p)\big)}\leq \frac{C(n)}{\eta r_q}\fint_{B_{2r_q}(p)} h , \quad \quad \quad
\frac{V(\check{T}_{\eta}^{r_q}(h))}{V\big(B_{r_q}(p)\big)}\leq \frac{C(n)}{\eta} \cdot \fint_{B_{4r_q}(p)}h , \nonumber \\
&\frac{V\big(\check{T}_{\eta}^{r_q}(h, x)\big)}{V\big(B_{r_q}(p)\big)}\leq \sqrt{\eta} , \quad \quad \quad \quad \forall x\in T_{\eta}^{r_q}(h)\nonumber
\end{align}
}
\end{lemma}
{\it Proof:}~
{\textbf{Step (1)}. Let $B= B_{r_q}(p)$. We have
\begin{align}
\int_{ \check{Q}_\eta^{r_q}(h)} dx \int_0^{|\rho(x)- \beta r_q|} h\big(\sigma_x(s)\big) ds \geq \eta r_q^2\cdot V\big( \check{Q}_\eta^{r_q}(h)\big) .\nonumber
\end{align}
Assume $\theta_s(x)$ is the gradient flow of $\rho(\cdot)$ starting from $\pi(x)$ at time $s$ and denote
\begin{align*}
\mathcal{S}(t)=\left\{
\begin{aligned}
1&, & \text{ if }t-\beta r_q>0, \\
-1&, & \text{ if } t-\beta r_q<0.
\end{aligned}
\right.
\end{align*}
Using Co-Area formula and Bishop-Gromov volume comparison theorem, we get
\begin{align}
\quad \int_{ \check{Q}_\eta^{r_q}(h)} dx &\int_0^{|\rho(x)- \beta r_q|} h\big(\sigma_x(s)\big) ds
\leq \int_{(\beta- 1) r_q}^{(\beta+ 1) r_q} dt \int_0^{|t- \beta r_q|} ds \int_{\rho^{-1}(t)\cap B\backslash \mathcal{B}} h\big(\theta_{\mathcal{S}(t)s}(x)\big) d\mathcal{H}^{n-1}(x)\nonumber\\
& \leq \big(\frac{\beta+ 1}{\beta- 1}\big)^{n- 1} \int_{(\beta- 1) r_q}^{(\beta+ 1) r_q} dt \int_0^{|t- \beta r_q|} ds \int_{\theta_{\mathcal{S}(t)s}\big(\rho^{-1}(t)\cap B\backslash \mathcal{B}\big)} h\big(\tilde{x}\big) d\mathcal{H}^{n-1}(\tilde{x}) \nonumber \\
&\leq C(n) \int_{(\beta- 1) r_q}^{(\beta+ 1) r_q} dt \int_{B_{2r_q}(p)} h\big(\tilde{x}\big) d\tilde{x} \leq C(n)r_qV(B_{r_q}(p))\cdot \fint_{B_{2r_q}(p)} h .\nonumber
\end{align}
In the above we use $\frac{\beta+ 1}{\beta- 1}\leq 2$, which holds since $\beta \geq 3$. Combining the two displays, we have
\begin{align*}
\frac{V\big(\check{Q}_\eta^{r_q}(h)\big)}{V(B)}\leq \frac{C(n)}{\eta r_q}\fint_{B_{2r_q}(p)}h.
\end{align*}
To prove the $3$rd inequality of the conclusion, we note
\begin{align}
\int_{ \check{T}_{\eta}^{r_q}(h, x)} dy \int_0^{|\rho(x)- \beta r_q|}\Big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\Big) ds\geq \sqrt{\eta}r_q^2 V\big( \check{T}_{\eta}^{r_q}(h, x)\big). \nonumber
\end{align}
On the other hand, since $x\in T_{\eta}^{r_q}(h)$, we have
\begin{align}
\int_{\check{T}_{\eta}^{r_q}(h, x)} dy \int_0^{|\rho(x)- \beta r_q|}\Big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\Big) ds
\leq V\big(B\big)\cdot \eta r_q^2 .\nonumber
\end{align}
Hence we obtain the $3$rd inequality.
\textbf{Step (2)}. Finally we prove the $2$nd inequality. Note we have
\begin{align}
\int_{ \check{T}_{\eta}^{r_q}(h)} dx\frac{1}{V(B)}\int_{B\backslash\mathcal{B}} dy \int_0^{|\rho(x)- \beta r_q|}\Big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\Big) ds \geq \eta r_q^2 \cdot V\big( \check{T}_{\eta}^{r_q}(h)\big) .\nonumber
\end{align}
Let $\displaystyle \Omega_0= \big(\rho^{-1}(t_1)\cap B\backslash\mathcal{B}\big)\times \big(\rho^{-1}(t_2)\cap B\backslash\mathcal{B}\big)$, and
\begin{align}
\Omega_1= \theta_{\mathcal{S}(t_1)s}\big(\rho^{-1}(t_1)\cap B\backslash\mathcal{B}\big)\times \theta_{\frac{t_2- \beta r_q}{|t_1- \beta r_q|}\cdot s}\big(\rho^{-1}(t_2)\cap B\backslash\mathcal{B}\big). \nonumber
\end{align}
From the Co-area formula, we have
\begin{align}
&\quad\frac{1}{V(B)} \int_{\check{T}_{\eta}^{r_q}(h)} dx\int_{B\backslash\mathcal{B}} dy \int_0^{|\rho(x)- \beta r_q|}\Big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\Big) ds \nonumber\\
&\leq \frac{1}{V\big(B\big)}\int_{(\beta- 1) r_q}^{(\beta+ 1) r_q} dt_1\int_{(\beta- 1) r_q}^{(\beta+ 1) r_q} dt_2 \int_{\Omega_0} d\mathcal{H}^{2n-2}(x,y) \int_0^{|t_1- \beta r_q|}\Big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\Big) ds \nonumber \\
&\leq \frac{C(n)}{V\big(B\big)}\int_{(\beta- 1) r_q}^{(\beta+ 1) r_q} dt_1\int_{(\beta- 1) r_q}^{(\beta+ 1) r_q} dt_2\int_0^{|t_1- \beta r_q|} \Big(\int_{\Omega_1} d\mathcal{H}^{2n-2}(\tilde{x},\tilde{y}) \Big(\int_{\gamma_{\tilde{x}, \tilde{y}}} h\Big) \Big) ds \nonumber \\
&\leq \frac{C(n)}{V\big(B\big)}\int_{0}^{r_q} \int_{B_{2r_q}(p)\times B_{2r_q}(p)} d\tilde{x}d\tilde{y} \Big(\int_{\gamma_{\tilde{x}, \tilde{y}}} h\Big)ds. \nonumber
\end{align}
Now from Lemma \ref{lem segment ineq} and the Bishop-Gromov comparison Theorem, we get
\begin{align}
&\quad\frac{1}{V(B)} \int_{\check{T}_{\eta}^{r_q}(h)} dx\int_{B\backslash\mathcal{B}} dy \int_0^{|\rho(x)- \beta r_q|}\Big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\Big) ds \leq C(n)r_q^2 \int_{B_{4r_q}(p)}h \nonumber \\
&\leq C(n)r_q^2 V(B_{r_q}(p))\fint_{B_{4r_q}(p)}h, \nonumber
\end{align}
and hence the $2$nd inequality follows.
}
\qed
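The pattern by which the displayed integral bounds are converted into the stated volume bounds is the same in all three cases; for instance, for the $3$rd inequality the two displays of Step (1) combine as
\begin{align}
\sqrt{\eta}r_q^2\, V\big(\check{T}_{\eta}^{r_q}(h, x)\big)\leq \int_{\check{T}_{\eta}^{r_q}(h, x)} dy \int_0^{|\rho(x)- \beta r_q|}\Big(\int_{\gamma_{\sigma_x(s), \tilde{\sigma}_y(s)}} h\Big) ds\leq \eta r_q^2\, V(B), \nonumber
\end{align}
so that $\displaystyle V\big(\check{T}_{\eta}^{r_q}(h, x)\big)/V(B)\leq \eta/\sqrt{\eta}= \sqrt{\eta}$.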
\subsection{Stratified Gou-Gu Theorem for multiple points}
In the rest of the paper, we assume $\nu\geq 4^{n+ 1}$ is some fixed constant. To establish the stratified Gou-Gu Theorem with small error estimate, we have to choose the points in the ball carefully, so that they satisfy the corresponding integral estimates. For this reason, we define a model $\epsilon r$-dense net in $B_r(p)$ as follows.
\begin{definition}\label{def epsilon r dense model sets}
{For $3\leq \beta$, assume $B_{r}(p)\subset (M,g)$. For some fixed $k$ with $1\leq k\leq n$, assume there is $\{q_i\}_{i= 1}^{k- 1}$ such that
\begin{align}
C(n)^{-1}\beta^{\nu^{n+ 1- i}}\leq \frac{d(p, q_i)}{d(p, q_{i+ 1})}\leq C(n)\beta^{\nu^{n+ 1- i}}, \quad \quad \quad \forall 1\leq i\leq k- 2. \nonumber
\end{align}
Define $r_{q_i}= \beta^{-\nu^{n+ 1- i}}d(p, q_i)$, $\rho_i(x)= d(q_i, x)$ and $\pi_i(x)= \overline{q_i x}\cap \partial B_{q_i}(d(p, q_i))$ for any $x\in B_{r_{q_i}}(p)$. Let $f_i$ be the model function with respect to $\{p, q_i, \beta^{\nu^{n+ 1- i}}\}$ for $1\leq i\leq k- 1$, and let $\eta\in (0, \frac{1}{2})$. If there are $(\epsilon r_{q_i})$-dense subsets $\mathfrak{N}_i\subset B_{r_{q_i}}(p)$ such that the family $\{\mathfrak{N}_i\}_{i=1}^{k-1}$ satisfies
\begin{enumerate}
\item[(a)]. For any $x\in \mathfrak{N}_i$ and $1\le l\le i$, $\pi_l(x)$ is well-defined;
\item[(b)]. For any $1\leq l\le i_1\le i_2\le k-1$, $y\in \mathfrak{N}_{i_1}, z\in \mathfrak{N}_{i_2}$, we have
\begin{align*}
\big|d(y, z)^2-\big[\rho_l(y)- \rho_l(z)\big]^2-d(\pi_l(y),\pi_l(z))^2 \big|\le \tilde{\epsilon}_{l}(r_{q_l})^2;
\end{align*}
\item[(c)]. $\displaystyle \#(\mathfrak{N}_i)\le \frac{C(n)}{\epsilon^{n}}$;
\item[(d)]. For any $1\le l\le i$, $\mathfrak{N}_i\subset T^{r_{q_l}}_{\eta, l}:=T_\eta^{r_{q_l}}(|\nabla^2f_l-2g|)$ with respect to the function $\rho_l$;
\end{enumerate}
then we say $\{\mathfrak{N}_i\}_{i=1}^{k-1}$ is a \textbf{model $\{\epsilon r_{q_i}\}_{i=1}^{k-1}$-dense net with respect to $\Big\{\{ B_{r_{q_i}}(p), f_i, \tilde{\epsilon}_i\}_{i= 1}^{k- 1}, \eta\Big\}$}.
}
\end{definition}
The strategy for finding the $n$ direction points is the following: starting from $\{\mathfrak{N}_i\}_{i=1}^{k-1}$, we prove the corresponding stratified Gou-Gu Theorem; next we find the direction point $q_k$ and the corresponding $\{\mathfrak{N}_i\}_{i=1}^{k}$. The final conclusion then follows by induction on this argument.
By induction, we now obtain the following distance estimate between points of a model $(\epsilon r)$-dense net of the ball $B_r(p)$.
\begin{prop}\label{prop dist of points in geodesic balls 1}
{For $\tau\in (0, c(n)]$, $\tau^{-c(n)}\geq \beta\ge 3$ and $r>0$, assume $B_{r}(p)\subset (M,g)$ satisfies $Rc\ge 0$ and $\frac{V(B_{r}(p))}{V(B_{r})}\ge 1-\tau$. For some fixed $k$ with $1\leq k\leq n$, assume there are $\{q_i\}_{i= 1}^k$ satisfying $\displaystyle d(p, q_1)\leq \beta^{-C(n)}r$ and
\begin{align}
C(n)^{-1}\beta^{\nu^{n+ 1- i}}\leq \frac{d(p, q_i)}{d(p, q_{i+ 1})}\leq C(n)\beta^{\nu^{n+ 1- i}}, \quad \quad \quad \quad 1\leq i\leq k- 1 .\nonumber
\end{align}
Let $\{f_i\}_{i= 1}^k$ be the model functions with respect to $(p, q_i, \beta^{\nu^{n+ 1- i}})$ and
\begin{align}
\epsilon= \beta^{-\nu^{n+ 1}}, \quad \quad \quad \quad \tilde{\epsilon}_{i}= C(n) \beta^{-\nu^{n+ 1- i}}. \nonumber
\end{align}
If there is a \textbf{model $\{\epsilon r_{q_i}\}_{i=1}^{k-1}$-dense net $\{\mathfrak{N}_i\}_{i=1}^{k-1}$ with respect to $\Big\{\{B_{r_{q_i}}(p), f_i, \tilde{\epsilon}_{i}\}_{i= 1}^{k- 1}, \beta^{-C(n)}\Big\}$}, then there is $\mathfrak{N}_{k}\subseteq B_{r_{q_k}}(p)$ such that $\{\mathfrak{N}_i\}_{i=1}^{k}$ is a \textbf{model $\{\epsilon r_{q_i}\}_{i=1}^{k}$-dense net with respect to $\Big\{\{B_{r_{q_i}}(p), f_i, \tilde{\epsilon}_{i}\}_{i= 1}^{k}, \beta^{-C(n)}\Big\}$}.
}
\end{prop}
{\it Proof:}~
\textbf{Step (1)}. From Lemma \ref{lem difference controlled by volume ratio}, for any $1\leq l\leq k$ we have
\begin{align}
\frac{\fint_{B_{2r_{q_l}}(p)}|\nabla (f_l- \rho_l^2)|}{ r_{q_l}}+ \fint_{B_{4r_{q_l}}(p)} |\nabla^2 f_l- 2g|+\frac{\sup\limits_{B_{2r_{q_l}}(p)}|\rho_l^2-f_l|}{(r_{q_l})^2}\leq C(n)\beta^{-C(n)} .\label{diff inequality together}
\end{align}
In the rest, let $\eta= \beta^{-C(n)}$. We first choose balls $B_{\epsilon r_{q_k}}(z_j)$ with $z_j\in B_{r_{q_k}}(p)$ such that
\begin{align}
B_{r_{q_k}}(p)\subseteq \bigcup_{j= 1}^{m(\epsilon)}B_{\epsilon r_{q_k}}(z_j) , \nonumber
\end{align}
where $\displaystyle m(\epsilon)\leq C(n)\epsilon^{-n}$ from the Bishop-Gromov comparison Theorem.
From Lemma \ref{lem dist between points and segment}, for $1\leq i\leq k$, we know
\begin{align}
\frac{V(B_{\epsilon r_{q_k}}(z_1)\cap \mathcal{B}(q_i))}{V(B_{\epsilon r_{q_k}}(z_1))}\leq C(n)\frac{\frac{d(q_i, p)}{r}}{\big(\epsilon\frac{r_{q_k}}{r_{q_i}}\big)^{n-1}} \le C(n)\frac{\beta^{-C(n)}}{\epsilon^{n- 1}}. \label{star ineq}
\end{align}
Denote $T_{\eta, i}^{r_{q_i}}= T_\eta^{r_{q_i}}(|\nabla^2f_i- 2g|)$ with respect to the function $\rho_i$, where $1\leq i\leq k$. Similarly let $Q_{\eta, i}^{r_{q_i}}=Q_\eta^{r_{q_i}}(|\nabla(f_i-\rho_i^2)|)$ and $T^{r_{q_i}}_{\eta,i}(x)=T^{r_{q_i}}_\eta(|\nabla^2f_i-2g|,x)$ for $x\in T^{r_{q_i}}_{\eta,i}$ with respect to $\rho_i$, then by Lemma \ref{lem lower bound of good local integra set inner ball}, (\ref{star ineq}) and Bishop-Gromov volume comparison theorem, we know
\begin{align}
&\frac{V\Big(B_{\epsilon r_{q_k}}(z_1)- \bigcup_{i= 1}^k \Big[\mathcal{B}(q_i)\cup \check{T}_{\eta, i}^{r_{q_i}}\cup \check{Q}_{\eta, i}^{r_{q_i}}\Big]-\bigcup_{i=1}^{k-1}\Big[ \cup_{x\in \mathfrak{N}_i\atop l\le i} \check{T}_{\eta, l}^{r_{q_l}}(x)\Big]\Big)}{V(B_{\epsilon r_{q_k}}(z_1))} \nonumber \\
&\ge \frac{V\Big(B_{\epsilon r_{q_k}}(z_1)- \bigcup_{i= 1}^k \mathcal{B}(q_i)\Big)}{V(B_{\epsilon r_{q_k}}(z_1))}- \frac{V\Big( \bigcup_{i= 1}^k \big(\check{T}_{\eta, i}^{r_{q_i}}\cup \check{Q}_{\eta, i}^{r_{q_i}}\big)\Big)}{V(B_{\epsilon r_{q_k}}(z_1))}- \frac{V\Big( \bigcup_{i=1}^{k-1}\Big[ \cup_{x\in \mathfrak{N}_i\atop l\le i} \check{T}_{\eta, l}^{r_{q_l}}(x)\Big]\Big)}{V(B_{\epsilon r_{q_k}}(z_1))}\nonumber \\
&\ge 1- C(n)\frac{\beta^{-C(n)}}{\epsilon^{n- 1}}- \frac{C(n)}{\epsilon^n}\big(\frac{r_{q_1}}{r_{q_k}}\big)^n\sum_{i= 1}^k\Big\{\frac{\fint_{B_{4r_{q_i}}(p)}|\nabla^2f_i-2g|}{\eta}+\frac{\fint_{B_{2r_{q_i}}(p)}|\nabla(f_i-\rho_i^2)|}{\eta r_{q_i}}+ \epsilon^{-n}\sqrt{\eta}\Big\}. \nonumber
\end{align}
It is direct to get $\displaystyle \frac{r_{q_1}}{r_{q_k}}\leq C(n)\beta^{\nu^{n+ 1}}$, then from the above and (\ref{diff inequality together}), we have
\begin{align}
&\frac{V\Big(B_{\epsilon r_{q_k}}(z_1)- \bigcup_{i= 1}^k \Big[\mathcal{B}(q_i)\cup \check{T}_{\eta, i}^{r_{q_i}}\cup \check{Q}_{\eta, i}^{r_{q_i}}\Big]-\bigcup_{i=1}^{k-1}\Big[ \cup_{x\in \mathfrak{N}_i\atop l\le i} \check{T}_{\eta, l}^{r_{q_l}}(x)\Big]\Big)}{V(B_{\epsilon r_{q_k}}(z_1))} \nonumber \\
&\geq 1- C(n)\beta^{-C(n)}\epsilon^{1- n}- C(n)\epsilon^{-2n}\beta^{-C(n)} \geq 1- C(n)\epsilon^{-2n}\beta^{-C(n)} >0. \nonumber
\end{align}
In the last inequality we use the choice of $\epsilon$.
Thus there exists $\displaystyle x_1^{(k)}\in \bigcap_{i= 1}^k \big(T_{\eta, i}^{r_{q_i}}\cap Q_{\eta, i}^{r_{q_i}}\big)\cap \bigcap_{i=1}^{k-1}\big(\cap_{x\in \mathfrak{N}_i\atop l\le i}T^{r_{q_l}}_{\eta,l}(x)\big) \cap B_{\epsilon r_{q_k}}(z_1)\backslash \bigcup_{i= 1}^k \mathcal{B}(q_i)$. From the definition of $\mathcal{B}(q_i)$, we know that $\pi_i(x_1^{(k)})$ is well defined for all $1\leq i\leq k$. Moreover, for $1\le l\le i\le k-1$ and $x\in \mathfrak{N}_i$, by Lemma \ref{lem property 2 of G-H appr} (in this case $\beta$ there will be $\beta^{\nu^{n+ 1- l}}$), we get
\begin{align*}
&\Big|d(x_1^{(k)}, x)^2- \big[\rho_l(x_1^{(k)})- \rho_l(x)\big]^2- d\big(\pi_l(x_1^{(k)}), \pi_l(x)\big)^2\Big| \\
&\le C(n)\{\int_{0}^{d(x_1^{(k)},\pi_l(x_1^{(k)}))}|\nabla(\rho_l^2-f_l)|(\sigma_{x_1^{(k)}}(s))ds
+\int_{0}^{d(x,\pi_l(x))}|\nabla(\rho_l^2-f_l)|(\sigma_{x}(s))ds\\
&+\int_0^{d(x,\pi_l(x))}\int_{\gamma_{\sigma_{x}(s), \tilde{\sigma}_{x_1^{(k)}}(s)}}|\nabla^2f_l-2g|ds
+ \sup_{B_{2r_{q_l}}(p)}|\rho_l^2-f_l|+ \beta^{-\nu^{n+ 1- l}}r_{q_l}^2\}\\
&\le C(n)(\sup_{B_{2r_{q_l}}(p)}|\rho_l^2-f_l|+ \beta^{- \nu^{n+ 1- l}}r_{q_l}^2+\sqrt{\eta}r_{q_l}^2)\le C(n)\beta^{-\nu^{n+ 1- l}}r_{q_l}^2.
\end{align*}
\textbf{Step (2)}. We will take $\displaystyle \mathfrak{N}_{k}= \{x_j^{(k)}\}_{j= 1}^{m(\epsilon)}$, choosing the points $x_j^{(k)}$ by induction on $j$. For $s\leq m(\epsilon)\leq C(n)\epsilon^{-n}$, assume $\{x_1^{(k)}, \cdots, x_s^{(k)}\}\subset \mathfrak{N}_{k}$ are chosen such that
\begin{enumerate}
\item[(a)]. For each $1\leq j\leq s$, the point $\displaystyle x_j^{(k)}\in \bigcap_{i= 1}^k \big(T_{\eta, i}^{r_{q_i}}\cap Q_{\eta, i}^{r_{q_i}}\big)\cap B_{\epsilon r_{q_k}}(z_j)$.
\item[(b)]. For $1\leq j\leq s, 1\leq l\leq k$, the point $\pi_l(x_j^{(k)})$ is well defined.
\item[(c)]. For any $1\le l\le i\le k$, $x\in \mathfrak{N}_i$ and $x'\in \{x_1^{(k)}, \cdots, x_s^{(k)}\}$, we have
\begin{align}
\Big|d(x, x')^2- \big[\rho_l(x)- \rho_l(x')\big]^2- d\big(\pi_l(x), \pi_l(x')\big)^2\Big| \leq \tilde{\epsilon}_lr_{q_l}^2 \nonumber
\end{align}
\end{enumerate}
By Lemma \ref{lem lower bound of good local integra set inner ball}, (\ref{diff inequality together}) and Bishop-Gromov comparison theorem, we know
\begin{align*}
&\frac{V\Big(B_{\epsilon r_{q_k}}(z_{s+ 1})- \bigcup\limits_{i= 1}^k \Big[\mathcal{B}(q_i)\cup \check{T}_{\eta, i}^{r_{q_i}}\cup \check{Q}_{\eta, i}^{r_{q_i}}\cup \big(\bigcup\limits_{j= 1}^s \check{T}_{\eta, i}^{r_{q_i}}(x_j^{(k)})\big)\Big]-\bigcup\limits_{i=1}^{k-1}\Big[ \bigcup\limits_{x\in \mathfrak{N}_i\atop l\le i} \check{T}_{\eta, l}^{r_{q_l}}(x)\Big]\Big)}{V(B_{\epsilon r_{q_k}}(z_{s+ 1}))} \nonumber \\
&\geq 1- C(n)\beta^{-C(n)}\epsilon^{-2n} >0, \nonumber
\end{align*}
where the choice of $\epsilon$ is used in the last inequality. Thus there exists
\begin{align}
x_{s+ 1}^{(k)}\in B_{\epsilon r_{q_k}}(z_{s+ 1})- \bigcup_{i= 1}^k \Big[\mathcal{B}(q_i)\cup \check{T}_{\eta, i}^{r_{q_i}}\cup \check{Q}_{\eta, i}^{r_{q_i}}\cup \big(\bigcup_{j= 1}^s \check{T}_{\eta, i}^{r_{q_i}}(x_j^{(k)})\big)\Big]-\bigcup_{i=1}^{k-1}\Big[ \bigcup\limits_{x\in \mathfrak{N}_i\atop l\le i} \check{T}_{\eta, l}^{r_{q_l}}(x)\Big]. \nonumber
\end{align}
For $1\le i\le k, x\in \mathfrak{N}_i$ and $l\le i$, by Lemma \ref{lem property 2 of G-H appr}, we get
\begin{align*}
&\Big|d(x, x_{s+1}^{(k)})^2- \big[\rho_l(x)- \rho_l(x_{s+1}^{(k)})\big]^2- d\big(\pi_l(x), \pi_l(x_{s+1}^{(k)})\big)^2\Big| \\
&\le C(n)(\sup_{B_{2r_{q_l}}(p)}|\rho_l^2-f_l|+ \beta^{- \nu^{n+ 1- l}}r_{q_l}^2+\sqrt{\eta}r_{q_l}^2)\le \tilde{\epsilon}_lr_{q_l}^2.
\end{align*}
Finally, the conclusion follows by induction.
\qed
By induction on the number of points $q_i$, using the volume ratio and the almost Gou-Gu Theorem, we obtain a lower bound on the diameter of the finite projection set, which yields the point $q_k$. Also, using the above integral estimates of the difference and combining Proposition \ref{prop dist of points in geodesic balls 1}, we find a suitable $(\epsilon r)$-dense net and the related almost Gou-Gu Theorem at the same time.
Once the above $\{q_i\}_{i=1}^k$ and $\{\mathfrak{N}_{i}\}_{i=1}^{k}$ are constructed, for $0\le i\le k-1$, we can define $\mathscr{P}_{0}^{(i)}:B_{r_{q_{i+1}}}(p)\to \mathfrak{N}_{i+1}$ such that for each $x\in B_{r_{q_{i+1}}}(p)$,
\begin{align}
d(x, \mathscr{P}_0^{(i)}(x))\leq C(n)\beta^{-\nu^{n+ 1}} r_{q_{i+ 1}}. \nonumber
\end{align}
In particular, we use $\mathscr{P}_0$ to denote $\mathscr{P}_0^{(0)}$ for simplicity. We also define $\pi_0(x)=x$,
\begin{align}
\hat{\pi}_i= \mathscr{P}_0^{(i)}\circ \pi_i, \text{ and } \mathscr{P}_s= \hat{\pi}_s\circ \cdots \circ \hat{\pi}_1\circ \hat{\pi}_0, \text{ for } 0\le i,s\le k-1. \nonumber
\end{align}
We further define $\displaystyle \check{\mathscr{P}}_i= \mathscr{P}_0^{(i- 1)}\circ \pi_i\circ \mathscr{P}_{i- 1}$ for $1\le i\le k$ and
\begin{align}
\phi^{(k- 1)}(x)= (\phi_1(x), \cdots, \phi_{k- 1}(x)), \quad \quad \text{and} \quad \quad \phi_j(x)= \rho_j(\mathscr{P}_{j- 1} (x))- \rho_j(p). \nonumber
\end{align}
We also use the notation $r_{q_0}=\beta^{-C(n)} r$ and $\mathscr{P}_{-1}(x)= \mathscr{P}_0^{(-1)}(x)=x$.
\begin{remark}\label{rem the meaning of the proj}
{Note that when $x\in \mathfrak{N}_1$, if $\pi_{i- 1}\circ \cdots \circ\pi_1(x)$ exists and $\displaystyle \pi_{j}\circ \cdots \circ\pi_1(x)\in \mathfrak{N}_{j+ 1}$ for $1\leq j\leq i- 1$, then we can choose $\mathscr{P}_{i- 1}(x)= \pi_{i- 1}\circ \cdots \circ\pi_1(x)$.
}
\end{remark}
\begin{theorem}\label{thm dist of points in geodesic balls induc-dim}
{For $\tau\in \big(0, c(n)\big)$ and $3\leq \beta\leq \tau^{-c(n)}$, assume $\frac{V(B_r(p))}{V(B_r(0))}\geq 1- \tau$. Then we can find $\{q_k\}_{k= 1}^n$ with $\displaystyle d(p, q_1)= \beta^{-C(n)} r$, $q_{k}\in \check{\mathscr{P}}_{k- 1} (B_{r_{q_{k- 1}}}(p))$ and
\begin{align}
C(n)^{-1}\beta^{\nu^{n+ 1- k}} \leq \frac{d(q_k, p)}{d(q_{k+ 1}, p)}\leq C(n)\beta^{\nu^{n+ 1- k}}. \nonumber
\end{align}
Furthermore, there is a model $(\beta^{-\nu^{n+ 1}} r_{q_i})$-dense net $\{\mathfrak{N}_i\}_{i=1}^{n}$ with respect to
\begin{align}
\Big\{\{B_{r_{q_i}}(p), f_i, \beta^{-\nu^{n+ 1- i}}\}_{i= 1}^{n}, \beta^{-C(n)}\Big\}; \nonumber
\end{align}
and for all $1\leq k\leq n$,
\begin{align}\label{Gougu}
\sup_{x, y\in B_{r_{q_{k}}}(p)}\Big|d(x, y)^2&- \sum\limits_{j= 1}^{k}\big[\rho_j(\mathscr{P}_{j- 1}(x))- \rho_j(\mathscr{P}_{j- 1}(y))\big]^2 \nonumber \\
&- d(\check{\mathscr{P}}_{k}(x), \check{\mathscr{P}}_{k}(y))^2\Big| \leq C(n) \beta^{-\nu^{n+ 1- k}} r_{q_{k}}^2.
\end{align}
}
\end{theorem}
\begin{remark}
{In the rest of the paper, for simplicity, we set $\displaystyle \gamma_i= \beta^{-2^{-1}\nu^{n+ 1- i}}r_{q_i}$. For any $i, j$, we have the following frequently used facts:
\begin{align}
r_{q_{i+ 1}}\leq r_{q_i}, \quad \quad \quad \gamma_i\leq \gamma_{i+ 1}, \quad \quad \quad \beta^{-\nu^{n+ 1}}r_{q_i}\leq \gamma_i, \quad \quad \quad \gamma_{j- 2}r_{q_{j- 2}}\leq \gamma_{j- 1}^2. \nonumber
\end{align}
}
\end{remark}
{\it Proof:}~
{\textbf{Step (1)}. We can choose $q_1$ such that $d(p, q_1)= \beta^{-C(n)} r$. From Proposition \ref{prop dist of points in geodesic balls 1}, there are $\displaystyle \mathscr{P}_{0} (x), \mathscr{P}_{0} (y)\in \mathfrak{N}_{1}$ satisfying
\begin{align}
&d(\pi_1\mathscr{P}_0(x), \check{\mathscr{P}}_{1}(x))+ d(\pi_1\mathscr{P}_0(y), \check{\mathscr{P}}_{1}(y))\leq C(n)\beta^{-\nu^{n+ 1}} r_{q_1}, \nonumber \\
&\Big|d(\mathscr{P}_{0}(x), \mathscr{P}_{0}(y))^2- \big[\rho_1(\mathscr{P}_{0}(x))- \rho_1(\mathscr{P}_{0}(y))\big]^2\nonumber\\
&\quad \quad \quad \quad \quad \quad \quad - d(\pi_1\circ \mathscr{P}_{0}(x), \pi_1\circ\mathscr{P}_{0}(y))^2\Big|\leq C(n)\beta^{-\nu^{n}}r_{q_1}^2.\nonumber
\end{align}
Thus (\ref{Gougu}) holds for $k= 1$.
Now we prove the conclusion by induction on $k$. Assume $k\le n$ and the conclusion holds for $1\le i\le k-1$. For $1\leq i\leq k-1$ and $\displaystyle x, y\in B_{r_{q_i}}(p)$, we have
\begin{align}\label{j assumption Gougu}
\Big|d(x, y)^2&- \sum\limits_{j= 1}^{i}\big[\rho_j(\mathscr{P}_{j- 1}(x))- \rho_j(\mathscr{P}_{j- 1}(y))\big]^2 \nonumber \\
&- d(\check{\mathscr{P}}_{i}(x), \check{\mathscr{P}}_{i}(y))^2\Big| \leq C(n) \beta^{-\nu^{n+ 1- i}} r_{q_{i}}^2.
\end{align}
Define $\displaystyle \tilde{r}_1= r_{q_{k- 1}}$; then $\displaystyle \tilde{r}_1 \in [C(n)^{-1}\beta^{n- k+ 1} r_1, C(n)\beta^{n- k+ 1} r_1]$ by the induction assumption. Note that
\begin{align}
\phi^{(k- 1)}(B_{\tilde{r}_1}(p))\subseteq [-\tilde{r}_1, \tilde{r}_1]\times [-2\tilde{r}_1, 2\tilde{r}_1]\times \cdots \times [- 2^{k- 2}\tilde{r}_1, 2^{k- 2}\tilde{r}_1]\subseteq [-2^n \tilde{r}_1, 2^n \tilde{r}_1]^{k- 1}. \nonumber
\end{align}
Then for $\lambda=2^{-n(2n+3)}$, we can find $\displaystyle \{z_j\}_{j= 1}^{({\frac{2^{n+2}}{\lambda}})^{k- 1}}\subseteq B_{\tilde{r}_1}(p)$ such that for any $x\in B_{\tilde{r}_1}(p)$, there is $z_{j_0}$ satisfying $\displaystyle |\phi^{(k- 1)}(x)- \phi^{(k- 1)}(z_{j_0})|\leq \lambda \tilde{r}_1$.
We define $\displaystyle r_0\vcentcolon= \max_{x\in \check{\mathscr{P}}_{k- 1} (B_{\tilde{r}_1}(p))}d(x, p)$. From (\ref{j assumption Gougu}), for any $x\in B_{\tilde{r}_1}(p)$, we get
\begin{align*}
& \quad d(x, z_{j_0}) \\
&\leq \sqrt{|\phi^{(k- 1)}(x)- \phi^{(k- 1)}(z_{j_0})|^2+ d(\check{\mathscr{P}}_{k- 1}(x), \check{\mathscr{P}}_{k- 1} (z_{j_0}))^2} \nonumber \\
&\quad + C(n)\beta^{-\frac{1}{2}\nu^{n+ 2- k}}\tilde{r}_1 \\
&\leq \lambda \tilde{r}_1+ 2r_0+ C(n)\beta^{-\frac{1}{2}\nu^{n+ 2- k}}\tilde{r}_1 + C(n)\beta^{-14\cdot \nu^{n- 1}} \tilde{r}_1\le 2(\lambda+\frac{r_0}{\tilde{r}_1})\tilde{r}_1,
\end{align*}
where in the last line we use
\begin{align}
r_{q_1}= \beta^{-\nu^n}d(p, q_1)\leq \beta^{-\nu^n}(\beta^{\nu^n}\cdots \beta^{\nu^{n+ 1- (k- 2)}})d(p, q_{k- 1}) \leq \beta^{2\cdot \nu^{n- 1}}\tilde{r}_1. \nonumber
\end{align}
This implies $x\in B_{2(\lambda+\frac{r_0}{\tilde{r}_1})\tilde{r}_1}(z_{j_0})$, and hence $\displaystyle B_{\tilde{r}_1}(p)\subseteq \bigcup_{i= 1}^{(\frac{2^{n+2}}{\lambda})^{k-1}} B_{2(\lambda+\frac{r_0}{\tilde{r}_1})\tilde{r}_1}(z_i)$.
From the volume comparison theorem, we obtain
\begin{align}
(1- \tau)\omega_n\tilde{r}_1^n\leq V(B_{\tilde{r}_1}(p))\leq (\frac{2^{n+2}}{\lambda})^{k-1} \omega_n \big(2(\lambda+\frac{r_0}{\tilde{r}_1})\tilde{r}_1\big)^n. \nonumber
\end{align}
Using $k\leq n$, we get $\frac{1}{2}\le (\frac{2^{n+2}}{\lambda})^{n-1}\cdot 2^n\cdot (\lambda+\frac{r_0}{\tilde{r}_1})^n$. That is,
\begin{align}
\frac{r_0}{\tilde{r}_1}\ge 2^{-2(n+1)}\lambda^{\frac{n-1}{n}}-\lambda=\big(2^{-2(n+1)}-\lambda^{\frac{1}{n}}\big)\lambda^{\frac{n-1}{n}}=2^{-n(2n+3)}. \nonumber
\end{align}
Now we choose $q_{k}\in \check{\mathscr{P}}_{k- 1}(B_{\tilde{r}_1}(p))$ with $\displaystyle d(p, q_{k})\geq 2^{-n(2n+3)} \tilde{r}_1$.
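For the reader's convenience, we record the elementary exponent computation behind the choice $\lambda=2^{-n(2n+3)}$ (it is not used elsewhere): since $\lambda^{\frac{1}{n}}=2^{-(2n+3)}$, we have
\begin{align*}
\big(2^{-2(n+1)}-\lambda^{\frac{1}{n}}\big)\lambda^{\frac{n-1}{n}}=\big(2^{-2n-2}-2^{-2n-3}\big)\cdot 2^{-(n-1)(2n+3)}=2^{-(2n+3)-(n-1)(2n+3)}=2^{-n(2n+3)},
\end{align*}
so the lower bound $\frac{r_0}{\tilde{r}_1}\ge 2^{-n(2n+3)}$ indeed follows.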
\textbf{Step (2)}. Note that we can use Proposition \ref{prop dist of points in geodesic balls 1} to find, by induction, a model $(\beta^{-\nu^{n+ 1}} r_{q_i})$-dense net $\{\mathfrak{N}_i\}_{i=1}^{k}$ with respect to $\Big\{\{B_{r_{q_i}}(p), f_i, \tilde{\epsilon}_i\}_{i= 1}^{k}, \eta\Big\}$.
Thus for $1\le i\le k$, there exists $\mathscr{P}_0^{(i-1)}: B_{r_{q_i}}(p)\to \mathfrak{N}_i$ such that
\begin{align}
d(x, \mathscr{P}^{(i-1)}_0 (x))\leq C(n)\beta^{-\nu^{n+ 1}} r_{q_i} , \quad \quad \quad \quad \forall 1\leq i\leq k.
\end{align}
We now show that (\ref{j assumption Gougu}) also holds for $i=k$. For $x, y\in B_{r_{q_{k}}}(p)$, since $\mathscr{P}_{k-1} (x), \mathscr{P}_{k-1} (y)\in \mathfrak{N}_k$, we obtain
\begin{align}
&\Big|d(\mathscr{P}_{k-1} (x), \mathscr{P}_{k-1} (y))^2- \big[\rho_{k}(\mathscr{P}_{k-1} (x))- \rho_{k}(\mathscr{P}_{k-1} (y))\big]^2 \nonumber \\
&\quad \quad \quad \quad \quad \quad - d(\pi_{k}\circ\mathscr{P}_{k-1} (x), \pi_{k}\circ\mathscr{P}_{k-1} (y))^2\Big| \leq C(n)\beta^{-\nu^{n- k+1}} r_{q_{k}}^2.\label{j+1 terms}
\end{align}
We have
\begin{align}
&\quad |d(\mathscr{P}_{k-1}(x),\mathscr{P}_{k-1}(y))-d(\check{\mathscr{P}}_{k- 1}(x),\check{\mathscr{P}}_{k- 1}(y))|\nonumber \\
&\le d(\mathscr{P}_{k-1}(x),\check{\mathscr{P}}_{k- 1}(x))+d(\mathscr{P}_{k-1}(y),\check{\mathscr{P}}_{k- 1}(y))\le 2C(n)\beta^{-\nu^{n+1}}r_{q_{k- 1}}.\label{perturb term}
\end{align}
Combining (\ref{j assumption Gougu}), (\ref{perturb term}) with (\ref{j+1 terms}), we get
\begin{align}
\Big|d(x, y)^2- \sum\limits_{j= 1}^{k}\big[\rho_j(\mathscr{P}_{j- 1}(x))- \rho_j(\mathscr{P}_{j- 1}(y))\big]^2 - d(\check{\mathscr{P}}_{k}(x), \check{\mathscr{P}}_{k}(y))^2\Big| \leq C(n) \beta^{-\nu^{n+ 1- k}} r_{q_{k}}^2.\nonumber
\end{align}
By induction, the conclusion follows.
}
\qed
\section{The distance map is a quasi-isometry}\label{sec quasi-isom}
The main result of this section is Theorem \ref{thm direction points imply Lipschitz less than 1+ep}, which says that the distance map is a quasi-isometry. The fact that the upper bound of the discrete Lipschitz constant is close to $1$ provides the distance counterpart of the almost orthogonality of the distance functions determined by the direction points. The angle counterpart of the almost orthogonality will be provided by Colding's integral Toponogov Theorem, which will be addressed in Subsection \ref{subsec almost og of dist map}.
There are two key points in the proof of Theorem \ref{thm direction points imply Lipschitz less than 1+ep}: Proposition \ref{prop n direction points} and Proposition \ref{prop the diam upper bound of (n+ 1)-proj}. We will prove Proposition \ref{prop n direction points} in Subsection \ref{subsec error of proj}; it will be used to prove the upper bound of the discrete Lipschitz constant of the distance map. Proposition \ref{prop the diam upper bound of (n+ 1)-proj} will be proved in Subsection \ref{subsec n+1 proj}; this result provides the lower bound of the discrete Lipschitz constant of the distance map.
\subsection{The error estimate of projections}\label{subsec error of proj}
In this subsection, we will show that the $n$ points $\{q_k\}_{k= 1}^n$ found in Theorem \ref{thm dist of points in geodesic balls induc-dim}, combined with the `origin' point $p$, `almost' determine $n$ directions on $M^n$; this is reflected by the main result of this subsection, Proposition \ref{prop n direction points} (see also the proof of Theorem \ref{thm direction points imply Lipschitz less than 1+ep}).
\begin{lemma}\label{lem two key elem ineq}
For $a,b,c, \epsilon,\epsilon_2\ge 0$ and $\epsilon_1 \in [0,1)$, if
$$a+b\le (1+\epsilon_1)\big(\sqrt{a^2-c^2+\epsilon_2^2}+\sqrt{b^2-c^2+\epsilon_2^2}+\epsilon\big),$$
then $\displaystyle c\le 4\sqrt{(\epsilon+\epsilon_2)(a+b+\epsilon_2)}+4\sqrt{\epsilon_1}(a+b+\epsilon_2).$
\end{lemma}
\begin{proof}
Put $\xi=\frac{c}{a+b+2\epsilon_2}$. Then
\begin{align*}
a+b+2\epsilon_2\le (1+\epsilon_1)\sqrt{1-\xi^2}(a+b+2\epsilon_2)+(1+\epsilon_1)\epsilon+2\epsilon_2.
\end{align*}
If $(1+\epsilon_1)\sqrt{1-\xi^2}\ge 1$, then $\xi^2\le 1-(1+\epsilon_1)^{-2}\le 3\epsilon_1$, so $\xi\le 2\sqrt{\epsilon_1}$ and $\displaystyle c= \xi(a+b+2\epsilon_2)\le 4\sqrt{\epsilon_1}(a+b+\epsilon_2),$
which implies the conclusion. Thus we can assume $(1+\epsilon_1)\sqrt{1-\xi^2}<1$ and from above, we get
$\displaystyle \xi^2(1+\epsilon_1)^2\le \frac{2\epsilon(1+\epsilon_1)+4\epsilon_2}{a+b+2\epsilon_2}+4\epsilon_1,$
which implies $\xi\le \frac{2\sqrt{\epsilon+\epsilon_2}}{\sqrt{a+b+2\epsilon_2}}+2\sqrt{\epsilon_1}$ and hence
$\displaystyle c\le 4\sqrt{(\epsilon+\epsilon_2)(a+b+\epsilon_2)}+4\sqrt{\epsilon_1}(a+b+\epsilon_2).$
\end{proof}
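As a sanity check, not needed in the sequel, observe that in the exact case $\epsilon=\epsilon_1=\epsilon_2=0$ the lemma forces $c=0$: the hypothesis becomes
\begin{align*}
a+b\le \sqrt{a^2-c^2}+\sqrt{b^2-c^2}\le a+b,
\end{align*}
so equality holds throughout, which happens only when $c=0$. This is consistent with the Euclidean picture, in which $c$ plays the role of the height of a point over a segment that the foot of the height divides into parts of lengths $\sqrt{a^2-c^2}$ and $\sqrt{b^2-c^2}$.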
In the Euclidean case, if two points lie on the projection image, then the line determined by the two points also lies on the projection image. We will prove an almost version of this fact on manifolds in Lemma \ref{lem crucial trig ineq two} and Lemma \ref{lem crucial trig ineq involved}.
There are two error estimates for projections; we first deal with the case where the projection point lies on the segment $\overline{x_1, x_2}$. A similar argument to the one in the following lemma will also be used in Subsection \ref{subsec n+1 proj}. For $A\subseteq M$, we define $\displaystyle \mathfrak{U}_{\delta}(A)= \{x\in M: d(x, A)< \delta\}$.
\begin{lemma}\label{lem crucial trig ineq involved}
Assume $1\le l<i\le n$, $x_1\in B_{r_{q_i}}(p)$, $x_2\in B_{r_{q_{i-1}}}(p)$ and $j=i \text{ or } i-1$. Suppose there are $\displaystyle\hat{x}_1, \hat{x}_2\in \bigcup_{s\ge l}\mathfrak{N}_{s}$ such that $\displaystyle \sum_{k=1}^2d(\hat{x}_k,x_k)+d(\pi_l(\hat{x}_k),\hat{x}_k)\le \tilde{\delta}$ and $d(x_1,x_2)\le r_{q_{j}}$, where $\tilde{\delta}\in (\gamma_{i-1}, r_{q_j})$.
Then $\displaystyle \sup_{\hat{w}\in \mathfrak{U}_{\tilde{\delta}}(\overline{x_1, x_2})\cap \mathfrak{N}_i}d(\pi_l(\hat{w}),\hat{w})\le C(n)\big(\frac{\tilde{\delta}}{r_{q_j}}\big)^{\frac{1}{4}}r_{q_{j}}$.
\end{lemma}
\begin{figure}[H]
\begin{center}
\includegraphics{lemma32.eps}
\caption{Lemma \ref{lem crucial trig ineq involved}}
\label{figure: lemtrig}
\end{center}
\end{figure}
\begin{proof}
There is $w\in \overline{x_1, x_2}$ with $d(w, \hat{w})\leq \tilde{\delta}$. Since $\overline{x_1, w, x_2}$ is a segment, we have
\begin{align*}
d(\hat{x}_1,\hat{w})+d(\hat{x}_2,\hat{w})
&\le d(x_1,w)+d(x_2,w)+4\tilde{\delta}=d(x_1,x_2)+4\tilde{\delta}\\
&\leq d(\pi_l(\hat{x}_1), \pi_l(\hat{w}))+ d(\pi_l(\hat{x}_2), \pi_l(\hat{w}))+ 8\tilde{\delta}.
\end{align*}
Applying the definition $(b)$ of $\mathfrak{N}_{s}$ to $\hat{x}_1, \hat{w}$ with respect to $q_l$, we obtain
\begin{align*}
d(\pi_l(\hat{x}_1), \pi_l(\hat{w}))\le \sqrt{d(\hat{x}_1,\hat{w})^2-|\rho_l(\hat{x}_1)-\rho_l(\hat{w})|^2+C(n)\gamma_l^2}.
\end{align*}
Similarly, we have
\begin{align*}
d(\pi_l(\hat{x}_2), \pi_l(\hat{w}))\le \sqrt{d(\hat{x}_2,\hat{w})^2-|\rho_l(\hat{x}_2)-\rho_l(\hat{w})|^2+C(n)\gamma_l^2}.
\end{align*}
Note that
\begin{align*}
\big | |\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_1)|^2\big|\le 8\tilde{\delta}(\tilde{\delta}+ d(x_1,x_2)).
\end{align*}
We have
\begin{align*}
d(\hat{w},\hat{x}_2)+d(\hat{w},\hat{x}_1)\le& \sqrt{d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2+C(n)\gamma_l^2}\\
&+\sqrt{d(\hat{w},\hat{x}_1)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2+8\tilde{\delta}(\tilde{\delta}+ d(x_1,x_2))+C(n)\gamma_l^2}+8\tilde{\delta}.
\end{align*}
Thus, by Lemma \ref{lem two key elem ineq}, noting that $\gamma_l\le \gamma_{i-1}\le \tilde{\delta}\le r_{q_j}$ and $d(x_1,x_2)\le r_{q_j}$, we get
\begin{align*}
|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|\le C(n) \big(\frac{\tilde{\delta}}{r_{q_j}}\big)^{\frac{1}{4}}r_{q_j},
\end{align*}
which implies
\begin{align*}
d(\hat{w},\pi_l(\hat{w}))\le C(n) \big(\frac{\tilde{\delta}}{r_{q_j}}\big)^{\frac{1}{4}}r_{q_j}+\tilde{\delta}\le C(n) \big(\frac{\tilde{\delta}}{r_{q_j}}\big)^{\frac{1}{4}}r_{q_j}.
\end{align*}
\end{proof}
The second case deals with the projection point lying on the extension of the segment $\overline{x_1, x_2}$; it will only be used in the proof of Lemma \ref{lem crucial dist est between point and proj of pt}.
\begin{lemma}\label{lem crucial trig ineq two}
Assume $1\le l<i\le n$, $x_1\in B_{r_{q_i}}(p)$ and $x_2\in B_{r_{q_{i-1}}}(p)$. If there are $\hat{x}_1,\hat{x}_2\in \mathfrak{N}_{i-1}$ such that
$$\sum_{k=1}^2d(\hat{x}_k,x_k)+d(\pi_l(\hat{x}_k),\hat{x}_k)\le \delta r_{q_{i-1}} \text{ and } d(x_1,x_2)\ge c(n) r_{q_{i-1}} ,$$
where $\delta \in (\epsilon_{i-1},\frac{1}{20}c(n))$ for $\epsilon_{i-1}=\frac{\gamma_{i-1}}{r_{q_{i-1}}}$ and $c(n)\in (0,1)$.
Then for $w\in B_{r_{q_i}}(p)$ such that $\overline{w,x_1,x_2}$ is a segment, and $\hat{w}\in B_{\delta r_{q_{i-1}}}(w)\cap \mathfrak{N}_i$, we have $\displaystyle d(\pi_l(\hat{w}),\hat{w})\le C(n)\delta^{\frac{1}{4}}r_{q_{i-1}}$.
\end{lemma}
\begin{figure}[H]
\begin{center}
\includegraphics{lemma33.eps}
\caption{Lemma \ref{lem crucial trig ineq two}}
\label{figure: lemtrigtwo}
\end{center}
\end{figure}
\begin{remark}
The lower bound on $d(x_1,x_2)$ is necessary even in the Euclidean space $\mathbb{R}^2$. For example, take $q_1=(0,-1)$, $q_2=(r_1,0)$ and $r_2<r_1$. Then there exist $x_1^k,x_2^k,w^k\in B_{r_2}(0)$ such that $\overline{w^k,x_1^k,x_2^k}$ is a segment and
$$d(x_1^k,\pi_1(x_1^k))+d(x_2^k,\pi_1(x_2^k))\le \frac{1}{k}r_1\to 0 \text{ but } \inf_{k}\frac{d(w^k,\pi_1(w^k))}{r_1}\ge \frac{r_2}{2r_1}>0.$$
Indeed, we can take $x_1^k=(0,-\frac{r_2}{4k})$, $x_2^k=(0,-\frac{r_2}{2k})$ and $w^k=(0,\frac{r_2}{2})$.
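For the reader's convenience, here is the elementary computation behind these choices (with $p=0$, so that $\pi_1$ is the radial projection onto the unit circle $\partial B_{1}(q_1)$): for $x=(0,-t)$ with $0<t<1$ we have $d(x,\pi_1(x))=|d(x,q_1)-1|=t$, so
\begin{align*}
d(x_1^k,\pi_1(x_1^k))+d(x_2^k,\pi_1(x_2^k))=\frac{3r_2}{4k}\le \frac{r_1}{k},
\end{align*}
while $\pi_1(w^k)=(0,0)$ and hence $d(w^k,\pi_1(w^k))=\frac{r_2}{2}$.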
\end{remark}
\begin{proof}
By the triangle inequality, we know
\begin{align}\label{triangle ineq}
d(\pi_l(\hat{w}),\pi_l(\hat{x}_2))-d(\pi_l(\hat{w}),\pi_l(\hat{x}_1))\le d(\pi_l(\hat{x}_1),\pi_l(\hat{x}_2))\le d(\hat{x}_1,\hat{x}_2)+2\delta r_{q_{i-1}}.
\end{align}
Since $\hat{x}_1,\hat{x}_2\in \mathfrak{N}_{i-1}$, $\hat{w}\in \mathfrak{N}_i$ and $l<i$, we have
\begin{align*}
\big| d(\hat{w},\hat{x}_k)^2-d(\pi_l(\hat{w}),\pi_l(\hat{x}_k))^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_k)|^2 \big|\le C(n)\gamma_l^2, \quad \quad k=1,2.
\end{align*}
Thus
\begin{align}\label{close projection}
d(\pi_l(\hat{w}),\pi_l(\hat{x}_1))\le \sqrt{d(\hat{w},\hat{x}_1)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_1)|^2+C(n)\gamma_l^2}
\end{align}
and
\begin{align}\label{far projection}
d(\pi_l(\hat{w}),\pi_l(\hat{x}_2))^2\ge d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2-C(n)\gamma_l^2.
\end{align}
We claim that the right-hand side of the last inequality is positive, i.e.,
$$d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2-C(n)\gamma_l^2>0.$$
Otherwise, we know $d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2\le C(n)\gamma_l^2$ and hence
$$d(\pi_l(\hat{w}),\pi_l(\hat{x}_2))\le \sqrt{2C(n)}\gamma_l.$$
However, noting that $d(\hat{w},\pi_l(\hat{w}))=|\rho_l(\hat{w})-\rho_l(\pi_l(\hat{w}))|=|\rho_l(\hat{w})-\rho_l(p)|\le d(\hat{w},p)$, and that $\overline{w,x_1,x_2}$ being a segment implies $d(x_1,x_2)\le d(x_2,w)$, we know
\begin{align*}
d(\pi_l(\hat{w}),\pi_l(\hat{x}_2))
&\ge d(\hat{x}_2,\hat{w})- d(\pi_l(\hat{w}),\hat{w})-\delta r_{q_{i-1}}\ge d(x_2,w)-d(\hat{w},p)-3\delta r_{q_{i-1}}\\
&\ge d(x_2,x_1)-d(w,p)-4\delta r_{q_{i-1}}\ge 5c(n)r_{q_{i-1}}\gg C(n) \gamma_{l},
\end{align*}
which is a contradiction.
So, substituting (\ref{close projection}) and the square root of (\ref{far projection}) into (\ref{triangle ineq}), we get
\begin{align}
\frac{A^2-B^2}{A+B}=A-B\le d(x_1,x_2)+4\delta r_{q_{i-1}}, \label{A B ineq}
\end{align}
where $A=\sqrt{d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2-C(n)\gamma_l^2}$
and $$B=\sqrt{d(\hat{w},\hat{x}_1)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_1)|^2+C(n)\gamma_l^2}.$$
Note $d(x_2, w)-d(x_1,w)=d(x_1,x_2)$ and
\begin{align*}
\big | |\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_1)|^2\big|
&=|\rho_l(\hat{x}_1)-\rho_l(\hat{x}_2)|\cdot |\rho_l(\hat{x}_1)+\rho_l(\hat{x}_2)-2\rho_l(\hat{w})|\\
&\le (\sum_{k=1}^2d(\hat{x}_k,\pi_l(\hat{x}_k)))\cdot(\sum_{k=1}^2d(\hat{x}_k,\hat{w}))\\
(\text{using } 20\delta r_{q_{i-1}} \le d(x_1,x_2))&\le 18\delta r_{q_{i-1}} d(x_1,x_2).
\end{align*}
We have
\begin{align*}
A^2-B^2
&=d(\hat{w},\hat{x}_2)^2-d(\hat{w},\hat{x}_1)^2+|\rho_l(\hat{w})-\rho_l(\hat{x}_1)|^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2-2C(n)\gamma_l^2\\
&\ge \big( d(x_1,x_2)-4\delta r_{q_{i-1}}\big)\big(d(\hat{w},\hat{x}_2)+d(\hat{w},\hat{x}_1)\big)-18\delta r_{q_{i-1}} d(x_1,x_2) -C(n)\gamma_l^2 \\
A+B&\le \sqrt{d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2}\\
&+\sqrt{d(\hat{w},\hat{x}_1)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2+C(n)\gamma_l^2+18\delta r_{q_{i-1}} d(x_1,x_2) }.
\end{align*}
Substituting these two inequalities into (\ref{A B ineq}), we get
\begin{align*}
&\big(1-\frac{4\delta r_{q_{i-1}}}{d(x_1,x_2)}\big)\big(d(\hat{w},\hat{x}_2)+d(\hat{w},\hat{x}_1)\big)\\
\le& 18\delta r_{q_{i-1}}+C(n)\frac{\gamma_l^2}{d(x_1,x_2)}+\big(1+\frac{4\delta r_{q_{i-1}}}{d(x_1,x_2)}\big)\bigg(\sqrt{d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2}\\
+&\sqrt{d(\hat{w},\hat{x}_1)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2+C(n)\gamma_l^2+18\delta r_{q_{i-1}}d(x_1,x_2)}\bigg).
\end{align*}
Note $d(x_1,x_2)\ge \max\{20\delta r_{q_{i-1}},\gamma_l\}$. We have
\begin{align*}
&d(\hat{w},\hat{x}_2)+d(\hat{w},\hat{x}_1)\le \big(1+\frac{16\delta r_{q_{i-1}}}{d(x_1,x_2)}\big)\bigg(\sqrt{d(\hat{w},\hat{x}_2)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2}\\
&+\sqrt{d(\hat{w},\hat{x}_1)^2-|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|^2+C(n)\gamma_l^2+18\delta r_{q_{i-1}} d(x_1,x_2)}\bigg)+C(n)\big(\gamma_l+\delta r_{q_{i-1}}\big).
\end{align*}
Thus, by Lemma \ref{lem two key elem ineq}, noting that $d(\hat{w},\hat{x}_2)+d(\hat{w},\hat{x}_1)\le 8d(x_1,x_2)$, we get
\begin{align*}
|\rho_l(\hat{w})-\rho_l(\hat{x}_2)|\le C(n)\sqrt{\frac{\gamma_l}{d(x_1,x_2)}+\sqrt{\frac{\delta r_{q_{i-1}}}{d(x_1,x_2)}}}d(x_1,x_2),
\end{align*}
which implies
\begin{align*}
d(\hat{w},\pi_l(\hat{w}))
&=|\rho_l(\hat{w})-\rho_l(\pi_l(\hat{w}))|=|\rho_l(\hat{w})-\rho_l(\pi_l(\hat{x}_2))|\\
&\le d(\hat{x}_2,\pi_l(\hat{x}_2))+ |\rho_l(\hat{w})-\rho_l(\hat{x}_2)|\\
&\le \delta r_{q_{i-1}}+ C(n)\sqrt{\frac{\gamma_l}{d(x_1,x_2)}+\sqrt{\frac{\delta r_{q_{i-1}}}{d(x_1,x_2)}}}d(x_1,x_2)\\
&\le C(n)\delta^{\frac{1}{4}}r_{q_{i-1}},
\end{align*}
where we use $\gamma_l\le \gamma_{i-1}\le \delta r_{q_{i-1}}\le \frac{1}{20}d(x_1,x_2)\le r_{q_{i-1}}$ in the last line.
\end{proof}
In the Euclidean case, the image of a composition of different projections lies in the image of each individual projection and of any composition of them. The following lemma and Lemma \ref{lem proj and the composition of proj} show that this fact is almost true on manifolds.
\begin{lemma}\label{lem crucial dist est between point and proj of pt}
{For $2\leq j\leq n$, we have $\displaystyle \sup_{\hat{w}\in \check{\mathscr{P}}_{j- 1}(B_{r_{q_{j- 1}}}(p))\atop 1\leq i\leq j- 1} d(\hat{w}, \pi_i(\hat{w}))\leq C(n)\gamma_{j- 1}$.
}
\end{lemma}
\begin{remark}\label{rem reverse induction}
{In the proof of the above lemma, we argue by induction. Unlike the usual induction from $k$ to $k+ 1$, here we do induction from $k+ 1$ to $k$; a similar induction argument is also used later for other results involving projections. The reason is that each later projection depends on the later direction points, which lie on the image of the former projections by the choice of the direction points. This asymmetric choice of direction points forces us to use the above reverse induction argument.
}
\end{remark}
\begin{figure}[H]
\begin{center}
\begin{tabular}{c c }
\includegraphics[width=0.42\linewidth]{lemma35a.eps} &
\hspace{0.05in}
\includegraphics[width=0.50\linewidth]{lemma35b.eps}\\
(Case $(1)$) &(Case $(2)$)\\
\end{tabular}
\caption{Lemma \ref{lem crucial dist est between point and proj of pt}}
\label{figure: lem4.5}
\end{center}
\end{figure}
{\it Proof:}~
{ We argue by induction for $s=1,2, \ldots, n-1$.
\textbf{Step (1)}. For simplicity, we use $B$ to denote $\displaystyle B_{r_{q_{j- 1}}}(p)$. For $s=1$, we will prove that for any $j$ with $2=s+1\le j\le n$,
\begin{align*}
\sup_{\hat{w}\in \check{\mathscr{P}}_{j- 1}(B)\atop j- s\leq i\leq j- 1} d(\hat{w}, \pi_{i}(\hat{w}))=\sup_{\hat{w}\in \check{\mathscr{P}}_{j- 1}(B)} d(\hat{w}, \pi_{j-1}(\hat{w}))\leq C(n)\gamma_{j- 1}.
\end{align*}
We first note that there is $\displaystyle w\in \pi_{j- 1}\mathscr{P}_{j- 2}(B)$ such that $\displaystyle d(w, \hat{w})\leq \gamma_{j- 1}$.
Since $\displaystyle d(w, q_{j- 1})= d(\pi_{j- 1}(\hat{w}), q_{j- 1})= d(p, q_{j- 1})$, we get
\begin{align}
d(\hat{w}, \pi_{j- 1}(\hat{w}))&= |d(\hat{w}, q_{j- 1})- d(\pi_{j- 1}(\hat{w}), q_{j- 1})|=| d(\hat{w}, q_{j- 1})- d(w, q_{j- 1})| \nonumber \\
&\leq d(\hat{w}, w)\leq C(n)\gamma_{j- 1}. \nonumber
\end{align}
\textbf{Step (2)}. Assume for some $s\in \{1,2,\ldots, n-2\}$, the following holds for any $j\in \{s+1,\ldots,n\}$:
\begin{align}
\sup_{\hat{w}\in \check{\mathscr{P}}_{j- 1}(B)\atop j- s\leq i\leq j- 1} d(\hat{w}, \pi_{i}(\hat{w}))\leq C(n)\gamma_{j- 1} .\label{induction assumption for proj}
\end{align}
By induction, to prove the conclusion we only need to show
\begin{align}
d(\hat{w}, \pi_{j- s- 1}(\hat{w}))\leq C(n)\gamma_{j- 1} \text{ for all } j\in \{s+2,\ldots, n\}.\nonumber
\end{align}
Assume $w= \pi_{j- 1}(w_{j- 1})$, where $w_{j- 1}\in \mathscr{P}_{j- 2}(B)$. By the definition of $\pi_{j-1}$, we know that either $\overline{q_{j- 1}, w, w_{j- 1}}$ or $\overline{q_{j-1},w_{j-1},w}$ is a segment. We discuss the two cases separately.
Case (1). If $\overline{q_{j- 1}, w, w_{j- 1}}$ is a segment, then from the definition of $\mathscr{P}_{j- 2}, \check{\mathscr{P}}_{j- 2}$, we can find $\hat{w}_{j- 1}\in \check{\mathscr{P}}_{j- 2}(B)$ such that
\begin{align}
d(w_{j- 1}, \hat{w}_{j- 1})\leq \beta^{-\nu^{n+ 1}}r_{q_{j- 2}} \le \gamma_{j-2}. \label{small perturbation}
\end{align}
Applying the induction assumption (\ref{induction assumption for proj}) for $j- 1$ to $\hat{w}_{j- 1}, q_{j- 1}\in \check{\mathscr{P}}_{j- 2}(B_{r_{q_{j- 2}}}(p))= \check{\mathscr{P}}_{(j- 1)- 1}(B_{r_{q_{j- 2}}}(p))$, we get
\begin{align}
d(\hat{w}_{j- 1}, \pi_{j- 1- s}(\hat{w}_{j- 1}))+ d(q_{j- 1}, \pi_{j- 1- s}(q_{j- 1}))\leq C(n)\gamma_{j- 2}. \label{ineq from induc assumption we have}
\end{align}
Applying Lemma \ref{lem crucial trig ineq involved} to $w_{j- 1}, q_{j- 1}$ and noting $\displaystyle \hat{w}\in \mathfrak{U}_{\gamma_{j- 2}} \big(\overline{w_{j- 1}q_{j- 1}}\big)\cap \mathfrak{N}_{j- 1}$, we get
\begin{align}
d(\hat{w}, \pi_{j- 1- s}(\hat{w}))\leq C(n)(\frac{\gamma_{j- 2}}{r_{q_{j- 2}}})^{\frac{1}{4}} r_{q_{j- 2}}\leq C(n)\gamma_{j- 1}. \nonumber
\end{align}
Case (2). If $\overline{q_{j- 1}, w_{j-1}, w}$ is a segment, we still choose $\hat{w}_{j-1}$ as above. Applying Lemma \ref{lem crucial trig ineq two} to $w_{j-1}$ and $q_{j-1}$, we also get
\begin{align}
d(\hat{w}, \pi_{j- 1- s}(\hat{w}))\leq C(n)\gamma_{j- 1}. \nonumber
\end{align}
Then the conclusion follows by induction.
}
\qed
\begin{lemma}\label{lem proj and the composition of proj}
{For $\hat{w}\in \mathfrak{N}_k$ where $j\leq k\leq n$, if $\displaystyle \sup_{1\leq i\leq j- 1}d(\hat{w}, \pi_i(\hat{w}))\leq \delta$, then
\begin{align}
\sup_{1\leq i\leq j- 1}d(\mathscr{P}_i(\hat{w}), \hat{w})\leq C(n)(\gamma_k+ \delta). \nonumber
\end{align}
}
\end{lemma}
\begin{remark}\label{rem choice of region}
{In the above lemma, we require $\hat{w}\in \mathfrak{N}_k$ for some $k\geq j$, to assure that $\mathscr{P}_{j- 1}(\hat{w})$ is well defined.
}
\end{remark}
{\it Proof:}~
{We prove the conclusion by induction on $i$. When $i= 1$,
\begin{align}
d(\mathscr{P}_1(\hat{w}), \hat{w})\leq d(\pi_1(\hat{w}), \hat{w})+ \beta^{-\nu^{n+ 1}}r_{q_1}\leq C(n)(\gamma_k+ \delta). \nonumber
\end{align}
Assume $\displaystyle d(\mathscr{P}_m(\hat{w}), \hat{w})\leq C(n)(\gamma_k+ \delta)$ for some $i=m\leq j- 2$; then
\begin{align}
d(\mathscr{P}_{m+ 1}(\hat{w}), \hat{w})&\leq d(\pi_{m+ 1}\mathscr{P}_m(\hat{w}), \hat{w})+ \beta^{-\nu^{n+ 1}}r_{q_{m+ 1}} \nonumber \\
&\leq d(\pi_{m+ 1}\mathscr{P}_m(\hat{w}), \pi_{m+ 1}(\hat{w}))+ \delta+ C(n)\gamma_k \nonumber \\
&\leq \sqrt{d(\mathscr{P}_m(\hat{w}), \hat{w})^2+ C(n)\gamma_{m+ 1}^2}+ C(n)(\delta+ \gamma_k) \nonumber \\
&\leq d(\mathscr{P}_m(\hat{w}), \hat{w})+ C(n)(\delta+ \gamma_k) \leq C(n)(\delta+ \gamma_k). \nonumber
\end{align}
The conclusion follows by induction.
}
\qed
After establishing the general results about projection error estimates, we can apply them to $q_j$ as follows.
\begin{cor}\label{cor comp of proj of qi}
{For $j\leq n$, we have $\displaystyle \sup_{1\leq i\leq j-1}d(q_j, \pi_i(q_j))+ \sup_{1\leq i\leq j- 2}d(\mathscr{P}_i(q_j), q_j)\leq C(n)\gamma_{j- 1}$.
}
\end{cor}
{\it Proof:}~
{From $q_j\in \check{\mathscr{P}}_{j- 1}(B)\subseteq \mathfrak{N}_{j- 1}$, by Lemma \ref{lem crucial dist est between point and proj of pt}, we get
\begin{align}
\sup_{1\leq i\leq j-1}d(q_j, \pi_i(q_j))\leq C(n)\gamma_{j- 1}. \nonumber
\end{align}
Applying Lemma \ref{lem proj and the composition of proj}, we have
\begin{align}
\sup_{1\leq i\leq j- 2}d(\mathscr{P}_i(q_j), q_j)\leq C(n)(\gamma_{j- 1}+ C(n)\gamma_{j- 1})\leq C(n)\gamma_{j- 1}. \nonumber
\end{align}
}
\qed
\begin{prop}\label{prop n direction points}
{If $1\leq i< j\leq n$, we have $|\phi_i(q_j)|\leq C(n) \gamma_n$.
}
\end{prop}
{\it Proof:}~
{For $m\leq j- 1$, using Corollary \ref{cor comp of proj of qi}, we have
\begin{align}
|\phi_m(q_j)|&= |\rho_m(\mathscr{P}_{m- 1}(q_j))- \rho_m(p)| \leq |\rho_m(q_j)- \rho_m(\pi_{m}(q_j))|+ d(q_j, \mathscr{P}_{m- 1}(q_j))\nonumber \\
&\leq d(q_j, \pi_{m}(q_j))+ C(n)\gamma_n \leq C(n)\gamma_{n}. \nonumber
\end{align}
}
\qed
\subsection{The diameter upper bound after $(n+ 1)$ projections}\label{subsec n+1 proj}
In this subsection, we show that there are exactly $n$ directions, which is implied by the following proposition. This proposition is only used to prove the lower bound in Theorem \ref{thm direction points imply Lipschitz less than 1+ep}, which implies the equivalence between Gromov--Hausdorff approximations and splitting maps. The results of this subsection will not be used in our existence proof of the splitting map.
The key to the proof of Proposition \ref{prop the diam upper bound of (n+ 1)-proj} is the density of the image of the $(n+ 1)$ projections, that is,
\begin{align*}
\phi^{(n+ 1)}(B_{r_{q_{n+ 1}}}(p)) \text{ is } C(n)\beta^{-1}r_{q_{n+ 1}}\text{-dense in } [-\frac{r_{q_{n+ 1}}}{2},0]^{n+1}\subset \mathbb{R}^{n+1},
\end{align*}
which is the content of Proposition \ref{prop (n+1) comp proj is dense}. We prove Proposition \ref{prop the diam upper bound of (n+ 1)-proj} by first assuming Proposition \ref{prop (n+1) comp proj is dense}.
\begin{prop}\label{prop the diam upper bound of (n+ 1)-proj}
{$\displaystyle \sup_{x\in \check{\mathscr{P}}_n(B_{r_{q_n}}(p))} d(p, x)\leq \beta^{-1}r_{q_n}$.
}
\end{prop}
\begin{proof}
Assume, for contradiction, that there exists a point $q_{n+1}$ such that $r_0= d(q_{n+1},p)\ge \beta^{-1} r_{q_n}$. From Proposition \ref{prop (n+1) comp proj is dense}, the set $\phi^{(n+1)}(B_{r_{q_{n+1}}}(p))$ is $C(n)\beta^{-1} r_{q_{n+1}}$-dense in $[-\frac{r_{q_{n+1}}}{2},0]^{n+ 1}\subset \mathbb{R}^{n+1}$.
For $\lambda\ge C(n)\beta^{-1} r_{q_{n+1}}$ to be determined, by Proposition \ref{prop (n+1) comp proj is dense}, we can choose a maximal disjoint family of balls $\{B_{\frac{\lambda}{2}}(y_i)\}_{i=1}^{N}$ in $\mathbb{R}^{n+1}$ such that $y_i\in \phi^{(n+1)}(B_{r_{q_{n+1}}}(p))$. Then $\phi^{(n+1)}(B_{r_{q_{n+1}}}(p))\subset \cup_{i=1}^{N} B_{\lambda}(y_i)$ and hence
$$[-\frac{r_{q_{n+1}}}{2},0]^{n+ 1}\subset B_{C(n)\beta^{-1} r_{q_{n+1}}}\big( \phi^{(n+1)}(B_{r_{q_{n+1}}}(p))\big)\subset \bigcup_{i= 1}^NB_{\lambda+C(n)\beta^{-1} r_{q_{n+1}}}(y_i)\subset \bigcup_{i=1}^{N} B_{2\lambda}(y_i).$$
Thus we get
\begin{align*}
N\ge \frac{\big(\frac{r_{q_{n+1}}}{2}\big)^{n+ 1}}{\omega_{n+1}(2\lambda)^{n+1}}=\frac{r_{q_{n+1}}^{n+1}}{2^{2(n+1)}\omega_{n+1}\lambda^{n+1}}.
\end{align*}
Since $y_i\in \phi^{(n+1)}(B_{r_{q_{n+1}}}(p))$, there exists $x_i\in B_{r_{q_{n+1}}}(p)$ such that $\phi^{(n+1)}(x_i)=y_i$. By the construction of $\phi^{(n+1)}$, we know for $1\le i\neq j\le N$,
$$\lambda\le |y_i-y_j|=|\phi^{(n+1)}(x_i)-\phi^{(n+1)}(x_j)|\le \sqrt{n+1}d(x_i,x_j),$$
which means $B_{\frac{\lambda}{2\sqrt{n+1}}}(x_i)\cap B_{\frac{\lambda}{2\sqrt{n+1}}}(x_j)=\emptyset.$ Thus by the volume comparison theorem,
\begin{align*}
\omega_n(2r_{q_{n+1}})^n\ge V(B_{2r_{q_{n+1}}}(p))&\ge \sum_{i=1}^{N}V(B_{\frac{\lambda}{2\sqrt{n+1}}}(x_i))\\
&\ge \frac{r_{q_{n+1}}^{n+1}}{2^{2(n+1)}\omega_{n+1}\lambda^{n+1}} \cdot (1-(n+1)\tau)\omega_n \big(\frac{\lambda}{2\sqrt{n+1}}\big)^n,
\end{align*}
which implies $r_{q_{n+1}}\le 2^{10n}n^n\omega_{n+1}\lambda.$ So, if we take $\lambda=C(n)\beta^{-1} r_{q_{n+1}}$, then we get
$$1\le 2^{10n}n^n\omega_{n+1}C(n)\beta^{-1}.$$ This is a contradiction if we take $\beta> C(n)$ big enough.
\end{proof}
In the rest of this subsection, we always assume that
\begin{align}
\sup_{x\in \check{\mathscr{P}}_n(B_{r_{q_n}}(p))} d(p, x)\geq \beta^{-1}r_{q_n},\nonumber
\end{align}
and we will prove Proposition \ref{prop (n+1) comp proj is dense} under this assumption. We set $\epsilon=\beta^{-\nu^{n+1}}$ in the rest of the argument, unless otherwise mentioned.
To prove Proposition \ref{prop (n+1) comp proj is dense}, we need to set up the $(n+ 1)$ projections and the corresponding net $\mathfrak{N}_{n+ 1}$, which will be used to obtain the corresponding almost Gou-Gu formula. All these results (Lemma \ref{lem one more point implies one more net} and Corollary \ref{cor one more gougu}) are obtained by arguments similar to those of Proposition \ref{prop dist of points in geodesic balls 1} and Theorem \ref{thm dist of points in geodesic balls induc-dim}.
\begin{lemma}\label{lem one more point implies one more net}
Assume there is a point $q_{n+1}\in \check{\mathscr{P}}_n(B_{r_{q_n}}(p))$ such that
$$d(p,q_{n+1})=\alpha r_{q_n}$$
for some $\alpha\ge \frac{1}{\beta}$. Then for $r_{q_{n+1}}:=\beta^{-2}d(p,q_{n+1})$, there exists an $\epsilon r_{q_{n+1}}$-dense net $\mathfrak{N}_{n+1}$ of $B_{r_{q_{n+1}}}(p)$ such that
\begin{enumerate}
\item[(a)]. For any $x\in \mathfrak{N}_{n+1}$ and $1\le l\le n+1$, $\pi_l(x)$ is well-defined;
\item[(b)]. For any $1\leq l\le i\le n+1$, $y\in \mathfrak{N}_{i}, z\in \mathfrak{N}_{n+1}$, we have
\begin{align*}
\big|d(y, z)^2-|\rho_l(y)-\rho_l(z)|^2-d(\pi_l(y),\pi_l(z))^2 \big|\le \tilde{\epsilon}_{l}(r_{q_l})^2\le C(n)\beta^{-2}r_{q_{n+1}}^2,
\end{align*}
where $\tilde{\epsilon}_l=C(n)\beta^{-\nu^{n+1-l}}$ for $l\le n$ and $\tilde{\epsilon}_{n+1}=C(n)\beta^{-2}$;
\item[(c)]. For any $1\le l\le {n+1}$, $\mathfrak{N}_{n+1}\subset T^{r_{q_l}}_{\eta, l}:=T_\eta^{r_{q_l}}(|\nabla^2f_l-g|)$ with respect to the function $\rho_l(\cdot)=d(\cdot, q_l)$.
\end{enumerate}
\end{lemma}
\begin{proof}
The proof is similar to the proof of Proposition \ref{prop dist of points in geodesic balls 1}.
\end{proof}
With the net $\mathfrak{N}_{n+1}$ constructed above, we define the projection $\mathscr{P}_0^{(n)}: B_{r_{q_{n+1}}}(p)\to \mathfrak{N}_{n+1}$ such that
$$d(\mathscr{P}_0^{(n)}(x),x)\le C(n)\beta^{-\nu^{n+1}}r_{q_{n+1}}, \quad \forall x\in B_{r_{q_{n+1}}}(p).$$
Furthermore we define $$\hat{\pi}_n=\mathscr{P}_0^{(n)}\circ \pi_n,\quad \quad \mathscr{P}_n=\hat{\pi}_n\circ \ldots \circ \hat{\pi}_1\circ \hat{\pi}_0=\hat{\pi}_n\circ \mathscr{P}_{n-1},$$ $$\pi_{n+1}(x)=\overline{q_{n+1}x}\cap \partial B_{d(p,q_{n+1})}(q_{n+1})\text{ for } x\in B_{r_{q_{n+1}}}(p),$$
$$\check{\mathscr{P}}_{n+1}=\mathscr{P}_0^n\circ \pi_{n+1}\circ \mathscr{P}_n.$$
Then we have the following corollary.
\begin{cor}\label{cor one more gougu}
Assume $\tau \in (0, c(n))$ and $1\ll \beta_0(n)\le \beta \le \tau^{-c(n)}$. Assume $\frac{V(B_r(p))}{V(B_r(0))}\ge 1-\tau$, that $\{q_k\}_{k=1}^n$ are constructed as in Theorem \ref{thm dist of points in geodesic balls induc-dim}, and that there exists a point $q_{n+1}\in \check{\mathscr{P}}_{n}(B_{r_{q_n}}(p))$ such that $\displaystyle d(q_{n+1},p)=\alpha r_{q_n}\ge \frac{1}{\beta} r_{q_n}$. Then, for the net $\mathfrak{N}_{n+1}$ constructed in Lemma \ref{lem one more point implies one more net} and $\check{\mathscr{P}}_{n+1}$ defined above, we have
\begin{align*}
\sup_{x,y\in B_{r_{q_{n+1}}}(p)}\big| d(x,y)^2&-\sum_{j=1}^{n+1}[\rho_j(\mathscr{P}_{j-1}(x))-\rho_j(\mathscr{P}_{j-1}(y))]^2\\
&-d(\check{\mathscr{P}}_{n+1}(x),\check{\mathscr{P}}_{n+1}(y))^2\big|\le C(n)\beta^{-2}r_{q_{n+1}}^2.
\end{align*}
\end{cor}
\begin{proof}
For any $x,y\in B_{r_{q_{n+1}}}(p)$, by Theorem \ref{thm dist of points in geodesic balls induc-dim}, we know
\begin{align*}
\big| d(x,y)^2&-\sum_{j=1}^{n}[\rho_j(\mathscr{P}_{j-1}(x))-\rho_j(\mathscr{P}_{j-1}(y))]^2\\
&-d(\check{\mathscr{P}}_{n}(x),\check{\mathscr{P}}_{n}(y))^2\big|\le C(n)\beta^{-\nu}r_{q_{n}}^2\le C(n)\beta^{-2}r_{q_n}^2.
\end{align*}
By the definition of $\check{\mathscr{P}}_n, \mathscr{P}_0^{(n-1)}, \mathscr{P}_n$ and $\mathscr{P}_0^{(n)}$, we know
$$\big|d(\check{\mathscr{P}}_n(x),\check{\mathscr{P}}_n(y))^2-d(\mathscr{P}_n(x),\mathscr{P}_n(y))^2\big|\le C(n)\beta^{-\nu^{n+1}}(r_{q_n}^2+r_{q_{n+1}}^2)\le C(n)\beta^{-2}r_{q_{n+1}}^2.$$
Note $\mathscr{P}_n(x),\mathscr{P}_n(y)\in \mathfrak{N}_{n+1}$. By Lemma \ref{lem one more point implies one more net}, we know
\begin{align*}
\big|d(\mathscr{P}_n(x),\mathscr{P}_n(y))^2&-|\rho_{n+1}(\mathscr{P}_n(x))-\rho_{n+1}(\mathscr{P}_n(y))|^2\\
&-d(\pi_{n+1}\mathscr{P}_n(x),\pi_{n+1}\mathscr{P}_n(y))^2\big|\le C(n)\beta^{-2}r_{q_{n+1}}^2.
\end{align*}
Again by the definition of $\mathscr{P}_0^{(n)}$ and $\check{\mathscr{P}}_n$, we know
$$\big|d(\pi_{n+1}\mathscr{P}_n(x),\pi_{n+1}\mathscr{P}_n(y))^2-d(\check{\mathscr{P}}_n(x),\check{\mathscr{P}}_n(y))^2\big|\le C(n)\beta^{-\nu^{n+1}}r_{q_{n+1}}^2.$$
Combining the four inequalities above, we get the conclusion.
\end{proof}
Let $\gamma_{n+ 1}= \beta^{-1}r_{q_{n+ 1}}$ in the rest of the argument.
\begin{lemma}\label{lem crucial dist est-(n+1)}
{For $\hat{w}\in \mathfrak{N}_k$ where $j\leq k= n+ 1$, if $\displaystyle \sup_{1\leq i\leq j- 1}d(\hat{w}, \pi_i(\hat{w}))\leq \delta$, then
\begin{align}
&\sup_{1\leq i\leq j- 1}d(\mathscr{P}_i(\hat{w}), \hat{w})\leq C(n)(\gamma_k+ \delta), \nonumber \\
&\sup_{1\leq i\leq n}d(q_{n+ 1}, \pi_i(q_{n+ 1}))+ \sup_{1\leq i\leq n- 1}d(\mathscr{P}_i(q_{n+ 1}), q_{n+ 1})\leq C(n)\gamma_n . \nonumber
\end{align}
}
\end{lemma}
{\it Proof:}~
{For the first conclusion, by Lemma \ref{lem proj and the composition of proj}, we only need to prove the case of $j=n+1$. Also by Lemma \ref{lem proj and the composition of proj}, we know in this case
\begin{align}\label{an inequality needed}
\sup_{1\le i\le n-1} d(\hat{w},\mathscr{P}_{i}(\hat{w}))\le C(n)(\delta+\gamma_n).
\end{align}
Now, by Lemma \ref{lem one more point implies one more net} and Corollary \ref{cor one more gougu}, we know
\begin{align*}
d(\hat{w},\mathscr{P}_n(\hat{w}))
&\le \delta +\beta^{-\nu^{n+1}}r_{q_{n+1}}+d(\pi_n\mathscr{P}_{n-1}(\hat{w}),\pi_n(\hat{w}))\\
&\le \delta +\gamma_{n+1}+\sqrt{2d(\hat{w},\mathscr{P}_{n-1}(\hat{w}))^2+C(n)\gamma_{n+1}^2}\\
&\le C(n)(\delta +\gamma_n+\gamma_{n+1}).
\end{align*}
By the choice of $\gamma_{n+1}$, we know $\gamma_{n+1}\ge \gamma_n$. Thus the first conclusion follows. For the second conclusion, we follow the proof of Lemma \ref{lem crucial dist est between point and proj of pt}. Denote $B_{r_{q_{n}}}(p)=B$. Since $q_{n+1}\in \check{\mathscr{P}}_n(B)$, there exists a $\check{q}_{n+1}\in \pi_n\mathscr{P}_{n-1}(B)$ such that $\displaystyle d(\check{q}_{n+1},q_{n+1})\le \beta^{-\nu^{n+1}}r_{q_n}\le \gamma_n$.
Thus we know
$$d(q_{n+1},\pi_{n}(q_{n+1}))=|d(q_{n+1},q_n)-d(\pi_n(q_{n+1}),q_n)|=|d(q_{n+1},q_n)-d(\check{q}_{n+1},q_n)|\le \gamma_n.$$
Now, choose $w_n\in \mathscr{P}_{n-1}(B)$ such that $\check{q}_{n+1}=\pi_n(w_n)$. Then, by the definition of $\mathscr{P}_{n-1}$ and $\check{\mathscr{P}}_{n-1}(B)$, there exists $\hat{w}_n\in \check{\mathscr{P}}_{n-1}(B)$ such that $d(w_n,\hat{w}_n)\le C(n)\gamma_{n-1}$. Note $q_n\in \check{\mathscr{P}}_{n-1}(B_{r_{q_{n-1}}}(p))$. By Lemma \ref{lem crucial dist est between point and proj of pt}, we know
\begin{align}
d(\hat{w}_n,\pi_i(\hat{w}_n))+d(q_n,\pi_i(q_n))\le C(n)\gamma_{n-1}, \quad \quad \quad 1\le i\le n-1. \nonumber
\end{align}
Note that either $\overline{q_n,w_n,\check{q}_{n+1}}$ or $\overline{q_n,\check{q}_{n+1},w_n}$ is a segment. By Lemma \ref{lem crucial trig ineq involved} or Lemma \ref{lem crucial trig ineq two}, we get
$$d(\pi_i(q_{n+1}),q_{n+1})\le C(n)\big(\frac{\gamma_{n-1}}{r_{q_{n-1}}}\big)^{\frac{1}{4}}r_{q_{n-1}}\le C(n)\gamma_n.$$
Combining the above we know $\displaystyle \sup_{1\leq i\leq n}d(q_{n+ 1}, \pi_i(q_{n+ 1}))\leq C(n)\gamma_n$. Thus by (\ref{an inequality needed}) we get $\displaystyle \sup_{1\leq i\leq n- 1}d(\mathscr{P}_i(q_{n+ 1}), q_{n+ 1})\leq C(n)\gamma_n$. The conclusion follows.
}
\qed
Define
\begin{align}
\mathscr{A}_{n+ 1}= \Big(\mathfrak{U}_{\epsilon r_{q_{n+ 1}}} (\overline{p, q_{n+ 1}})\Big)\cap \mathfrak{N}_{n+ 1}, \quad \quad \quad \mathscr{A}_i= \bigcup_{y\in \mathscr{A}_{i+ 1}}\Big(\big(\mathfrak{U}_{\epsilon r_{q_{n+ 1}}}(\overline{q_i, y})\big)\cap \mathfrak{N}_{n+ 1}\Big). \nonumber
\end{align}
\begin{lemma}\label{lem proj of points in Ai}
{We have $\displaystyle \sup_{1\leq k\leq i\leq n+ 1\atop z\in \mathscr{A}_i} d(\mathscr{P}_{k- 1}(z), z)+ d(\pi_{k- 1}(z), z)\leq C(n)\epsilon_n^{4^{i-n-1}}r_{q_{n}}$, where $\displaystyle \epsilon_n= \frac{\gamma_n}{r_{q_n}}$.
}
\end{lemma}
{\it Proof:}~
{\textbf{Step (1)}. We firstly show the conclusion for $i= n+ 1$. For $z\in \mathscr{A}_{n+ 1}$ and $k\leq n+ 1$, note there is $\hat{p}\in \mathfrak{N}_{n}$ with
\begin{align}
d(\pi_{k- 1}(\hat{p}), \hat{p})+ d(\hat{p}, p)\leq \epsilon r_{q_n}; \nonumber
\end{align}
and also note $\displaystyle d(\pi_{k- 1}(q_{n+ 1}), q_{n+ 1})\leq C(n)\gamma_n$ by Lemma \ref{lem crucial dist est-(n+1)}. Applying Lemma \ref{lem crucial trig ineq involved} to $p$ and $q_{n+ 1}$, and noting $z\in \mathfrak{U}_{\epsilon r_{q_{n+ 1}}}(\overline{pq_{n+ 1}})\cap \mathfrak{N}_{n+ 1}$, we obtain
\begin{align}
d(\pi_{k- 1}(z), z)\leq C(n)\epsilon_n^{\frac{1}{4}}r_{q_n}. \label{proj est needed now}
\end{align}
From Lemma \ref{lem crucial dist est-(n+1)} and (\ref{proj est needed now}), for $z\in \mathscr{A}_{n+ 1}$, we have
\begin{align}
d(\mathscr{P}_{k- 1}(z), z)\leq C(n)(\gamma_n+ \epsilon_n^{\frac{1}{4}}r_{q_{n}})\leq C(n)\epsilon_n^{\frac{1}{4}}r_{q_n}. \nonumber
\end{align}
Hence the conclusion holds for $i= n+ 1$.
\textbf{Step (2)}. Assume the conclusion holds for $i$, hence
\begin{align}
\sup_{1\leq k\leq i\atop z\in \mathscr{A}_i} d(\mathscr{P}_{k- 1}(z), z)+ d(\pi_{k- 1}(z), z)\leq C(n)\epsilon_n^{4^{i-n-1}}r_{q_n}. \nonumber
\end{align}
We will show the conclusion holds for $i- 1$. For any $z\in \mathscr{A}_{i- 1}, k\leq i- 1$, we note $z\in \mathfrak{U}_{\epsilon r_{q_{n+ 1}}}(\overline{y q_{i- 1}})$ where $y\in \mathscr{A}_i$. From the induction assumption, we have
\begin{align}
d(\pi_{k- 1}(y), y)\leq C(n)\epsilon_n^{4^{i-n-1}}r_{q_n}. \nonumber
\end{align}
Note $\displaystyle d(\pi_{k- 1}(q_{i- 1}), q_{i- 1})\leq C(n)\gamma_n$ by Corollary \ref{cor comp of proj of qi}, from Lemma \ref{lem crucial trig ineq involved}, we obtain
\begin{align}
d(\pi_{k- 1}(z), z)\leq C(n)\epsilon_n^{4^{i-n-2}}r_{q_n}. \nonumber
\end{align}
From the above and Lemma \ref{lem crucial dist est-(n+1)}, we also get
$\displaystyle d(\mathscr{P}_{k- 1}(z), z)\leq C(n)\epsilon_n^{4^{i-n-2}}r_{q_n}$. The conclusion follows by induction.
}
\qed
\begin{lemma}\label{lem comp proj of z and comp proj of y}
{For $y\in \mathscr{A}_{k+ 1}, \hat{z}\in \Big(\mathfrak{U}_{\epsilon r_{q_{n+ 1}}}(\overline{y, q_k})\cap \mathfrak{N}_{n+ 1}\Big)$, we have
\begin{align}
\sup_{k\leq i\leq n} d(\mathscr{P}_i(y), \mathscr{P}_i(\hat{z}))\leq C(n)\epsilon_n^{4^{-n}}r_{q_n} . \nonumber
\end{align}
}
\end{lemma}
{\it Proof:}~
{When $i= k$, since $\hat{z}\in \mathfrak{U}_{\epsilon r_{q_{n+ 1}}}(\overline{y, q_k})$, there exists $z\in \overline{y,q_k}$ such that $d(z,\hat{z})\le \epsilon r_{q_{n+1}}$. This implies
\begin{align*}
\big||d(y,q_k)-d(q_k,\hat{z})|^2-d(y,\hat{z})^2\big|
&\le \big||d(y,q_k)-d(q_k,\hat{z})|-d(y,\hat{z})\big|\big(|d(y,q_k)-d(q_k,\hat{z})|+d(y,\hat{z})\big)\\
&\le 2\epsilon r_{q_{n+1}}(2\epsilon r_{q_{n+1}}+2d(y,\hat{z}))\le C(n)\epsilon r_{q_{n+1}}^2.
\end{align*}
Thus by Lemma \ref{lem proj of points in Ai}, we have
\begin{align}
d(\mathscr{P}_k(y), \mathscr{P}_k(\hat{z}))&\leq d(y, \mathscr{P}_k(\hat{z}))+ C(n)\epsilon_n^{4^{-n}}r_{q_n} \nonumber \\
&\leq d(\pi_k\mathscr{P}_{k- 1}(\hat{z}), \pi_k(\hat{z}))+ d(\pi_k(\hat{z}), y)+ C(n)\epsilon_n^{4^{-n}}r_{q_n} \nonumber \\
&\leq d(\mathscr{P}_{k-1}(\hat{z}), \hat{z})+ d(\pi_k(\hat{z}), \pi_k(y))+ C(n)\epsilon_n^{4^{-n}}r_{q_n}\nonumber \\
&\leq \sqrt{d(\hat{z}, y)^2- |d(\hat{z}, q_k)- d(y, q_k)|^2+C(n)\gamma_k^2}+ C(n)\epsilon_n^{4^{-n}}r_{q_n} \nonumber \\
&\leq C(n)\epsilon_n^{4^{-n}}r_{q_n} .\nonumber
\end{align}
Assume the conclusion holds for some $i\geq k$; we show that it holds for $i+ 1$ as follows:
\begin{align}
d(\mathscr{P}_{i+ 1}(y), \mathscr{P}_{i+ 1}(\hat{z}))&\leq d(\pi_{i+ 1}\mathscr{P}_i(y), \pi_{i+ 1}\mathscr{P}_i(\hat{z}))+ C(n)\epsilon_n^{4^{-n}}r_{q_n}\nonumber \\
&\leq d(\mathscr{P}_i(y), \mathscr{P}_i(\hat{z}))+ C(n)\epsilon_n^{4^{-n}}r_{q_n}\leq C(n)\epsilon_n^{4^{-n}}r_{q_n}. \nonumber
\end{align}
The conclusion follows by induction.
}
\qed
\begin{prop}\label{prop (n+1) comp proj is dense}
The set $\phi^{(n+ 1)}(B_{r_{q_{n+ 1}}}(p))$ is $C(n)\beta^{-1}r_{q_{n+ 1}}$-dense in $[-\frac{r_{q_{n+ 1}}}{2},0]^{n+1}\subset \mathbb{R}^{n+1}$.
\end{prop}
{\it Proof:}~
{Define $\psi^{(k)}= (\phi_k, \cdots, \phi_{n+ 1})$. We claim that $\psi^{(k)}(\mathscr{A}_k)$ is $C(n)\epsilon_n^{4^{-n}}r_{q_n}$-dense in $I^{n-k+ 2}= [-2^{-1}r_{q_{n+ 1}}, 0]^{n- k+ 2}$. Note $C(n)\beta^{-1}r_{q_{n+ 1}}\geq C(n)\epsilon_n^{4^{-n}}r_{q_n}$ by $\nu\geq 4^{n+ 1}$, so the conclusion follows from the case $k= 1$.
When $k= n+ 1$, for any $t_{n+ 1}\in I^1$, choose $\check{y}\in \overline{p q_{n+ 1}}$ such that $d(p, \check{y})= |t_{n+ 1}|$. Choose $\hat{y}\in \mathscr{A}_{n+ 1}$ such that $d(\check{y}, \hat{y})\leq \epsilon r_{q_{n+ 1}}$, then from Lemma \ref{lem proj of points in Ai}, we get
\begin{align}
|\phi_{n+ 1}(\hat{y})- t_{n+ 1}|&= |d(\mathscr{P}_{n}(\hat{y}), q_{n+ 1})- d(p, q_{n+ 1})- t_{n+ 1}| \nonumber \\
&\leq |d(\check{y}, q_{n+ 1})- d(p, q_{n+ 1})- t_{n+ 1}|+ d(\check{y}, \mathscr{P}_{n}(\hat{y})) \nonumber \\
&\le d(\hat{y}, \mathscr{P}_{n}(\hat{y}))+ d(\hat{y}, \check{y}) \leq C(n)\epsilon_n^{4^{-1}}r_{q_n}. \nonumber
\end{align}
This proves the claim for $k= n+ 1$, which is the base case; we now prove the claim by downward induction on $k$.
Assume the claim holds for $k+ 1$; we prove the claim for $k$. Now for $(t_k, t_{k+ 1}, \cdots, t_{n+ 1})\in I^{n-k+ 2}$, from the induction assumption we can find $y\in \mathscr{A}_{k+ 1}$ such that
\begin{align}
|\psi^{(k+ 1)}(y)- (t_{k+ 1}, \cdots, t_{n+ 1})|\leq C(n)\epsilon_n^{4^{-n}}r_{q_n} . \nonumber
\end{align}
Choose $\check{z}\in \overline{q_k y}$ such that $d(\check{z}, y)= |t_k|$, choose $\hat{z}\in \mathfrak{N}_{n+ 1}$ such that $d(\hat{z}, \check{z})\leq \epsilon r_{q_{n+ 1}}$, then $\hat{z}\in \mathscr{A}_k$. Now from Lemma \ref{lem comp proj of z and comp proj of y} we have
\begin{align}
&|\phi_k(\hat{z})- t_k|= |d(\mathscr{P}_{k- 1}(\hat{z}), q_k)- d(p, q_k)- t_k|\nonumber \\
&\leq |d(\check{z}, q_k)- d(p, q_k)- t_k|+ d(\check{z}, \mathscr{P}_{k- 1}(\hat{z})) \nonumber \\
&\leq d(\hat{z}, \mathscr{P}_{k- 1}(\hat{z}))+ d(\check{z}, \hat{z})\leq \epsilon r_{q_k}+ C(n)\epsilon_n^{4^{-n}}r_{q_n}\leq C(n)\epsilon_n^{4^{-n}}r_{q_n} . \nonumber
\end{align}
For $i\geq k+ 1$, from Lemma \ref{lem comp proj of z and comp proj of y} we have
\begin{align}
|\phi_i(\hat{z})- t_i|&= |d(\mathscr{P}_{i- 1}(\hat{z}), q_i)- d(p, q_i)- t_i|\nonumber \\
&\leq |d(\mathscr{P}_{i- 1}(y), q_i)- d(p, q_i)- t_i|+ d(\mathscr{P}_{i- 1}(y), \mathscr{P}_{i- 1}(\hat{z})) \nonumber \\
&\leq |\phi_i(y)- t_i|+ C(n)\epsilon_n^{4^{-n}}r_{q_n} \leq C(n)\epsilon_n^{4^{-n}}r_{q_n} . \nonumber
\end{align}
Note $\phi^{(n+1)}=\psi^{(1)}$. The conclusion follows by induction.
}
\qed
\subsection{The quasi-isometry property of distance map}
For an orthonormal basis $\{e_i\}_{i=1}^n$ of $\mathbb{R}^n$, let $q_i=\beta r e_i$, $b_i(x)=d(q_i,x)-d(q_i,0)$ and $\psi(x)=(b_1(x),b_2(x),\ldots, b_n(x))$. Then we have the bi-Lipschitz estimate
$$1-\frac{4\sqrt{n}}{\sqrt{\beta}}\le \frac{|\psi(x)-\psi(y)|}{|x-y|}\le 1+\frac{5\sqrt{n}}{\sqrt{\beta}}, \quad \forall x,y\in B_r(0).$$
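This elementary estimate follows from a gradient computation, which we sketch for the reader's convenience; in fact the smooth Euclidean model gives the stronger rate $\beta^{-1}$ (assuming $\beta\ge 2$), of which the stated $\beta^{-\frac{1}{2}}$ rate is a weaker consequence. For $x\in B_r(0)$,
\begin{align*}
\big|\nabla b_i(x)+e_i\big|=\Big|\frac{x-\beta r e_i}{|x-\beta r e_i|}+e_i\Big|=\frac{\big|x+(|x-\beta r e_i|-\beta r)e_i\big|}{|x-\beta r e_i|}\le \frac{2|x|}{(\beta-1)r}\le \frac{4}{\beta},
\end{align*}
using $\big||x-\beta r e_i|-\beta r\big|\le |x|$. Integrating $\nabla b_i$ along the segment from $y$ to $x$, which stays in $B_r(0)$, gives $|b_i(x)-b_i(y)+\langle e_i,x-y\rangle|\le \frac{4}{\beta}|x-y|$, and summing the squares over $1\le i\le n$ yields $\big||\psi(x)-\psi(y)|-|x-y|\big|\le \frac{4\sqrt{n}}{\beta}|x-y|$.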
The following theorem is an almost version of this result on manifolds, showing that the distance map is a discrete bi-Lipschitz map with Lipschitz constant close to $1$.
\begin{theorem}\label{thm direction points imply Lipschitz less than 1+ep}
{Let $\displaystyle \psi(x)= (b_1^+(x), \cdots, b_n^+(x))$ with $b_i^+(x)= d(x, q_i)-d(p,q_i)$. Then
\begin{align}
1- C(n)\beta ^{-\frac{1}{2}} \le \sup_{x, y\in B_{r_{q_n}}(p), \atop d(x, y)\ge \beta^{-\frac{1}{2}} r_{q_n}}\frac{d(\psi(x), \psi(y))}{d(x, y)}\leq 1+ C(n)\beta ^{-\frac{1}{2}} . \nonumber
\end{align}
}
\end{theorem}
\begin{remark}\label{rem dist map is G-H map}
{In the later argument, we use this distance map to get a `canonical' $\epsilon$-G-H approximation in an analytic sense, which is the $\epsilon$-splitting map.
}
\end{remark}
{\it Proof:}~
{Let $r_1= r_{q_n}$ and note
\begin{align}
d(\psi(x), \psi(y))^2= \sum_{i= 1}^n |b_i^+(x)- b_i^+(y)|^2= \sum_{i= 1}^n \Big|\frac{d(x, q_i)^2- d(y, q_i)^2}{d(x, q_i)+ d(y, q_i)}\Big|^2. \label{Lipschitz map}
\end{align}
For $i\geq 1$ and any $x\in B_{r_1}(p)$, applying Theorem \ref{thm dist of points in geodesic balls induc-dim} on $B_{r_1}(p)$, we get
\begin{align}
\Big|d(x, q_i)^2- \sum_{j= 1}^{i- 1}|\phi_j(x)- \phi_j(q_i)|^2- d\big(\check{\mathscr{P}}_{i-1}(x), \check{\mathscr{P}}_{i-1}(q_i)\big)^2\Big|\leq C(n)\gamma_{n}^2. \nonumber
\end{align}
Now for any $x, y\in B_{r_1}(p)$, from Proposition \ref{prop n direction points} we get
\begin{align}
&\Big||d(x, q_i)^2- d(y, q_i)^2|- |d\big(\check{\mathscr{P}}_{i-1}(x), \check{\mathscr{P}}_{i-1}(q_i)\big)^2- d\big(\check{\mathscr{P}}_{i-1}(y), \check{\mathscr{P}}_{i-1}(q_i)\big)^2| \Big|\nonumber\\
&\leq C(n)\gamma_{n}^2+ \sum_{j= 1}^{i- 1}|\phi_j(x)- \phi_j(y)|\cdot |\phi_j(x)+ \phi_j(y)- 2\phi_j(q_i)| \leq C(n)r_{q_n}^2.\label{upper bound of num}
\end{align}
By Corollary \ref{cor comp of proj of qi} and the definition of $\check{\mathscr{P}}_{i-1}$, we know
$$d(\check{\mathscr{P}}_{i-1}(q_i),q_i)+d(\check{\mathscr{P}}_{i-1}(x),\mathscr{P}_{i-1}(x))\le C(n)\gamma_i,$$
which implies
\begin{align}\label{check to approx}
\big|d\big(\check{\mathscr{P}}_{i-1}(x), \check{\mathscr{P}}_{i-1}(q_i)\big)^2-d(\mathscr{P}_{i-1}(x),q_i)^2\big|\le C(n)\epsilon_n r^2_{q_n}+C(n)\gamma_i d(p,q_i).
\end{align}
From the assumption, for any $x\in B_{r_1}(p)$, we have $d(x, q_i)\geq c(n)\beta^{\nu^{n+ 1- i}} r_{q_i}.$ Thus $\frac{\gamma_id(p,q_i)}{d(x,q_i)+d(y,q_i)}\le C(n) \gamma_i$ and
\begin{align}\label{almost lip}
\frac{|d(\mathscr{P}_{i-1}(x),q_i)^2- d(\mathscr{P}_{i-1}(y),q_i)^2|}{d(x,q_i)+d(y,q_i)}&=\frac{|\phi_i(x)-\phi_i(y)|\big(\rho_i(\mathscr{P}_{i-1}(x))+\rho_i(\mathscr{P}_{i-1}(y))\big)}{d(x,q_i)+d(y,q_i)}\nonumber\\
&\in \big((1-\beta^{-5}), (1+\beta^{-5})\big)|\phi_i(x)-\phi_i(y)|.
\end{align}
From (\ref{upper bound of num}), (\ref{check to approx}) and (\ref{almost lip}), we have
\begin{align}
\Big|\frac{d(x, q_i)^2- d(y, q_i)^2}{d(x, q_i)+ d(y, q_i)}\Big| &\le (1+ \beta^{-5}) |\phi_i(x)- \phi_i(y)|+ C(n)\beta^{-10}r_{q_n} , \nonumber \\
\Big|\frac{d(x, q_i)^2- d(y, q_i)^2}{d(x, q_i)+ d(y, q_i)}\Big| &\ge (1- \beta^{-5}) |\phi_i(x)- \phi_i(y)|- C(n)\beta^{-10}r_{q_n} . \nonumber
\end{align}
Plugging the above into (\ref{Lipschitz map}) and noting $|\phi_{i}(x)- \phi_{i}(y)|\leq C(n) r_{q_n}$, we get
\begin{align}
d(\psi(x), \psi(y))^2 &\leq (1+ \beta^{-5})^2\sum_{i= 1}^n |\phi_i(x)- \phi_i(y)|^2+ C(n)\beta^{-10} r^2_{q_n} ,\label{upper bound of numerator-final} \\
d(\psi(x), \psi(y))^2 &\ge (1- \beta^{-5})^2\sum_{i= 1}^n |\phi_i(x)- \phi_i(y)|^2-C(n)\beta^{-10} r^2_{q_n}.\label{lower bound of numerator-final}
\end{align}
From Theorem \ref{thm dist of points in geodesic balls induc-dim}, one has
\begin{align}
\Big|d(x, y)^2- \sum_{j= 1}^n|\phi_j(x)- \phi_j(y)|^2- d(\check{\mathscr{P}}_{n}(x), \check{\mathscr{P}}_{n}(y))^2\Big|\leq C(n)\gamma_n^2 . \nonumber
\end{align}
Now we get
\begin{align}
\sup_{x, y\in B_p(r_{q_n}), \atop d(x, y)\ge \beta^{-\frac{1}{2}} r_{q_n}}\frac{d(\psi(x), \psi(y))}{d(x, y)}
&\leq \sqrt{\frac{(1+ \beta^{-5})^2\sum\limits_{i= 1}^n |\phi_i(x)- \phi_i(y)|^2+ C(n)\beta^{-10}r^2_{q_n}}{d(x, y)^2}} \nonumber \\
&\leq \sqrt{\frac{(1+ \beta^{-5})^2d(x, y)^2+ C(n) \beta^{-2}r_{q_n}^2}{d(x, y)^2}} \leq 1+ C(n)\beta^{-\frac{1}{2}}, \nonumber
\end{align}
where in the last line we use $d(x,y)\ge \beta^{-\frac{1}{2}}r_{q_n}.$
On the other hand, for $x, y\in B_{r_1}(p)$ by Proposition \ref{prop the diam upper bound of (n+ 1)-proj}, we get
\begin{align}
|\sum_{j= 1}^n|\phi_j(x)- \phi_j(y)|^2- d(x, y)^2| \le C(n)\gamma_n^2+ d(\check{\mathscr{P}}_{n}(x), \check{\mathscr{P}}_{n}(y))^2 \le C(n)\beta^{-2}r_{q_n}^2. \nonumber
\end{align}
Similarly we have
\begin{align}
\sup_{x, y\in B_p(r_{q_n}), \atop d(x, y)\ge \beta^{-\frac{1}{2}} r_{q_n}}\frac{d(\psi(x), \psi(y))}{d(x, y)}
&\ge \sqrt{\frac{(1-\beta^{-5})^2\sum\limits_{i= 1}^n |\phi_i(x)- \phi_i(y)|^2- C(n)\beta^{-2}r^2_{q_n}}{d(x, y)^2}} \nonumber \\
&\ge 1-C(n) \beta^{-\frac{1}{2}}. \nonumber
\end{align}
}
\qed
\section{The existence of $\epsilon$-splitting maps}\label{sec existence of splitting map}
\subsection{The almost orthogonality of distance maps}\label{subsec almost og of dist map}
The integral Toponogov comparison theorem for $Rc\geq 0$ was proved in \cite{Colding-volume} by approximating the distance function by harmonic functions together with a covering argument. Motivated by the law of cosines for triangles (in which the distance functions appear through their squares), we consider the model function $f$ defined in Subsection \ref{subsec integral est of diff}, related to the square of the distance function, and present a different proof of this result here (Corollary \ref{cor Colding-1.28}) which avoids the covering argument. Our argument is also more consistent with the earlier argument for the almost Gou-Gu theorem of distance functions. We then use the integral Toponogov comparison theorem to establish the almost orthogonality of distance maps.
Let $SM^n$ be the unit tangent bundle of $M^n$ and let $\pi: SM^n\rightarrow M^n$ be the projection map. For any $\Omega\subset SM^n$, the \textbf{Liouville measure} of $\Omega$, denoted by $\mu^L(\Omega)$, is defined by $\mu^L(\Omega)= \int_{\pi(\Omega)}\mathcal{H}^{n-1}(\pi^{-1}(x))d\mu(x)$, where $\mu$ is the volume measure of $(M^n, g)$ determined by the metric $g$. We begin with an integral version of the law of cosines with respect to the model $\mathbb{R}^n$.
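Since every fiber $\pi^{-1}(x)$ is isometric to the unit sphere $S^{n-1}$, for a full sphere bundle $S\Omega\vcentcolon= \pi^{-1}(\Omega)$ over $\Omega\subset M^n$ we have the normalization
\begin{align*}
\mu^L(S\Omega)=\int_{\Omega}\mathcal{H}^{n-1}(S^{n-1})\,d\mu(x)=n\omega_n\,\mu(\Omega),
\end{align*}
where $\omega_n$ is the volume of the unit ball in $\mathbb{R}^n$; this accounts for the factors $n\omega_n V(B_r(q))$ appearing in the proofs below.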
\begin{lemma} \label{lem almost cosine}
Assume $B_{11r}(q)$ is a geodesic ball in $(M^n,g)$ with $Rc\geq 0$ and $f$ is a smooth function. Then for any $s\in [0, 10r]$, we have
\begin{align*}
&\fint_{SB_r(q)}\big|\frac{d_q^2(\gamma_v(s))}{2}-\frac{d_q^2(x)}{2}-\langle \nabla \frac{d_q^2(x)}{2},v\rangle s-\frac{s^2}{2}\big|d\mu^L(x, v)\\
&\le C(n)\Big\{\sup_{B_{11r}(q)}|f-\frac{d_q^2}{2}|+ r\big(\fint_{B_r(q)}|\nabla(f-\frac{d^2_q}{2})|^2\big)^{\frac{1}{2}}+ r^2\cdot \big(\fint_{B_{11 r}(q)}|\nabla^2 f-g|^2\big)^{\frac{1}{2}}\Big\},
\end{align*}
where $d\mu^L(x, v)$ is the Liouville measure on the sphere bundle $SM$.
\end{lemma}
\begin{proof}
For any $v\in SB_r(q)$, we have
$$(f\circ \gamma_v)'(t)-(f\circ \gamma_v)'(0)=\int_0^t\nabla^2f(\gamma_v'(\tau),\gamma_v'(\tau))d\tau.$$
So, we have
\begin{align*}
\sup_{0\le t\le 10r} |(f\circ \gamma_v)'(t)-(f\circ \gamma_v)'(0)-t|\le \int_0^t |\nabla^2 f-g|_{\gamma_v(\tau)}d\tau.
\end{align*}
Integrating on $SB_r(q)$ with respect to the Liouville measure $d\mu^L(x, v)$, we get
\begin{align*}
&\int_{SB_r(q)}\sup_{0\le t\le 10r} |(f\circ \gamma_v)'(t)-\langle \nabla f(x),v\rangle -t|d\mu^L(x, v)\\
&\le \int_0^{10r}\int_{SB_r(q)}|\nabla^2f-g|(\gamma_v(\tau))d\mu^L(x, v)d\tau\\
&\le \int_0^{10r}\int_{SB_{11r}(q)}|\nabla^2f-g|(y) dV(y,v)d\tau\\
&=10n\omega_n r\int_{B_{11r}(q)}|\nabla^2f-g|(y)dV(y).
\end{align*}
Note that for $0\le s\le 10r$, we have
\begin{align*}
\big|(f\circ \gamma_v)(s)-(f\circ \gamma_v)(0)-\langle \nabla\frac{d_q^2}{2},v\rangle s-\frac{s^2}{2}\big|&=\Big|\int_0^s\big((f\circ \gamma_v)'(t)-\langle \nabla\frac{d_q^2(x)}{2},v\rangle-t\big)dt\Big|\\
&\le 10r\sup_{0\le t\le 10r}\big|(f\circ \gamma_v)'(t)-\langle \nabla\frac{d_q^2(x)}{2},v\rangle-t\big|,
\end{align*}
\begin{align*}
\int_{SB_r(q)}|\langle \nabla (f-\frac{d_q^2}{2}),v\rangle|d\mu^L(x, v)\le n\omega_n(V(B_r(q)))^{\frac{1}{2}}\big(\int_{B_r(q)}|\nabla(f-\frac{d^2_q}{2})|^2dV(x)\big)^{\frac{1}{2}},
\end{align*}
\begin{align*}
\int_{SB_r(q)}|f-\frac{d_q^2}{2}|d\mu^L(x, v)\le n\omega_nV(B_r(q))\sup_{B_r(q)}|f-\frac{d_q^2}{2}|
\end{align*}
and
\begin{align*}
\int_{SB_r(q)} |f\circ \gamma_v(s)-\frac{d_q^2(\gamma_v(s))}{2}|d\mu^L(x, v)\le n\omega_nV(B_r(q))\sup_{B_{11r}(q)}|f-\frac{d_q^2}{2}|.
\end{align*}
Combining the above estimates, we know
\begin{align*}
&\int_{SB_r(q)} \big|\frac{d_q^2(\gamma_v(s))}{2}-\frac{d_q^2(x)}{2}-\langle \nabla \frac{d_q^2(x)}{2},v\rangle s-\frac{s^2}{2}\big|d\mu^L(x, v) \\
&\le 2n\omega_nV(B_r(q))\sup_{B_{11r}(q)}|f-\frac{d_q^2}{2}|
+10r n\omega_nV(B_r(q))^{\frac{1}{2}}\big(\int_{B_r(q)}|\nabla(f-\frac{d^2_q}{2})|^2dV(x)\big)^{\frac{1}{2}}\\
& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + 100r^2n\omega_n\cdot\int_{B_{11 r}(q)}|\nabla^2 f-g|dV(x).
\end{align*}
Dividing both sides by $n\omega_n V(B_r(q))$ and using H\"older inequality and Bishop-Gromov volume comparison theorem, we get the conclusion.
\end{proof}
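For completeness, the last step can be spelled out as follows: by the H\"older inequality and the Bishop-Gromov volume comparison theorem, which gives $V(B_{11r}(q))\le 11^n V(B_r(q))$ when $Rc\ge 0$,
\begin{align*}
\frac{1}{V(B_r(q))}\int_{B_{11r}(q)}|\nabla^2 f-g|\,dV\le \frac{V(B_{11r}(q))}{V(B_r(q))}\Big(\fint_{B_{11r}(q)}|\nabla^2 f-g|^2\Big)^{\frac{1}{2}}\le 11^n\Big(\fint_{B_{11r}(q)}|\nabla^2 f-g|^2\Big)^{\frac{1}{2}},
\end{align*}
and the gradient term is handled similarly.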
The integral law of cosines implies that the Busemann function is almost linear. More precisely, the directional derivative is close to the difference quotient in an integral sense.
\begin{lemma}\label{lem almost linear}
Assume $B_{11r}(q)\subseteq (M^n,g)$ with $Rc\geq 0$ and $f$ is a smooth function. Then for any $B_{r_1}(p)\subset B_r(q)$ satisfying $d_q(p)\ge 2 r_1$ and $s\le 10 r$, there holds
\begin{align*}
&\fint_{SB_{r_1}(p)}\big|\langle \nabla b^+(x),v\rangle -\frac{b^+(\gamma_v(s))-b^+(x)}{s}\big|d\mu^L(x, v)\\
&\le \frac{2s}{d_q(p)}+ \frac{C(n)}{sd_q(p)}\cdot (\frac{r}{r_1})^n\big\{ \sup_{B_{11r}(q)}|f-\frac{d_q^2}{2}|+r\big(\fint_{B_r(q)}|\nabla(f-\frac{d^2_q}{2})|^2dV(x)\big)^{\frac{1}{2}}\\
&\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad+ r^2\cdot\big(\fint_{B_{11 r}(q)}|\nabla^2 f-g|^2dV(x)\big)^{\frac{1}{2}}
\big\}
\end{align*}
where $b^+(x)=d(q,x)-d(q,p)$.
\end{lemma}
\begin{proof}
By the definition of $b^+$, we know
\begin{align*}
&\frac{1}{2sd_q(x)}\big| d_q^2(\gamma_v(s))-d_q^2(x)-\langle \nabla d_q^2(x),v\rangle s-s^2\big|\\
&=\big|(1+\frac{b^+(\gamma_v(s))-b^+(x)}{2d_q(x)})(\frac{b^+(\gamma_v(s))-b^+(x)}{s})-\langle \nabla b^+(x),v\rangle -\frac{s}{2d_q(x)}\big|.
\end{align*}
Note that $|b^+(\gamma_v(s))-b^+(x)|\le s$ and $d_q(x)\ge \frac{d_q(p)}{2}$ for $x\in B_{r_1}(p)$ and $2r_1\le d_q(p)$. We have
\begin{align*}
&\big|\langle \nabla b^+(x),v\rangle -\frac{b^+(\gamma_v(s))-b^+(x)}{s}\big|\\
&\le \frac{1}{2sd_q(x)}\big| d_q^2(\gamma_v(s))-d_q^2(x)-\langle \nabla d_q^2(x),v\rangle s-s^2\big|+\frac{s}{d_q(x)}\\
&\le \frac{1}{sd_q(p)}\big| d_q^2(\gamma_v(s))-d_q^2(x)-\langle \nabla d_q^2(x),v\rangle s-s^2\big|+\frac{2s}{d_q(p)}.
\end{align*}
So, by Lemma \ref{lem almost cosine}, we know
\begin{align*}
&\fint_{SB_{r_1}(p)}\big|\langle \nabla b^+(x),v\rangle -\frac{b^+(\gamma_v(s))-b^+(x)}{s}\big|d\mu^L(x, v)\\
&\le \frac{2s}{d_q(p)}+\frac{V(B_r(q))}{sd_q(p) V(B_{r_1}(p))}\fint_{SB_{r}(q)}\big| d_q^2(\gamma_v(s))-d_q^2(x)-\langle \nabla d_q^2(x),v\rangle s-s^2\big|d\mu^L(x, v)\\
&\le \frac{2s}{d_q(p)}+ C(n)\frac{r^n}{sd_q(p) r_1^n}\big\{ \sup_{B_{11r}(q)}|f-\frac{d_q^2}{2}|+r\big(\fint_{B_r(q)}|\nabla(f-\frac{d^2_q}{2})|^2dV(x)\big)^{\frac{1}{2}}\\
&\quad \quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad \quad\quad\quad\quad+ r^2\cdot\big(\fint_{B_{11 r}(q)}|\nabla^2 f-g|^2dV(x)\big)^{\frac{1}{2}}
\big\}
\end{align*}
\end{proof}
\begin{cor}\label{cor Colding-1.28}
For $0<\tau\le \tau_0(n)\ll 1$, $0<\beta_0(n)\le \beta \le \tau^{-c(n)}$ and $r>0$, assume $B_r(p)\subseteq (M,g)$ with $Rc\ge 0$ satisfies $\displaystyle \frac{V(B_r(p))}{V(B_r(0))}\ge 1-\tau.$
Choose $\{q_i\}_{i=1}^{n}$ as in Theorem \ref{thm dist of points in geodesic balls induc-dim} and $b_i^+(x)=d(q_i,x)-d(q_i,p)$. Then for $r_1=r_{q_n}=c(n)\beta^{-(C(n)+\frac{\nu^{n+1}-\nu}{\nu-1})}r$
and $s\le r_1$, there holds
\begin{align*}
\fint_{SB_{r_1}(p)}\big|\langle \nabla b_i^+(x),v\rangle &-\frac{b_i^+(\gamma_v(s))-b_i^+(x)}{s}\big|d\mu^L(x, v)\\
&\le 2\beta^{-\frac{\nu^{n+2-i}-\nu^2}{\nu-1}}\frac{s}{r_1}+C(n)\frac{r_1}{s}\beta^{-(C(n)-\frac{(n+1)(\nu^{n+2-i}-\nu^2)}{\nu-1})}.
\end{align*}
\end{cor}
\begin{proof}
By the choice of $q_i$, we know $C(n)\beta^{\frac{\nu^{n+2-i}-\nu^2}{\nu-1}}r_1= d_{q_i}(p)\le \beta^{-C(n)}r$. So, we know
$$\frac{V(B_{200d_{q_i}(p)}(q_i))}{V(B_{200d_{q_i}(p)}(0))}\ge 1-C(n)\beta^{-C(n)}$$
and $B_{r_1}(p)\subset B_{d_{q_i}(p)}(q_i)$ with $2r_1\le d_{q_i}(p)$. Thus by Propositions \ref{prop L2 diff of gadient of F}, \ref{prop C0 diff of F} and \ref{prop integral Hessian small}, we know there exists $f_i$ such that
\begin{align*}
\frac{1}{d^2_{q_i}(p)}\sup_{B_{11 d_{q_i}(p)}(q_i)}|f_i-d_{q_i}^2|&+\frac{1}{d_{q_i}(p)}\big(\int_{B_{d_{q_i}(p)}(q_i)}|\nabla(f_i-d_{q_i}^2)|^2\big)^{\frac{1}{2}}\\
&+\big(\int_{B_{11d_{q_i}(p)}(q_i)}|\nabla^2f_i-2g|^2\big)^{\frac{1}{2}}
\le C(n)\beta^{-\frac{C(n)}{2n+4}}.
\end{align*}
So, by Lemma \ref{lem almost linear}, we know
\begin{align*}
\fint_{SB_{r_1}(p)}\big|\langle \nabla b_i^+(x),v\rangle &-\frac{b_i^+(\gamma_v(s))-b_i^+(x)}{s}\big|\le \frac{2s}{d_{q_i}(p)}+ \frac{C(n)}{sd_{q_i}(p)} (\frac{d_{q_i}(p)}{r_1})^n \beta^{-C(n)}d_{q_i}^2(p)\\
&\le 2\beta^{-\frac{\nu^{n+2-i}-\nu^2}{\nu-1}}\frac{s}{r_1}+C(n)\frac{r_1}{s}\beta^{-(C(n)-\frac{(n+1)(\nu^{n+2-i}-\nu^2)}{\nu-1})}.
\end{align*}
\end{proof}
Theorem \ref{thm direction points imply Lipschitz less than 1+ep} means the functions $\{b_i^{+}\}_{1\le i\le n}$ are almost orthogonal in the difference quotient sense. Combining it with the above corollary, we get the orthogonality of $\{b_i^{+}\}_{1\le i\le n}$ in the following sense.
\begin{prop}\label{prop approximation component is almost orthogonal}
{For $0<\tau\le \tau_0(n)\ll 1$, $0<\beta_0(n)\le \beta \le \tau^{-c(n)}$ and $r>0$, assume $B_r(p)$ is a geodesic ball in $(M,g)$ with $Rc\ge 0$ satisfying $\displaystyle \frac{V(B_r(p))}{V(B_r(0))}\ge 1-\tau$. Then for $\{q_i\}_{i=1}^{n}$ as in Theorem \ref{thm dist of points in geodesic balls induc-dim}, $r_1=r_{q_n}=c(n)\beta^{-(C(n)+\frac{\nu^{n+1}-\nu}{\nu-1})}r$ and $b_i^+(x)=d(q_i,x)-d(q_i,p)$, there holds
\begin{align}
\fint_{B_{r_1}(p)} \big|\langle \nabla b_i^+, \nabla b_j^+\rangle\big|^2\leq C(n)(\beta^{-\frac{1}{2}}+\tau^{\frac{2}{n+1}}) , \quad \quad \quad \quad 1\leq i\neq j\leq n .\nonumber
\end{align}
}
\end{prop}
{\it Proof:}~
{We only need to show the conclusion for fixed $i, j$ with $i\neq j$. For $\theta\in (0, \frac{\pi}{2})$ (to be determined later), set $\displaystyle C_{\theta}(x)\vcentcolon= \Big\{\nu\in S(T_xM)|\ \langle \nu, \nabla b_i^+(x)\rangle\geq \cos\theta \Big\}$ and $\displaystyle C_\theta=\cup_{x\in B_{r_1}(p)}C_\theta(x)$.
Moreover, recall that $l_v$ is the largest number such that $\exp_x(tv)$ is a segment for any $t\le l_v$. Set $\mathcal{B}(x)=\{v\in S(T_xM)|\ l_v\le r_1\}$ and $\mathcal{B}_\theta=\big(\cup_{x\in B_{r_1}(p)} \mathcal{B}(x)\big)\cap C_{\theta}$. Then for any $\nu\in C_{\theta}$,
\begin{align}
|\nu- \nabla b_i^+|^2= |\nu|^2+ |\nabla b_i^+|^2- 2\langle \nu, \nabla b_i^+\rangle \leq 2- 2\cos\theta. \label{need length difference}
\end{align}
Using $\frac{V(B_{r}(p))}{\omega_n r^n}\ge 1-\tau $, by the same argument as the proof of Lemma \ref{lem dist between points and segment}, we know
$\frac{\mathcal{H}^{n-1}(\mathcal{B}(x))}{\mathcal{H}^{n-1}(S(T_xM))}\le 2\tau,$
which implies
\begin{align}\label{bad ratio}
\frac{V(\mathcal{B}_\theta)}{V(SB_{r_1}(p))}\le 2\tau.
\end{align}
From Corollary \ref{cor Colding-1.28}, let $s=r_1$, then we have
\begin{align}
\frac{1}{V\big(SB_{r_1}(p)\big)} \int_{SB_{r_1}(p)} \Big|\langle \nabla b_k^+, \nu \rangle- f_k(\nu)\Big|\leq \epsilon_1 \ , \quad \quad \quad \quad k= 1, \cdots, n\label{need 2.11}
\end{align}
where $f_k(\nu)\vcentcolon= \frac{(b_k^+\circ \gamma_{\nu})(r_1)- (b_k^+\circ \gamma_{\nu})(0)}{r_1}$ for $\nu\in SB_{r_1}(p)$ and $\epsilon_1=C(n)\beta^{-C(n)}$. So,
\begin{align}
&\quad \frac{1}{V\big(SB_{r_1}(p)\big)}\int_{C_{\theta}} \Big|\big|f_i(\nu)\big|^2- 1\Big| \nonumber \\
&\leq \frac{1}{V\big(SB_{r_1}(p)\big)}\int_{C_{\theta}} \Big|\big|f_i(\nu)\big|^2- \langle \nabla b_i^+, \nu \rangle^2\Big| + \frac{1}{V\big(SB_{r_1}(p)\big)}\int_{C_{\theta}} \big|1- \langle \nabla b_i^+, \nu \rangle^2 \big| \nonumber \\
&\leq 2\epsilon_1+ (1- \cos^2\theta)\cdot \frac{V(C_{\theta})}{V\big(SB_{r_1}(p)\big)}\leq 2\epsilon_1+ \theta^2 \frac{V(C_{\theta})}{V\big(SB_{r_1}(p)\big)}. \label{need lem 4.4.1}
\end{align}
Now from Theorem \ref{thm direction points imply Lipschitz less than 1+ep}, Corollary \ref{cor Colding-1.28}, (\ref{bad ratio}) and (\ref{need lem 4.4.1}), for $j\neq i$, we have
\begin{align}
&\quad \frac{1}{V\big(SB_{r_1}(p)\big)} \int_{C_{\theta}} \big|\langle \nabla b_j^+, \nu \rangle\big|^2\nonumber \\
&\leq \frac{2}{V\big(SB_{r_1}(p)\big)} \int_{SB_{r_1}(p)} \Big|\langle \nabla b_j^+, \nu \rangle- f_j(\nu)\Big|^2+ \frac{2}{V\big(SB_{r_1}(p)\big)} \int_{C_{\theta}} \big|f_j(\nu)\big|^2 \nonumber \\
& \leq 4\epsilon_1 + \frac{2}{V\big(SB_{r_1}(p)\big)}\Big(\int_{C_{\theta}} \Big|\frac{d\big(\psi\circ \gamma_{\nu}(r_1), \psi\circ \gamma_{\nu}(0)\big)}{r_1}\Big|^2- \big|f_i(\nu)\big|^2\Big) \nonumber \\
&\leq 4\epsilon_1+ \frac{2}{V\big(SB_{r_1}(p)\big)}\Big\{3\delta V(C_{\theta}\backslash \mathcal{B}_\theta)+nV(\mathcal{B}_\theta)+ \Big(\int_{C_{\theta}} \Big|1- \big|f_i(\nu)\big|^2\Big|\Big)\Big\} \nonumber \\
&\leq 8\epsilon_1 +4n\tau + (6\delta + 2\theta^2)\frac{V(C_{\theta})}{V\big(SB_{r_1}(p)\big)}, \nonumber
\end{align}
where $\delta =\beta^{-\frac{1}{2}}$ comes from Theorem \ref{thm direction points imply Lipschitz less than 1+ep}.
Note (\ref{need length difference}), then
\begin{align}
\int_{C_{\theta}} \big|\langle \nabla b_i^+, \nabla b_j^+\rangle\big|^2 &\leq 2\int_{C_{\theta}} \big|\langle \nabla b_{j}^+, \nu- \nabla b_i^+\rangle\big|^2+ 2\int_{C_{\theta}} \big|\langle \nabla b_j^+, \nu\rangle \big|^2 \nonumber \\
&\leq 2\int_{C_{\theta}} \big|\nu- \nabla b_i^+\big|^2+ 2\int_{C_{\theta}} \big|\langle \nabla b_j^+, \nu\rangle \big|^2 \nonumber \\
&\leq 2\theta^2 \cdot V(C_{\theta})+ 16n(\epsilon_1+ \tau)V\big(SB_{r_1}(p)\big)+ 12(\delta+\theta^2)V(C_{\theta}) .\nonumber
\end{align}
Now note that $\langle \nabla b_i^+, \nabla b_j^+\rangle$ is constant in $\nu$ on $S(T_xM^n)$ for any fixed $x\in M^n$; we have
\begin{align}
\fint_{B_{r_1}(p)} \big|\langle \nabla b_i^+, \nabla b_j^+\rangle \big|^2&= \frac{1}{V\big(C_{\theta}\big)}\int_{C_{\theta}} \big|\langle \nabla b_i^+, \nabla b_j^+\rangle\big|^2 \nonumber \\
&\leq 14(\theta^2+\delta)+ 16n(\epsilon_1+ \tau)\cdot \frac{V\big(SB_{r_1}(p)\big)}{V(C_{\theta})}. \nonumber
\end{align}
Note that $\frac{V(C_\theta)}{V(SB_{r_1}(p))}=\frac{\int_0^\theta \sin^{n-2}(\alpha)d\alpha}{\int_0^{\pi} \sin^{n-2}(\alpha)d\alpha}\ge c(n)\theta^{n-1}$. We can choose $\theta=c(n)(\epsilon_1+\tau)^{\frac{1}{n+1}}$ such that $\theta^2=\frac{\epsilon_1+\tau}{c(n)\theta^{n-1}}$. Then
$$\fint_{B_{r_1}(p)} \big|\langle \nabla b_i^+, \nabla b_j^+\rangle \big|^2\le C(n)(2\theta^2+\delta)\leq C(n)(\beta^{-\frac{1}{2}}+\tau^{\frac{2}{n+1}}).$$
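The exponent in this choice of $\theta$ comes from balancing the two error terms: the defining relation is equivalent to
\begin{align*}
\theta^{n+1}=\frac{\epsilon_1+\tau}{c(n)}, \qquad\text{so that}\qquad \theta^2=c(n)^{-\frac{2}{n+1}}\,(\epsilon_1+\tau)^{\frac{2}{n+1}},
\end{align*}
and then both $\theta^2$ and $\frac{\epsilon_1+\tau}{c(n)\theta^{n-1}}$ are comparable to $(\epsilon_1+\tau)^{\frac{2}{n+1}}$.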
}
\qed
\subsection{The $\epsilon$-splitting map induced by distance map}\label{subsec splitting map}
\begin{definition}\label{def excess est}
{For $q^+, q^-, p\in \mathbf{X}$, where $\mathbf{X}$ is a metric space, we say that $[q^+, q^-, p]$ is an \textbf{AG-triple on $\mathbf{X}$ with the excess $s$ and the scale $t$} if
\begin{align}
\mathbf{E}(p)= s \qquad\text{and}\qquad
\min\big\{d(p, q^+), d(p, q^-)\big\}= t, \nonumber
\end{align}
where $\mathbf{E}(\cdot)= d(\cdot, q^+)+ d(\cdot, q^-)- d(q^+, q^-)$.
}
\end{definition}
We recall the following Abresch--Gromoll lemma (\cite{AG}; see also \cite[Lemma $2.2$]{Xu-group}).
\begin{lemma}\label{lem general Abresch-Gromoll}
{On a complete Riemannian manifold $(M^n, g)$ with $Rc\geq 0$, assume that $[q^+, q^-, p]$ is an AG-triple with the excess $\leq \frac{1}{n}\frac{r^2}{R}$ and the scale $\geq R$, and furthermore assume $R\geq 2^{2n}r$. Then $\displaystyle \sup_{B_{r}(p)} \mathbf{E}\leq 2^6\cdot \big(\frac{r}{R}\big)^{\frac{1}{n- 1}}r$.
}
\end{lemma}\qed
On a Riemannian manifold, if there is a segment $\gamma_{p, q}$ between two points $p, q$, we can choose the midpoint of the segment $\gamma_{p, q}$, denoted $z$. Then $[p, q, z]$ is an AG-triple with the excess $0$ and the scale $\frac{1}{2}d(p, q)$. The following result is an approximate version of this fact.
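As a concrete illustration of Definition~\ref{def excess est}, the following short numerical sketch (the points and values are sample data of our own, not taken from the argument above) checks that in the Euclidean plane the midpoint of a segment has excess exactly $0$, while a point off the segment has strictly positive excess:

```python
import math

def excess(p, q_plus, q_minus):
    # E(p) = d(p, q+) + d(p, q-) - d(q+, q-); zero iff p lies on a
    # minimizing segment from q+ to q-.
    d = math.dist
    return d(p, q_plus) + d(p, q_minus) - d(q_plus, q_minus)

q_plus, q_minus = (0.0, 0.0), (4.0, 0.0)
z = (2.0, 0.0)   # midpoint of the segment from q+ to q-
p = (2.0, 1.0)   # a point off the segment

print(excess(z, q_plus, q_minus))   # 0.0
print(excess(p, q_plus, q_minus))   # 2*sqrt(5) - 4 > 0
```

Here the scale of the AG-triple $[q^+, q^-, z]$ is $\min\{d(z,q^+), d(z,q^-)\}=\frac{1}{2}d(q^+,q^-)$.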
\begin{prop}\label{prop diameter pair points}
{Let $\tau\in (0,\frac{1}{6(n+1)})$, $3\leq \beta\leq \tau^{-c(n)}$ and $r>0$, and suppose $B_{r}(p)\subset (M,g)$ satisfies $Rc\ge 0$ and $\displaystyle \frac{V(B_{r}(p))}{V(B_r(0))}\ge 1-\tau$. Then for every $q\in B_{\beta^{-C(n)} r}(p)- B_{\frac{1}{2}d(p,q_n)}(p)$, there is $q^{-}$ such that $[q,q^-,p]$ is an AG-triple with the excess $\le C(n)\beta^{-C(n)}d_q(p)$ and the scale $\ge d_q(p)$.
}
\end{prop}
{\it Proof:}~
{By (the proof of) Lemma \ref{lem dist between points and segment}, we know
\begin{align*}
\frac{V(A_{d_{q}(p)-\epsilon r_{q},d_{q}(p)+\epsilon r_{q}}(q)\backslash \mathcal{C})}{V(B_{\epsilon r_{q}}(p))}\le C(n)\frac{\beta^{-C(n)}}{\epsilon^{n-1}},
\end{align*}
where
$$\mathcal{C}=\{y\in A_{d_q(p)-\epsilon r_q, d_q(p)+\epsilon r_q}(q)\;|\; \exists\, \theta(y)\in S(T_{q}M) \text{ s.t. } y= \exp_q(\rho(y)\theta(y)),\ l_{\theta(y)}\ge 3 d_q(p)\}.$$
Taking $\epsilon=(2 C(n)\beta^{-C(n)})^{\frac{1}{n-1}}$, we know $C(n)\frac{\beta^{-C(n)}}{\epsilon^{n-1}}=\frac{1}{2}$, which means there exists $\tilde{p}\in B_{(2 C(n)\beta^{-C(n)})^{\frac{1}{n-1}}r_{q}}(p)$ such that $\tilde{p}=\exp_{q}(\rho(\tilde{p})\theta(\tilde{p}))$ and $l_{\theta(\tilde{p})}\ge 3d_q(p)$. That is,
$\exp_{q}(t\theta(\tilde{p})):[0,3d_q(p)]\to M$ is a segment. So,
$$d(\exp_{q}(3d_q(p)\theta(\tilde{p})),p)\ge d(\exp_{q}(3d_q(p)\theta(\tilde{p})),q)-d(p,q)=2d_q(p)$$
and
$$d(\exp_{q}(\rho(\tilde{p})\theta(\tilde{p})),p)=d(\tilde{p},p)\le C(n)\beta^{-C(n)}r_q\ll d_q(p).$$
By continuity, there exists $t_1\in [\rho(\tilde{p}),3d_q(p)]$ such that $q^-=\exp_{q}(t_1\theta(\tilde{p}))$ satisfies $\displaystyle d(q^-,p)=d_q(p)$ and
$$d(q^{-},q)=d(q^{-},\tilde{p})+d(\tilde{p},q)\ge d(q^-,p)+d(p,q)-2d(p,\tilde{p})\ge (2-C(n)\beta^{-C(n)})d_q(p).$$
Moreover, we have
$$\mathbf{E}(p)=d(p,q)+d(p,q^{-})-d(q,q^-)\le 2d(p,\tilde{p})\le C(n)\beta^{-C(n)}r_q.$$
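The bound $\mathbf{E}(p)\leq 2d(p,\tilde{p})$ follows from the triangle inequality together with $d(q,q^-)=d(q,\tilde{p})+d(\tilde{p},q^-)$, which holds since $\tilde{p}$ lies on the segment from $q$ to $q^-$:
\begin{align*}
\mathbf{E}(p)&=d(p,q)+d(p,q^{-})-d(q,q^-)\\
&\leq \big(d(\tilde{p},q)+d(p,\tilde{p})\big)+\big(d(\tilde{p},q^{-})+d(p,\tilde{p})\big)-\big(d(q,\tilde{p})+d(\tilde{p},q^{-})\big)=2d(p,\tilde{p}).
\end{align*}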
}
\qed
Recall that we have the following existence result for a splitting function $\mathbf{b}$ with respect to the local Busemann function $b^+$.
\begin{lemma}\label{lem existence of harmonic function-r}
{On a complete Riemannian manifold $M^n$ with $Rc\geq 0$, assume that $[q^+, q^-, p]$ is an AG-triple with the excess $\leq \frac{4}{n}\frac{r^2}{R}$ and the scale $\geq R$, and also assume $R\geq 2^{2n+ 1}r$. Then there exists a harmonic function $\mathbf{b}$ defined on $B_{2r}(p)$ such that
\begin{align}
&\sup_{B_r(p)}|\mathbf{b}- b^+|\leq C(n)(\frac{r}{R})^{\frac{1}{n- 1}}r, \nonumber \\
&\sup_{B_{r}(p)} |\nabla \mathbf{b}|\leq 1+ 2^{51n^2}\big(\frac{r}{R}\big)^{\frac{1}{4(n- 1)}} \qquad\text{and}\qquad
\fint_{B_{r}(p)} \big|\nabla (\mathbf{b}- b^+)\big|^2 \leq C(n)\big(\frac{r}{R}\big)^{\frac{1}{n- 1}} , \nonumber
\end{align}
where $b^{+}(x)= d(x, q^{+})- d(p, q^+)$.
}
\end{lemma}
{\it Proof:}~
{See \cite[Lemma $2.6$]{Xu-group}, in particular \cite[(2.12), (2.15)]{Xu-group}.
}
\qed
Now we establish our main theorem on the construction of the $\epsilon$-splitting map.
\begin{theorem}\label{thm AG-triple imply one more splitting-pre}
{If $Rc(M^n)\geq 0$ and $V\big(B_r(p)\big)\geq (1- \tau)V\big(B_r(0)\big)$, then for $\beta=\tau^{-c(n)}$ and $r_1=c(n)\beta^{-(C(n)+\frac{\nu^{n+1}-\nu}{\nu-1})}r$, there are harmonic functions $\big\{\mathbf{b}_i\big\}_{i= 1}^{n}$ defined on $B_{r_1}(p)\subset B_{r}(p)$, such that
\begin{align}
\sup_{B_{r_1}(p)\atop i= 1, \cdots, n} |\nabla \mathbf{b}_i|\leq 1+ C(n)\beta^{-1} \quad\text{and}\quad \fint_{B_{r_1}(p)} \big|\langle \nabla \mathbf{b}_i, \nabla \mathbf{b}_j\rangle- \delta_{ij}\big|^2\leq C(n)\tau^{c(n)}.\nonumber
\end{align}
}
\end{theorem}
{\it Proof:}~
{We can apply Proposition \ref{prop approximation component is almost orthogonal} to get
\begin{align}
\fint_{B_{r_1}(p)}\sum_{i, j= 1}^{n} \big|\langle \nabla b_i^+, \nabla b_j^+\rangle- \delta_{ij}\big|\leq C(n)(\beta^{-\frac{1}{2}}+\tau^{\frac{2}{n+1}}).\label{almost o.n.-need-1}
\end{align}
From Proposition \ref{prop diameter pair points}, it is easy to see that for each $i= 1, \cdots, n$ there exists an AG-triple $[q_i,q_i^{-},p]$ with the excess $\leq C(n)\beta^{-C(n)}d_{q_i}(p)$ and the scale $\geq d_{q_i}(p)$.
Note that
$$
\max\{C(n)\frac{r_1}{d_{q_i}(p)},\frac{C(n)\beta^{-C(n)}d_{q_i}(p)}{r_1}\}\le C(n)\beta^{-\nu}+C(n)\beta^{-(C(n)-\frac{\nu^{n+1}}{\nu-1})}\le C(n)\beta^{-\nu}.$$
Now we can apply Lemma \ref{lem existence of harmonic function-r} to obtain harmonic functions $\big\{\mathbf{b}_i\big\}_{i= 1}^{n}$ satisfying
\begin{align}
&\sup_{B_{r_1}(p)\atop i= 1, \cdots, n} |\nabla \mathbf{b}_i|\leq 1+ 2^{51n^2}\Big(C(n)\beta^{-\nu}\Big)^{\frac{1}{4(n- 1)}}\leq 1+ C(n)\beta^{-\frac{\nu}{4(n-1)}} \label{gradient bound of b need} \\
&\sup_{i= 1, \cdots, n}\fint_{B_{r_1}(p)} \big|\nabla (\mathbf{b}_i- b_i^+)\big|^2 \leq 2^{4n}\Big(C(n)\beta^{-\nu}\Big)^{\frac{1}{n- 1}}\leq C(n)\beta^{-\frac{\nu}{n-1}}.\label{harmonic is almost linear}
\end{align}
From (\ref{gradient bound of b need}) and (\ref{harmonic is almost linear}), we get
\begin{align}
&\quad \fint_{B_{r_1}(p)} \big|\langle \nabla \mathbf{b}_i, \nabla\mathbf{b}_j\rangle- \delta_{ij}\big|^2 \nonumber \\
&\leq 3\fint_{B_{r_1}(p)} \Big(\big|\nabla (\mathbf{b}_i- b_i^+)\big|^2\cdot |\nabla \mathbf{b}_j|^2+
\Big|\big\langle \nabla b_i^+, \nabla(\mathbf{b}_j- b_j^+)\big\rangle\Big|^2+ \big|\langle \nabla b_i^+, \nabla b_j^+ \rangle- \delta_{ij}\big|^2\Big) \nonumber \\
&\leq C(n)\beta^{-\frac{\nu}{n-1}}+ 2\fint_{B_{r_1}(p)} \big|\langle \nabla b_i^+, \nabla b_j^+ \rangle- \delta_{ij}\big| .\nonumber
\end{align}
From (\ref{almost o.n.-need-1}) and the above inequality, we have
\begin{align}
\fint_{B_{r_1}(p)} \sum_{i, j= 1}^{n}\big|\langle \nabla \mathbf{b}_i, \nabla\mathbf{b}_j\rangle- \delta_{ij}\big|^2 \leq \epsilon_1=C(n)(\beta^{-\frac{1}{2}}+\beta^{-\frac{\nu}{n-1}}+\tau^{\frac{2}{n+1}}).\nonumber
\end{align}
Since we have fixed $\nu>4(n-1)$, we obtain the conclusion.
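Explicitly, substituting $\beta=\tau^{-c(n)}$ gives
\begin{align*}
\epsilon_1= C(n)\big(\tau^{\frac{c(n)}{2}}+\tau^{\frac{\nu c(n)}{n-1}}+\tau^{\frac{2}{n+1}}\big)\leq C(n)\,\tau^{c'(n)},
\end{align*}
where $c'(n)>0$ is a possibly different dimensional constant, which is the bound claimed in the statement.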
}
\qed
\section*{Acknowledgments}
We thank Zuoqin Wang for arranging our visit to the University of Science and Technology of China, where part of this work was done. We also thank Zichang Liu for sharing his preprint \cite{Liu} and for his comments on an earlier version of this paper.
\begin{bibdiv}
\begin{biblist}
\bib{AG}{article}{
AUTHOR = {Abresch, Uwe},
AUTHOR = {Gromoll, Detlef},
TITLE = {On complete manifolds with nonnegative {R}icci curvature},
JOURNAL = {J. Amer. Math. Soc.},
FJOURNAL = {Journal of the American Mathematical Society},
VOLUME = {3},
YEAR = {1990},
NUMBER = {2},
PAGES = {355--374},
ISSN = {0894-0347},
MRCLASS = {53C21},
MRNUMBER = {1030656 (91a:53071)},
MRREVIEWER = {Ji-Ping Sha},
DOI = {10.2307/1990957},
URL = {http://dx.doi.org/10.2307/1990957},
}
\bib{Cheeger}{article}{
author={Cheeger, Jeff},
title={Differentiability of Lipschitz functions on metric measure spaces},
journal={Geom. Funct. Anal.},
volume={9},
date={1999},
number={3},
pages={428--517},
}
\bib{Cheeger-note}{book}{
AUTHOR = {Cheeger, Jeff},
TITLE = {Degeneration of {R}iemannian metrics under {R}icci curvature
bounds},
SERIES = {Lezioni Fermiane. [Fermi Lectures]},
PUBLISHER = {Scuola Normale Superiore, Pisa},
YEAR = {2001},
PAGES = {ii+77},
MRCLASS = {53C21 (53C20 53C23)},
MRNUMBER = {2006642},
MRREVIEWER = {Vitali Kapovitch},
}
\bib{CC-Ann}{article}{
AUTHOR = {Cheeger, Jeff},
author= {Colding, Tobias H.},
TITLE = {Lower bounds on {R}icci curvature and the almost rigidity of warped products},
JOURNAL = {Ann. of Math. (2)},
FJOURNAL = {Annals of Mathematics. Second Series},
VOLUME = {144},
YEAR = {1996},
NUMBER = {1},
PAGES = {189--237},
ISSN = {0003-486X},
CODEN = {ANMAAH},
MRCLASS = {53C21 (53C20 53C23)},
MRNUMBER = {1405949 (97h:53038)},
MRREVIEWER = {Joseph E. Borzellino},
DOI = {10.2307/2118589},
URL = {http://dx.doi.org/10.2307/2118589},
}
\bib{CC1}{article}{
author={Cheeger, Jeff},
author={Colding, Tobias H.},
title={On the structure of spaces with Ricci curvature bounded below. I},
journal={J. Differential Geom.},
volume={46},
date={1997},
number={3},
pages={406--480},
}
\bib{CC2}{article}{
author={Cheeger, Jeff},
author={Colding, Tobias H.},
title={On the structure of spaces with Ricci curvature bounded below. II},
journal={J. Differential Geom.},
volume={54},
date={2000},
number={1},
pages={13--35},
}
\bib{CC3}{article}{
author={Cheeger, Jeff},
author={Colding, Tobias H.},
title={On the structure of spaces with Ricci curvature bounded below. III},
journal={J. Differential Geom.},
volume={54},
date={2000},
number={1},
pages={37--74},
}
\bib{CN}{article}{
AUTHOR = {Cheeger, Jeff},
author = {Naber, Aaron},
TITLE = {Regularity of {E}instein manifolds and the codimension 4
conjecture},
JOURNAL = {Ann. of Math. (2)},
FJOURNAL = {Annals of Mathematics. Second Series},
VOLUME = {182},
YEAR = {2015},
NUMBER = {3},
PAGES = {1093--1165},
ISSN = {0003-486X},
MRCLASS = {53C25 (53C23)},
MRNUMBER = {3418535},
MRREVIEWER = {Luis Guijarro},
DOI = {10.4007/annals.2015.182.3.5},
URL = {https://doi.org/10.4007/annals.2015.182.3.5},
}
\bib{CJN}{article}{
author={Cheeger, Jeff},
author={Jiang, Wenshuai},
author={Naber, Aaron},
title={Rectifiability of Singular Sets in Noncollapsed Spaces with Ricci Curvature bounded below},
journal={arXiv:1805.07988v1 [math.DG]},
}
\bib{Colding-shape}{article}{
AUTHOR = {Colding, Tobias H.},
TITLE = {Shape of manifolds with positive {R}icci curvature},
JOURNAL = {Invent. Math.},
FJOURNAL = {Inventiones Mathematicae},
VOLUME = {124},
YEAR = {1996},
NUMBER = {1-3},
PAGES = {175--191},
ISSN = {0020-9910},
MRCLASS = {53C23 (53C21)},
MRNUMBER = {1369414},
MRREVIEWER = {Man Chun Leung},
DOI = {10.1007/s002220050049},
URL = {https://doi.org/10.1007/s002220050049},
}
\bib{Colding-volume}{article}{
AUTHOR = {Colding, Tobias H.},
TITLE = {Ricci curvature and volume convergence},
JOURNAL = {Ann. of Math. (2)},
FJOURNAL = {Annals of Mathematics. Second Series},
VOLUME = {145},
YEAR = {1997},
NUMBER = {3},
PAGES = {477--501},
ISSN = {0003-486X},
CODEN = {ANMAAH},
MRCLASS = {53C21 (53C23)},
MRNUMBER = {1454700 (98d:53050)},
MRREVIEWER = {Zhongmin Shen},
DOI = {10.2307/2951841},
URL = {http://dx.doi.org/10.2307/2951841},
}
\bib{Ding}{article}{
author={Ding, Yu},
title={Heat kernels and Green's functions on limit spaces},
journal={Comm. Anal. Geom.},
volume={10},
date={2002},
number={3},
pages={475--514},
}
\bib{Liu}{article}{
AUTHOR = {Liu, Zichang},
TITLE = {A generalization of the first variation formula},
JOURNAL = {Preprint},
}
\bib{Xu-group}{article}{
AUTHOR = {Xu, Guoyi},
TITLE = {Local estimate of fundamental groups},
JOURNAL = {Adv. Math.},
FJOURNAL = {Advances in Mathematics},
VOLUME = {352},
YEAR = {2019},
PAGES = {158--230},
ISSN = {0001-8708},
MRCLASS = {53C21 (57M05)},
MRNUMBER = {3959654},
MRREVIEWER = {Christine M. Escher},
DOI = {10.1016/j.aim.2019.06.006},
URL = {https://doi.org/10.1016/j.aim.2019.06.006},
}
\end{biblist}
\end{bibdiv}
\end{document}
| {
"timestamp": "2022-12-22T02:06:40",
"yymm": "2212",
"arxiv_id": "2212.10759",
"language": "en",
"url": "https://arxiv.org/abs/2212.10759",
"abstract": "For a geodesic ball with non-negative Ricci curvature and almost maximal volume, without using compactness argument, we construct an $\\epsilon$-splitting map on a concentric geodesic ball with uniformly small radius. There are two new technical points in our proof. The first one is the way of finding $n$ directional points by induction and stratified almost Gou-Gu Theorem. The other one is the error estimates of projections, which guarantee the $n$ directional points we find really determine $n$ different directions.",
"subjects": "Differential Geometry (math.DG)",
"title": "The construction of $ε$-splitting map",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138131620183,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7087156933908837
} |
https://arxiv.org/abs/1809.06296 | Improvements for eigenfunction averages: An application of geodesic beams | Let $(M,g)$ be a smooth, compact Riemannian manifold and $\{\phi_\lambda \}$ an $L^2$-normalized sequence of Laplace eigenfunctions, $-\Delta_g\phi_\lambda =\lambda^2 \phi_\lambda$. Given a smooth submanifold $H \subset M$ of codimension $k\geq 1$, we find conditions on the pair $(M,H)$, even when $H=\{x\}$, for which $$ \Big|\int_H\phi_\lambda d\sigma_H\Big|=O\Big(\frac{\lambda^{\frac{k-1}{2}}}{\sqrt{\log \lambda}}\Big)\qquad \text{or}\qquad |\phi_\lambda(x)|=O\Big(\frac{\lambda ^{\frac{n-1}{2}}}{\sqrt{\log \lambda}}\Big), $$ as $\lambda\to \infty$. These conditions require no global assumption on the manifold $M$ and instead relate to the structure of the set of recurrent directions in the unit normal bundle to $H$. Our results extend all previously known conditions guaranteeing improvements on averages, including those on sup-norms. For example, we show that if $(M,g)$ is a surface with Anosov geodesic flow, then there are logarithmically improved averages for any $H\subset M$. We also find weaker conditions than having no conjugate points which guarantee $\sqrt{\log \lambda}$ improvements for the $L^\infty$ norm of eigenfunctions. Our results are obtained using geodesic beam techniques, which yield a mechanism for obtaining general quantitative improvements for averages and sup-norms. | \section{Introduction}
On a smooth compact Riemannian manifold without boundary of dimension $n$, $(M,g)$, we consider sequences of Laplace eigenfunctions $\{\phi_\lambda\}$ solving
\[
(-\Delta_g-\lambda^2)\phi_\lambda=0,\qquad{\|\phi_\lambda\|_{L^2(M)}=1.}
\]
We study the average oscillatory behavior of $\phi_\lambda$ when restricted to a submanifold $H\subset M$ without boundary. { In {particular}, we examine the behavior of the integral average $\int_H\phi_\lambda d\sigma_H$ as $\lambda \to \infty$, where $\sigma_H$ is the volume measure on $H$ induced by the Riemannian metric. {Since} we allow $H$ to consist of a single point, our results include the study of sup-norms $\|\phi_\lambda\|_{_{L^\infty(M)}}$.
The study of these quantities has a long history. In general
\begin{equation}
\label{e:zelBound}
\int_H\phi_\lambda d\sigma_H=O(\lambda^{\frac{k-1}{2}}) \qquad \text{and} \qquad \|\phi_\lambda\|_{_{L^\infty(M)}}=O(\lambda^{\frac{n-1}{2}}),
\end{equation}
where $k$ is the codimension of $H$, and $H$ is any smooth embedded submanifold.
The sup-norm bound in~\eqref{e:zelBound} is a consequence of the well known works~\cite{Ava,Lev,Ho68}.
The bound on averages was first obtained in~\cite{Good} and~\cite{Hej}, for the case in which $H$ is a periodic geodesic in a compact hyperbolic surface. The general bound in \eqref{e:zelBound} for integral averages was proved by Zelditch in \cite[Corollary 3.3]{Zel}.}
Since it is easy to find examples on the round sphere which saturate the estimate~\eqref{e:zelBound}, it is natural to ask whether the bound is typically saturated, and to understand conditions under which the estimate may be improved.
In~\cite{CG17,Gdefect,CGT, GT}, the authors (together with Toth in the latter two cases) gave bounds on {integral averages} based on understanding microlocal concentration as measured by defect measures (see~\cite[Chapter 5]{EZB} or~\cite{Gerard} for a description of defect measures). In particular,~\cite{CG17} gave a new proof of~\eqref{e:zelBound} and studied conditions on $({\{\phi_\lambda\}},H)$ guaranteeing
\begin{equation}
\label{e:averageImproved1}
\int_H\phi_\lambda d\sigma_H =o\big(\lambda^{\frac{k-1}{2}}\big).
\end{equation}
These conditions generalized and weakened the assumptions in~\cite{SZ02,SoggeTothZelditch,CS,SXZ,Wym,Wym2,Wym3, GT,Gdefect,CGT,Berard77,SZ16I,SZ16II} which guarantee at least the improvement~\eqref{e:averageImproved1}. However, the results in~\cite{CG17} neither recovered the bound
\begin{equation}
\label{e:averageImproved2}
\int_H\phi_\lambda d\sigma_H =O\Bigg(\frac{\lambda^{\frac{k-1}{2}}}{\sqrt{\log \lambda}}\Bigg),
\end{equation}
obtained in~\cite{SXZ,Wym2,Wym18} under various conditions on $H$ when $M$ has non-positive curvature, nor recovered the improvement on sup-norms given in~\cite{Berard77, Bo16,Randol} when $k=n$ and $M$ has no conjugate points. In the present article, we address such quantitative improvements.
To the authors' knowledge, this article improves and extends \emph{all} existing bounds on averages over submanifolds for eigenfunctions of the Laplacian, including those on $L^\infty$ norms (without additional assumptions on the eigenfunctions; see Remark~\ref{r:extra} for more detail on other types of assumptions). The estimates from~\cite{CG18d} imply those of~\cite{CG17} and therefore can be used to obtain all previously known improvements of the form~\eqref{e:averageImproved1}. In this article, we make the geometric arguments necessary to apply geodesic beam techniques and improve upon the results of~\cite{Wym18,Wym2,SXZ,Berard77,Bo16,Randol}.
These improvements are possible because the geodesic beam techniques developed in~\cite{CG18d} give {an} explicit bound on averages over submanifolds, $H$, which depend{s} only on microlocal information about $\phi_\lambda$ near the conormal {bundle} to $H$. The estimate requires no assumptions on the geometry of $H$ or $M$ and is purely local. It is only with this bound in place that~\cite{CG18d} applies Egorov's theorem to obtain a purely dynamical estimate (see also Theorem~\ref{t:coverToEstimate}). In this article, we apply dynamical arguments to draw conclusions about the pairs $((M,g),H)$ supporting eigenfunctions with maximal averages. While previous works on eigenfunction averages rely on explicit parametrices for the kernel of the half wave-group for large times, the authors' techniques~\cite{GT,Gdefect,CGT,CG17,CG18d}, show that improvements can be effectively obtained by understanding the microlocalization properties of eigenfunctions.
\begin{remark}
\label{r:extra}
Note that in this paper we study averages of relatively weak quasimodes for the Laplacian with no additional assumptions on the functions. This is in contrast with results which impose additional conditions on the functions such as:
that they be Laplace eigenfunctions that simultaneously satisfy additional equations~\cite{I-S,GT18a,Ta18}; that they be eigenfunctions in the very rigid case of the flat torus~\cite{B93,Gros}; or that they form a density one subsequence of Laplace eigenfunctions~\cite{JZ}.
\end{remark}
We now state the main results of this article. In order to match the language of~\cite{CG18d}, we will semiclassically rescale, setting $h=\lambda^{-1}$ and sending $h\to 0^+$. Relabeling $\phi_\lambda$ as $\phi_h$, {the eigenfunction equation becomes}
$$
(-h^2\Delta_g-1)\phi_h=0,\qquad \|\phi_h\|_{L^2}=1.
$$
We also recall the notation {for the semiclassical Sobolev norms: }
\begin{equation}
\label{e:sobolev}
\|u\|_{_{\sob{s}}}^2:=\big\langle (-h^2\Delta_g+1)^{s}u,u\big\rangle_{_{\!L^2(M)}}.
\end{equation}
Let ${\Xi}$ denote the collection of maximal unit speed geodesics for $(M,g)$. {For $m$ a positive integer, $r>0$, $t\in {\mathbb R}$, and $x \in M$} define
$$
{\Xi}_x^{m,r,t}:=\big\{\gamma\in \Xi: \gamma(0)=x,\,\exists\text{ at least }m\text{ conjugate points to } x \text{ in }\gamma(t-r,t+r)\big\},
$$
where we count conjugate points with multiplicity. Next, for a set $V \subset M$ write
$$
\mc{C}_{_{\!V}}^{m,r,t}:=\bigcup_{x\in V}\{\gamma(t): \gamma\in \Xi_x^{m,r,t}\}.
$$
{Note that if $r_t \to 0^+$ as $|t|\to \infty$, then saying that $x \in \mc{C}_x^{n-1,r_t,t}$ for $t$ large indicates that $x$ behaves like a point that is maximally self-conjugate. This is the case for every point on the sphere. The following result applies under the assumption that this does not happen and obtains quantitative improvements in that setting. }
\begin{theorem}
\label{t:noConj2}
Let $V \subset M$ and assume that there exist $t_0>0$ and $a>0$ so that
$$
\inf_{x\in V}d\big(x, \mc{C}_{x}^{n-1,r_t,t}\big)\geq r_t,\qquad\text{ for } t\geq t_0
$$
with $r_t=\frac{1}{a}e^{-at}.$
Then, there exist $C>0$ and $h_0>0$ so that for $0<h<h_0$ and $u \in {\mc{D}'}(M)$
$$
\|u\|_{L^\infty(V)}\leq Ch^{\frac{1-n}{2}}\left(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}\;+\; \frac{\sqrt{\log h^{-1}}}{h}\big\|(-h^2\Delta_g-1)u\big\|_{\Hs{\!\!{\frac{n-3}{2}}}}\right).
$$
\end{theorem}
In fact a generalization of Theorem~\ref{t:noConj2} holds not just for $H=\{x\}$, but for any $H\subset M$ of large enough codimension.
\begin{theorem}
\label{t:noConj1}
Let {$H\subset M$ be a closed embedded submanifold of codimension $k>\frac{n+1}{2}$} and assume that there exist $t_0>0$ and $a>0$ such that
\begin{equation}
\label{e:dist}
d\big(H, \mc{C}_H^{2k-n-1,r_t,t}\big)\geq r_t,\,\qquad \text{ for }t\geq t_0
\end{equation}
with $r_t:=\frac{1}{a}e^{-at}.$
Then, there exists $C>0$, so that for all $w\in C_c^\infty(H)$ the following holds. There exists $h_0>0$ such that for all $0<h<h_0$ and $u\in \mc{D}'(M)$,
\begin{equation}
\label{e:goalEst}
\Big|\int_H wu d\sigma_{_H}\Big|\leq Ch^{\frac{1-k}{2}}{\|w\|_{_{\!\infty}}}\left(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}\;+\;\frac{\sqrt{\log h^{-1}}}{h}\big\|(-h^2\Delta_g-1)u\big\|_{\Hs{\frac{k-3}{2}}}\right).
\end{equation}
\end{theorem}
\begin{remark}
One should think of the assumption in Theorem~\ref{t:noConj2} as ruling out maximal self-conjugacy of a point with itself uniformly up to time $\infty$. In fact, in order to obtain an $L^\infty$ bound of $o(h^{\frac{1-n}{2}})$ on $u(x)$, it is enough to assume that there is not a positive measure set of directions $A\subset S^*_xM$ so that for each element $\xi\in A$ there is a sequence of geodesics starting at $x$ in the direction of $\xi$ with length tending to infinity along which $x$ is maximally conjugate to itself.
\end{remark}
Before stating our next theorem, we recall that if $(M,g)$ has strictly negative sectional curvature, then it also has Anosov geodesic flow~\cite{Anosov}. Also, both Anosov geodesic flow and non-positive sectional curvature imply that $(M,g)$ has no conjugate points~\cite{Kling}.
When $(M,g)$ is non-positively curved (indeed when it has no focal points), if every geodesic encounters a point of negative curvature, then $(M,g)$ has Anosov geodesic flow~\cite[Corollary 3.4]{Eberlein73}. In particular, there are manifolds for which the curvature is positive in some places {while} the geodesic flow is Anosov. However, even in non-positive curvature some geodesics may fail to encounter negative curvature, and thus the geodesic flow may not be Anosov. To study this situation, we introduce an integrated curvature condition inspired by that in~\cite{SXZ}: there are $T>0$ and $c_{_{\!K}}>0$ so that for {every geodesic $\gamma$} of length $t\geq T$ in the universal cover $(\tilde{M}, \tilde g)$ {of $(M,g)$}, {and for all} $0\leq s\leq 1$,
\begin{equation}
\label{e:intCurve}
\int_{\Omega_{\gamma}(s)}\!\!Kdv_{\tilde g}\leq -c_{_{\!K}}{e^{-\frac{1}{c_{_{\!K}}\sqrt{s}}}}
\end{equation}
where
$
{\Omega_{\gamma}(s)}:=\{x\in \tilde{M}:\; d(x,\gamma)\leq s\},
$
{and $K$ is the scalar curvature for {$(\tilde M,\tilde g)$}.}
{Note that, unlike the {curvature} conditions in~\cite{SXZ}, {the assumption in} ~\eqref{e:intCurve} allows the curvature to vanish in open sets so long as no geodesic lies entirely in such an open set. Moreover, it allows the curvature to vanish to infinite order at the geodesic.}
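The remark that~\eqref{e:intCurve} permits the curvature to vanish to infinite order at the geodesic rests on the fact that $e^{-\frac{1}{c_{_{\!K}}\sqrt{s}}}$ decays faster than any fixed power of $s$ as $s\to 0^+$. A minimal numerical sketch of this decay (the value $c_{_{\!K}}=1$ is an arbitrary sample, not taken from the text):

```python
import math

def bound(s, c_K=1.0):
    # Size of the allowed integrated-curvature bound exp(-1/(c_K*sqrt(s))).
    return math.exp(-1.0 / (c_K * math.sqrt(s)))

# The bound decays faster than any fixed power of s as s -> 0+.
for s in (1e-1, 1e-2, 1e-4):
    print(s, bound(s))
```

For instance, already at $s=10^{-4}$ the value $e^{-100}$ is smaller than $s^{10}=10^{-40}$.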
\begin{theorem}\label{t:surfaces}
Let $(M,g)$ be a smooth, compact Riemannian surface. Let $H\subset M$ be a closed embedded {curve or a point}. Suppose one of the following assumptions holds{:}
\begin{enumerate}[label=\textbf{\Alph*.},ref=\ref{t:surfaces}.\Alph*]
\item \label{a3}$(M,g)$ has Anosov geodesic flow. \smallskip
\item \label{a7} $(M,g)$ {has non-positive curvature and satisfies the integrated curvature condition~\eqref{e:intCurve}, and $H$ is a geodesic.}
\end{enumerate}
Then, there exists $C>0$ so that for all $w\in C_c^\infty(H)$ the following holds. There is $h_0>0$ so that for $0<h<h_0$ and $u\in \mc{D}'(M)$
\begin{equation}
\label{e:subEstSurface}
\Big|\int_Hwud\sigma_H\Big|\leq Ch^{\frac{1-k}{2}}\|w\|_{\infty}\Big(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}+\frac{\sqrt{\log h^{-1}}}{h}\|(-h^2\Delta_g-1)u\|_{\Hs{\frac{k-3}{2}}}\Big).
\end{equation}
\end{theorem}
\begin{remark}
In fact, the proof of Theorem~\ref{a7} shows that it is enough to have~\eqref{e:intCurve} for every geodesic $\gamma$ normal to $H$.
\end{remark}
For manifolds of arbitrary dimensions, we also obtain quantitative improvements for averages in a variety of situations.
\begin{theorem}\label{T:applications}
Let $(M,g)$ be a smooth, compact Riemannian manifold of dimension $n$ and $H\subset M$ be a closed embedded submanifold of codimension $k$. Suppose one of the following assumptions holds{:}
\begin{enumerate}[label=\textbf{\Alph*.},ref=\ref{T:applications}.\Alph*]
\item \label{a1} $(M,g)$ has no conjugate points and $H$ has codimension $k>\frac{n +1}{2}$. \smallskip
\item \label{a2}$(M,g)$ has no conjugate points and $H$ is a geodesic sphere.\smallskip
\item \label{a6}$(M,g)$ is non-positively curved and has Anosov geodesic flow, and $H$ has codimension $k>1$. \smallskip
\item \label{a4}$(M,g)$ is non-positively curved and has Anosov geodesic flow, and $H$ is totally geodesic. \smallskip
\item \label{a5} $(M,g)$ has {Anosov geodesic flow} and $H$ is a subset of $M$ that lifts to a horosphere in the universal cover.
\end{enumerate}
Then, there exists $C>0$ so that for all $w\in C_c^\infty(H)$ the following holds. There is $h_0>0$ so that for $0<h<h_0$ and $u\in \mc{D}'(M)$
\begin{equation}
\label{e:subEst}
\Big|\int_Hwud\sigma_H\Big|\leq Ch^{\frac{1-k}{2}}\|w\|_{\infty}\Big(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}+\frac{\sqrt{\log h^{-1}}}{h}\|(-h^2\Delta_g-1)u\|_{\Hs{\frac{k-3}{2}}}\Big).
\end{equation}
\end{theorem}
We note here that {Theorem} {\ref{a7}} includes the bounds of~\cite{SXZ} as a special case. The bounds in~\cite{Wym2,Wym18} are special cases of {Theorem} {\ref{a3}}, {Theorem} {\ref{a6}}, and the results of Theorem~\ref{T:tangentSpace} {below (see the discussion that follows Theorem~\ref{T:tangentSpace}}). {We also note {that} for any smooth compact embedded submanifold, $H_0 \subset M$, satisfying one of the conditions in Theorem~\ref{T:applications}, there is a neighborhood $U$ of $H_0$, in the $C^\infty$ topology, so that the constants $C$ and $h_0$ in Theorem
\ref{T:applications} are uniform over $H\in U$ and $w$ taken in a bounded subset of {$C_c^\infty(H)$}.} In particular, the sup-norm bounds from~\cite{Berard77,Bo16,Randol} are a special case of Theorem~\ref{a1}. Similar to the $o(h^{\frac{1-k}{2}})$ bounds in~\cite{CG17}, we conjecture that~\eqref{e:subEst} holds whenever $(M,g)$ is a manifold with Anosov geodesic flow, regardless of the geometry of $H$.
{Geodesic beam techniques can also be used to study $L^p$ norms of eigenfunctions~\cite{CG18b} and to give quantitatively improved remainder estimates for the kernel of the spectral projector and for Kuznecov sum type formulae~\cite{CG18c}. The authors are currently studying how to give polynomial improvements for $L^\infty$ norms on certain manifolds with integrable geodesic flow. To our knowledge, the only other cases where polynomial improvements are available are that of~\cite{I-S} for Hecke--Maass forms on arithmetic surfaces and that of the flat torus~\cite{B93,Gros}. }
\subsection{Results on geodesic beams}
The main estimate from~\cite{CG18d} gives control on eigenfunction averages in terms of microlocal data. We now review the necessary notation to state that result.
Let $p(x,\xi)=|\xi|_{g(x)}$ be defined on $T^*M$
and {consider the geodesic flow} on $T^*M$,
\begin{equation}\label{e:varphi}
\varphi_t:=\exp(tH_p).
\end{equation}
Next, fix a hypersurface
\begin{equation}
\label{e:Hsig}
\mc{H}_{\Sigma}\subset T^*\!M \text{ transverse to }H_p\;\text{ with }S\!N^*\!H\subset \mc{H}_{\Sigma},
\end{equation}
define $\Psi:\mathbb{R}\times \mc{H}_{\Sigma}\to T^*\!M $ by $\Psi(t,q)=\varphi_t(q)$, and let
\begin{equation}\label{e:Tinj}
\tau_{_{\!\text{inj}H}}:=\sup\{{\tau \leq 1}:\; \Psi|_{(-\tau,\tau)\times\mc{H}_{\Sigma}}\text{ is injective}\}.
\end{equation}
Given $A \subset T^*\!M $ define
$$\Lambda_{_{\!A}}^\tau:=\bigcup_{|t|\leq \tau}\varphi_t(A).$$
For $r>0$ and $A\subset \SNH$ we define
\begin{equation}\label{e:tube}
{\Lambda_{_{\!A}}^\tau(r):=\Lambda_{A_r}^{\tau+r},\qquad A_r:=\{\rho\in \mc{H}_{\Sigma}:d(\rho,A)<r\},}
\end{equation}
where $d$ denotes the distance induced by the Sasaki metric on $TM$ (see e.g. Appendix~\ref{s:jacobi} or~\cite[Chapter 9]{BlairSasaki} for an explanation of the Sasaki metric).
Throughout the paper we adopt the notation
\begin{equation}\label{e:KH}
{K}_{_{\!H}}>0
\end{equation}
for a constant so that all sectional curvatures of $H$ are bounded by $K_{_{\!H}}$ {and the second fundamental form of $H$ is bounded by $K_{_{\!H}}$}. Note that when $H$ is a point, we may take $K_{_{\!H}}$ to be arbitrarily close to $0$.
{We next recall~\cite[{Theorem 11}]{CG18d} which controls eigenfunction averages by covers of $\Lambda^\tau_{S\!N^*\!H}(h^\delta)$ by ``good" tubes that are non self-looping and ``bad" tubes whose {number} is controlled. In fact, Theorems~\ref{t:noConj2},~\ref{t:noConj1}, and~\ref{T:applications} are reduced to a purely dynamical argument together with an application of Theorem~\ref{t:coverToEstimate}.}
For $0<t_0<T_0$, we say that $A\subset T^*\!M$ is \emph{$[t_0,T_0]$ non-self looping} if
\begin{equation}\label{e:nonsl}
\bigcup_{t=t_0}^{T_0}\varphi_t(A)\cap A=\emptyset\qquad \text{ or }\qquad \bigcup_{t=-T_0}^{-t_0}\varphi_t(A)\cap A=\emptyset.
\end{equation}
We define the \emph{maximal expansion rate }
\begin{equation}\label{e:Lmax}
\Lambda_{\max}:=\limsup_{|t|\to \infty}\frac{1}{|t|}{\log} \sup_{{S^*\!M}}\|d\varphi_t(x,\xi)\|.
\end{equation}
Then, the Ehrenfest time at frequency $h^{-1}$ is
\begin{equation}\label{e:Tehr}
T_e(h):=\frac{\log h^{-1}}{2\Lambda_{\max}}.
\end{equation}
Note that $\Lambda_{\max}\in[0,\infty)$ and if $\Lambda_{\max}=0$, we may replace it by an arbitrarily small positive constant.
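For orientation, the Ehrenfest time grows only logarithmically in the frequency $h^{-1}$; a minimal numerical sketch of~\eqref{e:Tehr} (the value $\Lambda_{\max}=1$ is an arbitrary sample, not taken from the text):

```python
import math

def ehrenfest_time(h, lambda_max):
    # T_e(h) = log(1/h) / (2 * Lambda_max)
    return math.log(1.0 / h) / (2.0 * lambda_max)

for h in (1e-2, 1e-4, 1e-8):
    print(h, ehrenfest_time(h, 1.0))
```

In particular, squaring the frequency (replacing $h$ by $h^2$) merely doubles $T_e(h)$.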
\begin{definition} \label{d: cover} Let $A\subset \SNH$, $r>0$, {$\tau>0$}, and $\{\rho_j\}_{j=1}^{N_r} \subset A$. We say that the collection of tubes $\{\Lambda_{\rho_j}^\tau(r)\}_{j=1}^{N_r}$ is a \emph{$(\tau, r)$-cover} of a set $A\subset \SNH$ provided
$$\Lambda_A^\tau(\tfrac{1}{2}r) \subset\bigcup_{j=1}^{N_r}\Lambda_{\rho_j}^{\tau}(r).$$
\end{definition}
{
{It will often be useful to have a notion of $(\tau,r)$ cover of $S\!N^*\!H$ without too many overlapping tubes. To that end, we make the following definition.}
\begin{definition}
\label{d:good cover}
Let $A\subset \SNH$, $r>0$, $\mathfrak{D}>0$, and $\{\rho_j\}_{j=1}^{N_r} \subset A$. We say that the collection of tubes $\{\Lambda_{\rho_j}^\tau(r)\}_{j=1}^{N_r}$ is a \emph{$(\mathfrak{D},\tau, r)$-good cover} of $A$ provided that it is a $(\tau,r)$-cover of $A$ and there exists a partition $\{\mathcal{J}_\ell\}_{\ell=1}^{\mathfrak{D}}$ of $\{1, \dots, N_r\}$ so that for every $\ell\in \{1, \dots, \mathfrak{D}\}$
\[
\Lambda_{\rho_j}^\tau (3r)\cap \Lambda_{\rho_i}^\tau(3r)=\emptyset\qquad \text{for all }\; i,j\in \mathcal{J}_\ell, \quad i\neq j.
\]
\end{definition}
\noindent We recall that~\cite[Proposition 3.3]{CG18d} shows the existence of $\mathfrak{D}_n>0$, depending only on $n$, so that for all sufficiently small $(\tau,r)$ there exist $(\mathfrak{D}_n,\tau, r)$-good covers of $S\!N^*\!H$. We will use this fact freely throughout this article.}
{For convenience we state~\cite[{Theorem 11}]{CG18d}.}
The theorem involves many parameters. These provide flexibility when applying the theorem but make the statement involved. We refer the reader to the comments after the statement of the theorem for a heuristic explanation of its contents.
\begin{theorem}[{\cite[{Theorem 11}]{CG18d}}]
\label{t:coverToEstimate}
{Let $H\subset M$ be a submanifold of codimension $k$.}
Let $0<\delta<\frac{1}{2}$, $N>0$ and ${\{w_h\}_h}$ with {$w_h\in S_\delta \cap C_c^\infty( H)$}. There exist positive constants $\tau_0=\tau_0(M,g,\tau_{_{\!\text{inj}H}} ,H)$, {$R_0=R_0(M,g, K_H,{k,\tau_{_{\!\text{inj}H}} })$,} $C_{n,k}$ depending only on $n$ and $k$, and $h_0=h_0(M,g,{\delta, {H}})$, and for each $0<\tau\leq \tau_0$ there exist $C=C(M,g,\tau,\delta,H)>0$ and $C_{_{\!N}}=C_{_{\!N}}(M,g, N,\tau,\delta,{\{w_h\}_h},H)>0$, so that the following holds.
Let $8h^\delta\leq R(h)\leq R_0$,{ $0\leq\alpha< 1-2{\limsup_{h\to 0}\frac{\log R(h)}{\log h}}$,} and suppose $\{\Lambda_{_{\rho_j}}^\tau(R(h))\}_{j=1}^{N_h}$ is a {$(\mathfrak{D},\tau, R(h))$ cover of $S\!N^*\!H$ for some $\mathfrak{D}>0$}.
In addition, suppose there exist $\mc{B}\subset \{1,\dots, N_h\}$ and a finite collection $\{\mc{G}_\ell\}_{\ell \in \mathcal L} \subset \{1,\dots, N_h\}$ with
$$
\mathcal J_h(w_h)\;\subset\; \mc{B} \cup \bigcup_{\ell \in \mathcal L}\mc{G}_\ell,
$$
where
\begin{equation}\label{e:i}
\mathcal J_h(w_h):=\{j:\; \Lambda_{_{\!\rho_j}}^\tau(2R(h))\cap \pi^{-1}(\text{\ensuremath{\supp}} w_h)\neq \emptyset\},
\end{equation}
and so that for every $\ell \in \mathcal L$ there exist $t_\ell=t_\ell(h)>0$ and ${T_\ell=T_\ell(h)}\leq {2} \alpha T_e(h)$ so that
$$
\bigcup_{j\in \mc{G}_\ell}\Lambda_{_{\rho_j}}^\tau(R(h))\;\;\text{ is }\;\;[t_\ell,T_{\ell}]\text{ non-self looping for }\varphi_t:=\exp(tH_{|\xi|_g}).
$$
Then, for $u\in \mc{D}'(M)$ and $0<h<h_0$,
\begin{align*}
h^{\frac{k-1}{2}}\Big|\int_{H} w_h u\, d\sigma_{H}\Big|
&\leq\frac{C_{n,k}{\mathfrak{D}}\|w_h\|_{_{\!\infty}}R(h)^{\frac{n-1}{2}}}{\tau^{\frac{1}{2}}}
\Bigg(|\mc{B}|^{\frac{1}{2}}+\sum_{\ell \in \mathcal L }\frac{(|\mc{G}_\ell|t_\ell)^{\frac{1}{2}}}{T^{\frac{1}{2}}_\ell}\Bigg)\|u\|_{{_{\!L^2(M)}}} \\
&+\frac{C_{n,k}{\mathfrak{D}}\|w_h\|_{_{\!\infty}}R(h)^{\frac{n-1}{2}}}{\tau^{\frac{1}{2}}} \sum_{\ell \in \mathcal L}\frac{(|\mc{G}_\ell|t_\ell T_\ell)^{\frac{1}{2}}}{h}\;\|(-h^2\Delta_g-1)u\|_{{_{\!L^2(M)}}}\! \\
&+Ch^{-1}{\|w_h\|_\infty}\|(-h^2\Delta_g-1)u\|_{\Hs{\frac{k-3}{2}}}\\
&+C_{_{\!N}}h^N\big(\|u\|_{{_{\!L^2(M)}}}+{\|(-h^2\Delta_g-1)u\|_{\Hs{\frac{k-3}{2}}}}\big).
\end{align*}
Here, the constant $C_{_{\!N}}$ depends on $\{w_h\}_h$ only through finitely many $S_\delta$ seminorms of $w_h$. {The constants ${\tau_0},C,C_{_{\!N}},h_0$ depend on $H$ only through finitely many derivatives of its curvature and second fundamental form.}
\end{theorem}
{
\begin{remark}
The estimates in Theorem \ref{t:coverToEstimate} are uniform in $H$. For a precise description see \cite[{Theorem 11}]{CG18d}. In particular, when $H=\{x\}$ and $w=1$, then $k=0$ and $|\int_{H} w_h u\, d\sigma_{H}|$ is replaced with $\|u\|_{L^\infty(B(x, h^\delta))}$.
\end{remark}
}
\medskip
Theorem~\ref{t:coverToEstimate} reduces estimates on averages to the construction of covers of $\Lambda^\tau_{_{\!\SNH}}(h^\delta)$ by sets with appropriate structure. To understand the statement, we first ignore the extra structure requirement and assume $(-h^2\Delta_g-1)u=0$. With these simplifications, and ignoring {an $h^\infty\|u\|_{{_{\!L^2(M)}}}$ term,} if there is a cover of $\Lambda^\tau_{_{\!\SNH}}(h^\delta)$ by ``good'' sets $\{G_\ell(h)\}_{\ell\in L}$ and a ``bad'' set $B(h)$, with each $G_\ell$ being $[t_\ell(h),T_\ell(h)]$ non-self looping, the estimate reads
\begin{multline*}
h^{\frac{k-1}{2}}\Big|\int_H w ud\sigma_H\Big|
\leq \frac{C_{n,k}\|w\|_{_{\!\infty}}}{\tau^{\frac{1}{2}}}
\left(
\![\sigma_{_{\!\!S\!N^*\!H}}(B)]^{\frac{1}{2}}
+\sum_{\ell \in \mathcal L }\frac{[\sigma_{_{\!\!S\!N^*\!H}}(G_\ell)]^{\frac{1}{2}}t_\ell^{\frac{1}{2}}}{T^{\frac{1}{2}}_\ell(h)}\right)\!\!\|u\|_{{_{\!L^2(M)}}},
\end{multline*}
where $\sigma_{_{\!\!S\!N^*\!H}}$ denotes the volume induced on $\SNH$ by the Sasaki metric on $T^*\!M$ and for $A\subset T^*\!M$, we write $\sigma_{_{\!\!S\!N^*\!H}}(A)=\sigma_{_{\!\!S\!N^*\!H}}(A\cap \SNH)$. The additional structure required on the sets $G_\ell$ and $B$ is that they consist of unions of tubes $\Lambda_{{\rho_i}}^\tau(h^\delta)$ for some $0\leq \delta<\frac{1}{2}$ and that $T_\ell(h)<2(1-2\delta)T_e(h)$.
With this in mind, Theorem~\ref{t:coverToEstimate} should be thought of as giving a non-recurrence condition on $\SNH$ which guarantees quantitative improvements over~\eqref{e:zelBound}. In particular, taking $t_\ell$, $T_\ell$, $G_\ell$, and $B$ to be $h$-independent can be used to recover the dynamical consequences in~\cite{CG17,Gdefect} (see~\cite{GJEDP}).
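To see heuristically how a logarithmic gain arises from the simplified estimate above (this bookkeeping is illustrative only and not part of the formal argument), take $B=\emptyset$ and a single good family $G_1$ with $t_1\leq C_0$ uniformly in $h$ and $T_1(h)=c\log h^{-1}$, where $c>0$ is small enough that $T_1(h)\leq 2\alpha T_e(h)$. The right-hand side then becomes
\begin{equation*}
\frac{C_{n,k}\|w\|_{_{\!\infty}}}{\tau^{\frac{1}{2}}}\,
\frac{[\sigma_{_{\!\!S\!N^*\!H}}(G_1)]^{\frac{1}{2}}\,C_0^{\frac{1}{2}}}{(c\log h^{-1})^{\frac{1}{2}}}\,\|u\|_{{_{\!L^2(M)}}}
=O\Big(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}\Big),
\end{equation*}
which is the type of improvement appearing in Theorem~\ref{T:tangentSpace}.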
\begin{remark}
Note that it is possible to use Theorem~\ref{t:coverToEstimate} to obtain quantitative estimates which are strictly between $O(h^{\frac{1-k}{2}})$ and $O(h^{\frac{1-k}{2}}/\sqrt{\log h^{-1}})$. For example, this happens if $r_t$ is replaced by e.g. $a^{-1}e^{-a t^2}$ in~\eqref{e:dist}. We expect that the construction in~\cite{BuPa} can be used to generate examples where this type of behavior is optimal.
\end{remark}
\subsection{Manifolds with {no focal points} or Anosov geodesic flow}
{In parts {\ref{a3}}, {\ref{a6}}, {\ref{a4}} and {\ref{a5}} of Theorem \ref{T:applications} we assume either that $(M,g)$ has no focal points or that it has Anosov geodesic flow.} We show that these structures allow us to construct non-self looping covers away from the points $\mc{S}_H \subset S\!N^*\!H$ at which the tangent space to $S\!N^*\!H$ splits into a sum of stable and unstable directions. To make this precise we introduce some notation.
If $(M,g)$ has no conjugate points, then for any $\rho \in S^*\!M$ there exist a stable subspace $E_{+}(\rho)\subset T_{\rho}S^*\!M$ and an unstable subspace $E_{-}(\rho)\subset T_\rho S^*\!M$ so that
\[
d\varphi_t :E_\pm(\rho) \to E_\pm(\varphi_t(\rho)),
\]
and
\[
|d\varphi_t({\bf{v}})|\leq C|{\bf{v}}| \;\; \text{for}\; {\bf{v}}\in E_\pm\;\; \text{as}\; t\to \pm\infty.
\]
Moreover, these spaces have the property that
$$
T_\rho S^*\!M=(E_+(\rho)+E_-(\rho))\oplus \mathbb{R} H_p(\rho).
$$
{We recall that a manifold has no focal points if for every geodesic $\gamma$, and every Jacobi field $Y(t)$ along $\gamma$ {with $Y(0)=0$ and $Y'(0)\neq 0$}, $Y(t)$ satisfies $\tfrac{d}{dt}\| Y(t)\|^2>0$ for $t>0$, where $\|\cdot \|$ denotes the norm with respect to the Riemannian metric. {In particular, if $(M,g)$ has non-positive curvature, then it has no focal points (see e.g. \cite[page 440]{Eberlein73})}. It is also known that if $(M,g)$ has no focal points then {$(M,g)$ has no conjugate points and that}
$E_\pm(\rho)$ vary continuously with $\rho$. (See for example \cite[Proposition 2.13 {and remarks thereafter}]{Eberlein73}.) {See e.g.~\cite{Ruggiero,Eberlein73b,Pesin} for further discussions of manifolds without focal points. }}
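For orientation, in the trivial flat case the no-focal-points condition can be checked directly: every Jacobi field with $Y(0)=0$ has the form $Y(t)=t\,Y'(0)$, so
\begin{equation*}
\frac{d}{dt}\|Y(t)\|^2=2t\,\|Y'(0)\|^2>0\qquad \text{for } t>0,\; Y'(0)\neq 0.
\end{equation*}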
In what follows we write
\begin{equation}\label{e:N pm}
N_{\pm}(\rho):=T_{\rho}(S\!N^*\!H)\cap E_\pm(\rho).
\end{equation}
We define {the \emph{mixed} and \emph{split} subsets of $S\!N^*\!H$ respectively by}
\begin{align}
\mc{M}_H&:=\Big\{\rho \in S\!N^*\!H:\:\, N_-(\rho)\neq \{0\}\text{ and }N_+(\rho)\neq \{0\}\Big\}, \label{e:MH}\\
\mc{S}_H&:= \Big\{\rho\in S\!N^*\!H:\;\, T_\rho (S\!N^*\!H)=N_-(\rho)+N_+(\rho)\Big\}.\label{e:SH}
\end{align}
Then we write
\begin{equation}\label{e:AH}
\mc{A}_H:={\mc{M}_H\cap \mc{S}_H},
\end{equation}
where we will use $\mc{A}_H$ when considering manifolds with Anosov geodesic flow and {$\mc{S}_H$} when considering those with no focal points.
Next, we recall that any manifold with no focal points in which every geodesic encounters a point of negative curvature has Anosov geodesic flow \cite[Corollary 3.4]{Eberlein73}. In particular, the class of manifolds with Anosov geodesic flows includes those with negative curvature. We also recall that a manifold with Anosov geodesic flow has no conjugate points~\cite{Kling} and that for all $\rho \in S^*\!M$
\[
T_\rho S^*\!M=E_+(\rho)\oplus E_-(\rho)\oplus \mathbb{R} H_p(\rho),
\]
where $E_+,E_-$ are the stable and unstable directions as before. (For other characterizations of manifolds with Anosov geodesic flow, see~\cite[Theorem 3.2]{Eberlein73},~\cite{Eberlein73b}.) {An equivalent definition of Anosov geodesic flow~\cite{Anosov} is that there exist $E_{\pm}(\rho)\subset T_\rho S^*\!M$ and} $\mathbf{B}>0$ so that for all $\rho\in S^*\!M$,
\begin{equation}\label{e:Bdef}
|d\varphi_t({\bf{v}})|\leq \mathbf{B} e^{\mp \frac{t}{\mathbf{B}}}|{\bf{v}}|,\qquad{ {\bf{v}}\in E_\pm(\rho),\quad t\to \pm\infty.}
\end{equation}
{In addition, having Anosov geodesic flow implies that} the spaces $E_\pm(\rho)$ are H\"older continuous in $\rho$~\cite[Theorem 19.1.6]{KatokHasselblatt}.
{In what follows, $\pi$ continues to be the canonical projection $\pi:S\!N^*\!H \to H$.}
\begin{theorem}\label{T:tangentSpace}
Let $H\subset M$ be a closed embedded submanifold of codimension $k$. Suppose that $A\subset H$ and one of the following two conditions holds:
\begin{itemize}
\item $(M,g)$ has {no focal points} and~$\pi^{-1}(A)\cap{\mc{S}}_H=\emptyset$.
\item $(M,g)$ has Anosov geodesic flow and $\pi^{-1}(A)\cap \mc{A}_H=\emptyset$.
\end{itemize}
Then, there exists $C>0$ so that for all $w\in C_c^\infty(H)$ with $\text{\ensuremath{\supp}} w\subset A$ the following holds. There exists $h_0>0$ so that for $0<h<h_0$ and $u\in \mc{D}'(M)$
\begin{equation*}
\Big|\int_Hwud\sigma_H\Big|\leq C h^{\frac{1-k}{2}}\|w\|_\infty\Bigg(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}+\frac{\sqrt{\log h^{-1}}}{h}\|(-h^2\Delta_g-1)u\|_{\Hs{\frac{k-3}{2}}}\Bigg).
\end{equation*}
\end{theorem}
\noindent {Theorem~\ref{T:tangentSpace} also comes with some uniformity over the constants $(C,h_0)$. In particular, for $(A_0,H_0)$ satisfying one of the conditions in Theorem~\ref{T:tangentSpace}, there is a neighborhood {$U$} of $(A_0,H_0)$ in the $C^\infty$ topology so that the constants $(C,h_0)$ are uniform for $(A,H)\in U$ and $w$ in a bounded subset of $C_c^\infty$.}
We note that the conclusion of Theorem \ref{T:tangentSpace} holds when $(M,g)$ is a surface with Anosov geodesic flow, since in this case $\mc{A}_H=\emptyset$ {regardless of $H$}. To see this note that if $\dim M=2$, then $\mc{S}_H=\mc{A}_H$ since $\dim T_{\rho}(S\!N^*\!H)=1$. {Indeed}, it is not possible to have both $N_{+}(\rho)\neq\{0\}$ and $N_{-}(\rho)\neq\{0\}$ {unless $N_+(\rho)=N_-(\rho)=T_\rho(S\!N^*\!H)$} and hence $\mc{S}_H\subset \mc{A}_H$. Moreover, in the Anosov case, since $E_+(\rho)\cap E_-(\rho)=\{0\}$, $\mc{A}_H=\emptyset.$
In~\cite{Wym,Wym2} Wyman works with $(M,g)$ non-positively curved (and hence having no focal points), $\dim M=2$ and $H=\{\gamma(s)\}$ a curve. He then imposes the condition that for all $s$ the curvature of $\gamma$, $\kappa_\gamma(s)$, avoids two special values $\mathbf{k}_\pm(\gamma'(s))$ determined by the tangent vector to $\gamma(s)$. He shows that under this condition, when $\phi_h$ is an eigenfunction of the Laplacian,
\[
\int_\gamma \phi_hd\sigma_\gamma=O\Big(\frac{1}{\sqrt{\log h^{-1}}}\Big).
\]
We note that if $\kappa_\gamma(s)=\mathbf{k}_{\pm}(\gamma'(s))$, then the lift of $\gamma$ to the universal cover of $M$ is tangent to a stable or unstable horosphere at $\gamma(s)$, and $\kappa_\gamma(s)$ is equal to the curvature of that horosphere. Since this implies that $T_{(\gamma(s),\gamma'(s))}SN^*\gamma$ is stable or unstable, the condition there is that $\mc{S}_\gamma=\emptyset.$ Thus, the condition $\mc{S}_H=\emptyset$ is the generalization to higher codimensions and more general geometries of that in~\cite{Wym,Wym2}.
We also point out that through a small improvement in a dynamical argument, we have replaced the set
$$
\mc{N}_H:=\mc{S}_H\cup \mc{M}_H
$$
in~\cite[Theorem 8]{CG17} with $\mc{S}_H$ when considering manifolds without focal points.
\subsection{Outline of the paper}
Sections~\ref{s:controlLooping ncp} and~\ref{s:controlLooping anosov} build technical tools for constructing non-self looping covers. Then, Sections~\ref{s:dynNoConj}, and~\ref{s:Anosov} apply these tools to build non-self looping covers under certain geometric assumptions. In particular, Theorems~\ref{t:noConj2} and~\ref{t:noConj1} are proved in Section~\ref{s:dynNoConj}. In Section~\ref{s:Anosov}, we prove Theorem~\ref{T:tangentSpace} and the remaining cases in Theorem~\ref{T:applications}.
\subsection{Index of Notation}\ \smallskip
In general we denote points in $T^*\!M$ by $\rho$, and vectors in $T_\rho(T^*\!M)$ in boldface (e.g. $\mathbf{v} \in T_\rho(T^*\!M)$). Sets of indices are denoted in calligraphic font (e.g. $\mathcal I$). When position and momentum need to be distinguished we write $\rho=(x,\xi)$ for $x\in M$ and $\xi \in T_x^*M$. Next, we list symbols that are used repeatedly in the text along with the location where they are first defined.
\begin{multicols}{3}
$\varphi_t $ \tabto{1.6cm} \eqref{e:varphi}\\
$\mc{H}_{\Sigma}$ \tabto{1.6cm} \eqref{e:Hsig}\\
$\tau_{_{\!\text{inj}H}}$ \tabto{1.6cm} \eqref{e:Tinj}\\
$\Lambda_{_{\!A}}^\tau(r)$ \tabto{1.6cm} \eqref{e:tube}\\
${K}_{_{\!H}}$ \tabto{1.6cm} \eqref{e:KH}\\
$\mathbf{B}$ \tabto{1.6cm} \eqref{e:Bdef}\\
$\sob{m}$ \tabto{1.6cm} \eqref{e:sobolev}
\vfill\null
\columnbreak
\noindent $\Lambda_{\max}$ \tabto{1.6cm} \eqref{e:Lmax}\\
$T_e(h)$ \tabto{1.6cm} \eqref{e:Tehr}\\
$N_{\pm}(\rho)$ \tabto{1.6cm} \eqref{e:N pm}\\
$\mc{M}_H$ \tabto{1.6cm} \eqref{e:MH}\\
$\mc{S}_H$ \tabto{1.6cm} \eqref{e:SH}\\
$\mc{A}_H$ \tabto{1.6cm} \eqref{e:AH}
\vfill\null
\columnbreak
\noindent $F$, $\delta_F$ \tabto{1.6cm} \eqref{e:defFunction} \\
$\psi$ \tabto{1.6cm} \eqref{e:psi} \\
$J_t$ \tabto{1.6cm} \eqref{e:Jdef}\\
$\mathbf{D}$ \tabto{1.6cm} \eqref{e:D} \\
$C_{\varphi}$ \tabto{1.6cm} \eqref{e:cphi}\\
$\Theta_{\pm}$ \tabto{1.6cm} \eqref{e:Theta}
\end{multicols}
\medskip
\noindent {\sc Acknowledgements.} Thanks to Pat Eberlein, John Toth, Andras Vasy, and Maciej Zworski for many helpful conversations and comments on the manuscript.
J.G. is grateful to the National Science Foundation for support under the Mathematical Sciences Postdoctoral Research Fellowship DMS-1502661. {Y.C. is grateful to the Alfred P. Sloan Foundation. }
\addcontentsline{toc}{section}{\quad\;\,\bf{Dynamical Analysis}}
\section{Partial invertibility of $d\varphi_t|_{T\SNH}$ and looping sets}
\label{s:controlLooping ncp}
\renewcommand{\SNH}{\Sigma_{_{\!H,p}}}
\renewcommand{\Lambda^\tau_{_{\!\SNH}}}{\Lambda^\tau_{\!\Sigma_{\!H,p}}}
The aim of this section is to study the set of geodesic loops in $S\!N^*\!H$ under conditions on the structure of the set of conjugate points of $(M,g)$. However, we work in the general setting in which the Hamiltonian flow is not necessarily the geodesic one. In particular, let $p\in S^m$ be real valued with
$$
|p|\geq |\xi|^m/C\qquad \text{for}\;\; |\xi|\geq C,
$$
and define $\varphi_t:=\exp(tH_p)$ and $\Sigma_{_{\!H,p}}:=\{p=0\}\cap N^*\!H$ so that in the case $p=|\xi|_g-1$, $\Sigma_{_{\!H,p}}=S\!N^*\!H$. Also, define $r_H:T^*\!M\to \mathbb{R}$ by $r_H(\rho)=d(\pi(\rho),H)$, and let
$$
\mathfrak{I}_{_{\!H}}:=\inf_{\rho\in \SNH} \lim_{t\to 0^+}|H_p\, r_H(\varphi_t(\rho))|.
$$
We now fix once and for all a defining function $F:T^*\!M \to \mathbb{R}^{n+1}$ for $\SNH$ and $\delta_F>0$ so that:\\ \ \\
\qquad For $q \in T^*\!M $ with $d(q,\SNH)<\delta_F$,
\begin{align}
\label{e:defFunction}
&\bullet\; \SNH=F^{-1}(0) \notag\\
&\bullet\; \tfrac{1}{2}d(q,\SNH)\leq |F(q)|\leq 2d(q,\SNH), \notag\\
& \bullet\; dF(q)\text{ has a {right} inverse }{R}_{_{\!F}}(q)\text{ with }\|{R}_{_{\!F}}(q)\|\leq 2,\\
&\bullet\; \max_{|\alpha|\leq 2}(|\partial^\alpha F(q)|)\leq 2. \notag
\end{align}
Define also $\psi:\mathbb{R}\times T^*\!M \to \mathbb{R}^{n+1}$
\begin{equation}\label{e:psi}
\psi(t,\rho)=F\circ \varphi_t(\rho).
\end{equation}
Working under the assumption that the set of conjugate points can be controlled will allow us to say that if $ \varphi_{t_0}(\rho_0)$ is exponentially close to $\SNH=S\!N^*\!H$ for some time $t_0$ and some $\rho_0 \in S\!N^*\!H$, then there exists a tangent vector $ {\bf w} \in T_{\rho_0}S\!N^*\!H$ for which the restriction
\begin{equation}\label{e:li}
d\psi_{(t_0,\rho_0)}: {\mathbb R} \partial_t \times \mathbb{R} {\bf w}\to T_{\psi(t_0,\rho_0)}\mathbb{R}^{n+1}
\end{equation}
has a left inverse whose norm we control. This is proved in Lemma \ref{l:prelimNoConj} and is the cornerstone in the proof of Theorems \ref{t:noConj1} and~\ref{t:noConj2}. Note, however, that asking \eqref{e:li} to hold is a very general condition that may not need the control of the structure of the set of conjugate points. We will use this in Section~\ref{s:Anosov}.
The goal of this section is to prove Proposition \ref{p:ballCover} below,
whose purpose is to control the number of tubes that emanate from a subset of $\SNH$
and loop back to $\SNH$. This is done under the assumption that the restriction of $d\psi_{(t_0,\rho_0)}$ in \eqref{e:li} has a left inverse. To state this proposition we first need a lemma that describes a convenient system of coordinates near $\SNH$. The statement of this lemma is illustrated in Figure \ref{fig:2lemmas}.
Observe that by~\cite[(C.3)]{DyGu14} for any $\Lambda>\Lambda_{\max}$ and $\alpha$ multiindex, there exists $ C_{_{\!M,p,\alpha}}>0$ depending only on $M,p, \alpha$ so that
\begin{equation}
\label{e:derFlow}
|\partial^\alpha \varphi_t|\leq C_{_{\!M,p,\alpha}} e^{|\alpha| \Lambda t}.
\end{equation}
\begin{lemma}[Coordinates near $\SNH$]
\label{l:tanSpace}
There exists $\tau_1=\tau_1(M,p,\mathfrak{I}_{_{\!H}})>0$ and $\mathfrak{c}_0=\mathfrak{c}_0(M,p,\mathfrak{I}_{_{\!H}})$ so that for $\Lambda>\Lambda_{\text{max}}$ the following holds.
Let $\rho_0\in \SNH$, $t_0\in \mathbb{R}$ be so that
\begin{itemize}
\item there exists ${\bf w}={\bf w}(t_0,\rho_0)\in T_{\rho_0}\SNH$ so that the restriction
$$
d\psi_{(t_0,\rho_0)}: {\mathbb R} \partial_t \times \mathbb{R} {\bf w}\to T_{\psi(t_0,\rho_0)}\mathbb{R}^{n+1} $$
has left inverse $L_{(t_0,\rho_0)}$ with $\|L_{(t_0,\rho_0)}\|\leq A$ for some $A\geq 1$,\medskip
\item $d(\varphi_{t_0}(\rho_0), \SNH)\leq \min\big\{\frac{ e^{-2\Lambda|t_0|}}{16 \mathfrak{c}_0^2 A^2},\delta_F\big\}$.
\end{itemize}
Then, points $\rho$ in a neighborhood of $\rho_0$ can be written in coordinates $\rho=\rho(y_1,\dots, y_{2n})$, with
$\rho_0=\rho(0,\dots ,0)$ and $\SNH=\{y_{n}=\dots =y_{2n}=0\}$,
so that
{$$
\frac{1}{2}d(\rho(y),\rho(y'))\leq |y-y'| \leq 2d(\rho(y),\rho(y')).
$$}
In addition, there exists a smooth real valued function $f$ defined in a neighborhood of $0\in {\mathbb{R}^{2n-1}}$ so that letting $r_{t_0}:=\frac{8 {e^{-3\Lambda|t_0|}}}{\mathfrak{c}_0^2 A^2}$ and ${0<}r< \tfrac{1}{128}e^{\Lambda |t_0|}r_{t_0}$,
if
\[
|y|<r_{t_0} \qquad \text{and} \qquad d(\varphi_t(\rho(y)),\SNH)<r\;\;\text{
for some}\; t\in [t_0-\tau_1,t_0+\tau_1],
\]
then
\[
|y_1-f(y_2,\dots, y_{2n})|<{2(1+\c)}Ar \qquad \text{and}\qquad
|\partial_{y_j}f|<\c Ae^{\Lambda|t_0|}.
\]
\end{lemma}
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{coordinates.pdf}
\caption{Illustration of the statement in Lemma \ref{l:tanSpace} {when $H$ is a curve and $M$ is a surface}.
}
\label{fig:2lemmas}
\end{figure}
\begin{proof}
{Suppose $F=(f_1, \dots, f_{n+1})$ where $F$ is as in \eqref{e:defFunction}. }
Since $d\psi_{(t_0,\rho_0)}:\mathbb{R}\partial_t\times \mathbb{R} {\bf w} \to \mathbb{R}^{n+1}$ has a left inverse, we may choose coordinates on $\mathbb{R}^{n+1}$ so that with $\tilde F=(f_1, f_2)$,
\[
\Psi: {\mathbb R} \times T^*\!M \to {\mathbb R}^2, \qquad \Psi(t, \rho):=\tilde F \circ \varphi_t(\rho),
\]
then the restriction $d\Psi:\mathbb{R}\partial_t\times \mathbb{R} {\bf w}\to \mathbb{R}^2$ is invertible with inverse $L$ having $\|L\|\leq A$.
Moreover, since
$$
d\psi_{(t_0,\rho_0)}:\mathbb{R} \partial_t \to T_{\psi(t_0,\rho_0)}\mathbb{R}^{n+1}
$$
has a left inverse $L_1\in \mathbb{R}$ with $|L_1|< 2\mathfrak{I}_{_{\!H}}^{-1}=:A_0$, we may choose these coordinates so that with
$$\Psi_1:{\mathbb{R} \times T^*\!M } \to \mathbb{R},\qquad \Psi_1{(t, \rho)}:=f_1\circ\varphi_t(\rho),$$
the restriction $d\Psi_1:\mathbb{R} \partial_t\to \mathbb{R}$ is invertible with inverse $L_1$ having $\|L_1\|\leq A_0$.
Let ${(t,y)}=(t,{y_1},y_2,\dots, {y_{n-1}, y_{n}},\dots, y_{2n})$ be coordinates on $\mathbb{R} \times T^*\!M$ near {$(t_0,\rho_0)$} so that ${(t_0,0)}\mapsto (t_0,\rho_0)$, $\partial_{y_1}\mapsto {\bf w}/\|{\bf w}\|$ at ${(t_0,0)}$, and $({y_n},{y_{n+1},\dots, {y_{2n}}})$ define ${\SNH}$. Finally, let $\tilde{y}=e^{\Lambda|t_0|}y.$ We will work with these coordinates on $\mathbb{R}\times T^*\!M$ for the remainder of the proof.
Applying the implicit function theorem (see Lemma~\ref{l:quantImplicit}) with $x_0=t$, $x_1=\tilde{y}$ and $\tilde f:{\mathbb R} \times {\mathbb R}^{2n} \times {\mathbb R} \to {\mathbb R} $ with $\tilde f(x_0,x_1,x_2)=\Psi_1(x_0,x_1)-x_2$ gives that there exists a neighborhood $U \subset \mathbb{R}^{2n}\times \mathbb{R}$ of {${(0,x_2^0)}$, where $x_2^0:=\Psi_1{(t_0,0)}$}, and a function $x_0=\mathfrak{t}:U \to \mathbb{R}$, so that for $( \tilde{y},x_2)\in U$,
$$
x_2=\Psi_1 \big(\mathfrak{t}(\tilde{y},x_2) , \tilde{y}\big)
$$
with
\[
|\partial_{x_2} \mathfrak{t}|\leq A_0,\qquad \qquad \max_{1\leq j \leq 2n}|\partial_{\tilde y_j}\mathfrak{t}|\leq {\tfrac{c_{_{\!M,p}}}{64 n}}A_0,
\]
where $c_{_{\!M,p}}$ is a positive constant depending only on $(M,p)$, and so that
$|\partial_{t,\tilde{y}}^2 \tilde f|\leq {\tfrac{c_{_{\!M,p}}}{64 n}}$, {$|\partial_{t}^2 \tilde f|\leq {\tfrac{c_{_{\!M,p}}}{64 n}}$,} and $|\partial_{\tilde{y}_j} \tilde f|\leq \tfrac{c_{_{\!M,p}}}{64 n}$ for all $j=1, \dots, 2n$. Then, working with
\[r_0=\tfrac{8}{ c_{_{\!M,p}}A_0}, \qquad r_1=\min\Big\{\tfrac{32}{c_{_{\!M,p}}^2A_0^2 },\;{\tfrac{8}{c_{_{\!M,p}}A_0}}\Big\}, \qquad r_2=\tfrac{2}{ c_{_{\!M,p}}A_0^2},
\]
\[{B_0={ \tfrac{c_{_{\!M,p}}}{32}},\qquad B_1= \tfrac{c_{_{\!M,p}}}{64 n}} , \qquad B_2=0, \qquad \tilde B_1=\tfrac{c_{_{\!M,p}}}{64 n} , \qquad \tilde B_2=1,
\]
for $r_0, r_1, r_2$ and $B_0, B_1, B_2, \tilde B_1, \tilde B_2$ as in Lemma \ref{l:quantImplicit}, we obtain that
$U$ can be chosen so that $B(0,r_1)\times {B(x_2^0, r_2)}\subset U$.
In particular, it follows that if
\begin{equation}
\label{e:condBoston}|\mathfrak{t}-t_0|<\tfrac{8}{ c_{_{\!M,p}}A_0},\qquad |\tilde{y}|\leq \min\Big\{\tfrac{32}{c_{_{\!M,p}}^2A_0^2 },{\tfrac{8}{c_{_{\!M,p}}A_0}}\Big\} ,\qquad {|x_2-x_2^0|}<\tfrac{2}{ c_{_{\!M,p}}A_0^2},
\end{equation}
then
\[
|\mathfrak{t}(\tilde{y},x_2)-\mathfrak{t}(\tilde{y},0) | \leq A_0 {|x_2|}.
\]
Next, since $d\Psi:\mathbb{R} \partial_t\times \mathbb{R} {\bf w}\to \mathbb{R}^2$ is invertible with inverse $L$ satisfying ${\|L\|\leq A}$, we may {perform a linear change of coordinates ({with norm 1}) in ${\mathbb R}^2$ so that }$|\partial_{\tilde{y}_1}\tilde{f}|^{-1}{\leq} Ae^{\Lambda|t_0|}$, where now we write $\tilde f$ for
$$
{\tilde{f}(\tilde{y},x_2,x_3)=\Psi_2(\mathfrak{t}(\tilde{y},x_2),\tilde{y})-x_3}.
$$
Next, we write $\tilde{y}=(\tilde{y}_1,\tilde{y}')$ and once again apply the implicit function theorem (Lemma~\ref{l:quantImplicit}) with $x_0=\tilde{y}_1$, $x_1=(x_2,\tilde{y}')$, $x_3\in \mathbb{R}$, to see that there exists $U \subset \mathbb{R}^{2n}\times \mathbb{R}$ {of ${(0,x_3^0)}$, with $x_3^0=\Psi_2({t_0},0)$,} and a function $x_0=\tilde{\mathbf{y}}_1:U \to \mathbb{R}$, so that for $(\tilde{y}',x_3)\in U$,
$$
x_3=\Psi_2 \Big(\mathfrak{t}\big(\tilde{\mathbf{y}}_1(\tilde{y}',x_2,x_3),\tilde{y}',x_2\big),\tilde{\mathbf{y}}_1(\tilde{y}',x_2,x_3),\tilde{y}' \Big)
$$
with
\[
|\partial_{x_3} \tilde{\mathbf{y}}_1|\leq{A e^{\Lambda|t_0|}},\qquad |\partial_{x_2} \tilde{\mathbf{y}}_1|<\c Ae^{\Lambda|t_0|}, \qquad \max_{2\leq j\leq 2n}|\partial_{\tilde{y}_j} \tilde{\mathbf{y}}_1|\leq \c\,A e^{\Lambda|t_0|}\]
where $\c$ is a positive constant depending only on $(M,p,A_0)$, so that
$|\partial_{(x_2,\tilde{y})}^2 \tilde{f}|\leq \tfrac{\c}{64n}$ and $|\partial_{x_2}\tilde{f}|,\,|\partial_{\tilde{y}_j} \tilde{f}|\leq \tfrac{\c}{64n}$ for all $j=2, \dots, 2n$. {Without loss of generality we assume that $\c \geq c_{_{\!M,p}}A_0$ {and that $\c>1$}.}
Then, working with
\[r_0=\tfrac{8 e^{-\Lambda|t_0|}}{ \c A}, \qquad r_1=\min\Big\{\tfrac{32e ^{-2\Lambda|t_0|}}{\c^2A^2 },\tfrac{8 e^{-\Lambda|t_0|}}{\c A}\Big\}, \qquad r_2=\tfrac{2e^{-2\Lambda|t_0|}}{ \c A^2},
\]
\[B_0= {\tfrac{\c}{32}},\qquad B_1= \tfrac{\c}{64n} , \qquad B_2=0, \qquad \tilde B_1=\tfrac{\c}{64n} , \qquad \tilde B_2=1,
\]
for $r_0, r_1, r_2$ and $B_0, B_1, B_2, \tilde B_1, \tilde B_2$ as in Lemma \ref{l:quantImplicit}, we obtain that
$U$ can be chosen so that $B({(x_2^0,0)},r_1)\times B({x_3^0}, r_2)\subset U$.
In particular, it follows that if
\begin{equation}
\label{e:ChapelHill}
\begin{gathered}
|\tilde{\mathbf{y}}_1|<\tfrac{8 e^{-\Lambda|t_0|}}{ \c A},\quad |(\tilde{y}',x_2-{x_2^0})|\leq \min\Big\{\tfrac{32e ^{-2\Lambda|t_0|}}{\c^2A^2 },\tfrac{8 e^{-\Lambda|t_0|}}{\c A}\Big\},\quad
|x_3-x_3^0|<\tfrac{2e^{-2\Lambda|t_0|}}{ \c A^2},
\end{gathered}
\end{equation}
then
\[
|\tilde{\mathbf{y}}_1(\tilde{y}',x_2,x_3)-\tilde{\mathbf{y}}_1(\tilde{y}',x_2,0) | \leq A e^{\Lambda|t_0|}{|x_3| }.
\]
Note that this can be done since by assumption {$\c>1$ and }
\begin{equation}\label{e:radiusx3}
|0-x_3^0|=|\Psi_2(t_0, \rho_0)| \leq 2d(\varphi_{t_0}(\rho_0), \SNH)<\tfrac{2e^{-2\Lambda|t_0|}}{ \c A^2}.
\end{equation}
It follows, {after undoing the change $\tilde y=e^{\Lambda|t_0|}y$,} that if
\begin{align*}
\bullet\;& \max\{|x_2-x_2^0|,|x_3-x_3^0|\}<
{ \min\Big\{\tfrac{2}{ c_{_{\!M,p}}A_0^2},\; \tfrac{32e ^{-2\Lambda|t_0|}}{\c^2A^2 },\;\tfrac{8 e^{-\Lambda|t_0|}}{\c A},\; \tfrac{2e^{-\Lambda|t_0|}}{ \c A^2}\Big\}},\\
\bullet\;&|y|<
{ \min\Big\{\tfrac{8 e^{-2\Lambda|t_0|}}{ \c A},\;
\tfrac{32e ^{-3\Lambda|t_0|}}{\c^2A^2 },\;
\tfrac{8 e^{-2\Lambda|t_0|}}{\c A},\;
\tfrac{32 e^{-\Lambda|t_0|}}{c_{_{\!M,p}}^2A_0^2 },
\;{\tfrac{8 e^{-\Lambda|t_0|}}{c_{_{\!M,p}}A_0}}
\Big\}},\\
\bullet\;&|t-t_0|
<\tfrac{8}{c_{_{\!M,p}}A_0},
\end{align*}
then
\[
|{\mathbf{y}}_1({y}',x_2,x_3)-{\mathbf{y}}_1({y}',0,0) | \leq (1+\c)A\, {|(x_2,x_3)|}.
\]
{Next, note that since $d(\varphi_t(\rho(y)),\SNH)\leq r$ and $r<\frac{e^{-2\Lambda|t_0|}}{16\c^2 A^2}$,
then
\[
{|x_2-x_2^0| \leq |x_2| + |x_2^0| \leq 2d(\varphi_t(\rho(y)),\SNH)+2d(\varphi_{t_0}(\rho_0),\SNH)} \leq \tfrac{2e^{-2\Lambda|t_0|}}{ \c A^2},
\]
and similarly, $|x_3-x_3^0|\leq \tfrac{2e^{-2\Lambda|t_0|}}{ \c A^2}$.
In addition, we can assume $c_{_{\!M,p}}>1$. Since $\c \geq c_{_{\!M,p}}A_0$, with the above definition of $r_{t_0}$, we obtain that if $r<\frac{1}{128}e^{\Lambda |t_0|}r_{t_0}$ and $|y|<r_{t_0}$, then
\[
|{\mathbf{y}}_1({y}',x_2,x_3)-{\mathbf{y}}_1({y}',0,0) | \leq 2(1+\c)A r.
\]}
To finish the argument, we note that we may define
$f(y'):={\mathbf{y}}_1({y}',0,0)$ satisfying
$
|\partial_{y'}f|\leq \c Ae^{\Lambda|t_0|}
$
as claimed. As argued in \eqref{e:radiusx3}, this can be done since
$|0-x_2^0|<\tfrac{2e^{-2\Lambda|t_0|}}{ \c A^2}$, using that $A \geq 1$ and $\c \geq c_{_{\!M,p}}A_0$.
\end{proof}
\begin{remark}\label{R:c}{ We proceed to study the number of looping directions and prove the main result of this section.
In what follows $\c$ denotes the constant from Lemma \ref{l:tanSpace}.}
\end{remark}
\begin{proposition}
\label{p:ballCover}
Let $0\leq t_0<T_0$, $0<\tilde c<{\delta_F} $, {$a>0$}, {$\Lambda>\Lambda_{\text{max}}$}, {$c>0$, $\beta \in \mathbb{R}$, $A\subset \SNH$, and $B\subset A$ a ball of radius $R>0$} satisfy the following assumption: {for all} $(t, \rho) \in [t_0, T_0]\times B$ such that $d(\varphi_t(\rho),{A})\leq \tilde c\, e^{-{a}|t|}$, there exists ${\bf w} \in T_{\rho}\SNH$ for which the restriction
$$
d\psi_{(t,\rho)}: {\mathbb R} \partial_t \times {\mathbb R} {\bf w} \to T_{\psi(t,\rho)}\mathbb{R}^{n+1}
$$
has left inverse $L_{(t, \rho)}$ with $\|L_{(t, \rho)}\|\leq ce^{\beta |t|}$.
\smallskip
Then, there exist $\alpha_1=\alpha_1(M,p)>0$ and $ \alpha_2=\alpha_2(M,p,c, \tilde c, \delta_F, \mathfrak{I}_{_{\!H}})$ so that the following holds.
Let $r_0,r_1,r_2 >0$ satisfy
\[
r_0 < r_1, \qquad r_1< \alpha_1\, r_2, \qquad r_2 \leq \min\{R,{1}, \alpha_2\, e^{-\gamma T_0}\}, \qquad r_0 < {\tfrac{1}{3}}\, e^{-\Lambda T_0} r_2,
\]
where { $\gamma=\max\{a, 3\Lambda+2\beta\}$}.
Let $0<\tau_0<\frac{\tau_{_{\!\text{inj}H}}}{2}$, $0<\tau\leq \tau_0$, and $\{\rho_j \}_{j=1}^{N} \subset {\SNH}$ be a family of points so that
\vspace{-0.3cm}
\[{\Lambda_{\rho_j}^\tau(r_1)\cap \Lambda_B^\tau(r_0)\neq \emptyset},\qquad \Lambda_B^\tau(r_0) \subset \bigcup_{j=1}^{N} \Lambda_{\rho_j}^\tau( r_1 ),\]
and $\big\{\Lambda_{\rho_j}^\tau(r_1)\big\}_{j=1}^N$ can be divided into ${\mathfrak{D}}$ sets of disjoint tubes.
Then,
there exist a partition of the indices
$\mathcal G \cup \mathcal B= \{1,\dots, N\}$ and a constant ${{\bf{C}_0}={\bf{C}_0}(M,p,k,c,\beta,\mathfrak{I}_{_{\!H}})}>0$ so that \smallskip
\begin{itemize}
\item $\bigcup_{j\in \mc{G}}\Lambda_{_{\rho_j}}^\tau({r_1})\quad \text{ is }\text{ non-self looping for times in }\;\; [t_0,T_0],$ \medskip
\item $|\mathcal B|\leq {{\bf{C}_0}}{\mathfrak{D}} \;r_2 \;\frac{R^{n-1}}{r_1^{n-1}}\; {T_0}\,e^{4(\Lambda+\beta)T_0},$\medskip
\item $d\Big({\Lambda_{A}^\tau(r_0)}\;,\;\bigcup_{t\in[t_0,T_0]}\bigcup_{j\in \mc{G}}\varphi_t(\Lambda_{_{\rho_j}}^\tau(r_1))\Big)>2r_1$.
\end{itemize}
\end{proposition}
\begin{remark}\label{r:D depends on n}
{Note that we will typically apply Proposition~\ref{p:ballCover} with $\{\Lambda_{\rho_j}^\tau(r_1)\}_j$ a subset of a $(\mathfrak{D}_n,\tau,r)$ good cover for $\SNH$. In this case the constant $\mathfrak{D}$ can be absorbed into ${\bf{C}_0}$ since it depends only on $n$.}
\end{remark}
\begin{proof}
Let {$\tau_1=\tau_1(M,p,\mathfrak{I}_{_{\!H}})$ be the minimum of $1$ and the constant from Lemma~\ref{l:tanSpace}}, and let
$L$ be the largest integer with $L\leq \frac{1}{{\tau_1}}(T_0-t_0)+1$. Cover $[t_0,T_0]$ by
$$
[t_0,T_0]\subset \bigcup_{\ell=0}^{L} \big[s_\ell - \tfrac{{\tau_1}}{2} ,s_\ell + \tfrac{{\tau_1}}{2}\big],
$$
{where $s_\ell:= t_0 +(\ell +\frac{1}{2}){\tau_1}$.}
We claim that for each $\ell=0, \dots, L$ there exists a partition of indices $ \mathcal G_\ell \cup \mathcal B_\ell = \{1, \dots, N\}$ so that
\begin{equation}\label{e:mainclaim0}
|\mathcal B_\ell| \leq {{\bf{C}_0}}{\mathfrak{D}}\frac{r_2 R^{n-1}}{r_1^{n-1}} e^{4(\Lambda+\beta)|s_\ell |}
\end{equation}
and
{
\begin{equation}\label{e:mainclaim}
d\left(\Lambda_{{A}}^\tau(r_0)\;, \bigcup_{t=s_\ell-\frac{{\tau_1}}{2}}^{s_\ell+\frac{{\tau_1}}{2}}\varphi_t\big(\Lambda_{_{\!\rho_k}}^\tau(r_1)\big)\right)\geq {\frac{1}{C_{_{\!S}}}}r_2 -C_{_{\!S}}r_0 \;\;\; \;\;\;\forall k \in \mathcal G_\ell.
\end{equation}
}
Here,
\begin{gather*}
C_{_{\!S}}:=\sup\big\{ \|d\varphi_t(q)\|:\; q\in \Lambda^1_{\{p=0\}}(\varepsilon_0),\,|t|\leq {\tfrac{4}{3}}\big\},
\end{gather*}
where $\varepsilon_0>R$ is a constant independent of $r_0, r_1, r_2, R$.
The result then follows from setting
\[
\mathcal B:= \bigcup_{\ell=0}^L \mathcal B_\ell\qquad \text{and} \qquad \mathcal G:=\{1, \dots, N\}\backslash \mathcal B,
\]
together with asking for $\alpha_1<{\tfrac{1}{2C_{_{\!S}}+C^2_{_{\!S}}}}$ so that {${\frac{1}{C_{_{\!S}}}}r_2 -C_{_{\!S}}r_0 >2r_1$}. {Note that the adjustment depends only on $(M,p)$. }
We have reduced the proof of the proposition to establishing the claims in \eqref{e:mainclaim0} and \eqref{e:mainclaim}. We next explain that it suffices to prove \eqref{e:mainclaim} with $\Lambda_{{A}}^\tau(r_0)$ replaced by ${A}$. To see this, let $\{t_j\}$ be so that
\[
[-(3\tau+{\tau_1}{+r_0}),3\tau+{\tau_1}+{r_0}]=\bigcup^{J}_{j=1} [t_j-\tfrac{{\tau_1}}{2},t_j+\tfrac{{\tau_1}}{2}],
\]
where $J$ is the largest integer with $J \leq (6\tau+{2r_0})/{\tau_1}+2.$ {Note that since $\tau<\tau_0<1$, ${r_0<\frac{1}{3}}$ and ${\tau_1}$ depends only on $(M,p,\mathfrak{I}_{_{\!H}})$, the same is true for $J$.}
Fix $\ell \in \{1, \dots, L\}$. We claim that for each $j\in \{1, \dots, J\}$ there exists a partition $\mathfrak{g}_j^\ell\cup \mathfrak{b}_j^\ell=\{1,\dots,N\}$
with
\begin{equation}
\label{e:claim-grass0}
|\mathfrak b_j^\ell |\leq {{\bf{C}_0}}{\mathfrak{D}}\frac{r_2 R^{n-1}}{r_1^{n-1}} e^{4(\Lambda+\beta)|s_\ell |},
\end{equation}
and
\begin{equation}
\label{e:claim-grass}
d\Big({A}, \bigcup_{t=s_\ell+t_j-\frac{{\tau_1}}{2}}^{s_\ell+t_j+\frac{{\tau_1}}{2}}\varphi_t\big(\rho\big)\Big)\geq r_2 \qquad \text{for all}\;\; \rho\in \bigcup_{k\in \mathfrak g_j^\ell}\Lambda_{\rho_k}^\tau(r_1).
\end{equation}
Suppose the claims in \eqref{e:claim-grass0} and \eqref{e:claim-grass} hold and let
\[
\mathcal B_\ell:=\bigcup_{j=1}^J \mathfrak b_j^\ell\qquad \text{ and }\qquad \mathcal G_\ell=\{1, \dots, N\}\backslash \mathcal B_\ell.
\]
Then, by construction, {after possibly adjusting $ {\bf{C}_0}$ to take into account the bound on $J$ (which only depends on $(M,p,\mathfrak{I}_{_{\!H}})$),} we obtain that \eqref{e:mainclaim0} also holds. To derive \eqref{e:mainclaim}
suppose $\rho\in \Lambda_{\rho_k}^\tau(r_1)$ for some $k \in \mathcal G_\ell$. In particular, since $k \in \mathfrak g_j^\ell$ for all $j=1, \dots, J$, relations \eqref{e:claim-grass} yield that
$$
d\Big({A}, \bigcup_{t=s_\ell-3\tau-{\tau_1}-{r_0}}^{s_\ell+3\tau+{\tau_1}+{r_0}}\varphi_t(\rho)\Big)\geq r_2.
$$
In particular, {using the definition of $ {C_{_{\!S}}}$, { that $\tau<\tau_{_{\!\text{inj}H}}\leq 1$}, {and $r_0<\frac{1}{3}$}}
{
$$
d\Big(\Lambda^{\tau+{r_0}}_{{A}}, \bigcup_{t=s_\ell-2\tau-{{\tau_1}}}^{s_\ell+2\tau+{{\tau_1}}}\varphi_t(\rho)\Big)\geq {\frac{r_2}{C_{_{\!S}}}},
$$
and this proves \eqref{e:mainclaim} after using the definition of $C_{_{\!S}}$ once again.}
We have then reduced the proof of the proposition to establishing the claims in \eqref{e:claim-grass0} and \eqref{e:claim-grass}. Fix $\ell \in \{1, \dots, L\}$, $j \in \{1, \dots, J\}$, and set
\[
s:=s_\ell +t_j.
\]
To prove these claims we start by covering $B$ by balls $B_\alpha^s \subset T^*\!M $ of radius ${\bf R_s}>0$ (to be determined later) and centers in $B$,
\[
B \subset \bigcup_{\alpha=1}^{I_s} B_\alpha^s,
\]
so that $I_s \leq C_{n} R^{n-1} {\bf R}^{-(n-1)}_s $ for some $C_{n}>0$.
Fix $B_\alpha^s$ and suppose there exists $\rho_0\in B_\alpha^s$ such that
\begin{equation}
\label{e:badness}
d(\SNH,\rho_0)<r_0\qquad \text{and}\qquad d\Big({A}, \bigcup_{t=s-\frac{{\tau_1}}{2}}^{s+\frac{{\tau_1}}{2}}\varphi_t(\rho_0)\Big)< r_2.
\end{equation}
Then there exists $\tilde{s}\in [s-\frac{{\tau_1}}{2},s+\frac{{\tau_1}}{2}]$ with $d(\varphi_{\tilde{s}}(\rho_0),A)<r_2$.
{Next, since $d(\rho_0,\SNH)<r_0$, there exists $\rho_\alpha \in \SNH$ with
\[
\varphi_{\tilde{s}}(\rho_\alpha)\in B(\varphi_{\tilde{s}}(\rho_0), c_{_{\!M,p}}e^{\Lambda|\tilde{s}|}r_0),\qquad d(\rho_0,\rho_\alpha)<r_0,
\]}
{for some $c_{_{\!M,p}}>0$.}
In addition, {letting ${\bf{\bar r}}_{{s}}=c_{_{\!M,p}}e^{\Lambda|\tilde{s}|}r_0$,}
\begin{equation*}
\begin{gathered}
d(\SNH,\varphi_{\tilde{s}}(\rho_\alpha))\leq d(A,\varphi_{\tilde{s}}(\rho_\alpha)) \leq d(A, \varphi_{\tilde{s}}(\rho_0)) + d( \varphi_{\tilde{s}}(\rho_0),\varphi_{\tilde{s}}(\rho_\alpha))< r_2+ {\bf \bar r_s}.
\end{gathered}
\end{equation*}
We then assume that $\alpha_2<{\frac{3}{3+c_{_{\!M,p}}}\min\{\tfrac{\tilde c}{2},\,{\tfrac{\delta_F}{2}},\,\tfrac{1}{{32}\c ^2c^2}\}}$ so that
$$ r_2+ {\bf \bar r_s}<{\min\Bigg\{\tilde c e^{-a|\tilde{s}|},\, \frac{e^{-2(\Lambda+\beta)|\tilde{s}|}}{16\c ^2c^2},\,\delta_F\Bigg\}}$$
{where $\c$ is from Lemma~\ref{l:tanSpace}.}
Then, by assumption there exists ${\bf w}={\bf w}(\tilde s, \rho_\alpha)\in T_{\rho_\alpha}\SNH$ so that
the restriction $d\psi_{(\tilde{s},\rho_\alpha)}: {\mathbb R} \partial_t \times \mathbb{R} {\bf w} \to T_{\psi(\tilde{s} ,\rho_\alpha)}\mathbb{R}^{n+1} $
has left inverse $L_{(\tilde{s},\rho_\alpha)}$ with $\|L_{(\tilde{s},\rho_\alpha)}\|\leq ce^{\beta|\tilde s|}=:A$. By Lemma \ref{l:tanSpace} the {points $\rho$ in a neighborhood of $\rho_\alpha$ can be written in coordinates $\rho=\rho(y_1,\dots, y_{2n})$ with
$\rho_\alpha=\rho(0, \dots, 0)$ and $\SNH=\{y_{n}=\dots =y_{2n}=0\}$}
so that
$
\frac{1}{2}d(\rho(y),\rho(y'))<|y-y'|<2d(\rho(y),\rho(y')).
$
Let
\[
r_{\tilde s}:=\frac{{8}e^{-3\Lambda |\tilde s|}}{\c^2 A^2}=\frac{ {8}e^{-(3\Lambda+2\beta) |\tilde s|}}{c^2\c^2 }.
\]
These coordinates are built with the property that there exists a smooth real-valued function $f$ defined in a neighborhood of $0 \in {\mathbb R}^{{2n-1}}$ so that if {$0<r<\frac{1}{128}{e^{\Lambda |\tilde s |}r_{\tilde s}}$},
\[ |y|<r_{\tilde s} \qquad \text{and} \qquad d(\varphi_t(\rho(y)),\SNH)<r\;\;\text{
for some}\; t\in \big[\tilde s-{\tau_1},\tilde s+{\tau_1}\big],\]
then
\[|y_1-f(y_2,\dots, y_{2n})|<{2(1+\c)ce^{\beta |\tilde s|} r} \qquad \text{and}\qquad
|\partial_{y_j}f|<{\c\,}ce^{\beta |\tilde s|}e^{\Lambda|\tilde s|}.
\]
Assume {$\alpha_2<\tfrac{1}{128}$} so that $r_2<{\frac{1}{128}e^{\Lambda |\tilde{s} |}r_{\tilde{s}}}$. Since $\tilde{s}\in[s-\frac{{\tau_1}}{2},s+\frac{{\tau_1}}{2}]$, we may choose $r:= r_2$ to get that, if $\rho=\rho(y)\in B(\SNH,r_0) $ satisfies $d(\rho,\rho_\alpha)<\frac{r_{\tilde s}}{2}$ and
\vspace{-0.2cm}
\begin{equation}\label{e:nonloop}
d\Big(\SNH, \bigcup_{t=s-\frac{{\tau_1}}{2}}^{s+\frac{{\tau_1}}{2}}\varphi_t(\rho)\Big)< r_2,
\end{equation}
\vspace{-0.2cm}
then {with $\bar{y}=(y_n,\dots, y_{2n})$}
\begin{align*}
|y_1-f(y_2,\dots, y_{n-1},0)|
&\leq |y_1-f(y_2,\dots, y_{n-1},{\bar{y}})|+ |\partial_{y_j}f(y_2,\dots, y_{n-1},0)| |{\bar{y}}| \\
&<{2(1+\c)}ce^{\beta |\tilde s|} r_2+ {\c}c e^{\beta |\tilde s|}e^{\Lambda|\tilde s|}2r_0\\
&<C_0e^{\beta |\tilde s|}r_2.
\end{align*}
Here, we have used that the assumption $r_0 < \tfrac{1}{3}\, e^{-\Lambda T_0} r_2$ implies {$e^{\Lambda|\tilde s|}2r_0< r_2$}, and we have written $C_0={(2+3\c) c}$. Also, we used that $|\bar y| \leq 2 d(\rho(y), \rho(y_2,\dots, y_{n-1},0))=2 d(\rho(y),\SNH){\leq 2 r_0}$.
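For the reader's convenience, the arithmetic behind the constant $C_0$ can be spelled out: combining the two terms in the previous display with $e^{\Lambda|\tilde s|}2r_0< r_2$ gives
\[
{2(1+\c)}ce^{\beta |\tilde s|} r_2+ {\c}c e^{\beta |\tilde s|}e^{\Lambda|\tilde s|}2r_0 < \big(2(1+\c)+\c\big)c\,e^{\beta |\tilde s|}r_2=(2+3\c)\,c\,e^{\beta |\tilde s|}r_2.
\]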
\begin{figure}
\centering
\includegraphics[width=10cm]{monster.pdf}
\caption{Illustration, when $n=3$, of the covering balls that intersect $B_\alpha^s$ and loop back for times $\tilde s$ near $s$. }
\label{fig:1lemma}
\end{figure}
Next, we let ${\bf R_s}=\frac{r_{\tilde s}}{8}$ {and use that $\alpha_2<\frac{1}{16 c^2\c^2}$} to obtain that, since {$\rho_0\in B_\alpha^s$, for every $\rho \in B_\alpha^s$,}
\begin{equation}\label{E:Rs}
d(\rho,\rho_\alpha )\leq d(\rho_0,\rho_\alpha)+d(\rho,\rho_0)<{r_0}+2{\bf R_s}<{\frac{r_{\tilde s}}{2}}.
\end{equation}
In particular, \eqref{E:Rs} implies
$$B_\alpha^s \subset \{\rho \in T^*\!M :\; d(\rho,\rho_\alpha)<{\frac{r_{\tilde s}}{2}}\}.$$
Therefore, we have shown that if $\rho \in B_\alpha^s \cap B(\SNH,r_0)$ satisfies \eqref{e:nonloop}, then $\rho \in \mathcal U_{\rho_\alpha}^s \cap B(\SNH,r_0)$ where
$$
\mathcal U_{\rho_\alpha}^s=\left\{\rho:\; |y_1-f(y_2,\dots, y_{n-1},0)|<C_0e^{\beta |\tilde s|}r_2,\quad d(\rho,\rho_\alpha)<\tfrac{r_{\tilde s}}{2}\right\}.
$$
This is illustrated in Figure \ref{fig:1lemma}.
Next, note that the number of disjoint tubes in $\{ \Lambda_{\rho_j}^\tau( r_1 )\}_{j=1}^{N}$ that intersect $\mathcal U_{\rho_\alpha}^s \cap B(\SNH,r_0)$ is controlled by the number of disjoint balls in the collection $\{B(\rho_j, r_1)\}_{j=1}^N$ that intersect ${\mathcal U}_{\rho_\alpha}^s \cap \SNH$. In addition, for each $j\in \{1, \dots, N\}$ the intersection $B(\rho_j, r_1)\cap \SNH$ is entirely contained in $\tilde{\mc{U}}_{\rho_\alpha}^s\cap \SNH$ where
$$
\tilde{\mc{U}}_{\rho_\alpha}^s\!\!=\!\!\left\{\!\rho: |y_1-f(y_2,\dots, y_{n-1},0)|<C_0e^{\beta |\tilde s|}r_2\!+\!4r_1,\quad\, d(\rho,\rho_\alpha)<\frac{r_{\tilde s}}{2}\!+\!4r_1\right\}\!.
$$
In particular,
\begin{align*}
\vol (\tilde{\mathcal U}_{\rho_\alpha}^s \cap \SNH)
&\leq(C_0e^{\beta |\tilde s|}r_2+4r_1)\int_{B(0,\frac{r_{\tilde s}}{2}+4r_1)}\sqrt{1+|\nabla f|^2}\,dy_2\dots dy_{n-1}.
\end{align*}
Hence, the number of disjoint balls in the collection $\{B(\rho_j, r_1)\}_{j=1}^N$ that intersect ${\mathcal U}_{\rho_\alpha}^s \cap \SNH$ is controlled by
$$
{2\sqrt{n-1}\, \c c}(C_0e^{\beta(|s|+{\tau_1})}r_2+4r_1)\, {e^{(\beta+\Lambda)(|s|+{\tau_1})}}\big(\frac{r_{\tilde s}}{2}+4r_1\big)^{n-2}r_1^{-(n-1)}.
$$
Here, we used the bound $|\partial_{y_j}f|<{\c} \,ce^{(\beta+\Lambda) |\tilde s|}$ and that $e^{\beta|\tilde s|} \leq e^{\beta(|s|+{\tau_1})}$.
Finally, note that since $\alpha_2< \tfrac{1}{c^2 \c^2}$ and $\gamma \geq 3\Lambda +2\beta$, by choosing $\alpha_1<1$, we have $r_1<\min \{r_2,r_{\tilde s}\}$.
Hence, the number of disjoint balls in the collection $\{B(\rho_j, r_1)\}_{j=1}^N$ that intersect ${\mathcal U}_{\rho_\alpha}^s \cap \SNH$ is controlled by
$
{e^{2\beta {\tau_1}}} e^{(2\beta+\Lambda)|s|}r_2 r_{\tilde s}^{n-2}r_1^{-(n-1)}
$
up to a constant that depends only on $(M,p,k,c, {\mathfrak{I}_{_{\!H}} })$.
In addition, note that in the collection $\{ \Lambda_{\rho_j}^\tau( r_1 )\}_{j=1}^{N}$ there are ${\mathfrak{D}}$ sets of disjoint tubes of radius $r_1$. Therefore, since there are $ I_s \leq C_{n} R^{n-1} {\bf R_s}^{-(n-1)} $ balls $B_\alpha^s$, for $s=s_\ell+t_j$ we can build $\mathfrak b^\ell_j$ so that
\[
\rho \notin \bigcup_{k \in \mathfrak b^\ell_j} \Lambda_{\rho_k}^\tau(r_1)\quad \Longrightarrow \quad d\Big(A, \bigcup_{t=s_\ell+t_j-\frac{{\tau_1}}{2}}^{s_\ell+t_j+\frac{{\tau_1}}{2}}\varphi_t(\rho)\Big)\geq r_2,
\]
and so that for some ${{\bf{C}_0}={\bf{C}_0}(M,p,k,c,\beta,\mathfrak{I}_{_{\!H}} )}>0$
$$
|\mathfrak b^\ell_j|
\leq {{\bf{C}_0}}{\mathfrak{D}}\frac{e^{(2\beta+\Lambda)|s|}r_2r_{\tilde s}^{n-2}R^{n-1}}{r_1^{n-1}\mathbf{R}_s^{n-1}}.
$$
{Here, we have used that $e^{2\beta {\tau_1}} \leq e^{2\beta}$ since ${\tau_1}\leq 1$.}
Using that $\frac{r_{\tilde s}^{n-2}}{\mathbf{R}_s^{n-1}}=\frac{8^{n-1}}{r_{\tilde s}}$ and adjusting ${{\bf{C}_0}}$, we obtain \eqref{e:claim-grass0}. This concludes the proofs of the claims in \eqref{e:claim-grass0} and \eqref{e:claim-grass}.
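For completeness, the identity invoked in this last step can be verified directly: since $\mathbf{R}_s=\frac{r_{\tilde s}}{8}$ and $r_{\tilde s}=\frac{8e^{-(3\Lambda+2\beta) |\tilde s|}}{c^2\c^2}$,
\[
\frac{r_{\tilde s}^{n-2}}{\mathbf{R}_s^{n-1}}=\frac{8^{n-1}}{r_{\tilde s}}=8^{n-2}c^2\c^2\,e^{(3\Lambda+2\beta)|\tilde s|},
\]
and, since $|\tilde s-s_\ell|$ is bounded by a constant depending only on $(M,p,\mathfrak{I}_{_{\!H}})$, the product $e^{(2\beta+\Lambda)|s|}e^{(3\Lambda+2\beta)|\tilde s|}$ is bounded by $e^{4(\Lambda+\beta)|s_\ell|}$ up to a constant of the same type. This is the source of the exponential factor in \eqref{e:claim-grass0}.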
\end{proof}
\section{Contraction of $\varphi_t$ and non-self looping sets}
\label{s:controlLooping anosov}
The proofs of Theorems \ref{T:applications} and \ref{T:tangentSpace} hinge on controlling how the geodesic flow changes the volume of sets contained in $\SNH$.
Let
\begin{equation}\label{e:Jdef}
J_t:=d\varphi_t|_{T_\rho\SNH}:T_\rho \SNH\to d\varphi_t(T_\rho \SNH).
\end{equation}
Assuming that the geodesic flow is Anosov will allow us to prove in Section \ref{T:tangentSpace} that for certain $A_0 \subset \SNH$ there is $C_0\geq 1$ so that
\begin{equation}
\label{e:forward}
\sup_{\rho \in {A_0} }|\det J_t|\leq C_0e^{-|t|/C_{_{0}}}.
\end{equation}
{ Note, however, that the condition in \eqref{e:forward} is very general and that it may hold in situations where the geodesic flow is not Anosov. For example, such an estimate holds at the umbilic points of the triaxial ellipsoid (see e.g.~\cite{GT18a}). }This section is dedicated to studying the structure of the set of looping tubes under the assumption that \eqref{e:forward} holds.
By~\eqref{e:derFlow}, there exists $C_{_{\!\varphi}}>0$ depending only on $(M,p)$, so that {for all $\Lambda>\Lambda_{\text{max}}$}
\begin{equation}
\label{e:cphi}
\|d\varphi_t\| \leq C_{_{\!\varphi}}e^{\Lambda |t|},\qquad t\in {\mathbb R}.
\end{equation}
Let $\mathbf{D}>1$ be so that
{
\begin{equation}\label{e:D}
e^{-\Lambda \mathbf{D}}< \min\Big\{\frac{e^{-\Lambda}}{C_{_{\!\varphi}}}, \frac{\alpha_1}{4}, \frac{1}{4}
\Big\},
\end{equation}
{where $\alpha_1=\alpha_1(M,p)$} is the constant introduced in Proposition \ref{p:ballCover}.
}
\begin{definition}\label{d:control}
Let $ A_0\subset\SNH$, $\varepsilon_0>0$, $\digamma>0$, {$\mathfrak{t}_0:[\varepsilon_0, \infty) \to [1, \infty)$}, and $T_0>1$. If the following conditions are satisfied, we say that
\[
A_0 \; \text{ can be}\; {(\varepsilon_0, \mathfrak{t}_0, \digamma) \text{-controlled up to time}\; T_0.}
\]
Let $\varepsilon\geq\varepsilon_0$, {$\Lambda>\Lambda_{\text{max}}$},
\begin{equation*}\label{e:parameters}
0<R_0\leq \tfrac{1}{\digamma}e^{-\digamma\Lambda|T_0|},\qquad0<r_0<R_0,
\end{equation*}
and balls $\{B_{0,i}\}_{i=1}^N\subset \SNH$ centered in $A_0$ with radii $\{R_{0,i}\}_{i=1}^N \subset [r_0, R_0]$. Then, for {$0<\tau<\tfrac{1}{2}\tau_{_{\!\text{inj}H}}$} and all
$$
A_1\subset \bigcup_{i=1}^NB_{0,i}\subset A_0
\qquad
\text{and}
\qquad
0<r<\tfrac{1}{\digamma}e^{-\digamma\Lambda T_0}r_0,
$$ there are balls $\{\tilde B_{1,k}\}_{k}\subset \SNH$ with radii $\{R_{1,k}\}_k \subset [0, {\tfrac{1}{4}R_0}]$ so that
\begin{enumerate}
\item $\Lambda^\tau_{A_1 \backslash \cup_k \tilde B_{1,k}}(r)$ is non-self-looping for times in $[\mathfrak{t}_0(\varepsilon), T_0]$, \\
\item $\sum_k R_{1,k}^{n-1}\leq \varepsilon \sum_i R_{0,i}^{n-1}$,\\
\item ${\inf_k}R_{1,k} \geq e^{-\mathbf{D}\Lambda T_0}\, {\inf_iR_{0,i}}$.
\end{enumerate}
{We observe that when we write $A_1 \backslash \cup_k \tilde B_{1,k}$ we mean $A_1 \cap (\SNH \backslash \cup_k \tilde B_{1,k})$.}
\end{definition}
\begin{lemma}\label{l:nonlooping}
{There exists $\digamma>0$ depending only on $(M,p, {K_{_{\!H}}})$ so that for every monotone decreasing function $f:[0,\infty) \to [0,\infty)$ with $f\in L^1([0,\infty))$ {and $\Lambda>\Lambda_{\max}$}, there exists a function $\mathfrak{t}_0:(0,\infty)\to[1,\infty)$ with the following properties.\\
If $A_0\subset\SNH$ is so that
\begin{equation}
\label{e:forward1}
\sup_{\rho \in {A_0} }|\det J_t|\leq f(|t|)
\end{equation}
for all $t \in (0, T_0)$ or for all $t \in (-T_0, 0)$, then, for all $\varepsilon_0>0$,
\[
A_0 \; \text{can be}\; (\varepsilon_0, \mathfrak{t}_0, \digamma)\text{-controlled up to time}\; T_0
\]
in the sense of Definition~\ref{d:control}. Furthermore, in addition to conditions (1), (2) and (3) in Definition~\ref{d:control} being satisfied,
\[
\bigcup_{t={\mathfrak{t}_0(\varepsilon)}}^{T_0} \varphi_t(\Lambda^\tau_{A_1\backslash\cup_k\tilde{B}_{1,k}}(r))\cap \Lambda_{\SNH \backslash\cup_k\tilde{B}_{1,k}}^\tau(r)=\emptyset.
\]}
\end{lemma}
\begin{proof}
We prove the case in which \eqref{e:forward1} holds for all $t \in (0, T_0)$ (the case in which it holds for all $t \in (-T_0, 0)$ is identical after sending $t\to -t$). {Let $\Lambda>\Lambda_{\max}$ and
$t_0$ be large enough so that {$t_0>\tau_{_{\!\text{inj}H}}+2$} and
\begin{equation}\label{e:Tplus}
{C_{_{\!\varphi}}e^{\Lambda}e^{-\mathbf{D}\Lambda (t_0-\tau_{_{\!\text{inj}H}}-1)}} \leq 1
\end{equation}
where {$C_{_{\!\varphi}}$ is as in \eqref{e:cphi}}. {We will assume, without loss of generality, that $f(t)\geq \frac{1}{C_{_{\!\varphi}}}e^{-\Lambda t}$ for all $t\geq 0$.}
Define
\[
\mathfrak{t}_0:(0,\infty) \to [1, \infty) \qquad \mathfrak{t}_0(\varepsilon)=\inf\Bigg\{s\geq t_0:\,\, \int_s^\infty \!\! f(u)\,du\leq \frac{\varepsilon\tau_{_{\!\text{inj}H}}}{4\alpha}\Bigg\},
\]
where
\[\alpha:={2^{3n-1}}\gamma^{n-1}\qquad \text{and} \qquad \gamma:=\tfrac{1}{4}C_{\varphi}e^{\Lambda}.\] }
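As an illustration (not needed in the sequel), if $f(t)=C_0e^{-t/C_0}$ as in \eqref{e:forward}, then $\int_s^\infty f(u)\,du=C_0^2e^{-s/C_0}$, and so
\[
\mathfrak{t}_0(\varepsilon)\leq \max\Big\{t_0,\; C_0\log\Big(\frac{4\alpha C_0^2}{\varepsilon\,\tau_{_{\!\text{inj}H}}}\Big)\Big\};
\]
in particular, $\mathfrak{t}_0(\varepsilon)$ grows only logarithmically as $\varepsilon \to 0$.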
{Fix $\varepsilon_0>0$ and let $\varepsilon\geq \varepsilon_0$.} Let $0<\tau<\frac{1}{2}\tau_{_{\!\text{inj}H}}$, $R_0>0$, $0<r_0<R_0$ and let $\{B_{0,i}\}_{i=1}^N\subset \SNH$ be a collection of balls centered in $A_0$ with radii $\{R_{0,i}\}_{i=1}^N \subset [r_0, R_0]$. Let $A_1\subset \bigcup_{i=1}^NB_{0,i}$ and $0<r<1$.
For each $i \in \{1, \dots, N\}$ let $\{I_{0,i,j}\}_{j=1}^{N_i}$ be a collection of {disjoint} intervals $I_{0,i,j} \subset [{\mathfrak{t}_0(\varepsilon)}-2\tau-r, T_0+2\tau+r]$ {so that $\tfrac{\tau_{_{\!\text{inj}H}}}{4}\leq |I_{0,i,j}|<\tfrac{\tau_{_{\!\text{inj}H}}}{2}$ and}
\begin{equation}\label{e:intervals}
\begin{gathered}
\big\{t \in [{\mathfrak{t}_0(\varepsilon)}-2\tau-r, T_0+2\tau+r]:\; \varphi_t(\Lambda^0_{B_{0,i}}(r)) \cap \Lambda^0_{_{\!\SNH}}(r) \neq \emptyset\big\} \subset \bigcup_{j=1}^{N_i} I_{0,i,j},\\
\text{and}\\
{\bigcup_{t\in I_{0,i,j}}\varphi_t(\Lambda_{B_{0,i}}^0(r))\cap \Lambda^0_{_{\!\SNH}}(r)\neq \emptyset.}
\end{gathered}
\end{equation}
For $i \in \{1, \dots, N\}$ and $j \in \{1, \dots, N_i\}$ define
\begin{equation}\label{e:D def}
D_{0,i,j}:= \bigcup_{t \in I_{0,i,j}} \varphi_t(\Lambda^0_{B_{0,i}}(r)) \cap \Lambda^0_{_{\!\SNH}}(r).
\end{equation}
We claim that {for each pair $(i,j)$}
\begin{equation}\label{e:Disk}
D_{0,i,j}\subset \bigcup_{\ell=1}^{{L_{i,j}}} \Lambda_{B_{0,i,j,\ell}}^0(r)
\end{equation}
where $\{B_{0,i,j,\ell}\}_{\ell=1}^{{L_{i,j}}}$ are balls centered in $\SNH$ with radii {$R_{0,i,j,\ell}:=\gamma e^{-\mathbf{D}\Lambda t_{0,i,j}}R_{0,i}$} satisfying
\begin{equation}\label{e:radii}
{L_{i,j}}R_{0,i,j,\ell}^{n-1}\leq \alpha {f( t_{0,i,j})}R_{0,i}^{n-1}
\end{equation}
(see Figure~\ref{f:contract} for an illustration of this covering),
where $t_{0,i,j}:=\min\{t:\,t \in I_{0,i,j}\}$.
Note that $t_{0,i,j}>1$ for all $(i,j)$ since $r<1$ and ${\mathfrak{t}_0(\varepsilon)\geq t_0}>\tau_{_{\!\text{inj}H}}+2$, and so ${\mathfrak{t}_0(\varepsilon)}-2\tau-r >{\mathfrak{t}_0(\varepsilon)}-\tau_{_{\!\text{inj}H}}-1>1$.
\begin{figure}
\includegraphics[height=8cm]{flowout2.pdf}
\caption{\label{f:contract} Illustration of a contracting ball and the cover by much smaller balls for the proof of Lemma~\ref{l:nonlooping}.}
\end{figure}
Note that, since {we take $0<r<R_0<\digamma^{-1}{e^{-\digamma \Lambda T_0}}$, if we let $\digamma_{_{\!\!0}}=\digamma_{_{\!\!0}}(M,p,K_{_{\!H}})$ large enough and assume $\digamma\geq \digamma_{_{\!\!0}}$}, then $\SNH$ is almost flat as a submanifold {of $T^*\!M$ at scale $R_0$}. In particular, we have
\[
\mathcal B (\rho, \tfrac{1}{2} R) \cap \Lambda^0_{_{\! \SNH}}(r) \; \subset\; \Lambda_{ B(\rho, R)}^0(r),
\]
for all $\rho \in \SNH$ and $0\leq R\leq R_0$. Here we are using $\mathcal B$ to denote a ball in $T^*\!M $ and $B$ to denote a ball in $\SNH$.
Therefore, it suffices to show that
\begin{equation}\label{e:goalD}
D_{0,i,j}\subset \bigcup_{\ell=1}^{L_{i,j}} \mathcal B_{0,i,j,\ell},
\end{equation}
where $\{\mathcal B_{0,i,j,\ell}\}_{\ell=1}^{L_{i,j}} \subset T^*\!M $ are balls with radii
$\mathcal R_{0,i,j,\ell}=\tfrac{1}{2}R_{0,i,j,\ell}$ with $R_{0,i,j,\ell}$ as in \eqref{e:radii}.
{Let $\rho_{0,i}\in A_0$ be the center of $B_{0,i}$} and fix $j\in \{1, \dots, N_i\}$. To prove the claim in \eqref{e:goalD} fix $t_{\!_{\rho_{0,i}}} \in I_{0,i,j}$ so that $\varphi_{t_{\!_{\rho_{0,i}}}}(\rho_{0,i})\in \Lambda^0_{\SNH}(r)$. Observe that choosing coordinates near $\rho_{0,i}$ and $\varphi_{_{{t_{\!_{\rho_{0,i}}}}}}(\rho_{0,i})$, we have for $t$ near $t_{\rho_{0,i}}$ and $\rho$ near $\rho_{0,i}$,
$$
\varphi_{t}(\rho)=\varphi_t(\rho_{0,i})+d\varphi_t(\rho-\rho_{0,i})+O(|\rho-\rho_{0,i}|^2{e^{2\Lambda|t|}}).
$$
{If $|\rho-\rho_{0,i}|\leq R_{0,i}$} and $\rho \in \SNH$, this gives
$$
\varphi_{t}(\rho)=\varphi_t(\rho_{0,i})+J_t(\rho-\rho_{0,i})+O(R_{{0,i}}^2e^{2\Lambda|t|}).
$$
Now, let $\{\lambda_i(t)\}_{i=1}^{n-1}$ be the eigenvalues of $J_t$ ordered so that $|\lambda_i(t)|\leq |\lambda_{i+1}(t)|$. Then, modulo perturbations controlled by $R_0^2e^{2\Lambda |t|}$, the set $\varphi_t(B_{0,i})$ is an $(n-1)$-dimensional ellipsoid with axes of length $|\lambda_i(t)|R_{0,i}$.
Also, observe that
$$
\frac{e^{-\Lambda t}}{C_{_{\!\varphi}}}\leq |\lambda_1(t)|\leq |\lambda_{n-1}(t)|\leq C_{_{\!\varphi}} e^{\Lambda t},
$$
{where $C_{_{\!\varphi}}$ is as in \eqref{e:cphi}.}
Since ${\mathfrak{t}_0(\varepsilon)}\geq 1$, \eqref{e:D} gives $e^{-\Lambda {\mathfrak{t}_0(\varepsilon)} (\mathbf{D} -1)}<\frac{1}{C_{_{\!\varphi}}} $. This ensures that $e^{-\mathbf{D} \Lambda t} < \frac{e^{-\Lambda t}}{C_{_{\!\varphi}}}$ for all $t \geq {\mathfrak{t}_0(\varepsilon)}$.
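For completeness, here is one way to see this implication: \eqref{e:D} yields $e^{-\Lambda(\mathbf{D}-1)}<\frac{1}{C_{_{\!\varphi}}}$, and $C_{_{\!\varphi}}\geq 1$ since $\|d\varphi_0\|=1$ in \eqref{e:cphi}. Hence, for every $t\geq \mathfrak{t}_0(\varepsilon)\geq 1$,
\[
e^{-\mathbf{D}\Lambda t}=e^{-\Lambda t}\big(e^{-\Lambda(\mathbf{D}-1)}\big)^{t}<e^{-\Lambda t}\,C_{_{\!\varphi}}^{-t}\leq \frac{e^{-\Lambda t}}{C_{_{\!\varphi}}}.
\]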
Also, note that there exists a constant $\alpha_{_{\!M,p}}>0$ so that for all $i \in \{1, \dots, N\}$ and $\rho \in \varphi_{_{t_{\rho_{0,i}}}}(\Lambda^0_{B_{0,i}}(r))$ we have $d(\rho, \varphi_{t_{\rho_{0,i}}}(B_{0,i}) )\leq \alpha_{_{\!M,p}} e^{\Lambda t_{\rho_{0,i}}}r$. {Define $\digamma$ by }
$$
\digamma:= \max\{8\alpha_{_{\!M,p}}\,,\,{\mathbf{D}+1}\,,\,\digamma_{_{\!\!0}}\},
$$
and {from now on work with $R_0\leq \tfrac{1}{\digamma}e^{-\digamma\Lambda|T_0|}$.}
Then, {if $0<r<\tfrac{1}{\digamma}e^{-\digamma\Lambda T_0}r_0$,} we have that $r$ is small enough so that $\alpha_{_{\!M,p}}e^{\Lambda T_0}r \leq \frac{1}{8}e^{-\mathbf{D} \Lambda T_0}r_0$. In particular, $\alpha_{_{\!M,p}} e^{\Lambda t_{\rho_{0,i}}}r<\frac{1}{8}e^{-\mathbf{D} \Lambda t_{0,i,j}}R_{0,i}$ {for all $i \in \{1, \dots, N\}$} and there are points $\{q_\ell\}_{\ell=1}^{{L_{i,j}}}\subset \varphi_{_{t_{\rho_{0,i}}}}(B_{0,i})$ so that
\begin{equation}\label{e:D union}
\varphi_{_{t_{\rho_{0,i}}}}(\Lambda^0_{B_{0,i}}(r))\subset \bigcup_{\ell=1}^{{L_{i,j}}} \mc{B}(q_{\ell}, \tfrac{1}{8}e^{- \mathbf{D} \Lambda t_{0,i,j}}R_{0,i}),
\end{equation}
where the balls in the right hand side are balls in $T^*\!M $.
Furthermore,
\begin{align*}
\vol(\varphi_{t_{\rho_{0,i}}}(B_{0,i}))&\leq \vol(B_{0,i}) ( |\det (J_{t_{\rho_{0,i}}})|+ C_{_{\!M,p}}R_0^2e^{2\Lambda t_{\rho_{0,i}}}) \\
&\leq C_{n} R_{0,i}^{n-1}( {f(t_{\rho_{0,i}})}+ C_{_{\!M,p}}R_0^2e^{2\Lambda t_{\rho_{0,i}}})
\end{align*}
for some $C_{n}>0$ and $C_{_{\!M,p}}>0$.
{Next, adjust $\digamma$ so that $\digamma^2>C_\varphi C_{_{\!M,p}}$. Then, since $f(|t|) \geq \frac{1}{C_\varphi} e^{-\Lambda t}$,
\begin{align*}
\vol(\varphi_{t_{\rho_{0,i}}}(B_{0,i}))\leq {2} C_{n} R_{0,i}^{n-1} {f(t_{\rho_{0,i}})}.
\end{align*}}
In addition, {since $ t_{0,i,j}\leq t_{\rho_{0,i}}$}, the points $\{q_\ell\}_{\ell=1}^{L_{i,j}}$ can be chosen so that
{\begin{align}\label{e:volumebound}
{L_{i,j}C_{n} ({\tfrac{1}{8}}e^{-\mathbf{D} \Lambda t_{0,i,j}}R_{0,i})^{n-1}}
&\leq 2 \vol \Big(\varphi_{t_{\rho_{0,i}}}(B_{0,i}) \; \bigcap \; \cup_{\ell=1}^{{L_{i,j}}} \mc{B}(q_{\ell}, {\tfrac{1}{8}}e^{- \mathbf{D} \Lambda t_{0,i,j}}R_{0,i}) \Big) \notag\\
& \leq {4}{C_{n} R_{0,i}^{n-1}{f(t_{0,i,j})}} .
\end{align}}
{Note that this yields $L_{i,j} ({\tfrac{1}{8}}e^{-\mathbf{D} \Lambda t_{0,i,j}})^{n-1} \leq {4}f(t_{0,i,j})$.}
Since $|I_{0,i,j}|<1$,
it follows that {for every choice of indices $\ell$, $(i,j)$ we have}
\begin{align}\label{e:diam}
\text{diam}\Big(\bigcup_{{t\in I_{0,i,j}}}\varphi_{t-t_{\rho_{0,i}}}\big(\mc{B}(q_{\ell},\tfrac{1}{8}e^{-\mathbf{D}\Lambda t_{0,i,j}}R_{0,i})\big)\cap \Lambda^0_{_{\!\SNH}}(r)\Big)
&\leq {\frac{1}{8}C_{_{\!\varphi}}e^{\Lambda}} e^{-\mathbf{D}\Lambda t_{0,i,j} }R_{0,i}{\leq \frac{1}{8}R_{0,i} }
\end{align}
where in the last inequality, we use the definition of $\mathbf{D}$. Without loss of generality, we may assume that $C_{_{\!\varphi}}\geq 4$ (redefining $\mathbf{D}$ in the process) and hence that $\gamma=\frac{1}{4}C_{_{\!\varphi}}e^{\Lambda}\geq 1$ (see \eqref{e:radii}). This implies that we can find a point $\rho_{0,i,j,\ell}\in \SNH$ so that the ball $\mathcal B_{0,i,j,\ell}\subset T^*\!M $ of center $\rho_{0,i,j,\ell}$ and radius $\mathcal R_{0,i,j,\ell}=\tfrac{1}{2}\gamma e^{-\mathbf{D}\Lambda t_{0,i,j }}R_{0,i}=\tfrac{1}{2}R_{0,i,j,\ell}$ contains the set {in \eqref{e:diam}} whose diameter is being bounded. Thus, {by the definition \eqref{e:D def} of $D_{0,i,j}$ together with \eqref{e:D union}, we conclude that \eqref{e:goalD} and \eqref{e:Disk} hold}. Also, {by {the definition of $R_{0,i,j,\ell}$, the definition of $\alpha$,} and \eqref{e:volumebound}, for each choice of $(i,j)$ }
$$
\sum_{\ell=1}^{L_{i,j}} R_{0,i,j,\ell}^{n-1}
= {L_{i,j}} \gamma^{n-1} (e^{-\mathbf{D}\Lambda t_{0,i,j }}R_{0,i})^{n-1} \leq\alpha {f(t_{0,i,j})}R_{0,i}^{n-1},
$$
and hence~\eqref{e:radii} holds.
Therefore, {from the definition of $\mathfrak{t}_0(\varepsilon)$ it follows that }
\begin{equation}\label{e:sum of radii}
\sum_{i,j, \ell} R_{0,i,j,\ell}^{n-1} \leq \alpha \sum_{i,j}{f(t_{0,i,j})} R_{0, i}^{n-1} \leq \frac{4 \alpha }{\tau_{_{\!\text{inj}H}}}\,{\int_{{\mathfrak{t}_0(\varepsilon)}}^\infty f(s)\,ds} \sum_i R_{0, i}^{n-1}\leq {\varepsilon \sum_i R_{0, i}^{n-1}},
\end{equation}
where to get the second inequality we used that $t_{0,i,j+1}-t_{0,i,j}\geq \tau_{_{\!\text{inj}H}}/4$ implies
\[
\sum_j \tfrac{\tau_{_{\!\text{inj}H}}}{4} {f(t_{0,i,j})} \leq {\int_{{\mathfrak{t}_0(\varepsilon)}}^\infty f(s)ds} .
\]
Let $k=k(i,j,\ell)$ be an index reassignment and write $\tilde B_{1,k}=B_{0,i,j,\ell}$ {and $R_{1,k}=R_{0,i,j,\ell}$. Note that by the definition of $R_{0,i,j,\ell}$ in \eqref{e:radii} and the first inequality in \eqref{e:Tplus} we know $R_{1,k}\leq \tfrac{1}{4}R_0$.} {In addition, $\cup_{i,j}D_{0,i,j} \subset \cup_k \tilde B_{1,k}$}. {According to~\eqref{e:intervals} and~\eqref{e:D def} we proved that
\begin{equation}\label{e:a}
\bigcup_{t={\mathfrak{t}_0(\varepsilon)}-2\tau-r}^{T_0+2\tau+r} \varphi_t(\Lambda^0_{{A_1}\backslash\cup_k\tilde{B}_{1,k}}(r))\cap \Lambda_{{\SNH}\backslash\cup_k\tilde{B}_{1,k}}^0(r)=\emptyset.
\end{equation}
We claim that this implies
\begin{equation}\label{e:b}
\bigcup_{t={\mathfrak{t}_0(\varepsilon)}}^{T_0} \varphi_t(\Lambda^\tau_{{A_1}\backslash\cup_k\tilde{B}_{1,k}}(r))\cap \Lambda_{{\SNH}\backslash\cup_k\tilde{B}_{1,k}}^\tau(r)=\emptyset.
\end{equation}
Indeed, if $\rho$ belongs to the set in \eqref{e:b}, then there exist times $t\in [{\mathfrak{t}_0(\varepsilon)}-\tau-r,T_0+\tau+r]$, $s\in [-\tau-r, \tau+r]$, and points $q_0,q_1{\in \mc{H}_{\Sigma}}$ {(see \eqref{e:Hsig})} with
$$
{d(q_0, A_1\backslash\cup_k\tilde{B}_{1,k})<r},\qquad {d(q_1, {\SNH}\backslash\cup_k\tilde{B}_{1,k})<r}
$$
so that $\rho=\varphi_t(q_0)=\varphi_s(q_1)$. Let $\tau' \in [-\tau, \tau]$ be so that $|s-\tau'|<r$. Then, $\varphi_{-\tau'}(\rho)=\varphi_{s-\tau'}(q_1)=\varphi_{t-\tau'}(q_0)$ belongs to the set in \eqref{e:a} since $|s-\tau'|<r$ and $t-\tau'\in [{\mathfrak{t}_0(\varepsilon)}-2\tau-r,T_0+2\tau +r]$. This means that if the set in \eqref{e:a} is empty, then so is the set in \eqref{e:b}.} Finally, \eqref{e:b} implies that
\[{\Lambda_{A_1}^\tau(r)} \backslash \bigcup_k\Lambda_{\tilde B_{1,k}}^\tau(r)\]
is non-self-looping for times in $[{\mathfrak{t}_0(\varepsilon)}, T_0]$. {Furthermore, \eqref{e:sum of radii} now reads
\[
\sum_{k} R_{1,k}^{n-1} \leq \varepsilon \sum_i R_{0, i}^{n-1}.
\]}
\end{proof}
\begin{lemma}\label{l:nonlooping2}
Let $E\subset \SNH$ be a ball of radius $\delta>0$. Let {$\varepsilon_0>0$, $\mathfrak{t}_0:[\varepsilon_0, +\infty) \to [1, +\infty)$,} $T_0>0$, and $\digamma>0$, have the property {that $E$ can be $(\varepsilon_0, {\mathfrak{t}_0}, \digamma)$-controlled up to time $T_0$ in the sense of Definition \ref{d:control}.}
Let $m$ be a positive integer with $m<\frac{\log T_0-\log {\mathfrak{t}_0(\varepsilon_0)}}{\log 2}$,
\[
0\leq R_0\leq {\min}\Big\{{\tfrac{1}{\digamma}}e^{-{\digamma}\Lambda T_0}, {\tfrac{\delta}{10}}\Big\},
\qquad
0<r_1<{\tfrac{1}{5\digamma}}e^{-({\digamma}+2\mathbf{D})\Lambda T_0}R_0,
\]
and $E_0\subset E$ with $d(E_0, E^c)>R_0$. {{Let $0<\tau<\tfrac{1}{2}\tau_{_{\!\text{inj}H}}$} and } suppose that $\Lambda_{_{\rho_j}}^\tau(r_1)$ is a $({\mathfrak{D}},\tau, r_1)$ good cover of $\SNH$ and set
$$
\mc{E}:=\{j \in \{1, \dots, {N_{r_1}}\}: \Lambda_{\rho_j}^\tau(r_1)\cap \Lambda^\tau_{E_0}(\tfrac{r_1}{5})\neq \emptyset\}.
$$
Then, there exist $C_{_{\!M,p}}>0$ depending only on $(M,p)$ and sets $\{\mc{G}_\ell\}_{\ell =0}^m\subset \{1,\dots, N_{r_1}\}$, $\mc{B}\subset \{1,\dots, N_{r_1}\}$ so that
\[\mc{E}\;\subset\; \mc{B}\cup \displaystyle\bigcup_{\ell=0}^m\mc{G}_\ell,\]
\begin{align}
&\bullet \;\bigcup_{i\in \mc{G}_\ell}\Lambda_{\rho_i}^\tau(r_1)\text{\; is \;} [\mathfrak{t}_0(\varepsilon_0),2^{-\ell}T_0]\text{\; non-self-looping {for every $\ell \in\{0, \dots, m\}$},} \label{e:nsl} \\
&\bullet\; |\mc{G}_\ell|\leq C_{_{\!M,p}}{\mathfrak{D}}\varepsilon_0^\ell {\delta^{n-1}} r_1^{1-n} \;\;\; {\text{for every}\;\; \ell \in\{0, \dots, m\}}, \label{e:count good}\\ \ \medskip
&\bullet\; |\mc{B}|\leq C_{_{\!M,p}}{\mathfrak{D}} \varepsilon_0^{m+1}{\delta^{n-1}} r_1^{1-n}. \label{e:count bad}
\end{align}
\end{lemma}
\begin{proof}
Choose balls $\{B_{0,i}\}_{i=1}^N$ centered in $E_0$ so that $E_0\subset \bigcup_{i=1}^NB_{0,i}$ where $B_{0,i}$ has radius $R_{0,i}=R_0$ built so that {$NR_0^{n-1}\leq C_{n}\delta^{n-1}$}. {This can be done since $R_0<\frac{\delta}{10}$.} {Let $ r_0:=e^{-{2\mathbf{D} \Lambda T_0} }R_0$.} Since $E$ can be $(\varepsilon_0, \mathfrak{t}_0, \digamma)$-controlled up to time $T_0$, for \[0< r<\tfrac{1}{\digamma}e^{-{\digamma}\Lambda T_0}r_0{=}\tfrac{1}{\digamma}e^{-(\digamma+2\mathbf{D})\Lambda T_0}R_0\] there are balls $\{\tilde{B}_{1,k}\}_k\subset \SNH $ of radii {$\{R_{1,k}\}_k\subset[0, \tfrac{1}{4}R_0]$}, so that
\[
{\inf_k}R_{1,k}\geq e^{-\mathbf{D}\Lambda T_0} R_0 \geq r_0,
\qquad \qquad
\sum_{k}R_{1,k}^{n-1}\leq \varepsilon_0 N{R_{0}^{n-1}},
\qquad \qquad
\]
and with
$G_0:= \Lambda^\tau_{E_0\backslash \tilde E_1}(r)$ non-self-looping for times in $[{\mathfrak{t}_0(\varepsilon)}, T_0],$
where we have set $\tilde E_1=\cup_k \tilde B_{1,k}$. {Note that we may assume that $E_0\cap \tilde B_{1,k}\neq \emptyset$ for all $k$.} Now, since $R_{1,k}\leq \frac{1}{4}R_0$, the ball $\tilde{B}_{1,k}$ is centered at a distance no more than $\frac{1}{4}R_0$ from $E_0$. So, letting $E_1:=\cup_k B_{1,k}$ with $B_{1,k}$ the ball of radius $2R_{1,k}$ with the same center as $\tilde{B}_{1,k}$, we have
$$
d(E_1,E^c)\geq {d(E_0,E^c)}-\tfrac{3}{4}R_0>(1-\tfrac{3}{4})R_0.
$$
{Next, we set $T_1:=2^{-1}T_0$ and use that $E_0$ can be {$(\varepsilon_0, \mathfrak{t}_0, \digamma)$}-controlled up to time $T_1$ (indeed up to time $2T_1$). By definition $E_1\subset \bigcup_k B_{1,k}$ and $R_0\leq {\digamma^{-1}}e^{-{\digamma}\Lambda T_0}\leq {\digamma^{-1}}e^{-{\digamma}\Lambda T_1}$.
Therefore, since $0<r<{\digamma^{-1}}e^{-{\digamma}\Lambda T_0}r_0<{\digamma}^{-1}e^{-{\digamma}\Lambda T_1}r_0$, there are balls $\{\tilde B_{2,k}\}_k \subset {\SNH}$ of radii {$0<R_{2,k}\leq \tfrac{1}{4^2}R_0$} with
\begin{equation}\label{e:inf}
{\inf_k}R_{2,k}\geq e^{-{\mathbf{D}}\Lambda T_1} {\inf_i}R_{1,i}
\qquad \text{and}\qquad
\sum_{k}R_{2,k}^{n-1}\leq \varepsilon_0\sum_{k}R_{1,k}^{n-1} \leq \varepsilon_0^2 N{R_{0}^{n-1}},
\end{equation}
so that $G_1:= \Lambda^\tau_{E_1\backslash \tilde E_2}(r)$
is non-self-looping for times in $[{\mathfrak{t}_0(\varepsilon)}, T_1],$ where we have set $\tilde E_2=\cup_k \tilde B_{2,k}$. {Since we may assume that $E_1\cap \tilde{B}_{2,k}\neq \emptyset$ for all $k$, the balls $\tilde{B}_{2,k}$ are centered at a distance smaller than $\tfrac{1}{4^2}R_0$ from $E_1$. In particular, letting $E_2=\cup_{k}B_{2,k}$ where $B_{2,k}$ is the ball of radius $2R_{2,k}$ centered at the same point as $\tilde{B}_{2,k}$, we have}
$$
d(E_2, E^c)\geq d(E_1,E^c)-\tfrac{3}{4^{2}}R_0>R_0\big(1-\tfrac{3}{4}-\tfrac{3}{4^2}\big).
$$
}
Continuing this way we claim that one can construct a collection of sets $\{G_\ell\}_{\ell=1}^m \subset \Lambda_{E}^\tau(r)$ so that
\begin{enumerate}
\item[A)] $G_\ell$ is non-self-looping for times in $[{\mathfrak{t}_0(\varepsilon)}, T_\ell]$ with $T_\ell=2^{-\ell}T_0$.
\item[B)] There are balls $B_{\ell, k}, \tilde B_{\ell, k} \subset \SNH$ {{centered at $\rho_{\ell,k}\in{E}$}} of radii $2R_{\ell,k}$, $ R_{\ell,k}$ respectively so that
\[G_\ell= \Lambda_{E_\ell\backslash\tilde E_{\ell+1}}^\tau(r),\]
where
$E_\ell= \bigcup_k B_{\ell, k}$ and $\tilde E_{\ell} = \bigcup_k \tilde B_{\ell,k}$.
\item[C)] For all $\ell \geq 1$, the radii satisfy {$\sup_k R_{\ell,k}\leq \tfrac{1}{4^\ell}R_0$, }
\begin{equation}\label{e:upperbound}
{\inf_k}R_{\ell, k} \geq e^{-{2{\mathbf{D}} \Lambda T_0} }R_0=r_0 \qquad \text{and} \qquad \sum_k R_{\ell,k}^{n-1} \leq \varepsilon_0^\ell N R_0^{n-1}.
\end{equation}
\end{enumerate}
The claim in (A) {follows by construction of $G_{\ell}$. For the claim in (B), we only need to check that the balls $B_{\ell,k}$ are centered in $E$. For this, note} that since $R_{\ell,k} \leq \frac{1}{4^\ell}R_0$, by induction
$$
d(E_\ell, E^c)>d(E_{\ell-1}, E^c)-\tfrac{3}{4^\ell}R_0> R_0\Big(1-{\sum_{j=1}^\ell} \tfrac{3}{4^j}\Big)\geq \frac{1}{{4^\ell}}R_0.
$$
\begin{remark}
{Note that this actually gives $E_\ell \subset E$ and so all of $B_{\ell,k}$ is inside $E$ (not just its center).}
\end{remark}
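{For completeness, we record that the geometric sum in the display preceding the remark evaluates exactly:
\[
1-\sum_{j=1}^{\ell}\tfrac{3}{4^{j}}
=1-3\cdot \frac{\tfrac{1}{4}\big(1-4^{-\ell}\big)}{1-\tfrac{1}{4}}
=\frac{1}{4^{\ell}},
\]
so the last inequality there in fact holds with equality.}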
We proceed to justify the first inequality in \eqref{e:upperbound}.
The construction yields that
$\inf_k R_{\ell,k}\geq e^{-{\mathbf{D}}\Lambda T_\ell}\inf_{i}R_{\ell-1,i}$
for every $\ell$ {(see \eqref{e:inf})}.
Therefore, {since $T_\ell=2^{-\ell}T_0$}, we obtain
\[
\inf_k R_{\ell, k} \geq \prod_{{j=0}}^\ell e^{-{\mathbf{D}}\Lambda \frac{T_0}{{2^{j}}}} R_0 = e^{-{{\mathbf{D}} \Lambda T_0(2-{\frac{1}{2^{\ell}}})} }R_0\geq e^{-{2{\mathbf{D}} \Lambda T_0} }R_0.
\]
The construction also yields that $\sum_{k} R_{\ell,k}^{n-1}\leq \varepsilon_0 \sum_k R_{\ell-1,k}^{n-1}$
for all $\ell$. Therefore, the upper bound \eqref{e:upperbound} on the sum of the radii follows by induction. Indeed,
$$
\sum_k R_{\ell,k}^{n-1}\leq \varepsilon_0^\ell \sum_k R_{0,k}^{n-1}=\varepsilon_0^\ell N R_0^{n-1}.
$$
{Set $r:=5r_1$ in the above argument, and define }
$$
\mc{G}_\ell:=\{{i \in \mathcal E} : \Lambda_{\rho_i}^\tau(r_1)\subset G_\ell\},\qquad \mc{B}:=\mc{E}\setminus \bigcup_{{\ell=0}}^m\mc{G}_\ell.
$$
Then, since $G_\ell$ is non-self-looping for times in $[{\mathfrak{t}_0(\varepsilon_0)},2^{-\ell}T_0]$,~\eqref{e:nsl} holds. Furthermore, $\mc{E}\;\subset\; \mc{B}\cup \bigcup_{\ell=0}^m\mc{G}_\ell$ by construction.
We proceed to prove~\eqref{e:count good}.
Since the cover by tubes can be decomposed into ${\mathfrak{D}}$ sets of disjoint tubes,
$$
{|\mc{G}_\ell|\leq {\mathfrak{D}}\frac{\vol(G_\ell \cap \Lambda^\tau_{E_0}(r_1))}{\min_{i}\vol(\Lambda_{\rho_i}^\tau(r_1))} }\leq C_{_{\!M,p}}{\mathfrak{D}}r_1^{1-n}\sum_k{R_{\ell,k}^{n-1}}\leq C_{_{\!M,p}}{\mathfrak{D}}r_1^{1-n}\varepsilon_0^\ell N R_0^{n-1},
$$
{for some $C_{_{\!M,p}}>0$ that depends only on $(M,p)$}. {Then, \eqref{e:count good} follows since $NR_0^{n-1}\leq C_n \delta^{n-1}$.}
{The rest of the proof is dedicated to obtaining~\eqref{e:count bad}}.
{For each $\ell$ note that $E_\ell\subset (G_\ell\cup \tilde{E}_{\ell+1})$ and $\Lambda_{E_\ell}^\tau(\frac{r_1}{5})\subset \Lambda^\tau_{_{\!\SNH}}(\frac{r_1}{5}) \subset \cup_i\Lambda_{\rho_i}^\tau(r_1)$. {We claim that} for every pair of indices $(\ell,i)$ with $ \Lambda_{E_\ell}^\tau(\frac{r_1}{5}) \cap \Lambda_{\rho_i}^\tau(r_1)\neq \emptyset$, either
$$
\Lambda_{\rho_i}^\tau(r_1)\subset \Lambda_{{E_\ell\setminus \tilde{E}_{\ell+1}}}^\tau ({5r_1})\
\qquad\text{ or }\qquad \Lambda_{\rho_i}^\tau(r_1)\cap { \Lambda_{{\tilde{E}_{\ell+1}}}^\tau(\tfrac{r_1}{5})}\neq \emptyset.
$$
Indeed, suppose that $\Lambda_{\rho_i}^\tau(r_1)\cap{ \Lambda_{{\tilde{E}_{\ell+1}}}^\tau(\tfrac{r_1}{5})}=\emptyset$. Then, since $\Lambda_{E_\ell}^\tau(\tfrac{r_1}{5}) \cap \Lambda_{\rho_i}^\tau(r_1)\neq \emptyset$, there exists $q\in\mc{H}_{\Sigma}\cap \Lambda_{\rho_i}^\tau(r_1)$ so that $d(q,\rho_i)<r_1$, $d(q,E_\ell)<\frac{r_1}{5}$, and $d(q,\tilde{E}_{\ell+1})\geq \frac{r_1}{5}$. In particular, $d(q,E_\ell\setminus\tilde{E}_{\ell+1})<\frac{r_1}{5}$. Now, suppose that $q_1\in \mc{H}_{\Sigma}\cap \Lambda_{\rho_i}^\tau(r_1)$. Then,
\[
d(q_1,E_\ell\setminus \tilde{E}_{\ell+1})\leq d(q_1,\rho_i)+d(\rho_i,q)+d(q,E_\ell\setminus {\tilde{E}_{\ell+1}})<\tfrac{11}{5}r_1<{5r_1}.
\]
In particular, $\Lambda_{\rho_i}^\tau(r_1)\subset \Lambda_{E_\ell \setminus \tilde{E}_{\ell+1}}^\tau({5r_1})$ as claimed.
{Now, suppose that $\Lambda_{\rho_i}^\tau(r_1)\cap {\Lambda_{\tilde{E}_{\ell+1}}^\tau}(\tfrac{r_1}{5})\neq \emptyset$. Then, since} $r_1<\frac{r_0}{5}$ {and $R_{\ell,k}\geq r_0$}, we have
$$\Lambda_{\rho_i}^\tau(r_1)\cap\mc{H}_\Sigma \; \subset E'_{\ell+1}$$
where $E'_{\ell+1}=\cup_{j}\frac{3}{2}\tilde{B}_{\ell+1,j}$.}
Observe then that {for all $\ell$}
{
\begin{equation}
\label{e:indStep}
\Lambda_{E_\ell}^\tau(\tfrac{r_1}{5})\cap \Big( \bigcup_{i\in \mc{G}_\ell}\Lambda_{\rho_i}^\tau(r_1)\Big)^c\;\;\subset \;\;\Lambda_{E'_{\ell+1}}^\tau(\tfrac{r_1}{5}).
\end{equation}}
{We argue by induction {on $k\geq 1$}: assume that
$
\Lambda_{E_0}^\tau(\tfrac{r_1}{5})\cap \Big( \bigcup_{\ell=0}^{{k}-1}\bigcup_{i\in \mc{G}_\ell}\Lambda_{\rho_i}^\tau(r_1)\Big)^c\subset \Lambda_{E'_{{k}}}^\tau(\tfrac{r_1}{5}).
$
{Note that the base case $k=1$ is covered by setting $\ell=0$ in \eqref{e:indStep}.}
Then, using \eqref{e:indStep} with $\ell=k$ together with the inclusions $\tilde{E}_{k}\subset E'_{k}\subset E_{k}$ (indeed, the balls defining each set have the same centers and radii given respectively by $R_{k,j}$, $\frac{3}{2}R_{k,j}$ and $2R_{k,j}$), we obtain
$$
\Lambda_{E_0}^\tau(\tfrac{r_1}{5})\cap \Big( \bigcup_{\ell=0}^{{k}}\bigcup_{i\in \mc{G}_\ell}\Lambda_{\rho_i}^\tau(r_1)\Big)^c\;\;\subset \;\;\Lambda_{E'_{{k}+1}}^\tau(\tfrac{r_1}{5}).
$$}
In particular, if $i\in \mc{B}$, then
$
{\Lambda_{E_0}^\tau(\tfrac{r_1}{5})\cap}\Lambda_{\rho_i}^\tau(r_1)\subset \Lambda_{{{E}}_{m+1}}^\tau({\tfrac{r_1}{5}}).
$
Therefore,
$$
|\mc{B}|\leq C_{_{\!M,p}}{\mathfrak{D}}r_1^{1-n}\sum_i{R_{m+1,i}^{n-1}}\leq C_{_{\!M,p}}{\mathfrak{D}}r_1^{1-n}\varepsilon_0^{m+1} N R_0^{n-1},
$$
for some $C_{_{\!M,p}}$ that depends only on $(M,p)$. {This proves \eqref{e:count bad} since $NR_0^{n-1}\leq C_n \delta^{n-1}$.}
\end{proof}
\renewcommand{\SNH}{S\!N^*\!H}
\renewcommand{\Lambda^\tau_{_{\!\SNH}}}{\Lambda^\tau_{\!S\!N^*\!H}}
\addcontentsline{toc}{section}{\quad\;\,\bf{Construction of covers in concrete settings}}
\section{No conjugate points: Proof of Theorems \ref{t:noConj2} and~\ref{t:noConj1}}
\label{s:dynNoConj}
{We dedicate this section to the proofs of Theorems \ref{t:noConj2} and~\ref{t:noConj1}. We work with the Hamiltonian $p:T^*\!M \to {\mathbb R}$ given by $p(x,\xi)=|\xi|_{g(x,\xi)}-1$. The Hamiltonian flow $\varphi_t$ associated with it is the geodesic flow, and for any $H \subset M$ we have $\SNH=S\!N^*\!H$.
Let $\varepsilon>0$, $t_0 \in {\mathbb R}$, and $x\in M$. The study of the behavior of the geodesic flow near $S\!N^*\!H$ under the no conjugate points assumption hinges on the fact that if there are no more than $m$ conjugate points (counted with multiplicity) along $\varphi_t$ for $t\in (t_0-2\varepsilon,t_0+2\varepsilon)$, then for every $\rho\in S^*_xM$ there is a subspace $\mathbf{V_{\!\rho}} \subset T_\rho S^*_xM$ of dimension $n-1-m$ so that for all ${\bf v}\in\mathbf{V_{\!\rho}}$,
$$
|(d\varphi_{t})_\rho {\bf v}|\leq {(1+ C\varepsilon^{-{2}})^\frac{1}{2}}|(d\pi \circ d \varphi_t)_\rho {\bf v}|,\qquad t\in (t_0-\varepsilon, t_0+\varepsilon).
$$
Here, the constant $C>0$ is independent of the choice of $\varepsilon$ and $x$.
In particular, this yields that the restriction $(d\pi \circ d\varphi_t)_\rho: \mathbf{V_{\!\rho}}\to T_{\pi \varphi_t(\rho)}M$ is invertible onto its image with
\begin{equation}\label{e:normbound}
\|(d\pi \circ d\varphi_t)_\rho^{-1}\|\leq {(1+ C\varepsilon^{-{2}})^\frac{1}{2}}\|(d\varphi_{-t})_\rho\|.
\end{equation}
The proof of this result is very similar to that of \cite[Proposition 2.7]{Eberlein73}. We include it in Appendix~\ref{s:prelim} as Proposition \ref{l:panda}.
In what follows we continue to write $F:T^*\!M \to \mathbb{R}^{n+1}$ for the defining function of $S\!N^*\!H$ satisfying \eqref{e:defFunction} and we continue to work with
$$
\psi:\mathbb{R}\times T^*\!M \to \mathbb{R}^{n+1}, \qquad \qquad \psi(t,\rho)=F\circ \varphi_t(\rho).
$$
The following lemma is dedicated to finding a suitable left inverse for $d\psi$.
}
\begin{lemma}
\label{l:prelimNoConj}
{Suppose $k>\frac{n+1}{2}$, $\Lambda>\Lambda_{\max}$ and that there exist $t_0 \in {\mathbb R}$ and $a > 0$ with
{
$$
d(H, \mc{C}_{_{\!H}}^{2k-n-1,r_{t_0},t_0})>r_{t_0},
$$}
where $r_t=\tfrac{1}{a}e^{-a|t|}$.
Then, if $\rho_0\in S\!N^*\!H$ and
$$
d(S\!N^*\!H, \varphi_{t_0}(\rho_0))< r_{t_0},
$$
there exists $ {\bf w}_0 \in T_{\rho_0}S\!N^*\!H$ so that the restriction}
$$
d\psi_{(t_0,\rho_0)}: {\mathbb R} \partial_t \times \mathbb{R} {\bf w}_0\to T_{\psi(t_0,\rho_0)}\mathbb{R}^{n+1}
$$
has left inverse $L_{(t_0,\rho_0)}$ with
\[
\|L_{(t_0,\rho_0)}\|\leq C_{_{\! M,g}}\, {ae^{(a+\Lambda)|t_0|} }
\]
where $C_{_{\! M,g}}>0$ is a constant depending only on $(M,g)$.
\end{lemma}
\begin{proof}
Without loss of generality, we may assume that $F=(f_1, \dots, f_k, f_{k+1}, \dots, f_{n+1})$ with $(f_1, \dots, f_k)=\tilde F \circ \pi$ where $\tilde F: M \to {\mathbb R}^{k}$ defines $H$ and $\pi:T^*\!M \to M$ is the canonical projection.
In addition, we may assume that $d\tilde F_y$ has {right} inverse ${R}_{_{\! \tilde F,y}}$ with $\|{R}_{_{\! \tilde F,y}}\|\leq 2$ for all $y$ near $H$.
Next, define
\[
\tilde \psi:{\mathbb R} \times T^*\!M \to {\mathbb R}^k, \qquad \qquad \tilde \psi(t, \rho):= \tilde F \circ \pi \circ \varphi_t (\rho).
\]
We claim that there exists $ {\bf w}_0 \in T_{\rho_0} S\!N^*\!H$ so that
$$
d\tilde \psi_{(t_0,\rho_0)}: \mathbb{R}\partial_t\times \mathbb{R} {\bf w}_0 \to \mathbb{R}^{k}
$$
is injective and has a left inverse bounded by $C_{_{\!M,g}}\,a e^{(a+\Lambda)|t_0|}$. Note that this is sufficient, as it produces a left inverse for $d\psi$ itself.
Observe that for $s\in \mathbb{R}$, $\rho\in S\!N^*\!H$, and $ {\bf w} \in T_\rho S\!N^*\!H$,
\begin{equation}\label{e:split}
d\tilde \psi_{(t,\rho)}(s\partial_t, {\bf w} )
=d(\tilde F\circ \pi)_{\varphi_t(\rho)} \big( s\, H_p+ ({d\varphi_t})_\rho\, {{\bf w}}\big).
\end{equation}
Note also that since $H$ is conormally transverse for $p$, there exists a neighborhood $W\subset T^*\!M $ of $S\!N^*\!H$ so that for $\varphi_t(\tilde \rho)\in W$,
\begin{equation}\label{e:split1}
\|d(\tilde F\circ \pi)_{\varphi_t(\tilde \rho)} H_p\|\geq \frac{1}{2}.
\end{equation}
In particular, the restriction
$$
d\tilde \psi_{(t_0,\rho_0)}:\mathbb{R}\partial_t\to \mathbb{R}^k
$$
has a left inverse bounded by $2$.
We proceed to find $ {\bf w}_0 \in T_{\rho_0}S\!N^*\!H$ as claimed. The assumption $d(H, \mc{C}_H^{2k-n-1,r_{t_0},t_0})>r_{t_0}$ implies that for all $x \in H$ and every unit speed geodesic $\gamma$ with $\gamma(0)=x$, there are no more than $m=2k-n-2$ conjugate points to $x$ (counted with multiplicity) along $\gamma|_{(t_0-r_{t_0}, t_0+r_{t_0})}$ whenever $d(\gamma(t_0),H)<r_{t_0}$. In particular, since $d(\varphi_{t_0}(\rho_0),S\!N^*\!H)<r_{t_0}$, we have $d(\pi(\varphi_{t_0}(\rho_0)),H)<r_{t_0}$. Therefore, setting $\varepsilon=r_{t_0}$ in \eqref{e:normbound} or Proposition~\ref{l:panda} in the Appendix,
we have that there is a $2(n-k)+1$ dimensional subspace $\mathbf{V}_{\!\rho_0}\subset T_{\rho_0} S^*_{x_0}M$ so that $d\pi\circ d\varphi_{t_0}|_{\mathbf{V}_{\!\rho_0}}$ is invertible onto its image with
\begin{equation}\label{e:ub on norm}
\|(d\pi \circ d\varphi_{t_0}|_{\mathbf{V}_{\!\rho_0}})^{-1}\|\leq {(1+\tilde C_{_{\!M,g}} a^2 e^{2a|t_0|})^{\frac{1}{2}}}\|d\varphi_{-t_0}\|\leq C_{_{\!M,g}}a e^{(a+\Lambda) |t_0|},
\end{equation}
for some $C_{_{\!M,g}}, {\tilde C_{_{\!M,g}}}>0$ depending only on $(M,g)$, and where $x_0:=\pi( \rho_0)$. Note that to apply Proposition~\ref{l:panda} we need $m \geq 0$, which is equivalent to asking $k> \frac{n+1}{2}.$
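{Explicitly, for integer $k$,
\[
m=2k-n-2\geq 0
\quad\Longleftrightarrow\quad 2k\geq n+2
\quad\Longleftrightarrow\quad k>\tfrac{n+1}{2}.
\]}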
Let
\[
V
=d(\pi \circ \varphi)_{(t_0, \rho_0)}\big({\mathbb R}\partial_t \times (T_{\rho_0} (SN_{x_0}^*H)\cap{\mathbf{V}_{\!\rho_0}})\big).
\]
{Note that since $\dim \mathbf{V}_{\!\rho_0}= 2(n-k)+1$, $\dim T_{\rho_0} SN_{x_0}^*H= k-1$, and $\dim S^*_{x_0}M=n-1$, we know that $\dim (T_{\rho_0} SN_{x_0}^*H\cap{\mathbf{V}_{\!\rho_0}}) \geq n-k+1 $ and so $\dim V\geq {n-k+2}$. }Also, the restriction
\[
d(\pi \circ \varphi)_{(t_0, \rho_0)}
: \mathbb{R}\partial_t \times (T_{\rho_0} (SN_{x_0}^*H){\cap\mathbf{V}_{\!\rho_0}})\to V
\]
is invertible with inverse $\tilde L_{(t_0,\rho_0)}$ satisfying
$$
\| \tilde L_{(t_0,\rho_0)}\|\leq C_{_{\!M,g}}a e^{(a+\Lambda)|t_0|}.
$$
Next, there exists a neighborhood {$U\subset M$ of $H$ so that for $y\in U$}, $d\tilde F_y:T_{y}M\to \mathbb{R}^k$ is surjective with right inverse $R_y$, {which by assumption is bounded by $2$}. Furthermore, we may assume without loss of generality that for $\rho\in T^*U\cap W$, $d\pi_\rho H_p$ lies in the range of $R_{\pi(\rho)}$.
Since $\dim (\operatorname{ran} R_{\pi(\varphi_{t_0}(\rho_0))})=k$, $\dim V\geq {n-k+2}$, and both $V$ and $\operatorname{ran} R_{\pi(\varphi_{t_0}(\rho_0))}$ are contained in $T_{\pi(\varphi_{t_0}(\rho_0))}M$, we know that
$$
\dim (\operatorname{ran} R_{\pi(\varphi_{t_0}(\rho_0))} \cap V)\geq 2.
$$
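{Indeed, since both subspaces sit inside $T_{\pi(\varphi_{t_0}(\rho_0))}M$, the dimension formula gives
\[
\dim (\operatorname{ran} R_{\pi(\varphi_{t_0}(\rho_0))} \cap V)
\geq \dim \operatorname{ran} R_{\pi(\varphi_{t_0}(\rho_0))}+\dim V-n
\geq k+(n-k+2)-n=2.
\]}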
In particular, there exists {$ {\bf w}_0 \in T_{\rho_0}(SN_{x_0}^*H)\cap \mathbf{V}_{\!\rho_0} \backslash \{0\}$}, so that
$$
(d\pi \circ d\varphi_{t_0})_{\rho_0} {\bf w}_0 \in \operatorname{ran} R_{\pi(\varphi_{t_0}(\rho_0))}.
$$
\begin{remark}
{ Note that having $\dim (\operatorname{ran} R_{\pi(\varphi_{t_0}(\rho_0))} \cap V)\geq 1$ would not have been sufficient as $\partial_t$ is a component we cannot ignore.}
\end{remark}
Then, there exists $\mathbf{x}\in \mathbb{R}^k$ so that
$$
(d\pi \circ d\varphi_{t_0})_{\rho_0} {\bf w}_0 =R_{\pi(\varphi_{t_0}(\rho_0))}\mathbf{x}.
$$
Since $\sup_{y\in U}\|R_y\|\leq 2$,
$$
\|(d\pi \circ d\varphi_{t_0})_{\rho_0} {\bf w}_0 \|\leq 2\|\mathbf{x}\|
$$
and{ by \eqref{e:ub on norm}} we have
$$
\| {\bf w}_0 \|\leq C_{_{\!M,g}}a e^{(a+\Lambda )|t_0|}\|\mathbf{x}\|,
$$
which implies the desired claim since $(d\tilde F\circ d\pi \circ d\varphi_{t_0})_{\rho_0} {\bf w}_0 =\mathbf{x}$ and so
\begin{equation}\label{e:split2}
\|d(\tilde F\circ \pi)_{\varphi_{t_0}(\rho_0)} ( (d\varphi_{t_0})_{\rho_0} {\bf w}_0)\|\geq (C_{_{\!M,g}}a)^{-1} e^{-(a+\Lambda )|t_0|}{\|{\bf w}_0\|}.
\end{equation}
Combining \eqref{e:split1} and \eqref{e:split2} with \eqref{e:split} gives the desired bound on the left inverse for $d\tilde \psi$ restricted to $\mathbb{R}\partial_t \times \mathbb{R}{\bf w}_0$ {provided we impose $C_{_{\!M,g}} \geq 2$}.
\end{proof}
\noindent {\bf Proof of Theorem~\ref{t:noConj1}.}
Let $t_0>0$ and $a>{\delta_F^{-1}}$ be so that for $t\geq t_0$,
{\begin{equation}\label{e:toprove}
d\Big(H, \mc{C}_H^{2k-n-1,r_t,t}\Big)> r_t,
\end{equation}}
where $r_t=\tfrac{1}{a}e^{-at}.$
By Lemma~\ref{l:prelimNoConj}, for $t\geq t_0$, if $\rho\in S\!N^*\!H$ and $d(\varphi_t(\rho),S\!N^*\!H)<\tfrac{1}{a}e^{-at}$, then there exists a ${\bf w}={\bf w}(t, \rho)\in T_{\rho}S\!N^*\!H$ so that $d\psi$ restricted to $\mathbb{R}\partial_t \times \mathbb{R}{\bf w}$ has left inverse $L_{(t,\rho)}$ with
\[
\|L_{(t,\rho)}\|\leq C_{_{\! M,g}}\, a e^{(a+\Lambda) |t|},
\]
for some $C_{_{\! M,g}}>0$ and any $\Lambda>\Lambda_{\max}$.
{For the purposes of the proof of Theorem~\ref{t:noConj1} fix $\Lambda=2\Lambda_{\max}+1$.
Let $c:= a C_{_{\! M,g}}$, $\beta:= a+\Lambda$, and let $t_1=t_1(a,t_0)\geq t_0$ be so that
\[
\|L_{(t,\rho)}\|\leq c e^{\beta|t|} \qquad t\geq t_1.
\]
}In particular, we may cover $S\!N^*\!H$ by finitely many balls $\{B_i\}_{i=1}^K$ of radius $R>0$ (independent of $h$) so that
$KR^{n-1}<C_{n}\vol(S\!N^*\!H),$ and the hypotheses of Proposition~\ref{p:ballCover} hold for each $B_i$ choosing $\tilde c={a}^{-1}$.
Let {$\alpha_1=\alpha_1(M,p)$ and $\alpha_2=\alpha_2(M,g,a ,\delta_F)$} be as in Proposition~\ref{p:ballCover}.
Fix $0<\varepsilon<\frac{1}{4}$ and set
\[r_0:=h^{2\varepsilon}, \qquad r_1:=h^\varepsilon, \qquad r_2:=\tfrac{2}{\alpha_1}h^\varepsilon.\]
Let {$$T_0(h)=b \log h^{-1}$$ with $b>0$ to be chosen later}. Then, the assumptions in Proposition~\ref{p:ballCover} hold provided
$$
h^{\varepsilon}< \min \Big\{{\tfrac{2}{3\alpha_1}} e^{-\Lambda T_0}\;,\;\tfrac{\alpha_1\alpha_2}{2} e^{- \gamma T_0}, {\tfrac{\alpha_1 R}{2}}\Big\}
$$
where {$\gamma = \max\{a, 3\Lambda+2\beta\}=5\Lambda+2a$}. In particular, {if we set $\alpha_3:=\min\{\tfrac{2}{3\alpha_1} ,\tfrac{\alpha_1\alpha_2}{2} \}$, the assumptions in Proposition~\ref{p:ballCover} hold provided {$h<\big(\frac{\alpha_1 R}{2}\big)^{\frac{1}{\varepsilon}}$} and
\begin{equation}
\label{e:t0Temp}
T_0(h)< \frac{ \varepsilon}{ \gamma} \log h^{-1} + \frac{\log \alpha_3}{ \gamma}.
\end{equation}}
We will choose $T_0$ satisfying~\eqref{e:t0Temp} later.
{Let $0<\tau_0<\tau_{_{\!\text{inj}H}}$, ${R_0=R_0(n,k,g,K_{_{\!H}})}>0$ be as in Theorem~\ref{t:coverToEstimate}. Note that $\tau_0=\tau_0(M,p,\tau_{_{\!\text{inj}H}})$. Also let $h_0=h_0(M,p)>0$ be the constant given by Theorem~\ref{t:coverToEstimate} and possibly shrink it so that $h_0<\big(\frac{\alpha_1 R}{2}\big)^{\frac{1}{\varepsilon}}$. }
Let {$\{\rho_j\}_j \subset S\!N^*\!H$ be so that $\{\Lambda^\tau_{_{\rho_j}}(h^\varepsilon)\}_j$} is a $({\mathfrak{D}_n},\tau_0,h^\varepsilon)$-good cover of $S\!N^*\!H$ (existence of such a cover follows from~\cite[Proposition 3.3]{CG18d}). {(See Remark \ref{r:D depends on n}.)}
Then, for each $i \in \{1, \dots, K\}$ we apply Proposition~\ref{p:ballCover} to obtain a cover of $\Lambda_{B_i}^{\tau_0}(h^{2\varepsilon})$ by tubes $\{\Lambda_{\rho_j}^{\tau_0}(h^\varepsilon)\}_{j=1}^{N_i}$ with $\rho_j\in B_i$ and so that $\{1, \dots, N_i\}=\mc{G}_i\cup\mc{B}_i$,
$$
\bigcup_{j\in \mc{G}_i}\Lambda_{\rho_j}^{\tau_0}(h^\varepsilon) \quad \text{is }\;\; [t_0,T_0(h)]\;\; \text{ non-self looping,}
$$
$$
h^{\varepsilon(n-1)}|\mc{B}_i|\leq {\bf{C}_0}\tfrac{2}{\alpha_1} \;h^\varepsilon \;R^{n-1}\; T_0{e^{4(2\Lambda+a)T_0}},
$$
where {${\bf{C}}_0={\bf{C}}_0(M,g,k,a)>0$}.
We choose {$b>0$} so that
{$
b < \frac{\varepsilon}{12(2\Lambda+a)}
$}
{and \eqref{e:t0Temp} is satisfied for all $h<h_0$.} {Note that this implies that $b=b(M,g,a,\delta_F)$.}
In particular, there exists $h_0=h_0(\tau_0, {\bf{C}}_0)$ so that for all $0<h<h_0$,
\begin{equation}\label{e:badsets}
h^{\varepsilon(n-1)}|\mc{B}_i|< h^{\frac{\varepsilon}{3}}R^{n-1}.
\end{equation}
We next apply Theorem~\ref{t:coverToEstimate} with $\delta:=2\varepsilon$ and $R(h):=h^\varepsilon$ (not to be confused with $R$). If needed, we shrink $h_0$ so that $5h^{2\varepsilon}\leq R(h)<R_0 $ for all $0<h<h_0$. {We let $\alpha<1-2\varepsilon$ and let $b$ be small enough so that $T_0(h) \leq 2\alpha T_e(h)$} for all $0<h<h_0$. We also let $\mc{B}=\cup_{i=1}^K\mc{B}_i$, and work with only one set of good indices $\mathcal G:=\mathcal I_h(w)\backslash \mathcal B$. We choose {$t_\ell(h)=t_1$} and $T_\ell(h)=T_0(h)$. Note that \eqref{e:badsets} gives
\[R(h)^{\frac{n-1}{2}}|\mc{B}|^{\frac{1}{2}}\leq h^{\frac{\varepsilon}{6}} (K R^{n-1})^{\frac{1}{2}} \leq h^{\frac{\varepsilon}{6}} { {C_{n}}^{\!\frac{1}{2}}} \vol(S\!N^*\!H)^{\frac{1}{2}}.\]
Since in addition
\[|\mc{G}| \leq |\mathcal I_h(w)|\leq K (\max_{1\leq i \leq K}N_i) \leq \vol(S\!N^*\!H)C_{n}{h^{-\varepsilon(n-1)}},\]
{for every $N>0$ Theorem~\ref{t:coverToEstimate} yields the existence of constants $C_{n,k}>0$, $\tilde{C}=\tilde{C}({M,g,\tau_0,\varepsilon})>0$ and $C_{_{\!N}}>0$ so that for all $0<h<h_0$}
\begin{align}
&h^{\frac{k-1}{2}}\Big|\int_H w u\,d\sigma_H\Big| \notag\\
&\leq \frac{C_{n,k}{\vol(S\!N^*\!H)^{\frac{1}{2}}}\|w\|_{_{\!\infty}}C_{_n}^{\tfrac{1}{2}}}{\tau_0^{\frac{1}{2}}}
\!\Bigg(\Bigg[h^{\frac{\varepsilon}{6}}+ \frac{t_1^{\frac{1}{2}}}{T_0^{\frac{1}{2}}(h)}\Bigg]\!\!\|u\|_{{_{\!L^2(M)}}}\!+\frac{T_0^{\frac{1}{2}}(h)t_1^{\frac{1}{2}}}{h}\|(-h^2\Delta_g-I)u\|_{_{{\sob{-2}}}}\Bigg)\notag \\
\label{e:absorbing} &+\frac{\tilde{C}}{h}{\|w\|_\infty}\|(-h^2\Delta_g-I)u\|_{\Hs{\frac{k-3}{2}}}
\!+C_{_{\!N}}h^N\big(\|u\|_{{_{\!L^2(M)}}}\!+{\|(-h^2\Delta_g-I)u\|_{{\Hs{\frac{k-3}{2}}}}}\big)\\
&\leq C{\|w\|_{_{\!\infty}}}\left(\frac{\|u\|_{{_{\!L^2(M)}}}}{\sqrt{\log h^{-1}}}+\frac{\sqrt{\log h^{-1}}}{h}\|(-h^2\Delta_g-I)u\|_{\Hs{\frac{k-3}{2}}}\right)\label{e:submarine}
\end{align}
where $C=C(M,g,k,t_0,a,\delta_F, \vol(S\!N^*\!H), \tau_{_{\!\text{inj}H}})>0$ is some positive constant {and $h_0=h_0(\delta, M,g,\tau_0,k,a,w,R_0)$ is chosen small enough so that the last term on the right of~\eqref{e:absorbing} can be absorbed}. Note that the $\varepsilon$ dependence of $C$ and $h_0$ is resolved by fixing any $\varepsilon<\tfrac{1}{4}$.
\qed
\ \\
\noindent {\bf Proof of Theorem~\ref{t:noConj2}.}
{Note that if $H=\{x\}$ then $S\!N^*\!H=S_x^*M$ and $\vol(S^*_xM)=c_n$ for some $c_n>0$ that depends only on $n$. Next, note that $\tau_{_{\!\text{inj}H}}(\{x\})$ and $\delta_F$ can be chosen uniformly in $x\in M$ and that $H_pr_H=2$. {Moreover, in this case, $w=1$ and $K_{_{\!H}}$ can be taken arbitrarily small, so {$R_0=R_0(n,k,g,K_{_{\!H}} )$} can be taken to be uniform on $M$.}
Therefore, since the constant in \eqref{e:submarine} and $h_0$ depend only on $$M,\;g,\;k,\;t_0,\;a,\; \delta_F,\; \vol(S\!N^*\!H),\; \tau_{_{\!\text{inj}H}},$$ all of the terms on the right hand side of~\eqref{e:submarine} are uniform for $x\in M$, completing the proof of Theorem~\ref{t:noConj2}.}
\qed
\section{No focal points or Anosov geodesic flow: Proof of Theorems \ref{T:applications} and \ref{T:tangentSpace}}
\label{s:Anosov}
Next we analyze the cases in which $(M,g)$ has no focal points or Anosov geodesic flow. {For $\rho \in S\!N^*\!H$ we continue to write $N_{\pm}(\rho)=T_{\rho}(S\!N^*\!H)\cap E_\pm(\rho)$} and define the functions $m,m_{\pm}:S\!N^*\!H\to \{0,\dots,n-1\}$
\begin{equation}
\label{e:dim}
\begin{gathered}
m(\rho):=\dim (N_+(\rho)+N_-(\rho)),\qquad m_{\pm}(\rho):=\dim N_{\pm}(\rho),
\end{gathered}
\end{equation}
and note that the continuity of $E_{\pm}(\rho)$ implies that $m,\,m_{\pm}$ are upper semicontinuous ({see e.g. \cite[Lemma 20]{CG17}}).
We will need extensions of $N_\pm(\rho)$, $m_\pm(\rho)$ to neighborhoods of $S\!N^*\!H$ for our next lemma. To this end, for each $\rho$ in a neighborhood of $S\!N^*\!H$ define the set
$$
\mc{F}_{\!\rho}:=\{q\in T^*\!M :\; F(q)=F(\rho)\},
$$
where $F$ is the defining function for $S\!N^*\!H$ introduced in \eqref{e:defFunction}.
Since $\mc{F}_{\!\rho}=S\!N^*\!H$ for $\rho\in S\!N^*\!H$, the sets $\mc{F}_{\!\rho}$ can be thought of as a family of `translates' of $S\!N^*\!H$. We then define
\[
\tilde{N}_{\pm}(\rho):=T_{\rho}\mc{F}_{\!\rho} \cap E_{\pm}(\rho)\qquad \text{and}\qquad \tilde{m}_{\pm}(\rho):=\dim \tilde{N}_{\pm}(\rho).
\]
Note that since $T_{\rho}\mc{F}_{\!\rho}$ is smooth in $\rho$ and agrees with $T_{\rho}(S\!N^*\!H)$ for $\rho\in S\!N^*\!H$, $\tilde{m}_{\pm}(\rho)$ is upper semicontinuous with $\tilde{m}_{\pm}|_{S\!N^*\!H}=m_{\pm}.$
In what follows we continue to write $\mc{S}_H= \{\rho\in S\!N^*\!H:\;\, T_\rho (S\!N^*\!H)=N_-(\rho)+N_+(\rho)\}$.
{The following lemma shows that if $\rho \in S\!N^*\!H$ does not belong to $\mathcal S_H$ and $\varphi_t(\rho)$ is close enough to $\rho$ for $t$ sufficiently large, then $(d\varphi_t)_\rho{\bf w}$ leaves ${T_{{\varphi_{t}(\rho)}}}\mc{F}_{\!{\varphi_{t}}(\rho)}$ for some ${\bf w} \in T_\rho S\!N^*\!H$. }
\begin{lemma}\label{P:1}
Suppose $(M,g)$ has Anosov geodesic flow {or no focal points} and let $K \subset (S\!N^*\!H \backslash {\mc{S}_H})$ be a compact set. Then there exist positive constants $c_{_{\! K}},t_{_{\! K}}, \delta_{_{\! K}}>0$ so that if ${d(\rho, K)\leq \delta_{_{\!K}}}$, $|t| \geq t_{_{\! K}}$, and
\[
{\varphi_{t}}(\rho) \in \; \overline{B(\rho, \delta_{_{\! K}})},
\]
then there is $\mathbf{w}=\mathbf{w}(t, \rho)\in T_{\rho}(S\!N^*\!H)$ with
\begin{equation}
\label{e:noTangent1}
\inf\{\|d{\varphi_{t}} (\mathbf{w})+{\bf v}\|:\; {\bf v} \in T_{\varphi_{t}(\rho)}\mc{F}_{\!{\varphi_{t}}(\rho)} {+} {\mathbb R} H_p\}\geq c_{_{\! K}} \|\mathbf{w}\|.
\end{equation}
\end{lemma}
\begin{proof}
First note that since $\tilde{m}_{\pm}$ are upper semi-continuous, $K$ is compact, and $K\cap \mc{S}_H$ is empty, there exists $\delta_{_{\! \tilde K}}>0$ so that $d(K,\mc{S}_H)>\delta_{_{\! \tilde K}}.$ Therefore, to prove the lemma we work with the compact set $\tilde{K}:=\{\rho \in S\!N^*\!H:\; d(\rho,K)\leq \frac{\delta_{_{\!\tilde K}}}{2}\}$ and insist that $\delta_{_{\! K}}<\frac{\delta_{_{\! \tilde K}}}{2}$.
Let $\rho\in \tilde{K}$. Since
$T_{\rho}(S\!N^*\!H)\neq N_+(\rho)+ N_-(\rho)$,
we may choose
\[
{\bf u} \in T_{\rho}(S\!N^*\!H) \setminus ( N_+(\rho)+ N_-(\rho)),\qquad \|{\bf u}\|=1.
\]
Now, let $\mathbf{u}_+\in E_+(\rho)$ and $\mathbf{u}_-\in E_-(\rho)$ be so that
\[
\mathbf{u}=\mathbf{u}_++\mathbf{u}_-.
\]
Without loss of generality, we assume that $\mathbf{u}_-$ is orthogonal to $N_-(\rho)$ and, since $\rho$ varies in a compact subset of $S\!N^*\!H\backslash \mc{S}_H$, we may assume
that {there exists $C_{_{\!K}}>0$, uniform for $\rho\in \tilde{K}$, so that}
\begin{equation}\label{e:size}
C_{_{\!K}}^{-1}\|\mathbf{u}_+\|\leq \|\mathbf{u}_-\|\leq C_{_{\!K}}\|\mathbf{u}_+\|.
\end{equation}
{To deal with the fact that in the no focal points case we may have $E_+(\rho)\cap E_-(\rho)\neq \{0\}$, without loss of generality we also assume that
\begin{equation}
\label{e:notStable}
\inf\{\|\mathbf{u}_-+\mathbf{v}\|:\, \mathbf{v}\in E_+(\rho){\cap E_-(\rho)}\}=\|\mathbf{u}_-\|.
\end{equation}}
{Since $d\varphi_t:E_-(\rho) \to E_-(\varphi_t(\rho))$ and $d\varphi_t : E_+(\rho)\cap E_-(\rho) \to E_+(\varphi_t(\rho))\cap E_-(\varphi_t(\rho)) $ are isomorphisms,}
we have
\[
\dim \operatorname{span}\begin{pmatrix} d\varphi_t(\mathbf{u}_-), &d\varphi_t(N_-(\rho))\end{pmatrix}=1+\dim N_-(\rho).
\]
Also, note that since $\tilde{m}_-$ is upper semicontinuous {and integer valued}, we may choose $\delta>0$ uniform in $\rho \in S\!N^*\!H$ so that $\dim \tilde{N}_-(q)\leq \dim N_-(\rho)$ for all $q \in B(\rho, \delta)$.
For any $t$ {and $q\in B(\rho, \delta)$} we then have
\begin{equation}\label{E:dimension}
\dim \operatorname{span}\begin{pmatrix} d\varphi_t(\mathbf{u}_-), &d\varphi_t(N_-(\rho))\end{pmatrix}\geq1+\dim{\tilde N_-(q)}. \medskip
\end{equation}
Next, note that $\operatorname{span}\!\begin{pmatrix} d\varphi_t(\mathbf{u}_-), &d\varphi_t(N_-(\rho))\end{pmatrix} \subset E_-(\varphi_t(\rho))$.
Suppose now that $\varphi_t(\rho)\in B(\rho,\delta)$ {for some $t$} and note that if {$d\varphi_t({\bf w}) \in E_-(\varphi_t(\rho)) \backslash \tilde{N}_-(\varphi_t(\rho))$, then $d\varphi_t({\bf w}) \notin T_{\varphi_t(\rho)}\mc{F}_{\!\varphi_t(\rho)}$}.
In particular, relation \eqref{E:dimension} gives that there exists a linear combination
\[
{\bf w_t}= a_t \,\mathbf{u}_- + {\bf e}_-(t) \;{\in E_-(\rho)},
\]
with ${\bf e}_-(t) \in N_-(\rho)$, so that
\[
\left \| \pi_{t,\rho} (d\varphi_t {\bf w_t}) \right\|=1=\left \| d\varphi_t {\bf w_t} \right\|,
\]
where $\pi_{t,\rho}: T_{\varphi_t(\rho)}(S^*M) \to W_{t,\rho}$ is the orthogonal projection map onto a subspace $W_{t,\rho}$ of $T_{\varphi_t(\rho)}(S^*M)$ chosen so that $T_{\varphi_t(\rho)}(S^*M)=W_{t,\rho} \oplus T_{\varphi_t(\rho)}\mc{F}_{\!\varphi_t(\rho)}$ {is an orthogonal decomposition}.
{If we had that ${\bf w_t}$ was a tangent vector in $T_{\rho}(S\!N^*\!H)$, then we would be done proving \eqref{e:noTangent1}. {Note that to say this we are using that $d\varphi_t {\bf w}_t \in E_-(\varphi_t(\rho))$ and that $E_-(\varphi_t(\rho))\cap \mathbb{R} H_p =\{0\}$.} However, since $\mathbf{u}_-$ is not necessarily in {$T_{\rho}(S\!N^*\!H)$} we have to modify ${\bf w_t}$.}
Consider the vector
\[
{\bf \tilde{w}_t}= a_t\, \mathbf{u} + {\bf e}_-(t),
\]
and note that ${\bf \tilde{w}_t} \in T_{\rho}(S\!N^*\!H)$
and
\[
d\varphi_t ( {\bf \tilde{w}_t})= d\varphi_t({\bf w_t})+a_t\, d\varphi_t (\mathbf{u}_+).
\]
{Let $\delta_1>0$ be so that $1-\delta_1 \tilde\mathbf{B} {C_{_{\!K}}}>\frac{1}{2}$. We claim that there is $t_{_{\! K}}>0$, depending only on $(M,p,K)$,} so that for $t>t_{_{\! K}}$,
\begin{equation}\label{e:a_t}
\| {\bf w_t}\| \leq {\delta_1} \qquad \text{and} \qquad |a_t|<\delta_1 {\|\mathbf{u}_-\|^{-1}}.
\end{equation}
Note that this yields that for $t$ large enough, $d\varphi_t ( {\bf \tilde{w}_t})$ approaches $d\varphi_t({\bf w_t}) \notin T_{\varphi_{t}(\rho)}\mc{F}_{\!{\varphi_{t}}(\rho)}$. In particular, the $t$-flowout of the ${\bf \tilde{w}_t}$ direction in $T_{\rho}(S\!N^*\!H)$ approaches $E_-(\varphi_t(\rho))$ (see Figure \ref{f:rotation}).
We postpone the proof of \eqref{e:a_t} until the end, and show how to finish the proof assuming it holds.
\begin{figure}[h]
\begin{tikzpicture}
\begin{scope}[scale =1.5]
\foreach \t in{0}{
\begin{scope}[shift= {(2.4*\t,0)}]
\draw[thick,->](\t,-1.7)--(\t,1.7)node[right]{\tiny{$E_-$}};
\draw[gray,->](\t,0)--(\t+.5,.5)node[right]{\color{gray}{\tiny{$H_p$}}};
\draw[thick,dashed,blue](\t-.3,-.9)--(\t+.4,1.2)node[right]{\tiny{\color{blue}$T_\rho (S\!N^*\!H)$}};
\draw[thick,->](\t-.85,.5)--(\t+.85,-.5)node[right]{\tiny{$E_+$}};
\draw[thick,->,red] ({(\t-.75*.2/(\t+1)+\t+.75*.2/(\t+1))/2},{(-.75*.6*(\t+1)+.75*.6*(\t+1))/2}) --({\t+.75*.2/(\t+1)}, {.75*.6*(\t+1)})node[left]{\tiny{\color{red}$d\varphi_{\t}(\bf \tilde{w}_t)$}};
\end{scope}
}
\foreach \t in{1,2}{
\begin{scope}[shift= {(2.4*\t,0)}]
\draw[thick,->](\t,-1.7)--(\t,1.7)node[right]{\tiny{$E_-$}};
\draw[gray,->](\t,0)--(\t+.5,.5)node[right]{\color{gray}{\tiny{$H_p$}}};
\draw[thick,dashed,blue](\t-.3,-.9)--(\t+.4,1.2)node[right]{\tiny{\color{blue}$T_{_{\!\varphi_{\t}(\rho)}} (S\!N^*\!H)$}};
\draw[thick,->](\t-.85,.5)--(\t+.85,-.5)node[right]{\tiny{$E_+$}};
\draw[thick,->,red] ({(\t-.75*.2/(\t+1)+\t+.75*.2/(\t+1))/2},{(-.75*.6*(\t+1)+.75*.6*(\t+1))/2}) --({\t+.75*.2/(\t+1)}, {.75*.6*(\t+1)})node[left]{\tiny{\color{red}$d\varphi_{\t}(\bf \tilde{w}_t)$}};
\end{scope}
}
\end{scope}
\end{tikzpicture}
\caption{\label{f:rotation} Schematic of the rotation of $\bf \tilde{w}_t$ under the geodesic flow.}
\end{figure}
We next observe that there exists $\tilde\mathbf{B}>0$ so that if ${\bf w} \in E_{\pm}(\rho)$ then $\|d\varphi_t {\bf w}\|\leq \tilde\mathbf{B}\|{\bf w}\|$ for $\pm t\geq 0$. Indeed, in the Anosov case $\tilde\mathbf{B}=\mathbf{B}$, where $\mathbf{B}$ is defined in \eqref{e:Bdef}, and in the no focal points case the existence of $\tilde\mathbf{B}$ is guaranteed by~\cite[Proposition 2.13, Corollary 2.14]{Eberlein73}. We can therefore conclude from \eqref{e:size} and \eqref{e:a_t} that
\[
\| \pi_{t,\rho} (d\varphi_t {\bf \tilde{w}_t}) \| \geq \| \pi_{t,\rho} (d\varphi_t {\bf w_t})\|-\| a_t\, \pi_{t,\rho} (d\varphi_t {\bf u}_+) \| >1-\delta_1 \tilde\mathbf{B} {C_{_{\!K}}},
\]
and
$$
\|\mathbf{\tilde{w}_t}\|=\|{\bf w}_t +a_t {\bf u}_+ \|\leq \|\mathbf{w}_t\|+|a_t|\|\mathbf{u}_+\|\leq \delta_1(1+C_{_{\!K}}).
$$
In particular,
\[
\| \pi_{t,\rho} (d\varphi_t {\bf \tilde{w}_t}) \| \geq \frac{1-\delta_1 \tilde\mathbf{B} {C_{_{\!K}}}}{\delta_1(1+C_{_{\!K}})} \|\mathbf{\tilde{w}_t}\|.
\]
Therefore, there exist positive constants $c_{_{\! K}}$, $\delta_{_{\! K}}$ and $t_{_{\! K}}$ (uniform for $\rho\in K$) so that if ${\varphi_{t}}(\rho)\in B(\rho, \delta_{_{\! K}})$ for some $t$ with $|t|>t_{_{\! K}}$, {then there is $\mathbf{w}=\tilde{\mathbf{w}}_{t}\in T_{\rho}(S\!N^*\!H)$} so that
\begin{equation}
\label{e:noTangent}
\inf\{\|d{\varphi_{t}} (\mathbf{w})+{\bf v}\|:\; {\bf v} \in T_{{\varphi_{t}}(\rho)}\mc{F}_{\!{\varphi_{t}}(\rho)} + {\mathbb R} H_p\}\geq c_{_{\! K}} \|\mathbf{w}\|.
\end{equation}
This finishes the proof, assuming that the claim in \eqref{e:a_t} holds. We proceed to prove \eqref{e:a_t}, starting with the Anosov case.
By the definition of Anosov geodesic flow,
\[
\| (d\varphi_t|_{E_-})^{-1}\|\leq \mathbf{B} e^{-t/\mathbf{B}},\quad t\geq 0.
\]
{Thus, since ${\bf w_t}\in E_-(\rho)$ and $\left \| d\varphi_t {\bf w_t} \right\|=1$, we find {$\|{\bf w_t}\|\leq \mathbf{B} e^{-t/\mathbf{B}}$}. In particular, since $\mathbf{u}_-$ and $ {\bf e}_-(t) $ are orthogonal, we have
\[
|a_t|\leq \mathbf{B} e^{-t/\mathbf{B}}\|\mathbf{u}_-\|^{-1},\qquad { t\geq 0}.
\]}
This proves the claim \eqref{e:a_t} in the Anosov flow case after choosing $t_{_{\! K}}>0$ large enough so that $\mathbf{B} e^{-t/\mathbf{B}}\leq \delta_1$ for all $t\geq t_{_{\! K}}$.
We next consider the non-focal points case. Define {$\mathcal C_+^\alpha(\rho)\subset T_\rho (S^*\!M)$} to be the conic set of vectors forming an angle larger than or equal to $\alpha>0$ with $E_+(\rho)$. {Let $\alpha_{_{\! K}}>0$ be so that ${\mathbf{w}_t}\in E_-(\rho)\cap \mathcal C_+^{\alpha_{_{\! K}}}(\rho)$ for all $\rho \in \tilde{K}$}. {By~\cite[Proposition 2.6]{Eberlein73} $(d\pi)_\rho:E_{\pm}(\rho)\oplus H_p(\rho)\to T_{\pi(\rho)}M$ is an isomorphism for each $\rho$. In particular, letting $V(\rho)\subset T_\rho (S^*\!M)$ denote the vertical vectors, we have that $E_{\pm}(\rho)\cap V(\rho)=\emptyset$ and $V(\rho)\oplus E_+(\rho)\oplus H_p(\rho)=T_{\pi(\rho)}S^*M$. In addition, since $(M,g)$ has no focal points, $\cup_{\rho\in S^*M} E_{\pm}(\rho)$ is closed~\cite[see right before Proposition 2.7]{Eberlein73} and hence there exists $c_{_{ \alpha_{_{\! K}}}}>0$ depending only on $\alpha_{_{\! K}}$ so that
\[
\mathbf{w}_t={\bf e_+}+{\bf v}
\]
with
\[
c_{_{\alpha_{_{\! K}}}}\|{\bf e_+}\|\leq \|\mathbf{w}_t\| \leq \frac{1}{c_{_{\alpha_{_{\! K}}}}} \|{\bf v}\|,
\]}
where ${\bf e_+}\in E_+(\rho)$ and ${\bf v}\in {V}(\rho)$. By~\cite[Remark 2.10]{Eberlein73}, for all $R>0$ there exists $T(R)>0$ so that $\|Y(t)\| \geq R \|Y'(0)\|$ for all $t>T(R)$, where $Y(t)$ is any Jacobi field with $Y(0)=0$ and perpendicular to a unit speed geodesic $\gamma$ with $\gamma(0) \in \tilde{K}$. Since ${\bf v}$ is a vertical vector, we may consider $Y(t)=d\pi \circ d\varphi_t ({\bf v})$, and this implies that $Y'(0)=\mathbf{K}{\bf v}^\sharp$ ({see Appendix~\ref{s:jacobi} for an explanation of the connection map $\mathbf{K}$, and the $\sharp$ operator}). We therefore have that $\|d\varphi_t {\bf v}\|\geq R\|{\bf v}\|$ for all $t>T(R)$. In particular,
$$
\|d\varphi_t\mathbf{w}_t\|=\|d\varphi_t\mathbf{v} + d\varphi_t\mathbf{e}_+ \| \geq R\|{\bf v}\|-\tilde{\mathbf{B}}\|{\bf e_+}\|\geq (Rc_{_{ \alpha_{_{\! K}}}}-c_{_{ \alpha_{_{\! K}}}}^{-1}\tilde{\mathbf{B}})\|\mathbf{w}_t\|.
$$
So, choosing $R(\alpha_{_{\! K}})=c_{_{ \alpha_{_{\! K}}}}^{-1}(\delta_1^{-1}+c_{_{ \alpha_{_{\! K}}}}^{-1}\tilde{\mathbf{B}})$, we have that for $t\geq t_{_{\! K}}:=T(R(\alpha_{_{\! K}}))$,
$$
1=\|d\varphi_t\mathbf{w}_t\|\geq \delta_1^{-1}\|\mathbf{w}_t\|.
$$
In particular, for $t\geq t_{_{\! K}}$, since $\mathbf{u}_-$ is orthogonal to ${\bf e_-}(t)$, we obtain
$
1=\|d\varphi_t\mathbf{w}_t\|\geq \delta_1^{-1}\|\mathbf{w}_t\|\geq \delta_1^{-1}|a_t| {\|\mathbf{u}_-\|},
$
completing the proof of the lemma in the case of manifolds without focal points.
\end{proof}
{When $(M,g)$ has Anosov geodesic flow, we need to define a notion of angle between a vector and $E_{\pm}(\rho)$.}
Let $\pi_{\pm}:T_\rho S^*\!M\to E_{\pm}(\rho)$ be the projection onto $E_{\pm}(\rho)$ along $E_{\mp}(\rho)\oplus H_p(\rho)$ {i.e. if ${\bf{u}}={\bf{v}}_++{\bf{v}}_-+rH_p$ with $r\in \mathbb{R}$, ${\bf{v}}_{\pm}\in E_{\pm}(\rho)$, then $\pi_{\pm}({\bf{u}})={\bf{v}}_{\pm}$.}
For $\rho \in S^*\!M$, define ${\Theta}^{\pm}_\rho:T_\rho S^*\!M\setminus \{0\}\to [0,\infty]$ by
\begin{equation}\label{e:Theta}
{{\Theta}^{\pm}_\rho}(\mathbf{u}):=\frac{\|\pi_\mp\mathbf{u}\|}{\|\pi_\pm\mathbf{u}\|}.
\end{equation}
Note that ${\Theta}^{\pm}_\rho$ should be thought of as measuring the tangent of the angle from $E_{\pm}(\rho)$, and that {given a compact subset $K$ of $T^*\!M \backslash\{0\}$ there exists $C_{_{\!K}}>0$ so that for all $\rho \in K$,} $t\in \mathbb{R}$, and $\mathbf{u} \in T_\rho S^*\!M$, we have
\begin{equation}
\label{e:angleChange}
\frac{e^{\pm t/C_{_{\!K}}}}{C_{_{\!K}}}\,{\Theta}^{\pm}_\rho(\mathbf{u})\leq {\Theta}^{\pm}_{\varphi_t(\rho)}(d\varphi_t\mathbf{u})\leq C_{_{\!K}}e^{\pm C_{_{\!K}}t}\,{\Theta}^{\pm}_\rho(\mathbf{u}).
\end{equation}
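We sketch the first inequality in \eqref{e:angleChange} for ${\Theta}^+$ and $t\geq 0$ (a sketch, assuming the hyperbolicity bounds $\|(d\varphi_t|_{E_-})^{-1}\|\leq \mathbf{B} e^{-t/\mathbf{B}}$ and $\|d\varphi_t|_{E_+}\|\leq \mathbf{B} e^{-t/\mathbf{B}}$ for $t\geq 0$ on the relevant compact set). Since $\pi_\pm d\varphi_t=d\varphi_t \pi_\pm$,
\[
{\Theta}^{+}_{\varphi_t(\rho)}(d\varphi_t\mathbf{u})
=\frac{\|d\varphi_t\pi_-\mathbf{u}\|}{\|d\varphi_t\pi_+\mathbf{u}\|}
\geq \frac{\mathbf{B}^{-1}e^{t/\mathbf{B}}\|\pi_-\mathbf{u}\|}{\mathbf{B} e^{-t/\mathbf{B}}\|\pi_+\mathbf{u}\|}
=\mathbf{B}^{-2}e^{2t/\mathbf{B}}\,{\Theta}^{+}_\rho(\mathbf{u}),
\]
so the left-hand inequality holds with any $C_{_{\!K}}\geq \max\{\mathbf{B}^2,\mathbf{B}/2\}$; the remaining inequalities follow in the same way, using also the general flow bound $\|d\varphi_t\|\leq C_\varphi e^{\Lambda|t|}$.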
{{In what follows we will use the fact that by~\cite[Proposition 3.3]{CG18d} there are {$\mathfrak{D}_n>0$ depending only on $n$}, $\tau_{_{\!S\!N^*\!H}}>0$ depending only on $\tau_{_{\!\inj H}}$, {and} ${R_0>0}$ depending only on $(n,k,K_H)$ and finitely many derivatives of the curvature and second fundamental form of $H$, so that for ${0<}\tau<\tau_{_{\!\SNH}}$ and ${0<}r<R_0$, there is a $({\mathfrak{D}}_n,\tau,r)$ good cover of $S\!N^*\!H$.}}
\begin{lemma}\label{l:anosov}
Let $(M,g)$ have Anosov geodesic flow and $H\subset M$ satisfy $\mc{A}_H=\emptyset$. Then, there exist{ $c=c(M,g,H)>0$, $C=C(M,g,H)>2$}, ${I}>0$, $t_0>1$, so that for all {$\Lambda>\Lambda_{\text{max}}$} the following holds.
Let $T_0\geq t_0$,\; $m=\big\lfloor \frac{\log T_0 - \log t_0} {\log 2}\big\rfloor$, \; $0<\tau_0<\tau_{_{\!\SNH}}$,\; $0<\tau\le \tau_0$,
\[0\leq r_1\leq \min\{e^{-C T_0},{R_0}\}, \]
and $\{\Lambda_{\rho_j}^\tau(r_1)\}_{j=1}^{N_{r_1}}$ be a $({\mathfrak{D}_n},\tau,r_1)$ good cover of $S\!N^*\!H$.
Then, for each $i\in\{1,\dots, {I}\}$ there are sets of indices $\{\mc{G}_{i,\ell}\}_{\ell=0}^m\subset \{1,\dots, N_{r_1}\}$ and $\mc{B}\subset \{1,\dots, N_{r_1}\}$ so that
$$\bigcup_{i=1}^{{I}}\bigcup_{\ell=0}^m\mc{G}_{i,\ell}\cup \mc{B}= \{1,\dots, N_{r_1}\},$$
and for every $i\in\{1,\dots, {{I}}\}$ and every $\ell\in \{0, \dots, m\}$
\begin{itemize}
\item $\bigcup_{j\in \mc{G}_{i,\ell}}\Lambda_{\rho_j}^\tau(r_1)$ is $[t_0,2^{-\ell}T_0]$ non-self looping, \\
\item $ |\mc{G}_{i,\ell}|\leq c \, 5^{-\ell}\, r_1^{1-n},$\\
\item $ |\mc{B}|\leq c \,e^{- cT_0}\, r_1^{1-n}.$
\end{itemize}
\end{lemma}
{We note that if $H_0 \subset M$ is an embedded submanifold, there exists a neighborhood $U$ of $H_0$ (in the $C^\infty$ topology) so that the constants $c=c(M,g,H)$ and $C=C(M,g,H)$ in Lemma \ref{l:anosov} are uniform for $H\in U$.}
\begin{proof}
{Let
$0\leq r_0\leq \tfrac{1}{C}e^{-\Lambda T_0}r_1$. Then $\{\Lambda_{\rho_j}^\tau(r_1)\}_{j=1}^{N_{r_1}}$ covers $\Lambda_{_{\!S\!N^*\!H}}^\tau(r_0)$ since $r_0 \leq \tfrac{1}{2}r_1$.} Throughout this proof we will repeatedly use that if $F:T^*\!M \to \mathbb{R}^{n-1}$ is the defining function for $S\!N^*\!H$, then there exist $\delta_0,c_0>0$ so that for $q\in T^*\!M $
\begin{equation}\label{e:F}
d(q, S\!N^*\!H)\leq \delta_0 \quad \Longrightarrow \quad \|dF {\bf v}\|\geq c_0 \inf\big \{\|{\bf v}+{\bf u}\|:\; {\bf u} \in T_{q}\mathcal F_{q}\big\}\quad \forall {\bf v}\in T_{q}(T^*\!M ).
\end{equation}
In addition, {let $\nu>0$ be so that the map $\rho \mapsto E_{\pm}(\rho)$ is of class $C^\nu$, and} define $c_{_{\!H}}>0$ so that
\begin{equation}\label{e:angle}
\Big(\| \tan^{-1}\circ\Theta^+_{{q_1}}\|_{L^\infty(T_{{q_1}}S\!N^*\!H)}-\| \tan^{-1}\circ\Theta^+_{{q_2}}\|_{L^\infty(T_{{q_2}}S\!N^*\!H)}\Big)\leq \frac{1}{c_{_{\!H}}}{d(q_1,q_2)^\nu} \qquad \text{for all}\;\; q_1,q_2 \in S\!N^*\!H.
\end{equation}
{This implies} that for all $\varepsilon>0$, there exists $\delta_\varepsilon>0$ so that for every ball $\tilde B \subset S\!N^*\!H$ of radius $\delta_\varepsilon$ we have
\begin{equation}\label{e:Abounds}
\sup_{\rho_1,\rho_2\in \tilde B} \;\Big|
\|\tan^{-1}{\Theta}^{\pm}_{\rho_1}\|_{L^\infty(T_{\rho_1}S\!N^*\!H)} -
\| \tan^{-1}{\Theta}^{\pm}_{\rho_2}\|_{L^\infty(T_{\rho_2}S\!N^*\!H)}
\Big|<\varepsilon.
\end{equation}
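To see how \eqref{e:Abounds} follows from \eqref{e:angle}, note that if $\rho_1,\rho_2$ lie in a ball $\tilde B\subset S\!N^*\!H$ of radius $\delta_\varepsilon$, then $d(\rho_1,\rho_2)\leq 2\delta_\varepsilon$, and \eqref{e:angle} gives
\[
\Big|
\|\tan^{-1}{\Theta}^{+}_{\rho_1}\|_{L^\infty(T_{\rho_1}S\!N^*\!H)} -
\|\tan^{-1}{\Theta}^{+}_{\rho_2}\|_{L^\infty(T_{\rho_2}S\!N^*\!H)}
\Big|\leq \frac{(2\delta_\varepsilon)^\nu}{c_{_{\!H}}}<\varepsilon,
\]
provided $\delta_\varepsilon< \tfrac{1}{2}\big[\varepsilon c_{_{\!H}}\big]^{\frac{1}{\nu}}$ (a sketch; the case of ${\Theta}^-$ is analogous, assuming \eqref{e:angle} also holds for ${\Theta}^-$). This is consistent with the requirement $\delta_\varepsilon \leq \tfrac{2}{9}\big[\varepsilon c_{_{\!H}}\big]^{\frac{1}{\nu}}$ imposed below.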
Also, since $\mc{A}_H=\emptyset$, we know that for every $\rho \in \mc{S}_H$ we must have that either $m_+(\rho)=0$ or $m_-(\rho)=0$, where we continue to write $m_\pm(\rho) =\dim N_\pm (\rho)$.
Therefore, choosing
\begin{equation}\label{e:epsilon}
\varepsilon=\varepsilon(M,g,H)<1
\end{equation}
small enough, depending only on $(M,g,H)$, and shrinking $\delta_\varepsilon$ if necessary, we may also assume that
if $ \tilde B\cap \mc{S}_H\neq \emptyset$ then either
\begin{align}
m_-(\rho)&= 0 \;\; \text{and} \;\; {\Theta}^+_\rho\leq \varepsilon \;\;\; \text{for all} \;\;\rho \in \tilde B, \notag\\
&\qquad \qquad \qquad\text{or}\label{e:dimensions}\\
m_+(\rho)&= 0 \;\; \text{and} \;\;{\Theta}^-_\rho\leq \varepsilon \;\;\; \text{for all}\;\; \rho \in \tilde B.\notag
\end{align}
Furthermore, we assume that $\delta_\varepsilon \leq \tfrac{2}{9}\big[\varepsilon c_{_{\!H}}\big]^{\frac{1}{\nu}}$.
Next, let $\{B_i\}_{i =1}^{N_\varepsilon} \subset S\!N^*\!H$ be a cover of $S\!N^*\!H$ with
\[
S\!N^*\!H \subset \bigcup_{i =1}^{N_\varepsilon} B_i, \qquad \qquad B_i \;\;\text{ball of radius}\; \tfrac{1}{2}\delta_\varepsilon.
\]
Let $\mc{I}_{ \mc{S}_H}:=\{i \in \{1, \dots, N_\varepsilon\}: \; B_i \cap \mc{S}_H \neq \emptyset\}$, and define $K=K_\varepsilon$ by
\[
K:= \bigcup_{i\in \mc{I}_{\mc{S}_H} } (S\!N^*\!H \backslash {B_i}).
\]
Since $K\subset (S\!N^*\!H \backslash \mc{S}_H)$ is compact and the geodesic flow is Anosov, by Lemma~\ref{P:1} there exist positive constants $c_{_{\!K}},t_{_{\!K}},\delta_{_{\!K}}$ so that {$d(K,\mc{S}_H)>\delta_{_{\! K}}$} and, if {$d(\rho ,K)\leq \delta_{_{\!K}}$} and $\varphi_t(\rho) \in \overline{ B(\rho, \delta_{_{\!K}})}$ for some $|t|>t_{_{\!K}}$, then there exists ${\bf w}={\bf w}(t,\rho) \in T_{\rho}(S\!N^*\!H)$ so that
\begin{equation}\label{e:lb0}
\inf\{\|d\varphi_{t} (\mathbf{{\bf w}})+{\bf v}\|:\; {\bf v} \in {T_{\varphi_{t}(\rho)}}\mc{F}_{\!\varphi_{t}(\rho)} + {\mathbb R} H_p\}\geq c_{_{\!K}}\|\mathbf{{\bf w}}\|.
\end{equation}
We then introduce a cover $\{D_i\}_{i \in \mc{I}_K} \subset S\!N^*\!H$ of $K$ by balls with
\[
K \subset \bigcup_{i \in \mc{I}_K} D_i, \qquad \qquad D_i \;\;\text{ball of radius}\;{ \tfrac{1}{4}R,}
\]
where
\[R:=\min\{\delta_{_{\!K}}, \delta_0, \tfrac{1}{2}\delta_\varepsilon,{\delta_F}\}\]
and $\delta_F$ is as in~\eqref{e:defFunction}.
Note that $R$ depends only on $(M,g,H,K)$.
It follows that
\begin{equation}\label{e:snh}
S\!N^*\!H \subset \left( \bigcup_{i \in \mc{I}_{ \mc{S}_H}} B_i \;\; \cup \;\; \bigcup_{i \in \mc{I}_K} D_i \right)
\end{equation}
where each ball $B_i$ satisfies \eqref{e:Abounds} and \eqref{e:dimensions}, and each ball $D_i$ satisfies \eqref{e:lb0}. Also,
\[
\mc{S}_H\cap D_i =\emptyset \;\;\; \forall i \in \mc{I}_K \qquad \text{and}\qquad \mc{S}_H\cap B_i \neq \emptyset \;\;\; \forall i \in \mc{I}_{ \mc{S}_H}.
\]
Since $S\!N^*\!H$ can be split as in \eqref{e:snh}, we treat the balls $D_i$ with $i \in \mc{I}_K$ and the balls $B_i$ with $i \in \mc{I}_{\mc{S}_H}$ separately.
\begin{center}
{\underline{\bf Treatment of $D\in \{D_i\}_{i \in \mc{I}_{K}}$.}}
\end{center}
Let $D\in \{D_i\}_{i \in \mc{I}_{K}}$. Note that since $R\leq \min\{\delta_{_{\!K}}, \delta_0\}$, by \eqref{e:lb0} we know that if $\rho \in D$ and $|t| \geq t_{_{\!K}}$ are so that $d(\varphi_t(\rho),\rho)< R$, then there exists ${\bf w}={\bf w}(t, \rho)\in T_{\rho}(S\!N^*\!H)$ so that for all $s\in {\mathbb R}$
\begin{align*}
\|dF(d\varphi_t{\bf w}+s H_p)\|
&\geq c_0 \inf\big \{\|d\varphi_t{\bf w} + s H_p+ {\bf u}\|:\; {\bf u} \in T_{\varphi_t(\rho)}\mathcal F_{\varphi_t(\rho)} \big\}\\
&\geq c_0 \inf\big \{\|d\varphi_t{\bf w} + {\bf v}\|:\; {\bf v} \in T_{\varphi_t(\rho)}\mathcal F_{\varphi_t(\rho)} + {\mathbb R} H_p\big\}\\
& \geq c_0 c_{_{\!K}}\|{\bf w}\|,
\end{align*}
where we used \eqref{e:F} to get the first inequality and \eqref{e:lb0} for the third one. This implies that if $|t|\geq t_{_{\!K}}$ and $\rho \in D$ are such that $d(\varphi_t(\rho),\rho)< R$, then, setting $\psi(t,\rho):=F(\varphi_t(\rho))$, the differential $d\psi(t,\rho)$ has a left inverse $L_{(t, \rho)}$ when restricted to ${\mathbb R}\partial_t \oplus {\mathbb R} {\bf w}$ with $\|L_{(t, \rho)}\| \leq (c_0 c_{_{\!K}})^{-1}$.
Let $\alpha_1, \alpha_2$ be as in Proposition~\ref{p:ballCover}, and note that they only depend on $(M,g, H,{K})$. We aim to apply this proposition with $A=D$, $B=D$, $\beta=0$, $c={(c_0 c_{_{\!K}})^{-1}}$, {$a=0$, $\tilde{c}=\frac{R}{4}$}. Let $t_1$ satisfy
\begin{equation}\label{e:t0def}
t_1\geq \max\{1,t_{_{\!K}}\}.
\end{equation}
Note that $t_1$ depends only on $(M,g,H,{K})$.
Next, let $T_0 \geq t_1$. By construction, if $(t,\rho) \in [t_1, T_0] \times D$ is such that $d(\varphi_t(\rho),D) \leq \tilde c $, then
\[
d(\varphi_t(\rho),\rho) \leq d(\varphi_t(\rho),D)+\diam (D) \leq \tilde c+ 2 (\tfrac{1}{4}R)< R.
\]
In this case, since $t\geq t_1\geq t_{_{\!K}}$ by \eqref{e:t0def}, there exists ${\bf w}={\bf w}(t, \rho)\in T_{\rho}(S\!N^*\!H)$ so that $d\psi(t,\rho)$ has a left inverse $L_{(t, \rho)}$ when restricted to ${\mathbb R}\partial_t \oplus {\mathbb R} {\bf w}$ with $\|L_{(t, \rho)}\| \leq (c_0 c_{_{\!K}})^{-1}= c $.
Let $C>0$ be so that
\begin{equation}\label{e:Cdef}
\frac{1}{C}<{\min}\{\tfrac{1}{2}, \tfrac{1}{3\alpha_1}\} \qquad \text{and} \qquad e^{-CT_0}\leq \min\{\tfrac{1}{8}\alpha_1R,\; \tfrac{1}{2}\alpha_1\alpha_2e^{-{3\Lambda} T_0}\}.
\end{equation}
Set $r_2:=\frac{2}{\alpha_1}r_1$ and note that by construction, and the assumptions on the pair $(r_0,r_1)$, we have
$$
r_1< \alpha_1\, r_2, \qquad r_2 \leq \min\{{\tfrac{1}{4}R},\alpha_2\, e^{-{3\Lambda} T_0}\}, \qquad r_0 < \tfrac{1}{3}\, e^{-\Lambda T_0} r_2.
$$
Also, note that we work with $0<\tau\leq\tau_0<\tau_{_{\!\SNH}}$, and that by definition $\tau_{_{\!\SNH}}<\tfrac{1}{2}\tau_{_{\!\text{inj}H}}$ as required by Proposition~\ref{p:ballCover}.
We apply Proposition~\ref{p:ballCover} {to the cover $\{\Lambda_{\rho_j}^\tau(r_1)\}_{j \in \mathcal E_{_{\!D}}}$ of $\Lambda_{_{\!D}}^\tau(r_0)$ where}
\begin{equation}
\label{e:ei1}
\mc{E}_{_{\!D}}:=\{j: \Lambda_{\rho_j}^\tau(r_1)\cap \Lambda_{_{\!{D}}}^\tau(r_0)\neq \emptyset\}.
\end{equation}
Then, there is a partition $\mc{E}_{_{\!D}}=\mc{G}_{_{\!D}}\cup \mc{B}_{_{\!D}}$ with
\begin{equation}\label{e:badD}
|\mc{B}_{_{\!D}}|\leq {\bf{C}_0} \;\frac{{R}^{n-1}}{{r_1^{n-2}}}\; T_0e^{{4\Lambda}T_0},
\end{equation}
where {${\bf{C}_0}={\bf{C}_0}(M,g,k,c_0,c_{_{K}})>0$},
and so that
\begin{equation}\label{e:goodD}
\bigcup_{j\in \mc{G}_{_{\!D}}}\Lambda^\tau_{\rho_j}(r_1)\qquad \text{is}\;\;\; [t_1,T_0]\text{ non-self looping}.
\end{equation}
\begin{center}
{\underline{\bf Treatment of $B\in \{B_i\}_{i \in \mc{I}_{\mc{S}_H}}$}}
\end{center}
Let $B\in \{B_i\}_{i \in \mc{I}_{\mc{S}_H}}$. Since \eqref{e:dimensions} is satisfied for all $\rho \in B$, we shall focus on the case where $m_-(\rho)=0$ for all $\rho \in B$; the other being similar after sending $t\mapsto -t$ in the arguments below.
Suppose $B$ is the ball $B(\rho_{_{\!B}}, \tfrac{1}{2}\delta_\varepsilon)$ for some $\rho_{_{\!B}}\in S\!N^*\!H$ and let
$$
E:=B(\rho_{_{\!B}}, \tfrac{3}{4}\delta_\varepsilon)\subset S\!N^*\!H, \qquad \tilde B:=B(\rho_{_{\!B}}, \delta_\varepsilon)\subset S\!N^*\!H.
$$
Note that $B \subset E \subset \tilde B$, and that ${\Theta}^+_\rho\leq \varepsilon$ for all $\rho \in \tilde B$ by \eqref{e:dimensions}.
We claim that there exist a function $\mathfrak{t}_{2}:[\tfrac{1}{5}, +\infty) \to [1, +\infty)$ that depends only on $(M,g)$, and a constant $\digamma>0$ {depending on $(M,g,K_{_{\!H}})$}, so that
\begin{equation}\label{e:Bcontrol}
E \;\; \text{can be}\;\; (\tfrac{1}{5}, \mathfrak{t}_{2}, \digamma)\text{-controlled up to time} \;T_0.
\end{equation}
If the claim in \eqref{e:Bcontrol} holds, setting $R_0:= \min\{{\tfrac{1}{\digamma}}e^{-{\digamma}\Lambda T_0}, \tfrac{1}{8}\delta_\varepsilon\}$ and noting that $d(B, E^c)=\tfrac{1}{4}\delta_\varepsilon > R_0$, we may apply Lemma~\ref{l:nonlooping2} to the ball $E$ with $E_0=B$ and $\varepsilon_0=\tfrac{1}{5}$.
Indeed, by possibly enlarging $C>0$ in \eqref{e:Cdef} so that
\begin{equation}\label{e:Cdef3}
e^{-CT_0}<\tfrac{1}{5\digamma}e^{-({\digamma}+2\mathbf{D})\Lambda T_0} R_0,
\end{equation}
by the assumption that $r_1\leq e^{-CT_0}$ we conclude $0<r_1<{\tfrac{1}{5\digamma}}e^{-({\digamma}+2\mathbf{D})\Lambda T_0} R_0$.
Therefore, letting
\begin{equation}
\label{e:ei2}
\mc{E}_{_{\!B}}:=\{j: \Lambda_{\rho_j}^\tau(r_1)\cap \Lambda_{_{\!B}}^\tau(r_0)\neq \emptyset\},
\end{equation}
there exists $C_{_{M,g}}>0$ depending only on $(M,g)$, so that for every integer $0<m<\frac{\log T_0-\log \mathfrak{t}_{2}(\frac{1}{5})}{\log 2}$ there are sets $\{\mc{G}_{_{\!B,\ell}}\}_{\ell =0}^m\subset \{1,\dots, N_{r_1}\}$, $\mc{B}_{_{\!B}}\subset \{1,\dots, N_{r_1}\}$ satisfying
\begin{align}\label{e:Bballs}
\mc{E}_{_{\!B}}\subset \mc{B}_{_{\!B}}\cup \displaystyle\bigcup_{\ell =0}^m\mc{G}_{_{\!B,\ell}},\qquad
\bigcup_{i\in \mc{G}_{_{\!B,\ell}}}\Lambda_{\rho_i}^\tau(r_1)\text{ is } [\mathfrak{t}_{2}(\tfrac{1}{5}),2^{-\ell}T_0]\text{ non-self looping} \notag\\
|\mc{G}_{_{\!B,\ell}}|\leq C_{_{M,g}}\frac{\delta_\varepsilon^{n-1}}{5^{\ell}} \frac{1}{r_1^{n-1} },\qquad \text{ and }
\qquad|\mc{B}_{_{\!B}}|\leq C_{_{M,g}} \frac{\delta_\varepsilon^{n-1}}{5^{m+1}} \frac{1}{r_1^{n-1}},
\end{align}
for all $\ell \in \{0, \dots, m\}$.
We shall use this construction below, under the heading ``Constructing the complete cover'', to build the complete cover.\\
We dedicate the rest of the argument to proving the claim in \eqref{e:Bcontrol}.
Let $\digamma>0$ satisfy
\begin{equation}\label{e:digamma}
\frac{1}{\digamma} < \min \Big \{ \frac{\alpha}{4} , \frac{\alpha^2}{4} , \frac{\alpha\, }{{60\bf{C}_0}}, \frac{[\varepsilon c_{_{\!\!H}}]^{\frac{1}{\nu}}}{3} , {\frac{\varepsilon^{{\frac{1}{\nu}}}}{C^{{\frac{1}{\nu}}}_{_{\!\Theta}}}} , \frac{1}{11},{\frac{\nu}{2}} \Big\},
\end{equation}
where $\alpha:=\min\{{\frac{1}{3}},\alpha_1, \alpha_2\}$,
$c_{_{\!\!H}}$ is defined in \eqref{e:angle}, {${\bf{C}_0}$ } is the positive constant introduced in Proposition~\ref{p:ballCover} (that depends only on $(M,g,H,\varepsilon)$ when the left inverse is bounded by ${{\frac{2C_\varphi}{c_0\, \varepsilon }}}$), and {$C_{_{\!\Theta}}$ is so that {for all $\rho_1,\rho_2\in S\!N^*\!H$}
\begin{equation}
\label{e:thetaChange}
\sup_{\substack{{{\bf w}_1}\in T_{{\rho_1}}S\!N^*\!H\\ \Theta^+_{\rho_1}({\bf w}_{{1}})\leq \varepsilon}}
\inf_{\substack{{{\bf w}_2}\in T_{{\rho_2}}S\!N^*\!H\\ \Theta^+_{\rho_2}({\bf w}_{{2}})\leq \varepsilon}}
\big|\Theta^+_{\varphi_t(\rho_1)}(d\varphi_t)_{\rho_1}{\bf w}_1
-\Theta^+_{\varphi_t(\rho_2)}(d\varphi_t)_{\rho_2}{\bf w}_2\big|
\leq C_{_{\!\Theta}}d(\rho_1,\rho_2)^{\nu} e^{2\Lambda |t|}
\end{equation}
for all $t \in \mathbb{R}$}.
Next, let $0<\tau\leq\tau_0$, $\varepsilon_1\geq \tfrac{1}{5}$,
\[
0< \tilde R_0\leq \tfrac{1}{ \digamma}e^{- \digamma \Lambda T_0}\qquad \text{and}\qquad 0<\tilde r_0<\tilde R_0.
\]
Also, let $\{B_{0,i}\}_{i=1}^N \subset S\!N^*\!H$ be a collection of balls with centers in $E$ and radii $R_{0,i}=\tilde R_0$ so that
\[
E \subset \bigcup_{i=1}^NB_{0,i} \subset \tilde B.
\]
Using \eqref{e:angleChange} we let $L\geq 1$ be so that for all $q \in S\!N^*\!H$ and all ${\bf u} \in T_q S^*\!M\backslash \{0\}$ we have
${{\Theta}^+_{\varphi_s(q)}(d\varphi_s {\bf u} )}\geq \frac{1}{L}{\Theta}^+_q({\bf u} )$ provided $ s\geq 0.$
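One admissible choice of $L$ (a sketch, assuming \eqref{e:angleChange} holds with constant $C_{_{\!K}}\geq 1$ on a compact set containing the relevant trajectories): for $s\geq 0$ the left inequality in \eqref{e:angleChange} gives
\[
{\Theta}^+_{\varphi_s(q)}(d\varphi_s {\bf u})\geq \frac{e^{s/C_{_{\!K}}}}{C_{_{\!K}}}\,{\Theta}^+_q({\bf u})\geq \frac{1}{C_{_{\!K}}}\,{\Theta}^+_q({\bf u}),
\]
so $L=C_{_{\!K}}$ works.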
Next, for each $i \in \{1, \dots, N\}$ let
\[T_{_{\! B_{0,i}}}:= \inf_{\rho \in B_{0,i}} T(\rho) \qquad \text{for}\qquad
T(\rho):=\inf \big\{t\geq 0: \sup_{{\bf w}\in T_{\rho}S\!N^*\!H} {\Theta}^+_{\varphi_t(\rho)}(d\varphi_t{\bf w}) > 5L\varepsilon\big\},\]
where $\varepsilon=\varepsilon(M,g,H)$ as defined in \eqref{e:epsilon}.
Note that since ${\Theta}^+_\rho\leq \varepsilon$ for $\rho \in \tilde B$, then $T_{_{\! B_{0,i}}}>0$ for all $i \in \{1, \dots, N\}$. \\
{\noindent \bf Control of $ B_{0,i}$ before time $T_{_{\! B_{0,i}}}$.}
We claim that for all $\rho \in B_{0,i}$ and ${\bf w} \in T_\rho S\!N^*\!H$
\begin{equation}\label{e:expDecay}
\|d\varphi_t {\bf w}\|\leq \mathbf{B}(1+5L\varepsilon)e^{-t/\mathbf{B}}\|{\bf w}\|\quad\qquad 0\leq t< T_{_{\! B_{0,i}}}.
\end{equation}
Indeed, suppose that $0\leq t< T(\rho)$ for some $\rho \in B_{0,i}$. Then, ${\Theta}^+_{\varphi_t(\rho)}(d\varphi_{t}\mathbf{w})\leq 5L\varepsilon$ for all ${\bf w} \in T_\rho S\!N^*\!H$ and so, using that $\pi_\pm d\varphi_t=d\varphi_t\pi_{\pm},$
we have
\[
\|d\varphi_{t}\mathbf{w}\|\leq \|d\varphi_t\pi_+\mathbf{w}\|+\|d\varphi_t\pi_-\mathbf{w}\|
\leq (1+5L\varepsilon) \|d\varphi_{t}\pi_+\mathbf{w}\|
\leq (1+5L\varepsilon)\mathbf{B} e^{- t/\mathbf{B}}\|\mathbf{w}\|.
\]
From \eqref{e:expDecay} it follows that there exists $C_0>0$, depending only on $(M,g,H)$, so that
$$
\sup_{\rho \in B_{0,i}}|\det J_t|\leq C_0\, e^{-|t|/C_0} \qquad \text{ for all}\;\; t \in (0, T_{_{\! B_{0,i}}}).
$$
Suppose that $T_{_{\! B_{0,i}}} > 1$. By Lemma~\ref{l:nonlooping}, for all $\varepsilon_0>0$ there exist $\digamma_{_{\!\!M,g,K_{_{\!H}}}}>0$ and a function $\mathfrak{t}_0:[\varepsilon_0, +\infty) \to [1, +\infty)$ depending only on $(M,g,H,\varepsilon_0, C_0)$ so that the set $B_{0,i}$ can be $(\varepsilon_0, \mathfrak{t}_0, \digamma_{_{\!\!M,g,K_{_{\!H}}}})$-controlled up to time $T_{_{\! B_{0,i}}}$ in the sense of Definition \ref{d:control}. In addition, by Lemma~\ref{l:nonlooping}, given $\varepsilon_1 >0$ and any $0<r \leq \tfrac{1}{ \digamma} e^{- \digamma \Lambda T_0} \tilde r_0 $, there exist balls $\{\tilde B_{1,k}\}_{k}\subset S\!N^*\!H$ with radii $R_{1,k}\in [0, \tfrac{1}{4}\tilde R_0]$ so that
\begin{equation}\label{e:noloop}
{\bigcup_{t=\mathfrak{t}_0(\tfrac{1}{5})}^{T_{_{\! B_{0,i}}}} \varphi_t(\Lambda^\tau_{B_{0,i} \backslash\cup_k\tilde{B}_{1,k}}(r))\;\bigcap \;\Lambda_{{S\!N^*\!H}\backslash\cup_k\tilde{B}_{1,k}}^\tau(r)=\emptyset,}
\end{equation}
\begin{equation}\label{e:noloop-radius}
\sum_k R_{1,k}^{n-1}\leq \frac{\varepsilon_1}{2} \tilde R_0^{n-1} \qquad \text{and}\qquad {\inf_k} R_{1,k} \geq e^{-\mathbf{D}\Lambda T_0}\tilde R_{0}.
\end{equation}
In the case in which $T_{_{\! B_{0,i}}} \leq 1$ we will not attempt to control $B_{0,i}$ for times smaller than $T_{_{\! B_{0,i}}}$. Indeed, we will set $t_0=1$, interpret \eqref{e:noloop} and \eqref{e:noloop-radius} as empty statements, and define every ball $\tilde B_{1,k}$ as the empty set.
We now set $\varepsilon_0=\frac{1}{10}$ so that $\varepsilon_1\geq \frac{1}{5}$.\\
{\noindent \bf Control of $ B_{0,i}$ after time $T_{_{\! B_{0,i}}}$.} Set $A:=\bigcup_{i=1}^N B_{0,i}$.
Next, suppose that $\rho \in B_{0,i}$ and $t\geq T_{_{\! B_{0,i}}}$ are so that $d(\varphi_t(\rho),A)\leq \tilde c \,e^{-2\Lambda|t|}$ where
\[\tilde c:=\min\Big\{\tfrac{1}{3}\big[\varepsilon c_{_{\!H}}\big]^{\frac{1}{\nu}}, \delta_0, \delta_F \Big\},\]
with $\delta_F$ defined in~\eqref{e:defFunction}, $\delta_0$ defined in \eqref{e:F}, and $c_{_{\!H}}$ defined in \eqref{e:angle}.
Since by \eqref{e:digamma} the parameter $\digamma$ is chosen so that $\tfrac{1}{\digamma} \leq \min\{ {\frac{\varepsilon^{\frac{1}{\nu}}}{C_{_{\!\Theta}}^{\frac{1}{\nu}}}},\frac{1}{11}\}$ and $\tilde{R}_0<\frac{1}{\digamma}e^{-\digamma\Lambda T_0}$, we have {$\tilde R_0 \leq {\frac{\varepsilon^{{\frac{1}{\nu}}}}{C_{_{\!\Theta}}^{{\frac{1}{\nu}}}}}\, e^{-{\frac{2}{\nu}} \Lambda T_0}.$} Thus, {using~\eqref{e:thetaChange}}, {$L\geq {1}$}, and that $\rho \in B_{0,i}$, there exists ${\bf w} \in T_\rho S\!N^*\!H$ for which
\[
{\Theta}^+_{\varphi_{_{\!T_{_{\! B_{0,i}}}}}\!\!(\rho)}(d\varphi_{_{\!T_{_{\! B_{0,i}}}}} \!\!{\bf w}) \geq 4L\varepsilon.
\]
It then follows by the definition of $L$ that, if $t=T_{_{\! B_{0,i}}} +s$ for some $s> 0$, then
$
{\Theta}^+_{\varphi_t(\rho)}(d\varphi_t {\bf w})
= {\Theta}^+_{\varphi_s( \varphi_{_{\!T_{_{\! B_{0,i}}}}}( \rho))}(d\varphi_s (d\varphi_{_{\!T_{_{\! B_{0,i}}}}} \!\!{\bf w}))
\geq \tfrac{1}{L}{\Theta}^+_{\varphi_{_{\!T_{_{\! B_{0,i}}}}}\!\!(\rho)}(d\varphi_{_{\!T_{_{\! B_{0,i}}}}} \!\!{\bf w}) \geq 4\varepsilon.
$
In particular,
\begin{equation}\label{e:theta1}
{\Theta}^+_{\varphi_t(\rho)}(d\varphi_t {\bf w} +r H_p) \geq 4\varepsilon \qquad \text{for all}\;\; r \in {\mathbb R}.
\end{equation}
In addition, we note that
\begin{equation}\label{e:theta2}
{\Theta}^+_{\varphi_t(\rho)}({\bf v}) \leq 2\varepsilon\qquad \text{for all}\;\;{\bf v} \in T_{\varphi_t(\rho)} \mathcal F_{\varphi_t(\rho)}.
\end{equation}
Indeed, this follows from the estimate in \eqref{e:angle} together with the facts that $\Theta^+_\rho\leq \varepsilon$, $B_{0,i}$ is a ball with radius $\tilde R_0$ and center in $E$, and
\[
d(\varphi_t(\rho),\rho)\leq d(\varphi_t(\rho),A) + \text{diam}(E) +\tilde R_0 \leq \tilde c \,e^{-2\Lambda|t|} + 2(\tfrac{3}{4}) \delta_\varepsilon +\tfrac{1}{\digamma} \leq [\varepsilon c_{_{\!H}}]^{\frac{1}{\nu}}.
\]
We have also used that $\tilde c \leq \tfrac{1}{3} [\varepsilon c_{_{\!H}}]^{\frac{1}{\nu}}$, $\delta_\varepsilon \leq \tfrac{2}{9}[\varepsilon c_{_{\!H}}]^{\frac{1}{\nu}}$, and $\tfrac{1}{\digamma} \leq \frac{1}{3}[\varepsilon c_{_{\!H}}]^{\frac{1}{\nu}}$ by \eqref{e:digamma}.
From \eqref{e:theta1} and \eqref{e:theta2} it follows that for all $r\in {\mathbb R}$ and $(\rho, t) \in B_{0,i}\times [T_{_{\! B_{0,i}}}, \infty)$ with $d(\varphi_t(\rho),A)\leq \tilde c \,e^{-2\Lambda|t|}$ we have
\[
\inf \{|{\Theta}^+_{\varphi_t(\rho)}(d\varphi_t {\bf w}+ r H_p) -{\Theta}^+_{\varphi_t(\rho)}({\bf v})| : {\bf v} \in T_{\varphi_t(\rho)} \mathcal F_{\varphi_t(\rho)}\} \geq 2\varepsilon{\|{\bf w}\|}.
\]
Moreover, we claim that {there is $c_{_{\!M,g}}>0$ depending only on $(M,g)$ so that}
\begin{equation}
\label{e:aardvark}
\|d\varphi_t {\bf w}+{\bf v}\| \geq {\frac{\varepsilon\, c_{_{\!M,g}}}{2 C_{\varphi}}}\,e^{-\Lambda t}\|{\bf w}\|,
\end{equation}
for all ${\bf v} \in T_{\varphi_t(\rho)} \mathcal F_{\varphi_t(\rho)} \oplus {\mathbb R} H_p$.
{To see this, first observe that by continuity of $E_{\pm}$ and the fact that $E_{+}\cap E_-=\{0\}$, there exists $c_{_{\!M,g}}>0$ depending only on $(M,g)$ so that for all ${\bf{v}}\in TT^*\!M$ }
{\begin{equation}\label{e:normCompare}
c_{_{\!M,g}}(\|\pi_+{\bf{v}}\|+\|\pi_-{\bf{v}}\|)\leq \|{\bf{v}}\|\leq \|\pi_+{\bf{v}}\|+\|\pi_-{\bf{v}}\|.
\end{equation}}
Next, suppose that $\|\pi_+{\bf{v}}\|<\frac{3}{2}\|\pi_+d\varphi_t {\bf w}\|$. Then, by \eqref{e:normCompare}, \eqref{e:theta1}, and \eqref{e:theta2},
{\begin{align*}
\|d\varphi_t {\bf w}+{\bf v}\|&\geq c_{_{\!M,g}}( \|\pi_-d\varphi_t {\bf w}\|-\|\pi_-{\bf v}\|)\\
&\geq c_{_{\!M,g}}( 4\varepsilon \|\pi_+d\varphi_t {\bf w}\|-2\varepsilon\|\pi_+{\bf v}\|)\geq c_{_{\!M,g}}\varepsilon\|\pi_+d\varphi_t{\bf{w}}\|.
\end{align*}}
On the other hand, suppose that $\|\pi_+{\bf{v}}\|\geq \frac{3}{2}\|\pi_+d\varphi_t {\bf w}\|$. Then, since $\varepsilon \leq \frac{1}{2}$, $$
\|d\varphi_t {\bf w}+{\bf v}\|\geq c_{_{\!M,g}}(\|\pi_+ {\bf v}\|-\|\pi_+d\varphi_t{\bf w}\|)\geq c_{_{\!M,g}}{\tfrac{1}{2}}\|\pi_+d\varphi_t {\bf w}\|{\geq c_{_{\!M,g}} {\varepsilon}\|\pi_+d\varphi_t {\bf w}\|}.
$$
Also, note that
$$
\|\pi_+d\varphi_t{\bf w}\|=\|d\varphi_t\pi_+ {\bf w}\|\geq {\tfrac{1}{C_\varphi}}e^{-\Lambda|t|}\|\pi_+{\bf w}\|,
$$
and
{$$
\|{\bf w}\|\leq \|\pi_+{\bf w}\|+\|\pi_-{\bf w}\| \leq (1+\Theta_\rho^+({\bf w}))\|\pi_+{\bf w}\|\leq (1+\varepsilon)\|\pi_+{\bf w}\|.
$$}
The proof of \eqref{e:aardvark} follows from noticing that $\frac{\varepsilon}{1+\varepsilon}\geq \frac{\varepsilon}{2}$ since $\varepsilon<1$.
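For the reader's convenience, the chain of estimates yielding \eqref{e:aardvark} is, in either case for ${\bf v}$, obtained by combining the last three displays:
\begin{align*}
\|d\varphi_t {\bf w}+{\bf v}\|
\;\geq\; c_{_{\!M,g}}\,\varepsilon\,\|\pi_+d\varphi_t {\bf w}\|
\;\geq\; \frac{c_{_{\!M,g}}\,\varepsilon}{C_\varphi}\, e^{-\Lambda t}\,\|\pi_+{\bf w}\|
\;\geq\; \frac{c_{_{\!M,g}}\,\varepsilon}{(1+\varepsilon)C_\varphi}\, e^{-\Lambda t}\,\|{\bf w}\|
\;\geq\; \frac{\varepsilon\, c_{_{\!M,g}}}{2 C_\varphi}\, e^{-\Lambda t}\,\|{\bf w}\|.
\end{align*}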
Since $d(\varphi_t(\rho),A)\leq\tilde c \,e^{-2\Lambda|t|}\leq \delta_0$, we conclude by \eqref{e:F} and \eqref{e:aardvark} that for all $s \in {\mathbb R}$
\begin{align*}
\|dF(d\varphi_t {\bf w}+sH_p)\|
&\geq c_0 \inf\{ \|d\varphi_t {\bf w}+{\bf v}\|: {\bf v} \in T_{\varphi_t(\rho)} \mathcal F_{\varphi_t(\rho)} \oplus {\mathbb R} H_p\} \\
&\geq { \frac{c_0\,\varepsilon\, c_{_{\!M,g}} }{2C_\varphi}} e^{-\Lambda t} \|{\bf w}\|.
\end{align*}
This means that if $\psi=F \circ \varphi_t$, then $d\psi(t,\rho)$ has a left inverse $L_{(t, \rho)}$ when restricted to ${\mathbb R}\partial_t \oplus {\mathbb R} {\bf w}$ with $\|L_{(t, \rho)}\| \leq {\frac{2C_\varphi }{c_0\, \varepsilon\, {c_{_{\!M,g}}} }}e^{t\Lambda}$.
In particular, for any $t\geq T_{_{\! B_{0,i}}}$ so that $d(\varphi_t(\rho),A)\leq \tilde c\, e^{-2\Lambda|t|},$ the hypotheses of Proposition~\ref{p:ballCover} apply to the set $A$ with $t_0=T_{_{\! B_{0,i}}}$, $B=B_{0,i}$, $R=\tilde R_0$, $\beta =\Lambda$, $c={\frac{2C_\varphi}{c_0\, \varepsilon\, c_{_{\!M,g}}}}$, and {$a=2\Lambda$}.
Fix $0<\tilde r_0< \tilde R_0$ and $0<r \leq \tfrac{1}{ \digamma} e^{- \digamma \Lambda T_0} \tilde r_0 $. Let
\[
\tilde r_2:= \max \Big\{ {6 e^{\Lambda T_0}r}, \; \tfrac{4}{\alpha_1} r,\; \tfrac{4}{\alpha_1}e^{-\mathbf{D}\Lambda T_0}\tilde R_0 \Big\},
\]
and note that, by the definition \eqref{e:digamma} of $\digamma$,
\[
\tilde r_2 < \min \Big\{ \tilde R_0,\; \alpha_2 e^{-{5}\Lambda T_0},\; \tfrac{1}{10 {\bf{C}_0} }e^{-10 \Lambda T_0}\Big\}.
\]
Indeed, this holds since $T_0>1$ and $e^{-\mathbf{D} \Lambda}< \tfrac{\alpha_1}{4}$ by the definition \eqref{e:D} of $\mathbf{D}$.
Setting $\tilde r_1:= \max\{2r, e^{-\mathbf{D} \Lambda T_0}\tilde R_0\}$ we have
\[
r<\tilde r_1,\qquad
\tilde r_1< \alpha_1\, \tilde r_2, \qquad
\tilde r_2 \leq \min\{\tilde R_0,\,\alpha_2\, e^{-{{5} \Lambda} T_0}\},\qquad
r < {\tfrac{1}{3}} e^{-\Lambda T_0}\tilde r_2.
\]
Therefore, we may apply Proposition~\ref{p:ballCover} to the cover $\{\Lambda_{\rho_j}^\tau(\tilde r_1)\}_{j \in \mathcal E_{_{\!B_{0,i}}}}$ of $\Lambda_{_{\!B_{0,i}}}^\tau(r)$ where
\begin{equation}
\label{e:ei3}
\mc{E}_{_{\!B_{0,i}}}:=\{j: \Lambda_{\rho_j}^\tau(\tilde r_1)\cap \Lambda_{_{\!B_{0,i}}}^\tau(r)\neq \emptyset\}.
\end{equation}
Then, there is a partition $\mc{E}_{_{\! B_{0,i}}}=\mc{G}_{_{\!B_{0,i}}}\cup \mc{B}_{_{\!B_{0,i}}}$ with
\begin{equation}\label{e:badballs0i}
|\mc{B}_{_{\!B_{0,i}}}|\leq {\bf{C}_0} \;\tilde r_2\frac{\tilde R_0^{n-1}}{\tilde r_1^{n-1}}\; T_0e^{{8}\Lambda T_0},
\end{equation}
and so that
\begin{equation}\label{e:noloop2}
\bigcup_{t=T_{_{\! B_{0,i}}}}^{T_0} \varphi_t\Big(\Lambda^\tau_{B_{0,i}}(r) \backslash \bigcup_{j\in \mc{B}_{_{\!B_{0,i}}}}\!\!\!\Lambda^\tau_{\rho_j}(\tilde r_1) \Big)\;\bigcap \;\Lambda_{A}^\tau(r)=\emptyset.
\end{equation}
Here ${\bf{C}_{0}}$ coincides with the positive constant used in the definition \eqref{e:digamma} of $\digamma$.
Combining \eqref{e:noloop} with \eqref{e:noloop2}, and using that $E \subset A$ and $0<r< \frac{1}{\digamma} e^{-\digamma \Lambda T_0}\tilde r_0$, we obtain
\begin{equation}\label{e:noloop3}
\bigcup_{t=t_0}^{T_0} \varphi_t\Big(\Lambda^\tau_{B_{0,i} \backslash\cup_k\tilde{B}_{1,k}}(r) \backslash \bigcup_{j\in \mc{B}_{_{\!B_{0,i}}}}\!\!\!\Lambda^\tau_{\rho_j}(\tilde r_1) \Big)\;\bigcap \;\Lambda_{E\backslash\cup_k\tilde{B}_{1,k}}^\tau(r)=\emptyset.
\end{equation}
In particular, there are balls $\{\tilde{B}_{2,j}\}_j$ with radii $R_{2,j}=\tilde r_1$ so that
$$
\bigcup_{t=t_0}^{T_0} \varphi_t(\Lambda^\tau_{B_{0,i}\backslash [\cup_{k,j}\tilde{B}_{1,k}\cup \tilde{B}_{2,j}]}(r))\cap \Lambda_{E\backslash\cup_k\tilde{B}_{1,k}}^\tau(r)=\emptyset.
$$
In addition,
\begin{equation}\label{e:noloop-radius2}
\sum_j R_{2,j}^{n-1}\leq {\bf{C}_0} \tilde r_2\tilde R_0^{n-1}\; T_0e^{{8}\Lambda T_0} \leq \frac{\varepsilon_1}{2}\tilde R_0^{n-1},
\end{equation}
where the first inequality is due to \eqref{e:badballs0i} and the second one is a consequence of the fact that $\tilde r_2< \tfrac{1}{10 {\bf{C}_0}} e^{-9 \Lambda T_0}$ and $\frac{\varepsilon_1}{2} \geq \tfrac{1}{10}$.\\
Repeating this argument with $B_{0,i}$ for every $i\in\{1, \dots, N\}$ we conclude that there exist balls $\tilde B_{\ell}$ of radius $R_{\ell}$ centered in $E$ so that
\begin{equation}\label{e:conclusion1}
{\Lambda^\tau_{E\setminus \cup_\ell \tilde B_{\ell}}(r)\;\;\text{ is }\;\;[\mathfrak{t}_0(\tfrac{1}{5}),T_0]\text{ non-self looping}}.
\end{equation}
{Note that $R_\ell= \tilde r_1 \in [0, \tfrac{1}{4}\tilde R_0]$ since $\tilde r_1=\max\{2r, e^{-\mathbf{D} \Lambda T_0}\tilde R_0\}$ while $2r\leq \tfrac{2}{\digamma}\tilde r_0 \leq \tfrac{2}{11} \tilde r_0 \leq \tfrac{1}{4} \tilde R_0$ and $e^{-\mathbf{D} \Lambda}< \tfrac{1}{4}$ by the definition \eqref{e:D} of $\mathbf{D}$. }
Also, by \eqref{e:noloop-radius} and \eqref{e:noloop-radius2},
\begin{equation}\label{e:conclusion2}
\sum_\ell R_\ell^{n-1}\leq \sum_{i=1}^N \Big(\sum_{k}R_{1,k}^{n-1}+\sum_{j}R_{2,j}^{n-1} \Big) \leq \varepsilon_1 \sum_{i=1}^N \tilde R_0^{n-1}.
\end{equation}
Finally, since $R_{1,k}\geq e^{-\mathbf{D}\Lambda T_0}\tilde R_0$ for all $k$ and {$R_{2,j}=\tilde r_1 \geq e^{-\mathbf{D}\Lambda T_0}\tilde R_0$} for all $j$,
\begin{equation}\label{e:conclusion3}
R_\ell \geq e^{-\mathbf{D}\Lambda T_0}\tilde R_0.
\end{equation}
Relations \eqref{e:conclusion1}, \eqref{e:conclusion2} and \eqref{e:conclusion3} show that $E$ can be $(\tfrac{1}{5}, \mathfrak{t}_{2}, \digamma)$-controlled up to time $T_0$ as claimed in \eqref{e:Bcontrol}.
\needspace{2in}
\begin{center}
{\underline{\bf Constructing the complete cover}}
\end{center}
We now partition $\{\rho_j\}_{j=1}^{N_{r_1}}$. Let $t_0=\max\{t_1, \mathfrak{t}_2(\tfrac{1}{5})\}$ where $t_1$ is defined in \eqref{e:t0def} and $\mathfrak{t}_{2}$ is defined in~\eqref{e:Bcontrol}. By \eqref{e:badD} and \eqref{e:goodD}, for each $i\in \mc{I}_K$ we have constructed a partition $\mc{E}_{_{\!D_i}}=\mc{G}_{_{\!D_i}}\cup \mc{B}_{_{\!D_i}}$ of
$\mc{E}_{_{\!D_i}}=\{j: \Lambda_{\rho_j}^\tau(r_1)\cap \Lambda_{_{\!{D_i}}}^\tau(r_0)\neq \emptyset\}$
where
\begin{equation}\label{e:bothD}
|\mc{B}_{_{\!D_i}}|\leq {{\bf{C}_0}} \;\frac{{R}^{n-1}}{{r_1^{n-2}}}\; T_0e^{{4\Lambda}T_0}
\quad\text{and}\quad
\bigcup_{j\in \mc{G}_{_{\!D_i}}}\Lambda^\tau_{\rho_j}(r_1)\;\text{is}\; [t_0,T_0]\text{ non-self looping}.
\end{equation}
Moreover, by \eqref{e:Bballs}, for each $i\in \mc{I}_{\mc{S}_H}$ and $m>0$ integer we have constructed a partition of
$\mc{E}_{_{\!B_i}}=\{j: \Lambda_{\rho_j}^\tau(r_1)\cap \Lambda_{_{\!{B_i}}}^\tau(r_0)\neq \emptyset\}$
by sets $\{\mc{G}_{_{\!B_i,\ell}}\}_{\ell =0}^m\subset \{1,\dots N_{r_1}\}$, $\mc{B}_{_{\!B_i}}\subset \{1,\dots N_{r_1}\}$ satisfying
\begin{align}\label{e:Bboth}
\mc{E}_{_{\!B_i}}\subset \mc{B}_{_{\!B_i}}\cup \displaystyle\bigcup_{\ell =0}^m\mc{G}_{_{\!B_i,\ell}},\qquad
\bigcup_{j\in \mc{G}_{_{\!B_i,\ell}}}\Lambda_{\rho_j}^\tau(r_1)\text{ is } [t_0,2^{-\ell}T_0]\text{ non-self looping}, \notag\\
|\mc{G}_{_{\!B_i,\ell}}|\leq C_{_{M,p}}\frac{\delta_\varepsilon^{n-1}}{5^{\ell}}\frac{1}{r_1^{n-1} }\qquad \text{ and }
\qquad|\mc{B}_{_{\!B_i}}|\leq C_{_{M,p}} \frac{\delta_\varepsilon^{n-1}}{5^{m+1}}\frac{1}{r_1^{n-1} }.
\end{align}
Next, define
\[
m:=\Big\lfloor \frac{\log T_0- \log {t_0}}{\log 2} \Big\rfloor \qquad \text{and}\qquad \mc{B}:=\bigcup_{i \in \mc{I}_{K}} \mc{B}_{_{\!D_i}} \cup \bigcup_{i \in \mc{I}_{\mc{S}_H}} \mc{B}_{_{\!B_i}}.
\]
For each $i\in \mc{I}_K$ set $\mc{G}_{{i,0}}:=\mc{G}_{_{\!D_i}}$ and {$\mc{G}_{i,\ell}:=\mc{G}_{_{\!B_{i,\ell-1}}}$} for $\ell \geq 1$. Then, there exists {$I<\infty$}, depending only on $(M,H,p)$, so that after relabelling the indices $i \in \mc{I}_{K} \cup \mc{I}_{\mc{S}_H}$ there are sets $\{\mc{G}_{i,\ell}:\; 0\leq \ell \leq m, \; 1\leq i\leq I\}$ so that
\begin{gather*}
\bigcup_{i=1}^I\bigcup_{\ell=0}^m \mc{G}_{i,\ell}\cup \mc{B}= \{1,\dots N_{r_1}\},\qquad
\bigcup_{j\in \mc{G}_{i,\ell}}\Lambda_{\rho_j}^\tau(r_1)\text{ is } [t_0,2^{-\ell}T_0]\text{ non-self looping}.
\end{gather*}
In addition, there exists $c>0$, which may change from line to line, so that
\begin{align*}
|\mc{B}|
&\leq c\, r_1^{1-n} \Big( |\mc{I}_{K}| \;r_1{{R}^{n-1}}\; T_0e^{{4\Lambda}T_0} + |\mc{I}_{\mc{S}_H}|\frac{\delta_\varepsilon^{n-1}}{5^{m+1}}\Big) \\
& \leq c\, r_1^{1-n} \Big( r_1 T_0e^{{4\Lambda}T_0} + e^{-T_0 \log 5 }\Big).
\end{align*}
Here, we have used that $|\mc{I}_{K}| \leq c\,R^{-(n-1)}$ and $|\mc{I}_{\mc{S}_H}| \leq c\,\delta_\varepsilon^{-(n-1)}$. Since $r_1 \leq e^{-CT_0}$ and we may enlarge $C$ so that $C>4\Lambda +1 +\log 5$, we conclude that
\[
|\mc{B}| \leq c\, e^{-T_0 \log 5}r_1^{1-n},
\]
as claimed. In addition, note that $|\mc{G}_{_{\!D_i}}|\leq |\mc{E}_{_{\!D_i}}|\leq c\, R^{n-1}{r_1^{-(n-1)}}$ for each $i \in \mc{I}_{K}$. Therefore, since $R\leq 1$ and $\delta_\varepsilon\leq 1$, for all $\ell\in \{1, \dots, m\}$ and all $i \in \{1,\dots, I\}$
\[|\mc{G}_{i,\ell}|\leq c\, \frac{1}{5^{\ell}}\, r_1^{1-n}.\]
{ Finally, we note that by construction the constants $c=c(M,g,H)$ and $C=C(M,g,H)$ are uniform for $H$ varying in a small neighborhood of a fixed submanifold $H_0 \subset M$.}
\end{proof}
\begin{lemma}
\label{l:noFocalTubes}
Suppose that $(M,g)$ has no focal points and $\mc{S}_H=\emptyset.$
Then, the conclusions of Lemma \ref{l:anosov} hold.
\end{lemma}
\begin{proof}
Since $S\!N^*\!H$ is compact, by Lemma~\ref{P:1} there exist positive constants $c_{_{\!K}},t_{_{\!K}},\delta_{_{\!K}}$ so that if $\rho \in K$ and $\varphi_t(\rho) \in \overline{ B(\rho, \delta_{_{\!K}})}$ for some $|t|>t_{_{\!K}}$, then there exists ${\bf w}={\bf w}(t,\rho) \in T_{\rho}(S\!N^*\!H)$ so that
\begin{equation}
\inf\{\|d\varphi_{t} ({\bf w})+{\bf v}\|:\; {\bf v} \in {T_{\varphi_{t}(\rho)}}\mc{F}_{\!\varphi_{t}(\rho)} \oplus {\mathbb R} H_p\}\geq c_{_{\!K}}\|{\bf w}\|.
\end{equation}
Cover $S\!N^*\!H$ with finitely many balls $\{D_i\}_{i \in I}\subset S\!N^*\!H$ of radius equal to $\delta_{_{\!K}}$. The remainder of the proof of this lemma is identical to that in the Anosov case since $\mc{S}_H=\emptyset$ implies that $D_i\cap \mc{S}_H=\emptyset$ {for all $i$}.
\end{proof}
\subsection{Proof of Theorem~\ref{T:tangentSpace}}
We first apply Lemma \ref{l:anosov} when $(M,g)$ has Anosov geodesic flow, or Lemma \ref{l:noFocalTubes} when $(M,g)$ has no focal points. Let $c>0$, $C>2$, {$I>0$}, $t_0>1$ be the constants whose existence is given by the lemmas. Then, let $\Lambda>\Lambda_{\text{max}}$, $0<\tau_0<\tau_{_{\!\SNH}}$, $0<\tau<\tau_0$,
\[
0<\varepsilon<\tfrac{1}{2}, \qquad 0<a<\tfrac{1-2\varepsilon}{\varepsilon}, \qquad \tilde c\geq \max\{C, \tfrac{\Lambda_{\text{max}}}{a}\},\qquad \varepsilon \big(1+ \tfrac{\Lambda}{\tilde c} \big)<\delta<\tfrac{1}{2},
\]
\[
T_0(h)= \tfrac{\varepsilon}{\tilde c}\log h^{-1},\qquad r_1(h)=h^\varepsilon, \qquad r_0(h)= h^\delta,
\]
and let $\{\Lambda_{\rho_j}^\tau(h^\varepsilon)\}_{j=1}^{N_{h^\varepsilon}}$ be {a $({\mathfrak{D}_n},\tau, h^\varepsilon)$-good cover of $S\!N^*\!H$. }
Then, since $\tilde c \geq C$, Lemmas \ref{l:anosov} and \ref{l:noFocalTubes} give that for each $i\in\{1,\dots, I\}$, and
$$
m:= \Big\lfloor \frac{\log T_0(h)- \log t_0}{\log 2} \Big\rfloor,
$$
there are sets of indices $\{\mc{G}_{i,\ell}\}_{\ell=0}^m\subset \{1,\dots, N_{h^\varepsilon}\}$ and $\mc{B}\subset \{1,\dots, N_{h^\varepsilon}\}$ so that
$$
\bigcup_{i=1}^I\bigcup_{\ell=0}^m\mc{G}_{i,\ell}\cup \mc{B}= \{1,\dots, N_{h^\varepsilon}\},
$$
and for every $i\in\{1,\dots, I\}$ and every $\ell\in \{0, \dots, m\}$
\[
\bigcup_{j\in \mc{G}_{i,\ell}}\Lambda_{\rho_j}^\tau(h^\varepsilon)\;\;\text{is}\;\;[t_0,2^{-\ell}T_0(h)]\;\;\text{ non-self looping},
\]
\[
|\mc{G}_{i,\ell}|\leq c \, 5^{-\ell}\, h^{\varepsilon(1-n)},\qquad \qquad |\mc{B}|\leq c \,h^{ \frac{c \,\varepsilon}{\tilde c} }\, h^{\varepsilon(1-n)}.
\]
Next, we apply Theorem~\ref{t:coverToEstimate} with $R(h)=h^\varepsilon$, $\alpha=a \varepsilon$, $t_\ell(h)=t_0$ for all $\ell$, $T_\ell(h)=2^{-\ell}T_0(h)$ for all $\ell$. Note that $R_0>R(h)\geq 5h^\delta$ for $h$ small enough since $\delta>\varepsilon$, and that $\alpha<1-2\varepsilon$ as needed. In addition, $T_\ell(h) \leq 2\alpha T_e(h)$ since $\tilde c \geq \tfrac{\Lambda_{\text{max}}}{a}$. It follows that {there exists $C>0$, and for all $N>0$ there exists $C_{_{\!N}}$ so that}
\begin{align*}
&h^{\frac{k-1}{2}}\Big|\int_H w ud\sigma_H\Big|\\
&\qquad \leq C\|w\|_{_{\!\infty}}\Big(\Big[h^{\frac{\varepsilon\, c}{2\tilde c}}+\tfrac{1}{\sqrt{\log h^{-1}}}\sum_\ell (\tfrac{2}{5})^{\frac{\ell}{2}}\Big]\|u\|_{{_{\!L^2(M)}}}+\tfrac{\sqrt{\log h^{-1}}}{h}\sum_\ell (\tfrac{1}{10})^{\frac{\ell}{2}}\|Pu\|_{{_{\!L^2(M)}}}\Big)\\
&\qquad \quad+Ch^{-1}\|w\|_\infty \|Pu\|_{{\Hs{\frac{k+1}{2}}}} +{C_{_{\!N}}h^N\big(\|u\|_{{_{\!L^2(M)}}}+\|Pu\|_{{\Hs{\frac{k+1}{2}}}}\big)},
\end{align*}
which gives the desired result after choosing $h_0$ to be small enough.
{We note that if $H_0 \subset M$, there is a neighborhood $U$ of $H_0$ (in the $C^\infty$ topology) so that the constants $C$, {$C_{_{\!N}}$} and $h_0$ are uniform over $H\in U$, $w$ taken in a bounded subset of $C_c^\infty$, {and $N$ bounded above}. }
\qed
\subsection{Proof of Theorem~\ref{T:applications}}
We have already proved Theorem~\ref{a1} in Theorem~\ref{t:noConj1}. For Theorems~\ref{a3}, \ref{a4}, and \ref{a5} we refer the reader to~\cite[Section 5.4]{CG17}, where it is shown that $\mc{A}_H=\emptyset$ in Theorem~\ref{a3}, $\mc{S}_H=\emptyset$ in Theorem~\ref{a4}, and $\mc{A}_H=\emptyset$ in Theorem~\ref{a5}. Therefore, Theorem \ref{T:tangentSpace} can be applied to all these setups, yielding the desired conclusions.
\begin{proof}[Proof of {Theorem}~{\ref{a2}}]
Let $H$ be a geodesic sphere. Then, $H= \pi(\varphi_s(S_x^*M))$ for some $x\in M$ and $s>0$.
Next, we observe that, since $(M,g)$ has no conjugate points, the proof of Theorem~\ref{t:noConj1} (applied when the submanifold is the point $\{x\}$) yields the existence of a cover for $S_x^*M$, with suitable choices of $(R(h), t_\ell(h),T_\ell(h))$, so that Theorem~\ref{t:coverToEstimate} implies the conclusion of Theorem~\ref{t:noConj1} (which coincides with that of Theorem~\ref{T:applications}). Then, since $\varphi_s(S_x^*M)=S\!N^*\!H$, the result follows from flowing out the cover for time $s$ to obtain a cover for $S\!N^*\!H$. This cover has the same desired properties as the original one, but possibly with $R(h)$ replaced by $m_sR(h)$ for some $m_s>0$ independent of $h$. The result follows from applying Theorem~\ref{t:coverToEstimate} to the new cover.
\end{proof}
\begin{remark}
This proof in fact shows that there is a certain invariance of estimates under fixed time geodesic flow. That is, if one uses Theorem~\ref{t:coverToEstimate} to conclude an estimate on $H$, then essentially the same estimate will hold on $\pi\varphi_s(S\!N^*\!H)$ for any $s\in \mathbb{R}$ independent of $h$ provided that $\pi\varphi_s(S\!N^*\!H)$ is a finite union of submanifolds of codimension $k$ for some $k$.
\end{remark}
\begin{proof}[Proof of {Theorem}~{\ref{a6}}]
For this part we assume that $(M,g)$ has Anosov geodesic flow, non-positive curvature, and $H$ is a submanifold of codimension $k>1$. We will prove that $\mc{A}_H=\emptyset$; by Theorem \ref{T:tangentSpace} this implies the desired conclusion. {In what follows we write $\pi$ for both $\pi:TM \to M$ and $\pi:T^*M \to M$ since it should be clear from context which map is being used.}
We proceed by contradiction. Suppose there exists $\rho \in \mc{A}_H \subset S\!N^*\!H$.
We write $\rho^\sharp \in S\!N\!H$ and note
\[T_{\rho^\sharp } NH=\{{\bf w}: \;\;\exists \, N\!:\!(-\varepsilon,\varepsilon)\to N\!H \; \text{smooth field,\;} N(0)=\rho^\sharp, N'(0)={\bf w}\}.\]
Moreover, for $v \in T_{\pi(\rho^\sharp)}H$ and ${\bf w}\in T_{\rho^\sharp } N\!H $ with {$d\pi {\bf w} \in T_{\pi(\rho^\sharp)}H \backslash\{ 0\}$} and ${\bf w}=N'(0)$ with $N$ as before,
$$
\langle \tilde{\nabla}_{d\pi {\bf w}}N\,,\, v\rangle_{_{\!g(\pi(\rho^\sharp))}} =-\langle \rho^\sharp \,,\, \Pi_H(d\pi {\bf w},v)\rangle_{_{\!g(\pi(\rho^\sharp))}}.
$$
Here,
$\tilde{\nabla}$ denotes the Levi--Civita connection on $M$ and $\Pi_H:TH\times TH\to NH$ is the second fundamental form of $H$.
The
equality follows from the definition of the second fundamental form, together with the fact that $N$ is a normal vector field.
Now, let $v\in T_{\pi(\rho)}H$ be a direction of principal curvature $\kappa$. Then, {for all ${\bf w}\in T_{\rho^\sharp}S\!N\!H$ with $d\pi {\bf w}=v$,}
\begin{equation}
\label{e:curvature}
- \langle \tilde{\nabla}_{d\pi {\bf w}}N\,,\, v\rangle_{_{\!g(\pi(\rho^\sharp))}} =\langle \rho^\sharp, \Pi_H(v, v)\rangle_{_{\!g(\pi(\rho^\sharp))}}
=\kappa \| v\|^2_{_{\!g(\pi(\rho^\sharp))}}.
\end{equation}
We will derive a contradiction from~\eqref{e:curvature}, together with the assumption that $T_{\rho}S\!N^*\!H=N_+(\rho)\oplus N_-(\rho)$, by showing that the stable and unstable manifolds at $\rho^\sharp$ have signed second fundamental forms. In particular, note that $E_{\pm}^\sharp(\rho^\sharp)$ are given by $T\mc{W}_\pm(\rho^\sharp)$ where $\mc{W}_{\pm}(\rho^\sharp)$ are respectively the stable and unstable manifolds through $\rho^\sharp$. Furthermore, these manifolds are $\mc{W}_{\pm}(\rho^\sharp)= N\!\mathcal{H}_{\pm}$ where $\mathcal{H}_{\pm}\subset M$ are smooth submanifolds given by the stable/unstable horospheres in $M$ so that $\rho^\sharp\in N\!\mathcal{H}_{\pm}$~\cite[Section 4.1]{Ruggiero}. The signed curvature of $\mathcal{H}_{\pm}$ implies that there is $c>0$ so that
\begin{equation}
\label{e:signedCurve}
\pm \Pi_{\mathcal{H}_{\pm}}\geq c>0.
\end{equation}
We postpone the proof of this fact until the end of the argument and first derive our contradiction.
Since $T_\rho S\!N^*\!H=N_+(\rho)\oplus N_-(\rho)$, we have
$
T_{\rho^\sharp}S\!N\!H=N^\sharp_+(\rho)\oplus N^\sharp_{-}(\rho).
$
Observe that $d\pi:E^\sharp_{\pm}(\rho)\cap TSM\to T_{\pi(\rho)}M$ is injective where $\pi:TM\to M$ is the standard projection. In particular, for $v\in T_{\pi(\rho)}M$, $\dim(d\pi^{-1}(v)\cap E^\sharp_{\pm}(\rho))\leq 1.$ Since $k>1$, there exist ${\bf w}_1,{\bf w}_2\in T_{\rho^\sharp}SNH$ linearly independent with $d\pi {\bf w}_i=v$ for $i=1,2$. In particular, using that $T_{\rho^\sharp}(S\!N\!H)=N_+^\sharp(\rho)\oplus N_-^\sharp(\rho)$, there are ${\bf w}_{\pm}\in N_\pm^\sharp(\rho)$ such that $d\pi {\bf w}_{\pm}=v$.
Now, since ${\bf w}_{\pm}\in T_{\rho^\sharp}(S\!N\!\mathcal{H}_{\pm})$, using~\eqref{e:signedCurve},
$$
- \langle \tilde{\nabla}_{d\pi {\bf w_-}}N\,,\, v\rangle_{_{\!g(\pi(\rho^\sharp))}} = \langle \rho^\sharp,\Pi_{\mathcal{H}_{+}}(v,v)\rangle\geq c\|v\|^2,
$$
and
$$
- \langle \tilde{\nabla}_{d\pi {\bf w_+}}N\,,\, v\rangle_{_{\!g(\pi(\rho^\sharp))}} = \langle \rho^\sharp,\Pi_{\mathcal{H}_{-}}(v,v)\rangle\leq -c\|v\|^2.
$$
This contradicts~\eqref{e:curvature} as the principal curvature $\kappa$ has a unique sign.
We now prove~\eqref{e:signedCurve}. We have by~\cite[Theorem 1, part (6)]{Eberlein73b} that since $(M,g)$ has Anosov flow and non-positive curvature, there are $c,t_0>0$ so that for any perpendicular Jacobi field $Y(t)$ with $Y(0)=0$, and $t\geq t_0$,
\begin{equation}
\label{e:curvedHorosphere}
\langle Y'(t),Y(t)\rangle \geq c\|Y(t)\|^2.
\end{equation}
By~\cite[Proof of Lemma 4.2]{Ruggiero} the second fundamental form to $\mathcal{H}_{\pm}$ at $\pi(\rho^\sharp)\in \mathcal{H}_{\pm}$ is given by
$$
\pm \Pi_{\mathcal{H}_{\pm}}=\mp \lim_{r\to \pm\infty} U_r(0)
$$
where $U_r(t)=Y_r'(t)Y_r^{-1}(t)$ and $Y_r(t)$ is a matrix of perpendicular Jacobi fields along $t \mapsto \pi \varphi_t(\rho)$ satisfying
$Y_r(r)=0$ and $Y_r(0)=\operatorname{Id}.$
In particular, applying~\eqref{e:curvedHorosphere} to the Jacobi field $\tilde{Y}(t)=Y_r(r-t)$ at $t=r$ gives, for $r\geq t_0$,
\begin{align*}
\langle U_r(0)x,x\rangle&=\langle Y_r'(0)x,Y_r(0)x\rangle=-\langle \tilde{Y}'(r)x,\tilde{Y}(r)x\rangle\leq -c\|Y_r(0)x\|^2=-c\|x\|^2.
\end{align*}
Similarly, for $r\leq -t_0$, we apply~\eqref{e:curvedHorosphere} to $\tilde{Y}(t)=Y_r(r+t)$ at $t=|r|$ to obtain
$$
\langle U_r(0)x,x\rangle=\langle\tilde{Y}'(|r|)x,\tilde{Y}(|r|)x\rangle\geq c\|x\|^2.
$$
This yields that $\pm \Pi_{\mathcal{H}_{\pm}}=\mp\lim_{r\to \pm \infty}U_r(0)\geq c>0$ as claimed.
\end{proof}
\subsection{Proof of Theorem~\ref{t:surfaces}}
For {Theorem}~{\ref{a3}} we refer the reader to~\cite[Section 5.4]{CG17} where it is shown that $\mc{A}_H=\emptyset$. Therefore, Theorem \ref{T:tangentSpace} can be applied to this setup yielding the desired conclusions.\smallskip
We proceed to prove Theorem \ref{a7}.
Fix a geodesic $H\subset {M}$.
We prove that Theorem {\ref{a7}} holds under the following curvature assumption.
Suppose there exist $T>0$, and $c_1, c_2, c_3>0$ so that for all $\rho_0, \rho_1 \in S\!N^*\!H$ with $d(\rho_0, \rho_1)=s\leq c_3$, and all $t_0, t_1\geq T$ with $\varphi_{t_0}(\rho_0),\varphi_{t_1}(\rho_1) \in S\!N^*\!H$, we have
\begin{equation}\label{e:modified curvature assumption}
-\int_{Q_s}K dv_{\tilde g} \geq c_1e^{-c_2/{\sqrt{s}}},
\end{equation}
where $Q_s$ is the quadrilateral domain in the universal cover, $(\tilde M, \tilde g)$, whose sides are the geodesics that join the points, ${\pi(\rho_0)}, {\pi(\rho_1)}, \pi(\varphi_{t_0}(\rho_0)), \pi(\varphi_{t_1}(\rho_1))$.
At the end of the proof we shall show that the integrated curvature assumption \eqref{e:intCurve} implies the assumption in \eqref{e:modified curvature assumption}.
The first step in the proof is to show that there exist $r_0>0$ and $c_4>0$ so that the following holds. If $0<r\leq r_0$ and $\rho_0,\rho_1 \in S\!N^*\!H$ are such that there are $t_0,t_1 \geq T$ with $|t_0-t_1|<\frac{\tau_{_{\!\text{inj}H}}}{2}$ and
$$
d(\varphi_{t_0}(\rho_0),S\!N^*\!H)<r,\qquad d(\varphi_{t_1}(\rho_1),S\!N^*\!H)<r,
$$
then either
\begin{equation}\label{e:distance claim}
d(\rho_0,\rho_1)<c_2^2\big(\ln\tfrac{c_4}{r}\big)^{-{2}} \qquad \text{or}\qquad d(\rho_0,\rho_1)>c_3.
\end{equation}
To prove the claim in \eqref{e:distance claim} suppose that there is $\rho_0\in S\!N^*\!H$ with $d(\varphi_{t_0}(\rho_0),S\!N^*\!H)<r$ for some $r>0$. Then, there exists $C=C(M,g,H)\geq 1$ so that by changing $t_0$ to $\tilde{t_0}$ with $|t_0-\tilde{t}_0|\leq C r$ and $r>0$ small enough, we may assume that $\pi(\varphi_{{\tilde t}_0}(\rho_0))\in H$ and $d(\varphi_{\tilde t_0}(\rho_0),S\!N^*\!H)<2Cr$. Now, let $\rho_s\in S\!N^*\!H$, with $d(\rho_0,\rho_s)=s$ and suppose there is $t_s$ with $|t_0-t_s|<\tfrac{\tau_{_{\!\text{inj}H}}}{2}$ and $d(\varphi_{t_s}(\rho_s),S\!N^*\!H)<r$. As before, we can adjust ${t_s}$ to $\tilde t_s$, with $|t_s-\tilde t_s|\leq Cr$, in order to have $\pi(\varphi_{\tilde{t}_s}(\rho_s))\in H$ and $d(\varphi_{\tilde t_s}(\rho_s),S\!N^*\!H)<2Cr$. Let
\[\gamma_0(t):=\pi(\varphi_t(\rho_0)), \qquad \gamma_s(t):=\pi(\varphi_t(\rho_s)).\]
Note that, in the universal cover of $M$, $\tilde{M}$, $\gamma_s$ does not intersect $\gamma_0$ unless $\rho_0=\rho_s$. Indeed, suppose they did intersect at an angle $\beta$. Then, by the Gauss--Bonnet theorem, we would have
$$
0\geq\int_{\Delta_s} K\,dv_{\tilde g}=\beta \geq 0,
$$
where $\Delta_s$ is the triangular region enclosed by $\gamma_0$, $\gamma_s$ and $H$.
In particular, this would give $\beta=0$ and hence $\gamma_s=\gamma_0$ and $s=0$.
Next, suppose that $\gamma_0$ and $\gamma_s$ do not cross in the universal cover. Let $\alpha_s$ denote the angle between $\dot{\gamma}_s(\tilde t_s)$ and $H$, and let $\alpha_0$ denote the angle between $\dot{\gamma_0}(\tilde t_0)$ and $H$. This can be done since $\pi(\varphi_{{\tilde t}_0}(\rho_0))\in H$ and $\pi(\varphi_{\tilde{t}_s}(\rho_s))\in H$. Then, by the Gauss--Bonnet theorem,
$$
\pi-\alpha_0-\alpha_s = -\int_{Q_s} K\, dv_{\tilde g}
$$
where $Q_s$ is the quadrilateral formed by $\gamma_0$, $\gamma_s$, the copy of $H$ in $\tilde M$ that contains ${\pi(\rho_0)}, {\pi(\rho_s)}$, and the copy of $H$ that contains $\pi(\varphi_{{\tilde t}_0}(\rho_0)), \pi(\varphi_{\tilde{t}_s}(\rho_s))$.
Since $d(\varphi_{\tilde t_0}(\rho_0),S\!N^*\!H)\leq 2Cr$, we have $0<\frac{\pi}{2}-\alpha_0\leq 2Cr$. Hence,
$$
\frac{\pi}{2}-\alpha_s\geq-\int_{Q_s} K\,dv_{\tilde g}-2Cr.
$$
In particular, by the curvature assumption \eqref{e:modified curvature assumption} we have that if $s\leq c_3$,
$$
\frac{\pi}{2}-\alpha_s\geq c_1e^{-c_2/ {\sqrt{s}}} -2Cr.
$$
Let $\tilde C=\tilde C(H, M, g)>0$ be so that if $\frac{\pi}{2}-\alpha_s \geq 2 \tilde C r$, then $d(\varphi_{\tilde{t}_s}(\rho_s),S\!N^*\!H)>2Cr$.
Then, for $ c_2^2\big(\ln(c_4r^{-1})\big)^{-{2}}<s \leq c_3$, with $c_4=c_1/\big(2(C+\tilde C)\big)$, we have
$$
\frac{\pi}{2}-\alpha_s > 2\tilde{C}r.
$$
This implies that $d(\varphi_{\tilde{t}_s}(\rho_s),S\!N^*\!H)>2Cr$, and hence proves \eqref{e:distance claim}.
Let $\tau_0$ be the positive constant given in Theorem \ref{t:coverToEstimate} and $0<r\leq r_0$.
Next, we prove that there exists $C>0$ so that if {$0<r_1< r$}, then for every $0<\tau \leq \tau_0$, $T_0>T$, and every $({\mathfrak{D}_n},\tau, r_1)$-{good} cover of $S\!N^*\!H$ by tubes $\{\Lambda^\tau_{\rho_j}(r_1)\}_{j=1}^{N_{r_1}}$, there is a partition $\{1, \dots, N_{r_1}\}=\mc{B} \cup \mc{G}$ so that \begin{equation}\label{e: artichoke}
\bigcup_{j\in \mc{G}} \Lambda^\tau_{\rho_j}(r_1) \;\;\text{is}\;\; (T,T_0) \; \text{non-self looping \;\;and\;\;} |\mc{B}|\leq C\frac{T_0}{T}\ln\big(\tfrac{c_4}{r}\big)^{-{2}}r_1^{-1}.
\end{equation}
Note that by splitting $[T,T_0]$ into intervals of length $\tau$, the claim in \eqref{e: artichoke} follows from showing that for each
$\tilde t \in [T, T_0]$
\begin{equation}\label{e: artichoke leaf}
\#\Bigg\{\rho_j:\; \bigcup_{|t-\tilde{t}|<\tfrac{\tau}{2}}\varphi_{t}(\Lambda_{\rho_j}^\tau(r_1))\cap \Lambda^\tau_{_{\!\SNH}}(r_1)\neq \emptyset\Bigg\}\leq C\ln\big(\tfrac{c_4}{r}\big)^{-{2}}r_1^{-1}.
\end{equation}
To prove \eqref{e: artichoke leaf} we start by covering $S\!N^*\!H$ by balls $\{B_\ell\}_{\ell=1}^L$ of radius $\tfrac{c_3}{2}$. Fix $\tilde t\geq T+\frac{\tau}{2}$. It follows from \eqref{e:distance claim} that for each $\ell \in \{1, \dots, L\}$, if
$$
N_\ell:= B_\ell\cap\{\rho:\; \exists\,{t}\in (\tilde t-\tfrac{\tau}{2}, \tilde t+\tfrac{\tau}{2}),\quad d(S\!N^*\!H,\varphi_{{t}}(\rho))<r\},
$$
then {there is $\rho_\ell\in N_\ell$ such that
$$
N_\ell\subset \{\rho \in S\!N^*\!H:\; d(\rho,\rho_\ell)<c_{_2}^2(\ln (c_4 r^{-1}))^{-2}\}.
$$}
In particular, {since $\{\Lambda_{\rho_j}^\tau(r_1)\}_{j=1}^{N_{r_1}}$ is a $(\mathfrak{D}_n,\tau,r_1)$ good cover for $S\!N^*\!H$ and $r_1<r$} there exists $C_n>0$ so that for each $\ell \in \{1, \dots, L\}$,
\begin{equation*}
\#\Big\{\rho_j:\; \Lambda_{\rho_j}^\tau(r_1)\cap B_\ell \neq \emptyset,\;\;\bigcup_{|t-\tilde{t}|<\tfrac{\tau}{2}}\varphi_{{t}}(\Lambda_{\rho_j}^\tau(r_1))\cap \Lambda^\tau_{_{\!\SNH}}(r_1)\neq \emptyset\Big\}
\leq C_n{c_{_2}^2} \ln(\tfrac{c_4}{r})^{-{2}} r_1^{-1}.
\end{equation*}
The claim in \eqref{e: artichoke leaf} follows from taking the union in $\ell$ over all the balls $B_\ell$.
Finally, let $\varepsilon>0$ and $\delta>0$ with $\varepsilon<\delta$. Also, set $r=h^\varepsilon$, $r_1=8h^\delta$ and
\[T_0= \gamma \log h^{-1}-\beta,
\qquad
0<\gamma<\tfrac{\delta -\varepsilon}{\Lambda_{\max}}, \qquad\beta<-\tfrac{\log C}{\Lambda_{\max}}.
\]
We have obtained a splitting of $\{1,\dots, N_h\}$ into $\mc{B}\cup \mc{G}$ with the tubes in $\mc{G}$ being $[T, T_0]$ non-self looping and such that
$$
|\mc{B}|\leq C\frac{T_0}{T} \big(\ln (c_4h^{-\varepsilon})\big)^{-{2}}h^{-\delta}.
$$
Using this cover in Theorem~\ref{t:coverToEstimate} completes the proof of Theorem~\ref{a7} since $\tfrac{T_0}{T}\leq \log h^{-1}$ and hence $h^\delta|\mc{B}|\leq \frac{C}{\log h^{-1}}$ for some $C>0$ and $h$ small enough.
To see that~\eqref{e:modified curvature assumption} holds,
{let $s\mapsto\rho_s=(x(s), \xi(s)){\in S\!N^*\!H}$ {be a smooth map}, where $x(s)$ parametrizes $H$ with $|\dot x(s)|_{{g}}=1$ and $\langle \dot \xi(s), \xi(s)\rangle=0$ for all $s$. {Next, let $\Gamma(s,t)=\pi(\varphi_t(\rho_s))$ so that $t\mapsto \Gamma(s,t)$ is a geodesic with $\langle \partial_t\Gamma(s,t),\dot{x}(s)\rangle_g=0$ and $\Gamma(s,0)=x(s)$.}
{In particular, if we let
$$
Y(t)=\partial_s\Gamma(s,t)|_{s=0},
$$
then $Y(t)$ is a Jacobi field along $\gamma_0$ with $Y(0)=\dot{x}(0)$ and
$$\tfrac{D}{dt}Y(0)=\tfrac{D}{ds}\partial_t\Gamma(s,t)\Big|_{(0,0)}{=0}.$$}
{Indeed,} observe that the angle between $\partial_t\Gamma(s,t)|_{t=0}$ and $\dot{x}(s)$ is constant and $|\partial_t\Gamma(s,t)|_g=1$. Therefore, since $x(s)$ is a unit speed geodesic, $\tfrac{D}{ds}\partial_t\Gamma(s,t)|_{t=0}=0$ and hence $\tfrac{D}{dt} Y(0)=0$.
{Now,} let $\gamma_0^\perp(t)$ be a vector field along $\gamma_0(t)$ with $\langle \dot{\gamma_0}(t),\gamma_0^\perp(t)\rangle_g=0$ and $|\gamma_0^{\perp}(t)|_g=1$. Then
$Y(t)=J(t)\gamma_0^{\perp}(t)$ with $J(0)=1$, $J'(0)=0$, and
$$
J''(t)+R(t)J(t)=0.
$$
Since $R(t)\leq 0$, we have $J''(t)\geq 0$ and hence
$$
J(t)\geq 1.
$$
{In particular,
{
$$\partial_s(\pi\circ\varphi_t(\rho_s))|_{s=0}=d(\pi\circ \varphi_t)|_{\rho_0}\partial_s \rho_s|_{s=0}=Y(t),$$
and hence
$$
d(\pi\circ\varphi_t(\rho_s),\exp_{\pi\circ\varphi_t(\rho_0)}({s}Y(t)))\leq C_1e^{2\Lambda t}s^2.
$$
}
{Therefore,} for $t\in [0,4T]$,
$$
d(\gamma_s(t),\exp_{\gamma_0(t)}(sY(t)))\leq C_1e^{8\Lambda T}s^2.
$$
{Since $J(t)\geq 1$,} it follows that $Q_s$ contains $\Omega_{\tilde{\gamma}}(\tfrac{s}{4})$ for $s<\tfrac{1}{8C_1}e^{-{8\Lambda T}}$ where $\tilde{\gamma}:=\{\gamma_{_{\frac{s}{2}}}(t):t\in [T,2T]\}.$
{Therefore,
\[
-\int_{Q_s}K dv_{\tilde g} \geq -\int_{\Omega_{\tilde{\gamma}}(\tfrac{s}{4})}K dv_{\tilde g}\geq c_1e^{-c_2/{\sqrt{s}}},
\]
as claimed.
}
\qed
\begin{remark}
We note that the proof of Theorem~\ref{a7} essentially shows that, while horospheres on $M$ may not be positively curved everywhere, their curvature can only vanish at a fixed exponential rate.
\end{remark}
\bigskip
\noindent [End of arXiv:1809.06296 (September 2018), ``Improvements for eigenfunction averages: An application of geodesic beams,'' Analysis of PDEs (math.AP); Spectral Theory (math.SP), https://arxiv.org/abs/1809.06296. Abstract: Let $(M,g)$ be a smooth, compact Riemannian manifold and $\{\phi_\lambda \}$ an $L^2$-normalized sequence of Laplace eigenfunctions, $-\Delta_g\phi_\lambda =\lambda^2 \phi_\lambda$. Given a smooth submanifold $H \subset M$ of codimension $k\geq 1$, we find conditions on the pair $(M,H)$, even when $H=\{x\}$, for which
$$ \Big|\int_H\phi_\lambda d\sigma_H\Big|=O\Big(\frac{\lambda^{\frac{k-1}{2}}}{\sqrt{\log \lambda}}\Big)\qquad \text{or}\qquad |\phi_\lambda(x)|=O\Big(\frac{\lambda ^{\frac{n-1}{2}}}{\sqrt{\log \lambda}}\Big), $$
as $\lambda\to \infty$. These conditions require no global assumption on the manifold $M$ and instead relate to the structure of the set of recurrent directions in the unit normal bundle to $H$. Our results extend all previously known conditions guaranteeing improvements on averages, including those on sup-norms. For example, we show that if $(M,g)$ is a surface with Anosov geodesic flow, then there are logarithmically improved averages for any $H\subset M$. We also find weaker conditions than having no conjugate points which guarantee $\sqrt{\log \lambda}$ improvements for the $L^\infty$ norm of eigenfunctions. Our results are obtained using geodesic beam techniques, which yield a mechanism for obtaining general quantitative improvements for averages and sup-norms.]
\bigskip
\noindent [arXiv:2106.11573, ``Approximation convergence in the inverse first-passage time problem,'' https://arxiv.org/abs/2106.11573. Abstract: The inverse first-passage time problem determines a boundary such that the first-passage time of a Wiener process to this boundary has a given distribution. An approximation, based on the starting value of the boundary, of a smooth boundary by a piecewise-linear boundary is given by equating the probability of the first-passage time to a linear boundary and the increment of the distribution on each interval. We propose a modification of that approximation which also approximates the starting value of the boundary. First, we show that the approximation is well-defined when assuming that the boundary is absolutely continuous. Second, we show that a subsequence of this new approximation converges uniformly to the boundary when the length of each interval of linear approximation goes to 0 asymptotically. The results are obtained using the Arzel\`a--Ascoli theorem on compact sets, on which we further assume that the boundary admits a uniformly dominated derivative. As the starting value of the boundary is unknown, this makes the new approximation more suitable for applications. The results are also proved for the first-passage time problem of a reflected Wiener process.]

\section{Introduction}
In the theory of stochastic processes, first-passage time related problems have been studied extensively. When specialized to a standard Brownian motion $(W_t)_{t \geq 0}$, the first-passage time to an upper boundary, i.e. a continuous function $g: \mathbb{R}^+ \rightarrow \mathbb{R}$ satisfying $g(0) \geq 0$, is defined as
\begin{eqnarray}
\label{taugW}
\mathrm{T}_g^W := \inf \{t \in \mathbb{R}^+ \text{ s.t. } W_t \geq g(t)\},
\end{eqnarray}
with $f_g^W (t)$ its related density. There are two classes of problems related to these stopping times. The first-passage problem investigates $f_g^W(t)$ when $g$ is known. The \emph{inverse} first-passage problem aims at finding a boundary $g_f: \mathbb{R}^+ \rightarrow \mathbb{R}$, with $g_f(0) \geq 0$, which satisfies
\begin{eqnarray}
\label{intro0}
f_{g_f}^W (t) = f(t) \text{ for } t \geq 0,
\end{eqnarray}
where $f$ is a \emph{target} density
function of the form $f:\mathbb{R}^+ \rightarrow \mathbb{R}^+$. Almost 50 years ago, A. Shiryaev\footnote{during a Banach center meeting in 1976} asked whether one can find such a $g_f$ when $f(t) = \lambda \exp (- \lambda t)$, $\lambda > 0$, is the exponential density. Obviously, this question can be classified as an \emph{inverse} problem.
\smallskip
First-passage problems are often described as being hard, and explicit solutions for the density $f_g^W (t)$ are only known for a few choices of $g$. These include, but are not limited to: the linear case (see, e.g., \cite{doob1949heuristic} (Formula (4.2), p. 397) or \cite{malmquist1954certain} (p. 526)); the Daniels boundary (see \cite{daniels1969minimum}) when
$$g(t) = \frac{\alpha}{2} - \frac{t}{\alpha} \log \left(\frac{\beta}{2} + \sqrt{\frac{\beta^2}{4} + \gamma \exp\left(\frac{-\alpha^2}{t}\right)}\right),$$
with $\alpha > 0$, $\beta \geq 0$ and $\gamma > \beta^2/4$; a formula in a general boundary setup, which however depends on asymptotic conditional expectations (see \cite{durbin1971boundary}); the one-sided square root boundary (see \cite{novikov1971stopping}); the quadratic boundary (see \cite{salminen1988first}); the piecewise-linear boundary (see \cite{wang1997boundary}); and piecewise-specific boundaries (\cite{novikov1999approximations}), where ``specific'' can mean any of the aforementioned cases.
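For concreteness, the linear case admits the classical closed form: for a boundary $g(t)=a+bt$ with $a>0$, $\mathbb{P}(\mathrm{T}_g^W \leq t) = \Phi\big(\tfrac{-a-bt}{\sqrt t}\big)+e^{-2ab}\,\Phi\big(\tfrac{-a+bt}{\sqrt t}\big)$. The following sketch (an illustration under this formula, not code from any cited reference) checks two sanity limits, the reflection principle at $b=0$ and the $t\to\infty$ limit $e^{-2ab}$ for $b>0$:

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def linear_crossing_prob(a, b, t):
    """P(W_s >= a + b*s for some s <= t), with a > 0 (closed form for a linear boundary)."""
    return phi((-a - b * t) / math.sqrt(t)) + math.exp(-2.0 * a * b) * phi((-a + b * t) / math.sqrt(t))

a, b, t = 1.0, 0.5, 2.0
# b = 0 reduces to the reflection principle: P = 2 * Phi(-a / sqrt(t))
assert abs(linear_crossing_prob(a, 0.0, t) - 2.0 * phi(-a / math.sqrt(t))) < 1e-12
# t -> infinity with b > 0: P(sup_s (W_s - b*s) >= a) = exp(-2ab)
assert abs(linear_crossing_prob(a, b, 1e8) - math.exp(-2.0 * a * b)) < 1e-6
```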
\smallskip
As for the Shiryaev inverse problem itself, there are two early papers (\cite{dudley1977stopping} and \cite{anulova1981markov}) which provide the existence of stopping times for any target density function $f(t)$, but these stopping times are not of the hitting-boundary form (\ref{taugW}). Later, key results were found in \cite{peskir2002integral}, where the problem was reformulated as a nonlinear Volterra integral equation of the second kind. Highly related to that, \cite{peskir2002limit} exhibits three boundaries in the neighborhood of 0, which respectively satisfy
$$ \lim_{t \rightarrow 0} f_g(t) =0, \qquad \lim_{t \rightarrow 0} f_g(t) = \infty, \qquad \lim_{t \rightarrow 0} f_g(t) = c \in (0, \infty).$$
Subsequently, \cite{zucca2009inverse} investigate two methods for approximating the boundary. The first method is based on a piecewise-linear scheme, while the second makes use of the Volterra equation form. Assuming that the boundary exists, is unique, concave or convex, they in particular prove that the error vanishes asymptotically in the first case. Finally, \cite{gur2020empirical} extends the latter method to provide a boundary estimator based on a sample of hitting times. See also \cite{abundo2006limit} for extensions to the general diffusion process case.
\smallskip
There is also a body of literature that builds a bridge between the inverse problem and a free boundary problem in survival analysis. The idea goes back to \cite{avellaneda2001distance}. In an impressive series of papers, \cite{cheng2006analysis} and \cite{chen2011existence}
respectively show the existence and uniqueness of the viscosity solution of the related free boundary problem, and that the resulting boundary satisfies the inverse first-passage problem (\ref{taugW})-(\ref{intro0}). Unfortunately, they exhibit rough limit boundaries which can even be equal to $-\infty$, whereas the Shiryaev problem requires continuity of the boundary. In a more recent paper, \cite{ekstrom2016inverse} prove the connection between the inverse problem and a related optimal stopping problem. See also \cite{fukasawa2020efficient} for a possible application of the method.
\smallskip
As for the existence in the Shiryaev inverse problem itself, it is still an open problem, as far as the author knows. In this paper, we propose to fill this gap. The proof is based on a discretization of the problem which approximates the boundary by a piecewise-linear boundary whose definition is essentially that of \cite{zucca2009inverse} (Section 3). The merit of our paper is that, based on this approximation, we \emph{show} the existence, whereas the cited paper \emph{assumes} such existence.
\smallskip
The key steps of the proof of existence are the definition, for an arbitrarily large horizon time $T > 0$, of a sequence of continuous piecewise-linear functions $g_{f,T}^{(n)}: \mathbb{R}^+ \rightarrow \mathbb{R}$ with $g_{f,T}^{(n)}(0) \geq 0$ which satisfies
for any $n \in \mathbb{N} - \{0\}$, any $l \in \mathbb{N} - \{0\}$ with $l \geq n$ and any $m=0,\cdots,2^n-1$
\begin{eqnarray}
\label{intro01}
\mathbb{P} \left(T_{g_{f,T}^{(l)}}^W \in [mT/2^n,(m+1)T/2^n]\right) = \int_{mT/2^n}^{(m+1)T/2^n}f(s)ds.
\end{eqnarray}
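On the first interval, with the starting value of the boundary given, the matching condition above reduces to a one-dimensional root-finding problem for the slope of the linear piece, via the closed-form crossing probability for a linear boundary. The sketch below (Python; `calibrate_slope` is a hypothetical helper, and only the first interval is treated, since later intervals require conditioning on non-crossing of the earlier pieces) calibrates the slope against an exponential target, Shiryaev's example:

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def linear_crossing_prob(a, b, t):
    """P(W_s >= a + b*s for some s <= t), with a > 0 (closed form for a linear boundary)."""
    return phi((-a - b * t) / math.sqrt(t)) + math.exp(-2.0 * a * b) * phi((-a + b * t) / math.sqrt(t))

def calibrate_slope(g0, q, t1, lo=-50.0, hi=50.0):
    """Bisect for the slope b with P(T <= t1) = q for the boundary g0 + b*t."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        # the crossing probability is decreasing in the slope
        if linear_crossing_prob(g0, mid, t1) > q:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# target density f(t) = lam * exp(-lam * t); g0 is a hypothetical starting value
lam, g0, t1 = 1.0, 1.0, 0.5
q = 1.0 - math.exp(-lam * t1)          # mass of f on the first interval [0, t1]
b = calibrate_slope(g0, q, t1)
assert abs(linear_crossing_prob(g0, b, t1) - q) < 1e-9
```

Iterating this matching interval by interval, with the conditional crossing probabilities, is what the piecewise-linear construction of $g_{f,T}^{(n)}$ formalizes.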
Then, using a fundamental result of analysis, i.e. the Arzel\`a-Ascoli theorem, we are able to extract a continuous boundary $g_{f,T}: \mathbb{R}^+ \rightarrow \mathbb{R}$, with $g_{f,T}(0) \geq 0$, as the limit of a subsequence of $g_{f,T}^{(n)}$. Finally, we can show that (\ref{intro01}) is preserved at the limit, i.e. for any $p \in \mathbb{N} - \{0 \}$ and $k=0,\ldots,2^{p}-1$ we obtain
\begin{eqnarray}
\label{intro02}
\mathbb{P} \big(T_{g_{f,T}}^W \in [kT/2^p,(k+1)T/2^p]\big) = \int_{kT/2^p}^{(k+1)T/2^p}f(s)ds.
\end{eqnarray}
By Borel arguments, (\ref{intro02}) implies that
$$f_{g_{f,T}}^W (t) = f(t) \text{ for any } 0 \leq t \leq T.$$
\smallskip
The assumptions required on $f(t)$ are rather mild; in particular, the exponential density $f(t) = \lambda \exp (- \lambda t)$ for $\lambda > 0$ satisfies the assumptions of this paper. We can thus provide a theoretical answer to Shiryaev's question, which backs up the numerical findings in \cite{zucca2009inverse} (Section 8). The results are also obtained in the two-dimensional boundary case.
\smallskip
As it stands, we do not obtain any explicit formula for the boundary limit as a function of $f$, nor uniqueness of the solution. Rather, we obtain $g_{f,T}$ which a priori depends on $T$. Had we been able to establish uniqueness of the solution of the system (\ref{taugW})-(\ref{intro0}), we would most likely have been able to get rid of the $T$-dependence in the limit boundary. Yet investigating uniqueness is beyond the scope of this paper and left for future work, as it could turn out to be an even harder problem to solve. One possibility consists in investigating the highly nonlinear Volterra problem, as the question of uniqueness for this kind of equation has been resolved affirmatively in \cite{peskir2005american}.
\smallskip
In terms of applications, first-passage problems can be useful in statistics (see, e.g., \cite{sen1981sequential} or \cite{siegmund1986boundary}), in biology (see, e.g., \cite{ricciardi1999outline}), in mathematical finance such as in the default barrier reproducing a given default distribution problem (see, e.g., \cite{hull2001valuing}), or in high frequency econometrics when modeling endogeneity in sampling times (see \cite{fukasawa2010realized}, \cite{robert2012volatility} or \cite{potiron2017estimation}).
\section{Notation}
Prior to the exposition of the theory, we give some common definitions which will be used throughout the manuscript. For $A \subset \mathbb{R}^+$ and $B \subset \mathbb{R}$ s.t. $0 \in A$, we define:
\begin{eqnarray*}
\mathcal{C}_0(A,B) & := & \{h: A \rightarrow B \text{ s.t. } h \text{ is continuous}\}, \\
\mathcal{C}_0^+(A,B) & := & \{h: A \rightarrow B \text{ s.t. } h \text{ is continuous and } h(0) \geq 0\},\\
\mathcal{C}_0^-(A,B) & := & \{h: A \rightarrow B \text{ s.t. } h \text{ is continuous and } h(0) \leq 0\},\\
\overline{\mathcal{C}_0^-(A,B)} & := & \mathcal{C}_0^-(A,B) \cup \{h: A \rightarrow \{- \infty\} \},
\end{eqnarray*}
as respectively the functional space of continuous functions, the subspace with non-negative starting values, the subspace with non-positive starting values, and the union of the latter with the function identically equal to $-\infty$. Finally, for any vector function $h=(h^{(1)},h^{(2)}) \in \overline{\mathcal{C}_0^-(A,B)} \times \mathcal{C}_0^+(A,B)$, we define the related sup-norm as:
\begin{eqnarray*}
\left| \left| h \right| \right| & = & 2 \sup_{t \in A} \left| h^{(2)}(t) \right| \text{ if } h^{(1)} = - \infty\\
& = & \sup_{t \in A} \left| h^{(1)}(t) \right| + \sup_{t \in A} \left| h^{(2)}(t) \right| \text{ otherwise}.
\end{eqnarray*}
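As an illustration, the norm above can be sketched numerically on a grid, assuming the boundary components are represented by arrays of sampled values and with the convention that the norm is $2\sup|h^{(2)}|$ when $h^{(1)} \equiv -\infty$ (so that it is finite); the function name below is ours:

```python
import numpy as np

def pair_norm(h1, h2):
    """Sup-norm on a pair (h1, h2) of sampled boundary components.

    h1 is None when the lower component is identically -infinity
    (upper boundary case); otherwise both components are arrays
    sampled on the same time grid.
    """
    if h1 is None:                      # lower boundary is -infinity
        return 2.0 * np.max(np.abs(h2))
    return np.max(np.abs(h1)) + np.max(np.abs(h2))

t = np.linspace(0.0, 1.0, 101)
g2 = 1.0 + t                            # an upper boundary component
# Symmetrical case (-g2, g2): the norm collapses to 2 sup |g2|,
# the same value as in the upper boundary case (None, g2).
assert pair_norm(-g2, g2) == pair_norm(None, g2)
```

Note that in both cases actually used in the paper (upper boundary and symmetrical boundary) the norm equals $2\sup_{t}|h^{(2)}(t)|$.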
\section{Objective}
\smallskip
We introduce the (complete) stochastic basis $\mathcal{B} = (\Omega, \mathbb{P}, \mathcal{F}, \mathbf{F})$, where $\mathcal{F}$ is a $\sigma$-field and $\mathbf{F} = (\mathcal{F}_t)_{t \in \mathbb{R}^+}$ is a filtration. To any $\mathcal{F}_t$-standard Brownian motion $W_t$ and any given pair of continuous boundary functions
$$g = (g^{(1)}, g^{(2)}) \in \overline{\mathcal{C}_0^-(\mathbb{R}^+,\mathbb{R})} \times \mathcal{C}_0^+(\mathbb{R}^+,\mathbb{R}),$$ we associate the boundary crossing variable
\begin{eqnarray}
\label{TgZdef}
\mathrm{T}_g^W := \inf \{t \in \mathbb{R}^+ \text{ s.t. } W_t \leq g^{(1)}(t) \text{ or } W_t \geq g^{(2)}(t)\}.
\end{eqnarray}
In (\ref{TgZdef}), the process $W_t$ is the \emph{hitting} process, and the function $g$ plays the role of the (two-dimensional: upper and lower) boundary. Accordingly, $\mathrm{T}_g^W$ is the first time at which the process hits either the upper or the lower boundary function. If one or both functional components of $g$ are constant, we slightly abuse notation and refer to the constant component(s) directly as real constants. We also define the corresponding cumulative function as
\begin{eqnarray}
\label{PgZdef}
P_g^W(t):= \mathbb{P} (\mathrm{T}^W_g \leq t)
\end{eqnarray}
and its related density as
\begin{eqnarray}
\label{fZgt}
f_g^W(t):= \frac{dP_g^W(t)}{dt}.
\end{eqnarray}
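To build intuition for $\mathrm{T}_g^W$ and $P_g^W$, these objects can be approximated by Monte Carlo simulation. The sketch below (our own illustration, not part of the paper's construction) estimates $P_g^W(1)$ in the upper boundary case with the constant boundary $g=(-\infty,1)$, and compares it with the classical reflection-principle value $2(1-\Phi(1))$ for a constant barrier:

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

rng = np.random.default_rng(0)
n_paths, n_steps, t_max = 5_000, 1_000, 1.0
dt = t_max / n_steps

# Brownian paths as cumulative sums of Gaussian increments.
W = np.cumsum(rng.normal(0.0, sqrt(dt), size=(n_paths, n_steps)), axis=1)

# Upper boundary case g = (-inf, 1): T_g^W <= t_max iff the path
# exceeds level 1 at some (discrete) time before t_max.
p_mc = np.mean(W.max(axis=1) >= 1.0)

p_exact = 2.0 * (1.0 - Phi(1.0))   # reflection principle, constant barrier
assert abs(p_mc - p_exact) < 0.04  # Monte Carlo + discretisation error
```

The discrete-time simulation slightly underestimates the crossing probability (the path can cross and return between grid points), hence the loose tolerance.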
We will consider the following two cases in the remainder of this paper:
\begin{enumerate}
\item (upper boundary case) $g= (-\infty, g^{(2)})$ with $g^{(2)} \in \mathcal{C}_0^+(\mathbb{R}^+,\mathbb{R})$;
\item (symmetrical boundary case) $g= (-g^{(2)}, g^{(2)})$ with $g^{(2)} \in \mathcal{C}_0^+(\mathbb{R}^+,\mathbb{R})$.
\end{enumerate}
\smallskip
Our objective in this paper is to show that for any target density function of the form $$f:\mathbb{R}^+ \rightarrow \mathbb{R}^+,$$
with related cumulative function defined as
\begin{eqnarray}
\label{Fdef}
F:\mathbb{R}^+ & \rightarrow & [0,1]\\
t & \mapsto &\int_0^t f(s) ds, \nonumber
\end{eqnarray}
there exists (for each case) a boundary
$$g_f = (g_f^{(1)}, g_f^{(2)}) \in \overline{\mathcal{C}_0^-(\mathbb{R}^+,\mathbb{R})} \times \mathcal{C}_0^+(\mathbb{R}^+,\mathbb{R}),$$
such that $P_{g_f}^W (t) = F(t) \text{ for } t \geq 0$, or equivalently
\begin{eqnarray}
\label{fgWf}
f_{g_f}^W (t) = f(t) \text{ for } t \geq 0.
\end{eqnarray}
\section{Nonlinear Equation}
\label{equation}
Ideally, we would like to find a closed-form expression for $g_f(t)$ in terms of $f(t)$. For this purpose, we introduce some notation: given a boundary $g$ defined (at least) on $[0,t]$, we consider the functions $g^{lin}_{t,a} \in \overline{\mathcal{C}_0^-(\mathbb{R}^+,\mathbb{R})} \times \mathcal{C}_0^+(\mathbb{R}^+,\mathbb{R})$ for any
$$(t,a) \in \mathbb{R}^+ \times \left( \{(-z,z) \in \mathbb{R}^2 \text{ s.t. } z \in \mathbb{R} \} \cup (\{- \infty\} \times \mathbb{R})\right),$$
formally defined as:
\begin{eqnarray}
g^{lin}_{t,a}(u) &=& g(u) \text{ when } 0 \leq u \leq t,\\
g^{lin}_{t,a}(u) &=& g(t) + a(u-t) \text{ when } u \geq t.
\end{eqnarray}
The function $g^{lin}_{t,a}$ coincides with $g$ up to time $t$, and is then a linear one-dimensional boundary (in the upper boundary case) or a symmetrical two-dimensional boundary (in the symmetrical boundary case) starting at the same value $g(t)$ with slope $a$. Therefore, $g$ only needs to be defined on $[0,t]$ for $g^{lin}_{t,a}$ to exist. We also introduce the related probability that the Brownian motion hits such a boundary between times $t$ and $s$:
\begin{eqnarray*}
p_g^W : \{(t,s) \in (\mathbb{R}^+)^2 \text{ s.t. } t \leq s \} \times \left( \{(-z,z) \in \mathbb{R}^2 \text{ s.t. } z \in \mathbb{R} \} \cup (\{- \infty\} \times \mathbb{R})\right) & \rightarrow & [0,1]\\
p_g^W(t,s,a) & = &\mathbb{P}\left(\mathrm{T}_{g^{lin}_{t,a}}^W \in [t,s]\right).
\end{eqnarray*}
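A minimal sketch of the construction of $g^{lin}_{t,a}$ for a single (scalar) boundary component, assuming the component is given as a Python callable (our own illustration, with names of our choosing):

```python
def glin(g, t, a):
    """Linear extension of a scalar boundary component g beyond time t.

    g : callable defined (at least) on [0, t]
    a : slope of the linear continuation
    Returns a callable equal to g on [0, t] and affine afterwards,
    continuous at t by construction.
    """
    gt = g(t)
    def extended(u):
        return g(u) if u <= t else gt + a * (u - t)
    return extended

g = lambda u: u ** 2          # any boundary component defined on [0, 1]
h = glin(g, 1.0, 3.0)
assert h(0.5) == 0.25         # coincides with g before t
assert h(1.0) == 1.0          # continuous at t
assert h(2.0) == 4.0          # affine with slope a = 3 after t
```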
Now we can express $g_{f}$ as a solution of the following equation
\begin{eqnarray}
\label{fgW}
f_{g_f}^W (t) = \frac{dp_{g_f}^W}{ds}(t,t,g_f'(t)) \text{ for } t \geq 0,
\end{eqnarray}
which, in view of (\ref{fgWf}) and writing $g$ for the unknown boundary, can be re-expressed as
\begin{eqnarray}
\label{fgkey}
f(t) = \frac{dp_g^W}{ds}(t,t,g'(t)) \text{ for } t \geq 0.
\end{eqnarray}
The differential equation (\ref{fgkey}) would be key to our analysis, had we found an explicit way to solve it. It is not surprising that the direct route does not yield a closed-form expression, as the literature has not been able to derive such an expression despite many attempts over the years (such as, e.g., the Volterra integral equations of the second kind developed in \cite{peskir2002integral} and \cite{peskir2002limit}).
\section{Discretization of the problem}
As we have not found any closed-form expression, we discretize the problem and introduce a piecewise-linear approximation of the boundary on $[0,T]$, where $T >0$ is the horizon time. Note that this approximation is not based on the differential equation (\ref{fgkey}) derived in Section \ref{equation}. Instead, we consider a piecewise-linear approximation whose definition is almost the same as that of Section 3 in \cite{zucca2009inverse}. There is a difference in objective though: the cited paper \emph{assumes} the existence of $g_f$, whereas we \emph{show} the existence of $g_f$ in Section \ref{existence}.
\smallskip
For any $n \in \mathbb{N} - \{0\}$, we define the sequence of approximations $g_{f,T}^{(n)} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ recursively on $m$ as:
\begin{eqnarray}
\label{seq1}
g_{f,T}^{(n)}(u) & = & \alpha_{0,n} \text{ for } u \in [0,T/2^n],\\
\label{seq2}
g_{f,T}^{(n)}(u) & = & g_{f,T}^{(n)}(mT/2^n) + \alpha_{m,n} (u - mT/2^n) \text{ for } u \in [mT/2^n,(m+1)T/2^n], m=1,\cdots,2^n-1
\end{eqnarray}
where
\begin{eqnarray*}
\alpha_{0,n} &:= & (- \infty, \alpha_{0,n}^{(2)}) \text{ in upper boundary case}\\
\alpha_{0,n} &:= & (- \alpha_{0,n}^{(2)}, \alpha_{0,n}^{(2)}) \text{ in symmetrical boundary case}
\end{eqnarray*}
with $\alpha_{0,n}^{(2)} \in \mathbb{R}^+ - \{0\}$ and
\begin{eqnarray*}
\alpha_{m,n} &:= & (- \infty, \alpha_{m,n}^{(2)}) \text{ in upper boundary case}\\
\alpha_{m,n} &:= & (- \alpha_{m,n}^{(2)}, \alpha_{m,n}^{(2)}) \text{ in symmetrical boundary case}
\end{eqnarray*}
with $\alpha_{m,n}^{(2)} \in \mathbb{R}$ for $m=1,\cdots,2^n-1$, the coefficients being chosen to satisfy
\begin{eqnarray}
\label{alpha0n} \mathbb{P} \big(T_{\alpha_{0,n}}^W \in [0,T/2^n]\big) &= &\int_{0}^{T/2^n}f(s)ds,\\
\label{pintf}p_{g_{f,T}^{(n)}}(mT/2^n,(m+1)T/2^n,\alpha_{m,n}) & = & \int_{mT/2^n}^{(m+1)T/2^n}f(s)ds \text{ for } m=1,\cdots,2^n-1.
\end{eqnarray}
The driving idea behind (\ref{alpha0n})-(\ref{pintf}) is to equate, on each block, the probability of hitting the boundary in the discretized problem to the target probability.
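For the first block in the upper boundary case, Equation (\ref{alpha0n}) amounts to inverting the map $\alpha \mapsto \mathbb{P}(T^W_{(-\infty,\alpha)} \in [0,T/2^n])$, which for a constant level equals $2(1-\Phi(\alpha/\sqrt{h}))$ with $h = T/2^n$ by the classical reflection principle. The sketch below (ours, with an exponential density $f(t)=e^{-t}$, $T=1$ and $n=2$ as a hypothetical example) solves it by bisection, mirroring the Intermediate Value Theorem argument used later:

```python
from math import erf, exp, sqrt

def Phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def z1(alpha, h):
    """P(BM hits the constant level alpha > 0 before time h)."""
    return 2.0 * (1.0 - Phi(alpha / sqrt(h)))

def solve_alpha0(target, h, lo=1e-8, hi=50.0, tol=1e-12):
    """Bisection on the strictly decreasing map z1 (IVT argument)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if z1(mid, h) > target:
            lo = mid            # z1 too large -> raise the barrier
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example: f(t) = exp(-t), T = 1, n = 2, first block [0, 1/4].
h = 0.25
target = 1.0 - exp(-h)          # int_0^{1/4} f(s) ds
alpha = solve_alpha0(target, h)
assert abs(z1(alpha, h) - target) < 1e-9
```

The later blocks require the crossing probabilities $p_{g_{f,T}^{(n)}}$ of piecewise-linear boundaries instead of the constant-barrier formula, but the monotonicity-plus-bisection structure is the same.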
\section{Proof of existence}
\label{existence}
In this section, we show the existence of a continuous boundary function $g_{f,T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ satisfying
\begin{eqnarray}
\label{existenceeq}
f_{g_{f,T}}^{W}(t) = f(t) \text{ for } 0 \leq t \leq T
\end{eqnarray}
for any target density $f$ and any horizon time $T > 0$. The proof goes in two steps. First, we show that there exists a subsequence of $g_{f,T}^{(n)}$ which converges uniformly to some $g_{f,T}$, using a fundamental result of analysis, namely the Arzel\`a-Ascoli theorem. Second, we show that the obtained limit $g_{f,T}$ satisfies (\ref{existenceeq}).
\smallskip
Our first result establishes existence and uniqueness of the sequence $\alpha_{0,n}^{(2)} \in \mathbb{R}^+ - \{0\}$ (which is equivalent to the existence and uniqueness of the sequence $\alpha_{0,n}$) and that of $\alpha_{m,n}^{(2)} \in \mathbb{R}$ (which is equivalent to that of the sequence $\alpha_{m,n}$) for any $m=1,\cdots,2^n-1$ and for any $n \in \mathbb{N} - \{0\}$. Although stated differently, the following result is quite close to Section 3 in \cite{zucca2009inverse}. We make the positivity assumption on $f$ for that result:
\begin{description}
\item[[A\!\!]] We assume that $f(t) > 0$ for any $t \geq 0$.
\end{description}
\begin{proposition*}
\label{propexistence}
Under Assumption \textbf{[A]}, for any $n \in \mathbb{N} - \{0\}$, Equation (\ref{alpha0n}) defines a unique $\alpha_{0,n}^{(2)} \in \mathbb{R}^+ - \{0\}$ and Equation (\ref{pintf}) defines a unique $\alpha_{m,n}^{(2)} \in \mathbb{R}$ for any $m=1,\cdots,2^n-1$. Moreover, for any $n \in \mathbb{N} - \{0\}$ and any $m=0,\cdots,2^n-1$, the approximated boundary satisfies
\begin{eqnarray}
\label{propexeq}
\mathbb{P} \left(T_{g_{f,T}^{(n)}}^W \in [mT/2^n,(m+1)T/2^n]\right) = \int_{mT/2^n}^{(m+1)T/2^n}f(s)ds.
\end{eqnarray}
\end{proposition*}
\begin{proof}
For any $n \in \mathbb{N} - \{0\}$, we prove Proposition \ref{propexistence}, i.e. the statement and Equality (\ref{propexeq}), by induction on $m$. We start with the case $m=0$. By Definition (\ref{alpha0n}), it is clear that
\begin{eqnarray}
\label{propexeqproof}
\mathbb{P} \left(T_{g_{f,T}^{(n)}}^W \in [0,T/2^n]\right) = \int_{0}^{T/2^n}f(s)ds.
\end{eqnarray}
We have thus shown Equality (\ref{propexeq}). Now, we define the functions
\begin{eqnarray*}
z_1 : \mathbb{R}^+ & \rightarrow & [0,1]\\
z_1 : \alpha & \mapsto & \mathbb{P} \big(T_{(-\infty,\alpha)}^W \in [0,T/2^n]\big)
\end{eqnarray*}
and
\begin{eqnarray*}
z_2 : \mathbb{R}^+ & \rightarrow & [0,1]\\
z_2 : \alpha & \mapsto & \mathbb{P} \big(T_{(-\alpha,\alpha)}^W \in [0,T/2^n]\big),
\end{eqnarray*}
where $z_i$ for $i=1,2$ correspond respectively to the upper boundary case and the symmetrical boundary case.
According to \cite{doob1949heuristic} (for the upper boundary case, p. 397) or \cite{anderson1960modification} (for the symmetrical boundary case), we have that for $i=1,2$:
\begin{eqnarray}
\label{proof0331a}
z_i(\alpha) \overset{\alpha \rightarrow 0}{\rightarrow} 1,\\
\label{proof0331b}
z_i(\alpha) \overset{\alpha \rightarrow +\infty}{\rightarrow} 0,
\end{eqnarray}
and that $z_i$ is continuous and strictly decreasing in $\alpha$. We also note that in view of Assumption \textbf{[A]}
$$0 < \int_{0}^{T/2^n}f(s)ds < 1,$$
so that, in view of Equality (\ref{propexeqproof}), $\alpha_{0,n}^{(2)}$ must solve the equation
\begin{eqnarray}
\label{proof0331c}
z_i(\alpha_{0,n}^{(2)}) = \int_{0}^{T/2^n}f(s)ds,
\end{eqnarray}
whose right-hand side lies strictly between the two limits (\ref{proof0331b}) and (\ref{proof0331a}). An application of the Intermediate Value Theorem, together with the continuity and strict monotonicity of $z_i$, then provides the existence and uniqueness of a solution $\alpha_{0,n}^{(2)} \in \mathbb{R}^+ - \{0\}$ of (\ref{proof0331c}).
\smallskip
We consider now the case $m=1,\cdots,2^n-1$. By the induction hypothesis together with Definitions (\ref{seq1})-(\ref{pintf}), we deduce that
\begin{eqnarray}
\label{proof0515b} \mathbb{P} \big(T_{g_{f,T}^{(n)}}^W \in [mT/2^n,(m+1)T/2^n]\big) &= &\int_{mT/2^n}^{(m+1)T/2^n}f(s)ds,
\end{eqnarray}
so that we have proved Equality (\ref{propexeq}). Then, for any
$$(x,y) \in \{(t,s) \in ([0,T])^2 \text{ s.t. } t < s \},$$
we consider the functions
\begin{eqnarray}
\label{Rxy1} R_{x,y,1} : \mathbb{R} & \rightarrow & (0,\int_{x}^{+ \infty}f(s)ds)\\ \nonumber
z & \mapsto & p_{g_{f,T}^{(n)}}(x,y,(-\infty,z)).
\end{eqnarray}
and
\begin{eqnarray}
\label{Rxy2} R_{x,y,2} : \mathbb{R} & \rightarrow & (0,\int_{x}^{+ \infty}f(s)ds)\\ \nonumber
z & \mapsto & p_{g_{f,T}^{(n)}}(x,y,(-z,z)),
\end{eqnarray}
where $R_{x,y,i}$ for $i=1,2$ correspond respectively to the upper boundary case and the symmetrical boundary case. By the induction hypothesis, we deduce that
\begin{eqnarray}
\label{proof0515} \mathbb{P} \big(T_{g_{f,T}^{(n)}}^W \in [0,mT/2^n]\big) &= &\int_{0}^{mT/2^n}f(s)ds.
\end{eqnarray}
In view of \cite{wang1997boundary} (for the upper boundary case) or \cite{novikov1999approximations} (for the symmetrical boundary case, see also \cite{potzelberger2001boundary}) along with (\ref{proof0515}), we obtain that for $i=1,2$:
\begin{eqnarray}
\label{proof0401a}
R_{x,y,i}(\alpha) & \overset{\alpha \rightarrow -\infty}{\rightarrow} & \int_{x}^{+\infty}f(s)ds,\\
\label{proof0401b}
R_{x,y,i}(\alpha) & \overset{\alpha \rightarrow +\infty}{\rightarrow} & 0,
\end{eqnarray}
and that $R_{x,y,i}$ is continuous and strictly decreasing. We also note that applying Assumption \textbf{[A]} we get
$$0 < \int_{mT/2^n}^{(m+1)T/2^n}f(s)ds < \int_{mT/2^n}^{+ \infty}f(s)ds.$$
In addition, in view of Equality (\ref{proof0515b}), $\alpha_{m,n}^{(2)}$ must solve the equation
\begin{eqnarray}
\label{proof0401c}
R_{mT/2^n,(m+1)T/2^n,i}(\alpha_{m,n}^{(2)}) = \int_{mT/2^n}^{(m+1)T/2^n}f(s)ds,
\end{eqnarray}
whose right-hand side lies strictly between the two limits (\ref{proof0401b}) and (\ref{proof0401a}). An application of the Intermediate Value Theorem, together with the continuity and strict monotonicity of $R_{x,y,i}$, then provides the existence and uniqueness of a solution $\alpha_{m,n}^{(2)} \in \mathbb{R}$ of (\ref{proof0401c}).
\end{proof}
Key to our proof of existence is the next corollary: it extends Equality (\ref{propexeq}) to the case where the approximated boundary is taken at a finer level $l \geq n$, while the time interval remains at level $n$. The proof is a direct consequence of the fact that the grids are nested. This is the reason why we chose an approximation step of $T/2^n$ rather than, say, $T/n$.
\begin{corollary*}
\label{corexistence}
Under Assumption \textbf{[A]}, for any $n \in \mathbb{N} - \{0\}$, any $l \in \mathbb{N} - \{0\}$ with $l \geq n$ and any $m=0,\cdots,2^n-1$, the approximated boundary satisfies
\begin{eqnarray}
\label{propexeq2}
\mathbb{P} \left(T_{g_{f,T}^{(l)}}^W \in [mT/2^n,(m+1)T/2^n]\right) = \int_{mT/2^n}^{(m+1)T/2^n}f(s)ds.
\end{eqnarray}
\end{corollary*}
\begin{proof}
For any $n \in \mathbb{N} - \{0\}$, any $l \in \mathbb{N} - \{0\}$ with $l \geq n$ and any $m=0,\cdots,2^n-1$, we have
\begin{eqnarray*}
\mathbb{P} \left(T_{g_{f,T}^{(l)}}^W \in [mT/2^n,(m+1)T/2^n]\right) & = & \sum_{i=0}^{2^{l-n}-1} \mathbb{P} \left(T_{g_{f,T}^{(l)}}^W \in [(m2^{l-n}+i) T/2^l,(m2^{l-n}+i+1) T/2^l]\right)\\
& = & \sum_{i=0}^{2^{l-n}-1} \int_{(m2^{l-n}+i) T/2^l}^{(m2^{l-n}+i+1) T/2^l}f(s)ds\\
& = &\int_{mT/2^n}^{(m+1)T/2^n}f(s)ds,
\end{eqnarray*}
where we used the additivity of probability over disjoint events in the first equality, and Equality (\ref{propexeq}) from Proposition \ref{propexistence} in the second equality.
\end{proof}
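The telescoping over nested dyadic grids used in the proof can be checked numerically. The sketch below (ours, with an exponential target density and hypothetical levels $n=2$, $l=5$, $m=3$) verifies that the level-$l$ block integrals of $f$ sum to the level-$n$ block integral:

```python
from math import exp

lam, T, n, l, m = 1.0, 1.0, 2, 5, 3

def F(a, b):
    """int_a^b lam * exp(-lam * s) ds, in closed form."""
    return exp(-lam * a) - exp(-lam * b)

# Level-n block [mT/2^n, (m+1)T/2^n] ...
coarse = F(m * T / 2 ** n, (m + 1) * T / 2 ** n)
# ... split into 2^(l-n) level-l sub-blocks, exactly as in the corollary.
fine = sum(F((m * 2 ** (l - n) + i) * T / 2 ** l,
             (m * 2 ** (l - n) + i + 1) * T / 2 ** l)
           for i in range(2 ** (l - n)))
assert abs(fine - coarse) < 1e-12   # the sub-blocks telescope
```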
The following proposition can be seen as the most important result for our proof of existence. We prove, via the Arzel\`a-Ascoli theorem, that there exists a subsequence of $g_{f,T}^{(n)}$ which converges uniformly to some $g_{f,T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$. We make some further assumptions on $f$ for this proof.
\begin{description}
\item[[B\!\!]] We assume that there exist a small $\eta > 0$ and
\begin{eqnarray*}
g_1 & := & (-\infty,g_1^{(2)}) \text{ in upper boundary case}\\
g_1 &:= & (-g_1^{(2)},g_1^{(2)}) \text{ in symmetrical boundary case}
\end{eqnarray*}
with $g_1^{(2)} \in \mathcal{C}_0^+([0,\eta],\mathbb{R}^+ - \{0\})$ s.t. for $0 \leq t \leq \eta$:
\begin{eqnarray}
\label{assB0}
f_{g_1}^W (t) \leq f(t).
\end{eqnarray}
We also assume that there exists $K_C \in \mathbb{R}^+$ such that
\begin{eqnarray}
\label{assC}
\sup_{\underset{m=1,\cdots,2^n-1}{n \in \mathbb{N} - \{ 0 \}}} \frac{2^n}{T}\int_{mT/2^n}^{(m+1)T/2^n}f(s)ds \leq K_C.
\end{eqnarray}
\end{description}
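As a sanity check (ours, not part of the proof), one can verify that condition (\ref{assC}) holds with $K_C = \lambda$ for the exponential density $f(t)=\lambda e^{-\lambda t}$ mentioned in the introduction: each block average of a density bounded by $\lambda$ is itself bounded by $\lambda$. The finite scan below illustrates this for the first few levels $n$ (the analytic bound $f \leq \lambda$ covers all remaining levels):

```python
from math import exp

lam, T = 2.0, 1.0   # hypothetical parameter values for the illustration

def block_avg(m, n):
    """(2^n / T) * int over [mT/2^n, (m+1)T/2^n] of lam * exp(-lam s) ds."""
    a, b = m * T / 2 ** n, (m + 1) * T / 2 ** n
    return (exp(-lam * a) - exp(-lam * b)) / (b - a)

worst = max(block_avg(m, n) for n in range(1, 12)
            for m in range(1, 2 ** n))
assert worst <= lam     # block averages are dominated by sup f = lam
```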
\begin{proposition*}
\label{propassB}
Under Assumption \textbf{[A]} and Assumption \textbf{[B]}, there exists a subsequence of $g_{f,T}^{(n)}$ which converges uniformly to some $g_{f,T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$.
\end{proposition*}
\begin{proof}
As the statement of the proposition is ``asymptotic'', without loss of generality we can drop the first terms of the sequence and start from the index
$$n_s = \lceil \log_2(T) - \log_2(\eta) \rceil,$$
where $\lceil . \rceil$ is the ceiling function. Then, for any $n \in \mathbb{N} - \{0\}, n \geq n_s$, we have that
\begin{eqnarray}
\label{etaprop}
\frac{T}{2^{n}} \leq \eta.
\end{eqnarray}
\smallskip
To prove the proposition, we use a fundamental result of analysis, namely Arzel\`a-Ascoli theorem. In the notation of our paper, the theorem states:
\\ \emph{(Arzel\`a-Ascoli theorem) The sequence $g_{f,T}^{(n)} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ of continuous functions defined on the interval $[0, T]$ is uniformly bounded if there is a constant $M > 0$ such that
\begin{eqnarray}
\label{aa1}
\left|\left|g_{f,T}^{(n)}(t)\right|\right|\leq M
\end{eqnarray}
for every function $g_{f,T}^{(n)}$ belonging to the sequence, and every $t \in [0, T]$. (Here, $M$ must be independent of $n$ and $t$, but not of $T$.)
\\The sequence is said to be uniformly equicontinuous if, for every $\varepsilon > 0$, there exists a $\delta > 0$ such that
\begin{eqnarray}
\label{aa2}
\left|\left|g_{f,T}^{(n)}(t)-g_{f,T}^{(n)}(s)\right|\right| \leq \varepsilon
\end{eqnarray}
whenever $|t - s| < \delta$ for all functions $g_{f,T}^{(n)}$ in the sequence. (Here, $\delta$ may depend on $\varepsilon$ and $T$, but not $t$, $s$ or $n$.)
\\If the sequence $g_{f,T}^{(n)} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ is uniformly bounded and uniformly equicontinuous, then there exists a subsequence which converges uniformly to some $g_{f,T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$.}
\smallskip
Then, the proof of Proposition \ref{propassB} amounts to proving that the conditions of the Arzel\`a-Ascoli theorem are satisfied. We claim that uniform boundedness and uniform equicontinuity hold if we can show that the $\alpha_{m,n}^{(2)}$ are uniformly bounded, i.e. that there exists a constant $K \in \mathbb{R}^+ - \{0\}$ such that
\begin{eqnarray}
\label{Kalphamn}
\sup_{\underset{m=0,\cdots,2^n-1}{n \in \mathbb{N}, n \geq n_s}} \left| \alpha_{m,n}^{(2)} \right| \leq K.
\end{eqnarray}
Then, the proof of Proposition \ref{propassB} goes as follows:
(i) We show the claim, i.e. we show that conditions of Arzel\`a-Ascoli theorem are satisfied whenever (\ref{Kalphamn}) holds.
(ii) We show (\ref{Kalphamn}).
Proof of (i): We prove in this part that both Inequalities (\ref{aa1}) and (\ref{aa2}) hold under (\ref{Kalphamn}). We start with Inequality (\ref{aa1}). Unrolling the recursion, we can rewrite (\ref{seq1})-(\ref{seq2}) as
\begin{eqnarray}
g_{f,T}^{(n)}(u) & = & \alpha_{0,n} \text{ for } u \in [0,T/2^n],\\
\label{seq3} g_{f,T}^{(n)}(u) & = & \alpha_{0,n} + (T/2^n) \sum_{i=1}^{m-1} \alpha_{i,n} + \alpha_{m,n} (u - mT/2^n) \text{ for } u \in [mT/2^n,(m+1)T/2^n],\\ \nonumber & & m=1,\cdots,2^n-1.
\end{eqnarray}
We obtain that for $m=0,\cdots,2^n-1$ and $u \in [mT/2^n,(m+1)T/2^n]$:
\begin{eqnarray*}
\left|\left|g_{f,T}^{(n)}(u) \right|\right|& \leq & \left|\left|\alpha_{0,n} \right|\right|+ (T/2^n) \sum_{i=1}^{m-1} \left|\left|\alpha_{i,n} \right|\right|+ \left|\left|\alpha_{m,n} \right|\right|(u - mT/2^n)\\
& \leq & \left|\left|\alpha_{0,n} \right|\right|+ (T/2^n) \sum_{i=1}^{m} \left|\left|\alpha_{i,n} \right|\right|\\
& \leq & \left|\left|\alpha_{0,n} \right|\right|+ (T/2^n) \sum_{i=1}^{2^n-1} \left|\left|\alpha_{i,n} \right|\right|\\
& \leq & \left|\left|\alpha_{0,n} \right|\right|+ (2^n - 1)T/2^n \sup_{\underset{i=1,\cdots,2^n-1}{n \in \mathbb{N}, n \geq n_s}} \left|\left|\alpha_{i,n} \right|\right|\\
& \leq & (1 + (2^n - 1)T/2^n) \sup_{\underset{i=0,\cdots,2^n-1}{n \in \mathbb{N}, n \geq n_s}} \left|\left|\alpha_{i,n} \right|\right|\\
& = & 2(1 + (2^n - 1)T/2^n) \sup_{\underset{i=0,\cdots,2^n-1}{n \in \mathbb{N}, n \geq n_s}} \left|\alpha_{i,n}^{(2)} \right|\\
& \leq & 2(1 + (2^n - 1)T/2^n) K,
\end{eqnarray*}
where we used the triangle inequality in the first inequality, the definition of $\alpha_{i,n}$ in the equality and (\ref{Kalphamn}) in the last inequality. We have thus shown that (\ref{Kalphamn}) $\implies$ (\ref{aa1}).
We continue with Inequality (\ref{aa2}). We consider an arbitrarily small $\varepsilon > 0$. Accordingly, we set
\begin{eqnarray}
\label{delta} \delta = \frac{\varepsilon}{2K}.
\end{eqnarray}
For any $t \in [0,T]$, we define the corresponding $m_t$ such that
$$t \in [m_tT/2^n,(m_t+1)T/2^n].$$
From Equality (\ref{seq3}), we can deduce that
\begin{eqnarray*}
g_{f,T}^{(n)}(t) & = & \alpha_{0,n} + (T/2^n) \sum_{i=1}^{m_t-1} \alpha_{i,n} + \alpha_{m_t,n} (t - m_t T/2^n).
\end{eqnarray*}
Thus, for any $0 \leq s \leq t$ such that
\begin{eqnarray}
\label{delta0}
|t - s| < \delta,
\end{eqnarray}
we obtain
\begin{eqnarray*}
g_{f,T}^{(n)}(t) - g_{f,T}^{(n)}(s) & = & \alpha_{0,n} + (T/2^n) \sum_{i=1}^{m_t-1} \alpha_{i,n} + \alpha_{m_t,n} (t - m_t T/2^n) - \\ & & (\alpha_{0,n} + (T/2^n) \sum_{i=1}^{m_s-1} \alpha_{i,n} + \alpha_{m_s,n} (s - m_s T/2^n))\\
& = & (T/2^n) \sum_{i=m_s}^{m_t-1} \alpha_{i,n} + \alpha_{m_t,n} (t - m_t T/2^n) - \alpha_{m_s,n} (s - m_s T/2^n)
\end{eqnarray*}
Taking the norm, we obtain that
\begin{eqnarray*}
\left|\left| g_{f,T}^{(n)}(t) - g_{f,T}^{(n)}(s) \right|\right| & = & \left|\left| (T/2^n) \sum_{i=m_s}^{m_t-1} \alpha_{i,n} + \alpha_{m_t,n} (t - m_t T/2^n) - \alpha_{m_s,n} (s - m_s T/2^n) \right|\right|\\
& \leq & \left| t-s \right| \sup_{\underset{i=0,\cdots,2^n-1}{n \in \mathbb{N}, n \geq n_s}} \left|\left|\alpha_{i,n} \right|\right|\\
& \leq & 2 K \left| t-s \right|\\
& \leq & \varepsilon,
\end{eqnarray*}
where we used the definition of $\alpha_{i,n}$, the definition of the norm and (\ref{Kalphamn}) in the second inequality, and Definition (\ref{delta}) together with Inequality (\ref{delta0}) in the last inequality. We have thus shown that (\ref{Kalphamn}) $\implies$ (\ref{aa2}).
Proof of (ii): We start with the case $\alpha_{0,n}$. We want to show (\ref{Kalphamn}). By (\ref{assB0}) of Assumption \textbf{[B]}, there exist a small $\eta > 0$ and \begin{eqnarray*}
g_1 & := & (-\infty,g_1^{(2)}) \text{ in upper boundary case}\\
g_1 &:= & (-g_1^{(2)},g_1^{(2)}) \text{ in symmetrical boundary case}
\end{eqnarray*}
with $g_1^{(2)} \in \mathcal{C}_0^+([0,\eta],\mathbb{R}^+ - \{0\})$ s.t. for $0 \leq t \leq \eta$:
\begin{eqnarray}
\label{proof0512930}
f_{g_1}^W (t) \leq f(t).
\end{eqnarray}
Key for the rest of this part of the proof is (\ref{etaprop}). If we write
\begin{eqnarray*}
\max g_1 & := & (- \infty,\max_{0 \leq t \leq \eta} g_1^{(2)}(t)) \text{ in upper boundary case}\\
\max g_1 & := & (- \max_{0 \leq t \leq \eta} g_1^{(2)}(t),\max_{0 \leq t \leq \eta} g_1^{(2)}(t)) \text{ in symmetrical boundary case},
\end{eqnarray*}
it is clear by definition that the corridor delimited by $g_1$ lies within that of $\max g_1$: the upper boundary of $g_1$ is smaller than that of $\max g_1$ and, in the symmetrical boundary case, the lower boundary of $g_1$ is greater than that of $\max g_1$. This implies that the Brownian motion hits $g_1$ first, i.e., on the event that the hit happens during $[0,\eta]$, that
$$T_{g_1}^W \leq T_{\max g_1}^W.$$
In terms of densities, this can be expressed as
\begin{eqnarray}
\label{proof0512929}
\int_{0}^{T/2^{n}} f_{\max g_1}^W (s) ds \leq \int_{0}^{T/2^{n}} f_{g_1}^W (s) ds.
\end{eqnarray}
Combining (\ref{proof0512930}) along with (\ref{proof0512929}), we deduce that
\begin{eqnarray}
\label{proof0512931}
\int_{0}^{T/2^{n}} f_{\max g_1}^W (s) ds \leq \int_{0}^{T/2^{n}} f(s) ds.
\end{eqnarray}
By density definition, we know that
\begin{eqnarray}
\label{proof512951}
\mathbb{P} \big(T_{\max g_1}^W \in [0,T/2^{n}]\big) & = &\int_{0}^{T/2^{n}}f_{\max g_1}^W (s) ds.
\end{eqnarray}
By Definition (\ref{alpha0n}), we have $\alpha_{0,n}$ defined such that
\begin{eqnarray}
\label{proof512954}
\mathbb{P} \big(T_{\alpha_{0,n}}^W \in [0,T/2^{n}]\big) = \int_{0}^{T/2^{n}}f(s)ds.
\end{eqnarray}
From (\ref{proof0512931}), (\ref{proof512951}), (\ref{proof512954}) together with (\ref{etaprop}), we can deduce that
\begin{eqnarray}
\mathbb{P} \big(T_{\max g_1}^W \in [0,T/2^{n}]\big) \leq \mathbb{P} \big(T_{\alpha_{0,n}}^W \in [0,T/2^{n}]\big),
\end{eqnarray}
which, by the strict monotonicity of $z_i$, in turn implies that
\begin{eqnarray}
\alpha_{0,n}^{(2)} \leq \max_{0 \leq t \leq \eta} g_1^{(2)}(t).
\end{eqnarray}
Since the bound $\max_{0 \leq t \leq \eta} g_1^{(2)}(t)$ does not depend on $n$, this shows (\ref{Kalphamn}) for the case $\alpha_{0,n}$.
We consider now the case $\alpha_{m,n}$ for $m > 0$. The key is to work out Definition (\ref{pintf}), which we recall here:
\begin{eqnarray*}
p_{g_{f,T}^{(n)}}(mT/2^n,(m+1)T/2^n,\alpha_{m,n}) & = & \int_{mT/2^n}^{(m+1)T/2^n}f(s)ds \text{ for } m=1,\cdots,2^n-1.
\end{eqnarray*}
We consider the spaces
\begin{eqnarray*}
\mathcal{Q}_S & = & \{(x,y) \in ([0,T])^2 \text{ s.t. } x < y \} \times \left( \{(-z,z) \in \mathbb{R}^2 \text{ s.t. } z \in \mathbb{R} \} \cup (\{- \infty\} \times \mathbb{R})\right),\\
\mathcal{Q}_F & = & \{(x,y,z) \in ([0,T])^2 \times \mathbb{R}^+ \text{ s.t. } x < y \text{ and } 0 < z < \frac{2^n}{T} \int_{x}^{+ \infty}f(s)ds \}.
\end{eqnarray*}
Let us consider the function
\begin{eqnarray*}
Q: \mathcal{Q}_S & \rightarrow & \mathcal{Q}_F\\
Q:(x,y,z) & \mapsto & (x,y,\frac{2^n}{T}p_{g_{f,T}^{(n)}}(x,y,z)).
\end{eqnarray*}
To show that $Q$ is invertible, we recall, for any $(x,y) \in \{(x,y) \in ([0,T])^2 \text{ s.t. } x < y \}$, Definitions (\ref{Rxy1}) and (\ref{Rxy2}) of the functions
\begin{eqnarray}
R_{x,y,1} : \mathbb{R} & \rightarrow & (0,\int_{x}^{+ \infty}f(s)ds)\\ \nonumber
z & \mapsto & p_{g_{f,T}^{(n)}}(x,y,(-\infty,z))
\end{eqnarray}
and
\begin{eqnarray}
R_{x,y,2} : \mathbb{R} & \rightarrow & (0,\int_{x}^{+ \infty}f(s)ds)\\ \nonumber
z & \mapsto & p_{g_{f,T}^{(n)}}(x,y,(-z,z)),
\end{eqnarray}
where $R_{x,y,i}$ for $i=1,2$ correspond respectively to the upper boundary case and the symmetrical boundary case. It was shown above that the $R_{x,y,i}$ are strictly decreasing, and that they satisfy
\begin{eqnarray*}
R_{x,y,i}(\alpha) & \overset{\alpha \rightarrow -\infty}{\rightarrow} & \int_{x}^{+\infty}f(s)ds,\\
R_{x,y,i}(\alpha) & \overset{\alpha \rightarrow +\infty}{\rightarrow} & 0.
\end{eqnarray*}
Thus, we can deduce that $R_{x,y,i}$ are invertible. We also define
\begin{eqnarray*}
S_1 : \{- \infty\} \times \mathbb{R} & \rightarrow & \mathbb{R} \\
z & \mapsto & z^{(2)},
\end{eqnarray*}
and
\begin{eqnarray*}
S_2 : \{(-z,z) \in \mathbb{R}^2 \text{ s.t. } z \in \mathbb{R} \} & \rightarrow & \mathbb{R} \\
z & \mapsto & z^{(2)},
\end{eqnarray*}
which are both canonically invertible. By definition, it is clear that for any $(x,y,z) \in \mathcal{Q}_S$ we have
\begin{eqnarray}
\label{Qexp}
Q(x,y,z) = (x,y,\frac{2^n}{T}R_{x,y,i}(S_i(z)))
\end{eqnarray}
for $i=1,2$. In view of (\ref{Qexp}), together with the fact that the $R_{x,y,i}$ and $S_{i}$ are all invertible, we can deduce that $Q$ is invertible. We denote the inverse function by $Q^{-1}$. The map $Q$ is continuous as a composition of continuous functions; since it is strictly monotone in its third argument, its inverse $Q^{-1}$ is continuous as well. Then note that the set
$$ \{ (mT/2^n,(m+1)T/2^n,\frac{2^n}{T}\int_{mT/2^n}^{(m+1)T/2^n}f(s)ds) \}_{\underset{m=1,\cdots,2^n-1}{n \in \mathbb{N}, n \geq n_s}}$$
is contained in a compact subset of $\mathcal{Q}_F$ by (\ref{assC}) in Assumption \textbf{[B]}. This implies that its image by $Q^{-1}$ is contained in a compact set. But this image is exactly equal to $$\{(mT/2^n,(m+1)T/2^n,\alpha_{m,n})\}_{\underset{m=1,\cdots,2^n-1}{n \in \mathbb{N}, n \geq n_s}}.$$
Thus we have shown that the $\alpha_{m,n}$ are uniformly bounded, which proves (\ref{Kalphamn}).
\end{proof}
The following lemma establishes the almost sure convergence of $T_{h_T^{(n)}}^W$ towards $T_{h_T}^W$ when $h_T^{(n)}$ converges uniformly to $h_T$. This result appears to be missing from the literature, so we include it here. For our proof of existence, we only require the convergence in distribution.
\begin{lemma*}
\label{lemma0}
For any sequence $h_{T}^{(n)} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ which converges uniformly to some $h_{T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$, we have that $T_{h_T^{(n)}}^W$ converges almost surely to $T_{h_T}^W$. As a by-product, we deduce that $T_{h_T^{(n)}}^W$ converges in distribution to $T_{h_T}^W$.
\end{lemma*}
\begin{proof}
To prove that $T_{h_T^{(n)}}^W$ converges almost surely to $T_{h_T}^W$, it is sufficient to show that for any arbitrarily small $\epsilon > 0$, there exists $N_\epsilon \in \mathbb{N}$ such that for any $n \in \mathbb{N}, n \geq N_\epsilon$, we have:
\begin{eqnarray}
\label{proof0430}
\left|T_{h_T^{(n)}}^W - T_{h_T}^W \right| \leq \epsilon.
\end{eqnarray}
As $h_{T}^{(n)}$ converges uniformly to $h_{T}$, we have that for any $\epsilon_h > 0$, there exists $N_{\epsilon_h} \in \mathbb{N}$ such that for any $n \in \mathbb{N}, n \geq N_{\epsilon_h}$, we have
\begin{eqnarray}
\label{epsilonh0}
\left|\left|h_T^{(n)} - h_T \right|\right| \leq \epsilon_h.
\end{eqnarray}
We set
\begin{eqnarray}
\label{epsilonh}
\epsilon_h = \min \left(\frac{1}{2}\inf_{0 \leq t \leq T_{h_T}^W - \epsilon,i=1,2} \left|h_{T}^{(i)}(t) - W_t \right|, \sup_{T_{h_T}^W \leq t \leq T_{h_T}^W + \epsilon} \left|W_t - h_{T}^{(i_h)} (t)\right| \right),
\end{eqnarray}
where $i_h = 1$ if the lower boundary is hit first and $i_h = 2$ if the upper boundary is hit first. First, note that both terms in (\ref{epsilonh}) are strictly positive, since $T_{h_T}^W$ is defined as the first time at which $W_t$ hits $h_T$. Second, note that the first term in (\ref{epsilonh}) prevents (almost surely) $W_t$ from touching $h_T^{(n)}$ before $T_{h_T}^W - \epsilon$, whereas the second term in (\ref{epsilonh}) ensures (almost surely) that $W_t$ first hits $h_T^{(n)}$ during $[T_{h_T}^W - \epsilon, T_{h_T}^W + \epsilon]$. In other words, $T_{h_T^{(n)}}^W \in [T_{h_T}^W - \epsilon, T_{h_T}^W + \epsilon]$ whenever (\ref{epsilonh0}) holds with $\epsilon_h$ set as in (\ref{epsilonh}). This proves (\ref{proof0430}) with $N_\epsilon := N_{\epsilon_h}$.
\end{proof}
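On a discrete time grid, the hitting time $T_{h_T}^W$ can be realised as the first grid point at which the path leaves the band between the two boundary components. The sketch below is our own illustration, not part of the construction above (the function \texttt{first\_exit} and the toy path are ours); it also exhibits the elementary monotonicity used implicitly in the proof: enlarging the band can only delay the (discrete) hitting time.

```python
def first_exit(times, path, lower, upper):
    """First grid time at which `path` leaves the band (lower, upper);
    returns None when the band is never left on the grid.  The callables
    `lower` and `upper` play the role of the two components of h_T."""
    for time, w in zip(times, path):
        if w <= lower(time) or w >= upper(time):
            return time
    return None

# Deterministic toy path w(t) = 2t on [0, 1]: it reaches the constant upper
# boundary 1.0 at t = 0.5, and the enlarged boundary 1.2 only at t = 0.6.
n = 10
times = [i / n for i in range(n + 1)]
path = [2 * r for r in times]
t_hit = first_exit(times, path, lambda r: -1.0, lambda r: 1.0)
t_hit_wide = first_exit(times, path, lambda r: -1.0, lambda r: 1.2)
```

For a Wiener path sampled on the same grid the identical monotonicity holds pathwise, which is the deterministic core of the $\epsilon_h$ argument above.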
We now state the main result of this paper, the existence of a continuous boundary. Its proof is essentially a combination of the results obtained above.
\begin{theorem*}
\label{main}
(Existence) Under Assumption \textbf{[A]} and Assumption \textbf{[B]}, there exists a corresponding continuous boundary function $g_{f,T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ such that the density of $\mathrm{T}_{g_{f,T}}^{W}$ is $f$ on $[0,T]$.
\end{theorem*}
\begin{proof}
To prove Theorem \ref{main}, we have to show that there exists $g_{f,T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ such that the density of $T_{g_{f,T}}^W$ is exactly $f$ on $[0,T]$. By Borel arguments, it is sufficient to show that for any $p \in \mathbb{N} - \{0 \}$ and $k=0,\ldots,2^{p}-1$ we have
\begin{eqnarray}
\label{proof0}
\mathbb{P} \big(T_{g_{f,T}}^W \in [kT/2^p,(k+1)T/2^p]\big) = \int_{kT/2^p}^{(k+1)T/2^p}f(s)ds.
\end{eqnarray}
To prove (\ref{proof0}), we will use $g_{f,T} \in \overline{\mathcal{C}_0^-([0,T],\mathbb{R})} \times \mathcal{C}_0^+([0,T],\mathbb{R})$ from Proposition \ref{propassB}, which is obtained as the uniform limit of a subsequence of $g_{f,T}^{(n)}$. We denote this subsequence by $g_{f,T}^{(n_q)}$.
Then, for any $p \in \mathbb{N} -\{0\}$ and $k=0,\ldots,2^p -1$, we obtain
\begin{eqnarray*}
\mathbb{P} \big(T_{g_{f,T}}^W \in [kT/2^p,(k+1)T/2^p]\big) & = & \lim_{q \rightarrow \infty} \mathbb{P} \big(T_{g_{f,T}^{(n_q)}}^W \in [kT/2^p,(k+1)T/2^p]\big) \\
& = & \int_{kT/2^p}^{(k+1)T/2^p}f(s)ds,
\end{eqnarray*}
where the first equality corresponds to the convergence in distribution of $T_{g_{f,T}^{(n_q)}}^W$ to $T_{g_{f,T}}^W$, obtained by applying Lemma \ref{lemma0} together with the uniform convergence of $g_{f,T}^{(n_q)}$ to $g_{f,T}$ from Proposition \ref{propassB}; the second equality uses Corollary \ref{corexistence}. In other words, we have shown (\ref{proof0}).
\end{proof}
\section{Conclusion}
In this paper, we have investigated the question of existence in the inverse first-passage time problem. We have provided smoothness assumptions on the target density under which the corresponding boundary exists, thereby answering the almost-50-year-old question of Shiryaev.
\smallskip
There is room for improvement: we have provided neither a closed-form expression for the solution nor uniqueness of the solution. These promise to be even harder problems to solve.
| {
"timestamp": "2021-06-23T02:12:53",
"yymm": "2106",
"arxiv_id": "2106.11573",
"language": "en",
"url": "https://arxiv.org/abs/2106.11573",
"abstract": "The inverse first-passage time problem determines a boundary such that the first-passage time of a Wiener process to this boundary has a given distribution. An approximation which is based on the starting value of the boundary to a smooth boundary by a piecewise linear boundary is given by equating the probability of the first-passage time to a linear boundary and the increment of the distribution on each interval. We propose a modification of that approximation which also approximates the starting value of the boundary. First, we show that the approximation is well-defined when assuming that the boundary is absolutely continuous. Second, we show that a subsequence of this new approximation uniformly converges to the boundary when the length of each interval of linear approximation goes to 0 asymptotically. The results are obtained using Arzela-Ascoli theorem on any compact space on which we further assume that the boundary admits uniformly dominated derivative. As the starting value of the boundary is unknown, this makes the new approximation more suitable for applications. The results are also proved in the first-passage time problem of a reflected Wiener process.",
"subjects": "Probability (math.PR)",
"title": "Approximation convergence in the inverse first-passage time problem"
} |
https://arxiv.org/abs/1610.07232 | Symbolic Iterative Solution of Two-Point Boundary Value Problems | In this work we give an efficient method involving symbolic manipulation, Picard iteration, and auxiliary variables for approximating solutions of two-point boundary value problems. | \section{Introduction}\label{s:intro}
There exist a variety of numerical methods for approximating solutions of two-point boundary value problems,
among them shooting methods, finite difference techniques, power series methods, and variational methods, all
of which are described in detail in classical texts (for example, \cite{BSW}, \cite{BF}, and \cite{K}). With the
advent of computer algebra systems hybrids of numerical and symbolic manipulation techniques have also arisen
(for example, \cite{PP} and \cite{SW}). In this work we develop a purely symbolic technique for approximating
solutions of two-point boundary value problems that applies identically in both linear and nonlinear cases.
Fundamental to the technique is the idea of deriving an integral expression for the slope $\gamma$ of the solution
$y(t)$ of the boundary value problem at the left endpoint; we then use a Picard iteration scheme to simultaneously
approximate, ever more closely, both $\gamma$ and $y(t)$, the latter now viewed as the unique solution to the initial
value problem that it determines at the left endpoint. We prove theorems guaranteeing both existence of a unique
solution to the boundary value problem and convergence of our iterates to it, although as we demonstrate the
technique works under conditions far more general than those given by the theorems. Since the theorems are proved
using the Contraction Mapping Theorem, the iterates obtained converge to $y(t)$ exponentially fast in the supremum
norm.
By introducing auxiliary variables we overcome problems with quadratures that cannot be performed in closed
form. Thus we ultimately obtain an efficient computational method whose output is a sequence of polynomials
converging to $y(t)$.
\section{The Algorithm}\label{s:alg}
Consider a two-point boundary value problem of the form
\renewcommand{\thefootnote}{}
\footnote{${}^1$ Mathematics Department, James Madison University, Harrisonburg, VA 22807, USA}
\footnote{${}^2$ Mathematics Department, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
(Corresponding author, \texttt{dsshafer@uncc.edu}, +1 704 687 5601, FAX +1 704 687 1392)}
\begin{equation}\label{basic.bvp}
y'' = f(t, y, y'), \qquad y(a) = \alpha, \quad y(b) = \beta \, .
\end{equation}
If $f$ is continuous and locally Lipschitz in the last two variables then by the Picard-Lindel\"of Theorem, for any
$\gamma \in \mathbb{R}$ the initial value problem
\begin{equation}\label{basic.ivp}
y'' = f(t, y, y'), \qquad y(a) = \alpha, \quad y'(a) = \gamma
\end{equation}
will have a unique solution on some interval about $t = a$.
The boundary value problem \eqref{basic.bvp} will have a solution if and only if there exists $\gamma \in \mathbb{R}$ such
that (i) the maximal interval of existence of the unique solution of \eqref{basic.ivp} contains the interval
$[a, b]$, and (ii) the unique solution $y(t)$ of \eqref{basic.ivp} satisfies $y(b) = \beta$. But \eqref{basic.ivp} is
equivalent to the integral equation
\begin{equation}\label{equiv.ivp}
y(t) = \alpha + \gamma (t - a) + \int_a^t (t - s) f(s, y(s), y'(s)) \, ds.
\end{equation}
If $y(t)$ is a solution of \eqref{basic.bvp} then inserting it into \eqref{equiv.ivp}, evaluating at $t = b$, and
solving the resulting equation for $\gamma$, we obtain an expression for the corresponding value of $\gamma$ in \eqref{basic.ivp}, namely
\begin{equation} \label{e:gamma}
\gamma = \tfrac1{b - a}
\left(
\beta - \alpha - \int_a^b (b - s) f(s, y(s), y'(s)) \, ds
\right).
\end{equation}
(If \eqref{basic.bvp} has no solution then for any solution $y(t)$ of the ordinary differential equation in \eqref{basic.bvp} that
satisfies $y(a) = \alpha$ and exists on $[a, b]$, the number on the right hand side of \eqref{e:gamma} exists, but will not be equal to the originally determined value of $y'(a)$.) The key idea in the new
method proposed here for solving \eqref{basic.bvp} is that in a Picard iteration scheme applied to the system of first order equations
\begin{align*}
y' &= u \mspace{124mu}y(a) = \alpha \\
u' &= f(t, y(t), u(t)) \mspace{25mu}u(a) = \gamma
\end{align*}
that is equivalent to \eqref{basic.ivp} we use \eqref{e:gamma} to iteratively obtain successive approximations to the value of $\gamma$
in \eqref{basic.ivp}, if it exists. Thus, taking as the initial approximations of $y(t)$ and $u(t) = y'(t)$ the reasonable respective
choices of the left boundary value and the average slope of the solution to \eqref{basic.bvp} on the interval $[a, b]$, the iterates are
\begin{subequations} \label{e:iteration.split}
\begin{equation} \label{e:iteration.split.initial}
\begin{aligned}
y^{[0]}(t) &\equiv \alpha \\
u^{[0]}(t) &\equiv \tfrac{\beta - \alpha}{b - a} \\
\end{aligned}
\end{equation}
and
\begin{equation}\label{e:iteration.split.recursion}
\begin{aligned}
\gamma^{[k+1]} &= \frac1{b-a}
\Big(
\beta - \alpha - \int_a^b (b - s) f(s, y^{[k]}(s), u^{[k]}(s)) \, ds
\Big) \\
y^{[k+1]}(t) &= \alpha + \int_a^t u^{[k]}(s) \, ds \\
u^{[k+1]}(t) &= \gamma^{[k+1]} + \int_a^t f(s, y^{[k]}(s), u^{[k]}(s)) \, ds \,.
\end{aligned}
\end{equation}
\end{subequations}
This gives the following algorithm for approximating solutions of \eqref{basic.bvp}.
\begin{algorithm} \label{r:alg1}
To approximate the solution of the boundary value problem
\begin{equation}\label{basic.bvp.dupe}
y'' = f(t, y, y'), \qquad y(a) = \alpha, \quad y(b) = \beta
\end{equation}
iteratively compute the sequence of functions on $[a, b]$ defined by \eqref{e:iteration.split.initial}
and \eqref{e:iteration.split.recursion}.
\end{algorithm}
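A direct transcription of \eqref{e:iteration.split} in a computer algebra system might look as follows. This is a sketch of our own in Python/SymPy (the name \texttt{bvp\_picard} is ours, not from the text); with a polynomial right hand side every quadrature stays in closed form.

```python
import sympy as sp

t, s = sp.symbols('t s')

def bvp_picard(f, a, b, alpha, beta, n_iter):
    """Run the recursion (e:iteration.split) for y'' = f(t, y, y'),
    y(a) = alpha, y(b) = beta; returns (y, u, gamma) after n_iter steps."""
    a, b, alpha, beta = map(sp.sympify, (a, b, alpha, beta))
    y = alpha                      # y^[0](t) = alpha
    u = (beta - alpha) / (b - a)   # u^[0](t) = average slope
    gamma = u
    for _ in range(n_iter):
        fs = f(s, y.subs(t, s), u.subs(t, s))
        # gamma^[k+1] from (e:gamma), then y^[k+1], u^[k+1] from the
        # k-th iterates
        gamma = (beta - alpha
                 - sp.integrate((b - s) * fs, (s, a, b))) / (b - a)
        y, u = (alpha + sp.integrate(u.subs(t, s), (s, a, t)),
                gamma + sp.integrate(fs, (s, a, t)))
    return sp.expand(y), sp.expand(u), gamma

# Example: y'' = -y, y(0) = 1, y(1/4) = cos(1/4), with exact solution
# y = cos t and gamma = y'(0) = 0; here b - a = 1/4 < 2/5, so the
# contraction condition of the next section holds with L = 1.
b = sp.Rational(1, 4)
y8, u8, g8 = bvp_picard(lambda s_, y_, u_: -y_, 0, b, 1, sp.cos(b), 8)
```

After eight iterations the polynomial \texttt{y8} matches the exact solution at both endpoints to well below the tolerances reported in Section \ref{s:exa}.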
We will now state and prove a theorem that gives conditions guaranteeing that the problem \eqref{basic.bvp.dupe}
has a unique solution, then prove that the iterates in Algorithm \ref{r:alg1} converge to it. We will need the
following simple lemma whose proof is omitted.
\begin{lemma}\label{lemma.L}
Let $E \subset \mathbb{R} \times \mathbb{R}^2$ be open and let $f: E \to \mathbb{R} : (t, y, u) \mapsto f(t, y, u)$ be Lipschitz in
${\bf y} = (y,u)$ on $E$ with Lipschitz constant $L$ with respect to absolute value on $\mathbb{R}$ and the sum norm on
$\mathbb{R}^2$. Then
\[
{\bf F} : E \to \mathbb{R}^2 : (t, y, u) \mapsto (u, f(t, y, u))
\]
is Lipschitz in ${\bf y}$ with Lipschitz constant $1 + L$ with respect to the sum norm on $\mathbb{R}^2$.
\end{lemma}
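The omitted proof is in fact immediate: writing ${\bf y}_i = (y_i, u_i)$ for $i = 1, 2$,
\[
|{\bf F}(t, {\bf y}_1) - {\bf F}(t, {\bf y}_2)|_\textrm{sum}
= |u_1 - u_2| + |f(t, {\bf y}_1) - f(t, {\bf y}_2)|
\leqslant (1 + L)\, |{\bf y}_1 - {\bf y}_2|_\textrm{sum},
\]
since $|u_1 - u_2| \leqslant |{\bf y}_1 - {\bf y}_2|_\textrm{sum}$.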
\begin{theorem}\label{thm.1}
Let $f : [a, b] \times \mathbb{R}^2 \to \mathbb{R} : (t, y,u) \mapsto f(t, y, u)$ be continuous and Lipschitz in ${\bf y} = (y, u)$
with Lipschitz constant $L$ with respect to absolute value on $\mathbb{R}$ and the sum norm on $\mathbb{R}^2$. If
$0 < b - a < (1 + \frac32 L)^{-1}$ then for any $\alpha$, $\beta \in \mathbb{R}$ the boundary value problem
\begin{equation} \label{orig_bvp.1}
y'' = f(t, y, y'), \qquad y(a) = \alpha, \quad y(b) = \beta
\end{equation}
has a unique solution.
\end{theorem}
\begin{proof}
A twice continuously differentiable function $\eta$ from a neighborhood of $[a, b]$ into $\mathbb{R}$ solves the ordinary
differential equation in \eqref{orig_bvp.1} if and only if the mapping $(y(t), u(t)) = (\eta(t), \eta'(t))$ from that
neighborhood into $\mathbb{R}^2$ solves the integral equation
\[
\begin{pmatrix}
y(t) \\
u(t)
\end{pmatrix}
=
\begin{pmatrix}
y(a) \\
u(a)
\end{pmatrix}
+
\int_a^t
\begin{pmatrix}
u(s) \\
f(s, y(s), u(s))
\end{pmatrix}
\,
ds.
\]
By the discussion surrounding \eqref{e:gamma} $\eta$ meets both boundary conditions in \eqref{orig_bvp.1} if and only if
$\eta(a) = \alpha$ and $\eta'(a) = \gamma$ where $\gamma$ is given by \eqref{e:gamma}, with $(y(s), u(s))$ replaced by
$(\eta(s), \eta'(s))$. In short the boundary value problem \eqref{orig_bvp.1} is equivalent to the integral equation
\begin{equation}\label{long.ie}
\begin{pmatrix}
y(t) \\
u(t)
\end{pmatrix}
=
\begin{pmatrix}
\alpha \\
\tfrac{1}{b - a} \left[ \beta -\alpha - \int_a^b (b - s) f(s, y(s), u(s)) \, ds \right]
\end{pmatrix}
+
\int_a^t
\begin{pmatrix}
u(s) \\
f(s, y(s), u(s))
\end{pmatrix}
ds.
\end{equation}
If ${\bf y}(t) = (y(t), u(t))$ is a bounded continuous mapping from a neighborhood $U$ of $[a, b]$ in $\mathbb{R}$ into $\mathbb{R}^2$
then the right hand side of \eqref{long.ie} is well defined and defines a bounded continuous mapping from $U$ into
$\mathbb{R}^2$. Thus letting $\mathscr C$ denote the set of bounded continuous mappings from a fixed bounded open neighborhood $U$
of $[a, b]$ into $\mathbb{R}^2$, a twice continuously differentiable function $\eta$ on $U$ into $\mathbb{R}$ solves the boundary
value problem \eqref{orig_bvp.1} if and only if $\pmb{\eta} \stackrel{\textrm{def}}{=} (\eta, \eta')$ is a fixed point
of the operator ${\mathscr T} : \mathscr C \to \mathscr C$ defined by
\renewcommand{\baselinestretch}{.7}
\[
{\mathscr T}
\begin{pmatrix}
y \\
u
\end{pmatrix}
(t)
=
\begin{pmatrix}
\alpha \\
\left[ \tfrac{\beta -\alpha}{b - a} - \int_a^b \tfrac{b - s}{b - a} f(s, y(s), u(s)) \, ds \right]
\end{pmatrix}
+
\int_a^t
\begin{pmatrix}
u(s) \\
f(s, y(s), u(s))
\end{pmatrix}
ds,
\]
\renewcommand{\baselinestretch}{2}
which we abbreviate to
\renewcommand{\baselinestretch}{.7}
\begin{equation}\label{Cont}
\begin{aligned}
{\mathscr T}({\bf y})(t)
=
\begin{pmatrix}
\alpha \\
\left[ \tfrac{\beta -\alpha}{b - a} - \int_a^b \tfrac{b - s}{b - a} f(s, {\bf y}(s)) \, ds \right]
\end{pmatrix}
+
\int_a^t
{\bf F}(s, {\bf y}(s)) \,ds
\end{aligned}
\end{equation}
\renewcommand{\baselinestretch}{2}
by defining ${\bf F} : [a, b] \times \mathbb{R}^2 \to \mathbb{R}^2$ by
${\bf F}(t, y, u) = (u, f(t, y, u))$.
The vector space $\mathscr C$ equipped with the supremum norm is well known to be complete. Thus by the Contraction Mapping
Theorem the theorem will be proved if we can show that ${\mathscr T}$ is a contraction on $\mathscr C$. To this end, let $\pmb{\eta}$ and
$\pmb{\mu}$ be elements of $\mathscr C$. Let $\varepsilon = \sup \{ t - b : t \in U \}$. Then for any $t \in U$
\renewcommand{\baselinestretch}{.7}
\begin{align*}
| ({\mathscr T}&\pmb{\eta})(t) - ({\mathscr T}\pmb{\mu})(t) |_\textrm{sum} \\
&\leqslant
\left|
\begin{pmatrix}
\alpha \\
\left[ \frac{\beta -\alpha}{b - a} - \int_a^b \frac{b - s}{b - a} f(s, \pmb{\eta}(s)) ds \right]
\end{pmatrix}
-
\begin{pmatrix}
\alpha \\
\left[ \frac{\beta -\alpha}{b - a} - \int_a^b \frac{b - s}{b - a} f(s, \pmb{\mu}(s)) ds \right]
\end{pmatrix}
\right|_\textrm{sum} \\
&\mspace{200mu}+
\left|
\int_a^t {\bf F}(s, \pmb{\eta}(s)) - {\bf F}(s, \pmb{\mu}(s)) \, ds
\right|_\textrm{sum} \\
&\leqslant \int_a^b \frac{b - s}{b - a} | f(s, \pmb{\eta}(s)) - f(s, \pmb{\mu}(s)) | \, ds
+ \int_a^t | {\bf F}(s, \pmb{\eta}(s)) - {\bf F}(s, \pmb{\mu}(s)) |_\textrm{sum} \, ds \\
&\overset{(*)}{\leqslant} \int_a^b \frac{b - s}{b - a} L | \pmb{\eta}(s) - \pmb{\mu}(s) |_\textrm{sum} \, ds
+ \int_a^t (1 + L) | \pmb{\eta}(s) - \pmb{\mu}(s) |_\textrm{sum} \, ds \\
&\leqslant \int_a^b \frac{b - s}{b - a} L || \pmb{\eta} - \pmb{\mu} ||_\textrm{sup} \, ds
+ \int_a^t (1 + L) || \pmb{\eta} - \pmb{\mu} ||_\textrm{sup} \, ds \\
&\leqslant [ \tfrac12 (b - a) L + (1 + L) ((b - a) + \varepsilon ) ] || \pmb{\eta} - \pmb{\mu} ||_\textrm{sup}
\end{align*}
where for inequality ($*$) Lemma \ref{lemma.L} was applied in the second summand. Thus
$|| {\mathscr T}\pmb{\eta} - {\mathscr T}\pmb{\mu} ||_\textrm{sup} \leqslant (1 + \frac32 L) ((b - a) + \varepsilon) || \pmb{\eta} - \pmb{\mu} ||_\textrm{sup}$
and ${\mathscr T}$ is a contraction provided $(1 + \frac32 L) ((b - a) + \varepsilon) < 1$, equivalently, provided
$(1 + \frac32 L) (b - a) < 1 - (1 + \frac32 L)\varepsilon$. But $U$ can be chosen arbitrarily, hence
$(1 + \frac32 L)\varepsilon$ can be made arbitrarily small, giving the sufficient condition of the theorem.
\end{proof}
Repeated composition of the mapping ${\mathscr T}$ in the proof of the theorem generates Picard iterates. The recursion
\eqref{e:iteration.split} could be expressed without reference to $\gamma$ and so as to exactly form the Picard iterates
based on the contraction ${\mathscr T}$ simply by
replacing $\gamma^{[k+1]}$ in the last line in \eqref{e:iteration.split.recursion} by the right
hand side of the expression for $\gamma^{[k+1]}$ in the first line in \eqref{e:iteration.split.recursion}. The recursion
\eqref{e:iteration.split} was expressed as it was so as to match the discussion leading up to it and to make it convenient to
track the estimates of $\gamma$ in applications of the algorithm in Section \ref{s:exa}. Therefore, since by the Contraction
Mapping Theorem iterates ${\mathscr T}^n(\pmb{\eta}) = ({\mathscr T} \circ \cdots \circ {\mathscr T})(\pmb{\eta})$ converge to the fixed point for every choice of starting
point, the iterates defined by \eqref{e:iteration.split} will converge to the unique solution of \eqref{orig_bvp.1} that the
theorem guarantees to exist. Thus we have the following result.
\begin{theorem} \label{r:alg1.works}
Let $f : [a, b] \times \mathbb{R}^2 \to \mathbb{R} : (t, y,u) \mapsto f(t, y, u)$ be Lipschitz in ${\bf y} = (y, u)$ with Lipschitz
constant $L$ with respect to absolute value on $\mathbb{R}$ and the sum norm on $\mathbb{R}^2$. If
$0 < b - a < (1 + \frac32 L)^{-1}$ then for any $\alpha$, $\beta \in \mathbb{R}$ the iterates generated by
Algorithm \ref{r:alg1} converge to the unique solution of the boundary value problem
\begin{equation} \label{orig_bvp}
y'' = f(t, y, y'), \qquad y(a) = \alpha, \quad y(b) = \beta
\end{equation}
guaranteed by Theorem \ref{thm.1} to exist.
\end{theorem}
\begin{remark}\label{mono.rmk}
Because Theorem \ref{thm.1} was proved by means of the Contraction Mapping Theorem approximate solutions
$(y^{[k]}(t), u^{[k]}(t))$ to \eqref{long.ie} converge monotonically and exponentially fast. Since the sum norm was used
on $\mathbb{R}^2$, however, the corresponding approximate solutions $y^{[k]}(t)$ to \eqref{orig_bvp.1} need not converge
monotonically to the solution $y(t)$ of \eqref{orig_bvp}.
\end{remark}
\begin{remark}\label{mixed.rmk}
The ideas developed in this section can also be applied to two-point boundary value problems of the form
\[
y'' = f(t, y, y'), \qquad y'(a) = \gamma, \quad y(b) = \beta \, ,
\]
so that in equation \eqref{basic.ivp} the constant $\gamma$ is now known and $\alpha = y(a)$ is unknown. Thus in
\eqref{equiv.ivp} we evaluate at $t = b$ but now solve for $\alpha$ instead of $\gamma$, obtaining in place
of \eqref{e:gamma} the expression
\[
\alpha = \beta - \gamma (b - a) - \int_a^b (b - s) f(s, y(s), y'(s)) \, ds.
\]
In the Picard iteration scheme we now successively update an approximation of $\alpha$ starting with some initial
value $\alpha_0$. Theorems analogous to Theorems \ref{thm.1} and \ref{r:alg1.works} hold in this setting.
\end{remark}
The examples in Section \ref{s:exa} will show that the use of Algorithm \ref{r:alg1} is by no means restricted
to problems for which the hypotheses of Theorem \ref{r:alg1.works} are satisfied. It will in fact give satisfactory
results for many problems that do not satisfy those hypotheses.
\section{The Computational Method}\label{s:aux}
When the right hand side of the equation \eqref{orig_bvp} is a polynomial function then the integrations that
are involved in implementing Algorithm \ref{r:alg1} can always be done efficiently, but otherwise the Picard iterates
can lead to impossible integrations. Consider, for example, the boundary value problem
\begin{equation}\label{psm1}
y'' = \sin y, \quad y(0) = 0, \quad y(\pi/8) = 1,
\end{equation}
with the corresponding first order system (with unknown constant $\gamma$)
\begin{equation}\label{psm1.sys}
\begin{aligned}
y' &= u \\
u' &= \sin y
\end{aligned}
\qquad
\begin{aligned}
y(0) &= 0 \\
u(0) &= \gamma
\end{aligned}
\end{equation}
for which \eqref{e:iteration.split.recursion}, with explicit mention of $\gamma^{[k+1]}$ eliminated, is
\begin{equation}\label{sine.recursion}
\begin{aligned}
y^{[k+1]}(t) &= \int_0^t u^{[k]}(s) \, ds \\
u^{[k+1]}(t) &= \tfrac{8}{\pi}\Big[ 1 - \int_0^{\frac{\pi}{8}} (\tfrac{\pi}{8} - s) \sin y^{[k]}(s) \, ds \Big]
+ \int_0^t \sin y^{[k]}(s) \, ds.
\end{aligned}
\end{equation}
The first few iterates are readily computed but on the fourth iteration the expression for $u^{[4]}(t)$ contains terms
like $\int_0^t \sin(\frac{\pi^2}{64} \sin(\frac{8}{\pi}s)) \, ds$, which cannot be computed in closed form. In such a
situation we use the auxiliary variable method as expounded by Parker and Sochacki (\cite{PS}; see also \cite{CPSW}).
In this example we introduce the variable $v = \sin y$ and, since $v' = - \cos y \, y'$, the variable $w = \cos y$, so
that \eqref{psm1.sys} is replaced by the four-dimensional problem
\begin{equation}\label{e:psm1.ult}
\begin{aligned}
y' &= u \\
u' &= v \\
v' &= uw \\
w' &= -uv
\end{aligned}
\qquad
\begin{aligned}
y(0) &= 0 \\
u(0) &= \gamma \\
v(0) &= 0 \\
w(0) &= 1 \,,
\end{aligned}
\end{equation}
where the initial values for $v$ and $w$ come from their definitions in terms of $y(t)$ and the initial value of $y$.
Suppose that the unique solution to \eqref{psm1.sys} is $(y, u) = (\sigma(t), \tau(t))$ on some interval $J$ about 0 and that the
unique solution to \eqref{e:psm1.ult} is $(y, u, v, w) = (\rho(t), \mu(t), \nu(t), \xi(t))$ on some interval $K$ about 0. Then by
construction $(y, u, v, w) = (\sigma(t), \tau(t), \sin \sigma(t), \cos \sigma(t))$ solves \eqref{e:psm1.ult} on $J$, hence we
conclude that on $J \cap K$ the function $y = \sigma(t)$, which in the general case we cannot find explicitly, is equal to the
function $y = \rho(t)$, which we can approximate on any finite interval about 0 to any required accuracy. For although the
dimension has increased, now the right hand sides of the differential equations are all polynomial functions so quadratures can
be done easily.
Applying the method of auxiliary variables in the implementation of Algorithm \ref{r:alg1} in general means simply adjoining to
\eqref{e:iteration.split.initial} initializations of auxiliary variables, say by their initial values as determined by their
definitions and the initial values $y(a) = \alpha$ and $u(a) = \gamma$, and adjoining to \eqref{e:iteration.split.recursion} the
obvious Picard recurrence expression arising from the initial value problems for the auxiliary variables, analogous to the last
two equations in \eqref{e:psm1.ult}. For example, for \eqref{psm1.sys} we obtain from \eqref{e:psm1.ult} the additional
initializations $v^{[0]}(t) \equiv 0$ and $w^{[0]}(t) \equiv 1$ and the additional recursion equations
$v^{[k+1]}(t) = \int_0^t u^{[k]}(s) w^{[k]}(s) \, ds$ and $w^{[k+1]}(t) = 1 - \int_0^t u^{[k]}(s) v^{[k]}(s) \, ds$.
Thus in general we obtain a computationally efficient method for approximating the solution.
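Concretely, for \eqref{psm1} the expanded recursion involves nothing but polynomial quadratures. The following is one natural transcription, a SymPy sketch of our own in which the variable names follow \eqref{e:psm1.ult}:

```python
import sympy as sp

t, s = sp.symbols('t s')
a, b, alpha, beta = 0, sp.pi / 8, 0, 1

# Initializations (e:iteration.split.initial) together with the auxiliary
# variables v = sin y and w = cos y of (e:psm1.ult).
y, u = sp.Integer(0), (beta - alpha) / (b - a)
v, w = sp.Integer(0), sp.Integer(1)

for _ in range(5):
    ys, us, vs, ws = [e.subs(t, s) for e in (y, u, v, w)]
    # In the expanded system u' = v, so v^[k] stands in for sin y^[k]
    # in the update of gamma coming from (e:gamma).
    gamma = (beta - alpha - sp.integrate((b - s) * vs, (s, a, b))) / (b - a)
    y, u, v, w = (alpha + sp.integrate(us, (s, a, t)),
                  gamma + sp.integrate(vs, (s, a, t)),
                  sp.integrate(us * ws, (s, a, t)),        # v(0) = 0
                  1 - sp.integrate(us * vs, (s, a, t)))    # w(0) = 1
```

Every integrand above is a polynomial in $s$, so the quadratures that blocked \eqref{sine.recursion} at the fourth iterate never arise.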
The following theorem is stated and proved in \cite{PS}.
\begin{theorem}\label{r:PSM}
Let $\mathcal F = (f_1, \cdots, f_n) : \mathbb{R}^n \to \mathbb{R}^n$ be a polynomial mapping and ${\bf y} = (y_1, \cdots, y_n) : \mathbb{R} \to \mathbb{R}^n$.
Consider the initial value problem
\[
y'_j = f_j({\bf y}), \qquad y_j(0) = \alpha_j, \quad j = 1, \cdots, n
\]
and the corresponding Picard iterates $P_k(t) = (P_{1,k}(t), \cdots, P_{n,k}(t))$,
\begin{align*}
P_{j,1} (t) &= \alpha_j, \quad j = 1, \cdots, n \\
P_{j,k+1}(t) &= \alpha_j + \int_0^t f_j(P_k(s)) \, ds, \quad k = 1, 2, \cdots, \quad j = 1, \cdots, n.
\end{align*}
Then $P_{j,k+1}$ is the $k^{\textrm{th}}$ Maclaurin polynomial for $y_j$ plus a polynomial all of whose terms have degree
greater than $k$.
\end{theorem}
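The theorem is easy to watch in action on the simplest possible example (our own illustration): for $y' = y$, $y(0) = 1$, whose solution is $e^t$, five Picard updates reproduce the degree-5 Maclaurin polynomial exactly. For a linear right hand side no higher-degree terms appear.

```python
import sympy as sp

t, s = sp.symbols('t s')

# y' = y, y(0) = 1: P_1 = 1 and P_{k+1}(t) = 1 + int_0^t P_k(s) ds.
p = sp.Integer(1)
for _ in range(5):
    p = 1 + sp.integrate(p.subs(t, s), (s, 0, t))

# Degree-5 Maclaurin polynomial of e^t, for comparison.
maclaurin = sum(t**j / sp.factorial(j) for j in range(6))
```

In general $P_{j,k+1}$ agrees with the degree-$k$ Maclaurin polynomial only up to terms of degree greater than $k$; the agreement is exact here because the right hand side is linear.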
In \cite{CPSW} the authors address the issue as to which systems of ordinary differential equations can be handled by
this method. The procedure for defining the new variables is neither algorithmic nor unique. However, with sufficient
ingenuity it has been successfully applied in every case for which the original differential equation is analytic.
By Theorem \ref{r:PSM} Algorithm \ref{r:alg1} is generating approximations of the Maclaurin series of the solution of
\eqref{basic.bvp}, but because of the convergence to $\gamma$ the approximations match the solution virtually perfectly
at the right endpoint as well as at the left.
Because the method involves only repeated integration of polynomial functions, it is easy to code and experience
shows that it compares favorably in the computational time required to obtain results with accuracy comparable to
that obtained by means of such popular methods as the shooting method with fourth order Runge-Kutta numerics, the
power series method, and the finite difference method.
\section{Examples}\label{s:exa}
We will illustrate the method and its efficiency with several examples, both linear and nonlinear, and compare the
result with the known exact solutions. Computations were done independently using Mathematica 10 and Maple 16.
\begin{example}\label{ex:-y.a}
Consider the second order linear ordinary differential equation $y'' = -y$. Since the right hand side is a polynomial
function and is Lipschitz with Lipschitz constant 1 Algorithm \ref{r:alg1} can be applied directly to any corresponding
two-point boundary value problem, and Theorem \ref{r:alg1.works} guarantees that the approximations to the solution of
the equivalent problem of the form \eqref{long.ie} that are generated will converge to the solution monotonically and
exponentially fast, with respect to the supremum norm, on any interval $[a, b]$ for which $b - a < 2/5$. We will consider
two problems for which the solution is is $y(t) = \cos t + \sin t$. The first is
\[
y'' = -y, \qquad y(0) = 1, \quad y(\tfrac\pi8) = \sqrt{1 + 1 / \sqrt{2}},
\]
for which the conditions of Theorem \ref{r:alg1.works} are met, and for which we find that for the first five iterates
the errors, $|| y(t) - y^{[k]}(t) ||_\text{sup}$, $1 \leqslant k \leqslant 5$, rounded to five decimal places, are
\[
0.022 60 \qquad 0.003 39 \qquad 0.000 36 \qquad 0.000 05 \qquad 0.000 01
\]
and that $| \gamma - \gamma^{[k]} |$ for $1 \leqslant k \leqslant 5$ are
\[
0.022 94 \qquad 0.002 93 \qquad 0.000 41 \qquad 0.000 04 \qquad 0.000 01.
\]
However, Algorithm \ref{r:alg1} performs well even for intervals of length much greater than $2/5$. This is the case for
the second problem (with the same solution),
\begin{equation}\label{e:ex.-y.a}
y'' = -y, \qquad y(0) = 1, \quad y(\tfrac\pi4) = \sqrt{2}.
\end{equation}
When Algorithm \ref{r:alg1} is applied to \eqref{e:ex.-y.a} we have that $|| y(t) - y^{[k]}(t) ||_\text{sup}$ for
$1 \leqslant k \leqslant 5$ are
\[
0.100 00 \qquad 0.002 30 \qquad 0.006 40 \qquad 0.001 42 \qquad 0.000 40
\]
(see Remark \ref{mono.rmk}) and that $| \gamma - \gamma^{[k]} |$ for $1 \leqslant k \leqslant 5$ are
\[
0.079 91 \qquad 0.025 69 \qquad 0.005 50 \qquad 0.001 50 \qquad 0.000 35.
\]
As already noted, because we are forcing agreement at the two endpoints the error is virtually zero at each end of
the interval, which is the universal pattern.
\end{example}
\begin{example} \label{Nex1}
Consider the nonlinear boundary value problem
\begin{equation} \label{e:nonlin.bvp.example}
y'' = 16 + (3 - 2t)^3 + \tfrac14 y y', \qquad y(0) = \tfrac{43}{3}, \quad y(1) = 17,
\end{equation}
which has solution $y(t) = (3 - 2t)^2 + 16 (3 - 2t)^{-1}$. The right hand side of the differential equation is not Lipschitz in
${\bf y} = (y, y')$ and the right endpoint 1 is close to where the solution blows up. Nevertheless, when we apply Algorithm
\ref{r:alg1}, the iterates converge to the exact solution, albeit slowly. After eight iterations we have that
$|| y(t) - y^{[8]}(t) ||_\text{sup} \approx 0.024$ and $\gamma^{[8]} \approx -8.457$ compared to $\gamma = -8.\bar4$.
\end{example}
\begin{example}
Consider the boundary value problem
\begin{equation}\label{NCVRD.1}
y'' = -e^{-2y}, \qquad y(0) = 0, \quad y(1.2) = \ln \cos 1.2 \approx -1.015\,123\,283,
\end{equation}
for which auxiliary variables must be introduced. The right hand side is not Lipschitz in $y$ yet in this case the algorithm
works well. The unique solution is $y(t) = \ln \cos t$, yielding $\gamma = 0$.
Introducing the dependent variable $u = y'$ to obtain the equivalent first order system $y' = u$, $u' = -e^{-2y}$ and the
variable $v = e^{-2y}$ to replace the transcendental function with a polynomial we obtain the expanded system
\begin{align*}
y' &= u \\
u' &= -v \\
v' &= -2uv
\end{align*}
with initial conditions
\[
y(0) = 0, \quad u(0) = \gamma, \quad v(0) = 1
\]
(with $\gamma$ regarded as unknown here), a system on $\mathbb{R}^3$ for which the $y$-component is the solution of the boundary value problem
\eqref{NCVRD.1}. Thus in this instance
\[
y^{[0]}(t) \equiv 0, \quad
u^{[0]}(t) \equiv \frac{\ln \cos 1.2}{1.2}, \quad
v^{[0]}(t) \equiv 1,
\]
and
\begin{align*}
\gamma^{[k+1]} &= \Big( \ln \cos 1.2 + \int_0^{1.2} \, (1.2 - s) v^{[k]}(s) \, ds \Big) / 1.2 \\
y^{[k+1]}(t) &= 0 + \int_0^t u^{[k]}(s) \, ds \\
u^{[k+1]}(t) &= \gamma^{[k+1]} - \int_0^t v^{[k]}(s) \, ds \\
v^{[k+1]}(t) &= 1 - 2 \int_0^t u^{[k]}(s) v^{[k]}(s) \, ds \, .
\end{align*}
The first eight iterates of $\gamma$ are:
\[
\gamma^{[1]} = -0.24594,
\mspace{15mu}
\gamma^{[2]} = \phantom{-}0.16011,
\mspace{15mu}
\gamma^{[3]} = \phantom{-}0.19297,
\mspace{15mu}
\gamma^{[4]} = \phantom{-}0.04165,
\]
\[
\gamma^{[5]} = -0.04272,
\mspace{15mu}
\gamma^{[6]} = -0.04012,
\mspace{15mu}
\gamma^{[7]} = -0.00923,
\mspace{15mu}
\gamma^{[8]} = \phantom{-}0.01030.
\]
The maximum errors show a similar sort of pattern as they tend to zero; after eight iterations the maximum error is
$|| y(t) - y^{[8]}(t) ||_\text{sup} \approx 0.0115$.
\end{example}
\section{Extended Theorem and Algorithm}\label{s:extended}
Theorem \ref{thm.1}, hence the theoretical scope of the algorithm presented in Section \ref{s:alg}, can be extended
by partitioning the interval $[a, b]$ into $n$ subintervals and simultaneously and recursively approximating the
solutions to the $n$ boundary value problems that are induced on the subintervals by \eqref{basic.bvp} and its
solution.
If $\eta(t)$ is a solution of the original boundary value problem \eqref{basic.bvp} and the interval $[a, b]$ is
subdivided into $n$ subintervals of equal length $h$ by means of a partition
\[
a = t_0 < t_1 < t_2 < \cdots < t_n = b
\]
then setting $\beta_j = \eta(t_j)$, $j = 1, \ldots, n-1$, we see that $n$ boundary value problems are induced:
\[
\begin{aligned}
y'' &= f(t, y, y') \\
y(t_0) &= \alpha, \mspace{5mu} y(t_1) = \beta_1
\end{aligned}
\quad
\begin{aligned}
y'' &= f(t, y, y') \\
y(t_1) &= \beta_1, \mspace{5mu} y(t_2) = \beta_2
\end{aligned}
\quad
\dots
\quad
\begin{aligned}
y'' &= f(t, y, y') \\
y(t_{n-1}) &= \beta_{n-1}, \mspace{5mu} y(t_n) = \beta.
\end{aligned}
\]
Setting $\gamma_j = \eta'(t_{j-1})$, $j = 1, \dots, n$, or computing them by means of an appropriate implementation
of \eqref{e:gamma}, the solutions of these boundary value problems also solve the respective initial value problems
\[
\begin{aligned}
y'' = f&(t, y, y') \\
y(t_0) &= \alpha, \\
y'(t_0) &= \gamma_1
\end{aligned}
\mspace{40mu}
\begin{aligned}
y'' = f&(t, y, y') \\
y(t_1) &= \beta_1, \\
y'(t_1) &= \gamma_2
\end{aligned}
\mspace{40mu}
\dots
\mspace{40mu}
\begin{aligned}
y'' = f&(t, y, y') \\
y(t_{n-1}) &= \beta_{n-1}, \\
y'(t_{n-1}) &= \gamma_n.
\end{aligned}
\]
We denote the solutions to these problems by $y_j(t)$ with derivatives $u_j(t) := y_j'(t)$, $j = 1, 2, 3, \dots, n$.
To make the presentation cleaner and easier to read we will use the following shorthand notation (for relevant choices
of $j$), where the superscript $[k]$ will pertain to the $k$th iterate in the recursion to be described:
\begin{equation}\label{e:short.notation}
\begin{gathered}
f_j(s) = f(s, y_j(s), u_j(s))
\qquad
f_j^{[k]}(s) = f(s, y_j^{[k]}(s), u_j^{[k]}(s)) \\
\phantom{blank line} \\
{\bf y}_j(s) = \begin{pmatrix} y_j(s) \\ u_j(s) \end{pmatrix}
\qquad
{\bf y}_j^{[k]}(s) = \begin{pmatrix} y_j^{[k]}(s) \\ u_j^{[k]}(s) \end{pmatrix} \\
\phantom{blank line} \\
I_j = \int_{t_{j-1}}^{t_j} f_j(s) \, ds
\qquad
I_j^{[k]} = \int_{t_{j-1}}^{t_j} f_j^{[k]}(s) \, ds \\
\phantom{blank line} \\
J_j = \int_{t_{j-1}}^{t_j} (t_j - s) f_j(s) \, ds
\qquad
J_j^{[k]} = \int_{t_{j-1}}^{t_j} (t_j - s) f_j^{[k]}(s) \, ds.
\end{gathered}
\end{equation}
The idea for generating a sequence of successive approximations of $\eta(t)$ is to update the estimates of the
functions ${\bf y}_j(t)$ using
\begin{equation}\label{e:ur.update.fns}
{\bf y}_j(t) = \begin{pmatrix} \beta_{j-1} \\ \gamma_j \end{pmatrix}
+ \int_{t_{j-1}}^t \begin{pmatrix} u_j(s) \\ f_j(s) \end{pmatrix} \, ds
\end{equation}
and then update $\gamma_j$ and $\beta_j$ using
\begin{equation}\label{e:ur.updates}
\gamma_j = \gamma_{j-1} + I_{j-1}
\qquad
\text{and}
\qquad
\beta_{j-1} = \beta_j - h \, \gamma_j - J_j
\end{equation}
(starting with $j = 1$ and working our way up to $j = n$ for the $\gamma_j$ and in the reverse order with the
$\beta_j$, with the convention that $\beta_n = \beta$), except that on the first step we update $\gamma_1$ using
instead
\begin{equation}\label{e:ur.update.gamma1}
\gamma_1 = \tfrac1h [ \beta_1 - \alpha - J_1 ]
\end{equation}
and there is no $\beta_0$. Note that the updates on the $\beta_j$ come from ``the right,'' i.e., values of
$\beta_r$ with $r > j$, hence ultimately tying into $\beta$ at each pass through the recursion, while the updates
on the $\gamma_j$ come from ``the left,'' i.e., values of $\gamma_r$ with $r < j$, hence ultimately tying into
$\alpha$ at each pass through the recursion.
In fact we will not be able to make the estimates that we need to show convergence if we update $\beta_1$ on the
basis given above. To obtain a useful formula on which to base the successive approximations of $\beta_1$, we
begin by using the second formula in \eqref{e:ur.updates} $n-1$ times:
\begin{align*}
\beta_1 &= \beta_2 - (h \, \gamma_2 + J_2) \\
&= \beta_3 - (h \, \gamma_3 + J_3) - (h \, \gamma_2 + J_2) \\
&= \beta_4 - (h \, \gamma_4 + J_4) - (h \, \gamma_3 + J_3) - (h \, \gamma_2 + J_2) \\
&\mspace{20mu}\vdots \\
&= \beta - (h \, \gamma_n + J_n) - \cdots - (h \, \gamma_2 + J_2) \\
&= \beta - h(\gamma_2 + \cdots + \gamma_n) - (J_2 + \cdots + J_n).
\end{align*}
But by repeated application of the first equation in \eqref{e:ur.updates} and use of
\eqref{e:ur.update.gamma1} on the last step
\begin{align*}
&\mspace{20mu}\gamma_2 + \cdots + \phantom{3} \gamma_{n-2} + \phantom{2} \gamma_{n-1} + \gamma_n \\
&= \gamma_2 + \cdots + \phantom{3} \gamma_{n-2} + 2 \gamma_{n-1} + I_{n-1} \\
&= \gamma_2 + \cdots + 3 \gamma_{n-2} + 2 I_{n-2} + I_{n-1} \\
&\mspace{20mu}\vdots \\
&=(n-1)\gamma_2 + (n-2) I_2 + \cdots + 3 I_{n-3} + 2 I_{n-2} + I_{n-1} \\
&=(n-1) \gamma_1 + (n-1) I_1 + (n-2) I_2 + \cdots + 3 I_{n-3} + 2 I_{n-2} + I_{n-1} \\
&= \frac{n-1}{h} [ \beta_1 - \alpha - J_1 ]
+ (n-1) I_1 + (n-2) I_2 + \cdots + 3 I_{n-3} + 2 I_{n-2} + I_{n-1}.
\end{align*}
Inserting this expression into the previous display and solving the resulting equation for $\beta_1$ yields the
formula
\begin{equation}\label{e:ur.update.beta1}
\beta_1 = \frac1n
\left[
\beta + (n-1) \alpha + (n-1) J_1 - \sum_{r=2}^n J_r - h \sum_{r=1}^{n-1} (n-r) I_r
\right].
\end{equation}
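Because this derivation involves several telescoping substitutions, a quick throwaway check (ours, not from the paper) is worthwhile: fixing arbitrary values for $\alpha$, $\beta$, $h$, the $I_r$, and the $J_r$, compute $\beta_1$ from the closed formula, propagate the recursions \eqref{e:ur.update.gamma1} and \eqref{e:ur.updates} forward, and confirm that the chain lands exactly on $\beta_n = \beta$:

```python
import random

# Consistency check of the closed formula for beta_1 against the
# recursions it was derived from, with arbitrary stand-in data.
random.seed(1)
n, h = 5, 0.3
alpha, beta = 0.7, -1.2
I = [random.uniform(-1, 1) for _ in range(n)]   # stand-ins for I_1, ..., I_n
J = [random.uniform(-1, 1) for _ in range(n)]   # stand-ins for J_1, ..., J_n

# closed formula for beta_1 (1-based indices shifted to 0-based lists)
beta1 = (beta + (n - 1) * alpha + (n - 1) * J[0]
         - sum(J[1:]) - h * sum((n - r) * I[r - 1] for r in range(1, n))) / n

# forward propagation: gamma_1 from beta_1, then gamma_j, then beta_j
gamma = [(beta1 - alpha - J[0]) / h]            # gamma_1
for j in range(1, n):
    gamma.append(gamma[-1] + I[j - 1])          # gamma_j = gamma_{j-1} + I_{j-1}

b = beta1
for j in range(2, n + 1):                       # beta_j = beta_{j-1} + h gamma_j + J_j
    b = b + h * gamma[j - 1] + J[j - 1]

print(abs(b - beta))                            # ~ 0 up to rounding
```

The residual is zero to machine precision for any choice of the data, confirming that the closed formula is an algebraic identity, not an approximation.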
Once an initialization has been chosen, an iteration procedure based on \eqref{e:ur.update.fns},
\eqref{e:ur.updates}, \eqref{e:ur.update.gamma1}, and \eqref{e:ur.update.beta1} is as follows, with the convention
$\beta_0 = \alpha$ and $\beta_n = \beta$, the shorthand notation introduced above, and the equations evaluated in
the order listed:
\begin{subequations}\label{e:update.n}
\begin{gather}
{\bf y}_j^{[k+1]}(t) = \begin{pmatrix} \beta_{j-1}^{[k]} \\ \gamma_j^{[k]} \end{pmatrix}
+ \int_{t_{j-1}}^t \begin{pmatrix} u_j^{[k]}(s) \\ f_j^{[k]}(s) \end{pmatrix} \, ds
\qquad j = 1, \cdots, n
\label{e:update.n.fns} \\
\phantom{blank line} \notag \\
\beta_1^{[k+1]} = \frac1n
\left[
\beta + (n-1) \alpha + (n-1) J_1^{[k+1]}
- \sum_{r=2}^n J_r^{[k+1]} - h \sum_{r=1}^{n-1} (n-r) I_r^{[k+1]}
\right]
\label{e:update.n.beta1} \\
\phantom{blank line} \notag \\
\gamma_1^{[k+1]} = \tfrac1h [ \beta_1^{[k+1]} - \alpha -J_1^{[k+1]} ]
\label{e:update.n.gamma1} \\
\phantom{blank line} \notag \\
\gamma_j^{[k+1]} = \gamma_{j-1}^{[k+1]} + I_{j-1}^{[k+1]}
\qquad j = 2, \cdots, n
\label{e:update.n.gamma.j}
\\
\phantom{blank line} \notag \\
\beta_{j-1}^{[k+1]} = \beta_j^{[k+1]} - h \, \gamma_j^{[k+1]} - J_j^{[k+1]}
\qquad
j = n, n-1, \ldots, 4, 3
\label{e:update.n.beta.j}
\end{gather}
\end{subequations}
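As a sanity check (ours, not from the paper), the updates \eqref{e:update.n} can be run numerically for $n = 2$ on the test problem $y'' = 0.1\,y$, $y(0) = 0$, $y(1) = 1$, whose exact solution is $\sinh(\sqrt{0.1}\,t)/\sinh\sqrt{0.1}$; the exact quadratures of the symbolic method are replaced here by trapezoid sums on a grid, so this is only a discretised stand-in:

```python
import math

def f(t, y, u):
    return 0.1 * y          # test right-hand side, Lipschitz constant L = 0.1

a, b, alpha, beta = 0.0, 1.0, 0.0, 1.0
n, M = 2, 100               # n subintervals, M grid steps per subinterval
h = (b - a) / n
dt = h / M
grids = [[a + j * h + h * i / M for i in range(M + 1)] for j in range(n)]

def cumtrap(vals):
    """Cumulative trapezoid integral from the left endpoint of a subinterval."""
    out, acc = [0.0], 0.0
    for i in range(1, len(vals)):
        acc += 0.5 * dt * (vals[i - 1] + vals[i])
        out.append(acc)
    return out

y = [[0.0] * (M + 1) for _ in range(n)]     # initial guesses: everything zero
u = [[0.0] * (M + 1) for _ in range(n)]
gam = [0.0] * n                             # gamma_1, gamma_2
bet = [alpha, 0.0, beta]                    # beta_0 = alpha, beta_1 guess, beta_2 = beta

for _ in range(200):
    # (a) update y_j, u_j from the old constants and old functions
    fvals = [[f(s, y[j][i], u[j][i]) for i, s in enumerate(grids[j])] for j in range(n)]
    y = [[bet[j] + c for c in cumtrap(u[j])] for j in range(n)]
    u = [[gam[j] + c for c in cumtrap(fvals[j])] for j in range(n)]
    # (b) I_j and J_j from the updated functions
    fvals = [[f(s, y[j][i], u[j][i]) for i, s in enumerate(grids[j])] for j in range(n)]
    I = [cumtrap(fvals[j])[-1] for j in range(n)]
    J = [cumtrap([(grids[j][-1] - s) * fvals[j][i] for i, s in enumerate(grids[j])])[-1]
         for j in range(n)]
    # (c) constants, in the order of the scheme (no beta loop when n = 2)
    bet[1] = (beta + (n - 1) * alpha + (n - 1) * J[0]
              - sum(J[1:]) - h * sum((n - r) * I[r - 1] for r in range(1, n))) / n
    gam[0] = (bet[1] - alpha - J[0]) / h
    for j in range(1, n):
        gam[j] = gam[j - 1] + I[j - 1]

exact_mid = math.sinh(math.sqrt(0.1) * 0.5) / math.sinh(math.sqrt(0.1))
print(bet[1], exact_mid)    # computed beta_1 approximates y(0.5)
```

With $L = 0.1$, $n = 2$, and $b - a = 1$ the hypothesis of the theorem below evaluates to $0.9 < 1$, so the iteration is expected to converge; the computed $\beta_1$ and $\gamma_1$ agree with $y(0.5) \approx 0.49381$ and $y'(0) \approx 0.98353$ to well within $10^{-3}$.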
\begin{theorem}\label{r:Thm.Mul}
Suppose the function $f(t, y, u)$ from $[a, b] \times \mathbb{R}^2$ into $\mathbb{R}$ is continuous and Lipschitz in ${\bf y} = (y,u)$
with Lipschitz constant $L$. If there exists an integer $n \geqslant 1$ such that for the subdivision of $[a, b]$
into $n$ subintervals of equal length $h$ by the partition
\[
a = t_0 < t_1 < \cdots < t_{n-1} < t_n = b
\]
the inequality
\[
\frac1{2n} [ (n^3 + n^2 + n + 2) L + 2 ] (b - a) < 1
\]
holds if $h = (b-a)/n \leqslant 1$ or the inequality
\[
\frac1{2n^2} [ (n^3 + n^2 + n + 2) L + 2 ] (b - a)^2 < 1
\]
holds if $h = (b-a)/n \geqslant 1$, then there exists a solution of the two-point boundary value problem
\eqref{basic.bvp}. Moreover, in the language of the notation introduced in the first paragraph of this section and
display \eqref{e:short.notation}, for any initial choice of the functions ${\bf y}_j(t) = (y_j(t), u_j(t))$,
$1 \leqslant j \leqslant n$, the constants $\gamma_j$, $1 \leqslant j \leqslant n$, and the constants $\beta_j$,
$1 \leqslant j \leqslant n-1$, the sequence of successive approximations defined by \eqref{e:update.n} converges to
such a solution.
\end{theorem}
The following three lemmas will be needed in the proof. The straightforward proofs are omitted.
\begin{lemma}\label{r:cauchy.cond}
Let $x^{[k]}$ be a sequence in a normed vector space $(V, | \cdot |)$. If there exist a number $c < 1$ and an index
$N \in \mathbb{N}$ such that
\[
| x^{[k+1]} - x^{[k]} | \leqslant c | x^{[k]} - x^{[k-1]} |
\quad
\text{for all} \quad k \geqslant N
\]
then the sequence $x^{[k]}$ is a Cauchy sequence.
\end{lemma}
\begin{lemma}\label{r:fyjk.estimate}
Suppose the interval $[a, b]$ has been partitioned into $n$ subintervals of equal length $h = (b - a) / n$ by
partition points $a = t_0 < t_1 < \cdots < t_{n-1} < t_n = b$. With the notation
\[
{\bf y}_j(t) = \begin{pmatrix} y_j(t) \\ u_j(t) \end{pmatrix}
\quad
\text{and}
\quad
f_j^{[r]}(s) = f(s, y_j^{[r]}(s), u_j^{[r]}(s)),
\quad
r \in \mathbb{Z}^+ \cup \{ 0 \},
\quad
j = 1, \dots, n,
\]
the following estimates hold:
\begin{equation}\label{int.est.simple}
\int_{t_{j-1}}^{t_j} | f_j^{[k+1]}(s) - f_j^{[k]}(s) | \, ds
\leqslant
L \, h || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}
\end{equation}
and
\begin{equation}\label{int.est.complex}
\int_{t_{j-1}}^{t_j} (t_j - s) | f_j^{[k+1]}(s) - f_j^{[k]}(s) | \, ds
\leqslant
\tfrac 12 \, L \, h^2 || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}.
\end{equation}
\end{lemma}
\begin{lemma}\label{r:c2.lemma}
Suppose $\eta : (-\epsilon, \epsilon) \to \mathbb{R}$ is continuous and that $\eta'$ exists on
$(-\epsilon, 0) \cup (0, \epsilon)$. Suppose $g : (-\epsilon, \epsilon) \to \mathbb{R}$ is continuous and that $g = \eta'$
on $(-\epsilon, 0) \cup (0, \epsilon)$. Then $\eta'$ exists and is continuous on $(-\epsilon, \epsilon)$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{r:Thm.Mul}]
We will show that the sequence
\[
{\bf y}^{[k]}(t) = ({\bf y}_1^{[k]}(t), \dots, {\bf y}_n^{[k]}(t))
\in
C([t_0, t_1], \mathbb{R}^2) \times C([t_1, t_2], \mathbb{R}^2) \times \cdots \times C([t_{n-1}, t_n], \mathbb{R}^2)
\]
is a Cauchy sequence by means of Lemma \ref{r:cauchy.cond}, where we place the supremum norm on each function space, with
respect to absolute value on $\mathbb{R}$ and the sum norm on $\mathbb{R}^2$, and the maximum norm on their product. Granting
this for the moment, the successive approximations converge to functions on the individual subintervals which, when
concatenated, form a function $y(t)$ that is $C^1$ on $[a, b]$, $C^2$ on $[a, b] \setminus \{ t_1, \dots, t_{n-1} \}$,
solves the differential equation in \eqref{basic.bvp} on the latter set, and satisfies the two boundary conditions in
\eqref{basic.bvp}. An application of Lemma \ref{r:c2.lemma} at each of the $n - 1$ partition points then implies the
existence of the second derivative at the partition points, so that $y(t)$ solves the boundary value problem \eqref{basic.bvp}.
To begin the proof that ${\bf y}^{[k]}(t)$ is a Cauchy sequence, by Lemma \ref{r:fyjk.estimate}
\begin{equation}\label{int.est.simple.gen}
\begin{aligned}
| I_r^{[k+1]} - I_r^{[k]} |
&=
\left|
\int_{t_{r-1}}^{t_r} f_r^{[k+1]}(s) \, ds - \int_{t_{r-1}}^{t_r} f_r^{[k]}(s) \, ds
\right|
\\
&\leqslant
\int_{t_{r-1}}^{t_r} | f_r^{[k+1]}(s) - f_r^{[k]}(s) | \, ds
\\
&\leqslant
L \, h || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}
\end{aligned}
\end{equation}
and
\begin{equation}\label{int.est.complex.gen}
\begin{aligned}
| J_r^{[k+1]} - J_r^{[k]} |
&=
\left|
\int_{t_{r-1}}^{t_r} (t_r - s) f_r^{[k+1]}(s) \, ds - \int_{t_{r-1}}^{t_r} (t_r - s) f_r^{[k]}(s) \, ds
\right|
\\
&\leqslant
\int_{t_{r-1}}^{t_r} (t_r - s) | f_r^{[k+1]}(s) - f_r^{[k]}(s) | \, ds
\\
&\leqslant
\tfrac12 \, L \, h^2 || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}.
\end{aligned}
\end{equation}
Then from \eqref{e:update.n.beta1} we have
\begin{align*}
| \beta_1^{[k+1]} - \beta_1^{[k]} |
&\leqslant
\frac{n-1}{n}
\left| J_1^{[k+1]} - J_1^{[k]} \right|
+ \sum_{r=2}^n \left| J_r^{[k+1]} - J_r^{[k]} \right| \\
&\mspace{300mu}+ h \sum_{r=1}^{n-1} (n-r) \left| I_r^{[k+1]} - I_r^{[k]} \right|
\\
&\leqslant
\left[
\left(\frac{n-1}{n}\right) \frac12 h^2
+ \sum_{r=2}^n \frac12 h^2
+ h \sum_{r=1}^{n-1}(n-r) h
\right]
L || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}
\\
&= \widehat B_1 h^2 L || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max},
\end{align*}
where $\widehat B_1 = \frac{n-1}2 \left[ \frac1n + 1 + n \right]$.
From \eqref{e:update.n.gamma1}
\begin{align*}
| \gamma_1^{[k+1]} - \gamma_1^{[k]} |
&\leqslant
\frac1h | \beta_1^{[k+1]} - \beta_1^{[k]} | + \frac1h | J_1^{[k+1]} - J_1^{[k]} | \\
&\leqslant
[\widehat B_1 h L + \frac12 h L ] || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max} \\
&=
\widehat \Gamma_1 h L || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}
\end{align*}
and from repeated application of \eqref{e:update.n.gamma.j}, starting from $j =2$ up through
$j = n$,
\begin{align*}
| \gamma_j^{[k+1]} - \gamma_j^{[k]} |
&\leqslant
| \gamma_{j-1}^{[k+1]} - \gamma_{j-1}^{[k]} | + | I_{j-1}^{[k+1]} - I_{j-1}^{[k]} | \\
&\leqslant
[\widehat \Gamma_{j-1} h L + h L ] || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max} \\
&=
\widehat \Gamma_j h L || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}
\end{align*}
with $\widehat \Gamma_j = \widehat B_1 + \frac{2j-1}{2}$, which is in fact valid for $1 \leqslant j \leqslant n$.
From repeated application of \eqref{e:update.n.beta.j}, starting from $j = n-1$ (with the
convention that $\beta_n = \beta$) and down through $j = 2$,
\begin{align*}
| \beta_j^{[k+1]} - \beta_j^{[k]} |
&\leqslant
| \beta_{j+1}^{[k+1]} - \beta_{j+1}^{[k]} |
+ h | \gamma_{j+1}^{[k+1]} - \gamma_{j+1}^{[k]} |
+ | J_{j+1}^{[k+1]} - J_{j+1}^{[k]} | \\
&\leqslant
[ \widehat B_{j+1} h^2 L + \widehat \Gamma_{j+1} h^2 L + \frac12 h^2 L ]
|| {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max} \\
&=
\widehat B_j h^2 L || {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max},
\end{align*}
and $\widehat B_{n-r} = r \widehat B_1 + r \, n - \frac{r (r-1)}{2}$, hence
$\widehat B_j = (n - j) \widehat B_1 + n (n - j) - \frac{(n - j)(n - j - 1)}{2}$, $2 \leqslant j \leqslant n - 1$.
Using these estimates we find that, setting $h^* = \max \{ h, h^2 \}$, for any $j \in \{ 2, \dots, n \}$, for any
$t \in [t_{j-1}, t_j]$,
\begin{align*}
| {\bf y}_j^{[k+1]}&(t) - {\bf y}_j^{[k]}(t) |_\text{sum} \\
&\leqslant
\left|
\begin{pmatrix}
\beta_{j-1}^{[k]} - \beta_{j-1}^{[k-1]} \\
\gamma_j^{[k]} - \gamma_j^{[k-1]}
\end{pmatrix}
\right|_\text{sum}
+
\int_{t_{j-1}}^{t_j}
\left|
\begin{pmatrix} u_j^{[k]}(s) - u_j^{[k-1]}(s) \\
f_j^{[k]}(s) - f_j^{[k-1]}(s)
\end{pmatrix}
\right|_\text{sum} \, ds \\
&\leqslant
| \beta_{j-1}^{[k]} - \beta_{j-1}^{[k-1]} | + | \gamma_j^{[k]} - \gamma_j^{[k-1]} |
+
(1 + L) \int_{t_{j-1}}^{t_j} | {\bf y}_j^{[k]}(s) - {\bf y}_j^{[k-1]}(s) |_\text{sum} \, ds \\
&\leqslant
[ \widehat B_{j-1} h^2 L + \widehat \Gamma_j h L ] \, || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max}
+
(1 + L) || {\bf y}_j^{[k]} - {\bf y}_j^{[k-1]} ||_\textrm{sup} \, h \\
&\leqslant
[ \widehat B_{j-1} L + \widehat \Gamma_j L + (1 + L) ] h^* || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max} \\
&=
[ ( \widehat B_{j-1} + \widehat \Gamma_j + 1 ) L + 1 ] h^* || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max} \\
&= c_j h^* || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max}
\end{align*}
where, using the expressions above for $\widehat B_1$ and $\widehat \Gamma_1$,
\[
c_j = \tfrac1{2n} [ n^4 + (3 - j) n^3 + n^2 + (3 j - j^2)n + (j - 2)] L + 1 \qquad (2 \leqslant j \leqslant n).
\]
Similarly, for all $t \in [t_0, t_1]$,
$| {\bf y}_1^{[k+1]}(t) - {\bf y}_1^{[k]}(t) |_\text{sum} \leqslant c_1 h || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max}$
for
\[
c_1 = \left[ \frac{n^3 + 3n - 1}{2n} \right] L + 1.
\]
Then for all $j \in \{ 1, \dots, n \}$,
\[
|| {\bf y}_j^{[k]} - {\bf y}_j^{[k-1]} ||_\textrm{sup} \leqslant c_j h^* || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max}
\]
and
\begin{equation}\label{e:yb.estimate}
|| {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max}
\leqslant
\max \{ c_1, \ldots, c_n \} h^* || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max}.
\end{equation}
For fixed $n$, for $j \geqslant 2$, $c_j$ is a quadratic function of $j$ with maximum at $j =\frac{-n^3 + 3 n + 1}{2n}$,
which is negative for $n \geqslant 2$, so $c_2 > c_j$ for $j \geqslant 3$. Direct comparison shows that $c_2 > c_1$ for
all choices of $n$ as well. Thus estimate \eqref{e:yb.estimate} is
\[
|| {\bf y}^{[k+1]} - {\bf y}^{[k]} ||_\textrm{max} \leqslant c_2 h^* || {\bf y}^{[k]} - {\bf y}^{[k-1]} ||_\textrm{max}
\]
and by Lemma \ref{r:cauchy.cond} the sequence ${\bf y}^{[k]}(t)$ is a Cauchy sequence provided
$c_2 h^* < 1$, which, when $h = (b-a)/n \leqslant 1$ is the condition
\[
\frac1{2n} [ (n^3 + n^2 + n + 2) L + 2 ] (b - a) < 1
\]
and when $h = (b-a)/n \geqslant 1$ is the condition
\[
\frac1{2n^2} [ (n^3 + n^2 + n + 2) L + 2 ] (b - a)^2 < 1.
\]
\end{proof}
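The claim that $c_2$ dominates all the other $c_j$ can be double-checked numerically (our sketch; the choice $L = 1$ is arbitrary, and since each $c_j$ has the form $A_j L + 1$ the ordering is independent of $L > 0$):

```python
# Verify that c_2 = max{c_1, ..., c_n} for a range of n, using the
# closed-form expressions for c_1 and c_j (2 <= j <= n) from the proof.
L = 1.0
worst = {}
for n in range(2, 30):
    c = {1: (n**3 + 3 * n - 1) / (2.0 * n) * L + 1}
    for j in range(2, n + 1):
        c[j] = ((n**4 + (3 - j) * n**3 + n**2 + (3 * j - j * j) * n + (j - 2))
                / (2.0 * n) * L + 1)
    worst[n] = max(c, key=c.get)      # index j at which c_j is largest
print(set(worst.values()))            # {2}: the maximum is always at j = 2
```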
To see that Theorem \ref{r:Thm.Mul} can provide an actual improvement over Theorem \ref{thm.1}, suppose the
interval $[a, b]$ is fixed and we wish to know how large $L$ can be and still be assured that a solution to \eqref{basic.bvp} exists.
For any $n > b - a$ the corresponding maximum value of $L$ allowed by Theorem \ref{r:Thm.Mul} is greater than that allowed by
Theorem \ref{thm.1} if
\[
\tfrac23
\left[
(b - a)^{-1} - 1
\right]
<
\frac{1}{n^3 + n^2 + n + 2}
\left[
2n (b - a)^{-1} - 2
\right],
\]
which holds if and only if
\begin{equation}\label{e:L.est}
\frac{n^3 + n^2 - 2n+ 2}{n^3 + n^2 + n - 1} < b - a.
\end{equation}
For $n > 1$ the left hand side of \eqref{e:L.est} is less than 1 and increases with increasing $n$. Thus for example
if $b - a = 1$ then Theorem \ref{thm.1} guarantees that a solution to \eqref{basic.bvp} will exist if $L < 0$, so no
conclusion can be made, whereas for all $n \geqslant 2$ Theorem \ref{r:Thm.Mul} implies existence of a unique solution
if
\[
L < \frac{2n - 2}{n^3 + n^2 + n + 2}.
\]
The best result is for $n = 2$ and is $L < \frac18$.
Similarly, if a Lipschitz constant $L$ is known to exist for all values of $t$, $y$, and $y'$, then
Theorem \ref{r:Thm.Mul} can sometimes provide a guarantee of existence of a solution to a boundary value problem of the
form \eqref{basic.bvp} on a longer interval than that provided by Theorem \ref{thm.1}.
\section{Conclusion}\label{s:concl}
We have introduced a purely symbolic technique for approximating solutions of two-point boundary value
problems whose output is a sequence of polynomials that converges to the true solution exponentially fast
with respect to the supremum norm. We provided conditions under which the method is guaranteed to work, and
illustrated by example that its practical usefulness exceeds what the theorems provide. By introducing auxiliary
variables we overcome problems with quadratures that cannot be performed in closed form. The algorithm is
easy to code in popular computer algebra systems such as Maple and Mathematica. Experience has shown that it
compares favorably in efficiency with shooting and finite difference approximation techniques.
| {
    "timestamp": "2016-10-25T02:07:02",
    "yymm": "1610",
    "arxiv_id": "1610.07232",
    "language": "en",
    "url": "https://arxiv.org/abs/1610.07232",
    "abstract": "In this work we give an efficient method involving symbolic manipulation, Picard iteration, and auxiliary variables for approximating solutions of two-point boundary value problems.",
    "subjects": "Classical Analysis and ODEs (math.CA)",
    "title": "Symbolic Iterative Solution of Two-Point Boundary Value Problems"
} |
https://arxiv.org/abs/1710.09301 | The Loewner Equation for Multiple Hulls | Kager, Nienhuis, and Kadanoff conjectured that the hull generated from the Loewner equation driven by two constant functions with constant weights could be generated by a single rapidly and randomly oscillating function. We prove their conjecture and generalize to multiple continuous driving functions. In the process, we generalize to multiple hulls a result of Roth and Schleissinger that says multiple slits can be generated by constant weight functions. The proof gives a simulation method for hulls generated by the multiple Loewner equation. |
\section{Introduction}
The Loewner equation is the initial value problem
\begin{equation}\label{eqn:LEintro}
\frac{\partial}{\partial t} g_t(z)
=\frac{2}{g_t(z)-\lambda(t)},
\quad g_0(z)=z.
\end{equation}
where $\lambda:[0,T]\to\mathbb{R}$ is called the driving function. For $z\in\mathbb{H}$, a solution exists up to a maximum time, call it $T_z$. The collection of points
\begin{equation}
K_t=\left\{z\in\mathbb{H}:T_z\leq t\right\}
\end{equation}
is called a hull. A fundamental fact is that there is a one-to-one correspondence between hulls and driving functions. The map $g_t$ in (\ref{eqn:LEintro}) is a conformal map from $\mathbb{H}\setminus K_t$ to $\mathbb{H}$ (see Section \ref{loewnerequation} for more details).
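To make the swallowing time $T_z$ concrete, here is a minimal numerical sketch (ours; the step size and the cutoff $\epsilon$ are ad hoc) that integrates \eqref{eqn:LEintro} by forward Euler until $g_t(z)$ approaches the driving function. For $\lambda \equiv 0$ one has $g_t(z)=\sqrt{z^2+4t}$, so a point $iy$ on the imaginary axis is swallowed at $T_{iy} = y^2/4$:

```python
def swallow_time(z, lam, T=1.0, dt=1e-4, eps=0.05):
    """Approximate T_z for the Loewner equation with driving function lam."""
    g, t = z, 0.0
    while t < T:
        if abs(g - lam(t)) < eps:
            return t                  # z has been absorbed into the hull K_t
        g += dt * 2.0 / (g - lam(t))  # forward Euler step of the Loewner ODE
        t += dt
    return None                       # z survives up to time T

print(swallow_time(1j, lambda t: 0.0))   # about 0.25, since T_i = 1/4
print(swallow_time(3j, lambda t: 0.0))   # None, since T_{3i} = 9/4 > 1
```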
The Loewner equation was discovered in 1923 by Charles Loewner in pursuit of proving the Bieberbach conjecture, and it reemerged in 2000, when Oded Schramm discovered its relationship to the scaling limit of loop-erased random walks. This discovery led to the construction of the Schramm-Loewner Evolution (SLE$_\kappa$), which has been vigorously studied ever since.
In this paper, our main focus is the multiple Loewner equation
\begin{equation}\label{eqn:multiLEr}
\frac{\partial}{\partial t} g_t(z)=\sum_{k=1}^{n}\frac{2w_k(t)}{g_t(z)-\lambda_k(t)}\text{ a.e. }t\in [0,T],
\quad g_0(z)=z
\end{equation}
where $\lambda_1,...,\lambda_n:[0,T]\to\mathbb{R}$ are continuous and $w_1,...,w_n\in L^1[0,T]$ are weight functions. In \cite{KNK04}, it was conjectured that the multiple Loewner equation driven by $\lambda_1=-1$ and $\lambda_2=1$ with constant weights equal to $\frac{1}{2}$ could be realized by a single rapidly and randomly oscillating function driven by the Loewner equation (\ref{eqn:LEintro}). We prove this conjecture with the following more general result.
\begin{prop}\label{prop:nonconstantweights}
Let $K=\bigcup_{i=1}^{n}K_i$, where $K_1,...,K_n$ are disjoint hulls driven by continuous driving functions in the chordal sense. Then $K$ is the limit of hulls generated by a sequence of randomly and rapidly oscillating functions.
\end{prop}
\noindent This proposition inspires a simulation method for hulls from the multiple Loewner equation driven with constant weights. The idea is to use a single driving function that randomly and rapidly oscillates between the multiple driving functions, which generalizes the conjecture in \cite{KNK04}. We simulate the hull investigated in \cite{KNK04} and compare it to the actual hull in Section \ref{simulations}.
The proof of Proposition \ref{prop:nonconstantweights} follows from a generalization of Theorem 1.1 in \cite{RS14}, which says that multiple slits can be generated through the multiple Loewner equation by continuous driving functions and constant weights. We generalize this to multiple hulls, as follows:
\begin{thm}\label{thm:3.6.2}
Let $K^1,...,K^n$ be disjoint Loewner hulls. Let $\text{hcap}(K^1\cup\cdots\cup K^n)=2T$. Then there exist constants $w_1,...,w_n\in(0,1)$ with $\sum_{k=1}^{n}w_k=1$ and continuous driving functions $\lambda_1,...,\lambda_n:[0,T]\to\mathbb{R}$ so that
\begin{equation}
\frac{\partial}{\partial t} g_t(z)=\sum_{k=1}^{n}\frac{2w_k}{g_t(z)-\lambda_k(t)},\quad g_0(z)=z
\end{equation}
satisfies $g_T=g_{K^1\cup\cdots\cup K^n}$.
\end{thm}
One significant difference between Theorem 1.1 in \cite{RS14} and this result is the lack of uniqueness. This is due to the fact that we do not know the growth over time of the hulls in Theorem \ref{thm:3.6.2}; we only know what the hull looks like at a particular time. This ambiguity allows the possibility that a hull can be driven by different driving functions, whereas any slit has a unique driving function. For example, if the hull is a semi-circle of radius 1 centered at 0, then two ways to generate this hull are by travelling the boundary clockwise or counterclockwise. This corresponds to scaling the driving function by $-1$. However, if we have $K_t^j$ for each time and each $j\in\{1,...,n\}$, then using the same proof of uniqueness for slits from \cite{RS14}, we would have uniqueness in the multiple hull setting as well.
This paper is structured as follows: Section \ref{convofrapidrandomhulls} introduces enough about the Loewner equation to prove Proposition \ref{prop:nonconstantweights} from Theorem \ref{thm:3.6.2}. Section \ref{simulations} discusses simulation of the multiple Loewner equation. Section \ref{background} rigorously covers the background information about the Loewner equation, hulls, and a generalization of the tip of a curve, which is needed to prove Theorem \ref{thm:3.6.2}. Finally, Section \ref{precompactness} gives the proof of Theorem \ref{thm:3.6.2}. Sections \ref{background} and \ref{precompactness} can be read without reading Sections \ref{convofrapidrandomhulls} and \ref{simulations}. As in \cite{RS14}, we will only show results for $n=2$ and the general result follows from mathematical induction.\par\vspace{12pt}
\noindent\textbf{Acknowledgement:} I would like to thank Joan Lind for all of her help and support with this paper.
\section{Convergence of Hulls Using Rapid and Random Oscillation}\label{convofrapidrandomhulls}
\subsection{Brief Introduction to Loewner Equation}
Our goal is to discuss convergence of a rapidly and randomly oscillating driving function, but first we need to specify which notion of convergence we will use. We say that $g_t^n$ converges to $g_t$ in the Carath\'eodory sense, denoted $g_t^n\xrightarrow{Cara} g_t$, if for each $\epsilon>0$, $g_t^n$ converges to $g_t$ uniformly on the set
\begin{equation}
[0,T]\times\{z\in\mathbb{H}:\text{dist}(z,K_t)\geq\epsilon\}.
\end{equation}
This form of convergence allows for convergence of functions when their domains are changing.
\subsection{Introduction to Conjecture}\label{introtoconjecture}
In Section 6 of \cite{KNK04}, Kager, Nienhuis, and Kadanoff investigate the multiple Loewner equation generated from constant driving functions, $\lambda_1\equiv -1$ and $\lambda_2\equiv 1$, and constant weights, $w_1=w_2=\frac{1}{2}$. They show that the hull is given by
\begin{equation}\label{eqn:knkhull}
K_t=\left\{\sqrt{\frac{2\theta_t}{\sin(2\theta_t)}}(\pm\cos\theta_t+i\sin\theta_t)\right\}
\end{equation}
where $\theta_t$ increases from 0 to $\frac{\pi}{2}$ as $t$ increases. They make the conjecture that the same hull can be generated by a single driving function that ``makes rapid (random) jumps between the values $\lambda_j$.'' In this section, we will say that a sequence of driving functions generates a hull if the corresponding conformal maps from the Loewner equation converge in the Carath\'eodory sense to the conformal map corresponding to the hull. We will prove their conjecture constructively. The key tool in the proof is the following theorem of Roth and Schleissinger from \cite{RS14}, which relates the multiple Loewner equation to a single driving function.
\begin{thm}[2.4 \cite{RS14}]\label{thm:rs2.4rrodf}
For $j\in\{1,2\}$ let $w_j^n,w_j\in L^1[0,1]$ be weight functions and let $\lambda_j^n,\lambda_j\in C[0,1]$ be driving functions with associated Loewner chains $g_t^n, g_t$. If $\lambda_j^n$ converges to $\lambda_j$ uniformly on $[0,1]$ and if $w_j^n$ converges weakly in $L^1[0,1]$ to $w_j$ for $j=1,2$, then $g_t^n$ converges in the Carath\'eodory sense to the chain $g_t$.
\end{thm}
The idea for constructing a randomly, rapidly oscillating driving function is to use the driving functions that generate the hull $K_t$ from the multiple Loewner equation. We do this by dividing the time interval into smaller intervals and then randomly picking which driving function to use on each small interval. This random picking is governed by the weights. Furthermore, this construction is not limited to the case described above that is considered in \cite{KNK04}. In fact, Proposition \ref{prop:nonconstantweights} is a more general answer to their conjecture.
\subsection{Controlled Oscillation}
Before we tackle the conjecture, we work through an example. In the situation of \cite{KNK04}, let $\lambda_1\equiv -1$, $\lambda_2\equiv 1$, $w_1=w_2=\frac{1}{2}$, and $K_t$ be as in (\ref{eqn:knkhull}). We will create a sequence of rapidly oscillating functions that generate $K_t$. The idea here is essentially the idea in the more general case: divide the interval into smaller pieces and decide whether the driving function is $-1$ or $1$ on each piece. Here, since $w_1=w_2=\frac{1}{2}$, we will simply alternate between the driving functions $-1$ and $1$. Let
\begin{equation}
\lambda^n(t)=\sum_{k=0}^{2^{n-1}-1}\left(\chi_{[\frac{2k+1}{2^n},\frac{2(k+1)}{2^n})}(t)-\chi_{[\frac{2k}{2^n},\frac{2k+1}{2^n})}(t)\right).
\end{equation}
So, we take $[0,1]$ and divide it into an even number of intervals of the form $[\frac{j}{2^n},\frac{j+1}{2^n})$. When $j$ is even, $\lambda^n|_{[\frac{j}{2^n},\frac{j+1}{2^n})}\equiv -1$, and when $j$ is odd, $\lambda^n|_{[\frac{j}{2^n},\frac{j+1}{2^n})}\equiv 1$. This means that for any $n\in\mathbb{N}$, $\lambda^n(t)=-1=\lambda_1$ for half of the time and $\lambda^n(t)=1=\lambda_2$ for the other half of the time, corresponding to $w_1=w_2=\frac{1}{2}$. Now, we will show that $K_t$ is generated by $\lambda^n$. The proof uses Theorem \ref{thm:rs2.4rrodf} to relate the multiple Loewner equation to a single driving function. We have already defined the driving function, so we will now set up the multiple Loewner equation situation. Define the weight functions
\begin{equation}\label{eqn:controlledjumps}
w_1^n(t):=\sum_{k=0}^{2^{n-1}-1}\chi_{[\frac{2k}{2^n},\frac{2k+1}{2^n})}(t)\quad
\text{and}
\quad
w_2^n(t):=\sum_{k=1}^{2^{n-1}}\chi_{[\frac{2k-1}{2^n},\frac{2k}{2^n})}(t).
\end{equation}
At any time, they sum to 1 and they are never 1 at the same time. We will show $w_j^n$ converges to $\frac{1}{2}$ weakly. Since the conformal maps from the Loewner equation driven by $\lambda^n$ and the conformal maps from the multiple Loewner equation driven by $\lambda_1$, $\lambda_2$, $w_1^n$, and $w_2^n$ are the same, we will have that $K_t$ is generated by $(\lambda^n)_{n=1}^{\infty}$.
\begin{lemma}\label{lem:onehalfweights}
As $n\to\infty$, $w_j^n$ converges weakly to $\frac{1}{2}$ for $j=1,2$; that is, for each $h\in L^{\infty}[0,1]$,
\begin{equation}
\int w_j^n h\to \int\frac{1}{2}h\text{ as }n\to\infty.
\end{equation}
\end{lemma}
\begin{proof}
We will prove this for $j=1$ first. Let $\epsilon>0$ and $h\in L^{\infty}[0,1]$. By Lusin's Theorem there exists a compact set $E\in\mathcal{B}([0,1])$ (the Borel subsets of $[0,1]$) with $m([0,1]\setminus E)<\frac{\epsilon}{2||h||_{\infty}}$ (where $m$ denotes Lebesgue measure) such that $h$ is continuous on $E$. So,
\begin{equation}
\left|\int_{[0,1]\setminus E}h\left(w_1^n-\frac{1}{2}\right)\right|<\frac{\epsilon}{2}.
\end{equation}
Since $E$ is compact, $h$ is uniformly continuous on $E$. So there exists $\delta>0$ such that for each $x,y\in E$ with $|x-y|<\delta$, we have that $|h(x)-h(y)|<\epsilon$. Also, there exists $N\in\mathbb{N}$ such that for all $n\geq N$, $\frac{1}{2^{n-1}}<\delta$. Let $n\geq N$. For $k\in\mathbb{N}$, define
\begin{equation}
I_k=\left[\frac{k}{2^n},\frac{k+1}{2^n}\right)\cap E.
\end{equation}
Then
\begin{equation}
\left|\int_{E}h\cdot\left(w_1^n-\frac{1}{2}\right)\right|
=\left|\sum_{k=0}^{2^n-1}\int_{I_k}\dfrac{(-1)^k}{2}h\right|
\leq\frac{1}{2}\sum_{k=0}^{2^{n-1}-1}\left|\int_{I_{2k}}h-\int_{I_{2k+1}}h\right|.
\end{equation}
Since the length of $I_{2k}\cup I_{2k+1}$ is $\frac{1}{2^{n-1}}<\delta$, for all $x\in I_{2k}\cup I_{2k+1}$,
\begin{equation}
h\left(\frac{2k+1}{2^n}\right)-\epsilon
\leq h(x)
\leq h\left(\frac{2k+1}{2^n}\right)+\epsilon.
\end{equation}
So,
\begin{equation}
\left|\int_{I_{2k}}h-\int_{I_{2k+1}}h\right|
\leq\frac{1}{2^n}(2\epsilon)
=\frac{\epsilon}{2^{n-1}}.
\end{equation}
Hence,
\begin{equation}
\left|\int_{E}h\cdot\left(w_1^n-\frac{1}{2}\right)\right|
\leq\frac{1}{2}\sum_{k=0}^{2^{n-1}-1}\frac{\epsilon}{2^{n-1}}
<\epsilon.
\end{equation}
This shows that $w_1^n$ converges weakly to $\frac{1}{2}$.\par
Since $w_2^n=1-w_1^n$, we have that $w_2^n$ converges weakly to $\frac{1}{2}$, as well.
\end{proof}
Since $\lambda^n(t)=w_1^n(t)\lambda_1(t)+w_2^n(t)\lambda_2(t)$, by Theorem \ref{thm:rs2.4rrodf}, we have that $K_t$ is generated by $\lambda^n$. This proves that $K_t$ is generated by a rapidly oscillating function.
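Lemma \ref{lem:onehalfweights} can be illustrated numerically (our sketch, with the arbitrary test function $h(t) = e^t$): the integral of $w_1^n h$ over $[0,1]$, which is just the integral of $h$ over the even-indexed dyadic intervals, approaches $\frac12\int_0^1 h = \frac{e-1}{2}$ at rate roughly $2^{-n}$:

```python
import math

def int_w1n_h(n, h=math.exp, pts=8):
    """Integral of w_1^n * h over [0,1]: sum h over even intervals of width 2^-n."""
    total = 0.0
    N = 2 ** n
    for k in range(0, N, 2):          # even-indexed intervals, where w_1^n = 1
        a, b = k / N, (k + 1) / N
        xs = [a + (b - a) * i / pts for i in range(pts + 1)]
        total += sum(0.5 * (h(xs[i]) + h(xs[i + 1])) * (b - a) / pts
                     for i in range(pts))   # composite trapezoid on [a, b]
    return total

target = (math.e - 1) / 2             # (1/2) * int_0^1 e^t dt
for n in (4, 8, 12):
    print(n, abs(int_w1n_h(n) - target))   # shrinks roughly like 2^{-n}
```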
\subsection{Rapid, Random Oscillation}\label{rapidrandomoscillation}
Now that we have shown that a rapidly oscillating function can be used to satisfy the conjecture in \cite{KNK04}, we turn to proving that we do not have to control the oscillation as we did before. In the random case, we begin the construction of the sequence of driving functions by defining weight functions. Let $w_1\in(0,1)$ and $w_2=1-w_1$ be constants. For each $k\in\mathbb{N}$, let $X_k$ be a random variable such that $P(X_k=1)=w_1$ and $P(X_k=0)=w_2$, with the $X_k$ independent (i.e. the $X_k$ are i.i.d. Bernoulli random variables). For each $n\in\mathbb{N}$ and $k\in\{1,...,n\}$, define
\begin{equation}
I_k^n=\left[\frac{k-1}{n},\frac{k}{n}\right).
\end{equation}
For each $n\in\mathbb{N}$, define
\begin{equation}\label{eqn:defofrandomweights}
w_1^n=\sum_{k=1}^{n}X_k\chi_{I_{k}^{n}}(t)\quad
\text{and}\quad
w_2^n=\sum_{k=1}^{n}(1-X_k)\chi_{I_{k}^{n}}(t).
\end{equation}
Then for every $t\in[0,1)$ and $n\in\mathbb{N}$, $w_1^n(t)+w_2^n(t)=1$. Further, $w_1^n(t)=1$ exactly when $w_2^n(t)=0$, and vice versa. Let
\begin{equation}
\lambda^n(t)=w_1^n(t)\lambda_1(t)+w_2^n(t)\lambda_2(t).
\end{equation}
For any $n\in\mathbb{N}$, $\lambda^n$ rapidly (for large $n$) and randomly oscillates between the values of $\lambda_1$ and $\lambda_2$. The idea here is that $w_j^n$ turns $\lambda_j$ on and off. So, essentially, we are using the single Loewner equation to approximate the multiple Loewner equation, with the weights controlling which driving function is picked on each interval $I_k^n$. We will first show that $w_j^n$ converges weakly to $w_j$ for $j=1,2$. Then, using Theorems \ref{thm:rs2.4rrodf} and \ref{thm:3.6.2}, we will obtain the desired result.
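A minimal sketch (ours; the function names are illustrative) of the random weight construction in (\ref{eqn:defofrandomweights}), checking numerically that $\int_0^1 w_1^n$ is close to $w_1$ for large $n$, as the Strong Law of Large Numbers suggests:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_weight(n, w1):
    """Sample X_1,...,X_n ~ Bernoulli(w1) i.i.d. and return the step
    function w_1^n, equal to X_k on I_k^n = [(k-1)/n, k/n)."""
    X = (rng.random(n) < w1).astype(float)
    def w1n(t):
        k = np.minimum((np.asarray(t) * n).astype(int), n - 1)
        return X[k]
    return w1n

w1 = 0.3
w1n = random_weight(10**5, w1)
t = np.linspace(0.0, 1.0, 10**6, endpoint=False)
approx = w1n(t).mean()   # Riemann sum for \int_0^1 w_1^n, close to w1
```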
\begin{lemma}\label{lem:randomwweights}
As $n\to\infty$, almost surely $w_j^n$ as in (\ref{eqn:defofrandomweights}) converges weakly to $w_j$ for $j=1,2$.
\end{lemma}
\noindent We will prove this for $j=1$ using a standard approach by proving that convergence holds on intervals, for step functions, for non-negative functions, and for $L^{\infty}$ functions. Then the result will also hold for $j=2$ as $w_2^n=1-w_1^n$.
\begin{claim}
Let $J\subseteq[0,1]$ be an interval. Then almost surely
$\int_J w_1^n\to\int_Jw_1=w_1m(J)$.
\end{claim}\par
\begin{proof}
Let $\epsilon>0$ and $J\subseteq [0,1]$ be an interval. Then there exists $N_1\in\mathbb{N}$ such that for all $n\geq N_1$ there exists $a_n\in\{1,...,n\}$ and $m_n\in\{0,...,n-a_n\}$ such that
$\bigcup_{k=a_n}^{a_n+m_n}I_k^n\subseteq J.$
Then there exists a natural number $N_2\geq N_1$ such that for all $n\geq N_2$
\begin{equation}
I_n=\bigcup_{k=a_n}^{a_n+m_n}I_k^n\subseteq J \quad\text{and}\quad m(J\setminus I_n)<\frac{\epsilon}{2}.
\end{equation}
So, since $|w_1^n-w_1|\leq 1$,
\begin{equation}
\left|\int_{J\setminus I_n}(w_1^n-w_1)\right|
\leq\int_{J\setminus I_n}dt
=m(J\setminus I_n)
<\frac{\epsilon}{2}.
\end{equation}
As $n\to\infty$, $m_n\to\infty$. Since the sum $\sum_{k=a_n}^{a_n+m_n}X_k$ has $m_n+1$ terms, the Strong Law of Large Numbers gives
\begin{equation}
\frac{1}{m_n+1}\sum_{k=a_n}^{a_n+m_n}X_k\to w_1 \text{ a.s.}
\end{equation}
So, there exists $N\geq N_2$ such that for all $n\geq N$
\begin{equation}
\left|\frac{1}{m_n+1}\sum_{k=a_n}^{a_n+m_n}X_k-w_1\right|<\frac{\epsilon}{2}\text{ a.s.}
\end{equation}
Fix $n\geq N$. Then with probability 1, since $m(I_n)=\frac{m_n+1}{n}\leq 1$,
\begin{equation}
\left|\int_{I_n} (w_1^n-w_1)\right|
=\left|\frac{m_n+1}{n}\cdot\frac{1}{m_n+1}\sum_{k=a_n}^{a_n+m_n}(X_k-w_1)\right|
=m(I_n)\left|\frac{1}{m_n+1}\sum_{k=a_n}^{a_n+m_n}X_k-w_1\right|
\leq\frac{\epsilon}{2}.
\end{equation}
Combining the two estimates, with probability 1, $\left|\int_J w_1^n-w_1m(J)\right|<\epsilon$ for all $n\geq N$. Therefore, almost surely
\begin{equation}
\int_J w_1^n\to w_1m(J).
\end{equation}
\end{proof}
\begin{claim}
Let $h\in L^{\infty}[0,1]$ be a step function. Then almost surely
\begin{equation}
\int_{[0,1]}h w_1^n\to w_1\int_{[0,1]}h.
\end{equation}
\end{claim}
\begin{proof}
Let $\epsilon>0$. Since $h$ is a bounded step function, there exist finitely many nonempty intervals $J_1,...,J_M$ and $\alpha_1,...,\alpha_M\in\mathbb{R}\setminus\{0\}$ so that $h=\sum_{i=1}^{M}\alpha_i\chi_{J_i}$. Then, by the previous claim, there exists $N$ such that for all $n\geq N$ and each $i\in\{1,...,M\}$, almost surely
\begin{equation}
\left|\int_{J_i}w_1^n-w_1m(J_i)\right|<\frac{\epsilon}{2\sum_{i=1}^{M}|\alpha_i|}.
\end{equation}
Then with probability 1,
\begin{equation}
\left|\int_{[0,1]}h(w_1^n-w_1)\right|
\leq\sum_{i=1}^{M}|\alpha_i|\left|\int_{J_i}(w_1^n-w_1)\right|
\leq\sum_{i=1}^{M}|\alpha_i|\frac{\epsilon}{2\sum_{i=1}^{M}|\alpha_i|}
=\frac{\epsilon}{2}<\epsilon.
\end{equation}
This proves the claim.
\end{proof}
\begin{claim}
For $h\in L^{\infty}[0,1]$ with $h\geq 0$, almost surely
\begin{equation}
\int h w_1^n\to w_1\int h.
\end{equation}
\end{claim}
\begin{proof}
Let $\epsilon>0$ and $h\in L^{\infty}[0,1]$ with $h\geq 0$. Then there exists a step function $f\in L^{\infty}[0,1]$ such that $\|f-h\|_2<\frac{\epsilon}{2}$, where $\|\cdot\|_{k}$ denotes the $L^k[0,1]$ norm. By the previous claim, there exists $N\in\mathbb{N}$ such that for all $n\geq N$, almost surely $\left|\int f(w_1^n-w_1)\right|<\frac{\epsilon}{2}$. Also, since $0\leq w_1^n(t)\leq 1$ for all $t\in[0,1]$, we have $|w_1^n-w_1|\leq 1$, so $\|w_1^n-w_1\|_2\leq 1$. Hence, by the Cauchy--Schwarz inequality,
\begin{equation}
\left|\int h(w_1^n-w_1)\right|
\leq\left|\int f(w_1^n-w_1)\right|+\left|\int (h-f)(w_1^n-w_1)\right|
<\frac{\epsilon}{2}+\|h-f\|_2
<\epsilon.
\end{equation}
This proves the claim.
\end{proof}
\begin{claim}\label{cla:randomweightsconvtow}
For $h\in L^{\infty}[0,1]$, almost surely
\begin{equation}
\int h w_1^n\to w_1\int h.
\end{equation}
\end{claim}
\begin{proof}
Let $\epsilon>0$ and $h\in L^{\infty}[0,1]$. Then $h^+,h^-\in L^{\infty}[0,1]$ (where $h^+,h^-\geq 0$ and $h=h^+-h^-$). By the previous claim, there exists $N\in\mathbb{N}$ such that for all $n\geq N$, almost surely
\begin{equation}
\left|\int_{[0,1]}h^+\left(w_1^n-w_1\right)\right|<\frac{\epsilon}{2}
\text{ and }
\left|\int_{[0,1]}h^-\left(w_1^n-w_1\right)\right|<\frac{\epsilon}{2}.
\end{equation}
Then with probability 1,
\begin{equation}
\left|\int h\left(w_1^n-w_1\right)\right|
\leq\left|\int h^+\left(w_1^n-w_1\right)\right|+\left|\int h^-\left(w_1^n-w_1\right)\right|
<\epsilon.
\end{equation}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:randomwweights}]
By Claim \ref{cla:randomweightsconvtow}, we have that $w_1^n$ converges weakly to $w_1$. Then as $w_2^n=1-w_1^n$, we have $w_2^n$ converges weakly to $1-w_1=w_2$. So we have the result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:nonconstantweights}]
Apply Theorem \ref{thm:3.6.2} to get continuous functions $\lambda_1,...,\lambda_n$ and constant weights $w_1,...,w_n\in(0,1)$. Applying Lemma \ref{lem:randomwweights} to $w_j^n$ from (\ref{eqn:defofrandomweights}), we have that $w_j^n$ converges weakly to $w_j$ in $L^1[0,1]$ for $j=1,2$. Theorem \ref{thm:rs2.4rrodf} then yields the desired convergence.
\end{proof}
\section{Simulating the Multiple Loewner Equation}\label{simulations}
The Loewner equation yields a conformal map that takes sets in the upper half-plane and maps them down to the real line; for this reason it is sometimes referred to as the downward Loewner equation. For a map that does the opposite, we can consider the initial value problem
\begin{equation}\label{eqn:upwardle}
\partial_t f_t(z)
=\frac{-2}{f_t(z)-\xi(t)},
\quad f_0(z)=z.
\end{equation}
We call this the upward Loewner equation and the conformal maps $f_t$ grow sets in the upper half-plane. There is a relationship between the downward and upward Loewner equations. If $g_t$ is the map given by the downward Loewner equation driven by $\lambda:[0,T]\to\mathbb{R}$ and $f_t$ is the map given by the upward Loewner equation driven by $\xi(t)=\lambda(T-t)$, then $f_T=g_T^{-1}$.
The standard algorithm to simulate the hulls from the Loewner equation uses the upward Loewner equation driven by constant functions (see for instance \cite{bauer}, \cite{kennedy07}, \cite{kennedy09}, or \cite{mr}). For a constant driving function $\xi(t)=c$, the solution to the upward Loewner equation is
\begin{equation}\label{eqn:upwardconstant}
f_t^c(z)=\sqrt{(z-c)^2-4t}+c.
\end{equation}
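A small numerical sketch of (\ref{eqn:upwardconstant}) (ours; the branch handling is the usual choice in such simulations, not spelled out in the text): of the two square roots we take the one that keeps the image in the closed upper half-plane and agrees with the identity far from $c$. The driving point $c$ is then sent to the tip $c+2i\sqrt{t}$, as in Figure \ref{fig:constantsim}.

```python
import numpy as np

def f_upward_const(z, c, t):
    """Upward Loewner map f_t^c(z) = sqrt((z-c)^2 - 4t) + c.
    Branch choice: the root with Im >= 0 (image in the closed upper
    half-plane), and on the real axis to the left of c the negative root,
    so that f behaves like the identity far from c."""
    z = np.asarray(z, dtype=complex)
    w = np.sqrt((z - c)**2 - 4.0 * t)
    w = np.where(w.imag < 0, -w, w)
    w = np.where((w.imag == 0) & (z.real < c), -w, w)
    return w + c

# The driving point c is mapped to the tip of the slit, c + 2i*sqrt(t)
c, t = 1.5, 0.25
tip = complex(f_upward_const(c, c, t))   # 1.5 + 1.0i
```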
\begin{figure}
\centering
\begin{tikzpicture}
\draw (0,0) -- (4,0);
\draw[fill] (2,0) circle [radius=0.05];
\node[below] at (2,0) {$c$};
\draw[->] (4.5,1) to [out=0,in=180] (5.5,1);
\node[above] at (5,1) {$f_t^c$};
\draw (6,0) -- (10,0);
\draw[fill] (8,0) circle [radius=0.05];
\node[below] at (8,0) {$c$};
\draw (8,0)--(8,2);
\draw[fill] (8,2) circle [radius=0.05];
\node[above] at (8,2) {$c+2i\sqrt{t}$};
\end{tikzpicture}
\caption{Mapping Up Hull Corresponding to $f_t^c$}
\label{fig:constantsim}
\end{figure}
\noindent The algorithm for simulating the hull driven by $\lambda:[0,T]\to\mathbb{R}$ with $N+1$ sample points is as follows:
\begin{itemize}
\item[0.] Compute $\lambda(T)$ and add to hull
\item[1.] Apply (\ref{eqn:upwardconstant}) with $c=\lambda(T\cdot\frac{N-k}{N})$ to points in hull
\item[2.] Add $\lambda(T\cdot\frac{N-k}{N})$ to hull
\item[3.] Repeat steps 1-2 for $k\in\{1,...,N\}$
\end{itemize}
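The steps above can be sketched as follows (a rough Python implementation of ours; the square-root branch in (\ref{eqn:upwardconstant}) is chosen to keep images in the closed upper half-plane). For the constant driver $\lambda\equiv 0$ the output should be the vertical slit from $0$ to $2i\sqrt{T}$, which the sanity check below confirms:

```python
import numpy as np

def f_upward_const(z, c, dt):
    # upward map for constant driver c over time dt, branch with Im >= 0
    z = np.asarray(z, dtype=complex)
    w = np.sqrt((z - c)**2 - 4.0 * dt)
    w = np.where(w.imag < 0, -w, w)
    w = np.where((w.imag == 0) & (z.real < c), -w, w)
    return w + c

def simulate_hull(lam, T, N):
    """Steps 0-3 above: repeatedly map up the sampled driver lam(T*(N-k)/N)."""
    dt = T / N
    pts = np.array([lam(T)], dtype=complex)        # step 0
    for k in range(1, N + 1):
        c = lam(T * (N - k) / N)
        pts = f_upward_const(pts, c, dt)           # step 1
        pts = np.append(pts, c)                    # step 2
    return pts

# sanity check: lam = 0 grows the vertical slit from 0 to 2i*sqrt(T)
pts = simulate_hull(lambda s: 0.0, T=1.0, N=200)
```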
For the multiple Loewner equation, we want to use the same idea as above but our driving function (randomly) oscillates between the driving functions. This is in effect what the proof in Section \ref{rapidrandomoscillation} does to generate the hulls. Let $\lambda_1,\lambda_2:[0,T]\to\mathbb{R}$ be driving functions and $w_1,w_2\in[0,1]$ be constant weights. For $k\in\{0,...,N\}$:
\begin{itemize}
\item[1.] (Randomly) assign $j_k$ to be either $1$ or $2$ so that $P(j_k=1)=w_1$ and $P(j_k=2)=w_2$
\item[2.] Define $\lambda(T\cdot\frac{k}{N})=\lambda_{j_k}(T\cdot\frac{k}{N})$
\item[3.] Repeat steps in previous algorithm
\end{itemize}
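The modified algorithm, again as a rough self-contained sketch of ours, with the two-slit example $\lambda_1=-1$, $\lambda_2=1$, $w_1=w_2=\frac{1}{2}$ in mind:

```python
import numpy as np

rng = np.random.default_rng(42)

def f_upward_const(z, c, dt):
    # upward map for constant driver c over time dt, branch with Im >= 0
    z = np.asarray(z, dtype=complex)
    w = np.sqrt((z - c)**2 - 4.0 * dt)
    w = np.where(w.imag < 0, -w, w)
    w = np.where((w.imag == 0) & (z.real < c), -w, w)
    return w + c

def simulate_random_hull(lam1, lam2, w1, T, N):
    """Randomly assign j_k = 1 with probability w1 (else 2) at each sample,
    then run the single-driver algorithm on the oscillating driver."""
    picks = rng.random(N + 1) < w1
    lam = lambda k: (lam1 if picks[k] else lam2)(T * k / N)
    dt = T / N
    pts = np.array([lam(N)], dtype=complex)
    for k in range(1, N + 1):
        c = lam(N - k)
        pts = f_upward_const(pts, c, dt)
        pts = np.append(pts, c)
    return pts

# the two-slit example: lam1 = -1, lam2 = 1, fair coin
pts = simulate_random_hull(lambda s: -1.0, lambda s: 1.0, 0.5, T=1.0, N=500)
```

Since the sampled driver stays in $[-1,1]$, the simulated points should remain in the strip $[-1,1]\times[0,\infty)$, in line with Lemma \ref{lem:cr3.3a}.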
We will investigate this algorithm by revisiting the example done in \cite{KNK04} and mentioned here in Section \ref{introtoconjecture} that motivates all of our results. Let $\lambda_1=-1$, $\lambda_2=1$, and $w_1=\frac{1}{2}=w_2$. Recall the hull is given by
\begin{equation}
K_t=\left\{\sqrt{\frac{2\theta_t}{\sin(2\theta_t)}}(\pm\cos\theta_t+i\sin\theta_t)\right\}.
\end{equation}
First, we will control the oscillation by assigning $j_k$ to be 1 when $k$ is odd and 2 when $k$ is even. The simulations for 1,000 and 10,000 oscillations are given in Figures \ref{fig:1000controlledoscillations} and \ref{fig:10000controlledoscillations}. For 1,000 oscillations, the simulated data points are extremely close to the curve. There is a larger spread in the points near the real line since the growth of $f_t^c$ is faster there. For 10,000 oscillations, the simulated data is almost indistinguishable from the curve.
The errors (that is, the maximum distance the data is from the hull) for 1000, 500, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20, 10 controlled oscillations are shown in Figure \ref{fig:errors}, where the blue points correspond to points on the left side (i.e. associated with $\lambda_1$) and the red points correspond to points on the right side (i.e. associated with $\lambda_2$).
Since the last map used in each controlled simulation is $f_t^1$, all of the right sided points are shifted up from their previous positions. This causes more error for these points. On the other hand, the map shifts the left sided points towards the right and reduces the error for these points.
Remarkably, even for 10 oscillations (11 data points), the error is small enough that the simulated points are closer to their respective side than to the opposite side (that is, their real parts are on the same side of 0 as their corresponding driving function). Further, for any number of oscillations (at least 10), we could thicken each side of the hull by the error and the two sides would not intersect (up to $T=10$).
Second, we switch to randomly oscillating the driving function. We randomly assign $j_k$ to be 1 or 2 by flipping a fair, virtual coin. In each of Figures \ref{fig:1000randomoscillations} and \ref{fig:10000randomoscillations} are 10 simulated hulls (non-black curves) with 1,000 and 10,000 oscillations (respectively) and the hull (black curves). For 1,000 oscillations, the simulated hulls have the same overall shape (e.g. they approach each other as their imaginary parts increase), but there is significant variation between the curves. For 10,000 oscillations, the simulated hulls are significantly closer to the hull, but there is still variation between the curves. The upshot is that the random hulls are visually a good replacement for the actual hull. Figure \ref{fig:100errors} gives a histogram of 100 simulations of 1,000 random oscillations where left and right sides correspond to the colors blue and red as before.
It appears that the controlled oscillation (i.e. forcing a switch between driving functions) always outperforms the random oscillation. This intuitively makes sense. Say we grow the $-1$ hull first using $f_t^{-1}$. If we use $f_t^{1}$ next, the hull corresponding to $-1$ will be shifted to the right. Instead, if we use $f_t^{-1}$ next, the hull corresponding to $-1$ will be higher. In the random oscillation case, either of these maps could be used over and over before switching. This would cause the hulls to be higher or further to the left or right than the actual hull. The forced oscillation appears not to allow either side of the hull to get too far away from the actual hull.
\begin{figure}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=2in]{1000osc.pdf}
\caption{1000 Controlled Oscillations}
\label{fig:1000controlledoscillations}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=2in]{10000osc.pdf}
\caption{10000 Controlled Oscillations}
\label{fig:10000controlledoscillations}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=.9\textwidth]{error.pdf}
\caption{Errors for 1000, 500, 400, 300, 200, 100, 90, 80, 70, 60, 50, 40, 30, 20, 10 Controlled Oscillations}
\label{fig:errors}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=.9\textwidth]{100randomosc1000errors.pdf}
\caption{Histogram of 100 Errors for 1000 Random Oscillations}
\label{fig:100errors}
\end{minipage}
\end{figure}
\begin{figure}
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=2in]{randomosc1000.pdf}
\caption{1000 Random Oscillations}
\label{fig:1000randomoscillations}
\end{minipage}\hfill
\begin{minipage}{0.45\textwidth}
\centering
\includegraphics[width=0.9\textwidth,height=2in]{randomosc10000.pdf}
\caption{10000 Random Oscillations}
\label{fig:10000randomoscillations}
\end{minipage}
\end{figure}
\section{Background}\label{background}
We now give a more rigorous introduction to the Loewner equation, hulls, and prime ends. This section gives us the tools and background needed to generalize Theorem 1.1 in \cite{RS14} which we used to prove Proposition \ref{prop:nonconstantweights}. We begin by reintroducing the Loewner equation. Next we discuss hulls in the upper half-plane. This leads to the section on Loewner hulls, which are hulls that can be generated through the Loewner equation driven by a continuous driving function. We then generalize the notion of the tip of a curve to prime ends. This section concludes with results on multiple Loewner hulls, which are hulls that can be generated through the multiple Loewner equation driven by multiple continuous driving functions.
\subsection{Loewner Equation}\label{loewnerequation}
Let $\lambda:[0,T]\to\mathbb{R}$ be continuous. For $z\in\mathbb{H}$, the (single, chordal) Loewner equation is the initial value problem
\begin{equation}\label{eqn:LE}
\frac{\partial}{\partial t} g_t(z)
=\frac{2}{g_t(z)-\lambda(t)},
\quad g_0(z)=z.
\end{equation}
A solution to the Loewner equation exists on some time interval; the only obstruction to existence is $g_t(z)$ reaching $\lambda(t)$. We let $K_t$ denote the set of points of $\mathbb{H}$ for which the solution has failed to exist by time $t$, that is,
\begin{equation}
K_t=\{z\in\mathbb{H}:g_s(z)=\lambda(s)\text{ for some }s\in[0,t]\}.
\end{equation}
The function $\lambda$ is called the driving function and $(g_t)_{t\in[0,T]}$ is called a Loewner chain. For $t\in[0,T]$, we call $K_t$ a Loewner hull and we call the family $(K_t)_{t\in[0,T]}$ a Loewner family (see Section \ref{loewnerhulls}). We introduce the Loewner hull moniker to distinguish hulls that can be generated by a single, continuous driving function from hulls that cannot. For example, using $\lambda(t)=c$, we can grow a vertical line starting at $c$. However, two vertical lines at $c_1$ and $c_2$ (with $c_1\not=c_2$) cannot be generated from a single continuous driving function. We discuss this further in Section \ref{loewnerhulls}. The solution $g_t(z)$ is the conformal map from $\mathbb{H}\setminus K_t$ onto $\mathbb{H}$ that satisfies
\begin{equation}
g_t(z)=z+\frac{2t}{z}+O\left(\frac{1}{z^2}\right)
\end{equation}
near infinity. We define the half-plane capacity of $K_t$, $\text{hcap}(K_t)$, to be $2t$ (see Section \ref{hulls}).
If instead of starting with a continuous function we start with a Loewner family, we can find a unique driving function satisfying (\ref{eqn:LE}). This gives a one-to-one correspondence between continuous functions and Loewner families of hulls. See \cite{lawler} Lemma 4.2, Theorem 4.6, and the discussion following Example 4.12 for more details.
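For the constant driver $\lambda\equiv 0$, separating variables in (\ref{eqn:LE}) gives $g_t(z)=\sqrt{z^2+4t}$; a direct numerical integration of the ODE reproduces this (a sketch of ours, using a standard fourth-order Runge--Kutta step):

```python
import numpy as np

def solve_loewner(z, T, steps=4000):
    """RK4 integration of dg/dt = 2/(g - lambda(t)) with lambda = 0."""
    g, dt = complex(z), T / steps
    f = lambda y: 2.0 / y
    for _ in range(steps):
        k1 = f(g)
        k2 = f(g + 0.5 * dt * k1)
        k3 = f(g + 0.5 * dt * k2)
        k4 = f(g + dt * k3)
        g += dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0
    return g

z, T = 1 + 2j, 0.5
g_num = solve_loewner(z, T)
g_exact = np.sqrt(z * z + 4 * T)   # principal branch is correct for this z
```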
Now, let $\lambda_1,...,\lambda_n:[0,T]\to\mathbb{R}$ be continuous and $w_1,...,w_n\in L^1[0,T]$ with $\sum_{k=1}^{n}w_k(t)\equiv 1$. For $z\in\mathbb{H}$, the multiple Loewner equation is the initial value problem
\begin{equation}\label{eqn:multiLE}
\frac{\partial}{\partial t} g_t(z)=\sum_{k=1}^{n}\frac{2w_k(t)}{g_t(z)-\lambda_k(t)}\text{ a.e. }t\in [0,T],
\quad g_0(z)=z.
\end{equation}
This is the sum of weighted Loewner equations, which allows growth of multiple Loewner hulls simultaneously.
Note that (\ref{eqn:multiLE}) holds a.e. $t\in [0,T]$ whereas (\ref{eqn:LE}) holds for all $t\in[0,T]$.
\subsection{Hulls}\label{hulls}
\begin{defn}
A bounded set $K\subseteq\mathbb{H}$ is a hull if $\mathbb{H}\setminus K$ is simply connected.
\end{defn}
\noindent For any hull $K$, there is a unique conformal map $g_K:\mathbb{H}\setminus K\to\mathbb{H}$ with $\lim_{z\to\infty}(g_K(z)-z)=0$, by the Riemann mapping theorem (see Proposition 3.36 in \cite{lawler}). The inverse of $g_K$ satisfies the Nevanlinna representation formula
\begin{equation}
g_K^{-1}(z)=z+\int_{\mathbb{R}}\frac{d\mu_K(t)}{t-z}
\end{equation}
for some finite, nonnegative Borel measure $\mu_K$ on $\mathbb{R}$ (see Section 3.1 in \cite{schleissinger}).
We now state a very useful result from \cite{RS14}.
\begin{lemma}[3.4 \cite{RS14}]\label{lem:3.2.3}
Let $A$ be a hull.
\begin{itemize}
\item[(a)] If $\overline{A}\cap\mathbb{R}$ is contained in the closed interval $[a,b]$, then $g_A(\alpha)\leq\alpha$ for every $\alpha\in\mathbb{R}$ with $\alpha<a$ and $g_A(\beta)\geq\beta$ for every $\beta\in\mathbb{R}$ with $\beta>b$.
\item[(b)] If the open interval $(a,b)$ is contained in $\mathbb{R}\setminus\overline{A}$, then $|g_A(\beta)-g_A(\alpha)|\leq|\beta-\alpha|$ for all $\alpha,\beta\in(a,b)$.
\end{itemize}
\end{lemma}
\begin{defn}
Let $K$ be a hull. The half-plane capacity of $K$ is defined as
\begin{equation}
\text{hcap}(K)=\lim_{z\to\infty}z(g_K(z)-z).
\end{equation}
\end{defn}
\noindent The half-plane capacity is a nonnegative real quantity relating $g_K$ and $K$, measuring the size of $K$ as seen from infinity. Part of the importance of the half-plane capacity is captured in the following lemma from \cite{RS14}.
\begin{lemma}[3.1 \cite{RS14}]\label{lem:3.2.2}
Let $A$, $A_1$, $A_2$ be hulls.
\begin{itemize}
\item[(a)] If $A_1\cup A_2$ and $A_1\cap A_2$ are hulls, then
\begin{equation}
\text{hcap}(A_1)+\text{hcap}(A_2)\geq\text{hcap}(A_1\cup A_2)+\text{hcap}(A_1\cap A_2)
\end{equation}
\item[(b)] If $A_1\subset A_2$, then $\text{hcap}(A_2)=\text{hcap}(A_1)+\text{hcap}(g_{A_1}(A_2\setminus A_1))\geq\text{hcap}(A_1)$.
\item[(c)] If $A_1\cup A_2$ is a hull and $A_1\cap A_2=\emptyset$, then $\text{hcap}(g_{A_1}(A_2))\leq\text{hcap}(A_2)$.
\item[(d)] If $c>0$, then $\text{hcap}(cA)=c^2\text{hcap}(A)$ and $\text{hcap}(A\pm c)=\text{hcap}(A)$.
\end{itemize}
\end{lemma}
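For a concrete instance, the vertical slit $K=[0,ih]$ has normalized map $g_K(z)=\sqrt{z^2+h^2}$, a standard example, so $\text{hcap}(K)=\frac{h^2}{2}$; the limit in the definition and the scaling rule in part (d) can be checked numerically (the helper below is ours):

```python
import math

def hcap_slit(h, z=1e4):
    # hull K = vertical slit [0, ih]; its normalized map is
    # g_K(z) = sqrt(z^2 + h^2), so z*(g_K(z) - z) -> hcap(K) = h^2/2
    return z * (math.hypot(z, h) - z)

h = 2.0
val = hcap_slit(h)          # close to h**2 / 2 = 2.0
scaled = hcap_slit(3 * h)   # close to 3**2 * (h**2 / 2) = 18.0, as in (d)
```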
Remark 3.50 in \cite{lawler} gives that there exists $M>0$ so that for any hull $K$,
\begin{equation}
\text{diam}(g_K(K)) < M\text{diam}(K).
\end{equation}
In order to further discuss $\text{diam}(g_K(K))$, we introduce some notation.
\begin{defn}
Let $A$ and $B$ each be a hull or a finite union of hulls. Let $g_B:\mathbb{H}\setminus B\to\mathbb{H}$ be the hydrodynamically normalized conformal map. Define $g_B^+(A)=0$ if $A\subseteq\text{int}(B)$ and otherwise
\begin{equation}
g_{B}^+(A)
=\max\left\{\lim_{n\to\infty}g_{B}(z_n):(z_n)_{n=1}^{\infty}\subseteq\mathbb{H}\setminus B, z_n\to z\in A,g_{B}(z_n)\to x\in\mathbb{R}\right\}.
\end{equation}
Similarly, define $g_B^-(A)=0$ if $A\subseteq\text{int}(B)$ and otherwise
\begin{equation}
g_{B}^-(A)
=\min\left\{\lim_{n\to\infty}g_{B}(z_n):(z_n)_{n=1}^{\infty}\subseteq\mathbb{H}\setminus B, z_n\to z\in A,g_{B}(z_n)\to x\in\mathbb{R}\right\}.
\end{equation}
\end{defn}
This means
\begin{equation}\label{eqn:lawlerremark3.50}
g_K^+(K)-g_K^-(K)=\text{diam}(g_K(K))\leq M\text{diam}(K).
\end{equation}
\subsection{Loewner Hulls}\label{loewnerhulls}
As previously mentioned, not all hulls can be grown from the Loewner equation driven by a continuous function; for instance, a tree or a disconnected set cannot. We will call the hulls that can be grown this way Loewner hulls.
\begin{defn}
We say that a family of hulls, $(K_t)_{t\in[0,T]}$ is a Loewner family if for all $t\in[0,T]$, $\text{hcap}(K_t)=2t$, $K_s\subset K_t$ for $s<t$, and for all $\epsilon>0$ there exists $\delta>0$ so that for $t\in [0,T-\delta]$ there is a bounded, connected set $S\subset\mathbb{H}\setminus K_t$ with diam$(S)<\epsilon$ where $S$ disconnects $K_{t+\delta}\setminus K_t$ from infinity in $\mathbb{H}\setminus K_t$.
\end{defn}
\noindent The above definition is motivated by Theorem 2.6 of \cite{lsw} which states that $(K_t)_{t\in[0,T]}$ is a Loewner family if and only if there exists $\lambda:[0,T]\to\mathbb{R}$ continuous so that $(K_t)_{t\in[0,T]}$ is driven by $\lambda$. Furthermore, $\lambda(t)$ is the point in $\bigcap_{\epsilon>0}g_t(K_{t+\epsilon}\setminus K_t)$. We will say that two Loewner families $(K_t)_{t\in[0,T]}$ and $(L_s)_{s\in[0,S]}$ are disjoint if $\overline{K_T}\cap \overline{L_S}=\emptyset$, where the closure is taken in $\overline{\mathbb{H}}$. Similarly, if $A$ and $B$ are hulls, we say they are disjoint if $\overline{A}\cap \overline{B}=\emptyset$. When there is no risk of confusion, we denote Loewner families simply by $K_t$, dropping the index on $t$.
\begin{defn}
We say that the hull $K$ with $\text{hcap}(K)=2T$ is a Loewner hull if there is a Loewner family $K_t$ with $K_T=K$.
\end{defn}
\noindent The relationship between a Loewner family and its driving function is very deep. We exemplify this relationship by stating a few results that will prove useful.
\begin{lemma}[3.3 (a) \cite{chenrohde}]\label{lem:cr3.3a}
Let $K_t$ be a Loewner family driven by $\lambda$. If $\lambda(t)\in [a,b]$ for all $t\in[0,T]$, then $\overline{K_T}\subset[a,b]\times\mathbb{R}$.
\end{lemma}
\begin{lemma}[4.13 \cite{lawler}]\label{lem:lawler4.13}
Let $K_t$ be a Loewner family generated by $\lambda$ with Loewner chain $g_t$. Define $R_t=\max\{\sqrt{t},\sup\{|\lambda(s)|:0\leq s\leq t\}\}$. Then $\sup\{|z|:z\in K_t\}\leq 4R_t$. In fact, if $|z|>4R_t$, then $|g_s(z)-z|\leq R_t$ for $0\leq s\leq t$.
\end{lemma}
\noindent Beyond the driving function, Loewner families can only grow in particular ways.
\begin{defn}[\cite{lawler}]
Let $K_t$ be a Loewner family. We call $z$ a $t$-accessible point if $z\in K_t\setminus\cup_{s<t} K_s$ and there exists a continuous curve $\gamma:[0,1]\to\mathbb{C}$ with $\gamma(0)=z$ and $\gamma(0,1]\subseteq\mathbb{H}\setminus K_t$.
\end{defn}
\begin{prop}[4.26 \cite{lawler}]\label{pro:lawler4.26}
If $t>0$ and $z$ is a $t$-accessible point, then there is a strictly increasing sequence $s_j\uparrow t$ and a sequence of $s_j$-accessible points $z_j$ with $z_j\to z$.
\end{prop}
\begin{prop}[4.27 \cite{lawler}]\label{pro:lawler4.27}
For each $t>0$, there is at most one $t$-accessible point. Also, the boundary of the time $t$ hull is contained in the closure of the set of $s$-accessible points for $s\leq t$.
\end{prop}
\noindent The restriction on the number of $t$-accessible points also shows that the boundary of a hull always intersects the boundary of previous hulls.
\begin{lemma}\label{lem:B}
Let $K_t$ be a Loewner family generated by $\lambda$. Fix $0<t\leq T$. Then there exists $0<s<t$ so that $\partial_{\mathbb{H}} K_t\cap K_s\not=\emptyset$. Moreover, $\partial_{\mathbb{H}} K_t\cap \partial K_r\not=\emptyset$ for $s\leq r\leq t$.
\end{lemma}
\noindent Note that here we use $\partial_{\mathbb{H}}$ to indicate the boundary with respect to $\mathbb{H}$. Explicitly, for $A\subseteq\mathbb{H}$,
\begin{equation}
\partial_{\mathbb{H}} A=\{z\in\overline{A}:\text{ there exists }(z_n)_{n=1}^\infty\subseteq\mathbb{H}\setminus A\text{ with }z_n\to z\}.
\end{equation}
\begin{proof}
Suppose not; that is, suppose that for some fixed $t\in (0,T]$ we have $\partial_{\mathbb{H}} K_t\cap K_s=\emptyset$ for all $0<s<t$. Since $t>0$, $\partial_{\mathbb{H}} K_t$ contains more than one point. Let $z_1,z_2\in\partial_{\mathbb{H}} K_t$ with $|z_1-z_2|=\delta>0$. Then there are $w_1,w_2\in\mathbb{H}\setminus K_t$ with $|z_i-w_i|<\frac{\delta}{3}$ for $i=1,2$. Let $\gamma_i:[0,1]\to\mathbb{H}$ be the straight line segment starting at $w_i$ and ending at $z_i$ for $i=1,2$. Let $t_i\in(0,1]$ be the first time that $\gamma_i$ intersects $K_t$ and $z_i'=\gamma_i(t_i)$. Two important facts follow. First, since $z_i'\in\partial_{\mathbb{H}} K_t\subseteq K_t\setminus\bigcup_{s<t}K_s$ for $i=1,2$, $z_1'$ and $z_2'$ are $t$-accessible. Second, by construction $|z_1'-z_2'|>\frac{\delta}{3}$, so $z_1'\not= z_2'$. This shows that there is more than one $t$-accessible point, contradicting Proposition \ref{pro:lawler4.27}. So, for all $t\in(0,T]$ there is $0<s<t$ with $\partial_{\mathbb{H}} K_t\cap K_s\not=\emptyset$.
The moreover statement follows immediately using the fact that $s\leq r\leq t$ gives $K_s\subseteq K_r\subseteq K_t$.
\end{proof}
\noindent Often we will be considering the family $(g_L(K_t))_{t\in [0,T]}$ where $L$ is a hull disjoint from $K_T$. The next lemma investigates what happens when a Loewner family is conformally transformed.
\begin{lemma}[2.8 \cite{lsw}]\label{lem:lsw2.8}
Let $(K_t)_{t\in[0,T]}$ be a Loewner family driven by $\lambda$. Let $D$ be a relatively open subset of $\overline{\mathbb{H}}$ which contains $\overline{K_T}$, and set $D_{\mathbb{R}}:=D\cap\mathbb{R}$. Let $G:D\to\overline{\mathbb{H}}$ be conformal in $D\setminus D_{\mathbb{R}}$ and continuous in $D$, and suppose that $G(D_{\mathbb{R}})\subset\mathbb{R}$. Then $(G(K_t))_{t\in[0,T]}$ is a Loewner family. Moreover,
$\partial_t [\text{hcap}(G(K_t))]=G'(\lambda(0))^2\partial_t \text{hcap}(K_t)$ at $t=0$.
\end{lemma}
\subsection{Prime Ends}
In order to generalize the results of \cite{RS14}, we need to generalize the tip of a curve into the setting of hulls. This is done with prime ends, which are equivalence classes of crosscuts. We give only a brief introduction, for more details see \cite{rempe-gillen}.
\begin{defn}[\cite{rempe-gillen}] Let $\Omega\subseteq\mathbb{H}$ be a simply connected domain containing $\infty$. Let $C$ be a crosscut of $\Omega$ (that is, a Jordan arc in $\Omega$ with endpoints in $\partial\Omega$) and $\Omega_C$ the component of $\Omega\setminus C$ not containing $\infty$. A prime end of $\Omega$ is represented by a sequence of pairwise disjoint crosscuts $(C_n)_{n=1}^{\infty}$ with $\text{diam}(C_n)\to 0$ as $n\to\infty$ and $C_{n+1}\subseteq\overline{\Omega_{C_n}}$. Two sequences, $(C_n)_{n=1}^{\infty}$ and $(\widetilde{C}_n)_{n=1}^{\infty}$, represent the same prime end if for each $n$ there is a $J_n\in\mathbb{N}$ so that $\widetilde{C}_j\subseteq\Omega_{C_n}$ for $j\geq J_n$ and vice versa.
\end{defn}
\begin{defn}
Let $p$ be a prime end represented by the sequence of crosscuts $(C_n)_{n=1}^{\infty}$. The impression of $p$ is defined as $I(p)=\bigcap_{n=1}^{\infty}\overline{\Omega_{C_n}}.$
Since $(\overline{\Omega_{C_n}})_{n=1}^{\infty}$ is a decreasing sequence of nonempty, compact, and connected sets, the impression of $p$ is nonempty. Moreover, the impression of $p$ is independent of its representation.
\end{defn}
\begin{lemma}\label{lem:C}
Let $K_t$ be a Loewner family generated by $\lambda$. Fix $0<t\leq T$. If there exists $0<s<t$ such that either $\lambda(s)<\lambda(r)$ for all $r\in(s,t)$ or $\lambda(s)>\lambda(r)$ for all $r\in(s,t)$, then $\overline{K_s}\cap\partial_{\mathbb{H}} K_t\not=\emptyset$.
\end{lemma}
\begin{proof}
Suppose $\lambda(s)<\lambda(r)$ (resp. $\lambda(s)>\lambda(r)$) for $s<r<t$. Then Lemma \ref{lem:cr3.3a} shows that $\lambda(s)\leq\min\{\overline{g_{K_s}(K_t\setminus K_s)}\cap\mathbb{R}\}$ ($\geq\max$ resp.). As $\lambda(s)\in\overline{g_{K_s}(K_t\setminus K_s)}$, $\lambda(s)\in\partial\overline{g_{K_s}(K_t\setminus K_s)}$. Now, there exists $(w_n)_{n=1}^{\infty}\subset\mathbb{H}\setminus\overline{g_{K_s}(K_t\setminus K_s)}$ with $w_n\to\lambda(s)$. So, there exists a corresponding sequence $(z_n)_{n=1}^{\infty}\subset\mathbb{H}\setminus K_t$ so that $g_{K_s}(z_n)=w_n$. Furthermore, there is a subsequence of $(z_n)_{n=1}^{\infty}$ that converges to a point in $\overline{K_s}$ as there is at least one point in the impression of the prime end corresponding to $\lambda(s)$. This shows that $\overline{K_s}\cap\partial_{\mathbb{H}} K_t\not=\emptyset$.
\end{proof}
\begin{defn}
Let $\Omega\subseteq\mathbb{H}$ be a simply connected domain containing $\infty$. Let $P(\Omega)$ denote the set of prime ends of $\Omega$ and $\widehat{\Omega}:=\Omega\cup P(\Omega)$ denote the Carath\'eodory compactification of $\Omega$.
We can define a topology on $\widehat{\Omega}$ by making the following equivalent:
\begin{itemize}
\item $(z_j)_{j=1}^{\infty}\subseteq\Omega$ converges to $p\in P(\Omega)$
\item for every representative $(C_n)_{n=1}^{\infty}$ of $p$ and every $n$, there exists $J\in\mathbb{N}$ so that $(z_j)_{j=J}^{\infty}\subseteq\Omega_{C_n}$
\end{itemize}
\end{defn}
\noindent Under this topology, if $g:\Omega\to\mathbb{H}$ is conformal, then $g$ extends to a homeomorphism $\widehat{g}:\widehat{\Omega}\to\overline{\mathbb{H}}$. We can identify prime ends of $\Omega$ with boundary points of $\Omega$ as follows:
\begin{equation}
(z_j)_{j=1}^{\infty}\subseteq\Omega
\text{ with }
z_j\to z\in \partial\Omega
\text{ if and only if }
(z_j)_{j=1}^{\infty}\subseteq\Omega
\text{ with }
z_j\to p\in P(\Omega)
\end{equation}
\noindent If $z\in\partial\Omega$ and $p\in P(\Omega)$ are identified, we do not distinguish the point $z$ and the prime end $p$.
\noindent Since the identity map on $\mathbb{H}$ is conformal, $\overline{\mathbb{H}}$ and $\widehat{\mathbb{H}}$ are homeomorphic, and we can think of boundary points (i.e. real points) as prime ends and the other way around.
\begin{defn}
Let $K_t$ be a Loewner family driven by $\lambda$ with Loewner chain $g_t$. Let $p$ be a prime end of $\mathbb{H}\setminus K_t$.
We say that ``$p$ corresponds to $\lambda(t)$'' or ``$p$ is the (generalized) tip of $K_t$'' if $\widehat{g}_t(p)=\lambda(t)$.
\end{defn}
\noindent This gives us a family of prime ends $(p_t)_{t\in[0,T]}$ each corresponding to $\lambda(t)$ which generates $K_t$. More specifically, $\widehat{g}_t(p_t)=\lambda(t)$ where $g_t$ is the Loewner chain corresponding to $K_t$ and $\lambda$ is its driving function.
In the situation of a curve $\gamma$ with Loewner chain $g_t$, since $g_t(\gamma(t))=\lambda(t)$, the tip at time $t$, $\gamma(t)$, is the prime end corresponding to $\lambda(t)$. This is the reason that we use prime ends to generalize tips.
We now will revisit the definitions of $g_B^+(A)$ and $g_B^-(A)$ and relate them to prime ends. If $A\not\subseteq\text{int}(B)$,
\begin{equation}
g_B^+(A)=\sup\{g_B(p)\in\mathbb{R}:p\in P(\mathbb{H}\setminus A),I(p)\cap\overline{A}\not=\emptyset\}
\end{equation}
and
\begin{equation}
g_B^-(A)=\inf\{g_B(p)\in\mathbb{R}:p\in P(\mathbb{H}\setminus B),I(p)\cap\overline{A}\not=\emptyset\}.
\end{equation}
This follows from $g_B$ extending to $\widehat{\mathbb{H}\setminus B}$. Note that from now on, we will write $g_B$ for its extension $\widehat{g}_B$.
\subsection{Multiple Loewner Hulls}\label{multiloewnerhulls}
We now switch to the setting of our main result: multiple, disjoint Loewner families. Let $K$ and $L$ be disjoint hulls. There are many ways that $K\cup L$ can be mapped down to the real line. Two basic ways are mapping down one hull and then mapping down the image of the other hull, see Figure \ref{fig:maponethenother}. By uniqueness we have
\begin{equation}\label{eqn:mapcomposition}
g_{g_K(L)}\circ g_K=g_{K\cup L}=g_{g_L(K)}\circ g_L.
\end{equation}
This gives a significant amount of flexibility in our maps.
\begin{figure}
\centering
\begin{tikzpicture}
\draw (0,0) -- (6,0);
\draw[thick] (0.5,0)
to [out=90, in=270] (1,0.75)
to [out=90, in=270] (0.5,1.5)
to [out=90, in=180] (0.875,1.875)
to [out=0, in=90] (1.25,1.5)
to [out=270, in=180] (1.5,1.25)
to [out=0, in=270] (1.875,2)
to [out=90, in=90] (2.5,1.25)
to [out=270, in=90] (2.25,0.5)
to [out=270, in=135] (2.75,0);
\node[above] at (1.625,0) {$K_t$};
\draw[fill] (0.875,1.875) circle [radius=0.05];
\node[above] at (0.875,1.875) {$p_t$};
\draw[thick] (3.25,0)
to [out=90, in=270] (3.5,0.75)
to [out=90, in=270] (3.25,1.5)
to [out=90, in=180] (3.75,2)
to [out=0, in=90] (4.25,1.5)
to [out=270, in=180] (4.75,1)
to [out=0, in=90] (5.5,0);
\node[above] at (4.375,0) {$L$};
\draw (9,0) -- (15,0);
\draw[thick] (10.17,0)
to [out=90.242,in=257.702] (10.6135,0.611557)
to [out=84.752,in=264.509] (10.2432,1.30248)
to [out=84.3251,in=172.591] (10.6219,1.60237)
to [out=-7.88783,in=80.947] (10.9066,1.21776)
to [out=-99.053,in=169.8] (11.0813,0.968246)
to [out=-10.2,in=165.7926] (11.5572,1.54393)
to [out=-14.2074,in=63.8903] (11.8558,0.73348)
to [out=243.8903,in=77.6203] (11.5178,0.303857)
to [out=257.6203,in=83] (11.75,0);
\node[below] at (10.96,0) {$g_L(K_t)$};
\draw[fill] (10.6219,1.60237) circle [radius=0.05];
\node[above] at (10.6219,1.60237) {$g_L(p_t)$};
\draw (0,-4) -- (6,-4);
\draw[thick] (4.03045,-4)
to[out=93.3425,in=-63.638](4.03006,-3.64881)
to[out=116.362,in=-48.1169](3.55863,-3.32399)
to[out=131.883,in=202.392](3.73849,-2.64428)
to[out=382.392,in=106.857](4.33724,-2.92541)
to[out=286.857,in=189.0628](4.83518,-3.23488)
to[out=9.063,in=90.2936](5.5,-4);
\node[below] at (4.77,-4) {$g_{K_t}(L)$};
\draw (9,-4) -- (15,-4);
\draw[->] (6.5,1) to [out=0,in=180] (8.5,1);
\node[above] at (7.5,1) {$g_L$};
\draw[->] (6.5,-3) to [out=0,in=180] (8.5,-3);
\node[below] at (7.5,-3) {$g_{g_{K_t}(L)}$};
\draw[->] (3,-0.5) to [out=270,in=90] (3,-1.5);
\node[left] at (3,-1) {$g_{K_t}$};
\draw[->] (12,-0.5) to [out=270,in=90] (12,-1.5);
\node[right] at (12,-1) {$g_{g_L(K_t)}$};
\draw[->] (6.5,-0.5) to [out=333.43,in=153.47] (8.5,-1.5);
\node[above,rotate=333.43] at (7.5,-1) {$g_{K_t\cup L}$};
\draw[fill] (1.5,-4) circle [radius=0.05];
\node[below] at (1.5,-4) {$U(t)$};
\draw[fill] (10.5,-4) circle [radius=0.05];
\node[below] at (10.5,-4) {$\lambda(t)$};
\end{tikzpicture}
\caption{Mapping Down Hulls in Different Orders}
\label{fig:maponethenother}
\end{figure}
\noindent We now state a few preliminary results on what happens when another hull is added.
\begin{lemma}\label{lem:3.6.8c}
Let $K_t$ be a Loewner family and $L$ a hull disjoint from $K_T$. If $K_s\cap\partial_{\mathbb{H}} K_t\not=\emptyset$, then for $s\leq r\leq t$,
\begin{equation}
g_{K_t\cup L}^-(K_t\setminus K_s)
\leq g_{K_t\cup L}^-(K_t\setminus K_r)
\leq g_{K_t\cup L}^+(K_t\setminus K_r)
\leq g_{K_t\cup L}^+(K_t\setminus K_s)
\end{equation}
\end{lemma}
\begin{proof}
The middle inequality follows from the definitions of $g_{K_t\cup L}^-$ and $g_{K_t\cup L}^+$.
For the first inequality, let $(z_n)_{n=1}^{\infty}\subseteq\mathbb{H}\setminus(K_t\cup L)$ with $z_n\to z\in K_t\setminus K_r$ and $g_{K_t\cup L}(z_n)\to x\in\mathbb{R}$. Then as $K_s\subseteq K_r$, $z\in K_t\setminus K_s$. So, $g_{K_t\cup L}^-(K_t\setminus K_s)\leq x$. This holds for any such sequence, so the first inequality is proven.
The third inequality follows in the same manner.
\end{proof}
Let $K_t$ be a Loewner family driven by $U:[0,T]\to\mathbb{R}$ and $L$ be a hull disjoint from $K_T$. What happens to $U$ if we map down $L$ and then map down $g_L(K_t)$? What happens to $U$ if we do the opposite and map down $K_t$ then $L$? The answer is actually given using (\ref{eqn:mapcomposition}) and $g_{K_t}(p_t)=U(t)$ for the corresponding family of prime ends $p_t$. Observe:
\begin{equation}\label{eqn:timemapcomposition}
g_{g_{K_t}(L)}(U(t))=g_{g_{K_t}(L)}(g_{K_t}(p_t))=g_{g_{L}(K_t)}(g_L(p_t)).
\end{equation}
If we define $\lambda(t)=g_{g_{K_t}(L)}(U(t))$, then, as $g_L(p_t)$ is the (generalized) tip of $g_L(K_t)$, $\lambda$ drives $g_L(K_t)$. Moreover, by (\ref{eqn:timemapcomposition}), $\lambda(t)=g_{K_t\cup L}(p_t)$ (see Figure \ref{fig:maponethenother}). Since $p_t$ is the (generalized) tip of $K_t$ in the hull $K_t\cup L$, we get the usual relationship between tips and driving functions. This gives us a concrete way of defining the driving function in the multiple hull setting.
\begin{lemma}\label{lem:A}
Let $K_t$ be a Loewner family driven by $U:[0,T]\to\mathbb{R}$. Let $L$ be a hull disjoint from $K_T$. Let $\lambda(t)=g_{g_{K_t}(L)}(U(t))$. Fix $0\leq s<t\leq T$ so that $K_s\cap\partial_{\mathbb{H}} K_t\not=\emptyset$. Then for $s\leq r\leq t$
\begin{equation}
g_{K_t\cup L}^-(K_t\setminus K_s)\leq\lambda(r)\leq g_{K_t\cup L}^+(K_t\setminus K_s).
\end{equation}
\end{lemma}
\begin{proof}
Let $0\leq s< t\leq T$, $K_s\cap\partial_{\mathbb{H}} K_t\not=\emptyset$, and $A_r=g_{K_r\cup L}(K_t\setminus K_r)$ for $s\leq r\leq t$. Then $\lambda(r)\in\mathbb{R}\cap\overline{g_{K_r\cup L}(K_t\setminus K_r)}=\mathbb{R}\cap\overline{A_r}$. Since $g_{K_t\cup L}=g_{A_r}\circ g_{K_r\cup L}$, by Lemma \ref{lem:3.2.3}, $\lambda(r)\in\mathbb{R}\cap\overline{g_{K_t\cup L}(K_t\setminus K_r)}$. So, $g_{K_t\cup L}^-(K_t\setminus K_r)\leq\lambda(r)\leq g_{K_t\cup L}^+(K_t\setminus K_r)$ for $s\leq r\leq t$.
Let $s<r<t$. Then as $K_s\subset K_r$ and $K_s\cap\partial_{\mathbb{H}} K_t\not=\emptyset$, we have $K_r\cap\partial_{\mathbb{H}} K_t\not=\emptyset$. Using Lemma \ref{lem:3.6.8c},
\begin{equation}
g_{K_t\cup L}^-(K_t\setminus K_s)
\leq g_{K_t\cup L}^-(K_t\setminus K_r)
\leq \lambda(r)
\leq g_{K_t\cup L}^+(K_t\setminus K_r)
\leq g_{K_t\cup L}^+(K_t\setminus K_s)
\end{equation}
Lastly, let $r_n\uparrow t$ with $s\leq r_n$. Then for all $n\in\mathbb{N}$
\begin{equation}
g_{K_t\cup L}^-(K_t\setminus K_s)
\leq \lambda(r_n)
\leq g_{K_t\cup L}^+(K_t\setminus K_s)
\end{equation}
As $\lambda$ is continuous, the result holds for $t$.
\end{proof}
\begin{coro}\label{cor:C}
Let $K_t$ be a Loewner family driven by $U:[0,T]\to\mathbb{R}$. Let $L$ be a hull disjoint from $K_T$. Let $\lambda(t)=g_{g_{K_t}(L)}(U(t))$. If $|\lambda(t)-\lambda(s)|>|\lambda(t)-\lambda(r)|$ for $s<r<t$, then $K_s\cap\partial_{\mathbb{H}} K_t\not=\emptyset$.
\end{coro}
\begin{proof}
Since $L\cap K_T=\emptyset$, $g_L(K_t)$ is a Loewner family and furthermore is driven by $\lambda$. If $|\lambda(t)-\lambda(s)|>|\lambda(t)-\lambda(r)|$ for $s<r<t$, then clearly $\lambda(s)\not=\lambda(r)$ for $s<r<t$. Since $\lambda$ is continuous either $\lambda(s)>\lambda(r)$ for all $s<r<t$ or $\lambda(s)<\lambda(r)$ for all $s<r<t$. By Lemma \ref{lem:C}, $\partial_{\mathbb{H}} g_L(K_t)\cap g_L(K_s)\not=\emptyset$. By the disjointness of $K_T$ and $L$, $\partial_{\mathbb{H}} K_t\cap K_s\not=\emptyset$ as well.
\end{proof}
Whenever we use the families $K_t$ and $L_s$, we will assume that $K_T$ is on the left side of $L_S$. We note that the next lemma is a generalization of Lemma 3.5 from \cite{RS14}. The proof of part (a) uses the key ideas brought up in the corresponding proof in \cite{RS14}, but the proof of part (b) is fundamentally different.
\begin{lemma}\label{lem:3.6.8}
Let $(K_t)_{t\in[0,T]}$ and $(L_v)_{v\in[0,S]}$ be two disjoint Loewner families. Then, for any $t\in[0,T]$ and $s\in[0,S]$,
\begin{itemize}
\item[(a)] $g_{K_T\cup L_S}^-(K_T)\leq g_{K_t\cup L_s}^-(K_T)< g_{K_t\cup L_s}^+(L_S)\leq g_{K_T\cup L_S}^+(L_S)$
\item[(b)] $g_{K_t\cup L_s}^-(L_S)-g_{K_t\cup L_s}^+(K_T)\geq g_{K_T\cup L_S}^-(L_S)-g_{K_T\cup L_S}^+(K_T)$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof of (a)]
First, the middle inequality is immediate since $\overline{K_T}\cap \overline{L_S}=\emptyset$.
Second, we will prove the first inequality. Let $t\in[0,T]$ and $s\in[0,S]$. Define
\begin{equation}
A_1=\overline{g_{K_t\cup L_s}(K_T\setminus K_t)}
\text{ and }
A_2=\overline{g_{K_t\cup L_s}(L_S\setminus L_s)}.
\end{equation}
Then $A_1\cap\mathbb{H}$ and $A_2\cap\mathbb{H}$ are disjoint hulls. Let
$a=g_{K_t\cup L_s}^-(K_T\setminus K_t)$ and
$b=g_{K_t\cup L_s}^+(L_S\setminus L_s)$.
Since $K_T\setminus K_t\subseteq K_T$, $g_{K_t\cup L_s}^-(K_T)\leq a$.
Define $A=A_1\cup A_2$ which is a hull with $\overline{A}\cap\mathbb{R}\subseteq[a,b]$.
If $g_{K_t\cup L_s}^-(K_T)<a$, then by Lemma \ref{lem:3.2.3} (a),
\begin{equation}
g_{K_T\cup L_S}^-(K_T)=g_A(g_{K_t\cup L_s}^-(K_T))\leq g_{K_t\cup L_s}^-(K_T).
\end{equation}
If $g_{K_t\cup L_s}^-(K_T)=a$, then as $g_A\circ g_{K_t\cup L_s}=g_{K_T\cup L_S}$,
\begin{equation}
g_{K_T\cup L_S}^-(K_T)= g_A^-(g_{K_t\cup L_s}(K_T))\leq g_{K_t\cup L_s}^-(K_T).
\end{equation}
In both cases, $g_{K_T\cup L_S}^-(K_T)\leq g_{K_t\cup L_s}^-(K_T)$.
Lastly, the other inequality follows in the same manner.
\end{proof}
\begin{proof}[Proof of (b)]
Let $A=\overline{g_{K_t\cup L_s}((K_T\setminus K_t)\cup (L_S\setminus L_s))}$. Then $A\cap\mathbb{H}$ is a hull with
\begin{equation}
A\cap\mathbb{R}=[g_{K_t\cup L_s}^-(K_T\setminus K_t),g_{K_t\cup L_s}^+(K_T\setminus K_t)]\cup[g_{K_t\cup L_s}^-(L_S\setminus L_s),g_{K_t\cup L_s}^+(L_S\setminus L_s)]
\end{equation}
Let $(x_n)_{n=1}^{\infty},(y_n)_{n=1}^{\infty}\subset\mathbb{R}$ so that $x_n\downarrow g_{K_t\cup L_s}^+(K_T)$, $y_n\uparrow g_{K_t\cup L_s}^-(L_S)$, and
\begin{equation}
g_{K_t\cup L_s}^+(K_T)<x_n<\frac{g_{K_t\cup L_s}^+(K_T)+g_{K_t\cup L_s}^-(L_S)}{2}<y_n<g_{K_t\cup L_s}^-(L_S)
\end{equation}
Then for every $n$, $0<g_A(y_n)-g_A(x_n)\leq y_n-x_n$ by Lemma \ref{lem:3.2.3} (b) as $(x_n,y_n)\subseteq\mathbb{R}\setminus A$. Since $g_A\circ g_{K_t\cup L_s}=g_{K_T\cup L_S}$,
\begin{equation}
g_{K_t\cup L_s}^-(L_S)-g_{K_t\cup L_s}^+(K_T)
\geq g_A^-(g_{K_t\cup L_s}(L_S))-g_A^+(g_{K_t\cup L_s}(K_T))
=g_{K_T\cup L_S}^-(L_S)-g_{K_T\cup L_S}^+(K_T).
\end{equation}
\end{proof}
\noindent We will now generalize the notion of Loewner families to the multiple hull setting.
\begin{defn}
Let $K_1,...,K_n$ be disjoint Loewner hulls and $\text{hcap}(K_1\cup\cdots\cup K_n)=2T$. For $j=1,...,n$ let $K_t^j$ be an increasing family of hulls so that
\begin{itemize}
\item $t\mapsto\text{hcap}(K_t^j)$ is nondecreasing
\item $\text{hcap}(K_t^1\cup\cdots\cup K_t^n)=2t$ for $t\in[0,T]$
\item $K_T^j=K_j$
\end{itemize}
We call $K_t=(K_t^1,...,K_t^n)$ a Loewner parameterization for the hull $K_1\cup\cdots\cup K_n$.
\end{defn}
\section{Loewner Parameterization Precompactness}\label{precompactness}
The generalization of Theorem 1.1 in \cite{RS14}, Theorem \ref{thm:3.6.2} here, follows with almost the same proof due to prime ends generalizing tips so appropriately. In \cite{RS14} a few technical lemmas are shown, then Theorems 1.1 and 2.2 are proven. Since credit for the proofs goes to the authors of \cite{RS14}, we state without proof those results whose proofs generalize directly, and refer the reader to \cite{RS14}.
\begin{lemma}[3.2 \cite{RS14}]\label{lem:3.6.6}
Let $K_t$ be a Loewner family. Let $L$ be a hull disjoint from $K_T$. Then there exists a constant $c>0$ so that for all $0\leq s<t\leq T$
\begin{equation}
c\leq\frac{\text{hcap}(K_t\cup L)-\text{hcap}(K_s\cup L)}{t-s}
\end{equation}
\end{lemma}
\begin{lemma}[3.3 \cite{RS14}]\label{lem:3.6.7}
Let $(K_t)_{t\in[0,T_1]}$ and $(L_t)_{t\in[0,T_2]}$ be two disjoint Loewner families. Then there is a constant $c>0$ so that
\begin{equation}
c\leq\frac{\text{hcap}(K_{t_1}\cup L_{t_2})-\text{hcap}(K_{s_1}\cup L_{s_2})}{t_j-s_j}
\end{equation}
for all $0\leq s_j<t_j\leq T_j$ and $j=1,2$.
\end{lemma}
\begin{lemma}[3.6 \cite{RS14}]\label{lem:3.6.9}
Let $(K_t)_{t\in[0,T]}$ and $(L_v)_{v\in[0,S]}$ be two disjoint Loewner families. Then there exists a constant $M>0$ so that
\begin{equation}
|g_{K_t\cup L_u}(p)-g_{K_t\cup L_v}(p)|\leq M|v-u|
\end{equation}
for any $t\in[0,T]$ and $u,v\in[0,S]$, where $p$ is the (generalized) tip of $K_t$.
\end{lemma}
The proof of Lemma \ref{lem:3.6.9} from \cite{RS14} deals with images of base points of slits (specifically, $p_1$ and $p_2$). In particular, the proof looks at the real points that correspond to the prime ends $p_1$ and $p_2$. This is equivalent to mapping down both slits and looking at the corresponding line segments. In order to prove this lemma, we replace $p_1$ by $K_T$ and $p_2$ by $L_S$, which gives the analogue of mapping down both slits. The change from base points of a slit to entire hulls in the proof of Lemma \ref{lem:3.6.9} comes from the fact that for a slit, the two images of the base are the smallest and largest real points in the image of the mapped down slit, whereas with hulls, this corresponds to mapping down the entire hull.
\begin{lemma}[3.7 \cite{RS14}]\label{lem:3.6.10}
Let $K_t$ be a Loewner family driven by $U:[0,T]\to\mathbb{R}$. Let $L$ be a hull disjoint from $K_T$. Let $\lambda(t)=g_{g_{K_t}(L)}(U(t))$. Then there exists $\omega:[0,T]\to[0,\infty)$ increasing with $\lim_{\delta\downarrow 0}\omega(\delta)=\omega(0)=0$ such that
\begin{equation}\label{eqn:3610.3}
|g_{K_t\cup L}(p_t)-g_{K_s\cup L}(p_s)|\leq\omega(|t-s|)
\end{equation}
for $s,t\in[0,T]$, where $p_t$ and $p_s$ are the prime ends corresponding to $\lambda(t)$ and $\lambda(s)$ respectively.
\end{lemma}
The proof of (\ref{eqn:3610.3}) in the setting of hulls requires more background work than in the setting of slits. The majority of the results in Section \ref{multiloewnerhulls} are used to show that hulls grow similarly to slits. It is this subtle difference in growth that requires a different proof of (\ref{eqn:3610.3}) than in \cite{RS14}. However, the proof that $\omega(\delta)\to0$ as $\delta\to 0$ is exactly the same as in \cite{RS14}, so we refer the reader there for the proof.
\begin{proof}
Let $\omega:[0,T]\to[0,\infty)$ be defined by $\omega(0)=0$ and
\begin{equation}
\omega(\delta)=\sup\{g_{K_t}^+(K_t\setminus K_s)-g_{K_t}^-(K_t\setminus K_s):0\leq s<t\leq T,t-s\leq\delta\}
\end{equation}
Clearly, $\omega(\delta)$ is increasing.
Next, we will prove the inequality in (\ref{eqn:3610.3}). Let $0\leq s'<t\leq T$ and $\delta'=t-s'$. Lemma \ref{lem:B} and Corollary \ref{cor:C} show that there exists $s'\leq s<t$ with $K_s\cap\partial_{\mathbb{H}} K_t\not=\emptyset$ and
\begin{equation}\label{eqn:3610.1}
|g_{K_t\cup L}(p_t)-g_{K_{s'}\cup L}(p_{s'})|
=|\lambda(t)-\lambda(s')|
\leq|\lambda(t)-\lambda(s)|
=|g_{K_t\cup L}(p_t)-g_{K_s\cup L}(p_s)|
\end{equation}
Let $\delta=t-s\leq\delta'$, so $\omega(\delta)\leq\omega(\delta')$. Since $K_s\cap\partial_{\mathbb{H}} K_t\not=\emptyset$, by Lemma \ref{lem:A} we have for $r\in[s,t]$
\begin{equation}
g_{K_t\cup L}^-(K_t\setminus K_s)\leq\lambda(r)\leq g_{K_t\cup L}^+(K_t\setminus K_s).
\end{equation}
So,
\begin{equation}
|g_{K_t\cup L}(p_t)-g_{K_s\cup L}(p_s)|
=|\lambda(t)-\lambda(s)|
\leq g_{K_t\cup L}^+(K_t\setminus K_s)-g_{K_t\cup L}^-(K_t\setminus K_s).
\end{equation}
Since $g_{K_t\cup L}=g_{g_{K_t}(L)}\circ g_{K_t}$, Lemma \ref{lem:3.2.3} (b) shows
\begin{equation}\label{eqn:3610.2}
g_{K_t\cup L}^+(K_t\setminus K_s)-g_{K_t\cup L}^-(K_t\setminus K_s)
\leq g_{K_t}^+(K_t\setminus K_s)-g_{K_t}^-(K_t\setminus K_s)
\leq\omega(\delta).
\end{equation}
Combining (\ref{eqn:3610.1}), (\ref{eqn:3610.2}), and $\omega(\delta)\leq\omega(\delta')$ gives the result.
\end{proof}
\begin{lemma}[3.8 \cite{RS14}]\label{lem:3.6.11}
Let $(K_t)_{t\in[0,T]}$ and $(L_v)_{v\in[0,S]}$ be two disjoint Loewner families. Then there exist constants $c,M>0$ and $\omega:[0,T]\to[0,\infty)$ increasing with $\lim_{\delta\downarrow 0}\omega(\delta)=\omega(0)=0$ such that
\begin{align}
|g_{K_t\cup L_v}(p_t)-g_{K_s\cup L_u}(p_s)|
&\leq\omega\left(\frac{1}{c}|\text{hcap}(K_t\cup L_v)-\text{hcap}(K_s\cup L_u)|\right)\\
&\phantom{1}\qquad+\frac{M}{c}|\text{hcap}(K_t\cup L_v)-\text{hcap}(K_s\cup L_u)|
\end{align}
for all $s,t\in[0,T]$ and $u,v\in[0,S]$, where $p_t$ and $p_s$ are the prime ends corresponding to $\lambda(t)$ and $\lambda(s)$, respectively.
\end{lemma}
\begin{thm}[2.2 \cite{RS14}]\label{thm:3.6.5}
Let $A$ be a multi-Loewner hull with $\text{hcap}(A)=2T$. For any Loewner parameterization $K_t=(K_t^1,K_t^2)$ of $A$, let $\lambda^j_K$ be the driving function of $K_t^j$ for $j=1,2$. Then the sets
\begin{equation}
\{\lambda_K^j:[0,T]\to\mathbb{R}\mid K\text{ a Loewner parameterization of }A\}
\end{equation}
are precompact subsets of the Banach space $C([0,T],\mathbb{R})$ for $j=1,2$.
\end{thm}
The first step in proving this theorem in \cite{RS14} is to get a uniform bound (in time) on $\lambda_K^j(t)$ for $j=1,2$. This bound, in our case, is
\begin{equation}
g_{A}^-(A)
= g_T^-(A)
\leq\lambda_K^j(t)
\leq g_T^+(A)
= g_{A}^+(A).
\end{equation}
The rest of the proof in \cite{RS14} generalizes.
\begin{thm}[1.1 \cite{RS14}]\label{thm:3.6.2}
Let $K^1,...,K^n$ be disjoint Loewner hulls. Let $\text{hcap}(K^1\cup\cdots\cup K^n)=2T$. Then there exist constants $w_1,...,w_n\in(0,1)$ with $\sum_{k=1}^{n}w_k=1$ and continuous driving functions $\lambda_1,...,\lambda_n:[0,T]\to\mathbb{R}$ so that
\begin{equation}
\partial_t g_t(z)=\sum_{k=1}^{n}\frac{2w_k}{g_t(z)-\lambda_k(t)},\quad g_0(z)=z
\end{equation}
satisfies $g_T=g_{K^1\cup\cdots\cup K^n}$.
\end{thm}
\noindent The proof of this theorem is the proof in \cite{RS14}, but we include it so that the reader can see where the previously proven lemmas are used.
\begin{proof}
We give the proof for two hulls; the general case is analogous. Let $K^1,K^2$ be disjoint Loewner hulls, $\text{hcap}(K^1\cup K^2)=2$, and $c_j=\frac{1}{2}\text{hcap}(K^j)$.
Define $\alpha_{n,w}:[0,1]\to\{0,1\}$ for $(n,w)\in\mathbb{N}\times[0,1]$ as follows:
\begin{equation}
\alpha_{n,w}(t)=\left\{
\begin{array}{rr}
1\qquad&t\in(\frac{k}{2^n},\frac{k+w}{2^n})\\
0\qquad&t\in(\frac{k+w}{2^n},\frac{k+1}{2^n})
\end{array}\right.
\end{equation}
for $k\in\{0,...,2^n-1\}$. Let
\begin{equation}
\partial_t g_{t,n}(z)
=\frac{2\alpha_{n,w}(t)}{g_{t,n}(z)-\lambda_{1,n}(t)}
+\frac{2(1-\alpha_{n,w}(t))}{g_{t,n}(z)-\lambda_{2,n}(t)},\qquad
g_{0,n}(z)=z.
\end{equation}
By the construction of $\alpha_{n,w}$, only one hull grows at a time. So, the Loewner equation (with a single driving function) gives that $\lambda_{1,n}(t)$ is defined on $\bigcup_{k=0}^{2^n-1}(\frac{k}{2^n},\frac{k+w}{2^n})$ (similarly for $\lambda_{2,n}(t)$). The disjointness of the hulls allows us to extend $\lambda_{j,n}$ to the rest of $[0,1]$ by taking the image of $\lambda_{j,n}$ under the map corresponding to the growth of the other hull. So, $\lambda_{1,n}$ and $\lambda_{2,n}$ are continuous on $[0,1]$. For $t\in[0,1]$ the hull at time $t$ is
\begin{equation}
H_{n,w,t}=K^1_{x_{n,w,t}}\cup K^2_{y_{n,w,t}}
\end{equation}
where $x_{n,w,t}\in[0,1]$ depends continuously on $w$. For all $n\in\mathbb{N}$, $x_{n,0,1}=0$ and $x_{n,1,1}=1$ (as $w=0$ and $w=1$ correspond to single hull growth of $K^2$ and $K^1$ respectively). By the Intermediate Value Theorem, for each $n\in\mathbb{N}$ there exists $w_n$ so that $x_{n,w_n,1}=c_1$. By Lemma \ref{lem:3.2.2} (b), $y_{n,w_n,1}=c_2$. So, $H_{n,w_n,1}=K^1\cup K^2$. This means that $\alpha_{n,w_n}$ is a sequence of weights and the $\lambda_{j,n}$ are sequences of continuous driving functions generating $K^1\cup K^2$.
By Theorem \ref{thm:3.6.5}, there is a subsequence of $\lambda_{1,n}$ converging to a function $\lambda_1$. Using Theorem \ref{thm:3.6.5} again on the corresponding subsequence of $\lambda_{2,n}$ we get that there is a further subsequence converging to a function $\lambda_2$. Furthermore, the corresponding subsequence of $w_n$ has a convergent subsequence converging to $w\in[0,1]$. We will now reindex this sequence by $n\in\mathbb{N}$.
Let
\begin{equation}
\partial_t g_t(z)
=\frac{2w}{g_t(z)-\lambda_1(t)}
+\frac{2(1-w)}{g_t(z)-\lambda_2(t)},\qquad
g_0(z)=z.
\end{equation}
Then it is easy to see that $\alpha_{n,w_n}$ converges weakly to $w$ in $L^1([0,1])$ (similar to Lemma \ref{lem:onehalfweights}). Now, by Theorem \ref{thm:rs2.4rrodf}, we have the result.
\end{proof}
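As the abstract notes, the proof yields a simulation method for hulls generated by the multiple Loewner equation. A rough numerical sketch (with constant driving functions $\lambda_1\equiv -1$, $\lambda_2\equiv 1$ and equal weights chosen purely for illustration, not taken from the paper): solve the weighted equation by an Euler scheme and check the normalization $g_T(z)\approx z+2T/z$ far from the hull, reflecting $\text{hcap}=2T$.

```python
def multi_loewner(z, drivers, weights, T, steps=10000):
    # Euler scheme for  d/dt g_t(z) = sum_k 2*w_k / (g_t(z) - lambda_k(t)),  g_0(z) = z
    dt = T / steps
    g = z
    for i in range(steps):
        t = i * dt
        g += dt * sum(2 * w / (g - lam(t)) for lam, w in zip(drivers, weights))
    return g

# two constant drivers with equal weights (an illustrative choice, not from the paper)
drivers = [lambda t: -1.0, lambda t: 1.0]
weights = [0.5, 0.5]
T = 0.1
gT = multi_loewner(100j, drivers, weights, T)
# far from the hull, g_T(z) ~ z + 2T/z, since the hull at time T has hcap 2T
```

With a single driver $\lambda\equiv 0$ and weight $1$, the same scheme reproduces the explicit vertical-slit map $\sqrt{z^2+4T}$ to Euler accuracy, which serves as a sanity check.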
\bibliographystyle{alpha}
| {
"timestamp": "2017-10-26T02:09:02",
"yymm": "1710",
"arxiv_id": "1710.09301",
"language": "en",
"url": "https://arxiv.org/abs/1710.09301",
"abstract": "Kager, Nienhuis, and Kadanoff conjectured that the hull generated from the Loewner equation driven by two constant functions with constant weights could be generated by a single rapidly and randomly oscillating function. We prove their conjecture and generalize to multiple continuous driving functions. In the process, we generalize to multiple hulls a result of Roth and Schleissinger that says multiple slits can be generated by constant weight functions. The proof gives a simulation method for hulls generated by the multiple Loewner equation.",
"subjects": "Complex Variables (math.CV)",
"title": "The Loewner Equation for Multiple Hulls",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138190064204,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7087156918161996
} |
https://arxiv.org/abs/math/0612552 | Isomorphisms between Leavitt algebras and their matrix rings | Let $K$ be any field, let $L_n$ denote the Leavitt algebra of type $(1,n-1)$ having coefficients in $K$, and let ${\rm M}_d(L_n)$ denote the ring of $d \times d$ matrices over $L_n$. In our main result, we show that ${\rm M}_d(L_n) \cong L_n$ if and only if $d$ and $n-1$ are coprime. We use this isomorphism to answer a question posed in \cite{PS} regarding isomorphisms between various C*-algebras. Furthermore, our result demonstrates that data about the $K_0$ structure is sufficient to distinguish up to isomorphism the algebras in an important class of purely infinite simple $K$-algebras. | \section*{Introduction}
Let $K$ be any field, and let $m<n$ be positive integers. The ring
$R$ is said to have {\it invariant basis number} (IBN) if no two
free left $R$-modules of differing rank over $R$ are isomorphic.
On the other hand, $R$ is said to have {\it module type} $(m,n-m)$
in case for every pair of positive integers $a$ and $b$, (1) if
$1\leq a<m$ then the free left $R$-modules $R^a$ and $R^i$ are
not isomorphic for all positive integers $i\neq a$, and (2) if
$a,b \geq m$, then the free left $R$-modules $R^a$ and $R^b$ are
isomorphic precisely when $a\equiv b$ (mod $n-m$). It is not hard
to show that any non-IBN ring has module type $(m,n-m)$ for some
pair of positive integers $m<n$. (The notation used here is not
completely universal: some authors refer to the module type of
such an algebra as the pair $(m,n)$. Our notation is consistent
with that used in many of the algebra articles on this topic, and
is also consistent with the C$^*$-algebra usage as well.) As
shown by Leavitt in \cite{L1}, for every such pair $m,n$ there
exists a $K$-algebra $L_K(m,n)$ whose module type is $(m,n-m)$.
In particular, the module type of $L_K(1,n)$ is $(1,n-1)$. We
denote $L_K(1,n)$ by $L_n$. Various aspects of these algebras
have been investigated, with an initial flurry of activity in the
1960's and early 1970's (e.g. \cite{B}, \cite{Co}, and \cite{L2}),
and then again in a revival beginning at the start of the new
millennium (e.g. \cite{A}, \cite{AAn1}, and \cite{AGP}).
On the ``analytic" side of the coin, Cuntz \cite{Cu} in the 1970s investigated the C$^*$-algebras $\{\mathcal{O}_n \mid
2\leq n\in \mathbb{N}\}$. There is an intimate connection between the Leavitt algebra $L_K(1,n)$ and the Cuntz algebra
$\mathcal{O}_{n}$. Specifically, for any field $K$, the elements of $L_K(1,n)$ can be viewed as linear transformations
on an infinite dimensional $K$-vector space in a natural way as a collection of shift operators. In particular, when
$K$ is the field of complex numbers, then $L_K(1,n)$ can be viewed as acting on Hilbert space $\ell ^2$, and thereby
inherits the operator norm. The Cuntz algebra $\mathcal{O}_{n}$ is the completion of $L_{\mathbb{C}}(1,n)$ in the
metric induced by this norm.
Since $L_n \cong L_n^n$ as free left $L_n$-modules, by taking endomorphism rings we get immediately that there is a
ring isomorphism between $L_n$ and ${\rm M}_n(L_n)$. The first two authors extended this type of isomorphism to
additional matrix sizes in \cite{AAn1}, where they observe that $L_n \cong {\rm M}_d(L_n)$ whenever $d$ divides
$n^{\alpha}$ for some positive integer $\alpha$. In \cite{L1} Leavitt shows that for ${\rm gcd}(d,n-1)>1$, the
$K$-algebras $L_n$ and ${\rm M}_d(L_n)$ cannot be isomorphic. Since $d|n^{\alpha}$ implies ${\rm gcd}(d,n-1)=1$, these
two results yield the following natural question, posed in \cite{AAn1}, page 362:
\begin{center}
For ${\rm gcd}(d,n-1)=1$, are $L_{n}$ and ${\rm M}_d(L_{n})$ isomorphic?
\end{center}
In our main result, Theorem \ref{Main}, we answer this question in
the affirmative for all fields $K$.
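The compatibility of the two earlier results is elementary arithmetic: any prime dividing $d\mid n^{\alpha}$ also divides $n$, hence cannot divide $n-1$. A quick exhaustive check (illustrative only; the function name is ours):

```python
from math import gcd

def coprime_check(n, alpha):
    # every divisor d of n**alpha satisfies gcd(d, n-1) = 1: each prime factor of d
    # divides n, and the consecutive integers n-1 and n share no prime factor
    N = n ** alpha
    return all(gcd(d, n - 1) == 1 for d in range(1, N + 1) if N % d == 0)
```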
Theorem \ref{Main} has important consequences in the context of C$^*$-algebras. First, we show in Section
\ref{applications} that this result can be used to {\it directly} answer in the affirmative the following question,
posed in \cite{PS}, page 8:
\begin{center}
Are $ {\rm M}_m(\mathcal{O}_{n})$ and $\mathcal{O}_{n}$ isomorphic
whenever $m$ and $n-1$ are relatively prime?
\end{center}
While an affirmative answer to this question was provided for even $n$ in \cite{Ro}, Corollary 7.3, and subsequently
shown for all $n\geq 2$ as a consequence of \cite{Ph1}, Theorem 4.3(1), the method we provide here is significantly
more elementary. Indeed, the second important consequence of our result is that, unlike the current situation in the
C$^*$-algebra case, the isomorphisms we present between the indicated $K$-algebras are in fact explicitly given.
Moreover, when $K=\mathbb{C}$, this explicit description carries over to an explicit description of the isomorphisms
between the appropriately sized matrix rings over Cuntz algebras.
Finally, our result demonstrates that data about the $K_0$ structure is sufficient to distinguish up to isomorphism the
algebras in an important class of purely infinite simple $K$-algebras, thus paving a path for subsequent work by the
authors \cite{AAP} towards an algebraic version of \cite{P}, Theorem 4.2.4.
The authors thank the referee for an extremely careful review of this article.
\section{Notation and basic concepts}\label{basics}
We begin by explicitly defining the Leavitt algebras $L_{K}(1,n)$.
For any positive integer $n\geq 2$, and field $K$, we denote
$L_K(1,n)$ by $L_{K,n}$, and call it the {\it Leavitt algebra of
type $(1,n-1)$ with coefficients in $K$}. (When $K$ is understood,
we denote this algebra simply by $L_n$.) Precisely, $L_{K,n}$ is
the quotient of the free associative $K$-algebra in $2n$
variables:
$$L_{K,n}=K\langle X_1,...,X_n,Y_1,...,Y_n\rangle/T,$$
where $T$ is the ideal generated by the relations $ X_iY_j - \delta_{ij}1_K$ (for $1\leq i,j \leq n$) and
$\sum_{j=1}^{n} Y_jX_j - 1_K$. The images of $X_i,Y_i$ in $L_{K,n}$ are denoted respectively by $x_i,y_i$. In
particular, we have the equalities $x_iy_j = \delta_{ij}1_K$ and $\sum_{j=1}^{n} y_jx_j = 1_K$ in $L_n$. The algebra
$L_n$ was investigated originally by Leavitt in his seminal paper \cite{L1}. We now list various fundamental
properties of $L_n$, culminating in the property which will serve as the focus of our investigation.
\begin{proposition}\label{Basicprops} Let $K$ be any field.
\begin{enumerate}
\item \cite{L1}, Theorem 8: $L_n$ has module type $(1,n-1)$. In particular, if $a\equiv b$ (mod $n-1$) then $L_n^a \cong
L_n^b$ as free left $L_n$-modules. Consequently, if $a\equiv b$ (mod $n-1$), then there is an isomorphism of matrix
rings ${\rm M}_a(L_n)\cong {\rm M}_b(L_n)$.
\item Suppose $R$ is a $K$-algebra which contains a subset
$\{a_1,...,a_n,b_1,...,b_n\}$ for which $ a_ib_j = \delta_{ij}1_R$ (for $1\leq i,j \leq n)$, and $\sum_{j=1}^{n} b_ja_j
= 1_R$. (For instance, any $K$-algebra having module type $(1,n-1)$ has this property.) Then there exists a (unital)
$K$-algebra homomorphism from $L_n$ to $R$ extending the map $x_i\mapsto a_i$ and $y_i\mapsto b_i$ (for $1\leq i \leq
n$).
\item \cite{L2}, Theorem 2: $L_{n}$ is a simple $K$-algebra.
\end{enumerate}
\end{proposition}
\begin{corollary}\label{method}
Let $I$ denote the identity matrix in ${\rm M}_d(L_n)$. To show
$L_n\cong {\rm M}_d(L_n)$ it suffices to show that there is a set
$S = \{a_1,...,a_n,b_1,...,b_n\} \subseteq {\rm M}_d(L_n)$ such
that: $ a_ib_j = \delta_{ij}I$ (for $1\leq i,j \leq n$);
$\sum_{j=1}^{n} b_ja_j = I$; and $S$ generates ${\rm M}_d(L_n)$ as
a $K$-algebra.
\end{corollary}
\begin{proof} The existence of a nontrivial $K$-algebra homomorphism
from $L_n$ to ${\rm M}_d(L_n)$ follows from Proposition \ref{Basicprops}(2), while the injectivity of such a
homomorphism follows from Proposition \ref{Basicprops}(3). Since $\{x_1,...,x_n,y_1,...,y_n\}$ generates $L_n$ as a
$K$-algebra, the image of this homomorphism is generated by $\{a_1,...,a_n,b_1,...,b_n\} \subseteq {\rm M}_d(L_n)$.
\end{proof}
For any unital ring $R$ and $i\in\{1,2,...,d\}$ we denote the
idempotent $e_{i,i}$ of the matrix ring ${\rm M}_d(R)$ simply by
$e_i$, and we define
$$E_i = \sum_{j=1}^i e_j.$$
In this notation $E_d=I$, the identity matrix in ${\rm M}_d(R)$.
\begin{definition}\label{involutiondef} {\rm For any field $K$, the extension of the assignments $x_i \mapsto y_i=x_i^*$ and $y_i \mapsto x_i=y_i^*$
for $1\leq i \leq n$ yields an involution $*$ on $L_K(1,n)$. This involution on $L_K(1,n)$ produces an involution
on any sized matrix ring ${\rm M}_m(L_K(1,n))$ over $L_K(1,n)$ by setting $X^* =
(x_{j,i}^*)$ for each $X=(x_{i,j})\in {\rm M}_m(L_K(1,n))$.}
\end{definition}
We note that if $K$ is a field with involution (which we also denote by $*$), then a second involution on $L_K(1,n)$
may be defined
by extending the assignments $k\mapsto k^*$ for all $k\in K$, $x_i \mapsto y_i=x_i^*$ and $y_i \mapsto
x_i=y_i^*$ for $1\leq i\leq n$. Of course in the case $K=\mathbb{C}$ we have such an involution on $K$. Although it might be
of interest to consider this second type of involution on $L_{\mathbb{C}}(1,n)$ in order to maintain some natural connection
with the standard involution on the corresponding Cuntz algebra $\mathcal{O}_{n}$, we prefer to work with the
involution on $L_K(1,n)$ described in Definition \ref{involutiondef} because it can be defined for any field $K$. All
of the results presented in this article for involutions on $L_K(1,n)$ and their matrix rings are valid using either
type of involution.
\medskip
{\bf We now set some notation which will be used throughout the remainder of the article.} For positive integers $d$
and $n$ we write
$$n = qd + r \mbox{ where } 1\leq r \leq d.$$
We assume throughout that ${\rm gcd}(d,n-1)=1$, and that $d < n$. (We will relax the hypothesis $d<n$ in our main
result.) Without loss of generality we will also assume that $r\geq 2$, since $r=1$ would yield $n-1 = qd$, which along
with the hypothesis that ${\rm gcd}(d,n-1)=1$ would yield $d=1$, and the main result in this case is then the trivial
statement $L_n\cong {\rm M}_1(L_n)$. An important role will be played by the number $s$, defined as
$$s = d - (r-1).$$
Since ${\rm gcd}(d,n-1)=1$ we get also that ${\rm gcd}(s,d)=1$, because $n-1=qd+(r-1)\equiv -s \ ({\rm mod} \ d)$.
\begin{definition}\label{hsequence}
{\rm We consider the sequence $\{h_i\}_{i=1}^{d}$ of integers, whose $i^{th}$ entry is given by
$$h_i = 1 + (i-1)s \ ({\rm mod} \ d).$$
The integers $h_i$ are understood to be taken from the set $\{1,2,...,d\}$. Rephrased, we define the sequence
$\{h_i\}_{i=1}^{d}$ by setting $h_1=1,$ and, for $1\leq i \leq d-1$,
$$h_{i+1} = h_i+s \mbox{ if }h_i\leq r-1, \ \ \mbox{ and } \ \ h_{i+1}=h_i-(r-1)
\mbox{ if } h_i \geq r.$$}
\end{definition}
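Since everything that follows is driven by this sequence, a small computational illustration may help. The following Python sketch (purely illustrative, not part of the formal development; the helper name is ours) computes $\{h_i\}_{i=1}^{d}$ both by the closed form and by the recursion, and checks that the two descriptions agree.

```python
def h_sequence(d, n):
    """The sequence h_1, ..., h_d of Definition 'hsequence' (illustrative helper)."""
    q, r = divmod(n, d)
    if r == 0:                    # write n = qd + r with 1 <= r <= d
        q, r = q - 1, d
    s = d - (r - 1)
    # Closed form: h_i = 1 + (i-1)s (mod d), with representatives taken in {1, ..., d}.
    closed = [(1 + (i - 1) * s - 1) % d + 1 for i in range(1, d + 1)]
    # Recursive form: h_1 = 1; add s if h_i <= r-1, subtract r-1 if h_i >= r.
    rec = [1]
    for _ in range(d - 1):
        rec.append(rec[-1] + s if rec[-1] <= r - 1 else rec[-1] - (r - 1))
    assert closed == rec
    return closed
```

For $d=13$, $n=35$ this returns $1,6,11,3,8,13,5,10,2,7,12,4,9$: the entries are distinct, exhaust $\{1,\dots,13\}$, and end with $r=9$.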
\smallskip
Because ${\rm gcd}(d,s)=1$ (so that $s$ is invertible ${\rm mod} \ d$), basic number theory yields the following
\begin{lemma}\label{sequence}
$\mbox{ }$
\begin{enumerate}
\item The entries in the sequence $h_1,h_2,...,h_d$ are distinct.
\item The set of entries $\{h_1,h_2,...,h_d\}$ equals the set $\{1,2,...,d\}$ (in some order).
\item The final entry in the sequence is $r$; that is, $h_d=r$.
\end{enumerate}
\end{lemma}
\begin{proof} The only non-standard statement is (3). Suppose $r =
1+(i-1)s \ ({\rm mod} \ d)$. Then $r-1 = (i-1)s \ ({\rm mod} \ d)$, so that $(r-1)+s = is \ ({\rm mod} \ d)$. But $d
= (r-1)+s$ by definition, so this gives $d = is \ ({\rm mod} \ d)$. Now ${\rm gcd}(s,d)=1$ gives that $i = d$, so that
$i-1=d-1$ and we get $r = 1 + (d-1)s \ ({\rm mod} \ d) = h_d$ as desired.
\end{proof}
Our interest will lie in a decomposition of $\{1,2,...,d\}$ effected by the sequence $h_1,h_2,...,h_d$, as follows.
\begin{definition}\label{d1e1f1} {\rm We let $d_1$ denote the integer for which
$$h_{d_1} = r-1$$
in the previously defined sequence. Such an integer $d_1$ exists by Lemma \ref{sequence}(2). Note then that
$h_{d_1+1}= (r-1)+s = d.$ We denote by $\hat{S_1}$ the following subset of $\{1,2,...,d\}$:
$$\hat{S_1}= \{h_i | 1\leq i \leq d_1\}.$$
We denote by $\hat{S_2}$ the complement of $\hat{S_1}$ in $\{1,2,...,d\}$; in other words, $\hat{S_2} = \{h_i |
d_1+1\leq i \leq d\}$. If we define $d_2=d-d_1$, then
$$d_1 = |\hat{S_1}|, d_2 = |\hat{S_2}|, \mbox{ and } d_1+d_2 = d.$$
Let $e_1 = |\hat{S_1} \cap \{r-1, r, r+1, ..., d\}|$. So $e_1$ is the number of elements in $\hat{S_1}$ which are at
least $r-1$. Similarly, let $e_2 = |\hat{S_2} \cap \{r-1, r, r+1, ..., d\}|$. (Note by definition of $\hat{S_1}$ and
Lemma \ref{sequence}(3) we have $1,r-1 \in \hat{S_1}$ and $r,d\in \hat{S_2}$.) So we get
$$e_1+e_2=|\{r-1, r, ..., d\}| = d-(r-1)+1 = d-r+2.$$
Let $f_1 = |\hat{S_1} \cap \{1,2,...,r-1, r\}|$. So $f_1$ is the number of elements in $\hat{S_1}$ which are at most
$r$. Similarly, let $f_2 = |\hat{S_2} \cap \{1,2,...,r\}|$. We get
$$f_1 + f_2 = r.$$ Finally, by definition we have}
$$e_1+f_1 = d_1+1 \mbox{ and } e_2 + f_2 = d_2+1.$$
\end{definition}
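The quantities of Definition \ref{d1e1f1} can likewise be computed directly from the sequence, and the identities $e_1+e_2=d-r+2$, $f_1+f_2=r$, and $e_k+f_k=d_k+1$ checked numerically. An illustrative sketch (helper name ours):

```python
from math import gcd

def partition_data(d, n):
    """d1, S1hat, S2hat, e1, e2, f1, f2 of Definition 'd1e1f1' (illustrative)."""
    q, r = divmod(n, d)
    if r == 0:
        q, r = q - 1, d
    s = d - (r - 1)
    h = [1]
    for _ in range(d - 1):
        h.append(h[-1] + s if h[-1] <= r - 1 else h[-1] - (r - 1))
    d1 = h.index(r - 1) + 1                 # h_{d1} = r - 1
    S1hat, S2hat = set(h[:d1]), set(h[d1:])
    e1 = len(S1hat & set(range(r - 1, d + 1)))
    e2 = len(S2hat & set(range(r - 1, d + 1)))
    f1 = len(S1hat & set(range(1, r + 1)))
    f2 = len(S2hat & set(range(1, r + 1)))
    assert e1 + e2 == d - r + 2 and f1 + f2 == r
    assert e1 + f1 == d1 + 1 and e2 + f2 == (d - d1) + 1
    return d1, S1hat, S2hat, e1, e2, f1, f2
```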
\begin{proposition}\label{bandt}
Write $h_{d_1}=r-1= 1 + (d_1-1)s \ ({\rm mod} \ d)$. So there exists a nonnegative integer $t$ with $r-1= 1 +
(d_1-1)s -td$, so that
$$r-1= 1 + (d_1-1)s -t[s+(r-1)]=1 + (d_1-1-t)s -t(r-1).$$
Let $b$ denote $d_1-1-t$. So we have
$$r-1 = 1 + bs - t(r-1).$$
(In particular, we also have $(1+t)(r-1) = 1+bs$.) Then $e_1 = t+1$, $d_1 = 1+b+t$, and $f_1 = 1+b$.
\end{proposition}
\begin{proof} By definition, each element of the sequence $\{ h_i\}^d_{i=1}$ is the
remainder of $1+(i-1)s$ modulo $d$. Now, we will show by induction on $i$ that $h_i=1+(i-1)s-l_id$ where $l_i$ is the
number of $h_j$ for which $h_j\geq r$ and $j<i$.
For $i=1$, $h_1=1=1+(1-1)s-0\cdot d$, since $l_1=0$ (there is no $j<1$). Now, suppose that the result holds for $i\geq 1$. If
$h_i\geq r$, $l_{i+1}=l_i+1$ by definition. Also, the computation gives us
$$h_{i+1}=h_i-(r-1)=h_i+s-d=1+(i-1)s-l_id+s-d=1+is-(l_i+1)d=1+is-l_{i+1}d.$$
On the other hand, if $h_i\leq r-1$, then $l_{i+1}=l_i$ by definition. Also, the computation gives us
$$h_{i+1}=h_i+s=1+(i-1)s-l_id+s=1+is-l_id=1+is-l_{i+1}d.$$
This completes the induction.
Now, for $i=d_1$, denote $l_{d_1}$ by $t$. The previous assertion shows that
$$r-1=1+(d_1-1)s-td,$$
where $t$ is the number of $h_j$ for which $h_j\geq r$ and $j<d_1$. Since $\hat{S_1}=\{ h_i\mid 1\leq i\leq d_1\}$, we
have
$$t=\vert (\hat{S_1}\setminus \{ r-1\})\cap \{ r-1, r, \dots ,d\}\vert,$$ so that $e_1=1+t$.
By definition of $b$, $d_1=1+b+t$. But $f_1=d_1+1-e_1$, so we are done.
\end{proof}
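Proposition \ref{bandt} can also be confirmed by machine over a range of parameters. The sketch below (illustrative; names ours) recovers $t$ and $b$ from $r-1 = 1 + (d_1-1)s - td$ and checks the stated conclusions.

```python
from math import gcd

def check_bandt(d, n):
    """Numerical check of Proposition 'bandt' (illustrative)."""
    q, r = divmod(n, d)
    if r == 0:
        q, r = q - 1, d
    s = d - (r - 1)
    h = [1]
    for _ in range(d - 1):
        h.append(h[-1] + s if h[-1] <= r - 1 else h[-1] - (r - 1))
    d1 = h.index(r - 1) + 1
    e1 = sum(1 for x in h[:d1] if x >= r - 1)
    f1 = sum(1 for x in h[:d1] if x <= r)
    # r - 1 = 1 + (d1 - 1)s - t*d determines the nonnegative integer t.
    assert (1 + (d1 - 1) * s - (r - 1)) % d == 0
    t = (1 + (d1 - 1) * s - (r - 1)) // d
    b = d1 - 1 - t
    assert e1 == t + 1 and d1 == 1 + b + t and f1 == 1 + b
    assert (1 + t) * (r - 1) == 1 + b * s
    return t, b
```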
\begin{example} {\rm It will be helpful to give a specific example in order to solidify these ideas.
Suppose $n=35,d=13$. Then ${\rm gcd}(13,35-1)=1$, so we are in the desired situation. Now $35 = 2\cdot 13 + 9$, so
that $r=9, r-1=8,$ and $s=d-(r-1)=13-8=5$. Then the sequence $h_1,h_2,...,h_d$ is given by
$$1,6,11,3,8,13,5,10,2,7,12,4,9.$$
Since $r-1=8$, the partition $\{1,2,...,d\} = \hat{S_1}\cup \hat{S_2}$ is then
$$\{1,2,...,13\} = \{1,3,6,8,11\}\cup \{2,4,5,7,9,10,12,13\}.$$
Furthermore,
$$d_1=|\{1,3,6,8,11\}|=5, \ \ d_2=|\{2,4,5,7,9,10,12,13\}|= 8,$$
$$e_1=|\{8,11\}|=2,\ \ e_2=|\{9,10,12,13\}|=4,$$
$$f_1=|\{1,3,6,8\}|=4,\ \ f_2=|\{2,4,5,7,9\}|=5.$$
Note that $f_1 = 4 = 1+3 = 1+b$, and $e_1 = 2 = 1+1 = 1+t$. Finally, we have}
$$r-1 = 8 = 1 + 3\cdot 5 - 1\cdot 8 = 1 + bs - t(r-1).$$
\end{example}
\section{The search for appropriate matrices inside ${\rm M}_d(L_n)$}\label{findmatrices}
We start this section by giving a plausibility argument for Theorem \ref{Main}. In \cite{L1}, Theorem 5, Leavitt proves
\begin{proposition}\label{moduletypeofmatrices}
If $R$ has module type $(1,n-1)$, then ${\rm M}_d(R)$ has module type $(1, \frac{n-1}{{\rm gcd}(d,n-1)})$.
\end{proposition}
Since module type is an isomorphism invariant, this result immediately gives that $L_n$ and ${\rm M}_d(L_n)$ are not
isomorphic when ${\rm gcd}(d,n-1)
>1$.
On the other hand, in case ${\rm gcd}(d,n-1)=1$, Leavitt's proof of Proposition \ref{moduletypeofmatrices} gives an
algorithm for finding specific elements $\{a_1,...,a_n,b_1,...,b_n\}$ inside ${\rm M}_d(L_n)$ which satisfy the
appropriate relations. So, by Corollary \ref{method}, we would be done if we could show that this set of elements
generates ${\rm M}_d(L_n)$ as a $K$-algebra.
However, this set of elements does NOT generate ${\rm M}_d(L_n)$ in general. It is instructive here to look at a
specific example. Because by \cite{AAn1}, Proposition 2.1, we know our main result is true when $d$ divides some power
of $n$, the smallest case of interest is the situation $d=3, n=5$, since then ${\rm gcd}(d,n-1)=1$ but $d$ does not
divide any power of $n$. Leavitt's proof (for general $d,n$) manifests in this specific case that $M_3(L_5)$ has module
type $(1,4)$, and is based on an analysis of the $n=5$ elements in $M_3(L_5)$
$$X_1=\begin{pmatrix}{x_{1}}&0&0\\
x_2&0&0\\
x_3&0&0 \end{pmatrix}\hspace{.25in}
X_2=\begin{pmatrix}{x_4}&0&0\\
x_5&0&0\\
0&x_1&0\end{pmatrix} \hspace{.25in}
X_3=\begin{pmatrix}0&x_2&0\\
0&x_3&0\\
0&x_4&0\end{pmatrix} \hspace{.25in}$$
$$ X_4=\begin{pmatrix}0&x_5&0\\
0&0&x_1\\
0&0&x_2\end{pmatrix} \hspace{.25in}
X_5=\begin{pmatrix}0&0&x_3\\
0&0&x_4\\
0&0&x_5\end{pmatrix} \hspace{.25in}$$
together with the five dual matrices $Y_i = X_i^*$ for $1\leq i \leq 5$. While these ten matrices generate ``much of''
$M_3(L_5)$, these matrices do not, for instance, generate the matrix unit $e_{1,3}$. In fact, we show below in
Proposition \ref{gradediso} that whenever ${\rm gcd}(d,n-1)=1$ but $d$ does not divide $n^{\alpha}$ for any positive
integer $\alpha$, then the matrices in ${\rm M}_d(L_n)$ which arise in the proof of \cite{L1}, Theorem 5, cannot
generate ${\rm M}_d(L_n)$.
A breakthrough in this investigation was achieved when the authors were able to show that isomorphisms between more
general structures (so-called ``Leavitt path algebras''; see e.g. \cite{AAr1}), when interpreted in light of
\cite{AAr2}, Proposition 13, in fact yield an isomorphism between $L_5$ and ${\rm M}_3(L_5)$. By tracing through the
appropriate translation maps, the following subset of ${\rm M}_3(L_5)$ emerges as the desired set of elements, elements
which satisfy the appropriate relations {\it and} generate ${\rm M}_3(L_5)$ as a $K$-algebra:
$$X_1=\begin{pmatrix}x_{1}&0&0\\
x_5&0&0\\
x_3&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_2&0&0\\
x_4&0&0\\
0&1&0\end{pmatrix} \hspace{.25in}
X_3=\begin{pmatrix}0&0&{x_1}^2\\
0&0&x_5x_1\\
0&0&x_3x_1\end{pmatrix} \hspace{.25in}$$
$$ X_4= \begin{pmatrix}0&0&x_2x_1\\
0&0&x_4x_1\\
0&0&x_5\end{pmatrix} \hspace{.25in}
X_5= \begin{pmatrix}0&0&x_2\\
0&0&x_4\\
0&0&x_3\end{pmatrix} \hspace{.25in} $$
and $Y_i = X_i^*$ for each $1\leq i \leq 5$. What we glean from
this particular set of matrices in ${\rm M}_3(L_5)$ is that:
\smallskip
(i) it might be useful to use $1_K$ as an entry (any number of times) in the generating matrices,
(ii) various nonlinear monomials might play a useful role in the
generating matrices, and
(iii) it might be of use to place elements in the matrices in
some order other than lexicographic order.
\medskip
With guidance provided by the above system of generators in ${\rm M}_3(L_5)$, one can easily check that the following
set of matrices (together with the appropriate dual matrices) is also a set of generators of ${\rm M}_3(L_5)$ which
satisfies the conditions of Corollary \ref{method}, and hence provides an isomorphism between $L_5$ and ${\rm
M}_3(L_5)$.
\begin{center}
$$X_1=\begin{pmatrix}x_{1}&0&0\\
x_2&0&0\\
x_3&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_4&0&0\\
x_5&0&0\\
0&1&0\end{pmatrix} \hspace{.25in}$$
$$X_3=\begin{pmatrix}0&0&{x_1}^2\\
0&0&x_2x_1\\
0&0&x_3x_1\end{pmatrix} \hspace{.25in}
X_4= \begin{pmatrix}0&0&x_4x_1\\
0&0&x_5x_1\\
0&0&x_2\end{pmatrix} \hspace{.25in}
X_5= \begin{pmatrix}0&0&x_4\\
0&0&x_5\\
0&0&x_3\end{pmatrix} \hspace{.25in} $$
\end{center}
It is easy to show, and not at all unexpected, that for each $n$, the symmetric group $S_n$ acts as automorphisms on
$L_n$ in the obvious way. Specifically, for $\sigma \in S_n$ we define $\alpha_{\sigma}:L_n \rightarrow L_n$ by
setting $\alpha_{\sigma}(x_i)=x_{\sigma(i)}$ and $\alpha_{\sigma}(y_i)=y_{\sigma(i)}$ for each $1\leq i \leq n$, and extending to a $K$-algebra homomorphism. In fact, with $\sigma
\in S_5$ given by $\sigma(2)=5$, $\sigma(4)=2$, and $\sigma(5)=4$, it is straightforward to show that the corresponding
$\alpha_{\sigma}$ transforms this last set of five matrices to the previously given set.
We close this section by giving three additional sets of generating matrices for ${\rm M}_3(L_5)$. First, consider the
set $\{X_1,X_2,X_3,X_4,X_5\}$ of matrices presented directly above. It is relatively easy to show that by defining
$X_5'$ to be the matrix gotten by interchanging the entries $x_5$ and $x_3$ of $X_5$, then the set
$\{X_1,X_2,X_3,X_4,X_5'\}$ (and their duals) provide a generating set for ${\rm M}_3(L_5)$. (We note for future
reference that, in contrast, switching the entries $x_5$ and $x_4$ of $X_5$ would not provide a generating set.)
Second, consider again the set $\{X_1,X_2,X_3,X_4,X_5\}$ of matrices presented directly above. It is not difficult to
show that by defining $X_4''$ and $X_5''$ to be the matrices gotten by interchanging the entry $x_2$ of $X_4$ with the
entry $x_3$ of $X_5$, then the set $\{X_1,X_2,X_3,X_4'',X_5''\}$ (and their duals) provide a generating set for ${\rm
M}_3(L_5)$.
In Section \ref{maintheorem} we will generalize these first two observations, and show how each yields an action of
various symmetric groups as automorphisms of ${\rm M}_d(L_n)$, and hence of $L_n$, whenever ${\rm gcd}(d,n-1)=1$.
Third, and finally, it is somewhat less obvious that there are many other types of actions of various symmetric groups
on ${\rm M}_3(L_5)$. To give one such example, here is yet another set of five matrices which, along with their duals,
provides a set of generators for ${\rm M}_3(L_5)$. Loosely speaking, these are produced from the previous set
$\{X_1,X_2,X_3,X_4,X_5\}$ by an appropriate permutation in $S_5$ together with an interchanging of the roles of the
initial and final columns of ${\rm M}_3(L_5)$.
\begin{center}
$$X_1=\begin{pmatrix}x_{1}x_5&0&0\\
x_2x_5&0&0\\
x_3x_5&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_4&0&0\\
x_4x_5&0&0\\
x_5^2&0&0\end{pmatrix} \hspace{.25in}$$
$$X_3=\begin{pmatrix}x_1&0&0\\
x_2&0&0\\
x_3&0&0\end{pmatrix} \hspace{.25in}
X_4= \begin{pmatrix}0&1&0\\
0&0&x_2\\
0&0&x_3\end{pmatrix} \hspace{.25in}
X_5= \begin{pmatrix}0&0&x_1\\
0&0&x_4\\
0&0&x_5\end{pmatrix} \hspace{.25in} $$
\end{center}
We will describe subsequent to the proof of Theorem \ref{Main} a number of additional, significantly different
collections of generating matrices in ${\rm M}_d(L_n)$ for ${\rm gcd}(d,n-1)=1$. Each of these collections gives rise
to an automorphism of ${\rm M}_d(L_n)$. Because Theorem \ref{Main} will demonstrate that ${\rm M}_d(L_n)\cong L_n$ for
${\rm gcd}(d,n-1)=1$, each of these automorphisms of ${\rm M}_d(L_n)$ will in turn induce an automorphism of $L_n$.
\medskip
\section{The generators of ${\rm M}_d(L_n)$}\label{generators}
In this section we present the appropriate $2n$ matrices of ${\rm M}_d(L_n)$ which generate ${\rm M}_d(L_n)$. We write
$n=qd+r$ with $2\leq r \leq d$. We assume $d<n$, so that $q\geq 1$. The matrices $X_1, X_2, ..., X_q$ are given as
follows. For $1\leq i \leq q$ we define
$$X_i=
\begin{pmatrix}x_{(i-1)d+1}&0& &0 \\
x_{(i-1)d+2}&0& &0 \\
\vdots&0&...&0 \\
x_{id}&0& &0
\end{pmatrix}
= \sum_{j=1}^{d}x_{(i-1)d+j}e_{j,1} $$ The two matrices $X_{q+1}$
and $X_{q+2}$ play a pivotal role here. They are defined as
follows.
$$X_{q+1}=
\begin{pmatrix}x_{qd+1}&0&0& &0&0& &0& \\
x_{qd+2}&0&0& &0&0& &0& \\
\vdots&0&0& &0&0& &0& \\
x_n&0&0&...&0&0&...&0&\\
0&1&0& &0&0& &0& \\
0&0&1& &0&0& &0& \\
& & &\vdots& & & & & \\
0&0&0& ...&1&0& &0& \end{pmatrix}$$
$$ = \sum_{i=1}^{d-r}e_{i+r,i+1} + \sum_{t=1}^{r}x_{qd+t}e_{t,1}$$
and
$$X_{q+2}=
\begin{pmatrix}0& &0&1&0&0& &0&0 \\
0& &0&0&1&0& &0&0 \\
& & & & &\vdots& & &\\
0& &0&0&0&0& &1&0 \\
0&...&0&0&0&0& &0&a_{q+2,r-1} \\
0& &0&0&0&0& &0&a_{q+2,r} \\
& & &\vdots& & & & &\vdots \\
0& &0&0&0&0& &0&a_{q+2,d}\end{pmatrix}$$
$$ = \sum_{j=1}^{r-2}e_{j,j+s} + \sum_{t=1}^{d-(r-2)}a_{q+2,(r-2)+t}e_{(r-2)+t,d}$$
(where the elements $a_{q+2,r-1}, a_{q+2,r},..., a_{q+2,d} \in
L_n$ are monomials in $x$-variables which will be determined
later). In case $d-r=0$ or $r-2=0$ we interpret the appropriate
sums as zero.
The remaining matrices $X_{q+3},...,X_n$ will be explicitly
specified later, but each of these will have the same general
form. In particular, for $q+3 \leq i \leq n$,
$$X_i=
\begin{pmatrix}0& &0&a_{i,1} \\
0& &0&a_{i,2} \\
\vdots& &\vdots&\vdots\\
0& &0&a_{i,d}
\end{pmatrix}
= \sum_{j=1}^{d}a_{i,j}e_{j,d} $$ (where the elements $a_{i,1},
a_{i,2}, ... ,a_{i,d} \in L_n$ are monomials in the $x$-variables
which will be determined later). In case $q+3 > n$ then we
understand that there are no matrices of this latter form in our
set of $2n$ matrices. We note that we always have the matrices
$X_{q+1}$ and $X_{q+2}$, since $n=qd+r\geq q\cdot 1 + 2$.
\bigskip
We define the matrices $Y_i$ for $1\leq i \leq n$ by setting $Y_i
= X_i^*$. Because they will play such an important role, we
explicitly describe $Y_{q+1}$ and $Y_{q+2}$.
$$Y_{q+1}=
\begin{pmatrix}y_{qd+1}&y_{qd+2}&...&y_n&0&0& &0& \\
0&0&0&0&1&0& &0& \\
0&0&0&0&0&1& &0& \\
& & &\vdots& & &...& &\\
0&0&0&0&0&0& &1& \\
0&0&0&0&0&0& &0& \\
& & &\vdots& & & & & \\
0&0&0&0&0&0& &0& \end{pmatrix}$$
$$ = \sum_{i=1}^{d-r}e_{i+1,i+r} + \sum_{t=1}^{r}y_{qd+t}e_{1,t}$$
and
$$Y_{q+2}=
\begin{pmatrix}0&0& &0&0&0& &0 \\
& & & &\vdots& & & \\
0&0& &0&0&0& &0\\
1&0& &0&0&0&... &0 \\
0&1& &0&0&0& &0 \\
& &...& & & & &\vdots \\
0&0& &1&0&0& &0 \\
0&0& &0&a^*_{q+2,r-1}& a^*_{q+2,r}&...& a^*_{q+2,d} \end{pmatrix}$$
$$ = \sum_{j=1}^{r-2}e_{j+s,j} + \sum_{t=1}^{d-(r-2)}a^*_{q+2,(r-2)+t}e_{d,(r-2)+t}$$
As above, in case $d-r=0$ or $r-2=0$ we interpret the corresponding sums as zero.
\begin{definition}
{\rm We denote by $A$ the subalgebra of ${\rm M}_d(L_n)$ generated by the matrices $$\{X_i,Y_i | 1\leq i \leq n\}.$$
That is,}
$$A =<\{X_i,Y_i | 1\leq i \leq n\}>.$$
\end{definition}
So in order to achieve our main result, we seek to show that $A={\rm M}_d(L_n)$.
\medskip
Using the relation $\sum_{j=1}^n y_jx_j = 1_K$, we immediately get
\begin{lemma}\label{YiXi1toqplus1}
$$\sum_{i=1}^{q+1}Y_iX_i = E_s \in A.$$
\end{lemma}
A similar computation yields
\begin{lemma}\label{IminusEsinA}
$\mbox{ }$
\begin{enumerate}
\item Assume the elements $\{a_{q+2,r-1},...,a_{q+2,d}\} \cup \{a_{i,j} | q+3 \leq i \leq n, 1\leq j \leq d\}$ are chosen
so that
$$
\hspace{6.5truecm} \sum_{i,j}a^*_{i,j}a_{i,j}=1_K. \hspace{6truecm}
(\dagger)
$$
Then
$$\sum_{i=q+2}^{n}Y_iX_i = I - E_s\in A.$$
\item Assume the elements $\{a_{q+2,r-1},...,a_{q+2,d}\}$ are chosen
so that
$$a_{q+2, j}a^*_{ q+2, i}=\delta _{i,j}$$
for every $i,j\in \{r-1, \dots , d\}.$ Then $X_{q+2}Y_{q+2}=I$.
\end{enumerate}
\end{lemma}
\begin{lemma}\label{dual}
If $a\in A$ then $a^*\in A$.
\end{lemma}
\begin{proof} The set of generators of $A$ has this property, and the relations
are self-dual, hence for any element $a$ which can be generated by
ring-theoretic operations we can also generate $a^*$.
\end{proof}
\begin{definition}
{\rm Recall the partition $\hat{S_1} \cup \hat{S_2}$ of $\{1,2,...,d\}$ described in Section \ref{basics}. For $i,j\in
\{1,2,...,d\}$ we write $i\sim j$ in case $i,j$ are both in the same $\hat{S_k}, k=1,2$.}
\end{definition}
Our goal for the remainder of this section is to show that $A$
contains all matrix units $e_{i,j}$ for $i\sim j$. We begin by
defining two monomorphisms of ${\rm M}_d(L_n)$ which will be useful
in this context.
\begin{definition}
{\rm We define the monomorphism $\beta$ of ${\rm M}_d(L_n)$ by setting
$$\beta(M) = Y_{q+1}MX_{q+1}$$
for each $M\in {\rm M}_d(L_n).$ Since $Y_{q+1}$ and $X_{q+1}$ are each in $A$, $\beta$ in fact restricts to a
monomorphism of $A$.
Assuming that we have chosen the elements
$\{a_{q+2,r-1},...,a_{q+2,d}\}$ as described in Lemma
\ref{IminusEsinA}(2), we define the monomorphism $\phi$ of ${\rm
M}_d(L_n)$ by setting
$$\phi(M) = Y_{q+2}MX_{q+2}$$
for each $M\in {\rm M}_d(L_n).$ Since $Y_{q+2}$ and $X_{q+2}$ are each in $A$, $\phi$ in fact restricts to a
monomorphism of $A$.}
\end{definition}
We begin by showing that all of the matrix idempotents
$\{e_i|1\leq i\leq d\}$ are in $A$. The results presented in the
next two lemmas follow directly from straightforward matrix
computations, so we omit their proofs.
\begin{lemma}\label{klessthanrminus1}
If $k<r-1$ then
$$\phi(e_k) = e_{k+s}.$$
\end{lemma}
\begin{lemma}\label{kbiggerthanr}
If $k>r$ then
$$\beta(e_k) = e_{k-(r-1)}.$$
\end{lemma}
It is instructive to note the following. In words, the previous two lemmas say that we can move matrix idempotents
``forward by $s$'' (if we start with an index less than $r-1$), and ``backwards by $r-1$'' (if we start with an index
bigger than $r$). But even though it would make sense to move the specific idempotent $e_{r-1}$ forward by $s$ units
(since $(r-1)+s=d$), or to move the specific idempotent $e_{r}$ backwards by $r-1$ units, neither of these moves can be
effected by the matrix multiplications described in the lemmas. For instance, the entry in the $(d,d)$ coordinate of
$\phi(e_{r-1}) = Y_{q+2}e_{r-1}X_{q+2}$ is $a^*_{q+2,r-1}a_{q+2,r-1}$, which may or may not equal $1$ depending on the
choice of $a_{q+2,r-1}$. (Indeed, we will see later that we will NOT choose $a_{q+2,r-1}$ having this property.) This
observation is precisely the reason why we must expend so much effort in analyzing the partition $\hat{S_1}\cup
\hat{S_2}$ of $\{1,2,...,d\}$ described previously.
\medskip
We consider the sequence $\{u_i\}_{i=1}^{d}$ of integers, whose
$i^{th}$ entry is given by
$$u_i = is \ ({\rm mod} \ d).$$
The integers $u_i$ are understood to be taken from the set
$\{1,2,...,d\}$. Rephrased, we define the sequence
$\{u_i\}_{i=1}^{d}$ by setting $u_1=s,$ and, for $1\leq i \leq
d-1$,
$$u_{i+1} = u_i+s \mbox{ if }u_i\leq r-1, \ \ \mbox{ and } \ \ u_{i+1}=u_i-(r-1)
\mbox{ if } u_i \geq r.$$ Of course, the $u$-sequence is closely related to the $h$-sequence described in Section
\ref{basics}. Thus it is not surprising that the following Lemma closely resembles Lemma \ref{sequence}. Because
${\rm gcd}(d,s)=1$ (so that $s$ is invertible ${\rm mod} \ d$), basic number theory yields the following
\begin{lemma}\label{usequence}
$\mbox{ }$
\begin{enumerate}
\item The entries in the sequence $u_1,u_2,...,u_d$ are distinct.
\item The set of entries $\{u_1,u_2,...,u_d\}$ equals the set $\{1,2,...,d\}$ (in some order).
\item The penultimate entry in the sequence is $r-1$; that is, $u_{d-1}=r-1$.
\item The final entry in the sequence is $d$; that is, $u_d=d$.
\end{enumerate}
\end{lemma}
\begin{proof} The only non-standard statements are (3) and (4).
Suppose $r-1 = is \ ({\rm mod} \ d)$. Then $d = r-1 + s = (i+1)s \ ({\rm mod} \ d)$. Now ${\rm gcd}(s,d)=1$ gives
that $i+1 = d$, so that $i=d-1$ and we get $r-1=(d-1)s \ ({\rm mod} \ d) = u_{d-1}$ as desired. Then (4) follows
directly from (3) and the equation $d=(r-1)+s$.
\end{proof}
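As with the $h$-sequence, the $u$-sequence and the claims of Lemma \ref{usequence} are easy to check by machine. An illustrative sketch (helper name ours):

```python
from math import gcd

def u_sequence(d, n):
    """The sequence u_i = is (mod d), i = 1, ..., d, with values in {1, ..., d}."""
    q, r = divmod(n, d)
    if r == 0:
        q, r = q - 1, d
    s = d - (r - 1)
    # Recursive form: u_1 = s; add s if u_i <= r-1, subtract r-1 if u_i >= r.
    u = [s]
    for _ in range(d - 1):
        u.append(u[-1] + s if u[-1] <= r - 1 else u[-1] - (r - 1))
    # Agrees with the closed form u_i = is (mod d), representatives in {1, ..., d}.
    assert u == [(i * s - 1) % d + 1 for i in range(1, d + 1)]
    return u
```

For $d=13$, $n=35$ this returns $5,10,2,7,12,4,9,1,6,11,3,8,13$, with penultimate entry $r-1=8$ and final entry $d=13$.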
\begin{proposition}\label{eiinA}
For every $j$ with $1\leq j \leq d$ we have $e_j\in A$.
\end{proposition}
\begin{proof} The key idea is to show that $E_j\in A$ for all $1\leq j
\leq d$. Since $X_1Y_1 = I$ we have $I = E_d \in A$. We consider the sequence of matrices
$E_{u_1},E_{u_2},...,E_{u_d}$ arising from the sequence $\{u_i\}_{i=1}^{d}$ described above. By induction on $i$, we
show that each of $E_{u_1},E_{u_2},...,E_{u_{d-1}}\in A$. For $i=1$ we have $E_{u_1}=E_s\in A$ by Lemma
\ref{YiXi1toqplus1}. Now we assume that $E_{u_i}\in A$ for $i\leq d-2$, and show that $E_{u_{i+1}}\in A$. By Lemma
\ref{usequence}(3), $i\leq d-2$ gives that $u_i\neq r-1$. There are two cases.
\smallskip
Case 1: $u_i \leq r-2$. Then by definition $u_{i+1}=u_i+s$. Since
$E_{u_i}\in A$ by hypothesis, we have $\phi(E_{u_i})\in A$, which
then gives
$$E_s + \phi(E_{u_i})\in A.$$
But since $u_i\leq r-2$, Lemma \ref{klessthanrminus1} applies to
give
$$\phi(E_{u_i}) = \sum_{j=1}^{u_i} e_{j+s} = \sum_{k=s+1}^{u_i+s} e_{k},$$
so that
$$E_s+\phi(E_{u_i})=\sum_{k=1}^{s} e_{k} + \sum_{k=s+1}^{u_i+s} e_{k}= E_{u_i+s}=E_{u_{i+1}},$$
so that $ E_{u_{i+1}}\in A$, and Case 1 is shown.
\smallskip
Case 2: $u_i\geq r$. Since $I = E_d \in A$ we have $I-E_{u_i} \in
A$, and since $E_s\in A$ we get
$$E_s - \beta(I-E_{u_i} )\in A.$$
But $I-E_{u_i} = \sum_{j=u_i+1}^d e_j$, and $u_i+1 > r$, so Lemma
\ref{kbiggerthanr} applies to give
$$\beta(I-E_{u_i}) = \sum_{j=u_i+1}^d e_{j-(r-1)}.$$
Thus we get that
\begin{eqnarray*}
E_s - \beta(I-E_{u_i}) & = & E_s - \sum_{j=u_i+1}^d e_{j-(r-1)} \\
& = & E_s - \sum_{k=u_i+1-(r-1)}^{d-(r-1)} e_{k} = E_s - \sum_{k=u_i+1-(r-1)}^{s} e_{k} \\
& = & \sum_{k=1}^{u_i-(r-1)} e_{k}=E_{u_i-(r-1)} = E_{u_{i+1}},
\end{eqnarray*}
so that $E_{u_{i+1}}\in A$, and Case 2 is shown. Thus we have established by induction that $E_{u_i}\in A$ for all
$1\leq i \leq d-1$. But $E_{u_d}=E_d$ by Lemma \ref{usequence}, and $E_d =I\in A$ has already been established, so in
fact we have $E_{u_i}\in A$ for all $1\leq i \leq d.$ So by Lemma \ref{usequence}(2) we conclude that $E_j\in A$ for
all $1\leq j\leq d$.
Now the desired result follows easily from the observation that
$e_1=E_1\in A$, while $e_j=E_j - E_{j-1} \in A$ for all $2\leq
j\leq d$.
\end{proof}
We remark that we need not modify the proof of Proposition \ref{eiinA} at all in case $r=2$ (resp. $r=d$). This is
because even though we would not have the matrix $X_{q+2}$ (resp. $X_{q+1}$) containing 1 in the appropriate entries,
in the case $r=2$ (resp. $r=d$) we would have $s=d-1$ (resp. $s=1$), so that we would only be using multiplication by
$X_{q+1}$ (resp. $X_{q+2}$) in the proof.
\smallskip
Now that we have established that all of the matrix idempotents
$e_i$ ($1\leq i \leq d$) are in $A$, we use them to generate all
of the matrix units $e_{i,j}$.
\begin{lemma}\label{e11plussinA}
$\mbox{ }$
\begin{enumerate}
\item Suppose $1+s < d$. Then $e_1X_{q+2}e_{1+s} = e_{1,1+s}$, and $e_{1,1+s}\in A$.
\item Suppose $1+s =d$. Then $\hat{S_1}=\{1\}$ and $\hat{S_2}=\{2,...,d\}$.
\item The situation $1+s >d$ is not possible.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) We have $1+s\leq d-1$. By construction, the $(1,s+1)$ entry of $X_{q+2}$ is $1$ as long as $r-2\geq 1$. But
since $1+s = d-(r-2)$, the inequality $1+s\leq d-1$ gives $d-(r-2)\leq d-1$, which yields the desired $r-2\geq 1$. Now use Proposition \ref{eiinA}.
(2) If $1+s =d$, since $(r-1)+s=d$ we get $r-1=1$. So the sequence $\{h_i\}_{i=1}^d$ has $h_1=1=r-1$, so that
$\hat{S_1}=\{1\}$.
(3) If $1+s >d$, then with $(r-1)+s=d$ we would get $r< 2$, contradicting the hypothesis that $r\geq 2$.
\end{proof}
\begin{lemma}\label{edsinA}
$\mbox{ }$
\begin{enumerate}
\item Suppose $n$ is not a multiple of $d$. Then $e_dX_{q+1}e_{s} = e_{d,s}$, and $e_{d,s}\in A$.
\item Suppose $n$ is a multiple of $d$. Then
$\hat{S_1}=\{1,2,...,d-1\}$ and $\hat{S_2}=\{d\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) If $n$ is not a multiple of $d$ then $r\neq d$, so that the $(d,s)$ entry of the matrix $X_{q+1}$ is $1$. Now use
Proposition \ref{eiinA}.
(2) On the other hand, if $n$ is a multiple of $d$, then $n=qd+d$, so $r=d$, so that $r-1=d-1$, which gives
$s=d-(r-1)=1$, so that the sequence $\{h_i\}_{i=1}^d$ has $h_1=1$ and $h_{i+1}=h_i+1$, i.e. $h_i=i$ for all $i$. Hence $d_1=d-1$, $\hat{S_1}=\{1,2,...,d-1\}$, and $\hat{S_2}=\{d\}$.
\end{proof}
The next Proposition provides a link between the matrix units $e_{i,j}\in A$ and the partition $\hat{S_1}\cup
\hat{S_2}$ of $\{1,2,...,d\}$.
\begin{proposition}\label{partitionconsecutivematrixunits}
Consider the sequence $\{h_i\}_{i=1}^d$ described in Section
\ref{basics}. Let $h_i,h_{i+1},h_{i+2}$ be three consecutive
elements of the sequence, where $h_i\neq r,r-1$ and $h_{i+1}\neq
r,r-1$. (In other words, consider three consecutive elements
$h_i,h_{i+1},h_{i+2}$ so that all three are in $\hat{S_1}$ or all
three are in $\hat{S_2}$.) Then there exists $X\in
\{X_{q+1},X_{q+2}\}$ and $Y\in \{Y_{q+1},Y_{q+2}\}$ so that
$$Ye_{h_i,h_{i+1}}X = e_{h_{i+1},h_{i+2}}.$$
In particular, in this situation, if $e_{h_i,h_{i+1}}\in A$ then
also $e_{h_{i+1},h_{i+2}}\in A$.
\end{proposition}
\begin{proof}
There are four cases to consider, depending on whether we
use the ``plus $s$" or ``minus $r-1$" operation to get from one
element of the sequence to the next.
\medskip
Case 1: $h_{i+1}=h_i+s$ and $h_{i+2}=h_{i+1}+s$. In this
situation we have $h_i\leq r-1$ because $h_{i+1}\leq d=s+(r-1)$
and $h_{i+1}=h_i+s$. But $h_i \neq r-1$ by hypothesis. Thus we
have in fact $h_i\leq r-2$. In an exactly analogous way we also
have $h_{i+1}\leq r-2$. Using that each of $h_i$ and $h_{i+1}$
is less than $r-1$, we get
$$Y_{q+2}e_{h_i,h_{i+1}}X_{q+2} = e_{h_{i+1},h_{i+2}}.$$
\smallskip
Case 2: $h_{i+1}=h_i+s$ and $h_{i+2}=h_{i+1}-(r-1)$. As in Case
1 we have $h_i< r-1$. Also, $h_{i+1}\geq r$ because $1\leq
h_{i+2}=h_{i+1}-(r-1)$. But $h_{i+1}\neq r$ by hypothesis. Thus
we have in fact $h_{i+1}>r$. Using both that $h_i<r-1$ and
$h_{i+1}>r$, we get
$$Y_{q+2}e_{h_i,h_{i+1}}X_{q+1} = e_{h_{i+1},h_{i+2}}.$$
\smallskip
Case 3: $h_{i+1}=h_i-(r-1)$ and $h_{i+2}=h_{i+1}+s$. As shown
above, the hypotheses yield $h_i>r$ and $h_{i+1}<r-1$, from which
we get
$$Y_{q+1}e_{h_i,h_{i+1}}X_{q+2} = e_{h_{i+1},h_{i+2}}.$$
\smallskip
Case 4: $h_{i+1}=h_i-(r-1)$ and $h_{i+2}=h_{i+1}-(r-1)$. As shown
above, the hypotheses yield $h_i>r$ and $h_{i+1}>r$, from which we
get
$$Y_{q+1}e_{h_i,h_{i+1}}X_{q+1} = e_{h_{i+1},h_{i+2}},$$
and the result is established.
\end{proof}
We now establish the relationship between the partition $\hat{S_1}\cup \hat{S_2}$ of $\{1,2,...,d\}$ and the matrix
units $e_{i,j}\in A.$ Intuitively, the idea is this. Suppose for instance that $a,b\in \hat{S_1}$. We seek to show
that $e_{a,b}\in A$. There is a sequence of elements in $\hat{S_1}$ which starts at $a$ (resp. $b$) and ends at $r-1$.
By the previous result, this will imply that $e_{a,r-1}\in A$ (resp. $e_{b,r-1}\in A$). But then by duality
$e_{r-1,b}\in A$, so that $e_{a,r-1}e_{r-1,b}=e_{a,b}\in A$. Here are the formal details.
\begin{proposition}\label{partitionmatrixunits}
Suppose $h_i,h_{j}$ are two entries in the sequence
$\{h_i\}_{i=1}^d$, for which both entries are either in
$\hat{S_1}$ or $\hat{S_2}$. Then $$e_{h_i,h_j}\in A.$$
\end{proposition}
\begin{proof} We start by proving the result for $\hat{S_1}$. Suppose
first that we are in a situation for which $1+s < d$. Then Lemma \ref{e11plussinA}(1) yields that $e_{1,1+s}\in A$.
Since in this situation the integers $1,1+s$ are the first two elements of the sequence $\{h_i\}_{i=1}^d$, and both are
in $\hat{S_1}$, repeated applications of Proposition \ref{partitionconsecutivematrixunits} gives that
$e_{h_i,h_{i+1}}\in A$ for any two consecutive elements $h_i,h_{i+1}$ of $\hat{S_1}$. By matrix multiplication this
then gives $e_{h_i,h_{j}}\in A$ whenever $i<j$ and both $h_i,h_{j}$ are in $\hat{S_1}$. By Lemma \ref{dual} this gives
that $e_{h_i,h_{j}}\in A$ whenever $i\neq j$ and both $h_i,h_{j}$ are in $\hat{S_1}$. This together with Proposition
\ref{eiinA} yields that $e_{h_i,h_{j}}\in A$ whenever both $h_i,h_{j}$ are in $\hat{S_1}$.
On the other hand, if we are in a situation for which $1+s = d$, then by Lemma \ref{e11plussinA}(2) we have that
$\hat{S_1}=\{1\}$, and the result follows immediately from Proposition \ref{eiinA}.
The result for $\hat{S_2}$ is established in a similar manner,
using Lemma \ref{edsinA} and Propositions \ref{eiinA} and
\ref{partitionconsecutivematrixunits}, along with the fact that
whenever $n$ is not a multiple of $d$, then the first two elements
of $\hat{S_2}$ in the sequence $\{h_i\}_{i=1}^d$ are $d,s$.
\end{proof}
\section{The main theorem}\label{maintheorem}
With the results of Section \ref{generators} in hand, we now show
how the partition $\hat{S_1}\cup \hat{S_2}$ of $\{1,2,...,d\}$ can
be used to specify the elements of $X_{q+2},...,X_n$ in such a way
that the set $$\{X_1,...,X_n,Y_1,...,Y_n\}$$ generates ${\rm
M}_d(L_n)$.
\begin{definition}
{\rm We define a partition $S_1 \cup S_2$ of $\{1,2,...,n\}$ as follows: For $w\in \{1,2,...,n\}$, write $w = q_wd +
\hat{w}$ with $1\leq \hat{w} \leq d$. We then define $w \in S_k$ (for $k= 1,2$) if and only if $\hat{w}\in
\hat{S_k}$.}
\end{definition}
So we are `enlarging' the partition of $\{1,2,...,d\} = \hat{S_1}
\cup \hat{S_2}$ to a partition of $\{1,2,...,n\} = S_1 \cup S_2$
by extending modulo $d$.
Now consider this set, which we will call ``The List'':
$$x_1^{d-1}$$
$$x_2x_1^{d-2}, x_3x_1^{d-2}, ... , x_nx_1^{d-2}$$
$$x_2x_1^{d-3}, x_3x_1^{d-3}, ... , x_nx_1^{d-3}$$
$$\vdots$$
$$x_2x_1, x_3x_1, ... , x_nx_1$$
$$x_2, x_3, ... , x_n$$
\begin{lemma}\label{Listhasdagger} The elements of The List satisfy $(\dagger)$. That is,
$$ y_1^{d-1}x_1^{d-1} + \sum_{i=0}^{d-2} \sum_{j=2}^n y_1^iy_j x_jx_1^i = 1_K.$$
\end{lemma}
\begin{proof} We note that
\begin{eqnarray*}
y_1^{d-1}x_1^{d-1} + \sum_{i=0}^{d-2} \sum_{j=2}^n y_1^iy_j x_jx_1^i & = &
y_1^{d-1}x_1^{d-1} + \sum_{j=2}^n y_1^{d-2}y_j x_jx_1^{d-2} + \sum_{i=0}^{d-3} \sum_{j=2}^n y_1^iy_j x_jx_1^i \\
& = & \sum_{j=1}^n y_1^{d-2}y_j x_jx_1^{d-2} + \sum_{i=0}^{d-3} \sum_{j=2}^n y_1^iy_j x_jx_1^i \\
& = & y_1^{d-2} (\sum_{j=1}^n y_j x_j) x_1^{d-2} + \sum_{i=0}^{d-3} \sum_{j=2}^n y_1^iy_j x_jx_1^i \\
& = & y_1^{d-2}(1_K) x_1^{d-2} + \sum_{i=0}^{d-3} \sum_{j=2}^n y_1^iy_j x_jx_1^i \\
& = & y_1^{d-2} x_1^{d-2} + \sum_{i=0}^{d-3} \sum_{j=2}^n y_1^iy_j x_jx_1^i.
\end{eqnarray*}
By induction we continue in a similar way to get
$$ \ \ \ = y_1 x_1 + \sum_{j=2}^n y_j x_j = 1_K. $$
\end{proof}
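The telescoping in this proof can be replayed mechanically. The sketch below (illustrative; it treats the terms of the sum as formal words and performs exactly the replacement $y_1^i\big(\sum_j y_jx_j\big)x_1^i \rightarrow y_1^ix_1^i$ used above) confirms that The List collapses to $1_K$.

```python
from collections import Counter

def list_collapses(d, n):
    """Replay the telescoping proof of (dagger) for The List (illustrative)."""
    terms = Counter()
    terms[('y1',) * (d - 1) + ('x1',) * (d - 1)] += 1     # y_1^{d-1} x_1^{d-1}
    for i in range(d - 1):                                # y_1^i y_j x_j x_1^i
        for j in range(2, n + 1):
            terms[('y1',) * i + ('y%d' % j, 'x%d' % j) + ('x1',) * i] += 1
    # For i = d-2, ..., 0 replace the n words y_1^i y_j x_j x_1^i by y_1^i x_1^i.
    for i in range(d - 2, -1, -1):
        for j in range(1, n + 1):
            w = ('y1',) * i + ('y%d' % j, 'x%d' % j) + ('x1',) * i
            assert terms[w] >= 1
            terms[w] -= 1
        terms[('y1',) * i + ('x1',) * i] += 1
    return +terms == Counter({(): 1})                     # the empty word is 1_K
```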
It is clear that
\begin{lemma}\label{listsize}
There are $(d-1)(n-1)+1$ elements on The List.
\end{lemma}
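A quick enumeration confirms the count (illustrative sketch; entries of The List are encoded as pairs $(u,t)$ standing for $x_ux_1^t$):

```python
def the_list(d, n):
    """Entries of The List as pairs (u, t) encoding x_u x_1^t (illustrative)."""
    entries = [(1, d - 1)]                       # the single entry x_1^{d-1}
    for t in range(d - 2, -1, -1):               # rows x_2 x_1^t, ..., x_n x_1^t
        entries += [(u, t) for u in range(2, n + 1)]
    return entries

assert len(the_list(13, 35)) == (13 - 1) * (35 - 1) + 1
```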
\begin{lemma}\label{numbertobespecified} The number of entries
$\{a_{q+2,r-1},...,a_{q+2,d}\}\cup \{a_{i,j} | q+3 \leq i \leq n, 1\leq j \leq d\}$ which must be specified to form the
matrices $X_{q+2}, X_{q+3}, ..., X_n$ is
$$(s+1)+d[n-(q+2)].$$
\end{lemma}
\begin{proof} The list of elements $\{a_{q+2,j} | r-1 \leq j \leq d\}$ needed to
complete $X_{q+2}$ contains $d-(r-1)+1 = s+1$ entries. There are $n-(q+2)$ matrices in the list $X_{q+3},
...,X_n$, and each of these matrices will contain exactly $d$ nonzero entries.
\end{proof}
\begin{lemma}
The number of entries which must be specified to form the matrices $X_{q+2}, X_{q+3}, ...,X_n$ is equal to the number
of entries in The List.
\end{lemma}
\begin{proof} By Lemmas \ref{listsize} and \ref{numbertobespecified}, we
must show
$$(s+1)+d[n-(q+2)] = (d-1)(n-1)+1.$$
But \begin{eqnarray*}
(s+1)+d[n-(q+2)] & = & [d-(r-1)]+1 + dn-dq-2d \\
& = & d-r+2+dn-dq-2d \\
& = & -(n-qd) +2+dn-dq-d \\
& = & n(d-1) +2 -d \\
& = & n(d-1) -(d-1) + 1 \\
& = & (n-1)(d-1)+1.
\end{eqnarray*}
\end{proof}
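As an independent sanity check (not part of the proof), the count can be verified numerically, using $n = qd + r$ with $1\leq r\leq d$ and $s = d-(r-1)$ as above:

```python
# Check (s+1) + d[n-(q+2)] == (d-1)(n-1) + 1 over a range of shapes,
# where n = q*d + r with 1 <= r <= d and s = d - (r-1).
for d in range(2, 12):
    for n in range(d + 1, 40):
        r = (n - 1) % d + 1          # 1 <= r <= d
        q = (n - r) // d
        s = d - (r - 1)
        assert (s + 1) + d * (n - (q + 2)) == (d - 1) * (n - 1) + 1
print("entries to be specified match the size of The List in all tested cases")
```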
The following result describes exactly how many of the entries to
be specified in $X_{q+2},...,X_n$ correspond to the subset
$\hat{S_1}$ in the partition $\hat{S_1}\cup \hat{S_2}$ of
$\{1,2,...,d\}$.
\begin{lemma}\label{specifiedinS1hat}
Consider the set of matrices $X_{q+3}, ..., X_n$, together with the last $s+1$ rows of $X_{q+2}$. Then the number of
nonzero entries corresponding to rows indexed by elements of $\hat{S_1}$ equals
$$d_1[n-(q+2)]+e_1.$$
\end{lemma}
\begin{proof} This follows directly by an argument analogous to that given
in the proof of Lemma \ref{numbertobespecified}, together with the
definitions of $d_1$ and $e_1$.
\end{proof}
\begin{lemma}\label{thelistinS1}
The number of entries on The List of the form $x_ux_1^t$ for which
$u\in S_1$ is
$$(d-1)[(qd_1-1)+f_1]+1.$$
\end{lemma}
\begin{proof} Consider each of the $d-1$ rows of The List (other than the
first). For each of the $d_1$ entries which are in $\hat{S_1}$
(including $1$) there are $q$ elements congruent to it (modulo
$d$). So we get $qd_1$ such entries. But we have started each
list with $x_2$ (and not $x_1$), so in fact there are $qd_1 - 1$
such entries in each row. Each row also contains $f_1$ entries
from the set $\{qd+1, ..., qd+r=n\}$. There are $d-1$ rows.
Finally, we add in the term corresponding to $x_1^{d-1}$.
\end{proof}
Before we get to the main proposition, we need a computational
lemma.
\begin{lemma}\label{d1r}
$$d_1r = df_1 - d + d_1 + 1.$$
\end{lemma}
\begin{proof} Using the equations $d_1=1+b+t$ and $(1+t)(r-1) = 1+bs$ from
Proposition \ref{bandt}, we get
\begin{eqnarray*}
d_1r & = & (1+b+t)r = (1+t)r+ br \\
& = &(1+t)(r-1) + (1+t) + br \\
& = & bs+br+t+2
\end{eqnarray*}
while
\begin{eqnarray*}
df_1-d+d_1+1 & = &(s+(r-1))(1+b)-(s+(r-1))+(1+b+t)+1 \\
& = & sb+rb+t+2 \ \ \mbox{ (by an easy computation). }
\end{eqnarray*}
\end{proof}
We are now ready to prove the key algorithmic tool which will provide the vehicle for our main result.
\begin{proposition}\label{possibleplacement}
Consider the set of matrices $X_{q+3}, ..., X_n$, together with the last $s+1$ rows of $X_{q+2}$. Then the number of
nonzero entries corresponding to rows indexed by elements of $\hat{S_1}$ equals the number of entries on The List of
the form $x_ux_1^t$ for which $u\in S_1$.
Rephrased: It is possible to place the elements of The List in the ``to be specified'' entries of the matrices
$X_{q+2},X_{q+3}, ... ,X_n$ in such a way that each entry of the form $x_ux_1^t$ for $u\in S_k$ ($k=1,2$) is placed in
a row indexed by $\hat{u}$ where $\hat{u}\in \hat{S_k}$ ($k=1,2$).
\end{proposition}
\begin{proof} By Lemmas \ref{specifiedinS1hat} and \ref{thelistinS1} it
suffices to show that
$$d_1[n-(q+2)]+e_1 = (d-1)[(qd_1-1)+f_1]+1.$$
But
\begin{eqnarray*}
d_1[n-(q+2)] + e_1 & = & d_1[qd+r-q-2]+e_1 \\
& = & d_1q(d-1) + d_1r - 2d_1 + e_1 \\
& = & d_1q(d-1) + [df_1 -d + d_1 + 1] -2d_1 + e_1 \mbox{ (using Lemma \ref{d1r}) } \\
& = & d_1q(d-1) + [df_1 -d + d_1 + 1] -2d_1 + [d_1+1-f_1] \mbox{ (Definition \ref{d1e1f1}) } \\
& = & d_1q(d-1) + (d-1)f_1 - (d-1) + 1 \\
& = & (d-1)[d_1q -1 + f_1] + 1
\end{eqnarray*}
and we are done.
\end{proof}
In other words, Proposition \ref{possibleplacement} implies that it is possible to place the entries of The List in the
empty ``boxes'' of the matrices $X_{q+2},X_{q+3}, ..., X_n$ in such a way that each entry of the form $x_ux_1^t$ for
$u\in S_k$ ($k=1,2$) is placed in a row indexed by $\hat{u}$ where $\hat{u}\in \hat{S_k}$ ($k=1,2$).
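As an independent sanity check of the counting identity behind Proposition \ref{possibleplacement} (not part of the proof), the following sketch enumerates small integer tuples satisfying the relations recorded above, namely $d_1 = 1+b+t$, $f_1 = 1+b$, $(1+t)(r-1) = 1+bs$, $d = s+(r-1)$, and $e_1 = d_1+1-f_1$, and verifies the identity numerically:

```python
# Verify d_1[n-(q+2)] + e_1 == (d-1)[(q*d_1 - 1) + f_1] + 1 for tuples
# consistent with the relations used in Lemma d1r and Definition d1e1f1.
checked = 0
for b in range(0, 8):
    for t in range(0, 8):
        for s in range(1, 8):
            if (1 + b * s) % (1 + t):
                continue                     # need (1+t)(r-1) = 1+bs
            r = (1 + b * s) // (1 + t) + 1
            d, d1, f1 = s + (r - 1), 1 + b + t, 1 + b
            e1 = d1 + 1 - f1
            for q in range(1, 6):
                n = q * d + r
                assert d1 * (n - (q + 2)) + e1 == (d - 1) * ((q * d1 - 1) + f1) + 1
                checked += 1
print("identity verified in", checked, "cases")
```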
\medskip
{\bf We assume for the remainder of this article that we have made such a placement.} To help the reader visualize the
process, a specific example appears below. However, the reader should keep in mind that in fact there are {\it many}
possible such placements.
\medskip
Once such a placement has been made, we can immediately deduce various properties of the matrices
$\{X_1,...,X_n,Y_1,...,Y_n\}$. For instance,
\begin{lemma}\label{XYequalsI}
For all $1\leq i,j \leq n$ we have
$$X_iY_j = \delta_{i,j}I \mbox{ in } {\rm M}_d(L_n).$$
\end{lemma}
\begin{proof}
By definition of the matrices $X_i,Y_j$ it suffices to show that
$$x_ix_1^t \cdot y_1^u y_j = \delta_{t,u}\delta_{i,j}1_K$$
for all $1\leq i,j\leq n$ and $1\leq u,t \leq d-2$. But this follows easily by the definition of multiplication in
$L_n$.
\end{proof}
\begin{lemma}\label{eachw}
For each $w$ with $1\leq w \leq n$, we have $x_we_{\hat{w},1}\in A$, where $w\sim \hat{w}$.
\end{lemma}
\begin{proof} Write $w = q_wd+ \hat{w}$ with $1\leq \hat{w}\leq d$. But
$e_{\hat{w}}$ and $e_1$ are in $A$, so $e_{\hat{w}}X_{q_w+1}e_1 \in A$, and this gives the result.
\end{proof}
\begin{lemma}\label{eachv}
For each $v$ with $1\leq v \leq n$, we have $y_ve_{1,\hat{v}}\in A$, where $v\sim \hat{v}$.
\end{lemma}
\begin{proof} Write $v = q_vd+ \hat{v}$ with $1\leq \hat{v} \leq d$. But
$e_{\hat{v}}$ and $e_1$ are in $A$, so $e_1Y_{q_v+1}e_{\hat{v}} \in A$, and this gives the result.
\end{proof}
\noindent
(We note that indeed Lemma \ref{eachv} can also be established directly from Lemmas \ref{eachw} and
\ref{dual}.)
\medskip
Proposition \ref{partitionmatrixunits} yields that matrix units indexed by the sets $\hat{S_1}$ and $\hat{S_2}$ are in
$A$. In order to show that all the matrix units $\{e_{i,j}|1\leq i,j\leq d\}$ are in $A$, we need to provide a
``bridge'' between these two subsets of matrix units. That connection is made in the following Proposition, which
provides the last major piece of the puzzle.
\begin{proposition}\label{e1dinA}
$$e_{1,d}\in A \ \mbox{ and } \ e_{d,1}\in A.$$
\end{proposition}
\begin{proof}
Because we have assumed that we have placed the elements from The List in a manner ensured by Proposition
\ref{possibleplacement}, there exists $M \in \{X_{q+2},X_{q+3},...,X_n\}$ and an integer $l \in \{1,2,...,d\}$ for
which $l\sim 1$, and for which the $(l,d)$ entry of $M$ is $x_1^{d-1}$. That is, $e_{l}Me_d= x_1^{d-1}e_{l,d}\in A$.
But because $l\sim 1$, Proposition \ref{partitionmatrixunits} gives that $e_{1,l}\in A$. Thus
$e_{1,l}x_1^{d-1}e_{l,d}\in A$, so
$$x_1^{d-1}e_{1,d}\in A.$$
We have $y_1e_1 = e_1Y_1e_1\in A$, so that
$$y_1e_1\cdot x_1^{d-1}e_{1,d}=y_1x_1^{d-1}e_{1,d} =
y_1x_1x_1^{d-2}e_{1,d}\in A.$$
Now choose any $w$ with $2\leq w \leq n$. Again using the
hypothesis that we have placed the elements from The List in a
manner ensured by Proposition \ref{possibleplacement}, there exists
$M \in \{X_{q+2},X_{q+3},...,X_n\}$ and $w'\in \{1,...,d\}$ for
which $w'\sim w$, and
$$e_{w'}Me_d=x_wx_1^{d-2}e_{w',d}\in A.$$
Write $w = q_wd + \hat{w}$ with $1\leq \hat{w}\leq d$. Then
$w\sim \hat{w}$ by definition, and so we get $w'\sim \hat{w}$. So by
Proposition \ref{partitionmatrixunits}, $e_{\hat{w},w'}\in A$. In
addition, $e_1Y_{q_w+1}e_{\hat{w}}=y_we_{1,\hat{w}}\in A$. So we
get
$$y_we_{1,\hat{w}}e_{\hat{w},w'}x_wx_1^{d-2}e_{w',d}\in A,$$
so that $y_wx_wx_1^{d-2}e_{1,d}\in A$ for each $w$ with $2\leq w
\leq n$. This, together with the previously established
$y_1x_1x_1^{d-2}e_{1,d}\in A$, gives
$$\sum_{w=1}^n y_wx_wx_1^{d-2}e_{1,d} = (\sum_{w=1}^n y_wx_w)x_1^{d-2}e_{1,d}
= 1_K\cdot x_1^{d-2}e_{1,d} \in A$$
so that
$$x_1^{d-2}e_{1,d}\in A.$$
By a procedure analogous to the one we have just completed, which
shows how to obtain $x_1^{d-2}e_{1,d}\in A$ starting from
$x_1^{d-1}e_{1,d}\in A$, we can show that each of the elements
$$x_1^{d-3}e_{1,d},\hspace{.05in} x_1^{d-4}e_{1,d},\hspace{.05in}... \hspace{.05in}, \hspace{.05in}
x_1e_{1,d} \in A,$$
the last of which similarly gives $(\sum_{w=1}^n y_wx_w)e_{1,d} \in
A$, which then finally yields
$$e_{1,d} \in A$$
as desired. That $e_{d,1}\in A$ follows from Lemma \ref{dual}.
\end{proof}
We finally are in a position to prove the main result of this article.
\begin{theorem}\label{Main}
Let $d,n$ be positive integers, and $K$ any field. Let $L_{K,n} = L_n$ denote the Leavitt algebra of type $(1,n-1)$
with coefficients in $K$. Then $L_n\cong {\rm M}_d(L_n)$ if and only if ${\rm gcd}(d,n-1)=1$.
\end{theorem}
\begin{proof} By Proposition \ref{moduletypeofmatrices}, if
${\rm gcd}(d,n-1)>1$ then the module type of ${\rm M}_d(L_n)$ is not $(1,n-1)$, so that ${\rm M}_d(L_n)$ and $L_n$
cannot be isomorphic in this case.
For the implication of interest, suppose ${\rm gcd}(d,n-1)=1$, and suppose $d<n$. By Corollary \ref{method}, we need
only show that the set $A = \{X_1,...,X_n,Y_1,...,Y_n\}$ satisfies the three indicated properties. That $X_iY_j =
\delta_{i,j}I$ follows directly by the definition of these matrices and Lemma \ref{XYequalsI}. The equation
$\sum_{j=1}^n Y_jX_j = I$ follows from Lemmas \ref{YiXi1toqplus1}, \ref{IminusEsinA}, and \ref{Listhasdagger}.
For the final property, we must show that $A=<\{X_1,...,X_n,Y_1,...,Y_n\}>={\rm M}_d(L_n)$. It suffices to show that
$x_we_{i,j}\in A$ for all $1\leq w \leq n$ and all $i,j\in \{1,2,...,d\}$, since by Lemma \ref{dual} this will yield
$y_we_{i,j}\in A$ for all $1\leq w \leq n$ and all $i,j\in \{1,2,...,d\}$, and these two collections together clearly
generate all of ${\rm M}_d(L_n)$.
By Proposition \ref{possibleplacement} we may assume that the elements from The List have been placed appropriately in
the matrices $X_{q+2},...,X_n$. Now let $i,j\in \{1,2,...,d\}$. If $i\sim j$ then $e_{i,j}\in A$ by Proposition
\ref{partitionmatrixunits}. So suppose $i\in \hat{S_1}$ and $j\in \hat{S_2}$. Then $i\sim 1$ and $j\sim d$, so
$e_{i,1}$ and $e_{d,j}$ are each in $A$, again by Proposition \ref{partitionmatrixunits}. But Proposition \ref{e1dinA}
yields $e_{1,d}\in A$, so that
$$e_{i,1}e_{1,d}e_{d,j}=e_{i,j}\in A.$$
The situation where $i\in \hat{S_2}$ and $j\in \hat{S_1}$ is identical, and thus yields $e_{i,j}\in A$ for all $i,j\in
\{1,2,...,d\}$. Finally, since each of the elements $\{x_w|1\leq w \leq n\}$ is contained as an entry in one of the
matrices $X_1,...,X_{q+1}$, we can indeed generate all elements of the desired form in $A$. Thus we have shown that
for ${\rm gcd}(d,n-1)=1$ and $d<n$ we have $L_n\cong {\rm M}_d(L_n)$.
To finish the proof of our main result we need only show that the desired isomorphism holds in case $d\geq n$. Write
$d=q'(n-1)+d'$ with $1\leq d' \leq n-1$. Then easily ${\rm gcd}(d',n-1)=1$, so the previous paragraph yields $L_n\cong
{\rm M}_{d'}(L_n)$. But then also $d\equiv d'$ (mod $n-1$), so by Proposition \ref{Basicprops}(1) we get ${\rm
M}_d(L_n)\cong {\rm M}_{d'}(L_n)\cong L_n$, and we are done.
\end{proof}
Notice that Theorem \ref{Main} does not depend on the choice of the positions of the elements from The List in the
non-specified entries of the matrices $X_{q+2}, \dots ,X_n$, beyond requiring that the positions be consistent with the
placement condition ensured by Proposition \ref{possibleplacement}.
\begin{example}
{\rm We indicated in Section \ref{findmatrices} that $L_5\cong {\rm M}_3(L_5)$; in fact, we provided there five
different sets of appropriate generating matrices of ${\rm M}_3(L_5)$. Here is yet another set, built by using the
recipe provided in Theorem \ref{Main}. In this case we have $n=5, d=3, r=2, r-1=1, s=3-1=2, \hat{S_1}=\{1\},
\hat{S_2}=\{2,3\}, S_1=\{1,4\}, S_2=\{2,3,5\}$. The List consists of the $(n-1)(d-1)+1 = 9$ elements
$\{x_1^2,x_2x_1,x_3x_1,x_4x_1,x_5x_1,x_2,x_3,x_4,x_5\}$. The point to be made here is that only the elements
$x_1^2,x_4x_1$, and $x_4$ can be placed in the row $1$, column $3$ positions of $X_3, X_4, X_5$, since $S_1=\{1,4\}$. }
$$X_1=\begin{pmatrix}x_{1}&0&0\\
x_2&0&0\\
x_3&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_4&0&0\\
x_5&0&0\\
0&1&0\end{pmatrix} \hspace{.25in}$$
$$X_3=\begin{pmatrix}0&0&x_4\\
0&0&x_3x_1\\
0&0&x_2\end{pmatrix} \hspace{.25in}
X_4= \begin{pmatrix}0&0&x_1^2\\
0&0&x_2x_1\\
0&0&x_3\end{pmatrix} \hspace{.25in}
X_5= \begin{pmatrix}0&0&x_4x_1\\
0&0&x_5\\
0&0&x_5x_1\end{pmatrix} \hspace{.25in} $$
\end{example}
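The relations $X_iY_j=\delta_{i,j}I$ and $\sum_i Y_iX_i = I$ for the five matrices above can also be verified by machine. The following self-contained Python sketch (not part of the proof) encodes elements of $L_5$ in the normal form obtained by rewriting a junction $y_5x_5$ as $1-\sum_{j<5}y_jx_j$, builds each $Y_i$ as the starred transpose of $X_i$, and checks both relations:

```python
N = 5  # working in L_5; an element is {(alpha, beta): coeff} for y_alpha x_beta

def reduce_mono(alpha, beta, c, out):
    # rewrite y_N x_N -> 1 - sum_{j<N} y_j x_j at the alpha/beta junction
    if alpha and beta and alpha[-1] == N and beta[0] == N:
        reduce_mono(alpha[:-1], beta[1:], c, out)
        for j in range(1, N):
            reduce_mono(alpha[:-1] + (j,), (j,) + beta[1:], -c, out)
    else:
        out[(alpha, beta)] = out.get((alpha, beta), 0) + c

def add(u, v):
    out = dict(u)
    for k, c in v.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def mul(u, v):
    # (y_a x_b)(y_g x_dl): cancel x_b against y_g via x_i y_j = delta_{ij}
    out = {}
    for (a, b), c in u.items():
        for (g, dl), e in v.items():
            m = min(len(b), len(g))
            if b[len(b) - m:] != tuple(reversed(g[:m])):
                continue  # a mismatched x_i y_j kills the whole term
            reduce_mono(a + g[m:], b[:len(b) - m] + dl, c * e, out)
    return {k: c for k, c in out.items() if c}

def x(*idx):   # the monomial x_{i1} x_{i2} ...
    return {((), tuple(idx)): 1}

def star(u):   # x_w x_1^t |--> y_1^t y_w, extended to all monomials
    return {(tuple(reversed(b)), tuple(reversed(a))): c for (a, b), c in u.items()}

ONE = {((), ()): 1}
X1 = [[x(1), {}, {}], [x(2), {}, {}], [x(3), {}, {}]]
X2 = [[x(4), {}, {}], [x(5), {}, {}], [{}, ONE, {}]]
X3 = [[{}, {}, x(4)], [{}, {}, x(3, 1)], [{}, {}, x(2)]]
X4 = [[{}, {}, x(1, 1)], [{}, {}, x(2, 1)], [{}, {}, x(3)]]
X5 = [[{}, {}, x(4, 1)], [{}, {}, x(5)], [{}, {}, x(5, 1)]]
Xs = [X1, X2, X3, X4, X5]
Ys = [[[star(Xi[j][i]) for j in range(3)] for i in range(3)] for Xi in Xs]

def mmul(A, B):
    C = [[{} for _ in range(3)] for _ in range(3)]
    for i in range(3):
        for j in range(3):
            for k in range(3):
                C[i][j] = add(C[i][j], mul(A[i][k], B[k][j]))
    return C

I3 = [[ONE if i == j else {} for j in range(3)] for i in range(3)]
Z3 = [[{} for _ in range(3)] for _ in range(3)]
assert all(mmul(Xs[i], Ys[j]) == (I3 if i == j else Z3)
           for i in range(5) for j in range(5))
S = Z3
for i in range(5):
    P = mmul(Ys[i], Xs[i])
    S = [[add(S[r][c], P[r][c]) for c in range(3)] for r in range(3)]
assert S == I3
print("X_i Y_j = delta_{ij} I and sum_i Y_i X_i = I both check out")
```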
We finish this section by describing some automorphisms of $L_n$ which arise as a consequence of Theorem \ref{Main}.
There are many possible assignments of the elements on The List to the ``boxes'' of the matrices $X_{q+2},X_{q+3}, ...,
X_n$ consistent with the method described in Proposition \ref{possibleplacement}. In particular, this freedom of
assignment affords an action of the bisymmetric group $S_{d_1}\times S_{d_2}$ on each of the matrices $X_{q+3}, ...,
X_n$ by permuting the entries inside $\hat{S_1}$ and $\hat{S_2}$. Similarly, we have an action of $S_{e_1}\times
S_{e_2}$ on $X_{q+2}$. This freedom of assignment also allows an action on each of the $d$ rows in the generating
matrices. Specifically, for each row $i$ ($1\leq i \leq d$), we can permute the $(i,d)$-entries of the $n-(q+2)$
matrices $X_{q+3}, ..., X_n$; each of the $d\cdot(n-(q+2))!$ such permutations will yield a different set of generators
for ${\rm M}_d(L_n)$. Thus we have described
$$d\cdot (n-(q+2))!e_1!e_2!(d_1!d_2!)^{n-(q+2)}$$
permutations on the entries of the matrices $X_{q+2},X_{q+3}, ..., X_n$, each of which induces a distinct automorphism
of ${\rm M}_d(L_n)$. In turn, by Theorem \ref{Main}, each then induces an automorphism of $L_n$ whenever ${\rm
gcd}(d,n-1)=1$. These permutations yield automorphisms on $L_n$ which generalize the specific automorphisms of ${\rm
M}_3(L_5)$ described in Section \ref{findmatrices}.
Intriguingly, the types of automorphisms described here and in Section \ref{findmatrices} still do not in general
completely describe all the automorphisms of $L_n$ which arise from producing appropriate sets of generators in ${\rm
M}_d(L_n)$. We present here two additional specific examples of generating sets inside various-sized matrix rings. In
both cases, the entries used to build the generating matrices $X_1,...,X_n,Y_1,...,Y_n$ are monomials of degree at most
2. In contrast to the previously presented examples, because The List contains monomials of degree up to and including
$d-1$, the examples given here cannot be realized as arising from automorphisms induced by permutations of the entries
of a specific set of generators as constructed in Theorem \ref{Main}.
\medskip
{\bf Example}: A set of generators of ${\rm M}_4(L_6)\cong L_6$. (So $d=4$, $d-1=3$; note that there are no monomials
of degree 3 used in this set.)
$$X_1=\begin{pmatrix}x_{1}&0&0&0\\
x_2&0&0&0\\
x_3&0&0&0\\
x_4&0&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_5&0&0&0\\
x_6&0&0&0\\
0&1&0&0\\
0&0&1&0\end{pmatrix} \hspace{.25in}
X_3=\begin{pmatrix}0&0&0&x_1^2\\
0&0&0&x_2x_1\\
0&0&0&x_3x_1\\
0&0&0&x_4x_1\end{pmatrix} \hspace{.25in}$$
$$X_4=\begin{pmatrix}0&0&0&x_5x_1\\
0&0&0&x_6x_1\\
0&0&0&x_1x_2\\
0&0&0&x_2^2\end{pmatrix} \hspace{.25in}
X_5=\begin{pmatrix}0&0&0&x_3x_2\\
0&0&0&x_4x_2\\
0&0&0&x_5x_2\\
0&0&0&x_6x_2\end{pmatrix} \hspace{.25in}
X_6=\begin{pmatrix}0&0&0&x_5\\
0&0&0&x_6\\
0&0&0&x_3\\
0&0&0&x_4\end{pmatrix} \hspace{.25in}$$
\medskip
{\bf Example}: A set of generators of ${\rm M}_5(L_9)\cong L_9$. (So $d=5$, $d-1=4$; note that there are no monomials
of degree $3$ or $4$ used in this set. Also note that, unlike the matrices constructed in Theorem \ref{Main}, there
are entries other than $1_K$ in column $2$.)
$$X_1=\begin{pmatrix}x_{1}&0&0&0&0\\
x_2&0&0&0&0\\
x_3&0&0&0&0\\
x_4&0&0&0&0\\
x_5&0&0&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_6&0&0&0&0\\
x_7&0&0&0&0\\
x_8&0&0&0&0\\
x_9&0&0&0&0\\
0&x_9&0&0&0\end{pmatrix} \hspace{.25in}
X_3=\begin{pmatrix}0&x_1&0&0&0\\
0&x_2&0&0&0\\
0&x_3&0&0&0\\
0&x_4&0&0&0\\
0&x_5&0&0&0\end{pmatrix} \hspace{.25in}$$
$$X_4=\begin{pmatrix}0&x_6&0&0&0\\
0&x_7&0&0&0\\
0&x_8&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\end{pmatrix} \hspace{.25in}
X_5=\begin{pmatrix}0&0&0&0&x_1^2\\
0&0&0&0&x_2x_1\\
0&0&0&0&x_3x_1\\
0&0&0&0&x_4x_1\\
0&0&0&0&x_5x_1\end{pmatrix} \hspace{.25in}
X_6=\begin{pmatrix}0&0&0&0&x_6x_1\\
0&0&0&0&x_7x_1\\
0&0&0&0&x_8x_1\\
0&0&0&0&x_9x_1\\
0&0&0&0&x_9\end{pmatrix} \hspace{.25in}$$
$$X_7=\begin{pmatrix}0&0&0&0&x_1x_2\\
0&0&0&0&x_2^2\\
0&0&0&0&x_3x_2\\
0&0&0&0&x_4x_2\\
0&0&0&0&x_5x_2\end{pmatrix} \hspace{.25in}
X_8=\begin{pmatrix}0&0&0&0&x_6x_2\\
0&0&0&0&x_7x_2\\
0&0&0&0&x_8x_2\\
0&0&0&0&x_8\\
0&0&0&0&x_9x_2\end{pmatrix} \hspace{.25in}
X_9=\begin{pmatrix}0&0&0&0&x_6\\
0&0&0&0&x_7\\
0&0&0&0&x_3\\
0&0&0&0&x_4\\
0&0&0&0&x_5\end{pmatrix} \hspace{.25in}$$
\bigskip
\section{Applications to C$^*$-algebras and questions about $K_0$}\label{applications}
As mentioned in the Introduction, one consequence of our main result is that we are able to directly and explicitly
establish an affirmative answer to the question posed in \cite{PS}, page 8, regarding isomorphisms between matrix
rings over Cuntz algebras.
\begin{theorem}\label{C*-fact1}
$ {\rm M}_d(\mathcal{O}_{n}) \cong \mathcal{O}_{n}$ if and only if ${\rm gcd}(d,n-1)=1$.
\end{theorem}
\begin{proof} If ${\rm gcd}(d,n-1)\ne 1$, then ${\rm M}_d(\mathcal{O}_n)\not\cong
\mathcal{O}_n$ by \cite{PS}, Corollary 2.4.
So suppose conversely that ${\rm gcd}(d,n-1)=1$. Let $\{ s_1, \dots ,s_n\}\subset \mathcal{O}_n$ be the orthogonal isometries
generating $\mathcal{O}_n$. These satisfy:
\begin{enumerate}
\item[(i)] For every $1\leq i,j\leq n$, $s_i^*s_j=\delta _{i,j}$, and
\item[(ii)] $1=\sum\limits_{i=1}^ns_is_i^*$.
\end{enumerate}
Now consider the complex Leavitt algebra $L_{\mathbb{C},n}$, and notice that by Proposition \ref{Basicprops}(2) there
exists a (unique) $\mathbb{C}$-algebra morphism
$$\varphi : L_{\mathbb{C},n} \rightarrow \mathcal{O}_n $$
given by the extension of the assignment $x_i \mapsto s_i^* $ and $y_i \mapsto s_i $ for $1\leq i \leq n$.
Since $L_{\mathbb{C},n}$ is a
simple algebra, $L_{\mathbb{C},n}\cong \varphi (L_{\mathbb{C},n})$. But $\mathcal{P}_n=\varphi (L_{\mathbb{C},n})$ is
the complex dense $\ast$-subalgebra of $\mathcal{O}_n$ generated by $\{ s_1, \dots ,s_n\}$ (as a complex algebra). Now
consider the morphism
$$\varphi _d: M_d(L_{\mathbb{C},n})\rightarrow
M_d(\mathcal{O}_n)$$ induced by $\varphi$. (In particular, $\varphi_d(e_{i,j}) = e_{i,j}$ for each matrix unit
$e_{i,j}$, $1\leq i,j \leq d$.) Notice that, if $X_i$ ($1\leq i\leq n$) is any of the matrices defined in Section
\ref{generators} then, by definition of the elements of The List, $\varphi _d(Y_i)=\varphi _d(X_i)^*$ with respect to
the involution $\ast$ of $M_d(\mathcal{O}_n)$. So, by defining $S_i=\varphi _d(Y_i)$, we get $S^*_i=\varphi _d(X_i)$,
and thus $\{ S_1, \dots ,S_n\}\subset M_d(\mathcal{O}_n)$ is a family of $n$ orthogonal isometries satisfying
$I_d=\sum\limits_{i=1}^nS_iS_i^*$. Hence, by \cite{Cu}, Theorem 1.12, there exists an isomorphism
$$\Phi :\mathcal{O}_n\rightarrow C^*(S_1, \dots , S_n)\subseteq M_d(\mathcal{O}_n)$$
defined by the rule $\Phi (s_i)=S_i$ for every $1\leq i\leq n$. Now,
applying Theorem \ref{Main} to $\mathcal{P}_n$ and
$M_d(\mathcal{P}_n)$ (via $\varphi$), for every $1\leq i,j\leq d$
and for every $1\leq k\leq n$ we have
$$s_ke_{i,j}=\varphi_d(y_ke_{i,j})\in C^*(S_1, \dots , S_n),$$
so that the generators of $M_d(\mathcal{O}_n)$ lie in $C^*(S_1, \dots , S_n)$. Thus, $\mathcal{O}_n\cong
M_d(\mathcal{O}_n)$ via $\Phi$, so we are done.
\end{proof}
As mentioned previously, the affirmative answer to the isomorphism question for matrix rings over Cuntz algebras
provided in Theorem \ref{C*-fact1} is indeed already known, a byproduct of \cite{Ph1}, Theorem 4.3(1). However, the
method we have provided in Theorem \ref{C*-fact1} is significantly more elementary, and provides an explicit
description of the germane isomorphisms (such an explicit description has previously not been known).
\smallskip
A second interesting consequence of Theorem \ref{Main} is that the class of matrix rings over Leavitt algebras is
classifiable using K-theoretic invariants. (For additional information about purely infinite simple algebras and their
$K$-theory, see \cite{AGP}.)
\begin{theorem}\label{C*-fact2}
Let $\mathcal{L}$ denote the set of purely infinite simple
$K$-algebras
$$\{{\rm M}_d(L_n) | d,n \in \mathbb{N}\}.$$
Let $B,B'\in \mathcal{L}$. Then $B\cong B'$ if and only if there
is an isomorphism $\phi: K_0(B)\rightarrow K_0(B')$ for which
$\phi([1_B])=[1_{B'}]$.
\end{theorem}
\begin{proof}
It is well known (see e.g. \cite{Ros}, page 5) that any unital isomorphism $f: B\rightarrow B'$ induces a group
isomorphism $K_0(f): K_0(B)\rightarrow K_0(B')$ sending $[1_B]$ to $[1_{B'}]$.
To see the converse, first notice that, for any $B\in \mathcal{L}$, $B={\rm M}_d(L_n)$ for suitable $d, n\in \mathbb{N}$. It is
well known that
$$(K_0({\rm M}_d(L_n)), [1_{{\rm M}_d(L_n)}])\cong (\mathbb{Z}/(n-1)\mathbb{Z}, [d])$$
(see e.g. \cite{B} or \cite{AGP}). Hence, if $B'={\rm M}_k(L_m)$ for suitable $k, m\in \mathbb{N}$, then the existence of an
isomorphism $\phi: K_0(B)\rightarrow K_0(B')$ forces that $n=m$.
Now, since every automorphism of $\mathbb{Z}/(n-1)\mathbb{Z}$ is given by multiplication by an element $1\leq l\leq n-1$ such that
${\rm gcd}(l,n-1)=1$, the hypothesis $\phi([1_B])=[1_{B'}]$ yields that $[k]=[dl]\in \mathbb{Z}/(n-1)\mathbb{Z}$, i.e., that $k\equiv
dl$ (mod $n-1$). So Proposition \ref{Basicprops}(1) gives that
$${\rm M}_k(L_n)\cong {\rm M}_{dl}(L_n)\cong {\rm M}_d({\rm M}_l(L_n)).$$
Since ${\rm gcd}(l,n-1)=1$, we have ${\rm M}_l(L_n)\cong L_n$ by Theorem \ref{Main}. Hence, ${\rm M}_d({\rm
M}_l(L_n))\cong {\rm M}_d(L_n)$, whence
$${\rm M}_k(L_n)\cong {\rm M}_{dl}(L_n)\cong {\rm M}_d({\rm M}_l(L_n))\cong {\rm M}_d(L_n),$$ as
desired.
\end{proof}
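Implicit in the proof is the elementary fact that, for fixed $n$ and $m=n-1$, the classes $[d]$ and $[k]$ in $\mathbb{Z}/m\mathbb{Z}$ differ by multiplication by a unit exactly when ${\rm gcd}(d,m)={\rm gcd}(k,m)$. A brute-force check of this equivalence in small cases (a sanity check, not part of the proof):

```python
from math import gcd

# Check: exists l with gcd(l, m) = 1 and k = d*l (mod m)
#   if and only if   gcd(d, m) == gcd(k, m).
for n in range(3, 16):
    m = n - 1
    for d in range(1, 3 * m):
        for k in range(1, 3 * m):
            unit_shift = any(k % m == (d * l) % m
                             for l in range(1, m + 1) if gcd(l, m) == 1)
            assert unit_shift == (gcd(d, m) == gcd(k, m))
print("unit-multiple condition matches the gcd condition in all tested cases")
```

Combined with the theorem, this says that for fixed $n$ one has ${\rm M}_d(L_n)\cong {\rm M}_k(L_n)$ exactly when ${\rm gcd}(d,n-1)={\rm gcd}(k,n-1)$.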
A significantly more general C$^*$-algebraic analog of Theorem \ref{C*-fact2} is well-known for the class of unital
purely infinite simple C*-algebras, as a consequence of the powerful work of Kirchberg and Phillips, \cite{K} and
\cite{P}. However, even in the concrete case of the subclass $\{{\rm M}_d(\mathcal{O}_n) | d,n \in \mathbb{N}\}$, the
existence of the previously known isomorphisms in the C$^*$-algebra setting (to wit, the aforementioned results of
R{\o}rdam, Kirchberg and Phillips) depends on deep results which produce no explicit isomorphisms. A natural question in
this context is whether \cite{P}, Theorem 4.2.4, has an algebraic counterpart. In \cite{AAP} the authors establish a
partial affirmative answer to this question for a large class of purely infinite simple algebras.
\section{Graded isomorphisms between Leavitt algebras and their matrix rings.}\label{graded}
In this final section we incorporate the natural ${\mathbb Z}$-grading on the Leavitt algebras into our analysis. As
one consequence, we will show that the sets of matrices which arise in the proof of \cite{L1}, Theorem 5, cannot in
general generate ${\rm M}_d(L_n)$.
The ${\mathbb Z}$-grading on $L_{K,n}$ is given as follows. We define the degree of a monomial of the form
$y_i^tx_j^u$ by setting
$$\deg(y_i^tx_j^u) = u-t,$$
and extending linearly to all of $L_{K,n}$. This is precisely the ${\mathbb Z}$-grading on $L_{K,n}$ induced by setting $\deg(X_i)=1$, $\deg(Y_i)=-1$
in $R = K<X_1,...,X_n,Y_1,...,Y_n>$, and then grading the factor ring $L_n = R/I$ in the natural way.
(We note that the relations which define $L_n$ are homogeneous in this grading of $R$.)
It was shown in
\cite{AAn1} that, in this grading, $(L_n)_0 \cong \lim \limits_{\longrightarrow}{}_{t\in \mathbb{N}}({\rm M}_{n^t}(K))$.
Here the connecting homomorphisms are unital (so that the direct limit is unital); the homomorphism from ${\rm M}_{n^t}(K)$
to ${\rm M}_{n^{t+1}}(K)$ is given by sending any matrix of the form $(a_{i,j})$ to the matrix $(a_{i,j}I_n)$.
We will need the following easily proved result about unital direct limits of rings. For a unital ring $R$, we say
that a finite set $E = \{e_1,...,e_p\}$ of idempotents in $R$ is {\it complete, orthogonal, pairwise isomorphic} in
case $1_R = e_1 + ... + e_p$, $e_ie_j=0$ for all $i\neq j$, and $Re_i\cong Re_j$ as left $R$-modules for all $1\leq
i,j \leq p$. In particular, in this situation we have $R\cong \oplus_{i=1}^p Re_i$ as left $R$-modules.
\begin{lemma}\label{isomorphicinsubrings}
Suppose $R$ is a unital direct limit of rings $R =\lim \limits_{\longrightarrow}{}_{t\in \mathbb{N}}(R_t)$ (so we are assuming
that the connecting homomorphism $R_t \rightarrow R_{t+1}$ is unital for each $t\in \mathbb{N}$). Suppose $R$ contains a complete
orthogonal pairwise isomorphic set of $p$ idempotents. Then there exists $m\in \mathbb{N}$ so that $R_m$ contains a complete
orthogonal pairwise isomorphic set of $p$ idempotents.
\end{lemma}
\begin{proof}
Let $E = \{e_1,...,e_p\}$ denote the indicated set in $R$. It is well known (see e.g. \cite{J}, Proposition III.7.4)
that for idempotents $e$ and $f$ in any ring $R$, $Re\cong Rf$ as left $R$-modules if and only if there exist elements
$x,y$ in $R$ such that $x=exf$, $y=fye$, $xy=e$, and $yx=f$. For each two-element subset $\{e_i,e_j\}$ of $E$ let
$\{x_{i,j},y_{i,j}\}$ denote a pair of associated elements whose existence is ensured by the supposed isomorphism
$Re_i\cong Re_j$. Now pick $m\in \mathbb{N}$ with the property that $R_m$ contains the finite set $\{x_{i,j},y_{i,j}\mid
1\leq i,j \leq p\}$; such $m$ exists by definition of direct limit. Then necessarily $R_m$ contains $E$, as
$x_{i,j}y_{i,j}=e_i$ for each $1\leq i \leq p$. Now invoking the previously cited result from \cite{J}, and using the
hypothesis that the direct limit has unital connecting homomorphisms, we conclude that $E$ is a complete orthogonal
pairwise isomorphic set of $p$ idempotents in $R_m$.
\end{proof}
\begin{lemma}\label{matrixsizes}
Let $S$ be any unital ring, let $K$ be a field, and let $p$ be any positive integer.
\begin{enumerate}
\item If $p\mid d$, then the matrix ring $T={\rm M}_d(S)$ contains
a complete, orthogonal, pairwise isomorphic set of $p$ idempotents.
\item If the matrix ring $T={\rm M}_d(K)$ contains a complete, orthogonal, pairwise isomorphic set of $p$ idempotents,
then $p\mid d$.
\end{enumerate}
\end{lemma}
\begin{proof} For (1), writing $d=pq$ and using the isomorphism ${\rm M}_d(S) \cong {\rm M}_p({\rm M}_q(S))$ produces such a
set, where we take $E$ to be the set of $p$ matrix idempotents in ${\rm M}_p({\rm M}_q(S))$.
For (2), let $E$ be such a set. The ring $T={\rm M}_d(K)$ is semisimple artinian, with composition length $d$. As the
left $T$-modules $Te_i$ generated by the elements of $E$ are pairwise isomorphic, each must have the same composition
length, which we denote by $q$. But $T\cong \oplus_{i=1}^p Te_i$, which yields that $pq=d$.
\end{proof}
With these two lemmas in hand, we are ready to prove the main result of this section.
\begin{proposition}\label{gradediso}
The algebras $L_n$ and ${\rm M}_d(L_n)$ are isomorphic as $\mathbb{Z}$-graded algebras if and only if there exists
$\alpha \in \mathbb{N}$ such that $d \mid n^{\alpha}$.
\end{proposition}
\begin{proof}
First suppose there exists $\alpha \in \mathbb{N}$ such that $d \mid n^{\alpha}$. Then the explicit isomorphism
provided in \cite{PS}, Proposition 2.5, between the indicated matrix rings over Cuntz algebras is easily seen to
restrict to an isomorphism of the analogously-sized matrix rings over Leavitt algebras. Furthermore, the isomorphism
preserves the appropriate grading on these algebras, thus yielding the first implication. (For clarity, an explicit
example of this isomorphism in a particular case is given below.)
Conversely, suppose the algebras $L_n$ and ${\rm M}_d(L_n)$ are isomorphic as $\mathbb{Z}$-graded algebras. Then
necessarily the $0$-components of these algebras are isomorphic. It is easy to show that the $0$-component of ${\rm
M}_d(L_n)$ is isomorphic to ${\rm M}_d (\lim \limits_{\longrightarrow}{}_{t\in \mathbb{N}}(M_{n^t}(K)))$. Now let $p$ be any
prime number with $p\mid d$. Then by Lemma \ref{matrixsizes}(1), ${\rm M}_d (\lim \limits_{\longrightarrow}{}_{t\in
\mathbb{N}}(M_{n^t}(K)))$ contains a complete orthogonal pairwise isomorphic set of $p$ idempotents. Using the isomorphism
between $0$-components, we get a complete orthogonal pairwise isomorphic set of $p$ idempotents in $\lim
\limits_{\longrightarrow}{}_{t\in \mathbb{N}}({\rm M}_{n^t}(K))\cong (L_n)_0$. But by Lemma \ref{isomorphicinsubrings}, this
implies that there exists an integer $u$ so that the matrix ring ${\rm M}_{n^u}(K)$ contains a complete orthogonal
pairwise isomorphic set of $p$ idempotents. By Lemma \ref{matrixsizes}(2) this implies that $p\mid n^u$, so that $p\mid
n$ as $p$ is prime. Thus we have shown that any prime $p$ which divides $d$ also necessarily divides $n$, so that
$d$ indeed divides some power of $n$ as desired.
\end{proof}
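The divisibility criterion $d\mid n^{\alpha}$ for some $\alpha$ amounts to requiring that every prime divisor of $d$ also divide $n$, which can be tested by repeatedly stripping common factors. A small illustrative sketch (not part of the text; the function name is ours):

```python
from math import gcd

def divides_some_power(d, n):
    """True exactly when d | n^alpha for some alpha >= 0,
    i.e. every prime divisor of d also divides n."""
    while d > 1 and (g := gcd(d, n)) > 1:
        d //= g          # remove prime factors shared with n
    return d == 1        # nothing is left iff all primes of d divide n

assert divides_some_power(3, 6) and divides_some_power(4, 6) and divides_some_power(12, 6)
assert not divides_some_power(5, 6) and not divides_some_power(3, 5)
```

For instance $12\mid 6^2$ but $5\nmid 6^{\alpha}$ for any $\alpha$, matching the graded-isomorphism criterion of Proposition \ref{gradediso}.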
\begin{corollary}\label{gradedisoonlywhenddividesnalpha}
Suppose ${\rm gcd}(d,n-1)=1$. Suppose $W = \{X_1,...,X_n,Y_1,...,Y_n\}$ is a set of $2n$ matrices in ${\rm M}_d(L_n)$
which satisfy the conditions of Proposition \ref{Basicprops}(2). Suppose further that each entry of $X_i$ (resp.
$Y_i$) is either $0$ or a monomial of degree 1 (resp. degree -1). If $W$ generates ${\rm M}_d(L_n)$ as a
$K$-algebra, then
$d\mid n^{\alpha}$ for some positive integer $\alpha$.
In particular, let $W$ be the set of $2n$ matrices
$\{X_1,...,X_n,Y_1,...,Y_n\}$ constructed in \cite{L1}, Theorem 5. Then $W$ generates ${\rm M}_d(L_n)$ as a $K$-algebra
if and only if $d\mid n^{\alpha}$ for some positive integer $\alpha$.
\end{corollary}
\begin{proof}
If $W$ satisfies the indicated conditions, then the homomorphism from $L_n$ to ${\rm M}_d(L_n)$ induced by the
assignment $x_i \mapsto X_i$ and $y_i \mapsto Y_i$ in fact would be a graded isomorphism, and the result follows from
Proposition \ref{gradediso}.
In the specific case of the $2n$ matrices described in \cite{L1}, Theorem 5, the matrices are of the indicated type,
and were shown in \cite{PS} to generate ${\rm M}_d(L_n)$.
\end{proof}
It is instructive to compare and contrast the two types of generating sets of ${\rm M}_d(L_n)$ which can be constructed
in case $d\mid n^{\alpha}$ for some $\alpha$. Let $d=3,n=6$. Here are the six matrices $\{X_1,...,X_6\}$ which arise in
the aforementioned construction presented in \cite{PS}.
$$X_1=\begin{pmatrix}x_{1}&0&0\\
x_2&0&0\\
x_3&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_4&0&0\\
x_5&0&0\\
x_6&0&0\end{pmatrix} \hspace{.25in}
X_3=\begin{pmatrix}0&x_1&0\\
0&x_2&0\\
0&x_3&0\end{pmatrix} \hspace{.25in}$$
$$X_4=\begin{pmatrix}0&x_4&0\\
0&x_5&0\\
0&x_6&0\end{pmatrix} \hspace{.25in}
X_5= \begin{pmatrix}0&0&x_1\\
0&0&x_2\\
0&0&x_3\end{pmatrix} \hspace{.25in}
X_6= \begin{pmatrix}0&0&x_4\\
0&0&x_5\\
0&0&x_6\end{pmatrix} \hspace{.25in} $$
In particular, all of these are of degree $1$ in the ${\mathbb Z}$-grading, so that the assignment $x_i \mapsto X_i$
(and $x^*_i \mapsto X^*_i$) from $L_6$ to ${\rm M}_3(L_6)$ extends to a graded homomorphism, which can be shown in a
straightforward way (using the argument given in \cite{PS}) to be a graded isomorphism.
\smallskip
In contrast, we now present one (of many) sets of generators for ${\rm M}_3(L_6)$ which arises from our construction.
When $n=6$, $d=3$, the appropriate data from our main result are as follows: $6 = 1\cdot 3 + 3$, so $r=3$, $r-1 = 2$,
$s=3-2 = 1$, $\hat{S_1}=\{1,2\}$, $\hat{S_2} = \{3\}$, $S_1=\{1,2,4,5\}$, $S_2 = \{3,6\}$. So one possible collection
of appropriate generating matrices in ${\rm M}_3(L_6)$ is
$$X_1=\begin{pmatrix}x_{1}&0&0\\
x_2&0&0\\
x_3&0&0\end{pmatrix} \hspace{.25in}
X_2=\begin{pmatrix}x_4&0&0\\
x_5&0&0\\
x_6&0&0\end{pmatrix} \hspace{.25in}
X_3=\begin{pmatrix}0&1&0\\
0&0&x_1^2\\
0&0&x_3x_1\end{pmatrix} \hspace{.25in}$$
$$X_4=\begin{pmatrix}0&0&x_2x_1\\
0&0&x_4x_1\\
0&0&x_6x_1\end{pmatrix} \hspace{.25in}
X_5= \begin{pmatrix}0&0&x_5x_1\\
0&0&x_2\\
0&0&x_3\end{pmatrix} \hspace{.25in}
X_6= \begin{pmatrix}0&0&x_4\\
0&0&x_5\\
0&0&x_6\end{pmatrix} \hspace{.25in} $$
\medskip
We close this article by providing a brief historical perspective on this question. As mentioned earlier, Leavitt
showed in \cite{L1} that if $R$ has module type $(1,n-1)$, then ${\rm M}_d(R)$ has module type $(1, \frac{n-1}{{\rm gcd}(d,n-1)})$. The validity of this result is justified by the presentation of an appropriate set of elements inside
${\rm M}_d(R)$. In the situation where $R = L_n$ and ${\rm gcd}(d,n-1)=1$, it turns out that the appropriate set of
elements inside ${\rm M}_d(R)$ is simply a lexicographic ordering of the variables $\{x_1,...,x_n,y_1,...,y_n\}$, using
a straightforward algorithm. (An example of this process was given in Section \ref{findmatrices}.) In the particular
case when $d\mid n^{\alpha}$ for some positive integer $\alpha$, the set of elements so constructed coincides with the set of elements analyzed by Paschke
and Salinas in \cite{PS}; furthermore, this set just happens to generate all of ${\rm M}_d(L_n)$. However, as noted in
Corollary \ref{gradedisoonlywhenddividesnalpha}, the analogous set of elements cannot generate all of ${\rm M}_d(L_n)$
when $d$ is not a divisor of some power of $n$. Thus, in order to establish our main result (Theorem \ref{Main}), it
was necessary to build a completely different set of tools than those which had already been used in this arena.
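The coprimality criterion just discussed is easy to sanity-check numerically. The following sketch (helper names are ours, not from the paper) encodes Leavitt's module-type formula and the coprimality criterion of the main result, and checks them on the $n=6$, $d=3$ example worked out earlier in this section:

```python
from math import gcd

def matrix_module_type(n: int, d: int) -> tuple:
    """Leavitt's formula: if R has module type (1, n-1),
    then M_d(R) has module type (1, (n-1) / gcd(d, n-1))."""
    return (1, (n - 1) // gcd(d, n - 1))

def is_isomorphic_to_Ln(n: int, d: int) -> bool:
    """Main result: M_d(L_n) is isomorphic to L_n iff gcd(d, n-1) = 1."""
    return gcd(d, n - 1) == 1

# The worked example from this section: n = 6, d = 3, gcd(3, 5) = 1.
assert matrix_module_type(6, 3) == (1, 5)  # module type is unchanged
assert is_isomorphic_to_Ln(6, 3)           # so M_3(L_6) is isomorphic to L_6
# A non-example: d = 5 gives gcd(5, 5) = 5, so no isomorphism.
assert not is_isomorphic_to_Ln(6, 5)
assert matrix_module_type(6, 5) == (1, 1)
```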
Corollary \ref{gradedisoonlywhenddividesnalpha} shows that in general we cannot find generating sets of size $2n$
inside ${\rm M}_d(L_n)$ in which each of the entries in the $n$ matrices has degree $1$ (resp., each of the entries in
the $n$ dual matrices has degree $-1$). In our main result we have shown that we can find generating sets of size $2n$
inside ${\rm M}_d(L_n)$ in which each of the entries in the $n$ matrices has degree less than or equal to $d-1$ (resp.,
each of the entries in the $n$ dual matrices has degree greater than or equal to $1-d$). Reflecting on the examples
given at the end of Section \ref{maintheorem}, it would be interesting to know whether in general it is possible to
find generating sets of size $2n$ inside ${\rm M}_d(L_n)$ in which each of the entries in the $n$ matrices has degree
less than or equal to $2$ (resp. each of the entries in the $n$ dual matrices has degree greater than or equal to
$-2$).
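For the $n=6$, $d=3$ generating set displayed above, the degree bound $d-1$ of the main result can be confirmed by tabulating the $\mathbb{Z}$-grading degrees of the nonzero entries by hand (the table below is our transcription of the displayed matrices; zero entries are omitted):

```python
# Degrees of the nonzero entries of the generators X_1,...,X_6 above,
# with deg x_i = 1, deg of a product the sum, and deg 1 = 0.
d = 3
entry_degrees = {
    "X1": [1, 1, 1],  # x1, x2, x3
    "X2": [1, 1, 1],  # x4, x5, x6
    "X3": [0, 2, 2],  # 1, x1^2, x3*x1
    "X4": [2, 2, 2],  # x2*x1, x4*x1, x6*x1
    "X5": [2, 1, 1],  # x5*x1, x2, x3
    "X6": [1, 1, 1],  # x4, x5, x6
}
max_degree = max(deg for degs in entry_degrees.values() for deg in degs)
assert max_degree <= d - 1  # the bound d - 1 = 2 from the main result
```

Here the bound is attained: several entries are products of two variables, of degree exactly $2 = d-1$.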
\bibliographystyle{amsplain}
% Source: https://arxiv.org/abs/math/0612552 (Isomorphisms between Leavitt algebras and their matrix rings)
% Source: https://arxiv.org/abs/1112.5206 (Vanishing of negative $K$-theory in positive characteristic)
\section{Introduction}
In \cite[2.9]{Wei80} Weibel asks if $K_n(X) = 0$ for $n < - \dim X$ for every noetherian scheme $X$ where $K_n$ is the $K$-theory of Bass-Thomason-Trobaugh. This question was answered in the affirmative in \cite{CHSW} for schemes essentially of finite type over a field of characteristic zero. Assuming strong resolution of singularities, it is also answered in the affirmative in \cite{GH10} for schemes essentially of finite type over a field of positive characteristic. Both of these proofs compare $K$-theory with cyclic homology, and then use a cdh descent argument.
Our main theorem is the following.
\begin{theoUn}[{\ref{theo:KtheoryVanishing}}]
Let $X$ be a quasi-excellent noetherian scheme and $p$ a prime that is nilpotent on $X$. Then $K_n(X) \otimes \mathbb{Z}[\tfrac{1}{p}] = 0$ for $n < - \dim X$ where $K_n$ is the $K$-theory of Bass-Thomason-Trobaugh.
\end{theoUn}
Our proof can be outlined as follows. We reduce the vanishing of $K_n(X) \otimes \mathbb{Z}[\tfrac{1}{p}]$ to the vanishing of homotopy invariant $K$-theory using a result of Weibel that in positive characteristic $p$ compares $K$-theory with homotopy invariant $K$-theory $KH_n$ away from $p$-torsion. Cisinski has shown that homotopy invariant $K$-theory is representable in the Morel-Voevodsky stable homotopy category \cite[Theorem 2.20]{Cis13}. Furthermore, he shows using Ayoub's proper base change theorem (\cite[Corollary 1.7.18]{Ayo07}) and Voevodsky's work on cd structures (\cite{Voe10a}, \cite{Voe10b}) that every object in the stable homotopy category satisfies cdh descent \cite[Proposition 3.7]{Cis13}. In particular, homotopy invariant $K$-theory satisfies cdh descent for noetherian schemes of finite Krull dimension (without restriction on the characteristic).\footnote{That homotopy invariant $K$-theory satisfies cdh descent was shown in characteristic zero in \cite{Hae04} for schemes essentially of finite type over a field and the proof goes through in positive characteristic if embedded resolution of singularities turns out to hold.} Using the cdh descent spectral sequence, the desired vanishing is implied by the vanishing of the cdh sheaves associated to $KH_n \otimes \mathbb{Z}[\tfrac{1}{p}]$ for $n < 0$. To deduce this, we apply a theorem of Gabber on alterations as a replacement for resolution of singularities.
The cdh topology was introduced to apply resolution of singularities. To apply this theorem of Gabber, we refine the cdh topology to a topology that we call the {$l$dh}~topology. This topology is by definition generated by the cdh topology, and morphisms which are flat finite surjective and of constant degree prime to a chosen prime $l$. This method of applying the {$l$dh}~topology to replace resolution of singularities arguments with Gabber's theorem on alterations can also be used in other settings. See for example \cite{Kel12} where all the results of \cite{Voev00}, \cite{FV}, and \cite{Sus00} that assume resolution of singularities are proved in positive characteristic using this theorem of Gabber (using $\mathbb{Z}[\tfrac{1}{p}]$-coefficients of course, where $p$ is the characteristic of the base field). See also \cite{HKO} where the {$l$dh}~topology is used to generalise to positive characteristic Voevodsky's result identifying the algebra of bistable operations with the motivic Steenrod algebra.
The material in this article is part of my Ph.D. thesis. I am deeply indebted to Denis-Charles Cisinski for many discussions, for the choice of problem, and in particular for the myriad potential (not all equivalent) definitions for the {$l$dh}~topology that he suggested. The readability of this paper was greatly improved following suggestions by Ariyan Javanpeykar, Giuseppe Ancona, and a referee.
\section{The {$l$dh}~topology} \label{sec:ltopology}
We work with $Sch(S)$, the category of separated schemes of finite type over a noetherian base scheme $S$. Recall that a refinement of a family of morphisms $\{ U_i \to X \}_{i \in I}$ is a family of morphisms $\{ V_j \to X \}_{j \in J}$ such that for each $j \in J$ there is an $i_j \in I$ and a factorisation $V_j \to U_{i_j} \to X$. Following Suslin and Voevodsky (and contrary to Artin and therefore Milne), we use the terms \emph{topology} and \emph{covering} as in \cite[Expos{\'e} II, Definition 1.1]{SGA41} and \cite[Expos{\'e} II, Definition 1.2]{SGA41} respectively. In particular, if a family of morphisms admits a refinement by a covering for a topology $\tau$, then that family itself is also a covering \cite[Expos{\'e} II, Proposition 1.4]{SGA41}. We observe the usual abuse of the term \emph{covering} by referring to a morphism $f$ as a covering if $\{ f \}$ is a covering family.
The reader not familiar with the cdh topology can find a very readable account of it in \cite[Section 5]{SV00}.
\begin{samepage}
\begin{defi} \label{defi:topologies}
Let $l \in \mathbb{Z}$ be a prime.
\begin{enumerate}
\item By an \emph{fps$l'$~morphism} (fini, plat, surjectif, premier {\`a} $l$) we will mean a morphism $f: U \to X$ that is finite flat surjective of constant rank prime to $l$.
\item The \emph{fps$l'$~topology} on the category $Sch(S)$ is the least fine topology such that fps$l'$~morphisms are fps$l'$~coverings.
\item The \emph{{$l$dh}~topology} on the category $Sch(S)$ is the least fine topology which is finer than the cdh topology, and finer than the fps$l'$~topology.
\end{enumerate}
\end{defi}
\end{samepage}
Our choice of definition of an {$l$dh}~topology is motivated by the following two ideas. Firstly, the theorem of Gabber (Theorem~\ref{theo:gabber}) should provide the existence of regular {$l$dh}~coverings (or smooth depending on the context). Secondly, we want to make use of the ample literature on the cdh topology. That is, we want to be able to reduce statements about the {$l$dh}~topology to statements about the cdh topology and statements about the fps$l'$~topology. This way we only need to deal with the fps$l'$~topology. This we usually do using a suitable structure of ``trace'' morphisms.
\begin{rema}
Our {$l$dh}~topology differs from the topology of $\ell'$ alterations\footnote{We have chosen to use $l$ instead of the more aesthetically pleasing $\ell$ used in \cite{ILO} as $l$ is easier for search engines.} described in \cite[Expos{\'e} III, Section 3]{ILO}.
Firstly, the underlying categories are different. In \cite{ILO} the category $alt/S$ is used, which should be thought of as some kind of Riemann-Zariski space. The objects of this category are the reduced schemes that are dominant and of finite type over $S$, such that the structural morphism sends generic points to generic points, and the induced field extensions associated to generic points are finite. The inclusion $alt/S \to Sch(S)$ does not preserve fibre-products.\footnote{The category $alt/S$ has fibre-products in the categorical sense. These are obtained from the usual fibre product in the category of schemes as follows. For morphisms $X \to Y, Z \to Y$ in $alt/S$ one forms the usual fibre product $X \times_Z Y$ in the category of schemes, then takes the associated reduced scheme $(X \times_Z Y)_{red}$, and forgets any irreducible components of $(X \times_Z Y)_{red}$ which do not dominate an irreducible component of $S$ \cite[Expos{\'e} II, 1.2.5, Proposition 1.2.6]{ILO}.} If we equip $alt/S$ with the topology of $\ell'$ alterations and $Sch(S)$ with the {$l$dh}~topology, then this inclusion is not a continuous morphism of sites \cite[Expos{\'e} III D{\'e}finition 1.1]{SGA41} (it is however cocontinuous \cite[Expos{\'e} III D{\'e}finition 2.1]{SGA41}).
Secondly, the topology of $\ell'$ alterations is defined in \cite{ILO} similarly to the ($alt/S$ analogue of the) cdh topology, but by extending the ``proper cdh'' part to allow Gabber's theorem to replace resolution of singularities. However, our choice of definition of the {$l$dh}~topology as being generated by the cdh topology and the fps$l'$~topology is what makes the proof of our main result so direct. It appears that our topology is a ``global'' version of their ``local'' topology where global and local are in the resolution of singularities sense (cf. \cite[Expos{\'e} II, Th{\'e}or{\`e}me 3.2.3]{ILO}).
\end{rema}
\begin{lemm} \label{lemm:NisFpslSwap}
Suppose that $\{Y_i' \to Y\}_{i \in I}$ is a Nisnevich covering and $Y \to X$ a finite morphism. Then there exists a Nisnevich covering $\{X_j' \to X\}_{j \in J}$ such that $\{Y \times_X X_j' \to Y\}_{j \in J}$ is a refinement of $\{Y_i' \to Y\}_{i \in I}$.
\end{lemm}
\begin{proof}
If $X$ is henselian, then $Y$ is also henselian \cite[Proposition 18.5.10]{EGAIV4} and so there exists some $i \in I$ for which $Y_i' \to Y$ has a section. So we can take $\{X_j' \to X\}_{j \in J}$ to be $\{X \stackrel{id}{\to} X \}$. If $X$ is not henselian then for every point $x \in X$ we consider the pullback along the henselisation $\ ^hx \to X$. The result now follows from the limit arguments in \cite[Section 8]{EGAIV3} and the description of the henselisation as a suitable limit of \'{e}tale neighbourhoods \cite[Definition 18.6.5]{EGAIV3}.
\end{proof}
The following proposition is in a similar spirit to \cite[Proposition 5.9]{SV00}.
\begin{prop} \label{prop:cdhlswap}
Let $X$ be a noetherian scheme and suppose that $Y \to X$ is an fps$l'$~morphism and $\{ U_i \to Y \}_{i \in I}$ is a cdh covering. Then there exists a cdh covering $\{ V_j \to X \}_{j \in J}$ and fps$l'$~morphisms $V'_j \to V_j$ such that $\{ V'_j \to X \}_{j \in J}$ is a refinement of $\{ U_i \to Y \to X \}_{i \in I}$.
\end{prop}
\begin{proof}
The proof is similar to the proof of \cite[Proposition 5.9]{SV00}, and the reader is encouraged to consult that proof before proceeding. In fact, using this result we can assume that $\{ U_i \to Y \}_{i \in I}$ is of the form $\{U_i \to Y' \to Y\}^n_{i = 1}$ where $Y' \to Y$ is a proper morphism which is a cdh covering (such a morphism is called a \emph{proper cdh covering}) and $\{U_i \to Y' \}_{i = 1}^n$ is a Nisnevich covering of $Y'$. Lemma~\ref{lemm:NisFpslSwap} takes care of the Nisnevich part, and so it suffices to consider the case when $\{ U_i \to Y \}_{i \in I}$ is of the form $\{ U \to Y \}$ where $U \to Y$ is a proper cdh covering.
As in the proof of \cite[Proposition 5.9]{SV00}, proceeding by noetherian induction we may assume that for any proper closed subscheme $Z \subset X$ the induced covering of $Z$ admits a refinement of the desired form and the scheme $X$ is integral.
Suppose we can find a proper birational morphism $V \to X$ and an fps$l'$~covering $V' \to V$ such that the composition $V' \to V \to X$ factors through the composition $U \to Y \to X$. Choose a closed subscheme $Z \to X$ outside of which $V \to X$ is an isomorphism. The inductive hypothesis applied to $\{Z \times_X U \to Z \times_X Y \}$ and $Z \times_X Y \to Z$ over $Z$ gives a cdh covering $\{W_j \to Z \}_{j \in J}$ and fps$l'$~morphisms $W'_j \to W_j$ such that $\{ W'_j \to W_j \to Z \}_{j \in J}$ is a refinement of $\{Z \times_X U \to Z \times_X Y \to Z\}$. Consequently, $\{W'_j \to W_j \to Z \to X \}_{j \in J} \cup \{V' \to V \to X\}$ is a refinement of $\{U \to Y \to X \}$ of the desired form. So it suffices to find such a $V \to X$ and $V' \to V$.
We will now construct the following diagram, which is presented here so that the reader may refer to it while it is being constructed (we assure the anxious reader that the symbols are defined in the following paragraph, during the construction of the diagram).
\[ \xymatrix{
V' \ar[r] \ar[dd] & Y'' \times_Y U \ar[rr] \ar[d] && U \ar[d] \\
& Y'' \ar[r] \ar[d] & \overline{\{ \eta_i \}} \ar[d] \ar[r] & Y \ar[dl] \\
V \ar[r] & X' \ar[r] & X
} \]
Replacing $U \to Y$ by a finer proper cdh covering we may assume that $U_{red} \to Y_{red}$ is an isomorphism over a dense open subscheme of $Y$, and that $U = U_{red}$. If $\eta_i$ are the generic points of $Y$ and $m_i$ the lengths of their local rings, then the degree of $Y \to X$ is $\sum m_i [k(\eta_i): k(\xi)]$ where $\xi$ is the generic point of $X$. Since $l$ does not divide $\sum m_i [k(\eta_i): k(\xi)]$, there is some $i$ for which $l$ does not divide $[k(\eta_i): k(\xi)]$. Choose such an $i$ and consider the closed integral subscheme $\overline{\{ \eta_i \}}$ of $Y$ which has $\eta_i$ as its generic point. By the platification theorem \cite{RG71} (or \cite[Theorem 2.2.2]{SV} for a precise statement) there exists a blow-up $X' \to X$ of $X$ with nowhere dense centre such that the strict transform $Y'' \to X'$ of $\overline{\{ \eta_i \}} \to X$ is flat, and hence finite flat surjective of degree prime to $l$. Consequently, the composition $Y'' \times_Y U \to Y'' \to X'$ is generically an fps$l'$~covering. Applying the platification theorem to this composition we can find a blow-up $V \to X'$ of $X'$ such that the strict transform $V' \to V$ of the composition $Y'' \times_Y U \to Y'' \to X'$ is flat. Since it is generically an fps$l'$~covering, flatness implies that it is actually an fps$l'$~covering.
\end{proof}
For a topology $\tau$ we denote by $a_\tau$ the canonical functor that takes a presheaf to its associated $\tau$ sheaf.
\begin{lemm} \label{lemm:sep}
Suppose that $\sigma$ and $\rho$ are two topologies on a category $\mathcal{C}$. Suppose that for every $\rho$ covering $\{U_i \to X \}_{i \in I}$ and $\sigma$ coverings $\{ V_{ij} \to U_i\}_{j \in J_i}$, there exists a $\sigma$ covering $\{U_k' \to X \}_{k \in K}$ and $\rho$ coverings $\{ V_{k\ell}' \to U_k'\}_{\ell \in L_k}$ such that $\{ V_{k\ell}' \to X\}_{k \in K, \ell \in L_k}$ refines $\{ V_{ij} \to X\}_{i \in I, j \in J_i}$. If $F$ is a presheaf which is $\rho$ separated, then its associated $\sigma$ sheaf $a_\sigma F$ is also $\rho$ separated.
\end{lemm}
\begin{proof}
We will use coverings of cardinality one to make the proof easier to read. Suppose $s \in a_\sigma F(X)$ is a section and $U \to X$ a $\rho$ covering such that $s|_U = 0$. We must show $s = 0$ in $a_\sigma F(X)$. It is sufficient to consider the case where $s$ is in the image of $F(X) \to a_\sigma F(X)$. In this case the condition $s|_U = 0$ implies that there is a $\sigma$ covering $V \to U$ with $s'|_V = 0$ where $s' \in F(X)$ is some section sent to $s \in a_\sigma F(X)$. By hypothesis, we can refine $V \to U \to X$ by a composition $V' \to U' \to X$ with $V' \to U'$ a $\rho$ covering and $U' \to X$ a $\sigma$ covering. Now $s'|_{V'} = 0$ but $F$ is $\rho$ separated so $s'|_{U'} = 0$, which implies that $s = 0$ in $a_\sigma F(X)$.
\end{proof}
We now reproduce a weak version of a theorem of Gabber. We follow it with a corollary which converts it into a form that we will use. For a statement and an outline of the proof of this theorem of Gabber see \cite{Ill09}, or \cite{Gab05}. There is a proof in the book in preparation \cite{ILO}.
\begin{theo}[{Gabber, \cite[Expos{\'e} 0 Theorem 2]{ILO}, \cite[Expos{\'e} II Theorem 3.2.1]{ILO}}] \label{theo:gabber}
Let $X$ be a noetherian quasi-excellent scheme and let $l$ be a prime number invertible on $X$. There exists a finite family of morphisms of finite type $\{U_i \to X \}_{i = 1}^n$ with each $U_i$ regular, and a refinement of this family of the form $\{ V_j \to Y \to X \}_{j \in J}$ such that $Y \to X$ is proper surjective generically finite of degree prime to $l$ and $\{V_j \to Y \}_{j \in J}$ is a Nisnevich covering.
\end{theo}
\begin{coro} \label{coro:regularlCover}
Let $X$ be a noetherian quasi-excellent scheme and let $l$ be a prime number invertible on $X$. Then there exists an {$l$dh}~covering $\{ W_i \to X \}_{i = 1}^m$ of $X$ such that each $W_i$ is regular.
\end{coro}
\begin{proof}
This corollary follows from Theorem~\ref{theo:gabber} using the platification theorem \cite{RG71}.
Let us be more explicit. We will use noetherian induction. Suppose that the result is true for all proper closed subschemes of $X$. Let $\{U_i \to X \}_{i = 1}^n$ and $\{ V_j \to Y \to X \}_{j \in J}$ be as in the statement of Theorem~\ref{theo:gabber}. As in the proof of Proposition~\ref{prop:cdhlswap} (or rather, the proof of \cite[Proposition 5.9]{SV00}) we can assume that $X$ is integral, since the disjoint union of the inclusion of the reduced irreducible components of $X$ is a cdh covering, and hence an {$l$dh}~covering. By the platification theorem there exists a blow-up with nowhere dense centre $X' \to X$ such that the proper transform $Y' \to X'$ of $Y \to X$ is a finite flat surjective morphism of constant degree. Let $\{ V'_j \to Y' \}_{j \in J}$ be the pullback of the Nisnevich covering $\{ V_j \to Y \}_{j \in J}$, and let $Z \subset X$ be a proper closed subscheme of $X$ for which $X' \to X$ is an isomorphism outside of $Z$. The family $\{Z \to X\} \cup \{V'_j \to X \}_{j \in J}$ is the composition of the cdh covering $\{Z \to X, X' \to X\}$, the fps$l'$~covering $\{Z \stackrel{id}{\to} Z, Y' \to X' \}$, and the Nisnevich covering $\{Z \stackrel{id}{\to} Z\} \cup \{V'_j \to Y' \}_{j \in J}$. Hence, it is an {$l$dh}~covering. Now by construction this family is a refinement of the family $\{Z \to X\} \cup \{U_i \to X \}_{i = 1}^n$. Therefore $\{Z \to X\} \cup \{U_i \to X \}_{i = 1}^n$ is also an {$l$dh}~covering. Finally, by the inductive hypothesis there exists an {$l$dh}~covering $\{Z_k' \to Z\}_{k= 1}^{n'}$ of $Z$ with each $Z_k'$ regular, and so the family $\{Z_k' \to Z \to X\}_{k= 1}^{n'} \cup \{U_i \to X \}_{i = 1}^n$ is an {$l$dh}~covering of $X$ for which all the sources are regular.
\end{proof}
\section{Vanishing of $K$-theory} \label{sec:vanishing}
In this section we will use the $K$-theory of Bass-Thomason-Trobaugh and Weibel's homotopy invariant $K$-theory. These will be denoted by $K$ and $KH$ respectively.
Let us quickly recall their constructions. For any scheme $X$ consider $\Perf(X)$, the complicial biWaldhausen category of perfect complexes of $\mathcal{O}_X$-modules \cite[Definition 1.2.11, Definition 2.2.12]{TT90}. We deviate slightly from \cite{Wei89} and \cite{TT90} by using $\mathcal{O}_X$-modules on $Sch(X)$ as opposed to the small Zariski site of $X$. We do this so that $K$ and $KH$ are actual presheaves instead of just lax functors. See \cite[Section C4]{FS02} for a discussion. From the biWaldhausen category $\Perf(X)$ we obtain an $S^1$-spectrum denoted in \cite[6.4]{TT90} by $K^B(X)$, but which will be denoted in our article simply by $K(X)$. The $K(X)$ form a presheaf of $S^1$-spectra $K$ on $Sch(X)$. From this presheaf of $S^1$-spectra we obtain a second presheaf of $S^1$-spectra \cite[Definition 1.1]{Wei89} which is denoted by $KH$. There is a canonical morphism of presheaves of $S^1$-spectra $K \to KH$.
We write $KH_n$ (resp. $K_n$) for the functor which associates to a scheme $X$ the $n$th homotopy group of $KH(X)$ (resp. $K(X)$).
It is in the following lemma that we use ``trace'' morphisms to bridge the gap between cdh and {$l$dh}.
\begin{lemm} \label{lemm:KHfpslsep}
The cdh sheaves $a_{cdh}KH_n \otimes {\mathbb{Z}_{(\ell)}}$ associated to the presheaves $KH_n \otimes {\mathbb{Z}_{(\ell)}}$ are separated for the {$l$dh}~topology.
\end{lemm}
\begin{proof}
Consider the weakest topology on $Sch(S)$ such that fps$l'$~morphisms $f: Y \to X$ with $f_*\mathcal{O}_Y$ globally free are coverings. We call this topology the fps$l'$gl~topology. We begin by showing that the presheaves $KH_n \otimes {\mathbb{Z}_{(\ell)}}$ are separated for the fps$l'$gl~topology.
The construction of $KH$ is functorial in complicial biWaldhausen categories. Notably, for every finite flat surjective morphism of schemes $f: Y \to X$ we get a corresponding exact functor $f_*: \Perf(Y) \to \Perf(X)$ between the biWaldhausen categories. These functors give rise to morphisms $\mathrm{Tr}_{f} : KH_n(Y) \to KH_n(X)$. We claim that if there is an isomorphism $f_*\mathcal{O}_Y \cong \mathcal{O}_X^d$ then $\mathrm{Tr}_{f} KH_n(f) = d \cdot id_{KH_n(X)}$. If $f$ is an fps$l'$~morphism then $d$ is invertible in ${\mathbb{Z}_{(\ell)}}$ and we deduce from $\mathrm{Tr}_{f} KH_n(f) = d \cdot id_{KH_n(X)}$ that the morphism $KH_n(f) \otimes {\mathbb{Z}_{(\ell)}}$ is injective, and hence $KH_n \otimes {\mathbb{Z}_{(\ell)}}$ is separated for the fps$l'$gl~topology. To prove $\mathrm{Tr}_{f} KH_n(f) = d \cdot id_{KH_n(X)}$, since $f_*\mathcal{O}_Y$ is globally free, we reduce immediately to proving that the operation of $A \mapsto A^{\oplus d}$ on $\Perf(X)$ induces $a \mapsto d \cdot a$ in $KH_n$. This can be derived from the additivity theorem \cite[1.7.3]{TT90} by induction on $d$ (let $F' = \mathrm{id}, F'' = (-)^{\oplus (d - 1)}$, $F = - \oplus (-)^{\oplus (d - 1)}$ with the obvious natural transformations).
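The only arithmetic input in the argument above is that the rank $d$ of an fps$l'$~morphism is a unit in $\mathbb{Z}_{(\ell)}$, so that $\mathrm{Tr}_{f}\,KH_n(f) = d \cdot id$ forces $KH_n(f) \otimes \mathbb{Z}_{(\ell)}$ to be injective. A toy membership check (the helper `in_Z_localized_at` is our own naming, modelling $\mathbb{Z}_{(\ell)}$ as fractions with denominator prime to $l$):

```python
from fractions import Fraction
from math import gcd

def in_Z_localized_at(x: Fraction, l: int) -> bool:
    """Membership test for Z_(l): lowest-terms denominator prime to l."""
    return x.denominator % l != 0

# The rank d of an fpsl' morphism is prime to l, so d is a unit in Z_(l);
# hence multiplication by d is bijective on any Z_(l)-module.
l, d = 5, 3
assert gcd(d, l) == 1
assert in_Z_localized_at(Fraction(1, d), l)      # 1/d lies in Z_(l)
assert not in_Z_localized_at(Fraction(1, l), l)  # but 1/l does not
```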
Now Lemma~\ref{lemm:NisFpslSwap} shows that the hypotheses of Lemma~\ref{lemm:sep} hold with $\sigma =$ Nis and $\rho =$ fps$l'$gl. Hence, the Nisnevich sheaf $a_{Nis}KH_n \otimes {\mathbb{Z}_{(\ell)}}$ associated to $KH_n \otimes {\mathbb{Z}_{(\ell)}}$ is fps$l'$gl~separated. But every finite flat morphism of schemes is free locally for the Zariski topology, and therefore locally for the Nisnevich topology. Hence, $a_{Nis}KH_n \otimes {\mathbb{Z}_{(\ell)}}$ is separated for the fps$l'$~topology. Now Proposition~\ref{prop:cdhlswap} shows that the hypotheses of Lemma~\ref{lemm:sep} hold with $\sigma =$ cdh and $\rho =$ fps$l'$, so the associated cdh sheaves $a_{cdh}a_{Nis}KH_n \otimes {\mathbb{Z}_{(\ell)}}$ are fps$l'$~separated. Clearly we have $a_{cdh}a_{Nis}KH_n \otimes {\mathbb{Z}_{(\ell)}} = a_{cdh}KH_n \otimes {\mathbb{Z}_{(\ell)}}$ as the cdh topology is finer than the Nisnevich topology by definition. Now it follows from the definition of the {$l$dh}~topology that $a_{cdh}KH_n \otimes {\mathbb{Z}_{(\ell)}}$ is separated for the {$l$dh}~topology.
\end{proof}
\begin{defi}
For $\mathcal{E}$ a presheaf of $S^1$-spectra on $Sch(S)$ and $\tau$ a topology with enough points define $\mathbb{H}_\tau(-, \mathcal{E})$ to be the presheaf of $S^1$-spectra given by the Godement-Thomason construction \cite[1.33]{Tho85}. This comes equipped with a canonical morphism $\mathcal{E} \to \mathbb{H}_\tau(-, \mathcal{E})$.
\end{defi}
The following lemma shows that we can apply the previous definition to the cdh topology on $Sch(S)$.
\begin{lemm}[Deligne, Suslin-Voevodsky]
The cdh topology on $Sch(S)$ has enough points.
\end{lemm}
\begin{proof}
A theorem of Deligne says that any locally coherent topos has enough points \cite[Expos{\'e} VI Proposition 9.0]{SGA42}.\footnote{The author thanks Brad Drew for pointing out that there is an extremely clear account of this theorem of Deligne given in \cite[Chapter 7]{Joh77}. The statement is \cite[Theorem 7.44]{Joh77}.} A topos is locally coherent \cite[Expos{\'e} VI Definition 2.3]{SGA42} if and only if it is equivalent to a category of sheaves on a site such that every object is quasi-compact, and all fibre products and finite products are representable \cite[Expos{\'e} VI 2.4.5]{SGA42}. By definition \cite[Expos{\'e} VI Definition 1.1]{SGA42}, an object is quasi-compact if and only if every covering family admits a finite subfamily which is still a covering family. The proof of \cite[Proposition 5.9]{SV00} remains valid with the base field $F$ replaced by any noetherian scheme $S$, and shows that every object in $Sch(S)$ is quasi-compact (for the cdh topology).
\end{proof}
\begin{theo}[{\cite[Theorem 2.20, Proposition 3.7]{Cis13}}] \label{theo:KHcdhDescent}
The presheaf of $S^1$-spectra $KH$ satisfies cdh descent on $Sch(S)$. That is, the canonical morphism $KH \to \mathbb{H}_{cdh}(-, KH)$ gives a stable weak equivalence of $S^1$-spectra when evaluated on each scheme.
\end{theo}
\begin{theo} \label{theo:KtheoryVanishing}
Let $X$ be a quasi-excellent noetherian scheme and $p$ a prime that is nilpotent on $X$. Then $K_n(X) \otimes \mathbb{Z}[\tfrac{1}{p}] = 0$ for $n < - \dim X$.
\end{theo}
\begin{proof}
Since $p$ is nilpotent on $X$ the canonical morphism $K_n \otimes \mathbb{Z}[\tfrac{1}{p}] \to KH_n \otimes \mathbb{Z}[\tfrac{1}{p}]$ is an isomorphism \cite[9.6]{TT90}. Hence it suffices to prove that \mbox{$KH_n(X) \otimes {\mathbb{Z}_{(\ell)}}$} vanishes for every prime $l \neq p$ and $n < - \dim X$. Since $KH$ satisfies cdh descent (Theorem~\ref{theo:KHcdhDescent}) we have a spectral sequence of the following form\footnote{The indexing we have used is the Bousfield-Kan indexing used in \cite{Tho85}. Other indexing conventions exist in the literature. See \cite[Corollary 5.2]{Wei89} and \cite[Section 5]{GH10} for examples of other possible indexings.} (cf. \cite[1.36]{Tho85}).
\[ E_2^{p, q} = H^{p}_{cdh}(X, a_{cdh}KH_q(-)) \implies KH_{q - p}(X) \]
As the reader may be unfamiliar with this indexing, we mention that the differentials on the $E_2$ sheet have bidegree $(2, 1)$, but we will not use this fact. This spectral sequence converges due to the cdh cohomological dimension being bounded by $\dim X$ \cite[12.5]{SV00}. Furthermore, this bound on the cohomological dimension implies that the $E_2$ sheet is zero outside of $0 \leq p \leq \dim X$. We tensor this spectral sequence with ${\mathbb{Z}_{(\ell)}}$ to obtain a second convergent spectral sequence
\[ E_2^{p, q} = H^{p}_{cdh}(X, a_{cdh}KH_q(-)) \otimes {\mathbb{Z}_{(\ell)}} \implies KH_{q - p}(X) \otimes {\mathbb{Z}_{(\ell)}} \]
Due to the vanishing of $E_2$ terms outside of $0 \leq p \leq \dim X$, we have reduced to showing that
\[ {a_{cdh}KH_q(-) \otimes {\mathbb{Z}_{(\ell)}} = 0} \quad \textrm{ for all } \quad q < 0. \]
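To make this reduction explicit, here is the degree count spelled out in the Bousfield-Kan indexing fixed above:

```latex
% E_2^{p,q} contributes to KH_{q-p}(X), and vanishes unless 0 \le p \le \dim X.
% So if n = q - p with n < -\dim X, then
\[
  q \;=\; n + p \;\le\; n + \dim X \;<\; -\dim X + \dim X \;=\; 0 ,
\]
% i.e.\ every term contributing to KH_n(X) \otimes \mathbb{Z}_{(\ell)}
% involves a sheaf a_{cdh}KH_q(-) \otimes \mathbb{Z}_{(\ell)} with q < 0.
```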
This cdh sheaf is {$l$dh}~separated (Lemma~\ref{lemm:KHfpslsep}) so the canonical morphism
\[ a_{cdh}KH_q(-) \otimes {\mathbb{Z}_{(\ell)}} \to a_{ldh}KH_q(-) \otimes {\mathbb{Z}_{(\ell)}} \]
is injective and it suffices to show that
\[ a_{ldh}KH_q(-) \otimes {\mathbb{Z}_{(\ell)}} = 0 \quad \textrm{ for each } \quad q < 0. \]
For any scheme $U$ in $Sch(X)$ and any section $s \in a_{ldh}KH_q(U) \otimes {\mathbb{Z}_{(\ell)}}$ there exists an {$l$dh}~covering $\{U_i \to U \}_{i \in I}$ such that each restriction $s|_{U_i}$ of the section $s$ is in the image of the canonical morphism of presheaves ${KH_q(U_i) \otimes {\mathbb{Z}_{(\ell)}} \to a_{ldh}KH_q (U_i)\otimes {\mathbb{Z}_{(\ell)}}}$. Each $U_i$ admits an {$l$dh}~covering ${\{U_{ij} \to U_i\}_{j \in J_i}}$ with $U_{ij}$ regular (Corollary~\ref{coro:regularlCover}). Since $KH_q$ vanishes on every regular scheme for $q < 0$ \cite[Proposition 6.8]{TT90}, the restrictions $s|_{U_{ij}}$ are all zero if $q < 0$, and therefore $s$ is also zero if $q < 0$.
\end{proof}
\bibliographystyle{alpha}
% Source: https://arxiv.org/abs/math/0502402 (Topological fundamental groups can distinguish spaces with isomorphic homotopy groups)
\section{Introduction}
Given CW complexes $X$ and $Y,$ the Whitehead theorem (\cite{hatch}) asserts
that a map $f:X\rightarrow Y$ is a homotopy equivalence provided $f$ induces
an isomorphism on homotopy groups. However the result can fail in the
context of path connected metric spaces. For example the standard Warsaw
circle has trivial homotopy groups but fails to have the homotopy type of a
point. This note aims to show the \textit{topological fundamental group} can
help counterbalance the general failure of the Whitehead theorem.
For a general space $X$ work of Biss \cite{Biss} initiates the development
of a theory whose fundamental notion is the following. Endowed with the
quotient topology inherited from the path components of based loops in $X$,
the familiar based fundamental group $\pi _{1}(X,p)$ of a topological space $%
X$ becomes a \textit{topological group}. For example if $X$ is locally
contractible then loops in $X$ are homotopically invariant under small
perturbation, and consequently the fundamental group $\pi _{1}(X,p)$ has the
discrete topology. For spaces that are complicated both locally and
globally, the topology of $\pi _{1}(X,p)$ can be more interesting (\cite{fab}
\cite{fab2} \cite{fab5}). An important feature of the theory is that if $X$
and $Y$ have the same homotopy type then $\pi _{1}(X,p)$ and $\pi _{1}(Y,p)$
are isomorphic \textit{and} homeomorphic (Proposition 3.3 \cite{Biss}).
These facts motivate the question of whether the added topological structure
on $\pi _{1}(X,p)$ can ever succeed in distinguishing the homotopy type of
spaces $X$ and $Y$ in instances when the hypotheses of the Whitehead theorem
are satisfied.
In fact this is so and in this note we exhibit aspherical spaces $X$ and $Y$
such that inclusion $j:X\rightarrow Y$ induces an isomorphism on homotopy
groups. However $\pi _{1}(X)$ and $\pi _{1}(Y)$ fail to be homeomorphic, and
thus we can conclude that $X$ and $Y$ do not have the same homotopy type
despite the failure of the Whitehead theorem for this pair of examples.
The theory of topological fundamental groups is still in the early stages of
development (\cite{fab3} \cite{fab4} \cite{fab6}) and it is hoped this note
will be seen as promoting its utility and helping to motivate its continued
investigation. For example the space $Y$ constructed in this paper is not
locally path connected. This suggests the following.
\textbf{Question. }Suppose $Y$ is an aspherical (metric) Peano continuum and
$X\subset Y$ is aspherical and path connected. Suppose inclusion $%
j:X\hookrightarrow Y$ induces an isomorphism $j^{\ast }:\pi
_{1}(X,p)\rightarrow \pi _{1}(Y,p).$ Must $j^{\ast }$ be a homeomorphism? If
$j^{\ast }$ is a homeomorphism must $j$ be a homotopy equivalence?
\section{Definitions and Preliminaries}
All definitions are compatible with those found in Munkres \cite{Munk}. If $%
X $ is a metrizable space and $p\in X$ let $C_{p}(X)=\{f:[0,1]\rightarrow X$
such that $f$ is continuous and $f(0)=f(1)=p\}.$ Endow $C_{p}(X)$ with the
topology of uniform convergence.
The \textbf{topological fundamental group} $\pi _{1}(X,p)$ is the set of
path components of $C_{p}(X)$ endowed with the quotient topology under the
canonical surjection $q:C_{p}(X)\rightarrow \pi _{1}(X,p)$ satisfying $%
q(f)=q(g)$ if and only if $f$ and $g$ belong to the same path component of $%
C_{p}(X).$ Thus a set $U\subset \pi _{1}(X,p)$ is open if and only if $q^{-1}(U)$ is open in $C_{p}(X).$
\begin{remark}
The topological fundamental group $\pi _{1}(X,p)$ is a topological group
under concatenation of paths. (Proposition 3.1\cite{Biss}). A map $%
f:X\rightarrow Y$ determines a continuous homomorphism $f^{\ast }:\pi
_{1}(X,p)\rightarrow \pi _{1}(Y,f(p))$ via $f^{\ast }([\alpha ])=[f(\alpha
)] $ (Proposition 3.3 \cite{Biss}). If $X$ and $Y$ have the same homotopy
type then $\pi _{1}(X)$ is homeomorphic and isomorphic to $\pi _{1}(Y)$
(Corollary 3.4 \cite{Biss}). For the remainder of this paper all fundamental
groups will be considered topological groups.
\end{remark}
The space $X$ is \textbf{semilocally simply connected} at $p$ if there
exists an open set $U\subset X$ such that inclusion $j:U\hookrightarrow X$
induces the trivial homomorphism $j^{\ast }:\pi _{1}(U,p)\rightarrow \pi
_{1}(X,p).$ The space $Z$ is \textbf{discrete} if each one point subset of $%
Z $ is open.
\begin{remark}
\label{rem2}The main result of \cite{fab} shows that if $X$ is locally path
connected then $\pi _{1}(X,p)$ is discrete if and only if $X$ is
semilocally simply connected.
\end{remark}
\section{Main result}
\begin{theorem}
There exist path connected aspherical separable metric spaces $X$ and $Y$
such that $X\subset Y$ and inclusion $j:X\hookrightarrow Y$ induces an
isomorphism $j^{\ast }:\pi _{1}(X,p)\rightarrow \pi _{1}(Y,p)$. Thus $%
(X,Y,j) $ satisfies the hypothesis of the Whitehead theorem. However the
topological fundamental groups $\pi _{1}(X,p)$ and $\pi _{1}(Y,p)$ are not
homeomorphic. Hence the \textbf{topology} of fundamental groups has the
capacity to distinguish the homotopy type of $X$ and $Y$ when the algebra
fails to do so.
\end{theorem}
\begin{proof}
The basic idea is to let $X$ denote the countable union of a sequence of
large simple closed curves $C_{1}\cup C_{2}...$ joined at a common point $p.$
Such a space is sometimes called a bouquet of infinitely many loops. In
particular $X$ is locally contractible and should not be mistaken for the
Hawaiian earring. The space $Y$ is a compactification of $X$ obtained by
attaching a line segment $\alpha $ based at $p$ such that the curves $C_{n}$
converge to $\alpha $ in the Hausdorff metric.
Since each of $X$ and $Y$ is path connected and 1 dimensional, if $n\neq 1$
then $\pi _{n}(X,p)=\pi _{n}(Y,p)=1.$ Thus, to show that $(X,Y,j)$ satisfies
the hypothesis of the Whitehead theorem it suffices to show that $j^{\ast
}:\pi _{1}(X,p)\rightarrow \pi _{1}(Y,p)$ is an isomorphism.
Formally for $n\geq 2$ let $C_{n}\subset R^{2}$ denote the boundary of the
convex hull of the following 3 point set: $\{(0,0),(\frac{1}{n},1),(\frac{1}{n},1)+\frac{1}{10^{(10n)}}(n,-1)\}.$ Then for each $n\geq 2$, $C_{n}$ is the
boundary of a triangle and in particular $C_{n}$ is a simple closed curve.
Let $p=(0,0).$ Note $C_{n}\cap C_{m}=p$ if $n\neq m.$ Let $\alpha $ denote
the line segment $[(0,0),(0,1)]\subset R^{2}.$ Let $X=\cup _{n=2}^{\infty
}C_{n}$ and let $Y=\overline{X}.$ Note $Y=X\cup \alpha .$ Note the path
connected spaces $X$ and $Y$ are 1 dimensional and hence aspherical (\cite
{Fort}). We will show inclusion $j:X\hookrightarrow Y$ induces an
isomorphism $j^{\ast }:\pi _{1}(X,p)\rightarrow \pi _{1}(Y,p).$
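As a quick numerical sanity check of this construction (our own sketch, not part of the paper; the helper names are ours), one can confirm that the triangles $C_{n}$ collapse onto the limiting segment $\alpha =\{0\}\times \lbrack 0,1]$:

```python
def vertices(n):
    """Vertices of the triangle C_n from the construction above."""
    eps = 10.0 ** (-10 * n)             # 1/10^(10n); underflows to 0.0 for large n
    p = (0.0, 0.0)                      # the common base point
    b = (1.0 / n, 1.0)
    c = (b[0] + eps * n, b[1] - eps)    # b + eps * (n, -1)
    return p, b, c

def dist_to_alpha(n):
    """Max distance from the vertices of C_n to the segment alpha = {0} x [0,1].

    For a point (x, y) with 0 <= y <= 1 this distance is simply x, so the
    maximum over the (convex) triangle is attained at a vertex.
    """
    return max(x for x, _ in vertices(n))

# the curves C_n approach alpha: the deviation shrinks monotonically to 0
assert all(dist_to_alpha(n) > dist_to_alpha(n + 1) for n in range(2, 50))
assert dist_to_alpha(200) < 1e-2
```

This only checks the one-sided deviation of $C_{n}$ from $\alpha$; convergence in the Hausdorff metric also uses that every point of $\alpha$ is close to $C_{n}$, which is clear since $C_{n}$ contains the edge from $(0,0)$ to $(\frac{1}{n},1)$.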
To prove $j^{\ast }$ is one to one, suppose $f:\partial D^{2}\rightarrow X$
is inessential in $Y$ and suppose $f(1)=p.$ Let $F:D^{2}\rightarrow Y$
satisfy $F|_{\partial D^{2}}=f.$ Let $U=F^{-1}(\alpha \backslash p).$ Since $D^{2}$ is locally path connected, and since $\alpha \backslash p$ is a path
component of $Y\backslash p,$ the set $U$ is open. Suppose $x\in \overline{U}\backslash U.$ Then $F(x)=p$ since $\alpha =\overline{\alpha \backslash p}.$
Thus, we may redefine $F$ to be $p$ on the set $U$ and obtain a continuous
function $G:D^{2}\rightarrow X$ such that $G|_{\partial D^{2}}=f.$ This
proves $j^{\ast }$ is one to one.
To prove $j^{\ast }$ is a surjection suppose $\beta \in C_{p}(Y).$ We must
show there exists $\gamma \in C_{p}(X)$ such that $\gamma $ and $\beta $ are
path homotopic in $Y.$ Since $im(\beta )$ is a Peano continuum $im(\beta )$
is locally path connected. Thus we may choose $N$ such that $im(\beta )\cap
(\{\frac{1}{N},\frac{1}{N+1},...\}\times \{1\})=\emptyset .$ Let $A=im(\beta
)\cap (\alpha \cup C_{N}\cup C_{N+1}\cup ...).$ Let $B=C_{2}\cup C_{3}\cup ...\cup
C_{N-1}.$ Note $A$ is a contractible Peano continuum such that $p\in A.$
Moreover $B$ is a strong deformation retract of $B\cup A.$ Thus there exists
a homotopy $h_{t}:A\cup B\rightarrow A\cup B$ such that $h_{0}=id_{A\cup B},$ $h_{1}(A\cup B)=B,$ and $h_{t}$ fixes $B$ pointwise. Thus the homotopy $h_{t}\circ \beta $ shows
that $\beta $ is path homotopic in $Y$ to $\gamma =h_{1}\circ \beta .$ Note $im(\gamma )\subset X.$ Hence $j^{\ast }$ is a surjection and therefore $j^{\ast }$ is an isomorphism.
Since the space $X$ is locally contractible, $\pi _{1}(X,p)$ has the discrete
topology (Remark \ref{rem2}). On the other hand $\pi _{1}(Y,p)$ does not
have the discrete topology, since there exists an inessential loop $f\in
C_{p}(Y)$ which is the uniform limit of essential loops. (Let the
(inessential) map $f$ go up and down once on $\alpha $ and let $f_{n}$ be an
(essential) loop going once around $C_{n}$). Thus the path component of the
constant map is not open in $C_{p}(Y)$ and thus $\pi _{1}(Y,p)$ cannot have
the discrete topology. Thus $\pi _{1}(X,p)$ and $\pi _{1}(Y,p)$ are not
homeomorphic and hence $X$ and $Y$ do not have the same homotopy type.
\end{proof}
\begin{remark}
The space $Y$ constructed is semilocally simply connected. However $\pi
_{1}(Y,p)$ does not have the discrete topology. Consequently $Y$ is a
counterexample to the (false) Theorem 5.1 \cite{Biss} which asserts that $\pi _{1}(Y,p)$ is discrete if and only if $Y$ is semilocally
simply connected.
\end{remark}
| {
"timestamp": "2005-04-19T22:26:04",
"yymm": "0502",
"arxiv_id": "math/0502402",
"language": "en",
"url": "https://arxiv.org/abs/math/0502402",
"abstract": "We exhibit a map f between aspherical spaces X and Y such that f induces an isomorphism on homotopy groups but, with natural topologies, X and Y fail to have homeomorphic fundamental groups. Thus the topological fundamental group has the capacity to distinguish homotopy type when the Whitehead theorem fails.",
"subjects": "Algebraic Topology (math.AT); General Topology (math.GN)",
"title": "Topological fundamental groups can distinguish spaces with isomorphic homotopy groups",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138092657496,
"lm_q2_score": 0.7248702880639791,
"lm_q1q2_score": 0.7087156905665942
} |
https://arxiv.org/abs/2301.05414 | Higher order first integrals of autonomous non-Riemannian dynamical systems | We consider autonomous holonomic dynamical systems defined by equations of the form $\ddot{q}^{a}=-\Gamma_{bc}^{a}(q) \dot{q}^{b}\dot{q}^{c}$ $-Q^{a}(q)$, where $\Gamma^{a}_{bc}(q)$ are the coefficients of a symmetric (possibly non-metrical) connection and $-Q^{a}(q)$ are the generalized forces. We prove a theorem which for these systems determines autonomous and time-dependent first integrals (FIs) of any order in a systematic way, using the `symmetries' of the geometry defined by the dynamical equations. We demonstrate the application of the theorem to compute linear, quadratic, and cubic FIs of various Riemannian and non-Riemannian dynamical systems. | \section{Introduction}
\label{sec.intro}
A first integral (FI) of a set of second order dynamical equations with generalized coordinates $q^{a}$ and generalized velocities $\dot{q}^{a}\equiv \frac{dq^{a}}{dt}$ is a function $I(t,q^{a},\dot{q}^{a})$ satisfying the condition $\frac{dI}{dt}=0$ along the dynamical equations. FIs are important because they can be used to reduce the order of the dynamical equations and, if they are `enough' in number \cite{Arnold 1989}, to find the solution of the system by quadratures (Liouville integrability).
The standard method to compute the FIs of Lagrangian systems is Noether's theorem. A different method is the direct method which requires only the dynamical equations and was originally introduced by Whittaker \cite{Whittaker, Katzin 1981, Thompson 1984, Horwood 2007, Post 2015, Mits 2021}. In the latter method, one assumes a functional form for the FI $I$ (e.g. a polynomial form in $\dot{q}^{a}$) and demands the condition $\frac{dI}{dt}=0$. Using the dynamical equations to remove the terms $\ddot{q}^{a}$ whenever they appear, the FI condition leads to a system of partial differential equations (PDEs) whose solution provides the FIs.
In this work, we apply the direct method to autonomous holonomic dynamical systems in a space with a symmetric connection $\Gamma_{bc}^{a}(q)$ (not necessarily Riemannian) which is read from the dynamical equations. We compute the resulting system of PDEs and solve it in terms of the `symmetries' of $\Gamma_{bc}^{a}(q)$. The result is stated as Theorem \ref{thm1} and provides a systematic method to determine polynomial FIs in velocities of any order, time-dependent and autonomous, for this type of dynamical systems. In the special case that the symmetric connection $\Gamma_{bc}^{a}(q)$ is the Riemannian connection defined by the kinetic metric (kinetic energy) $\gamma_{ab}(q)$ of the system, the computed FIs are directly related by means of the Inverse Noether Theorem \cite{Djukic, TsampMitsB} to gauged generalized (i.e. velocity-dependent) weak Noether symmetries. Finally, we apply Theorem \ref{thm1} in order to find new integrable and superintegrable systems which admit linear (LFIs), quadratic (QFIs), and cubic FIs (CFIs).
The structure of the paper is as follows. In section \ref{sec.conditions}, using the direct method, we derive the system of PDEs that must be satisfied by the coefficients of an $m$th-order FI of an autonomous (in general non-Riemannian) dynamical system. In section \ref{sec.theorem}, the `solution' of the system of PDEs is stated as Theorem \ref{thm1} (the proof of Theorem \ref{thm1} is given in the appendix). In section \ref{sec.lfis}, we apply Theorem \ref{thm1} for $m=1$ in order to consider the LFIs. It is shown that there are three independent types of LFIs and we determine their explicit formulae. In section \ref{sec.higherFIs}, we apply Theorem \ref{thm1} for $m>1$ and we find that there are three independent types of higher order FIs. In section \ref{sec.remove}, we discuss these independent FIs and provide a procedure by which symmetries of order smaller than the order of the FI can be removed. Using this procedure, we remove these lesser symmetries and we find the complete forms of the FIs for an even order $m=2\nu$ and an odd order $m=2\nu+1$. The results are collected in Propositions \ref{pro.thm1.4}, \ref{pro.thm1.5}, and \ref{pro.thm1.6}. In section \ref{sec.nonR}, we find a family of two-dimensional (2d) non-Riemannian autonomous dynamical systems and, by applying Theorem \ref{thm1}, we determine the LFIs. In sections \ref{sec.andro}, \ref{sec.appl2}, and \ref{sec.appl3}, we provide further applications of Theorem \ref{thm1} for QFIs and CFIs, by extending existing results in the literature. Finally, in section \ref{sec.conclusions}, we draw our conclusions.
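As a minimal illustration of the direct method described above (our own one-dimensional sketch, not a computation from the paper), consider $\ddot{q}=-Q(q)$ with the quadratic ansatz $I=\dot{q}^{2}+G(q)$. Demanding $\frac{dI}{dt}=0$ along the dynamics forces $G_{,q}=2Q$; for the harmonic oscillator $Q=q$ this yields the energy, which SymPy confirms:

```python
import sympy as sp

# 1d sketch of the direct method: system qdd = -Q(q), quadratic ansatz
# I = qd**2 + G(q).  Imposing dI/dt = 0 along the dynamics forces
# G' = 2*Q; with Q = q (harmonic oscillator) this gives the energy.
t = sp.symbols('t')
q = sp.Function('q')(t)
qd = q.diff(t)

Q = q                       # harmonic oscillator: qdd = -q
G = q**2                    # solves G' = 2*Q
I = qd**2 + G               # candidate first integral (the energy)

# eliminate qdd using the dynamical equation, then check the FI condition
dIdt = I.diff(t).subs(q.diff(t, 2), -Q)
assert sp.simplify(dIdt) == 0
```

The same elimination of $\ddot{q}$ via the dynamical equations, followed by collecting the coefficients of the powers of $\dot{q}$, is what produces the system of PDEs derived in the next section.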
\section{The conditions for $m$th-order FIs}
\label{sec.conditions}
We consider \emph{general} (i.e. Riemannian and non-Riemannian) autonomous dynamical systems of the form
\begin{equation}
\ddot{q}^{a}= -\Gamma^{a}_{bc}(q)\dot{q}^{b}\dot{q}^{c} -Q^{a}(q) \label{eq.tk1}
\end{equation}
where $q^{a}$ with $a=1,2,...,D$ are the generalized coordinates of the configuration space of the system, $D$ is the dimension of the configuration space, a dot over a letter indicates differentiation with respect to (wrt) the parameter $t$ (time) along the trajectory $q^{a}(t)$, Einstein's summation convention is applied, $\Gamma^{a}_{bc}(q)$ are the coefficients of a general connection and $-Q^{a}(q)$ are the generalized forces. Since only the symmetric part $\Gamma^{a}_{(bc)}$ contributes to the dynamical equations, the quantities $\Gamma^{a}_{bc}(q)$ are assumed, without loss of generality, to be symmetric.
We look for $m$th-order FIs of the general form
\begin{equation}
I^{(m)}= \sum_{r=0}^{m} M_{i_{1}i_{2}...i_{r}}(t,q) \dot{q}^{i_{1}} \dot{q}^{i_{2}}...\dot{q}^{i_{r}}= M+ M_{i_{1}}\dot{q}^{i_{1}} +M_{i_{1}i_{2}} \dot{q}^{i_{1}} \dot{q}^{i_{2}} +...+M_{i_{1}i_{2}...i_{m}}\dot{q}^{i_{1}} \dot{q}^{i_{2}} ...\dot{q}^{i_{m}} \label{FI.5}
\end{equation}
where $M_{i_{1}...i_{r}}(t,q)$ with $r=0,1,...,m$ are totally symmetric $r$-rank tensors and the index $m \geq 1$ denotes the order of the FI. \emph{We note that when $r=0$, the quantities $M_{i_{1}...i_{r}}(t,q)$ reduce to the scalar $M(t,q)$.} For $m=1$, we have the LFIs; for $m=2$, the QFIs; and for $m=3$, the CFIs.
The FI condition
\begin{equation}
\frac{dI^{(m)}}{dt}=0 \label{DS1.10a}
\end{equation}
along the dynamical equations (\ref{eq.tk1}) results in the following system of PDEs
\begin{equation}
M_{i_{1}i_{2}...i_{r},t} +M_{(i_{1}i_{2}...i_{r-1}|i_{r})} -(r+1)M_{i_{1}i_{2}...i_{r}i_{r+1}}Q^{i_{r+1}} =0, \enskip r=0, 1, 2, ..., m, m+1 \label{eq.veldep4}
\end{equation}
which is expanded as follows:
\begin{eqnarray}
M_{(i_{1}i_{2}...i_{m}|i_{m+1})} &=&0 \label{eq.veldep4.1} \\
M_{i_{1}i_{2}...i_{m},t}+M_{(i_{1}i_{2}...i_{m-1}|i_{m})} &=&0
\label{eq.veldep4.2} \\
M_{i_{1}i_{2}...i_{r},t}+M_{(i_{1}i_{2}...i_{r-1}|i_{r})} -(r+1)M_{i_{1}i_{2}...i_{r}i_{r+1}}Q^{i_{r+1}} &=&0, \enskip r=1,2,...,m-1, \enskip m>1 \label{eq.veldep4.3} \\
M_{,t}-M_{i_{1}}Q^{i_{1}} &=&0. \label{eq.veldep4.4}
\end{eqnarray}
The symbol $|$ denotes the covariant derivative wrt the symmetric connection $\Gamma^{a}_{bc}$, a comma indicates partial derivative wrt $q^{a}$ or $t$, round/square brackets indicate symmetrization/antisymmetrization of the enclosed indices, and indices enclosed between wavy lines are overlooked by symmetrization or antisymmetrization symbols.
We note that equation (\ref{eq.veldep4.1}) is derived from (\ref{eq.veldep4}) for $r=m+1$; equation (\ref{eq.veldep4.2}) from (\ref{eq.veldep4}) for $r=m$; and equation (\ref{eq.veldep4.4}) from (\ref{eq.veldep4}) for $r=0$.
Concerning the notation, we remark that:
\[
M_{i_{1}...i_{r-k}}(r=0)=
\begin{cases}
M, \enskip k=0 \\
0, \enskip k\geq 1
\end{cases}, \enskip
M_{i_{1}...i_{r}}(r>m)=0.
\]
Equations (\ref{eq.veldep4.1}) and (\ref{eq.veldep4.2}) are purely geometric equations, which are common to all systems of the form (\ref{eq.tk1}) that share the same symmetric connection. In particular, equation (\ref{eq.veldep4.1}) generalizes the concept of Killing tensors (KTs) to a non-metrical geometry with a symmetric connection $\Gamma^{a}_{bc}$. In this context, $M_{i_{1}i_{2}...i_{m}}$ is a \textbf{generalized $\mathbf{m}$th-order KT} for $\Gamma^{a}_{bc}$.
On the other hand, equations (\ref{eq.veldep4.3}) and (\ref{eq.veldep4.4}) are of a dynamical character, because they relate the geometric elements with the generalized forces $Q^{a}$ of the specific dynamical system.
Since the dynamical system (\ref{eq.tk1}) is autonomous, we use the polynomial method described in \cite{Mits time} in order to solve the system of PDEs (\ref{eq.veldep4.1}) - (\ref{eq.veldep4.4}). According to this method, one assumes general polynomial expressions in the variable $t$ for the tensor quantities $M_{i_{1}...i_{r}}(t,q)$ with $r=1,2,...,m$ (see eq. (\ref{eq.aspm}) in the appendix), and substitutes these expressions into the system of PDEs (\ref{eq.veldep4.1}) - (\ref{eq.veldep4.4}). Then, the scalar $M(t,q)$ is determined exactly, and the remaining PDEs reduce to polynomial equations in $t$ whose coefficients are functions of $q^{a}$. The `solution' of the latter system of PDEs is stated below as Theorem \ref{thm1}. A detailed proof is given in the appendix.
\section{Theorem for $m$th-order FIs of a general autonomous dynamical system}
\label{sec.theorem}
\begin{theorem} \label{thm1}
There are two types of $m$th-order FIs for the autonomous (in general non-Riemannian) dynamical system (\ref{eq.tk1}). These are the following:
\bigskip
\textbf{Integral 1.}
\begin{equation}
I^{(m)}_{n}= \sum_{r=1}^{m} \left( \sum^{n}_{N=0}L_{(N)i_{1}...i_{r}} t^{N}\right) \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +s_{0}\frac{t^{n+1}}{n+1} +\sum^{n>0}_{N=1} L_{(N-1)c}Q^{c} \frac{t^{N}}{N} +G(q) \label{eq.thm1.1}
\end{equation}
where $m\geq1$, $n\geq0$, $L_{(N)i_{1}...i_{m}}(q)$ with $N=0, 1, ..., n$ are $\mathbf{m}$\textbf{th-order generalized KTs} satisfying the condition
\begin{equation}
L_{(k)i_{1}...i_{m}} = -\frac{1}{k} L_{(k-1)(i_{1}...i_{m-1}|i_{m})}, \enskip k=1, 2, ..., n, \enskip n>0, \enskip m>1, \label{eq.thm1.2}
\end{equation}
the totally symmetric tensor $L_{(n)i_{1}...i_{m-1}}(q)$ with $m>1$ is an $\mathbf{(m-1)}$\textbf{th-order generalized KT}, the constant $s_{0}$ is defined by the condition
\begin{equation}
L_{(n)a}Q^{a} = s_{0} \label{eq.thm1.3}
\end{equation}
while the function $G(q)$ and the totally symmetric tensors $L_{(N)i_{1}...i_{r}}(q)$ satisfy the conditions:
\begin{eqnarray}
G_{,i_{1}} &=& 2L_{(0)i_{1}i_{2}}(m>1)Q^{i_{2}} -L_{(1)i_{1}}(n>0) \label{eq.thm1.4} \\
\left(L_{(k-1)c}Q^{c}\right)_{,i_{1}} &=& 2kL_{(k)i_{1}i_{2}}(m>1)Q^{i_{2}} -k(k+1)L_{(k+1)i_{1}}(k<n), \enskip k=1,2,...,n, \enskip n>0 \label{eq.thm1.5} \\
L_{(k)(i_{1}...i_{r-1}|i_{r})} &=& (r+1)L_{(k)i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} -(k+1) L_{(k+1)i_{1}...i_{r}}(k<n), \notag \\
&& k=0,1,..., n, \enskip r= 2,3,...,m-1, \enskip m>2. \label{eq.thm1.6}
\end{eqnarray}
\textbf{Integral 2.}
\begin{equation}
I^{(m)}_{e}= \frac{e^{\lambda t}}{\lambda} \left( \lambda \sum^{m}_{r=1} L_{i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +L_{c}Q^{c} \right) \label{eq.thm2.1}
\end{equation}
where $\lambda\neq0$, $L_{i_{1}...i_{m}}(q)$ is an $\mathbf{m}$\textbf{th-order generalized KT} satisfying the condition
\begin{equation}
L_{i_{1}...i_{m}}= -\frac{1}{\lambda} L_{(i_{1}...i_{m-1}|i_{m})}, \enskip m>1 \label{eq.thm2.2}
\end{equation}
and the totally symmetric tensors $L_{i_{1}...i_{r}}(q)$ satisfy the conditions:
\begin{eqnarray}
\left( L_{c}Q^{c} \right)_{,i_{1}}&=& 2\lambda L_{i_{1}i_{2}}(m>1) Q^{i_{2}} -\lambda^{2}L_{i_{1}} \label{eq.thm2.3} \\
L_{(i_{1}...i_{r-1}|i_{r})}&=& (r+1)L_{i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} -\lambda L_{i_{1}...i_{r}}, \enskip r=2,3,...,m-1, \enskip m>2. \label{eq.thm2.4}
\end{eqnarray}
\end{theorem}
Concerning the notation, the symbol $I^{(m)}_{n}$ denotes the $m$th-order FI (upper index) with degree of time-dependence $n$ (lower index), while $I^{(m)}_{e}$ denotes the $m$th-order FI with exponential time-dependence (lower index).
Using mathematical induction, one also proves the following recursion formulae.
\begin{proposition} \label{pro1}
The independent $m$th-order FIs $I_{n}^{(m)}$ and $I^{(m)}_{e}$ satisfy the following recursion formulae:\newline
a. $I_{n}^{(k)}<I_{n}^{(k+1)}$, that is, each $k$th-order FI $I_{n}^{(k)}$ is a subcase of the next $(k+1)$th-order FI $I_{n}^{(k+1)}$ with the same degree $n$ of time-dependence for all integers $k\geq1$. \newline
b. $I_{\ell}^{(m)}<I_{\ell+1}^{(m)}$, that is, the $m$th-order FI $I_{\ell}^{(m)}$ with time-dependence fixed by $\ell$ is a subcase of the $m$th-order FI $I_{\ell+1}^{(m)}$ with time-dependence $\ell+1$ for all integers $\ell\geq0$. \newline
c. $I_{e}^{(k)}<I_{e}^{(k+1)}$, that is, each $k$th-order FI $I_{e}^{(k)}$ is a subcase of the next $(k+1)$th-order FI $I_{e}^{(k+1)}$ for all integers $k\geq1$.
\end{proposition}
We note that Theorem \ref{thm1} for $m=2$ (QFIs) and a Riemannian connection reduces to Theorem 3 of \cite{TsampMitsB}.
In the case of a Riemannian connection, by means of the Inverse Noether Theorem \cite{Djukic, TsampMitsB}, the general $m$th-order FIs (\ref{FI.5}) are related to the generalized gauged weak Noether symmetry
\begin{equation}
\left( \xi=0, \enskip \eta_{i_{1}}= -\frac{\partial I^{(m)}}{\partial \dot{q}^{i_{1}}}, \enskip \phi_{a}, \enskip f =I^{(m)} -\frac{\partial I^{(m)}}{\partial \dot{q}^{i_{1}}}\dot{q}^{i_{1}} \right) \quad \text{such that $\phi_{a} \dot{q}^{a} +F^{a} \frac{\partial I^{(m)}}{\partial \dot{q}^{a}}= 0$} \label{eq.weak6}
\end{equation}
where $F^{a}(t,q,\dot{q})$ are the non-conservative generalized forces, $\phi^{a}(t,q,\dot{q})$ is an additional vector generator, $f(t,q,\dot{q})$ is the Noether function, $\mathbf{X}= \xi(t,q,\dot{q}) \partial_{t} + \eta^{a}(t,q,\dot{q}) \partial_{q^{a}}$ is the Lie generator, and the quantity
\[
\frac{\partial I^{(m)}}{\partial \dot{q}^{i_{1}}} = M_{i_{1}} + 2M_{i_{1}i_{2}}\dot{q}^{i_{2}} + 3M_{i_{1}i_{2}i_{3}} \dot{q}^{i_{2}} \dot{q}^{i_{3}} + ... + m M_{i_{1}i_{2}...i_{m}} \dot{q}^{i_{2}}...\dot{q}^{i_{m}} = \sum^{m-1}_{r=0} (r+1) M_{i_{1}i_{2}...i_{r+1}} \dot{q}^{i_{2}}...\dot{q}^{i_{r+1}}.
\]
\section{FIs of order $m=1$: LFIs}
\label{sec.lfis}
Applying Theorem \ref{thm1} for $m=1$, we obtain the following proposition for the LFIs.
\begin{proposition} \label{pro.thm1.1}
There are two types of LFIs for the autonomous (in general non-Riemannian) dynamical system (\ref{eq.tk1}) which are the following:
\bigskip
\textbf{LFI 1.}
\begin{equation}
I^{(1)}_{n}= \sum^{n}_{N=0}L_{(N)a} t^{N} \dot{q}^{a} +s_{0}\frac{t^{n+1}}{n+1} +\sum^{n>0}_{N=1} L_{(N-1)a}Q^{a} \frac{t^{N}}{N} +G(q) \label{eq.pro1.1}
\end{equation}
where $n\geq0$, $G(q)$ is an arbitrary smooth function, $L_{(N)a}(q)$ with $N=0, 1, ..., n$ are \textbf{generalized KVs} satisfying the conditions:
\begin{eqnarray}
L_{(1)a}(n>0)&=& -G_{,a}. \label{eq.pro1.2} \\
\left(L_{(k-1)b}Q^{b}\right)_{,a}&=& -k(k+1)L_{(k+1)a}(k<n), \enskip k=1,2,...,n, \enskip n>0, \label{eq.pro1.3}
\end{eqnarray}
and the constant $s_{0}$ is defined by the condition
\begin{equation}
L_{(n)a}Q^{a} = s_{0}. \label{eq.pro1.4}
\end{equation}
\textbf{LFI 2.}
\begin{equation}
I^{(1)}_{e}= \frac{e^{\lambda t}}{\lambda} \left( \lambda L_{a} \dot{q}^{a} +L_{a}Q^{a} \right) \label{eq.pro1.5}
\end{equation}
where $\lambda\neq0$ and $L_{a}(q)$ is a \textbf{generalized KV} satisfying the condition
\begin{equation}
\left( L_{b}Q^{b} \right)_{,a}= -\lambda^{2}L_{a}. \label{eq.pro1.6}
\end{equation}
\end{proposition}
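To make the exponential integral LFI 2 concrete, here is a small numerical sketch (our own example, not from the paper): in one dimension, taking $L_{a}=1$ and $Q(q)=-\lambda^{2}q$ satisfies condition (\ref{eq.pro1.6}), the dynamics becomes the repulsive oscillator $\ddot{q}=-Q=\lambda^{2}q$, and (\ref{eq.pro1.5}) reduces to $I=e^{\lambda t}(\dot{q}-\lambda q)$, which a short RK4 integration shows to be conserved:

```python
import math

# 1d check of LFI 2: L = 1, Q(q) = -lam**2 * q, so qdd = lam**2 * q and
# the first integral reduces to I = e^{lam t} (qd - lam q).
lam = 0.7

def integral(t, q, qd):
    return math.exp(lam * t) * (qd - lam * q)

def rk4_step(t, q, qd, h):
    """One classical RK4 step for the system qdd = lam**2 * q."""
    def f(state):
        x, v = state
        return (v, lam**2 * x)
    k1 = f((q, qd))
    k2 = f((q + h/2*k1[0], qd + h/2*k1[1]))
    k3 = f((q + h/2*k2[0], qd + h/2*k2[1]))
    k4 = f((q + h*k3[0], qd + h*k3[1]))
    return (q + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            qd + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

t, q, qd, h = 0.0, 1.0, 0.3, 1e-3
I0 = integral(t, q, qd)
for _ in range(2000):                 # integrate up to t = 2
    q, qd = rk4_step(t, q, qd, h)
    t += h
assert abs(integral(t, q, qd) - I0) < 1e-8
```

Note that $I$ grows like $e^{\lambda t}$ in one factor while $\dot{q}-\lambda q$ decays like $e^{-\lambda t}$ along solutions, which is why the product stays constant.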
The LFI (\ref{eq.pro1.1}) consists of two independent LFIs: a) an LFI for the even vectors $L_{(2N)a}$, and b) another one for the odd vectors $L_{(2N+1)a}$ and the scalar $G(q)$. This results directly from the fact that condition (\ref{eq.pro1.2}) involves only the odd vector $L_{(1)a}$ and the scalar $G(q)$, while conditions (\ref{eq.pro1.3}) separate into two sets: one involving only the even vectors $L_{(2N)a}$ (i.e. for the odd values $k=1,3,5,...$), and another involving only the odd vectors $L_{(2N+1)a}$ (i.e. for the even values $k=2,4,6,...$). We note that the above holds whether the time-dependence $n$ is odd or even. In the following proposition, we give the explicit formulae of these two independent LFIs.
\begin{proposition}
\label{pro.thm1.2} The LFI $I^{(1)}_{n}$ given in (\ref{eq.pro1.1}) consists of the following two independent LFIs:
\bigskip
\textbf{LFI 1.1.}
\begin{equation}
I^{(1,1)}_{\ell}= \sum_{N=1}^{\ell} t^{2N-1} L_{(2N-1)a} \dot{q}^{a} +s_{1}\frac{t^{2\ell}}{2\ell} +\sum_{N=1}^{\ell-1\geq1} L_{(2N-1)a}Q^{a} \frac{t^{2N}}{2N} +G(q) \label{eq.pro1.10}
\end{equation}
where $\ell>0$, $L_{(2N-1)a}(q)$ with $N=1,2,...,\ell$ are \textbf{generalized KVs} satisfying the condition
\begin{equation}
\left(L_{(2k-1)b}Q^{b}\right)_{,a}= -2k(2k+1)L_{(2k+1)a}, \enskip k=1,2,...,\ell-1, \enskip \ell>1, \label{eq.pro1.11}
\end{equation}
$G(q)$ is an arbitrary smooth function such that $L_{(1)a}= -G_{,a}$ is a \textbf{gradient generalized KV}, and the constant $s_{1}$ is defined by the condition
\begin{equation}
s_{1}= L_{(2\ell-1)a}Q^{a}. \label{eq.pro1.12}
\end{equation}
\textbf{LFI 1.2.}
\begin{equation}
I^{(1,2)}_{\ell}= \sum_{N=0}^{\ell} t^{2N}L_{(2N)a}\dot{q}^{a} +s_{0}\frac{t^{2\ell+1}}{2\ell+1} +\sum_{N=0}^{\ell-1\geq0} L_{(2N)a}Q^{a} \frac{t^{2N+1}}{2N+1} \label{eq.pro1.7}
\end{equation}
where $\ell \geq0$, $L_{(2N)a}(q)$ with $N=0,1,...,\ell$ are \textbf{generalized KVs} satisfying the condition
\begin{equation}
\left(L_{(2k)b}Q^{b}\right)_{,a}= -2(2k+1)(k+1)L_{(2k+2)a}, \enskip k=0,1,...,\ell-1, \enskip \ell>0, \label{eq.pro1.8}
\end{equation}
and the constant $s_{0}$ is defined by the condition
\begin{equation}
L_{(2\ell)a}Q^{a}= s_{0}. \label{eq.pro1.9}
\end{equation}
Therefore, the general autonomous dynamical system (\ref{eq.tk1}) admits three independent LFIs which are given by the formulae (\ref{eq.pro1.5}), (\ref{eq.pro1.10}), and (\ref{eq.pro1.7}).
\end{proposition}
The notation $I^{(1,\alpha)}_{\ell}$, where $\alpha=1,2$, indicates one of the two types of independent LFIs of the type $I^{(1)}_{n}$ with time-dependence fixed by the lower index $\ell$.
We note that condition (\ref{eq.pro1.11}) is derived from (\ref{eq.pro1.3}) for the even values $k=2,4,6,..., 2\ell$ if we set $n=2\ell$ and rename the index $k$ as $2k$; while condition (\ref{eq.pro1.8}) is derived from (\ref{eq.pro1.3}) for the odd values $k=1,3,5,..., 2\ell-1$ if we set $n=2\ell$ and rename the index $k$ as $2k+1$.
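An equally simple sanity check of LFI 1.2 (again our own example): for $\ell=0$ in one dimension, a constant $L_{a}$ is a generalized KV of the flat connection, a constant force $Q=s_{0}$ gives $L_{a}Q^{a}=s_{0}$ as in (\ref{eq.pro1.9}), and (\ref{eq.pro1.7}) reduces to $I=\dot{q}+s_{0}t$, which is conserved exactly along $\ddot{q}=-s_{0}$:

```python
# 1d check of LFI 1.2 with ell = 0: constant force Q = s0, dynamics
# qdd = -s0, first integral I = qd + s0*t.  The dynamics integrates in
# closed form, so the check is exact.
s0, q0, v0 = 1.3, 0.2, -0.5

def state(t):
    """Closed-form solution of qdd = -s0 with q(0) = q0, qd(0) = v0."""
    return q0 + v0 * t - 0.5 * s0 * t**2, v0 - s0 * t

def integral(t):
    q, qd = state(t)
    return qd + s0 * t          # = v0 for every t

assert all(abs(integral(t) - integral(0.0)) < 1e-12 for t in (0.5, 1.0, 7.0))
```

Here $I$ is just the initial velocity $v_{0}$, a momentum-type integral of the uniformly accelerated motion.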
\section{FIs of order $m>1$}
\label{sec.higherFIs}
Applying Theorem \ref{thm1} for $m>1$, we find the following proposition.
\begin{proposition} \label{pro.thm1.3}
There are two types of $m$th-order FIs with $m>1$ for the autonomous (in general non-Riemannian) dynamical system (\ref{eq.tk1}) which are the following:
\bigskip
\textbf{Integral 1.}
\begin{equation}
I^{(m>1)}_{n}= \sum_{r=1}^{m} \left( \sum^{n}_{N=0}L_{(N)i_{1}...i_{r}} t^{N}\right) \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +s_{0}\frac{t^{n+1}}{n+1} +\sum^{n>0}_{N=1} L_{(N-1)c}Q^{c} \frac{t^{N}}{N} +G(q) \label{eq.pro2.1}
\end{equation}
where $m>1$, $n\geq0$, $L_{(N)i_{1}...i_{m}}(q)$ with $N=0, 1, ..., n$ are $\mathbf{m}$\textbf{th-order generalized KTs} satisfying the condition
\begin{equation}
L_{(k)i_{1}...i_{m}} = -\frac{1}{k} L_{(k-1)(i_{1}...i_{m-1}|i_{m})}, \enskip k=1, 2, ..., n, \enskip n>0, \label{eq.pro2.2}
\end{equation}
the totally symmetric tensor $L_{(n)i_{1}...i_{m-1}}(q)$ is an $\mathbf{(m-1)}$\textbf{th-order generalized KT}, the constant $s_{0}$ is defined by the condition
\begin{equation}
L_{(n)a}Q^{a} = s_{0} \label{eq.pro2.3}
\end{equation}
while the function $G(q)$ and the remaining totally symmetric tensors $L_{(N)i_{1}...i_{r}}(q)$ satisfy the conditions:
\begin{eqnarray}
G_{,i_{1}} &=& 2L_{(0)i_{1}i_{2}}Q^{i_{2}} -L_{(1)i_{1}}(n>0) \label{eq.pro2.4} \\
\left(L_{(k-1)c}Q^{c}\right)_{,i_{1}} &=& 2kL_{(k)i_{1}i_{2}}Q^{i_{2}} -k(k+1)L_{(k+1)i_{1}}(k<n), \enskip k=1,2,...,n, \enskip n>0 \label{eq.pro2.5} \\
L_{(k)(i_{1}...i_{r-1}|i_{r})} &=& (r+1)L_{(k)i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} -(k+1) L_{(k+1)i_{1}...i_{r}}(k<n), \notag \\
&& k=0,1,..., n, \enskip r= 2,3,...,m-1, \enskip m>2. \label{eq.pro2.6}
\end{eqnarray}
\textbf{Integral 2.}
\begin{equation}
I^{(m>1)}_{e}= \frac{e^{\lambda t}}{\lambda} \left( \lambda \sum^{m}_{r=1} L_{i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +L_{c}Q^{c} \right) \label{eq.pro2.7}
\end{equation}
where $\lambda\neq0$, $L_{i_{1}...i_{m}}(q)$ is an $\mathbf{m}$\textbf{th-order generalized KT} satisfying the condition
\begin{equation}
L_{i_{1}...i_{m}}= -\frac{1}{\lambda} L_{(i_{1}...i_{m-1}|i_{m})} \label{eq.pro2.8}
\end{equation}
and the remaining totally symmetric tensors $L_{i_{1}...i_{r}}(q)$ with $r=1,2,...,m-1$ satisfy the conditions:
\begin{eqnarray}
\left( L_{c}Q^{c} \right)_{,i_{1}}&=& 2\lambda L_{i_{1}i_{2}} Q^{i_{2}} -\lambda^{2}L_{i_{1}} \label{eq.pro2.9} \\
L_{(i_{1}...i_{r-1}|i_{r})}&=& (r+1)L_{i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} -\lambda L_{i_{1}...i_{r}}, \enskip r=2,3,...,m-1, \enskip m>2. \label{eq.pro2.10}
\end{eqnarray}
\end{proposition}
From Proposition \ref{pro.thm1.3}, we observe that the conditions (\ref{eq.pro2.2}) - (\ref{eq.pro2.6}) of the $m$th-order FI (\ref{eq.pro2.1}) are divided into two classes, according to whether the tensor quantities $L_{(N)i_{1}...i_{r}}(q)$ are of even or odd order and the associated index $N$ is even or odd. Therefore, the FI (\ref{eq.pro2.1}) consists of two independent FIs: \newline
a) One FI, say $I^{(m,1)}_{\ell}$, which contains tensor quantities $L_{(2N-1)i_{1}...i_{r}}(q)$ of odd order and $L_{(2N)i_{1}...i_{r}}(q)$ of even order. \newline
b) Another FI, say $I^{(m,2)}_{\ell}$, which contains tensor quantities $L_{(2N-1)i_{1}...i_{r}}(q)$ of even order and $L_{(2N)i_{1}...i_{r}}(q)$ of odd order.
We note that the order $r$ of the involved totally symmetric tensors $L_{(N)i_{1}...i_{r}}(q)$ takes values from $1$ to $m>1$, where $m$ is the order of the considered FI (\ref{eq.pro2.1}).
The independent FIs $I^{(m,1)}_{\ell}$ and $I^{(m,2)}_{\ell}$ are given by the following explicit formulae:
\bigskip
a.
\begin{eqnarray}
I^{(m>1,1)}_{\ell} &=& \sum_{r=1,\text{odd}}^{m} \sum_{N=1}^{\ell>0} t^{2N-1} L_{(2N-1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +\sum_{r=1,\text{even}}^{m} \sum_{N=0}^{\ell} t^{2N} L_{(2N)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} + \notag \\
&& +\sum_{N=1}^{\ell>0}L_{(2N-1)c}Q^{c} \frac{t^{2N}}{2N} +G(q) \label{eq.pro3.1}
\end{eqnarray}
where the time-dependence $\ell \geq0$ and the involved quantities satisfy the conditions:
\begin{eqnarray}
G_{,i_{1}}&=& 2L_{(0)i_{1}i_{2}}Q^{i_{2}} -L_{(1)i_{1}}(\ell>0) \label{eq.pro3.2} \\
\left(L_{(2N-1)c}Q^{c} \right)_{,i_{1}}&=& 4NL_{(2N)i_{1}i_{2}} Q^{i_{2}} -2N(2N+1)L_{(2N+1)i_{1}}(N<\ell), \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro3.3} \\
L_{(2N-1)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N-1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -2NL_{(2N)i_{1}...i_{r}}, \notag \\
&& N=1,2,...,\ell, \enskip r=2,4,6,..., \enskip \ell>0 \label{eq.pro3.4} \\
L_{(2N)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -(2N+1)L_{(2N+1)i_{1}...i_{r}}(N<\ell), \notag \\
&& N=0,1,...,\ell, \enskip r=3,5,7,... ~. \label{eq.pro3.5}
\end{eqnarray}
In conditions (\ref{eq.pro3.4}) and (\ref{eq.pro3.5}), it holds that $r\leq m-1$ where $m>2$.
b.
\begin{eqnarray}
I^{(m>1,2)}_{\ell} &=& \sum_{r=1,\text{odd}}^{m} \sum_{N=0}^{\ell} t^{2N} L_{(2N)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +\sum_{r=1,\text{even}}^{m} \sum_{N=1}^{\ell>0} t^{2N-1} L_{(2N-1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} + \notag \\
&& +s_{0}\frac{t^{2\ell+1}}{2\ell+1} +\sum_{N=0}^{\ell-1\geq0} L_{(2N)c}Q^{c} \frac{t^{2N+1}}{2N+1} \label{eq.pro4.1}
\end{eqnarray}
where the parameter $\ell \geq0$ fixes the time-dependence and the involved quantities satisfy the conditions:
\begin{eqnarray}
L_{(2\ell)a}Q^{a}&=&s_{0} \label{eq.pro4.0} \\
\left( L_{(2N-2)c}Q^{c} \right)_{,i_{1}} &=& 2(2N-1) L_{(2N-1)i_{1}i_{2}} Q^{i_{2}} -2N(2N-1)L_{(2N)i_{1}}, \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro4.2} \\
L_{(2N)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -(2N+1)L_{(2N+1)i_{1}...i_{r}}(N<\ell), \notag \\
&& N=0,1,...,\ell, \enskip r=2,4,6,..., \label{eq.pro4.3} \\
L_{(2N-1)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N-1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -2NL_{(2N)i_{1}...i_{r}}, \notag \\
&& N=1,2,...,\ell, \enskip r=3,5,7,..., \enskip \ell>0. \label{eq.pro4.4}
\end{eqnarray}
In conditions (\ref{eq.pro4.3}) and (\ref{eq.pro4.4}), it holds that $r\leq m-1$ where $m>2$.
For the FIs (\ref{eq.pro3.1}) and (\ref{eq.pro4.1}), we have also that $L_{(2\ell)i_{1}...i_{m-1}}(q)$ is an \textbf{$\mathbf{(m-1)}$th-order generalized KT}, and $L_{(N)i_{1}...i_{m}}(q)$ with $N=0, 1, ..., 2\ell$ are \textbf{$\mathbf{m}$th-order generalized KTs} such that:
\begin{eqnarray}
L_{(2N)i_{1}...i_{m}}&=& -\frac{1}{2N} L_{(2N-1)(i_{1}...i_{m-1}|i_{m})}, \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro0.1} \\
L_{(2N-1)i_{1}...i_{m}}&=& -\frac{1}{2N-1} L_{(2N-2)(i_{1}...i_{m-1}|i_{m})}, \enskip N=1,2,...,\ell, \enskip \ell>0. \label{eq.pro0.2}
\end{eqnarray}
We note that the sum $\sum_{r=1,\text{odd}}^{m}$ is over the odd values of $r$, while the sum $\sum_{r=1,\text{even}}^{m}$ is over the even values of $r$.
\section{How to remove the $(m-1)$th-order geometric symmetries from the two independent FIs $I^{(m>1,1)}_{\ell}$ and $I^{(m>1,2)}_{\ell}$: The complete forms}
\label{sec.remove}
As we have seen, the two independent FIs (\ref{eq.pro3.1}) and (\ref{eq.pro4.1}) include geometric symmetries of order smaller than the order $m>1$ of the FIs. These symmetries are described by the $(m-1)$th-order generalized KT $L_{(2\ell)i_{1}...i_{m-1}}(q)$. We observe that if $m=2\nu$ (even) where $\nu>0$, this $(m-1)$th-order generalized KT appears in the FI (\ref{eq.pro4.1}); while in the case that $m=2\nu+1$ (odd), it appears in the FI (\ref{eq.pro3.1}). Moreover, for an even order $m$, the condition (\ref{eq.pro0.1}) accompanies the FI (\ref{eq.pro3.1}) and the condition (\ref{eq.pro0.2}) accompanies the FI (\ref{eq.pro4.1}); while for an odd order $m$, the condition (\ref{eq.pro0.1}) accompanies the FI (\ref{eq.pro4.1}) and the condition (\ref{eq.pro0.2}) accompanies the FI (\ref{eq.pro3.1}).
The $(m-1)$th-order generalized KT $L_{(2\ell)i_{1}...i_{m-1}}(q)$ can be removed from the expressions (\ref{eq.pro3.1}) and (\ref{eq.pro4.1}) by applying the following procedure:
\bigskip
1) \underline{For an even order $m=2\nu$ where $\nu>0$.}
In this case, the $(2\nu-1)$th-order generalized KT $L_{(2\ell)i_{1}...i_{2\nu-1}}(q)$ appears in the independent FI (\ref{eq.pro4.1}). We can remove this $(2\nu-1)$th-order symmetry by introducing a sequence of even-rank totally symmetric tensors of a higher time-dependence; that is, we add to the FI (\ref{eq.pro4.1}) terms of the form
\begin{equation}
t^{2\ell+1} L_{(2\ell+1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}}, \enskip r=2,4,...,2\nu, \enskip \nu>0 \label{eq.prf1.1}
\end{equation}
such that $L_{(2\ell+1)i_{1}...i_{2\nu}}$ is a $(2\nu)$th-order generalized KT given by the relation
\begin{equation}
L_{(2\ell+1)i_{1}...i_{2\nu}}= -\frac{1}{2\ell+1} L_{(2\ell)(i_{1}...i_{2\nu-1}|i_{2\nu})}. \label{eq.prf1.2}
\end{equation}
Then, the condition (\ref{eq.pro4.0}) must be generalized as
\begin{equation}
\left( L_{(2\ell)c}Q^{c} \right)_{,i_{1}}= 2(2\ell+1) L_{(2\ell+1)i_{1}i_{2}} Q^{i_{2}}; \label{eq.prf1.3}
\end{equation}
from the condition (\ref{eq.pro4.3}), the restriction $N<\ell$ must be removed because now quantities of the form $L_{(2N+1)i_{1}...i_{r}}$ do exist for $N=\ell$; and we must add the condition
\begin{equation}
L_{(2\ell+1)(i_{1}...i_{r-1}|i_{r})} = (r+1) L_{(2\ell+1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}}, \enskip r=3,5,7,...,2\nu-1, \enskip \nu>1. \label{eq.prf1.4}
\end{equation}
Observe that the condition (\ref{eq.prf1.2}) removes the original $(2\nu-1)$th-order symmetry by allowing the $(2\nu-1)$th-order generalized KT $L_{(2\ell)i_{1}...i_{2\nu-1}}(q)$ to be a totally symmetric tensor of the same rank which is not necessarily a KT; the case of a generalized KT is then recovered as a subcase. This shows that symmetries of an order less than the order of the FI can always be absorbed into a higher order term.
2) \underline{For an odd order $m=2\nu+1$ where $\nu>0$.}
In this case, the $(2\nu)$th-order generalized KT $L_{(2\ell)i_{1}...i_{2\nu}}(q)$ appears in the independent FI (\ref{eq.pro3.1}). We can remove this $(2\nu)$th-order symmetry by introducing a sequence of odd-rank totally symmetric tensors of a higher time-dependence; that is, we add to the FI (\ref{eq.pro3.1}) the sum
\begin{equation}
\sum_{r=1,\text{odd}}^{2\nu+1} t^{2\ell+1} L_{(2\ell+1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}}+ s_{1} \frac{t^{2\ell+2}}{2\ell+2} \label{eq.prf2.1}
\end{equation}
such that $L_{(2\ell+1)i_{1}...i_{2\nu+1}}$ is a $(2\nu+1)$th-order generalized KT given by the relation
\begin{equation}
L_{(2\ell+1)i_{1}...i_{2\nu+1}}= -\frac{1}{2\ell+1} L_{(2\ell)(i_{1}...i_{2\nu}|i_{2\nu+1})}
\label{eq.prf2.2}
\end{equation}
and the constant $s_{1}$ is defined by the relation
\begin{equation}
L_{(2\ell+1)c}Q^{c} = s_{1}. \label{eq.prf2.3}
\end{equation}
Then, from conditions (\ref{eq.pro3.3}) and (\ref{eq.pro3.5}), the restriction $N<\ell$ must be removed because now quantities of the form $L_{(2N+1)i_{1}...i_{r}}$ do exist for $N=\ell$; and we must add the condition
\begin{equation}
L_{(2\ell+1)(i_{1}...i_{r-1}|i_{r})} = (r+1) L_{(2\ell+1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}}, \enskip r=2,4,6,...,2\nu. \label{eq.prf2.4}
\end{equation}
Observe that the condition (\ref{eq.prf2.2}) removes the original $(2\nu)$th-order symmetry by allowing the $(2\nu)$th-order generalized KT $L_{(2\ell)i_{1}...i_{2\nu}}(q)$ to be a totally symmetric tensor of the same rank which is not necessarily a KT; the case of a generalized KT is then recovered as a subcase. This shows that symmetries of an order less than the order of the FI can always be absorbed into a higher order term.
\bigskip
We collect the above results in the following propositions.
\begin{proposition}
\label{pro.thm1.4} For an even order $m=2\nu$, where $\nu>0$, the \textbf{complete forms} (i.e. expressions without geometric symmetries of order less than $m$) of the independent FIs (\ref{eq.pro3.1}) and (\ref{eq.pro4.1}) are the following:
\bigskip
\textbf{Integral 1.1.}
\begin{eqnarray}
J^{(2\nu,1)}_{\ell} &=& \sum_{r=1,\text{odd}}^{2\nu} \sum_{N=1}^{\ell>0} t^{2N-1} L_{(2N-1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +\sum_{r=1,\text{even}}^{2\nu} \sum_{N=0}^{\ell} t^{2N} L_{(2N)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} + \notag \\
&& +\sum_{N=1}^{\ell>0}L_{(2N-1)c}Q^{c} \frac{t^{2N}}{2N} +G(q) \label{eq.pro5.1}
\end{eqnarray}
where $\ell \geq0$, $L_{(2N)i_{1}...i_{2\nu}}(q)$ with $N=0,1,...,\ell$ are $\mathbf{(2\nu)}$\textbf{th-order generalized KTs} given by the relation
\begin{equation}
L_{(2N)i_{1}...i_{2\nu}}= -\frac{1}{2N} L_{(2N-1)(i_{1}...i_{2\nu-1}|i_{2\nu})}, \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro5.2}
\end{equation}
and the involved quantities satisfy the conditions:
\begin{eqnarray}
G_{,i_{1}}&=& 2L_{(0)i_{1}i_{2}}Q^{i_{2}} -L_{(1)i_{1}}(\ell>0) \label{eq.pro5.3} \\
\left(L_{(2N-1)c}Q^{c} \right)_{,i_{1}}&=& 4NL_{(2N)i_{1}i_{2}} Q^{i_{2}} -2N(2N+1)L_{(2N+1)i_{1}}(N<\ell), \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro5.4} \\
L_{(2N-1)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N-1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -2NL_{(2N)i_{1}...i_{r}}, \notag \\
&& N=1,2,...,\ell, \enskip r=2,4,6,...,2\nu-2, \enskip \ell>0, \enskip \nu>1 \label{eq.pro5.5} \\
L_{(2N)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -(2N+1)L_{(2N+1)i_{1}...i_{r}}(N<\ell), \notag \\
&& N=0,1,...,\ell, \enskip r=3,5,7,..., 2\nu-1, \enskip \nu>1. \label{eq.pro5.6}
\end{eqnarray}
\textbf{Integral 1.2.}
\begin{eqnarray}
J^{(2\nu,2)}_{\ell} &=& \sum_{r=1,\text{odd}}^{2\nu} \sum_{N=0}^{\ell} t^{2N} L_{(2N)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +\sum_{r=1,\text{even}}^{2\nu} \sum_{N=1}^{\ell+1} t^{2N-1} L_{(2N-1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} + \notag \\
&& +s_{0}\frac{t^{2\ell+1}}{2\ell+1} +\sum_{N=0}^{\ell-1\geq0} L_{(2N)c}Q^{c} \frac{t^{2N+1}}{2N+1} \label{eq.pro5.7}
\end{eqnarray}
where $\ell \geq0$, $L_{(2N-1)i_{1}...i_{2\nu}}(q)$ with $N=1,2,...,\ell+1$ are $\mathbf{(2\nu)}$\textbf{th-order generalized KTs} given by the relation
\begin{equation}
L_{(2N-1)i_{1}...i_{2\nu}}= -\frac{1}{2N-1} L_{(2N-2)(i_{1}...i_{2\nu-1}|i_{2\nu})}, \enskip N=1,2,...,\ell+1 \label{eq.pro5.8}
\end{equation}
and the involved quantities satisfy the conditions:
\begin{eqnarray}
\left( L_{(2N-2)c}Q^{c} \right)_{,i_{1}}&=& 2(2N-1) L_{(2N-1)i_{1}i_{2}} Q^{i_{2}} -2N(2N-1)L_{(2N)i_{1}}(N<\ell+1), \notag \\
&& N=1,2,...,\ell+1 \label{eq.pro5.9} \\
L_{(2N)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -(2N+1)L_{(2N+1)i_{1}...i_{r}}, \notag \\
&& N=0,1,...,\ell, \enskip r=2,4,6,...,2\nu-2, \enskip \nu>1 \label{eq.pro5.10} \\
L_{(2N-1)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N-1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -2NL_{(2N)i_{1}...i_{r}}(N<\ell+1), \notag \\
&& N=1,2,...,\ell+1, \enskip r=3,5,7,...,2\nu-1, \enskip \nu>1. \label{eq.pro5.11}
\end{eqnarray}
\end{proposition}
\begin{proposition}
\label{pro.thm1.5} For an odd order $m=2\nu+1$, where $\nu>0$, the \textbf{complete forms} (i.e. expressions without geometric symmetries of order less than $m$) of the independent FIs (\ref{eq.pro3.1}) and (\ref{eq.pro4.1}) are the following:
\bigskip
\textbf{Integral 1.1.}
\begin{eqnarray}
J^{(2\nu+1,1)}_{\ell} &=& \sum_{r=1,\text{odd}}^{2\nu+1} \sum_{N=1}^{\ell+1} t^{2N-1} L_{(2N-1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +\sum_{r=1,\text{even}}^{2\nu+1} \sum_{N=0}^{\ell} t^{2N} L_{(2N)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} + \notag \\
&& +s_{1}\frac{t^{2\ell+2}}{2\ell+2} +\sum_{N=1}^{\ell>0}L_{(2N-1)c}Q^{c} \frac{t^{2N}}{2N} +G(q) \label{eq.pro6.1}
\end{eqnarray}
where $\ell \geq0$, $L_{(2N-1)i_{1}...i_{2\nu+1}}(q)$ with $N=1,...,\ell+1$ are $\mathbf{(2\nu+1)}$\textbf{th-order generalized KTs} given by the relation
\begin{equation}
L_{(2N-1)i_{1}...i_{2\nu+1}}= -\frac{1}{2N-1} L_{(2N-2)(i_{1}...i_{2\nu}|i_{2\nu+1})}, \enskip N=1,2,...,\ell+1 \label{eq.pro6.2}
\end{equation}
while $s_{1}$ is a constant defined by the relation
\begin{equation}
L_{(2\ell+1)a}Q^{a} = s_{1} \label{eq.pro6.3}
\end{equation}
and the involved quantities satisfy the conditions:
\begin{eqnarray}
G_{,i_{1}}&=& 2L_{(0)i_{1}i_{2}}Q^{i_{2}} -L_{(1)i_{1}}(\ell>0) \label{eq.pro6.4} \\
\left(L_{(2N-1)c}Q^{c} \right)_{,i_{1}}&=& 4NL_{(2N)i_{1}i_{2}} Q^{i_{2}} -2N(2N+1)L_{(2N+1)i_{1}}, \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro6.5} \\
L_{(2N-1)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N-1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -2NL_{(2N)i_{1}...i_{r}}(N<\ell+1), \notag \\
&& N=1,2,...,\ell+1, \enskip r=2,4,6,...,2\nu \label{eq.pro6.6} \\
L_{(2N)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -(2N+1)L_{(2N+1)i_{1}...i_{r}}, \notag \\
&& N=0,1,...,\ell, \enskip r=3,5,7,..., 2\nu-1, \enskip \nu>1. \label{eq.pro6.7}
\end{eqnarray}
\textbf{Integral 1.2.}
\begin{eqnarray}
J^{(2\nu+1,2)}_{\ell} &=& \sum_{r=1,\text{odd}}^{2\nu+1} \sum_{N=0}^{\ell} t^{2N} L_{(2N)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +\sum_{r=1,\text{even}}^{2\nu+1} \sum_{N=1}^{\ell>0} t^{2N-1} L_{(2N-1)i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} + \notag \\
&& +s_{0}\frac{t^{2\ell+1}}{2\ell+1} +\sum_{N=0}^{\ell-1\geq0} L_{(2N)c}Q^{c} \frac{t^{2N+1}}{2N+1} \label{eq.pro6.8}
\end{eqnarray}
where $\ell \geq0$, $L_{(2N)i_{1}...i_{2\nu+1}}(q)$ with $N=0,1,...,\ell$ are $\mathbf{(2\nu+1)}$\textbf{th-order generalized KTs} given by the relation
\begin{equation}
L_{(2N)i_{1}...i_{2\nu+1}}= -\frac{1}{2N} L_{(2N-1)(i_{1}...i_{2\nu}|i_{2\nu+1})}, \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro6.9}
\end{equation}
and the involved quantities satisfy the conditions:
\begin{eqnarray}
L_{(2\ell)a}Q^{a}&=&s_{0} \label{eq.pro6.10} \\
\left( L_{(2N-2)c}Q^{c} \right)_{,i_{1}} &=& 2(2N-1) L_{(2N-1)i_{1}i_{2}} Q^{i_{2}} -2N(2N-1)L_{(2N)i_{1}}, \enskip N=1,2,...,\ell, \enskip \ell>0 \label{eq.pro6.11} \\
L_{(2N)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -(2N+1)L_{(2N+1)i_{1}...i_{r}}(N<\ell), \notag \\
&& N=0,1,...,\ell, \enskip r=2,4,6,...,2\nu \label{eq.pro6.12} \\
L_{(2N-1)(i_{1}...i_{r-1}|i_{r})} &=& (r+1) L_{(2N-1)i_{1}...i_{r}i_{r+1}}Q^{i_{r+1}} -2NL_{(2N)i_{1}...i_{r}}, \notag \\
&& N=1,2,...,\ell, \enskip r=3,5,7,...,2\nu-1, \enskip \ell>0, \enskip \nu>1. \label{eq.pro6.13}
\end{eqnarray}
\end{proposition}
Therefore, concerning the higher order FIs of a general autonomous dynamical system, we have the following general result.
\begin{proposition}
\label{pro.thm1.6} The autonomous (non-Riemannian in general) dynamical system (\ref{eq.tk1}) admits three independent $m$th-order FIs --autonomous or time-dependent-- with $m>0$, which are given by the following formulae: \newline
i. For $m=1$, we have the LFIs (\ref{eq.pro1.5}), (\ref{eq.pro1.10}) and (\ref{eq.pro1.7}). \newline
ii. For $m=2\nu$ with $\nu>0$, we have the FIs (\ref{eq.pro2.7}), (\ref{eq.pro5.1}) and (\ref{eq.pro5.7}). \newline
iii. For $m=2\nu+1$ with $\nu>0$, we have the FIs (\ref{eq.pro2.7}), (\ref{eq.pro6.1}) and (\ref{eq.pro6.8}).
\end{proposition}
In the following sections, we give applications of the above general results.
\section{Application 1: A family of 2d non-Riemannian autonomous dynamical systems}
\label{sec.nonR}
We consider two-dimensional (2d) autonomous dynamical systems of the form:
\begin{eqnarray}
\ddot{x} &=& -Q^{1}(x,y) -\Gamma^{1}_{11}(x,y)\dot{x}^{2} \label{eq.nonR1.1} \\
\ddot{y} &=& -Q^{2}(x,y) -\Gamma^{2}_{22}(x,y)\dot{y}^{2} \label{eq.nonR1.2}
\end{eqnarray}
where $q^{a}=(x,y)$ are the generalized coordinates, $Q^{1}, Q^{2}$ are the components of the generalized forces, and $\Gamma^{1}_{11}, \Gamma^{2}_{22}$ are the non-zero symmetric connection coefficients of the system (all the other connection coefficients vanish).
We will find conditions under which the dynamical system (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) is non-Riemannian. In that case, a kinetic metric (i.e. a regular Lagrangian or a kinetic energy) cannot be defined, and standard methods (e.g. Noether's theorem) for the determination of FIs cannot be applied. Therefore, Theorem \ref{thm1} is the only systematic method available for determining the FIs of the system (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}).
\subsection{Conditions for a non-Riemannian connection}
It is well-known that the symmetric connection coefficients $\Gamma^{a}_{bc}$ define a Riemannian connection iff there exists a (kinetic) metric $\gamma_{ab}$ with zero metricity, that is, $\gamma_{ab|c}=0$. Then,
\[
\Gamma^{a}_{bc}= \frac{1}{2}\gamma^{ad} \left( \gamma_{bd,c} +\gamma_{cd,b} -\gamma_{bc,d} \right).
\]
Question: \emph{In which cases is the dynamical system (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) defined over a Riemannian configuration space or, equivalently, when is the associated symmetric connection Riemannian?}
In order to answer this question, we assume that there exists a kinetic metric $\gamma_{ab}(x,y)$ such that
\begin{equation}
\gamma_{ab|c}=0 \implies \gamma_{ab,c} -\gamma_{db}\Gamma^{d}_{ac} -\gamma_{ad}\Gamma^{d}_{bc}= 0. \label{eq.nonR2}
\end{equation}
Setting $\Gamma^{1}_{11}\equiv f_{1}(x,y)$ and $\Gamma^{2}_{22}\equiv f_{2}(x,y)$, condition (\ref{eq.nonR2}) gives the following system of PDEs:
\[
\begin{cases}
\gamma_{11,x}= 2\gamma_{11}f_{1}, \enskip \gamma_{12,x}= \gamma_{12}f_{1}, \enskip \gamma_{22,x}=0 \implies \gamma_{22}= \gamma_{22}(y) \\
\gamma_{22,y}= 2\gamma_{22}f_{2}, \enskip \gamma_{12,y}= \gamma_{12}f_{2}, \enskip \gamma_{11,y}=0 \implies \gamma_{11}= \gamma_{11}(x)
\end{cases} \implies
\]
\begin{eqnarray}
\frac{d\gamma_{11}}{dx}&=& 2\gamma_{11}(x)f_{1} \label{eq.nonR3.1} \\
\frac{d\gamma_{22}}{dy}&=& 2\gamma_{22}(y)f_{2} \label{eq.nonR3.2} \\
\gamma_{12,x}&=& \gamma_{12}(x,y)f_{1} \label{eq.nonR3.3} \\
\gamma_{12,y}&=& \gamma_{12}(x,y)f_{2}. \label{eq.nonR3.4}
\end{eqnarray}
In order to have a well-defined metric $\gamma_{ab}$ (i.e. the inverse $\gamma^{ab}$ exists), it must also hold that
\begin{equation}
\det\left[\gamma_{ab}\right]\neq0 \implies \gamma_{11}\gamma_{22} -\gamma_{12}^{2} \neq0. \label{eq.nonR4}
\end{equation}
We consider the following cases:
1) Case $\gamma_{12}\neq0$.
1.1. Subcase $\gamma_{11}=\gamma_{22}=0$.
Conditions (\ref{eq.nonR3.1}), (\ref{eq.nonR3.2}) and (\ref{eq.nonR4}) are satisfied identically.
The remaining conditions (\ref{eq.nonR3.3}) and (\ref{eq.nonR3.4}) give the Riemannian connection coefficients:
\[
f_{1}= \frac{F_{,x}}{F}, \enskip f_{2}= \frac{F_{,y}}{F}
\]
where $\gamma_{12}\equiv F(x,y)$.
The associated kinetic metric is $\gamma_{ab}= F(x,y)
\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)$. The constrained geodesics of this metric have been discussed for various cases of the function $F(x,y)$ in \cite{Dimakis 2019} and, more recently, in \cite{Mits Constr2022}.
1.2. Subcase $\gamma_{11}\equiv f(x)\neq0$ and $\gamma_{22}=0$.
Conditions (\ref{eq.nonR3.2}) and (\ref{eq.nonR4}) are satisfied identically.
Condition (\ref{eq.nonR3.1}) implies that $f_{1}= \frac{f_{,x}}{2f}$. Replacing $f_{1}$ in (\ref{eq.nonR3.3}), we find that $\gamma_{12}= h(y)\sqrt{f(x)}$, which, when replaced into the remaining condition (\ref{eq.nonR3.4}), gives $f_{2}= \frac{h_{,y}}{h}$.
The associated kinetic metric is $\gamma_{ab}=
\left(
\begin{array}{cc}
f(x) & h(y)\sqrt{f} \\
h(y)\sqrt{f} & 0 \\
\end{array}
\right)$.
1.3. Subcase $\gamma_{22}\equiv h(y)\neq0$ and $\gamma_{11}=0$.
Conditions (\ref{eq.nonR3.1}) and (\ref{eq.nonR4}) are satisfied identically.
Condition (\ref{eq.nonR3.2}) implies that $f_{2}= \frac{h_{,y}}{2h}$. Replacing $f_{2}$ in (\ref{eq.nonR3.4}), we find that $\gamma_{12}= f(x)\sqrt{h(y)}$, which, when replaced into the remaining condition (\ref{eq.nonR3.3}), gives $f_{1}= \frac{f_{,x}}{f}$.
The associated kinetic metric is $\gamma_{ab}=
\left(
\begin{array}{cc}
0 & f(x)\sqrt{h} \\
f(x)\sqrt{h} & h(y) \\
\end{array}
\right)$.
1.4. Subcase $\gamma_{11}\equiv f(x), \gamma_{22}\equiv h(y)$, and $f(x)h(y)\neq0$.
Conditions (\ref{eq.nonR3.1}) and (\ref{eq.nonR3.2}) imply that
$f_{1}=\frac{f_{,x}}{2f}$ and $f_{2}= \frac{h_{,y}}{2h}$, respectively. Replacing $f_{1}, f_{2}$ in the remaining conditions (\ref{eq.nonR3.3}), (\ref{eq.nonR3.4}), we find that $\gamma_{12}= c_{0}\sqrt{f(x)h(y)}$ where $c_{0}$ is an arbitrary non-zero constant.
The associated kinetic metric is $\gamma_{ab}=
\left(
\begin{array}{cc}
f(x) & c_{0}\sqrt{fh} \\
c_{0}\sqrt{fh} & h(y) \\
\end{array}
\right)$.
From the condition (\ref{eq.nonR4}), we find that $c_{0} \neq \pm 1$.
2) Case $\gamma_{12}=0$.
Condition (\ref{eq.nonR4}) implies that $\gamma_{11}\gamma_{22}\neq0$.
We set $\gamma_{11}\equiv f(x)$ and $\gamma_{22}\equiv h(y)$.
Conditions (\ref{eq.nonR3.3}) and (\ref{eq.nonR3.4}) are satisfied identically.
The remaining conditions (\ref{eq.nonR3.1}) and (\ref{eq.nonR3.2}) give $f_{1}=\frac{f_{,x}}{2f}$ and $f_{2}= \frac{h_{,y}}{2h}$, respectively.
The associated kinetic metric is $\gamma_{ab}=
\left(
\begin{array}{cc}
f(x) & 0 \\
0 & h(y) \\
\end{array}
\right)$.
\bigskip
We collect the above results in the following Proposition.
\begin{proposition}
\label{pro.nonR1} Autonomous dynamical systems of the form (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) admit a Riemannian geometry (i.e. the quantities $\Gamma^{a}_{bc}$ are Riemannian connection coefficients defined by a kinetic metric $\gamma_{ab}$) in the following cases: \newline
1) For $\Gamma^{1}_{11}= \frac{F_{,x}}{F}$ and $\Gamma^{2}_{22}= \frac{F_{,y}}{F}$, where $F(x,y)$ is a non-zero arbitrary smooth function. The associated kinetic metric is $\gamma_{ab}= F(x,y)
\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)$. \newline
2) For $\Gamma^{1}_{11}= \frac{f_{,x}}{2f}$ and $\Gamma^{2}_{22}= \frac{h_{,y}}{h}$, where $f(x)$ and $h(y)$ are non-zero arbitrary smooth functions. Then, $\gamma_{ab}=
\left(
\begin{array}{cc}
f(x) & h(y)\sqrt{f} \\
h(y)\sqrt{f} & 0 \\
\end{array}
\right)$. \newline
3) For $\Gamma^{1}_{11}= \frac{f_{,x}}{f}$ and $\Gamma^{2}_{22}= \frac{h_{,y}}{2h}$. Then, $\gamma_{ab}=
\left(
\begin{array}{cc}
0 & f(x)\sqrt{h} \\
f(x)\sqrt{h} & h(y) \\
\end{array}
\right)$. \newline
4) For $\Gamma^{1}_{11}=\frac{f_{,x}}{2f}$ and $\Gamma^{2}_{22}= \frac{h_{,y}}{2h}$. Then, $\gamma_{ab}=
\left(
\begin{array}{cc}
f(x) & c_{0}\sqrt{fh} \\
c_{0}\sqrt{fh} & h(y) \\
\end{array}
\right)$ where the (possibly zero) constant $c_{0} \neq \pm 1$.
\end{proposition}
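As an illustrative sanity check (not part of the derivation above), the following SymPy sketch verifies the zero-metricity condition (\ref{eq.nonR2}) for case 4) of the proposition. The sample functions $f=(1+x^{2})^{2}$ and $h=(1+y^{2})^{2}$ are our own choice, picked so that $\sqrt{fh}$ is rational:

```python
# Illustrative check of case 4) of the proposition: the connection
# Gamma^1_11 = f_{,x}/(2f), Gamma^2_22 = h_{,y}/(2h) has zero metricity
# for the stated kinetic metric. Sample choice: f=(1+x^2)^2, h=(1+y^2)^2.
import sympy as sp

x, y, c0 = sp.symbols('x y c0')
f = (1 + x**2)**2
h = (1 + y**2)**2
sfh = (1 + x**2)*(1 + y**2)          # equals sqrt(f*h) for this choice

g = sp.Matrix([[f, c0*sfh],
               [c0*sfh, h]])

coords = [x, y]
# non-zero symmetric connection coefficients of case 4)
Gamma = {(0, 0, 0): sp.diff(f, x)/(2*f),   # Gamma^1_{11}
         (1, 1, 1): sp.diff(h, y)/(2*h)}   # Gamma^2_{22}
G = lambda a, b, c: Gamma.get((a, b, c), 0)

def metricity(a, b, c):
    """gamma_{ab|c} = gamma_{ab,c} - gamma_{db} Gamma^d_{ac} - gamma_{ad} Gamma^d_{bc}"""
    expr = sp.diff(g[a, b], coords[c])
    expr -= sum(g[d, b]*G(d, a, c) + g[a, d]*G(d, b, c) for d in range(2))
    return sp.simplify(expr)

print(all(metricity(a, b, c) == 0
          for a in range(2) for b in range(2) for c in range(2)))  # True
```

The same computation with generic positive functions $f(x)$, $h(y)$ gives the identical result.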
From Proposition \ref{pro.nonR1}, we deduce the following proposition.
\begin{proposition}
\label{pro.nonR2} The dynamical system (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) is non-Riemannian if the non-zero symmetric connection coefficients $\Gamma^{1}_{11}(x,y)$ and $\Gamma^{2}_{22}(x,y)$ satisfy the condition
\begin{equation}
\Gamma^{1}_{11,y} \neq \Gamma^{2}_{22,x}. \label{eq.nonR5}
\end{equation}
\end{proposition}
Proof:\newline
- Case 1) of Proposition \ref{pro.nonR1} implies that $\Gamma^{1}_{11}= (\ln F)_{,x}$ and $\Gamma^{2}_{22}= (\ln F)_{,y}$. Taking the integrability condition $(\ln F)_{,xy}= (\ln F)_{,yx}$, we find $\Gamma^{1}_{11,y}= \Gamma^{2}_{22,x}$. Therefore, for a non-Riemannian connection, the condition (\ref{eq.nonR5}) is required. \newline
- For the remaining cases 2) - 4) of Proposition \ref{pro.nonR1}, we observe that $\Gamma^{1}_{11}=\Gamma^{1}_{11}(x)$ and $\Gamma^{2}_{22}=\Gamma^{2}_{22}(y)$. This implies that $\Gamma^{1}_{11,y}= \Gamma^{2}_{22,x}= 0$, which is a special case of the equality $\Gamma^{1}_{11,y}= \Gamma^{2}_{22,x}$. Therefore, for a non-Riemannian connection, we find again the condition (\ref{eq.nonR5}).
\subsection{LFIs for the non-Riemannian dynamical system (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2})}
We assume that the connection of the autonomous dynamical system (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) is non-Riemannian; therefore, the condition (\ref{eq.nonR5}) applies.
Applying Theorem \ref{thm1} for $m=1$ (LFIs) and $n=0$ (zero degree of time-dependence), we find for the considered dynamical system the LFI
\begin{equation}
I= L_{1}(x,y)\dot{x} +L_{2}(x,y)\dot{y} +s_{0}t \label{eq.nonR6.1}
\end{equation}
where $L_{a}(x,y)$ is a generalized KV of the connection $\Gamma^{a}_{bc}(x,y)$ defined by the dynamical equations (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) and $s_{0}$ is a constant defined by the condition
\begin{equation}
L_{1}Q^{1} +L_{2}Q^{2}= s_{0}. \label{eq.nonR6.2}
\end{equation}
The generalized KV condition $L_{(a|b)}=0$ implies the following system of PDEs:
\begin{eqnarray}
L_{1,x} -L_{1}\Gamma^{1}_{11} &=& 0 \label{eq.nonR6.3} \\
L_{2,y} -L_{2}\Gamma^{2}_{22} &=& 0 \label{eq.nonR6.4} \\
L_{1,y} +L_{2,x} &=& 0. \label{eq.nonR6.5}
\end{eqnarray}
We have an overdetermined system of four PDEs (\ref{eq.nonR6.2}) - (\ref{eq.nonR6.5}) in the six unknown functions $L_{a}(x,y)$, $Q^{a}(x,y)$, $\Gamma^{1}_{11}(x,y)$, $\Gamma^{2}_{22}(x,y)$ and one free parameter $s_{0}$. Therefore, in order to solve it, we should fix either the dynamics of the system (i.e. the generalized forces $Q^{a}$) or the non-Riemannian geometry of the system (i.e. the connection coefficients, subject to the condition $\Gamma^{1}_{11,y} \neq \Gamma^{2}_{22,x}$).
\subsubsection{Two linearly coupled harmonic oscillators with a non-Riemannian quadratic damping term}
As a first application, we fix the generalized forces $Q^{a}$. We assume that
\begin{equation}
Q^{a}=
\left(
\begin{array}{c}
kx -py \\
ky +px \\
\end{array}
\right) \label{eq.nonR6.6}
\end{equation}
where $k, p$ are arbitrary non-zero constants.
For the choice (\ref{eq.nonR6.6}), the dynamical system (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) becomes:
\begin{eqnarray}
\ddot{x} &=& -kx +py -\Gamma^{1}_{11}(x,y)\dot{x}^{2} \label{eq.nonR7.1} \\
\ddot{y} &=& -ky -px -\Gamma^{2}_{22}(x,y)\dot{y}^{2} \label{eq.nonR7.2}
\end{eqnarray}
which describes two linearly coupled harmonic oscillators with a non-Riemannian (i.e. $\Gamma^{1}_{11,y} \neq \Gamma^{2}_{22,x}$) quadratic damping term.
Replacing $Q^{a}$ from (\ref{eq.nonR6.6}), equation (\ref{eq.nonR6.2}) gives
\begin{equation}
L_{1}= -\frac{ky +px}{kx-py}L_{2} +\frac{s_{0}}{kx-py}= \frac{1}{py -kx} \left[ (ky+px)L_{2} -s_{0} \right] \label{eq.nonR8}
\end{equation}
which when substituted into (\ref{eq.nonR6.5}) implies that
\begin{equation}
L_{2}= (py -kx) F_{1}\left( p(y^{2}-x^{2}) -2kxy\right) -\frac{s_{0}x}{p(y^{2}-x^{2}) -2kxy} \label{eq.nonR9}
\end{equation}
where $F_{1}\left( p(y^{2}-x^{2}) -2kxy\right)$ is an arbitrary smooth function of its argument.
Replacing (\ref{eq.nonR9}) in (\ref{eq.nonR8}), we find
\begin{equation}
L_{1}= (ky+px) F_{1}\left( p(y^{2}-x^{2}) -2kxy\right) +\frac{s_{0}x(ky+px)}{(kx -py)\left[ p(y^{2}-x^{2}) -2kxy\right]} +\frac{s_{0}}{kx-py}. \label{eq.nonR10}
\end{equation}
Using (\ref{eq.nonR9}) and (\ref{eq.nonR10}), the remaining PDEs (\ref{eq.nonR6.3}) and (\ref{eq.nonR6.4}) determine the connection coefficients as follows:
\begin{equation}
\Gamma^{1}_{11}= \frac{L_{1,x}}{L_{1}}, \enskip \Gamma^{2}_{22}= \frac{L_{2,y}}{L_{2}}. \label{eq.nonR11}
\end{equation}
In order to have a non-Riemannian connection, the condition (\ref{eq.nonR5}) must be satisfied. Therefore,
\begin{equation}
\Gamma^{1}_{11,y} \neq \Gamma^{2}_{22,x} \implies \left( \ln |L_{1}| \right)_{,xy} \neq \left( \ln |L_{2}| \right)_{,yx} \implies \left( \ln \left| \frac{L_{1}}{L_{2}} \right| \right)_{,xy} \neq 0 \label{eq.nonR11.0}
\end{equation}
which leads to restrictions among the parameters $k, p, s_{0}$ and the function $F_{1}$.
We note that the vector $L_{a}$ whose components are given by the relations (\ref{eq.nonR9}) and (\ref{eq.nonR10}) is a generalized KV of the connection (\ref{eq.nonR11}).
Finally, replacing (\ref{eq.nonR11}) in the dynamical equations (\ref{eq.nonR7.1}) - (\ref{eq.nonR7.2}), we find the family of dynamical systems:
\begin{eqnarray}
\ddot{x} &=& -kx +py -\frac{L_{1,x}}{L_{1}}\dot{x}^{2} \label{eq.nonR11.1} \\
\ddot{y} &=& -ky -px -\frac{L_{2,y}}{L_{2}}\dot{y}^{2} \label{eq.nonR11.2}
\end{eqnarray}
parameterized by the constant $s_{0}$ and the function $F_{1}$, which admits the time-dependent LFI (\ref{eq.nonR6.1}).
In order to get a specific example, we fix the parameters $s_{0}$ and $F_{1}$ as follows:
- Case $s_{0}=0$ and $F_{1}=1$.
Then, equations (\ref{eq.nonR9}) and (\ref{eq.nonR10}) give the generalized KV
\begin{equation}
L_{a}=
\left(
\begin{array}{c}
ky+px \\
py-kx \\
\end{array}
\right) \label{eq.nonR12}
\end{equation}
which when replaced into the relations (\ref{eq.nonR11}) determines the connection coefficients:
\begin{equation}
\Gamma^{1}_{11}= \frac{p}{ky+px}, \enskip \Gamma^{2}_{22}= \frac{p}{py-kx}. \label{eq.nonR13}
\end{equation}
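The following short SymPy computation (an illustrative check, not part of the text) confirms that the vector (\ref{eq.nonR12}) satisfies the generalized KV conditions (\ref{eq.nonR6.3}) - (\ref{eq.nonR6.5}) for the connection coefficients (\ref{eq.nonR13}):

```python
# Check that L_a = (ky+px, py-kx) is a generalized KV of the connection
# Gamma^1_11 = p/(ky+px), Gamma^2_22 = p/(py-kx).
import sympy as sp

x, y, k, p = sp.symbols('x y k p')
L1, L2 = k*y + p*x, p*y - k*x          # the generalized KV (eq. nonR12)
G111 = p/(k*y + p*x)                   # Gamma^1_{11} of (eq. nonR13)
G222 = p/(p*y - k*x)                   # Gamma^2_{22} of (eq. nonR13)

cond1 = sp.simplify(sp.diff(L1, x) - L1*G111)          # condition (nonR6.3)
cond2 = sp.simplify(sp.diff(L2, y) - L2*G222)          # condition (nonR6.4)
cond3 = sp.simplify(sp.diff(L1, y) + sp.diff(L2, x))   # condition (nonR6.5)
print(cond1, cond2, cond3)  # 0 0 0
```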
In order to have a non-Riemannian connection, the quantities (\ref{eq.nonR13}) must satisfy the condition (\ref{eq.nonR11.0}). We compute:
\[
\Gamma^{1}_{11,y}= -\frac{kp}{(ky+px)^{2}} \neq \Gamma^{2}_{22,x}= \frac{kp}{(py-kx)^{2}} \implies
\]
\[
(py-kx)^{2} +(ky+px)^{2} \neq0 \implies (k^{2}+p^{2})(x^{2}+y^{2}) \neq0 \implies k\neq \pm ip.
\]
Therefore, \emph{the connection (\ref{eq.nonR13}) is non-Riemannian only when $k\neq \pm ip$}.
Using (\ref{eq.nonR12}), the dynamical system (\ref{eq.nonR11.1}) - (\ref{eq.nonR11.2}) becomes:
\begin{eqnarray}
\ddot{x} &=& -kx +py -\frac{p}{ky+px}\dot{x}^{2} \label{eq.nonR14.1} \\
\ddot{y} &=& -ky -px -\frac{p}{py-kx}\dot{y}^{2} \label{eq.nonR14.2}
\end{eqnarray}
and the associated autonomous LFI (\ref{eq.nonR6.1}) is
\begin{equation}
I_{1}= (ky +px)\dot{x} +(py-kx)\dot{y}. \label{eq.nonR14.3}
\end{equation}
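As a sanity check, one may verify symbolically that the total time derivative of $I_{1}$ vanishes along the flow of (\ref{eq.nonR14.1}) - (\ref{eq.nonR14.2}). A minimal SymPy sketch (illustrative only):

```python
# d(I_1)/dt = vx dI/dx + vy dI/dy + ax dI/dvx + ay dI/dvy along the flow
# of (eq. nonR14.1)-(eq. nonR14.2); it should vanish identically.
import sympy as sp

x, y, vx, vy, k, p = sp.symbols('x y vx vy k p')
ax = -k*x + p*y - p/(k*y + p*x)*vx**2   # \ddot{x}
ay = -k*y - p*x - p/(p*y - k*x)*vy**2   # \ddot{y}
I1 = (k*y + p*x)*vx + (p*y - k*x)*vy    # the LFI (eq. nonR14.3)

dI1 = (sp.diff(I1, x)*vx + sp.diff(I1, y)*vy
       + sp.diff(I1, vx)*ax + sp.diff(I1, vy)*ay)
print(sp.simplify(dI1))  # 0
```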
\bigskip
\underline{Remark:} In the case that $k=\pm ip$, the connection coefficients (\ref{eq.nonR13}) reduce to the Riemannian connection:
\[
\Gamma^{1}_{11\pm}= \frac{1}{x\pm iy}, \enskip \Gamma^{2}_{22\pm}= \frac{1}{y\mp ix}
\]
which, according to Proposition \ref{pro.nonR1}, is associated with a metric of the form 1). Indeed, for an arbitrary function $F(x,y)$, we have:
\[
\begin{cases}
(\ln F_{\pm})_{,x}= \frac{1}{x\pm iy} \\
(\ln F_{\pm})_{,y}= \frac{1}{y\mp ix}
\end{cases} \implies
F_{+}= \frac{x^{2}+y^{2}}{y+ix}, \enskip F_{-}=y+ix
\]
and the corresponding metric is $\gamma_{ab}= F_{\pm}
\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)$.
\section{Application 2: The QFIs of a non-Riemannian dynamical system}
\label{sec.andro}
Consider the dynamical system\footnote{This dynamical system has been suggested to us by Dr. Andronikos Paliathanasis.}:
\begin{eqnarray}
\ddot{u} &=& -\frac{8\beta }{u^{3}}\left( u\dot{u}\dot{w} -w\dot{u}^{2}\right) -\frac{1}{u^{2}} \label{eq.aham8a} \\
\ddot{w} &=& -\frac{4\beta }{u^{3}}\left( u\dot{w}^{2} -4w\dot{u}\dot{w}\right) +\frac{2w}{u^{3}} \label{eq.aham8b}
\end{eqnarray}
where $\beta$ is an arbitrary real constant. This is an autonomous holonomic system of the form (\ref{eq.tk1}) with variables
\[
q^{a}=
\left(
\begin{array}{c}
u \\
w \\
\end{array}
\right), \enskip
Q^{a}= \frac{1}{u^{2}}
\left(
\begin{array}{c}
1 \\
-\frac{2w}{u} \\
\end{array}
\right).
\]
The symmetric connection coefficients are read off from the dynamical equations:
\begin{equation}
\Gamma^{1}_{22}=\Gamma^{2}_{11}=0, \enskip \Gamma^{1}_{11}= \Gamma^{2}_{12}= -8\beta\frac{w}{u^{3}}, \enskip \Gamma^{1}_{12}= \Gamma^{2}_{22}= \frac{4\beta}{u^{2}}. \label{eq.conne}
\end{equation}
The non-zero components of the curvature tensor $R^{a}{}_{bcd}= \Gamma^{a}_{bd,c} -\Gamma^{a}_{bc,d} +\Gamma^{a}_{sc}\Gamma^{s}_{bd} -\Gamma^{a}_{sd}\Gamma^{s}_{bc}$ are:
\[
R^{1}{}_{112}= R^{2}{}_{221} = -R^{2}{}_{212} = -R^{1}{}_{121}= -\frac{32\beta^{2}w}{u^{5}}, \enskip R^{2}{}_{112}= -R^{2}{}_{121}= \frac{24\beta w}{u^{4}}.
\]
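These components can be reproduced with a short symbolic computation; the following SymPy sketch (illustrative only) evaluates the curvature tensor of the connection (\ref{eq.conne}) directly from the definition:

```python
# Curvature of the connection (eq. conne):
# R^a_{bcd} = Gamma^a_{bd,c} - Gamma^a_{bc,d}
#           + Gamma^a_{sc} Gamma^s_{bd} - Gamma^a_{sd} Gamma^s_{bc}
import sympy as sp

u, w, b = sp.symbols('u w beta')
coords = [u, w]
# Gamma[a][b][c] = Gamma^a_{bc}, coordinates (q^1, q^2) = (u, w)
Gamma = [[[-8*b*w/u**3, 4*b/u**2], [4*b/u**2, 0]],
         [[0, -8*b*w/u**3], [-8*b*w/u**3, 4*b/u**2]]]

def Riem(a, bb, c, d):
    expr = sp.diff(Gamma[a][bb][d], coords[c]) - sp.diff(Gamma[a][bb][c], coords[d])
    expr += sum(Gamma[a][s][c]*Gamma[s][bb][d] - Gamma[a][s][d]*Gamma[s][bb][c]
                for s in range(2))
    return sp.simplify(expr)

print(Riem(0, 0, 0, 1))  # R^1_{112} = -32 beta^2 w / u^5
print(Riem(1, 0, 0, 1))  # R^2_{112} =  24 beta w / u^4
```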
Solving the generalized KT condition $C_{(ab|c)}=0$, we find that the connection (\ref{eq.conne}) admits only the second order generalized KT
\begin{equation}
C_{ab}= k e^{\frac{12\beta w}{u^{2}}}
\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right) \label{eq.KT1}
\end{equation}
where $k$ is an arbitrary constant.
Solving the generalized Killing vector (KV) condition $L_{(a|b)}=0$, we find $L_{a}=0$; therefore, generalized KVs do not exist.
Moreover, it can be shown that non-zero vectors $B_{a}$ generating reducible generalized KTs of the form $B_{(a|b)}$ do not exist either.
Applying Theorem \ref{thm1} for $m=2$, we find that the system admits only the QFI
\begin{equation}
I= e^{\frac{12\beta w}{u^{2}}} \left( \dot{u}\dot{w} +\frac{1}{12\beta} \right). \label{eq.qint1.5}
\end{equation}
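A direct symbolic check (illustrative, not part of the proof) confirms that (\ref{eq.qint1.5}) is conserved along the flow of (\ref{eq.aham8a}) - (\ref{eq.aham8b}):

```python
# Conservation of the QFI I = exp(12 beta w / u^2) (du/dt dw/dt + 1/(12 beta))
# along the dynamical equations (eq. aham8a)-(eq. aham8b).
import sympy as sp

u, w, vu, vw, b = sp.symbols('u w vu vw beta')
au = -8*b/u**3*(u*vu*vw - w*vu**2) - 1/u**2       # \ddot{u}
aw = -4*b/u**3*(u*vw**2 - 4*w*vu*vw) + 2*w/u**3   # \ddot{w}
I = sp.exp(12*b*w/u**2)*(vu*vw + 1/(12*b))

dI = (sp.diff(I, u)*vu + sp.diff(I, w)*vw
      + sp.diff(I, vu)*au + sp.diff(I, vw)*aw)
print(sp.simplify(dI))  # 0
```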
To prove that the given system is integrable, one needs one more independent autonomous FI of higher order in involution.
\section{Application 3: A new superintegrable potential which admits time-dependent QFIs}
\label{sec.appl2}
In \cite{Evans 1990}, using the separability of the corresponding Hamilton-Jacobi equation in more than two coordinate systems, all minimally and maximally superintegrable potentials in the Euclidean space $E^{3}$ that admit autonomous QFIs were determined. We extend this result to the case where time-dependent FIs are also considered.
Applying Theorem \ref{thm1} for $m=2$ (QFIs), $q^{a}=(x,y,z)$, $\Gamma^{a}_{bc}=0$ and $Q^{a}=V^{,a}$, where $V(x,y,z)$ denotes the potential, we find the new maximally superintegrable potential in $E^{3}$
\begin{equation}
V(x,y,z)= -\frac{\lambda^{2}}{2} R^{2} +\frac{kx}{y^{2}R} +\frac{c_{1}}{y^{2}} -\frac{\lambda^{2}}{8}z^{2} +\frac{c_{2}}{z^{2}} \label{eq.evans.1}
\end{equation}
where $\lambda\neq0, k, c_{1}, c_{2}$ are arbitrary constants, and $R= \sqrt{x^{2} +y^{2}}$.
The potential (\ref{eq.evans.1}) admits the following independent (autonomous and time-dependent) QFIs:
\begin{eqnarray}
I_{1}&=& \frac{1}{2} \left( \dot{x}^{2} +\dot{y}^{2} +\dot{z}^{2} \right) -\frac{\lambda^{2}}{2} R^{2} +\frac{kx}{y^{2}R} +\frac{c_{1}}{y^{2}} -\frac{\lambda^{2}}{8}z^{2} +\frac{c_{2}}{z^{2}} \label{eq.evans.2.1} \\
I_{2}&=& \frac{1}{2}M_{3}^{2} + \frac{(kR +c_{1}x)x}{y^{2}} \label{eq.evans.2.2} \\
I_{3}&=& \frac{1}{2}\dot{z}^{2} -\frac{\lambda^{2}}{8}z^{2} +\frac{c_{2}}{z^{2}} \label{eq.evans.2.3} \\
I_{4}&=& e^{\lambda t} \left[ M_{3} (\dot{y} -\lambda y) +\frac{2c_{1}x}{y^{2}} +\frac{k(y^{2} +2x^{2})}{y^{2}R} \right] \label{eq.evans.2.4} \\
I_{5}&=& e^{\lambda t} \left[ \left( \dot{z} -\frac{\lambda}{2}z \right)^{2} +\frac{2c_{2}}{z^{2}} \right]. \label{eq.evans.2.5}
\end{eqnarray}
The QFI $I_{1}$ is the Hamiltonian of the system and the vector $M_{i}= \left( y\dot{z} -z\dot{y}, z\dot{x} -x\dot{z}, x\dot{y} -y\dot{x} \right)$ with $i=1,2,3$ is the angular momentum.
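All five quantities can be checked to be FIs by direct symbolic differentiation along the equations of motion $\ddot{q}^{a}=-V^{,a}$. The sketch below (variable names ours; \texttt{xd} stands for $\dot{x}$, etc.) computes the total time derivative $dI/dt=\partial_{t}I+\dot{q}^{a}\partial_{q^{a}}I+\ddot{q}^{a}\partial_{\dot{q}^{a}}I$ of each of $I_{1},\dots,I_{5}$.

```python
# Sketch: verify that I_1 .. I_5 are first integrals of the potential
# (evans.1), i.e. dI/dt = 0 along xdd = -V_{,x} etc. Names are ours.
import sympy as sp

t, lam, k, c1, c2 = sp.symbols('t lambda k c_1 c_2', positive=True)
x, y, z = sp.symbols('x y z', positive=True)
xd, yd, zd = sp.symbols('xd yd zd', real=True)
R = sp.sqrt(x**2 + y**2)

V = -lam**2/2*R**2 + k*x/(y**2*R) + c1/y**2 - lam**2/8*z**2 + c2/z**2
acc = {xd: -sp.diff(V, x), yd: -sp.diff(V, y), zd: -sp.diff(V, z)}

def dIdt(I):
    """Total time derivative of I along the equations of motion."""
    expr = sp.diff(I, t)
    for pos, vel in ((x, xd), (y, yd), (z, zd)):
        expr += vel*sp.diff(I, pos) + acc[vel]*sp.diff(I, vel)
    return sp.simplify(expr)

M3 = x*yd - y*xd   # third component of the angular momentum
I1 = (xd**2 + yd**2 + zd**2)/2 + V
I2 = M3**2/2 + (k*R + c1*x)*x/y**2
I3 = zd**2/2 - lam**2/8*z**2 + c2/z**2
I4 = sp.exp(lam*t)*(M3*(yd - lam*y) + 2*c1*x/y**2
                    + k*(y**2 + 2*x**2)/(y**2*R))
I5 = sp.exp(lam*t)*((zd - lam*z/2)**2 + 2*c2/z**2)

checks = [dIdt(I) for I in (I1, I2, I3, I4, I5)]
```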
\section{Application 4: A new superintegrable separable potential which admits an autonomous CFI}
\label{sec.appl3}
It is well-known that separable Newtonian potentials of the form $V(x,y)= F_{1}(x) +F_{2}(y)$ admit the QFIs $J_{1}= \frac{1}{2}\dot{x}^{2} +F_{1}(x)$ and $J_{2}= \frac{1}{2}\dot{y}^{2} +F_{2}(y)$, where $F_{1}$ and $F_{2}$ are arbitrary smooth functions of their arguments. The question is whether there exist functions $F_{1}$ and $F_{2}$ for which the corresponding potential $V(x,y)$ is superintegrable.
A partial answer to this problem has been given in \cite{Gravel 2004} by considering autonomous CFIs as the third FI. One of the third order superintegrable potentials found in \cite{Gravel 2004} is the
\begin{equation}
V(x,y) = c_{1}y^{2} +F(x) \label{eq.gravel.1}
\end{equation}
where $c_{1}$ is an arbitrary non-zero constant and $F(x)$ is an arbitrary smooth function satisfying the condition
\begin{equation}
k_{2}x^{2} +4k_{1}^{2} +\left( 9F -c_{1}x^{2} \right) \left( F -c_{1}x^{2} \right)^{3} -4k_{1} \left( F -c_{1}x^{2} \right) \left( 3F +c_{1}x^{2} \right)= 0 \label{eq.gravel.1.2}
\end{equation}
where $k_{1}$ and $k_{2}$ are arbitrary constants.
Using Theorem \ref{thm1}, we generalize the above result and determine a class of superintegrable potentials which contains the potential (\ref{eq.gravel.1}) as a special case.
We apply Theorem \ref{thm1} for $m=3$ (CFIs), $q^{a}=(x,y)$, $\Gamma^{a}_{bc}=0$ and $Q^{a}= V^{,a}$, where $V(x,y)$ denotes potentials of the form $V= F_{1}(x) +F_{2}(y)$. We find that the potential (\ref{eq.gravel.1}) is superintegrable due to the three independent autonomous FIs:
\begin{eqnarray}
I_{1}&=& \frac{1}{2}\dot{x}^{2} +F(x) \label{eq.gravel.3.1} \\
I_{2}&=& \frac{1}{2}\dot{y}^{2} +c_{1}y^{2} \label{eq.gravel.3.2} \\
I_{3}&=& L\dot{x}^{2} -\left( 3yF -c_{1}x^{2}y +k_{3}y \right) \dot{x} +\frac{F'}{2c_{1}}\left( 3F -c_{1}x^{2} +k_{3} \right)\dot{y} \label{eq.gravel.3.3}
\end{eqnarray}
where $L\equiv x\dot{y} -y\dot{x}$ is the angular momentum, $F'\equiv \frac{dF}{dx}$ and the function $F(x)$ satisfies the condition
\begin{eqnarray}
0&=& k_{2}x^{2} +4k_{1}^{2} +\left( 9F -c_{1}x^{2} \right) \left( F -c_{1}x^{2} \right)^{3} -4k_{1} \left( F -c_{1}x^{2} \right) \left( 3F +c_{1}x^{2} \right) + \notag \\
&& +4k_{3} \left( 3F -c_{1}x^{2} \right) \left( F -c_{1}x^{2} \right)^{2} +4k_{3}^{2} \left( F -c_{1}x^{2} \right)^{2} -\frac{8k_{1}k_{3}}{3} \left( 3F -c_{1}x^{2} \right) \label{eq.gravel.2}
\end{eqnarray}
where $k_{1}, k_{2}, k_{3}$ are arbitrary constants. We note that for $k_{3}=0$ condition (\ref{eq.gravel.2}) reduces to condition (\ref{eq.gravel.1.2}). Therefore, the superintegrable potential (C.6) of \cite{Gravel 2004} is a subcase of (\ref{eq.gravel.1}).
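The mechanism behind condition (\ref{eq.gravel.2}) can be exhibited symbolically: along the equations of motion $\ddot{x}=-F'$, $\ddot{y}=-2c_{1}y$, the velocity-cubic, velocity-quadratic and velocity-free parts of $dI_{3}/dt$ cancel identically, and what survives is a single $\dot{x}\dot{y}$ coefficient, a second-order ODE for $F(x)$ (whose first integral is the algebraic condition above). The sketch below (variable names ours) performs this reduction with \texttt{sympy}.

```python
# Sketch: reduce the CFI condition dI_3/dt = 0 for the potential
# V = c1*y**2 + F(x) to a single ODE for F(x). Names are ours.
import sympy as sp

x, y, c1, k3 = sp.symbols('x y c_1 k_3')
xd, yd = sp.symbols('xd yd')          # xd = dx/dt, yd = dy/dt
F = sp.Function('F')(x)

# Equations of motion for V = c1*y**2 + F(x)
ax, ay = -sp.diff(F, x), -2*c1*y

L = x*yd - y*xd                        # angular momentum
I3 = (L*xd**2 - (3*y*F - c1*x**2*y + k3*y)*xd
      + sp.diff(F, x)/(2*c1)*(3*F - c1*x**2 + k3)*yd)

dI3 = sp.expand(xd*sp.diff(I3, x) + yd*sp.diff(I3, y)
                + ax*sp.diff(I3, xd) + ay*sp.diff(I3, yd))

poly = sp.Poly(dI3, xd, yd)
# Only the xd*yd coefficient survives; it is the ODE that F must satisfy.
ode = sp.simplify(poly.coeff_monomial(xd*yd))
others = [sp.simplify(c) for (m, c) in poly.terms() if m != (1, 1)]
```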
\section{Conclusions}
\label{sec.conclusions}
We draw the following conclusions: \newline
a) We have developed a direct systematic method to compute the $m$th-order FIs of the autonomous (in general non-Riemannian) dynamical systems (\ref{eq.tk1}) in terms of the `symmetries' of the geometric objects (symmetric connection or kinetic metric, depending on the case) defined by the dynamical equations. \newline
b) This method applies to non-Riemannian geometries with a symmetric connection. It has been shown that the $m$th-order FIs require the generalized KTs and KVs defined by the symmetric connection $\Gamma _{bc}^{a}(q)$. The case of a Riemannian connection is a special case, where the $m$th-order FIs can be related to a gauged weak generalized Noether symmetry by means of the Inverse Noether Theorem.\newline
c) The system of PDEs (\ref{eq.veldep4.1}) - (\ref{eq.veldep4.4}) resulting from the condition $\frac{dI^{(m)}}{dt}=0$ along the dynamical equations consists of two parts: A geometric part (eqs. (\ref{eq.veldep4.1}) and (\ref{eq.veldep4.2}) ) common to all systems which share the same connection; and a dynamical part (eqs. (\ref{eq.veldep4.3}) and (\ref{eq.veldep4.4}) ) which includes the generalized forces $Q^{a}$ of the specific system. \newline
d) We determined the condition which the connection coefficients must satisfy in order for the 2d dynamical systems (\ref{eq.nonR1.1}) - (\ref{eq.nonR1.2}) to be non-Riemannian.
Obviously, Theorem \ref{thm1} provides a new systematic way to determine the higher order FIs, autonomous and time-dependent, of autonomous (in general non-Riemannian) dynamical systems of the form (\ref{eq.tk1}).
\section*{Appendix}
We assume that the totally symmetric tensor quantities $M_{i_{1}...i_{r}}(t,q)$ are polynomials in $t$ of the form
\begin{equation}
M_{i_{1}...i_{r}}(t,q)=\sum_{N_{r}=0}^{n_{r}} L_{(N_{r})i_{1}...i_{r}}(q)t^{N_{r}},\enskip r=1,2,...,m, \enskip m\geq1 \label{eq.aspm}
\end{equation}
where $L_{(N_{r})i_{1}...i_{r}}(q)$, $N_{r}=0,1,...,n_{r}$, are arbitrary $r$-rank totally symmetric tensors and $n_{r}\geq0$ is the degree of the polynomial associated with the $r$-rank tensor $M_{i_{1}...i_{r}}(t,q)$. We note that the degrees $n_{r}$ of the above polynomial expressions in $t$ may also be infinite.
We consider the following cases.
\bigskip
\underline{\textbf{I. Case with $n$ finite.}}
Substituting (\ref{eq.aspm}) in the system of PDEs (\ref{eq.veldep4.1}) - (\ref{eq.veldep4.4}), we obtain the following system of polynomial equations in $t$:
\begin{eqnarray}
0&=& \sum_{N=0}^{n} L_{(N)(i_{1}...i_{m}|i_{m+1})}t^{N} \label{eq.nh1.0} \\
0&=& M_{,t} -\sum^{n}_{N=0} L_{(N)i_{1}}Q^{i_{1}} t^{N} \label{eq.nh1.1} \\
0 &=& M_{,i_{1}} +\sum^{n>0}_{N=1}\left[ NL_{(N)i_{1}} -2L_{(N-1)i_{1}i_{2}}Q^{i_{2}}\right] t^{N-1} -2L_{(n)i_{1}i_{2}}Q^{i_{2}}t^{n}, \enskip m>1 \label{eq.nh1.2} \\
0&=& \sum^{n-1\geq0}_{N=0} \left[ (N+1) L_{(N+1)i_{1}...i_{r}} +L_{(N)(i_{1}...i_{r-1}|i_{r})} -(r+1)L_{(N)i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} \right] t^{N} + \notag \\
&& +\left[ L_{(n)(i_{1}...i_{r-1}|i_{r})} -(r+1)L_{(n)i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} \right] t^{n}, \enskip r= 2,3,...,m-1, \enskip m>2 \label{eq.nh1.3} \\
0&=& M_{,i_{1}} +\sum^{n>0}_{N=1}NL_{(N)i_{1}} t^{N-1}, \enskip m=1 \label{eq.nh1.4} \\
0&=& \sum^{n-1\geq0}_{N=0} \left[ (N+1)L_{(N+1)i_{1}...i_{m}} +L_{(N)(i_{1}...i_{m-1}|i_{m})} \right]t^{N} +L_{(n)(i_{1}...i_{m-1}|i_{m})}t^{n}, \enskip m>1 \label{eq.nh1.5}
\end{eqnarray}
where --without loss of generality-- the polynomial expressions (\ref{eq.aspm}) of $t$ are assumed to be of the same degree, that is, $n=n_{r}$ for all values of $r$. All the results with $n \neq n_{r}$ are derived as subcases from the case $n=n_{r}$. We note also that: Equation (\ref{eq.nh1.0}) is derived from (\ref{eq.veldep4.1}), (\ref{eq.nh1.1}) from (\ref{eq.veldep4.4}), (\ref{eq.nh1.2}) from (\ref{eq.veldep4.3}) for $r=1$, (\ref{eq.nh1.3}) from (\ref{eq.veldep4.3}) for $r=2,3,...,m-1$, (\ref{eq.nh1.4}) from (\ref{eq.veldep4.2}) for $m=1$, and (\ref{eq.nh1.5}) from (\ref{eq.veldep4.2}) for $m>1$.
Equation (\ref{eq.nh1.0}) implies that the quantities $L_{(N)i_{1}...i_{m}}$ with $N=0,1,...,n$ are $m$th-order generalized KTs. For $m=1$, $L_{(N)i_{1}}$ are generalized KVs.
Integrating equation (\ref{eq.nh1.1}), we find that
\begin{equation}
M= \sum^{n}_{N=0} L_{(N)i_{1}}Q^{i_{1}} \frac{t^{N+1}}{N+1} +G(q) \label{eq.nh2.1}
\end{equation}
where $G(q)$ is an arbitrary smooth function. \emph{We note that the integrability conditions of the scalar $M(t,q)$ have been replaced by the integrability conditions $G_{,[ab]}=0$ of the function $G(q)$.}
Replacing $M$ from (\ref{eq.nh2.1}), equation (\ref{eq.nh1.2}) gives:
\begin{align*}
0=& G_{,i_{1}} +L_{(1)i_{1}}(n>0) -2L_{(0)i_{1}i_{2}}Q^{i_{2}} + \\
& +\sum^{n-1\geq1}_{N=1} \frac{1}{N} \left[ \left(L_{(N-1)c}Q^{c}\right)_{,i_{1}} +N(N+1)L_{(N+1)i_{1}} -2NL_{(N)i_{1}i_{2}}Q^{i_{2}} \right] t^{N} + \\
& +\frac{1}{n} \underbrace{\left[ \left(L_{(n-1)c}Q^{c}\right)_{,i_{1}} -2nL_{(n)i_{1}i_{2}}Q^{i_{2}} \right]}_{n>0} t^{n} +\left(L_{(n)c}Q^{c}\right)_{,i_{1}} \frac{t^{n+1}}{n+1}, \enskip m>1 \implies
\end{align*}
\begin{eqnarray}
G_{,i_{1}} &=& 2L_{(0)i_{1}i_{2}}Q^{i_{2}} -L_{(1)i_{1}}(n>0), \enskip m>1 \label{eq.nh3.1} \\
\left(L_{(k-1)c}Q^{c}\right)_{,i_{1}} &=& 2kL_{(k)i_{1}i_{2}}Q^{i_{2}} -k(k+1)L_{(k+1)i_{1}}, \enskip k=1,2,...,n-1, \enskip n>1, \enskip m>1 \label{eq.nh3.2} \\
\left(L_{(n-1)c}Q^{c}\right)_{,i_{1}} &=& 2nL_{(n)i_{1}i_{2}}Q^{i_{2}}, \enskip n>0, \enskip m>1 \label{eq.nh3.3} \\
L_{(n)i_{1}}Q^{i_{1}} &=& s_{0}, \enskip m>1 \label{eq.nh3.4}
\end{eqnarray}
where $s_{0}$ is an arbitrary constant. \emph{The notation $L_{(1)i_{1}}(n>0)$ indicates that the vector $L_{(1)i_{1}}$ exists only when the degree of the polynomial $n>0$, that is, when $n=0$, the vector $L_{(1)i_{1}}$ vanishes}.
Replacing (\ref{eq.nh2.1}) in (\ref{eq.nh1.4}), we find the following conditions:
\begin{eqnarray}
G_{,i_{1}} &=& -L_{(1)i_{1}}(n>0), \enskip m=1 \label{eq.nh4.1} \\
\left(L_{(k-1)c}Q^{c}\right)_{,i_{1}} &=& -k(k+1)L_{(k+1)i_{1}}, \enskip k=1,2,...,n-1, \enskip n>1, \enskip m=1 \label{eq.nh4.2} \\
L_{(n-1)i_{1}}Q^{i_{1}} &=& s_{1}, \enskip n>0, \enskip m=1 \label{eq.nh4.3} \\
L_{(n)i_{1}}Q^{i_{1}} &=& s_{0}, \enskip m=1 \label{eq.nh4.4}
\end{eqnarray}
where $s_{1}$ is an arbitrary constant.
We observe that conditions (\ref{eq.nh4.1}) - (\ref{eq.nh4.4}) emerge from conditions (\ref{eq.nh3.1}) - (\ref{eq.nh3.4}) if we rewrite the latter in the following compact form:
\begin{eqnarray}
G_{,i_{1}} &=& 2L_{(0)i_{1}i_{2}}(m>1)Q^{i_{2}} -L_{(1)i_{1}}(n>0) \label{eq.nh5.1} \\
\left(L_{(k-1)c}Q^{c}\right)_{,i_{1}} &=& 2kL_{(k)i_{1}i_{2}}(m>1)Q^{i_{2}} -k(k+1)L_{(k+1)i_{1}}, \enskip k=1,2,...,n-1, \enskip n>1 \label{eq.nh5.2} \\
\left(L_{(n-1)c}Q^{c}\right)_{,i_{1}} &=& 2nL_{(n)i_{1}i_{2}}(m>1)Q^{i_{2}}, \enskip n>0 \label{eq.nh5.3} \\
L_{(n)i_{1}}Q^{i_{1}} &=& s_{0}. \label{eq.nh5.4}
\end{eqnarray}
\emph{The notation $L_{(0)i_{1}i_{2}}(m>1)$ indicates that the quantities $L_{(0)i_{1}i_{2}}$ exist only when the degree of the FI $m>1$, that is, when $m=1$, the quantities $L_{(0)i_{1}i_{2}}$ vanish.}
Conditions (\ref{eq.nh5.2}) and (\ref{eq.nh5.3}) are written compactly as follows:
\begin{equation}
\left(L_{(k-1)c}Q^{c}\right)_{,i_{1}} = 2kL_{(k)i_{1}i_{2}}(m>1)Q^{i_{2}} -k(k+1)L_{(k+1)i_{1}}(k<n), \enskip k=1,2,...,n, \enskip n>0. \label{eq.nh5.5}
\end{equation}
\emph{The notation $L_{(k+1)i_{1}}(k<n)$ indicates that the vector $L_{(k+1)i_{1}}$ exists only when $k<n$, that is, if $k\geq n$, the vector $L_{(k+1)i_{1}}$ vanishes.}
Equation (\ref{eq.nh1.3}) implies that:
\begin{eqnarray}
L_{(k)(i_{1}...i_{r-1}|i_{r})} &=& (r+1)L_{(k)i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} -(k+1) L_{(k+1)i_{1}...i_{r}}, \notag \\
&& k=0,1,..., n-1, \enskip r= 2,3,...,m-1, \enskip n>0, \enskip m>2 \label{eq.nh6.1} \\
L_{(n)(i_{1}...i_{r-1}|i_{r})} &=& (r+1)L_{(n)i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}}, \enskip r= 2,3,...,m-1, \enskip m>2. \label{eq.nh6.2}
\end{eqnarray}
Conditions (\ref{eq.nh6.1}) and (\ref{eq.nh6.2}) are written compactly as follows:
\begin{eqnarray}
L_{(k)(i_{1}...i_{r-1}|i_{r})} &=& (r+1)L_{(k)i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} -(k+1) L_{(k+1)i_{1}...i_{r}}(k<n), \notag \\
&& k=0,1,..., n, \enskip r= 2,3,...,m-1, \enskip m>2. \label{eq.nh7}
\end{eqnarray}
\emph{The notation $L_{(k+1)i_{1}...i_{r}}(k<n)$ indicates that the quantities $L_{(k+1)i_{1}...i_{r}}$ exist only when $k<n$, that is, if $k\geq n$, the quantities $L_{(k+1)i_{1}...i_{r}}$ vanish.}
Equation (\ref{eq.nh1.5}) implies that the quantities $L_{(n)i_{1}...i_{m-1}}$ with $m>1$ are the components of an $(m-1)$th-order generalized KT, and that the $m$th-order generalized KTs satisfy
\begin{equation}
L_{(k)i_{1}...i_{m}} = -\frac{1}{k} L_{(k-1)(i_{1}...i_{m-1}|i_{m})}, \enskip k=1, 2, ..., n, \enskip n>0, \enskip m>1. \label{eq.nh8}
\end{equation}
The $m$th-order FI (\ref{FI.5}) is
\begin{equation}
I^{(m)}_{n}= \sum_{r=1}^{m} \left( \sum^{n}_{N=0}L_{(N)i_{1}...i_{r}} t^{N}\right) \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +s_{0}\frac{t^{n+1}}{n+1} +\sum^{n>0}_{N=1} L_{(N-1)c}Q^{c} \frac{t^{N}}{N} +G(q) \label{eq.nh9}
\end{equation}
where $m\geq1$, $n\geq0$, $L_{(N)i_{1}...i_{m}}(q)$ with $N=0, 1, ..., n$ are $m$th-order generalized KTs satisfying the condition (\ref{eq.nh8}), $L_{(n)i_{1}...i_{m-1}}(q)$ with $m>1$ is an $(m-1)$th-order generalized KT and the constant $s_{0}$ is given by (\ref{eq.nh5.4}). The function $G(q)$ and the totally symmetric tensors $L_{(N)i_{1}...i_{r}}(q)$ satisfy the conditions (\ref{eq.nh5.1}), (\ref{eq.nh5.5}) and (\ref{eq.nh7}).
The notation $I^{(m)}_{n}$ denotes the $m$th-order FI (upper index) with polynomial time-dependence of degree $n$ (lower index). For example, $I^{(2)}_{n}$ is a QFI whose coefficients are polynomials in $t$ of degree fixed by $n$.
\bigskip
\underline{\textbf{II. Case with $n$ infinite.}}
As $n_{r}=n \to \infty$, the polynomial expressions (\ref{eq.aspm}) become the infinite series
\begin{equation}
M_{i_{1}...i_{r}}(t,q)= \sum_{N=0}^{\infty} L_{(N)i_{1}...i_{r}}(q) t^{N}, \enskip r=1,2,...,m, \enskip m\geq1. \label{eq.nh10.1}
\end{equation}
New results --different from those found with $n$ finite-- arise in the case that
\begin{equation}
L_{(N)i_{1}...i_{r}}(q)= \frac{\lambda_{r}^{N}}{N!} L_{i_{1}...i_{r}}(q) \label{eq.nh10.2}
\end{equation}
where $\lambda_{r}$ are arbitrary non-zero constants and $L_{i_{1}...i_{r}}(q)$ are $r$-rank totally symmetric tensors.
Replacing (\ref{eq.nh10.2}) in (\ref{eq.nh10.1}), we find
\begin{equation}
M_{i_{1}...i_{r}}(t,q)= L_{i_{1}...i_{r}}(q) \sum_{N=0}^{\infty} \frac{(\lambda_{r}t)^{N}}{N!} = e^{\lambda_{r}t} L_{i_{1}...i_{r}}(q). \label{eq.nh10.3}
\end{equation}
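The resummation in (\ref{eq.nh10.3}) is the standard exponential series; as a minimal symbolic check (symbol names ours):

```python
# Minimal check of (nh10.3): sum_{N>=0} (lambda_r t)^N / N! = e^{lambda_r t}
import sympy as sp

N = sp.symbols('N', integer=True, nonnegative=True)
t, lam = sp.symbols('t lambda')
series = sp.summation((lam*t)**N / sp.factorial(N), (N, 0, sp.oo))
```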
Substituting (\ref{eq.nh10.3}) in the system of PDEs (\ref{eq.veldep4.1}) - (\ref{eq.veldep4.4}), we obtain the following system of equations:
\begin{eqnarray}
0 &=& L_{(i_{1}...i_{m}|i_{m+1})} \label{eq.nh11.1} \\
0 &=& M_{,t} -e^{\lambda t} L_{c}Q^{c} \label{eq.nh11.2} \\
0 &=& M_{,i_{1}} + e^{\lambda t} \left( \lambda L_{i_{1}} -2L_{i_{1}i_{2}}Q^{i_{2}} \right), \enskip m>1 \label{eq.nh11.3} \\
0 &=& L_{(i_{1}...i_{r-1}|i_{r})} +\lambda L_{i_{1}...i_{r}} -(r+1) L_{i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}}, \enskip r=2,3,...,m-1, \enskip m>2 \label{eq.nh11.4} \\
0 &=& M_{,i_{1}} +\lambda e^{\lambda t}L_{i_{1}}, \enskip m=1 \label{eq.nh11.5} \\
0 &=& \lambda L_{i_{1}...i_{m}} +L_{(i_{1}...i_{m-1}|i_{m})}, \enskip m>1 \label{eq.nh11.6}
\end{eqnarray}
where --without loss of generality-- all the non-zero constants $\lambda_{r}$ are fixed to the same non-zero constant $\lambda$. The $m$th-order FI produced from this assumption contains as subcases all the FIs associated with constants $\lambda_{r}$ which are not all the same.
Equation (\ref{eq.nh11.1}) implies that $L_{i_{1}...i_{m}}$ is an $m$th-order generalized KT.
Integrating equation (\ref{eq.nh11.2}), we find
\begin{equation}
M= \frac{e^{\lambda t}}{\lambda}L_{c}Q^{c} +G(q) \label{eq.nh12}
\end{equation}
where $G(q)$ is an arbitrary smooth function.
Replacing (\ref{eq.nh12}) in equations (\ref{eq.nh11.3}) and (\ref{eq.nh11.5}), we find that $G(q)=const\equiv 0$ and the condition:
\begin{equation}
\left( L_{c}Q^{c} \right)_{,i_{1}}= 2\lambda L_{i_{1}i_{2}}(m>1) Q^{i_{2}} -\lambda^{2}L_{i_{1}}. \label{eq.nh13}
\end{equation}
Equation (\ref{eq.nh11.4}) gives the condition
\begin{equation}
L_{(i_{1}...i_{r-1}|i_{r})}= (r+1)L_{i_{1}...i_{r}i_{r+1}} Q^{i_{r+1}} -\lambda L_{i_{1}...i_{r}}, \enskip r=2,3,...,m-1, \enskip m>2 \label{eq.nh14.1}
\end{equation}
while the remaining condition (\ref{eq.nh11.6}) implies that
\begin{equation}
L_{i_{1}...i_{m}}= -\frac{1}{\lambda} L_{(i_{1}...i_{m-1}|i_{m})}, \enskip m>1. \label{eq.nh14.2}
\end{equation}
The associated $m$th-order FI (\ref{FI.5}) is
\begin{equation}
I^{(m)}_{e}= \frac{e^{\lambda t}}{\lambda} \left( \lambda \sum^{m}_{r=1} L_{i_{1}...i_{r}} \dot{q}^{i_{1}} ... \dot{q}^{i_{r}} +L_{c}Q^{c} \right) \label{eq.nh15}
\end{equation}
where $\lambda\neq0$, $L_{i_{1}...i_{m}}(q)$ is an $m$th-order generalized KT satisfying the condition (\ref{eq.nh14.2}), and the remaining totally symmetric tensors $L_{i_{1}...i_{r}}(q)$ with $r=1,2,...,m-1$ and $m>1$ satisfy the conditions (\ref{eq.nh13}) and (\ref{eq.nh14.1}).
The notation $I^{(m)}_{e}$ indicates the $m$th-order FI (upper index) with exponential time-dependence (lower index).
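As an illustration, the QFI $I_{5}$ of Application 3 is an instance of $I^{(2)}_{e}$: restricting to the $z$-degree of freedom with $Q^{z}=V_{,z}$, one may take $L_{zz}=1$ and $L_{z}=-\lambda z$. The sketch below (symbol names ours) checks the conditions (\ref{eq.nh13}) and (\ref{eq.nh14.2}), the conservation of the resulting FI (\ref{eq.nh15}), and its agreement with $I_{5}$.

```python
# Sketch: the QFI I_5 of Application 3 as an instance of I^(2)_e, with
# L_zz = 1, L_z = -lambda*z and Q^z = V_{,z}. Symbol names are ours.
import sympy as sp

t = sp.symbols('t')
z = sp.symbols('z', positive=True)
zd = sp.symbols('zd', real=True)                  # zd = dz/dt
lam, c2 = sp.symbols('lambda c_2', positive=True)

V = -lam**2/8*z**2 + c2/z**2
Q = sp.diff(V, z)                                  # Q^z, so zdd = -Q

L_zz, L_z = sp.Integer(1), -lam*z

# Condition (nh14.2): L_zz = -(1/lambda) dL_z/dz
cond_KT = sp.simplify(L_zz + sp.diff(L_z, z)/lam)

# Condition (nh13): (L_c Q^c)_{,z} = 2 lambda L_zz Q^z - lambda^2 L_z
cond13 = sp.simplify(sp.diff(L_z*Q, z) - 2*lam*L_zz*Q + lam**2*L_z)

# The FI (nh15) and its total time derivative along zdd = -Q
I_e = sp.exp(lam*t)/lam*(lam*(L_zz*zd**2 + L_z*zd) + L_z*Q)
dIdt = sp.simplify(sp.diff(I_e, t) + zd*sp.diff(I_e, z)
                   - Q*sp.diff(I_e, zd))
```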
\bigskip
The above completes the proof of Theorem \ref{thm1}.
\section*{Acknowledgments}
We thank Dr Andronikos Paliathanasis for proposing the dynamical system (\ref{eq.aham8a}) - (\ref{eq.aham8b}).
\section*{Data Availability}
The data that supports the findings of this study are available within the article.
\section*{Conflict of interest}
The authors declare no conflict of interest.
\section*{Author contributions}
Conceptualization, Antonios Mitsopoulos and Michael Tsamparlis; Formal analysis, Antonios Mitsopoulos; Methodology, Antonios Mitsopoulos and Michael Tsamparlis; Writing – original draft, Antonios Mitsopoulos; Writing – review and editing, Antonios Mitsopoulos, Michael Tsamparlis and Aniekan Magnus Ukpong.
\bigskip
\bigskip
% --- arXiv:math/9805098, "On Dynamics of Cubic Siegel Polynomials" ---
\section{A Blaschke Parameter Space}
\label{sec:blapar}
Now we focus on a certain class of degree $5$ Blaschke products. These are the maps $B$ with the following two properties:\\
\begin{enumerate}
\item[(i)]
$B$ has the form
\begin{equation}
\label{eqn:blass}
B:z\mapsto e^{2 \pi i t} z^3 \left ( \frac{z-p}{1-\overline{p}z} \right )
\left ( \frac{z-q}{1-\overline{q}z} \right ), \ \ \ \ |p|>1, |q|>1
\end{equation}
where $p$ and $q$ are chosen such that $B$ has a double critical point on the unit circle $\BBB T$ and a pair $(c,1/\overline{c})$ of symmetric critical points which may or may not be on $\BBB T$.
\item[(ii)]
$t$ is the unique number in $[0,1]$ for which the rotation number of $B|_{\BBB T}$ is equal to $\theta$, with $0<\theta<1$ being a given irrational number.
\end{enumerate}
The number $t$ in (ii) is unique because the rotation number of $B$ in (\ref{eqn:blass}) is a continuous, nondecreasing function of $t$ which is strictly increasing at all irrational values (see for example \cite{Katok}, Proposition 11.1.9).
From the above description, it follows that every $B$ which satisfies (i) and (ii) can be represented as a normalized Blaschke product in $\overline{\cal B}\smallsetminus \cal B$ followed by a unique rotation which adjusts the rotation number to $\theta$. As a consequence, \corref{critpar2} shows that every such $B$ is uniquely determined by the position of its critical points.
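The symmetry $c \leftrightarrow 1/\overline{c}$ of the free critical points can also be observed numerically. The sketch below (sample parameters $p,q$ of our own choosing, generic, so no critical point is forced onto $\BBB T$) computes the critical set of a map of the form (\ref{eqn:blass}) and checks its invariance under $z\mapsto 1/\overline z$; the unimodular factor $e^{2\pi i t}$ plays no role here.

```python
# Numerical sketch: for a generic map of the form (eqn:blass), the four
# finite nonzero critical points come in pairs (c, 1/conj(c)) symmetric
# in the unit circle. Sample parameter values are ours, for illustration.
import numpy as np

p, q = 1.5 + 0.5j, -1.2 + 0.8j          # |p| > 1, |q| > 1

# B = e^{2 pi i t} N/D with N = z^3 (z - p)(z - q),
# D = (1 - conj(p) z)(1 - conj(q) z). Coefficients, highest degree first.
N = np.polymul(np.polymul([1, 0, 0, 0], [1, -p]), [1, -q])
D = np.polymul([-np.conj(p), 1], [-np.conj(q), 1])

# Critical points: roots of N'D - ND'. z = 0 appears as a double root,
# matching the double critical point of z^3 at the origin (infinity is
# the other double critical point).
num = np.polysub(np.polymul(np.polyder(N), D), np.polymul(N, np.polyder(D)))
crit = np.roots(num)
nonzero = crit[np.abs(crit) > 1e-4]

# Invariance of the critical set under the reflection I: z -> 1/conj(z)
sym_ok = all(np.abs(1/np.conj(c) - nonzero).min() < 1e-6 for c in nonzero)
```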
The rotation group {\bf rot}$=\{ R_{\rho}:z\mapsto \rho z\ \ \mbox{with}\ \ |\rho|=1 \}$ acts on the set of all such Blaschke products by conjugation. In fact,
$$R_{\rho}^{-1}\circ B\circ R_{\rho}:z\mapsto e^{2 \pi i t} \rho^4 z^3 \left ( \frac{z-p \overline{\rho}}{1-\overline{p}\rho z} \right )
\left ( \frac{z-q \overline{\rho}}{1-\overline{q}\rho z} \right ). $$
\noindent
We would like to understand the topology of the space of all ``critically marked'' Blaschke products satisfying (i) and (ii) modulo the action of {\bf rot}. By \corref{critpar2}, the conjugacy class of such a Blaschke product is uniquely determined by the location of its critical points up to a rotation. In case there is only one double critical point on $\BBB T$, we can simply represent every conjugacy class by the unique Blaschke product which has a double critical point at $z=1$. Therefore, the quotient space of all critically marked Blaschke products satisfying (i) and (ii) is canonically homeomorphic to the space of all configurations of the two marked critical points $c_1$ and $c_2$ outside the unit disk $\BBB D$ with $c_1=1$ or $c_2=1$. This is just the disjoint union of two copies of $\BBB C\smallsetminus \BBB D$ glued together along the boundary circle by the identification
$$ (1,c_2)\sim (c_1,1) \Longleftrightarrow c_1=\frac{1}{c_2}.$$
It is not hard to see that the resulting space is topologically a punctured plane (see \figref{glue}).
\realfig{glue}{sig2.eps}{{\sl Topology of the parameter space ${\cal B}_5^{cm}(\theta)$.}}{13cm}
The space of all critically marked Blaschke products $B$ satisfying (i) and (ii) above modulo the action of {\bf rot} is denoted by ${\cal B}_5^{cm}(\theta)$. The identification ${\cal B}_5^{cm}(\theta) \simeq
\BBB C^{\ast}$ can be explained by introducing the uniformizing parameter $\mu:\BBB C^{\ast}\rightarrow {\cal B}_5^{cm}(\theta)$ as follows: For $\mu \in \BBB C^{\ast}$ with $|\mu|> 1$, the corresponding Blaschke product $B_{\mu}$ has marked critical points at $\{ 0, \infty,
c_1=\mu, 1/{\overline \mu}, c_2=1 \}$. Similarly, if $|\mu |< 1$, $B_{\mu}$ is the unique Blaschke product with marked critical points at $\{ 0, \infty, c_1=1, c_2=1/\mu, {\overline \mu}\}$. Finally, when $|\mu|=1$, $B_{\mu}$ denotes either the unique Blaschke product $B$ with marked critical points at $\{ 0, \infty, c_1=\mu, c_2=1 \}$ or its conjugate $R_{\mu}^{-1}\circ B\circ R_{\mu}$ with marked critical points at $\{ 0, \infty, c_1=1, c_2=1/\mu \}$. Note that $B_{\mu}=B_{1/\mu}$ as maps, if we forget the marking of the critical points.
In the topology of ${\cal B}_5^{cm}(\theta)$, the convergence of a sequence $\{ B_{\mu_n}\}$ to some $B_{\mu}$ has the following meaning: If $B_{\mu}$ has only one double critical point on $\BBB T$ so that $|\mu|\neq 1$, then $B_{\mu_n}\rightarrow B_{\mu}$ simply means $\mu_n\rightarrow \mu$, i.e., uniform convergence on compact subsets of the plane respecting the convergence of the marked critical points. On the other hand, if $B_{\mu}$ has two double critical points on the unit circle so that $|\mu|=1$, then $B_{\mu_n}\rightarrow B_{\mu}$ means that $\{ \mu_n \}$ can only accumulate on $\mu$ or $1/\mu=\overline{\mu}$. In other words, in the topology of local uniform convergence, $\{B_{\mu_n}\} $ can only accumulate on $B_{\mu}$ or its conjugate $R_{\mu}^{-1}\circ B_{\mu}\circ R_{\mu}$.\\
For future reference, we need a somewhat detailed analysis of the structure of the invariant set $\bigcup_{k\geq 0} B^{-k}(\BBB T)$ for a Blaschke product $B\in {\cal B}_5^{cm}(\theta)$. For similar descriptions in a family of degree $3$ Blaschke products, see \cite{Petersen}.\\ \\
{\bf Definition (Skeletons).}\ Let $B\in {\cal B}_5^{cm}(\theta)$. Define $T_0=\BBB T$ and $T_1=\overline{B^{-1}(T_0)\smallsetminus T_0}$. In general, for $k\geq 2$ we define
$T_k$ inductively as $T_k=B^{-1}(T_{k-1})$. We call the closed set $T_k$ the
$k$-{\it skeleton} of $B$. Note that $B$ commutes with the reflection $I:z\mapsto 1/\overline z$. Therefore, every $T_k$ is invariant under $I$.\\
Since $B$ is a holomorphic branched covering of the sphere, it is not hard to see that the preimage of every piecewise analytic Jordan curve under $B$ is a finite union of piecewise analytic Jordan curves intersecting one another at finitely many points which are necessarily among the critical points of $B$. Therefore, each $T_k$ decomposes into a finite number of piecewise analytic Jordan curves with this finite intersection property.
The next proposition tells us what a $k$-skeleton looks like.
\begin{prop}[Structure of the $k$-Skeleton]
\label{skeleton}
\noindent
\begin{enumerate}
\item[(a)]
For $k\geq 1$, the $k$-skeleton $T_k$ is the union of finitely many piecewise analytic Jordan curves $\{ T_k^1, \cdots, T_k^m \}$ which intersect one another at finitely many points and do not cross the unit circle $\BBB T$. None of the $T_k^i$ encloses $\BBB T$. For any $T_k^i$ in this family, the reflected copy $I(T_k^i)$ also belongs to this family.
\item[(b)]
With the notation of (a), let $D_k^i$ denote the bounded component of $\BBB C\smallsetminus T_k^i$ for $k\geq 1$. For $k=0$, $D_0^i$ could mean either $\BBB D$ or $\overline{\BBB C} \smallsetminus \overline{\BBB D}$. Then for $k\geq 1$, $B$ maps $D_k^i$ onto some $D_{k-1}^j$. The mapping is either a conformal isomorphism or a 2-to-1 branched covering. As a result, $B^{\circ k}$ is a proper holomorphic map from $D_k^i$ onto $\BBB D$ or $\overline{\BBB C} \smallsetminus \overline{\BBB D}$.
\item[(c)]
If $k\geq 1$ and $i\neq j$, we have $D_k^i\cap D_k^j = \emptyset$.
\item[(d)]
For $k>l\geq 1$, either $D_k^i$ and $D_l^j$ are
disjoint or $D_k^i\subset D_l^j$. Conversely, if $D_k^i\subset D_l^j$, we necessarily have $k\geq l$.
\end{enumerate}
\end{prop}
Every $D_k^i$ is called a $k$-{\it drop} or simply a {\it drop} of $B$. In other words, $k$-drops are the open topological disks bounded by the Jordan curves
in the decomposition of the $k$-skeleton of $B$. For $k=0$, we have slightly changed the notion of drops. The unit circle $\BBB T$ is the only Jordan curve in the 0-skeleton of $B$, {\it and we agree to call any of the two topological disks $\BBB D$ or $\overline{\BBB C}\smallsetminus \overline{\BBB D}$ a $0$-drop}. The integer $k$ is called the {\it depth} of $D_k^i$.
\begin{pf}
(a) $B^{-1}(\BBB T)$ is the union of $\BBB T$ and 2 or 4 piecewise
analytic Jordan curves which are symmetric with respect to the unit circle and
intersect it at at most one point. (In fact, none of them crosses the unit circle because a point of crossing would be a simple critical point of $B$ on $\BBB T$.) In particular, $T_1$ is the union of these 2 or 4 Jordan curves. It follows that $B^{-1}(\BBB D)$ consists of 1 or 2 open topological disks outside $\BBB D$ together with a subregion of $\BBB D$ which is bounded by $\BBB T$ and 1 or 2 preimages of $\BBB T$ in $\BBB D$ (see \figref{T1}).
As we mentioned earlier, from the fact that $B$ is a holomorphic branched covering of the sphere and by induction on $k$, it follows that $T_k$ is a finite union of piecewise analytic Jordan curves $\{ T_k^1, \cdots, T_k^m \}$ which intersect one another at finitely many points. These points are necessarily {\it precritical} points of $B$. The fact that none of the $T_k^i$
crosses the unit circle also follows easily by induction on $k$.
\realfig{T1}{sig3.eps}{{\sl Four different configurations for $B^{-1}(\BBB T)$, where $B\in {\cal B}_5^{cm}(\theta)$. The shaded regions are components of $B^{-1}(\BBB D)$. The shaded subregion of $\BBB D$ is mapped to $\BBB D$ by a 3-to-1 branched covering with a superattracting fixed point at the origin. There is a critical point at $z=1$ and the other critical point(s) are symmetric with respect to the unit circle. They are marked by an asterisk. In (a) both components of $B^{-1}(\BBB D)$ outside $\BBB D$ are mapped isomorphically to $\BBB D$. In (b) there is only one component of $B^{-1}(\BBB D)$ outside $\BBB D$ which is mapped onto $\BBB D$ by a 2-to-1 branched covering. (c) is a limiting case of (a) or (b) and (d) is a limiting case of (a).}}{11cm}
(b) By the construction of $T_k$, $B$ maps every $T_k^i$ to some $T_{k-1}^j$. Let $k\geq 1$ and let us assume that $T_k^i$ is completely outside of $\BBB D$. Since all poles of $B$ are inside $\BBB D$, it follows that $B$ is holomorphic in $D_k^i$ and maps it in a proper way onto $D_{k-1}^j$. In case $T_k^i$ is inside $\BBB D$, it follows by symmetry that $B$ maps $D_k^i$ onto some $D_{k-1}^j$ (which is the reflection of the image of $I(D_k^i)$). Since every $D_k^i$ can contain at most one critical point of $B$, in either case the map $B:D_k^i\rightarrow D_{k-1}^j$ will be a conformal isomorphism or a 2-to-1 branched covering.
(c) We prove the claim by induction on $k$. This is obvious for $k=0$. Suppose that there exist two distinct $k$-drops $D_k^i$ and $D_k^j$ which intersect. By (b), $B$ maps both of them to some $(k-1)$-drops and the mapping is proper. It is easy to see that these two $(k-1)$-drops have to be distinct. Then every point in $D_k^i\cap D_k^j$ must map to a point in the intersection of the two $(k-1)$-drops. This contradicts the induction hypothesis.
(d) Let $k>l$ and $D_k^i\cap D_l^j \neq \emptyset$. If $D_k^i$ is not contained in $D_l^j$, then $D_k^i\cap T_l^j \neq \emptyset$. Applying $B^{\circ k}$ to $D_k^i$, it follows from (b) that $B^{\circ k}(T_l^j)$ intersects $\BBB D$ or $\overline{\BBB C} \smallsetminus \overline{\BBB D}$. But $k>l$ implies $B^{\circ k}(T_l^j)=B^{\circ k-l}(B^{\circ l}(T_l^j))=B^{\circ k-l}(\BBB T)=\BBB T$. Conversely, if $D_k^i\subset D_l^j$, then $k$ has to be greater than $l$. This is because $D_k^i$ and $D_l^j$ are intersecting, so by the above argument $k<l$ would imply the reverse inclusion $D_l^j\subset D_k^i$.
\end{pf}
\noindent
{\bf Definition (Nucleus of a Drop).} Let $D_k^i$ be a drop. We define the {\it nucleus}\footnote{Terminology suggested by A. Epstein.} $N_k^i$ of $D_k^i$ as the set of all points in $D_k^i$ which are not accumulated by any other drop of $B$. The nuclei of $k$-drops are said to have depth $k$.
It follows from \propref{skeleton}(c) that
$$N_k^i=D_k^i\smallsetminus \overline{\bigcup_{l\neq k}\bigcup_j D_l^j}.$$
Clearly every nucleus is open. It is also nonempty because every drop contains an open set which eventually maps to the immediate basin of attraction of $0$ or $\infty$, and this open set cannot intersect the closure of any other drop of $B$.
We have two nuclei of depth zero: $N_0$, which is the nucleus of $\BBB D$ and contains the immediate basin of attraction of 0, and $N_{\infty}$, which is the nucleus of $\overline{\BBB C}\smallsetminus \overline{\BBB D}$ and contains the immediate basin of attraction of $\infty$. Obviously $N_{\infty}=I(N_0)$. It is not hard to see that both $N_0$ and $N_{\infty}$ are invariant under $B$:
\begin{equation}
\label{eqn:inv}
B(N_0)\subset N_0,\ \ \ \ B(N_{\infty})\subset N_{\infty}.
\end{equation}
This of course implies that $N_0$ and $N_{\infty}$ are subsets of the Fatou set of $B$.
It follows from \propref{skeleton}(b) that $B$ maps every nucleus of depth $k$ onto some nucleus of depth $k-1$ and the mapping is either a conformal isomorphism or a 2-to-1 branched covering. We include the following lemma for completeness:
\begin{lem}
\label{chain}
Let $N_k^i$ be the nucleus of a drop $D_k^i$ which eventually maps to the unit disk $\BBB D$. Then
\begin{enumerate}
\item[(a)]
No point in the orbit
$$N_k^i=N_k^{i_0}\stackrel{B}{\longrightarrow}N_{k-1}^{i_1}\stackrel{B}{\longrightarrow}\cdots \stackrel{B}{\longrightarrow}N_1^{i_{k-1}}\stackrel{B}{\longrightarrow}N_0 $$
can intersect any of the reflected nuclei $I(N_{k-j}^{i_j}),\ 0\leq j\leq k$.
\item[(b)]
For $z\in N_k^i$, $B^{\circ k}$ is the first iterate of $B$ which sends $z$ to $N_0$.
\end{enumerate}
\end{lem}
\begin{pf}
(a) $B$ commutes with $I$, so there is a reflected orbit
$$I(N_k^i)=I(N_k^{i_0})\stackrel{B}{\longrightarrow}I(N_{k-1}^{i_1})\stackrel{B}{\longrightarrow}\cdots \stackrel{B}{\longrightarrow}I(N_1^{i_{k-1}})\stackrel{B}{\longrightarrow}N_{\infty}. $$
Now any point lying in both orbits would, by the invariance relations (\ref{eqn:inv}), eventually map to a point in $N_0$ and $N_{\infty}$ simultaneously, which is impossible since $N_0 \cap N_{\infty}=\emptyset$.
(b) This is obvious if $k=1$. Suppose that $k>1$ and that for some $0<l<k$, $B^{\circ l}(z)\in N_0$. Then by (\ref{eqn:inv}), $B^{\circ k-1}(z)\in N_0\subset \BBB D$. But $B^{\circ k-1}(z) \in B^{\circ k-1}(D_k^i)$ and $B^{\circ k-1}(D_k^i)$ is a 1-drop which does not intersect $\BBB D$.
\end{pf}
\noindent
{\bf Remark.} If $z\in N_k^i$, it is {\it not} true that $B^{\circ k}$ is the first iterate of $B$ which sends $z$ to the unit disk. In fact, the orbit of $z$ can pass through $\BBB D$ several times before it maps to $N_0$ (see \figref{several}).
\realfig{several}{sev.eps}{{\sl The orbit of a 3-drop under the iteration of a Blaschke product $B \in {\cal B}_5^{cm}(\theta)$. This dark drop on the right maps successively to lighter drops. It visits the unit disk once before it maps onto it.}}{5cm}
\begin{prop}
\label{nuc}
\noindent
\begin{enumerate}
\item[(a)]
Distinct nuclei are disjoint.
\item[(b)]
The map $B^{\circ k}$ from $N_k^i$ onto $N_0$ or $N_{\infty}$ is
either a conformal isomorphism or a 2-to-1 branched covering.
\end{enumerate}
\end{prop}
\begin{pf}
(a) Let $N_k^i$ and $N_l^j$ be two distinct nuclei which intersect. By \propref{skeleton}(c), we have $k\neq l$. Without loss of generality, we assume that $k>l$ and the iterate $B^{\circ l}$ maps $N_l^j$ onto $N_0$. So
for every $z$ in the intersection $N_k^i \cap N_l^j$, $B^{\circ l}(z)$ will belong to $N_0$. This contradicts \lemref{chain}(b).
(b) Since by (a) distinct nuclei are disjoint, an orbit
$$N_k^i=N_k^{i_0}\stackrel{B}{\longrightarrow}N_{k-1}^{i_1}\stackrel{B}{\longrightarrow}
\cdots \stackrel{B}{\longrightarrow}N_1^{i_{k-1}}\stackrel{B}{\longrightarrow}N_0\ \ \mbox{or}\ \ N_{\infty} $$
can hit every critical point of $B$ at most once. Since the critical point $z=1$ of $B$ does not belong to any nucleus, the above orbit can only hit the pair of critical points $c$ and $1/\overline c$, with $|c|\neq 1$. By \lemref{chain}(a), both critical points cannot belong to the
above orbit simultaneously. This means that $B^{\circ k}:N_k^i
\rightarrow N_0$ or $N_{\infty}$ is either a conformal isomorphism or a 2-to-1 branched covering.
\end{pf}
\figref{hypbla}, \figref{nucleus}, and \figref{blaext} show the Julia sets of some Blaschke products in ${\cal B}_5^{cm}(\theta)$ for $\theta=(\sqrt{5}-1)/2$.
\goodbreak
\section{Connectivity of ${\cal M}_3(\theta)$}
\label{sec:connect}
In this section we prove that ${\cal M}_3(\theta)$ is connected. It will be more convenient to work with the double cover $\hat {\cal M}_3(\theta)$, which by definition is the set of all $s\in \BBB C^{\ast}$ such that $s^2\in {\cal M}_3(\theta)$. \propref{m3com} shows that the complement of $\hat {\cal M}_3(\theta)$ in $\BBB C^{\ast}$ has two connected components $\hat{\Omega}_{ext}$ and $\hat{\Omega}_{int}$ which are double covers of $\Omega_{ext}$ and $\Omega_{int}$ and are mapped to one another by the inversion $s\mapsto 1/s$. We would like to show that these open sets are homeomorphic to punctured disks. Connectivity of $\hat {\cal M}_3(\theta)$, hence of ${\cal M}_3(\theta)$, will follow immediately. The strategy of the proof is similar to that for the connectivity of the Mandelbrot set, with one additional
difficulty: We construct a holomorphic branched covering $\Phi:\hat{\Omega}_{ext}\rightarrow {\BBB C} \smallsetminus \overline {\BBB D}$ which extends holomorphically to infinity with $\Phi^{-1}(\infty)=\infty$. The degree of this map is 3, so to prove that $\hat{\Omega}_{ext}$ is a punctured disk one has to show that $\Phi$ has no critical point other than $\infty$. This additional difficulty does not show up in the case of the Mandelbrot set, where the similar map has degree 1, so it automatically becomes a conformal isomorphism (see \cite{Douady-Hubbard1}).
Recall that the {\it B\"{o}ttcher map} $\beta$ associated to a polynomial $$P:z\mapsto a_dz^d+a_{d-1}z^{d-1}+\cdots+a_1z+a_0, \ \ \ \ a_d\neq 0$$
is a conformal isomorphism defined near $\infty$, with $\beta(\infty)=\infty$, which conjugates $P$ to
the map $z\mapsto z^d$; that is, $\beta(P(z))=\beta(z)^d$. This map is unique up to multiplication by a $(d-1)$-th root of unity, so it can be normalized so that the derivative at infinity $\beta'(\infty)$ becomes any $(d-1)$-th root of $1/a_d$.
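The uniqueness statement can be sketched as follows. If $\beta_1$ and $\beta_2$ both conjugate $P$ to $z\mapsto z^d$ near $\infty$, then $g=\beta_2\circ \beta_1^{-1}$ satisfies $g(z^d)=g(z)^d$ near $\infty$. Writing $g(z)=\omega z+O(1)$ and comparing leading coefficients gives
$$\omega z^d+O(z^{d-1})=g(z^d)=g(z)^d=\omega ^d z^d+O(z^{d-1}),$$
hence $\omega ^{d-1}=1$; the functional equation then forces all lower-order terms of $g$ to vanish, so $g(z)=\omega z$.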
There is a classical formula for $\beta$ in terms of the iterates of the polynomial $P$ (see for example \cite{Carleson-Gamelin}):
\begin{equation}
\label{eqn:zarb}
\beta(z)=\lim_{n\rightarrow \infty} \left ( P^{\circ n}(z) \right ) ^{d^{-n}} = z \prod_{n=1}^{\infty} \left ( \frac{P^{\circ n}(z)}{(P^{\circ n-1}(z))^d} \right )^{d^{-n}}.
\end{equation}
The infinite product converges uniformly outside a sufficiently large disk centered at the origin.
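For completeness, let us verify that (\ref{eqn:zarb}) is consistent with the defining property of $\beta$. Taking logarithms, the partial products telescope:
$$z \prod_{n=1}^{N} \left ( \frac{P^{\circ n}(z)}{(P^{\circ n-1}(z))^d} \right )^{d^{-n}} = \left ( P^{\circ N}(z) \right ) ^{d^{-N}},$$
which shows that the two expressions for $\beta$ agree. Moreover, the limit form immediately yields the functional equation:
$$\beta(P(z))=\lim_{n\rightarrow \infty} \left ( P^{\circ n}(P(z)) \right ) ^{d^{-n}}=\lim_{n\rightarrow \infty} \left [ \left ( P^{\circ (n+1)}(z) \right ) ^{d^{-(n+1)}} \right ] ^{d}=\beta(z)^d,$$
where all fractional powers are taken with respect to suitable branches near $\infty$.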
For each $s\in {\BBB C}^{\ast}$, consider the polynomial
$$P^s: z \mapsto \lambda z \left ( 1-\frac{1}{2}(s+\frac{1}{s})z+\frac{1}{3} z^2 \right ) $$
which has critical points at $s$ and $1/s$. The dilation $z\mapsto s z$ conjugates $P^s$ to $P_c$ in (\ref{eqn:normform}) with $c=s^2$. Hence $s \in \hat {\cal M}_3(\theta)$ if and only if $c=s^2 \in {\cal M}_3(\theta)$.
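Indeed, a direct computation confirms the location of the critical points of $P^s$:
$$(P^s)'(z)=\lambda \left ( 1-(s+\frac{1}{s})z+z^2 \right ) =\lambda \, (z-s) \left ( z-\frac{1}{s} \right ).$$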
\begin{thm}[Connectivity of ${\cal M}_3(\theta)$]
\label{m3con}
The open set $\hat{\Omega}_{ext}$ is homeomorphic to ${\BBB C}\smallsetminus \overline {\BBB D}$. Therefore, $\hat {\cal M}_3(\theta)$, hence ${\cal M}_3(\theta)$, is connected.
\end{thm}
\begin{pf}
Let $s\in \hat{\Omega}_{ext}$ and let $\beta_s$ be the B\"{o}ttcher
map which conjugates $P^s$ to $z\mapsto z^3$ near infinity, with $\beta'_s(\infty)=\sqrt{3/\lambda}$. It is a standard argument to show that
$\beta_s$ depends holomorphically on $s$ and extends conformally down to the equipotential $\gamma_s$ passing through the escaping critical point $s$ of $P^s$, mapping the region outside $\gamma_s$ onto the exterior of some closed disk $\overline{\BBB D}(0,r)$, where $r>1$. Note that $\gamma_s$ is topologically a figure eight with $s$ as its double point. Define a map $\Phi:\hat{\Omega}_{ext}\rightarrow {\BBB C} \smallsetminus \overline {\BBB D}$ by
$$\Phi (s)=\beta_s (P^s(s)).$$
This is a holomorphic map which extends holomorphically to infinity. It is not hard to show that $\Phi$ is proper, i.e., $|\Phi(s)|\rightarrow 1$ as $s\rightarrow \partial \hat{\cal M}_3(\theta)$. Hence $\Phi$ is a finite-degree branched covering from $\hat{\Omega}_{ext}\cup \{ \infty \}$ to the topological disk
$\overline {\BBB C} \smallsetminus \overline {\BBB D}$. Let us compute the mapping degree of $\Phi$. By (\ref{eqn:zarb}), we have
$$\begin{array}{rl}
\beta_s(z) & =z\ \displaystyle{ \prod_{n=1}^{\infty} \left [ \frac{\lambda}{3}-\frac{\lambda}{2}(s+\frac{1}{s})\frac{1}{(P^s)^{\circ n-1}(z)}+\frac{\lambda}{((P^s)^{\circ n-1}(z))^2} \right ] ^{3^{-n}} } \\ \\
& =: z\ \displaystyle{ \prod_{n=1}^{\infty} \beta_n(z,s)^{3^{-n}} },
\end{array}$$
hence
\begin{equation}
\label{eqn:fi}
\Phi(s)=P^s(s) \prod_{n=1}^{\infty} \beta_n(P^s(s),s) ^ {3^{-n}},
\end{equation}
where $P^s(s)=-\frac{\lambda}{6}s^3+\frac{\lambda}{2}s$.
By considering the logarithm of $\Phi$, we see that near infinity the infinite product in (\ref{eqn:fi}) is of the form $\sqrt{\frac{\lambda}{3}}(1+O(1/s))$. Hence $\Phi(s)=\sqrt{\frac{\lambda}{3}}(-\frac{\lambda}{6}s^3+\frac{\lambda}{2}s)(1+O(1/s))$.
Since $\Phi^{-1}(\infty)=\infty$, this means that the mapping degree of $\Phi$ is 3.
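For the reader's convenience, here is the computation behind the constant $\sqrt{\lambda/3}$ (the uniformity in $n$ needed to justify exchanging limits is exactly what the logarithm argument provides): as $s\rightarrow \infty$, each factor in (\ref{eqn:fi}) satisfies $\beta_n(P^s(s),s)=\frac{\lambda}{3}(1+O(1/s))$, so
$$\prod_{n=1}^{\infty} \beta_n(P^s(s),s)^{3^{-n}}=\prod_{n=1}^{\infty} \left ( \frac{\lambda}{3} \right ) ^{3^{-n}}(1+O(1/s))= \left ( \frac{\lambda}{3} \right ) ^{1/2}(1+O(1/s)),$$
since $\sum_{n\geq 1}3^{-n}=\frac{1}{2}$. Combined with $P^s(s)=-\frac{\lambda}{6}s^3+\frac{\lambda}{2}s$, the leading term of $\Phi$ near infinity is $-\sqrt{\lambda/3}\ \frac{\lambda}{6}s^3$, which has a pole of order $3$ at $\infty$.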
In particular, $\infty$ is a double critical point of $\Phi$. By the Riemann-Hurwitz formula, the Euler characteristic of $\hat{\Omega}_{ext}$ is equal to $-n$, where $n$ is the number of critical points of $\Phi$ in $\hat{\Omega}_{ext}$, counted with multiplicity. Therefore,
$\hat{\Omega}_{ext}$ is homeomorphic to a punctured open disk if and only if $\Phi$ has no critical point in $\hat{\Omega}_{ext}$. In what follows, we prove that $\Phi$ is locally injective in $\hat{\Omega}_{ext}$. Since $\Phi$ is also holomorphic, this will prove that there are no critical points other than $\infty$.
So assume $\Phi(s_1)=\Phi(s_2)=w$ for some $s_1,s_2\in \hat{\Omega}_{ext}$.
To simplify the notation, put $P_1=P^{s_1}, P_2=P^{s_2}$. Let $v_1=P_1(s_1)$ and $v_2=P_2(s_2)$ be the critical values and $a_1$ and $a_2$
be the {\it co-critical} points, i.e., $a_i\neq s_i$ and $P_i(a_i)=v_i$ for
$i=1,2$. Finally, let $\gamma_1$ and $\gamma_2$ be the equipotentials of the corresponding B\"{o}ttcher maps $\beta_1$ and $\beta_2$ which pass through the critical points $s_1$ and $s_2$ (see \figref{conn}).
\realfig{conn}{sig4.eps}{{\sl Extending conformal conjugacies.}}{10cm}
Define a conformal map $\varphi$ from the outside of $\gamma_1$ to the outside of $\gamma_2$ by $\varphi=\beta_2^{-1}\circ \beta_1$. We would like to extend $\varphi$ to the entire basin of attraction of $\infty$ for $P_1$. Let $u_1, u_2, u_3$ be the cube roots of $w$. Under $\beta_1$, every connected component of $\gamma_1\smallsetminus \{ s_1, a_1 \}$ maps homeomorphically to
one of the 1/3-circles joining $u_1, u_2, u_3$. If $s_1$ is sufficiently close to $s_2$, it follows by continuity that the corresponding components of $\gamma_2\smallsetminus \{ s_2, a_2 \}$ will map homeomorphically to the {\it same} circular segments joining $u_1,u_2,u_3$ (see \figref{conn}). This allows us to extend $\varphi$ to a homeomorphism $\gamma_1\stackrel{\simeq}{\rightarrow}\gamma_2$.
Now it is straightforward to extend $\varphi$ further: The annulus bounded
by $\gamma_1$ and $P_1(\gamma_1)$ has two preimages which are mapped onto it in a 1-to-1 and 2-to-1 fashion. We can extend $\varphi$ to these
preimages by taking pull-backs, i.e., we define $\varphi$ to be
$P_2^{-1} \circ \varphi \circ P_1$, where the boundary values of $\varphi|_{\gamma_1}:\gamma_1\rightarrow \gamma_2$ tell us which branch of $P_2^{-1}$ must be taken. It is not hard to see that this process of taking pull-backs can be continued until $\varphi$ is defined on the entire basin of attraction of $\infty$ for $P_1$. (One formal way to keep track of various preimages of these annuli is to consider the {\it pattern} associated with each cubic as introduced by Branner and Hubbard \cite{Branner-Hubbard}. In their language, $P_1$ and $P_2$ have ``homeomorphic patterns of infinite depth.'') The extension of $\varphi$ defined this way is conformal, since it is a homeomorphism which is holomorphic except on a disjoint countable union of piecewise analytic curves.
By \thmref{ren}, $P_1$ and $P_2$ are both renormalizable. Hence there are quadratic-like restrictions $f=P_1|_U:U\rightarrow V$ and $g=P_2|_{U'}:U'\rightarrow V'$ of both polynomials which are hybrid equivalent to $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$. Note that $K(f)$ and $K(g)$ are just the components of $K(P_1)$ and $K(P_2)$ which contain the Siegel disks $\Delta_{P_1}$ and $\Delta_{P_2}$, respectively. The filled Julia set $K(P_1)$ decomposes as $\bigcup_{n\geq 0}P_1^{-n}(K(f)) \cup G_1$, where $G_1 \subset J(P_1)$ is the uncountable union of trivial components (see \figref{hypext}). Similarly, we have $K(P_2)=\bigcup_{n\geq 0}P_2^{-n}(K(g)) \cup G_2$. Now we are exactly in the situation of \lemref{EF}, with $E=(K(P_1) \smallsetminus K(f)) \cap U$ and $F=(K(P_2) \smallsetminus K(g)) \cap U'$. By \lemref{EF}, $\varphi$ can be extended to $K(f) \stackrel{\simeq}{\longrightarrow} K(g)$ and then to $\bigcup_{n\geq 0}P_1^{-n}(K(f)) \stackrel{\simeq}{\longrightarrow} \bigcup_{n\geq 0}P_2^{-n}(K(g))$ by taking pull-backs. Note that this extension has zero $\overline{\partial}$-derivative on this union by \lemref{EF} and the Bers Sewing Lemma. It is not hard to see that $\varphi$ also extends homeomorphically to $G_1 \stackrel{\simeq}{\longrightarrow} G_2$. Therefore, we obtain a homeomorphism $\varphi: \BBB C \rightarrow \BBB C$ conjugating $P_1$ to $P_2$, which is quasiconformal at least on $\BBB C \smallsetminus J(P_1)$.
We would like to show that $\varphi$ is quasiconformal everywhere. One way to do this proceeds as follows. By \thmref{qcclass} below, $P_1$ and $P_2$ are conjugate by a quasiconformal homeomorphism $\psi: \BBB C \rightarrow \BBB C$. The proof of \lemref{EF} shows that $\psi^{-1}\circ \varphi$ is the identity map on $\partial K(f)$. It easily follows that $\varphi=\psi$ on the entire Julia set $J(P_1)$. Therefore, $\varphi$ is identically equal to the homeomorphism $\varphi \amalg \psi$ defined by
$$(\varphi \amalg \psi)(z)= \left \{
\begin{array}{ll}
\varphi(z) & z\in \BBB C \smallsetminus J(P_1) \\
\psi(z) & z\in J(P_1)
\end{array}
\right. $$
which is quasiconformal by the Bers Sewing Lemma.
Finally, we show that $\varphi$ is in fact a conformal homeomorphism. Just as in the proof of \corref{measure}, up to a set of measure zero, $K(P_1)=\bigcup_{n\geq 0}P_1^{-n}(K(f))$. Hence the measure of $G_1$ is zero. It follows that $\varphi$ is conformal on $\BBB C\smallsetminus K(P_1)$ and has zero $\overline{\partial}$-derivative almost everywhere on $K(P_1)$. Hence $\varphi$ is conformal everywhere, which means $P_1=P_2$.
\end{pf}
\vspace{0.17in}
\section{Continuity of the Surgery Map}
\label{sec:continuity}
This section is devoted to the proof of continuity of the surgery map $\cal S$. This is by no means trivial, and in fact, as we will see, depends strongly on the cubic parameter space being one-dimensional. The fact that the cubics on the boundary of the connectedness locus ${\cal M}_3(\theta)$ are quasiconformally rigid is the most crucial step in the proof, and it is exactly this fact which makes the generalization of this work to higher degrees difficult. We would like to point out that the situation is similar to Douady-Hubbard's proof of the continuity of the ``straightening map'' in their study of the space of quadratic-like maps \cite{Douady-Hubbard2}. One additional difficulty here is the lack of complete information on quasiconformal conjugacy classes in the non-holomorphic family ${\cal B}_5^{cm}(\theta)$ (the analogue of \thmref{qcclass}; see however \thmref{qcpath}).
The idea of the proof is as follows: Given a sequence $B_n=B_{\mu_n}\in {\cal B}_5^{cm}(\theta)$ such that $B_n\rightarrow B=B_{\mu}$, we prove that there exists a subsequence $\{ B_{n(j)} \}$ such that ${\cal S}(B_{n(j)})\rightarrow {\cal S}(B)$ in ${\cal P}_3^{cm}(\theta)$. The topology of the parameter space ${\cal P}_3^{cm}(\theta)$ is the uniform topology which respects the marking of the critical points. The same is true for ${\cal B}_5^{cm}(\theta)$ with one exception (compare Section \ref{sec:surgery}): If $\mu$ has absolute value $1$, i.e., if $B$ has two double critical points on the unit circle, then $B_n\rightarrow B$ means that every subsequence of $\{ B_n \}$ has a further subsequence which either converges to $B$ or to its conjugate $R_{\mu}^{-1}\circ B\circ R_{\mu}$. From the construction of ${\cal S}$ it is easy to see that ${\cal S}(B)={\cal S}(R_{\mu}^{-1}\circ B\circ R_{\mu})$. Therefore, in order to prove continuity of ${\cal S}$, all we have to show is that $B_n\rightarrow B$ locally uniformly on $\BBB C$ (respecting the convergence of the marked critical points) implies
that for some subsequence $\{ B_{n(j)} \}$, ${\cal S}(B_{n(j)})\rightarrow {\cal S}(B)$ locally uniformly on $\BBB C$ (again, respecting the convergence of the marked critical points).
So consider the sequence $\{ B_n|_{\BBB T} \}$ and let $h_n$ and $h$ be the unique $k(\theta)$-quasisymmetric homeomorphisms
which fix $z=1$ and conjugate $ B_n|_{\BBB T}$ and $B|_{\BBB T}$ to the rigid rotation $R_{\theta}$. It is easy to see that $h_n\rightarrow h$ uniformly
on $\BBB T$. Consider the Douady-Earle extensions $H_n$ and $H$, which are $K(\theta)$-quasiconformal homeomorphisms of the unit disk. By the construction of these extensions, $H_n$ and $H$ are real-analytic in $\BBB D$ and $H_n\rightarrow H$ locally uniformly in $C^{\infty}$ topology \cite{Douady-Earle}. In particular, the partial derivatives $\partial H_n$ and $\overline{\partial}H_n$ converge locally uniformly in $\BBB D$ to the corresponding derivatives $\partial H$ and $\overline{\partial}H$. This shows that $\sigma_n|_{\BBB D}\rightarrow \sigma|_{\BBB D}$ locally uniformly, where $\sigma_n$ and $\sigma$ are the conformal structures we constructed in the course of surgery for
$B_n$ and $B$ (see Section \ref{sec:surgery}).
At this point, the main problem is to prove that $B_n\rightarrow B$
and $\sigma_n|_{\BBB D}\rightarrow \sigma|_{\BBB D}$ imply $\sigma_n\rightarrow \sigma$ in the $L^1$-norm on $\BBB C$, for this would show
that the normalized solutions $\varphi_n=\varphi_{H_n}$ of the Beltrami equations $\varphi_n^{\ast} \sigma_0=\sigma_n$ converge locally uniformly on $\BBB C$ to the normalized solution $\varphi$ of the equation $\varphi^{\ast} \sigma_0=\sigma$. This would simply mean that ${\cal S}(B_n)\rightarrow {\cal S}(B)$ as $n\rightarrow \infty$.
Unfortunately, we cannot prove $\sigma_n\rightarrow \sigma$ in $L^1(\BBB C)$ in all cases. So, following \cite{Douady-Hubbard2}, we take a slightly different approach by splitting the argument into two cases depending on whether ${\cal S}(B)$ is quasiconformally rigid or not. In the first case, we show continuity directly using the rigidity. In the latter case, however, we prove $\varphi_n\rightarrow \varphi$ using the fact that ${\cal S}(B)$ admits nontrivial deformations.
\begin{thm}
\label{contin}
The surgery map $\cal S : {\cal B}_5^{cm}(\theta) \rightarrow {\cal P}_3^{cm}(\theta)$ is continuous.
\end{thm}
\begin{pf}
Consider $B_n,B\in {\cal B}_5^{cm}(\theta)$ and start with the same construction as above to get a sequence $\{ \sigma_n\}$ of conformal structures on the plane with uniformly bounded dilatation and the corresponding sequence $\{ \varphi_n \}$
of normalized solutions of $\varphi_n^{\ast} \sigma_0=\sigma_n$. Since $\{ \varphi_n \}$ is a normal family by \corref{normal}, it has a subsequence, still denoted by $\{ \varphi_n \}$, which converges locally uniformly to a quasiconformal homeomorphism $\psi:\BBB C\rightarrow \BBB C$.
Set $P_n=\varphi_n\circ \tilde{B}_n\circ \varphi_n^{-1}={\cal S}(B_n)$, $P=\varphi\circ \tilde{B}\circ
\varphi^{-1}={\cal S}(B)$, and $Q=\psi \circ \tilde{B} \circ \psi^{-1}$.
All these maps are cubic polynomials in ${\cal P}_3^{cm}(\theta)$. Also $P$ is quasiconformally conjugate to $Q$, and $P_n\rightarrow Q$ as $n \rightarrow \infty$. We will show that $P=Q$ and this will prove continuity at $B$.
For the rest of the argument, we distinguish two cases: If $P={\cal S}(B)$ is quasiconformally rigid, then automatically $P=Q$ and we are done. (By \thmref{qcclass} this case corresponds to the points on the boundary of ${\cal M}_3(\theta)$ or the centers of hyperbolic-like or capture components.) Otherwise, $P$ is not rigid, so the quasiconformal conjugacy class of $P$ is a nonempty open set $U\subset {\cal P}_3^{cm}(\theta)$ by \corref{qcopen}. Assume by way of contradiction that $P\neq Q$. Since $P_n\rightarrow Q$ as $n\rightarrow \infty$, $P_n\in U$ for large $n$. Hence $P_n$ is quasiconformally conjugate to $P$ for large $n$, i.e., there exists a normalized quasiconformal homeomorphism $\eta_n:\BBB C\rightarrow \BBB C$ such that $\eta_n\circ P=P_n \circ \eta_n$. Observe that the dilatation of $\eta_n$ is uniformly bounded, since by \thmref{qcpar} the dilatation of $(\psi\circ \varphi^{-1})\circ \eta_n^{-1}$ goes to $1$ as $n$ goes to $\infty$ (see \figref{arrow}).
By ``lifting'' $\eta_n$, we can find a quasiconformal conjugacy $\xi_n=\varphi_n^{-1}\circ \eta_n \circ \varphi$ between the modified Blaschke products $\tilde B$ and $\tilde {B}_n$, i.e.,
\begin{equation}
\label{eqn:lift}
\xi_n\circ \tilde B=\tilde {B}_n \circ \xi_n.
\end{equation}
Again, note that the dilatation of $\xi_n$ is uniformly bounded.
\realfig{arrow}{sig5.eps}{{\sl Sketch of the proof of continuity of $\cal S$.}}{7 cm}
We prove that the sequence of conformal structures $\{ \sigma_n \} $ converges
in $L^1(\BBB C)$ to $\sigma$. This, by a standard theorem on quasiconformal mappings (see for example \cite{Lehto}, Theorem 4.6), will show that $\varphi_n\rightarrow \varphi$ locally uniformly,
hence $P_n\rightarrow P$, hence $P=Q$, which contradicts our assumption.
To this end, we introduce the following sequences of conformal structures (where, as usual, we identify a conformal structure with its associated Beltrami differential):
$$\sigma_n^k(z)= \left \{
\begin{array}{ll}
\sigma_n(z) & \mbox{when}\ z\in \bigcup_{i=0}^k \tilde{B}_n^{-i}(\BBB D)\\
0 & \mbox{otherwise}
\end{array}
\right. $$
and
$$\sigma^k(z)= \left \{
\begin{array}{ll}
\sigma(z) & \mbox{when}\ z\in \bigcup_{i=0}^k \tilde{B}^{-i}(\BBB D)\\
0 & \mbox{otherwise}
\end{array}
\right. $$
Note that $\sigma^k\rightarrow \sigma $ in $L^1(\BBB C)$ as $k\rightarrow \infty$ and for every fixed $k$,
$\sigma_n^k\rightarrow \sigma^k$ in $L^1(\BBB C)$ as $n\rightarrow \infty$.
\begin{lem}
\label{area}
The $L^1$-norm $\| \sigma_n -\sigma \|_1$ goes to zero as
$n\rightarrow \infty$ if the area of the open set $\bigcup_{i=k}^{\infty}\tilde{B}_n^{-i}(\BBB D)$ goes to zero uniformly in $n$
as $k\rightarrow \infty$.
\end{lem}
\begin{pf}
For a given $\epsilon >0$, take $k_0$ so large that $k>k_0$
implies $area(\bigcup_{i=k}^{\infty}\tilde{B}_n^{-i}(\BBB D))<\epsilon $ for
all $n$. Then for a fixed large $k>k_0$ and $n$ large enough,
$$\begin{array}{rl}
\| \sigma_n -\sigma \|_1 \leq & \| \sigma_n -\sigma_n^k \|_1 + \| \sigma_n^k -\sigma^k \|_1 + \| \sigma^k -\sigma \|_1 \\
\leq & \| \sigma_n -\sigma_n^k \|_1 +2\epsilon \\
= & \displaystyle{\int}_{\bigcup_{i=k+1}^{\infty} \tilde{B}_n^{-i}(\BBB D)}|\sigma_n -\sigma_n^k |\ dxdy + 2\epsilon \\
< & 4\epsilon.
\end{array}$$
This completes the proof of the lemma.
\end{pf}
So it remains to prove that the area of $\bigcup_{i=k}^{\infty}\tilde{B}_n^{-i}(\BBB D)$ goes to zero uniformly in $n$
as $k\rightarrow \infty$. Clearly $area(\bigcup_{i=k}^{\infty}\tilde{B}^{-i}(\BBB D))\rightarrow 0$ as $k\rightarrow \infty$. Since $\{ \xi_n \}$ is uniformly quasiconformal, there
is a constant $C>0$ such that
$$C^{-1}\ area(E) \leq area(\xi_n(E))\leq C\ area(E)$$
for any measurable set $E$. By (\ref{eqn:lift}),
$$\bigcup_{i=k}^{\infty}\tilde{B}_n^{-i}(\BBB D)=\xi_n(\bigcup_{i=k}^{\infty}\tilde{B}^{-i}(\BBB D)),$$
so $area(\bigcup_{i=k}^{\infty}\tilde{B}_n^{-i}(\BBB D))\leq C\ area(\bigcup_{i=k}^{\infty}\tilde{B}^{-i}(\BBB D))$ and this proves that the left side goes to zero uniformly in $n$.
\end{pf}
\vspace{0.17in}
\section{Critical Parametrization of Blaschke Products}
\label{sec:blaschke}
This section is the beginning of a digression in the study of cubic Siegel polynomials. We look at a somewhat different class of maps, namely certain Blaschke products which will serve as models for the cubics in ${\cal P}_3^{cm}(\theta)$. We will introduce these model maps in Section \ref{sec:blapar} and return to their relation with the cubics in Section \ref{sec:surgery}.
Let us consider the following space of degree $5$ normalized Blaschke products:
\begin{equation}
\label{eqn:blas}
\hat{ \cal B} =\{ B:z\mapsto \tau z^3 \left ( \frac{z-p}{1-\overline{p}z} \right )
\left ( \frac{z-q}{1-\overline{q}z} \right ) : B(1)=1\ \mbox{and}\ |p|>1, |q|>1 \},
\end{equation}
where the rotation factor $\tau \in \BBB T$ is chosen so as to achieve the normalization $B(1)=1$.
Each $B\in \hat{ \cal B}$ has superattracting fixed points at $0$ and $\infty$
and four other critical points counted with multiplicity. We are interested in
the open subset $\cal B \subset \hat{ \cal B}$ of those normalized Blaschke
products of the form (\ref{eqn:blas}) whose four critical points other than $0$ and $\infty$ are of the form
$$c_1,\ c_2,\ \frac{1}{\overline c_1},\ \frac{1}{\overline c_2}$$
with $|c_1|>1, |c_2|>1$. Our goal is to parametrize elements of $\cal B$ by their critical points $c_1$ and $c_2$. The following theorem provides this ``critical parametrization'' for $\cal B$:
\begin{thm}[Critical Parametrization]
\label{critpar}
Let $c_1$ and $c_2$ be two points outside the closed unit disk in the complex plane. Then there exists a unique normalized Blaschke product $B\in \cal B$ whose critical points are located at $0, \infty , c_1, c_2, \displaystyle{ \frac{1}{\overline c_1}, \frac{1}{\overline c_2} } $.
\end{thm}
The proof of this theorem will be given after the following two supporting lemmas. We remark that a direct proof of this fact, one which would generalize to higher degrees, is desirable, but we have not been able to find such a proof. (Compare the similar situation in \cite{Zakeri}, where a conceptual proof is possible.)
The space $\hat{ \cal B}$ of all Blaschke products of the form (\ref{eqn:blas}) can be identified with the set of all unordered pairs $\{ p, q \}$ of points outside
the closed unit disk. This is canonically homeomorphic to the symmetric product of two copies of the punctured plane. The latter can be identified with the space of all degree $2$ monic polynomials
$$w\mapsto (w-w_1)(w-w_2)=w^2-(w_1+w_2)w+w_1w_2$$
with $w_1w_2\neq 0$. It follows that $\hat{\cal B}$ is homeomorphic to $\BBB C
\times \BBB C^{\ast}$. In particular, it is an open topological manifold of
real dimension $4$.
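One explicit choice of identifications (any homeomorphisms would do) is the following: the map $z\mapsto (|z|-1)z/|z|$ takes the exterior $\{ |z|>1 \}$ homeomorphically onto the punctured plane, and an unordered pair $\{ w_1,w_2 \}$ of points of $\BBB C^{\ast}$ corresponds to the monic polynomial with those roots, i.e., to the coefficient pair
$$\{ w_1,w_2 \} \longleftrightarrow \left ( -(w_1+w_2),\ w_1w_2 \right ) \in \BBB C \times \BBB C^{\ast}.$$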
In the same way, we may consider the space $\cal C$ of all unordered pairs
$\{ c_1, c_2\} $ of points outside the closed unit disk, which has a completely similar description.
We consider the continuous map
$$\Psi :\cal B \rightarrow \cal C$$
which sends a normalized Blaschke product $B \simeq \{ p,q \}$ with critical points $\{ 0, \infty , c_1, c_2, \frac{1}{\overline c_1}, \frac{1}{\overline c_2} \}$ to the unordered pair $\{ c_1, c_2\}$.
\begin{lem}
\label{psiprop}
$\Psi$ is a proper map.
\end{lem}
\begin{pf}
Let $B_n \simeq \{ p_n, q_n\}$ be a sequence of normalized Blaschke products in $\cal B$ which leaves every compact subset of $\cal B$. Then either
\begin{enumerate}
\item[$\bullet$]
Some critical point of $B_n$ accumulates on the unit circle, or
\item[$\bullet$]
After relabeling, $p_n$ goes to $\infty$, or
\item[$\bullet$]
After relabeling, $p_n$ accumulates on the unit circle.
\end{enumerate}
In the first two cases, it is easy to see that $\Psi (B_n)$ leaves every compact subset of $\cal C$. In the third case, there is a subsequence of $B_n$ which converges locally uniformly to a Blaschke product of degree $<5$. It follows that the corresponding subsequence of $\Psi (B_n)$ has to leave every compact subset of $\cal C$.
\end{pf}
\begin{lem}
\label{psiinj}
$\Psi$ is injective.
\end{lem}
\begin{pf}
Let $A$ and $B$ be two normalized Blaschke products in $\cal B$ with the same critical points $\{ 0, \infty , c_1, c_2, \displaystyle{ \frac{1}{\overline c_1}, \frac{1}{\overline c_2} } \}$. Let
$$ A:z\mapsto \tau_A z^3 \left ( \frac{z-p_1}{1-\overline{p_1}z} \right )
\left ( \frac{z-q_1}{1-\overline{q_1}z} \right ), $$
$$ B:z\mapsto \tau_B z^3 \left ( \frac{z-p_2}{1-\overline{p_2}z} \right )
\left ( \frac{z-q_2}{1-\overline{q_2}z} \right ), $$
and assume by way of contradiction that $p_1\neq p_2$ and $p_1\neq q_2$.
Consider the rational function
$$R(z)=\frac{A(z)}{B(z)}.$$
Clearly $\deg R=4$ and hence $R$ has $6$ critical points counted with multiplicity. We have
$$A'(z)=(\mbox{const.})\frac{z^2\ \prod (z-c_j)(1-\overline{c}_jz)}{(1-\overline{p}_1z)^2
(1-\overline{q}_1z)^2}\ \ ,\ \ B'(z)=(\mbox{const.})\frac{z^2\ \prod (z-c_j)(1-\overline{c}_jz)}{(1-\overline{p}_2z)^2 (1-\overline{q}_2z)^2}$$
from which it follows that
$$R'(z)=(\mbox{const.})\frac{1}{z}\ \prod (z-c_j)(1-\overline{c}_jz)\left \{ \frac{
\sum (-1)^j(z-p_j)(z-q_j)(1-\overline{p}_jz)(1-\overline{q}_jz)}{(z-p_2)^2 (z-q_2)^2(1-\overline{p}_1z)^2(1-\overline{q}_1z)^2}\right \}.$$
(Note that all the sums and products are taken over $j=1,2$.)
From the above expression, $R$ already has $4$ critical points at the $c_j$
and $1/\overline{c_j}$. So the rational function in the braces should have exactly
$2$ roots. Since this fraction is irreducible (by our assumption $p_1\neq
p_2$ and $p_1\neq q_2$), the numerator should have degree $2$. But that implies
$$p_1q_1=p_2q_2,$$
$$\overline{p}_1(1+|q_1|^2)+\overline{q}_1(1+|p_1|^2)=\overline{p}_2(1+|q_2|^2)+\overline{q}_2(1+|p_2|^2)$$
from which it follows that $p_1=p_2$ or $p_1=q_2$, hence $q_1=q_2$ or $q_1=p_2$,
which contradicts our assumption.
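To spell out the last implication: write $P=p_1q_1=p_2q_2$ and $s_j=p_j+q_j$. Since $\overline{p}_j|q_j|^2=\overline{p}_j\overline{q}_jq_j$ and $\overline{q}_j|p_j|^2=\overline{p}_j\overline{q}_jp_j$, the second relation can be rewritten as
$$\overline{s_1}+\overline{P}s_1=\overline{s_2}+\overline{P}s_2, \ \ \mbox{i.e.,}\ \ \overline{s_1-s_2}=-\overline{P}(s_1-s_2).$$
If $s_1\neq s_2$, taking absolute values would give $|P|=|p_1q_1|=1$, which is impossible since $|p_1|>1$ and $|q_1|>1$. Hence $s_1=s_2$, and together with $p_1q_1=p_2q_2$ this shows $\{ p_1,q_1 \} = \{ p_2,q_2 \}$.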
\end{pf}
\noindent
{\it Proof of the Theorem (Critical Parametrization).} By \lemref{psiprop} and \lemref{psiinj},
$\Psi $ is a covering map of degree $1$. Hence, it is a homeomorphism $\cal B \stackrel{\simeq}{\longrightarrow} \cal C$. $\Box$\\
In particular, the theorem shows that $\cal B$ is also homeomorphic to the product $\BBB C \times \BBB C^{\ast}$.
\begin{cor}
\label{critpar2}
Given any two points $c_1$ and $c_2$ in the plane,
with $|c_1|\geq 1$ and $|c_2|\geq 1$, there exists a unique normalized Blaschke product $B$ in the closure $\overline{\cal B}$ with critical points $\{ 0, \infty , c_1, c_2, \displaystyle{\frac{1}{\overline c_1}, \frac{1}{\overline c_2}}\}$.
\end{cor}
In other words, critical parametrization is possible even if one or both
critical points $c_1,c_2$ belong to the unit circle.
\begin{pf}
Take a sequence $\{ c_1^n, c_2^n \}$ of pairs of points outside the closed unit disk such that $c_1^n \rightarrow c_1$ and $c_2^n \rightarrow c_2$
as $n\rightarrow \infty$. The corresponding sequence $\Psi ^{-1} (\{ c_1^n, c_2^n \} )$ of normalized Blaschke products has a subsequence which converges to a normalized Blaschke product which, by continuity of $\Psi$, has critical points at $\{ 0, \infty , c_1, c_2, \displaystyle{\frac{1}{\overline c_1}, \frac{1}{\overline c_2} } \} $.
To see uniqueness, it is enough to note that the proof of \lemref{psiinj} can be repeated word for word even if we assume $|c_1|=1$ or $|c_2|=1$.
\end{pf}
\begin{prop}
\label{realhom}
Every $B\in {\cal B}$ induces a real-analytic diffeomorphism of the unit circle. Consequently, if $B\in \overline{\cal B}\smallsetminus \cal B$, the restriction of $B$ to the unit circle will be a real-analytic homeomorphism with one (or two) critical point(s).
\end{prop}
\begin{pf}
Let us consider $B\in \cal B$ as in (\ref{eqn:blas}) which has critical points at $0, \infty , c_1, c_2, \displaystyle{ \frac{1}{\overline c_1}, \frac{1}{\overline c_2} }$, with
$|c_1|>1$ and $|c_2|>1$ and let us prove that $B|_{\BBB T}$ is a real-analytic
diffeomorphism. Since $B|_{\BBB T}$ has no critical points, it is a local
diffeomorphism, hence a covering map of some degree $d\leq 5$. We will prove that $d=1$.
$B$ induces a branched covering from every connected component $D$ of $B^{-1}(\BBB D)$ to $\BBB D$. Let $D$ be any such component other than the one
that contains the origin and whose boundary is $\BBB T$. Then $\partial D \cap \BBB T=\emptyset $, since otherwise
every point in the intersection would be a critical point of $B$. Since
$p,q\in B^{-1}(\BBB D)$, either
\begin{enumerate}
\item[(i)]
There are two components $D_1$ and $D_2$ of $B^{-1}(\BBB D)$ with $p\in D_1$
and $q\in D_2$ such that $B:D_j\rightarrow \BBB D$ is a conformal isomorphism
for $j=1,2$; or
\item[(ii)]
Both $p$ and $q$ belong to the same component $D$ of $B^{-1}(\BBB D)$ and $B:D\rightarrow \BBB D$ is a 2-to-1 branched covering.
\end{enumerate}
By the Maximum Principle and the fact that all poles of $B$ are inside $\BBB D$, these components have to be topological disks with piecewise analytic boundaries. It follows that in either case (i) or (ii) the boundaries of the corresponding components give two preimages of $\BBB T$, counted with multiplicity. Since $B^{-1}(\BBB T)$ is symmetric with respect to the unit circle, we get a total of $4$ preimages of $\BBB T$ other than $\BBB T$ itself. Clearly this means that the degree of $B|_{\BBB T}$ is $1$.
Now let us assume that $B\in \overline{\cal B}\smallsetminus \cal B$. Then there exists a sequence $B_n\in \cal B$ which converges locally uniformly to $B$. By the first part of the proof, each $B_n|_{\BBB T}$ is a degree-one diffeomorphism, so the limit $B|_{\BBB T}$ is a monotone map of degree one. Since $B$ is real-analytic and non-constant on $\BBB T$, with at least one double critical point there, it follows that $B|_{\BBB T}$ is a real-analytic homeomorphism.
\end{pf}
\vspace{0.17in}
\section{A Cubic Parameter Space}
\label{sec:cubpar}
We would like to parametrize the space of all cubic polynomials which have a fixed Siegel disk of multiplier $\lambda=e^{2 \pi i \theta}$ centered at the origin, where $0< \theta <1$ is an irrational number of Brjuno type. By the theorem of Brjuno-Yoccoz, every holomorphic germ $w\mapsto e^{2 \pi i \theta} w+O(w^2) $ with $\theta $ of Brjuno type is holomorphically linearizable near 0 \cite{Yoccoz}. Therefore, any such cubic polynomial has to be of the form
$$w \mapsto \lambda w+a_2w^2+a_3w^3,$$
where $(a_2,a_3)\in \BBB C \times \BBB C^{\ast}$. We can mark the critical points of this polynomial by assuming that they
are located at the points $c$ and $1$ with $c\neq 0$. In fact, one can conjugate the
above cubic by the linear map $w\mapsto z=\alpha w$, and the new cubic in the $z$-plane will have the form
$$z\mapsto \lambda z+\frac{a_2}{\alpha}z^2+\frac{a_3}{\alpha ^2}z^3.$$
It is easy to see that a critical point of this map is located at $1$
if we choose $\alpha$ to be any root of the equation $\lambda \alpha^2 +2 a_2 \alpha +3 a_3=0$. In this case, the other critical point $c$ will satisfy
$$c=\frac{\lambda \alpha ^2}{3a_3}$$
so that the map gets the form
\begin{equation}
\label{eqn:normform}
P_c:z\mapsto \lambda z \left ( 1-\frac{1}{2}(1+\frac{1}{c})z+\frac{1}{3c}z^2
\right )
\end{equation}
with $c\in \BBB C^{\ast}$.
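As a quick numerical sanity check of this computation (illustrative only, not part of the argument), one can verify that the normal form (\ref{eqn:normform}) has its critical points exactly at $1$ and $c$; the sample values of $\theta$ and $c$ below are arbitrary.

```python
import cmath

def P_deriv(z, c, theta):
    """Derivative of P_c(z) = lam*z*(1 - (1/2)*(1 + 1/c)*z + z**2/(3*c)),
    which expands to lam*(1 - (1 + 1/c)*z + z**2/c)."""
    lam = cmath.exp(2j * cmath.pi * theta)
    return lam * (1 - (1 + 1/c) * z + z**2 / c)

theta = (5**0.5 - 1) / 2                  # golden mean rotation number
for c in (0.3 + 0.4j, 2.0 + 0j, -1.5j):   # arbitrary parameters in C*
    assert abs(P_deriv(1, c, theta)) < 1e-12   # critical point at z = 1
    assert abs(P_deriv(c, c, theta)) < 1e-12   # critical point at z = c
```

Indeed, $P_c'(z)=\frac{\lambda}{c}(z-1)(z-c)$, so the assertions amount to checking this factorization numerically.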
We denote the space of all critically marked cubic polynomials of the
form (\ref{eqn:normform}) by ${\cal P}_3^{cm}(\theta)$. In other words, ${\cal P}_3^{cm}(\theta)\simeq \BBB C^{\ast}$
is parametrized by the invariant $c$. By an abuse of notation, we often identify the cubic $P_c$ with the parameter $c$. Note that $P_c$ and $P_{1/c}$ are affinely conjugate as maps, but certainly their critical points have different marking. Hence they will be regarded as distinct elements of ${\cal P}_3^{cm}(\theta)$.
In the topology of ${\cal P}_3^{cm}(\theta)$, a sequence $P_n$ converges to some $P$ if there exist
$c_n,c\in \BBB C^{\ast}$, with $P_n=P_{c_n}$ and $P=P_c$, such that $c_n \rightarrow c$ as $n\rightarrow \infty$. In other words, the topology is given by uniform convergence of cubics on compact sets respecting the convergence of the marked critical points. \\ \\
{\bf Notation and Terminology.} Throughout this paper, the Siegel disk of the cubic $P_c$ centered at the origin is denoted by $\Delta_c$. When we do not want to emphasize the dependence on $c$, we denote the Siegel disk of a cubic $P$ by $\Delta_P$. By the {\it grand orbit} $GO(\Delta_P)$ we mean the set of all points in the plane which eventually map to the Siegel disk under the iteration of $P$. In other words,
$$GO(\Delta_P)=\bigcup_{k\geq 0} P^{-k}(\Delta_P).$$
{\bf Remark.} From classical Fatou-Julia theory (\cite{Milnor1}, Corollary 11.4), we know that every point on the boundary of the Siegel disk $\Delta_c$ must be in the closure of the orbit of either $c$ or $1$. According to Herman \cite{Herman1}, $P_c|_{\partial \Delta_c}$
has a dense orbit. It follows that the orbit of either $c$ or $1$ must accumulate on the entire boundary $\partial \Delta_c$.\\
The ``size'' of the Siegel disk $\Delta_c$ can be measured by the following invariant:\\ \\
{\bf Definition (Conformal Capacity).} Consider the Siegel disk $\Delta_c$ for $c \in {\BBB C}^{\ast}$ and the unique linearizing map $h_c:{\BBB D}(0,r_c) \stackrel{\simeq}{\longrightarrow} \Delta_c$, with $h_c(0)=0$ and $h_c'(0)=1$. The radius $r_c>0$ of the domain of $h_c$ is called the {\it conformal capacity} of $\Delta_c$ and is denoted by $\kappa (\Delta_c)$. \\ \\
Alternatively, $\kappa (\Delta_c)$ can be described as the derivative $\varphi_c'(0)$ of the unique linearizing map $\varphi_c: {\BBB D} \stackrel{\simeq}{\longrightarrow} \Delta_c$ normalized by $\varphi_c(0)=0$ and $\varphi_c'(0)>0$. Naturally, one is interested in the behavior of the function $c \mapsto \kappa (\Delta_c)$. The following lemma gives a basic result in this direction (compare \cite{Yoccoz}):
\begin{lem}
\label{upper}
The conformal capacity function $c \mapsto \kappa (\Delta_c)$ is upper semicontinuous.
\end{lem}
\begin{pf}
\comm{Consider the power series expansion of $h_c$ near $z=0$ for $c \in {\BBB C}^{\ast}$:
$$h_c(z)=z+a_2(c)z^2+a_3(c)z^3+ \cdots$$
Since $h_c(\lambda z)=P_c(h_c(z))$ for $z$ sufficiently close to zero and $P_c$ depends holomorphically in $c$, one can explicitly calculate the coefficients $a_j(c)$ to conclude that for every $j$, $c\mapsto a_j(c)$ is holomorphic. We claim that the conformal capacity $\kappa (\Delta_c)$ is equal to the radius of convergence of the above power series. If not, there exists an $r>\kappa (\Delta_c)$ such that $h_c$ can be continued analytically on the disk ${\BBB D}(0,r)$. The conjugacy relation $h_c(\lambda z)=P_c(h_c(z))$ persists over this larger disk. This implies that $P_c$ has bounded orbits on an open neighborhood of the closed disk $\overline{\Delta}_c$, which is a contradiction. We conclude that
$$\kappa (\Delta_c)=\frac{1}{\limsup_{j \rightarrow \infty} |a_j(c)|^{1/j}},$$
or
$$\log (\kappa (\Delta_c))= - \limsup_{j \rightarrow \infty} \frac{\log |a_j(c)|}{j}.$$
Since $\log|a_j(c)|$ is harmonic, ... }
Let $c_n \rightarrow c$ and $\kappa(\Delta_{c_n}) \geq r$. We would like to prove that $\kappa(\Delta_{c}) \geq r$ as well. The sequence of normalized univalent maps $h_{c_n}: {\BBB D}(0,\kappa(\Delta_{c_n})) \rightarrow {\BBB C}$ is normal on ${\BBB D}(0,r)$, so we may assume that a subsequence converges locally uniformly to a univalent function $h:{\BBB D}(0,r) \rightarrow {\BBB C}$. Passing to the limit in the relations $h_{c_n}(\lambda z)=P_{c_n}(h_{c_n}(z))$ gives $h(\lambda z)=P_c(h(z))$, so $h({\BBB D}(0,r))$ must be contained in the Siegel disk $\Delta_c$. Hence $\kappa(\Delta_{c}) \geq r$.
\end{pf}
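For concreteness, the power-series coefficients of the linearizing map $h_c$ can be generated directly from the functional equation $h_c(\lambda z)=P_c(h_c(z))$: writing $P_c(w)=\lambda w+b_2w^2+b_3w^3$ in expanded form and comparing $z^j$ coefficients expresses $(\lambda^j-\lambda)a_j$ in terms of $a_2,\dots,a_{j-1}$, and $\lambda^j\neq\lambda$ because $\theta$ is irrational. The following sketch (illustrative only; the truncation order and test parameters are arbitrary) implements this recursion.

```python
import cmath

def linearization_coeffs(c, theta, N):
    """Coefficients a_1,...,a_N of h(z) = z + a_2 z^2 + ... solving
    h(lam*z) = P_c(h(z)) term by term."""
    lam = cmath.exp(2j * cmath.pi * theta)
    # Expanded form P_c(w) = lam*w + b2*w^2 + b3*w^3 of the normal form:
    b2 = -lam * (1 + 1/c) / 2
    b3 = lam / (3 * c)
    a = [0, 1] + [0] * (N - 1)      # a[j] holds the z^j coefficient; a_1 = 1
    for j in range(2, N + 1):
        # z^j coefficients of h^2 and h^3 involve only a_i with i < j
        h2 = sum(a[i] * a[j - i] for i in range(1, j))
        h3 = sum(a[i] * a[k] * a[j - i - k]
                 for i in range(1, j - 1) for k in range(1, j - i))
        a[j] = (b2 * h2 + b3 * h3) / (lam**j - lam)
    return a
```

Truncating the series gives a numerical approximation of $h_c$ near the origin, and the conjugacy relation can then be checked at small sample points.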
Since the conformal capacity $\kappa (\Delta_c)$ is upper semicontinuous by the above lemma, a priori it can jump to a {\it lower} value by a small perturbation. Intuitively, this means that the size of the Siegel disk $\Delta_c$ can become much smaller by a very small perturbation of the cubic $P_c$. Later we will see that for $\theta$ of bounded type, this cannot happen. In fact, in this case the closed Siegel disk $\overline{\Delta}_c$ is a quasidisk which moves continuously in the Hausdorff topology on compact subsets of the plane (see \thmref{move}). Therefore $\kappa (\Delta_c)$ is actually continuous as a function of $c$. On the other hand, for arbitrary $\theta$ of Brjuno type, I do not know if $c \mapsto \kappa (\Delta_c)$ is continuous. However, we have the following general theorem of Yoccoz \cite{Yoccoz}:
\begin{thm}
\label{size of disk}
Let $0< \theta <1$ be an irrational number of Brjuno type, and set $W(\theta)=\sum_{n=1}^{\infty} (\log q_{n+1})/q_n < \infty$, where $q_n$ denotes the denominator of the $n$-th continued fraction convergent of $\theta$. Let $S(\theta)$ be the space of all univalent functions $f:{\BBB D} \rightarrow {\BBB C}$ with $f(0)=0$ and $f'(0)=e^{2 \pi i \theta}$, with the maximal Siegel disk $\Delta_f \subset {\BBB D}$. Finally, define $\kappa(\theta)=\inf_{f \in S(\theta)} \kappa(\Delta_f)$. Then, there is a universal constant $C>0$ such that $|\log(\kappa(\theta))+W(\theta)|<C$.
\end{thm}
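As a concrete illustration (not needed for the proofs), when $\theta$ is the golden mean $(\sqrt 5 -1)/2=[0;1,1,1,\dots]$ the convergent denominators $q_n$ are Fibonacci numbers, so the series $W(\theta)$ converges geometrically. The sketch below computes its partial sums; the indexing convention $q_1=1$, $q_2=2$ is one standard choice.

```python
import math

def golden_brjuno_partial(n_terms):
    """Partial sums of W(theta) = sum_n log(q_{n+1}) / q_n for the golden
    mean, whose convergent denominators are Fibonacci numbers 1, 2, 3, 5, ..."""
    q, q_next = 1, 2
    total = 0.0
    for _ in range(n_terms):
        total += math.log(q_next) / q
        q, q_next = q_next, q + q_next
    return total
```

Since $q_n$ grows like $\varphi^n$ with $\varphi$ the golden ratio, the terms decay roughly like $n/\varphi^{n}$ and the partial sums stabilize after a few dozen terms.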
As an immediate corollary of the above theorem, we have:
\begin{cor}
\label{lower bound}
In the family $\{ P_c\} $ of cubic polynomials in $($\ref{eqn:normform}$)$, the conformal capacity function $c \mapsto \kappa (\Delta_c)$ is locally bounded away from $0$.
\end{cor}
\noindent
{\bf Definition.} We define the {\it Cubic Connectedness Locus} ${\cal M}_3(\theta)$ as the set of all critically marked cubics $P\in {\cal P}_3^{cm}(\theta)$ whose Julia sets
$J(P)$ are connected. It follows from classical Fatou-Julia theory (\cite{Milnor1}, Theorem 17.3) that $P\in {\cal M}_3(\theta)$ if and only if both critical points of $P$ have bounded orbits. We can formally set
$$\begin{array}{rl}
{\cal M}_3(\theta) & =\{ c \in {\BBB C}^{\ast}: \mbox{The Julia set $J(P_c)$
is connected} \} \\
& = \{ c \in {\BBB C}^{\ast}: \mbox{Both sequences $ \{P_c^{\circ k}(c)\}$ and $\{ P_c^{\circ k}(1)\}$ are bounded} \}.
\end{array}$$
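This characterization is what underlies pictures such as \figref{m3}: iterate both marked critical orbits and test for escape. A naive sketch follows; the escape radius $4.38\max\{|c|,1\}$ is the bound $m_c$ from the proof of \propref{m3com}, outside of which $|P_c(z)|>|z|$, while the iteration cap is arbitrary, so the test is only a heuristic for parameters near the boundary.

```python
import cmath

def in_M3(c, theta, max_iter=200):
    """Heuristic membership test for M_3(theta): iterate both marked
    critical points c and 1 of P_c and report escape past the radius
    m_c = 4.38*max(|c|, 1), beyond which orbits grow monotonically."""
    lam = cmath.exp(2j * cmath.pi * theta)
    P = lambda z: lam * z * (1 - 0.5 * (1 + 1/c) * z + z**2 / (3 * c))
    m_c = 4.38 * max(abs(c), 1.0)
    for z0 in (c, 1.0):
        z = complex(z0)
        for _ in range(max_iter):
            z = P(z)
            if abs(z) > m_c:
                return False        # this critical orbit escapes to infinity
    return True                     # both orbits stayed bounded up to max_iter
```

For parameters well inside or outside ${\cal M}_3(\theta)$ the answer stabilizes quickly; near $\partial {\cal M}_3(\theta)$ it depends on the iteration cap.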
Since $P_c$ and $P_{1/c}$ are affinely conjugate as maps, neglecting the marking of the critical points, ${\cal M}_3(\theta)$ as a subset of the $c$-plane is invariant under the inversion $c\mapsto 1/c$ with respect to the unit circle. \figref{m3} shows the connectedness locus ${\cal M}_3(\theta)$ for the golden mean
$\theta=(\sqrt 5 -1)/2=0.61803399...$ and \figref{m3zoom} shows the details of the same set near the unit circle.
\realfig{m3}{M3.ps}{{\sl The cubic connectedness locus ${\cal M}_3(\theta)$ for the golden mean $\theta=(\sqrt 5 -1)/2$. The circle in white is the unit circle centered at the origin.}}{9cm}
\realfig{m3zoom}{zM3.ps}{{\sl Details of the connectedness locus ${\cal M}_3(\theta)$ near the unit circle centered at the origin.}}{13cm}
\begin{prop}
\label{m3com}
\noindent
\begin{enumerate}
\item[(a)]
${\cal M}_3(\theta)$ is compact and contained in the open annulus $\BBB A(\frac{1}{30},30)$.
\item[(b)]
The complement $\BBB C^{\ast}\smallsetminus {\cal M}_3(\theta) $ has two connected components $\Omega_{ext}$ and $\Omega_{int}$ which are mapped to one another by the inversion $c\mapsto 1/c$.
\end{enumerate}
\end{prop}
\begin{pf}
(a) ${\cal M}_3(\theta)$ is clearly closed. To see that it is bounded, we note that
$$\begin{array}{rl}
|P_c(z)| & = |\frac{1}{3c}z^2-\frac{1}{2}(1+1/c)z+1||z| \\ \\
& \geq ( \frac{1}{|c|} | \frac{1}{3}z-\frac{1}{2}(c+1)||z|-1)|z|.
\end{array}$$
Let
\begin{equation}
\label{eqn:rcbound}
m_c=(4.38)\max \{ |c|, 1 \}.
\end{equation}
If $|z|\geq m_c$, then
$$\begin{array}{rl}
|P_c(z)| & \geq ( \frac{1}{|c|} ( \frac{1}{3}|z|-\frac{1}{4.38}|z|)|z|-1)|z| \\ \\
& \geq (0.46|z|-1) |z|\\ \\
& \geq 1.0148\ |z|,
\end{array}$$
from which it follows that
\begin{equation}
\label{eqn:kcbound}
K(P_c)\subset {\BBB D}(0,m_c),
\end{equation}
where $K(P_c)$ is the filled Julia set of $P_c$. Now let $|c|\geq 30$. Then
$$|P_c(c)|=|\frac{1}{6}c-\frac{1}{2}||c|\geq (4.5)|c|>m_c,$$
which implies $P_c^{\circ k}(c)\rightarrow \infty$ as $k\rightarrow \infty$.
Therefore ${\cal M}_3(\theta) \subset {\BBB D}(0,30)$, hence by symmetry ${\cal M}_3(\theta) \subset \BBB A(\frac{1}{30},30)$.\\
(b) Let $\Omega_{ext}$ be the unbounded connected component of $\BBB C^{\ast} \smallsetminus {\cal M}_3(\theta)$. Since ${\cal M}_3(\theta)$ is invariant under the inversion $c\mapsto 1/c$, there exists a corresponding component $\Omega_{int}$ of the complement of ${\cal M}_3(\theta)$ containing a punctured neighborhood of the origin. By the proof of (a), we have
$$\Omega_{ext}=\{ c\in \BBB C^{\ast}: P_c^{\circ k}(c)\rightarrow \infty\ \mbox{as}\ k \rightarrow \infty \},$$
and similarly
$$\Omega_{int}=\{ c\in \BBB C^{\ast}: P_c^{\circ k}(1)\rightarrow \infty\ \mbox{as}\ k \rightarrow \infty \}.$$
Suppose that there exists a bounded connected component $U$ of $\BBB C^{\ast} \smallsetminus {\cal M}_3(\theta)$ which is not $\Omega_{int}$. Then
$$0<\sup_{c\in {\partial U}}|c|=R< + \infty.$$
If $c\in \partial U$, it follows from (\ref{eqn:kcbound}) that for each $k\geq 0$, $|P_c^{\circ k}(c)|$ and $|P_c^{\circ k}(1)|$ are not greater than $m_c$, and
$$\sup_{c\in \partial U} m_c \leq (4.38)\max \{ R, 1 \} < + \infty.$$
Since $U\neq \Omega_{int}$, we have $\partial U \subset \partial {\cal M}_3(\theta)$ and both $P_c^{\circ k}(c)$ and $P_c^{\circ k}(1)$ are holomorphic in $U$. It follows from the Maximum Principle that the iterates $P_c^{\circ k}(c)$ and
$P_c^{\circ k}(1)$ are uniformly bounded throughout $U$, which is a contradiction.
\end{pf}
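The chain of estimates in part (a) can be spot-checked numerically. The sketch below (illustrative only; the sampling ranges are arbitrary) draws random parameters $c$ and random points with $|z|\geq m_c$ and verifies the expansion $|P_c(z)|\geq 1.0148\,|z|$; a small tolerance absorbs floating-point error, since equality requires several simultaneous alignments in the triangle inequalities.

```python
import cmath
import random

def expansion_holds(trials=2000, seed=0):
    """Check |P_c(z)| >= 1.0148*|z| on random samples with |z| >= m_c,
    where m_c = 4.38*max(|c|, 1) as in the proof above."""
    random.seed(seed)
    theta = (5**0.5 - 1) / 2
    lam = cmath.exp(2j * cmath.pi * theta)
    for _ in range(trials):
        c = cmath.rect(10 ** random.uniform(-1.5, 1.5),
                       random.uniform(0, 2 * cmath.pi))
        m_c = 4.38 * max(abs(c), 1.0)
        z = cmath.rect(m_c * random.uniform(1.0, 3.0),
                       random.uniform(0, 2 * cmath.pi))
        Pz = lam * z * (1 - 0.5 * (1 + 1/c) * z + z**2 / (3 * c))
        if abs(Pz) < 1.0148 * abs(z) - 1e-9:
            return False
    return True
```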
{\bf Remarks.}
(1) The bound $30$ in (a) is not sharp. Computer experiments show that
it can actually be replaced by 11.266519.
(2) Later we will prove that $\Omega_{ext}$ (hence $\Omega_{int}$) is
homeomorphic to a punctured disk. This will show that ${\cal M}_3(\theta)$ is a connected set (see \thmref{m3con}).
\vspace{0.17in}
\section{On Injectivity of the Surgery Map}
\label{sec:inject}
In this section we prove that the surgery map $\cal S: {\cal B}_5^{cm}(\theta) \rightarrow {\cal P}_3^{cm}(\theta)$ is injective on the set of Blaschke products which map to $\BBB C^{\ast}\smallsetminus {\cal M}_3(\theta)$ or to hyperbolic-like cubics. The proof of this fact is based on the combinatorics of drops and their nuclei as developed in Section \ref{sec:blapar}. Here is the outline of the proof: If
$\cal S (A)=\cal S (B)$ for some $A,B\in {\cal B}_5^{cm}(\theta)$, there exists a quasiconformal homeomorphism of the plane which conjugates the modified Blaschke products $\tilde A$ and $\tilde B$ and is conformal everywhere except on the union of the maximal drops. A careful analysis then shows that when $\cal S(A)$ is not capture, one can redefine this homeomorphism on all the drops of the two Blaschke products to get a conjugacy between $A$ and $B$
everywhere. A pull-back argument together with the Bers Sewing Lemma at each step shows that this conjugacy is conformal away from the Julia sets (\thmref{ABconj}). When ${\cal S}(A)$ is hyperbolic-like or has disconnected Julia set,
one can use the renormalization scheme in Section \ref{sec:renbla} and the rigidity on the Julia sets (\lemref{rig}) to conclude that the conjugacy between $A$ and $B$ is in fact conformal (\thmref{inj}). The main \thmref{main} and some corollaries on the connectedness locus ${\cal C}_5(\theta)$ will follow immediately.
\begin{thm}
\label{ABconj}
Let $A,B\in {\cal B}_5^{cm}(\theta)$ and $\cal S (A)=\cal S (B)=P$. Suppose that $P$ is not capture. Then there exists a quasiconformal homeomorphism $\Phi: \overline{\BBB C} \rightarrow \overline{\BBB C}$ which fixes $0,1,\infty$, commutes with $I$, and conjugates $A$ to $B$. Moreover, $\Phi$ is conformal on the Fatou set $\overline{\BBB C} \smallsetminus J(A)$.
\end{thm}
\begin{pf}
Following the notation of (\ref{eqn:pmodb}), we assume that $P=\varphi \circ \tilde A \circ \varphi^{-1}=\varphi' \circ \tilde B \circ {\varphi'}^{-1}$ for some quasiconformal homeomorphisms $\varphi$ and $\varphi'$. Consider the quasiconformal homeomorphism $\Phi_0={\varphi'}^{-1}\circ \varphi$ which conjugates $\tilde A$ to $\tilde B$ on the entire plane and is conformal (i.e., $\overline{\partial}\Phi_0=0$) everywhere except on $\bigcup_{k\geq 0}\tilde A^{-k}(\BBB D)$.
Note that by \propref{aux}(b) the open set $\overline{\BBB C} \smallsetminus \overline{\bigcup_{k\geq 0}\tilde A^{-k}(\BBB D)}$ is precisely the nucleus $N_{\infty}$ as defined in Section \ref{sec:blapar}. Also, $\bigcup_{k\geq 0}\tilde A^{-k}(\BBB D)$ is the {\it disjoint} union of the maximal drops of $A$ (which by \propref{aux}(a) correspond to the bounded Fatou components of $P$ which map to the Siegel disk $\Delta_P$). Similar correspondence holds for the open set $\bigcup_{k\geq 0}\tilde B^{-k}(\BBB D)$. Therefore, corresponding to any maximal $k$-drop $D_k^i(A)$, there exists a unique maximal
$k$-drop $D_k^i(B)=\Phi_0 (D_k^i(A))$. Finally, note that for any such maximal drops, $A^{\circ k}:D_k^i(A)\rightarrow \BBB D$ and $B^{\circ k}:D_k^i(B)\rightarrow \BBB D$ are conformal isomorphisms since by our assumption $P$ is not capture.
In what follows we construct a sequence of quasiconformal homeomorphisms $\Phi_n$ which preserve the unit circle $\BBB T$ and another sequence $\Upsilon_n$ by {\it symmetrizing } each $\Phi_n$:
$$\Upsilon_n(z)= \left \{ \begin{array}{ll}
\Phi_n(z) & |z|\geq 1 \\
(I\circ \Phi_n\circ I)(z) & |z|<1
\end{array}
\right. $$
We have already constructed $\Phi_0$, hence $\Upsilon_0$. Consider the sequences of compact sets $\{ J_n(A) \}$ and $\{ J_n(B) \}$ as in \lemref{alt}. Note that $\Phi_0 \circ A=B\circ \Phi_0$ on $J_0(A)$. The next step is to define $\Phi_1$: Let $\Phi_1=\Upsilon_0$ everywhere except on the maximal drops of $A$. On any maximal $k$-drop $D_k^i(A)$ we define $\Phi_1:D_k^i(A)\rightarrow D_k^i(B)$ by $B^{-k}\circ \Upsilon_0 \circ A^{\circ k}$. (When $k=0$, the only maximal $0$-drop is $\BBB D$ and by this definition $\Phi_1|_{\BBB D}=\Upsilon_0|_{\BBB D}$.) Observe that the two definitions match along the common boundary. Hence $\Phi_1$ is in fact a quasiconformal homeomorphism by the Bers Sewing Lemma. Note that $\Phi_1|_{J_0(A)}=\Phi_0|_{J_0(A)}$ and by definition of $J_1(A)$ in (\ref{eqn:jn}), $\Phi_1 \circ A=B\circ \Phi_1$ on $J_1(A)$. The homeomorphism $\Upsilon_1$ is then obtained by symmetrizing $\Phi_1$.
Continuing inductively, we define $\Phi_n$ to be equal to $\Upsilon_{n-1}$ everywhere except on the maximal drops of $A$ and then on the maximal drops we define it by taking pull-backs. In other words, $\Phi_n:D_k^i(A)\rightarrow D_k^i(B)$ will be defined by $B^{-k}\circ \Upsilon_{n-1} \circ A^{\circ k}$.
\begin{lem}
\label{induct}
The sequence of quasiconformal homeomorphisms $\{ \Phi_n \}$ has the following properties:
\begin{equation}
\label{eqn:ind1}
\Phi_n|_{J_{n-1}(A)}=\Phi_{n-1}|_{J_{n-1}(A)},
\end{equation}
and
\begin{equation}
\label{eqn:ind2}
(\Phi_n \circ A)(z)=(B\circ \Phi_n)(z)\hskip 1cm z\in J_n(A).
\end{equation}
\end{lem}
\begin{pf}
Both properties follow by induction on $n$. Let us prove (\ref{eqn:ind1}) first. We have already seen (\ref{eqn:ind1}) for $n=1$. Assume (\ref{eqn:ind1}) holds for $n$ and let $z\in J_n(A)$. We distinguish three cases:\\
$\bullet${\it Case 1:} $z\in J_n(A)\cap \overline{\BBB D}$. Then $I(z)\in J_{n-1}(A)$ and we have $\Phi_{n+1}(z)=\Upsilon_n(z)=(I\circ \Phi_n \circ I)(z)=(I\circ \Phi_{n-1} \circ I)(z)$ by the induction hypothesis. The latter is clearly equal to $\Upsilon_{n-1}(z)=\Phi_n(z)$.\\
$\bullet${\it Case 2:} $z\in J_n(A)\smallsetminus \overline{\BBB D}$ and $A^{\circ k}(z)\in \overline{\BBB D}$ for some $k\geq 1$. Then $A^{\circ k}(z)\in IJ_{n-1}$ and hence $(I\circ A^{\circ k})(z)\in J_{n-1}(A)$. So $\Phi_{n+1}(z)=(B^{-k}\circ \Upsilon_n \circ A^{\circ k})(z)=(B^{-k}\circ I\circ \Phi_n \circ I \circ A^{\circ k})(z)=(B^{-k}\circ I\circ \Phi_{n-1} \circ I \circ A^{\circ k})(z)$ by the induction hypothesis. Again, the latter is equal to $(B^{-k}\circ \Upsilon_{n-1} \circ A^{\circ k})(z)=\Phi_n(z)$.\\
$\bullet${\it Case 3:} $z\in J_n(A)\smallsetminus \overline{\BBB D}$ and $z$ is accumulated by points of the form {\it Case 2}. Then, clearly, $\Phi_{n+1}(z)=\Phi_n(z)$ by continuity.\\
Altogether the three cases show that $\Phi_{n+1}|_{J_n(A)}=\Phi_n|_{J_n(A)}$, which completes the induction step and the proof of (\ref{eqn:ind1}).
To prove (\ref{eqn:ind2}) we have to work a little harder. We have already seen (\ref{eqn:ind2}) for $n=1$. Assume (\ref{eqn:ind2}) holds for $n$ and let $z\in J_{n+1}(A)$. We split the induction step into the following cases:\\
$\bullet${\it Case 1:} $z\in J_{n+1}(A)\smallsetminus \overline{\BBB D}$ and $A(z)
\notin \overline{\BBB D}$. Then $(\Phi_{n+1} \circ A)(z)=(B\circ \Phi_{n+1})(z)$ automatically since $\Phi_{n+1}$ is defined by pull-backs.\\
$\bullet${\it Case 2:} $z\in J_{n+1}(A)\smallsetminus \overline{\BBB D}$ but $A(z)
\in \overline{\BBB D}$. Then $(\Phi_{n+1}\circ A)(z)=(\Upsilon_n\circ A)(z)=
(B\circ B^{-1}\circ \Upsilon_n\circ A)(z)=(B\circ \Phi_{n+1})(z).$\\
$\bullet${\it Case 3:} $z\in J_{n+1}(A)\cap \overline {\BBB D}$ and $A(z)\in \overline {\BBB D}$.
Then $(\Phi_{n+1}\circ A)(z)=(\Upsilon_n\circ A)(z)=(I\circ \Phi_n\circ I)(A(z))=(I\circ \Phi_n \circ A)(I(z))$. But $I(z)\in J_n(A)$ so by the induction hypothesis, $(I\circ \Phi_n\circ A)(I(z))=(I\circ B\circ \Phi_n)(I(z))=(B\circ I\circ \Phi_n)(I(z))=(B\circ \Upsilon_n)(z)=(B\circ \Phi_{n+1})(z)$.\\
$\bullet${\it Case 4:} $z\in J_{n+1}(A)\cap \overline {\BBB D}$ but $A(z)\notin \overline {\BBB D}$. Then $I(z)\in J_n(A)$. Let $w=A(z)$. Since $A(I(z))=I(w)\in \BBB D$, we have $I(w)\in IJ_{n-1}(A)$, hence $w\in J_{n-1}(A)$. By (\ref{eqn:ind1}), one has $\Phi_{n+1}(w)=\Phi_n(w)=\Phi_{n-1}(w)=\Upsilon_{n-1}(w)=(I\circ \Upsilon_{n-1}\circ I)(w)=(I\circ \Phi_n\circ I)(w)=(I\circ \Phi_n\circ I)(A(z))=(I\circ \Phi_n\circ A)(I(z))=(I\circ B\circ \Phi_n)(I(z))$ by the induction hypothesis. The latter is equal to $(B\circ I\circ \Phi_n)(I(z))=(B\circ \Upsilon_n)(z)=(B\circ \Phi_{n+1})(z)$.
\end{pf}
Back to the proof of \thmref{ABconj}. By the Bers Sewing Lemma, the symmetrization $\Phi_n \longrightarrow \Upsilon_n$ does not increase the dilatation. On the other hand, the modification $\Upsilon_n \longrightarrow \Phi_{n+1}$ achieved by pull-backs along the maximal drops does not increase the dilatation either, simply because $A$ and $B$ are holomorphic. So we may assume that $\{ \Phi_n \}$ is uniformly quasiconformal. Since all the $\Phi_n$ fix $0,1,\infty$, it follows that some subsequence $\Phi_{n(j)}$ converges locally uniformly to a quasiconformal homeomorphism $\Phi$. \lemref{alt} and \lemref{induct} imply that $\Phi \circ A=B\circ \Phi$ on $J(A)$.
In particular, this shows that $\Phi$ sends {\it all} the drops of $A$ bijectively to the drops of $B$ (before we only had a correspondence between the {\it maximal} drops of $A$ and $B$).
It is easy to check that $\Phi$ obtained this way is conformal on the union $N=\bigcup_{i,k} N_k^i(A)$ of all the nuclei of drops of $A$ at all depths as defined in Section \ref{sec:blapar} and in fact conjugates $A$ to $B$ there. Since $N$ is clearly disjoint from the Julia set $J(A)$ by (\ref{eqn:inv}), it remains to show that every Fatou component of $A$ is contained in $N$.
Consider a component $U$ of the Fatou set of $A$. Under the iteration of $A$, the orbit of $U$ visits $\BBB D$ and $\BBB C \smallsetminus \overline{\BBB D}$ either finitely many times or infinitely often. In the first case, $U$ has to eventually map into the nucleus $N_0(A)$ or $N_{\infty}(A)$, hence it has to be contained in $N$. We prove that the second case cannot occur. In fact, suppose that the orbit of $U$ visits $\BBB D$ and $\BBB C \smallsetminus \overline{\BBB D}$ infinitely often. According to Sullivan \cite{Sullivan1}, $U$ eventually maps to a periodic Fatou component of $A$ which is either an attracting or parabolic basin or a Siegel disk or a Herman ring. It follows that this cycle of periodic Fatou components intersects both $\BBB D$ and $\BBB C \smallsetminus \overline{\BBB D}$, so in either case a critical point of $A$ has to enter $\BBB D$ and escape from it infinitely often, which is impossible since ${\cal S}(A)$ is not a capture. This shows that $N=\overline{\BBB C}\smallsetminus J(A)$ and proves that $\Phi$ is a conjugacy between $A$ and $B$ everywhere and is conformal on $\overline{\BBB C}\smallsetminus J(A)$. It is easy to see that $\Phi$ constructed this way commutes with $I$.
\end{pf}
\begin{thm}
\label{inj}
Let $A,B \in {\cal B}_5^{cm}(\theta)$ and ${\cal S}(A)={\cal S}(B)$. If ${\cal S}(A)$ is hyperbolic-like or has disconnected Julia set, then $A=B$.
\end{thm}
\begin{pf}
$A$ and $B$ are renormalizable by \thmref{ren}. Consider the quasiconformal homeomorphism $\Phi$ given by \thmref{ABconj}. By \thmref{b3like}, there exists a pair of annuli $W'_A\Subset W_A$ (resp. $W'_B\Subset W_B$) and a quasiconformal homeomorphism $\varphi_A$ (resp. $\varphi_B$) which conjugates $A$ (resp. $B$) to $f_{\theta}$ on $W'_A$ (resp. $W'_B$). Since ${\cal S}(A)={\cal S}(B)$, we can assume that $W'_B=\Phi (W'_A)$ and $W_B=\Phi (W_A)$.
The quasiconformal homeomorphism $\psi=\varphi_B \circ \Phi \circ \varphi_A^{-1}: \varphi_A(W'_A)\rightarrow \varphi_B(W'_B)$ is a self-conjugacy of $f_{\theta}$ near its Julia set which commutes with $I$. By \lemref{rig}, we must have $\psi|_{J(f_{\theta})}=id$. It follows from the Bers Sewing Lemma that the $\overline{\partial}$-derivative of $\psi$ is zero almost everywhere on $J(f_{\theta})$. Since by \thmref{b3like}(b) $\varphi_A$ (resp. $\varphi_B$) has zero $\overline{\partial}$-derivative on $K(A)$ (resp. $K(B)$), we conclude that $\overline{\partial}\Phi=0$ almost everywhere on $K(A)$. But, as in the proof of \corref{measure}, up to a set of measure zero,
$J(A)=\bigcup_{n\geq 0} A^{-n}(K(A))$. Therefore, $\overline{\partial}\Phi$ has to be zero almost everywhere on the Julia set $J(A)$. Hence $\Phi$ is conformal, so $A=B$.
\end{pf}
\noindent
{\bf Remark.} We believe that the surgery map is a homeomorphism, at least outside of the capture components where it might have branching. This would imply that the connectedness loci ${\cal C}_5(\theta)$ and ${\cal M}_3(\theta)$ are actually homeomorphic, a conjecture that is strongly supported by computer experiments.
\begin{cor}
\label{c5con}
The surgery map $\cal S$ restricts to a homeomorphism
$\Lambda_{ext}\stackrel{\simeq}{\longrightarrow} \Omega_{ext}$. A similar conclusion holds for $\Lambda_{int}$ and $\Omega_{int}$. In particular, the connectedness locus ${\cal C}_5(\theta)$ is connected.
\end{cor}
\begin{pf}
Clearly $\cal S$ maps $\Lambda_{ext}$ into $\Omega_{ext}$ injectively by the previous theorem. Since $\cal S$ is a proper map by \propref{proper}, it extends to a continuous injection $\Lambda_{ext}\cup \{ \infty \} \hookrightarrow \Omega_{ext}\cup \{ \infty \}$. We claim that this injection is onto. To this end, it suffices to show that for any sequence $B_n \in \Lambda_{ext}$ which converges to the boundary of the connectedness locus ${\cal C}_5(\theta)$, the sequence $P_n={\cal S}(B_n)\in \Omega_{ext}$ converges to the boundary of ${\cal M}_3(\theta)$. If not, there is a subsequence of $B_n$ which converges to $B\in \partial {\cal C}_5(\theta)$ but the corresponding subsequence of $P_n$ converges to some $P\in \Omega_{ext}$. By continuity, $P={\cal S}(B)$. But $B$ has connected Julia set while $J(P)$ is disconnected. This is impossible by \thmref{hitD}.
\end{pf}
\begin{cor}
\label{c5full}
The connectedness locus ${\cal C}_5(\theta)$ has only two complementary
components $\Lambda_{ext}$ and $\Lambda_{int}$.
\end{cor}
\begin{pf}
Let $U$ be a bounded component of $\BBB C^{\ast}\smallsetminus {\cal C}_5(\theta)$ which is not $\Lambda_{int}$. Without loss of generality, we assume that $U$ maps into $\Omega_{ext}$ by $\cal S$. Take $A\in U$. By the previous corollary, there exists a $B\in\Lambda_{ext}$ such that ${\cal S}(A)={\cal S}(B)$. By \thmref{inj}, $A=B$ and this is a contradiction.
\end{pf}
\begin{cor}
\label{surj}
The surgery map ${\cal S}:{\cal B}_5^{cm}(\theta)\rightarrow {\cal P}_3^{cm}(\theta)$ is surjective.
\end{cor}
\begin{pf}
Compactify ${\cal B}_5^{cm}(\theta)$ and ${\cal P}_3^{cm}(\theta)$ by adding points at $0$ and $\infty$ to get topological 2-spheres. $\cal S$ extends to a continuous map between these spheres by \propref{proper}. This map has topological degree $\neq 0$ because it is a homeomorphism $\Lambda_{ext}\stackrel{\simeq}{\longrightarrow} \Omega_{ext}$ and ${\cal S}^{-1}(\Omega_{ext})=\Lambda_{ext}$. Therefore it has to be surjective.
\end{pf}
Since the boundary of the Siegel disk of a cubic which comes from the surgery is a quasicircle passing through some critical point, we have proved the following:
\begin{thm}[Bounded type cubic Siegel disks are quasidisks]
\label{main}
Let $\theta$ be of bounded type and let $P$ be a cubic polynomial which has a fixed Siegel disk $S$ of rotation number $\theta$. Then the boundary of $S$ is a quasicircle which contains one or both critical points of $P$.
\end{thm}
By a recent theorem of Graczyk and Jones \cite{GJ}, we have the following corollary:
\begin{cor}
\label{HD}
Under the assumptions of \thmref{main}, the boundary of the Siegel disk $S$ has Hausdorff dimension greater than $1$.
\end{cor}
Now it is possible to show that despite all the bifurcations taking place near the boundary of the connectedness locus ${\cal M}_3(\theta)$ which give rise to discontinuity of the Julia sets, the boundaries of the Siegel disks move continuously.
\begin{thm}[Boundaries of Siegel disks move continuously]
\label{move}
The boundary $\partial \Delta_c$ of the Siegel disk of $P_c\in {\cal P}_3^{cm}(\theta)$ centered at $0$ is a continuous function of $c\in \BBB C^{\ast}$ in the Hausdorff topology.
\end{thm}
\begin{pf}
Let us fix some $P\in {\cal P}_3^{cm}(\theta)$. If $P\notin \partial {\cal M}_3(\theta)$, \thmref{unstable} shows that $J(P)$, hence $\partial \Delta_P$, moves holomorphically in a neighborhood of $P$ and continuity at $P$ is obvious. So let us assume that $P\in \partial {\cal M}_3(\theta)$ and consider a sequence $P_n\in {\cal P}_3^{cm}(\theta)$ which converges to $P$ as $n\rightarrow \infty$. Since the surgery map is surjective, there exists a sequence $B_n\in {\cal B}_5^{cm}(\theta)$ such that ${\cal S}(B_n)=P_n$. By properness (\propref{proper}), some subsequence which we still denote by $B_n$ converges to some $B\in {\cal B}_5^{cm}(\theta)$, which by continuity maps to $P$. Now consider the representations
$P_n=\varphi_n \circ \tilde{B}_n\circ \varphi_n^{-1}$ as in (\ref{eqn:pmodb}). Then the boundary $\partial \Delta_{P_n}$ is just the image $\varphi_n(\BBB T)$. Since $\{ \varphi_n \}$ is normal by \corref{normal}, some further subsequence, still denoted by $\{ \varphi_n \}$, converges to a quasiconformal homeomorphism $\psi$. The map $Q=\psi\circ \tilde{B} \circ \psi^{-1}\in {\cal P}_3^{cm}(\theta)$ is quasiconformally conjugate to $P$. Since $P$ is rigid by \thmref{qcclass}, $P=Q$. Now, as $n\rightarrow \infty$, $\partial \Delta_{P_n}=\varphi_n(\BBB T)$ converges in the Hausdorff topology to $\psi(\BBB T)=\partial \Delta_Q=\partial \Delta_P$.
\end{pf}
\vspace{0.17in}
\section{Components of the Interior of ${\cal M}_3(\theta)$}
\label{sec:intcomp}
\noindent
{\bf Definition (Types of Components).}\ A component $U$ of the interior of ${\cal M}_3(\theta)$ is called {\it hyperbolic-like} if for every $c\in U$, the orbit of either $c$ or $1$
under $P_c$ converges to an attracting cycle. $U$ is called a {\it capture}
component if for every $c\in U$, either $c$ or $1$ eventually maps to the
Siegel disk $\Delta_c$. In case $U$ is neither hyperbolic-like nor capture, we call it a {\it queer} component.
We say that $P_c$ is hyperbolic-like, capture, or queer if the corresponding parameter $c$ belongs to such a component.\\
For example, there is a hyperbolic-like component in the form of the main cardioid of a large copy of the Mandelbrot set on the lower right corner of \figref{m3}. For every $c$ in this component, the orbit of the critical point $c$ of $P_c$ converges to an attracting fixed point. On the other hand, the large component
which is attached to the unit circle at $c=1$ on the right is a capture component,
consisting of all $c$ for which $P_c(c)$ belongs to $\Delta_c$. \figref{hypcub}-\figref{hypext} show examples of the filled Julia sets of cubics in ${\cal P}_3^{cm}(\theta)$.
In the above definition, we tacitly assumed that the hyperbolic-like and capture cubics fill up entire components of the interior of ${\cal M}_3(\theta)$. The condition of being hyperbolic-like is clearly open, so to justify the definition in this case we have to show that it is also closed in the interior of ${\cal M}_3(\theta)$.
\realfig{hypcub}{test5.ps}{{\sl The filled Julia set of a hyperbolic-like cubic in ${\cal P}_3^{cm}(\theta)$ with $\theta=(\sqrt {5}-1)/2$. The large open topological disk on the right is the immediate basin of attraction of an attracting fixed point.}}{9cm}
\realfig{captcub}{test6.ps}{{\sl The filled Julia set of a capture cubic in
${\cal P}_3^{cm}(\theta)$ with $\theta=(\sqrt {5}-1)/2$. Every bounded Fatou component eventually maps to the Siegel disk
centered at the origin. There is a critical point in the large preimage of the Siegel disk on the right. Hence this preimage maps to the Siegel disk by a 2-to-1 branched covering.}}{9cm}
\realfig{paracub}{test7.ps}{{\sl The filled Julia set of a cubic on the
boundary of ${\cal M}_3(\theta)$ with $\theta=(\sqrt {5}-1)/2$. The large open topological
disk on the right is the immediate basin of attraction of a parabolic fixed
point. Here the parameter $c$ is located at the ``cusp'' of the large
cardioid on the right lower corner of \figref{m3}.}}{11cm}
\realfig{2crit}{test8.ps}{{\sl Another example of the filled Julia set of a
cubic on the boundary of ${\cal M}_3(\theta)$ with $\theta=(\sqrt {5}-1)/2$. In this
example there are two distinct critical points on the boundary of the Siegel
disk centered at the origin. }}{11cm}
\realfig{hypext}{test9.ps}{{\sl The filled Julia set of a cubic in
$\Omega_{ext}$ for $\theta=(\sqrt {5}-1)/2$. It consists of uncountably many connected components. There are countably many components each quasiconformally homeomorphic to the quadratic Siegel filled Julia set with the same rotation number. Each remaining component is a single point.}}{11cm}
Let $P$ be hyperbolic-like and let $V$ and $U$ be the connected components containing $P$ of the set of hyperbolic-like cubics and of the interior of ${\cal M}_3(\theta)$, respectively. Clearly $V\subset U$. If $V\neq U$, we can choose a $Q\in \partial V\cap U$. Since $Q$ is $J$-stable by \thmref{unstable}, the number of attracting cycles remains constant for all $Q'$ in a small neighborhood of $Q$. This number
is $1$ for $Q'\in V$, hence $Q$ itself has an attracting cycle and is hyperbolic-like; since the hyperbolic-like condition is open, this contradicts $Q\in \partial V$.
Now consider the property of being capture for $P\in {\cal P}_3^{cm}(\theta)$. It follows from
\thmref{unstable} that when $P$ belongs to the interior of ${\cal M}_3(\theta)$, the connected component $V$ of the capture cubics containing $P$ has nonempty interior. Define $U$ as the component of the interior of ${\cal M}_3(\theta)$ containing $P$. Clearly $int(V)\subset U$. If they are not equal, let $Q\in \partial V \cap U$. Since $Q$ is
$J$-stable, for all $Q'$ in a small neighborhood of $Q$, a critical point of $Q'$ belongs to the Fatou set of $Q'$ if and only if the corresponding critical point of $Q$ belongs to the Fatou set of $Q$. If we choose $Q'\in V$, there is a critical point of $Q'$ which hits the Siegel disk $\Delta_{Q'}$. It follows that the same is true for $Q$, hence $Q$ is capture, which contradicts $Q\in \partial V$. This proves $int(V)=U$. In other words, when a capture cubic $P$ belongs to the interior of ${\cal M}_3(\theta)$, the whole interior component containing $P$ consists of capture cubics, hence the name ``capture component.''
However, since there is no a priori reason why the boundary of the Siegel disk of $P\in {\cal P}_3^{cm}(\theta)$ should move continuously, being capture is not trivially seen to be an open condition. Hence, the above argument does not rule out the possibility of a capture cubic lying on the boundary of the connectedness locus ${\cal M}_3(\theta)$. In fact, ruling this out requires a different type of argument, standard in deformation theory for rational maps (see \thmref{capop}). On the other hand, when $\theta$ is of bounded type, we will show that the boundary of the Siegel disk of a cubic in ${\cal P}_3^{cm}(\theta)$ moves continuously (see \thmref{move}), hence in this case the condition of being capture is automatically open.
Conjecturally, queer components do not exist. But if they do, every cubic in a queer component exhibits a remarkable property: it admits an invariant line field on its Julia set, and in particular, its Julia set
has positive Lebesgue measure. The proof of this fact depends on the harmonic $\lambda$-lemma of Bers and Royden \cite{Bers-Royden} as well as the elementary observation of Sullivan \cite{Sullivan2} that if the boundary of a Siegel disk moves holomorphically in a family of rational maps, then there is a choice of holomorphically varying Riemann maps for the Siegel disks (also see the new expanded version \cite{Mc-Sul}).
There is a technical difficulty showing up in the proof: For a general $\theta$ of Brjuno type, it is not known whether the boundary of the Siegel disk of a $P\in {\cal P}_3^{cm}(\theta)$ is a Jordan curve. For this reason, the extension of holomorphic motions to the grand orbits of Siegel disks will require some extra work. On the other hand, we will prove later that for $\theta$ of bounded type, the boundary of the Siegel disk $\Delta_P$ of a $P\in {\cal P}_3^{cm}(\theta)$ is a Jordan curve (see \thmref{main}). In this case the following theorem can be proved using the more elementary argument of \lemref{ext} with much less effort.
First we need the following useful lemma of L. Bers \cite{Bers}, \cite{Douady-Hubbard2}:
\begin{lem}[Bers Sewing Lemma]
\label{BSL}
Let $E\subset \BBB C$ be closed and $U$ and $V$ be two open neighborhoods of $E$. Let $\varphi :U \stackrel{\simeq}{\longrightarrow} \varphi(U)$ and $\psi :V \stackrel{\simeq}{\longrightarrow} \psi(V)$ be two homeomorphisms such that
\begin{enumerate}
\item[$\bullet$]
$\varphi$ is $K_1$-quasiconformal,
\item[$\bullet$]
$\psi|_{V\smallsetminus E}$ is $K_2$-quasiconformal,
\item[$\bullet$]
$\varphi|_{\partial E}=\psi|_{\partial E}$.
\end{enumerate}
Then the map $\varphi \amalg \psi$ defined on $V$ by
$$(\varphi \amalg \psi)(z)= \left \{
\begin{array}{ll}
\varphi(z) & z\in E \\
\psi(z) & z\in V\smallsetminus E
\end{array}
\right. $$
is a $K$-quasiconformal homeomorphism with $K=\max \{ K_1, K_2 \} $. Moreover, $\overline{\partial}(\varphi \amalg \psi)=\overline{\partial} \varphi$ almost everywhere on $E$.
\end{lem}
Throughout this paper, we say that two critically marked cubics $P_{c_1},P_{c_2}\in {\cal P}_3^{cm}(\theta)$ are {\it quasiconformally conjugate} if there exists a quasiconformal homeomorphism $\varphi$ of the plane such that $\varphi \circ P_{c_1}=P_{c_2} \circ \varphi$ {\it and} $\varphi(c_1)=c_2$. In other words, all conjugacies must respect the markings of the critical points.
\begin{thm}[Invariant Line Fields for Queer Cubics]
\label{invline}
Let $U$ be a queer component of the interior of ${\cal M}_3(\theta)$. Then for any $c\in U$, the Julia set $J(P_c)$ has positive Lebesgue measure and supports an invariant line field.
\end{thm}
\begin{pf}
The beginning of the argument is similar to the case of the Mandelbrot set. Fix some $c_0\in U$. We first note that every Fatou component of $P_{c_0}$ eventually maps to the Siegel disk $\Delta_{c_0}$ and the mapping is a conformal isomorphism: There cannot be further attracting cycles (since $P_{c_0}$ is not hyperbolic-like) or indifferent periodic orbits (see \thmref{indiff}). In particular, $K(P_{c_0})=\overline{GO(\Delta_{c_0})}$.
Choose some $c \in U$ with $c\neq c_0$, and let
$$\varphi_c:\BBB C\smallsetminus K(P_{c_0})\stackrel{\simeq}{\longrightarrow} \BBB C\smallsetminus K(P_c)$$
be the conformal conjugacy given by composition of the B\"{o}ttcher maps of $P_{c_0}$ and $P_c$ (see Section \ref{sec:connect}). A brief computation shows that $\varphi_c(z)=\sqrt{c/c_0} z+O(1)$ and we can choose the branch of the square root near $c_0$ for which $\varphi_{c_0}(z)=z$. Since $\varphi_c$ depends holomorphically on $c$,
it defines a holomorphic motion of $\BBB C\smallsetminus K(P_{c_0})$. By the harmonic $\lambda$-lemma \cite{Bers-Royden}, this motion extends to a {\it unique} holomorphic motion $\hat{\varphi}_c$ of the entire plane, which is now defined only for $c$ in a small neighborhood $V$ of $c_0$, with the following properties:
\begin{enumerate}
\item[$\bullet$]
For every $c\in V$, $\hat{\varphi}_c$ is a quasiconformal homeomorphism of the plane.
\item[$\bullet$]
For every $c\in V$, the Beltrami differential $\displaystyle{\frac{\overline{\partial}\hat{\varphi}_c}{\partial \hat{\varphi}_c}\frac{d\overline z}{dz}}$ is harmonic in $GO(\Delta_{c_0})$.
\end{enumerate}
It is easy to see that uniqueness of this extended motion implies that $\hat{\varphi}_c$ conjugates $P_{c_0}$ to $P_c$ on the entire plane (compare \cite{Mc-Sul}). In fact, one can replace $\hat{\varphi}_c$ by $P_c^{-1}\circ \hat{\varphi}_c \circ P_{c_0}$ on $GO(\Delta_{c_0})$, which also extends $\varphi_c$, where the branch of $P_c^{-1}$ is determined uniquely by the values of $\hat{\varphi}_c$ on the Julia set $J(P_{c_0})$. Hence $\hat{\varphi}_c=P_c^{-1}\circ \hat{\varphi}_c \circ P_{c_0}$ by uniqueness.
Next, we want to show that the restriction $\hat{\varphi}_c: GO(\Delta_{c_0})\rightarrow GO(\Delta_c)$ is a {\it conformal} conjugacy. As Sullivan observes in \cite{Sullivan2}, the fact that the boundary of $\Delta_{c_0}$ moves holomorphically (\thmref{unstable}) implies that there is a choice of the Riemann map $\zeta_c: \BBB D\rightarrow \Delta_c$ such that $\zeta_c(0)=0$ and $c\mapsto \zeta_c$ is holomorphic in $c$. Define a conformal conjugacy $\psi_c:\Delta_{c_0}\rightarrow \Delta_c$ by $\psi_c=\zeta_c\circ \zeta_{c_0}^{-1}$, which can be extended to a conformal conjugacy
$\psi_c : GO(\Delta_{c_0})\rightarrow GO(\Delta_c)$ by taking pull-backs as follows. Take any component $W$ of $P_{c_0}^{-n}(\Delta_{c_0})$ and let $W_c=\hat{\varphi}_c(W)$ be the corresponding component of $P_c^{-n}(\Delta_c)$. Define $\psi_c:W\rightarrow W_c$ by $\psi_c=P_c^{-n}\circ (\psi_c |_{\Delta_{c_0}}) \circ P_{c_0}^{\circ n}$. Since $c\mapsto \psi_c$ is holomorphic and $\psi_c=id$ when $c=c_0$, it follows that $\psi_c$ defines a holomorphic motion of $GO(\Delta_{c_0})$. By the harmonic $\lambda$-lemma, it extends to a unique holomorphic motion $\hat{\psi}_c$ of the entire plane which is defined for $c$ in a neighborhood $V'$ of $c_0$ and has harmonic Beltrami differential on $\BBB C\smallsetminus K(P_{c_0})$. By an argument similar to the one we used for $\hat{\varphi}_c$, it follows that $\hat{\psi}_c$ respects the dynamics, i.e., it conjugates $P_{c_0}$ to $P_c$ on the entire plane. In particular, it sends the marked critical point $c_0$ of $P_{c_0}$ to the marked critical point $c$ of $P_c$. Let us assume for example that the forward orbit of $c_0$ accumulates on the boundary of $\Delta_{c_0}$. Then the same is true for $c$ and $\Delta_c$. Since $\hat{\varphi}_c$ was also a conjugacy to begin with, for all $c\in V \cap V'$ we have
$\hat{\psi}_c(c_0)=c=\hat{\varphi}_c(c_0)$, and by induction $\hat{\psi}_c(P_{c_0}^{\circ k}(c_0))=P_c^{\circ k}(c)=\hat{\varphi}_c(P_{c_0}^{\circ k}(c_0))$ for all $k$. Since every point on the boundary of $\Delta_{c_0}$ is in the closure of the forward orbit of $c_0$, we conclude that $\hat{\psi}_c$ and $\hat{\varphi}_c$ agree on $\partial \Delta_{c_0}$. Evidently this shows that $\hat{\psi}_c$ and $\hat{\varphi}_c$ agree on the boundary of every bounded Fatou component of $P_{c_0}$, hence on the entire Julia set $J(P_{c_0})$. It follows then from the Bers Sewing \lemref{BSL} that $\hat{\varphi}_c \amalg \hat{\psi}_c$ defined by
$$(\hat{\varphi}_c \amalg \hat{\psi}_c)(z)= \left \{ \begin{array}{ll}
\hat{\varphi}_c(z) & z\in \BBB C \smallsetminus GO(\Delta_{c_0}) \\
\hat{\psi}_c(z) & z\in GO(\Delta_{c_0})
\end{array}
\right.$$
is a quasiconformal homeomorphism which trivially has harmonic Beltrami differential in $\BBB C \smallsetminus J(P_{c_0})$. Note that $\hat{\varphi}_c \amalg \hat{\psi}_c$ is an extension of both $\varphi_c$ and $\psi_c$. By uniqueness, we conclude that $\hat{\varphi}_c \equiv \hat{\psi}_c$. In particular, when $c \in V \cap V'$, $\hat{\varphi}_c$ is conformal away from the Julia set $J(P_{c_0})$.
Now, if the Julia set $J(P_{c_0})$ had measure zero, $\hat{\varphi}_c$ would be conformal on the entire plane, hence affine, contradicting $c\neq c_0$. So $J(P_{c_0})$ has positive measure. The desired invariant line field is then given by $\hat{\varphi}_c^{\ast}(\sigma_0)$, the pull-back of the standard conformal structure $\sigma_0$ on the plane by $\hat{\varphi}_c$.
\end{pf}
It must be apparent that the existence of holomorphic motions in the above proof was the crucial fact which made the conformal extensions possible. When we only have ``static'' quasiconformal conjugacies rather than holomorphic motions, such conformal extensions are still possible, provided that the boundaries of the Siegel disks are Jordan curves. To show this, we first need the following definition: \\ \\
{\bf Definition (Conformal Position).} Let $\Delta $ be a Jordan domain containing the origin, with a marked point $b$ on its boundary. Consider the unique conformal isomorphism $\zeta:\Delta \stackrel{\simeq}{\longrightarrow} \BBB D$ such that $\zeta(0)=0$ and $\zeta(b)=1$. By the {\it conformal position} of a point $z\in \Delta $ we mean the image $\zeta(z)\in \BBB D$. Note that this notion is well-defined once the boundary marking is given.\\
Let $R_t:z\mapsto e^{2 \pi i t}z$ denote the rigid rotation on the unit circle. Let $\zeta: \Delta \rightarrow \BBB D$ be any conformal isomorphism with $\zeta(0)=0$. Then a homeomorphism $h_t:\partial \Delta \rightarrow \partial \Delta$ of the form $h_t=\zeta^{-1}\circ R_t \circ \zeta$ is called an {\it intrinsic rotation} of $\partial \Delta $. By the Schwarz Lemma, $h_t$ is independent of the choice of $\zeta$.
Now consider two Jordan domains $\Delta_1$ and $\Delta_2$ containing the origin and a homeomorphism $\varphi: \partial \Delta_1 \rightarrow \partial \Delta_2$. Suppose that for some irrational angle $t$, $\varphi \circ h_t^1=h_t^2 \circ \varphi$, where the $h_t^j$ denote the intrinsic rotations of $\partial \Delta_j$. Then we can talk about two points in $\Delta_1$ and $\Delta_2$ having the same conformal position {\em even if there is no preferred choice for the marked points as before}. In fact, we may choose any $b\in \partial \Delta_1$ and $\varphi(b)\in \partial \Delta_2$ as the marked points on the boundaries and define the conformal positions accordingly. It is easy to check that the notion of having the same conformal position for two points in $\Delta_1$ and $\Delta_2$ does not depend on the particular choice of $b$.
For our purposes, $\Delta_1$ and $\Delta_2$ will be the Siegel disks of two cubics in ${\cal P}_3^{cm}(\theta)$ and the homeomorphism $\varphi:\partial \Delta_1 \rightarrow \partial \Delta_2$ comes from a conjugacy between the cubics on the boundaries of these Siegel disks.
The following result, which will turn out to be useful later (see \propref{independent}), tells us how to extend a quasiconformal conjugacy between two cubics in ${\cal P}_3^{cm}(\theta)$ to the grand orbits of their Siegel disks.
\begin{lem}[Extending QC Conjugacies]
\label{ext}
Let $P$ and $Q$ be two cubics in
${\cal P}_3^{cm}(\theta)$ such that the boundary of the Siegel disk $\Delta_P$ of $P$ is a Jordan curve. Let $\varphi:\BBB C \rightarrow \BBB C$ be a quasiconformal homeomorphism whose restriction $\BBB C \smallsetminus GO(\Delta_P) \rightarrow \BBB C \smallsetminus GO(\Delta_Q)$ conjugates $P$ to $Q$. Then
\begin{enumerate}
\item[(a)]
If $P$ is not capture, there exists a quasiconformal homeomorphism $\psi:\BBB C\rightarrow \BBB C$ which conjugates $P$ and $Q$, which is conformal on $GO(\Delta_P)$ and agrees with $\varphi$ on $\BBB C \smallsetminus GO(\Delta_P)$.
\item[(b)]
If $P$ is capture, we can construct a $\psi$ as in (a) if and only if
the captured images of the critical points of $P$ and $Q$ in $\Delta_P$ and $\Delta_Q$ have the same conformal position.
\end{enumerate}
\end{lem}
\begin{pf}
(a) When $P$ is not capture, the extension is easy to define.
Fix some $b_1\in \partial \Delta_P$ and let $b_2=\varphi(b_1)$. Consider conformal isomorphisms $\zeta_1: \Delta_P \stackrel{\simeq}{\longrightarrow} \BBB D$ and $\zeta_2: \Delta_Q \stackrel{\simeq}{\longrightarrow}
\BBB D$, with $\zeta_1(0)=0=\zeta_2(0)$ and $\zeta_1(b_1)=1=\zeta_2(b_2)$, which conjugate $P$ on $\Delta_P$
and $Q$ on $\Delta_Q$ to the rigid rotation $R_{\theta}:z\mapsto e^{2\pi i
\theta}z$ on $\BBB D$. Since the boundaries of $\Delta_P$ and $\Delta_Q$ are Jordan curves, $\zeta_1$ and
$\zeta_2$ extend homeomorphically to the closures. The composition $\psi=
\zeta_2^{-1}\circ \zeta_1:\Delta_P\rightarrow \Delta_Q$ is conformal and conjugates
$P$ on $\Delta_P$ to $Q$ on $\Delta_Q$. Also $\psi(b_1)=\varphi(b_1)=b_2$ and by induction $\psi(P^{\circ k}(b_1))=Q^{\circ k}(b_2)=\varphi(P^{\circ k}(b_1))$. Since the orbit of $b_1$ is dense on the boundary of $\Delta_P$, we have $\psi |_{\partial \Delta_P}=\varphi |_{\partial \Delta_P}$. Therefore, $\psi$ gives the required extension of $\varphi$ to the Siegel disk $\Delta_P$. It is now easy to extend $\psi$ to the grand orbit $GO(\Delta_P)$ as follows: $P^{\circ k}$ maps any component of $P^{-k}(\Delta_P)$ isomorphically onto $\Delta_P$. Hence we can define $\psi$ on any such component as the composition $Q^{-k}\circ
\psi |_{\Delta_P}\circ P^{\circ k}$, where the branch of $Q^{-k}$ is determined by the values of $\varphi$ on the Julia set $J(P)$. Clearly this composition is conformal inside this component and agrees with $\varphi$ on its boundary. The fact that $\psi$ defined this way is a quasiconformal homeomorphism follows from the Bers Sewing \lemref{BSL}, with $U=V=\BBB C$ and $E=\BBB C \smallsetminus GO(\Delta_P)$.
\realfig{confpos}{sig1.eps}{{\sl Extending $\varphi$ in the capture case.}}{9cm}
(b) Now let $P$ be capture. The construction of $\psi$ goes through as in
case (a) except for the last part where we want to extend $\psi$ by taking pull-backs. Suppose that there exists a positive integer $k$ such that
the critical point $c_1$ of $P$ belongs to the component $U_1$ of $P^{-k}(\Delta_P)$.
Let $V_1=P(U_1)$ and let $v_1=P(c_1)$ be the critical value in $V_1$. Since $P:\partial U_1 \rightarrow \partial V_1$ is a double covering and $\varphi$ conjugates $P$ to $Q$ on the Julia sets, there must be a critical point $c_2$ of $Q$ in a component $U_2$ of $Q^{-k}(\Delta_Q)$, with $\partial U_2=\varphi ( \partial U_1)$. Similarly define $V_2$ and $v_2$. By the proof of part (a) we can define $\psi$ inductively up to the $(k-1)$-th preimages of $\Delta_P$, including $V_1$. This gives us a conformal isomorphism $\psi:V_1\rightarrow V_2$ which necessarily maps $v_1$ to $v_2$, because by our assumption $P^{\circ k}(c_1)$ and $Q^{\circ k}(c_2)$ have the same conformal position in $\Delta_P$ and $\Delta_Q$ and so one gets
mapped to the other by $\psi |_{\Delta_P}$. Choose any simple arc $\gamma_1$ in $V_1$ connecting $v_1$ to some boundary point $\beta_1$. The simple arc $\gamma_2=\psi(\gamma_1)$ in $V_2$ connects $v_2$ to the boundary point
$\beta_2=\psi(\beta_1)$. Pull $\gamma_1$ back by $P$ to get two branches of a simple arc passing through the critical point $c_1$ with two distinct endpoints $\alpha_1$ and $\alpha'_1$ on the boundary of $U_1$. Similarly we consider the pull-back of $\gamma_2$ by $Q$ and we get two endpoints on the boundary of $U_2$, which we label as $\alpha_2=\varphi(\alpha_1)$ and $\alpha'_2=\varphi(\alpha'_1)$ (see \figref{confpos}). Now the inverse $Q^{-1}$ can be defined analytically over $V_2 \smallsetminus \gamma_2$ and has two branches which take
values in two different connected components of $U_2\smallsetminus Q^{-1}(\gamma_2)$. Define $\psi$ on $U_1$ as the composition $Q^{-1}\circ
\psi \circ P$, where the boundary orientation tells us which of the two branches of $Q^{-1}$ has to be taken. This way we extend $\psi$ to $U_1$, and $\psi$ can then be defined on further preimages of $\Delta_P$ as in case (a).
\end{pf}
\vspace{0.17in}
\section{Introduction}
\label{sec:introduction}
Let $f$ be a polynomial of degree $d\geq 2$ in the complex plane and consider the following statements:
({\bf A}$_d$) ``If $f$ has a fixed Siegel disk $\Delta$ of bounded type rotation number,
then $\partial \Delta$ is a quasicircle passing through some critical point of $f$.''
({\bf B}$_d$) ``If $f$ has a fixed Siegel disk $\Delta$ such that $\partial \Delta$ is a quasicircle passing through some critical point of $f$, then the rotation number of $\Delta$ is bounded type.''
Statement ({\bf A}$_2$) is a theorem of Douady, Ghys, Herman, and Shishikura;
({\bf B}$_d$) is open, even for $d=2$, and the main object of this work is to prove ({\bf A}$_3$):\\ \\
{\bf Theorem 14.7.} {\it Let $P$ be a cubic polynomial with a fixed Siegel disk $\Delta$ whose rotation number $\theta$ is of bounded type. Then the boundary of $\Delta$ is a quasicircle which contains one or both critical points of $P$.}\\ \\
Along the way, we prove several results about the dynamics of cubic Siegel polynomials. In fact, we study the one-dimensional slice ${\cal P}_3^{cm}(\theta)$ in the cubic parameter space which consists of all cubics with a fixed Siegel disk of a given rotation number $\theta$. Many of the results apply to general $\theta$ of Brjuno type.
Siegel disks are examples of quasiperiodic motion in holomorphic dynamical systems. Let $p$ be an {\it irrationally indifferent} fixed point of a rational map $f:\overline{\BBB C}\rightarrow \overline{\BBB C}$. This means that $f(p)=p$ and the {\it multiplier} $f'(p)=\lambda$ is of the form $e^{2 \pi i \theta}$, where the {\it rotation number} $0< \theta<1$ is irrational. $p$ is called {\it linearizable} if there exists a holomorphic change of coordinates near $p$ which conjugates $f$ to the rigid rotation $z\mapsto \lambda z$. The largest domain on which this linearization is possible is a simply-connected domain $\Delta$ which is called the {\it Siegel disk} of $f$ centered at $p$. In other words, there exists a conformal isomorphism $h:(\BBB D,0)\stackrel{\simeq}{\longrightarrow} (\Delta,p)$ such that $h(\lambda z)=f(h(z))$ for all $z\in \BBB D$, and $\Delta$ is not contained in any larger domain with this property. While the Siegel disk $\Delta$ is a component of the Fatou set of $f$, the boundary of $\Delta$ is a subset of the Julia set.
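To make the linearization concrete, the Taylor coefficients of $h$ can be computed recursively from the functional equation. The sketch below (a numerical illustration, not part of the original text) uses the quadratic model $f(z)=\lambda z+z^2$ and solves $h(\lambda w)=\lambda h(w)+h(w)^2$ term by term; each coefficient picks up a division by the ``small divisor'' $\lambda^n-\lambda$, which is the source of the delicate boundary behavior of $h$.

```python
import cmath
import math

theta = (math.sqrt(5) - 1) / 2         # golden mean rotation number
lam = cmath.exp(2j * math.pi * theta)  # multiplier lambda = e^{2 pi i theta}

# Solve h(lam*w) = lam*h(w) + h(w)^2 term by term for f(z) = lam*z + z^2,
# normalized so that h'(0) = 1.  Comparing coefficients of w^n gives
#   h_n = (sum_{i+j=n, i,j>=1} h_i h_j) / (lam^n - lam).
N = 20
h = [0.0, 1.0] + [0.0] * (N - 1)       # h[n] = n-th Taylor coefficient of h
for n in range(2, N + 1):
    convolution = sum(h[i] * h[n - i] for i in range(1, n))
    h[n] = convolution / (lam ** n - lam)   # division by the small divisor

# |lam^n - lam| becomes small whenever n - 1 is a good denominator of theta;
# for theta of bounded type these divisors stay under control
divisors = [abs(lam ** n - lam) for n in range(2, N + 1)]
```

The recursion converges on a disk whose radius is the conformal radius of the Siegel disk; for Brjuno $\theta$ the small divisors are summable enough for this radius to be positive.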
Every punctured Siegel disk $\Delta \smallsetminus \{ p\}$ is foliated by dynamically-defined real-analytic invariant curves. However, as we approach $\partial \Delta$, these invariant curves usually become more and more wiggly, and in the limit we lose control over their distortion. So, a priori, we do not even know whether $\partial \Delta$ is a Jordan curve. This question is difficult to answer, partly because of the delicate analytic issues which arise in the study of the boundary behavior of the (essentially unique) linearizing map $h$.
It was conjectured by Douady and Sullivan in the early $80$'s that the boundary of every Siegel disk for a rational map has to be a Jordan curve (see \cite{Douady1}). This remains an open problem, even for polynomials, and even in degree $2$. Even worse, there are very few explicit examples of polynomials for which we can effectively verify the conjecture. For instance, it is easy to see that local-connectivity of the Julia set implies that the boundary of a Siegel disk is a Jordan curve, but except for one case in the quadratic family \cite{Petersen}, we do not know how to check local-connectivity of the Julia set of a rational map which has a Siegel disk (and even in that single case, the boundary being a Jordan curve is proved without any reference to local-connectivity!). On the other hand, there are examples of quadratic Siegel polynomials whose Julia sets are not locally-connected, yet the boundaries of the Siegel disks are quasicircles \cite{Herman3} or even smooth Jordan curves \cite{Perez3}.
It is known that in any counterexample to this conjecture, the boundary of the Siegel disk must either be very complicated (an indecomposable continuum) or very simple (a circle with infinitely many topologist's sine curves implanted on it) \cite{Rogers}.
Let $\theta=[a_1, a_2, \ldots, a_n, \ldots ]$ be the continued fraction
expansion of $\theta$ and let $p_n/q_n=\discretionary{}{}{}
[a_1, a_2, \ldots, a_n]$ be its $n$-th
rational approximation, with every $a_i$ being a positive integer.
According to the theorem of Brjuno-Yoccoz \cite{Yoccoz}, every holomorphic germ with an indifferent fixed point of multiplier $\lambda=e^{2 \pi i \theta}$ is linearizable if and only if $\theta$ satisfies
$$\sum_{n=1}^{\infty} \frac{ \log q_{n+1}}{q_n}< +\infty.$$
Such $\theta$, or the corresponding $\lambda$, is called of {\it Brjuno type}. It is not hard to show that this set has full measure on the unit circle. The set of irrational numbers of Brjuno type contains two important arithmetic subsets: (1) numbers of {\it Diophantine type}, the set of all $0<\theta<1$ for which there exist positive constants $C$ and $\nu$ such that $|\theta-p/q|>C/q^{\nu}$ for every rational number $0\leq p/q<1$; and (2) numbers of {\it bounded type}, the set of all $0<\theta<1$ for which $\sup_n a_n< +\infty$.
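As a concrete illustration (a numerical sketch, not part of the original argument), one can check that the golden mean $\theta=(\sqrt 5-1)/2$ used in the figures is of bounded type, with $a_n\equiv 1$, Fibonacci denominators $q_n$, and a rapidly convergent Brjuno series:

```python
import math

def cf_digits(x, n):
    """First n continued fraction digits of x in (0,1), via the Gauss map."""
    digits = []
    for _ in range(n):
        x = 1.0 / x
        a = int(x)           # integer part is the next digit a_k
        digits.append(a)
        x -= a
    return digits

theta = (math.sqrt(5) - 1) / 2      # golden mean
digits = cf_digits(theta, 10)       # all 1's: sup a_n = 1, so bounded type

# Denominators of the rational approximations: q_n = a_n q_{n-1} + q_{n-2},
# which are Fibonacci numbers for the golden mean.
a = [1] * 40
q = [1, a[0]]
for n in range(1, 40):
    q.append(a[n] * q[-1] + q[-2])

# Terms of the Brjuno series sum_n log(q_{n+1}) / q_n decay geometrically,
# since q_n grows like the n-th power of the golden ratio.
terms = [math.log(q[n + 1]) / q[n] for n in range(1, 39)]
brjuno_partial = sum(terms)
```

Since $q_n$ grows at least geometrically for every irrational, each $\log q_{n+1}/q_n$ is eventually small, but it is the summability of these terms that characterizes the Brjuno condition.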
By the above discussion, every rational map with an indifferent fixed point whose multiplier is of Brjuno type has a Siegel disk. However, whether the multiplier of every Siegel disk of a rational map has to be of Brjuno type is only known to be true for quadratic polynomials by a theorem of Yoccoz \cite{Yoccoz} (see also \cite{Perez1} for a partial generalization).
Another issue is the existence of critical points on the boundary of Siegel disks. This problem was first studied by Ghys \cite{Ghys} under the assumption that the boundary is a Jordan curve and the rotation number is Diophantine. Later Herman improved the result by showing that when the rotation number is Diophantine and the action on the boundary is injective, there must be a critical point on the boundary \cite{Herman1}. The idea is to extract a circle diffeomorphism from the action on the boundary when there is no critical point there, and then to use the condition on the rotation number to extend the linearization to a neighborhood of the boundary, which gives a contradiction. A very short proof of this theorem is now possible with the knowledge of the ``Siegel compacts'' as recently introduced by Perez-Marco \cite{Perez2}. In the case of quadratic polynomials, no critical point on the boundary of the Siegel disk automatically implies that the map acts injectively on this boundary. Hence one concludes that for $\theta$ of Diophantine type, the critical point of $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$ is on the boundary of the Siegel disk centered at $0$.
Later Herman gave the first example of a $\theta$ of Brjuno type for which the boundary of the Siegel disk for $Q_{\theta}$ is disjoint from the entire orbit of the critical point \cite{Herman3}.
The most significant example in which one can explicitly show that the boundary of a Siegel disk is a Jordan curve containing a critical point is the quadratic map $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$, with $\theta$ being of bounded type. The idea is to consider the degree $3$ Blaschke product
$$f_{\theta} (z)=e^{2 \pi i t(\theta)} z^2 \left ( \frac{z-3}{1-3z} \right )$$
which has a double critical point at $1$ and $t(\theta)$ is chosen such that the rotation number of the restriction of $f_{\theta}$ to the unit circle is $\theta$.
Using a theorem of Swiatek and Herman on quasisymmetric linearization of critical circle maps \cite{Swiatek},\cite{Herman2}, one can redefine $f_{\theta}$ on the unit disk to make it quasiconformally conjugate to the rigid rotation. After modifying the conformal structure on the unit disk and all its preimages, one applies the Measurable Riemann Mapping Theorem of Ahlfors and Bers to prove that the resulting topological picture is quasiconformally conjugate to a quadratic polynomial $Q$. But the image of the unit disk has to be a Siegel disk for $Q$ of rotation number $\theta$, and there is only one such quadratic, so $Q=Q_{\theta}$ up to an affine conjugacy, which proves ({\bf A}$_2$).
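The double critical point of $f_{\theta}$ at $1$ can be checked directly: writing $f_{\theta}=e^{2\pi i t(\theta)}g$ with $g(z)=z^2(z-3)/(1-3z)$, the quotient rule gives $g'(z)=-6z(z-1)^2/(1-3z)^2$, so $z=1$ is a zero of $g'$ of order two, and $g$ maps the unit circle to itself. A quick numerical sanity check of these two facts (a sketch added for illustration, not part of the original text):

```python
import cmath

def g(z):
    # inner factor of f_theta, without the rotation e^{2 pi i t(theta)}
    return z ** 2 * (z - 3) / (1 - 3 * z)

def dg(z):
    # derivative of g computed by the quotient rule:
    # g'(z) = -6 z (z-1)^2 / (1-3z)^2
    return -6 * z * (z - 1) ** 2 / (1 - 3 * z) ** 2

# the closed form for g' agrees with a central finite difference
h = 1e-6
for z in (0.3 + 0.2j, -0.7 + 0.1j, 2.0 + 0.5j):
    finite_diff = (g(z + h) - g(z - h)) / (2 * h)
    assert abs(finite_diff - dg(z)) < 1e-5

# z = 1 is a double critical point: dg(1) = 0 and dg(1+eps) ~ -1.5 eps^2
assert dg(1) == 0
assert abs(dg(1 + 1e-3) / 1e-6 + 1.5) < 0.01

# g preserves the unit circle, as a Blaschke product must
for t in (0.1, 0.7, 2.3):
    assert abs(abs(g(cmath.exp(1j * t))) - 1) < 1e-12
```

The restriction of $f_{\theta}$ to the circle is thus a degree-one critical circle map with critical point $1$, which is exactly the setting of the Swiatek--Herman linearization theorem.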
In any attempt to generalize this result to higher degrees, one must address
several questions. In fact, the main difficulty is not the surgery which can be performed in all degrees in a similar way, provided that one has the appropriate Blaschke products in hand. Instead, we have to face a different set of questions, such as the parametrization of the candidate Blaschke products by their critical points, the combinatorics of various ``drops'' of their Julia sets, the continuity of the surgery, and the injectivity of this operation. None of these questions arises in degree $2$, where the corresponding parameter spaces are single points.
Let us briefly sketch the organization of this paper: In Section \ref{sec:cubpar} we introduce a normal form for the critically marked cubic polynomials with a Siegel disk of a given rotation number $\theta$ centered at the origin. We show that the associated parameter space ${\cal P}_3^{cm}(\theta)$ is homeomorphic to the punctured plane and has a symmetry induced by the inversion $c\mapsto 1/c$ through the unit circle. We then study elementary topological properties of the connectedness locus ${\cal M}_3(\theta)$ in ${\cal P}_3^{cm}(\theta)$.
In Section \ref{sec:stab} we show that the Julia sets of cubics in ${\cal P}_3^{cm}(\theta)$ move holomorphically away from the boundary of the connectedness locus ${\cal M}_3(\theta)$ where various bifurcations do occur. In particular, if some cubic has an indifferent cycle other than the center of the Siegel disk at the origin, it must belong to the boundary of ${\cal M}_3(\theta)$. Both facts resemble well-known properties of the Mandelbrot set.
In Section \ref{sec:intcomp} we study components of the interior of ${\cal M}_3(\theta)$. In the interior of ${\cal M}_3(\theta)$, we can observe two possibilities: (1) The free critical point approaches an attracting cycle. In this case the cubic is called {\it hyperbolic-like} and is renormalizable in the sense of the definition in Section \ref{sec:rencub}. (2) The free critical point eventually maps into the Siegel disk centered at the origin. This is called a {\it capture} cubic. The hyperbolic-like and capture components are the only possibilities one expects. However, as in the case of the Mandelbrot set, it is not known whether these cases in fact cover all possibilities. As a third possibility, a cubic in the interior of ${\cal M}_3(\theta)$ which is neither hyperbolic-like nor capture is called {\it queer}. The most significant property of these cubics is that their Julia sets support invariant line fields and in particular have positive Lebesgue measure. To show this, unlike the quadratic family where the holomorphic motion of the basin of infinity extends automatically to the whole plane by the $\lambda$-lemma, here we must do some extra work. In fact, when the rotation number $\theta$ is an arbitrary number of Brjuno type, we do not know whether the boundaries of the Siegel disks are Jordan curves. Hence it is difficult to extend the holomorphic motions to the grand orbits of the Siegel disks in order to construct deformations of a queer cubic. Following McMullen and Sullivan, we overcome this difficulty by an application of the so-called ``harmonic $\lambda$-lemma'' of Bers and Royden. This section ends with a static version of an extension lemma, which turns out to be useful later in the construction of the surgery map.
Section \ref{sec:rencub} studies the class of renormalizable cubics in ${\cal P}_3^{cm}(\theta)$. From every such cubic one can extract the quadratic Siegel polynomial $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$ using the theory of polynomial-like maps. As an easy byproduct, using a theorem of Petersen \cite{Petersen}, we show that the Julia set of a hyperbolic-like cubic or a cubic with disconnected Julia set has measure zero when $\theta$ is of bounded type.
Section \ref{sec:connect} supplies a proof of connectivity of ${\cal M}_3(\theta)$. The standard Douady-Hubbard map on the exterior component of $\BBB C^{\ast}\smallsetminus {\cal M}_3(\theta)$ turns out to be a proper holomorphic map of degree $3$, so in order to prove that this component is homeomorphic to a punctured disk one needs to show that the map only branches over infinity. Again, this difficulty does not appear in the proof of connectivity of the Mandelbrot set, where the analogous map has degree $1$.
In Section \ref{sec:qc} we characterize the quasiconformal conjugacy classes in ${\cal P}_3^{cm}(\theta)$. The most important feature is the quasiconformal rigidity of the cubics on the boundary of ${\cal M}_3(\theta)$, which will be the crucial step in the proof of continuity of the surgery map in Section \ref{sec:continuity}. The material here is standard, except possibly for the existence of centers for capture components, which follows from the fact that the condition of being capture is open.
Section \ref{sec:blaschke} is the beginning of our study of an auxiliary family of degree $5$ critically marked Blaschke products. To define the parameter space here, we need to show that our Blaschke products can be parametrized by their critical points, a fact that is trivial in the polynomial case. We prove that such a ``critical parametrization'' is always possible.
In Section \ref{sec:blapar} we use this critical parametrization to define the parameter space ${\cal B}_5^{cm}(\theta)$. Every $B\in {\cal B}_5^{cm}(\theta)$ is a degree $5$ Blaschke product with superattracting fixed points at $0$ and $\infty$, a double critical point on the unit circle $\BBB T$, and a pair $\{ c, 1/\overline{c} \}$ of symmetric critical points which may or may not be on $\BBB T$. The space of all such critically marked Blaschke products is homeomorphic to the punctured plane. For $B\in {\cal B}_5^{cm}(\theta)$, we study the structure of the preimages of the unit disk, which are called the {\it drops} of $B$.
Section \ref{sec:surgery} describes a surgery on Blaschke products in ${\cal B}_5^{cm}(\theta)$, with $\theta$ being of bounded type, in order to obtain critically marked cubics in ${\cal P}_3^{cm}(\theta)$. The theorem of Swiatek-Herman on quasisymmetric linearization of critical circle maps is the main tool. Unlike the quadratic case, here we must address a new question: Does the result of the surgery depend on various choices we make along the way? The answer turns out to be negative by an application of the Bers Sewing Lemma.
Section \ref{sec:newcon} defines the connectedness locus ${\cal C}_5(\theta)$ in ${\cal B}_5^{cm}(\theta)$. We present a dynamical meaning for this locus by finding an alternative description for the Julia sets of elements in ${\cal B}_5^{cm}(\theta)$. This description turns out to be useful in the study of injectivity of the surgery map.
In Section \ref{sec:continuity} we show that the surgery map ${\cal S}:{\cal B}_5^{cm}(\theta) \rightarrow {\cal P}_3^{cm}(\theta)$ is continuous. The proof depends strongly on the fact that the parameter spaces have one complex dimension. One expects the analogous map in higher dimensions to be discontinuous.
Section \ref{sec:renbla} introduces the notion of a renormalizable Blaschke product.
We show that from every such map one can extract the standard degree $3$ Blaschke product $f_{\theta}$ introduced by Douady, Ghys, Herman and Shishikura. This will be a very useful fact in Section \ref{sec:inject}, mainly because of the simple observation that $f_{\theta}$ is quasiconformally rigid.
In Section \ref{sec:inject} we prove that the surgery map is injective on the set of all Blaschke products which map to hyperbolic-like cubics or cubics with disconnected Julia sets. We actually prove a stronger result: Any two Blaschke products in the fiber over a noncapture cubic must be quasiconformally conjugate, with the conjugacy being conformal except on the Julia set. The fact that the surgery map is proper and restricts to a homeomorphism between the complementary components of ${\cal C}_5(\theta)$ and ${\cal M}_3(\theta)$ allows us to deduce that it is surjective. This proves \thmref{main}.
Finally, in Section \ref{sec:twocrit} we study the set of all cubics in ${\cal P}_3^{cm}(\theta)$ which have both critical points on the boundary of their Siegel disks. We prove that this is a Jordan curve in the boundary of ${\cal M}_3(\theta)$, which in some sense is parametrized by an ``angle'' between the two critical points.\\ \\
{\bf Acknowledgements.} I am grateful to Jack Milnor for many inspiring discussions and his moral support. Among other things, he provided me with his computer programs to create pictures of various dynamically defined objects and showed me with great patience how to write them. Further thanks are due to Misha Lyubich and Dierk Schleicher for very useful conversations during the Spring and Fall semesters of 1997 at Stony Brook.
\vspace{0.17in}
\section{The Blaschke Connectedness Locus ${\cal C}_5(\theta)$}
\label{sec:newcon}
By analogy with the case of cubic polynomials, we define the {\it Blaschke Connectedness Locus} ${\cal C}_5(\theta)$ by
$${\cal C}_5(\theta)= \{ B\in {\cal B}_5^{cm}(\theta):\ \mbox{The Julia set $J(B)$ is connected} \}.$$
The following theorem provides a useful characterization of ${\cal C}_5(\theta)$ in terms of the critical orbits.
\begin{thm}
\label{hitD}
$B\in {\cal C}_5(\theta)$ if and only if one of the following holds:
\begin{enumerate}
\item[$\bullet$]
The orbit of $c$, the critical point of $B$ in $\BBB C \smallsetminus \BBB D$ other than $1$, eventually hits $\overline{\BBB D}$.
\item[$\bullet$]
The orbit of $c$ never hits $\overline{ \BBB D}$, but remains bounded.
\end{enumerate}
\end{thm}
The proof of this theorem depends on an alternative dynamical description of the Julia set of a Blaschke product in ${\cal B}_5^{cm}(\theta)$, obtained by taking pull-backs along a certain type of drops called maximal drops. This description will be useful later in the proof of \thmref{ABconj}.\\ \\
{\bf Definition.} Let $D_k^i$ be a $k$-drop of $B\in {\cal B}_5^{cm}(\theta)$.
We call $D_k^i$ a {\it maximal drop} if $D_k^i=\BBB D$, or if $D_k^i\cap \BBB D=\emptyset$ and $D_k^i$ is not contained in any other $l$-drop of $B$ for $l\geq 1$.\\
It follows in particular that maximal drops of $B$ are disjoint.
\begin{prop}
\label{aux}
Let $B\in {\cal B}_5^{cm}(\theta)$ and let $P={\cal S}(B)=\varphi \circ \tilde{B} \circ \varphi^{-1}$ as in $($\ref{eqn:pmodb}$)$. Then
\begin{enumerate}
\item[(a)]
$D_k^i$ is a maximal drop of $B$ if and only if $\varphi(D_k^i)$ is a Fatou component of $P$ which eventually maps to the Siegel disk $\Delta_P$.
\item[(b)]
$\varphi$ maps the nucleus $N_{\infty}$ of $B$ onto $\overline{\BBB C}\smallsetminus \overline{GO(\Delta_P)}$.
\item[(c)]
The boundary of the immediate basin of attraction of infinity for $B$ is precisely the closure of the union of the boundaries of all the maximal drops of $B$.
Under $\varphi$ this set maps to the Julia set $J(P)$.
\end{enumerate}
\end{prop}
\begin{pf}
(a) and (b) are easy consequences of the definitions. For (c), just note that under $\varphi$, the boundary of the immediate basin of attraction of infinity for $B$ corresponds to the similar boundary for $P$, and the closure of the union of the boundaries of all the maximal drops of $B$ corresponds to the Julia set $J(P)$ by (a).
\end{pf}
\begin{lem}[Alternative description for Julia Sets]
\label{alt}
Let $B\in {\cal B}_5^{cm}(\theta)$ and let $J_0$ be the boundary of the immediate basin of attraction of infinity for $B$. Define a sequence of compact sets $J_n=J_n(B)$ inductively by
\begin{equation}
\label{eqn:jn}
J_n= \overline{ \bigcup_{D_k^i\ maximal} B^{-k}(IJ_{n-1}\cap \BBB D)\cap D_k^i }.
\end{equation}
Then
\begin{equation}
\label{eqn:jb}
J(B)=\overline{\bigcup_{n\geq 0}J_n}.
\end{equation}
\end{lem}
\begin{pf}
Each $J_n$ is compact and contained in $J(B)$. By \propref{aux}(c), $J_0\subset J_1$ and it follows by induction on $n$ that $J_n\subset J_{n+1}$ for $n\geq 0$. Put
$$J_{\infty}=\overline{\bigcup_{n\geq 0}J_n}.$$
Clearly $J_{\infty}$ is compact and contained in the Julia set $J(B)$, and it is not hard to see that it is invariant under the reflection $I$. We will show that $J_{\infty}$ is totally invariant under $B$, i.e., $B^{-1}(J_{\infty})=J_{\infty}$. This will prove that $J_{\infty}=J(B)$.
First we prove that $J_{\infty}$ is forward invariant. For any $n$, it follows from (\ref{eqn:jn}) that $B(J_n \smallsetminus \BBB D)\subset J_n \subset J_{\infty}$. On the other hand, $B(J_n\cap \overline{\BBB D})= B(IJ_{n-1}\cap \overline{\BBB D})=
IB(J_{n-1}\smallsetminus \BBB D)\subset IJ_{\infty}=J_{\infty}$. These two inclusions show that $B(J_n)\subset J_{\infty}$, hence $B(J_{\infty})\subset J_{\infty}$.
To prove backward invariance, first note that for any $n$, $B^{-1}(J_n)\smallsetminus \BBB D \subset J_n \subset J_{\infty}$ by (\ref{eqn:jn}). To obtain the same kind of inclusion for $B^{-1}(J_n)\cap \overline{\BBB D}$, we distinguish two cases: First, $B^{-1}(J_n\cap \overline{\BBB D})\cap \overline{\BBB D}= B^{-1}(IJ_{n-1}\cap \overline{\BBB D}) \cap \overline{\BBB D}\subset I(B^{-1}(J_{n-1}\smallsetminus \BBB D)) \subset IJ_{n-1}\cup J_n \subset J_{\infty}$. Second, $B^{-1}(J_n\smallsetminus \BBB D)\cap \overline{\BBB D}=I(B^{-1}(IJ_n\cap \overline{\BBB D})\smallsetminus \BBB D) \subset I(B^{-1}(J_{n+1})\smallsetminus \BBB D)\subset IJ_{n+1}\subset J_{\infty}$. Altogether, these three inclusions show that $B^{-1}(J_n)\subset J_{\infty}$ for all $n$. Hence $B^{-1}(J_{\infty})\subset J_{\infty}$ and this proves (\ref{eqn:jb}).
\end{pf}
\vspace{0.5cm}
\noindent
{\bf Remark.} In terms of the modified Blaschke product $\tilde{B}$ as defined in (\ref{eqn:modbla}), one can also define the sequence $J_n$ by $J_0=\varphi^{-1}(J(P))$ and
$$J_n=\overline{ \bigcup_{k\geq 0} \tilde{B}^{-k}(IJ_{n-1}\cap \BBB D) }.$$
Here $\tilde{B}^{-k}$ refers to any branch of $(\tilde{B}^{\circ k})^{-1}$ of the form $\tilde{B}^{-1} \circ \ldots \circ \tilde{B}^{-1}$ ($k$ times) where each branch of $\tilde{B}^{-1}$ satisfies $\tilde{B}^{-1}(\BBB D)\cap \BBB D =\emptyset$.\\ \\
{\it Proof of \thmref{hitD}.} One direction is quite easy to see: If the orbit of $c$ never hits the closed unit disk and escapes to infinity, one can easily show that $J(B)$ is disconnected, exactly as in the polynomial case, by considering the B\"{o}ttcher map of the immediate basin of attraction of $\infty$ for $B$ (see for example \cite{Milnor1}, Theorem 17.3).
Conversely, suppose that the orbit of the critical point $c$ either hits $\overline {\BBB D}$ or stays bounded in $\BBB C \smallsetminus
\overline {\BBB D}$. Then the Julia set $J(P)$ is connected, where $P={\cal S}(B)$. Consider the sequence of compact sets $J_n$ in (\ref{eqn:jn}). By \propref{aux}(c), $J_0$ is connected and it follows by induction on $n$ that each $J_n$ defined by (\ref{eqn:jn}) is connected. Therefore (\ref{eqn:jb}) shows that $J(B)$ is connected. Hence $B\in {\cal C}_5(\theta)$. $\Box$\\
In what follows, we prove that the connectedness locus ${\cal C}_5(\theta)$ is compact. Other facts, e.g., having only two complementary components, or connectivity, will be proved later using surgery (see \corref{c5con} and \corref{c5full}). We would like to remark that unlike the case of cubic polynomials, it is often difficult to prove anything about the topology of the Blaschke connectedness locus, partly because these Blaschke products depend on their critical points in a complicated way, but more importantly because the family $\mu\mapsto B_{\mu}$ does not depend holomorphically on $\mu$.
\begin{lem}
\label{hn}
Let $\{ B_n \}$ be an arbitrary sequence of Blaschke products in ${\cal B}_5^{cm}(\theta)$ and $h_n:\BBB T \rightarrow \BBB T$ be the unique normalized quasisymmetric homeomorphism which conjugates $B_n|_{\BBB T}$ to the rigid rotation $R_{\theta}$. Let $H_n$ denote the Douady-Earle extension of $h_n$. Then the sequence $\{ H_n \}$ has a subsequence which converges locally uniformly to a quasiconformal homeomorphism of $\BBB D$.
\end{lem}
It follows that the sequence $\{ H_n^{-1}(0) \}$ stays in a compact subset of the unit disk.
\begin{pf}
Regarding $\BBB T$ as the quotient $\BBB R / \BBB Z$, we can lift each $h_n$ to a $k(\theta)$-quasisymmetric homeomorphism $\tilde{h}_n: \BBB R \rightarrow \BBB R$ which fixes $0$ and satisfies $\tilde{h}_n(x+1)=\tilde{h}_n(x)+1$ for all $x$. The space of all uniformly quasisymmetric normalized homeomorphisms of the real line is compact (\cite{Lehto}, Lemma 5.1), hence a subsequence of $\{ \tilde{h}_n \}$ converges uniformly to a $k(\theta)$-quasisymmetric homeomorphism $\tilde{h}:\BBB R \rightarrow \BBB R$. This homeomorphism descends to a quasisymmetric homeomorphism $h:\BBB T \rightarrow \BBB T$, which is the uniform limit of the corresponding subsequence of $\{ h_n \}$. On the other hand, the Douady-Earle extension depends continuously on the circle homeomorphism \cite{Douady-Earle}. It follows that the corresponding subsequence of $\{ H_n \}$ converges locally uniformly on $\BBB D$ to the extension of $h$.
\end{pf}
\begin{cor}
\label{normal}
Let $B \in {\cal B}_5^{cm}(\theta)$ and $\varphi_B: \BBB C \rightarrow \BBB C$ be the quasiconformal homeomorphism which conjugates the modified Blaschke product $\tilde B$ to the cubic $P={\cal S}(B)$ as in (\ref{eqn:pmodb}): $P=\varphi_B \circ \tilde{B} \circ \varphi_B^{-1}$. Then the family ${\cal F}= \{ \varphi_B \}_{B \in {\cal B}_5^{cm}(\theta)}$ is normal.
\end{cor}
\begin{pf}
By the surgery construction as described in Section \ref{sec:surgery}, $\cal F$ is uniformly quasiconformal. Let $\varphi_n=\varphi_{B_n}$ be a sequence in $\cal F$. Let $B_n=B_{\mu_n}$ and choose a subsequence, still denoted by $B_n$, such that $|\mu_n| \geq 1$ for all $n$ (the case $|\mu_n| \leq 1$ is similar).
By the way we normalized $\varphi_n$,
$$\varphi_n(H_n^{-1}(0))=0, \ \ \ \varphi_n(1)=1, \ \ \ \varphi_n(\infty)=\infty.$$
But $\{ H_n^{-1}(0) \}$ lives in a compact subset of $\BBB D$. Hence the three points $H_n^{-1}(0)$, $1$ and $\infty$ have mutual spherical distances larger than some positive constant independent of $n$. This implies equicontinuity of $\{ \varphi_n \}$ by a standard theorem on quasiconformal mappings (\cite{Lehto}, Theorem 2.1).
\end{pf}
Now we show that the surgery map constructed in the previous section is proper.
\begin{prop}
\label{proper}
The surgery map ${\cal S}:{\cal B}_5^{cm}(\theta) \rightarrow {\cal P}_3^{cm}(\theta)$ is proper.
\end{prop}
\begin{pf}
Let the sequence $\{ B_n \}$ leave every compact set in ${\cal B}_5^{cm}(\theta)$ and consider the corresponding cubics $P_n={\cal S}(B_n)=\varphi_n \circ \tilde B_n \circ \varphi_n^{-1}$. To be more specific, let us assume that $B_n=B_{\mu_n}$ as in Section \ref{sec:blapar}, and the critical point $\mu_n$ tends to infinity. Clearly $P_n=P_{c_n}$, where $c_n=\varphi_n(\mu_n)$. Since $\{ \varphi_n \}$ is normal by the above corollary, we conclude that $c_n \rightarrow \infty$.
\end{pf}
\begin{prop}
\label{c5com}
${\cal C}_5(\theta)$ is compact and invariant under the inversion $\mu \mapsto 1/\mu$. As a result, there exists an unbounded component $\Lambda_{ext}$ of $\BBB C^{\ast}\smallsetminus {\cal C}_5(\theta)$ which contains a punctured neighborhood of $\infty$ and a corresponding component $\Lambda_{int}$ which is mapped to it by $\mu \mapsto 1/\mu$.
\end{prop}
\begin{pf}
The invariance follows from the definition of ${\cal B}_5^{cm}(\theta)$ and its identification with $\BBB C^{\ast}$. Note that the unit circle $\BBB T\subset {\cal B}_5^{cm}(\theta)$ is contained in ${\cal C}_5(\theta)$ by \thmref{hitD}. So $\Lambda_{ext}$ and $\Lambda_{int}$ are actually distinct components of $\BBB C^{\ast}\smallsetminus {\cal C}_5(\theta)$.
${\cal C}_5(\theta)$ is clearly closed by \thmref{hitD}. To prove that it is bounded, suppose not; then there is a sequence $B_{\mu_n}\in {\cal C}_5(\theta)$ with $\mu_n \rightarrow \infty$ as in the above proof. It follows from \propref{aux}(c) and \thmref{hitD} that the corresponding polynomials $P_{c_n}={\cal S}(B_{\mu_n})=\varphi_n \circ \tilde{B}_{\mu_n} \circ \varphi_n^{-1}$ have connected Julia sets. By \propref{m3com}, $1/30 \leq |c_n| \leq 30$. This contradicts properness of $\cal S$.
\end{pf}
\vspace{0.17in}
\section{Cubic Quasiconformal Conjugacy Classes}
\label{sec:qc}
In this section we prove that quasiconformal conjugacy classes in ${\cal P}_3^{cm}(\theta)$ are either open and connected or single points. This result, together with the fact that any holomorphic family of rational maps with constant critical orbit relations forms a quasiconformal conjugacy class, enables us to completely characterize the quasiconformal conjugacy classes in ${\cal P}_3^{cm}(\theta)$.
\begin{thm}[Parametrization of QC Conjugacy Classes]
\label{qcpar}
Let $P_{c_0},P_{c_1}$ be distinct cubics in ${\cal P}_3^{cm}(\theta)$ and let $\varphi:\BBB C \rightarrow \BBB C$ be a $K$-quasiconformal homeomorphism which conjugates $P_{c_0}$ to $P_{c_1}$, i.e., $\varphi\circ P_{c_0}=P_{c_1}\circ \varphi$ and $\varphi(c_0)=c_1$. Then there exists a nonconstant holomorphic map $t\mapsto c_t$ from an open disk $\BBB D(0,r)\ (r>1)$ into $\BBB C^{\ast}$ which maps $0$ to $c_0$ and $1$ to $c_1$, such that for every $t\in \BBB D(0,r)$, $P_{c_0}$ is conjugate to $P_{c_t}$ by a $K_t$-quasiconformal homeomorphism $\varphi_t: \BBB C\rightarrow \BBB C$. Moreover, $K_t\rightarrow 1$ as $t\rightarrow 0$.
\end{thm}
\begin{pf}
The idea of the proof goes back to Douady and Hubbard \cite{Douady-Hubbard2}: Define a conformal structure $\sigma$ on $\BBB C$ by $\sigma=\varphi^{\ast}\sigma_0$, where, as usual, $\sigma_0$ is the standard conformal structure on $\BBB C$. (To simplify the notation, in what follows we identify a conformal structure on $\BBB C$ with its associated Beltrami differential.) Since $P_{c_1}$ is holomorphic, $P_{c_0}$ has to preserve $\sigma$. Since $\varphi$ is quasiconformal, $\| \sigma \|_{\infty}<1$. Define a one-parameter family $\{ \sigma_t \}$ of complex-analytic deformations of $\sigma$ by $\sigma_t=t\sigma$,
where $t\in \BBB D(0,r)$ for some $r>1$ such that $r\| \sigma \| _{\infty}<1$.
By the Measurable Riemann Mapping Theorem of Ahlfors and Bers \cite{Ahlfors-Bers}, there exists a unique quasiconformal homeomorphism $\varphi_t$ of the plane which solves the Beltrami equation $\varphi_t^{\ast} \sigma_0=\sigma_t$ and fixes $0$, $1$ and $\infty$. Define $P^t=\varphi_t\circ P_{c_0}\circ \varphi_t^{-1}$. Since $P_{c_0}$ is holomorphic, it acts as a pure rotation on Beltrami differentials. Hence $P_{c_0}^{\ast}\sigma=\sigma$ implies $P_{c_0}^{\ast}\sigma_t=\sigma_t$ and therefore $P^t$ is a quasiregular self-map of the plane which preserves $\sigma_0$ and is conjugate to a cubic polynomial. It is then easy to see that $P^t$ itself is a cubic polynomial with a fixed Siegel disk of rotation number $\theta$ centered at $0$ with a marked critical point at $z=1$.
Note that $t\mapsto \sigma_t$ is holomorphic, so the same is true for $t\mapsto \varphi_t$ and hence $t\mapsto P^t$ by the analytic dependence of the solutions of the Beltrami equation on parameters \cite{Ahlfors-Bers}. Therefore the map $t\mapsto c_t$ which defines the second critical point of $P^t$ so that $P^t=P_{c_t}$ is holomorphic. It is easy to see that $c_t$ has all the required properties.
\end{pf}
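\noindent
{\bf Remark.} For the reader's convenience, the construction in the above proof can be summarized in one chain of relations (the last equality is not displayed in the proof, but follows because the conjugacy $\varphi_t$ carries the free critical point of $P_{c_0}$ to that of $P^t$):
$$\sigma=\varphi^{\ast}\sigma_0, \ \ \ \sigma_t=t\sigma, \ \ \ \varphi_t^{\ast}\sigma_0=\sigma_t, \ \ \ P_{c_t}=\varphi_t \circ P_{c_0} \circ \varphi_t^{-1}, \ \ \ c_t=\varphi_t(c_0),$$
for all $t\in \BBB D(0,r)$.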
\begin{cor}
\label{qcopen}
Quasiconformal conjugacy classes in ${\cal P}_3^{cm}(\theta)$ are either single
points or open and connected. In particular, cubics on the boundary $\partial {\cal M}_3(\theta)$ are quasiconformally rigid, i.e., their conjugacy classes are single points.\ $\Box$
\end{cor}
\begin{thm}[Capture is an open condition]
\label{capop}
Let $P_{c_0} \in {\cal P}_3^{cm}(\theta)$ be a capture cubic. Then there is an open neighborhood $U$ of $c_0$ such that for every $c\in U$, $P_c$ is also capture.
\end{thm}
\begin{pf}
When $\theta$ is of bounded type, we will see that the boundaries of the Siegel disks of cubics in ${\cal P}_3^{cm}(\theta)$ move continuously (see \thmref{move}), and in this case the theorem follows immediately. The following proof uses a standard argument in quasiconformal deformation theory which is similar to the proof of \thmref{qcpar}. I am indebted to X. Buff, who pointed out to me that a deformation approach would work in the general case: To fix the ideas, let us assume that $P_{c_0}^{\circ k}(c_0)\in \Delta_{c_0}$ and $k\geq 1$ is the smallest such integer. First assume that $P_{c_0}^{\circ k}(c_0) \neq 0$. Let $A\subset \Delta_{c_0}$ be the annulus bounded by $\partial \Delta_{c_0}$ and the analytic invariant curve in $\Delta_{c_0}$ passing through $P_{c_0}^{\circ k}(c_0)$. Take a conformal isomorphism $\psi: A \stackrel{\simeq}{\longrightarrow} {\BBB A}(1,\epsilon)$, with $\epsilon=e^{2 \pi mod(A)}>1$, which conjugates $P_{c_0}$ on $A$ to the rotation on ${\BBB A}(1,\epsilon)$. Postcompose $\psi$ with a (nonconformal) dilation ${\BBB A}(1,\epsilon) \rightarrow {\BBB A}(1,\epsilon^2)$ to get a quasiconformal homeomorphism $\varphi: A \rightarrow {\BBB A}(1,\epsilon^2)$ conjugating $P_{c_0}$ to the rotation. Define a $P_{c_0}$-invariant conformal structure $\sigma$ on $\BBB C$ by putting $\sigma=\varphi^{\ast}\sigma_0$ on $A$ and pulling it back by the inverse branches of $P_{c_0}$ to the entire grand orbit of $A$. Set $\sigma=\sigma_0$ everywhere else. As in the proof of \thmref{qcpar}, we define $\sigma_t=t\sigma$ for $t\in {\BBB D}(0,r)$ for some $r>1$, solve the Beltrami equation $\varphi_t^{\ast}\sigma_0=\sigma_t$ and set $P^t=\varphi_t \circ P_{c_0} \circ \varphi_t^{-1}$. Then $P^t$ is a capture cubic in ${\cal P}_3^{cm}(\theta)$ and $P^0=P_{c_0}$.
The holomorphic mapping $t\mapsto P^t$ is not constant because $mod(\varphi_1(A))$ is the same as the modulus of $A$ equipped with the conformal structure $\sigma$, which in turn is $(1/2 \pi) \log(\epsilon^2)=2\ mod(A)$. Hence $P^1\neq P^0$ and the mapping $t\mapsto P^t$ is open.
Now consider the case where $P_{c_0}^{\circ k}(c_0)=0$. In this case, by \corref{lower bound}, the conformal capacity of $\Delta_c$ has a positive lower bound for all $c$ sufficiently close to $c_0$. It follows that there exists an $\epsilon >0$ such that for all $c$ close to $c_0$, $\Delta_c \supset {\BBB D}(0, \epsilon)$. Hence a small perturbation of $P_{c_0}$ will still be a capture cubic.
\end{pf}
By a {\it center} of a hyperbolic-like component $U\subset{\cal M}_3(\theta)$ we mean a cubic $P_c\in U$ with one of the critical points $c$ or $1$ being periodic. Similarly, a center of a capture component will be a cubic with one critical point eventually mapped
to the indifferent fixed point at the origin.
\begin{lem}[Existence of Centers]
\label{cen}
Every hyperbolic-like or capture component of the interior of ${\cal M}_3(\theta)$ has a center.
\end{lem}
By the remark after the proof, centers of hyperbolic-like or capture components are unique when $\theta$ is of bounded type.
\begin{pf}
First let $U$ be a hyperbolic-like component. For every $c\in U$, consider the multiplier $m(c)$ of the unique attracting periodic orbit of $P_c$. The mapping $c\mapsto m(c)$ from $U$ into $\BBB D$ is easily seen to be proper and holomorphic. Hence it vanishes at a nonempty finite set of points in $U$.
Now let $U$ be capture. To be more specific, let $k\geq 1$ be the smallest integer such that $P_c^{\circ k}(c)$ belongs to the Siegel disk $\Delta_c$ for every $c\in U$. Since $P_c$ is $J$-stable by \thmref{unstable}, the boundary of $\Delta_c$ moves holomorphically. Then, as in the proof of \thmref{invline}, there is a holomorphically varying choice of the Riemann maps $\zeta_c:\BBB D \rightarrow \Delta_c$ with $\zeta_c(0)=0$. Define a map $m:U\rightarrow \BBB D$ by
$$m(c)=\zeta_c^{-1}(P_c^{\circ k}(c)).$$
(In the language of the definition before \lemref{ext}, this is just the conformal position with respect to $\zeta_c$ of the captured image of the critical point $c$ of $P_c$.)
Clearly $m$ is holomorphic. Let $c_n \in U$ be any sequence which converges to $c\in \partial U$ as $n\rightarrow \infty$. For simplicity, put $\zeta_{c_n}=\zeta_n$. Let $z_n=P_{c_n}^{\circ k}(c_n)\in \Delta_{c_n}$ and $w_n=m(c_n)=\zeta_n^{-1}(z_n)\in \BBB D$. If $w_n$ does not converge to the unit circle, we can find a subsequence $w_{n(j)}$ such that $w_{n(j)}\rightarrow w\in \BBB D$ as $j\rightarrow \infty$. Since the family of univalent functions $\{ \zeta_n :\BBB D\rightarrow \BBB C \}$ is normal, by passing to a further subsequence if necessary, we may assume that $\zeta_{n(j)}\rightarrow \zeta$ locally uniformly on $\BBB D$. Clearly $\zeta (\BBB D)\subset \Delta_c$. Therefore, $\zeta(w)=\lim_j \zeta_{n(j)}(w_{n(j)})=\lim_j z_{n(j)}=P_c^{\circ k}(c)\in \Delta_c$. But this means that $P_c$ is capture, which contradicts $c\in \partial U$. This proves that $w_n$ converges to the unit circle. Hence $m$ is a proper map. Now, as before, $m^{-1}(0)$ has to be nonvacuous and finite.
\end{pf}
\vspace{0.15in}
\noindent
{\bf Remark.} When the rotation number $\theta$ is of bounded type, there is a simple proof of the {\it uniqueness} of centers. (Compare \cite{MCM} or \cite{JACK}, where this is shown for every hyperbolic component in a holomorphic family of polynomial maps.) We sketch such a proof briefly. By \corref{qcopen}, it is enough to prove that any two centers for a component are quasiconformally conjugate. First let $U$ be a hyperbolic-like component and $c_1$ and $c_2$ be centers of $U$. Let $P_i=P_{c_i}$. Then, as in the proof of \thmref{invline}, there is a conformal conjugacy $\varphi: \BBB C \smallsetminus K(P_1) \stackrel{\simeq}{\longrightarrow} \BBB C \smallsetminus K(P_2)$ which extends quasiconformally to the whole plane. Let $z_1=c_1 \mapsto \cdots \mapsto z_p \mapsto z_1$ be the superattracting cycle of $P_1$ which is contained in the cycle $U_1 \mapsto \cdots \mapsto U_p \mapsto U_1$ of Fatou components. By an argument similar to the proof of \thmref{ren}, there exists a quadratic-like restriction $P_1^{\circ p}:W \rightarrow W'$ with $U_1 \subset W$ which is hybrid equivalent to $z\mapsto z^2$. Similarly we get a quadratic-like restriction $P_2^{\circ p}:W_2 \rightarrow W_2'$ hybrid equivalent to $z\mapsto z^2$. This gives a quasiconformal conjugacy between $P_1$ and $P_2$ on $U_1$, and then on the grand orbit of $U_1$ by taking pull-backs, which extends $\varphi$ to this set. Since the boundary of $\Delta_{P_1}$ is a Jordan curve by \thmref{main}, \lemref{ext} allows us to extend $\varphi$ to a quasiconformal conjugacy on the whole plane. (It is easy to check that $\varphi$ is conformal away from the Julia set $J(P_1)$. But $J(P_1)$ has measure zero by \corref{measure}, hence $\varphi$ is in fact conformal.)
Now let $U$ be a capture component with $c_1$ and $c_2$ being two centers of $U$. As before, there is a conformal conjugacy $\varphi: \BBB C \smallsetminus \overline{GO(\Delta_{P_1})} \stackrel{\simeq}{\longrightarrow} \BBB C \smallsetminus \overline{GO(\Delta_{P_2})}$ which extends quasiconformally to the entire plane. Again, by \thmref{main} and \lemref{ext}, $\varphi$ can be extended to a quasiconformal conjugacy $\BBB C \rightarrow \BBB C$.\\
Now we can completely characterize the quasiconformal conjugacy classes in ${\cal P}_3^{cm}(\theta)$.
\begin{thm}[QC Conjugacy Classes in ${\cal P}_3^{cm}(\theta)$]
\label{qcclass}
Every quasiconformal conjugacy class in ${\cal P}_3^{cm}(\theta)$ is one from the following list:
\begin{enumerate}
\item[(a)]
A hyperbolic-like or capture component of the interior of ${\cal M}_3(\theta)$ with the center(s) removed.
\item[(b)]
The two components $\Omega_{ext}$ and $\Omega_{int}$.
\item[(c)]
A queer component of the interior of ${\cal M}_3(\theta)$.
\item[(d)]
A center of a hyperbolic-like or capture component.
\item[(e)]
A single point on the boundary $\partial {\cal M}_3(\theta)$.
\end{enumerate}
\end{thm}
\begin{pf}
\corref{qcopen} shows that no conjugacy class intersects two distinct members of the above list. It also proves that (d) and (e) are in fact conjugacy classes. Also the proof of \thmref{invline} shows that every queer component is a conjugacy class. So it remains to prove that (a) and (b) are quasiconformal conjugacy classes.
Recall that a {\it critical orbit relation} for a rational map $f$ on the sphere
is a coincidence of the form $f^{\circ k}(c_1)=f^{\circ n}(c_2)$, where $c_1$ and $c_2$ are critical points of $f$ and $k$ and $n$ are nonnegative integers
with $k+n>0$ (we may have $c_1=c_2$). A holomorphic family $\cal F$ of
rational maps has {\it constant critical orbit relations} if every critical orbit relation for $f\in \cal F$ persists under perturbation of $f$ in $\cal F$. Any two rational maps in a holomorphic family with constant critical orbit relations are quasiconformally conjugate (\cite{Mc-Sul}, Theorem 2.7). In other words, critical orbit relations are the only obstruction to constructing quasiconformal conjugacies.
Now suppose that $U$ is a hyperbolic-like or capture component with the center(s) removed, or $U=\Omega_{ext}$ or $\Omega_{int}$. Then the family ${\cal F} =\{ P_c \}_{c\in U}$ has no critical orbit relation at all. Therefore, $U$ has to be a quasiconformal conjugacy class.
\end{pf}
\vspace{0.17in}
\section{Renormalizable Blaschke Products}
\label{sec:renbla}
Here we consider those Blaschke products in ${\cal B}_5^{cm}(\theta)$ out of which one can ``extract'' the standard degree $3$ Blaschke product $f_{\theta}$ to be defined below. The importance of this particular Blaschke product comes from the fact that it provides a model for the dynamics of the quadratic polynomial $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$. It will be convenient to define renormalizable Blaschke products in ${\cal B}_5^{cm}(\theta)$ as ones which after the surgery give rise to renormalizable cubics in ${\cal P}_3^{cm}(\theta)$ (see Section \ref{sec:rencub}). In what follows we will have to work with a {\it symmetrized} version of the notion of a quadratic-like map in order to show that any renormalizable Blaschke product is quasiconformally conjugate near the Julia set of its renormalization to the standard map $f_{\theta}$. The proof of this fact resembles the proof of \cite{Douady-Hubbard2} that every hybrid class of polynomial-like maps contains a polynomial.
First we include the following simple fact for completeness.
\begin{prop}
\label{bla3}
Let $0<\theta<1$ be a given irrational number and $f:\overline{\BBB C}\rightarrow\overline{\BBB C}$ be a degree $3$ Blaschke product with a superattracting fixed point at the origin and a double critical point at $z=1$. Let the rotation number of $f|_{\BBB T}$ be $\theta$. Then there exists a unique $0<t(\theta)<1$ such that
\begin{equation}
\label{eqn:ftet}
f(z)=f_{\theta} (z)=e^{2 \pi i t(\theta)} z^2 \left ( \frac{z-3}{1-3z} \right ).
\end{equation}
\end{prop}
\begin{pf}
Clearly $f(z)=e^{2 \pi i t} z^2 \displaystyle{ \left ( \frac{z-a}{1-\overline{a}z} \right ) }$, with $|a|>1$ and $0<t<1$. The fact that $z=1$ is a double critical point of $f$ implies $a=3$. The rotation number of $f|_{\BBB T}$ as a function of $t$ is continuous and strictly monotone at all irrational values \cite{Katok}. Hence there exists a unique $t(\theta)$ for which this rotation number is $\theta$.
\end{pf}
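\noindent
{\bf Remark.} The computation behind the equality $a=3$ (a routine check, not carried out above) runs as follows. The finite critical points of $f$ other than $0$ are the roots of the quadratic equation
$$2\overline{a}z^2-(3+|a|^2)z+2a=0.$$
If $z=1$ is a double root, then the product of the roots gives $a/\overline{a}=1$, so $a$ is real, and the sum of the roots gives $(3+a^2)/(2a)=2$, i.e., $a^2-4a+3=(a-1)(a-3)=0$. Since $|a|>1$, we conclude that $a=3$.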
\realfig{ft}{test4.ps}{{\sl The Julia set of $f_{\theta}$ for $\theta=(\sqrt{5}-1)/2$.}}{9 cm}
\noindent
{\bf Remark.} Computer experiments give the value $t(\theta)=0.613648$ for the golden mean $\theta=(\sqrt{5}-1)/2$. \figref{ft} shows the Julia set of $f_{\theta}$ for this value of $\theta$. This standard degree $3$ Blaschke product was introduced by Douady, Ghys, Herman and Shishikura as a model for the quadratic $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$ in the case where $\theta$ is an irrational number of bounded type \cite{Douady2}.
It was also used in \cite{Petersen} to prove that the Julia set of $Q_{\theta}$ is locally connected and has measure zero. \\ \\
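{\bf Remark (numerical sketch).} The value of $t(\theta)$ quoted above can be reproduced by a short computation. The following Python sketch (ours, added for illustration and not part of the original text) approximates the rotation number of $f_t|_{\BBB T}$ by averaging angular displacements along an orbit, and then locates $t(\theta)$ by bisection, using the fact from \propref{bla3} that the rotation number is continuous and monotone in $t$. Each displacement is taken mod $1$ in $[0,1)$; this is legitimate here because the lift moves every point forward by less than a full turn (note that the displacement at the critical point $z=1$ is exactly $t$, since $f_t(1)=e^{2\pi i t}$).

```python
import cmath
import math

def f(z, t):
    # The degree 3 Blaschke product f_t(z) = e^{2 pi i t} z^2 (z-3)/(1-3z).
    return cmath.exp(2j * math.pi * t) * z**2 * (z - 3) / (1 - 3 * z)

def rotation_number(t, n=5000):
    # Average angular displacement of the circle map induced by f_t on |z|=1.
    # Displacements are taken in [0,1); for this family the lift moves every
    # point forward by less than a full turn, so the average agrees with the
    # classical rotation number up to O(1/n).
    x, total = 0.25, 0.0
    for _ in range(n):
        w = f(cmath.exp(2j * math.pi * x), t)
        y = (cmath.phase(w) / (2 * math.pi)) % 1.0
        total += (y - x) % 1.0
        x = y
    return total / n

def solve_t(theta, iters=30):
    # Bisection: the rotation number is continuous and nondecreasing in t.
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if rotation_number(mid) < theta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

golden = (math.sqrt(5) - 1) / 2
t_approx = solve_t(golden)
print(t_approx)  # close to the reported value 0.613648
```

The accuracy of the orbit average follows from the standard estimate $|F^{\circ n}(x)-x-n\rho|\leq 1$ for any lift $F$ of a circle homeomorphism with rotation number $\rho$.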
{\bf Definition.} A Blaschke product $B\in {\cal B}_5^{cm}(\theta)$ is called {\it renormalizable} if ${\cal S}(B)\in {\cal P}_3^{cm}(\theta)$ is a renormalizable cubic, as defined in Section \ref{sec:rencub}.
\begin{thm}
\label{b3like}
Let $B\in {\cal B}_5^{cm}(\theta)$ be renormalizable. Then there exists a pair of annuli $W'\Subset W$, both containing the unit circle and symmetric with respect to it, and a quasiconformal homeomorphism $\varphi_B:\BBB C\rightarrow \BBB C$ such that:
\begin{enumerate}
\item[(a)]
$B:\partial W'\rightarrow \partial W$ is a degree $2$ covering map,
\item[(b)]
$\varphi_B\circ I=I\circ \varphi_B,$
\item[(c)]
$(\varphi_B\circ B)(z)=(f_{\theta}\circ \varphi_B)(z)$ for all $z\in W'$.
\end{enumerate}
Moreover, $\varphi_B$ can be chosen to be conformal (i.e., $\overline{\partial} \varphi_B =0$) on $K(B)=\bigcap_{n\geq 0}B^{-n}(W')$.
\end{thm}
\begin{pf}
Consider the cubic $P={\cal S}(B)=\varphi \circ \tilde{B}\circ \varphi^{-1}\in {\cal P}_3^{cm}(\theta)$ which is renormalizable. Consider the quadratic-like restriction $P|_U:U\rightarrow V$ and the corresponding regions $U_1=\varphi^{-1}(U)$ and $V_1=\varphi^{-1}(V)$. Clearly $U_1\Subset V_1$ and both contain the closed unit disk. Define the symmetrized regions
$$W'=U_1\cap I(U_1),\ \ \ \ \ W=V_1\cap I(V_1)$$
which are topological annuli with $W'\Subset W$. Note that $B$ sends $\partial W'$ to $\partial W$ in a 2-to-1 fashion.
Now extend $B|_{W'}$ to the whole complex plane by gluing it to the polynomial $z\mapsto z^2$ near $0$ and $\infty$ as follows: Let $r>1$ and $\omega :\BBB C\smallsetminus W'\rightarrow \BBB C\smallsetminus \BBB A (r^{-1},r)$ be a diffeomorphism such that
$$\begin{array}{ll}
\omega\circ I=I\circ \omega, & \\
\omega (B(z))=\omega(z)^2, & z\in \partial W'.
\end{array}$$
Define the extension of $B|_{W'}$ by
$$F(z)= \left \{
\begin{array}{ll}
B(z) & z\in W'\\
\omega^{-1}(\omega(z)^2) & z\notin W'
\end{array}
\right. $$
Note that $F$ is a quasiregular degree $3$ self-map of the sphere, $F\circ I=I\circ F$, and every point outside $W'$ will converge to $0$ or $\infty$ under the iteration of $F$.
Define a conformal structure $\sigma$ on the plane as follows: Put $\sigma=\omega^{\ast}\sigma_0$ on $\BBB C\smallsetminus W'$, and pull it back by $F^{\circ n}$
to all the components of $F^{-n}(\BBB C\smallsetminus W')\cap W'$. Finally, on $K(B)$ set $\sigma=\sigma_0$. It is easy to see that $\sigma$ has bounded dilatation on the plane, is symmetric with respect to the unit circle, and $F^{\ast}(\sigma)=\sigma$. By the Measurable Riemann Mapping Theorem of Ahlfors and Bers, there exists a unique quasiconformal homeomorphism $\varphi_B$ of the plane which fixes $0, 1, \infty$, such that $\varphi_B^{\ast}(\sigma_0)=\sigma$. The conjugate map $f=\varphi_B\circ F\circ \varphi_B^{-1}$ is easily seen to be a degree $3$ rational map on the sphere. The quasiconformal homeomorphism $I\circ \varphi_B\circ I$ also fixes $0,1,\infty$ and pulls $\sigma_0$ back to $\sigma$ because $\sigma$ is symmetric with respect to $\BBB T$. By uniqueness, $\varphi_B=I\circ \varphi_B\circ I$. This implies that $f$ commutes with $I$, hence it is a Blaschke product. By \propref{bla3}, $f=f_{\theta}$, and we are done.
\end{pf}
While the above theorem establishes a direct connection between some Blaschke products in ${\cal B}_5^{cm}(\theta)$ and $f_{\theta}$, it is curious to note the following entirely different relation:
\begin{thm}
\label{q=3}
Let $B_n=B_{\mu_n}$ be any sequence in ${\cal B}_5^{cm}(\theta)$ such that $\mu_n \rightarrow \infty$ as $n \rightarrow \infty$. Then $B_n\rightarrow f_{\theta}$ locally uniformly on $\BBB C$ as $n \rightarrow \infty$.
\end{thm}
In other words, $f_{\theta}$ can be regarded as the point at infinity of the parameter space ${\cal B}_5^{cm}(\theta)$.
\begin{pf}
As in Section \ref{sec:blapar}, let
$$ B_n :z\mapsto e^{2 \pi i t_n} z^3 \left ( \frac{z-p_n}{1-\overline{p}_n z} \right ) \left ( \frac{z-q_n}{1-\overline{q}_nz} \right ) .$$
The logarithmic derivative $B_n'/B_n$ and its derivative $(B_n B_n'' -(B_n')^2)/(B_n)^2$ both vanish at $z=1$. A straightforward computation shows that these two conditions translate into
\begin{equation}
\label{eqn:yek}
\frac{|p_n|^2-1}{|p_n-1|^2}+\frac{|q_n|^2-1}{|q_n-1|^2}=3,
\end{equation}
and
\begin{equation}
\label{eqn:do}
\frac{(p_n-\overline{p}_n)(|p_n|^2-1)}{|p_n-1|^4}+\frac{(q_n-\overline{q}_n)(|q_n|^2-1)}{|q_n-1|^4}=0.
\end{equation}
Let us write $a_n \leadsto a$ when $a$ is an accumulation point of the sequence $a_n$. Since $\mu_n \rightarrow \infty$, $p_n$ and $q_n$ cannot both stay bounded. Hence one of them, say $p_n$, gets arbitrarily large, i.e., $p_n \leadsto \infty$. Then (\ref{eqn:yek}) shows that $(|q_n|^2-1)/|q_n-1|^2 \leadsto 2$, or equivalently, $|q_n-2| \leadsto 1$ but $q_n$ stays away from $z=1$. On the other hand, (\ref{eqn:do}) shows that $(q_n-\overline{q}_n)(|q_n|^2-1)/|q_n-1|^4 \leadsto 0$, hence $(q_n-\overline{q}_n)/|q_n-1|^2 \leadsto 0$. Since $q_n$ does not accumulate on $z=1$, this implies that $(q_n-\overline{q}_n) \leadsto 0$. Near the circle $|z-2|=1$ this can happen only if $q_n \leadsto 3$.
We have shown that there exists a subsequence $B_{n(j)}$ such that $p_{n(j)} \rightarrow \infty$ and $q_{n(j)} \rightarrow 3$ as $j\rightarrow \infty$. Since the rotation number depends continuously on the circle map, it is easy to see that this implies $B_{n(j)} \rightarrow f_{\theta}$ locally uniformly on $\BBB C$.
\end{pf}
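{\bf Remark (numerical sketch).} The algebra behind (\ref{eqn:yek}) and (\ref{eqn:do}) can be double-checked numerically. Writing $L=B'/B$ for the logarithmic derivative of $B(z)=e^{2\pi i t}z^3\frac{z-p}{1-\overline{p}z}\cdot\frac{z-q}{1-\overline{q}z}$, one finds that $L(1)$ is automatically real and equals $3$ minus the left-hand side of (\ref{eqn:yek}), while the left-hand side of (\ref{eqn:do}) equals $i\,{\rm Im}\, L'(1)$. The following Python sketch (ours, not part of the original text) verifies these two identities for randomly chosen $p$ and $q$:

```python
import random

def L(z, p, q):
    # Logarithmic derivative B'/B of B(z) = e^{2 pi i t} z^3
    # (z-p)/(1-conj(p)z) * (z-q)/(1-conj(q)z); the rotation factor drops out.
    pc, qc = p.conjugate(), q.conjugate()
    return 3/z + 1/(z-p) + pc/(1-pc*z) + 1/(z-q) + qc/(1-qc*z)

def Lprime(z, p, q):
    # Derivative of L, i.e. (B B'' - (B')^2)/B^2.
    pc, qc = p.conjugate(), q.conjugate()
    return (-3/z**2 - 1/(z-p)**2 + pc**2/(1-pc*z)**2
            - 1/(z-q)**2 + qc**2/(1-qc*z)**2)

random.seed(1)
checked = 0
while checked < 100:
    p = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    q = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    if abs(p - 1) < 0.3 or abs(q - 1) < 0.3:
        continue  # keep z = 1 away from the zeros and poles of B
    lhs1 = (abs(p)**2 - 1)/abs(p-1)**2 + (abs(q)**2 - 1)/abs(q-1)**2
    lhs2 = ((p - p.conjugate())*(abs(p)**2 - 1)/abs(p-1)**4
            + (q - q.conjugate())*(abs(q)**2 - 1)/abs(q-1)**4)
    # L(1) is real, and L(1) = 0 is exactly equation (yek):
    assert abs(L(1, p, q).imag) < 1e-8
    assert abs(L(1, p, q) - (3 - lhs1)) < 1e-8
    # The left-hand side of (do) is i * Im L'(1):
    assert abs(lhs2 - 1j*Lprime(1, p, q).imag) < 1e-8
    checked += 1
```

Since both identities hold for arbitrary $p$ and $q$, the vanishing of $L$ and $L'$ at $z=1$ is equivalent to (\ref{eqn:yek}) and (\ref{eqn:do}) respectively, as claimed in the proof.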
Consider a sequence $B_n=B_{\mu_n}$ going off to infinity as in the previous theorem, and the corresponding cubics $P_n=P_{c_n}={\cal S}(B_n)=\varphi_n \circ \tilde{B}_n \circ \varphi_n^{-1}$ as in (\ref{eqn:pmodb}). By the previous theorem, $B_n\rightarrow f_{\theta}$, so $\tilde{B}_n\rightarrow \tilde{f_{\theta}}$. Since $\{ \varphi_n \}$ is normal by \corref{normal}, by passing to a subsequence if necessary, $\varphi_n$ converges to a quasiconformal homeomorphism $\varphi$. Since the surgery map is proper by \propref{proper}, $c_n \rightarrow \infty$. By examining the normal form (\ref{eqn:normform}), we see that $P_n \rightarrow Q$, where $Q:z\mapsto \lambda z \left(1-\tfrac{1}{2}z\right)$ is affinely conjugate to $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$. Hence, $Q=\varphi \circ \tilde{f_{\theta}} \circ \varphi^{-1}$ and we recover the surgery introduced by Douady and others. We conclude that the surgery map ${\cal S}: {\cal B}_5^{cm}(\theta) \rightarrow {\cal P}_3^{cm}(\theta)$ extends continuously to the points at infinity of both parameter spaces, and the extension is also a surgery.\\
The next theorem is the analogue of \thmref{qcpar} for Blaschke products. It will be more convenient to formulate it for a general Blaschke product since we would like to use it for $f_{\theta}$ as well as the elements of ${\cal B}_5^{cm}(\theta)$.
\begin{thm}[Paths of QC Conjugacies]
\label{qcpath}
Let $A$ and $B$ be two Blaschke products of degree $d$ and let $\Phi$ be a quasiconformal homeomorphism which fixes $0,1,\infty$ such that $\Phi \circ I=I\circ \Phi$ and $\Phi \circ A=B\circ \Phi$. Then there exists a path $\{ \Phi_t \}_{0\leq t\leq 1}$ of quasiconformal homeomorphisms, with $\Phi_0=id$ and $\Phi_1=\Phi$, such that $A_t=\Phi_t\circ A\circ \Phi_t^{-1}$ is a Blaschke product for every $0\leq t\leq 1$. In particular, either $A$ is quasiconformally rigid or its conjugacy class is nontrivial and path-connected.
\end{thm}
\begin{pf}
The proof is almost identical to that of \thmref{qcpar}.
Consider $\sigma=\Phi^{\ast}\sigma_0$, which is invariant under $A$, and take the {\it real} perturbations $\sigma_t=t\sigma$, $0\leq t\leq 1$. Let $\Phi_t$ be the unique quasiconformal homeomorphism which fixes $0,1,\infty$ and satisfies $\Phi_t^{\ast}\sigma_0=\sigma_t$. The map $A_t=\Phi_t\circ A\circ \Phi_t^{-1}$ is easily seen to be a degree $d$ rational map. By uniqueness, $I\circ \Phi_t \circ I=\Phi_t$ since the left-hand side also pulls $\sigma_0$ back to $\sigma_t$ and fixes $0,1,\infty$. Hence $A_t$ commutes with $I$. So it is a Blaschke product.
\end{pf}
We will need the next lemma in the proof of \thmref{inj}.
\begin{lem}[Rigidity on the Julia Set]
\label{rig}
Let $\psi$ be a quasiconformal homeomorphism defined on an open annulus containing the Julia set $J(f_{\theta})$ of the Blaschke product $f_{\theta}$ defined in $($\ref{eqn:ftet}$)$. Suppose that $\psi$ commutes with $I$ and conjugates $f_{\theta}$ to itself. Then $\psi|_{J(f_{\theta})}$ is the identity.
\end{lem}
\begin{pf}
Extend $\psi$ to a quasiconformal homeomorphism $\BBB C\rightarrow \BBB C$ which commutes with $I$ and conjugates $f_{\theta}$ to itself. By the previous theorem, there exists a path $t\mapsto \psi_t$ of quasiconformal homeomorphisms, with $0\leq t\leq 1$ and $\psi_0=id, \psi_1=\psi$, such that $\psi_t\circ f_{\theta} \circ \psi_t^{-1}$ is a degree $3$ Blaschke product quasiconformally conjugate to $f_{\theta}$. By \propref{bla3}, this Blaschke product has to be $f_{\theta}$ itself, so $\psi_t$ commutes with $f_{\theta}$.
Now for any periodic point $z\in J(f_{\theta})$ of period $n$, $t\mapsto \psi_t(z)$ is a continuous path in the finite set of all period-$n$ points in $J(f_{\theta})$. Since $\psi_0(z)=z$, we must have $\psi(z)=z$. Since such points $z$ are dense in the Julia set, $\psi|_{J(f_{\theta})}$ must be the identity.
\end{pf}
\vspace{0.17in}
\section{Renormalizable Cubics}
\label{sec:rencub}
This section briefly studies the class of renormalizable cubics in ${\cal P}_3^{cm}(\theta)$. These are the cubics with disjoint critical orbits out of which one can extract the quadratic $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$ by straightening. From a different point of view, one may consider a renormalizable cubic with connected Julia set as the result of ``intertwining'' the quadratic $Q_{\theta}$ with another quadratic with connected Julia set (compare \cite{Epstein-Yam}).
For background on polynomial-like maps, straightening, and hybrid classes, see for example \cite{Douady-Hubbard2}.\\ \\
{\bf Definition.} Let $P\in {\cal P}_3^{cm}(\theta)$. We call $P$ {\it renormalizable} if there exists a pair of Jordan domains $U$ and $V$, with $0\in U\Subset V$, such that the restriction $P|_U: U\rightarrow V$ is a quadratic-like map hybrid equivalent to $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$.\\
When $\theta$ is irrational of bounded type, it follows from \cite{Douady2} that the boundary of the Siegel disk of $Q_{\theta}$ is a quasicircle passing through
the critical point. Hence the same is true for the Siegel disk $\Delta_P$ when $P$ is renormalizable.
\begin{thm}
\label{ren}
A cubic $P\in {\cal P}_3^{cm}(\theta)$ is renormalizable if one of the following conditions holds:
\begin{enumerate}
\item[(a)]
$P$ has a non-repelling periodic orbit other than $0$ which is not parabolic.\footnote{That the parabolic case must be excluded was pointed out to me by M. Yampolsky.}
\item[(b)]
$P$ has disconnected Julia set.
\end{enumerate}
\end{thm}
\begin{pf}
We use the Separation \lemref{seplem}. First assume that we are in case (a) so that $J(P)$ is connected. Let $\cal R$ be the finite collection of the closed preperiodic external rays given by the Separation Lemma. Let $V$ be the component of $\BBB C\smallsetminus \cal R$ which contains $0$, cut off by an equipotential of $K(P)$. Finally, let $U$ be the component of $P^{-1}(V)$ containing $0$. Since all the rays in $\cal R$ are preperiodic, $P(\cal R)\subset \cal R$, hence $U\subset V$.
$U$ necessarily contains a critical point of $P$ since otherwise the Schwarz lemma and $|P'(0)|=1$ would imply that $U=V$ and $P|_U:U\rightarrow V$ is a conformal isomorphism conjugate to a rotation. This would contradict the fact that $U$ intersects the basin of attraction of infinity for $P$. The other critical point of $P$ has to stay away from $V$ because by the second part of the Separation Lemma its entire orbit lives in the cycle of components of $\BBB C\smallsetminus \cal R$ which contains the non-repelling periodic orbit of $P$.
Since by our assumption the non-repelling cycle of $P$ is not parabolic, the landing points of the external rays in $\cal R$ must all be repelling. Therefore, by a simple ``thickening'' procedure (see for example \cite{Milnor2}), we can assume that $\overline{U}\subset V$, so that $P|_U:U\rightarrow V$ is a quadratic-like map. Since up to affine conjugation there is only one quadratic polynomial which has a fixed Siegel disk of rotation number $\theta$, this quadratic-like map has to be hybrid equivalent to $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$. This proves the theorem in the case $J(P)$ is connected.
Now suppose that we are in situation (b) so that $J(P)$ is disconnected. For $\epsilon >0$, let $U_{\epsilon}$ be the connected component of $\{ z\in \BBB C: G_P(z)< \epsilon \}$ containing the Siegel disk $\Delta_P$, where $G_P:\BBB C \rightarrow \{ x\in \BBB R: x\geq 0 \}$ is the Green's function of $K(P)$. It is not hard to see that for small $\epsilon$, $P|_{U_{\epsilon}}: U_{\epsilon}\rightarrow U_{3\epsilon}$ is a quadratic-like map, necessarily hybrid equivalent to $Q_{\theta}$.
\end{pf}
\figref{hypcub} and \figref{hypext} demonstrate the above theorem. In each example, there is a piece of the filled Julia set which is quasiconformally homeomorphic to the filled Julia set of $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$ in \figref{Q}. This piece is just the filled Julia set of the quadratic-like restriction $P|_U:U\rightarrow V$ given by the above theorem.
\realfig{Q}{test10.ps}{{\sl The filled Julia set of the quadratic $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$ for $\theta=(\sqrt{5}-1)/2$.}}{9cm}
\begin{cor}
\label{measure}
Let $\theta$ be an irrational number of bounded type. Let $P\in {\cal P}_3^{cm}(\theta)$ be hyperbolic-like or have a disconnected Julia set $J(P)$. Then $J(P)$ has Lebesgue measure zero.
\end{cor}
\begin{pf}
Let $P|_U:U\rightarrow V$ be the quadratic-like restriction given by \thmref{ren} with the filled Julia set $K$. Since this restriction is hybrid equivalent to $Q_{\theta}:z\mapsto e^{2 \pi i \theta}z+z^2$ whose Julia set has measure zero when $\theta$ is of bounded type \cite{Petersen}, we simply conclude that $\partial K$ has Lebesgue measure zero.
It is well-known that the forward orbit of almost every point $z\in J(P)$ accumulates on the $\omega$-limit set of the critical points of $P$ (\cite{Lyubich}, Proposition 1.14), which in this case is just $\partial \Delta_P$ union the attracting periodic orbit (resp. $\partial \Delta_P$) if $P$ is hyperbolic-like (resp. with disconnected Julia set). So the orbit of almost every $z\in J(P)$ accumulates on $\partial \Delta_P$. This implies that for all $n\geq N=N(z)$, $P^{\circ n}(z)\in V$. This can happen only if $P^{\circ N}(z)\in \partial K$ or equivalently $z\in P^{-N}(\partial K)$. We conclude that, up to a set of measure zero, $J(P)=\bigcup_{N\geq 0}P^{-N}(\partial K)$. But the right-hand side has measure zero because
$ \partial K$ does. This proves that $J(P)$ has Lebesgue measure zero as well.
\end{pf}
The next supplementary result will be useful later in the proof of connectivity of ${\cal M}_3(\theta)$ (\thmref{m3con}). I am indebted to M. Lyubich for pointing out that every quasiconformal self-conjugacy of the map $z\mapsto z^2$ near the unit circle $\BBB T$ extends to the identity map on $\BBB T$. This fact is the heart of the following lemma.
\begin{lem}
\label{EF}
Let $f:U\rightarrow V$ and $g:U'\rightarrow V'$ be quadratic-like maps both hybrid equivalent to the same quadratic polynomial $Q:z\mapsto z^2+c$ with connected Julia set. Let $E$ and $F$ be two subsets of $U$ and $U'$ respectively, such that
\begin{enumerate}
\item[$\bullet$]
$E\cap K(f)=\emptyset$ and $F\cap K(g)=\emptyset$,
\item[$\bullet$]
$E\cup K(f)$ and $F\cup K(g)$ are closed in $U$ and $U'$ respectively, and
\item[$\bullet$]
$f^{-1}(E)\subset E$ and $g^{-1}(F)\subset F$.
\end{enumerate}
Then any quasiconformal homeomorphism $\varphi:U\smallsetminus (E\cup K(f))\rightarrow U'\smallsetminus (F\cup K(g))$ which conjugates $f$ and $g$ extends to a quasiconformal homeomorphism $\varphi:U\smallsetminus E\rightarrow U'\smallsetminus F$. Moreover, we can arrange $\overline{\partial}\varphi=0$ on $K(f)$.
\end{lem}
\begin{pf}
By straightening, we may assume without loss of generality that both $f$ and $g$ are the quadratic $Q$. Under this assumption, we prove that $\varphi$ extends continuously to the identity on the filled Julia set $K(Q)$. The last part of the lemma will follow because the $\overline{\partial}$-derivative of every hybrid equivalence vanishes on the corresponding filled Julia set.
Consider the B\"{o}ttcher map $\beta:\BBB C\smallsetminus K(Q)\stackrel{\simeq}{\longrightarrow} \BBB C\smallsetminus \overline{\BBB D}$ which conjugates $Q$ to $z\mapsto z^2$ near $K(Q)$. Put $\tilde{U}=\beta(U\smallsetminus K(Q))$ and $\tilde{E}=\beta(E)$, and similarly define $\tilde{U'}$ and $\tilde{F}$. The induced map $\tilde{\varphi}=\beta \circ \varphi \circ \beta^{-1}:\tilde{U}\smallsetminus \tilde{E}\rightarrow \tilde{U'}\smallsetminus \tilde{F}$ is then a quasiconformal homeomorphism which satisfies $\tilde{\varphi}(z^2)=(\tilde{\varphi}(z))^2$.
Consider the universal covering map $\zeta:\BBB H\rightarrow \BBB C\smallsetminus \overline{\BBB D}$ defined on the upper-half plane by $\zeta(z)=e^{-2 \pi i z}$. Let $\hat{U}=\zeta^{-1}(\tilde{U})$, $\hat{E}=\zeta^{-1}(\tilde{E})$, etc. Lift $\tilde{\varphi}$ to a quasiconformal homeomorphism $\hat{\varphi}:\hat{U}\smallsetminus \hat{E} \rightarrow \hat{U'}\smallsetminus \hat{F}$ which satisfies $\hat{\varphi}(z+1)=\hat{\varphi}(z)+1$ and $\hat{\varphi}(2z)=2\hat{\varphi}(z)$. Without loss of generality we can assume that $\hat{U}$ contains the horizontal strip $\{ z: 1\leq \Im (z) \leq 2 \}$. Clearly
$$\sup \{ d_{\BBB H}(z,\hat{\varphi}(z)): z\in \hat{U}\smallsetminus \hat{E}, 1\leq \Im (z)\leq 2 \} =C < +\infty, $$
where $d_{\BBB H}$ is the hyperbolic distance in $\BBB H$. Now given any point $z\in \hat{U}\smallsetminus \hat{E}$, choose $n\in \BBB Z$ so that $1\leq 2^n \Im (z)\leq 2$. Then
$$d_{\BBB H}(z,\hat{\varphi}(z))=d_{\BBB H}(2^nz,2^n\hat{\varphi}(z))=d_{\BBB H}(2^nz,\hat{\varphi}(2^nz))\leq C.$$
By the Schwarz lemma applied to the composition $\beta^{-1}\circ \zeta$, we have
$$\sup \{ d(z,\varphi (z)): z\in U\smallsetminus (E\cup K(Q)) \}\leq C,$$
where $d$ is the hyperbolic distance in $\BBB C\smallsetminus K(Q)$. Hence, as $z\rightarrow J(Q)$ in $U\smallsetminus (E\cup K(Q))$, $|z-\varphi(z)|\rightarrow 0$. This means that we can define $\varphi(z)=z$ throughout $K(Q)$, and the extension will be a quasiconformal homeomorphism by the Bers Sewing Lemma.
\end{pf}
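{\bf Remark (numerical sketch).} The scaling step in the proof above relies on the fact that $z\mapsto 2z$ and $z\mapsto z+1$ are isometries of the hyperbolic plane $\BBB H$, so the bound $C$ obtained on the strip $1\leq \Im (z)\leq 2$ propagates to all of $\hat{U}\smallsetminus \hat{E}$. This invariance is easy to confirm from the standard distance formula $d_{\BBB H}(z,w)=\cosh^{-1}\!\big(1+|z-w|^2/(2\,\Im z\,\Im w)\big)$; the following Python sketch (ours, not part of the original text) does so at random sample points:

```python
import math
import random

def d_H(z, w):
    # Hyperbolic distance in the upper half-plane H.
    return math.acosh(1 + abs(z - w)**2 / (2 * z.imag * w.imag))

random.seed(0)
for _ in range(100):
    z = complex(random.uniform(-5, 5), random.uniform(0.1, 5))
    w = complex(random.uniform(-5, 5), random.uniform(0.1, 5))
    # z -> 2z (the doubling map on lifts) and z -> z+1 (the deck
    # translation) both preserve hyperbolic distance:
    assert abs(d_H(2*z, 2*w) - d_H(z, w)) < 1e-8
    assert abs(d_H(z + 1, w + 1) - d_H(z, w)) < 1e-8
```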
\vspace{0.17in}
\section{Stability of Cubics}
\label{sec:stab}
In this section we prove the following result, which is reminiscent of the
similar fact about the Mandelbrot set. For terminology and basic results
on holomorphic motions and $J$-stability, see \cite{McMullenbook}.
\begin{thm}[Boundary of ${\cal M}_3(\theta)$ is Unstable]
\label{unstable}
The complement $\BBB C^{\ast} \smallsetminus \partial {\cal M}_3(\theta)$ is the set of parameters for which the corresponding cubics are $J$-stable in ${\cal P}_3^{cm}(\theta)$.
\end{thm}
\begin{pf} A polynomial $P_{c_0}\in {\cal P}_3^{cm}(\theta)$ is $J$-stable if and only if both
sequences $\{ P_c^{\circ k}(c)\} $ and $\{ P_c^{\circ k}(1)\} $ are
normal for $c$ in a neighborhood of $c_0$ (\cite{McMullenbook}, Theorem 4.2). If $c_0 \in \Omega_{ext}$, then $c_0$ escapes to infinity under $P_{c_0}$, while $1$ has bounded orbit. For $c$ close to $c_0$, the orbit of $c$ under $P_c$ will still converge to infinity while $1$
will have bounded orbit, with a bound given by $m_c$ in (\ref{eqn:rcbound}). It follows from Montel's theorem that both sequences are normal throughout a neighborhood of $c_0$. Hence $c_0$ is $J$-stable. Similarly, every $P_{c_0}$
with $c_0 \in\Omega_{int}$ is $J$-stable. If $c_0$ belongs to the interior of ${\cal M}_3(\theta)$, then
both $c_0$ and $1$ will have orbits contained in $\BBB D(0,m_{c_0})$ and
the same holds for all $c$ sufficiently close to $c_0$. Again by Montel, both sequences
$\{ P_c^{\circ k}(c)\} $ and $\{ P_c^{\circ k}(1)\} $ are normal in a neighborhood of $c_0$. Finally, if $c_0$ belongs to the boundary of ${\cal M}_3(\theta)$, then
a small perturbation will make either $c$ or $1$ escape to infinity. Hence
at least one of the sequences $\{ P_c^{\circ k}(c)\} $ or $\{ P_c^{\circ k}(1)\} $ fails to be normal in any neighborhood of $c_0$.
\end{pf}
\begin{thm}
\label{indiff}
Let $P_{c_0}\in {\cal P}_3^{cm}(\theta)$ have an indifferent periodic orbit other than the fixed point at the origin. Then $c_0 \in \partial {\cal M}_3(\theta)$.
\end{thm}
\begin{pf}
Otherwise $c_0$ will be a $J$-stable parameter by the above
theorem. But any stable indifferent cycle has to be persistent (\cite{McMullenbook}, Theorem 4.2). This means that the indifferent cycle $z(c_0)\mapsto P_{c_0}(z(c_0))\mapsto \cdots \mapsto
P_{c_0}^{\circ k-1}(z(c_0))\mapsto z(c_0)$ can be continued analytically as a function of $c$ in a neighborhood of $c_0$ and
the multiplier function $c\mapsto (P_c^{\circ k})'(z(c))$ will be constant in
this neighborhood. But this cycle can be continued analytically to the whole $c$-plane except for a finite number of singular points by the implicit function theorem, and the multiplier has to remain constant during the continuation. It follows that for every parameter $c$, the cubic $P_c$ has an indifferent cycle other than $0$. This is clearly impossible: for example, when $c=3-6 \overline{\lambda}$ the critical point $c$ satisfies $P_c(c)=c$, so it is a superattracting fixed point, and then $P_c$ can have no indifferent periodic point other than $0$.
\end{pf}
To prove the next corollary, we use the following lemma in \cite{Kiwi} which is a much sharpened version of an earlier result of Goldberg and Milnor (\cite{Goldberg-Milnor}, Theorem 3.3). This useful lemma will also be applied in Section \ref{sec:rencub} to extract quadratic-like maps out of renormalizable cubics.
\begin{lem}[Separation Lemma]
\label{seplem}
Let $P$ be a polynomial with connected Julia
set. Then there exists a finite collection of closed preperiodic
external rays, separating the plane into disjoint open simply-connected sets $\{ U_j\}$, such that:
\begin{enumerate}
\item[$\bullet$]
Each $U_j$ contains at most one non-repelling periodic point or periodic Fatou component of $P$.
\item[$\bullet$]
If $z_1\mapsto \cdots \mapsto z_p\mapsto z_1$ is a non-repelling cycle meeting $U_{i_1}\mapsto \cdots \mapsto U_{i_p}\mapsto U_{i_1}$,
then $\bigcup _{j=1}^p U_{i_j}$ contains the entire orbit of at least one critical point of $P$.
\end{enumerate}
\end{lem}
\begin{cor}
\label{second ind}
If $P\in {\cal P}_3^{cm}(\theta)$ has an indifferent periodic point other than
the fixed point at the origin, then a critical point of $P$, other than the one which accumulates on the boundary of $\Delta_P$, accumulates on the extra indifferent point (in case it is not linearizable)
or on the boundary of the extra Siegel disk (in case it is linearizable).\ \ $\Box$
\end{cor}
\vspace{0.17in}
\section{The Surgery}
\label{sec:surgery}
From now on, unless otherwise stated, {\it we assume that $\theta$ is an irrational number of bounded type.} We describe a surgery on degree $5$ Blaschke products in ${\cal B}_5^{cm}(\theta)$ to obtain cubic polynomials in ${\cal P}_3^{cm}(\theta)$. A similar surgery was done previously in the case of quadratic polynomials \cite{Douady2}
using the following theorem of Swiatek and Herman (see \cite{Swiatek} or \cite{Herman2}). Recall that a homeomorphism $h:\BBB R \rightarrow \BBB R$ is called $k$-{\it quasisymmetric} if
$$0< k^{-1} \leq \frac{|h(x+t)-h(x)|}{|h(x)-h(x-t)|}\leq k < +\infty$$
for all $x$ and all $t>0$. We call $h$ quasisymmetric if it is $k$-quasisymmetric for some $k$. A homeomorphism $h:\BBB T \rightarrow \BBB T$ is $k$-quasisymmetric if its lift to $\BBB R$ is $k$-quasisymmetric.
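{\bf Remark (numerical sketch).} As a concrete illustration of this definition (our example, not from the original text), consider the lift $h(x)=x+0.15\sin(2\pi x)$ of a real-analytic circle diffeomorphism. Since $h'\geq 1-0.3\pi>0$, the mean value theorem bounds the defining ratio by $\sup h'/\inf h'$, so $h$ is quasisymmetric. The following Python sketch estimates the optimal quasisymmetry constant from below by sampling the ratio over a grid:

```python
import math

def h(x):
    # Lift of a sample real-analytic circle diffeomorphism (an illustrative
    # choice); h'(x) = 1 + 0.3*pi*cos(2*pi*x) >= 1 - 0.3*pi > 0.
    return x + 0.15 * math.sin(2 * math.pi * x)

ratios = []
for i in range(200):
    x = i / 200
    for j in range(1, 100):
        t = j / 100
        ratios.append((h(x + t) - h(x)) / (h(x) - h(x - t)))

# Empirical lower bound for the smallest k such that h is k-quasisymmetric;
# the mean value theorem caps it by sup h' / inf h'.
k = max(max(ratios), 1 / min(ratios))
print(k)
```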
\realfig{hypbla}{test1.ps}{{\sl The Julia set of a Blaschke product in ${\cal B}_5^{cm}(\theta)$ for $\theta=(\sqrt{5}-1)/2$. There are two symmetric attracting cycles in the nuclei $N_0$ and $N_{\infty}$. The topological disks in black form the basin of attraction of these two cycles. After surgery this Blaschke product becomes a hyperbolic-like cubic in ${\cal P}_3^{cm}(\theta)$.}}{8cm}
\realfig{nucleus}{test2.ps}{{\sl Another example of the Julia set of a Blaschke product in ${\cal B}_5^{cm}(\theta)$ for $\theta=(\sqrt{5}-1)/2$. There is a critical point in the nucleus of the large $1$-drop attached to the unit disk at $z=1$ which maps into $N_0$. Hence this nucleus contains the zeros $p$ and $q$. However, after surgery this Blaschke product becomes a capture cubic in ${\cal P}_3^{cm}(\theta)$.}}{8cm}
\begin{thm}[Linearization of Critical Circle Maps]
\label{ccmap}
Let $f:\BBB T \rightarrow \BBB T$ be a real-analytic homeomorphism with finitely many critical points and rotation number $\theta$. Then there exists a quasisymmetric homeomorphism
$h:\BBB T \rightarrow \BBB T$ which conjugates $f$ to the rigid rotation $R_{\theta}:t\mapsto t+\theta $ (mod 1) if and only if $\theta$ is an irrational number of bounded type. Moreover, if $f$ belongs to a compact family of real-analytic homeomorphisms with rotation number $\theta$, then $h$ is $k$-quasisymmetric, where the constant $k$ only depends on the family and not on the choice of $f$.
\end{thm}
Let us briefly sketch what this surgery does on a Blaschke product $B \in {\cal B}_5^{cm}(\theta)$.
By \propref{realhom}, the restriction $B|_{\BBB T}$ is a real-analytic homeomorphism with one (or two) critical point(s). When the rotation number of this circle map is of bounded type, by \thmref{ccmap} one can find a unique $k$-quasisymmetric homeomorphism $h:\BBB T\rightarrow \BBB T$ with $h(1)=1$ such that the following diagram commutes:
$$\begin{array}{rlcrl}
& \BBB T & \stackrel{B}{\longrightarrow} & \BBB T & \\
h & \downarrow & & \downarrow & h\\
& \BBB T & \stackrel{R_{\theta}}{\longrightarrow} & \BBB T &
\end{array}$$
Moreover, the family $\{ B|_{\BBB T} \}_{B \in {\cal B}_5^{cm}(\theta)}$ is compact (compare \thmref{q=3}), hence $h$ is in fact $k(\theta)$-quasisymmetric, where the constant $k(\theta)$ only depends on $\theta$.
We can extend $h$ to a $K(\theta)$-quasiconformal homeomorphism $H:\BBB D \rightarrow \BBB D$ whose dilatation depends only on $\theta$. Possible extensions are given by the theorem of Beurling and Ahlfors \cite{Ahlfors} or Douady and Earle \cite{Douady-Earle} (which has the advantage of being conformally invariant). Define a {\it modified Blaschke product} $\tilde B$ as follows:
\begin{equation}
\label{eqn:modbla}
\tilde{B}(z)= \left \{ \begin{array}{ll}
B(z) & |z|\geq 1 \\
(H^{-1}\circ R_{\theta}\circ H)(z) & |z|<1
\end{array}
\right.
\end{equation}
\realfig{blaext}{test3.ps}{{\sl The Julia set of a Blaschke product in ${\cal B}_5^{cm}(\theta)$ for $\theta=(\sqrt{5}-1)/2$ outside the connectedness locus ${\cal C}_5(\theta)$ (see Section \ref{sec:newcon}). Surgery makes this Blaschke product into a cubic in $\Omega_{ext}$.}}{8cm}
\noindent
This amounts to cutting out the unit disk and gluing in a Siegel disk instead.
Note that the two definitions match along $\BBB T$ by the above commutative diagram. Now define a conformal structure $\sigma$ on the plane as follows: On $\BBB D$, let $\sigma$ be the pull-back $H^{\ast}\sigma_0$ of the standard conformal structure $\sigma_0$. Since $R_\theta$ preserves $\sigma_0$, $\tilde B$ will preserve $\sigma$ on $\BBB D$. For every $k\geq 1$, pull $\sigma|_{\BBB D}$ back by $\tilde {B}^{\circ k}=
B^{\circ k}$ on $B^{-k}(\BBB D)\smallsetminus \BBB D$ (which consists of all the {\it maximal} $k$-drops of $B$; see Section \ref{sec:newcon}). Since $B^{\circ k}$ is
holomorphic, this does not increase the dilatation of $\sigma$. Finally, let
$\sigma=\sigma_0$ on the rest of the plane. By the construction, $\sigma$
has bounded dilatation and is invariant under
$\tilde B$. Therefore, by the Measurable Riemann Mapping Theorem of
Ahlfors and Bers, we can find a quasiconformal homeomorphism $\varphi:\BBB C\rightarrow \BBB C$ such that $\varphi^{\ast}\sigma_0=\sigma$. Set
\begin{equation}
\label{eqn:pmodb}
P=\varphi \circ \tilde{B} \circ \varphi^{-1}.
\end{equation}
Then $P$ is a quasiregular self-map of the sphere which preserves $\sigma_0$, hence it is holomorphic. Also $P$ is proper of degree $3$ since $\tilde B$ has the same properties. Therefore
$P$ is a cubic polynomial.
Now the action of $P$ on $\varphi (\BBB D)$ is quasiconformally conjugate to a rigid rotation, hence $\varphi (\BBB D)$ is contained in a Siegel disk for $P$ with rotation number $\theta$. Since $\varphi (1)$ is a critical point for $P$, it follows that the entire orbit $\{ P ^{\circ k}(\varphi (1))\}_{k\geq 1}$ lives on the boundary of this Siegel disk. But $\{ P ^{\circ k}(\varphi (1))\}_{k\geq 1}$ is dense on $\varphi (\BBB T)$, so $\varphi (\BBB T )$ is exactly the boundary of this Siegel disk, which is a quasicircle passing through the critical point $\varphi (1)$ of $P$.
To mark the critical points of $P$, hence getting an element of ${\cal P}_3^{cm}(\theta)$, we must normalize $\varphi$ carefully. Recall from Section \ref{sec:blapar} that ${\cal B}_5^{cm}(\theta)$ is uniformized by the parameter $\mu\in \BBB C^{\ast}$ as follows: If $|\mu| \geq 1$, $B_{\mu}$ has marked critical points at $\{ 0, \infty, c_1=\mu, 1/\overline {\mu}, c_2=1 \}$, while for $|\mu| \leq 1$, $B_{\mu}$ has marked critical points at $\{ 0, \infty, c_1=1, c_2=1/\mu, \overline {\mu} \}$. In the first case, we normalize $\varphi$ such that $\varphi(H^{-1}(0))=0$ and $\varphi(1)=1$. Call $\varphi(\mu)=c$ and mark the critical points of $P$ by declaring $P=P_c$ as in Section \ref{sec:cubpar}. In the case $|\mu|\leq 1$, we normalize $\varphi$ similarly by putting $\varphi(H^{-1}(0))=0$ and $\varphi(1/\mu)=1$, but this time we call $\varphi(1)=c$ and set $P=P_c$. It is easy to see that when $|\mu|=1$, both normalizations produce the same critically marked cubic polynomial in ${\cal P}_3^{cm}(\theta)$.
Let us denote the polynomial $P$ constructed this way by ${\cal S}_H(B)$, i.e., the cubic obtained by performing surgery
on a Blaschke product $B$ using a quasiconformal extension $H$. The first question we would like to address is the following:
\begin{enumerate}
\item[]
``Given a $B\in {\cal B}_5^{cm}(\theta)$, what cubic polynomials of the form ${\cal S}_H(B)$ can we obtain as the result of this surgery by choosing different quasiconformal extensions $H$?"
\end{enumerate}
We will see that for two quasiconformal extensions $H$ and $H'$, the cubics
${\cal S}_H(B)$ and ${\cal S}_{H'}(B)$ are quasiconformally conjugate and the conjugacy is conformal everywhere except on the grand orbit of the Siegel disk centered at the origin. When ${\cal S}_H(B)$ is capture, we can certainly end up with two different cubics if we choose the extensions arbitrarily. In fact, let $k$ be the first moment the orbit of the critical point $c$ of $B$ hits the unit disk, and let $w=B^{\circ k}(c)$.
Then for two quasiconformal extensions $H$ and $H'$, the captured images of the critical points of ${\cal S}_H(B)$ and ${\cal S}_{H'}(B)$ have the same conformal position in their corresponding Siegel disks if and only if $H(w)=H'(w)$. It follows that ${\cal S}_H(B)\neq {\cal S}_{H'}(B)$ as soon as we choose two different extensions $H,H'$ with $H(w)\neq H'(w)$.
The following proposition has a very nontrivial content in case the result of the surgery is a cubic whose Julia set has positive measure (say, in a queer component). It is the Bers Sewing Lemma which makes the proof work.
\begin{prop}
\label{independent}
Let $P={\cal S}_H(B)$ and $H'$ be any other quasiconformal extension of the circle homeomorphism $h$ which linearizes $B|_{\BBB T}$. Then, if $P$ is not capture, ${\cal S}_H(B)={\cal S}_{H'}(B)$. On the other hand, when $P$ is capture, ${\cal S}_H(B)={\cal S}_{H'}(B)$ if and only if $H(w)=H'(w)$, where $w\in \BBB D$ is the captured image of the critical point of $B$.
\end{prop}
\begin{pf}
Let $Q={\cal S}_{H'}(B)$ and $\varphi_H$ and $\varphi_{H'}$ denote the quasiconformal homeomorphisms which satisfy $P=\varphi_H \circ \tilde{B}_H \circ \varphi_H^{-1}$ and $Q=\varphi_{H'} \circ \tilde{B}_{H'} \circ \varphi_{H'}^{-1}$ as in (\ref{eqn:pmodb}). The homeomorphism $\varphi$ defined by
$$\varphi(z)= \left \{
\begin{array}{ll}
(\varphi_{H'}\circ \varphi_H^{-1})(z) & z\in \BBB C\smallsetminus GO(\Delta_P)\\
(\varphi_{H'}\circ B^{-k}\circ {H'}^{-1}\circ H \circ B^{\circ k}\circ \varphi_H^{-1})(z) & z\in P^{-k}(\Delta_P)
\end{array} \right. $$
is quasiconformal and conjugates $P$ to $Q$. By \lemref{ext}, one can find a quasiconformal conjugacy $\psi:\BBB C \rightarrow \BBB C$ between $P$ and $Q$ which is conformal on the grand orbit $GO(\Delta_P)$ and agrees with $\varphi$ everywhere else. By the Bers Sewing Lemma, $\overline{\partial} \psi=\overline{\partial}\varphi$ almost everywhere on $\BBB C\smallsetminus GO(\Delta_P)$. But the latter generalized partial derivative vanishes almost everywhere on $\BBB C\smallsetminus GO(\Delta_P)$ because the surgery does not change the conformal structures outside $\bigcup_{k\geq 0}B^{-k}(\BBB D)$. Hence $\overline{\partial} \psi=0$ almost everywhere on $\BBB C$,
which means $\psi$ is conformal. This shows $P=Q$.
\end{pf}
\vspace{0.15in}
\noindent
{\bf Convention.} For the rest of this paper, we always choose the Douady-Earle extension of circle homeomorphisms to perform surgery. By the above proposition, this is really a ``choice'' only in the capture case. We can therefore neglect the dependence on $H$ and call
$${\cal S}:{\cal B}_5^{cm}(\theta) \rightarrow {\cal P}_3^{cm}(\theta)$$
the {\it surgery map}.\\
As an immediate corollary of the normalization of $\varphi$ and the construction of $\cal S$, we have the following:
\begin{cor}
\label{switch}
Let $\mu \in \BBB C^{\ast}$ and $P_c={\cal S}(B_{\mu})$ be the cubic obtained
by performing the above surgery.
\begin{enumerate}
\item[$\bullet$]
If $|\mu|>1$, then $1 \in \partial \Delta_c$ and $c \notin \partial \Delta_c$.
\item[$\bullet$]
If $|\mu|<1$, then $c \in \partial \Delta_c$ and $1 \notin \partial \Delta_c$.
\item[$\bullet$]
If $|\mu|=1$, then both $c$ and $1$ belong to $\partial \Delta_c$.\ $\Box$
\end{enumerate}
\end{cor}
\vspace{0.17in}
\section{Siegel Disks with Two Critical Points on Their Boundary}
\label{sec:twocrit}
In this section we characterize those cubics in ${\cal P}_3^{cm}(\theta)$ which have both critical points on the boundary of their Siegel disk. In \thmref{gamma1} we will prove that the set of all such cubics is a Jordan curve $\Gamma$ in ${\cal P}_3^{cm}(\theta)$. The proof of this theorem will use the fact that the quasiconformal conjugacy classes in ${\cal B}_5^{cm}(\theta)$ are path-connected (\thmref{qcpath}). We then show that when there are no queer components, $\Gamma$ is in fact the common boundary of $\Omega_{ext}$ and $\Omega_{int}$ (\thmref{gamma2}).
Consider the set $\Gamma$ which consists of all cubics $P\in {\cal P}_3^{cm}(\theta)$ such that both critical points of $P$ belong to the boundary of the Siegel disk $\Delta_P$. \figref{Gamma2} shows this set in the parameter space ${\cal P}_3^{cm}(\theta)$.
Since the surgery map ${\cal S}:{\cal B}_5^{cm}(\theta) \rightarrow {\cal P}_3^{cm}(\theta)$ is surjective by \corref{surj}, every $P\in \Gamma$ is of the form ${\cal S}(B_{\mu})$ with $B_{\mu}$ having two double critical points on the circle. \corref{switch} shows that $\mu$ must belong to the unit circle $\BBB T\subset \BBB C^{\ast}\simeq {\cal B}_5^{cm}(\theta)$. Therefore, we simply have
$$\Gamma={\cal S}(\BBB T).$$
In particular, $\Gamma$ is a closed path in ${\cal P}_3^{cm}(\theta) \simeq {\BBB C}^{\ast}$. Suggested by \figref{Gamma2}, we want to prove that $\Gamma$ is a Jordan curve. This would follow immediately if we could prove that ${\cal S}|_{\BBB T}$ is injective.
However, I have not been able to show this. In fact, I do not know how to prove that Blaschke products on the boundary of the connectedness locus ${\cal C}_5(\theta)$ are quasiconformally rigid. So we take a slightly different approach by showing that the fibers of ${\cal S}|_{\BBB T}:\BBB T\rightarrow \Gamma$ are connected.
\begin{lem}
\label{fiber}
Let $A,B\in {\cal B}_5^{cm}(\theta)$ and ${\cal S}(A)={\cal S}(B)=P$. Suppose that $P$ is not capture. Then there exists a path $t\mapsto A_t \in {\cal B}_5^{cm}(\theta)$ of Blaschke products for $0\leq t\leq 1$, with $A_0=A,\ A_1=B$, such that ${\cal S}(A_t)=P$ for all $t$.
\end{lem}
\begin{pf}
Since $P$ is not capture, by \thmref{ABconj} there exists a quasiconformal homeomorphism $\Phi$ which conjugates $A$ to $B$ and is conformal away from the Julia set $J(A)$. By \thmref{qcpath} there exists a path $\{ \Phi_t \} _{0\leq t\leq 1}$ connecting the identity map to $\Phi$ and a corresponding path $\{ A_t=\Phi_t \circ A\circ \Phi_t^{-1} \} _{0\leq t\leq 1}$ of elements of ${\cal B}_5^{cm}(\theta)$ connecting $A$ to $B$. Note that by the definition of $\Phi_t$, these quasiconformal homeomorphisms are all conformal away from $J(A)$.
It remains to show that ${\cal S}(A_t)=P$ for all $0\leq t\leq 1$. Consider the Douady-Earle extension $H:\BBB D\rightarrow \BBB D$ used in the definition of ${\cal S}(A)$ in Section \ref{sec:surgery}. Recall that $H|_{\BBB T}$ conjugates $A|_{\BBB T}$ to the rigid rotation $t\mapsto t+\theta$ (mod 1). Hence, the quasiconformal homeomorphism $H_t=H\circ \Phi_t^{-1}:\BBB D\rightarrow \BBB D$ will conjugate $A_t|_{\BBB T}$ to the rigid rotation as well. Note that $H_t$ is not in general the Douady-Earle extension of the linearizing homeomorphism $h_t:\BBB T \rightarrow \BBB T$ for $A_t$. Nevertheless, ${\cal S}_{H_t}(A_t)={\cal S}(A_t)$ by \propref{independent}. Consider the modified Blaschke products
$$ \tilde{A}(z)= \left \{ \begin{array}{ll}
A(z) & |z|\geq 1 \\
(H^{-1}\circ R_{\theta}\circ H)(z) & |z|<1
\end{array}
\right.$$
and
$$ \tilde{A_t}(z)= \left \{ \begin{array}{ll}
A_t(z) & |z|\geq 1 \\
(H_t^{-1}\circ R_{\theta}\circ H_t)(z) & |z|<1
\end{array}
\right.$$
Note that $\Phi_t \circ \tilde{A}=\tilde{A_t}\circ \Phi_t$.
Define the corresponding conformal structures $\sigma$ and $\sigma_t$ as in Section \ref{sec:surgery}. It is easy to see that
\begin{equation}
\label{eqn:conf}
\sigma=\Phi_t^{\ast}\sigma_t.
\end{equation}
Here we use the fact that $\Phi_t$ is conformal away from $J(A)$. Consider the normalized solutions $\varphi$ and $\varphi_t$ of the Beltrami equations
$$\varphi^{\ast}\sigma_0=\sigma,\ \ \ \ \varphi_t^{\ast}\sigma_0=\sigma_t.$$
By (\ref{eqn:conf}) and uniqueness, we have
$$\varphi_t=\varphi \circ \Phi_t^{-1}.$$
Hence, by \propref{independent},
$$\begin{array}{rl}
{\cal S}(A_t) & =\varphi_t\circ \tilde{A_t}\circ \varphi_t^{-1} \\
& =\varphi \circ \Phi_t^{-1}\circ \tilde{A_t}\circ \Phi_t \circ \varphi^{-1}\\
& =\varphi \circ \tilde{A}\circ \varphi^{-1}\\
& ={\cal S}(A).
\end{array}$$
This completes the proof of the lemma.
\end{pf}
\begin{cor}
\label{fiber2}
The fibers of ${\cal S}|_{\BBB T}: \BBB T\rightarrow \Gamma$ are connected.
\end{cor}
\begin{pf}
Let $A,B \in \BBB T \subset {\cal B}_5^{cm}(\theta)$ and ${\cal S}(A)={\cal S}(B)$. Apply the previous lemma to $A,B$. Note that $A_t\in \BBB T$ for all $0\leq t\leq 1$, since $A_t$ is quasiconformally conjugate to $A$, hence has two double critical points on the unit circle.
\end{pf}
\begin{lem}
\label{moore}
Let $\sim$ be an equivalence relation on the unit circle $\BBB T$ such that every equivalence class is closed and connected. Suppose that the whole circle is not an equivalence class. Then the quotient space $\BBB T/\sim$ is also homeomorphic to $\BBB T$.
\end{lem}
\realfig{Gamma2}{modgamma.ps}{{\sl The Jordan curve $\Gamma$. This is the locus of all critically marked cubics in ${\cal P}_3^{cm}(\theta)$ which have both critical points on the boundary of their Siegel disk. Topologically it can be described as the common boundary of the complementary regions $\Omega_{ext}$ and $\Omega_{int}$. Note that $\Gamma$ is invariant under the inversion $c\mapsto 1/c$. In particular, it passes through $c=1$.}}{12cm}
\begin{pf}
One can easily construct the homeomorphism as follows: Identify $\BBB T$ with $\BBB R/\BBB Z$, and let $\{ S_i \}_{i\in \BBB N}$ be the collection of nontrivial equivalence classes of $\sim$. (In case this collection is empty or finite, the lemma is clear.) Each $S_i$ can be regarded as a closed interval in $(0,1]$, and we may assume that the right endpoint of $S_1$ is $1$. Note that there is a natural order $<$ on the collection $\{ S_i \}$. Define a function $f:[0,1]\rightarrow [0,1]$ by putting $f(0)=0$, $f|_{S_1}\equiv 1$, and $f|_{S_2}\equiv 1/2$ and proceed inductively as follows. Suppose that $n\geq 2$ and $f$ is already defined on $S_1 \cup \cdots \cup S_n$. Consider $S_{n+1}$ and let $S_i$ and $S_j$ be its two neighbors with $S_i < S_{n+1} < S_j$ and $1\leq i,j \leq n$.
Define $f|_{S_{n+1}}\equiv \frac{1}{2}(f(S_i)+f(S_j))$. (In case $S_{n+1}$ has no left neighbor, simply set $f|_{S_{n+1}}\equiv \frac{1}{2}f(S_j)$.) This defines $f$ inductively on $\bigcup S_i$. By the construction, $f$ extends continuously to the closure $\overline{\bigcup S_i}$. Interpolate $f$ linearly on each open interval in $(0,1)\smallsetminus \overline{\bigcup S_i}$.
It is easy to check that $f$ constructed this way is continuous, increasing, and the preimage of every point is either a single point in $[0,1]\smallsetminus \bigcup S_i$ or an interval $S_i$. Clearly such a function induces a homeomorphism between $\BBB T$ and $\BBB T/\sim$.
\end{pf}
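The inductive construction in this proof is effectively an algorithm, and it may help to see it run. The following Python sketch (entirely ours, not part of the paper) carries it out for finitely many intervals; it assumes at least two intervals, listed in the order $S_1, S_2, \ldots$ of the proof, with $S_1$ (the interval containing the right endpoint $1$) first.

```python
def collapsing_map(intervals):
    """intervals: disjoint closed intervals (a, b) in (0, 1], given in the
    discovery order S_1, S_2, ... of the proof, with intervals[0]
    containing the right endpoint 1.  Returns f : [0, 1] -> [0, 1]."""
    values = {0: 1.0, 1: 0.5}                  # f = 1 on S_1, f = 1/2 on S_2
    for n in range(2, len(intervals)):
        a, b = intervals[n]
        left = [i for i in range(n) if intervals[i][1] <= a]
        right = [i for i in range(n) if intervals[i][0] >= b]
        right_val = min(values[i] for i in right)     # nearest right neighbour
        if left:
            left_val = max(values[i] for i in left)   # nearest left neighbour
            values[n] = 0.5 * (left_val + right_val)  # average of neighbours
        else:
            values[n] = 0.5 * right_val               # no left neighbour: halve
    # f is constant on each S_i; interpolate linearly in between
    knots = [(0.0, 0.0)]
    for n, (a, b) in sorted(enumerate(intervals), key=lambda t: t[1][0]):
        knots += [(a, values[n]), (b, values[n])]

    def f(x):
        for (x0, y0), (x1, y1) in zip(knots, knots[1:]):
            if x0 <= x <= x1:
                return y0 if x1 == x0 else y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        return 1.0
    return f

f = collapsing_map([(0.8, 1.0), (0.3, 0.5), (0.6, 0.7)])
print(f(0.4), f(0.65), f(0.9))   # -> 0.5 0.75 1.0
```

The resulting $f$ is continuous and increasing, constant exactly on the intervals $S_i$, and injective elsewhere, as in the proof.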
\noindent
{\bf Remark.} This simple lemma should be thought of as the one-dimensional (baby) version of the following deep theorem of R. L. Moore \cite{Moore}: Let $\sim$ be a closed equivalence relation on the 2-sphere $S^2$ such that every equivalence class is closed and connected and nonseparating. Then $S^2/\sim $ is also homeomorphic to $S^2$.
\begin{thm}
\label{gamma1}
$\Gamma$ is a Jordan curve.
\end{thm}
\begin{pf}
Consider ${\cal S}|_{\BBB T}:\BBB T\rightarrow \Gamma$ whose fibers are closed and connected by \lemref{fiber}. By general topology, $\Gamma$ is homeomorphic to $\BBB T/\sim$, where $A\sim B$ means ${\cal S}(A)={\cal S}(B)$. By \lemref{moore}, $\BBB T/\sim$ is homeomorphic to the circle.
\end{pf}
Finally, we find a topological characterization of $\Gamma$ in ${\cal P}_3^{cm}(\theta)$ under the assumption that there are no queer components in the interior of ${\cal M}_3(\theta)$.
\begin{thm}[Topological characterization of $\Gamma$]
\label{gamma2}
$\Gamma$ is a subset of the boundary $\partial {\cal M}_3(\theta)$ which contains $\partial \Omega_{ext} \cap \partial \Omega_{int}$. If there are no queer components in the interior of ${\cal M}_3(\theta)$, then $\Gamma=\partial \Omega_{ext} \cap \partial \Omega_{int}$.
\end{thm}
\begin{pf}
First let us show that $\partial \Omega_{ext} \cap \partial \Omega_{int}\subset \Gamma $. Let $P\in \partial \Omega_{ext} \cap \partial \Omega_{int}$ and assume that $P\notin \Gamma$. Choose $B_{\mu}\in {\cal B}_5^{cm}(\theta)$ such that ${\cal S}(B_{\mu})=P$. We can assume without loss of generality that $|\mu|>1$. Choose a sequence $P_n\in \Omega_{int}$ converging to $P$ and a sequence $B_n\in \Lambda_{int}$ such that ${\cal S}(B_n)=P_n$. By passing to a subsequence we may assume that $B_n\rightarrow B_{\mu '}$ as $n\rightarrow \infty$. By continuity, ${\cal S}(B_{\mu '} )=P$ so we must have $|\mu '|<1$. Since $P$ is not capture by \corref{capop}, \lemref{fiber} shows that there is a path $t\mapsto B_t$ of quasiconformally conjugate Blaschke products connecting $B_{\mu}$ to $B_{\mu '}$ all of which are mapped to $P$. Since this path must intersect $\BBB T$ somewhere, we conclude that $P\in \Gamma$ which is a contradiction.
Now we prove that $\Gamma \subset \partial {\cal M}_3(\theta)$. Fix some $P\in \Gamma$. Since $P$ has both critical points on $\partial \Delta_P$, it cannot belong to any hyperbolic-like or capture component. Also, $P$ cannot be in a queer component $U$ of the interior of ${\cal M}_3(\theta)$, since otherwise every $Q\in U$ would have to be quasiconformally conjugate to $P$ by \thmref{qcclass}, which would imply that $Q$ has two critical points on $\partial \Delta_Q$, which would show $U\subset \Gamma$. But this is evidently impossible because $U$ is open and $\Gamma$ is a Jordan curve. Therefore, $P$ has to lie in $\partial {\cal M}_3(\theta)=\partial \Omega_{ext} \cup \partial \Omega_{int}$.
\realfig{DD}{DD.eps}{}{8cm}
Now assume that there are no queer components in the interior of ${\cal M}_3(\theta)$. To show that $\Gamma=\partial \Omega_{ext} \cap \partial \Omega_{int}$, let $P=P_{c_0}$ and assume by way of contradiction that $c_0\in \partial \Omega_{ext} \smallsetminus
\partial \Omega_{int}.$ Since $c_0$ has positive distance from $\Omega_{int}$, for all $c$ in a neighborhood $D$ of $c_0$ the sequence $\{ P_c^{\circ n}(1) \}$ has to be normal. Assuming that $D$ is a small disk, the Jordan curve $\Gamma$ cuts $D$ into two topological disks $D_1$ and $D_2$ such that for every $c\in D_1$, $1 \in \partial \Delta_c$ and $c\notin \partial \Delta_c$, and for every $c\in D_2$, $c\in \partial \Delta_c$ and $1 \notin \partial \Delta_c$ (see \figref{DD}).
Clearly $D_2\cap \partial \Omega_{ext}=D_2 \cap \partial \Omega_{int}=\emptyset$. So $D_2$ has to be a subset of a component $U$ of the interior of ${\cal M}_3(\theta)$. Since there are no queer components by the assumption, $U$ is either hyperbolic-like or capture.
For every $c\in D_1$, we have $1 \in \partial \Delta_c$ and the restriction $P_c|_{\partial \Delta_c}$ is conjugate to the rigid rotation by angle $\theta$. Therefore, $P_c^{\circ q_n}(1)\rightarrow 1$ for all $c\in D_1$, where the $q_n$ are the denominators of the rational approximations of $\theta$. Since $\{ P_c^{\circ n}(1) \} $ is normal in $D$, for a subsequence $\{ q_{n(j)} \} $ we must have $P_c^{\circ q_{n(j)}}(1)\rightarrow 1$ throughout $D$. In particular, if $c\in D_2$, the critical point $1$ of $P_c$ must be recurrent. This is impossible if $U$ is hyperbolic-like or capture, since over $D_2$, $c\in \partial \Delta_c$ and hence $1$ either gets attracted to the attracting cycle or eventually maps to the Siegel disk $\Delta_c$.
\end{pf}
% Source article: ``On Dynamics of Cubic Siegel Polynomials'' (math.DS),
% arXiv:math/9805098, https://arxiv.org/abs/math/9805098.
% Abstract: Motivated by the work of Douady, Ghys, Herman and Shishikura on
% Siegel quadratic polynomials, we study the one-dimensional slice of the
% cubic polynomials which have a fixed Siegel disk of rotation number theta,
% with theta being a given irrational number of Brjuno type. Our main goal is
% to prove that when theta is of bounded type, the boundary of the Siegel
% disk is a quasicircle which contains one or both critical points of the
% cubic polynomial. We also prove that the locus of all cubics with both
% critical points on the boundary of their Siegel disk is a Jordan curve,
% which is in some sense parametrized by the angle between the two critical
% points. A main tool in the bounded type case is a related space of degree 5
% Blaschke products which serve as models for our cubics. Along the way, we
% prove several results about the connectedness locus of these cubic
% polynomials.
% Source article: ``Some remarks about interpolating sequences in reproducing
% kernel Hilbert spaces'', https://arxiv.org/abs/1109.1857.
% Abstract: In this paper we study two separate problems on interpolation. We
% first give some new equivalences of Stout's Theorem on necessary and
% sufficient conditions for a sequence of points to be an interpolating
% sequence on a finite open Riemann surface. We next turn our attention to
% the question of interpolation for reproducing kernel Hilbert spaces on the
% polydisc and provide a collection of equivalent statements about when it is
% possible to interpolate in the Schur-Agler class of the associated
% reproducing kernel Hilbert space.
\section*{Notation}
\section{Introduction and Statement of Main Results}
Recall that a sequence $Z=\{z_j\}\subset\mathbb{D}$ is called an $H^\infty$-\textit{interpolating} sequence if for every $a=\{a_j\}\in\ell^\infty$ there exists a function $f\in H^\infty$ such that
$$
f(z_j)=a_j\quad\forall j.
$$
Similarly, for the sequence $Z$ let $\ell^2(\mu_Z)$ be the space of all sequences $a=\{a_j\}$ such that
$$
\sum_{j=1}^\infty\abs{a_j}^2(1-\abs{z_j}^2):=\norm{a}_{\ell^2(\mu_Z)}^2<\infty.
$$
Then the sequence $Z=\{z_j\}$ is called an $H^2$-\textit{interpolating} sequence if for every $a=\{a_j\}\in\ell^2(\mu_Z)$ there exists a function $f\in H^2$ such that
$$
f(z_j)=a_j\quad\forall j.
$$
As is well known, these sequences turn out to be one and the same and
are characterized by a separation condition on the points in $Z$ and
that the points must generate a Carleson measure for the space
$H^2$. The following theorem gives a precise statement of this.
\begin{thm}[Carleson, \cite{Car2}, Shapiro, Shields \cite{ShSh2}]
\label{CarInterp}
The following are equivalent:
\begin{itemize}
\item[(a)] The sequence $Z$ is $H^2$-interpolating;
\item[(b)] The sequence $Z$ is $H^\infty$-interpolating;
\item[(c)] The sequence $Z$ is separated in the pseudo-hyperbolic metric and generates an $H^2$-Carleson measure. In particular, $\sum_{z_j\in Z}(1-\abs{z_j}^2)\delta_{z_j}$ is an $H^2$-Carleson measure and
$$
\inf_{j\neq k}\abs{\frac{z_j-z_k}{1-\overline{z_k}z_j}}\geq\delta>0;
$$
\item[(d)] The sequence $Z$ is strongly separated, namely there exists a constant $\delta>0$ such that
$$
\inf_{k}\prod_{j\neq k}\abs{\frac{z_j-z_k}{1-\overline{z_k}z_j}}\geq\delta>0.
$$
\end{itemize}
\end{thm}
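As a small numerical illustration of conditions (c) and (d) (our own toy computation, not part of the theorem), one can check both separation conditions for the classical radial sequence $z_n = 1-2^{-n}$, which is well known to be interpolating:

```python
import numpy as np

def pseudo_hyperbolic(z, w):
    # pseudo-hyperbolic distance |z - w| / |1 - conj(w) z| on the disk
    return abs(z - w) / abs(1 - np.conj(w) * z)

z = np.array([1 - 2.0**(-n) for n in range(1, 12)])

# condition (c): pairwise separation in the pseudo-hyperbolic metric
sep = min(pseudo_hyperbolic(z[j], z[k])
          for j in range(len(z)) for k in range(len(z)) if j != k)

# condition (d): strong separation via the Blaschke-type products
strong = min(np.prod([pseudo_hyperbolic(z[j], z[k])
                      for j in range(len(z)) if j != k])
             for k in range(len(z)))

print(sep, strong)   # both bounded away from 0
```

For adjacent points one computes $\rho(z_n,z_{n+1}) = 1/(3-2^{-n})$, so the pairwise separation constant tends to $1/3$, while the products in (d) stay bounded below by a smaller but positive constant.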
Since the results of Carleson \cite{Car2} and Shapiro--Shields \cite{ShSh2}, the question of characterizing the interpolating sequences for other spaces of analytic functions has been intensively studied. See any of the papers \cites{MR2137874, MaSu, Seip} for various generalizations of this question.
In this paper we study the problem of interpolating sequences in two
settings. First, we consider the case of finite Riemann surfaces and
obtain new equivalences and a different proof of a theorem of
Stout. We then go on to consider a multivariable example: the Schur
Agler class. In both cases we will make heavy use of results on Pick
interpolation.
\subsection*{Interpolation on Riemann Surfaces}
Let $\Gamma$ be a Fuchsian group acting on the unit disk. We will
assume that $\Gamma$ is finitely generated. The group $\Gamma$ acts on
$H^\infty$ by composition and the associated fixed-point algebra is
denoted $H^\infty_\Gamma$. It is known that every finite open Riemann
surface can be viewed as the quotient space of the disk by the action
of such a group. The group $\Gamma$ is finitely generated and acts
without fixed points on the disk. A major advantage of viewing the
problem in terms of fixed points is that the algebra
$H^\infty_\Gamma\subseteq H^\infty$ and this allows us to bootstrap
results about Riemann surfaces to the classical setting of the open
unit disk. There is also a reproducing kernel Hilbert space
$H^2_\Gamma$ associated with the group action, namely the set of fixed
points of $\Gamma$ in $H^2$. We denote by $K^\Gamma$ the reproducing
kernel for the Hilbert space $H^2_\Gamma$. In~\cite{R} it is shown that
$H^\infty_\Gamma$ is the multiplier algebra for $H^2_\Gamma$.
Given a sequence of non-zero vectors $\{x_n\}$ in a Hilbert space $H$ we define the associated Gramian as the matrix $[\inp{x_n}{x_m}]_{m,n = 1}^\infty$. The normalized Gramian is defined as the Gramian of the sequence $\tilde{x_n}$, where $\tilde{x_n} = \frac{x_n}{\norm{x_n}}$.
Our first main result of this paper is the following theorem.
\begin{thm}\label{interpriemann}
Let $Z = (z_n)\subseteq \mathbb{D}$ be a sequence of points such that
no two points lie on the same orbit of $\Gamma$, where $\Gamma$ is the
group of deck transformations associated to a finite Riemann surface.
Let $Z_n = Z \setminus
\{z_n\}$. The following are equivalent:
\begin{enumerate}
\item The sequence $\{z_n\}$ is interpolating for $H^\infty_\Gamma$;
\item The sequence $\{z_n\}$ is interpolating for $H^2_\Gamma$;
\item The sequence $\{z_n\}$ is $H^2_\Gamma$-separated and
$\sum_{n=1}^\infty K^\Gamma(z_n,z_n)^{-1}\delta_{z_n}$ is a
Carleson measure;
\item The Gramian $G =
\left[\frac{K^\Gamma(z_i,z_j)}{\sqrt{K^\Gamma(z_i,z_i)K^\Gamma(z_j,z_j)}}\right]$
is bounded below;
\item There is a constant $\delta>0$ such that $\inf_{n\geq 1}
d_{H^\infty_\Gamma}(z_n, Z_n)\geq \delta$.
\end{enumerate}
\end{thm}
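Condition (4) can be made concrete with a toy computation (ours; we use the Szeg\"o kernel of the Hardy space on the disk in place of $K^\Gamma$, whose values are not explicit): for the separated sequence $z_n = 1-2^{-n}$ the normalized Gramian has unit diagonal and its spectrum stays bounded below.

```python
import numpy as np

def szego(z, w):
    # Szego kernel K(z, w) = 1 / (1 - conj(w) z) of the Hardy space H^2
    return 1.0 / (1.0 - np.conj(w) * z)

z = np.array([1 - 2.0**(-n) for n in range(1, 10)])

# normalized Gramian G_ij = K(z_i, z_j) / sqrt(K(z_i, z_i) K(z_j, z_j))
G = np.array([[szego(zi, zj) / np.sqrt(szego(zi, zi) * szego(zj, zj))
               for zj in z] for zi in z])

eigs = np.linalg.eigvalsh(G)   # G is real symmetric for these real points
print(eigs.min(), eigs.max())  # strictly positive minimum: bounded below
```

The kernel functions at distinct points are linearly independent, so the Gramian of any finite section is positive definite; what the theorem asserts is the uniform lower bound as the section grows.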
A similar result was obtained by Stout~\cite{S}. However, there are
two differences between the results obtained there and our
results. First, we use the interpolation theorem from~\cite{RW} as an
essential ingredient in our proof. This modern approach appears in the
work of Marshall and Sundberg on interpolating sequences for the
Dirichlet space. Second, our proof applies to the case of a subalgebra
of $H^\infty$ that is fixed by the action of a finitely-generated
discrete group, a more general setting than the case of a finite
Riemann surface.
\subsection*{Interpolation in the Schur-Agler Class}
We now turn to the case where the domain is $\mathbb{D}^d$. Here the algebra
in question is the set of functions in the Schur-Agler class. As
motivation for our results we describe the important theorem of Agler
and McCarthy that characterizes the interpolating sequences for
$H^\infty(\mathbb{D}^2)$. Recall that $H^\infty(\mathbb{D}^2)$ is the multiplier
algebra for the space $H^2(\mathbb{D}^2)$, and that this is a reproducing
kernel Hilbert space with kernel given by
\[
k_z(w)=\frac{1}{1-\overline{z_1}w_1}\frac{1}{1-\overline{z_2}w_2}
\]
for $z,w\in\mathbb{D}^2$.
A sequence of points $\{\lambda_j\}\subset\mathbb{D}^2$ is called an
$H^\infty(\mathbb{D}^2)$-interpolating sequence if for any sequence of bounded
numbers $\{w_i\}$ there is a function $f\in H^\infty(\mathbb{D}^2)$ such that
$f(\lambda_j)=w_j$. The sequence of points is said to be
\textit{strongly separated} if for each integer $i$ there is a
function $\varphi_i\in H^\infty(\mathbb{D}^2)$ of norm at most $M$ such
that $\varphi_i(\lambda_i)=1$ and $\varphi_i(\lambda_k)=0$ for $k\neq
i$. The result of Agler and McCarthy then gives a characterization of
the interpolating sequences for $H^\infty(\mathbb{D}^2)$.
\begin{thm}[Agler and McCarthy, \cite{AgMc}]
Let $\{\lambda_j\}\in\mathbb{D}^2$. The following are equivalent:
\begin{itemize}
\item[(i)] $\{\lambda_j\}$ is an interpolating sequence for
$H^\infty(\mathbb{D}^2)$;
\item[(ii)] The following two conditions hold
\begin{itemize}
\item[$(a)$] For all admissible kernels $k$, their normalized Gramians
are uniformly bounded above,
$$
G^k\leq MI
$$
for some $M>0$,
\item[$(b)$] For all admissible kernels $k$, their normalized Gramians
are uniformly bounded below,
$$
G^k\geq NI
$$
for some $N>0$;
\end{itemize}
\item[(iii)] The sequence $\{\lambda_j\}$ is strongly separated and
condition $(a)$ alone holds;
\item[(iv)] Condition $(b)$ alone holds.
\end{itemize}
\end{thm}
Here an admissible kernel is one for which multiplication by each
coordinate function $z_j$ defines a contraction $M_{z_j}$ on $H(k)$, the
reproducing kernel Hilbert space on $\mathbb{D}^2$ with kernel $k$.
We now consider a related question, but for more general products of
reproducing kernel Hilbert spaces. For $j=1,\ldots, d$, let $k_j$ be a
reproducing kernel on $\mathbb{D}$ with the property that
\[\frac{1}{k_j}(z,w) = 1 - \inp{b_j(z)}{b_j(w)},\]
where $b_j$ is an analytic map from $\mathbb{D}$ into the open unit ball of a separable
Hilbert space.
The kernel $k_j$ is the reproducing kernel
for the Hilbert space $H(k_j)$. Let $H(k)$ denote the reproducing
kernel Hilbert space defined on $\mathbb{D}^d$ with reproducing kernel
$k(z,w)=\prod_{j=1}^d k_j(z_j,w_j)$ for $z,w\in\mathbb{D}^d$.
We define $S_{H(k)}(\mathbb{D}^d)$ to be the set of functions
$m:\mathbb{D}^d\to\mathbb{C}$ such that
$$
1-m(z)\overline{m(w)}=\sum_{j=1}^d\frac{1}{k_j}(z_j,w_j)h_j(z)\overline{h_j(w)}
$$
for functions $\{h_j\}$ defined on $\mathbb{D}^d$. Note that this is the
Schur-Agler class of multipliers for $H(k)$. In the case where $k_j$ is the Szeg\"o kernel and $d = 2$ an
application of Ando's theorem shows that $H^\infty(\mathbb{D}^2)$ and
$S_{H(k)}(\mathbb{D}^2)$ coincide. In higher dimensions this is no longer the case.
Let us say that a kernel $k$ is an \textit{admissible kernel} if we have that
$$
\frac{1}{k_j}(z_j, w_j) k(z,w) = (1-\inp{b_j(z_j)}{b_j(w_j)})k(z,w)\geq
0\text{ for } j=1,\ldots, d.
$$
Given a sequence of points $\{\lambda_j\}\in\mathbb{D}^d$, then the normalized
Gramian of $k$ is the matrix given by
$$
G_{ij}^k=\frac{k(\lambda_i,\lambda_j)}{\sqrt{k(\lambda_i,\lambda_i)k(\lambda_j,\lambda_j)}}
$$
A sequence of points $\{\lambda_j\}\subset\mathbb{D}^d$ is called an
$S_{H(k)}(\mathbb{D}^d)$-interpolating sequence if for any sequence of bounded
numbers $\{w_i\}$ there is a function $f\in S_{H(k)}$ such that
$f(\lambda_j)=w_j$. The sequence of points is said to be
\textit{strongly separated} if for each integer $i$ there is a
function $\varphi_i\in S_{H(k)}(\mathbb{D}^d)$ of norm at most $M$ such
that $\varphi_i(\lambda_i)=1$ and $\varphi_i(\lambda_k)=0$ for $k\neq
i$. Our second main result is the following theorem providing a
generalization of the result of Agler and McCarthy:
\begin{thm}
\label{main}
Let $\{\lambda_j\}$ be a sequence of points in $\mathbb{D}^d$. The
following are equivalent:
\begin{itemize}
\item[(i)] $\{\lambda_j\}$ is an interpolating sequence for $S_{H(k)}(\mathbb{D}^d)$;
\item[(ii)] The following two conditions hold
\begin{itemize}
\item[$(a)$] For all admissible kernels $k$, their normalized Gramians
are uniformly bounded above,
$$
G^k\leq MI
$$
for some $M>0$,
\item[$(b)$] For all admissible kernels $k$, their normalized Gramians
are uniformly bounded below,
$$
G^k\geq NI
$$
for some $N>0$;
\end{itemize}
\item[(iii)] The sequence $\{\lambda_j\}$ is strongly separated and
condition $(a)$ alone holds;
\item[(iv)] Condition $(b)$ alone holds.
\end{itemize}
\end{thm}
\section{Interpolation in Riemann surfaces}
\label{s.Riemann}
Our goal in this section is to prove the analogue of Carleson's
theorem for Riemann surfaces. We view the Riemann surface as the
quotient of the disk by the action of a Fuchsian group and state our
theorems for the corresponding fixed-point algebra $H^\infty_\Gamma$.
A central result that we require is a Nevanlinna--Pick type theorem
obtained in~\cite{RW}. We briefly recall the parts of that paper that
are most relevant to our work.
Let
$C(H^\infty_\Gamma)$ be the set of columns over $H^\infty_\Gamma$,
similarly, let $R(H^\infty_\Gamma)$ denote the rows. There is a
natural identification between $C(H^\infty_\Gamma)$ and the space of
multipliers $\mathrm{mult}(H^2_\Gamma, H^2_\Gamma\otimes \ell^2)$. There is
also a natural identification between $R(H^\infty_\Gamma)$ and
$\mathrm{mult}(H^2_\Gamma\otimes \ell^2, H^2_\Gamma)$.
\begin{thm}[\cite{RW}]\label{interpthm}
Let $z_1,\ldots,z_n\in\mathbb{D}$, $w_1,\ldots,w_n\in \mathbb{C}$ and
$v_1,\ldots,v_n\in\ell^2$. There exists a function $F\in
C(H^\infty_\Gamma)$ such that $\norm{F}\leq C$ and $\inp{F(z_i)}{v_i} =
w_i$ if and only if the matrix $[(\alpha^2C^2\inp{v_j}{v_i} -
w_i\overline{w_j})K^\Gamma(z_i, z_j)]\geq 0$. The constant $\alpha$
depends on $\Gamma$ but not on the points $z_1,\ldots,z_n$.
\end{thm}
A similar argument also establishes the fact that there is a function
$F\in R(H^\infty_\Gamma)$ such that $\norm{F}\leq C$ and $F(z_i) =
v_i$ if and only if the matrix $[(C^2\alpha^2 -
\inp{v_j}{v_i})K^\Gamma(z_i, z_j)]\geq 0$.
\subsection{Separation, interpolation, and Carleson measures}
In order to state our theorem we need to develop some of the necessary
background on separation of points, interpolating sequences and Carleson
measures. We state our definitions in terms of reproducing kernels and
multiplier algebras. The case we are interested in is the RKHS
$H^2_\Gamma$ and its multiplier algebra $H^\infty_\Gamma$. In this
situation there is additional structure that we can exploit.
Let $X$ be a set and let $\{x_n\}$ be a sequence of points in $X$. Let
$H$ be a reproducing kernel Hilbert space of functions on $X$ with kernel $K$ and let
$M(H)$ be its multiplier algebra. We say that $\{x_n\}$ is an
\textit{interpolating sequence} for the algebra $M(H)$ if and only if
the restriction map $R:M(H)\to \ell^\infty$ given by $R(f) = \{f(x_n)\}$
is surjective.
Given a point $x\in X$ and a set $S\subseteq X$ we define the $M(H)$-distance from
$x$ to $S$ by $d_{M(H)}(x, S) = \sup\{\abs{f(x)}\,:\, f|_S = 0,
\norm{f}_{M(H)} \leq 1\}$. The sequence $\{x_n\}$ is called
\textit{$M(H)$-separated} if and only if there exists a constant
$\delta >0$ such that $d_{M(H)}(x_n, Z_n)\geq \delta$ for all $n\geq
1$, where $Z_n = \{x_m\,:\, m\geq 1\}\setminus \{x_n\}$. If $\{x_n\}$ is
an interpolating sequence, then by the open mapping theorem there
exists a constant $C$ such that for any sequence $w\in \ell^\infty$
there is an $f\in M(H)$ with $R(f) = w$ and
$\norm{f}_{M(H)}\leq C\norm{w}_\infty$. Applying this to the case
where $w = e_n$ we see that $d_{M(H)}(x_n , Z_n) \geq C^{-1}$. Therefore an interpolating sequence for $M(H)$ is $M(H)$-separated.
Carleson's theorem states that the converse is true for
$H^\infty(\mathbb{D})$, that is, every $H^\infty$-separated sequence
is an $H^\infty$-interpolating sequence. The modern approach to this problem
relies on the fact that $H^\infty$ is the multiplier algebra of
the Hardy space, and the fact that the Szeg\"o kernel has the complete
Pick property. We will use a similar approach based on Theorem~\ref{interpthm} and
bootstrap our results to the case of $H^\infty$.
There is a related notion of separation in terms of the reproducing
kernel of $H$. In~\cite{AgMc2} it is shown that the function $\rho_H(x,y)
= \sqrt{1 - \frac{\abs{K(x,y)}^2}{K(x,x)K(y,y)}}$ is a semi-metric on
the set $X$. A sequence of points is called \textit{$H$-separated} if
and only if $\inf_{i\not = j}\rho_H(x_i,x_j) > 0 $. A sequence is
\textit{weakly separated} if and only if there is a constant $\delta>0$ and functions
$f_{i,j}\in M(H)$ such that $\norm{f_{i,j}}\leq 1$ with $f_{i,j}(x_i)
= \delta$ and $f_{i,j}(x_j) = 0$. In general a weakly separated sequence is
$H$-separated, and the converse is true in the case of Riemann
surfaces.
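For the Szeg\"o kernel this semi-metric is nothing but the classical pseudo-hyperbolic distance, thanks to the identity $|1-\bar wz|^2 - (1-|z|^2)(1-|w|^2) = |z-w|^2$. The following quick numerical check (ours, purely illustrative) confirms this.

```python
import numpy as np

def szego(z, w):
    return 1.0 / (1.0 - np.conj(w) * z)

def rho_H(z, w, K):
    # kernel semi-metric sqrt(1 - |K(z,w)|^2 / (K(z,z) K(w,w)))
    return np.sqrt(1.0 - abs(K(z, w))**2 / (K(z, z).real * K(w, w).real))

def pseudo_hyperbolic(z, w):
    return abs(z - w) / abs(1.0 - np.conj(w) * z)

rng = np.random.default_rng(0)
pts = 0.9 * ((rng.random(6) - 0.5) + 1j * (rng.random(6) - 0.5))  # points in D
max_dev = max(abs(rho_H(z, w, szego) - pseudo_hyperbolic(z, w))
              for z in pts for w in pts)
print(max_dev)   # numerically zero
```

In particular, for the Hardy space the notion of $H$-separation above coincides with the separation condition (c) of the Carleson--Shapiro--Shields theorem.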
\begin{lm}\label{weaksep}
Let $\Gamma$ be a finitely generated discrete group of automorphisms
and let $H^\infty_\Gamma$ be the corresponding fixed-point
algebra. There exists a constant $C$ such that for any $\delta>0$ and
any pair of points $z,w\in \mathbb{D}$, $\rho_H(z,w) \geq \delta/C$ if and only
if there exists a function $f\in H^\infty_\Gamma$ such that $f(w) =
\delta$, $f(z) = 0$ and $\norm{f}_\infty\leq 1$.
\end{lm}
\begin{proof}
By the Interpolation Theorem~\ref{interpthm} there exists a constant $C$ and function $f\in
H^\infty_\Gamma$ such that $f(w) = \delta$ and $f(z) = 0$ with
$\norm{f}_\infty \leq 1$ if and only if the matrix
\[
\begin{bmatrix}
C^2K^{\Gamma}(z,z) & C^2K^{\Gamma}(z,w)\\
C^2K^{\Gamma}(w, z) & (C^2-\delta^2)K^{\Gamma}(w,w)
\end{bmatrix} \geq 0,
\]
where $C$ is a constant that does not depend on the points
$z,w$. Since the diagonal terms of the above matrix are non-negative,
the matrix positivity condition is equivalent to the determinant being
non-negative. Computing the determinant and rearranging, we find that
this is equivalent to $\rho_H(z,w) \geq \delta/C$.
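Explicitly, the determinant computation runs as follows:

```latex
\begin{align*}
0 &\leq \det\begin{bmatrix}
C^2K^{\Gamma}(z,z) & C^2K^{\Gamma}(z,w)\\
C^2K^{\Gamma}(w, z) & (C^2-\delta^2)K^{\Gamma}(w,w)
\end{bmatrix}\\
&= C^2(C^2-\delta^2)K^{\Gamma}(z,z)K^{\Gamma}(w,w)
   - C^4\abs{K^{\Gamma}(z,w)}^2,
\end{align*}
and dividing through by $C^4K^{\Gamma}(z,z)K^{\Gamma}(w,w)$ this is equivalent to
\[
\rho_H(z,w)^2
= 1 - \frac{\abs{K^{\Gamma}(z,w)}^2}{K^{\Gamma}(z,z)K^{\Gamma}(w,w)}
\geq \frac{\delta^2}{C^2}.
\]
```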
\end{proof}
\begin{cor}
A sequence is $H^2_\Gamma$-separated if and only if
the sequence is weakly separated by $H^\infty_\Gamma$.
\end{cor}
A sequence is called a (universal) interpolating sequence for $H$ if
and only if the map $T:H\to \ell^2$ given by $T(f) =
\left\{\frac{f(x_n)}{K(x_n,x_n)^{1/2}}\right\}$ is surjective.
In order to state our results we need the notion of a Carleson measure. A measure $\mu$ on a set $X$ is called a Carleson measure for the Hilbert space $H$ if and only if there exists a constant $C(\mu)$ such that
\[\int_X \abs{f(x)}^2\,d\mu \leq C(\mu)\norm{f}_H^2 \quad\text{for all } f\in H.\]
Given a sequence of points $\{x_n\}$ we can construct a measure on the set $X$ by setting $\mu = \sum_{n=1}^\infty K(x_n,x_n)^{-1}\delta_{x_n}$. When $f\in H$, we see that
\begin{eqnarray*}
\int_X\abs{f(x)}^2\,d\mu & = & \sum_{n=1}^\infty \abs{f(x_n)}^2K(x_n,x_n)^{-1}\\
& = & \sum_{n=1}^\infty \abs{\inp{f}{k_{x_n}/\norm{k_{x_n}}}}^2\\
& \leq & C(\mu)\norm{f}_H^2.
\end{eqnarray*}
The last inequality holds precisely when the measure $\mu$ generated by the points $\{x_n\}$ is a Carleson measure for $H$.
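As a concrete illustration (not from the paper), the Carleson bound can be checked numerically for finitely many points using the classical Szeg\"o kernel $K(z,w) = (1-z\bar{w})^{-1}$ on the disc; the points and test functions below are arbitrary choices, and the best constant $C(\mu)$ for a finite point set is the top eigenvalue of the normalized Gram matrix:

```python
import numpy as np

# Illustration: Szego kernel on the unit disc, K(z, w) = 1 / (1 - z*conj(w)).
def szego(z, w):
    return 1.0 / (1.0 - z * np.conj(w))

z = np.array([0.1 + 0.0j, 0.0 + 0.5j, -0.3 + 0.2j])   # arbitrary points
K = szego(z[:, None], z[None, :])                     # K[i, j] = K(z_i, z_j)
d = np.diag(K).real                                   # K(z_n, z_n)
G = K / np.sqrt(np.outer(d, d))                       # normalized Gram matrix
lam_max = np.linalg.eigvalsh(G).max()                 # best constant C(mu)

rng = np.random.default_rng(0)
for _ in range(100):
    c = rng.normal(size=3) + 1j * rng.normal(size=3)
    f_vals = K @ c                              # f = sum_j c_j k_{z_j}, f(z_i) = (Kc)_i
    norm_sq = np.real(np.conj(c) @ K @ c)       # ||f||^2 = c* K c
    mu_integral = np.sum(np.abs(f_vals) ** 2 / d)   # int |f|^2 dmu
    assert mu_integral <= lam_max * norm_sq + 1e-9  # Carleson bound
```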
It is helpful to restate the above in terms of sequences in Hilbert
space. To this end let us fix a sequence $x_n \in X$, let $k_{x_n}$ be
the corresponding reproducing kernel and let $g_n =\frac{
k_{x_n}}{\norm{k_{x_n}}}$. Note that $\{g_n\}$ is a unit norm sequence in
the Hilbert space $H$. Consider the map $T:H\to \ell^2$ given by $T(f) =
\left\{\frac{f(x_n)}{K(x_n,x_n)^{1/2}}\right\} = \{\inp{f}{g_n}\}$. It is well known that
this map is bounded if and only if $\{g_n\}$ is a Bessel sequence, i.e.,
there is a constant $C$ such that $\sum_{n=1}^\infty
\abs{\inp{f}{g_n}}^2 \leq C\norm{f}^2$ for all $f\in H$. This in turn
is equivalent to the fact that the measure $\sum_{n=1}^\infty K(x_n, x_n)^{-1}\delta_{x_n}$ is a Carleson measure for $H$. In order to make the connection with Pick interpolation later on, we also point out that the sequence $\{g_n\}$ is Bessel if and only if the Gram matrix $G$ whose entries are given by
$\inp{g_j}{g_i}$ is bounded, when viewed as an operator on $\ell^2$.
The sequence $\{x_n\}$ is interpolating for $H$ if and only if the sequence
$\{g_n\}$ is a \textit{Riesz basic sequence}, i.e., the sequence $\{g_n\}$ is
similar to an orthonormal set. In terms of the Gramian this means that
$G$ is both bounded and bounded below.
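In finite dimensions this correspondence is transparent: the operator $TT^*$ has matrix exactly $G$, so $T$ is bounded and surjective precisely when $G$ is bounded and bounded below. A small numpy illustration with randomly chosen vectors (not tied to any reproducing kernel):

```python
import numpy as np

# Illustration: random unit vectors g_1, ..., g_n in R^d.
rng = np.random.default_rng(1)
d, n = 6, 4
V = rng.normal(size=(d, n))
V /= np.linalg.norm(V, axis=0)        # columns are unit vectors g_1, ..., g_n

T = V.T                               # T f = (<f, g_1>, ..., <f, g_n>)
G = V.T @ V                           # Gram matrix of the g_n

assert np.allclose(T @ T.T, G)        # TT* is exactly the Gramian
eigs = np.linalg.eigvalsh(G)
# G bounded above and below  <=>  {g_n} is a Riesz basic sequence
# <=>  T is bounded and surjective (here: onto R^n).
assert eigs.min() > 0 and np.linalg.matrix_rank(T) == n
```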
Before we proceed we need a preliminary lemma that relates the fact
that $\{g_n\}$ is a Riesz basic sequence to the matrix positivity
condition that appears in Theorem~\ref{interpthm}.
\begin{lm}\label{pick:gram}
Let $\{g_n\}$ be a sequence of vectors in a Hilbert space. The Gramian $[\inp{g_j}{g_i}]$ is both bounded and bounded
below if and only if there is a constant $C>0$ such that for all points $(w_n)_{n\geq 1}\in \mathrm{ball}(\ell^\infty)$ the matrix $[(C^2-w_i\overline{w_j})\inp{g_j}{g_i}]$ is a positive matrix.
\end{lm}
\begin{proof}
Suppose that $[(C^2 - w_i\overline{w_j})\inp{g_j}{g_i}] \geq 0$ for every
$(w_n)\in \mathrm{ball}(\ell^\infty)$. If $\{\alpha_n\}$ is a sequence in $\ell^2$, then we get
\[C^2 \sum_{i,j=1}^\infty \alpha_j\overline{\alpha_i}\inp{g_j}{g_i}
\geq \sum_{i,j=1}^\infty
w_i\overline{w_j}\alpha_j\overline{\alpha_i}\inp{g_j}{g_i}.\] Choose
$w_i = \exp(2\pi t_i \sqrt{-1})$. This gives,
\[C^2 \sum_{i,j=1}^\infty \alpha_j\overline{\alpha_i}\inp{g_j}{g_i}
\geq \sum_{i,j=1}^\infty
\exp(2\pi(t_i-t_j)\sqrt{-1})\alpha_j\overline{\alpha_i}\inp{g_j}{g_i}.\]
If we integrate both sides over $[0,1]$ with respect to each of
the variables $t_1,\ldots,t_n,\ldots$ then the above equation reduces
to
\[C^2\norm{\sum_{n=1}^\infty \alpha_n g_n}^2 \geq \sum_{n=1}^\infty \abs{\alpha_n}^2.\]
Now choose $\alpha_i = w_i\beta_i$. We have
\[C^2 \sum_{i,j=1}^\infty
\exp(2\pi(t_i-t_j)\sqrt{-1})\beta_j\overline{\beta_i} \inp{g_j}{g_i}
\geq \sum_{i,j=1}^\infty \beta_j\overline{\beta_i}\inp{g_j}{g_i}.\]
Integrating as before, we obtain
\[C^2 \sum_{n=1}^\infty \abs{\beta_n}^2 \geq \norm{\sum_{n=1}^\infty
\beta_ng_n}^2.\]
For the converse assume that $BI \geq G = [\inp{g_j}{g_i}] \geq
B^{-1}I$. Let $D_w$ be the diagonal matrix with diagonal entries
$\{w_n\}$; since $\abs{w_n}\leq 1$, $D_w$ is a contraction. Since $G$ is invertible we have
$\norm{G^{-1/2}D_w G^{1/2}} \leq \norm{G^{-1/2}}\norm{G^{1/2}} \leq
B$. Therefore,
$$
B^2I - (G^{-1/2}D_w G^{1/2})(G^{-1/2}D_w G^{1/2})^*
\geq 0.
$$
Conjugating by $G^{1/2}$ gives $B^2G \geq D_wGD_w^*$, which is just $[(B^2 - w_i\overline{w_j})\inp{g_j}{g_i}]\geq 0$.
\end{proof}
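The converse direction of the lemma can be sanity-checked numerically: for a random positive definite ``Gramian'' $G$ with $BI \geq G \geq B^{-1}I$ and random contractive diagonals $D_w$, the matrix $B^2G - D_wGD_w^*$ should be positive semidefinite. A numpy sketch (a numerical experiment, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
G = A @ A.conj().T + 0.1 * np.eye(n)       # a positive definite "Gramian"
eigs = np.linalg.eigvalsh(G)
B = max(eigs.max(), 1.0 / eigs.min())      # so that B I >= G >= B^{-1} I

for _ in range(50):
    w = rng.uniform(-1, 1, size=n) + 1j * rng.uniform(-1, 1, size=n)
    w /= np.maximum(np.abs(w), 1.0)        # force |w_i| <= 1
    Dw = np.diag(w)
    M = B**2 * G - Dw @ G @ Dw.conj().T    # entries (B^2 - w_i conj(w_j)) G_ij
    assert np.linalg.eigvalsh(M).min() > -1e-8   # positive semidefinite
```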
\subsection{Proof of Theorem~\ref{interpriemann}}
Our goal is to prove Theorem \ref{interpriemann} (stated again for ease):
\begin{thm}
Let $Z = (z_n)\subseteq \mathbb{D}$ be a sequence of points in the disc such that no two points lie on the same orbit of $\Gamma$, where $\Gamma$ is the group of deck transformations associated to a finite Riemann surface. Let $Z_n = Z \setminus \{z_n\}$. The following are equivalent:
\begin{enumerate}
\item\label{interp1} The sequence $\{z_n\}$ is interpolating for $H^\infty_\Gamma$;
\item\label{interp2} The sequence $\{z_n\}$ is interpolating for $H^2_\Gamma$;
\item\label{cm} The sequence $\{z_n\}$ is $H^2_\Gamma$-separated and $\sum_{n=1}^\infty
K^\Gamma(z_n,z_n)^{-1}\delta_{z_n}$ is a Carleson measure;
\item\label{bb} The Gramian $G = \left[\frac{K^\Gamma(z_i,z_j)}{\sqrt{K^\Gamma(z_i,z_i)K^\Gamma(z_j,z_j)}}\right]$ is bounded below;
\item\label{sep} There is a constant $\delta>0$ such that $\inf_{n\geq 1}
d_{H^\infty_\Gamma}(z_n, Z_n)\geq \delta$.
\end{enumerate}
\end{thm}
The strategy that we will follow is to prove that (1) and (2) are
equivalent and that (4) and (1) are equivalent. We will then show that (5) implies (1). Finally, we establish that (2) implies (3) and that
(3) implies (5). We now prove the first of our claims that lead to
the proof of Theorem~\ref{interpriemann}.
We fix notation as follows: Let $\{z_n\}\subseteq \mathbb{D}$ be a
sequence of points and let $K^{\Gamma}_{z_n}$ be the reproducing
kernel for the space $H^2_\Gamma$ at the point $z_n$. The
corresponding normalized kernel function will be denoted $g_n$. The
Gram matrix will be denoted $G = [\inp{g_j}{g_i}]$.
\begin{prop}
{\rm (1)} $\Leftrightarrow$ {\rm (2)} A sequence of points $\{z_n\}$ is interpolating for $H^\infty_\Gamma$
if and only if it is interpolating for $H^2_\Gamma$.
\end{prop}
\begin{proof}
Suppose that $\{z_n\}$ is a sequence of points in $\mathbb{D}$. Let
$g_n$ be the normalized kernel function for $H^2_\Gamma$ at the
point $z_n$. The restriction map $R:H^\infty_\Gamma\to \ell^\infty$
is surjective if and only if there is a constant $M$ such that for
every sequence $w = (w_n) \in \mathrm{ball}(\ell^\infty)$ there is a
function $f\in H^\infty_\Gamma$ of norm at most $M$ such that $R(f)
= w$. By the Nevanlinna--Pick type Theorem~\ref{interpthm} this is equivalent
to the matrix $[(C^2M^2 - w_i\overline{w _j})\inp{g_j}{g_i}]\geq 0$
for all choices of $w$. Using Lemma~\ref{pick:gram} we see that this
is equivalent to the Gramian $G$ being bounded and bounded below
which in turn is equivalent to the sequence $\{g_n\}$ being a Riesz
basic sequence. By our comments earlier the sequence $g_n$ is a
Riesz basic sequence if and only if $T:H^2_\Gamma\to \ell^2$ given
by $T(f) = \{\inp{f}{g_n}\}$ is bounded and surjective, i.e., $\{z_n\}$
is interpolating for $H^2_\Gamma$.
\end{proof}
\begin{prop}{\rm (5)} $\Rightarrow$ {\rm (1)}
If there is a constant $\delta$ such that $d_{H^\infty_\Gamma}(z_n, Z_n) \geq \delta >0$, then $\{z_n\}$ is an interpolating sequence for $H^\infty_\Gamma$.
\end{prop}
\begin{proof}
Given a set $Z\subseteq \mathbb{D}$, let $\Gamma Z := \{\gamma(z)\,:\,
\gamma\in\Gamma, z\in Z\}$. Suppose that $d_{H^\infty_\Gamma}(z_n ,
Z_n)\geq \delta >0$. Then, by definition, there exist functions $f_n\in
H^\infty_\Gamma$ such that $\norm{f_n}_\infty\leq 1$,
$\abs{f_n(z_n)}\geq \delta$ and $f_n|_{Z_n} = 0$. It follows that
$f_n|_{\Gamma Z_n} = 0$. The proof of~\cite{S}*{Theorem 6.3} shows that
the sequence $\Gamma Z$ is an interpolating sequence for $H^\infty$.
Now given a sequence $\{w_n\}\in\ell^\infty$, let $\tilde{w}_{n,
\gamma} = w_n$ for $\gamma\in\Gamma$ and $n \geq 1$. Since $\Gamma
Z$ is interpolating for $H^\infty$, there exists a function
$\tilde{f}\in H^\infty$ such that $\tilde{f}(\gamma(z_n)) =
\tilde{w}_{n, \gamma} = w_n$.
Next we invoke a result of Earle and Marden~\cite{EM}*{Theorem page 274}. Their result shows that there is a polynomial $p$ such that the map
\[(\Phi g)(z) = \frac{\sum_{\gamma\in \Gamma}
p(\gamma(z))g(\gamma(z))\gamma'(z)^2}{\sum_{\gamma\in
\Gamma}p(\gamma(z))\gamma'(z)^2}\] defines a bounded projection
from $H^\infty$ onto $H^\infty_\Gamma$. If $g\in H^\infty$ and $g(\gamma(\zeta)) = c$ for every $\gamma\in\Gamma$, then $(\Phi g)(\zeta) = c$. It follows that the function $f = \Phi \tilde{f}$ is in $H^\infty_\Gamma$ and that $f(z_n) = w_n$. Hence, $\{z_n\}$ is an interpolating sequence for $H^\infty_\Gamma$.
\end{proof}
\begin{prop}
{\rm (4)} $\Leftrightarrow$ {\rm (1)}
The Gram matrix is bounded below if and only if the sequence is interpolating for $H^\infty_\Gamma$.
\end{prop}
\begin{proof}
If $G\geq C^{-2} >0$, then
by Theorem~\ref{interpthm}, there exists a function
$F\in R(H^\infty_\Gamma)$ such that $\norm{F}\leq C$ and
$F(z_n) = e_n$. If we write $F =(f_1,f_2,\ldots)$, then $f_m(z_n) =
\delta_{m,n}$ with $\norm{f_m}\leq C$. Given a sequence $w= \{w_n\}\in
\ell^\infty$ let $f = \sum_{n=1}^\infty w_n f_n^2$. We have,
\begin{align*}
\abs{f(z)} &\leq \abs{\sum_{n=1}^\infty w_nf_n(z)^2} \leq \sum_{n=1}^\infty \abs{w_n}\abs{f_n(z)}^2\\
&\leq \left(\sup_{n\geq 1}\abs{w_n}\right) \norm{F(z)}^2 \leq \norm{w}_{\ell^\infty} \norm{F}^2 \leq \norm{w}_{\ell^\infty}C^2.
\end{align*}
Moreover, $f(z_m) = \sum_{n=1}^\infty w_n f_n(z_m)^2 = w_m$ for every
$m$. This proves that the sequence is interpolating for $H^\infty_\Gamma$.
If the sequence $z_n$ is interpolating for $H^\infty_\Gamma$, then
for any choice of sequence $(w_n)\in \ell^\infty$ such that
$\abs{w_n}\leq 1$, there exists a function $f\in H^\infty_\Gamma$,
with $\norm{f}_\infty\leq C$ such that $f(z_n) = w_n$. Hence, the
matrix $[(C^2 - w_i\overline{w_j})\inp{g_j}{g_i}]\geq 0$ for all
$(w_n) \in \mathrm{ball}(\ell^\infty)$. From Lemma~\ref{pick:gram} we see
that the Gramian is bounded below.
\end{proof}
\begin{prop} {\rm (2)} $\Rightarrow$ {\rm (3)} If $\{z_n\}$ is an
interpolating sequence for $H^2_\Gamma$, then $\{z_n\}$ is
$H^2_\Gamma$-separated and $\sum_{n=1}^\infty
K^\Gamma(z_n,z_n)^{-1}\delta_{z_n}$ is a Carleson measure.
\end{prop}
\begin{proof}
This result is true for any RKHS and the proof can be found in~\cite{Seip}.
\end{proof}
\begin{prop}
{\rm (3)} $\Rightarrow$ {\rm (5)}
If the sequence $\{z_n\}$ is $H^2_\Gamma$-separated and $\sum_{n=1}^\infty K^\Gamma(z_n, z_n)^{-1}\delta_{z_n}$ is a Carleson measure, then the sequence $\{z_n\}$ is $H^\infty_\Gamma$-separated.
\end{prop}
\begin{proof}
Fix one of the indices $m$. Since the sequence is $H^2_\Gamma$-separated,
by Lemma~\ref{weaksep} there exist functions $f_n$ such that
$\norm{f_n}\leq 1$ with $f_n(z_n) = 0$ and $\abs{f_n(z_m)} \geq \rho_{H^2_\Gamma}(z_n,z_m)$.
We now consider the sequence of products $\phi_n = f_1\cdots
f_n$. This sequence has a weak-* limit in the unit ball of
$H^\infty_\Gamma$. Denote this limit by $\phi$.
The claim is that $\abs{\phi(z_m)} > \delta'$ and $\phi(z_n) = 0$ for $n\not=m$, where
$\delta'$ is a constant that does not depend on $m$.
To see this we note that the infinite product that defines
$\phi(z_m)$ converges to a non-zero value if and only if the
series $\sum_{n\not = m}1-\abs{f_n(z_m)}^2 < + \infty$.
Using the Carleson condition we get that $\sum_{n=1}^\infty
K^{\Gamma}(z_n,z_n)^{-1}\abs{K^{\Gamma}_{z_m}(z_n)}^2 \leq
C\norm{K^\Gamma_{z_m}}^2$, where the constant $C$ is independent of
$m$. Rewriting this we get
\[\sum_{n = 1}^\infty
\dfrac{\abs{K^\Gamma(z_m,z_n)}^2}{K^\Gamma(z_n,z_n)K^\Gamma(z_m,z_m)}
\leq C.
\]
Now we
invoke the fact that $\abs{f_n(z_m)} \geq \rho_{H^2_\Gamma}(z_n,z_m)$, so that
$1-\abs{f_n(z_m)}^2 \leq \frac{\abs{K^\Gamma(z_m,z_n)}^2}{K^\Gamma(z_n,z_n)K^\Gamma(z_m,z_m)}$,
and hence $\sum_{n\not = m}1-\abs{f_n(z_m)}^2\leq C$.
\end{proof}
Combining all of these propositions completes the proof of Theorem~\ref{interpriemann}.
\subsection{Applications to the Feichtinger conjecture}
In this section we make some observations that are relevant to the
Kadison-Singer problem. This has been a significant problem in
operator algebras for the past 50 years. We refrain from stating the
problem in its original form, and instead focus on an equivalent
statement: the Feichtinger conjecture.
The Feichtinger conjecture asks whether every bounded frame $\{f_n\}$
can be written as the union of finitely many Riesz basic
sequences. In~\cite{CCLV} it is shown that the term bounded frame can be
replaced by bounded Bessel sequence. The term bounded here means that
$\inf_{n\geq 1}\norm{f_n} > 0$; perhaps bounded below would be a better
term. In recent years there has been interest in this problem from the
perspective of function theory. The frames of interest are sequences
of normalized reproducing kernels. Given a kernel function $K$ on a
set $X$, the normalized reproducing kernel at $x$ is the function $g_x
=\frac{k_x}{K(x,x)^{1/2}}$. Given a sequence of points in $X$ we obtain a
sequence of unit norm vectors $g_{x_n}$.
\begin{thm}
Let $\{z_n\}$ be a sequence of points in the unit disk such that the
sequence $\{g_{z_n}\}$ of normalized reproducing kernels for $H^2_\Gamma$ is a Bessel sequence. Then $\{g_{z_n}\}$ can be written as
a union of finitely many Riesz basic sequences; that is, the Feichtinger
conjecture is true for such Bessel sequences.
\end{thm}
\begin{proof}
Let $\{z_n\}$ be a sequence of points in the unit disk. Let $K^{\Gamma}$ be the
reproducing kernel for the space $H^2_\Gamma$. The sequence
$\{g_{z_n}\}$ is a sequence of unit norm vectors in $H^2_\Gamma$. The
condition that the sequence $\{g_{z_n}\}$ be a Bessel sequence is
equivalent to the Carleson condition on the points $\{z_n\}$. A result
of~\cite{M}, a proof of which can be found in~\cite{AgMc2}
and~\cite{Seip}, shows that a Bessel sequence can be written as a
union of finitely many $H$-separated sequences. By
Theorem~\ref{interpriemann} an $H^2_\Gamma$-separated Bessel
sequence of normalized reproducing kernels is an interpolating
sequence for $H^2_\Gamma$, that is, a Riesz basic sequence. Hence the
Bessel sequence of reproducing kernels for $H^2_\Gamma$ can be
written as a union of finitely many Riesz basic sequences.
\end{proof}
\section{Interpolation for Products of Kernels}
In this section we prove a generalization of a theorem of
Agler-McCarthy~\cite{AgMc} on interpolating sequences in several
variables. Our results depend on the Pick Interpolation theorem due to
Tomerlin~\cite{T}. Our goal is to give a proof of Theorem~\ref{main}.
We proceed in much the same way as in \cite{AgMc} by first defining
related conditions that will help us in studying the equivalences
between the various interpolation problems. Condition $(a)$ from
Theorem \ref{main} is equivalent to the following: There exists a
constant $M$ and positive semi-definite infinite matrices $\Gamma^j$,
$j=1,\ldots, d$, such that
\begin{equation}
\label{Conditiona'}
\tag{$a'$}
M\delta_{ij}-1=\sum_{l=1}^d\Gamma_{ij}^l\frac{1}{k_l}(\lambda_i^l,\lambda_j^l).
\end{equation}
Condition $(b)$ is equivalent to the following: There exists a
constant $N$ and positive semi-definite infinite matrices $\Delta^j$,
$j=1,\ldots, d$, such that
\begin{equation}
\label{Conditionb'}
\tag{$b'$}
1 - N\delta_{ij}=\sum_{l=1}^d\Delta_{ij}^l\frac{1}{k_l}(\lambda_i^l,\lambda_j^l).
\end{equation}
\subsection{\texorpdfstring{Proof that $(a)\Leftrightarrow (a')$ and $(b)\Leftrightarrow(b')$}{Equivalence of (a) and (a') and (b) and (b')}}
Note that a kernel $K$ defined on a subset $Y\subseteq \mathbb{D}^d$ can
always be extended to a weak kernel $\tilde{K}$ on $\mathbb{D}^d$ by setting
$\tilde{K}(z,w) = K(z,w)$ for $z,w\in Y$ and $\tilde{K}(z,w) = 0$
otherwise.
We need to prove that if $(MI - J)\cdot K\geq 0$ for every admissible kernel
$K$ on $\Lambda$, then $MI - J$ is a sum of the above form. The proof
of this follows from a basic Hilbert space argument.
If $A, B \in M_n$, then the Schur product of $A$ and $B$ is the matrix
$A\cdot B = [a_{i,j}b_{i,j}]$. It is a well-known fact that the Schur
product of two positive matrices is positive. Let $M_n^h$ denote the
set of $n\times n$ Hermitian matrices. The space $M_n^h$ is a real
Hilbert space in the inner product $\inp{A}{B} = \trace(AB) =
\inp{A\cdot B e}{e}$ where $e$ is the vector in $\mathbb{R}^n$ all of whose
entries are 1.
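Both the Schur product theorem and the identity $\inp{A}{B} = \inp{A\cdot B\, e}{e}$ are easy to check numerically on random instances (an illustration; `*` below denotes the entrywise product of numpy arrays):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
def rand_psd():
    X = rng.normal(size=(n, n))
    return X @ X.T                          # real symmetric PSD matrix

A, B = rand_psd(), rand_psd()
# Schur product theorem: the entrywise product of PSD matrices is PSD.
assert np.linalg.eigvalsh(A * B).min() > -1e-10

# <A, B> = trace(AB) = <(A . B) e, e> for symmetric matrices.
e = np.ones(n)
assert np.isclose(np.trace(A @ B), e @ (A * B) @ e)
```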
If $C$ is a wedge in a Hilbert space $H$, then the dual wedge $C'$ is
defined as the collection of all elements $h$ of $H$ such that
$\inp{h}{x}\geq 0$ for all $x\in C$. The following observation appears in~\cite{P}.
\begin{prop}\label{dual}
Let $C\subseteq M_n^h$ be a set of matrices such that for every
positive matrix $P$, and $X\in C$, $P\cdot X \in C$. Then,
\[C' = \{ H \in M_n^h\,:\, H\cdot X \geq 0 \text{ for all } X\in
C\}.\]
\end{prop}
\begin{proof}
If $H\cdot X\geq 0$, then $\inp{H}{X} = \inp{H\cdot X e}{e} \geq
0$. On the other hand, assume that $H\in C'$ and let $v\in \mathbb{C}^n$. If
$X = [x_{i,j}]\in C$, then the matrix $[v_ix_{i,j}\overline{v_j}]
= (vv^*)\cdot X\in C$, since $vv^*$ is positive. Since $H\in C'$ we
have that $\inp{H\cdot X v}{v} = \inp{H}{(vv^*)\cdot X}\geq 0$.
\end{proof}
With this proposition in hand we can prove our claim. We are assuming
that $(MI - J)\cdot K\geq 0$ for every admissible kernel $K$ on
$\Lambda$.
Let $R_l$ be the matrix
$\left(\frac{1}{k_l}(\lambda_i^l,\lambda_j^l)\right)_{i,j=1}^n$. Note that $R_l$ is
self-adjoint. Let $\mathcal{R}_l$ be the set of matrices of the form
$P\cdot R_l$ where $P\geq 0$. Note that this collection is a closed
wedge that satisfies the hypothesis of Proposition~\ref{dual}. If $K$
is an admissible kernel, then $K\cdot R_l \geq 0$ and so $K\cdot P
\cdot R_l \geq 0$ for all $P$. Hence, $K\in \mathcal{R}_l '$ for $l = 1,\ldots, d$.
Let $\mathcal{K}$ be the collection of positive matrices $K$ such that
$K\cdot R_l\geq 0$ for all $l$. We have just shown that $\mathcal{K} = \mathcal{R}_1'\cap
\cdots \cap \mathcal{R}_d'$.
Note that any positive matrix $P$ such that $P\cdot R_l\geq 0$ for all
$l$ can be extended to an admissible kernel, so $(MI-J)\cdot P \geq 0$
for every $P\in\mathcal{K}$. By Proposition~\ref{dual}, this means that the matrix
$MI - J$ is in $\mathcal{K}'$.
If $C_1, C_2$ and $C$ are closed wedges, then $(C_1\cap C_2)' = C_1'+C_2'$ and
$C'' = C$. Applying this result we get $\mathcal{K}' = (\mathcal{R}_1'\cap
\cdots \cap \mathcal{R}_d')' = \mathcal{R}_1'' + \cdots +
\mathcal{R}_d'' = \mathcal{R}_1 + \cdots + \mathcal{R}_d$. Hence, there exist matrices $\Gamma_l \geq 0$ such that $MI - J =
\sum_{l=1}^d \Gamma_l \cdot R_l$.
\subsection{\texorpdfstring{Equivalence of Conditions $(a')$ and $(b')$
to vector-valued interpolation problems}{Equivalence of (a') and
(b') to vector-valued interpolation problems}}
We next show that conditions $(a')$ and $(b')$ are equivalent to
certain vector-valued interpolation problems. The general idea is to
follow the proof in \cite{AgMc}, but to use related results by
Tomerlin \cite{T} that are directly applicable to our setting.
First, some notation. Let $E$ and $E_*$ denote separable Hilbert spaces
and let $\mathcal{L}(E,E_*)$ denote the space of bounded linear
operators from $E$ to $E_*$. Let $\{e_i\}$ denote the standard basis
in $\ell^2(\mathbb{N})$.
Finally, let $S_{H(k)}(\mathbb{D}^d; \mathcal{L}(E,E_*))$ denote the set of
functions $M$ such that there exist functions $H_j$ on $\mathbb{D}^d$ and
auxiliary Hilbert spaces $E_j$ with values in $\mathcal{L}(E_j,E_*)$
such that
$$
I_{E_*}-M(z)M(w)^{*}=\sum_{j=1}^d\frac{1}{k_j}(z_j,w_j)H_j(z) H_j(w)^{*}.
$$
\begin{thm}[Tomerlin \cite{T}]
\label{interp}
Let $z_1,\ldots,z_n$ be points in $\mathbb{D}^d$, let $x_1,\ldots,x_n\in
\mathcal{L}(\mathcal{E}_*,\mathcal{H}_n)$, let $y_1,\ldots,y_n\in
\mathcal{L}(\mathcal{E}, \mathcal{H}_n)$. Then there exists an
element $W\in S_{H(k)}(\mathbb{D}^d; \mathcal{L}(\mathcal{E},
\mathcal{E}_*))$ such that $x_i W(z_i) = y_i$ if and only if there
exist positive block matrices $[\Gamma_{i,j}^l]$ such that $x_ix_j^* -
y_iy_j^* = \sum_{l=1}^d \Gamma^l_{i,j}\frac{1}{k_l}(z_i, z_j)$.
\end{thm}
With this notation and result, we can now state a condition equivalent
to condition $(b')$.
\begin{lm}
\label{b'equivb''}
Let $\{\lambda_j\}$ be a sequence of points in $\mathbb{D}^d$.
The following are equivalent:
\begin{itemize}
\item[$(b')$]There exists a constant $N$ and positive semi-definite
infinite matrices $\Delta^j$, $j=1,\ldots, d$, such that
\begin{equation*}
1 - N\delta_{ij}=\sum_{l=1}^d\Delta_{ij}^l\frac{1}{k_l}(\lambda_i^l,\lambda_j^l);
\end{equation*}
\item[$(b'')$] There exists a function $\Phi\in S_{H(k)}(\mathbb{D}^d;
\mathcal{L}(\mathbb{C},\ell^2(\mathbb{N})))$ of norm at most $\sqrt{N}$ such that
$$
\Phi(\lambda_i)=e_i.
$$
\end{itemize}
\end{lm}
Similarly, we have the following lemma giving an equivalent condition
for $(a')$.
\begin{lm}
\label{a'equiva''}
Let $\{\lambda_j\}$ be a sequence of points in $\mathbb{D}^d$. The following
are equivalent:
\begin{itemize}
\item[$(a')$]There exists a constant $M$ and positive semi-definite
infinite matrices $\Gamma^j$, $j=1,\ldots, d$, such that
\begin{equation*}
M\delta_{ij}-1=\sum_{l=1}^d\Gamma_{ij}^l\frac{1}{k_l}(\lambda_i^l,\lambda_j^l);
\end{equation*}
\item[$(a'')$] There exists a function $\Psi\in S_{H(k)}(\mathbb{D}^d;
\mathcal{L}(\ell^2(\mathbb{N}),\mathbb{C}))$ of norm at most $\sqrt{M}$ such that
$$
\Psi(\lambda_i)e_i=1.
$$
\end{itemize}
\end{lm}
\begin{proof}[Proof of Lemma \ref{b'equivb''}]
Suppose that $(b')$ is true. Consider the interpolation problem with
$x_i = \sqrt{N} \in \mathbb{C}$ and $y_i = e_i$ viewed as a map from
$\ell^2(\mathbb{N})$ to $\mathbb{C}$. Then $x_ix_j^* - y_iy_j^* = NJ - I$. From the
interpolation Theorem \ref{interp} we see that there exists an element
$\tilde{\Psi} \in S_{H(k)}(\mathbb{D}^d; \mathcal{L}(\ell^2(\mathbb{N}), \mathbb{C}))$ such
that $\sqrt{N}\tilde{\Psi}(\lambda_i) = y_i = e_i$. Hence, $\Psi =
\sqrt{N}\tilde{\Psi}$ has norm at most $\sqrt{N}$ and has the
property that $\Psi(\lambda_i) = e_i$.
The converse follows from the fact that a multiplier $\Psi$ of norm
at most $\sqrt{N}$ has the property that $N -
\Psi(\lambda)\Psi(\mu)^* = \sum_{l=1}^d
\frac{1}{k_l}(\lambda,\mu) \Gamma^l(\lambda, \mu)$ for some positive
semidefinite functions $\Gamma^1,\ldots,\Gamma^d$. When restricted
to the points $\lambda_i$ we see that $NJ-I$ is a sum of the
appropriate form.
\end{proof}
The proof of Lemma \ref{a'equiva''} is similar to the above. We have seen that condition $(a)$ is equivalent to the condition $(a')$ which is equivalent to $(a'')$. A similar equivalence is true for $(b)$, $(b')$ and $(b'')$.
Before we prove Theorem~\ref{main}, recall that a sequence $\{\lambda_n\}$ is \textit{strongly separated} by $S_{H(k)}$ if there is a constant $M>0$ and functions $f_n\in S_{H(k)}$ such that $\norm{f_n}_{S_{H(k)}}\leq M$, $f_n(\lambda_n) = 1$, and $f_n(\lambda_m) = 0$ for $n\not = m$.
\subsection*{Proof of Theorem~\ref{main}}
We are now ready to prove the second main theorem (restated for the reader's convenience):
\begin{thm}
Let $\{\lambda_j\}$ be a sequence of points in $\mathbb{D}^d$. The
following are equivalent:
\begin{itemize}
\item[(i)] $\{\lambda_j\}$ is an interpolating sequence for $S_{H(k)}(\mathbb{D}^d)$;
\item[(ii)] The following two conditions hold
\begin{itemize}
\item[$(a)$] For all admissible kernels $k$, their normalized Gramians
are uniformly bounded above,
$$
G^k\leq MI
$$
for some $M>0$,
\item[$(b)$] For all admissible kernels $k$, their normalized Gramians
are uniformly bounded below,
$$
G^k\geq NI
$$
for some $N>0$;
\end{itemize}
\item[(iii)] The sequence $\{\lambda_j\}$ is strongly separated and
condition $(a)$ alone holds;
\item[(iv)] Condition $(b)$ alone holds.
\end{itemize}
\end{thm}
\begin{proof}
Let $\{\lambda_i\}$ be an
interpolating sequence and suppose that $k$ is an admissible
kernel. Let $k_j$ denote the normalized kernel function at the point
$\lambda_j$. Let $(w_i)$ be a sequence of scalars such that $\abs{w_i}
\leq 1$ for all $i$.
Using the interpolation Theorem~\ref{interp} we see that there
exists a function $f$ such that $f/\sqrt{M} \in S_{H(k)}$ and
$f(\lambda_i) = w_i$ if and only if the matrix $[(M -
w_i\overline{w_j}) \inp{k_j}{k_i}] \geq 0$ for all admissible
kernels $k$. This statement is equivalent to the fact that
$M\norm{\sum_{i=1}^\infty \alpha_i k_i}^2 \geq
\norm{\sum_{i=1}^\infty \alpha_i w_i k_i}^2$ for all sequences
$\{\alpha_i\}\in\ell^2$.
It follows from the argument in~\cite{AgMc}*{Lemma 2.1} that both
$(M\delta_{i,j} - J)\cdot K $ and $(J - M\delta_{i,j})\cdot K$ are
positive.
It is also clear that (i) and (ii) are equivalent to the conditions
(iii) and (iv).
We now come to the equivalence of (iii) and (iv). The proof that (iii)
and (iv) are equivalent is essentially that given by
Agler-McCarthy~\cite{AgMc}. If there exists $\Phi$ with $\Phi/\sqrt{M} \in S_{H(k)}(\mathbb{D}^d,
\mathcal{L}(\mathbb{C},\ell^2))$ such that $\Phi(\lambda_i)^*e_i = 1$, then
writing $\Phi = (\phi_1,\phi_2,\ldots)^t$ we see that
$\overline{\phi_i(\lambda_i)} = 1$. Since we have assumed strong
separation, there exist $f_i$ and a constant $C$ such that
$f_i(\lambda_j) = \delta_{i,j}$ and $\norm{f_i}\leq C$. Therefore
$\Psi = (\phi_1 f_1,\phi_2 f_2,\ldots)$ has the property that $\norm{\Psi} \leq
C\sqrt{M}$ and $\Psi(\lambda_i) = e_i$.
Conversely if $\Psi = (\psi_1,\ldots)$ has the property that
$\Psi(\lambda_i) = e_i$ then $\psi_j(\lambda_i) =
\delta_{i,j}$. Therefore the functions $\psi_i$ strongly separate
$\lambda_i$. Setting $\Phi = \Psi^t$ we see that $\Phi(\lambda_i)^*e_i
= \overline{\phi_i(\lambda_i)} = 1$.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\normalsize
\bib{AgMc}{article}{
author={Agler, Jim},
author={McCarthy, John E.},
title={Interpolating sequences on the bidisk},
journal={Internat. J. Math.},
volume={12},
date={2001},
number={9},
pages={1103--1114}
}
\bib{AgMc2}{book}{
author={Agler, Jim},
author={McCarthy, John E.},
title={Pick interpolation and Hilbert function spaces},
series={Graduate Studies in Mathematics},
volume={44},
publisher={American Mathematical Society},
place={Providence, RI},
date={2002},
pages={xx+308}
}
\bib{MR2137874}{article}{
author={B{\o}e, Bjarte},
title={An interpolation theorem for Hilbert spaces with Nevanlinna-Pick
kernel},
journal={Proc. Amer. Math. Soc.},
volume={133},
date={2005},
number={7},
pages={2077--2081 (electronic)}
}
\bib{Car2}{article}{
author={Carleson, Lennart},
title={An interpolation problem for bounded analytic functions},
journal={Amer. J. Math.},
volume={80},
date={1958},
pages={921--930}
}
\bib{CCLV}{article}{
author={Casazza, Peter G.},
author={Christensen, Ole},
author={Lindner, Alexander M.},
author={Vershynin, Roman},
title={Frames and the Feichtinger conjecture},
journal={Proc. Amer. Math. Soc.},
volume={133},
date={2005},
number={4},
pages={1025--1033 (electronic)}
}
\bib{EM}{article}{
author={Earle, C. J.},
author={Marden, A.},
title={Projections to automorphic functions},
journal={Proc. Amer. Math. Soc.},
volume={19},
date={1968},
pages={274--278},
issn={0002-9939},
review={\MR{0224813 (37 \#412)}},
}
\bib{MaSu}{article}{
author={Marshall, D.},
author={Sundberg, C.},
title={Interpolating sequences for the multipliers of the Dirichlet space},
year={1994},
eprint={http://www.math.washington.edu/~marshall/preprints/interp.pdf}
}
\bib{M}{article}{
author={McKenna, P. J.},
title={Discrete Carleson measures and some interpolation problems},
journal={Michigan Math. J.},
volume={24},
date={1977},
number={3},
pages={311--319},
issn={0026-2285}
}
\bib{T}{article}{
author={Tomerlin, Andrew T.},
title={Products of Nevanlinna-Pick kernels and operator colligations},
journal={Integral Equations Operator Theory},
volume={38},
date={2000},
number={3},
pages={350--356}
}
\bib{P}{article}{
author={Paulsen, Vern I.},
title={Matrix-valued interpolation and hyperconvex sets},
journal={Integral Equations Operator Theory},
volume={41},
date={2001},
number={1},
pages={38--62}
}
\bib{R}{article}{
author={Raghupathi, Mrinal},
title={Abrahamse's interpolation theorem and Fuchsian groups},
journal={J. Math. Anal. Appl.},
volume={355},
date={2009},
number={1},
pages={258--276},
issn={0022-247X}
}
\bib{RW}{article}{
author={Raghupathi, Mrinal},
author={Wick, Brett D.},
title={Duality, Tangential interpolation, and T\"oplitz corona problems},
journal={Integral Equations and Operator Theory},
date={2010}
}
\bib{Seip}{book}{
author={Seip, Kristian},
title={Interpolation and sampling in spaces of analytic functions},
series={University Lecture Series},
volume={33},
publisher={American Mathematical Society},
place={Providence, RI},
date={2004},
pages={xii+139}
}
\bib{ShSh2}{article}{
author={Shapiro, H. S.},
author={Shields, A. L.},
title={On some interpolation problems for analytic functions},
journal={Amer. J. Math.},
volume={83},
date={1961},
pages={513--532}
}
\bib{S}{article}{
author={Stout, E. L.},
title={Bounded holomorphic functions on finite Riemann surfaces},
journal={Trans. Amer. Math. Soc.},
volume={120},
date={1965},
pages={255--285}
}
\end{biblist}
\end{bibdiv}
\end{document}
% arXiv:1109.1857 -- https://arxiv.org/abs/1109.1857
% Title: Some remarks about interpolating sequences in reproducing kernel
% Hilbert spaces
% Subjects: Functional Analysis (math.FA)
% Abstract: In this paper we study two separate problems on interpolation. We
% first give some new equivalences of Stout's theorem on necessary and
% sufficient conditions for a sequence of points to be an interpolating
% sequence on a finite open Riemann surface. We next turn our attention to the
% question of interpolation for reproducing kernel Hilbert spaces on the
% polydisc and provide a collection of equivalent statements about when it is
% possible to interpolate in the Schur-Agler class of the associated
% reproducing kernel Hilbert space.
% arXiv:math/0604457 -- https://arxiv.org/abs/math/0604457
% Title: On some properties of contracting matrices
% Abstract: The concepts of paracontracting, pseudocontracting and nonexpanding
% operators have been shown to be useful in proving convergence of asynchronous
% or parallel iteration algorithms. The purpose of this paper is to give
% characterizations of these operators when they are linear and
% finite-dimensional. First we show that pseudocontractivity of stochastic
% matrices with respect to the sup-norm is equivalent to the scrambling
% property, a concept first introduced in the study of inhomogeneous Markov
% chains. This unifies results obtained independently using different
% approaches. Secondly, we generalize the concept of pseudocontractivity to
% set-contractivity, which is a useful generalization with respect to the
% Euclidean norm. In particular, we demonstrate non-Hermitian matrices that are
% set-contractive for $\|\cdot\|_2$, but not pseudocontractive for
% $\|\cdot\|_2$ or the sup-norm. For constant row sum matrices we characterize
% set-contractivity using matrix norms and matrix graphs. Furthermore, we prove
% convergence results for compositions of set-contractive operators and
% illustrate the differences between set-contractivity in different norms.
% Finally, we give an application to global synchronization in coupled map
% lattices.

\section{Introduction\label{sec:introduction}}
\begin{definition}[\cite{nelson:paracontractive:1987}]
Let $\|\cdot\|$ be a vector norm in ${\mathbb C}^n$. An $n$ by $n$ matrix $B$ is {\em nonexpansive} with respect to $\|\cdot\|$ if
\begin{equation}\label{eqn:noncontractive}
\forall x \in {\mathbb C}^n, \|Bx\|\leq \|x\|
\end{equation}
$B$ is called {\em paracontracting} with respect to $\|\cdot\|$ if
\begin{equation}\label{eqn:paracontracting}
\forall x \in {\mathbb C}^n, Bx \neq x \Leftrightarrow \|Bx\| < \|x\|
\end{equation}
\end{definition}
It is easy to see that a normal matrix whose eigenvalues lie in the closed unit disk and for which $1$ is the only eigenvalue of unit modulus is paracontracting
with respect to ${\|\cdot\|_2}$.
\begin{definition}
For a vector $x\in {\mathbb C}^n$ and a closed set $X^*$,
$y^*$ is called a {\em projection vector} of $x$ onto $X^*$ if $y^*\in X^*$ and
\[ \|x-y^*\| = \min_{y\in X^*} \|x-y\|\]
The distance of $x$ to $X^*$ is defined as $d(x,X^*) = \|x-P(x)\|$ where
$P(x)$ is a projection vector of $x$ onto $X^*$.
\end{definition}
Even though the projection vector is not necessarily unique, we write $P(x)$ when it is clear which projection vector we mean or when the choice is immaterial.
Let us denote $e = (1,\cdots , 1)^T$.
The proof of the following Lemma is relatively straightforward and thus omitted.
\begin{lemma}\label{lem:projection}
If $x\in {\mathbb R}^n$ and $X^* = \{\alpha e:\alpha\in {\mathbb R}\}$, the projection vector $P(x)$ of
$x$ onto $X^*$ is
$\alpha e$ where:
\begin{itemize}
\item for the norm $\|\cdot\|_2$, $\alpha = \frac{1}{n}\sum_i x_i$ and $d(x,X^*) = \sqrt{\sum_i \left(x_i-\alpha\right)^2}$.
\item for the norm $\|\cdot\|_{\infty}$, $\alpha = \frac{1}{2}\left(\max_i x_i + \min_i x_i\right)$, and $d(x,X^*) = \frac{1}{2}\left(\max_i x_i - \min_i x_i\right)$.
\item for the norm $\|\cdot\|_1$, $d(x,X^*) =
\sum_{i=\lceil\frac{n}{2}\rceil+1}^n \hat{x}_i - \sum_{i=1}^{\lfloor\frac{n}{2}\rfloor}
\hat{x}_i$ and
\begin{itemize}\item for $n$ odd, $\alpha = \hat{x}_{\lceil\frac{n}{2}\rceil}$.
\item for $n$ even, $\alpha$ can be chosen
to be any number in the interval
$[\hat{x}_{\frac{n}{2}},\hat{x}_{\frac{n}{2}+1}]$.
\end{itemize}
\end{itemize}
Here $\hat{x}_i$ are the values $x_i$ rearranged in nondecreasing order
$\hat{x}_1\leq \hat{x}_2 \leq \cdots$.
\end{lemma}
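The closed forms in Lemma \ref{lem:projection} are easy to sanity-check numerically. The following Python/NumPy sketch (the helper names are ours, not from the paper) computes the projection coefficient $\alpha$ and the distance $d(x,X^*)$ for the three norms:

```python
import numpy as np

def proj_alpha(x, norm):
    """Coefficient alpha such that alpha*e is a nearest point of span(e) to x."""
    x = np.asarray(x, dtype=float)
    if norm == 2:
        return x.mean()                    # average
    if norm == np.inf:
        return 0.5 * (x.max() + x.min())   # midrange
    if norm == 1:
        # a median; for even n any value between the middle order statistics works
        return float(np.median(x))
    raise ValueError(norm)

def dist(x, norm):
    """d(x, X*) for X* = {alpha * e : alpha real}."""
    x = np.asarray(x, dtype=float)
    return float(np.linalg.norm(x - proj_alpha(x, norm), ord=norm))

x = [1.0, 4.0, 7.0, 10.0]
d2, dinf, d1 = dist(x, 2), dist(x, np.inf), dist(x, 1)
# d2 = sqrt(sum (x_i - mean)^2), dinf = (max - min)/2, d1 = top-half sum minus bottom-half sum
```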
The property of paracontractivity is used to show convergence of infinite products of paracontractive matrices and this in turn is used to prove convergence in various parallel and asynchronous iteration methods \cite{bru:paracontractive:1994}. In
\cite{su:pseudocontractive:2001} this property is generalized to pseudocontractivity.
\begin{definition}[\cite{su:pseudocontractive:2001}]
Let $T$ be an operator on ${\mathbb R}^n$. $T$ is {\em nonexpansive} with respect to $\|\cdot\|$ and a closed set $X^*$ if
\begin{equation}\label{eqn:su_nonexpansive}
\forall x\in {\mathbb R}^n, x^*\in X^*, \|Tx-x^*\| \leq \|x-x^*\|
\end{equation}
$T$ is {\em pseudocontractive} with respect to $\|\cdot\|$ and $X^*$ if it is nonexpansive
with respect to $\|\cdot\|$ and $X^*$ and
\begin{equation}\label{eqn:pseudocontractive}
\forall x\not\in X^*, d(Tx,X^*) < d(x,X^*)
\end{equation}
\end{definition}
Ref. \cite{su:pseudocontractive:2001} shows that there are pseudocontractive nonnegative matrices which are not paracontracting with respect to ${\|\cdot\|_{\infty}}$ and proves a result on the convergence of infinite products of pseudocontractive matrices. Furthermore, Ref. \cite{su:pseudocontractive:2001} exhibits a class of matrices such that any product of at least $n-1$ matrices from this class is pseudocontractive with respect to ${\|\cdot\|_{\infty}}$.
This paper has several aims. First we show that, with respect to ${\|\cdot\|_{\infty}}$ and $X^* = \{\alpha e:\alpha\in {\mathbb R}\}$, a stochastic matrix is pseudocontractive if and only if it is scrambling, so pseudocontractivity is simply characterized for such matrices. The concept of a scrambling matrix was first introduced in the study of weak ergodicity of inhomogeneous Markov chains, and
this equivalence allows us to unify several results obtained independently using these different concepts.
The second goal of this paper is to generalize pseudocontractivity by introducing the concept of set-contractivity. We prove a convergence result of set-contractive matrices and show existence of set-contractive matrices in ${\|\cdot\|_2}$ that are not pseudocontractive with respect to ${\|\cdot\|_2}$ or ${\|\cdot\|_{\infty}}$.
We study set-contraction with respect to ${\|\cdot\|_2}$ in terms of matrix norms and graphs of matrices.
Finally, we apply these results to the global synchronization of coupled map lattices.
We concentrate on the case where the operators $T$ are matrices and $X^*$ is the span of the corresponding Perron eigenvector. If the Perron eigenvector is strictly positive, then as in \cite{su:pseudocontractive:2001}, the similarity transformation $T\rightarrow W^{-1}TW$, where
$W$ is the diagonal matrix with the Perron eigenvector on the diagonal, transforms $T$ into a matrix whose Perron eigenvector is $e$. Therefore in the sequel we will focus on constant row sum matrices with
${X^*=\{\alpha e:\alpha\in\R\}}$.
\section{Pseudocontractivity and scrambling stochastic matrices\label{sec:scrambling}}
Scrambling matrices were first defined in \cite{hajnal:weak_ergodic:1958} to study weak ergodicity of inhomogeneous Markov chains.
\begin{definition}
A matrix $A$ is {\em scrambling} if for any pair of indices $i,j$, there exists $k$
such that $A_{ik}\neq 0$ and $A_{jk}\neq 0$.
\end{definition}
\begin{definition}
For a real matrix $A$, $\mu (A)$ is
defined as
\[ \mu(A) = \min_{j,k} \sum_{i} \min(A_{ji},A_{ki})
\]
\end{definition}
For nonnegative matrices with row sums $\leq r$, it is clear that
$0\leq \mu(A)\leq r$ with $\mu(A) > 0$
if and only if $A$ is scrambling.
\begin{definition}
For a real matrix $A$, define $\delta(A)\geq 0$ as
\[
\delta(A) = \max_{i,j} \sum_k \max(0,A_{ik}-A_{jk}) \geq \max_{i,j,k} (A_{ik}-A_{jk})
\]
\end{definition}
If $A$ has constant row sums, then
$\delta(A) = \frac{1}{2}\max_{i,j}\sum_k |A_{ik}-A_{jk}|$.
\begin{theorem}\label{thm:paz}
If $A$ is a matrix where each row sum is equal to or less than $r$,
then $\delta(A) \leq r-\mu(A)$.
\end{theorem}
{\em Proof: } Ref. \cite{paz:hajnal:1967} proved this for the case of stochastic matrices and the same proof applies here.\hfill$\Box$
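As a quick numerical illustration of $\mu$, $\delta$ and the bound $\delta(A)\leq r-\mu(A)$ of Theorem \ref{thm:paz} (a Python sketch; the test matrix is our own example):

```python
import numpy as np

def mu(A):
    """mu(A): minimum over row pairs (j,k) of sum_i min(A[j,i], A[k,i])."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return min(np.minimum(A[j], A[k]).sum()
               for j in range(n) for k in range(n) if j != k)

def delta(A):
    """delta(A) = max over row pairs (i,j) of sum_k max(0, A[i,k] - A[j,k])."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    return max(np.maximum(0.0, A[i] - A[j]).sum()
               for i in range(n) for j in range(n))

A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])   # stochastic (r = 1) and scrambling
# here mu(A) = 0.5, delta(A) = 0.5, and delta(A) <= r - mu(A) holds with equality
```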
\begin{theorem}\label{thm:delta}
If $A$ is a real matrix with constant row sums and $x\in {\mathbb R}^n$,
then $\max_i y_i - \min_i y_i \leq \delta(A)\left(\max_i x_i - \min_i x_i\right)$
where $y = Ax$.
\end{theorem}
{\em Proof: } The proof is similar to the argument in \cite{paz:hajnal:1967}.
Let $x_{\max} = \max_i x_i$, $x_{\min} = \min_i x_i$, $y_{\max} =
\max_{i} y_i$, $y_{\min} = \min_i y_i$.
\begin{equation}
\begin{array}{lcl}y_{\max}-y_{\min} & = & \max_{i,j}\sum_k \left(A_{ik}-A_{jk}\right)x_k
\\ &\leq & \max_{i,j}\left(\sum_k \max\left(0,A_{ik}-A_{jk}\right)x_{\max}
+ \sum_k \min\left(0,A_{ik}-A_{jk}\right)x_{\min}\right)
\end{array}
\end{equation}
Since $A$ has constant row sums, $\sum_k A_{ik}-A_{jk} = 0$, i.e.
\[\sum_k\max\left(0,A_{ik}-A_{jk}\right)
+ \sum_k \min\left(0,A_{ik}-A_{jk}\right) = 0\]
This means that
\begin{equation}
\begin{array}{lcl}y_{\max}-y_{\min} & \leq &
\max_{i,j}\left(\sum_k\max\left(0,A_{ik}-A_{jk}\right)\right)\left(x_{\max}-x_{\min}\right)
\\ & \leq & \delta(A)\left(x_{\max}-x_{\min}\right)
\end{array}
\end{equation}
\hfill$\Box$
The following result shows that pseudocontractivity of stochastic matrices with respect to ${\|\cdot\|_{\infty}}$ is equivalent to the scrambling condition and thus can be easily determined.
\begin{theorem}\label{thm:scrambling-contract}
Let $A$ be a stochastic matrix. The matrix $A$ is pseudocontractive with respect to ${\|\cdot\|_{\infty}}$ and $X^* = \{\alpha e:\alpha\in {\mathbb R}\}$ if
and only if $A$ is a scrambling matrix.
\end{theorem}
{\em Proof: }
Let $x^*\in X^*$. Then $Ax^* = x^*$ and thus
$\|Ax-x^*\|_{\infty} = \|A(x-x^*)\|_{\infty} \leq \|x-x^*\|_{\infty}$. Thus
all stochastic matrices are nonexpansive with respect to ${\|\cdot\|_{\infty}}$ and $X^*$.
Suppose $A$ is a scrambling matrix. Then $\mu(A) > 0$, and $\delta(A)< 1$ by Theorem \ref{thm:paz}. By Lemma \ref{lem:projection} and Theorem \ref{thm:delta}, $A$ is pseudocontractive.
Suppose $A$ is not a scrambling matrix. Then there exist $i$, $j$ such that
for each $k$, either $A_{ik} = 0$ or $A_{jk}=0$.
Define $x$ by $x_k = 1$ if $A_{ik}> 0$ and $x_k = 0$ otherwise. Since $A$ is stochastic,
it has no zero rows, so there exists $k''$ with $A_{ik''} > 0$; likewise there exists $k'$ with $A_{jk'} > 0$, which forces $A_{ik'} = 0$.
Thus $x$ is neither all $0$'s nor all $1$'s, i.e. $x\not\in X^*$.
Let $y = Ax$. Then $y_i = 1$ and $y_j = 0$. This means that
$\max_i y_i - \min_i y_i \geq 1 = \max_i x_i - \min_i x_i$, i.e. $A$ is not pseudocontractive.\hfill$\Box$
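Theorem \ref{thm:scrambling-contract} can be probed numerically: a scrambling stochastic matrix strictly shrinks the spread $\max_i x_i - \min_i x_i$ on every nonconstant vector, while for a non-scrambling one the indicator vector from the proof is not contracted at all (a sketch; the two test matrices are ours):

```python
import numpy as np

def is_scrambling(A):
    """Every pair of rows shares a column where both entries are nonzero."""
    A = np.asarray(A)
    n = A.shape[0]
    return all(np.any((A[i] != 0) & (A[j] != 0))
               for i in range(n) for j in range(i + 1, n))

def d_inf(x):
    # sup-norm distance to X* = {alpha*e}: half the spread (Lemma lem:projection)
    return 0.5 * (np.max(x) - np.min(x))

A_scr = np.array([[0.5, 0.5,  0.0 ],
                  [0.5, 0.0,  0.5 ],
                  [0.5, 0.25, 0.25]])   # column 1 is positive: scrambling, mu = 0.5
A_not = np.array([[1.0, 0.0, 0.0],
                  [0.5, 0.5, 0.0],
                  [0.0, 0.5, 0.5]])     # rows 1 and 3 have disjoint supports

rng = np.random.default_rng(0)
worst = max(d_inf(A_scr @ x) / d_inf(x)
            for x in rng.standard_normal((500, 3)) if d_inf(x) > 1e-12)

# the witness from the proof: indicator of the support of row 1 of A_not
x_bad = np.array([1.0, 0.0, 0.0])
ratio_bad = d_inf(A_not @ x_bad) / d_inf(x_bad)   # equals 1: no contraction
```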
With Theorem \ref{thm:scrambling-contract} several results which were shown independently can now be seen to be equivalent. For instance, in \cite{wu:randomsynch:2006} it was shown that for stochastic matrices with positive diagonal entries and whose interaction digraph\footnote{The directed graph of a square matrix $A$ is defined as the graph with an edge from vertex $i$ to vertex $j$ if and only if $A_{ij} \neq 0$. The interaction digraph of a matrix $A$ is obtained from the directed graph of $A$ by reversing the orientation of all the edges, i.e. it is the graph of $A^T$.} contains
a spanning directed tree, a finite product of $n-1$ or more such matrices is scrambling.
In \cite{wu:reducible:2005} it was shown that such matrices are irreducible or 1-reducible\footnote{A matrix is 1-reducible if after simultaneous row and column permutation it can be written in the form
$\left(\begin{array}{cccc}B_{11} & B_{12} & \cdots & \\ & B_{22} & B_{23} & \cdots \\ & & \ddots & \\ & & &B_{kk}\end{array}\right)$ such that $B_{ii}$ are irreducible and for each $i < k$, there exists $j> i$ such that $B_{ij}\neq 0$.} and this result in \cite{wu:randomsynch:2006}
then mirrors Proposition 3.3 in \cite{su:pseudocontractive:2001}.
In \cite{lubachevsky:async:1986} the convergence of a class of asynchronous iteration algorithms was shown by appealing to results about scrambling matrices. In \cite{su:pseudocontractive:2001} this result is proved using the framework of pseudocontractions. Theorem \ref{thm:scrambling-contract} shows that these two approaches are essentially the same.
\section{Set-nonexpansive and set-contractive operators}
Consider the stochastic matrix
\[A = \left(\begin{array}{ccc}0.5& 0 & 0.5\\0.5&0.5&0\\0.5&0&0.5\end{array}\right)\]
The matrix $A$ is not
pseudocontractive with respect to the Euclidean norm ${\|\cdot\|_2}$ and $X^* = \{\alpha e:\alpha\in{\mathbb R}\}$ since $\|A\|_2 = 1.088 > 1$. On the other hand, $A$ satisfies Eq. (\ref{eqn:pseudocontractive})\footnote{This can be shown using Theorem \ref{thm:ev}.}. This motivates us to define the following generalization of pseudocontractivity:
\begin{definition}\label{def:x-expand}
Let $X^*$ be a closed set in ${\mathbb R}^n$. An operator $T$ on ${\mathbb R}^n$ is {\em set-nonexpansive} with respect to $\|\cdot\|$ and $X^*$ if
\[ \forall x\in {\mathbb R}^n, d(Tx,X^*) \leq d(x,X^*) \]
An operator $T$ on ${\mathbb R}^n$ is {\em set-contractive} with respect to $\|\cdot\|$ and $X^*$
if it is set-nonexpansive with respect to $\|\cdot\|$ and $X^*$ and
\[ \forall x\not\in X^*, d(Tx,X^*) < d(x,X^*).
\]
The {\em set-contractivity} of an operator $T$ is defined as
\[ c(T) = \sup_{x\not\in X^*}\frac{d(Tx,X^*)}{d(x,X^*)} \geq 0\]
\end{definition}
There is a dynamical interpretation to Definition \ref{def:x-expand}. If we consider the operator $T$ as a discrete-time dynamical system, then $T$ being set-nonexpansive and set-contractive
imply that $X^*$ is a globally nonrepelling invariant set and a globally attracting set of the dynamical system respectively \cite{wiggins_tam_book_90}.
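For the matrix $A$ above one can estimate $c(A)$ for the Euclidean norm by random sampling; the sampled ratios stay well below $1$ even though $\|A\|_2 \approx 1.088 > 1$ (a Python sketch):

```python
import numpy as np

A = np.array([[0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5]])

def d2(x):
    # Euclidean distance to X* = span(e): subtract the mean (Lemma lem:projection)
    return float(np.linalg.norm(x - x.mean()))

rng = np.random.default_rng(1)
ratios = [d2(A @ x) / d2(x)
          for x in rng.standard_normal((2000, 3)) if d2(x) > 1e-12]
c_est = max(ratios)                 # sampled lower bound on c(A); stays below 1
spec_norm = np.linalg.norm(A, 2)    # ~1.088 > 1, so A is not pseudocontractive in l2
```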
\begin{lemma}\label{lem:one}
$T$ is set-nonexpansive with respect to $\|\cdot\|$ and $X^*$ if and only if $T(X^*)\subseteq X^*$ and $c(T)\leq 1$.
If $T$ is set-contractive with respect to $\|\cdot\|$ and $X^*$, then the set of fixed points of $T$ is a subset of $X^*$.
If $T_1(X^*)\subseteq X^*$, then $c(T_1\circ T_2) \leq c(T_1)c(T_2)$.
\end{lemma}
{\em Proof: } The first statement is true by definition. The proof of the second statement is the same as in Proposition 2.1 in \cite{su:pseudocontractive:2001}.
Suppose $T_1(X^*)\subseteq X^*$. Let $x\not\in X^*$. If $T_2(x)\in X^*$, then
$d(T_1\circ T_2(x),X^*) = 0$. If $T_2(x)\not\in X^*$, then
$d(T_1\circ T_2(x),X^*) \leq c(T_1)d(T_2(x),X^*) \leq c(T_1)c(T_2)d(x,X^*)$.\hfill$\Box$
\begin{lemma}\label{lem:ct}
Let $X^*$ be a closed set such that $\alpha X^*\subseteq X^*$ for all $\alpha\in{\mathbb R}$. If $T$ is linear and $T(X^*)\subseteq X^*$,
then $c(T) = \sup_{\|x\|=1,P(x) = 0} d(T(x),X^*)$.
\end{lemma}
{\em Proof: } Let $\epsilon = \sup_{\|x\|=1,P(x) = 0} d(T(x),X^*)$. Clearly $\epsilon \leq c(T)$. For $x\not\in X^*$, $0$ is a projection vector of $x-P(x)$.
Since $T(P(x))\in X^*$, this implies that $d(T(x),X^*) = d(T(x-P(x)),X^*) \leq \epsilon \|x-P(x)\| = \epsilon d(x,X^*)$, i.e. $\epsilon \geq c(T)$.
\hfill$\Box$
\begin{lemma}\label{lem:contractivity-linear}
Let $X^*$ be a closed set such that $\alpha X^*\subseteq X^*$ for all $\alpha\in{\mathbb R}$. A set-nonexpansive matrix $T$ is set-contractive with respect to $X^*$ if and only if $c(T) < 1$.
\end{lemma}
{\em Proof: } One direction is clear. Suppose $T$ is set-contractive. By compactness
\[ \sup_{\|x\|=1,P(x) = 0} d(T(x),X^*) = \epsilon < 1 \]
and the conclusion follows from Lemma \ref{lem:one} and Lemma \ref{lem:ct}.
\hfill$\Box$
If $T$ is nonexpansive with respect to $\|\cdot \|$ and $X^*$, then
\[ \|Tx-P(Tx)\| \leq \|Tx-P(x)\| \leq \|x-P(x)\| \]
and $T$ is set-nonexpansive. Thus set-contractivity is more general than pseudocontractivity.
However, they are equivalent for stochastic matrices with respect to ${\|\cdot\|_{\infty}}$ and $X^*=\{\alpha e:\alpha\in{\mathbb R}\}$.
\begin{lemma}
With respect to ${\|\cdot\|_{\infty}}$ and $X^*=\{\alpha e:\alpha\in{\mathbb R}\}$,
a stochastic matrix $T$ is pseudocontractive if and only if it is set-contractive.
\end{lemma}
{\em Proof: } Follows from the fact that a stochastic matrix is nonexpansive with respect to ${\|\cdot\|_{\infty}}$ and $X^*=\{\alpha e:\alpha\in{\mathbb R}\}$.\hfill$\Box$
\begin{definition}[\cite{horn-johnson:matrix_analysis:1985}]
A vector norm $\|\cdot\|$ on ${\mathbb R}^n$ is {\em monotone} if
\[\left\| \left( x_1, \cdots , x_n\right)^T \right\| \leq
\left\| \left( y_1, \cdots , y_n\right)^T \right\|
\]
for all $x_i$ and $y_i$ such that $|x_i|\leq |y_i|$.
A vector norm $\|\cdot\|$ on ${\mathbb R}^n$ is {\em weakly monotone} if
\[\left\| \left( x_1, \cdots, x_{k-1} , 0, x_{k+1}, \cdots , x_n\right)^T \right\| \leq
\left\| \left( x_1, \cdots, x_{k-1} , x_k, x_{k+1}, \cdots , x_n\right)^T \right\|
\]
for all $x_i$ and $k$.
\end{definition}
The next result gives a necessary condition of set-contractivity of a matrix in terms of its graph.
\begin{theorem}\label{thm:sdt}
Let $A$ be a constant row sum matrix with row sums $r$ such that $|r|\geq 1$. If
$A$ is set-contractive with respect to a weakly monotone vector norm $\|\cdot \|$
and ${X^*=\{\alpha e:\alpha\in\R\}}$, then the interaction digraph of $A$ contains a spanning directed tree.
\end{theorem}
{\em Proof: }
If the interaction digraph of $A$ does not have a spanning directed tree, then it was shown in \cite{wu:reducible:2005} that after simultaneous row and column permutation, $A$ can be written as a block upper triangular matrix:
\[ A = \left(\begin{array}{cccc} * & * & * & * \\ & \ddots & * & * \\ & & A_1& 0 \\
& & & A_2\end{array}\right) \]
where $*$ are arbitrary entries and $A_1$ and $A_2$ are $m_1$ by $m_1$ and $m_2$ by $m_2$ square irreducible matrices respectively. Define $x = (0,\dots , 0,-a_1 e_1, a_2 e_2)^T \not\in X^*$,
where $e_1$ and $e_2$ are vectors of all $1$'s of length $m_1$ and $m_2$ respectively.
Let $z = (0,\dots , 0, e_3)^T$ where $e_3$ is the vector of all $1$'s of length $m_1+m_2$
and $Z^* = \{\alpha z: \alpha \in {\mathbb R}\}$.
Note that the set of projection vectors of a fixed vector $x$ onto $Z^*$ is a convex, connected set.
Let $\alpha z$ be a projection vector of $x$ onto $Z^*$, and suppose that $\alpha \neq 0$ when $a_1 = a_2 \neq 0$. Since $-\alpha z$ is a projection vector of $-x$
onto $Z^*$, and $\alpha$ (or at least a choice of $\alpha$) depends continuously on $a_1$ and $a_2$, varying $(a_1,a_2)$ continuously to $(-a_1,-a_2)$ along a path avoiding the origin changes $\alpha$ to $-\alpha$. By the intermediate value theorem we can therefore find $a_1$ and $a_2$, not both zero, such that $0$ is a projection vector of $x$ onto $Z^*$. In this case $x\not\in X^*$ and by weak monotonicity
$d(x,Z^*) = d(x,X^*) = \|x\|$.
It is clear that $y = Ax$ can be written as
\[y = \left(\begin{array}{c}*\\\vdots \\ *\\ -ra_1e_1\\ ra_2e_2
\end{array}\right)\]
Let $\beta e$ be a projection vector of $y$ onto $X^*$.
By the weak monotonicity of the norm,
\[ d(y,X^*) = \|y-\beta e\| \geq
\left\|\left(\begin{array}{c} 0\\\vdots \\ 0\\ (-ra_1-\beta)e_1\\ (ra_2-\beta)e_2
\end{array}\right)\right\|
= \left\|r\left(x-\frac{\beta}{r}z\right)\right\|
\]
Since $0$ is a projection vector of $x$ onto $Z^*$ and $|r|\geq 1$,
\[ d(y,X^*) \geq |r|\, d(x,Z^*) = |r|\, d(x,X^*) \geq d(x,X^*) \]
Thus $A$ is not set-contractive.\hfill$\Box$
\subsection{max-norm}
\begin{theorem}\label{thm:contractinf}
Let $A$ be a matrix with constant row sum $r$. Then $c(A) = r-\mu(A)$ with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$. In particular,
the matrix $A$ is set-nonexpansive with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$
if and only if $r-\mu(A) \leq 1$.
The matrix
$A$ is set-contractive with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ if and only if $r-\mu(A) < 1$.
\end{theorem}
{\em Proof: }
$c(A)\leq r-\mu(A)$ follows from Lemma \ref{lem:projection}, Theorem \ref{thm:paz} and Theorem \ref{thm:delta}. Since $c(A)\geq 0$, $c(A) = r - \mu(A)$ if $r-\mu(A) = 0$.
Therefore we assume that $r-\mu(A) > 0$.
Let $j$ and $k$ be such that $\mu(A) = \sum_i \min\left(A_{ji},A_{ki}\right)$.
Define $x$ such that $x_i = 1$ if $A_{ji} < A_{ki}$ and $x_i = 0$ otherwise.
Since $r - \mu(A) > 0$, $x$ is not all $0$'s or all $1$'s, i.e. $x\not\in X^*$.
Let $y = Ax$. Then by Lemma \ref{lem:projection}
\[\begin{array}{lcl} 2d(y,X^*) \geq y_k-y_j & = & \sum_{i,A_{ji} < A_{ki}} \left(A_{ki}-A_{ji}\right) \\
&=&
\sum_i A_{ki} - \sum_{i,A_{ji} \geq A_{ki}} A_{ki} - \sum_{i,A_{ji} < A_{ki}} A_{ji}
\\ &
=& r - \mu(A) \end{array}
\]
Since $2d(x,X^*) = 1$, it follows that $c(A) \geq r-\mu(A)$.
\hfill$\Box$
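A numerical check of Theorem \ref{thm:contractinf} (a sketch; the test matrix is the one used as $A_1$ in the examples below): sampled contraction ratios in the sup norm never exceed $r-\mu(A)$, and the extremal vector constructed in the proof attains this value exactly.

```python
import numpy as np

A = np.array([[1.1, 0.0, 0.0],
              [0.6, 0.5, 0.0],
              [0.6, 0.0, 0.5]])        # constant row sum r = 1.1

def mu(A):
    n = A.shape[0]
    return min(np.minimum(A[j], A[k]).sum()
               for j in range(n) for k in range(n) if j != k)

def d_inf(x):
    return 0.5 * (np.max(x) - np.min(x))

r = A[0].sum()
c_theory = r - mu(A)                   # = 1.1 - 0.6 = 0.5

# extremal vector from the proof: indicator of {i : A[j,i] < A[k,i]} for a
# minimizing row pair (j,k); here (j,k) = (0,1) gives x = (0, 1, 0)
x_star = np.array([0.0, 1.0, 0.0])
ratio_star = d_inf(A @ x_star) / d_inf(x_star)

rng = np.random.default_rng(5)
worst = max(d_inf(A @ x) / d_inf(x)
            for x in rng.standard_normal((1000, 3)) if d_inf(x) > 1e-12)
```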
\subsection{Euclidean norm}
The following result bounds the set-contractivity of matrices with respect to ${\|\cdot\|_2}$ in terms of
matrix norms.
\begin{theorem} \label{thm:ev}
Let $A$ be an $n$ by $n$ constant row sum matrix and $K$ be an $n$ by $n-1$ matrix whose columns form an orthonormal basis of $e^{\bot}$.
Then $c(A) \leq \left\|AK\right\|_2$ with respect to ${\|\cdot\|_2}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$.
In particular, if $\left\|AK\right\|_2 \leq 1$,
then $A$ is set-nonexpansive with respect to ${\|\cdot\|_2}$ and $X^*=\{\alpha e:\alpha\in{\mathbb R}\}$. Similarly, if $\left\|AK\right\|_2 < 1$, then
$A$ is set-contractive with respect to ${\|\cdot\|_2}$ and $X^*=\{\alpha e:\alpha\in{\mathbb R}\}$.
\end{theorem}
{\em Proof: } Define $J = ee^T$ as the $n$ by $n$ matrix of all $1$'s. Note that $\|x\|_2 = \|Kx\|_2$ and $JK = 0$. Let $B = A-\frac{1}{n}J$.
Then
\[ \|AK\|_2 = \|BK\|_2 =
\max_{\|x\|_2=1}\|BKx\|_2 = \max_{\|Kx\|_2=1}\|BKx\|_2 =
\max_{x\bot e,\|x\|_2=1}\|Bx\|_2
\]
By Lemma \ref{lem:projection} $P(x) = \frac{1}{n}Jx$, and since $0\in X^*$, $d(Ax,X^*) \leq \|Ax\|_2 = \|Bx\|_2$ whenever $P(x) = 0$. Since $A$ has constant row sums, $A(X^*)\subseteq X^*$ and by Lemma \ref{lem:ct}
$c(A) = \max_{P(x) = 0,\|x\|_2=1} d(Ax,X^*) \leq \max_{P(x) = 0,\|x\|_2=1} \left\|Bx\right\|_2$.
Since $P(x) = 0$ if and only if $x\bot e$, this means that
$c(A) \leq \left\|AK\right\|_2$. If in addition $A$ has constant column sums, then $Ax\bot e$ whenever $x\bot e$, so $d(Ax,X^*) = \|Ax\|_2$ for $P(x)=0$ and the inequality becomes an equality.
\hfill$\Box$
\subsection{weighted Euclidean norm}
\begin{definition}
Given a positive vector $w$, the weighted $2$-norm $\|\cdot\|_w$ is defined as
\[ \|x\|_w = \sqrt{\sum_{i} w_i x_i^2} \]
\end{definition}
\begin{theorem} \label{thm:ev2}
Let $A$ be an $n$ by $n$ constant row sum matrix and $K$ be as defined in Theorem \ref{thm:ev}. Let $w$ be a positive vector such that $\max_i w_i = 1$ and $W = \mbox{diag}(w)$.
Then $c(A) \leq \left\|W^{\frac{1}{2}}AW^{-1}K\right\|_2$ with respect to $\|\cdot\|_w$ and ${X^*=\{\alpha e:\alpha\in\R\}}$.
\end{theorem}
{\em Proof: } The proof is similar to Theorem \ref{thm:ev}.
Define $J_w = \frac{ew^T}{\sum_i w_i}$ and $B = A-J_w$.
Note that $J_wW^{-1}K = 0$.
Then
\[ \begin{array}{lcl} \|W^{\frac{1}{2}}AW^{-1}K\|_2 &= &\|W^{\frac{1}{2}}BW^{-1}K\|_2
\\ &=&
\max_{\|Kx\|_2=1}\|W^{\frac{1}{2}}BW^{-1}Kx\|_2 \\ &=&
\max_{x\bot e,\|x\|_2=1}\|W^{\frac{1}{2}}BW^{-1}x\|_2\end{array}\]
Now $x\bot e$ if and only if $W^{-1}x \bot w$. Since
$\|x\|_2 = \|W^{-\frac{1}{2}}x\|_w$, this means that
$\|W^{\frac{1}{2}}AW^{-1}K\|_2 = \max_{x\bot w,\|W^{\frac{1}{2}}x\|_w = 1}\|W^{\frac{1}{2}}Bx\|_2$.
Since $\max_i w_i = 1$, this means that $\|W^{\frac{1}{2}}x\|_w
= \sqrt{\sum_i (w_ix_i)^2} \leq \|x\|_w$ and thus
\[\|W^{\frac{1}{2}}AW^{-1}K\|_2 \geq \max_{x\bot w,\|x\|_w = 1}\|W^{\frac{1}{2}}Bx\|_2
\]
It is straightforward to show that $P(x) = J_wx$, and since $0\in X^*$,
$d(Ax,X^*) \leq \|Ax\|_w = \|Bx\|_w = \left\|W^{\frac{1}{2}}Bx\right\|_2$ whenever $P(x) = 0$. Since $A$ has constant row sums, $A(X^*)\subseteq X^*$ and by Lemma \ref{lem:ct}
$c(A) = \max_{P(x) = 0,\|x\|_w=1} d(Ax,X^*) \leq \max_{P(x) = 0,\|x\|_w=1} \left\|W^{\frac{1}{2}}Bx\right\|_2$.
Since $P(x) = 0$ if and only if $x\bot w$, this means that
$c(A) \leq \left\|W^{\frac{1}{2}}AW^{-1}K\right\|_2$.\hfill$\Box$
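Theorem \ref{thm:ev2} can be checked numerically against sampled contraction ratios in the weighted norm; here we use the matrix $A_4$ and the weight vector $w=(1,0.2265,1)^T$ that appear in the examples below (a sketch):

```python
import numpy as np

A4 = np.array([[1.0, 0.0, 0.0],
               [0.9, 0.1, 0.0],
               [0.1, 0.1, 0.8]])
w = np.array([1.0, 0.2265, 1.0])       # weight vector from the examples below

# orthonormal basis K of the complement of e, via QR completion of e
e = np.ones((3, 1)) / np.sqrt(3)
Q, _ = np.linalg.qr(np.hstack([e, np.eye(3)[:, :2]]))
K = Q[:, 1:]

bound = np.linalg.norm(np.diag(np.sqrt(w)) @ A4 @ np.diag(1.0 / w) @ K, 2)

def d_w(x):
    alpha = (w @ x) / w.sum()          # w-weighted projection coefficient onto span(e)
    return np.sqrt((w * (x - alpha) ** 2).sum())

rng = np.random.default_rng(3)
worst = max(d_w(A4 @ x) / d_w(x)
            for x in rng.standard_normal((1000, 3)) if d_w(x) > 1e-12)
```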
Note that the matrix $A$ in Theorem \ref{thm:contractinf}, Theorem \ref{thm:ev} and Theorem \ref{thm:ev2} is not necessarily nonnegative or stochastic.
\subsection{examples}
The matrix
\[A_1 = \left(\begin{array}{ccc}1.1& 0.0 & 0.0\\0.6&0.5&0\\0.6&0&0.5\end{array}\right)\]
is set-contractive with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ since $\mu(A_1) = 0.6$
and $c(A_1) = 1.1-\mu(A_1) = 0.5 < 1$. It is not pseudocontractive with respect to
${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ since $\|A_1\|_{\infty} = 1.1 > 1$.
The stochastic matrix
\[A_2 = \left(\begin{array}{ccc}0.4& 0.3 & 0.3\\0&1&0\\0&0&1\end{array}\right)\]
is set-nonexpansive with respect to ${\|\cdot\|_2}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ since $\left\|A_2K\right\|_2 = 1$, but it is not nonexpansive with respect to ${\|\cdot\|_2}$ and $X^*=\{\alpha e:\alpha\in{\mathbb R}\}$ since $\|A_2\|_2 > 1$. Furthermore,
Theorem \ref{thm:sdt} shows that $A_2$ is not set-contractive with respect to any
weakly monotone norm and $X^*$.
The stochastic matrix
\[A_3 =\left(\begin{array}{ccc}1& 0 & 0\\0.5&0.5&0\\0&0.5&0.5\end{array}\right)\] is set-contractive with respect to ${\|\cdot\|_2}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ since $\left\|A_3K\right\|_2 = 0.939$. Since $\|A_3\|_2 > 1$ it is neither nonexpansive nor pseudocontractive with respect to ${\|\cdot\|_2}$ and $X^*$. It is also not pseudocontractive
with respect to ${\|\cdot\|_{\infty}}$ and $X^*$ since it is not scrambling.
The stochastic matrix
\[A_4 =\left(\begin{array}{ccc}1& 0 & 0\\0.9&0.1&0\\0.1&0.1&0.8\end{array}\right)\]
has an interaction digraph that contains a spanning directed tree.
However, $\left\|A_4K\right\|_2 = 1.125 > 1$, so the criterion of Theorem \ref{thm:ev} fails for $A_4$: a spanning directed tree in the interaction digraph does not by itself guarantee $\left\|AK\right\|_2 \leq 1$.\footnote{Theorem
\ref{thm:scrambling-contract} shows that the converse of Theorem \ref{thm:sdt} is false for stochastic matrices with respect to ${\|\cdot\|_{\infty}}$ and $X^*$: a non-scrambling stochastic matrix whose interaction digraph contains a spanning directed tree, such as $A_3$, is not set-contractive.}
On the other hand, $A_4$ is set-contractive with respect to ${\|\cdot\|_{\infty}}$ and $X^*$ since $A_4$ is a scrambling matrix. Furthermore, $A_4$ is set-contractive with respect to
$\|\cdot\|_w$ and $X^*$ for $w = (1,0.2265,1)^T$ since $\|W^{\frac{1}{2}}A_4W^{-1}K\|_2 < 1$.
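The matrix norms quoted in these examples can be reproduced directly (a Python sketch; $K$ is built by completing $e$ to an orthonormal basis via QR):

```python
import numpy as np

def K_basis(n):
    """n x (n-1) matrix whose columns are an orthonormal basis of the complement of e."""
    e = np.ones((n, 1)) / np.sqrt(n)
    Q, _ = np.linalg.qr(np.hstack([e, np.eye(n)[:, : n - 1]]))
    return Q[:, 1:]            # first column of Q spans e; drop it

K = K_basis(3)
A2 = np.array([[0.4, 0.3, 0.3], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
A3 = np.array([[1.0, 0.0, 0.0], [0.5, 0.5, 0.0], [0.0, 0.5, 0.5]])
A4 = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.1, 0.1, 0.8]])

# spectral norms ||A K||_2 quoted in the text: 1, 0.939, 1.125
n2, n3, n4 = (np.linalg.norm(M @ K, 2) for M in (A2, A3, A4))
```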
Next we show some convergence results for dynamical systems of the form
$x(k+1) = T_kx(k)$ where some $T_k$'s are
set-contractive operators.
\begin{theorem}\label{thm:conv}
Let $\{T_k\}$ be a sequence of set-nonexpansive operators with respect to $\|\cdot\|$
and $X^*$ and suppose that
\[ \lim_{m\rightarrow\infty} \prod_{k=1}^m c(T_k) = 0\]
Let $x(k+1) = T_kx(k)$. For any initial
vector $x(0)$, $\lim_{k\rightarrow\infty} d(x(k),X^*) = 0$.
\end{theorem}
{\em Proof: } From Lemma \ref{lem:one},
$c(T_m\circ\cdots\circ T_1) \leq \prod_{k=1}^m c(T_k) \rightarrow 0$ as $m\rightarrow\infty$, and the conclusion follows.\hfill$\Box$
\begin{theorem}
Let ${X^*=\{\alpha e:\alpha\in\R\}}$ and $\{A_k\}$ be a sequence of $n$ by $n$ constant row sum nonnegative matrices
such that
\begin{itemize}
\item the diagonal elements are positive;
\item all nonzero elements are equal to or larger than $\epsilon$;
\item the row sum is equal to or less than $r$.
\end{itemize}
If
$r^{n-1}-\epsilon^{n-1} < 1$ and for each $k$, the interaction digraph of $A_k$ contains a spanning directed tree, then
$\lim_{k\rightarrow \infty} d(x(k),X^*) = 0$ where $x(k+1) = A_kx(k)$.
\end{theorem}
{\em Proof: } As discussed above, any product of $n-1$ of the matrices $A_k$
is scrambling. Since each $A_k$ has nonzero elements equal to or larger than $\epsilon$, the nonzero elements of such a product, denoted $P$, are equal to or larger than $\epsilon^{n-1}$. This means that $\mu(P) \geq \epsilon^{n-1}$ and thus
$\delta(P) \leq r^{n-1}-\epsilon^{n-1} < 1$ since $P$ has row sums $\leq r^{n-1}$.
Therefore $P$ is set-contractive with respect to
${\|\cdot\|_{\infty}}$ and $X^*$ with $c(P) \leq r^{n-1}-\epsilon^{n-1} < 1$. The result then follows from Theorem \ref{thm:conv}.\hfill$\Box$
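A small simulation illustrating this kind of convergence (a sketch; the random matrix construction is our own): each factor has all entries positive, hence is scrambling with $\delta < 1$, and the spread of the iterates collapses.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_scrambling_stochastic(n):
    """Random stochastic matrix with all entries positive, hence scrambling."""
    M = rng.random((n, n)) + 0.1           # entries bounded away from zero
    return M / M.sum(axis=1, keepdims=True)

x = 10.0 * rng.random(8)
spread0 = x.max() - x.min()
for _ in range(300):
    x = random_scrambling_stochastic(8) @ x
spread = x.max() - x.min()                 # driven toward 0: d(x(k), X*) -> 0
```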
The following result shows the existence of linear operators $B_k$ and vectors $x_k^*\in X^*$ such that $x(k+1) = B_kx(k) + x^*_k$ has the same dynamics as
$x(k+1) = T_kx(k)$. In particular, for $y(k+1) = B_ky(k)$ with $y(0)=x(0)$ and $x(k+1) = T_kx(k)$,
$d(y(k),X^*) = d(x(k),X^*)$ for all $k$.
\begin{theorem}
$T$ is a set-nonexpansive operator with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$
if and only if for each $x\in{\mathbb R}^n$ there exists a stochastic matrix $B$ and a vector
$x^*\in X^*$ such that
$T(x) = Bx + x^*$.
$T$ is a set-contractive operator with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$
if and only if for each $x\in{\mathbb R}^n$ there exists a scrambling stochastic matrix $B$ and
a vector $x^*\in X^*$ such that
$T(x) = Bx + x^*$.
\end{theorem}
{\em Proof: } One direction of both statements follows from Theorem \ref{thm:scrambling-contract}. Suppose $T$ is set-nonexpansive and fix $x\in {\mathbb R}^n$.
Define $x^* = P(T(x)) - P(x)$ which is a vector in $X^*$.
Let $y = T(x)-x^*$. Then $P(y) = P(T(x))-x^* = P(x)$ and by Lemma \ref{lem:projection},
\[ \min_i x_i \leq \min_i y_i \leq \max_i y_i \leq \max_i x_i \]
and thus there exists a stochastic matrix $B$ such that $Bx = y$.
If $T$ is set-contractive, then for $x\in X^*$, we can choose $B = \frac{1}{n}ee^T$
and
$T(x)-Bx\in X^*$.
For $x\not\in X^*$, $d(T(x),X^*)< d(x,X^*)$. Define $x^*$ and $y$ as before and
we see that
\[ \min_i x_i < \min_i y_i \leq \max_i y_i < \max_i x_i \]
If $x_{i'} = \min_i x_i$, then it is clear that we can pick $B$ with $Bx = y$ such that the $i'$-th column of $B$ is positive, i.e. $B$ is scrambling.\hfill$\Box$
It can be beneficial to consider set-contractivity with respect to different norms.
For instance, consider $x(k+1) = A_k x(k)$ where $A_k$ are matrices that are not pseudocontractive with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ and whose diagonal elements are $0$. Since the diagonal elements are not positive, the techniques in \cite{su:pseudocontractive:2001} cannot be used to show that products of $A_k$ are pseudocontractive with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$. However, it is possible that $A_k$ are set-contractive with respect to
a different norm and thus convergence of $x(k)$ can be obtained by studying set-contractivity using this norm.
For instance, the stochastic matrix
\[ A = \left(\begin{array}{ccc} 0 & 0.5 & 0.5\\ 1 & 0 & 0\\ 0.5 & 0.5 & 0\end{array}\right)\]
has zeros on the diagonal and is not pseudocontractive with respect to ${\|\cdot\|_{\infty}}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ since $A$ is not scrambling. On the other hand, $A$
is set-contractive with respect to ${\|\cdot\|_2}$ and ${X^*=\{\alpha e:\alpha\in\R\}}$ since
$\left\|AK\right\|_2 = 0.939 < 1$.
For a sequence of constant row sum matrices $A_k$ and $x(k+1) = A_kx(k)$, a lower bound for the exponential rate at which $x(k)$ approaches ${X^*=\{\alpha e:\alpha\in\R\}}$ is $-\mbox{ln}(\sup_k c(A_k))$.
The above examples show that there are matrices for which this rate is $0$ for ${\|\cdot\|_{\infty}}$ but positive for ${\|\cdot\|_2}$, and other matrices for which
the rate is positive for ${\|\cdot\|_{\infty}}$ and $0$ for ${\|\cdot\|_2}$.
On the other hand, even though set-contractivity depends on the norm used, the equivalence of norms on ${\mathbb R}^n$ and Lemma \ref{lem:contractivity-linear} provides the following result.
\begin{theorem}
Let $X^*$ be a closed set such that $\alpha X^*\subseteq X^*$ for all $\alpha\in{\mathbb R}$,
and let $H$ be a compact set of matrices that are set-contractive with respect to a common norm $\|\cdot\|_p$ and $X^*$. Then for any norm $\|\cdot\|_q$ there exists $m$ such that every product of $m$
matrices in $H$ is set-contractive with respect to $\|\cdot\|_q$ and $X^*$.
\end{theorem}
\begin{corollary}
Let $H$ be a compact set of stochastic set-contractive matrices with respect to $\|\cdot\|_p$ and ${X^*=\{\alpha e:\alpha\in\R\}}$. Then a sufficiently long product of matrices in $H$ is scrambling.
\end{corollary}
\section{Weak ergodicity of inhomogeneous Markov chains}
In Section \ref{sec:scrambling} we noted the connection between set-contractivity with respect to ${\|\cdot\|_{\infty}}$ and weak ergodicity in inhomogeneous Markov chains.
In this section we elaborate on this connection.
A sequence of stochastic matrices $A_i$ is {\em weakly ergodic}
if for each $r$, $\delta\left(A_rA_{r+1}\cdots A_{r+k}\right)\rightarrow 0$ as
$k\rightarrow\infty$.
In \cite{seneta:markov:1973} a {\em coefficient of ergodicity} is defined as a continuous function $\mu$ on the set of $n$ by $n$ stochastic matrices such that $0\leq \mu(A)\leq 1$ (not to be confused with the quantity $\mu(A)$ of Section \ref{sec:scrambling}).
A coefficient of ergodicity $\mu$ is {\em proper} if
\[ \mu(A) = 1 \Leftrightarrow A = ev^T \quad \mbox{for some probability vector $v$.}\]
Seneta \cite{seneta:markov:1973} gives the following necessary and sufficient conditions for weak ergodicity generalizing the arguments by Hajnal.
\begin{theorem}\label{thm:seneta}
Suppose $\mu_1$ and $\mu_2$ are coefficients of ergodicity such that $\mu_1$ is proper
and the following inequality is satisfied for some constant $C$ and all $k$:
\begin{equation}\label{eqn:mult} 1-\mu_1(S_1S_2\cdots S_k) \leq C\prod_{i=1}^k (1-\mu_2(S_i)) \end{equation}
where $S_i$ are stochastic matrices.
Then a sequence of stochastic matrices $A_i$ is weakly ergodic if there exists
a strictly increasing subsequence $\{i_j\}$ such that
\begin{equation}\label{eqn:ergodic} \sum_{j=1}^{\infty} \mu_2 (A_{i_j+1}\cdots A_{i_{j+1}}) = \infty\end{equation}
Conversely, if $A_i$ is a weakly ergodic sequence, and $\mu_1$, $\mu_2$ are both proper
coefficients of ergodicity
satisfying Eq. (\ref{eqn:mult}),
then Eq. (\ref{eqn:ergodic}) is satisfied for some strictly increasing sequence $\{i_j\}$.
\end{theorem}
Define $H$ as the set of stochastic matrices that are set-nonexpansive with respect to
a norm $\|\cdot\|$ and ${X^*=\{\alpha e:\alpha\in\R\}}$. For ${\|\cdot\|_{\infty}}$, $H$ is the set of stochastic matrices.
Let us define $\mu_c(A) = 1-c(A)$. Then $\mu_c$ is a proper coefficient of ergodicity when restricted to $H$. This can be seen as follows. Clearly
$0\leq \mu_c(A) \leq 1$. If $A = ev^T$, then $Ax\in X^*$ and thus $c(A) = 0$ and
$\mu_c(A) = 1$. If $A \neq ev^T$, then there exists $i,j,k$ such that $A_{ik}\neq A_{jk}$.
Let $x$ be the $k$-th unit basis vector. Then $(Ax)_i\neq (Ax)_j$, i.e. $d(Ax,X^*) > 0$,
$c(A) > 0$ and $\mu_c(A) < 1$.
By choosing $\mu_1 = \mu_2 = \mu_c$, Eq. (\ref{eqn:mult}) is satisfied
with $C=1$ by Lemma \ref{lem:one}.
Thus we have shown that a necessary and sufficient condition for a sequence
of matrices in $H$ to be weakly ergodic is
\[\sum_{j=1}^{\infty} \left(1- c(A_{i_j+1}\cdots A_{i_{j+1}})\right) = \infty\]
for some strictly increasing subsequence $\{i_j\}$.
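For ${\|\cdot\|_{\infty}}$ the quantity $c(A)$ has a classical closed form: it coincides with Dobrushin's coefficient $\tau(A)=\frac{1}{2}\max_{i,j}\sum_k|A_{ik}-A_{jk}|$, as suggested by the scrambling characterization. A minimal stdlib sketch, assuming this identification (all function names are ours), checks Eq. (\ref{eqn:mult}) with $\mu_1=\mu_2=\mu_c$ and $C=1$ on two fixed matrices:

```python
from itertools import product

def mat_mul(A, B):
    # product of two square matrices given as lists of rows
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tau(A):
    # Dobrushin's coefficient: tau(A) = (1/2) * max_{i,j} sum_k |A_ik - A_jk|
    n = len(A)
    return 0.5 * max(sum(abs(A[i][k] - A[j][k]) for k in range(n))
                     for i, j in product(range(n), repeat=2))

A = [[0.5, 0.3, 0.2], [0.1, 0.6, 0.3], [0.4, 0.4, 0.2]]
B = [[0.2, 0.5, 0.3], [0.3, 0.3, 0.4], [0.0, 0.7, 0.3]]

assert tau([[0.3, 0.7], [0.3, 0.7]]) == 0.0        # A = e v^T gives tau = 0
assert 0.0 <= tau(A) <= 1.0
# submultiplicativity, i.e. Eq. (mult) with mu = 1 - tau and C = 1:
assert tau(mat_mul(A, B)) <= tau(A) * tau(B) + 1e-12
```

Here $\tau(AB)\leq\tau(A)\tau(B)$ is the submultiplicativity underlying Lemma \ref{lem:one}.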
\section{Application to the synchronization of coupled map lattices}
Coupled map lattices \cite{kaneko:cml_overview_1992} have been studied extensively and have been shown to exhibit complex behavior \cite{kaneko:global_coupled_1989,manneville:random_coupling_1992}. Recently, synchronization in coupled map lattice has attracted considerable attention \cite{belykh:1dlattice:1996,gade:cml_synch_1996,wu:cml-1998,jost:cml:2002,lu:synch-discrete:2004}. We show here how set-contractivity can be useful in studying synchronization in coupled map lattices.
Given a map $f_k:{\mathbb R}\rightarrow{\mathbb R}$, consider state variables $x_i\in {\mathbb R}$ which evolve according to
$f_k$ at time $k$: $x_i(k+1) = f_k(x_i(k))$. By coupling the outputs of these maps we obtain a coupled map lattice where each state evolves as:
\[ x_i(k+1) = \sum_{j} a_{ij}(k) f_k(x_j(k)) \]
This can be rewritten as
\begin{equation}\label{eqn:cml} x(k+1) = A_k F_k(x(k))\end{equation}
where $x(k) = (x_1(k),\dots, x_n(k))^T \in {\mathbb R}^n$ and
$F_k(x(k)) = (f_k(x_1(k)),\dots , f_k(x_n(k)))^T$. We assume that $A_k$ is a constant row sum matrix for all $k$. The map $f_k$
depends on $k$, i.e. we allow the map in the lattice to be time varying. Furthermore, we do not require $A_k$ to be a nonnegative matrix. We say the coupled map lattice in Eq. (\ref{eqn:cml}) {\em synchronizes}
if $\lim_{k\rightarrow\infty} |x_i(k)-x_j(k)| = 0$ for all $i$ and $j$, i.e.
$x(k)$ approaches the synchronization manifold ${X^*=\{\alpha e:\alpha\in\R\}}$
as $k\rightarrow\infty$.
If the row sum of $A_k$ is $1$, then this means that at synchronization,
each state $x_i$ in the lattice
exhibits dynamics of the uncoupled map $f_k$, i.e. if $x(h)\in X^*$, then for all $k\geq h$,
$x(k)\in X^*$ and $x_i(k+1) = f_k(x_i(k))$.
We are now ready to state our synchronization result:
\begin{theorem} \label{thm:cml}
Let $\rho_k$ be the Lipschitz constant of $f_k$.
If $\lim_{m\rightarrow\infty} \prod_{k=1}^{m} c(A_k)\rho_k = 0$, where $c(A_k)$ is the set-contractivity with respect to ${X^*=\{\alpha e:\alpha\in\R\}}$ and a monotone norm, then the coupled map lattice in Eq. (\ref{eqn:cml})
synchronizes.
\end{theorem}
{\em Proof: }
\[ \|F_k(x(k))- P(F_k(x(k)))\| \leq
\|F_k(x(k)) - F_k(P(x(k)))\| \leq
\rho_k \|x(k)-P(x(k))\|\]
where the first inequality holds because $F_k(P(x(k)))\in X^*$ (as $f_k$ maps constant vectors to constant vectors) and $P$ is the projection onto $X^*$, and the last inequality follows from the Lipschitz property of $f_k$ together with monotonicity of the norm.
This implies that $c(F_k)\leq \rho_k$
and the result follows from Theorem \ref{thm:conv}.\hfill$\Box$
Thus we can synchronize the coupled map lattice if we can find matrices $A_k$ and a norm
such that the contractivities $c(A_k)$ are small enough.
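To illustrate Theorem \ref{thm:cml}, here is a toy simulation (the choices $f_k(x)=0.9\sin x$ and $A_k=\frac{1}{2}I+\frac{1}{2n}J$ are ours): this coupling matrix has sup-norm set-contractivity at most $1/2$, so $\prod_k c(A_k)\rho_k\leq 0.45^k\rightarrow 0$ and the lattice synchronizes.

```python
import math

n, steps = 6, 60
rho = 0.9                                   # Lipschitz constant of f(x) = rho*sin(x)
f = lambda x: rho * math.sin(x)

def step(x):
    # one lattice update x(k+1) = A F(x(k)) with A = 0.5*I + 0.5*(1/n)*J;
    # A is row-stochastic and its sup-norm set-contractivity is at most 1/2
    fx = [f(xi) for xi in x]
    mean = sum(fx) / n
    return [0.5 * fx[i] + 0.5 * mean for i in range(n)]

x = [0.1 * i for i in range(n)]             # spread-out initial condition
spread0 = max(x) - min(x)
for _ in range(steps):
    x = step(x)
assert max(x) - min(x) < 1e-9 * spread0     # the states have synchronized
```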
\begin{corollary}
Let $\rho_k$ be the Lipschitz constant of $f_k$. If $\sup_k \left(r(A_k)-\mu(A_k)-\frac{1}{\rho_k}\right) < 0$,
then Eq. (\ref{eqn:cml}) synchronizes\footnote{Here $r(A)$ denotes the row sum of the matrix $A$.}.
\end{corollary}
{\em Proof: } Follows by applying Theorem \ref{thm:cml} to set-contractivity with respect to
${\|\cdot\|_{\infty}}$.\hfill$\Box$
\bigskip
\noindent{\small [End of {\em On some properties of contracting matrices}, arXiv:math/0604457 (math.DS).]}
\bigskip
\begin{center}
{\bf On a problem of Peth\H{o}}\\[2pt]
{\small (arXiv:1702.06068)}
\end{center}
\noindent{\bf Abstract.} In this paper we deal with a problem of Peth\H{o} related to the existence of a quartic algebraic integer $\alpha$ for which
$$
\beta=\frac{4\alpha^4}{\alpha^4-1}-\frac{\alpha}{\alpha-1}
$$
is a quadratic algebraic number. By studying rational solutions of a certain Diophantine system we prove that there are infinitely many $\alpha$'s such that the corresponding $\beta$ is quadratic. Moreover, we present a description of all quartic numbers $\alpha$ such that $\beta$ is a real quadratic number.
\section{Introduction}
Buchmann and Peth\H{o} \cite{BuPe} found an interesting unit in the number field $K=\mathbb{Q}(\alpha)$ with $\alpha^7-3=0$; it is as follows
$$
10+9\alpha+8\alpha^2+7\alpha^3+6\alpha^4+5\alpha^5+4\alpha^6.
$$
That is, the coordinates $(x_0,\ldots,x_6)\in\mathbb{Z}^7$ of a solution of the norm form equation $N_{K/\mathbb{Q}}(x_0+x_1\alpha+\ldots+x_{6}\alpha^6)=1$ form an arithmetic progression.
In \cite{BePe0} B\'erczes and Peth\H{o} considered norm form equations
\begin{equation}\label{BP}
N_{K/\mathbb{Q}}(x_0+x_1\alpha+\ldots+x_{n-1}\alpha^{n-1})=m\quad \mbox{ in } x_0,x_1,\ldots,x_{n-1}\in\mathbb{Z}
\end{equation}
where $K=\mathbb{Q}(\alpha)$ is an algebraic number field of degree $n$ and $m$
is a given integer such that $x_0,x_1,\ldots,x_{n-1}$ are consecutive terms in an arithmetic progression. They proved that \eqref{BP} has only finitely many solutions if neither of the following two cases hold:
\begin{itemize}
\item $\alpha$ has minimal polynomial of the form
$$x^n-bx^{n-1}-\ldots-bx+(bn+b-1)$$
with $b\in\mathbb{Z}.$
\item $\frac{n\alpha^n}{\alpha^n-1}-\frac{\alpha}{\alpha-1}$ is a real quadratic number.
\end{itemize}
In 2006 B\'erczes, Peth\H{o} and Ziegler \cite{BPZ} studied norm form equations related to Thomas polynomials such that the solutions are coprime integers in arithmetic progression. B\'erczes and Peth\H{o} \cite{BePe1} considered \eqref{BP} with $m=1$ and $\alpha$ is a root of $x^n-T, (n\geq 3, 4\leq T\leq 100).$ They proved that the norm form equation has no solution in integers which are consecutive elements in an arithmetic progression.
In 2010 Peth\H{o} \cite{P15} collected 15 problems in number theory, Problem 6 is
based on the results given in \cite{BePe0}.
\begin{prob*}[Problem 6 in \cite{P15}]
Does there exist infinitely many quartic algebraic integers $\alpha$ such
that
$$
\frac{4\alpha^4}{\alpha^4-1}-\frac{\alpha}{\alpha-1}
$$
is a quadratic algebraic number?
\end{prob*}
The only example mentioned is $x^4+2x^3+5x^2+4x+2$, for which the corresponding element is a real quadratic number (namely a root of $x^2-4x+2$).
Moreover, B\'erczes and Peth\H{o} remark in \cite{BePe0} that there are many solutions if we drop the assumption of integrality of $\alpha$. However, it is not quite clear whether we can find infinitely many such examples and whether we can find a precise description of such algebraic numbers. As we will see, the problem is equivalent to the study of the existence of rational zeros of a family of four polynomials in six variables. With the help of a Gr\"obner basis approach we reduce our problem to the study of rational zeros of only one (reducible) polynomial. A careful analysis of the corresponding variety allows us to obtain two infinite families of quartic polynomials defining quartic algebraic integers such that the algebraic number $\frac{4\alpha^4}{\alpha^4-1}-\frac{\alpha}{\alpha-1}$ is quadratic. Unfortunately, in this case we get a real quadratic number only in finitely many cases. However, from a different point of view we are able to show that the set of quartic algebraic numbers such that the algebraic number $\frac{4\alpha^4}{\alpha^4-1}-\frac{\alpha}{\alpha-1}$ is quadratic is contained in a certain set given by an (explicit) system of algebraic inequalities.
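Peth\H{o}'s example can be checked numerically: the sketch below (stdlib only; the Durand--Kerner root finder and all names are ours) approximates the four complex roots of $x^4+2x^3+5x^2+4x+2$ and verifies that each corresponding $\beta$ is, up to rounding error, a root of $x^2-4x+2$.

```python
def poly_roots(coeffs, iters=300):
    # Durand-Kerner iteration for a monic polynomial (coefficients, highest degree first)
    n = len(coeffs) - 1
    def p(z):
        acc = 0j
        for co in coeffs:
            acc = acc * z + co
        return acc
    zs = [(0.4 + 0.9j) ** (k + 1) for k in range(n)]   # standard distinct starting points
    for _ in range(iters):
        nxt = []
        for i, z in enumerate(zs):
            denom = 1 + 0j
            for j, w in enumerate(zs):
                if i != j:
                    denom *= z - w
            nxt.append(z - p(z) / denom)
        zs = nxt
    return zs

roots = poly_roots([1, 2, 5, 4, 2])        # x^4 + 2x^3 + 5x^2 + 4x + 2
for al in roots:
    beta = 4 * al**4 / (al**4 - 1) - al / (al - 1)
    assert abs(beta**2 - 4 * beta + 2) < 1e-8   # beta is (numerically) a root of x^2-4x+2
```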
In particular the following is true:
\begin{thm}\label{6th}
There are infinitely many quartic algebraic integers defined by $\alpha^4+a\alpha^3+b\alpha^2+c\alpha+d=0$
for which
$$
\beta=\frac{4\alpha^4}{\alpha^4-1}-\frac{\alpha}{\alpha-1}
$$
is a quadratic algebraic number. Moreover, there are infinitely many quartic algebraic numbers $\alpha$ such that $\beta$ is real quadratic.
\end{thm}
\section{Auxiliary results}
In the proof of Theorem \ref{6th} we will construct two infinite families of quartic polynomials; here we prove that among these polynomials there are infinitely many irreducible ones.
\begin{lem}\label{f1}
Let $t\in\mathbb{Z}.$ The polynomials defined by
$$
f_1(x)=x^4+2x^3+(2t^2+2)x^2+(4t^2-4t+2)x+6t^2-4t+1
$$
are irreducible if $t\notin\{0,1\}.$
\end{lem}
\begin{proof}
If $f_1$ has a linear factor, then it has an integral root ($f_1$ being monic with integer coefficients). Hence we may write
$$
f_1(x)=(x+s_1)(x^3+s_2x^2+s_3x+s_4).
$$
By comparing coefficients one gets that $s_2=2-s_1,s_3=s_1^2 + 2t^2 - 2s_1+2.$
It remains to deal with the system of equations
\begin{eqnarray*}
-s_1s_4 + 6t^2 - 4t + 1&=&0\\
-s_1^3 - 2s_1t^2 + 2s_1^2 + 4t^2 - 2s_1 - s_4 - 4t + 2&=&0.
\end{eqnarray*}
The resultant of the two polynomials with respect to $s_4$ is quadratic in $t.$ The discriminant of this quadratic polynomial is
$$
(-8)(s_1 - 1)^2(s_1^4 - 2s_1^3 + 4s_1^2 - 2s_1 + 1).
$$
If $s_1=1,$ then we obtain that $t=0.$ In this case $f_1(x)=(x + 1)^2(x^2 + 1)$ is reducible. If $s_1\neq 1,$ then $-2(s_1^4 - 2s_1^3 + 4s_1^2 - 2s_1 + 1)=U^2.$ This equation has no rational solution since $-2(s_1^4 - 2s_1^3 + 4s_1^2 - 2s_1 + 1)<0$ for all $s_1\in\mathbb{Q}.$
If there are two quadratic factors, then
$$
f_1(x)=(x^2+s_1x+s_2)(x^2+s_3x+s_4).
$$
As in the previous case we compare coefficients to obtain a system of equations
\begin{eqnarray*}
-s_1^2s_2 - 2s_2t^2 + 2s_1s_2 + s_2^2 + 6t^2 - 2s_2 - 4t + 1&=&0\\
-s_1^3 - 2s_1t^2 + 2s_1^2 + 2s_1s_2 + 4t^2 - 2s_1 - 2s_2 - 4t + 2&=&0.
\end{eqnarray*}
The resultant of the above equations with respect to $s_2$ is
$$
(-1)(s_1^2 + 2t^2 - 2s_1 - 4t + 2)(s_1^4 + 2s_1^2t^2 - 4s_1^3 + 4s_1^2t - 4s_1t^2 + 6s_1^2 - 8s_1t - 4s_1 + 8t).
$$
If $s_1^2 + 2t^2 - 2s_1 - 4t + 2=0,$ then we have a quadratic polynomial in $t$ with discriminant $-8s_1^2 + 16s_1.$ It is non-negative only if $s_1\in\{0,1,2\}.$ For $s_1\in\{0,2\}$ the quadratic gives $t=1,$ while for $s_1=1$ it has no rational root. If $t=1,$ then $f_1(x)=(x^2 + 1)(x^2 + 2x + 3)$ is reducible.
If $s_1^4 + 2s_1^2t^2 - 4s_1^3 + 4s_1^2t - 4s_1t^2 + 6s_1^2 - 8s_1t - 4s_1 + 8t=0,$ then the discriminant with respect to $t$ is
$$
(-8)(s_1^2 - 2s_1 + 2)(s_1^4 - 4s_1^3 + 2s_1^2 + 4s_1 - 4).
$$
It remains to determine the rational points on the genus 2 curve
$$
C:\;(-8)(s_1^2 - 2s_1 + 2)(s_1^4 - 4s_1^3 + 2s_1^2 + 4s_1 - 4)=U^2.
$$
In order to do that let us note that there is a rational map $\phi:\;C\ni (s_{1},U)\mapsto (X,U)\in E$, where
$$
E:\;U^2=X^3+6X^2-20X+8,
$$
and
$$
\phi(s_{1},U)=(-2(s_{1}-1)^2,U).
$$
Using MAGMA we obtain that the rank of Mordell-Weil group is 0 with $\operatorname{Tors}(E(\mathbb{Q}))=\{\mathcal{O},(-2,\pm 8),(2,0)\}$. These torsion points yield affine rational points on the curve $C$ of the following form
$$
(0,\pm 8), (2,\pm 8).
$$
Thus $s_1\in\{0,2\}$ and it follows that $t=0,$ a case considered above.
\end{proof}
\begin{lem}\label{f2}
Let $t\in\mathbb{Z}.$
The polynomials defined by
$$
f_2(x)=x^4+2tx^3+(t^2+2t+2)x^2+(2t^2+2t)x+3t^2-2t+1
$$
are irreducible if $t\notin\{0,2\}.$
\end{lem}
\begin{proof}
The approach we apply here is similar to that used in the proof of the previous lemma, therefore we only indicate the main steps. First we look for linear factors, that is, we write
$$
f_2(x)=(x+s_1)(x^3+s_2x^2+s_3x+s_4).
$$
We have that $s_2=2t-s_1,s_3=s_1^2 - 2s_1t + t^2+ 2t + 2$ and it remains to deal with the system of equations
\begin{eqnarray*}
-s_1s_4 + 3t^2 - 2t + 1&=&0\\
-s_1^3 + 2s_1^2t - s_1t^2 - 2s_1t + 2t^2 - 2s_1 - s_4 + 2t&=&0.
\end{eqnarray*}
The resultant of the two polynomials with respect to $s_4$ is quadratic in $t.$ The discriminant of this quadratic polynomial is
$$
(-8)(s_1^4 - 2s_1^3 + 4s_1^2 - 2s_1 + 1).
$$
This expression is negative for all rational $s_1,$ hence there exists no rational solution in $t.$
If there are two quadratic factors, then
$$
f_2(x)=(x^2+s_1x+s_2)(x^2+s_3x+s_4).
$$
As in the previous case we compare coefficients to obtain a system of equations
\begin{eqnarray*}
-s_1^2s_2 + 2s_1s_2t - s_2t^2 + s_2^2 - 2s_2t + 3t^2 - 2s_2 - 2t + 1 &=&0\\
-s_1^3 + 2s_1^2t - s_1t^2 + 2s_1s_2 - 2s_1t - 2s_2t + 2t^2 - 2s_1 + 2t &=&0.
\end{eqnarray*}
The latter equation can be written as
$$
(-1)(-s_1 + t)(-s_1^2 + s_1t + 2s_2 - 2t - 2)=0.
$$
If $s_1=t,$ then $s_2^2 - 2s_2t + 3t^2 - 2s_2 - 2t + 1=0.$ The discriminant of this equation with respect to $s_2$ is $(-8)t(t - 2).$ Hence $t\in\{0,2\}.$ If $t=0,$ then $f_2(x)=(x^2+1)^2.$ If $t=2,$ then $f_2(x)=(x^2 + 2x + 3)^2.$ Consider the case $-s_1^2 + s_1t + 2s_2 - 2t - 2=0.$ We get that $s_2=\frac{s_1^2-s_1t+2t+2}{2}.$ Thus we obtain a polynomial equation only in $s_1$ and $t$ given by
$$
(1/4)(-s_1^4 + 4s_1^3t - 5s_1^2t^2 + 2s_1t^3 - 4s_1^2t + 8s_1t^2 - 4t^3 - 4s_1^2 + 8s_1t + 4t^2 - 16t)=0.
$$
The discriminant with respect to $s_1$ factors as follows
$$
(-1/32)t(t - 2)(t^4 - 8t^3 + 40t^2 - 32t + 16)^2.
$$
The latter expression is a square only if $t=0$ or $t=2,$ so we do not get new reducible polynomials.
\end{proof}
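Both lemmas can be spot-checked by brute force: by Gauss's lemma a monic integer quartic is reducible over $\mathbb{Q}$ exactly when it has an integer root or splits into two monic integer quadratics, and in either case the constant terms of the factors divide $d$. A stdlib sketch (helper names are ours):

```python
from math import isqrt

def divisor_pairs(d):
    # all ordered pairs (v, z) of integers with v * z == d (assumes d != 0)
    out = []
    for v in range(1, abs(d) + 1):
        if d % v == 0:
            out += [(v, d // v), (-v, -(d // v))]
    return out

def quartic_reducible(a, b, c, d):
    # linear factor: a monic integer quartic with a rational root has an integer root
    for r in range(1, abs(d) + 1):
        if d % r == 0:
            for s in (r, -r):
                if s**4 + a*s**3 + b*s**2 + c*s + d == 0:
                    return True
    # (x^2 + u x + v)(x^2 + w x + z): u + w = a, uw + v + z = b, uz + vw = c, vz = d
    for v, z in divisor_pairs(d):
        disc = a*a - 4*(b - v - z)       # u, w are the roots of y^2 - a y + (b - v - z)
        if disc >= 0 and isqrt(disc)**2 == disc and (a + isqrt(disc)) % 2 == 0:
            for u in ((a + isqrt(disc)) // 2, (a - isqrt(disc)) // 2):
                if u*z + v*(a - u) == c:
                    return True
    return False

f1 = lambda t: (2, 2*t*t + 2, 4*t*t - 4*t + 2, 6*t*t - 4*t + 1)
f2 = lambda t: (2*t, t*t + 2*t + 2, 2*t*t + 2*t, 3*t*t - 2*t + 1)
assert quartic_reducible(*f1(0)) and quartic_reducible(*f1(1))
assert quartic_reducible(*f2(0)) and quartic_reducible(*f2(2))
assert not any(quartic_reducible(*f1(t)) for t in list(range(-20, 0)) + list(range(2, 40)))
assert not any(quartic_reducible(*f2(t)) for t in [1] + list(range(-10, 0)) + list(range(3, 40)))
```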
\section{Proof of Theorem \ref{6th}}
\begin{proof}
Let $f(x)=x^4+ax^3+bx^2+cx+d$ with $a,b,c,d\in\mathbb{Z}$ and $g(x)=x^2+px+q$ with $p,q\in\mathbb{Q}.$
Assume that $\alpha$ is a root of $f(x)$ and $\beta=\frac{4\alpha^4}{\alpha^4-1}-\frac{\alpha}{\alpha-1}$
is a root of $g(x).$ From $g(\beta)=0$ we get a degree 6 polynomial for which $\alpha$ is a root. Therefore
it is divisible by $f(x).$ Computing the remainder of this polynomial modulo $f(x)$ we obtain a cubic polynomial which has to be zero. In SageMath \cite{sage} we may compute it as follows
\begin{sageblock}
var('X')
P.<d,p,q,a,b,c>=PolynomialRing(QQ,6,order='lex')
Px.<x>=PolynomialRing(P)
f=x^4+a*x^3+b*x^2+c*x+d
w=4*X^4/(X^4-1)-X/(X-1)
W=(w^2+p*w+q).numerator()
Wrem=Px(W(X=x)) % f
Wcoeff=Wrem.coefficients()
\end{sageblock}
We obtain the following coefficients
{\small\begin{eqnarray*}
e_1:&&-3 d p a^{2} + 5 d p a + 3 d p b - 6 d p - d q a^{2} + 2 d q a + d q b - 3 d q - 9 d a^{2} + 12 d a + 9 d b - 10 d + q,\\
e_2:&&3 d p a - 5 d p + d q a - 2 d q + 9 d a - 12 d - 3 p a^{2} c + 5 p a c + 3 p b c - 6 p c + p - q a^{2} c + 2 q a c + \\
&&+q b c - 3 q c + 2 q - 9 a^{2} c + 12 a c + 9 b c - 10 c,\\
e_3:&&-3 d p - d q - 9 d - 3 p a^{2} b + 5 p a b + 3 p a c + 3 p b^{2} - 6 p b - 5 p c + 3 p - q a^{2} b + 2 q a b + q a c + \\
&&+q b^{2} - 3 q b - 2 q c + 3 q - 9 a^{2} b + 12 a b + 9 a c + 9 b^{2} - 10 b - 12 c + 1,\\
e_4:&& -3 p a^{3} + 5 p a^{2} + 6 p a b - 6 p a - 5 p b - 3 p c + 6 p - q a^{3} + 2 q a^{2} + 2 q a b - 3 q a - 2 q b - q c + 4 q - \\
&&-9 a^{3} + 12 a^{2} + 18 a b - 10 a - 12 b - 9 c + 4.
\end{eqnarray*}}
The Gr\"obner basis for $<e_1,e_2,e_3,e_4>$ contains 19 polynomials, one of these factors as follows
\begin{eqnarray*}
&&\left(\frac{1}{233}\right) \cdot (a - 2 b + c) \cdot \\
&&\cdot (233 a^{4} - 352 a^{3} b + 108 a^{3} c + 168 a^{3} + 368 a^{2} b^{2} - 264 a^{2} b c -\\
&&- 624 a^{2} b + 46 a^{2} c^{2} - 184 a^{2} c - 544 a^{2} - 160 a b^{3} + 128 a b^{2} c + \\
&&352 a b^{2} - 16 a b c^{2} + 64 a b c + 128 a b - 4 a c^{3} - 8 a c^{2} + 768 a c +\\ &&+640 a + 48 b^{4} - 64 b^{3} c - 256 b^{3} + 32 b^{2} c^{2} + 288 b^{2} c + 384 b^{2} -\\
&&-8 b c^{3} - 144 b c^{2} - 512 b c + c^{4} + 24 c^{3} + 96 c^{2} - 640 c - 256).
\end{eqnarray*}
Let us consider the case $c=2b-a.$
Denote by $e_{1,c},e_{2,c},e_{3,c},e_{4,c}$ the polynomials obtained by substituting $c=2b-a$ into $e_1,e_2,e_3$ and $e_4.$
Let us denote by $G_{c}$ the Gr\"obner basis for $<e_{1,c},e_{2,c},e_{3,c},e_{4,c}>$ and compute the ideal $I_{c,p,q}=G_{c}\cap \mathbb{Q}[a,b,d]$, i.e., we eliminate the variables $p, q$. We get that
$$
I_{c,p,q}=<(9b-12a-3d+5)^2 - 4(3a-2)^2+48d>.
$$
The equation $(9b-12a-3d+5)^2 -4(3a-2)^2+48d=0$ defines a genus 0 curve, say $C$, over $\mathbb{Q}(a)$ (in the $(b,d)$-plane). The standard method allows us to find a parametrization of $C$ in the following form
$$
b=\frac{1}{36}(9a^2+36a-16-8u-u^2),\quad d=\frac{1}{36}(9a^2+36a-16+8u-u^2).
$$
However, with $b, d$ given above and the corresponding $c=2b-a$ we get
$$
f(x)=\frac{1}{36}(6x+u+3a-2)\left(6x^3+(3a-u+2)x^2+2(3a-u-1)x+3(3a-u-2)\right),
$$
a reducible polynomial.
Let us consider the second factor that is
\begin{eqnarray}\label{defF}
F(a,b,c)&=&233 a^{4} - 352 a^{3} b + 108 a^{3} c + 168 a^{3} + 368 a^{2} b^{2} - 264 a^{2} b c -\\\notag
&&- 624 a^{2} b + 46 a^{2} c^{2} - 184 a^{2} c - 544 a^{2} - 160 a b^{3} + 128 a b^{2} c + \\\notag
&&352 a b^{2} - 16 a b c^{2} + 64 a b c + 128 a b - 4 a c^{3} - 8 a c^{2} + 768 a c +\\ \notag
&&+640 a + 48 b^{4} - 64 b^{3} c - 256 b^{3} + 32 b^{2} c^{2} + 288 b^{2} c + 384 b^{2} -\\\notag
&&-8 b c^{3} - 144 b c^{2} - 512 b c + c^{4} + 24 c^{3} + 96 c^{2} - 640 c - 256.\notag
\end{eqnarray}
First we compute the polynomial $F(a,b,c)$ for some small fixed values of $a$. It turns out that $F(2,b,c)$ is a reducible polynomial given by
$$
(12b^2-4bc-96b+c^2+12c+196)(4b^2-4bc-16b+c^2+4c+20).
$$
Let us study this special case when $a=2$. Consider the equation $12b^2-4bc-96b+c^2+12c+196=0$.
It follows that $(c-2b+6)^2+2(2b-9)^2=2$. The only integral solutions correspond to $b=4$ or $b=5$. If $b=4$, then $c=2$ and $d=3$. We obtain the reducible polynomial $x^4+2x^3+4x^2+2x+3=(x^2+1)(x^2+2x+3)$. If $b=5$, then $c=4$ and $d=2$, the example from Peth\H{o}'s paper.
The set of rational solutions of $(c-2b+6)^2+2(2b-9)^2=2$ can be easily parametrized with
$$
b=\frac{8 t^2+5}{2 t^2+1},\quad c=\frac{4(t^2-t+1)}{2 t^2+1}.
$$
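This parametrization can be verified exactly over $\mathbb{Q}$ with a few lines of code (a stdlib sketch):

```python
from fractions import Fraction as F

# check that (b(t), c(t)) satisfies (c - 2b + 6)^2 + 2(2b - 9)^2 = 2 identically
for t in [F(0), F(1), F(-2), F(5, 7), F(-3, 11)]:
    den = 2 * t**2 + 1
    b = (8 * t**2 + 5) / den
    c = 4 * (t**2 - t + 1) / den
    assert (c - 2*b + 6)**2 + 2 * (2*b - 9)**2 == 2
```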
With $a=2$ and $b, c$ given above we easily compute the values
\begin{eqnarray*}
d&=&\frac{2 \left(3 t^2-2 t+1\right)}{2 t^2+1},\\
p&=&\frac{4 \left(2 t^3-5 t^2+t-1\right)}{4 t^2+1},\\
q&=&-\frac{2 (4 t-1) \left(3
t^2-2 t+1\right)}{4 t^2+1}.
\end{eqnarray*}
With $p, q$ given above one can easily check that the discriminant of $x^2+px+q$ is positive for all $t\in\mathbb{R}$ (and thus for all $t\in\mathbb{Q}$).
Consider the other possibility, that is the equation $4b^2-4bc-16b+c^2+4c+20=0.$ We have
$$
(2b-c)^2+20=4(4b-c).
$$
Let $u=2b-c$ and $v=4b-c.$ We get that $v=\frac{u^2+20}{4}$
and $b=\frac{u^2-4u+20}{8}, c=\frac{u^2-8u+20}{4}.$ Thus
\begin{eqnarray*}
b&=&2t^2+2,\\
c&=&4t^2-4t+2,
\end{eqnarray*}
where $u=4t+2$ (integrality of $b$ forces $u\equiv 2\pmod 4$). Let us denote by $e_1',e_2',e_3'$ and $e_4'$ the corresponding polynomials $e_1,e_2,e_3$ and $e_4$ after the substitution $a=2, b=2t^2+2, c=4t^2-4t+2$. Let $G'$ be the Gr\"obner basis of the ideal $<e_1', e_2', e_3', e_4'>$ with respect to the variables $d, p, q$ over the polynomial ring $\mathbb{Q}[t]$. We get that
$$
G'\cap \mathbb{Q}[t][d]=<-t(1-d-4t+6t^2)(t^3+t^2-t-2), (d-6t^2+4t-1)(7+d+12t-6t^2-8t^3)>
$$
and thus $d=6t^2 - 4t + 1$ or $t=0$.
If $t=0,$ then $d=-7$ and $f(x)$ is reducible: $x^4+2x^3+2x^2+2x-7=(x-1)(x^3+3x^2+5x+7)$, a contradiction. If $d=6t^2 - 4t + 1$, then we have an infinite family of solutions of Peth\H{o}'s problem given by
\begin{eqnarray*}
a&=&2,\\
b&=&2t^2+2,\\
c&=&4t^2-4t+2,\\
d&=&6t^2-4t+1,\\
p&=&-\frac{6 \, t^{2} - 6 \, t + 1}{t^{2} - t},\\
q&=&\frac{18 \, t^{3} - 18 \, t^{2} + 7 \, t - 1}{2 \, {\left(t^{3} - t^{2}\right)}}.
\end{eqnarray*}
It follows from Lemma \ref{f1} that there are infinitely many irreducible polynomials in this family. By computing the discriminant of the polynomial $x^2+px+q$ we observe that it has two real roots for $t\in\mathbb{Q}$ satisfying $t\in (1-\sqrt{2}/2, 1+\sqrt{2}/2) \setminus \{1\}$.
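This family can be verified by exact rational arithmetic: writing $\beta=N(\alpha)/D(\alpha)$ with $N=3X^5-4X^4+X$ and $D=(X^4-1)(X-1)$, the condition $\beta^2+p\beta+q=0$ amounts to $f(x)\mid N^2+pND+qD^2$ in $\mathbb{Q}[X]$. A stdlib sketch (polynomials as coefficient lists, lowest degree first; helper names are ours):

```python
from fractions import Fraction as F

def pmul(u, v):
    out = [F(0)] * (len(u) + len(v) - 1)
    for i, x in enumerate(u):
        for j, y in enumerate(v):
            out[i + j] += x * y
    return out

def padd(u, v):
    out = [F(0)] * max(len(u), len(v))
    for i, x in enumerate(u):
        out[i] += x
    for j, y in enumerate(v):
        out[j] += y
    return out

def pmod(u, f):
    # remainder of u modulo a monic polynomial f
    u = u[:]
    n = len(f) - 1
    while len(u) > n:
        lead = u[-1]
        for k in range(n + 1):
            u[len(u) - 1 - k] -= lead * f[n - k]
        u.pop()
    return u

N = [F(0), F(1), F(0), F(0), F(-4), F(3)]                  # 3X^5 - 4X^4 + X
D = pmul([F(-1), F(0), F(0), F(0), F(1)], [F(-1), F(1)])   # (X^4 - 1)(X - 1)

for t in [F(2), F(3), F(-1), F(5, 3)]:                     # t not in {0, 1}
    a, b, c, d = 2, 2*t**2 + 2, 4*t**2 - 4*t + 2, 6*t**2 - 4*t + 1
    p = -(6*t**2 - 6*t + 1) / (t**2 - t)
    q = (18*t**3 - 18*t**2 + 7*t - 1) / (2*(t**3 - t**2))
    f = [F(d), F(c), F(b), F(a), F(1)]
    W = padd(padd(pmul(N, N), [p * x for x in pmul(N, D)]),
             [q * x for x in pmul(D, D)])
    assert all(coef == 0 for coef in pmod(W, f))           # f divides N^2 + pND + qD^2
```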
We computed all integral solutions of the equation $F(a,b,c)=0$ with $-200\leq a,b\leq 200.$ If $a=2,$ then we have all solutions provided by the above formulas; we also obtain $a=b=c=2$ and $d=-7.$ The corresponding polynomial is reducible: it is $(x-1)(x^3 + 3x^2 + 5x + 7).$ The remaining solutions are contained in Table 1.
\begin{equation*}
\begin{array}{|l|l|l|}
\hline
(-30, 197, 420, 706) & (-12, 26, 60, 121) & (6, 17, 24, 22) \\
(-28, 170, 364, 617) & (-10, 17, 40, 86) & (8, 26, 40, 41)\\
(-26, 145, 312, 534) & (-8, 10, 24, 57) & (10, 37, 60, 66)\\
(-24, 122, 264, 457) & (-6, 5, 12, 34) & (12, 50, 84, 97)\\
(-22, 101, 220, 386) & (-4, 2, 4, 17) & (14, 65, 112, 134)\\
(-20, 82, 180, 321) & (-2, 1, 0, 6) & (16, 82, 144, 177)\\
(-18, 65, 144, 262) & (0, 2, 0, 1) & (18, 101, 180, 226) \\
(-16, 50, 112, 209) & (2, 5, 4, 2) & \\
(-14, 37, 84, 162) & (4, 10, 12, 9) &\\
\hline
\end{array}
\end{equation*}
\begin{center}
Table 1. Integral solutions of the equation $F(a,b,c)=0$ with $-200\leq a,b\leq 200$, listed as quadruples $(a,b,c,d)$ with the corresponding value of $d$.
\end{center}
All these solutions can be described by the formulas
\begin{eqnarray}\label{secondpar}
a&=&2t,\\\notag
b&=&t^2+2t+2,\\\notag
c&=&2t^2+2t,\\\notag
d&=&3t^2-2t+1,\\\notag
p&=&-\frac{2 \, {\left(3 \, t^{2} - 5 \, t + 4\right)}}{t^{2} - 2 \, t + 2},\\\notag
q&=&\frac{9 \, t^{3} - 12 \, t^{2} + 7 \, t - 2}{t^{3} - 2 \, t^{2} + 2 \, t}.\notag
\end{eqnarray}
It follows from Lemma \ref{f2} that there are infinitely many irreducible polynomials in this family.
By computing the discriminant of the polynomial $x^2+px+q$ we observe that it has two real roots for $t\in\mathbb{Q}$ satisfying $t\in (0,2)$.
\end{proof}
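As a consistency check, every quadruple $(a,b,c,d)$ in Table 1 is reproduced by the formulas \eqref{secondpar} with $t=a/2$ (a stdlib sketch; the list transcribes the table row by row):

```python
rows = [
    (-30, 197, 420, 706), (-28, 170, 364, 617), (-26, 145, 312, 534),
    (-24, 122, 264, 457), (-22, 101, 220, 386), (-20, 82, 180, 321),
    (-18, 65, 144, 262), (-16, 50, 112, 209), (-14, 37, 84, 162),
    (-12, 26, 60, 121), (-10, 17, 40, 86), (-8, 10, 24, 57),
    (-6, 5, 12, 34), (-4, 2, 4, 17), (-2, 1, 0, 6), (0, 2, 0, 1),
    (2, 5, 4, 2), (4, 10, 12, 9), (6, 17, 24, 22), (8, 26, 40, 41),
    (10, 37, 60, 66), (12, 50, 84, 97), (14, 65, 112, 134),
    (16, 82, 144, 177), (18, 101, 180, 226),
]
for a, b, c, d in rows:
    t = a // 2                       # a = 2t, and a is always even in the table
    assert (b, c, d) == (t*t + 2*t + 2, 2*t*t + 2*t, 3*t*t - 2*t + 1)
```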
\begin{rem}
{\rm We extended the search for solutions of $F(a,b,c)=0$ up to $-10^4\leq a,b\leq 10^4$ and found no additional solutions. }
\end{rem}
\begin{rem}
One can prove that the polynomial $f(x)=x^4+ax^3+bx^2+cx+d$ with
$$
a=2,\quad b=\frac{8 t^2+5}{2 t^2+1},\quad c=\frac{4(t^2-t+1)}{2 t^2+1}, \quad d=\frac{2 \left(3 t^2-2 t+1\right)}{2 t^2+1}
$$
has no rational roots for any $t\in\mathbb{Q}$. However, if $t=(2 - s^2)/(4s)$, where $s\in\mathbb{Q}\setminus\{0\}$, then
$$
f(x)=\left(x^2+\frac{4}{s^2+2}x+\frac{s^2+4 s+6}{s^2+2}\right)\left(x^2+\frac{2 s^2}{s^2+2}x+\frac{3 s^2-4 s+2}{s^2+2}\right).
$$
\end{rem}
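The factorization in the remark can be confirmed by exact rational arithmetic (a stdlib sketch; the sample values of $s$ are ours):

```python
from fractions import Fraction as F

for s in [F(1), F(3), F(-2), F(1, 2), F(7, 5)]:
    t = (2 - s**2) / (4 * s)
    den = 2 * t**2 + 1
    b = (8 * t**2 + 5) / den
    c = 4 * (t**2 - t + 1) / den
    d = 2 * (3 * t**2 - 2 * t + 1) / den
    m = s**2 + 2
    g1 = [(s**2 + 4*s + 6) / m, 4 / m, F(1)]          # coefficients, lowest degree first
    g2 = [(3*s**2 - 4*s + 2) / m, 2 * s**2 / m, F(1)]
    prod = [F(0)] * 5                                  # multiply g1 * g2
    for i, u in enumerate(g1):
        for j, v in enumerate(g2):
            prod[i + j] += u * v
    assert prod == [d, c, b, F(2), F(1)]               # equals x^4 + 2x^3 + bx^2 + cx + d
```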
Let $\mathbb{P}$ be the set of prime numbers, $S\subset \mathbb{P}\cup\{\infty\}$, and recall that a rational number $r=r_1/r_2\in\mathbb{Q}, \gcd(r_1,r_2)=1$, is called $S$-integral if the set of prime factors of $r_2$ is a subset of $S$. The set of $S$-integers is denoted by $\mathbb{Z}_{S}$.
Although we were unable to prove that there are infinitely many quartic algebraic integers $\alpha$ such that the number $\beta=4\alpha^4/(\alpha^4-1)-\alpha/(\alpha-1)$ is real quadratic, from our result we can deduce the following:
\begin{cor}
Let $S\subset \mathbb{P}$. Then there are infinitely many $a,b,c,d\in\mathbb{Z}_{S}$ such that for one of the roots of $x^4+ax^3+bx^2+cx+d=0$, say $\alpha$, the number $\beta$ is real quadratic.
\end{cor}
\begin{proof}
In order to get the result it is enough to use the parametrization (\ref{secondpar}) by taking $t\in\mathbb{Z}_{S}$ satisfying the condition $t\in (0,2)$. Because there are infinitely many such $t$'s we get the result.
\end{proof}
\begin{rem}
{\rm
Let us note that the equation $F(a,b,c)=0$, where $F$ is given by (\ref{defF}), defines an (affine) quartic surface, say $V$. The existence of the parametric solution presented above leads to the generic point (obtained by taking $t=a/2$):
$$
(a,b,c)=\left(a,\frac{a^2}{4}+a+2,\frac{a^2}{2}+a\right)
$$
lying on $V$. This suggests viewing $V$ as a {\it quartic curve} defined over the rational function field $\mathbb{Q}(a)$. We call this curve $\mathcal{C}$. A quick computation in MAGMA \cite{MAGMA} reveals that the genus of $\mathcal{C}$ is 0. This implies that $\mathcal{C}$ is a $\overline{\mathbb{Q}(a)}$-rational curve. Moreover, the existence of the $\mathbb{Q}(a)$-rational point on $\mathcal{C}$ given by $P=\left(\frac{a^2}{4}+a+2,\frac{a^2}{2}+a\right)$ allows us to compute a rational parametrization defined over $\mathbb{Q}(a)$ as follows
\begin{eqnarray*}
b(t)&=&\frac{\sum_{i=0}^{6}bn_i(t)a^i}{\sum_{i=0}^{4}bd_i(t)a^i},\\ c(t)&=&\frac{\sum_{i=0}^{6}cn_i(t)a^i}{\sum_{i=0}^{4}cd_i(t)a^i},\\
d(t)&=&\frac{\sum_{i=0}^{6}dn_i(t)a^i}{\sum_{i=0}^{4}dd_i(t)a^i}.
\end{eqnarray*}
where $t\in\mathbb{Q}$ and $bn_i(t),bd_i(t)$ are given by
\begin{center}
\tiny
\begin{tabular}{|l|l|l|}
\hline
$i$ & $bn_i(t)$ & $bd_i(t)$\\
\hline
\hline
$0$ & $663552 \, t^{4} - 2211840 \, t^{3} + 2764800 \, t^{2} - 1536000 \, t + 320000$ & $331776 \, t^{4} - 1105920 \, t^{3} + 1382400 \, t^{2} - 768000 \, t + 160000$ \\
$1$ & $-331776 \, t^{4} + 1050624 \, t^{3} - 1244160 \, t^{2} + 652800 \, t - 128000$ & $-331776 \, t^{4} + 1050624 \, t^{3} - 1244160 \, t^{2} + 652800 \, t - 128000$ \\
$2$ & $-41472 \, t^{3} + 105984 \, t^{2} - 90240 \, t + 25600$ & $124416 \, t^{4} - 373248 \, t^{3} + 419328 \, t^{2} - 209280 \, t + 39200$ \\
$3$ & $38016 \, t^{3} - 89280 \, t^{2} + 69696 \, t - 18080$ & $-20736 \, t^{4} + 58752 \, t^{3} - 62784 \, t^{2} + 30048 \, t - 5440$ \\
$4$ & $12960 \, t^{4} - 47520 \, t^{3} + 62928 \, t^{2} - 36240 \, t + 7748$ & $1296 \, t^{4} - 3456 \, t^{3} + 3528 \, t^{2} - 1632 \, t + 288$ \\
$5$ & $-3888 \, t^{4} + 11664 \, t^{3} - 13248 \, t^{2} + 6792 \, t - 1332$ & $0$ \\
$6$ & $324 \, t^{4} - 864 \, t^{3} + 900 \, t^{2} - 432 \, t + 81$ & $0$ \\
\hline
\end{tabular}
\end{center}
$cn_i(t)$ and $cd_i(t)$ are as follows
\begin{center}
\tiny
\begin{tabular}{|l|l|l|}
\hline
$i$ & $cn_i(t)$ & $cd_i(t)$\\
\hline
\hline
$0$ & $0$ & $165888 \, t^{4} - 552960 \, t^{3} + 691200 \, t^{2} - 384000 \, t + 80000$ \\
$1$ & $165888 \, t^{4} - 552960 \, t^{3} + 691200 \, t^{2} - 384000 \, t + 80000$ & $-165888 \, t^{4} + 525312 \, t^{3} - 622080 \, t^{2} + 326400 \, t - 64000$ \\
$2$ & $-82944 \, t^{4} + 235008 \, t^{3} - 241920 \, t^{2} + 105600 \, t - 16000$ & $62208 \, t^{4} - 186624 \, t^{3} + 209664 \, t^{2} - 104640 \, t + 19600$ \\
$3$ & $-20736 \, t^{4} + 86400 \, t^{3} - 126720 \, t^{2} + 79296 \, t - 18080$ & $-10368 \, t^{4} + 29376 \, t^{3} - 31392 \, t^{2} + 15024 \, t - 2720$ \\
$4$ & $20736 \, t^{4} - 66528 \, t^{3} + 79920 \, t^{2} - 42744 \, t + 8620$ & $648 \, t^{4} - 1728 \, t^{3} + 1764 \, t^{2} - 816 \, t + 144$ \\
$5$ & $-4536 \, t^{4} + 13176 \, t^{3} - 14580 \, t^{2} + 7308 \, t - 1404$ & $0$ \\
$6$ & $324 \, t^{4} - 864 \, t^{3} + 900 \, t^{2} - 432 \, t + 81$ & $0$ \\
\hline
\end{tabular}
\end{center}
$dn_i(t)$ and $dd_i(t)$ are given by
\begin{center}
\tiny
\begin{tabular}{|l|l|l|}
\hline
$i$ & $dn_i(t)$ & $dd_i(t)$\\
\hline
\hline
$0$ & $331776 \, t^{4} - 1105920 \, t^{3} + 1382400 \, t^{2} - 768000 \, t + 160000$ & $331776 \, t^{4} - 1105920 \, t^{3} + 1382400 \, t^{2} - 768000 \, t + 160000$ \\
$1$ & $-663552 \, t^{4} + 2211840 \, t^{3} - 2764800 \, t^{2} + 1536000 \, t - 320000$ & $-331776 \, t^{4} + 1050624 \, t^{3} - 1244160 \, t^{2} + 652800 \, t - 128000$ \\
$2$ & $705024 \, t^{4} - 2350080 \, t^{3} + 2939904 \, t^{2} - 1635840 \, t + 341600$ & $124416 \, t^{4} - 373248 \, t^{3} + 419328 \, t^{2} - 209280 \, t + 39200$ \\
$3$ & $-393984 \, t^{4} + 1271808 \, t^{3} - 1540224 \, t^{2} + 829536 \, t - 167680$ & $-20736 \, t^{4} + 58752 \, t^{3} - 62784 \, t^{2} + 30048 \, t - 5440$ \\
$4$ & $115344 \, t^{4} - 353376 \, t^{3} + 407880 \, t^{2} - 210624 \, t + 41148$ & $1296 \, t^{4} - 3456 \, t^{3} + 3528 \, t^{2} - 1632 \, t + 288$ \\
$5$ & $-16848 \, t^{4} + 48384 \, t^{3} - 52992 \, t^{2} + 26304 \, t - 5004$ & $0$ \\
$6$ & $972 \, t^{4} - 2592 \, t^{3} + 2700 \, t^{2} - 1296 \, t + 243$ & $0$ \\
\hline
\end{tabular}
\end{center}
The above parametrizations yield formulas for $p$ and $q$ as well; we have
\begin{eqnarray*}
p(t)&=&\frac{\sum_{i=0}^{8}pn_i(t)a^i}{\sum_{i=0}^{8}pd_i(t)a^i},\\ q(t)&=&\frac{\sum_{i=0}^{8}qn_i(t)a^i}{\sum_{i=0}^{8}qd_i(t)a^i},
\end{eqnarray*}
where $pn_i(t)$ and $pd_i(t)$ are as follows
\begin{center}
\tiny
\begin{tabular}{|l|l|}
\hline
$i$ & $pn_i(t)$\\
\hline
\hline
$0$ & $-764411904 \, t^{6} + 3853910016 \, t^{5} - 8095334400 \, t^{4} + 9068544000 \, t^{3} - 5713920000 \, t^{2} + 1920000000 \, t - 268800000$ \\
$1$ & $1624375296 \, t^{6} - 8066138112 \, t^{5} + 16689659904 \, t^{4} - 18417991680 \, t^{3} + 11433369600 \, t^{2} - 3785472000 \, t + 522240000$ \\
$2$ & $-1576599552 \, t^{6} + 7711801344 \, t^{5} - 15724855296 \, t^{4} + 17109688320 \, t^{3} - 10477670400 \, t^{2} + 3424128000 \, t - 466560000$ \\
$3$ & $901767168 \, t^{6} - 4328681472 \, t^{5} + 8665989120 \, t^{4} - 9262688256 \, t^{3} + 5575491072 \, t^{2} - 1792177920 \, t + 240364800$ \\
$4$ & $-328458240 \, t^{6} + 1538403840 \, t^{5} - 3007901952 \, t^{4} + 3143418624 \, t^{3} - 1852477056 \, t^{2} + 583908864 \, t - 76936320$ \\
$5$ & $77262336 \, t^{6} - 350884224 \, t^{5} + 666600192 \, t^{4} - 678507840 \, t^{3} + 390510720 \, t^{2} - 120573504 \, t + 15612432$ \\
$6$ & $-11384064 \, t^{6} + 49828608 \, t^{5} - 91598688 \, t^{4} + 90593856 \, t^{3} - 50882400 \, t^{2} + 15399264 \, t - 1963512$ \\
$7$ & $956448 \, t^{6} - 4012416 \, t^{5} + 7116336 \, t^{4} - 6832512 \, t^{3} + 3747096 \, t^{2} - 1113696 \, t + 140292$ \\
$8$ & $-34992 \, t^{6} + 139968 \, t^{5} - 239112 \, t^{4} + 222912 \, t^{3} - 119556 \, t^{2} + 34992 \, t - 4374$ \\
\hline
\end{tabular}
\end{center}
\begin{center}
\tiny
\begin{tabular}{|l|l|}
\hline
$i$ & $pd_i(t)$\\
\hline
\hline
$0$ & $191102976 \, t^{6} - 955514880 \, t^{5} + 1990656000 \, t^{4} - 2211840000 \, t^{3} + 1382400000 \, t^{2} - 460800000 \, t + 64000000$ \\
$1$ & $-382205952 \, t^{6} + 1879179264 \, t^{5} - 3849928704 \, t^{4} + 4206919680 \, t^{3} - 2586009600 \, t^{2} + 847872000 \, t - 115840000$ \\
$2$ & $346374144 \, t^{6} - 1676132352 \, t^{5} + 3381460992 \, t^{4} - 3640578048 \, t^{3} + 2206264320 \, t^{2} - 713625600 \, t + 96256000$ \\
$3$ & $-185131008 \, t^{6} + 879869952 \, t^{5} - 1744478208 \, t^{4} + 1847079936 \, t^{3} - 1101689856 \, t^{2} + 351010560 \, t - 46678400$ \\
$4$ & $63452160 \, t^{6} - 294865920 \, t^{5} + 572209920 \, t^{4} - 593720064 \, t^{3} + 347511168 \, t^{2} - 108828288 \, t + 14251040$ \\
$5$ & $-14183424 \, t^{6} + 64074240 \, t^{5} - 121124160 \, t^{4} + 122713920 \, t^{3} - 70316928 \, t^{2} + 21620544 \, t - 2788424$ \\
$6$ & $2006208 \, t^{6} - 8755776 \, t^{5} + 16052256 \, t^{4} - 15836256 \, t^{3} + 8873352 \, t^{2} - 2679360 \, t + 340884$ \\
$7$ & $-163296 \, t^{6} + 684288 \, t^{5} - 1212408 \, t^{4} + 1162944 \, t^{3} - 637200 \, t^{2} + 189216 \, t - 23814$ \\
$8$ & $5832 \, t^{6} - 23328 \, t^{5} + 39852 \, t^{4} - 37152 \, t^{3} + 19926 \, t^{2} - 5832 \, t + 729$ \\
\hline
\end{tabular}
\end{center}
Finally, the formulas for $qn_i(t)$ and $qd_i(t)$ are as follows
\begin{center}
\tiny
\begin{tabular}{|l|l|}
\hline
$i$ & $qn_i(t)$\\
\hline
\hline
$0$ & $-382205952 \, t^{6} + 1911029760 \, t^{5} - 3981312000 \, t^{4} + 4423680000 \, t^{3} - 2764800000 \, t^{2} + 921600000 \, t - 128000000$ \\
$1$ & $1242169344 \, t^{6} - 6210846720 \, t^{5} + 12939264000 \, t^{4} - 14376960000 \, t^{3} + 8985600000 \, t^{2} - 2995200000 \, t + 416000000$ \\
$2$ & $-1934917632 \, t^{6} + 9650700288 \, t^{5} - 20059840512 \, t^{4} + 22242263040 \, t^{3} - 13875148800 \, t^{2} + 4617216000 \, t - 640320000$ \\
$3$ & $1821450240 \, t^{6} - 9005727744 \, t^{5} + 18560544768 \, t^{4} - 20410417152 \, t^{3} + 12630919680 \, t^{2} - 4170854400 \, t + 574144000$ \\
$4$ & $-1091377152 \, t^{6} + 5298628608 \, t^{5} - 10725198336 \, t^{4} + 11586309120 \, t^{3} - 7046019072 \, t^{2} + 2287269120 \, t - 309668800$ \\
$5$ & $422143488 \, t^{6} - 1995010560 \, t^{5} + 3934065024 \, t^{4} - 4144690944 \, t^{3} + 2461317120 \, t^{2} - 781454784 \, t + 103672320$ \\
$6$ & $-104789376 \, t^{6} + 478690560 \, t^{5} - 914397984 \, t^{4} + 935521920 \, t^{3} - 541037520 \, t^{2} + 167812512 \, t - 21823272$ \\
$7$ & $16119648 \, t^{6} - 70777152 \, t^{5} + 130483872 \, t^{4} - 129400416 \, t^{3} + 72863136 \, t^{2} - 22105152 \, t + 2825172$ \\
$8$ & $-1399680 \, t^{6} + 5878656 \, t^{5} - 10437336 \, t^{4} + 10031040 \, t^{3} - 5506488 \, t^{2} + 1638144 \, t - 206550$ \\
\hline
\end{tabular}
\end{center}
\begin{center}
\tiny
\begin{tabular}{|l|l|}
\hline
$i$ & $qd_i(t)$\\
\hline
\hline
$0$ & $0$ \\
$1$ & $191102976 \, t^{6} - 955514880 \, t^{5} + 1990656000 \, t^{4} - 2211840000 \, t^{3} + 1382400000 \, t^{2} - 460800000 \, t + 64000000$ \\
$2$ & $-382205952 \, t^{6} + 1879179264 \, t^{5} - 3849928704 \, t^{4} + 4206919680 \, t^{3} - 2586009600 \, t^{2} + 847872000 \, t - 115840000$ \\
$3$ & $346374144 \, t^{6} - 1676132352 \, t^{5} + 3381460992 \, t^{4} - 3640578048 \, t^{3} + 2206264320 \, t^{2} - 713625600 \, t + 96256000$ \\
$4$ & $-185131008 \, t^{6} + 879869952 \, t^{5} - 1744478208 \, t^{4} + 1847079936 \, t^{3} - 1101689856 \, t^{2} + 351010560 \, t - 46678400$ \\
$5$ & $63452160 \, t^{6} - 294865920 \, t^{5} + 572209920 \, t^{4} - 593720064 \, t^{3} + 347511168 \, t^{2} - 108828288 \, t + 14251040$ \\
$6$ & $-14183424 \, t^{6} + 64074240 \, t^{5} - 121124160 \, t^{4} + 122713920 \, t^{3} - 70316928 \, t^{2} + 21620544 \, t - 2788424$ \\
$7$ & $2006208 \, t^{6} - 8755776 \, t^{5} + 16052256 \, t^{4} - 15836256 \, t^{3} + 8873352 \, t^{2} - 2679360 \, t + 340884$ \\
$8$ & $-163296 \, t^{6} + 684288 \, t^{5} - 1212408 \, t^{4} + 1162944 \, t^{3} - 637200 \, t^{2} + 189216 \, t - 23814$ \\
\hline
\end{tabular}
\end{center}
The reader interested in the details of the mathematics behind the computation of parametrizations of rational curves can consult the excellent book of Rafael Sendra, Winkler and P\'{e}rez-D\'{i}az \cite{RATCurves}.
Let us also note that for $p,q$ given above the discriminant of $P(x)=x^2+px+q$ takes the form
$$
\operatorname{Disc}(P)=-2a P_1(a,t)\cdot P_{2}(a,t) Q(a,t)^2,
$$
where $Q$ is a rational function, $P_{2}$ is a polynomial of degree 2 (with respect to the variable $t$) with negative discriminant for $a\in\mathbb{R}\setminus\{4\}$, and
$$
P_1(a,t)=(9a^3-116a^2+524a-800)t^2-24(a-5)(a-4)^2t+18 (a-4)^3.
$$
We thus see that the polynomial $P(x)$ has two real roots if and only if $-aP_1(a,t)>0$ and $Q(a,t)\neq 0$. We observe that if $a<0$ then $-aP_1(a,t)$ is always negative and we get no solutions. Indeed, if $a<0$ then $P_1(a,t)$ would need to be positive. However, $9a^3-116a^2+524a-800<0$ and $\operatorname{Disc}_{t}(P_1)=-72(a-4)^3(a-2)^2a<0$, and thus $P_1(a,t)<0$ for all $a<0$ and $t\in\mathbb{R}$. If $a>0$ and $a\neq 4$ there are solutions, but their analytic expressions are quite complicated. Instead, in Figure \ref{fig:disc}, we present a plot of the solutions of the system $-aP_1(a,t)>0 \wedge Q(a,t)\neq 0$ satisfying $(a,t)\in [0,10]\times [-10,10]$. In particular, if $a\in (0,2)$ and $t\in\mathbb{Q}$ we get the solutions we are interested in.
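The displayed formula for $\operatorname{Disc}_{t}(P_1)$ can be verified by exact integer arithmetic: both sides are polynomials of degree 6 in $a$, so agreement at more than seven integer points proves the identity. The sketch below (the helper names `disc_P1` and `claimed` are ours, purely for illustration) performs this check.

```python
# Verify Disc_t(P_1) = -72 a (a-2)^2 (a-4)^3 by exact integer evaluation.
# Both sides are degree-6 polynomials in a, so equality at 11 integer
# points proves the polynomial identity.

def disc_P1(a):
    # P_1(a,t) = A t^2 + B t + C; its discriminant in t is B^2 - 4AC
    A = 9*a**3 - 116*a**2 + 524*a - 800
    B = -24*(a - 5)*(a - 4)**2
    C = 18*(a - 4)**3
    return B**2 - 4*A*C

def claimed(a):
    return -72*a*(a - 2)**2*(a - 4)**3

ok = all(disc_P1(a) == claimed(a) for a in range(-3, 8))
print(ok)  # True
```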
Unfortunately, we were not able to characterize all pairs $(a,t)$ such that the corresponding polynomial $f(x)=x^4+ax^3+b(t)x^2+c(t)x+d(t)$ is irreducible. It seems that this is a rather difficult question.
Finally, if $a=4$ then we get $(b,c,d,p,q)=(46/3, 20, 25, 165/26, 525/52)$ and the polynomial $x^2+px+q$ has complex roots.
\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{disc.pdf}
\caption{Real solutions of the inequality $-aP_1(a,t)>0, (a,t)\in [0,10]\times [-10,10]$, are in shaded region}
\label{fig:disc}
\end{figure}
We tried to use the obtained parametrization to find other integer points on the surface $V$, but without success. If $\alpha$ is not an algebraic integer, then using the above parametrizations we may still obtain real quadratic algebraic numbers. Indeed, if $\alpha$ is a root of the polynomial $x^4+ax^3+bx^2+cx+d$, then we set $\beta=\frac{4\alpha^4}{\alpha^4-1}-\frac{\alpha}{\alpha-1}$. As an example, let us consider the case $a=1, t=1$. The above formulas show that $\alpha$ is a root of the polynomial
$$
x^4 + x^3 + \frac{97}{24}x^2 + \frac{3}{4}x + \frac{17}{8}
$$
and then $\beta$ is a root of the following polynomial, which has two real roots:
$$
x^2 - \frac{6}{13}x - \frac{51}{5}.
$$
We can also notice ``near miss'' solutions of Peth\H{o}'s problem, where among the numbers $a, b, c, d$ only one is a genuine rational number. All these solutions correspond to $a=2$. More precisely, if $\alpha$ is a root of
$$
x^4+2x^3+\frac{14}{3}x^2+2x+1
$$
then $\beta$ is a root of the polynomial
$$
x^2+3x-\frac{3}{4}.
$$
Similarly, if $\alpha$ is a root of
$$
x^4+2x^3+\frac{13}{3}x^2+4x+4
$$
then $\beta$ is a root of
$$
x^2+\frac{36}{5}x+12.
$$
}
\end{rem}
| {
"timestamp": "2017-03-16T01:05:18",
"yymm": "1702",
"arxiv_id": "1702.06068",
"language": "en",
"url": "https://arxiv.org/abs/1702.06068",
"abstract": "In this paper we deal with a problem of Pethő related to existence of quartic algebraic integer $\\alpha$ for which $$ \\beta=\\frac{4\\alpha^4}{\\alpha^4-1}-\\frac{\\alpha}{\\alpha-1} $$ is a quadratic algebraic number. By studying rational solutions of certain Diophantine system we prove that there are infinitely many $\\alpha$'s such that the corresponding $\\beta$ is quadratic. Moreover, we present a description of all quartic numbers $\\alpha$ such that $\\beta$ is quadratic real number.",
"subjects": "Number Theory (math.NT)",
"title": "On a problem of Pethő",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138196557983,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7087156864757967
} |
https://arxiv.org/abs/1712.09335 | Restricted families of projections in vector spaces over finite fields | We study the restricted families of projections in vector spaces over finite fields. We show that there are families of random subspaces which admit a Marstrand-Mattila type projection theorem. | \section{Introduction}
A fundamental problem in fractal geometry is to determine how projections affect dimension. Recall the classical Marstrand-Mattila projection theorem: let $E\subset \mathbb{R}^{n}$, $n\geq2$, be a Borel set with Hausdorff dimension $s$.
\begin{itemize}
\item (dimension part) If $s\leq m$, then the orthogonal projection of $E$ onto almost all $m$-dimensional subspaces has Hausdorff dimension $s$.
\item (measure part) If $s>m$, then the orthogonal projection of $E$ onto almost all $m$-dimensional subspaces has positive $m$-dimensional Lebesgue measure.
\end{itemize}
In 1954, J. Marstrand \cite{Marstrand} proved this projection theorem
in the plane. In 1975, P. Mattila \cite{Mattila1975}
proved it in general dimension via R. Kaufman's potential-theoretic methods from 1968 \cite{Kaufman}. We refer to the recent survey of K. Falconer, J. Fraser, and X. Jin \cite{Falconer} for more background.
Recently there has been growing interest in studying finite field versions of some classical problems arising in Euclidean spaces. For instance, there are finite field Kakeya sets (also called Besicovitch sets), see Z. Dvir \cite{Dvir}; there is the finite field Erd\H{o}s-Falconer distance problem, see A. Iosevich, M. Rudnev \cite{IosevichRudnev}, T. Tao \cite{Tao1}; etc. Motivated by the above works, the author \cite{ChenP} studied projections in vector spaces over finite fields, and obtained a Marstrand-Mattila type projection theorem in this setting. In this paper, we turn to restricted families of projections in vector spaces over finite fields. For more details on projections in vector spaces over finite fields see \cite{ChenP}. For more background on restricted families of projections in Euclidean spaces, we refer to \cite[Section 6]{Falconer}, \cite{FOO}, \cite{KOV} and the references therein.
Let $p$ be a prime number, $\mathbb{F}_{p}$ be the finite field with $p$ elements, and $\mathbb{F}_{p}^{n}$ be the $n$-dimensional vector space over this field. We use the same notation as in Euclidean spaces. Let $G(n,m)$ be the collection of all $m$-dimensional linear subspaces of $\mathbb{F}_{p}^{n}$, and $A(n,m)$ be the family of all $m$-dimensional planes, i.e., translates of $m$-dimensional subspaces. In the following we give the definition of projections in $\mathbb{F}_{p}^{n}$; see \cite{ChenP} for more details.
\begin{definition}
Let $E$ be a subset of $\mathbb{F}_{p}^{n}$ and $W$ be a non-trivial subspace of $\mathbb{F}_{p}^{n}$. Denote by $\pi^{W}(E)$ the collection of cosets of $W$ which intersect $E$, that is
\[
\pi^{W}(E)=\{x+W: E\cap (x+W) \neq \emptyset, x\in \mathbb{F}_{p}^{n}\}.
\]
In this paper we are interested in the cardinality of $\pi^{W}(E)$.
\end{definition}
For any set $E \subset \mathbb{F}_{p}^{n}$ and $W\in G(n,n-m)$, Lagrange's group theorem implies
\[
|\pi^{W}(E)|\leq \min\{|E|, p^{m}\}.
\]
Here and in the following, $|J|$ denotes the cardinality of a set $J$. The author \cite[Corollary 1.3]{ChenP} obtained the following Marstrand-Mattila type projection theorem in $\mathbb{F}_{p}^{n}$. In fact, the following form bounds the size of the exceptional set of projections.
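The coset projection $\pi^W$ and the Lagrange bound above can be illustrated with a toy computation. In the sketch below, the parameters $p=5$, $n=2$, the line $W$ spanned by $(1,2)$ (so $m=1$), and the sample set $E$ are our own illustrative choices, not from the paper.

```python
# Compute pi^W(E) in F_5^2 for W = span{(1,2)} and check
# |pi^W(E)| <= min(|E|, p^m) with m = n - dim W = 1.
p, n = 5, 2
w = (1, 2)  # spanning vector of the 1-dimensional subspace W

def coset_rep(x):
    # canonical representative of the coset x + W: lexicographically least point
    return min(tuple((x[i] + k*w[i]) % p for i in range(n)) for k in range(p))

E = {(0, 0), (1, 1), (2, 3), (3, 0), (4, 4), (0, 2)}
proj = {coset_rep(x) for x in E}   # pi^W(E), one representative per coset
m = n - 1
print(len(proj), min(len(E), p**m))
assert len(proj) <= min(len(E), p**m)
```

Here $(1,1)$, $(2,3)$ and $(3,0)$ lie in the same coset of $W$, so the projection has 4 elements, consistent with the bound $\min\{6, 5\} = 5$.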
\begin{theorem}\label{thm:ChenP}
Let $E \subset \mathbb{F}_{p}^{n}$ with $|E|=p^{s}$.
(a) If $s\leq m$ and $t \in (0, s]$, then
\[
|\{W \in G(n,n-m) : |\pi^{W} (E)| \leq p^{t}/10 \}| \leq \frac{1}{2} p^{m(n - m) -(m - t)}.
\]
(b) If $s> m$, then
\[
| \{W \in G(n,n-m) : |\pi^{W} (E)| \leq p^{m}/ 10 \}| \leq \frac{1}{2} p^{m(n - m) -(s-m)}.
\]
\end{theorem}
We note that $|G(n,m)|\approx p^{m(n-m)}$, see P. Cameron \cite[Theorem 6.3]{Cameron}. We write $f\lesssim g$ if there is a positive constant $C$ such that $f\leq Cg$, $f\gtrsim g$ if $g\lesssim f$, and $f\approx g$ if $f\lesssim g$ and $f\gtrsim g$.
In the following, we formulate a finite field version of restricted families of projections. Let $G$ be a subset of $G(n,k)$; then $(\pi^{W})_{W\in G }$ is called a restricted family of projections. The purpose of this paper is to look for subsets $G\subset G(n,k)$ such that $(\pi^{W})_{W\in G }$ admits a Marstrand-Mattila type projection theorem.
By studying the random subsets of $G(n, n-m)$, we obtain the following result.
\begin{theorem}\label{thm:main}
For any $ \min\{m, n-m\} <\alpha \leq m(n-m)$, there exists a subset $G\subset G(n,n-m)$ with $|G|\approx p^{\alpha}$ such that for any $E\subset \mathbb{F}_{p}^{n}$,
\begin{equation*}\label{eq:eee}
|\{W\in G: |\pi^{W}(E)|\leq N\}|\lesssim |G|N(|E|^{-1}+p^{-m}).
\end{equation*}
\end{theorem}
Note that for the case $\alpha=m(n-m)$, Theorem \ref{thm:main} follows from Theorem \ref{thm:ChenP} by choosing $G=G(n,n-m)$. Thus we consider only the case $\min\{m, n-m\} <\alpha < m(n-m)$. We immediately obtain the following Marstrand-Mattila type projection theorem via special choices of $N$ in Theorem \ref{thm:main}.
\begin{corollary}
For any $ \min\{m, n-m\} <\alpha \leq m(n-m)$, there exists a subset $G\subset G(n,n-m)$ with $|G|\approx p^{\alpha}$ such that the following holds. Let $E\subset \mathbb{F}_{p}^{n}$ with $|E|=p^{s}$.
(1) If $|E|\leq p^{m}$ and $t\in (0,s]$, then
\begin{equation*}
|\{W\in G: |\pi^{W}(E)|\leq p^{t}\}|\lesssim |G|p^{t-s}.
\end{equation*}
(2) If $|E|> p^{m}$, then for any small $\varepsilon>0$,
\begin{equation*}
|\{W\in G: |\pi^{W}(E)|\leq \varepsilon p^{m}\}|\lesssim |G|\varepsilon.
\end{equation*}
\end{corollary}
For restricted families of projections in Euclidean spaces,
the author \cite{ChenR} showed that certain random subsets of the sphere in $\mathbb{R}^{3}$ admit a Marstrand-Mattila type projection theorem. For more details, see \cite{ChenR}.
The structure of the paper is as follows. In Section \ref{sec:p}, we set up some notation and prove some lemmas for later use. We prove Theorem \ref{thm:main} in Section \ref{sec:proof}. In the last section we give some examples of restricted families of projections which admit a Marstrand-Mattila type theorem in the finite field setting.
\section{Preliminaries}\label{sec:p}
In this section we show some lemmas for later use.
\subsection{Outline of the methods}
In short, we take a random subset $G\subset G(n,n-m)$ (see the random model in Subsection \ref{sub:rrr}) and then estimate the cardinality of ``the exceptional set'',
\[
\{W\in G: |\pi^{W} (E)|\leq N\},
\]
and show that it satisfies our needs. To estimate the ``exceptional set'', we adapt the argument of \cite{ChenP} to our setting, which is a variant of Orponen's pairs argument \cite[Estimate (2.1)]{OrponenA}.
Let $W\in G(n,n-m)$; then Lagrange's group theorem implies that there are $p^{m}$ cosets of $W$. Let $x_{W, j}+W$, $1\leq j\leq p^{m}$,
be the different cosets of $W$. Let $E\subset \mathbb{F}_{p}^{n}$, then
\[
|E| =\sum_{j=1}^{p^{m}} |E\cap (x_{W, j}+W)|,
\]
and the Cauchy-Schwarz inequality implies
\begin{equation}\label{eq:pairss}
|E|^{2}\leq |\pi^{W}(E)|\sum_{j=1}^{p^{m}} |E\cap (x_{W, j}+W)|^{2}.
\end{equation}
Note that $|E\cap (x_{W, j}+W)|^{2}$ is the number of pairs of elements of $E$ inside the coset $x_{W, j}+W$. Let $N\leq p^{m}$ and define
\[
\Theta=\{W\in G: |\pi^{W} (E)|\leq N\}.
\]
Summing both sides of estimate \eqref{eq:pairss} over
$W\in \Theta$, we obtain
\begin{equation}\label{eq:argument}
|\Theta| |E|^{2} \leq \mathcal{E}(E,\Theta')N
\end{equation}
where $\mathcal{E}(E,\Theta'):=\sum_{W\in \Theta}\sum_{j=1}^{p^{m}}|E\cap (x_{W, j}+W)|^{2}$, with $\Theta'$ denoting the collection of all cosets $x_{W, j}+W$, $W\in\Theta$.
Thus the remaining problem is to estimate $\mathcal{E}(E, \Theta')$, for which we use the double counting argument of Murphy and Petridis \cite[Lemma 1]{MurphyPetridis} and the discrete Plancherel identity. The above discussion motivates the following definition.
\begin{definition}
Let $E\subset \mathbb{F}_{p}^{n}$ and $ \mathcal{A} \subset A(n,m)$. Define the energy of $E$ on $\mathcal{A}$ as
\[
\mathcal{E}(E, \mathcal{A}) =\sum_{W\in \mathcal{A}} |E\cap W|^{2}.
\]
\end{definition}
We note that $\mathcal{E}(E, \mathcal{A})$ is closely related to the incidence identity of Murphy and Petridis \cite[Lemma 1]{MurphyPetridis}, and the additive energy in additive combinatorics \cite[Chapter 2]{TaoVu}.
\subsection{Discrete Fourier transform}
In the following we collect some basic facts about the Fourier transform in our setting. For more details on discrete Fourier analysis, see Green \cite{Green}, Stein and Shakarchi \cite{Stein}. Let $f : \mathbb{F}_{p}^{n}\longrightarrow \mathbb{C}$ be a complex-valued function. For $\xi \in \mathbb{F}_{p}^{n}$ we define the Fourier transform
\begin{equation*}\label{eq:dede}
\widehat{f}(\xi)=\sum_{x\in \mathbb{F}_{p}^{n}} f(x)e(-x\cdot \xi),
\end{equation*}
where the dot product $ x\cdot\xi $ is defined as $ x_1\xi_1+\cdots +x_n\xi_n$ and $e(-x \cdot \xi)=e^{-\frac{2\pi i x\cdot\xi}{p}}$. Recall the following Plancherel identity,
\begin{equation*}
\sum_{\xi \in \mathbb{F}_{p}^{n}}|\widehat{f}(\xi)|^{2}=p^{n}\sum_{x\in \mathbb{F}_{p}^{n}} |f(x)|^{2}.
\end{equation*}
In particular, for a subset $E\subset \mathbb{F}_{p}^{n}$, we have
\[
\sum_{\xi \in \mathbb{F}_{p}^{n}} |\widehat{E}(\xi)|^{2}=p^{n}| E|.
\]
Here and in the following we also use $E$ for the characteristic function of the set $E$.
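The Plancherel identity for a characteristic function can be sanity-checked by evaluating both sides directly in a small case; the set $E$ and the sizes in the sketch below are illustrative choices of ours.

```python
# Numeric check of Plancherel: sum over xi of |Ê(xi)|^2 = p^n |E|
# for the characteristic function of a set E in F_p^n.
import cmath
import itertools

p, n = 5, 2
E = [(0, 0), (1, 3), (2, 2), (4, 1)]

def E_hat(xi):
    # Ê(xi) = sum over x in E of e(-x.xi) with e(u) = exp(-2*pi*i*u/p)
    return sum(cmath.exp(-2j*cmath.pi*sum(x[i]*xi[i] for i in range(n))/p)
               for x in E)

total = sum(abs(E_hat(xi))**2 for xi in itertools.product(range(p), repeat=n))
print(round(total), p**n * len(E))   # both equal 100 here
assert abs(total - p**n*len(E)) < 1e-6
```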
For $W\in G(n,n-m)$, define
\[
Per(W):=\{x\in \mathbb{F}_{p}^{n}: x\cdot w=0 \text{ for all } w\in W\}.
\]
Note that if $W$ is a subspace of a Euclidean space, then $Per(W)$ is the orthogonal complement of $W$.
Furthermore, unlike in Euclidean space, here $W\cap Per(W)$ can be a non-trivial subspace. However, the rank-nullity theorem of linear algebra implies that for any subspace $W \subset\mathbb{F}_{p}^{n}$,
\begin{equation}\label{eq:rank}
\dim W+\dim Per(W)=n.
\end{equation}
The following Lemma \ref{lem:fff} is \cite[Lemma 2.3]{ChenP}; it plays an important role in the proof of Lemma \ref{lem:abstract} (2). For more details see \cite[Lemma 2.3]{ChenP}.
\begin{lemma} \label{lem:fff}
With the above notation, we have
\begin{equation}\label{eq:kk}
\sum_{j=1}^{p^{m}} | E \cap (x_{j}+W)|^{2}=p^{-m}\sum_{\xi\in Per(W)} |\widehat{E}(\xi)|^{2}.
\end{equation}
\end{lemma}
\begin{remark}
We note that Lemma \ref{lem:fff} is the only place in this paper where the prime field $\mathbb{F}_{p}$ is needed. We do not know whether Lemma \ref{lem:fff} also holds for vector spaces over general finite fields.
\end{remark}
In the following we extend a result of \cite[Lemma 3.1]{ChenP} to general subsets of $G(n,n-m)$. For $G\subset G(n,n-m)$, define
\begin{equation}\label{eq:define}
G'=\bigcup_{W\in G}\bigcup_{j=1}^{p^{m}}(x_{W,j}+W)
\end{equation}
where, for each $W$, $x_{W, j}+W$, $1\leq j\leq p^{m}$,
denote the cosets of $W$.
\begin{lemma}\label{lem:abstract}
Let $G$ be a subset of $G(n, n-m)$ with $|G|\gtrsim p^{\beta}$.
(1) If for any $\xi\neq 0$,
\begin{equation}\label{eq:l11}
|\{W\in G: \xi \in W\}|\lesssim |G| p^{-\beta},
\end{equation}
then
\begin{equation}
\mathcal{E}(E,G')\lesssim |E||G|+|E|^{2}|G|p^{-\beta}.
\end{equation}
(2) If for any $\xi\neq 0$,
\begin{equation}\label{eq:l22}
|\{W\in G: \xi \in Per(W)\}|\lesssim |G| p^{-\beta},
\end{equation}
then
\begin{equation}
\mathcal{E}(E,G')\lesssim p^{-m}|G|(|E|^{2}+|E|p^{n-\beta}).
\end{equation}
\end{lemma}
\begin{proof}
Claim $(1)$ follows by double counting. Recall that we denote by $F(x)$ the characteristic function of a subset $F\subset \mathbb{F}_{p}^{n}$. Then
\begin{equation*}
\begin{aligned}
\mathcal{E}(E, G')&= \sum_{V\in G' }|E \cap V|^{2}\\
&=\sum_{V \in G'} \left(\sum_{x\in E}V(x) \right)^{2}\\
&=\sum_{V \in G'} \left(\sum_{x\in E}V(x)+\sum_{x\neq y \in E} V(x)V(y) \right)\\
&\lesssim |E||G|+|E|(|E|-1)|G|p^{-\beta}.
\end{aligned}
\end{equation*}
To establish $(2)$, we use Lemma \ref{lem:fff}:
\begin{equation*}
\begin{aligned}
\mathcal{E}(E, G')
&=\sum_{W\in G} \sum_{j=1}^{p^{m}}|E \cap (x_{W, j}+W)|^{2}\\
& =p^{-m}\sum_{W\in G} \sum_{\xi\in Per(W)}|\widehat{E}(\xi)|^{2}\\
& =p^{-m}(|G||E|^{2}+\sum_{W\in G} \sum_{\xi\in Per(W)\backslash \{0\}}|\widehat{E}(\xi)|^{2})\\
&\lesssim p^{-m}(|G||E|^{2}+p^{n}|E||G|p^{-\beta}).
\end{aligned}
\end{equation*}
Thus we finish the proof.
\end{proof}
\section{Proof of Theorem \ref{thm:main}}\label{sec:proof}
\subsection{Random subsets of $G(n,n-m)$}\label{sub:rrr}
We start with a description of these random subsets of $G(n,n-m)$. Let $0<\delta<1$. We keep each element of $G(n,n-m)$ with probability $\delta$ and remove it with probability $1-\delta$, all choices being independent of each other. Let $G=G^{\omega}$ be the collection of the chosen subspaces, and let $\Omega (G(n,n-m), \delta)$ be the probability space consisting of all possible outcomes of this random choice.
For convenience, we formulate the following large deviation estimate. For more background and details on large deviation estimates, see Alon and Spencer \cite[Appendix A]{Alon}.
\begin{lemma}[Chernoff bound]\label{lem:law of large numbers}
Let $\{X_j\}_{j=1}^N$ be a sequence of independent Bernoulli random variables, each taking value $1$ with probability $\delta$ and value $0$ with probability $1-\delta$. Then
\[
\mathbb{P}( \sum^N_{j=1} X_j \geq 3N\delta )\leq e^{-N\delta}.
\]
\end{lemma}
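A quick seeded simulation illustrates the Chernoff bound: with $N\delta=100$ the bound $e^{-N\delta}$ is astronomically small, so the event $\sum_j X_j \geq 3N\delta$ should never be observed in a modest number of trials. The parameters below are illustrative choices of ours, and the run is seeded for reproducibility.

```python
# Empirical illustration of P(sum X_j >= 3*N*delta) <= e^{-N*delta}.
# With N*delta = 100, the bound is ~e^{-100}, so no excursion above
# 3*N*delta = 300 should ever appear in 200 trials.
import random
random.seed(0)

N, delta, trials = 2000, 0.05, 200
worst = 0
for _ in range(trials):
    s = sum(1 for _ in range(N) if random.random() < delta)
    worst = max(worst, s)
print(worst, 3*N*delta)
assert worst < 3*N*delta
```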
We also need the following Lemma of \cite[Lemma 2.7]{ChenP}.
\begin{lemma} \label{lem:c}
Let $\xi$ be a non-zero vector of $\mathbb{F}_{p}^{n}$.
(1) $|\{W\in G(n, k): \xi\in W\}|=|G(n-1, k-1)|$.
(2) $|\{W\in G(n, k): \xi\in Per(W)\}|=|G(n-1, k)|$.
\end{lemma}
\begin{corollary}\label{co:cc}
For any $m<\alpha<m(n-m)$, there exists a subset $G\subset G(n,n-m)$ such that
$|G|\approx p^{\alpha}$ and for any $\xi\neq 0$,
\[
|\{W\in G: \xi \in W\}|\lesssim |G|p^{-m}.
\]
\end{corollary}
\begin{proof}
We consider the random model $\Omega(G(n,n-m), \delta)$ where $\delta=|G(n,m)|^{-1}p^{\alpha}$ (recall that $|G(n,m)|=|G(n,n-m)|$). First observe that $p^{\alpha}/2 \leq |G| \leq 2 p^{\alpha}$ with high probability (greater than $1/2$) provided $p$ is large. This follows from Chebyshev's inequality, which gives
\begin{equation}\label{eq:che}
\begin{aligned}
\mathbb{P}\big(\big||G| - p^{\alpha}\big|> \frac{1}{2}p^{\alpha}\big)&\leq \frac{4p^{\alpha}(1-\delta)}{p^{2\alpha}}\\
&\leq \frac{4}{p^{\alpha}}\rightarrow 0 \text{ as } p \rightarrow \infty.
\end{aligned}
\end{equation}
Let $\xi\neq 0$ and $G_{\xi}:=\{W\in G(n,n-m): \xi \in W\}$. Lemma \ref{lem:c} (1) implies that
\[
|G_{\xi}|=|G(n-1,n-m-1)|\approx p^{m(n-m)-m}.
\]
Observe that for $G\in \Omega(G(n,n-m),\delta)$,
\[
|\{W\in G: \xi \in W\}|=\sum_{W\in G_{\xi}}{\bf 1}_{G}(W).
\]
Thus by Lemma \ref{lem:law of large numbers},
\[
\mathbb{P}(\sum_{W\in G_{\xi}}{\bf 1}_{G}(W)\geq 3|G_{\xi}|\delta)\leq e^{-Cp^{\alpha-m}}
\]
where $C$ is a positive constant. It follows that
\begin{equation*}
\begin{aligned}
\mathbb{P}(\exists\, \xi\neq 0 \text{ s.t. } \sum_{W\in G_{\xi}}&{\bf 1}_{G}(W) \geq 3|G_{\xi}|\delta)\\
&\leq p^{n}e^{-Cp^{\alpha-m}}\rightarrow 0 \text{ as } p \rightarrow \infty.
\end{aligned}
\end{equation*}
Together with estimate \eqref{eq:che}, we conclude that $G\in \Omega(G(n,n-m), \delta)$ satisfies our needs with positive probability, provided $p$ is large enough; in particular, at least one such $G$ exists.
\end{proof}
\begin{corollary}\label{co:ccc}
For any $n-m<\alpha<m(n-m)$, there exists a subset $G\subset G(n,n-m)$ such that
$|G|\approx p^{\alpha}$ and for any $\xi\neq 0$,
\[
|\{W\in G: \xi \in Per(W)\}|\lesssim |G|p^{-(n-m)}.
\]
\end{corollary}
\begin{proof}
We consider the random model $\Omega(G(n,n-m), \delta)$ where $\delta=|G(n,m)|^{-1}p^{\alpha}$. For any $\xi\neq 0$, Lemma \ref{lem:c} (2) implies that
\[
|\{W\in G(n, n-m): \xi \in Per(W)\}|=|G(n-1, n-m)|\approx p^{m(n-m)-(n-m)}.
\]
Then, applying an argument similar to the proof of Corollary \ref{co:cc}, we obtain that $G\in \Omega(G(n,n-m), \delta)$ satisfies our needs
with high probability provided $p$ is large enough.
\end{proof}
Now we intend to apply Lemma \ref{lem:abstract} and the above two Corollaries to prove Theorem \ref{thm:main}.
\begin{proof}[Proof of Theorem \ref{thm:main}]
Suppose first that $\alpha>m$. By Corollary \ref{co:cc} there exists a subset $G\subset G(n,n-m)$ such that $|G|\approx p^{\alpha}$ and for any $\xi\neq 0$,
\[
|\{W\in G: \xi \in W\}|\lesssim |G|p^{-m}.
\]
Applying Lemma \ref{lem:abstract} (1), we obtain that for any $E\subset \mathbb{F}_{p}^{n}$,
\[
\mathcal{E}(E,G')\lesssim |G|(|E|+|E|^{2}p^{-m}).
\]
By estimate \eqref{eq:argument} we obtain
\[
|\{W\in G: |\pi^{W}(E)|\leq N\}|\lesssim |G|N(|E|^{-1}+p^{-m}).
\]
Now suppose $\alpha>n-m$. By Corollary \ref{co:ccc} there exists a subset $G\subset G(n,n-m)$ with $|G|\approx p^{\alpha}$ such that for any $\xi\neq 0$,
\[
|\{W\in G: \xi \in Per(W)\}|\lesssim |G|p^{-(n-m)}.
\]
Applying Lemma \ref{lem:abstract} (2), we obtain that for any $E\subset \mathbb{F}_{p}^{n}$,
\[
\mathcal{E}(E,G')\lesssim |G|(|E|+|E|^{2}p^{-m}).
\]
Again by estimate \eqref{eq:argument}, we obtain
\[
|\{W\in G: |\pi^{W}(E)|\leq N\}|\lesssim |G|N(|E|^{-1}+p^{-m}).
\]
Thus we complete the proof.
\end{proof}
\section{Examples}
We present two examples in the following. For $D\subset \mathbb{F}_{p}^{n}$ let $G_{D}$ be
the collection of one-dimensional subspaces which intersect $D$, i.e.,
\[
G_{D}=\{\{kx: k\in \mathbb{F}_{p}\}: x\in D\}.
\]
\begin{example}\label{exa:1}
Let $
S_{1}=\{(x_{1}, x_{2}, 1)\in \mathbb{F}_{p}^{3}: x_{1}^{2}+x_{2}^{2}=1\}$. Then for any $E\subset \mathbb{F}_{p}^{3}$,
\[
|\{L\in G_{S_{1}}: |\pi^{L}(E)|\leq N\}|\lesssim |S_{1}|N(p^{-2}+|E|^{-1}).
\]
\end{example}
\begin{proof}
A. Iosevich and M. Rudnev \cite[Lemma 2.2]{IosevichRudnev} proved that $|S_{1}|\approx p$, and hence $|G_{S_{1}}|\approx p$. Observe that $|W\cap S_{1}|\lesssim 1$ for any $W\in G(3,2)$.
For $\xi\neq 0$ let $Span(\xi)=\{k\xi: k\in \mathbb{F}_{p}\}$. Then
\[
\{L\in G_{S_{1}}: \xi \in Per(L)\}=G_{S_{1}}\cap Per(Span(\xi)).
\]
The rank-nullity theorem implies that $\dim Per(Span(\xi))=2$. Thus $Per(Span(\xi))\in G(3,2)$, and hence we obtain
\[
|\{L\in G_{S_{1}}: \xi \in Per(L)\}|\lesssim 1.
\]
Applying estimate \eqref{eq:argument} and Lemma \ref{lem:abstract} (2) with $\beta=1, m =2$, we finish the proof.
\end{proof}
Note that the above example $S_{1}$ can be considered as a finite field version of the curve
\[
\Gamma=\{\frac{1}{\sqrt{2}}(\cos t, \sin t, 1): t\in [0, 2\pi]\} \subset \mathbb{R}^{3}.
\]
For more details on restricted families of projections with respect to $\Gamma$ we refer to \cite{KOV}, \cite{OV}. In the following, we give a finite field version of the curve
\[
\{(t,t^{2},\cdots, t^{n}): t\in [0,1]\}\subset \mathbb{R}^{n}.
\]
\begin{example}\label{ex:ex}
Let $S=\{(a, a^{2}, \cdots, a^{n}): a \in \mathbb{F}_{p}\backslash \{0\}\}$. Then $|G_{S}|=p-1$ and for any subset $E\subset \mathbb{F}_{p}^{n}$,
\[
|\{L\in G_{S}: |\pi^{L}(E)|\leq N\}|\lesssim |G_{S}|N(|E|^{-1}+p^{-(n-1)}).
\]
\end{example}
\begin{proof}
For $n=2$ we have $|G_{S}|\approx |G(2,1)|\approx p$, and the claim follows from Theorem \ref{thm:ChenP}. In the following we fix $n\geq 3$ and let $p$ be a large prime number.
For any $\xi\neq 0$,
\[
\{L\in G_{S}:\xi \in Per(L) \}=G_{S}\cap Per(Span(\xi)).
\]
The rank-nullity theorem implies that $\dim Per(Span(\xi))=n-1$.
Observe that any $n$ elements of $S$ form a nonsingular Vandermonde matrix, and hence any $n$ vectors of $S$ are linearly independent. It follows that for any hyperplane $W\in G(n,n-1)$,
\[
|W\cap S|\leq n-1\lesssim_{n} 1.
\]
Therefore we obtain
\[
|\{L\in G_{S}: \xi \in Per(L) \}|\lesssim_{n}1.
\]
Applying estimate \eqref{eq:argument} and Lemma \ref{lem:abstract} (2) with $\beta=1, m=n-1$, we finish the proof.
\end{proof}
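The Vandermonde step in the proof of Example \ref{ex:ex} can be tested by brute force: for a small prime, the determinant of every $n\times n$ matrix with rows $(a, a^2, \dots, a^n)$ for distinct nonzero $a$ is nonzero modulo $p$. The parameters $p=11$, $n=4$ in the sketch below are our own choices.

```python
# Check that any n distinct points of the moment curve
# S = {(a, a^2, ..., a^n): a in F_p \ {0}} are linearly independent,
# i.e. the corresponding n x n matrix has nonzero determinant mod p.
import itertools

p, n = 11, 4

def det_mod_p(M, p):
    # determinant via Gaussian elimination over F_p (p prime)
    M = [row[:] for row in M]
    d = 1
    for c in range(len(M)):
        piv = next((r for r in range(c, len(M)) if M[r][c] % p), None)
        if piv is None:
            return 0
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d
        inv = pow(M[c][c], p - 2, p)   # modular inverse of the pivot
        d = d * M[c][c] % p
        for r in range(c + 1, len(M)):
            f = M[r][c] * inv % p
            M[r] = [(M[r][k] - f * M[c][k]) % p for k in range(len(M))]
    return d % p

ok = all(det_mod_p([[pow(a, j, p) for j in range(1, n + 1)] for a in subset], p) != 0
         for subset in itertools.combinations(range(1, p), n))
print(ok)  # True: all C(10,4) = 210 such matrices are nonsingular
```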
By the special choices of $N$ in the above two examples, we conclude that the Marstrand-Mattila type projection theorem holds for the restricted families $(\pi^{L})_{L\in G_{S_{1}}}$ and $(\pi^{L})_{L\in G_{S}}$.
| {
"timestamp": "2017-12-29T02:00:14",
"yymm": "1712",
"arxiv_id": "1712.09335",
"language": "en",
"url": "https://arxiv.org/abs/1712.09335",
"abstract": "We study the restricted families of projections in vector spaces over finite fields. We show that there are families of random subspaces which admit a Marstrand-Mattila type projection theorem.",
"subjects": "Classical Analysis and ODEs (math.CA); Combinatorics (math.CO)",
"title": "Restricted families of projections in vector spaces over finite fields",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138112138841,
"lm_q2_score": 0.724870282120402,
"lm_q1q2_score": 0.7087156861676217
} |
https://arxiv.org/abs/1509.06029 | Capacity and Expressiveness of Genomic Tandem Duplication | The majority of the human genome consists of repeated sequences. An important type of repeated sequences common in the human genome are tandem repeats, where identical copies appear next to each other. For example, in the sequence $AGTC\underline{TGTG}C$, $TGTG$ is a tandem repeat, that may be generated from $AGTCTGC$ by a tandem duplication of length $2$. In this work, we investigate the possibility of generating a large number of sequences from a \textit{seed}, i.e.\ a small initial string, by tandem duplications of bounded length. We study the capacity of such a system, a notion that quantifies the system's generating power. Our results include \textit{exact capacity} values for certain tandem duplication string systems. In addition, motivated by the role of DNA sequences in expressing proteins via RNA and the genetic code, we define the notion of the \textit{expressiveness} of a tandem duplication system as the capability of expressing arbitrary substrings. We then \textit{completely} characterize the expressiveness of tandem duplication systems for general alphabet sizes and duplication lengths. In particular, based on a celebrated result by Axel Thue from 1906, presenting a construction for ternary square-free sequences, we show that for alphabets of size 4 or larger, bounded tandem duplication systems, regardless of the seed and the bound on duplication length, are not fully expressive, i.e. they cannot generate all strings even as substrings of other strings. Note that the alphabet of size 4 is of particular interest as it pertains to the genomic alphabet. Building on this result, we also show that these systems do not have full capacity. In general, our results illustrate that duplication lengths play a more significant role than the seed in generating a large number of sequences for these systems. | \section{Introduction}\label{sec:introduction}
More than $50\%$ of the human genome consists of repeated sequences~\cite{Lander}. Two important types of common repeats are i) interspersed repeats and ii) tandem repeats. Interspersed repeats are caused by transposons. A transposon (jumping gene) is a segment of DNA that can copy or cut and paste itself into new positions of the genome. Tandem repeats are caused by slipped-strand mispairings~\cite{Mundy}. Slipped-strand mispairings occur when one DNA strand in the duplex becomes misaligned with the other.
Tandem repeats are common in both prokaryote and eukaryote genomes. They are present in both coding and non-coding regions and are believed to be the cause of several genetic disorders. The effects of tandem repeats on several biological processes are understood through these disorders. They can result in the generation of toxic or malfunctioning proteins, chromosome fragility, expansion diseases, silencing of genes, modulation of transcription and translation~\cite{Usdin08}, and rapid morphological changes~\cite{Fondon}.
A process that leads to tandem repeats, e.g.\ through slipped-strand mispairing, is called \textit{tandem duplication}, which allows substrings to be duplicated next to their original position. For example, from the sequence $AGTCGTCGCT$, a tandem duplication of length $2$ can give $AGTCGT\underline{CG}\underline{CG}CT$, which, if followed by a duplication of length $3$ may give $AGTCG\underline{TCG}\underline{TCG}\penalty 0 CGCT$. The prevalence of tandem repeats and the fact that much of our unique DNA likely originated as repeated sequences~\cite{Lander} motivates us to study the capacity and expressiveness of string systems with tandem duplication, as defined below.
The model of a \textit{string duplication system} consists of a \textit{seed}, i.e.\ a starting string of finite length, a set of duplication rules that allow generating new strings from existing ones, and the set of all sequences that can be obtained by applying the duplication rules to the seed a finite number of times. The notion of \textit{capacity}, introduced in~\cite{Farzad} and defined more formally in the sequel, represents the average number of $m$-ary symbols per sequence symbol that are asymptotically required to encode a sequence in the string system, where $m$ is the \textit{alphabet} size (for DNA sequences the alphabet size is 4). The maximum value for capacity is 1. A duplication system is \textit{fully expressive} if all strings with the alphabet appear as a substring of some string in the system. As we will show, if a system is not fully expressive, then its capacity is strictly less than~1.
Before presenting the notation, definitions, and the results more formally, in the rest of this section, we present two simple examples to illustrate the notions of expressiveness and capacity for tandem duplication string systems. Furthermore, we also outline some useful tools as well as some of the results of the paper.
\begin{exm}\label{exm:1}
Consider a string system on the binary alphabet $\Sigma=\{0,1\}$ with $01$ as the seed that allows tandem duplications of length up to 2. It is easy to check that the strings generated by this system start with $0$ and end with $1$. In fact, it can be proved that all binary strings of length $n$ which start with $0$ and end with $1$ can be generated by this system. The proof is based on the fact that every such string can be written as $0^{r_1}1^{r_2}\dotsm0^{r_{v-1}}1^{r_v}$, where each $r_i \geq 1$ and $v$ is even. A natural way to generate this string is to duplicate $01$ a total of $\frac{v}{2}-1$ times, producing $(01)^{v/2}$, and then duplicate the 0s and 1s as needed via duplications of length 1.
{\it Expressiveness:} From the preceding paragraph, every binary sequence $s$ can be generated as a substring in this system as $0s1$. For example, although $11010$ cannot be generated by this system, it can be generated as a substring of $0110101$ in the following way: $$ 01 \rightarrow 0101 \rightarrow 010101 \rightarrow 0\underline{11010}1.$$ Hence this system is fully expressive.
{\it Capacity:} The number of length-$n$ strings in this system is $2^{n-2}$. Thus, encoding sequences of length $n$ in this system requires $n-2$ bits. The capacity, or equivalently the average number of bits (since the alphabet $\Sigma$ is of size 2) per symbol, is thus equal to~1. This is not surprising as the system generates almost all binary sequences.
\myendexm\end{exm}
Observing these facts for an alphabet of size $2$, one can ask related questions on expressiveness and capacity for higher alphabet sizes and duplication lengths. However, counting the number of length-$n$ sequences for capacity calculation and characterizing fully expressive systems for larger alphabets are often not straightforward tasks. In this paper, we study these questions and develop methods to answer them.
A useful tool in this study is the theory of finite automata. As a simple example, note that the string system over the binary alphabet in the preceding example can be represented by the finite automaton given in Figure \ref{fig000}. The regular expression for the language defined by the finite automaton is
\begin{equation}\label{reg_exp_01}
R_{01} = {(0^+1^+)}^+,
\end{equation}
which represents all binary strings that start with $0$ and end with $1$. Here, for a sequence $s$, $s^+$ denotes one or more concatenated copies of $s$.
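The claims of Example~\ref{exm:1} are easy to confirm computationally. The following brute-force sketch (the helper names are ours, not from the paper) enumerates all strings reachable from the seed $01$ by tandem duplications of length at most 2 and checks them against the regular expression above:

```python
import re

def tandem_closure(seed, max_k, max_len):
    """All strings of length <= max_len reachable from `seed` by
    tandem duplications of length at most max_k (brute-force BFS)."""
    seen, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for s in frontier:
            for k in range(1, max_k + 1):
                if len(s) + k > max_len:
                    continue
                for i in range(len(s) - k + 1):
                    # duplicate the substring s[i:i+k] in place
                    t = s[:i + k] + s[i:i + k] + s[i + k:]
                    if t not in seen:
                        seen.add(t)
                        nxt.append(t)
        frontier = nxt
    return seen

S = tandem_closure("01", 2, 8)
# every reachable string matches (0+1+)+ ...
assert all(re.fullmatch(r"(0+1+)+", s) for s in S)
# ... and all strings starting with 0 and ending with 1 arise
print(sum(len(s) == 8 for s in S))  # 64 = 2^(8-2)
```

The count matches the $2^{n-2}$ formula of Example~\ref{exm:1} for $n=8$.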
\begin{figure}
\centerline{\includegraphics[width=1.4in,keepaspectratio]{cap_2_2_new}}
\caption{Finite automaton for the systems $S = (\{0, 1\}, 01,\mathcal{T}_{\leq k}^{tan})$, where $k \geq 2$, including the system of Example \ref{exm:1}. Notation used here is described in detail in Section \ref{sec:prelim}. }
\vspace{-1em}
\label{fig000}
\end{figure}
One can use Perron-Frobenius theory~\cite{Immink,Lind} to count the number of sequences that can be generated by a finite automaton. This enables us to use finite automata as a tool to calculate the capacity of some string duplication systems with tandem repeats over larger alphabets.
In our results, we find that the exact capacity of the tandem duplication string system over the ternary alphabet with seed $012$ and duplication length at most $3$ equals $\log_3\frac{3+\sqrt{5}}{2} \simeq 0.876036$. Moreover, we generalize this result by characterizing the capacity of tandem duplication string systems over an arbitrary alphabet and seed with maximum duplication length 3. Namely, we show that if the maximum duplication length is 3 and the seed contains $abc$ as a substring, where $a$, $b$, and $c$ are distinct symbols, then the capacity is $\log_{|\Sigma|}\frac{3+\sqrt{5}}{2}\simeq 0.876036\log_{|\Sigma|}3$. If such a substring does not exist in the seed, then the capacity is given by $\log_{|\Sigma|}2$, unless the seed is of the form $a^m$, in which case the capacity is $0$. Some of these results are highlighted in Table~\ref{table:cap}.
Our next example presents a system that, unlike that of Example~\ref{exm:1}, is not fully expressive.
\begin{exm}
Consider a tandem duplication string system over the ternary alphabet $\{0,1,2\}$ with seed $012$ and maximum duplication length $3$. This system is not fully expressive as it cannot generate 210, 102, or 021, even as a substring. It is not difficult to see that to generate any of these strings, at least one of the other two must be already present as a substring of the seed. Since 012 does not contain any, by induction, it follows that the system is not fully expressive.
\myendexm\end{exm}
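The claim of this example can also be verified by exhaustive search up to a given length; the sketch below (our own helper, not from the paper) confirms that no string reachable from $012$ with duplications of length at most 3 contains $210$, $102$, or $021$:

```python
def tandem_closure(seed, max_k, max_len):
    """All strings of length <= max_len reachable from `seed` by
    tandem duplications of length at most max_k (brute-force BFS)."""
    seen, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for s in frontier:
            for k in range(1, max_k + 1):
                if len(s) + k > max_len:
                    continue
                for i in range(len(s) - k + 1):
                    t = s[:i + k] + s[i:i + k] + s[i + k:]
                    if t not in seen:
                        seen.add(t)
                        nxt.append(t)
        frontier = nxt
    return seen

S = tandem_closure("012", 3, 9)
forbidden = ("210", "102", "021")
print(any(f in s for f in forbidden for s in S))  # False
```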
Based on the previous example, one may ask what happens if we start with a seed that contains one of the strings 210, 102, or 021, e.g.\ if we let the seed be 01210? Does the system become fully expressive? While this system can generate all strings of length 3 as substrings, the answer is still no as shown in Theorem \ref{thm:2}: Regardless of the seed, a ternary system with maximum duplication length of 3 is not fully expressive. We show in Theorem~\ref{thm:4}, that a maximum duplication length of at least $4$ is needed to arrive at a fully expressive ternary system.
While for alphabets of size 2 or 3, increasing the maximum duplication length turns a system that is not fully expressive into one that is, for alphabets of size 4 or more, these systems are not fully expressive regardless of how large the bound on the duplication length is. The main tool in constructing quaternary strings that do not appear in these systems, either independently or as substrings, is Thue's result proving the existence of ternary square-free sequences of any length. Note that unary and binary square-free sequences of arbitrarily large length do not exist.
The existence of such sequences underlies the significant shift in the behavior of tandem duplication systems with regards to expressiveness as a function of alphabet size. Some of our results on expressiveness are summarized in Table~\ref{table:div}.
As part of this paper, we also study regular languages for tandem duplication string systems. In \cite{Leupold2}, it was shown that a tandem duplication string system is not regular if the maximum duplication length is 4 or more and the seed contains $3$ consecutive distinct symbols as a substring. However, for maximum duplication length $3$, this question remained open. In this paper, we show in Theorem \ref{thm:5} that if the maximum duplication length is 3, a tandem duplication string system is regular irrespective of the seed and the alphabet size. Moreover, we characterize the exact capacity of all these systems.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$\Sigma$ & $s$ &$k$ & Capacity\\\hline
$\{0,1,2\}$ &$012$ &$3$ & $\simeq0.876036$ \\\hline
arbitrary & $xabcy$ &$3$&$\simeq0.876036\log_{|\Sigma|}3$\\\hline
\end{tabular}
\end{center}
\caption{Capacity values for tandem duplication string systems $(\Sigma, s,\mathcal{T}^{tan}_{\leq k})$. Here $x,y\in \Sigma^*$, and $a, b,c\in\Sigma$ are distinct.}
\label{table:cap}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$\Sigma$ & $s$ &$k$ & fully expressive \\\hline
$\{0,1,2\}$ & arbitrary & $\leq 3$ &No \\\hline
$\{0,1,2\}$&$012$& $\geq 4$& Yes\\\hline
Size $\geq 4$ &arbitrary &arbitrary & No \\\hline
\end{tabular}
\end{center}
\caption{Expressiveness of tandem duplication string systems~$\left(\Sigma, s, \mathcal{T}^{tan}_{\leq k}\right)$.}
\vspace{-1.5em}
\label{table:div}
\end{table}
\paragraph*{Related Work}
Tandem duplications have already been studied in~\cite{Dassow1,Dassow2,Leupold}. However, the main concern of these works is to determine the place of tandem duplication rules in the Chomsky hierarchy of formal languages. Studies related to our work can be found in \cite{Farzad,Leupold2}. String systems with different duplication rules, namely end duplication, tandem duplication, reversed duplication, and duplication with a gap, are defined and studied in \cite{Farzad}. In end duplication, a substring of a certain length $k$ is appended to the end of the previous string, e.g., $A\underline{CT}GT \rightarrow ACTGT\underline{CT}$. In reversed tandem duplication, the reverse of a substring is appended in tandem to the previous string, e.g., $A\underline{CT}GT \rightarrow ACT\underline{TC}GT$. In duplication with a gap, a copy of a substring is inserted after a certain gap $g$ from its position in the previous string, e.g., $A\underline{CT}GT \rightarrow ACTG\underline{CT}T$.
For tandem duplication string systems, the authors in \cite{Farzad} show that for a fixed duplication length the capacity is $0$. Further, they find a lower bound on the capacity of these systems when duplications of all lengths are allowed. In this paper, we consider tandem duplication string systems in which the maximum size of the block being tandemly duplicated is restricted to a certain \textit{finite} length. In \cite{Leupold2}, the authors show that for these \textit{bounded} tandem duplication string systems, if the maximum duplication length is $4$ or more and the alphabet size is more than 2, the system is not regular for any seed that contains 3 consecutive distinct symbols as a substring. However, for maximum duplication length 3, this question was left open. In this paper, we show in Theorem \ref{thm:5} that the language is regular for maximum duplication length 3, irrespective of the seed and the alphabet size. We also characterize the exact capacity of these systems.
In the rest of the paper, the term tandem duplication string system refers to this kind of string duplication system with bounded duplication length.
The rest of the paper is organized as follows. In Section \ref{sec:prelim}, we present the preliminary definitions and notation. In Section \ref{sec:capexp}, we derive our main results on capacity and expressiveness. In Section \ref{sec:reg}, we show that if the maximum duplication length is 3, then the tandem duplication string system is regular irrespective of the seed and alphabet size. Further, using the regularity of the systems, we extend our capacity results. We present our concluding remarks in Section \ref{sec:conc}.
\section{Preliminaries}\label{sec:prelim}
Let $\Sigma$ be some finite alphabet. An $n$-string $x = x_1x_2\dotsm x_n$ $\in$ $\Sigma^n$ is a finite sequence where $x_i$ $\in$ $\Sigma$ and $|x| = n$. The set of all finite strings over the alphabet $\Sigma$ is denoted by $\Sigma^*$. For two strings $x~\in~\Sigma^n$ and $y~\in~\Sigma^m$, their concatenation is denoted by $xy~\in~\Sigma^{n+m}$. For a positive integer $m$ and a string $s$, $s^m$ denotes the concatenation of $m$ copies of $s$. A string $v~\in~\Sigma^*$ is a substring of $x$ if $x = uvw$, where $u, w~\in~\Sigma^*$.
A string system $S \subseteq \Sigma^*$ is represented as a tuple $S = (\Sigma, s, \mathcal{T})$, where $s \in \Sigma^*$ is a finite length string called seed, which is used to initiate the duplication process, and $\mathcal{T}$ is a set of rules that allow generating new strings from existing ones~\cite{Farzad}. In other words, the string system $S = (\Sigma, s, \mathcal{T})$ contains all strings that can be generated from $s$ using rules from $\mathcal T$ a finite number of times.
A tandem duplication map $T_{i,k}$,
\begin{equation*}
T_{i,k}(x)=\begin{cases}
uvvw, & \quad x=uvw,|u|=i,|v|=k,\\
x, & \quad\mbox{else},
\end{cases}
\end{equation*}
creates and inserts a copy of the substring of length $k$ that starts at position $i+1$. We use $\mathcal T^{tan}_k$ and $\mathcal T^{tan}_{\le k}$ to denote the set of tandem duplications $T_{i,k}:\Sigma^*\to\Sigma^*$ of length $k$, and of length at most $k$, respectively,
\begin{equation*}
\begin{split}
\mathcal{T}^{tan}_{k}&=\left\{ T_{i,k}:i\in\mathbb{N}\right\} ,\\
\mathcal{T}^{tan}_{\le k}&=\left\{ T_{i,j}:i,j\in\mathbb{N},j\le k\right\} .
\end{split}
\end{equation*}
With this notation, the system of Example~\ref{exm:1} can be written as $(\{0,1\}, 01, \mathcal{T}_{\leq 2}^{tan})$.
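As a sanity check, the map $T_{i,k}$ can be transcribed directly into code (a sketch; the function name is ours):

```python
def tandem_duplication(x, i, k):
    """T_{i,k}(x): if x = uvw with |u| = i and |v| = k, return uvvw;
    otherwise (the substring would run past the end) return x unchanged."""
    if i < 0 or k < 1 or i + k > len(x):
        return x
    u, v, w = x[:i], x[i:i + k], x[i + k:]
    return u + v + v + w

print(tandem_duplication("ACTGT", 1, 2))  # ACTCTGT
print(tandem_duplication("01", 0, 2))     # 0101
print(tandem_duplication("01", 1, 2))     # 01 (unchanged: too short)
```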
The \textit{capacity} of the string system $S=(\Sigma,s,\mathcal T)$ is defined as
\begin{equation}
\operatorname{cap}(S) =\limsup_{n \rightarrow \infty} \frac{\log _{|\Sigma|}|S\cap\Sigma^n|}{n}.
\end{equation}
Furthermore, $S$ is \textit{fully expressive} if for each $y~\in~\Sigma^*$, there exists a $z ~\in~S$ such that $y$ is a substring of $z$.
\section{Capacity and Expressiveness}\label{sec:capexp}
In this section, we present our results on the capacity and expressiveness of tandem duplication systems with bounded duplication length. The section is divided into two parts; the first focuses on capacity and the second on expressiveness.
\subsection{Capacity}
Our first result concerns the capacity of a tandem duplication string system over the ternary alphabet.
\begin{thm}
\label{thm:1}
For the tandem duplication string system $S = \left(\{0,1,2\}, 012,\mathcal{T}^{tan}_{\leq 3}\right)$, we have \[\operatorname{cap}(S) = \log_3\frac{3+\sqrt5}{2}\simeq 0.876036.\]
\end{thm}
\begin{figure}
\centerline{\includegraphics[width= \columnwidth,keepaspectratio]{cap_3_3_new}}
\caption{Finite automaton for $S = (\{0, 1, 2\}, 012,\mathcal{T}_{\leq 3}^{tan})$.}
\vspace{-2em}
\label{fig1}
\end{figure}
\begin{IEEEproof}
We prove this theorem by showing that the finite automaton given in Figure~\ref{fig1} accepts precisely the strings in $S$, and then finding the capacity using the Perron-Frobenius theory~\cite{Immink, Lind}.
The regular expression $R$ for the language defined by this finite automaton is given by
\begin{equation}\label{reg_exp}
R = {(0^+1^+)}^+2^+{(1^+2^+)}^*{[0^+{(2^+0^+)}^*1^+{(0^+1^+)}^*2^+{(1^+2^+)}^*]}^*.
\end{equation}
Let $L_R$ be the language defined by the regular expression $R$ (and by the finite automaton). We first show that $L_R\subseteq S$. The direct way of doing so is to start with $012$ and generate all the sequences in $L_R$ via duplications. For simplicity of presentation, however, we take the reverse route: We show that every sequence in $R$ can be transformed to $012$ by a sequence of \textit{deduplications}. A deduplication of length $k$ is an operation that replaces a substring $\alpha\alpha$ by $\alpha$ if $|\alpha|= k$. For two regular expressions $R_1$ and $R_2$, we use $R_1\dd{\le k} R_2$ to denote that \textit{each} sequence in $R_1$ can be transformed into \textit{some} sequence in $R_2$ via a sequence of deduplications of length at most $k$.
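For individual strings, the deduplication relation can be checked mechanically. The sketch below (helper names are ours) searches over deduplications of length at most 3 and confirms, for instance, that $01202012$ reduces to $012$:

```python
def dedup_once(x, max_k):
    """All strings obtainable from x by one deduplication of length
    at most max_k, i.e. replacing some square aa (|a| <= max_k) by a."""
    out = set()
    for k in range(1, max_k + 1):
        for i in range(len(x) - 2 * k + 1):
            if x[i:i + k] == x[i + k:i + 2 * k]:
                out.add(x[:i + k] + x[i + 2 * k:])
    return out

def dedups_to(x, target, max_k=3):
    """True if x can be reduced to `target` by deduplications of
    length at most max_k (breadth-first search)."""
    seen, frontier = {x}, [x]
    while frontier:
        nxt = [t for s in frontier for t in dedup_once(s, max_k) if t not in seen]
        seen.update(nxt)
        frontier = nxt
    return target in seen

print(dedups_to("01202012", "012"))  # True
print(dedups_to("0121", "012"))      # False: 0121 has no square to remove
```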
Note that $R = B_1{B_2}^*$, where
\begin{equation*}\label{B1}\begin{split}
B_1 &= {(0^+1^+)}^+2^+{(1^+2^+)}^*,\\
B_2 &= 0^+{(2^+0^+)}^*1^+{(0^+1^+)}^*2^+{(1^+2^+)}^*.
\end{split}
\end{equation*}
We have
$B_{1}\dd{\le3}012\left(12\right)^{*}\dd{\le3}012,$
since $a^{+}\dd{\le3}a$ and $\left(ab\right)^{+}\dd{\le3}ab$ for all $a,b\in\Sigma$.
Furthermore,%
\begin{equation}
\label{eq:B2}
B_2\dd{\le3} 0{(20)}^*1{(01)}^*2{(12)}^* \dd{\le3} 0{(20)}^*1{(01)}^*2\dd{\le3} 0{(20)}^*12 \dd{\le3} \{02012,012\}.
\end{equation}
Note for example that $1{(01)}^*\underline{2}{(12)}^* \dd{\le3} 1{(01)}^*2$ as the underlined $2$ is always preceded by a $1$.
We thus have $R=B_1B_2^*\dd{\le3}\{01202012,012012\}\dd{\le3}012$, proving that $L_R\subseteq S$.
To complete the proof of $L_R=S$, we now show that $S \subseteq L_{R}$. In what follows, we say a finite automaton generates a sequence $s$, if there is a path with label $s$ from $Start$ to an accepting state. If an automaton generates $uvw$, with $u,v,w\in\Sigma^*$, we may use $v$ to refer both to the string $v$ itself and to the part of the path that generates $v$. The meaning will be clear from the context.
We show $S\subseteq L_R$, by proving the following for the finite automaton in Figure~\ref{fig1}:
i) It can generate $012$.
ii) If the automaton can generate $pqr$, with $p,q,r$ $\in \Sigma^*$ and $|q| \leq 3$, it can also generate $pq^2r$.
Condition i) holds trivially (see the path $Start-S_1-S_2-S_3$ in Figure \ref{fig1}).
In order to prove ii), we define:
\begin{itemize}
\item \textit{Path Label}: Given a path $a$ in a finite automaton, the path label $l_a \in \Sigma^*$ is defined as the sequence obtained by concatenating the labels on the edges forming the path.
\item \textit{Path Length} is the number of edges of the path.
\item \textit{Superstate}: A state $D$ is a superstate of a state $C$ if for each path starting in $C$ and ending in an accepting state, there is a path with the same label starting in $D$ and ending in an accepting state. Note that every state is a superstate of itself.
\item \textit{Duplicable Path}: A path ending in a state $C$ is \textit{duplicable} if there is a path with the same label starting in $C$ and ending in a superstate of $C$.
\end{itemize}
Suppose a finite automaton can generate $pqr$. If $q$ is duplicable, then $pq^2r$ can also be generated by the finite automaton. As a result, to prove ii), it suffices to show that for each state $C$ in Figure \ref{fig1}, all paths of length $1$, $2$ or $3$ ending in $C$ are duplicable.
The rest of the proof is divided into two parts. In Part 1, we show that all paths ending in $\{S_4,S_5, S_6, T_4, T_5, T_6\}$ with length $\le3$ are duplicable. In Part 2, we prove the same statement for the states $\{S_1,S_2,S_3,T_2,T_3\}$. Note that there are no nontrivial paths ending in the $Start$ state.
{\it Part 1} : Given a state $u$ and $j\in\{1,2,3\}$, let $P^u_j$ be the set of all length-$j$ paths ending in $u$ and let $Q^u_j$ be the set of all length-$j$ paths starting and ending in $u$. If \begin{equation}\label{prop}
\bigcup_{a~\in~ P^u_j} l_a = \bigcup_{a~\in~ Q^u_j} l_a,
\end{equation}
then all length-$j$ paths ending in $u$ are duplicable.
We prove that (\ref{prop}) holds for all states $\{S_4,S_5, S_6, T_4, T_5, T_6\}$ and all $j\in\{1,2,3\}$. This is done by computing $\mathcal{A}_1$, $\mathcal{A}_1^2$ and $\mathcal{A}_1^3$, where $\mathcal{A}_1$ is the (labeled) adjacency matrix of the strongly connected component of the finite automaton given in Figure~\ref{fig1}, i.e.\ the subgraph induced by $\{S_4,S_5, S_6, T_4, T_5, T_6\}$. Here in computing the matrix products, symbols do not commute, e.g.\ $xy\neq yx$. The adjacency matrix $\mathcal{A}_1$ and its square $\mathcal A_1^2$, where $x,~y$ and $z$ represent edges labeled by $0$, $1$, and $2$, respectively, and where rows and columns correspond in order to $S_4,S_5, S_6, T_4, T_5, T_6$, are given by
\begin{equation*}
\mathcal{A}_1 =
\left[\begin{smallmatrix}
x & y & 0 & z & 0 & 0 \\
0 & y & z & 0 & x & 0 \\
x & 0 & z & 0 & 0 & y \\
x & 0 & 0 & z & 0 & 0 \\
0 & y & 0 & 0 & x & 0 \\
0 & 0 & z & 0 & 0 & y \\
\end{smallmatrix}\right],
\end{equation*}
\begin{equation*}
\vspace{1em}
\mathcal{A}_1^2 =
\left[\begin{smallmatrix}
x^2+zx & y^2+xy & yz & z^2+xz & yx & 0 \\
zx & y^2+xy & z^2+yz & 0 & x^2+yx & zy \\
x^2+zx & xy & z^2+yz & xz & 0 & y^2+zy \\
x^2+zx & xy & 0 & z^2+xz & 0 & 0 \\
0 & y^2+xy & yz & 0 & x^2+yx & 0 \\
zx & 0 & z^2+yz & 0 & 0 & y^2+zy
\end{smallmatrix}\right].
\end{equation*}
Each entry in these matrices lists the paths of specific length from the state identified by its row to the state identified by its column. For example, the entry $(6,3)$ of $\mathcal A_1^2$, which equals $z^2+yz$, indicates that there are two paths of length $2$ from $T_6$ to $S_6$ with labels $z^2=22$ and $yz=12$.
For a state $u\in\{S_4,S_5,S_6,T_4,T_5,T_6\}$, the terms in the column that corresponds to $u$ in these matrices represent the labels of the paths of the appropriate length that start in $S_4,S_5,S_6,T_4,T_5$, or $T_6$ and end in $u$.
Furthermore, for every path that starts in $\{S_1,S_2,S_3,T_2,T_3\}$ and ends in $u$, there is a corresponding path with the same label that starts in $\{S_4,S_5,S_6,T_4,T_5,T_6\}$ and ends in $u$; this path can be obtained by replacing $S_1$ with $S_4$, $S_2$ with $S_5$, $S_3$ with $S_6$, $T_2$ with $T_5$, and $T_3$ with $T_6$. Finally, there are no paths of length at most $3$ from $Start$ to $u$. Hence, the terms in the column corresponding to $u$ in the matrix $\mathcal A_1^i$, $i\in \{1,2,3\}$, contain the labels of all paths of length $i$ that end in $u$. On the other hand, the terms in the diagonal element of this column correspond to the labels of the paths that start and end in $u$.
It thus follows that to check \eqref{prop}, we need to verify that the nonzero terms in the non-diagonal elements of each column also appear in its diagonal element. For $\mathcal A_1$ and $\mathcal A_1^2$, this can be easily done by observing the matrices. For example, the entry $(3,3)$ of $\mathcal{A}_1^2$ equals $z^2+yz$ and contains all terms appearing in column 3 of $\mathcal{A}_1^2$, which are $yz$ and $z^2+yz$. We verified using a computer that $\mathcal A_1^3$ also satisfies the same condition. Hence, we have shown that all paths of length at most 3 ending in $\{S_4,S_5, S_6, T_4, T_5, T_6\}$ are duplicable.
{\it Part 2} : Now, we prove that all paths of length at most 3 ending in $\{S_1, S_2, S_3, T_2, T_3\}$ are duplicable. We first show that (\ref{prop}) holds for all states in $\{S_1, S_2, T_2, T_3\}$ for paths of length $\le3$, and also holds for $S_3$ for paths of length $1$ and $2$. Next, we show that while (\ref{prop}) does not hold for paths of length $3$ ending in $S_3$, all length-3 paths ending in $S_3$ are still duplicable.
Observe that there is no path of any length from any state in $\{S_4, S_5, S_6, T_4, T_5, T_6\}$ to any state in $\{Start, S_1, S_2, S_3,T_2,T_3\}$; hence we only need the (labeled) adjacency matrix $\mathcal A_2$ of the subgraph induced by $\{Start,S_1, S_2, S_3, T_2, T_3\}$. We have
\begin{equation*}
\mathcal{A}_2 =
\left[\begin{smallmatrix}
0 & x & 0 &0 &0&0\\
0&x & y & 0 & 0 & 0\\
0&0 & y & z & x & 0\\
0&0 & 0 & z & 0 & y\\
0&0 & y & 0 & x & 0\\
0&0 & 0 & z & 0 & y\\
\end{smallmatrix}\right],
\end{equation*}
\begin{equation*}
\mathcal{A}_2^2 =
\left[\begin{smallmatrix}
0&x^2&xy&0&0&0\\
0&x^2 & xy & yz & yx & 0\\
0&0 & y^2 + xy & z^2 + yz & x^2 + yx & zy\\
0&0 & 0 & z^2+yz & 0 & y^2 + zy\\
0&0 & y^2+xy & yz & x^2 + yx & 0\\
0&0 & 0 & z^2+yz & 0 & y^2 + zy \\
\end{smallmatrix}\right],
\end{equation*}
where rows and columns correspond to $Start$, $S_1$, $S_2$, $S_3$, $T_2$, $T_3$, in that order. We observe that in $\mathcal{A}_2$ and $\mathcal{A}_2^2$, in each of the columns corresponding to $S_1$, $S_2$, $S_3$, $T_2$, and $T_3$, the terms in the diagonal entry contain the terms appearing in that column, implying that \eqref{prop} holds for all $u\in \{S_1, S_2, S_3, T_2, T_3\}$ and $j\in\{1,2\}$, i.e.\ for paths of length $1$ and $2$. By computing $\mathcal{A}_2^3$ using a computer, it can be checked that (\ref{prop}) holds for all states $u\in \{S_1, S_2, T_2, T_3\}$ for paths of length 3 as well.
For $S_3$, there is a length-3 path $S_1-S_1-S_2-S_3$ with label $012$, for which there does not exist a corresponding path with the same label that starts and ends in $S_3$. Due to this fact, (\ref{prop}) does not hold for $S_3$ for paths of length $3$. But for this length-$3$ path, we can traverse $S_3-S_4-S_5-S_6$, which also has label $012$. Now, since $S_6$ is a superstate of $S_3$, the path with label $012$ starting in $S_1$ and ending in $S_3$ is duplicable. The other length-3 paths ending in $S_3$ have labels $112$, $122$, $222$, and $212$. For each of these 4 paths, there exists a corresponding path with the same label that starts and ends in $S_3$ (see Figure \ref{fig1}). Hence, all length-3 paths ending in $S_3$ are duplicable. This completes the proof of $S\subseteq L_R$.
Now that we have shown $S = L_R$, we use the Perron-Frobenius Theory~\cite{Immink,Lind} to count the number of sequences which can be generated via this deterministic finite automaton. We calculate the maximum absolute eigenvalue $e^*$ of the (unlabeled) adjacency matrix $B$ of the strongly connected component
of the finite automaton in Figure \ref{fig1} (i.e.\ the subgraph induced by $S_4, S_5, S_6, T_4, T_5, T_6$). The matrix $B$ can be obtained by replacing $x$, $y$, and $z$ in $\mathcal A_1$ by $1$,
\begin{equation*}
B =
\left[\begin{smallmatrix}
1 & 1 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 0 &1 & 0 & 0 \\
0 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 & 1 \\
\end{smallmatrix}\right].
\end{equation*}
The maximum absolute eigenvalue of $B$ is $e^* = \frac{3+\sqrt5}{2}\simeq 2.618034$. By the Perron-Frobenius Theory, $\operatorname{cap}(S) = \log_3 e^* \simeq 0.876036$.
\end{IEEEproof}
While the proof of the preceding theorem providing the exact capacity of the system under study is somewhat involved, it is easy to see why the capacity is strictly less than 1. One can observe from the regular expression for the finite automaton that it cannot generate a string which has $210$, $021$ or $102$ as a substring, implying that the system is not fully expressive. As we will see in Lemma~\ref{lem:expcap}, such systems cannot have capacity~$1$. It is worth noting that the set of strings that avoid $210$, $021$, and $102$ can be shown to have capacity $\simeq 0.914838$, which is slightly larger than the capacity of the system of the theorem.
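The final eigenvalue computation in the proof of Theorem~\ref{thm:1} is easy to reproduce without a linear-algebra library; since $B$ has self-loops and its graph is strongly connected, $B$ is primitive and plain power iteration converges to the Perron eigenvalue. A sketch (our own code):

```python
import math

# unlabeled adjacency matrix of the strongly connected component
# {S4, S5, S6, T4, T5, T6} of the automaton in Figure 2
B = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [1, 0, 0, 1, 0, 0],
    [0, 1, 0, 0, 1, 0],
    [0, 0, 1, 0, 0, 1],
]

def dominant_eigenvalue(M, iters=300):
    """Power iteration for the Perron eigenvalue of a primitive
    nonnegative matrix M."""
    v = [1.0] * len(M)
    lam = 1.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(M))) for row in M]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

lam = dominant_eigenvalue(B)
print(round(lam, 6))               # 2.618034 = (3 + sqrt 5)/2
print(round(math.log(lam, 3), 6))  # 0.876036 = cap(S)
```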
\subsection{Expressiveness}
We now turn to study the expressiveness of tandem duplication systems with bounded duplication length. For completeness we start with binary systems, which is indeed the simplest case.
\begin{lem}\label{lem:bin}
For any seed $s$, the system $S = \left(\{0,1\}, s, \mathcal{T}_{\leq 1}^{tan}\right)$ is not fully expressive.
\end{lem}
\begin{IEEEproof}
Duplications of length $1$ only extend runs of identical symbols and hence do not increase the number of alternations between $0$s and $1$s. Since ${(01)}^m$ contains $2m-1$ alternations while $s$, having fewer than $2m$ symbols, contains at most $2m-2$, the system cannot generate ${(01)}^m$ as a substring of any string in $S$ for $2m>|s|$.
\end{IEEEproof}
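The lemma is easy to confirm by brute force for a small seed; with duplications of length 1 only, every string reachable from $01$ is of the form $0^a1^b$, so $0101$ (the case $m=2$) never appears as a substring. A sketch (our own helper):

```python
def length1_closure(seed, max_len):
    """Strings reachable from `seed` by length-1 tandem duplications."""
    seen, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for s in frontier:
            if len(s) + 1 > max_len:
                continue
            for i in range(len(s)):
                t = s[:i + 1] + s[i] + s[i + 1:]  # duplicate symbol s[i]
                if t not in seen:
                    seen.add(t)
                    nxt.append(t)
        frontier = nxt
    return seen

S = length1_closure("01", 10)
print(any("0101" in s for s in S))  # False
# every reachable string is a run of 0s followed by a run of 1s
print(all(s == "0" * s.count("0") + "1" * s.count("1") for s in S))  # True
```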
As shown in Example~\ref{exm:1}, to obtain fully expressive binary systems, it suffices to increase the maximum duplication length to 2.
The next theorem is concerned with the expressiveness of $S = (\{0,1,2\}, s, \mathcal{T}^{tan}_{\leq 3})$. Larger alphabets and larger duplication lengths are considered in Theorems~\ref{thm:3}~and~\ref{thm:4}.
\begin{thm} \label{thm:2}
Consider $S = (\{0,1,2\}, s, \mathcal{T}^{tan}_{\leq 3})$, where $s \in \{0,1,2\}^*$ is an arbitrary seed. Then, $S$ is not fully expressive.
\end{thm}
\begin{IEEEproof}
A $k$-\textit{irreducible} string is a string that does not have a tandem repeat $\alpha\alpha$ such that $|\alpha| \leq k.$ For example, $01201$, $01210$, $02101$, and $01210121$ are $3$-irreducible strings, while $01212$, $021021$ and $01112$ are not $3$-irreducible. To prove the theorem, we identify certain properties of the new $3$-irreducible strings that may appear after a duplication and then construct a 3-irreducible string that is neither a substring of $s$ nor satisfies the properties that every new 3-irreducible substring must satisfy.
Consider a duplication event that transforms a sequence $z=uvw$ to $z^*=uvvw$, where $|v| \leq 3$. Let $x$ be a 3-irreducible string of length at least 4 that is present in $z^*$ but not in $z$. The string $x$ must intersect with both copies of $v$ in $z^*$, or else it is also present in $z$. Furthermore, it cannot contain $vv$, since otherwise it would not be 3-irreducible. To determine the properties of $x$, we consider three cases: $|v|=1,2,3.$ In what follows, assume $a_1,a_2,a_3\in\Sigma$.
First, suppose $|v|=1$, say $v=a_1$. In this case, a string $x$ with the aforementioned properties does not exist as all new substrings contain the square $a_1a_1$.
Second, assume $|v|=2$, say $v=a_1a_2$. Then $z^*=ua_1a_2a_1a_2w$ and $x$ either ends with $a_1a_2a_1$ or starts with $a_2a_1a_2$.
Third, suppose $|v|=3$, say $v=a_1a_2a_3$. So $z^*=ua_1a_2a_3a_1a_2a_3w$. Recall that $|x|\ge4$. The string $x$ either ends with $a_1a_2a_3a_1$ or $a_2a_3a_1a_2$, or starts with $a_2a_3a_1a_2$ or $a_3a_1a_2a_3$.
So for any new 3-irreducible substring $x=x_1\dotsm x_j$, $x_i\in\Sigma$, $j\ge4$, we have $x_1=x_3$, $x_1=x_4$, $x_j=x_{j-2}$, or $x_j=x_{j-3}$. Now consider the string ${(0121)}^\ell0$, where $\ell>|s|$. This sequence is 3-irreducible but does not satisfy any of the 4 properties stated for $x$. Since it is not a substring of $s$ and it cannot be generated as a new substring, it is not a substring of any $y\in S$.
\end{IEEEproof}
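Both properties of the witness string ${(0121)}^\ell0$ used in the proof can be verified mechanically (a sketch; the helper name is ours): it is 3-irreducible, and none of the four boundary conditions $x_1=x_3$, $x_1=x_4$, $x_j=x_{j-2}$, $x_j=x_{j-3}$ holds for it.

```python
def is_k_irreducible(x, k):
    """True if x contains no tandem repeat aa with |a| <= k."""
    return not any(
        x[i:i + j] == x[i + j:i + 2 * j]
        for j in range(1, k + 1)
        for i in range(len(x) - 2 * j + 1)
    )

for ell in range(1, 8):
    w = "0121" * ell + "0"
    assert is_k_irreducible(w, 3)
    # none of the four boundary properties of a new substring hold:
    assert w[0] != w[2] and w[0] != w[3]      # x_1 != x_3 and x_1 != x_4
    assert w[-1] != w[-3] and w[-1] != w[-4]  # x_j != x_{j-2} and x_j != x_{j-3}
print("all witness checks passed")
```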
Next, we consider the system $\left(\Sigma, s, \mathcal{T}^{tan}_{\leq k}\right)$ with $|\Sigma|\ge4$ in Theorem~\ref{thm:3}. The proof of the theorem uses the following lemma, which states that the expressiveness of a system also has a bearing on its capacity.
\begin{lem}\label{lem:expcap}
If a string system $S$ with alphabet $\Sigma$ is not fully expressive, then $\operatorname{cap}(S)<1$.
\end{lem}
\begin{IEEEproof}
Since $S$ is not fully expressive, there exists a $z \in \Sigma^*$ that does not appear as a substring of any $y \in S$. Let $|z| = m$ and $\mu = n -m\lfloor{\frac{n}{m}}\rfloor$. Partitioning each string of length $n$ into $\lfloor n/m\rfloor$ blocks of length $m$, followed by $\mu$ remaining symbols, no block can equal $z$, so $$|S\cap\Sigma^n| \leq {(|\Sigma|^m - 1)}^{\left \lfloor{\frac{n}{m}}\right \rfloor}{|\Sigma|}^\mu.$$ Hence $\operatorname{cap}(S)\leq\frac{1}{m}\log_{|\Sigma|}\left(|\Sigma|^m-1\right)<1$, since $m$ is finite.
\end{IEEEproof}
\begin{thm}
\label{thm:3}Consider $S = \left(\Sigma, s, \mathcal{T}^{tan}_{\leq k}\right)$, where $|\Sigma| \geq 4$, $s\in\Sigma^*$ is an arbitrary seed, and $k$ is a finite natural number. Then $S$ is not fully expressive, which also implies $\operatorname{cap} (S) < 1$.
\end{thm}
\begin{IEEEproof}
Suppose $z=uvw\in S$, where $|v|\le k$, and let $z^*=uvvw$ be the result of a duplication applied to $z$. Furthermore, suppose that $x=x_1\dotsm x_j$, where $x_i\in\Sigma$ and $j>k$, is a square-free substring of $z^*$ but not of $z$. Similar to the proof of Theorem~\ref{thm:2}, $x$ intersects both copies of $v$ but, being square-free, does not contain $vv$. As a result, either $x_1=x_{1+i}$ or $x_j=x_{j-i}$, for some $2\le i\le k$.
For definiteness, assume $\Sigma$ contains the symbols $\{0,1,2,3\}$. The sequence $0t0$, where $t$ is a square-free sequence over the alphabet $\{1,2,3\}$ and $|t|>\max\{|s|,k\}$, is not a substring of $s$ and cannot be generated as a substring since it does not satisfy the conditions stated for $x$ above. Such a $t$ exists since, as shown by Thue~\cite{Thue}, for any alphabet of size at least $3$, there exist square-free strings of every length.
Hence $S$ is not fully expressive. The second part of the theorem follows from Lemma~\ref{lem:expcap}.
\end{IEEEproof}
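Square-free ternary words of the kind the proof relies on can be generated explicitly, for example via the classical square-free morphism $0\to012$, $1\to02$, $2\to1$ (a standard construction going back to Thue; the symbols can be relabeled to $\{1,2,3\}$ as in the proof). A sketch with a brute-force square check:

```python
def ternary_thue_word(n):
    """Length-n prefix of the fixed point of the square-free morphism
    0 -> 012, 1 -> 02, 2 -> 1 (one of Thue's classical constructions)."""
    sigma = {"0": "012", "1": "02", "2": "1"}
    w = "0"
    while len(w) < n:
        w = "".join(sigma[c] for c in w)
    return w[:n]

def has_square(x):
    """True if x contains a tandem repeat aa for some nonempty a."""
    return any(
        x[i:i + j] == x[i + j:i + 2 * j]
        for j in range(1, len(x) // 2 + 1)
        for i in range(len(x) - 2 * j + 1)
    )

t = ternary_thue_word(300)
print(has_square(t))  # False: the prefix is square-free
```

Since substrings of a square-free word are square-free, any prefix of this word works as the string $t$ in the proof.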
\begin{thm}
\label{thm:4}Consider $S = (\{0,1,2\}, 012,\mathcal{T}^{tan}_{\leq 4})$. Then $S$ is a fully expressive string system.
\end{thm}
\begin{IEEEproof}
Let $S'=\left(\{0,1,2\},012,\mathcal T_{\le3}^{tan}\right)$. Clearly, $S'\subseteq S$. From the proof of Theorem~\ref{thm:1}, we know that the automaton of Figure~\ref{fig1} gives the same language as $S'$. By checking this automaton, we find that all strings of lengths 1, 2, and 3, except $021$, $210$, and $102$, appear as a substring of some string in $S'$ and, as a result, some string in $S$.
To generate $021$, $210$, and $102$ as substrings of some string in $S$, we proceed as follows:
$$ 012 \rightarrow 0\underline{1212} \rightarrow \underline{01
{\bf 210}121}2 $$
\vspace{-1.5em}
$$ 012 \rightarrow \underline{012012} \rightarrow 01\underline{2020}12 \rightarrow
0\underline{12{\bf 021}202}012$$
\vspace{-1.5em}
$$ 012 \rightarrow \underline{012012} \rightarrow 01\underline{2020}12 \rightarrow 012\underline{020{\bf 102}01}2 $$
where the repeats are underlined.
We have shown that all strings of length 3 appear in $S$ as substrings. Now we show the same for every string $w=w_1w_2w_3w_4$ of length 4. To do so, we study 3 cases based on the structure of $w$:
I) First, suppose that $w_4$ is the same as $w_1$, $w_2$, or $w_3$. For generating such $w$ as a substring, we first generate $w'=w_1w_2w_3$ as a substring of some string and then do a tandem duplication of $w_3$ if $w_4 = w_3$, of $w_2w_3$ if $w_4 = w_2$ and of $w_1w_2w_3$ if $w_4 = w_1$.
II) Suppose I) does not hold but $w_1=w_2$ or $w_2=w_3$. If the former holds, first generate $w_1w_3w_4$ and then duplicate $w_1$; if the latter holds, generate $w_1w_2w_4$ and duplicate $w_2$.
III) If neither I) nor II) holds, then $w=1210$, up to a relabeling of the symbols. In this case, we first generate $w'=0121$ and then do a tandem duplication of $w'$ to get $w$. Note that $w'$ is of the type considered in I).
Until now, we have shown that all strings $w$ of length at most $4$ appear as a substring of some string in $S$. We use induction to complete the proof. Suppose all strings of length at most $m$ appear as a substring of some string in $S$, where $m\geq 4$. We show that the same holds for strings of length $m+1$.
Consider an arbitrary $w = a_1a_2\dotsm a_ma_{m+1}$. We now consider two cases:
i) If all three letters in the alphabet occur at least once in $a_{m-3}a_{m-2}a_{m-1}a_m$, then $a_{m+1}$ equals $a_{m-3}$, $a_{m-2}$, $a_{m-1}$, or $a_m$, and $w$ can be generated as a substring by a tandem duplication of some suffix of size $\leq 4$ of $w' = a_1a_2\dotsm a_m$. Note that by the induction hypothesis $w'$ can be generated as a substring of some string.
ii) If at least one letter in the alphabet does not occur in $a_{m-3}a_{m-2}a_{m-1}a_m$, then $a_{m-3}a_{m-2}a_{m-1}a_m$ is a string over a binary alphabet, and every binary string of length $4$ contains a tandem repeat $\alpha\alpha$ with $|\alpha|\leq 2$. Deduplicating this repeat in $w$ yields a string of length at most $m$, which by the induction hypothesis appears as a substring of some string in $S$; duplicating $\alpha$ then generates $w$ as a substring. Hence, we have proved the theorem.
\end{IEEEproof}
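The combinatorial fact used in case ii), that every binary string of length 4 contains a tandem repeat of length at most 2, can be checked exhaustively (a sketch):

```python
from itertools import product

def has_short_square(x, max_k):
    """True if x contains a tandem repeat aa with |a| <= max_k."""
    return any(
        x[i:i + k] == x[i + k:i + 2 * k]
        for k in range(1, max_k + 1)
        for i in range(len(x) - 2 * k + 1)
    )

# all 16 binary strings of length 4 contain a square of length <= 2,
# while the ternary string 0121 does not
print(all(has_short_square("".join(w), 2) for w in product("01", repeat=4)))  # True
print(has_short_square("0121", 2))  # False
```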
Table \ref{table:div2} summarizes the results of this subsection. It can be observed from the table that a change of behavior in expressiveness occurs when the size of the alphabet increases to 4. If the size of the alphabet is $1$, $2$, or $3$, then for sufficiently large maximum duplication length the systems are fully expressive. However, if the size of the alphabet is at least $4$, then regardless of the maximum duplication length, the system is not fully expressive. This change is related to the fact that over alphabets of size $1$ and $2$ there are only finitely many square-free strings, while over alphabets of size $3$ and larger there are square-free strings of any length. Specifically, in case ii) of the proof of Theorem~\ref{thm:4}, we used the fact that the binary string $a_{m-3}a_{m-2}a_{m-1}a_m$ has a tandem repeat. To adapt this proof to $|\Sigma|\ge4$, we would need to show that the $(|\Sigma|-1)$-ary string $a_{m-3}a_{m-2}a_{m-1}a_m$ has a tandem repeat. This is not true in general, since by Thue's result~\cite{Thue} there are arbitrarily long square-free strings over alphabets of size at least 3; indeed, we showed in Theorem~\ref{thm:3}, again using Thue's result, that the system $\left(\Sigma,s,\mathcal T_{\le k}^{tan}\right)$ is not fully expressive for $|\Sigma|\ge4$ and any $k$.
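The two combinatorial facts used in this discussion are easy to check by brute force: every binary string of length $4$ contains a tandem repeat, while square-free ternary strings exist (and, by Thue, of any length). A small illustrative sketch in Python (the brute-force square detector is ours, not from the paper):

```python
from itertools import product

def has_square(x):
    """True if x contains a tandem repeat ww as a substring."""
    return any(x[i:i + L] == x[i + L:i + 2 * L]
               for i in range(len(x))
               for L in range(1, (len(x) - i) // 2 + 1))

# Every binary string of length 4 contains a tandem repeat...
assert all(has_square("".join(w)) for w in product("01", repeat=4))
# ...but square-free ternary strings exist (here a short example).
assert not has_square("0102012")
```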
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
$\Sigma$ & $s$ &$k$ & fully expressive & Reason \\\hline\hline
$\{0\}$ & $0$ & $\geq 1$ & Yes & Trivial \\\hline
$\{0,1\}$ & arbitrary & $ 1$ & No & Lemma~\ref{lem:bin} \\\hline
$\{0,1\}$ & $01$ & $\geq 2$& Yes & Example~\ref{exm:1}\\\hline
$\{0,1,2\}$ & arbitrary & $\leq 3$ &No & Theorem~\ref{thm:2}\\\hline
$\{0,1,2\}$&$012$& $\geq 4$& Yes& Theorem~\ref{thm:4}\\\hline
$|\Sigma|\geq 4$ &arbitrary &arbitrary & No & Theorem~\ref{thm:3} \\\hline
\end{tabular}
\end{center}
\caption{Expressiveness of tandem duplication string systems $(\Sigma, s, \mathcal{T}^{tan}_{\leq k})$.}
\vspace{-1.5em}
\label{table:div2}
\end{table}
\section{Regular Languages for Tandem Duplication String Systems}\label{sec:reg}
Regular languages for tandem duplication string systems are easier to study, since one can use tools from Perron-Frobenius theory~\cite{Immink, Lind} to calculate capacity. It was proved in \cite{Leupold2} that for $|\Sigma| \geq 3$ and maximum duplication length $\geq 4$, the language defined by a tandem duplication string system is not regular if the seed contains $abc$ as a substring with $a,b,c$ distinct. However, for maximum duplication length 3, this question was left unanswered. In Theorem~\ref{thm:5}, we show that the language resulting from a tandem duplication system with maximum duplication length $3$ is regular, regardless of the alphabet size and seed. Further, in Corollary~\ref{cor:1} we characterize the exact capacity of such tandem duplication string systems.
\begin{thm}
\label{thm:5}Let $S = (\Sigma, s, \mathcal{T}^{tan}_{\leq 3})$, where $\Sigma$ and $s$ are arbitrary. The language defined by $S$ is regular.
\end{thm}
\begin{IEEEproof}
We first assume that $s=a_1\dotsm a_m$, where the $a_i$ are distinct. The case in which the $a_i$ are not distinct is handled at the end of the proof.
\newcommand{\blockTwo}[2]{{\left(#1^+#2^+\right)}}
\newcommand{\blockThree}[3]{{B_{#1#2#3}}}
For $3\le j\le m$, let
\begin{equation*}
\begin{split}
R_{a_1\cdots a_j} = a_1^+a_2^+ {\blockTwo{a_1}{a_2}}^*
&a_3^+\blockTwo{a_2}{a_3}^*\blockThree{a_1}{a_2}{a_3}^*\\
&a_4^+\blockTwo{a_3}{a_4}^*\blockThree{a_2}{a_3}{a_4}^*\\
&\cdots \\
&a_i^+\blockTwo{a_{i-1}}{a_i}^*\blockThree{a_{i-2}}{a_{i-1}}{a_i}^*\\
&\cdots\\
&a_j^+\blockTwo{a_{j-1}}{a_j}^*\blockThree{a_{j-2}}{a_{j-1}}{a_j}^*,
\end{split}
\end{equation*}
where, for $a,b,c\in\Sigma$,
\begin{equation*}
\begin{split}
\blockThree abc &= {a^+{(c^+a^+)}^*b^+{(a^+b^+)}^*c^+{(b^+c^+)}^*}.
\end{split}
\end{equation*}
We already know from Theorem~\ref{thm:1} that $S = (\Sigma, s, \mathcal{T}^{tan}_{\leq 3})$ with $s=a_1\dotsm a_m$ defines a regular language if $m=3$. We show that for $m \geq 4$, $S$ defines a regular language whose regular expression is given by $R_{a_1a_2\cdots a_m}$. Let $L_R$ be the language defined by $R_{a_1a_2\cdots a_m}$. It suffices to show $L_R=S$.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
$\Sigma$ & $s$ &$k$ & Capacity\\\hline
$\{0,1\}$ & $01$ & $1$& $ 0$\\\hline
$\{0,1\}$ & $01$ & $\geq 2$ &$ 1$\\\hline
arbitrary& arbitrary but not $a^m$ for some $a~\in~\Sigma$ & $2$ &$\log_{|\Sigma|} 2$\\\hline
$\{0,1,2\}$ &$012$ &$3$ & $ \log_3 \frac{3+\sqrt{5}}{2}$ \\\hline
arbitrary & $xabcy$ ($x, y \in \Sigma^*$; $a, b, c \in \Sigma$ pairwise distinct)&$3$&$\log_{|\Sigma|}\frac{3+\sqrt{5}}{2}$\\\hline
arbitrary & No $3$ consecutive symbols in the seed are all distinct and $s \neq a^m$ for $a \in \Sigma$ & $3$ & $\log_{|\Sigma|} 2$\\\hline
\end{tabular}
\end{center}
\caption{Capacity values for different tandem duplication string systems $(\Sigma, s,\mathcal{T}^{tan}_{\leq k})$.}
\vspace{-1.5em}
\label{table:cap2}
\end{table*}
We first show that $L_R \subseteq S$ by proving $R_{a_1a_2\cdots a_m} \dd{\le3}s$. To do so, we show by induction that $R_{a_1a_2\dotsm a_i}\dd{\le3}a_1a_2\dotsm a_i$. First note that this holds for $i=3$, by the proof of Theorem~\ref{thm:1}. Assuming that it holds for some $i\ge3$, we show that it also holds for $i+1$. We write
\begin{equation*}
\begin{split}
R_{a_1a_2\dotsm a_{i+1}}
&\dd{\le3}R_{a_1a_2\dotsm a_{i}}a_{i+1}^+\blockTwo{a_i}{a_{i+1}}^*\blockThree{a_{i-1}}{a_i}{a_{i+1}}^*\\
&\dd{\le3}a_1a_2\dotsm a_i a_{i+1}\left(a_{i}a_{i+1}\right)^*\blockThree{a_{i-1}}{a_i}{a_{i+1}}^*\\
&\dd{\le3}a_1a_2\dotsm a_i a_{i+1}\left(a_{i-1}a_{i}a_{i+1}\right)^*\text{ or}\\
& \quad\quad\quad a_1a_2\dotsm a_i a_{i+1}\left(a_{i-1}a_{i+1}a_{i-1}a_{i}a_{i+1}\right)^*\\
&\dd{\le3}a_1a_2\dotsm a_i a_{i+1}.
\end{split}
\end{equation*}
Here we have used the fact that $c\blockThree{a}{b}{c}\dd{\le3}cabc$ which follows from \eqref{eq:B2}. Hence, $L_R \subseteq S$.
We now show that $ S \subseteq L_R$. Note that the seed $s$ is in $L_R$. It thus suffices to show that if $x = pqr \in L_R$, then $y = pq^2r \in L_R$, where $p, q,r\in\Sigma^*$ and $|q| \leq 3$. We have the following five cases:
\begin{enumerate}
\item $q = b$, $q=bb$ or $q=bbb$, for some $b\in \Sigma$: Since each symbol in the regular expression $R_{a_1\dotsm a_m}$ is followed by a $+$ or $*$ as a superscript, if $q$ lies within a run and $pqr\in L_R$, then $pq^2r\in L_R$ as well.
\item $q = bc$ for distinct $b,c\in \Sigma:$ Here $q$ represents a length-$2$ path in the finite automaton for a regular expression of the form ${\blockTwo{b}{c}}^*$, $b^+{\blockTwo{c}{b}}^*$, $b^+\blockThree{c}{a}{b}$, $\blockThree{a}{b}{c}$, $\blockThree{b}{c}{a}$, $\blockThree{b}{a}{c}$, $\blockThree{c}{a}{b}$, $\blockThree{a}{c}{b}$ or $b^+c^+$. We know from the proof of Theorem \ref{thm:1} that $bc$ is duplicable in ${\blockTwo{b}{c}}^*$, $\blockThree{a}{b}{c}$, $\blockThree{b}{c}{a}$, $\blockThree{b}{a}{c}$, $\blockThree{c}{a}{b}$ and $\blockThree{a}{c}{b}$. For $b^+{\blockTwo{c}{b}}^*$ and $b^+\blockThree{c}{a}{b}$, we enter a state in the finite automaton for ${\blockTwo{c}{b}}^*$ and $\blockThree{c}{a}{b}$ respectively with incoming edge labeled by $c$. In this state, we can again duplicate path $bc$ and return back to the same state.
The finite automaton for $b^+c^+$ is followed by the finite automaton for ${\blockTwo{b}{c}}^*$, so $bc$ can be duplicated in the automaton for ${\blockTwo{b}{c}}^*$. The duplicate $q = bc$ generated here in ${\blockTwo{b}{c}}^*$ ends in some state $C$ which is a superstate of the state $D$ in which the original $q$ in $pqr$ ended. Since $C$ is a superstate of $D$, $r$ can also be generated from $C$. Hence $pq^2r \in L_R$.
\item $q = bbc$ or $bcc$ for distinct $b, c \in \Sigma:$ Here $q$ represents a length-3 path. We only consider $q=bbc$; the other case is similar. If $pbbcr\in L_R$, then $pbcr\in L_R$ as well, since every symbol in $R_{a_1\dots a_m}$ is followed by a $+$ or $*$ as a superscript. Now we already know from case 2 above if $pbcr$ can be generated then $pbcbcr$ can also be generated. Now from case 1 above, we also know if $pbcbcr$ can be generated then $pbbcbcr$ can also be generated. Further using case 1 again, we can generate $pbbcbbcr$ from $pbbcbcr$. Hence $pq^2r \in L_R.$
\item $q = abc$ for distinct $a, b, c \in \Sigma:$ Here $q$ represents a length-$3$ path in the finite automaton for $\blockThree{\sigma(a}{b}{c)}$ ($\sigma(abc)$ represents any permutation of $a,b,c$), $a^+{\blockTwo{b}{c}}^*$, $a^+\blockThree{b}{c}{a}$, ${\blockTwo{a}{b}}^*c^+$, $\blockThree{d}{a}{b}c^+$, ${\blockTwo{a}{b}}^*\blockThree{c}{a}{b}$ or $a^+b^+c^+$. We know from the proof of Theorem \ref{thm:1} that $abc$ is duplicable in $\blockThree{\sigma(a}{b}{c)}$. The same reasoning holds for $a^+\blockThree{b}{c}{a}$ and ${\blockTwo{a}{b}}^*\blockThree{c}{a}{b}$.
The finite automaton for $a^+{\blockTwo{b}{c}}^*$, ${\blockTwo{a}{b}}^*c^+$, $\blockThree{d}{a}{b}c^+$ and $a^+b^+c^+$ is followed by a finite automaton for $\blockThree{a}{b}{c}$, so $q$ can be duplicated in the finite automaton for $\blockThree{a}{b}{c}$. The duplicate $q$ ends in some state $E$ which is a superstate of the state $F$ in which the original $q$ in $pqr$ ended. Since $E$ is a superstate of $F$, $r$ can also be generated from $E$. Hence $pq^2r \in L_R$.
\item $q = cbc$ for distinct $b,c \in \Sigma:$ Here $q$ represents a length-$3$ path that can be generated by the finite automaton for ${\blockTwo{c}{b}}^*$, ${\blockTwo{b}{c}}^*$, $\blockThree{\sigma(c}{b}{a)}$, $c^+{\blockTwo{c}{b}}^*$ or $c^+b^+{\blockTwo{c}{b}}^*$. We know from the proof of Theorem~\ref{thm:1} that $cbc$ is duplicable in ${\blockTwo{c}{b}}^*$, ${\blockTwo{b}{c}}^*$ and $\blockThree{\sigma(c}{b}{a)}$. As the state where $q$ in $pqr$ ends lies in the finite automaton for either ${\blockTwo{c}{b}}^*$, ${\blockTwo{b}{c}}^*$ or $\blockThree{\sigma(c}{b}{a)}$, $q$ can be duplicated again in the same finite automaton. The duplicate $q$ ends in a superstate of the state in which the original $q$ in $pqr$ ended. Hence $pq^2r \in L_R$.
\end{enumerate}
This completes the proof of $S\subseteq L_R$.
We have proved the statement of Theorem \ref{thm:5} assuming all $a_i$'s in the seed $s$ to be distinct. Now assume the symbols of $s$ are not distinct. We color the symbols of $s$ so that they become distinct and obtain the system $\tilde S=\left(\tilde\Sigma,\tilde s,\mathcal T_{\le3}^{tan}\right)$. Applying the preceding proof for distinct symbols to $\tilde S$, we find that $\tilde S$ is regular. Let $h:\tilde\Sigma\to\Sigma$ be a mapping that removes the colors. By \cite{Shallit}, we have that $S=h(\tilde S)$ is also regular.
\end{IEEEproof}
An immediate corollary on the capacity of the tandem duplication string systems considered in Theorem \ref{thm:5} is the following.
\begin{cor}\label{cor:1}
If for $S$ in Theorem~\ref{thm:5}, $s$ contains $abc$ as a substring such that $a, b,$ and $c \in\Sigma$ are distinct, then $\operatorname{cap}(S) = \log_{|\Sigma|} \frac{3+\sqrt{5}}{2} \simeq 0.876036\log_{|\Sigma|}3$. Otherwise, except for the seed of the form $a^m$, $\operatorname{cap}(S) = \log_{|\Sigma|}2$. If $s = a^m$, $\operatorname{cap}(S) = 0$.
\end{cor}
\begin{IEEEproof}
By Perron-Frobenius theory \cite{Immink}, \cite{Lind}, the capacity of a regular language is given by the log of the maximum eigenvalue of the adjacency matrices of its strongly connected components. When $abc$ occurs as a substring of the seed $s$ with $a, b, c \in \Sigma$ distinct, the adjacency matrix of the finite automaton for $B_{abc}$ (a strongly connected component of the finite automaton for $R_{a_1a_2\cdots a_m}$) has the largest maximum eigenvalue. Therefore $\operatorname{cap}(S) = \log_{|\Sigma|} \frac{3+\sqrt{5}}{2} \simeq 0.876036\log_{|\Sigma|}3$ (see the proof of Theorem~\ref{thm:1} for the adjacency matrix).
For the case when no $3$ consecutive symbols in the seed $s$ are all distinct and $s\neq a^m$, the maximum capacity component is a finite automaton only over 2 distinct symbols as in Figure \ref{fig000}. Hence the capacity is $\log_{|\Sigma|} 2$.
When seed $s = a^m$, there is at most one sequence of any given length in the system. Hence $\operatorname{cap}(S) = 0.$
\end{IEEEproof}
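As a quick numerical sanity check of the stated constants (the adjacency matrix itself appears in the proof of Theorem~\ref{thm:1} and is not reproduced here), note that $\frac{3+\sqrt{5}}{2}$ is the largest root of $x^2-3x+1$:

```python
import math

lam = (3 + math.sqrt(5)) / 2  # stated Perron eigenvalue
# lam is the largest root of x^2 - 3x + 1 = 0
assert abs(lam ** 2 - 3 * lam + 1) < 1e-9
# the two numerical values quoted in the corollary and the examples below
assert abs(math.log(lam, 3) - 0.876036) < 1e-5
assert abs(math.log(lam, 4) - 0.694242) < 1e-5
```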
The following examples illustrate the statement of Theorem~\ref{thm:5} and an application of its proof method.
\begin{exm}
The string system $ S = (\{0,1,2,3\}, 0123, \mathcal{T}^{tan}_{\leq 3})$ is regular by Theorem~\ref{thm:5} and the regular expression is given by
\begin{equation*}\label{reg_exp_4}
R_{0123}
= 0^+1^+{(0^+1^+)}^*2^+{(1^+2^+)}^*{B_{012}}^*3^+{(2^+3^+)}^*{B_{123}}^*.
\end{equation*}
By Corollary~\ref{cor:1}, the capacity of this system is $0.876036\log_4 3 \simeq 0.694242.$
\myendexm\end{exm}
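This characterization can be tested mechanically. The sketch below (Python; the breadth-first enumeration and the length cap are our illustration, not part of the proof) checks that every string reachable from the seed $0123$ by tandem duplications of length at most $3$, up to a small length cap, matches $R_{0123}$:

```python
import re
from collections import deque

def successors(x, k):
    """All strings obtained from x by one tandem duplication of length <= k."""
    return {x[:i + L] + x[i:i + L] + x[i + L:]
            for i in range(len(x))
            for L in range(1, min(k, len(x) - i) + 1)}

def B(a, b, c):
    """The regular expression B_{abc} from the proof of Theorem 5."""
    return f"{a}+({c}+{a}+)*{b}+({a}+{b}+)*{c}+({b}+{c}+)*"

R_0123 = re.compile("0+1+(0+1+)*2+(1+2+)*(" + B("0", "1", "2") + ")*"
                    "3+(2+3+)*(" + B("1", "2", "3") + ")*")

# Breadth-first enumeration of strings reachable from the seed 0123 by
# tandem duplications of length <= 3, capped at length 10 for speed.
seen, queue = {"0123"}, deque(["0123"])
while queue:
    x = queue.popleft()
    assert R_0123.fullmatch(x)  # every reachable string matches R_{0123}
    for y in successors(x, 3):
        if len(y) <= 10 and y not in seen:
            seen.add(y)
            queue.append(y)
```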
\begin{exm}
The string system $S = (\{0,1,2\}, 0112, \mathcal{T}^{tan}_{\leq 3})$ is regular by Theorem~\ref{thm:5}, and the regular expression is given by
\begin{equation*}\label{reg_exp_4_alt}
R_{0112}
= 0^+1^+{(0^+1^+)}^*1^+{(1^+1^+)}^*{B_{011}}^*2^+{(1^+2^+)}^*{B_{112}}^*.
\end{equation*}
By Corollary~\ref{cor:1}, the capacity of this system is given by $\log_3 2 \simeq 0.63093.$
\myendexm\end{exm}
When the $a_i$ are distinct, it can be verified from the regular expression $R_{a_1\cdots a_j}$ in the proof of Theorem \ref{thm:5} that, for all $z \in S$, the last occurrence of $a_i$ is before the first occurrence of $a_{i+3}$ for any $i = 1,2,\dotsc, j-3$. Motivated by this, we state the following lemma regarding the structure of words in tandem duplication systems with bounded duplication length.
\begin{lem}\label{lem:1} Let $s = a_1\cdots a_m$, where $a_i\in\Sigma$ are distinct. Then for any $ z \in S = \left(\Sigma, s, \mathcal{T}^{tan}_{\leq k}\right)$ and any $i= 1,\dotsc , m-k$, the last occurrence of $a_i$ is before the first occurrence of $a_{i+k}$ and the gap between them is at least $k-1$ (not counting $a_i$ and $a_{i+k}$).
\end{lem}
\begin{IEEEproof}
Fix the value of $i$. We prove the lemma by induction. Clearly, the lemma holds for $z=s$. Assuming that it holds for $x\in S$, we show that it also holds for $y=T(x)$ for any $T\in \mathcal T_{\le k}^{tan}$.
Assume $x=\alpha a_i \beta a_{i+k} \gamma$, where $\alpha,\beta,\gamma\in\Sigma^*$ and where $a_i$ and $a_{i+k}$ in this expression refer to the last occurrence of $a_i$ and the first occurrence of $a_{i+k}$ in $x$, respectively. Since $|\beta|\ge k-1$ by assumption, the substring duplicated by $T$, having length at most $k$, cannot contain both the last occurrence of $a_i$ and the first occurrence of $a_{i+k}$. If $T$ duplicates a substring of $\beta$, then the gap between the last $a_i$ and the first $a_{i+k}$ in $y$ is larger than that in $x$. In every other case, the gap stays the same. So the gap in $y$ is at least as large as the gap in $x$, which is $|\beta|\geq k-1$.
\end{IEEEproof}
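The invariant of Lemma~\ref{lem:1} can also be checked empirically. The following sketch (Python; the random duplication walks are our illustration, not part of the proof) verifies the gap condition on random elements of $S$ for $s=0123456$ and $k=3$:

```python
import random

random.seed(1)

def duplicate(x, k):
    """Apply one random tandem duplication of length <= k to x."""
    i = random.randrange(len(x))
    L = random.randint(1, min(k, len(x) - i))
    return x[:i + L] + x[i:i + L] + x[i + L:]

def check_gap(z, s, k):
    """Gap condition of the lemma for a word z derived from seed s."""
    for i in range(len(s) - k):
        last_a, first_b = z.rfind(s[i]), z.find(s[i + k])
        if not (last_a < first_b and first_b - last_a - 1 >= k - 1):
            return False
    return True

s, k = "0123456", 3
for _ in range(200):
    z = s
    for _ in range(random.randint(0, 8)):
        z = duplicate(z, k)
    assert check_gap(z, s, k)
```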
The following example treats maximum duplication length $2$, using the same idea as in the proof of Theorem \ref{thm:5}.
\begin{exm}\label{exm:9}
The string system $ S = (\Sigma, a_1a_2\cdots a_m, \mathcal{T}^{tan}_{\leq 2})$ is regular. This can be proved using the same method as used in the proof of Theorem~\ref{thm:5}. The regular expression $Q_{a_1a_2\cdots a_m}$ for $m \geq 2$ is given by
\begin{equation*}\label{reg_exp_T}
Q_{a_1a_2\cdots a_m} = a_1^+a_2^+{(a_1^+a_2^+)}^*a_3^+{(a_2^+a_3^+)}^*\cdots a_m^+{(a_{m-1}^+a_m^+)}^*.
\end{equation*}
\myendexm\end{exm}
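A mechanical check analogous to the one for Theorem~\ref{thm:5} can be done here too (Python; the enumeration parameters are ours): all strings reachable from $012$ by duplications of length at most $2$ match $Q_{012}=0^+1^+{(0^+1^+)}^*2^+{(1^+2^+)}^*$:

```python
import re
from collections import deque

def successors(x, k):
    """All strings obtained from x by one tandem duplication of length <= k."""
    return {x[:i + L] + x[i:i + L] + x[i + L:]
            for i in range(len(x))
            for L in range(1, min(k, len(x) - i) + 1)}

Q_012 = re.compile("0+1+(0+1+)*2+(1+2+)*")

seen, queue = {"012"}, deque(["012"])
while queue:
    x = queue.popleft()
    assert Q_012.fullmatch(x)  # every reachable string matches Q_{012}
    for y in successors(x, 2):
        if len(y) <= 10 and y not in seen:
            seen.add(y)
            queue.append(y)
```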
The finite automaton for a special case of Example \ref{exm:9} with $|\Sigma| = 3$ is given in Figure \ref{fig0}.
\begin{cor}\label{cor:2}
The capacity of $S = (\Sigma, a_1a_2\cdots a_m, \mathcal{T}^{tan}_{\leq 2})$ is $\log_{|\Sigma|} 2$, except when the seed is $s = a^m$ for some $a \in\Sigma$. In that case, the capacity is $0$.
\end{cor}
\begin{IEEEproof}
As in the proof of Corollary~\ref{cor:1}, by Perron-Frobenius theory the capacity of a regular language is given by the log of the maximum eigenvalue of the adjacency matrices of its strongly connected components. Except when the seed is $s = a^m$, some $ab$ with $a, b \in\Sigma$ and $a\neq b$ occurs as a substring of $s$. Hence, the maximum-capacity component in the finite automaton for $Q_{a_1a_2\cdots a_m}$ is ${(a^+b^+)}^+$, for which the capacity is $\log_{|\Sigma|} 2$.
\end{IEEEproof}
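The counting behind this capacity value is elementary: over a two-letter alphabet, ${(a^+b^+)}^+$ matches exactly the strings that start with $a$ and end with $b$, of which there are $2^{n-2}$ of length $n$, giving growth rate $2$. A quick enumeration check (Python):

```python
import re
from itertools import product

pat = re.compile("(0+1+)+")
for n in range(2, 13):
    cnt = sum(1 for w in product("01", repeat=n)
              if pat.fullmatch("".join(w)))
    assert cnt == 2 ** (n - 2)  # strings starting with 0 and ending with 1
```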
\begin{figure}
\centerline{\includegraphics[width=3in,keepaspectratio]{cap_3_2_new}}
\caption{Finite automaton for $S = (\{0, 1, 2\}, 012,\mathcal{T}_{\leq 2}^{tan})$. The regular expression $R = 0^+1^+{(0^+1^+)}^*2^+{(1^+2^+)}^*.$}
\label{fig0}
\end{figure}
Our capacity results are listed in Table \ref{table:cap2}.
\section{Conclusion}\label{sec:conc}
In this paper, we showed that for a tandem duplication string system with maximum duplication length $3$ or less, the language described by the system is regular. Further, we computed exact capacities for these systems. As future work, we would like to calculate capacities for bounded tandem duplication string systems with maximum duplication length greater than $3$.
Using Thue's result~\cite{Thue}, we showed that a tandem duplication string system cannot be fully expressive if the alphabet size is $\geq4$. However, for an alphabet of size $3$ or less, such systems can be fully expressive. In this way, we completely characterized fully expressive and non-fully expressive tandem duplication string systems with bounded duplication length. As future work, we would like to generalize the notion of expressiveness by counting the asymptotic number of \textit{substrings} of length $n$ that a string system can generate. Mathematically, we define the expressiveness $Exp(S)$ of a string system $S$ as
$$ Exp(S) = \limsup_{n\rightarrow \infty}\frac{\log_{|\Sigma|}E_n(S)}{n}.$$Here $E_n(S)$ represents the number of substrings of length $n$ that can be generated by $S$. It is notable here that with this definition of expressiveness, a fully expressive string system $S$ has $Exp(S) = 1$.
In this paper, we looked at questions related to the generation of a diversity of sequences from a seed given a tandem duplication rule. One can also study the minimum number of steps required to deduplicate a given sequence of length $n$ to a square-free seed, and thereby define a notion of distance between a sequence and its seed given a tandem duplication rule. Note that the same sequence can be deduplicated to more than one square-free seed under a given tandem duplication rule. For example, the sequence $012101212$ can be deduplicated to $012$ as well as to $0121012$ under bounded tandem duplication with maximum duplication length $4$, in the following ways: $$\underline{01210121}2 \dd{\le4} 0\underline{1212} \dd{\le4} 012$$ and
$$01210\underline{1212} \dd{\le4} 0121012.$$
Here the underlined portion represents the repeat that is being deduplicated in a given step.
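The deduplication example above can be verified exhaustively. The sketch below (Python; the function names are ours) computes all fully deduplicated strings reachable from $012101212$ by removing tandem repeats of length at most $4$:

```python
def dedup_steps(x, k):
    """All strings obtained from x by removing one copy of a tandem
    repeat ww with |w| <= k."""
    return {x[:i + L] + x[i + 2 * L:]
            for i in range(len(x))
            for L in range(1, k + 1)
            if i + 2 * L <= len(x) and x[i:i + L] == x[i + L:i + 2 * L]}

def irreducible(x, k):
    """All fully deduplicated strings reachable from x."""
    steps = dedup_steps(x, k)
    if not steps:
        return {x}
    out = set()
    for y in steps:
        out |= irreducible(y, k)
    return out

# the two square-free seeds reachable from 012101212, as in the text
assert irreducible("012101212", 4) == {"012", "0121012"}
```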
% End of arXiv:1509.06029, ``Capacity and Expressiveness of Genomic Tandem Duplication''.
% arXiv:2012.13288

\title{Answer to an open question concerning the $1/e$-strategy for best choice under no information}

\begin{abstract}
This paper answers a long-standing open question concerning the $1/e$-strategy for the problem of best choice. $N$ candidates for a job arrive at times independently uniformly distributed in $[0,1]$. The interviewer knows how each candidate ranks relative to all others seen so far, and must immediately appoint or reject each candidate as they arrive. The aim is to choose the best overall. The $1/e$-strategy is to follow the rule: ``Do nothing until time $1/e$, then appoint the first candidate thereafter who is best so far (if any).'' The question, first discussed with Larry Shepp in 1983, was to know whether the $1/e$-strategy is optimal if one has `no information about the total number of options'. Quite what this might mean is open to various interpretations, but we shall take the proportional-increment process formulation of \cite{BY}. Such processes are shown to have a very rigid structure, being time-changed {\em pure birth processes}, and this allows some precise distributional calculations, from which we deduce that the $1/e$-strategy is in fact not optimal.
\end{abstract}

\section{Dedication and background}
On the evening of Professor Larry Shepp's talk ``Reflecting Brownian Motion'' at Cornell University on July 11, 1983 (13th Conference on Stochastic Processes and their Applications), Professor Shepp and Thomas Bruss ran into each other in front of the Ezra Cornell statue. Thomas was honoured to meet Prof. Shepp in person, and Larry asked right away ``What are you working on?'' And so Larry was the very first person with whom Thomas could discuss the {\it $1/e$-law of best choice} resulting from the {\it Unified Approach} \cite{B84}, which had been accepted for publication shortly before.
Thomas was pleased to see the true interest Prof. Shepp showed for the $1/e$-law. As many of us have seen before, when Larry was interested in a problem, elementary or not, then he was really deeply interested.\smallskip
This article deals with an open question concerning the so-called $1/e$-strategy for the problem of best choice, which is to wait and do nothing until time $1/e$ and then to accept the first candidate (if any) who is best so far. The question which attracted our particular interest was whether this strategy is optimal if one has no initial information about the number $N$ of candidates. Bruss also drew attention to this open question in his own talk ``The $e^{-1}$-law in best choice problems'' at Cornell on July 14, 1983, and re-discussed it with Larry on several later occasions.
A written record of somewhat related questions appeared on page 885 of \cite{B84} where he stated the conjecture that the $1/e$-strategy is optimal in certain two-person games for a decision maker who faces an adversary trying to minimize the win probability. However, the two-person game situation is quite different from the open question discussed with Larry and will not be considered in this paper.
\smallskip
As far as we are aware, the last time the question discussed with Larry was addressed was in \cite{BY}, and this may actually be the only written reference to the real open question. \cite{BY} studied another no-information stopping problem, the so-called last-arrival problem (l.a.p.). To prepare the paper's main result, they examined the hypothesis of no-information in a detailed way, and their conclusions will be used in the present paper. \cite{BY} also used these to give, as a side-result, an alternative proof of the $1/e$-law. However, as they pointed out, their approach did not contribute new insights for the open question.
\medskip
The present article proves that the $1/e$-strategy is {\it not} optimal under the interpretation of `no information' used by \cite{BY}. It thus closes a 37-year gap.
\section{The Unified Approach}
The so-called {\it 1/e-law of best choice~} is a result obtained in the {\it Unified Approach}-model of \cite{B84}. The model is as follows:
\begin{quote}{\bf Unified Approach}: Suppose $N>0$ points are IID $U[0,1].$
Points are marked with qualities which are supposed to be uniquely rankable from $1$ (best) to $N$ (worst), and all rank arrival orders are supposed to be equally likely. The goal is to maximize the probability of stopping online, without recall of preceding observations, on rank 1. \end{quote}
\noindent This model was suggested for the best choice problem (secretary problem) for an unknown number $N$ of candidates. (More general payoff-functions for the same model were studied in \cite{BrussSamuels}.)
\medskip
Now, if we contemplate the probability of picking the best candidate, we immediately face the question `What is $N$?' If $N$ is fixed and known, this is just the classical secretary problem. But if we take a Bayesian point of view and suppose a prior distribution for $N$, with arrivals coming at times $t \in \mathbb N$,
\cite{abdel} showed that the problem may not only lead to so-called {\it stopping islands} (\cite{presman}), but, much worse, that for any $\epsilon>0$ there exists a sufficiently unfavorable distribution $\{P(N=n)\}_{n=1,2, \cdots}$ reducing the value of the optimal strategy to less than $\epsilon.$ In other words, if $N$ is allowed an arbitrary prior, optimality may mean almost nothing.
These discouraging facts prompted efforts to find more tractable models, such as the model of \cite{stewart}, and the one of \cite{cowan} and its generalisation studied in \cite{bruss88}.
\smallskip The philosophy behind the unified approach of \cite{B84} was different. The approach was to suppose that arrival times are in $[0,1]$ and to study so-called $x$-strategies, where you do nothing until time $x$, and thereafter pick the first record. One of the main results of that paper was that the $1/e$-strategy gives a success probability of at least $1/e$ whatever the prior distribution of $N$, and that no other $x$-strategy does this well. This robustness suggests that the $1/e$-strategy is somehow special, and the open question became natural.
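This guarantee can be illustrated by simulation. For a fixed number $n$ of candidates the success probability of the $1/e$-strategy is close to (and at least) $1/e$; the following Monte Carlo sketch (Python; the parameters are our choice) estimates it for $n=10$:

```python
import math
import random

random.seed(0)

def trial(n):
    """One run of the 1/e-strategy with n candidates; True iff the
    overall best is chosen."""
    times = sorted(random.random() for _ in range(n))
    ranks = list(range(1, n + 1))
    random.shuffle(ranks)  # ranks[i] = overall rank of the i-th arrival
    best_so_far = n + 1
    for t, r in zip(times, ranks):
        if r < best_so_far:      # relative record
            best_so_far = r
            if t > 1 / math.e:   # first record after time 1/e is accepted
                return r == 1
    return False                 # no record after 1/e: we lose

n, trials = 10, 100_000
p = sum(trial(n) for _ in range(trials)) / trials
assert abs(p - 1 / math.e) < 0.01  # success probability close to 1/e
```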
It is relevant to mention here that a similar phenomenon of robustness shows up in different forms. One is what \cite{bruss1990conditions} called `quasi-stationarity', meaning essentially that the optimal strategy
may (even for rather general payoffs) hardly depend on the number of candidates observed so far.
More remarkably, for so-called Pascal processes, optimal strategies do not depend at all on the number of preceding observations (For their characterization see \cite{bruss1991pascal}).
\subsection{The open question}\label{conj}
First we need to be clear about what exactly we mean by optimality of a strategy under no information on $N.$ We see a counting process $(N_t)_{0\leq t \leq 1} $, $N_0=0$, and we define $\F_t = \sigma( N_u,u \leq t)$. The law of $(N_t)_{0\leq t \leq 1} $ is $P_\theta$ for some $\theta \in \Theta$, where $\{P_\theta: \theta \in \Theta\}$ is the collection of possible laws considered.
The notion that `we have no prior information at all on $N$' means that we are only going to consider strategies which are $(\F_t)$-stopping times. That is, the strategies allowed can only know the arrival times (and ranks) of the individuals, not the value of $\theta \in \Theta$. This is the viewpoint of classical statistics.
\medskip\noindent
To understand the sense of optimality, define the process $\rho$ by
\begin{eqnarray*}
\rho_t &=& \hbox{\rm overall rank of object arriving at $t$ if $\Delta N_t =1$}
\\
&=& 0 \quad \hbox{\rm otherwise.}
\end{eqnarray*}
Let $\T$ denote the set of all $(\F_t)$-stopping times. Then the value of using $\tau \in \T$ is
\begin{equation}
R(\theta,\tau) = P_\theta[ \rho_\tau = 1 ].
\label{Rdef}
\end{equation}
We denote by $\tau^*$ the stopping time corresponding to the $1/e$ strategy, which is simply $\tau^* = \inf \{ t \geq 1/e: \rho_t = 1 \},$ where it is understood that $\tau^*=1$ if no such $t$ exists, and that, in this case, we lose by definition. In these terms, the open question is stated precisely as follows:
True or false:
\begin{equation}
\forall \theta \in \Theta, \forall \tau \in \T, \qquad
R(\theta, \tau^*) \geq R(\theta, \tau)?
\label{1/e_conj}
\end{equation}
Of course, the set $\Theta$ of possible laws of $(N_t)_{0\leq t \leq 1} $ plays an important r\^ole in the conjecture. For example, if $\Theta$ contained just one law, under which $(N_t)_{0\leq t \leq 1} $ was the counting process of ten $U[0,1]$ arrival times, then clearly the $1/e$ strategy would not be optimal in the sense of \eqref{1/e_conj}. We shall shortly explain exactly what set of laws is considered here.
\subsection{A related problem.}\label{ss22}
\smallskip
We return to the related {\em last-arrival problem}
under no information (l.a.p.) studied in \cite{BY}.
In this model an unknown number $N$ of points are IID $U[0,1]$ random variables, and an observer, inspecting the interval $[0,1]$ sequentially from left to right, wants to maximise the probability of stopping online on the very last point. No information about $N$ whatsoever is given.
Only one stop is allowed, and this again without recall on preceding observations.
\medskip
Central to the approach of \cite{BY} is the choice of the family $\Theta$ of laws of the counting process $(N_t)_{0\leq t \leq 1} $. These authors present arguments (based on the properties of IID $U[0,1]$ arrival times) to justify their focus on the family of what they call {\em proportional-increments (p.i.)} counting processes. We shall not repeat all the reasoning which leads to this choice of counting processes, but we show its basic motivation and explain why we take its implications as our starting point.
\smallskip
\cite{BY}
defined a p.i.-process as follows:
A stochastic process $(N_t)$ defined on a filtered probability space $(\Omega, {\cal F}, ({\cal F}_t), P)$ with natural filtration ${\cal F}_t=\sigma\{N_u: u\le t\}$ is a p.i.-counting process on $]0,\infty[$ if $$\forall t ~{\rm with}~ N_t>0, ~\forall s \ge 0,$$
$$\mathrm E(N_{t+s}-N_t \,\Big |\, {\cal F}_t) = \frac{s}{t} N_t~a.s.$$ The meaning of {\it proportional} is apparent from this definition. Moreover, three of the four Conclusions 1--4 in \cite{BY}, which lead to this definition, are proved to be compelling when one combines the IID $U[0,1]$-hypothesis for arrival times with the hypothesis that no prior information on $N$ can be used. Only Conclusion 3 (on page 3244) makes a concession: there the authors use an (unprovable) tractability argument to justify setting equal to zero an unknown random variable whose expectation must be zero.
Why a concession? It is important to note that, if one has no information on $N$, then the time of the first arrival, $T_1$, is a particularly delicate point. It is the smallest order statistic of all $N$ arrival times. However, it is exactly this one which escapes any distributional prescription, because the no-information setting does not allow us to assume a prior distribution $\{P(N=n)\}_{n=1,2, \cdots}$.
Hence, if one wants to confine one's interest to a well-posed problem, as \cite{BY} did, one has to make a concession somewhere in order to properly define a relevant decision process in the no-information case.
The mentioned concession seemed the least restrictive and almost compelling, but, more importantly, \cite{BY} found a solid a-posteriori justification for their tractability argument. The solution of the l.a.p. they obtained for p.i.-processes
satisfied the criteria of \cite{hadamard}
for the solution of a well-posed problem. \cite{BY} found these criteria convincing.
Now note that the only difference between the l.a.p. and our open problem (how to find rank 1) is that we want to stop on the last record of the arrival process, not on the last point. By the IID hypothesis for the arrival times of absolute ranks, R\'enyi's theorem on relative ranks (\cite{renyi}) implies that the $k$th point is a record with probability $1/k$, independently of preceding arrivals. Thus the basic arrival process
$(N_t)$ is not affected and can be chosen exactly the same!
This is why, confining our interest to well-defined problems only, we suppose that $(N_t)$ is a p.i.-process in the sense of \cite{BY}, from which we take the following definition.
\medskip
\begin{defin}
{\em A p.i.- counting process is a counting process whose compensator is $\lambda_t \equiv N_t/t$, so that ($t\in (0,1]$)
\begin{equation}
M_t \equiv N_{t \vee T_1} - N_{T_1} - \int_{T_1}^{t \vee T_1} \frac{N_s}{s}\; ds
\qquad \hbox{\rm is a martingale in its own filtration,}
\label{PIdef}
\end{equation}
where $T_1 \equiv \inf\{ t: N_t=1\}$ is the first jump time of the counting process.}
\end{defin}
\medskip
The class $\Theta$ of counting processes will be the class of all p.i.-processes, and the meaning of all the notation appearing in the statement \eqref{1/e_conj} has now been defined.
\section{Analysis of the open question.}\label{S3a}
Our analysis starts with the following little result, whose proof is immediate from the statement.
\begin{Prop}\label{prop1}
Suppose that $(N_t)$ is a p.i.-counting process.
If we define $\tilde{N}(u) = N(e^u)$ for $u \in (-\infty,0]$, and $t_1
= \log T_1$, then
\begin{eqnarray*}
M(e^u) &=& N(e^u\vee T_1)-N(T_1) - \int_{T_1}^{e^u \vee T_1} \frac{N_s}{s}\; ds
\\
&=& \tilde{N}(u \vee t_1) - \tilde{N}(t_1) - \int_{t_1}^{u \vee t_1} \tilde{N}(s)
\; ds
\end{eqnarray*}
is a martingale in its own filtration, so $(\tilde{N})$ is a pure birth process with unit per-capita birth rate, started with one individual at time $t_1$.
\end{Prop}
\medskip
So the requirement that $(N_t)$ be a p.i.-counting process is in fact not very general: apart from the choice of the time at which $(\tilde{N})$ starts, the behaviour is uniquely determined!
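This rigidity can be illustrated by a minimal simulation (a Python sketch, not from the paper; helper names are ours): starting one individual at log-time $u=-1$ and running a rate-one pure birth process up to log-time $0$, the population $\tilde N_0$ should be $1+\hbox{geometric}(e^{u})$, with mean $e^{-u}=e$ and $P(\tilde N_0=1)=e^{-1}$.

```python
import random

def yule_population_at_zero(u, rng):
    """Run a pure birth process with unit per-capita rate from one
    individual at log-time u < 0 up to log-time 0, and return the
    population size there."""
    t, n = u, 1
    while True:
        t += rng.expovariate(n)   # holding time at population n is Exp(n)
        if t >= 0.0:
            return n
        n += 1

rng = random.Random(1)
u = -1.0
samples = [yule_population_at_zero(u, rng) for _ in range(20000)]
mean = sum(samples) / len(samples)
frac_one = samples.count(1) / len(samples)
# theory: N~_0 is 1 + geometric(e^u), so E[N~_0] = e and P(N~_0 = 1) = 1/e
```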
\medskip
\noindent
{\sc Remarks.} If we model a Poisson process with intensity $\lambda$ in a Bayesian fashion, with a prior density $f(\lambda) = \varepsilon \exp(- \varepsilon \lambda)$ for $\lambda$, then the posterior mean for $\lambda$ given ${\cal F}_t$ is $N_t/t(1+\varepsilon)$, so a p.i. counting process is in some sense a limit of a Poisson process where we put an uninformative prior on $\lambda$.
\bigbreak
If we run a pure birth process from $\tilde{N}_u=1$ ($u <0$) to time $0$, the PGF
of $\tilde{N}_0$ is easily shown to be
\begin{equation}
E[z^{\tilde{N}_0} \vert \tilde{N}_u = 1] = \frac{ze^{u}}{1-z(1-e^{u})}
\qquad (z \in [0,1]),
\label{pbp1}
\end{equation}
so that $\tilde{N}_0$ is 1+geometric($e^{u}$).
Obviously, from \eqref{pbp1} we deduce
\begin{equation}
E[z^{\tilde{N}_0} \vert \tilde{N}_u = k] = \biggl\lbrace
\frac{ze^{u}}{1-z(1-e^{u})} \biggr\rbrace^k
\qquad (z \in [0,1]).
\label{pbp2}
\end{equation}
Thus if we see a record in the process $(\tilde{N})$ at time $u <0$, at the arrival of the $n^{ \hbox{\rm th} }$
observation, the probability that this is the best overall will be
\begin{equation}
\tilde{\pi}_n(u) \equiv E \biggl[ \;\frac{n}{\tilde{N}_0}\;
\biggl\vert\; \tilde{N}_u = n \biggr]
= n \int_0^1 \frac{dz}{z} \biggl\lbrace
\frac{ze^{u}}{1-z(1-e^{u})} \biggr\rbrace^n .
\label{pit}
\end{equation}
In terms of the original process $(N)$, if we see a record at the arrival of the
$n^{ \hbox{\rm th} }$ observation at time $t \in (0,1)$, then the probability that
this is the best overall is
\begin{equation}
\pi_n(t) \equiv E \biggl[ \;\frac{n}{N_1}\;
\biggl\vert\; N_t = n \biggr]
= n \int_0^1 \frac{dz}{z} \biggl\lbrace
\frac{z t}{1-z(1-t)} \biggr\rbrace^n .
\label{pin}
\end{equation}
Clearly $\pi_n(t)$ has to be increasing in $t$; from numerics it appears also to be decreasing in $n$.
We can prove that this must be the case, as follows. If we fix $t \in (0,1)$, then conditional on $N_t=n$ we have that
\begin{equation}
\frac{N_1}{n} = \xi_n \equiv\frac{n + W_1+ \ldots + W_n}{n},
\end{equation}
where the $W_j$ are IID geometric random variables with parameter $t$. Now $(\xi_n)$ is a reversed martingale in the exchangeable filtration, so, by Jensen's inequality, $(1/\xi_n)$ is a reversed submartingale in the exchangeable filtration, and hence its expectation decreases with $n$.
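As a numerical sanity check (a Python sketch, not part of the paper), \eqref{pin} can be evaluated by expanding over the negative binomial number of further arrivals; for $n=1$ the series has the closed form $\pi_1(t) = -\frac{t}{1-t}\log t$, and the values should decrease in $n$ and increase in $t$.

```python
from math import comb, log

def pi_n(t, n, terms=2000):
    """pi_n(t) = E[n / N_1 | N_t = n], expanding over the negative
    binomial number of further arrivals (series form of eq. (pin))."""
    q = 1.0 - t
    return sum(comb(n + y - 1, y) * t**n * q**y * n / (n + y)
               for y in range(terms))

t = 0.4
closed_pi1 = -(t / (1.0 - t)) * log(t)   # closed form for n = 1
vals = [pi_n(t, n) for n in range(1, 12)]
increasing_in_t = pi_n(0.6, 3) > pi_n(0.3, 3)
```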
\section{The value of a fixed threshold rule.}\label{one_over_e}
Suppose we use a fixed threshold rule, that is, we do nothing until $u\geq b$ and then we take the first record thereafter. The $1/e$ rule corresponds to the special case $b=-1$. What is the value of this?
\medskip
If $\tilde{N}_{b} = n$, then the distribution of the number $Y$ of further observations is known, and is a negative binomial distribution:
\begin{equation}
P[ Y = y ] = q^y p^n \binom{n+y-1}{y} \qquad(y \geq 0),
\label{NBdist}
\end{equation}
where $p = \exp(b)$. Given that $Y=y$, the probability that the best comes after the first $n$ observations is $y/(n+y)$, and the probability that the first record after $u=b$ is actually the best is
\begin{equation}
P[ \hbox{\rm first record after $n$ is best}| \hbox{\rm best comes after first $n$,
$Y=y$}]
= \frac{1}{y} \sum_{j=1}^y \frac{n}{n+j-1} \;.
\end{equation}
Thus we have an expression for the probability that we pick the best using this rule:
\begin{equation}
P[ \hbox{\rm win}] = \sum_{y \geq 1} P[Y=y]\; \frac{n}{n+y}\; \sum_{j=1}^y\;
\frac{1}{n+j-1}.
\label{winprob}
\end{equation}
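The conditional record probability entering \eqref{winprob} is the classical secretary-type term; a quick Monte Carlo over random permutations (a Python sketch with hypothetical helper names, not from the paper) can confirm it for, say, $n=2$ and $y=3$, where $\frac{1}{y}\sum_{j=1}^y \frac{n}{n+j-1} = \frac{1}{3}(\frac{2}{2}+\frac{2}{3}+\frac{2}{4})=\frac{13}{18}$.

```python
import random

def first_record_best_freq(n, y, trials=100000, seed=2):
    """Frequency with which the first record after the first n arrivals
    is the overall best, conditioned on the best being among the last y."""
    rng = random.Random(seed)
    m = n + y
    hits = cond = 0
    for _ in range(trials):
        perm = rng.sample(range(m), m)          # perm[i] = quality of arrival i
        best = max(range(m), key=lambda i: perm[i])
        if best < n:
            continue                            # condition: best among last y
        cond += 1
        running = max(perm[:n])
        for i in range(n, m):
            if perm[i] > running:               # first record after position n
                hits += (i == best)
                break
    return hits / cond

est = first_record_best_freq(2, 3)
exact = (1 / 3) * (2 / 2 + 2 / 3 + 2 / 4)       # = 13/18 by the formula above
```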
\subsection{The special case $n=1$.}\label{ss1}
Let us first observe that for $t \in (0,1)$
\begin{equation}
f_j(t) \equiv \sum_{k \geq j} \frac{t^k}{k} = \int_0^t \; \frac{s^{j-1}}{1-s} \; ds,
\label{fjdef}
\end{equation}
from which we see that $f_1(t) = -\log(1-t)$.
In the special case $n=1$, we have
\begin{eqnarray}
P[ \hbox{\rm win}] &=& \sum_{k \geq 1} q^k p\; \frac{1}{1+k}\; \sum_{j=1}^k\;
\frac{1}{j}\nonumber
\\
&=& pq^{-1} \sum_{j \geq 1} \; \frac{1}{j} \; f_{j+1}(q)\nonumber
\\
&=& pq^{-1} \int_0^q \sum_{j\geq 1} \frac{s^j}{j} \; \frac{ds}{1-s} \nonumber
\\
&=& pq^{-1} \int_0^q \biggl( \; \int_0^s \frac{dv}{1-v}
\; \biggr) \; \frac{ds}{1-s} \nonumber
\\
&=& {\scriptstyle{\frac{1}{2}} } \, pq^{-1} \biggl( \; \int_0^q \frac{dv}{1-v}
\; \biggr)^2 \nonumber
\\
&=& {\scriptstyle{\frac{1}{2}} } \, pq^{-1} \bigl( \; \log(1-q)
\; \bigr)^2 .\label{V1}
\end{eqnarray}
Similarly, from \eqref{pit} we have ($p \equiv 1-q \equiv e^u$)
\begin{eqnarray}
\tilde{\pi}_1(u) &=& \sum_{k \geq 0}\frac{q^k p}{1+k} \nonumber
\\
&=& \frac{p}{q} \; f_1(q) \nonumber
\\
&=& - \frac{p}{q}\; \log(1-q).
\label{pi1}
\end{eqnarray}
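Both closed forms can be checked against truncated series (a Python sketch, not from the paper): \eqref{V1} should agree with the double series \eqref{winprob} for $n=1$, \eqref{pi1} with its defining series, and at $u=-1$ one already sees $\tilde\pi_1 = 2V_1$.

```python
from math import exp, log

p = exp(-1.0)            # threshold b = -1, i.e. the 1/e rule
q = 1.0 - p

# double series (winprob) for n = 1
win_series = 0.0
h = 0.0                  # harmonic number H_k = sum_{j<=k} 1/j
for k in range(1, 4000):
    h += 1.0 / k
    win_series += q**k * p * h / (1 + k)

closed_v1 = 0.5 * (p / q) * log(1.0 - q) ** 2        # eq. (V1)

pi1_series = sum(q**k * p / (1 + k) for k in range(4000))
closed_pi1 = -(p / q) * log(1.0 - q)                 # eq. (pi1)
```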
\section{The Hamilton-Jacobi-Bellman equations.}\label{S3}
If $V_n(u)$ denotes the value of being at time $u\leq 0$ with $n$ events already observed, none of them at time $u$, then the HJB equations of optimal control for the $V_n$ are
\begin{eqnarray}
0 &=& \dot{V_n}(u) + n \biggl\lbrace \frac{n}{n+1} \, V_{n+1}(u) +
\frac{1}{n+1} \max \{ V_{n+1}(u), \tilde{\pi}_{n+1}(u) \} -V_n(u) \; \biggr\rbrace
\nonumber
\\
&=&\dot{V_n} + n( V_{n+1} - V_n ) + \frac{n}{n+1} (\tilde{\pi}_{n+1} -
V_{n+1})^+,
\label{HJB}
\end{eqnarray}
together with the boundary conditions $V_n(0)=0$.
The solution is then the value function $V_n(u).$
\smallskip
\medskip
Now, if the answer to the open question is affirmative, then $V_n=V_n^*$ for all $n$, where $V^*$ is the value function of the $1/e$ strategy, which we know reasonably explicitly: it is given by the right-hand side of \eqref{winprob}, where the dependence on the time $u<0$ enters via the parameter $p = e^u$ of the negative binomial distribution \eqref{NBdist}. By inspection of the HJB equations, we see that under the optimal rule we stop when we see a new record at time $u<0$, with $\tilde N_u = n$ observations in total, if and only if
\begin{equation}
\tilde{\pi}_n(u) > V_n(u).
\label{optstop}
\end{equation}
{\em If the $1/e$-strategy were optimal, this would say that}
\begin{equation}
\tilde{\pi}_n(u) > V^*_n(u) \qquad \hbox{\rm if and only if}\quad u > -1.
\end{equation}
We can investigate this numerically by calculating $\tilde{\pi}_n(-1)$ and $V^*_n(-1)$ and comparing them; the results are plotted in Figure \ref{ceg}.
\noindent
\begin{figure}[H]
\centering
\includegraphics[scale=0.3]{fig1.pdf}
\caption{$\tilde{\pi}_n(-1) - V^*_n(-1)$.}
\label{ceg}
\end{figure}
We see that $\tilde{\pi}_n(-1) > V^*_n(-1)$ for all $n$, and that the gap narrows as $n$ increases. Hence the answer to the open question is No. The $1/e$-strategy is not optimal.
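The comparison behind Figure \ref{ceg} is reproducible from \eqref{pit}, \eqref{NBdist} and \eqref{winprob} alone; the following Python sketch (our own truncations and names, not the authors' code) computes $\tilde{\pi}_n(-1) - V^*_n(-1)$ for $n=1,\dots,15$, with the $n=1$ gap matching \eqref{gap}, $p/2q \approx 0.291$.

```python
from math import comb, exp

def nb_pmf(y, n, p):
    """Negative binomial pmf (NBdist) for the further arrivals after u = b."""
    return comb(n + y - 1, y) * (1.0 - p)**y * p**n

def v_star(n, p, terms=1500):
    """Value of the fixed-threshold rule given N~_b = n, eq. (winprob)."""
    total, inner = 0.0, 0.0
    for y in range(1, terms):
        inner += 1.0 / (n + y - 1)       # sum_{j<=y} 1/(n+j-1)
        total += nb_pmf(y, n, p) * n / (n + y) * inner
    return total

def pi_tilde(n, p, terms=1500):
    """Probability that a record as n-th arrival at u = b is best, eq. (pit)."""
    return sum(nb_pmf(y, n, p) * n / (n + y) for y in range(terms))

p = exp(-1.0)                            # threshold b = -1
gaps = [pi_tilde(n, p) - v_star(n, p) for n in range(1, 16)]
```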
\subsection{Analytic proof.}\label{ss2}
It is nice to see, without resorting to numerics, that the answer must be No, by considering the special case $n=1$. From \eqref{V1} and \eqref{pi1} we see that for the $1/e$ rule, where $p = e^{-1}$ and $u=-1$,
\begin{equation}
\tilde{\pi}_1(u) - V_1(u) = \frac{p}{2q}\bigl[\; -2\log(1-q) -
\bigl( \; \log(1-q)
\; \bigr)^2
\;\bigr] = p/2q
\label{gap}
\end{equation}
which is clearly positive.
\subsection{Should this have been obvious?}
A closer look at \eqref{gap} may surprise us: the difference between $\tilde \pi_1(u)$ and the performance of the $1/e$-strategy can become remarkably large; indeed the first is twice the second for $u=-1$ (i.e.\ for $t=1/e$ in $[0,1]$-time). Is it then not surprising that the non-optimality of the $1/e$-strategy did not already follow from simpler comparisons?

No, not easily. First note that we compare here a conditional win probability, given a (granted) record at time $T_1\le 1/e$, with the unconditional win probability of the $1/e$-strategy. Fortunately, this was all that was needed to show that the $1/e$-strategy is not optimal, namely showing that there are situations where it is definitely strictly sub-optimal to pass over the first arrival, even when it arrives at some time $1/e-\epsilon$ with $\epsilon >0$. A priori, however, this does not say much about the absolute win probability of the $1/e$-strategy. To see this, suppose that for some small $\epsilon>0$ we have $T_1\in[1/e-\epsilon,1/e+\epsilon]$, and that $N$ is not large. The latter is quite probable if $T_1$ is close to $1/e$. Then, for $\epsilon$ sufficiently small, $T_1$ is almost equally likely to fall in the left or the right half of $[1/e-\epsilon,1/e+\epsilon]$. If it falls in the right half, the $1/e$-strategy will accept it all the same, but now with a strictly greater win probability, since $\tilde \pi_1(t)$ is strictly increasing in $t$.

A second reason why non-optimality is not evident lies in the interplay of time and the number of arrivals (see \eqref{pin} and \eqref{NBdist}). If simpler estimates give no sufficient incentive to accept a record at time $t<1/e$ arriving as the $n$th arrival, say, they give even less incentive if it was the $(n+1)$th arrival. Each additional arrival before time $1/e$ increases the expected number of arrivals thereafter, and hence also the expected number of those arriving after time $1/e$. This increases the incentive to wait, but then we also have to wait for a record, which may quickly bring us beyond time $1/e$. If we then see a record there, so much the better, as in the first scenario.
\smallskip
These two scenarios exemplify how important it is to have precise answers, and not only estimates, even reasonably good ones.
It is the preceding approach which offered these precise answers, and it settled the matter:
in the no-information case, the $1/e$-strategy is {\it not} optimal.
\bigskip \bigskip
{\bf Acknowledgement}
\medskip
The authors would like to thank very warmly Professor Philip Ernst for his precious direct and indirect contributions to this article. When Philip organised in 2018 a most memorable conference at Rice University in honour of Larry Shepp, he motivated many of us to look at harder problems, and so this old open problem turned up again and attracted attention. Philip's interest in this problem, and his numerous interesting comments on and discussions of a former version of this paper, were very encouraging.
\bigskip\bigskip
% ---------------------------------------------------------------
% arXiv:2012.13288 [math.PR] (December 2020)
% Title: Answer to an open question concerning the $1/e$-strategy
%        for best choice under no information
% URL: https://arxiv.org/abs/2012.13288
% Abstract: This paper answers a long-standing open question concerning the
% $1/e$-strategy for the problem of best choice. $N$ candidates for a job
% arrive at times independently uniformly distributed in $[0,1]$. The
% interviewer knows how each candidate ranks relative to all others seen so
% far, and must immediately appoint or reject each candidate as they arrive.
% The aim is to choose the best overall. The $1/e$ strategy is to follow the
% rule: `Do nothing until time $1/e$, then appoint the first candidate
% thereafter who is best so far (if any).' The question, first discussed with
% Larry Shepp in 1983, was to know whether the $1/e$-strategy is optimal if
% one has `no information about the total number of options'. Quite what this
% might mean is open to various interpretations, but we shall take the
% proportional-increment process formulation of \cite{BY}. Such processes are
% shown to have a very rigid structure, being time-changed {\em pure birth
% processes}, and this allows some precise distributional calculations, from
% which we deduce that the $1/e$-strategy is in fact not optimal.
% ---------------------------------------------------------------
% ---------------------------------------------------------------
% arXiv:2201.12441
% Title: Family-wise error rate control in Gaussian graphical model
%        selection via Distributionally Robust Optimization
% URL: https://arxiv.org/abs/2201.12441
% ---------------------------------------------------------------
\begin{abstract}
Recently, a special case of precision matrix estimation based on a distributionally robust optimization (DRO) framework has been shown to be equivalent to the graphical lasso. From this formulation, a method for choosing the regularization term, i.e., for graphical model selection, was proposed. In this work, we establish a theoretical connection between the confidence level of graphical model selection via the DRO formulation and the asymptotic family-wise error rate of estimating false edges. Simulation experiments and real data analyses illustrate the utility of the asymptotic family-wise error rate control behavior even in finite samples.
\end{abstract}
\section{Introduction}\label{sec1}
The estimation of the precision matrix $\Omega=\Sigma^{-1}$ of a Gaussian random vector $X \in \mathbb{R}^d$ with covariance matrix $\Sigma$ is a problem that has received much attention in statistics and machine learning \citep{dempster1972, Drton-Perlman, MY-LY:07, Drton2017}. The matrix $\Omega$ characterizes the \emph{conditional dependency} structure between variables: if $X$ follows a multivariate normal distribution, then $\Omega_{jk} = 0$ if and only if the $j$-th and $k$-th variables of $X$ are conditionally independent given the rest \citep{lauritzen1996}.
Naturally, an $\ell_1$-regularized maximum likelihood approach that introduces sparsity in the estimation of $\Omega$ was proposed by \cite{MY-LY:07}. The approach will be referred to by the name of a well-known computational algorithm, the graphical lasso \citep{JF-TH-RT:07}. The resulting sparsity pattern from graphical lasso can then be used to construct a graphical model, $G=(V,E)$, where $V$ is the set of nodes, one for each of the $d$ variables, and $E$ is the set of undirected edges: each edge $(i,j)$ represents a non-zero $(i,j)$ entry of $\Omega$. Graphical lasso subsequently spurred significant research effort in methodological development as well as in application domains \citep{guillot2015,HUANG2010935,Krumsiek2011}. As with most other learning methods, the performance of graphical lasso depends on a user-specified tuning parameter; however, tuning the sparsity-inducing regularization parameter of graphical lasso --- also called \emph{graphical model selection} --- is often challenging for various reasons.
In practice, procedures such as cross-validation (CV) and Bayes information criterion (BIC) minimization are often used to tune graphical lasso; however, they tend to overfit in simulation experiments \citep{hastie2009,liu2010}. Furthermore, CV and BIC minimization are computationally demanding because they search over a grid of candidate parameters. Moreover, asymptotic properties in the literature are often not beneficial for regularization parameter tuning in finite sample regimes. As a result, using graphical lasso in real applications is often met with significant computational and statistical subtleties, and, hence, practitioners sometimes resort to manual tuning in order to obtain an estimate of $\Omega$ with a targeted number of non-zeros.
Recently, \cite{cisneros20a} has formulated the precision matrix estimation problem using the distributionally robust optimization (DRO) framework \citep{VAN-DK-PME:18, JB-NS:19}. The authors establish the correspondence between the radius of the \emph{ambiguity set} in the DRO framework --- which measures the uncertainty around the empirical measure (see more below) --- and the regularization parameter of the graphical lasso estimator. The authors leveraged this connection to propose a \emph{robust selection} (RobSel) algorithm that, given a confidence level $1-\alpha$, determines the corresponding regularization parameter for graphical lasso.
Our work theoretically relates the RobSel error tolerance $\alpha$ to the asymptotic family-wise error rate (FWER) for estimating any false positive non-zero in $\Omega$. The practical significance of our work is that the graphical lasso regularization can be chosen according to a user-specified FWER level. We illustrate the theoretical result in simulation and compare the similarity between RobSel-chosen graphs and graphs estimated by a hypothesis testing-based procedure for graphical model selection. We confirm that choosing the graphical lasso regularization parameter with RobSel can still yield a consistent family-wise error rate characteristic in finite samples.
\section{DRO formulation and family-wise error rate of graphical lasso}\label{sec2}
Distributionally robust optimization (DRO) as an estimation framework seeks parameters that minimize the worst expected risk over the uncertainty set of distributions (often called \emph{ambiguity set} in DRO terminology). Readers are referred to a review article by \cite{kuhn2019} for an overview of the DRO. Leveraging the DRO framework, \cite{cisneros20a} showed that for a fixed $\rho \ge 1$ and $p \in [1,\infty]$, their DRO formulation of regularized inverse covariance estimation is equivalent to the following expression:
\begin{align}
\label{eq1}
\min_{K\in\subscr{\mathbb{S}}{d}^{\operatorname{++}}}\left\{\operatorname{trace}(KA_n)-\log|K|+\delta^{1/\rho}\norm{\textbf{\textup{vec}}(K)}_p\right\},
\end{align}
where $\subscr{\mathbb{S}}{d}^{\operatorname{++}}$ denotes the set of $d \times d$ positive definite matrices, and $\delta$ is the radius of the ambiguity set, which is constructed as a ball in the Wasserstein space of distributions, centered at the empirical measure of the data. Note that the graphical lasso objective function is a special case of \eqref{eq1} when $p=1$ and $\rho=1$. The constants $p$ and $\rho$ specify the Wasserstein distance metric between two probability distributions \cite[see][for details]{cisneros20a}. Remarkably, the regularization parameter of graphical lasso corresponds to the ambiguity set radius $\delta$ despite the differing premises of DRO and maximum likelihood estimation. Intuitively, an increase in the ambiguity set radius $\delta$ (i.e., increased robustness in DRO) corresponds to an increased amount of regularization in graphical lasso (which results in a more conservative selection of non-zeros).
Using the \emph{Robust Wasserstein Profile (RWP) function} $R_n$ introduced by~\cite{blanchet_kang_murthy_2019}, \cite{cisneros20a} derived the RWP function for graphical lasso, $R_{n}(K) = \norm{\textbf{\textup{vec}}(A_n-K)}_\infty$, and characterized its asymptotic distribution. The distribution is used to determine $\delta$ \citep[equivalently, the regularization parameter $\lambda$ in graphical lasso][]{JF-TH-RT:07} given the user specified error tolerance level $\alpha$:
\begin{align}
\label{oii2}
\lambda = \delta :=
\inf\setdef{\delta>0}{\mathbb{P}_0(R_n(\Omega)\leq \delta) \geq 1-\alpha}
=\inf\setdef{\delta>0}{\mathbb{P}_0(\norm{\textbf{\textup{vec}}(A_n-\Sigma)}_\infty\leq \delta) \geq
1-\alpha},
\end{align}
where $\mathbb{P}_0$ denotes the true underlying distribution of the data.
This graphical model selection procedure is called \emph{RobSel} in~\citep{cisneros20a}.
Then, by Corollary 3.3 of \cite{cisneros20a}, $n^{1/2}\delta$ tends to the $1-\alpha$ quantile, $r_{1-\alpha}$, of the limiting distribution of $n^{1/2}R_n$, and the corresponding $\delta$ can be determined from an order statistic in finite samples. This asymptotic result also motivates approximating the RWP function by the bootstrap procedure in Algorithm \ref{alg:robsel}, which determines the regularization parameter $\lambda$ given the significance level $\alpha$.
\subsection{Family-wise error rate control with RobSel}
\begin{algorithm}[!ht]
\caption{RobSel algorithm for estimation of the regularization parameter $\lambda$~\citep{cisneros20a} \label{alg:robsel}}
\begin{algorithmic}
\State Input: $n$ observations, $X_{1},\ldots, X_{n}$.
\State Set parameters $ \alpha \in (0,1)$ and $B \in \mathbb{N}$.
\State Compute empirical covariance $A_n$.
\For{$b=1,...,B$}
\State Obtain a bootstrap sample $X_{1b}^*,\ldots, X_{nb}^*$ by sampling uniformly and with replacement from the data
\State Compute empirical covariance $A^*_{n,b}$ from the bootstrap sample.
\State $R^*_{n,b} \gets \norm{A^*_{n,b}-A_n}_\infty$
\EndFor
\State Set $\lambda$ to be the bootstrap order statistic $R^*_{n, ((B+1)(1-\alpha))}$.
\end{algorithmic}
\end{algorithm}
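Algorithm \ref{alg:robsel} is straightforward to implement. The following is a self-contained Python sketch (pure standard library; the toy data and the 0-based order-statistic index are our own approximations of $R^*_{n, ((B+1)(1-\alpha))}$, not the authors' \texttt{robsel} code). Note that a larger $\alpha$ selects a smaller bootstrap quantile, and hence a smaller penalty $\lambda$.

```python
import random

def cov_matrix(X):
    """Empirical covariance (divide by n) of rows-as-observations X."""
    n, d = len(X), len(X[0])
    mu = [sum(row[j] for row in X) / n for j in range(d)]
    return [[sum((row[j] - mu[j]) * (row[k] - mu[k]) for row in X) / n
             for k in range(d)] for j in range(d)]

def robsel_lambda(X, alpha=0.1, B=200, seed=0):
    """Bootstrap the RWP statistic R*_b = ||A*_b - A_n||_inf (entrywise
    max norm) and return the (B+1)(1-alpha) order statistic as lambda."""
    rng = random.Random(seed)
    n = len(X)
    A = cov_matrix(X)
    stats = []
    for _ in range(B):
        Xb = [X[rng.randrange(n)] for _ in range(n)]     # resample rows
        Ab = cov_matrix(Xb)
        stats.append(max(abs(Ab[j][k] - A[j][k])
                         for j in range(len(A)) for k in range(len(A))))
    stats.sort()
    idx = min(B - 1, int((B + 1) * (1 - alpha)) - 1)
    return stats[idx]

# toy data: n = 80 observations of d = 3 correlated coordinates
rng = random.Random(42)
X = []
for _ in range(80):
    z = rng.gauss(0, 1)
    X.append([z + rng.gauss(0, 0.5) for _ in range(3)])
lam_small_alpha = robsel_lambda(X, alpha=0.05)
lam_large_alpha = robsel_lambda(X, alpha=0.50)
```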
In this section, we provide results for the interpretation of $\alpha$ and its relation to type I error control in graphical model selection. Recall that equation \eqref{eq1} shows that the DRO estimator is equivalent to the $\ell_1$-penalized estimator in graphical lasso, which produces a sparse estimator of $\Omega$, denoted $\hat{\Omega}^\delta$. Given equation \eqref{oii2}, a natural question is how to interpret the error tolerance $\alpha$, which was not addressed in \cite{cisneros20a}. The following result directly connects the parameter $\alpha$ in RobSel and the asymptotic FWER of the corresponding obtained estimator.
\begin{theorem}[FWER of graphical lasso]
\label{thm-FWER}
Let $\Xi = \{(i,j): \Omega_{ij} = 0\}$ be the indices corresponding to zero entries of $\Omega.$ For a fixed $\alpha,$ let $\delta$ satisfy \eqref{oii2} and let $\hat{\Omega}^\delta$ be the unique solution to optimization problem~\eqref{eq1} with $\rho=1$. Then
\begin{align}
\label{FWER}
\lim_{n \rightarrow \infty} \mathbb{P}(\hat{\Omega}^\delta_{ij} \neq 0 \textrm{ for some } (i,j) \in \Xi) \leq \alpha.
\end{align}
\end{theorem}
\begin{proof}
In this proof, let $\mathbb{S}_{\mathrm{d}}$ be the set of $d\times d$ symmetric matrices.
Recall that $n^{1/2}\delta \rightarrow r_{1 - \alpha},$ where $r_{1 - \alpha}$ is the $1 - \alpha$ quantile of the distribution in Corollary 3.3 and Remark 3.5 of \cite{cisneros20a}. Then, by Theorem 1 of \cite{MY-LY:07}, we have that $n^{1/2}(\hat{\Omega}^\delta - \Omega)$ converges in distribution to $U^*$, the minimizer of
$$
\arg \min_{U = U'}\,\, \operatorname{trace}(U\Sigma U \Sigma) + \operatorname{trace}(UH) + r_{1 - \alpha}\sum_{i \neq j}\left\{u_{ij}\mathrm{sign}(\Omega_{ij})\mathbf{1}(\Omega_{ij} \neq 0) + |u_{ij}|\mathbf{1}(\Omega_{ij} = 0)\right\},
$$
where $H \in \mathbb{S}_{\mathrm{d}}$ is a matrix of jointly Gaussian random variables with zero mean such that $\operatorname{Cov}\left(h_{i j}, h_{k \ell}\right)=E\left[x_{i} x_{j} x_{k} x_{\ell}\right]-\Sigma_{i j} \Sigma_{k \ell}$. By the convex nature of the above optimization problem, using the first optimality criterion using subdifferentials~\citep[Corollary~2.7]{FHC-YSL-RJS-PRW:98}, it follows that there exists some $Z \in \subscr{\mathbb{S}}{d}$ satisfying
$$
Z_{ij} = \left\{\begin{array}{ll} 0, & i = j, \\ \mathrm{sign}(\Omega_{ij}), & i \neq j, \Omega_{ij} \neq 0, \\ \mathrm{sign}(u_{ij}), &i \neq j, \Omega_{ij} = 0, u_{ij} \neq 0,\\ \in [-1,1], & i \neq j, \Omega_{ij} = u_{ij} = 0. \end{array} \right.
$$
for which $H + 2\Sigma U^* \Sigma + r_{1 - \alpha}Z = 0.$ Letting $\otimes$ denote the matrix Kronecker product and $\Gamma = \Sigma \otimes \Sigma$, it follows that
$$
\textbf{\textup{vec}}(U^*) = -\frac{1}{2}\Gamma^{-1}\left\{\textbf{\textup{vec}}(H) + r_{1 - \alpha}\textbf{\textup{vec}}(Z)\right\}.
$$
Finally, let $\hat{\Omega}_{\Xi}^\delta$ denote the vector of elements of $\hat{\Omega}^\delta$ whose indices are in $\Xi$, $\Omega_{\Xi}$ denote the vector of elements of $\Omega$ whose indices are in $\Xi$ (so it is the zero vector), and
$U^*_{\Xi}$ denote the vector of elements of $U^*$ whose indices are in $\Xi$. Then one concludes that
\[
\begin{split}
\lim_{n \rightarrow \infty} \mathbb{P}(\hat{\Omega}^\delta_{ij} \neq 0 \textrm{ for some } (i,j) \in \Xi) &= \lim_{n \rightarrow \infty} \mathbb{P}(\sqrt{n}(\hat{\Omega}^\delta_{\Xi} - \Omega_{\Xi}) \neq 0) = \mathbb{P}(U^*_{\Xi} \neq 0) \\
&\leq \mathbb{P}(U^* \neq 0) = 1 - \mathbb{P}(H \neq -r_{1 - \alpha}Z) \leq 1 - \mathbb{P}(\norm{\textbf{\textup{vec}}(H)}_\infty \leq r_{1 - \alpha}) \\
&=\alpha.
\end{split}
\]
\end{proof}
Using the estimated regularization parameter $\lambda(\alpha)$ from RobSel for graphical lasso, Theorem \ref{thm-FWER} states that the asymptotic probability that the estimated graph includes a false edge (a false non-zero estimated in $\hat\Omega^\delta$) is bounded by $\alpha$. This interpretation is equivalent to having the FWER bounded by $\alpha$ in the hypothesis testing-based graphical model selection of \cite{Drton-Perlman}. As a result, Theorem \ref{thm-FWER} implies that RobSel can also serve as a tool for controlling the FWER of graphical lasso at a chosen significance level $\alpha$, in a manner similar to using a hypothesis testing-based graphical model selection.
Concretely, testing the $d(d-1)/2$ null hypotheses that each pairwise partial correlation is zero can serve as an alternative way to construct a graphical model, where the partial correlation between variables $i$ and $j$ is defined as $\rho_{i j{\boldmath\cdot}\text{rest}}=-\omega_{i j}/\sqrt{\omega_{i i} \omega_{j j}}$ for $i,j=1,2,\dots,d$. The unadjusted $p$-value $\pi_{ij}$ for each null hypothesis is obtained by
\begin{equation}
\label{unadjusted-p-value}
\pi_{ij} = 2[1-\Phi(\sqrt{n - d - 1}\cdot \abs{z_{ij{\boldmath\cdot}\text{rest}}})],
\end{equation}
where $\Phi$ is the CDF of the standard normal distribution and $z_{ij{\boldmath\cdot}\text{rest}} = \arctanh(r_{ij{\boldmath\cdot}\text{rest}})$ is Fisher's $z$-transform of the sample partial correlation $r_{ij{\boldmath\cdot}\text{rest}}$ estimating the population partial correlation $\rho_{i j{\boldmath\cdot}\text{rest}}$. To account for multiple comparisons, a $p$-value correction is needed to achieve the desired FWER. One of the multiple-testing correction methods given in \cite{Drton-Perlman} controls the FWER based on Holm's approach for $p$-value adjustment:
\begin{align}
\pi_{a\uparrow}^{\text{Holm}} = \max_{b=1,...,a}\left[\min\left\{\left(\binom{d}{2} - b + 1\right)\pi_{b\uparrow}, 1 \right\}\right]\text{, for } 1 \le a \le \binom{d}{2},
\end{align}
where $\pi_{1\uparrow} \le \pi_{2\uparrow} \le \cdots \le \pi_{d(d-1)/2\uparrow}$ are the ordered $p$-values from \eqref{unadjusted-p-value}. This approach will be referred to as the Holm-corrected testing method for graphical model selection in our numerical experiments. Other multiple-testing correction approaches discussed in \cite{Drton-Perlman} include the Bonferroni and Šidák adjustments. For the remainder of our work, we compare RobSel with the Holm-corrected testing method for its simplicity (compared to the Šidák-based approach) and its better power characteristic (compared to the Bonferroni-based approach). We emphasize that the distinct advantage of graphical lasso is that it can perform model selection and parameter estimation of $\Omega$ simultaneously, whereas any testing-based approach can only identify the zero/non-zero locations of $\Omega$.
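The testing pipeline of this section, \eqref{unadjusted-p-value} followed by Holm's step-down adjustment, can be sketched in a few lines of Python (hypothetical helper names, not the authors' code):

```python
from math import sqrt, atanh, erf

def partial_corr_pvalue(r, n, d):
    """Two-sided p-value for H0: rho_{ij.rest} = 0 via Fisher's
    z-transform, as in the displayed unadjusted p-value formula."""
    z = atanh(r)
    std_normal_cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return 2.0 * (1.0 - std_normal_cdf(sqrt(n - d - 1) * abs(z)))

def holm_adjust(pvals):
    """Holm step-down adjusted p-values:
    pi_a = max_{b <= a} min((m - b + 1) * p_(b), 1)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running = 0.0
    for rank, i in enumerate(order):            # rank = b - 1
        running = max(running, min((m - rank) * pvals[i], 1.0))
        adjusted[i] = running
    return adjusted

p_null = partial_corr_pvalue(0.0, n=100, d=10)   # r = 0 gives p = 1
p_strong = partial_corr_pvalue(0.5, n=100, d=10)
adj = holm_adjust([0.01, 0.30, 0.02])            # m = C(3,2) = 3 tests
# reject edges whose adjusted p-value is below the FWER target alpha
```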
\section{Numerical simulations and real data analysis}\label{sec3}
In this section, analyses of simulated and real data illustrate the usefulness of RobSel's asymptotic FWER property in finite sample and compare to the Holm-based multiple testing approach for Gaussian graphical model selection. Furthermore, RobSel is used to analyze real datasets from genomics.
To carry out our numerical experiments, we used the R packages \texttt{CVglasso} for cross-validation, \texttt{qgraph} for the extended Bayesian information criterion, and \texttt{robsel} for Robust Selection. These packages are available from CRAN, and they use the package \texttt{glasso} to estimate the sparse inverse covariance matrix. The Robust Selection algorithm is also available as a Python package, \texttt{robust-selection}, at \url{https://pypi.org/project/robust-selection/}. Both the Python and R packages are also available at \url{https://github.com/dddlab/robust-selection}, and the code to reproduce the numerical results is available at \url{https://github.com/cbtran/robsel-reproducible}.
\subsection{Simulation experiments}\label{sec:setting}
\begin{figure}[!ht]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/new_FWER.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\textwidth]{images/new_TPR.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{images/new_FPR.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{images/new_MCC.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.33\textwidth}
\centering
\includegraphics[width=\linewidth]{images/new_jaccard.pdf}
\end{subfigure}
\caption{
Observed family-wise error rate (top-left), true positive rate (top-middle), false positive rate (top-right), Matthews correlation coefficient (bottom-left), and Jaccard index of similarity (bottom-right), evaluated from graphs estimated with RobSel-tuned graphical lasso and with the Holm-based multiple-testing method. Note that the Holm-based method is not applicable when $n\leq d=100$. All traces represent average quantities over 200 datasets.
}
\label{fig:FWER_GLasso}
\end{figure}
In applications, the finite-sample behavior of the FWER characteristic, whose asymptotic properties are given in Theorem \ref{thm-FWER}, is of practical interest. In this section, simulation studies verify the FWER of graph reconstruction when using RobSel with graphical lasso. The FWER of the testing-based graphical model selection from \cite{Drton-Perlman} is given as a comparison. (Readers are referred to \cite{cisneros20a} for a comparison to the cross-validation procedure.)
The true precision matrix $\Omega\in\subscr{\mathbb{S}}{d}^{\operatorname{++}}$ used to generate the simulated data was constructed as follows. First, generate the adjacency matrix of an undirected Erd\H{o}s--R\'enyi graph with equal edge probability $0.02$, discarding any self-loops. Then, the weight of each edge (the magnitude of the non-zero element) is sampled uniformly from $[0.5, 1]$, and the sign of each non-zero element is set to be positive or negative with equal probability $0.5$. The resulting matrix is made diagonally dominant following the procedure described in~\citep{Peng2009}, which ensures that the resulting matrix $\Omega$ is positive definite with ones on the diagonal. Finally, the diagonal entries of $\Omega$ are resampled uniformly from $[1, 1.5]$. Throughout this numerical study, one randomly generated instance of the sparse matrix $\Omega$ with $d=100$ variables is fixed. Using this $\Omega$, a total of $N=200$ datasets for each sample size $n\in\{50,100,200,400,800,1600,3200\}$ were generated independently from a zero-mean multivariate Gaussian distribution, i.e., $\mathcal{N}(\vect{0}_d,\Omega^{-1})$.
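The construction above can be sketched as follows. This is a minimal illustration of the simulation design, not the authors' exact code; in particular, the diagonal-dominance step of \citet{Peng2009} is replaced here by a simple row-sum rule with the same effect (positive definiteness):

```python
import numpy as np

def make_precision(d=100, p=0.02, rng=None):
    # Erdos-Renyi support with edge probability p, no self-loops.
    rng = np.random.default_rng(rng)
    A = np.triu(rng.random((d, d)) < p, k=1)
    # Uniform magnitudes in [0.5, 1] with random signs.
    W = rng.uniform(0.5, 1.0, (d, d)) * np.where(rng.random((d, d)) < 0.5, 1.0, -1.0)
    Omega = A * W
    Omega = Omega + Omega.T                       # symmetrize
    # Strict diagonal dominance implies positive definiteness.
    row = np.abs(Omega).sum(axis=1)
    np.fill_diagonal(Omega, row + 0.1)
    # Rescale to unit diagonal (congruence preserves definiteness) ...
    Dinv = 1.0 / np.sqrt(np.diag(Omega))
    Omega = Omega * np.outer(Dinv, Dinv)
    # ... then resample the diagonal uniformly in [1, 1.5].
    np.fill_diagonal(Omega, rng.uniform(1.0, 1.5, d))
    return Omega

Omega = make_precision(d=30, p=0.05, rng=0)
# Datasets are then drawn as N(0, inv(Omega)), e.g. via
# rng.multivariate_normal(np.zeros(d), np.linalg.inv(Omega), size=n).
```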
To evaluate the selected models, the family-wise error rate (FWER), true positive rate (TPR), false positive rate (FPR), Matthews correlation coefficient (MCC), and Jaccard index were used as performance metrics. These metrics are derived from the elements of the confusion matrix, true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), where a positive indicates an estimated presence of an edge (a pair of symmetric non-zero off-diagonal entries in $\Omega$). In this setting, the family-wise error rate is the probability of any false edge detection, $\mathrm{FWER} = \Pr(FP > 0)$, estimated empirically by averaging the indicator $\mathbf{1}(FP > 0)$ over the simulated datasets.
The true positive rate is the proportion of edges in the true graph $G$ that are correctly identified in the estimated graph: $TPR=\frac{TP}{TP+FN}$. The false positive rate is the proportion of non-edges in the true graph $G$ that are incorrectly identified as edges in the estimated graph: $FPR=\frac{FP}{FP+TN}$. The Matthews correlation coefficient summarizes all counts in the confusion matrix to measure the quality of graph recovery: $MCC = \frac{TP\cdot TN - FP \cdot FN}{\sqrt{(TP + FP)(TP+FN)(TN+FP)(TN+FN)}}$.
The Jaccard index measures the similarity between two edge sets $E_A$ and $E_B$: $J(E_A,E_B)=\frac{|E_A \cap E_B|}{|E_A \cup E_B|}$; by convention, the Jaccard index of two empty sets is defined to be one, i.e., $J(\emptyset,\emptyset) = 1$.
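For reference, these metrics can be computed directly from the confusion-matrix counts. The sketch below follows the definitions above, using the standard square-rooted MCC denominator, and notes that for edge sets the Jaccard index reduces to $TP/(TP+FP+FN)$:

```python
import math

def metrics(tp, tn, fp, fn):
    # TPR, FPR, MCC, and Jaccard index from confusion-matrix counts.
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom
    # For edge sets, |A ∩ B| = TP and |A ∪ B| = TP + FP + FN.
    jac = 1.0 if (tp + fp + fn) == 0 else tp / (tp + fp + fn)
    return tpr, fpr, mcc, jac

tpr, fpr, mcc, jac = metrics(tp=2, tn=6, fp=1, fn=1)
```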
Figure \ref{fig:FWER_GLasso} shows the FWER, TPR, FPR, MCC, and Jaccard index of the graphs estimated by both Holm's multiple testing method and the graphical lasso with the RobSel criterion. TPR increases with sample size; at each sample size the two methods have similar TPR, but RobSel appears to be more conservative at small significance levels, since it tends to have smaller TPR and FWER. For larger $\alpha$, RobSel is less conservative, with higher TPR, while its FWER is still bounded by $\alpha$. Figure \ref{fig:FWER_GLasso} also shows the average Jaccard index over 200 simulations at five different sample sizes and ten different levels $\alpha$. The Jaccard index increases with sample size, indicating that the graphs estimated by RobSel and by the Holm-based multiple testing method become increasingly similar.
Figure \ref{fig:estimated_graph} illustrates a striking similarity between the graphical lasso tuned with RobSel and the testing-based graph for large $n$. Most edges appear in both graphs, and neither graph contains any false positive edge, owing to the stringent significance level. On the other hand, the graphical lasso tuned with cross-validation has many false positive edges. These qualitative observations were typical in our numerical simulations when data were generated from multivariate normal distributions, across the wide range of sample sizes we considered.
\begin{figure}[ht!]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/true_graph.pdf}
\caption{Ground Truth}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/cv_graph.pdf}
\caption{Graphical lasso tuned with cross-validation}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/robsel_graph.pdf}
\caption{Graphical lasso tuned with RobSel ($\alpha=0.05$)}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{images/testing_graph.pdf}
\caption{Holm-based multiple testing ($\alpha=0.05$)}
\end{subfigure}
\caption{True graph and three estimated graphs from a dataset with $n=3200$. Red edges denote false positive edges.}
\label{fig:estimated_graph}
\end{figure}
\subsection{Application to gene regulatory network reconstruction}\label{sec:gene}
Here, we infer gene regulatory networks from real datasets provided for the DREAM5 transcriptional network inference challenge \citep{Marbach2012}. We reconstructed the networks of interactions among transcription factors (TFs). TF-encoding genes usually act as hub genes with large numbers of interactions with other genes \citep{JMLR:v15:tan14b}. Thus, identifying interactions between TFs may help researchers better understand the relationships between different groups of genes. The \emph{in silico} dataset contains $d=195$ transcription factors on $n=805$ arrays. The \emph{Escherichia coli} (E. coli) dataset contains $d=334$ transcription factors on $n=805$ arrays. The \emph{Saccharomyces cerevisiae} (S. cerevisiae) dataset contains $d=333$ transcription factors on $n=536$ arrays. To evaluate the inferred networks, we validated the edges in the estimated graphical models against the experimentally validated interactions given in \cite{Marbach2012}.
Graphical models were constructed using the graphical lasso tuned with three different regularization parameter selection approaches, as well as using the Holm-corrected testing method described in Section \ref{sec2}. The regularization parameter tuning approaches we considered were as follows. The first is \emph{Robust Selection (RobSel)}, with $B=200$ sets of bootstrap samples. The second is a \emph{$5$-fold cross-validation (CV)} procedure, where the performance on the validation set is the evaluation of the graphical loss function under the empirical measure of the samples in the training set. The third is the extended Bayesian information criterion (EBIC) proposed in~\citep{RFBarber2010}. CV and EBIC are evaluated on the same grid of $\lambda$ values, namely ten logarithmically spaced values in the interval $(0.05s_{\max},s_{\max}]$, with $s_{\max}$ the smallest regularization value that gives an empty graph, i.e., the smallest $\lambda$ for which the graphical lasso returns a diagonal estimate $\hat{\Omega}$. Note that increasing the number of $\lambda$ values on the grid increases the computational time.
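The $\lambda$ grid just described can be generated in one line; this is a sketch of the stated design (note that \texttt{logspace} includes the lower endpoint $0.05\,s_{\max}$ itself, whereas the interval in the text is open on the left):

```python
import numpy as np

def lambda_grid(s_max, num=10, frac=0.05):
    # num logarithmically spaced values from frac * s_max up to s_max.
    return np.logspace(np.log10(frac * s_max), np.log10(s_max), num)

grid = lambda_grid(2.0)   # hypothetical s_max = 2.0
```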
Because the DRO framework minimizes a worst-case expected loss, specifying a small error tolerance $\alpha$ for RobSel often results in a graph with very few estimated edges, especially when analyzing a real dataset. In practice, a larger $\alpha$ might be beneficial in order to estimate graphs with more edges. Note, however, that setting the $\lambda$ corresponding to a large $\alpha$ in the graphical lasso would still return a very sparse graph. In our analyses, RobSel was specified with $\alpha=0.9$, EBIC with parameter $\gamma = 0.5$, and cross-validation with $5$ folds. The EBIC criterion has the following form:
\begin{align}
\text{EBIC}_\gamma(E) = -2\mathcal{L}(\hat{\Omega}(E)) + \abs{E}\log n + 4\gamma\abs{E}\log d,
\end{align}
where $E$ is the edge set of a candidate graph implied by $\hat{\Omega}$, and $\mathcal{L}(\hat{\Omega}(E))$ denotes the maximized log-likelihood function of the associated model.
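The criterion is a direct formula in the maximized log-likelihood and the edge count; a minimal sketch (with illustrative, not real, input values):

```python
import math

def ebic(loglik, n_edges, n, d, gamma=0.5):
    # Extended BIC for a graph with |E| = n_edges, n samples, d variables.
    return -2.0 * loglik + n_edges * math.log(n) + 4.0 * gamma * n_edges * math.log(d)

val = ebic(loglik=-100.0, n_edges=10, n=200, d=50, gamma=0.5)
```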
Table \ref{table:dream} shows the number of edges in each estimated graph, the number of validated edges \citep[interactions found in][]{Marbach2012}, the ratio of validated edge counts to total edge counts, and the wall-clock times. In our results, an estimated edge (i.e., gene interaction) is a true positive if it is an experimentally validated interaction in the database of \cite{Marbach2012}. For all three datasets, RobSel appears to be faster than EBIC and CV, with similar discovery ratios. Between the E. coli and S. cerevisiae datasets, the computational time for RobSel decreases as the sample size decreases, whereas the computational times of both EBIC and CV increase. Even though we used RobSel with $\alpha=0.9$ to obtain a denser graph, the graphs estimated by RobSel are still much sparser than those of EBIC and CV.
\begin{table}[ht!]
\centering
\begin{tabular}{ |c|c|c|c|c|c| }
\hline
Dataset & Method & Estimated edges & Validated edges & Validated Edge Proportion & Time(s) \\
\hline
\multirow{4}{7em}{In silico} & Holm & 289 & 63 & {0.2184} & {0.088} \\
& RobSel & 693 & 89 & 0.1284 & 0.467 \\
& EBIC & 1237 & 108 & 0.0873& 1.566 \\
& CV & 7241 & {168} & 0.0232& 8.611 \\
\hline
\multirow{4}{7em}{E. coli} & Holm & 269 & 14 & {0.0520} & {0.166} \\
& RobSel & 3479 & 22 & 0.0063 & 3.355 \\
& EBIC & 6599 & 37 & 0.0056& 10.46 \\
& CV & 10770 & {43} & 0.0040& 52.92 \\
\hline
\multirow{4}{7em}{S. cerevisiae} & Holm & 56 & 3 & {0.0536} & {0.149} \\
& RobSel & 4259 & 46 & 0.0108 & 2.728 \\
& EBIC & 7731 & 70 & 0.0091& 17.80 \\
& CV & 11367 & {93} & 0.0082& 85.64 \\
\hline
\end{tabular}
\caption{Graph recovery results and computational times in seconds on the DREAM5 datasets for four methods: Holm's testing procedure with $\alpha = 0.9$, RobSel with $\alpha = 0.9$, extended BIC (EBIC) with $\gamma = 0.5$, and 5-fold cross-validation (CV).}
\label{table:dream}
\end{table}
\section{Discussion}\label{discussion}
We made a theoretical connection between the significance level $\alpha$ of RobSel and the family-wise error rate of estimating any false positive edge when RobSel is used to tune the graphical lasso. Furthermore, the asymptotic FWER control property was tested in finite samples using simulation experiments. The similarity between the Holm-testing solutions and the RobSel-tuned graphical lasso solutions at the same significance level $\alpha$ gives users practical insight into the behavior of the graphical lasso: its regularization can be chosen according to a user-specified FWER level.
% --- End of "Family-wise error rate control in Gaussian graphical model selection via Distributionally Robust Optimization" (arXiv:2201.12441) ---
% --- Begin "Approximate Vanishing Ideal via Data Knotting" (arXiv:1801.09367) ---
\section{Introduction}
Bridging computer algebra and various applications such as machine learning, computer vision, and systems biology has been attracting interest over the past decade~\cite{torrente2009application,laubenbacher2009computer,li2011theory,livni2013vanishing,vera2014algebra,gao2016nonlinear}.
Borrowed from computer algebra, the vanishing ideal concept has
recently provided new perspectives and has proven effective
in various fields. In particular, vanishing-ideal-based approaches can discover the nonlinear structure of given data~\cite{laubenbacher2004computational,livni2013vanishing,hou2016discriminative,kera2016noise,kera2016vanishing}. The vanishing ideal is defined as the set of polynomials that always take the value zero, \emph{i.e.}, vanish, on a set of given data points:
\begin{align}
\mathcal{I}(X) & =\left\{ g\in\mathcal{P}\mid g(\mathbf{x})=0,\forall\mathbf{x}\in X\right\},\label{eq:def-vanishing-ideal}
\end{align}
where $X$ is a set of $d$-dimensional data points, and $\mathcal{P}$
is a set of $d$-variate polynomials. A vanishing ideal can be spanned by a finite set of vanishing polynomials~\cite{cox1992ideals}. This basis of the vanishing ideal can be viewed as a system to be satisfied by the input data points; as such, it holds the nonlinear structure of the data. Exploiting these properties, \citeauthor{livni2013vanishing}~(\citeyear{livni2013vanishing}) proposed Vanishing Component Analysis (VCA), which extracts the compact nonlinear
features of the data to be classified, and enables training by a simple
linear classifier. The vanishing ideal has also been used to identify underlying dynamics
from limited observations~\cite{torrente2009application,robbiano2010approximate,kera2016noise}. In these applications, the monomials constituting the models are inferred from the vanishing ideal of the observations.
\begin{figure}
\includegraphics[width=\columnwidth]{teaser.eps}\caption{(Left panel)
Polynomials that approximately vanish on the data points (red dots) in VCA. (Right panel) Vanishing polynomials and data knots (blue circles) obtained by the proposed method; the polynomials almost exactly pass through the data knots.\label{fig:teaser}}
\end{figure}
As the available data in many applications are exposed to noise, it has been common to compute polynomials that approximately rather than exactly vanish on the data to avoid the overfitting problem~\cite{heldt2009approximate,fassino2010almost,livni2013vanishing,limbeck2014computation}.
However, this approximation destroys the sound algebraic structure of
the vanishing ideal. For instance, as shown in the left panel of Fig.~\ref{fig:teaser},
a set of approximately vanishing polynomials is no longer an algebraic system because it possesses no common roots. In other words, there is a tradeoff between preserving the algebraic soundness of the vanishing ideal and avoiding overfitting
to noise. Such a tradeoff has not been explicitly considered
in existing work.
In the present paper, we deal with this tradeoff by addressing a vanishing ideal that is tolerant to noisy data while striving to preserve a sound algebraic structure. Specifically, we introduce a new task that jointly discovers a
set of polynomials and \textit{summarized} data points (called data
knots) from the input data. As shown in the right panel of Fig.~\ref{fig:teaser}, the polynomials in this task avoid overfitting because they approximately vanish on the original data, and they preserve the algebraic structure of a vanishing ideal better than those of VCA because they intersect (and vanish) almost exactly at the data knots.
To our knowledge, we present the first
computation of a vanishing ideal that handles both the given fixed points and the jointly
updated points. As the noise level increases, the vanishing ideal of the given fixed data
performs less accurately because it
requires a coarser approximation. If the approximation remains fine, the computed vanishing polynomials
(especially the higher-degree polynomials) will overfit to noise. To circumvent this problem, the proposed method conjointly computes the
vanishing polynomials and the data knots. Assuming that lower-degree polynomials
are less overfitted to noise and better preserve the data structure, our method generates
the vanishing polynomials from lower degree to higher degree while updating
the data knots at each degree. Consequently, overfitting by the higher-degree polynomials
is avoided. The data knots are updated by a nonlinear regularization,
which can be regarded as a generalization
of the Mahalanobis distance, which is commonly used in the metric learning~\cite{bellet2013survey}.
In a theoretical analysis, we also guarantee that the proposed algorithm
terminates and that in the extreme case, the
generated polynomials exactly (rather than almost) vanish on the data knots.
Experiments confirmed that our methods generate fewer and lower-degree polynomials
than an existing vanishing ideal-based approach. In different classification
tasks, our methods obtained a much more compact nonlinear feature than
VCA, reducing the test runtime. To verify that the obtained data
knots well represent the original data, we trained a $k$-nearest neighbor classifier.
Although the data knots are far fewer than the original points,
the classification accuracy of the nearest neighbor classifier was comparable to that of the baseline methods in most cases.
\section{Related Work}
Although our work addresses a new problem, several works are
closely related to the present study.
\citeauthor{fassino2010almost}~(\citeyear{fassino2010almost}) proposed a Numerical Buchberger--M\"oller (NBM)
algorithm that computes approximately vanishing polynomials for the
vanishing ideal of input data $X$. NBM
requires that each polynomial exactly vanishes on a set of nearby data points $X^{\varepsilon}$ (called an admissible perturbation) from $X$ up to a hyperparameter $\varepsilon$.
However, because the admissible perturbations can differ among the approximately vanishing polynomials,
NBM does not generally output vanishing polynomials that vanish on the same points. In addition, NBM does not specify the admissible perturbations, but only checks the sufficient condition of the existence of such points.
\citeauthor{Abbott2007}~(\citeyear{Abbott2007}) addressed a new task of \emph{thinning
out} the data points as a preprocessing for computing vanishing ideals
afterward. Similar to clustering, their
approach computes the empirical centroids of the input data. As thinning
out and the vanishing-ideal computation are performed independently, the
empirical centroids do not necessarily result in compact, lower-degree
vanishing polynomials. In contrast, our method conjointly performs the thinning out and vanishing-ideal computation in a single framework. Note that both methods by \citeauthor{Abbott2007} and by us aim to reduce data points by summarizing nearby points, which are different from the clustering tasks that aim to group even distant points according to their categories.
\section{Preliminaries}
\begin{defn}
(Vanishing Ideal) Given a set of $d$-dimensional data points $X$, the vanishing ideal of $X$ is a set of $d$-variate polynomials that take zero values, (\emph{i.e.}, vanish) at all points in $X$. Formally,
the vanishing ideal is defined as Eq.~(\ref{eq:def-vanishing-ideal}).%
\end{defn}
\begin{defn}
(Evaluation vector, Evaluation matrix) Given a set of data points
$X=\left\{ \mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}\right\} $
with $N$ samples, the evaluation vector of polynomial $f$ is defined as
\begin{align*}
f(X) & =\begin{pmatrix}f(\mathbf{x}_{1}) & f(\mathbf{x}_{2}) & \cdots & f(\mathbf{x}_{N})\end{pmatrix}^{\top}\in\mathbb{R}^{N}.
\end{align*}
Given a set of polynomials $F=\left\{ f_{1},f_{2},...,f_{s}\right\} $,
its evaluation matrix is $F(X)=(f_{1}(X)\ f_{2}(X)\ \cdots\ f_{s}(X))\in\mathbb{R}^{N\times s}$.
\end{defn}
\begin{defn}
($\varepsilon$-vanishing polynomial) Given a set of data points $X$,
a polynomial $g$ is called an $\varepsilon$-vanishing polynomial
on $X$ when the norm of its evaluation vector on $X$ is less than or
equal to $\varepsilon$, \emph{i.e.}, $\|g(X)\|\le\varepsilon$, where $\|\cdot\|$
denotes the Euclidean norm of a vector. Otherwise, $g$ is called an
$\varepsilon$-nonvanishing polynomial.
\end{defn}
In the vanishing ideal concept, polynomials are identified with their evaluation on the points; that is, a polynomial
$f$ is mapped to an $N$-dimensional vector $f(X)$. Two polynomials $f$ and $\tilde{f}$ are considered to be equivalent if they have the same evaluation vector, \emph{i.e.}, $f(X)=\tilde{f}(X)$. When $f(X)=\mathbf{0}$, $f$ is a vanishing polynomial on $X$.
Another important relation between polynomials and their evaluations is that a linear combination of evaluation vectors equals the evaluation of the corresponding linear combination of the polynomials. More specifically, the following relation holds for a set of polynomials $G=\left\{ g_{1},...,g_{s}\right\} $ and coefficient vectors $\mathbf{c}\in\mathbb{R}^{s}$:
\begin{align*}
G(X)\mathbf{c} & =(G\mathbf{c})(X),
\end{align*}
where $G\mathbf{c}=\sum_{i=1}^{s}c_{i}g_{i}$ defines the inner product between a set $G$ and a coefficient
vector $\mathbf{c}$, with
$c_{i}$ being the $i$-th entry of $\mathbf{c}$. Similarly,
$G\mathbf{C}=\left\{ G\mathbf{c}_{1},...,G\mathbf{c}_{k}\right\} $ defines the multiplication of $G$ by a matrix $\mathbf{C}=\left(\mathbf{c}_{1},\cdots,\mathbf{c}_{k}\right)$.
These special inner products will be used hereafter.
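The identity $G(X)\mathbf{c}=(G\mathbf{c})(X)$ can be checked numerically. The sketch below uses two hypothetical bivariate polynomials and three hypothetical data points of our own choosing:

```python
import numpy as np

def eval_matrix(polys, X):
    # Columns are the evaluation vectors f_i(X).
    return np.column_stack([np.array([f(x) for x in X]) for f in polys])

g1 = lambda x: x[0]**2 - x[1]          # example polynomials (ours)
g2 = lambda x: x[0] * x[1] + 1.0
G = [g1, g2]
X = [np.array([0.5, 1.0]), np.array([-1.0, 2.0]), np.array([2.0, 0.0])]
c = np.array([2.0, -1.0])

lhs = eval_matrix(G, X) @ c                       # G(X) c
combo = lambda x: c[0] * g1(x) + c[1] * g2(x)     # the polynomial G c
rhs = np.array([combo(x) for x in X])             # (G c)(X)
```

Evaluating a linear combination of polynomials thus amounts to a matrix-vector product with the evaluation matrix, which is what the SVD-based constructions in the next section exploit.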
\section{Proposed Method}
Given a set of data points $X_{0}$, we seek a set of polynomials
$G$ and data knots $Z$ such that the polynomials in $G$ approximately
vanish on $X_{0}$ and almost exactly vanish on $Z$. Specifically,
\begin{align}
\forall g\in G, & \begin{Vmatrix}g(X_{0})\end{Vmatrix}\le\varepsilon,\begin{Vmatrix}g(Z)\end{Vmatrix}\le\delta,\label{eq:approx-exact-vanishing-polynomials}
\end{align}
where $\varepsilon,\delta$ are error-tolerance hyperparameters requiring $g$ to be $\varepsilon$-vanishing on $X_{0}$
and $\delta$-vanishing on $Z$. In our case, $\delta$ is set much smaller than $\varepsilon$.
Like many other methods that construct a vanishing ideal basis, our polynomial
construction starts from a degree-$0$
polynomial (\emph{i.e.}, constant), and increments the polynomial degree in successive data fittings. For each degree $t$, the proposed method
iterates the following two steps:
\begin{enumerate}
\item Compute a set of approximately vanishing polynomials $G_{t}$ and
a set of nonvanishing polynomials $F_{t}$.
\item Update the data knots such that the approximately vanishing polynomials
in $G^{t}=\cup_{i=1}^{t}G_{i}$ more closely vanish.
\end{enumerate}
These steps constitute our technical contributions to the field. In the former step, we newly introduce a vanishing polynomial construction that handles two sets of points $X_{0},Z$ with different approximation scales $\varepsilon,\delta$.
In the latter step, we introduce a nonlinear regularization term that preserves the nonlinear structure of data while
updating $Z$.
In the following subsections, we first describe the polynomial construction part (Step~1; Section~\ref{subsec:Non-vanishing-and-vanishing})
and the data-knotting part (Step~2; Section~\ref{subsec:Data-knotting}).
To improve the quality of the vanishing polynomials and data knots, we propose an iterative update framework in Section~\ref{subsec:Exact-Vanish-Pursuit}. Finally, we present our whole algorithm in Section~\ref{subsec:Algorithm}.
\subsection{Vanishing polynomial construction\label{subsec:Non-vanishing-and-vanishing}}
\begin{algorithm}[t]
\caption{FindBasis\label{alg:FindBasis}}
\begin{algorithmic}[1]
\Require{
$
\tilde{C}_t, F^{t-1}, Z, X_0, \varepsilon, \delta$
}
\Ensure{$G_t, F_t$}
\State{\# Compute residual space}
\State{$C_t = \tilde{C}_t - F^{t-1}F^{t-1}(Z)^{\dagger}\tilde{C}_t(Z)$}
\State{}
\State{\# SVD for evaluation matrix on $X_0$}
\State{$C_t(X_0) = \mathbf{U}_0\mathbf{D}_0\mathbf{V}_0^{\top} = \mathbf{U}_0\mathbf{D}_0[\tilde{\mathbf{V}}_0\ \mathbf{V}_0^{\varepsilon}]^{\top}$}
\State{}
\State{\# SVD for evaluation matrix on $Z$}
\State{$C_t(Z)\mathbf{V}_0^{\varepsilon} = \mathbf{U}\mathbf{D}\mathbf{V}^{\top}= \mathbf{U}\mathbf{D}[\tilde{\mathbf{V}}\ \mathbf{V}^{\delta}]^{\top}$}
\State{$C_t(Z)\tilde{\mathbf{V}}_0 = \hat{\mathbf{U}}\mathbf{E}\mathbf{W}^{\top} = \hat{\mathbf{U}}\mathbf{E}[\tilde{\mathbf{W}}\ \mathbf{W}^{\delta}]^{\top}$}
\State{}
\State{$G_t = C_{t}\mathbf{V}_{0}^{\varepsilon}\mathbf{V}^{\delta}$}
\State{$F_t=C_{t}\tilde{\mathbf{V}}_{0}\tilde{\mathbf{W}}\cup C_{t}\mathbf{V}_{0}^{\varepsilon}\tilde{\mathbf{V}}$}
\State{\# Divide each polynomial in $F_t$ by the norm of its evaluation vector on $Z$ }
\State{\Return $G_t, F_t$}
\end{algorithmic}
\end{algorithm}
Given a set of data points $X_{0}$ and tentative data knots $Z$,
we here construct two polynomial sets $G_{t}$ and
$F_{t}$ of degree $t\ge 1$. $G_{t}$ is the set of degree-$t$
polynomials that are $\varepsilon$-vanishing on $X_{0}$ and $\delta$-vanishing
on $Z$. $F_{t}$ is a set of polynomials that are not $\delta$-vanishing on $Z$. Note that at this moment, we have $G^{t-1}=\cup_{i=0}^{t-1}G_{i}$
and $F^{t-1}=\cup_{i=0}^{t-1}F_{i}$.
To obtain $G_{t}$ and $F_{t}$, we first generate candidate degree-$t$ polynomials $C_{t}$ based on the VCA framework. Multiplying all possible combinations of polynomials between $F_{1}$ and $F_{t-1}$, we construct $\tilde{C}_{t} =\left\{ fg\mid f\in F_{1},g\in F_{t-1}\right\}$. Then, $C_{t}$ is generated as the residual polynomials of $\tilde{C}_{t}$ with respect to $F^{t-1}$ and their evaluation vectors.
\begin{align*}
C_{t} & =\tilde{C}_{t}-F^{t-1}\left(F^{t-1}(Z)^{\dagger}\tilde{C}_{t}(Z)\right),
\end{align*}
where $A^{\dagger}$ denotes the pseudo-inverse matrix of $A$.
Note that the second term is the inner product between $F^{t-1}$
and $F^{t-1}(Z)^{\dagger}\tilde{C}_{t}(Z)$, and not the evaluation
of $F^{t-1}$ on $F^{t-1}(Z)^{\dagger}\tilde{C}_{t}(Z)$. This step calculates the residual
column space of $\tilde{C}_{t}(Z)$ that is orthogonal to that of
$F^{t-1}(Z)$.
\begin{align*}
C_{t}(Z) & =\tilde{C}_{t}(Z)-F^{t-1}(Z)\left(F^{t-1}(Z)^{\dagger}\tilde{C}_{t}(Z)\right).
\end{align*}
In short, when evaluated on the data knots $Z$, the column spaces of the residual polynomials $C_{t}$ and the above polynomials are orthogonal.
After generating $C_{t}$, degree-$t$ nonvanishing polynomials $F_{t}$
and vanishing polynomials $G_{t}$ are constructed by applying singular value
decomposition~(SVD) to $C_{t}(X_{0})$:
\begin{align*}
C_{t}(X_{0}) & =\mathbf{U}_{0}\mathbf{D}_{0}\mathbf{V}_{0}^{\top},
\end{align*}
where $\mathbf{U}_{0}\in\mathbb{R}^{N\times N}$ and $\mathbf{V}_{0}\in\mathbb{R}^{|C_{t}|\times|C_{t}|}$
are orthogonal matrices, and $\mathbf{D}_{0}\in\mathbb{R}^{N\times|C_{t}|}$ contains the singular values only along its diagonal. Multiplying both sides by $\mathbf{V}_{0}$ and focusing on the $i$-th column, we obtain
\begin{align*}
C_{t}(X_{0})\mathbf{v}_{i} & =\sigma_{i}\mathbf{u}_{i},
\end{align*}
where $\mathbf{v}_{i}$ and $\mathbf{u}_{i}$ are the $i$-th columns of
$\mathbf{V}_{0}$ and $\mathbf{U}_{0}$ respectively, and $\sigma_{i}$ is
the $i$-th diagonal entry of $\mathbf{D}_{0}$. Moreover, when $\sigma_{i}\le\varepsilon$, we have
\begin{align*}
\|g_{i}(X_{0})\| & =\begin{Vmatrix}(C_{t}\mathbf{v}_{i})(X_{0})\end{Vmatrix}=\begin{Vmatrix}C_{t}(X_{0})\mathbf{v}_{i}\end{Vmatrix}=\|\sigma_{i}\mathbf{u}_{i}\|=\sigma_{i},
\end{align*}
meaning that polynomial $g_{i}:=C_{t}\mathbf{v}_{i}$
is an $\varepsilon$-vanishing polynomial on $X_{0}$.
We can regard $\mathbf{v}_{i}$ as a coefficient vector of the linear combination of
the polynomials in $C_{t}$. Let us denote $\mathbf{V}_{0}=(\tilde{\mathbf{V}}_{0}\ \mathbf{V}_{0}^{\varepsilon})$,
where $\tilde{\mathbf{V}}_{0}$ and $\mathbf{V}_{0}^{\varepsilon}$
are matrices corresponding to the singular values exceeding $\varepsilon$
and not exceeding $\varepsilon$, respectively. For any
unit vector $\mathbf{v}=\mathbf{V}_{0}^{\varepsilon}\mathbf{p}\in\text{span}(\mathbf{V}_{0}^{\varepsilon})$ expressed in terms of a unit vector $\mathbf{p}$, a polynomial $g=C_{t}\mathbf{v}$ satisfies
\begin{align*}
\|g(X_{0})\| & =\|C_{t}(X_{0})\mathbf{v}\|=\|C_{t}(X_{0})\mathbf{V}_{0}^{\varepsilon}\mathbf{p}\|\le \sigma_{\varepsilon_\text{max}}\|\mathbf{p}\| =\sigma_{\varepsilon_\text{max}},
\end{align*}
meaning that $g=C_{t}\mathbf{v}$ is an $\varepsilon$-vanishing polynomial, where $\sigma_{\varepsilon_{\text{max}}}$ is the largest singular value not exceeding $\varepsilon$.
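The thresholded SVD split used above can be sketched numerically: right singular vectors whose singular values are at most $\varepsilon$ give coefficient vectors of $\varepsilon$-vanishing combinations of the candidate polynomials. The function name and the toy evaluation matrix below are our own illustration, not the paper's implementation:

```python
import numpy as np

def split_by_threshold(C_X0, eps):
    # Split the right singular vectors of the evaluation matrix C_t(X_0)
    # into coefficients of eps-nonvanishing (s > eps) and eps-vanishing
    # (s <= eps) polynomial combinations.
    U, s, Vt = np.linalg.svd(C_X0, full_matrices=True)
    V = Vt.T
    r = len(s)
    keep = [i for i in range(r) if s[i] > eps]
    drop = [i for i in range(r) if s[i] <= eps]
    drop += list(range(r, V.shape[1]))   # exact-zero directions beyond rank
    return V[:, keep], V[:, drop]

C_X0 = np.ones((3, 2))                   # toy rank-1 evaluation matrix
V_keep, V_drop = split_by_threshold(C_X0, eps=0.1)
```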
Next, we construct $\delta$-vanishing polynomials
in the above polynomial space. This constraint problem is formulated as
follows:
\begin{align*}
\min_{(\hat{\mathbf{V}}^{\delta})^{\top}\hat{\mathbf{V}}^{\delta}=\mathbf{I}} & \|C_{t}(Z)\hat{\mathbf{V}}^{\delta}\|_{\text{F}}, & \text{s.t. }\text{span}(\hat{\mathbf{V}}^{\delta})\subset\text{span}(\mathbf{V}_{0}^{\varepsilon}),
\end{align*}
where $\|\cdot\|_{\text{F}}$ denotes the Frobenius norm of a matrix.
From the discussion above, it can be reformulated as,
\begin{align*}
\min_{\mathbf{P}^{\top}\mathbf{P}=\mathbf{I}} & \|C_{t}(Z)\mathbf{V}_{0}^{\varepsilon}\mathbf{P}\|_{\text{F}},
\end{align*}
since the column space of $\mathbf{V}_{0}^{\varepsilon}$ spans the coefficient vectors of $\varepsilon$-vanishing polynomials on $X_0$.
This problem can be simply solved by applying SVD to $C_{t}(Z)\mathbf{V}_{0}^{\varepsilon}$
and selecting the right singular vectors corresponding to the singular
values not exceeding $\delta$. Supposing that SVD yields $C_{t}(Z)\mathbf{V}_{0}^{\varepsilon}=\mathbf{U}\mathbf{D}\mathbf{V}^{\top}$,
we denote $\mathbf{V}=(\tilde{\mathbf{V}}\ \mathbf{V}^{\delta})$,
where $\tilde{\mathbf{V}}$ and $\mathbf{V}^{\delta}$ are matrices
corresponding to the singular values exceeding $\delta$
and not exceeding $\delta$, respectively. The polynomials
$G_{t}$ that are $\varepsilon$-vanishing on $X$ and $\delta$-vanishing
on $Z$ are then obtained as follows:
\begin{align*}
G_{t} & =C_{t}\mathbf{V}_{0}^{\varepsilon}\mathbf{V}^{\delta}.
\end{align*}
$F_{t}$ is constructed similarly, but consists of two
parts.
\begin{align*}
F_{t} & =C_{t}\tilde{\mathbf{V}}_{0}\tilde{\mathbf{W}}\cup C_{t}\mathbf{V}_{0}^{\varepsilon}\tilde{\mathbf{V}},
\end{align*}
where $\tilde{\mathbf{W}}$ is the counterpart of $\tilde{\mathbf{V}}$
for $C_{t}(Z)\tilde{\mathbf{V}}_{0}$. The left-part polynomials are
$\varepsilon$-nonvanishing on $X_{0}$ and $\delta$-nonvanishing
on $Z$, whereas the right-part polynomials are $\varepsilon$-vanishing on
$X_{0}$ but $\delta$-nonvanishing on $Z$. Following the VCA framework, the polynomials in $F_{t}$ are rescaled by the norms of their evaluation vectors on $Z$ to maintain numerical stability. The main generating procedures of
$G_{t}$ and $F_{t}$ are summarized in Alg.~\ref{alg:FindBasis}.
For simplicity, we omit the step that constructs $\tilde{C}_{t}$.
\subsection{Data knotting\label{subsec:Data-knotting}}
Given a set of points $X$ and a set of vanishing polynomials $G^{t}$ up to degree $t$, we seek new data points $Z$ for which the polynomials in $G^{t}$ more exactly vanish on $Z$ while preserving the nonlinear structure of $X$. In the present paper, we refer to these $Z$ as data knots and the searching process as data knotting (by analogy to ropes). As described later, we iteratively update the data knots, so we designate the current and updated data knots as $X$ and $Z$, respectively.
First, we intuitively illustrate the concept in a simple case. Let
$\mathbf{X}$ be the matrix whose $i$-th row corresponds to the
$i$-th point of $X$. Applying SVD, we have
\begin{align}
\mathbf{X} & =\mathbf{UDV}^{\top}=\tilde{\mathbf{U}}\tilde{\mathbf{D}}\tilde{\mathbf{V}}^{\top}+\mathbf{U}_{\varepsilon}\mathbf{D}_{\varepsilon}\mathbf{V}_{\varepsilon}^{\top},\label{eq:linear-data-knotting}
\end{align}
where $\mathbf{U}, \mathbf{V}$ are orthonormal matrices and $\mathbf{D}$ is the diagonal matrix of singular values.
The first and second terms capture the principal and minor variances of the data, corresponding to the singular values exceeding $\varepsilon$ and the rest, respectively. Note that when $\mathbf{X}$ is mean-centralized,
$C_{1}(X)=\mathbf{X}$. We seek new data points
for which the minor variance ideally vanishes to a zero matrix while the principal
variance is preserved. In this linear case, the new points are simply defined as $\mathbf{Z}=\tilde{\mathbf{U}}\tilde{\mathbf{D}}\tilde{\mathbf{V}}^{\top}$,
where $\mathbf{Z}$, the matrix of $Z$, is defined analogously to $\mathbf{X}$.
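In code, this linear truncation is just a thresholded SVD. The sketch below (our notation) assumes the input matrix is already mean-centralized:

```python
import numpy as np

def linear_data_knots(Xc, eps):
    """Remove the minor variance of a mean-centralized data matrix Xc,
    keeping only the singular values exceeding eps (the linear case t = 1)."""
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    keep = s > eps
    return (U[:, keep] * s[keep]) @ Vt[keep]  # U~ D~ V~^T
```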
However, discovering $Z$ in nonlinear cases is not straightforward. In the degree-$t$ case, Eq.~(\ref{eq:linear-data-knotting}) becomes
\begin{align}
C_{t}(X) & =\tilde{\mathbf{U}}\tilde{\mathbf{D}}\tilde{\mathbf{V}}^{\top}+\mathbf{U}_{\varepsilon}\mathbf{D}_{\varepsilon}\mathbf{V}_{\varepsilon}^{\top}.\label{eq:svd-nonlinear}
\end{align}
As in the linear case, we seek $Z$ such that $C_{t}(Z)=\tilde{\mathbf{U}}\tilde{\mathbf{D}}\tilde{\mathbf{V}}^{\top}$. To this end, we address the following minimization problem:
\begin{align}
\min_{Z}\ & \begin{Vmatrix}C_{t}(Z)-\tilde{\mathbf{U}}\tilde{\mathbf{D}}\tilde{\mathbf{V}}^{\top}\end{Vmatrix}_{\text{F}}.\label{eq:original-problem}
\end{align}
Multiplying Eq.~(\ref{eq:svd-nonlinear}) by $\tilde{\mathbf{V}}$ and $\mathbf{V}_{\varepsilon}$,
respectively, we obtain
\begin{align*}
F_{t}(X) & =F_{t}(Z),\\
G_{t}(Z) & =\mathbf{O}.
\end{align*}
In the derivation, we used $\tilde{\mathbf{V}}^{\top}\tilde{\mathbf{V}}=\mathbf{I}$,
$\mathbf{V}_{\varepsilon}^{\top}\mathbf{V}_{\varepsilon}=\mathbf{I}$,
and $\tilde{\mathbf{V}}^{\top}\mathbf{V}_{\varepsilon}=\mathbf{O}$,
where $\mathbf{I}$ is an identity matrix and $\mathbf{O}$ is a zero
matrix. From these relations, Eq.~(\ref{eq:original-problem}) can be reformulated as
\begin{align}
\min_{Z}\ & \|G_{t}(Z)\|_{\text{F}}+\lambda_{t}\|F_{t}(Z)-F_{t}(X)\|_{\text{F}},\label{eq:optimization-problem}
\end{align}
where $\lambda_{t}$ is a hyperparameter. It can be easily
shown that
\begin{align*}
C_{t}(Z)=\tilde{\mathbf{U}}\tilde{\mathbf{D}}\tilde{\mathbf{V}}^{\top}\iff F_{t}(X)=F_{t}(Z),G_{t}(Z)=\mathbf{O}.
\end{align*}
See the supplementary material for the proof. Note that the optimization
problem of Eq.~(\ref{eq:optimization-problem}) can be factorized
into a subproblem on each data point.
\begin{align}
\min_{\mathbf{z}_{i}}\ & \|G_{t}(\mathbf{z}_{i})\|_{\text{F}}+\lambda_{t}\|F_{t}(\mathbf{z}_{i})-F_{t}(\mathbf{x}_{i})\|_{\text{F}},\label{eq:optimization-problem-pointwise}
\end{align}
where $\mathbf{z}_{i}$ and $\mathbf{x}_{i}$ are the $i$-th data points
of $Z$ and $X$, respectively. Considering the polynomials up to
degree $t$, Eq.~(\ref{eq:optimization-problem-pointwise}) becomes
\begin{align}
\min_{\mathbf{z}_{i}}\ & \sum_{k=1}^{t}\|G_{k}(\mathbf{z}_{i})\|_{\text{F}}+\sum_{k=1}^{t}\lambda_{k}\|F_{k}(\mathbf{z}_{i})-F_{k}(\mathbf{x}_{i})\|_{\text{F}}.\label{eq:full-optimization-problem-pointwise}
\end{align}
The first term encourages the polynomials in $G^{t}$ to vanish on
$\mathbf{z}_{i}$, and the second term constrains $\mathbf{z}_{i}$
to stay near $\mathbf{x}_{i}$ with respect to $F^{t}$. This formulation provides an interesting insight:
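Written out as code, the pointwise objective is simply a sum of norms. The sketch below assumes the polynomial sets are given as evaluation callables; the names `G_evals`, `F_evals`, and `lams` are ours:

```python
import numpy as np

def knot_objective(z, x, G_evals, F_evals, lams):
    """Pointwise data-knotting objective: vanishing term plus F-anchoring term.

    G_evals, F_evals: callables returning the stacked evaluations of G_k and
    F_k at one point; lams: per-degree regularization weights.
    """
    vanish = sum(np.linalg.norm(G(z)) for G in G_evals)
    anchor = sum(lam * np.linalg.norm(F(z) - F(x))
                 for lam, F in zip(lams, F_evals))
    return vanish + anchor
```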
\begin{rem}
The second term of Eq.~(\ref{eq:full-optimization-problem-pointwise})
is a regularization term that is equivalent to a nonlinear generalization
of the Mahalanobis distance.
\end{rem}
The Mahalanobis distance between two data points $\mathbf{x}$ and
$\mathbf{y}$ of a mean-centralized matrix $\mathbf{X}$ is a
generalization of the Euclidean distance in which differences are
normalized by the covariance of the data, \emph{i.e.}, $d(\mathbf{x},\mathbf{y})=\sqrt{(\mathbf{x}-\mathbf{y})^{\top}\mathbf{\Sigma}^{\dagger}(\mathbf{x}-\mathbf{y})}$,
where $\mathbf{\Sigma}=\mathbf{X}^{\top}\mathbf{X}$ is the empirical
covariance matrix. The remark above holds because our $F_{t}$ describes the nonlinear principal variance of the current data knots $X$. For simplicity, we consider only the linear case $t=1$, in which $C_{1}(X)$ is the mean-centralized $X$ because it is the residual with respect to $F_{0}=1/\sqrt{|X|}$.
Adopting the notations in lines 8 and 9 of Alg.~\ref{alg:FindBasis},
we let $\mathbf{\tilde{E}}$ and $\tilde{\mathbf{D}}$ be submatrices
of $\mathbf{E}$ and $\mathbf{D}$ corresponding to singular values larger
than $\delta$. The distance between two points $\mathbf{x},\mathbf{y}$ of the mean-centralized $X$ is then calculated as
\begin{align*}
& \|F_{1}(\mathbf{x})-F_{1}(\mathbf{y})\|^{2}\\
& =\begin{Vmatrix}\mathbf{x}^{\top}\begin{pmatrix}\tilde{\mathbf{V}}_{0}\tilde{\mathbf{W}}\tilde{\mathbf{E}}^{\dagger} & \mathbf{V}_{0}^{\varepsilon}\tilde{\mathbf{V}}\tilde{\mathbf{D}}^{\dagger}\end{pmatrix}-\mathbf{y}^{\top}\begin{pmatrix}\tilde{\mathbf{V}}_{0}\tilde{\mathbf{W}}\tilde{\mathbf{E}}^{\dagger} & \mathbf{V}_{0}^{\varepsilon}\tilde{\mathbf{V}}\tilde{\mathbf{D}}^{\dagger}\end{pmatrix}\end{Vmatrix}^{2},\\
& =(\mathbf{x}-\mathbf{y})^{\top}\Sigma^{-1}(\mathbf{x}-\mathbf{y}),
\end{align*}
where
\begin{align*}
\Sigma^{-1} & =\tilde{\mathbf{V}}_{0}\tilde{\mathbf{W}}(\tilde{\mathbf{E}}^{\top}\tilde{\mathbf{E}})^{-1}\tilde{\mathbf{W}}^{\top}\tilde{\mathbf{V}}_{0}^{\top}+\mathbf{V}_{0}^{\varepsilon}\tilde{\mathbf{V}}(\tilde{\mathbf{D}}^{\top}\tilde{\mathbf{D}})^{-1}\tilde{\mathbf{V}}^{\top}{\mathbf{V}_{0}^{\varepsilon}}^{\top}.
\end{align*}
A straightforward calculation shows that $\Sigma$ is the principal variance of the empirical covariance matrix $C_{1}(X)^{\top}C_{1}(X)$. Similarly, in the nonlinear case $t>1$, $\|F_{t}(\mathbf{x})-F_{t}(\mathbf{y})\|$ is a Mahalanobis distance with respect to the principal variance of $C_{t}(X)^{\top}C_{t}(X)$.
Therefore, the regularization term in Eq.~(\ref{eq:full-optimization-problem-pointwise})
can be regarded as a generalized Mahalanobis distance in nonlinear cases (details are provided in the supplementary material).
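The linear case can be checked numerically: whitening the centralized data by its SVD reproduces the Mahalanobis distance exactly. The snippet below is a small sanity check on our own toy data, keeping all singular directions for simplicity (no $\delta$ truncation):

```python
import numpy as np

# Squared distance between F_1 features (whitened principal coordinates)
# vs. the Mahalanobis distance under Sigma = C_1(X)^T C_1(X).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.5])  # anisotropic toy data
Xc = X - X.mean(axis=0)                                   # C_1(X): mean-centralized
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

F1 = Xc @ Vt.T @ np.diag(1.0 / s)        # whitened coordinates (equal to U)
Sigma_pinv = np.linalg.pinv(Xc.T @ Xc)   # pseudo-inverse of the covariance

d_feat = np.sum((F1[0] - F1[1]) ** 2)
d_maha = (Xc[0] - Xc[1]) @ Sigma_pinv @ (Xc[0] - Xc[1])
```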
To optimize Eq.~(\ref{eq:full-optimization-problem-pointwise}), we adopted a quasi-Newton method with numerically computed gradients. We restricted the $F_{k}$ to lower-degree polynomials, assuming that the lower-degree structures sufficiently capture the structure of the data. Specifically, we took into account the regularization terms up to the degree at which the first vanishing polynomial is found.
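A minimal sketch of this optimization step uses SciPy's BFGS, which estimates gradients numerically when none are supplied, on a toy one-polynomial instance. We square the terms for smoothness, a slight deviation from the norms in the objective above:

```python
import numpy as np
from scipy.optimize import minimize

def knot_one_point(x, lam=0.1):
    """Pull a noisy point toward the zero set of g(z) = z0^2 + z1^2 - 1
    (a unit circle, our choice) while staying near the original point."""
    g = lambda z: z[0] ** 2 + z[1] ** 2 - 1.0
    obj = lambda z: g(z) ** 2 + lam * np.sum((z - x) ** 2)
    # quasi-Newton (BFGS) with numerically estimated gradients
    return minimize(obj, x, method="BFGS").x
```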
\subsection{Exact Vanish Pursuit\label{subsec:Exact-Vanish-Pursuit}}
Our original goal of discovering the $\delta$-vanishing
polynomials on the data knots cannot be achieved by simply applying data
knotting to a fixed set of polynomials. In some cases (e.g., when three polynomials never
intersect at a common point), there may be no data knots on which the given polynomials sufficiently vanish. To resolve this problem, we introduce an iterative framework that alternately updates the data knots and the polynomials~(Alg.~\ref{alg:ExactVanishPursuit}).
Let us construct the degree-$t$ polynomials $G_{t}$ and
$F_{t}$. At this point, we have the polynomial sets of degree less than $t$, $G^{t-1}$ and $F^{t-1}$, and tentative data knots $Z$.
Introducing $\eta>\delta$, we repeat the following steps.
\begin{enumerate}
\item Fixing $Z$, update $G_{t}$ and $F_{t}$ by Alg.~\ref{alg:FindBasis}. In this step, the polynomials in $G_{t}$ are $\varepsilon$-vanishing on $X_{0}$ and $\eta$-vanishing on $Z$.
\item Fixing $F^{t},G^{t}$, update the data knots $Z$ by solving Eq.~(\ref{eq:full-optimization-problem-pointwise}).
\item Decrease $\eta$.
\end{enumerate}
This iteration terminates early (before $\eta$ reaches $\delta$) when $G^{t}$ becomes a set of $\delta$-vanishing polynomials or $G_{t}$ becomes empty. Otherwise, $\eta$ approaches $\delta$ over the iterations; once $\eta=\delta$, all the polynomials in $G_{t}$ are $\delta$-vanishing on $Z$ and the iteration terminates. Note that in this case the polynomials in $G^{t-1}$ may no longer be $\delta$-vanishing on $Z$ because $Z$ has been updated.
The next subsection introduces the reset framework, which resolves this situation. The schedule for reducing $\eta$ can affect the result of the algorithm. In
the present study, we decreased $\eta$ in a pragmatic way: introducing a cooling parameter $\gamma<1$, we updated $\eta$ to $\gamma\eta$ unless the largest norm of the evaluation vectors of $g\in G$ was smaller, in which case $\eta$ was set to that norm, \emph{i.e.}, $\eta=\min(\gamma\eta,\max_{g\in G}\|g(Z)\|)$.
A more principled schedule for decreasing $\eta$ is left for future work.
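The cooling rule can be stated in a few lines; clamping at $\delta$ is our addition so that $\eta$ never undershoots the target tolerance:

```python
def next_eta(eta, gamma, g_norms, delta):
    """One cooling step: take the smaller of the geometric decay gamma*eta and
    the worst remaining evaluation norm, never going below delta (our clamp)."""
    eta = min(gamma * eta, max(g_norms)) if g_norms else gamma * eta
    return max(eta, delta)
```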
The iterative framework introduced in this section is summarized in Alg.~\ref{alg:ExactVanishPursuit}. In this subroutine, the order of data knotting and polynomial construction is reversed for ease of implementation in the later sections.
\begin{algorithm}[t]
\caption{ExactVanishPursuit\label{alg:ExactVanishPursuit}}
\begin{algorithmic}[1]
\Require{$G^{t}, F^{t}, \tilde{C}_t, Z, X_0, \varepsilon, \eta, \delta, \lambda$}
\Ensure{$G_t, F_t$}
\While{$\eta > \delta$ and $G_t$ is not empty}
\State{Update $Z$ by solving Eq.~(\ref{eq:full-optimization-problem-pointwise});}
\If{$\forall g \in G^t, \|g(Z)\| \le \delta$}
\State{break}
\EndIf
\State{Decrease $\eta$;}
\State{$G_t, F_t = \text{FindBasis}(\tilde{C}_t, F^{t-1}, Z, X_0, \varepsilon, \eta)$}
\EndWhile
\State{\Return $G_t,F_t$}
\end{algorithmic}
\end{algorithm}
\subsection{Algorithm\label{subsec:Algorithm}}
\begin{algorithm}[t]
\caption{Main\label{alg:EVAVI}}
\begin{algorithmic}[1]
\Require{
$X_0,\varepsilon, \delta, \lambda$
}
\Ensure{$G, Z$}
\State{\# Initialization}
\State{$G = \{\}, F = F_0 = \{f(\cdot) = 1/\sqrt{|X|}\}$}
\State{$\tilde{C}_1 = Z = X_0$}
\State{$\eta = \varepsilon, t=1$}
\Loop
\State{\# Compute bases of degree-$t$ polynomials}
\State{$G_t, F_t = \text{FindBasis}(\tilde{C}_t, F, Z, X_0, \varepsilon,\eta)$}
\State{$G_t, F_t = \text{ExactVanishPursuit}(G\cup G_t, F\cup F_t, $}
\State{\hspace{13.5mm}$\tilde{C}_t, Z, X_0, \varepsilon,\eta, \lambda)$}
\State{$G = G \cup G_t, F = F \cup F_t$}
\State{$C_t = \{f_t f_1; f_t\in F_t, f_1\in F_1\}$}
\State{}
\If{$C_t$ is empty}
\If{$\forall g \in G, \|g(Z)\| \le \delta$}
\State{\Return $Z, G$}
\Else
\State{\# Reset to degree 1}
\State{$t=1$}
\State{$G=\{\}, F = \{f(\cdot)=1/\sqrt{|Z|}\}$}
\State{$\tilde{C}_1 = Z$}
\State{Decrease $\eta$;}
\EndIf
\Else
\State{$t = t+1$}
\EndIf
\EndLoop
\end{algorithmic}
\end{algorithm}
This section describes the overall algorithm of the proposed method (Alg.~\ref{alg:EVAVI}). The inputs are the data points $X_{0}$, the error tolerances $\varepsilon,\delta$, and the regularization weight $\lambda$. The algorithm outputs a set of polynomials $G$ and data knots $Z$ such that the polynomials in $G$ are $\varepsilon$-vanishing on $X_{0}$ and $\delta$-vanishing on $Z$.
As it proceeds, the algorithm increments the degree of the polynomials. The degree-$0$ polynomial sets are initialized to
$G_{0}=\left\{ \right\}$, $F_{0}=\{f(\cdot)=1/\sqrt{|X|}\}$, and the initial data knots
$Z$ are set to $X_{0}$. We also introduce an error tolerance parameter $\eta$, initialized to $\eta=\varepsilon$. Although we aim to discover the $\delta$-vanishing
polynomials on $Z$, we first consider the $\eta$-vanishing polynomials on $Z$, where $\eta$ is updated rather than fixed: it is gradually decreased throughout the iterations and eventually reaches $\delta$, thereby yielding $\delta$-vanishing polynomials.
At degree $t$, the algorithm proceeds through the following four steps: (1) generate
$G_{t}$ and $F_{t}$ by Alg.~\ref{alg:FindBasis}, where $G_{t}$
is a set of polynomials that are $\varepsilon$-vanishing on $X_{0}$
and $\eta$-vanishing on $Z$; (2) update $G_{t}$, $F_{t}$, and
$Z$ by Alg.~\ref{alg:ExactVanishPursuit} such that the polynomials in
$G_{t}$ become $\delta$-vanishing on $Z$; (3) generate degree-$(t+1)$
candidate polynomials for the next iteration; (4) check the termination conditions
(\emph{reset} or advance to the next degree). A reset restores all variables except $Z$ and $\eta$ to the $t=1$ stage. A reset is performed
if there is a $\delta$-nonvanishing polynomial in $G$ and
the algorithm cannot proceed to the next degree, \emph{i.e.}, when $C_{t}$
is empty. The reset mechanism feeds back the results of higher-degree polynomials
to lower-degree ones via the data knots $Z$. To our knowledge, this reset mechanism is unique to our method; all of the existing methods appear to greedily construct the polynomials from lower to higher degrees.
Termination is guaranteed when $\eta$ reaches $\delta$. To prove this, first note that when $\eta=\delta$,
no data knotting occurs, so $Z$ is fixed. To describe arbitrary polynomials on
$Z$, we need to collect $|Z|$ linearly independent
polynomials in $F$, because the polynomials are associated with vectors in
$\mathbb{R}^{|Z|}$. In Alg.~\ref{alg:FindBasis}, the column space of the
evaluations of candidate polynomials $C_{t}$ is orthogonal to that
of $F$ on $Z$. By its construction, the column space of $F_{t}(Z)$
approximately spans the column space of $C_{t}(Z)$. When $\left|F_{t}\right|=0$,
the algorithm terminates; otherwise, the rank of $F$ is strictly
increased by appending $F_{t}$. Therefore, the rank of $F$ reaches
$\left|Z\right|$ after a finite number of steps, and the algorithm terminates.
Since $\eta=\delta$, all polynomials in $G$ are $\delta$-vanishing
on $Z$.
Note that the output $G$ is not necessarily a basis of the vanishing ideal for $Z$ because polynomials that are $\delta$-vanishing on $Z$ but $\varepsilon$-nonvanishing on $X_{0}$ are excluded from $G$. This result is reasonable because the polynomials in $G$ do not well approximate the original data. In some cases, however, we require a basis of the vanishing ideal. Such a basis can be generated by applying existing basis generation methods such as VCA to small data knots $Z$, which is much less computationally costly than applying to $X_{0}$.
\section{Results}
In this section, we demonstrate that our method discovers a compact
set of low-degree polynomials and a few data knots that well
represent the original points. The proposed method exhibits both noise tolerance and
good preservation of the algebraic structure. We first qualitatively illustrate the vanishing polynomials and data knots
obtained on simple datasets. In the subsequent classification task, we show that the polynomials output by our method avoid overfitting and retain the useful nonlinear features of the data, as observed with VCA.
Finally, we evaluate the representativeness of the data knots by training $k$-nearest neighbor classifiers on the classification tasks.
Note that classification tasks are adopted to measure how well the proposed method preserves the nonlinear structure of the data.
The proposed method is not specially tailored for classification; it can also contribute to other tasks where vanishing ideal based approaches have been introduced.
\subsection{Illustration with simple data}
\begin{figure*}
\includegraphics[scale=0.27]{result1}\caption{Sets of vanishing polynomials and data knots (blue circles) output
by the proposed method (a,c,e) and VCA (b,d,f) for simple data exposed to noise (red dots).
The input data are (a,b) three blobs with different variances, (c,d) a single circle, and (e,f) two concentric circles.
To enhance visibility, not all of the polynomials are shown.
\label{fig:result1}}
\end{figure*}
We applied our method and VCA with the same error tolerance to simple data perturbed by noise: three blobs
with different variances (60 points, 30\% noise on one blob;
5\% noise on the remaining blobs), a single circle
(30 samples, 5\% noise), and a pair of concentric circles (50 samples, 2\% noise). Here, $n$\% noise denotes zero-mean Gaussian
noise with a standard deviation of $n$\%. The blobs were generated by adding
noise at three distinct points. As shown in Fig.~\ref{fig:result1},
each set of polynomials obtained by our method exactly vanishes on the data knots and approximately
vanishes on the input data points, whereas those obtained by VCA intersect with each other only coarsely. In Figs.~\ref{fig:result1}(a) and \ref{fig:result1}(b),
one blob has much larger variance than the others. Whereas each
blob with small variance is represented by a single data knot,
the very noisy blob is represented by two knots; consequently,
polynomials obtained by our method almost exactly vanish on the knots and approximately vanish over the whole blob. In contrast, while the polynomials obtained by VCA are similar to ours, they intersect with each other much more coarsely. In Figs.~\ref{fig:result1}(c), \ref{fig:result1}(d), \ref{fig:result1}(e), and \ref{fig:result1}(f), both our method and VCA discovered the lowest-degree algebraic structures (a circle and a pair of concentric circles). In Fig.~\ref{fig:result1}(c), our method outputs only a circle, since the other polynomials that approximately vanish on the original data do not sufficiently vanish on the data knots. In Fig.~\ref{fig:result1}(e), some polynomials discovered by our method differ from those of VCA so as to better preserve the algebraic structure. Note that the same error tolerance is used for both our method and VCA; thus, the polynomials obtained by our method still approximately vanish on the original points, as do those of VCA.
\subsection{Compact lower-degree feature extraction}
\begin{table*}
\caption{Classification results\label{tab:Classification-results}}
\begin{tabular}{|c|c|c|c|c||c|c||c|c||c|c|}
\hline
& \multicolumn{4}{c||}{Accuracy {[}\%{]}}
& \multicolumn{2}{c||}{Test runtime {[}sec{]}}
& \multicolumn{2}{c||}{\#features}
& \multicolumn{2}{c|}{\#degree}\tabularnewline
\hline
& Proposed & VCA & Proposed-hd & VCA-hd & Proposed & VCA & Proposed & VCA & Proposed & VCA\tabularnewline
\hline
\hline
Iris & 0.96 & 0.95 & 0.75 & 0.59 & 3.6e-4 & 6.6e-4 & 14.9 & 62.8 & 1.5 & 2.0\tabularnewline
\hline
Wine & 0.98 & 0.98 & 0.94 & 0.67 & 7.6e-4 & 2.1e-3 & 100.5 & 592.9 & 1.7 & 2.3\tabularnewline
\hline
Vehicle & 0.80 & 0.80 & 0.53 & 0.60 & 1.6e-3 & 6.7e-2 & 121.5 & 4147.9 & 1.6 & 2.5 \tabularnewline
\hline
Vowel & 0.92 & 0.93 & 0.80 & 0.35 & 1.5e-3 & 2.3e-3 & 191.0 & 267.6 & 1.5 & 1.8\tabularnewline
\hline
\end{tabular}
\end{table*}
The vanishing polynomials obtained by our method compactly retain the original data
structure. To show this, we compare our method with VCA in four classification
tasks. Both methods were implemented in Python and tested on a workstation with four processors and 8\,GB memory. Both the proposed method and VCA adopt the feature-extraction method of \citeauthor{livni2013vanishing}~(\citeyear{livni2013vanishing}):
\begin{align*}
\mathcal{F}(\mathbf{x}) & =\Bigl(\cdots,\underbrace{\left|g_{i}^{(1)}(\mathbf{x})\right|,\cdots,\left|g_{i}^{(|G_{i}|)}(\mathbf{x})\right|}_{G_{i}(\mathbf{x})^{\top}},\cdots\Bigr)^{\top},
\end{align*}
where $G_{i}=\{g_{i}^{(1)},...,g_{i}^{(|G_{i}|)}\}$ denotes the computed
vanishing polynomials of the $i$-th class. By construction, the feature $\mathcal{F}(\mathbf{x})$ of a sample $\mathbf{x}$
in the $i$-th class should be a vector whose entries are approximately zero in the $G_{i}$ part and non-zero elsewhere. The classifier for the proposed method and VCA was a linear Support
Vector Machine~\cite{cortes1995support}. The datasets were downloaded
from~UCI machine learning repository~\cite{Lichman2013machine}. The hyperparameters were determined by
3-fold cross validation and the results were averaged over ten independent
runs. In each run, the datasets were randomly split into training (60\%) and
test (40\%) datasets.
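The feature map $\mathcal{F}$ can be sketched in a few lines; representing each class's polynomial set as a list of callables is our assumption, and the two toy polynomial sets are ours:

```python
import numpy as np

def extract_features(x, class_polys):
    """Concatenate |g(x)| over every class's vanishing polynomials.
    class_polys[i] is the list of polynomials computed for class i."""
    return np.concatenate(
        [np.abs([g(x) for g in polys]) for polys in class_polys])

# A point near class 0's zero set (a unit circle) yields a near-zero entry in
# the G_0 block and a large entry in the G_1 block (a line x0 = 3).
class_polys = [
    [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0],  # toy G_0
    [lambda x: x[0] - 3.0],                   # toy G_1
]
f = extract_features(np.array([0.99, 0.0]), class_polys)
```

A linear classifier can then separate classes from the block pattern of near-zero versus large entries.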
The classification results are summarized in Table~\ref{tab:Classification-results}.
The proposed method achieved classification accuracy comparable to VCA with much more compact features. The dimensions of the feature vectors
obtained by our method were only 3--70\% of those
of the VCA feature vectors. The degree of the discovered polynomials was also lower for the proposed method. Consequently, the test runtime was lower for our method than for VCA. This result
suggests that the proposed method well preserves the data structure even
after data knotting. As we argued in the introduction, higher-degree polynomials can be sensitive to noisy data under traditional vanishing polynomial construction with fixed data points. To confirm this, we also evaluated both methods with the feature-extraction polynomials restricted to the higher-degree half (Proposed-hd and VCA-hd). As can be seen from Table~\ref{tab:Classification-results}, the higher-degree polynomials of our method led to much higher classification accuracy than those of VCA, which suggests that our method provides noise-tolerant polynomials even at relatively high degrees, whereas VCA does not. An exception is the result for the Vehicle dataset. In this case, our method provided extremely compact features (only 2\% of the dimension of the VCA features), so the 50\% restriction in Proposed-hd may remove important features that contribute substantially to the accuracy.
\subsection{Evaluating data knots}
\begin{table}
\caption{$k$-nearest neighbor classification\label{tab:Classification-results-knots}}
\begin{tabular}{|c|c|c|c||c|}
\hline
& \shortstack{Data\\ knots} & \shortstack{$k$-means\\ centroids} & \shortstack{Original\\ points} & \shortstack{Knotting\\ ratio}\tabularnewline
\hline
\hline
Iris & 0.95 & 0.95 & 0.94 & 0.08\tabularnewline
\hline
Wine & 0.95 & 0.97 & 0.95 & 0.20\tabularnewline
\hline
Vehicle & 0.60 & 0.61 & 0.68 & 0.05 \tabularnewline
\hline
Vowel & 0.76 & 0.79 & 0.96 & 0.53\tabularnewline
\hline
\end{tabular}
\end{table}
In the proposed framework, data knotting greatly reduces the number of original noisy
data points, enabling lower-degree
vanishing polynomials. Here, we evaluate the representativeness of the data knots via the classification accuracy of $k$-nearest neighbor classifiers trained on the data knots. As baselines, we also trained classifiers on $k$-means centroids and on the original points. The number of centroids for $k$-means clustering was set to the number of data knots for each class. As the numbers of data knots and $k$-means centroids are much smaller than the number of original points, we set $k=1$ in the classifiers. The other training and testing settings were those of the previous section.
The results are summarized in Table~\ref{tab:Classification-results-knots}.
The ratio of the number of data knots to the number of original points, called the knotting ratio, confirms that the original data points were condensed into far fewer points after the knotting. Training the classifier on the few data knots achieves accuracy comparable to classification with the $k$-means centroids, supporting our argument that the data knots well represent the original data. Note that the data knots are designed to provide lower-degree vanishing polynomials, whereas $k$-means clustering simply summarizes nearby points. Compared with the classification results on the original data points, those with data knots were comparable for the datasets with fewer classes (Iris and Wine; three classes each). However, the accuracy degrades for the datasets with more classes (four classes in Vehicle and eleven classes in Vowel). In these datasets, more points can overlap across different classes, degrading the accuracy of both our method and $k$-means. A possible solution specially tailored for classification is to introduce class-discriminative information into the data knots. This modification is an interesting future extension.
\section{Conclusion and Future work}
The present paper focused on the trade-off between noise tolerance and preservation of algebraic structure in vanishing ideal construction, which has not been explicitly considered before. We addressed a new problem: discovering a set of vanishing polynomials that approximately vanish
on the noisy input data and almost exactly vanish on jointly discovered
representative data points (called data knots). In the proposed framework, we introduced a vanishing polynomial construction method that takes into account two point sets with different noise-tolerance scales. We also linked the newly introduced nonlinear regularization term to the Mahalanobis distance, which is commonly used in metric learning.
In experiments, the proposed method discovered much more compact and lower-degree algebraic systems than the existing method.
Computing vanishing polynomials that exactly vanish on the data knots remains an open problem. Exactly vanishing polynomials are desirable because they can be manipulated by algebraic operations and combined with other algebraic tools. For practical reasons (numerical implementation and optimization), our method returns exactly vanishing polynomials only in an extreme case ($\delta=0$), which often performs poorly in practice due to numerical instability. Another direction for future work is to improve computational efficiency. The proposed method is considerably slower than VCA in training runtime (see the supplementary material) due to the optimization step for data knotting. Moreover, our method has three main hyperparameters to be tuned ($\varepsilon$, $\delta$, and $\lambda$), which adds cost to the cross-validation step. Empirically, the most important hyperparameter is $\varepsilon$, which defines the error tolerance for the original data. Determining $\varepsilon$ is similar to determining the number of principal components in Principal Component Analysis (PCA), except that $\varepsilon$ affects the selection of both linear and nonlinear polynomials. How to choose $\varepsilon$ remains an open problem for many vanishing ideal based approaches, including ours.
\section*{Acknowledgement}
This work was supported by JSPS KAKENHI Grant Number 17J07510. The authors would like to thank Hitoshi Iba, Hoan Tran Quoc, and Takahiro Horiba of the University of Tokyo for helpful conversations and fruitful comments.
\bibliographystyle{aaai}
https://arxiv.org/abs/1602.00418 | Lifting Problem on Automorphism Groups of Cyclic Curves | Let X be a smooth projective hyperelliptic curve over an algebraically closed field k of prime characteristic p. The aim of this note is to find necessary and sufficient conditions on the automorphism group of the curve X to be lifted to characteristic zero. The results will be generalised for a certain family of curves that we call cyclic curves. | \section{Introduction}
Let $k$ be an algebraically closed field of prime characteristic $p.$ Given a smooth projective curve $X$ over $k,$ consider a lifting $(X_0/k_0, v)$ of $X/k$ to characteristic $0,$ with the following properties:
\begin{itemize}
\item $k_0$ is the algebraically closed field of the fraction field of $W(k),$ the ring of Witt vectors over $k;$
\item $v$ is a valuation such that $k_0 v=k;$
\item The curves $X/k$ and $X_0/k_0$ have the same genus.
\end{itemize}
Recall that such a lift always exists and is called a good lifting of $X/k.$
\
According to the following proposition:
\begin{pro}[\cite{talin}]
Let $(\mathbb{F}{|}k,v)$ be a valued function field in one variable where $v$\footnote{Note that a good reduction is always invariant under the action of any automorphism group of $\mathbb{F}{|}k.$} is assumed to be invariant under the action of the automorphism group $\mathrm{Aut}(\mathbb{F}{|}k).$ Then there is a natural injective homomorphism
$$\phi: \mathrm{Aut}(\mathbb{F}{|}k)\hookrightarrow\mathrm{Aut}(\mathbb{F} v{|} k v)$$ where $\mathbb{F} v{|} k v$ denotes the residue function field with respect to the valuation $v.$
\end{pro}
Together with the two equivalent categories:
\begin{itemize}
\item Smooth projective curves over $k,$ and dominant morphisms;
\item Function fields of one variable over $k,$ and $k$-homomorphisms.
\end{itemize}
there is a natural injective homomorphism
$$\phi: \ G_0{:=}\mathrm{Aut}_{k_0}(X_0)\hookrightarrow G{:=}\mathrm{Aut}_k(X).$$ Moreover, if $\phi$ is surjective, we say that the automorphism group $G$ \texttt{is liftable} to characteristic $0.$ So, one can ask: Under which conditions is $\phi$ surjective? The aim of this note is to answer this question.
\
We have a partial answer when the order of the group $G$ and $p,$ the characteristic of $k,$ are relatively prime. Indeed, in \cite{grot} Expos\'e XIII \S 2, Grothendieck proved that if $p$ does not divide the order of $G,$ then $G$ can always be lifted to characteristic $0.$ However, in the case when the order of $G$ is divisible by $p,$ the problem is not completely solved.
Here, we restrict to the case of cyclic curves (Definition \ref{def1}) over a field of prime characteristic. The main reason is that we know all of the groups which can occur as the automorphism group of a cyclic curve in any characteristic not equal to $2$ (see \cite{shaska} and \cite{sanj}). Therefore, a priori, quite elementary covering theory, some group theory, and the representation theory of finite subgroups of $\mathrm{PGL}_2(k)$ suffice. We will show that:
\
\textit{Let $X/k$ be a smooth projective irreducible hyperelliptic curve. Denote by $G$ its full automorphism group and suppose that $p\neq2.$ Assume that the order of $G$ is divisible by $p.$ Then, the automorphism group $G$ is liftable to characteristic $0$ if and only if, up to isomorphism, $G$ is one of the following groups:
\begin{enumerate}
\item $G=\mathbb{Z}/2p\mathbb{Z};$
\item $G=D_{2p};$
\item $G=\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5$ or $\mathrm{SL}_2(5)$ if $p=5;$
\item \label{tg} $G=\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_4, \mathbb{Z}/2\mathbb{Z}\times\mathcal{S}_4, \mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5, \mathrm{SL}_2(3),$ or $\mathrm{GL}_2(3)$ in the case when $p=3.$
\end{enumerate}}
\textit{In particular, the group $G$ is liftable to characteristic $0$ if and only if $G$ is an Oort-group for $k.$ We also observe that the order of $G$ is divisible by $p,$ but not by $p^2.$ Furthermore, we have $p\leq2g+1,$ where $g$ denotes the genus of the curve $X/k.$}
\
It is important to point out that this problem is related to \texttt{Oort groups} and \texttt{lifting problems} (Definition \ref{oort}); see \cite{oortgroup}. By definition, an Oort group for $k$ is clearly liftable to characteristic $0.$ But the converse is not always true. So, a priori, for an automorphism group of a smooth projective curve over $k,$ the condition of being an Oort group is stronger than being liftable to characteristic $0.$ Our results in the present paper confirm some predictions of Chinburg, Guralnick and Harbater in \cite{oortgroup}.
We begin by recalling some results on Oort groups for $k,$ together with some preliminary results that we will need in order to prove our main results in the last section.
\textit{Throughout this note, $k$ denotes an algebraically closed field of prime characteristic $p.$}
\section{Preliminary Results}
\begin{defn}\label{oort}
A finite group $G$ is called an \texttt{Oort group} for $k$ if every faithful action of $G$ on a smooth connected projective curve over $k$ lifts to characteristic $0.$
\end{defn}
Oort conjectured that every cyclic group lifts. This conjecture is now a theorem, by the work of Pop (see \cite{pop}), which relies mainly on deformation theory and a special case of a result of Obus and Wewers in \cite{ob}. Moreover, Chinburg, Guralnick and Harbater proved in \cite{oortgroup} that if $G$ is liftable to characteristic zero, then every cyclic-by-$p$ subgroup of $G$ (an extension of a prime-to-$p$ cyclic group by a $p$-group) must be either cyclic or dihedral of the form $D_{p^n}$, with the exception of $\mathcal{A}_4$ in characteristic $2.$ They also predict that the converse is true; this prediction is called the \textit{strong Oort conjecture}. But, as far as we know, no one has yet proved that the dihedral group $D_{p^n}$ for $n>1$ can be lifted to characteristic zero.
\
The following lemma is a summary of what we will need about Oort groups:
\begin{lem}[\cite{oortgroup}]\label{rem2}
\
\begin{itemize}
\item A $p$-regular group (a group whose order is not divisible by $p$), a finite cyclic group, the dihedral group $D_p$ and the Klein four-group $V_4$ (in the case $p=2$) are Oort groups for $k$;
\item The quaternion group $Q_8$ (in the case $p=2$), the group $(\mathbb{Z}/p\mathbb{Z})^n$ where $n\geq2$ (resp. $>2$) if $p\neq2$ (resp. $p=2$) are not Oort groups;
\item Let $G$ be a finite group. Then $G$ is an Oort group if and only if every cyclic-by-$p$ subgroup $H\subset G$ is an Oort group;
\item If a cyclic-by-$p$ group is an Oort group for $k,$ then it must be cyclic or dihedral of the form $D_{p^n}$ for some integer $n$.
\end{itemize}
\end{lem}
\begin{defn}\label{def1}
We say that a function field $\mathbb{F}{|}k$ is \texttt{cyclic} (or \texttt{superelliptic}) if the following condition is satisfied: there exists a transcendental element $x$ such that the rational function field $k(x)$ is invariant under the action of the full automorphism group $G$ of $\mathbb{F}{|}k,$ the extension $\mathbb{F}{|}k(x)$ is Galois, and the subgroup $N{=}\mathrm{Aut}\left( \mathbb{F} {|}k (x)\right)$ is cyclic and normal in $G.$
Here, the base field $k$ is an algebraically closed field of characteristic $p\geq0.$ The smooth projective curve $X$ over $k$ associated to $\mathbb{F}{|}k$ is called a \texttt{cyclic curve}.
\end{defn}
\
\begin{pro}\label{pro}
Let $(\mathbb{F}{|}\mathbb{K},v)$ be a valued function field where the valuation $v$ is assumed to be invariant under the action of $\mathrm{Aut}(\mathbb{F}{|}\mathbb{K}).$ Denote by $\mathbb{E}$ the fixed field of the full automorphism group $G$ of $\mathbb{F}{|}\mathbb{K}.$ Then, for any subfield $\mathbb{L} v$ between $\mathbb{F} v$ and $\mathbb{E} v,$ there exists a unique subfield $\mathbb{L}$ between $\mathbb{F}$ and $\mathbb{E}$ such that $\mathbb{L} v$ is the exact reduction of $\mathbb{L}$ by the valuation $v.$
Note that the subfields $\mathbb{L}$ and $\mathbb{L}v$ have the same genus in the case when $v$ is a good reduction.
\end{pro}
\begin{proof}
We know that $G\simeq\mathrm{Aut}(\mathbb{F} v{|}\mathbb{E} v)$ and that $\mathbb{F} v{|}\mathbb{E} v$ is Galois. The result follows by the Galois correspondence. Indeed, the extension $\mathbb{F} v{|}\mathbb{L} v$ is also Galois; denote by $Hv$ its Galois group. By the Galois correspondence, if $n$ is the number of subgroups $H_i$ ($1\leq i\leq n$) of $G$ which have the same order as $Hv,$ then there exist exactly $n$ extensions $\mathbb{F} v{|}\mathbb{L}_i v$ ($1\leq i\leq n$) such that $H_iv{=}\mathrm{Aut}\left( \mathbb{F} v{|}\mathbb{L}_i v\right)$ for each $1\leq i\leq n.$ Here $H_iv$ denotes the image of $H_i$ under the natural isomorphism between $G$ and $\mathrm{Aut}(\mathbb{F} v{|}\mathbb{E} v).$ Therefore, there must be a unique subgroup $H$ (one of the subgroups $H_i$) of $G$ such that $H\simeq Hv$ (via this natural isomorphism) and $\mathbb{F}^{H}v=\mathbb{L}v.$ Thus, $\mathbb{L}=\mathbb{F}^{H}.$
\end{proof}
\
Now, let $\mathbb{F}{|}k$ be a cyclic function field and denote by $X$ the smooth cyclic curve over $k$ associated to $\mathbb{F}{|}k.$ Let $X_0$ be a good lifting of $X$ to characteristic $0$ over an algebraically closed field $k_0.$ Then, there exists a valued function field $\left(\mathbb{F}_0{|}k_0,v\right)$ such that $v$ is a good reduction, $\mathbb{F}=\mathbb{F}_0v$ and $k=k_0v.$ Suppose that the full automorphism group $G$ of the curve $X/k$ is liftable to characteristic zero and assume that $G\simeq G_0{=}\mathrm{Aut}(\mathbb{F}_0{|}k_0).$ Using Proposition \ref{proo}, there exists a residually transcendental element $x$ such that the extensions $\mathbb{F}_0{|}k_0(x)$ and $\mathbb{F}{|}k(\overline{x})$ are Galois. Hence, $\mathbb{F}_0{|}k_0$ is also a cyclic function field
and $N{=}\mathrm{Aut}\left( \mathbb{F}_0{|}k_0(x)\right)\simeq\mathrm{Aut}\left(\mathbb{F}{|}k( \overline{x})\right).$ Note also that the curve $X_0/k_0$ is cyclic. Furthermore, the groups $G/N$ and $G_0/N$ are isomorphic and embedded in $\mathrm{PGL}_2(k)$ and $\mathrm{PGL}_2(k_0)$ respectively. Therefore, it is important to know which groups can occur as finite subgroups of both $\mathrm{PGL}_2(k)$ and $\mathrm{PGL}_2(k_0).$
\
\begin{lem}\label{biglem}
Let $H$ be a finite subgroup of $\mathrm{PGL}_2\left(k\right).$ The group $H$ can be embedded in $\mathrm{PGL}_2(k_0)$ if and only if one of the following statements holds:
\begin{itemize}
\item The prime characteristic $p$ does not divide ${\mid}H{\mid},$ the order of $H;$
\item If $p$ divides ${\mid}H{\mid},$ then up to isomorphism, $H$ is one of the following groups:
\begin{itemize}
\item $H=\mathbb{Z}/ p\mathbb{Z};$
\item $H=D_{p}$ if $p\neq2;$
\item $H=\mathcal{A}_4$ or a dihedral group $D_n$ where $n$ is a positive odd integer if $p=2;$
\item$H=\mathcal{A}_5$ in the case when $p\leq5;$
\item$H=\mathcal{A}_4$ or $\mathcal{S}_4$ when $p=3.$
\end{itemize}
\end{itemize}
\end{lem}
\begin{proof}
In order to prove the lemma, we recall the following result from \cite{fini} (Theorems B and C):
\begin{pro}\label{lem1}
Let $\mathbb{K}$ be an algebraically closed field of characteristic $q.$ Let $H$ be a finite subgroup of $\mathrm{PGL}_2(\mathbb{K}).$ Then:
\begin{itemize}
\item If $q=0,$ or if $q>0$ and $q\nmid {|}{H}{|},$ then the group $H$ is isomorphic to a cyclic group, a dihedral group, $\mathcal{A}_4, \mathcal{S}_4$ or $\mathcal{A}_5;$
\item If $q>0$ and $q$ divides the order of $H,$ then $H$ is isomorphic to one of the following groups: $\mathrm{PGL}_2(\mathbb{F}_{q^n})$, $\mathrm{PSL}_2(\mathbb{F}_{q^n})$ for some integer $n$, or a $q$-semi-elementary subgroup. Note that the conjugacy classes of $q$-semi-elementary subgroups of $\mathrm{PGL}_2\left(\mathbb{K}\right)$ of order $q^mn$ ($n\in\mathbb{N}\setminus q\mathbb{N}$ and $m\in\mathbb{N}^{\star}$) are parameterized by the set of homothety classes of rank-$m$ subgroups $\Gamma$ satisfying $\mathbb{F}_{q^e}\subset\Gamma\subset \mathbb{K}$ via the map
$$\Gamma\mapsto\left( \begin{array}{cc}
1 & \Gamma \\
& 1
\end{array}\right)\rtimes\left( \begin{array}{cc}
\mu_n\left(\mathbb{K}\right) & \\
& 1
\end{array}\right)$$
where $e$ is the order of $q$ in $(\mathbb{Z}/n\mathbb{Z})^{\times}$ and $\mu_n\left(\mathbb{K}\right)$ is the group of $n$-th roots of unity in $\mathbb{K}.$
\end{itemize}
With the following exceptional possibilities:
\begin{itemize}
\item Suppose that $q=3$; then $H$ could also be isomorphic to $\mathcal{A}_5;$
\item If $q=2,$ then $H$ could also be isomorphic to a dihedral group $D_n,$ where $n$ is an odd positive integer.
\end{itemize}
\end{pro}
\
Let us now prove our lemma. If $p\nmid{|}{H}{|},$ then by the first statement of Proposition \ref{lem1}, the subgroup $H$ can be embedded in $\mathrm{PGL}_2(k_0).$ Therefore, for the rest of the proof of Lemma \ref{biglem}, we may assume that $p$ divides the order of $H.$
Let us assume first that $p{>}5.$ Using Proposition \ref{lem1}, since $p$ divides ${|}{H}{|},$ the group $H$ is not isomorphic to any of the groups $\mathcal{A}_4, \mathcal{S}_4$ or $\mathcal{A}_5.$ Therefore, $H$ is either cyclic or dihedral. However, the groups $\mathrm{PGL}_2(\mathbb{F}_{p^n})$ and $\mathrm{PSL}_2(\mathbb{F}_{p^n})$ are neither cyclic nor dihedral. Hence, $H$ is isomorphic to a $p$-semi-elementary group of order $p^mn$ ($n\in\mathbb{N}\setminus p\mathbb{N}$ and $m\in\mathbb{N}^{\star}$), parameterized by the homothety class of a rank-$m$ subgroup $\Gamma$ satisfying $\mathbb{F}_{p^e}\subset\Gamma\subset k$ via the map
$$\Gamma\mapsto\left( \begin{array}{cc}
1 & \Gamma \\
& 1
\end{array}\right)\rtimes\left( \begin{array}{cc}
\mu_n(k) & \\
& 1
\end{array}\right)$$
where $e$ is the order of $p$ in $(\mathbb{Z}/n\mathbb{Z})^{\times}$ and $\mu_n(k)$ is the group of $n$-th roots of unity in $k.$ Furthermore, by the definition of $p$-semi-elementary groups, $H$ is cyclic if $m=1,$ and dihedral if $m=1$ and $n=2.$ Thus, $H\simeq\mathbb{Z}/pn\mathbb{Z}$ or $D_p,$ where $n$ is a positive integer prime to $p.$ Suppose now that $n>1.$ Then there would be a cyclic group of order $pn$ generated by the two matrices
$$\left( \begin{array}{cc}
1 & 1 \\
0 & 1
\end{array}\right) , \left( \begin{array}{cc}
\zeta & 0 \\
0 & 1
\end{array} \right)$$ where $\zeta$ is a primitive $n$-th root of unity. But these two matrices do not commute with each other. Hence, $n$ must be equal to $1.$
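As a quick numerical illustration, the analogous computation can be carried out over a finite field. The sketch below (plain Python, with the hypothetical choices $p=7$, $n=3$ and $\zeta=2$, an element of order $3$ in $\mathbb{F}_7^{\times}$) checks that the images of the two matrices in $\mathrm{PGL}_2(\mathbb{F}_7)$ do not commute:

```python
# Check that [[1,1],[0,1]] and [[z,0],[0,1]] do not commute in PGL_2(F_p).
# Hypothetical illustration: p = 7, n = 3, z = 2 (an element of order 3 mod 7).
p, z = 7, 2

def mat_mul(A, B, p):
    """Multiply two 2x2 matrices, entries reduced mod p."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

def proportional(A, B, p):
    """True if A = c*B mod p for some nonzero scalar c (equality in PGL_2)."""
    for c in range(1, p):
        if all((c * B[i][j] - A[i][j]) % p == 0 for i in range(2) for j in range(2)):
            return True
    return False

U = [[1, 1], [0, 1]]   # unipotent generator
D = [[z, 0], [0, 1]]   # diagonal generator

UD, DU = mat_mul(U, D, p), mat_mul(D, U, p)
print(proportional(UD, DU, p))  # False: the two images do not commute in PGL_2(F_7)
```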
In the case when $p\leq5,$ if $H$ is a $p$-semi-elementary group, then $H\simeq \mathbb{Z}/p\mathbb{Z}$ or $D_{p},$ with the exceptional case $\mathcal{A}_4\simeq\left(\mathbb{Z}/2\mathbb{Z}\right)^2\rtimes\mathbb{Z}/3\mathbb{Z}$ in characteristic $p=2.$
If $p=5,$ since $\mathcal{A}_5\simeq\mathrm{PSL}_2(\mathbb{F}_5),$ then $H$ could be isomorphic to $\mathcal{A}_5.$
Now, if $p=3,$ since $\mathrm{PGL}_2(\mathbb{F}_3)\simeq\mathcal{S}_4$ and $\mathrm{PSL}_2(\mathbb{F}_3)\simeq\mathcal{A}_4,$ the group $H$ could be isomorphic to $\mathcal{A}_4,\mathcal{S}_4$ or $\mathcal{A}_5,$ according to the third statement of Proposition \ref{lem1}.
Finally, if $p=2,$ according to the last statement of Proposition \ref{lem1}, $H$ could be a dihedral group $D_n$ where $n$ is an odd integer. We also know that $\mathrm{PGL}_2(\mathbb{F}_4)$ is isomorphic to $\mathcal{A}_5;$ thus, the group $H=\mathrm{PGL}_2(\mathbb{F}_4)$ can be embedded in $\mathrm{PGL}_2(k_0).$
\end{proof}
\section{Lifting Automorphism Group of Cyclic Curves}
As we have already mentioned, according to Grothendieck in \cite{grot}, if the order of the automorphism group $G$ of a smooth curve (of genus $g\geq2$) is not divisible by $p,$ then $G$ is liftable to characteristic zero. Therefore, for the rest of this note, the order of the automorphism group of any curve over $k,$ unless otherwise specified, is assumed to be divisible by $p.$
\
A direct corollary of Lemma \ref{biglem} is the following:
\begin{thm}\label{thm}
Let $(\mathbb{F}_0{|}k_0,v)$ be a valued cyclic function field where the base field $k_0$ is of characteristic $0.$ Suppose that the valuation $v$ is invariant under the action of the group $G_0=\mathrm{Aut}(\mathbb{F}_0{|}k_0)$\footnote{Note that if $\mathbb{F}_0{|}k_0$ has good reduction at $v,$ then $v$ is invariant under $G_0,$ since good reduction is unique for $g\geq1.$}. Denote by $N$ the normal subgroup of $G_0$ as defined in Definition \ref{def1}.
If $G_0$ is isomorphic to the full automorphism group $G$ of the residue function field $\mathbb{F} {|}k ,$ then $G/N$ is isomorphic to one of the groups:
\begin{itemize}
\item $\mathbb{Z}/ p\mathbb{Z};$
\item $D_{p}$ if $p\neq2;$
\item $\mathcal{A}_4$ or a dihedral group $D_n$ where $n$ is a positive odd integer if $p=2;$
\item$\mathcal{A}_5$ in the case when $p\leq5;$
\item$\mathcal{A}_4$ or $\mathcal{S}_4$ when $p=3.$
\end{itemize}
with the following additional possibility:
\begin{itemize}
\item $\mathbb{Z}/m\mathbb{Z}$ or a dihedral group $D_m$ if $p$ divides ${|}N{|},$ the order of the group $N,$ where the integer $m$ is prime to $p.$
\end{itemize}
\end{thm}
\begin{proof}
Suppose that $G_0\simeq G.$ By hypothesis, there exists a transcendental element $x$ of $\mathbb{F}_0$ such that $k_0(x)$ is invariant under $G_0$ and the extension $\mathbb{F}_0{|}k_0(x)$ is Galois with group $N.$ Therefore, via the natural isomorphism between $G_0$ and $G$ and the fact that $N$ is a finite Galois group, the element $x$ is residually transcendental, and the groups $G_0/N$ and $G/N$ are finite subgroups of $\mathrm{PGL}_2(k_0)$ and $\mathrm{PGL}_2(k)$ respectively. Moreover, the isomorphism between $G_0$ and $G$ induces an isomorphism from $G_0/N$ to $G/N.$ If we assume that $p$ does not divide the order of $N,$ then $p$ must divide ${|}G/N{|},$ since $p$ divides the order of $G$ by hypothesis. So, in this case, using Lemma \ref{biglem}, $G_0/N$ must be isomorphic to one of the groups:
$$\mathbb{Z}/p\mathbb{Z}, D_p,\mathcal{A}_5,\mathcal{S}_4,\mathcal{A}_4,$$ with the exceptional group $D_m,$ where $m$ is an odd integer in characteristic $2.$
Now, if $p$ divides ${|}N{|},$ the order of the group $G/N$ could be prime to $p.$ In this case, according to Lemma \ref{biglem}, the group $G/N$ could be isomorphic to one of the following groups:
$$\mathbb{Z}/n\mathbb{Z}, D_n,\mathcal{A}_5,\mathcal{S}_4,\mathcal{A}_4$$ where the integer $n$ is prime to $p.$ The result follows immediately from the list, given in Lemma \ref{biglem}, of finite subgroups of $\mathrm{PGL}_2(k)$ that can be embedded in $\mathrm{PGL}_2(k_0).$
\end{proof}
\
Let us make some observations in this context:
\
Let $X/k$ be a smooth cyclic curve whose automorphism group $G$ is liftable to characteristic $0,$ and denote by $X_0/k_0$ its good lifting to characteristic $0.$ Suppose that the characteristic $p$ of $k$ is odd. Let $\mathbb{F}{|}k$ and $\mathbb{F}_0{|}k_0$ be the function fields corresponding to $X/k$ and $X_0/k_0$ respectively. Since the characteristic of $k$ is $p\neq2,$ we may assume that the function fields $\mathbb{F}_0{|}k_0$ and $\mathbb{F}{|}k$ are defined respectively by the equations:
$$y_0^2=P_0(x_0)$$
and $$y^2=\overline{P_0}(x)$$ with $P_0(x_0)\in \mathcal{O}_{k_0}\left[ x_0\right],$ where $\mathcal{O}_{k_0}$ is the valuation ring of $k_0$ corresponding to the restriction to $k_0$ of $v,$ the good reduction of $\mathbb{F}_0{|}k_0.$ The polynomial $\overline{P_0}$ is the reduction of the polynomial $P_0$ under the Gauss valuation $v_{x_0},$ the prolongation of $v$ to $k_0(x_0).$ The transcendental elements $x$ and $y$ in $\mathbb{F}$ are, respectively, the reductions of the transcendental elements $x_0$ and $y_0$ of $\mathbb{F}_0.$
\
Now, denote by $G_0$ the automorphism group of $\mathbb{F}_0{|}k_0$\footnote{Note that the function field $\mathbb{F}_0{|}k_0$ has the same automorphism group as the cyclic curve $X_0/k_0.$} and by $N$ the normal subgroup of $G_0$ such that the quotient space $X_0/N$ has genus $0.$ From the proof of Theorem \ref{thm}, we know that there is an injective homomorphism
$$\iota: G_0/N\hookrightarrow G/N.$$
The groups $G_0/N$ and $G/N$ are subgroups of $\mathrm{Aut}(k_0(x_0){|}k_0)\simeq\mathrm{PGL}_2(k_0)$ and $\mathrm{Aut}(k(x){|}k)\simeq\mathrm{PGL}_2(k)$ respectively. The restriction to $k_0(x_0)$ of the good reduction on $\mathbb{F}_0$ is the Gauss valuation $v_{x_0}.$ So, we remark that:
\begin{rem}
The injective homomorphism $\iota$ is defined as follows:
\begin{align*}
\iota: & G_0/N\hookrightarrow G/N\\
&\left( \begin{array}{cc}
a & b \\
c & d
\end{array}\right) \mapsto \left( \begin{array}{cc}
\overline{a} & \overline{b} \\
\overline{c} & \overline{d}
\end{array}\right)
\end{align*}
where the representative matrix is chosen with entries $a,b,c$ and $d$ in the valuation ring of $k_0.$ For any such element $u,$ we denote its reduction under the valuation $v$ by $\overline{u}.$
\end{rem}
\
Let us illustrate this with an example:
\begin{exa}
Suppose that $G_0/N=\mathbb{Z}/p\mathbb{Z}.$ Denote by $\sigma$ a generator of the group $G_0/N.$ The generator $\sigma$ cannot be equal to $$\gamma=\left( \begin{array}{cc}
\zeta &0 \\
0 & 1
\end{array}\right)$$ where $\zeta$ is a primitive $p$-th root of unity in $k_0.$ Indeed, the image of $\gamma$ under the homomorphism $\iota$ is the identity in $G/N.$ However, the image under $\iota$ of the element $$\tau=\left(\begin{array}{cc}
\zeta+\zeta^{-1}+1 &-1 \\
1 & 1
\end{array}\right)$$ which is a conjugate of $\gamma$ in $\mathrm{PGL}_2(k_0),$ is
$$\overline{\tau}=\left( \begin{array}{cc}
3 &-1 \\
1 & 1
\end{array}\right).$$ Furthermore, for every odd prime $p,$ the $2\times2$ matrix $\overline{\tau}$ has order $p$ in $\mathrm{PGL}_2(\mathbb{F}_p).$ Note also that $\overline{\tau}$ and $\left( \begin{array}{cc}
1 &1 \\
0 & 1
\end{array}\right)$ are conjugate in $\mathrm{PGL}_2(\mathbb{F}_p).$
\end{exa}
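The claim that $\overline{\tau}$ has order $p$ in $\mathrm{PGL}_2(\mathbb{F}_p)$ can be checked by direct computation. The following sketch (plain Python, no external libraries) computes the projective order of the matrix $\left(\begin{smallmatrix}3&-1\\1&1\end{smallmatrix}\right)$ modulo several odd primes:

```python
# Projective order of tau_bar = [[3,-1],[1,1]] in PGL_2(F_p):
# the least k >= 1 such that tau_bar^k is a scalar matrix mod p.

def mat_mul(A, B, p):
    """Multiply two 2x2 matrices, entries reduced mod p."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

def is_scalar(A, p):
    """True if A is a scalar matrix mod p, i.e. trivial in PGL_2(F_p)."""
    return A[0][1] % p == 0 and A[1][0] % p == 0 and (A[0][0] - A[1][1]) % p == 0

def projective_order(M, p):
    A, k = [row[:] for row in M], 1
    while not is_scalar(A, p):
        A, k = mat_mul(A, M, p), k + 1
    return k

tau_bar = [[3, -1], [1, 1]]
for p in (3, 5, 7, 11, 13):
    assert projective_order(tau_bar, p) == p  # order p, as stated in the example
print("verified")
```

The computation works because $\overline{\tau}-2I$ is a nonzero nilpotent matrix mod every odd prime, so $\overline{\tau}$ is unipotent up to a scalar.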
\
With Theorem \ref{thm}, we are able to solve our lifting problem for a certain type of cyclic curve. Indeed, it is clear that hyperelliptic curves are cyclic curves. The next theorem gives all possible automorphism groups of hyperelliptic curves over $k$ that can be lifted to characteristic $0.$
\begin{thm}\label{thm2}
Let $X/k$ be a smooth projective irreducible hyperelliptic curve. Denote by $G$ its full automorphism group and suppose that $p\neq2.$ Then, the automorphism group $G$ is liftable to characteristic $0$ if and only if, up to isomorphism, $G$ is one of the following groups:
\begin{enumerate}
\item $G=\mathbb{Z}/2p\mathbb{Z};$
\item $G=D_{2p};$
\end{enumerate}
with the exceptional possibilities:
\begin{enumerate}
\item $G=\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5$ or $\mathrm{SL}_2(5)$ if $p=5;$
\item \label{tg} $G=\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_4, \mathbb{Z}/2\mathbb{Z}\times\mathcal{S}_4, \mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5, \mathrm{SL}_2(3),$ or $\mathrm{GL}_2(3)$ in the case when $p=3.$
\end{enumerate}
In particular, the group $G$ is liftable to characteristic $0$ if and only if $G$ is an Oort group for $k.$ We also observe that the order of $G$ is divisible by $p,$ but not by $p^2.$
\end{thm}
\begin{proof}
Suppose that $G$ is liftable to characteristic $0.$ Denote by $\sigma$ the hyperelliptic involution, of order $2,$ in the group $G.$ Let $X_0/k_0$ be a good lifting of $X/k$ such that $G_0=\mathrm{Aut}_{k_0}(X_0)\simeq G.$ The curve $X_0$ is also a hyperelliptic curve (by Proposition \ref{pro}). The isomorphism between $G_0$ and $G$ induces an isomorphism between $G_0/{\langle\sigma\rangle}$ and $H=G/{\langle\sigma\rangle}.$ That is, up to isomorphism, the group $H,$ which is a finite subgroup of $\mathrm{PGL}_2(k),$ can be embedded in $\mathrm{PGL}_2(k_0).$ So, let us first recall the list of possible groups that can occur as full automorphism groups of the hyperelliptic curve $X_0/k_0.$ As far as we know, this list first appeared in \cite{listhyper}:
\
\begin{center}
\begin{tabular}{|c|c|}
\hline
$H$ & $G_0$\\
\hline
$\mathbb{Z}/n\mathbb{Z}$ & $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/n\mathbb{Z}, \mathbb{Z}/2n\mathbb{Z}$\\
\hline
$D_n$ & $\mathbb{Z}/2\mathbb{Z}\times D_n, V_n, D_{2n}, H_n, U_n, G_n$\\
\hline
$\mathcal{A}_4$ & $\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_4, \mathrm{SL}_2(3)$\\
\hline
$\mathcal{S}_4$ & $\mathbb{Z}/2\mathbb{Z}\times\mathcal{S}_4, \mathrm{GL}_2(3), W_2, W_3$ \\
\hline
$\mathcal{A}_5$ & $\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5, \mathrm{SL}_2(5)$ \\
\hline
\end{tabular}
\
\end{center}
where the groups $V_n, H_n, U_n, G_n, W_2$ and $W_3$ are defined as follows:
\begin{align*}
V_n&=\langle x,y \ {|} \ x^4, y^n,(xy)^2, (x^{-1}y)^2\rangle;\\
H_n&=\langle x,y \ {|} \ x^4, (xy)^n,x^2y^2\rangle;\\
U_n&=\langle x,y \ {|} \ x^2, y^{2n},xyxy^{n+1}\rangle;\\
G_n&=\langle x,y \ {|} \ x^2y^n,y^{2n}, x^{-1}yxy\rangle;\\
W_2&=\langle x,y \ {|} \ x^4, y^3,yx^2y^{-1}x^2, (xy)^4\rangle;\\
W_3&=\langle x,y \ {|} \ x^4, y^3,x^2(xy)^4, (xy)^8\rangle.
\end{align*}
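A small consistency check on the table: each $G_0$ is a degree-$2$ central extension of $H$ by $\langle\sigma\rangle,$ so $|G_0|=2|H|.$ For the matrix groups in the table this can be verified by direct enumeration over $\mathbb{F}_3$ (a sketch):

```python
# Consistency check: |GL_2(3)| = 2*|S_4| = 48 and |SL_2(3)| = 2*|A_4| = 24,
# as expected since each G_0 in the table is a degree-2 central extension of H.
from itertools import product

def det(m, p):
    # m = (a, b, c, d) represents the matrix [[a, b], [c, d]]
    return (m[0] * m[3] - m[1] * m[2]) % p

p = 3
gl = [m for m in product(range(p), repeat=4) if det(m, p) != 0]
sl = [m for m in gl if det(m, p) == 1]

assert len(gl) == 48   # = 2 * |S_4|
assert len(sl) == 24   # = 2 * |A_4|
print(len(gl), len(sl))
```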
Now, following Lemma \ref{biglem} and Theorem \ref{thm}, we distinguish $4$ cases:
\begin{itemize}
\item[$\underline{1^{\text{st}} case}:$] If $H\simeq \mathbb{Z}/p\mathbb{Z};$
That is, $G_0/\langle\sigma\rangle\simeq \mathbb{Z}/p\mathbb{Z}.$ The abelian groups $\mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/p\mathbb{Z}$ and $\mathbb{Z}/2p\mathbb{Z}$ are isomorphic since $p$ is an odd prime. According to the list above, we conclude that $G_0$ must be isomorphic to $\mathbb{Z}/2p\mathbb{Z}.$
\item[$\underline{2^{\text{nd}} case}:$] If $H\simeq D_{p};$
According to the list above, if $G_0/\langle\sigma\rangle\simeq D_n$ for a given integer $n,$ then $G_0$ is isomorphic to one of the groups $\mathbb{Z}/2\mathbb{Z}\times D_n, D_{2n}, H_n, U_n, V_n$ and $G_n.$ However, in the cases when $G_0$ would be isomorphic to $V_n, H_n, U_n, G_n$ or $\mathbb{Z}/2\mathbb{Z}\times D_n,$ the integer $n$ must be even (\cite{shaska1}, Remark 6). Since $p$ is assumed to be odd, $G_0$ is isomorphic to $D_{2p}.$
\item[$\underline{3^{\text{rd}} case}:$] If $p\leq5$ and $H\simeq \mathcal{A}_5;$
Let $(\mathbb{F}_0{|}k_0,v)$ be the valued function field associated to $X_0/k_0,$ where $v$ is the valuation such that $\mathbb{F}_0v=\mathbb{F}$ and $k_0v=k.$ Let us consider the following polynomials:
\begin{align*}
R(x)&=x^{30}+522x^{25}-10005x^{20}-10005x^{10}-522x^5+1\\
S(x)&=x^{20}-228x^{15}+494x^{10}+228x^5+1\\
T(x)&=x^{10}+10x+1\\
G_i(x)&=(\lambda_i-1)x^{60}-36(19\lambda_i+29)x^{55}+6(26239\lambda_i-42079)x^{50}\\
&-540(23199\lambda_i-19343)x^{45}+105(737719\lambda_i-953143)x^{40}\\
&-72(1815127\lambda_i-145087)x^{35}-4(8302981\lambda_i+49913771)x^{30}\\
&+72(1815127\lambda_i-145087)x^{25}+105(737719\lambda_i-953143)x^{20}\\
&+540(23199\lambda_i-19343)x^{15}+6(26239\lambda_i-42079)x^{10}\\
&+36(19\lambda_i+29)x^{5}+(\lambda_i-1)\\
L&=\displaystyle{\prod_{i=1}^\delta}G_{i}.
\end{align*}
Let $x$ and $y$ be residually transcendental elements in $\mathbb{F}_0$ such that $\mathbb{F}_0=k_0(x,y)$ and $y^2=F(x).$ According to \cite{shaska} $\S$ 4.5, we may assume that the polynomial $F$ has one of the following forms:
$$F=L, SL, TL, STL, RL, RSL, RTL, RSTL$$ where the $\lambda_i$'s, appearing in the $G_i$'s, are in $k_0$ and $\delta$ is the dimension of the Hurwitz space $\mathcal{H}(G_0,\mathbf{C}),$ the space of the family of covers
$$\varphi: \mathcal{X}_g\rightarrow\mathbb{P}^1$$ with fixed signature $\mathbf{C}$ and genus $g$ (the genus of $X_0$). We recall that the space $\mathcal{H}(G_0,\mathbf{C})$ is a finite dimensional subspace of the moduli space of genus $g$ hyperelliptic curves.
Note that, according to \cite{shaska} Table 1, if $F=L, SL, TL$ or $STL,$ the group $G_0$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5.$ In the other cases, $G_0$ must be isomorphic to $\mathrm{SL}_2(5).$
For $p=3,$ the reductions modulo $3$ of the polynomials $G_{i}(x), R(x)$ and $S(x)$ are respectively $({\lambda}_i-1)(\overline{x}^{10}+1)^6, (\overline{x}^{10}+1)^3$ and $(\overline{x}^{10}+1)^2,$ while the reduction of the polynomial $T$ modulo $3$ has no repeated factors. So, if we assume that $R$ or $ST$ divides the polynomial $F,$ then the genus $\overline{g}$ of the residue function field $k(\overline{x},\overline{y}),$ defined by $$\overline{y}^2=\overline{F}(\overline{x}),$$ satisfies $\overline{g}\geq1.$ Note that $g>\overline{g},$ where $g$ denotes the genus of the function field $\mathbb{F}_0{|}k_0.$ Furthermore, we have
$$\left[k(\overline{x},\overline{y})\ {:} \ k(\overline{x})\right] =2,$$ since the genus of $k(\overline{x},\overline{y})$ is non-zero. On the other hand, we have
$$\left[\mathbb{F} \ {:}\ k(\overline{x})\right]=2$$ and $\mathbb{F}\supseteq k(\overline{x},\overline{y}).$ Hence, $\mathbb{F}=k(\overline{x},\overline{y}).$ This is a contradiction: since the valuation $v$ is a good reduction, we must have $$\overline{g}=g.$$
Therefore, the polynomials $ST$ and $R$ cannot divide the polynomial $F.$ We conclude that
$$F=L, SL \ \text{or} \ TL.$$ In these cases, the residue function field $k(\overline{x},\overline{y})$ is of genus $0,$ which is possible. So $G$ could be isomorphic to $\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5.$
Now, if $p{=}5,$ the reductions modulo $5$ of the polynomials $G_{i}(x),$ $R(x), S(x)$ and $T(x)$ are respectively $(\lambda_i-1)(\overline{x}^2+\overline{x}-1)^{30},$ $(\overline{x}+2)^5(\overline{x}-2)^{25}, (\overline{x}^2+\overline{x}-1)^{10}$ and $(\overline{x}^2-1)^5.$ Using the same arguments as above, we must have
$$F=L, SL, TL, STL, RL \ \text{or} \ RSL,$$ and $G\simeq\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5$ or $\mathrm{SL}_2(5).$
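These reductions can be double-checked with elementary polynomial arithmetic. The sketch below (plain Python, polynomials as coefficient lists) assumes the standard forms of the degree-$30$ and degree-$20$ invariants $R$ and $S,$ with all exponents multiples of $5,$ and verifies the stated factorizations modulo $3$ and modulo $5$:

```python
# Verify reductions mod 3 and mod 5 of the invariants R and S used above
# (assumption: the classical icosahedral forms, all exponents multiples of 5).

def poly_mul(a, b, p):
    """Multiply coefficient lists a, b (index = degree) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow(a, e, p):
    out = [1]
    for _ in range(e):
        out = poly_mul(out, a, p)
    return out

def from_terms(terms, deg, p):
    """Build a coefficient list of degree `deg` from {degree: coefficient}."""
    out = [0] * (deg + 1)
    for d, c in terms.items():
        out[d] = c % p
    return out

# R = x^30 + 522x^25 - 10005x^20 - 10005x^10 - 522x^5 + 1
R = {30: 1, 25: 522, 20: -10005, 10: -10005, 5: -522, 0: 1}
# S = x^20 - 228x^15 + 494x^10 + 228x^5 + 1
S = {20: 1, 15: -228, 10: 494, 5: 228, 0: 1}

# mod 3: R = (x^10 + 1)^3 and S = (x^10 + 1)^2
u = [0] * 11
u[0] = u[10] = 1                     # x^10 + 1
assert from_terms(R, 30, 3) == poly_pow(u, 3, 3)
assert from_terms(S, 20, 3) == poly_pow(u, 2, 3)

# mod 5: R = (x+2)^5 (x-2)^25 and S = (x^2 + x - 1)^10
assert from_terms(R, 30, 5) == poly_mul(poly_pow([2, 1], 5, 5),
                                        poly_pow([-2, 1], 25, 5), 5)
assert from_terms(S, 20, 5) == poly_pow([-1, 1, 1], 10, 5)
print("reductions verified")
```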
\item[$\underline{4^{\text{th}} case}:$] If $H\simeq \mathcal{A}_4$ or $\mathcal{S}_4;$
In both cases, we have $p=3.$ We shall use the same argument, with the same notations, as above.
First, suppose that $H\simeq\mathcal{A}_4.$ We consider the following polynomials:
\begin{align*}
G_i&=x^{12}-\lambda_ix^{10}-33x^8+2\lambda_ix^6-33x^4-\lambda_ix^2+1\\
R&=x^4+2\sqrt{-3}x^2+1\\
S&=x^8+14x^4+1\\
T&=x(x^4-1)\\
L&=\displaystyle{\prod_{i=1}^\delta}G_{i}.
\end{align*}
According to T. Shaska in \cite{shaska}, we may assume that the function field $\mathbb{F}_0=k_0(x,y)$ is defined by
$$y^2=F(x)$$ with
$$F(x)=L, RL, SL, TL, TRL, TSL.$$
In the cases when, $F= L, RL, SL,$ the group $G_0$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_4.$ Otherwise, $G_0\simeq\mathrm{SL}_2(3).$
Among $R, S$ and $T,$ the polynomial $S$ is the only one whose reduction modulo $3$ has repeated factors, and we have $S\equiv \ (x^4+1)^2 \ \mathrm{mod}\ 3.$ Hence, $S$ cannot divide $F$; otherwise, this would contradict the fact that $v$ is a good reduction. So, the possible equations for $\mathbb{F}_0{|}k_0$ are the following:
$$y^2=L, RL, TL, TRL.$$ This means that the group $G_0$ could be isomorphic to either $\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_4$ or $\mathrm{SL}_2(3).$
Now suppose $H\simeq\mathcal{S}_4.$ In this case, the polynomials $G_i, R, S$ and $T$ become:
\begin{align*}
G_i&=x^{24}+\lambda_ix^{20}+(759-4\lambda_i)x^{16}+2(3\lambda_i+1288)x^{12}\\
& \ +(759-4\lambda_i)x^8+\lambda_ix^4+1\\
R&=x^{12}-33x^8-33x^4+1\\
S&=x^8+14x^4+1\\
T&=(x^4-1)\\
L&=\displaystyle{\prod_{i=1}^\delta}G_{i}.
\end{align*}
According to T. Shaska in \cite{shaska}, we may assume that the function field $\mathbb{F}_0=k_0(x,y)$ is defined by
$$y^2=F(x)$$ with
$$F(x)=L, SL, TL, STL, RL, RSL, RTL, RSTL.$$
In the cases when $F= L$ or $SL,$ the group $G_0$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}\times\mathcal{S}_4.$ If $F=TL$ or $STL,$ we have $G_0\simeq\mathrm{GL}_2(3).$ The group $G_0$ is isomorphic to $W_2$ in the cases when $F=RL$ or $RSL,$ and to $W_3$ for the remaining possibilities.
Using the same reasoning again, the reductions modulo $3$ of the polynomials $R$ and $S$ have repeated factors: we have $S\equiv \ (x^4+1)^2 \ \mathrm{mod}\ 3$ and $R\equiv \ (x^4+1)^3 \ \mathrm{mod}\ 3,$ while the reduction of $T$ modulo $3$ has no repeated factors. This means that $R$ and $S$ cannot divide the polynomial $F.$ Hence, we must have:
$$F=L \ \text{or} \ TL.$$ We conclude that $G_0$ could be isomorphic to $\mathbb{Z}/2\mathbb{Z}\times\mathcal{S}_4$ or $\mathrm{GL}_2(3).$
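As a quick sanity check, the two factorizations modulo $3$ used in this case can be confirmed by a short computation (a sketch in plain Python):

```python
# Check S = x^8 + 14x^4 + 1 = (x^4+1)^2 and R = x^12 - 33x^8 - 33x^4 + 1 = (x^4+1)^3 mod 3.

def poly_mul(a, b, p):
    """Multiply coefficient lists a, b (index = degree) mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

q = [1, 0, 0, 0, 1]  # x^4 + 1, coefficients listed from degree 0 upward
S = [c % 3 for c in [1, 0, 0, 0, 14, 0, 0, 0, 1]]                 # degree 8
R = [c % 3 for c in [1, 0, 0, 0, -33, 0, 0, 0, -33, 0, 0, 0, 1]]  # degree 12

assert S == poly_mul(q, q, 3)                   # (x^4+1)^2 mod 3
assert R == poly_mul(poly_mul(q, q, 3), q, 3)   # (x^4+1)^3 mod 3
print("ok")
```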
\end{itemize}
For the converse, we shall use Lemma \ref{rem2}. We know that any cyclic group is an Oort group. So, the groups $\mathbb{Z}/p\mathbb{Z}$ and $\mathbb{Z}/2p\mathbb{Z}$ are liftable to characteristic $0.$
The dihedral group $D_{2p}$ is liftable to characteristic $0$ for $p\neq2$ since $D_{2p}\simeq\mathbb{Z}/2\mathbb{Z}\times D_p$ and $D_p$ with $\mathbb{Z}/p\mathbb{Z}$ are the only cyclic-by-$p$ subgroups which are Oort groups.
Finally, the groups $\mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_4, \mathbb{Z}/2\mathbb{Z}\times\mathcal{S}_4, \mathbb{Z}/2\mathbb{Z}\times\mathcal{A}_5, \mathrm{SL}_2(3), \mathrm{GL}_2(3)$ and $\mathrm{SL}_2(5)$ are liftable to characteristic $0$ using the fact that their cyclic-by-$p$ subgroups are Oort groups for $p=3$ or $5.$
\end{proof}
\
\begin{rem}
One natural question to ask is the following: do all the groups in the list above (Theorem \ref{thm2}) occur as full automorphism groups of hyperelliptic curves? Sanjeewa, in \cite{sanj}, gives a list of all finite groups that can occur as automorphism groups of cyclic curves in any characteristic. All the groups in our list appear in Sanjeewa's list in any prime characteristic, except the group $\mathrm{SL}_2(5)$ in characteristic $5$ and the group $\mathrm{GL}_2(3)$ in characteristic $3.$
However, there exists a hyperelliptic curve in characteristic $3$ which has $\mathrm{GL}_2(3)$ as full automorphism group. The curve is defined as follows:
$$X/k: \ y^2=x^6+x^4+x^2+1 \ (\text{See} \ \cite{Ariyang}).$$
Moreover, according to Shaska in \cite{shaska} Example 5.2, in characteristic $0$, the curve defined by:
$$y^2=x^6+a_1x^4+a_2x^2+1$$ has $\mathrm{GL}_2(3)$ as full automorphism group if and only if $(u_1,u_2)=(-250,50)$ where
$$u_1=a_1^3+a_2^3, \ u_2=2a_1a_2.$$ Therefore, the curve defined in characteristic $0$ by
$$X_0: \ y^2=x^6-5x^4-5x^2+1$$ has $\mathrm{GL}_2(3)$ as full automorphism group. Since $-5\equiv1 \ \mathrm{mod}\ 3,$ the automorphism group of $X/k$ is liftable to characteristic $0.$ Hence, the group $\mathrm{GL}_2(3)$ should appear in the list of Sanjeewa in characteristic $3.$
\end{rem}
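The arithmetic behind this remark is easy to check. The following sketch (plain Python) confirms that $(a_1,a_2)=(-5,-5)$ gives $(u_1,u_2)=(-250,50)$ and that the coefficients reduce modulo $3$ to those of the curve $X/k$:

```python
# Verify the computation in the remark: for y^2 = x^6 + a1*x^4 + a2*x^2 + 1
# with a1 = a2 = -5, the invariants (u1, u2) equal (-250, 50),
# and -5 = 1 mod 3, so the reduction is y^2 = x^6 + x^4 + x^2 + 1.
a1 = a2 = -5

u1 = a1**3 + a2**3
u2 = 2 * a1 * a2
assert (u1, u2) == (-250, 50)        # the GL_2(3) condition quoted from Shaska, Example 5.2

assert a1 % 3 == 1 and a2 % 3 == 1   # -5 = 1 mod 3
print("ok")
```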
\
\begin{rem}
Theorem \ref{thm2} provides the list of the automorphism groups of hyperelliptic curves over $k,$ of characteristic $p\neq2,$ that can be lifted to characteristic $0.$ In the case when the characteristic of $k$ is equal to $2,$ our methods in the proof of the theorem do not seem to apply. The main reason is that in characteristic $2,$ the minimal affine equation of the curve $X$ is given by
$$y^2+P(x)y=F(x), \qquad P(x), F(x)\in k(x),$$ where $P(x)$ is possibly a non-zero polynomial.
\end{rem}
\
For the rest of the note, we assume that the prime characteristic of the base field $k$ is odd.
\
According to Shaska (see \cite{shaska1}), determining the automorphism group $G$ of a cyclic curve $X/k$ in the case when $p>2g+1$ works the same way as in characteristic $0.$ Our next proposition gives a necessary and sufficient condition for $G$ to be liftable to characteristic $0$ in this case.
\begin{pro}
Let $X/k$ be a smooth cyclic curve of genus $g$ and denote by $G$ its full automorphism group. Assume that $p>2g+1.$ Let $n$ be a positive integer such that $(n,p)=1$ and $n$ is the order of the cyclic normal subgroup $N$ of $G$ such that the quotient space $X/N$ has genus $0.$ Then, the group $G$ is liftable to characteristic $0$ if and only if $p$ does not divide the order of $G.$
\end{pro}
\begin{proof}
Assume first that $X/k$ is a hyperelliptic curve. We prove the proposition by contradiction.
Suppose that $G$ is liftable to characteristic zero and that $p$ divides the order of $G.$ We use the same notations as in the proof of Theorem \ref{thm2}.
By hypothesis, since the genus of the curve $X/k$ is always $\geq 2,$ we may assume that $p>5.$ According to Theorem \ref{thm2}, the group $G_0$ is isomorphic to $\mathbb{Z}/2p\mathbb{Z}$ or $D_{2p}.$ If $G_0\simeq\mathbb{Z}/2p\mathbb{Z},$ then the equation of the curve $X_0/k_0$ is one of the following (see \cite{shaska1}):
\begin{align*}
y^2 &=x^{2g+2}+a_1x^{p(t-1)}+\cdots +a_{\delta} x^p+1, t=\frac{2g+2}{p} \\
&=x^{2g+2}+a_1x^{p(t-1)}+\cdots +a_{\delta} x^p+1, t=\frac{2g+1}{p}, \text{or} \\
&=x(x^{pt}+a_1x^{p(t-1)}+\cdots +a_{\delta} x^p+1), t=\frac{2g}{p}.
\end{align*}
In all cases, we have $pt\leq2g+2\leq p.$ Since $p$ is odd and $2g+2$ is even, we cannot have $p=pt=2g+2,$ so $p>pt$ in every case; that is, $t<1,$ which is impossible. Thus, $G$ is not isomorphic to $\mathbb{Z}/2p\mathbb{Z}.$
Now, if $G\simeq D_{2p},$ then the equation of the curve $X_0/k_0$ is of the form:
$$y^2=x.\prod_{{i=1}}^t(x^{2p}+\lambda_ix^p+1), \ t=\frac{g+1}{p}.$$
But by hypothesis $p\geq 2g+2,$ which contradicts $pt=g+1$ for some integer $t\geq 1$: indeed $pt\geq p\geq 2g+2>g+1.$
The same reasoning applies when the cyclic curve is not necessarily hyperelliptic. Indeed, with the assumption $(n,p)=1,$ the genus of the corresponding function field is
$$g=\frac{n-1}{2}(-1+pt) \ \text{or} \ (n-1)(-1+pt)$$ for some positive integer $t.$ In either case $pt\leq 2g+1<p\leq pt,$ which contradicts the hypothesis $p>2g+1.$
\end{proof}
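The counting argument in the proof can also be checked by brute force. The following sketch (our illustration, not part of the original argument; the search bounds are ad hoc) scans small triples $(g,n,p)$ with $p>2g+1$ and $\gcd(n,p)=1$ and confirms that neither genus formula admits a positive integer $t$:

```python
from math import gcd

def admissible_t(g, n, p):
    """All t >= 1 with 2g = (n-1)(pt-1) or g = (n-1)(pt-1)."""
    return [t for t in range(1, 2 * g + 2)
            if 2 * g == (n - 1) * (p * t - 1) or g == (n - 1) * (p * t - 1)]

odd_primes = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
violations = [(g, n, p)
              for g in range(2, 11)              # genus >= 2
              for n in range(2, 2 * g + 3)       # order of the subgroup N
              for p in odd_primes
              if p > 2 * g + 1 and gcd(n, p) == 1 and admissible_t(g, n, p)]
print(violations)   # [] -- no admissible t, as the proof predicts
```

At the boundary $p=2g+1$ solutions do appear (e.g.\ $g=2$, $n=2$, $p=5$, $t=1$), which is why the strict inequality in the hypothesis matters.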
\
Considering the results we have so far, we expect the following generalisation of Theorem \ref{thm2}:
\begin{conj}\label{conjc}
Let $X$ be a smooth cyclic curve over $k.$ Let $n$ be a positive integer such that $(2n,p)=1$ and $n$ is the order of a cyclic normal subgroup $N$ of $G=\mathrm{Aut}_{k}(X)$ such that the quotient space $X/N$ has genus $0.$ The group $G$ is liftable to characteristic $0$ if and only if $G$ is an Oort-group for $k.$
\end{conj}
Note that if Conjecture \ref{conjc} is true, using the results of Sanjeewa in \cite{sanj}, we have a complete list of all liftable automorphism groups of cyclic curves with the same hypothesis as in the conjecture. In the case when $p$ divides $n,$ we might have an automorphism group which is liftable to characteristic $0$ but not an Oort group for $k.$
| {
"timestamp": "2016-02-02T02:13:43",
"yymm": "1602",
"arxiv_id": "1602.00418",
"language": "en",
"url": "https://arxiv.org/abs/1602.00418",
"abstract": "Let X be a smooth projective hyperelliptic curve over an algeraically closed field k of prime characteristic p. The aim of this note is to find necessary and sufficient conditions on the automorphism group of the curve X to be lifted to characteristic zero. The results will be generalised for a certain family of curves that we call cyclic curves.",
"subjects": "Algebraic Geometry (math.AG)",
"title": "Lifting Problem on Automorphism Groups of Cyclic Curves",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138151101527,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7087156831807934
} |
https://arxiv.org/abs/math/0604459 | A Reproducing Kernel Condition for Indeterminacy in the Multidimensional Moment Problem | Using the smallest eigenvalues of Hankel forms associated with a multidimensional moment problem, we establish a condition equivalent to the existence of a reproducing kernel. This result is a multivariate analogue of Berg, Chen,and Ismail's 2002 result. We also present a class of measures for which the existence of a reproducing kernel implies indeterminacy. | \section{Introduction}
In \cite{BCI}, Berg, Chen, and Ismail find a new condition equivalent to determinacy in the one-dimensional moment
problem.
\begin{theorem}[Berg, Chen, and Ismail, 2002] \label{T:BCI}
Let $\lambda_N$ be the smallest eigenvalue of the truncated Hankel matrix $H_N$ for the measure $\mu$. Then
$\lambda_N \to 0$ as $N \to \infty$ if and only if $\mu$ is determinate.
\end{theorem}
They use the classical fact that a measure is indeterminate if and only if a reproducing kernel exists on the Hilbert
space in which the polynomials are dense. Additionally, a reproducing kernel exists if and only if the sum
\[
\sum_{k=0}^\infty |P_k(z_0)|^2
\]
converges for some $z_0 \in \mathbb{C} \backslash \mathbb{R}$, where $\{P_k\}$ is the standard set of orthonormal polynomials. These
results go back to M. Riesz and may be found in \cite{A}.
Let $\mu$ be a positive measure such that $s_n = \int_\mathbb{R} x^n \, d\mu$ is finite for all $n$. For each $N \in \mathbb{N}$, we
define the $N^\text{th}$ Hankel matrix to be
\[
H_N = (s_{i+j})_{i,j=0}^N.
\]
Since $\mu$ is positive, this implies that $H_N$ is positive semidefinite for each $N$. We let $\lambda_N$ be the smallest
eigenvalue of $H_N$ and note that Cauchy's interlace theorem implies that $\lambda_N$ decreases as $N$ increases. If
$\lambda_N = 0$ for some $N$, then $\lambda_n = 0$ for all $n \geq N$, and the measure is a finite sum of point masses
\cite{A}.
This case implies determinacy, since any measure with compact support is automatically determinate. If $\mu$
has infinite support, then $\lambda_N > 0$ for every $N$.
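As a concrete illustration (ours, not from \cite{BCI}): for Lebesgue measure on $[0,1]$ the moments are $s_n = 1/(n+1)$, so $H_N$ is the Hilbert matrix; this measure has compact support, hence is determinate, and the smallest eigenvalues indeed decrease to $0$:

```python
import numpy as np

# Moments of Lebesgue measure on [0,1]: s_n = 1/(n+1), so H_N is the
# (N+1) x (N+1) Hilbert matrix; the measure is determinate (compact support).
def hankel(N):
    s = [1.0 / (k + 1) for k in range(2 * N + 1)]
    return np.array([[s[i + j] for j in range(N + 1)] for i in range(N + 1)])

# eigvalsh returns eigenvalues in ascending order; [0] is the smallest.
lams = [np.linalg.eigvalsh(hankel(N))[0] for N in range(1, 8)]
print(lams)   # positive, decreasing (Cauchy interlacing), tending to 0
```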
Berg, Chen, and Ismail show that $\lambda_N \to \gamma > 0$ if and only if a reproducing kernel exists for the polynomials,
which gives the conclusion. We propose to extend this to a similar question in the multidimensional setting.
Let $\alpha = (\alpha_1, \alpha_2, \ldots, \alpha_d) \in \mathbb{N}_0^d$ be a multi-index, $x = (x_1, x_2, \ldots, x_d) \in \mathbb{R}^d$,
and let $x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_d^{\alpha_d}$. We shall use the standard notation of $|\alpha|
= \alpha_1 + \cdots + \alpha_d$. For a given multisequence $\{ s_\alpha \}$, define a linear functional $L : \mathbb{C}[x] \to \mathbb{C}$
by $L(x^\alpha) = s_\alpha$, and extend linearly.
We say the multisequence $\{s_\alpha\}$ and the associated functional $L$ are positive if
for every polynomial $p \in \mathbb{C}[x]$, we have $L(p \overline{p}) \geq 0$. When $L$ is positive, we construct a pre-inner
product by $\langle p,q \rangle = L(p \overline{q})$. If we consider the ideal $\mathcal{N} = \{ p \in \mathbb{C}[x] : \langle p,p \rangle
=0 \}$ then $\langle \cdot, \cdot \rangle$ acting on $\mathbb{C}[x] / \mathcal{N}$ becomes a positive definite form, and we complete
this to a Hilbert space $\mathcal{P}$ in which $\mathbb{C}[x] / \mathcal{N}$ is dense. This is the standard GNS construction and may be found in
full detail in \cite{F}.
We will also assume a normalizing condition of $s_{(0,0,\ldots,0)} = 1$ meaning that we deal with probability measures.
This assumption also simplifies many calculations.
We define the Hankel kernel indexed over $\alpha$ associated with the multisequence $\{ s_\alpha \}$ by
\[
H = ( s_{\alpha + \beta} )_{\alpha, \beta \in \mathbb{N}_0^d},
\]
and for any $N \in \mathbb{N}$ the $N^\text{th}$ truncation of $H$ by
\[
H_N = ( s_{\alpha + \beta} )_{0 \leq |\alpha|, |\beta| \leq N}.
\]
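A small sketch (ours) of how $H_N$ can be assembled in the bivariate case, using the product moments $s_\alpha = \frac{1}{(\alpha_1+1)(\alpha_2+1)}$ of the uniform measure on $[0,1]^2$ as sample data and a graded ordering of the multi-indices:

```python
import itertools
import numpy as np

# Multi-indices alpha in N_0^d with |alpha| <= N, listed in graded order.
def multi_indices(N, d=2):
    box = itertools.product(range(N + 1), repeat=d)
    return sorted((a for a in box if sum(a) <= N), key=lambda a: (sum(a), a))

def s(alpha):   # moments of the uniform measure on [0,1]^2
    return 1.0 / ((alpha[0] + 1) * (alpha[1] + 1))

def truncated_hankel(N):
    idx = multi_indices(N)
    return np.array([[s(tuple(x + y for x, y in zip(al, be)))
                      for be in idx] for al in idx])

HN = truncated_hankel(3)
print(HN.shape)   # (10, 10): ten multi-indices with |alpha| <= 3 when d = 2
# Smallest eigenvalue is positive: for this measure the ideal is trivial.
print(np.linalg.eigvalsh(HN)[0] > 0)   # True
```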
One significant difference between the multidimensional and one dimensional cases is that in one variable, a positive
measure $\mu$ exists in $\mathbb{R}$ which represents $L$ in the sense that $L(p) = \int_\mathbb{R} p \, d\mu$ if and only if $L$ is
positive, whereas there exist positive $L$ for which there are no such $\mu$ on $\mathbb{R}^d$ if $d \geq 2$, examples of which
were concurrently discovered in \cite{BCJ} and
\cite{Sch}. In the case where such a $\mu$ exists, its support must lie in the real algebraic variety generated by the
ideal $\mathcal{N}$. If we denote by $\lambda_N$ the smallest eigenvalue of the matrix $H_N$, then $\lambda_N \geq 0$ for all $N
\in \mathbb{N}$ and equality holds for some $N$ if and only if $\mathcal{N}$ is a nontrivial ideal of $\mathbb{C}[x]$. For the remainder of this
article we will assume that $\mathcal{N} = (0)$ so that $\lambda_N > 0$ for all $N$.
If we try to bring Berg, Chen, and Ismail's result to the multivariate case we must find a new statement since there exist
indeterminate measures $\mu$ for which $\lambda_N \to 0$ as $N \to \infty$.
To show this we use a theorem of Petersen~\cite{P}.
\begin{theorem}[L. C. Petersen]
Let $\mu$ be a moment measure on $\mathbb{R}^d$. Consider the projection $\pi_j: \mathbb{R}^d \to \mathbb{R}$ onto the $j^{th}$ coordinate
and the projection measures $\mu_j := \pi_{j *}(\mu)$ on $\mathbb{R}$.
\begin{itemize}
\item[(a)] If each $\mu_j$ is determinate, then so is $\mu$.
\item[(b)] If we assume that $\mu = \mu_1 \otimes \mu_2 \otimes \cdots \otimes \mu_d$, then $\mu$ is
determinate if and only if $\mu_j$ is determinate for $1 \leq j \leq d$.
\end{itemize}
\end{theorem}
\begin{example}
Let $\mu_1$ and $\mu_2$ be measures on $\mathbb{R}$ admitting all moments with $\mu_1$ determinate and $\mu_2$ indeterminate.
Let $\nu$ be a measure on $\mathbb{R}$ distinct from $\mu_2$ but possessing the same moments. Define $\mu := \mu_1 \otimes
\mu_2$. Clearly $\mu$ is indeterminate since $\mu_1 \otimes \nu$ gives the same moments as $\mu$. We now show that
for $\mu$, $\lambda_N \to 0$ as $N \to \infty$.
Consider the matrix $J_N = (s_{(m + n,0)})_{0 \leq m,n \leq N}$, and let $\eta_N$ be its smallest eigenvalue. Then
$J_N$ is a principal submatrix of the matrix $H_N$, and by their self-adjointness and Cauchy's
interlace theorem we see that $0 \leq \lambda_N \leq \eta_N$. Note that $J_N$ is the $N^\text{th}$ Hankel matrix
associated with the measure $\mu_1$ which is determinate in $\mathbb{R}$, whence $\eta_N \to 0$ as $N \to \infty$ by
\cite{BCI}. Thus $\mu$ is an indeterminate measure such that $\lambda_N \to 0$.
\end{example}
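The interlacing step of the example is easy to see numerically. The sketch below (ours) uses a determinate product measure, uniform on $[0,1]^2$, purely to exhibit the inequality $0 \le \lambda_N \le \eta_N$; the example's point is that the same inequality forces $\lambda_N \to 0$ whenever the first marginal is determinate:

```python
import itertools
import numpy as np

s1 = lambda n: 1.0 / (n + 1)          # moments of mu_1 = uniform on [0,1]
s = lambda a: s1(a[0]) * s1(a[1])     # product moments s_{(a1, a2)}

N = 4
idx = sorted((a for a in itertools.product(range(N + 1), repeat=2)
              if sum(a) <= N), key=lambda a: (sum(a), a))
HN = np.array([[s((a[0] + b[0], a[1] + b[1])) for b in idx] for a in idx])
# J_N = (s_{(m+n,0)}) sits inside H_N as the principal submatrix indexed
# by the multi-indices (m, 0), so Cauchy interlacing gives lambda_N <= eta_N.
JN = np.array([[s1(m + n) for n in range(N + 1)] for m in range(N + 1)])

lam = np.linalg.eigvalsh(HN)[0]
eta = np.linalg.eigvalsh(JN)[0]
print(0 < lam <= eta)   # True
```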
A reproducing kernel for $\mathcal{P}$, the completion of the polynomials on $\mathbb{C}^d$, is a function $K : \mathbb{C}^d \times \mathbb{C}^d \to \mathbb{C}$
which has the reproducing property, i.e.\ so that for any polynomial $p(x)$,
\[
p(y) = \langle p(x), K(x,y) \rangle_\mathcal{P}.
\]
We construct a complete system of orthonormal polynomials $\{ P_\alpha \}_{\alpha \in \mathbb{N}_0^d}$ which serve as an
orthonormal basis of $\mathcal{P}$. Typically one works with orthonormal polynomials in which $\deg(P_\alpha)$ is
increasing with $|\alpha|$, and here we assume that $\deg(P_\alpha) = |\alpha|$. This may be achieved by ordering
$\{ x^\alpha \}_{\alpha \in \mathbb{N}_0^d}$ in a graded lexicographical order then using the Gram-Schmidt procedure to obtain an
orthonormal system of polynomials. The construction of orthonormal polynomials from the monomials may be found in
\cite{DX}.
Using this basis of orthonormal polynomials, we find
\[
K(x,y) = \sum_{\alpha \in \mathbb{N}_0^d} P_\alpha(x) \overline{P_\alpha (y)},
\]
and for a fixed $y \in \mathbb{C}^d$, this is a function in $\mathcal{P}$ provided
\begin{equation} \label{E:sum}
\sum_{\alpha \in \mathbb{N}_0^d} |P_\alpha (y)|^2
\end{equation}
is finite. It follows that if this sum is finite, point evaluation at $y$ is a bounded linear functional in $\mathcal{P}$, i.e.\
there is a constant $C_y$
so that $|p(y)|^2 \leq C_y L(p \overline{p})$. Riesz's theorem provides a representation of this functional by the element
$K(x,y)$ as a function in the variable $x$.
Our focus will be on the sum \eqref{E:sum} to show that it converges for every $y \in \mathbb{C}^d$. In particular we will want
this sum to be uniformly bounded on compact subsets of $\mathbb{C}^d$. More information on reproducing kernels may be found in
\cite{Sai}.
\section{Main Result}
For $R > 0$ define the $R$-scaling of the multisequence $\{ s_\alpha \}$ to be $\{ \frac{s_\alpha}{R^{|\alpha|}} \}$
and the associated truncated Hankel matrices from this $R$-scaling:
\[
H_{R,N} = \left( \frac{s_{\alpha + \beta}}{R^{|\alpha + \beta|}} \right)_{0 \leq |\alpha|, |\beta| \leq N}.
\]
Note that if $\{ s_\alpha \}$ is the moment multisequence of a measure $\sigma$, then the $R$-scaling of $s$ is the
moment multisequence of the measure $\sigma_R$, where $\sigma_R(B) = \sigma(RB)$ for every Borel set $B$; we define $RB
= \{ R b : b \in B \}$ in the natural way and define the $R$-scaling of the measure $\sigma$ to be $\sigma_R$. In the
following theorem, we only assume that $\{s_\alpha\}$ is a positive multisequence, not necessarily that it is a moment
multisequence as we do in the one dimensional case.
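In matrix terms the $R$-scaling is just a diagonal conjugation: since $R^{|\alpha+\beta|} = R^{|\alpha|} R^{|\beta|}$, we have $H_{R,N} = D H_N D$ with $D = \mathrm{diag}(R^{-|\alpha|})$. A quick check of this identity (our sketch, with the uniform product moments used only as sample data):

```python
import itertools
import numpy as np

s1 = lambda n: 1.0 / (n + 1)
s = lambda a: s1(a[0]) * s1(a[1])     # sample bivariate moment multisequence

N, R = 3, 2.0
idx = sorted((a for a in itertools.product(range(N + 1), repeat=2)
              if sum(a) <= N), key=lambda a: (sum(a), a))
HN = np.array([[s((a[0] + b[0], a[1] + b[1])) for b in idx] for a in idx])
H_RN = np.array([[s((a[0] + b[0], a[1] + b[1])) / R ** (sum(a) + sum(b))
                  for b in idx] for a in idx])      # direct definition
D = np.diag([R ** -sum(a) for a in idx])
print(np.allclose(D @ HN @ D, H_RN))   # True: H_{R,N} = D H_N D
```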
We define $\mathcal{P}_R$ to be the Hilbert space associated with the $R$-scaling of $\{ s_\alpha \}$.
If $f(x)$ is in the Hilbert space $\mathcal{P}$ associated with the sequence $\{s_\alpha\}$, then
the mapping $\phi_R : \mathcal{P} \to \mathcal{P}_R$ given by $\phi_R(f)(x) = f(Rx)$ is an isometry. We will sometimes denote
$\phi_R(f)$ by $f_R$.
Using this isometry, we also relate a system of orthonormal polynomials of the original form to a system of the
scaled form by letting $P_{R,\alpha}(z) = P_\alpha (R z)$. We choose a set of orthonormal polynomials so that
$\deg(P_\alpha) = \deg(P_{R,\alpha}) = |\alpha|$ (see \cite{DX}).
Let $\lambda_{R,N}$ be the smallest eigenvalue of $H_{R,N}$, and let $\lambda_{R,N} \to \gamma_R$ as $N \to \infty$. We
now state the main result.
\begin{theorem} \label{T:R}
Assume that $\lambda_{R,N} > 0$ for all $N \in \mathbb{N}$ and some $R > 0$. A reproducing kernel for the polynomials exists
which is uniformly bounded on compact sets if and only if
$\gamma_R > 0$ for every $R > 0$.
\end{theorem}
\begin{proof}
This proof follows closely that of Berg, Chen, and Ismail. The primary difference is the use of scalings of
a multisequence. This stems from the fact that it is an open problem whether, in the multivariate case,
convergence of the sum $\sum_\alpha |P_\alpha(z)|^2$ at some nonreal $z$ implies the existence of a reproducing
kernel, as it does in the one variable case.
We begin by writing the smallest eigenvalue of $H_{R,N}$ as the Rayleigh quotient
\[
\lambda_{R,N} = \min \left\{ \sum_{|\alpha| \leq N} \sum_{|\beta| \leq N}
\frac{s_{\alpha + \beta}}{R^{|\alpha + \beta|}} v_\alpha \overline{v_\beta} : \sum_{|\alpha| \leq N}
|v_\alpha|^2 = 1 \right\}.
\]
Note that if $\lambda_{R,N} > 0$ for some $R > 0$ and $N \in \mathbb{N}$, then $\lambda_{S,N} > 0$ for any $S > 0$.
To see this, let $L_R$ be the functional derived from the multisequence $\left\{
\frac{s_\alpha}{R^{|\alpha|}} \right\}$; then for any polynomial $p(x)$ we have $L_R(p(x)) = L_S\!\left(p\!\left(\tfrac{S x}{R}\right)\right)$.
We also write for $p(x) = \sum_{|\alpha| \leq N} a_\alpha x^\alpha$,
\[
\lambda_{R,N} = \min \left\{ L_R(|p|^2) : \sum_{|\alpha| \leq N} |a_\alpha|^2 = 1, \: \deg(p) \leq N \right\},
\]
therefore if $L_R(|p|^2) = 0$ for some polynomial $p$, then we conclude that $\lambda_{S,N} = 0$ for any $S > 0$.
We note that the monomials are orthonormal with respect to normalized Lebesgue measure on the unit torus, so we
rewrite the smallest eigenvalue as
\[
\lambda_{R,N} = \min \left\{ L_R(|p|^2) : \int_{\theta \in [0,2 \pi]^d}
|p(e^{i \theta})|^2 \, \frac{d \theta}{(2 \pi)^d} = 1 \text{, } \deg(p) \leq N \right\}.
\]
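The rewriting above rests on the fact that the monomials are orthonormal in $L^2$ of the normalized torus. A quick numerical confirmation (ours; an $M$-point Riemann sum is exact here for frequency differences below $M$):

```python
import numpy as np

# Gram matrix of the monomials z^0, ..., z^5 with respect to normalized
# Lebesgue measure on the unit circle, computed by an exact M-point sum.
M = 64
theta = 2 * np.pi * np.arange(M) / M
G = np.array([[np.mean(np.exp(1j * (m - n) * theta))
               for n in range(6)] for m in range(6)])
print(np.allclose(G, np.eye(6)))   # True: the monomials are orthonormal
```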
Since $\lambda_{R,N} > 0$ we write
\[
\frac{1}{\lambda_{R,N}} = \max \left\{ \int_{\theta \in [0,2 \pi]^d} |p(e^{i \theta})|^2 \frac{d \theta}{
(2 \pi)^d} : L_R(|p|^2) = 1 \text{, } \deg(p) \leq N \right\}.
\]
We rewrite the polynomial $p(x) = \sum_{|\alpha| \leq N} c_\alpha P_{R,\alpha}(x)$, where $\{P_{R,\alpha}\}$ are
the standard orthonormal polynomials with respect to a fixed ordering in which $m \geq n \Rightarrow |\alpha_m| \geq
|\alpha_n|$ for $m,n \in \mathbb{N}$ (see \cite{DX}). Now considering the matrix $\mathcal{K}_R$, where
\[
\mathcal{K}_{R,\alpha,\beta} = \int_{\theta \in [0,2 \pi]^d} P_{R,\alpha} (e^{i \theta}) \overline{P_{R,\beta}(e^{i
\theta})} \frac{d \theta}{(2 \pi)^d},
\]
we express the equation above as another eigenvalue problem:
\[
\frac{1}{\lambda_{R,N}} = \max \left\{ \sum_{|\alpha| \leq N} \sum_{|\beta| \leq N} \mathcal{K}_{R,\alpha,\beta}
c_\alpha \overline{c_\beta} : \sum_{|\alpha| \leq N} |c_\alpha|^2 = 1 \right\}.
\]
We also let $\mathcal{K}_{R,N} = (\mathcal{K}_{R,\alpha,\beta})_{|\alpha|,|\beta| \leq N}$ be a truncation of the matrix $\mathcal{K}_R$. Note
that since
\[
\int_{\theta \in [0,2 \pi]^d} \left| \sum_{|\alpha| \leq N} c_\alpha P_{R,\alpha}(e^{i \theta}) \right|^2
\frac{d \theta}{(2 \pi)^d} = \sum_{|\alpha|,|\beta| \leq N}
c_\alpha \overline{c_\beta} \mathcal{K}_{R,\alpha,\beta} > 0
\]
when $p(z) = \sum_{|\alpha| \leq N} c_\alpha P_{R,\alpha}(z)$ is not the zero polynomial, the matrix $\mathcal{K}_{R,N}$ is
positive definite.
($\Rightarrow$) Suppose a reproducing kernel exists, i.e.\
\[
\sum_{\alpha \in \mathbb{N}_0^d} |P_\alpha (z)|^2
\]
is uniformly bounded on compact subsets of $\mathbb{C}^d$.
In particular, for $z$ on the torus of radius $R$, there is some
$M < \infty$ such that $\sum_{\alpha \in \mathbb{N}_0^d} |P_\alpha (z)|^2 \leq M$. If we consider the $R$-scaling of the
sequence and the resulting orthonormal polynomials, this implies that
\[
\sum_{\alpha \in \mathbb{N}_0^d} |P_{R,\alpha} (z)|^2 \leq M
\]
for all $z$ on the unit torus. Note that the $N^\text{th}$ partial sum is equal to
the trace of $\mathcal{K}_{R,N}$. Since $\mathcal{K}_{R,N}$ is positive, the trace is at least the largest eigenvalue. Thus
\begin{align*}
\frac{1}{\lambda_{R,N}} &\leq \text{tr}(\mathcal{K}_{R,N})\\
&= \sum_{|\alpha| \leq N} \left( \int_{\theta \in [0,2 \pi]^d} |P_{R,\alpha} (e^{i \theta})|^2
\frac{d \theta}{(2 \pi)^d} \right) \\
&= \int_{\theta \in [0, 2 \pi]^d} \left( \sum_{|\alpha| \leq N} |P_{R,\alpha} (e^{i \theta})|^2
\right) \frac{d \theta}{(2 \pi)^d}\\
&\leq \int_{\theta \in [0,2 \pi]^d} M \frac{d \theta}{(2 \pi)^d}\\
&= M < \infty.
\end{align*}
Hence $\lambda_{R,N} \geq \frac{1}{M}$ for all $N$, and thus is bounded away from zero as $N \to \infty$.
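The only linear-algebra input in the chain above is that, for a positive semidefinite matrix, the trace (the sum of the nonnegative eigenvalues) dominates the largest eigenvalue; a minimal check (ours):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
K = A @ A.T   # positive semidefinite by construction
# trace = sum of (nonnegative) eigenvalues >= largest eigenvalue
print(np.trace(K) >= np.linalg.eigvalsh(K)[-1])   # True
```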
($\Leftarrow$) Suppose $\lambda_{R,N} \to \gamma_R > 0$ as $N \to \infty$. From this we conclude that the largest
eigenvalue of $\mathcal{K}_{R,N}$ is bounded by $\frac{1}{\gamma_R}$ for all $N$, and since it is positive, $\| \mathcal{K}_{R,N} \|$
is also
bounded by $\frac{1}{\gamma_R}$. We wish to show that for each compact $E \subseteq \mathbb{C}^d$, there is some $M <
\infty$ so that $\sum_{\alpha \in \mathbb{N}_0^d} |P_\alpha(z)|^2 < M$ for all $z \in E$.
For an arbitrary set of complex numbers $\{c_\alpha\}_{|\alpha| \leq N}$, the boundedness of the operator $\mathcal{K}_{R,N}$
implies
\[
\sum_{|\alpha|,|\beta| \leq N} c_\alpha \overline{c_\beta} \mathcal{K}_{R,\alpha,\beta} \leq \frac{1}{\gamma_R}
\sum_{|\alpha| \leq N} |c_\alpha|^2,
\]
and by letting $p(z) = \sum_{|\alpha| \leq N} c_\alpha P_{R,\alpha}(z)$, this is equivalent to
\begin{equation}\label{E:BCI}
\int_{\theta \in [0,2 \pi]^d} |p(e^{i \theta})|^2 \frac{d \theta}{(2 \pi)^d} \leq \frac{1}{\gamma_R}
L_R(|p|^2).
\end{equation}
Now let $E$ be a compact set in $\mathbb{C}^d$ and let $R > 0$ be a real number such that $E$ is a subset of the open
polydisk $D(0,R)^d$. Then the set $E_R := \frac{1}{R} E$ is contained in the unit polydisk and if $p$ is a
polynomial, then
\[
p(z) = p_R \left( \frac{z}{R} \right),
\]
so values taken by $p(z)$ on $E$ are the same as values taken by $p_R(z)$ on $E_R$.
Let $y \in E_R$. By Cauchy's integral formula,
\[
p_R(y) = \frac{1}{(2 \pi)^d} \int_{[0,2 \pi]^d} \frac{p_R(e^{i \theta})}{(e^{i \theta_1} - y_1)
(e^{i \theta_2} - y_2) \cdots (e^{i \theta_d} - y_d)} e^{i (\theta_1 + \theta_2 + \cdots + \theta_d)} \,
d\theta.
\]
Using H\"older's inequality, we obtain
\[
|p_R(y)|^2 \leq \int_{\theta \in [0,2 \pi]^d} |p_R(e^{i \theta})|^2 \frac{d \theta}{(2 \pi)^d} \cdot \int_{\theta
\in [0,2 \pi]^d} \frac{1}{|y_1 - e^{i \theta_1}|^2 \cdots |y_d - e^{i \theta_d}|^2} \frac{d \theta}{
(2 \pi)^d}.
\]
Letting
\[
M = \max \left\{ \frac{1}{\gamma_R} \int_{\theta \in [0,2 \pi]^d} \frac{1}{|w_1 - e^{i \theta_1}|^2 \cdots
|w_d - e^{i \theta_d}|^2} \frac{d \theta}{(2 \pi)^d} : w \in E_R \right\},
\]
we combine this with~\eqref{E:BCI}, so for any polynomial $p_R$,
\[
|p_R(y)|^2 \leq M L_R( |p_R(x)|^2 ).
\]
Now we pick the particular polynomial $p_R(z) = \sum_{|\alpha| \leq N} \overline{P_{R,\alpha}(y)} P_{R,\alpha}(z)$
and apply this inequality:
\[
\left| \sum_{|\alpha| \leq N} |P_{R,\alpha}(y)|^2 \right|^2 \leq M \sum_{|\alpha| \leq N} |P_{R,\alpha}(y)|^2,
\]
which becomes
\[
\sum_{|\alpha| \leq N} |P_{R,\alpha}(y)|^2 \leq M.
\]
Since there is a uniform bound over all $N$, this implies
\[
\sum_{\alpha \in \mathbb{N}_0^d} |P_{R,\alpha}(y)|^2 \leq M,
\]
thus the sum $\sum_\alpha |P_{R,\alpha}(z)|^2$ is bounded uniformly by $M$ on $\frac{1}{R} E$, and hence the sum
$\sum_\alpha |P_\alpha(z)|^2$ is bounded uniformly by $M$ on $E$; such a reproducing kernel therefore exists.
\end{proof}
This theorem, along with some considerations from the one variable case, gives the proof of Theorem \ref{T:BCI}. In this
case $\{s_n\}$ is a positive sequence exactly when there is a positive measure $\mu$ which represents $L$ in the
sense that
\[
L(p) = \int_\mathbb{R} p(x) \, d\mu.
\]
In one dimension it is known that the sum $\sum_{n = 0}^\infty |P_n(z)|^2 < \infty$ for some $z \in \mathbb{C} \backslash \mathbb{R}$ if
and only if the sum converges uniformly on compact subsets of $\mathbb{C}$. Also in this case, this sum converges if and only if
the measure $\mu$ associated with the moment sequence is indeterminate (see \cite{A}).
So assuming that $\mu$ on $\mathbb{R}$ is indeterminate, then there exists a reproducing kernel for the Hilbert space of
polynomials implying that $\sum_{n = 0}^\infty |P_n(z)|^2$ converges uniformly on compact subsets of $\mathbb{C}$. By Theorem
\ref{T:R} this leads us to the conclusion that $\gamma_1 > 0$, i.e.\ the smallest eigenvalues of $H_N$ are uniformly
bounded away from 0.
Now assuming that $\gamma_1 > 0$, the proof of Theorem \ref{T:R} implies that the sum $\sum_{n = 0}^\infty |P_n(z)|^2$
is bounded uniformly on
compact subsets of the open unit disk. Thus there is some $z_0 \in \mathbb{D} \backslash \mathbb{R}$ so that $\sum_{n = 0}^\infty
|P_n(z_0)|^2 < \infty$, hence the measure $\mu$ is indeterminate.
\section{Application}
We would like to know more about $\gamma_R$. If we consider it as a function $\gamma : (0,\infty) \to [0,\infty)$ given by
$\gamma(R) = \gamma_R$, then Theorem \ref{T:R} implies that if $\gamma(R)> 0$, then $\gamma(S) > 0$ for all $S < R$. We
see this since $\gamma(R) > 0$ implies that a reproducing kernel for $\mathcal{P}$ exists and converges uniformly on compact
subsets of the polydisk of radius $R$. Then in the converse portion of the argument, the fact that a reproducing kernel
exists which converges uniformly on the torus of radius $S$ is used to show that $\lambda_{S,N}$ are uniformly bounded away
from zero as $N \to \infty$, hence $\gamma(S) > 0$.
If a multisequence in any number of variables can be represented by a measure $\sigma$, then $\sigma$ is indeterminate if
and only if every $R$-scaling $\sigma_R$ is as well. In one
variable, Berg, Chen, and Ismail's result implies that $\gamma_1 > 0 \Leftrightarrow \gamma_R > 0$ for some $R > 0$, which
is equivalent to $\gamma_S > 0$ for every $S > 0$. It is an open question whether this property holds for multisequences
in multiple variables, but in Theorem \ref{T:R2} we establish a class of multisequences which do possess this property.
We would like to extend more of Theorem \ref{T:BCI} to the $d > 1$ case. Presently we have the conditions
\begin{align}
&\gamma_1 > 0, \label{c1}\\
&\gamma_R > 0, \text{ for all $R > 0$}, \label{c2}\\
&\sum_{\alpha \in \mathbb{N}_0^d} |P_\alpha (z)|^2 \text{ is bounded uniformly on compact sets}, \label{c3}\\
&\sum_{\alpha \in \mathbb{N}_0^d} |P_\alpha (z)|^2 < \infty \text{ for some $z \in (\mathbb{C} \backslash \mathbb{R})^d$}, \label{c4}
\end{align}
are related by the implications
\[
\eqref{c1} \Leftarrow \eqref{c2} \Leftrightarrow \eqref{c3} \Rightarrow \eqref{c4},
\]
and also $\eqref{c1} \Rightarrow \eqref{c4}$ if we pick some $z$ contained in the open unit polydisk. We would also like
to have $\eqref{c1} \Rightarrow \eqref{c2}$ and
$\eqref{c4} \Rightarrow \eqref{c3}$ for $d > 1$, as is the case for $d = 1$. We show this holds for a certain class of
multisequences.
\begin{theorem} \label{T:R2}
Let $\{s_\alpha\}$ be a positive multisequence which satisfies the condition
\begin{equation} \label{mult}
s_\alpha = s_{(\alpha_1, 0, 0, \ldots, 0)} s_{(0,\alpha_2, 0, \ldots, 0)} \cdots s_{(0, \ldots, 0, \alpha_d)}.
\end{equation}
Then there is a positive measure $\mu$ on $\mathbb{R}^d$ which represents the
multisequence in the sense that
\[
\int_{\mathbb{R}^d} x^\alpha \, d\mu = s_\alpha;
\]
the conditions \eqref{c1}, \eqref{c2}, \eqref{c3}, and \eqref{c4} are equivalent; and these conditions
all imply indeterminacy of $\mu$.
\end{theorem}
\begin{proof}
Let $\{ s_\alpha \}$ satisfy this condition and let $L$ be the functional derived from the multisequence. We define
a collection of $d$ sequences as follows: for each $j = 1, \ldots, d$, let
\[
s_{j,n} = s_{(0, \ldots, 0, n, 0, \ldots, 0)},
\]
where the $n$ is in the $j^\text{th}$ place. The positivity of the sequence $s_{j,n}$ follows from the positivity of
the multisequence $\{ s_\alpha \}$, so for each $1 \leq j \leq d$, the theorem of Hamburger (\cite{A} p.30) implies
that there exists a positive measure $\mu_j$ on $\mathbb{R}$ for which
\[
\int_\mathbb{R} x^n \, d\mu_j(x) = s_{j,n}.
\]
If we define a measure $\mu := \mu_1 \otimes \mu_2 \otimes \cdots \otimes \mu_d$ on $\mathbb{R}^d$, the multiplicative
condition \eqref{mult} implies the first assertion, that
\[
\int_{\mathbb{R}^d} x^\alpha \, d\mu = \left( \int_\mathbb{R} x_1^{\alpha_1} \, d\mu_1 \right) \cdots \left( \int_\mathbb{R}
x_d^{\alpha_d} \, d\mu_d \right) = s_{1,\alpha_1} s_{2,\alpha_2} \cdots s_{d,\alpha_d} = s_\alpha,
\]
for each monomial $x^\alpha$.
For the measure $\mu$, we construct a suitable set of orthonormal polynomials $\{ P_\alpha (x)\}$. For each
$1 \leq j \leq d$, let
$\{ P_{j,n}(x) \}_{n = 0}^\infty$ be the standard set of orthonormal polynomials associated with the measure $\mu_j$
in $\mathbb{R}$ (\cite{A} p. 3). If we define for each $\alpha \in \mathbb{N}_0^d$ the polynomial
\[
P_\alpha (x) := P_{1,\alpha_1}(x_1) P_{2,\alpha_2}(x_2) \cdots P_{d, \alpha_d}(x_d),
\]
then the resulting set of polynomials $\{ P_\alpha (x) \}$ span $\mathbb{C}[x]$ and satisfy the orthonormal property
\[
\int_{\mathbb{R}^d} P_\alpha(x) \overline{P_\beta(x)} \, d\mu(x) = \delta_{\alpha_1, \beta_1} \delta_{\alpha_2,
\beta_2} \cdots \delta_{\alpha_d, \beta_d} = \delta_{\alpha, \beta}.
\]
Now for any $z \in \mathbb{C}^d$ and $N \in \mathbb{N}$, we consider the sum
\begin{align} \label{mult2}
\sum_{\substack{|\alpha_j| \leq N \\ 1 \leq j \leq d}} |P_\alpha(z)|^2 = \left( \sum_{\alpha_1 = 0}^N
|P_{1,\alpha_1}(z_1)|^2 \right)
&\left( \sum_{\alpha_2 = 0}^N |P_{2,\alpha_2}(z_2)|^2 \right) \cdots \notag \\
&\cdots \left( \sum_{\alpha_d = 0}^N |P_{d,\alpha_d}(z_d)|^2 \right).
\end{align}
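The factorization \eqref{mult2} is the elementary identity $\sum_{\alpha \in \{0,\dots,N\}^d} \prod_j a_{j,\alpha_j} = \prod_j \sum_{n=0}^N a_{j,n}$. A quick check (ours), with random nonnegative entries standing in for $|P_{j,\alpha_j}(z_j)|^2$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
N, d = 4, 3
a = rng.random((d, N + 1))   # stand-ins for |P_{j, alpha_j}(z_j)|^2
# Sum of the product over the box {0, ..., N}^d ...
box_sum = sum(np.prod([a[j, alpha[j]] for j in range(d)])
              for alpha in itertools.product(range(N + 1), repeat=d))
# ... equals the product of the one-dimensional sums.
print(np.isclose(box_sum, np.prod(a.sum(axis=1))))   # True
```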
This sum converges as $N \to \infty$ if and only if each of the sums on the right hand side converges. We will show
that for this measure, $\eqref{c1} \Rightarrow \eqref{c4}$ for a generic $z \in (\mathbb{C} \backslash \mathbb{R})^d$ and $\eqref{c4}
\Rightarrow \eqref{c3}$. In the process we prove that these imply indeterminacy of $\mu$.
($\eqref{c1} \Rightarrow \eqref{c4}$). Suppose $\gamma_1 > 0$. Define
\[
J_N^j = \left( s_{(0, \ldots, 0, k + l, 0, \ldots, 0)} \right)_{k,l = 0}^N,
\]
where the $k + l$ is in the $j^\text{th}$ coordinate, to be the $(N + 1)
\times (N + 1)$ truncated Hankel matrix associated with the measure $\mu_j$, and let $\eta_N^j$ be its smallest
eigenvalue. Then $\eta_N^j \geq \gamma_1$ for each $j$ and $N \in \mathbb{N}$, since $J_N^j$ is a compression of the operator
$H_N$. By Theorem \ref{T:BCI} it follows that $\mu_j$ is indeterminate and
\[
\sum_{\alpha_j = 0}^\infty |P_{j,\alpha_j}(z_j)|^2
\]
converges for some $z_j \in \mathbb{C} \backslash \mathbb{R}$. If $z = (z_1, z_2, \ldots, z_d) \in (\mathbb{C} \backslash \mathbb{R})^d$, then
\eqref{c4} follows from \eqref{mult2}.
($\eqref{c4} \Rightarrow \eqref{c3}$). Now let $z \in (\mathbb{C} \backslash \mathbb{R})^d$ be a point such that $\sum_{\alpha \in
\mathbb{N}_0^d} |P_\alpha (z)|^2 < \infty$. Via \eqref{mult2}, this implies that $\sum_{\alpha_j = 0}^\infty
|P_{j,\alpha_j}(z_j)|^2 < \infty$ for each $1 \leq j \leq d$, which implies that $\mu_j$ is indeterminate. Then
$\sum_{\alpha_j = 0}^\infty |P_{j,\alpha_j} (z_j)|^2$ converges uniformly on compact subsets of $\mathbb{C}$. Let $E$ be
a compact subset of $\mathbb{C}^d$ and $L \geq 0$ so that $E \subseteq \left( \overline{\mathbb{D}(0,L)} \right)^d$. Then
\eqref{mult2} tells us that $\sum_{\alpha \in \mathbb{N}_0^d} |P_\alpha (z)|^2$ converges uniformly on $\left(
\overline{\mathbb{D}(0,L)} \right)^d$ and thus on $E$.
We now have that \eqref{c1}, \eqref{c2}, \eqref{c3}, and \eqref{c4} are equivalent when \eqref{mult} applies, and our
arguments have shown that these conditions imply indeterminacy for each $\mu_j$. Since at least one of these
measures is indeterminate, Petersen's theorem leads to indeterminacy of $\mu$.
\end{proof}
This theorem shows that when a positive multisequence is multiplicative in this sense, a solution to the associated
moment problem exists, and the existence of a reproducing kernel implies indeterminacy. In the case of deciding
indeterminacy of an arbitrary moment multisequence, the role of reproducing kernels is not yet fully clear. Fuglede's
introduction of the notions of strong determinacy and ultradeterminacy \cite{F} and the creation of new indeterminate
moment problems from old ones \cite{PuSc} provide ground for further study, and we will address these in a future article.
\bibliographystyle{amsplain}
| {
"timestamp": "2007-02-02T17:35:08",
"yymm": "0604",
"arxiv_id": "math/0604459",
"language": "en",
"url": "https://arxiv.org/abs/math/0604459",
"abstract": "Using the smallest eigenvalues of Hankel forms associated with a multidimensional moment problem, we establish a condition equivalent to the existence of a reproducing kernel. This result is a multivariate analogue of Berg, Chen,and Ismail's 2002 result. We also present a class of measures for which the existence of a reproducing kernel implies indeterminacy.",
"subjects": "Functional Analysis (math.FA)",
"title": "A Reproducing Kernel Condition for Indeterminacy in the Multidimensional Moment Problem",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138151101525,
"lm_q2_score": 0.7248702761768249,
"lm_q1q2_score": 0.7087156831807934
} |
https://arxiv.org/abs/2112.08678 | Asymptotically Optimal Golay-ZCZ Sequence Sets with Flexible Length | Zero correlation zone (ZCZ) sequences and Golay complementary sequences are two kinds of sequences with different preferable correlation properties. Golay-ZCZ sequences are special kinds of complementary sequences which also possess a large ZCZ and are good candidates for pilots in OFDM systems. Known Golay-ZCZ sequences reported in the literature have a limitation in the length which is the form of a power of 2. In this paper, we propose two constructions of Golay-ZCZ sequence sets with new parameters which generalize the constructions of Gong et al. (IEEE Transaction on Communications 61(9), 2013) and Chen et al (IEEE Transaction on Communications 61(9), 2018). Notably, one of the constructions results in optimal binary Golay-ZCZ sequences, while the other results in asymptotically optimal polyphase Golay-ZCZ sequences as the number of sequences increases. | \section{Introduction}\label{section 1}
Golay complementary sets (GCSs) and zero correlation zone (ZCZ) sequence sets are two kinds of sequence sets with different desirable correlation properties. GCSs are sequence sets whose aperiodic autocorrelation sums (AACS) vanish at all non-zero time shifts \cite{Golay61}, whereas ZCZ sequence sets have a zero correlation zone within a certain range of time shifts \cite{Fan2007}. Owing to their favourable correlation properties, GCSs and ZCZ sequence sets have been widely used to reduce the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) systems \cite{Davis1999,Paterson2000}. However, a sequence's own periodic autocorrelation also plays an important role in applications such as synchronization and signal detection.
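For instance, the AACS property can be checked directly on a classical length-4 binary Golay complementary pair (a standard textbook example, used here only as an illustration):

```python
def aperiodic_acf(a, tau):
    """Aperiodic autocorrelation C_a(tau) = sum_i a_i a_{i+tau}."""
    return sum(a[i] * a[i + tau] for i in range(len(a) - tau))

a = [1, 1, 1, -1]
b = [1, 1, -1, 1]
# The individual ACFs are nonzero, but their sum vanishes at every
# non-zero shift, which is the defining property of a Golay pair.
aacs = [aperiodic_acf(a, t) + aperiodic_acf(b, t) for t in range(1, 4)]
print(aacs)   # [0, 0, 0]
```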
Working in this direction, Gong \textit{et al.} \cite{Gong2013} investigated the periodic autocorrelation behaviour of a single Golay sequence in 2013. To be more specific, Gong \textit{et al.} presented two constructions of Golay sequences of length $2^m$, using generalized Boolean functions (GBFs), each displaying a periodic zero autocorrelation zone (ZACZ) of width $2^{m-2}$ and $2^{m-3}$, respectively, around the in-phase position \cite{Gong2013}. In 2018, Chen \textit{et al.} studied the zero cross-correlation zone among Golay sequences and proposed Golay-ZCZ sequence sets \cite{Chen201811}. Golay-ZCZ sequence sets are sequence sets in which each sequence has a periodic ZACZ, any two sequences have a periodic zero cross-correlation zone (ZCCZ), and the aperiodic autocorrelation sum is zero at all non-zero time shifts. Specifically, using GBFs, Chen \textit{et al.} gave a systematic construction of Golay-ZCZ sequence sets consisting of $2^k$ sequences, each of length $2^m$, with $\min\{ZACZ,ZCCZ\}$ equal to $2^{m-k-1}$ \cite{Chen201811}.
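The periodic notions can be made concrete in the same spirit (our illustration; the sequence below is not claimed to come from the cited constructions): the length-4 sequence $(1,1,1,-1)$ has zero periodic autocorrelation at every non-zero shift, i.e.\ its periodic ZACZ covers all of $\{1,2,3\}$:

```python
def periodic_acf(a, tau):
    """Periodic autocorrelation R_a(tau) = sum_i a_i a_{(i+tau) mod L}."""
    L = len(a)
    return sum(a[i] * a[(i + tau) % L] for i in range(L))

a = [1, 1, 1, -1]
print([periodic_acf(a, t) for t in range(1, 4)])   # [0, 0, 0]
```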
In \cite{Gong2013}, the authors discussed the application of Golay sequences with large ZACZ to inter-symbol interference (ISI) channel estimation. Using Golay sequences with large ZACZ as channel estimation sequences (CES), they analysed the performance of Golay-sequence-aided channel estimation in terms of the error variance and the classical Cramer-Rao lower bound (CRLB). It was shown in \cite{Gong2013} that when the channel impulse response (CIR) lies within the ZACZ width, the estimation variance of the Golay sequences attains the CRLB. Recently, in 2021, Yu \cite{yu} demonstrated that sequence sets having low coherence of the spreading matrix along with low PAPR are suitable as pilot sequences for uplink grant-free non-orthogonal multiple access (NOMA). The work of \cite{yu} shows that Golay-ZCZ sequences can be suitably used as pilot sequences for uplink grant-free NOMA.
Inspired by the works of Gong \textit{et al.} \cite{Gong2013} and Chen \textit{et al.} \cite{Chen201811}, and by the applicability of Golay-ZCZ sequences as pilot sequences for uplink grant-free NOMA and channel estimation, we propose Golay-ZCZ sequence sets with new lengths. Note that the lengths of the Golay complementary pairs (GCPs) with large ZCZs discussed in \cite{Gong2013} and \cite{Chen201811} are all powers of two. To the best of the authors' knowledge, the problem of investigating the individual periodic autocorrelations of GCPs, and the periodic cross-correlations of the pairs, when the length of the GCPs is not a power of two remains largely open. An overview of the previous works that consider the periodic ZACZ of the individual sequences and the ZCCZ of a GCP is given in Table \ref{Table duibi}.
\begin{table}
\caption{Golay sequences with periodic ZACZ and ZCCZ.}
\label{Table duibi}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Ref. & Length & Set size & ZACZ width & ZCCZ width & Based on \\\hline
\hline
\cite{Gong2013} & $2^m$ & $2$ & $2^{m-2}$ or $2^{m-3}$ & Not discussed & GBF \\
\hline
\cite{Chen201811} & $2^m$ & $2^k$ & $2^{m-k-1}$ & $2^{m-k-1}$ & GBF \\
\hline
Theorem 1 & $4N$ & $2$ & $N$ & $N$ & GCP of length $N$. \\
\hline
Theorem 2 & $M^2N$ & $M$ & $(M-1)N$ & $(M-1)N$ & $(M,M,N)$- CCC.\\
\hline
\end{tabular}
\end{table}
We propose two constructions which result in Golay-ZCZ sequence sets consisting of sequences with more flexible lengths. To be more specific, assuming that a Golay complementary pair of length $N$ exists, we propose a systematic construction of Golay-ZCZ sequence pairs of length $4N$ with ZCZ width $N$. One of the limitations of the construction of Golay-ZCZ sequence sets reported in \cite{Chen201811} is that the ZCZ width decreases as the number of sequences increases. To address this problem, we propose another construction of Golay-ZCZ sequence sets consisting of sequences of length $M^2N$ with ZCZ width $(M-1)N$, derived from an $(M,M,N)$ complete complementary code (CCC). Interestingly, the resultant Golay-ZCZ sequences derived from CCCs are asymptotically optimal with respect to the Tang-Fan-Matsufuji bound \cite{tangfan} as the number of sequences increases, when polyphase sequences are considered. To increase the availability of CCCs for designing Golay-ZCZ sequences of more flexible lengths, we also propose a new iterative construction of CCCs based on the Kronecker product. A brief overview of the parameters of all CCCs proposed to date can be found in Table \ref{tabccc}.
\begin{table}
\centering
\caption{Parameters of CCCs}
\label{tabccc}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Ref. & Parameters & Constraints & Construction based on \\\hline
\hline
\cite{suherio15} & $(M,M,M^N)$ & $N\geq 2$ & Unitary matrices \\
\hline
\cite{huang18} & $(2^{N-r},2^{N-r},2^N)$ & $r=1,2,\dots,N-1$ & Golay-paired Hadamard matrices \\
\hline
\cite{han53} & $(M,M,MN)$ & $N\leq M$ & Unitary matrices \\
\hline
\cite{han17} & $(M,M,MN/P)$ & $N,P\leq M$ & Unitary matrices \\
\hline
\cite{zhang54} & $(M,M,2^mM)$ & $m\geq 1$ & Unitary matrices \\
\hline
\cite{rathinakumar21} & $(2^{k+1},2^{k+1},2^m)$ & $m,k\geq 1,~k=m-1$ & GBF \\
\hline
\cite{yang55} & $(2N,2N,l)$ & $l> 1,~N\geq 1$ & Unitary matrices \\
\hline
\cite{mar22} & $(2^m,2^m,2^{mN})$ & $m\geq 1$ & Equivalent Hadamard matrices \\
\hline
\cite{chen20} & $(2^m,2^m,2^k)$ & $m,k\geq 1$ with $k\geq m$ & GBF \\
\hline
\cite{liu19} & $(2^m,2^m,2^k)$ & $m,k\geq 1$ & GBF \\
\hline
\cite{das23,ma} & $(M,M,M^N)$ & $N\geq 1$ & Paraunitary (PU) matrices \\
\hline
\cite{das24} & $(M,M,P^N)$ & $N\geq 1,~P|M$ & PU matrices \\
\hline
\cite{jin} & $(M_1M_2,M_1M_2,N_1N_2)$ & $(M_1,M_1,N_1)$- CCC and $(M_2,M_2,N_2)$- CCC exists & Kronecker product \\
\hline
Proposed & $(M,M,MN_1N_2)$ & $(M,M,N_1)$- CCC and $(M,M,N_2)$- CCC exists & Kronecker product \\
\hline
\end{tabular}}
\end{table}
The rest of the paper is organized as follows. In Section \ref{section 2}, useful notations and preliminaries are recalled.
In Section \ref{section 3}, a systematic construction of Golay-ZCZ sequence pairs with flexible lengths is proposed. In Section \ref{golayzcz2d}, a systematic construction of Golay-ZCZ sequence sets based on existing CCCs is proposed; in the same section, we also propose an iterative construction of CCCs to increase the flexibility of the parameters of the proposed Golay-ZCZ sequences. In Section \ref{secv}, we discuss the optimality of the proposed Golay-ZCZ sequence sets. In Section \ref{novel}, we discuss the novelty of the proposed constructions as compared to previous works. Finally, we conclude the paper in Section \ref{section 5}.
\section{Preliminaries}\label{section 2}
Before we begin, let us define the following notations:
\begin{itemize}
\item $\mathbf{0}_L$ denotes the all-zero vector of length-$L$.
\item $\overleftarrow{\mathbf{a}}$ denotes the reverse of the sequence $\mathbf{a}$.
\item $x^*$ denotes the complex conjugate of $x$.
\item $\mathbf{a}||\mathbf{b}$ denotes the concatenation of the sequences $\mathbf{a}$ and $\mathbf{b}$.
\item `$\mathbf{a}\cdot \mathbf{b}$' denotes the `inner product' of two sequences $\mathbf{a}$ and $\mathbf{b}$.
\item $<x>_M$ denotes $x \mod M$.
\item $\mathbf{x}\otimes\mathbf{y}= [x_0\mathbf{y},x_1\mathbf{y},\cdots, x_{N_1-1}\mathbf{y}]$, denotes the Kronecker product of the sequences $\mathbf{x}$ and $\mathbf{y}$.
\end{itemize}
\begin{definition}
Let $\mathbf{a}$ and $\mathbf{b}$ be two length $N$ sequences.
The periodic cross-correlation function (PCCF) of ${\mathbf{a}}$ and ${\mathbf{b}}$ is defined as
\begin{equation}\label{defi_PCCF}
R_{\mathbf{a},\mathbf{b}}(\tau):= \left \{
\begin{array}{cl}
\sum\limits_{k=0}^{N-1}{a_kb^*_{<k+\tau>_N}},&~~0\leq \tau \leq N-1;\\
\sum\limits_{k=0}^{N-1}{a_{<k-\tau>_N}b^*_k},&~~-(N-1)\leq \tau \leq -1;
\end{array}
\right .
\end{equation}
When $\mathbf{a} = \mathbf{b}$, $R_{\mathbf{a},\mathbf{b}}(\tau)$ is called the periodic auto-correlation function (PACF) of $\mathbf{a}$ and is denoted by $R_{\mathbf{a}}(\tau)$.
\end{definition}
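As a quick numerical illustration (not part of the formal development), note that both branches of \eqref{defi_PCCF} reduce to the single cyclic-shift expression $\sum_k a_k b^*_{<k+\tau>_N}$, which the following Python sketch implements:

```python
def pccf(a, b, tau):
    # R_{a,b}(tau) = sum_k a_k * conj(b_{<k+tau>_N}); the same expression
    # covers both the positive- and negative-shift branches of the definition.
    N = len(a)
    return sum(a[k] * b[(k + tau) % N].conjugate() for k in range(N))

# PACF of the length-4 binary sequence (1, 1, 1, -1): peak 4 at tau = 0,
# zero at every other cyclic shift (a perfect sequence).
a = [1, 1, 1, -1]
assert pccf(a, a, 0) == 4
assert all(pccf(a, a, t) == 0 for t in range(1, 4))
```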
\begin{definition}
Let $\mathbf{a}$ and $\mathbf{b}$ be two length $N$ sequences.
The aperiodic cross-correlation function (ACCF) of ${\mathbf{a}}$ and ${\mathbf{b}}$ is defined as
\begin{equation}\label{defi_ACCF}
C_{\mathbf{a},\mathbf{b}}(\tau):= \left \{
\begin{array}{cl}
\sum\limits_{k=0}^{N-1-\tau}{a_kb^*_{k+\tau}},&~~0\leq \tau \leq N-1;\\
\sum\limits_{k=0}^{N-1+\tau}{a_{k-\tau}b^*_k},&~~-(N-1)\leq \tau \leq -1;
\end{array}
\right .
\end{equation}
When $\mathbf{a} = \mathbf{b}$, $C_{\mathbf{a},\mathbf{b}}(\tau)$ is called the aperiodic auto-correlation function (AACF) of $\mathbf{a}$ and is denoted by $C_{\mathbf{a}}(\tau)$.
\end{definition}
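The ACCF in \eqref{defi_ACCF} differs from the PCCF only in that shifted-out entries are dropped rather than wrapped around cyclically. A Python sketch, for illustration only:

```python
def accf(a, b, tau):
    # C_{a,b}(tau): correlate without cyclic wrap-around; shifted-out
    # entries contribute nothing.
    N = len(a)
    if tau >= 0:
        return sum(a[k] * b[k + tau].conjugate() for k in range(N - tau))
    return sum(a[k - tau] * b[k].conjugate() for k in range(N + tau))

# The classical length-2 Golay pair (1,1), (1,-1): AACFs cancel at tau = 1.
a, b = [1, 1], [1, -1]
assert accf(a, a, 1) + accf(b, b, 1) == 0
assert accf(a, a, 0) + accf(b, b, 0) == 4   # 2N at tau = 0
```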
\begin{definition}
A set $\mathcal{C} = \{\mathbf{c}_0, \mathbf{c}_1,..., \mathbf{c}_{M-1}\}$ consisting
of $M$ sequences of length $N$ is said to be an $(M,N,Z)$- ZCZ sequence set if it satisfies
\begin{equation}
\begin{split}
&R_{\mathbf{c}_i}(\tau)=0,~\text{for }1\leq |\tau|\leq Z,~0\leq i \leq M-1,\\
&R_{\mathbf{c}_i,\mathbf{c}_j}(\tau)=0,~\text{for }|\tau|\leq Z,~0\leq i\neq j\leq M-1.
\end{split}
\end{equation}
\end{definition}
\begin{definition}\label{golayzcz1d}
An $(M,N,Z)$- ZCZ sequence set becomes an $(M,N,Z)$- Golay-ZCZ sequence set if it satisfies
\begin{equation}
\begin{split}
&\text{C1: }\sum_{i=0}^{M-1}C_{\mathbf{c}_i}(\tau)=0,~\text{for all }\tau\neq 0,\\
&\text{C2: }R_{\mathbf{c}_i}(\tau)=0,~\text{for }1\leq |\tau|\leq Z,~0\leq i \leq M-1,\\
&\text{C3: }R_{\mathbf{c}_i,\mathbf{c}_j}(\tau)=0,~\text{for }|\tau|\leq Z,~0\leq i\neq j\leq M-1.
\end{split}
\end{equation}
\end{definition}
\begin{definition}
Let $\mathcal{C}$ be a $P \times N$ matrix, consisting of $P$ sequences of length $N$, as follows:
\begin{equation}
\mathcal{C}=\begin{bmatrix}
\mathbf{c}_0\\
\mathbf{c}_1\\
\vdots\\
\mathbf{c}_{P-1}
\end{bmatrix}_{P\times N}=\begin{bmatrix}
c_{0,0} & c_{0,1} & \dots & c_{0,N-1}\\
c_{1,0} & c_{1,1} & \dots & c_{1,N-1}\\
\vdots & \vdots & \ddots & \vdots\\
c_{P-1,0} & c_{P-1,1} & \dots & c_{P-1,N-1}
\end{bmatrix}_{P\times N}.
\end{equation}
Then $\mathcal{C}$ is called a complementary set (CS) of size $P$ if
\begin{equation}
C_{\mathbf{c}_0}(\tau)+C_{\mathbf{c}_1}(\tau)+\cdots+C_{\mathbf{c}_{P-1}}(\tau)=\begin{cases}
PN & \text{if } \tau=0,\\
0 & \text{if } 0<\tau<N.
\end{cases}
\end{equation}
\end{definition}
\begin{definition}
Consider $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$, consisting of $M$ CSs $\mathcal{C}^{k}$, $0\leq k<M$, each having $M$ sequences of length $N$, i.e.,
\begin{equation}\label{eq2}
\mathcal{C}^{k}=\begin{bmatrix}
\mathbf{c}^{k}_0\\
\mathbf{c}^{k}_1\\
\vdots\\
\mathbf{c}^{k}_{M-1}
\end{bmatrix}_{M \times N}, 0 \leq k \leq M-1,
\end{equation}
where $\mathbf{c}_{m}^{k}$ is the $m$-th sequence of length $N$, expressed as $\mathbf{c}_{m}^{k}=\left(c_{m, 0}^{k}, c_{m, 1}^{k}, \cdots, c_{m, N-1}^{k}\right)$, $0 \leq m \leq M-1$. The set $\mathfrak{C}$ is called an $(M,M,N)$ complete complementary code (CCC) if, for any $\mathcal{C}^{k_1},\mathcal{C}^{k_2}\in \mathfrak{C}$ with $0\leq k_1,k_2\leq M-1$, and for
$0 < \tau \leq N-1$ when $k_{1} = k_{2}$, or $0\leq\tau \leq N-1$ when $k_{1}\neq k_{2}$,
\begin{equation}
|C_{\mathcal{C}^{k_1},\mathcal{C}^{k_2}}(\tau)|=\left|\sum_{m=0}^{M-1} C_{\mathbf{c}_{m}^{k_{1}},
{\mathbf{c}_{m}^{k_{2}}}}(\tau)\right|= 0,
\end{equation}
where $M$ denotes the set size and the number of sequences in each sequence set, and $N$ the length of constituent sequences of $\mathfrak{C}$.
\end{definition}
Let $\mathcal{C}$ be an $M\times N$ matrix given by
\begin{equation}
\mathcal{C}=\begin{bmatrix}
c_{0,0} & c_{0,1} & \dots & c_{0,N-1}\\
c_{1,0} & c_{1,1} & \dots & c_{1,N-1}\\
\vdots & \vdots & \ddots & \vdots \\
c_{M-1,0} & c_{M-1,1} & \dots & c_{M-1,N-1}
\end{bmatrix}_{M\times N}.
\end{equation}
Then let us define a transformation $L^{r}_s(\mathcal{C})$ as follows:
\begin{equation}\label{eq12n}
\begin{split}
L^{r}_s(\mathcal{C})&=\begin{bmatrix}
\textcolor{blue}{c_{r,s} }&\textcolor{blue}{ c_{r,s+1} }&\textcolor{blue}{ \dots} &\textcolor{blue}{ c_{r,N-1}} &\textcolor{blue}{ c_{r+1,0}} &\textcolor{blue}{ c_{r+1,1}} &\textcolor{blue}{ \dots} &\textcolor{blue}{ c_{r+1,s-1}}\\
\textcolor{blue}{ c_{r+1,s} }& \textcolor{blue}{c_{r+1,s+1}} &\textcolor{blue}{ \dots} &\textcolor{blue}{ c_{r+1,N-1}}&\textcolor{blue}{c_{r+2,0}} & \textcolor{blue}{c_{r+2,1}} & \textcolor{blue}{\dots }&\textcolor{blue}{ c_{r+2,s-1}}\\
\textcolor{blue}{ \vdots} &\textcolor{blue}{ \vdots} &\textcolor{blue}{ \ddots} &\textcolor{blue}{ \vdots} &\textcolor{blue}{ \vdots} & \textcolor{blue}{\vdots} &\textcolor{blue}{ \ddots} &\textcolor{blue}{ \vdots} \\
\textcolor{blue}{ c_{M-2,s}} &\textcolor{blue}{ c_{M-2,s+1}} &\textcolor{blue}{ \dots} & \textcolor{blue}{c_{M-2,N-1}} &\textcolor{blue}{ c_{M-1,0}} &\textcolor{blue}{ c_{M-1,1}} & \textcolor{blue}{\dots} &\textcolor{blue}{ c_{M-1,s-1}}\\
\textcolor{blue}{ c_{M-1,s}} & \textcolor{blue}{c_{M-1,s+1}} &\textcolor{blue}{ \dots} & \textcolor{blue}{c_{M-1,N-1}} &\textcolor{red}{ c_{0,0} }&\textcolor{red}{ c_{0,1} }&\textcolor{red}{ \dots} &\textcolor{red}{ c_{0,s-1}}\\
\textcolor{red}{ c_{0,s}} & \textcolor{red}{c_{0,s+1}} & \textcolor{red}{\dots} &\textcolor{red}{ c_{0,N-1}} & \textcolor{red}{c_{1,0}} &\textcolor{red}{ c_{1,1}} &\textcolor{red}{ \dots} & \textcolor{red}{c_{1,s-1}}\\
\textcolor{red}{ \vdots} &\textcolor{red}{ \vdots} &\textcolor{red}{ \ddots} &\textcolor{red}{ \vdots} & \textcolor{red}{\vdots} &\textcolor{red}{ \vdots} & \textcolor{red}{\ddots} &\textcolor{red}{ \vdots} \\
\textcolor{red}{ c_{r-1,s}} &\textcolor{red}{ c_{r-1,s+1}} &\textcolor{red}{ \dots} &\textcolor{red}{ c_{r-1,N-1}} &\textcolor{red}{ c_{r,0}} &\textcolor{red}{ c_{r,1}} &\textcolor{red}{ \dots }&\textcolor{red}{ c_{r,s-1}}
\end{bmatrix}_{M\times N}\\
&=\begin{bmatrix}
T^r(\mathbf{c}_{:,s}) & T^r(\mathbf{c}_{:,s+1}) & \dots & T^r(\mathbf{c}_{:,N-1}) & T^{r+1}(\mathbf{c}_{:,0}) & T^{r+1}(\mathbf{c}_{:,1}) & \dots & T^{r+1}(\mathbf{c}_{:,s-1})
\end{bmatrix},
\end{split}
\end{equation}
where $\mathbf{c}_{:,s}$ denotes the $s$-th column of $\mathcal{C}$, and $T^r(\cdot)$ denotes the $r$-step cyclic up-shift operator, so that $T^r(\mathbf{c}_{:,s})$ is the $s$-th column cyclically shifted up by $r$ positions.
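In index form, entry $(m,t)$ of $L^{r}_s(\mathcal{C})$ is $c_{<r+m+\lfloor (s+t)/N\rfloor>_M,\,<s+t>_N}$. The following Python sketch is an illustrative reading of \eqref{eq12n} on a toy matrix:

```python
def Lrs(C, r, s):
    # Entry (m, t) of L^r_s(C) is C[<r + m + carry>_M][<s + t>_N], where the
    # carry of 1 accounts for columns that wrap past the right edge.
    M, N = len(C), len(C[0])
    return [[C[(r + m + (s + t) // N) % M][(s + t) % N] for t in range(N)]
            for m in range(M)]

C = [[0, 1, 2],
     [3, 4, 5]]          # a 2 x 3 toy matrix
assert Lrs(C, 0, 0) == C                        # no shift
assert Lrs(C, 1, 1) == [[4, 5, 0], [1, 2, 3]]   # rows and columns cycled with wrap-around
```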
\begin{definition}
Let $\mathbf{a}=(a_0,a_1,\dots,a_{N-1})$ be a sequence of length $N$, then the characteristic polynomial of $\mathbf{a}$ is defined as
\begin{equation}
\mathbf{a}(x)=a_0+a_1x+\cdots+a_{N-1}x^{N-1},
\end{equation}
and the corresponding complex conjugate is given by
\begin{equation}
\mathbf{a}^*(x)=a^*_0+a^*_1x+\cdots+a^*_{N-1}x^{N-1}.
\end{equation}
\end{definition}
\begin{definition}
Let $\mathbf{a}(x)$ and $\mathbf{b}(x)$ be the characteristic polynomial of the length $N$ sequence $\mathbf{a}$ and $\mathbf{b}$, respectively. Then the ACCF is defined as
\begin{equation}
C_{\mathbf{a},\mathbf{b}}(x)=\sum_{\tau=0}^{N-1}C_{\mathbf{a},\mathbf{b}}(\tau)x^\tau=\mathbf{a}(x^{-1})\mathbf{b}^*(x).
\end{equation}
\end{definition}
\begin{definition}
A pair of sequences $(\mathbf{a}, \mathbf{b})$ is said to be a GCP of length $N$ if
\begin{equation}
C_{\mathbf{a}}(x)+C_{\mathbf{b}}(x)=N.
\end{equation}
\end{definition}
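Equivalently, $(\mathbf{a},\mathbf{b})$ is a GCP when $C_{\mathbf{a}}(\tau)+C_{\mathbf{b}}(\tau)$ equals $2N$ at $\tau=0$ and vanishes at every $\tau\neq 0$. A Python check on a standard length-4 binary GCP (illustration only):

```python
def accf(a, b, tau):
    # aperiodic cross-correlation C_{a,b}(tau)
    N = len(a)
    if tau >= 0:
        return sum(a[k] * b[k + tau].conjugate() for k in range(N - tau))
    return sum(a[k - tau] * b[k].conjugate() for k in range(N + tau))

def is_gcp(a, b):
    # C_a(tau) + C_b(tau) must vanish at every non-zero shift.
    return all(accf(a, a, t) + accf(b, b, t) == 0 for t in range(1, len(a)))

a, b = [1, 1, 1, -1], [1, 1, -1, 1]
assert is_gcp(a, b)
assert accf(a, a, 0) + accf(b, b, 0) == 8   # 2N at tau = 0
```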
\begin{definition}
Consider $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$, consisting of $M$ CSs $\mathcal{C}^{k}$, $0\leq k<M$, each having $M$ sequences of length $N$, as described in (\ref{eq2}). Then $\mathfrak{C}$ is an $(M,M,N)$- CCC if
\begin{equation}
C_{\mathcal{C}^i,\mathcal{C}^k}(x)=\sum_{j=0}^{M-1}C_{\mathbf{c}^i_j,\mathbf{c}^k_j}(x)=MN \delta(i-k),
\end{equation}
where $\delta$ is the Kronecker delta function.
\end{definition}
\section{New Construction of Golay Complementary Pair with ZCZ}\label{section 3}
In \cite{Gong2013} and \cite{Chen201811}, the authors considered complementary sequences whose lengths are only of the form $2^m$. In this section, we consider GCPs of more flexible lengths and propose Golay-ZCZ sequences of new lengths. Before introducing the construction, we need the following lemma.
\begin{lemma}\label{lem2}
Let $(\mathbf{a,b})$ be a GCP of length $N$ and let $(\mathbf{c,d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$ be its complementary mate. Then, for all $\tau$,
\begin{equation}
C^*_{\mathbf{a},\mathbf{b}}(\tau)+ C^*_{\mathbf{c},\mathbf{d}}(\tau) = 0
\end{equation}
\end{lemma}
\begin{IEEEproof}
From the properties of the aperiodic cross-correlation, for any sequences $\mathbf{x}$ and $\mathbf{y}$ we have $C_{\mathbf{x}, \mathbf{y}}(\tau)=C_{\overleftarrow{\mathbf{y}^*}, \overleftarrow{\mathbf{x}^*}}(\tau)$ and $C_{-\mathbf{x}, \mathbf{y}}(\tau)=-C_{\mathbf{x}, \mathbf{y}}(\tau)$. Since $(\mathbf{c}, \mathbf{d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$, we get $C_{\mathbf{c}, \mathbf{d}}(\tau)=C_{\overleftarrow{\mathbf{d}^*}, \overleftarrow{\mathbf{c}^*}}(\tau)=C_{-{\mathbf{a}},\mathbf{b}}(\tau)=-C_{\mathbf{a},\mathbf{b}}(\tau)$. Hence the proof follows.
\end{IEEEproof}
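Lemma \ref{lem2} can be checked numerically. For the length-4 binary GCP $\mathbf{a}=(1,1,1,-1)$, $\mathbf{b}=(1,1,-1,1)$ (used here purely as an illustration), the mate $(\mathbf{c},\mathbf{d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$ indeed satisfies $C_{\mathbf{a},\mathbf{b}}(\tau)+C_{\mathbf{c},\mathbf{d}}(\tau)=0$ at every shift:

```python
def accf(a, b, tau):
    # aperiodic cross-correlation C_{a,b}(tau)
    N = len(a)
    if tau >= 0:
        return sum(a[k] * b[k + tau].conjugate() for k in range(N - tau))
    return sum(a[k - tau] * b[k].conjugate() for k in range(N + tau))

a, b = [1, 1, 1, -1], [1, 1, -1, 1]
c = [x.conjugate() for x in b[::-1]]     # c = reverse of b*
d = [-x.conjugate() for x in a[::-1]]    # d = negated reverse of a*
N = len(a)
# Lemma 1: the cross-correlations of the pair and its mate cancel.
assert all(accf(a, b, t) + accf(c, d, t) == 0 for t in range(-(N - 1), N))
```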
\begin{theorem}\label{th1}
Let $(\mathbf{a,b})$ be a GCP of length $N$ and $(\mathbf{c,d})$ be a Golay complementary mate of $(\mathbf{a,b})$. Define
\begin{equation}
\begin{split}
&\mathbf{p}=
\left(
\begin{array}{rrrr}
x_1\mathbf{a}~ || ~x_2\mathbf{b} ~||~ x_3\mathbf{a} ~||~ x_4\mathbf{b} \\
\end{array}
\right),\\&
\mathbf{q}=
\left(
\begin{array}{rrrr}
x_1\mathbf{c} ~ || ~ x_2\mathbf{d} ~ || ~ x_3\mathbf{c} ~ || ~ x_4\mathbf{d} \\
\end{array}
\right),
\end{split}
\end{equation}
where $x_1,x_2,x_3,x_4\in\{+1,-1\}$. Then $(\mathbf{p},\mathbf{q})$ is a Golay-ZCZ sequence pair of length $4N$ with $Z_{\min}=N$, if $x_1,x_2,x_3,x_4$ satisfy the following condition:
\begin{equation}
x_1x_2+x_3x_4=0.
\end{equation}
\end{theorem}
\begin{IEEEproof} Let us consider $0\leq\tau\leq N-1$. Then we have
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=&2\big(C_\mathbf{a}(\tau) +C_\mathbf{b}(\tau)\big)\\&+(x_1x_2+x_3x_4)C_\mathbf{b,a}^*(N-\tau) \\&+x_2x_3C_\mathbf{a,b}^*(N-\tau), \\
C_\mathbf{q}(\tau)=&2\big(C_\mathbf{c}(\tau) +C_\mathbf{d}(\tau)\big)\\&+(x_1x_2+x_3x_4)C_\mathbf{d,c}^*(N-\tau) \\&+x_2x_3C_\mathbf{c,d}^*(N-\tau).
\end{split}
\end{equation}
Hence, for $0\leq\tau\leq N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=4\big(C_\mathbf{a}(\tau) +C_\mathbf{b}(\tau)\big).
\end{equation}
Consider $N\leq\tau\leq 2N-1$. Then one has
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=& (x_1x_2+x_3x_4)C_\mathbf{a,b}(\tau-N) +x_2x_3C_\mathbf{b,a}(\tau-N) \\&+x_1x_3C_\mathbf{a}^*(2N-\tau) +x_2x_4C_\mathbf{b}^*(2N-\tau), \\
C_\mathbf{q}(\tau)=& (x_1x_2+x_3x_4)C_\mathbf{c,d}(\tau-N) +x_2x_3C_\mathbf{d,c}(\tau-N) \\&+x_1x_3C_\mathbf{c}^*(2N-\tau) +x_2x_4C_\mathbf{d}^*(2N-\tau).
\end{split}
\end{equation}
Hence, for $N\leq\tau\leq 2N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=0.
\end{equation}
Consider $2N\leq\tau\leq 3N-1$. Then we have
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=& x_1x_3C_\mathbf{a}(\tau-2N) +x_2x_4C_\mathbf{b}(\tau-2N) \\&+x_1x_4C_\mathbf{b,a}^*(3N-\tau), \\
C_\mathbf{q}(\tau)=& x_1x_3C_\mathbf{c}(\tau-2N) +x_2x_4C_\mathbf{d}(\tau-2N) \\&+x_1x_4C_\mathbf{d,c}^*(3N-\tau).
\end{split}
\end{equation}
Hence, for $2N\leq\tau\leq 3N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=0.
\end{equation}
Consider $3N\leq\tau\leq 4N-1$. Then we have
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=& x_1x_4C_\mathbf{a,b}(\tau-3N), \\
C_\mathbf{q}(\tau)=& x_1x_4C_\mathbf{c,d}(\tau-3N).
\end{split}
\end{equation}
Hence, for $3N\leq\tau\leq 4N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=0.
\end{equation}
Hence, $(\mathbf{p,q})$ is a complementary pair of length $4N$.
Next, we prove the other two conditions of Definition \ref{golayzcz1d}. Consider $1\leq\tau\leq N$. Then we have
\begin{equation}
\begin{split}
R_\mathbf{p}(\tau)=& \sum_{k=0}^{4N-1} p_kp_{k+\tau}^*\\
=& \sum_{k=0}^{N-1-\tau} a_ka_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_1a_kx_2^*b_{k-(N-\tau)}^* \\&+\sum_{k=0}^{N-1-\tau} b_kb_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_2b_kx_3^*a_{k-(N-\tau)}^* \\
& +\sum_{k=0}^{N-1-\tau} a_ka_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_3a_kx_4^*b_{k-(N-\tau)}^*\\& +\sum_{k=0}^{N-1-\tau} b_kb_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_4b_kx_1^*a_{k-(N-\tau)}^* \\
=&2\big(C_\mathbf{a}(\tau) +C_\mathbf{b}(\tau)\big). \\
\end{split}
\end{equation}
Similarly for $1\leq\tau\leq N$, we have
\begin{equation}
\begin{split}
R_\mathbf{q}(\tau)=&2\big(C_\mathbf{c}(\tau) +C_\mathbf{d}(\tau)\big), \\
R_\mathbf{p,q}(\tau)=&2\big(C_\mathbf{a,c}(\tau) +C_\mathbf{b,d}(\tau)\big).
\end{split}
\end{equation}
Since $(\mathbf{a},\mathbf{b})$ is a GCP and $(\mathbf{c},\mathbf{d})$ is one of the complementary mates of $(\mathbf{a},\mathbf{b})$, $R_\mathbf{p}(\tau)=R_\mathbf{q}(\tau)=0,\hbox{ for all } 1\leq\tau\leq N,$ and $R_\mathbf{p,q}(\tau)=0,\hbox{ for all } 0\leq\tau\leq N$.
Therefore, $(\mathbf{p,q})$ is a $(2,4N,N)$- Golay-ZCZ sequence pair, consisting of sequences of length $4N$ with $Z_{\min}=N$.
\end{IEEEproof}
\begin{example}\label{ex1 binary}
Let $(\mathbf{a,b})$ be a binary GCP of length 10, given by $\mathbf{a}=(1,1,-1,1,1,1,1,1,-1,-1)$, $\mathbf{b}=(1,1,-1,1,-1,1,-1, -1,1,1)$.
Then $(\mathbf{c,d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$ is a Golay mate of $(\mathbf{a,b})$. Define
\begin{equation}
\begin{split}
\mathbf{p}=~&\mathbf{a}~\|~\mathbf{b}~\|~\mathbf{a}~\|-\mathbf{b}, \\
\mathbf{q}=~&\mathbf{c}~\|~\mathbf{d}~\|~\mathbf{c}~\|-\mathbf{d}.
\end{split}
\end{equation}
Then $(\mathbf{p,q})$ is a $(2,40,10)$- Golay-ZCZ sequence pair, because
\begin{equation}
\begin{split}
\big(R_\mathbf{p}(\tau)\big)_{\tau=0}^{39}=&(40,\mathbf{0}_{10},-4,-8,4,8, -4,0,4,0,12,0,12,0,4,0,-4,8,4,-8,-4,\mathbf{0}_{10}), \\
\big(R_\mathbf{q}(\tau)\big)_{\tau=0}^{39}=&(40,\mathbf{0}_{10},4,8,-4,-8, 4,0,-4,0,-12,0,-12,0,-4,0,4,-8,-4,8,4,\mathbf{0}_{10}), \\
\big(R_\mathbf{p,q}(\tau)\big)_{\tau=0}^{39}=&(\mathbf{0}_{11},-4,-8,4, 16,4,0,4,-8,-4,0,4,-8,12,0,12,0,-4,8,4,\mathbf{0}_{10}).
\end{split}
\end{equation}
\end{example}
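The claims of Example \ref{ex1 binary} can be verified directly against the three conditions of Definition \ref{golayzcz1d}. The following Python sketch (illustration only) rebuilds $(\mathbf{p},\mathbf{q})$ and checks C1, C2, and C3:

```python
def accf(a, b, tau):
    # aperiodic cross-correlation (entries are real here, so no conjugation)
    N = len(a)
    if tau >= 0:
        return sum(a[k] * b[k + tau] for k in range(N - tau))
    return sum(a[k - tau] * b[k] for k in range(N + tau))

def paccf(a, b, tau):
    # periodic cross-correlation
    N = len(a)
    return sum(a[k] * b[(k + tau) % N] for k in range(N))

a = [1, 1, -1, 1, 1, 1, 1, 1, -1, -1]
b = [1, 1, -1, 1, -1, 1, -1, -1, 1, 1]
c = b[::-1]                      # binary, so conjugation is trivial
d = [-x for x in a[::-1]]
p = a + b + a + [-x for x in b]  # x1,x2,x3,x4 = 1,1,1,-1 (x1*x2 + x3*x4 = 0)
q = c + d + c + [-x for x in d]

assert all(accf(p, p, t) + accf(q, q, t) == 0 for t in range(1, 40))  # C1
assert all(paccf(p, p, t) == 0 and paccf(q, q, t) == 0
           for t in range(1, 11))                                     # C2
assert all(paccf(p, q, t) == 0 for t in range(-10, 11))               # C3
```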
\begin{example}\label{ex1}
Let $\mathbf{a}=(1,i,-i,-1,i)$ and $\mathbf{b}=(1,1,1,i,-i)$; then $(\mathbf{a,b})$ is a quadriphase GCP of length 5.
Then $(\mathbf{c,d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$ is a Golay mate of $(\mathbf{a,b})$. Define
\begin{equation}
\begin{split}
\mathbf{p}=~&\mathbf{a}~\|~\mathbf{b}~\|~\mathbf{a}~\|-\mathbf{b},\\
\mathbf{q}=~&\mathbf{c}~\|~\mathbf{d}~\|~\mathbf{c}~\|-\mathbf{d}.
\end{split}
\end{equation}
Then $(\mathbf{p,q})$ is a $(2,20,5)$- Golay-ZCZ sequence pair. The periodic correlation magnitudes of $(2,20,5)$- Golay-ZCZ sequence pair ($\mathbf{p,q}$) are shown in Fig. \ref{fig1}.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{ex2golayzcz.pdf}
\caption{A glimpse of the periodic correlations of the proposed Golay-ZCZ sequence pair given in Example \ref{ex1}.}\label{fig1}
\end{figure}
\end{example}
\begin{remark}
Please note that binary Golay-ZCZ sequences of length 40 and quadriphase Golay-ZCZ sequences of length 20 have not been previously reported in the literature. By considering a GCP $(\mathbf{a,b})$ of length $2^m$ in Theorem \ref{th1}, we can construct $(\mathbf{p,q})$ of length $2^{m+2}$, and the resultant Golay-ZCZ sequences will have parameters equivalent to those of the Golay-ZCZ sequences reported in \cite{Chen201811}.
\end{remark}
\section{New Construction of Complementary Sets with ZCZ}\label{golayzcz2d}
The sequence sets resulting from the constructions reported in \cite{Chen201811} have the property that, as the number of sequences in the Golay-ZCZ sequence set increases, the ZCZ width decreases. Also, the resultant sequence sets in \cite{Chen201811} are not optimal in the polyphase case. These problems are taken care of in the following construction. In this section, we propose a new construction of Golay-ZCZ sequence sets based on CCCs, which is asymptotically optimal. Before we proceed further, we reveal a useful property of CCCs.
\begin{property}\label{prop1}
Let $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$ be an $(M,M,N)$- CCC, consisting of $M$ CSs $\mathcal{C}^{k}$, $0\leq k<M$, each having $M$ sequences of length $N$. Let $\mathcal{D}$ be a sequence set of order $M\times MN$ defined as follows:
\begin{equation}
\mathcal{D}=\begin{bmatrix}
\mathbf{c}^0_0 & \mathbf{c}^0_1 & \cdots & \mathbf{c}^0_{M-1}\\
\mathbf{c}^1_0 & \mathbf{c}^1_1 & \cdots & \mathbf{c}^1_{M-1}\\
\vdots & \vdots & \ddots & \vdots\\
\mathbf{c}^{M-1}_0 & \mathbf{c}^{M-1}_1 & \cdots & \mathbf{c}^{M-1}_{M-1}
\end{bmatrix}_{M\times MN},
\end{equation}
where the $i$-th row is generated from $\mathcal{C}^i$ by appending the rows of $\mathcal{C}^i$ to the right of one another. For $0\leq i \leq M-1$, let
\begin{equation}
\mathcal{D}^i=\begin{bmatrix}
\mathbf{c}^0_i\\ \mathbf{c}^1_i\\ \vdots\\ \mathbf{c}^{M-1}_i
\end{bmatrix}.
\end{equation}Then $\mathfrak{D}=\{\mathcal{D}^0,\mathcal{D}^1,\cdots,\mathcal{D}^{M-1}\}$ is also an $(M,M,N)$- CCC.
\end{property}
\begin{IEEEproof}
Let $\mathbf{c}_i^{k}(x)$ denote the characteristic polynomial of the sequence $\mathbf{c}_i^{k}$. Since $\mathcal{C}^{k}=\{\mathbf{c}_0^{k},\mathbf{c}_1^{k},\cdots,\mathbf{c}_{M-1}^{k}\}$ is a CS, we have
\begin{equation}
\sum_{i=0}^{M-1}\mathbf{c}_i^{k}(x)\left(\mathbf{c}_i^{k}(x^{-1})\right)^*=MN, \hbox{ for each }k.
\end{equation}
As $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$ is an $(M,M,N)$- CCC, we have
\begin{equation}
\sum_{i=0}^{M-1}\mathbf{c}_i^{k_1}(x)\left(\mathbf{c}_i^{k_2}(x^{-1})\right)^*=0, \hbox{ if }k_1 \neq k_2.
\end{equation}
By matrix representation, we have
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x) & \cdots & \mathbf{c}_{M-1}^{0}(x) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(x) & \cdots & \mathbf{c}_{M-1}^{M-1}(x) \\
\end{array}
\right)
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x^{-1}) & \cdots & \mathbf{c}_{0}^{M-1}(x^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(x^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(x^{-1}) \\
\end{array}
\right)^*
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
Since the product of the two matrices above is $MN$ times the identity matrix, the two factors also commute in the reverse order, and we have
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x^{-1}) & \cdots & \mathbf{c}_{0}^{M-1}(x^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(x^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(x^{-1}) \\
\end{array}
\right)^*
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x) & \cdots & \mathbf{c}_{M-1}^{0}(x) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(x) & \cdots & \mathbf{c}_{M-1}^{M-1}(x) \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
Substituting $y=x^{-1}$, we have
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y) & \cdots & \mathbf{c}_{0}^{M-1}(y) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(y) & \cdots & \mathbf{c}_{M-1}^{M-1}(y) \\
\end{array}
\right)^*
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{0}(y^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(y^{-1}) \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
Taking the complex conjugate of both sides, we have
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y) & \cdots & \mathbf{c}_{0}^{M-1}(y) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(y) & \cdots & \mathbf{c}_{M-1}^{M-1}(y) \\
\end{array}
\right)
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{0}(y^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(y^{-1}) \\
\end{array}
\right)^*
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
This completes the proof.
\end{IEEEproof}
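Property \ref{prop1} can be illustrated numerically on a small quadriphase $(2,2,2)$- CCC (the particular CCC below is a toy example chosen for illustration, not taken from the cited constructions):

```python
def accf(a, b, tau):
    # aperiodic cross-correlation C_{a,b}(tau)
    N = len(a)
    if tau >= 0:
        return sum(a[k] * b[k + tau].conjugate() for k in range(N - tau))
    return sum(a[k - tau] * b[k].conjugate() for k in range(N + tau))

def is_ccc(sets):
    # (M, M, N)-CCC test: auto-correlation sums give MN at tau = 0 and
    # vanish elsewhere; cross-correlation sums vanish at every shift.
    M, N = len(sets), len(sets[0][0])
    for k1 in range(M):
        for k2 in range(M):
            for tau in range(-(N - 1), N):
                s = sum(accf(sets[k1][m], sets[k2][m], tau) for m in range(M))
                if s != (M * N if (k1 == k2 and tau == 0) else 0):
                    return False
    return True

frakC = [[[1, 1], [1, -1]],          # C^0
         [[1j, -1j], [1j, 1j]]]      # C^1
assert is_ccc(frakC)
# Collecting the i-th sequence of every set ("transposing" the family)
# again yields a CCC, as Property 1 asserts.
frakD = [[frakC[k][i] for k in range(2)] for i in range(2)]
assert is_ccc(frakD)
```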
\begin{theorem}\label{const2n}
Let $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\dots,\mathcal{C}^{M-1}\}$ be an $(M,M,N)$- CCC, and let $\mathcal{F}$ be an IDFT matrix of order $M\times M$, where $f_{i,j}$ denotes the element in the $i$-th row and $j$-th column of $\mathcal{F}$. Define $\mathcal{B}_k$, for $0\leq k<M$, as follows:
\begin{equation}
\mathcal{B}_k=\begin{bmatrix}
f_{0,0}\mathbf{c}_{k}^0 & f_{0,1}\mathbf{c}_{k}^1 & \cdots & f_{0,M-1}\mathbf{c}_{k}^{M-1}\\
f_{1,0}\mathbf{c}_{k}^0 & f_{1,1}\mathbf{c}_{k}^1 & \cdots & f_{1,M-1}\mathbf{c}_{k}^{M-1}\\
\vdots & \vdots & \ddots & \vdots \\
f_{M-1,0}\mathbf{c}_{k}^0 & f_{M-1,1}\mathbf{c}_{k}^1 & \cdots & f_{M-1,M-1}\mathbf{c}_{k}^{M-1}\\
\end{bmatrix}_{M\times MN}
\end{equation}
Let $\tilde{\mathcal{B}}_k$ denote the sequence of length $M^2N$ generated from $\mathcal{B}_k$ by appending the rows of $\mathcal{B}_k$ to the right of one another. Then the sequence set $\mathcal{D}=\{\tilde{\mathcal{B}}_0,\tilde{\mathcal{B}}_1,\dots,\tilde{\mathcal{B}}_{M-1}\}$ is an $(M,M^2N,(M-1)N)$- Golay-ZCZ sequence set; in particular, $\mathcal{D}$ is a CS of length $M^2N$ with $Z_{\min}=(M-1)N$.
\end{theorem}
\begin{IEEEproof}
First we prove that $\mathcal{D}$ is a CS. For $0 < \tau \leq N-1$, we have
\begin{equation}
\begin{split}
\sum_{i=0}^{M-1}C_{\tilde{\mathcal{B}}_i}(\tau)=&\sum_{j=0}^{M-1}\left[\left(\sum_{k=0}^{M-1}f^2_{k,j}\right)\left(\sum_{k=0}^{M-1}C_{\mathbf{c}^j_k}(\tau)\right)\right]+\sum_{j=0}^{M-2}\left[\left(\sum_{k=0}^{M-1}f_{k,j}f^*_{k,j+1}\right)\left(\sum_{k=0}^{M-1}C_{\mathbf{c}^j_k,\mathbf{c}^{j+1}_k}(\tau)\right)\right]\\&+\left(\sum_{k=0}^{M-2}f_{k,M-1}f^*_{k+1,0}\right)\left(\sum_{k=0}^{M-1}C_{\mathbf{c}^{M-1}_k,\mathbf{c}^0_k}(\tau)\right).
\end{split}
\end{equation}
Since $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\dots,\mathcal{C}^{M-1}\}$ is a CCC, we have $\sum_{i=0}^{M-1}C_{\tilde{\mathcal{B}}_i}(\tau)=0$. Similarly, we can show for other values of $\tau$ that $\sum_{i=0}^{M-1}C_{\tilde{\mathcal{B}}_i}(\tau)=0$.
Next, we prove the other two conditions of Definition \ref{golayzcz1d}. For $0\leq \tau < M^2N$, write $\tau=rN+s$ with $0\leq s<N$. Let us define
\begin{equation}
\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}={\mathcal{B}_{k_1}} \odot L^r_s({\mathcal{B}_{k_2}}),
\end{equation}
where $\odot$ denotes the elementwise product of the matrices $\mathcal{B}_{k_1}$ and $L^r_s({\mathcal{B}_{k_2}})$. Note that $\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$ is a matrix of size $M\times MN$. When $k_1=k_2$, we write $\mathcal{H}_{\mathcal{B}_{k_1}}$ instead of $\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$, and $\sum \mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$ denotes the sum of all elements of $\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$. To check the periodic autocorrelations of the constituent sequences of $\mathcal{D}$, we have, for $\tau=rN+s$ and $0\leq k <M$,
\begin{equation}
\begin{split}
R_{\tilde{\mathcal{B}_k}}(\tau)&=\sum \mathcal{H}_{\mathcal{B}_{k}}\\
&=\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r}{M}\rfloor}(f_{:,<i+r>_M})) C_{\mathbf{c}_k^{i},\mathbf{c}_k^{<i+r>_M}}(\tau-rN) \\
&+\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r+1}{M}\rfloor}(f_{:,<i+r+1>_M})) C_{\mathbf{c}_k^{i},\mathbf{c}_k^{<i+r+1>_M}}(\tau-(r+1)N),
\end{split}
\end{equation}
where `$\cdot$' denotes the `inner product' of two sequences, and $<x>_M$ denotes $x \mod M$. Since $\mathcal{F}$ is an IDFT matrix, using Property \ref{prop1}, we have
\begin{equation}
R_{\tilde{\mathcal{B}_k}}(\tau)=\begin{cases}
M^2N, & \text{when }\tau=0;\\
0, & \text{when }1\leq \tau \leq (M-1)N;\\
\text{non-zero} & \text{when }(M-1)N< \tau.
\end{cases}
\end{equation}
Similarly, to check the cross-correlation we have for $0\leq k_1\neq k_2 <M$,
\begin{equation}
\begin{split}
R_{\tilde{\mathcal{B}}_{k_1},\tilde{\mathcal{B}}_{k_2}}(\tau)&=\sum \mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}\\
&=\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r}{M}\rfloor}(f_{:,<i+r>_M})) C_{\mathbf{c}_{k_1}^{i},\mathbf{c}_{k_2}^{<i+r>_M}}(\tau-rN) \\
&+\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r+1}{M}\rfloor}(f_{:,<i+r+1>_M})) C_{\mathbf{c}_{k_1}^{i},\mathbf{c}_{k_2}^{<i+r+1>_M}}(\tau-(r+1)N).
\end{split}
\end{equation}
Since $\mathfrak{C}$ is a CCC and $\mathcal{F}$ is an IDFT matrix, using property \ref{prop1}, we have
\begin{equation}
R_{\tilde{\mathcal{B}}_{k_1},\tilde{\mathcal{B}}_{k_2}}(\tau)=\begin{cases}
0, & \text{when }\tau=0;\\
0, & \text{when }1\leq \tau \leq (M-1)N;\\
\text{non-zero} & \text{when }(M-1)N< \tau.
\end{cases}
\end{equation}
Hence, $\mathcal{D}$ is an $(M,M^2N,(M-1)N)$- Golay-ZCZ sequence set of set size $M$, consisting of sequences of length $M^2N$ with ZCZ width $(M-1)N$.
\end{IEEEproof}
\begin{example}\label{ex3n}
Let $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\mathcal{C}^2,\mathcal{C}^3\}$ be a $(4,4,4)$- CCC given by
\begin{equation}
\begin{split}
\mathcal{C}^0=\begin{bmatrix}
1 & 1 & 1 & 1\\
1 & 1 & -1 & -1\\
-1 & 1 & -1 & 1\\
-1 & 1 & 1 & -1
\end{bmatrix},&
\mathcal{C}^1=\begin{bmatrix}
-1 & -1 & 1 & 1\\
-1 & -1 & -1 & -1\\
1 & -1 & -1 & 1\\
1 & -1 & 1 & -1
\end{bmatrix},\\
\mathcal{C}^2=\begin{bmatrix}
-1 & 1 & -1 & 1\\
-1 & 1 & 1 & -1\\
1 & 1 & 1 & 1\\
1 & 1 & -1 & -1
\end{bmatrix},&
\mathcal{C}^3=\begin{bmatrix}
1 & -1 & -1 & 1\\
1 & -1 & 1 & -1\\
-1 & -1 & 1 & 1\\
-1 & -1 & -1 & -1
\end{bmatrix},
\end{split}
\end{equation}
and $\mathcal{F}$ be a $4\times 4$ IDFT matrix. Construct $\mathcal{D}=\{\tilde{\mathcal{B}}_0,\tilde{\mathcal{B}}_1,\tilde{\mathcal{B}}_2,\tilde{\mathcal{B}}_3\}$ as per Construction \ref{const2n}. Then $\mathcal{D}$ is a $(4,64,12)$- Golay-ZCZ sequence set. A glimpse of the correlations of the sequence set $\mathcal{D}$ is shown in Fig. \ref{fig2}.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{example3cszcz.pdf}
\caption{A glimpse of the correlations of the sequence set $\mathcal{D}$, constructed in Example \ref{ex3n}.}\label{fig2}
\end{figure}
\end{example}
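As a sanity check, the seed CCC of Example \ref{ex3n} can be verified numerically against the CCC definition. The following Python sketch (an illustration only, not part of the construction) confirms that every aperiodic auto-correlation sum vanishes for $0<\tau\leq 3$ and every cross-correlation sum vanishes for $0\leq\tau\leq 3$:

```python
# Verify that the four matrices of Example ex3n form a (4,4,4)-CCC.
C = [
    [[1, 1, 1, 1], [1, 1, -1, -1], [-1, 1, -1, 1], [-1, 1, 1, -1]],      # C^0
    [[-1, -1, 1, 1], [-1, -1, -1, -1], [1, -1, -1, 1], [1, -1, 1, -1]],  # C^1
    [[-1, 1, -1, 1], [-1, 1, 1, -1], [1, 1, 1, 1], [1, 1, -1, -1]],      # C^2
    [[1, -1, -1, 1], [1, -1, 1, -1], [-1, -1, 1, 1], [-1, -1, -1, -1]],  # C^3
]
M, N = 4, 4

def accf(a, b, tau):
    """Aperiodic cross-correlation C_{a,b}(tau), 0 <= tau < N (real sequences)."""
    return sum(a[k] * b[k + tau] for k in range(len(a) - tau))

def is_ccc(C, M, N):
    for k1 in range(M):
        for k2 in range(M):
            for tau in range(N):
                corr_sum = sum(accf(C[k1][m], C[k2][m], tau) for m in range(M))
                expected = M * N if (k1 == k2 and tau == 0) else 0
                if corr_sum != expected:
                    return False
    return True

print(is_ccc(C, M, N))  # True
```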
\begin{remark}
In \cite{Chen201811}, the authors reported $(2^k,2^m,2^{m-k-1})$- Golay-ZCZ sequence sets. Considering $k=2$ and $m=6$, we get a $(4,64,8)$- Golay-ZCZ sequence set. As we see in Example \ref{ex3n}, the Golay-ZCZ sequence sets proposed in Theorem \ref{const2n} have larger ZCZ widths compared to the results in \cite{Chen201811}.
\end{remark}
Since Theorem \ref{const2n} is based on CCCs, the availability of CCCs for various flexible lengths greatly improves the applicability of the construction. In Table \ref{tabccc}, we list all the well-known constructions of CCCs to date. We also provide an iterative construction of CCCs to improve the flexibility of the parameters. Before we proceed, we need the following lemma.
\begin{lemma}\label{lemnew}
Let $\mathbf{x}_1,\mathbf{x}_2$ be two sequences of length $N_1$ and $\mathbf{y}_1,\mathbf{y}_2$ be two sequences of length $N_2$. Then the
aperiodic cross-correlation between $\mathbf{x}_1\otimes\mathbf{y}_1$ and $\mathbf{x}_2\otimes\mathbf{y}_2$ is given by \cite{jin}
\begin{equation}
C_{\mathbf{x}_1\otimes\mathbf{y}_1,\mathbf{x}_2\otimes\mathbf{y}_2} (\tau)= C_{\mathbf{x}_1,\mathbf{x}_2}(k_1) C_{\mathbf{y}_1,\mathbf{y}_2}(k_2) +C_{\mathbf{x}_1,\mathbf{x}_2}(k_1+1)C_{\mathbf{y}_1,\mathbf{y}_2}(k_2-N_2),
\end{equation}
where $\tau=k_1N_2+k_2$ and $0\leq k_2<N_2$.
\end{lemma}
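Lemma \ref{lemnew} holds for arbitrary sequences, which makes it easy to sanity-check numerically. The sketch below (the $\pm 1$ sequences are arbitrary illustrative choices) compares the left- and right-hand sides of the identity at every shift $\tau=k_1N_2+k_2$:

```python
# Numerical check of the Kronecker-product correlation identity of Lemma lemnew.
def accf(a, b, tau):
    """Aperiodic cross-correlation C_{a,b}(tau); returns 0 when |tau| >= len(a)."""
    N = len(a)
    if abs(tau) >= N:
        return 0
    if tau >= 0:
        return sum(a[k] * b[k + tau] for k in range(N - tau))
    return sum(a[k - tau] * b[k] for k in range(N + tau))

def kron(x, y):
    """Kronecker product of two sequences: [x0*y, x1*y, ...]."""
    return [xi * yj for xi in x for yj in y]

# Arbitrary +/-1 test sequences (illustrative values only).
x1, x2 = [1, -1, 1], [1, 1, -1]                 # length N1 = 3
y1, y2 = [1, 1, -1, 1, -1], [-1, 1, 1, 1, -1]   # length N2 = 5
N1, N2 = len(x1), len(y1)

for tau in range(N1 * N2):
    k1, k2 = divmod(tau, N2)
    lhs = accf(kron(x1, y1), kron(x2, y2), tau)
    rhs = (accf(x1, x2, k1) * accf(y1, y2, k2)
           + accf(x1, x2, k1 + 1) * accf(y1, y2, k2 - N2))
    assert lhs == rhs
print("identity verified for all shifts")
```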
\begin{theorem}\label{th3}
Let $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\dots,\mathcal{C}^{M-1}\}$ and $\mathfrak{D}=\{\mathcal{D}^0,\mathcal{D}^1,\dots,\mathcal{D}^{M-1}\}$ be two CCCs with parameters $(M,M,N_1)$ and $(M,M,N_2)$, respectively. Then $\mathfrak{E}=\{\mathcal{E}^0,\mathcal{E}^1,\dots,\mathcal{E}^{M-1}\}$, given by
\begin{equation}
\mathcal{E}^{m}=\begin{bmatrix}
\mathbf{e}^{m}_0\\
\mathbf{e}^{m}_1\\
\vdots\\
\mathbf{e}^{m}_{M-1}
\end{bmatrix}_{M \times MN_1N_2}=\left[
\begin{array}{cccc}
\mathbf{c}^{m}_0\otimes\mathbf{d}^{0}_0 & \mathbf{c}^{m}_1\otimes\mathbf{d}^{0}_1 & \cdots & \mathbf{c}^{m}_{M-1}\otimes\mathbf{d}^{0}_{M-1} \\
\mathbf{c}^{m}_0\otimes\mathbf{d}^{1}_0 & \mathbf{c}^{m}_1\otimes\mathbf{d}^{1}_1 & \cdots & \mathbf{c}^{m}_{M-1}\otimes\mathbf{d}^{1}_{M-1} \\
\vdots & \vdots & & \vdots \\
\mathbf{c}^{m}_0\otimes\mathbf{d}^{M-1}_0 & \mathbf{c}^{m}_1\otimes\mathbf{d}^{M-1}_1 & \cdots & \mathbf{c}^{m}_{M-1}\otimes\mathbf{d}^{M-1}_{M-1} \\
\end{array}
\right]
\end{equation}
is a CCC with parameters $(M,M,MN_1N_2)$.
\end{theorem}
\begin{IEEEproof}
First, we prove that for $0\leq m <M$, $\mathcal{E}^m$ is a CS of length $MN_1N_2$.
By Lemma \ref{lemnew}, we have
\begin{equation}
\begin{split}
\sum_{l=0}^{M-1}C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0) = & \sum_{l=0}^{M-1} \left(C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1) C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2)+C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1+1) C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2-N_2)\right) \\
= & C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1)\sum_{l=0}^{M-1} C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2)+C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1+1) \sum_{l=0}^{M-1} C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2-N_2)\\
= & \begin{cases}
MN_2C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1), & k_2=0,n_1=n_2; \\
0, & k_2\neq0,n_1=n_2; \\
0, & \hbox{for all }k_2,n_1\neq n_2,
\end{cases}
\end{split}
\end{equation}
where $\tau_0=k_1N_2+k_2$.
For $\mathcal{E}^m$, let us write $\tau=rN_1N_2+\tau_0$ with $0\leq \tau_0<N_1N_2$; then
\begin{equation}\label{eq39n}
\sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau)= \sum_{l=0}^{M-1}\sum_{n_1=0}^{M-1-r}(C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0)+ C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2+1}\otimes\mathbf{d}^{l}_{n_2+1}} (\tau_0-N_1N_2)),
\end{equation}
where $n_2=n_1+r$.
Clearly, we have $\sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau)=0$ if $r\geq1$. For $r=0$, we have
\begin{equation}
\begin{split}
\sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau) = & \sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau_0) \\
= & \sum_{l=0}^{M-1}\sum_{n_1=n_2=0}^{M-1} C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0) \\
= & \left\{
\begin{array}{ll}
\sum\limits_{n_1=n_2=0}^{M-1}MN_2C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1), & k_2=0; \\
0, & k_2\neq 0.
\end{array}
\right. \\
= & \left\{
\begin{array}{ll}
M^2N_1N_2, & k_1=0,k_2=0; \\
0, & \hbox{otherwise.}
\end{array}
\right.
\end{split}
\end{equation}
Next, we prove that $\mathcal{E}^{m_1}$ and $\mathcal{E}^{m_2}$ are orthogonal if $m_1\neq m_2$.
Similar to (\ref{eq39n}), we consider the cross-correlation sum between $\mathcal{E}^{m_1}$ and $\mathcal{E}^{m_2}$ and obtain
\begin{equation}
\sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau)= \sum_{l=0}^{M-1}\sum_{n_1=0}^{M-1-r}(C_{\mathbf{c}^{m_1}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m_2}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0)+ C_{\mathbf{c}^{m_1}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m_2}_{n_2+1}\otimes\mathbf{d}^{l}_{n_2+1}} (\tau_0-N_1N_2)),
\end{equation}
where $n_2=n_1+r$.
Clearly, we have $\sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau)=0$ if $r\geq1$. For $r=0$, we have
\begin{equation}
\begin{split}
\sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau) = & \sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau_0) \\
= & \sum_{l=0}^{M-1}\sum_{n_1=n_2=0}^{M-1} C_{\mathbf{c}^{m_1}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m_2}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0) \\
= & \left\{
\begin{array}{ll}
\sum\limits_{n_1=n_2=0}^{M-1}MN_2C_{\mathbf{c}^{m_1}_{n_1}, \mathbf{c}^{m_2}_{n_2}}(k_1), & k_2=0; \\
0, & k_2\neq 0.
\end{array}
\right. \\
= & 0.
\end{split}
\end{equation}
Hence $\mathfrak{E}=\{\mathcal{E}^0,\mathcal{E}^1,\dots,\mathcal{E}^{M-1}\}$ is an $(M,M,MN_1N_2)$- CCC.
\end{IEEEproof}
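As a numerical sanity check of Theorem \ref{th3}, the sketch below uses the $(4,4,4)$- CCC of Example \ref{ex3n} as both seed CCCs (so $N_1=N_2=4$) and verifies that the resulting $\mathfrak{E}$ is a $(4,4,64)$- CCC (an illustration of the theorem, not an additional construction):

```python
# Build E of Theorem th3 from two copies of the (4,4,4)-CCC of Example ex3n
# and verify the (M, M, M*N1*N2) CCC property.
SEED = [
    [[1, 1, 1, 1], [1, 1, -1, -1], [-1, 1, -1, 1], [-1, 1, 1, -1]],
    [[-1, -1, 1, 1], [-1, -1, -1, -1], [1, -1, -1, 1], [1, -1, 1, -1]],
    [[-1, 1, -1, 1], [-1, 1, 1, -1], [1, 1, 1, 1], [1, 1, -1, -1]],
    [[1, -1, -1, 1], [1, -1, 1, -1], [-1, -1, 1, 1], [-1, -1, -1, -1]],
]
M = 4
C, D = SEED, SEED        # seed CCCs; N1 = N2 = 4, so E's sequences have length 64

def accf(a, b, tau):
    return sum(a[k] * b[k + tau] for k in range(len(a) - tau))

def kron(x, y):
    return [xi * yj for xi in x for yj in y]

# Row l of E^m is the concatenation over n of c^m_n (x) d^l_n.
E = [[sum((kron(C[m][n], D[l][n]) for n in range(M)), [])
      for l in range(M)] for m in range(M)]

L = len(E[0][0])         # M * N1 * N2 = 64
ok = all(
    sum(accf(E[m1][l], E[m2][l], tau) for l in range(M))
    == (M * L if (m1 == m2 and tau == 0) else 0)
    for m1 in range(M) for m2 in range(M) for tau in range(L)
)
print(ok)  # True
```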
\begin{remark}\label{remarkccc}
To increase the flexibility of the proposed construction in Theorem \ref{const2n}, we have also found binary CCCs with parameters $(4,4,3)$, $(4,4,5)$, $(4,4,7)$, $(4,4,11)$, and $(4,4,13)$ by computer search, which can be used as seed CCCs for the proposed construction in Theorem \ref{const2n}. Writing $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\mathcal{C}^2,\mathcal{C}^3\}$ for a $(4,4,N)$- CCC, the search results can be found in Table \ref{tab3}, where each element represents a power of $(-1)$. The search results are important in their own right because, as observed in recent results \cite{chenccc,yuboccc,bingshenccc}, for a $(K,M,N)$ mutually orthogonal sequence set obtained through systematic constructions, the maximum achievable $K/M$ ratio is $1/2$ when $N$ is not of the form $2^m$. In our case, although $N$ is not a power of two, the sequence sets are CCCs (i.e., $K=M$), so the $K/M$ ratio is $1$. Moreover, for lengths up to $200$ (i.e., $N\leq 200$), using the CCCs given in Table \ref{tab3} as seed CCCs together with the results of \cite{jin} and Theorem \ref{th3}, we can design binary $(4,4,N)$- CCCs for $N=12,~13,~20,~24,~28,~36,~40,~44,~48,~52,~56,~60$, $72,~80,~84,~88,~96,~112,~120,~132,~140$, $144,~156,~160,~168,~176,~192,~196,~200$.
\end{remark}
\begin{table}[]
\renewcommand{\arraystretch}{1.3}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
$N$ & $\mathcal{C}^{0}$ & $\mathcal{C}^{1}$ & $\mathcal{C}^{2}$ & $\mathcal{C}^{3}$ \\ \hline
3 & \begin{tabular}[c]{@{}c@{}}000\\
001\\
001\\
010\end{tabular} & \begin{tabular}[c]{@{}c@{}}010\\
001\\
110\\
111\end{tabular} & \begin{tabular}[c]{@{}c@{}}011\\
000\\
101\\
100\end{tabular} & \begin{tabular}[c]{@{}c@{}}011\\
010\\
000\\
011\end{tabular} \\ \hline
5 & \begin{tabular}[c]{@{}c@{}}00001\\
01100\\
01000\\
01011\end{tabular} & \begin{tabular}[c]{@{}c@{}}00010\\
00101\\
01111\\
00110\end{tabular} & \begin{tabular}[c]{@{}c@{}}00101\\
11101\\
00110\\
10000\end{tabular} & \begin{tabular}[c]{@{}c@{}}01100\\
11110\\
01011\\
10111\end{tabular} \\ \hline
7 & \begin{tabular}[c]{@{}c@{}}0000001\\
0011010\\
0011010\\
0100011\end{tabular} & \begin{tabular}[c]{@{}c@{}}0011101\\
0011010\\
1100101\\
1000000\end{tabular} & \begin{tabular}[c]{@{}c@{}}0101100\\
0100011\\
1111110\\
1010011\end{tabular} & \begin{tabular}[c]{@{}c@{}}0101100\\
0111111\\
0011101\\
0101100\end{tabular} \\ \hline
11 & \begin{tabular}[c]{@{}c@{}}01110110110\\
00111000101\\
00011010100\\
00000001101\end{tabular} & \begin{tabular}[c]{@{}c@{}}00011010100\\
00000001101\\
10001001001\\
11000111010\end{tabular} & \begin{tabular}[c]{@{}c@{}}10110000000\\
11010100111\\
10100011100\\
10010010001\end{tabular} & \begin{tabular}[c]{@{}c@{}}01011100011\\
01101101110\\
10110000000\\
11010100111\end{tabular} \\ \hline
13 & \begin{tabular}[c]{@{}c@{}}0111011010100\\
0011101001101\\
0001100001001\\
0000000111010\end{tabular} & \begin{tabular}[c]{@{}c@{}}0001100001001\\
0000000111010\\
1000100101011\\
1100010110010\end{tabular} & \begin{tabular}[c]{@{}c@{}}0101110000000\\
0110111100111\\
1011001011100\\
1101010010001\end{tabular} & \begin{tabular}[c]{@{}c@{}}0100110100011\\
0010101101110\\
0101110000000\\
0110111100111\end{tabular} \\ \hline
\end{tabular}}
\caption{Computer search results of $(4,4,N)$- CCCs, for various values of $N$.\label{tab3}}
\end{table}
\section{Discussion on Optimality of the Proposed Sequence Sets}\label{secv}
For polyphase sequence sets consisting of $M$ sequences each of length $L$, having ZCZ width $Z$, we have from \cite{tangfan}
\begin{equation}
Z\leq \frac{L}{M}.
\end{equation}
For binary sequence sets, the bound is conjectured in \cite{tangfan} to be
\begin{equation}
Z\leq \frac{L}{2M}.
\end{equation}
Letting $Z_{opti}$ denote the optimal value of $Z$, we define the optimality factor $C$ as
\begin{equation}
C=\frac{Z}{Z_{opti}}.
\end{equation}
Consider the $(2,4N,N)$- Golay-ZCZ sequence sets described in Theorem \ref{th1}. In the binary case, we have $C=1$; hence the resultant Golay-ZCZ sequence pairs proposed in Theorem \ref{th1} are optimal.
Assuming that an $(M,M,N)$- CCC exists, from Theorem \ref{const2n} we get an $(M,M^2N,(M-1)N)$- Golay-ZCZ sequence set. For an optimal Golay-ZCZ sequence set, the ZCZ width should attain $Z_{opti}=\frac{L}{M}$, i.e., in this case $Z_{opti}=MN$. However, the ZCZ width achieved through Theorem \ref{const2n} is $Z=(M-1)N$.
Therefore,
\begin{equation}
\begin{split}
C&=\frac{Z}{Z_{opti}}\\
&=\frac{(M-1)}{M}.
\end{split}
\end{equation}
Therefore, $\lim\limits_{M\rightarrow \infty}C=1$; hence
the sequence sets proposed in Theorem \ref{const2n} are asymptotically optimal as the number of sequences increases.
\section{The Novelty of the Proposed Constructions as Compared to Previous Works}\label{novel}
In this section we compare the proposed constructions to the previous works, specifically with the works of Gong \textit{et al.} \cite{Gong2013} and Chen \textit{et al.} \cite{Chen201811} and discuss the novelty of the proposed constructions.
\begin{enumerate}
\item In \cite{Gong2013}, the authors only considered complementary sequences of lengths of the form $2^m$, and analysed their corresponding periodic zero auto-correlation zones. In comparison, Theorem \ref{th1} and Theorem \ref{const2n} consider complementary sequences of more flexible lengths and analyse both the periodic zero auto-correlation zone and the zero cross-correlation zone.
\item In \cite{Chen201811}, the authors proposed Golay-ZCZ sequence sets. However, the constituent sequences have lengths only of the form $2^m$. In comparison, in Theorem \ref{th1} we have proposed Golay-ZCZ sequence pairs of length $4N$, where $N$ is the length of a GCP, and in Theorem \ref{const2n} we have proposed Golay-ZCZ sequence sets, consisting of sequences of length $M^2N$, using an $(M,M,N)$ CCC. As can be seen, the lengths are more flexible compared to the results in \cite{Chen201811}.
\item The results reported in \cite{Chen201811} only achieve optimality for the case of binary $(2,2^m,2^{m-2})$- Golay-ZCZ sequence pairs. In comparison, based on the discussions in Section \ref{secv}, Theorem \ref{th1} results in a binary $(2,4N,N)$- Golay-ZCZ sequence pair, which is optimal, and Theorem \ref{const2n} results in a polyphase $(M,M^2N,(M-1)N)$ Golay-ZCZ sequence set, which is asymptotically optimal as the number of sequences increases.
\item Analysing Table 1 of \cite{Chen201811}, we can observe that, as the number of sequences of the Golay-ZCZ sequence set increases, the ZCZ width decreases. As compared to that, in Theorem \ref{const2n}, as the number of sequences of the Golay-ZCZ sequence set increases, the ZCZ width also increases, and eventually we achieve asymptotic optimality.
\item To increase the availability of CCCs of various parameters, which in turn increases the flexibility of the parameters of the proposed Golay-ZCZ sequence sets, we have proposed a new iterative construction of CCCs. We have also provided computer search results for binary CCCs with parameters $(4,4,N)$ for various values of $N$, given in Table \ref{tab3}. Moreover, using those as seed CCCs in \cite{jin} and Theorem \ref{th3}, we can construct several binary CCCs with new parameters, which have not been reported before. As discussed in Remark \ref{remarkccc}, these CCCs are important in their own right, since $N$ is not a power of two.
\end{enumerate}
\section{Conclusion}\label{section 5}
In this paper, we have made two contributions. First, we have systematically constructed GCPs of non-power-of-two lengths in which each of the constituent sequences has a periodic ZACZ and the pair has a periodic ZCCZ. We have also constructed complementary sets consisting of sequences having large periodic ZACZ and ZCCZ, using CCCs and IDFT matrices. Notably, the second construction results in Golay-ZCZ sequences that are asymptotically optimal with respect to the Tang-Fan-Matsufuji bound as the number of sequences increases. To increase the availability of CCCs for various parameters, and consequently the flexibility of the proposed Golay-ZCZ sequence sets, we have also proposed a new iterative construction of CCCs. Moreover, we have found binary CCCs with parameters $(4,4,3)$, $(4,4,5)$, $(4,4,7)$, $(4,4,11)$, and $(4,4,13)$ by computer search, which can be used as seed CCCs and greatly increase the flexibility of the proposed construction. Since the lengths of the resultant Golay-ZCZ sequences are more flexible, the proposed constructions partially fill the gap left by the previous remarkable works of Gong \textit{et al.} and Chen \textit{et al.}. The proposed Golay-ZCZ sequences have potential applications in uplink grant-free NOMA.
\section{Introduction}\label{section 1}
Golay complementary sets (GCSs) and zero correlation zone (ZCZ) sequence sets are two kinds of sequence sets with different desirable correlation properties. GCSs are sequence sets that have zero aperiodic autocorrelation sums (AACS) at all non-zero time shifts \cite{Golay61}, whereas ZCZ sequence sets have a zero correlation zone within a certain range of time shifts \cite{Fan2007}. Due to their favourable correlation properties, GCSs and ZCZ sequence sets have been widely used to reduce the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing systems \cite{Davis1999,Paterson2000}. However, a sequence's own periodic autocorrelation plays an important role in some applications, such as synchronization and detection of the signal.
Working in this direction, Gong \textit{et al.} \cite{Gong2013} investigated the periodic autocorrelation behaviour of a single Golay sequence in 2013. To be more specific, Gong \textit{et al.} presented two constructions of Golay sequences of length $2^m$, using generalized Boolean functions (GBFs), displaying periodic zero autocorrelation zones (ZACZs) of width $2^{m-2}$ and $2^{m-3}$, respectively, around the in-phase position \cite{Gong2013}. In 2018, Chen \textit{et al.} studied the zero cross-correlation zone among Golay sequences and proposed Golay-ZCZ sequence sets \cite{Chen201811}. Golay-ZCZ sequence sets are sequence sets having a periodic ZACZ for each sequence, a periodic zero cross-correlation zone (ZCCZ) for any two sequences, and aperiodic autocorrelation sums that vanish at all non-zero time shifts. Specifically, using GBFs, Chen \textit{et al.} gave a systematic construction of Golay-ZCZ sequence sets consisting of $2^k$ sequences, each of length $2^m$, where $\min\{ZACZ,ZCCZ\}$ is $2^{m-k-1}$ \cite{Chen201811}.
In \cite{Gong2013}, the authors discussed the application of Golay sequences with large ZACZ to ISI channel estimation. Using Golay sequences with large ZACZ as channel estimation sequences (CES), the authors analysed the performance of Golay-sequence-aided channel estimation in terms of the error variance and the classical Cram\'er-Rao lower bound (CRLB). It was shown in \cite{Gong2013} that when the channel impulse response (CIR) lies within the ZACZ width, the estimation error variance of the Golay sequences attains the CRLB. Recently, in 2021, Yu \cite{yu} demonstrated that sequence sets having low coherence of the spreading matrix along with low PAPR are suitable as pilot sequences for uplink grant-free non-orthogonal multiple access (NOMA). The work of \cite{yu} shows that Golay-ZCZ sequences can suitably be used as pilot sequences for uplink grant-free NOMA.
Inspired by the works of Gong \textit{et al.} \cite{Gong2013} and Chen \textit{et al.} \cite{Chen201811}, and by the applicability of Golay-ZCZ sequences as pilot sequences for uplink grant-free NOMA and channel estimation, we propose Golay-ZCZ sequence sets with new lengths. Note that the lengths of the GCPs with large ZCZs discussed in the works of Gong \textit{et al.} \cite{Gong2013} and Chen \textit{et al.} \cite{Chen201811} are all powers of two. To the best of the authors' knowledge, the problem of investigating the individual periodic autocorrelations of GCPs, and the periodic cross-correlations of the pairs, when the lengths of the GCPs are not powers of two remains largely open. An overview of the previous works that consider the periodic ZACZ of the individual sequences and the ZCCZ of a GCP is given in Table \ref{Table duibi}.
\begin{table}
\caption{Golay sequences with periodic ZACZ and ZCCZ.}
\label{Table duibi}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
Ref. & Length & set size & ZACZ width & ZCCZ width &Based on \\\hline
\hline
\cite{Gong2013} & $2^m$ & $2$ & $2^{m-2}$ or $2^{m-3}$ & Not discussed & GBF \\
\hline
\cite{Chen201811} & $2^m$ & $2^k$ & $2^{m-k-1}$ & $2^{m-k-1}$ & GBF \\
\hline
Theorem 1 & $4N$ & $2$ & $N$ & $N$ & GCP of length $N$. \\
\hline
Theorem 2 & $M^2N$ & $M$ & $(M-1)N$ & $(M-1)N$ & $(M,M,N)$- CCC.\\
\hline
\end{tabular}
\end{table}
We have proposed two constructions which result in Golay-ZCZ sequence sets consisting of sequences with more flexible lengths. To be more specific, assuming that a Golay complementary pair of length $N$ exists, we have proposed a systematic construction of Golay-ZCZ sequence pairs of length $4N$ having ZCZ width $N$. One of the limitations of the construction of Golay-ZCZ sequence sets reported in \cite{Chen201811} is that the ZCZ width decreases as the number of sequences increases. To solve that problem, we propose another construction of Golay-ZCZ sequence sets, consisting of sequences of length $M^2N$ having ZCZ width $(M-1)N$, from an $(M,M,N)$ complete complementary code (CCC). Interestingly, the resultant Golay-ZCZ sequences derived from the CCC are asymptotically optimal with respect to the Tang-Fan-Matsufuji bound \cite{tangfan} as the number of sequences increases, when polyphase sequences are considered. To increase the availability of CCCs for designing Golay-ZCZ sequences of more flexible lengths, we have also proposed a new iterative construction of CCCs based on the Kronecker product. A brief overview of the parameters of all CCCs proposed to date can be found in Table \ref{tabccc}.
\begin{table}
\centering
\caption{Parameters of CCCs}
\label{tabccc}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline
Ref. & Parameters & constraints & Construction based on \\\hline
\hline
\cite{suherio15} & $(M,M,M^N)$ & $N\geq 2$ & Unitary matrices \\
\hline
\cite{huang18} & $(2^{N-r},2^{N-r},2^N)$ & $r=1,2,\dots,N-1$ & Golay-paired Hadamard matrices \\
\hline
\cite{han53} & $(M,M,MN)$ & $N\leq M$ & Unitary matrices \\
\hline
\cite{han17} & $(M,M,MN/P)$ & $N,P\leq M$ & Unitary matrices \\
\hline
\cite{zhang54} & $(M,M,2^mM)$ & $m\geq 1$ & Unitary matrices \\
\hline
\cite{rathinakumar21} & $(2^{k+1},2^{k+1},2^m)$ & $m,k\geq 1,~k=m-1$ & GBF \\
\hline
\cite{yang55} & $(2N,2N,l)$ & $l> 1,~N\geq 1$ & Unitary matrices \\
\hline
\cite{mar22} & $(2^m,2^m,2^{mN})$ & $m\geq 1$ & Equivalent Hadamard matrices \\
\hline
\cite{chen20} & $(2^m,2^m,2^k)$ & $m,k\geq 1$ with $k\geq m$ & GBF \\
\hline
\cite{liu19} & $(2^m,2^m,2^k)$ & $m,k\geq 1$ & GBF \\
\hline
\cite{das23,ma} & $(M,M,M^N)$ & $N\geq 1$ & Paraunitary (PU) matrices \\
\hline
\cite{das24} & $(M,M,P^N)$ & $N\geq 1,~P|M$ & PU matrices \\
\hline
\cite{jin} & $(M_1M_2,M_1M_2,N_1N_2)$ & $(M_1,M_1,N_1)$- CCC and $(M_2,M_2,N_2)$- CCC exists & Kronecker product \\
\hline
Proposed & $(M,M,MN_1N_2)$ & $(M,M,N_1)$- CCC and $(M,M,N_2)$- CCC exists & Kronecker product \\
\hline
\end{tabular}}
\end{table}
The rest of the paper is organized as follows. In Section \ref{section 2}, some useful notations and preliminaries are recalled.
In Section \ref{section 3}, a systematic construction of Golay-ZCZ sequence pairs with flexible lengths is proposed. In Section \ref{golayzcz2d}, a systematic construction of Golay-ZCZ sequence sets based on existing CCCs is proposed; in the same section, we also propose an iterative construction of CCCs to increase the flexibility of the parameters of the proposed Golay-ZCZ sequences. In Section \ref{secv}, we discuss the optimality of the proposed Golay-ZCZ sequence sets. In Section \ref{novel}, we discuss the novelty of the proposed constructions as compared to previous works. Finally, we conclude the paper in Section \ref{section 5}.
\section{Preliminaries}\label{section 2}
Before we begin, let us define the following notations:
\begin{itemize}
\item $\mathbf{0}_L$ denotes the all-zero vector of length $L$.
\item $\overleftarrow{\mathbf{a}}$ denotes the reverse of the sequence $\mathbf{a}$.
\item $x^*$ denotes the complex conjugate of $x$.
\item $\mathbf{a}||\mathbf{b}$ denotes the concatenation of the sequences $\mathbf{a}$ and $\mathbf{b}$.
\item `$\mathbf{a}\cdot \mathbf{b}$' denotes the `inner product' of two sequences $\mathbf{a}$ and $\mathbf{b}$.
\item $<x>_M$ denotes $x \mod M$.
\item $\mathbf{x}\otimes\mathbf{y}= [x_0\mathbf{y},x_1\mathbf{y},\cdots, x_{N_1-1}\mathbf{y}]$, denotes the Kronecker product of the sequences $\mathbf{x}$ and $\mathbf{y}$.
\end{itemize}
\begin{definition}
Let $\mathbf{a}$ and $\mathbf{b}$ be two length $N$ sequences.
The periodic cross-correlation function (PCCF) of ${\mathbf{a}}$ and ${\mathbf{b}}$ is defined as
\begin{equation}\label{defi_PCCF}
R_{\mathbf{a},\mathbf{b}}(\tau):= \left \{
\begin{array}{cl}
\sum\limits_{k=0}^{N-1}{a_kb^*_{<k+\tau>_N}},&~~0\leq \tau \leq N-1;\\
\sum\limits_{k=0}^{N-1}{a_{<k-\tau>_N}b^*_k},&~~-(N-1)\leq \tau \leq -1;
\end{array}
\right .
\end{equation}
When $\mathbf{a} = \mathbf{b}$, $R_{\mathbf{a},\mathbf{b}}(\tau)$ is called periodic auto-correlation function (PACF) of $\mathbf{a}$ and is denoted as $R_{\mathbf{a}}(\tau)$.
\end{definition}
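In code, the PCCF of (\ref{defi_PCCF}) is a single sum with cyclic indexing. A minimal real-valued sketch (conjugation omitted), illustrated on the length-4 sequence $(1,1,1,-1)$, which has an ideal PACF:

```python
def pccf(a, b, tau):
    """Periodic cross-correlation R_{a,b}(tau) with cyclic indexing (real case)."""
    N = len(a)
    return sum(a[k] * b[(k + tau) % N] for k in range(N))

a = [1, 1, 1, -1]
print([pccf(a, a, t) for t in range(4)])  # [4, 0, 0, 0]
```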
\begin{definition}
Let $\mathbf{a}$ and $\mathbf{b}$ be two length $N$ sequences.
The aperiodic cross-correlation function (ACCF) of ${\mathbf{a}}$ and ${\mathbf{b}}$ is defined as
\begin{equation}\label{defi_ACCF}
C_{\mathbf{a},\mathbf{b}}(\tau):= \left \{
\begin{array}{cl}
\sum\limits_{k=0}^{N-1-\tau}{a_kb^*_{k+\tau}},&~~0\leq \tau \leq N-1;\\
\sum\limits_{k=0}^{N-1+\tau}{a_{k-\tau}b^*_k},&~~-(N-1)\leq \tau \leq -1;
\end{array}
\right .
\end{equation}
When $\mathbf{a} = \mathbf{b}$, $C_{\mathbf{a},\mathbf{b}}(\tau)$ is called aperiodic auto-correlation function (AACF) of $\mathbf{a}$ and is denoted as $C_{\mathbf{a}}(\tau)$.
\end{definition}
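The ACCF can be implemented directly from (\ref{defi_ACCF}), and the two correlation functions are linked by the standard relation $R_{\mathbf{a},\mathbf{b}}(\tau)=C_{\mathbf{a},\mathbf{b}}(\tau)+C_{\mathbf{a},\mathbf{b}}(\tau-N)$, which is what lets periodic ZCZ properties be deduced from aperiodic correlations. A real-valued Python sketch (conjugation omitted) verifying this relation:

```python
def accf(a, b, tau):
    """Aperiodic cross-correlation C_{a,b}(tau); 0 outside -(N-1)..(N-1)."""
    N = len(a)
    if abs(tau) >= N:
        return 0
    if tau >= 0:
        return sum(a[k] * b[k + tau] for k in range(N - tau))
    return sum(a[k - tau] * b[k] for k in range(N + tau))

def pccf(a, b, tau):
    """Periodic cross-correlation R_{a,b}(tau) with cyclic indexing."""
    N = len(a)
    return sum(a[k] * b[(k + tau) % N] for k in range(N))

a, b = [1, 1, 1, -1], [1, 1, -1, 1]
N = len(a)
for tau in range(N):
    assert pccf(a, b, tau) == accf(a, b, tau) + accf(a, b, tau - N)
print("R(tau) = C(tau) + C(tau - N) verified")
```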
\begin{definition}
A set $\mathcal{C} = \{\mathbf{c}_0, \mathbf{c}_1,\dots, \mathbf{c}_{M-1}\}$ consisting
of $M$ sequences of length $N$ is said to be an $(M,N,Z)$- ZCZ sequence set if it satisfies
\begin{equation}
\begin{split}
&R_{\mathbf{c}_i}(\tau)=0,~\text{for }1\leq |\tau|\leq Z,~0\leq i \leq M-1,\\
&R_{\mathbf{c}_i,\mathbf{c}_j}(\tau)=0,~\text{for }|\tau|\leq Z,~0\leq i\neq j\leq M-1.
\end{split}
\end{equation}
\end{definition}
\begin{definition}\label{golayzcz1d}
An $(M,N,Z)$- ZCZ sequence set becomes an $(M,N,Z)$- Golay-ZCZ sequence set if it satisfies
\begin{equation}
\begin{split}
&\text{C1: }\sum_{i=0}^{M-1}C_{\mathbf{c}_i}(\tau)=0,~\text{for all }\tau\neq 0,\\
&\text{C2: }R_{\mathbf{c}_i}(\tau)=0,~\text{for }1\leq |\tau|\leq Z,~0\leq i \leq M-1,\\
&\text{C3: }R_{\mathbf{c}_i,\mathbf{c}_j}(\tau)=0,~\text{for }|\tau|\leq Z,~0\leq i\neq j\leq M-1.
\end{split}
\end{equation}
\end{definition}
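Conditions C1--C3 of Definition \ref{golayzcz1d} are straightforward to check numerically. As an illustration (a sketch, not part of the paper's constructions), the pair below instantiates Theorem \ref{th1} with the seed GCP $(\mathbf{a},\mathbf{b})=((1,1),(1,-1))$, its mate $(\mathbf{c},\mathbf{d})=(\overleftarrow{\mathbf{b}},-\overleftarrow{\mathbf{a}})$, and signs $(x_1,x_2,x_3,x_4)=(1,1,1,-1)$, giving a $(2,8,2)$- Golay-ZCZ sequence pair:

```python
def accf(a, b, tau):
    """Aperiodic cross-correlation (real case); 0 outside -(N-1)..(N-1)."""
    N = len(a)
    if abs(tau) >= N:
        return 0
    if tau >= 0:
        return sum(a[k] * b[k + tau] for k in range(N - tau))
    return sum(a[k - tau] * b[k] for k in range(N + tau))

def pccf(a, b, tau):
    """Periodic cross-correlation with cyclic indexing (real case)."""
    N = len(a)
    return sum(a[k] * b[(k + tau) % N] for k in range(N))

def is_golay_zcz(seqs, Z):
    N = len(seqs[0])
    c1 = all(sum(accf(s, s, t) for s in seqs) == 0 for t in range(1, N))
    c2 = all(pccf(s, s, t) == 0 for s in seqs for t in range(1, Z + 1))
    c3 = all(pccf(x, y, t) == 0
             for i, x in enumerate(seqs) for j, y in enumerate(seqs)
             if i != j for t in range(-Z, Z + 1))
    return c1 and c2 and c3

# p = a||b||a||-b and q = c||d||c||-d with (a,b) = ((1,1),(1,-1)) and its mate.
p = [1, 1, 1, -1, 1, 1, -1, 1]
q = [-1, 1, -1, -1, -1, 1, 1, 1]
print(is_golay_zcz([p, q], Z=2))  # True
```

For real sequences the auto-correlation is symmetric in $\tau$, so C2 only needs to be checked for positive shifts.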
\begin{definition}
Let $\mathcal{C}$ be a $P \times N$ matrix, consisting of $P$ sequences of length $N$, as follows:
\begin{equation}
\mathcal{C}=\begin{bmatrix}
\mathbf{c}_0\\
\mathbf{c}_1\\
\vdots\\
\mathbf{c}_{P-1}
\end{bmatrix}_{P\times N}=\begin{bmatrix}
c_{0,0} & c_{0,1} & \dots & c_{0,N-1}\\
c_{1,0} & c_{1,1} & \dots & c_{1,N-1}\\
\vdots & \vdots & \ddots & \vdots\\
c_{P-1,0} & c_{P-1,1} & \dots & c_{P-1,N-1}
\end{bmatrix}_{P\times N}.
\end{equation}
Then $\mathcal{C}$ is called a CS of size $P$ if
\begin{equation}
C_{\mathbf{c}_0}(\tau)+C_{\mathbf{c}_1}(\tau)+\cdots+C_{\mathbf{c}_{P-1}}(\tau)=\begin{cases}
PN & \text{if } \tau=0,\\
0 & \text{if } 0<\tau<N.
\end{cases}
\end{equation}
\end{definition}
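The CS condition can be verified with a few lines of code; for instance, the GCP $((1,1),(1,-1))$ is a CS of size $2$, and the rows of $\mathcal{C}^0$ from Example \ref{ex3n} form a CS of size $4$ (real-valued sketch):

```python
def accf(a, tau):
    """Aperiodic autocorrelation C_a(tau) for 0 <= tau < len(a)."""
    return sum(a[k] * a[k + tau] for k in range(len(a) - tau))

def is_cs(rows):
    """Check the complementary-set condition: AACF sums are PN at 0, else 0."""
    P, N = len(rows), len(rows[0])
    if sum(accf(r, 0) for r in rows) != P * N:
        return False
    return all(sum(accf(r, t) for r in rows) == 0 for t in range(1, N))

print(is_cs([[1, 1], [1, -1]]))                                               # True
print(is_cs([[1, 1, 1, 1], [1, 1, -1, -1], [-1, 1, -1, 1], [-1, 1, 1, -1]]))  # True
```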
\begin{definition}
Consider $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$, consisting of $M$ CSs $\mathcal{C}^{k}$, $0\leq k<M$, each having $M$ sequences of length $N$, i.e.,
\begin{equation}\label{eq2}
\mathcal{C}^{k}=\begin{bmatrix}
\mathbf{c}^{k}_0\\
\mathbf{c}^{k}_1\\
\vdots\\
\mathbf{c}^{k}_{M-1}
\end{bmatrix}_{M \times N}, 0 \leq k \leq M-1,
\end{equation}
where $ \mathbf{c}_{m}^{k}$ is the $m$-th sequence of length $N$ and is expressed as $ \mathbf{c}_{m}^{k}=\left(c_{m, 0}^{k}, c_{m, 1}^{k}, \cdots, c_{m, N-1}^{k}\right)$, $0 \leq m \leq M-1$. The set $\mathfrak{C}$ is called an $(M,M,N)$ complete complementary code (CCC) if, for any $\mathcal{C}^{k_1},\mathcal{C}^{k_2}\in \mathfrak{C}$ with $0\leq k_1,k_2
\leq M-1$, and for
$0 < \tau \leq N-1$ when $k_{1} = k_{2}$, or $0\leq\tau \leq N-1$ when $k_{1}\neq k_{2}$,
\begin{equation}
|C_{\mathcal{C}^{k_1},\mathcal{C}^{k_2}}(\tau)|=\left|\sum_{m=0}^{M-1} C_{\mathbf{c}_{m}^{k_{1}},
{\mathbf{c}_{m}^{k_{2}}}}(\tau)\right|= 0,
\end{equation}
where $M$ denotes the set size and the number of sequences in each sequence set, and $N$ the length of constituent sequences of $\mathfrak{C}$.
\end{definition}
Let $\mathcal{C}$ be an $M\times N$ matrix given by
\begin{equation}
\mathcal{C}=\begin{bmatrix}
c_{0,0} & c_{0,1} & \dots & c_{0,N-1}\\
c_{1,0} & c_{1,1} & \dots & c_{1,N-1}\\
\vdots & \vdots & \ddots & \vdots \\
c_{M-1,0} & c_{M-1,1} & \dots & c_{M-1,N-1}
\end{bmatrix}_{M\times N}.
\end{equation}
Then let us define a transformation $L^{r}_s(\mathcal{C})$ as follows:
\begin{equation}\label{eq12n}
\begin{split}
L^{r}_s(\mathcal{C})&=\begin{bmatrix}
\textcolor{blue}{c_{r,s} }&\textcolor{blue}{ c_{r,s+1} }&\textcolor{blue}{ \dots} &\textcolor{blue}{ c_{r,N-1}} &\textcolor{blue}{ c_{r+1,0}} &\textcolor{blue}{ c_{r+1,1}} &\textcolor{blue}{ \dots} &\textcolor{blue}{ c_{r+1,s-1}}\\
\textcolor{blue}{ c_{r+1,s} }& \textcolor{blue}{c_{r+1,s+1}} &\textcolor{blue}{ \dots} &\textcolor{blue}{ c_{r+1,N-1}}&\textcolor{blue}{c_{r+2,0}} & \textcolor{blue}{c_{r+2,1}} & \textcolor{blue}{\dots }&\textcolor{blue}{ c_{r+2,s-1}}\\
\textcolor{blue}{ \vdots} &\textcolor{blue}{ \vdots} &\textcolor{blue}{ \ddots} &\textcolor{blue}{ \vdots} &\textcolor{blue}{ \vdots} & \textcolor{blue}{\vdots} &\textcolor{blue}{ \ddots} &\textcolor{blue}{ \vdots} \\
\textcolor{blue}{ c_{M-2,s}} &\textcolor{blue}{ c_{M-2,s+1}} &\textcolor{blue}{ \dots} & \textcolor{blue}{c_{M-2,N-1}} &\textcolor{blue}{ c_{M-1,0}} &\textcolor{blue}{ c_{M-1,1}} & \textcolor{blue}{\dots} &\textcolor{blue}{ c_{M-1,s-1}}\\
\textcolor{blue}{ c_{M-1,s}} & \textcolor{blue}{c_{M-1,s+1}} &\textcolor{blue}{ \dots} & \textcolor{blue}{c_{M-1,N-1}} &\textcolor{red}{ c_{0,0} }&\textcolor{red}{ c_{0,1} }&\textcolor{red}{ \dots} &\textcolor{red}{ c_{0,s-1}}\\
\textcolor{red}{ c_{0,s}} & \textcolor{red}{c_{0,s+1}} & \textcolor{red}{\dots} &\textcolor{red}{ c_{0,N-1}} & \textcolor{red}{c_{1,0}} &\textcolor{red}{ c_{1,1}} &\textcolor{red}{ \dots} & \textcolor{red}{c_{1,s-1}}\\
\textcolor{red}{ \vdots} &\textcolor{red}{ \vdots} &\textcolor{red}{ \ddots} &\textcolor{red}{ \vdots} & \textcolor{red}{\vdots} &\textcolor{red}{ \vdots} & \textcolor{red}{\ddots} &\textcolor{red}{ \vdots} \\
\textcolor{red}{ c_{r-1,s}} &\textcolor{red}{ c_{r-1,s+1}} &\textcolor{red}{ \dots} &\textcolor{red}{ c_{r-1,N-1}} &\textcolor{red}{ c_{r,0}} &\textcolor{red}{ c_{r,1}} &\textcolor{red}{ \dots }&\textcolor{red}{ c_{r,s-1}}
\end{bmatrix}_{M\times N}\\
&=\begin{bmatrix}
T^r(\mathbf{c}_{:,s}) & T^r(\mathbf{c}_{:,s+1}) & \dots & T^r(\mathbf{c}_{:,N-1}) & T^{r+1}(\mathbf{c}_{:,0}) & T^{r+1}(\mathbf{c}_{:,1}) & \dots & T^{r+1}(\mathbf{c}_{:,s-1})
\end{bmatrix},
\end{split}
\end{equation}
where $\mathbf{c}_{:,s}$ denotes the $s$-th column of $\mathcal{C}$, and $T^r(\mathbf{c}_{:,s})$ denotes the column obtained by applying the $r$-step cyclic up-shift operator to $\mathbf{c}_{:,s}$.
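Reading the displayed matrix row by row shows that $L^{r}_s(\mathcal{C})$ is the row-major flattening of $\mathcal{C}$ cyclically shifted by $rN+s$ positions and re-folded into an $M\times N$ matrix. A NumPy sketch of the column-wise description in (\ref{eq12n}):

```python
import numpy as np

def L_transform(C, r, s):
    """L^r_s of eq. (eq12n): output column j is T^r(c_{:, s+j}) for j < N-s,
    and T^(r+1) applied to the wrapped columns c_{:, 0..s-1}."""
    M, N = C.shape
    out = np.empty_like(C)
    for idx in range(N):
        j = (s + idx) % N
        step = r if idx < N - s else r + 1
        out[:, idx] = np.roll(C[:, j], -step)  # up-shift by `step` rows
    return out

C = np.arange(12).reshape(3, 4)   # rows are sequences of length N = 4
out = L_transform(C, 1, 1)
# Equivalent to cyclically shifting the flattened matrix by r*N + s = 5:
print(np.array_equal(out.flatten(), np.roll(C.flatten(), -5)))  # True
```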
\begin{definition}
Let $\mathbf{a}=(a_0,a_1,\dots,a_{N-1})$ be a sequence of length $N$, then the characteristic polynomial of $\mathbf{a}$ is defined as
\begin{equation}
\mathbf{a}(x)=a_0+a_1x+\cdots+a_{N-1}x^{N-1},
\end{equation}
and the corresponding complex conjugate is given by
\begin{equation}
\mathbf{a}^*(x)=a^*_0+a^*_1x+\cdots+a^*_{N-1}x^{N-1}.
\end{equation}
\end{definition}
\begin{definition}
Let $\mathbf{a}(x)$ and $\mathbf{b}(x)$ be the characteristic polynomial of the length $N$ sequence $\mathbf{a}$ and $\mathbf{b}$, respectively. Then the ACCF is defined as
\begin{equation}
C_{\mathbf{a},\mathbf{b}}(x)=\sum_{\tau=0}^{N-1}C_{\mathbf{a},\mathbf{b}}(\tau)x^\tau=\mathbf{a}(x^{-1})\mathbf{b}^*(x).
\end{equation}
\end{definition}
\begin{definition}
A pair of sequences $(\mathbf{a}, \mathbf{b})$ is said to be a GCP of length $N$ if
\begin{equation}
C_{\mathbf{a}}(x)+C_{\mathbf{b}}(x)=2N.
\end{equation}
\end{definition}
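In coefficient form, $C_{\mathbf{a}}(x)$ is the two-sided correlation sequence of $\mathbf{a}$, so the GCP condition can be checked with \texttt{numpy.correlate}: summed over the pair, the off-peak coefficients vanish and the peak equals the pair energy $2N$. A sketch for the classical pair $((1,1,1,-1),(1,1,-1,1))$:

```python
import numpy as np

# Check the GCP property for a = (1,1,1,-1), b = (1,1,-1,1) via full correlations.
a = np.array([1, 1, 1, -1])
b = np.array([1, 1, -1, 1])
N = len(a)

# np.correlate(x, x, 'full') returns C_x(tau) for tau = -(N-1), ..., N-1.
total = np.correlate(a, a, 'full') + np.correlate(b, b, 'full')
print(total.tolist())  # [0, 0, 0, 8, 0, 0, 0] -> zero off-peak, 2N = 8 at tau = 0
```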
\begin{definition}
Consider $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$, consisting of $M$ CSs $\mathcal{C}^{k}$, $0\leq k<M$, each having $M$ sequences of length $N$, as described in (\ref{eq2}). Then $\mathfrak{C}$ is a CCC if
\begin{equation}
C_{\mathcal{C}^i,\mathcal{C}^k}(x)=\sum_{j=0}^{M-1}C_{\mathbf{c}^i_j,\mathbf{c}^k_j}(x)=MN \delta(i-k),
\end{equation}
where $\delta$ is the Kronecker delta function.
\end{definition}
\section{New Construction of Golay Complementary Pair with ZCZ}\label{section 3}
In \cite{Gong2013} and \cite{Chen201811}, the authors considered complementary sequences whose lengths are only of the form $2^m$. In this section, we consider GCPs of more flexible lengths and propose Golay-ZCZ sequences of new lengths. Before introducing the construction, we need the following lemma.
\begin{lemma}\label{lem2}
Let $(\mathbf{a},\mathbf{b})$ be a GCP of length $N$ and let $(\mathbf{c},\mathbf{d})$ be its complementary mate. Then, for all $\tau$,
\begin{equation}
C^*_{\mathbf{a},\mathbf{b}}(\tau)+ C^*_{\mathbf{c},\mathbf{d}}(\tau) = 0
\end{equation}
\end{lemma}
\begin{IEEEproof}
From the properties of aperiodic cross-correlation, for any sequences $\mathbf{x}$ and $\mathbf{y}$ we have $C_{\mathbf{x}, \mathbf{y}}(\tau)=C_{\overleftarrow{\mathbf{y}^*}, \overleftarrow{\mathbf{x}^*}}(\tau)$ and $C_{-\mathbf{x}, \mathbf{y}}(\tau)=-C_{\mathbf{x}, \mathbf{y}}(\tau)$. Since $(\mathbf{c}, \mathbf{d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$, it follows that $C_{\mathbf{c}, \mathbf{d}}(\tau)=C_{\overleftarrow{\mathbf{d}^*}, \overleftarrow{\mathbf{c}^*}}(\tau)=C_{-{\mathbf{a}},\mathbf{b}}(\tau)=-C_{\mathbf{a},\mathbf{b}}(\tau)$. Taking conjugates yields the claim.
\end{IEEEproof}
\begin{theorem}\label{th1}
Let $(\mathbf{a},\mathbf{b})$ be a GCP of length $N$ and $(\mathbf{c},\mathbf{d})$ be a Golay complementary mate of $(\mathbf{a},\mathbf{b})$. Define
\begin{equation}
\begin{split}
&\mathbf{p}=
\left(
\begin{array}{rrrr}
x_1\mathbf{a}~ || ~x_2\mathbf{b} ~||~ x_3\mathbf{a} ~||~ x_4\mathbf{b} \\
\end{array}
\right),\\&
\mathbf{q}=
\left(
\begin{array}{rrrr}
x_1\mathbf{c} ~ || ~ x_2\mathbf{d} ~ || ~ x_3\mathbf{c} ~ || ~ x_4\mathbf{d} \\
\end{array}
\right),
\end{split}
\end{equation}
where $x_1,x_2,x_3,x_4\in\{+1,-1\}$. Then $(\mathbf{p},\mathbf{q})$ is a Golay-ZCZ sequence pair of length $4N$ with $Z_{\min}=N$, provided $x_1,x_2,x_3,x_4$ satisfy the following condition:
\begin{equation}
x_1x_2+x_3x_4=0.
\end{equation}
\end{theorem}
\begin{IEEEproof} Let us consider $0\leq\tau\leq N-1$. Then we have
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=&2\big(C_\mathbf{a}(\tau) +C_\mathbf{b}(\tau)\big)\\&+(x_1x_2+x_3x_4)C_\mathbf{b,a}^*(N-\tau) \\&+x_2x_3C_\mathbf{a,b}^*(N-\tau), \\
C_\mathbf{q}(\tau)=&2\big(C_\mathbf{c}(\tau) +C_\mathbf{d}(\tau)\big)\\&+(x_1x_2+x_3x_4)C_\mathbf{d,c}^*(N-\tau) \\&+x_2x_3C_\mathbf{c,d}^*(N-\tau).
\end{split}
\end{equation}
Hence, for $0\leq\tau\leq N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=4\big(C_\mathbf{a}(\tau) +C_\mathbf{b}(\tau)\big).
\end{equation}
Consider $N\leq\tau\leq 2N-1$. Then one has
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=& (x_1x_2+x_3x_4)C_\mathbf{a,b}(\tau-N) +x_2x_3C_\mathbf{b,a}(\tau-N) \\&+x_1x_3C_\mathbf{a}^*(2N-\tau) +x_2x_4C_\mathbf{b}^*(2N-\tau), \\
C_\mathbf{q}(\tau)=& (x_1x_2+x_3x_4)C_\mathbf{c,d}(\tau-N) +x_2x_3C_\mathbf{d,c}(\tau-N) \\&+x_1x_3C_\mathbf{c}^*(2N-\tau) +x_2x_4C_\mathbf{d}^*(2N-\tau).
\end{split}
\end{equation}
Hence, for $N\leq\tau\leq 2N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=0.
\end{equation}
Consider $2N\leq\tau\leq 3N-1$. Then we have
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=& x_1x_3C_\mathbf{a}(\tau-2N) +x_2x_4C_\mathbf{b}(\tau-2N) \\&+x_1x_4C_\mathbf{b,a}^*(3N-\tau), \\
C_\mathbf{q}(\tau)=& x_1x_3C_\mathbf{c}(\tau-2N) +x_2x_4C_\mathbf{d}(\tau-2N) \\&+x_1x_4C_\mathbf{d,c}^*(3N-\tau).
\end{split}
\end{equation}
Hence, for $2N\leq\tau\leq 3N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=0.
\end{equation}
Consider $3N\leq\tau\leq 4N-1$. Then we have
\begin{equation}
\begin{split}
C_\mathbf{p}(\tau)=& x_1x_4C_\mathbf{a,b}(\tau-3N), \\
C_\mathbf{q}(\tau)=& x_1x_4C_\mathbf{c,d}(\tau-3N).
\end{split}
\end{equation}
Hence, for $3N\leq\tau\leq 4N-1$, we have
\begin{equation}
C_\mathbf{p}(\tau)+C_\mathbf{q}(\tau)=0.
\end{equation}
Hence, $(\mathbf{p,q})$ is a complementary pair of length $4N$.
Next, we prove the other two conditions of Definition \ref{golayzcz1d}. For $1\leq\tau\leq N$, we have
\begin{equation}
\begin{split}
R_\mathbf{p}(\tau)=& \sum_{k=0}^{4N-1} p_kp_{k+\tau}^*\\
=& \sum_{k=0}^{N-1-\tau} a_ka_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_1a_kx_2^*b_{k-(N-\tau)}^* \\&+\sum_{k=0}^{N-1-\tau} b_kb_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_2b_kx_3^*a_{k-(N-\tau)}^* \\
& +\sum_{k=0}^{N-1-\tau} a_ka_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_3a_kx_4^*b_{k-(N-\tau)}^*\\& +\sum_{k=0}^{N-1-\tau} b_kb_{k+\tau}^* +\sum_{k=N-\tau}^{N-1} x_4b_kx_1^*a_{k-(N-\tau)}^* \\
=&2\big(C_\mathbf{a}(\tau) +C_\mathbf{b}(\tau)\big). \\
\end{split}
\end{equation}
Similarly for $1\leq\tau\leq N$, we have
\begin{equation}
\begin{split}
R_\mathbf{q}(\tau)=&2\big(C_\mathbf{c}(\tau) +C_\mathbf{d}(\tau)\big), \\
R_\mathbf{p,q}(\tau)=&2\big(C_\mathbf{a,c}(\tau) +C_\mathbf{b,d}(\tau)\big).
\end{split}
\end{equation}
Since $(\mathbf{a},\mathbf{b})$ is a GCP and $(\mathbf{c},\mathbf{d})$ is one of the complementary mates of $(\mathbf{a},\mathbf{b})$, $R_\mathbf{p}(\tau)=R_\mathbf{q}(\tau)=0,\hbox{ for all } 1\leq\tau\leq N,$ and $R_\mathbf{p,q}(\tau)=0,\hbox{ for all } 0\leq\tau\leq N$.
Therefore, $(\mathbf{p},\mathbf{q})$ is a $(2,4N,N)$-Golay-ZCZ sequence pair, consisting of sequences of length $4N$ with $Z_{\min}=N$.
\end{IEEEproof}
\begin{example}\label{ex1 binary}
Let $(\mathbf{a,b})$ be a binary GCP of length 10, given by $\mathbf{a}=(1,1,-1,1,1,1,1,1,-1,-1)$, $\mathbf{b}=(1,1,-1,1,-1,1,-1, -1,1,1)$.
Then $(\mathbf{c,d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$ is a Golay mate of $(\mathbf{a,b})$. Define
\begin{equation}
\begin{split}
\mathbf{p}=~&\mathbf{a}~\|~\mathbf{b}~\|~\mathbf{a}~\|-\mathbf{b}, \\
\mathbf{q}=~&\mathbf{c}~\|~\mathbf{d}~\|~\mathbf{c}~\|-\mathbf{d}.
\end{split}
\end{equation}
Then $(\mathbf{p,q})$ is a $(2,40,10)$-Golay-ZCZ sequence pair, because
\begin{equation}
\begin{split}
\big(R_\mathbf{p}(\tau)\big)_{\tau=0}^{39}=&(40,\mathbf{0}_{10},-4,-8,4,8, -4,0,4,0,12,0,12,0,4,0,-4,8,4,-8,-4,\mathbf{0}_{10}), \\
\big(R_\mathbf{q}(\tau)\big)_{\tau=0}^{39}=&(40,\mathbf{0}_{10},4,8,-4,-8, 4,0,-4,0,-12,0,-12,0,-4,0,4,-8,-4,8,4,\mathbf{0}_{10}), \\
\big(R_\mathbf{p,q}(\tau)\big)_{\tau=0}^{39}=&(\mathbf{0}_{11},-4,-8,4, 16,4,0,4,-8,-4,0,4,-8,12,0,12,0,-4,8,4,\mathbf{0}_{10}).
\end{split}
\end{equation}
\end{example}
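The construction in Example \ref{ex1 binary} can be checked numerically. The sketch below (helper names are ours) builds $\mathbf{p}$ and $\mathbf{q}$ from the GCP and its Golay mate with $x_1=x_2=x_3=1$, $x_4=-1$ (so that $x_1x_2+x_3x_4=0$), and verifies the $(2,40,10)$-Golay-ZCZ properties together with the aperiodic complementarity of the pair.

```python
# Numerical check of Example 1 (helper names ours): p = a||b||a||-b and
# q = c||d||c||-d, with (c,d) the Golay mate of (a,b).
def accf(x, y, tau):
    """Aperiodic cross-correlation C_{x,y}(tau) (real entries)."""
    n = len(x)
    return sum(x[k] * y[k + tau] for k in range(n) if 0 <= k + tau < n)

def pacf(x, y, tau):
    """Periodic cross-correlation R_{x,y}(tau) (real entries)."""
    n = len(x)
    return sum(x[k] * y[(k + tau) % n] for k in range(n))

a = [1, 1, -1, 1, 1, 1, 1, 1, -1, -1]
b = [1, 1, -1, 1, -1, 1, -1, -1, 1, 1]
c = list(reversed(b))               # reverse of b* (entries are real)
d = [-v for v in reversed(a)]       # -(reverse of a*)

p = a + b + a + [-v for v in b]     # x1=x2=x3=1, x4=-1: x1x2+x3x4=0
q = c + d + c + [-v for v in d]
```

Here `pacf(p, p, t)` and `pacf(q, q, t)` vanish for $1\le\tau\le 10$, `pacf(p, q, t)` vanishes for $0\le\tau\le 10$, and `accf(p, p, t) + accf(q, q, t)` vanishes for all $\tau\neq0$, matching the correlation values listed above.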
\begin{example}\label{ex1}
Let $\mathbf{a}=(1,i,-i,-1,i)$ and $\mathbf{b}=(1,1,1,i,-i)$; then $(\mathbf{a,b})$ is a quadriphase GCP of length 5, and $(\mathbf{c,d})=(\overleftarrow{\mathbf{b}^*},-\overleftarrow{\mathbf{a}^*})$ is a Golay mate of $(\mathbf{a,b})$. Define
\begin{equation}
\begin{split}
\mathbf{p}=~&\mathbf{a}~\|~\mathbf{b}~\|~\mathbf{a}~\|-\mathbf{b},\\
\mathbf{q}=~&\mathbf{c}~\|~\mathbf{d}~\|~\mathbf{c}~\|-\mathbf{d}.
\end{split}
\end{equation}
Then $(\mathbf{p,q})$ is a $(2,20,5)$-Golay-ZCZ sequence pair. The periodic correlation magnitudes of this pair are shown in Fig. \ref{fig1}.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{ex2golayzcz.pdf}
\caption{A glimpse of the periodic correlations of the proposed Golay-ZCZ sequence pair given in Example \ref{ex1}.}\label{fig1}
\end{figure}
\end{example}
\begin{remark}
Please note that binary Golay-ZCZ sequences of length 40 and quadriphase Golay-ZCZ sequences of length 20 have not previously been reported in the literature. By taking a GCP $(\mathbf{a,b})$ of length $2^m$ in Theorem \ref{th1}, we can construct $(\mathbf{p,q})$ of length $2^{m+2}$, and the resultant Golay-ZCZ sequences have parameters equivalent to those of the Golay-ZCZ sequences reported in \cite{Chen201811}.
\end{remark}
\section{New Construction of Complementary Sets with ZCZ}\label{golayzcz2d}
The sequence sets resulting from the constructions reported in \cite{Chen201811} have the property that, as the number of sequences in the Golay-ZCZ sequence set increases, the ZCZ width decreases. Moreover, those sequence sets are not optimal in the polyphase case. The following construction addresses both problems. In this section, we propose a new construction of Golay-ZCZ sequence sets based on CCCs, which is asymptotically optimal. Before proceeding, we establish a useful property of CCCs.
\begin{property}\label{prop1}
Let $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$ be an $(M,M,N)$-CCC, consisting of $M$ CSs $\mathcal{C}^{k}$, $0\leq k<M$, each containing $M$ sequences of length $N$. Let $\mathcal{D}$ be a sequence set of order $M\times MN$ defined as follows:
\begin{equation}
\mathcal{D}=\begin{bmatrix}
\mathbf{c}^0_0 & \mathbf{c}^0_1 & \cdots & \mathbf{c}^0_{M-1}\\
\mathbf{c}^1_0 & \mathbf{c}^1_1 & \cdots & \mathbf{c}^1_{M-1}\\
\vdots & \vdots & \ddots & \vdots\\
\mathbf{c}^{M-1}_0 & \mathbf{c}^{M-1}_1 & \cdots & \mathbf{c}^{M-1}_{M-1}
\end{bmatrix}_{M\times MN},
\end{equation}
where the $i$-th row is generated from $\mathcal{C}^i$ by appending the rows of $\mathcal{C}^i$ one after another. For $0\leq i \leq M-1$, let
\begin{equation}
\mathcal{D}^i=\begin{bmatrix}
\mathbf{c}^0_i\\ \mathbf{c}^1_i\\ \vdots\\ \mathbf{c}^{M-1}_i
\end{bmatrix}.
\end{equation}Then $\mathfrak{D}=\{\mathcal{D}^0,\mathcal{D}^1,\cdots,\mathcal{D}^{M-1}\}$ is also an $(M,M,N)$- CCC.
\end{property}
\begin{IEEEproof}
Let $\mathbf{c}_i^{k}(x)$ denote the characteristic polynomial of the sequence $\mathbf{c}_i^{k}$. Since $\mathcal{C}^{k}=\{\mathbf{c}_0^{k},\mathbf{c}_1^{k},\cdots,\mathbf{c}_{M-1}^{k}\}$ is a CS, we have
\begin{equation}
\sum_{i=0}^{M-1}\mathbf{c}_i^{k}(x)\left(\mathbf{c}_i^{k}(x^{-1})\right)^*=MN, \hbox{ for each }k.
\end{equation}
As $\mathfrak{C}=\left\{\mathcal{C}^{0}, \mathcal{C}^{1}, \cdots, \mathcal{C}^{M-1}\right\}$ is a $(M,M,N)$- CCC, we have
\begin{equation}
\sum_{i=0}^{M-1}\mathbf{c}_i^{k_1}(x)\left(\mathbf{c}_i^{k_2}(x^{-1})\right)^*=0, \hbox{ if }k_1 \neq k_2.
\end{equation}
By matrix representation, we have
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x) & \cdots & \mathbf{c}_{M-1}^{0}(x) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(x) & \cdots & \mathbf{c}_{M-1}^{M-1}(x) \\
\end{array}
\right)
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x^{-1}) & \cdots & \mathbf{c}_{0}^{M-1}(x^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(x^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(x^{-1}) \\
\end{array}
\right)^*
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
Since, for square matrices, $AB=cI$ implies $BA=cI$, we also have
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x^{-1}) & \cdots & \mathbf{c}_{0}^{M-1}(x^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(x^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(x^{-1}) \\
\end{array}
\right)^*
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(x) & \cdots & \mathbf{c}_{M-1}^{0}(x) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(x) & \cdots & \mathbf{c}_{M-1}^{M-1}(x) \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
Substituting $y=x^{-1}$, we obtain
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y) & \cdots & \mathbf{c}_{0}^{M-1}(y) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(y) & \cdots & \mathbf{c}_{M-1}^{M-1}(y) \\
\end{array}
\right)^*
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{0}(y^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(y^{-1}) \\
\end{array}
\right)
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
Taking the conjugate of both sides, we obtain
\begin{equation}
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y) & \cdots & \mathbf{c}_{0}^{M-1}(y) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_{M-1}^{0}(y) & \cdots & \mathbf{c}_{M-1}^{M-1}(y) \\
\end{array}
\right)
\left(
\begin{array}{ccc}
\mathbf{c}_0^{0}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{0}(y^{-1}) \\
\vdots & \ddots & \vdots \\
\mathbf{c}_0^{M-1}(y^{-1}) & \cdots & \mathbf{c}_{M-1}^{M-1}(y^{-1}) \\
\end{array}
\right)^*
=
\left(
\begin{array}{ccc}
MN & \cdots & 0 \\
\vdots & \ddots & \vdots \\
0 & \cdots & MN \\
\end{array}
\right).
\end{equation}
The last identity states precisely that $\mathfrak{D}=\{\mathcal{D}^0,\mathcal{D}^1,\cdots,\mathcal{D}^{M-1}\}$ is an $(M,M,N)$-CCC, which completes the proof.
\end{IEEEproof}
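The property can be checked numerically on a small instance. The sketch below (helper names are ours) builds a $(2,2,2)$-CCC from the GCP $((1,1),(1,-1))$ and its Golay mate, and verifies that both the original row sets and the column-wise regrouped sets form CCCs.

```python
# Minimal numerical check of the property for M = 2, N = 2
# (helper names ours).
def accf(x, y, tau):
    """Aperiodic cross-correlation C_{x,y}(tau); tau may be negative."""
    n = len(x)
    return sum(x[k] * y[k + tau].conjugate()
               for k in range(n) if 0 <= k + tau < n)

def is_ccc(sets):
    """Check sum_j C_{c^i_j, c^k_j}(tau) = M*N*delta(i-k)*delta(tau)."""
    m, n = len(sets), len(sets[0][0])
    for i in range(m):
        for k in range(m):
            for t in range(-(n - 1), n):
                s = sum(accf(x, y, t) for x, y in zip(sets[i], sets[k]))
                want = m * n if (i == k and t == 0) else 0
                if s != want:
                    return False
    return True

a, b = [1, 1], [1, -1]
c, d = [-1, 1], [-1, -1]     # Golay mate: reverse(b*), -reverse(a*)
C_frak = [[a, b], [c, d]]    # the original (2,2,2)-CCC
D_frak = [[a, c], [b, d]]    # column-wise regrouping, as in the property
```

Both `is_ccc(C_frak)` and `is_ccc(D_frak)` hold, as the property asserts.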
\begin{theorem}\label{const2n}
Let $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\dots,\mathcal{C}^{M-1}\}$ be an $(M,M,N)$-CCC, and let $\mathcal{F}$ be an $M\times M$ IDFT matrix whose entry in the $i$-th row and $j$-th column is denoted by $f_{i,j}$. For $0\leq k<M$, define
\begin{equation}
\mathcal{B}_k=\begin{bmatrix}
f_{0,0}\mathbf{c}_{k}^0 & f_{0,1}\mathbf{c}_{k}^1 & \cdots & f_{0,M-1}\mathbf{c}_{k}^{M-1}\\
f_{1,0}\mathbf{c}_{k}^0 & f_{1,1}\mathbf{c}_{k}^1 & \cdots & f_{1,M-1}\mathbf{c}_{k}^{M-1}\\
\vdots & \vdots & \ddots & \vdots \\
f_{M-1,0}\mathbf{c}_{k}^0 & f_{M-1,1}\mathbf{c}_{k}^1 & \cdots & f_{M-1,M-1}\mathbf{c}_{k}^{M-1}\\
\end{bmatrix}_{M\times MN}
\end{equation}
Let $\tilde{\mathcal{B}}_k$ denote the sequence of length $M^2N$ generated from $\mathcal{B}_k$ by appending the rows of $\mathcal{B}_k$ one after another. Then the sequence set $\mathcal{D}=\{\tilde{\mathcal{B}}_0,\tilde{\mathcal{B}}_1,\dots,\tilde{\mathcal{B}}_{M-1}\}$ is a CS of length $M^2N$ with $Z_{\min}=(M-1)N$.
\end{theorem}
\begin{IEEEproof}
First we prove that $\mathcal{D}$ is a CS. For $0 < \tau \leq N-1$, we have
\begin{equation}
\begin{split}
\sum_{i=0}^{M-1}C_{\tilde{\mathcal{B}}_i}(\tau)=&\sum_{j=0}^{M-1}\left[\left(\sum_{k=0}^{M-1}f^2_{k,j}\right)\left(\sum_{k=0}^{M-1}C_{\mathbf{c}^j_k}(\tau)\right)\right]+\sum_{j=0}^{M-2}\left[\left(\sum_{k=0}^{M-1}f_{k,j}f^*_{k,j+1}\right)\left(\sum_{k=0}^{M-1}C_{\mathbf{c}^j_k,\mathbf{c}^{j+1}_k}(\tau)\right)\right]\\&+\left(\sum_{k=0}^{M-2}f_{k,M-1}f^*_{k+1,0}\right)\left(\sum_{k=0}^{M-1}C_{\mathbf{c}^{M-1}_k,\mathbf{c}^0_k}(\tau)\right).
\end{split}
\end{equation}
Since $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\dots,\mathcal{C}^{M-1}\}$ is a CCC, we have $\sum_{i=0}^{M-1}C_{\tilde{\mathcal{B}}_i}(\tau)=0$. Similarly, we can show for other values of $\tau$ that $\sum_{i=0}^{M-1}C_{\tilde{\mathcal{B}}_i}(\tau)=0$.
Next, we prove the other two conditions of Definition \ref{golayzcz1d}. For $0\leq \tau < M^2N$, let $\tau=rN+s$. Let us define
\begin{equation}
\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}={\mathcal{B}_{k_1}} \odot L^r_s({\mathcal{B}_{k_2}}),
\end{equation}
where $\odot$ denotes the elementwise product of the matrices $\mathcal{B}_{k_1}$ and $L^r_s({\mathcal{B}_{k_2}})$. Note that $\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$ is a matrix of size $M\times N$. When $k_1=k_2$, we write $\mathcal{H}_{\mathcal{B}_{k_1}}$ instead of $\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$, and $\sum \mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$ denotes the sum of all entries of $\mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}$. To check the periodic autocorrelations of the constituent sequences of $\mathcal{D}$, we have, for $0<\tau \leq (M-1)N+1$ and $0\leq k <M$,
\begin{equation}
\begin{split}
R_{\tilde{\mathcal{B}_k}}(\tau)&=\sum \mathcal{H}_{\mathcal{B}_{k}}\\
&=\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r}{M}\rfloor}(f_{:,<i+r>_M})) C_{\mathbf{c}_k^{i},\mathbf{c}_k^{<i+r>_M}}(\tau-rN) \\
&+\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r+1}{M}\rfloor}(f_{:,<i+r+1>_M})) C_{\mathbf{c}_k^{i},\mathbf{c}_k^{<i+r+1>_M}}(\tau-(r+1)N),
\end{split}
\end{equation}
where `$\cdot$' denotes the inner product of two sequences, and $<x>_M$ denotes $x \bmod M$. Since $\mathcal{F}$ is an IDFT matrix, using Property \ref{prop1}, we have
\begin{equation}
R_{\tilde{\mathcal{B}_k}}(\tau)=\begin{cases}
M^2N, & \text{when }\tau=0;\\
0, & \text{when }1\leq \tau \leq (M-1)N;\\
\text{non-zero} & \text{when }(M-1)N< \tau.
\end{cases}
\end{equation}
Similarly, to check the cross-correlation we have for $0\leq k_1\neq k_2 <M$,
\begin{equation}
\begin{split}
R_{\tilde{\mathcal{B}}_{k_1},\tilde{\mathcal{B}}_{k_2}}(\tau)&=\sum \mathcal{H}_{\mathcal{B}_{k_1},\mathcal{B}_{k_2}}\\
&=\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r}{M}\rfloor}(f_{:,<i+r>_M})) C_{\mathbf{c}_{k_1}^{i},\mathbf{c}_{k_2}^{<i+r>_M}}(\tau-rN) \\
&+\sum_{i=0}^{M-1} (f_{:,i})\cdot (T^{\lfloor\frac{i+r+1}{M}\rfloor}(f_{:,<i+r+1>_M})) C_{\mathbf{c}_{k_1}^{i},\mathbf{c}_{k_2}^{<i+r+1>_M}}(\tau-(r+1)N).
\end{split}
\end{equation}
Since $\mathfrak{C}$ is a CCC and $\mathcal{F}$ is an IDFT matrix, using property \ref{prop1}, we have
\begin{equation}
R_{\tilde{\mathcal{B}}_{k_1},\tilde{\mathcal{B}}_{k_2}}(\tau)=\begin{cases}
0, & \text{when }\tau=0;\\
0, & \text{when }1\leq \tau \leq (M-1)N;\\
\text{non-zero} & \text{when }(M-1)N< \tau.
\end{cases}
\end{equation}
Hence, $\mathcal{D}$ is an $(M,M^2N,(M-1)N)$-Golay-ZCZ sequence set of set size $M$, consisting of sequences of length $M^2N$ with ZCZ width $(M-1)N$.
\end{IEEEproof}
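As a minimal sanity check of the construction (with $M=2$, $N=2$; helper names are ours), the sketch below builds $\mathcal{D}$ from a $(2,2,2)$-CCC and the unnormalized $2\times2$ IDFT matrix and verifies the resulting $(2,8,2)$-Golay-ZCZ parameters.

```python
# Small instance of the construction (M = 2, N = 2, helper names ours).
def pacf(x, y, tau):
    """Periodic cross-correlation R_{x,y}(tau)."""
    n = len(x)
    return sum(x[k] * y[(k + tau) % n].conjugate() for k in range(n))

a, b = [1, 1], [1, -1]
c, d = [-1, 1], [-1, -1]
cs = [[a, b], [c, d]]        # cs[j][k] = c^j_k, k-th sequence of set j
F = [[1, 1], [1, -1]]        # unnormalized 2x2 IDFT matrix

def b_tilde(k):
    """Concatenate the rows of B_k = [f_{i,j} * c^j_k]."""
    out = []
    for i in range(2):
        for j in range(2):
            out += [F[i][j] * v for v in cs[j][k]]
    return out

D = [b_tilde(0), b_tilde(1)]  # two sequences of length M^2 * N = 8
```

For this set, the periodic autocorrelations of both sequences vanish for $1\le\tau\le 2$ and the periodic cross-correlation vanishes for $0\le\tau\le 2$, i.e., $Z_{\min}=(M-1)N=2$.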
\begin{example}\label{ex3n}
Let $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\mathcal{C}^2,\mathcal{C}^3\}$ be a $(4,4,4)$- CCC given by
\begin{equation}
\begin{split}
\mathcal{C}^0=\begin{bmatrix}
1 & 1 & 1 & 1\\
1 & 1 & -1 & -1\\
-1 & 1 & -1 & 1\\
-1 & 1 & 1 & -1
\end{bmatrix},&
\mathcal{C}^1=\begin{bmatrix}
-1 & -1 & 1 & 1\\
-1 & -1 & -1 & -1\\
1 & -1 & -1 & 1\\
1 & -1 & 1 & -1
\end{bmatrix},\\
\mathcal{C}^2=\begin{bmatrix}
-1 & 1 & -1 & 1\\
-1 & 1 & 1 & -1\\
1 & 1 & 1 & 1\\
1 & 1 & -1 & -1
\end{bmatrix},&
\mathcal{C}^3=\begin{bmatrix}
1 & -1 & -1 & 1\\
1 & -1 & 1 & -1\\
-1 & -1 & 1 & 1\\
-1 & -1 & -1 & -1
\end{bmatrix},
\end{split}
\end{equation}
and let $\mathcal{F}$ be a $4\times 4$ IDFT matrix. Construct $\mathcal{D}=\{\tilde{\mathcal{B}}_0,\tilde{\mathcal{B}}_1,\tilde{\mathcal{B}}_2,\tilde{\mathcal{B}}_3\}$ as in Theorem \ref{const2n}. Then $\mathcal{D}$ is a $(4,64,12)$-Golay-ZCZ sequence set. A glimpse of the correlations of the sequence set $\mathcal{D}$ is shown in Fig. \ref{fig2}.
\begin{figure}[ht]
\centering
\includegraphics[width=.7\textwidth]{example3cszcz.pdf}
\caption{A glimpse of the correlations of the sequence set $\mathcal{D}$, constructed in Example \ref{ex3n}.}\label{fig2}
\end{figure}
\end{example}
\begin{remark}
In \cite{Chen201811}, the authors reported $(2^k,2^m,2^{m-k-1})$- Golay-ZCZ sequence sets. Considering $k=2$, and $m=6$, we get $(4,64,8)$- Golay-ZCZ sequence set. As we see in Example \ref{ex3n}, the Golay-ZCZ sequence sets proposed in Theorem \ref{const2n} have larger ZCZ widths, as compared to the results in \cite{Chen201811}.
\end{remark}
Since Theorem \ref{const2n} is based on CCCs, the availability of CCCs of flexible lengths greatly broadens the reach of the construction. In Table \ref{tabccc}, we list the well-known constructions of CCCs to date. We also provide an iterative construction of CCCs to improve the flexibility of the parameters. Before we proceed, we need the following lemma.
\begin{lemma}\label{lemnew}
Let $\mathbf{x}_1,\mathbf{x}_2$ be two sequences of length $N_1$ and $\mathbf{y}_1,\mathbf{y}_2$ be two sequences of length $N_2$. Then the
aperiodic cross-correlation between $\mathbf{x}_1\otimes\mathbf{y}_1$ and $\mathbf{x}_2\otimes\mathbf{y}_2$ is given by \cite{jin}
\begin{equation}
C_{\mathbf{x}_1\otimes\mathbf{y}_1,\mathbf{x}_2\otimes\mathbf{y}_2} (\tau)= C_{\mathbf{x}_1,\mathbf{x}_2}(k_1) C_{\mathbf{y}_1,\mathbf{y}_2}(k_2) +C_{\mathbf{x}_1,\mathbf{x}_2}(k_1+1)C_{\mathbf{y}_1,\mathbf{y}_2}(k_2-N_2),
\end{equation}
where $\tau=k_1N_2+k_2$ with $0\leq k_2<N_2$.
\end{lemma}
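The lemma can be checked directly. The sketch below (helper names are ours) compares both sides of the identity for small test sequences, using the convention $C_{\mathbf{x},\mathbf{y}}(\tau)=\sum_k x_k y^*_{k+\tau}$ over valid indices, so out-of-range shifts (such as $k_2-N_2$ at $k_2=0$, or $k_1+1=N_1$) contribute zero.

```python
# Direct numerical check of the Kronecker-product correlation identity
# (helper names ours).
def accf(x, y, tau):
    """Aperiodic cross-correlation over valid indices; tau may be negative."""
    n = len(x)
    return sum(x[k] * y[k + tau].conjugate()
               for k in range(n) if 0 <= k + tau < n)

def kron(x, y):
    """Kronecker product of two sequences."""
    return [xi * yj for xi in x for yj in y]

x1, x2 = [1, 1, -1], [1, -1, 1]
y1, y2 = [1, 1], [1, -1]
N2 = len(y1)
lhs = [accf(kron(x1, y1), kron(x2, y2), t)
       for t in range(len(x1) * N2)]
rhs = [accf(x1, x2, t // N2) * accf(y1, y2, t % N2)
       + accf(x1, x2, t // N2 + 1) * accf(y1, y2, t % N2 - N2)
       for t in range(len(x1) * N2)]
```

The two lists coincide term by term, confirming the factorization.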
\begin{theorem}\label{th3}
Let $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\dots,\mathcal{C}^{M-1}\}$ and $\mathfrak{D}=\{\mathcal{D}^0,\mathcal{D}^1,\dots,\mathcal{D}^{M-1}\}$ be two CCCs with parameters $(M,M,N_1)$ and $(M,M,N_2)$, respectively. Then $\mathfrak{E}=\{\mathcal{E}^0,\mathcal{E}^1,\dots,\mathcal{E}^{M-1}\}$, given by
\begin{equation}
\mathcal{E}^{m}=\begin{bmatrix}
\mathbf{e}^{m}_0\\
\mathbf{e}^{m}_1\\
\vdots\\
\mathbf{e}^{m}_{M-1}
\end{bmatrix}_{M \times MN_1N_2}=\left[
\begin{array}{cccc}
\mathbf{c}^{m}_0\otimes\mathbf{d}^{0}_0 & \mathbf{c}^{m}_1\otimes\mathbf{d}^{0}_1 & \cdots & \mathbf{c}^{m}_{M-1}\otimes\mathbf{d}^{0}_{M-1} \\
\mathbf{c}^{m}_0\otimes\mathbf{d}^{1}_0 & \mathbf{c}^{m}_1\otimes\mathbf{d}^{1}_1 & \cdots & \mathbf{c}^{m}_{M-1}\otimes\mathbf{d}^{1}_{M-1} \\
\vdots & \vdots & & \vdots \\
\mathbf{c}^{m}_0\otimes\mathbf{d}^{M-1}_0 & \mathbf{c}^{m}_1\otimes\mathbf{d}^{M-1}_1 & \cdots & \mathbf{c}^{m}_{M-1}\otimes\mathbf{d}^{M-1}_{M-1} \\
\end{array}
\right]
\end{equation}
is a CCC with parameters $(M,M,MN_1N_2)$.
\end{theorem}
\begin{IEEEproof}
First, we prove that for $0\leq m <M$, $\mathcal{E}^m$ is a CS of length $MN_1N_2$.
By Lemma \ref{lemnew}, we have
\begin{equation}
\begin{split}
\sum_{l=0}^{M-1}C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0) = & \sum_{l=0}^{M-1} \left(C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1) C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2)+C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1+1) C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2-N_2)\right) \\
= & C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1)\sum_{l=0}^{M-1} C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2)+C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1+1) \sum_{l=0}^{M-1} C_{\mathbf{d}^{l}_{n_1}, \mathbf{d}^{l}_{n_2}}(k_2-N_2)\\
= & \begin{cases}
MN_2C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1), & k_2=0,n_1=n_2; \\
0, & k_2\neq0,n_1=n_2; \\
0, & \hbox{for all }k_2,n_1\neq n_2,
\end{cases}
\end{split}
\end{equation}
where $\tau_0=k_1N_2+k_2$.
For $\mathcal{E}^m$, let us consider $\tau=rN_1N_2+\tau_0$, then
\begin{equation}\label{eq39n}
\sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau)= \sum_{l=0}^{M-1}\sum_{n_1=0}^{M-1-r}(C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0)+ C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2+1}\otimes\mathbf{d}^{l}_{n_2+1}} (\tau_0-N_1N_2)),
\end{equation}
where $n_2=n_1+r$.
Clearly, we have $\sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau)=0$, if $r\geq1$. Consider $r=0$, then
\begin{equation}
\begin{split}
\sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau) = & \sum_{l=0}^{M-1}C_{\mathbf{e}^m_l}(\tau_0) \\
= & \sum_{l=0}^{M-1}\sum_{n_1=n_2=0}^{M-1} C_{\mathbf{c}^{m}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0) \\
= & \left\{
\begin{array}{ll}
\sum\limits_{n_1=n_2=0}^{M-1}MN_2C_{\mathbf{c}^{m}_{n_1}, \mathbf{c}^{m}_{n_2}}(k_1), & k_2=0; \\
0, & k_2\neq 0.
\end{array}
\right. \\
= & \left\{
\begin{array}{ll}
M^2N_1N_2, & k_1=0,k_2=0; \\
0, & \hbox{otherwise.}
\end{array}
\right.
\end{split}
\end{equation}
Next, we prove that $\mathcal{E}^{m_1}$ and $\mathcal{E}^{m_2}$ are orthogonal if $m_1\neq m_2$.
Similarly to (\ref{eq39n}), considering the cross-correlation sum between $\mathcal{E}^{m_1}$ and $\mathcal{E}^{m_2}$, we have
\begin{equation}
\sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau)= \sum_{l=0}^{M-1}\sum_{n_1=0}^{M-1-r}(C_{\mathbf{c}^{m_1}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m_2}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0)+ C_{\mathbf{c}^{m_1}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m_2}_{n_2+1}\otimes\mathbf{d}^{l}_{n_2+1}} (\tau_0-N_1N_2)),
\end{equation}
where $n_2=n_1+r$.
Clearly, we have $\sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau)=0$ if $r\geq1$. Consider $r=0$; then
\begin{equation}
\begin{split}
\sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau) = & \sum_{l=0}^{M-1}C_{\mathbf{e}^{m_1}_l,\mathbf{e}^{m_2}_l}(\tau_0) \\
= & \sum_{l=0}^{M-1}\sum_{n_1=n_2=0}^{M-1} C_{\mathbf{c}^{m_1}_{n_1}\otimes\mathbf{d}^{l}_{n_1}, \mathbf{c}^{m_2}_{n_2}\otimes\mathbf{d}^{l}_{n_2}} (\tau_0) \\
= & \left\{
\begin{array}{ll}
\sum\limits_{n_1=n_2=0}^{M-1}MN_2C_{\mathbf{c}^{m_1}_{n_1}, \mathbf{c}^{m_2}_{n_2}}(k_1), & k_2=0; \\
0, & k_2\neq 0.
\end{array}
\right. \\
= & 0.
\end{split}
\end{equation}
Hence $\mathfrak{E}=\{\mathcal{E}^0,\mathcal{E}^1,\dots,\mathcal{E}^{M-1}\}$ is an $(M,M,MN_1N_2)$-CCC.
\end{IEEEproof}
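A minimal instance of the theorem (with $M=2$, $N_1=N_2=2$; helper names are ours) can be verified numerically: composing the $(2,2,2)$-CCC built from a GCP and its Golay mate with itself via Kronecker products yields a $(2,2,8)$-CCC.

```python
# Small instance of the iterative CCC composition (helper names ours).
def accf(x, y, tau):
    n = len(x)
    return sum(x[k] * y[k + tau].conjugate()
               for k in range(n) if 0 <= k + tau < n)

def kron(x, y):
    return [xi * yj for xi in x for yj in y]

a, b = [1, 1], [1, -1]
cc, dd = [-1, 1], [-1, -1]
C = [[a, b], [cc, dd]]       # C[m][n] = c^m_n, a (2,2,2)-CCC
D = [[a, b], [cc, dd]]       # same seed CCC used as the second factor
M = 2

# e^m_l = c^m_0 (x) d^l_0 || c^m_1 (x) d^l_1
E = [[kron(C[m][0], D[l][0]) + kron(C[m][1], D[l][1])
      for l in range(M)] for m in range(M)]

def ccc_ok(sets, m, n_len):
    """Check sum_l C_{e^i_l, e^k_l}(tau) = m*n_len*delta(i-k)*delta(tau)."""
    for i in range(m):
        for k in range(m):
            for t in range(-(n_len - 1), n_len):
                s = sum(accf(x, y, t) for x, y in zip(sets[i], sets[k]))
                want = m * n_len if (i == k and t == 0) else 0
                if s != want:
                    return False
    return True
```

For this instance, `ccc_ok(E, 2, 8)` holds, i.e., $\mathfrak{E}$ is a $(2,2,8)$-CCC.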
\begin{remark}\label{remarkccc}
To increase the flexibility of the construction proposed in Theorem \ref{const2n}, we have also found, by computer search, binary CCCs with parameters $(4,4,3)$, $(4,4,5)$, $(4,4,7)$, $(4,4,11)$, and $(4,4,13)$, which can be used as seed CCCs for that construction. Writing $\mathfrak{C}=\{\mathcal{C}^0,\mathcal{C}^1,\mathcal{C}^2,\mathcal{C}^3\}$ for a $(4,4,N)$-CCC, the search results are listed in Table \ref{tab3}, where each element represents a power of $(-1)$. These search results are important in their own right: recent results \cite{chenccc,yuboccc,bingshenccc} indicate that for a $(K,M,N)$ mutually orthogonal sequence set obtained through systematic constructions, the maximum achievable ratio $K/M$ is $1/2$ when $N$ is not of the form $2^m$, whereas our sequence sets are CCCs (i.e., $K=M$), so the ratio $K/M$ equals $1$ even though $N$ is not a power of two. Moreover, for lengths up to 200 (i.e., $N\leq 200$), using the CCCs in Table \ref{tab3} as seeds together with the results of \cite{jin} and Theorem \ref{th3}, we can design binary $(4,4,N)$-CCCs for $N=12,~13,~20,~24,~28,~36,~40,~44,~48,~52,~56,~60$, $72,~80,~84,~88,~96,~112,~120,~132,~140$, $144,~156,~160,~168,~176,~192,~196,~200$.
\end{remark}
\begin{table}[]
\renewcommand{\arraystretch}{1.3}
\resizebox{\textwidth}{!}{
\begin{tabular}{|c|c|c|c|c|}
\hline
$N$ & $\mathcal{C}^{0}$ & $\mathcal{C}^{1}$ & $\mathcal{C}^{2}$ & $\mathcal{C}^{3}$ \\ \hline
3 & \begin{tabular}[c]{@{}c@{}}000\\
001\\
001\\
010\end{tabular} & \begin{tabular}[c]{@{}c@{}}010\\
001\\
110\\
111\end{tabular} & \begin{tabular}[c]{@{}c@{}}011\\
000\\
101\\
100\end{tabular} & \begin{tabular}[c]{@{}c@{}}011\\
010\\
000\\
011\end{tabular} \\ \hline
5 & \begin{tabular}[c]{@{}c@{}}00001\\
01100\\
01000\\
01011\end{tabular} & \begin{tabular}[c]{@{}c@{}}00010\\
00101\\
01111\\
00110\end{tabular} & \begin{tabular}[c]{@{}c@{}}00101\\
11101\\
00110\\
10000\end{tabular} & \begin{tabular}[c]{@{}c@{}}01100\\
11110\\
01011\\
10111\end{tabular} \\ \hline
7 & \begin{tabular}[c]{@{}c@{}}0000001\\
0011010\\
0011010\\
0100011\end{tabular} & \begin{tabular}[c]{@{}c@{}}0011101\\
0011010\\
1100101\\
1000000\end{tabular} & \begin{tabular}[c]{@{}c@{}}0101100\\
0100011\\
1111110\\
1010011\end{tabular} & \begin{tabular}[c]{@{}c@{}}0101100\\
0111111\\
0011101\\
0101100\end{tabular} \\ \hline
11 & \begin{tabular}[c]{@{}c@{}}01110110110\\
00111000101\\
00011010100\\
00000001101\end{tabular} & \begin{tabular}[c]{@{}c@{}}00011010100\\
00000001101\\
10001001001\\
11000111010\end{tabular} & \begin{tabular}[c]{@{}c@{}}10110000000\\
11010100111\\
10100011100\\
10010010001\end{tabular} & \begin{tabular}[c]{@{}c@{}}01011100011\\
01101101110\\
10110000000\\
11010100111\end{tabular} \\ \hline
13 & \begin{tabular}[c]{@{}c@{}}0111011010100\\
0011101001101\\
0001100001001\\
0000000111010\end{tabular} & \begin{tabular}[c]{@{}c@{}}0001100001001\\
0000000111010\\
1000100101011\\
1100010110010\end{tabular} & \begin{tabular}[c]{@{}c@{}}0101110000000\\
0110111100111\\
1011001011100\\
1101010010001\end{tabular} & \begin{tabular}[c]{@{}c@{}}0100110100011\\
0010101101110\\
0101110000000\\
0110111100111\end{tabular} \\ \hline
\end{tabular}}
\caption{Computer search results of $(4,4,N)$-CCCs for various values of $N$.\label{tab3}}
\end{table}
\section{Discussion on Optimality of the Proposed Sequence Sets}\label{secv}
For polyphase sequence sets consisting of $M$ sequences each of length $L$, having ZCZ width $Z$, we have from \cite{tangfan}
\begin{equation}
Z\leq \frac{L}{M}.
\end{equation}
For binary sequence sets, the bound is conjectured in \cite{tangfan} as
\begin{equation}
Z\leq \frac{L}{2M}.
\end{equation}
Assuming $Z_{opti}$ to be the optimal value of $Z$, we define the optimality factor $C$ as
\begin{equation}
C=\frac{Z}{Z_{opti}}.
\end{equation}
Consider the $(2,4N,N)$-Golay-ZCZ sequence pairs described in Theorem \ref{th1}. In the binary case, the conjectured bound gives $Z_{opti}=L/(2M)=4N/4=N$, so $C=1$; hence the Golay-ZCZ sequence pairs proposed in Theorem \ref{th1} are optimal.
Assuming that an $(M,M,N)$-CCC exists, Theorem \ref{const2n} yields an $(M,M^2N,(M-1)N)$-Golay-ZCZ sequence set. For this set to be optimal, the ZCZ width would have to be $Z_{opti}=\frac{L}{M}=MN$, whereas the width achieved by Theorem \ref{const2n} is $Z=(M-1)N$.
Therefore,
\begin{equation}
\begin{split}
C&=\frac{Z}{Z_{opti}}\\
&=\frac{(M-1)}{M}.
\end{split}
\end{equation}
Since $\lim\limits_{M\rightarrow \infty}C=1$, the sequence sets proposed in Theorem \ref{const2n} are asymptotically optimal as the number of sequences increases.
\section{The novelty of the Proposed Constructions as Compared to Previous Works}\label{novel}
In this section, we compare the proposed constructions with previous works, specifically those of Gong \textit{et al.} \cite{Gong2013} and Chen \textit{et al.} \cite{Chen201811}, and discuss the novelty of the proposed constructions.
\begin{enumerate}
\item In \cite{Gong2013}, the authors considered only complementary sequences of lengths of the form $2^m$, and analysed their periodic zero auto-correlation zones. In contrast, Theorems \ref{th1} and \ref{const2n} consider complementary sequences of more flexible lengths and analyse both the periodic zero auto-correlation zone and the zero cross-correlation zone.
\item In \cite{Chen201811}, the authors proposed Golay-ZCZ sequence sets whose constituent sequences have lengths only of the form $2^m$. In contrast, in Theorem \ref{th1} we propose Golay-ZCZ sequence pairs of length $4N$, where $N$ is the length of a GCP, and in Theorem \ref{const2n} we propose Golay-ZCZ sequence sets consisting of sequences of length $M^2N$, using an $(M,M,N)$-CCC. The lengths are thus more flexible than those in \cite{Chen201811}.
\item The results reported in \cite{Chen201811} achieve optimality only for binary $(2,2^m,2^{m-2})$-Golay-ZCZ sequence pairs. In contrast, by the discussion in Section \ref{secv}, Theorem \ref{th1} yields binary $(2,4N,N)$-Golay-ZCZ sequence pairs, which are optimal, and Theorem \ref{const2n} yields polyphase $(M,M^2N,(M-1)N)$-Golay-ZCZ sequence sets, which are asymptotically optimal as the number of sequences increases.
\item Analysing Table 1 of \cite{Chen201811}, we observe that as the number of sequences in the Golay-ZCZ sequence set increases, the ZCZ width decreases. In contrast, in Theorem \ref{const2n}, as the number of sequences increases, the ZCZ width also increases, and we eventually achieve asymptotic optimality.
\item To increase the availability of CCCs with various parameters, and in turn the flexibility of the parameters of the proposed Golay-ZCZ sequence sets, we have proposed a new iterative construction of CCCs. We have also provided computer search results for binary $(4,4,N)$-CCCs for various values of $N$ in Table \ref{tab3}. Moreover, using these as seed CCCs in \cite{jin} and Theorem \ref{th3}, we can construct several binary CCCs with parameters not reported before. As discussed in Remark \ref{remarkccc}, these CCCs are important in their own right, since $N$ is not a power of two.
\end{enumerate}
\section{Conclusion}\label{section 5}
In this paper, we have made two contributions. First, we have systematically constructed GCPs of non-power-of-two lengths in which each constituent sequence has a periodic ZACZ and the pair has a periodic ZCCZ. We have also constructed complementary sets consisting of sequences with large periodic ZACZ and ZCCZ, using CCCs and IDFT matrices. Notably, the second construction results in Golay-ZCZ sequences that are asymptotically optimal with respect to the Tang-Fan-Matsufuji bound as the number of sequences increases. To increase the availability of CCCs with various parameters, and consequently the flexibility of the proposed Golay-ZCZ sequence sets, we have also proposed a new iterative construction of CCCs. Moreover, we have found binary CCCs with parameters $(4,4,3)$, $(4,4,5)$, $(4,4,7)$, $(4,4,11)$, and $(4,4,13)$ by computer search, which can serve as seed CCCs and greatly increase the flexibility of the proposed construction. Since the lengths of the resultant Golay-ZCZ sequences are more flexible, the proposed constructions partially fill the gap left by the previous remarkable works of Gong \textit{et al.} and Chen \textit{et al.} The proposed Golay-ZCZ sequences have potential applications in uplink grant-free NOMA.
| {
"timestamp": "2021-12-17T02:12:00",
"yymm": "2112",
"arxiv_id": "2112.08678",
"language": "en",
"url": "https://arxiv.org/abs/2112.08678",
"abstract": "Zero correlation zone (ZCZ) sequences and Golay complementary sequences are two kinds of sequences with different preferable correlation properties. Golay-ZCZ sequences are special kinds of complementary sequences which also possess a large ZCZ and are good candidates for pilots in OFDM systems. Known Golay-ZCZ sequences reported in the literature have a limitation in the length, which is of the form of a power of 2. In this paper, we propose two constructions of Golay-ZCZ sequence sets with new parameters which generalize the constructions of Gong et al. (IEEE Transactions on Communications 61(9), 2013) and Chen et al. (IEEE Transactions on Communications 61(9), 2018). Notably, one of the constructions results in optimal binary Golay-ZCZ sequences, while the other results in asymptotically optimal polyphase Golay-ZCZ sequences as the number of sequences increases.",
"subjects": "Information Theory (cs.IT)",
"title": "Asymptotically Optimal Golay-ZCZ Sequence Sets with Flexible Length"
} |
https://arxiv.org/abs/1708.07593 | Exotic Bifurcations Inspired by Walking Droplet Dynamics | We identify two rather novel types of (compound) dynamical bifurcations generated primarily by interactions of an invariant attracting submanifold with stable and unstable manifolds of hyperbolic fixed points. These bifurcation types - inspired by recent investigations of mathematical models for walking droplet (pilot-wave) phenomena - are introduced and illustrated. Some of the one-parameter bifurcation types are analyzed in detail and extended from the plane to higher-dimensional spaces. A few applications to walking droplet dynamics are analyzed. | \section{Introduction}
Inspired by our recent research on the dynamical properties of mathematical
models of walking droplet (pilot-wave) phenomena \cite{RB1}, we shall describe
and analyze what appear to be new types or classes of bifurcations. Owing
largely to its potential for producing macroscopic analogs of certain quantum
phenomena, walking droplet dynamics has become a very active area of research
since the seminal work of Couder \textit{et al.} \cite{CPFB}. In this study
we focus on the dynamical systems models arising from walking droplets, interesting
examples of which can be found in Gilet \cite{Gil}, Milewski et al. \cite{MGNB}, Oza et al. \cite{OHRB},
Rahman and Blackmore \cite{RB1}, and Shirokov \cite{Shir}. Furthermore, a detailed summary of recent
advancements in hydrodynamic pilot-waves can be found in \cite{Bush}. Simulations of the solutions of
some of the mathematical models for walkers are not only interesting for their
quantum-like effects; they also exhibit exotic bifurcations that are apt to attract
the interest of dynamical systems researchers and enthusiasts. In particular,
the two-parameter planar discrete dynamical systems model of Gilet \cite{Gil}
of the form $G: \mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ defined as
\begin{equation}
G(x,y;C;\mu):=\left( x-C\Psi^{\prime}(x)y,\mu(y+\Psi(x))\right), \label{e1}
\end{equation}
where $0<C,\mu<1$ are parameters and $\Psi$ is an odd, $2\pi$-periodic
function given by
\[
\Psi(x):=\frac{1}{\sqrt{\pi}}\left( \cos\beta\sin3x+\sin\beta\sin5x\right),
\]
where $\beta$ is usually chosen to be $\pi/3$ or $\pi/6$, exhibits not only
Neimark--Sacker bifurcations, but more exotic chaotic bifurcations that have
apparently not been analyzed in detail in the literature. Simulations of the
dynamics of \eqref{e1} have shown that these exotic bifurcations are similar
in certain respects if one of the parameters $C,\mu$ is varied and the other
fixed, but also quite different in other ways. For example, in the fixed-$C$
case shown in Fig. \ref{Fig: mu-evolution}, we see a progression of similarly shaped attractors
ending in what appears to be a chaotic state. Actually, if $\mu$ is increased
further (not shown), the chaotic attractor exhibits dramatic changes, which
are apparently due to a series of dynamical crises (cf. Ott \cite{Ott}).
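The attractor progressions in the figures can be explored by direct iteration of \eqref{e1}. The following minimal sketch fixes $\beta=\pi/3$ and uses illustrative parameter values $C=0.45$, $\mu=0.9$ (the value of $\mu$ used for the figures is not specified here), collecting a post-transient orbit:

```python
import numpy as np

# Direct iteration of the Gilet map (1) with beta = pi/3; the values of
# C and mu below are illustrative choices, not those used for the figures.
beta = np.pi / 3
rootpi = np.sqrt(np.pi)

def Psi(x):
    return (np.cos(beta) * np.sin(3 * x) + np.sin(beta) * np.sin(5 * x)) / rootpi

def dPsi(x):
    return (3 * np.cos(beta) * np.cos(3 * x) + 5 * np.sin(beta) * np.cos(5 * x)) / rootpi

def G(x, y, C, mu):
    return x - C * dPsi(x) * y, mu * (y + Psi(x))

C, mu = 0.45, 0.9
x, y = 0.1, 0.0
orbit = []
for n in range(20000):
    x, y = G(x, y, C, mu)
    if n >= 1000:                            # discard the transient
        orbit.append((x % (2 * np.pi), y))   # Psi is 2*pi-periodic in x
orbit = np.array(orbit)
print(orbit.shape)   # (19000, 2)
```

Plotting `orbit` for a sweep of $C$ or $\mu$ reproduces, qualitatively, the progression from invariant curves to chaotic-looking attractors seen in the figures.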
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.32\textwidth]{mu-a.jpg}
\includegraphics[width = 0.32\textwidth]{mu-b.jpg}
\includegraphics[width = 0.32\textwidth]{mu-c.jpg}
\caption{Bifurcation evolution for fixed $C$ (increasing $\mu$).}
\label{Fig: mu-evolution}
\end{figure}
On the other hand, in Fig. \ref{Fig: C-evolution} we show a sequence in which $\mu$ is fixed and
$C$ varied from $0.45$ to $0.528$ at which point we see what appears to be a
chaotic strange attractor. This chaotic strange attractor actually persists in
shape up to around $C=0.7$ (not shown). Beyond $0.7$ (also not shown) the
attractor changes in shape more or less continuously until it finally breaks
up into a chaotic splatter with just a ghost-like shadow of the prior shape -
a last stage that is also indicative of dynamical crises. In this
investigation, we concentrate on abstracting the bifurcation properties of the
Gilet map associated with fixing $\mu$ and varying $C$.
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.49\textwidth]{C-a.jpg}
\includegraphics[width = 0.49\textwidth]{C-b.jpg}
\includegraphics[width = 0.49\textwidth]{C-c.jpg}
\includegraphics[width = 0.49\textwidth]{C-d.jpg}
\caption{Bifurcation evolution for fixed $\mu$ (increasing $C$).}
\label{Fig: C-evolution}
\end{figure}
The bifurcations that we describe in the sequel are those for discrete
dynamical systems comprising the iterates of differentiable,
parameter-dependent self-maps of smooth finite-dimensional manifolds, and they
are generated by interactions of closed (positively) invariant submanifolds
with stable and unstable manifolds of saddle points as a single parameter is
varied. In the interest of simplicity and clarity, we shall confine our
attention to maps $f\in C^{1}\left( \mathbb{R}^{m}\times\mathbb{R}%
,\mathbb{R}^{m}\right) $, where $m$ is a natural number (in $\mathbb{N}$)
and, as usual,%
\[
C^{1}\left( \mathbb{R}^{m}\times\mathbb{R},\mathbb{R}^{m}\right) :=\left\{
f:\mathbb{R}^{m}\times\mathbb{R}\rightarrow\mathbb{R}^{m}:f\text{ is
continuously differentiable}\right\} .
\]
Here, of course, $\mathbb{R}^{m}$ and $\mathbb{R}$ are, respectively,
Euclidean $m$- and $1$-space, which represent the phase and parameter spaces
of the dynamical system, with points in $\mathbb{R}^{m}\times\mathbb{R}$
denoted as $(\boldsymbol{x},\sigma)$. An example of the kind of map
we shall be investigating is that of the form of \eqref{e1} with $\mu$ a fixed
constant in $(0,1)$ and the other parameter $C$, which we denote as $\sigma$,
varying in $(0,1)$; namely,%
\begin{equation}
F(x,y;\sigma):=\left( x-\sigma\Psi^{\prime}(x)y,\mu(y+\Psi(x))\right) .
\label{e2}%
\end{equation}
In what follows, we assume a knowledge of the fundamentals of modern dynamical
systems theory such as can be found in \cite{Arr,Ott,PdM,Rob,NDS}. Our main
results are detailed in the remainder of this paper, which is arranged as
follows. In Section 2 and Section 3 we describe the planar forms of the
bifurcations involving the interaction of an attracting invariant Jordan curve
and stable and unstable manifolds of saddle points. These bifurcations share
some features with the dynamical phenomena described in Aronson \emph{et al}.
\cite{ACHM} and Frouzakis \emph{et al}. \cite{FKP}, but they appear to be
essentially new. More specifically, in Section 2, we introduce and analyze
planar dynamical bifurcations generated by the interaction of an attracting
Jordan curve and the stable manifold of a saddle point. In particular, the
interaction first induces a bifurcation caused by a tangent homoclinic orbit,
which is followed by a sequence of additional tangent homoclinic orbits
interspersed with transverse homoclinic orbits as the parameter is increased.
Ultimately, however, an increase in $\sigma$ leads to a final tangent
homoclinic orbit, after which there is a parameter interval on which there is a
robust chaotic strange attractor, which is amenable to abstraction. As
mentioned above, there are additional types of bifurcations for larger
parameter values, which shall not be described in detail here. It should be
noted that the bifurcations considered in this paper, which shall be
designated as being of type 1, are directly related to those of the Gilet map
\eqref{e1} where a slice-like region of non-injectivity of the map plays a key
role. There is a second type - a variant of the Gilet map - where the map is a
diffeomorphism that produces analogous bifurcations, which we shall analyze in
a forthcoming paper.
Section 3 is where we describe a modification of the bifurcation in the
preceding section generated by the interaction of an attracting Jordan curve
with a pair of stable manifolds, which can induce heteroclinic cycles that
generate chaotic strange attractors followed by homoclinic bifurcations. Next,
in Section 4, we discuss some higher-dimensional analogs of the planar
bifurcations analyzed in the preceding sections. In Section 5, we describe
several applications and examples of the bifurcations, with a focus on the
phenomena and mathematical models that inspired our work on the bifurcations;
namely, walking droplet dynamics. Finally, in Section 6, we summarize some of
the conclusions reached in this research and adumbrate possible related future
work, which includes analyzing the bifurcations in the dynamics of the map
\eqref{e1} when $C$ is fixed and $\mu$ varied.
\section{Homoclinic Bifurcations of Type 1 in the Plane}
In this and the next section, we restrict our attention to a $C^{1}$ map
$f:\mathbb{R}^{2}\times\mathbb{R}\rightarrow\mathbb{R}^{2}$, with the tacit
understanding that we could have just as well considered a 1-parameter
dependent map on a simply-connected open subset of a smooth surface. The
points of $\ \mathbb{R}^{2}\times\mathbb{R}$ shall be denoted by
$(\boldsymbol{x},\sigma)=\left( (x,y),\sigma\right) $ and we use the
standard notation $f_{\sigma}:\mathbb{R}^{2}\rightarrow\mathbb{R}^{2}$ for the
planar map with the parameter $\sigma$ fixed at a particular value in $(0,1)$.
Let us set the stage for the\emph{ homoclinic bifurcation of type }$1$ with
more specificity. For this, we assume that $\boldsymbol{p}(\sigma):=(\hat
{x}(\sigma),\hat{y}(\sigma))$ is a saddle point of $f_{\sigma}$, with one
eigenvalue $\lambda^{u}(\sigma)>1$ and the other $0<\lambda^{s}(\sigma)<1$ for
all $\sigma\in\lbrack a,b)\subset(0,1)$ such that $\hat{x}(a),\hat{y}(a)>0$,
where $a<b$. We may also assume that the linear unstable manifold is
horizontal when $\sigma=a$; i.e., $W_{lin}^{u}(\boldsymbol{p}(a))$ is the line
$y=\hat{y}(a)$. In addition, we assume that the saddle point, although it may
move as the parameter is varied, remains within the open rectangle
$R_{\alpha,\beta}:=(0,\hat{x}(a)+\alpha)\times(0,\hat{y}(a)+\beta)$ for some
$\alpha,\beta>0$ and all $a\leq\sigma<b$, and its stable manifold $W^{s}%
(\boldsymbol{p}(\sigma))$ lies in the open vertical strip $V_{\alpha}%
:=(0,\hat{x}(a)+\alpha)\times\mathbb{R}$ for all $a\leq\sigma<b$.
A principal feature of the bifurcation is an attracting invariant closed
tubular neighborhood $T(\sigma)$ of a $C^{1}$ closed Jordan curve
$C=C(\sigma)$, which we call an \emph{invariant tube} with center $C$ for the
map. Then there is for some $\sigma$ an associated compact invariant
attracting set
\[
\mathfrak{C}\mathcal{(\sigma)}:=%
{\displaystyle\bigcap\nolimits_{n=1}^{\infty}}
f_{\sigma}^{n}\left( T(\sigma)\right) ,
\]
which, for an appropriately chosen center, is equal to $C(\sigma)$ for some
initial subinterval of $[a,b)$. It should be noted that such invariant circles
and tubes rather frequently arise from Neimark--Sacker bifurcations of sinks,
especially those of the spiral variety (\emph{cf. }\cite{Neim,Sack} and also
\cite{Arr,Rob,NDS}). We also assume that for the parameter values for which
the invariant tubes exist, $T(\sigma)$ is contained in the open rectangle
$\mathcal{R}_{\alpha,\beta}:=\{(x,y):\left\vert x\right\vert <\hat
{x}(a)+\alpha,\left\vert y\right\vert <\hat{y}(a)+\beta\}$, and, even more, it
is contained in the set $Q(\sigma)$ comprising all points in $\mathcal{R}%
_{\alpha,\beta}$ to the left of $W^{s}(\boldsymbol{p}(\sigma))$ and beneath
$W_{lin}^{u}(\boldsymbol{p}(\sigma))$, respectively. Moreover, we assume that
$f_{\sigma}$ has a positive (counterclockwise) rotation number in the sense
that the iterates of any normal section of $T(\sigma)$ completely traverse the
tube in a counterclockwise manner.
Finally, there is another important feature for maps of the type represented
by Gilet's model. Namely, it is assumed that there is a $\sigma_{\#}$ with $a<\sigma_{\#}<b$ such
that for all $\sigma\in\lbrack a,\sigma_{\#})$, the basin of attraction of
$\mathfrak{C}(\sigma)$ contains all points $(x,y)$ in $\mathcal{R}%
_{\alpha,\beta}$ to the left of $W^{s}(\boldsymbol{p}(\sigma))$, except for
points in a curvilinear slice $Z(\sigma)$ below the saddle point, with one
edge, and the image under $f_{\sigma}$ of the other edge, contained in $W^{s}%
(\boldsymbol{p}(\sigma))$, having the property that $f_{\sigma}$ flips its
interior across the stable manifold. This slice is a key agent in producing a
chaotic strange attractor as the center of the tube expands for Gilet-type maps.
Now that the contextual foundation has been established, we shall prove a
result describing the homoclinic type bifurcations that we have in mind.
Toward this end, it is useful to summarize the descriptions above in the form
of a list of detailed but reasonably succinct properties. These attributes of
the map $f:\mathbb{R}^{2}\times\mathbb{[}a,b)\rightarrow\mathbb{R}^{2}$,
illustrated in Fig. \ref{Top1}, are as follows:
\begin{itemize}
\item[(A1)] $f\in C^{1}:=C^{1}\left( \mathbb{R}^{2}\times\mathbb{[}%
a,b),\mathbb{R}^{2}\right) $, where $\mathbb{[}a,b)\subset(0,1)$ and
$f_{\sigma}$ has a single saddle point $\boldsymbol{p}(\sigma):=(\hat
{x}(\sigma),\hat{y}(\sigma))$, which is such that: (i) $\hat{x}(a),\hat
{y}(a)>0$; (ii) there are real constants $\kappa_{s}$, $\kappa_{u}$ such that
the eigenvalues $\lambda^{s}(\sigma)$ and $\lambda^{u}(\sigma)$ of $f_{\sigma
}^{\prime}\left( \boldsymbol{p}(\sigma)\right) $ satisfy $0<\lambda
^{s}(\sigma)\leq\kappa_{s}<1<\kappa_{u}\leq\lambda^{u}(\sigma)$ for all
$a\leq\sigma<b$; (iii) the eigenvector corresponding to $\lambda^{u}(a)$ is
parallel to the $x$-axis; (iv) there is a vertical strip of the form
\[
V_{\alpha}:=\{\boldsymbol{x}:=(x,y)\in\ \mathbb{R}^{2}:0<x<\hat{x}%
(a)+\alpha\}
\]
for some $\alpha>0$ such that the stable manifold $W^{s}(\boldsymbol{p}%
(\sigma))\subset V_{\alpha}$ for all $a\leq\sigma<b$ and separates the plane
into left and right components denoted as $K_{-}(\sigma)$ and $K_{+}(\sigma)$,
respectively; and (v) there is a $\beta>0$ such that
\begin{align*}
&\boldsymbol{p}(\sigma)\in R_{\alpha,\beta}:=\{(x,y)\in\
\mathbb{R}^{2}:0<x<\hat{x}(a)+\alpha,0<y<\hat{y}(a)+\beta\}\\
&\subset\mathcal{R}_{\alpha,\beta}:=\{(x,y):\left\vert x\right\vert
<\hat{x}(a)+\alpha,\left\vert y\right\vert <\hat{y}(a)+\beta\}
\end{align*}
for all $\sigma\in\lbrack a,b)$.
\item[(A2)] There exist $a<\sigma_{1}\leq\sigma_{\#}<\sigma_{t}<\sigma
_{2}<\sigma_{3}<b$ such that the following obtain: (i) there is a (positively)
$f_{\sigma}$-invariant attracting tubular neighborhood $T(\sigma)$ of a
$C^{1}$ closed Jordan curve $C(\sigma)$ for all $\sigma\in\lbrack a,\sigma
_{2})$; (ii) $f_{\sigma}$ restricted to $T(\sigma)$ has a positive
(counterclockwise) rotation number for all $\sigma\in\lbrack a,\sigma_{2})$;
(iii) the closed curve can be chosen so that \emph{centerset}
\begin{equation*}
\mathfrak{C}(\sigma):= {\displaystyle\bigcap\nolimits_{n=1}^{\infty}}
f_{\sigma}^{n}\left( T(\sigma)\right) =C(\sigma)
\end{equation*}
for all $\sigma\in\lbrack a,\sigma_{1})$; (iv) $T(\sigma)\subset K_{-}%
(\sigma)\cap\mathcal{R}_{\alpha,\beta}\cap H(\sigma)$, where $H(\sigma)$ is
the half-plane defined by $y<\hat{y}(\sigma)$, for all $\sigma\in\lbrack
a,\sigma_{2})$; (v) $\mathfrak{C}(\sigma)$ is a nonempty attracting set for
all $\sigma\in\lbrack a,\sigma_{\#})\cup\lbrack\sigma_{2},\sigma_{3})$ and has
an open basin of attraction, denoted as $\mathfrak{\mathring{B}}(\sigma)$,
containing $\left( K_{-}(\sigma)\cap\mathcal{R}_{\alpha,\beta}\cap
\mathfrak{C}_{\mathrm{ext}}(\sigma)\right) \smallsetminus Z(\sigma)$ and
$\mathfrak{C}(\sigma)\cap Z(\sigma)=\varnothing$ \ for all $\sigma\in\lbrack
a,\sigma_{\#})$, where $\mathfrak{C}_{\mathrm{ext}}(\sigma)$ is the exterior
of the centerset and $Z(\sigma)$ is a set defined as follows: There is for
every $\sigma\in\lbrack a,b)$ an orientation preserving $C^{1}$-diffeomorphism
$\Phi_{\sigma}$ of an open neighborhood $U_{\sigma}$ of a portion of
$W^{s}(\boldsymbol{p}(\sigma))$ below $\boldsymbol{p}(\sigma)$ such that
\begin{equation*}
\Phi_{\sigma}\left( Z(\sigma)\right) :=\left\{ (\xi,\eta):-1\leq\xi
\leq0,\eta\leq\varphi(\xi)\right\} ,
\end{equation*}
where $\varphi:[-1,1]\rightarrow(-1,0]$ is a $C^{1}$ function such that
$\varphi(0)=0$ and $\varphi^{\prime}(\xi)>0$ when $\xi<0$. Moreover, the
right-hand bounding curve $c_{r}(\sigma):=\Phi_{\sigma}^{-1}\left( \left\{
(0,\eta):\varphi(-1)\leq\eta\leq0\right\} \right) $ and the $f_{\sigma}$
image of left-hand bounding curve $c_{l}(\sigma):=\Phi_{\sigma}^{-1}\left(
\left\{ \left( \xi,\varphi(\xi)\right) :-1\leq\xi\leq0\right\} \right) $
lie in $W^{s}(\boldsymbol{p}(\sigma))$, while $f_{\sigma}$ maps the interior
$\mathring{Z}(\sigma)$ into $K_{+}(\sigma)$.
\end{itemize}
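For the Gilet-type map \eqref{e2}, one family of fixed points consists of the points $(x^{\ast},\mu\Psi(x^{\ast})/(1-\mu))$ with $\Psi^{\prime}(x^{\ast})=0$ (the $x$-coordinate is then fixed automatically), and these furnish concrete candidates for the saddle point in (A1). A minimal numerical sketch (with illustrative values $\beta=\pi/3$, $\mu=0.9$, $\sigma=0.5$, not parameters taken from the paper) locates one such point by bisection and checks the eigenvalue condition (A1)(ii):

```python
import numpy as np

# Illustrative parameter choices for the map F in (2).
beta, mu, sigma = np.pi / 3, 0.9, 0.5
rootpi = np.sqrt(np.pi)
Psi  = lambda x: (np.cos(beta) * np.sin(3 * x) + np.sin(beta) * np.sin(5 * x)) / rootpi
dPsi = lambda x: (3 * np.cos(beta) * np.cos(3 * x) + 5 * np.sin(beta) * np.cos(5 * x)) / rootpi

def F(v):
    x, y = v
    return np.array([x - sigma * dPsi(x) * y, mu * (y + Psi(x))])

def jacobian(v, h=1e-6):
    # Central finite differences for F'(v).
    J = np.zeros((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = h
        J[:, j] = (F(v + e) - F(v - e)) / (2 * h)
    return J

# Locate the first maximum of Psi by bisection on dPsi over (0, 0.5),
# where dPsi changes sign for these parameter values.
lo, hi = 0.0, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dPsi(lo) * dPsi(mid) <= 0:
        hi = mid
    else:
        lo = mid
xs = 0.5 * (lo + hi)
p = np.array([xs, mu * Psi(xs) / (1 - mu)])   # fixed point of F

lam_s, lam_u = sorted(np.abs(np.linalg.eigvals(jacobian(p))))
print(lam_s < 1 < lam_u)   # True: p is a saddle
```

At such a point the Jacobian is diagonal, with stable eigenvalue $\mu$ and unstable eigenvalue $1-\sigma\Psi^{\prime\prime}(x^{\ast})y^{\ast}$, and the unstable eigendirection is parallel to the $x$-axis, consistent with (A1)(iii).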
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{Topog.jpg}\caption{Topography of the homoclinic
type 1 bifurcation}%
\label{Top1}%
\end{figure}
We have now sufficiently prepared the way for the statement and proof of our
first main result based on the above assumptions, which concerns the existence
of what we call a \emph{homoclinic type }$1$\emph{ bifurcation}. However, it
is convenient to first introduce the following definition: The \emph{left unstable manifold} is
\begin{equation*}
W_{l}^{u}(\boldsymbol{p}(\sigma)):=%
{\displaystyle\bigcup\nolimits_{n=1}^{\infty}}
f_{\sigma}^{n}\left( \left\{ \boldsymbol{x}\in W^{u}(\boldsymbol{p}%
(\sigma)):\boldsymbol{x}\in\{\boldsymbol{p}(\sigma)\}\cup K_{-}(\sigma)\text{
and }d\left( \boldsymbol{x},\boldsymbol{p}(\sigma)\right) \leq\nu\text{ for some }\nu>0\right\} \right) .
\end{equation*}
\begin{thm}
Let $f:\mathbb{R}^{2}\times\lbrack
a,b)\rightarrow\mathbb{R}^{2}$ satisfy (A1) and (A2) and
the additional property:
\begin{itemize}
\item[(A3)] The distance $\Delta(\sigma):=\mathrm{dist}\left(
\mathfrak{C}(\sigma),Z(\sigma)\right) :=\inf\left\{ \left\vert
\boldsymbol{x}-\boldsymbol{y}\right\vert :(\boldsymbol{x},\boldsymbol{y}%
)\in\mathfrak{C}(\sigma)\times Z(\sigma)\right\} $ satisfies\\
$\Delta(a)>0$ and is a nonincreasing function of $\sigma$ on
$[a,\sigma_{\#})\cup\lbrack\sigma_{2},\sigma_{3})$ and $\lim
_{\sigma\uparrow\sigma_{3}}\Delta(\sigma)=0$.
\end{itemize}
Then, there are $a<\sigma_{\#}\leq\sigma_{t}\leq\sigma_{2}%
\leq\sigma_{\ast}<\sigma_{3}<b_{1}<1$, where $\sigma_{t}$ and
$\sigma_{\ast}$ are the first and last, respectively, values of
$\sigma$ where there is a tangent intersection of the stable and
unstable manifolds of $\boldsymbol{p}(\sigma)$ such that for all
$\sigma_{\ast}<\sigma<b_{1}$, $f_{\sigma}$ has a chaotic strange
attractor $\mathfrak{A}(\sigma)$, which is the closure of the left
unstable manifold: namely,
\begin{equation}
\mathfrak{A}(\sigma):=\overline{W_{l}^{u}(\boldsymbol{p}(\sigma))}\text{.}
\label{e3}%
\end{equation}
\begin{proof}
We begin by focusing on a tubular type strip
$Q=Q(\sigma;\nu)$ for $W^{s}(\boldsymbol{p}(\sigma))$ (excluding $c_{l}%
(\sigma)$) cut off just above $\boldsymbol{p}(\sigma)$ and below at the lower
edge of $R_{\alpha,\beta}$ as shown in Fig. \ref{Top1}, with the understanding that the
dimensions can be taken to be as small as suits our purposes in what follows.
In particular, we take the width of the strip to be $2\nu$ and the top edge to
be parallel to and at a distance of $\nu$ above $W^{u}(\boldsymbol{p}%
(\sigma))$, where $\nu>0$. In this context, we shall find it convenient to
define $W_{\nu}^{u}(\sigma)$ to be the closed segment, with interior in
$K_{-}(\sigma)$, of the unstable manifold from $\boldsymbol{p}(\sigma)$ to the
boundary of $Q$, and to use $s_{\sigma}:=s_{\sigma}(\boldsymbol{x}%
)=s(\boldsymbol{x})$ to denote the arclength from the fixed point to any point
$\boldsymbol{x}\in W_{\nu}^{u}(\sigma)$ or any of its $f_{\sigma}$-iterates.
We also define for every nonnegative integer $n$ the \emph{endpoint}
$e_{\sigma}^{n}$, to be the boundary point of the $C^{1}$-submanifold (with
boundary) $\mathcal{W}_{\nu}^{u}(\sigma,n):=f_{\sigma}^{n}(W_{\nu}%
^{u}(\sigma))$ for which $s_{\sigma}$ is positive and actually increases
without bound as $n\rightarrow\infty$ as long as $\sigma<\sigma_{\#}$.
The idea of the proof is illustrated rather simply in Figs. \ref{Top1} and \ref{Fig: Evol-1}, but some
technical details are necessary. It follows from the hypotheses (A1)-(A3) that
for any $\nu>0$ there is a positive integer $N=N_{\nu}(\sigma)$ and a first
$\sigma=\sigma_{t}\in\lbrack a,\sigma_{2})$ such that $\mathcal{W}_{\nu}%
^{u}(\sigma,N)$ is tangent to $c_{l}(\sigma)$, which implies that
$\mathcal{W}_{\nu}^{u}(\sigma_{t},N+1)$ is tangent to $W^{s}(\boldsymbol{p}%
(\sigma_{t}))$. Then, as $\sigma$ is increased, there must be a last value,
$\sigma_{\ast}$, with $\sigma_{t}\leq\sigma_{\ast}<\sigma_{2}$, such that
$\mathcal{W}_{\nu}^{u}(\sigma,N)$ crosses $c_{l}(\sigma)$ into the interior of
$Z(\sigma)$, implying that $Q_{l}(\sigma,N+1):=$ $f_{\sigma}%
^{N+1}(Q_{-}(\sigma;\nu))$, where $Q_{-}(\sigma;\nu):=Q(\sigma;\nu
)\cap\overline{K_{-}(\sigma)}$, which contains a portion of $\mathcal{W}_{\nu
}^{u}(\sigma,N+1)$, actually crosses over $Q(\sigma;\nu)$ into $K_{+}(\sigma)$
for all sufficiently small $\nu>0$ whenever $\sigma_{\ast}<\sigma$. As a
consequence of this situation, called an $\times$\emph{-crossing}, the desired
result follows from the attracting horseshoe theorem in \cite{JB}, which
incorporates the geometric chaos and fractal set arguments of
Birkhoff--Moser--Smale theory (\emph{cf}. \cite{Arr,Moser,PdM,Rob,Smale,NDS}).
In particular, there is an attracting horseshoe in $f_{\sigma}^{N+1}(Q)$,
depicted in Fig. \ref{Evol-1c}, which persists as long as $\sigma_{\ast}%
<\sigma<b_{1}$, where $\sigma=b_{1}$ is the first parameter value at which the
unstable manifold has a tangent intersection with the stable manifold in
$Z(\sigma)$.
\end{proof}
\end{thm}
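A standard numerical companion to Theorem 1 is an estimate of the largest Lyapunov exponent along an orbit of \eqref{e2}, obtained by pushing a tangent vector through the Jacobian with renormalisation at each step; persistently positive estimates over a parameter window are consistent with (though do not by themselves prove) the presence of a chaotic strange attractor. A minimal sketch, with illustrative $\beta$, $\mu$ and a test value of $\sigma$ that are assumptions rather than the paper's figure parameters:

```python
import numpy as np

beta, mu = np.pi / 3, 0.9   # illustrative choices
rootpi = np.sqrt(np.pi)
Psi   = lambda x: (np.cos(beta) * np.sin(3 * x) + np.sin(beta) * np.sin(5 * x)) / rootpi
dPsi  = lambda x: (3 * np.cos(beta) * np.cos(3 * x) + 5 * np.sin(beta) * np.cos(5 * x)) / rootpi
d2Psi = lambda x: (-9 * np.cos(beta) * np.sin(3 * x) - 25 * np.sin(beta) * np.sin(5 * x)) / rootpi

def step(x, y, C):
    return x - C * dPsi(x) * y, mu * (y + Psi(x))

def jac(x, y, C):
    # Exact Jacobian of the map F in (2).
    return np.array([[1 - C * d2Psi(x) * y, -C * dPsi(x)],
                     [mu * dPsi(x),          mu          ]])

def lyap(C, n=20000, burn=1000):
    """Largest Lyapunov exponent estimate via tangent-vector renormalisation."""
    x, y = 0.1, 0.0
    v = np.array([1.0, 0.0])
    total = 0.0
    for k in range(n):
        if k >= burn:               # skip the transient
            v = jac(x, y, C) @ v
            r = np.linalg.norm(v)
            total += np.log(r)
            v /= r
        x, y = step(x, y, C)
    return total / (n - burn)

print(lyap(0.55))   # positive values indicate sensitive dependence
```

Sweeping $C$ through the interval where the figures show the attractor persisting gives a rough numerical delineation of the window $\sigma_{\ast}<\sigma<b_{1}$.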
The proof of Theorem 1 actually yields more information than provided in the
statement; for example, it explains the \textquotedblleft blinking
effect\textquotedblright\ observed for the attractor as $\sigma:=C$ increases.
This is caused by a sequence of tangent intersections between the stable and
unstable manifolds of the saddle point $\boldsymbol{p}(\sigma)$ - each a
bifurcation value in its own right - as described by Newhouse \cite{New},
whose analysis proves the following result.
\begin{corollary}
Let $f$ be as in Theorem
$1$. Then the bifurcation value $\sigma_{\ast}$ precipitating the
creation of a robust stable chaotic strange attractor for $\sigma_{\ast
}<\sigma<b_{1}$ is preceded by a sequence of bifurcation values
$\sigma_{t}<\sigma_{t(1)}<\sigma_{t(2)}<\cdots<\sigma_{\ast}$
corresponding to successive tangent intersections between the stable and
unstable manifolds of the saddle point as $\sigma$ increases.
\end{corollary}
\begin{figure}[htbp]
\centering
\begin{subfigure}{0.32\textwidth}
\includegraphics[width = \textwidth]{Evol-A.jpg}
\caption{$\sigma<\sigma_{t}$}\label{Evol-1a}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width = \textwidth]{Evol-B.jpg}
\caption{$\sigma=\sigma_{t}$}\label{Evol-1b}
\end{subfigure}
\begin{subfigure}{0.32\textwidth}
\includegraphics[width = \textwidth]{Evol-C.jpg}
\caption{$\sigma_{*}<\sigma<b_1$}
\label{Evol-1c}
\end{subfigure}
\caption{Geometric evolution of the homoclinic type-1 bifurcation.}
\label{Fig: Evol-1}
\end{figure}
\section{Heteroclinic-Homoclinic Bifurcations of Type 1 in the Plane}
Next, we consider a combined heteroclinic and homoclinic variant of the
bifurcation in Section 2, illustrated in Fig. \ref{Top2}, that involves a pair of
saddle points, which we set up first with the appropriate analogs of
properties (A1) and (A2).
\begin{itemize}
\item[(B1)] $f\in C^{1}:=C^{1}\left( \mathbb{R}^{2}\times\mathbb{[}%
a,b),\mathbb{R}^{2}\right) $ and $f_{\sigma}$ has a pair of saddle points
$\boldsymbol{p}(\sigma):=(\hat{x}(\sigma),\hat{y}(\sigma))$ and
$\boldsymbol{q}(\sigma):=(\breve{x}(\sigma),\breve{y}(\sigma))$, which are
such that: (i) $\boldsymbol{p}(\sigma)$ is in the first quadrant and
$\boldsymbol{q}(\sigma)$ is in the third quadrant of $\mathbb{R}^{2}$ for all
$a\leq\sigma<b$; (ii) there are real constants $\kappa_{s}$, $\kappa_{u}$ such
that the eigenvalues $\hat{\lambda}^{s}(\sigma)$ and $\hat{\lambda}^{u}%
(\sigma)$ of $f_{\sigma}^{\prime}\left( \boldsymbol{p}(\sigma)\right) $ and
$\tilde{\lambda}^{s}(\sigma)$ and $\tilde{\lambda}^{u}(\sigma)$ of $f_{\sigma
}^{\prime}\left( \boldsymbol{q}(\sigma)\right) $ satisfy $0<\hat{\lambda
}^{s}(\sigma),\tilde{\lambda}^{s}(\sigma)\leq\kappa_{s}<1<\kappa_{u}\leq
\hat{\lambda}^{u}(\sigma),\tilde{\lambda}^{u}(\sigma)$ for all $a\leq\sigma
<b$; (iii) the eigenvectors corresponding to $\hat{\lambda}^{u}(a)$ and
$\tilde{\lambda}^{u}(a)$ are parallel to the $x$-axis; (iv) there is a
vertical strip of the form
\begin{equation*}
S_{\alpha}:=\{\boldsymbol{x}\in\ \mathbb{R}^{2}:\breve{x}(a)-\alpha<\left\vert x\right\vert <\hat{x}(a)+\alpha\}
\end{equation*}
for some $\alpha>0$ such that the stable manifolds $W^{s}(\boldsymbol{p}%
(\sigma)),W^{s}(\boldsymbol{q}(\sigma))\subset S_{\alpha}$ for all
$a\leq\sigma<b$ and separates the plane into left, middle and right components
denoted as $K_{-}(\sigma)$, $K(\sigma)$ and $K_{+}(\sigma)$, respectively; and
(v) there is a $\beta>0$ such that
\begin{equation*}
\boldsymbol{p},\boldsymbol{q}\in R(\alpha,\beta):=\{(x,y)\in
\ \mathbb{R}^{2}:\breve{x}(a)-\alpha<\left\vert x\right\vert <\hat
{x}(a)+\alpha,\,\breve{y}(a)-\beta<\left\vert y\right\vert <\hat{y}(a)+\beta\}
\end{equation*}
for all $\sigma\in\lbrack a,b)$.
\item[(B2)] There exist $a<\sigma_{1}<\sigma_{2}<b$ such that the following
obtain: (i) there is a (positively) $f_{\sigma}$-invariant attracting tubular
neighborhood $T(\sigma)$ of a $C^{1}$ closed Jordan curve $C(\sigma)$ for all
$\sigma\in\lbrack a,\sigma_{2})$; (ii) $f_{\sigma}$ restricted to $T(\sigma)$
has a positive (counterclockwise) rotation number for all $\sigma\in\lbrack
a,\sigma_{2})$; (iii) the closed curve can be chosen so that \emph{centerset}
\begin{equation*}
\mathfrak{C}(\sigma):=
{\displaystyle\bigcap\nolimits_{n=1}^{\infty}}
f_{\sigma}^{n}\left( T(\sigma)\right) =C(\sigma)
\end{equation*}
for all $\sigma\in\lbrack a,\sigma_{1})$; (iv) $T(\sigma)\subset K(\sigma)\cap
R(\alpha,\beta)$ for all $\sigma\in\lbrack a,\sigma_{2})$; (v) $T(\sigma)$ has
an open basin of attraction, denoted as $\mathfrak{\mathring{B}}_{T}(\sigma)$,
containing $\left( K(\sigma)\cap R(\alpha,\beta)\cap\mathcal{C}%
_{\mathrm{ext}}(\sigma)\right) \smallsetminus\left( Z(\sigma)\cup\breve
{Z}(\sigma)\right) $ for all $\sigma\in\lbrack a,\sigma_{2})$, where
$\mathcal{C}_{\mathrm{ext}}(\sigma)$ is the exterior of the centerset and
$Z(\sigma)$ and $\breve{Z}(\sigma)$ are sets defined as follows: For
$Z(\sigma)$ there is for every $\sigma\in\lbrack a,b)$ an orientation
preserving $C^{1}$-diffeomorphism $\Phi_{\sigma}$ of an open neighborhood
$W_{\sigma}$ of a portion of $W^{s}(\boldsymbol{p}(\sigma))$ below
$\boldsymbol{p}(\sigma)$ such that
\begin{equation*}
\Phi_{\sigma}\left( Z(\sigma)\right) :=\left\{ (\xi,\eta):-1\leq\xi
\leq0,\eta\leq\varphi(\xi)\right\} ,
\end{equation*}
where $\varphi:[-1,1]\rightarrow(-1,0]$ is a $C^{1}$ function such that
$\varphi(0)=0$ and $\varphi^{\prime}(\xi)>0$ when $\xi<0$. Moreover, the
right-hand bounding curve $c_{r}(\sigma):=\Phi_{\sigma}^{-1}\left( \left\{
(0,\eta):\varphi(-1)\leq\eta\leq0\right\} \right) $ and the $f_{\sigma}$
image of left-hand bounding curve $c_{l}(\sigma):=\Phi_{\sigma}^{-1}\left(
\left\{ \left( \xi,\varphi(\xi)\right) :-1\leq\xi\leq0\right\} \right) $
lie in $W^{s}(\boldsymbol{p}(\sigma))$, while $f_{\sigma}$ maps the interior
of $Z(\sigma)$ into $K_{+}(\sigma)$. Analogously, there is a sectorial region
$\breve{Z}(\sigma)$ (shown in Fig. \ref{Top2}) with vertex on $W^{s}(\boldsymbol{q}%
(\sigma))$ above $\boldsymbol{q}(\sigma)$ and interior above the vertex, such
that its (boundary) edges $\breve{c}_{r}(\sigma)$ and $\breve{c}_{l}(\sigma)$
lie in $K(\sigma)$ and on $W^{s}(\boldsymbol{q}(\sigma))$, respectively.
Moreover, $f_{\sigma}\left( \breve{c}_{r}(\sigma)\right) \subset
W^{s}(\boldsymbol{q}(\sigma))$ and $f_{\sigma}$ maps the interior of
$\breve{Z}(\sigma)$ into $K_{-}(\sigma).$
\end{itemize}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\textwidth]{Topog2.jpg}\caption{Topography of the
heteroclinic-homoclinic type 1 bifurcation}%
\label{Top2}%
\end{figure}
We have now assembled the basic elements needed to formulate our main result
for heteroclinic-homoclinic bifurcations of type 1, of which there are several
variations. In the interest of keeping things as simple as possible, we impose
a symmetry requirement on the interaction of the invariant closed curve with
the slice sets $Z(\sigma)$ and $\breve{Z}(\sigma)$. For the result on the
heteroclinic-homoclinic bifurcation, it is convenient to introduce the
\emph{right unstable manifold}:
\begin{equation*}
W_{r}^{u}(\boldsymbol{q}(\sigma)):=%
{\displaystyle\bigcup\nolimits_{n=1}^{\infty}}
f_{\sigma}^{n}\left( \left\{ \boldsymbol{x}\in W^{u}(\boldsymbol{q}%
(\sigma)):\boldsymbol{x}\in\{\boldsymbol{q}(\sigma)\}\cup K(\sigma)\text{ and
}d\left( \boldsymbol{x},\boldsymbol{q}(\sigma)\right) \leq\nu\text{ for some }\nu>0\right\} \right) ,
\end{equation*}
which is naturally the analog of the left unstable manifold of Theorem 1 and
in the present context would be generated by a portion of the component of
the unstable manifold of $\boldsymbol{q}(\sigma)$ in $K(\sigma)$ (rather than
in what was defined as $K_{-}(\sigma)$ for Theorem 1).
\begin{thm}
Let $f:\mathbb{R}^{2}\times\lbrack
a,b)\rightarrow\mathbb{R}^{2}$ satisfy (B1) and (B2) and
the additional property:
\begin{itemize}
\item[(B3)] The distance $\Delta(\sigma):=\mathrm{dist}\left(
\mathfrak{C}(\sigma),Z(\sigma)\right) :=\inf\left\{ \left\vert
\boldsymbol{x}-\boldsymbol{y}\right\vert :(\boldsymbol{x},\boldsymbol{y}%
)\in\mathfrak{C}(\sigma)\times Z(\sigma)\right\} =\mathrm{dist}\left(
\mathfrak{C}(\sigma),\breve{Z}(\sigma)\right) :=\inf\left\{ \left\vert
\boldsymbol{x}-\boldsymbol{y}\right\vert :(\boldsymbol{x},\boldsymbol{y}%
)\in\mathfrak{C}(\sigma)\times\breve{Z}(\sigma)\right\} $ satisfies
$\Delta(a)>0$ and is a nonincreasing function of $\sigma$ on
$[a,\sigma_{\#})\cup\lbrack\sigma_{2},\sigma_{3})$ and $\lim
_{\sigma\uparrow\sigma_{3}}\Delta(\sigma)=0$.
\end{itemize}
Then, there are $a<\sigma_{\#}\leq\sigma_{t}^{(1)}\leq
\sigma_{2}\leq\sigma_{\ast}^{(1)}<\sigma_{t}^{(2)}<\sigma_{\ast}^{(2)}%
<\sigma_{3}<b_{1}<1$, where $\sigma_{t}^{(1)}$ and $\sigma_{t}^{(2)}$ are the
first values of $\sigma$ for which there are tangent intersections between
$W^{u}(\boldsymbol{p}(\sigma))$ and $W^{s}(\boldsymbol{q}(\sigma))$ $($and
$W^{u}(\boldsymbol{q}(\sigma))$ and $W^{s}(\boldsymbol{p}(\sigma)))$, and
between $W^{u}(\boldsymbol{p}(\sigma))$ and $W^{s}(\boldsymbol{p}(\sigma))$
$($and $W^{u}(\boldsymbol{q}(\sigma))$ and $W^{s}(\boldsymbol{q}(\sigma)))$,
respectively. On the other hand, $\sigma_{\ast}^{(1)}$ and $\sigma_{\ast
}^{(2)}$ are the last values of $\sigma$ where there are tangent intersections
between $W^{u}(\boldsymbol{p}(\sigma))$ and $W^{s}(\boldsymbol{q}(\sigma))$
$($and $W^{u}(\boldsymbol{q}(\sigma))$ and $W^{s}(\boldsymbol{p}(\sigma)))$,
and between $W^{u}(\boldsymbol{p}(\sigma))$ and $W^{s}(\boldsymbol{p}%
(\sigma))$ $($and $W^{u}(\boldsymbol{q}(\sigma))$ and $W^{s}(\boldsymbol{q}%
(\sigma)))$, respectively. Then, for all $\sigma_{\ast}^{(1)}<\sigma<b_{1}$,
$f_{\sigma}$ has a chaotic strange attractor $\mathfrak{A}(\sigma
)$, which is the union of the closures of the left and right unstable
manifolds: namely,
\begin{equation}
\mathfrak{A}(\sigma):=\overline{W_{l}^{u}(\boldsymbol{p}(\sigma))}%
\cup\overline{W_{r}^{u}(\boldsymbol{q}(\sigma))}. \label{e4}%
\end{equation}
\begin{proof}
As in the proof of Theorem 1, we begin by focusing on a
tubular type strip $Q=Q(\sigma;\nu)$ for $W^{s}(\boldsymbol{p}(\sigma))$
(excluding $c_{l}(\sigma)$) cut off just above $\boldsymbol{p}(\sigma)$ and
below at the lower edge of $R(\alpha,\beta)$ (as shown in Fig. \ref{Top1}), but we also
introduce an analogous strip $\tilde{Q}=\tilde{Q}(\sigma;\nu)$ for
$W^{s}(\boldsymbol{q}(\sigma))$ with the understanding that the dimensions for
both strips can be taken to be as small as suits our purposes in what follows.
In particular, we take the width of the strips to be $2\nu$ and the top edges
to be parallel to and at a distance of $\nu$ above $W^{u}(\boldsymbol{p}%
(\sigma))$ and below $W^{u}(\boldsymbol{q}(\sigma))$, respectively, where
$\nu>0$. Again mimicking the proof of Theorem 1, but with twin objects for the
saddle points $\boldsymbol{p}$ and $\boldsymbol{q}$, it is convenient to
define $W_{\nu}^{u}(\sigma)$ and $\tilde{W}_{\nu}^{u}(\sigma)$ to be the
closed segments, with interiors in $K(\sigma)$ as shown in Fig. \ref{Top2},
respectively, of the unstable manifold from $\boldsymbol{p}(\sigma)$ to the
boundary of $Q$ and the unstable manifold from $\boldsymbol{q}(\sigma)$ to the
boundary of $\tilde{Q}$. In addition, we use $s_{\sigma}:=s_{\sigma
}(\boldsymbol{x})=s(\boldsymbol{x})$ and $\tilde{s}_{\sigma}:=\tilde
{s}_{\sigma}(\boldsymbol{x})=\tilde{s}(\boldsymbol{x})$ to denote the
arclengths from the fixed point $\boldsymbol{p}$ to any point $\boldsymbol{x}%
\in W_{\nu}^{u}(\sigma)$ and fixed point $\boldsymbol{q}$ to any point
$\boldsymbol{x}\in\tilde{W}_{\nu}^{u}(\sigma)$ or any of its $f_{\sigma}%
$-iterates. We also define for every nonnegative integer $n$ the
\emph{endpoints} $e_{\sigma}^{n}$ and $\tilde{e}_{\sigma}^{n}$ to be the
boundary points of the $C^{1}$-submanifolds (with boundary) $\mathcal{W}%
_{\sigma}^{u}(\sigma,n):=f_{\sigma}^{n}(W_{\nu}^{u}(\sigma))$ and
$\mathcal{\tilde{W}}_{\sigma}^{u}(\sigma,n):=f_{\sigma}^{n}(\tilde{W}_{\nu
}^{u}(\sigma))$ for which $s_{\sigma}$ and $\tilde{s}_{\sigma}$ are positive
and actually increase without bound as $n\rightarrow\infty$ as long as
$\sigma<\sigma_{\#}$.
As in the proof of Theorem 1, the details are best described and understood
with the aid of figures illustrating the evolution of the dynamics and
corresponding bifurcations such as in Figs. \ref{Top1} and \ref{Fig: Evol-1} for the homoclinic type
bifurcations. Although we only present the basic geometry for the case at hand
in Fig. \ref{Top2}, the analogous sequence of figures representing the evolution of
the heteroclinic-homoclinic bifurcations can easily be envisaged by comparison
with Figs. \ref{Top1} and \ref{Fig: Evol-1}, and we shall rely on this, along with an understanding of the
proof of Theorem 1, in what follows.
The hypotheses (B1)-(B3) imply that for any $\nu>0$ there are positive
integers $N^{(1)}=N_{\nu}^{(1)}(\sigma)<N^{(2)}=N_{\nu}^{(2)}(\sigma)$ such
that the following properties obtain:
\begin{itemize}
\item[(i)] There is a first $\sigma=\sigma_{t}^{(1)}\in\lbrack a,\sigma_{2})$
such that $\mathcal{W}_{\nu}^{u}(\sigma,N^{(1)})$ is tangent to $\breve{c}%
_{r}(\sigma)$ and $\mathcal{\tilde{W}}_{\nu}^{u}(\sigma,N^{(1)})$ is tangent
to $c_{l}(\sigma)$. Consequently, it follows from the definition of the slice
regions that $\mathcal{W}_{\nu}^{u}(\sigma_{t}^{(1)},N^{(1)}+1)$ is tangent to
$W^{s}(\boldsymbol{q}(\sigma_{t}^{(1)}))$ and $\mathcal{\tilde{W}}_{\nu}%
^{u}(\sigma_{t}^{(1)},N^{(1)}+1)$ is tangent to $W^{s}(\boldsymbol{p}%
(\sigma_{t}^{(1)}))$.
\item[(ii)] Moreover, there is a first $\sigma=\sigma_{t}^{(2)}\in\lbrack
a,\sigma_{2})$, with $\sigma_{t}^{(1)}<\sigma_{t}^{(2)}$, such that
$\mathcal{W}_{\nu}^{u}(\sigma,N^{(2)})$ is tangent to $c_{l}(\sigma)$ and
$\mathcal{\tilde{W}}_{\nu}^{u}(\sigma,N^{(2)})$ is tangent to $\breve{c}%
_{r}(\sigma).$ Accordingly, the characterization of the slice regions then
implies that $\mathcal{W}_{\nu}^{u}(\sigma_{t}^{(2)},N^{(2)}+1)$ is tangent to
$W^{s}(\boldsymbol{p}(\sigma_{t}^{(2)}))$ and $\mathcal{\tilde{W}}_{\nu}%
^{u}(\sigma_{t}^{(2)},N^{(2)}+1)$ is tangent to $W^{s}(\boldsymbol{q}%
(\sigma_{t}^{(2)}))$.
\end{itemize}
Therefore, we conclude from (i) that as $\sigma$ is increased, there
must be a last value, $\sigma_{\ast}^{(1)}$, with $\sigma_{t}^{(1)}\leq
\sigma_{\ast}^{(1)}<\sigma_{2}$, such that $\mathcal{W}_{\nu}^{u}%
(\sigma,N^{(1)})$ crosses $\breve{c}_{r}(\sigma)$ into the interior of
$\breve{Z}(\sigma)$, implying that $Q_{l,\sigma}(\sigma,N^{(1)}+1):=$
$f_{\sigma}^{N^{(1)}+1}(Q_{c}(\sigma;\nu))$, where $Q_{c}(\sigma
;\nu):=Q(\sigma;\nu)\cap\overline{K(\sigma)}$, which contains a portion of
$\mathcal{W}_{\nu}^{u}(\sigma,N^{(1)}+1)$, actually crosses over $\tilde
{Q}(\sigma;\nu)$ into $K_{-}(\sigma)$ for all sufficiently small $\nu>0$
whenever $\sigma_{\ast}^{(1)}<\sigma$. Furthermore, $\mathcal{\tilde{W}}_{\nu
}^{u}(\sigma,N^{(1)})$ crosses $c_{l}(\sigma)$ into the interior of
$Z(\sigma)$, implying that $\tilde{Q}_{l,\sigma}(\sigma,N^{(1)}+1):=$
$f_{\sigma}^{N^{(1)}+1}(\tilde{Q}_{c}(\sigma;\nu))$, where $\tilde{Q}%
_{c}(\sigma;\nu):=\tilde{Q}(\sigma;\nu)\cap\overline{K(\sigma)}$, which
contains a portion of $\mathcal{\tilde{W}}_{\nu}^{u}(\sigma,N^{(1)}+1)$,
actually crosses over $Q(\sigma;\nu)$ into $K_{+}(\sigma)$ for all
sufficiently small $\nu>0$ whenever $\sigma_{\ast}^{(1)}<\sigma$.
Consequently, for such values of the parameter $\sigma$ we have a transverse
heteroclinic 2-cycle connecting the saddle points $\boldsymbol{p}(\sigma)$ and $\boldsymbol{q}(\sigma)$,
so it follows from the work of Bertozzi \cite{Bert} (see also \cite{NDS}) that
one has a chaotic strange attractor of the form \eqref{e4} over the open
parameter interval $(\sigma_{\ast}^{(1)},\sigma_{t}^{(2)}).$
When the parameter value increases to $\sigma_{t}^{(2)}$, it follows from
(ii) that we get a small Newhouse type bifurcation \cite{New}, which amounts
to a \textquotedblleft blinking effect\textquotedblright\ followed by possibly more tangent intersections of
$W^{u}(\boldsymbol{p}(\sigma))$ and $W^{s}(\boldsymbol{p}(\sigma))$ and
$W^{u}(\boldsymbol{q}(\sigma))$ and $W^{s}(\boldsymbol{q}(\sigma))$. Moreover,
there is a last such tangent intersection of these unstable and stable
manifolds, beyond which we have the geometric chaos and fractal set
arguments of Birkhoff--Moser--Smale theory that imply the existence of the
chaotic strange attractor defined by \eqref{e4}, amounting to a two-fold
version of the result for a single saddle point in Theorem 1. In particular,
there is a two-fold manifestation of the attracting horseshoe in $f_{\sigma
}^{N+1}(Q)$ depicted in Fig. \ref{Evol-1c}, with a copy at each of the saddle points. In
a manner completely analogous to the homoclinic bifurcation, this (double)
horseshoe bifurcation persists as long as $\sigma_{\ast}^{(2)}<\sigma<b_{1}$, where
$\sigma=b_{1}$ is the first parameter value where the unstable manifolds
$W^{u}(\boldsymbol{p}(\sigma))$ and $W^{u}(\boldsymbol{q}(\sigma))$ have
tangent intersections with the stable manifolds $W^{s}(\boldsymbol{p}%
(\sigma))$ and $W^{s}(\boldsymbol{q}(\sigma))$ in $Z(\sigma)$ and $\breve
{Z}(\sigma)$, respectively. Finally, by combining the heteroclinic and
homoclinic parts of the evolution of bifurcations, we obtain the desired
result, thereby completing the proof.
\end{proof}
\end{thm}
As in the case of Theorem 1, the proof of Theorem 2 yields more information
than provided in the statement of the result. For example, there are compound
\textquotedblleft blinking effects\textquotedblright\ observed for the
attractor as $\sigma:=C$ increases caused by sequences of tangent
intersections: first the heteroclinic tangencies between $W^{u}(\boldsymbol{p}%
(\sigma))$ and $W^{s}(\boldsymbol{q}(\sigma))$ together with those between
$W^{u}(\boldsymbol{q}(\sigma))$ and $W^{s}(\boldsymbol{p}(\sigma))$, followed
by the homoclinic tangencies between the stable and unstable manifolds of
$\boldsymbol{p}(\sigma)$ together with those of $\boldsymbol{q}(\sigma)$.
Hence, the analysis in \cite{New} together with the proof of Theorem 2 leads
directly to a verification of the following analog of Corollary 1.
\begin{corollary}
Let $f$ be as in Theorem
$2$. Then the bifurcation values $\sigma_{\ast}^{(1)}$ and
$\sigma_{\ast}^{(2)}$ precipitating the creation of robust stable
chaotic strange attractors for $\sigma_{\ast}^{(1)}<\sigma<\sigma_{t}^{(2)}$
and $\sigma_{\ast}^{(2)}<\sigma<b_{1}$ resulting from
heteroclinic and homoclinic interactions, respectively, are preceded by
sequences of bifurcation values $\sigma_{t}^{(1)}<\sigma_{t(1)}^{(1)}%
<\sigma_{t(2)}^{(1)}<\cdots<\sigma_{\ast}^{(1)}$ and $\sigma_{t}%
^{(2)}<\sigma_{t(1)}^{(2)}<\sigma_{t(2)}^{(2)}<\cdots<\sigma_{\ast}^{(2)}$
corresponding to successive tangent intersections between the stable and
unstable manifolds $W^{s}\left( \boldsymbol{p}(\sigma)\right) $ and
$W^{u}\left( \boldsymbol{q}(\sigma)\right) $ (together with $W^{s}\left(
\boldsymbol{q}(\sigma)\right) $ and $W^{u}\left( \boldsymbol{p}%
(\sigma)\right) $), and between $W^{s}\left( \boldsymbol{p}(\sigma)\right)$
and $W^{u}\left( \boldsymbol{p}(\sigma)\right) $ (together with
$W^{s}\left( \boldsymbol{q}(\sigma)\right) $ and $W^{u}\left(
\boldsymbol{q}(\sigma)\right) $), as $\sigma$ increases.
\end{corollary}
It is worth mentioning that the heteroclinic-homoclinic bifurcation is apt to
experience dynamical crises for smaller parameter values than the homoclinic
type because there are two unstable manifolds, rather than just one, capable
of interacting with each stable manifold.
\section{Some Higher-Dimensional Generalizations}
There is an extensive array of possible generalizations of Theorems 1 and 2,
some of which might be quite difficult to realize in terms of discrete dynamical
systems in three or more dimensions. For example, suppose that instead of an
invariant attracting closed curve, we have an attracting invariant torus in
$\mathbb{R}^{3}$ on which the restricted dynamics is ergodic. In addition,
assume there is a single saddle point with a 2-dimensional stable manifold and
the analog of a slice set to which the torus tends as the parameter of
choice increases. Then one would expect an extremely interesting and complex
analog of the 2-dimensional bifurcation described in Theorem 1. However,
finding a relatively simple smooth map of $\mathbb{R}^{3}$ satisfying these
properties is rather difficult, so we shall confine ourselves to a couple of
much simpler examples.
\subsection{A simple 3-dimensional generalization}
The first generalization is a more or less trivial extension of Gilet's planar
map; namely%
\begin{equation}
E(x,y,z;\sigma):=\left( x-\sigma\Psi^{\prime}(x)y,\mu(y+\Psi(x)),0.8z\right)
. \label{e5}%
\end{equation}
To see this, we note that the fixed points of $E$ are $(x_{\ast},y_{\ast},0)$,
where $(x_{\ast},y_{\ast})$ are the fixed points of Gilet's planar map
\eqref{e2} and that the $x$- and $y$-coordinate maps are independent of $z$.
Therefore, the fixed points comprise denumerably many hyperbolic points of the
form $\left( x_{k},\mu(1-\mu)^{-1}\Psi(x_{k}),0\right) $, with $\Psi
^{\prime}(x_{k})=0$, each having a 2-dimensional stable and 1-dimensional
unstable manifold, together with a denumerable set of hyperbolic fixed points
$\left( \tilde{x}_{m},0,0\right) $, with $\Psi(\tilde{x}_{m})=0$, each of which has
a 3-dimensional stable manifold that bifurcates into a 1-dimensional stable
manifold with a 2-dimensional unstable manifold as the parameter $\sigma$ increases.
These latter fixed points, associated with the Neimark--Sacker bifurcations of
Gilet's map, are on invariant lines $x=\tilde{x}_{m},y=0$, which bifurcate into
invariant attracting cylinders of the form $C(\sigma)\times\mathbb{R}$, where
the curves $C(\sigma)$ are as defined in Theorem 1. A simulation of $E$ for
increasing $\sigma$ is shown in Fig. \ref{gen1}.
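A minimal numerical sketch of the preceding observations: the $(x,y)$-coordinate maps of \eqref{e5} are independent of $z$, and the $z$-coordinate contracts geometrically, so every orbit is attracted to the invariant plane $z=0$. The profile $\Psi$ and the parameter values below are placeholders chosen purely for illustration (Gilet's actual profile is not reproduced in this excerpt).

```python
import math

MU, SIGMA = 0.9, 0.5            # illustrative (hypothetical) parameter values

def psi(x):                     # placeholder wave profile, NOT Gilet's Psi
    return math.cos(2.0 * x)

def dpsi(x):
    return -2.0 * math.sin(2.0 * x)

def E(x, y, z):
    """One step of the 3-d extension (e5); the z-map is the contraction 0.8 z."""
    return (x - SIGMA * dpsi(x) * y, MU * (y + psi(x)), 0.8 * z)

# Two initial points that differ only in z: the (x, y)-trajectories coincide,
# while both z-coordinates decay like 0.8**n toward the invariant plane z = 0.
p1, p2 = (0.3, 0.1, 1.0), (0.3, 0.1, -5.0)
for _ in range(60):
    p1, p2 = E(*p1), E(*p2)
```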
\begin{figure}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{Evol_E1.jpg}
\includegraphics[width=0.32\textwidth]{Evol_E2.jpg}
\includegraphics[width=0.32\textwidth]{Evol_E3.jpg}
\caption{Bifurcation evolution for the map $E$ \eqref{e5}.}%
\label{gen1}%
\end{figure}
\subsection{A more complex 3-dimensional generalization}
We can create a more interesting extension of Gilet's map by adding a
dependence of the $z$-coordinate map on the other variables; for example, as
in
\begin{equation}
\hat{E}(x,y,z;\sigma):=\left( x-\sigma\Psi^{\prime}(x)y,\mu(y+\Psi
(x)),0.8z+0.1\sin^{2}(z+\Psi^{\prime}(x))\right) . \label{e6}%
\end{equation}
It is easy to verify that the fixed points of \eqref{e6} are as follows: There
are, as for \eqref{e5}, denumerably many fixed points of the type $\left(
x_{k},\mu(1-\mu)^{-1}\Psi(x_{k}),0\right) $, with $\Psi^{\prime}(x_{k})=0$,
each having a 2-dimensional stable and 1-dimensional unstable manifold. There
are also denumerably many fixed points of the form $\left( \tilde{x}%
_{m},0,z_{m}\right) $, with $\Psi(\tilde{x}_{m})=0$ and $z_{m}$ the unique solution
of $2z_{m}=\sin^{2}\left( z_{m}+\Psi^{\prime}(\tilde{x}_{m})\right) $. These latter
fixed points start as sinks and bifurcate into hyperbolic fixed points each
with a 2-dimensional unstable manifold and a 1-dimensional stable manifold
parallel to the $z$-axis. Moreover, the cylinders $C(\sigma)\times\mathbb{R}$
are attracting and invariant for sufficiently large values of the parameter
$\sigma$. The main difference between this extension and \eqref{e5} is that
the unstable manifolds for the fixed points $\left( x_{k},\mu(1-\mu)^{-1}%
\Psi(x_{k}),0\right) $ do not remain in the $x,y$-plane as they wrap around
the cylinder $C(\sigma)\times\mathbb{R}$, which leads to the more complex
bifurcation behavior shown in Fig. \ref{gen2}.
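The uniqueness of $z_{m}$ can be seen directly: $g(z):=2z-\sin^{2}(z+c)$, with $c:=\Psi^{\prime}(\tilde{x}_{m})$, satisfies $g^{\prime}(z)=2-\sin\left(2(z+c)\right)\geq 1$, so $g$ is strictly increasing; and since $0\leq\sin^{2}\leq 1$, the root lies in $[0,1/2]$. A bisection sketch (the value of $c$ below is illustrative, not computed from Gilet's profile):

```python
import math

def z_fixed(c, tol=1e-12):
    """Unique root of g(z) = 2z - sin^2(z + c); g is strictly increasing
    since g'(z) = 2 - sin(2(z + c)) >= 1, and 0 <= sin^2 <= 1 places the
    root in [0, 1/2]."""
    g = lambda z: 2.0 * z - math.sin(z + c) ** 2
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) <= 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c = 0.7                         # illustrative value of Psi'(x_m)
zm = z_fixed(c)
# zm is then a fixed point of the z-coordinate map z -> 0.8 z + 0.1 sin^2(z + c):
residual = abs(zm - (0.8 * zm + 0.1 * math.sin(zm + c) ** 2))
```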
\begin{figure}[htbp]
\centering
\includegraphics[width=0.49\textwidth]{Evol_Ehat1}
\includegraphics[width=0.49\textwidth]{Evol_Ehat2}
\includegraphics[width=0.49\textwidth]{Evol_Ehat3}
\includegraphics[width=0.49\textwidth]{Evol_Ehat4}
\caption{Bifurcation evolution for the map $\hat{E}$ \eqref{e6}.}%
\label{gen2}%
\end{figure}
\section{Examples}
Here we shall present examples of the bifurcation behavior described in
Theorem 1 and Theorem 2, mainly through illustrations based on simulations and
a bit of analysis. The examples, which comprise a homoclinic and
heteroclinic-homoclinic type, are to be planar maps based upon Gilet's model
and a minor modification thereof.
\subsection{Symmetric homoclinic bifurcations for Gilet's map}
It turns out that homoclinic bifurcations of the type described in Theorem 1
can be very effectively illustrated by considering a pair of contiguous
symmetric cells (containing symmetric invariant closed curves with source
point centers along the $x$-axis) for Gilet's map%
\begin{equation}
F(x,y;\sigma):=\left( x-\sigma\Psi^{\prime}(x)y,\mu(y+\Psi(x))\right) ,
\label{e7}%
\end{equation}
where $\mu$ is fixed and $\sigma=C$ is varied. The symmetric cells are chosen
to be the ones symmetric about the stable manifold corresponding to the line
$x\simeq-1.57$, with symmetric center sources (approximately) at $\left(
-1.79,0\right) $ and $\left( -1.35,0\right) $ as shown in Fig. \ref{Fig: Homoclinic}.
It should be noted that, in contrast to the description in Theorem
1, the saddle point on the stable manifold is below rather than above the
symmetric invariant closed attracting curves, which means that the assumptions
remain the same modulo a reflection in the $x$-axis.
Now, it is a straightforward but rather tedious matter to verify that all of
the hypotheses of Theorem 1 hold (modulo the reflection mentioned above), but
all of the assumptions are illustrated quite clearly in Fig. \ref{Fig: Homoclinic}
save the existence of the orientation reversing slice regions. Consequently,
we shall restrict our analysis to the identification of these regions for this
example. A simple calculation shows that the saddle point of interest in this
example is fixed at $\boldsymbol{p}=\left( \hat{x},\mu(1-\mu)^{-1}\Psi
(\hat{x})\right) \simeq\left( -1.57,\mu(1-\mu)^{-1}\Psi(-1.57)\right)
\simeq\left( -1.57,-0.206\mu(1-\mu)^{-1}\right) .$ We also compute that
\begin{equation*}
\det F^{\prime}=\mu\left[ 1-\sigma\left( y\Psi^{\prime\prime}(x)-(\Psi
^{\prime}(x))^{2}\right) \right] ,
\end{equation*}
so, since $\Psi^{\prime\prime}(x)$ is positive $\left(
\simeq\Psi^{\prime\prime}(\hat{x})\right) $ in a thin vertical strip centered
at $x=\hat{x}$, it follows that $\det F^{\prime}$ changes from positive
to negative along the stable manifold as $y$ changes from negative to
positive, which signals a reversal of orientation.
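The displayed determinant can be double-checked numerically against a finite-difference Jacobian. The sketch below uses an illustrative smooth profile and hypothetical parameter values (not those of Gilet's model), and also exhibits the asserted sign change of $\det F^{\prime}$ along a vertical line $x=\hat{x}$ where $\Psi^{\prime}(\hat{x})=0$ and $\Psi^{\prime\prime}(\hat{x})>0$:

```python
import math

def psi(x):                     # placeholder profile (hypothetical); only
    return math.cos(2.0 * x)    # smoothness matters for this check

def dpsi(x):  return -2.0 * math.sin(2.0 * x)
def d2psi(x): return -4.0 * math.cos(2.0 * x)

MU, SIGMA = 0.9, 0.5            # illustrative parameter values

def F(x, y):
    return (x - SIGMA * dpsi(x) * y, MU * (y + psi(x)))

def det_formula(x, y):
    """det F' = mu [1 - sigma (y Psi''(x) - Psi'(x)^2)], as in the text."""
    return MU * (1.0 - SIGMA * (y * d2psi(x) - dpsi(x) ** 2))

def det_numeric(x, y, h=1e-6):
    """Determinant of the Jacobian of F by central finite differences."""
    fxp, fxm = F(x + h, y), F(x - h, y)
    fyp, fym = F(x, y + h), F(x, y - h)
    j11, j21 = (fxp[0] - fxm[0]) / (2 * h), (fxp[1] - fxm[1]) / (2 * h)
    j12, j22 = (fyp[0] - fym[0]) / (2 * h), (fyp[1] - fym[1]) / (2 * h)
    return j11 * j22 - j12 * j21
```

With this profile, $\hat{x}=-\pi/2$ gives $\Psi^{\prime}(\hat{x})=0$ and $\Psi^{\prime\prime}(\hat{x})=4>0$, and one sees the sign of $\det F^{\prime}$ flip as $y$ increases through $1/(\sigma\Psi^{\prime\prime}(\hat{x}))$.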
The change in orientation also produces the slice regions $Z$ described in
(A2). To see this, it follows from symmetry that it is enough to describe the
slice to the right of the stable manifold $x=\hat{x}$. Of course, the left
boundary curve for this slice is just a portion of the vertical line
$x=\hat{x}$ for $y\geq\tilde{y}>0.$ The right-hand boundary curve for the
slice is just
\begin{equation*}
y=\frac{x-\hat{x}}{\sigma\Psi^{\prime}(x)}%
\end{equation*}
for $x>\hat{x}$, which has a cusp at $\left( \hat{x},\tilde{y}\right) $,
where
\begin{equation*}
\tilde{y}=\lim_{x\downarrow\hat{x}}\left( \frac{x-\hat{x}}{\sigma\Psi
^{\prime}(x)}\right) .
\end{equation*}
Now it is straightforward to verify that this slice region has the properties
described in (A2) and (A3) of Theorem 1, as does the symmetric slice to the
left of $x=\hat{x}.$
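By l'H\^opital's rule, when $\Psi^{\prime}(\hat{x})=0$ the cusp height is $\tilde{y}=1/\left(\sigma\Psi^{\prime\prime}(\hat{x})\right)$. A quick numerical confirmation with an illustrative profile for which $\Psi^{\prime}$ vanishes at $\hat{x}=-\pi/2$ (profile and parameter value are hypothetical):

```python
import math

SIGMA = 0.5
XHAT = -math.pi / 2             # with the placeholder Psi(x) = cos(2x):
                                # Psi'(XHAT) = 0 and Psi''(XHAT) = 4 > 0

def dpsi(x):  return -2.0 * math.sin(2.0 * x)
def d2psi(x): return -4.0 * math.cos(2.0 * x)

def boundary(x):
    """Right-hand boundary of the slice: y = (x - xhat) / (sigma Psi'(x))."""
    return (x - XHAT) / (SIGMA * dpsi(x))

# L'Hopital: as x decreases to xhat, y tends to 1 / (sigma Psi''(xhat)).
ytilde = 1.0 / (SIGMA * d2psi(XHAT))
approx = boundary(XHAT + 1e-7)
```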
\begin{figure}[htbp]
\centering
\includegraphics[width = 0.45\textwidth]{C45.jpg}
\includegraphics[width = 0.45\textwidth]{C5.jpg}
\includegraphics[width = 0.45\textwidth]{C6.jpg}
\includegraphics[width = 0.45\textwidth]{C83.jpg}
\caption{Bifurcation evolution for the map $F$ \eqref{e7}.}
\label{Fig: Homoclinic}
\end{figure}
\subsection{Heteroclinic-homoclinic bifurcations for modified Gilet's map}
To illustrate the bifurcations in Theorem 2, we consider the following
modification of Gilet's map: It is the map $\tilde{F}:\mathbb{R}%
^{2}\rightarrow\mathbb{R}^{2}$ defined as follows:%
\begin{equation}
\tilde{F}(x,y;\sigma):=\left\{
\begin{array}
[c]{cc}%
F(x,y;\sigma), & \left\vert x\right\vert \leq0.4\\
\left( x-[1+5(x+0.4)]\sigma\tilde{\Psi}^{\prime}(x)y,\mu(y+\tilde{\Psi
}(x))\right) , & -0.6<x<-0.4\\
\left( x-[1-5(x-0.4)]\sigma\tilde{\Psi}^{\prime}(x)y,\mu(y+\tilde{\Psi
}(x))\right) , & 0.4<x<0.6\\
\left( x,\mu(y+\tilde{\Psi}(x))\right) , & \left\vert x\right\vert \geq0.6
\end{array}
\right. , \label{e8}%
\end{equation}
where
\begin{equation*}
\tilde{\Psi}(x):=\left\{
\begin{array}
[c]{cc}%
\left( 10x+3\right) \Psi(0.4), & -0.6<x<-0.4\\
(10x-3)\Psi(0.4), & 0.4<x<0.6\\
-3\Psi(0.4), & x\leq-0.6\\
3\Psi(0.4), & 0.6\leq x
\end{array}
\right. .
\end{equation*}
We note that the map \eqref{e8} is continuous everywhere and smooth except along
the lines $x=\pm0.4,\pm0.6$. It turns out that the vertical lines along which
it fails to be $C^{\infty}$ do not alter the qualitative nature of the
bifurcation evolution as described in Theorem 2, which is evident from
Fig. \ref{hh1}. Once again, we note that the hypotheses of Theorem 2 can be
readily checked, but all save the existence of the $Z$ slices are clearly
shown in the simulations in Fig. \ref{hh1}. The existence of the orientation
reversing slices $Z(\sigma)$ and $\breve{Z}(\sigma)$ can be verified in a
manner completely analogous to that used in the previous example.
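The continuity of \eqref{e8} at the junction lines can be checked directly from the piecewise formulas. The sketch below uses a placeholder odd profile for $\Psi$ on $|x|\leq 0.4$ (oddness, i.e.\ $\Psi(-0.4)=-\Psi(0.4)$, is what the matching at $x=-0.4$ requires, and is an assumption of this sketch), and verifies the four junctions numerically:

```python
import math

def psi(x):                     # placeholder inner profile; chosen odd so that
    return math.sin(x)          # Psi(-0.4) = -Psi(0.4) (an assumption here)

P04 = psi(0.4)

def psi_tilde(x):
    """The piecewise profile of (e8): Psi on |x| <= 0.4, linear ramps on
    0.4 < |x| < 0.6, constant +-3 Psi(0.4) on |x| >= 0.6."""
    if abs(x) <= 0.4:
        return psi(x)
    if -0.6 < x < -0.4:
        return (10.0 * x + 3.0) * P04
    if 0.4 < x < 0.6:
        return (10.0 * x - 3.0) * P04
    return -3.0 * P04 if x <= -0.6 else 3.0 * P04

# jumps across the junction lines x = -0.6, -0.4, 0.4, 0.6 should vanish
eps = 1e-9
jumps = [abs(psi_tilde(a + eps) - psi_tilde(a - eps))
         for a in (-0.6, -0.4, 0.4, 0.6)]
```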
\begin{figure}[htbp]
\centering
\includegraphics[width=0.32\textwidth]{Evol_Ftilde1.jpg}
\includegraphics[width=0.32\textwidth]{Evol_Ftilde2.jpg}
\includegraphics[width=0.32\textwidth]{Evol_Ftilde3.jpg}
\caption{Bifurcation evolution for the map $\tilde{F}$ \eqref{e8}.}%
\label{hh1}%
\end{figure}
\section{Concluding Remarks}
In our paper \cite{RB1}, we proved that Gilet's walking droplet model
\cite{Gil} develops Neimark--Sacker bifurcations generating invariant
attracting closed Jordan curves (topological circles) as either one of the parameters is
increased. We also saw that the diameter of these circles increases with
either increasing parameter, which ultimately gives rise to new types of
bifurcations arising from the interactions of stable manifolds with unstable
manifolds of saddle points winding around the expanding circles. The
investigation in this paper comprises an in-depth analysis of two variants of
these new bifurcations, one purely homoclinic and the other a combination of
heteroclinic and homoclinic interactions of unstable and stable manifolds of
saddle points, as the original interaction parameter $C$ (identified
with $\sigma$) is varied while the damping parameter $\mu$ is fixed. In
addition to our analysis of these bifurcations, we showed by examples how
these dynamical phenomena can be extended to higher dimensions.
There are several research directions related to this work that we intend to
pursue in the near future. First among these is a related new bifurcation
somewhat like those studied here, but based on diffeomorphisms that do not
include the orientation reversing, noninjective slice regions in Gilet's
model. Naturally, we also intend to study the bifurcations in models of Gilet
type with $C$ fixed and $\mu$ varying, which, for example, appears to exhibit
more striking dynamical crisis behavior than the case studied here. Another
line of research, which has strong connections with the quantum aspects of
pilot waves, is the construction of invariant or approximately invariant
measures for the dynamical models such as Gilet's and is something that we
intend to investigate as part of our continuing investigation of the
mathematical aspects of walking droplet phenomena.
\section*{Acknowledgments}
\noindent Discussions with Anatolij Prykarpatski were very helpful in writing this paper.
\bigskip
% arXiv:1708.07593, "Exotic Bifurcations Inspired by Walking Droplet Dynamics" (math.DS)
% arXiv:math/0512297, "Empty simplices of polytopes and graded Betti numbers"
\section{Introduction}
Let $P \subset \mathbb{R}^d$ be a simplicial $d$-polytope, i.e.\ the $d$-dimensional convex hull
of finitely many points in $\mathbb{R}^d$ such that all its faces are simplices. The simplest
combinatorial invariant of $P$ is its $f$-vector $\underline{f} = (f_{-1}, f_0,\ldots,
f_{d-1})$ where $f_{-1} := 1$ and $f_i$ is the number of $i$-dimensional faces of $P$ if
$i \geq 0$. McMullen conjectured in \cite{McM-70} a characterization of the possible
$f$-vectors. In order to state his conjecture we use an equivalent set of invariants, the
$h$-vector $\underline{h} := (h_0,\ldots, h_s)$. It is defined as the sequence of
coefficients of the polynomial
\[
\sum_{j=0}^s h_j z^j := \sum_{j=0}^d f_{j-1}\cdot z^j (1-z)^{d-j}.
\]
The $f$-vector can be recovered from the $h$-vector because
$$
f_{j-1} = \sum_{i = 0}^j \binom{d-i}{j-i} h_i.
$$
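The passage between $f$- and $h$-vectors is a finite linear substitution that is easy to implement. As a sketch, the octahedron (a simplicial $3$-polytope) has $f$-vector $(1,6,12,8)$ and $h$-vector $(1,3,3,1)$, which also illustrates the Dehn-Sommerville symmetry $h_i=h_{d-i}$:

```python
from math import comb

def h_from_f(f, d):
    """h_k as the z^k coefficient of sum_j f_{j-1} z^j (1-z)^(d-j),
    where f = (f_{-1}, f_0, ..., f_{d-1}) is indexed so f[j] = f_{j-1}."""
    return [sum((-1) ** (k - j) * comb(d - j, k - j) * f[j]
                for j in range(k + 1))
            for k in range(d + 1)]

def f_from_h(h, d):
    """Recovery formula f_{j-1} = sum_i binom(d-i, j-i) h_i."""
    return [sum(comb(d - i, j - i) * h[i] for i in range(j + 1))
            for j in range(d + 1)]

# the octahedron: f-vector (1, 6, 12, 8)
f_oct = [1, 6, 12, 8]
h_oct = h_from_f(f_oct, 3)
```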
Using $h$-vectors we can state McMullen's conjecture which has become a proven statement
by combining the results of Billera and Lee \cite{BL2} and Stanley \cite{stanley2}
(cf.\ also \cite{McM-simple}).
\begin{theorem}[$g$-theorem] \label{g-thm}
A sequence $\underline{h} = (h_0,\dots,h_s)$
of positive integers is the $h$-vector of a simplicial $d$-polytope if and only
if $s = d$ and $\underline{h}$ is an SI-sequence, i.e.\
$\underline{h}$ satisfies:
\begin{itemize}
\item[(i)] (Dehn-Sommerville equations) $h_i = h_{d-i}$ for $i = 0,\ldots,d$;
\item[(ii)] $\underline{g} := (h_0, h_1 - h_0,\ldots,h_{\lfloor
\frac{d}{2} \rfloor} - h_{\lfloor \frac{d}{2} \rfloor - 1})$ is an
O-sequence.
\end{itemize}
\end{theorem}
Being an O-sequence is a purely numerical condition (cf.\ Section \ref{sec-Betti}). Note
that O-sequences are precisely the Hilbert functions of Artinian standard graded
$K$-algebras.
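Macaulay's criterion (not restated in this excerpt) makes the O-sequence condition effectively checkable: writing $n=\binom{a_i}{i}+\binom{a_{i-1}}{i-1}+\cdots+\binom{a_j}{j}$ with $a_i>a_{i-1}>\cdots>a_j\geq j\geq 1$, one sets $n^{\langle i\rangle}:=\binom{a_i+1}{i+1}+\cdots+\binom{a_j+1}{j+1}$; a sequence $(1,g_1,g_2,\ldots)$ of non-negative integers is then an O-sequence precisely when $g_{i+1}\leq g_i^{\langle i\rangle}$ for all $i\geq 1$. A sketch of this standard computation:

```python
from math import comb

def macaulay_bound(n, i):
    """n^<i>: greedily write n = C(a_i, i) + ... + C(a_j, j) (the i-th
    Macaulay representation), then replace each C(a, k) by C(a+1, k+1)."""
    bound, k = 0, i
    while n > 0 and k >= 1:
        a = k
        while comb(a + 1, k) <= n:   # largest a with C(a, k) <= n
            a += 1
        n -= comb(a, k)
        bound += comb(a + 1, k + 1)
        k -= 1
    return bound

def is_O_sequence(g):
    """Macaulay's characterization of Hilbert functions of standard
    graded Artinian K-algebras (g_1 may be any non-negative integer)."""
    if not g or g[0] != 1 or any(v < 0 for v in g):
        return False
    return all(g[i + 1] <= macaulay_bound(g[i], i)
               for i in range(1, len(g) - 1))
```

For instance $(1,3,6,10)$ is an O-sequence, while $(1,2,4)$ is not, since $2^{\langle 1\rangle}=\binom{3}{2}=3<4$.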
In order to prove sufficiency of these conditions, in \cite{BL2} Billera and Lee
construct, for each SI-sequence $\underline{h} := (h_0,\ldots, h_d)$, a certain
simplicial $d$-polytope $P_{BL}(\underline{h})$ whose $h$-vector is the given
SI-sequence $\underline{h}$. The Billera-Lee polytopes are rather particular, which has
led to expectations that they have some extremal properties. In order to state one such
instance recall (cf.\ \cite{Kalai-02}) that an {\em
empty simplex} of the polytope $P$ is a smallest subset $S$ of the
vertex set of $P$ such that $S$ is not a face of $P$, but each proper subset of $S$ is a
face of $P$. Sometimes, empty simplices are called {\em
missing faces}. They are just minimal
non-faces of the vertex set of $P$. Empty simplices play an important role in the
classification of polytopes (cf., e.g., \cite{Kalai-87} and Remark \ref{rem-skeleta}). In
\cite{Kalai-94}, Kalai states as Conjecture 2:
\begin{conjecture}[Kalai, Kleinschmidt, Lee] \label{conj}
For all simplicial $d$-polytopes with prescribed $h$-vector $\underline{h}$, the number
of $j$-dimensional empty simplices is maximized by the Billera-Lee polytope
$P_{BL}(\underline{h})$.
\end{conjecture}
Kalai has pointed out in \cite{Kalai-02}, Theorem 19.5.35, that this conjecture is a
consequence of results in \cite{MN3}, but his argument needs some adjustment. The
starting point of this note is to give a detailed proof of this conjecture which is
established in Theorem \ref{thm-conj}.
The construction of the Billera-Lee polytopes is rather involved. In general, the number
of empty $j$-simplices of a given Billera-Lee polytope $P_{BL} (\underline{h})$ has not
been known. Hence, the proof of Conjecture \ref{conj} leaves open the problem of giving
an explicit bound in terms of the $h$-vector. The bulk of this paper is devoted to
solving this problem. The key is given by our proof of Conjecture \ref{conj}. It
identifies the number of missing $j$-simplices of the polytope $P$ with a certain graded
Betti number of its Stanley-Reisner ring $K[P]$. Since the $h$-vector of $P$ is
determined by the Hilbert function of $K[P]$, we are led to consider the problem of
finding sharp upper bounds for the graded Betti numbers of the Stanley-Reisner ring
$K[P]$ in terms of its Hilbert function. We solve this problem in Section \ref{sec-Betti}
in greater generality, namely for Gorenstein algebras with the Weak Lefschetz property
(Theorem \ref{thm-gor-betti}). Its proof requires explicit bounds for all graded Betti
numbers of any standard graded $K$-algebra $A$ in terms of its Hilbert function. These
are established in Theorem \ref{thm-Betti-bounds}. They are optimal. Because of the
importance of graded Betti numbers, it seems fair to expect that Theorem
\ref{thm-Betti-bounds} will find applications in other contexts as well.
In Section \ref{sec-missing}, we apply the results of Section \ref{sec-Betti} to derive
explicit optimal bounds for the number of missing $j$-simplices of a simplicial polytope
in terms of its $g$-vector (cf.\ Corollary \ref{cor-generators}). Note that the
$g$-vector is easily obtained from the $h$-vector (Definition \ref{deg-g-vector}). We
conclude with some applications. In particular, we bound the number of empty faces of
dimension $\leq k$ of a simplicial $d$-polytope in terms of $k$ and $f_0 - d$ (Corollary
\ref{cor-kalai-2-7}). Following Kalai \cite{Kalai-94}, such a bound is the key to a
central result of Perles \cite{perles} in the theory of arbitrary polytopes with ``few
vertices'' (cf.\ Remark \ref{rem-skeleta}). Finally, we show that very little information
on the $g$-vector is sufficient to bound the number of empty $j$-simplices of a
simplicial $d$-polytope if $d$ is large enough (Corollary \ref{cor-j-simp}). This result
slightly corrects and improves \cite{Kalai-94}, Theorem 3.8.
\section{The conjecture of Kalai, Kleinschmidt, and Lee} \label{sec-conj}
The goal of this section is to prove Conjecture \ref{conj}. To this end we need some more
notation. Let $P$ be a simplicial $d$-polytope. Denote its vertex set
by $\{v_1,\ldots,v_{f_0}\}$ and let $R := K[x_1,\ldots,x_{f_0}]$ be the
polynomial ring in $f_0$ variables over an arbitrary field $K$. Then the {\em
Stanley-Reisner ring} of $P$ is $K[P] := R/I_P$ where the {\em Stanley-Reisner ideal} is
generated by all square-free monomials $x_{i_1} x_{i_2} \cdots x_{i_t}$ such that
$\{v_{i_1}, v_{i_2},\ldots,v_{i_t}\}$ is not a face of $P$. It is well-known (cf.\
\cite{BH-book}, Corollary 5.6.5) that $K[P]$ is a Gorenstein ring of dimension $d = \dim
P$. Since $h_1 = f_0 - d$, its minimal graded free resolution is of the form
\[
0 \rightarrow \bigoplus_{j \in \mathbb{Z}} R(-j)^{\beta_{h_1, j}^K (P)}
\rightarrow \dots
\rightarrow
\bigoplus_{j \in \mathbb{Z}} R(-j)^{\beta_{1, j}^K (P)} \rightarrow R
\rightarrow R/I_P \rightarrow
0.
\]
The non-negative integers $\beta_{i, j}^K (P) = \dim_K [\Tor_i^R (K[P],
K)]_j$, $i, j \in \mathbb{Z}$, are called the {\em graded Betti numbers} of
$P$.
The following result is shown in \cite{MN3}:
\begin{theorem} \label{thm-betti}
Let $K$ be a field of characteristic zero and let $P$ be a simplicial
$d$-polytope with $h$-vector $\underline{h}$. Then we have for all
integers $i, j$:
$$
\beta_{i, j}^K (P) \leq \beta_{i, j}^K (P_{BL} (\underline{h})).
$$
\end{theorem}
\begin{proof}
The claim is a consequence of \cite{MN3}, Theorem 9.6, because its
proof shows (cf.\ page 57) that the extremal polytope that is not
specified in part
(b) of this theorem is indeed the Billera-Lee polytope $P_{BL}
(\underline{h})$.
\end{proof}
\begin{remark} The assumption on the characteristic of the field $K$ is
needed to ensure that the Stanley-Reisner ring $K[P]$ has the
so-called Weak Lefschetz property (cf.\ Section \ref{sec-Betti}). This property
also plays a crucial role in Stanley's necessity part of the
$g$-theorem in \cite{stanley2}.
\end{remark}
The Conjecture of Kalai, Kleinschmidt, and Lee follows now easily.
\begin{theorem} \label{thm-conj}
For all simplicial polytopes with prescribed $h$-vector $\underline{h}$, the number of
$j$-dimensional empty simplices is maximized by the Billera-Lee polytope
$P_{BL}(\underline{h})$.
\end{theorem}
\begin{proof}
It follows from its definition that ${\beta_{1, j}^K} (P)$ is the
number of minimal generators of degree $j$ of the Stanley-Reisner
ideal $I_P$. Since a $j$-dimensional empty face of $P$ corresponds to
a minimal generator of $I_P$ with degree $j+1$, the Conjecture of
Kalai, Kleinschmidt, and Lee is a consequence of Theorem
\ref{thm-betti} applied with $i=1$.
\end{proof}
The combinatorial interpretation of the first Betti numbers allows us to drop the
assumption on the characteristic in Theorem \ref{thm-betti} for certain Betti numbers.
\begin{corollary} \label{first-betti}
Let $P$ be a simplicial $d$-polytope with $h$-vector $\underline{h}$. Then we have for
all integers $j$:
$$
\beta_{1, j}^K (P) \leq \beta_{1, j}^K (P_{BL} (\underline{h})),
\quad \beta_{h_1 - 1, j}^K (P) \leq \beta_{h_1 - 1, j}^K (P_{BL}
(\underline{h})),
$$
and
$$
\beta_{h_1, j}^K (P) = \left \{ \begin{array}{ll}
0 & \mbox{if} ~ j \neq h_1 + d \\
1 & \mbox{if} ~ j = h_1 + d
\end{array} \right.
$$
\end{corollary}
\begin{proof}
Denote by $n_j (P)$ the number of empty $j$-simplices of $P$. We have seen that, for
every field $K$:
$$
n_{j-1} (P) = \beta_{1, j}^K (P).
$$
Let $K$ be a field of characteristic zero. Then Theorem \ref{thm-conj} provides
$$
n_{j-1} (P) \leq n_{j-1} (P_{BL} (\underline{h})).
$$
Let now $K$ be an arbitrary field. Then, applying the above equality again, the claim for
the first Betti numbers follows.
Since $K[P]$ is a Gorenstein ring, its minimal free resolution is self-dual. In
particular, for all integers $i, j$, we have
$$
\beta_{i, j}^K (P) = \beta_{h_1 - i, h_1 + d -j}^K (P).
$$
This implies the remaining assertions.
\end{proof}
\begin{remark} \label{rem-interp}
Note that the conjecture of Kalai, Kleinschmidt, and Lee has been shown by giving a
combinatorial interpretation of the first graded Betti numbers of a simplicial polytope.
By duality, it follows that the second last non-trivial graded Betti numbers have a
combinatorial interpretation, too. However, it is not possible to find combinatorial
interpretations of all graded Betti numbers because, in general, the Betti numbers
depend on the characteristic of the ground field (cf.\ \cite{th-char}, Example 3.3).
\end{remark}
\medskip
\section{Upper bounds for Betti numbers} \label{sec-Betti}
The key to proving the conjecture of Kalai, Kleinschmidt, and Lee has been to identify
the number of missing $i$-simplices as a certain first graded Betti number. The proof
also shows that in order to compute an upper bound for this number in terms of the
$h$-vector of the polytope, we need to know an upper bound for the Betti numbers of
Cohen-Macaulay algebras. The goal of this section is to establish such bounds. Since the
general case does not take more work than the special case of a Cohen-Macaulay algebra,
we will derive upper bounds for the graded Betti numbers of an arbitrary standard graded
$K$-algebra in terms of its Hilbert function.
Throughout this section we denote by $R$ the polynomial ring $K[x_1,\ldots,x_n]$ over an
arbitrary field $K$ with its standard grading where every variable has degree one. $A
\neq 0$ will be a standard graded $K$-algebra $R/I$ where $I \subset R$ is a proper
homogeneous ideal. For a finitely generated graded $R$-module $M = \oplus_{j \in \mathbb{Z}}
[M]_j$, we denote its graded Betti numbers by
$$
\beta^R_{i j} (M) := \dim_K [\Tor^R_i (M, K)]_j.
$$
Since the graded Betti numbers of $M$ do not change under field extensions of $K$, we may
and will assume that the field $K$ is infinite.
The Hilbert function of $M$ is the numerical function $h_M: \mathbb{Z} \to \mathbb{Z}, h_M (j) := \dim_K
[M]_j$. The Hilbert functions of graded $K$-algebras have been completely classified by
Macaulay. In order to state his result we need some notation.
\begin{notation} \label{not-bin-expansion}
(i) We always use the following convention for binomial coefficients: If $a \in \mathbb{R}$ and
$j \in \mathbb{Z}$ then
$$
\binom{a}{j} := \left \{ \begin{array}{ll}
\frac{a (a-1) \cdots (a-j+1)}{j!} & \mbox{if} ~ j > 0 \\
1 & \mbox{if} ~ j = 0 \\
0 & \mbox{if} ~ j < 0.
\end{array} \right.
$$
(ii) Let $b, d$ be positive integers. Then there are uniquely determined integers $m_d >
m_{d-1} > \ldots > m_s \geq s \geq 1$ such that
$$
b = \binom{m_d}{d} + \binom{m_{d-1}}{d-1} + \ldots + \binom{m_s}{s}.
$$
This is called the {\em $d$-binomial expansion} of $b$. For any integer $j$ we set
$$
b\pp{d, j} := \binom{m_d + j}{d + j} + \binom{m_{d-1} + j}{d-1 + j } + \ldots +
\binom{m_s + j}{s + j}.
$$
Of particular importance will be the cases where $j = 1$ or $j = -1$. To simplify
notation, we further define
$$
b\pp{d} := b\pp{d, 1} = \binom{m_d + 1}{d + 1} + \binom{m_{d-1} + 1}{d } + \ldots +
\binom{m_s + 1}{s + 1}
$$
and
$$
b\pl{d} := b\pp{d, -1} = \binom{m_d - 1}{d - 1} + \binom{m_{d-1}-1}{d - 2} + \ldots +
\binom{m_s - 1}{s - 1}.
$$
(iii) If $b = 0$, then we put $b\pp{d} = b\pl{d} = b\pp{d, j} := 0$ for all $j, d \in
\mathbb{Z}$.
\end{notation}
\medskip
Recall that a sequence of non-negative integers $\left (h_j\right )_{j \geq 0}$ is called
an {\em O-sequence} if $h_0 = 1$ and $h_{j+1} \leq h_j\pp{j}$ for all $j \geq 1$. Now we
can state Macaulay's characterization of Hilbert functions \cite{Macaulay} (cf.\ also
\cite{stanley}).
\begin{theorem}[Macaulay] \label{thm-Mac}
For a numerical function $h: \mathbb{Z} \to \mathbb{Z}$, the following conditions are equivalent:
\begin{itemize}
\item[(a)] $h$ is the Hilbert function of a standard graded $K$-algebra;
\item[(b)] $h(j) = 0$ if $j < 0$ and $\{h(j)\}_{j \geq 0}$ is an O-sequence.
\end{itemize}
\end{theorem}
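The greedy computation behind the $d$-binomial expansion of Notation \ref{not-bin-expansion} and Macaulay's growth condition can be made concrete. The following Python sketch (all function names are our own) computes $b\pp{d,j}$ and tests the O-sequence condition:

```python
from math import comb

def binomial_expansion(b, d):
    """Greedy d-binomial expansion of b > 0: returns [m_d, m_{d-1}, ..., m_s]
    with b = C(m_d, d) + C(m_{d-1}, d-1) + ... + C(m_s, s)."""
    parts, k = [], d
    while b > 0 and k >= 1:
        m = k
        # take the largest m with C(m, k) <= b
        while comb(m + 1, k) <= b:
            m += 1
        parts.append(m)
        b -= comb(m, k)
        k -= 1
    return parts

def shift(b, d, j=1):
    """b^{<d, j>}: replace each C(m_k, k) in the expansion by C(m_k + j, k + j)."""
    if b == 0:
        return 0
    parts = binomial_expansion(b, d)
    return sum(comb(m + j, k + j)
               for m, k in zip(parts, range(d, d - len(parts), -1)))

def is_O_sequence(h):
    """Macaulay's criterion: h_0 = 1 and h_{j+1} <= h_j^{<j>} for all j >= 1."""
    return h[0] == 1 and all(h[j + 1] <= shift(h[j], j)
                             for j in range(1, len(h) - 1))
```

For instance, $5 = \binom{3}{2} + \binom{2}{1}$, so $5\pp{2} = \binom{4}{3} + \binom{3}{2} = 7$.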
For later use we record some formulas for sums involving binomial coefficients.
\begin{lemma} \label{lem-sum-formulas}
For any positive real numbers $a, b$ and every integer $j \geq 0$, the following
identities hold:
\begin{itemize}
\item[(i)] ${\displaystyle \sum_{k = 0}^j (-1)^k \binom{a+k-1}{k} \binom{b}{j-k} =
\binom{b-a}{j} }$;
\item[(ii)] ${\displaystyle \sum_{k = 0}^j \binom{a+k-1}{k} \binom{b+j-k-1}{j-k} =
\binom{a+b+j-1}{j} }$;
\item[(iii)] ${\displaystyle \sum_{k = 0}^j (-1)^k \binom{a+k}{m} \binom{b}{j-k} =
\sum_{k=0}^{m} \binom{a-k-1}{m-k} \binom{b-k-1}{j}}$ \begin{minipage}[t]{3.1cm}
if \ $0 \leq m
\leq a$ are integers. \end{minipage}
\end{itemize}
\end{lemma}
\begin{proof}
(i) and (ii) are probably standard. In any case, they follow immediately by comparing
coefficients of power series using the identities $(1+x)^{b-a} = (1+x)^{-a} \cdot
(1+x)^b$ and $(1-x)^{-a-b} = (1-x)^{-a} \cdot (1-x)^{-b}$.
To see part (iii), we first use (ii) and finally (i); we get:
\begin{eqnarray*}
\sum_{k = 0}^j (-1)^k \binom{a+k}{m} \binom{b}{j-k} & = & \sum_{k = 0}^j (-1)^k
\binom{b}{j-k} \cdot \left \{\sum_{i=0}^m \binom{k+i}{i} \binom{a-1-i}{m-i} \right\} \\
& = & \sum_{i=0}^m \binom{a-1-i}{m-i} \cdot \left \{ \sum_{k = 0}^j (-1)^k
\binom{k+i}{k} \binom{b}{j-k}
\right \} \\
& = & \sum_{i=0}^{m} \binom{a-1-i}{m-i} \binom{b-i-1}{j},
\end{eqnarray*}
as claimed.
\end{proof}
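For integer arguments the three identities can be spot-checked numerically. Here is a small Python sketch using the binomial convention of Notation \ref{not-bin-expansion}(i); the helper names are our own:

```python
from math import factorial

def binom(a, j):
    """C(a, j) with the convention of Notation (i); a may be any integer."""
    if j < 0:
        return 0
    prod = 1
    for t in range(j):
        prod *= a - t
    return prod // factorial(j)  # exact: j consecutive integers are divisible by j!

def identity_i(a, b, j):
    lhs = sum((-1) ** k * binom(a + k - 1, k) * binom(b, j - k) for k in range(j + 1))
    return lhs == binom(b - a, j)

def identity_ii(a, b, j):
    lhs = sum(binom(a + k - 1, k) * binom(b + j - k - 1, j - k) for k in range(j + 1))
    return lhs == binom(a + b + j - 1, j)

def identity_iii(a, b, m, j):
    # requires 0 <= m <= a
    lhs = sum((-1) ** k * binom(a + k, m) * binom(b, j - k) for k in range(j + 1))
    rhs = sum(binom(a - k - 1, m - k) * binom(b - k - 1, j) for k in range(m + 1))
    return lhs == rhs
```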
After these preliminaries we are ready to derive bounds for Betti numbers. We begin with
the special case of modules having a $d$-linear resolution. Recall that the graded module
$M$ is said to have a {\em $d$-linear resolution} if it has a graded minimal free
resolution of the form
$$
\ldots \to R^{\beta_i} (-d-i) \to \ldots \to R^{\beta_1} (-d-1) \to R^{\beta_0} (-d) \to
M \to 0.
$$
Here $\beta_i := \beta_i^R (M) = \sum_{j \in \mathbb{Z}} \beta_{i, j}^R (M)$ is the $i$-th total
Betti number of $M$.
\begin{proposition} \label{prop-lin-res}
Let $M \neq 0$ be a graded $R$-module with a $d$-linear resolution. Then, for every $i
\geq 0$, its $i$-th total graded Betti number is
$$
\beta_i^R (M) = \sum_{j=0}^i (-1)^j \cdot h_M (d+j) \cdot \binom{n}{i-j}.
$$
\end{proposition}
\begin{proof}
We argue by induction on $i$. The claim is clear if $i = 0$. Let $i > 0$. Using the
additivity of vector space dimensions along exact sequences and the induction hypothesis
we get:
\begin{eqnarray*}
\beta_i^R (M) & = & (-1)^i h_M (d+ i) + \sum_{j=0}^{i-1} (-1)^{i-1 - j} \cdot \beta_j^R
(M)
\binom{n-1+i-j}{i-j} \\
& = & (-1)^i h_M (d+ i) + \\
& & \sum_{j=0}^{i-1} (-1)^{i-1 - j} \cdot \binom{n-1+i-j}{i-j} \cdot \left \{
\sum_{k=0}^j (-1)^k \cdot h_M (d+k) \binom{n}{j-k} \right \}\\
& = & (-1)^i h_M (d+ i) + \\
&& \sum_{k=0}^{i-1} (-1)^{k} \cdot h_M (d+k) \cdot \left \{ \sum_{j=k}^{i-1} (-1)^{i-1-j}
\binom{n-1+i-j}{i-j} \binom{n}{j-k} \right \}\\
& = & (-1)^i h_M (d+ i) + \\ && \sum_{k=0}^{i-1} (-1)^{k} \cdot h_M (d+k) \cdot \left \{
\sum_{j=1}^{i-k} (-1)^{j-1}
\binom{n+j-1}{j} \binom{n}{i-k-j} \right \}\\
& = & (-1)^i h_M (d+ i) + \sum_{k=0}^{i-1} (-1)^{k} \cdot h_M (d+k) \cdot \binom{n}{i-k}
\end{eqnarray*}
according to Lemma \ref{lem-sum-formulas}(i). Now the claim follows.
\end{proof}
It is amusing and useful to apply this result to a case where we know the graded Betti
numbers.
\begin{example} \label{ex-powers}
Consider the ideal $I = (x_1,\ldots,x_n)^d$ where $d>0$. Its minimal free resolution is
given by an Eagon-Northcott complex. It has a $d$-linear resolution and its Betti numbers
are (cf., e.g., the proof of \cite{MN3}, Corollary 8.14):
$$
\beta_i^R (I) = \binom{d+i-1}{i} \binom{n+d-1}{d+i}.
$$
Since the Hilbert function of $I$ is, for all $j \geq 0$, $h_I (d+j) =
\binom{n+d+j-1}{d+j}$, a comparison with Proposition \ref{prop-lin-res} yields:
\begin{equation} \label{eq-sum}
\binom{d+i-1}{i} \binom{n+d-1}{d+i} = \sum_{j=0}^i (-1)^j \cdot \binom{n+d+j-1}{d+j}
\binom{n}{i-j}.
\end{equation}
\end{example}
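Formula (\ref{eq-sum}) is an identity of integers, so it can be verified numerically for small parameters. A Python sketch (function names are ours):

```python
from math import comb

def betti_power_ideal(n, d, i):
    """i-th total Betti number of I = (x_1, ..., x_n)^d (Eagon-Northcott formula)."""
    return comb(d + i - 1, i) * comb(n + d - 1, d + i)

def betti_alternating_sum(n, d, i):
    """Right-hand side of (eq-sum): the formula of the proposition applied to
    h_I(d+j) = C(n+d+j-1, d+j)."""
    return sum((-1) ** j * comb(n + d + j - 1, d + j) * comb(n, i - j)
               for j in range(i + 1))
```

Note that `math.comb(n, k)` returns $0$ for $k > n$, which matches the vanishing of the binomial coefficients above.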
\medskip
Now we will compute the graded Betti numbers of lex-segment ideals. Recall that an ideal
$I \subset R$ is called a {\em lex-segment} ideal if, for every $d$, the ideal $I\dd{d}$
is generated by the first $\dim_K [I]_d$ monomials in the lexicographic order of the
monomials in $R$. Here $I\dd{d}$ is the ideal that is generated by all the polynomials of
degree $d$ in $I$. For every graded $K$-algebra $A= R/I$ there is a unique lex-segment
ideal $I^{lex} \subset R$ such that $A$ and $R/I^{lex}$ have the same Hilbert function.
For further information on lex-segment ideals we refer to \cite{BH-book}.
\begin{lemma} \label{lem-lin-lex-seg}
Let $I \subset R$ be a proper lex-segment ideal whose generators all have degree $d$.
Consider the $d$-binomial expansion of $b := h_{R/I} (d)$:
$$
b = \binom{m_d}{d} + \binom{m_{d-1}}{d-1} + \ldots + \binom{m_s}{s}.
$$
Then the Betti numbers of $A := R/I$ are for all $i \geq 0$:
\begin{eqnarray*}
\beta^R_{i+1} (A) & = & \beta^R_{i+1, i+d} (A)\\
& = & \binom{n+d-1}{d+i} \binom{d+i-1}{d-1} - \sum_{k=s}^d \sum_{j=0}^{m_k - k}
\binom{m_k -j -1}{k-1} \binom{n-1-j}{i}.
\end{eqnarray*}
(Note that according to Notation \ref{not-bin-expansion}, the sum on the right-hand side
is zero if $b = 0$.)
\end{lemma}
\begin{proof}
Gotzmann's Persistence Theorem \cite{gotzmann} implies that the Hilbert function of $A$
is, for $j \geq 0$, $h_A (d+j) = b\pp{d, j}$ and that $I$ has a $d$-linear resolution.
Hence Proposition \ref{prop-lin-res} in conjunction with Formula (\ref{eq-sum}) and
Lemma \ref{lem-sum-formulas}(iii) provides:
\begin{eqnarray*}
\beta_{i+1}^R (A) & = & \beta_i^R (I) = \sum_{j=0}^i (-1)^j \cdot h_I(d+j) \cdot
\binom{n}{i-j} \\
& = & \sum_{j=0}^i (-1)^j \left [\binom{n+d+j-1}{d+j} - b\pp{d, j} \right ] \cdot
\binom{n}{i-j} \\
& = & \binom{n+d-1}{d+i} \binom{d+i-1}{i} - \sum_{j=0}^i (-1)^j \cdot \left [\sum_{k=s}^d
\binom{m_k + j}{k+j} \right ] \cdot \binom{n}{i-j} \\
& = & \binom{n+d-1}{d+i} \binom{d+i-1}{i} - \sum_{k=s}^d \left [\sum_{j=0}^i (-1)^j
\cdot
\binom{m_k + j}{m_k - k} \cdot \binom{n}{i-j} \right ]\\
& = & \binom{n+d-1}{d+i} \binom{d+i-1}{i} - \sum_{k=s}^d \sum_{j=0}^{m_k - k}
\binom{m_k - j -1}{k-1} \cdot \binom{n-1-j}{i},
\end{eqnarray*}
as claimed.
\end{proof}
The above formulas simplify in the extremal cases.
\begin{corollary} \label{cor-linear-lex-seg}
Adopt the notation and assumptions of Lemma \ref{lem-lin-lex-seg}. Then
\begin{itemize}
\item[(a)] ${\displaystyle \beta^R_1 (A) = \binom{n+d-1}{d} - b}$;
\item[(b)] ${\displaystyle \beta^R_n (A) = \binom{n+d-2}{d-1} - b\pl{d} }$.
\end{itemize}
\end{corollary}
\begin{proof}
Part (a) being clear, we restrict ourselves to showing (b). Since $\binom{n-1-j}{n-1} = 0$
for $j > 0$, Lemma \ref{lem-lin-lex-seg} immediately gives
$$
\beta^R_n (A) = \binom{n+d-2}{d-1} - \sum_{k=s}^d \binom{m_k -1}{k-1} =
\binom{n+d-2}{d-1} - b\pl{d}.
$$
\end{proof}
Now, we can compute the non-trivial graded Betti numbers of an arbitrary lex-segment
ideal.
\begin{proposition} \label{prop-res-lex-segment}
Let $I \subset R$ be an arbitrary proper lex-segment ideal and let $d \geq 2$ be an
integer. Set $A := R/I$ and consider the $d$-binomial expansion
$$
h_A (d) =: \binom{m_d}{d} + \binom{m_{d-1}}{d-1} + \ldots + \binom{m_s}{s}
$$
and the $(d-1)$-binomial expansion
$$
h_A (d-1) =: \binom{n_{d-1}}{d-1} + \binom{n_{d-2}}{d-2} + \ldots + \binom{n_t}{t}.
$$
Then we have for all $i \geq 0$:
$$
\beta^R_{i+1, i+d} (A) = \beta_{i+1, i+d} (h_A, n)
$$
where
$$
\beta_{i+1, i+d} (h_A, n) := \sum_{k=t}^{d-1} \sum_{j=0}^{n_k - k} \binom{n_k - j}{k}
\binom{n-1-j}{i} - \sum_{k=s}^{d} \sum_{j=0}^{m_k - k} \binom{m_k - 1 - j}{k-1}
\binom{n-1-j}{i}.
$$
\end{proposition}
\begin{proof}
As noticed above, since $I$ is a lex-segment ideal, for every $j \in \mathbb{Z}$, the ideal
$I\dd{j}$ has a $j$-linear resolution, i.e.\ the ideal $I$ is componentwise linear. Hence
\cite{HH}, Proposition 1.3, gives for all $i \geq 0$:
\begin{equation} \label{eq-HH-formula}
\beta^R_{i+1, i+d} (A) = \beta^R_{i+1} (R/I\dd{d}) - \beta^R_{i+1} (R/{\mathfrak m} I\dd{d-1})
\end{equation}
where ${\mathfrak m} = (x_1,\ldots,x_n)$ is the homogeneous maximal ideal of $R$.
Since $I\dd{d-1}$ is generated in degree $d-1$, the ideals $I\dd{d-1}$ and ${\mathfrak m}
I\dd{d-1}$ have the same Hilbert function in all degrees $j \geq d$. Thus, using the
assumption $d \geq 2$, Gotzmann's Persistence Theorem (\cite{gotzmann}) provides:
$$
h_{R/{\mathfrak m} I\dd{d-1}} (d-1+j) = h_{R/I\dd{d-1}} (d-1+j) = h_{A} (d-1)\pp{d-1, j} \quad
\mbox{for all} ~ j \geq 1.
$$
It is easy to see that ${\mathfrak m} I\dd{d-1}$ has a $d$-linear resolution because $I\dd{d-1}$
has a $(d-1)$-linear resolution. Hence, as in the proof of Lemma \ref{lem-lin-lex-seg},
Proposition \ref{prop-lin-res} provides
$$
\beta^R_{i+1} (R/{\mathfrak m} I\dd{d-1}) = \binom{n+d-1}{d+i} \binom{d+i-1}{d-1} -
\sum_{k=t}^{d-1} \sum_{j=0}^{n_k - k} \binom{n_k -j}{k} \binom{n-1-j}{i}.
$$
Plugging this and the result of Lemma \ref{lem-lin-lex-seg} into Formula
(\ref{eq-HH-formula}), we get our claim.
\end{proof}
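The bound $\beta_{i+1, i+d} (h_A, n)$ of the proposition is a purely mechanical function of the two binomial expansions and can be evaluated directly. The following Python sketch uses our own function names and assumes $n$ is large enough that every binomial top $n-1-j$ stays non-negative, as is the case for genuine Hilbert functions:

```python
from math import comb

def binomial_expansion(b, d):
    """Greedy d-binomial expansion of b: list of pairs (m_k, k)."""
    parts, k = [], d
    while b > 0 and k >= 1:
        m = k
        while comb(m + 1, k) <= b:
            m += 1
        parts.append((m, k))
        b -= comb(m, k)
        k -= 1
    return parts

def beta_bound(i, d, h_dm1, h_d, n):
    """beta_{i+1, i+d}(h, n) of the proposition for d >= 2, where
    h_dm1 = h_A(d-1) and h_d = h_A(d)."""
    first = sum(comb(nk - j, k) * comb(n - 1 - j, i)
                for nk, k in binomial_expansion(h_dm1, d - 1)
                for j in range(nk - k + 1))
    second = sum(comb(mk - 1 - j, k - 1) * comb(n - 1 - j, i)
                 for mk, k in binomial_expansion(h_d, d)
                 for j in range(mk - k + 1))
    return first - second
```

For $i = 0$ the inner sums telescope along columns of Pascal's triangle, so the value reduces to $h_A (d-1)\pp{d-1} - h_A (d)$.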
Again, the formula simplifies in the extremal cases. We will use the result in the
following section.
\begin{corollary} \label{cor-lex-seg}
Adopt the notation and assumptions of Proposition \ref{prop-res-lex-segment}. Then:
\begin{itemize}
\item[(a)] ${\displaystyle \beta^R_{1, d} (A) = \beta_{1, d} (h_A, n) = h_A (d-1)\pp{d-1}
- h_A (d)}$;
\item[(b)] ${\displaystyle \beta^R_{n, n-1+d} (A) = \beta_{n, n-1+d} (h_A, n) = h_A (d-1)
- (h_A (d))\pl{d}}$.
\end{itemize}
\end{corollary}
\begin{proof}
This follows from the formula given in Proposition \ref{prop-res-lex-segment}.
\end{proof}
In Proposition \ref{prop-res-lex-segment} we left out the case $d \leq 1$, which is easy
to deal with. We need:
\begin{definition} \label{def-betas}
Let $h$ be the Hilbert function of a graded $K$-algebra such that $h(1) \leq n$. Then we
define, for all integers $i \geq 0$ and $d$, the numbers $ \beta_{i+1, i+d} (h, n)$ as in
Proposition \ref{prop-res-lex-segment} if $d \geq 2$ and otherwise:
$$
\beta_{i+1, i+d} (h, n) := \left \{ \begin{array}{cl}
\binom{n - h(1)}{i+1} & \mbox{if} ~ d = 1 \\[1ex]
0 & \mbox{if} ~ d \leq 0.
\end{array} \right.
$$
Moreover, if $i \leq 0$ we set:
$$
\beta_{i, j} (h, n) := \left \{ \begin{array}{cl}
1 & \mbox{if} ~ (i, j) = (0, 0) \\[1ex]
0 & {\rm otherwise}.
\end{array} \right.
$$
\end{definition}
\begin{lemma} \label{lem-small-d}
Let $A = R/I \neq 0$ be any graded $K$-algebra. Then we have for all integers $i, d$ with
$d \leq 1$:
$$
\beta^R_{i+1, i+d} (A) = \beta_{i+1, i+d} (h_A, n).
$$
\end{lemma}
\begin{proof}
Since $A$, as an $R$-module, has just one generator in degree zero, this is clear if $d \leq
0$. Furthermore, $I\dd{1}$ is generated by a regular sequence of length $n - h_A (1)$.
Its minimal free resolution is given by the Koszul complex. Hence, the claim follows for
$d = 1$ because $\beta^R_{i+1, i+1} (A) = \beta^R_{i, i+1} (I\dd{1})$.
\end{proof}
Combined with results of Bigatti, Hulett, and Pardue, we get the main result of this
section: bounds for the graded Betti numbers of a $K$-algebra as an $R$-module in terms
of its Hilbert function and the dimension of $R$.
\begin{theorem} \label{thm-Betti-bounds}
Let $A = R/I \neq 0$ be a graded $K$-algebra. Then its graded Betti numbers are bounded
by
$$
\beta^R_{i+1, i+j} (A) \leq \beta_{i+1, i+j} (h_A, n) \quad (i, j \in \mathbb{Z}).
$$
Furthermore, equality is attained for all integers $i, j$ if $I$ is a lex-segment ideal.
\end{theorem}
\begin{proof}
Let $I^{lex} \subset R$ be the lex-segment ideal such that $A$ and $R/I^{lex}$ have the
same Hilbert function. Then we have for all integers $i, j$ that
$$
\beta^R_{i+1, i+j} (A) \leq \beta^R_{i+1, i+j} (R/I^{lex})
$$
according to Bigatti \cite{bigatti} and Hulett \cite{hulett} if $\chara K = 0$ and to
Pardue \cite{pardue} if $K$ has positive characteristic. Since Proposition
\ref{prop-res-lex-segment} and Lemma \ref{lem-small-d} yield
$$
\beta^R_{i+1, i+j} (R/I^{lex}) = \beta_{i+1, i+j} (h_A, n) \quad (i, j \in \mathbb{Z}),
$$
our claims follow.
\end{proof}
\begin{remark} \label{rem-Hilb-syz}
Note that Theorem \ref{thm-Betti-bounds} gives in particular that $\beta^R_{i+1,i+ d} (A)
= 0$ if $i \geq n$, in accordance with Hilbert's Syzygy Theorem.
\end{remark}
\medskip
We conclude this section by discussing the graded Betti numbers of Cohen-Macaulay
algebras with the so-called Weak Lefschetz property.
Let $A = R/I$ be a graded Cohen-Macaulay $K$-algebra of Krull dimension $d$ and let
$l_1,\ldots,l_d \in [R]_1$ be sufficiently general linear forms. Then $\overline{A} :=
A/(l_1,\ldots,l_d) A$ is called the {\em Artinian reduction} of $A$. Its Hilbert function
and graded Betti numbers as module over $\overline{R} := R/(l_1,\ldots,l_d) R$ do not
depend on the choice of the forms $l_1,\ldots,l_d$. The Hilbert function of
$\overline{A}$ takes positive values in only finitely many degrees. The sequence of these
positive integers $\underline{h} = (h_0, h_1,\ldots, h_r)$ is called the {\em $h$-vector}
of $A$. We set $\beta_{i+1, i+d} (\underline{h}, n-d) := \beta_{i+1, i+d}
(h_{\overline{A}}, n-d)$. Using this notation we get:
\begin{corollary} \label{cor-CM-alg}
Let $A = R/I$ be a Cohen-Macaulay graded $K$-algebra of dimension $d$ with $h$-vector
$\underline{h}$. Then its graded Betti numbers satisfy
$$
\beta^R_{i+1, i+j} (A) \leq \beta_{i+1, i+j} (\underline{h}, n-d) \quad (i, j \in \mathbb{Z}).
$$
\end{corollary}
\begin{proof}
If $l \in [R]_1$ is not a zerodivisor on $A$, then the graded Betti numbers of $A$ as an
$R$-module agree with the graded Betti numbers of $A/l A$ as an $R/l R$-module (cf., e.g.,
\cite{MN3}, Corollary 8.5). Hence, by passing to the Artinian reduction of $A$, Theorem
\ref{thm-Betti-bounds} provides the claim.
\end{proof}
\begin{remark} \label{rem-vanishing} Note that, for any O-sequence $\underline{h} =
(1, h_1,\ldots, h_r)$ with $h_r > 0$, Definition \ref{def-betas} provides $\beta_{i+1,
i+j} (\underline{h}, m) = 0$ for all $i, m \geq 0$ if $j \leq 1$ or $j \geq r+2$.
\end{remark}
Recall that an Artinian graded $K$-algebra $A$ has the so-called Weak Lefschetz property\ if there is an
element $l \in A$ of degree one such that, for each $j \in \mathbb{Z}$, the multiplication
$\times l: [A]_{j-1} \to [A]_j$ has maximal rank. The Cohen-Macaulay $K$-algebra $A$ is
said to have the {\em Weak Lefschetz property} if its Artinian reduction has the Weak Lefschetz property.
\begin{remark} \label{rem-WLP}
The Hilbert functions of Cohen-Macaulay algebras with the Weak Lefschetz property\ have been completely
classified in \cite{wlp}, Proposition 3.5. Moreover, Theorem 3.20 in \cite{wlp} gives
optimal upper bounds on their graded Betti numbers in terms of the Betti numbers of
certain lex-segment ideals. Thus, combining this result with Theorem
\ref{thm-Betti-bounds}, one gets upper bounds for the Betti numbers of these algebras in
terms of their Hilbert functions. In general, these bounds are strictly smaller than the
bounds of Corollary \ref{cor-CM-alg} for Cohen-Macaulay algebras that do not necessarily
have the Weak Lefschetz property.
\end{remark}
\medskip
The $h$-vectors of graded Gorenstein algebras with the Weak Lefschetz property\ are precisely the
SI-sequences (cf.\ \cite{MN3}, Theorem 6.3, or \cite{harima}, Theorem 1.2). For their
Betti numbers we obtain:
\begin{theorem} \label{thm-gor-betti}
Let $\underline{h} = (1,h_1,\dots,h_u,\dots, h_r)$ be an SI-sequence where $h_{u-1 } <
h_u = \cdots = h_{r-u} > h_{r-u+1}$. Put $\underline{g} =
(1,h_1-1,h_2-h_1,\dots,h_u-h_{u-1})$. If $A = R/I$ is a Gorenstein graded $K$-algebra of
dimension $d$ with the Weak Lefschetz property\ and $h$-vector $\underline{h}$, then its graded Betti
numbers satisfy
$$
\beta^R_{i+1, i+j} (A) \leq \left \{
\begin{array}{ll}
\beta_{i+1, i+j} (\underline{g}, m) & \hbox{if $j \leq r-u$} \\
\beta_{i+1, i+j} (\underline{g}, m) + \beta_{g_1-i,r+h_1-i-j} (\underline{g}, m) &
\hbox{if $r-u+1 \leq j \leq u+1$} \\
\beta_{g_1-i,r+h_1-i-j} (\underline{g}, m) & \hbox{if $j \geq u+2$}
\end{array}
\right.
$$
where $m := n-d-1 = \dim R - d- 1$.
\end{theorem}
\begin{proof}
This follows immediately by combining \cite{MN3}, Theorem 8.13, and Theorem
\ref{thm-Betti-bounds}.
\end{proof}
\section{Explicit bounds for the number of missing simplices} \label{sec-missing}
We now return to the consideration of simplicial polytopes. To this end we will
specialize the results of Section \ref{sec-Betti} and then discuss some applications.
We begin by somewhat simplifying our notation. Let $P$ be a simplicial $d$-polytope with
$f$-vector $\underline{f}$. It is well-known that the $h$-vector of the Stanley-Reisner
ring $K[P]$ agrees with the $h$-vector of $P$ as defined in the introduction.
Furthermore, in Section \ref{sec-conj} we defined the graded Betti numbers of $K[P]=
R/I_P$ by resolving $K[P]$ as an $R$-module where $R$ is a polynomial ring of dimension
$f_0$ over $K$, i.e.\
$$
\beta_{i, j}^K (P) = \beta_{i, j}^R (K[P]).
$$
Note that the Stanley-Reisner ideal $I_P$ does not contain any linear forms. The graded
Betti numbers of $P$ agree with the graded Betti numbers of the Artinian reduction of $K[P]$ as
a module over a polynomial ring of dimension $f_0 - d = h_1$. Thus, we can simplify the
statements of the bounds of $\beta_{i, j}^K (P)$ by setting:
\begin{notation} \label{not-betas}
Using the notation introduced above Corollary \ref{cor-CM-alg} we define for every
O-sequence $\underline{h}$
$$
\beta_{i+1, i+j} (\underline{h}) := \beta_{i+1, i+j} (\underline{h}, h_1).
$$
\end{notation}
\medskip
Notice that $\beta_{i+1, i+j} (\underline{h}) = 0$ if $i\geq 0$ and $j \leq 1$.
In this section we will primarily use the $g$-vector of a polytope which is defined as
follows:
\begin{definition} \label{deg-g-vector}
Let $P$ be a simplicial polytope with $h$-vector $\underline{h} := (h_0,\ldots, h_d)$.
Then the g-Theorem (Theorem \ref{g-thm}) shows that there is a unique integer $u$ such
that $h_{u-1 } < h_u = \cdots = h_{d-u} > h_{d-u+1}$. The vector $\underline{g} =
(g_0,\ldots, g_u) := (1,h_1-1,h_2-h_1,\dots,h_u-h_{u-1})$ is called the {\em $g$-vector}
of $P$. All its entries are positive.
\end{definition}
Some observations are in order.
\begin{remark} \label{rem-g-vec}
(i) By its definition, the $g$-vector of the polytope $P$ is uniquely determined by the
$h$-vector of $P$. The g-Theorem shows that the $h$-vector of $P$ (thus also its
$f$-vector) can be recovered from its $g$-vector, provided the dimension of $P$ is given.
(ii) The g-Theorem also gives an estimate of the length of the $g$-vector because it
implies $2 u \leq d = \dim P$.
\end{remark}
Now we can state our explicit bounds for the Betti numbers of a polytope.
\begin{theorem} \label{thm-poly-betti}
Let $K$ be a field of characteristic zero and let $\underline{g} = (g_0,\dots,g_u)$ be
an O-sequence with $g_u > 0$. Then we have:
\begin{itemize}
\item[(a)] If $P$ is a simplicial $d$-polytope with $g$-vector $\underline{g}$, then:
$$
\beta^K_{i+1, i+j} (P) \leq \left \{
\begin{array}{ll}
\beta_{i+1, i+j} (\underline{g}) & \hbox{if $j \leq d-u$} \\
\beta_{i+1, i+j} (\underline{g}) + \beta_{g_1-i,d+g_1+1-i-j} (\underline{g}) &
\hbox{if $d-u+1 \leq j \leq u+1$} \\
\beta_{g_1-i,d+g_1+1-i-j} (\underline{g}) & \hbox{if $j \geq u+2$.}
\end{array}
\right.
$$
\item[(b)] In {\rm (a)} equality is attained for all integers $i, j$ if $P$ is the
$d$-dimensional Billera-Lee polytope with $g$-vector $\underline{g}$.
\end{itemize}
\end{theorem}
\begin{proof}
According to Stanley \cite{stanley2} (cf.\ also \cite{McM-simple}), the Stanley-Reisner
ring of every simplicial polytope has the Weak Lefschetz property. Hence part (a) is a consequence of
Theorem \ref{thm-gor-betti}. Part (b) follows from \cite{MN3}, Theorem 9.6, and Theorem
\ref{thm-Betti-bounds}, as pointed out in the proof of Theorem \ref{thm-betti}.
\end{proof}
We have seen in Section \ref{sec-conj} that the number of empty $j$-simplices of the
simplicial polytope $P$ is equal to the Betti number $\beta^K_{1, j+1} (P)$. Thus, we
want to make the preceding bounds more explicit for $i = 0$. First, we treat a trivial
case.
\begin{remark} \label{rem-trivial-g1}
Notice that the $g$-vector has length one, i.e.\ $u = 0$, if and only if the polytope $P$
is a simplex. In this case, its Stanley-Reisner ideal is a principal ideal generated by a
monomial of degree $d = \dim P$.
\end{remark}
\medskip
In the following result we stress when the Betti numbers vanish. Because of Remark
\ref{rem-trivial-g1}, it is harmless to assume that $u \geq 1$. We use Notation
\ref{not-bin-expansion}.
\begin{corollary} \label{cor-generators}
Let $\underline{g} = (g_0,\dots,g_u)$ be an O-sequence with $g_u > 0$ and $u \geq 1$.
Set $g_{u+1} := 0$. Then we have:
\begin{itemize}
\item[(a)] If $P$ is a simplicial $d$-polytope with $g$-vector $\underline{g}$, then
there are the following bounds:
\begin{itemize}
\item[(i)] If $d \geq 2 u + 1$, then
$$
\beta^K_{1, j} (P) \leq \left \{
\begin{array}{ll}
g_{j-1}\pp{j-1} - g_j & \mbox{if} ~ \ 2 \leq j \leq u+1 \\[.5ex]
g_{d+1-j} - (g_{d+2-j})\pl{d+2-j} & \mbox{if} ~ \ d-u+1 \leq j \leq d \\[.5ex]
0 & {\rm otherwise;}
\end{array}
\right.
$$
\item[(ii)] If $d = 2 u$, then
$$
\beta^K_{1, j} (P) \leq \left \{
\begin{array}{ll}
g_{j-1}\pp{j-1} - g_j & \mbox{if} ~ \ 2 \leq j \leq u \\[.5ex]
g_u\pp{u} + g_u & \mbox{if} ~ \ j = u+1 \\[.5ex]
g_{d+1-j} - (g_{d+2-j})\pl{d+2-j} & \mbox{if} ~ \ u+2 \leq j \leq d \\[.5ex]
0 & {\rm otherwise;}
\end{array}
\right.
$$
\end{itemize}
\item[(b)] In {\rm (a)} equality is attained for all integers $j$ if $P$ is the
$d$-dimensional Billera-Lee polytope with $g$-vector $\underline{g}$.
\end{itemize}
\end{corollary}
\begin{proof}
Since the first Betti numbers of any polytope do not depend on the characteristic of the
field, the claims follow from Theorem \ref{thm-poly-betti} by taking into account
Corollary \ref{first-betti}, Corollary \ref{cor-lex-seg}, and the fact that $\beta_{i+1,
i+j}(\underline{g}) = 0$ if $i \geq 0$ and either $j \leq 1$ or $j \geq u+2$ due to
Remark \ref{rem-vanishing}.
\end{proof}
To illustrate the last result, let us consider an easy case.
\begin{example} \label{ex-ci}
Let $P$ be a simplicial $d$-polytope with $g_1 = 1$. Then its Stanley-Reisner ideal $I_P$
is a Gorenstein ideal of height two, thus a complete intersection. Indeed, since the
$g$-vector of $P$ is an O-sequence, it must be $\underline{g} = (g_0,\ldots, g_u) =
(1,\ldots,1)$. Hence Corollary \ref{cor-generators} provides that $I_P$ has exactly two
minimal generators, one of degree $u+1$ and one of degree $d-u+1$. Equivalently, $P$ has
exactly two empty simplices, one of dimension $u$ and one of dimension $d-u$.
\end{example}
\medskip
As an immediate consequence of Corollary \ref{cor-generators} we partially recover
\cite{Kalai-94}, Proposition 3.6.
\begin{corollary} \label{cor-no-empty}
Every simplicial $d$-polytope with $g$-vector $(g_0,\ldots,g_u)$ has no empty simplices
of dimension $j$ if $u+1 \leq j \leq d-u-1$.
\end{corollary}
\begin{remark} \label{rem-converse}
Kalai's Conjecture 8 in \cite{Kalai-94} states that the following converse of Corollary
\ref{cor-no-empty} should be true: If there is an integer $k$ such that $d \geq 2k$ and
the simplicial $d$-polytope has no empty simplices of dimension $j$ whenever $k \leq j
\leq d-k$, then $u < k$. Kalai has proved this for $k=2$ in \cite{Kalai-87}. Our results
provide the following weaker version of Kalai's conjecture:
If there is an integer $k$ such that $d \geq 2k$ and {\em every} simplicial $d$-polytope
with $g$-vector $(g_0,\ldots,g_u)$ has no empty simplices of dimension $j$ whenever $k
\leq j \leq d-k$, then $u < k$.
Indeed, this follows by the sharpness of the bounds in Corollary \ref{cor-generators}.
\end{remark}
Now we want to make some existence results of Kalai and Perles effective. As
preparation, we state:
\begin{corollary} \label{cor-up-to-k}
Let $P$ be a simplicial $d$-polytope with $g$-vector $\underline{g} = (g_0,\dots,g_u)$
where $u \geq 1$. Set $g_{u+1} = 0$. Then the number $N(k)$ of empty simplices of $P$
whose dimension is at most $k$ is bounded above as follows:
\begin{equation*}
N(k) \leq \left \{ \begin{array}{ll}
g_1 + \sum_{j=1}^k \left \{ g_j\pp{j} - g_j \right \} - g_{k+1} & \mbox{if} ~ \ 1 \leq k \leq
\min
\{u, d-u-1 \}; \\[1ex]
N(u) & \mbox{if} ~ \ u < k < d-u \\[1ex]
\begin{minipage}{7.1cm}
$g_1 + g_{d-k}\pp{d-k} + \sum_{j=1}^{d-k-1} \left \{ g_j\pp{j} - g_j \right \} +
\\
\hspace*{2.1cm} \sum_{j = d-k+1}^u \left \{ g_j\pp{j} - (g_j)\pl{j} \right \}$
\end{minipage}
& \mbox{if} ~ \ d-u
\leq k < d
\end{array} \right.
\end{equation*}
Furthermore, for each $k$, the bound is attained if $P$ is the Billera-Lee $d$-polytope
with $g$-vector $\underline{g}$.
\end{corollary}
\begin{proof}
By Corollary \ref{cor-no-empty}, this is clear if $u < k < d-u$. In any case, we know
that $N(k) = \sum_{j=2}^{k+1} \beta^K_{1, j} (P)$. Thus, using Corollary
\ref{cor-generators} carefully, elementary calculations provide the claim. We omit the
details.
\end{proof}
The last result immediately gives:
\begin{corollary} \label{cor-total}
If $P$ is a simplicial polytope with $g$-vector $\underline{g} = (g_0,\dots,g_u)$ where
$u \geq 1$, then its total number of empty simplices is at most
$$
\binom{g_1+ 2}{2} - 1 + \sum_{j = 2}^u \left \{ g_j\pp{j} - (g_j)\pl{j} \right \}.
$$
Furthermore, this bound is attained if $P$ is any Billera-Lee polytope with $g$-vector
$\underline{g}$.
\end{corollary}
\begin{proof}
Use Corollary \ref{cor-up-to-k} with $k = d-1$ and recall that $g_1\pp{1} =
\binom{g_1+1}{2}$.
\end{proof}
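As a sanity check, the bound of Corollary \ref{cor-total} can be evaluated directly. The Python sketch below (function names are our own) returns $2$ for $\underline{g} = (1,1,\ldots,1)$, matching the two empty simplices found in Example \ref{ex-ci}:

```python
from math import comb

def expansion(b, k):
    """Greedy k-binomial expansion of b > 0 as a list of pairs (m_i, i)."""
    parts, i = [], k
    while b > 0 and i >= 1:
        m = i
        while comb(m + 1, i) <= b:
            m += 1
        parts.append((m, i))
        b -= comb(m, i)
        i -= 1
    return parts

def up(b, k):
    """b^{<k>} in the notation of the paper; 0 for b = 0."""
    return sum(comb(m + 1, i + 1) for m, i in expansion(b, k))

def down(b, k):
    """b_{<k>}; 0 for b = 0."""
    return sum(comb(m - 1, i - 1) for m, i in expansion(b, k))

def total_empty_simplices_bound(g):
    """Bound of the corollary for a g-vector g = (1, g_1, ..., g_u), u >= 1."""
    u = len(g) - 1
    return comb(g[1] + 2, 2) - 1 + sum(up(g[j], j) - down(g[j], j)
                                       for j in range(2, u + 1))
```

Note that the dimension $d$ of the polytope does not enter the computation, in line with the remark following the corollary.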
\begin{remark} It is somewhat surprising that the bound in Corollary \ref{cor-total} does {\em not} depend
on the dimension of the polytope. In contrast, the other bounds (cf., e.g., Corollary
\ref{cor-up-to-k}) do depend on the dimension $d$ of the polytope.
\end{remark}
In view of Corollary \ref{cor-up-to-k}, the following elementary facts will be useful.
\begin{lemma} \label{lem-monotonie}
Let $k$ be a positive integer. If $a \geq b$ are non-negative integers, then
\begin{itemize}
\item[(a)] \hspace*{3cm} ${\displaystyle a\pp{k} - a\pl{k} \geq b\pp{k} - b\pl{k}; }$\\[-1ex]
\item[(b)] \hspace*{3.3cm} ${\displaystyle a\pp{k} - a \geq b\pp{k} - b; }$\\[-1ex]
\item[(c)] \hspace*{4.1cm} ${\displaystyle a\pl{k} \geq b\pl{k}. }$
\end{itemize}
\end{lemma}
\begin{proof}
We show only (a). The proofs of the other claims are similar but easier.
To see (a), we begin by noting, for integers $m \geq j
>0$, the identity
\begin{equation} \label{eq-comp}
\binom{m+1}{j+1} - \binom{m-1}{j-1} = \binom{m}{j+1} + \binom{m-1}{j}.
\end{equation}
Now we use induction on $k \geq 1$. Since $a\pp{1} - a\pl{1} = \binom{a+1}{2} - 1$, the
claim is clear if $k = 1$. Let $k \geq 2$. Consider the $k$-binomial expansions
$$
a =: \binom{m_k}{k} + \binom{m_{k-1}}{k-1} + \ldots + \binom{m_s}{s} \quad {\rm and}
\quad b =: \binom{n_{k}}{k} + \binom{n_{k-1}}{k-1} + \ldots + \binom{n_t}{t}.
$$
Since $a \geq b$, we get $m_k \geq n_k$. We distinguish two cases.
\smallskip
{\em Case 1}: Let $m_k = n_k$. Then the claim follows by applying the induction
hypothesis to $a - \binom{m_k}{k} \geq b - \binom{m_k}{k}$.
\smallskip
{\em Case 2}: Let $m_k > n_k$. Using $n_i \leq n_k-k+i$ and Formula (\ref{eq-comp}), we
get
\begin{eqnarray*}
b\pp{k} - b\pl{k} &= & \sum_{i = t}^{k} \left \{ \binom{n_i}{i+1} + \binom{n_i - 1}{i}
\right \} \\
& \leq & \sum_{i = 1}^{k} \left \{ \binom{n_k - k + i}{i+1} + \binom{n_k - k - 1 + i}{i}
\right \} \\
& = & \binom{n_k+1}{k+1} + \binom {n_k}{k} - (n_k - k + 2) \\
& < & \binom{m_k}{k+1} + \binom{m_k - 1}{k}
\end{eqnarray*}
because $n_k < m_k$. The claim follows since Formula (\ref{eq-comp}) gives
$\binom{m_k}{k+1} + \binom{m_k - 1}{k} \leq a\pp{k} - a\pl{k}$.
\end{proof}
\begin{remark}
In general, it is not true that $a > b$ implies $a\pp{k} - a\pl{k} > b\pp{k} - b\pl{k}$.
For example, if $k \geq 2$ and $a-1 = b= \binom{m}{k} > 0$, then $a\pp{k} - a\pl{k} =
b\pp{k} - b\pl{k}$.
\end{remark}
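The operators $a\pp{k}$ and $a\pl{k}$ act term-wise on the $k$-binomial expansion. Assuming the standard definitions $a\pp{k} = \sum_i \binom{m_i+1}{i+1}$ and $a\pl{k} = \sum_i \binom{m_i-1}{i-1}$ (consistent with the identities $g_1\pp{1} = \binom{g_1+1}{2}$ and $a\pp{1} - a\pl{1} = \binom{a+1}{2} - 1$ used above), Identity \eqref{eq-comp}, Lemma \ref{lem-monotonie}, and the equality case of the remark can all be checked by brute force; the following Python sketch (helper names are ours) does so for small parameters:

```python
from math import comb

def kbinom(a, k):
    """Greedy k-binomial (Macaulay) expansion of a >= 0, as a list of pairs (m_i, i)."""
    terms = []
    while a > 0 and k >= 1:
        m = k
        while comb(m + 1, k) <= a:  # largest m with comb(m, k) <= a
            m += 1
        terms.append((m, k))
        a -= comb(m, k)
        k -= 1
    return terms

def pp(a, k):  # a^<k>: raise both indices of every term by one
    return sum(comb(m + 1, i + 1) for m, i in kbinom(a, k))

def pl(a, k):  # a_<k>: lower both indices of every term by one
    return sum(comb(m - 1, i - 1) for m, i in kbinom(a, k))

# Identity (eq-comp) from the proof above.
for m in range(1, 25):
    for j in range(1, m + 1):
        assert comb(m + 1, j + 1) - comb(m - 1, j - 1) == comb(m, j + 1) + comb(m - 1, j)

# Lemma (a)-(c): monotonicity in a for fixed k.
for k in range(1, 5):
    for b in range(0, 50):
        for a in range(b, 51):
            assert pp(a, k) - pl(a, k) >= pp(b, k) - pl(b, k)
            assert pp(a, k) - a >= pp(b, k) - b
            assert pl(a, k) >= pl(b, k)

# Equality case from the remark: a - 1 = b = comb(m, k) gives equal differences.
for k in (2, 3):
    for m in range(k, 9):
        b = comb(m, k)
        assert pp(b + 1, k) - pl(b + 1, k) == pp(b, k) - pl(b, k)
```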
\medskip
We are ready to establish optimal bounds that depend only on the dimension and the number
of vertices.
\begin{theorem} \label{thm-kal-2-7} Let $P$ be a simplicial $d$-polytope with $d + g_1 +
1$ vertices which is not a simplex. Then there is the following bound on the number
$N(k)$ of empty simplices of $P$ whose dimension is $\leq k$:
\begin{equation*}
N(k) \leq \left \{ \begin{array}{ll}
\binom{g_1 + k}{g_1 - 1} & \mbox{if} ~ \ 1 \leq k < \frac{d}{2}; \\[1ex]
\binom{g_1 + \left \lfloor \frac{d}{2} \right \rfloor}{g_1 - 1} + \binom{g_1 + \left
\lfloor \frac{d}{2} \right \rfloor - 1}{g_1 - 1} & \mbox{if} ~ \ \frac{d}{2} \leq k< d.
\end{array} \right.
\end{equation*}
Furthermore, for each $k$, the bound is attained if $P$ is the Billera-Lee $d$-polytope
with $g$-vector $(g_0,\ldots,g_u)$ where $g_j = \binom{g_1 + j - 1}{j}$, $0 \leq j \leq
u$, and $u = \min \{k, \left \lfloor \frac{d}{2} \right \rfloor \}$.
\end{theorem}
\begin{proof}
Let $\underline{g} = (g_0,\dots,g_u)$ be the $g$-vector of $P$. Since $P$ is not a
simplex, we have $u \geq 1$. We have to distinguish two cases.
\smallskip
{\em Case 1}: Let $k < \frac{d}{2}$. If $k > u$, then we formally set $g_{u+1} = \ldots
= g_{\left \lfloor \frac{d}{2} \right \rfloor} =0$. Since $k < \frac{d}{2} \leq d - u$,
Corollary \ref{cor-up-to-k} provides:
$$
N(k) \leq g_1 + \sum_{j=1}^k \left \{ g_j\pp{j} - g_j \right \} - g_{k+1}.
$$
According to Lemma \ref{lem-monotonie}, the sum on the right-hand side becomes maximal if
$g_2,\ldots,g_k$ are as large as possible and $g_{k+1} = 0$. The latter means $u = k$.
Macaulay's Theorem \ref{thm-Mac} implies $g_j \leq \binom{g_1 + j - 1}{j}$. Now an easy
computation provides the bound in this case. It is sharp because $(g_0,\ldots,g_k)$,
where $g_j = \binom{g_1 + j - 1}{j}$, is a $g$-vector of a simplicial $d$-polytope by the
$g$-Theorem, thus Corollary \ref{cor-up-to-k} applies.
\smallskip
{\em Case 2}: Let $\frac{d}{2} \leq k < d$. First, let us also assume that $k \geq d-u$.
Then Corollary \ref{cor-up-to-k} gives:
$$
N(k) \leq g_1 + g_{d-k}\pp{d-k} + \sum_{j=1}^{d-k-1} \left \{ g_j\pp{j} - g_j \right \} +
\sum_{j = d-k+1}^u \left \{ g_j\pp{j} - (g_j)\pl{j} \right \}.
$$
Again, Lemma \ref{lem-monotonie} shows that, for fixed $u$, the bound is maximized if
$g_j = \binom{g_1 + j - 1}{j}$, $0 \leq j \leq u$. This provides
$$
N(k) \leq \binom{g_1 + u}{g_1 -1} +\binom{g_1 + u - 1}{g_1 - 1}.
$$
Since $u \leq \frac{d}{2}$, our bound follows in this case.
Second, assume $k < d-u$. Then $u \leq \frac{d}{2} \leq k < d-u$ yields $u <
\frac{d}{2}$. Thus Corollary \ref{cor-up-to-k} provides $N(k) = N(u)$, but $N(u) \leq
\binom{g_1 +u}{g_1 - 1}$ by Case 1. This concludes the proof of the bound in Case 2. Its
sharpness is shown as in Case 1.
\end{proof}
As an immediate consequence we obtain:
\begin{corollary} \label{cor-kalai-2-7}
Every simplicial polytope that is not a simplex has at most $\binom{g_1 + k}{g_1 - 1}
+ \binom{g_1 + k-1}{g_1 - 1}$ empty simplices of dimension $\leq k$.
\end{corollary}
\begin{remark} \label{rem-comp-bounds}
Kalai (\cite{Kalai-94}, Theorem 2.7) was the first to give an estimate as in Corollary
\ref{cor-kalai-2-7}. His bound is
$$
N(k) \leq (g_1 + 1)^{k+1} \cdot (k+1)!.
$$
Comparing with our bound, we see that Kalai's bound is asymptotically not optimal for
$g_1 \gg 0$.
\end{remark}
\medskip
Notice that the bound on $N(k)$ in Theorem \ref{thm-kal-2-7} does not depend on $k$ if $k
\geq \frac{d}{2}$. This becomes plausible by considering cyclic polytopes.
\begin{example}
(i)
Recall that a {\em cyclic polytope} $C(f_0, d)$ is a $d$-dimensional simplicial
polytope which is the convex hull of $f_0$ distinct points on the moment curve
$$
\{ (t, t^2,\ldots,t^d) \; | \; t \in \mathbb{R} \}.
$$
Its combinatorial type depends only on $f_0$ and $d$.
According to McMullen's Upper Bound Theorem (\cite{McM-cyclic}), the cyclic polytope
$C(f_0, d)$ has the maximal $f$-vector among all simplicial $d$-polytopes with $f_0$
vertices. Theorem \ref{thm-kal-2-7} shows that it also has the maximal total number of
empty simplices among these polytopes. Indeed, this follows by comparing with the main
result in \cite{th-cyclic} (cf.\ also \cite{MN3}, Corollary 9.10) which provides that
$C(f_0, d)$ has $\binom{g_1 + \left \lfloor \frac{d}{2} \right \rfloor}{g_1 - 1} +
\binom{g_1 + \left \lfloor \frac{d}{2} \right \rfloor - 1}{g_1 - 1}$ empty simplices.
Moreover, the empty simplices of $C(f_0, d)$ have either dimension $\frac{d}{2}$ if $d$
is even or dimensions $\frac{d-1}{2}$ and $\frac{d+1}{2}$ if $d$ is odd. This explains
why the bound on $N(k)$ in Theorem \ref{thm-kal-2-7} does not change if $k \geq
\frac{d}{2}$.
(ii) If $P$ is a simplicial $d$-polytope with $f_0 \geq d + 2$ vertices, then Theorem
\ref{thm-kal-2-7} gives for its number of empty edges
$$
N(1) \leq \left \{ \begin{array}{ll}
\frac{f_0 (f_0 - 3)}{2} & \mbox{if} ~ \ d = 2 \\[1ex]
\binom{f_0 - d}{2} & \mbox{if} ~ \ d \geq 3.
\end{array} \right.
$$
If $d=2$, the bound is always attained because $\frac{f_0 (f_0 - 3)}{2}$ is the number of
``missing diagonals'' of a convex $f_0$-gon.
\end{example}
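In small cases the count for cyclic polytopes can be verified directly: the facets of $C(f_0, d)$ are given by Gale's evenness condition, and the empty simplices are exactly the minimal non-faces of the boundary complex. A Python sketch for $C(6,3)$ (helper names are ours):

```python
from itertools import combinations
from math import comb

def cyclic_facets(f0, d):
    """Facets of the cyclic polytope C(f0, d), via Gale's evenness condition:
    a d-subset S is a facet iff any two vertices outside S are separated by an
    even number of elements of S."""
    facets = []
    for S in combinations(range(f0), d):
        outside = [v for v in range(f0) if v not in S]
        if all(sum(i < x < j for x in S) % 2 == 0
               for i, j in combinations(outside, 2)):
            facets.append(frozenset(S))
    return facets

def empty_simplices(facets, f0):
    """Minimal non-faces of the simplicial complex spanned by the facets."""
    faces = set()
    for F in facets:
        for r in range(len(F) + 1):
            faces.update(map(frozenset, combinations(sorted(F), r)))
    result = []
    for r in range(2, f0 + 1):
        for S in combinations(range(f0), r):
            if frozenset(S) not in faces and all(
                    frozenset(T) in faces for T in combinations(S, r - 1)):
                result.append(frozenset(S))
    return result

f0, d = 6, 3
g1 = f0 - d - 1                                   # here g1 = 2
empties = empty_simplices(cyclic_facets(f0, d), f0)
bound = comb(g1 + d // 2, g1 - 1) + comb(g1 + d // 2 - 1, g1 - 1)
assert len(empties) == bound == 5                 # 3 empty edges and 2 empty triangles
```

The dimensions of the empty simplices found this way are $1$ and $2 = \frac{d \pm 1}{2}$, matching the description above for odd $d$.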
\medskip
\begin{remark} \label{rem-skeleta}
Recall that the {\em $k$-skeleton} of an arbitrary $d$-polytope $P$ is the set of all
faces of $P$ whose dimension is at most $k$. Perles \cite{perles} has shown:
The number of combinatorial types of $k$-skeleta of $d$-polytopes with $d+g_1+1$ vertices
is bounded by a function in $k$ and $g_1$.
Following \cite{Kalai-94}, the proof of this result can be reduced to the case where the
polytopes are simplicial. Then one concludes by using a bound on $N(k)$ because the
$k$-skeleton of a simplicial polytope is determined by its set of empty simplices of
dimension $\leq k$.
\end{remark}
\medskip
In \cite{Kalai-94} Kalai sketches an argument showing that the number of empty simplices
can be bounded with very little information on the $g$-vector. Below, we will slightly
correct \cite{Kalai-94}, Theorem 3.8, and give explicit bounds. We use Notation
\ref{not-bin-expansion}.
\begin{theorem} \label{thm-kalai-3-8}
Fix integers $j \geq k \geq 1$ and $b \geq 0$. Let $P$ be a simplicial
$d$-polytope with $g_k \leq b$, where we define $g_i = 0$ if $i > u$. If $d \geq j +
k$, then the number of empty $j$-simplices of $P$ is bounded by
$$
\left \{
\begin{array}{ll}
b\pp{k, j-k+1} & \mbox{if} ~ \ j < \frac{d}{2} \\
b\pp{k, j-k+1} + b\pp{k, j-k} & \mbox{if} ~ \ j = \frac{d}{2} \\
b\pp{k, d-j-k} & \mbox{if} ~ \ j > \frac{d}{2} \\
\end{array} \right.
$$
\end{theorem}
\begin{proof}
We have to bound $\beta_{1,j+1}^K (K[P])$. By Corollary \ref{cor-no-empty}, $P$ has no
empty $j$-simplices if $u+1 \leq j \leq d-u-1$. Thus, we may assume that $1 \leq j \leq
u$ or $d-u \leq j \leq d-1$.
\smallskip
{\em Case 1}: Assume $1 \leq j \leq u \leq \frac{d}{2}$. If $j < \frac{d}{2}$, then
Corollary \ref{cor-generators} provides:
$$
\beta_{1,j+1}^K (K[P]) \leq g_j\pp{j} - g_{j+1}.
$$
Using Lemma \ref{lem-monotonie}, we see that the bound is maximized if $g_{j+1} = 0$ and
$g_j$ is as large as possible. Since the $g$-vector is an O-sequence, we get $g_j \leq
g_k\pp{k, j-k} \leq b\pp{k, j-k}$. Our claimed bound follows.
If $j = \frac{d}{2}$, then we get $j = u = \frac{d}{2}$. Hence Corollary
\ref{cor-generators} gives:
$$
\beta_{1,j+1}^K (K[P]) \leq g_j\pp{j} + g_j.
$$
Now the bound is shown as above.
\smallskip
{\em Case 2}: Assume $\frac{d}{2} \leq d-u \leq j \leq d-1$. By the above considerations,
we may also assume that $j \neq \frac{d}{2}$. Thus, Corollary \ref{cor-generators}
provides:
$$
\beta_{1,j+1}^K (K[P]) \leq g_{d-j} - (g_{d+1-j})\pl{d+1-j}.
$$
Using our assumption $d-j \geq k$, we conclude as above.
\end{proof}
\begin{remark}
(i) In \cite{Kalai-94}, Theorem 3.8, the existence of bounds as in the above result is
claimed without assuming $d \geq j+k$. However, this is impossible, as Case 2 in the
above proof shows. Indeed, if $d-j < k$ and $d > j > \frac{d}{2}$, then knowledge of
$g_k$ does not give any information on $g_{d-j}$. In particular, $g_{d-j}$ can be
arbitrarily large preventing the existence of a bound on $\beta_{1,j+1}^K (K[P])$ in
terms of $g_k, j, k$ in this case.
For a concrete example, fix $k = j = 2$ and $d=3$. Then the Billera-Lee
3-polytope with $g$-vector $(1, g_1)$ has $g_1$ empty 2-simplices.
(ii) Note that the bounds in Theorem \ref{thm-kalai-3-8} are sharp if $g_k = b$. This
follows from the proof.
\end{remark}
If we only know that $d$ is large enough compared to $j$ and $k$, then we have the
following weaker bound.
\begin{corollary} \label{cor-j-simp}
Fix integers $j \geq k \geq 1$, $b \geq 0$, and $d \geq j+k$. Then the number of empty
$j$-simplices of every simplicial $d$-polytope with $g_k \leq b$ is at most $b\pp{k,
j-k+1} + b\pp{k, j-k}$.
\end{corollary}
\begin{proof}
By Theorem \ref{thm-kalai-3-8}, it remains to consider the case where $j > \frac{d}{2}$.
But then $d-j < j$, thus $b\pp{k, d-j-k} \leq b\pp{k, j-k}$, and we conclude again by
using Theorem \ref{thm-kalai-3-8}.
\end{proof}
\begin{remark}
Notice that the bound in Corollary \ref{cor-j-simp} is independent of the number of
vertices of the polytope and its dimension, provided the latter is large enough.
\end{remark}
In essence, all the bounds on the number of empty simplices are bounds on certain first
graded Betti numbers of the Stanley-Reisner ring of a simplicial polytope. As such, using
Theorem \ref{thm-gor-betti}, they can be extended to bounds for the first graded Betti
numbers of any graded Gorenstein algebra with the Weak Lefschetz property. We leave this
and analogous considerations for higher Betti numbers to the interested reader.
\bigskip
\noindent
{\bf Acknowledgments}
\smallskip
The author would like to thank Gil Kalai, Carl Lee, and Juan Migliore for
motivating discussions, encouragement, and helpful comments.
% Statistics for Unimodal Sequences (https://arxiv.org/abs/2106.02334)
% Abstract: We prove a number of limiting distributions for statistics for unimodal
% sequences of positive integers by adapting a probabilistic framework for integer
% partitions introduced by Fristedt. The difficulty in applying the direct analogue of
% Fristedt's techniques to unimodal sequences lies in the fact that the generating
% function for partitions is an infinite product, while that of unimodal sequences is
% not. Essentially, we get around this by conditioning on the size of the largest part
% and working uniformly on contributing summands. Our framework may be used to derive
% many distributions, and our results include joint distributions for largest parts and
% multiplicities of small parts. We discuss ranks as well. We further obtain analogous
% results for strongly unimodal sequences.
\section{Introduction and Statement of results}
A {\it partition} $\lambda$ of $n$ is a sequence of positive integers that sum to $n$,
$$
\lambda: \qquad \lambda_1 \geq \dots \geq \lambda_{\ell} > 0, \qquad \sum_{k=1}^{\ell} \lambda_k=n.
$$
We write $|\lambda|=n$ for the {\it size} of $\lambda$, set $p(n):= \#\{\lambda \vdash n\}$, and we define $p(0):=1$. The generating function for partitions is the well-known infinite product
$$
P(q):=\sum_{\lambda} q^{|\lambda|}=\sum_{n \geq 0} p(n)q^n = \prod_{n \geq 1} \frac{1}{1-q^n}.
$$ An important result in the study of partition statistics is due to Erd\H{o}s and Lehner, who proved that, as $n \to \infty$, the largest part of almost all partitions of $n$ is roughly $A\sqrt{n}\log(A\sqrt{n})$ and varies from this mean by an extreme value distribution. Here and throughout the article, let $A:=\frac{\sqrt{6}}{\pi}$.
\begin{theorem}[Theorem 1.1 of \cite{EL}]\label{T:ErdosLehner}
For $v \in \mathbb{R}$, we have
$$
\lim_{n \to \infty} \frac{\#\left\{\lambda \vdash n : \frac{\lambda_1-A\sqrt{n}\log\left(A\sqrt{n}\right)}{A\sqrt{n}} \leq v \right\}}{p(n)} = e^{-e^{-v}}.
$$
\end{theorem}
Erd\H{o}s and Lehner's proof used only straightforward recurrences and the Hardy--Ramanujan asymptotic formula for $p(n)$ (\cite{HR}, eq. (1.41)). After a series of papers by Szalay--Tur\'{a}n and Erd\H{o}s--Tur\'{a}n (see \cite{ET,ST1,ST2,ST3}), Fristedt introduced what has proved to be an indispensable probabilistic technique, allowing him to greatly extend the previously known distributions \cite{F}. In particular, Theorem \ref{T:ErdosLehner} was extended to a joint distribution for the $t_n$ largest parts, where $t_n=o(n^{\frac 14})$.
\begin{theorem}[Theorems 2.5 and 2.6 of \cite{F}]\label{T:Fristedtlargeparts}
For any integer $t_n=o(n^{\frac 14})$ and $\{v_t\}_{t =1}^{t_n} \subset \mathbb{R}^{t_n}$, the following limit vanishes
$$
\lim_{n \to \infty} \left(\frac{\#\left\{\lambda \vdash n : \frac{\lambda_t-A\sqrt{n}\log\left(A\sqrt{n}\right)}{A\sqrt{n}} \leq v_t \ \text{for $1 \leq t \leq t_n$} \right\}}{p(n)}
- \int_{-\infty}^{v_1} \cdots \int_{-\infty}^{v_{t_n}} f(u_1,\dots, u_{t_n}) du_{t_n} \cdots du_1 \right),
$$
where
$$
f(u_1,\dots, u_{t_n}):= \begin{cases} e^{-\sum_{t=1}^{t_n} u_t - e^{-u_{t_n}}} & \text{if $u_1 \geq \dots \geq u_{t_n}$,} \\ 0 & \text{otherwise.} \end{cases}
$$
\end{theorem}
\begin{remark}
The distribution on the right-hand side was interpreted in \cite{F} as a limiting Markov chain; another equivalent distribution was given by Pittel (\cite{Pi}, lemma on p. 127).
\end{remark}
\begin{remark}
Fristedt obtained a stronger version of the above, which he stated in terms of the L\'{e}vy--Prokhorov distance between measures (see \cite{Bi1}, p. 72). Our results below could also be strengthened in this way, but we prefer the simpler statements, which show only convergence in distribution.
\end{remark}
Fristedt found a wide array of limiting distributions by introducing what is now known as a Boltzmann model. This is now a standard technique for studying the statistical behavior of, and constructing sampling algorithms for, many combinatorial structures (see \cite{DFLS}), but our exposition here is self-contained. The Boltzmann model replaces the uniform probability measure on $\{\lambda \vdash n\}$ with a measure on all partitions $\lambda$, by defining
\begin{equation}\label{E:Boltzmannmodelpartitionsdef}
Q_q(\lambda):= \frac{q^{|\lambda|}}{P(q)}, \qquad \text{for $q \in (0,1)$.}
\end{equation}
Since $Q_q$ assigns the same probability to every $\lambda \vdash n$, the Boltzmann model, when conditioned on $\lambda \vdash n$, equals the uniform measure on partitions of $n$, and thus this technique is often called a {\it conditioning device}. In statistical mechanics, the Boltzmann model is known as the macro-canonical ensemble, and the uniform measure on $\{\lambda \vdash n\}$ is known as the micro-canonical ensemble (see \cite{V}).
Under $Q_q$, one directly gains independence of the relevant random variables that one lacks under the uniform probability measure. This is precisely because $P(q)$ is an infinite product, and much of the work applying Boltzmann models to study statistics for partitions relies on product generating functions. In this article, we show that a conditioned Boltzmann model may still be useful even without an infinite product generating function. We demonstrate this in the case of unimodal sequences.
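Under $Q_q$ the multiplicity of each part $k$ is an independent geometric random variable, which makes the conditioning device easy to simulate. The following Python sketch (illustrative only; it assumes the tuning $q = e^{-1/(A\sqrt{n})}$ used in Fristedt's analysis and truncates the part sizes at a harmless cutoff) samples a uniformly random partition of $n$ by rejection:

```python
import math, random

def boltzmann_partition(n, rng, cutoff_factor=40):
    """Sample a uniform partition of n via the conditioned Boltzmann model:
    under Q_q the multiplicity of part k is geometric with parameter q^k;
    reject until the total size is exactly n."""
    A = math.sqrt(6) / math.pi
    q = math.exp(-1 / (A * math.sqrt(n)))
    kmax = int(cutoff_factor * A * math.sqrt(n))  # larger parts are negligibly rare
    while True:
        mults, size = {}, 0
        for k in range(1, kmax + 1):
            m = 0
            while rng.random() < q ** k:  # geometric: P(m copies of k) = q^{km}(1 - q^k)
                m += 1
            if m:
                mults[k] = m
                size += k * m
            if size > n:
                break
        if size == n:
            return mults

lam = boltzmann_partition(50, random.Random(0))
assert sum(k * m for k, m in lam.items()) == 50
```

The acceptance probability of the rejection loop is of order $n^{-3/4}$, which is what makes the conditioned model usable both in proofs and in practice.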
A {\it unimodal sequence} $\lambda$ of size $n$ is a generalization of an integer partition, in which parts are allowed to increase and then decrease:
\begin{equation}\label{E:uniseqdef}
\lambda: \qquad \lambda_{1}^{[L]} \leq \dots \leq \lambda_{r}^{[L]} \leq \lambda_{\mathrm{PK}} \geq \lambda_{s}^{[R]} \geq \dots \geq \lambda_{1}^{[R]} \qquad \text{and} \qquad \lambda_{\mathrm{PK}} + \sum_{k=1}^{r} \lambda_{k}^{[L]} + \sum_{k=1}^{s} \lambda_{k}^{[R]} = n.
\end{equation}
We write $\mathcal{U}_n$ for the set of unimodal sequences of size $n$ and set $u(0):=1$. Let $u(n):= \#\mathcal{U}_n$ and $\mathcal{U}:=\cup_{n \geq 0} \mathcal{U}_n$. The special part $\lambda_{\mathrm{PK}}$ is called the {\it peak}. The generating function for $u(n)$ is obtained by summing over the size of peaks as
$$
U(q):=\sum_{\lambda \in \mathcal{U}} q^{|\lambda|} = \sum_{n \geq 0} u(n)q^n=1+\sum_{m \geq 1} q^m \prod_{k=1}^{m} \frac{1}{\left(1-q^k\right)^2}.
$$
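The coefficients $u(n)$ can be computed from this series by truncated power-series arithmetic, and cross-checked by direct counting: a unimodal sequence of size $n$ is a choice of peak $m$ together with two partitions into parts $\leq m$. A short Python sketch (helper names are ours):

```python
from functools import lru_cache

def u_series(N):
    """Coefficients u(0), ..., u(N) of U(q) = 1 + sum_{m>=1} q^m / ((q;q)_m)^2."""
    total = [0] * (N + 1)
    total[0] = 1
    for m in range(1, N + 1):
        coeffs = [0] * (N + 1)
        coeffs[m] = 1
        for k in range(1, m + 1):        # multiply by (1 - q^k)^{-2}, truncated at q^N
            for _ in range(2):
                for i in range(k, N + 1):
                    coeffs[i] += coeffs[i - k]
        for i in range(N + 1):
            total[i] += coeffs[i]
    return total

@lru_cache(maxsize=None)
def p_bounded(n, m):
    """Number of partitions of n into parts of size at most m."""
    if n == 0:
        return 1
    if n < 0 or m == 0:
        return 0
    return p_bounded(n - m, m) + p_bounded(n, m - 1)

def u_direct(n):
    """Count unimodal sequences of size n directly: choose the peak m, then
    independent left and right partitions with parts at most m."""
    if n == 0:
        return 1
    return sum(p_bounded(a, m) * p_bounded(n - m - a, m)
               for m in range(1, n + 1) for a in range(n - m + 1))

U = u_series(12)
assert all(U[n] == u_direct(n) for n in range(13))
```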
\begin{remark}
There are two slightly different definitions of unimodal sequences in the literature which differ in whether or not the peak is specified. What we call unimodal sequences here were referred to as {\it stacks with summits} on p. 196 of \cite{BM}, as opposed to {\it stacks}, where the peak is unspecified. (This is equivalent to forcing the inequality $\lambda_{\mathrm{PK}}\geq\lambda_{s}^{[R]}$ to be strict.) All of our results here hold for these stacks as well; it is simply easier to state and prove our results with the above definition. Stacks seem to have been introduced by Auluck in \cite{Au}, who called them {\it type B partitions}, whereas stacks with summits are also called {\it V-partitions} in $\S 2.5$ of \cite{St}. Andrews \cite{A1} has also called these sequences {\it convex compositions} when the peak is strictly larger than the other parts (which is equivalent to adding 1 to the peak of unimodal sequences).
\end{remark}
To state our results, let $\mathbf{P}_{n}$ be the uniform probability measure on $\mathcal{U}_n$, and introduce the following random variables on $\mathcal{U}$:
\begin{itemize}[leftmargin=*]
\item Let ${\rm PK}(\lambda):=\lambda_{\mathrm{PK}}$ denote the peak of a sequence.
\item Let $X_{k}^{[L]}(\lambda)$ (resp. $X_{k}^{[R]}(\lambda)$) denote the number of parts in $\lambda$ equal to $k$ and to the left (resp. right) of the peak.
\item Let $Y_{t}^{[L]}(\lambda)$ (resp. $Y_{t}^{[R]}(\lambda)$) denote the $t$-th largest part in $\lambda$ to the left (resp. right) of the peak.
\item Let $N(\lambda):=\sum_{k \geq 1} k(X_{k}^{[L]}(\lambda) + X_{k}^{[R]}(\lambda)) + {\rm PK}(\lambda)$ denote the {\it size} of $\lambda$.
\end{itemize}
\noindent{\bf Example.} If $\lambda$ is the unimodal sequence $1+1+2+3+3+{\bf 3}+1$ (the peak is boldfaced), then
$$
X_{1}^{[L]}(\lambda)=2, \ X_{2}^{[L]}(\lambda)=1, \ X_{3}^{[L]}(\lambda)=2, \ {\rm PK}(\lambda)=3, \ X_{3}^{[R]}(\lambda)=0,\ X_{2}^{[R]}(\lambda)=0, \ X_{1}^{[R]}=1, $$ $$ N(\lambda)=14.
$$
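These statistics are straightforward to compute from the list of parts together with the marked peak position; the small Python helper below (names are ours) reproduces the numbers of the example:

```python
def unimodal_stats(parts, peak_index):
    """Multiplicity counts X_k^{[L]}, X_k^{[R]}, the peak, and the size N for a
    unimodal sequence given as a list of parts with a marked peak position."""
    left, peak, right = parts[:peak_index], parts[peak_index], parts[peak_index + 1:]
    XL, XR = {}, {}
    for k in left:
        XL[k] = XL.get(k, 0) + 1
    for k in right:
        XR[k] = XR.get(k, 0) + 1
    return XL, peak, XR, sum(parts)

# the example above: 1 + 1 + 2 + 3 + 3 + [3] + 1, peak at index 5
XL, pk, XR, N = unimodal_stats([1, 1, 2, 3, 3, 3, 1], peak_index=5)
assert (XL, pk, XR, N) == ({1: 2, 2: 1, 3: 2}, 3, {1: 1}, 14)
```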
Our first result is an analogue of Theorem \ref{T:ErdosLehner}. Here and throughout the article, let $B:=\frac{\sqrt{3}}{\pi}$.
\begin{theorem}\label{T:PKdist}
For $v \in \mathbb{R}$, we have
$$
\lim_{n \to \infty} \mathbf{P}_{n}\left(\frac{{\rm PK}-B\sqrt{n}\log\left(2B\sqrt{n}\right)}{B\sqrt{n}} \leq v \right)=e^{-e^{-v}}.
$$
Moreover, if $\mathbf{E}_n$ denotes expectation under $\mathbf{P}_{n}$, then we have
$$
\mathbf{E}_n({\rm PK}) = B\sqrt{n}\log\left(2B\sqrt{n}\right)+B\gamma \sqrt{n}(1+o(1)),
$$
where $\gamma$ is the Euler--Mascheroni constant.
\end{theorem}
\begin{remark} Note that $\mathcal{U}_n$ is almost the same as the set of pairs of partitions of total size $n$, so a reasonable heuristic guess would be that Theorem \ref{T:ErdosLehner} holds with $\lambda_1 \mapsto {\rm PK}$ and $n \mapsto \frac{n}{2}$. However, the presence of the extra factor of $\log(2)$ is explained by Theorem \ref{T:largeparts} below.
\end{remark}
\begin{remark}
It would be interesting to improve the error term for the mean, as Ngo and Rhoades did for Theorem 1.1 \cite{NR}. We discuss this further in Section \ref{S:moments}.
\end{remark}
We further prove the following analogue of Theorem \ref{T:Fristedtlargeparts}.
\begin{theorem}\label{T:largeparts}
For any integer $t_n=o(n^{\frac 14})$, $v_0 \in \mathbb{R}$, and $\{v_t^{[j]}\}_{1 \leq t \leq t_n, j \in \{L,R\}} \subset \mathbb{R}^{2t_n}$, the following difference vanishes as $n \to \infty$,
\begin{multline*}
\mathbf{P}_{n}\left(\frac{{\rm PK}-B\sqrt{n}\log\left(2B\sqrt{n}\right)}{B\sqrt{n}} \leq v_0, \ \frac{Y_{t}^{[j]}-B\sqrt{n}\log\left(2B\sqrt{n}\right)}{B\sqrt{n}} \leq v_{t}^{[j]}, \ \ \text{for $j \in \{L,R\}$, \ $1 \leq t \leq t_n$}\right) \\
- \int_{-\infty}^{v_0} \int_{-\infty}^{v_{1}^{[L]}} \int_{-\infty}^{v_{1}^{[R]}} \cdots \int_{-\infty}^{v_{t_n}^{[L]}} \int_{-\infty}^{v_{t_n}^{[R]}} F\left(u_0,u_{1}^{[L]},u_{1}^{[R]}, \dots, u_{t_n}^{[L]},u_{t_n}^{[R]}\right) du_{t_n}^{[R]} \cdots du_0,
\end{multline*}
where
\begin{multline*}
F\left(u_0,u_{1}^{[L]},u_{1}^{[R]}, \dots, u_{t_n}^{[L]},u_{t_n}^{[R]}\right) \\:= \begin{cases}\frac{1}{2^{2t_n}}e^{-u_0-\sum_{1 \leq t \leq t_n} \left(u_t^{[L]}+u_t^{[R]}\right) - \frac{e^{-u_{t_n}^{[L]}}}{2} - \frac{e^{-u_{t_n}^{[R]}}}{2}} & \text{if $u_0 \geq u_1^{[j]} \geq \dots \geq u_{t_n}^{[j]}$ for $j \in \{L,R\}$}, \\ 0 & \text{otherwise.} \end{cases}
\end{multline*}
\end{theorem}
Next, we show that the joint distribution of the numbers of small parts, when re-scaled, behaves as the joint distribution for independent and identically distributed exponential random variables; this is an analogue of Theorem 2.2 of \cite{F}.
\begin{theorem}\label{T:smallparts}
Let $k_n=o(n^{\frac 14})$ be an integer and $\{v_k^{[j]}\}_{1 \leq k \leq k_n, j \in \{L,R\}} \subset [0,\infty)^{2k_n}$. Then
$$
\lim_{n \to \infty} \left(\mathbf{P}_{n}\left(\frac{kX_{k}^{[j]}}{B\sqrt{n}} \leq v_{k}^{[j]}, \ \ \text{for $1 \leq k \leq k_n$ and $j \in \{L,R\}$} \right) - \prod_{\substack{1 \leq k \leq k_n \\ j \in \{L,R\}}} \int_{0}^{v_{k}^{[j]}} e^{-u_{k}^{[j]}}du_{k}^{[j]} \right)=0.
$$
Moreover, for an integer $k=o(n^{\frac{1}{2}})$ and $v^{[L]}, v^{[R]} \in \mathbb{R}$, we have
$$
\lim_{n \to \infty} \mathbf{P}_n\left(\frac{kX_k^{[L]}}{B\sqrt{n}} \leq v^{[L]}, \frac{kX_k^{[R]}}{B\sqrt{n}} \leq v^{[R]} \right) = \left(1-e^{-v^{[L]}}\right)\left(1-e^{-v^{[R]}}\right).
$$
\end{theorem}
By swapping the left and right parts of every element of $\mathcal{U}_n$, it is combinatorially obvious that $X_{k}^{[L]}$ and $X_{k}^{[R]}$ have the same distribution under $\mathbf{P}_{n}$. A more refined measure of symmetry in the small parts would be a joint distribution for the differences of $X_{k}^{[L]}$ and $X_{k}^{[R]}$ for $k \leq k_n$. As a corollary of the above, we show that this behaves like the joint distribution of independent Laplace distributions. Below and throughout, $\chi_S:=1$ if a statement $S$ holds and 0 otherwise.
\begin{corollary}\label{C:smallpartsskewness}
Let $k_n=o(n^{\frac 14})$ be an integer and let $\{v_k\}_{1 \leq k \leq k_n} \subset \mathbb{R}^{k_n}$. Then the following limit vanishes,
\begin{align*}
&\lim_{n \to \infty} \left(\mathbf{P}_{n}\left(\frac{k\left(X_{k}^{[L]}-X_{k}^{[R]}\right)}{B\sqrt{n}} \leq v_{k}, \ \ \text{for $1 \leq k \leq k_n$} \right) - \prod_{1 \leq k \leq k_n} \left(\left(1-\frac{e^{-v_k}}{2}\right)\chi_{v_k \geq 0} + \frac{e^{v_k}}{2} \chi_{v_k < 0} \right) \right).
\end{align*}
\end{corollary}
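Each factor on the right-hand side is the distribution function of the difference of two independent exponential random variables (a Laplace distribution), consistent with the exponential limits in Theorem \ref{T:smallparts}. A quick Monte Carlo sanity check in Python (illustrative only):

```python
import math, random

def laplace_cdf(v):
    """CDF of the difference of two independent Exp(1) random variables."""
    return 1 - math.exp(-v) / 2 if v >= 0 else math.exp(v) / 2

rng = random.Random(42)
trials = 100_000
for v in (-1.0, -0.3, 0.0, 0.5, 1.5):
    hits = sum(rng.expovariate(1) - rng.expovariate(1) <= v for _ in range(trials))
    # empirical CDF should match the Laplace CDF to within Monte Carlo error
    assert abs(hits / trials - laplace_cdf(v)) < 0.01
```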
By summing the differences $X_{k}^{[L]}-X_{k}^{[R]}$ over all $k$, we obtain the {\it rank} of a unimodal sequence,
\begin{equation}\label{E:rankdef}
{\rm rank}(\lambda):=\sum_{k \geq 1}\left( X_{k}^{[L]}-X_{k}^{[R]} \right).
\end{equation}
Bringmann, Jennings-Shaffer, and Mahlburg proved that at the scaling of $\sqrt{n}$ the rank obeys a logistic distribution.
\begin{theorem}[Proposition 1.2 of \cite{BJSM}]\label{T:rank}
For $x \in \mathbb{R}$, we have
$$
\lim_{n \to \infty} \mathbf{P}_{n}\left(\frac{{\rm rank}}{B\sqrt{n}} \leq x \right) = \frac{1}{1+e^{-x}}.
$$
\end{theorem}
Bringmann, Jennings-Shaffer, and Mahlburg used the method of moments, an approach that relies on suitable two-variable generating functions and is independent of the present work. We prove the following related result.
\begin{theorem}\label{T:totalsmallparts}
For any integer $k_n=o(\sqrt{n})$ and $v_L,v_R \in \mathbb{R}$,
$$\lim_{n \to \infty} \mathbf{P}_{n}\left( \frac{\sum_{k \leq k_n} X_{k}^{[j]}-B\sqrt{n}\log(k_n)}{B\sqrt{n}} \leq v_{j}, \ \text{for $j \in \{L,R\}$}\right) = e^{-e^{-v_{L}}-e^{-v_R}}.$$
\end{theorem}
Thus, the total small part counts on the left and the right behave as independent extreme value distributions when the mean of $B\sqrt{n}\log (k_n)$ is subtracted, and their convolution gives the logistic distribution in Theorem \ref{T:rank}. Our techniques are not robust enough to prove Theorem \ref{T:rank}, but when combined with the following, Theorem \ref{T:totalsmallparts} is highly suggestive of Theorem \ref{T:rank}.
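The claimed convolution is a classical fact that is easy to probe numerically: if $V$ has the extreme value law $e^{-e^{-v}}$, then $V = -\log E$ with $E$ exponential, and for independent copies $\mathbf{P}(V_L - V_R \leq x) = \mathbf{P}(E_R \leq e^x E_L) = \frac{1}{1+e^{-x}}$. A quick simulation (illustrative only):

```python
import math, random

def logistic_cdf(x):
    return 1 / (1 + math.exp(-x))

rng = random.Random(7)
trials = 100_000
for x in (-2.0, -0.5, 0.0, 1.0):
    hits = 0
    for _ in range(trials):
        vL = -math.log(rng.expovariate(1))  # standard Gumbel variate
        vR = -math.log(rng.expovariate(1))
        hits += vL - vR <= x
    # the difference of two independent Gumbels is logistic
    assert abs(hits / trials - logistic_cdf(x)) < 0.01
```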
\begin{proposition}[see Theorem 1.2 of \cite{B21}]\label{P:totallargeparts}
For any fixed $\varepsilon > 0$ and any fixed $a >0$, there exists an explicit function $f(a)$ such that
$$
\lim_{n \to \infty} \mathbf{P}_{n}\left(\left|\frac{1}{\sqrt{n}} \sum_{k \geq a\sqrt{n}} \left(X_{k}^{[L]}-X_{k}^{[R]}\right)-f(a)\right| < \varepsilon \right) = 1.
$$
\end{proposition}
Note that our methods also easily extend to the so-called $t$-th successive ranks, defined by limiting the range in \eqref{E:rankdef} to $k \geq t$, whereas extension of the techniques in \cite{BJSM} appears to be non-trivial.
Finally, we can prove results similar to Theorems \ref{T:largeparts} and \ref{T:smallparts} for strongly unimodal sequences, defined so that the inequalities in \eqref{E:uniseqdef} are strict. Let $\mathcal{U}_n^*$ denote the set of strongly unimodal sequences of size $n$, let $u^*(n):=\#\mathcal{U}_n^*$ and let $\mathbf{P}_{n}^*$ denote the uniform probability measure on $\mathcal{U}^*_n$.
\begin{theorem}\label{T:*largesmallparts}
For strongly unimodal sequences, Theorems \ref{T:PKdist} and \ref{T:largeparts} hold with $\mathbf{P}_{n}$ replaced by $\mathbf{P}_{n}^*$ and $B= \frac{\sqrt{3}}{\pi}$ replaced by $A=\frac{\sqrt{6}}{\pi}$.
\end{theorem}
Since a part can occur at most once in a strongly unimodal sequence, it is natural to expect the following analogue of Theorem 9.2 of \cite{F}. Note that here we can take a larger range for $k_n$ than in Theorem \ref{T:smallparts}. Here and throughout, boldface letters represent vectors (except when we use them for probability measures).
\begin{theorem}\label{T:*smallparts}
Suppose $k_n=o(n^{\frac{1}{2}})$ and let ${\bm w} \in \{0,1\}^{2k_n}$. Then
$$
\lim_{n \to \infty} \left(\mathbf{P}_{n}^*\left( \left(X_{k}^{[j]}\right)_{j \in \{L,R\}, 1 \leq k \leq k_n} = {\bm w} \right) - \frac{1}{2^{2k_n}} \right)=0.
$$
\end{theorem}
We content ourselves with the above results, as once our machinery is described many of the details follow \cite{F} closely. It would be interesting to apply our methods to the other types of stacks and unimodal sequences discussed in \cite{BM} or to extend the work of Pittel \cite{P} to unimodal sequences, where Fristedt's methods were used to study distributions of mid-range parts in partitions.
Section \ref{S:Preliminaries} contains some preliminaries needed for asymptotic analysis. In Section \ref{S:Fristedt}, we briefly recall Fristedt's machinery and in Section \ref{S:Boltzmann}, we introduce our modification, a conditioned Boltzmann model for unimodal sequences. We believe these methods should be useful in contexts beyond the present work, especially for non-product generating functions in the theory of partitions. Section \ref{S:details} contains the details of the proofs of our main theorems. Lastly, we append to this article a discussion of the moment generating functions for ${\rm PK}$ in Section \ref{S:moments}.
\section*{Acknowledgements} The first author is partially supported by the SFB/TRR 191 ``Symplectic Structures in Geometry, Algebra and Dynamics'', funded by the DFG (Projektnummer 281071066 TRR 191). The second author has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001179).
\section{Preliminaries}\label{S:Preliminaries}
\subsection{Notation}
We use the following standard asymptotic notation. We write
\begin{itemize}[leftmargin=*]
\item $f(n) \sim g(n)$ if $\lim_{n \to \infty} \frac{f(n)}{g(n)} =1$,
\item $f(n)=O(g(n))$ if the quotient $\frac{f(n)}{g(n)}$ is bounded as $n \to \infty$,
\item $f(n)=o(g(n))$ if $\lim_{n \to \infty} \frac{f(n)}{g(n)}=0$,
\item $f(n)=\omega(g(n))$ if the quotient $\frac{f(n)}{g(n)}$ is unbounded as $n \to \infty$, and
\item $f(n) \asymp g(n)$ if $f(n)=O(g(n))$ and $g(n)=O(f(n))$.
\end{itemize}
To concisely write products in generating functions, we employ the standard $q$-factorial notation, defined for $n \in \mathbb N_0 \cup \{\infty\}$ by
$$
(a)_n=(a;q)_n:=\prod_{j=0}^{n-1} \left(1-aq^j\right).
$$
\subsection{Euler--Maclaurin summation, logarithmic series, and integral calculations}
We need the following two variants of Euler--Maclaurin summation. Let $\{x\}:=x-\lfloor x \rfloor$ denote the fractional part of $x$. Note that the first variant is recovered by taking $a \to 1^-$ in Theorem B.5 of \cite{MV}.
\begin{lemma}[Theorem B.5 in \cite{MV}] \label{L:eulermac}
For $N \in \mathbb{N}$ and continuously differentiable $g:\mathbb{R} \to \mathbb{C}$, we have
\begin{align}
\sum_{k=1}^N g(k) &= \int_1^N g(u)du + \frac{1}{2}\left(g(N)+g(1)\right) + \int_1^N \left( \{u\} - \frac{1}{2}\right) g'(u)du \label{E:eulermac1} \\
&= \int_0^N g(u)du + \frac{1}{2}\left(g(N)-g(0)\right) + \int_0^N \left( \{u\} - \frac{1}{2}\right) g'(u) du. \label{E:eulermac2}
\end{align}
\end{lemma}
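Identity \eqref{E:eulermac1} can be verified numerically for a smooth test function of our choosing; the sketch below uses a composite midpoint rule whose panel edges align with the integer break points of $\{u\}$, so the piecewise-smooth integrand causes no trouble:

```python
import math

def g(u):  return math.exp(-u / 2)          # smooth test function
def dg(u): return -0.5 * math.exp(-u / 2)   # its derivative

def midpoint(f, a, b, panels=2000):
    """Composite midpoint rule; panel edges land on integers when (b-a) divides panels."""
    h = (b - a) / panels
    return h * sum(f(a + (i + 0.5) * h) for i in range(panels))

N = 6
lhs = sum(g(k) for k in range(1, N + 1))
rhs = (midpoint(g, 1, N, 5000) + 0.5 * (g(N) + g(1))
       + midpoint(lambda u: (u - math.floor(u) - 0.5) * dg(u), 1, N, 5000))
assert abs(lhs - rhs) < 1e-6   # Euler--Maclaurin identity holds to quadrature accuracy
```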
The following lemma is useful for approximating logarithmic series by Taylor expansions. Here and throughout, we write $\operatorname{Log}$ for the principal branch of the complex logarithm, and $\log$ when $\operatorname{Log}$ is restricted to the positive real axis.
\begin{lemma}[Lemma 1 in \cite{R} with Lemma 1 in \cite{B20}]\label{L:Romik}
There exists a constant $C>0$ such that for all $0<x<1$ and $s \in \mathbb R$, we have
$$\left|\operatorname{Log}\left(\frac{1\pm x}{1\pm xe^{is}}\right)-\frac{isx}{1\pm x}+ \frac{s^2x}{2(1\pm x)^2} \right| \leq C \frac{x|s|^3}{(1-x)^3}.$$
\end{lemma}
We also need the following lemma concerning the asymptotic behavior of a certain product. This should be compared with a similar formula on p. 723 of \cite{F}.
\begin{lemma}\label{L:largepartsproduct}
Uniformly in $v \geq - \frac{\log (n)}{8}$ as $n \to \infty$, we have
$$
\prod_{k > A\sqrt{n}\left(v+\log(A\sqrt{n})\right)} \frac{1}{1+e^{-\frac{k}{A\sqrt{n}}}} \sim e^{-e^{-v}}.
$$
\end{lemma}
\begin{proof}
The claim of the lemma is equivalent to
$$
\lim_{n \to \infty}\left( e^{-v}-\sum_{k > A\sqrt{n}\left(v+\log(A\sqrt{n})\right)} \log\left(1+e^{-\frac{k}{A\sqrt{n}}}\right)\right)= 0,
$$
uniformly in $v \geq -\frac{\log(n)}{8}$. After using the inequality $-x < - \log(1+x)<-x+\frac{x^2}{2}$ for $x \in (0,1)$, a short calculation shows
$$
e^{-v}- \sum_{k > A\sqrt{n}\left(v+\log(A\sqrt{n})\right)} \log\left(1+e^{-\frac{k}{A\sqrt{n}}}\right) = O\left(e^{-v-\frac{\log (n)}{2}}\right) + O\left(e^{-2v-\frac{\log(n)}{2}}\right).
$$ Both terms are $o(1)$ if $v\geq-\frac{\log(n)}{8}$, completing the proof.
\end{proof}
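The convergence in Lemma \ref{L:largepartsproduct} is also easy to observe numerically; the following sketch (with $A=1$ chosen arbitrarily for illustration) compares the logarithm of the product with $-e^{-v}$.

```python
import math

def tail_log_product(n, v, A=1.0):
    """Compute -sum_{k > A*sqrt(n)*(v + log(A*sqrt(n)))} log(1 + e^{-k/(A*sqrt(n))}),
    which the lemma predicts is close to -e^{-v} for large n."""
    s = A * math.sqrt(n)
    k = math.floor(s * (v + math.log(s))) + 1
    total = 0.0
    while True:
        term = math.log1p(math.exp(-k / s))
        total += term
        if term < 1e-18:
            break
        k += 1
    return -total
```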
Another integral evaluation gives the distribution of the sum of independent exponentially distributed random variables with means $1$, $\frac{1}{2}$, $\dots$, $\frac{1}{n}$; the proof follows by induction on $n$.
\begin{lemma}\label{L:stochYuleintegral}
For $x \geq 0$ and for $n \in \mathbb{N}$, we have
$$
n!\int_0^x \int_0^{x-u_n} \cdots \int_0^{x-u_n-\dots - u_2} e^{-\sum_{j=1}^{n} ju_j} du_1 \cdots du_n = \left(1-e^{-x}\right)^n.
$$
\end{lemma}
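For $n=2$ the identity can be confirmed by a direct numerical integration (a sanity check only; the inner integral is evaluated in closed form and the outer one by the midpoint rule):

```python
import math

def iterated_integral_n2(x, steps=4000):
    """2! * int_0^x int_0^{x-u2} e^{-(u1 + 2 u2)} du1 du2, with the inner
    u1-integral done exactly; the lemma predicts the value (1 - e^{-x})^2."""
    h = x / steps
    total = 0.0
    for i in range(steps):
        u2 = (i + 0.5) * h
        total += math.exp(-2.0 * u2) * (1.0 - math.exp(-(x - u2))) * h
    return 2.0 * total
```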
\subsection{Saddle-point method}
We require the saddle-point method for evaluating Cauchy integrals. Our variant is essentially the one used by Fristedt \cite{F} in the proof of Proposition 4.5, and also in the proofs of \cite{R} Proposition 3 and \cite{B20} Proposition 3, although these are all stated in probabilistic terminology. This may also be compared with \cite{FS} Chapter VIII, Figure VIII.3; the key differences are assumptions (iii) and (iv).
\begin{proposition}\label{P:saddlepointmethod}
Suppose that $g_n$ is a sequence of twice continuously differentiable functions and that $\theta_0=\theta_0(n) \to 0^+$ is chosen so that after decomposing the integral as
$$
\int_{-\frac{1}{2}}^{\frac{1}{2}} \exp\left(g_n(2\pi i \theta)\right)d \theta = \int_{-\theta_0}^{\theta_0} \exp\left(g_n(2\pi i \theta)\right)d \theta + \int_{\theta_0<|\theta| \leq \frac{1}{2}} \exp\left(g_n(2\pi i \theta)\right)d \theta,
$$
the following hold as $n \to \infty$.\\
\textnormal{(i)} We have a quadratic {\it central approximation on the ``major arc"}, i.e., for $|\theta| \leq \theta_0$,
\begin{equation}\label{E:centralapproximation}
g_n(2 \pi i \theta)=g_n(0)+2 \pi i g_n'(0) \theta + g_n''(0) \frac{(2 \pi i \theta)^2}{2} + O(E_n) \theta^3,
\end{equation}
where $E_n$ is an error depending on $n$.\\
\textnormal{(ii)} The {\it ``minor arc"} is negligible, i.e.,
$$
\int_{\theta_0<|\theta| \leq \frac{1}{2}} \exp\left(g_n(2\pi i \theta)\right)d \theta=o\left(\int_{-\theta_0}^{\theta_0} \exp\left(g_n(2\pi i \theta)\right)d \theta \right).
$$
\textnormal{(iii)} We have $g_n''(0)\theta_0^2 \to \infty$, while $E_n=O(1) \theta_0^3$.\\
\textnormal{(iv)} Let
$$
R(\theta):= g_n'(0) 2 \pi i \theta + g_n''(0) \frac{(2 \pi i \theta)^2}{2} + O(E_n) \theta^3 ,
$$ denote the right-hand side of \eqref{E:centralapproximation} without $g_n(0)$. Then, pointwise in $\theta$, we have
$$
R\left(\frac{\theta}{\pi \sqrt{2g_n''(0)}}\right) \to -\theta^2,
$$
and furthermore, for $|\theta| \leq \pi \sqrt{2g_n''(0)}\theta_0$,
$$
\operatorname{Re}\, R\left(\frac{\theta}{\pi \sqrt{2g_n''(0)}}\right) \leq -c\theta^2
$$
for some constant $c>0$ independent of $n$.
Then
$$
\int_{-\frac{1}{2}}^{\frac{1}{2}} \exp\left(g_n(2 \pi i \theta)\right)d\theta \sim \frac{e^{g_n(0)}}{\sqrt{2\pi g_n''(0)}}.
$$
\end{proposition}
\begin{proof}
Using assumption (ii) and then (i), we have
\begin{multline*}
\int_{-\frac{1}{2}}^{\frac{1}{2}} \exp\left(g_n(2\pi i \theta)\right)d \theta
\sim \int_{-\theta_0}^{\theta_0} \exp\left(g_n(2\pi i \theta)\right)d \theta \\
\sim e^{g_n(0)}\int_{-\theta_0}^{\theta_0} \exp\left( g_n'(0) 2\pi i \theta + g_n''(0) \frac{(2 \pi i \theta)^2}{2} + O(E_n) \theta^3\right)d \theta
= e^{g_n(0)}\int_{-\theta_0}^{\theta_0} \exp\left(R(\theta)\right)d \theta.
\end{multline*}
Now substitute $\theta \mapsto \frac{\theta}{\pi \sqrt{2 g_n''(0)}}$ to see that the above can be approximated by
\begin{align}
\frac{e^{g_n(0)}}{\pi \sqrt{2 g_n''(0)}}\int_{-\pi \sqrt{2g_n''(0)}\theta_0}^{\pi \sqrt{2g_n''(0)}\theta_0} \exp\left(R\left(\frac{\theta}{\pi \sqrt{2 g_n''(0)}}\right)\right) d\theta. \label{E:saddlepointmethodproofstep}
\end{align}
By assumption (iv), the integrand above tends pointwise to $e^{-\theta^2}$, and it is bounded by $e^{-c\theta^2}$ for some constant $c>0$, which is integrable. Hence the dominated convergence theorem implies that the integral is asymptotic to
$$
\int_{-\infty}^{\infty} e^{-\theta^2}d\theta = \sqrt{\pi},
$$
which when combined with \eqref{E:saddlepointmethodproofstep} finishes the proof.
\end{proof}
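To illustrate Proposition \ref{P:saddlepointmethod} on a toy case (not one arising in this paper), take $g_n(z)=n\cosh(z)$, so that $g_n(2\pi i\theta)=n\cos(2\pi\theta)$ and $g_n(0)=g_n''(0)=n$; the proposition then predicts $\int_{-1/2}^{1/2}e^{n\cos(2\pi\theta)}\,d\theta\sim e^n/\sqrt{2\pi n}$, the classical asymptotic of the Bessel function $I_0(n)$.

```python
import math

def saddle_point_ratio(n, steps=200000):
    """Ratio of the midpoint-rule value of the integral of
    exp(n*cos(2*pi*theta)) over [-1/2, 1/2] to the saddle-point
    prediction e^n / sqrt(2*pi*n); should be near 1 for large n."""
    h = 1.0 / steps
    integral = sum(math.exp(n * math.cos(2.0 * math.pi * (-0.5 + (i + 0.5) * h)))
                   for i in range(steps)) * h
    return integral / (math.exp(n) / math.sqrt(2.0 * math.pi * n))
```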
\subsection{Probability theory}
We require only the notions of random variables and their distributions, as well as the total variation metric, $d_{\mathrm{TV}}$, which is defined on measures $\mu$ and $\nu$ on $\mathbb{R}^{d}$ by
$$
d_{\mathrm{TV}}(\mu,\nu):=\sup_{\substack{B \subset \mathbb{R}^{d} \\ \text{Borel}}} \left(\mu(B)-\nu(B)\right).
$$
The Borel sets are the $\sigma$-algebra on $\mathbb{R}^{d}$ generated by open sets. We also use Chebyshev's Inequality.
\begin{lemma}[Chebyshev's Inequality]\label{L:Chebyshev}
If $X$ is a random variable with finite mean $m$ and finite, nonzero variance $\sigma^2$ under a probability measure $\mathbf{P}$, then for any $t>0$
$$
\mathbf{P}(|X-m| \geq t) \leq \frac{\sigma^2}{t^2}.
$$
\end{lemma}
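For a finite sample space the inequality can be checked exhaustively; the helper below (purely illustrative) compares the exact tail probability with the Chebyshev bound for the uniform distribution on a finite list of values.

```python
def chebyshev_sides(values, t):
    """Exact P(|X - m| >= t) and the Chebyshev bound sigma^2 / t^2
    for X uniform on the finite list `values`."""
    n = len(values)
    m = sum(values) / n
    var = sum((x - m) ** 2 for x in values) / n
    prob = sum(1 for x in values if abs(x - m) >= t) / n
    return prob, var / t ** 2
```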
\section{Boltzmann models for partitions and the work of Fristedt}\label{S:Fristedt}
Here we recall the primary methods of Fristedt in Section 4 of \cite{F}. Let $P_n$ be the uniform probability measure on partitions of size $n$, and for $q \in (0,1)$ let $Q_q$ be the Boltzmann measure on the set of all partitions defined by $Q_q(\lambda):=q^{|\lambda|}\prod_{k \geq 1}\left(1-q^k\right)$. In analogy to our situation, let $X_k$ be the random variable giving the number of parts equal to $k$ in a partition and let $N=\sum_{k \geq 1} kX_k$ denote the size of a partition. The key point is that the $X_k$ are independent under $Q_q$ (but not under $P_n$).
\begin{proposition}[Proposition 4.1 of \cite{F}]\label{P:Fristedt4.1}
The $X_k$ are independent under $Q_q$. Furthermore, $Q_q(X_k=\ell)=q^{k\ell}(1-q^k)$, so $X_k$ is geometric with mean $\frac{q^k}{1-q^k}$.
\end{proposition}
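The geometric law in Proposition \ref{P:Fristedt4.1} is easy to verify numerically: the probabilities $q^{k\ell}(1-q^k)$ sum to $1$ and have mean $\frac{q^k}{1-q^k}$ (a sanity check only, truncating the sum at a large cutoff).

```python
def geometric_mass_and_mean(q, k, cutoff=2000):
    """Total mass and mean of Q_q(X_k = l) = q^{k*l} * (1 - q^k), l = 0, 1, 2, ..."""
    probs = [q ** (k * l) * (1 - q ** k) for l in range(cutoff)]
    mass = sum(probs)
    mean = sum(l * p for l, p in enumerate(probs))
    return mass, mean
```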
By definition, $Q_q$ assigns the same probability to every $\lambda \vdash n$, so an immediate consequence is that
$$
P_n=Q_q( \cdot | N=n),
$$
i.e., $Q_q$, when conditioned on $N=n$, agrees with $P_n$. (This can also be shown directly from the definition of $Q_q$.) Now if $W_n:\mathcal{P} \to \mathbb{R}^{d_n}$ is a random variable on partitions which is defined in terms of the $X_k$, the probability measure $Q_q$ is used to bridge the gap between the probability distribution $P_n(W_n^{-1})$ on $\mathbb{R}^{d_n}$ and some explicitly given distributions $\nu_n$ on $\mathbb{R}^{d_n}$. For a particular $q=q(n)$, Fristedt showed that $P_n(W_n^{-1})$ and $Q_q(W_n^{-1})$ are close in the sense of total variation given a very general condition on $W_n$.
\begin{proposition}[Proposition 4.6 of \cite{F}]\label{P:Fristedt4.6}
Let $K_n$ be a set of integers such that $W_n:\mathcal{P} \to \mathbb{R}^{d_n}$ is determined by $X_k$ for $k \in K_n$ with probability 1. Let $q=q(n)=e^{-\frac{\pi}{\sqrt{6n}}}$. If
$$
\sum_{k \in K_n} \frac{k^2q^k}{\left(1-q^k\right)^2}=o\left(n^{\frac 32}\right),
$$
then we have $d_{\mathrm{TV}}(P_n(W_n^{-1}), Q_q(W_n^{-1})) \to 0$.
\end{proposition}
The individual probabilities $Q_q(W_n={\bm w})$ are often straightforward to calculate, and Fristedt found limiting distributions for $W_n$ under $Q_q$ for a number of different $W_n$. Together with Proposition \ref{P:Fristedt4.6}, this proves many of his results. The generality of Proposition \ref{P:Fristedt4.6} is the primary advantage of the probabilistic approach over a classical, direct Circle Method approach.
In the following section, we prove an analogue of Proposition \ref{P:Fristedt4.6} for a conditioned Boltzmann model for unimodal sequences. Our exposition is self-contained, but the interested reader is invited to examine Section 4 of \cite{F} for more probabilistic intuition for why Proposition \ref{P:Fristedt4.6} is true.
\section{Conditioned Boltzmann model}\label{S:Boltzmann}
To bridge the gap between $\mathbf{P}_{n}$ and the distributions in our main theorems, we introduce the Boltzmann model. This probability measure is defined for any $q \in (0,1)$ on all of $\mathcal{U}$ by
$$
\mathbf{Q}_{q}(\lambda):= \frac{q^{|\lambda|}}{U(q)},
$$
where $U(q):=1+\sum_{m \geq 1} \frac{q^m}{(q)_m^2}$ (see \cite{BM}, eq. (1.8)). As written, $\mathbf{Q}_q$ is not very useful for us because there is no simple expression for the individual probabilities $\mathbf{Q}_{q}(X_{k}^{[L]}=\ell)$, and in fact the $X_{k}^{[L]}$ and $X_{\ell}^{[R]}$ are not independent. This stems from the fact that $U(q)$ is not a product. However, the $m$-th summand in $U(q)$ is of course the product
$$
q^m\prod_{k=1}^{m}\frac{1}{\left(1-q^k\right)^2}=\sum_{\substack{\lambda \\ {\rm PK}(\lambda)=m}} q^{|\lambda|}.
$$
By conditioning $\mathbf{Q}_q$ on the event ${\rm PK}=m$, we do gain tractable expressions for the individual probabilities of $X_{k}^{[L]}=\ell$, and we can use the full power of Fristedt's techniques in \cite{F}. Furthermore, and most importantly, we can do this uniformly for $m$ in the contributing range, and thus we are able to piece together the local distributions to obtain our global results.
We do the analogous thing for strongly unimodal sequences, setting
$$
\mathbf{Q}^*_q(\lambda):=\frac{q^{|\lambda|}}{U^*(q)},
$$
where $U^*(q):=\sum_{n \geq 1} (-q)_{n-1}^2q^n$ is the generating function for strongly unimodal sequences (see \cite{A1}, where $u^*(n)=x_d(n)$). Throughout the article, we place $*$ in the superscript (and sometimes in the subscript) for the corresponding sets and functions defined on strongly unimodal sequences, calling attention to any significant differences when they arise.
Set $\mathbf{Q}_{q,m}:=\mathbf{Q}_q\left( \cdot | {\rm PK}=m \right)$ and $\mathbf{P}_{n,m}:=\mathbf{P}_{n} \left( \cdot | {\rm PK}=m \right)$. Let $\mathcal{U}_{n,m} \subset \mathcal{U}_n$ be those sequences with peak $m$, and set $u_m(n):=\# \mathcal{U}_{n,m}$. For strongly unimodal sequences, it is more convenient for indexing to set $\mathbf{P}_{n,m}^*:=\mathbf{P}_{n}^* \left( \cdot | {\rm PK}=m+1 \right)$ and $\mathbf{Q}_{q,m}^*:=\mathbf{Q}_q^*\left( \cdot | {\rm PK}=m+1\right)$.
The following two lemmas are easily verified directly from the definitions.
\begin{lemma}\label{L:Q_{q,m}independenceetc}
{\rm (1)} We have
$$\mathbf{Q}_{q,m}(\lambda)=\begin{cases} (q)_m^2 q^{|\lambda|-m} & \text{if ${\rm PK}(\lambda)=m$,} \\ 0 & \text{otherwise.} \end{cases}$$
{\rm (2)} The set $\left\{X_{k}^{[j]} \right\}_{\substack{j \in \{L,R\} \\ k \geq 1}}$ is a set of independent random variables under $\mathbf{Q}_{q,m}$ with probability densities
$$
\mathbf{Q}_{q,m}\left(X_{k}^{[j]}=\ell\right)=\begin{cases} \left(1-q^k\right)q^{\ell k} & \text{if $k \leq m$,} \\ 0 & \text{otherwise.} \end{cases}
$$
{\rm (3)} We have $\mathbf{P}_{n,m}=\mathbf{Q}_{q,m}\left( \cdot | N=n \right)$.
\end{lemma}
For strongly unimodal sequences, we have the following analogue.
\begin{lemma}\label{L:*Q_{q,m}independenceetc}
{\rm ($1$)} We have
$$\mathbf{Q}_{q,m}^*(\lambda)=\begin{cases} (-q)_m^2 q^{|\lambda|-m-1} & \text{if ${\rm PK}(\lambda)=m+1$,} \\ 0 & \text{otherwise.} \end{cases}$$
{\rm ($2$)} The set $\left\{X_{k}^{[j]} \right\}_{\substack{j \in \{L,R\} \\ k \geq 1}}$ is a set of independent random variables under $\mathbf{Q}_{q,m}^*$ with probability densities
$$
\mathbf{Q}_{q,m}^*\left(X_{k}^{[j]}=\ell\right)=\begin{cases} \frac{1}{1+q^k} & \text{if $k \leq m$ and $\ell=0$}, \\ \frac{q^k}{1+q^k} & \text{if $k \leq m$ and $\ell=1$}, \\ 0 & \text{otherwise.} \end{cases}
$$
{\rm ($3$)} We have $\mathbf{P}_{n,m}^*=\mathbf{Q}_{q,m}^*\left( \cdot | N=n \right)$.
\end{lemma}
Let $W_n:\mathcal{U} \to \mathbb{R}^{d_n}$ be a random variable, let $\bm{\xi}_{n,m}:=\mathbf{P}_{n,m}(W_n^{-1} )$ be the probability measure on $\mathbb{R}^{d_n}$ induced by $W_n$ under $\mathbf{P}_{n,m}$, and let $\bm{\xi}_{n}:=\mathbf{P}_{n}(W_n^{-1})$. Likewise, let $\bm{\zeta}_{n,m}:=\mathbf{Q}_{q,m}(W_n^{-1})$. When referring to unimodal sequences in Sections \ref{S:Boltzmann} and \ref{S:details}, set $q=q(n)=e^{-\frac{1}{B\sqrt{n}}}$. We always write $m$ in terms of $x \in \mathbb{R}$ as $m=B\sqrt{n}\log(2B\sqrt{n})+Bx\sqrt{n}$, and we switch freely between these notations. Here and throughout, $x$ is assumed to be in $\frac{1}{B\sqrt{n}}\mathbb{Z}-\log(2B\sqrt{n})$, so that $m \in \mathbb{Z}$.
For any $[x_1,x_2] \subset \mathbb{R}$, we can break up $\bm{\xi}_{n}$ into the ranges
$$
\bm{\xi}_{n}= \left(\sum_{x < x_1} + \sum_{x \in [x_1,x_2]} + \sum_{x > x_2} \right) \mathbf{P}_{n} \left({\rm PK}=m\right) \cdot \bm{\xi}_{n,m}.
$$
We can bound the tail ranges as
\begin{align}\label{E:lowertail}
\sum_{x < x_1} \mathbf{P}_{n}\left({\rm PK}=m \right) \cdot \bm{\xi}_{n,m} &\leq \sum_{x < x_1} \mathbf{P}_{n}\left({\rm PK}=m\right)= \mathbf{P}_{n}\left(\frac{{\rm PK}-B\sqrt{n}\log(2B\sqrt{n})}{B\sqrt{n}}<x_1 \right),\\
\label{E:uppertail}
\sum_{x > x_2} \mathbf{P}_{n}\left({\rm PK}=m \right) \cdot \bm{\xi}_{n,m} &\leq \mathbf{P}_{n}\left(\frac{{\rm PK}-B\sqrt{n}\log(2B\sqrt{n})}{B\sqrt{n}}>x_2 \right).
\end{align}
To prove our main theorems for the random variables $W_n$ given there, we show that \eqref{E:lowertail} and \eqref{E:uppertail} tend to 0 as $x_1 \to -\infty$ and $x_2 \to \infty$, respectively, and that $\bm{\xi}_{n,m}$ and some explicitly given $\bm{\nu}_{n,m}$ converge in distribution uniformly for $x$ in any $[x_1,x_2] \subset \mathbb{R}$. The latter is proved by bridging the gap with the Boltzmann model: we prove an analogue of Proposition \ref{P:Fristedt4.6}, i.e., we show that $d_{\mathrm{TV}}(\bm{\xi}_{n,m}; \bm{\zeta}_{n,m}) \to 0$ under a general condition on $W_n$, and then we compute the probabilities $\bm{\zeta}_{n,m}({\bf w})$.
For strongly unimodal sequences, we do exactly the same for $\bm{\xi}_{n,m}^*$, $\bm{\zeta}_{n,m}^*$ and so on, after changing the constant $B \mapsto A$. Note that we still use $q=q(n)=e^{-\frac{1}{A\sqrt{n}}}$ here for ease of notation, and we take $x \in \frac{1}{A\sqrt{n}}\mathbb{Z} -\log(2A\sqrt{n})$, depending on $m$ now as $m+1=A\sqrt{n}\log(2A\sqrt{n})+Ax\sqrt{n}$. The following proposition is an analogue of Lemma 4.2 in \cite{F}; it is proved in exactly the same way.
\begin{proposition}\label{P:B_{n,m}prop}
Suppose that there exist $B_{n,m} \subset \mathbb{R}^{d_n}$ such that, uniformly for $x$ in any $[x_1,x_2]$,
\begin{enumerate}[leftmargin=*,label=\textnormal{(\arabic*)}]
\item $\bm{\xi}_{n,m}(B_{n,m}) \to 1$,
\item $\frac{\mathbf{Q}_{q,m}\left(N=n | W_n={\bm w} \right)}{\mathbf{Q}_{q,m}(N=n)} \to 1$ uniformly in ${\bm w} \in B_{n,m}$.
\end{enumerate}
Then $d_{\mathrm{TV}}(\bm{\xi}_{n,m}; \bm{\zeta}_{n,m}) \to 0$ uniformly for $x$ in $[x_1,x_2]$. For strongly unimodal sequences, the same holds if we replace $\bm{\xi}_{n,m} \mapsto \bm{\xi}_{n,m}^*$, $\bm{\zeta}_{n,m} \mapsto \bm{\zeta}_{n,m}^*$, and $\mathbf{Q}_{q,m} \mapsto \mathbf{Q}_{q,m}^*$.
\end{proposition}
So long as $W_n$ fulfills a simple condition similar to Proposition \ref{P:Fristedt4.6}, the sets $B_{n,m}$ above exist, as we show in Proposition \ref{P:W_ncondition}. But first we find the asymptotic behavior of the denominator in condition (2) of Proposition \ref{P:B_{n,m}prop}.
\begin{proposition}\label{P:denomlocalasymp}
Let $q=e^{-\frac{1}{B\sqrt{n}}}$. Uniformly for $x$ in any $[x_1,x_2]$, we have
\begin{align}\label{E:denominator}
\mathbf{Q}_{q,m}(N=n) &\sim \frac{1}{2 \cdot 3^{\frac 14} n^{\frac 34}},\\
\label{E:PKlocalasymp}
\mathbf{P}_{n}({\rm PK}=m) &\sim \frac{1}{B\sqrt{n}}e^{-x-e^{-x}}.
\end{align}
For strongly unimodal sequences, we have the corresponding
\begin{align}\label{E:*denominator}
\mathbf{Q}_{q,m}^*(N=n) &\sim \frac{1}{2 \cdot 6^{\frac 14} n^{\frac 34}},\\
\label{E:*PKlocalasymp}
\mathbf{P}_{n}^*({\rm PK}=m+1) &\sim \frac{1}{A\sqrt{n}}e^{-x-e^{-x}}.
\end{align}
\end{proposition}
Before proving Proposition \ref{P:denomlocalasymp}, we deduce as a corollary that the tail ranges \eqref{E:lowertail} and \eqref{E:uppertail} tend to 0, and we prove Theorem \ref{T:PKdist}.
\begin{corollary}\label{C:tails}
For any $[x_1,x_2] \subset \mathbb{R}$, we have
$$
\mathbf{P}_{n}\left(x_1\leq \frac{{\rm PK}-B\sqrt{n}\log(2B\sqrt{n})}{B\sqrt{n}} \leq x_2 \right) \sim e^{-e^{-x_2}}-e^{-e^{-x_1}},
$$
and thus, because $\mathbf{P}_{n}$ is a probability measure, \eqref{E:lowertail} and \eqref{E:uppertail} tend to 0 as $x_1 \to -\infty$ and $x_2 \to \infty$, respectively. Furthermore, Theorem \ref{T:PKdist} holds.
For strongly unimodal sequences, the above holds with $\mathbf{P}_{n} \mapsto \mathbf{P}_{n}^*$ and $B \mapsto A$.
\end{corollary}
\begin{proof}
Using \eqref{E:PKlocalasymp}, we have,
\begin{multline*}
\mathbf{P}_{n}\left(\lambda \in \mathcal{U}(n) : x_1 \leq \frac{{\rm PK}(\lambda)-B\sqrt{n}\log(2B\sqrt{n})}{B\sqrt{n}} \leq x_2 \right)\\
\sim \frac{1}{B\sqrt{n}}\sum_{x \in [x_1,x_2] \cap \left( \frac{1}{B\sqrt{n}}\mathbb Z - \log(2B\sqrt{n})\right)}\hspace{-1cm} e^{-x-e^{-x}}
= \int_{x_1}^{x_2} e^{-x-e^{-x}}dx + O\left(n^{-\frac 12}\right)
=e^{-e^{-x_2}}-e^{-e^{-x_1}} +O\left(n^{-\frac 12}\right),
\end{multline*}
where the second step comes from recognizing a Riemann sum. Thus, the first part of Theorem \ref{T:PKdist} holds. The second part is a consequence of the following similar calculation. Let
$$
\widetilde{{\rm PK}}:=\frac{{\rm PK}-B\sqrt{n}\log(2B\sqrt{n})}{B\sqrt{n}},
$$
so ${\rm PK}=m$ if and only if $\widetilde{{\rm PK}}=x$. Thus, assuming \eqref{E:PKlocalasymp}, we have
\begin{align*}
\sum_{x \in [x_1,x_2] \cap \left(\frac{1}{B\sqrt{n}}\mathbb{Z}-\log(2B\sqrt{n}) \right)}\hspace{-1cm} x\mathbf{P}_{n}\left(\widetilde{{\rm PK}}=x\right)
&\sim \frac{1}{B\sqrt{n}}\sum_{x \in [x_1,x_2] \cap \left( \frac{1}{B\sqrt n}\mathbb Z - \log(2B\sqrt{n})\right)}\hspace{-1cm} xe^{-x-e^{-x}} \\
&= \int_{x_1}^{x_2} xe^{-x-e^{-x}}dx + O\left(n^{-\frac 12}\right),
\end{align*}
which tends to $\gamma$ as $x_1 \to -\infty$ and $x_2 \to \infty$ due to the integral evaluation
$$
\int_{-\infty}^{\infty} xe^{-x-e^{-x}}dx = -\int_0^{\infty} \log(t)e^{-t}dt=-\Gamma'(1)=\gamma.
$$
On the other hand, as $x_1 \to -\infty$ and $x_2 \to \infty$, the left-hand side tends to
$$
\mathbf{E}_n\left(\widetilde{{\rm PK}}\right) = \frac{1}{B\sqrt{n}}\mathbf{E}_n({\rm PK})-\log\left(2B\sqrt{n}\right),
$$
which proves the second part of Theorem \ref{T:PKdist}.
Repeating this entire proof with $B \mapsto A$ proves the claims for strongly unimodal sequences.
\end{proof}
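The two Gumbel integrals used above can be double-checked numerically (an illustration only; the truncation window is chosen so that the neglected tails are below the stated tolerances):

```python
import math

def gumbel_mass_and_mean(lo=-10.0, hi=40.0, steps=200000):
    """Midpoint-rule values of the integrals of e^{-x-e^{-x}} and x*e^{-x-e^{-x}},
    expected to be 1 and the Euler--Mascheroni constant gamma = 0.5772..."""
    h = (hi - lo) / steps
    mass = mean = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        w = math.exp(-x - math.exp(-x)) * h
        mass += w
        mean += x * w
    return mass, mean
```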
We are now ready to prove Proposition \ref{P:denomlocalasymp}.
\begin{proof}[Proof of Proposition \ref{P:denomlocalasymp}]
First note that
\begin{equation}\label{E:QprobN=n}
\mathbf{Q}_{q,m}(N=n)= (q)_m^2 q^{n-m} u_m(n).
\end{equation}
To estimate the right-hand side, we could take the probabilistic approach given in the proofs of Proposition 4.5 in \cite{F} and Proposition 3 in \cite{R}, and the reader is invited to examine these for a more heuristic approach. Instead, we use Proposition \ref{P:saddlepointmethod}, a version of the classical saddle-point method, after representing $u_m(n)$ as a Cauchy contour integral, but this approach is essentially equivalent to Fristedt's probabilistic approach. We omit some details when they are very similar to calculations carried out in \cite{B20} and \cite{R}.
First, write
$$
u_m(n)=\frac{1}{2\pi i} \int_{\mathcal{C}} \frac{\zeta^{m-n-1}}{(\zeta)_m^2}d\zeta,
$$
where $\mathcal{C}$ is a circle centered at 0 with radius less than 1, oriented counterclockwise. Substituting $\zeta=e^{-\frac{1}{B\sqrt{n}}+2 \pi i \theta}$, we have
\begin{equation*}\label{E:Cauchyint}
u_m(n)=\int_{-\frac{1}{2}}^{\frac{1}{2}} \frac{e^{-\frac{m-n}{B\sqrt{n}}+ 2 \pi i(m-n) \theta}}{\left(e^{-\frac{1}{B\sqrt{n}}+2 \pi i \theta}\right)_m^2}d \theta = \int_{-\frac{1}{2}}^{\frac{1}{2}} \exp\left(f(2 \pi i \theta)\right) d \theta,
\end{equation*}
where
\begin{equation*}\label{E:fdef}
f(2 \pi i \theta):= \frac{n-m}{B\sqrt{n}}+ (m-n) 2 \pi i \theta -2 \sum_{k=1}^m \operatorname{Log}\left(1-e^{-\frac{k}{B\sqrt{n}}+2 \pi i k \theta } \right).
\end{equation*}
For ease of notation, we omit the dependence of $f$ on $n$ throughout the proof. Choose $\theta_0=n^{-\frac 58}$, and write
\begin{equation*}\label{E:intbreakup}
\int_{-\frac{1}{2}}^{\frac{1}{2}} \exp\left(f(2 \pi i \theta)\right) d \theta= \int_{-\theta_0}^{\theta_0} \exp\left(f(2 \pi i \theta)\right) d \theta+\int_{\theta_0 \leq | \theta| \leq \frac{1}{2}} \exp\left(f(2 \pi i \theta)\right) d \theta.
\end{equation*}
\vspace{5mm}
To prove that conditions (i)--(iv) in Proposition \ref{P:saddlepointmethod} hold, we need to find the asymptotic behavior of $f(0)$, $f'(0)$, and $f''(0)$. These are easily computed:
\begin{align*}
f(0) &= \frac{n-m}{B\sqrt{n}}-2\sum_{k=1}^m \log\left(1-e^{-\frac{k}{B\sqrt{n}}}\right), \qquad f'(0) = m-n+2\sum_{k=1}^m \frac{k}{e^{\frac{k}{B\sqrt{n}}}-1},\\
f''(0) &= 2\sum_{k=1}^m \frac{k^2 e^{-\frac{k}{B\sqrt{n}}}}{\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^2}.
\end{align*}
Next, we use Lemma \ref{L:Romik} with $x\mapsto e^{-\frac{k}{B\sqrt{n}}}$, $s \mapsto 2\pi k\theta$ to compute
\begin{align*}
&\left|f(2\pi i \theta) - f(0)- f'(0) 2 \pi i \theta - f''(0) \frac{(2 \pi i \theta)^2}{2}\right|\\
&\hspace{1cm}= 2\left|\sum_{k=1}^m \left(\operatorname{Log}\left(\frac{1-e^{-\frac{k}{B\sqrt{n}}}}{1-e^{-\frac{k}{B\sqrt{n}}+2 \pi i k \theta}}\right)- \frac{2 \pi i k\theta\, e^{-\frac{k}{B\sqrt{n}}}}{1-e^{-\frac{k}{B\sqrt{n}}}} +\frac{(2\pi k \theta)^2\, e^{-\frac{k}{B\sqrt{n}}}}{2\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^2}\right)\right|\\
&\hspace{1cm}\leq C \sum_{k=1}^m \frac{k^3e^{-\frac{k}{B\sqrt{n}}}}{\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^3} |\theta|^3 =:E|\theta|^3,
\end{align*}
for some constant $C>0$. Recognizing a Riemann sum for a convergent integral, we may bound $E$ by
\begin{align}
C B^4n^2 \sum_{k \geq 1}\frac{\left(\frac{k}{B\sqrt{n}}\right)^3e^{-\frac{k}{B\sqrt{n}}}}{\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^3} \frac{1}{B\sqrt{n}} &= C B^4n^2 \int_{0}^{\infty} \frac{u^3e^{-u}}{(1-e^{-u})^3}du \left(1+O\left(\frac{1}{\sqrt{n}}\right)\right) =O\left(n^2\right), \label{E:Taylorerror}
\end{align}
where the constant is independent of $n$. Thus, condition (i) holds.
Now we work out the asymptotic behavior of $f(0)$, $f'(0)$, and $f''(0)$. The details here are very similar to the proofs of Propositions 1--3 in \cite{R}.
For $f(0)$, we take $g(u):= -\log(1-e^{-\frac{u}{B\sqrt{n}}})$ in \eqref{E:eulermac1}. Estimating as in the proof of Proposition 1 in \cite{R} gives
\begin{align}
f(0) = 2\pi\sqrt{\frac{n}{3}}- \log(n) -x-e^{-x}-\log\left(\frac{12}{\pi} \right) +o(1). \label{E:f(0)asymp}
\end{align}
The asymptotic behavior of the derivatives is similar to the proof of Proposition 2 in \cite{R}. For $f'(0)$, we take $g(u):=\frac{u}{e^{\frac{u}{B\sqrt{n}}}-1}$ with \eqref{E:eulermac2}, noting that
$$
g'(u) = \frac{e^{\frac{u}{B\sqrt{n}}}-\frac{u}{B\sqrt{n}}e^{\frac{u}{B\sqrt{n}}}-1}{\left(e^{\frac{u}{B\sqrt{n}}}-1\right)^2}.
$$ A short calculation gives
\begin{align}
f'(0) =O\left(\sqrt{n} \log (n) \right). \label{E:f'(0)asymp}
\end{align}
Finally, for $f''(0)$ take $g(u)= \frac{u^2e^{-\frac{u}{B\sqrt{n}}}}{(1-e^{-\frac{u}{B\sqrt{n}}})^2}$ in \eqref{E:eulermac2}. Using that
$$
g'(u)=-\frac{ue^{-\frac{u}{B\sqrt{n}}}\left(\left(\frac{u}{B\sqrt{n}}+2\right) e^{-\frac{u}{B\sqrt{n}}}+\frac{u}{B\sqrt{n}}-2\right)}{\left(1-e^{-\frac{u}{B\sqrt{n}}}\right)^3},
$$
a short calculation gives
\begin{align}
f''(0)= \frac{2\sqrt{3}}{\pi}n^{\frac 32} + O\left(n\log(n)^2\right). \label{E:f''(0)asymp}
\end{align}
Thus, \eqref{E:Taylorerror} and \eqref{E:f''(0)asymp} with $\theta_0=n^{-\frac 58}$ imply that condition (iii) holds, and the contribution on the major arc is
\begin{equation}\label{E:majorarccontribution}
\int_{-\theta_0}^{\theta_0} \exp\left(f(2 \pi i \theta) \right)d \theta
= e^{f(0)}\int_{-\frac{1}{n^{\frac 58}}}^{\frac{1}{n^{\frac 58}}}\exp\left(R(\theta)\right) d \theta,
\end{equation}
where
$$
R(\theta)= f'(0) 2 \pi i \theta + f''(0) \frac{(2 \pi i \theta)^2}{2}+E |\theta|^3.
$$
Thus,
$$
R\left(\frac{\theta}{\pi \sqrt{2 f''(0)}}\right)= \frac{i\sqrt{2}f'(0)}{\sqrt{f''(0)}}\theta - \theta^2 + \frac{E}{f''(0)^{\frac 32}}|\theta|^3.
$$
By \eqref{E:Taylorerror}, \eqref{E:f'(0)asymp}, and \eqref{E:f''(0)asymp}, we see that the above tends to $-\theta^2$ pointwise in $\theta$. Furthermore, for $|\theta| \leq \pi \sqrt{2f''(0)} \theta_0 \ll n^{\frac 18}$, the asymptotics \eqref{E:Taylorerror}, \eqref{E:f'(0)asymp}, and \eqref{E:f''(0)asymp} imply that the first term is $o(1)$ and that the third is $\theta^3O(n^{-\frac 14})=o(\theta^2)$. Thus, condition (iv) holds.
It remains to check condition (ii). First, we write
\begin{align*}
\left|\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp(f(2\pi i \theta))d\theta \right|
&\leq \int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp(\text{Re} f(2\pi i \theta)) d \theta \\
&= \exp(f(0))\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp\left(-2\sum_{k=1}^m \left( \text{Re} \left(\operatorname{Log}\left(1-e^{-\frac{k}{B\sqrt{n}}+2 \pi i k \theta}\right) \right)
\right. \right. \\
& \left. \left. \hspace{6.5cm} - \operatorname{Log}\left(1-e^{-\frac{k}{B\sqrt{n}}}\right) \right) \right) d\theta \\
&= \exp(f(0))\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp\left(2\sum_{k=1}^m \sum_{\ell \geq 1} \frac{e^{-\ell \frac{k}{B\sqrt{n}}}}{\ell} \left( \cos(2 \pi k \ell \theta)-1 \right) \right) d\theta \\
&\leq \exp(f(0))\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp\left(2\sum_{k=1}^m e^{-\frac{k}{B\sqrt{n}}} \left( \cos(2 \pi k \theta)-1 \right) \right) d\theta .
\end{align*}
We analyze the sum in the exponential as in \cite{R}, p. 13. Because it is a Riemann sum, we have, for $n$ sufficiently large and $n^{-\frac 58}\leq |\theta| \leq \frac{1}{2}$,
\begin{align*}
\sum_{k=1}^m &e^{- \frac{k}{B\sqrt{n}}} \left( \cos(2 \pi k \theta)-1 \right)\\
&=-B\sqrt{n}\sum_{k=1}^m e^{- \frac{k}{B\sqrt{n}}} \left( 1-\cos(2 \pi k \theta) \right) \frac{1}{B\sqrt{n}}
< -\frac{B\sqrt{n}}{2}\int_0^{\frac{m}{B\sqrt{n}}} e^{-u}\left(1-\cos\left(2 \pi B\sqrt{n}\theta u\right)\right)du\\
&< -\frac{B\sqrt{n}}{2}\inf_{\theta \geq \frac{1}{n^{\frac 58}}}\int_0^{T} e^{-u}\left(1-\cos\left(2 \pi B\sqrt{n}\theta u\right)\right)du
= -\frac{B\sqrt{n}}{2}\inf_{s \geq \frac{1}{n^{\frac 18}}}\int_0^{T} e^{-u}\left(1-\cos\left(2 \pi Bs u\right)\right)du,
\end{align*}
for any $T >0$, since $\frac{m}{B\sqrt{n}} \to \infty$. We claim the infimum above is at least $\frac{c}{n^{\frac 14}}$ for some $c>0$. Indeed, the function
\[
s \mapsto \int_0^{T} e^{-u}\left(1-\cos\left(2 \pi Bs u\right)\right)du
\]
is continuous and strictly positive on any $[\varepsilon, \infty)$ and tends to $1-e^{-T}>0$ as $s \to \infty$ by the Riemann--Lebesgue Lemma. Furthermore, if $s \to 0^+$ with $s \geq n^{-\frac 18}$, then using the Taylor expansion for cosine gives, for some $c>0$,
$$
\int_0^{T} e^{-u}\left(1-\cos\left(2 \pi Bs u\right)\right)du=\frac{\pi^2 B^2}{2}s^2\int_0^T e^{-u}du + O\left(s^4\right) > \frac{c}{n^{\frac 14}}.
$$
Thus, the contribution on the minor arc is bounded by $e^{f(0)-dn^{\frac 14}}$ for some $d>0$, so condition (ii) holds.
By Proposition \ref{P:saddlepointmethod}, we conclude that
\begin{equation}\label{E:u(n,m)asymp}
u_m(n)\sim \frac{e^{f(0)}}{\sqrt{2\pi f''(0)}}.
\end{equation}
Recalling \eqref{E:QprobN=n} and using $e^{f(0)}=(q)_m^{-2} q^{m-n}$, the cancellation yields
$$
\mathbf{Q}_{q,m}(N=n)\sim \frac{1}{\sqrt{2\pi f''(0)}}.
$$
Plugging in \eqref{E:f''(0)asymp} proves \eqref{E:denominator}.
Note that the asymptotic behavior of $f(0)$ is required only for the proof of \eqref{E:PKlocalasymp}. By definition,
$$
\mathbf{P}_{n}({\rm PK}=m)= \frac{u_m(n)}{u(n)}.
$$
Plugging \eqref{E:f(0)asymp} and \eqref{E:f''(0)asymp} into \eqref{E:u(n,m)asymp} and using the asymptotic formula $u(n) \sim \frac{e^{2 \pi\sqrt{\frac n3}}}{8 \cdot 3^{\frac 34} n^{\frac 54}}$ due to Auluck (\cite{A}, eq. (24)) proves \eqref{E:PKlocalasymp}.
For strongly unimodal sequences, we make a very similar argument using Proposition \ref{P:saddlepointmethod}. A more probabilistic approach for this case is similar to the proof of Proposition 3 in \cite{B20}.
We write
$$
u^*_m(n)= \int_{-\frac{1}{2}}^{\frac{1}{2}} \exp\left(f_*(2\pi i \theta)\right)d \theta=\int_{-\theta_0}^{\theta_0} \exp\left(f_*(2\pi i \theta)\right)d \theta+\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp\left(f_*(2\pi i \theta)\right)d \theta,
$$
where again we take $\theta_0=n^{-\frac 58}$ and this time,
$$
f_*(z):=\frac{n-m-1}{A\sqrt{n}}+(m+1-n)z + 2 \sum_{k=1}^m \operatorname{Log}\left( 1+ e^{-\frac{k}{A\sqrt{n}} + k z }\right).
$$
We have
\begin{align*}
f_*(0) &= \frac{n-m-1}{A\sqrt{n}}+2\sum_{k=1}^m \log\left(1+e^{-\frac{k}{A\sqrt{n}}}\right), \qquad f_*'(0) = m+1-n+2\sum_{k=1}^m \frac{k}{e^{\frac{k}{A\sqrt{n}}}+1},\\
f''_*(0) &= 2\sum_{k=1}^m \frac{k^2 e^{-\frac{k}{A\sqrt{n}}}}{\left(1+e^{-\frac{k}{A\sqrt{n}}}\right)^2}.
\end{align*}
Next, we use Lemma \ref{L:Romik} with $x\mapsto e^{-\frac{k}{A\sqrt{n}}}$, $s \mapsto 2\pi k\theta$ to compute
\begin{align*}
&\left|f_*(2\pi i \theta) - f_*(0)- f_*'(0)2 \pi i \theta - f_*''(0)\frac{(2 \pi i \theta)^2}{2} \right| \leq C \sum_{k=1}^m \frac{k^3e^{-\frac{k}{A\sqrt{n}}}}{\left(1-e^{-\frac{k}{A\sqrt{n}}}\right)^3}|\theta|^3 =:E^*|\theta|^3,
\end{align*}
for some constant $C>0$, similarly to before. As in \eqref{E:Taylorerror}, we have
\begin{equation}\label{E:*Taylorerror}
E^*=O\left(n^2\right),
\end{equation}
where the constant is independent of $n$, and condition (i) holds as before.
Proceeding as before, we now find the asymptotic behavior of $f_*(0)$, $f_*'(0)$, and $f_*''(0)$ using Lemma \ref{L:eulermac}. The reader may consult the proofs of Propositions 1 and 2 in \cite{B20} for similar asymptotic calculations. We obtain
\begin{align}
f_*(0)&=\pi\sqrt{\frac{2}{3}n} - x -e^{-x}-\log\left(4A\sqrt{n}\right)+o(1), \label{E:*f(0)asymp} \\
f_*'(0)& = O\left(\sqrt{n} \log (n)\right), \label{E:*f'(0)asymp} \\
f_*''(0)&= \frac{2\sqrt{6}}{\pi}n^{\frac 32} + O\left(n\log(n)^2\right). \label{E:*f''(0)asymp}
\end{align}
Here, \eqref{E:*Taylorerror} and \eqref{E:*f''(0)asymp} with $\theta_0=n^{-\frac 58}$ imply that condition (iii) holds. Condition (iv) may be checked as before using \eqref{E:*Taylorerror}, \eqref{E:*f'(0)asymp}, and \eqref{E:*f''(0)asymp}.
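The main term $\frac{2\sqrt{6}}{\pi}n^{\frac 32}$ in \eqref{E:*f''(0)asymp} can also be seen numerically (again only a sanity check): summing the series defining $f_*''(0)$ with the upper limit $m$ replaced by a large cutoff (the truncation at $m$ only affects lower-order terms) and taking $A=\frac{\sqrt 6}{\pi}$, the ratio to the main term tends to $1$.

```python
import math

A = math.sqrt(6) / math.pi

def f2(n, kmax_mult=40):
    # 2 * sum_{k>=1} k^2 q^k / (1+q^k)^2 with q = e^{-1/(A sqrt n)},
    # truncated once q^k is negligible
    q = math.exp(-1 / (A * math.sqrt(n)))
    kmax = int(kmax_mult * A * math.sqrt(n))
    return 2 * sum(k * k * q**k / (1 + q**k) ** 2 for k in range(1, kmax + 1))

ratios = []
for n in [10**4, 10**5, 10**6]:
    r = f2(n) / (2 * math.sqrt(6) / math.pi * n**1.5)
    ratios.append(r)
    print(n, r)   # should approach 1
```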
It remains to check condition (ii). Here, the analysis is similar to that on p.~253 of \cite{RS}. First, we use
$$
\left|\frac{1+q^ke^{2\pi i k\theta}}{1+q^k}\right|^2=\frac{1}{\left(1+q^k\right)^2}\left(1+2q^k\cos\left(2 \pi k \theta\right)+q^{2k}\right)=1-\frac{2q^k\left(1-\cos(2\pi k \theta)\right)}{\left(1+q^k\right)^2},
$$
where we recall that $q=e^{-\frac{1}{A\sqrt{n}}}$. Thus,
\begin{align*}
\left|\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp(f_*(2\pi i \theta))d\theta \right|
&=\left|e^{f_*(0)} \int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} e^{(m+1-n)2 \pi i \theta} \prod_{k=1}^m \left(\frac{1+q^ke^{2 \pi i k \theta}}{1+q^k}\right)^2 d \theta \right| \\
&\leq \exp(f_*(0))\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \prod_{k=1}^m \left|\frac{1+q^ke^{2 \pi i k \theta}}{1+q^k}\right|^2 d \theta \\
&= \exp(f_*(0))\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp\left(\sum_{k=1}^m\operatorname{Log}\left(1-\frac{2q^k(1-\cos(2 \pi k \theta))}{(1+q^k)^2} \right) \right) d \theta \\
&\leq \exp(f_*(0))\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp\left(-\sum_{k=1}^m \frac{2q^k(1-\cos(2 \pi k \theta))}{(1+q^k)^2} \right) d\theta \\
&\leq \exp(f_*(0))\int_{\theta_0 \leq |\theta| \leq \frac{1}{2}} \exp\left(-\frac{1}{2}\sum_{k=1}^m q^k(1-\cos(2 \pi k \theta)) \right) d\theta .
\end{align*}
The sum can be shown to be at least $cn^{\frac 14}$ for some $c>0$, uniformly for $\theta_0 \leq |\theta| \leq \frac{1}{2}$, exactly as before, and hence the contribution from the minor arc is bounded by $e^{f_*(0) - c n^{\frac 14}}$ and condition (ii) holds.
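The elementary modulus identity used at the start of this minor-arc estimate (with cosine argument $2\pi k\theta$) is easy to verify numerically at random parameter values; this is only a spot-check.

```python
import cmath
import math
import random

random.seed(0)
for _ in range(1000):
    q = random.uniform(0.01, 0.99)
    k = random.randint(1, 40)
    theta = random.uniform(-0.5, 0.5)
    # |(1 + q^k e^{2 pi i k theta}) / (1 + q^k)|^2
    lhs = abs((1 + q**k * cmath.exp(2j * math.pi * k * theta)) / (1 + q**k)) ** 2
    # 1 - 2 q^k (1 - cos(2 pi k theta)) / (1 + q^k)^2
    rhs = 1 - 2 * q**k * (1 - math.cos(2 * math.pi * k * theta)) / (1 + q**k) ** 2
    assert abs(lhs - rhs) < 1e-12
print("identity verified")
```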
Thus, Proposition \ref{P:saddlepointmethod} implies
$$
u_m^*(n) \sim \frac{e^{f_*(0)}}{\sqrt{2 \pi f_*''(0)}}.
$$
We then get \eqref{E:*denominator} exactly as before, by plugging in \eqref{E:*f''(0)asymp}. For \eqref{E:*PKlocalasymp}, we plug in \eqref{E:*f(0)asymp} and \eqref{E:*f''(0)asymp} and use the asymptotic $u^*(n) \sim \frac{e^{\pi \sqrt{\frac{2}{3}n}}}{8 \cdot 6^{\frac 14} n^{\frac 34}}$ due to Rhoades (\cite{Rh}, Theorem 1.1).
\end{proof}
The next proposition handles the numerator in Proposition \ref{P:B_{n,m}prop} by providing an explicit set $B_{n,m}$, given a simple condition.
\begin{proposition}\label{P:W_ncondition}
Suppose that $W_n = (X_{k}^{[j]})_{j \in \{L,R\}, k \in [k_{n,1},k_{n,2}]}$ and suppose further that, when conditioned on ${\rm PK}=m$, $W_n$ is determined by $X_{k}^{[j]}$ for $k \in [k_{n,m,1},k_{n,m,2}] \subset [1,m]$ with probability 1 under $\mathbf{P}_{n,m}$ and $\mathbf{Q}_{q,m}$. If the condition
\begin{equation}\label{E:b_ncondition}
\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} \frac{k^2q^k}{\left(1-q^k\right)^2}=o\left(b_n^2\right),
\end{equation}
holds, where $b_n=o(n^{\frac 34})$ is independent of $m$, then
$$
B_{n,m}:= \left\{ \left(w^{[j]}_k\right)_{\substack{j \in \{L,R\} \\ k_{n,m,1} \leq k \leq k_{n,m,2}}} : \left|\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} \frac{2kq^k}{1-q^k} - \sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} k\left(w^{[L]}_k+w^{[R]}_k\right) \right| \leq b_n \right\}
$$
satisfies the hypotheses of Proposition \ref{P:B_{n,m}prop}, so $d_{\rm{TV}}(\bm{\xi}_{n,m}; \bm{\zeta}_{n,m}) \to 0$ uniformly for $x$ in any $[x_1,x_2]$.
For strongly unimodal sequences, we have $d_{\rm{TV}}(\bm{\xi}^*_{n,m}; \bm{\zeta}^*_{n,m}) \to 0$ when $W_n$ is as above (now conditioned on ${\rm PK}=m+1$) and
\begin{equation*}\label{E:*b_ncondition}
\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} \frac{k^2q^k}{\left(1-q^k\right)^2}=o\left(b_n^2\right),
\end{equation*}
where $b_n=o(n^{\frac 34})$, with the set
$$
B^*_{n,m}:= \left\{ \left(w^{[j]}_k\right)_{\substack{j \in \{L,R\} \\ k_{n,m,1} \leq k \leq k_{n,m,2}}} : \left|\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} \frac{2kq^k}{1+q^k} - \sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} k(w^{[L]}_k+w^{[R]}_k) \right| \leq b_n \right\}.
$$
\end{proposition}
\begin{proof}
Let $d_n:=k_{n,m,2}-k_{n,m,1}+1$. To show $\bm{\zeta}_{n,m}(B_{n,m}) \to 1$, we use Lemma \ref{L:Chebyshev}. Note that the expectation of $X_{k}^{[L]}$ is
$$
\mathbf{E}_{q,m}\left(X_{k}^{[L]}\right) = \left(1-q^k\right) \sum_{\ell \geq 1} \ell q^{k\ell}=\frac{q^k}{1-q^k},
$$
and the variance is
$$
\mathbf{Var}_{q,m}\left(X_{k}^{[L]}\right) = \left(1-q^k\right) \sum_{\ell \geq 1} \ell^2q^{k\ell} - \frac{q^{2k}}{\left(1-q^k\right)^2}=\frac{q^k+q^{2k}}{\left(1-q^k\right)^2}- \frac{q^{2k}}{\left(1-q^k\right)^2}=\frac{q^k}{\left(1-q^k\right)^2}.
$$
These are also exactly the same for $X_{k}^{[R]}$. Thus, we have
$$
\mathbf{E}_{q,m}\left(\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} k\left(X_{k}^{[L]}+X_{k}^{[R]}\right)\right)=\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} \frac{2kq^k}{1-q^k},
$$
and, using independence,
$$
\mathbf{Var}_{q,m}\left(\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} k\left(X_{k}^{[L]}+X_{k}^{[R]}\right)\right)=\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} \frac{2k^2q^k}{(1-q^k)^2}.
$$
Thus, by Chebyshev's Inequality (Lemma \ref{L:Chebyshev}) and the definition of $B_{n,m}$,
$$
\bm{\zeta}_{n,m}\left(\mathbb{R}^{d_n} \setminus B_{n,m}\right) \leq b_n^{-2}\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} \frac{2k^2q^k}{\left(1-q^k\right)^2} = o(1),
$$
as required. This is independent of $m$ by hypothesis.
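The mean and variance formulas above are the standard geometric-distribution moments; a quick series check (with arbitrary sample values of $q$ and $k$) confirms them.

```python
q, k = 0.7, 3          # arbitrary sample values
x = q**k
L = 5000               # series truncation; x**L is negligible here

# E[X] = (1-x) * sum_{l>=1} l x^l and E[X^2] likewise, per the display above
mean = (1 - x) * sum(l * x**l for l in range(1, L))
second = (1 - x) * sum(l * l * x**l for l in range(1, L))
var = second - mean**2

print(mean, x / (1 - x))        # should agree
print(var, x / (1 - x) ** 2)    # should agree
```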
Now let ${\bm w}=(w^{[j]}_k)_{j \in \{L,R\}, k_{n,m,1} \leq k \leq k_{n,m,2}} \in B_{n,m}$, and write
$$
\sum {\bm w}:=\sum_{k_{n,m,1} \leq k \leq k_{n,m,2}} k\left(w^{[L]}_k+w^{[R]}_k\right).
$$
We want to show that
$$
\mathbf{Q}_{q,m}(N=n | W_n={\bm w}) \sim \frac{1}{2 \cdot 3^{\frac 14}n^{\frac 34}}.
$$
The proof of this is simply a slight adjustment to the proof of the first part of Proposition \ref{P:denomlocalasymp}. Again, we wish to apply Proposition \ref{P:saddlepointmethod} to a Cauchy integral. We write, using independence,
\begin{align*}
\mathbf{Q}_{q,m}(N=n | W_n={\bm w})&= \frac{\mathbf{Q}_{q,m}\left(N=n \ \text{and} \ W_n={\bm w} \right)}{\mathbf{Q}_{q,m}(W_n={\bm w})} \\
&= \frac{\mathbf{Q}_{q,m}\left(N=n \ \text{and} \ W_n={\bm w} \right)}{\prod_{k_{n,m,1} \leq k \leq k_{n,m,2}} \mathbf{Q}_{q,m}\left(L_k=w^{[L]}_k\right) \mathbf{Q}_{q,m}\left(R_k=w^{[R]}_k\right)} \\
&= \frac{(q)_m^2q^{n-m}}{q^{\sum {\bm w}}\frac{(q)^2_{k_{n,m,2}}}{(q)^2_{k_{n,m,1}-1}}} \#\left\{ \lambda : N(\lambda)=n, {\rm PK}(\lambda)=m, W_n(\lambda)={\bm w} \right\} \\
&= \frac{(q)_m^2(q)^2_{k_{n,m,1}-1}q^{n-m}}{(q)^2_{k_{n,m,2}}q^{\sum {\bm w}}} \int_{-\frac 12}^{\frac 12} \exp(F(2 \pi i \theta))d \theta,
\end{align*}
where now
$$
F(z):=\frac{n-m-\sum {\bm w}}{B\sqrt{n}}+ \left(\sum {\bm w} + m-n\right) z -2 \sum_{k \in [1,m] \setminus [k_{n,m,1},k_{n,m,2}]} \operatorname{Log}\left(1-e^{-\frac{k}{B\sqrt{n}}+ k z}\right).
$$
We have
\begin{align*}
F(0) &= \frac{n-m-\sum {\bm w}}{B\sqrt{n}}-2\sum_k \log\left(1-e^{-\frac{k}{B\sqrt{n}}}\right), \qquad F'(0) = \sum {\bm w} + m-n+2\sum_k \frac{k}{e^{\frac{k}{B\sqrt{n}}}-1},\\
F''(0) &= 2\sum_k \frac{k^2 e^{-\frac{k}{B\sqrt{n}}}}{\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^2},
\end{align*}
where all sums are over $k \in [1,m] \setminus [k_{n,m,1},k_{n,m,2}]$. Again we split the integral exactly as before at $\theta_0=n^{-\frac 58}$. Since Lemma \ref{L:Romik} is used on individual summands, condition (i) holds as before, again with an error of at most $O\left(n^2\right)|\theta|^3$.
As before, the asymptotic behavior of $F(0)$ is immaterial, since
$$
\frac{(q)_m^2(q)^2_{k_{n,m,1}-1}q^{n-m}}{(q)^2_{k_{n,m,2}}q^{\sum {\bm w}}}e^{F(0)}=1.
$$
We now compare the contributions of $F'(0)$ and $F''(0)$ to those of $f'(0)$ and $f''(0)$, respectively. Since ${\bm w} \in B_{n,m}$, we have
$$
F'(0) = m-n +2\sum_{k \leq m} \frac{k}{e^{\frac{k}{B\sqrt{n}}}-1} + o(b_n)=f'(0)+o(b_n) \sim f'(0),
$$
and by hypothesis,
$$
F''(0)=2\sum_{k\leq m} \frac{k^2 e^{-\frac{k}{B\sqrt{n}}}}{\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^2} +o\left(b_n^2\right)=f''(0)+o\left(b_n^2\right) \sim f''(0).
$$
Hence, conditions (iii) and (iv) hold as before.
It remains to show that condition (ii) holds. First, we claim that there is an interval $[a\sqrt{n},b\sqrt{n}]$ contained in $[1,m] \setminus [k_{n,m,1},k_{n,m,2}]$. It is contained in $[1,m]$ for large $n$ and $x$ in any $[x_1,x_2]$, and we have
$$
\sum_{a\sqrt{n} \leq k \leq b\sqrt{n}} \frac{k^2q^k}{\left(1-q^k\right)^2} \asymp n^{\frac 32},
$$
by estimating the Riemann sum by an integral as in the calculation for $f''(0)$. If $[a\sqrt{n},b\sqrt{n}]$ intersected $[k_{n,m,1},k_{n,m,2}]$ for every choice of $0<a<b$, this would contradict condition \eqref{E:b_ncondition}.
Now we estimate the contribution from the minor arc as before, and arrive at the sum
$$
\sum_{k \in [1,m] \setminus [k_{n,m,1},k_{n,m,2}]} e^{-\frac{k}{B\sqrt{n}}}\left(\cos\left(2 \pi k \theta \right)-1 \right)\leq\sum_{a\sqrt{n} \leq k \leq b\sqrt{n}}e^{-\frac{k}{B\sqrt{n}}}\left(\cos\left(2 \pi k \theta \right)-1 \right).
$$
Similarly to before, we bound this negative sum from above by
$$
-\frac{\sqrt{n}}{2B}\inf_{s \geq n^{-\frac 18}}\int_{\frac{a}{B}}^{\frac{b}{B}}\left(1-\cos(2 \pi s u) \right) e^{-u} du.
$$
Thus, the minor arc contributes $e^{-cn^{\frac 14}}$ less than the major arc for some $c>0$, so condition (ii) holds.
Employing Proposition \ref{P:saddlepointmethod}, we conclude that for ${\bm w} \in B_{n,m}$,
$$
\mathbf{Q}_{q,m}( N=n | W_n={\bm w})\sim \frac{(q)_m^2(q)^2_{k_{n,m,1}-1}q^{n-m}}{(q)^2_{k_{n,m,2}}q^{\sum {\bm w}}}e^{F(0)} \frac{1}{\sqrt{2\pi F''(0)}} \sim \frac{1}{\sqrt{2\pi f''(0)}} \sim \mathbf{Q}_{q,m}(N=n).
$$
The proof for strongly unimodal sequences is the same, mirroring the proof of Proposition \ref{P:denomlocalasymp} for strongly unimodal sequences. The only difference is that, if $\mathbf{E}^*_{q,m}$ denotes expectation under $\mathbf{Q}_{q,m}^*$, then since $X_{k}^{[j]} \in \{0,1\}$, we have
$$
\mathbf{E}^*_{q,m}\left(X_{k}^{[j]}\right)=\frac{q^k}{1+q^k},
$$
for $j \in \{L,R\}$.
\end{proof}
\section{Proofs of main results}\label{S:details}
We now apply Proposition \ref{P:W_ncondition} to the relevant $W_n$'s in Theorems \ref{T:largeparts}, \ref{T:smallparts}, \ref{T:totalsmallparts}, \ref{T:*largesmallparts}, and \ref{T:*smallparts} to prove $d_{\mathrm{TV}}(\bm{\zeta}_{n,m}; \bm{\xi}_{n,m}) \to 0$ uniformly for $x$ in any $[x_1,x_2]$. We then compute the probability densities $\bm{\zeta}_{n,m}({\bm w})$ for ${\bm w} \in \mathbb{R}^{d_n}$ and identify Riemann sums to find the limiting distributions.
\subsection{Small parts: the proofs of Theorem \ref{T:smallparts}, Corollary \ref{C:smallpartsskewness}, and Theorem \ref{T:*smallparts}}\label{S:smallparts}
\begin{proof}[Proof of Theorem \ref{T:smallparts}] Let $\overline{W}_n:=(X_{k}^{[j]})_{j \in \{L,R\}, k \leq k_n}$ where $k_n = o(n^{\frac 14})$ is an integer, and let $\overline{\bm{\zeta}}_{n,m}$ (resp. $\overline{\bm{\xi}}_{n,m}$) be its probability distribution under $\mathbf{Q}_{q,m}$ (resp. $\mathbf{P}_{n,m}$). Then, as $m = \omega(k_n)$ uniformly for $x$ in any $[x_1,x_2]$, we can take $k_{n,m,1}:=1$ and $k_{n,m,2}:=k_n$ in Proposition \ref{P:W_ncondition}, and note that condition \eqref{E:b_ncondition} holds: recognizing Riemann sums gives
$$
\sum_{k \leq k_n} \frac{k^2q^k}{\left(1-q^k\right)^2} \sim B^3n^{\frac 32} \int_0^{\frac{k_n}{B\sqrt{n}}} \frac{u^2e^{-u}}{(1-e^{-u})^2}du=o\left(n^{\frac 32}\right).
$$
So, Proposition \ref{P:W_ncondition} applies and $d_{\mathrm{TV}}(\overline{\bm{\zeta}}_{n,m}; \overline{\bm{\xi}}_{n,m}) \to 0$ uniformly for $x$ in any $[x_1,x_2]$. Clearly this also holds if we rescale with $W_n:=(\frac{kX_{k}^{[j]}}{B\sqrt{n}})_{j \in \{L,R\}, k \leq k_n}$, and redefine $\bm{\zeta}_{n,m}$ and $\bm{\xi}_{n,m}$ accordingly.
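Condition \eqref{E:b_ncondition} can also be seen numerically: with $k_n \asymp n^{1/5}$ (so $k_n = o(n^{\frac 14})$) and $B$ set to a sample positive constant (the actual constant $B$ is the one fixed earlier in the paper), the ratio of the sum to $n^{\frac 32}$ tends to $0$. This is only an illustrative check.

```python
import math

B = 0.4  # sample value; the actual constant B is fixed earlier in the paper

ratios = []
for n in [10**4, 10**6, 10**8]:
    q = math.exp(-1 / (B * math.sqrt(n)))
    kn = int(n ** 0.2)          # k_n = o(n^{1/4})
    s = sum(k * k * q**k / (1 - q**k) ** 2 for k in range(1, kn + 1))
    ratios.append(s / n**1.5)
    print(n, ratios[-1])        # tends to 0
```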
Now let ${\bm w}=(w^{[j]}_k)_{j \in \{L,R\}, k \leq k_n} \in \mathbb{R}^{2k_n}_{\geq 0}$ be such that $w^{[j]}_k\in \frac{k}{B\sqrt{n}}\mathbb{N}.$ Then, using independence and the exact same analysis as in \cite{F} \S 5,
\begin{align}
\mathbf{Q}_{q,m}(W_n = {\bm w})&=\prod_{\substack{j \in \{L,R\} \\ k \leq k_n}} \mathbf{Q}_{q,m}\left(X_{k}^{[j]} = \frac{B\sqrt{n}}{k}w^{[j]}_k\right)
= \prod_{\substack{j \in \{L,R\} \\ 1 \leq k \leq k_n}} \left(1-q^k\right)q^{\frac{B\sqrt{n}}{k}w^{[j]}_kk} \nonumber \\
&\sim k_n!^2 \left(\frac{1}{B\sqrt{n}}\right)^{2k_n}\prod_{\substack{j \in \{L,R\} \\ k \leq k_n}} e^{-w^{[j]}_k}
=\prod_{\substack{j \in \{L,R\} \\ k \leq k_n}} e^{-w^{[j]}_k} \frac{k}{B\sqrt{n}}. \label{E:smpartsQprobs}
\end{align}
Note that the second-to-last step, which comes from the analysis in \cite{F}, is the only place where $k_n=o(n^{\frac 14})$ is used, rather than just $o(n^{\frac 12})$. This is uniform in ${\bm w}$ and independent of $m$. Hence, for any
$$
V=\prod_{\substack{k \leq k_n \\ j \in \{L,R\}}} \left.\left(-\infty,v_{k}^{[j]}\right.\right] \subset \mathbb{R}^{2k_n}
$$
(written $V$ to avoid a clash with the constant $B$), we have the following, uniformly for $x$ in any $[x_1,x_2]$, by recalling that $w^{[j]}_k \in \frac{k}{B\sqrt{n}}\mathbb{N}$:
$$
\bm{\zeta}_{n,m}(V)\sim \sum_{{\bm w} \in V}\prod_{\substack{j \in \{L,R\} \\ k \leq k_n}} e^{-w^{[j]}_k} \frac{k}{B\sqrt{n}} \sim \prod_{\substack{k \leq k_n \\ j \in \{L,R\}}} \int_{-\infty}^{v_{k}^{[j]}} e^{- u_{k}^{[j]}} du_{k}^{[j]}=\bm{\nu}_n(V).
$$
But $d_{\mathrm{TV}}(\bm{\zeta}_{n,m}, \bm{\xi}_{n,m}) \to 0$ implies that $\bm{\xi}_{n,m}(V) \sim \bm{\zeta}_{n,m}(V)$ uniformly for $x$ in any $[x_1,x_2]$, thus $\bm{\xi}_{n,m}(V) \sim \bm{\nu}_n(V)$ also, and by Corollary \ref{C:tails}, we have
\begin{align*}
|\bm{\xi}_{n}(V)-\bm{\nu}_n(V)| &=\left| \sum_{x} \bm{\xi}_{n,m}(V)\mathbf{P}_{n}({\rm PK}=m) - \bm{\nu}_n(V)\right| \\
&=\left| \sum_{x} \left(\bm{\xi}_{n,m}(V)- \bm{\nu}_n(V) \right)\mathbf{P}_{n}({\rm PK}=m) \right| \\
&\leq \left(\sum_{x < x_1} + \sum_{x \in [x_1,x_2]} + \sum_{x> x_2} \right) \left|\bm{\xi}_{n,m}(V)-\bm{\nu}_n(V)\right|\mathbf{P}_{n}({\rm PK}=m) \\
&\leq 2e^{-e^{-x_1}} + \sum_{x \in [x_1,x_2]}\left|\bm{\xi}_{n,m}(V)-\bm{\nu}_n(V)\right|\mathbf{P}_{n}({\rm PK}=m) + 2\left(1-e^{-e^{-x_2}}\right) \\
&\sim 2e^{-e^{-x_1}} +2\left(1-e^{-e^{-x_2}}\right).
\end{align*}
Taking $x_1$ and $x_2$ arbitrarily close to $-\infty$ and $\infty$, respectively, completes the proof of the first part of Theorem \ref{T:smallparts}.
The second part is proved in the same way, merely noting that for $k=o(n^{\frac{1}{2}})$ and $w^{[L]},w^{[R]} \in \frac{k}{B\sqrt{n}}\mathbb{N}_0$, we have
\begin{equation*}
\mathbf{Q}_{q,m}\left(\frac{kX_k^{[L]}}{B\sqrt{n}}=w^{[L]}, \frac{kX_k^{[R]}}{B\sqrt{n}}=w^{[R]}\right)=\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^2e^{-w^{[L]}-w^{[R]}} \sim \left(\frac{k}{B\sqrt{n}}\right)^2e^{-w^{[L]}-w^{[R]}}. \qedhere
\end{equation*}
\end{proof}
Corollary \ref{C:smallpartsskewness} is proved similarly as above, again using \eqref{E:smpartsQprobs} to estimate the $\mathbf{Q}_{q,m}$ probabilities.
We now turn to small parts in strongly unimodal sequences.
\begin{proof}[Proof of Theorem \ref{T:*smallparts}]
To prove Theorem \ref{T:*smallparts}, we set $W_n:=\left(X_{k}^{[j]}\right)_{j \in \{L,R\}, k \leq k_n}$ where $k_n=o(n^{\frac 12})$ is an integer. We can apply Proposition \ref{P:W_ncondition}, since
$$
\sum_{k \leq k_n} \frac{k^2q^k}{\left(1+q^k\right)^2} \sim A^3n^{\frac 32} \int_0^{\frac{k_n}{A\sqrt{n}}} \frac{u^2e^{-u}}{(1+e^{-u})^2}du=o\left(n^{\frac 32}\right).
$$
Now, if ${\bm w} \in \{0,1\}^{2k_n}$, then we simply note that
\begin{align*}
\mathbf{Q}_{q,m}^*(W_n={\bm w}) = \prod_{\substack{k \leq k_n \\ j \in \{L,R\}}} \mathbf{Q}_{q,m}^*\left(X_{k}^{[j]}=w^{[j]}_k\right)=\prod_{\substack{k \leq k_n \\ j \in \{L,R\}}} \frac{e^{-\frac{kw^{[j]}_k}{A\sqrt{n}}}}{1+e^{-\frac{k}{A\sqrt{n}}}} \sim \frac{1}{2^{2k_n}}. \qedhere
\end{align*}
\end{proof}
\subsection{Large parts: the proofs of Theorems \ref{T:largeparts} and \ref{T:*largesmallparts}}
Here it is easier to first show that $\bm{\zeta}_{n,m}(U) \sim \bm{\nu}_{x,n}(U)$ uniformly for $x$ in any $[x_1,x_2]$ for
\begin{equation}\label{E:openUdef}
U=\prod_{\substack{t \leq t_n \\ j \in \{L,R\}}} \left.\left(-\infty,v_{t}^{[j]}\right.\right]
\end{equation}
and $\bm{\nu}_{x,n}$ as below. After that we use Proposition \ref{P:W_ncondition} to complete the proof.
Let
$$
W_n:=\left(\frac{Y_{t}^{[j]}-B\sqrt{n}\log(2B\sqrt{n})}{B\sqrt{n}}\right)_{j \in \{L,R\}, 1 \leq t \leq t_n},
$$
$\bm{\zeta}_{n,m}$ be the distribution of $W_n$ under $\mathbf{Q}_{q,m}$, and $\bm{\nu}_{x,n}$ be the probability measure on $(-\infty,x]^{2t_n}$ with density
$$
\begin{cases} \frac{1}{2^{2t_n}}e^{e^{-x}-\sum_{t=1}^{t_n} (u_t^{[L]}+u_t^{[R]}) - \frac{e^{-u_{t_n}^{[L]}}}{2}- \frac{e^{-u_{t_n}^{[R]}}}{2}} & \text{if $u_1^{[j]} \geq \dots \geq u_{t_n}^{[j]}$ for $j \in \{L,R\}$,} \\ 0 & \text{otherwise.} \end{cases}
$$
Let ${\bm w}=(w^{[j]}_t)_{j \in \{L,R\}, t \leq t_n}$ be such that
$$
x\geq w^{[L]}_1 \geq \dots \geq w^{[L]}_{t_n} \qquad x\geq w^{[R]}_1 \geq \dots \geq w^{[R]}_{t_n}
$$
and
$$
y_{t}^{[j]}:=B\sqrt{n}\left(w^{[j]}_t+\log\left(2B\sqrt{n}\right)\right) \in \mathbb{Z}.
$$
Directly we find
\begin{align*}
\bm{\zeta}_{n,m}({\bm w}) &= \mathbf{Q}_{q,m} \left(\left(Y_{t}^{[L]}\right)_{t \leq t_n} \times \left(Y_{t}^{[R]}\right)_{t \leq t_n} = \left(y_{t}^{[L]}\right)_{t \leq t_n} \times \left(y_{t}^{[R]}\right)_{t \leq t_n}\right)\\
&= q^{-m} (q)_{m}^2q^{m+\sum_{t \leq t_n} \left(y_{t}^{[L]}+y_{t}^{[R]}\right)}\sum_{\lambda} q^{|\lambda|} ,
\end{align*}
where the sum is taken over pairs of partitions $\lambda$ with parts at most $y_{t_n}^{[j]}$, respectively, for $j \in \{L,R\}$. Continuing, this is
\begin{align*}
q^{\sum\limits_{t \leq t_n} \left(y_{t}^{[L]}+y_{t}^{[R]}\right)} (q)_{m}^2 (q)_{y_{t_n}^{[L]}}^{-1}(q)_{y_{t_n}^{[R]}}^{-1}
&=q^{\sum\limits_{t \leq t_n} \left(y_{t}^{[L]}+y_{t}^{[R]}\right)} \prod_{y_{t_n}^{[L]} < t \leq m} \left(1-q^t\right) \prod_{y_{t_n}^{[R]} < t \leq m} \left(1-q^t\right) \\
&=e^{-\sum\limits_{t \leq t_n} \left(w^{[L]}_t+w^{[R]}_t\right)} \left(\frac{1}{2B\sqrt{n}}\right)^{2t_n} \hspace{-.2cm}\prod_{y_{t_n}^{[L]} < t \leq m} \left(1-q^t\right) \prod_{y_{t_n}^{[R]} < t \leq m} \left(1-q^t\right)\\
&\sim e^{-\sum\limits_{t \leq t_n} \left(w^{[L]}_{t}+w^{[R]}_{t}\right)} \left(\frac{1}{2B\sqrt{n}}\right)^{2t_n} e^{e^{-x}-\frac{e^{-w^{[L]}_{t_n}}}{2}-\frac{e^{-w^{[R]}_{t_n}}}{2}} \\
&= \frac{1}{2^{2t_n}} e^{e^{-x}-\sum\limits_{t \leq t_n} (w^{[L]}_{t}+w^{[R]}_{t})-\frac{e^{-w^{[L]}_{t_n}}}{2}-\frac{e^{-w^{[R]}_{t_n}}}{2}} \left(\frac{1}{B\sqrt{n}}\right)^{2t_n},
\end{align*}
uniformly for $x, w^{[L]}_t,w^{[R]}_t \geq -\frac{\log(n)}{8}$, exactly as in \cite{F}, (6.10).
Now let
$$
S:=\left\{{\bm w} : w^{[L]}_{t_n} \geq - \frac{\log (n)}{8} \ \text{and} \ w^{[R]}_{t_n} \geq - \frac{\log (n)}{8} \right\}.
$$
Since $w^{[j]}_t \in \frac{1}{B\sqrt{n}}(\mathbb{Z}-\log(2B\sqrt{n}))$, recognizing Riemann sums gives
$$
\bm{\zeta}_{n,m}(U \cap S) \sim \bm{\nu}_{x,n}(U \cap S),
$$
for $U$ as in \eqref{E:openUdef} uniformly for $x$ in any $[x_1,x_2]$; in particular, we have $\bm{\zeta}_{n,m}(S) \sim \bm{\nu}_{x,n}(S)$. But $\bm{\nu}_{x,n}(S^c) \sim 0$ follows exactly as in \cite{F}, p. 724. Now note that
$$
0=1-1=\bm{\zeta}_{n,m}(S^{c})-\bm{\nu}_{x,n}(S^c)+\bm{\zeta}_{n,m}(S)-\bm{\nu}_{x,n}(S) \sim \bm{\zeta}_{n,m}(S^c).
$$ Thus, $\bm{\zeta}_{n,m}(S^c) \sim 0$ also, and we have $\bm{\zeta}_{n,m}(U) \sim \bm{\nu}_{x,n}(U)$ uniformly for $x$ in any $[x_1,x_2]$, as desired.
Since $\bm{\zeta}_{n,m}(S^c) \sim 0$, it follows that, with probability 1 under $\mathbf{Q}_{q,m}$, the $t_n$-th largest left and right parts are at least
$$
B\sqrt{n}\log\left(2B\sqrt{n}\right)-\frac{B\sqrt{n}}{8}\log(n)=B\sqrt{n}\log\left(2Bn^{\frac 38}\right)=:k_{n,m,1}.
$$
Take $k_{n,m,2}:=m$ and let $\overline{W}_n$ be as in Proposition \ref{P:W_ncondition}. Then
$$
\sum_{k_{n,m,1}\leq k \leq k_{n,m,2}} \frac{k^2q^k}{\left(1-q^k\right)^2} \sim B^3n^{\frac 32} \int_{\log\left(2Bn^{\frac 38}\right)}^{\log\left(2B\sqrt{n}\right)+x} \frac{u^2e^{-u}}{(1-e^{-u})^2}du = o\left(n^{\frac 32}\right),
$$
so if $\overline{\bm{\xi}}_{n,m}:=\mathbf{P}_{n,m}(\overline{W}_n^{-1})$ and $\overline{\bm{\zeta}}_{n,m}:=\mathbf{Q}_{q,m}(\overline{W}_n^{-1})$, then $d_{\rm{TV}}(\overline{\bm{\zeta}}_{n,m}, \overline{\bm{\xi}}_{n,m}) \to 0$. But with probability 1 under $\bm{\zeta}_{n,m}$, the random variable $\overline{W}_n$ determines $W_n$. Thus, $d_{\rm{TV}}(\bm{\zeta}_{n,m}, \bm{\xi}_{n,m}) \to 0$ also, and this finishes the proof.
The proof of Theorem \ref{T:*largesmallparts} is the same, except we use Lemma \ref{L:largepartsproduct} to estimate the product that arises in the calculation of $\bm{\zeta}_{n,m}^*({\bm w})$. \hspace{10cm} \qedsymbol
\subsection{Total small parts: proof of Theorem \ref{T:totalsmallparts}}
Let $\overline{W}_n:= (X_{k}^{[j]})_{j \in \{L,R\}, k \leq k_n},$ where $k_n=o(n^{\frac 12})$ is an integer, and let $\overline{\bm{\xi}}_{n,m}:=\mathbf{P}_{n,m}(\overline{W}_n^{-1})$ and $\overline{\bm{\zeta}}_{n,m}:=\mathbf{Q}_{q,m}(\overline{W}_n^{-1})$. Then, as in Section \ref{S:smallparts}, Proposition \ref{P:W_ncondition} applies to $\overline{W}_n$, so $d_{\rm{TV}}(\overline{\bm{\xi}}_{n,m}, \overline{\bm{\zeta}}_{n,m}) \to 0$. This clearly implies that for
$$
W_n:= \left(\frac{\sum_{k \leq k_n} X_{k}^{[j]}-B\sqrt{n}\log \left(k_n\right)}{B\sqrt{n}} \right)_{j \in \{L,R\}},
$$
$\bm{\xi}_{n,m}:=\mathbf{P}_{n,m}(W_n^{-1})$, and $\bm{\zeta}_{n,m}:=\mathbf{Q}_{q,m}(W_n^{-1})$, we have $d_{\rm{TV}}(\bm{\xi}_{n,m}, \bm{\zeta}_{n,m}) \to 0.$
Now, as in Section \ref{S:smallparts}, we write
\begin{multline}\label{E:totalsmpartsmsummand}
\mathbf{P}_{n}\left(\left(W_n\right)_{j} \leq v_{j}, \ j \in \{L,R\}\right) \\
=\left(\sum_{x < x_1} + \sum_{x \in [x_1,x_2]} + \sum_{x > x_2} \right)\mathbf{P}_{n,m}\left(\left(W_n\right)_j \leq v_{j}, \ j \in \{L,R\}\right)\mathbf{P}_{n}\left({\rm PK}=m\right).
\end{multline}
We ignore the ranges $x<x_1$ and $x>x_2$ which tend to 0, and in the range $x \in [x_1,x_2]$ we may replace $\mathbf{P}_{n,m}$ with $\mathbf{Q}_{q,m}$, so that (using independence) this is asymptotic to
\begin{equation}\label{E:totalsmpartsQreplace}
\sum_{x \in [x_1,x_2]} \mathbf{P}_{n,m}({\rm PK}=m) \prod_{j \in \{L,R\}}\mathbf{Q}_{q,m}\left(\left(W_n\right)_{j} \leq v_j\right).
\end{equation}
Following \S 8 of \cite{F}, we focus now on a particular term of $\mathbf{Q}_{q,m}$ and first restrict the range to $k \leq \ell_n$, where $\ell_n:=\lfloor k_n^{\frac 12} \rfloor = o(n^{\frac 14})$, so that we can use the calculation \eqref{E:smpartsQprobs}. It is also simpler if we first do this without subtracting the $B\sqrt{n}\log \left(k_n\right)$ term. Thus, we have
\begin{multline}
\mathbf{Q}_{q,m}\left(\frac{\sum_{k \leq \ell_n} X_{k}^{[j]}}{B\sqrt{n}} \leq v_j \right) \\
=\sum_{w^{[j]}_{\ell_n} \in \frac{1}{B\sqrt{n}}\mathbb{N}_0 \cap [0,v_j]} \cdots \sum_{w^{[j]}_1 \in \frac{1}{B\sqrt{n}}\mathbb{N}_0 \cap \left[0,v_j-w^{[j]}_{\ell_n}-\dots - w^{[j]}_{2}\right]} \prod_{k=1}^{\ell_n} \mathbf{Q}_{q,m}\left(\frac{X_{k}^{[j]}}{B\sqrt{n}} = w^{[j]}_k\right). \label{E:totalsmpartsRiemannsum}
\end{multline}
Now, by \eqref{E:smpartsQprobs},
$$
\prod_{k=1}^{\ell_n} \mathbf{Q}_{q,m}\left(\frac{X_{k}^{[j]}}{B\sqrt{n}} = w^{[j]}_k\right)=\prod_{k=1}^{\ell_n} \mathbf{Q}_{q,m}\left(X_{k}^{[j]} = \frac{B\sqrt{n}}{k}kw^{[j]}_k\right) \sim \ell_n! \prod_{k=1}^{\ell_n} \left(e^{-kw^{[j]}_k} \frac{1}{B\sqrt{n}}\right).
$$
Plugging this into \eqref{E:totalsmpartsRiemannsum} and recognizing Riemann sums, we get, by Lemma \ref{L:stochYuleintegral},
\begin{align*}
\mathbf{Q}_{q,m}\left(\frac{\sum_{k \leq \ell_n} X_{k}^{[j]}}{B\sqrt{n}} \leq v_j\right) &\sim \ell_n!\int_0^{v_j} \cdots \int_0^{v_j-u^{[j]}_{\ell_n}-\dots -u^{[j]}_2} e^{-u_{1}^{[j]}-2u^{[j]}_2-\dots - \ell_nu^{[j]}_{\ell_n}}du_{1}^{[j]} \cdots du^{[j]}_{\ell_n} \\
&=\left(1-e^{-v_j}\right)^{\ell_n}.
\end{align*}
Since this is uniform in $v_j \in [0, \infty)$, we may replace $v_j \mapsto v_j + \log (\ell_n)$ (for fixed $v_j$), to get
$$
\mathbf{Q}_{q,m}\left(\frac{\sum_{k \leq \ell_n} X_{k}^{[j]}-B\sqrt{n}\log \left(\ell_n\right)}{B\sqrt{n}} \leq v_j\right) \sim e^{-e^{-v_j}}.
$$
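The closed form $\left(1-e^{-v_j}\right)^{\ell_n}$ for the iterated integral above can be spot-checked numerically; below is the case $\ell_n=2$, with the inner integral done in closed form and the outer one by the midpoint rule (an editorial check only).

```python
import math

def iterated(v, steps=200000):
    # 2! * int_0^v int_0^{v - u2} e^{-u1 - 2 u2} du1 du2,
    # inner integral evaluated exactly, outer by the midpoint rule
    h = v / steps
    total = 0.0
    for i in range(steps):
        u2 = (i + 0.5) * h
        inner = 1 - math.exp(-(v - u2))   # int_0^{v-u2} e^{-u1} du1
        total += math.exp(-2 * u2) * inner * h
    return 2 * total

v = 1.3
val = iterated(v)
print(val, (1 - math.exp(-v)) ** 2)   # should agree
```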
We want to show that the above holds with $\ell_n \mapsto k_n$. Since $\ell_n = \lfloor k_n^{\frac 12} \rfloor$, this is equivalent to showing that
$$
\frac{\sum_{k_n^{\frac 12} < k \leq k_n} X_{k}^{[j]}-B\sqrt{n}\log \left(k_n^{\frac 12}\right)}{B\sqrt{n}}
$$
asymptotically has a point-mass distribution with mean 0. This is accomplished by showing that its expectation and variance under $\mathbf{Q}_{q,m}$ are both $o(1)$. This in turn follows from
\begin{align*}
&\hspace{-4.8cm}\mathbf{Var}_{q,m}\left(\frac{\sum_{k_n^{\frac 12} < k \leq k_n} X_{k}^{[j]}-B\sqrt{n}\log \left(k_n^{\frac 12}\right)}{B\sqrt{n}} \right)
\\
&=\mathbf{Var}_{q,m}\left(\frac{1}{B\sqrt{n}} \sum_{k_n^{\frac 12} < k \leq k_n} X_{k}^{[j]}\right)
= \frac{1}{B^2n}\sum_{k_n^{\frac 12} < k \leq k_n} \frac{q^k}{\left(1-q^k\right)^2}
\\
&\sim \frac{1}{B^2n}\sum_{k_n^{\frac 12} < k \leq k_n} \frac{1}{\left(1-e^{-\frac{k}{B\sqrt{n}}}\right)^2}
\sim \sum_{k_n^{\frac 12}<k \leq k_n} \frac{1}{k^2} =o(1),
\\
\mathbf{E}_{q,m}\left(\frac{1}{B\sqrt{n}}\sum_{k_n^{\frac 12} < k \leq k_n} X_{k}^{[j]}\right)
&=\frac{1}{B\sqrt{n}}\sum_{k_n^{\frac 12} < k \leq k_n} \frac{q^k}{1-q^k}= \sum_{k_n^{\frac 12} < k \leq k_n} \frac{1}{k} +o(1) = \log \left(k_n^{\frac 12}\right) +o(1).
\end{align*}
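The harmonic-type estimate in the last step, $\sum_{k_n^{1/2} < k \leq k_n} \frac 1k = \log\left(k_n^{\frac 12}\right) + o(1)$, is easy to confirm numerically (a quick editorial check):

```python
import math

errs = []
for kn in [10**3, 10**4, 10**6]:
    lo = int(math.isqrt(kn))
    s = sum(1.0 / k for k in range(lo + 1, kn + 1))
    errs.append(s - math.log(math.sqrt(kn)))
    print(kn, errs[-1])   # tends to 0
```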
Thus, we have
$$
\mathbf{Q}_{q,m}\left(\left(W_n\right)_j \leq v_j \right) = \mathbf{Q}_{q,m}\left(\frac{\sum_{k \leq k_n} X_{k}^{[j]}-B\sqrt{n}\log \left(k_n\right)}{B\sqrt{n}} \leq v_j\right) \sim e^{-e^{-v_j}}.
$$
Using this in \eqref{E:totalsmpartsQreplace} and then replacing $\mathbf{P}_{n,m}$ in \eqref{E:totalsmpartsmsummand} with $\mathbf{Q}_{q,m}$, the range over $x \in [x_1,x_2]$ is asymptotic to
$$
\sum_{x \in [x_1,x_2]} \mathbf{P}_{n}({\rm PK}=m) e^{-e^{-v_{L}}-e^{-v_R}} \sim \left(e^{-e^{-x_2}}-e^{-e^{-x_1}}\right)e^{-e^{-v_L}-e^{-v_R}},
$$
by Proposition \ref{P:denomlocalasymp}. Taking $x_2 \to \infty$ and $x_1 \to - \infty$ completes the proof. \qed
\section{Moment Generating Functions}\label{S:moments}
By standard combinatorial arguments, one finds the generating function for the $k$-th moment of the largest part for partitions to be
$${\rm MP}_{k}(q)=\sum_{n \geq 0} {\rm mp}_{k}(n)q^n:=\sum_{\lambda \in \mathcal{P}} {\rm lg}(\lambda)^kq^{|\lambda|} = \sum_{m \geq 0} \frac{m^kq^m}{(q;q)_m}, \qquad \text{for $k \geq 0$.}$$
Analogous to the mean found in Theorem \ref{T:PKdist}, Erd\H{o}s and Lehner's result (Theorem \ref{T:ErdosLehner}) implies that
$$
{\rm mp}_{1}(n) = A\sqrt{n}\log\left(A\sqrt{n}\right) + A \gamma \sqrt{n} \left(1+o(1)\right),
$$
where $A=\frac{\sqrt{6}}{\pi}$, as before. Ngo and Rhoades used the factorization (\cite{NR}, eq. (1.8)),
$$
{\rm MP}_{1}(q)={\rm MP}_{0}(q)\sum_{n \geq 1} \frac{q^n}{1-q^n}=\frac{1}{(q)_{\infty}}\sum_{n \geq 1} \frac{q^n}{1-q^n},
$$
essentially a product of a modular form and a quantum modular form, to improve the error term to $O(\log (n))$.
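Ngo and Rhoades' factorization can be checked as a formal $q$-series identity with exact truncated power-series arithmetic; the block below (a self-contained sketch, not part of the paper) verifies $\sum_{m \geq 1} \frac{mq^m}{(q)_m} = \frac{1}{(q)_{\infty}}\sum_{n \geq 1} \frac{q^n}{1-q^n}$ to order $q^{29}$.

```python
N = 30  # work with power series truncated mod q^N

def mul(a, b):
    # product of two truncated series (integer coefficient lists)
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def geom(i):
    # 1/(1 - q^i) mod q^N
    g = [0] * N
    for j in range(0, N, i):
        g[j] = 1
    return g

# LHS: sum_{m>=1} m q^m / (q;q)_m
lhs = [0] * N
inv_qm = [1] + [0] * (N - 1)           # 1/(q;q)_0
for m in range(1, N):
    inv_qm = mul(inv_qm, geom(m))      # now 1/(q;q)_m
    term = [0] * N
    term[m] = m                        # m * q^m
    lhs = [a + b for a, b in zip(lhs, mul(term, inv_qm))]

# RHS: (1/(q;q)_inf) * sum_{n>=1} q^n/(1-q^n); the Lambert series has
# coefficients d(n), the number of divisors of n
inv_qinf = [1] + [0] * (N - 1)
for i in range(1, N):
    inv_qinf = mul(inv_qinf, geom(i))
lam = [0] * N
for n in range(1, N):
    for j in range(n, N, n):
        lam[j] += 1
rhs = mul(inv_qinf, lam)

print(lhs == rhs, lhs[:5])
```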
\begin{theorem}[Theorem 1.5 in \cite{NR}] We have
$${\rm mp}_{1}(n) = A\sqrt{n}\log(A\sqrt{n}) + A \gamma \sqrt{n}+O(\log (n)).$$
\end{theorem}
Furthermore, they found the recursions (\cite{NR}, remark on p. 10)
\begin{equation}\label{E:partmomentrec}
{\rm MP}_{k}(q)=\sum_{j=0}^{k-1} \binom{k-1}{j}{\rm MP}_{j}(q)S_{k-1-j}(q), \qquad \text{where} \qquad S_{k}(q):=\sum_{n \geq 1} \frac{n^kq^n}{1-q^n},
\end{equation}
which express each ${\rm MP}_{k}$ recursively in terms of modular forms and quantum modular forms. They stated that their methods could be used to prove asymptotic expansions for all moments ${\rm mp}_{k}(n)$.
Turning to unimodal sequences, let
$$
{\rm MU}_{k}(q)=\sum_{n \geq 0} {\rm mu}_{k}(n)q^n:=\sum_{\lambda \in \mathcal{U}} {\rm lg}(\lambda)^kq^{|\lambda|}=\sum_{m \geq 0} \frac{m^kq^m}{(q)_m^2}, \qquad \text{for $k \geq 0$.}
$$
In particular, the generating function for unimodal sequences satisfies (see \cite{St}, Proposition 2.5.1)
$$
U(q)={\rm MU}_{0}(q)=\frac{1}{(q)_{\infty}^2}\sum_{n \geq 0} (-1)^{n}q^{\frac{n(n+1)}{2}},
$$
which is a product of a modular form and a false theta function. Recently, Nazaroglu and the second author discovered how to fit this false theta function into a modular framework \cite{BN}. Thus, it would be interesting if we could relate the higher moments ${\rm MU}_{k}$ to ${\rm MU}_{0}$ in the way that Ngo and Rhoades did for partitions. We leave this as an open problem, but we prove a recurrence that is somewhat analogous to \eqref{E:partmomentrec}.
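Stanley's identity above can likewise be verified as a truncated $q$-series with exact integer power-series arithmetic (again only a sanity check, here mod $q^{30}$):

```python
N = 30  # truncate power series mod q^N

def mul(a, b):
    # product of two truncated series (integer coefficient lists)
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j in range(N - i):
                c[i + j] += ai * b[j]
    return c

def geom(i):
    # 1/(1 - q^i) mod q^N
    g = [0] * N
    for j in range(0, N, i):
        g[j] = 1
    return g

# LHS: sum_{m>=0} q^m / (q;q)_m^2
lhs = [0] * N
inv_sq = [1] + [0] * (N - 1)               # 1/(q;q)_0^2
for m in range(N):
    if m >= 1:
        inv_sq = mul(inv_sq, mul(geom(m), geom(m)))   # 1/(q;q)_m^2
    for j in range(N - m):
        lhs[m + j] += inv_sq[j]            # add q^m / (q;q)_m^2

# RHS: (1/(q;q)_inf^2) * sum_{n>=0} (-1)^n q^{n(n+1)/2}
inv_inf_sq = [1] + [0] * (N - 1)
for i in range(1, N):
    inv_inf_sq = mul(inv_inf_sq, mul(geom(i), geom(i)))
theta = [0] * N
n = 0
while n * (n + 1) // 2 < N:
    theta[n * (n + 1) // 2] += (-1) ** n
    n += 1
rhs = mul(inv_inf_sq, theta)

print(lhs == rhs, lhs[:4])
```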
Recall that the complete Bell polynomials $\mathbb{B}_k=\mathbb{B}_k(a_1, \dots, a_k)$ are defined by
$$
\sum_{k \geq 0} \frac{\mathbb{B}_k}{k!}u^k:=\exp\left(\sum_{k \geq 1} \frac{a_k}{k!}u^k\right).
$$
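For small $k$ these expand as $\mathbb{B}_1=a_1$, $\mathbb{B}_2=a_1^2+a_2$, and $\mathbb{B}_3=a_1^3+3a_1a_2+a_3$. The defining exponential identity can be checked by extracting Taylor coefficients of $\exp\left(\sum_{k \geq 1}\frac{a_k}{k!}u^k\right)$ via the standard power-series recurrence $nE_n = \sum_{j=1}^{n} jS_jE_{n-j}$ for $E=\exp(S)$ (a quick check with random sample values of the $a_k$):

```python
import math
import random

random.seed(1)
K = 3
a = [0.0] + [random.uniform(-1.0, 1.0) for _ in range(K)]  # a[1..K]

# S(u) = sum_{k>=1} a_k u^k / k!; E(u) = exp(S(u)) satisfies E' = S'E,
# giving the coefficient recurrence n*E_n = sum_{j=1}^n j*S_j*E_{n-j}
S = [0.0] + [a[k] / math.factorial(k) for k in range(1, K + 1)]
E = [1.0] + [0.0] * K
for n in range(1, K + 1):
    E[n] = sum(j * S[j] * E[n - j] for j in range(1, n + 1)) / n

B = [math.factorial(n) * E[n] for n in range(K + 1)]  # complete Bell polys
a1, a2, a3 = a[1], a[2], a[3]
print(B[1], a1)
print(B[2], a1**2 + a2)
print(B[3], a1**3 + 3 * a1 * a2 + a3)
```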
\begin{theorem}\label{thm:5.1}
For $k \geq 0$ and $n\geq 1$, define
$$
S_{k-1,n}(q):=\sum_{m \geq 1} \frac{m^{k-1}q^{nm}}{1-q^m},
$$
and let $\mathbb{B}_{k,n}:= \mathbb{B}_{k}\left(S_{0,n}, \dots, S_{k-1,n}\right)$ be the complete Bell polynomials in $S_{j,n}$. Then
\begin{equation}\label{E:U_kgenfn}
{\rm MU}_{k}(q) = \frac{1}{(q)_{\infty}^2}\sum_{n \geq 0} (-1)^{n}q^{\frac{n(n+1)}{2}}\mathbb{B}_{k,n+1}.
\end{equation}
\end{theorem}
\begin{remark} Note that we have the recurrence (see \cite{Co}, \S 3.3.)
$$
\mathbb{B}_k:=\begin{cases} 1 & \text{if $k=0$,} \\ \sum_{j=0}^{k-1} \binom{k-1}{j} \mathbb{B}_{k-1-j}a_{j+1} & \text{if $k \geq 1$.}\end{cases}
$$ But the reason that we cannot directly obtain a recurrence for the ${\rm MU}_k$s like in \eqref{E:partmomentrec} is that the $\mathbb{B}$s and $S$s depend on $n$.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:5.1}]
Define
\begin{equation*}\label{E:U_k2vargenfn}
U(\zeta;q):=\sum_{m \geq 0} \frac{\zeta^mq^m}{(q)_m^2},
\end{equation*} and let $\delta_{\zeta}:= \zeta\frac{\partial}{\partial\zeta}$. Then
\begin{equation}\label{E:M_kderiv}
{\rm MU}_{k}(q)= \left[\delta_{\zeta}^k \left(U(\zeta;q) \right) \right]_{\zeta=1}.
\end{equation}
Using straightforward manipulation with Euler's two series expansions (\cite{A}, Corollary 2.2)
\begin{equation*}\label{E:Eulerdistinct}
\frac{1}{(\zeta;q)_{\infty}} = \sum_{n \geq 0} \frac{\zeta^n}{(q)_n}, \qquad (-\zeta;q)_{\infty}=\sum_{n \geq 0} \frac{\zeta^nq^{\frac{n(n-1)}{2}}}{(q)_n},
\end{equation*}
we have
\begin{align*}
U(\zeta;q)&=1 + \sum_{m \geq 1} \frac{\zeta^mq^m}{(q)_m^2}
= 1 + \frac{1}{(q)_{\infty}}\sum_{m \geq 1} \frac{\zeta^{m}q^m\left(q^{m+1}\right)_{\infty}}{(q)_{m}} \\
&= 1 + \frac{1}{(q)_{\infty}}\sum_{\substack{m \geq 1 \\ n \geq 0}}\frac{\zeta^{m}(-1)^{n}q^{m+n(m+1) + \frac{n(n-1)}{2}}}{(q)_{m}(q)_{n}}
= 1 + \frac{1}{(q)_{\infty}}\sum_{\substack{m \geq 1 \\ n \geq 0}}\frac{(-1)^{n}q^{\frac{n(n+1)}{2}}}{(q)_{n}}\cdot \frac{\zeta^mq^{(n+1) m}}{(q)_m} \\
&=1+\frac{1}{(q)_{\infty}}\sum_{n \geq 0} \frac{(-1)^{n}q^{\frac{n(n+1)}{2}}}{(q)_{n}}\left(\frac{1}{(\zeta q^{n+1})_{\infty}} - 1 \right).
\end{align*}
By the chain rule,
$$
\left[\delta_{\zeta}^k \left(\frac{1}{(\zeta q^{n+1})_{\infty}}\right) \right]_{\zeta=1} = \left[\frac{\partial^k}{\partial u^k}\frac{1}{(e^u q^{n+1})_{\infty}}\right]_{u=0},
$$
so it suffices to find the Taylor expansion of $(e^u q^{n+1})^{-1}_{\infty}$ about $u=0$. We first find the Taylor expansion of its logarithm and then exponentiate, which introduces the complete Bell polynomials. We have
$$
\operatorname{Log} \left( \frac{1}{(e^uq^{n+1})_{\infty}} \right)= -\sum_{\ell \geq n+1} \operatorname{Log}\left(1-e^uq^\ell\right) = \sum_{\ell \geq n+1} \sum_{m \geq 1} \frac{e^{mu}q^{m\ell}}{m} = \sum_{m \geq 1} \frac{e^{mu}q^{(n+1)m}}{m(1-q^m)} = \sum_{k \geq 0} \frac{u^k}{k!}S_{k-1,n+1},
$$
where clearly $S_{-1,n+1}=\operatorname{Log} (\frac{1}{(q^{n+1})_{\infty}})$. Hence,
$$
\frac{1}{(e^uq^{n+1})_{\infty}}= \frac{1}{(q^{n+1})_{\infty}}\exp\left(\sum_{k \geq 1} \frac{u^k}{k!}S_{k-1,n+1}\right) = \frac{1}{(q^{n+1})_{\infty}}\sum_{k \geq 0} \frac{u^k}{k!} \mathbb{B}_{k,n+1}.
$$
Thus,
$$
\left[\delta_{\zeta}^k \left(\frac{1}{\left(\zeta q^{n+1}\right)_{\infty}}\right) \right]_{\zeta=1}=\frac{\mathbb{B}_{k,n+1}}{(q^{n+1})_{\infty}},
$$
and we obtain \eqref{E:U_kgenfn} by plugging into \eqref{E:M_kderiv}.
\end{proof}
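The complete Bell polynomials $\mathbb{B}_{k,n+1}$ appearing in the exponentiation step satisfy the generating identity $\exp\left(\sum_{k\ge1}x_ku^k/k!\right)=\sum_{k\ge0}B_k(x_1,\dots,x_k)u^k/k!$ together with the standard recurrence $B_{k+1}=\sum_{i=0}^k\binom{k}{i}B_{k-i}x_{i+1}$; a small Python sketch (ours, not part of the proof):

```python
from math import comb

def complete_bell(x):
    """B_0, ..., B_n evaluated at x = [x_1, ..., x_n], via the recurrence
    B_{k+1} = sum_{i=0}^{k} C(k, i) * B_{k-i} * x_{i+1}."""
    B = [1]
    for k in range(len(x)):
        B.append(sum(comb(k, i) * B[k - i] * x[i] for i in range(k + 1)))
    return B

# At x_1 = ... = x_5 = 1 the complete Bell polynomials give the Bell numbers:
assert complete_bell([1] * 5) == [1, 1, 2, 5, 15, 52]
# B_2 = x_1^2 + x_2 and B_3 = x_1^3 + 3 x_1 x_2 + x_3:
assert complete_bell([2, 3, 5]) == [1, 2, 7, 31]
```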
We leave further exploration of the $q$-series ${\rm MK}_k(q)$ as an open problem.
https://arxiv.org/abs/1707.00837 | Computation of Green's functions through algebraic decomposition of operators | In this article we use linear algebra to improve the computational time for obtaining Green's functions of linear differential equations with reflection (DER). This is achieved by decomposing both the `reduced' equation (the ODE associated to a given DER) and the corresponding two-point boundary conditions. |
\section{Introduction}
Differential operators with reflection have recently been of great interest, partly due to their applications to Supersymmetric Quantum Mechanics \cite{Post, Roy, Gam} or topological methods applied to nonlinear analysis \cite{Cab5}.\par
In recent years, work in this field has been related to obtaining eigenvalues and explicit solutions of different problems \cite{PiaoSun,Pia3,Krits,Krits2}, their qualitative properties \cite{Ashy,Cab5}, or the associated Green's functions \cite{Sars,Sars2,Cab4,CabToj2,CabToj,Toj3,CTMal}. In \cite{CTMal} the authors described a method to derive the Green's function of differential equations with constant coefficients, reflection and two-point boundary conditions. This algorithm was implemented in \textit{Mathematica} (see \cite{Math}) in order to put it to practical use. Unfortunately, it was soon observed that, although theoretically correct, it had severe limitations when it came to computing the Green's functions of problems of high order. In this respect, we have to point out that an order $n$ linear DER is reduced to an order $2n$ ordinary differential equation --see Theorem \ref{thmdei} and compare equations \eqref{rbvp} and \eqref{redpro}. This particularity poses a great challenge, for the computational time increases greatly with $n$.\par
To sort this out, the best option is to go back from an order $2n$ problem to \emph{two} problems of order $n$. This procedure is much faster than solving the order $2n$ problem directly. Furthermore, in some cases the decomposition provides two equivalent problems, or a problem and its adjoint. In those cases the improvement is even more noticeable.
\par
In the next Section we contextualize the problem with a brief introduction to differential equations with reflection and state some basic results concerning the Green's function associated to them. In Section 3 we develop some theoretical results which provide a way of decomposing the DER we are dealing with. Finally, in Section 4 we establish a suitable decomposition for the boundary conditions, state criteria for self-adjointness of the decomposed problem and provide examples to illustrate the theory.
\section{Differential equations with reflection}
In order to establish a useful framework to work with these equations, we consider the differential operator $D$, the pullback operator of the reflection $\phi(t)=-t$, denoted by $\phi^*(u)(t)=u(-t)$, and the identity operator, $\Id$. \par
Let $T\in{\mathbb R}^+$ and $I:=[-T,T]$. We now consider the algebra ${\mathbb R}[D,\phi^* ]$ consisting of the linear operators of the form
\begin{equation}\label{Lop}L=\sum_{k=0}^n\(a_k\phi^*+b_k\)D^k,\end{equation}
where $n\in{\mathbb N}$, $a_k,b_k\in{\mathbb R},\ k=0,\dots,n$, which act as
\begin{displaymath}Lu(t)=\sum_{k=0}^na_ku^{(k)}(-t)+\sum_{k=0}^nb_ku^{(k)}(t),\ t\in I,\end{displaymath}
on any function $u\in W^{n,1}(I)$.
The operation in the algebra is the usual composition of operators; we will omit the composition sign. We observe that $D^k\phi^*=(-1)^k\phi^*D^k$ for $k=0,1,\dots$, which makes it a \textit{noncommutative algebra}.
For convenience, we will write $\sum_{k}\equiv\sum_{k=0}^n$ with $k\in\{0,1,\dots\}$, taking into account that the coefficients $a_k,b_k$ are zero for sufficiently large indices.\par
Notice that ${\mathbb R}[D,\phi^*]$ is not a unique factorization domain. For instance, \begin{displaymath}D^2-1=(D+1)(D-1)=-(\phi^*D+\phi^*)^2.\end{displaymath} \par
Let ${\mathbb R}[D]$ be the ring of polynomials with real coefficients on the variable $D$. The following property is crucial for the obtaining of a Green's function.
\begin{thm}[{\cite[Theorem 2.1]{CTMal}}]\label{thmdec}
Take $L$ as defined in \eqref{Lop} and define
\begin{equation}\label{Rop}R:=\sum_{k}a_k\phi^*D^k+\sum_{l}(-1)^{l+1}b_lD^l\in{\mathbb R}[D,\phi^* ].\end{equation} Then $RL=LR\in{\mathbb R}[D]$.
\end{thm}
\begin{rem}\label{remcoefred}
If $S:=RL=\sum_{k=0}^{2n} c_kD^k$, then
\begin{displaymath}c_k=\begin{dcases} 0, & k \text{ odd,} \\
2\sum_{l=0}^{\frac{k}{2}-1}\(-1\)^l\(a_la_{k-l}-b_lb_{k-l}\)+\(-1\)^\frac{k}{2}\(a_\frac{k}{2}^2-b_\frac{k}{2}^2\) & k \text{ even.}\end{dcases}\end{displaymath}
\end{rem}
This implies that the \textit{reduced} operator $RL$ only has coefficients for the even powers of the derivative, so the equation is self-adjoint. If the boundary conditions are appropriate (we will clarify this statement in Theorem \ref{libcab}), then the Green's function is symmetric \cite{Cablibro}. Observe that $c_0=a_0^2-b_0^2$. Also, if $L=\sum_{i=0}^n \(a_i\phi^*+b_i\)D^i$ with $a_n\ne0$ or $b_n\ne0$, we have that $c_{2n}=(-1)^n(a_n^2-b_n^2)$. Hence, if $a_n=\pm b_n$, then $c_{2n}=0$. This shows that, by composing two elements of ${\mathbb R}[D,\phi^* ]$, we can get another element with derivatives of lower order. This is quite a difficulty when it comes to computing the Green's function, for in this case we could have one, many or no solutions of our original problem \cite{CTMal}. The following example is quite illustrative.\par
\begin{exa}\label{firstexa} Consider the equation
\begin{displaymath}x^{3)}(t)+x^{3)}(-t)=\sin t,\ t\in I.\end{displaymath}
This equation cannot have a solution, for the left hand side is an even function while the right hand side is an odd function.\end{exa}
\par
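Theorem \ref{thmdec} and the coefficient formula of Remark \ref{remcoefred} are easy to check numerically: store an operator $\sum_k(a_k\phi^*+b_k)D^k$ as its two coefficient lists and compose using $D^j\phi^*=(-1)^j\phi^*D^j$ and $\phi^*\phi^*=\Id$. The following Python sketch (ours; it is not the \emph{Mathematica} implementation of \cite{Math}) verifies that $RL$ lies in ${\mathbb R}[D]$ with vanishing odd coefficients:

```python
# Our sketch of the algebra R[D, phi*]: an operator sum_k (a_k phi* + b_k) D^k
# is stored as the pair (a, b) of coefficient lists.

def compose(op1, op2):
    """Composition using D^j phi* = (-1)^j phi* D^j and phi* phi* = Id."""
    a1, b1 = op1
    a2, b2 = op2
    deg = len(a1) + len(a2) - 1
    a = [0.0] * deg  # coefficients of the phi* part
    b = [0.0] * deg  # coefficients of the plain part
    for j, (aj, bj) in enumerate(zip(a1, b1)):
        sgn = (-1) ** j
        for k, (ak, bk) in enumerate(zip(a2, b2)):
            b[j + k] += sgn * aj * ak + bj * bk   # phi* D^j phi* D^k = (-1)^j D^{j+k}
            a[j + k] += aj * bk + sgn * bj * ak   # cross terms keep a phi* factor
    return a, b

aL = [1.0, -2.0, 0.5]   # a_0, a_1, a_2 of L
bL = [3.0, 1.0, -1.0]   # b_0, b_1, b_2 of L
R = (aL, [(-1) ** (l + 1) * bl for l, bl in enumerate(bL)])  # R as in the theorem
aS, bS = compose(R, (aL, bL))

assert all(abs(c) < 1e-12 for c in aS)                        # RL has no phi* part
assert all(abs(bS[k]) < 1e-12 for k in range(1, len(bS), 2))  # odd coefficients vanish
assert abs(bS[0] - (aL[0]**2 - bL[0]**2)) < 1e-12             # c_0 = a_0^2 - b_0^2
assert abs(bS[4] - (aL[2]**2 - bL[2]**2)) < 1e-12             # c_4 = (-1)^2 (a_2^2 - b_2^2)
```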
As we said before, $S=RL$ is a usual differential operator with constant coefficients. Consider now the following problem.
\begin{equation}\label{lccbvp}Su(t):=\sum_{k=0}^na_ku^{k)}(t)=h(t),\ t\in I,\ B_ku:=\sum_{j=0}^{n-1}\left[\a_{kj}u^{j)}(-T)+\b_{kj}u^{j)}(T)\right]=0,\ k=1,\dots,n.
\end{equation}
The existence of Green's functions for problems such as \eqref{lccbvp} is a classical result (see, for instance, \cite{Cab6}). We present it here adapted to our framework.
\begin{thm}\label{thmgf}
Assume the following homogeneous problem has a unique solution
\begin{equation*}Su(t)=0,\ t\in I,\ B_ku=0,\ k=1,\dots,n.
\end{equation*}
Then there exists a unique function, called \textbf{Green's function}, such that
\begin{itemize}
\item[(G1)] $G$ is defined on the square $I^2$.
\item[(G2)] The partial derivatives $\frac{\partial^kG}{\partial t^k}$ exist and are continuous on $I^2$ for $k=0,\dots,n-2$.
\item[(G3)] $\frac{\partial^{n-1}G}{\partial t^{n-1}}$ and $\frac{\partial^nG}{\partial t^n}$ exist and are continuous on $I^2\backslash\{(t,t)\ :\ t\in I\}$.
\item[(G4)] The lateral limits $\frac{\partial^{n-1}G}{\partial t^{n-1}}(t,t^+)$ and $\frac{\partial^{n-1}G}{\partial t^{n-1}}(t,t^-)$ exist for every $t\in(-T,T)$ and
\begin{displaymath}\frac{\partial^{n-1}G}{\partial t^{n-1}}(t,t^-)-\frac{\partial^{n-1}G}{\partial t^{n-1}}(t,t^+)=\frac{1}{a_n}.\end{displaymath}
\item[(G5)] For each $s\in(-T,T)$ the function $G(\cdot,s)$ is a solution of the differential equation $Su=0$ on $I\backslash\{s\}$.
\item[(G6)] For each $s\in(-T,T)$ the function $G(\cdot,s)$ satisfies the boundary conditions $B_ku=0,\ k=1,\dots,n$.
\end{itemize}
Furthermore, the function $u(t):=\int_{-T}^TG(t,s)h(s)\dif s$ is the unique solution of problem \eqref{lccbvp}.
\end{thm}
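As a toy illustration of these properties (our example, not taken from \cite{Cab6}): for $Su=u''$ on $[-1,1]$ with conditions $u(-1)=u(1)=0$, the Green's function is $G(t,s)=(t-1)(s+1)/2$ for $s\le t$ and $(s-1)(t+1)/2$ otherwise, and properties (G4), (G6) and the solution formula can be checked numerically:

```python
import math

# Toy instance (ours): S u = u'' on [-1, 1] with u(-1) = u(1) = 0.
def G(t, s):
    return (t - 1) * (s + 1) / 2 if s <= t else (s - 1) * (t + 1) / 2

# (G6): G(., s) satisfies the boundary conditions for each fixed s.
assert all(abs(G(-1, s)) < 1e-12 and abs(G(1, s)) < 1e-12 for s in [-0.5, 0.0, 0.7])

# (G4): the jump of dG/dt across the diagonal equals 1/a_n = 1.
t = 0.3
dG_minus = (t + 1) / 2   # dG/dt on the region s < t, evaluated at s = t
dG_plus = (t - 1) / 2    # dG/dt on the region s > t, evaluated at s = t
assert abs(dG_minus - dG_plus - 1.0) < 1e-12

# u(t) = int_{-1}^{1} G(t, s) h(s) ds solves u'' = h; for h = sin the
# exact solution with these boundary conditions is u(t) = -sin t + t sin 1.
def u(t, N=4000):
    h = 2.0 / N  # midpoint rule
    return sum(G(t, -1 + (i + 0.5) * h) * math.sin(-1 + (i + 0.5) * h)
               for i in range(N)) * h

for t in [-0.6, 0.1, 0.8]:
    assert abs(u(t) - (-math.sin(t) + t * math.sin(1))) < 1e-4
```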
Now we can state the result which relates the Green's function of a problem with reflection to the Green's function of its associated reduced problem.\par
In order to do that, given an operator ${\mathcal L}$ defined on some set of functions of one variable, we will define the operator ${\mathcal L}_\vdash$ as ${\mathcal L}_\vdash G(t,s):={\mathcal L}(G(\cdot,s))|_{t}$ for every $s$ and any suitable function $G$ of two variables.
\begin{thm}[\cite{CTMal}]\label{thmdei} Let $I=[-T,T]$. Consider the problem
\begin{equation}\label{rbvp}Lu(t)=h(t),\ t\in I,\ B_ku=0,\ k=1,\dots,n,
\end{equation}
where $L$ is defined as in \eqref{Lop}, $h\in L^1(I)$ and
\begin{displaymath}B_ku:=\sum_{j=0}^{n-1}\left[\a_{kj}u^{j)}(-T)+\b_{kj}u^{j)}(T)\right],\ k=1,\dots,n.\end{displaymath}
Then, there exists $R\in {\mathbb R}[D,\phi^* ]$ (as in \eqref{Rop}) such that $S:=RL\in{\mathbb R}[D]$ and the unique solution of problem \eqref{rbvp} is given by $\int_{-T}^TR_\vdash G(t,s)h(s)\dif s$, where $G$ is the Green's function associated to the problem
\begin{align}\label{redpro}Su & =0,\\ \label{redproc1}B_ku & =0,\ k=1,\dots,n, \\ \label{redproc2} B_kRu & =0,\ k=1,\dots,n,\end{align}
assuming it has a unique solution.
\end{thm}
As stated in Section 1, Theorem \ref{thmdei} was implemented in Mathematica in \cite{Math}. We now proceed to describe some steps which could be added to the algorithm in order to improve it.
\section{Decomposing the reduced equation}
The computation of Green's functions is prohibitive in terms of computation time \cite{Math}, especially for high order equations, so it is necessary to find ways to palliate this problem. Our approach will consist of decomposing our problem in order to deal with equations of lower order.\par
First observe that, from Remark \ref{remcoefred}, we know that the reduced equation has no derivatives of odd order. For convenience, if $p$ is a real (complex) polynomial, we will denote by $p_-$ the polynomial with the same leading coefficient and opposite roots.
\begin{lem}\label{lemqq-}
Let $n\in{\mathbb N}$ and let $p(x)=\sum_{k=0}^{n}\a_{2k}x^{2k}$ be a real polynomial of order $2n$. Then there is a complex polynomial $q$ of order $n$ such that $p=\a_{2n}qq_-$. Furthermore, if $\tilde p(x)=\sum_{k=0}^{n}\a_{2k}x^{k}$ has no negative roots, then $q$ is a real polynomial.
\end{lem}
\begin{proof}
First observe that $p$ is a polynomial in $x^2$ and, therefore, if $\l$ is a root of $p$, so is $-\l$. Hence, using the Fundamental Theorem of Algebra, the first part of the result can be derived by separating the monomials that compose $p$ into two different polynomials with opposite roots.\par Let us do this explicitly to show that, in the case where $\tilde p$ has no negative roots, $q$ is a real polynomial.\par
Take the change of variables $y=x^2$. Then, $p(x)=\tilde p(y)$ and, by the Fundamental Theorem of Algebra,
\begin{align*} \tilde p(y)= \sum_{k=0}^{n}\a_{2k}y^{k} = & \a_{2n}y^\sigma(y-\l_1^2)\cdots(y-\l_m^2)(y+\l_{m+1}^2) \\ & \cdots (y+\l_{\overline m}^2)(y^2+\mu_1y+\nu_1^2) \cdots(y^2+\mu_ly+\nu_l^2),\end{align*}
for some integers $\sigma,m,\overline m,l$ and real numbers $\l_1,\dots,\l_{\overline m},\nu_1,\dots,\nu_l,\mu_1,\dots,\mu_l$ such that $\l_k>0$ and $\nu_k>|\mu_k|/2$ for every $k$ in the appropriate set of indices. The terms of the form $y^2+\mu_ky+\nu_k^2$ correspond to the pairs of complex roots of the polynomial. This means that the discriminant $\Delta=\mu_k^2-4\nu_k^2$ is negative, that is, $\nu_k>|\mu_k|/2$.\par Hence,
\begin{align*}p(x)= & \a_{2n}x^{2\sigma}(x^2-\l_1^2)\cdots(x^2-\l_m^2)(x^2+\l_{m+1}^2) \\ & \cdots(x^2+\l_{\overline m}^2)(x^4+\mu_1x^2+\nu_1^2)\cdots(x^4+\mu_lx^2+\nu_l^2).\end{align*}
Now we have that
\begin{displaymath}(x^2-\l_k^2)=(x+\l_k)(x-\l_k),\quad (x^2+\l_k^2)=(x+\l_ki)(x-\l_ki),\end{displaymath}
\begin{displaymath}\text{and}\quad (x^4+\mu_kx^2+\nu_k^2)=(x^2-x \sqrt{2\nu_k-\mu_k}+\nu_k)(x^2+x \sqrt{2\nu_k-\mu_k}+\nu_k),\end{displaymath}
for any $k$ in the appropriate set of indices. Define
\begin{align*}q(x)= & x^\sigma(x-\l_1)\cdots(x-\l_m)(x-\l_{m+1}i)\cdots(x-\l_{\overline m}i)(x^2-x \sqrt{2\nu_1-\mu_1}+\nu_1) \\ & \cdots(x^2-x \sqrt{2\nu_l-\mu_l}+\nu_l),\end{align*}
and
\begin{align*}q_-(x)= & x^\sigma(x+\l_1)\cdots(x+\l_m)(x+\l_{m+1}i)\cdots(x+\l_{\overline m}i)(x^2+x \sqrt{2\nu_1-\mu_1}+\nu_1) \\ & \cdots(x^2+x \sqrt{2\nu_l-\mu_l}+\nu_l).\end{align*}
We have that $p=\a_{2n}qq_-$.
Observe that if $\l$ is a root of $p$, then $\l^2$ is a root of $\tilde p$. Hence, $\tilde p$ having no negative roots is equivalent to $p$ having no roots of the form $\l=ai$ with $a\ne 0$. Thus,
\begin{align*}p(x)= & \a_{2n}x^{2\sigma}(x^2-\l_1^2)\cdots(x^2-\l_m^2)(x^4+\mu_1x^2+\nu_1^2)\cdots(x^4+\mu_lx^2+\nu_l^2),
\\
q(x)= & x^\sigma(x-\l_1)\cdots(x-\l_m)(x^2-x \sqrt{2\nu_1-\mu_1}+\nu_1) \cdots(x^2-x \sqrt{2\nu_l-\mu_l}+\nu_l),
\\
q_-(x)= & x^\sigma(x+\l_1)\cdots(x+\l_m)(x^2+x \sqrt{2\nu_1-\mu_1}+\nu_1) \cdots(x^2+x \sqrt{2\nu_l-\mu_l}+\nu_l).
\end{align*}
That is, $q$ is a real polynomial.
\end{proof}
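For instance (a toy instance of ours), $p(x)=(x^2-1)(x^4+x^2+4)=x^6+3x^2-4$ has $\tilde p(y)=(y-1)(y^2+y+4)$ with no negative roots, $\l_1=1$, $\mu_1=1$ and $\nu_1=2$, so the construction yields real $q$ and $q_-$; a quick numerical check of $p=\a_{2n}qq_-$:

```python
import math

def polymul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# p(x) = (x^2 - 1)(x^4 + x^2 + 4); here lambda_1 = 1, mu_1 = 1, nu_1 = 2,
# so sqrt(2*nu_1 - mu_1) = sqrt(3) in the construction of the lemma.
p = polymul([-1.0, 0.0, 1.0], [4.0, 0.0, 1.0, 0.0, 1.0])
r3 = math.sqrt(3.0)
q  = polymul([-1.0, 1.0], [2.0, -r3, 1.0])   # (x - 1)(x^2 - sqrt(3) x + 2)
qm = polymul([ 1.0, 1.0], [2.0,  r3, 1.0])   # (x + 1)(x^2 + sqrt(3) x + 2)
prod = polymul(q, qm)

assert len(prod) == len(p)
assert all(abs(c1 - c2) < 1e-9 for c1, c2 in zip(prod, p))
```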
\begin{rem}\label{RemDes} Descartes' rule of signs establishes that the number of positive roots (counted with multiplicity) of a real polynomial in one variable is either equal to the number of sign differences between consecutive nonzero coefficients, or less than it by an even number, provided the terms of the polynomial are ordered by descending exponent. This implies that a sufficient criterion for a polynomial $p(x)$ to have no negative roots is for $p(-x)$ to have all coefficients of positive sign, that is, for $p(x)$ to have positive even coefficients and negative odd coefficients.\par
There exist algorithmic ways of determining the exact number of positive (or real) roots of a polynomial. For more information on this issue see, for instance, \cite{Yan1,Yan2, Lia}.
\end{rem}
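The sign-change count in Descartes' rule is straightforward to implement; a minimal sketch (ours):

```python
def sign_changes(coeffs):
    """Number of sign changes between consecutive nonzero coefficients
    (coefficients ordered by descending exponent)."""
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s1, s2 in zip(signs, signs[1:]) if s1 != s2)

# p(x) = x^3 - x^2 - 2x + 2 = (x - 1)(x^2 - 2): positive roots 1 and sqrt(2).
assert sign_changes([1, -1, -2, 2]) == 2       # at most 2 positive roots
# p(-x) = -x^3 - x^2 + 2x + 2: one sign change, so at most one negative root.
assert sign_changes([-1, -1, 2, 2]) == 1
```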
The following Lemma establishes a relation between the coefficients of $q$ and $q_-$.
\begin{lem}\label{relqq-}
Let $n\in{\mathbb N}$ and $q(x)=\sum_{k=0}^{n}\a_{k}x^{k}$ be a complex polynomial. Then \[q_-(x)=\sum_{k=0}^{n}(-1)^{k+n}\a_{k}x^{k}.\]
\end{lem}
\begin{proof}
We proceed by induction. For $n=1$, $q(x)=\a(x-\l_1)$. Clearly, $q$ has the root $\l_1$ and $q_-(x)=\a(x+\l_1)=(-1)^{1+1}\a x+(-1)^{1}\a\l_1$ the root $-\l_1$.\par
Assume the result is true for some $n\ge1$. Then, for $n+1$, $q$ is of the form $q(x)=(x-\l_{n+1})r(x)$, where $r(x)=\sum_{k=0}^{n}\a_{k}x^{k}$ is a polynomial of order $n$, that is,
\begin{displaymath}q(x)=(x-\l_{n+1})\sum_{k=0}^{n}\a_{k}x^{k}=\a_nx^{n+1}+\sum_{k=1}^n\left[\a_{k-1}-\l_{n+1}\a_{k}\right]x^k-\l_{n+1}\a_0.\end{displaymath}
Now, $q_-(x)=(x+\l_{n+1})r_-(x)$. Since the formula is valid for $n$,
\begin{align*}q_-(x) & =(x+\l_{n+1})r_-(x)=(x+\l_{n+1})\sum_{k=0}^{n}(-1)^{k+n}\a_{k}x^{k}\\ & =\a_nx^{n+1}+\sum_{k=1}^n(-1)^{k+n+1}\left[\a_{k-1}-\l_{n+1}\a_{k}\right]x^k+(-1)^{n}\l_{n+1}\a_0.\end{align*}
So the formula is valid for $n+1$ as well.
\end{proof}
\begin{rem}The result can be directly proven by considering the last statement in Remark \ref{RemDes}. If we take a polynomial $p(x)=a(x-\l_1)\cdots(x-\l_n)$, the polynomial $p(-x)$ has exactly opposite roots. Actually, $p(-x)=a(-x-\l_1)\cdots(-x-\l_n)=(-1)^na(x+\l_1)\cdots(x+\l_n)$. It is easy to check that the coefficients of $p(-x)$ are precisely as described in the statement of Lemma \ref{relqq-} save for the factor $(-1)^n$.
\end{rem}
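Lemma \ref{relqq-} amounts to the identity $q_-(x)=(-1)^nq(-x)$ noted in the remark above; a quick check (our sketch):

```python
def q_minus(coeffs):
    """Given q = sum a_k x^k (lowest degree first, degree n), return q_-,
    the polynomial with coefficients (-1)^(k+n) a_k as in Lemma relqq-."""
    n = len(coeffs) - 1
    return [(-1) ** (k + n) * a for k, a in enumerate(coeffs)]

def poly_eval(coeffs, x):
    return sum(a * x**k for k, a in enumerate(coeffs))

q = [5.0, 0.0, -2.0, 1.0]       # q(x) = x^3 - 2x^2 + 5
qm = q_minus(q)                 # expect x^3 + 2x^2 - 5
assert qm == [-5.0, 0.0, 2.0, 1.0]

# q_-(x) = (-1)^n q(-x), so the roots of q_- are the opposites of those of q.
n = len(q) - 1
for x in [-1.7, 0.4, 2.3]:
    assert abs(poly_eval(qm, x) - (-1) ** n * poly_eval(q, -x)) < 1e-12
```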
This last Lemma allows the computation of the polynomials $q$ and $q_-$ related to the polynomial $RL$ in the variable $D$ using the formula given in Remark \ref{remcoefred}. We will assume that $RL$ is of order $2n$, that is, $a_n^2-b_n^2\not=0$. Otherwise the problem of computing $q$ and $q_-$ would be the same, but these polynomials would be of lower order. Also, assume $RL$, considered as a polynomial in $D^2$, has no negative roots, in order for $q$ to be a real polynomial. If $L=\sum_{k=0}^{n}(a_{k}\phi^*+b_k)D^{k}$ and $q(D)=D^n+\sum_{k=0}^{n-1}\a_{k}D^{k}$, then \[RL=\sum_{k=0}^{2n}c_kD^k=(-1)^n(a_n^2-b_n^2)q(D)q_-(D).\]
This relation establishes the following system of quadratic equations:
\begin{align*} c_{2k}= & 2\sum_{l=0}^{k-1}\(-1\)^l\(a_la_{2k-l}-b_lb_{2k-l}\)+\(-1\)^k\(a_k^2-b_k^2\) \\ = & (a_n^2-b_n^2)\left[2\sum_{l=0}^{k-1}\(-1\)^l\(\a_l\a_{2k-l}\)+\(-1\)^k\a_k^2\right],\quad k=0,\dots,n,\end{align*}
where $a_k,b_k,\a_k=0$ if $k\not\in\{0,\dots,n\}$ and $\a_n=1$. Discarding the trivial equation for $k=n$, these are $n$ equations in the $n$ unknowns $\a_0,\dots,\a_{n-1}$. We present here the case $n=2$ to illustrate the solution of these equations.
\begin{exa}
For $n=2$, we have that
\begin{align*}RL &= \left(a_2^2-b_2^2\right)D^4+ \left(-a_1^2+2 a_0 a_2+b_1^2-2 b_0 b_2\right)D^2+a_0^2-b_0^2,\\ \left(a_2^2-b_2^2\right)q(D)q_-(D) & = \left(a_2^2-b_2^2\right)D^4+\left(2 \alpha _0-\alpha _1^2\right) \left(a_2^2-b_2^2\right)D^2+\alpha _0^2 \left(a_2^2-b_2^2\right),\end{align*}
and the system of equations is
\begin{equation} \label{eqsym2}\begin{aligned}
a_0^2-b_0^2 & =\left(a_2^2-b_2^2\right)\alpha _0^2,\\ -a_1^2+2 a_0 a_2+b_1^2-2 b_0 b_2 &= \left(a_2^2-b_2^2\right)\left(2 \alpha _0-\alpha _1^2\right).
\end{aligned}\end{equation}
Before computing the solutions, let us state explicitly the restrictions implied by the fact that $RL$, considered as an order 2 polynomial in $D^2$, that is, $RL(x)=a x^2+b x +c$, has no negative roots. There are two options:
\begin{enumerate}
\item[\emph{(I)}] There are two complex roots, that is, $\Delta= b^2-4ac<0$. This is equivalent to $ac>0\land|b|<2\sqrt{ac}$. Expressed in terms of the coefficients of $RL$:
\begin{displaymath}(b_0^2-a_0^2) (b_2^2-a_2^2)>0 \text{ and } |-a_1^2+2 a_0 a_2+b_1^2-2 b_0 b_2|<2\sqrt{(b_0^2-a_0^2) (b_2^2-a_2^2)}.\end{displaymath}
\item[\emph{(II)}] There are two nonnegative roots, that is, $\Delta=b^2-4ac\ge0$ and both roots of $ax^2+bx+c$ are nonnegative. This is equivalent to $(a,c\ge 0\land -b\ge2\sqrt{ac})\lor(a,c\le 0\land b\ge2\sqrt{ac})$. Expressed in terms of the coefficients of $RL$:
\end{enumerate}
\begin{align*}& \left[(a_0^2-b_0^2), (a_2^2-b_2^2)\ge0\land-(-a_1^2+2 a_0 a_2+b_1^2-2 b_0 b_2)\ge2\sqrt{(b_0^2-a_0^2) (b_2^2-a_2^2)}\right]\end{align*}\begin{center}{\textsc{or}}\end{center} \begin{align*} & \left[(a_0^2-b_0^2), (a_2^2-b_2^2)\le0\land(-a_1^2+2 a_0 a_2+b_1^2-2 b_0 b_2)\ge 2\sqrt{(b_0^2-a_0^2) (b_2^2-a_2^2)}\right].\end{align*}
Now, with these conditions, the solutions of the system of equations \eqref{eqsym2} are:
\emph{Case (I).} We have two solutions:
\begin{displaymath}\a_0=\sqrt{\frac{b_0^2-a_0^2}{b_2^2-a_2^2}},\end{displaymath} \begin{displaymath} \a_1=\pm\sqrt{\frac{2\sign(a_2^2-b_2^2)\sqrt{(b_0^2-a_0^2) (b_2^2-a_2^2)}-(-a_1^2+2 a_0 a_2+b_1^2-2 b_0 b_2)}{a_2^2-b_2^2}}.\end{displaymath}\par
\emph{Case (II).} We have four solutions depending on whether we choose $\xi=1$ or $\xi=-1$:
\begin{displaymath}\a_0=\xi\sqrt{\frac{b_0^2-a_0^2}{b_2^2-a_2^2}},\end{displaymath} \begin{displaymath} \a_1=\pm\sqrt{\frac{2\xi\sign(a_2^2-b_2^2)\sqrt{(b_0^2-a_0^2) (b_2^2-a_2^2)}-(-a_1^2+2 a_0 a_2+b_1^2-2 b_0 b_2)}{a_2^2-b_2^2}}.\end{displaymath}
These solutions provide well defined real numbers by conditions \emph{(I)} and \emph{(II)}.
\end{exa}
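As a sanity check of Case \emph{(I)} (with coefficients of our own choosing), one can evaluate the formulas for $\a_0,\a_1$ and compare the coefficients of $(a_2^2-b_2^2)q(D)q_-(D)=(a_2^2-b_2^2)(D^4+(2\a_0-\a_1^2)D^2+\a_0^2)$ with those of $RL$:

```python
import math

def sign(x):
    return (x > 0) - (x < 0)

# Our choice of coefficients for L = (a_0 phi* + b_0) + ... + (a_2 phi* + b_2) D^2:
a0, b0, a1, b1, a2, b2 = 0.0, 1.0, 0.0, 1.0, 0.0, 1.0

c0 = a0**2 - b0**2
c2 = -a1**2 + 2*a0*a2 + b1**2 - 2*b0*b2
c4 = a2**2 - b2**2

# Case (I) applies: ac > 0 and |b| < 2 sqrt(ac), with a = c4, b = c2, c = c0.
assert c4 * c0 > 0 and abs(c2) < 2 * math.sqrt(c4 * c0)

alpha0 = math.sqrt((b0**2 - a0**2) / (b2**2 - a2**2))
alpha1 = math.sqrt(
    (2*sign(a2**2 - b2**2)*math.sqrt((b0**2 - a0**2)*(b2**2 - a2**2)) - c2)
    / (a2**2 - b2**2))

# Coefficients of (a_2^2 - b_2^2) q(D) q_-(D) must match those of RL:
assert abs(c4 * (2*alpha0 - alpha1**2) - c2) < 1e-12
assert abs(c4 * alpha0**2 - c0) < 1e-12
```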
\section{Decomposing the boundary conditions}
Now we consider those cases where the problem can be decomposed into two equations. We will try to identify the circumstances under which problem \eqref{redpro}-\eqref{redproc1}-\eqref{redproc2} can be expressed as an equivalent factored problem of the form
\begin{alignat}{2}\label{p1}L_1u & =y,\ && V_ju=0, j=1,\dots,n,\\\label{p2} L_2y & =Rh,\ && \widetilde{V_j}y=0, j=1,\dots,n,
\end{alignat}
where $S=L_2L_1$. If that were the case, we know the conditions \eqref{redproc1}-\eqref{redproc2} have to be equivalent to
\begin{equation}\label{vc}V_ju=0,\ \widetilde{V_j}L_1u=0,\ j=1,\dots,n.\end{equation}
In this case, the Green's function of problem \eqref{redpro}-\eqref{redproc1}-\eqref{redproc2} can be expressed as
\begin{displaymath}G(t,s)=\int_{-T}^TG_1(t,r)G_2(r,s)\dif r,\end{displaymath}
where $G_1$ is the Green's function associated to problem \eqref{p1} and $G_2$ the one associated to problem \eqref{p2}, assuming both Green's functions exist.\par
In order to guarantee that \eqref{redproc1}-\eqref{redproc2} and \eqref{vc} are equivalent, let us establish the following definitions. Let \begin{align*}\Gamma_1: & =(\a_{kj})_{k =1,\dots,n}^{j =0,\dots,n-1},\quad X_n:=(u(T),u'(T),\dots,u^{(n-1)}(T)),\\\Theta_1: & =(\b_{kj})_{k =1,\dots,n}^{j =0,\dots,n-1},\quad\overline X_n:=(u(-T),u'(-T),\dots,u^{(n-1)}(-T)).\end{align*} Then the boundary conditions \eqref{redproc1} can be expressed as $\Gamma_1\overline X_n+\Theta_1X_n=0$. In the same way, \eqref{redproc2} can be written as $(\Gamma_2\ \Gamma_3)\overline X_{2n}+(\Theta_2\ \Theta_3)X_{2n}=0$ for some matrices $\Gamma_2,\Gamma_3,\Theta_2,\Theta_3\in{\mathcal M}_n({\mathbb R})$. So, globally, the conditions on equation \eqref{redpro} can be expressed as
\begin{equation}\label{c1}\begin{pmatrix} \Gamma_1 & 0 \\ \Gamma_2 & \Gamma_3\end{pmatrix}\overline X_{2n}+\begin{pmatrix}\Theta_1 & 0 \\ \Theta_2 & \Theta_3\end{pmatrix} X_{2n}=0.\end{equation}
Now, assume $L_1$ and $\widetilde{V}_j$ are of the form
\begin{align*}L_1 & =\sum_{l=0}^{n}c_{l}D^{l},\\
\widetilde{V}_ju & =\sum_{k=0}^{n-1}\left[\c_{jk}u^{k)}(-T)+\d_{jk}u^{k)}(T)\right]=\sum_{k=0}^{n-1}\left[\c_{jk}(-T)^*+\d_{jk}T^*\right]D^{k}u,\ j=1,\dots,n.\end{align*}
for some $c_l,\c_{jk},\d_{jk}\in{\mathbb R}$, $l=0,\dots,n$, $j=1,\dots,n$, $k=0,\dots,n-1$, and where $a^*$ denotes the pullback by the constant $a$. Define now $\Phi:=(\gamma_{jk})_{j,k}$, $\Psi:=(\delta_{jk})_{j,k}\in{\mathcal M}_n({\mathbb R})$ and
\[\Xi=(d_{jk})_{j=0,\dots, n-1}^{k=0,\dots,2n-1}:=\begin{pmatrix}c_0 & c_1 & c_2 & \cdots & c_{n-1} & c_n & 0 & 0 & \cdots & 0 \\ 0& c_0 & c_1 & \cdots & c_{n-2} & c_{n-1} & c_n & 0 & \cdots & 0\\ 0 & 0& c_0 & \cdots & c_{n-3} & c_{n-2} & c_{n-1} & c_n & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & c_0 & c_1 & c_2 & c_3 & \cdots & c_n\end{pmatrix}=\(\Xi_1\ \Xi_2\)\in{\mathcal M}_{n\times2n}({\mathbb R}),\]
where $\Xi_1$, $\Xi_2\in{\mathcal M}_n({\mathbb R})$, $\Xi_2$ is invertible (because $c_n\ne0$) and $\Xi_1$ is invertible if and only if $c_0\ne0$.
\par Now we are ready to start the calculations. We have that
\begin{align*}(\widetilde{V}_jL_1u)_j = & \(\sum_{k=0}^{n-1}\left[\c_{jk}(-T)^*+\d_{jk}T^*\right]D^{k}\sum_{l=0}^{n}c_{l}D^{l}u\)_j=\(\sum_{k=0}^{n-1}\sum_{l=0}^{n}\left[\c_{jk}c_{l}(-T)^*+\d_{jk}c_{l}T^*\right]D^{k+l}u\)_j \\ = &\(\sum_{k=0}^{n-1}\sum_{m=k}^{k+n}\left[\c_{jk}c_{m-k}(-T)^*+\d_{jk}c_{m-k}T^*\right]D^{m}u\)_j \\= & \(\sum_{k=0}^{n-1}\sum_{m=0}^{2n-1}\left[\c_{jk}d_{km}u^{(m)}(-T)+\d_{jk}d_{km}u^{(m)}(T)\right]\)_j\\= &\(\sum_{k=0}^{n-1}\c_{jk}d_{km}\)_{j,m}\overline X_{2n}+\(\sum_{k=0}^{n-1}\d_{jk}d_{km}\)_{j,m}X_{2n}=\Phi\Xi\overline X_{2n}+\Psi\Xi X_{2n}.
\end{align*}
Hence, we would write \eqref{vc} in the form
\begin{equation}\label{c2}\begin{pmatrix} \widetilde\Phi & 0 \\ \Phi\Xi_1 & \Phi\Xi_2\end{pmatrix}\overline X_{2n}+\begin{pmatrix}\widetilde\Psi & 0 \\ \Psi\Xi_1 & \Psi\Xi_2\end{pmatrix} X_{2n}=0.\end{equation}
Clearly, it is convenient to take $\widetilde\Phi=\Gamma_1$ and $\widetilde\Psi=\Theta_1$, that is, $V_j=B_j$, $j=1,\dots,n$.
\begin{lem}\label{invlem}
If $\Gamma_1$ and $\Gamma_3$ are invertible and $\Theta_2=\Gamma_2\Gamma_1^{-1}\Theta_1+\Theta_3\Xi_2^{-1}\Xi_1-\Gamma_3\Xi_2^{-1}\Xi_1\Gamma_1^{-1}\Theta_1$, then, taking
\[\widetilde\Phi=\Gamma_1,\ \widetilde\Psi=\Theta_1,\ \Phi=\Id,\text{ and } \Psi=\Xi_2\Gamma_3^{-1}\Theta_3\Xi_2^{-1},\]
condition \eqref{c1} is equivalent to condition \eqref{c2} and, therefore, problems \eqref{redpro}-\eqref{redproc1}-\eqref{redproc2} and \eqref{p1}-\eqref{p2} are equivalent.
\end{lem}
\begin{proof} Let
\[A=\begin{pmatrix}\Id & 0 \\ (\Xi_1-\Xi_2\Gamma_3^{-1}\Gamma_2)\Gamma_1^{-1} & \Xi_2\Gamma_3^{-1}\end{pmatrix}.\]
$A$ is invertible and
\[\begin{pmatrix} \widetilde\Phi & 0 \\ \Phi\Xi_1 & \Phi\Xi_2\end{pmatrix}=A\begin{pmatrix} \Gamma_1 & 0 \\ \Gamma_2 & \Gamma_3\end{pmatrix}, \quad\begin{pmatrix}\widetilde\Psi & 0 \\ \Psi\Xi_1 & \Psi\Xi_2\end{pmatrix}=A\begin{pmatrix}\Theta_1 & 0 \\ \Theta_2 & \Theta_3\end{pmatrix}.\]
Hence, conditions \eqref{c1} and \eqref{c2} are equivalent.
\end{proof}
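In the scalar case $n=1$ every block is a number and the identities in the proof of Lemma \ref{invlem} can be verified directly with exact arithmetic (the values below are arbitrary choices of ours):

```python
from fractions import Fraction as F

# Scalar case n = 1: all blocks are 1x1. Arbitrary invertible choices:
G1, G2, G3 = F(2), F(3), F(5)      # Gamma_1, Gamma_2, Gamma_3
T1, T3 = F(7), F(11)               # Theta_1, Theta_3
X1, X2 = F(13), F(17)              # Xi_1, Xi_2

# Hypothesis of Lemma invlem on Theta_2:
T2 = G2 / G1 * T1 + T3 / X2 * X1 - G3 / X2 * X1 / G1 * T1
Psi = X2 / G3 * T3 / X2            # Psi = Xi_2 Gamma_3^{-1} Theta_3 Xi_2^{-1}

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(1), F(0)], [(X1 - X2 / G3 * G2) / G1, X2 / G3]]

# A * [[G1, 0], [G2, G3]] must equal [[G1, 0], [Phi Xi_1, Phi Xi_2]] with Phi = Id:
assert matmul(A, [[G1, F(0)], [G2, G3]]) == [[G1, F(0)], [X1, X2]]
# A * [[T1, 0], [T2, T3]] must equal [[T1, 0], [Psi Xi_1, Psi Xi_2]]:
assert matmul(A, [[T1, F(0)], [T2, T3]]) == [[T1, F(0)], [Psi * X1, Psi * X2]]
```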
Analogously, we have a result where it is the $\Theta_1$ and $\Theta_3$ which are invertible.
\begin{lem}\label{invlem2}
If $\Theta_1$ and $\Theta_3$ are invertible and $\Gamma_2=\Theta_2\Theta_1^{-1}\Gamma_1+\Gamma_3\Xi_2^{-1}\Xi_1-\Theta_3\Xi_2^{-1}\Xi_1\Theta_1^{-1}\Gamma_1$, then, taking
\[\widetilde\Psi=\Theta_1,\ \widetilde\Phi=\Gamma_1,\ \Psi=\Id,\text{ and } \Phi=\Xi_2\Theta_3^{-1}\Gamma_3\Xi_2^{-1},\]
condition \eqref{c1} is equivalent to condition \eqref{c2} and, therefore, problems \eqref{redpro}-\eqref{redproc1}-\eqref{redproc2} and \eqref{p1}-\eqref{p2} are equivalent.
\end{lem}
The following example illustrates this discussion explicitly.
\begin{exa}\label{exagf-2}
Consider the following problem.
\begin{equation}\begin{aligned}\label{prooc-2} & u'''(t)+u(-t)+u(t)=h(t),\ t\in I, \\ & u(-1)-u''(1)=0,\ u'(-1)=u'(1),\ u''(-1)-u(1)=0,\end{aligned}
\end{equation}
where $h(t)=\sin t$. Then, the operator we are studying is $L=D^3+\varphi^*+1$. If we take $R:=D^3+\varphi^*-1$, we have that $RL=D^6$, which admits a simple decomposition in ${\mathbb R}[D]$ as $RL=(D^3)(D^3)=L_2L_1$.\par
The boundary conditions are
\[[(-1)^*-1^*D^2]u=0,\ [(-1)^*D-1^*D]u=0,\ [(-1)^*D^2-1^*]u=0.\] Taking this into account, we add the conditions
\begin{align*}0 & =[(-1)^*-1^*D^2]Ru=u'''(-1)-u^{(5)}(1),\\
0 & =[(-1)^*D-1^*D]Ru=u^{(4)}(-1)-u^{(4)}(1),\\
0 & =[(-1)^*D^2-1^*]Ru=u^{(5)}(-1)-u'''(1).\\
\end{align*}
That is, our new \textit{reduced} problem, writing the boundary conditions in matrix form, is
\begin{equation}\label{proocr-2}\begin{aligned} & u^{(6)}(t)=f(t),\\ & \begin{pmatrix}
1 & 0 & 0 & 0 &0&0\\
0 & 1 & 0 & 0&0&0 \\
0 & 0 & 1 & 0&0&0 \\
0 & 0 & 0 & 1&0&0 \\
0 & 0 & 0 & 0&1&0 \\
0 & 0 & 0 & 0&0&1
\end{pmatrix}\begin{pmatrix}
u(-1) \\ u'(-1) \\ u''(-1) \\ u'''(-1)\\u^{(4)}(-1)\\u^{(5)}(-1)
\end{pmatrix}+\begin{pmatrix}
0&0 & -1 & 0 & 0&0 \\
0& -1 & 0 & 0 & 0 &0\\
-1 & 0 & 0 & 0&0&0 \\
0 & 0 & 0 & 0&0&-1 \\
0 & 0 & 0 & 0&-1&0 \\
0 & 0 & 0 & -1&0&0
\end{pmatrix}\begin{pmatrix}
u(1) \\ u'(1) \\ u''(1) \\ u'''(1)\\u^{(4)}(1)\\u^{(5)}(1)
\end{pmatrix} =0 .\end{aligned}
\end{equation}
where $f(t)=R\,h(t)=h'''(t)+h(-t)-h(t)=-\cos t-2\sin t$.\par
Now, we can check that we are working in the conditions of Lemma \ref{invlem}. We have that $\Gamma_1=\Gamma_3=\Id$, $\Gamma_2=\Theta_2=0$ and
\[\Theta_1=\Theta_3=\begin{pmatrix}
0&0&-1\\
0&-1&0 \\
-1&0&0
\end{pmatrix}. \]
On the other hand, since $c_0=c_1=c_2=0$ and $c_3=1$ for $L_1=D^3$,
\[\Xi=\(\Xi_1\ \Xi_2\)=\begin{pmatrix}0 & 0 & 0 & 1 & 0 & 0 \\ 0& 0 & 0 & 0 & 1 & 0\\0&0& 0 & 0 & 0 & 1 \end{pmatrix}.\]
Thus, it is straightforward to check that
\[\Gamma_2\Gamma_1^{-1}\Theta_1+\Theta_3\Xi_2^{-1}\Xi_1-\Gamma_3\Xi_2^{-1}\Xi_1\Gamma_1^{-1}\Theta_1=\Theta_2=0,\]
and therefore the hypotheses of Lemma \ref{invlem} are satisfied. The conditions $\tilde V_j$ are given by the matrices $\Phi=\Id$ and $\Psi=\Xi_2\Gamma_3^{-1}\Theta_3\Xi_2^{-1}=\Theta_3$. Hence, we know that this problem is equivalent to the factored system
\begin{alignat}{3}
\label{exrp1}u'''(t) & =v(t),\ && u(-1)-u''(1)=0,\ u'(-1)=u'(1),\ u''(-1)-u(1)=0,\\
\label{exrp2}v'''(t) & =-\cos t-2\sin t,\ && v(-1)-v''(1)=0,\ v'(-1)=v'(1),\ v''(-1)-v(1)=0.
\end{alignat}
Thus, it is clear that
\begin{displaymath}u(t)=\int_{-1}^1G_1(t,s)v(s)\dif s,\ v(t)=\int_{-1}^1G_2(t,s)f(s)\dif s,\end{displaymath}
where, $G_1=G_2$ are, respectively, the Green's functions of \eqref{exrp1} and \eqref{exrp2}. The Green's functions of problems involving linear ordinary differential equations with constant coefficients and two-point boundary conditions can be computed with the \emph{Mathematica} notebooks \cite{Math2} or \cite{Math}. Explicitly,
\begin{equation*}
G_1(t,s)=\begin{cases} -\frac{1}{4} (s-t) (s (t-1)+t-3), & -1\leq s\leq t\leq 1, \\
-\frac{1}{4} (s-t) ((s-1) t+s-3), & -1<t<s\leq 1. \\ \end{cases}
\end{equation*}
Hence, the Green's function $G$ for problem \eqref{proocr-2} is given by
\begin{align*}& G(t,s) =\int_{-1}^1G_1(t,r)G_2(r,s)\dif r=\\ & \frac{1}{480} \begin{dcases}
\begin{aligned} 2 s^5 (t+1)-5 s^4 (t (t+2)+3)+20 s^3 t (t+3)-5 s^2 \left(t^2 (t+2)^2-5\right) \\ +2 s t \left(t^2 (t (t+5)+30)-166\right)-2 t^5-15 t^4+25 t^2-102, \end{aligned} & -1<t<s\leq 1, \\
\begin{aligned}-2 s^5-15 s^4-5 \left(s^2 (s+2)^2-5\right) t^2+2 \left(s^2 (s (s+5)+30)-166\right) s t \\ +25 s^2+2 (s+1) t^5-5 (s (s+2)+3) t^4+20 (s+3) s t^3-102, \end{aligned} & -1\leq s\leq t\leq 1.
\end{dcases}
\end{align*}
Therefore, using Theorem \ref{thmdei}, the Green's function for problem \eqref{prooc-2} is
\begin{align*} & \overline G(t,s) =R_\vdash G(t,s) =\frac{\partial^3 G}{\partial t^3}(t,s)+G(-t,s)+G(t,s)=\\ & \frac{1}{120}\begin{dcases}
\begin{aligned} -(s-1) t^5+10 (s-3) s t^3+30 (s-1) t^2-30 (s-3) s \\ -\left(s^5-5 s^4+30 s^3+30 s^2-226 s+90\right) t, \end{aligned} & -1\le |t|\le s\le 1, \\
\begin{aligned} s^5 (-(t-1))+10 s^3 (t-3) t-30 s^2 (t-1)+30 (t-3) t \\ +s \left(-t^5+5 t^4-30 t^3+30 t^2+106 t+90\right),\end{aligned} & -1\le |s|< t\le 1, \\
\begin{aligned} s^5 (-(t+1))-10 s^3 t (t+3)-30 s^2 (t+1)-30 t (t+3) \\ -s \left(t^5+5 t^4+30 t^3-30 t^2-226 t-90\right),\end{aligned} & -1\le |s|< -t\le 1, \\
\begin{aligned}-(s+1) t^5-10 s (s+3) t^3+30 (s+1) t^2+30 s (s+3) \\ -\left(s^5+5 s^4+30 s^3+30 s^2-106 s+90\right) t,\end{aligned} &-1\le |t|\le -s\le 1. \\
\end{dcases}\end{align*}
Hence, the solution of problem \eqref{prooc-2} is given by
\begin{align*}u(t)= \int_{-1}^1\overline G(t,s)\sin(s)\dif s= & -\frac{1}{60} \left(-30 - 91 t - 30 t^2 + 10 t^3 + t^5\right) \sin (1) \\ & +\frac{2}{3} \left(t^3-7 t-3\right) \cos (1)+2 \sin (t)+\cos (t).\end{align*}
\end{exa}
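The closed-form solution above can be checked against the original problem \eqref{prooc-2} by finite differences (a quick numerical sketch of ours):

```python
import math

def u(t):
    # Solution of problem (prooc-2) obtained above.
    return (-(-30 - 91*t - 30*t**2 + 10*t**3 + t**5) * math.sin(1) / 60
            + 2.0/3.0 * (t**3 - 7*t - 3) * math.cos(1)
            + 2*math.sin(t) + math.cos(t))

def d1(f, t, h=1e-5):
    return (f(t + h) - f(t - h)) / (2*h)

def d2(f, t, h=1e-4):
    return (f(t - h) - 2*f(t) + f(t + h)) / h**2

def d3(f, t, h=1e-3):
    return ((f(t + 2*h) - f(t - 2*h)) - 2*(f(t + h) - f(t - h))) / (2*h**3)

# The differential equation u'''(t) + u(-t) + u(t) = sin t:
for t in [-0.7, 0.0, 0.4]:
    assert abs(d3(u, t) + u(-t) + u(t) - math.sin(t)) < 1e-4

# The boundary conditions u(-1) = u''(1), u'(-1) = u'(1), u''(-1) = u(1):
assert abs(u(-1) - d2(u, 1)) < 1e-5
assert abs(d1(u, -1) - d1(u, 1)) < 1e-6
assert abs(d2(u, -1) - u(1)) < 1e-5
```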
Computationally, this procedure offers a big advantage: it is always easier to obtain the Green's function for two order $n$ problems than to do so for one order $2n$ problem. Furthermore, if the hypotheses of Lemma~\ref{lemqq-} are satisfied and we are able to obtain a factorization of the aforementioned kind using $q$ and $q_-$ in place of $L_1$ and $L_2$, we have an extra advantage: the differential equation given by $q_-$ is the adjoint equation of the one given by $q$ multiplied by the factor $(-1)^n$. This fact, together with the following result --which can be found, although not stated as in this work, in \cite{Cablibro}--, illustrates that in this case it may be possible to solve problem \eqref{rbvp} by computing the Green's function of only one order $n$ problem.
\begin{thm}\label{libcab}
Consider an interval $J=[a,b]\subset{\mathbb R}$, functions $\sigma,a_i\in\Lsp{1}(J)$, $i=1,\dots,n$, real numbers $\a_{ij},\b_{ij},h_i$, $i=1,\dots, n$, $j=0,\dots,n-1$, $D(L_n)\subset W^{n,1}(J)$ a vector subspace, the operator \[L_nu(t)=a_0u^{(n)}(t)+a_1(t)u^{(n-1)}(t)+\dots+a_{n-1}(t)u'(t)+a_n(t)u(t),\ t\in J,\ u\in D(L_n),\]
with $a_0=1$ and the problem
\begin{equation}\label{pfeq} L_nu(t)=\sigma(t),\ t\in J,\quad U_i(u)=h_i,\ i=1,\dots,n,\end{equation}
where
\[U_i(u):=\sum_{j=0}^{n-1}\(\a_{ij}u^{(j)}(a)+\b_{ij}u^{(j)}(b)\),\quad i=1,\dots,n.\]
Then, the associated adjoint problem is
\begin{equation}\label{feq} L_n^\dagger v(t)=\sum_{j=0}^n(-1)^j(a_{n-j}(t)v(t))^{(j)},\ t\in J,\ v\in D(L_n^\dagger),\end{equation}
where
\[D(L_n^\dagger)=\left\{v\in W^{n,2}(J)\ :\ (b^*-a^*)\(\sum_{j=1}^n\sum_{i=0}^{j-1}(-1)^{j-i-1}(a_{n-j}v)^{(j-i-1)}u^{(i)}\)=0,\ u\in D(L_n)\right\}.\]
Furthermore, if $G(t,s)$ is the Green's function of problem \eqref{pfeq}, then the one associated to problem~\eqref{feq} is $G(s,t)$.
\end{thm}
Hence, if we can decompose problem \eqref{redpro}-\eqref{redproc1}-\eqref{redproc2} into two mutually adjoint problems of the form \eqref{p1}-\eqref{p2}, its Green's function will be
\begin{displaymath}G(t,s)=\int_{-T}^TG_1(t,r)G_2(r,s)\dif r=\int_{-T}^TG_1(t,r)G_1(s,r)\dif r,\end{displaymath}
where $G_1$ is the Green's function of \eqref{p1} and $G_2(t,s)=G_1(s,t)$ the one of \eqref{p2}. We note, though, that unless the operator $q_-$ is the adjoint operator multiplied by $(-1)^n$, the boundary conditions may not be the adjoint ones.
\begin{exa}Consider the problem
\begin{equation}\label{pex1}u'(-t)+u(t)+\sqrt{2}\,u(-t)=f(t):=e^t,\ t\in[-1,1],\ u(-1)=u(1).
\end{equation}
Taking $R=\phi^*D+\sqrt{2}\phi^*-\Id$ and composing problem \eqref{pex1} with this operator, we obtain the reduced problem
\begin{equation}\label{pex2}u''(t)-u(t)=Rf(t),\ t\in[-1,1],\ u(-1)=u(1), \ u'(-1)=u'(1).
\end{equation}
Problem \eqref{pex2} is equivalent to the factored system
\begin{alignat}{3}
\label{ex2rp1}u'(t)+u(t) & =v(t), && u(-1) && =u(1),\\
\label{ex2rp2}-v'(t)+v(t) & =-Rf(t),\quad && v(-1) && =v(1),
\end{alignat}
for $t\in[-1,1]$.
Observe that problem \eqref{ex2rp2} is the adjoint problem of \eqref{ex2rp1}. Since the Green's function of problem \eqref{ex2rp1} is given by
\[G_1(t,s):=\begin{dcases}
\frac{e^{s-t+2}}{e^2-1}, & -1\leq s\leq t\leq 1, \\
\frac{e^{s-t}}{e^2-1}, & -1<t<s\leq 1,\end{dcases}\]
and, therefore, $G_1(s,t)$ is the Green's function of problem \eqref{ex2rp2}, the Green's function of problem \eqref{pex2} is
\[G(t,s)=-\int_{-1}^1G_1(t,r)G_1(s,r)\dif r=\begin{dcases}
- \frac{e^{s-t+2}+e^{t-s}}{2 e^2-2}, & -1\leq s\leq t\leq 1, \\
-\frac{e^{s-t}+e^{-s+t+2}}{2 e^2-2}, & -1<t<s\leq 1. \\
\end{dcases}\]
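As a numerical sanity check (our own addition, not part of the original computation), the closed-form kernel above can be compared against direct quadrature of $-\int_{-1}^{1}G_1(t,r)G_1(s,r)\dif r$. The sketch below uses a composite Simpson rule, splitting the integration range at the kinks of $G_1$:

```python
import math

E2 = math.e ** 2  # e^2

def G1(t, s):
    # Green's function of u'(t) + u(t) = v(t), u(-1) = u(1)
    if s <= t:
        return math.exp(s - t + 2) / (E2 - 1)
    return math.exp(s - t) / (E2 - 1)

def G_closed(t, s):
    # closed-form kernel of the second-order periodic problem above
    if s <= t:
        return -(math.exp(s - t + 2) + math.exp(t - s)) / (2 * E2 - 2)
    return -(math.exp(s - t) + math.exp(t - s + 2)) / (2 * E2 - 2)

def simpson(f, a, b, n=2000):
    # composite Simpson rule with n (even) subintervals
    h = (b - a) / n
    acc = f(a) + f(b)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(a + k * h)
    return acc * h / 3

def G_numeric(t, s, eps=1e-10):
    # -int_{-1}^{1} G1(t, r) * G1(s, r) dr, split at the kinks r = t, r = s;
    # the eps nudge keeps each Simpson call on a single smooth branch
    pts = sorted({-1.0, 1.0, t, s})
    return -sum(simpson(lambda r: G1(t, r) * G1(s, r), a + eps, b - eps)
                for a, b in zip(pts, pts[1:]))
```

For instance, `G_closed(0.5, -0.3)` and `G_numeric(0.5, -0.3)` agree to quadrature accuracy, and the kernel is symmetric in $(t,s)$, as expected for this composition.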
Finally, the Green's function of problem \eqref{pex1} is
\begin{align*}\overline G(t,s) & =R_\vdash G(t,s) =\frac{\partial G}{\partial t}(-t,s)+\sqrt 2 G(-t,s)-G(t,s) \\ & =\begin{dcases}
\frac{e^{-s-t} \left[\left(\sqrt{2}-1\right) \left(-e^{2 (s+t+1)}\right)+e^{2 s+2}+e^{2 t}-\sqrt{2}-1\right]}{2 \left(e^2-1\right)}, & |t|\le-s, \\
\frac{e^{-s-t} \left[\left(\sqrt{2}-1\right) \left(-e^{2 (s+t)}\right)+e^{2 s+2}+e^{2 t}-\left(1+\sqrt{2}\right) e^2\right]}{2 \left(e^2-1\right)}, & |s|<t, \\
\frac{e^{-s-t} \left[\left(\sqrt{2}-1\right) \left(-e^{2 (s+t+1)}\right)+e^{2 s}+e^{2 t+2}-\sqrt{2}-1\right]}{2 \left(e^2-1\right)}, & |s|<-t,\\
\frac{e^{-s-t} \left[\left(\sqrt{2}-1\right) \left(-e^{2 (s+t)}\right)+e^{2 s}+e^{2 t+2}-\left(1+\sqrt{2}\right) e^2\right]}{2 \left(e^2-1\right)}, & |t|\le s. \\
\end{dcases}
\end{align*}
Hence, the solution of problem \eqref{pex1} is given by $u(t)=$
\begin{align*} -\frac{e^{-t} \left(-2 \left(1+\sqrt{2}\right) t+e^2 \left(2 \left(1+\sqrt{2}\right) t+3 \sqrt{2}\right)+e^{2 t} \left(-2 t+e^2 \left(2 t+\sqrt{2}-4\right)-\sqrt{2}\right)+\sqrt{2}+4\right)}{4 \left(e^2-1\right)}.\end{align*}
\end{exa}
| {
"timestamp": "2017-07-05T02:03:15",
"yymm": "1707",
"arxiv_id": "1707.00837",
"language": "en",
"url": "https://arxiv.org/abs/1707.00837",
"abstract": "In this article we use linear algebra to improve the computational time for the obtaining of Green's functions of linear differential equations with reflection (DER). This is achieved by decomposing both the `reduced' equation (the ODE associated to a given DER) and the corresponding two-point boundary conditions.",
"subjects": "Classical Analysis and ODEs (math.CA)",
"title": "Computation of Green's functions through algebraic decomposition of operators"
} |
https://arxiv.org/abs/1805.03075 | Goal oriented time adaptivity using local error estimates | We consider initial value problems where we are interested in a quantity of interest (QoI) that is the integral in time of a functional of the solution of the IVP. For these, we look into local error based time adaptivity. We derive a goal oriented error estimate and timestep controller, based on error contribution to the error in the QoI, for which we prove convergence of the error in the QoI for tolerance to zero under weak assumptions. We analyze global error propagation of this method and derive guidelines to predict performance of the method. In numerical tests we verify convergence results and guidelines on method performance. Additionally, we compare with the dual-weighted residual method (DWR) and classical local error based time-adaptivity. The local error based methods show better performance than DWR and the goal oriented method shows good results in most examples, with significant speedups in some cases. |
\section{Introduction}\label{intro}
A typical situation in numerical simulations based on differential equations is that one is not interested in the solution of the differential equation per se, but a \textit{Quantity of Interest} (QoI) that is given as a functional of the solution. For example, when designing an airplane, the QoI would be the lift coefficient divided by the drag coefficient. In simulations of the Greenland ice sheet, one would like to know the net amount of ice loss over a year. When simulating wind turbines, the amount of energy produced during a certain time period is more important than the actual flow solution.
Further examples are found in optimization problems with ODEs or PDEs as constraints. In the turbine example, one may want to optimize blade shape or determine optimal placement of e.g. tidal turbines \cite{Funke2014} for maximal energy output. Inverse problems, e.g. in oceanography \cite{CarlisleThacker1992}, can also be considered. Here the aim is to determine model parameters or initial conditions such that goal functions of the simulation results fit measurement data. An example of such an inverse problem is to determine vertical mixing parameters, with the QoI being the total inflow of salt water from the North Sea into the Baltic Sea.
In this article, we restrict ourselves to problems where the QoI is given as an integral over time of a functional of the solution. From the examples above, only the steady-state problem in airplane design does not qualify. The basic problem we consider is thus: Given the initial value problem
\begin{equation}\label{EQ BASE IVP}
\dot{\bm{u}}(t) = \bm{f}(t,\bm{u}(t)),
\quad t\in[t_0,t_e],
\quad \bm{u}(t_0) = \bm{u}_0,
\end{equation}
for a sufficiently smooth function $\bm{f}:[t_0, t_e] \times \mathbb{R}^d \rightarrow \mathbb{R}^d$ with solution $\bm{u}(t)$, we are interested in the QoI
\begin{equation}\label{EQ J DEF}
J(\bm{u}) := \int_{t_0}^{t_e}j(t, \bm{u}(t))dt,
\end{equation}
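As a concrete toy instance (our own illustration, not from the text): for $\dot u = -u$, $u(0)=1$ on $[0,1]$ with density $j(t,u)=u$, the QoI is $J=\int_0^1 e^{-t}\,dt = 1-e^{-1}$, and any time discretization induces an approximation $J_h$ through a quadrature rule on the same grid:

```python
import math

def rk4_step(f, t, u, dt):
    # classical fourth-order Runge-Kutta step
    k1 = f(t, u)
    k2 = f(t + dt / 2, u + dt / 2 * k1)
    k3 = f(t + dt / 2, u + dt / 2 * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def qoi(f, j, u0, t0, te, n):
    # advance the IVP with n uniform RK4 steps and accumulate
    # J_h = int j(t, u) dt with the trapezoidal rule on the same grid
    dt = (te - t0) / n
    t, u, J = t0, u0, 0.0
    for _ in range(n):
        u_new = rk4_step(f, t, u, dt)
        J += dt / 2 * (j(t, u) + j(t + dt, u_new))
        t, u = t + dt, u_new
    return J

J_h = qoi(lambda t, u: -u, lambda t, u: u, 1.0, 0.0, 1.0, 200)
J_exact = 1 - math.exp(-1)
```

Here the quadrature error of the trapezoidal rule, of order $\mathcal{O}(\Delta t^2)$, dominates the time-integration error, which already hints at the coupling between time-integration and quadrature accuracy discussed below.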
with $j:[t_0, t_e] \times \mathbb{R}^d \rightarrow \mathbb{R}$, which we will refer to as \textit{density function}, following the notation in \cite{bangerth2013adaptive}.
When solving PDEs, the system of ODEs originates from a semi-discretization, thus $\bm{u}$ consists of unknowns of the space discretization. Consequently $j$ can be used to provide spatial weighting and to select only specific points or regions of the spatial discretization.
The goal here is to determine an adaptive discrete approximation $\bm{u}_h \approx \bm{u}(t)$. Our degrees of freedom are the timesteps and we want to use as few as possible. This strategy will not yield an optimal solution, but works well in practice. We adapt the timesteps $\Delta t$ using a timestep controller, which is based on local error estimates. The solution process as a whole involves a variety of schemes.
An \textit{adaptive method} consists of a \textit{time-integration} scheme for \eqref{EQ BASE IVP}, an \textit{error estimator}, a \textit{timestep controller} and an \textit{initial timestep} $\Delta t_0$. If we consider problem \eqref{EQ BASE IVP} - \eqref{EQ J DEF}, the adaptive method also includes a discrete approximation $J_h \approx J$ given by a \textit{quadrature scheme}.
The input for an adaptive method is a tolerance $\tau$, which is used in the timestep controller and possibly to determine $\Delta t_0$. The output is an approximation to the solution, this can be a discrete solution $\bm{u}_h$, or $J_h(\bm{u}_h)$, depending on which problem is considered. Since we have an adaptive method, we cannot use the usual notion of convergence for $\Delta t \rightarrow 0$ for a time-integration scheme. Instead, we consider the limit of the tolerance going to zero.
\begin{definition}
An adaptive method for an IVP \eqref{EQ BASE IVP} is called \textit{convergent} (in the solution), if
\begin{equation*}
\| \bm{u}(t_e) - \bm{u}_N\| \rightarrow 0 \quad \text{for} \quad \tau \rightarrow 0,
\end{equation*}
where $\bm{u}_N$ is an approximation to $\bm{u}(t_e)$ and $\| \cdot\|$ is an appropriate norm.
An adaptive method for an IVP \eqref{EQ BASE IVP} with QoI \eqref{EQ J DEF} is \textit{convergent in the QoI}, if
\begin{equation*}
|J(\bm{u}) - J_h(\bm{u}_h)| \rightarrow 0 \quad \text{for} \quad \tau \rightarrow 0.
\end{equation*}
\end{definition}
For a convergent adaptive method we are naturally interested in the convergence rate and will express it in terms of $\mathcal{O}(\tau^q)$. This definition of adaptive methods and convergence is targeted to local error based methods, but can also be considered for methods based on global error estimates.
For goal oriented adaptivity, the standard approach is the dual weighted residual (DWR) method \cite{becker2001_DWR,prudho:15}. Originally, it was developed for spatial problems, but has been extended to time dependent problems. The basic idea is to use the adjoint (dual) problem to get an estimate of the error in the QoI. In the time dependent case, the adjoint problem is a terminal value problem (IVP backwards in time). For linear problems, this gives rise to global error bounds, in the nonlinear case, global error estimates are obtained.
The DWR method is based on global a-posteriori error estimates. To obtain these error estimates one needs to subsequently integrate forward and backward in time. Here the primal and adjoint solution need to be stored. The error estimate is obtained from the primal and dual solution and is used to refine the meshes. This iterative process is repeated until a discretization is found, where the error estimate $\eta(u_h)$ fulfills
\begin{equation*}
|J(\bm{u}) - J_h(\bm{u}_h)| \approx \eta(u_h) \leq \tau.
\end{equation*}
The major drawback of this method is its cost, both in implementation and computation. To reduce computational effort, Carey et al. suggested applying the approach in a blockwise manner, thus making it more local \cite{CaEJLT:10}. The storage of the primal and dual solution can be problematic for high resolutions. This can be solved by, for example, check-pointing \cite{meiric:14,meiric:15}, but this further increases computational costs. The method requires a full variational formulation, restricting it to Galerkin-type schemes in space and time.
An alternative is to use a classical time adaptive method for IVPs based on estimating the local error. Results on convergence are well established and described in standard textbooks \cite{shampine1994_NUMODE,haiwan:93}. This adaptive method is not goal oriented, but can be used to solve problems with QoIs. We do not have global error bounds, since the accumulation of local errors is hard to analyse. This approach works particularly well for stiff problems, since there, local errors typically dissipate with time.
We choose a different approach, aiming to get the best of both methods. To this end we derive a new error estimator for the classic adaptive method to make it goal oriented. We estimate the time-stepwise error contribution to the error in the QoI, which consists of both quadrature and time-integration errors. Neglecting the quadrature contribution, we derive a local error estimate and use it in the deadbeat controller.
We show that convergence in the QoI follows from convergence in the solution, with additional requirements on the timesteps. The derived goal oriented adaptive method fulfills these requirements and is convergent in the QoI under weak assumptions. To obtain high convergence rates in the QoI when using higher order ($> 2$) time-integration schemes, one needs solutions of sufficiently high order in all quadrature evaluation points. We explain how to obtain these from the stage value of a given RK scheme.
We do our analysis for one-step methods for time-integration, embedded Runge-Kutta schemes \cite{haiwan:93} for error estimation, the deadbeat controller \eqref{EQ CONT DEADBEAT} and simple choices for $\Delta t_0$. These restrictions are done for easier analysis, but it is straightforward to extend the results to other error estimation techniques, such as Richardson-extrapolation \cite{haiwan:93}. For different controllers, such as PID controllers \cite{Soderlind2003}, our results allow for simple convergence proofs based on similarity to the deadbeat controller. The results hold for a wide range of initial timesteps and thus for any reasonable scheme used to compute $\Delta t_0$.
Implementation of this method only requires a standard deadbeat controller, an embedded Runge-Kutta scheme and the density function $j(t, \bm{u}(t))$. Due to being based on local error estimates, the method is computationally very cheap. For problems where the density function only regards a small part of the state vector $\bm{u}$, the error estimate will be even cheaper than the classical one.
A similar method has been proposed in \cite{John2010,Turek1999,Wick2017}, using various other techniques for error estimation. John and Rang propose it for drag and lift coefficients in incompressible flows, but do not show numerical results \cite{John2010}. Turek describes a case where using the method for an alternating lift coefficient leads to ``catastrophical results'' \cite{Turek1999}. Wick uses a point-wise evaluation of the displacement field in fluid-structure interaction \cite{Wick2017,Failer}. The author describes inconsistent convergence patterns but concludes that the results are satisfactory.
To be able to make statements on the performance of the goal oriented adaptive method, we analyse the impact of global error dynamics on the error in the QoI. This analysis revolves around the nullspace of the density function $j(t, \bm{u})$ and thus of our error estimator. A method performs well if all relevant processes are sufficiently resolved in time. To be able to sufficiently resolve a process, its local error, or the local error of a faster process, must appear in the error estimate. Whether a process is relevant for the QoI is a matter of global error dynamics. Thus, with sufficient knowledge of the global error dynamics, we are able to make predictions on the performance of the goal oriented adaptive method.
We use numerical tests with widely different global error behaviors with respect to the QoI. For these we confirm the convergence results and are able to explain the performance results. It turns out to be relatively easy to predict bad performance, but hard to predict good performance. Our results show that the local error based methods are more efficient than the DWR method. The goal oriented adaptive method shows good performance in most cases and significant speedups in some.
The structure of the article is as follows: We first review current adaptive methods in section \ref{SEC OLD METHODS}, then we explain and analyse our approach in section \ref{SEC LOCAL GOAL}. Numerical results are presented in section \ref{SEC NUM RESULTS}.
\section{Current adaptive methods}\label{SEC OLD METHODS}
\subsection{A posteriori error estimation via the dual weighted residual method}\label{SEC DWR}
The starting point of the DWR method is an initial value problem in variational formulation: Find $u\in U$, such that
\begin{equation*}
A(u;v) = F(v), \quad u(t_0) = u_0, \quad \forall v \in V.
\end{equation*}
Here, $U$ and $V$ are appropriate spaces, and $A$ is linear in $v$ and possibly nonlinear in $u$. In our setting, $A(u;v) = (u_t,v) -(f(t, u),v)$ and $F(v) = 0$, see \eqref{EQ BASE IVP}. Furthermore, there is a discrete approximation to this problem, also in weak form: Find $u_h\in U_h$, such that
\begin{equation}\label{EQ DWR FWD}
A(u_h;v_h) = F(v_h), \quad u_h(t_0) = u_0, \quad \forall v_h \in V_h.
\end{equation}
Here, $U_h \subset U$ and $V_h\subset V$ are finite element spaces in time.
\subsubsection{The error estimate}
To obtain an estimate of the error $e^J = J(u)-J(u_h)$ in the QoI \eqref{EQ J DEF}, one uses the linearised adjoint problem for $J(u)$: Find $z \in V$, such that
\begin{equation*}
A'(u;v,z) = J'(u;v), \quad z(t_e) = 0, \quad \forall v \in U
\end{equation*}
and its discrete version
\begin{equation}\label{EQ ADJOINT DISCR}
A'(u_h;v_h,z_h) = J'(u_h;v_h), \quad z_h(t_e) = 0, \quad \forall v_h \in U_h,
\end{equation}
where $A'$ and $J'$ are the Gateaux derivatives of $A$ and $J$ with respect to $u$ in direction $v$. Note that the adjoint problem is an initial value problem backwards in time.
An approximation of the error in the QoI is given by
\begin{equation*}
e^J \lesssim A(u_h;z - z_h) - F(z - z_h),
\end{equation*}
with equality for linear functionals and an approximate upper bound in the general nonlinear case. Using an approximation $z_h^+ \approx z$ of higher accuracy than $z_h$, obtained e.g. by higher-order interpolation or a discrete solution on a finer grid \cite{bangerth2013adaptive}, one gets the estimate
\begin{equation}
e^J \lesssim \eta(u_h) := A(u_h;z_h^+ - z_h) - F(z_h^+ - z_h).
\end{equation}
This can be further bounded by decomposing it into timestep-wise contributions, which gives a guide on where and how to adapt. For this to work, it is imperative that the solutions of the primal and adjoint problems are available at all points. This can cause storage problems for long-time simulations, which can be dealt with using check-pointing \cite{Griewank2000}.
\subsubsection{Adaptation scheme}\label{SEC DWR ALG}
A large number of different adaptation strategies exist. Here we use a fixed-rate strategy \cite{bangerth2013adaptive}, in which the fraction $r\in[0, 1]$ of elements with the largest error contributions is refined. Summarizing, the following scheme is obtained.
\begin{enumerate}
\item Start with initial grid.
\item Solve forward problem \eqref{EQ DWR FWD} to obtain $u_h$.
\item Construct and solve adjoint problem \eqref{EQ ADJOINT DISCR} to obtain $z_h$.
\item Calculate $z_h^+ \approx z$.
\item Calculate error estimate $\eta(u_h)$.
\item Check $\eta(u_h) \leq \tau$, if not met, refine grids and restart.
\end{enumerate}
The scheme is very expensive due to the need of solving adjoint problems to obtain an error estimate. While one can use generic schemes for grid adaptation, the adjoint problem and the error estimate are specific to a given equation and goal functional. Construction and solution of the adjoint problem can be automated using software such as \texttt{dolfin-adjoint} \cite{farrell2013_DOLFINADJ}. An advantage of the method is that the error estimate is global and one can expect the resulting discretizations to be of high quality.
We use a finer grid to approximate $z$ by $z_h^+$, making this the most expensive step in the computation of $\eta(u_h)$.
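To make the error representation concrete, consider a scalar linear test case (our own illustration; forward Euler stands in for a Galerkin scheme, and the exact adjoint for $z_h^+$). For $\dot u=\lambda u$ with $J(u)=\int_0^T u\,dt$, the adjoint solves $-\dot z=\lambda z+1$, $z(T)=0$, and integration by parts gives the exact identity $J(u)-J(u_h)=-\int_0^T r(t)z(t)\,dt$, where $r=\dot u_h-\lambda u_h$ is the residual of the piecewise-linear interpolant of the discrete solution:

```python
import math

lam, T, u0, N = -2.0, 1.0, 1.0, 50
dt = T / N

def z(t):
    # exact adjoint: -z'(t) = lam*z(t) + 1, z(T) = 0
    return (math.exp(lam * (T - t)) - 1) / lam

# forward Euler nodes for u' = lam*u, u(0) = u0
u = [u0]
for n in range(N):
    u.append(u[n] + dt * lam * u[n])

# J(u_h) via the trapezoidal rule (exact for the piecewise-linear interpolant)
J_h = sum(dt * (u[n] + u[n + 1]) / 2 for n in range(N))
J_exact = (math.exp(lam * T) - 1) / lam
eJ = J_exact - J_h  # true error in the QoI

def interval_contrib(n, m=20):
    # Simpson quadrature of r(t)*z(t) on [t_n, t_{n+1}], m even
    tn = n * dt
    slope = (u[n + 1] - u[n]) / dt
    def rz(t):
        uh = u[n] + slope * (t - tn)       # piecewise-linear interpolant
        return (slope - lam * uh) * z(t)   # residual times adjoint
    h = dt / m
    acc = rz(tn) + rz(tn + dt)
    for k in range(1, m):
        acc += (4 if k % 2 else 2) * rz(tn + k * h)
    return acc * h / 3

# adjoint-weighted residual reproduces the QoI error exactly (linear case)
eta = -sum(interval_contrib(n) for n in range(N))
```

The interval-wise terms `interval_contrib(n)` are exactly the timestep-wise contributions that the fixed-rate strategy would rank for refinement.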
\subsection{Time Adaptivity based on local error estimates}\label{SEC LOCAL ERR BASIC}
The second adaptive method we discuss is the standard in ODE solvers. It uses local error estimates of the solution and does not take QoIs into account. The results in this section for one-step methods and the deadbeat controller \eqref{EQ CONT DEADBEAT} are in principle classical \cite{shampine1985_STEP}.
Here, we present a new convergence proof that separates requirements on the error estimate, the timesteps and $\Delta t_0$, for generic one-step methods. This makes it easier to show convergence for general controllers and estimates, and we use it to show convergence in the QoI for the goal oriented adaptive method in section \ref{SEC LOCAL GOAL}.
We first introduce the relevant terminology used in this paper. For readers familiar with time adaptivity for ODEs, we use \textit{local extrapolation} and \textit{Error Per Step (EPS)} based control, see \cite{shampine1985_STEP}.
\begin{definition}
The \textit{flow} \cite{Soderlind2006a} of an IVP \eqref{EQ BASE IVP} is the map
\begin{equation*}
\mathcal{M}^{t, \Delta t} : \bm{u}(t) \mapsto \bm{u}(t + \Delta t),
\end{equation*}
where $t \in [t_0, t_e]$ and $t + \Delta t \leq t_e$ for $\Delta t > 0$.
\end{definition}
The flow acts as the solution operator for $\bm{u}(t)$. To numerically solve an IVP means to approximate the flow by a \textit{numerical flow map} $\mathcal{N}^{t, \Delta t}: \mathbb{R}^d \rightarrow \mathbb{R}^d$ defined by some numerical scheme. A timestep can be written in the form
\begin{equation*}
\bm{u}_{n+1} = \mathcal{N}^{t_n, \Delta t_n} \bm{u}_n.
\end{equation*}
We generally assume problem \eqref{EQ BASE IVP} to have a unique solution $\bm{u}(t)$, guaranteeing existence of the flow map $\mathcal{M}^{t, \Delta t}$.
We define the global error by
\begin{equation}\label{EQ GLOBAL ERR}
\bm{e}_{n+1}
:= \bm{u}_{n+1} - \bm{u}(t_{n+1})
= \mathcal{N}^{t_n, \Delta t_n} \bm{u}_n - \mathcal{M}^{t_n, \Delta t_n} \bm{u}(t_n).
\end{equation}
By adding zero we obtain the global error propagation form
\begin{equation}\label{EQ ERR PROP NUM}
\bm{e}_{n+1}
= \underbrace{(\mathcal{N}^{t_n, \Delta t_n} - \mathcal{M}^{t_n, \Delta t_n})\bm{u}_n}_{\text{global error increment}} +
\underbrace{\mathcal{M}^{t_n, \Delta t_n}\bm{u}_n - \mathcal{M}^{t_n, \Delta t_n} \bm{u}(t_n)}_{\text{global error propagation}}.
\end{equation}
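The decomposition is obtained simply by adding and subtracting $\mathcal{M}^{t_n, \Delta t_n}\bm{u}_n$, which can be spelled out for a scalar example where the exact flow is known (our own illustration, with explicit Euler on $\dot u = \lambda u$):

```python
import math

lam, dt, t_n = -2.0, 0.1, 0.3

def M(u):
    # exact flow over one step of u' = lam*u
    return math.exp(lam * dt) * u

def N_step(u):
    # explicit Euler step (the numerical flow map)
    return u + dt * lam * u

u_exact = math.exp(lam * t_n)  # u(t_n)
u_n = u_exact + 1e-3           # numerical value carrying some global error

e_next = N_step(u_n) - M(u_exact)      # e_{n+1}
increment = N_step(u_n) - M(u_n)       # global error increment
propagation = M(u_n) - M(u_exact)      # propagated previous global error
```

For this dissipative problem ($\lambda < 0$) the propagation term contracts the previous error, illustrating the remark below that local errors typically dissipate for stiff problems.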
The dynamics of global error propagation are usually not known. The global error increments, however, have a known structure.
\begin{definition}
Assume a sufficiently smooth right-hand side $\bm{f}$ for \eqref{EQ BASE IVP}. The \textit{principal error function} \cite{haiwan:93} $\bm{\phi}$ of a scheme $\mathcal{N}^{t, \Delta t}$ of order $p$ is
\begin{equation}\label{EQ PRINCP ERROR FUNC}
\bm{\phi}(t, \bm{u}) := \lim_{\Delta t \rightarrow 0} \frac{(\mathcal{N}^{t, \Delta t} - \mathcal{M}^{t, \Delta t})\bm{u}}{\Delta t^{p + 1}}.
\end{equation}
The \textit{local error} of a scheme $\mathcal{N}^{t, \Delta t}$ of order $p$ is
\begin{equation*}
\left(\mathcal{N}^{t_n, \Delta t_n} - \mathcal{M}^{t_n, \Delta t_n}\right)\bm{u}_n = \Delta t_n^{p+1} \bm{\phi}(t_n, \bm{u}_n) + \mathcal{O}(\Delta t_n^{p+2}).
\end{equation*}
\end{definition}
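For explicit Euler ($p=1$) on $\dot u = u$, the exact flow is $\mathcal{M}^{t,\Delta t}u = e^{\Delta t}u$ and the limit above evaluates to $\bm{\phi}(t,u) = -u/2$; a small numerical sketch (our own illustration) recovers this:

```python
import math

def phi_estimate(dt, u=1.0):
    # explicit Euler on u' = u: N u = u + dt*u; exact flow: M u = exp(dt)*u.
    # quotient (N - M)u / dt^(p+1) with p = 1
    return ((u + dt * u) - math.exp(dt) * u) / dt ** 2

# as dt -> 0 this quotient approaches phi(t, u) = -u/2 for this problem
est = phi_estimate(1e-4)
```

Note that $\Delta t$ should not be taken too small here, since the quotient amplifies floating-point cancellation in the numerator.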
Here, the local error is equivalent to the global error increment \eqref{EQ ERR PROP NUM}. We will estimate the local error and derive a timestep controller to keep the norm of the local error in check. Then we show that the resulting adaptive method is convergent, that is, the global error can be controlled by the global error increments and goes to zero for $\tau \rightarrow 0$.
\subsubsection{Error estimation and timestep controller}
We now derive an estimate for the local error using two solutions of orders $p$ and $\hat{p}$. We approximate the local error behaviour by a simplified model focusing on the leading terms. The new timestep is then chosen so as to keep the norm of the modelled local error equal to the desired tolerance. The resulting timestep controller gives us $\Delta t_{n+1}$ based on the previous timestep $\Delta t_n$, the local error estimate and a tolerance $\tau$.
Assume two time-integration schemes $(\mathcal{N}^{t, \Delta t},\,\,\mathcal{N}^{t, \Delta t}_-)$ with orders $(p,\,\,\hat{p})$ and principal error functions $(\bm{\phi},\,\,\bm{\phi}_-)$. Embedded Runge-Kutta schemes \cite{haiwan:93} are a possible choice, as they have the advantage that the embedded solution uses the same stage derivatives, requiring essentially no extra computation.
We use a \textit{local extrapolation} approach to estimate the local error
\begin{equation}\label{EQ LOCAL ERROR LOW}
\bm{\ell}_n
:= \left(\mathcal{N}_-^{t_n, \Delta t_n} - \mathcal{M}^{t_n, \Delta t_n} \right)\bm{u}_n
\end{equation}
by
\begin{equation}\label{EQ LOCAL ERROR LOW EST}
\hat{\bm{\ell}}_n
:= \left(\mathcal{N}_-^{t_n, \Delta t_n} - \mathcal{N}^{t_n, \Delta t_n} \right)\bm{u}_n.
\end{equation}
The leading term of this error estimate, characterized by the principal error function $\bm{\phi}_-$, matches the leading term of the local error \eqref{EQ LOCAL ERROR LOW}. Higher-order terms in $\Delta t_n$ will differ. Note that this local error is not the global error increment from \eqref{EQ ERR PROP NUM}, but the one corresponding to $\mathcal{N}^{t, \Delta t}_-$. We model the local error using
\begin{equation}\label{EQ LOCAL ERROR MODEL}
\bm{m}_n
:= \Delta t_n^{\hat{p} + 1} \bm{\phi}_-(t_n, \bm{u}_n),
\end{equation}
assuming $\bm{\phi}_-(t_n, \bm{u}_n)$ to be slowly changing. The next step of this model yields
\begin{equation}\label{EQ ERROR MODEL APPROX}
\bm{m}_{n+1}
\approx \Delta t_{n+1}^{\hat{p} + 1} \bm{\phi}_-(t_n, \bm{u}_n)
= \left( \frac{\Delta t_{n+1}}{\Delta t_n}\right)^{\hat{p} +1} \bm{m}_n
\approx \left( \frac{\Delta t_{n+1}}{\Delta t_n}\right)^{\hat{p} +1} \hat{\bm{\ell}}_n.
\end{equation}
Aiming for $\|\bm{m}_{n+1}\|= \tau$ gives the well-known deadbeat controller
\begin{equation}\label{EQ CONT DEADBEAT}
\Delta t_{n+1} = \Delta t_n \left( \frac{\tau}{\| \hat{\bm{\ell}}_n \|}\right)^{1/(\hat{p}+1)}.
\end{equation}
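Putting the estimate and controller together for an embedded Euler/Heun pair ($\hat p = 1$, $p = 2$) gives the following sketch of an adaptive integrator (our own minimal illustration; step rejection and safety factors, which production codes add, are omitted):

```python
import math

def adaptive_solve(f, u0, t0, te, tau, dt0=None):
    # Heun (order p = 2) advances the solution (local extrapolation);
    # explicit Euler (order p_hat = 1) is the embedded lower-order scheme
    t, u = t0, u0
    dt = dt0 if dt0 is not None else math.sqrt(tau)
    while t < te:
        dt = min(dt, te - t)
        k1 = f(t, u)
        k2 = f(t + dt, u + dt * k1)
        u_heun = u + dt / 2 * (k1 + k2)   # order-2 step, kept
        u_euler = u + dt * k1             # embedded order-1 step
        ell = abs(u_euler - u_heun)       # local error estimate
        t, u = t + dt, u_heun
        # deadbeat controller with exponent 1/(p_hat + 1) = 1/2;
        # the max() guards against a zero estimate
        dt = dt * (tau / max(ell, 1e-16)) ** 0.5
    return u

u_end = adaptive_solve(lambda t, u: -u, 1.0, 0.0, 1.0, 1e-8)
err = abs(u_end - math.exp(-1.0))
```

On this test problem the final error decreases roughly in proportion to $\tau$, consistent with the rate $\mathcal{O}(\tau^{p/(\hat{p}+1)})$ derived below.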
\subsubsection{Convergence in the solution}\label{SEC BASIC CONV}
We now show convergence with $\bm{e}_n = \mathcal{O}(\tau^{p/(\hat{p} + 1)})$ for the adaptive method consisting of a time-integration scheme of order $p$, the error estimate \eqref{EQ LOCAL ERROR LOW EST}, controller \eqref{EQ CONT DEADBEAT} and a suitable initial step-size.
First, Lemma \ref{LEM CONV} establishes a relation between the global error and the maximal timestep. Corollary \ref{COR BASIC CONV} relaxes this relation to general timesteps in dependence on the tolerance $\tau$. We cannot use the timesteps from controller \eqref{EQ CONT DEADBEAT} directly, since their dependence on $\tau$ is more involved. Instead, we construct a reference timestep series which fulfills the requirements of both the Lemma and the Corollary and gives the targeted convergence rate. With Theorem \ref{THRM BASIC REF} we show that the timesteps from the controller \eqref{EQ CONT DEADBEAT} converge to the reference timesteps for $\tau \rightarrow 0$, which gives convergence with the rate $\mathcal{O}(\tau^{p/(\hat{p} + 1)})$.
\begin{lemma}\label{LEM CONV}
Let problem \eqref{EQ BASE IVP} have a sufficiently smooth right-hand side $\bm{f}$, such that a scheme $\mathcal{N}^{t, \Delta t}$ of order $p$ has the global error increment
\begin{equation*}
(\mathcal{N}^{t_n, \Delta t_n} - \mathcal{M}^{t_n, \Delta t_n})\bm{u}_n = \Delta t^{p + 1}_n \bm{\phi}(t_n, \bm{u}_n) + \mathcal{O}(\Delta t_n^{p + 2}).
\end{equation*}
Assume a mesh $t_0 < \cdots < t_N = t_e$ with timesteps $\Delta t_n = t_{n+1} - t_n$ and the step-size function
\begin{equation*}
\theta : [t_0, t_e] \rightarrow (\theta_{min},1], \quad \theta_{min} > 0,
\end{equation*}
fulfilling
\begin{equation}\label{EQ LEMMA DT}
\Delta t_n = \theta(t_n) \Delta T + \mathcal{O}(\Delta T^{1 + \epsilon}), \quad \epsilon > 0,
\end{equation}
for some $\Delta T > 0$. Then the global error \eqref{EQ GLOBAL ERR} fulfills
\begin{equation*}
\bm{e}_n = \bm{u}_n - \bm{u}(t_n) = \mathcal{O}(\Delta T^p), \quad \forall\, t_n, \quad n = 0, \ldots,\, N,
\end{equation*}
for $\Delta T \rightarrow 0$.
\end{lemma}
\begin{proof}
We first neglect the $\mathcal{O}(\Delta T^{1 + \epsilon})$ term in \eqref{EQ LEMMA DT}. Under these assumptions a proof of $\|\bm{e}_n\| = \mathcal{O}(\Delta T^p)$ can be found in \cite[pp. 68]{gear1971_NUMIVP}.
To extend this result to $\Delta t_n = \theta(t_n) \Delta T + \mathcal{O}(\Delta T^{1 + \epsilon})$, we define
\begin{equation*}
\theta^*(t_n) := \frac{\theta(t_n)}{c} + \mathcal{O}(\Delta T^\epsilon)
\quad \text{and} \quad \Delta T^* := c\, \Delta T,
\end{equation*}
for some $c > 1$. This gives $\Delta t_n = \theta^*(t_n) \Delta T^*$, where
\begin{equation*}
\theta^*(t_n) \leq \frac{1}{c} + \mathcal{O}(\Delta T^\epsilon),
\end{equation*}
which fulfills $0 < \theta^*(t_n) \leq 1$ for $\Delta T$ sufficiently small. This means the general case \eqref{EQ LEMMA DT} is also covered by the proof in \cite[pp. 68]{gear1971_NUMIVP} and for $\Delta T \rightarrow 0$ we get $\bm{e}_n = \mathcal{O}({\Delta T^*}^p) = \mathcal{O}(\Delta T^p)$.\qed
\end{proof}
Using this Lemma, we can now link the global error to the tolerance.
\begin{corollary}\label{COR BASIC CONV}
Assume the smoothness requirements of Lemma \ref{LEM CONV} to be met and assume a scheme of order $p$ to get $\bm{u}_n$. Assume a mesh $t_0 < \cdots < t_N = t_e$ with timesteps $\Delta t_n = t_{n+1} - t_n,\,\, n = 0, \ldots,\, N-1$ that fulfill
\begin{equation*}
\Delta t_n = \mathcal{O}(\tau^{1/q}), \quad \Delta t_n > 0.
\end{equation*}
Then, the global error fulfills
\begin{equation*}
\bm{e}_n = \bm{u}_n - \bm{u}(t_n) = \mathcal{O}(\tau^{p/q}), \quad \forall\, t_n, \quad n = 0, \ldots,\, N,
\end{equation*}
for $\tau \rightarrow 0$.
\end{corollary}
\begin{proof}
The maximal step-size is $\Delta T = \max \{\Delta t_n \,|\, 0 \leq n \leq N-1\}$. The corresponding step-size function is
\begin{equation*}
\theta(t_n) = \frac{\Delta t_n}{\Delta T},
\end{equation*}
which fulfills $0 < \theta(t_n) \leq 1$. We thus meet all assumptions of Lemma \ref{LEM CONV} and get $\bm{e}_n = \bm{u}_n - \bm{u}(t_n) = \mathcal{O}(\Delta T^p) = \mathcal{O}(\tau^{p/q})$.\qed
\end{proof}
We cannot apply Corollary \ref{COR BASIC CONV} to the timesteps \eqref{EQ CONT DEADBEAT} directly, since they have a more complex dependence on $\tau$. Therefore we use reference timesteps $\Delta t_n^{\text{ref}} = \mathcal{O}(\tau^{1/q})$. We show $\Delta t_n \rightarrow \Delta t_n^{\text{ref}}$ for $\tau \rightarrow 0$ with a difference of at most $\mathcal{O}(\Delta T_{\text{ref}}^2)$ and can apply Lemma \ref{LEM CONV}.
We define the reference timesteps
\begin{equation}\label{EQ DT REFERENCE BASIC}
\Delta t_n^{\text{ref}}
:= \left( \frac{\tau}{c_n \| \bm{\phi}_-(t_n, \bm{u}(t_n))\|} \right)^{1/(\hat{p}+1)},
\end{equation}
where $\bm{\phi}_-$ is the principal error function \eqref{EQ PRINCP ERROR FUNC} corresponding to $\mathcal{N}^{t, \Delta t}_-$ and $c_n$ is given by
\begin{equation*}
c_n = \begin{cases}\mathcal{O}(1), & n = 0,\\ 1, & n > 0,\end{cases}
\end{equation*}
where $\mathcal{O}(1)$ is with respect to $\tau \rightarrow 0$ and $c_0 > 0$. This adds a degree of freedom to choose the initial timestep. For \eqref{EQ DT REFERENCE BASIC} to be well-defined we require $\bm{f}$ in problem \eqref{EQ BASE IVP} to be sufficiently smooth and define
\begin{equation*}
\phi_{-,\min} := \min_{t \in [t_0, t_e]} \| \bm{\phi}_-(t, \bm{u}(t))\|,
\end{equation*}
where we assume $\phi_{-,\min} > 0$. This gives the maximal timestep
\begin{equation}\label{EQ DTMAX BASIC}
\Delta T_{\text{ref}} = \left( \frac{\tau}{\max\{ 1, c_0\} \, \phi_{-,\min}} \right)^{1/(\hat{p}+1)}.
\end{equation}
We have $\Delta t_n^{\text{ref}} \leq \Delta T_{\text{ref}} = \mathcal{O}(\tau^{1/(\hat{p}+1)})$. Applying Lemma \ref{LEM CONV} gives us $\bm{e}_n = \mathcal{O}(\tau^{p/(\hat{p} + 1)})$, for a time-integration scheme of order $p$. We now show convergence of the adaptive method with timesteps from \eqref{EQ CONT DEADBEAT}.
\begin{theorem}\label{THRM BASIC REF}
Let problem \eqref{EQ BASE IVP} have a sufficiently smooth $\bm{f}$. Assume an adaptive method consisting of:
\begin{enumerate}
\item A pair of schemes ($\mathcal{N}^{t, \Delta t},\,\,\mathcal{N}^{t, \Delta t}_-)$ with orders ($p,\,\,\hat{p}$) with $p > \hat{p}$,
\item the error estimator \eqref{EQ LOCAL ERROR LOW EST},
\item the deadbeat controller \eqref{EQ CONT DEADBEAT},
\item an initial timestep $\Delta t_0 = \mathcal{O}(\tau^{1/(\hat{p} + 1)})$.
\end{enumerate}
If the principal error function $\bm{\phi}_-$ to $\mathcal{N}_-^{t, \Delta t}$ fulfills
\begin{equation}\label{EQ REQ PHI MIN}
\min_{t_0 \leq t \leq t_e}{||\bm{\phi}_-(t, \bm{u}(t))||} > 0,
\end{equation}
then the adaptive method is convergent with
\begin{equation*}
\bm{e}_n = \bm{u}_n- \bm{u}(t_n) = \mathcal{O}(\tau^{p/(\hat{p} + 1)}),
\quad \forall t_n, \quad n = 0, \ldots,\, N, \quad \text{and} \quad \tau \rightarrow 0.
\end{equation*}
\end{theorem}
\begin{proof}
By induction we show the timesteps fulfill
\begin{equation*}
\Delta t_n
= \Delta t_n^{\text{ref}} + \mathcal{O}(\Delta T_{\text{ref}}^2)
= \mathcal{O}(\tau^{1/(\hat{p}+1)}).
\end{equation*}
We choose $c_0$ in $\Delta t_0^{\text{ref}}$ such that $\Delta t_0 = \Delta t_0^{\text{ref}}$, so the \textit{induction base} is met. The timestep given by the controller is
\begin{equation*}
\Delta t_{n+1}
= \Delta t_n \left( \frac{\tau}{\| \hat{\bm{\ell}}_n \|}\right)^{1/(\hat{p}+1)}
= \left( \frac{\tau}{\|\bm{\phi}_-(t_n, \bm{u}_n) + \mathcal{O}(\Delta t_n)\|}\right)^{1/(\hat{p}+1)}.
\end{equation*}
We expand $\bm{\phi}_-$ in the denominator around $(t_{n+1}, \bm{u}(t_{n+1}))$, absorbing the difference into the $\mathcal{O}(\Delta t_n)$ term, and get
\begin{equation*}
\Delta t_{n+1}
= \left( \frac{\tau}{\|\bm{\phi}_-(t_{n+1}, \bm{u}(t_{n+1})) + \mathcal{O}(\Delta t_n)\|}\right)^{1/(\hat{p}+1)}.
\end{equation*}
We perform another expansion to separate the $\mathcal{O}(\Delta t_n)$ term and get
\begin{equation*}
\Delta t_{n+1}
= \underbrace{\left( \frac{\tau}{\|\bm{\phi}_-(t_{n+1}, \bm{u}(t_{n+1})) \|}\right)^{1/(\hat{p}+1)}}_{=\Delta t_{n+1}^{\text{ref}}} + \mathcal{O}(\tau^{1/(\hat{p}+1)} \Delta t_n).
\end{equation*}
We now consider the $\mathcal{O}$ term. From the definition of the maximal timestep \eqref{EQ DTMAX BASIC} we know $\Delta T_{\text{ref}} = \mathcal{O}(\tau^{1/(\hat{p}+1)})$. The induction hypothesis gives $\Delta t_n = \Delta t_n^{\text{ref}} + \mathcal{O}(\Delta T^2_{\text{ref}}) \leq \Delta T_{\text{ref}} + \mathcal{O}(\Delta T^2_{\text{ref}})$, which completes the induction step. By Corollary \ref{COR BASIC CONV} we then get the result $\bm{e}_n = \mathcal{O}(\tau^{p/(\hat{p} + 1)})$.\qed
\end{proof}
Thus we established convergence of the derived adaptive method for a suitable initial timestep $\Delta t_0$. Assumption \eqref{EQ REQ PHI MIN} is a controllability requirement in the asymptotic regime: if the local error vanished at some point, the global error could not be controlled by means of local errors. Furthermore, this framework allows proving similar results for other controllers, e.g.\ PID controllers \cite{Soderlind2003}. To prove convergence one can either show \eqref{EQ LEMMA DT} using suitable reference timesteps, or show that a given controller deviates from the deadbeat controller \eqref{EQ CONT DEADBEAT} by at most $\mathcal{O}(\Delta T_{\text{ref}}^{1 + \epsilon})$, $\epsilon > 0$.
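The adaptive method established above can be sketched compactly. The following is a minimal Python illustration, not the implementation used in the numerical experiments: it assumes the Heun/explicit Euler pair ($p = 2$, $\hat{p} = 1$) as an example of $(\mathcal{N}^{t, \Delta t}, \mathcal{N}^{t, \Delta t}_-)$, uses the difference of the two embedded solutions as error estimate and applies the deadbeat controller \eqref{EQ CONT DEADBEAT}; all function names are ours.

```python
import numpy as np

def heun_euler_step(f, t, u, dt):
    # One step of an embedded pair: Heun (order p = 2) and explicit Euler (order p_hat = 1)
    k1 = f(t, u)
    k2 = f(t + dt, u + dt * k1)
    return u + dt * (k1 + k2) / 2.0, u + dt * k1

def adaptive_solve(f, u0, t0, te, tol, p_hat=1):
    # Deadbeat control: dt_{n+1} = dt_n * (tol / ||l_hat_n||)^(1/(p_hat+1))
    t = t0
    u = np.atleast_1d(np.asarray(u0, dtype=float))
    dt = tol ** (1.0 / (p_hat + 1))                # dt_0 = tau^(1/(p_hat+1))
    ts, us = [t], [u.copy()]
    while t < te - 1e-14:
        dt = min(dt, te - t)                       # do not overshoot the endpoint
        u_high, u_low = heun_euler_step(f, t, u, dt)
        err = np.linalg.norm(u_low - u_high)       # local error estimate ||l_hat_n||
        t, u = t + dt, u_high                      # local extrapolation: continue with u_high
        dt *= (tol / max(err, 1e-16)) ** (1.0 / (p_hat + 1))
        ts.append(t)
        us.append(u.copy())
    return np.array(ts), np.array(us)
```

For $\bm{u}' = -\bm{u}$ on $[0, 1]$ this reproduces the expected behaviour: the error at $t_e$ is of order $\tau^{p/(\hat{p}+1)}$.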
\section{Goal oriented adaptivity using local error estimates}\label{SEC LOCAL GOAL}
We now consider the goal oriented setting \eqref{EQ J DEF} for problem \eqref{EQ BASE IVP} and are only interested in the QoI $J(\bm{u})$. We approximate the integral in $J$ using quadrature and $\bm{u}(t)$ by the numerical solution $\bm{u}_h$ to get
\begin{equation}\label{EQ J DISCR}
J_h(\bm{u}_h) :=
\sum_{n = 0}^{N-1} \Delta t_n \sum_{k=0}^s \sigma_k j(t_n^{(k)}, \bm{u}_n^{(k)})
\approx
\int_{t_0}^{t_e} j(t, \bm{u}(t))dt = J(\bm{u}).
\end{equation}
Here $\bm{u}_n^{(k)} \approx \bm{u}(t_n^{(k)})$, and $t_n^{(k)}$ and $\sigma_k$ are the evaluation points and weights, respectively, of the quadrature scheme. We assume an embedded Runge-Kutta scheme for time-integration.
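For instance, with the trapezoidal rule ($s = 1$, evaluation points $t_n$, $t_{n+1}$ and weights $\sigma_0 = \sigma_1 = 1/2$) the sum \eqref{EQ J DISCR} reduces to the following sketch (our own illustration, assuming the density values $j(t_n, \bm{u}_n)$ have already been evaluated):

```python
def J_h_trapezoid(ts, j_vals):
    # Composite trapezoidal approximation of J(u) on a possibly non-uniform mesh:
    # J_h = sum_n dt_n * (j_n + j_{n+1}) / 2
    total = 0.0
    for n in range(len(ts) - 1):
        dt_n = ts[n + 1] - ts[n]
        total += dt_n * 0.5 * (j_vals[n] + j_vals[n + 1])
    return total
```

The trapezoidal rule is exact for densities linear in $t$ and has order $r = 2$ otherwise.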
As we are now only interested in the QoI, we derive an adaptive method that is goal oriented and convergent in the QoI. The method aims to be more efficient by taking the QoI into account in the error estimate. Convergence in the QoI will be shown based on convergence in the solution. The following theorem establishes the connection between the convergence rates.
\begin{theorem}\label{THRM ORDERS}
Assume $\bm{f}$ in problem \eqref{EQ BASE IVP} and $j$ in the QoI \eqref{EQ J DEF} to be sufficiently smooth, a mesh $t_0 < \cdots < t_N = t_e$ with timesteps $\Delta t_n = t_{n+1} - t_n = \mathcal{O}(\tau^{1/q})$ and an approximation $J_h \approx J$ \eqref{EQ J DISCR} by a quadrature scheme of order $r$. Further assume an approximation $\bm{u}_h \approx \bm{u}(t)$ with order
\begin{equation}\label{EQ J STAGE VALS}
\bm{u}_n^{(k)} - \bm{u}(t_n^{(k)}) = \mathcal{O}(\tau^{p/q})
\end{equation}
for all $n, k$. Then the error in the QoI fulfills
\begin{equation*}
e^J := | J(\bm{u}) - J_h(\bm{u}_h)| = \mathcal{O}(\tau^{\min(r,p)/q}).
\end{equation*}
\end{theorem}
\begin{proof}
By splitting the error, we obtain
\begin{equation}\label{EQ J ERROR SPLIT}
e^J \leq \underbrace{|J(\bm{u}) - J_h(\bm{u})|}_{\text{quadrature error}} + \underbrace{|J_h(\bm{u}) - J_h(\bm{u}_h)|}_{\text{time-integration error}}
\end{equation}
and can deal with the two errors separately.
An estimate for general numerical quadrature schemes of order $r$ gives
\begin{equation*}
|J(\bm{u}) - J_h(\bm{u})| \leq \sum_{n=0}^{N-1}{c_q {\Delta t_n}^{r+1} \max_{t_n \leq t \leq t_{n+1}}{|(j(t, \bm{u}(t)))^{(r)}|}},
\end{equation*}
with a constant $c_q$. Using the bound $j_{\max} := \max_{t_0 \leq t \leq t_e}{|(j(t, \bm{u}(t)))^{(r)}|}$, we get
\begin{equation*}
|J(\bm{u}) - J_h(\bm{u})|
\leq c_q\,j_{\max} \sum_{n=0}^{N-1} {\Delta t_n}^{r+1}
\leq c_q\,j_{\max} \sum_{n=0}^{N-1} \Delta t_n\, \mathcal{O}(\tau^{r/q})
= \mathcal{O}(\tau^{r/q}).
\end{equation*}
For the time-integration error we have
\begin{equation*}
|J_h(\bm{u}) - J_h(\bm{u}_h)| \leq \sum_{n=0}^{N-1} \Delta t_n \sum_{k=0}^{s} \left|\sigma_k \left(j(t_n^{(k)}, \bm{u}_n^{(k)}) - j(t_n^{(k)}, \bm{u}(t_n^{(k)}))\right)\right|,
\end{equation*}
where we linearise $j(t_n^{(k)}, \bm{u}(t_n^{(k)}))$ and use assumption \eqref{EQ J STAGE VALS} to get
\begin{equation*}
j(t_n^{(k)}, \bm{u}(t_n^{(k)})) = j(t_n^{(k)}, \bm{u}_n^{(k)}) + \frac{\partial j(t_n^{(k)}, \bm{u}_n^{(k)})}{\partial \bm{u}}(\underbrace{\bm{u}(t_n^{(k)}) - \bm{u}_n^{(k)}}_{ = \, \mathcal{O}(\tau^{p/q})}) + \mathcal{O}((\tau^{p/q})^2).
\end{equation*}
This yields
\begin{equation*}
|J_h(\bm{u}) - J_h(\bm{u_h})|
\leq \sum_{n=0}^{N-1} \Delta t_n \sum_{k=0}^{s} \mathcal{O}(\tau^{p/q})
= \mathcal{O}(\tau^{p/q}).
\end{equation*}
Summing up quadrature and time-integration error yields
\begin{equation}\label{EQ ORDERS RESULTS}
e^J
\leq |J(\bm{u}) - J_h(\bm{u})| +|J_h(\bm{u}) - J_h(\bm{u}_h)|
= \mathcal{O}(\tau^{r/q}) + \mathcal{O}(\tau^{p/q})
= \mathcal{O}(\tau^{\min(r,p)/q}).
\end{equation}
\qed
\end{proof}
This theorem combines convergence in the solution with convergence in the QoI and relates the respective rates. The assumption $\Delta t_n = \mathcal{O}(\tau^{1/q})$ gives $\bm{u}_n - \bm{u}(t_n) = \mathcal{O}(\tau^{p/q})$ for all $n = 0,\ldots,\,\, N-1$ by Corollary \ref{COR BASIC CONV}. Using linear interpolation for an intermediate point $t_n^{(k)} \in (t_n, t_{n+1})$, one gets at most $\bm{u}_n^{(k)} - \bm{u}(t_n^{(k)}) = \mathcal{O}(\tau^{2/q})$. Requirement \eqref{EQ J STAGE VALS} therefore becomes relevant for schemes of order $p > 2$; it is discussed at the end of section \ref{SEC LOCAL GOAL CONV}.
Our idea is now to use a goal oriented error estimate to obtain step-sizes more suitable to address the error in the QoI. In practical computations this should lead to a gain in efficiency.
We first derive our error estimate and controller; for the resulting goal oriented adaptive method we show convergence in the QoI in Theorem \ref{THRM J CONV}. In section \ref{SEC ERR PROP} we analyse the expected performance.
\subsection{Error estimate and timestep controller}
In the proof of Theorem \ref{THRM ORDERS} we see two different error sources, time-integration and quadrature, see \eqref{EQ J ERROR SPLIT}. While one can estimate the quadrature error, doing so is not necessary: using an error estimate based on the time-integration error only, we obtain an adaptive method that is convergent in the QoI.
Neglecting the quadrature error we have
\begin{equation*}
e^J
\approx
\sum_{n=0}^{N-1}\Big|\Delta t_n \sum_{k=0}^s \sigma_k \left(j(t_n^{(k)}, \bm{u}_n^{(k)}) - j(t_n^{(k)}, \bm{u}(t_n^{(k)}))\right)\Big|.
\end{equation*}
As we generally do not have error estimates for the intermediate points of the quadrature scheme, we approximate the above term-wise by the rectangular rule
\begin{equation}\label{EQ J TIME INT ERROR}
e_n^J
:= \Delta t_n \Big| j(t_{n+1}, \bm{u}_{n+1}) - j(t_{n+1}, \bm{u}(t_{n+1}))\Big|.
\end{equation}
Note that this is an approximation of the time-integration error $J_h(\bm{u}) - J_h(\bm{u}_h)$ and does not place general restrictions on choices for quadrature schemes. The global error propagation form of \eqref{EQ J TIME INT ERROR} is
\begin{align}
e_{n+1}^J
= & \,\Delta t_n \left(
j\left(t_{n+1}, \mathcal{N}^{t_n, \Delta t_n}\bm{u}_n\right) -
j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}\bm{u}(t_n)\right)\right)
\label{EQ J ERROR PROP 1}\\
= & \, \underbrace{\Delta t_n \left(
j\left(t_{n+1}, \mathcal{N}^{t_n, \Delta t_n}\bm{u}_n\right) -
j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}\bm{u}_n\right)\right)
}_{\text{global error increment}} + \label{EQ J SIMPL ERR INCR}\\
& \, \underbrace{\Delta t_n \left(
j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}\bm{u}_n\right) -
j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}\bm{u}(t_n)\right)\right)
}_{\text{global error propagation}}. \label{EQ J ERROR PROP 3}
\end{align}
Again, we do not know the global error propagation dynamics, but we can estimate the global error increment and control it using timesteps. We use local extrapolation with a scheme $\mathcal{N}^{t, \Delta t}_-$ of order $\hat{p} < p$ and control
\begin{equation}\label{EQ J LOCAL ERROR LOW}
\Delta t_n \ell_n^j :=
\Delta t_n \left(
j\left(t_{n+1}, \mathcal{N}_-^{t_n, \Delta t_n}\bm{u}_n\right) -
j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}\bm{u}_n\right)\right).
\end{equation}
This is the global error increment \eqref{EQ J SIMPL ERR INCR}, but corresponding to $\mathcal{N}^{t_n, \Delta t_n}_-$. We estimate \eqref{EQ J LOCAL ERROR LOW} by
\begin{equation}\label{EQ J ERROR EST}
\Delta t_n \hat{\ell}_n^j :=
\Delta t_n \left(
j\left(t_{n+1}, \mathcal{N}^{t_n, \Delta t_n}_-\bm{u}_n\right) -
j\left(t_{n+1}, \mathcal{N}^{t_n, \Delta t_n}\bm{u}_n\right)\right).
\end{equation}
To construct a controller we need a model for \eqref{EQ J LOCAL ERROR LOW}. As $j$ may be non-linear, we linearise in $\bm{u}$ and get
\begin{align}
\ell_n^j = \,\, & j\left(t_{n+1}, \mathcal{N}^{t_n, \Delta t_n}_-\bm{u}_n\right) - j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}\bm{u}_n\right)\nonumber\\
= \,\, &
\frac{\partial j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}\bm{u}_n\right)}{\partial \bm{u}}
\underbrace{\left(\mathcal{N}^{t_n, \Delta t_n}_- - \mathcal{M}^{t_n, \Delta t_n} \right) \bm{u}_n}_{\stackrel{\eqref{EQ LOCAL ERROR LOW}}{=} \bm{\ell}_n} + \mathcal{O}(\Delta t_n^{2\hat{p} + 2}). \label{EQ J AUX LIN}
\end{align}
As model we choose the leading term of \eqref{EQ J AUX LIN}
\begin{equation*}
m_n^j
:= \Delta t_n \frac{\partial j(t_{n+1}, \bm{u}_{n+1})}{\partial \bm{u}} \bm{m}_n
\end{equation*}
and assume the derivative term to be slowly changing. Here, $\bm{m}_n$ is the classical error estimate \eqref{EQ LOCAL ERROR MODEL}. For this model the next step yields
\begin{align*}
m_{n+1}^j
& \approx \Delta t_{n+1} \frac{\partial j(t_{n+1}, \bm{u}_{n+1})}{\partial \bm{u}} \bm{m}_{n+1}
= \frac{\Delta t_{n+1}^{\hat{p}+2}}{\Delta t_n^{\hat{p} + 1}}\frac{\partial j(t_{n+1}, \bm{u}_{n+1})}{\partial \bm{u}} \bm{m}_n\\
& \stackrel{\eqref{EQ ERROR MODEL APPROX}}{\approx} \frac{\Delta t_{n+1}^{\hat{p}+2}}{\Delta t_n^{\hat{p} + 1}}\,\,\frac{\partial j(t_{n+1}, \bm{u}_{n+1})}{\partial \bm{u}}\, \hat{\bm{\ell}}_n
\stackrel{\eqref{EQ J ERROR EST}\,-\,\eqref{EQ J AUX LIN}}{\approx} \Delta t_{n+1} \left( \frac{\Delta t_{n+1}}{\Delta t_n}\right)^{\hat{p} + 1} \hat{\ell}^j_n.
\end{align*}
We aim to control the error \textit{per unit interval} in each step, meaning we aim for $|m_{n+1}^j| = \Delta t_{n+1}\, \tau$. This is not to be confused with the common \textit{Error Per Unit Step} (EPUS) approach in classical timestep control. We get the deadbeat controller
\begin{equation}\label{EQ J DEADBEAT CONT}
\Delta t_{n+1}
= \Delta t_n \left( \frac{\tau}{|\hat{\ell}^j_n|}\right)^{1/(\hat{p}+1)}.
\end{equation}
We thus constructed a timestep controller to control the error in the QoI \eqref{EQ J DEF} using only local error estimates in $j(t, \bm{u})$. In the next section we show that the resulting adaptive method is convergent in the solution and QoI.
Compared with the implementation of the classical adaptive method from section \ref{SEC LOCAL ERR BASIC}, we additionally require only the density function $j$. This is needed regardless of the chosen method, as it is necessary for evaluating $J(\bm{u})$.
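Compared with the classical deadbeat controller, only the controlled scalar changes. A minimal sketch, with our own naming, assuming the two embedded solutions $\mathcal{N}_-^{t_n,\Delta t_n}\bm{u}_n$ and $\mathcal{N}^{t_n,\Delta t_n}\bm{u}_n$ are already computed; the guard against a vanishing estimate is our addition:

```python
def goal_oriented_dt(dt_n, tol, t_next, u_low, u_high, j, p_hat=1):
    # Estimate (EQ J ERROR EST): scalar difference of the density j at the
    # two embedded solutions, instead of a norm of the vector-valued estimate.
    l_j = abs(j(t_next, u_low) - j(t_next, u_high))
    # Deadbeat controller (EQ J DEADBEAT CONT)
    return dt_n * (tol / max(l_j, 1e-16)) ** (1.0 / (p_hat + 1))
```

With, e.g., $j(t, \bm{u}) = u_1$, a scalar error estimate of $10^{-2}$ and $\tau = 10^{-4}$, the step shrinks by the factor $(10^{-4}/10^{-2})^{1/2} = 0.1$.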
\subsection{Convergence in the quantity of interest}\label{SEC LOCAL GOAL CONV}
We now show convergence in the QoI of the derived goal oriented adaptive scheme, using Theorem \ref{THRM ORDERS}. While we use the same controller, we have a different error estimator and cannot use Corollary \ref{COR BASIC CONV} directly, since the timesteps \eqref{EQ J DEADBEAT CONT} converge to a different sequence of reference timesteps. We define these and repeat the steps of Theorem \ref{THRM BASIC REF}, showing convergence of the steps from the controller \eqref{EQ J DEADBEAT CONT} to our reference. We use
\begin{equation}\label{EQ J DT REF}
\Delta t_n^{\text{ref}} :=
\left( \frac{\tau}{c_n\,\left|\frac{\partial j(t_{n}, \bm{u}(t_n))}{\partial \bm{u}} \,\bm{\phi}_-(t_n, \bm{u}(t_n))\right|} \right)^{1/(\hat{p}+1)},
\end{equation}
with
\begin{equation*}
c_n = \begin{cases}\mathcal{O}(1),& n = 0, \\ 1, & n > 0,\end{cases}
\end{equation*}
where $\mathcal{O}(1)$ is for $\tau \rightarrow 0$ and $c_0 > 0$. This gives a degree of freedom in choosing $\Delta t_0$. For the timesteps to be well-defined we require
\begin{equation*}
\phi_{-, \min}^j := \min_{t \in [t_0, t_e]} \left|\frac{\partial j(t, \bm{u}(t))}{\partial \bm{u}} \, \bm{\phi}_-(t, \bm{u}(t))\right| > 0,
\end{equation*}
yielding the maximal timestep
\begin{equation}\label{EQ J DTMAX}
\Delta T_{\text{ref}} := \left( \frac{\tau}{\max\{ 1, c_0\}\,\phi_{-,\min}^j} \right)^{1/(\hat{p}+1)}.
\end{equation}
We have $\Delta t_n^{\text{ref}} \leq \Delta T_{\text{ref}} = \mathcal{O}(\tau^{1/(\hat{p}+1)})$. With the following theorem we show convergence of the timesteps from the controller \eqref{EQ J DEADBEAT CONT} with error estimate \eqref{EQ J ERROR EST} to the reference timesteps \eqref{EQ J DT REF}.
\begin{theorem}\label{THRM J CONV}
Let $\bm{f}$ in \eqref{EQ BASE IVP} and $j$ in \eqref{EQ J DEF} be sufficiently smooth. Assume an adaptive method consisting of:
\begin{enumerate}
\item A pair of schemes $\mathcal{N}^{t, \Delta t}, \mathcal{N}^{t, \Delta t}_-$ with orders $p,\,\,\hat{p}$ and $p > \hat{p}$,
\item a quadrature scheme of order $r$ to approximate $J(\bm{u})$ as in \eqref{EQ J DISCR},
\item schemes $\mathcal{N}^{t, \Delta t}_{(k)}$ to obtain solutions of order $p - 1$ at all quadrature evaluation points that are not part of the resulting grid,
\item the error estimator \eqref{EQ J ERROR EST},
\item the deadbeat controller \eqref{EQ J DEADBEAT CONT},
\item an initial timestep $\Delta t_0 = \mathcal{O}(\tau^{1/(\hat{p} + 1)})$.
\end{enumerate}
If the principal error function $\bm{\phi}_-$ to $\mathcal{N}_-^{t, \Delta t}$ fulfills
\begin{equation}\label{EQ J PHIMIN}
\min_{t \in [t_0, t_e]} \left|\frac{\partial j(t, \bm{u}(t))}{\partial \bm{u}} \, \bm{\phi}_-(t, \bm{u}(t))\right| > 0,
\end{equation}
then
\begin{equation*}
e^J := |J(\bm{u}) - J_h(\bm{u}_h)| = \mathcal{O}(\tau^{p/(\hat{p}+1)}), \quad \text{for} \quad \tau \rightarrow 0.
\end{equation*}
\end{theorem}
\begin{proof}
We first show convergence in the solution by inductively showing that every step given by the controller \eqref{EQ J DEADBEAT CONT} fulfills
\begin{equation}\label{EQ J DT THEOREM}
\Delta t_n = \Delta t_n^{\text{ref}} + \mathcal{O}(\Delta T^2_{\text{ref}}).
\end{equation}
We choose $c_0$ for $\Delta t_0^{\text{ref}}$ such that $\Delta t_0 = \Delta t_0^{\text{ref}}$. Thus the \textit{induction base} $\Delta t_0 = \Delta t_0^{\text{ref}} + \mathcal{O}(\Delta T^2_{\text{ref}})$ is met. In the controller \eqref{EQ J DEADBEAT CONT} we have the denominator
\begin{align}
\hat{\ell}_n^j
& = j\left(t_{n+1}, \mathcal{N}_-^{t_n, \Delta t_n} \bm{u}_n\right)
- j\left(t_{n+1}, \mathcal{N}^{t_n, \Delta t_n} \bm{u}_n\right) \nonumber\\
& = \frac{\partial j(t_{n+1}, \bm{u}_{n+1})}{\partial \bm{u}} \hat{\bm{\ell}}_n + \mathcal{O}(\Delta t_n^{2\hat{p} + 2}).\label{EQ J THRM LINEAR}
\end{align}
Repeating the same expansions as in the proof of Theorem \ref{THRM BASIC REF} we have
\begin{equation*}
\hat{\bm{\ell}}_n
= \Delta t_n^{\hat{p} + 1} \left( \bm{\phi}_-(t_{n+1}, \bm{u}(t_{n+1})) + \mathcal{O}(\Delta t_n)\right)
\end{equation*}
and similarly
\begin{equation*}
\frac{\partial j(t_{n+1}, \bm{u}_{n+1})}{\partial \bm{u}}
= \frac{\partial j(t_{n+1}, \bm{u}(t_{n+1}))}{\partial \bm{u}} + \mathcal{O}(\bm{e}_{n+1}).
\end{equation*}
Here we have $\mathcal{O}(\bm{e}_{n+1}) = \mathcal{O}(\Delta T_{\text{ref}}^p)$ by Corollary \ref{COR BASIC CONV}, as we assume all timesteps leading up to $\bm{e}_{n+1}$ to fulfill \eqref{EQ J DT THEOREM}. Using these approximations in \eqref{EQ J THRM LINEAR} we get
\begin{equation*}
\hat{\ell}_n^j = \Delta t_n^{\hat{p} + 1} \left( \frac{\partial j(t_{n+1}, \bm{u}(t_{n+1}))}{\partial \bm{u}} \bm{\phi}_-(t_{n+1}, \bm{u}(t_{n+1})) + \mathcal{O}(\Delta t_n) + \mathcal{O}(\Delta T_{\text{ref}}^p)\right),
\end{equation*}
which we insert into the controller \eqref{EQ J DEADBEAT CONT} to get
\begin{equation*}
\Delta t_{n+1} = \left( \frac{\tau}{\left|\frac{\partial j(t_{n+1}, \bm{u}(t_{n+1}))}{\partial \bm{u}} \bm{\phi}_-(t_{n+1}, \bm{u}(t_{n+1})) + \mathcal{O}(\Delta t_n) + \mathcal{O}(\Delta T_{\text{ref}}^p)\right|}\right)^{1/(\hat{p}+1)}.
\end{equation*}
Here we can pull out $\mathcal{O}(\Delta t)$ terms and use the induction hypothesis to get
\begin{align*}
\Delta t_{n+1}
& \stackrel{\eqref{EQ J DT REF}}{=} \Delta t_{n+1}^{\text{ref}} + \tau^{1/(\hat{p}+1)} \left( \mathcal{O}(\Delta t_n) + \mathcal{O}(\Delta T_{\text{ref}}^p)\right)\\
& \stackrel{\eqref{EQ J DTMAX},\,\eqref{EQ J DT THEOREM}}{=} \Delta t_{n+1}^{\text{ref}} + \mathcal{O}(\Delta T_{\text{ref}}^2),
\end{align*}
which shows the induction step and yields $\Delta t_{n+1} = \mathcal{O}(\tau^{1/(\hat{p}+1)})$. This completes the induction, giving $\Delta t_n = \mathcal{O}(\tau^{1/(\hat{p}+1)})$ for all $n$. This provides the step-size assumption needed by Theorem 3.1 and, by Corollary \ref{COR BASIC CONV}, convergence in the solution at the grid-points with rate $\mathcal{O}(\tau^{p/(\hat{p} + 1)})$.
Now we want to show that assumption \eqref{EQ J STAGE VALS} of Theorem \ref{THRM ORDERS} is fulfilled. Taking a single step of size $\Delta t_n^{(k)}$ from $t_n$ to the quadrature evaluation point $t_n^{(k)} \in (t_n, t_{n+1})$ with the scheme $\mathcal{N}_{(k)}^{t_n, \Delta t_n^{(k)}}$ gives the error
\begin{align*}
\bm{e}_n^{(k)} = \bm{u}_n^{(k)} - \bm{u}(t_n^{(k)}) &
= \mathcal{N}^{t_n, \Delta t_n^{(k)}}_{(k)}\bm{u}_n - \mathcal{M}^{t_n, \Delta t_n^{(k)}} \bm{u}(t_n)\\
& = \underbrace{\mathcal{M}^{t_n, \Delta t_n^{(k)}}}_{=\,\,\mathcal{O}(1)} \underbrace{\bm{e}_n}_{=\,\,\mathcal{O}(\tau^{p/(\hat{p} + 1)})} + \underbrace{\left(\mathcal{N}^{t_n, \Delta t_n^{(k)}}_{(k)} - \mathcal{M}^{t_n, \Delta t_n^{(k)}} \right)\bm{u}_n}_{=\,\,\mathcal{O}\left(\left(\Delta t_n^{(k)}\right)^p\right)}.
\end{align*}
Here we have $\Delta t_n^{(k)} \leq \Delta t_n = \Delta t_n^{\text{ref}} + \mathcal{O}(\Delta T_{\text{ref}}^2) = \mathcal{O}(\tau^{1/(\hat{p}+1)})$. Hence $\bm{e}_n^{(k)} = \mathcal{O}(\tau^{p/(\hat{p} + 1)})$, so assumption \eqref{EQ J STAGE VALS} of Theorem \ref{THRM ORDERS} is fulfilled with $q = \hat{p} + 1$, which gives convergence in the QoI with $e^J = \mathcal{O}(\tau^{\min(r,\,p)/(\hat{p} + 1)})$.\qed
\end{proof}
Thus our adaptive method is convergent in the QoI. The condition $\phi^j_{-, \min} > 0$ is a requirement on the controllability of the global error by means of the local error \eqref{EQ J LOCAL ERROR LOW}. For a density $j$ which is linear in $\bm{u}$, it is equivalent to the local error not lying in the nullspace of $j(t, \cdot)$. Possible consequences of the local error lying in this nullspace are shown in the following example; a more general analysis is the subject of the next section.
\begin{example}
In \cite{Turek1999} the author describes using the lift-coefficient of the flow around a cylinder as density function $j(t, \bm{u})$; it changes sign over time and hence has zeros. It is observed that large timesteps are chosen when the lift-coefficient is close to zero, leading to ``catastrophical results''. In this example criterion \eqref{EQ J PHIMIN} is not fulfilled, so convergence in the QoI is not guaranteed.
\end{example}
We now discuss the time-integration schemes for the quadrature evaluation points, as needed in the assumptions of Theorem \ref{THRM J CONV}. We only need solutions at the points that are not part of the grid, which is relevant only for quadrature schemes of order $r > 2$; for $r = 2$ there is the trapezoidal rule. One can also use linear interpolation of the solution at grid points, which can be formally expressed as a combination of the identity operator and $\mathcal{N}^{t_n, \Delta t_n}$ with suitable weights. Linear interpolation will, however, yield at most $\bm{e}_n^{(k)} = \mathcal{O}(\tau^{2/(\hat{p} + 1)})$.
We instead want to use RK schemes, reusing the already calculated stage derivatives. To determine the weights of an RK scheme for intermediate points $t_n^{(k)} = t_n + \gamma_k \,\Delta t_n$, $\gamma_k \in (0, 1]$, one has to modify the RK order conditions as follows:
taking the order conditions for order $p$, e.g.\ $\sum_s b_s c_s = \frac{1}{2}$ for $p=2$, one multiplies the right-hand side by $\gamma_k^p$. This becomes clear when looking into the details of a proof of the order conditions \cite[pp.~142]{haiwan:93}.
\begin{example}
Assume the classic $4$th order Runge-Kutta scheme for time-integration. Since the convergence rate in the QoI \eqref{EQ ORDERS RESULTS} is determined by the minimum order of quadrature and time-integration scheme, we pick the Simpson rule ($r = 4$) for quadrature. To get a $4$th order ($3$rd order local) solution for the point $t_n + \Delta t_n/2$ one can use the RK weights $b^* = \frac{1}{24}(5, 4, 4, -1)$ for the same stage derivatives.
\end{example}
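The stated weights can be checked numerically. The following sketch is our own illustration; the scalar test problem $u' = u$, $u(0) = 1$ is an assumption of the illustration. It reuses the RK4 stage derivatives with $b^* = \frac{1}{24}(5, 4, 4, -1)$ and confirms that the single-step error at the midpoint shrinks by a factor of about $2^4$ when the timestep is halved:

```python
import math

def rk4_stages(f, t, u, dt):
    # Stage derivatives of the classic 4th-order Runge-Kutta scheme
    k1 = f(t, u)
    k2 = f(t + dt / 2.0, u + dt / 2.0 * k1)
    k3 = f(t + dt / 2.0, u + dt / 2.0 * k2)
    k4 = f(t + dt, u + dt * k3)
    return k1, k2, k3, k4

def rk4_midpoint(f, t, u, dt):
    # Reuse the stages with b* = (5, 4, 4, -1)/24 to approximate u(t + dt/2)
    k1, k2, k3, k4 = rk4_stages(f, t, u, dt)
    return u + dt * (5.0 * k1 + 4.0 * k2 + 4.0 * k3 - k4) / 24.0

def midpoint_error(dt):
    # Single-step error at t + dt/2 for u' = u, u(0) = 1
    return abs(rk4_midpoint(lambda t, u: u, 0.0, 1.0, dt) - math.exp(dt / 2.0))
```

Note that the weights sum to $1/2$, as required for consistency at $\gamma_k = 1/2$.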
\subsection{The nullspace of $j(t, \bm{u})$ and global error propagation}\label{SEC ERR PROP}
Convergence of a method is a statement about the limit $\tau \rightarrow 0$ and does not capture global error dynamics. Here, we analyse these dynamics to make a qualitative statement about the grid obtained from the goal oriented adaptive method for finite $\tau > 0$. We establish guidelines to predict grid quality and thus the performance of the goal oriented adaptive method.
We assume a QoI with a density function that is linear in $\bm{u}$,
\begin{equation*}
j(t, \bm{u}(t)) = \bm{w}^T \bm{u}(t), \quad w_i \geq 0.
\end{equation*}
Considering the split \eqref{EQ J ERROR SPLIT} of $e^J$, the quadrature error does not involve the numerical solution. Global error propagation only appears in the time-integration part of the error, which in this case is given by
\begin{equation*}
|J_h(\bm{u}) - J_h(\bm{u}_h)|
\leq \sum_{n=0}^{N-1} \left| \Delta t_n \sum_{k=0}^s \sigma_k \bm{w}^T \bm{e}_n^{(k)}\right|.
\end{equation*}
We define a weighted seminorm $\| \cdot \|_w: \mathbb{R}^d \mapsto \mathbb{R}$ by
\begin{equation}\label{EQ SEMINORM DEF}
\| \bm{x} \|_w := \sum_{i=1}^d w_i |x_i|, \quad w_i \geq 0.
\end{equation}
This is a seminorm, since it may have a non-trivial nullspace if $w_i = 0$ for some indices $i$. Throughout this section, we assume $\| \cdot\|_w$ to have a non-trivial nullspace.
Using \eqref{EQ SEMINORM DEF} we get the bound
\begin{equation*}
|J_h(\bm{u}) - J_h(\bm{u}_h)|
\leq \sum_{n=0}^{N-1} \Delta t_n \sum_{k=0}^s \sigma_k \|\bm{e}_n^{(k)}\|_w
\end{equation*}
and we need to further investigate how $\|\bm{e}_n^{(k)}\|_w$ is affected by global error propagation. Starting from \eqref{EQ J ERROR PROP 1} - \eqref{EQ J ERROR PROP 3}, we extract the error corresponding to a single quadrature evaluation point and get
\begin{align*}
j(t_{n+1}, \bm{e}_{n+1}^{(k)})
&= j\left(t_{n+1}, \mathcal{M}_{(k)}^{t_n, \Delta t_n} \bm{u}_n\right) - j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}_{(k)}\bm{u}(t_n)\right) \\
& \quad + j\left(t_{n+1}, \mathcal{N}_{(k)}^{t_n, \Delta t_n} \bm{u}_n\right) - j\left(t_{n+1}, \mathcal{M}^{t_n, \Delta t_n}_{(k)}\bm{u}_n\right).
\end{align*}
Replacing $j$ by $\|\cdot\|_w$ yields
\begin{align}
\|\bm{e}_{n+1}^{(k)} \|_w
& \leq \underbrace{\left\|\mathcal{M}_{(k)}^{t_n, \Delta t_n} \bm{u}_n - \mathcal{M}^{t_n, \Delta t_n}_{(k)}\bm{u}(t_n) \right\|_w}_{\text{global error propagation}} \label{EQ PROP ERR 1}\\
& \quad + \underbrace{\left\|\left( \mathcal{N}_{(k)}^{t_n, \Delta t_n} - \mathcal{M}^{t_n, \Delta t_n}_{(k)}\right)\bm{u}_n \right\|_w}_{\text{global error increment}}. \label{EQ PROP ERR 2}
\end{align}
We now want to find a bound for the global error propagation term \eqref{EQ PROP ERR 1} depending on $\bm{e}_n$. For this we use Lipschitz-conditions.
Assume a map $\bm{f}: U \rightarrow W$ fulfills the Lipschitz condition
\begin{equation*}
\| \bm{f}(\bm{u}) - \bm{f}(\bm{v}) \|_W \leq \mathcal{L} \|\bm{u} - \bm{v} \|_U
\end{equation*}
with some constant $\mathcal{L}$ and suitable norms on the spaces $U$ and $W$. The \textit{Lipschitz-norm} of $\bm{f}$ is the minimal $\mathcal{L}$ fulfilling the Lipschitz condition, cf.\ \cite{Soderlind2006a}. We define a corresponding \textit{Lipschitz-seminorm} as follows.
\begin{definition}
Assume a map $\bm{f} : U \rightarrow W$ with $\| \cdot \|_w$ being a \textit{seminorm} on $W$ and $\| \cdot \|_U$ being a \textit{norm} on $U$. We define the \textit{Lipschitz-seminorm} (with respect to a given norm on $U$) by
\begin{equation*}
\mathcal{L}_w[\bm{f}] := \sup_{\bm{u} \neq \bm{v}} \frac{\| \bm{f}(\bm{u}) - \bm{f}(\bm{v})\|_w}{\|\bm{u} - \bm{v}\|_U},
\end{equation*}
where $\bm{u}, \bm{v} \in U$.
\end{definition}
Due to the possibly non-trivial nullspace of $\| \cdot \|_w$, we still require a norm in the denominator. We can use this definition to bound the global error propagation in \eqref{EQ PROP ERR 1} - \eqref{EQ PROP ERR 2} by
\begin{equation*}
\|\bm{e}_{n+1}^{(k)} \|_w
\leq \mathcal{L}_w[\mathcal{M}_{(k)}^{t_n, \Delta t_n}]\, \| \bm{e}_n \|_U + \left\|\left( \mathcal{N}_{(k)}^{t_n, \Delta t_n} - \mathcal{M}^{t_n, \Delta t_n}_{(k)}\right)\bm{u}_n \right\|_w,
\end{equation*}
where $\| \cdot \|_U$ is a norm; in particular it is non-zero for errors lying in the nullspace of $\| \cdot \|_w$. For a non-trivial nullspace this can be problematic, which we first illustrate by a simple test case and later using numerical test problems in section \ref{SEC NUM RESULTS}.
To get a better idea of the dynamics, we look at the linear case $\bm{f}(\bm{u}) = \bm{A}\bm{u}$ and take $\| \cdot \|_U$ to be the 1-norm.
\begin{lemma}
The Lipschitz-seminorm of a linear operator $\bm{A} \in \mathbb{R}^{n \times m}$, with respect to the 1-norm, is given by
\begin{equation}\label{EQ DEF LIP SEMINORM LIN}
\| \bm{A} \|_w
= \sup_{\| \bm{x}\|_1 = 1} \| \bm{A} \bm{x}\|_w
= \max_{j=1,\ldots,\,m}\left( \sum_{i=1}^n w_i |a_{ij}|\right).
\end{equation}
\end{lemma}
\begin{proof}
The first form using the supremum is obtained by setting $\bm{x} := \bm{u} - \bm{v}$ and scaling any $\bm{x}$ with $\| \bm{x} \|_1 \neq 0$ to $\| \bm{x} \|_1 = 1$, using the homogeneity of the seminorm. For the second part we have
\begin{align*}
\| \bm{A} \bm{x}\|_w & =
\sum_{i = 1}^n w_i \bigg| \sum_{j=1}^m a_{ij} x_j \bigg|
\leq \sum_{i = 1}^n w_i \sum_{j=1}^m | a_{ij}| | x_j| \\
& = \sum_{j = 1}^m |x_j| \sum_{i=1}^n w_i | a_{ij}|
\leq \sum_{j = 1}^m |x_j| \left(\max_{j = 1,\ldots,\, m} \sum_{i=1}^n w_i | a_{ij}|\right).
\end{align*}
The bound is attained for $\bm{x} = \bm{e}_{j^*}$, where $j^*$ is a maximising column index, which yields the result.\qed
\end{proof}
Thus the Lipschitz-seminorm of a linear operator is a (weighted) column max-semi\-norm of the given matrix.
\begin{example}
Consider
\begin{equation*}
\bm{A} = \begin{pmatrix} 2 & 1 \\ 0 & 4\end{pmatrix},
\quad \bm{x} = \begin{pmatrix} 1 \\ 2\end{pmatrix},
\quad \bm{w} = \begin{pmatrix} 1 \\ 0\end{pmatrix}
\Rightarrow
\bm{Ax} = \begin{pmatrix} 4 \\ 8\end{pmatrix}.
\end{equation*}
We have $\| \bm{A}\|_w = 2$, $\| \bm{x}\|_w = 1$, $\|\bm{x}\|_1 = 3$ and $\| \bm{Ax}\|_w = 4$. From \eqref{EQ DEF LIP SEMINORM LIN} we get the inequality
\begin{equation*}
\| \bm{A} \bm{x}\|_w \leq \| \bm{A} \|_w \|\bm{x}\|_1,
\end{equation*}
which is fulfilled here, as $4 \leq 6$. However, $\| \bm{A}\bm{x}\|_w \nleq \|\bm{A}\|_w \| \bm{x}\|_w$, since $4 \nleq 2$. This shows that the inequality $\| \bm{A}\bm{x}\| \leq \|\bm{A} \| \|\bm{x}\|$ for norms does not carry over to seminorms.
\end{example}
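The numbers in the example above are easily checked against the column formula \eqref{EQ DEF LIP SEMINORM LIN}. A short sketch, with our own function names:

```python
import numpy as np

def seminorm_w(x, w):
    # Weighted seminorm ||x||_w = sum_i w_i * |x_i|, cf. (EQ SEMINORM DEF)
    return float(np.dot(w, np.abs(x)))

def lipschitz_seminorm_1(A, w):
    # Weighted column max-seminorm, cf. (EQ DEF LIP SEMINORM LIN):
    # max_j sum_i w_i * |a_ij|
    return float(np.max(np.asarray(w) @ np.abs(np.asarray(A))))
```

Running it on the example data reproduces $\|\bm{A}\|_w = 2$, $\|\bm{x}\|_w = 1$ and $\|\bm{Ax}\|_w = 4$, so the bound holds with the 1-norm but fails with the seminorm.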
Consider the flow map and weights
\begin{equation*}
\bm{M} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix}
, \quad
\bm{w} = \begin{pmatrix}1\\0\end{pmatrix}.
\end{equation*}
The diagonal entries of $\bm{M}$ describe damping or amplification in the image resp.\ the nullspace, and the off-diagonal entries describe (scaled) transport from the nullspace into the image or vice versa. This can also be formulated blockwise, in which case the diagonal blocks $m_{11}$ and $m_{22}$ may additionally include transport inside the image resp.\ the nullspace.
Damping is generally favorable, and amplification may be unavoidable if it is part of the ODE/PDE. Transport from the image into the nullspace is unproblematic, but transport from the nullspace into the image can be highly problematic. In this example, where the image corresponds to the first component, the responsible entry is $m_{12}$.
We choose timesteps to control the global error increments \eqref{EQ PROP ERR 2} in the image, i.e., we try to keep the seminorm of these increments below a given tolerance. We do not control the error increments in the nullspace, which can be problematic if errors from the nullspace are transported into the image.
As simulating a process results in errors regardless of the step-size, this is a question of sufficiently resolving relevant processes. Assume a process in the nullspace is faster than a process in the image at a given time. The timesteps are chosen to sufficiently resolve the slow process in the image. The process in the nullspace remains under-resolved, its error exceeding the tolerance. \textbf{If} this error is then transported into the image, the performance of the goal oriented adaptive method suffers.
Likewise, the goal oriented controller performs well if all processes whose errors end up in the image are sufficiently resolved. This can be due to the image containing the processes which require the smaller timesteps. The other case is that processes in the nullspace remain under-resolved but have negligible impact on $J(\bm{u})$. This may be due to strong damping in $m_{22}$ or lack of transport, with $m_{12}$ being small.
Due to the potentially complicated dynamics of the system, it is hard to clearly identify which processes are negligible.
\begin{example}
In \cite{Wick2017} the author simulates flow-driven fracturing of an obstacle. The density function $j(t, \bm{u})$ is the displacement of the obstacle in flow direction, measured at the tip of the outflow edge. Using this density function to control timesteps, there may be a small delay between the flow building up around the obstacle and the displacement occurring. This delay would result in choosing too large timesteps when the displacement is just starting to grow, but the flow pattern around the obstacle is already beginning to form. The author observes significant error reductions once a certain tolerance is reached, which may be the point at which the inflow is sufficiently resolved.
\end{example}
\section{Numerical Results}\label{SEC NUM RESULTS}
We now test the results of Theorem \ref{THRM ORDERS} on convergence rates numerically. Further, we compare the performance of the DWR method and the local error based adaptive methods. Verification of the results of section \ref{SEC LOCAL ERR BASIC} is not presented, as these results are well established.
The experiments were run on an Intel i7-3930K 3.20 GHz CPU and implemented in Python 2.7 using FEniCS \cite{logg2012_FENICS}.
The code is available at \url{http://www.maths.lu.se/fileadmin/maths/personal_staff/PeterMeisrimel/goal_oriented_time.zip}.
The following specifications are shared for all test-cases. For local error based methods using timestep-controllers, we bound the rate by which timesteps change \cite{haiwan:93} by
\begin{equation}\label{EQ DT LIMITER}
\Delta t_{n+1} = \Delta t_n \,\min (f_{\max}, \max (f_{\min}, \text{ind})),
\quad \text{ind} = \left( \frac{\tau}{\tilde{\ell}_n} \right)^{1/(\hat{p}+1)}.
\end{equation}
Here $\tilde{\ell}_n = \| \hat{\ell}_n\|$ for \eqref{EQ CONT DEADBEAT} resp. $\tilde{\ell}_n = |\hat{\ell}_n^j|$ for \eqref{EQ J DEADBEAT CONT}. The purpose of this is to provide more computational stability by preventing too large or too small timestep changes. In practice this will not take effect for $\tau \rightarrow 0$. We use $f_{\max} = 3$ and $f_{\min} = 0.01$ and do not reject timesteps. For the initial timestep we use $\Delta t_0 = \tau^{1/(\hat{p}+1)}$.
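The limiter \eqref{EQ DT LIMITER} is a one-line computation. The following minimal Python sketch implements it; the function name \texttt{next\_timestep} and the argument names are ours, not from the reference implementation.

```python
def next_timestep(dt, err_est, tol, p_hat, f_min=0.01, f_max=3.0):
    """Deadbeat controller with step-size ratio limiter, cf. (EQ DT LIMITER).

    dt      : current timestep Delta t_n
    err_est : local error estimate (norm or |component|, both positive)
    tol     : tolerance tau
    p_hat   : order of the embedded error estimator
    """
    ind = (tol / err_est) ** (1.0 / (p_hat + 1))
    # Clamp the step-size change ratio to [f_min, f_max].
    return dt * min(f_max, max(f_min, ind))
```

With the paper's values $f_{\max} = 3$ and $f_{\min} = 0.01$, a tiny error estimate can at most triple the timestep, and a huge one can at most shrink it by a factor of 100.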
For the DWR method we use an initial grid with $10$ equidistant cells. As refinement strategy we use fixed-rate refinement \cite{bangerth2013adaptive} with $X = 0.8$ and $Y = 0$. This means we refine 80$\%$ of cells corresponding to the largest errors, where refinement means to split the cell into two equally sized cells. To approximate $z \approx z_h^+$ we use a finer grid, dividing all time-intervals in half.
We refer to the adaptive method from section \ref{SEC LOCAL ERR BASIC} as the ``Classic'' method and to the one from section \ref{SEC LOCAL GOAL} as the ``Goal oriented'' method.
\subsection{Test problem}
As a simple test problem with known global error dynamics we consider
\begin{equation}\label{EQ TOY MAIN}
\dot{\bm{u}}(t) =
\begin{pmatrix}
-1 & 1 \\ 0 & k
\end{pmatrix}
\bm{u}(t)
,\quad \bm{u}(t_0) = \bm{u}_0 =
\begin{pmatrix}
1 \\ 1
\end{pmatrix}
,\quad t \in [t_0, t_e].
\end{equation}
We use $[t_0, t_e] = [0,2]$ and vary the stiffness by $k < 0$.
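Since \eqref{EQ TOY MAIN} is linear and upper triangular, its exact solution is available for $k \neq -1$: $u_2(t) = e^{kt}$ and, by variation of constants, $u_1(t) = \frac{k}{k+1}e^{-t} + \frac{1}{k+1}e^{kt}$. The following Python sketch (not part of the paper's code) states this solution and checks it against a fixed-step classical RK4 integration:

```python
import math

def exact(t, k):
    # Exact solution of the test problem for u0 = (1, 1), valid for k != -1:
    # u2(t) = e^{kt}; u1 solves u1' = -u1 + u2.
    u2 = math.exp(k * t)
    u1 = k / (k + 1) * math.exp(-t) + math.exp(k * t) / (k + 1)
    return u1, u2

def rk4(k, t0=0.0, te=2.0, n=200):
    # Classical RK4 with fixed steps on the linear test problem.
    f = lambda u: (-u[0] + u[1], k * u[1])
    h, u = (te - t0) / n, (1.0, 1.0)
    for _ in range(n):
        k1 = f(u)
        k2 = f((u[0] + h / 2 * k1[0], u[1] + h / 2 * k1[1]))
        k3 = f((u[0] + h / 2 * k2[0], u[1] + h / 2 * k2[1]))
        k4 = f((u[0] + h * k3[0], u[1] + h * k3[1]))
        u = (u[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             u[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
    return u
```

The known exact solution is what makes this problem convenient for studying global error dynamics.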
\subsubsection*{DWR estimate}
The unique solution to \eqref{EQ TOY MAIN} is in $\mathcal{C}^{\infty}([t_0, t_e]) \times \mathcal{C}^{\infty}([t_0, t_e])$. We define the finite element space $V_h := \{ u \in \mathcal{C}([t_0, t_e]): u\big|_{I_n} \in \mathcal{P}^q(I_n)\}$, the space of continuous piecewise $q$-th order polynomials, where $I_n = [t_n, t_{n+1}]$. Using test-functions $\bm{\phi}_h := (\phi_1, \phi_2) \in V_h \times V_h$ and $\bm{u}_h := (u_1, u_2) \in V_h \times V_h$, we have the weak formulation
\begin{equation*}
\int_{t_0}^{t_e} \left( \dot{u}_1 + u_1 - u_2\right)\phi_1 + \left( \dot{u}_2 - k \, u_2\right)\phi_2 \, dt = 0.
\end{equation*}
Using $(\cdot, \cdot)_{I_n}$ as the standard $L^2$ scalar product over $I_n$, we have
\begin{equation*}
\sum_{n = 0}^{N-1} \left( \dot{u}_1 + u_1 - u_2, \phi_1\right)_{I_n} + \left( \dot{u}_2 - k\, u_2, \phi_2\right)_{I_n} = 0,
\end{equation*}
where the entire left-hand side defines the bilinear form $A(\bm{u}_h, \bm{\phi}_h)$. Let $\bm{z}$ be the exact adjoint solution and $\bm{z}_h = (z_1, z_2)$ its finite element approximation; then $e^J = A(\bm{u}_h, \bm{z} - \bm{z}_h)$. We approximate $\bm{z}$ by $\bm{z}^+_h = (z_1^+, z_2^+)$ to get
\begin{align*}
e^J \approx &\,\, A(\bm{u}_h, \bm{z}^+_h - \bm{z}_h) \\
\lesssim & \sum_{n=0}^{N-1} \left| \int_{t_n}^{t_{n+1}} \left(\dot{u}_1 + u_1 - u_2\right)\left( z^+_1 - z_1\right) + \left( \dot{u}_2 - k\, u_2\right)\left( z^+_2 - z_2\right)dt \right|.
\end{align*}
Defining
\begin{align*}
R_1(t) & := \left(\dot{u}_1 + u_1 - u_2\right)\left( z^+_1 - z_1\right),\\
R_2(t) &:= \left( \dot{u}_2 - k\, u_2\right)\left( z^+_2 - z_2\right),
\end{align*}
we get the final error estimate $\eta(\bm{u}_h)$ using the composite trapezoidal rule
\begin{align*}
\eta(\bm{u}_h) & := \sum_{n=0}^{N-1} \frac{\Delta t_n}{4} \left| \sum_{i=1}^2 R_i(t_n) + 2 R_i(t_n + \Delta t_n/2) + R_i(t_{n+1}) \right|.
\end{align*}
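This estimate is a plain composite trapezoidal rule applied per interval to the combined residual $R(t) = R_1(t) + R_2(t)$. A minimal Python sketch (names are ours) of the summation:

```python
def eta(R, ts):
    """Composite trapezoidal DWR-type estimate:
    eta = sum_n dt_n/4 * |R(t_n) + 2 R(t_n + dt_n/2) + R(t_{n+1})|,
    where R(t) is the combined weighted residual and ts the time grid."""
    total = 0.0
    for t0, t1 in zip(ts[:-1], ts[1:]):
        dt = t1 - t0
        total += dt / 4 * abs(R(t0) + 2 * R(t0 + dt / 2) + R(t1))
    return total
```

Note that the absolute value is taken per interval, so the estimate bounds the sum of interval-wise error contributions rather than their (possibly cancelling) signed sum.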
\subsubsection{Numerical verification of Theorem \ref{THRM ORDERS}}
We first verify Theorem \ref{THRM ORDERS} for the goal oriented method. Figure \ref{FIG TOY THRM VERIFY} shows results for the Crank-Nicolson scheme with Implicit Euler for error estimation, trapezoidal rule for quadrature and a range of different density functions. With $(p, \hat{p}) = (2, 1)$ and $r = 2$, we expect at least $e^J = \mathcal{O}(\tau)$, which the plots clearly show.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{basic_tol_vs_error_t_2_prob_0_func_1_k_-1.eps}
\includegraphics[scale = 0.19]{basic_tol_vs_error_t_2_prob_0_func_1_k_-100.eps}
\caption{Verification of Theorem \ref{THRM ORDERS} using the goal oriented adaptive method on problem \eqref{EQ TOY MAIN} for $k = -1$ (left) and $k = -100$ (right), the legend shows $j(t, \bm{u})$.}
\label{FIG TOY THRM VERIFY}
\end{figure}
Further we consider fourth order schemes $p = r = 4$ for the goal oriented adaptive method with time-dependent density functions $j$. We use Simpson's rule for quadrature and the classical Runge-Kutta scheme for time-integration. As embedded scheme we use the weights $\hat{\bm{b}} = \frac{1}{3}(1, 1, 0, 1)^T$, which give a second order (third order for autonomous systems) solution. To get a fourth order solution in $t_{n} + \Delta t_n/2$, needed by Simpson's rule, we use the weights $\bm{b}^* = \frac{1}{24}(5, 4, 4, -1)^T$. As the test problem \eqref{EQ TOY MAIN} is autonomous, we have $(p, \hat{p}) = (4, 3)$. With $r = 4$ and fourth order solutions in all evaluation points of the quadrature scheme, we expect $e^J = \mathcal{O}(\tau) = \mathcal{O}(N^{-4})$ from Theorem \ref{THRM ORDERS}. This can be observed in Figure \ref{FIG TOY ORDER 4}.
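The quoted weight vectors can be checked against the Runge-Kutta order conditions. The sketch below (assuming the standard classical RK4 Butcher tableau with $c = (0, \frac12, \frac12, 1)$) does this with exact rational arithmetic; for the midpoint weights $\bm{b}^*$ the dense-output conditions with $\theta = \frac12$ apply.

```python
from fractions import Fraction as F

# Classical RK4 tableau (assumed; c-nodes and strictly lower triangular A).
c = [F(0), F(1, 2), F(1, 2), F(1)]
A = [[0, 0, 0, 0],
     [F(1, 2), 0, 0, 0],
     [0, F(1, 2), 0, 0],
     [0, 0, 1, 0]]
b_hat = [F(1, 3), F(1, 3), F(0), F(1, 3)]         # embedded weights (1,1,0,1)/3
b_star = [F(5, 24), F(1, 6), F(1, 6), F(-1, 24)]  # midpoint weights (5,4,4,-1)/24

dot = lambda b, x: sum(bi * xi for bi, xi in zip(b, x))
Ac = [dot(row, c) for row in A]  # stage quantities sum_j a_ij c_j
```

One finds that $\hat{\bm{b}}$ satisfies the order-1 and order-2 conditions and one of the two order-3 conditions ($\sum_i \hat b_i \sum_j a_{ij} c_j = \frac16$), while the bushy-tree condition $\sum_i \hat b_i c_i^2 = \frac13$ fails; $\bm{b}^*$ satisfies all dense-output conditions up to order 3 at $\theta = \frac12$, i.e. local error $\mathcal{O}(\Delta t^4)$ at the midpoint.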
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{4th_order_tol_vs_err.eps}
\includegraphics[scale = 0.19]{4th_order_steps_vs_err.eps}
\caption{Verification of Theorem \ref{THRM ORDERS} using the goal oriented adaptive method on problem \eqref{EQ TOY MAIN} for $k = -1$, using fourth order schemes; the legend shows $j(t, \bm{u})$.}
\label{FIG TOY ORDER 4}
\end{figure}
\subsubsection{Method comparison and performance tests}\label{SEC TOY PERF CMP}
We now compare the DWR method with the local error based classic and goal oriented methods. For the DWR method we additionally have the estimate $\eta(u_h)$ of the error $e^J$, which we denote by ``DWR Est'' in the figures; the actual error is denoted by ``DWR Err''. We only consider the final grid with DWR Est $= \eta(u_h) \leq \tau$. We use second order time-integration for both DWR and the local error methods. As DWR requires a variational formulation, we use the Crank-Nicolson scheme for time-integration, and for the local error based methods Implicit Euler for error estimation. For quadrature we use the trapezoidal rule.
We compare methods in terms of computational efficiency (error vs. computational time spent) and grid quality (error vs. number of timesteps). We consider the density functions $j(t, \bm{u}) = u_1$ for $k \in \{ -1, -100\}$ (Figures \ref{FIG TOY U1 K 1}, \ref{FIG TOY U1 K 100}) and $j(t, \bm{u})= u_2$ for $k = -1$ (Figure \ref{FIG TOY U2 K 1}). Results for DWR are considered first and the local error based adaptive methods are then discussed based on the results of section \ref{SEC ERR PROP}.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{comp_efficiency_prob_0_func_1_k_-1.eps}
\includegraphics[scale = 0.19]{grid_quality_prob_0_func_1_k_-1.eps}
\caption{Performance comparison of the various methods for problem \eqref{EQ TOY MAIN} for $k =-1$ and $j(t, \bm{u}) = u_1$.}
\label{FIG TOY U1 K 1}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{comp_efficiency_prob_0_func_2_k_-1.eps}
\includegraphics[scale = 0.19]{grid_quality_prob_0_func_2_k_-1.eps}
\caption{Performance comparison of the various methods for problem \eqref{EQ TOY MAIN} for $k =-1$ and $j(t, \bm{u}) = u_2$.}
\label{FIG TOY U2 K 1}
\end{figure}
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{comp_efficiency_prob_0_func_1_k_-100.eps}
\includegraphics[scale = 0.19]{grid_quality_prob_0_func_1_k_-100.eps}
\caption{Performance comparison of the various methods for problem \eqref{EQ TOY MAIN} for $k =-100$ and $j(t, \bm{u}) = u_1$.}
\label{FIG TOY U1 K 100}
\end{figure}
Looking at Figures \ref{FIG TOY U1 K 1} - \ref{FIG TOY U1 K 100} and considering the actual error (DWR Err), we see that the DWR method produces the best grids. This is expected, since the method uses global grid adaptation. The DWR method is, however, significantly slower, due to the need to solve adjoint equations to compute the error estimate.
The differences in the local error methods have to be discussed for each case individually. For $k=-1$ and $j(t, \bm{u}) = u_1$, see Figure \ref{FIG TOY U1 K 1}, the derivative of $u_1$ is slightly smaller, due to the additional off-diagonal term. Thus only controlling the error in the first component under-resolves the second component, which is relevant to $J(\bm{u})$ due to coupling. Furthermore, though not immediately evident, we do not fulfill the criterion \eqref{EQ J PHIMIN} needed for convergence in the QoI: we have
\begin{equation}\label{EQ TOY PRINC ERR FCT}
j(t, \bm{\phi}(t, \bm{u}(t))) = \frac{1}{2} e^{-t} (t - 1),
\end{equation}
meaning the error estimate vanishes at $t = 1$ for $\tau \rightarrow 0$. As a result we do not have convergence in the QoI for $\tau \rightarrow 0$, since the timestep taken at $t = 1$ will tend to infinity. This trend can be observed when looking at the timesteps over time in Figure \ref{FIG TOY U1 K 1 STEPS}, which form an upward cusp. We are, however, using an extremely small tolerance of $\tau = 10^{-14}$ and have the error $e^J \approx 4 \cdot 10^{-13}$, which is already close to machine precision. This shows that the requirement \eqref{EQ J PHIMIN} may not be strict in practice for some problems.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{timesteps_tol_14.eps}
\caption{Timesteps for numerical solution of \eqref{EQ TOY MAIN} for $j(t, \bm{u}) = u_1$, $k = -1$ and $\tau = 10^{-14}$. A cusp at $t \approx 1$ can be observed, where the principal error function \eqref{EQ TOY PRINC ERR FCT} has a zero. Here we did not use the step-size limiter \eqref{EQ DT LIMITER}.}
\label{FIG TOY U1 K 1 STEPS}
\end{figure}
In the case of $k=-1$ and $j(t, \bm{u}) = u_2$, see Figure \ref{FIG TOY U2 K 1}, we do control the error in the fastest process with the goal oriented method. Thus the chosen timesteps sufficiently resolve all processes. The results show that the two local error based methods have grids of identical quality and require the same computational effort.
For $k = -100$ and $j(t, \bm{u}) = u_1$, see Figure \ref{FIG TOY U1 K 100}, similarly to the case $k = -1$ we do not control the fastest process, but the impact of $u_2$ on $J(\bm{u})$ is small. It turns out that the efficiency gained by not properly resolving the second component is worth the additional error, making the goal oriented method more efficient by a factor of around two.
\subsection{Convection-diffusion equation}
Moving to a problem involving a spatial component, we look at a linear convection-diffusion equation
\begin{align}
\partial_t u(t, \bm{x}) + a \, \bm{v} \cdot \nabla u(t, \bm{x}) - \gamma \Delta u(t, \bm{x}) = f(t,\bm{x}), \quad & (t, \bm{x}) \in [t_0, t_e] \times \Omega, \nonumber\\
u(t_0, \bm{x}) = u_0(\bm{x}), \quad & \bm{x} \in \Omega, \label{EQ HEAT PROP}\\
\nabla u \cdot \bm{n} = - c \, u(t, \bm{x}), \quad & (t, \bm{x}) \in [t_0, t_e] \times \partial \Omega \nonumber.
\end{align}
We want to model the case of having error build-up in the nullspace of $j(t, u)$, which is transported into its image. We consider the domain $\Omega = [0,3] \times [0,1]$ and restrict the source term $f$ to $\Omega_f = [0.25, 0.75] \times [0.25, 0.75]$. As QoI we consider
\begin{equation}\label{EQ HEAT J}
J(u) = \int_{t_0}^{t_e} j(t, u(t)) dt, \quad j(t, u(t)) = \int_{\Omega_*} \frac{u(t, \bm{x})}{t_e - t_0} dx,
\end{equation}
with $\Omega_* = [2.25, 2.75] \times [0.25, 0.75]$. For a visualization of the spatial domain, see Figure \ref{FIG HEAT DOMAINS}; for the time domain we use $[t_0, t_e] = [0, 6]$.
\begin{figure}[ht!]
\begin{center}
\begin{tikzpicture}[scale = 2.5]
\draw [-] (1,1) -- (4,1) -- (4,2) -- (1,2) -- (1,1);
\draw [-] (1.25,1.25) -- (1.75,1.25) -- (1.75,1.75) -- (1.25,1.75) -- (1.25,1.25);
\draw [-] (3.25,1.25) -- (3.75,1.25) -- (3.75,1.75) -- (3.25,1.75) -- (3.25,1.25);
\node at (1.5, 1.5) {$\Omega_f$};
\node at (3.5, 1.5) {$\Omega_*$};
\node at (2.5, 1.1) {$\Omega$};
\draw [->] (2.3,1.5) -- (2.7, 1.5); \node at (2.5, 1.55) {$\bm{v}$};
\draw [-] (1,1) -- (0.95, 1); \node at (0.85, 1) {0};
\draw [-] (1,1.25) -- (0.95, 1.25); \node at (0.75, 1.25) {0.25};
\draw [-] (1,1.75) -- (0.95, 1.75); \node at (0.75, 1.75) {0.75};
\draw [-] (1,2) -- (0.95, 2); \node at (0.85, 2) {1};
\draw [-] (1,1) -- (1, 0.95); \node at (1, 0.85) {0};
\draw [-] (1.25,1) -- (1.25, 0.95); \node at (1.25, 0.85) {0.25};
\draw [-] (1.75,1) -- (1.75, 0.95); \node at (1.75, 0.85) {0.75};
\draw [-] (3.25,1) -- (3.25, 0.95); \node at (3.25, 0.85) {2.25};
\draw [-] (3.75,1) -- (3.75, 0.95); \node at (3.75, 0.85) {2.75};
\draw [-] (4,1) -- (4, 0.95); \node at (4, 0.85) {3};
\end{tikzpicture}
\end{center}
\caption{Geometry of $\Omega$ for the convection-diffusion equation problem \eqref{EQ HEAT PROP}.}
\label{FIG HEAT DOMAINS}
\end{figure}
We use the source term
\begin{equation*}
f(t, \bm{x}) =
\begin{cases}
5 \, t^3, & \bm{x} \in \Omega_f \,\, \text{and}\,\, t < 1, \\
5\, (2 - t)^3, & \bm{x} \in \Omega_f \,\, \text{and}\,\, 1 \leq t < 2,\\
0, & \bm{x} \notin \Omega_f \,\,\text{or}\,\,\,\,\,\, 2 \leq t,
\end{cases}
\end{equation*}
providing a spike-shaped build-up in the first $2$ time units. The remaining parameters are $a = 0.5$, $\gamma = 0.01$, $c = 0.15$ and $\bm{v} = (1, 0)^T$. We use the initial condition $u_0(\bm{x}) = 1$. Since we do not have an analytical solution, we use as reference the solution from the classic adaptive method with $\tau_{\text{ref}} = \tau_{\text{min}}/10$, where $\tau_{\text{min}}$ is the minimal tolerance for which tests are done.
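The piecewise source term is straightforward to implement. A minimal Python sketch (the boolean flag replacing the spatial indicator of $\Omega_f$ is our simplification):

```python
def f(t, x_in_omega_f):
    """Source term of the convection-diffusion problem:
    spike-shaped build-up during the first 2 time units,
    supported on Omega_f (here flagged by a boolean)."""
    if not x_in_omega_f or t >= 2:
        return 0.0
    return 5 * t**3 if t < 1 else 5 * (2 - t)**3
```

The cubic ramps make $f$ continuous in time with a peak value of $5$ at $t = 1$, which is the feature the goal oriented controller fails to resolve in this experiment.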
\subsubsection*{Discretization}
For our convection-diffusion problem we have a weak solution in the space
\begin{equation*}
V := H^1(t_0,t_e; L^2(\Omega)) \cap L^2(t_0, t_e; H^1(\Omega)),
\end{equation*}
see \cite{renardy2006introduction}. We discretize time along the points $t_n$ with $I_n := [t_n, t_{n+1}]$ and space by $2\,(3\cdot32)\cdot32 = 6144$ regular triangular cells $K$ defining the finite element mesh. We define the global finite element space by
\begin{align*}
V_{h,k} = \{ v \in L^\infty(t_0, t_e; H^1(\Omega)): \,&
v(\cdot,t)|_{Q_K^n} \in Q^1(K), \\
& v(x, \cdot)|_{Q_K^n} \in P^q(I_n), \forall \,\,Q_K^n\},
\end{align*}
where $P^q(I_n)$ is the space of polynomials on $I_n$ of degree up to $q$ and $Q^1(K)$ is the space of polynomials on $K$ with partial degrees up to $1$. In this space the variational formulation becomes
\begin{equation*}
A_h(u_h, \phi_h) = F(\phi_h)
\end{equation*}
for all $\phi_h \in V_{h,k}$ with the bilinear form
\begin{equation*}
A_h(u_h, \phi_h) := \int_{t_0}^{t_e}
(\partial_t u_h, \phi_h) +
a (\bm{v} \cdot \nabla u_h, \phi_h) -
\gamma (\Delta u_h, \phi_h) dt
\end{equation*}
and right-hand side
\begin{equation*}
F(\phi_h) = \int_{t_0}^{t_e} (f(t), \phi_h)dt.
\end{equation*}
After integrating the diffusion term by parts and inserting the boundary condition, the weak formulation is
\begin{equation*}
\int_{t_0}^{t_e} (\partial_t u_h + a \bm{v} \cdot \nabla u_h - f, \phi_h) +
\gamma (\nabla u_h, \nabla \phi_h) +
\gamma \, c (u_h, \phi_h)_{\partial \Omega} dt = 0,
\end{equation*}
from which one can directly write down the $\theta$-method yielding both Crank-Nicolson and Implicit Euler.
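For a semi-discretized linear system $\bm{u}' = M\bm{u}$ (after moving the source to the right-hand side), the $\theta$-method reads $(I - \theta \Delta t M)\,\bm{u}_{n+1} = (I + (1-\theta)\Delta t M)\,\bm{u}_n$, with $\theta = 1$ giving Implicit Euler and $\theta = \frac12$ Crank-Nicolson. A minimal NumPy sketch (our own helper, not the FEniCS implementation):

```python
import numpy as np

def theta_step(u, M, dt, theta):
    """One step of the theta-method for u' = M u:
    (I - theta*dt*M) u_{n+1} = (I + (1-theta)*dt*M) u_n.
    theta = 1: Implicit Euler; theta = 1/2: Crank-Nicolson."""
    I = np.eye(len(u))
    return np.linalg.solve(I - theta * dt * M, (I + (1 - theta) * dt * M) @ u)
```

In the adaptive methods the same routine is used twice per step: once with $\theta = \frac12$ for the solution and once with $\theta = 1$ for the lower order solution used in the error estimate.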
We have the adjoint equation
\begin{align*}
- z_t(t, \bm{x}) - a \,\bm{v} \cdot \nabla z(t, \bm{x}) - \gamma \Delta z(t, \bm{x}) = \textstyle{\frac{1}{t_e - t_0}}\Big|_{\Omega_*}, & \quad (t,\bm{x}) \in [t_0, t_e] \times \Omega,\\
z(t_e, \bm{x}) = 0, & \quad \bm{x} \in \Omega,\\
\nabla z(t, \bm{x}) \cdot \bm{n} = - \frac{a}{\gamma} \, z(t, \bm{x})\, \bm{v} \cdot \bm{n} - c \,z(t, \bm{x}), & \quad (t,\bm{x}) \in [t_0, t_e] \times \partial \Omega.
\end{align*}
The weak formulation is
\begin{align*}
\int_{t_0}^{t_e} & (-\partial_t z_h - a \,\bm{v}\cdot\nabla z_h, \phi_h)_{\Omega} + \gamma (\nabla z_h, \nabla \phi_h)_{\Omega} \,- \\
& (\textstyle{\frac{1}{t_e - t_0}}, \phi_h)_{\Omega_*} + (c\,\gamma\, z_h + a\,z_h\,\bm{v}\cdot \bm{n}, \phi_h)_{\partial \Omega} dt = 0.
\end{align*}
\subsubsection*{DWR Estimate}
We have
\begin{align*}
e^J & = \left|\int_{t_0}^{t_e} \int_{\Omega} A_h(u_h, z - z_h) - F(z_h) dx\,dt\right|\\
& = \left|\int_{t_0}^{t_e} \int_{\Omega} ((u_h)_t + a\,\bm{v}\cdot \nabla u_h - \gamma \Delta u_h - f)(z - z_h) dx\, dt\right|,
\end{align*}
where we approximate $z\approx z_h^+$ using a finer grid in time. Splitting this by timesteps gives
\begin{equation*}
e^J \lesssim \sum_{n=0}^{N-1} \Big|\int_{t_n}^{t_{n+1}} \underbrace{\int_{\Omega} ((u_h)_t + a\,\bm{v}\cdot \nabla u_h - \gamma \Delta u_h - f)(z_h^+ - z_h) dx}_{=: R_z(t)}\, dt\Big|.
\end{equation*}
We use the composite trapezoidal rule to get the error estimate
\begin{equation*}
e^J \lesssim \eta_h(u_h) := \sum_{n=0}^{N-1} \frac{\Delta t_n}{4}\left| R_z(t_n) + 2 R_z(t_n + \Delta t_n/2) + R_z(t_{n+1})\right|,
\end{equation*}
using linear interpolation for $u_h$ and $z_h$ in computing $R_z(t_n + \Delta t_n/2)$.
\subsubsection{Method comparison and performance tests}
We use the same schemes for time-integration as in section \ref{SEC TOY PERF CMP}. We again compare DWR with the two local error based adaptive methods. The way we set up the problem, we expect the goal oriented method to perform poorly. Due to the source term being in the nullspace of $j(t, u)$, the resulting timesteps will not sufficiently resolve it. The convection transports the build-up from the source term and its error into the image of $j(t, u)$. This leads to an increase in error, which can no longer be controlled by the step-size.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{comp_eff_fwd.eps}
\includegraphics[scale = 0.19]{grid_qual_fwd.eps}
\caption{Performance comparison of the various methods for problem \eqref{EQ HEAT PROP} with QoI \eqref{EQ HEAT J}.}
\label{FIG HEAT FWD}
\end{figure}
The results can be seen in Figure \ref{FIG HEAT FWD}. One can observe that the classic adaptive method performs well, while the goal oriented adaptive method shows the expected poor performance. In Figure \ref{FIG HEAT TIMESTEPS} one can see that the timesteps chosen by the goal oriented method are too large to resolve the source term. Nevertheless we have convergence in the QoI with $e^J = \mathcal{O}(\tau)$, as predicted by Theorem \ref{THRM ORDERS}, see Figure \ref{FIG HEAT TIMESTEPS}.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{tol_vs_err_fwd.eps}
\includegraphics[scale = 0.19]{tols_prob_12_func_1_dx_32_dom_1_t_6_tol_1e-06.eps}
\caption{Solving problem \eqref{EQ HEAT PROP} with QoI \eqref{EQ HEAT J}. Left: Tolerance over error for the goal oriented adaptive method; Right: Timesteps chosen by the different methods for $\tau = 10^{-6}$.}
\label{FIG HEAT TIMESTEPS}
\end{figure}
The DWR method is computationally expensive, but gives high quality grids. Here, we used it to adapt the grid in time only, to get a fair comparison with the other methods.
Changing the sign of the convection term, we expect good results for the goal oriented method, since it is no longer required to properly resolve the source term. Considering only $[t_0, t_e] = [0, 3]$, we get the results seen in Figure \ref{FIG HEAT BWD}.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{comp_eff_bwd.eps}
\includegraphics[scale = 0.19]{grid_qual_bwd.eps}
\caption{Performance comparison of the various methods for problem \eqref{EQ HEAT PROP} and QoI \eqref{EQ HEAT J}. We changed the sign in $\bm{v}$ and use $t_e = 3$.}
\label{FIG HEAT BWD}
\end{figure}
The goal oriented method performs well in this example, but not better than the classic one. While not properly resolving the source term does allow larger timesteps, it does not seem to yield an advantage in terms of computational efficiency or grid quality. The DWR method performs better than in the previous examples, but is still slower than the local error based methods.
\subsection{Coupled Heat equations}
As a third test problem we consider the coupling of two heat equations with different thermal conductivities and diffusivities. As QoI we choose the average heat transfer over their interface $\Gamma$. The model equations for this problem are
\begin{align}
\alpha_m \frac{\partial u_m(t, \bm{x})}{\partial t} - \nabla \cdot (\lambda_m \nabla u_m(t, \bm{x})) = 0, &\quad (t,\bm{x}) \in [t_0, t_e] \times \Omega_m,\quad m = 1,2,\nonumber\\
u_m(t, \bm{x}) = 0, &\quad (t,\bm{x}) \in [t_0, t_e] \times \partial\Omega_m\setminus\Gamma,\nonumber\\
u_1(t, \bm{x}) = u_2(t, \bm{x}), \quad
\lambda_2 \frac{\partial u_2(t, \bm{x})}{\partial \bm{n}_2} = - \lambda_1 \frac{\partial u_1(t, \bm{x})}{\partial \bm{n}_1}, & \quad (t, \bm{x}) \in [t_0, t_e] \times \Gamma, \label{EQ FSI TRANS COND}\\
u_m(t_0, \bm{x}) = u_m^0(\bm{x}), & \quad\bm{x} \in \Omega_m, \quad m = 1,2.\nonumber
\end{align}
The QoI
\begin{equation*}
J(u)
= \int_{t_0}^{t_e}\int_{\Gamma} \frac{1}{t_e - t_0} \lambda_1 \frac{\partial u_1(t, \bm{x})}{\partial \bm{n}_1}\,\, dx\, dt
= - \int_{t_0}^{t_e}\int_{\Gamma} \frac{1}{t_e - t_0} \lambda_2 \frac{\partial u_2(t, \bm{x})}{\partial \bm{n}_2}\,\, dx\, dt,
\end{equation*}
describes the time-averaged heat transfer over the interface. We consider the spatial domains $\Omega_1 = [0,1]\times[0,1]$, $\Omega_2 = [1,2]\times[0,1]$, $\Gamma = \Omega_1 \cap \Omega_2$ and $[t_0, t_e] = [0, 1]$. For discretization in space we use standard linear finite elements for both domains with identical triangular meshes for $\Delta x = 1/21$. In the discrete case the QoI becomes a summed finite difference, which we calculate based on the solution in $\Omega_1$.
For time-integration we use the SDIRK2 scheme, which is implicit with $(p,\, \hat{p}) = (2,\,1)$. To solve the problem arising from the so-called transmission conditions \eqref{EQ FSI TRANS COND} on the interface $\Gamma$, we use the Dirichlet-Neumann iteration for each stage derivative of SDIRK2 \cite{Birken2010}. In the heat equations we choose the parameters $\alpha_1 = 0.6 ,\,\,\lambda_1 = 0.3$ and $\alpha_2 = \lambda_2 = 1$. Based on the results of \cite{monge2017}, this gives us a convergence rate of approximately $ \alpha_1/\alpha_2 = 0.6$ for the Dirichlet-Neumann iteration for $\Delta t \rightarrow 0$. The termination criterion for the Dirichlet-Neumann iteration is based on the update between two iterates, for which we use a tolerance of $10^{-10}$, such that the arising error does not exceed the local errors.
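The termination criterion can be illustrated on a scalar model: a linear fixed-point iteration with contraction rate $0.6$, stopped when the update between two iterates falls below the tolerance. This toy sketch (our own, purely schematic) mimics the stopping logic, not the actual Dirichlet-Neumann operator:

```python
def fixed_point(rate, x0=1.0, tol=1e-10):
    """Toy linear fixed-point iteration x_{k+1} = rate * x_k, terminated
    when the update |x_{k+1} - x_k| drops below tol -- schematically the
    termination criterion used for the Dirichlet-Neumann iteration."""
    x, its = x0, 0
    while True:
        x_new = rate * x
        its += 1
        if abs(x_new - x) < tol:
            return x_new, its
        x = x_new
```

With rate $0.6$ the update shrinks by a constant factor per iteration, so reaching a tolerance of $10^{-10}$ takes on the order of $\log(10^{-10})/\log(0.6) \approx 45$ iterations per stage.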
The implementation of the discretization and methods is due to Azahar Monge; more details on the spatial discretization can be found in \cite{monge2016}. We use the initial timestep $\Delta t_0 = \tau$ for all computations. As our reference we use the solution from the classical adaptive method with $\tau = 10^{-7}$.
\subsubsection{Method comparison and performance tests}
Based on the results from the previous problems, we no longer consider the DWR method. While we specifically considered both grid quality and computational efficiency because of the DWR method, we now look only at grid quality, as these two performance measures are essentially identical here.
As our problem has zero Dirichlet boundary conditions and no source term, the solution will vanish. The question is how much heat transfer over the interface will occur during this process.
We consider the problem for two different sets of initial conditions given by
\begin{align}
u^0_1(\bm{x}) &= \left|200 \sin(\pi/2\,x_1^2)\,\sin(\pi\,x_2^2)\right|, \label{EQ FSI U0 1}\\
u^0_2(\bm{x}) &=
\begin{cases}
200, & \bm{x} \in \Omega_2\setminus\partial \Omega,\\
0, & \text{otherwise}.
\end{cases}\label{EQ FSI U0 2}
\end{align}
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{tols_error_vs_steps_prob_21_func_0_t_1_u05.eps}
\includegraphics[scale = 0.19]{tols_error_vs_steps_prob_21_func_0_t_1_u02.eps}
\caption{Performance comparison of the local error based adaptive methods for the coupled heat problem with initial conditions \eqref{EQ FSI U0 1} on the left and \eqref{EQ FSI U0 2} on the right.}
\label{FIG FSI GRID}
\end{figure}
With \eqref{EQ FSI U0 1}, the initial conditions at the interface $x_1 = 1$ are symmetric, but the steepest heat gradient is inside $\Omega_2$. The choice of timesteps of the classical method is governed by the internal dynamics of $\Omega_2$, whereas the goal-oriented method will choose larger timesteps, especially in the beginning. Not correctly resolving the internal dynamics of $\Omega_2$ does not, however, have a big impact on the values at the interface, due to diffusion. The results in Figure \ref{FIG FSI GRID} (left) show that the grid quality of both methods lies on the same parametrized curve for sufficiently small tolerances. For the same tolerance, the classical method gives a smaller error, since it resolves the internal dynamics of $\Omega_2$.
For the initial condition \eqref{EQ FSI U0 2}, the heat transfer over the interface is a good measure of the speed of the diffusion process. While the heat gradient is likely to be steeper at the non-interface boundaries of $\Omega_2$, these areas have little to no impact on our QoI. Hence the classical method will choose smaller timesteps than necessary for the QoI. This is confirmed by the results in Figure \ref{FIG FSI GRID} (right), which show that the goal oriented method performs better.
The timesteps over time in Figure \ref{FIG FSI TIMESTEPS} show that for the initial conditions \eqref{EQ FSI U0 1}, the chosen timesteps have a similar shape, but are shifted. This explains why the performance of both methods lies on the same curve for $\tau \rightarrow 0$. However, for the initial condition \eqref{EQ FSI U0 2} the timesteps have a different shape, one which gives better performance.
\begin{figure}[ht!]
\centering
\includegraphics[scale = 0.19]{tols_prob_21_func_0_t_1_tol_1e-05_u05.eps}
\includegraphics[scale = 0.19]{tols_prob_21_func_0_t_1_tol_1e-04_u02.eps}
\caption{Timestep series from solving the coupled heat problem with initial conditions \eqref{EQ FSI U0 1} for $\tau = 10^{-5}$ on the left and \eqref{EQ FSI U0 2} for $\tau = 10^{-4}$ on the right.}
\label{FIG FSI TIMESTEPS}
\end{figure}
\section{Conclusions}
We derived a simple and easy to implement goal oriented local error estimator. For the resulting goal oriented adaptive method we prove convergence in the QoI. The constructive nature of our proof gives us necessary requirements for convergence and on $\Delta t_0$. Specifically, we require the error estimate to be non-zero at all times. While this is a natural assumption on controllability, one has to keep in mind that the error estimate is not based on a norm and can have a non-trivial nullspace.
A broad range of initial timesteps is allowed, as long as they are of the right order with respect to the tolerance. This means our results hold for any reasonable scheme used to compute initial timesteps.
Furthermore we show convergence rates and sufficient requirements on the involved schemes to get high convergence rates in the QoI. This involves the need for high order solutions in the quadrature evaluation points, for which we describe how to get the right coefficients for RK schemes. The structure of our proof allows us to immediately conclude the same result for closely related controllers.
We further derived guidelines to predict performance of the goal oriented method in relation to classical adaptive methods. These are based on analyzing global error propagation with respect to the nullspace of the error estimator. The goal oriented adaptive method does not regard errors in the nullspace of the error estimator when choosing timesteps. If processes in the nullspace are not sufficiently resolved by the chosen timesteps and the resulting error affects the QoI due to global error propagation, the performance of the goal oriented method will suffer. The goal oriented method will perform well if all relevant processes are sufficiently resolved. To use these guidelines, one requires sufficient knowledge of the global error dynamics of a problem.
In numerical experiments designed to test these guidelines, we confirm the results on convergence rates and that the guidelines hold true for our test-cases. We test a linear system with two variables with varying stiffness for various QoIs. As more complex test cases, we consider a 2D convection-diffusion equation with a source term that lies outside the observation domain of the QoI, as well as two coupled heat equations with varying coefficients, with the heat transfer over the interface as the QoI.
The tests show that it is easy to correctly predict bad performance of the goal oriented method. It is, however, hard to predict if the goal oriented method will perform better than a classical norm-based adaptive method.
The results further show that the local error based adaptive methods perform better than the DWR method. The goal oriented method is shown to perform well in many cases; it is, however, not recommended as a black-box solver for general goal oriented problems.
\section*{Acknowledgements}
The authors want to thank Patrick Farrell for helping with FEniCS and dolfin-adjoint, Claus F\"uhrer and Gustaf S\"oderlind for many interesting discussions and feedback, and Azahar Monge for the implementation of the final test problem.
\bibliographystyle{siam}
| {
"timestamp": "2018-05-09T02:10:34",
"yymm": "1805",
"arxiv_id": "1805.03075",
"language": "en",
"url": "https://arxiv.org/abs/1805.03075",
"abstract": "We consider initial value problems where we are interested in a quantity of interest (QoI) that is the integral in time of a functional of the solution of the IVP. For these, we look into local error based time adaptivity. We derive a goal oriented error estimate and timestep controller, based on error contribution to the error in the QoI, for which we prove convergence of the error in the QoI for tolerance to zero under weak assumptions. We analyze global error propagation of this method and derive guidelines to predict performance of the method. In numerical tests we verify convergence results and guidelines on method performance. Additionally, we compare with the dual-weighted residual method (DWR) and classical local error based time-adaptivity. The local error based methods show better performance than DWR and the goal oriented method shows good results in most examples, with significant speedups in some cases.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Goal oriented time adaptivity using local error estimates",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138092657496,
"lm_q2_score": 0.7248702761768248,
"lm_q1q2_score": 0.7087156789443593
} |
https://arxiv.org/abs/1110.1737 | Graded Morita equivalence of Clifford superalgebras | This note uses a variation of graded Morita theory for finite dimensional superalgebras to determine explicitly the graded basic superalgebras for all real and complex Clifford superalgebras. As an application, the Grothendieck groups of the category of left $\mathbb{Z}_2$-graded modules over all real and complex Clifford superalgebras are described explicitly. | \section{Introduction}
Clifford (super)algebras or special Grassmann (super)algebras play an important role in many branches of
mathematics, such as Clifford analysis, algebra, mathematical physics, geometry and topology; see for
example \cite{abs,V}. For finite dimensional superalgebras, there is a natural graded Morita theory,
which states that a finite dimensional superalgebra is graded Morita equivalent to a
graded basic superalgebra, see \cite[Theorem 2.3]{han} for a more general context.
Note that some important algebraic objects such as
the Hochschild and cyclic (co)homology of superalgebras are graded Morita invariants \cite{zhao} and
that the fundamental relationship between Clifford (super)algebras and Bott periodicity is established by
Atiyah, Bott and Shapiro \cite{abs}; see also \cite{Lawson} for an exposition. It is a natural and interesting problem
to determine explicitly the graded basic superalgebras for Clifford superalgebras. In this note, using a
variation of the graded Morita equivalence theory for finite dimensional
superalgebras (Proposition-Definition~\ref{psbasicdef}), we determine explicitly the graded basic superalgebras for
all real and complex Clifford superalgebras (Theorems~\ref{tclifford}
and \ref{thm-complex}). As an application, we determine explicitly the Grothendieck groups of $\mathbb{Z}_2$-graded modules over all real and complex Clifford superalgebras (Theorem~\ref{thm: Grothendieck}), which is useful to our
understanding of the Atiyah-Bott-Shapiro isomorphisms.
The layout of this note is as follows. We begin in Section~\ref{sec: Preliminaries} with preliminaries on superalgebras
and fix our notation. In Section~\ref{sec: Morita} we briefly review the graded Morita theory for superalgebras
and give some useful lemmas. The graded basic superalgebras for all real and complex Clifford
superalgebras are determined explicitly in Section~\ref{sec: Main}. Finally, as an application, we determine
explicitly the Grothendieck groups of $\mathbb{Z}_2$-graded modules
over all real and complex Clifford superalgebras in Section~\ref{application}.
\section{Preliminaries on superalgebras}\label{sec: Preliminaries}
In this section we recall some facts on superalgebras and fix our notation; our references are
\cite{bass}, \cite[Chapter 3]{manin} and \cite[\S 1]{kassel}.
\begin{point}{}*
Let $K$ be a field and $\mathbb{Z}_2:=\mathbb{Z}/2\mathbb{Z}=\{0,
1\}$. By a \textit{superspace} we mean a $\mathbb{Z}_2$-graded
$K$-vector space $V$, namely a $K$-vector space with a decomposition
into two subspaces $V = V_{0}\oplus V_{1}$. A nonzero element $v$ of
$V_i$ will be called \textit{homogeneous} and we denote its degree by
$|v|=i\in \mathbb{Z}_2$. We will view $K$ as a superspace concentrated in degree 0.
Given superspaces $V$ and $W$, we view the direct sum $V\oplus W$
and the tensor product $V\otimes_K W$ as superspaces with $(V\oplus
W)_i = V_i\oplus W_i$, and $(V\otimes_K W)_i= V_0\otimes_K W_i\oplus
V_1\otimes_K W_{1-i}$ for $i\in\mathbb{Z}_2$. With this grading,
$V\otimes_K W$ is called the \textit{skew tensor product} of $V$ and
$W$ and is denoted by $V\widehat{\otimes}_KW$. Also, we make the
vector space $\Hom_K(V, W)$ of all $K$-linear maps from $V$ to $W$
into a superspace by declaring that $\Hom_K(V, W)_i$ consists of all
the $K$-linear maps $f: V \rightarrow W$ with $f(V_j)\subseteq
W_{i+j}$ for $i, j\in \mathbb{Z}_2$. Elements of
$\Hom_K(V, W)_0$ (resp.~$\Hom_K(V, W)_1$) will be referred to as
\textit{even (resp.~odd) linear maps}.\end{point}
\begin{point}{}* Recall that a \textit{superalgebra} $A$ is both a superspace
and an associative $K$-algebra with identity such that
$A_iA_j\subseteq A_{i+j}$ for $i, j \in \mathbb{Z}_2$. By a
superalgebra \textit{homomorphism} we mean an even linear map which is an
algebra homomorphism in the usual sense and write $A\cong B$ provided that the superalgebras
$A$ and $B$ are isomorphic.
Given two superalgebras $A$ and $B$, the \textit{skew tensor
product} $A\widehat{\otimes}_{K} B$ is again a superalgebra with
the induced grading and multiplication given by
\begin{align*}(a_1 {\otimes} b_1)(a_2 {\otimes} b_2) =
(-1)^{|b_1||a_2|}a_1a_2{\otimes} b_1b_2, \text{ for } a_i\in A \text{ and } b_i\in B.\end{align*}
Note that this and other such expressions make sense only for
homogeneous elements; they are extended to arbitrary elements by linearity.
Observe that the skew tensor product $A\widehat{\otimes}_{K} A\widehat{\otimes}_{K}\cdots\widehat{\otimes}_{K}A$ ($n$ factors) is well-defined and that the \textit{supertwist
map} $T_{A,B}: A\widehat{\otimes}_K B\rightarrow
B\widehat{\otimes}_K A$, $a\otimes b \mapsto (-1)^{|a||b|}b\otimes
a$, for $a\in A, b \in B$, is an isomorphism of superalgebras. \end{point}
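As an illustrative check of the sign conventions, one may verify that $T_{A,B}$ respects multiplication on odd elements: for odd $a_1, a_2\in A$ and odd $b_1, b_2\in B$,
\begin{align*}
T_{A,B}\big((a_1\otimes b_1)(a_2\otimes b_2)\big)&=(-1)^{|b_1||a_2|}\,T_{A,B}(a_1a_2\otimes b_1b_2)=-b_1b_2\otimes a_1a_2,\\
T_{A,B}(a_1\otimes b_1)\,T_{A,B}(a_2\otimes b_2)&=(-1)^{|a_1||b_1|+|a_2||b_2|}(b_1\otimes a_1)(b_2\otimes a_2)=-b_1b_2\otimes a_1a_2,
\end{align*}
since $a_1a_2$ and $b_1b_2$ are even; the general case is similar.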
\begin{point}{}*The main principle for superalgebras is the following \textit{rule
of signs} \cite[\S III.4]{good} or \cite[\S3.1.1]{manin}: if a formula of usual algebra contains
monomials in which two neighbouring terms, say $a$ and $b$, are interchanged, then in the corresponding
formula in superalgebra each such interchange is accompanied by multiplying the monomial by the
factor $(-1)^{|a||b|}$. Equivalently, each letter appearing
as an argument on the left-hand side (LHS) of the defining equation
carries a degree, and a factor $(-1)$ is introduced
on the right-hand side (RHS) each time a pair of letters on the LHS,
both of odd degree, appears in reverse order on the RHS. For
this rule to make sense it is essential that every letter on the
LHS appear exactly once on the RHS.
\end{point}
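For example, the multiplication rule for the skew tensor product above is an instance of this principle: in passing from $(a_1\otimes b_1)(a_2\otimes b_2)$ to $a_1a_2\otimes b_1b_2$ the letters $b_1$ and $a_2$ are interchanged, which produces the factor $(-1)^{|b_1||a_2|}$. Likewise, the rule of signs turns the ordinary commutator into the supercommutator $[a, b]:=ab-(-1)^{|a||b|}ba$ for homogeneous $a$ and $b$.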
\begin{point}{}*
Let $A$ be a superalgebra. By a
graded $A$-module $M$ we mean a $\mathbb{Z}_2$-graded left
$A$-module, that is, $M$ is both a superspace $M = M_0\oplus M_1$
and a left $A$-module such that $A_iM_j \subseteq M_{i+j}$ for
$i, j \in \mathbb{Z}_2$. We denote by $\widehat{A}$ the superalgebra with the same underlying
superspace as $A$ but new multiplication $\hat{a}\hat{b}=(-1)^{|a||b|}\widehat{ab}$.
If $M$ is a graded $A$-module, let $\widehat{M}$ denote the
$\widehat{A}$-module with $M$ as the underlying superspace and
operators defined as $\hat{a}\hat{m}\!=\!(-1)^{|a||m|}\widehat{am}$ for
$a\in\!A,m\in\!M$.
Unless otherwise explicitly stated, all our
modules will be $\mathbb{Z}_2$-graded left modules. We denote by
$\mathrm{Mod} A$ the category of all $A$-modules with morphisms
\begin{align*}\Hom_{\Mod A}(M, N):=\Hom_{\Mod A}(M,
N)_0+\Hom_{\Mod A}(M,N)_1,\end{align*}
where $\Hom_{\Mod A}(M, N)_i$, $i\in\mathbb{Z}_2$, consists of
all $K$-linear maps $f$ from $M$ to $N$
such that $f(M_j)\subseteq N_{i+j}$ and $f(am)=(-1)^{i|a|}af(m)$
for all $a\in A$, $m\in M$ and $j\in\mathbb{Z}_2$. The elements of $\Hom_{\Mod A}(M,
N)_0$ (resp.~$\Hom_{\Mod A}(M, N)_1$) are called \textit{even (resp.~odd) homomorphisms} from $M$ to $N$. Let $\mathrm{Gr} A$ be the category of all $A$-modules with even
homomorphisms. \end{point}
\section{Graded Morita equivalent theory}\label{sec: Morita}
\begin{definition}\label{shifts}
Let $A$ be a superalgebra. The \textit{parity change} (resp.~\textit{suspension}) functor $\pi$ (resp.~$\sigma$) from $\mathrm{Gr} A$ to itself is defined as follows: for
$M=M_0\oplus M_1$ in $\mathrm{Gr} A$, we define the graded
$A$-module $\pi(M)$ (resp.~$\sigma(M)$) by the conditions:
\begin{enumerate}
\item $\pi(M)$ and $\sigma(M)$ are superspaces with $\pi(M)_i=\sigma(M)_i=M_{i+1}$ for $i\in \mathbb{Z}_2$;
\item the module structure on $\pi(M)$ (resp.~$\sigma(M)$) is defined by $a\pi(m):=(-1)^{|a|}\pi(am)$ (resp. $a\sigma(m):=\sigma(am)$)
for homogeneous elements $a\in A$ and $m\in M$.
\end{enumerate}
\vspace{-2.3mm}
For a homomorphism $f$, we define $\pi(f)$ (resp.~$\sigma(f)$) to be the
same underlying linear map as $f$.
\end{definition}
\begin{remark} It follows by definition that $\pi^2=\mathrm{Id}_{\mathrm{Gr} A}$ and $\sigma^2=
\mathrm{Id}_{\mathrm{Gr}A}$, which means that the parity change functor $\pi$ and the suspension functor
$\sigma$ are shifts of the category $\mathrm{Gr}A$ in the sense of \cite[Definition 3.2]{zhang} and \cite[\S2]{s}. \end{remark}
From now on we assume that $S$ is either the parity change functor $\pi$ or the suspension functor
$\sigma$, unless otherwise explicitly stated. For an $A$-module $M$, following \cite[\S2]{s}, we
define the $S$-{\it twisted endomorphism superalgebra} of $M$ to be
the superalgebra
\begin{align*}\mathrm{End}_A^{S}(M):=\Hom_{\Gr A}(M,
M)\oplus\Hom_{\Gr A}(S(M), M),\end{align*}
and define the $S$-{\it twisted $\Hom$ functor} to be the
functor\begin{align*}\Hom^{S}_A(M, -): \Gr A\rightarrow \Gr
\End_A^{S}(M),\quad N\mapsto \Hom_{\Gr A}(M,
N)\oplus\Hom_{\Gr A}(S(M), N).\end{align*}
Denote by $\Mod^{S} A$ the category of all graded $A$-modules
with homomorphisms $\Hom_A^{S}(M, N)$.
\begin{lemma}[cf.~\cite{bass}, p.\!~123] Assume that $A$ is a finite dimensional superalgebra and that
$M$ and $N$ are $A$-modules. Then $\mathrm{Hom}_{\Mod^{\pi} A}(M, N)\!=\!\Hom_{\Mod^{\sigma}
\widehat{A}}(\widehat{M}, \widehat{N})$ and $\Mod^{\pi} A=\Mod A=\Mod^{\sigma}\widehat{A}$.\label{lbass}\end{lemma}
\begin{proof}First note that for $A$-modules $M$ and $N$, by definition,
\begin{align*}&\mathrm{Hom}_{\Mod^{\pi} A}(M, N)=\Hom_{A}^{\pi}(M,N)=\Hom_{\Gr A}(M,N)\oplus\Hom_{\Gr A}(\pi(M),N) \text{ and}\\
&\Hom_{\Mod^{\sigma}
\widehat{A}}(\widehat{M}, \widehat{N})=\Hom_{\widehat{A}}^{\sigma}(\widehat{M},\widehat{N})=\Hom_{\Gr \widehat{A}}(\widehat{M},\widehat{N})\oplus\Hom_{\Gr \widehat{A}}(\sigma(\widehat{M}),\widehat{N}). \end{align*}
Secondly by definition, we have
\begin{align*}\Hom_{\Gr \widehat{A}}(\widehat{M},\widehat{N})&=\{f: \widehat{M}\rightarrow\widehat{N}\mid f(\widehat{M}_i)\subseteq\widehat{N}_i \text{ and } f(\hat{a}\hat{m})=\hat{a}f(\hat{m}), \forall i\in\mathbb{Z}_2, a\in A, m\in M\!\}\\
&=\{f: M\rightarrow N\mid f(M_i)\subseteq N_i \text{ and }f(am)=af(m), \forall i\in\mathbb{Z}_2, a\in A, m\in M\!\}\\
&=\Hom_{\Gr A}(M,N);\\
\Hom_{\Gr \widehat{A}}(\sigma(\widehat{M}),\widehat{N})&=\!\!\{\!f\!: \sigma(\widehat{M})\!\rightarrow\!\widehat{N}\!\mid\! f(\sigma(\widehat{M})_i)\!\subseteq\!\widehat{N}_i\text{ and } f(\hat{a}\sigma(\hat{m}))\!\!=\!\!\hat{a}f(\sigma(\hat{m})), \forall i\!\in\!\mathbb{Z}_2,a\!\in\! A, m\!\in\! M\!\}\\
&=\!\!\{\!f\!: M\!\rightarrow\! N\!\mid\! f(M_i)\!\subseteq\! N_{1+i}\text{ and }f(am)\!=\!(-1)^{|a|}af(m), \forall i\!\in\!\mathbb{Z}_2,a\!\in\! A, m\!\in\! M\}\\
&=\Hom_{\Gr A}(\pi(M),N).
\end{align*}
Therefore $\mathrm{Hom}_{\Mod^{\pi} A}(M, N)\!=\!\Hom_{\Mod^{\sigma}
\widehat{A}}(\widehat{M}, \widehat{N})$.
Finally note that the objects of the categories $\Mod^{\pi} A$, $\Mod A$ and $\Mod^{\sigma}\widehat{A}$ are same.
By the first part of the lemma, we only need to show that $\Hom_{\Mod^{\pi} A}(M,N)=\Hom_{\Mod A}(M,N)$ for all $A$-modules $M$ and $N$; furthermore, it suffices to show that $\Hom_{\Gr A}(\pi(M),N)=\Hom_{\Mod A}(M,N)_{1}$ for all $A$-modules $M$ and $N$. Indeed for all $A$-modules $M$ and $N$, by definition,
\begin{align*}\Hom_{\Gr A}(\!\pi(M),N\!)&=\!\{\!f\!:\pi(\!M\!)\!\rightarrow\! N\!\mid\!f(\!\pi(M)_i\!)\!\subseteq\!N_i \text{ and } f(a\pi(m))\!=\!af(\pi(m)), \forall i\!\in\! \mathbb{Z}_2, a\!\in\! A, m\!\in\! M\!\}\\
&=\!\{\!f\!:M\!\rightarrow\! N\!\mid\!f(\!M_{i+1}\!)\!\subseteq\!N_{i} \text{ and } f(\!\pi(am)\!)\!=\!(\!-1\!)^{|a|}af(\!\pi(m)\!), \forall i\!\in\! \mathbb{Z}_2, a\!\in\! A, m\!\in\! M\!\}\\
&=\!\{\!f\!:M\!\rightarrow\! N\!\mid\!f(M_i)\!\subseteq\!N_{1+i} \text{ and } f(am)\!=\!(-1)^{|a|}af(m), \forall i\!\in\! \mathbb{Z}_2, a\!\in\! A, m\!\in\! M\!\}\\
&=\Hom_{\Mod A}(M,N)_{1},\end{align*}
where the second and third equalities follow from Definition~\ref{shifts}: $\pi(M)_i=M_{i+1}$, $a\pi(m)=(-1)^{|a|}\pi(am)$, and $f(\pi(am))=f(am)$ for homogeneous elements $a\in A$, $m\in M$ and homogeneous homomorphisms $f$.
This completes the proof.
\end{proof}
We say that a non-zero $A$-module $M$ is \textit{gr-indecomposable} if it is not the direct sum of two
non-zero modules, and that $M$ is
\textit{$S$-indecomposable} if it is not the direct sum of two
non-zero $A$-modules in the category $\mathrm{Mod}^{S} A$. A
superalgebra is called \textit{gr-divisional} (resp.~\textit{gr-local}) if
every nonzero homogeneous element of it is invertible (resp.~either
invertible or nilpotent).
\begin{lemma}[\cite{han}, \S2.2] Let $A$ be a finite dimensional
superalgebra over $K$ and $M\in\Gr A$. Then
\begin{enumerate}\item $M$ is $S$-indecomposable if and only if $M$ is gr-indecomposable.
\item $M$ is gr-indecomposable if and only if
$\mathrm{End}^S_{A}(M)$ is gr-local.
\item If $M$ is gr-simple then
$\mathrm{End}^S_{A}(M)$ is either a gr-divisional superalgebra
concentrated in degree zero, or a gr-divisional superalgebra containing an
odd involutive element.
\end{enumerate}
\end{lemma}
\begin{definition} Let $A$ and $B$ be two finite dimensional
superalgebras over $K$. We say that $A$ and $B$ are $S$-\textit{graded equivalent}, denoted
by $A\thickapprox_{S}B$, if the categories $\Gr A$ and $\Gr B$ are equivalent and $B\!\cong\!\End_A^{S}(P)$
for some finitely generated projective $A$-module $P$. \end{definition}
The relationship between $\pi$-graded equivalence and $\sigma$-graded equivalence is the following:
\begin{lemma}\label{rsp}Assume that finite dimensional superalgebras $A$ and $B$ are $\sigma$-graded equivalent.
Then $A$ and $\widehat{B}$ are $\pi$-graded equivalent.
\end{lemma}
\begin{proof}Assume that the categories $\Gr A$ and $\Gr B$ are equivalent and that there is a finitely
generated projective $A$-module $P$ such that $B\cong \End_A^{\sigma}(P)$. Observe that
superalgebras $B$ and $\widehat{\widehat{B}}$ are isomorphic and that $\Gr B$ and
$\Gr \widehat{B}$ are equivalent. Applying Lemma~\ref{lbass},
$\End_{\widehat{A}}^{\pi}(\widehat{P})=\End_{\widehat{\widehat{A}}}^{\sigma}(\widehat{\widehat{P}})=
\End_A^{\sigma}(P)$. The proof is completed.
\end{proof}
Let $A$ be a finite dimensional superalgebra. Then $A$ has a complete set of primitive orthogonal
idempotents, which we denote by $\{f_1, \dots, f_n\}$. Let $P_i=Af_i$ be the projective $A$-module corresponding to the primitive orthogonal idempotent $f_i$ of $A$. Then $\widehat{P}_i=\widehat{A}f_i$, and Lemma~\ref{lbass} and \cite[Corollary 3.10]{Da} imply that $\mathrm{Hom}_{\Mod^\pi A}(P_i,
P_j)=\Hom_{\Mod^\sigma A} (\widehat{P}_i,
\widehat{P}_j)=f_i\widehat{A}f_j $ and $\Hom_{\Mod^\sigma A} (P_i,
P_j)=f_iAf_j$.
We say
that two idempotents $f$ and $g$ of $A$ are $S$-\textit{equivalent}
if the $A$-modules $Af$ and $Ag$ are isomorphic in the category $\Mod^S A$. The following is a criterion for $S$-equivalence of idempotents.
\begin{lemma}\label{lequivalent}The idempotents $f$ and $g$ are
$S$-equivalent if and only if
there exist homogeneous elements $x\in fAg$ and $y\in gAf$ satisfying $xgy=f$ and
$yfx=g$.
\end{lemma}
\begin{proof}Note that $\Hom_{\Mod^{S}A}(Af, Ag)$ is either $fAg$ or $f\widehat{A}g$ and that $\Hom_{\Mod^{S}A}
(Ag, Af)$ is either $gAf$ or $g\widehat{A}f$. Assume that $Af$ and $Ag$ are isomorphic in the category $\Mod^{S}A$.
Then the isomorphism and its inverse are induced by homogeneous elements $x\in fAg$ and
$y\in gAf$, which satisfy $xgy=f$ and $yfx=g$.
Conversely, suppose that there exist homogeneous elements $x\in fAg$ and $y\in gAf$ satisfying $xgy=f$ and
$yfx=g$. Then $x$ and $y$ induce natural homomorphisms $Af\rightarrow Ag$, $f\mapsto xg$ and $Ag\rightarrow Af$, $g\mapsto yf$ respectively, which give the desired isomorphisms.
\end{proof}
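As a familiar instance concentrated in degree zero, take $A=M_2(K)$ with matrix units $E_{ij}$ and the orthogonal idempotents $f=E_{11}$ and $g=E_{22}$: the elements $x=E_{12}\in fAg$ and $y=E_{21}\in gAf$ satisfy $xgy=E_{12}E_{22}E_{21}=E_{11}=f$ and $yfx=E_{21}E_{11}E_{12}=E_{22}=g$, so $f$ and $g$ are $S$-equivalent, in accordance with the isomorphism $Af\cong Ag$ of column modules.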
The following definition is central to our investigation of Clifford superalgebras in this note.
\begin{prop-def}[cf.~Proposition 2.4, \cite{han}] Let $A$ be a finite dimensional superalgebra and $J(A)$
the graded Jacobson radical of $A$. We say that the superalgebra $A$ is \textbf{$\mathbf{S}$-graded basic}
if it satisfies the following equivalent conditions:
\begin{enumerate} \item $f_i$ are pairwise non-$S$-equivalent for $1\le
i\le n$.
\item In any decomposition of $A$ into indecomposable
projective $A$-modules $A = \oplus^n_{i=1}P_i$, the modules $P_i$ and $P_j$ are not isomorphic in $\Mod^{S}A$ for all $1\le i \neq j\le n$.
\item $A/J(A)$ is a direct sum of gr-divisional superalgebras.
\end{enumerate}
\label{psbasicdef}\end{prop-def}
\begin{remark}By \cite[Theorem 2.3]{han}, a finite dimensional superalgebra $A$
is $S$-graded equivalent to an $S$-graded basic superalgebra. More precisely, let
$\{e_i\mid 1\leq i\leq n\}$ be a complete set of non-$S$-equivalent orthogonal
primitive idempotents of $A$. It follows that $A$ is $S$-graded equivalent to
$\End^S_A(\oplus_{i=1}^nAe_i)$, which is $S$-graded basic.
\end{remark}
The following lemma is the key to the graded Morita classification of Clifford superalgebras.
\begin{lemma}\label{pskewtensor}
Suppose that the superalgebras $A$ and $A'$ are $S$-graded
equivalent to $B$ and $B'$ respectively. Then $A\widehat{\otimes}_K
A'$ is $S$-graded equivalent to $B\widehat{\otimes}_K B'$.
\end{lemma}
\begin{proof}Without loss of generality, we may assume that both $B$ and $B'$ are $S$-graded basic and that
$\{f_1, \dots, f_r\}$ and $\{f'_1, \dots,
f'_{r'}\}$ are the complete sets of orthogonal primitive idempotents of $B$ and $B'$ respectively.
Now let $\{e_1, \dots, e_n\}$ and $\{e'_1, \dots,
e'_{n'}\}$ be the complete sets of orthogonal primitive idempotents of
$A$ and $A'$ respectively. Then $\{f_1, \dots, f_r\}\subseteq\{e_1, \dots, e_n\}$ and $\{f'_1, \dots,
f'_{r'}\}\subseteq\{e'_1, \dots,e'_{n'}\}$. Observe that
$\{e_i\otimes e'_j\mid 1\le i\le n, 1\le j\le n' \}$ is a set of
orthogonal idempotents of $A\widehat{\otimes}_K A'$ such that $\sum_{i,j}e_i\otimes e'_j=1_{A}\otimes 1_{A'}$, and that
$\{f_i\otimes f'_j\mid 1\le i\le r, 1\le j\le r'\}$ is a set of
orthogonal idempotents of $B\widehat{\otimes}_K B'$ such that $\sum_{i,j}f_i\otimes f'_j=1_{B}\otimes 1_{B'}$.
Note that $\{f_i\otimes f'_j\mid 1\le i\le r, 1\le j\le r'\}\subseteq \{e_i\otimes e'_j\mid 1\le i\le n, 1\le j\le n' \}$, which implies that a complete set of non-$S$-equivalent
orthogonal primitive idempotents of $A\widehat{\otimes}_K A'$ can be obtained from
$\{f_i\otimes f'_j\mid 1\le i\le r, 1\le j\le r'\}$ by decomposing these orthogonal idempotents. As a consequence,
$A\widehat{\otimes}_K A'$ is $S$-graded equivalent to
$B\widehat{\otimes}_K B'$.
\end{proof}
We close this section with some remarks.
\begin{remark}\label{Rem:graded-basic}\begin{enumerate}\item The $\sigma$-graded equivalence
is exactly the so-called \textit{graded Morita equivalence} defined in \cite{zhang,s} for group-graded
algebras by ignoring the rule of signs in the case of superalgebras.
\item A finite dimensional superalgebra is always $\sigma$-graded equivalent, but in general not $\pi$-graded equivalent, to itself.
\item The rule of signs suggests that, for finite dimensional superalgebras,
$\pi$-graded equivalence deserves a better understanding than $\sigma$-graded equivalence; that is, for a finite dimensional superalgebra
$A$ the category $\Mod A$ should be studied exhaustively.
\item\label{item-pi-sigma} Lemmas~\ref{rsp} and \ref{lequivalent} show that the $S$-graded basic classification of finite dimensional superalgebras can be determined completely by its graded Morita equivalent classification, that is, by its $\sigma$-graded equivalent classification.
\end{enumerate}\end{remark}
\section{Graded Morita equivalent classification of Clifford superalgebras}\label{sec: Main}
From now on we write $\mathbb{R}$, $\mathbb{C}$
and $\mathbb{H}$ for the real numbers, the complex numbers and the quaternions
respectively, viewed as superalgebras over $\mathbb{R}$
concentrated in degree zero, and we say that a superalgebra is \textit{graded basic} if it is either
$\sigma$-graded basic or $\pi$-graded basic (cf.~Remark~\ref{Rem:graded-basic}(iv)). Note that $\mathbb{R}$, $\mathbb{C}$
and $\mathbb{H}$ are gr-divisional superalgebras; in particular, they are graded basic superalgebras according to Proposition-Definition~\ref{psbasicdef}.
\begin{point}{}*\label{pchevalley} Let $p$, $q$ and $r$ be non-negative integers.
Following Porteous \cite{porteous95}, we
denote by $\mathbb{R}^{p,
q, r}$ the real quadratic space $\mathbb{R}^{p+q+r}$ with the
quadratic form $\rho(x)=x_1^2+\cdots+ x_{p}^2-x_{p+1}^2-\cdots-x_{p+q}^2$.
The Clifford superalgebra $\mathbb{R}_{p, q, r}$ on
$\mathbb{R}^{p, q, r}$ is the real unital superalgebra generated by
odd generators $e_1, \dots, e_{p+q+r}$ subject to the
following relations:\begin{align*}& e_ie_j+e_je_i=0&& \text{for } 1\leq i\neq j\leq p+q+r;\\
& e_i^2=-e_{j+p}^2=1&&\text{for }1\leq i\leq p \text{ and }1\leq j\leq q;\\
&e_{p+q+k}^2=0&&\text{for } 1\le k\leq r.\end{align*}
Note that the Clifford superalgebra
$\mathbb{R}_{0, 0, r}$ is the real Grassmann superalgebra
$\bigwedge_{\mathbb{R}}(r)$ generated by the odd generators $e_{p+q+1}, \dots, e_{p+q+r}$. Observe that the only primitive idempotent of $\bigwedge_{\mathbb{R}}(r)$ is the unit $1$, since $\bigwedge_{\mathbb{R}}(r)/J(\bigwedge_{\mathbb{R}}(r))=\mathbb{R}$; hence $\bigwedge_{\mathbb{R}}(r)$
is graded basic for every integer $r\ge 1$ according to Proposition-Definition~\ref{psbasicdef}.
From now on we write $\mathbb{R}_{p, q}$ for the Clifford superalgebra
$\mathbb{R}_{p, q, 0}$. It is well known that there are superalgebra isomorphisms $\mathbb{R}_{p, q, r}
\cong \mathbb{R}_{p, q}\widehat{\otimes}_\mathbb{R}\bigwedge_{\mathbb{R}}(r)$ and $\mathbb{R}_{p,
q}\widehat{\otimes}_{\mathbb{R}} \mathbb{R}_{p', q'}\cong
\mathbb{R}_{p+p', q+q'}$ for all non-negative integers $p$, $q$, $r$, $p'$ and $q'$, see \cite{chev} or \cite[Proposition~1.6]{abs}.\end{point}
\begin{point}{}* Let $V=\mathbb{R}^{p+q+r}$ and $\alpha: V\rightarrow V$, $v\mapsto -v$. Then $\alpha$ induces an
automorphism of the (ungraded) Clifford algebra $\mathbf{C}\ell(V,\rho)=T(V)/\langle v\otimes v-\rho(v)\mid v\in V\rangle$,
where $T(V)$ is the tensor algebra of $V$. Since $\alpha^2=\mathrm{Id}$, there is a decomposition
\begin{align*}\mathbf{C}\ell(V,\rho)=\mathbf{C}\ell^0(V,\rho)\oplus \mathbf{C}\ell^1(V,\rho)\end{align*}
where $\mathbf{C}\ell^i(V,\rho)=\{\phi\in\mathbf{C}\ell(V,\rho)\mid\alpha(\phi)=(-1)^i\phi\}$ are the eigenspaces
of $\alpha$. Clearly, since $\alpha(\phi_1\phi_2)=\alpha(\phi_1)\alpha(\phi_2)$, we have that
\begin{align*}\mathbf{C}\ell^i(V,\rho)\mathbf{C}\ell^j(V,\rho)\subseteq \mathbf{C}\ell^{i+j}(V,\rho)\quad
\text{ for all }i,j\in \mathbb{Z}_2,\end{align*}
that is, $\mathbf{C}\ell(V,\rho)$ is a superalgebra. It is an observation of Atiyah, Bott and Shapiro \cite{abs}
that this $\mathbb{Z}_2$-grading plays an important role in the analysis and application of Clifford algebras.
\end{point}
\begin{remark}The two superalgebras $\mathbb{R}_{p, q, r}$ and $\mathbf{C}\ell(V,\rho)$ are naturally isomorphic.
\end{remark}
Denote by $\mathbb{D}_{+}$ (resp.~$\mathbb{D}_{-}$) the real superalgebra generated by an odd element $e_{+}$
(resp.~$e_{-}$) subject to the relation $e_{+}^2=1$ (resp.~$e^2_{-}=-1$). For a positive integer $n$, we denote by
$\mathbb{D}_{+}^{n}$ (resp.~$\mathbb{D}_{-}^{n}$) the superalgebra
$\mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}\cdots\widehat{\otimes}_{\mathbb{R}}
\mathbb{D}_{+}$ (resp.~$\mathbb{D}_{-}\widehat{\otimes}_{\mathbb{R}}\cdots\widehat{\otimes}_{\mathbb{R}}
\mathbb{D}_{-}$) ($n$ factors) and write $e_{+}^{i_1\dots i_n}$ (resp.~$e_{-}^{i_1\dots i_n}$) for the homogeneous element
$e_{+}^{i_1}\otimes\dots\otimes
e_{+}^{i_n}\in\mathbb{D}_{+}^{n}$ (resp.~$e_{-}^{i_1}\!\otimes\!\cdots\!\otimes\!
e_{-}^{i_n}\!\in\!\mathbb{D}_{-}^{n}$) where $i_j=0, 1$ for all $1\leq j\leq n$. Note that $\mathbb{D}_{+}^n$
and $\mathbb{D}_{-}^n$ are exactly $\mathbb{R}_{n,0}$ and $\mathbb{R}_{0,n}$ respectively.
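For example, for $n=2$ the odd generators $e_{+}^{10}=e_{+}\otimes 1$ and $e_{+}^{01}=1\otimes e_{+}$ of $\mathbb{D}_{+}^{2}$ satisfy
\begin{align*}
(e_{+}\otimes 1)^2=e_{+}^2\otimes 1=1\otimes 1,\quad (1\otimes e_{+})^2=1\otimes e_{+}^2=1\otimes 1,\quad
(e_{+}\otimes 1)(1\otimes e_{+})=e_{+}\otimes e_{+}=-(1\otimes e_{+})(e_{+}\otimes 1),
\end{align*}
which are exactly the defining relations of $\mathbb{R}_{2, 0}$.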
\begin{lemma}\label{dd}If $n=1, 2,3$ then $\mathbb{D}^n_{+}$ and $\mathbb{D}^n_{-}$ are
gr-divisional superalgebras.\end{lemma}
\begin{proof}First, it is trivial that $\mathbb{D}_{\pm}$ are gr-divisional superalgebras. Note that the even (resp.~odd) elements of $\mathbb{D}_{+}^2$ are of the form $a\otimes 1+be_{+}
\otimes e_{+}$ (resp.~$a\otimes e_{+}+be_{+}\otimes 1$), where $a$, $b\in \mathbb{R}$, and that for
all $a, b\in \mathbb{R}$, \begin{align*}(a\otimes 1+be_{+}\otimes e_{+})
(a\otimes 1-be_{+}\otimes e_{+})=(a^2+b^2)\otimes 1=
(a\otimes e_{+}+be_{+}\otimes 1)(a\otimes e_{+}+be_{+}\otimes
1).\end{align*}
Thus all non-zero homogeneous elements of $\mathbb{D}_{+}^2$ are invertible, that is,
$\mathbb{D}_{+}^2$ is gr-divisional.
Set $\mathbf{1}:=1\widehat{\otimes}1\widehat{\otimes}1$,
$\mathbf{i}_{+}:=e_{+}^{011}$,
$\mathbf{j}_{+}:=e_{+}^{101}$,
$\mathbf{k}_{+}:=e_{+}^{110}$ and
$\theta_{+}:=e_{+}^{111}$. Then it follows directly that $\theta_{+} \mathbf{i}_{+}=\mathbf{i}_{+}\theta_{+}$, $\theta_{+} \mathbf{j}_{+}=\mathbf{j}_{+}\theta_{+}$ and $\theta_{+} \mathbf{k}_{+}=\mathbf{k}_{+}\theta_{+}$. Furthermore, we have $\mathbb{R}\langle\mathbf{1}, \mathbf{i}_{+},
\mathbf{j}_{+}, \mathbf{k}_{+}\rangle\cong \mathbb{H}$ and
$\mathbb{D}_{+}^3\cong\mathbb{H}\oplus\mathbb{H}\theta_{+}$. Note that $\theta_{+}$ is an odd element of $\mathbb{D}^3_{+}$ satisfying $\theta^2_{+}=-1$ and that $\mathbb{H}$ is a gr-divisional superalgebra concentrated in degree zero. So $\mathbb{D}^3_{+}$ is gr-divisional.
Observe that the even (resp.~odd) elements of $\mathbb{D}_{-}^2$ are of the form $a\otimes 1+be_{-}\otimes e_{-}$ (resp.~$a\otimes e_{-}+be_{-}\otimes 1$), where $a$, $b\in \mathbb{R}$, and that for all $a, b\in \mathbb{R}$,
\begin{align*}(a\otimes 1+be_{-}\otimes e_{-})
(a\otimes 1-be_{-}\otimes e_{-})=(a^2+b^2)\otimes 1=-
(a\otimes e_{-}+be_{-}\otimes 1)(a\otimes e_{-}+be_{-}\otimes
1).\end{align*}
Thus all non-zero homogeneous elements of $\mathbb{D}_{-}^2$ are invertible, that is,
$\mathbb{D}_{-}^2$ is gr-divisional.
Set $\mathbf{i}_{-}:=e_{-}^{011}$,
$\mathbf{j}_{-}:=e_{-}^{101}$,
$\mathbf{k}_{-}:=e_{-}^{110}$, and
$\theta_{-}:=e_{-}^{111}$. Then it follows directly that $\theta_{-} \mathbf{i}_{-}=\mathbf{i}_{-}\theta_{-}$, $\theta_{-} \mathbf{j}_{-}=\mathbf{j}_{-}\theta_{-}$, and $\theta_{-}\mathbf{k}_{-}=\mathbf{k}_{-}\theta_{-}$. Moreover, we have $\mathbb{R}\langle\mathbf{1}, \mathbf{i}_{-},
\mathbf{j}_{-}, \mathbf{k}_{-}\rangle\cong \mathbb{H}$ and
$\mathbb{D}_{-}^3\cong\mathbb{H}\oplus\mathbb{H}\theta_{-}$. Note that $\theta_{-}$ is an odd element of $\mathbb{D}^3_{-}$ satisfying $\theta^2_{-}=1$ and that $\mathbb{H}$ is a gr-divisional superalgebra concentrated in degree zero. Thus $\mathbb{D}^3_{-}$ is gr-divisional.
\end{proof}
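The quaternion relations used above follow from direct computations with the sign rule; for instance, in $\mathbb{D}_{+}^{3}$,
\begin{align*}
\mathbf{i}_{+}^2=(1\otimes e_{+}\otimes e_{+})^2=-1\otimes e_{+}^2\otimes e_{+}^2=-\mathbf{1}
\quad\text{and}\quad
\mathbf{i}_{+}\mathbf{j}_{+}=(1\otimes e_{+}\otimes e_{+})(e_{+}\otimes 1\otimes e_{+})=e_{+}\otimes e_{+}\otimes e_{+}^2=\mathbf{k}_{+},
\end{align*}
the signs arising each time an odd letter of the first factor moves past an odd letter of the second.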
\begin{lemma}\label{dc} $\mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}\mathbb{D}_{-}$
is graded Morita equivalent to the superalgebra
$\mathbb{R}$.\end{lemma}
\begin{proof}Set $A=\mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}\mathbb{D}_{-}$. By Proposition-Definition~\ref{psbasicdef},
it suffices to determine the non-$\sigma$-equivalent idempotents of $A$. Let
$f_{\pm}=\frac{1}{2}(1\otimes 1\pm e_{+}\otimes e_{-})$. Then $f_{\pm}^2=f_{\pm}$, $f_{+}+f_{-}=1\otimes 1$ and $f_{+}f_{-}=0=f_{-}f_{+}$, which implies that
$\{f_{+}, f_{-}\}$ is the complete set of orthogonal primitive
idempotents of $A$ since $\dim_{\mathbb{R}}A_{0}=2$. It is straightforward to show that
\begin{align*}f_{+}Af_{-}=\{r(e_{+}\otimes1-1\otimes e_{-})\mid r\in\mathbb{R}\}\quad\text{and}\quad
f_{-}Af_{+}=\{r(e_{+}\otimes1+1\otimes e_{-})\mid r\in\mathbb{R}\}.
\end{align*}
Let $x=\frac{1}{2}(e_{+}\otimes1-1\otimes e_{-})$ and $y=\frac{1}{2}(e_{+}\otimes1+1\otimes e_{-})$. Then $xf_{-}y=f_{+}$ and $yf_{+}x=f_{-}$.
Applying Lemma~\ref{lequivalent}, $f_+$ and $f_-$ are
$\sigma$-equivalent orthogonal primitive idempotents of $A$. As a consequence, Proposition-Definition~\ref{psbasicdef} implies that
$A$ is graded Morita equivalent to
$f_{+}Af_{+}=\mathbb{R}f_{+}\cong \mathbb{R}$.
\end{proof}
\begin{lemma}\label{dddd}$\mathbb{D}_{\pm}^4$ are graded Morita
equivalent to the superalgebra $\mathbb{H}$.
\end{lemma}
\begin{proof}By the proof of Lemma~\ref{dd}, the natural graded map $e_{-}\mapsto \theta_{+}$ gives an isomorphism between $\mathbb{D}_{-}{\widehat{\otimes}}_{\mathbb{R}}\mathbb{H}$ and $\mathbb{D}_{+}^3$.
Therefore, using Lemmas~\ref{pskewtensor} and \ref{dc}, $\mathbb{D}_{+}^4=\mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}
\mathbb{D}_{+}^3\cong \mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}\mathbb{D}_{-}\widehat{\otimes}_{\mathbb{R}}\mathbb{H}$
is graded Morita equivalent to the superalgebra $\mathbb{H}$. Similarly,
noticing that $\mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}\mathbb{H}\cong \mathbb{D}_{-}^3$,
we can show that $\mathbb{D}_{-}^4$ is graded Morita equivalent to the superalgebra $\mathbb{H}$.\end{proof}
\begin{remark}\begin{enumerate}\item Lemmas~\ref{dc} and \ref{dddd} show that the skew tensor product
of $\sigma$-graded basic superalgebras may not be $\sigma$-graded basic. In particular,
the skew tensor product of gr-divisional superalgebras may not be gr-divisional.
\item Lemmas~\ref{rsp}, \ref{dc} and \ref{dddd} imply that $\mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}\mathbb{D}_{-}$ and $\mathbb{D}_{\pm}^4$ are also $\pi$-graded equivalent to $\mathbb{R}$ and $\mathbb{H}$ respectively since $\widehat{\mathbb{D}_{\pm}}=\mathbb{D}_{\mp}$.
\end{enumerate}
\end{remark}
Now we can obtain the graded Morita classifications for all real Clifford superalgebras.
\begin{theorem}\label{tclifford} Assume that $p$, $q$ and $r$ are non-negative integers. Then \begin{enumerate}\item
$\mathbb{R}_{p, q, r}\,\approx_{\sigma}\left\{\aligned
&\bigwedge\!_{\mathbb{R}}(r), &&\text{if }p-q\equiv0 (\mathrm{mod\,} 8);\\
&\mathbb{H}\widehat{\otimes}_{\mathbb{R}}\bigwedge\!_{\mathbb{R}}(r),&&\text{if } p-q\equiv 4 (\mathrm{mod\,}8);\\
&\mathbb{D}_{+}^i\widehat{\otimes}_{\mathbb{R}}\bigwedge\!_{\mathbb{R}}(r), &&\text{if }p-q\equiv i(\mathrm{mod\,}8)\text{ for } 1\leq i\leq 3;\\
&\mathbb{D}_-^i\widehat{\otimes}_{\mathbb{R}}\bigwedge\!_{\mathbb{R}}(r),&&\text{if }p-q\equiv 8-i (\mathrm{mod\,}8)
\text{ for } 1\le i \leq 3.
\endaligned\right.$
\item $\mathbb{R}_{p, q, r}\,\approx_{\pi}\left\{\aligned
&\bigwedge\!_{\mathbb{R}}(r), &&\text{if }p-q\equiv0 (\mathrm{mod\,} 8);\\
&\mathbb{H}\widehat{\otimes}_{\mathbb{R}}\bigwedge\!_{\mathbb{R}}(r),&&\text{if } p-q\equiv 4 (\mathrm{mod\,}8);\\
&\mathbb{D}_{+}^i\widehat{\otimes}_{\mathbb{R}}\bigwedge\!_{\mathbb{R}}(r), &&\text{if }p-q\equiv 8-i(\mathrm{mod\,}8)\text{ for } 1\leq i\leq 3;\\
&\mathbb{D}_-^i\widehat{\otimes}_{\mathbb{R}}\bigwedge\!_{\mathbb{R}}(r),&&\text{if }p-q\equiv i (\mathrm{mod\,}8)
\text{ for } 1\le i \leq 3.
\endaligned\right.$
\end{enumerate}
$($Note that these superalgebras are graded basic superalgebras.$)$
\end{theorem}
\begin{proof}(i) Note that $\mathbb{D}_+\cong \mathbb{R}_{1, 0}$ and $\mathbb{D}_{-}\cong \mathbb{R}_{0, 1}$.
Using \cite[Proposition~1.6]{abs}, Lemmas~\ref{pskewtensor} and \ref{dc},
$$\mathbb{R}_{p,q}=\mathbb{D}_{+}^p\widehat{\otimes}_{\mathbb{R}}\mathbb{D}_{-}^q
\,{\stackrel{\sigma}\approx}\left\{\aligned\mathbb{D}^{p-q}_+,&\, \text{\quad\quad if } p\ge q;\\
\mathbb{D}_-^{q-p},&\,\text{\quad\quad otherwise}.\endaligned\right.
$$
\indent Now by \cite[\S 5.6]{porteous95}, $\mathbb{H}\widehat{\otimes}_{\mathbb{R}}\mathbb{H}$ is isomorphic to the $4\times 4$ real matrix superalgebra concentrated
on degree zero, which is (graded) Morita equivalent to $\mathbb{R}$. Consequently, by Lemma~\ref{dddd}, $\mathbb{D}_{\pm}^8$ is graded Morita equivalent to $\mathbb{R}$. Therefore Lemmas~\ref{pskewtensor} and
\ref{dddd} imply that
\begin{enumerate}\item if $p\geq q$ then $\mathbb{D}^{p-q}_+\,\approx_{\sigma}\left\{\aligned\mathbb{D}_{+}^i,\quad&\,\text {\quad\quad if } p-q\equiv
i(\text{mod\,}8), i=0, 1, 2, 3,\\\mathbb{D}_{+}^i\widehat{\otimes}_{\mathbb{R}}\mathbb{H},&\,
\text {\quad\quad if } p-q\equiv 4+i(\text{mod\,}8), i=0, 1, 2, 3;
\endaligned\right. $
\vspace{1\jot}
\item if $p<q$ then $\mathbb{D}^{q-p}_-\,\approx_{\sigma}\left\{\aligned\mathbb{D}_{-}^i,\quad&\,\text {\quad\quad if } q-p\equiv
i(\text{mod\,}8), i=0, 1, 2, 3,\\\mathbb{D}_{-}^i\widehat{\otimes}_{\mathbb{R}}\mathbb{H},&\,
\text {\quad\quad if } q-p\equiv 4+i(\mathrm{mod\,}8), i=0, 1, 2, 3;
\endaligned\right. $\end{enumerate}
By the proof of Lemma~\ref{dddd}, $\mathbb{D}_{+}\widehat{\otimes}_{\mathbb{R}}\mathbb{H}\cong
\mathbb{D}_{-}^3$ and $\mathbb{D}_{-}\widehat{\otimes}_{\mathbb{R}}\mathbb{H}\cong
\mathbb{D}_{+}^3$. Applying Lemmas~\ref{pskewtensor} and \ref{dc} again, we have $\mathbb{D}_{+}^{i}\widehat{\otimes}_{\mathbb{R}}\mathbb{H}\approx_{\sigma}\mathbb{D}_{-}^{4-i}$ and $\mathbb{D}_{-}^{i}\widehat{\otimes}_{\mathbb{R}}\mathbb{H}\approx_{\sigma}\mathbb{D}_{+}^{4-i}$ for $1\le i\le 3$.
Notice that $q-p\equiv 4+i (\mathrm{mod\,}8)$ if and only if $p-q\equiv 4-i (\mathrm{mod\,}8)$.
As a consequence, the proof of (i) is completed by \cite[Proposition~1.6]{abs}.
Note that $\widehat{\mathbb{R}_{p,q,r}}\cong \mathbb{R}_{q,p,r}$. Hence (ii) follows directly from Lemma~\ref{rsp} and (i). \end{proof}
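The eight-fold pattern in (i) can be made mechanical. As a sanity check, here is a small Python sketch (ours, not from the text) computing the graded Morita class of $\mathbb{R}_{p,q}$ from $p-q$ modulo $8$, following the reduction steps of the proof: reduce $\mathbb{D}_{+}^{p-q}$ (or $\mathbb{D}_{-}^{q-p}$) via $\mathbb{D}_{\pm}^{8}\approx_{\sigma}\mathbb{R}$, $\mathbb{D}_{\pm}^{4}\approx_{\sigma}\mathbb{H}$ and $\mathbb{D}_{\pm}^{4+i}\approx_{\sigma}\mathbb{D}_{\mp}^{4-i}$.

```python
def real_clifford_class(p, q):
    """Graded Morita class of R_{p,q}, read off from d = p - q (mod 8) as in
    the proof: d = 0 -> R, d in {1,2,3} -> D_+^d, d = 4 -> H,
    d in {5,6,7} -> D_-^{8-d}  (since D_+^{4+i} is equivalent to D_-^{4-i})."""
    d = (p - q) % 8
    if d == 0:
        return "R"
    if d == 4:
        return "H"
    if d in (1, 2, 3):
        return f"D+^{d}"
    return f"D-^{8 - d}"
```

In particular the class only depends on $p-q \pmod 8$, so for instance `real_clifford_class(3, 2)` agrees with `real_clifford_class(1, 0)`.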
\begin{remark}Note that the $\mathbb{Z}_2$-graded Hochschild and cyclic (co)homology $H_*(A)$ of a finite dimensional superalgebra $A$ is graded Morita equivalent invariant \cite{zhao}. Theorem~\ref{tclifford} implies that $$H_{*}(\mathbb{R}_{p,q})=\left\{\aligned
&0,&&\text{if }*\geq 1;\\
&\mathbb{R},&&\text{if } *=0,
\endaligned\right.$$
since $\mathbb{R}$, $\mathbb{H}$ and $\mathbb{D}_{\pm}^i$ for all $1\leq i\leq 3$ are separable superalgebras (see
\cite[Chap.~IV-V]{bass}). This gives a slight generalization of \cite[Proposition 1]{kassel}, since the quadratic form $\rho(x)=\sum_{i=1}^px_i^2-\sum_{j=1}^qx_{p+j}^2$ on $\mathbb{R}^{p+q}$ is indefinite.
\end{remark}
\begin{point}{}* Let $p$ and $q$ be non-negative integers. We denote by
$\mathbb{C}^{p, q}$ the complex quadratic space $\mathbb{C}^{p+q}$
with the quadratic form $x_1^2+\cdots+ x_{p}^2$. The
Clifford superalgebra $\mathbb{C}_{p, q}$ on $\mathbb{C}^{p, q}$ is the complex unital
superalgebra generated by odd generators $e_1, \dots, e_{p+q}$ subject to the following relations:
\begin{align*}& e_ie_j + e_je_i = 0 &&\text{for } 1\le i\neq j\le p+q;\\
& e_i^2=1&&\text{for }1\leq i\leq p;\\
&e_{p+j}^2=0&&\text{for } 1\leq j\leq q.\end{align*}
Note that the Clifford superalgebra $\mathbb{C}_{0, q}$ is the complex
Grassmann superalgebra $\bigwedge_{\mathbb{C}}(q)$ generated by odd generators $e_{1}$, $\dots$, $e_{q}$.
Observe that the only orthogonal primitive idempotent of $\bigwedge_{\mathbb{C}}(q)$ is the unit $1$,
which implies that $\bigwedge_{\mathbb{C}}(q)$ is a graded basic superalgebra for every positive integer $q$.
We denote by $D=\mathbb{C}\oplus \mathbb{C}\varepsilon$ the complex
superalgebra generated by an odd generator $\varepsilon$ subject to the relation
$\varepsilon^2=1$. For a positive integer $n$, let $D^n$ be the superalgebra
$D\widehat{\otimes}_{\mathbb{C}}\cdots\widehat{\otimes}_{\mathbb{C}}
D$ ($n$ factors). Then $D^p\cong \mathbb{C}_{p,0}$ and $\mathbb{C}_{p, q}\cong
D^p\widehat{\otimes}_{\mathbb{C}}\bigwedge_{\mathbb{C}}(q)$ for all positive integers $p$ and $q$, see \cite[Proposition~1.6]{abs}.\end{point}
\begin{lemma}\label{DD}
$D\widehat{\otimes}_{\mathbb{C}}D$ is graded Morita equivalent to
the superalgebra $\mathbb{C}$.
\end{lemma}
\begin{proof}Set $A=D\widehat{\otimes}_{\mathbb{C}}D$.
Let $\varepsilon_{\pm}=\frac{1}{2}(1\otimes 1\pm
\mathbbm{i}\varepsilon\otimes\varepsilon)$, where $\mathbbm{i}$ is the
imaginary unit of $\mathbb{C}$. Then $\varepsilon_{\pm}^2=\varepsilon_{\pm}$, $\varepsilon_{+}\varepsilon_{-}=0=\varepsilon_{-}\varepsilon_{+}$ and $\varepsilon_{+}+\varepsilon_{-}=1\otimes 1$, which implies that $\{\varepsilon_{+}, \varepsilon_{-}\}$ is a
complete set of orthogonal primitive idempotents of $A$ since $\dim_{\mathbb{C}}A_{0}=2$. It is straightforward to show that \begin{align*}\varepsilon_{+}A\varepsilon_{-}=\{c(\varepsilon\otimes1-\mathbbm{i}\otimes \varepsilon)\mid c\in\mathbb{C}\}\quad\text{and}\quad
\varepsilon_{-}A\varepsilon_{+}=\{c(\varepsilon\otimes1+\mathbbm{i}\otimes \varepsilon)\mid c\in\mathbb{C}\}.
\end{align*}
Let $x=\frac{1}{2}(\varepsilon\otimes1-\mathbbm{i}\otimes \varepsilon)$ and $y=\frac{1}{2}(\varepsilon\otimes1+\mathbbm{i}\otimes \varepsilon)$. Then $x\varepsilon_{-}y=\varepsilon_{+}$ and $y\varepsilon_{+}x=\varepsilon_{-}$.
Applying Lemma~\ref{lequivalent}, $\varepsilon_+$ and $\varepsilon_-$ are
$\sigma$-equivalent orthogonal primitive idempotents of $A$. Thus Proposition-Definition~\ref{psbasicdef} implies
that $A$ is graded Morita equivalent to $\varepsilon_{+}A\varepsilon_{+}=\mathbb{C}\varepsilon_{+}\cong\mathbb{C}$.
\end{proof}
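The idempotent computations above are finite-dimensional linear algebra and can be verified numerically. Below is a minimal Python sketch (class and variable names are ours) of $A=D\widehat{\otimes}_{\mathbb{C}}D$ in the basis $\varepsilon^{a}\otimes\varepsilon^{b}$, with the Koszul sign rule $(\varepsilon^{a}\otimes\varepsilon^{b})(\varepsilon^{c}\otimes\varepsilon^{d})=(-1)^{bc}\,\varepsilon^{a+c}\otimes\varepsilon^{b+d}$ and $\varepsilon^{2}=1$:

```python
class SuperD2:
    """Element of A = D (graded tensor over C) D, coefficients on eps^a (x) eps^b."""
    def __init__(self, coeffs=None):
        self.c = {k: v for k, v in (coeffs or {}).items() if v != 0}

    def __add__(self, other):
        out = dict(self.c)
        for k, v in other.c.items():
            out[k] = out.get(k, 0) + v
        return SuperD2(out)

    def __rmul__(self, scalar):  # scalar multiplication by a complex number
        return SuperD2({k: scalar * v for k, v in self.c.items()})

    def __mul__(self, other):
        out = {}
        for (a, b), v in self.c.items():
            for (c, d), w in other.c.items():
                key = ((a + c) % 2, (b + d) % 2)     # eps^2 = 1
                out[key] = out.get(key, 0) + (-1) ** (b * c) * v * w  # Koszul sign
        return SuperD2(out)

    def __eq__(self, other):
        keys = set(self.c) | set(other.c)
        return all(abs(self.c.get(k, 0) - other.c.get(k, 0)) < 1e-12 for k in keys)

one = SuperD2({(0, 0): 1})   # 1 (x) 1
u   = SuperD2({(1, 0): 1})   # eps (x) 1
w   = SuperD2({(0, 1): 1})   # 1 (x) eps
z   = SuperD2({(1, 1): 1})   # eps (x) eps; note z*z = -(1 (x) 1)

eps_plus  = 0.5 * (one + 1j * z)
eps_minus = 0.5 * (one + (-1j) * z)
x = 0.5 * (u + (-1j) * w)    # spans eps_+ A eps_-
y = 0.5 * (u + 1j * w)       # spans eps_- A eps_+
```

The sign $(-1)^{bc}$ is exactly what makes $(\varepsilon\otimes\varepsilon)^2=-1\otimes1$, so that $\mathbbm{i}\,\varepsilon\otimes\varepsilon$ squares to $1\otimes1$ and $\varepsilon_{\pm}$ are idempotent.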
We now give the graded Morita equivalence classification of all complex Clifford superalgebras.
\begin{theorem}\label{thm-complex}Assume that $p$ and $q$ are non-negative integers. Then $\mathbb{C}_{p,q}$ is graded Morita equivalent to the graded basic superalgebra $\mathbb{C}_{p(\mathrm{mod\,}2),q}$.
\end{theorem}
\begin{proof}Note that $\mathbb{C}_{p, q}\cong
\mathbb{C}_{p, 0}\widehat{\otimes}_{\mathbb{C}}\bigwedge_{\mathbb{C}}(q)$ and $\mathbb{C}_{1, 0}\cong D$.
It follows immediately from Lemmas~\ref{DD} and \ref{pskewtensor} that $\mathbb{C}_{p,q}\approx_{\sigma}\mathbb{C}_{p(\mathrm{mod\,}2),q}$. On the other hand, since $\widehat{\mathbb{C}_{p,q}}\cong \mathbb{C}_{p,q}$, we also have $\mathbb{C}_{p,q}\approx_{\pi}\mathbb{C}_{p(\mathrm{mod\,}2),q}$. This completes the proof.
\end{proof}
\section{The Grothendieck groups of Clifford superalgebras}\label{application}
\begin{point}{}* Most of the important applications of Clifford superalgebras come through a detailed understanding of their ($\mathbb{Z}_2$-graded) representations. This understanding follows rather easily from the graded Morita equivalence classification given by Theorems~\ref{tclifford} and \ref{thm-complex}. In this section we restrict our attention to the Clifford superalgebras $\mathbb{R}_{p,q}$ and $\mathbb{C}_{p,0}$ for all positive integers $p$ and $q$, in order to simplify the presentation; these are exactly the Clifford (super)algebras that play a fundamental role in \cite{abs,Lawson}.\end{point}
\begin{point}{}*We begin with some notation. For positive integers $p$ and $q$, let $\mathrm{Irr}_{p,q}$ be the set of
all nonequivalent irreducible graded $\mathbb{R}$-modules for $\mathbb{R}_{p,q}$ in the category $\Gr \mathbb{R}_{p,q}$.
Similarly, let $\mathrm{Irr}_{p}^{\mathbb{C}}$ be the set of all nonequivalent irreducible graded $\mathbb{C}$-modules
for $\mathbb{C}_{p,0}$ in the category $\Gr \mathbb{C}_{p,0}$.
The objects of interest here are the following. Let $K_{p,q}$ be the Grothendieck group of
the category of finite dimensional graded real representations of $\mathbb{R}_{p,q}$, and let
$K_{p}^{\mathbb{C}}$ be the Grothendieck group of the category of
finite dimensional graded complex representations of $\mathbb{C}_{p,0}$. Then $K_{p,q}$ and $K_{p}^{\mathbb{C}}$
are the free abelian groups generated by $\mathrm{Irr}_{p,q}$ and $\mathrm{Irr}_{p}^{\mathbb{C}}$ respectively.
\end{point}
From the classification of Theorems~\ref{tclifford} and \ref{thm-complex} we immediately conclude the following:
\begin{lemma}\label{lem-v_p,q}Let $v_{p,q}$ and $v_p^{\mathbb{C}}$ denote the cardinality of $\mathrm{Irr}_{p,q}$
and of $\mathrm{Irr}_{p}^{\mathbb{C}}$ respectively. Then $$v_{p,q}=\left\{\begin{array}{ll}
2, & \text{ if }p-q\equiv 0 (\mathrm{mod\,}4) \\
1, & \hbox{otherwise}\end{array}\right. \quad \text{ and }\quad
v_{p}^{\mathbb{C}}=\left\{\begin{array}{ll}2, & \text{ if }p \text{ is even}\\
1, &\text{ if }p \text{ is odd}.
\end{array}
\right.$$\end{lemma}
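The counts in Lemma~\ref{lem-v_p,q} come from a simple dichotomy: a graded division superalgebra concentrated in degree zero ($\mathbb{R}$, $\mathbb{H}$ or $\mathbb{C}$) has two irreducible graded modules, namely itself and its parity shift, whereas one with an invertible odd element ($\mathbb{D}_{\pm}^{i}$, $\mathbb{C}_{1,0}$) has only one. A Python sketch of the resulting counts (function names are ours):

```python
def v_real(p, q):
    """Number of irreducible graded modules of R_{p,q}: two exactly when the
    basic representative is purely even (R or H), i.e. p - q = 0 or 4 mod 8,
    equivalently p - q = 0 mod 4."""
    return 2 if (p - q) % 4 == 0 else 1

def v_complex(p):
    """C_{p,0} is equivalent to C (purely even) for p even, and to
    C_{1,0} = D (invertible odd generator) for p odd."""
    return 2 if p % 2 == 0 else 1

def grothendieck_rank_real(p, q):
    """K_{p,q} is free abelian on Irr_{p,q}, hence has rank v_{p,q}."""
    return v_real(p, q)
```

This reproduces the ranks $\mathbb{Z}\oplus\mathbb{Z}$ (for $p-q\equiv 0,4\ \mathrm{mod}\ 8$) and $\mathbb{Z}$ (otherwise) in the table below.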
Now we can obtain the main result of this section.
\begin{theorem}\label{thm: Grothendieck}Let $p$ and $q$ be positive integers.
\begin{enumerate}\item The elements $\mathrm{Irr}_{p,q}$, $v_{p,q}$ and $K_{p,q}$ are as given in the following table:
\vspace{1.5\jot}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$p-q(\mathrm{mod\,}8)$ & $0$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$&$7$ \\ \hline
$\mathbb{R}_{p,q}/\!\!\approx$& $\mathbb{R}$ & $\mathbb{R}_{1,0}$ & $\mathbb{R}_{2,0}$ & $\mathbb{R}_{3,0}$ & $\mathbb{H}$ & $\mathbb{R}_{0,3}$ & $\mathbb{R}_{0,2}$ &$\mathbb{R}_{0,1}$\\
$\mathrm{Irr}_{p,q}/\!\!\approx$ & $\{\mathbb{R}, \sigma(\mathbb{R})\}$ & $\{\mathbb{R}_{1,0}\}$ & $\{\mathbb{R}_{2,0}\}$ & $\{\mathbb{R}_{3,0}\}$ &$\{\mathbb{H}, \sigma(\mathbb{H})\}$ & $\{\mathbb{R}_{0,3}\}$ & $\{\mathbb{R}_{0,2}\}$&$\{\mathbb{R}_{0,1}\}$ \\
$v_{p,q}$ & $2$ & $1$ & $1$ & $1$ & $2$ & $1$ & $1$&$1$ \\
$K_{p,q}$ & $\mathbb{Z}\oplus\mathbb{Z}$ & $\mathbb{Z}$ & $\mathbb{Z}$ & $\mathbb{Z}$ &$\mathbb{Z}\oplus \mathbb{Z}$ & $\mathbb{Z}$ & $\mathbb{Z}$&$\mathbb{Z}$ \\
\hline
\end{tabular}
\vspace{1\jot}
\item $v_{p}^{\mathbb{C}}$, $\mathrm{Irr}_{p}^{\mathbb{C}}$ and $K_{p}^{\mathbb{C}}$ are given by the following table:
\vspace{1.5\jot}\begin{tabular}{|c|c|c|c|c|}
\hline
$p(\mathrm{mod\,}2)$ & $\mathbb{C}_p/\!\!\approx$& $\mathrm{Irr}_{p}^{\mathbb{C}}/\!\!\approx$&$v_{p}^{\mathbb{C}}$&$K_{p}^{\mathbb{C}}$ \\ \hline
0 & $\mathbb{C}$ & $\{\mathbb{C}, \sigma(\mathbb{C})\}$&$2$&$\mathbb{Z}\oplus\mathbb{Z}$ \\
1 & $\mathbb{C}_{1,0}$ & $\{\mathbb{C}_{1,0}\}$&$1$&$\mathbb{Z}$ \\
\hline
\end{tabular}
\end{enumerate}
\end{theorem}
\begin{proof}Note that the graded modules of a gr-divisional superalgebra with trivial odd part are direct sums of copies of the superalgebra itself and of its parity shift. The theorem is then a direct consequence of Theorems~\ref{tclifford} and \ref{thm-complex} and Lemma~\ref{lem-v_p,q}.\end{proof}
We remark in closing that Atiyah, Bott and Shapiro obtained Theorem~\ref{thm: Grothendieck}
by using the periodicity theorem for Clifford superalgebras \cite[\S~5]{abs}; this result is
important for understanding the Atiyah-Bott-Shapiro isomorphisms \cite[Theorem~11.5]{abs}. See \cite{abs} and \cite{Lawson} for more details.
\section*{Acknowledgements}
This note develops in part from a portion of \cite{zhaophd}.
The author is very grateful to Professors Yang Han and Yingbo Zhang for their invaluable help.
The author would like to thank the referees for their comments and suggestions, which contributed to improvements in the presentation of the results.
\bibliographystyle{amsplain}
% arXiv:1301.5882: Geometric homology revisited
\section{Introduction}
Given a cohomology theory $h^{\bullet}$, there is a well-known abstract way to define the dual homology theory $h_{\bullet}$, using the theory of spectra. In particular, if $h^{\bullet}$ is representable via a spectrum $E = \{E_{n}, e_{n}, \varepsilon_{n}\}_{n \in \mathbb{Z}}$, for $e_{n}$ the marked point of $E_{n}$ and $\varepsilon_{n}: \Sigma E_{n} \rightarrow E_{n+1}$ the structure map, one can define on a space with marked point $(X, x_{0})$ \cite{Rudyak}:
\[h_{n}(X, x_{0}) := \pi_{n}(E \wedge X).
\]
In \cite{Jakob} the author provides a more geometric construction of $h_{\bullet}$, using a generalization of the bordism groups. In particular, he shows that, for a given pair $(X,A)$, a generator of $h_{n}(X,A)$ can be represented by a triple $(M, \alpha, f)$, where $M$ is a compact $h^{\bullet}$-manifold with boundary of dimension $n+q$, $\alpha \in h^{q}(M)$ and $f: (M, \partial M) \rightarrow (X,A)$ is a continuous function. On such triples one must impose a suitable equivalence relation, which is defined via the natural generalization of the bordism equivalence relation and via the notion of vector bundle modification. In this paper we provide a variant of that construction, which seems to be more natural. In particular, we replace the notion of vector bundle modification with the Gysin map, the latter being the natural push-forward in cohomology. The vector bundle modification is just a particular case, which holds when the underlying map is a section of a sphere bundle. We prove that the two constructions are equivalent, since there is a natural isomorphism between the geometric homology groups defined in \cite{Jakob} and their variant defined in the present paper.
The paper is organized as follows. In section \ref{Preliminaries} we recall the definition of the Gysin map, even for manifolds with boundary, and the geometric construction of the homology groups provided in \cite{Jakob}. In section \ref{DualRevisited} we introduce the variant of the geometric construction we discussed above, and in section \ref{Equivalence} we prove that the two constructions are equivalent.
\section{Preliminaries}\label{Preliminaries}
We call $\mathcal{FCW}_{2}$ the category of pairs of spaces having the homotopy type of finite CW-complexes. Let $h^{\bullet}$ be a multiplicative cohomology theory on $\mathcal{FCW}_{2}$. We recall the construction of the Gysin map for smooth maps between differentiable manifolds with boundary (v.\ \cite{Karoubi, FR} for manifolds without boundary).
\subsection{Gysin map}
Let $h^{\bullet}$ be a cohomology theory on $\mathcal{FCW}_{2}$. A smooth $h^{\bullet}$-manifold is a smooth manifold with an $h^{\bullet}$-orientation, the latter being a Thom class of its tangent bundle or, equivalently, of its stable normal bundle. Given two compact smooth $h^{\bullet}$-manifolds with boundary $X$ and $Y$ and a map $f: (Y, \partial Y) \rightarrow (X, \partial X)$, we can define the Gysin map $f_{!}: h^{\bullet}(Y) \rightarrow h^{\bullet + \dim\,X - \dim\,Y}(X)$ as:
\begin{equation}\label{GysinLefschetz}
f_{!}(\alpha) := L_{X}^{-1} f_{*} L_{Y} (\alpha)
\end{equation}
where:
\begin{equation}\label{Lefschetz}
L_{X}: h^{\bullet}(X) \rightarrow h_{\dim X - \bullet}(X, \partial X)
\end{equation}
is the Lefschetz duality \cite{Switzer}. The problem with this definition is that it involves the homology groups, which are precisely what we have to define; therefore we need a construction involving only the cohomology groups. When $f$ is a neat map, one can define the Gysin map in a way similar to the one shown in \cite{Karoubi}, pp.\ 230-234, for topological K-theory on manifolds without boundary. The definition can then be easily extended to any map between $h^{\bullet}$-manifolds.
We start with embeddings. We call $\mathbb{R}^{n}_{+} := \{(x_{1}, \ldots, x_{n}) \in \mathbb{R}^{n} \,\vert\, x_{n} \geq 0\}$. Given a manifold $X$ and a point $x \in \partial X$, by definition there exists a chart $(U, \varphi)$ of $X$ in $x$, with $U \subset X$ open and $\varphi: U \rightarrow \mathbb{R}^{n}_{+}$, such that $\varphi(x) = 0$ and $\varphi(\partial X \cap U) = (\mathbb{R}^{n-1} \times \{0\}) \cap \varphi(U)$. We call such a chart \emph{boundary chart}. For $m \leq n-1$, we call $\overline{\mathbb{R}}^{m}$ the subspace of $\mathbb{R}^{n}$ containing those vectors whose first $n-m$ components are vanishing, i.e.\ $\overline{\mathbb{R}}^{m} = \{0\} \times \mathbb{R}^{m} \subset \mathbb{R}^{n}$, and $\overline{\mathbb{R}}^{m}_{+} := \overline{\mathbb{R}}^{m} \cap \mathbb{R}^{n}_{+}$.
\begin{Def} An embedding of manifolds $i: (Y, \partial Y) \hookrightarrow (X, \partial X)$, where $\dim Y = m$, is \emph{neat} \cite{Hirsh, Kosinski} if:
\begin{itemize}
\item $i(\partial Y) = i(Y) \cap \partial X$;
\item for every $y \in \partial Y$ there exists a boundary chart $(U, \varphi)$ of $X$ in $i(y)$ such that $U \cap i(Y) = \varphi^{-1}(\overline{\mathbb{R}}^{m}_{+})$.
\end{itemize}
\end{Def}
The importance of neat embeddings in this context relies on the fact that the properties of their tubular neighborhoods are similar to those holding for manifolds without boundary.
\begin{Def} Let $(Y, \partial Y)$ be a neat submanifold of $(X, \partial X)$. A tubular neighborhood $U$ of $Y$ in $X$ is \emph{neat} \cite{Kosinski} if $U \cap \partial X$ is a tubular neighborhood of $\partial Y$ in $\partial X$.
\end{Def}
\begin{Theorem} If $(Y, \partial Y)$ is a neat submanifold of $(X, \partial X)$, there exists a neat tubular neighborhood of $Y$ in $X$ and it is unique up to isotopy.
\end{Theorem}
The proof can be found in \cite[Chapter~4.6]{Hirsh} and in \cite[Chapter~III.4]{Kosinski}. Let $i: (Y, \partial Y) \hookrightarrow (X, \partial X)$ be a neat embedding of smooth compact manifolds of codimension $n$, such that the normal bundle $N_{Y}X$ is $h^{\bullet}$-orientable. Let $U$ be a tubular neighborhood of $Y$ in $X$, and $\varphi_{U}: U \rightarrow N_{Y}X$ a homeomorphism, which exists by definition. The map:
\[i_{!}: h^{\bullet}(Y) \rightarrow h^{\bullet + n}(X)
\]
is defined in the following way:
\begin{itemize}
\item we apply the Thom isomorphism $T: h^{\bullet}(Y) \rightarrow h^{\bullet + n}_{\cpt}(N_{Y}X) = \tilde{h}^{\bullet + n}(N_{Y}X^{+})$, for $N_{Y}X^{+}$ the one-point compactification of $N_{Y}X$;
\item we extend $\varphi_{U}$ to $\varphi_{U}^{+}: U^{+} \rightarrow N_{Y}X^{+}$ in the natural way and apply $(\varphi_{U}^{+})^{*}: h^{\bullet}_{\cpt}(N_{Y}X) \rightarrow h^{\bullet}_{\cpt}(U)$;
\item considering the natural map $\psi: X \rightarrow U^{+}$ given by:
\[\psi(x) = \left\{\begin{array}{ll}
x & \text{if } x \in U \\
\infty & \text{if } x \in X \setminus U
\end{array}\right.
\]
we apply $\psi^{*}: \tilde{h}^{\bullet}(U^{+}) \rightarrow \tilde{h}^{\bullet}(X)$.
\end{itemize}
Summarizing:
\begin{equation}\label{GysinMap}
i_{!}(\alpha) := \psi^{*} \circ (\varphi_{U}^{+})^{*} \circ T(\alpha).
\end{equation}
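For a concrete instance of \eqref{GysinMap} in the simplest possible setting, take ordinary (de Rham) cohomology and the embedding of a point into the circle: the Thom class of the rank-one normal bundle is a compactly supported $1$-form of total integral $1$ on the tubular neighborhood, and $\psi^{*}$ extends it by zero to all of $S^{1}$. The following Python sketch (all names are ours, not from the text) builds such a representative and checks its normalization numerically:

```python
import math

def gysin_point_into_circle(center=0.0, width=0.5):
    """Return the coefficient g(theta) of a 1-form g(theta) d(theta) on S^1
    representing i_!(1) for i: {center} -> S^1: a bump supported in the
    tubular neighborhood (center - width, center + width), extended by zero,
    with total integral 1 (the Thom class of the normal bundle)."""
    def g(theta):
        # signed distance from theta to center along the circle, in (-pi, pi]
        d = math.atan2(math.sin(theta - center), math.cos(theta - center))
        if abs(d) >= width:
            return 0.0  # psi^*: extension by zero outside the neighborhood
        return (1.0 + math.cos(math.pi * d / width)) / (2.0 * width)
    return g

def integrate_over_circle(g, n=4000):
    """Riemann sum of g(theta) d(theta) over S^1."""
    h = 2.0 * math.pi / n
    return sum(g(k * h) for k in range(n)) * h
```

The total integral is $1$ independently of `width`, reflecting that the class of $i_{!}(1)\in h^{1}(S^{1})$ does not depend on the chosen tubular neighborhood.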
We now define the Gysin map associated to a generic neat map $f: (Y, \partial Y) \rightarrow (X, \partial X)$, not necessarily an embedding.
\begin{Def} A smooth map $f: (Y, \partial Y) \rightarrow (X, \partial X)$ is \emph{neat} (v.\ \cite[Appendix~C]{HS} and references therein) if:
\begin{itemize}
\item $f^{-1}(\partial X) = \partial Y$;
\item for every $y \in \partial Y$, the map $df_{y}: T_{y}Y/T_{y}\partial Y \rightarrow T_{f(y)}X/T_{f(y)}\partial X$ is an isomorphism.
\end{itemize}
\end{Def}
If $f$ is an embedding, this definition is equivalent to the previous one. In the case of manifolds without boundary, in order to construct the Gysin map one considers an embedding $j: Y \rightarrow \mathbb{R}^{N}$ and the embedding $(f, j): Y \rightarrow X \times \mathbb{R}^{N}$. This does not apply to manifolds with boundary, since $j$ is not a neat map, and, if we consider $\mathbb{R}^{N}_{+}$ instead of $\mathbb{R}^{N}$, it becomes more complicated to define the integration map. Nevertheless, a similar construction is possible thanks to the following theorem (v.\ \cite[Appendix~C]{HS} and references therein).
\begin{Theorem} Let $f: (Y, \partial Y) \rightarrow (X, \partial X)$ be a neat map. Then there exists a neat embedding $\iota: (Y, \partial Y) \rightarrow (X \times \mathbb{R}^{N}, \partial X \times \mathbb{R}^{N})$, stably unique up to isotopy, such that $f = \pi_{X} \circ \iota$ for $\pi_{X}: X \times \mathbb{R}^{N} \rightarrow X$ the projection.
\end{Theorem}
Therefore we consider the Gysin map:
\[\iota_{!}: h^{\bullet}(Y) \rightarrow h_{\cpt}^{\bullet + (N + \dim X - \dim Y)}(X \times \mathbb{R}^{N})
\]
as previously defined, followed by the integration map:
\begin{equation}\label{IntegrationMap}
\int_{\mathbb{R}^{N}}: \; h^{\bullet+N}_{\cpt}(X \times \mathbb{R}^{N}) \rightarrow h^{\bullet}(X)
\end{equation}
defined in the following way:
\begin{itemize}
\item $h^{\bullet+N}_{\cpt}(X \times \mathbb{R}^{N}) = \tilde{h}^{\bullet+N}((X \times \mathbb{R}^{N})^{+}) \simeq \tilde{h}^{\bullet+N}(\Sigma^{N}(X_{+}))$, for $X_{+} = X \sqcup \{\infty\}$;
\item we apply the suspension isomorphism $\tilde{h}^{\bullet+N}(\Sigma^{N}(X_{+})) \simeq \tilde{h}^{\bullet}(X_{+}) \simeq h^{\bullet}(X)$.
\end{itemize}
Summarizing:
\[f_{!}(\alpha) := \int_{\mathbb{R}^{N}} \iota_{!}(\alpha).
\]
In order to prove that the Gysin map so defined does not depend on the choices involved in the construction (the tubular neighborhood $U$, the diffeomorphism $\varphi_{U}$ with the normal bundle, the embedding $\iota$), the proof in \cite{Karoubi} applies also to the case of manifolds with boundary. In fact, the independence from the tubular neighborhood and the associated diffeomorphism is a consequence of the uniqueness up to isotopy (in particular, homotopy) of such a neighborhood. For what concerns the embedding $\iota$, the proof of \cite{Karoubi}, Prop.\ 5.24 p.\ 233, applies as well. In particular, $f_{!}$ only depends on the homotopy class of $f$.
If $f: (Y, \partial Y) \rightarrow (X, \partial X)$ is a generic map, not necessarily neat, we can define $f_{!}$ via the following lemma.
\begin{Lemma}\label{HomotopicNeat} Any smooth map $f: (Y, \partial Y) \rightarrow (X, \partial X)$ between compact manifolds is homotopic to a neat map relatively to $\partial Y$.
\end{Lemma}
\paragraph{Proof:} We can choose two collar neighborhoods $U$ of $\partial Y$ and $V$ of $\partial X$ such that $f(U) \subset V$. Hence we think of $f\vert_{U}$ as a map from $\partial Y \times [0, 1)$ to $\partial X \times [0, 1)$. We consider the following homotopy: $F_{t}(y, u) = (\pi_{\partial X}f(y, u), (1-t)\pi_{[0,1)}f(y,u) + tu)$. It follows that $F_{0} = f\vert_{U}$ and $F_{1}(y, u) = (\pi_{\partial X}f(y, u), u)$, and the latter is neat. Gluing $F_{t}$ and $f\vert_{Y \setminus U}$ via a bump function, we get a homotopy defined on the whole of $Y$. $\square$ \\
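In collar coordinates the straightening used in the proof is completely explicit. A small Python sketch of $F_{t}(y,u)=(\pi_{\partial X}f(y,u),\,(1-t)\pi_{[0,1)}f(y,u)+tu)$ (the toy map `f` below is ours, chosen only for illustration):

```python
def straighten(f, t):
    """Homotopy from the lemma: interpolate the collar component of f towards u,
    leaving the boundary component untouched."""
    def F(y, u):
        boundary_part, collar_part = f(y, u)
        return (boundary_part, (1.0 - t) * collar_part + t * u)
    return F

# toy map in collar coordinates (y, u) in dY x [0, 1): a shift on the boundary
# factor together with a nonlinear collar component
def f(y, u):
    return (y + 0.25, 0.5 * u * u)
```

At $t=0$ we recover $f$, while at $t=1$ the collar component is $u$ itself, so $F_{1}$ preserves the collar projection near the boundary, as the neatness condition requires.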
Since the Gysin map $f_{!}$, for $f$ neat, only depends on the homotopy class of $f$, we can define it for a generic map $f$, simply considering any neat function homotopic to it. The Gysin map commutes with the restrictions to the boundaries up to a sign, as the following theorem shows.
\begin{Theorem}\label{GysinCommBoundary} Let $f: (Y, \partial Y) \rightarrow (X, \partial X)$ be a map between $h^{\bullet}$-oriented smooth manifolds, and $f': \partial Y \rightarrow \partial X$ the restriction to the boundaries. Then, for $\alpha \in h^{\bullet}(X)$:
\[f'_{!}(\alpha\vert_{\partial Y}) = (-1)^{\dim X - \dim Y}(f_{!}\alpha)\vert_{\partial X}
\]
where the orientations on the boundaries are naturally induced from the ones of $X$ and $Y$.
\end{Theorem}
\paragraph{Proof:} It is enough to prove the statement for embeddings, since the integration over $\mathbb{R}^{N}$, which is actually the suspension isomorphism, commutes with the restrictions. Therefore, let us suppose that $f$ is a neat embedding, and that $N_{Y}X$ is a neat tubular neighborhood. Then $N_{\partial Y}\partial X = N_{Y}X\vert_{\partial Y}$, but the orientation induced by the ones of $\partial Y$ and $\partial X$ on $N_{\partial Y}\partial X$ differs from the restriction of the one induced by $Y$ and $X$ by a factor $(-1)^{\dim X - \dim Y}$. Therefore, if:
\[T: h^{\bullet}(Y) \rightarrow h^{\bullet + \dim X - \dim Y}_{\cpt}(N_{Y}X), \qquad T': h^{\bullet}(\partial Y) \rightarrow h^{\bullet + \dim X - \dim Y}_{\cpt}(N_{\partial Y}\partial X)
\]
are the Thom isomorphisms, it follows that:
\[T'(\alpha\vert_{\partial Y}) = (-1)^{\dim X - \dim Y}T(\alpha)\vert_{N_{\partial Y}\partial X}.
\]
Then, since the pull-backs commute with the restrictions, the thesis follows. If $f$ is a generic embedding, not necessarily neat, since $f$ is homotopic to a neat embedding relatively to the boundary, then $f'$ remains unchanged under the homotopy and the thesis follows. $\square$
\subsection{Geometric homology}
We recall the geometric definition of the homology theory dual to a given cohomology theory $h^{\bullet}$, as defined in \cite{Jakob}. Let $M$ be a paracompact space, $\pi_{V}: V \rightarrow M$ a real vector bundle of rank $r+1$ with metric and $h^{\bullet}$-orientation, and $\sigma: M \rightarrow V$ a section of norm $1$. Then, $\sigma$ induces an isomorphism $V \simeq E \oplus 1$, for $\pi_{E}: E \rightarrow M$ a vector bundle of rank $r$ with metric, such that $\sigma(m) \simeq (0, 1)_{m}$. We identify $V$ with $E \oplus 1$. The unit sphere bundle $SV$ of $V$ can be thought of as the union of two disc bundles, the two hemispheres, joined on the equator: the two disc bundles are isomorphic to the unit disc bundle of $E$, therefore we call them $D^{+}E$ and $D^{-}E$, while the bundle of the equators is isomorphic to the sphere bundle of $E$, which we call $SE$. Moreover, the north pole $(0, 1)_{m}$ of $D^{+}E_{m}$ is $\sigma(m)$, therefore $D^{+}E$ is a tubular neighborhood of the image of $\sigma$. There is a natural map:
\begin{equation}\label{VBM}
\sigma_{!}: h^{\bullet}(M) \rightarrow h^{\bullet+r}(SV)
\end{equation}
defined in the following way:
\begin{itemize}
\item we apply the Thom isomorphism $T: h^{\bullet}(M) \rightarrow h^{\bullet+r}(E^{+}) \simeq h^{\bullet+r}(D^{+}E, SE)$;
\item by excision $h^{\bullet+r}(D^{+}E, SE) \simeq h^{\bullet+r}(SV, D^{-}E)$;
\item from the inclusion of couples $(SV, \emptyset) \subset (SV, D^{-}E)$ we get a map $h^{\bullet+r}(SV, D^{-}E) \rightarrow h^{\bullet+r}(SV)$.
\end{itemize}
The map \eqref{VBM} coincides with the Gysin map associated to $\sigma$ \cite{Jakob}.
\begin{Def}\label{CyclesJakob} For $(X, A) \in \Ob(\mathcal{FCW}_{2})$ and $n \in \mathbb{Z}$ fixed, we consider the quadruples $(M, u, \alpha, f)$ where:
\begin{itemize}
\item $(M, u)$ a smooth compact $h^{\bullet}$-manifold, possibly with boundary, whose connected components $\{M_{i}\}$ have dimension $n+q_{i}$, with $q_{i}$ arbitrary; we think of $u$ as a Thom class of the tangent bundle;
\item $\alpha \in h^{\bullet}(M)$, such that $\alpha\vert_{M_{i}} \in h^{q_{i}}(M_{i})$;
\item $f: (M, \partial M) \rightarrow (X, A)$ is a map.
\end{itemize}
Two quadruples $(M, u, \alpha, f)$ and $(N, v, \beta, g)$ are equivalent if there exists an orientation-preserving diffeomorphism $F: (M, u) \rightarrow (N, v)$ such that $f = g \circ F$ and $\alpha = F^{*}\beta$. The \emph{group of $n$-cycles} $C_{n}(X, A)$ is the free abelian group generated by equivalence classes of such quadruples.
\end{Def}
We now consider the group $G_{n}(X, A)$ defined as the quotient of $C_{n}(X, A)$ by the subgroup generated by elements of the form:
\begin{itemize}
\item $[(M, u, \alpha, f)] - [(M_{1}, u\vert_{M_{1}}, \alpha\vert_{M_{1}}, f\vert_{M_{1}})] - [(M_{2}, u\vert_{M_{2}}, \alpha\vert_{M_{2}}, f\vert_{M_{2}})]$, for $M = M_{1} \sqcup M_{2}$;
\item $[(M, u, \alpha + \beta, f)] - [(M, u, \alpha, f)] - [(M, u, \beta, f)]$.
\end{itemize}
Moreover, we define the subgroup $U_{n}(X, A) \leq G_{n}(X, A)$ as the one generated by elements:
\begin{itemize}
\item $[(M, u, \alpha, f)] - [(S(E \oplus 1), \tilde{u}, \sigma_{!}\alpha, f \circ \pi)]$, where $S(E \oplus 1)$ is the sphere bundle induced by an $h^{\bullet}$-oriented vector bundle\footnote{The vector bundle $E$ may have different rank on different connected components of $M$.} $E \rightarrow M$ with metric, $\tilde{u}$ is the orientation canonically induced on $S(E \oplus 1)$ as a manifold from $u$ and the orientation of $E$, $\sigma: M \rightarrow E \oplus 1$ is the section $\sigma(m) = (0, 1)_{m}$ and $\sigma_{!}$ is the vector bundle modification \eqref{VBM};
\item $[(M, u, \alpha, f)]$ such that there exists $[(W, U, A, F)] \in G_{n+1}(X, X)$ such that $M \subset \partial W$ is a regularly embedded submanifold of codimension $0$ and $F(\partial W \setminus M) \subset A$, $U = u\vert_{M}$, $\alpha = A\vert_{M}$, $f = F\vert_{M}$.
\end{itemize}
Finally:
\begin{Def} The geometric homology groups are defined as $h_{n}(X, A) := G_{n}(X, A) / U_{n}(X, A)$.
\end{Def}
\section{Geometric homology revisited}\label{DualRevisited}
We now redefine the homology groups using only the Gysin map instead of the vector bundle modification. We also define cycles and boundaries in a slightly different way.
\begin{Def} On a pair $(X, A) \in \mathcal{FCW}_{2}$, we define the group of \emph{$n$-precycles of $h_{\bullet}$} as the free abelian group generated by the quadruples $(M, u, \alpha, f)$, for:
\begin{itemize}
\item $(M, u)$ a smooth compact $h^{\bullet}$-manifold, possibly with boundary, whose connected components $\{M_{i}\}$ have dimension $n+q_{i}$, with $q_{i}$ arbitrary;
\item $\alpha \in h^{\bullet}(M)$, such that $\alpha\vert_{M_{i}} \in h^{q_{i}}(M_{i})$;
\item $f: (M, \partial M) \rightarrow (X, A)$ a continuous map.
\end{itemize}
\end{Def}
In contrast to definition \ref{CyclesJakob}, we do not quotient by orientation-preserving diffeomorphisms, since this will turn out to be unnecessary. We define cycles and boundaries in the following way.
\begin{Def} The group of \emph{$n$-cycles of $h_{\bullet}$}, denoted by $z_{n}(X, A)$, is the quotient of the group of $n$-precycles by the free subgroup generated by elements of the form:
\begin{itemize}
\item $(M, u, \alpha, f) - (M_{1}, u\vert_{M_{1}}, \alpha\vert_{M_{1}}, f\vert_{M_{1}}) - (M_{2}, u\vert_{M_{2}}, \alpha\vert_{M_{2}}, f\vert_{M_{2}})$, for $M = M_{1} \sqcup M_{2}$;
\item $(M, u, \alpha + \beta, f) - (M, u, \alpha, f) - (M, u, \beta, f)$;
\item $(M, u, \varphi_{!}\alpha, f) - (N, v, \alpha, f \circ \varphi)$ for $\varphi: (N, \partial N) \rightarrow (M, \partial M)$ a map.
\end{itemize}
\end{Def}
The use of the Gysin map in this definition is more natural than that of the vector bundle modification: the latter is just a particular case, while the Gysin map is the natural push-forward, defined for any map $\varphi: (N, \partial N) \rightarrow (M, \partial M)$. Moreover, it is not necessary to deal explicitly with diffeomorphisms: in fact, if $\varphi: (M, u) \rightarrow (N, v)$ is an orientation-preserving diffeomorphism, it is trivial to show from the definition that $\varphi_{!} = (\varphi^{-1})^{*}$, therefore the quotient in definition \ref{CyclesJakob} is just another particular case of the Gysin map.
\begin{Def} The group of \emph{$n$-boundaries of $h_{\bullet}$}, denoted by $b_{n}(X, A)$, is the subgroup of $z_{n}(X, A)$ consisting of the cycles which are representable by a precycle $(M, u, \alpha, f)$ for which there exists a precycle $(W, U, A, F)$ of $(X, X)$ such that:
\begin{itemize}
\item $M \subset \partial W$ is a regularly embedded submanifold of codimension $0$;
\item $F(\partial W \setminus M) \subset A$;
\item $U = u\vert_{M}$, $\alpha = A\vert_{M}$, $f = F\vert_{M}$.
\end{itemize}
\end{Def}
Of course we define:
\[h_{n}(X,A) := z_{n}(X,A) / b_{n}(X,A).
\]
For $g: (X, A) \rightarrow (Y, B)$ a map, the push-forward $g_{*}: h_{\bullet}(X,A) \rightarrow h_{\bullet}(Y, B)$ is naturally defined as $g_{*}[(M, u, \alpha, f)] = [(M, u, \alpha, g \circ f)]$, while the connecting homomorphism $\partial_{n}: h_{n}(X,A) \rightarrow h_{n-1}(A)$ is defined as:
\[\partial_{n}[(M, u, \alpha, f)] = (-1)^{\abs{\alpha}}[(\partial M, u\vert_{\partial M}, \alpha\vert_{\partial M}, f\vert_{\partial M})]
\]
where $(-1)^{\abs{\alpha}}$ depends on the connected component and $u\vert_{\partial M}$ is the orientation naturally induced by $u$ on the boundary. It is well-defined thanks to theorem \ref{GysinCommBoundary}. The exterior product and the cap product are defined as in \cite{Jakob}.
\section{Equivalence}\label{Equivalence}
We call $h''_{n}(X, A)$ the geometric homology groups defined in \cite{Jakob}, $h'_{n}(X,A)$ the ones defined in the present paper and $h_{n}(X,A)$ the ones defined via spectra. There is a natural map:
\begin{equation}\label{EquivalenceMap}
\begin{split}
\Psi_{n}(X, A):\; &h''_{n}(X, A) \rightarrow h'_{n}(X,A) \\
& [(M, u, \alpha, f)] \rightarrow [(M, u, \alpha, f)],
\end{split}
\end{equation}
where of course the square brackets denote two different equivalence relations in the domain and in the codomain. It is easy to show that $\Psi_{n}$ is well-defined: the quotient by diffeomorphisms and vector bundle modifications in the domain corresponds to the quotient by the Gysin map in the codomain, so equivalent quadruples are sent to equivalent quadruples. Moreover, the quotient by boundaries, the disjoint union of base manifolds and the addition of cohomology classes are defined in the same way in the two cases.
\begin{Theorem}\label{EquivalenceTheorem} The maps $\Psi_{\bullet}$ defined in \eqref{EquivalenceMap} induce an equivalence of homology theories on $\mathcal{FCW}_{2}$ between $h''_{\bullet}$ and $h'_{\bullet}$.
\end{Theorem}
\paragraph{Proof:} $\Psi_{n}$ is a group homomorphism by construction, since it is defined on the generators of the free abelian group $C_{n}(X, A)$, and $h'_{n}(X, A)$ is obtained from $C_{n}(X, A)$ by repeated quotients. It is clearly surjective, since the elements of the form $[(M, u, \alpha, f)]$ generate $h'_{n}(X,A)$ as well. Therefore, it remains to prove injectivity. Every element of $h''_{n}(X, A)$ can be written as a single class $[(M, u, \alpha, f)]$, since a sum of such classes can be reduced to a single one via the disjoint union of the base manifolds. Since $\Psi_{n}[(M, u, \varphi_{!}\alpha, f)] = \Psi_{n}[(N, v, \alpha, f \circ \varphi)]$ for any map $\varphi: (N, \partial N) \rightarrow (M, \partial M)$, we must prove that $[(M, u, \varphi_{!}\alpha, f)] = [(N, v, \alpha, f \circ \varphi)]$ already in $h''_{n}(X,A)$. The equivalence between $h''_{n}(X,A)$ and $h_{n}(X,A)$ is given in \cite{Jakob} by the maps $\Phi_{\bullet}: h''_{\bullet}(X,A) \rightarrow h_{\bullet}(X,A)$ defined by $\Phi_{n}[(M, u, \alpha, f)] := f_{*}(\alpha \cap [M])$, for $[M]$ the fundamental class of $M$. By definition $\alpha \cap [M] = L_{M}(\alpha)$, for $L_{M}$ the Lefschetz duality \eqref{Lefschetz}. Hence, because of \eqref{GysinLefschetz}:
\[\begin{split}
& \Phi_{n}[(M, u, \varphi_{!}\alpha, f)] = f_{*} L_{M}(\varphi_{!}\alpha) \\
& \Phi_{n}[(N, v, \alpha, f \circ \varphi)] = (f \circ \varphi)_{*} L_{N}(\alpha) = f_{*} \varphi_{*} L_{N}(\alpha) = f_{*} L_{M}(\varphi_{!}\alpha).
\end{split}\]
Since the map $\Phi_{n}$ is injective, it follows that $[(M, u, \varphi_{!}\alpha, f)] = [(N, v, \alpha, f \circ \varphi)]$ already in $h''_{n}(X,A)$. The fact that $\Psi_{\bullet}$ is a morphism of homology theories then follows from the fact that the boundary morphism and the push-forward are defined in the same way for $h'_{n}(X,A)$ and $h''_{n}(X,A)$. $\square$
\section*{Acknowledgements}
The author is financially supported by FAPESP (Funda\c{c}\~ao de Amparo \`a Pesquisa do Estado de S\~ao Paulo) and would like to thank Ryan Budney for having suggested the proof of lemma \ref{HomotopicNeat} on http://mathoverflow.net/.
https://arxiv.org/abs/2206.09844 | Heavy-traffic single-server queues and the transform method | Heavy-traffic limit theory deals with queues that operate close to criticality and face severe queueing times. Let $W$ denote the steady-state waiting time in the ${\rm GI}/{\rm G}/1$ queue. Kingman (1961) showed that $W$, when appropriately scaled, converges in distribution to an exponential random variable as the system's load approaches 1. The original proof of this famous result uses the transform method. Starting from the Laplace transform of the pdf of $W$ (Pollaczek's contour integral representation), Kingman showed convergence of transforms and hence weak convergence of the involved random variables. We apply and extend this transform method to obtain convergence of moments with error assessment. We also demonstrate how the transform method can be applied to so-called nearly deterministic queues in a Kingman-type and a Gaussian heavy-traffic regime. We demonstrate numerically the accuracy of the various heavy-traffic approximations.
\section{Introduction and results} \label{sec1}
{The title of this contribution to the memorial issue for J.W.~Cohen refers to \emph{The Single-Server Queue}, the monumental book \cite{cohen2012single} in which J.W.~Cohen teaches the reader how to use complex analysis and transform methods to obtain mathematically rigorous results for the general GI/G/1 queue and its many extensions. In turn, J.W.~Cohen admired the work of F.~Pollaczek, in particular for the analytic treatment of queues by means of complex function theory and integral equations \cite{cohen1993complex}, techniques that also feature prominently in \emph{The Single-Server Queue}, and in this paper on the GI/G/1 queue in heavy traffic. }
F.~Pollaczek initiated the analysis of the GI/G/1 queue
in the 1940s and 1950s, and obtained a contour integral representation for the Laplace transform of the steady-state waiting time. J.F.C.~Kingman introduced heavy-traffic analysis in the 1960s \cite{ref10,ref11}.
For the {\rm GI}/{\rm G}/1 queue in a regime where the system load tends to 1, Kingman showed that, under certain conditions, the Laplace transform of the pdf of an appropriately normalized steady-state waiting time (Pollaczek's contour integral representation) converges to the Laplace transform of an exponential distribution. We will refer to this technique---to show convergence in distribution by convergence of transforms---as the transform method.
To explain Kingman's result in more detail, let $W$
denote the steady-state waiting time in the {\rm GI}/{\rm G}/1 queue, which solves the stochastic equation
\beq \label{1.1}
W\stackrel{d}{=} \Big(W+V-\frac{1}{\rho}\,U\Big)^+,
\eq
with $x^+=\max\{0,x\}$. Here, $V$ is the generic service time with mean 1 and variance $\sigma_V^2\in(0,\infty)$, $U$ is the generic inter-arrival time with mean 1 and variance $\sigma_U^2\in(0,\infty)$ and $\rho\in(0,1)$. It is assumed that $V$ and $U$ are independent. Since convergence of transforms implies convergence of distributions, Kingman effectively showed using the transform method that, for $W=W_{\alpha}$ solving (\ref{1.1}) with $\alpha=1-\rho$,
\beq \label{1.2}
P(\alpha\,W_{\alpha}\leq t)\pr 1-e^{-2t/\sigma^2},~~~~~~\alpha\downarrow 0,
\eq
for all $t\geq0$ with $\sigma^2=\sigma_U^2+\sigma_V^2$.
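To make \eqref{1.2} concrete, the following Monte Carlo sketch (our own illustration, not from the paper) iterates the Lindley recursion behind \eqref{1.1} for exponential $V$ and $U$ with mean 1, so that $\sigma^2=2$ and $\alpha W_{\alpha}$ should be approximately exponential with mean 1; the load $\rho=0.9$ and the run length are arbitrary illustrative choices.

```python
import numpy as np

# Monte Carlo sketch (illustrative, not from the paper) of the Lindley recursion
# behind (1.1): W <- (W + V - U/rho)^+ with V, U ~ Exp(1), so that
# sigma^2 = sigma_U^2 + sigma_V^2 = 2 and Kingman's limit (1.2) says that
# alpha*W is approximately exponential with mean sigma^2/2 = 1.
rng = np.random.default_rng(0)
rho = 0.9
alpha = 1.0 - rho
n_steps = 500_000

x = rng.exponential(1.0, n_steps) - rng.exponential(1.0, n_steps) / rho
w = np.empty(n_steps)
cur = 0.0
for k in range(n_steps):
    cur = max(0.0, cur + x[k])
    w[k] = cur

burn = n_steps // 10                 # discard the transient
est_mean = alpha * w[burn:].mean()   # estimate of E[alpha * W]
print(est_mean)                      # for this M/M/1 case the exact value is rho = 0.9
```

For these exponential inputs the queue is M/M/1 with $\dE[W]=\rho/(1-\rho)$, so $\dE[\alpha W]=\rho$, already close to the heavy-traffic value $\sigma^2/2=1$.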
Kingman's observation that heavily loaded systems admit a simple scaling limit triggered a surge of research in the 1960s and 1970s; see \cite{ref3,ref6,ref4,ref15,ref19,ref12,ref13}, among others. In the decades that followed, heavy-traffic analysis, and more generally, stochastic-process limits, developed into popular topics in the applied probability community, with queueing theory as one of its many applications. The general idea remained to consider a parametrized set of systems and to identify the limiting system as the parameter converges to a limiting value yielding criticality (e.g.\ $\alpha\downarrow0$ as in (\ref{1.2})).
Kingman's transform method is thus largely based on Pollaczek's contour integral that we introduce next, see \cite{ref1}. Assume analyticity of $\psi(s)=\dE\,[\exp(s(V-\frac{1}{\rho}\,U))]$ in an open strip containing $|\mbox{Re}(s)|\leq\delta$ for some $\delta>0$; in particular, all moments of $V-\frac{1}{\rho}\,U$ are finite. Then, Pollaczek's integral reads
\beq \label{1.3}
\dE\,[e^{-sW}]=\exp\,\Bigl\{\frac{-1}{2\pi i}\,\il_C\,\frac{s}{z(s-z)}\,\log(1-\psi({-}z))dz\Bigr\},
\eq
where $C$ is a contour to the left of, and parallel to, the imaginary axis, and to the right of the singularities of $\log(1-\psi({-}z))$ in the left half-plane, and $s$ is any complex number to the right of $C$. Kingman uses (\ref{1.3}) to prove that $\dE\,[\exp({-}s\alpha W)]\pr(1+\sigma^2s/2)^{-1}$ in $\mbox{Re}(s)\geq0$ as $\alpha\downarrow 0$, yielding (\ref{1.2}).
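As an illustration of how \eqref{1.3} can be evaluated, the following sketch (our own, not from the paper) computes the contour integral numerically for exponential $V$ and $U$ with mean 1, in which case the queue is M/M/1 and the waiting-time transform has the well-known closed form $(1-\rho)(s+1)/(s+1-\rho)$ to compare against; the values of $\rho$, $s$ and the contour abscissa are illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

# Numerical check (sketch) of Pollaczek's contour integral (1.3) for
# V, U ~ Exp(1), i.e. an M/M/1 queue with arrival rate rho and service rate 1,
# whose waiting-time transform is known in closed form.
rho, s = 0.8, 0.5

def psi_neg(z):
    # psi(-z) = E[exp(-z(V - U/rho))] = 1/((1+z)(1 - z/rho))
    return 1.0 / ((1.0 + z) * (1.0 - z / rho))

def integrand(y, c):
    z = c + 1j * y
    f = s / (z * (s - z)) * np.log(1.0 - psi_neg(z))
    return f.real        # for real s the imaginary parts cancel by symmetry

c = (rho - 1.0) / 2.0    # contour between the zeros z = 0 and z = rho - 1 of 1 - psi(-z)
I, _ = quad(integrand, 0.0, np.inf, args=(c,), limit=200)
pollaczek = np.exp(-I / np.pi)   # -1/(2*pi*i) * int_C ... dz = -(1/pi) * int_0^inf Re(...) dy

exact = (1.0 - rho) * (s + 1.0) / (s + 1.0 - rho)   # standard M/M/1 waiting-time LST
print(pollaczek, exact)
```

The contour abscissa $c=(\rho-1)/2$ places $C$ between the two real zeros of $1-\psi(-z)$, as the formula requires.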
Being tailor-made for the steady-state ${\rm GI}/{\rm G}/1$ queue, the transform method that uses contour integral representations has rarely been applied to other queueing models. A notable exception is the heavy-traffic analysis of O.J.~Boxma and J.W.~Cohen \cite{ref3b} for the ${\rm GI}/{\rm G}/1$ queue with heavy-tailed distributions, so when the second moment of the service time and/or interarrival time is infinite. Boxma and Cohen apply the transform method to an extended form of Pollaczek's integral \eqref{1.3} to identify the heavy-traffic limit.
Other studies that apply this transform method for heavy-traffic analysis are \cite{ref12,ref13} on the GI/G/$s$ queue and \cite{boon2019pollaczek,boon2021optimal} on the fixed-cycle traffic-light queue, a variation of the GI/G/1 queue.
More probabilistic methods for proving heavy-traffic results developed later use functional limit theorems, and typically establish that the sample path of a scaled waiting time converges to some limiting stochastic process. One is then faced with the problem of showing that the steady state of the limiting process corresponds to the limiting steady state of the queueing model in heavy traffic. This requires an interchange-of-limits argument, which is often challenging as it involves proving tightness of sequences of probability measures. The transform method works directly with the steady-state random variables, and thus avoids the problem of interchanging limits.
\subsection{Nearly deterministic queues and the transform method} \label{subsec1.1}
Next to the classical heavy-traffic setting, we will consider so-called nearly deterministic ${\rm GI}_n/{\rm G}_n/1$ queues, whose heavy-traffic behavior has been investigated in \cite{ref17,ref18} using sample-path methods. Nearly deterministic queues are motivated by cyclic thinning. To explain this, denote for all $n=1,2,...$
\beq \label{1.4}
W_n\stackrel{d}{=} \Bigl(W_n+V_n-\frac{1}{\rho_n}\,U_n\Bigr)^+,
\eq
with
\beq \label{1.5}
V_n=\frac1n\,\sum_{k=1}^n\,V_{n,k},~~~~~~U_n=\frac1n\,\sum_{k=1}^n\,U_{n,k},
\eq
where $V_{n,k}$ are i.i.d.\ copies of $V$ and $U_{n,k}$ are i.i.d.\ copies of $U$, with $V$ and $U$ as before, and $\rho_n\in(0,1)$.
Cyclic thinning thus builds each interarrival (service) time from $n$ occurrences in a sequence of i.i.d.~random variables, which mitigates the random fluctuations. For instance, if this sequence consists of exponential random variables, the interarrival or service time follows an Erlang distribution, which becomes more and more `deterministic' as $n$ grows.
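A quick numerical illustration of this effect (our own sketch, with exponential base variables as an assumed example): averaging $n$ i.i.d.\ Exp(1) variables keeps the mean at 1 while the variance decays like $1/n$.

```python
import numpy as np

# Cyclic thinning (illustrative sketch): V_n is the average of n i.i.d. Exp(1)
# variables, i.e. an Erlang(n) variable scaled to mean 1, with Var(V_n) = 1/n.
rng = np.random.default_rng(1)
variances = {}
for n in (1, 10, 100):
    v_n = rng.exponential(1.0, size=(50_000, n)).mean(axis=1)
    variances[n] = v_n.var()
    print(n, round(v_n.mean(), 3), round(variances[n], 4))   # variance shrinks like 1/n
```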
Interesting heavy-traffic regimes now arise when $\rho_n\pr 1$ as $n\pr\infty$. In \cite{ref17,ref18}, two heavy-traffic regimes are considered. The first (Kingman-type) regime assumes that $(1-\rho_n)n\pr\beta$ as $n\pr\infty$ for some fixed $\beta>0$. In this case, $W_n$ converges in distribution to an exponential random variable with mean $\sigma^2/2\beta$, where $\sigma^2=\sigma_U^2+\sigma_V^2$. The second (Gaussian) regime assumes $(1-\rho_n)\,\sqrt{n}\pr\beta$ as $n\pr\infty$ for some fixed $\beta>0$. In this case $\sqrt{n}\,W_n/\sigma$ converges in distribution to the all-time maximum $M_{\beta}$ of a one-dimensional directed random walk, starting at 0, with normally distributed step sizes of mean $-\beta$. This Gaussian random walk and $M_{\beta}$ are well studied, see \cite{ref2,ref5,ref7,ref8,ref16}.
\subsection{Main results} \label{subsec1.2}
In the present paper we apply the transform method for heavy-traffic analysis of the ${\rm GI}/{\rm G}/1$ and ${\rm GI}_n/{\rm G}_n/1$ queues.
We first consider the classical heavy-traffic regime, and provide a detailed proof of a version of Kingman's original result using the transform method. We do this with service times $V$ and interarrival times $U$ that do not depend on $\rho=1-\alpha$ (in Kingman's original result, a controlled dependence of $V$ and $U$ on $\alpha$ is allowed). In this more restricted setting, we show the following.
\begin{thm} \label{thm1}
With $\sigma_{\alpha}^2=(\sigma_V^2+\rho^{-2}\sigma_U^2)\rho$,
\beq \label{1.6}
\dE\,[e^{-\alpha sW}]=(1+\sigma_{\alpha}^2\,s/2)^{-1}+O(\alpha\,\log(1/\alpha)),~~~~~~\alpha\downarrow0,
\eq
uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq{-}1/2\sigma_{\alpha}^2$.
\end{thm}
As a consequence of Theorem~\ref{thm1}, we have for any $k=1,2,...$
\beq \label{1.7}
\dE\,[(\alpha W)^k]=k!(\tfrac12\sigma_{\alpha}^2)^k+O(\alpha\,{\log}(1/\alpha)),~~~~~~\alpha\downarrow 0.
\eq
We observe that for $k=1$ in \eqref{1.7} we get
\beq \label{1.7a}
\dE\,[\alpha W]=\tfrac12\sigma_{\alpha}^2+O(\alpha\,{\log}(1/\alpha)),~~~~~~\alpha\downarrow 0.
\eq
It turns out that, after appropriate identifications, the quantity $\tfrac12\sigma_\alpha^2$ on the right-hand side of \eqref{1.7a} coincides with the right-hand side of (6) in \cite{ref5a} (Kingman's bound, see \cite{ref11a}, Theorem~2 and (33)). Sharpenings of Kingman's bound are discussed in \cite{ref5a}, including a new upper bound \cite[Equation (17)]{ref5a} involving Lambert's W function.
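As a sanity check on \eqref{1.7a} (our own illustration, not part of the paper), for exponential $V$ and $U$ with mean 1 the queue is M/M/1 with $\dE[W]=\rho/(1-\rho)$ exactly, so $\dE[\alpha W]=\rho$ can be compared with $\tfrac12\sigma_\alpha^2$:

```python
# Sanity check of (1.7a) for V, U ~ Exp(1): the queue is then M/M/1 with
# E[W] = rho/(1 - rho) exactly, so E[alpha*W] = rho, and the deviation
# (1/2)*sigma_alpha^2 - rho = (1 - rho^2)/(2*rho) shrinks like alpha,
# consistent with the O(alpha*log(1/alpha)) error term.
errs = []
for alpha in (0.2, 0.1, 0.05, 0.025):
    rho = 1.0 - alpha
    exact = rho                          # E[alpha * W] for this M/M/1 queue
    ht = 0.5 * (1.0 + rho**-2) * rho     # (1/2) * sigma_alpha^2 from Theorem 1
    errs.append(ht - exact)
    print(alpha, ht - exact)
```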
We next apply the transform method to the ${\rm GI}_n/{\rm G}_n/1$ queues described in Subsection~\ref{subsec1.1}, where we let $(1-\rho_n)\asymp 1/n$, meaning that there are fixed $\beta_1$ and $\beta_2$ with $0<\beta_1\leq\beta_2<\infty$ such that $(1-\rho_n)n\in[\beta_1,\beta_2]$, $n=1,2,...\,$. This leads to the following result.
\begin{thm} \label{thm2}
With $\gamma_n=(\sigma_V^2+\rho_n^{-2}\sigma_U^2)\rho_n/(2n(1-\rho_n))$,
\beq \label{1.8}
\dE\,[e^{-tW_n}]=(1+\gamma_nt)^{-1}+O\Bigl(\frac{{\rm log}\,n}{\sqrt{n}}\Bigr),~~~~~~n\pr\infty,
\eq
uniformly in any bounded set of $t$ with ${\rm Re}(t)\geq{-}1/4\gamma_n$.
\end{thm}
From Theorem~\ref{thm2} we obtain for any $k=1,2,...$
\beq \label{1.9}
\dE\,[W_n^k]=k!\,\gamma_n^k+O\Bigl(\frac{{\rm log}\,n}{\sqrt{n}}\Bigr),~~~~~~n\pr\infty.
\eq
We then proceed to apply the transform method to the ${\rm GI}_n/{\rm G}_n/1$ queue when we let $(1-\rho_n)\asymp 1/\sqrt{n}$, meaning that there are fixed $\beta_1$ and $\beta_2$ with $0<\beta_1\leq\beta_2<\infty$ such that $(1-\rho_n)\,\sqrt{n}\in[\beta_1,\beta_2]$, $n=1,2,...\,$. In this regime, the integration contour $C$ occurring in (\ref{1.3}) can be chosen to pass through the saddle point $z=\zeta_{sp}$ of $h(z)={\rm log}(\psi({-}z))$ on the negative real axis, allowing a saddle point analysis (under an additional assumption). We show the following, with $M_{\beta}$ as in Subsection~\ref{subsec1.1}.
\begin{thm} \label{thm3}
With $\sigma_n=(h''(\zeta_{sp}))^{1/2}$ and $\beta_n={-}\zeta_{sp}\sigma_n\sqrt{n}$, we have $\sigma_n\asymp1$ and $\beta_n\asymp 1$ as $n\pr\infty$, and
\beq \label{1.10}
\dE\,\Bigl[\exp\Bigl({-}s\,\frac{\sqrt{n}}{\sigma_n}\,W_n\Bigr)\Bigr]=\dE\,[\exp({-}sM_{\beta_n})]+O\Bigl(\frac{1}{\sqrt{n}}\Bigr),~~~~~~n\pr\infty,
\eq
uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq{-}\,\frac12\,\beta_n$.
\end{thm}
From Theorem~\ref{thm3} we get for any $k=1,2,...$
\beq \label{1.11}
\dE\,\Bigl[\Bigl(\frac{\sqrt{n}}{\sigma_n}\,W_n\Bigr)^k\Bigr]=m_k(\beta_n)+O\Bigl(\frac{1}{\sqrt{n}}\Bigr),~~~~~~n\pr\infty,
\eq
where $m_k(\beta)=\dE\,[M_{\beta}^k]$. Theorem~\ref{thm3} can be refined by taking $h'''(\zeta_{sp})$ into account in the saddle point analysis. This yields Theorem~\ref{thm4}, see Section~\ref{sec5} for its precise formulation, where the right-hand side of (\ref{1.10}) is replaced by $\dE\,[\exp({-}R_n(s)\,M_{B_n})]+O(1/n)$, with appropriate non-linear transforms $R_n(s)$ and $B_n=B_n(\beta)$ of $s$ and $\beta$. Theorem~\ref{thm3} and its refinement Theorem~\ref{thm4} result from an adaptation of the saddle point method developed in \cite{ref9}, \cite{ref14}.
Theorem \ref{thm5} in Section \ref{sec5} gives a consequence of Theorem~\ref{thm4} on the level of moments.
Theorems~\ref{thm1}-\ref{thm4} generalize and refine some classical heavy-traffic results. Theorem~\ref{thm1} recovers Kingman’s weak convergence result \eqref{1.2}, and generalizes
this to a heavy-traffic limit theorem for all moments of $W$. The refined heavy-traffic approximations not only provide the heavy-traffic limit, but also contain pre-limit information (for $\rho$ away from 1) and shed light on the rate of convergence (the speed at which the limit is attained, as a function of the scaling parameter). Similarly, Theorems~\ref{thm2}-\ref{thm4}
recover results in \cite{ref17}, \cite{ref18} for convergence in
distribution, and convergence of the first two moments. In the paper we show that
all normalized moments of $W_n$ converge to those of $M_\beta$, again with rate of convergence and refinements. As a consequence, the theoretical results in Theorems~\ref{thm1}-\ref{thm4} give sharp approximations, not only in heavy traffic, but also in more moderate conditions.
We demonstrate the accuracy of the approximations by comparing with exact results. We also address the computational aspects of numerically calculating complex contour integrals, which is required for both the exact and approximate performance analysis.
In particular, we explain how to reliably calculate the Pollaczek contour integrals via numerical integration. Since we operate in heavy-traffic regimes, numerical integration can become cumbersome, with integration contours passing close to the origin, but we show how to deal with this.
\subsection{Organization of the paper} \label{subsec1.3}
In Section~\ref{sec2} we present assumptions and preliminaries on the function $\psi({-}z)=\dE\,[\exp({-}z(V-\frac{1}{\rho}\,U))]$ that occurs in the various Pollaczek integrals in Sections~\ref{sec3}, \ref{sec4} and \ref{sec5}. Section~\ref{sec2} also contains information on the function $h(z)={\rm log}\,\psi({-}z)$ that is heavily used in the dedicated saddle point method of Section~\ref{sec5}; we avoid using saddle points in Sections~\ref{sec3} and \ref{sec4} on the Kingman-type results. In Section~\ref{sec3} we present the formulation and proof of our version of Kingman's classical result (Theorem~\ref{thm1}), yielding convergence (with error assessment) of the mgf and all moments of $W$ to those of an exponentially distributed random variable with a tailored $\alpha$-dependent mean. In Section~\ref{sec4}, we consider the nearly deterministic queue in the regime $(1-\rho_n)\asymp 1/n$, and prove that the mgf and all moments of $W_n$ converge to those of a specifically designed exponentially distributed random variable (Theorem~\ref{thm2}). In Section~\ref{sec5} we present Theorem~\ref{thm3} and its refinement Theorem~\ref{thm4}, with consequences for moment convergence, for the nearly deterministic queue when $(1-\rho_n)\asymp 1/\sqrt{n}$. This requires an additional assumption on the function $\psi$, allowing a saddle point approach to Pollaczek's integral around the saddle point $\zeta_{sp}$, at the expense of an exponentially small error. The proof of the refinement Theorem~\ref{thm4}, and its consequence (\ref{5.6})
for moment convergence, is rather involved and technical, so that we have deferred details to Appendices~A and B. In Section~\ref{sec6} we summarize in detail the computational schemes for the quantities we want to calculate via Pollaczek's formula \eqref{1.3}.
In Section~\ref{sec8} we present our conclusions.
In the appendix we illustrate the results of Sections~\ref{sec3}, \ref{sec4} and \ref{sec5} for several specific cases, including $V$ and $U$ being Gamma distributed, and compare our asymptotic results with those of numerical integration.
\section{Preliminaries} \label{sec2}
The convergence results of the Kingman type given in the present paper will be shown under the condition that the function $\psi$ is analytic in an open set containing a strip $|\mbox{Re}(z)|\leq \delta$ with $\delta>0$. For the convergence results for nearly deterministic queues related to the maximum of the Gaussian random walk, we require one more condition, additional to the analyticity assumption on $\psi$. The latter results are shown using the dedicated saddle point method, and this requires an assumption that guarantees that one can confine attention to the immediate vicinity of the saddle point on the negative real axis when conducting an asymptotic analysis on the relevant Pollaczek integral when $n\to\infty$. We refer to Section \ref{sec5} for the technical details. This additional condition is satisfied for all special distributions listed above, except the deterministic and two-point distributions that yield functions $\psi(-x+iy)$, $y\in\dR$, that are (nearly) periodic in $y$, and thus take values (near to) $\psi(-x)$ for certain $y$ away from 0.
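The excluded deterministic case can be seen directly: for $V=U=1$ one has $\psi(-z)=e^{-z(1-1/\rho)}$, so $|\psi(-x+iy)|$ does not depend on $y$ at all. A minimal numerical illustration (our own sketch, with illustrative $\rho$ and $x$):

```python
import numpy as np

# Deterministic case V = U = 1: psi(-z) = exp(-z(1 - 1/rho)), so the modulus
# |psi(-(x + i*y))| = exp(-x(1 - 1/rho)) is independent of y: the integrand never
# decays away from the real axis, which is what the saddle point condition rules out.
rho, x = 0.95, -0.1
ys = np.linspace(0.0, 50.0, 6)
vals = np.abs(np.exp(-(x + 1j * ys) * (1.0 - 1.0 / rho)))
print(vals)   # all entries equal
```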
We use in the sequel the letter $\delta$ to denote a generic positive number that may take case-dependent values. We consider random variables $V,U\geq0$ that are independent with $\dE\,[V]=1=\dE\,[U]$ and $0<\sigma_V^2+\sigma_U^2<\infty$. We furthermore assume that there is a $\delta>0$ such that $\dE\,[\exp(zV)]$, $\dE\,[\exp(zU)]$ are analytic in an open strip containing $-\delta\leq {\rm Re}(z)\leq\delta$. For $\rho\in(0,1)$, we let
\beq \label{2.1}
\psi({-}\zeta)=\psi({-}\zeta\,;\,\rho)=\dE\,[e^{-\zeta(V-\frac{1}{\rho}U)}].
\eq
Then $\psi({-}\zeta)$ is analytic in an open strip containing $-\delta\leq{\rm Re}(\zeta)\leq\delta$ for some $\delta>0$. Since
\beq \label{2.2}
\psi({-}\zeta)\equiv\il_{-\infty}^{\infty}\,e^{-\zeta t}\,d\,G(t),
\eq
with $G(t)$ the cumulative distribution function of $V-\frac{1}{\rho}\,U$, we have that $\psi({-}\zeta)$, $-\delta\leq\zeta\leq\delta$, is logarithmically convex. We have, uniformly in $\rho\in[\frac12,1]$,
\begin{eqnarray} \label{2.3}
\psi({-}\zeta) & = & \dE\,\Bigl[1-\zeta\Bigl(V-\frac{1}{\rho}\,U\Bigr)+\frac12\,\zeta^2\Bigl(V-\frac{1}{\rho}\,U\Bigr)^2\Bigr]+O(\zeta^3) \nonumber \\[3mm]
& = & 1+\Bigl(\frac{1}{\rho}-1\Bigr)\,\zeta+\frac12\,\Bigl(\sigma_V^2+\dfrac{1}{\rho^2}\,\sigma_U^2+\Bigl(1-\frac{1}{\rho}\Bigr)^2\Bigr)\,\zeta^2+O(\zeta^3), \quad |\zeta|\leq\delta,
\end{eqnarray}
for some $\delta>0$. Therefore, there is a $\delta>0$ such that $\psi({-}\zeta)\geq1/2$ when $\rho\in[\frac12,1]$ and $-\delta\leq\zeta\leq0$. Hence, by continuity, there is a $\delta>0$ such that for $\rho\in[\frac12,1]$
\beq \label{2.4}
h(\zeta)={\rm log}\,\psi({-}\zeta)={\rm log}(\dE\,[e^{-\zeta(V-\frac{1}{\rho}U)}])
\eq
is well-defined and analytic as a principal value logarithm in an open set containing the rectangle $-\delta\leq{\rm Re}(\zeta)\leq0$, $|{\rm Im}(\zeta)|\leq\delta$.
We have for $\rho\in[\frac12,1]$
\beq \label{2.5}
h(0)=0,~~~~~~h'(0)=\frac{1}{\rho}-1,~~~~~~h''(0)=\sigma_V^2+\frac{1}{\rho^2}\,\sigma_U^2,
\eq
and there is a $\delta>0$ such that
\beq \label{2.6}
h(\zeta)=\Bigl(\frac{1}{\rho}-1\Bigr)\,\zeta+\frac12\,\Bigl(\sigma_V^2+\frac{1}{\rho^2}\,\sigma_U^2\Bigr)\,\zeta^2+O(\zeta^3)
\eq
in an open set containing the rectangle $-\delta\leq{\rm Re}(\zeta)\leq0$, $|{\rm Im}(\zeta)|\leq\delta$. There is a $\delta>0$ such that the function $h(\zeta)$, $-\delta\leq\zeta\leq0$, is convex. Furthermore, $h$ has, for $\rho$ sufficiently close to 1, a unique saddle point $\zeta_{sp}\in[{-}\delta,0]$. We have
\beq \label{2.7}
\zeta_{sp}={-}\,\frac{1}{\rho}~\frac{1-\rho}{\sigma_V^2+\frac{1}{\rho^2}\,\sigma_U^2}+O((1-\rho)^2),
\eq
and
\beq \label{2.8}
h(\zeta_{sp})={-}\,\frac{1}{\rho^2}~\frac{(1-\rho)^2}{2(\sigma_V^2+\frac{1}{\rho^2}\,\sigma_U^2)}+O((1-\rho)^3),~~~~~h''(\zeta_{sp})=\sigma_V^2+\frac{1}{\rho^2}\,\sigma_U^2+O(1-\rho).
\eq
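For exponential $V$ and $U$ with mean 1, $h(\zeta)=-\log((1+\zeta)(1-\zeta/\rho))$ and the saddle point equation $h'(\zeta)=0$ has the closed-form solution $\zeta_{sp}=(\rho-1)/2$, which makes a convenient check of the expansion \eqref{2.7}. A root-finding sketch (our own, with an illustrative $\rho$):

```python
from scipy.optimize import brentq

# Saddle point check (sketch) for V, U ~ Exp(1), where
# h(zeta) = log psi(-zeta) = -log((1 + zeta)(1 - zeta/rho)) and h'(zeta) = 0
# has the closed-form solution zeta_sp = (rho - 1)/2.
rho = 0.95

def h_prime(zeta):
    return -1.0 / (1.0 + zeta) + (1.0 / rho) / (1.0 - zeta / rho)

zeta_sp = brentq(h_prime, -0.4, -1e-9)
leading = -(1.0 / rho) * (1.0 - rho) / (1.0 + rho**-2)   # leading term of (2.7)
print(zeta_sp, (rho - 1.0) / 2.0, leading)
```

The numerical root agrees with $(\rho-1)/2$ to machine precision, and differs from the leading term of \eqref{2.7} by $O((1-\rho)^2)$, as claimed.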
\section{Classical heavy-traffic result of the Kingman type} \label{sec3}
With $V$ and $U$ as in Section~\ref{sec2}, we let
\beq \label{3.1}
W\stackrel{d}{=}\Bigl(W+V-\frac{1}{\rho}\,U\Bigr)^+,
\eq
where $\rho=1-\alpha$ and $\alpha\downarrow 0$. We shall show that
\beq \label{3.2}
{\rm log}(\dE\,[e^{-\alpha sW}])={-}{\rm log}(1+\tfrac12\,\sigma_{\alpha}^2s)+O(\alpha\,{\rm log}(1/\alpha)),~~~~~~\alpha\downarrow0,
\eq
uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq\frac12\,\mu_0$, where
\beq \label{3.3}
\sigma_{\alpha}^2=\frac{-1}{\mu_0}=\Bigl(\sigma_V^2+\frac{1}{\rho^2}\,\sigma_U^2\Bigr)\,\rho.
\eq
The mgf $\dE\,[\exp({-}t\bfx)]$ of a random variable $\bfx$ having the exponential probability distribution $\vart\,e^{-\vart x}$, $x\geq0$, with mean $1/\vart$, is given by $(1+t/\vart)^{-1}$, ${\rm Re}(t)>{-}\vart$. Hence, all moments of $\alpha W$ are equal to the moments of the exponential probability distribution with mean $\tfrac12\sigma_{\alpha}^2$, with error $O(\alpha\,{\rm log}(1/\alpha))$ as $\alpha\downarrow 0$.
We show this result by using the Pollaczek result for $W$, in which we follow the argumentation as given by Kingman in \cite{ref10} rather closely. Observe that $V$ and $U$ are independent of $\alpha$, which allows us to be explicit about the error term in (\ref{3.2}).
From Pollaczek's result, we have
\beq \label{3.4}
{\rm log}(\dE\,[e^{-\alpha sW}])=\frac{-1}{2\pi i}\,\il_C\,\frac{\alpha s}{\zeta(\alpha s-\zeta)}\,{\rm log}(1-\psi({-}\zeta))d\zeta,
\eq
where $\psi({-}\zeta)$ is as in (\ref{2.1}), and $C$ is a line parallel to, and to the left of, the imaginary axis, and to the right of the singularities of ${\rm log}(1-\psi({-}\zeta))$ in the open left half-plane, and $\alpha s$ in (\ref{3.4}) lies to the right of $C$. To be more detailed about the choice of $C$, we observe from (\ref{2.3}) that there is a $\delta>0$ such that
\beq \label{3.5}
\psi({-}\zeta)=1+\alpha\zeta/\rho+\tfrac12\,\sigma_{\alpha}^2\zeta^2/\rho+O(\alpha^2\zeta^2)+O(\zeta^3),~~~~~~|\zeta|\leq\delta,
\eq
where $\sigma_{\alpha}^2$ is as in (\ref{3.3}). The leading part $1+\alpha\zeta/\rho+\sigma_{\alpha}^2\zeta^2/2\rho$, considered for $\zeta\leq0$, in (\ref{3.5}) equals 1 for $\zeta=0$ and $\zeta=2\alpha\mu_0$, and is minimal for $\zeta=\alpha\mu_0$, where $\mu_0<0$ is as in (\ref{3.3}), with minimal value $1-\alpha^2/2\rho\sigma_{\alpha}^2$. By (\ref{2.2}) we have
\beq \label{3.6}
|\psi({-}\alpha\mu_0-i\eta)|\leq\psi({-}\alpha\mu_0)=1-\frac{\alpha^2}{2\rho\sigma_{\alpha}^2}+O(\alpha^3),~~~~~~\eta\in\dR,
\eq
and so
\beq \label{3.7}
|\psi({-}\alpha\mu_0-i\eta)|\leq1-\frac{\alpha^2}{4\rho\sigma_{\alpha}^2},~~~~~~\eta\in\dR,
\eq
when $\alpha$ is sufficiently small. For such $\alpha$, we can therefore choose $C=\{\alpha\mu_0+i\eta\;|\;\eta\in\dR\}$, with principal value of the log in (\ref{3.4}), and ${\rm Re}(s)\geq\frac12\,\mu_0$.
In (\ref{3.4}) we substitute $\zeta=\alpha z$, with $z\in\{\mu_0+i\eta\;|\;\eta\in\dR\}=C_0$ and $d\zeta=\alpha\,dz$, to get
\beq \label{3.8}
{\rm log}(\dE\,[e^{-\alpha sW}])=\dfrac{-1}{2\pi i}\,\il_{C_0}\,\frac{s}{z(s-z)}\,{\rm log}(1-\psi({-}\alpha z))\,dz.
\eq
From (\ref{3.5}) we have when $|\alpha z|\leq\delta$,
\beq \label{3.9}
\psi({-}\alpha z)=1+\alpha^2z/\rho+\sigma_{\alpha}^2\alpha^2z^2/2\rho+O(\alpha^4z^2)+O(\alpha^3z^3),
\eq
and so
\beq \label{3.10}
\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}=1+\tfrac12\,\sigma_{\alpha}^2z+O(\alpha^2z)+O(\alpha z^2).
\eq
We have from (\ref{3.7}) that both $1-\psi({-}\alpha z)$ and $-\alpha^2z/\rho$ lie in the open right half-plane when $z\in C_0$ and $\alpha$ is sufficiently small. Hence, with principal-value logarithms,
\beq \label{3.11}
{\rm log}(1-\psi({-}\alpha z))={\rm log}\Bigl(\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}\Bigr)+{\rm log}({-}\alpha^2z/\rho),~~~~~~z\in C_0.
\eq
Since ${\rm Re}(s)\geq\frac12\,\mu_0$, we have by Cauchy's theorem
\beq \label{3.12}
\frac{-1}{2\pi i}\,\il_{C_0}\,\frac{s}{z(s-z)}\,{\rm log}({-}\alpha^2z/\rho)\,dz=0,
\eq
and so
\beq \label{3.13}
{\rm log}(\dE\,[e^{-\alpha sW}])=\frac{-1}{2\pi i}\,\il_{C_0}\,\frac{s}{z(s-z)}\,{\rm log}\Bigl(\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}\Bigr)\,dz.
\eq
We shall show now that
\beq \label{3.14}
{\rm log}\Bigl(\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}\Bigr)={\rm log}(1+\tfrac12\,\sigma_{\alpha}^2z)+O(\alpha^2)+O(\alpha z)
\eq
when $|\alpha z|\leq c$ and $\alpha$ and $c$ are sufficiently small. We have from (\ref{3.10})
\beq \label{3.15}
\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}=(1+\tfrac12\,\sigma_{\alpha}^2z)\Bigl(1+\frac{O(\alpha^2z)+O(\alpha z^2)}{1+\frac12\,\sigma_{\alpha}^2z}\Bigr).
\eq
Now when $z=\mu_0+i\eta\in C_0$ with $\eta\in\dR$, we have by (\ref{3.3})
\begin{eqnarray} \label{3.16}
|1+\tfrac12\,\sigma_{\alpha}^2z|^2 & = & |\tfrac12+\tfrac12\,i\sigma_{\alpha}^2\eta|^2=\tfrac14+\tfrac14\,\sigma_{\alpha}^4\eta^2 \nonumber \\[3mm]
& = & \tfrac14\,\sigma_{\alpha}^4(\mu_0^2+\eta^2)=\tfrac14\,\sigma_{\alpha}^4\,|z|^2.
\end{eqnarray}
Hence
\beq \label{3.17}
\frac{O(\alpha^2z)+O(\alpha z^2)}{1+\frac12\,\sigma_{\alpha}^2z}=O(\alpha^2)+O(\alpha z),
\eq
and this has modulus $\leq\;1/2$ when $\alpha$ is sufficiently small and $|\alpha z|\leq c$ with $c$ determined by the implicit constants in the $O$'s in (\ref{3.10}). This gives (\ref{3.14}).
To finish the proof of (\ref{3.2}), we split the integral in (\ref{3.13}) into the parts $|z|\leq c/\alpha$ and $|z|\geq c/\alpha$. We have by (\ref{3.14})
\begin{align} \label{3.18}
&{\rm log}(\dE\,[e^{-\alpha sW}]) = \frac{-1}{2\pi i}\,\il_{\scriptsize{\ba{l} z\in C_0, \\ |z|\leq c/\alpha \ea}}\,\frac{s}{z(s-z)}\,{\rm log}(1+\tfrac12\,\sigma_{\alpha}^2z)\,dz \nonumber \\[3.5mm]
& +~\il_{\scriptsize{\ba{l} z\in C_0, \\ |z|\leq c/\alpha \ea}}\,\frac{s}{z(s-z)}\,(O(\alpha^2)+O(\alpha z))\,dz - \frac{1}{2\pi i}\,\il_{\scriptsize{\ba{l} z\in C_0, \\ |z|\geq c/\alpha \ea}}\,\frac{s}{z(s-z)}\,{\rm log}\Bigl(\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}\Bigr)\,dz.
\end{align}
For the first integral on the second line of (\ref{3.18}), we have
\begin{eqnarray} \label{3.19}
& \mbox{} & \Bigl|\il_{\scriptsize{\ba{l} z\in C_0, \\ |z|\leq c/\alpha \ea}}\,\frac{s}{z(s-z)}\,(O(\alpha^2)+O(\alpha z))\,dz\Bigr| \nonumber \\[3.5mm]
& & =~O(\alpha^2)+O\Bigl(\il_{\scriptsize{\ba{l} z\in C_0, \\ |z|\leq c/\alpha \ea}}\,\frac{\alpha}{|z|}\,|dz|\Bigr)=O(\alpha^2)+O(\alpha\,{\rm log}(1/\alpha)),
\end{eqnarray}
uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq\frac12\,\mu_0$. For the second integral on the second line of (\ref{3.18}), we use (\ref{3.7}), so that
\beq \label{3.20}
\frac{1}{4\sigma_{\alpha}^2|z|}\leq\Bigl|\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}\Bigr|\leq\frac{2\rho}{\alpha^2|z|}\,,~~~~z\in C_0.
\eq
Therefore, for $z\in C_0$ and $|z|\geq c/\alpha$,
\beq \label{3.21}
-{\rm log}\,|z|+{\rm log}\Bigl(\frac{1}{4\sigma_{\alpha}^2}\Bigr)\leq{\rm log}\Bigl|\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}\Bigr|\leq{\rm log}\Bigl(\frac{2\rho}{c\alpha}\Bigr).
\eq
Using that $(1-\psi({-}\alpha z))/({-}\alpha^2z/\rho)$ lies in the cut plane $\dC\backslash({-}\infty,0]$, so that its argument is between $-\pi$ and $\pi$, we get
\beq \label{3.22}
\Bigl|{\rm log}\Bigl(\frac{1-\psi({-}\alpha z)}{-\alpha^2z/\rho}\Bigr)\Bigr|=O({\rm log}\,|z|)+O({\rm log}(1/\alpha))+O(1).
\eq
From (\ref{3.22}) it follows that the second integral on the second line of (\ref{3.18}) is $O(\alpha\,{\rm log}(1/\alpha))$, uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq\frac12\,\mu_0$.
We conclude that
\begin{eqnarray} \label{3.23}
{\rm log}(\dE\,[e^{-\alpha sW}]) = \frac{-1}{2\pi i}\,\il_{\scriptsize{\ba{l} z\in C_0, \\ |z|\leq c/\alpha \ea}}\,\frac{s}{z(s-z)}\,{\rm log}(1+\tfrac12\,\sigma_{\alpha}^2z)\,dz +~O(\alpha\,{\rm log}(1/\alpha)),
\end{eqnarray}
uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq\frac12\,\mu_0$. We extend the integration range in the integral in (\ref{3.23}) to all $z\in C_0$, at the expense of an error $O(\alpha\,{\rm log}(1/\alpha))$ uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq\frac12\,\mu_0$. Finally,
\beq \label{3.24}
\frac{-1}{2\pi i}\,\il_{C_0}\,\frac{s}{z(s-z)}\,{\rm log}(1+\tfrac12\,\sigma_{\alpha}^2z)\,dz={-}{\rm log}(1+\tfrac12\,\sigma_{\alpha}^2s),~~~~~~{\rm Re}(s)\geq\tfrac12\,\mu_0,
\eq
by Cauchy's theorem, and we get (\ref{3.2}).
\section{Kingman-type result for nearly deterministic queues} \label{sec4}
We consider
\beq \label{4.1}
W_n\stackrel{d}{=}\Bigl(W_n+V_n-\frac{1}{\rho_n}\,U_n\Bigr)^+,
\eq
with
\beq \label{4.2}
V_n=\frac1n\,\sum_{k=1}^n\,V_{n,k},~~~~~~U_n=\frac1n\,\sum_{k=1}^n\,U_{n,k},
\eq
where $0<\rho_n<1$, $V_{n,k}$ are i.i.d.\ copies of $V$ and $U_{n,k}$ are i.i.d.\ copies of $U$, with $V$ and $U$ as in Section~\ref{sec2}, and $V_{n,k}$ and $U_{n,k}$ are independent. We shall show that, when $(1-\rho_n)\asymp 1/n$ (so that there are fixed $\beta_1$, $\beta_2$ with $0<\beta_1\leq\beta_2<\infty$ such that $(1-\rho_n)\,n\in[\beta_1,\beta_2]$ for $n=1,2,...$),
\beq \label{4.3}
{\rm log}(\dE\,[e^{-tW_n}])={-}{\rm log}(1+\gamma_nt)+O\Bigl(\frac{{\rm log}\,n}{\sqrt{n}}\Bigr),
\eq
uniformly in any bounded set of $t$ with ${\rm Re}(t)\geq\frac12\,x_0$. Here
\beq \label{4.4}
\gamma_n=\frac{\sigma_V^2+\rho^{-2}\sigma_U^2}{2n(1-\rho_n)}\,\rho_n,~~~~~~x_0=\frac{-1}{2\gamma_n},
\eq
with $\gamma_n$ bounded away from 0 and $\infty$.
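The recursion \eqref{4.1}--\eqref{4.2} can be illustrated by direct simulation. The following minimal Monte Carlo sketch (not part of the paper's numerics) borrows the Gamma example of Section~\ref{sec7}, with $n=10$ and $\rho_n=0.9$; Table~\ref{tbl:2} reports the exact value $\dE[W_n]=1.212$ for this case.

```python
import random

# Minimal Monte Carlo sketch of the recursion (4.1)-(4.2), borrowing the Gamma
# example of Section 7 (an assumption here): V ~ Gamma(2, 1/2) and
# U ~ Gamma(2/5, 5/2), both with mean 1, and n = 10, rho_n = 0.9.
# V_n and U_n, as averages of n i.i.d. copies, are Gamma(n*k, theta/n).
random.seed(1)
n, rho = 10, 0.9
kV, thV = 2.0, 0.5
kU, thU = 0.4, 2.5

W, total, burn, steps = 0.0, 0.0, 50_000, 250_000
for i in range(burn + steps):
    Vn = random.gammavariate(n * kV, thV / n)  # V_n of (4.2)
    Un = random.gammavariate(n * kU, thU / n)  # U_n of (4.2)
    W = max(0.0, W + Vn - Un / rho)            # Lindley recursion (4.1)
    if i >= burn:
        total += W
print(total / steps)  # close to the exact value 1.212 of Table 2
```

The long burn-in compensates for the slow relaxation of the recursion at load $\rho_n=0.9$; the estimate should match Table~\ref{tbl:2} to within Monte Carlo error.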
We use again Pollaczek's result, so that
\beq \label{4.5}
{\rm log}(\dE\,[e^{-tW_n}])=\frac{-1}{2\pi i}\,\il_C\,\frac{t}{z(t-z)}\,{\rm log}(1-\varp({-}z))\,dz,
\eq
where for $-n\delta\leq{\rm Re}(z)\leq n\delta$ (with $\delta$ as in the first paragraph of Section~\ref{sec2})
\beq
\label{4.6}
\varp({-}z)=\dE\,[e^{-z(V_n-\frac{1}{\rho_n}\,U_n)}]=(\dE\,[e^{-\frac{z}{n}\,(V-\frac{1}{\rho_n}\,U)}])^n=\psi^n({-}z/n),
\eq
with $\psi$ as in (\ref{2.1}) with $\rho=\rho_n$, and $C$ is a line parallel to, and to the left of, the imaginary axis, and to the right of the singularities of ${\rm log}(1-\varp({-}z))$ in the open left half-plane; $t$ lies to the right of $C$.
Assume that $1-\rho_n\asymp 1/n$, and suppress the subscript $n$ on $\rho_n$ and $\gamma_n$ below for conciseness. We use (\ref{2.3}) with $\zeta=z/n$ and $z=O(\sqrt{n})$, so that
\beq \label{4.7}
\psi({-}z/n)=1+\Bigl(\frac{1}{\rho}-1\Bigr)\,\frac{z}{n}+\Bigl(\frac{1}{\rho}-1\Bigr)\,\gamma\,\frac{z^2}{n}+O\Bigl(\frac{z^2}{n^4}\Bigr)+O\Bigl(\frac{z^3}{n^3}\Bigr).
\eq
With $\varp({-}z)$ as in \eqref{4.6} and using the expansion $(1+X)^n=1+nX+\frac12\,n^2X^2+O(n^3X^3)$, valid for $X=O(\frac1n)$, we get
\beq \label{4.8}
\varp({-}z)=1+\Bigl(\frac{1}{\rho}-1\Bigr)\,z+\Bigl(\frac{1}{\rho}-1\Bigr)\,\gamma z^2+O\Bigl(\frac{|z|^2+|z|^3+|z|^4}{n^2}\Bigr).
\eq
In the $O$-term in (\ref{4.8}), contributions like $z^6/n^4$, which are $O(z^2/n^2)$ when $z=O(\sqrt{n})$, have been collected. The leading part of the right-hand side of (\ref{4.8}), considered for real $z\leq0$, equals 1 for $z=0$ and $z={-}1/\gamma$, and is minimal for $z={-}1/(2\gamma)=x_0$, with minimum value $1-(1-\rho)/4\rho\gamma$. Therefore, for large $n$,
\beq \label{4.9}
|\varp({-}x_0-iy)|\leq\varp({-}x_0)\leq1-(1-\rho)/8\gamma,~~~~~~y\in\dR.
\eq
Hence, we can use $C=\{x_0+iy\:|\:y\in\dR\}$ in (\ref{4.5}), so that $1-\varp({-}z)$ has positive real part when $z\in C$ and $n$ is large, with principal-value logarithm for the log in the integral.
We have for $z\in C$, $z=O(\sqrt{n})$ from (\ref{4.8}) and $(1-\rho_n)\asymp1/n$,
\beq \label{4.10}
\frac{1-\varp({-}z)}{-(\frac{1}{\rho}-1)\,z}=1+\gamma z+O\Bigl(\frac{|z|+|z|^2+|z|^3}{n}\Bigr).
\eq
We are now in a position quite similar to that in Section~\ref{sec3} from (\ref{3.10}) onwards. In particular, using that $|1+\gamma z|=\gamma|z|$ for $z\in C$, we have
\begin{eqnarray} \label{4.11}
{\rm log}\Bigl(\frac{1-\varp({-}z)}{-(\frac{1}{\rho}-1)\,z}\Bigr) & = & {\rm log}(1+\gamma z)+O\Bigl(\frac{|z|+|z|^2+|z|^3}{n(1+\gamma z)}\Bigr) \nonumber \\[3.5mm]
& = & {\rm log}(1+\gamma z)+O\Bigl(\frac{1+|z|+|z|^2}{n}\Bigr),
\end{eqnarray}
when $|z|\leq c\sqrt{n}$ and $c>0$ is small enough to ensure that the $O$-terms in (\ref{4.11}) do not exceed $1/2$. Furthermore,
\begin{eqnarray} \label{4.12}
& \mbox{} & \il_{\scriptsize{\ba{l} z\in C, \\ |z|\leq c\sqrt{n}\ea}}\,\Bigl|\frac{t}{z(t-z)}\Bigr|\,\frac{1+|z|+|z|^2}{n}\,|dz| \nonumber \\[3.5mm]
& & =O\Bigl(\il_{\scriptsize{\ba{l} z\in C, \\ |z|\leq c\sqrt{n}\ea}}\,\frac{1+|z|+|z|^2}{n\,|z|^2}\,|dz|\Bigr)=O\Bigl(\frac{1}{\sqrt{n}}\Bigr),
\end{eqnarray}
uniformly in any bounded set of $t$ with ${\rm Re}(t)\geq\frac12\,x_0$. Proceeding then as in Section~\ref{sec3} from (\ref{3.18}) onwards, with (\ref{4.12}) as substitute for (\ref{3.19}) and $\sqrt{n}$ instead of $1/\alpha$, we get (\ref{4.3}).
\section{Gaussian regime for nearly deterministic queues} \label{sec5}
We consider the same setting as in Section~\ref{sec4}, but now we assume that $(1-\rho_n)\asymp 1/\sqrt{n}$, so that there are fixed $\beta_1$, $\beta_2$ with $0<\beta_1\leq\beta_2<\infty$ such that $(1-\rho_n)\,\sqrt{n}\in [\beta_1,\beta_2]$ for $n=1,2,...\,$. The precise formulation of our results requires quantities derived from $h(\zeta)={\rm log}\,\psi({-}\zeta)$ at the saddle point $\zeta_{sp}$ of $h$ on the negative real axis, and an additional condition discussed below.
We shall show the following (Theorem~\ref{thm3}). Let
\beq \label{5.1}
\sigma_n=(h''(\zeta_{sp}))^{1/2},~~~~~~\beta_n={-}\zeta_{sp}\,\sigma_n\,\sqrt{n}.
\eq
Then we have $\sigma_n\asymp 1$, $\beta_n\asymp 1$ as $n\pr\infty$, and
\beq \label{5.2}
{\rm log}\Bigl(\dE\,\Bigl[\exp\Bigl({-}s\,\frac{\sqrt{n}}{\sigma_n}\,W_n\Bigr)\Bigr]\Bigr)={\rm log}(\dE\,[e^{-sM_{\beta_n}}])+O\Bigl(\frac{1}{\sqrt{n}}\Bigr),~~~~~~n\pr\infty,
\eq
uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq{-}\frac12\,\beta_n$. We shall also show the following refinement of Theorem~\ref{thm3}.
\begin{thm} \label{thm4}
Let $\sigma_n$ and $\beta_n$ be as in (\ref{5.1}), and let
\beq \label{5.3}
d_2={-}\,\frac{h'''(\zeta_{sp})}{6h''(\zeta_{sp})}.
\eq
Then $d_2=O(1)$, $n=1,2,...\,$, and, uniformly in any bounded set of $s$ with ${\rm Re}(s)\geq{-}\,\frac12\,\beta_n$,
\beq \label{5.4}
{\rm log}\Bigl(\dE\,\Bigl[\exp\Bigl({-}s\,\frac{\sqrt{n}}{\sigma_n}\,W_n\Bigr)\Bigr]\Bigr)={\rm log}(\dE\,[e^{-R_nM_{B_n}}])+O\Bigl(\frac1n\Bigr),~~~~~~n\pr\infty,
\eq
where
\beq \label{5.5}
B_n=\frac{\beta_n}{1+\beta_n\varp_n},~~~~~~R_n=R_n(s)=\frac{s}{(1+\beta_n\varp_n)(1+(s+\beta_n)\varp_n)},
\eq
with $\varp_n=d_2/(\sigma_n\sqrt{n})=O(1/\sqrt{n})$.
\end{thm}
From (\ref{5.2}) the moments of $\frac{\sqrt{n}}{\sigma_n}\,W_n$ are approximated with error $O(1/\sqrt{n})$ by the moments $m_k(\beta_n)$ of $M_{\beta_n}$ as in (\ref{1.11}).
As a consequence of Theorem~\ref{thm4}, we have the following result.
\begin{thm}\label{thm5}
For $k=1,2,...$
\begin{align}
\dE\,\Bigl[\Bigl(\frac{\sqrt{n}}{\sigma_n}\,W_n\Bigr)^k\Bigr]\,&\,=({-}1)^k\,\Bigl(\frac{d}{ds}\Bigr)^k\,(\dE\,[e^{-R_n(s)M_{B_n}}])\Bigr|_{s=0}+O\Bigl(\frac1n\Bigr)
\nonumber\\
&\,=\frac{m_k(B_n)}{(1+\beta_n\varp_n)^{2k}}+\frac{k(k-1)m_{k-1}(B_n)\varp_n}{(1+\beta_n\varp_n)^{2k-1}}+O\Bigl(\frac1n\Bigr),
\label{5.6}
\end{align}
with $\varp_n$ as in Theorem~\ref{thm4} and $m_k(B_n)=\dE\,[M_{B_n}^k]$.
\end{thm}
We shall prove Theorem~\ref{thm3} below, and we present the proofs of Theorem~\ref{thm4} and Theorem~\ref{thm5} in Appendices~\ref{appA} and~\ref{appB}. For all these proofs, we use again Pollaczek's formula (\ref{4.5}) in which we substitute $\zeta=z/n$. Thus, we have
\beq \label{5.8}
{\rm log}(\dE\,[e^{-tW_n}])=\frac{-1}{2\pi i}\,\il_{C_n}\,\frac{t/n}{\zeta(t/n-\zeta)}\,{\rm log}(1-\psi^n({-}\zeta))\,d\zeta,
\eq
where $C_n=\frac1n\,C$ is a line parallel to, and to the left of, the imaginary axis, and to the right of the singularities of ${\rm log}(1-\psi^n({-}\zeta))$, and $t/n$ lies to the right of $C_n$.
We now need the following assumption, because it allows us to integrate in (\ref{5.8}) over $\zeta$ with $|{\rm Im}(\zeta)|\leq\delta$, at the expense of an error of exponential decay as $n\pr\infty$.
\begin{assumption}\label{ass:delta}
There are $\delta>0$ and $\eps>0$ such that for all $\rho\in[\frac12,1)$, $x\in[{-}\delta,0)$ and $y\in\dR$ with $|y|>\delta$, we have $|\psi({-}x+iy)|<\psi({-}x)-\eps$.
\end{assumption}
With reference to Section~\ref{sec2}, we can assume that $\delta>0$ is such that $h(\zeta)={\rm log}\,\psi({-}\zeta)$ is analytic in the rectangle $-\delta\leq{\rm Re}(\zeta)\leq0$, $|{\rm Im}(\zeta)|\leq\delta$, and, by taking $n$ sufficiently large with $(1-\rho)=(1-\rho_n)\asymp 1/\sqrt{n}$ in (\ref{2.7}), that the saddle point $\zeta_{sp}$ of $h$ lies in this rectangle. We then have, with exponentially small error as $n\pr\infty$,
\beq \label{5.9}
{\rm log}(\dE\,[e^{-t W_n}])=\frac{-1}{2\pi i}\,\il_{\zeta_{sp}-i\delta}^{\zeta_{sp}+i\delta}\,\frac{t/n}{\zeta(t/n-\zeta)}\,{\rm log}(1-e^{nh(\zeta)})\,d\zeta
\eq
when ${\rm Re}(t/n)\geq\frac12\,\zeta_{sp}$.
We have from (\ref{2.6}--\ref{2.8}) and $(1-\rho_n)\asymp 1/\sqrt{n}$ that
\beq \label{5.10}
\zeta_{sp}\asymp\frac{1}{\sqrt{n}},~~~~~~h(\zeta_{sp})\asymp\frac1n,~~~~~~\sigma_n^2=h''(\zeta_{sp})\asymp1,
\eq
and this shows that $\sigma_n\asymp1$, $\beta_n\asymp1$ and also that $d_2=O(1)$, see (\ref{5.1}) and (\ref{5.3}).
For both Theorems~\ref{thm3} and~\ref{thm4}, we shall work from the integral in (\ref{5.9}) towards the integral representation of the mgf $\dE\,[\exp({-}s\,M_{\beta})]$ of the maximum $M_{\beta}$ of the Gaussian random walk with drift $-\beta$. For the latter we have (from Pollaczek's formula, applied to $W=(W+V-U)^+$ with $V$ and $U$ having pdf's $\frac{1}{\sqrt{2\pi}}\,\exp({-}\frac12\,(x-A)^2)\,\chi_{[0,\infty)}(x)$ and $\delta_{\beta}(x-A)$, respectively, and letting $A\pr\infty$)
\beq \label{5.11}
{\rm log}(\dE\,[e^{-s M_{\beta}}])=\frac{-1}{2\pi i}\,\il_C\,\frac{s}{z(s-z)}\,{\rm log}(1-e^{\beta z+\frac12 z^2})\,dz,
\eq
where $C$ is a line parallel to, and to the left of, the imaginary axis, and $s$ lies to the right of $C$. Substituting $z={-}\beta+iy$, $-\infty<y<\infty$, we get for $s\in\dC$, ${\rm Re}(s)>{-}\beta$,
\beq \label{5.12}
{\rm log}(\dE\,[e^{-sM_{\beta}}])=\frac{1}{2\pi}\,\il_{-\infty}^{\infty}\,\frac{s}{(\beta-iy)(s+\beta-iy)}\,{\rm log}(1-e^{-\frac12\beta^2-\frac12 y^2})\,dy.
\eq
We make in the integral in (\ref{5.9}) a substitution that brings $\exp(nh(\zeta))$ in Gaussian form. We thus let $\zeta=\zeta(v)$ be the solution of the equation
\beq \label{5.13}
h(\zeta(v))=h(\zeta_{sp})-\tfrac12\,v^2\,h''(\zeta_{sp})
\eq
that satisfies $\zeta(v)=\zeta_{sp}+iv+O(v^2)$ as $v\pr0$. By Lagrange's theorem, there is an $r>0$, independent of $n$, such that
\beq \label{5.14}
\zeta(v)=\zeta_{sp}+iv+\sum_{l=2}^{\infty}\,d_l(iv)^l,~~~~~~|v|\leq r,
\eq
with real $d_l$, and $d_2$ given by (\ref{5.3}), see \cite{ref9}, end of Section~3 for more details about such a substitution. With the substitution $\zeta=\zeta(v)$, $-r\leq v\leq r$, in (\ref{5.9}), we get with exponentially small error
\beq \label{5.15}
{\rm log}(\dE\,[e^{-tW_n}])=\frac{-1}{2\pi i}\,\il_{-r}^r\,\frac{\zeta'(v)\,t/n}{\zeta(v)(t/n-\zeta(v))}\,{\rm log}(1-e^{nh(\zeta_{sp})-\frac12 v^2h''(\zeta_{sp})})\,dv
\eq
when $n\pr\infty$ and ${\rm Re}(t)\geq\frac12\,n\zeta_{sp}$. We write for conciseness $\sigma=\sigma_n$ in the sequel, and we take $t=s\,\sqrt{n}/\sigma$ in (\ref{5.15}) and substitute $y=v\sigma\sqrt{n}$, see (\ref{5.1}). Then, for $s$ in a bounded set contained in ${\rm Re}(s)\geq{-}\frac12\,\beta_n$ (so that ${\rm Re}(t)\geq{-}\frac12\,\beta_n\,\sqrt{n}/\sigma=\frac12\,n\,\zeta_{sp}$, see (\ref{5.1})), we have with exponentially small error
\begin{eqnarray} \label{5.16}
& \mbox{} & {\rm log}\Bigl(\dE\,\Bigl[\exp\Bigl({-}\,\frac{s\sqrt{n}}{\sigma}\,W_n\Bigr)\Bigr]\Bigr) \nonumber \\[3.5mm]
& & =~\frac{1}{2\pi}\,\il_{-R}^R\,\frac{is\zeta'(y/\sigma\sqrt{n})\,{\rm log}(1-e^{nh(\zeta_{sp})-\frac12 y^2})} {\sigma\sqrt{n}\,\zeta(y/\sigma\sqrt{n})(s-\sigma\sqrt{n}\,\zeta(y/\sigma\sqrt{n}))}\,dy,
\end{eqnarray}
where $R=r\sigma\sqrt{n}$.
The remainder of the proofs of (\ref{5.2}) and Theorem~\ref{thm4} now consists of approximating $nh(\zeta_{sp})$ by $-\frac12\,\beta_n^2$ and $-\frac12\,B_n^2$, respectively, and approximating the front factor
\beq \label{5.17}
FF=\frac{is\zeta'(y/\sigma\sqrt{n})}{\sigma\sqrt{n}\,\zeta(y/\sigma\sqrt{n})(s-\sigma\sqrt{n}\,\zeta(y/\sigma\sqrt{n}))},
\eq
using a linear and quadratic approximation, respectively, from the power series in (\ref{5.14}).
For (\ref{5.2}) we thus use in (\ref{5.16})
\beq \label{5.18}
\zeta'(y/\sigma\sqrt{n})=i+O(y/\sqrt{n})\,,~~~~~\sigma\sqrt{n}\,\zeta(y/\sigma\sqrt{n})={-}\beta_n+iy+O(y^2/\sqrt{n}),
\eq
and obtain, uniformly in any bounded set of $s$ such that ${\rm Re}(s)\geq{-}\frac12\,\beta_n$,
\begin{eqnarray} \label{5.19}
& \mbox{} & {\rm log}\Bigl(\dE\,\Bigl[\exp\Bigl({-}\,\frac{s\sqrt{n}}{\sigma_n}\,W_n\Bigr)\Bigr]\Bigr) \nonumber \\[3.5mm]
& & =~\frac{1}{2\pi}\,\il_{-R}^R\,\frac{s\,{\rm log}(1-e^{-\frac12\beta_n^2-\frac12 y^2})}{(\beta_n-iy)(s+\beta_n-iy)}\,dy\Bigl(1+O\Bigl(\frac{1}{\sqrt{n}}\Bigr)\Bigr),
\end{eqnarray}
where we have restored $\sigma=\sigma_n$ on the left-hand side of (\ref{5.19}). Recalling that $R=r\,\sigma_n\sqrt{n}$, with $\sigma_n\asymp1$ and $r>0$ independent of $n$, we see that the integral in (\ref{5.19}) equals the integral in (\ref{5.12}), with $\beta_n$ instead of $\beta$, within exponentially small error when ${\rm Re}(s)\geq{-}\frac12\,\beta_n$. This completes the proof of Theorem~\ref{thm3}.
The proof of Theorem~\ref{thm4} is similar, but the details require somewhat more elaboration, and are therefore given in Appendix~\ref{appA}. We show Theorem~\ref{thm5} in Appendix~\ref{appB}, where we also show that the $O(1/\sqrt{n})$ at the right-hand side of \eqref{1.10} can be replaced by $O(1/n)$ when the third cumulants of $V$ and $U$ are equal.
\section{Computational issues} \label{sec6}
In this section we present computational schemes for
exact and approximate values of the (properly scaled) moments of the steady-state waiting time distribution $W$ via Pollaczek's formula \eqref{1.3},
\beq\label{75}
\log\big(\dE[e^{-sW}]\big)=\frac{-1}{2\pi i}\int_C \frac{s}{z(s-z)}\log(1-\psi(-z))\,dz,
\eq
with $\psi(-z)=\dE[\exp(-z(V-\frac1\rho U))]$, as earlier. In Section~\ref{sec7} this is made specific by making various choices for $V$ and $U$.
\subsection{Moments and cumulants}\label{subsec61}
Assume that $X$ is a random variable with finite moments of all orders. We compute the moments
\beq\label{76}
m_k(X)=\dE[X^k]=\left(\frac{d}{ds}\right)^k\left(\dE[e^{sX}]\right)_{s=0},\qquad k=0,1,\dots,
\eq
of $X$ from the cumulants
\beq\label{77}
c_l(X)=\left(\frac{d}{ds}\right)^l\log\left(\dE[e^{sX}]\right)_{s=0},\qquad l=0,1,\dots,
\eq
of $X$ recursively according to
\beq\label{78}
m_0(X)=1\,;\qquad m_k(X)=\sum_{l=1}^k\binom{k-1}{l-1}c_l(X)\,m_{k-l}(X),\qquad k=1,2,\dots\,.
\eq
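The recursion \eqref{78} is a direct translation of the cumulant-to-moment relation; a minimal Python sketch follows. The test case, a standard normal with $c_1=0$, $c_2=1$ and vanishing higher cumulants, is an assumption made purely for illustration.

```python
from math import comb

def moments_from_cumulants(c, K):
    """Moments m_0..m_K from cumulants c[1..K] via (78):
    m_0 = 1,  m_k = sum_{l=1}^{k} C(k-1, l-1) c_l m_{k-l}."""
    m = [1.0]
    for k in range(1, K + 1):
        m.append(sum(comb(k - 1, l - 1) * c[l] * m[k - l]
                     for l in range(1, k + 1)))
    return m

# Standard normal: c_1 = 0, c_2 = 1, all higher cumulants vanish, so the
# moments are the double factorials 1, 0, 1, 0, 3, 0, 15.
c = {l: 0.0 for l in range(1, 7)}
c[2] = 1.0
print(moments_from_cumulants(c, 6))  # [1.0, 0.0, 1.0, 0.0, 3.0, 0.0, 15.0]
```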
\subsection{Moments in the classical heavy-traffic Kingman case}\label{subsec62}
With $\alpha=1-\rho\,\downarrow\,0$, we compute the moments $m_k(\alpha W)=\dE[(\alpha W)^k]$ of $\alpha W$ from the cumulants of $\alpha W$ according to \eqref{78} in Subsection~\ref{subsec61}. The latter are obtained from the appropriate version \eqref{3.8} of Pollaczek's formula, so that
\begin{align}
c_l(\alpha W)\,&=\,(-1)^l\left(\frac{d}{ds}\right)^l\left(\frac{-1}{2\pi i}\int_{C_0}\frac{s}{z(s-z)}\log(1-\psi(-\alpha z))dz\right)_{s=0}\nonumber\\
&=\,\frac{(-1)^l l!}{\pi}\int_0^\infty \mbox{Re}\left[\frac{\log(1-\psi(-\alpha z))}{z^{l+1}}\right]dy,
\label{79}
\end{align}
where $C_0=\{z=\mu_0+iy\,|\,-\infty<y<\infty\}$ with $\mu_0=-1/\sigma_\alpha^2=-1/[(\sigma_V^2+\rho^{-2}\sigma^2_U)\rho]$, compare \cite[Equation (15)]{ref1}.
From the result of Section~\ref{sec3}, see \eqref{1.7} in Theorem \ref{thm1}, we have for $k=1,2,\dots$
\begin{equation}\label{eqn80}
\dE[(\alpha W)^k]=k! (\frac12 \sigma_\alpha^2)^k+O(\alpha\log(1/\alpha)), \quad \alpha\downarrow0,
\end{equation}
where $\sigma_\alpha^2=(\sigma_V^2+\rho^{-2}\sigma^{2}_U)\rho$.
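The quadrature behind \eqref{79} can be sketched in a few lines of Python (not the Mathematica implementation used for the tables). We borrow the Gamma example of Section~\ref{sec7} with $\vart_U=5/2$, $\vart_V=1/2$ and $\alpha=1/10$, an assumption made here so that the exact values $1.396$ and $4.092$ of Table~\ref{tbl:1} are available as a check.

```python
import cmath
from math import pi, factorial

# Quadrature sketch for (79) in the Gamma example of Section 7 (an assumption
# here): psi(-zeta) = (1 + th_V zeta)^(-k_V) (1 - th_U zeta/rho)^(-k_U),
# with th_U = 5/2, th_V = 1/2 and alpha = 1/10, as in Table 1.
th_U, th_V = 5/2, 1/2
k_U, k_V = 1/th_U, 1/th_V
alpha = 1/10
rho = 1 - alpha
sigma2 = (th_V + th_U/rho**2)*rho           # sigma_alpha^2
mu0 = -1/sigma2                             # contour C_0: Re(z) = mu0

def psi(zeta):                              # psi(-zeta) of (6.4)
    return (1 + th_V*zeta)**(-k_V) * (1 - th_U*zeta/rho)**(-k_U)

def cumulant(l, Y=400.0, N=250_000):        # c_l(alpha W) by the trapezoidal rule
    h = Y/N
    total = 0.0
    for j in range(N + 1):
        z = complex(mu0, h*j)
        v = (cmath.log(1 - psi(alpha*z)) / z**(l + 1)).real
        total += v if 0 < j < N else 0.5*v
    return (-1)**l * factorial(l) / pi * h * total

c1, c2 = cumulant(1), cumulant(2)
m1, m2 = c1, c2 + c1**2                     # moment recursion (78)
print(m1, m2)  # Table 1 lists the exact values 1.396 and 4.092
```

The integrand decays like $y^{-(l+3.4)}$ for these shape parameters, so truncating the range at $Y=400$ is harmless; on $C_0$ one has $|\psi({-}\alpha z)|<1$, so the principal-value logarithm is continuous along the contour.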
\subsection{Moments in the nearly deterministic heavy-traffic Kingman case}\label{subsec63}
With $1-\rho_n \asymp 1/n$ and $W_n$ as in Section~\ref{sec4}, we compute the moments $m_k(W_n)=\dE[W_n^k]$ of $W_n$ from the cumulants $c_l(W_n)$ of $W_n$ according to \eqref{78} in Subsection~\ref{subsec61}. The latter are obtained from the appropriate version \eqref{4.5} of Pollaczek's formula, so that
\begin{align}
c_l(W_n)\,&=\,(-1)^l\left(\frac{d}{ds}\right)^l\left(\frac{-1}{2\pi i}\int_{C}\frac{s}{z(s-z)}\log(1-\psi^n(-z/n))dz\right)_{s=0}\nonumber\\
&=\,\frac{(-1)^l l!}{\pi}\int_0^\infty \mbox{Re}\left[\frac{\log(1-\psi^n(-z/n))}{z^{l+1}}\right]dy,
\label{81}
\end{align}
where $C=\{z=\frac{-1}{2\gamma_n}+iy\,|\,-\infty<y<\infty\}$ and
\begin{equation}
\label{82}
\gamma_n=(\sigma_V^2+\rho_n^{-2}\sigma_U^2)\rho_n/(2n(1-\rho_n)).
\end{equation}
From the result of Section~\ref{sec4}, see \eqref{1.9} in Theorem~\ref{thm2}, we have for $k=1,2,\dots$
\begin{equation}\label{eqn83}
\dE[W_n^{k}]=k!\gamma_n^k+O\left(\frac{\log n}{\sqrt{n}}\right), \quad n\to\infty.
\end{equation}
\subsection{Moments in the nearly deterministic heavy-traffic Gaussian case}\label{subsec64}
We consider the dedicated saddle point method with $1-\rho_n \asymp 1/\sqrt{n}$ and $W_n$ as given in Section~\ref{sec5}. Thus $\zeta_{sp}$ is the unique saddle point $\zeta$ on the negative real axis of $h(\zeta)=\log \psi(-\zeta)$, characterized by $h'(\zeta_{sp})=0$, or equivalently $\psi'(-\zeta_{sp})=0$, see Section~\ref{sec2}. In several of the specific cases as given in Section~\ref{sec7}, $\zeta_{sp}$ can be found in closed form; in general a Newton iteration can be used, using the leading term at the right-hand side of \eqref{2.7} as initial value. We then let
\begin{equation}\label{eqn84}
\sigma_n=(h''(\zeta_{sp}))^{1/2}, \ \beta_n=-\zeta_{sp}\sigma_n\sqrt{n}.
\end{equation}
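The Newton iteration mentioned above can be sketched as follows, using the Gamma example of Section~\ref{sec7} (an assumption made here so that the closed form $\zeta_{sp}={-}(1-\rho)/(\vart_V+\vart_U)$ of \eqref{6.15} is available as a check); the starting value $0$ replaces the leading term of \eqref{2.7}, which suffices in this smooth case.

```python
# Newton iteration for the saddle point zeta_sp solving h'(zeta) = 0, sketched
# for the Gamma example of Section 7 (an assumption here), where the closed
# form zeta_sp = -(1 - rho)/(th_V + th_U) of (6.15) serves as a check.
# Uses k_V th_V = k_U th_U = 1.
th_U, th_V = 5/2, 1/2
rho = 0.9                      # illustrative value of the load

def h1(z):                     # h'(zeta)
    return -1/(1 + th_V*z) + (1/rho)/(1 - th_U*z/rho)

def h2(z):                     # h''(zeta)
    return th_V/(1 + th_V*z)**2 + (th_U/rho**2)/(1 - th_U*z/rho)**2

z = 0.0                        # Section 6.4 suggests the leading term of (2.7)
for _ in range(50):            # as starting value; 0 also works here
    z -= h1(z)/h2(z)

print(z, -(1 - rho)/(th_V + th_U))  # both approximately -0.0333
```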
The moments $m_k\left(\frac{\sqrt{n}}{\sigma_n}W_n\right)=\dE\left[\left(\frac{\sqrt{n}}{\sigma_n}W_n\right)^k\right]$ of $\frac{\sqrt{n}}{\sigma_n}W_n$ can be computed from the cumulants $c_l\left(\frac{\sqrt{n}}{\sigma_n}W_n\right)$ according to \eqref{78} in Subsection~\ref{subsec61}. The latter are obtained from the appropriate version \eqref{5.8} of Pollaczek's formula, with $t=s\sqrt{n}/\sigma_n$, so that
\begin{align}
c_l\left(\frac{\sqrt{n}}{\sigma_n} W_n\right)\,&=\,(-1)^l\left(\frac{d}{ds}\right)^l\left(\frac{-1}{2\pi i}\int_{C_n}\frac{(s/\sigma_n\sqrt{n})\log(1-\psi^n(-\zeta))}{\zeta((s/\sigma_n\sqrt{n})-\zeta)}d\zeta\right)_{s=0}\nonumber\\
&=\,\frac{(-1)^l l!}{\pi}\left(\frac{1}{\sigma_n\sqrt{n}}\right)^l\int_0^\infty \mbox{Re}\left[\frac{\log(1-\psi^n(-\zeta))}{\zeta^{l+1}}\right]dy,
\label{eqn85}
\end{align}
where $C_n=\{\zeta=\zeta_{sp}+iy\,|\,-\infty<y<\infty\}$.
From the result in Section~\ref{sec5}, see \eqref{1.11} in Theorem~\ref{thm3}, we have for $k=1,2,\dots$
\begin{align}\label{eqn86}
\dE\left[\left(\frac{\sqrt{n}}{\sigma_n}W_n\right)^k\right]=m_k(\beta_n)+O\left(\frac{1}{\sqrt{n}}\right), \quad n\to\infty,
\end{align}
where $m_k(\beta)$ is the $k^\text{th}$ moment of the maximum $M_\beta$ of the Gaussian random walk with drift $-\beta$. These $m_k(\beta)$ can be computed from the cumulants $c_l(M_\beta)$ of $M_\beta$ using \eqref{78} in Subsection~\ref{subsec61}. The latter can be computed by numerical integration, using \eqref{5.12}, so that
\begin{align}
c_l\left(M_\beta\right)\,&=\,\frac{(-1)^l l!}{\pi}\int_0^\infty \mbox{Re}\left[\frac{\log(1-e^{\beta z+\frac12 z^2})}{z^{l+1}}\right]dy,
\label{eqn87}
\end{align}
where $z=-\beta+iy, -\infty<y<\infty$. Alternatively, we have from Theorem~1 in \cite{ref8} for $0<\beta<2\sqrt{\pi}$ and $l=1,2,\dots$
\begin{align}\label{eqn88}
c_l\left(M_\beta\right)\,&=\,\frac{(l-1)!}{(2\beta)^l}+\frac{1}{\sqrt{2\pi}}\sum_{j=0}^l\binom{l}{j}\Gamma\left(\frac{l-j+1}{2}\right) \zeta\left(-\frac12 l-\frac12 j+1\right)2^{\frac{l-j-1}{2}}(-\beta)^j\nonumber\\
&+\,\frac{(-1)^{l+1}l!}{\sqrt{2\pi}}\sum_{r=0}^{\infty}\frac{\zeta(-l-r+\frac12)(-\frac12)^r\beta^{2r+l+1}}{r!(2r+1)\cdot\ldots\cdot(2r+l+1)},
\end{align}
where $\zeta$ denotes here the Riemann zeta function (not to be confused with the function $\zeta(v)$ in (\ref{5.13}--\ref{5.14}) pertaining to the dedicated saddle point method).
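The numerical integration in \eqref{eqn87} simplifies on the contour: for $z={-}\beta+iy$ the exponent $\beta z+\frac12 z^2$ equals the real number $-\frac12\beta^2-\frac12 y^2$, compare \eqref{5.12}. A minimal Python quadrature sketch:

```python
from math import pi, log, exp, factorial

# Cumulants c_l(M_beta) of the maximum of the Gaussian random walk via (87).
# On z = -beta + i y the exponent beta*z + z^2/2 is the real number
# -beta^2/2 - y^2/2, so only the factor 1/z^(l+1) is complex.
def cumulant_M(l, beta, Y=40.0, N=40_000):
    h = Y / N
    total = 0.0
    for j in range(N + 1):
        y = h * j
        z = complex(-beta, y)
        v = (log(1 - exp(-0.5 * beta**2 - 0.5 * y**2)) / z**(l + 1)).real
        total += v if 0 < j < N else 0.5 * v  # trapezoidal weights
    return (-1)**l * factorial(l) / pi * h * total

# E[M_beta] = c_1(M_beta) decreases as the downward drift beta grows.
for b in (0.5, 1.0, 2.0):
    print(b, cumulant_M(1, b))
```

For $\beta$ of order 1 the integrand decays like $e^{-y^2/2}$, so a short truncated range already gives high accuracy; the alternative series \eqref{eqn88} requires the Riemann zeta function at non-integer arguments and is therefore not sketched here.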
From Theorems \ref{thm4} and \ref{thm5} in Section~\ref{sec5}, we can refine the result in \eqref{eqn86} according to
\be\label{eqn89}
\dE\,\Bigl[\Bigl(\frac{\sqrt{n}}{\sigma_n}\,W_n\Bigr)^k\Bigr]=\frac{m_k(B_n)}{(1+\beta_n\varp_n)^{2k}}+\frac{k(k-1)\,m_{k-1}(B_n)\varp_n}{(1+\beta_n\varp_n)^{2k-1}}+O\Bigl(\frac1n\Bigr)~,
\eq
where
\be\label{eqn90}
B_n=\beta_n/(1+\beta_n\varp_n), \qquad \varp_n=-\frac{h'''(\zeta_{sp})}{6h''(\zeta_{sp})\sigma_n\sqrt{n}},
\eq
with $\sigma_n$ and $\beta_n$ given in \eqref{eqn84}.
In several of the specific cases as mentioned in Section~\ref{sec7}, the quantities $h''(\zeta_{sp})$ and $h'''(\zeta_{sp})$ as required in \eqref{eqn84} and \eqref{eqn90} can be computed in closed form. In general, one has
\begin{align}
h''(\zeta_{sp})\,&=\,\frac{f_V''}{f_V}-\left(\frac{f_V'}{f_V}\right)^2+\left(\frac{f_U''}{f_U}-\left(\frac{f_U'}{f_U}\right)^2\right) \frac{1}{\rho^2},\label{eqn91}\\
h'''(\zeta_{sp})\,&=\,-\frac{f_V'''}{f_V}+3\frac{f_V''f_V'}{f_V^2}+\left(\frac{f_U'''}{f_U}-3\frac{f_U''f_U'}{f_U^2}\right) \frac{1}{\rho^3},\label{eqn92}
\end{align}
where $f_V(z)=\dE[\exp(zV)]$, $f_U(z)=\dE[\exp(zU)]$, and where all $f_V^{(l)}$ and $f_U^{(l)}$ appearing at the right-hand sides of (\ref{eqn91}--\ref{eqn92}) have to be evaluated at $z=-\zeta_{sp}$ and $z=\zeta_{sp}/\rho$, respectively. (In \eqref{eqn92}, the cubic terms $(f_V'/f_V)^3$ and $(f_U'/f_U)^3$ arising in the third derivative of $h$ cancel because $h'(\zeta_{sp})=0$.)
We end this section with a note on the computational issues encountered when evaluating the contour integrals in expressions such as \eqref{79}, \eqref{81}, \eqref{eqn85} and \eqref{eqn87}. Although modern computer algebra software can numerically evaluate these integrals, despite one of the limits being infinity, one has to choose a sufficiently high numerical accuracy in order to obtain accurate results. Not all software packages support this, which is why we used Wolfram Mathematica for the numerical computations in this paper. Unsurprisingly, this may lead to long computation times, in particular when the load of the system is close to one. In contrast, the numerical evaluation of the approximations in this paper takes practically no time and does not suffer from numerical issues.
\subsection{Example 1: $U$ and $V$ are Gamma distributed}
We now demonstrate the results for a specific example; further numerical examples are provided in Appendix~\ref{sec7}. In this first example, we consider the case that both $V$ and $U$ have a Gamma distribution, with means $k_U\vart_U=k_V\vart_V=1$, variances $\sigma^2_V=k_V\vart^2_V=\vart_V$ and $\sigma^2_U=k_U\vart^2_U=\vart_U$, and pdfs
\beq \label{6.1}
\frac{x^{k_V-1}\,e^{-x/\vart_V}}{\Gamma(k_V)\,\vart_V^{k_V}}\,,~~x\geq0~;~~~~~~\frac{x^{k_U-1}\,e^{-x/\vart_U}}{\Gamma(k_U)\,\vart_U^{k_U}}\,,~~x\geq0~,
\eq
respectively, with $0<\vart_V,\vart_U<\infty$. Note that Assumption~\ref{ass:delta} in Section~\ref{sec5} is easily checked.
The third cumulants of $U$ and $V$ are $c_3(U)=2\vart_U^2$ and $c_3(V)=2\vart_V^2$, respectively.
Then
\beq \label{6.4}
\psi({-}\zeta)=\dE\,[e^{-\zeta(V-\frac{1}{\rho}U)}]=(1+\vart_V\zeta)^{-k_V}\,(1-\vart_U\zeta/\rho)^{-k_U}
\eq
for $-\vart_V^{-1}<{\rm Re}(\zeta)<\rho\,\vart_U^{-1}$. As a consequence of $U$ and $V$ both being Gamma distributed, all approximations can be obtained in closed form. In our numerical experiments we take three different parameter sets $(\vart_U, \vart_V)=(5/2, 1/2)$, $(1/2, 5/2)$ and $(3/2, 3/2)$.
\subsubsection{Classical regime} We approximate the moments of $\alpha W$, for $\alpha\downarrow0$, by \eqref{eqn80}, with
\beq \label{6.9}
\sigma_{\alpha}^2=(\sigma_V^2+\rho^{-2}\sigma_U^2)\,\rho=(k_V\vart_V^2+\rho^{-2}k_U\vart_U^2)\,\rho=(\vart_V+\rho^{-2}\vart_U)\,\rho~,
\eq
and $\rho=1-\alpha$, where we take $\alpha=1/10,1/100,1/1000$. The results, for the first five moments of $W$, are presented in Table~\ref{tbl:1}.
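The closed-form approximation \eqref{eqn80} for this example can be evaluated in a few lines; as a sanity check, the $\alpha=1/10$ values reproduce the Asymp column of Table~\ref{tbl:1} for $\vart_U=5/2$, $\vart_V=1/2$.

```python
from math import factorial

# Heavy-traffic approximation (80): m_k(alpha*W) ~ k! (sigma_alpha^2 / 2)^k,
# for the parameter set th_U = 5/2, th_V = 1/2 of Table 1.
th_U, th_V = 5/2, 1/2
for alpha in (1/10, 1/100, 1/1000):
    rho = 1 - alpha
    sigma2 = (th_V + th_U / rho**2) * rho  # sigma_alpha^2 of (6.9)
    print(alpha, [round(factorial(k) * (sigma2 / 2)**k, 3) for k in range(1, 6)])
# alpha = 1/10 gives the Asymp column 1.614, 5.209, 25.222, 162.819, 1313.861
```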
\begin{table}[t]
\[
\begin{array}{|l|rr|rr|rr|}
\hline
\mc{7}{|c|}{\vart_U=5/2,\ \vart_V=1/2}\\
\hline
& \mc{2}{c|}{\alpha=1/10} & \mc{2}{c|}{\alpha=1/100} & \mc{2}{c|}{\alpha=1/1000} \\
\hline
k & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} \\
\hline
1 & 1.396 & 1.614 & 1.490 & 1.510 & 1.499 & 1.501 \\
2 & 4.092 & 5.209 & 4.459 & 4.561 & 4.496 & 4.506 \\
3 & 17.987 & 25.222 & 20.021 & 20.663 & 20.227 & 20.291 \\
4 & 105.426 & 162.819 & 119.860 & 124.814 & 121.336 & 121.825 \\
5 & 772.403 & 1313.861 & 896.946 & 942.427 & 909.816 & 914.295 \\
\hline
\end{array}
\]
\[
\begin{array}{|l|rr|rr|rr|}
\hline
\mc{7}{|c|}{\vart_U=1/2,\ \vart_V=5/2}\\
\hline
& \mc{2}{c|}{\alpha=1/10} & \mc{2}{c|}{\alpha=1/100} & \mc{2}{c|}{\alpha=1/1000} \\
\hline
k & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} \\
\hline
1 & 1.331 & 1.403 & 1.483 & 1.490 & 1.498 & 1.499 \\
2 & 4.083 & 3.936 & 4.459 & 4.440 & 4.496 & 4.494 \\
3 & 18.806 & 16.562 & 20.111 & 19.849 & 20.236 & 20.210 \\
4 & 115.505 & 92.932 & 120.933 & 118.300 & 121.444 & 121.176 \\
5 & 886.802 & 651.817 & 909.028 & 881.352 & 911.030 & 908.217 \\
\hline
\end{array}
\]
\[
\begin{array}{|l|rr|rr|rr|}
\hline
\mc{7}{|c|}{\vart_U=3/2,\ \vart_V=3/2}\\
\hline
& \mc{2}{c|}{\alpha=1/10} & \mc{2}{c|}{\alpha=1/100} & \mc{2}{c|}{\alpha=1/1000} \\
\hline
k & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} \\
\hline
1 & 1.367 & 1.508 & 1.487 & 1.500 & 1.499 & 1.500 \\
2 & 4.100 & 4.550 & 4.460 & 4.500 & 4.496 & 4.500 \\
3 & 18.452 & 20.589 & 20.071 & 20.253 & 20.232 & 20.250 \\
4 & 110.711 & 124.223 & 120.427 & 121.525 & 121.393 & 121.500 \\
5 & 830.332 & 936.845 & 903.203 & 911.480 & 910.446 & 911.252 \\
\hline
\end{array}
\]
\caption{Example 1 - Classical HT Kingman: Comparison of exact and asymptotic results for $m_k(\alpha W)$.
}
\label{tbl:1}
\end{table}
We observe that the error behaves practically linearly in $\alpha$ for fixed $k$. Thus, for this case, the error estimate $O(\alpha\,{\rm log}(1/\alpha))$ in Theorem~\ref{thm1} seems a factor ${\rm log}(1/\alpha)$ too pessimistic. To confirm this, we included a plot of the difference between the approximation (based on the asymptotic result) and the exact values for the first three moments. Indeed, Figure~\ref{fig:error} confirms that the absolute error is almost completely linear in $\alpha$, meaning that the factor $\log(1/\alpha)$ is negligible here. Higher moments show the same type of behavior, but for reasons of compactness we have omitted the corresponding figures.
It is notable that for the first moment ($k=1$), the approximations systematically overestimate the exact values, in line with Kingman's bound, see the discussion below \eqref{1.7a}. For higher moments, this is still the case for $\vart_U=5/2$, $\vart_V=1/2$, but the parameter values $\vart_U=1/2$, $\vart_V=5/2$ lead to underestimates.
The $k$-behavior of the error for the three considered cases is markedly different, especially when $\alpha$ is not small.
\begin{figure}[ht]
\begin{tabular}{ccccc}
\includegraphics[width=0.31\textwidth]{Ex1error1} &
\includegraphics[width=0.31\textwidth]{Ex1error2} &
\includegraphics[width=0.31\textwidth]{Ex1error3}
\end{tabular}
\caption{Absolute error $k!(\frac12\,\sigma_{\alpha}^2)^k-m_k(\alpha W)$ for the classical Kingman heavy-traffic regime.}
\label{fig:error}
\end{figure}
\subsubsection{Nearly-deterministic Kingman regime} Take a fixed $\beta>0$, and let $1-\rho_n=\beta/n$. From Theorem~\ref{thm2}, we can approximate $\dE[W_n^k]$ by $k!\gamma_n^k$, with
\beq \label{6.12}
\gamma_n=(\sigma_V^2+\rho_n^{-2}\sigma_U^2)\,\rho_n/(2n(1-\rho_n))=(\vart_V+\rho_n^{-2}\vart_U)\,\rho_n/(2n(1-\rho_n)).
\eq
In Table~\ref{tbl:2}, we choose $\beta=1$. Note that the approximation, based on the asymptotic results, for $n=10,100,1000$ yields exactly the same numerical values as in the classical HT Kingman case with $\alpha=1/n$. This is due to the fact that, coincidentally, $\frac12\sigma_{\alpha}^2=\gamma_n$ for our chosen parameter values.
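The coincidence can be made explicit: with $\alpha=1/n$ and $n(1-\rho_n)=\beta=1$, both \eqref{6.9} and \eqref{6.12} reduce to the same expression. A short check:

```python
# With beta = 1 we have n*(1 - rho_n) = 1, so gamma_n of (6.12) equals
# sigma_alpha^2/2 of (6.9) evaluated at alpha = 1/n (same rho in both).
th_U, th_V = 5/2, 1/2
for n in (10, 100, 1000):
    rho = 1 - 1 / n
    gamma_n = (th_V + th_U / rho**2) * rho / (2 * n * (1 - rho))
    half_sigma2 = (th_V + th_U / rho**2) * rho / 2
    print(n, gamma_n, half_sigma2)  # equal up to floating-point rounding
```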
\begin{table}[t]
\[
\begin{array}{|l|rr|rr|rr|rr|rr|}
\hline
\mc{11}{|c|}{\vart_U=5/2,\ \vart_V=1/2}\\
\hline
& \mc{2}{c|}{n=10} & \mc{2}{c|}{n=100} & \mc{2}{c|}{n=1000} & \mc{2}{c|}{n=10000} & \mc{2}{c|}{n=100000} \\
\hline
k & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp}\\
\hline
1 & 1.212 & 1.614 & 1.404 & 1.510 & 1.469 & 1.501 & 1.490 & 1.500 & 1.497 & 1.500 \\
2 & 3.572 & 5.209 & 4.205 & 4.561 & 4.405 & 4.506 & 4.470 & 4.501 & 4.490 & 4.500 \\
3 & 15.711 & 25.222 & 18.880 & 20.663 & 19.819 & 20.291 & 20.114 & 20.254 & 20.207 & 20.250 \\
4 & 92.084 & 162.819 & 113.030 & 124.814 & 118.888 & 121.825 & 120.680 & 121.532 & 121.241 & 121.503 \\
5 & 674.652 & 1313.861 & 845.833 & 942.427 & 891.460 & 914.295 & 905.080 & 911.554 & 909.307 & 911.280 \\
\hline
\end{array}
\]
\[
\begin{array}{|l|rr|rr|rr|rr|rr|}
\hline
\mc{11}{|c|}{\vart_U=1/2,\ \vart_V=5/2}\\
\hline
& \mc{2}{c|}{n=10} & \mc{2}{c|}{n=100} & \mc{2}{c|}{n=1000} & \mc{2}{c|}{n=10000} & \mc{2}{c|}{n=100000} \\
\hline
k & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} \\
\hline
1 & 1.151 & 1.403 & 1.397 & 1.490 & 1.468 & 1.499 & 1.490 & 1.500 & 1.497 & 1.500 \\
2 & 3.551 & 3.936 & 4.204 & 4.440 & 4.405 & 4.494 & 4.470 & 4.499 & 4.490 & 4.500 \\
3 & 16.371 & 16.562 & 18.962 & 19.849 & 19.828 & 20.210 & 20.115 & 20.246 & 20.207 & 20.250 \\
4 & 100.564 & 92.932 & 114.027 & 118.300 & 118.993 & 121.176 & 120.691 & 121.468 & 121.242 & 121.497 \\
5 & 772.098 & 651.817 & 857.114 & 881.352 & 892.646 & 908.217 & 905.200 & 910.946 & 909.320 & 911.220 \\
\hline
\end{array}
\]
\[
\begin{array}{|l|rr|rr|rr|rr|rr|}
\hline
\mc{11}{|c|}{\vart_U=3/2,\ \vart_V=3/2}\\
\hline
& \mc{2}{c|}{n=10} & \mc{2}{c|}{n=100} & \mc{2}{c|}{n=1000} & \mc{2}{c|}{n=10000} & \mc{2}{c|}{n=100000} \\
\hline
k & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} & {\rm Exact} & {\rm Asymp} \\
\hline
1 & 1.183 & 1.508 & 1.401 & 1.500 & 1.468 & 1.500 & 1.490 & 1.500 & 1.497 & 1.500 \\
2 & 3.568 & 4.550 & 4.205 & 4.500 & 4.405 & 4.500 & 4.470 & 4.500 & 4.490 & 4.500 \\
3 & 16.065 & 20.589 & 18.922 & 20.253 & 19.823 & 20.250 & 20.114 & 20.250 & 20.207 & 20.250 \\
4 & 96.396 & 124.223 & 113.532 & 121.525 & 118.940 & 121.500 & 120.685 & 121.500 & 121.242 & 121.500 \\
5 & 722.972 & 936.845 & 851.487 & 911.480 & 892.054 & 911.252 & 905.140 & 911.250 & 909.314 & 911.250 \\
\hline
\end{array}
\]
\caption{Example 1 - Nearly-deterministic HT Kingman: Comparison of exact and asymptotic results for $m_k(W_n)$.}
\label{tbl:2}
\end{table}
We observe that the error, the difference between the exact result and the asymptotic result, decays like $1/\sqrt{n}$ for a fixed $k$ (this decay behavior becomes more manifest when $n$ is further increased). This is in reasonable agreement with the error estimate $O({\rm log}\,n/\sqrt{n})$ that is given in Theorem~\ref{thm2}, (\ref{1.9}). The $k$-behavior of the error for the three considered cases is markedly different, especially for low $n$.
\subsubsection{Nearly-deterministic Gaussian regime} We now have to invoke the whole machinery of the dedicated saddle point method. We have
\beq \label{6.14}
h(\zeta)={-}k_V\,{\rm log}(1+\vart_V\zeta)-k_U\,{\rm log}(1-\vart_U\zeta/\rho),~~~~~~\frac{-1}{\vart_V}<{\rm Re}(\zeta)<\frac{\rho}{\vart_U},
\eq
\beq \label{6.15}
\zeta_{sp}={-}\,\frac{1-\rho}{\vart_V+\vart_U},~~~~~~h''(\zeta_{sp})=\frac{(\vart_U+\vart_V)^3}{(\vart_U+\rho\vart_V)^2}, ~~~~~~ d_2={-}\frac{\vart_U^2-\vart_V^2}{3(\vart_U+\rho\vart_V)}.
\eq
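The closed forms in \eqref{6.15} can be cross-checked against finite differences of $h$ in \eqref{6.14}; a minimal sketch, using the illustrative values $\vart_U=5/2$, $\vart_V=1/2$, $\rho=0.95$ (not tied to a table entry):

```python
from math import log

# Finite-difference check of the closed forms (6.15) for zeta_sp, h''(zeta_sp)
# and d_2 = -h'''(zeta_sp)/(6 h''(zeta_sp)), with h as in (6.14) and
# k_V th_V = k_U th_U = 1.
th_U, th_V = 5/2, 1/2
rho = 0.95                     # illustrative value
k_U, k_V = 1/th_U, 1/th_V

def h(z):
    return -k_V*log(1 + th_V*z) - k_U*log(1 - th_U*z/rho)

zsp = -(1 - rho)/(th_V + th_U)                        # (6.15)
h2_closed = (th_U + th_V)**3/(th_U + rho*th_V)**2     # (6.15)
d2_closed = -(th_U**2 - th_V**2)/(3*(th_U + rho*th_V))

eps = 1e-4
h2 = (h(zsp + eps) - 2*h(zsp) + h(zsp - eps))/eps**2
h3 = (h(zsp + 2*eps) - 2*h(zsp + eps) + 2*h(zsp - eps) - h(zsp - 2*eps))/(2*eps**3)
print(h2, h2_closed)          # central differences vs closed forms
print(-h3/(6*h2), d2_closed)
```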
Take a fixed $\beta>0$, and let $1-\rho_n=\beta/\sqrt{n}$. We approximate the moments of the scaled waiting times $\frac{\sqrt{n}}{\sigma_n}\,W_n$ by the moments of the maximum of the Gaussian random walk with drift $-\beta$. These $m_k(\beta)$ can be computed from the cumulants $c_l(M_{\beta})$ of $M_{\beta}$, using (\ref{78}) and \eqref{eqn87} or \eqref{eqn88}.
\begin{table}[t]
\[
\begin{array}{|l|rrr|rrr|rrr|}
\hline
\mc{10}{|c|}{\vart_U=5/2,\ \vart_V=1/2}\\
\hline
& \mc{3}{c|}{n=10} & \mc{3}{c|}{n=100} & \mc{3}{c|}{n=1000} \\
\hline
k & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} \\
\hline
1 & 0.3674 & 0.3748 & 0.3776 & 0.4021 & 0.4015 & 0.4030 & 0.4105 & 0.4100 & 0.4106 \\
2 & 0.5898 & 0.6499 & 0.6136 & 0.7071 & 0.7202 & 0.7094 & 0.7396 & 0.7432 & 0.7398 \\
3 & 1.3565 & 1.6288 & 1.3756 & 1.7936 & 1.8703 & 1.7963 & 1.9283 & 1.9514 & 1.9286 \\
4 & 4.1049 & 5.3717 & 3.8574 & 5.9908 & 6.3985 & 5.9714 & 6.6235 & 6.7521 & 6.6218 \\
5 & 15.4834 & 22.0568 & 12.5010 & 24.9315 & 27.2673 & 24.6639 & 28.3460 & 29.1063 & 28.3202 \\
\hline
\end{array}
\]
\[
\begin{array}{|l|rrr|rrr|rrr|}
\hline
\mc{10}{|c|}{\vart_U=1/2,\ \vart_V=5/2}\\
\hline
& \mc{3}{c|}{n=10} & \mc{3}{c|}{n=100} & \mc{3}{c|}{n=1000} \\
\hline
k & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} \\
\hline
1 & 0.2227 & 0.2259 & 0.2282 & 0.3506 & 0.3523 & 0.3514 & 0.3938 & 0.3943 & 0.3938 \\
2 & 0.3242 & 0.3139 & 0.3367 & 0.6006 & 0.5931 & 0.6025 & 0.7039 & 0.7009 & 0.7041 \\
3 & 0.6903 & 0.6223 & 0.7043 & 1.4922 & 1.4410 & 1.4947 & 1.8236 & 1.8032 & 1.8239 \\
4 & 1.9359 & 1.6078 & 1.8922 & 4.8802 & 4.6016 & 4.8685 & 6.2235 & 6.1091 & 6.2220 \\
5 & 6.7472 & 5.1462 & 6.2056 & 19.8621 & 18.2869 & 19.6959 & 26.4527 & 25.7788 & 26.4304 \\
\hline
\end{array}
\]
\[
\begin{array}{|l|rrr|rrr|rrr|}
\hline
\mc{10}{|c|}{\vart_U=3/2,\ \vart_V=3/2}\\
\hline
& \mc{3}{c|}{n=10} & \mc{3}{c|}{n=100} & \mc{3}{c|}{n=1000} \\
\hline
k & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} & {\rm Exact} & {\rm Asymp\ 1} & {\rm Asymp\ 2} \\
\hline
1 & 0.2919 & 0.2985 & 0.2985 & 0.3761 & 0.3768 & 0.3768 & 0.4021 & 0.4022 & 0.4022 \\
2 & 0.4504 & 0.4660 & 0.4660 & 0.6532 & 0.6550 & 0.6550 & 0.7217 & 0.7219 & 0.7219 \\
3 & 1.0062 & 1.0449 & 1.0449 & 1.6410 & 1.6462 & 1.6462 & 1.8757 & 1.8763 & 1.8763 \\
4 & 2.9556 & 3.0712 & 3.0712 & 5.4270 & 5.4442 & 5.4442 & 6.4226 & 6.4245 & 6.4245 \\
5 & 10.7979 & 11.2169 & 11.2169 & 22.3476 & 22.4176 & 22.4176 & 27.3938 & 27.4020 & 27.4020 \\
\hline
\end{array}
\]
\caption{Example 1 - Nearly-deterministic HT Gaussian: Comparison of exact results for $m_k(\frac{\sqrt{n}}{\sigma_n}\, W_n)$ and the asymptotic results for the cases $(\vart_U,\vart_V)=(5/2,1/2)$, $(1/2,5/2)$ and $(3/2,3/2)$, with $\beta=1$, $n=10, 100, 1000$ and $k=1, 2, 3, 4, 5$. The two entries in the Asymp columns give, for a particular $k$, the asymptotic results from (\ref{eqn86}) and (\ref{eqn89}), in that order.}
\label{tbl:3}
\end{table}
The results, shown in Table~\ref{tbl:3}, indicate that the absolute error behaves quite accurately as $O(1/\sqrt{n})$ and $O(1/n)$, respectively, for the two asymptotic estimates. It is also interesting to see that the two asymptotic estimates perform equally well for low $k$ and $n$, while the refined asymptotic estimate based on \eqref{eqn86} outperforms the asymptotic estimate based on \eqref{eqn89} in all other cases.
Notice that the two approximations yield the same results when $\vart_U=\vart_V=3/2$ in Table~\ref{tbl:3}. This is caused by $U$ and $V$ having the same Gamma distribution, resulting in $h'''(\zeta_{sp})=0$. It follows that $\phi_n=0$ and $B_n=\beta_n$ in Theorem~\ref{thm4}. Consequently, the refined approximation and the original approximation are equal here.
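The vanishing of the correction term in the symmetric case can also be read off directly from the expression for $d_2$ in (\ref{6.15}). A quick numerical check for the three parameter pairs of Table~\ref{tbl:3}, with an arbitrary illustrative load $\rho=0.9$ (any $0<\rho<1$ shows the same pattern):

```python
def d2(theta_U, theta_V, rho):
    # d_2 = -(theta_U^2 - theta_V^2) / (3 * (theta_U + rho * theta_V)), cf. (6.15)
    return -(theta_U**2 - theta_V**2) / (3 * (theta_U + rho * theta_V))

rho = 0.9   # arbitrary illustrative load
for tU, tV in [(2.5, 0.5), (0.5, 2.5), (1.5, 1.5)]:
    print(tU, tV, d2(tU, tV, rho))
# d_2 vanishes exactly when theta_U = theta_V, so the refined and the
# original approximation coincide only in the symmetric case.
```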
\section{Conclusions} \label{sec8}
We have presented, using Kingman's transform method based on Pollaczek's formula for the Laplace transform of the steady-state waiting time distribution of the ${\rm GI}/{\rm G}/1$ queue, several heavy-traffic limit theorems. Under the assumption that the distributions of both the service time and the interarrival time have Laplace transforms, analytic in an open strip containing the imaginary axis, these heavy-traffic limit results can be shown to be valid on the level of transforms in a full neighborhood of the origin, with error assessment. As a consequence, there is convergence of all moments, with a corresponding error assessment. We have considered the classical heavy-traffic regime in which the transform of the steady-state waiting time distribution converges, after appropriate scaling, to the transform of an exponentially distributed random variable as the system load $\rho=1-\alpha$ tends to 1 (Kingman-type result), with error shown to be bounded as $O(\alpha\,{\rm log}(1/\alpha))$ as $\alpha\downarrow0$. We have also considered nearly deterministic queues (obtained through cyclic thinning) in two different regimes, viz.\ where the system's load $\rho_n$ satisfies $1-\rho_n\asymp 1/n$ and $1-\rho_n\asymp 1/\sqrt{n}$, respectively, as the thinning factor $n$ tends to infinity. The first regime allows a result of Kingman type, viz.\ convergence in terms of transforms to an exponential distribution, with error bounded as $O(\frac{1}{\sqrt{n}}\,{\rm log}\,n)$. The second regime allows, after appropriate scaling, convergence in terms of transforms to the maximum of the Gaussian random walk with a specific negative drift. For this case, we have shown $O(1/\sqrt{n})$ error behavior of transforms and moments. The latter result is refined, so as to yield $O(1/n)$ errors, by a judicious choice of the drift parameter, as well as by a weakly non-linear transformation of the Laplace variable in the transform of the maximum of the Gaussian random walk.
Our asymptotic results have been illustrated and compared with results obtained by numerical integration for various different combinations of service time and interarrival time distributions. For all regimes, the heavy-traffic limits for the moments of the waiting time distribution were also found to be good asymptotic approximations for the exact values.
\bibliographystyle{abbrv}
| {
"timestamp": "2022-06-22T02:33:27",
"yymm": "2206",
"arxiv_id": "2206.09844",
"language": "en",
"url": "https://arxiv.org/abs/2206.09844",
"abstract": "Heavy-traffic limit theory deals with queues that operate close to criticality and face severe queueing times. Let $W$ denote the steady-state waiting time in the ${\\rm GI}/{\\rm G}/1$ queue. Kingman (1961) showed that $W$, when appropriately scaled, converges in distribution to an exponential random variable as the system's load approaches 1. The original proof of this famous result uses the transform method. Starting from the Laplace transform of the pdf of $W$ (Pollaczek's contour integral representation), Kingman showed convergence of transforms and hence weak convergence of the involved random variables. We apply and extend this transform method to obtain convergence of moments with error assessment. We also demonstrate how the transform method can be applied to so-called nearly deterministic queues in a Kingman-type and a Gaussian heavy-traffic regime. We demonstrate numerically the accuracy of the various heavy-traffic approximations.",
"subjects": "Probability (math.PR)",
"title": "Heavy-traffic single-server queues and the transform method",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9777138131620183,
"lm_q2_score": 0.7248702702332476,
"lm_q1q2_score": 0.7087156759575312
} |
https://arxiv.org/abs/1903.00470 | Geometric and Probabilistic Limit Theorems in Topological Data Analysis | We develop a general framework for the probabilistic analysis of random finite point clouds in the context of topological data analysis. We extend the notion of a barcode of a finite point cloud to compact metric spaces. Such a barcode lives in the completion of the space of barcodes with respect to the bottleneck distance, which is quite natural from an analytic point of view. As an application we prove that the barcodes of i.i.d. random variables sampled from a compact metric space converge to the barcode of the support of their distribution when the number of points goes to infinity. We also examine more quantitative convergence questions for uniform sampling from compact manifolds, including expectations of transforms of barcode valued random variables in Banach spaces. We believe that the methods developed here will serve as useful tools in studying more sophisticated questions in topological data analysis and related fields. |
\begin{document}
\title{Geometric and Probabilistic Limit Theorems in Topological Data Analysis}
\author{Sara~Kali\v{s}nik, Christian Lehn, and Vlada Limic}
\begin{abstract}
We develop a general framework for the probabilistic analysis of random finite point clouds in the context of
topological data analysis.
We extend the notion of a barcode
of a finite point cloud to compact metric spaces. Such a barcode lives in the completion
of the space of barcodes with respect to the bottleneck distance, which is quite natural from an analytic point
of view. As an application we prove that the barcodes of i.i.d. random variables
sampled from a compact metric space converge to the barcode of the support of their distribution when the number
of points goes to infinity. We also examine more quantitative convergence questions for uniform sampling from
compact manifolds, including expectations of transforms of barcode valued random variables in Banach spaces.
We believe that the methods developed here will serve as useful tools in studying more sophisticated questions
in topological data analysis and related fields.
\end{abstract}
\subjclass[2010]{57N65, 60B12, 60D05 (primary), 55N35, 60B05 (secondary).}
\keywords{topological data analysis, persistent homology, topological manifolds, probabilistic limit theorems, barcodes}
\maketitle
\let\languagename\relax
\tableofcontents
\section{Introduction}\label{section intro}
Topological Data Analysis (TDA) is a fast growing field whose aim is to provide a set of new topological and geometric tools for analyzing data. One of the most widely used tools is persistent homology. The ideas behind persistent homology can be traced back to the works of Patrizio Frosini~\cite{Frosini} on size functions, and of Vanessa Robins~\cite{Robins} on using experimental data to infer the topology of attractors in dynamical systems, though the method only gained prominence with the pioneering works of Edelsbrunner, Letscher and Zomorodian~\cite{elz-tps-02} and Carlsson and Zomorodian~\cite{ZC}.
Persistent homology has been used to address problems in fields ranging from sensor networks~\cite{VinEvader, adams} and medicine~\cite{Ferri, ADCOCK201436} to neuroscience~\cite{Chung2009, Curto2013, Giusti_Pastalkova_Curto_Itskov_2015} and imaging analysis~\cite{Klein}.
The input of persistent homology is usually a point cloud, i.e. a finite metric space.
Since finitely many points do not carry any nontrivial topological information, the idea is to consider the homology of
thickenings of this point cloud in order to deduce information about the data or the distribution it is sampled from.
The output is a barcode, i.e. a multiset of intervals,
where each interval (``bar'') represents a topological feature present at parameter values specified by the interval.
This space of barcodes $\BC$ comes equipped with natural metrics, for example the Wasserstein and bottleneck distances.
The present paper grew out of an attempt to understand how some of the fundamental aspects of persistent homology \emph{and probability theory} could interact in order to
allow for further statistical applications.
Here and in the rest of the introduction we will present some of our key results.
Firstly, we wish to extend the notion of a barcode from finite sets to compact sets.
This is done in
\begin{custom}[Proposition \ref{proposition barcodes for compact metric spaces}] Let $k \in \N_0$ be a nonnegative integer, and let $M$ be a metric space.
Then for every compact set $K\subset M$ there is a barcode $\beta_k(K) \in \BCbot$ such that $K \mapsto \beta_k(K)$ is a $1$-Lipschitz map from the space of
compact subsets of $M$, equipped with the Hausdorff distance, to the completion $\BCbot$ of the barcode space $\BC$, with respect to the bottleneck distance.
\end{custom}
This result can also be obtained from the main theorem of \cite{CSEH07}. It was later explicitly stated and proved in~\cite[Proposition 5.1]{Chazal2014} and relied on a measure theoretic approach to persistent homology introduced in~\cite{chazal:hal-01330678}. For completeness, we include a simple, conceptually clear, and self-contained proof. See Remark \ref{remark barcodes for totally bounded spaces} for an extension to totally bounded spaces.
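To make the metric setting concrete, the Hausdorff distance between two finite point clouds (finite sets being compact, the proposition applies to them) can be computed naively as follows. This is an illustrative sketch, not code from the paper:

```python
import math

def hausdorff(P, Q):
    """Hausdorff distance between two finite nonempty subsets of R^d."""
    sup_P = max(min(math.dist(p, q) for q in Q) for p in P)   # sup_P inf_Q
    sup_Q = max(min(math.dist(p, q) for p in P) for q in Q)   # sup_Q inf_P
    return max(sup_P, sup_Q)

P = [(0.0, 0.0), (1.0, 0.0)]
Q = [(0.0, 0.0), (1.0, 0.0), (1.0, 0.2)]
print(hausdorff(P, Q))   # 0.2 (up to rounding): the extra point of Q lies 0.2 from P
```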
Since we use a limiting procedure to define $\beta_k$ on compact subsets of $M$, the barcode $\beta_k(K)$ has to live in the completion $\BCbot$, which is a natural space
for doing analysis with barcodes.
\enlargethispage{\baselineskip}
Suppose now that the point cloud is obtained by sampling independent and identically distributed (i.i.d.) points from an unknown distribution with compact support $C$.
The following question seems very natural, and it is somewhat surprising that it has not yet been answered:
\begin{center}
\emph{What happens to the barcode as we sample more and more such points?}
\end{center}
\noindent
In Section \ref{section approximation two}, we provide the following intuitive answer.
\begin{custom}[Theorem \ref{theorem almost sure limit of number of points}]
Let $M$ be a metric space, $X_1,X_2, \ldots$ be i.i.d.~$M$-valued random variables, and $k\in \N_0$.
Define $P_n=\{X_1,X_2, \ldots,X_n\}$.
If the distribution of the $X_i$ has support equal to a compact subset $C \subset M$, then
\[
\beta_k(C) = \lim_{n\to \infty} \beta_k(P_n) \ \ \textrm{ almost surely.}
\]
\end{custom}
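The convergence in the theorem above can be observed numerically. The following sketch (an illustration under simplifying assumptions, not from the paper) samples uniform points on $\S^1\subset\R^2$ and measures the Hausdorff distance to a dense reference discretization of the circle, standing in for the support $C$; since the barcode map is $1$-Lipschitz with respect to the Hausdorff distance, the shrinking distances force $\beta_k(P_n)\to\beta_k(C)$:

```python
import math, random

# Dense reference grid standing in for the compact support C = S^1.
C = [(math.cos(2 * math.pi * j / 720), math.sin(2 * math.pi * j / 720))
     for j in range(720)]

def circle_points(n, rng):
    """n i.i.d. uniform samples on the unit circle."""
    return [(math.cos(t), math.sin(t))
            for t in (rng.uniform(0, 2 * math.pi) for _ in range(n))]

def hausdorff(P, Q):
    sup_P = max(min(math.dist(p, q) for q in Q) for p in P)
    sup_Q = max(min(math.dist(p, q) for p in P) for q in Q)
    return max(sup_P, sup_Q)

rng = random.Random(0)
for n in (10, 100, 400):
    print(n, hausdorff(circle_points(n, rng), C))
# The distances tend to 0 as n grows; by the 1-Lipschitz property of the
# barcode map this forces beta_k(P_n) -> beta_k(S^1) in bottleneck distance.
```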
In the stochastic setting we also address questions about the mean and deviation from (or concentration about) the mean.
For this discussion we consider random variables taking values in some Banach space.
Starting from a barcode valued random variable $\beta$ (e.g.~$\beta=\beta_k(P_n)$ as above), one can obtain a Banach space
valued random variable, as in~\cite{algfn, Bub15, polonik, Kal18, Reininghaus_2015_CVPR} and many others. In this paper we use the functions proposed by~\cite{Kal18} as a primary example, though the results are stated in full generality for Lipschitz continuous maps
\[
T: \BCbot \to V
\]
from the completed space of barcodes to some space $V$. Mapping to a Banach space is necessary in order to talk about stochastic quantities such as expectation. Even more importantly, in order to use the information contained in the barcode in machine learning algorithms, one needs to produce a vector-valued output. As mentioned above, there is a plethora of methods to produce such an output, and our setup covers all of them; the only hypothesis is that the map $T$ is Lipschitz continuous. Note that it has been shown in \cite{CarriereB19} that one cannot embed the space of barcodes using finitely many functions, which is why we stress the (infinite-dimensional) Banach space setup.
The study of probabilistic properties of $T\circ \beta$ naturally leads to the law of large numbers and a central limit theorem, as Bubenik first observed in~\cite{Bub15}.
They can be formulated as follows.
\begin{custom}[Theorems \ref{theorem lln for barcodes} and \ref{theorem clt for barcodes}]---
\begin{itemize}
\item Let $T: \BCbot \to V$ be a continuous map from the (bottleneck completed) space of barcodes to a separable Banach space $V$ and let $\{X_i\}_{i\in \N}$ be an
i.i.d.~sequence of $\BCbot$-valued random variables such that $\E[\norm{T(X_1)}]< \infty$. Then the sequence $(S_n)_n$ of empirical means
$$
S_n := \frac{T(X_1) + \ldots + T(X_n)}{n}
$$
converges almost surely to $\E[T(X_1)]$.
\item Suppose that in addition $\E[{T(X_1)}]=0$ and $\E[\norm{T(X_1)}^2]< \infty$, and let $S_n$ be as above.
If $V$ is of type $2$, then $(\sqrt{n} S_n)_n$ converges weakly to a Gaussian random variable with the covariance structure of $T(X_1)$.
\end{itemize}
\end{custom}
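A toy instance of the first statement, with $V$ the separable Banach space $C[0,1]$ (sup norm) represented on a finite grid, and with invented function-valued samples $X_i(t)=t^2+A_i t$, $A_i$ uniform on $[-1,1]$, so that $\E[X_i]$ is the function $t\mapsto t^2$ (this example is ours, not from the paper):

```python
import random

# Function-valued random variables in C[0,1] (sup norm) on a finite grid:
# X_i(t) = t^2 + A_i * t with A_i uniform on [-1, 1], so E[X_i](t) = t^2.
grid = [j / 100 for j in range(101)]
rng = random.Random(42)

def sample():
    a = rng.uniform(-1.0, 1.0)
    return [t * t + a * t for t in grid]

def sup_err(n):
    """Sup-norm distance between the empirical mean S_n and E[X_1]."""
    mean = [0.0] * len(grid)
    for _ in range(n):
        mean = [m + x / n for m, x in zip(mean, sample())]
    return max(abs(m - t * t) for m, t in zip(mean, grid))

print(sup_err(10), sup_err(10000))   # compare the sup-norm errors for small and large n
```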
The reader may be concerned about the vacuousness of the just stated result due to its rather abstract setting.
We respond by addressing the important situation of barcodes of compact metric spaces (in particular including that of point clouds sampled from a distribution with compact support).
\begin{custom}[Theorem \ref{theorem hypothesis of lln and clt for compact}]
Let $M$ be a metric space, $K(M)$ the complete metric space of all compact subsets of $M$ with the Hausdorff distance, and $X$ a random variable taking
values in $K(M)$.
Consider the $k$-th barcode map $\beta_k:K\left(M\right)\to \BCbot$ and let $T:\BCbot \to V$ be a continuous map, where $V$ is a separable Banach space of type $2$.
Then $\norm{T(\beta_k(X))}$ has finite $n$-th moments for all $n\geq 0$.
\end{custom}
Once the existence of barcode expectations is settled, it is important to know how to calculate them for random point clouds of bigger and bigger samples, drawn from an unknown distribution.
The TDA pipeline is too complicated to permit an explicit symbolic way of performing such calculations in general.
The only reasonable way of doing so is to make an \emph{educated guess}!
We infer directly from Theorem \ref{theorem almost sure limit of number of points}:
\begin{corollary}
Let $M$ be a metric space, and $X_1,X_2, \ldots$ an i.i.d.~sequence of $M$-valued random variables.
Set $k\in \N_0$, and put $P_n=\{X_1,X_2, \ldots,X_n\}$.
If the distribution of the $X_i$ has support equal to a compact subset $C \subset M$, and if $T:\BCbot \to V$ is a continuous map to a Banach space of type 2, then
\[
T\left(\beta_k(C)\right) = \lim_{n\to \infty} \E[T\left(\beta_k(P_n)\right)].
\]
\end{corollary}
For some specific underlying probability distributions, explicit calculations and more careful asymptotic estimates may be possible.
We consider the simplest (and paradigmatic) example of the circle $\S^1 \subset \R^2$, and i.i.d.~points sampled from it.
The interesting barcode here is the $\beta_1$-barcode, and it is uniquely determined by its length.
In Theorem \ref{theorem expectation circle} we give an explicit formula for the expectation of the length.
The principal contribution of this work is that we devise
a new concrete general framework for analysis of random finite point clouds and their corresponding barcodes.
The fact that the proofs of our main results are not technically involved is in our opinion a firm indication that the framework here proposed
is natural and potentially very useful in studying more sophisticated TDA questions.
\subsection*{Related work}
There are a number of related approaches to studying the statistical properties of persistent homological estimators (see~\cite{2017arXiv171004019C} for an overview).
A closely related work is by Bubenik~\cite{Bub15}, who develops statistical inference via an embedding with ``persistence landscapes'', which is further studied in~\cite{ChazalFasy2015} and~\cite{pmlr-v37-chazal15}. Like Bubenik, we use CLT and LLN theory in Banach spaces, but on an object different from his. Unlike him, we study natural geometric and probabilistic limits directly on the barcodes of large point clouds (Theorem~\ref{theorem almost sure limit of number of points} and Theorem~\ref{theorem consequence of NSW}).
In particular, in Theorem~\ref{theorem consequence of NSW}, we establish a connection with the work of \cite{NSW08}.
The just mentioned theorems are linked in spirit
to~\cite{Bobrowski2015}, who also study homology approximations based on large point clouds
drawn from a compact manifold, and their analysis is based on \cite{NSW08} as well.
Unlike us, Bobrowski and Mukherjee~\cite{Bobrowski2015} approximate, for large $n$, the homology of the manifold simultaneously in a large range $m_n$ of degrees by the homology, in the corresponding degrees, of the point cloud inflated by $r_n$, where $r_n$ is a power of $\frac{\log(n)}{n}$.
In our context of persistent homology, we look at a continuum (a segment) of radii (away from zero) and aim to match, for large $n$, the homotopy type of the manifold with that of the inflated point cloud, simultaneously for all radii in this fixed interval.
Chazal et al.~\cite{Chazal:2015:CRP:2789272.2912112} establish convergence rates for metric spaces endowed with a probability measure that satisfies the $(a, b)$-standard assumption, see section 2.2 of that paper. In our study of almost sure convergence, we do not impose any conditions on the measure (except for compact support), see Theorem \ref{theorem almost sure limit of number of points}. Hiraoka, Shirai, and Trinh~\cite{Hiraoka2018}, Owada~\cite{owada2018}, Adler
and Owada~\cite{owada2017} also study limit theorems for persistence diagrams; but in their case, the point clouds are stationary point processes on $\mathbb{R}^n$. Similar results also appear in~\cite{Chazal:2015:CRP:2789272.2912112}.
The foundational work of Mileyko, Mukherjee and Harer in~\cite{0266-5611-27-12-124007} introduces probability measures on barcode space, and these ideas are developed further (with Turner) in the context of Fr\'echet means as ways of summarizing barcode distributions in~\cite{Turner2014}. Since we work with embeddings into Banach spaces, we do not need to rely on the theory developed in these papers.
Another active area of research in TDA deals with topological features of random simplicial complexes and noise~\cite{Bobrowski2018, Adler2014}. The present paper has a different focus, but it would be interesting to incorporate noise into our framework. This will be the subject of a forthcoming work.
\subsection*{Outline of the article}
In Section \ref{section persistent homology and barcodes}, we recall the definition of persistent homology, barcodes, and the space of barcode representations. This is the basis for what follows. Most results presented in this section are not new, nevertheless we occasionally included arguments (such as for Lemma \ref{lemma hausdorff metric}) to make the text more self-contained.
As explained above, equivalent statements to Proposition \ref{proposition barcodes for compact metric spaces}, where the definition of barcode representations is extended from finite point clouds to compact subsets of a given metric space (with the induced metric), already appeared in the literature. As our approach through completions is conceptually very clear, we nevertheless chose to include it.
This definition is fundamental for the rest of the paper. The generalized barcode representations live in the completion $\BCbot$ of the space of barcodes with respect to the bottleneck distance, thus they can be thought of as representing barcodes consisting of countably many intervals with a finite metric distance to a given barcode.
In the same spirit as Proposition \ref{proposition barcodes for compact metric spaces} we show in Theorem \ref{theorem tame is decomposable} that the filtration associated to a limit of tame functions also has an associated barcode representation.
Following Bubenik \cite[Section 3.2]{Bub15}, we study a Law of Large Numbers and a Central Limit Theorem in Section \ref{section clt}. The main new contribution here is Theorem \ref{theorem hypothesis of lln and clt for compact}, as explained above. Section \ref{section true persistent homology} contains the probabilistic limit theorem \ref{theorem almost sure limit of number of points} for barcodes, which is probably the most fundamental contribution of this article. It relies heavily on Lemma \ref{lemma almost sure limit of number of points}, which is a more geometric limit theorem for random point clouds in the space of compact subspaces of $\R^d$.
Section \ref{section sphere} is independent of the preceding two sections and gives a hint at a more quantitative version of a limit theorem. For this we consider the simplest nontrivial example of a compact metric space in $\R^2$ -- the circle $\S^1$. We fix the number $n$ of points and would like to determine the expected barcode of a random $n$-point cloud (consisting of independent uniform samples from $\S^1$). To give meaning to this idea, we need to find some quantity which determines the barcode and over which we can average (in order to speak of expectations). In this simple case, the elementary geometry of the circle (and of point clouds on it) only allows for a restricted barcode which is entirely determined by its length, see Corollary \ref{corollary bn leq one} and the discussion thereafter. We give explicit formulas for the \emph{expected length} for $n=3$ in Proposition \ref{theorem expectation circle 3points} and for arbitrary $n$ in Theorem \ref{theorem expectation circle}.
\enlargethispage{\baselineskip}
Such quantities as the length, which determine the barcode completely, can of course no longer be given explicitly for arbitrary compact submanifolds of $\R^d$. This is why, in order to talk about expectations, we consider embeddings (or more generally continuous maps) $T:\BCbot\to V$ to some Banach space $\left(V,\norm\cdot_V\right)$. Building on the work of Niyogi--Smale--Weinberger \cite{NSW08}, who investigated when an $\veps$-neighborhood of a random point sample on a compact submanifold of $\R^d$ is homotopy equivalent to that manifold, we estimate the distance in $V$ between the expectation of the transform (under $T$) of the barcode of a random $n$-point cloud, for fixed $n$, and the transform of the barcode of the manifold $M$ from which the point cloud is sampled.
Section \ref{section embedding barcodes} shows that our hypotheses on the existence of a (Lipschitz continuous) map from the barcode space to a Banach space can be fulfilled using functions introduced in Kalisnik's work~\cite{Kal18}. Finally, Section \ref{section discussion} gives a glimpse at open problems in this context.
\subsection*{Notation and Conventions} Let $(M,d)$ be a metric space.
For $x \in M$ and ${t\in \R_{\geq 0}}$, let $B_t(x)=\{y \in M \mid d(x,y)<t\}$ be the open $t$-ball of $x$ and $\overline{B}_t(x)=\{y \in M \mid d(x,y)\leq t\}$
the closed $t$-ball around $x$. For a subset $P\subset M$ we denote by $P_t:=\{x\in M\mid d(x,P)\leq t\}$ the $t$-neighborhood of $P$, which is closed if $P$ is.
We denote by $\sP(X)$ the power set of $X$ and by $F(X) \subset \sP(X)$ the set of finite nonempty subsets of $X$.
Throughout this paper, we take homology groups with coefficients in a field $\mathbf k$. For $n\in\N$ denote by $[n]$ the set $\{1,\ldots,n\}$.
Recall that a \emph{multiset} is a set $A$ together with multiplicities, i.e., a map $A\to \N_0$. We will usually suppress the map in the notation and just speak of a multiset $A$. Also, we will use set notation such as $A=\{x_1,x_2,x_3,\ldots\}$.
We use $\Theta$ for two-sided asymptotic comparison in the Big O notation. For example, $f=\Theta (\frac{\log(n)}{n})$ if and only if there exist positive constants $C_1$ and $C_2$ such that $C_1\frac{\log(n)}{n}\leq f(n) \leq C_2\frac{\log(n)}{n}$ for all large $n$.
\subsection*{Acknowledgements}
We thank Bernd Sturmfels for his interest and the MPI Leipzig where a large part of the article was written for the hospitality and excellent working conditions. Christian Lehn benefited from discussions with Joscha Diehl and Mateusz Micha\l{}ek. Vlada Limic benefited from discussions with Vitalii Konarovskyi.
Christian Lehn was supported by the DFG through the research grants Le 3093/2-2 and Le 3093/3-1.
Vlada Limic was supported by the Friedrich Wilhelm Bessel Research Award from the Alexander von Humboldt Foundation.
\section{From persistent homology to barcodes}\label{section persistent homology and barcodes}
\subsection{Persistence}\label{section persistent homology}
In many applications data lies in a metric space, for example, in a subset of a Euclidean space with an inherited distance function.
From this (necessarily finite, and often large) sample one wishes to learn some basic characteristics, such as the number of components or the existence of holes and voids, of the underlying space from which we sampled.
Finite metric spaces are discrete spaces, and as such do not per se have interesting topological structure in their own right.
The philosophy of topological data analysis is that data does have an inherent topology and in order to
uncover it, one assigns a 1-parameter family of topological spaces or a filtration to a finite metric space $M$~\cite{topodata, carlsson2014, elz-tps-02}. Applying the degree-$k$ homology functor $H_k$ to this filtration yields what is called a persistence module
\cite{chazal:hal-01330678}.
\begin{definition}\label{definition persistence module}
Let $\k$ be a field.
A {\bf persistence module} (over $\k$) is a family
\[
V=\left(\{V_t\}_{t\in \R},\{\phi_s^t\}_{s \leq t \in \R}\right)
\]
of $\k$-vector spaces $V_t$ and linear maps $\phi_s^t:V_s \to V_t$ for every $s \leq t$ such that $\phi_t^t=\id_{V_t}$ and $\phi_r^t = \phi_s^t \circ \phi_r^s$ for all
$r \leq s \leq t$.
\end{definition}
One could also replace the field $\k$ by a ring $R$ (e.g. $R=\Z$ is a natural choice) and define an $R$-persistence module by replacing $\k$-vector spaces by $R$-modules in the above definition. This might give finer information about the topology of the point clouds, but is also much more complicated from the representation theoretic point of view, see e.g. the discussion in \cite{topodata} before Theorem 2.10 (p. 267). For example, analogs of essential results like Gabriel's theorem (stated here as Theorem \ref{theorem tame is decomposable}) are not available for $R=\Z$. As our work builds on that in an essential way, we work with fields and vector spaces throughout the paper.
Recall that if we work with field coefficients, homology is a collection of functors $(H_n)_{n\in \N_0}$ from the category of topological spaces to the category of $\k$-vector spaces. We refer the reader to standard textbooks such as
Bredon \cite{Bre97} or Hatcher \cite{Hat02}. It is sometimes useful to consider \emph{reduced homology} whose definition we briefly recall: denote by $\pt$ the one point space. Then for every topological space $X$ there is a unique continuous map $p_X:X \to \pt$. One defines the reduced degree $k$ homology of $X$ as
\[
\tilde H_k(X):= \ker\left( H_k(X) \to H_k(\pt)\right)
\]
where $H_k(X) \to H_k(\pt)$ is the map in homology induced by $p_X$. As $H_k(\pt)=\k$ if $k=0$ and is trivial otherwise, we have $H_k(X)=\tilde H_k(X)$ for every $k \neq 0$. Reduced homology is also a functor on the category of topological spaces.
\begin{definition}\label{definition sublevelset filtration}
Let $X$ be a topological space and let $f\colon X \to \R$ be a continuous function.
This function defines a filtration, called the {\bf sublevelset filtration} of $(X, f)$, by setting ${X_{t}=f^{-1}\left((-\infty, t]\right)}$.
For $k \in \N_0$ the sublevel set filtration of $(X, f)$ defines a persistence module $({\mathrm{PH}}_k(X,f),\phi)$ by ${{\mathrm{PH}}_k(X,f)_t= \tilde H_k(X_{t})}$ and
$\phi_s^t:\tilde H_k(X_{s}) \to \tilde H_k(X_{t})$ induced by the inclusion $X_{s} \xhookrightarrow{} X_{t}$.
We will simply write ${\mathrm{PH}}_k(X)$ instead of ${\mathrm{PH}}_k(X,f)$ if $X \subset \R^d$ and ${f:\R^d \to \R}$ is the distance--to--$X$ function. We refer to ${\mathrm{PH}}_k(X)$ (respectively ${\mathrm{PH}}_k(X,f)$) as the {\bf persistent homology} in degree $k$ of $X$ (respectively of $(X,f)$).
\end{definition}
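In degree $0$, the barcode of the distance-to-$X$ filtration of a finite $X\subset\R^d$ can be computed by single linkage: the sublevel set at $t$ is a union of closed $t$-balls, and two components merge when $t$ reaches half the length of an edge of a minimal spanning tree. A minimal sketch (not the general persistence algorithm used in TDA software):

```python
import math
from itertools import combinations

def h0_deaths(points):
    """Death times of the finite reduced degree-0 bars of the sublevelset
    filtration of the distance-to-points function: components merge at
    t = d/2 for the MST edge lengths d (single linkage, union-find)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    deaths = []
    edges = sorted((math.dist(p, q), i, j) for (i, p), (j, q)
                   in combinations(enumerate(points), 2))
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                        # an MST edge: one bar dies
            parent[ri] = rj
            deaths.append(d / 2)
    return deaths

# Two clusters: four short bars and one long bar reflecting the two clusters.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
print(sorted(h0_deaths(pts)))
```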
\begin{definition}\label{definition tame}
A persistence module $V$ is called {\bf tame} if all $V_t$ have finite dimension and there exist finitely many $t_1 < \ldots < t_m \in \R$ such that $\phi_s^t$ is an isomorphism whenever $s,t \in (t_i,t_{i+1})$ for some $i$ (where we set $t_0=-\infty$, $t_{m+1}=\infty$). The function $f$ is called {\bf tame} if the module ${\mathrm{PH}}_k(X,f)$ is tame for all $k$.
\end{definition}
\begin{example}\label{example non tame}
For an arbitrary smooth manifold $M \subset \R^d$ the $k$-th persistence module ${\mathrm{PH}}_k(M)$ is not necessarily tame. Take for example a strictly decreasing sequence $(r_n)_{n \in \N}$ of positive rational numbers such that $\sum_{n \in \N} r_n < \infty$ and put $R_n:=\sum_{m=1}^n r_m$. If $M$ is the union over all ${n \in \N}$ of circles $K_n$ with radius $R_n$ centered at the origin, then the persistent homology ${\mathrm{PH}}_1(M)$ decomposes as a direct sum of infinitely many interval modules, and this decomposition gives rise to an element $b\in \BCbot\ohne \BC$.
\end{example}
In certain cases a persistence module can be expressed as a direct sum of ``interval modules'', which can be thought of as the building blocks of the theory.
There are four types of intervals; we recall the decorated pair representation from~\cite{chazal:hal-01330678}:
\begin{center}
\begin{tabular}{cc}
\footnotesize{interval}
&
\footnotesize{decorated pair}
\\
$(p,q)$ & $(p^+,q^-)$\\
$(p,q]$ & $(p^+,q^+)$\\
$[p,q)$ & $(p^-,q^-)$\\
$[p,q]$ & $(p^-,q^+)$\\
\end{tabular}
\end{center}
\begin{definition}\label{definition interval module}
For an interval $(p^*, q^*)$, where $^*$ is either $+$ or $-$, denote by $\mathbb{I}(p^*, q^*)$ the persistence module
\[
(\mathbb{I}(p^*, q^*))_t=\begin{cases}{\mathbf k}, & \textrm{ for } t \in (p^*, q^*)\\ 0, & \textrm{ otherwise}\end{cases} \textrm{ and }
\phi_s^t = \begin{cases}\id_{{\mathbf k}}, & \textrm{ for } s \leq t, \textrm{ and }s, t \in (p^*, q^*)\\ 0, & \textrm{ otherwise}\end{cases}.
\]
\end{definition}
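The structure maps of an interval module are simple enough to sketch in code. The following Python fragment is our own illustration (all names are ours, not the paper's): it records $\mathbb{I}(p^-, q^-)$, i.e.\ the interval $[p,q)$, by its pointwise dimension and its $1\times 1$ structure maps, and checks the persistence-module functoriality $\phi_s^t = \phi_r^t \circ \phi_s^r$.

```python
# Illustrative sketch (our names): the interval module I(p^-, q^-), i.e. the
# interval [p, q), over a field k, recorded by its pointwise dimension and
# its structure maps written as scalars.

def in_interval(t, p, q):
    return p <= t < q          # membership in [p, q)

def dim(t, p, q):
    """dim of I[p,q)_t: one copy of the field k inside the interval, 0 outside."""
    return 1 if in_interval(t, p, q) else 0

def phi(s, t, p, q):
    """phi_s^t as a scalar: the identity when s <= t both lie in [p, q),
    and the zero map otherwise."""
    return 1 if (s <= t and in_interval(s, p, q) and in_interval(t, p, q)) else 0

# Functoriality: phi_s^t = phi_r^t o phi_s^r for s <= r <= t.
p, q = 1.0, 3.0
for s, r, t in [(0.5, 1.5, 2.5), (1.2, 2.0, 2.9), (1.5, 2.5, 3.5)]:
    assert phi(s, t, p, q) == phi(r, t, p, q) * phi(s, r, p, q)
```

Representing the maps as scalars is harmless here since every $V_t$ is either $0$ or one-dimensional; a map out of the zero space is the zero scalar.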
\begin{definition}\label{definition persistent homology}
A persistence module $V$ over ${\mathbf k}$ is called \emph{decomposable} if it can be decomposed as a direct sum
\[
V \isom \bigoplus_{m\in \Lambda} \mathbb{I}(p_m^*, q_m^*),
\]
where $\Lambda$ is some index set and $*\in\{+,-\}$. If $V$ is decomposable, then the {\bf barcode} associated to $V$ is the {\bf multiset}
\[
\left\{(p_m^*, q_m^*) \mid m\in \Lambda \right\}.
\]
We call a decomposable persistence module $V$ \emph{of finite type} if $\Lambda$ is a finite set.
\end{definition}
\begin{remark}
The barcode of $V$ is also called the {\bf persistence} of $V$.
\end{remark}
Not all persistence modules decompose in this way~\cite{chazal:hal-01330678}, and there is a considerable body of literature trying to ascertain under
which conditions persistence modules are decomposable~\cite{quiver, Chazal:2009:PPM:1542362.1542407, chazal:hal-01330678, CB15}.
We will restrict to the case of most interest to us.
\begin{theorem}[Gabriel \cite{quiver}]\label{theorem tame is decomposable} Let $X$ be a topological space and let $f:X\to \R$ be a tame
function in the sense of Definition~\ref{definition tame}. Then ${\mathrm{PH}}_k(X,f)$ is decomposable and of finite type.
\end{theorem}
\begin{example}\label{example tame}
Examples of $(X, f)$ with a tame function $f$ include:
\begin{itemize}
\item
$X$ a compact manifold and $f$ a Morse function (where tameness is the result of Morse theory, see \cite{Mil63} for a general reference to this classical field).
\item
$X$ a compact polyhedron and $f$ a piecewise linear function, see Theorem 2.2 in \cite{chazal:hal-01330678}.
\item $X=\R^d$ and $f$ the distance-to-$P$ function for a finite set $P \subset \R^d$. In this case, tameness is a direct consequence of the nerve theorem. A textbook reference for the general nerve theorem is e.g.\ \cite[Corollary 4G.3]{Hat02}.
\end{itemize}
\end{example}
Let $P \subset \R^d$ be a finite set and ${f:\R^d \to \R}$ the distance-to-$P$ function. Then $P_t = f^{-1}((-\infty,t])$ is just the closed $t$-neighborhood of $P$, and the persistence module ${\mathrm{PH}}_k(P)$ with ${\mathrm{PH}}_k(P)_t=\tilde H_k(P_{t})$ is decomposable by Theorem \ref{theorem tame is decomposable} for every $k \in \N_0$. Furthermore, all non-zero intervals appearing in the barcode are
closed on the left and open on the right (also known as \emph{closed-open type}), or equivalently of the third
type in the above table describing the decorated pair notation.
We can of course define ${\mathrm{PH}}_k(P)$ even when $P$ is not finite as it may still be decomposable.
For example, ${\mathrm{PH}}_k(P)$ is decomposable and of finite type for a semi-algebraic set $P$ as a consequence of Hardt's theorem, see the discussion in section 3.2 of~\cite{HW18}.
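For points on the real line the degree-$0$ reduced barcode of the distance-function filtration can be written down directly: the $t$-neighborhood is a union of intervals $[p-t,p+t]$, two adjacent components merge exactly at $t=\text{gap}/2$, and each merge kills one reduced $H_0$ class. A hedged Python sketch (the helper name is ours):

```python
# Hedged sketch (helper name ours): the degree-0 reduced barcode of the
# sublevel filtration of the distance function to a finite P on the real line.
# Each consecutive gap g between sorted points contributes one closed-open
# bar [0, g/2), since the two neighboring components merge at t = g/2.

def ph0_barcode_line(points):
    xs = sorted(points)
    gaps = [b - a for a, b in zip(xs, xs[1:])]
    return sorted((0.0, g / 2.0) for g in gaps)

# Three points with gaps 1 and 2 give two bars, dying at 0.5 and 1.0.
assert ph0_barcode_line([0.0, 1.0, 3.0]) == [(0.0, 0.5), (0.0, 1.0)]
```

Note that reduced homology is what makes the count come out right: $n$ points give $n-1$ bars, one per merge event.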
\subsection{Persistent Homology of Finite Subsets of Metric Spaces}\label{section metric spaces}
As mentioned in Example \ref{example tame}, the persistent homology ${\mathrm{PH}}_k(P)$ of a finite point cloud $P \subset \R^d$ can be calculated using the nerve theorem. It tells us, in particular, that the homology of $P_t$ agrees with the homology of a simplicial complex, the so-called \emph{\v Cech complex} with parameter $t$.
Recall that given a metric space $M$, a finite set $P\subset M$, and a parameter $t \geq 0$, the {\v{C}ech complex} $\check{C}_t(P)$ is the abstract simplicial complex whose vertex set is $P$, and where $\{x_0, x_1, \ldots, x_k\}$ spans a $k$-simplex if and only if $\bigcap_{i=0}^k\overline{B}_t(x_i) \neq \emptyset$.
The \emph{\v{C}ech filtration} of $P$ is the nested family of \v{C}ech complexes obtained by varying the parameter $t$ from $0$ to $\infty$. This can be used as a definition.
\begin{definition}\label{definition persistent homology for finite metric spaces}
Let $M$ be a metric space and let $P\subset M$ be a finite subset. For $k\in \N_0$ we define the persistent homology ${\mathrm{PH}}_k(P)$ in degree $k$ of $P$ to be the persistence module obtained from taking the homology of the nested family of \v{C}ech complexes associated to $P$. In formulas:
\[
{\mathrm{PH}}_k(P)_t:= \tilde H_k(\check{C}_t(P)) \quad \textrm{for } t\in{\R_{\geq 0}}.
\]
\end{definition}
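For small point sets in the plane the \v Cech complex can be enumerated directly: in $\R^d$ the balls $\overline B_t(x_0),\ldots,\overline B_t(x_k)$ have a common point if and only if the smallest enclosing ball of the centers has radius at most $t$ (in a general metric space this can fail, which is why the definition above asks for the intersection itself). The sketch below is our own illustration, not code from the paper; it lists \v Cech simplices up to dimension $2$ for $P \subset \R^2$.

```python
import math
from itertools import combinations

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def miniball_radius(pts):
    """Radius of the smallest circle enclosing 1, 2 or 3 points of R^2."""
    if len(pts) == 1:
        return 0.0
    if len(pts) == 2:
        return dist(pts[0], pts[1]) / 2.0
    (ax, ay), (bx, by), (cx, cy) = pts
    a, b, c = dist(pts[1], pts[2]), dist(pts[0], pts[2]), dist(pts[0], pts[1])
    s = sorted([a, b, c])
    if s[2] ** 2 >= s[0] ** 2 + s[1] ** 2:
        # right/obtuse or degenerate: the circle spanned by the longest side
        return s[2] / 2.0
    area = abs((bx - ax) * (cy - ay) - (cx - ax) * (by - ay)) / 2.0
    return a * b * c / (4.0 * area)   # circumradius R = abc / (4 * area)

def cech_complex(points, t):
    """Index tuples of the Cech simplices at parameter t, up to dimension 2."""
    return [idx for k in (1, 2, 3)
            for idx in combinations(range(len(points)), k)
            if miniball_radius([points[i] for i in idx]) <= t]

# Equilateral triangle with side 1: the edges enter at t = 1/2, while the
# 2-simplex only enters at the circumradius t = 1/sqrt(3) ~ 0.577.
P = [(0.0, 0.0), (1.0, 0.0), (0.5, 3 ** 0.5 / 2)]
assert (0, 1) in cech_complex(P, 0.5) and (0, 1, 2) not in cech_complex(P, 0.5)
assert (0, 1, 2) in cech_complex(P, 0.58)
```

The equilateral example is exactly the gap between the \v Cech complex and the Vietoris--Rips complex: at $t$ slightly above $1/2$ all edges are present but the triangle is not yet filled.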
From the construction and Theorem \ref{theorem tame is decomposable}, we immediately deduce
\begin{corollary}\label{corollary persistent homology for finite metric spaces}
Let $M$ be a metric space and let $P\subset M$ be a finite subset. Then for every $k\in \N_0$, the persistence module ${\mathrm{PH}}_k(P)$ is tame and decomposable.\qed
\end{corollary}
Note that the two definitions (Definition \ref{definition sublevelset filtration} and Definition \ref{definition persistent homology for finite metric spaces}) of ${\mathrm{PH}}_k(P)$ for a finite subset $P \subset \R^d$ coincide.
\subsection{Barcode Space}\label{section barcodes}
In this subsection we describe a useful way of representing barcodes.
Given an interval $I\subset \R_{\geq 0}$ of finite length, we encode it as a point
$(x, d)\in\R_{\geq 0}^2$ where $x$ is the left endpoint of $I$ and $d$ is its length.
The price we pay for this simplified representation is the loss of information about the inclusion of endpoints of the intervals. However, restricted to a single interval type, this representation map is injective. In the cases of main interest to us all intervals are of the same closed-open type, so no information is lost. We are led to the following
\begin{definition}\label{definition barcode}
Let us denote $A:= \coprod_{n \in \N_0} \mathbb{R}_{\geq 0}^{2n}$. Let $\sim$ be the equivalence relation on $A$ generated by the relations
\begin{gather*}
(x_1,d_1,\ldots,x_n,d_n) \sim (y_1,e_1,\ldots,y_m,e_m) \Longleftrightarrow\\ (x_{\sigma(1)},d_{\sigma(1)},\ldots,x_{\sigma(n)},d_{\sigma(n)}) = (y_1,e_1,\ldots,y_n,e_n) \textrm{ and } \\ e_{n+1}=\ldots = e_m=0 \textrm{ for some } \sigma \in S_n, \ n\leq m \in \N_0
\end{gather*}
where $S_n$ denotes the symmetric group on $n$ elements.
A {\bf barcode representation} is an equivalence class of $(x_1,d_1,\ldots,x_n,d_n)$ with respect to $\sim$.
The {\bf space of barcode representations} is the quotient of the disjoint union $A$ by the equivalence relation defined above:
\[
\BC:= \left. A \middle/ \sim \right..
\]
For simplicity, we will sometimes also refer to $\BC$ as the {\bf barcode space}. We denote by $\BC_n \subset \BC$ the image of $\coprod_{m \leq n} \mathbb{R}_{\geq 0}^{2m}$ under the canonical map $A \to \BC$.
We adopt the notation of Definition \ref{definition interval module}. Let $b=\{ \left(x_1^*, (x_1+d_1)^*\right), \ldots, \left(x_n^*, (x_n+d_n)^*\right)\}$ with $*\in\{+,-\}$ be a barcode such that all intervals have non-negative left endpoint $x_i$ and finite length $d_i$. Then we call the class of $(x_1,d_1,\ldots,x_n,d_n)$ in $\BC$ the \textbf{barcode representation of the barcode $b$}.
\end{definition}
The equivalence relation $\sim$ defined above says that two barcode representations are equivalent if they coincide up to permutation of intervals and after deleting zero length intervals (i.e.\ $(x_i,d_i)$ with $d_i=0$).
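This equivalence is easy to decide algorithmically via canonical forms. The following is a hedged sketch (function names are ours): a representation, given as a list of (left endpoint, length) pairs, is normalized by discarding zero-length bars and sorting.

```python
# Hedged sketch (names ours): deciding the equivalence relation defining BC.
# Two tuples represent the same barcode iff they agree after discarding
# zero-length bars and permuting; canonical forms make this testable.

def canonical(rep):
    """Canonical form of a representation given as (left endpoint, length)
    pairs: drop zero-length bars, then sort."""
    return sorted((x, d) for x, d in rep if d > 0)

def equivalent(rep1, rep2):
    return canonical(rep1) == canonical(rep2)

# The same bars permuted and padded with zero-length bars: equivalent.
assert equivalent([(1, 2), (3, 1)], [(3, 1), (1, 2), (5, 0), (0, 0)])
assert not equivalent([(1, 2)], [(1, 3)])
```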
As already pointed out, given a finite subset $P$ of a metric space $M$, the persistence module ${\mathrm{PH}}_k(P)$ is decomposable and of finite type by Theorem \ref{theorem tame is decomposable}, Example \ref{example tame}, and Corollary \ref{corollary persistent homology for finite metric spaces}.
Therefore, there is an associated barcode all of whose intervals have finite length. This is where we need to use reduced instead of ordinary homology (and in fact only for $k=0$). We can define the following barcode map from the set of finite nonempty subsets of a metric space $M$ to the barcode space.
\begin{definition}\label{definition barcode map}
Let us fix $k\in\N_0$. Given a finite subset $P$ of some metric space $M$, we denote by $\beta_k(P)$ the barcode representation of the barcode associated to the persistence module ${\mathrm{PH}}_k(P)$. This defines a map
\[
\beta_k : F(M) \to \BC
\]
where $F(M)$ is the set of finite nonempty subsets of $M$. We will refer to this map as the $k$-th barcode map.
\end{definition}
The barcode space comes equipped with natural metrics. In order to define them, we first specify the distance between any pair of intervals, as well as the distance between any interval and the class of zero-length intervals, which for this purpose is represented by the diagonal $\Delta = \{(x, 0)\,|\, -\infty < x < \infty \}$. We put
\[
\textrm{d}_\infty \left((x_1, d_1), (x_2, d_2)\right) = \max \left(|x_1-x_2|, |(x_1 +d_1) - (x_2+d_2)|\right).
\]
The distance between (the representation of) an interval and the set $\Delta$ is
\[
\textrm{d}_\infty ((x, d), \Delta) = \frac{d}{2}.
\]
Recall that $[n]=\{1,2,\ldots,n\}$.
Let $b_1 = \{I_i\}_{i \in [n]}$ and $b_2 = \{J_j\}_{j \in [m]}$ be barcodes. For any bijection $\theta$ from a subset $A \subseteq [n]$ to $B \subseteq [m]$, the \emph{penalty} $P_\infty(\theta)$ of $\theta$ is
\begin{equation}\label{eq penalty bottleneck}
P_\infty (\theta) = \max\left(\max_{a\in A}\left(\textrm{d}_\infty(I_a, J_{\theta(a)})\right), \max_{a\in [n] \setminus A} \textrm{d}_\infty (I_a, \Delta), \max_{b\in [m]\setminus B} \textrm{d}_\infty (J_b, \Delta)\right).
\end{equation}
\begin{definition}
The \emph{bottleneck distance} is defined by
\[
\textrm{d}_\infty(b_1, b_2) = \min_\theta P_\infty(\theta),
\]
where with the notation above the minimum is over all possible bijections $\theta$ from subsets $A\subset [n]$ to subsets $B\subset [m]$.
\end{definition}
There are other metrics also commonly used for barcode spaces. Keeping the notation and changing the penalty \eqref{eq penalty bottleneck} for the bottleneck distance to
\begin{equation}\label{eq penalty wasserstein}
P_p(\theta) = \sum_{a\in A}\textrm{d}_\infty(I_a, J_{\theta(a)})^p +\sum_{a\in [n] \setminus A} \textrm{d}_\infty (I_a, \Delta)^p +\sum_{b\in [m]\setminus B} \textrm{d}_\infty (J_b, \Delta)^p
\end{equation}
yields the \emph{$p$th-Wasserstein distance} ($p\geq 1$) between $b_1, b_2 \in \BC$:
\[
\textrm{d}_p(b_1, b_2) = \left(\min_\theta P_p(\theta)\right)^{\frac{1}{p}}.
\]
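For small barcodes both distances can be computed by brute force over all partial bijections $\theta$. The sketch below is our own illustration (function names ours); barcodes are lists of (left endpoint, length) pairs as in the representation above.

```python
import itertools

def d_bars(I, J):
    (x1, d1), (x2, d2) = I, J
    return max(abs(x1 - x2), abs((x1 + d1) - (x2 + d2)))

def d_delta(I):
    return I[1] / 2.0          # distance from a bar to the diagonal Delta

def _penalties(b1, b2):
    """Yield, for every partial bijection theta between the bars of b1 and b2,
    the list of distances it incurs (matched pairs plus bars sent to Delta)."""
    n, m = len(b1), len(b2)
    for k in range(min(n, m) + 1):
        for A in itertools.combinations(range(n), k):
            for B in itertools.permutations(range(m), k):
                costs = [d_bars(b1[a], b2[b]) for a, b in zip(A, B)]
                costs += [d_delta(b1[i]) for i in range(n) if i not in A]
                costs += [d_delta(b2[j]) for j in range(m) if j not in B]
                yield costs

def bottleneck(b1, b2):
    return min(max(c, default=0.0) for c in _penalties(b1, b2))

def wasserstein(b1, b2, p=1):
    return min(sum(x ** p for x in c) for c in _penalties(b1, b2)) ** (1.0 / p)

# Single bars: matching them costs 0.1, sending both to Delta costs 1.
assert abs(bottleneck([(0.0, 2.0)], [(0.1, 2.0)]) - 0.1) < 1e-12
# Two short, far-apart bars: sending both to Delta is cheaper than matching.
assert abs(bottleneck([(0.0, 0.1)], [(100.0, 0.1)]) - 0.05) < 1e-12
```

The second assertion is exactly the phenomenon discussed in the example that follows: short bars are always close, no matter where they sit.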
Let us consider an example in order to get acquainted with these metrics.
\begin{example}\label{example bottleneck}
Recall that $\BC_1 \subset \BC$ consists of barcodes containing at most a single interval (bar). We set $b_1=(x_1, d_1), b_2=(x_2, d_2) \in \BC_1$ and calculate
\[
d_\infty(b_1,b_2)=\min\left(\max\left(|x_1-x_2|, |x_1+d_1-(x_2+d_2)|\right), \max\left(\frac{d_1}{2},\frac{d_2}{2}\right)\right).
\]
Then we see that if for arbitrary fixed $x_1,x_2 \in {\R_{\geq 0}}$ the length of both intervals is small, the bottleneck distance between $b_1$ and $b_2$ is equally small, even if the intervals are far away from each other.
The $p$th-Wasserstein distance behaves similarly.
\end{example}
The barcode space $\BC$ is not a complete metric space, neither with respect to the bottleneck,
nor with respect to any of the Wasserstein distances~\cite{0266-5611-27-12-124007}.
This is a consequence of the fact that appending bars of smaller and smaller but nonzero length to any given barcode can easily yield a Cauchy
sequence of barcodes, with respect to any of
the above metrics, and clearly without a limit in $\BC$.
For the sake of concreteness, let $x >0$ be fixed, and consider the barcode $b_n$ consisting of the intervals $I_k:=(x,\frac{1}{k})$ for all $1\leq k\leq n$ (so that $b_n \in \BC_n$).
In this case, we have for $n<m$
\[
d_\infty(b_n,b_m) \leq \max_{n+1\leq k \leq m} d_\infty(\{I_k \},\Delta) = \frac{1}{2(n+1)},
\]
so $(b_n)_n$ is a Cauchy sequence. A limit could only be a barcode consisting of infinitely many bars, which does not exist in $\BC$.
In order to overcome this problem, we shall consider the completions
\begin{equation}\label{eq completions}
\left(\BCwas,d_p\right) \quad \textrm{and} \quad \left(\BCbot,d_\infty\right)
\end{equation}
of $\BC$ with respect to the Wasserstein and bottleneck distances.
\subsection{Limits of Barcodes}\label{section true persistent homology}
In subsection~\ref{section persistent homology} we recalled the classical construction of barcodes from finite point clouds.
Here we present a generalization, which is natural in the context of our probabilistic investigations.
Let $(M,d)$ be a metric space and consider the family
\[
K(M):=\left\{ Y \subset M \mid Y \textrm{ compact, non-empty}\right\}
\]
of all non-empty compact subsets of $M$.
Together with the Hausdorff distance
\begin{equation}\label{eq hausdorff metric}
d_H(A,B):= \max\left(\inf\left\{t\in{\R_{\geq 0}}\mid A \subset B_t\right\},\inf\left\{t\in{\R_{\geq 0}}\mid B \subset A_t\right\}\right),
\end{equation}
the set $K(M)$ becomes a metric space.
It is well known that $(K(M),d_H)$ is complete whenever $(M,d)$ is complete, and compact whenever $(M,d)$ is compact.
Given a bounded subset $A \subset M$, we consider the continuous ``distance from $A$'' function defined by
\[
d_A:M\to {\R_{\geq 0}}, \quad d_A(x):=\inf\{d(x,y) \mid y \in A \}.
\]
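For finite sets both $d_H$ and $d_A$ are directly computable; in particular $\inf\{t \mid A \subset B_t\} = \max_{a\in A} d_B(a)$, which gives the usual ``directed'' form of \eqref{eq hausdorff metric}. The following hedged sketch (names ours) computes both quantities and checks, on a grid standing in for the ambient space, the identity $d_H(A,B)=\norm{d_A-d_B}_\infty$ established in the lemma below.

```python
# Hedged sketch (names ours): Hausdorff distance between finite sets via its
# directed form, and a finite check of d_H(A, B) = sup_M |d_A - d_B|.

def d_to(x, A, d):
    """Distance d_A(x) from the point x to the finite set A."""
    return min(d(x, a) for a in A)

def hausdorff(A, B, d):
    return max(max(d_to(a, B, d) for a in A),
               max(d_to(b, A, d) for b in B))

d = lambda x, y: abs(x - y)
A, B = [0.0, 1.0], [0.5]
h = hausdorff(A, B, d)
assert abs(h - 0.5) < 1e-12
# sup |d_A - d_B| sampled over a grid standing in for the ambient space M:
M = [i / 10.0 for i in range(-20, 21)]
assert abs(max(abs(d_to(x, A, d) - d_to(x, B, d)) for x in M) - h) < 1e-12
```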
We can also describe compact subsets in terms of their distance functions. The following result should be rather standard, but it turned out to be easier to give a proof than to find an exact reference.
\begin{lemma}\label{lemma hausdorff metric}
Let $M$ be a metric space, and denote by $\left(L_\infty(M),\norm{\cdot}_\infty\right)$ the Banach space of bounded functions $f:M \to \R$, equipped with the
supremum norm.
\begin{enumerate}
\item\label{lemma hausdorff metric item one} For $A,B \in K(M)$ the function $d_A-d_B$ is bounded on $M$.
\item\label{lemma hausdorff metric item two}
The function $n_\infty:K(M) \times K(M) \to \R_{\geq 0}$, $n_\infty(A,B):=\norm{d_A-d_B}_\infty$ defines a metric on $K(M)$
such that $$(K(M),d_H) \to (K(M),n_\infty), \quad A\mapsto A $$ is an isometry.
\item\label{lemma hausdorff metric item three} If $M$ is compact, then the function $d_A$ for $A\in K(M)$ is bounded and $A\mapsto d_A$ defines a continuous injective map
$$(K(M),d_H){\, \hookrightarrow\,} \left(L_\infty(M),\norm{\cdot}_\infty\right),$$
which is an isometry of metric spaces onto its image.
\end{enumerate}
\end{lemma}
\begin{proof}
For \eqref{lemma hausdorff metric item one} let us denote $R:=\sup_{a\in A, b\in B} d(a,b)$ which is $< \infty$ by compactness. For a given $x\in M$, we choose $a\in A, b \in B$ such that $d_A(x)=d(a,x)$, $d_B(x)=d(b,x)$ which is again possible by compactness.
Without loss of generality $d_A(x) \geq d_B(x)$. The triangle inequality gives
\[
\abs{d_A(x) - d_B(x)} = d(a,x)-d(b,x) \leq d(a,b)
\leq R
\]
and the claim follows.
For \eqref{lemma hausdorff metric item two} let $A,B \in K(M)$. We will first prove that $d_H(A,B) \leq \norm{d_A-d_B}_\infty$.
Suppose that $\abs{d_A(x)-d_B(x)} \leq t$ for some $t\in {\R_{\geq 0}}$ and for all $x \in M$.
Then in particular for $a \in A$ we deduce $d_B(a) \leq t$ so that $A \subset B_t$.
By symmetry the other inclusion follows and therefore $d_H(A,B) \leq t$.
For the inequality in the other direction, let us now assume that for some $t\in {\R_{\geq 0}}$ we find $A \subset B_t$ and $B\subset A_t$. Let $x \in M$ be given. It suffices to show that $\abs{d_A(x)-d_B(x)} \leq t$. We may assume $d_A(x)-d_B(x) > 0$. By compactness, the infimum is a minimum so that $d_A(x) = d(a,x)$ and $d_B(x) = d(b,x)$ for some $a \in A$, $b \in B$. As $B \subset A_t$ there is $a' \in A$ such that $d(a',b) \leq t$. From $d_A(x) = d(a,x)$ it follows that $d(a',x) \geq d(a,x)$ and we infer
\[
\abs{d_A(x)-d_B(x)} = d(a,x)-d(b,x) \leq d(a',x) -d(b,x) \leq d(a',b) \leq t.
\]
Thus $\norm{d_A-d_B}_\infty \leq t$.
Let us now prove \eqref{lemma hausdorff metric item three}.
Every compact metric space has finite diameter $R:=\sup_{x,y\in M} d(x,y)$. Obviously $\norm{d_A}_\infty \leq R$.
The rest of the claim follows from \eqref{lemma hausdorff metric item two}.
\end{proof}
\begin{proposition}\label{proposition barcodes for compact metric spaces} Let $k \in \N_0$ be a nonnegative integer and $M$ be a metric space.
\begin{enumerate}
\item\label{proposition barcodes for compact metric spaces item one}
The map $\beta_k: F(M) \to \BCbot$ is Lipschitz continuous with Lipschitz constant equal to $1$.
\item\label{proposition barcodes for compact metric spaces item two}
There is a unique continuous extension $K(M) \to \BCbot$ of $\beta_k:F(M) \to \BC \subset \BCbot$.
We will denote it by the same symbol $\beta_k$.
The extended map is also Lipschitz continuous with Lipschitz constant $1$.
\end{enumerate}
\end{proposition}
\begin{proof}
The claim in (1) was proved in~\cite{Chazal:2009:GSS:1735603.1735622}.
For (2), we first show that $F(M) \subset K(M)$ is dense. Given a compact subset $K\subset M$ and $\veps > 0$ we show that the ball $\{A \in K(M)\mid d_H(A,K)<\veps\}$ intersects $F(M)$ nontrivially. Since $K$ is compact, it is totally bounded, so there is a finite set $P=\{x_1,\ldots,x_n\}\subset K$ with $K \subset P_\veps$, the open $\veps$-neighborhood of $P$. On the other hand, $P \subset K \subset K_\veps$, so that $d_H(P,K) < \veps$. As Lipschitz functions are in particular uniformly continuous and $\BCbot$ is complete, $\beta_k:F(M) \to \BCbot$ extends to $K(M)$. The fact that the extension is again Lipschitz with the same Lipschitz constant is also standard.
\end{proof}
\begin{definition}
We call the map $\beta_k:K(M)\to \BCbot$ the barcode map and for a compact set $K \subset M$ we refer to $\beta_k(K)$ as the $k$-th barcode of $K$.
\end{definition}
This definition and the definition of \cite{Chazal:2009:GSS:1735603.1735622,chazal:hal-01330678} are equivalent concepts as both produce barcode maps which are continuous functions from the space $K(M)$ to some space of barcodes and they coincide on the dense subset $F(M) \subset K(M)$ of finite subsets of $M$.
\begin{remark}\label{remark barcodes for totally bounded spaces}
Note that the barcode map $\beta_k$ can easily be extended to totally bounded sets. The closure of a totally bounded subset in the completion of $M$ is compact and lies at Hausdorff distance zero from the set itself (see \eqref{eq hausdorff metric}; note that on totally bounded subsets the Hausdorff distance is in general only a pseudometric). One way to define the barcode of a totally bounded set is therefore to apply Proposition \ref{proposition barcodes for compact metric spaces} to this compact closure.
\end{remark}
One can naturally generalize Proposition \ref{proposition barcodes for compact metric spaces} to the setting of tame functions.
\begin{definition}
Let $M$ be a metric space. We denote by $C(M,\R)$ the set of continuous functions with values in $\R$ and endow it with the metric
\[
d:C(M,\R) \times C(M,\R) \to[] [0,\infty], \quad d(f,g)= \norm{f-g}_\infty.
\]
We will denote by $T(M) \subset C(M,\R)$ the subset of tame functions and by $\wh T(M)$ its completion.
\end{definition}
There is no harm in allowing the metric to take value $\infty$.
The induced topology is the same as the one induced by the metric
\[
(f,g) \mapsto d'(f,g):=\frac{d(f,g)}{1+d(f,g)} \in [0,1].
\]
The metrics $d,d'$ also determine the same Cauchy sequences. Working with $d$ is however more convenient for the inequalities we need.
\begin{theorem}\label{theorem barcodes for tame functions} Let $k \in \N_0$ be a nonnegative integer and let $M$ be a metric space.
\begin{enumerate}
\item\label{proposition barcodes for tame functions item one} The map $\beta_k: T(M) \to \BC$ is Lipschitz continuous with Lipschitz constant equal to $1$.
\item\label{proposition barcodes for tame functions item two} There is a unique continuous extension $\wh T(M) \to \BCbot$ of $\beta_k:T(M) \to \BC \subset \BCbot$.
As before we denote it by the same letter, and note that the extension is also $1$-Lipschitz continuous.
\end{enumerate}
\end{theorem}
\begin{proof}
As in Proposition \ref{proposition barcodes for compact metric spaces}, the first part follows from~\cite{Chazal:2009:GSS:1735603.1735622}. The second part is implied by the same extension argument for uniformly continuous maps. Note that Lipschitz continuity implies uniform continuity.
\end{proof}
\section{Barcodes of compact sets as almost sure limits}\label{section approximation two}
In this section, we will address a very natural convergence problem for stochastic barcodes. It is somewhat surprising that this question has never been addressed before, at least not in full generality.
Let $M$ be a metric space. We consider i.i.d.~$M$-valued random variables $X_1,X_2, \ldots$ whose distribution has support equal to a compact subset $C \subset M$. Recall that the \emph{support} of a measure $\mu$ on a $\sigma$-algebra containing the Borel $\sigma$-algebra $B(M)$ is defined to be the closed subset
\[
\supp(\mu):= \{ x \in M \mid \forall \veps >0: \mu(B_\veps(x))> 0\}.
\]
Let us consider the finite random set $P_n=\{X_1,X_2, \ldots,X_n\}$ and for a fixed $k$ the sequence of barcodes $\left(\beta_k(P_n)\right)_{n \in \N}$. We would like to describe the limit of this sequence for $n\to \infty$. If $P_n$ were a deterministic sequence approaching $C$ in the Hausdorff distance, then the limit of $\beta_k(P_n)$ would be $\beta_k(C)$ by definition of the latter, see Proposition \ref{proposition barcodes for compact metric spaces}. Now, the $P_n$ are random variables, and we prove the following
\begin{theorem}\label{theorem almost sure limit of number of points}
Let $M$ be a metric space, let $X_1,X_2, \ldots$ be i.i.d.~$M$-valued random variables, let $k\in \N_0$, and put $P_n=\{X_1,X_2, \ldots,X_n\}$.
If the distribution of the $X_i$ has support equal to a compact subset $C \subset M$, then
\[
\beta_k(C) = \lim_{n\to \infty} \beta_k(P_n) \ \ \textrm{ almost surely.}
\]
\end{theorem}
This theorem immediately results from the following lemma by continuity of the barcode map, see Proposition \ref{proposition barcodes for compact metric spaces}. This statement is a `folk theorem', and a variation of it with extra assumptions appears in the work of Cuevas and Fraiman~\cite{10.2307/2959033}. We include it here for completeness because we could not find this precise statement in the literature.
\begin{lemma}\label{lemma almost sure limit of number of points}
Let $M$ be a metric space, let $X_1,X_2, \ldots$ be i.i.d.~$M$-valued random variables, and put $P_n=\{X_1,X_2, \ldots,X_n\}$.
If the distribution of the $X_i$ has support equal to a compact subset $C \subset M$, then
\[
\lim_{n\to \infty}d_H(C,P_n)= 0 \textrm{ almost surely.}
\]
\end{lemma}
\begin{proof}
As $\supp(X_i) = C$ we have $P_n \subset C$ with probability $1$. Thus, $$d_H(C,P_n)= \inf \left\{\veps >0 \mid C \subset B_\veps(P_n)\right\}.$$
By construction, $P_n \subset P_{n+1}$ almost surely for all $n$ so that
\[
d_H(C,P_{n+1}) \leq d_H(C,P_n) \textrm{ almost surely},
\]
and $0 \leq \lim_{n\to \infty} d_H(C,P_n)$ exists almost surely due to monotonicity.
It thus suffices to show that $d_H(C,P_n) \to 0$ in probability.
Here we use the property that if $Z_n \to Z$ in probability and $Z_n \to Y$ almost surely, then $Z=Y$ almost surely.
For $\gamma > 0$ let us denote the event
\[
A^n_\gamma = \left\{ d_H(C,P_n) > \gamma \right\}.
\]
We need to show that $\P(A^n_\gamma) \to[n\to \infty] 0$ for all $\gamma > 0$.
Let us fix some $\gamma>0$.
We have
\begin{equation}\label{eq agamman}
A_\gamma^n = \left\{ C \not\subset (P_n)_\gamma \right\} = \left\{ \exists y \in C : y \not\in (P_n)_\gamma \right\}= \left\{ \exists y \in C : B_\gamma(y) \cap P_n = \emptyset \right\}.
\end{equation}
Since $C$ is compact, it is totally bounded, i.e., for each $\veps > 0$ we can find $c_1,\ldots,c_{N(\veps)} \in C$
such that $C \subset \bigcup^{N(\veps)}_{i=1} B_\veps(c_i)$.
For $\veps=\frac{\gamma}{2}$ it must be that
\[
A_\gamma^n \subset \bigcup_{i=1}^{N\left(\frac{\gamma}{2}\right)} \left\{ B_{\frac{\gamma}{2}}(c_i) \cap P_n = \emptyset\right\} \textrm{ almost surely}
\]
from \eqref{eq agamman}.
Indeed, if $\xi \in C$ is a random point satisfying $B_\gamma(\xi)\cap P_n=\emptyset$,
then for $i \leq N\left(\frac{\gamma}{2}\right)$ such that $\xi\in B_{\frac{\gamma}{2}}(c_i)$
we must have $B_{\frac{\gamma}{2}}(c_i)\cap P_n=\emptyset$ (otherwise we could find a point in $P_n$ at distance smaller than
$\gamma$ from $\xi$ by the triangle inequality).
Since the random points $X_j$ are i.i.d., we have for each $i$
\[
\P\left( \left\{ B_{\frac{\gamma}{2}}(c_i) \cap P_n = \emptyset\right\} \right) = \prod_{j=1}^n \left( 1- \P\left(X_j\in B_{\frac{\gamma}{2}}(c_i)\right)\right) =
\left( 1- \P\left(X_1\in B_{\frac{\gamma}{2}}(c_i)\right)\right)^n.
\]
Due to subadditivity of $\P$ we conclude
\[
\P\left(A_\gamma^n\right) \leq \P\left( \bigcup_{i=1}^{N\left(\frac{\gamma}{2}\right)} \left\{ B_{\frac{\gamma}{2}}(c_i) \cap P_n = \emptyset\right\}\right) \leq
\sum_{i=1}^{N\left(\frac{\gamma}{2}\right) } \left( 1- \P(X_1\in B_{\frac{\gamma}{2}}(c_i))\right)^n.
\]
Each term in the finite sum on the right-hand side goes to zero as $n\to \infty$, since all the $c_i$ were chosen in the support of the distribution of $X_1$.
Since $\gamma>0$ is arbitrary, the claim follows as noted above.
\end{proof}
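The lemma is easy to observe numerically. The following sketch is our own illustration (names and parameters ours): we sample i.i.d.\ uniform points on the circle and compute the Hausdorff distance to a fine discretization standing in for the support $C=\S^1$.

```python
import math, random

def hausdorff(A, B):
    d = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(b, a) for a in A) for b in B))

random.seed(0)
# A fine discretization standing in for the support C = S^1.
C = [(math.cos(2 * math.pi * k / 1000), math.sin(2 * math.pi * k / 1000))
     for k in range(1000)]
# i.i.d. uniform sample points X_1, X_2, ... on the circle; P_n is a prefix.
P = [(math.cos(a), math.sin(a))
     for a in (random.uniform(0, 2 * math.pi) for _ in range(300))]
dists = [hausdorff(C, P[:n]) for n in (10, 50, 300)]
assert dists[0] >= dists[1] >= dists[2]  # d_H(C, P_n) decreases as P_n grows
assert dists[2] < 0.2                    # and is already small for n = 300
```

The monotone decrease is the nestedness $P_n \subset P_{n+1}$ from the proof, and the smallness of $d_H(C,P_{300})$ reflects the almost sure convergence (the seed is fixed, so the run is deterministic).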
It is worthwhile emphasizing that there is no condition on the distribution of the random variables such as absolute continuity; the above result is completely general
and vaguely reminiscent of the Glivenko--Cantelli theorem.
\section{LLN and CLT for barcodes}\label{section clt}
We deduce a law of large numbers (LLN) and a central limit theorem (CLT) for $\BCbot$-valued random variables.
This becomes meaningful via Theorem \ref{theorem embedding l1} in Section \ref{section embedding barcodes}.
In the context of \emph{persistence landscapes}, Bubenik \cite{Bub15} observed that LLN and CLT can be deduced from general probability theory in Banach spaces.
In this section we mirror his approach in the present (barcode representation) context.
For a general reference on probability theory in Banach spaces we refer to the monographs by Vakhania, Tarieladze, and Chobanyan \cite{VTC87} respectively by Ledoux and Talagrand \cite{LT91}.
Let us recall the definition of the \emph{Pettis integral}. Let $(V, \norm{\cdot}_V)$ be a Banach space. Given a probability space $(\Omega,\sF,\P)$ and a random vector $\xi:\Omega\to V$, an element $v\in V$ is called the \emph{Pettis integral} of $\xi$ if for each continuous linear functional $\vphi:V \to \R$ we have
\[
\vphi(v) = \int_\Omega \vphi\left(\xi(\omega)\right) d\P(\omega).
\]
The vector $v$ is also called the \emph{expectation} of $\xi$ and is denoted by $\E[\xi]$ or $\int_\Omega \xi(\omega) d\P(\omega)$. If $\E[\norm{\xi}_V]< \infty$, then $\E[\xi]$ exists and satisfies $\norm{\E[\xi]}_V \leq \E[\norm{\xi}_V]$, see \cite[II.3.1 (c)]{VTC87}.
\begin{theorem}[LLN for barcodes]\label{theorem lln for barcodes}
Let $T: \BCbot \to V$ be a continuous map from the space of barcodes to a separable Banach space $V$.
Let $\{X_i\}_{i\in \N}$ be an i.i.d.~sequence of $\BCbot$-valued random barcodes such that $\E[\norm{T(X_1)}]< \infty$.
Then the sequence of random variables $(S_n)_n$ where
\begin{equation}\label{eq mean}
S_n := \frac{T(X_1) + \ldots + T(X_n)}{n}
\end{equation}
converges almost surely to $\E[T(X_1)]$.
\end{theorem}
\begin{proof}
As the $\{X_n\}_n$ are i.i.d., so are the random variables $\{T(X_n)\}_n$.
Thus, the theorem follows from the general theory of Banach space valued probability, see \cite[Corollary 7.10]{LT91}.
\end{proof}
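A toy instance of the theorem can be simulated directly. In the sketch below (our own construction, not from the paper) we take $V=\R$ and for $T$ the total bar length, applied to i.i.d.\ random single-bar barcodes $(x,d)$ with $x,d$ uniform on $[0,1]$; on single-bar barcodes $T$ is Lipschitz for the bottleneck distance, hence continuous, and $\E[T(X_1)]=\tfrac12$.

```python
import random

random.seed(1)

def random_barcode():
    # one bar with left endpoint and length uniform on [0, 1]
    return [(random.uniform(0, 1), random.uniform(0, 1))]

def T(b):
    # total bar length; Lipschitz for d_infty on single-bar barcodes
    return sum(d for _, d in b)

n = 20000
S_n = sum(T(random_barcode()) for _ in range(n)) / n
assert abs(S_n - 0.5) < 0.02  # S_n approaches E[T(X_1)] = 1/2
```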
Let us recall the concept of \emph{type} and \emph{cotype} of a Banach space, see e.g. \cite[II.9.2]{LT91}. A \emph{Rademacher} (or \emph{Bernoulli}) \emph{sequence} is a sequence of independent random variables with values $\pm 1$ both taken with probability $1/2$. For $1 \leq p\leq 2$ a Banach space $\left(V,\norm\cdot\right)$ is said to be \emph{of type $p$} if for every Rademacher sequence $(\veps_i)_{i\in \N}$ there exists a constant $C$ such that for all finite sequences $(x_i)$ the inequality
\[
\norm{\sum_{i}\veps_i x_i}_p \leq C\cdot \left(\sum_i \norm{x_i}^p\right)^{\frac{1}{p}}
\]
holds. Here, $\norm{\cdot}_p$ is defined as follows:
\[
\norm{X}_p =\left(\int_\Omega \norm{X}^p \, d\P \right)^{\frac{1}{p}},
\]
where $(\Omega, \sF, \P)$ is the underlying probability space, and $\norm{\cdot}$ is the norm of the Banach space $V$.
Similarly, $\left(V,\norm\cdot\right)$ is said to be \emph{of cotype $q$} for $1\leq q \leq \infty$ if instead there is a constant $D$ such that
\[
\left(\sum_i \norm{x_i}^q\right)^{\frac{1}{q}} \leq D\cdot \norm{\sum_{i}\veps_i x_i}_q.
\]
By \cite[Theorem 2.1]{HJP76}, being of type $p$ is equivalent to the existence of a constant $C>0$ such that
\[
\E\left[ \norm{\sum\nolimits_{j=1}^n X_j}^p\right] \leq C \cdot \sum_{j=1}^n \E\left[\norm{X_j}^p\right]
\]
\]
for all independent $X_1, \ldots, X_n$ with mean $0$ and finite $p$-th moment.
Note that every Banach space is of type $1$ and that a Hilbert space is of type $2$ and cotype $2$. It can be shown that the converse is also true, i.e.\ a Banach space of type $2$ and cotype $2$ is isomorphic to a Hilbert space, see \cite[Theorem 1.1]{Kw}.
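For $V=\R$ the type-$2$ inequality holds with $C=1$ and is in fact an identity, $\E\bigl|\sum_j \veps_j x_j\bigr|^2 = \sum_j x_j^2$, since the cross terms $\E[\veps_i\veps_j]$ vanish for $i\neq j$. This can be verified exactly by enumerating all $2^n$ sign patterns of a Rademacher sequence (a hedged numerical aside of ours, not from the paper):

```python
from itertools import product

# Exact check of E|sum eps_j x_j|^2 = sum x_j^2 for a Rademacher sequence:
# each of the 2^n sign patterns occurs with probability 2^(-n).

def rademacher_second_moment(xs):
    n = len(xs)
    return sum(sum(e * x for e, x in zip(eps, xs)) ** 2
               for eps in product((-1, 1), repeat=n)) / 2 ** n

xs = [1.0, 2.0, 3.5]
assert abs(rademacher_second_moment(xs) - sum(x * x for x in xs)) < 1e-9
```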
\begin{theorem}[CLT for barcodes]\label{theorem clt for barcodes}
Let $T: \BCbot \to V$ be a continuous map from the space of barcodes to a separable Banach space $V$ of type $2$. Let $\{X_i\}_{i\in \N}$ be an i.i.d.~sequence of $\BCbot$-valued random barcodes such that $\E[{T(X_1)}]=0$ and $\E[\norm{T(X_1)}^2]< \infty$ and let
$S_n$ be the $V$-valued random variable from \eqref{eq mean}.
Then $(\sqrt{n} S_n)_n$ converges weakly to a Gaussian random variable with the covariance structure of $T(X_1)$.
\end{theorem}
\begin{proof}
Separability of $V$ implies that any probability measure on $V$ is Radon. Thus, the claim follows from~\cite[Theorem 3.6]{HJP76}.
\end{proof}
We will show next that for important classes of examples the hypotheses of Theorem~\ref{theorem lln for barcodes} and Theorem~\ref{theorem clt for barcodes} are fulfilled.
Let $M$ be a metric space. For a finite set $P \subset M$ recall that $\beta_k(P)$ is its $k$-th barcode, see Definition \ref{definition barcode map}. For a compact set $K\subset M$, the barcode $\beta_k(K)$ is defined in Proposition \ref{proposition barcodes for compact metric spaces}.
\begin{theorem}\label{theorem hypothesis of lln and clt for compact}
Let $M$ be a metric space and let $X$ be a random variable with values in a compact set $\sK \subset K(M)$.
Let $T:\BCbot \to V$ be a continuous map to a separable Banach space $V$ of type $2$.
Then $\norm{T(\beta_k(X))}$ has finite $n$-th moments for all $n\geq 0$, where $\beta_k$ denotes the $k$-th barcode.
\end{theorem}
\begin{proof}
The map $\beta_k$ is continuous with respect to the bottleneck (in the codomain) and the Hausdorff (in the domain) distances.
Thus, the image
$$C=\left\{\norm{T(\beta_k(K))}\mid K \in \sK\right\} \subset {\R_{\geq 0}}$$ is compact.
Let $R:=\sup C < \infty$.
If $\left(\Omega,\sF,\P\right)$ is the underlying probability space on which $X$ is defined,
then clearly $\|T(\beta_k(X))\|\leq R$ holds $\P$-almost surely, and in particular
\[
\E[\|T(\beta_k(X))\|^n] \leq R^n \int_{\Omega}d\P=R^n.
\]
\end{proof}
The following is our main application.
\begin{corollary}\label{example point cloud}
Let $M=\R^d$ and let $X_1, \ldots, X_n$ be random variables with values in a compact subset $W\subset \R^d$. Then $P_n=\{X_1, \ldots, X_n\}$ is a random variable with values in the compact subset $\sK=K(W) \subset K(\R^d)$. Thus, for every continuous map $T:\BCbot \to V$ as in Theorem \ref{theorem hypothesis of lln and clt for compact} the LLN and CLT (Theorems \ref{theorem lln for barcodes} and \ref{theorem clt for barcodes}) apply to a sequence of i.i.d. copies of $P_n$ for fixed~$n$.
\end{corollary}
\section{Sampling from the circle: expected barcode lengths}\label{section sphere}
We wish to consider the question of approximation by expectations (of transformations) of random barcodes, where the barcodes are obtained from i.i.d.~samples with a fixed (large) sample size.
We first compute expectations in the context of i.i.d.~sampling in the simplest example at hand: the circle $$\S^1=\{x=(x_1,x_2)\in \R^2 \mid x_1^2 + x_2^2 = 1\}$$ with
uniform samples. Recall that the uniform distribution on an $m$-dimensional manifold $M \subset \R^d$ of finite volume is defined by
\[
\P(A):=\frac{{\mathrm{vol}}(A)}{{\mathrm{vol}}(M)} \qquad \forall \ A\subset M \textrm{ measurable.}
\]
Here, ${\mathrm{vol}}$ is the $m$-dimensional volume of measurable subsets of $M$.
In our study, we will more precisely focus on the length of the $\beta_1$-barcode for the unit circle\footnote{The $\beta_1$-barcode of $\S^1$ is shown to consist of at most one interval in Corollary \ref{corollary bn leq one}, thus we may speak of its length by which we just mean the length of that interval.}, and approach the question more generally in Section \ref{section approximation one}. In order to get these more precise results, we need to be more concrete on the distribution.
Recall that for a finite set $P\subset \S^1$ and $t \geq 0$ we denoted by $P_{t}$ the closed $t$-neighborhood of $P$.
Before allowing $P$ to be random, we deduce some general properties of deterministic~$P_{t}$.
\begin{lemma}\label{lemma homology sn}
If $t\in[0,1) $, the projection $\pi: P_{t} \to \S^1$, $v \mapsto \frac{v}{\norm v}$ is a homotopy equivalence onto its image $\pi(P_t) \subset \S^1$.
If $t \geq 1$, then $P_{t}$ is star-shaped with respect to $0 \in P_{t} \subset \R^{2}$. In particular, $P_{t}$ is contractible in that case.
\end{lemma}
\begin{example}\label{example points on circle}
Before we proceed to the proof of the lemma, let us illustrate what happens in two simple examples.
\begin{figure}[ht]
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=3.9]{3points_on_circle.pdf}
\caption{Three points whose $t$-neighborhood has a cycle.}\label{figure three points}
\end{minipage}
\hfill
\begin{minipage}[b]{0.45\textwidth}
\centering
\includegraphics[scale=0.6]{points_on_circle_no_homology.pdf}
\caption{Six points whose $t$-neighborhood has no cycle.}\label{figure six points}
\end{minipage}
\end{figure}
The first example is $P=\{x\in\C\mid x^3=1\}$ and $t=\frac{\sqrt{3}}{2}$ as depicted in Figure \ref{figure three points}. Even though $P_t$ contains a nontrivial $1$-cycle (the triangle between the three points), it does not contain $\S^1$. However, its image $\pi(P_t)$ is the full circle and indeed, $P_t$ and $\S^1$ are homotopy equivalent. The homotopy equivalence is realized by exhibiting a subspace of $P_t$ that maps homeomorphically to the circle, namely the orange triangle.
The second example is depicted in Figure~\ref{figure six points}. In this case both $P_t$ and $\pi(P_t)$ have three connected components and each of them is contractible. As in the previous example, the homotopy equivalence is shown by noting that the orange polygonal chain inside $P_t$ maps homeomorphically to $\pi(P_t)$. This chain is obtained by treating each connected component of $P_t$ separately: within such a component, every point of $P$ is connected through a straight line segment with its left and right neighbor (if existent); furthermore, the ``leftmost'' and the ``rightmost'' point (call them $x_\ell$ and $x_r$) are connected via straight line segments to the unique leftmost point on the boundary of the $t$-ball around $x_\ell$ and to the unique rightmost point on the boundary of the $t$-ball around $x_r$, respectively.
As explained in Example \ref{example tame} and Section \ref{section metric spaces}, the homotopy type of an ``inflated point cloud'' $P_t \subset \R^d$ can be calculated using the nerve theorem. The homology of $P_t$ is the same as the homology of the \v{C}ech complex $\check{C}_t(P)$ with parameter $t \geq 0$. The \v{C}ech filtration also provides a computational tool for the persistent homology of a finite point cloud. However, it turns out that there is no actual homology computation to be done in this section, because by Corollary \ref{corollary bn leq one} below the persistent homology of a finite point cloud on the circle is rather simple.
\end{example}
\begin{proof}[Proof of Lemma \ref{lemma homology sn}]
The statement for $t\geq 1$ is clear because every point in $P_t$ is contained in a convex ball containing $0\in P_t$ with center on the circle. We will therefore assume that $t < 1$ from now on. Let us construct a homotopy inverse to $\pi$.
As was anticipated in the examples, the homotopy equivalence will be obtained by exhibiting a subspace $G \subset P_t$ which under $\pi$ maps homeomorphically onto $\pi(P_t)$. The homotopy inverse to $\pi$ will then be $\iota:=\left(\pi\vert_G\right)^{-1}:\pi(P_t) \to G \subset P_t$ to the effect that $\pi\circ \iota = \id_{\pi(P_t)}$ and $\iota\circ \pi$ will be homotopic to the identity on $P_t$ via the homotopy $(x,s) \mapsto sx + (1-s)\iota(\pi(x))$, where $s\in[0,1]$ denotes the homotopy parameter. Observe that with a point $x$ also the line segment between $x$ and $\pi(x)$ is in $P_t$ so that the homotopy is well-defined.
First note that every connected component of $P_t$ is closed and maps onto a closed interval $I\subset \S^1$ where a closed interval on the circle is just the image of a closed interval in $\R$ under the parametrization $s\mapsto \left(\cos(s),\sin(s)\right)$. Thus, it is sufficient to treat each connected component separately. Moreover, connected components of $P_t$ are again of the form $P'_t$ for a subset $P'\subset P$ because balls $\bar B_t(x)$ are connected. In other words, we may assume that $P_t$ is connected.
If $\pi(P_t)=\S^1$, we put $n=\# P$ and let $G$ be the $n$-gon connecting the centers of the circles in circular order by line segments. This is the triangle in the first example from Example \ref{example points on circle} above.
Suppose now that $\pi(P_t) \not = \S^1$. Without loss of generality $1$ is not in the image of $\pi$. We write $P=\{p_1,\ldots,p_n\}$ such that $\arg(p_i) < \arg(p_{i+1})$ for all $i=1,\ldots,n-1$, where for $z\neq 1$ we denote by $\arg(z)\in(0,2\pi)$ the unique number such that $e^{i \arg(z)}=z$. Moreover, there are unique points $p_0 \in \bar B_t(p_1)$ and $p_{n+1}\in \bar B_t(p_n)$ such that
\[
\arg(p_0)=\min\{\arg(z) \mid z \in \pi(P_t)\} \quad \textrm{and} \quad \arg(p_{n+1})=\max\{\arg(z) \mid z \in \pi(P_t)\}.
\]
Then, we define $G$ to be the polygonal chain which is the union of the line segments connecting $p_i$ and $p_{i+1}$ for all $i=0,\ldots,n$. We leave it to the reader to verify that $\pi\vert_G$ is a homeomorphism onto $\pi(P_t)$.
\end{proof}
\begin{corollary}\label{corollary bn leq one}
For every $t \in [0,1)$ and $P\subset \S^1$ finite we have
\[
H_1(P_{t}, \mathbf k)=0 \textrm{ or } H_1(P_{t}, \mathbf k)= \mathbf k.
\]
\end{corollary}
\begin{proof}
By Lemma \ref{lemma homology sn} (whose notation we use) it suffices to show that $H_1(\pi(P_t),\mathbf k)=0$ or $H_1(\pi(P_t),\mathbf k)=\mathbf k.$
We have seen in the proof of the preceding lemma that the connected components of $\pi(P_t)$ are either all homeomorphic to closed intervals in $\R$ or $\pi(P_t)=\S^1$, whence the two cases.
\end{proof}
As usual, we denote by $\beta_k(P)$ the barcode obtained from the $k$-th persistent homology of a finite set $P \subset \R^d$.
By Corollary \ref{corollary bn leq one} we know that the $\beta_1$-barcode of a point cloud $P \subset \S^1$ consists of at most one interval.
We denote the \textbf{length} of this interval by $$\ell(\beta_1(P)) \in [0,1]$$ and also sometimes refer to it as the \textbf{length of the barcode}. Before stating the main result of this section, Theorem \ref{theorem expectation circle}, in its most general form, it might be instructive to consider the following special case.
\begin{proposition}\label{theorem expectation circle 3points}
Suppose that $P_3=\{X_1,X_2, X_3\} \subset \S^1$ is composed of three independent uniformly distributed points on the circle $\S^1$. Then
\[
\E[\ell(\beta_1(P_3))] = \frac{9(\sqrt{3} - 2)}{\pi^2} + \frac{1}{4}.
\]
\end{proposition}
\begin{proof}
We parametrize the circle by the interval $I=(-\pi,\pi]$.
Using the rotational symmetry we may assume that $X_1=\pi$ and that $X_2=\vartheta$, $X_3=\vphi$ where $\vartheta, \vphi$ are uniformly distributed random angles. It follows from Lemma \ref{lemma homology sn} that the time of death of the $\beta_1$-barcode is $t_d=1$. Its time of birth is
\begin{equation}\label{eq time of birth}
t_b=\begin{cases} 1 &\textrm{if } X_1,X_2,X_3 \textrm{ lie on a half circle,}\\ \max\left( \frac{\abs{X_1-X_2}}{2},\frac{\abs{X_1-X_3}}{2},\frac{\abs{X_2-X_3}}{2}\right) & \textrm{otherwise,}\end{cases}
\end{equation}
where $\abs{\cdot}$ denotes the Euclidean norm.
We have
\[
\begin{aligned}
\abs{X_1-X_2}&=\sqrt{\left(1+\cos(\vartheta)\right)^2 + \sin(\vartheta)^2} = 2\cos\left(\frac{\vartheta}{2}\right),\\
\abs{X_1-X_3}&= 2\cos\left(\frac{\vphi}{2}\right), \\
\abs{X_2-X_3}&= 2\sin\left(\frac{\vartheta-\vphi}{2}\right).
\end{aligned}
\]
Now we wish to calculate
\[
\E[\ell]=\int_{I \times I} \left( t_d-t_b \right) d\P
\]
where $\P=\frac{1}{4\pi^2} \mu$ is the uniform measure on $I \times I$ and $\mu$ is the Lebesgue measure. We observe that $\ell=t_d-t_b=0$ whenever $X_1,X_2,X_3$ lie on a half circle in $\S^1$ by \eqref{eq time of birth}.
Let $G \subset I \times I$ be the event that $X_1,X_2,X_3$ do not lie on a half circle. We have
\[
G=G_0 \cup \left( -G_0\right) \quad \textrm{where } G_0=\{(\vartheta,\vphi)\in I \times I \mid \vartheta \geq 0, \vartheta - \pi < \vphi < 0 \}.
\]
This event, as well as the function $\ell$, are invariant under $(\vartheta,\vphi)\mapsto (-\vartheta,-\vphi)$. Thus
\[
\begin{aligned}
\E[\ell]&= 2 \int_{G_0} \left(1 - t_b\right)d\P\\
&=\frac{1}{2\pi^2} \int_0^\pi \int_{\vartheta - \pi}^0 \left(1 - t_b(\vartheta,\vphi)\right)d\vphi d\vartheta.
\end{aligned}
\]
Next we divide $G_0=G_{12} \cup G_{13} \cup G_{23}$ into three subevents corresponding to whether $\abs{X_1-X_2}$, $\abs{X_1-X_3}$, or $\abs{X_2-X_3}$ is maximal.
For example, $\abs{X_1-X_2}$ is maximal on $G_{12}=\{(\vartheta,\vphi)\mid 0< \vartheta < \frac{\pi}{3}, -\pi+2\vartheta < \vphi < -\vartheta\}$.
Again by symmetry considerations, these events have the same probabilities, and the integrals (expectations) restricted to them have equal values, so that
\[
\begin{aligned}
\E[\ell]&= \frac{3}{2\pi^2}\int_{G_{12}} \left(1 - t_b\right)d\mu\\
&=\frac{3}{2\pi^2} \int_0^{\frac{\pi}{3}} \int_{2\vartheta - \pi}^{-\vartheta} \left(1 - \cos\left( \frac{\vartheta}{2}\right)\right)d\vphi d\vartheta\\
&=\frac{9(\sqrt{3} - 2)}{\pi^2} + \frac 1 4
\end{aligned}
\]
as claimed.
\end{proof}
We note that $\frac{9(\sqrt{3} - 2)}{\pi^2} + \frac 1 4 \approx 0.00565963600183$.
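This value is easy to confirm by simulation. The following Monte Carlo sketch in Python (our own verification, not part of the proof) uses the angular-gap description equivalent to the chord formulas above: $\ell = 1-\sin(g_{\max}/2)$ when every angular gap $g_i$ between the three points is smaller than $\pi$, and $\ell=0$ otherwise.

```python
import math
import random

# Sample three uniform points on S^1; the three points miss a half circle exactly
# when every angular gap is < pi, in which case the birth time of the 1-cycle is
# the longest chord over 2, i.e. sin(g_max/2), and the death time is 1.

random.seed(1)
N = 200_000
exact = 9.0 * (math.sqrt(3.0) - 2.0) / math.pi ** 2 + 0.25

total = 0.0
for _ in range(N):
    a = sorted(random.uniform(0.0, 2.0 * math.pi) for _ in range(3))
    gaps = [a[1] - a[0], a[2] - a[1], 2.0 * math.pi - a[2] + a[0]]
    g = max(gaps)
    if g < math.pi:  # the points do not lie on a half circle
        total += 1.0 - math.sin(g / 2.0)  # t_d - t_b = 1 - (longest chord)/2

print(total / N, exact)  # the two values are close
```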
The calculation just made can be generalized as follows.
\begin{theorem}\label{theorem expectation circle}
Suppose that $P_n=\{X_1,\ldots,X_n \} \subset \S^1$ is a random point set on the circle, i.e., $X_1, \ldots, X_n$ are independent, uniformly distributed $\S^1$--valued random variables. Then
\[
\E[\ell\left(\beta_1(P_n)\right)] =1-\left(\sum_{k\geq 1}(-1)^{k-1}\binom{n}{k} \int_0^{\min\left(\frac{1}{2},\frac{1}{k}\right)} \pi \cos(\pi t) (1-kt)^{n-1} \,dt\right).
\]
\end{theorem}
\begin{proof}
This time we parametrize the circle by the interval $[0,2\pi]$, modulo $2\pi$.
Let $\Theta_i$ with values in $(0,1]$ be specified through the identity $X_i=(\cos(2\pi\Theta_i),\sin(2\pi\Theta_i))=\exp\{2\pi i\, \Theta_i\}$.
It is again natural to identify one of the points (for example the last one) with the angle $0=2\pi$.
Let $\Theta_{(i)}$ be the $i$-th order statistic of $(\Theta_1,\ldots,\Theta_{n-1})$, i.e. the $i$-th smallest value among $(\Theta_1,\ldots,\Theta_{n-1})$, and let us set in addition $\Theta_{(0)}:=0$ and $\Theta_{(n)}: =1$.
The \emph{normalized (angular) spacings} between the points are defined as follows: $S_i:=\Theta_{(i)}-\Theta_{(i-1)}$ for $i=1,\ldots,n$.
We also define
$$
X_{(i)}:=\exp\{2\pi i \,\Theta_{(i)}\}, \ \ i=0,1,\ldots,n,
$$
so that the 2-dimensional random points are ordered via their respective angles (similarly to the proof of Lemma \ref{lemma homology sn}).
It is easy to check by induction (or alternatively see \cite{BK07} or \cite{Dev81})
that the joint distribution of the spacings vector $(S_1,\ldots,S_n)$ is uniform on the unit $(n-1)$-simplex:
$$
\P(S_1>a_1,S_2>a_2,\ldots, S_n>a_n)=
\begin{cases}
(1-\sum_j a_j)^{n-1}, & \sum_j a_j<1\\
0, & \sum_j a_j\geq 1
\end{cases}
$$
As in the case of three random points above, from Lemma \ref{lemma homology sn} we know that the $\beta_1$-barcode dies at time $t_d=1$ and is born at time
\begin{equation}\label{eq birth n points}
t_b=\begin{cases} 1, &\textrm{if } X_1,X_2,\ldots,X_n \textrm{ lie on a half circle}\\ \max_{i=1}^n |X_{(i)}-X_{(i-1)}|/{2} & \textrm{otherwise}\end{cases}.
\end{equation}
The first condition in \eqref{eq birth n points} is equivalent to the \emph{maximal spacing} $M_n:=\max_{i=1}^n S_i$ being $\geq 1/2$.
However, on $\{M_n<1/2\}$ we have
$$
\max_{i=1}^n \frac{|X_{(i)}-X_{(i-1)}|}{2} = \sin(\pi M_n).
$$
For the remainder of the calculation let us abbreviate $\ell\left(\beta_1(P_n)\right)$ by $\ell$.
From these observations we conclude that $\E[\ell] =\E\left[(1-\sin(\pi M_n))\one{\{M_n<1/2\}}\right]$.
From the expression given above for the joint residual distribution of the spacings and the inclusion-exclusion formula, one deduces the following expression for the residual distribution of $M_n$:
$$
\P(M_n>x)=\P(M_n\geq x)=\sum_{k\geq 1: \, kx <1 } (-1)^{k-1} \binom{n}{k} (1-kx)^{n-1}.
$$
This formula is attributed to Whitworth \cite{Whi97}.
Let us define $g: [0,1] \to [0,1]$ as
$$
g(t):=\begin{cases} \sin{(\pi t)}, & t<1/2,\\
1, & t\geq 1/2.
\end{cases}
$$
Now $\E[\ell] = 1- \E\left[g(M_n)\right]$.
Since $g$ is non-negative and differentiable (of class $C^1$), we can apply a well-known change of (order of) integration formula
$$
\E\left[g(M_n)\right] = \int_{t\geq 0} g'(t) \P(M_n\geq t) \,dt =\int_0^{\frac{1}{2}} \pi \cos(\pi t) \P(M_n\geq t)\, dt ,
$$
which equals
$$
\sum_{k\geq 1}(-1)^{k-1}\binom{n}{k} \int_0^{\min\left(\frac{1}{2},\frac{1}{k}\right)} \pi \cos(\pi t) (1-kt)^{n-1} \,dt.
$$
\end{proof}
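As a consistency check (our own, in Python), one can evaluate the series of Theorem \ref{theorem expectation circle} by numerical quadrature at $n=3$ and recover the closed form of Proposition \ref{theorem expectation circle 3points}.

```python
import math

# Evaluate E[ell] = 1 - sum_k (-1)^{k-1} C(n,k) * I_k, where
# I_k = int_0^{min(1/2,1/k)} pi*cos(pi*t)*(1-k*t)^{n-1} dt,
# by composite Simpson quadrature, and compare at n = 3 with the closed form
# 9*(sqrt(3)-2)/pi^2 + 1/4 of the three-point proposition.

def simpson(f, a, b, steps=2000):
    """Composite Simpson rule with an even number of steps."""
    h = (b - a) / steps
    s = f(a) + f(b)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def expected_length(n):
    total = 0.0
    for k in range(1, n + 1):
        upper = min(0.5, 1.0 / k)
        integrand = lambda t, k=k: math.pi * math.cos(math.pi * t) * (1.0 - k * t) ** (n - 1)
        total += (-1) ** (k - 1) * math.comb(n, k) * simpson(integrand, 0.0, upper)
    return 1.0 - total

closed_form = 9.0 * (math.sqrt(3.0) - 2.0) / math.pi ** 2 + 0.25
print(expected_length(3), closed_form)  # the two values agree
```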
\begin{remark}[Related work]
Computations similar to ours were made by Bubenik and Kim \cite{BK07}
in the setting of the Vietoris--Rips filtration (as opposed to the \v{C}ech filtration),
and with respect to the angular metric on points (rather than the Euclidean metric used here).
\end{remark}
\section{Approximation by expected transformations of random barcodes}\label{section approximation one}
The calculations made in the previous section demonstrate that expected functionals of barcodes can be quite difficult (and, for more complicated examples, impossible) to obtain explicitly.
Theorem \ref{theorem almost sure limit of number of points} applied to $\S^1$ on the other hand tells us that as $n$ gets large, in the notation of the previous section, the length $\ell(\beta_1(P_n))$ of the single bar
comprising $\beta_1(\S^1)$ must converge to $1$.
If interested in the asymptotics of $\ell(\beta_1(P_n))$ and $\E\left[\ell(\beta_1(P_n))\right]$, we refer the reader to Devroye~\cite{Dev81}.
In particular, since $\sin(x)\sim x$ for small $x$, one can apply \cite[Lemma 2.5]{Dev81}, which states that $\frac{n M_n}{\log{n}} \to 1$ in probability, where, as in the last section, $M_n$ denotes the maximal spacing. Therefore, $M_n\to 0$
almost surely, and $1-\ell(\beta_1(P_n))$ is of order $\frac{\log{n}}{n}$ with an overwhelming probability as $n\to \infty$.
Similar considerations based on \cite{Dev81}, Lemma 2.6 lead to $\E\left[\ell(\beta_1(P_n))\right]=1-\Theta(\frac{\log{n}}{n})$ as $n\to \infty$.
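This asymptotic behaviour is easy to observe numerically. The following Python sketch (our own illustration) uses the spacing representation from the previous section, so that $1-\ell$ equals $\sin(\pi M_n)$ on $\{M_n<1/2\}$ and $1$ otherwise, and estimates $n(1-\E[\ell])/\log n$ by Monte Carlo.

```python
import math
import random

# Estimate n*(1 - E[ell])/log(n) for growing n; since M_n ~ log(n)/n and
# sin(pi*M_n) ~ pi*M_n, the ratio should stay bounded (of constant order).

random.seed(2)

def one_minus_ell(n):
    """One sample of 1 - ell(beta_1(P_n)) via the maximal normalized spacing."""
    a = sorted(random.random() for _ in range(n - 1))
    m = max(y - x for x, y in zip([0.0] + a, a + [1.0]))  # maximal spacing M_n
    return 1.0 if m >= 0.5 else math.sin(math.pi * m)

ratios = {}
for n in (100, 400):
    trials = 5_000
    est = sum(one_minus_ell(n) for _ in range(trials)) / trials
    ratios[n] = est * n / math.log(n)
    print(n, ratios[n])  # bounded ratio as n grows
```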
This is an interesting example that motivates the study of the quality of such an approximation in general.
Similarly to Section \ref{section approximation two}, one could consider, for a fixed (and relatively large) $n\in \N$, i.i.d.~$\R^d$-valued random variables $X_1,\ldots, X_n$, where the joint distribution has support on some compact subset $M \subset \R^d$.
The $k$-th barcode of the resulting random finite set $P_n=\{X_1,\ldots,X_n\}$ yields a random barcode $\beta_k(P_n)$ for each $k$.
Suppose that $T:\BC \to V$ is a continuous function from the barcode space to some Banach space.
By Theorem \ref{theorem lln for barcodes} and Theorem \ref{theorem hypothesis of lln and clt for compact}, the expected value $\E[T(\beta_k(P_n))]$ can be well approximated by the
empirical means (taken over many i.i.d.~samples of point clouds of size $n$).
We restrict our hypotheses somewhat compared with those of Section \ref{section approximation two}, assuming in addition that $M$ is a compact $m$-dimensional manifold in $\R^d$ and that the distribution of $X_1$ above is uniform on $M$.
We are working on relaxing these hypotheses in a forthcoming project.
Let us first introduce some notation.
Recall that the \emph{medial axis} of $M$ is defined as the closure of the set of points in $\R^d$ that do not have
a unique nearest point on $M$.
We denote by $\tau=\inf_{p\in M} \sigma(p) $ the infimum of the distances $\sigma(p)$ of $p\in M$ from the medial axis of $M$, i.e., every point in the open $\tau$-neighborhood has a unique nearest point on $M$.
It follows from compactness that $\tau$ is positive. The quantity $\tau$ is referred to as
the \emph{reach} of $M$.
Under the above assumptions, we can rely on the work by Niyogi et al.~\cite{NSW08}.
Theorem 3.1 of \cite{NSW08} is not sufficient for our purposes; we therefore prove a stronger statement
in Theorem \ref{theorem NSW reinforced} and explain how it also follows from the analysis in \cite{NSW08}, see also Remark \ref{remark nsw reinforced}.
Let
\begin{equation}\label{eq nsw constants}
\begin{aligned}
c_1 (\eps)&:=\frac{{\mathrm{vol}}(M) } {\cos\left({\arcsin{\left(\frac{\eps}{8\tau}\right)} }\right)^m {\mathrm{vol}}\left(B^m_{\eps/4}(0)\right)} , \\ \\
c_2 (\eps)&:=\frac{{\mathrm{vol}}(M) } {\cos\left({\arcsin{\left(\frac{\eps}{16\tau}\right)}}\right)^m {\mathrm{vol}}\left(B^m_{\eps/8}(0)\right)},
\end{aligned}
\end{equation}
where the superscript $m$ indicates that the balls of radii $\eps/4$ and $\eps/8$, respectively, are taken in $\R^m$ (and not necessarily in the ambient space $\R^d$).
In particular, the smaller $\eps$ is, the larger $c_1(\eps)$ and $c_2(\eps)$ are; both are of order $1/\eps^m$. We will use these constants throughout this section.
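To make the constants concrete, the following Python sketch (ours) evaluates $c_1(\eps)$ and $c_2(\eps)$ in the simplest case $M=\S^1$, so $m=1$, $\tau=1$, ${\mathrm{vol}}(M)=2\pi$ and ${\mathrm{vol}}(B^1_r(0))=2r$, and confirms the $1/\eps^m$ growth.

```python
import math

# The constants from the display above, specialized to M = S^1:
# c_1(eps) = vol(M) / (cos(arcsin(eps/(8*tau)))^m * vol(B^m_{eps/4}(0))),
# and analogously c_2 with eps/16 and eps/8. For m = 1, vol(B^1_r(0)) = 2r.

def c1(eps, tau=1.0, vol_m=2.0 * math.pi):
    return vol_m / (math.cos(math.asin(eps / (8.0 * tau))) * (2.0 * eps / 4.0))

def c2(eps, tau=1.0, vol_m=2.0 * math.pi):
    return vol_m / (math.cos(math.asin(eps / (16.0 * tau))) * (2.0 * eps / 8.0))

for eps in (0.4, 0.2, 0.1, 0.05):
    print(eps, c1(eps), c2(eps), eps * c1(eps))  # eps*c1(eps) is nearly constant
```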
Let $A\subset \R^d$ be a set and $t\geq 0$. As in Section \ref{section barcodes} we denote by $A_{t}$ the closed $t$-neighborhood of $A$. For every manifold $M$ with reach $\tau$ as above and for every $0\leq t < \tau$ the inclusion $\iota_t: M{\, \hookrightarrow\,} M_{t}$ is a homotopy equivalence. This is almost by definition of the reach: a homotopy inverse is given by the projection $\pi: M_{t}\to M$ to the nearest point on $M$. Note that for any $p\in M_{t}$ the line segment connecting $p$ to $\pi(p)$ is entirely contained in $M_{t}$ (even in the fiber of $\pi$ over $\pi(p)$) so that a simple convex combination between $\iota \circ \pi$ and the identity gives a homotopy equivalence. For $A\subset M$ and $t \in [0,\tau)$ we denote
\begin{equation}\label{eq homotopy equivalence}
\chi_{A,t} : A_{t} {\, \hookrightarrow\,} M_{t} \xrightarrow{\ \pi\ } M
\end{equation}
the composition of the inclusion with the projection.
\begin{theorem}\label{theorem NSW reinforced}
Let $M \subset \R^d$ be a smooth compact submanifold of dimension $m$ and let $X_1,X_2,\ldots,X_n$ be an i.i.d.
random sample from $M$ for the uniform distribution. Denoting $P_n:=\{X_1,\ldots,X_n\}$ we have that
if $\veps \in (0,\sqrt{\frac{3}{5}}\tau)$, then for each $\delta>0$ and each
\begin{equation}\label{eq nsw n}
n>c_1(\veps)\left(\log(c_2 (\veps))+\log{\dfrac{1}{\delta}}\right),
\end{equation}
the map $\chi_{P_n,t} : (P_n)_{t}\to M$ from \eqref{eq homotopy equivalence} is a homotopy equivalence for all $t \in \left[\veps,\sqrt{\frac{3}{5}}\tau\right)$ with probability at least $1-\delta$.
\end{theorem}
\begin{remark}\label{remark nsw reinforced}
We could have restricted $\delta$ to $(0,1]$, but prefer this statement (trivially true if $\delta>1$ since any probability is
non-negative) in view of applications below.
A careful comparison with Theorem 3.1 of \cite{NSW08} reveals several differences, but only one of them makes the result stated here non-trivially stronger in the stochastic sense.
The claim in Theorem \ref{theorem NSW reinforced} is that for any $0 \leq \veps < \sqrt{\frac{3}{5}}\tau$ the map $\chi_{P_n,t} : (P_n)_{t}\to M$ from \eqref{eq homotopy equivalence} is a homotopy equivalence \emph{on the whole interval} of parameters $t \in [\veps,\sqrt{\frac{3}{5}}\tau)$ on one and the same event of sufficiently large probability.
The claim in \cite{NSW08} is only that $\chi_{P_n,\veps} $ is a homotopy equivalence at the given parameter $\veps$ on an event of a sufficiently large probability.
However, an intersection of many (let alone, infinitely many) highly probable events may have a drastically smaller probability.
However, this does not happen here, for the reasons we give next. We do not contribute any new argument for this: the stronger formulation stated in Theorem \ref{theorem NSW reinforced} is merely a consequence of ordering the arguments of \cite{NSW08} accordingly.
\end{remark}
\begin{proof}[Proof of Theorem \ref{theorem NSW reinforced}]
Recall that $P_n$ is called \emph{$\veps$-dense} if the open $\veps$-neighborhood of $P_n$ covers $M$. For a given $\veps \in (0,\sqrt{\frac{3}{5}}\tau)$, $\delta >0$, and $n$ satisfying \eqref{eq nsw n}, the event $A_\veps$ defined by the random point cloud $P_n \subset M$ being \emph{$\frac{\veps}{2}$-dense} in $M$, has probability at least $1-\delta$ by Lemma 5.1 in \cite{NSW08}. Therefore, on the same event $A_\veps$ the same point cloud is $\frac{t}{2}$-dense for every $t\in [\veps,\sqrt{\frac{3}{5}}\tau)$.
Now we infer from Proposition 3.1 in \cite{NSW08} the deterministic statement that whenever a subset $P\subset M$ is $\frac{t}{2}$-dense, the map $\chi_{P,t}:P_{t} \to M$ is a homotopy equivalence. Let $(\Omega,\sF,\P)$ denote the corresponding probability space. Then we apply the above reasoning and the just mentioned proposition to obtain
$$
\begin{aligned}
A_\veps &=\left\{ \omega \in \Omega\middle| P_n(\omega) \textrm{ is } \frac{\veps}{2}\textrm{-dense} \right\} \\
&=\left\{ \omega \in \Omega\middle| P_n(\omega) \textrm{ is } \frac{t}{2}\textrm{-dense for all } t\in \left[\veps, \sqrt{\frac{3}{5}}\tau\right) \right\} \\
&=\left\{ \omega \in \Omega\middle| \chi_{P_n(\omega),t} \textrm{ is a homotopy equivalence for all } t\in \left[\veps, \sqrt{\frac{3}{5}}\tau\right) \right\}.
\end{aligned}
$$
Together with Lemma 5.1 from \cite{NSW08} applied to these sets $A_\veps$, the claim follows. Note that the quantities from that lemma are bounded,
according to the analysis in Section 5 of \cite{NSW08}, in such a way that \eqref{eq nsw n} holds.
\end{proof}
We will make essential use of the following easy but important observation.
\begin{lemma}\label{lemma barcode approximation}
Let $M \subset \R^d$ be a smooth compact submanifold of dimension $m$ and reach $\tau$, let $X_1,X_2,\ldots,X_n$ be an
i.i.d. random sample from $M$ for the uniform distribution, and put $P_n:=\{X_1,\ldots,X_n\}$.
Then for each $\veps \in (0,\sqrt{\frac{3}{5}}\tau)$, each $\delta>0$, and each
\begin{equation}\label{eq nsw n2}
n>c_1(\veps) \left(\log(c_2 (\veps))+\log{\dfrac{1}{\delta}}\right),
\end{equation}
we have that
\[
d_\infty\left(\beta_k(M),\beta_k(P_n)\right) \leq \frac{\veps}{2}
\]
with probability at least $1-\delta$.
\end{lemma}
\begin{proof}
By \cite[Proposition 3.2]{NSW08}, for every $\veps \in (0,\sqrt{\frac{3}{5}}\tau)$ and every $n$ as in \eqref{eq nsw n2}, the sample $P_n$ is
$\frac{\veps}{2}$-dense in $M$ with probability at least $1-\delta$. Because of $P_n\subset M$ this just means that $d_H(P_n,M)\leq \frac{\veps}{2}$ for the Hausdorff distance $d_H$. Therefore, the claim follows by \eqref{proposition barcodes for compact metric spaces item one} of Proposition \ref{proposition barcodes for compact metric spaces}.
\end{proof}
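For $M=\S^1$ the density statement can be made very explicit: for a finite $P\subset\S^1$ with angular gaps $g_i$ one has $d_H(P,\S^1)=\max_i 2\sin(g_i/4)$, attained at the midpoint of the largest gap. The following Python sketch (our own illustration) shows how this Hausdorff distance shrinks as the uniform sample grows, so that $P_n$ eventually becomes $\frac{\veps}{2}$-dense for any fixed $\veps$.

```python
import math
import random

# For P on the unit circle, the point of S^1 farthest from P is the midpoint of
# the largest angular gap g, at chord distance 2*sin(g/4) from the gap endpoints.

random.seed(3)

def hausdorff_to_circle(angles):
    a = sorted(angles)
    gaps = [y - x for x, y in zip(a, a[1:])] + [2.0 * math.pi - a[-1] + a[0]]
    return max(2.0 * math.sin(g / 4.0) for g in gaps)

results = {}
for n in (10, 100, 1000):
    angles = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    results[n] = hausdorff_to_circle(angles)
    print(n, results[n])  # decreases as the sample grows
```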
To formulate our next result, we introduce an operator on barcodes.
For any two $a,b$ such that $0<a\leq b<\infty$, let $R_{[a,b]}$ denote the restriction map
$R_{[a,b]} :\BCbot \to \BCbot$ defined as follows:
for each finite barcode representation $\beta=\{I_i\}_{i=1}^n \in \BC$ with $I_i =(x_i,d_i)\in \R_{\geq 0}^2$ we first define
$$I_i^{|(a,b)}:=\left(\max(x_i,a), \min(x_i+d_i,b)-\max(x_i,a)\right)$$ if $\min(x_i+d_i,b)\geq \max(x_i,a)$ and $I_i^{|(a,b)}:=(x_i,0)$ otherwise. Finally, we put $R_{[a,b]}(\beta)= \{I_i^{|(a,b)}\}_{i=1}^n$.
Since the thus defined $R_{[a,b]} :\BC \to \BC$ is clearly a $1$-Lipschitz map,
we can extend it as usual to $R_{[a,b]} :\BCbot \to \BCbot$.
Note that the coordinates of $I_i^{|(a,b)}$ are just the starting point and the length of the interval
$[x_i,x_i+d_i]\cap [a,b]$ if nonempty.
For further use we also record that for a given barcode $\beta \in \BCbot$ the barcode $R_{[a,b]}(\beta)$ depends continuously on $a$ and $b$.
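For concreteness, here is a direct Python transcription (a sketch under our reading of the definition, not code from the paper) of $R_{[a,b]}$ on finite barcode representations stored as (birth, length) pairs.

```python
# Restriction map R_[a,b]: each interval (x_i, d_i) is replaced by the starting
# point and length of [x_i, x_i + d_i] ∩ [a, b], degenerating to a zero-length
# interval at x_i when the intersection is empty.

def restrict(barcode, a, b):
    out = []
    for x, d in barcode:
        lo, hi = max(x, a), min(x + d, b)
        out.append((lo, hi - lo) if hi >= lo else (x, 0.0))
    return out

bars = [(0.0, 3.0), (2.0, 1.0), (5.0, 2.0)]
print(restrict(bars, 1.0, 4.0))  # [(1.0, 2.0), (2.0, 1.0), (5.0, 0.0)]
```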
\begin{setup}\label{setup banach space} We fix a Lipschitz continuous map $T:\BCbot \to V$ to some Banach space $\left(V,\norm\cdot\right)$ with Lipschitz constant $L(T)>0$.
Due to compactness and continuity, the transformed barcodes $T(\beta_k(M))$ and $T(\beta_k(P_n))$ are uniformly bounded over $n$ by some finite number, which we denote by $C(M;T)$.
We also know that, for large $n$, both $T(\beta_k(P_n))$ and $\E[T(\beta_k(P_n))]$ (due to the dominated convergence theorem) approximate $T(\beta_k(M))$.
The question is how large can the difference of $T(\beta_k(M))$ and $\E[T(\beta_k(P_n))]$ be?
By interpreting Theorem \ref{theorem NSW reinforced} we arrive at the following conclusion.
\end{setup}
\begin{theorem}\label{theorem consequence of NSW}
Let $M \subset \R^d$ be a smooth compact submanifold of dimension $m$ and reach~$\tau$,
let $X_1,X_2,\ldots,X_n$ be an i.i.d.
random sample from $M$ for the uniform distribution, and denote $P_n:=\{X_1,\ldots,X_n\}$. Let $\veps\in \left[0,\sqrt{\frac{3}{5}}\tau\right)$, and put $I_\veps:= \left[\veps,\sqrt{\frac{3}{5}}\tau\right)$. Then for all $k \in\N_0$ the following hold:
\begin{enumerate}
\item\label{theorem consequence of NSW item one} Let $\bar I_\veps=\left[\veps,\sqrt{\frac{3}{5}}\tau\right]$ denote the closure of the
interval $I_\veps$ and $c_1(\veps)>0$ and $c_2(\veps)>0$ be as in \eqref{eq nsw constants}. Then:
\[
{\E\left[\norm{T\circ R_{\bar I_\veps}(\beta_k(P_n)) - T\circ R_{\bar I_\veps}(\beta_k(M))}_V\right] } \ \leq \ 3c_2(\veps) \exp\left(\frac{-n}{c_1(\veps)}\right) C(M;T).
\]
\item\label{theorem consequence of NSW item two} For the unrestricted barcodes we have:
\[
{\E\left[\norm{T\left(\beta_k(P_n)\right)- T\left(\beta_k(M)\right)}_V\right] } \ \leq \ 3c_2(\veps) \exp\left(\frac{-n}{c_1(\veps)}\right) C(M;T) \ + \ \frac{L(T)\cdot \veps}{2}.
\]
\end{enumerate}
Here, $T$, $C(M;T)$ and $L(T)$ are as in Setup \ref{setup banach space}.
\end{theorem}
\begin{proof}
Let us prove \eqref{theorem consequence of NSW item one}. Since, as noted above, the restricted barcode depends continuously on the endpoints of the interval, it suffices to prove the inequality for every closed interval contained in $I_\veps$. Let $I \subset I_\veps$ be such an interval.
Due to Theorem \ref{theorem NSW reinforced}, with our choice of $n$ we have that for all $s\in I_\veps$ the homology of
$M$ equals that of the point cloud thickened by $s$, except on an event $E_\veps$ of probability at most $\delta$.
Condition \eqref{eq nsw n} is equivalent to
\[
\delta > c_2(\veps) \exp{\left(-\frac{n}{c_1(\veps)}\right)}.
\]
In particular, we could take $\delta(n)=3c_2(\veps) \exp{\left(-\frac{n}{c_1(\veps)}\right)}/2$.
Therefore, we find $\P(E_\veps) \leq \frac{3}{2} c_2(\veps) \exp{\left(\frac{-n}{c_1(\veps)}\right)}$,
and on the complement of $E_\veps$ we know that the homology of the inflated point cloud $(P_n)_s$ does
not change when $s\in I$ varies, and is equal to that of $M$ and hence to that of $M_s$.
In particular, $T \circ R_{I}(\beta_k(P_n)) =T\circ R_{I}(\beta_k(M))$ for all
$k\in \N_0$ on the complement $E_\veps^c$.
To arrive at the above stated bound, for each given $k$, we apply the trivial upper bound
$\norm{T \circ R_{I}(\beta_k(P_n)) -T\circ R_{I}(\beta_k(M))}_V \leq 2 C(M;T)$ on $E_\veps$,
and take expectation.
For the proof of \eqref{theorem consequence of NSW item two}, we just have to note that on $E_\veps^c$ we have $d_\infty(\beta_k(P_n),\beta_k(M)) \leq \frac{\veps}{2}$ by Lemma \ref{lemma barcode approximation}. The claim follows as $T$ is $L(T)$-Lipschitz and $\P(E_\veps^c)\leq 1$.
\end{proof}
In particular, the theorem implies that the quantity $\norm{\E\left[T(\beta_k(P_n))\right]-T(\beta_k(M))}_V$ satisfies the same inequalities as in Theorem \ref{theorem consequence of NSW} thanks to the basic properties of the Pettis integral, see Section~\ref{section clt}.
\section{Embedding the space of barcodes}\label{section embedding barcodes}
In Section \ref{section clt} we have deduced LLN and CLT for random variables induced from random barcodes. We have been working with Lipschitz continuous maps from $\BCbot$ to some Banach space. In this section we will take a look at one such example by building on work of the first named author \cite{Kal18}. Let $\BC$ denote the space of barcode representations. Our goal is to describe a Lipschitz continuous embedding $\BCbot {\, \hookrightarrow\,} \ell_1$.
Let us consider the operations $\boxplus,\oplus,\odot$ on $\R$ defined as
\[
a\oplus b := \min{(a, b)}, \quad a\boxplus b := \max{(a, b)}, \quad a\odot b := a+b.
\]
We call $(\R, \boxplus, \odot)$ the max-plus semiring and $(\R, \oplus, \odot)$ the tropical semiring.
Just as ordinary polynomials are formed by multiplying and adding real variables, max-plus polynomials can be formed by multiplying and adding variables in the max-plus semiring. Let $x_1, x_2, \ldots, x_N$ be variables that represent elements in the max-plus semiring. A \emph{max-plus monomial expression} is any product of these variables, where repetition is permitted. By commutativity, we can sort the product and write monomial expressions with the variables raised to exponents:
\[
p(x_1, x_2, \ldots, x_N) = a_1\odot x_1^{a_1^1} x_2^{a_2^1} \ldots x_N^{a_N^1} \boxplus a_2\odot x_1^{a_1^2} x_2^{a_2^2} \ldots x_N^{a_N^2}\boxplus \ldots \boxplus a_m\odot x_1^{a_1^m} x_2^{a_2^m} \ldots x_N^{a_N^m}.
\]
Here the coefficients $a_1, a_2, \ldots, a_m$ are in $\R$, and the exponents $a_j^i$ for ${1\leq j \leq N}$ and ${1\leq i \leq m}$ are in $\N_0$.
Different max-plus polynomial expressions may happen to define the same functions. Thus, if $p$ and $q$ are max-plus polynomial expressions and
\[
p(x_1, x_2, \ldots, x_N) = q(x_1, x_2, \ldots, x_N)
\]
for all $(x_1, x_2, \ldots, x_N)\in \R^N$, then $p$ and $q$ are said to be \emph{functionally equivalent}, and we write $p \sim q$. The max-plus polynomials form the semiring of equivalence classes of max-plus polynomial expressions with respect to $\sim$.
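To make functional equivalence concrete, here is a small numeric sketch; the two expressions are our own illustration, not taken from \cite{Kal18}. In max-plus notation $x^2$ stands for $x\odot x = 2x$ and $\boxplus$ is $\max$, so $(x\boxplus 0)^2$ and $x^2\boxplus x\boxplus 0$ are different expressions defining the same function of $x$:

```python
# Our illustration: (x boxplus 0)^2 and x^2 boxplus x boxplus 0 are
# functionally equivalent max-plus expressions in one variable.

def p(x):
    # (x boxplus 0)^2  ->  2 * max(x, 0)
    return 2 * max(x, 0.0)

def q(x):
    # x^2 boxplus x boxplus 0  ->  max(2x, x, 0)
    return max(2 * x, x, 0.0)

for x in [-3.0, -0.5, 0.0, 0.7, 4.2]:
    assert p(x) == q(x)
```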
The goal of \cite{Kal18} was to identify sufficiently many max-plus polynomials on $\BC$ to separate points. This involves finding functions invariant under the action of the symmetric group. To be able to list these functions, consider the set $\scrE_N$ of $(N\times 2)$-matrices with entries in $\{0,1\}$. The symmetric group $S_N$ acts on $\scrE_N$ by permuting the rows. To a matrix $E=(e_{i,j})_{i,j} \in \scrE_N$ we associate the max-plus monomial $P(E) = x_{1, 1}^{e_{1,1}} x_{1, 2}^{e_{1,2}} \ldots x_{N, 1}^{e_{N,1}} x_{N, 2}^{e_{N, 2}}$. Suppose that the $S_N$-orbit of $E$ is $[E]= \{ E_1, E_2,\ldots, E_m\}$. Then $P_E= P(E_1)\boxplus P(E_2)\boxplus \ldots \boxplus P(E_m)$ is a 2-symmetric max-plus polynomial, and we can define a function $P_{k, E}$ on $\BC_N$ as
\begin{equation}\label{poly}
P_{k,E}(x_1,d_1,\ldots,x_N,d_N):=P_{E}(x_1\oplus d_1^k,d_1,\ldots,x_N\oplus d_N^k,d_N).
\end{equation}
For $m,n \in \N_0$ with $m+n \geq 1$ we denote by $E_{m, n}$ the matrix
\[
\begin{array}{c@{\!\!\!}l}
\left( \begin{array}[c]{cc}
1 & 1\\
\vdots & \vdots\\
1 & 1\\
0 & 1\\
\vdots&\vdots\\
0 & 1\\
\end{array} \right)
&
\begin{array}[c]{@{}l@{\,}l}
\left. \begin{array}{c} \vphantom{0} \\ \vphantom{\vdots}
\\ \vphantom{0} \end{array} \right\} & \text{$m$ times} \\
\left. \begin{array}{c} \vphantom{0}
\\ \vphantom{\vdots} \\ \vphantom{0} \end{array} \right\} & \text{$n$ times} \\
\end{array}
\end{array}
\]
and write $P_{k,m,n}$ for the polynomial $P_{k, E_{m,n}}$. This is a function on $\BC$; if $b$ is a barcode with $N$ bars, then
\begin{itemize}
\item
if $m+n=N$, we use Equation~(\ref{poly});
\item
if $m+n>N$, then we add $m+n-N$ zero-length bars to $b$ and then use Equation~(\ref{poly});
\item
if $m+n<N$, then we add $N-m-n$ zero rows to the matrix $E_{m, n}$ and then use Equation~(\ref{poly}) for this enlarged matrix.
\end{itemize}
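A brute-force evaluation of $P_{k,m,n}$ can be sketched as follows (our own illustration, not code from \cite{Kal18}). Two conventions here are assumptions on our part: a padded zero-length bar is taken to be $(0,0)$, and for $k=0$ we read $x\oplus d^0$ as $x$ itself, which reproduces the values in the Example below and matches the factor $\max(k,1)$ in the Lipschitz constant:

```python
from itertools import combinations

def P_kmn(k, m, n, bars):
    """Brute-force evaluation of P_{k,m,n} on a barcode given as a list of
    (x_i, d_i) pairs.  Padding bars are assumed to be (0, 0)."""
    bars = list(bars) + [(0.0, 0.0)] * max(0, m + n - len(bars))
    # the substitution x_i (+) d_i^k; for k >= 1 this is min(x_i, k*d_i),
    # and for k = 0 we keep x_i itself (an assumed convention, see above)
    first = [x if k == 0 else min(x, k * d) for x, d in bars]
    idx = range(len(bars))
    best = float("-inf")
    for A in combinations(idx, m):            # bars matched to the (1,1) rows
        rest = [i for i in idx if i not in A]
        for B in combinations(rest, n):       # bars matched to the (0,1) rows
            val = (sum(first[i] + bars[i][1] for i in A)
                   + sum(bars[i][1] for i in B))
            best = max(best, val)
    return best

# the barcode b = (1,1,...,1,1) with N = 4 bars from the Example below:
b = [(1.0, 1.0)] * 4
assert [P_kmn(0, l, 0, b) for l in range(1, 7)] == [2, 4, 6, 8, 8, 8]
```

The two nested loops over subsets simply enumerate the $S_N$-orbit of $E_{m,n}$; rows of zeros contribute nothing, so the case $m+n<N$ needs no explicit padding of the matrix.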
It was shown in \cite[Theorem 6.7]{MKV2016} that the set of functions $\{P_{{k,m,n}}\}_{{k,m,n} \in \N_0^3}$ separates points of $\BC$. Furthermore, all of these functions are Lipschitz~\cite{Kal18}, i.e.\ for $C(k, m, n)=2(2m\max(k, 1) + 2m + 2n)$, the estimate
\begin{equation}\label{lipschitz}
|P_{{k,m,n}}(b)-P_{{k,m,n}}(b')|\ \leq \ C(k,m,n)\ d_B(b, b')
\end{equation}
holds for $b, b'\in \BC$.
We fix once and for all an enumeration $(k_1,m_1,n_1),(k_2,m_2,n_2),\ldots$ and consider the corresponding coordinates on the barcode space. We obtain:
\begin{theorem}\label{theorem embedding l1}
The sequence $(\frac{1}{C(k_t, m_t, n_t)t^2}P_{k_t,m_t,n_t})_{t\in \N}$ of functions $\BC \to \R$ defines an injective map $\iota:\BC {\, \hookrightarrow\,} {\ell_1}$. This map is Lipschitz continuous.
\end{theorem}
\begin{proof}
Let $b\in \BC$ be a barcode. We first prove that $\iota(b)$ is well-defined, i.e., lies in ${\ell_1}$. Let us write $b=(x_1, d_1, \ldots, x_N, d_N)$ with $x_i, d_i \in \R_{\geq 0}$, and let $M:=\max_{i=1}^N \max(x_i,d_i)$. For any $k,m,n \in \N_0$ we claim that $P_{k,m,n}(b) \leq 2MN$. Indeed, $P_{k,m,n}(b)$ is the maximum of $P(E)(x_1\oplus d_1^k,d_1,\ldots,x_N\oplus d_N^k,d_N)$ where $E$ runs through the orbit of $E_{m,n}$, and since the monomials $P(E)$ have degree at most $2N$,
\[
P(E)(x_1\oplus d_1^k,d_1, \ldots, x_N\oplus d_N^k,d_N) \leq P(E)(x_1,d_1, \ldots, x_N,d_N) \leq P(E)(M,\ldots,M) \leq 2NM.
\]
Since $C(k,m,n)\geq 1$, also $\frac{1}{C(k,m,n)}P_{k,m,n}(b) \leq 2MN$.
Consequently,
\[
\sum_{t\in \N} \frac{1}{C(k_t, m_t, n_t)t^2}\abs{P_{k_t,m_t,n_t}(b)} \leq \sum_{t\in \N} \frac{2MN}{t^2} < \infty.
\]
As mentioned above, the functions $P_{k,m,n}$ separate points on $\BC$, so that $\iota$ is indeed injective.
This embedding is Lipschitz since it follows from Equation~\eqref{lipschitz} that
\[
\begin{aligned}
\sum_{t=1}^\infty \frac{1}{C(k_t, m_t, n_t)t^2}\left|P_{k_t,m_t,n_t}(b)-P_{k_t,m_t,n_t}(b')\right| &\leq \sum_{t=1}^\infty \frac{1}{t^2}\, d_B(b, b')\\
&=\frac{\pi^2}{6}\, d_B(b, b').
\end{aligned}
\]
\end{proof}
\begin{example}
It is easy to see that the scaling by $\frac{1}{t^2}$ in the definition of $\iota$ is necessary. Consider e.g.\ $b=(1,1,\ldots,1,1)\in\BC_N \subset \BC$ and set $a_\ell:=P_{0,\ell,0}(b)$. Then $a_\ell=2\ell$ if $\ell\leq N$ and $a_\ell=2N$ otherwise; in particular, $\sum_\ell a_\ell$ diverges.
\end{example}
\begin{remark}
Note that for $1\leq p\leq q \leq \infty$ we have $\ell_p \subset \ell_q$ and the inclusion is Lipschitz continuous. In particular, we have a Lipschitz continuous embedding of $\BCbot$ into the separable Hilbert space $\ell_2$, and thus into a separable Banach space of type $2$, as in the assumptions of several results in Section \ref{section clt}.
\end{remark}
\section{Discussion}\label{section discussion}
The focus in the present paper is on perfect data, sampled without noise.
It seems important to allow for noise, and therefore for data issued from distributions with unbounded support.
Once we allow for noise (potentially with unbounded support) and the number of points $n$ is large, the maximal error will typically also be large with high probability.
To overcome this problem, it seems reasonable to assume that, for each $n$, the random points are sampled independently from a distribution indexed by $n$,
in such a way that the maximal error stays bounded in $n$ with high probability. We postpone this study to future work.
\section{Introduction}
Let $q$ be a nonnegative number. The small transversal vibrations of a string of length $\pi$ fixed at its two ends satisfy
\footnote{
The quantity $y=y(t,x)$ is the height of the string at time $t$ and abscissa $x$ while $y(t)$ stands for the map $y(t,\cdot)$. The choice of $\pi$ for the length of the string is made in order to simplify the writing in the expansion of the solutions in Fourier series.
}
\begin{equation} \label{string}
\begin{cases}
y''-y_{xx}+qy=0 & \text{in } \R\times(0,\pi),\\
y=0 & \text{in }\R\times\{0,\pi\},\\
y(0)=y_0, \quad y'(0)=y_1 & \text{in }(0,\pi).
\end{cases}
\end{equation}
Obtaining observability inequalities for the vibrating string, and for oscillating systems in general, has been the object of many works. Indeed, observability being dual to controllability (cf. D. L. Russell \cite{Russell1978}), it is often a starting point to obtain controllability results (see e.g., J.-L. Lions \cite{Lions1988}, A. Haraux \cite{Haraux1990}).
A useful tool to obtain such inequalities is the Fourier series expansion of the solutions (cf. V. Komornik and P. Loreti \cite{KL05}).
Among all the different ways to observe the system \eqref{string}, \emph{pointwise observation} has been widely studied (see e.g., J.-L. Lions \cite{Lions1992}, A. Haraux \cite{HarauxPonct}). It consists in establishing estimates of the form
\begin{equation*}
\|(y_0,y_1)\|_{\mathcal{I}}\le c\|y(\cdot,\xi)\|_{\mathcal{O}}.
\end{equation*}
The main difficulties are the choice of the norms for the initial data $\|\cdot\|_{\mathcal{I}}$ as for the observation $\|\cdot\|_{\mathcal{O}}$ and the choice of a \emph{strategic point} $\xi$ in the domain. These particular points can be characterized by some of their arithmetical properties (see e.g., A. G. Butkovskiy \cite{But1979}, V. Komornik and P. Loreti \cite{KL11}).
Following a recent paper of A. Szij\'art\'o and J. Heged\H us \cite{SH12}, we focus on a \emph{pointwise-in-time observation}. Such type of observation seems to have been studied at first by A. I. Egorov \cite{Egorov2008} and L. N. Znamenskaya \cite{Znamen2010}. Given two norms, one for the initial data $\|\cdot\|_{\mathcal{I}}$ and one for the observation $\|\cdot\|_{\mathcal{O}}$, the objective is to find two times $t_0$ and $t_1$ such that
\begin{equation}\label{generalobservation}
\|(y_0,y_1)\|_{\mathcal{I}}\le c(\|y(t_0)\|_{\mathcal{O}}+\|y(t_1)\|_{\mathcal{O}})
\end{equation}
From a practical point of view, such an inequality means that, knowing only the position of the whole system at two different instants, we are able to recover the initial data $y_0$ and $y_1$.
\begin{deff}
A pair $(t_0,t_1)$ of real numbers such that the observability inequality \eqref{generalobservation} holds is called a \emph{strategic pair (for \eqref{generalobservation})}.
\footnote{In particular, this notion depends on the norms on the left- and right-hand sides.
}
\end{deff}
The main idea of this paper is the following: depending on how well the quantity
\begin{equation*}
\frac{t_0-t_1}{\pi}
\end{equation*}
is approximable by rational numbers, such pointwise-in-time observability inequalities hold. The main tools are the explicit expansion of the solutions in Fourier series and classical results of Diophantine approximation.
\bigskip
Let us describe the organization of the paper and state (informally) the main results.
In section \ref{setting}, after recalling the definition of function spaces adapted to the study of the well-posedness of \eqref{string}, we reformulate the observation problem in this setting. These spaces, denoted by $D^s$ ($s \in \R$), correspond essentially to the domains of the powers $(-\Delta)^{s/2}$. Then, we may choose two real numbers $r$ and $s$ such that $\|\cdot\|_{\mathcal{I}}=\|\cdot\|_{D^s}$ and $\|\cdot\|_{\mathcal{O}}=\|\cdot\|_{D^r}$.
In section \ref{classicalstring}, we investigate the observation of the \emph{classical string} (i.e.\ $q=0$). We prove the following result (see Theorem \ref{MRclassicalstring}):
\begin{quote}
\emph{
Assume that $r-s\ge 1$. Then, there exist strategic pairs. Moreover, if the inequality is strict, then almost all pairs are strategic. This result is optimal in the sense that there cannot be any strategic pair if $r-s<1$.
}
\end{quote}
In section \ref{more}, we prove (see Theorem \ref{moreobservations}) that the difference $r-s$ can be reduced by adding further observations.
In section \ref{loadedstring}, we focus on the \emph{loaded string} (i.e.\ $q>0$). First we recall the main result of \cite{SH12} in Theorem \ref{thmSH}, which states essentially that if $(t_0-t_1)/\pi$ is a \emph{rational} number and a further hypothesis holds, then $(t_0,t_1)$ is a strategic pair with $r-s=1$. After analyzing the occurrence of such pairs under the above hypotheses in Proposition \ref{density}, we use another method to obtain new observability inequalities. We can state the following result (see Theorem \ref{MRloadedstring}):
\begin{quote}
\emph{
Assume that $r-s=1$. If $(t_0,t_1)$ is a strategic pair for the classical string, then it is also a strategic pair for the loaded string provided that $q$ is sufficiently small.
}
\end{quote}
Finally, in section \ref{other}, we extend our method to the vibrating beam and rectangular plates.
\section{Problem setting and notations}\label{setting}
Let us recall the construction of some useful functional spaces related to the above problem (see e.g., \cite[pp. 7--11]{Kom94}, \cite[pp. 335--340]{Brezis2}).
The functions $\sin(kx),\, k=1,2,\ldots$ form an orthogonal and dense system in $L^2(0,\pi)$.
We denote by $D$ the vector space spanned by these functions and for $s\in \R$, we define a Euclidean norm on $D$ by setting
\begin{equation*}
\Big\|\sum_{k=1}^{\infty}c_k\sin(kx)\Big\|_s^2:=\sum_{k=1}^{\infty}k^{2s}|c_k|^2.
\end{equation*}
The space $D^s$ is defined as the completion of $D$ for the norm $\|.\|_s$.
Then, $D^0$ coincides with $L^2(0,\pi)$ with equivalent norms and
more generally, it is possible to prove that for $s>0$,
\begin{equation*}
D^s=\Big\{f\in H^s(0,\pi) : f^{(2j)}(0)=f^{(2j)}(\pi)=0,\quad \forall \, 0\le j\le\Big[\frac{s-1}{2}\Big]\Big\}.
\end{equation*}
Identifying $D^0$ with its own dual, $D^{-s}$ is the dual of $D^s$. For example,
\begin{equation*}
D^0=L^2(0,\pi), \quad D^1=H^1_0(0,\pi) \quad \text{and} \quad D^{-1}=H^{-1}(0,\pi)
\end{equation*}
with equivalent norms.
Now, we recall a well-posedness result for the problem \eqref{string} via an expansion of the solutions in Fourier series. We set
\begin{equation*}
\om_k:=\sqrt{k^2+q},\qquad k=1,2,\ldots
\end{equation*}
\begin{prop}
Let $s \in \R$. For all initial data $y_0 \in D^s$ and $y_1 \in D^{s-1}$, the problem \eqref{string} admits a unique solution $y \in C(\R,D^s) \cap C^1(\R,D^{s-1}) \cap C^2(\R,D^{s-2})$ given by
\begin{equation} \label{solution}
y(t,x)=\sum_{k=1}^{\infty}(a_k e^{i\om_kt}+b_k e^{-i\om_kt})\sin{kx},
\end{equation}
where the complex coefficients $a_k$ and $b_k$ satisfy
\footnote{
$A\asymp B$ means that there are two positive constants $c_1$ and $c_2$ such that $c_1 B\le A\le c_2 B$.
}
\begin{equation} \label{conservation}
\|y_0\|_s^2+\|y_1\|_{s-1}^2\asymp\sum_{k=1}^{\infty}k^{2s}(|a_k|^2+|b_k|^2).
\end{equation}
\end{prop}
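As a quick sanity check on the expansion \eqref{solution} (our own sketch, not part of the paper): plugging a single mode $y = e^{\pm i\om_k t}\sin(kx)$ into $y''-y_{xx}+qy$ produces the factor $-\om_k^2+k^2+q$, which vanishes by the definition of $\om_k$.

```python
import math

q = 2.0                                   # an arbitrary nonnegative load
for k in range(1, 20):
    w = math.sqrt(k * k + q)              # omega_k = sqrt(k^2 + q)
    # For a mode y = e^{i*w*t} * sin(k*x):
    #   y'' - y_xx + q*y = (-w^2 + k^2 + q) * y,
    # and the factor vanishes by the definition of omega_k.
    assert abs(-w * w + k * k + q) < 1e-9
```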
The observability problem that we are going to investigate in this paper is the following. Given two real numbers $r$ and $s$ such that $s\le r$, we ask whether or not there are two times $t_0$ and $t_1$ such that
\begin{equation} \label{observation2}
\|y_0\|_s+\|y_1\|_{s-1}\leq c(\|y(t_0)\|_r+\|y(t_1)\|_r)
\end{equation}
for a positive constant $c$, independent of the initial data $(y_0,y_1)\in D^r \times D^{r-1}$.
\section{Observability of the classical string ($q=0$)}\label{classicalstring}
In this paragraph, we assume that $q=0$ in the problem \eqref{string}.
The following statement transforms the observation inequality \eqref{observation2} into a problem of Diophantine approximation.
\begin{prop}\label{thmstring}
The pair $(t_0,t_1)$ is strategic if and only if there is a positive constant $c$ such that
\footnote{
If $x$ is a real number, $\|x\|$ denotes the distance between $x$ and the nearest integer.
}
\begin{equation}\label{estthmstring}
\Big\|\frac{k(t_0-t_1)}{\pi}\Big\| \ge \frac{c}{k^{r-s}}, \qquad k=1,2,\ldots
\end{equation}
\end{prop}
For the proof, we need the following
\begin{lem} \label{lemmasinus}
Let $x \in \R$. We have
\begin{equation*}
|\sin kx| \asymp \Big\|\frac{kx}{\pi}\Big\|, \qquad k=1,2,\ldots
\end{equation*}
\end{lem}
\begin{proof}[Proof of Lemma \ref{lemmasinus}]
We follow the proof of \cite[Lemma 2.3]{KL11}. Denoting by $m$ the nearest integer to $kx/\pi$,
\begin{equation*}
|\sin kx|=|\sin(kx-m\pi)|=\Big|\sin\Big(\frac{kx}{\pi}-m\Big)\pi\Big|.
\end{equation*}
We notice that $|kx/\pi-m|\pi \le \pi/2$. Hence, using the estimations $(2/\pi)|t|\le|\sin t|\le |t|$ which hold for
$|t|\le \pi/2$, we have
\begin{equation*}
\frac{2}{\pi}\Big|\frac{kx}{\pi}-m\Big|\pi\le\Big|\sin\Big(\frac{kx}{\pi}-m\Big)\pi\Big| \le \pi\Big|\frac{kx}{\pi}-m\Big|,
\end{equation*}
i.e.
\begin{equation*}
2\Big\|\frac{kx}{\pi}\Big\| \le |\sin kx| \le \pi \Big\|\frac{kx}{\pi}\Big\|. \qedhere
\end{equation*}
\end{proof}
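The two-sided estimate of the lemma is easy to check numerically; the following sketch (ours, for illustration only) verifies $2\|kx/\pi\| \le |\sin kx| \le \pi\|kx/\pi\|$ on a sample of values:

```python
import math

def dist_to_int(t):
    return abs(t - round(t))   # ||t||: distance from t to the nearest integer

# Spot-check 2*||kx/pi|| <= |sin(kx)| <= pi*||kx/pi|| for a few x and many k.
for x in [0.3, 1.0, math.sqrt(2), 2.7]:
    for k in range(1, 500):
        s = abs(math.sin(k * x))
        d = dist_to_int(k * x / math.pi)
        assert 2 * d <= s + 1e-12
        assert s <= math.pi * d + 1e-12
```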
\begin{proof}[Proof of Proposition \ref{thmstring}]
Using the Fourier series expansion \eqref{solution} of the solutions of \eqref{string} and the estimate \eqref{conservation}, we observe that the square of the left-hand side in \eqref{observation2} is equivalent
\footnote{
in the sense of the symbol $\asymp$
}
to
\begin{equation*}
\sum_{k=1}^{\infty}k^{2s}(|a_k|^2+|b_k|^2)
\end{equation*}
and the square of the right-hand side is equivalent to
\begin{equation*}
\sum_{k=1}^{\infty}k^{2r}(|a_ke^{ikt_0}+b_ke^{-ikt_0}|^2+|a_ke^{ikt_1}+b_ke^{-ikt_1}|^2).
\end{equation*}
Therefore, the observability inequality \eqref{observation2} holds if and only if there exists a positive constant $c'$ such that for all $k=1,2,\ldots$ and all complex numbers $a$ and $b$,
\begin{equation}\label{estmode}
k^{2s}(|a|^2+|b|^2)\leq c'k^{2r}(|ae^{ikt_0}+be^{-ikt_0}|^2+|ae^{ikt_1}+be^{-ikt_1}|^2).
\end{equation}
Now, for all $k$, we consider the linear maps $T_k$ in $\C\times\C$ (endowed with its usual euclidean norm) defined by
\begin{equation*}
T_k(a,b):=(ae^{ikt_0}+be^{-ikt_0},ae^{ikt_1}+be^{-ikt_1}).
\end{equation*}
Hence, the estimate \eqref{estmode} holds for all $k$ if and only if all the $T_k$ are invertible and there exists a positive constant $c''$ independent of $k$ such that
\begin{equation*}
\frac{1}{\|T_k^{-1}\|}\ge \frac{c''}{k^{r-s}}.
\end{equation*}
Since the determinant of $T_k$ equals $2i\sin k(t_0-t_1)$, we deduce that all the $T_k$ are invertible if and only if $(t_0-t_1)/\pi$ is irrational. In that case, their inverses are given by
\begin{equation*}
T_k^{-1}(a,b)=\frac{1}{2i\sin k(t_0-t_1)}(e^{-ikt_1}a-e^{-ikt_0}b,-e^{ikt_1}a+e^{ikt_0}b)
\end{equation*}
and a computation of their norms yields
\begin{equation*}
\|T_k^{-1}\|=\frac{\sqrt{1+|\cos k(t_0-t_1)|}}{\sqrt{2}|\sin k(t_0-t_1)|}.
\end{equation*}
Thus,
\begin{equation*}
\frac{1}{\|T_k^{-1}\|} \asymp |\sin k(t_0-t_1)| \asymp \Big\|\frac{k(t_0-t_1)}{\pi}\Big\|.
\end{equation*}
The first estimate follows from the expression of $\|T_k^{-1}\|$, while the second is a consequence of Lemma \ref{lemmasinus}. We observe that if \eqref{estthmstring} holds, then $(t_0-t_1)/\pi$ must be irrational, which ensures that all the $T_k$ are invertible. The proof is complete.
\end{proof}
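The closed form for $\|T_k^{-1}\|$ can be double-checked numerically; the sketch below (our own, not part of the paper) computes the largest singular value of $T_k^{-1}$ directly from the $2\times 2$ matrix and compares it with the displayed formula:

```python
import cmath, math

def inv_norm(k, t0, t1):
    """Spectral norm of T_k^{-1}, computed directly from the 2x2 matrix."""
    M = [[cmath.exp(1j * k * t0), cmath.exp(-1j * k * t0)],
         [cmath.exp(1j * k * t1), cmath.exp(-1j * k * t1)]]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # equals 2i*sin(k(t0-t1))
    A = [[ M[1][1] / det, -M[0][1] / det],
         [-M[1][0] / det,  M[0][0] / det]]        # the matrix of T_k^{-1}
    # largest singular value via the 2x2 Hermitian matrix A^* A
    t = sum(abs(A[i][j]) ** 2 for i in range(2) for j in range(2))
    d = abs(A[0][0] * A[1][1] - A[0][1] * A[1][0]) ** 2
    return math.sqrt((t + math.sqrt(max(t * t - 4 * d, 0.0))) / 2)

t0, t1 = 0.0, 1.0
for k in range(1, 50):
    s = abs(math.sin(k * (t0 - t1)))
    c = abs(math.cos(k * (t0 - t1)))
    assert abs(inv_norm(k, t0, t1) - math.sqrt(1 + c) / (math.sqrt(2) * s)) < 1e-9
```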
\begin{thm}${}$\label{MRclassicalstring}
\begin{itemize}
\item[(a)]
If $r-s<1$, there cannot be any strategic pair.
\item[(b)]
If $r-s=1$,
the set of strategic pairs has zero Lebesgue measure and full Hausdorff dimension in $\R^2$.
\item[(c)]
If $r-s>1$,
the set of strategic pairs has full Lebesgue measure in $\R^2$.
\end{itemize}
\end{thm}
In the following lemma, we gather some classical results of Diophantine approximation.
\footnote{
The results concerning the Lebesgue measure are due to A. Khinchin and the result concerning the Hausdorff dimension is due to V. Jarn\'ik
}
For a real number $\alpha$, we set
$E_{\alpha}:=\{x\in \R : \exists c>0 : \, \|kx\|\ge ck^{-\alpha},\,k=1,2,\ldots\}$.
\begin{lem}[{\cite[pp. 120--121]{Cassels}}, {\cite[p. 104]{Bugeaud}}, {\cite[p. 142]{Falconer}}] \label{lemmadioph}
${}$
\begin{itemize}
\item[(a)]
If $\alpha=1$, then $E_{\alpha}$ has zero Lebesgue measure and full Hausdorff dimension in $\R$.
\item[(b)]
If $\alpha>1$, then $E_{\alpha}$ has full Lebesgue measure in $\R$.
\end{itemize}
\end{lem}
\begin{proof}[Proof of Theorem {\ref{MRclassicalstring}}] The result is a consequence of Proposition \ref{thmstring} and classical results of Diophantine approximation.
If $\alpha<1$, then the set $E_{\alpha}$ defined above is empty. Indeed, suppose that $x\in E_{\alpha}$; such an $x$ is necessarily irrational, and for sufficiently large $k$ we would have $\|kx\|> 1/k$, which contradicts a theorem of Dirichlet (see \cite[p.~4]{Cassels}) asserting that if $x$ is irrational, then the inequality $\|kx\|<1/k$ has infinitely many solutions in $k$.
If $\alpha\ge 1$, then we use Lemma \ref{lemmadioph}. One can notice that the set of pairs $(t_0,t_1)\in \R^2$ such that $(t_0-t_1)/\pi \in E_{\alpha}$ has full (resp.\ zero) Lebesgue measure or Hausdorff dimension in $\R^2$ if $E_{\alpha}$ has full (resp.\ zero) Lebesgue measure or Hausdorff dimension in $\R$. For the Lebesgue measure, this results from Fubini's theorem. For the Hausdorff dimension, this is a consequence of its behaviour under products of sets and its invariance under bi-Lipschitz transformations (see e.g., \cite{Falconer}).
\end{proof}
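A concrete strategic pair for $r-s=1$ can be exhibited numerically (our own illustration): the golden ratio $\varphi$ has all partial quotients equal to $1$, hence is badly approximable, so any pair with $(t_0-t_1)/\pi=\varphi$ satisfies the criterion of Proposition \ref{thmstring}.

```python
import math

# Illustration: phi = (1+sqrt(5))/2 is badly approximable, so k*||k*phi||
# stays bounded away from 0; any pair (t0, t1) with (t0-t1)/pi = phi is
# then strategic for r - s = 1.
phi = (1 + math.sqrt(5)) / 2

def dist_to_int(t):
    return abs(t - round(t))   # ||t||, the distance to the nearest integer

worst = min(k * dist_to_int(k * phi) for k in range(1, 20001))
# The minimum on this range is ||phi|| = 2 - phi ~ 0.382, attained at k = 1;
# along Fibonacci k's the quantity approaches 1/sqrt(5) ~ 0.447 from below.
assert 0.38 < worst < 0.383
```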
\pagebreak
\begin{rks}${}$
\begin{itemize}
\item
The assertion (a) of Theorem \ref{MRclassicalstring} can be seen as an \emph{optimality result}. Indeed, it means that with only two observations, the difference $r-s$ between the orders of the Sobolev norms in the inequality \eqref{observation2} must be at least 1.
\item
One cannot obtain such estimates with only one observation. Indeed, let $t_0\in \R$. Then, the function
$
y(t,x)=\sin(t-t_0)\sin(x)
$
is a solution to \eqref{string} with $y(0)\neq 0$ or $y'(0)\neq 0$, but $y(t_0)=0$.
\item
If the pair $(t_0,t_1)$ is strategic, then, having only access to the two observations, i.e.\ the position of the string at times $t_0$ and $t_1$, we can recover the initial data $y_0$ and $y_1$ using the expansion in Fourier series of $y(t_0)$ and $y(t_1)$ and the maps $T_k^{-1}$. Moreover, the observability inequality ensures a ``continuity property'' in this reconstruction process: if two sets of observations are close, then the two sets of associated initial data must be close too.
\item
In the same way, if $r-s\ge 1$, we can obtain estimates of the form
\begin{align*}
\|y_0\|_s+\|y_1\|_{s-1}&\le c(\|y'(t_0)\|_{r-1}+\|y(t_1)\|_r),\\
\|y_0\|_s+\|y_1\|_{s-1}&\le c(\|y(t_0)\|_r+\|y'(t_1)\|_{r-1}),\\
\|y_0\|_s+\|y_1\|_{s-1}&\le c(\|y'(t_0)\|_{r-1}+\|y'(t_1)\|_{r-1}).
\end{align*}
\item
Applying the Hilbert Uniqueness Method (see \cite{Lions1988}, \cite{Kom94}), it is possible to prove the following \emph{exact controllability} result: let $0<t_0<t_1<T$ be such that the observability inequality \eqref{observation2} holds with $r=0$ and $s=-1$. Then, given initial data $(y_0,y_1) \in D^2\times D^1$, we can find two control vectors $v$ and $w$ in $D^0$ such that the solution (which can be defined rigorously) of the inhomogeneous problem
\footnote{
$\delta$ is the Dirac mass in $0$.
}
\begin{equation*}
\begin{cases}
y''-y_{xx}=\delta(t-t_0)v+\delta(t-t_1)w &\text{in }(0,T)\times(0,\pi),\\
y=0&\text{on }(0,T)\times\{0,\pi\},\\
y(0)=y_0,\quad y'(0)=y_1 &\text{in } (0,\pi).
\end{cases}
\end{equation*}
satisfies $y(T)=y'(T)=0$.
\end{itemize}
\end{rks}
\section{With more observations}\label{more}
In this paragraph, we still assume that $q=0$ in \eqref{string}.
In section \ref{classicalstring}, we have seen that with only two observations, it is necessary that $r-s\ge 1$ in the estimate \eqref{observation2}. In this paragraph, we prove that, by adding further observations, it is possible to reduce the difference $r-s$.
\begin{thm}\label{moreobservations}
Let $t_1, t_2, \ldots, t_n \in \mathbb{R}$ with $n \geq 2$, $r\in\mathbb{R}$ and set $s:=r-1/(n-1)$.
Assume that among the $(t_i-t_j)/\pi$, $1\le i,j\le n$, we can extract $n-1$ elements $\tau_1,\ldots,\tau_{n-1}$ that belong to a real algebraic extension of $\Q$ of degree $n$ and such that
$1,\tau_1,\ldots,\tau_{n-1}$ are linearly independent over $\Q$.
Then, there exists a positive constant $c$ such that
\begin{equation*}
\|y_0\|_s+\|y_1\|_{s-1}\leq c(\|y(t_1)\|_r+\ldots+\|y(t_n)\|_r)
\end{equation*}
for all initial data $(y_0,y_1)\in D^r\times D^{r-1}$.
\end{thm}
The proof relies on the following
\footnote{
The second inequality in the Lemma will only be used in section \ref{other}.
}
\begin{lem}[{\cite[p. 79]{Cassels}}]\label{lemmaCassels}
Let $x_1,\ldots,x_n$ be numbers that belong to a real algebraic extension of $\Q$ of degree $n+1$ such that $1,x_1,\ldots,x_n$ are linearly independent over $\Q$. Then, there exists a positive constant $c$, only depending on $x_1,\ldots,x_n$, such that
\begin{equation*}
\max_{1\le j\le n}\|kx_j\| \geq ck^{-1/n}, \qquad k=1,2,\ldots
\end{equation*}
and
\begin{equation*}
\|k_1x_1+k_2x_2+\ldots+k_nx_n\| \geq c(\max_j |k_j|)^{-n},
\qquad (k_1,\ldots,k_n)\in \Z^n\setminus \{(0,\ldots,0)\}.
\end{equation*}
\end{lem}
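The first inequality of the lemma can be observed numerically (our own spot-check, not part of the proof): for $n=2$, take $x_1=2^{1/3}$ and $x_2=2^{2/3}$, which lie in a cubic extension of $\Q$ with $1,x_1,x_2$ linearly independent over $\Q$.

```python
import math

# Spot-check of max_j ||k x_j|| >= c k^{-1/2} for x1 = 2^(1/3), x2 = 2^(2/3).
x1, x2 = 2.0 ** (1.0 / 3.0), 2.0 ** (2.0 / 3.0)

def dist_to_int(t):
    return abs(t - round(t))

worst = min(math.sqrt(k) * max(dist_to_int(k * x1), dist_to_int(k * x2))
            for k in range(1, 20001))
# The lemma guarantees a positive lower bound; empirically the quantity
# stays well above 1e-3 on this range, while Dirichlet's theorem on
# simultaneous approximation forces the minimum to be below 1.
assert 1e-3 < worst < 1.0
```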
\begin{proof}[Proof of Theorem \ref{moreobservations}]
Adapting the method described in the proof of Proposition \ref{thmstring}, it is sufficient to obtain the estimate
\begin{equation*}
\sum_{p=1}^n|ae^{ikt_p}+be^{-ikt_p}|^2\geq ck^{-\frac{2}{n-1}}(|a|^2+|b|^2),
\end{equation*}
where $c$ is a positive constant, independent of $a,b\in\mathbb{C}$ and $k\in\N^*$. With no loss of generality, we can assume that $\tau_p=(t_1-t_{p+1})/\pi$ for $p=1,\ldots,n-1$. We have
\begin{eqnarray*}
\sum_{p=1}^n|ae^{ikt_p}+be^{-ikt_p}|^2 &=& \sum_{p=2}^n(\frac{1}{n-1}|ae^{ikt_1}+be^{-ikt_1}|^2+ |ae^{ikt_p}+be^{-ikt_p}|^2)\\
&\geq&c_1 \sum_{p=2}^n(|ae^{ikt_1}+be^{-ikt_1}|^2+|ae^{ikt_p}+be^{-ikt_p}|^2)\\
&\geq& c_2 \Big(\sum_{p=2}^n|\sin k(t_1-t_p)|^2\Big)(|a|^2+|b|^2)\\
&\geq& c_3 \Big(\sum_{p=2}^n\Big\|\frac{k(t_1-t_p)}{\pi}\Big\|^2\Big)(|a|^2+|b|^2)\\
&\geq& c_3 \max_{2\le p\le n}\Big\|\frac{k(t_1-t_p)}{\pi}\Big\|^2(|a|^2+|b|^2)\\
&\geq& c_4 k^{-2/(n-1)}(|a|^2+|b|^2)
\end{eqnarray*}
for all $k=1,2,\ldots$, with positive constants $c_1,c_2,c_3,c_4$ independent of $a,b\in\mathbb{C}$.
The numbers $1,(t_1-t_2)/\pi,\ldots,(t_1-t_n)/\pi$ are linearly independent over $\Q$. In particular, the numbers $(t_1-t_p)/\pi$, $p=2,\ldots,n$, are irrational. This ensures that the corresponding linear transformations on $\C\times\C$ (see the proof of Proposition \ref{thmstring}) are invertible and implies the second inequality. The third inequality is a consequence of Lemma \ref{lemmasinus}, while the last inequality results from Lemma \ref{lemmaCassels}.
\end{proof}
\begin{rk}
Formally, letting the number of observations tend to $+\infty$, setting $r=0$ and $T>0$, we recover an internal observability result:
\begin{equation*}
\|y_0\|^2_0+\|y_1\|^2_{-1}\le c\int_0^T\int_0^{\pi}|y(t,x)|^2\ud x\ud t .
\end{equation*}
\end{rk}
\section{Observability of the loaded string ($q>0$)}\label{loadedstring}
In this paragraph, we assume that $q>0$ in \eqref{string} and that $r$ and $s$ are two real numbers such that $r-s=1$. First, let us recall the
\begin{thm}[A. Szij\'art\'o and J. Heged\H us, {\cite[Theorem 1 p.4]{SH12}}]\label{thmSH}
Let $t_0$ and $t_1$ be real numbers such that
\begin{equation}\label{hyp1}
\frac{t_0-t_1}{\pi} \in \Q
\end{equation}
and
\begin{equation} \label{hyp2}
\sin((t_0-t_1)\sqrt{k^2+q})\neq0,\qquad k=1,2,\ldots
\end{equation}
Then, $(t_0,t_1)$ is a strategic pair.
\end{thm}
Are such hypotheses easily satisfied? We can answer this question with the following
\begin{prop}\label{density}
The set of strategic pairs satisfying the hypotheses \eqref{hyp1} and \eqref{hyp2} is dense in $\R^2$.
\end{prop}
\begin{proof}
It is sufficient to prove that for each real number $\tau$ and each $\delta>0$, there exists a real number $\tau'$ satisfying the three conditions: $|\tau-\tau'|<\delta$, $\tau' \in \pi\Q$ and $\sin\big(\tau' \sqrt{k^2+q}\big) \neq 0$ for all $k=1,2,\ldots$
First, we notice that $\sin(\zeta \sqrt{k^2+q})=0$ if and only if $\zeta\sqrt{k^2+q}\in \pi\Z$. Now, we distinguish three cases.
1. \emph{If $q$ is an irrational number.} The set $\pi\Q$ being dense in $\R$, there exists a number $\tau'\in \pi\Q$ such that $|\tau-\tau'|\le \delta$. Moreover, $\tau'$ can be written $\tau'=(a/b)\pi$
with $a\in \Z$ and $b\in \N^*$ relatively prime. Assume that there exist $k\in \N^*$ and $n\in \Z$ such that
\begin{equation*}
\tau'\sqrt{k^2+q}=n\pi \quad \iff \quad \frac{a}{b}\sqrt{k^2+q}=n.
\end{equation*}
Then,
\begin{equation*}
q=\frac{n^2b^2}{a^2}-k^2\in \Q,
\end{equation*}
which is in contradiction with our assumption on $q$.
2. \emph{If $q$ is an integer}. We recall that if $(a/b)\pi\in \pi\Q$, then $\sin((a/b)\pi\sqrt{k^2+q})=0$ if and only if $(a/b)\sqrt{k^2+q}\in \Z$.
Moreover, the quantity $\sqrt{k^2+q}$ is either an integer or an irrational number (depending on whether $k^2+q$ is a perfect square or not). For sufficiently large $k$, $\sqrt{k^2+q}$ cannot be an integer. Indeed,
\begin{equation*}
\sqrt{k^2+q}=k\sqrt{1+\frac{q}{k^2}}=k\Big(1+\frac{q}{2k^2}+o\Big(\frac{1}{k^2}\Big)\Big)=k+\frac{q}{2k}+o\Big(\frac{1}{k}\Big)
\end{equation*}
and this is not an integer for sufficiently large $k$. Hence, for such $k$, it is an irrational number and so is $(a/b)\sqrt{k^2+q}$. Now, let $\tau'':=(a/b)\pi\in \pi\Q$ such that
\begin{equation*}
|\tau''-\tau|<\frac{\delta}{2}.
\end{equation*}
We are going to perturb the rational number $a/b$ slightly in order to construct a number $\tau'$ such that none of the sines vanish. From the above discussion, the quantity $\sqrt{k^2+q}$ can take at most a finite number of integer values when $k$ varies. We denote them by $x_1,\ldots,x_N$ (if it does not take any integer value, then it is always an irrational number and we can take $\tau'=\tau''$). Let $p$ be a prime number that does not divide any of the numbers $x_1,\ldots,x_N$. For sufficiently large $n$,
\begin{equation*}
\Big|\pi\frac{(p^n-1)a}{p^nb}-\tau\Big|<\delta
\end{equation*}
and $p^n$ does not divide $a$. Now, two cases are possible. If $\sqrt{k^2+q}$ is not an integer, then it is an irrational number and
\begin{equation*}
\frac{(p^n-1)a}{p^nb}\sqrt{k^2+q} \not \in \Z.
\end{equation*}
On the other hand, if $\sqrt{k^2+q}$ is an integer, then $\sqrt{k^2+q}=x_l$ for one $l\in \{1,\ldots,N\}$ and
\begin{equation*}
\frac{(p^n-1)a}{p^nb}\sqrt{k^2+q} =\frac{(p^n-1)ax_l}{p^nb}\not \in \Z
\end{equation*}
because $p^n$ does not divide $(p^n-1)ax_l$. Finally,
\begin{equation*}
\tau':=\frac{(p^n-1)a}{p^nb}\pi
\end{equation*}
satisfies the three expected conditions.
3. \emph{If $q$ is a rational number but not an integer.} Then, we can write $q=c/d$, where $c$ and $d$ are positive integers. Hence,
\begin{equation*}
\tau\sqrt{k^2+q}=\tau\sqrt{k^2+\frac{c}{d}}=\tau\sqrt{k^2+\frac{cd}{d^2}}=\frac{\tau}{d}\sqrt{k^2d^2+cd}
\end{equation*}
and we are led back to the case where $q$ is an integer.
\end{proof}
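The construction in case 2 can be traced with exact arithmetic (our own illustration, with the hypothetical choices $q=3$, $\tau''=\pi$, i.e.\ $a=b=1$, and $p=3$, $n=2$):

```python
import math
from fractions import Fraction

# Exact check of the case-2 construction with q = 3 and tau'' = pi.
q, a, b = 3, 1, 1
# the integer values taken by sqrt(k^2 + q): here only k = 1 gives x_1 = 2
xs = [math.isqrt(k * k + q) for k in range(1, 10000)
      if math.isqrt(k * k + q) ** 2 == k * k + q]
assert xs == [2]

p, n = 3, 2                                  # a prime p dividing no x_l
r = Fraction((p ** n - 1) * a, p ** n * b)   # tau' = r * pi
# sin(r*pi*sqrt(k^2+q)) = 0 would force r*sqrt(k^2+q) to be an integer;
# when k^2+q is not a perfect square this product is irrational, and for
# the remaining values x_l it is a non-integer rational:
for x in xs:
    assert (r * x).denominator != 1
```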
Now, we give another method to obtain an observability result for the loaded string.
\begin{thm}\label{MRloadedstring}
Let $(t_0,t_1)$ be a strategic pair for the classical string, i.e.
\begin{equation}\label{hyp3}
|\sin(k(t_0-t_1))| \ge \frac{c}{k},\qquad k=1,2,\ldots
\end{equation}
for a suitable positive constant $c$.
Then, it is also a strategic pair for the loaded string, provided that $q$ is sufficiently small.
\end{thm}
\begin{rk}
This result can be viewed as a complementary result to Theorem \ref{thmSH} since the hypothesis \eqref{hyp3} implies that $(t_0-t_1)/\pi$ is irrational; hence \eqref{hyp1} cannot hold.
\end{rk}
\begin{proof}
Applying the method described in the proof of Proposition \ref{thmstring}, a necessary and sufficient condition for the estimate \eqref{observation2} to hold is
\begin{equation}\label{estpot}
|\sin (\omega_k(t_0-t_1))|=|\sin(\sqrt{q+k^2}(t_0-t_1))|\ge \frac{c'}{k},\qquad k=1,2,\ldots,
\end{equation}
where $c'$ is a positive constant, independent of $k$.
Comparing the quantities $|\sin \omega_k(t_0-t_1)|$ and $|\sin (k(t_0-t_1))|$, we will find a sufficient condition that implies \eqref{estpot}. Let us estimate the difference
\begin{equation*}
|\sin(\sqrt{q+k^2}(t_0-t_1))-\sin( k(t_0-t_1))|.
\end{equation*}
For a fixed $k\in \N^*$, we consider the function $f_k$, defined for $x \ge 0$ by
\begin{equation*}
f_k(x):=\sin(\sqrt{k^2+x}(t_0-t_1)).
\end{equation*}
We have
\begin{align*}
|f_k'(x)|&=\frac{|\cos(\sqrt{k^2+x}(t_0-t_1))||t_0-t_1|}{2\sqrt{k^2+x}}\\
&\le \frac{|t_0-t_1|}{2k}.
\end{align*}
From the triangle inequality and the mean value theorem,
\begin{equation*}
|f_k(0)|-|f_k(q)|\le|f_k(q)-f_k(0)|\le \frac{|t_0-t_1|q}{2k}.
\end{equation*}
Hence,
\begin{align*}
|\sin(\sqrt{q+k^2}(t_0-t_1))|&\ge |\sin (k(t_0-t_1))|-\frac{|t_0-t_1|q}{2k}\\
&\ge \frac{c}{k}-\frac{|t_0-t_1|q}{2k}
\end{align*}
and these estimates hold for all $k=1,2,\ldots$ Thus, if the quantity
\begin{equation}\label{positiveconstant}
c':=c-\frac{|t_0-t_1|q}{2}
\end{equation}
is positive, the estimate \eqref{observation2} holds. A sufficient condition is
\begin{equation}\label{qsmall1}
q<\frac{2c}{|t_0-t_1|}. \qedhere
\end{equation}
\end{proof}
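The key perturbation bound in the proof, $|\sin(\sqrt{k^2+q}\,(t_0-t_1))-\sin(k(t_0-t_1))|\le |t_0-t_1|\,q/(2k)$, can be checked numerically. A Python sketch, illustrative only (the values of $t_0-t_1$, $k$ and $q$ are arbitrary):

```python
import math

def perturbation(k, q, dt):
    # |f_k(q) - f_k(0)| where f_k(x) = sin(sqrt(k^2 + x) * dt)
    return abs(math.sin(math.sqrt(k * k + q) * dt) - math.sin(k * dt))

def bound(k, q, dt):
    # mean value theorem bound |t0 - t1| * q / (2k) from the proof
    return abs(dt) * q / (2 * k)

for dt in (0.7, 1.0, 2.5):
    for k in range(1, 60):
        for q in (0.1, 0.5, 1.0, 3.0):
            assert perturbation(k, q, dt) <= bound(k, q, dt) + 1e-12
```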
\begin{rks}${}$
\begin{itemize}
\item
The inequality \eqref{qsmall1} can be rewritten more precisely as
\footnote{
$K(x)$ denotes the largest partial quotient in the continued fraction expansion of $x$, i.e., if $x=[a_0;a_1,a_2,\ldots]$, then $K(x):=\sup_{k\ge 1}a_k$.
}
\begin{equation*}
q<\frac{4}{|t_0-t_1|(K((t_0-t_1)/\pi)+2)}.
\end{equation*}
Indeed, from the proof of Lemma \ref{lemmasinus} and classical results of Diophantine approximation (see \cite{Shallit}), the hypothesis \eqref{hyp3} holds if and only if the number $(t_0-t_1)/\pi$ is badly approximable by rational numbers, so that its partial quotients are bounded, i.e., $K((t_0-t_1)/\pi)$ is finite. Moreover,
\begin{equation*}
|\sin k(t_0-t_1)|\ge 2\Big\| k\frac{t_0-t_1}{\pi}\Big\|\ge \frac{2}{(K((t_0-t_1)/\pi)+2)k}.
\end{equation*}
\item
It is possible to avoid a restriction on the size of the potential $q$. Set $\xi:=(t_0-t_1)/\pi \in \R\setminus\Q$ and
\begin{equation*}
\nu(\xi):=\liminf_{k\to+\infty}k\|k\xi\|.
\end{equation*}
If $\xi$ is badly approximable, then $\nu(\xi)>0$. Moreover, if $\xi'$ is an irrational number whose partial quotients coincide with those of $\xi$ from some index on, then $\nu(\xi')=\nu(\xi)$ (see \cite[p. 11]{Cassels}). Let us construct a strictly decreasing sequence of irrational numbers by setting $\xi_0=\xi$ and
\begin{equation*}
\xi_{n+1}= \frac{\xi_n}{1+\xi_n}=\frac{1}{1+1/\xi_n}.
\end{equation*}
We can assume that $0<\xi<1$, so that its continued fraction expansion has the form
\begin{equation*}
\xi=\xi_0=[0;a_1,a_2,a_3,\ldots].
\end{equation*}
Therefore, $1/\xi_0=[a_1;a_2,a_3,\ldots]$ and $1+1/\xi_0=[1+a_1;a_2,a_3,\ldots]$, whence $\xi_1=[0;1+a_1,a_2,a_3,\ldots]$ and, by induction,
\begin{equation*}
\xi_n=[0;n+a_1,a_2,a_3,\ldots].
\end{equation*}
Thus, for all $n$, $\nu(\xi_n)=\nu(\xi)>0$ and the sequence $(\xi_n)$ converges to zero.
Now, from the definition of $\nu(\xi)$ and Lemma \ref{lemmasinus}, we obtain, for $k$ \emph{sufficiently large},
\begin{equation*}
|\sin k\pi\xi_n|\ge 2\|k\xi_n\|\ge 2\frac{\nu(\xi)}{k}.
\end{equation*}
Hence, going back to the relation \eqref{positiveconstant}, if we choose $n$ sufficiently large so that
\begin{equation*}
2\nu(\xi)-\frac{\xi_n\pi q}{2}>0
\end{equation*}
and if we assume moreover that
\begin{equation*}
\sin(\omega_k\pi\xi_n)\neq 0, \qquad k=1,2,\ldots,
\end{equation*}
then, choosing $t_0$ and $t_1$ such that $t_0-t_1=\pi\xi_n$, the observability inequality holds.
\end{itemize}
\end{rks}
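The two bounds used in the remarks, $|\sin(k\pi\xi)|\ge 2\|k\xi\|\ge 2/((K(\xi)+2)k)$ for a badly approximable $\xi$, and the shift of the first partial quotient under $\xi\mapsto\xi/(1+\xi)$, can be illustrated numerically. A Python sketch with the specific choice $\xi=(\sqrt5-1)/2$ (all partial quotients equal to $1$, so $K(\xi)=1$); this choice is ours, purely for illustration:

```python
import math

xi = (math.sqrt(5) - 1) / 2        # [0; 1, 1, 1, ...], so K(xi) = 1
K = 1

def frac_dist(x):                  # ||x||: distance to the nearest integer
    return abs(x - round(x))

# |sin(k*pi*xi)| = sin(pi*||k*xi||) >= 2*||k*xi|| >= 2/((K+2)*k)
for k in range(1, 2000):
    assert abs(math.sin(k * math.pi * xi)) >= 2 * frac_dist(k * xi) - 1e-9
    assert frac_dist(k * xi) >= 1 / ((K + 2) * k)

# xi_{n+1} = xi_n/(1 + xi_n) shifts the first partial quotient:
# xi_1 = [0; 1 + a_1, a_2, ...] = 1/(2 + xi) here, since a_1 = 1.
xi1 = xi / (1 + xi)
assert abs(xi1 - 1 / (2 + xi)) < 1e-12
```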
\section{Extension of the method to beams and plates}\label{other}
\subsection{Observability of a hinged beam}
The small transversal vibrations of a hinged beam of length $\pi$ satisfy
\begin{equation} \label{beam}
\begin{cases}
y''+y_{xxxx}=0 & \text{in } \R\times (0,\pi),\\
y=y_{xx}=0 & \text{in } \R\times\{0,\pi\},\\
y(0)=y_0, \quad y'(0)=y_1 & \text{in } (0,\pi).
\end{cases}
\end{equation}
Using the same spaces $D^s$ as for the vibrating string, we have the following.
\begin{prop}
Let $s \in \R$. For all initial data $y_0 \in D^s$ and $y_1 \in D^{s-2}$, the problem \eqref{beam} admits a unique solution $y \in C(\R,D^s) \cap C^1(\R,D^{s-2}) \cap C^2(\R,D^{s-4})$ given by
\begin{equation} \label{solutionbeam}
y(t,x)=\sum_{k=1}^{\infty}(a_k e^{ik^2t}+b_k e^{-ik^2t})\sin{kx},
\end{equation}
where the complex coefficients $a_k$ and $b_k$ satisfy
\begin{equation} \label{conservationbeam}
\|y_0\|_s^2+\|y_1\|_{s-2}^2\asymp\sum_{k=1}^{\infty}k^{2s}(|a_k|^2+|b_k|^2).
\end{equation}
\end{prop}
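Each mode in \eqref{solutionbeam} solves the beam equation because $\partial_t^2$ produces a factor $-k^4$ while $\partial_x^4$ produces $+k^4$. A finite-difference sanity check in Python, illustrative only (the step sizes and the values of $k$, $t$, $x$ are arbitrary choices of ours):

```python
import math

def mode(t, x, k):
    # real part of e^{i k^2 t} sin(kx)
    return math.cos(k * k * t) * math.sin(k * x)

def d2t(f, t, x, k, h=1e-4):
    # central second difference in t
    return (f(t + h, x, k) - 2 * f(t, x, k) + f(t - h, x, k)) / h**2

def d4x(f, t, x, k, h=1e-2):
    # central fourth difference in x
    return (f(t, x + 2*h, k) - 4*f(t, x + h, k) + 6*f(t, x, k)
            - 4*f(t, x - h, k) + f(t, x - 2*h, k)) / h**4

k, t, x = 2, 0.3, 1.1
residual = d2t(mode, t, x, k) + d4x(mode, t, x, k)   # y'' + y_xxxx
assert abs(residual) < 1e-2 * k**4
```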
In this case, the observability problem becomes the following: given two real numbers $r$ and $s$ such that $s\le r$, we are looking for two times $t_0$ and $t_1$ such that
\begin{equation} \label{observationbeam}
\|y_0\|_s+\|y_1\|_{s-2}\leq c(\|y(t_0)\|_r+\|y(t_1)\|_r)
\end{equation}
for a positive constant $c$, independent of the initial data $(y_0,y_1)\in D^r \times D^{r-2}$. Again, such a pair $(t_0,t_1)$ will be called a \emph{strategic pair}.
\begin{prop}
The pair $(t_0,t_1)$ is strategic if and only if there is a positive constant $c$ such that
\begin{equation*}
\Big\|\frac{k^2(t_0-t_1)}{\pi}\Big\| \ge \frac{c}{k^{r-s}}, \qquad k=1,2,\ldots
\end{equation*}
\end{prop}
\begin{thm}${}$\label{MRbeam}
\begin{itemize}
\item[(a)]
If $r-s=2$,
there is a set of strategic pairs that is infinite, has zero Lebesgue measure and full Hausdorff dimension in $\R^2$.
\item[(b)]
If $r-s>2$,
there is a set of strategic pairs that has full Lebesgue measure in $\R^2$.
\end{itemize}
\end{thm}
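For $r-s=2$, one concrete source of strategic pairs: if $\theta:=(t_0-t_1)/\pi$ is badly approximable with partial quotients bounded by $K$, then $\|k^2\theta\|\ge 1/((K+2)k^2)$, since $\|m\theta\|\ge 1/((K+2)m)$ for every positive integer $m$. A Python sketch with $\theta=\sqrt2-1$ (partial quotients all equal to $2$, so $K=2$); the choice of $\theta$ is ours, purely for illustration:

```python
import math

theta = math.sqrt(2) - 1           # [0; 2, 2, 2, ...], so K = 2

def frac_dist(x):                  # ||x||: distance to the nearest integer
    return abs(x - round(x))

# ||m * theta|| >= 1/((K+2) m) = 1/(4m) for every m >= 1, hence
# ||k^2 * theta|| >= 1/(4 k^2): the beam condition with r - s = 2.
for k in range(1, 120):
    m = k * k
    assert frac_dist(m * theta) >= 1 / (4 * m)
```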
\subsection{Observability of a hinged rectangular plate}
Let $a$ and $b$ be positive real numbers and $\Omega=(0,a)\times(0,b)\subset\R^2$
the rectangular domain whose boundary is denoted by $\Gamma$. The small transversal vibrations of a hinged plate whose shape is delimited by $\Omega$ satisfy
\begin{equation}\label{plate}
\begin{cases}
y''+\Delta^2 y=0 & \text{in } \R\times\Omega,\\
y=\Delta y=0 & \text{in } \R\times\Gamma,\\
y(0)=y_0,\quad y'(0)=y_1 & \text{in } \Omega.
\end{cases}
\end{equation}
The eigenvalues of the operator $-\Delta$ with Dirichlet boundary conditions are (see, e.g., \cite{CourantHilbert1})
\begin{equation*}
\lambda_{m,n}=\frac{m^2\pi^2}{a^2}+\frac{n^2\pi^2}{b^2},\qquad m,n=1,2,\ldots
\end{equation*}
with associated eigenfunctions
\begin{equation*}
e_{m,n}(x,y)=\sin\frac{m\pi x}{a}\sin\frac{n\pi y}{b}, \qquad m,n=1,2,\ldots
\end{equation*}
These functions form an orthogonal and dense system in $L^2(\Omega)$. For $s\in\R$, we define $D^s$ as the completion of the vector space spanned by the functions $e_{m,n}$ with respect to the Euclidean norm
\begin{equation*}
\Bigg\|\sum_{m,n=1}^{\infty}c_{m,n}e_{m,n} \Bigg\|^2_s:=\sum_{m,n=1}^{\infty}\lambda_{m,n}^s|c_{m,n}|^2.
\end{equation*}
\begin{prop}
Given $y_0\in D^s$ and $y_1\in D^{s-2}$, the problem \eqref{plate} has a unique solution $y\in C(\R,D^s) \cap C^1(\R,D^{s-2})\cap C^2(\R,D^{s-4})$, whose expansion in Fourier series is
\begin{equation*}
y(t)=\sum_{m,n=1}^{\infty}(a_{m,n} e^{i \lambda_{m,n}t}+b_{m,n} e^{-i \lambda_{m,n}t})e_{m,n},
\end{equation*}
where the complex coefficients $a_{m,n}$ and $b_{m,n}$ satisfy
\begin{equation*}
\|y_0\|_s^2+\|y_1\|_{s-2}^2\asymp \sum_{m,n=1}^{\infty}\lambda_{m,n}^s(|a_{m,n}|^2+|b_{m,n}|^2).
\end{equation*}
\end{prop}
The observability problem can be stated exactly as in the previous subsection. In other words, we are looking for pairs $(t_0,t_1)$ satisfying the estimate \eqref{observationbeam}.
From the expression of the eigenvalues,
\begin{equation*}
\lambda_{m,n} \asymp m^2+n^2.
\end{equation*}
Moreover, an adaptation of Lemma \ref{lemmasinus} yields
\begin{equation*}
|\sin\lambda_{m,n}(t_0-t_1)|\asymp\Bigg\| \frac{\lambda_{m,n}(t_0-t_1)}{\pi}\Bigg\|.
\end{equation*}
Hence, setting $\theta_1:=(\pi(t_0-t_1))/a^2$ and $\theta_2:=(\pi(t_0-t_1))/b^2$, and applying the same method as for the vibrating string, we get the following.
\begin{prop}\label{thmplate}
The pair $(t_0,t_1)$ is strategic if and only if there is a positive constant $c$ such that
\begin{equation}\label{estplate}
\|m^2\theta_1+n^2\theta_2\|\ge \frac{c}{(m^2+n^2)^{(r-s)/2}},\qquad m,n=1,2,\ldots
\end{equation}
\end{prop}
We give sufficient conditions for \eqref{estplate} to hold.
\emph{First case: particular domains.} We assume that there exists a positive integer $N$ such that $\theta_1=N\theta_2$, or equivalently,
\begin{equation*}
b^2=Na^2.
\end{equation*}
Therefore, setting $\theta:=\theta_2$, the estimate \eqref{estplate} simplifies to
\begin{equation*}
\|(Nm^2+n^2)\theta\|\ge \frac{c}{(m^2+n^2)^{(r-s)/2}},\qquad m,n=1,2,\ldots
\end{equation*}
We have already seen that if $r-s\ge 2$, the above estimate holds for some choices of $t_0$ and $t_1$. More precisely, \emph{Theorem~\ref{MRbeam} remains true in this case}.
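In the particular case $b^2=Na^2$, a badly approximable $\theta$ again suffices when $r-s=2$, because $Nm^2+n^2\le N(m^2+n^2)$. A Python sketch with the arbitrary illustrative choices $N=2$ and $\theta=(\sqrt5-1)/2$, for which $\|j\theta\|\ge 1/(3j)$ for every positive integer $j$:

```python
import math

N = 2
theta = (math.sqrt(5) - 1) / 2     # badly approximable: ||j*theta|| >= 1/(3j)

def frac_dist(x):                  # ||x||: distance to the nearest integer
    return abs(x - round(x))

# ||(N m^2 + n^2) theta|| >= 1/(3(N m^2 + n^2)) >= 1/(3N (m^2 + n^2))
for m in range(1, 40):
    for n in range(1, 40):
        j = N * m * m + n * n
        assert frac_dist(j * theta) >= 1 / (3 * N * (m * m + n * n))
```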
\emph{Second (general) case.} It is not always possible to decouple the expression $m^2\theta_1+n^2\theta_2$ as we did in the first case. Nevertheless, we can use some results on the approximation of linear forms by rationals.
\begin{thm}${}$
\begin{itemize}
\item[(a)]
Assume that $r-s=4$. If $t_0$ and $t_1$ are real numbers such that $\theta_1$ and $\theta_2$ belong to a real algebraic extension of $\Q$ of degree 3 and $1,\theta_1,\theta_2$ are linearly independent over the rationals, then $(t_0,t_1)$ is a strategic pair.
\item[(b)]
Assume that $r-s>4$. Then, almost all (in the sense of the Lebesgue measure) pairs $(t_0,t_1)$ are strategic.
\end{itemize}
\end{thm}
\begin{proof}
Assertion (a) is a direct consequence of Proposition~\ref{thmplate} and the second estimate of Lemma~\ref{lemmaCassels}. Assertion (b) is a consequence of Proposition~\ref{thmplate} and of a generalization of Lemma~\ref{lemmadioph} (see \cite[p.~24]{Bugeaud}).
\end{proof}
% arXiv:1305.6461, https://arxiv.org/abs/1305.6461 -- "Observation of vibrating systems at different time instants" (math.OC, math.AP)
% arXiv:1801.02340, https://arxiv.org/abs/1801.02340
\title{Lifting a prescribed group of automorphisms of graphs}
\begin{abstract}
In this paper we are interested in lifting a prescribed group of automorphisms of a finite graph via regular covering projections. Here we describe with an example the problems we address and refer to the introductory section for the precise statements of our results. Let $P$ be the Petersen graph, say, and let $\wp:\tilde{P}\to P$ be a regular covering projection. With the current covering machinery, it is straightforward to find $\wp$ with the property that every subgroup of $\Aut(P)$ lifts via $\wp$. However, for constructing peculiar examples and in applications, this is usually not enough. Sometimes it is important, given a subgroup $G$ of $\Aut(P)$, to find $\wp$ along which $G$ lifts but no further automorphism of $P$ does. For instance, in this concrete example, it is interesting to find a covering of the Petersen graph lifting the alternating group $A_5$ but not the whole symmetric group $S_5$. (Recall that $\Aut(P)\cong S_5$.) At other times it is important, given a subgroup $G$ of $\Aut(P)$, to find $\wp$ with the property that $\Aut(\tilde{P})$ is the lift of $G$. Typically, it is desirable to find $\wp$ satisfying both conditions. In a very broad sense, this might be reminiscent of wallpaper patterns on surfaces: the group of symmetries of the dodecahedron is $S_5$, and there is a nice colouring of the dodecahedron (found also by Escher) whose group of symmetries is just $A_5$. In this paper, we address this problem in full generality.
\end{abstract}
\section{Introduction}
Covering projections of graphs and lifting automorphisms along them is a classical tool in algebraic graph theory that goes back to
Djokovi\'c and his proof of the infinitude of cubic $5$-arc-transitive graphs~\cite{Dj1}. Moreover, several theoretical aspects of
lifting graph automorphisms along covering projections, together with their remarkable applications, are considered in a number of papers, for example in \cite{CM,MNS,elabcov,Sir}, to name a few of the most notable ones.
One of many applications of lifting automorphisms along
covering projections is the construction of graphs with a prescribed type of symmetry. For example,
covering techniques are used to find new peculiar examples of semisymmetric
graphs (see \cite{wang,MK,WanChe,ZF}),
half-arc-transitive
graphs (see \cite{CPS,CZ,RS}), and arc-regular graphs (see \cite{CF,feng2,GaSp}).
In these papers, a typical strategy is to start with a graph $\Gamma$ and a group $G\le \mathrm{Aut}(\Gamma)$ with a prescribed type of action on
$\Gamma$
(such as edge-transitive, vertex-transitive, $s$-arc-transitive for $s\ge 1$, locally primitive, etc.) and then to try to find
regular covering projections $\wp:\tilde{\Gamma} \to \Gamma$ along which $G$ lifts.
In many
applications, it is desirable for the lift to have the following two additional properties:
\begin{itemize}
\item[(1)] $G$ is the \textit{largest} subgroup of $\mathrm{Aut}(\Gamma)$ that lifts along $\wp$;
\item[(2)] \textit{every} automorphism of $\tilde{\Gamma}$ projects along $\wp$.
\end{itemize}
If both these requirements are fulfilled, then $\mathrm{Aut}(\tilde{\Gamma})$ is precisely the lift of $G$.
The problem of finding regular covering projections satisfying (1) has been addressed in an ad-hoc way for some fixed
pairs $(\Gamma,G)$ (see, for example,~\cite{MK,WanChe,ZF}) and determining conditions under which
a covering projection satisfies (2) was considered by several authors in the very specific context of canonical double covers (see~\cite{Sur,Wil}). There have been some attempts to determine covering projections satisfying simultaneously (1) and (2), but again only for a small number of very specific pairs $(\Gamma,G)$ (see, for example,~\cite{CM,Ma,spiga2}).
The aim of this paper is
to address the problem of existence of a regular covering projection satisfying (1) and (2)
for arbitrary pairs $(\Gamma,G)$.
In Theorem~\ref{the:main} we prove that, if $\mathrm{Aut}(\Gamma)$ acts faithfully on the integral cycle space $\H_1(\Gamma;\mathbb{Z})$, then
a regular covering projection onto $\Gamma$ fulfilling~(1) always exists; see Section~\ref{BM} for notation and terminology. The condition of $\mathrm{Aut}(\Gamma)$ acting faithfully on $\H_1(\Gamma;\mathbb{Z})$ is very mild: in most interesting cases $\mathrm{Aut}(\Gamma)$ does act faithfully on $\H_1(\Gamma;\mathbb{Z})$, see Lemma~\ref{lemma11} and Corollary~\ref{cor:3ec}. Moreover, there are examples where $\mathrm{Aut}(\Gamma)$ does not act faithfully on $\H_1(\Gamma;\mathbb{Z})$ and where a regular covering projection as in~(1) does not exist: the easiest example is when $\Gamma$ is a cycle and $G$ is the transitive cyclic subgroup of $\mathrm{Aut}(\Gamma)$.
In Theorem~\ref{the:cor} we prove that, if $\mathrm{Aut}(\Gamma)$ acts faithfully on $\H_1(\Gamma;\mathbb{Z})$ and $(\Gamma,G)$
satisfies an additional condition, then there exists a regular covering projection onto $\Gamma$ satisfying~(1) and~(2). The extra condition on $(\Gamma,G)$ is slightly technical and requires some notation and terminology, thus we refer the reader to Section~\ref{sec:Cor} for its definition and details. Here we simply observe that the condition is satisfied by many interesting classes: for instance,
when $G$ acts transitively on the $2$-arcs of $\Gamma$, or
when $G$ acts transitively on the arcs of $\Gamma$ and the valency of $\Gamma$ is prime.
In Conjecture~\ref{conj}, we dare to conjecture that the additional requirements we put on $\Gamma$ and $G$ in Theorem~\ref{the:cor}
are not needed, that is,
a regular covering projection satisfying both (1) and (2) exists whenever
$\mathrm{Aut}(\Gamma)$ acts faithfully on the integral cycle space of $\Gamma$.
Both Theorems~\ref{the:main} and~\ref{the:cor} do apply to graphs that are not necessarily simple and we refer to Subsection~\ref{subsection} for the precise definition of graph in our paper.
We conclude this introductory section by giving some applications. Theorem~\ref{the:cor} and Corollary~\ref{cor:final} reprove, in a unified way, a number of results that have been proved in the past using methods specific to the families of graphs under consideration. For example, one of the consequences of Corollary~\ref{cor:final} is a solution to three problems posed by Djokovi\'c and Miller~\cite[Problems 2, 3 and 4]{DjM}
about the existence of finite cubic arc-transitive graphs with a prescribed type of the full automorphism group,
which were first solved in~\cite{ConLor} by an ad-hoc construction. Furthermore, assuming the correctness of Conjecture~\ref{conj} one can answer a 2001 question of Maru\v{s}i\v{c} and Nedela~\cite[Problem 7.7]{MarNed} about the existence of tetravalent half-arc-transitive graphs of any given possible type, and a number
of similar other problems, such as the one of existence of graphs of every possible arc-type, see~\cite{Nem,CPZ}.
\section{Background material and notation}\label{BM}
When working with covering projections of graphs, it proves useful to allow graphs to have multiple edges, loops and
semiedges; to avoid unnecessary complications, however, semiedges will be prohibited in this paper.
We will thus introduce the definitions and notations pertaining to graphs as defined in~\cite{MNS}, see also~\cite{elabcov} for a more succinct overview. In what follows, we provide only a brief
account.
\subsection{Graph, fundamental group and integral cycle space}\label{subsection}A {\em graph} is an ordered $4$-tuple $\Gamma=(D,V; \mathop{{\rm beg}},\mathop{{\rm inv}})$ where
$D$ and $V$ are disjoint non-empty finite sets of {\em darts}
and {\em vertices}, respectively, $\mathop{{\rm beg}}: D \to V$ is a mapping
which assigns to each dart $x$ its {\em tail}
$\mathop{{\rm beg}}(x)$, and $\mathop{{\rm inv}}: D \to D$ is an involutory permutation which interchanges
every dart $x$ with its {\em inverse dart}, denoted by $x^{-1}$.
The vertex $\mathop{{\rm beg}}(x^{-1})$ is then called the {\em head} of the dart $x$.
To avoid degeneracies, we assume that $\mathop{{\rm inv}}$ has no fixed points, that is $x^{-1} \not = x$ for every dart $x$.
In the language of \cite{MNS} this means that we only consider graphs without {\em semiedges}. To some extent this hypothesis is not really needed, however it makes the statements of our main results neater and the proofs uniform without subdivision into cases.
An {\em edge} underlying a dart $x$ is an unordered pair $\{x,x^{-1}\}$ of mutually inverse darts,
and $\{\mathop{{\rm beg}}(x), \mathop{{\rm beg}}(x^{-1})\}$ is the set of {\em endvertices} of that edge. Two edges with the same set of
endvertices are called {\em parallel} and an edge with only one endvertex is a {\em loop}. A graph
without loops and parallel edges is {\em simple} and the usual terminology about simple graphs applies in this case.
The neighbourhood of a vertex $v$ of $\Gamma$, denoted $\Gamma(v)$,
is the set of darts having $v$ as their tail, and the cardinality of $\Gamma(v)$ is called the
{\em valency} of $v$.
A {\em walk} from a vertex $v$ to a vertex $u$ in $\Gamma$ is a sequence of darts such that
$v$ is the tail of the first dart, $u$ is the head of the last dart and
the head of each dart in the walk coincides with the tail of the next dart in the walk. When $v=u$, the walk is said to be {\em closed}, whereas
the walk is {\em reduced} provided no two consecutive darts are inverses of each other.
Note that if $x$ is a dart and $\{x,x^{-1}\}$ is
a loop, then $(x)$ is a reduced closed walk,
called a
{\em loop walk}.
The empty sequence of darts is considered a walk and is called a trivial walk.
For a vertex $b$ of $\Gamma$ one can define the
{\em fundamental group at $b$}, denoted $\pi(\Gamma,b)$, as the set of all closed reduced walks starting and
ending in $b$, with the operation being the concatenation (with the deletion of consecutive pairs of mutually inverse darts, if necessary). Note that $\pi(\Gamma,b)$ is a free product of infinite cyclic groups and cyclic groups of order $2$, the latter arising from semiedge walks. In particular, since we are assuming that $\Gamma$ has no semiedges, $\pi(\Gamma,b)$ is a free group.
The abelianisation $\pi(\Gamma,b)/[\pi(\Gamma,b),\pi(\Gamma,b)]$ of $\pi(\Gamma,b)$, viewed
as a $\mathbb Z$-module, is called the {\em first homology group} or the {\em integral cycle space}
and denoted $\H_1(\Gamma;\mathbb Z)$.
For the rest of this subsection, we assume $\Gamma$ to be connected. Then, $\H_1(\Gamma;\mathbb Z)$
is independent of the choice of the vertex $b$. Moreover,
$\H_1(\Gamma;\mathbb Z)$ is isomorphic to $\mathbb Z^{m_\Gamma}$ where $m_\Gamma$ is the {\em Betti number of $\Gamma$}, that is, the number of cotree edges relative to a fixed spanning tree of $\Gamma$.
In fact, given a fixed spanning tree ${\mathcal{T}}$ of $\Gamma$, a $\mathbb{Z}$-basis for $\H_1(\Gamma;\mathbb Z)$ can be chosen
in such a way that each cotree edge $e$ corresponds to an oriented cycle in $\Gamma$ whose only cotree edge is $e$.
This generating set is called an {\em oriented cycle basis} and does depend on the choice of ${\mathcal{T}}$.
Given a prime number $p$, we let $\mathbb Z_p$ be the finite field of order $p$. Now, the tensor product $\H_1(\Gamma;\mathbb Z)\otimes_{\mathbb Z}\mathbb{Z}_p$ will be denoted $\H_1(\Gamma;\mathbb Z_p)$.
Since $\mathbb Z_p$ is a $(\mathbb Z,\mathbb Z_p)$-bimodule, $\H_1(\Gamma;\mathbb Z_p)$ can be viewed as a $\mathbb Z_p$-module.
Note that
$\H_1(\Gamma;\mathbb Z_p)\cong \mathbb Z_p^{m_\Gamma}$.
Indeed, an oriented cycle basis for $\H_1(\Gamma;\mathbb Z)$ gives rise to a $\mathbb{Z}_p$-basis for $\H_1(\Gamma;\mathbb Z_p)$ whose elements
correspond to oriented cycles of $\Gamma$.
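The computation of the Betti number from a spanning tree can be sketched directly in the dart formalism. The following Python sketch is our own illustration (the example is the \emph{theta graph}: two vertices joined by three parallel edges); it computes $m_\Gamma$ as the number of cotree edges, i.e.\ $\#\mathrm{edges}-\#\mathrm{vertices}+1$ for a connected graph:

```python
# A graph as (D, V; beg, inv): darts, vertices, tail map, dart-inversion.

def betti(verts, darts, beg, inv):
    # no semiedges: inv is an involution without fixed points
    assert all(inv[inv[d]] == d and inv[d] != d for d in darts)
    head = {d: beg[inv[d]] for d in darts}
    # grow a spanning tree by breadth-first search
    seen = {next(iter(verts))}
    tree_edges = 0
    frontier = list(seen)
    while frontier:
        v = frontier.pop()
        for d in darts:
            if beg[d] == v and head[d] not in seen:
                seen.add(head[d])
                frontier.append(head[d])
                tree_edges += 1
    assert seen == set(verts)          # connected
    return len(darts) // 2 - tree_edges  # cotree edges = Betti number

# theta graph: vertices 0, 1; darts a,b,c from 0 and their inverses from 1
verts = {0, 1}
darts = {'a', 'A', 'b', 'B', 'c', 'C'}
inv = {'a': 'A', 'A': 'a', 'b': 'B', 'B': 'b', 'c': 'C', 'C': 'c'}
beg = {'a': 0, 'b': 0, 'c': 0, 'A': 1, 'B': 1, 'C': 1}
assert betti(verts, darts, beg, inv) == 2   # H_1 is Z^2
```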
\subsection{Graph morphism, regular covering projection, universal covering}
Let $\tilde{\Gamma}= (\tilde{D},\tilde{V}; \mathop{{\rm beg}}_{\tilde{\Gamma}},\mathop{{\rm inv}}_{\tilde{\Gamma}})$ and $\Gamma= (D,V; \mathop{{\rm beg}}_\Gamma,\mathop{{\rm inv}}_\Gamma)$ be two graphs.
A {\em morphism of graphs}, $f \colon \tilde{\Gamma} \to \Gamma$,
is a function $f \colon \tilde{V} \cup \tilde{D} \to V \cup D$
such that
$f(\tilde{V}) \subseteq V$, $f(\tilde{D}) \subseteq D$,
$f\circ \mathop{{\rm beg}}_{\tilde{\Gamma}} = \mathop{{\rm beg}}_\Gamma \circ f$ and $f \circ \mathop{{\rm inv}}_{\tilde{\Gamma}} = \mathop{{\rm inv}}_\Gamma \circ f$.
A graph morphism is an {\em epimorphism} ({\em automorphism}) if it is
a surjection (bijection, respectively).
A graph epimorphism $\wp\colon \tilde{\Gamma} \to \Gamma$ is called a {\em covering projection} provided that it maps
the neighbourhood $\tilde{\Gamma}(\tilde{v})$ bijectively onto the neighbourhood $\Gamma(\wp(\tilde{v}))$, for every $\tilde{v}\in \tilde{V}$.
Let $\wp\colon \tilde{\Gamma} \to \Gamma$ be a covering projection of connected graphs, let
$g\in \mathrm{Aut}(\Gamma)$ and let ${\tilde{g}}\in\mathrm{Aut}(\tilde{\Gamma})$ be such that $\wp(x^{\tilde{g}}) = \wp(x)^g$ for every vertex and for every dart $x$ of $\tilde{\Gamma}$.
Then we say that $g$ lifts along $\wp$ and that ${\tilde{g}}$ is a {\em lift} of $g$. Similarly, we say that ${\tilde{g}}$ projects along $\wp$ and
that $g$ is a {\em projection} of ${\tilde{g}}$ along $\wp$. The set of all
automorphisms of $\Gamma$ that lift along $\wp$ is called the {\em maximal group that lifts along $\wp$}. If $G$ is a subgroup
of the maximal group that lifts, then the set ${\tilde{G}}$ of all lifts of elements of $G$ forms a subgroup of $\mathrm{Aut}(\tilde{\Gamma})$ and is called {\em the lift of $G$}. The lift of the maximal group that lifts along $\wp$ is the {\em maximal group that projects along $\wp$}.
The lift of the identity group is called the {\em group of covering transformations of $\wp$} and denoted $\mathrm{CT}(\wp)$. Whenever the covering graph $\tilde{\Gamma}$ is connected, the group $\mathrm{CT}(\wp)$ acts semi-regularly on each fibre, and if it is transitive (and thus regular) on each fibre, then we say that the covering projection
$\wp$ is {\em regular}.
Regular covering projections can equivalently be defined in terms of {\em graph quotients}. Let $\tilde{\Gamma}$ be a graph,
let $N \le \mathrm{Aut}(\tilde{\Gamma})$ with the stabiliser
$N_x$ being trivial for every vertex and for every edge $x$ of $\tilde{\Gamma}$,
and let $\tilde{\Gamma}/N$ be the graph whose vertices and darts are $N$-orbits of vertices and darts of $\tilde{\Gamma}$
and with the functions $\mathop{{\rm inv}}_{\tilde{\Gamma}/N}$ and $\mathop{{\rm beg}}_{\tilde{\Gamma}/N}$ mapping a dart $x^N$ of $\tilde{\Gamma}/N$ to the $N$-orbit
of $\mathop{{\rm inv}}_{\tilde{\Gamma}}(x)$ and $\mathop{{\rm beg}}_{\tilde{\Gamma}}(x)$, respectively (see \cite[Section~2.1]{elabcov}). The corresponding
{\em quotient projection} $\wp_{N} \colon \tilde{\Gamma} \to \tilde{\Gamma}/N$, mapping each vertex or dart of $\tilde{\Gamma}$ to its $N$-orbit,
is a regular covering projection and $N$ is the group $\mathrm{CT}(\wp_N)$ of covering transformations of $\wp_N$. Every regular covering projection arises in this way.
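A minimal illustration of the quotient construction, our own and not taken from the paper: the $6$-cycle modulo the half-turn $N=\langle v\mapsto v+3\rangle$, which is fixed-point-free on vertices and darts, yields a regular double cover of the $3$-cycle. The Python sketch below checks the covering condition, namely that the quotient projection is injective on each neighbourhood:

```python
n = 6
verts = range(n)
# darts: (v, +1) from v to v+1 and (v, -1) from v to v-1, mod n
darts = [(v, s) for v in verts for s in (+1, -1)]
beg = lambda d: d[0]
inv = lambda d: ((d[0] + d[1]) % n, -d[1])

rot = lambda d: ((d[0] + 3) % n, d[1])   # generator of N acting on darts
orbit = lambda d: min(d, rot(d))          # N-orbit representative

# The projection maps each dart to its N-orbit; the covering condition
# says it restricts to a bijection on every neighbourhood.
for v in verts:
    nbhd = [d for d in darts if beg(d) == v]
    images = {orbit(d) for d in nbhd}
    assert len(images) == len(nbhd) == 2

# N acts freely (semiregularly) on darts, so the projection is regular.
assert all(rot(d) != d for d in darts)
assert len({orbit(d) for d in darts}) == len(darts) // 2
```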
\begin{lemma}{{\cite[Sections 2.2 and 3]{elabcov}}}
\label{lem:maxlift}
If $\wp \colon \tilde{\Gamma} \to {\Gamma}$ is a regular covering projection, then the maximal group that projects along $\wp$
equals the normaliser of $N=\mathrm{CT}(\wp)$ in $\mathrm{Aut}(\tilde{\Gamma})$.
Moreover, ${\Gamma}$ is isomorphic to the quotient graph $\tilde{\Gamma}/N$,
the quotient projection $\tilde{\Gamma} \to \tilde{\Gamma}/N$ is a covering projection isomorphic to $\wp$,
and for a group $G\le \mathrm{Aut}({\Gamma})$ and its lift ${\tilde{G}}$, we have $G\cong {\tilde{G}}/N$.
\end{lemma}
\subsection{Splitting of covering projections}
Let $\wp\colon \tilde{\Gamma} \to {\Gamma}$ be a regular covering projection between connected graphs
with covering transformation group $N$, let $G$ be a subgroup of $\mathrm{Aut}({\Gamma})$ that lifts along $\wp$ to ${\tilde{G}}$ and let $K$ be a normal subgroup of $N$. Then one can consider the quotient projection $\wp_K \colon \tilde{\Gamma} \to \tilde{\Gamma}/K$
and define $\wp_{N/K}\colon \tilde{\Gamma}/K \to {\Gamma}$ by $\wp_{N/K}(x^K) = \wp(x)$ for every vertex and for every dart $x$ of $\tilde{\Gamma}$.
Then the following lemma holds:
\begin{lemma}
The covering projection $\wp_{N/K}$ is regular with covering transformation group
$N/K$ and $\wp = \wp_{N/K} \circ \wp_K$. Moreover, if $K$ is normalised by $G$, then $G$ lifts along $\wp_{N/K}$ and its lift is ${\tilde{G}}/K$.
\end{lemma}
If ${\mathcal{T}}$ is an infinite tree and $\wp\colon {\mathcal{T}} \to \Gamma$ is a regular covering projection, then we say that $\wp$
is {\em universal}. It is well known that for every finite graph there is, up to equivalence of covering projections, a unique
universal covering projection and that it has the following property:
\begin{lemma}
\label{lem:unilift}
If $\Gamma$ is a finite connected graph and $\wp\colon {\mathcal{T}} \to \Gamma$ is the universal covering projection,
then $\mathrm{Aut}(\Gamma)$ lifts along $\wp$ and $\mathrm{CT}(\wp)$ is isomorphic to the fundamental group $\pi(\Gamma,b)$
for some (every) vertex $b$ of $\Gamma$. Moreover, $\mathrm{CT}(\wp)_x = 1$ for every vertex and for every edge $x$ of ${\mathcal{T}}$.
\end{lemma}
\section{Results and proofs}
Given a group $X$ and a subgroup $Y$, we denote by $[X,Y]$ the commutator subgroup defined by $[X,Y]:=\langle x^{-1}y^{-1}xy\mid x\in X,y\in Y\rangle$, by $\N XY$ the normaliser of $Y$ in $X$ and we write $X^p:=\langle x^p\mid x \in X\rangle$. By $\Zent{G}$, we denote the centre of the group $G$.
Recall that a $\mathbb{Z}_pX$-module $V$ is regular if $V$ is isomorphic to the group algebra $\mathbb{Z}_pX$ (seen as a $\mathbb{Z}_pX$-module). This means that $\dim_{\mathbb{Z}_p}(V)=|X|$ and that $V$ has a $\mathbb{Z}_p$-basis $(v_x\mid x\in X)$ such that the action of $X$ on $(v_x\mid x\in X)$ is permutation isomorphic to the action of $X$ on itself by right multiplication. In other words, $v_x y=v_{xy}$, for each $x,y\in X$.
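The regular module can be made concrete: for the arbitrary illustrative choice $X=C_3$ and $p=2$, each $y\in X$ acts on the basis $(v_x\mid x\in X)$ by the permutation matrix of right multiplication, and the map sending $y$ to its matrix is a homomorphism. A Python sketch (ours, not part of the paper):

```python
# Regular Z_p X-module for X = C_3 = {0, 1, 2} (addition mod 3), p = 2:
# basis (v_x | x in X), action v_x . y = v_{x+y}.

p, n = 2, 3
X = range(n)

def matrix(y):
    # column x of the matrix of y sends v_x to v_{x+y}
    return [[1 if (x + y) % n == row else 0 for x in X] for row in X]

def mul(A, B):
    # matrix product over Z_p
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

# y -> matrix(y) is a homomorphism: matrix(y) matrix(z) = matrix(y + z)
for y in X:
    for z in X:
        assert mul(matrix(y), matrix(z)) == matrix((y + z) % n)
```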
\begin{theorem}
\label{the:tree}
Let $p$ be a prime, let ${\mathcal{T}}$ be an infinite tree, let ${\mathcal{G}} \le \mathrm{Aut}({\mathcal{T}})$, let ${\mathcal{N}}$ be a non-identity normal subgroup of ${\mathcal{G}}$ of finite index such that
${\mathcal{N}}_x = 1$ for every vertex and for every edge $x$ of ${\mathcal{T}}$,
and let ${\mathcal{H}} = \N {\mathrm{Aut}({\mathcal{T}})} {{\mathcal{N}}}$.
If
${\mathcal{H}}/{\mathcal{N}}$ acts faithfully on $\H_{1}({\mathcal{T}}/{\mathcal{N}};\mathbb Z)$,
then there exists a normal subgroup ${\mathcal{P}}$ of ${\mathcal{N}}$ of finite index such that $\N {{\mathcal{H}}}{{\mathcal{P}}}={\mathcal{G}}$
and ${\mathcal{N}}/{\mathcal{P}}$ is a $p$-group.
\end{theorem}
\begin{proof}
The idea for the proof of this theorem is inspired by a surprisingly unrelated problem solved by Bryant and Kov\'{a}cs in~\cite{BK}. We follow closely~\cite{BK} and we use some of the observations therein. This is the second time that this paper on Lie algebras has proved useful in the context of group actions on graphs; see for instance~\cite{PSV} for another application.
Observe that, as ${\mathcal{N}}\ne 1$ and ${\mathcal{N}}_v=1$ for every $v\in \mathrm V({\mathcal{T}})$, from the Bass-Serre theory, we deduce that ${\mathcal{N}}$ is a non-identity free group, see~\cite[Proposition~$4.5$]{Wood}. Following~\cite{BK}, we construct a filtration of the free group ${\mathcal{N}}$. Define ${\mathcal{N}}_1:={\mathcal{N}}$ and, for $i\in\mathbb{N}\setminus\{0\}$, ${\mathcal{N}}_{i+1}:={\mathcal{N}}_i^p[{\mathcal{N}}_i,{\mathcal{N}}]$. By construction, ${\mathcal{N}}_{i+1}$ is the smallest normal subgroup of
${\mathcal{N}}$
contained in ${\mathcal{N}}_i$ such that ${\mathcal{N}}_i/{\mathcal{N}}_{i+1}$ has exponent $p$ and is central in ${\mathcal{N}}/{\mathcal{N}}_{i+1}$, that is,
${\mathcal{N}}_i/{\mathcal{N}}_{i+1}\le \Zent {{\mathcal{N}}/{\mathcal{N}}_{i+1}}$. Moreover, ${\mathcal{N}}_{i+1}$ is normal in ${\mathcal{G}}$ because so is ${\mathcal{N}}$.
Write $G:={\mathcal{G}}/{\mathcal{N}}_1$ and $H:={\mathcal{H}}/{\mathcal{N}}_1$. As ${\mathcal{G}}\le {\mathcal{H}}$, we have $G\le H$. Given $i\in\mathbb{N}\setminus\{0\}$, define $V_i:={\mathcal{N}}_i/{\mathcal{N}}_{i+1}$. As ${\mathcal{N}}_{i}$ is centralised by ${\mathcal{N}}={\mathcal{N}}_1$ modulo ${\mathcal{N}}_{i+1}$, the action of ${\mathcal{H}}$ by conjugation on ${\mathcal{N}}_i/{\mathcal{N}}_{i+1}=V_i$ defines a group homomorphism
$H={\mathcal{H}}/{\mathcal{N}}_1\to \mathrm{Aut}(V_i)$, that is, $H$ acts as a linear group on the $\mathbb{Z}_p$-vector space $V_i$ and hence $V_i$ is a $\mathbb{Z}_pH$-module. The inclusion $G\le H$ allows us to regard, via the restriction mapping, each $V_i$ also as a $\mathbb{Z}_pG$-module.
We now require a few facts, from~\cite{Wood} and from~\cite{BK}. From~\cite[Theorem~$9.2$]{Wood}, the $\mathbb{Z}H$-module $\H_1({\mathcal{T}}/{\mathcal{N}};\mathbb{Z})$ is isomorphic to the $\mathbb{Z}H$-module ${\mathcal{N}}_1/[{\mathcal{N}}_1,{\mathcal{N}}_1]$. Therefore the $\mathbb{Z}_pH$-module $\H_1({\mathcal{T}}/{\mathcal{N}};\mathbb{Z}_p)=\H_1({\mathcal{T}}/{\mathcal{N}};\mathbb{Z})\otimes \mathbb{Z}_p$ is isomorphic to the $\mathbb{Z}_pH$-module ${\mathcal{N}}/[{\mathcal{N}},{\mathcal{N}}]\otimes \mathbb{Z}_p\cong {\mathcal{N}}/[{\mathcal{N}},{\mathcal{N}}]{\mathcal{N}}^p={\mathcal{N}}_1/{\mathcal{N}}_2=V_1$. Now, the hypothesis in the statement of the theorem allows us to conclude that $H$ acts faithfully on $V_1={\mathcal{N}}_1/{\mathcal{N}}_2$ and hence we can view $H$ as a subgroup of $\mathrm{Aut}({\mathcal{N}}_1/{\mathcal{N}}_2)=\mathrm{Aut}(V_1)$. This fact will allow us to apply directly the results from~\cite{BK}. Let $\Sigma=\mathrm{Aut}(V_1)$. From ~\cite[page 416]{BK} it follows that the action of $\Sigma$ on $V_1$ induces an action on $V_i$ and, moreover, the embedding of $H$ in $\Sigma$ is compatible with the action of $H$ defined on $V_i$ above.
From~\cite[Theorems~2 and 3]{BK}, we deduce that there exists a positive integer $i$ such that
the $\mathbb{Z}_p\Sigma$-module $V_i$ contains a regular submodule.
Let $R$ be a regular $\mathbb{Z}_p\Sigma$-module contained in $V_{i}$
and let $(r_\sigma \mid \sigma \in \Sigma)$ be a $\mathbb Z_p$-basis of $R$
with $r_\sigma \delta = r_{\sigma\delta}$ for every $\sigma, \delta \in \Sigma$.
Let $P:=\langle r_\sigma \mid \sigma \in G \rangle$ and observe that $P$ is a regular $\mathbb Z_pG$-module.
Since $V_i={\mathcal{N}}_i/{\mathcal{N}}_{i+1}$, we may write $P={\mathcal{P}}/{\mathcal{N}}_{i+1}$ for some subgroup ${\mathcal{P}}$ of ${\mathcal{N}}_i$ containing ${\mathcal{N}}_{i+1}$. Observe that ${\mathcal{N}}/{\mathcal{P}}$ is a $p$-group because ${\mathcal{N}}/{\mathcal{P}}$ is a quotient of the $p$-group ${\mathcal{N}}_1/{\mathcal{N}}_{i+1}$. Moreover, the index of ${\mathcal{P}}$
in ${\mathcal{N}}$ is finite because ${\mathcal{N}}_{i+1}$ has finite index in ${\mathcal{N}}_1={\mathcal{N}}$. (Each $\mathbb{Z}_p$-vector space $V_i$ is finite dimensional because ${\mathcal{N}}_1$ is finitely generated.)
Let $x\in \N {\mathcal{H}} {\mathcal{P}}=\N{\mathrm{Aut}({\mathcal{T}})} {\mathcal{N}}\cap \N {\mathrm{Aut}({\mathcal{T}})}{\mathcal{P}}$. Since $x$ normalises ${\mathcal{N}}$, $x$ acts by conjugation as a linear transformation on each of the vector spaces ${\mathcal{N}}_1/{\mathcal{N}}_2=V_1$ and ${\mathcal{N}}_i/{\mathcal{N}}_{i+1}=V_i$. Denote by $\tau\in \mathrm{Aut}(V_1) = \Sigma$ the linear transformation of $V_1$ induced by conjugation by $x$. Now, $\tau$ fixes $R$ setwise because $R$ is a $\mathbb{Z}_p\Sigma$-submodule of $V_i$. Since $x$ normalises ${\mathcal{P}}$, $\tau$ fixes ${\mathcal{P}}/{\mathcal{N}}_{i+1}=P$ setwise. Since $r_{1} \in P$, we see that $r_{1} \tau=r_\tau\in P=\langle r_\sigma\mid \sigma\in G \rangle$ and hence $\tau\in G$. Let $y\in {\mathcal{G}}$ be an element projecting to $\tau$. Now, $xy^{-1}$ projects to the identity element of $\Sigma=\mathrm{Aut}(V_1)$. Therefore $xy^{-1}$ centralises $V_1={\mathcal{N}}_1/{\mathcal{N}}_2$. Observe that $xy^{-1}$ lies in ${\mathcal{H}}$ because so do $x$ and $y$. By hypothesis, ${\mathcal{H}}/{\mathcal{N}}$ acts faithfully on $\H_1({\mathcal{T}}/{\mathcal{N}};\mathbb{Z}_p)=V_1$. Therefore $xy^{-1}\in {\mathcal{N}}$. Since ${\mathcal{N}}\le {\mathcal{G}}$ and $y\in {\mathcal{G}}$, we obtain $x\in {\mathcal{G}}$. We have thus shown $\N {\mathcal{H}} {\mathcal{P}}\le {\mathcal{G}}$; the inclusion ${\mathcal{G}}\le \N {\mathcal{H}} {\mathcal{P}}$ is obvious.
\end{proof}
\begin{theorem}
\label{the:main}
Let $p$ be a prime, let $\Gamma$ be a finite connected graph such that the induced action of $\mathrm{Aut}(\Gamma)$
on $\H_1({\Gamma};\mathbb Z)$ is faithful, and let $G\le \mathrm{Aut}(\Gamma)$.
Then there exists a regular covering projection $\wp \colon \tilde{\Gamma} \to {\Gamma}$
with $\tilde{\Gamma}$ finite, such that the maximal group that lifts along $\wp$ is $G$ and that the group of covering transformations of $\wp$ is a $p$-group.
\end{theorem}
\begin{proof}
Let $\mu \colon {\mathcal{T}} \to \Gamma$ be the universal covering projection, see Section~\ref{BM}.
Then ${\mathcal{T}}$ is an infinite tree and
in view of Lemma~\ref{lem:unilift},
$G$ lifts along $\mu$ to a group ${\mathcal{G}}\le \mathrm{Aut}({\mathcal{T}})$. Let ${\mathcal{N}} = \mathrm{CT}(\mu)$.
Then ${\mathcal{N}}\ne 1$,
${\mathcal{N}}_x = 1$ for every vertex and for every edge $x$ of ${\mathcal{T}}$, ${\mathcal{T}}/{\mathcal{N}} \cong \Gamma$,
and we may identify $\Gamma$ with ${\mathcal{T}}/{\mathcal{N}}$ in such a way that ${\mathcal{G}}/{\mathcal{N}} = G$ and that
the quotient projection $\wp_{\mathcal{N}} \colon {\mathcal{T}} \to {\mathcal{T}}/{\mathcal{N}}$ is $\mu$.
Let ${\mathcal{H}}=\N{\mathrm{Aut}({\mathcal{T}})} {{\mathcal{N}}}$. By Lemma~\ref{lem:maxlift}, ${\mathcal{H}}$ is the largest group that projects along $\mu$ and thus,
since $\mathrm{Aut}(\Gamma)$ lifts along $\mu$, ${\mathcal{H}}$ is the lift of $\mathrm{Aut}(\Gamma)$.
Since $\mathrm{Aut}({\Gamma})$ acts faithfully on $\H_1({\Gamma};\mathbb Z)$, by Theorem~\ref{the:tree}, there exists
a normal subgroup ${\mathcal{P}}$ of ${\mathcal{N}}$ of finite index such that $\N{{\mathcal{H}}}{{\mathcal{P}}}={\mathcal{G}}$ and ${\mathcal{N}}/{\mathcal{P}}$ is a $p$-group.
Let $\tilde{\Gamma} = {\mathcal{T}}/{\mathcal{P}}$ and ${\tilde{G}} = {\mathcal{G}}/{\mathcal{P}} \le \mathrm{Aut}(\tilde{\Gamma})$.
In view of Lemma~\ref{lem:maxlift},
the quotient projection $\wp_{\mathcal{P}} \colon {\mathcal{T}} \to \tilde{\Gamma}$ is a covering projection
and there exists a regular covering projection $\wp \colon \tilde{\Gamma} \to {\Gamma}$ such
that $\mu = \wp \circ \wp_{\mathcal{P}}$. Moreover, since ${\mathcal{G}}$ normalises ${\mathcal{P}}$, the group
$G$ lifts along $\wp$ and its lift is ${\tilde{G}}$.
Let $M\le \mathrm{Aut}(\Gamma)$ be the maximal group that lifts along $\wp$ and let ${\tilde{M}} \le \mathrm{Aut}(\tilde{\Gamma})$ be its lift.
Clearly, $G\le M$ and thus ${\tilde{G}} \le {\tilde{M}}$.
Since ${\mathcal{T}}$ is a tree, $\wp_{\mathcal{P}}$ is a universal covering projection, and
in view of Lemma~\ref{lem:unilift},
${\tilde{M}}$ lifts along $\wp_{\mathcal{P}}$ to some ${\mathcal{M}} \le \mathrm{Aut}({\mathcal{T}})$.
But then
${\mathcal{M}}$ is the lift of $M$ along $\mu = \wp \circ \wp_{\mathcal{P}}$,
and thus
${\mathcal{M}}\le \N{\mathrm{Aut}({\mathcal{T}})} {{\mathcal{N}}}={\mathcal{H}}$. On the other hand, ${\mathcal{M}}$ normalises ${\mathcal{P}}$ and so
${\mathcal{M}}\le \N{{\mathcal{H}}}{{\mathcal{P}}}= {\mathcal{G}}$. But then ${\tilde{M}} = {\mathcal{M}}/{\mathcal{P}} \le {\mathcal{G}}/{\mathcal{P}} = {\tilde{G}}$, and hence ${\tilde{M}} = {\tilde{G}}$.
Therefore,
$M = {\tilde{M}}/ ({\mathcal{N}}/{\mathcal{P}}) = {\tilde{G}}/ ({\mathcal{N}}/{\mathcal{P}}) = G$, and
thus $G$ is the maximal group that lifts along $\wp$, as required.
\end{proof}
Let us now discuss the condition that $\mathrm{Aut}(\Gamma)$ acts faithfully on $\H_1({\Gamma};\mathbb Z)$.
\begin{lemma}\label{lemma11}
If $\Gamma$ is a simple $3$-edge-connected graph, then $\mathrm{Aut}(\Gamma)$ acts
faithfully on $\H_1({\Gamma};\mathbb Z)$.
\end{lemma}
\begin{proof}
Suppose on the contrary that the action of $\mathrm{Aut}(\Gamma)$ on $\H_1({\Gamma};\mathbb Z)$ is not faithful. Then there exist
an automorphism $g$ fixing every element of $\H_1({\Gamma};\mathbb Z)$ and a vertex $v$ such that $v^g \not = v$.
Let $u,w$ and $z$ be three neighbours of $v$. By $3$-edge-connectivity of $\Gamma$ it follows that
there is a cycle $C_1$ through the $2$-path $uvw$ and a cycle $C_2$ through the $2$-path $uvz$.
Now fix an orientation of $C_1$ and $C_2$ in such a way that $u$ is a predecessor of $v$ in both $C_1$ and $C_2$.
Consider $C_1$ and $C_2$ as elements of $\H_1({\Gamma};\mathbb Z)$. By assumption, $g$ preserves $C_1$ and $C_2$ together with their orientation. In particular, the vertex $v^g$ lies on $C_1$ and on $C_2$. For $i\in \{1,2\}$, let $P_i$ be the path from $v^g$ to $u$ following
the cycle $C_i$ in the positive direction with respect to the chosen orientation. Note that, since $g$ preserves the orientation,
$u^g$ belongs to neither $P_1$ nor $P_2$. Now consider the closed walk $C$ obtained by concatenating $P_1$ with the reverse of $P_2$ and consider it as an element of $\H_1({\Gamma};\mathbb Z)$. By assumption, $C$ is fixed by $g$. However, the vertex $u$ belongs to $C$, while $u^g$ does not, a contradiction.
\end{proof}
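Lemma~\ref{lemma11} is easy to sanity-check by brute force on a small example. The following Python sketch (purely illustrative, and not part of the argument above) takes $K_4$, the smallest simple $3$-edge-connected graph, fixes a basis of oriented triangles for $\H_1(K_4;\mathbb Z)$, and confirms that only the identity automorphism acts trivially on it.

```python
from itertools import permutations

import numpy as np

n = 4
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]  # K_4, oriented u -> v
eidx = {e: i for i, e in enumerate(edges)}

def cycle_vector(cycle):
    """Signed edge-incidence vector of a closed vertex walk."""
    vec = np.zeros(len(edges), dtype=int)
    for a, b in zip(cycle, cycle[1:] + cycle[:1]):
        if a < b:
            vec[eidx[(a, b)]] += 1
        else:
            vec[eidx[(b, a)]] -= 1
    return vec

# A basis of H_1(K_4; Z): the three oriented triangles through vertex 0.
basis = [cycle_vector([0, 1, 2]), cycle_vector([0, 1, 3]), cycle_vector([0, 2, 3])]

def push(g, vec):
    """Push a cycle vector forward along the vertex permutation g."""
    out = np.zeros_like(vec)
    for (u, v), i in eidx.items():
        gu, gv = g[u], g[v]
        if gu < gv:
            out[eidx[(gu, gv)]] += vec[i]
        else:
            out[eidx[(gv, gu)]] -= vec[i]
    return out

# Every vertex permutation is an automorphism of K_4; keep those acting
# trivially on the cycle basis.  The lemma predicts only the identity.
trivial = [g for g in permutations(range(n))
           if all((push(g, b) == b).all() for b in basis)]
print(trivial)
```

Replacing the edge list, and the permutations by the actual automorphisms, tests any other small graph in the same way.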
Since in connected vertex-transitive graphs the edge connectivity equals the valency (see for instance~\cite[Lemma~$3.3.3$]{GoRo}), we get the following corollary.
\begin{corollary}
\label{cor:3ec}
If $\Gamma$ is a simple connected vertex-transitive graph of valency at least $3$, then $\mathrm{Aut}(\Gamma)$ acts
faithfully on $\H_1({\Gamma};\mathbb Z)$.
\end{corollary}
\section{A corollary}\label{sec:Cor}
Given a graph $\Gamma$, $G\le \mathrm{Aut}(\Gamma)$ and a vertex $v$ of $\Gamma$, we let
$G_v^{\Gamma(v)}$ denote the permutation group
induced by the action of the vertex-stabiliser $G_v$ on the neighbourhood $\Gamma(v)$ of the vertex $v$.
A finite transitive permutation group $L$ is {\em graph-restrictive}
provided there exists a constant $c = c(L)$ such that whenever
$\Gamma$ is a connected $G$-arc-transitive graph with $G_v^{\Gamma(v)}$ permutationally isomorphic to $L$,
the order of the stabiliser $G_v$ is at most $c(L)$. This notion was introduced and studied in~\cite{PSV1} and is relevant in the context of the Weiss conjecture: in this terminology, the Weiss conjecture states that every primitive group is graph-restrictive.
We will call a transitive permutation group $L$ acting on a set $\Omega$
{\em strongly graph-restrictive} if every group $T$ with $L\le T \le\mathrm{Sym}(\Omega)$ is graph-restrictive.
Examples of strongly graph-restrictive permutation groups are provided by certain classes of primitive groups. As the culmination of work of Weiss and Trofimov, every $2$-transitive group is graph-restrictive and, as an overgroup of a $2$-transitive group is still $2$-transitive, we deduce that $2$-transitive groups are strongly graph-restrictive. The proof of the Weiss conjecture for $2$-transitive groups is scattered over many papers and hence this result is to some extent folklore; see~\cite[Section~$6$]{PSV1} and the references therein for an overview of the argument. Other examples of strongly graph-restrictive groups are provided by primitive groups with abelian socle, that is, primitive groups of affine type. It was recently proved~\cite{spiga1} that the Weiss conjecture does hold for this class of primitive groups. In many interesting cases, every overgroup of a primitive group of affine type is either affine or $2$-transitive, and hence these groups are strongly graph-restrictive. (For instance, most affine groups whose point stabilisers are primitive linear groups satisfy this property.) It would take us too far afield to describe the primitive groups of affine type where each overgroup is either affine or $2$-transitive, so we simply refer to~\cite{Asch,Asch2} or~\cite{Praeger} for a thorough analysis of the inclusions among primitive groups. Here we simply observe that every primitive group of prime degree is strongly graph-restrictive. In conclusion, if the Weiss conjecture is true, then every primitive group is strongly graph-restrictive.
For a strongly graph-restrictive group $L$,
we let $c_*(L)$ denote the maximum of all constants $c(T)$ with $L\le T \le \mathrm{Sym}(\Omega)$.
\begin{theorem}
\label{the:cor}
Let $\Gamma$ be a finite connected $G$-arc-transitive graph of valency at least $3$
such that $G_v^{\Gamma(v)}$ is strongly graph-restrictive.
Then there exists a regular covering projection $\wp \colon \tilde{\Gamma} \to {\Gamma}$
with $\tilde{\Gamma}$ finite, such that the maximal group that lifts along $\wp$ is $G$ and every automorphism of $\tilde{\Gamma}$ projects along $\wp$.
\end{theorem}
\begin{proof}
Let $n$ be the order of $\Gamma$ and let $p$ be a prime with $p >nc_*(G_v^{\Gamma(v)})$.
By Corollary~\ref{cor:3ec}, $\mathrm{Aut}(\Gamma)$ acts faithfully on $\H_1({\Gamma};\mathbb Z)$, and then by
Theorem~\ref{the:main}, there exists a regular covering projection $\wp \colon \tilde{\Gamma} \to {\Gamma}$
with $\tilde{\Gamma}$ finite, such that the maximal group that lifts along $\wp$ is $G$ and that the group of covering transformations of $\wp$ is a $p$-group.
Let $\tilde{A}$ be the automorphism group of $\tilde{\Gamma}$, let $\tilde{G}$ be the lift of $G$ along $\wp$ and let $N=\mathrm{CT}(\wp)$. From Lemma~\ref{lem:maxlift}, $\tilde{G}/N\cong G$ and $\tilde{G}=\N{\tilde{A}}{N}$. Let $\tilde{v}$ be a vertex of $\tilde{\Gamma}$, let $v=\wp(\tilde{v})$ and set $c=c_*(G_v^{{\Gamma}(v)})$. Since $G_{v}^{\Gamma(v)}\cong \tilde{G}_{\tilde{v}}^{\tilde{\Gamma}(\tilde{v})}$ is strongly graph-restrictive and $\tilde{G}_{\tilde{v}}^{\tilde{\Gamma}(\tilde{v})}\le \tilde{A}_{\tilde{v}}^{\tilde{\Gamma}(\tilde{v})}$, we have $|\tilde{A}_{\tilde{v}}|\le c$. Since $\tilde{G}$ is transitive on the vertices of $\tilde{\Gamma}$, we have $\tilde{A}=\tilde{A}_{\tilde{v}}\tilde{G}$ and hence $[\tilde{A}:\tilde{G}]=[\tilde{A}_{\tilde{v}}:\tilde{G}_{\tilde{v}}]\le |\tilde{A}_{\tilde{v}}|\le c<p$. Moreover, $[\tilde{G}:N]=|G|=n|G_v|\le nc<p$. Therefore, $[\tilde{A}:N]=[\tilde{A}:\tilde{G}][\tilde{G}:N]$ is not divisible by $p$ and hence $N$ is a Sylow $p$-subgroup of $\tilde{A}$.
By Sylow's theorem, the number of Sylow $p$-subgroups of $\tilde{A}$ is $[\tilde{A}:\N{\tilde{A}}{N}]=[\tilde{A}:\tilde{G}]<p$ and is congruent to $1$ modulo $p$. Therefore, $\tilde{A}=\N{\tilde{A}}N$ and hence $\tilde{A}=\tilde{G}$.
\end{proof}
\begin{corollary}
\label{cor:final}
Let $\Gamma$ be a finite connected $G$-arc-transitive graph of valency at least $3$. If $G$ acts transitively on the $2$-arcs of $\Gamma$ or if
the valency of $\Gamma$ is prime,
then
there exists a regular covering projection $\wp \colon \tilde{\Gamma} \to {\Gamma}$
with $\tilde{\Gamma}$ finite, such that the maximal group that lifts along $\wp$ is $G$ and every automorphism of $\tilde{\Gamma}$ projects along $\wp$.
\end{corollary}
We conclude by daring to conjecture that the requirement in Theorem~\ref{the:cor} that $G_v^{\Gamma(v)}$ be strongly graph-restrictive is not necessary.
\begin{conjecture}
\label{conj}
Let $\Gamma$ be a finite connected graph such that the induced action of $\mathrm{Aut}(\Gamma)$
on $\H_1({\Gamma};\mathbb Z)$ is faithful, and let $G\le \mathrm{Aut}(\Gamma)$.
Then there exists a regular covering projection $\wp \colon \tilde{\Gamma} \to {\Gamma}$
with $\tilde{\Gamma}$ finite, such that the maximal group that lifts along $\wp$ is $G$ and such that $\mathrm{Aut}(\tilde{\Gamma})$ projects along $\wp$.
\end{conjecture}
\smallskip
{\bf Acknowledgements.} The first author gratefully acknowledges financial support of the Slovenian Research Agency, ARRS, research program no.\ P1-0294.
| {
"timestamp": "2018-01-09T02:11:51",
"yymm": "1801",
"arxiv_id": "1801.02340",
"language": "en",
"url": "https://arxiv.org/abs/1801.02340",
"abstract": "In this paper we are interested in lifting a prescribed group of automorphisms of a finite graph via regular covering projections. Here we describe with an example the problems we address and refer to the introductory section for the correct statements of our results.Let $P$ be the Petersen graph, say, and let $\\wp:\\tilde{P}\\to P$ be a regular covering projection. With the current covering machinery, it is straightforward to find $\\wp$ with the property that every subgroup of $\\Aut(P)$ lifts via $\\wp$. However, for constructing peculiar examples and in applications, this is usually not enough. Sometimes it is important, given a subgroup $G$ of $\\Aut(P)$, to find $\\wp$ along which $G$ lifts but no further automorphism of $P$ does. For instance, in this concrete example, it is interesting to find a covering of the Petersen graph lifting the alternating group $A_5$ but not the whole symmetric group $S_5$. (Recall that $\\Aut(P)\\cong S_5$.) Some other time it is important, given a subgroup $G$ of $\\Aut(P)$, to find $\\wp$ with the property that $\\Aut(\\tilde{P})$ is the lift of $G$. Typically, it is desirable to find $\\wp$ satisfying both conditions. In a very broad sense, this might remind wallpaper patterns on surfaces: the group of symmetries of the dodecahedron is $S_5$, and there is a nice colouring of the dodecahedron (found also by Escher) whose group of symmetries is just $A_5$.In this paper, we address this problem in full generality.",
"subjects": "Combinatorics (math.CO); Group Theory (math.GR)",
"title": "Lifting a prescribed group of automorphisms of graphs"
} |
https://arxiv.org/abs/2212.08291 | Drivers, hitting times, and weldings in Loewner's equation | In addition to conformal weldings $\varphi$, simple curves $\gamma$ growing in the upper half plane generate driving functions $\xi$ and hitting times $\tau$ through Loewner's differential equation. While the Loewner transform $\gamma \mapsto \xi$ and its inverse $\xi \mapsto \gamma$ have been carefully examined, less attention has been paid to the maps $\xi \mapsto \tau \mapsto \varphi$. We study their continuity properties and show that uniform driver convergence implies uniform hitting time convergence and uniform welding convergence, even when the corresponding curves do not converge. Welding convergence implies neither hitting time nor driver convergence, while hitting time convergence implies driver convergence in (at least) the case of constant drivers.As an application, we show that a curve $\gamma$ of finite Loewner energy can be well approximated by an energy minimizer that matches $\gamma$'s welding on a sufficiently-fine mesh. | \section{Introduction and main results}
\subsection{Loewner's equation and associated functions}
A hundred years ago, Charles Loewner \cite{Loewner1923} showed that the evolution of the maps $g_t$ from the slit disk $\mathbb{D} \backslash \gamma([0,t])$ back to $\mathbb{D}$, where $\gamma$ is a curve growing into $\mathbb{D}$ from its boundary, satisfies a differential equation which in effect transforms $\gamma$ into a continuous \emph{driving function} $\lambda(t) = g_t(\gamma(t))$ taking values on $\partial \mathbb{D}$. Loewner's approach played an important role in de Branges' 1985 proof \cite{bieberbach} of the Bieberbach conjecture, and received renewed interest following the ground-breaking 2000 work of Schramm \cite{Schramm2000}, who showed that the random curves generated by a Brownian driving function run at speed $\kappa$ give the only possible conformally invariant scaling limits of a number of discrete models from statistical physics. These processes, typically normalized to live in the upper half plane $\mathbb{H}$ rather than Loewner's $\mathbb{D}$, and known as \emph{Schramm-Loewner evolutions} SLE$_\kappa$, have been intensely studied since and continue to be a topic of active research.\footnote{The literature is vast, and a non-exhaustive sampling of references is \cite{beff}, \cite{BLM}, \cite{JiamVlad},
\cite{Dubedat},
\cite{Jamespaper},
\cite{ShekharFriz},
\cite{FrizYuan},
\cite{GwynneMiller},
\cite{VikLawler},
\cite{lswerner},
\cite{LawlerVik2},
\cite{BasicpropSLE},
\cite{Schrammperc},
\cite{Qzipper},
\cite{Zhanrevers}.}
In the setting of the upper half plane, Loewner's method takes a simple curve $\gamma:[0,T] \rightarrow \mathbb{H} \cup \{x\}$, starting from a point $x = \gamma(0) \in \mathbb{R}$, and produces a real-valued driving function. In the process, however, it also produces a \emph{hitting time} function and a \emph{conformal welding} $\varphi$. To see this, consider the reversed Loewner flow in $\mathbb{H}$, described by normalized conformal maps $h_t:\mathbb{H} \rightarrow \mathbb{H}\backslash \gamma_t$ which satisfy the ODE
\begin{align}\label{Eq:UpwardsLoewner}
\partial_t h_t(z) = \frac{-2}{h_t(z) - \xi(t)}, \qquad h_0(z) = z
\end{align}
(see \S\ref{Sec:Prelim} for precise definitions and any unexplained terminology). Here the $h_t$ conformally map $\mathbb{H}$ to the complement of the curve $\gamma_t$ (or more generally ``compact $\mathbb{H}$-hull'') generated by the continuous function $\xi:[0,T]\rightarrow \mathbb{R}$ on $[0,t]$. The dynamics in \eqref{Eq:UpwardsLoewner} extends to points $x \in \mathbb{R}$ away from $\xi(0)$, which by the ODE flow towards the driving function until the \emph{hitting time}
\begin{align}\label{Def:HittingTime1}
\tau(x) := \inf\{\,t \geq 0 \; : \; \liminf_{s \nearrow t} |h_s(x) - \xi(s)| =0 \,\}.
\end{align}
When $\xi$ is sufficiently regular, $\gamma_t$ neither intersects itself nor the real line, other than at its base $\xi(t)$, and in this case the function $x \mapsto \tau(x)$ is continuous. Furthermore, exactly two points hit $\xi(t)$ to be ``welded together'' at each time $t$ (see Lemma \ref{Lemma:tauContinuous} below). By \eqref{Eq:UpwardsLoewner} this pair started on opposite sides of $\xi(0)$, and we obtain another map, the \emph{conformal welding} $\varphi$, by sending $x < \xi(0)$ to the unique $\varphi(x) > \xi(0)$ which satisfies $\tau(x) = \tau(\varphi(x))$. We thus arrive at a natural sequence of mappings
\begin{align}\label{Intro:Maps}
\gamma \mapsto \xi \mapsto \tau \mapsto \varphi.
\end{align}
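To fix ideas, the simplest instance of this chain of mappings is the constant driver $\xi \equiv 0$, for which \eqref{Eq:UpwardsLoewner} can be solved explicitly:
\begin{align*}
h_t(z) = \sqrt{z^2 - 4t}, \qquad \gamma(t) = 2i\sqrt{t}, \qquad \tau(x) = \frac{x^2}{4}, \qquad \varphi(x) = -x.
\end{align*}
Thus the vertical slit corresponds to quadratic hitting times and to the welding $x \mapsto -x$.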
The first arrow is the Loewner transform, and both it and its inverse have been carefully studied.\footnote{\label{Footnote:LoewnerTrans}Results include: $\gamma \mapsto \xi$ is continuous \cite[Thm. 6.2]{Kemp}, \cite[Thm. 1.8]{YizhengTopology}, but not uniformly so \cite[Figure 6]{LMR}. $\xi \mapsto \gamma$ is not continuous with respect to capacity parametrization on the $\gamma$ \cite[Ex. 4.49]{Lawler}, but it is when the $\gamma$ are equipped with a Carath\'{e}odory-type topology on their normalized Riemann maps \cite[Prop. 4.47]{Lawler}. In addition, restricting to more regular $\xi$ yields continuity for $\xi \mapsto \gamma$ with respect to various finer topologies on the $\gamma$. See \cite[Thm. 4.1]{LMR} for $\xi$ with locally-small H\"{o}lder-$1/2$ norm, \cite[Thm. 2$(v)$,$(vi)$]{ShekharFriz} for $\xi$ of finite Loewner energy and \cite[Thm. 1.2]{ShekharWang} for $\xi$ of ``locally regular'' bounded variation. See also \cite[Thm. 1.2]{SheffieldSun} for a continuity criterion in the $\xi \mapsto \gamma$ direction involving ``bi-directional and generic closeness'' on the drivers $\xi$.} The latter maps have received less attention, however, and we preface and motivate our study of them by summarizing what is currently known.
Lind \cite{Lind4} used connections between $\xi, \tau$ and $\varphi$ as part of her argument that a driver $\xi$ with H\"{o}lder-1/2 seminorm $|\xi|_{1/2}<4$ generates a simple curve. She proved that, given $|\xi|_{1/2}<4$, $\xi$ welds exactly two points at each time, $\tau(x) \asymp (x-\xi(0))^2$, and $\varphi$ satisfies $(\varphi(x-h)-\varphi(x))/(\varphi(x)-\varphi(x+h)) \asymp 1$ \cite[Lemma 3, Corollary 1, Lemma 4]{Lind4}. In our Lemma \ref{Lemma:tauContinuous} we generalize the first result to hold whenever $\xi$ generates a simple curve (which is known to be a broader class than $|\xi|_{1/2}<4$).
A connection between $\xi$ and $\tau$ appeared in Tran and Yuan's work on the topological support of SLE$_\kappa$ \cite{Yizheng}, where they proved that if two drivers $\xi$ and $\tilde{\xi}$ are $\delta$-close in sup norm, then $\tau(x) \leq T$ implies $\tilde{\tau}(x+\delta) \leq T$ \cite[Lemma 5.1]{Yizheng}, where $\tau$ and $\tilde{\tau}$ are the hitting times under the flows generated by $\xi$ and $\tilde{\xi}$, respectively. We give a new simplified proof of this result in Lemma \ref{Lemma:DriverTimes1} and expand on the idea to show continuity properties of $\xi \mapsto \tau$ and $\tau \mapsto \varphi$.
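In the extreme case where the two drivers differ by an exact constant shift, $\tilde{\xi} = \xi + \delta$, this can be checked by hand: the map $z \mapsto h_t(z-\delta) + \delta$ satisfies \eqref{Eq:UpwardsLoewner} with driver $\tilde{\xi}$ and initial condition $z$, so by uniqueness
\begin{align*}
\tilde{h}_t(z) = h_t(z - \delta) + \delta \qquad \text{and hence} \qquad \tilde{\tau}(x + \delta) = \tau(x).
\end{align*}
In particular $\tau(x) \leq T$ forces $\tilde{\tau}(x+\delta) \leq T$, with equality in this degenerate case.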
Another recent work on SLE$_\kappa$ \cite{Vlad} studied the connection between $\xi$ and $\varphi$ in the case that $\xi$ is a Brownian motion. In it, the authors showed that the conformal welding $x \mapsto \varphi_\kappa(\omega, x)$ associated to SLE$_\kappa$ has a modification which is a.s. jointly continuous in $\kappa$ and $x$ for $(\kappa, x) \in [0,4]\times(-\infty,0]$. One component of their proof was to show that, a.s., for all $0 \leq \kappa \leq 4$ simultaneously, the hitting times $x\mapsto \tau_\kappa(\omega, x)$ are continuous and strictly increasing on either side of $0=\xi_\kappa(0)$ \cite[Prop. 4.1$(a)$]{Vlad}. This actually served as inspiration for the present study, as we wondered what could be said \emph{always} using deterministic Loewner theory, and not just with probability one. Our Lemma \ref{Lemma:tauContinuous} shows that $\tau$ always has these properties when $\xi$ generates a simple curve.
Furthermore, note that the closeness of $\varphi_\kappa(\omega,x)$ and $\varphi_{\kappa'}(\omega,x)$ is a type of continuity statement about the driver-to-welding map $\xi \mapsto \varphi$, since when $\kappa'$ is sufficiently close to $\kappa$, the Brownian drivers $\xi_{\kappa'}(\omega,t) = \sqrt{\kappa'}B(\omega,t)$ and $\xi_{\kappa}(\omega,t) = \sqrt{\kappa}B(\omega,t)$ are close on a finite interval $[0,T]$ (note $\omega$ is fixed). We generalize this joint continuity to hold for any drivers producing simple curves in Lemma \ref{Lemma:DriverToWeldingJoint}. The authors also show \cite[Prop. 4.1$(b)$]{Vlad} that $\kappa \mapsto \tau_\kappa(\omega, \cdot)$ is continuous as a map from the reals to the space of continuous functions, which is likewise a type of continuity statement about $\xi \mapsto \tau$. We extend this to all $\xi$ generating simple curves in Theorem \ref{Cor:TimesPointwise}.
This latter type of continuity question, for maps of functions to functions, is where our main interest lies, and this is what we study for $\xi \mapsto \varphi$, $\xi \mapsto \tau$ and $\tau \mapsto \varphi$.
\subsection{Main results}
We equip spaces of continuous functions with the uniform norm, and restrict to the class of $\xi, \tau$, and $\varphi$ corresponding to simple curves $\gamma$. This is natural to obtain continuous hitting times, as well as conformal weldings in the classical sense of the term. We obtain the following. Note that we state some of these informally; in each case see the referenced result for the precise statement, as well as Remark \ref{Remark:DomainTechnicality} below.
\begin{enumerate}[$(i)$]
\item\label{Results:DriverToTimes} $\xi \mapsto \tau$ is continuous (Theorem \ref{Cor:TimesPointwise}) but not uniformly so (Lemma \ref{Lemma:DriverToTimesNotUniform}): if $\xi_n$ and $\xi$ generate simple curves and $\xi_n \xrightarrow{u} \xi$, then $\tau_n \xrightarrow{u} \tau$. However, we can find $\xi_n, \tilde{\xi}_n$ with $\|\xi_n-\tilde{\xi}_n\|_\infty \rightarrow 0$ but where $\|\tau_n - \tilde{\tau}_n \|_\infty > \epsilon$ for some fixed $\epsilon > 0$. In addition, $(x,\xi) \mapsto \tau(x;\xi)$ is pointwise jointly continuous (Lemma \ref{Lemma:DriverToTimesJoint}).
\item\label{Results:Lipschitz} $\xi \mapsto \tau^{-1}$ is Lipschitz continuous, with optimal Lipschitz constant 1 (Theorem \ref{Lemma:InverseTauLip}).
\item $\tau \mapsto \varphi$ is continuous (Lemma \ref{Lemma:TimesToWeldingConvergence}).
\item\label{Results:DriverToWelding} $\xi \mapsto \varphi$ is continuous (Theorem \ref{Thm:DriverToWeldingConvergence}), but not uniformly so (Lemma \ref{Lemma:DriverToWeldingNotUniform}). In addition, $(x, \xi) \mapsto \varphi(x;\xi)$ is pointwise jointly continuous (Lemma \ref{Lemma:DriverToWeldingJoint}).
\item Neither $\varphi \mapsto \tau$ nor $\varphi \mapsto \xi$ is continuous (Theorem \ref{Thm:WeldingToOthersNotContinuous}).
\item\label{Results:TimesToDriver} $\tau \mapsto \xi$ is well defined and continuous at $\tau_{\tb{C}}$, the hitting time function of the constant driver $\tb{C}(t) \equiv C \in \mathbb{R}$ (Theorem \ref{Thm:TimesToDriverContinuousZero}). That is, if a driver $\xi$ generates hitting times that are the same as $\tau_{\tb{C}}$, then $\xi = \tb{C}$. Furthermore, if $\xi_n$ are drivers corresponding to simple curves with hitting times $\tau_n$ satisfying $\tau_n \xrightarrow{u} \tau_{\tb{C}}$, then $\xi_n \xrightarrow{u} \tb{C}$.
\end{enumerate}
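The constant drivers in $(\ref{Results:TimesToDriver})$ also lend themselves to a quick numerical illustration of the maps $\xi \mapsto \tau \mapsto \varphi$. The following Python sketch (ours, purely illustrative; the step size and stopping tolerance are ad hoc choices) integrates the upward flow \eqref{Eq:UpwardsLoewner} by Euler's method and recovers the hitting times $\tau_{\tb{C}}(x) = (x-C)^2/4$ of the constant driver, here with $C = 0$.

```python
def hitting_time(x, xi=lambda t: 0.0, dt=1e-5, T=10.0):
    """Crude Euler scheme for the upward flow dh/dt = -2/(h - xi(t)),
    h(0) = x; returns the first time h comes within a mesh-size
    tolerance of the driver (a stand-in for the liminf defining tau)."""
    h, t = float(x), 0.0
    while t < T:
        gap = h - xi(t)
        if gap * gap <= 8.0 * dt:  # next Euler step would overshoot the driver
            return t
        h -= 2.0 * dt / gap
        t += dt
    return float("inf")

# For the constant driver xi = 0 one has tau(x) = x^2/4, and the welding
# pairs points with equal hitting times, so phi(x) = -x by symmetry.
for x in (0.5, 1.0, 2.0):
    print(x, hitting_time(x), x * x / 4.0)
```

Pairing the two points with a common hitting time reproduces the welding; here the symmetry is already visible from $\pm x$ yielding identical approximate times.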
\subsubsection{Discussion}
One of the contributions of these results is that no regularity is assumed on the drivers $\xi_n,\xi$ other than that they belong to $S([0,T])$, the class of drivers generating simple curves on $[0,T]$.\footnote{There is currently no known analytic characterization of drivers $\xi \in S$, and this remains an important open problem. The literature on this question, in addition to the above-mentioned work of Lind \cite{Lind4} building off \cite{Marshallrohde}, appears to consist of just \cite{LMR},\cite{Schleissingerthesis} and \cite{Zins}.} It is well known, however, that the inverse Loewner transform $\xi \mapsto \gamma$ acting on $S$ is not continuous: there exist $\xi_n, \xi \in S([0,T])$ such that $\|\xi_n - \xi\|_{\infty[0,T]} \rightarrow 0$ but where the corresponding curves $\gamma_n$ do not even have subsequential limits in their half-plane capacity parametrizations, let alone uniformly converge \cite[Ex. 4.49]{Lawler}. Our results in $(\ref{Results:DriverToTimes})$ and $(\ref{Results:DriverToWelding})$ say that the uniform topologies on $\tau$ and $\varphi$ are oblivious to this pathological behavior in the $\gamma_n$: the $\tau_n$ and $\varphi_n$ generated by $\xi_n$ still converge to the $\tau$ and $\varphi$ generated by $\xi$. Furthermore, by $(\ref{Results:Lipschitz})$ one even has quantitative convergence of the points $x_n < 0 <y_n$ welded at a given time $t$ by $\xi_n$ to the points $x< 0 <y$ welded at the same time by $\xi$.
Combined with the known results on $\gamma \mapsto \xi$ (see footnote \ref{Footnote:LoewnerTrans} above), our results show that moving from left to right in \eqref{Intro:Maps} is generally moving from stronger to weaker forms of convergence. More work is needed to understand the precise nature of the middle arrow. While our results on its inverse in part $(\ref{Results:TimesToDriver})$ above are very preliminary, as they only cover constant drivers, we believe the questions behind them are natural: given $\xi \in S([0,T])$, does $\tau(\cdot; \xi)$ determine $\xi$? And if so, is $\xi \mapsto \tau$ a homeomorphism onto its image? We find it interesting that, even in the simplest case that we consider, proofs of the existence and continuity of the inverse are not entirely trivial; see \S\ref{Sec:TimesToDriver}. We attempted to build a driver with large oscillations that we suspected could be a counterexample to continuity of $\tau \mapsto \xi$ for more general $\tau$, but numerical simulations showed our construction still converged for relatively smooth data. We describe this construction and the simulations in \S\ref{Sec:PositiveEvidence} as positive evidence for a broader result.
We also highlight two other contributions. In \S\ref{Sec:Zipper} we show that our results for the $\xi \mapsto \varphi$ map in $(\ref{Results:DriverToWelding})$ imply that, for any finite collections of pairs $\{(x_j,y_j)\}_{j=1}^N$ with
\begin{align*}
x_N < x_{N-1} < \cdots < x_1 < y_1 < \cdots < y_N,
\end{align*}
there exists a curve of minimal Loewner energy which welds each $x_j$ to $y_j$. We use this in Theorem \ref{Cor:Zipper} to show that we can well-approximate any given finite-energy curve $\gamma([0,T])$ by an energy minimizer on a sufficiently-fine discretization of its welding. This is related, although not identical, to the welding zipper algorithm of Donald Marshall.
Finally, in the appendix we list two integral formulas relating all our main actors $\xi, \tau$ and $\varphi$ that appear to have thus far escaped notice in the literature.
\begin{remark}\label{Remark:DomainTechnicality}
Some of the statements in the list above, as alluded to, are imprecise as there is a technicality regarding domains to deal with. For instance, for the $\xi \mapsto \tau$ result in $(\ref{Results:DriverToTimes})$, we assume uniform convergence $\|\xi_n - \xi\|_{\infty[0,T]} \rightarrow 0$ on a fixed time interval $[0,T]$. However, that does not imply that all the hitting time functions $\tau_n, \tau$ share a common domain (see Example \ref{Eg:NoEndpoints}), and so we prove that if the domains of $\tau_n$ and $\tau$ are $[a_n,b_n]$ and $[a,b]$, respectively, then $a_n \rightarrow a$, $b_n \rightarrow b$ and $\tau_n \xrightarrow{u} \tau$ on any compact subinterval $[c,d] \subset (a,b)$. (In addition, our result in ($\ref{Results:Lipschitz}$) gives sharp quantitative control on $|a_n-a|$ and $|b_n-b|$.)
In \emph{this sense} we mean $\xi \mapsto \tau$ is continuous, and many of the other results above are similar. We are thus often using ``continuity'' in a somewhat informal sense; in particular we do not attempt to equip the space $\tilde{C}$ of continuous functions with different domains with a topology $\tilde{\mathcal{C}}$ which would make, for instance, $\xi \mapsto \tau$ continuous from $(C([0,T]), \|\cdot\|_{\infty[0,T]})$ to $(\tilde{C},\tilde{\mathcal{C}})$, although this may be possible.
\end{remark}
\subsection{Methods}
For results concerning the $\xi \mapsto \tau$, $\tau \mapsto \varphi$ and $\xi \mapsto \varphi$ maps, we primarily rely on Lemma \ref{Lemma:tauContinuous} combined with the surprising power of Lemma \ref{Lemma:DriverTimes1} and the formula \eqref{Eq:LoewnerShift}. Proofs that maps are not continuous or not uniformly continuous are based on explicitly-constructed examples. Some new machinery was needed to say anything about the $\tau \mapsto \xi$ direction, and our main tool here is Lemma \ref{Lemma:FasterTimes}, which says that the farther a driver welds two points $x_0 < y_0$ from their initial average $(x_0+y_0)/2$, the lower the hitting time.
\subsection{Organization}
In \S\ref{Sec:Prelim} we establish notation and review background of deterministic Loewner chains. We prove our main lemmas in \S\ref{Sec:Lemmas}, the results on $\xi \mapsto \tau$ and $\xi \mapsto \tau^{-1}$ in \S\ref{Sec:DriverToTimes} and \S\ref{Sec:DriverToTimesInverse}, respectively, and give results and simulations regarding $\tau \mapsto \xi$ in \S\ref{Sec:TimesToDriver}. In \S\ref{Sec:BlankToWelding} we cover continuity of $\tau \mapsto \varphi$ and $\xi \mapsto \varphi$, and show by example in \S\ref{Sec:WeldingCounterexample} that $\varphi \mapsto \tau$ and $\varphi \mapsto \xi$ are not continuous. Our application of Theorem \ref{Thm:DriverToWeldingConvergence} to minimal-energy curves falls in \S\ref{Sec:Zipper}, and we conclude with open problems in \S\ref{Sec:Problems}, and then the appendix.
\bigskip
\noindent \textbf{Acknowledgements}
The authors are thankful to Yizheng Yuan for pointing our attention to \cite[Lemma 5.1]{Yizheng} and suggesting how it could yield welding convergence, and for looking at a draft of the paper. We also thank Don Marshall and Steffen Rohde for looking at a very early draft, and we are grateful to have learned the trick \eqref{Eq:LoewnerShift} from Steffen Rohde (perhaps it goes back to Oded Schramm), and to have seen it applied in a similar manner, albeit rougher, to what we do in Theorem \ref{Lemma:InverseTauLip}. This research was partially conducted while the authors were at Mathematical Sciences Research Institute during the spring 2022 semester and is thus partially supported by the US National Science Foundation under Grant No. DMS-1928930.
\section{Notation and preliminaries}\label{Sec:Prelim}
\subsection{Basics of Loewner theory}
We sketch some notation and results concerning the Loewner equation; for more background see, for instance, \cite{Kemp} or \cite{Lawler}. We frame the theory largely in terms of the reverse/upwards maps $h_t$, as they induce the hitting times $\tau$.
Indeed, $h_t:\mathbb{H} \rightarrow \mathbb{H} \backslash \gamma_t([0,t])$ in \eqref{Eq:UpwardsLoewner} is the unique conformal map which fixes $\infty$ and satisfies
\begin{align}\label{Eq:UpwardsMapsInfty1}
h_t(z) = z + O(1/z), \qquad z \rightarrow \infty.
\end{align}
In other words, scaling and translation occur only locally around $\gamma_t$, not at $\infty$. We assume $\gamma_t$ is parametrized by \emph{half-plane capacity}, in which case the above expansion is actually
\begin{align}\label{Eq:h_tInfty}
h_t(z) = z - \frac{2t}{z} + O(1/z^2), \qquad z \rightarrow \infty,
\end{align}
and we say that the half-plane capacity $\hcap(\gamma_t)$ of $\gamma_t$ is $2t$ (so note time $t$ corresponds to $\hcap$ $2t$). The extension of $h_t(z)$ to the real line maps $\xi(0)$ to the tip $\gamma_t(t)$ of the curve $\gamma_t$ generated by $\xi$ on $[0,t]$, while the base of $\gamma_t$ is at $\xi(t)$.
Note that we can define $\hcap(K)$ whenever $K\subset \mathbb{H}$ is a \emph{compact $\mathbb{H}$-hull}, which is to say, $K$ is bounded, relatively closed in $\mathbb{H}$, and $\mathbb{H} \backslash K$ is simply connected. In this case, there is again a unique conformal map $h_K:\mathbb{H} \rightarrow \mathbb{H}\backslash K$ satisfying \eqref{Eq:UpwardsMapsInfty1} at $\infty$, and with expansion
\begin{align*}
h_K(z) = z - \frac{a_1}{z} + O(1/z^2), \qquad z \rightarrow \infty.
\end{align*}
We define $\hcap(K) := a_1$, which is positive when $K \neq \emptyset$. It follows from the definition that $\hcap$ satisfies $\hcap(K+x) = \hcap(K)$ and $\hcap(rK) = r^2\hcap(K)$ for $r \geq 0$. Also, $K_1 \subset K_2$ implies $\hcap(K_1) \leq \hcap(K_2)$. Considering $K= \overline{B_1(0)} \cap \mathbb{H}$ as a concrete example, we find that $h_K^{-1}(z) = z+1/z$, and thus
\begin{align}\label{Eq:Diskhcap}
\hcap\big(\overline{B_1(0)} \cap \mathbb{H} \big) = 1.
\end{align}
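To double-check \eqref{Eq:Diskhcap}, one can invert $h_K^{-1}(z) = z + 1/z$ explicitly and read off the coefficient $a_1$:

```latex
% Solving w + 1/w = z for w = h_K(z) in \mathbb{H} \backslash K:
\begin{align*}
h_K(z) = \frac{z + \sqrt{z^2-4}}{2} = z - \frac{1}{z} + O(1/z^3), \qquad z \rightarrow \infty,
\end{align*}
% so a_1 = \hcap(\overline{B_1(0)} \cap \mathbb{H}) = 1, as claimed.
```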
There is also a stochastic definition, which allows us to drop the requirement that $\mathbb{H}\backslash K$ be simply connected. Indeed, one can show
\begin{align}\label{Def:hCapProbability}
\hcap(K) = \lim_{y \rightarrow \infty}y \mathbb{E}^{iy}(\text{Im } B_\tau),
\end{align}
where $B_t$ is two-dimensional Brownian motion started from $iy$, and $\tau$ is the exit time of $\mathbb{H} \backslash K$. See \cite[\S3.4]{Lawler} for these and further properties.
We will interchangeably call the dynamics given by \eqref{Eq:UpwardsLoewner} the ``upwards Loewner flow'' and ``reverse Loewner flow.'' The former is not entirely standard, but is natural from the point of view that the curves $\gamma_t$ grow upwards into $\mathbb{H}$ from $\xi$'s position in $\mathbb{R}$. The \emph{downwards} or \emph{forwards} map $g_t$ is the unique map from $\mathbb{H}\backslash \gamma([0,t])$ to $\mathbb{H}$ which fixes $\infty$ and is $z + O(1/z)$ near infinity, and in this direction the expansion corresponding to \eqref{Eq:h_tInfty} is
\begin{align*}
g_t(z)= z + \frac{2t}{z} + O(1/z^2), \qquad z \rightarrow \infty,
\end{align*}
and the Loewner equation for the $g_t$ is
\begin{align}\label{Eq:Loewner}
\dot{g}_t(z) = \frac{2}{g_t(z)-\lambda(t)}, \qquad g_0(z) = z.
\end{align}
The relation between the $h_t$ and $g_t$, $\xi$ and $\lambda$, and $\gamma_t$ and $\gamma$ is the following. Let $\gamma:[0,T] \rightarrow \mathbb{H} \cup \{x\}$ be a fixed, simple curve with $\gamma(0)=x$ (which is what our notation for the range means). The downwards/forward driving function $\lambda$ is just the reversal of $\xi$, $\lambda(t) = \xi(T-t)$. (If we wish to normalize by starting at zero, we may take $\lambda(t) = \xi(T-t)-\xi(T)$, or equivalently, $\xi(t) = \lambda(T-t) - \lambda(T)$.) We always write $\lambda$ for the downwards driving function and $\xi$ for the upwards. The curve $\gamma_t$ generated by $\xi$ on $[0,t]$ is the conformal image $g_{T-t}\big( \gamma([T-t,T]) \big)$ of the last $t$ units $\gamma([T-t,T])$ of $\gamma$ under $g_{T-t}$. That is,
\begin{align*}
h_t = g_{T-t} \circ g_T^{-1}, \qquad 0 \leq t \leq T.
\end{align*}
To see this, note that it holds for $t=0$, and observe $\partial_t (g_{T-t}^{-1} \circ h_t ) =0$ by \eqref{Eq:UpwardsLoewner} and \eqref{Eq:Loewner}. So for the $g_t$ the underlying $\gamma$ is a fixed curve which is growing at its tip, whereas in the case of the $h_t$ maps, $\gamma_t$ grows at its base and the entire curve is constantly being conformally deformed.
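As a sanity check of this composition identity (our own illustration, not from the paper: we take the zero driver, for which both directions of the flow are explicit, and a sample point with $\operatorname{Re} z > 0$ so that principal square roots stay on the correct branch), one can verify $h_t = g_{T-t} \circ g_T^{-1}$ numerically:

```python
# Check h_t = g_{T-t} o g_T^{-1} for the zero driver, where the maps are explicit:
# g_t(z) = sqrt(z^2 + 4t) (downwards) and h_t(z) = sqrt(z^2 - 4t) (upwards).
# For a constant driver the upwards map coincides with g_t^{-1}.
import cmath

def g(t, z):
    # downwards/forward map for the zero driver
    return cmath.sqrt(z * z + 4 * t)

def g_inv(t, z):
    # inverse of the downwards map
    return cmath.sqrt(z * z - 4 * t)

def h(t, z):
    # upwards/reverse map for the zero driver
    return cmath.sqrt(z * z - 4 * t)

T, t = 1.0, 0.3
z = 1.0 + 2.0j  # sample point in the upper half-plane with Re z > 0
lhs = h(t, z)
rhs = g(T - t, g_inv(T, z))  # g_{T-t} o g_T^{-1}
assert abs(lhs - rhs) < 1e-12
```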
Recall driving functions $\xi, \lambda$ are always continuous and thus members of $C([0,T])$. We write $C_0$ for the set of continuous functions starting at zero, and $S, S_0$ for those $\xi \in C,C_0$ that generate a \emph{simple curve} $\gamma^\xi$ under the upwards Loewner flow (where we include the time domain $[0,T]$ as needed). By this we mean that the final curve $\gamma^\xi$ generated on time $[0,T]$ by $\xi$ is non-self-intersecting and also does not touch $\mathbb{R}$ other than at its base: $\gamma^\xi \cap \mathbb{R} = \{\xi(T)\}$. It is not hard to see this is equivalent to saying that the curve generated by $\xi$ on any time interval $[t_1,t_2] \subset [0,T]$ has these same two properties.
Recall that well-known elements of $S$ include linear drivers \cite{KNK}, drivers with one-sided H\"{o}lder-1/2 norm less than four \cite{Lind4,LMR,Zins} and, a.s., scaled Brownian motion $\sqrt{\kappa}B(t)$ when $0 \leq \kappa \leq 4$ \cite{BasicpropSLE}.
Points $x_0< \xi(0) <y_0$ under \eqref{Eq:UpwardsLoewner} flow along the real line towards $\xi$, and we interchangeably write
\begin{align*}
x(t) = x(t;\xi) = h_t(x_0;\xi) = h_t(x_0)
\end{align*}
for the image of $x_0$ after $t$ units of time under driver $\xi$, and similarly for $y(t)$.
\subsection{Hitting times and weldings}
Let $\xi$ be continuous. For $x_0 \in \mathbb{R} \backslash \{\xi(0)\}$, the dynamics in \eqref{Eq:UpwardsLoewner} becomes
\begin{align}\label{Eq:UpwardsLoewnerx}
\dot{x}(t) = \frac{-2}{x(t) - \xi(t)}, \qquad x(0) = x_0.
\end{align}
As noted in the introduction, the \emph{hitting time} $\tau(x_0)$ of $x_0$ is then
\begin{align}\label{Def:HittingTime1+}
\tau(x) = \inf\{\,t \geq 0 \; : \; \liminf_{s \nearrow t} |x(s) - \xi(s)| =0 \,\},
\end{align}
and so the flow $t \mapsto x(t;\xi)$ is well defined on $[0,\tau(x_0))$. Note that if there are no such times $t$ for some $x$, then $\tau(x) = +\infty$, and we say that $x$ is not welded by $\xi$. We also remark that \eqref{Def:HittingTime1+} says nothing about whether or not $\xi$ generates a simple curve; $\tau(x)$ is well defined in either case.
Suppose $\tau(x_0) <\infty$. By \eqref{Eq:UpwardsLoewnerx}, $x_0$ flows monotonically towards $\xi$, and its position is bounded by the maximum of $\xi$ on $[0,\tau(x_0)]$. In particular, $\lim_{t\nearrow \tau(x_0) }x(t)$ exists, and by \eqref{Def:HittingTime1+} must be $\xi(\tau(x_0))$. Thus we can extend $t \mapsto x(t)$ from $[0,\tau(x_0))$ to $[0,\tau(x_0)]$ by setting $x(\tau(x_0)) = \xi(\tau(x_0))$, and we have
\begin{align}
\tau(x) &= \inf\{\,t \geq 0 \; : \; \lim_{s \nearrow t} |x(s) - \xi(s)| =0 \,\} \notag\\
&= \inf\{\,t \geq 0 \; : \; x(t)=\xi(t) \,\}.\label{Def:HittingTime3}
\end{align}
In the case of the zero driver $\tb{0}(t) \equiv 0$, for example, it is not hard to see that the map satisfying \eqref{Eq:UpwardsLoewner} is $h_t(z) = \sqrt{z^2 - 4t}$, and so the points mapping to the base $0$ of the curve under the extension of $h_t$ to $\mathbb{R}$ are $\pm 2\sqrt{t}$, yielding the hitting times
\begin{align}\label{Eq:ZeroHittingTime}
\tau(x; \tb{0})=\frac{x^2}{4}, \qquad x \in \mathbb{R}.
\end{align}
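To illustrate \eqref{Eq:ZeroHittingTime} numerically (a sketch with a sample point of our choosing, $x_0 = 2$): forward Euler on $\dot{x} = -2/x$ tracks the explicit solution $x(t) = \sqrt{x_0^2 - 4t}$, and since $dt/dx = -x/2$ along the flow, integrating $x/2$ in the space variable recovers $\tau(x_0) = x_0^2/4$ without approaching the singularity at the hitting time:

```python
# Illustrative check of tau(x; 0) = x^2/4 for the zero driver xi = 0.
# Forward Euler on dx/dt = -2/x, away from the hitting time:
x0 = 2.0
x, t, dt = x0, 0.0, 1e-5
while t < 0.5 - dt / 2:
    x += dt * (-2.0 / x)
    t += dt
assert abs(x - (x0 ** 2 - 4 * 0.5) ** 0.5) < 1e-3  # x(1/2) = sqrt(2)

# The hitting time itself: dt/dx = -x/2, so tau = integral of x/2 over [0, x0].
n = 10 ** 5
dx = x0 / n
t_hit = sum((i + 0.5) * dx * 0.5 * dx for i in range(n))  # midpoint rule
assert abs(t_hit - x0 ** 2 / 4) < 1e-9  # tau(2; 0) = 1
```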
We show in Lemma \ref{Lemma:tauContinuous} that, for $\xi \in S$, $x \mapsto \tau(x)$ is strictly increasing as one moves away from $\xi(0)$. We can thus think of $\tau$ as consisting of two invertible functions, the left and the right hitting times, which we denote by
\begin{align}\label{Eq:HittingTimesTwoFunctions}
\tau_- := \tau|_{x\leq \xi(0)} \qquad \text{ and } \qquad \tau_+ := \tau|_{x \geq \xi(0)}.
\end{align}
As mentioned in the introduction, the \emph{conformal welding} associated to $\xi \in S$ is the homeomorphism of intervals on either side of $\xi(0)$ which satisfies $\tau(x) = \tau(\varphi(x))$ for all $x$ with $\tau(x)<\infty$. We take the convention that $\varphi$ maps from the left of $\xi(0)$ to the right, and so more precisely, for $x \leq \xi(0)$,
\begin{align*}
\tau_-(x) = \tau_+(\varphi(x)) \qquad \text{ or } \qquad \tau_+^{-1} \circ \tau_-(x) = \varphi(x).
\end{align*}
The welding can also be defined in terms of the maps $h_t$ via $h_t(\varphi(x)) = h_t(x)$ for $t \geq \tau(x)$.
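As a quick illustration, for the zero driver the welding is explicit: by \eqref{Eq:ZeroHittingTime}, $\tau_\pm^{-1}(t) = \pm 2\sqrt{t}$, so

```latex
\begin{align*}
\varphi(x) = \tau_+^{-1} \circ \tau_-(x) = 2\sqrt{\frac{x^2}{4}} = -x, \qquad x \leq 0,
\end{align*}
```

consistent with the fact that $h_t(\pm 2\sqrt{t}) = \sqrt{4t-4t} = 0$: the points $-2\sqrt{t}$ and $2\sqrt{t}$ are welded at time $t$.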
\subsection{Other notation}
For a curve welding $\varphi:[-a,0] \rightarrow [0,b]$, when one (or both) of $-a,b$ is infinite, we mean that $\varphi$ is defined on $(-a,0]$ and $\lim_{x \rightarrow (-a)^+}\varphi(x) = b$.
For a continuous function $f$ on an interval $[a,b]$, we write $\|f\|_{\infty[a,b]} := \max_{x \in [a,b]}|f(x)|$. Of course, $f_n \xrightarrow{u} f$ on $[a,b]$ means $\|f_n-f\|_{\infty[a,b]} \rightarrow 0$.
We write $a \wedge b := \min\{a,b\}$ and $a \vee b := \max\{a,b\}$, and $A(x) \asymp B(x)$ means there exists a constant $C\geq 1$ such that
\begin{align*}
\frac{1}{C}B(x) \leq A(x) \leq CB(x)
\end{align*}
for all values of $x$.
\subsection{Elementary lemmas}
We will use the following two easy lemmas. The first is similar to Dini's theorem and specifies a situation where we can upgrade from pointwise to uniform convergence.
\begin{lemma}\label{Lemma:OurDini}
Let $[a,b] \subset \mathbb{R}$ be a closed interval and $f_n:[a,b] \rightarrow \mathbb{R}$ a sequence of functions such that each $f_n$ is non-increasing or non-decreasing. Suppose $f:[a,b] \rightarrow \mathbb{R}$ is continuous and $f_n \rightarrow f$ pointwise on $[a,b]$. Then we have $\|f_n-f \|_{\infty[a,b]} \rightarrow 0$.
\end{lemma}
\noindent Note that we do not assume the $f_n$ are continuous or that the monotonicity across the sequence is the same, i.e. some $f_n$ may be increasing and some decreasing.
\begin{proof}
Let $\epsilon>0$. Choose $\delta>0$ such that $|f(x)-f(y)|<\epsilon/2$ whenever $a \leq x,y \leq b$ satisfy $|x-y|<\delta$, and let $a = x_0 < x_1< \cdots <x_m = b$ be a partition of $[a,b]$ of mesh size less than $\delta$. Let $N$ be large enough so that $|f_n(x_j)-f(x_j)|<\epsilon/2$ for all $j \in \{0,\ldots, m\}$ whenever $n \geq N$. Choosing such an $n$, suppose $f_n$ is non-decreasing. Then for any $x_j < x < x_{j+1}$,
\begin{align*}
f_n(x) - f(x) \leq f_n(x_{j+1}) - f(x_{j+1}) +f(x_{j+1}) - f(x) < \epsilon,
\end{align*}
and similarly
\begin{align*}
f(x)-f_n(x) \leq f(x) - f(x_j) + f(x_j) - f_n(x_j) < \epsilon.
\end{align*}
The argument is similar when $f_n$ is non-increasing.
\end{proof}
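To illustrate the lemma (our own toy example): the step functions $f_n(x) = \lfloor nx \rfloor / n$ are non-decreasing and discontinuous, yet converge pointwise to the continuous $f(x) = x$ on $[0,1]$, and the convergence is indeed uniform, with sup error at most $1/n$:

```python
# Toy instance of the Dini-type lemma: monotone (discontinuous) step functions
# converging pointwise to a continuous limit converge uniformly.
import math

def f_n(n, x):
    return math.floor(n * x) / n  # non-decreasing step function

grid = [i / 10000 for i in range(10001)]
for n in (10, 100, 1000):
    sup_diff = max(abs(f_n(n, x) - x) for x in grid)
    assert sup_diff <= 1.0 / n + 1e-12  # uniform error shrinks like 1/n
```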
\begin{lemma}\label{Lemma:InversesConverge}
Let $f_n, f\colon \mathbb{R} \to \mathbb{R}$ be continuous and strictly increasing with $f_n \to f$ point-wise. Then $f_n^{-1} \to f^{-1}$ point-wise.
\end{lemma}
\begin{proof}
Let $y \in \mathbb{R}$, $x = f^{-1}(y)$, and $\varepsilon>0$. By the monotonicity of $f$ we have $f(x-\varepsilon) < f(x) < f(x+\varepsilon)$. Let $\delta := |f(x)-f(x-\varepsilon)| \wedge |f(x)-f(x+\varepsilon)|$ and let $n$ be large enough such that $|f(x\pm\varepsilon)-f_n(x\pm\varepsilon)| < \delta$. Then $f_n(x-\varepsilon) < f(x) < f_n(x+\varepsilon)$, and consequently $f_n^{-1}(y) \in {(x-\varepsilon,x+\varepsilon)}$ by monotonicity of $f_n$.
\end{proof}
\section{Continuity properties of $\xi \mapsto \tau$ and $\xi \mapsto \tau^{-1}$}\label{Sec:TauContinuity}
\subsection{Lemmas}\label{Sec:Lemmas}
We establish some tools before proving our continuity results. Our first lemma gives basic properties of the hitting times $x \mapsto \tau(x;\xi)$ when $\xi \in S$. This is a generalization of \cite[Prop. 4.1$(a)$]{Vlad} and \cite[Lemma 3]{Lind4}; the former, because this always holds, rather than only almost surely, and the latter, because we only assume $\xi \in S([0,T])$, not that $\xi \in \text{H\"{o}l}(1/2)$ with $|\xi|_{1/2}<4$.
\begin{lemma}\label{Lemma:tauContinuous}
If $\xi \in S([0,T])$, $0 < T \leq \infty$, then $x \mapsto \tau(x) := \tau(x;\xi)$ is continuous. It is strictly increasing for $x\geq\xi(0)$ and strictly decreasing for $x \leq \xi(0)$. In particular, for each $0 <t \leq T$ there are exactly two points $x < \xi(0) < y$ such that $\tau(x;\xi) = t = \tau(y;\xi)$.
\end{lemma}
\begin{proof}
Without loss of generality $\xi(0)=0$, and by symmetry it suffices to prove continuity and monotonicity for $y> 0$. If $0 <y_1 < y_2$ and $t < \tau(y_1)$, then \eqref{Eq:UpwardsLoewner} yields
\begin{align}\label{Eq:LoewnerIncrement}
\frac{d}{dt} \big(y_2(t)-y_1(t) \big) = \frac{2(y_2(t)-y_1(t))}{(y_2(t) - \xi(t))(y_1(t)-\xi(t))} > 0.
\end{align}
Thus the points are getting further apart for $t<\tau(y_1)$, showing $y_1(\tau(y_1)) = \xi(\tau(y_1)) < y_2(\tau(y_1))$. Since $\xi$ is continuous, $\tau(y_1) < \tau(y_2)$, and we see $y \mapsto \tau(y)$ is strictly increasing.
In particular, $\tau$ can only have jump discontinuities. To see this actually does not happen, pick $0 <y_1 < y_3$ arbitrarily and let $t_2 \in (\tau(y_1), \tau(y_3))$. We show there exists $y_2$ with $\tau(y_2) = t_2$. Indeed, map up with $h_{t_2}$. Since the curve $\gamma_{t_2}$ generated by $\xi([0,t_2])$ is simple, the prime end $\xi(t_2)$ of $\mathbb{H} \backslash \gamma_{t_2}$ corresponds to exactly two pre-images $x_2 < 0 < y_2$ under the extension of $h_{t_2}$. Since
\begin{align}\label{Eq:x2hit}
h_{t_2}(y_2) = \xi(t_2),
\end{align}
$\tau(y_2) \leq t_2$ by definition of the hitting time.
Suppose that $y_2$'s hitting time is actually earlier, i.e.
\begin{align}\label{Eq:x2Assumption}
\tau(y_2) =: t_2' < t_2,
\end{align}
which means we also have that $h_{t_2'}(y_2) = \xi(t_2')$. Let $\tilde{h}_{t}$ be the map which, for $t \geq t_2'$, solves
\begin{align*}
\partial_t \tilde{h}_t(z) = \frac{-2}{\tilde{h}_t(z) - \xi(t)}, \qquad \tilde{h}_{t_2'}(z) = z,
\end{align*}
i.e. which maps to the complement of the curve segment generated by $\xi$ for times $t \geq t_2'$. Then for $t>t_2'$, $\tilde{h}_t$ sends $\xi(t_2')$ to the tip of the non-trivial curve segment generated on $[t_2',t]$ by $\xi$, which thus has positive imaginary part by the assumption $\xi \in S$ and the fact that its capacity is $2(t-t_2')$ (see \cite[Lemma 4.2]{Kemp} for the latter). Since $h_{t_2} = \tilde{h}_{t_2} \circ h_{t_2'}$, we thus see $\text{Im}\,h_{t_2}(y_2)>0$, which contradicts \eqref{Eq:x2hit}. We conclude that $\tau(y_2)=t_2$ and that there are no jump discontinuities in $\tau$.
The last statement follows from the strict monotonicity and definition of the hitting times \eqref{Def:HittingTime3}.
\end{proof}
We will need the following two observations stemming from the Loewner equation \eqref{Eq:UpwardsLoewner}. First, for fixed $\delta \in \mathbb{R}$, we have the shifting formula
\begin{align}\label{Eq:LoewnerShift}
h_t(x+\delta; \xi+\delta) = h_t(x; \xi)+\delta.
\end{align}
This holds for any $x \in \mathbb{H}$ for all $t$ (since solutions starting in $\mathbb{H}$ last forever) and for $x \in \mathbb{R}$ when $0 \leq t \leq \tau(x)$.
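Indeed, \eqref{Eq:LoewnerShift} is immediate from uniqueness of solutions to \eqref{Eq:UpwardsLoewner}, since both sides solve the same initial value problem:

```latex
\begin{align*}
\partial_t \big( h_t(x;\xi) + \delta \big)
= \frac{-2}{h_t(x;\xi) - \xi(t)}
= \frac{-2}{\big( h_t(x;\xi) + \delta \big) - \big( \xi(t) + \delta \big)},
\end{align*}
```

with common initial value $x + \delta$ at $t = 0$.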
Secondly, given two drivers $\xi_1 \leq \xi_2$, a point $y_0$ with $\xi_2(0) < y_0$, and a time $t \leq \tau(y_0; \xi_2)$, driver monotonicity combined with the Loewner equation yields
\begin{align}\label{Ineq:LoewnerOrdering}
h_t(y_0; \xi_2) \leq h_t(y_0; \xi_1).
\end{align}
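For a concrete instance of \eqref{Ineq:LoewnerOrdering} (an illustration with constant drivers and a sample point of our choosing): for a constant driver $c$, the shift formula gives $h_t(y;c) = c + \sqrt{(y-c)^2 - 4t}$, and the flow under the larger driver indeed stays below:

```python
# Ordering of upwards flows for constant drivers xi_1 = 0 <= xi_2 = 0.5 and y0 = 3.
# By the shift formula, h_t(y; c) = c + sqrt((y - c)^2 - 4t) for a constant driver c,
# valid while t <= tau(y; c) = (y - c)^2 / 4.
def h_const(t, y, c):
    return c + ((y - c) ** 2 - 4 * t) ** 0.5

y0 = 3.0
for t in (0.25, 0.5, 1.0):  # all before the hitting time (3 - 0.5)^2 / 4 = 1.5625
    assert h_const(t, y0, 0.5) <= h_const(t, y0, 0.0)
```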
The following lemma gives the key inequality for our $\xi \mapsto \tau$ arguments. While we borrow the statement from \cite{Yizheng}, we offer a new, succinct proof, and also drop the requirement that the drivers start at the same location.
\begin{lemma}\label{Lemma:DriverTimes1}
\cite[Lemma 5.1]{Yizheng} Suppose $\xi, \tilde{\xi} \in C([0,T])$ with $\|\xi-\tilde\xi\|_{\infty[0,T]} \le \delta$. Fix $y$ with $\tilde{\xi}(0) < y$. If $t < \tau(y;\tilde\xi)$ then $t< \tau(y+\delta; \xi)$. Similarly, for $x < \tilde{\xi}(0)$, if $t< \tau(x; \tilde{\xi})$ then $t < \tau(x-\delta; \xi)$.
\end{lemma}
\noindent Note also that, as in \cite{Yizheng}, we do not assume the drivers generate simple curves.
\begin{proof}
By symmetry, it suffices to prove the statement for $y$ with $\tilde{\xi}(0) < y$. Observing $t < \tau(y;\tilde{\xi})$ is equivalent to $\tilde{\xi}(t) < h_t(y; \tilde{\xi})$, we use \eqref{Eq:LoewnerShift} and \eqref{Ineq:LoewnerOrdering} to conclude
\begin{equation*}
\xi(t) \leq \tilde{\xi}(t) + \delta < h_t(y; \tilde{\xi}) + \delta = h_t(y+\delta; \tilde{\xi}+\delta) \leq h_t(y+\delta; \xi),
\end{equation*}
and hence $t < \tau(y+\delta; \xi)$.
\end{proof}
\begin{corollary}\label{Cor:TimesSandwich}
Suppose $\xi, \tilde{\xi} \in C([0,T])$, $0 < T \leq \infty$, with $\|\xi-\tilde\xi\|_{\infty[0,T]} \le \delta$. Then for any $y > \xi(0)+\delta$,
\begin{align}\label{Ineq:DriverTimesMonotone}
\tau(y-\delta ; \xi) \leq \tau(y; \tilde\xi) \leq \tau(y+\delta ; \xi).
\end{align}
Similarly, for $x < \xi(0)-\delta$, $\tau(x-\delta ; \xi) \geq \tau(x; \tilde\xi) \geq \tau(x+\delta ; \xi).$
\end{corollary}
\noindent As no assumption is made on the finiteness of the hitting times, part of the statement of the corollary is that \eqref{Ineq:DriverTimesMonotone} holds whether each of the $\tau$'s is finite or infinite. Note also that we again do not assume that $\xi, \tilde{\xi} \in S$.
\begin{proof}
By symmetry it suffices to consider $y > \xi(0) + \delta$. If $\tau(y-\delta; \xi) = \infty$, then by using $t = n$ for large $n$ in Lemma \ref{Lemma:DriverTimes1} we find $n<\tau(y;\tilde{\xi})$, and so $\tau(y; \tilde{\xi}) = \infty$ as well. Thus \eqref{Ineq:DriverTimesMonotone} holds when at least two of the members are infinite.
Suppose that $\tau(y; \tilde{\xi}) < \infty$. Since $y>\tilde{\xi}(0)$ by assumption, by using times $\tilde{t}_n = \tau(y;\tilde{\xi})-1/n$ in Lemma \ref{Lemma:DriverTimes1} we obtain the right-most inequality in \eqref{Ineq:DriverTimesMonotone} in the limit. The assumption $\tau(y; \tilde{\xi}) < \infty$ and Lemma \ref{Lemma:DriverTimes1} also yield $\tau(y-\delta; \xi) <\infty$, and then exchanging the roles of $\tilde{\xi}$ and $\xi$ and replacing $y$ with $y-\delta$ in the above argument yields the left inequality as well.
We conclude \eqref{Ineq:DriverTimesMonotone} holds irrespective of whether any of the three hitting times is finite or infinite.
\end{proof}
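A closed-form check of Corollary \ref{Cor:TimesSandwich} (with constant drivers of our choosing, assuming $T$ is large enough that the hitting times below are finite): take $\xi \equiv 0$ and $\tilde{\xi} \equiv \delta$, so $\|\xi - \tilde{\xi}\|_{\infty[0,T]} = \delta$. By \eqref{Eq:ZeroHittingTime} and \eqref{Eq:LoewnerShift}, for $y > \delta$,

```latex
\begin{align*}
\tau(y-\delta;\xi) = \frac{(y-\delta)^2}{4} = \tau(y;\tilde{\xi}) \leq \frac{(y+\delta)^2}{4} = \tau(y+\delta;\xi),
\end{align*}
```

so the left inequality in \eqref{Ineq:DriverTimesMonotone} is attained with equality here.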
\subsection{Continuity of $\xi \mapsto \tau$}\label{Sec:DriverToTimes}
Our above inequalities immediately yield continuity properties of the driver-to-hitting-time map. Here we restrict to $\xi \in S([0,T])$ to avail ourselves of the continuity of $\tau$ from Lemma \ref{Lemma:tauContinuous}.
\begin{theorem}\label{Cor:TimesPointwise}
Let $0<T \leq \infty$ and $\xi, \xi_n \in S([0,T])$ be drivers, with $[a,b]$ the maximal interval on which $\tau(\cdot; \xi)$ is finite, and $[a_n,b_n]$ the maximal interval where $\tau(\cdot; \xi_n)$ is finite, $-\infty \leq a < b \leq \infty$, $-\infty \leq a_n < b_n \leq \infty$. If $\xi_n \xrightarrow{u} \xi$ on $[0,T]$, then $a_n \rightarrow a$, $b_n \rightarrow b$, and $\tau(x;\xi_n) \xrightarrow{u} \tau(x;\xi)$ on any $[c,d] \subset (a,b)$.
\end{theorem}
\noindent With regard to the technicality that we cannot include the endpoints $a$ and $b$ in the convergence, see Example \ref{Eg:NoEndpoints} below. We give sharp quantitative control on how far $a_n, b_n$ can be from $a,b$ in Theorem \ref{Lemma:InverseTauLip} below.
The notation is slightly imprecise when one of the interval endpoints is $\infty$. For instance, if $a =-\infty$ and $b \in \mathbb{R}$, we mean that $\tau(\cdot; \xi)$ is finite on $(-\infty,b)$ and $a_n$ is either identically $-\infty$ or is eventually less than any $-N$ for all large $n$. If $T=\infty$, we mean $\xi,\xi_n$ are defined on $[0,\infty)$.
\begin{proof}
Fix an interval $[c,d] \subset (a,b)$ and $\epsilon >0$, and choose $\delta$ small enough such that the following three conditions hold: $[c-\delta, d+\delta] \subset (a,b)$,
\begin{align}\label{Ineq:TimesNearZero}
\max\{\, \tau(\xi(0)-2\delta;\xi), \tau(\xi(0)+2\delta;\xi) \,\} < \epsilon,
\end{align}
and the modulus of continuity $\omega$ of $\tau(\cdot; \xi)$ on $[c-\delta,d+\delta]$ satisfies
\begin{align}
\omega(\delta; [c,d]) &:= \sup\{\, |\tau(u;\xi) - \tau(v;\xi)| \; : \; c-\delta\leq u,v \leq d+\delta, \, |u-v| \leq \delta \,\}\notag\\
&< \epsilon.\label{Ineq:TimesModulusSmall}
\end{align}
By Corollary \ref{Cor:TimesSandwich}, the first condition yields that $\tau(x;\xi_n) < \infty$ for all $x \in [c,d]$ when $n$ is large enough. For $x \in [c,\xi(0)-\delta) \cup (\xi(0)+\delta, d]$, by \eqref{Ineq:DriverTimesMonotone} we then have
\begin{align*}
|\tau(x;\xi_n) - \tau(x;\xi)| \leq \omega(\delta; [c,d]) < \epsilon.
\end{align*}
If $\xi(0)-\delta \leq x \leq \xi(0) + \delta$, then
\begin{align*}
|\tau(x;\xi_n) - \tau(x;\xi)| \leq \tau(x;\xi_n) + \tau(x;\xi) \leq \tau(x;\xi_n) + \epsilon
\end{align*}
by \eqref{Ineq:TimesNearZero}. Furthermore, using monotonicity and the fact that $\xi_n(0) \in [\xi(0)-\delta, \xi(0)+\delta]$ for all large $n$, we see
\begin{align*}
\tau(x;\xi_n) &\leq \max\{\, \tau(\xi(0)-\delta; \xi_n), \tau(\xi(0)+\delta; \xi_n) \,\}\\ &\leq \max\{\, \tau(\xi(0)-2\delta;\xi), \tau(\xi(0)+2\delta;\xi) \,\} < \epsilon
\end{align*}
by \eqref{Ineq:DriverTimesMonotone} and \eqref{Ineq:TimesNearZero}. We conclude $\tau(\cdot;\xi_n)$ is uniformly close to $\tau(\cdot;\xi)$ on $[c,d]$ for large $n$.
For the welding interval endpoints, by symmetry it suffices to show that $b_n \rightarrow b$. Suppose first that $b<\infty$ and let $\epsilon >0$. Since by the above $\tau(b-\epsilon;\xi_n) \rightarrow \tau(b-\epsilon;\xi) < \infty$, monotonicity of the hitting times yields $b -\epsilon \leq b_n$ for large $n$, and thus $b \leq \liminf_{n \rightarrow \infty} b_n$. On the other hand,
\begin{align*}
+\infty = \tau(b+\epsilon/2; \xi) \leq \tau(b + \epsilon/2 + \| \xi_n -\xi\|_\infty ; \xi_n) \leq \tau(b + \epsilon; \xi_n)
\end{align*}
for all large $n$, where the first inequality is by \eqref{Ineq:DriverTimesMonotone}. Thus $b_n \leq b+\epsilon$ for all large $n$, showing $\limsup_{n \rightarrow \infty} b_n \leq b$.
If $b=+\infty$, then by the first paragraph we have $\tau(x; \xi_n) \rightarrow \tau(x;\xi)$ for any fixed, finite $x>\xi(0)$. This implies $b_n \geq x$ for all large $n$, and thus $b_n \rightarrow \infty$.
\end{proof}
\begin{example}\label{Eg:NoEndpoints}
We cannot conclude that the convergence of hitting times in Theorem \ref{Cor:TimesPointwise} extends all the way to the endpoints of the welding interval $[a,b]$, since, for instance, we may have $b_n <b$ for all $n$. Consider, for example, $\xi \equiv 0$ on $0 \leq t \leq 1/4$, which generates the vertical line segment $\gamma = [0,i]$. For the curves corresponding to the drivers $\xi_n$, set $\alpha_n := \frac{1}{2}-\frac{1}{n}$ and consider straight line segments $\gamma_n$ from $0$ to
\begin{align*}
\gamma_n(\tau_n) = \alpha_n^{\alpha_n-\frac{1}{2}}(1-\alpha_n)^{\frac{1}{2}-\alpha_n} \,e^{i \alpha_n \pi} = \Big(1 + \frac{4}{n^2} + O(n^{-4}) \Big)e^{i \alpha_n \pi}.
\end{align*}
The conformal map $F_n: \mathbb{H} \rightarrow \mathbb{H} \backslash \gamma_n([0,\tau_n])$ taking 0 to the tip $\gamma_n(\tau_n)$ which satisfies $F_n(z) = z + O(1)$ as $z \rightarrow \infty$ is explicitly
\begin{align*}
F_n(z) &= \Big(z- \sqrt{\frac{\alpha_n}{1-\alpha_n}} \Big)^{\alpha_n}\Big(z + \sqrt{\frac{1-\alpha_n}{\alpha_n}} \Big)^{1- \alpha_n}\\
&= z + \frac{4}{\sqrt{n^2-4}} - \frac{1}{2z} + O(z^{-2}), \qquad z \rightarrow \infty
\end{align*}
(see \cite[The Slit Algorithm]{MRZip}, for instance). From this we see that the centered welding $\varphi_n:[-a_n,0] \rightarrow [0,b_n]$ for $\gamma_n$ has endpoints
\begin{align}\label{Eq:EndptsPathology}
-a_n = -\sqrt{\frac{1-\alpha_n}{\alpha_n}}, \qquad b_n = \sqrt{\frac{\alpha_n}{1-\alpha_n}} = \sqrt{\frac{1/2 -1/n}{1/2+1/n}}<1,
\end{align}
that the total time for $\gamma_n$ is $\tau_n \equiv 1/4$ and that the driver $\lambda_n$ for $\gamma_n$ has terminal value $\lambda_n(1/4)= 4/\sqrt{n^2-4}$. As is well known, the driver is $\lambda_n(t) = C_n\sqrt{t}$ (see \cite[Example 4.12]{Lawler}, for instance), so $\lambda_n$ monotonically increases in $t$, and the reversed drivers $\xi_n(t) := \lambda_n(1/4-t) -\lambda_n(1/4) \in C_0([0,1/4])$ converge uniformly to zero on $[0,1/4]$ as $n \rightarrow \infty$. However, as they only weld $[-a_n,b_n]$, we see
\begin{align*}
+\infty \equiv \tau(1; \xi_n) \not\rightarrow \tau(1; \xi) = 1/4.
\end{align*}
\end{example}
While the mapping $\xi \mapsto \tau$ is continuous in the sense of Theorem \ref{Cor:TimesPointwise}, the next lemma says there is no global modulus of continuity.
\begin{lemma}\label{Lemma:DriverToTimesNotUniform}
There exist two sequences of drivers $\xi_n, \tilde{\xi}_n \in S_0([0,T])$ such that $\|\xi_n - \tilde{\xi}_n\|_{\infty[0,T]} \rightarrow 0$ but where for some points $y_n$ and fixed $\epsilon>0$, \begin{align*}
\tau(y_n;\xi_n)<\infty, \quad \tau(y_n;\tilde{\xi}_n)<\infty, \quad \text{ and } \quad |\tau(y_n;\xi_n)- \tau(y_n;\tilde{\xi}_n)| \geq \epsilon
\end{align*}
for all $n$.
\end{lemma}
\begin{proof}
We first construct $\xi_n \in S_0$ and $\tilde{\xi}_n \in S$, and then modify the construction so that all drivers are in $S_0$. Indeed, set
\begin{align*}
\xi(t) = \xi(t,\delta) := \begin{cases}
-t/\delta & 0 \leq t \leq \delta\\
-1 & \delta <t,
\end{cases}
\end{align*}
and $\tilde{\xi}(t) = \tilde{\xi}(t,\delta) := \xi(t) - \delta$, and consider $y_0 = y_0(\delta) := 2\delta$. Note that $\xi, \tilde{\xi} \in S$ since both are piecewise linear. It is easy to see from the Loewner equation \eqref{Eq:UpwardsLoewner} that on $0\leq t \leq \delta$,
\begin{align*}
y_0(t):= y_0(t;\xi) = 2\delta - \frac{t}{\delta},
\end{align*}
and thus $y_0(\delta) = -1 + 2\delta$, yielding
\begin{align}\label{Eq:NotUniformFastTime}
\tau(y_0; \xi) = \delta + \frac{(2\delta)^2}{4} = \delta + \delta^2
\end{align}
by \eqref{Eq:ZeroHittingTime}. On the other hand, for $\tilde{y}_0(t) := y_0(t;\tilde{\xi})$, it is not hard to see that, as $y_0 - \tilde{\xi}(0) = 3\delta$, $\tilde{y}_0(t)-\tilde{\xi}(t)$ is increasing on $[0,\delta]$ for sufficiently small $\delta$, and so we have the coarse estimate
\begin{align}\label{Ineq:MovementOfy0}
\Delta \tilde{y}|_{[0,\delta]} \leq \frac{2}{3\delta}\delta = \frac{2}{3},
\end{align}
yielding
\begin{align}\label{Ineq:TimeBoundedBelow}
\tau(y_0; \tilde{\xi}) \geq \delta + \frac{1}{4}\Big( \frac{1}{3} + 2\delta\Big)^2 \geq \frac{1}{36}.
\end{align}
Thus setting $\xi_n(t) := \xi(t,1/n)$, $\tilde{\xi}_n(t) := \tilde{\xi}(t,1/n)$, and $y_n:=y_0(1/n)$ we have $|\tau(y_n;\xi_n)- \tau(y_n;\tilde{\xi}_n)|$ bounded below while the drivers become arbitrarily close.
We can easily adjust this construction so that $\tilde{\xi}(\cdot,\delta) \in S_0$ by starting $\tilde{\xi}$ at zero and having it move in time $\delta^2$, say (i.e. extremely fast), to $-\delta$, while $\xi(\cdot, \delta)$ remains at zero for $0 \leq t \leq \delta^2$. We then proceed as in the above construction, setting $y_0$ to be the point which has image $y(\delta^2;\xi)=2\delta$ under $\xi$. Then $y(\delta^2;\xi) < y(\delta^2;\tilde{\xi})$ and so we still obtain \eqref{Ineq:TimeBoundedBelow}, while from \eqref{Eq:NotUniformFastTime}, $\tau(y_0;\xi) = \delta + 2\delta^2$.
\end{proof}
We also easily have the following pointwise joint continuity of $(x, \xi) \mapsto \tau(x;\xi)$.
\begin{lemma}\label{Lemma:DriverToTimesJoint}
Let $0<T \leq \infty$ and $\xi \in S([0,T])$ be a driver, and suppose $\tau(\cdot; \xi)$ is finite on the interval $[-a,b]$. If $x \in (-a,b)$ and $\epsilon>0$, there exists $\delta = \delta(\epsilon, x,\xi)$ such that whenever $|\tilde{x}-x| < \delta$ and $\tilde{\xi} \in S([0,T])$ satisfies $\|\tilde{\xi} - \xi\|_{\infty [0,T]} < \delta$,
\begin{align*}
|\tau(\tilde{x};\tilde{\xi}) - \tau(x; \xi)| < \epsilon.
\end{align*}
\end{lemma}
\begin{proof}
Let $\omega(\cdot; \tau)$ be the modulus of continuity of the uniformly continuous function $\tau(\cdot; \xi)$ on $[-a,b]$, and suppose first that $x=\xi(0)$. Choose $\delta$ such that $[x-2\delta, x+2\delta] \subset [-a,b]$ and $\omega(2\delta; \tau) < \epsilon$. By hypothesis, $\tilde{\xi}(0)<x+\delta$, and so by Lemma \ref{Lemma:DriverTimes1} we have
\begin{align*}
\tau(x+\delta; \tilde{\xi}) \leq \tau(x+2\delta; \xi) < \epsilon,
\end{align*}
showing by monotonicity that $\tau(\tilde{x}; \tilde{\xi}) < \epsilon$ for all $\tilde{x} \in [\tilde{\xi}(0), x+\delta]$. A parallel argument holds for $\tilde{x} \in [x-\delta, \tilde{\xi}(0)]$, and so we conclude that
\begin{align*}
\tau(\tilde{x};\tilde{\xi}) = |\tau(\tilde{x};\tilde{\xi})- \tau(x;\xi)|<\epsilon
\end{align*}
whenever $|\tilde{x}-x|<\delta$ and $\| \tilde{\xi} - \xi\| < \delta$.
In the case that $x \neq \xi(0)$, by symmetry we may assume $\xi(0) < x$, and we choose $\delta$ such that $[x-2\delta,x+2\delta] \subset (\xi(0), b]$ and $\omega(\delta;\tau) <\epsilon/2$. Then if $|\tilde{x}-x|<\delta$ and $\|\tilde{\xi}-\xi\|<\delta$, we see from \eqref{Ineq:DriverTimesMonotone} that
\begin{align*}
|\tau(\tilde{x}; \tilde{\xi}) - \tau(\tilde{x}; \xi)| < \frac{\epsilon}{2},
\end{align*}
and thus
\begin{equation*}
|\tau(\tilde{x};\tilde{\xi}) - \tau(x; \xi)| \leq |\tau(\tilde{x};\tilde{\xi}) - \tau(\tilde{x}; \xi)| + |\tau(\tilde{x};\xi) - \tau(x; \xi)| < \epsilon.\qedhere
\end{equation*}
\end{proof}
\subsection{Continuity of $\xi \mapsto \tau^{-1}$}\label{Sec:DriverToTimesInverse}
The hitting times $\tau(\cdot;\xi)$ associated to $\xi \in S([0,T])$ are strictly monotonic on intervals on either side of $\xi(0)$ by Lemma \ref{Lemma:tauContinuous}. So, as in \eqref{Eq:HittingTimesTwoFunctions}, we may consider them as two invertible functions $\tau_{\pm}$. The following theorem says that the maps $\xi \mapsto \tau_+^{-1}$ and $\xi \mapsto \tau_-^{-1}$ are Lipschitz continuous with Lipschitz constant 1, where in each case both the domain and range are equipped with the sup norm on $[0,T]$.
\begin{theorem}\label{Lemma:InverseTauLip}
For $\xi, \tilde{\xi} \in S([0,T])$, $0< T \leq \infty$, we have
\begin{align}\label{Ineq:InverseHittingLipschitz}
\| \tau_\pm^{-1}(\cdot;\xi) - \tau_\pm^{-1}(\cdot;\tilde{\xi}) \|_{\infty[0,T]} \leq \| \xi - \tilde{\xi} \|_{\infty[0,T]}.
\end{align}
Furthermore, $C=1$ is the best-possible Lipschitz constant.
\end{theorem}
\noindent Note that the Lipschitz constant does not depend on $T$. Compare also Lemma \ref{Lemma:DriverToTimesNotUniform}, where we saw very different behavior for $\xi \mapsto \tau$.
\begin{proof}
We show \eqref{Ineq:InverseHittingLipschitz} for $\tau_+^{-1}$; the argument for $\tau_-^{-1}$ is similar. We start with two observations. Setting $\|\xi - \tilde{\xi}\|_{\infty[0,T]} =:\delta$, we note by \eqref{Eq:LoewnerShift} and \eqref{Ineq:LoewnerOrdering} that
\begin{align}\label{Ineq:hMapsDiff}
h_t(y_0;\xi) + \delta = h_t(y_0 + \delta; \xi+\delta) \leq h_t(y_0+\delta; \tilde{\xi})
\end{align}
for all $t \leq T$ and $\xi(0)\leq y_0$ such that $t \leq \tau_+(y_0;\xi)$ (which implies $t\leq\tau_+(y_0+\delta; \tilde{\xi})$ by Lemma \ref{Lemma:DriverTimes1}). We secondly observe that if for some driver $\eta$ we have $\eta(0) < y_1< y_1 + \epsilon < y_2$, then
\begin{align}\label{Ineq:hMapsDiff2}
h_t(y_1; \eta) + \epsilon < h_t(y_2; \eta)
\end{align}
whenever $t \leq \tau_+(y_1; \eta)$, as follows from the expansion of intervals in the upwards flow; recall \eqref{Eq:LoewnerIncrement}.
Now, fix $t_0 \in [0,T]$ and consider $y_0: = \tau_+^{-1}(t_0;\xi)$ and $\tilde{y}_0 := \tau_+^{-1}(t_0;\tilde{\xi})$, and suppose, without loss of generality, that $y_0 \leq \tilde{y}_0$. We proceed by contradiction: if $y_0+\delta + \epsilon < \tilde{y}_0$ for some $\epsilon>0$, then by \eqref{Ineq:hMapsDiff},
\begin{align*}
h_{t_0}(y_0;\xi) +\delta +\epsilon \leq h_{t_0}(y_0+\delta; \tilde{\xi}) + \epsilon,
\end{align*}
while by \eqref{Ineq:hMapsDiff2} we have
\begin{align*}
h_{t_0}(y_0+\delta; \tilde{\xi}) + \epsilon < h_{t_0}(\tilde{y}_0; \tilde{\xi}),
\end{align*}
and thus combining these yields
\begin{align*}
\|\xi - \tilde{\xi}\|_{\infty[0,T]} + \epsilon \leq h_{t_0}(\tilde{y}_0; \tilde{\xi}) - h_{t_0}(y_0; \xi) = \tilde{\xi}(t_0) - \xi(t_0),
\end{align*}
a contradiction. Hence $\tilde{y}_0 - y_0 \leq \delta$, as claimed.
The fact that the Lipschitz constant is optimal is immediately evident from, say, constant drivers $\xi(t) \equiv 0$ and $\tilde{\xi}(t) \equiv \epsilon$. Here
\begin{align*}
\tau_+^{-1}(0;\tilde{\xi}) - \tau_+^{-1}(0;\xi) = \epsilon = \|\xi - \tilde{\xi} \|_{\infty}.
\end{align*}
Note that $C=1$ is still sharp under the more restrictive condition that $\xi, \tilde{\xi} \in S_0([0,T])$. Indeed, consider $\xi_2(t) \equiv 0$ and $\tilde{\xi}_2(t) = t/\delta$ on $[0,\delta]$ for some small $\delta>0$, with $\tilde{\xi}_2(t) \equiv 1$ for $t \geq \delta$. We see from \eqref{Eq:ZeroHittingTime} that $\tau_+^{-1}(\delta;\xi_2) = 2\sqrt{\delta}$, while $\tau_+^{-1}(\delta;\tilde{\xi}_2) \geq \tilde{\xi}_2(\delta)=1$, and thus
\begin{equation*}
|\tau_+^{-1}(\delta;\xi_2)-\tau_+^{-1}(\delta;\tilde{\xi}_2)| \geq 1- 2\sqrt{\delta} = ( 1- 2\sqrt{\delta}\,)\|\xi_2 - \tilde{\xi}_2 \|_{\infty[0,\delta]}.\qedhere
\end{equation*}
\end{proof}
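The sharpness computation can also be checked numerically. The following Python sketch (an illustration only, not part of the proof) assumes the closed form $\tau_+^{-1}(t;c) = c + 2\sqrt{t}$ for a constant driver at height $c$, which follows from \eqref{Eq:ZeroHittingTime} together with the shift property \eqref{Eq:LoewnerShift}.

```python
import math

# For a constant driver c, the hitting times are tau(x; c) = (x - c)^2 / 4,
# so the right-hand inverse is tau_+^{-1}(t; c) = c + 2*sqrt(t)  (assumption
# from Eq:ZeroHittingTime plus the shift property).
def tau_plus_inv(t, c):
    return c + 2.0 * math.sqrt(t)

T, c1, c2 = 5.0, 0.0, 0.3  # two constant drivers at sup distance 0.3
sup_diff = max(abs(tau_plus_inv(k * T / 1000, c1) - tau_plus_inv(k * T / 1000, c2))
               for k in range(1001))

# The sup distance of the inverse hitting times equals |c1 - c2| = 0.3,
# so the Lipschitz bound with constant C = 1 is attained.
assert abs(sup_diff - abs(c1 - c2)) < 1e-12
```

For constant drivers the difference of the inverses is exactly $|c_1-c_2|$ at every time, matching the first sharpness example in the proof.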
\subsection{Preliminary results on continuity properties of $\tau \mapsto \xi$}\label{Sec:TimesToDriver}
In this section, we begin to explore the question of whether the map $\tau \mapsto \xi$ can be defined, and if so, whether it is continuous. We show that for the zero driver $\tb{0}(t) \equiv 0$, if $\tau(\cdot; \xi) = \tau(\cdot; \tb{0})$, then $\xi = \tb{0}$, and thus $\tau \mapsto \xi$ is well defined at the zero driver. Furthermore, if $\tau_n$ are hitting times for $\xi_n \in S_0([0,T])$ with $\tau_n \rightarrow \tau(\cdot; \tb{0})$ uniformly, then $\xi_n \rightarrow \tb{0}$ uniformly, and so $\tau \mapsto \xi$ is also continuous at $\tb{0}$. See Theorem \ref{Thm:TimesToDriverContinuousZero} for precise statements.
Before moving to the proofs, we comment that passing from hitting times to drivers is more subtle than the reverse direction. One indication of this is that the proof of the above facts is, to our surprise, not trivial and seems to require some new machinery; another is the breakdown of the monotonicity properties that one has in the $\xi \mapsto \tau$ direction. Indeed, given two drivers $\xi, \tilde{\xi} \in S_0([0,T])$, recall that if $\xi \leq \tilde{\xi}$, then, similar to \eqref{Ineq:LoewnerOrdering}, the corresponding hitting-time functions satisfy
\begin{align}\label{Ineq:HittingTimesMonotone}
\tau_-\leq \tilde{\tau}_- \qquad \text{and} \qquad \tilde{\tau}_+ \leq \tau_+
\end{align}
on their common domains. However, the converse implication does not hold: assuming the hitting times satisfy \eqref{Ineq:HittingTimesMonotone} does not imply $\xi \leq \tilde{\xi}$, as the following example shows.
\begin{example}
Consider $\xi(t) \equiv 0$ and the driver $\tilde{\xi}$ which begins at 0 but then moves linearly with slope $1/\epsilon$ to reach value $1$ at time $\epsilon$. Then at time 1, say, $\tilde{\xi}$ moves extremely fast to value $-\epsilon$ linearly with slope $-(1+\epsilon)/\epsilon$, then immediately back to value 1 with slope $(1+\epsilon)/\epsilon$. We have $\xi, \tilde{\xi} \in S_0([0,1+2\epsilon])$, and the drivers are not ordered (the inequality $\xi \leq \tilde{\xi}$ fails on the downward spike), but we claim that \eqref{Ineq:HittingTimesMonotone} still holds for $\epsilon$ sufficiently small.
Consider first the left inequality
\begin{align}\label{Ineq:HittingTimesMonotoneLeft}
\tau_- \leq \tilde{\tau}_-.
\end{align}
This holds for all
\begin{align*}
x\in [\tau_-^{-1}(1)\wedge \tilde{\tau}_-^{-1}(1),0] = [\tilde{\tau}_-^{-1}(1),0]
\end{align*}
since $\xi \leq \tilde{\xi}$ on $[0,1]$. Set $x_0 := \tau_-^{-1}(1)$. Since $\tilde{\xi}-\xi = 1$ on $[\epsilon,1]$, the flow $\tilde{x}$ of the same initial point $x_0$ under $\tilde{\xi}$ satisfies $\tilde{x}(1)<-\delta(\epsilon) <0$, where $\delta$ is increasing in $\epsilon$. Thus, for small-enough $\epsilon$, we still have $\tilde{x}(1+2\epsilon) < \tilde{\xi}(1+2\epsilon)$, which shows that \eqref{Ineq:HittingTimesMonotoneLeft} holds on the common domain $[\tilde{\tau}_-^{-1}(1+2\epsilon),0]$ of $\tau_-$ and $\tilde{\tau}_-$.
That $\tau_+ \geq \tilde{\tau}_+$ on $[0,\tau_+^{-1}(1+2\epsilon)]$ is similarly clear: by \eqref{Eq:ZeroHittingTime} we have $\tau_+^{-1}(1+2\epsilon) = 2\sqrt{1+2\epsilon} = 2 + O(\epsilon)$, while $\tilde{\xi}$ has already eaten points past this on $[0,1]$ when $\epsilon$ is small, as $\tilde{\tau}_+^{-1}(1) \approx 3$.
\end{example}
In short, to say much about the $\tau \mapsto \xi$ direction one has to find a mechanism for handling rapid oscillations in $\xi$. The machinery we construct below works in the case of the hitting times $\tau(\cdot; \tb{C})$ of a constant driver, which, without loss of generality, we may assume to be the zero driver $\tb{0}$. Our principal tool is Lemma \ref{Lemma:FasterTimes}, which bounds the welding time $\tau$ for two points $x_0 <y_0$ in terms of how far $\xi(\tau)$ is from the initial average $(x_0+y_0)/2$. We also find it helpful to write $\xi$ as a convex combination of the points it will weld, which leads to the following integral representation for the interval length $y(t)-x(t)$.
\begin{lemma}\label{Lemma:IntervalWidth}
Suppose $\xi \in C([0,\tau])$ welds initial points $x_0 < \xi(0) < y_0$ at some time $\tau$. Let $I(t) := y(t)-x(t)$ be the interval length at time $t$ and let $\alpha:[0,\tau) \rightarrow (0,1)$ be the convex combination coefficient yielding
\begin{align}\label{Eq:XiConvexCombo}
\xi(t) = (1-\alpha(t))x(t) + \alpha(t) y(t).
\end{align}
Then
\begin{align}\label{Eq:IntervalWidth}
I(t) = \sqrt{I(0)^2 -4\int_0^t \frac{ds}{\alpha(s)(1-\alpha(s))}}
\end{align}
for all $0 \leq t \leq \tau$.
\end{lemma}
\noindent Recall that, as usual, $x(t) = h_t(x_0;\xi)$ and $y(t) = h_t(y_0;\xi)$, with the $h_t$ maps satisfying \eqref{Eq:UpwardsLoewner}. We note \eqref{Eq:IntervalWidth} is a generalization of the formula $t \mapsto \sqrt{y_0^2 -4t}$ for the image of a point $y_0 >0$ under the zero driver, as in this case
\begin{align}\label{Eq:ZeroDriverInterval}
I(t) = 2\sqrt{y_0^2-4t} = \sqrt{I(0)^2 - 16t}.
\end{align}
\begin{proof}
For $0 \leq t < \tau$, the Loewner equation \eqref{Eq:UpwardsLoewner} yields
\begin{align*}
\frac{d}{dt}\big(I(t)^2\big) = \frac{-4I(t)^2}{(y(t)-\xi(t))(\xi(t)-x(t))} = \frac{-4}{\alpha(t)(1-\alpha(t))},
\end{align*}
yielding \eqref{Eq:IntervalWidth}. As $t \rightarrow \tau$, $I(t) \rightarrow I(\tau) = 0$, which shows the integral in the radical is convergent and that the formula also holds when $t=\tau$.
\end{proof}
Observe that when $t=\tau$, \eqref{Eq:IntervalWidth} says
\begin{align}\label{Eq:IntervalWidthEnd}
I(0)^2 = \int_0^\tau \frac{4 ds}{\alpha(s)(1-\alpha(s))} \geq 16 \tau
\end{align}
by calculus on the function $x \mapsto \frac{4}{x(1-x)}$, thus showing
\begin{align}\label{Ineq:MaxTime}
\tau \leq \frac{(y_0-x_0)^2}{16}.
\end{align}
This maximum time is achieved by the driver which is constantly the average of $x_0$ and $y_0$ by \eqref{Eq:ZeroDriverInterval} (or \eqref{Eq:ZeroHittingTime}).
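Both \eqref{Eq:IntervalWidth} and \eqref{Ineq:MaxTime} are easy to check numerically. The Python sketch below (an illustration only, not part of the argument) integrates the upward flow with the sign convention $\dot{h}_t = -2/(h_t - \xi(t))$ used in the proof above, and compares $I(t)$ with the closed form $\sqrt{I(0)^2-16t}$ in the symmetric case $\alpha \equiv 1/2$.

```python
import math

def interval_lengths(x0, y0, xi, T, n=20000):
    """Euler-integrate h' = -2/(h - xi(t)) for the endpoints x0 < xi(0) < y0
    and return the sampled interval lengths I(t) = y(t) - x(t)."""
    dt = T / n
    x, y = x0, y0
    out = [(0.0, y - x)]
    for k in range(n):
        t = k * dt
        x += dt * (-2.0 / (x - xi(t)))
        y += dt * (-2.0 / (y - xi(t)))
        out.append(((k + 1) * dt, y - x))
    return out

# Zero driver, symmetric points x0 = -1, y0 = 1, so alpha(t) = 1/2 and the
# welding time is maximal: (y0 - x0)^2 / 16 = 0.25.  Stop shortly before it.
for t, I in interval_lengths(-1.0, 1.0, lambda t: 0.0, 0.2):
    assert abs(I - math.sqrt(4.0 - 16.0 * t)) < 1e-2
```

The step count is chosen so crude Euler stepping stays within the stated tolerance; the interval length decreases monotonically toward its collapse at $t = 0.25$.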
The lemma which immediately yields our result on $\tau \mapsto \xi$ for the zero driver is the following.
\begin{lemma}\label{Lemma:FasterTimes}
Suppose $\xi \in C([0,\tau])$ welds initial points $x_0 < \xi(0) < y_0$ satisfying $x_0 + y_0 = 0$ at time $\tau$ with $|\xi(\tau)| = \delta$. Then $\tau \leq f(\delta)$ for some function $f$ which is strictly decreasing in $\delta$ and satisfies $f(0) = \frac{(y_0-x_0)^2}{16}$.
\end{lemma}
\noindent Note that, as the proof will make clear, we do not require that $\xi \in S([0,\tau])$, only that $\xi$ welds $x_0$ and $y_0$ at time $\tau$.
\begin{proof}
By symmetry it suffices to consider the case $\delta > 0$. With $\alpha(t)$ as in \eqref{Eq:XiConvexCombo}, define $\alpha_0:= \frac{1}{2}+\frac{\delta}{4y_0}$ so that
\begin{align*}
(1-\alpha_0)x_0 + \alpha_0y_0 = \frac{\delta}{2},
\end{align*}
and consider the set
\begin{align*}
T_0 := \{ \, t \, : \, 1-\alpha_0 < \alpha(t) < \alpha_0 \,\},
\end{align*}
which is open and thus a finite or countable collection of open intervals. For a constant $\epsilon_0 = \epsilon_0(\delta, y_0)>0$ to be determined below, we either have $(i)$ $|T_0^c| < \epsilon_0$ or $(ii)$ $|T_0^c| \geq \epsilon_0$, with $|T_0^c|$ the Lebesgue measure of $T_0^c$. We proceed to explicitly construct a suitable bounding function for $\tau$ in each case.
As the intuition is easily obscured, it may be worth explicitly stating: the times $T_0$ represent the ``reasonable'' region where $\xi(t)$ is ``close'' to the average $(x(t)+y(t))/2$. If $|T_0|$ is large, as in case $(i)$, then $\tau$ cannot be too large, or else $y$ would move past $\delta$ (as for the zero driver and $\tau$ approaching $(y_0-x_0)^2/16$). If $|T_0|$ is ``small,'' as in $(ii)$, then the interval $I(t)$ is collapsing quickly since $\xi(t)$ is ``close'' to one of $x(t),y(t)$ often, which also makes the time ``small.''
Beginning with case $(i)$, we estimate the movement of $y_0$ on $T_0$. Since
\begin{align}\label{Ineq:IntervalWidthIntegralEst}
\int_0^t \frac{4 ds}{\alpha(s)(1-\alpha(s))} \geq 16t,
\end{align}
we see from \eqref{Eq:IntervalWidth} that $I(t) \leq \sqrt{I(0)^2 - 16t}$. Thus, using the Loewner equation \eqref{Eq:UpwardsLoewner}, we find for $t \in T_0$ that
\begin{align*}
-\dot{y}(t) = \frac{2}{(1-\alpha(t))I(t)} \geq \frac{2}{\alpha_0\sqrt{I(0)^2 - 16t}},
\end{align*}
and therefore that the cumulative change $\Delta y|_{T_0}$ of $y$ over times $t \in T_0$ satisfies
\begin{align*}
y_0 - \delta \geq - \Delta y|_{T_0} &\geq \int_{T_0}\frac{2dt}{\alpha_0\sqrt{I(0)^2 - 16t}}\\
&\geq \int_0^{|T_0|}\frac{2dt}{\alpha_0\sqrt{I(0)^2 - 16t}}\\
&= \frac{I(0)}{4\alpha_0} - \frac{\sqrt{I(0)^2-16|T_0|}}{4\alpha_0}.
\end{align*}
Here the first inequality is by the assumption that $y(\tau) = \delta$, while the second line follows by monotonicity of the integrand. Solving for $|T_0|$ yields
\begin{align}\label{Ineq:TimesFastBound1}
\frac{I(0)^2}{16}- \frac{1}{16}\big( I(0)-4\alpha_0(y_0-\delta) \big)^2 \geq |T_0| > \tau - \epsilon_0,
\end{align}
and so if we choose
\begin{align}
\epsilon_0 := \frac{1}{32}\big( I(0)-4\alpha_0(y_0-\delta) \big)^2= \frac{\delta^2(y_0+\delta)^2}{32y_0^2} >0, \label{Eq:epsilon0}
\end{align}
we see that \eqref{Ineq:TimesFastBound1} yields
\begin{align*}
\tau \leq \frac{(y_0-x_0)^2}{16}- \frac{\delta^2(y_0+\delta)^2}{32y_0^2} =: f_1(\delta),
\end{align*}
which is decreasing in $\delta$ and has the desired boundary value at $\delta=0$.
We turn, then, to the second case $|T_0^c| \geq \epsilon_0$, with $\epsilon_0 = \epsilon_0(\delta)$ still given by \eqref{Eq:epsilon0}. Using \eqref{Eq:IntervalWidthEnd}, the symmetry of $x \mapsto x(1-x)$ around $1/2$, and \eqref{Ineq:IntervalWidthIntegralEst}, we find that
\begin{align*}
\frac{(y_0-x_0)^2}{4} &= \int_{T_0^c}\frac{ds}{\alpha(s)(1-\alpha(s))} + \int_{T_0}\frac{ds}{\alpha(s)(1-\alpha(s))}\\
&\geq \frac{|T_0^c|}{\alpha_0(1-\alpha_0)} + 4|T_0|\\
&= \frac{\big( 1 -4 \alpha_0(1-\alpha_0) \big)|T_0^c|}{\alpha_0(1-\alpha_0)} + 4\tau\\
&\geq \frac{\big( 1 -4 \alpha_0(1-\alpha_0) \big)\epsilon_0}{\alpha_0(1-\alpha_0)} + 4\tau
\end{align*}
since $1 - 4\alpha_0(1-\alpha_0) >0$ (recall $x_0$ and $y_0$ are symmetric about zero and $\delta/2 > 0$, and so $\alpha_0 \neq 1/2$). We thus have
\begin{align*}
\tau \leq \frac{(y_0-x_0)^2}{16} - \Big(\frac{1}{4 \alpha_0(1-\alpha_0)} -1\Big)\epsilon_0(\delta) =: f_2(\delta),
\end{align*}
which again is decreasing in $\delta$ with initial value $f_2(0) =(y_0-x_0)^2/16$. Combining the cases, we thus see
\begin{equation*}
\tau \leq f_1(\delta) \wedge f_2(\delta) =: f(\delta).\qedhere
\end{equation*}
\end{proof}
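As a sanity check on the algebra above, the following Python sketch (an illustration only, with the normalization $y_0 = 1$, $x_0 = -1$) verifies the closed form \eqref{Eq:epsilon0} for $\epsilon_0$ and the claimed monotonicity of $f_1$ and $f_2$.

```python
# Numerical check (not part of the proof) of the epsilon_0 identity and the
# monotonicity of f_1, f_2 from the proof of Lemma FasterTimes, with y0 = 1.
y0 = 1.0
x0 = -y0
I0 = y0 - x0

def alpha0(d): return 0.5 + d / (4 * y0)
def eps0(d):   return (I0 - 4 * alpha0(d) * (y0 - d)) ** 2 / 32
def f1(d):     return (y0 - x0) ** 2 / 16 - d**2 * (y0 + d)**2 / (32 * y0**2)
def f2(d):
    a = alpha0(d)
    return (y0 - x0) ** 2 / 16 - (1 / (4 * a * (1 - a)) - 1) * eps0(d)

ds = [0.01 * k for k in range(1, 100)]
# eps0 agrees with the closed form delta^2 (y0 + delta)^2 / (32 y0^2)
assert all(abs(eps0(d) - d**2 * (y0 + d)**2 / (32 * y0**2)) < 1e-12 for d in ds)
# f1 and f2 are strictly decreasing on (0, 1)
assert all(f1(a) > f1(b) and f2(a) > f2(b) for a, b in zip(ds, ds[1:]))
```

Both bounding functions start at the maximal time $(y_0-x_0)^2/16 = 1/4$ at $\delta = 0$ and decrease strictly, as the lemma requires.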
\noindent We can immediately conclude the following from Lemma \ref{Lemma:FasterTimes}.
\begin{theorem}\label{Thm:TimesToDriverContinuousZero}
The hitting-time-to-driver map $\tau \mapsto \xi$ is well defined and continuous at the zero driver $\tb{0}$. More precisely,
\begin{enumerate}[$(i)$]
\item If $\xi \in C([0,T])$ has hitting times $\tau(x;\xi) = x^2/4$ on some interval $[-y_0,y_0]$, then $\xi(t) \equiv 0$ on $0 \leq t \leq y_0^2/4$.
\item If $\tau_n:[a_n,b_n] \rightarrow \mathbb{R}$ are hitting times for drivers $\xi_n \in S_0([0,T_n])$ with $a_n \rightarrow a$, $b_n \rightarrow -a$, and where $\tau_n$ converges uniformly to hitting time function $\tau_{\tb{0}}$ of the zero driver on compact subsets of $(a, -a)$, then $\xi_n$ converges uniformly to $\tb{0}$ on any $[0,T'] \subset [0,T)$, where $T = \tau_{\tb{0}}(a) = a^2/4$.
\end{enumerate}
\end{theorem}
\begin{proof}
$(i)$ Lemma \ref{Lemma:FasterTimes} implies that $\xi(y^2/4) = 0$ for each $0 \leq y \leq y_0$.
$(ii)$ We use Arzel\`a--Ascoli to show $\{\xi_n\}$ is precompact on any $[0,T'] \subset [0,T)$, and then show all subsequential limits are $\tb{0}$.
Since $a_n < \xi_n(t) <b_n$ for all $0 \leq t \leq T$, the sequence $\{\xi_n\}$ is uniformly bounded. If it is not equicontinuous on $[0,T'] \subset [0,T)$, then there exist a subsequence $\{\xi_{n_k}\}$, $\epsilon_1 >0$, and times $t_{n_k},t' \in [0,T']$ satisfying $t_{n_k} \rightarrow t'$, such that
\begin{align}\label{Ineq:NotEquicont}
|\xi_{n_k}(t_{n_k}) - \xi_{n_k}(t')| \geq \epsilon_1
\end{align}
for all $k$, as follows from negating the definition of equicontinuity and using the compactness of $[0,T']$.
We note that $\tau_{n_k,\pm}^{-1} \xrightarrow{u} \tau_{\tb{0},\pm}^{-1}$ on $[0,T']$, as follows from choosing an appropriate compact of $(a,-a)$ and using lemmas \ref{Lemma:OurDini} and \ref{Lemma:InversesConverge}. Thus the points $x_{n_k}, y_{n_k}$ and $x_{n_k}', y_{n_k}'$ welded together by $\xi_{n_k}$ at times $t_{n_k}$ and $t'$, respectively, satisfy
\begin{align*}
\max\{\,|x_{n_k} - x'|, |x_{n_k}'-x'|, |y_{n_k}-y'|, |y_{n_k}'-y'| \,\} \rightarrow 0
\end{align*}
as $k \rightarrow \infty$, where $x' := \tau_{\tb{0},-}^{-1}(t')$ and $y' := \tau_{\tb{0},+}^{-1}(t')$. In particular, $x_{n_k} + y_{n_k} \rightarrow 0$ and $x_{n_k}' + y_{n_k}' \rightarrow 0$. Since by \eqref{Ineq:NotEquicont} we have either $|\xi_{n_k}(t_{n_k})| \geq \epsilon_1/2$ or $|\xi_{n_k}(t')| \geq \epsilon_1/2$, by Lemma \ref{Lemma:FasterTimes} there is some $\delta>0$ such that
\begin{align*}
\min\{\tau_{n_k}(x_{n_k}), \tau_{n_k}(x_{n_k}')\} < t' - \delta
\end{align*}
for all large $k$. Since this contradicts $\tau_{n_k} \xrightarrow{u} \tau_{\tb{0}}$ on a sufficiently-large compact of $(a,-a)$, we conclude that the sequence $\{\xi_n\}$ is, indeed, equicontinuous on $[0,T']$.
Take any subsequential limit $\xi_{n_k} \rightarrow \tilde{\xi}$ on $[0,T']$. We wish to show that the hitting time function $\tilde{\tau}$ for $\tilde{\xi}$ is the same as $\tau_{\tb{0}}$ and then apply part $(i)$ of the theorem. (Unfortunately we cannot jump to use Theorem \ref{Cor:TimesPointwise} because, \emph{a priori}, we do not know that $\tilde{\xi} \in S$.) Fix $0 < y_0$ and $0 < \epsilon < y_0/2$. Then by \eqref{Ineq:DriverTimesMonotone},
\begin{align*}
\tau_{n_k}(y_0-\epsilon) \leq \tilde{\tau}(y_0) \leq \tau_{n_k}(y_0+\epsilon)
\end{align*}
for all sufficiently-large $k$, and so in the limit we find
\begin{align*}
\frac{(y_0-\epsilon)^2}{4} \leq \tilde{\tau}(y_0) \leq \frac{(y_0+\epsilon)^2}{4}.
\end{align*}
As this holds for any sufficiently-small $\epsilon$, we conclude that $\tilde{\tau}(y_0) = y_0^2/4$, and similarly that $\tilde{\tau}(-y_0) = y_0^2/4$. By assumption on $\tau_n$ and part $(i)$, we conclude $\tilde{\xi} = \tb{0}$ on $[0,T']$. Hence all subsequential limits are the same, and $\xi_n \rightarrow \tb{0}$ uniformly on $[0,T']$.
\end{proof}
\subsubsection{Positive evidence for the non-zero case}\label{Sec:PositiveEvidence}
In considering whether hitting-time convergence implies driver convergence, we ran some numerical experiments to gain intuition. We thought we had found a method to produce a counterexample, but the simulations actually produced convergent driving functions, thus yielding some positive evidence for a generalization of Theorem \ref{Thm:TimesToDriverContinuousZero}$(ii)$. We share the construction and this evidence here.
Let $\mathcal{T}_0$ be the collection of hitting-time functions $\tau(\cdot; \xi)$ generated by $\xi \in S_0([0,T])$, and choose $\tau(\cdot; \xi) \in \mathcal{T}_0$ corresponding to $\xi \neq \tb{0}$. Say $\tau:[-a,b] \rightarrow \mathbb{R}$ with $\tau(-a) = \tau(b) = T$. At stage $n$ consider the pairs
\begin{align*}
(x_j,y_j) = \big( \tau_-^{-1}(jT/n; \xi ),\; \tau_+^{-1}(jT/n; \xi ) \big), \qquad j =1,\ldots, n,
\end{align*}
that $\xi$ welds together at times $jT/n$. We construct a driver $\xi_n \in S_0$ that also welds $x_j$ to $y_j$ in time $jT/n$ but has ``large'' oscillations, which will potentially destroy uniform convergence to $\xi$.
Indeed, start $\xi_n$ at zero and have it move linearly with extremely-large speed until it reaches $(x_1(\epsilon;\xi_n)+y_1(\epsilon;\xi_n))/2$ at some time $\epsilon$. The true welding time for $(x_1,y_1)$ is
\begin{align*}
\frac{T}{n} \leq \frac{(y_1-x_1)^2}{16}
\end{align*}
by \eqref{Ineq:MaxTime}, where the maximum is attained by the driver which is constantly $(y_1+x_1)/2$. So if we set $\xi_n$ to be $(x_1(\epsilon;\xi_n)+y_1(\epsilon;\xi_n))/2$ for $\epsilon \leq t \leq \frac{T}{n}-\epsilon$, it does not weld these points, as $\epsilon$ is very small and $\xi$ is not generally constant. At time $\frac{T}{n}-\epsilon$ we then use a large oscillation in $\xi_n$ to weld both points together in time $\epsilon$ (see the proof of Theorem \ref{Thm:WeldingToOthersNotContinuous} below for a careful description of how to do this). Thus $\xi_n$ welds the pair $(x_1,y_1)$ in exactly time $t_1=T/n$.
We next have $\xi_n$ move in time $\epsilon$ to the average $\big(x_2(t_1+\epsilon; \xi_n)+y_2(t_1+\epsilon; \xi_n)\big)/2$ of the next pair. Intuitively, since $\xi_n$ stayed ``far away'' from $x_1$ and $y_1$ until the large $\epsilon$-oscillation at the end, $x_2$ and $y_2$ have not moved as far as they normally would have under $\xi$ in $[0,T/n]$, and thus we expect
\begin{align*}
\frac{T}{n} < \frac{\big( y_2(t_1; \xi_n)-x_2(t_1; \xi_n) \big)^2}{16}.
\end{align*}
So we have $\xi_n$ wait constantly at the average $(x_2(t_1+\epsilon; \xi_n)+y_2(t_1+\epsilon; \xi_n))/2$ on $\frac{T}{n} +\epsilon \leq t \leq \frac{2T}{n} - \epsilon$, and then use another large oscillation to weld the images of $x_2$ and $y_2$ together in time $\epsilon$. Thus $\xi_n$ also welds $(x_2,y_2)$ in exactly the same amount of time as $\xi$.
We continue in this way to weld all the $(x_j,y_j)$ at times identical to $\tau(\cdot; \xi)$, which by the monotonicity of $\tau_\pm$ implies $\tau(\cdot;\xi_n) \xrightarrow{u} \tau(\cdot;\xi)$.
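For concreteness, the stage-$n$ data of this construction are easy to compute when the reference hitting times are those of the zero driver (although the construction above takes $\xi \neq \tb{0}$, the zero driver gives the simplest closed-form pairs, with $\tau_\pm^{-1}(t) = \pm 2\sqrt{t}$ by \eqref{Eq:ZeroHittingTime}). The Python sketch below (an illustration only) lists the pairs $(x_j,y_j)$ and checks them against the bound \eqref{Ineq:MaxTime}.

```python
import math

# Welding pairs (x_j, y_j) = (tau_-^{-1}(jT/n), tau_+^{-1}(jT/n)) at stage n,
# computed here for the zero driver, where tau_+/-^{-1}(t) = +/- 2 sqrt(t).
def welding_pairs(T, n):
    return [(-2.0 * math.sqrt(j * T / n), 2.0 * math.sqrt(j * T / n))
            for j in range(1, n + 1)]

pairs = welding_pairs(T=1.0, n=4)
# Each pair obeys the maximal-time bound jT/n <= (y_j - x_j)^2 / 16; for the
# zero driver the bound holds with equality, since alpha(t) = 1/2 throughout.
assert all(abs((y - x) ** 2 / 16 - j * 1.0 / 4) < 1e-12
           for j, (x, y) in enumerate(pairs, start=1))
```

For a non-constant $\xi$ the bound is strict, which is exactly the slack the construction exploits when it parks $\xi_n$ at the running average.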
We wondered if in the $\epsilon \rightarrow 0$ limit the oscillations of $\xi_n$ at the end of each interval $[jT/n,(j+1)T/n]$ would grow so large that $\|\xi_n - \xi\|_{\infty[0,T]}$ would be bounded below. In numerical simulations this was not the case, however, suggesting that the convergence of hitting times is fairly robust. We show a characteristic simulation in Figure \ref{Fig:Simulations}, and submit this as positive evidence for a generalization of Theorem \ref{Thm:TimesToDriverContinuousZero}$(ii)$.
We comment that our constructed driver $\xi_n$ is not the worst-case scenario, as it is not necessarily the one that minimizes the shrinking of the last interval $y_n-x_n$ on $0 \leq t \leq \frac{(n-1)T}{n}$ among all the drivers that weld the pairs $(x_j,y_j)$ at time $jT/n$ for $1 \leq j \leq n-1$. If such an extremal driver could be approximated and simulated, one could then follow it with the above construction to weld $(x_n,y_n)$ on $\frac{(n-1)T}{n} \leq t \leq T$ and produce the largest-possible oscillation in $\xi_n$ for welding these last points. The intuition is that the pair $(x_n,y_n)$ is ``far away'' when the driver is welding the other points, and so does not move very much; if we minimize its movement we then have the opportunity for an extremely large fluctuation in $\xi_n$. It would be interesting to see if the oscillation would remain macroscopic in the $n \rightarrow \infty$ limit.
\begin{figure}
\begin{center}
\includegraphics[scale=0.7]{CornerLowMesh-eps-converted-to.pdf}\\
\bigskip
\includegraphics[scale=0.7]{CornerManyMesh-eps-converted-to.pdf}
\end{center}
\caption{\small Numerical simulations of drivers $\xi_n$ as constructed in \S\ref{Sec:PositiveEvidence}, in the $\epsilon \rightarrow 0$ limit (where the drivers move instantly and hence yield vertical lines). Here the true driver $\xi$, pictured in blue, generates a line with angle $\pi/3$ to the positive reals on $0 \leq t \leq 2$, followed by a vertical line segment on $2 \leq t \leq 3$. With a coarse approximation, the constructed driver $\xi_{n_1}$ in the upper figure struggles to stay close, but we see in the lower figure that the $\xi_n$ still converge as the mesh becomes finer. Convergence at the corner is apparently only logarithmically fast in the mesh size, however (we quadruple the points in each iteration). Given the difficulty at the non-smooth point of $\xi$, it is unclear whether the $\xi_n$ would converge for a rough fractal driver, such as, for instance, $\xi$ for the Von Koch snowflake \cite[Figure 2]{Lindrohdespace}.}
\label{Fig:Simulations}
\end{figure}
\section{Continuity properties of $\tau \mapsto \varphi$ and $\xi \mapsto \varphi$}\label{Sec:Welding}
\subsection{Continuity of $\tau \mapsto \varphi$ and $\xi \mapsto \varphi$}\label{Sec:BlankToWelding}
For convenience in this section, we have our drivers start at zero, drawing them from $S_0$. We also restrict to times $T<\infty$, as we will need domain compactness for uniform continuity.
The continuity of $\tau \mapsto \varphi$ follows immediately from writing $\varphi = \tau_+^{-1} \circ \tau_-$ via Lemma \ref{Lemma:tauContinuous} and then using lemmas \ref{Lemma:OurDini} and \ref{Lemma:InversesConverge}; we leave the details to the interested reader.
\begin{lemma}\label{Lemma:TimesToWeldingConvergence}
Let $0<T < \infty$ and let $\xi,\xi_n \in S_0([0,T])$. Let $\tau:[-a,b]\rightarrow \mathbb{R}$ and $\tau_n:[-a_n,b_n] \rightarrow \mathbb{R}$ be the hitting times for $\xi$ and $\xi_n$, respectively, and $\varphi:[-a,0] \rightarrow [0,b]$ and $\varphi_n:[-a_n,0] \rightarrow [0,b_n]$ their conformal weldings. If $a_n \rightarrow a$, $b_n \rightarrow b$ and $\tau_n \xrightarrow{u} \tau$ on any $[c,d] \subset (-a,b)$, then $\varphi_n \xrightarrow{u} \varphi$ on $[-d,0]$ for any $[-d,0]\subset (-a,0]$.
\end{lemma}
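To illustrate the composition $\varphi = \tau_+^{-1} \circ \tau_-$ in the simplest case, the Python sketch below (an illustration only) uses the zero driver, for which $\tau_-(x) = x^2/4$ on the left and $\tau_+^{-1}(t) = 2\sqrt{t}$ by \eqref{Eq:ZeroHittingTime}, and recovers the welding $\varphi(x) = -x$ of the vertical slit.

```python
import math

# Welding of the zero driver via phi = tau_+^{-1} o tau_-:
# tau_-(x) = x^2/4 for x <= 0 and tau_+^{-1}(t) = 2 sqrt(t).
def tau_minus(x):
    return x * x / 4.0

def tau_plus_inv(t):
    return 2.0 * math.sqrt(t)

def phi(x):
    return tau_plus_inv(tau_minus(x))

# The vertical slit is welded symmetrically: phi(x) = -x on [-1, 0].
assert all(abs(phi(-k / 10.0) - k / 10.0) < 1e-12 for k in range(11))
```

The same composition underlies the proof of Theorem \ref{Thm:DriverToWeldingConvergence} below, where $\tau_{n,\pm}$ replace the closed forms used here.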
We are more interested in the continuity of $\xi \mapsto \varphi$, the content of the following theorem.
\begin{theorem}\label{Thm:DriverToWeldingConvergence}
Let $0<T < \infty$ and let $\xi,\xi_n \in S_0([0,T])$. Let $\varphi:[-a,0] \rightarrow [0,b]$ and $\varphi_n:[-a_n,0] \rightarrow [0,b_n]$ be the conformal weldings for $\xi$ and $\xi_n$, respectively. If $\xi_n \xrightarrow{u} \xi$ on $[0,T]$, then $a_n \rightarrow a$, $b_n \rightarrow b$, and $\varphi_n \xrightarrow{u} \varphi$ on $[-c,0]$ for any $[-c,0]\subset (-a,0]$.
\end{theorem}
\begin{remark}
As in Theorem \ref{Cor:TimesPointwise}, note that we also have
\begin{align}\label{Ineq:EndpointsQuantWeldings}
\max\{\, |a_n-a|, |b_n-b| \,\} \leq \|\xi_n-\xi\|_{\infty[0,T]}
\end{align}
by Theorem \ref{Lemma:InverseTauLip}.
\end{remark}
\begin{proof}
Since $a_n \rightarrow a$ and $b_n \rightarrow b$ by \eqref{Ineq:EndpointsQuantWeldings}, $\varphi_n \in C_0([-c,0])$ for all large $n$. Noting $\varphi = \tau_+^{-1}\circ \tau_-$ and writing $\tau_\pm$ and $\tau_{n,\pm}$ for the hitting-time functions generated by $\xi$ and $\xi_n$, respectively, we see for $-c \leq x \leq 0$ and sufficiently large $n$ that
\begin{align}
|\varphi_n(x) &- \varphi(x)|\notag \\
&= |\tau_{n,+}^{-1}\big(\tau_{n,-}(x)\big)-\tau_+^{-1}\big(\tau_-(x)\big)| \notag\\
&\leq |\tau_{n,+}^{-1}\big(\tau_{n,-}(x)\big)-\tau_+^{-1}\big(\tau_{n,-}(x)\big)| + |\tau_+^{-1}\big(\tau_{n,-}(x)\big) - \tau_+^{-1}\big(\tau_{-}(x)\big)| \notag\\
&\leq \| \xi_n - \xi \|_{\infty[0,T]} + \omega\big(\|\tau_{n,-} - \tau_-\|_{\infty[-c,0]}; \tau_+^{-1}\big)\label{Ineq:AlmostQuantitative}
\end{align}
by Theorem \ref{Lemma:InverseTauLip}, where $\omega(\cdot; \tau_+^{-1})$ is the modulus of continuity of $\tau_+^{-1}$ on $[0,T]$, which exists by Lemma \ref{Lemma:tauContinuous}. Since $\|\tau_{n,-} - \tau_-\|_{\infty[-c,0]} \rightarrow 0$ by Theorem \ref{Cor:TimesPointwise}, we have $\| \varphi_n - \varphi \|_{\infty[-c,0]} \rightarrow 0$.
\end{proof}
\begin{remark}
Note that if one could control $\omega(\cdot; \tau_+^{-1})$ by the modulus of continuity $\omega(\cdot; \xi)$ of $\xi$ on $[0,T]$, then \eqref{Ineq:AlmostQuantitative} would yield a type of quantitative estimate.
\end{remark}
We saw in Theorem \ref{Cor:TimesPointwise} a sense in which $\xi \mapsto \tau$ is continuous, and in Lemma \ref{Lemma:DriverToTimesNotUniform} that it is not uniformly so. The map $\xi \mapsto \varphi$ is analogous: it is continuous in the sense of Theorem \ref{Thm:DriverToWeldingConvergence} but there is again no universal modulus of continuity.
\begin{lemma}\label{Lemma:DriverToWeldingNotUniform}
There exist drivers $\xi_n, \tilde{\xi}_n \in S_0([0,T])$ such that $\|\xi_n - \tilde{\xi}_n\|_{\infty[0,T]} \rightarrow 0$ but where there exists $x$ welded by both $\xi_n$ and $\tilde{\xi}_n$ and $\epsilon>0$ such that
\begin{align}\label{Ineq:WeldingsFar}
|\varphi_n(x)- \tilde{\varphi}_n(x)| \geq \epsilon
\end{align}
for all $n$, where $\varphi_n$ and $\tilde{\varphi}_n$ are the weldings for $\xi_n$ and $\tilde{\xi}_n$, respectively.
\end{lemma}
\begin{proof}
We use the same drivers $\xi(\cdot, \delta)$ and $\tilde{\xi}(\cdot,\delta)$ as in the proof of Lemma \ref{Lemma:DriverToTimesNotUniform} (including the modification in the last paragraph so that both are in $S_0$), and show that
\begin{align*}
|\varphi^{-1}(y_0)- \tilde{\varphi}^{-1}(y_0)| \geq \epsilon,
\end{align*}
with $y_0$ the same point selected in that proof (to obtain \eqref{Ineq:WeldingsFar} reflect the drivers across the origin). Consider the situation at time $t=\delta^2+\delta$, when $\xi$ and $\tilde{\xi}$ arrive at $-1$ and $-1-\delta$, respectively, for the first time. We have that $y(\delta^2+\delta; \xi) = -1+2\delta$, and thus the point $x_0$ which will weld to it is, at that moment, at
\begin{align}\label{Eq:WeldingNotUniformx}
x(\delta^2+\delta;\xi) = -1-2\delta.
\end{align}
By \eqref{Ineq:MovementOfy0},
\begin{align*}
y(\delta^2+\delta; \tilde{\xi}) - \tilde{\xi}(\delta^2+\delta) \geq 3\delta + \frac{1}{3},
\end{align*}
and thus the point $\tilde{x}_0$ which welds to $y_0$ under $\tilde{\xi}$ satisfies
\begin{align*}
\tilde{x}(\delta^2+\delta; \tilde{\xi}) \leq -1-\delta - \Big( 3\delta + \frac{1}{3} \Big) = -4\delta - \frac{4}{3},
\end{align*}
and so by \eqref{Eq:WeldingNotUniformx} we see
\begin{align}\label{Ineq:PreWeldFar}
x(\delta^2+\delta; \xi) - \tilde{x}(\delta^2+\delta; \tilde{\xi}) \geq \frac{1}{3} + 2\delta.
\end{align}
As $\delta \rightarrow 0+$, simple estimates with the Loewner equation show $x(\delta^2+\delta;\xi) - x_0 \rightarrow 0$ and $\tilde{x}(\delta^2+\delta;\tilde{\xi}) - \tilde{x_0} \rightarrow 0$, and thus \eqref{Ineq:PreWeldFar} shows
\begin{align*}
x_0 -\tilde{x}_0 = \varphi^{-1}(y_0) - \tilde{\varphi}^{-1}(y_0) \geq \frac{1}{4}
\end{align*}
for all small $\delta$.
\end{proof}
As in Lemma \ref{Lemma:DriverToTimesJoint}, we can easily conclude from above results that $(x;\xi) \mapsto \varphi(x;\xi)$ is pointwise jointly continuous in $x$ and $\xi$. This generalizes \cite[Thm. 1.2$(b)$]{Vlad}, as our statement covers all drivers $\xi$ generating simple curves, not just the a.s. Brownian motion case.
\begin{lemma}\label{Lemma:DriverToWeldingJoint}
Let $0<T < \infty$ and $\xi \in S_0([0,T])$ a driver with welding $\varphi:[-a,0] \rightarrow [0,b]$. If $x \in (-a,0]$ and $\epsilon>0$, there exists $\delta = \delta(\epsilon, x,\xi)$ such that whenever $\tilde{x}\leq 0$ and $\tilde{\xi} \in S_0([0,T])$ satisfy
\begin{align*}
\max\{\, |\tilde{x}-x|, \|\tilde{\xi} - \xi\|_{\infty [0,T]} \,\} < \delta,
\end{align*}
then
\begin{align*}
|\varphi(\tilde{x};\tilde{\xi}) - \varphi(x; \xi)| < \epsilon.
\end{align*}
\end{lemma}
\begin{proof}
Write $\tilde{\varphi}$ and $\varphi$ for $\varphi(\cdot; \tilde{\xi})$ and $\varphi(\cdot; \xi)$, respectively, and similarly for their drivers and hitting times. For $\frac{x-a}{2}\leq \tilde{x} \leq 0$, we have that $\tilde{\varphi}(\tilde{x})$ is defined whenever $\|\tilde{\xi}-\xi\|_{\infty[0,T]}$ is sufficiently small by Theorem \ref{Lemma:InverseTauLip}, and for such $\tilde{x}$, the triangle inequality yields
\begin{align*}
|\tilde{\varphi}(\tilde{x}) - \varphi(x)| &\leq \|\tilde{\varphi} - \varphi\|_{\infty[(x-a)/2,0]} + \omega(|\tilde{x}-x|; \varphi)\\
&\leq \| \tilde{\xi} - \xi \|_{\infty[0,T]} + \omega\big(\|\tilde{\tau} - \tau\|_{\infty[(x-a)/2,0]}; \tau_+^{-1}\big) + \omega(|\tilde{x}-x|; \varphi)
\end{align*}
by \eqref{Ineq:AlmostQuantitative}, where $\omega(\cdot; \varphi)$ is the modulus of continuity of $\varphi$ on $[-a,0]$, and $\omega(\cdot; \tau_+^{-1})$ that for $\tau_+^{-1}$ on $[0,T]$. As $\xi$ and $\varphi$ are fixed, $\tau_+^{-1}$ is determined by $\xi$, and the hitting times are continuous by Theorem \ref{Cor:TimesPointwise}, we may choose $\delta$ small enough such that each of the three terms is less than $\epsilon/3$.
\end{proof}
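The moduli of continuity appearing in the proof above are also easy to estimate numerically on a sample grid. The following Python sketch is purely illustrative (the function name and grid are our own choices, not part of the formal argument):

```python
def modulus_of_continuity(ts, fs, delta):
    """Discrete omega(delta; f): max of |f(s) - f(t)| over grid points
    in ts with |s - t| <= delta, where fs are the sampled values f(t)."""
    return max(
        abs(fa - fb)
        for a, fa in zip(ts, fs)
        for b, fb in zip(ts, fs)
        if abs(a - b) <= delta
    )

# f(t) = t^2 on [0, 1]: the sup over |s - t| <= 0.1 is attained at s = 1, t = 0.9
ts = [i / 100 for i in range(101)]
m = modulus_of_continuity(ts, [t * t for t in ts], 0.1)
print(m)  # ≈ 0.19
```

In the proof, one would apply such an estimate to $\varphi$ on $[-a,0]$ and to $\tau_+^{-1}$ on $[0,T]$.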
\subsection{$\varphi \mapsto \tau$ and $\varphi \mapsto \xi$ are not continuous}\label{Sec:WeldingCounterexample}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\textwidth]{CE5-eps-converted-to.pdf}\\
\bigskip
\includegraphics[width=0.9\textwidth]{CE10-eps-converted-to.pdf}\\
\bigskip
\includegraphics[width=0.9\textwidth]{CE20-eps-converted-to.pdf}
\end{center}
\caption{\small Numerical simulations of the curves $\gamma_n$ (left) and the drivers $\xi_n$ (right) which give a counterexample to the converse of Theorem \ref{Thm:DriverToWeldingConvergence}. Here the weldings $\varphi_n$ of the $\gamma_n$ converge uniformly on $[-1,0]$ to $\varphi(x)=-x$, but the drivers stay far in supremum distance from the zero driver. (Note that we have simplified the $\gamma_n$ by drawing them as piecewise polygonal, which would not quite be the case.)}
\label{Fig:CounterExample}
\end{figure}
\begin{theorem}\label{Thm:WeldingToOthersNotContinuous}
Let $\varphi(x) =-x$ on $[-1,0]$ be the welding for the vertical line segment $[0,i]$, with corresponding driver $\tb{0}$ and hitting times $\tau_{\tb{0}}$. There exist $\varphi_n$ corresponding to simple curves $\gamma_n$ with drivers $\xi_n$ and hitting times $\tau_n$ such that $\|\varphi_n - \varphi\|_{\infty[-1,0]} \rightarrow 0$ but where $\xi_n \not\rightarrow \tb{0}$ and $\tau_n \not\rightarrow \tau_{\tb{0}}$.
\end{theorem}
\begin{proof}
We build $\gamma_n$ by a piecewise-linear driving function $\xi_n$ which welds each $k/n$ to $-k/n$, $k=1,\ldots, n$, under its upwards Loewner flow. The intuition is that $\xi_n$ will capture the initial points extremely fast through rapid oscillations, and hence the latter points will not have time to move very far, affording $\xi_n$ the opportunity to travel far from 0 to capture them. By construction, the welding $\varphi_n$ generated by $\xi_n$ will satisfy $\varphi_n(-k/n) = k/n$ for all $k \in \{1, \ldots, n\}$, and since the weldings are monotone, this yields $\varphi_n \xrightarrow{u} \varphi$ on $[-1,0]$.
We first note that we can weld $1/n$ to $-1/n$ in an arbitrarily-small amount of time. Write $x_1 = -1/n$ and $y_1=1/n$, with $x_1(t)$ and $y_1(t)$ their images after time $t$ in the upwards Loewner flow generated by $\tilde{\xi}_1$, which we now construct. Starting from $\tilde{\xi}_1(0)=0$, move $\tilde{\xi}_1$ linearly to $y_1(\epsilon_1)-\epsilon_1$ in time $\epsilon_1$, for some small $\epsilon_1 >0$. Then $\dot{y}_1(\epsilon_1) = -2/\epsilon_1$, and we have $\tilde{\xi}_1$ rush back towards $x_1(t)$ at precisely this same speed, until the time $t_1$ when it is exactly half-way between $x_1(t_1)$ and $y_1(t_1)$. We then freeze $\tilde{\xi}_1$ at this point $\tilde{\xi}_1(t_1)$ and let $x_1$ and $y_1$ flow together until they weld at time $T_1$.
Since $\tilde{\xi}_1$ is piecewise linear, it is an element of $S_0([0,T_1])$, and we note that $T_1$ is small: when moving back towards $x_1$, the distance $\tilde{\xi}_1$ travels is less than $2/n$, and so the time required is less than $\epsilon_1/n$. Once $\tilde{\xi}_1$ stops, the image of $y_1$ is still $\epsilon_1$ away, and so takes $\epsilon_1^2/4$ units of time to reach $\tilde{\xi}_1(t_1)$. Thus
\begin{align*}
T_1 < \epsilon_1 + \epsilon_1/n + \epsilon_1^2/4.
\end{align*}
After flowing up with such a driver to weld the pair $(x_1,y_1)$, we can repeat the idea to capture subsequent points with $\tilde{\xi}_j$'s, and can thus generate a $\xi_n$ that welds the first $n-1$ pairs in time $T_{n-1}<1/n^2$. The last mesh point remaining on the right is the image $y_n(T_{n-1})$ of $y_n(0)=1$, and since for all $0 \leq s \leq T_{n-1}$ we have the very coarse estimate
\begin{align*}
y_n(s) - \xi_n(s) \geq y_n(s) - y_{n-1}(s) \geq y_n(0) - y_{n-1}(0) = \frac{1}{n}
\end{align*}
by \eqref{Eq:LoewnerIncrement}, we see $y_n$ has moved towards $\xi_n$ no more than $2n(1/n^2) = 2/n$ units, showing
\begin{align*}
\sup |\xi_n| > 1 - \frac{3}{n}
\end{align*}
provided $\xi_n$ moves fast enough towards $y_n$ after $T_{n-1}$.
We conclude $\varphi_n \xrightarrow{u} \varphi$ but $\xi_n \not\rightarrow \tb{0}$. Furthermore, we may build $\xi_n$ to weld all the points in time $2/n^2$, showing the hitting times also do not converge to $\tau_{\tb{0}}$.
\end{proof}
Figure \ref{Fig:CounterExample} gives a numerical approximation for the curves $\gamma_n$ generated by the $\xi_n$, which collapse to the real interval $[-1,1]$ as $n \rightarrow \infty$.
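The simulations rest on integrating the upwards Loewner flow of boundary points. As a minimal sketch of the idea (our own illustration with hypothetical names, not the code used for the figure), one can run an Euler scheme for $\dot{x}(t) = -2/(x(t)-\xi(t))$ and record when a point is absorbed by the driver; for the zero driver the generated curve is the vertical slit, with exact hitting time $\tau(x_0;\tb{0}) = x_0^2/4$ and welding $\varphi(x)=-x$.

```python
def hitting_time(x0, xi, dt=1e-6, t_max=1.0, tol=1e-3):
    """Euler scheme for the upwards Loewner ODE dx/dt = -2/(x - xi(t)).
    Returns (approximate hitting time, final position of the point)."""
    x, t = x0, 0.0
    while t < t_max:
        gap = x - xi(t)
        if abs(gap) < tol:   # the point has (numerically) reached the driver
            return t, x
        x -= 2.0 * dt / gap
        t += dt
    return t, x              # not absorbed before t_max

tau, _ = hitting_time(-0.2, lambda t: 0.0)
print(tau)  # ≈ 0.01 = (-0.2)**2 / 4
```

Replacing the zero driver by a piecewise-linear $\xi_n$ as in the proof of Theorem \ref{Thm:WeldingToOthersNotContinuous} reproduces the qualitative behavior shown in the figure.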
\subsection{An application: convergence of a zipper-like algorithm for welding using minimal-energy curves}\label{Sec:Zipper}
Let $\gamma([0,T])$ be a finite simple curve in $\mathbb{H}\cup\{0\}$ with associated upwards driver $\xi$ and conformal welding $\varphi:[-a,0] \rightarrow [0,b]$. Recall that $\gamma$ has finite \emph{Loewner energy} if the Dirichlet energy of $\xi$ is finite, i.e. $\xi$ is absolutely continuous and
\begin{align}\label{Eq:LoewnerEnergy}
I_L(\xi) := \frac{1}{2}\int_0^T \dot{\xi}(t)^2dt < \infty.
\end{align}
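For a piecewise-linear driver, \eqref{Eq:LoewnerEnergy} reduces to a finite sum: if $\xi$ has slope $c_j$ on a linear piece of length $\ell_j$, then $I_L(\xi) = \frac{1}{2}\sum_j c_j^2\ell_j$. A short Python sketch (illustrative only; the function name is ours):

```python
def loewner_energy_piecewise_linear(pieces):
    """Dirichlet energy (1/2) * integral of xi'(t)^2 dt for a
    piecewise-linear driver given as (slope, interval_length) pairs."""
    return 0.5 * sum(c * c * ell for c, ell in pieces)

# xi(t) = 3t on [0, 2]: I_L = (1/2) * 3^2 * 2 = 9
print(loewner_energy_piecewise_linear([(3.0, 2.0)]))  # 9.0
```

In particular, the piecewise-linear drivers constructed in \S\ref{Sec:WeldingCounterexample} all have finite energy.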
The Loewner energy was introduced in \cite{ShekharFriz} and subsequently saw rapid development in \cite{RohdeWang,YilinFredwelding,YilinFredKufarev,WangReverse,WangEquiv}, to give an incomplete list. In short, it has fascinating connections to a diverse array of fields: probability theory, complex analysis, hyperbolic geometry and geometric measure theory \cite{Bishop}, and even Teichm\"{u}ller theory. See \cite{Yilinsurvey} for a helpful overview.
We use Loewner energy minimizers in this section to address a conformal welding approximation question. Given a partition $\mathcal{P} = \{(x_j,y_j)\}_{j=1}^N$ of $[-a,b]$,
\begin{align}\label{Ineq:Partition}
-a = x_N < x_{N-1} < \cdots < x_1 < x_0=0=y_0 < y_1 < \cdots < y_N = b
\end{align}
with $\varphi(x_j) = y_j$ for each $j$, an interesting question is when a curve $\tilde{\gamma}$ which welds each pair $(x_j,y_j)$ together is close to $\gamma$. We have seen in \S\ref{Sec:WeldingCounterexample} that this is not always the case. Indeed, given that $\varphi \mapsto \xi$ and $\xi \mapsto \gamma$ are both discontinuous, we expect $\varphi \mapsto \gamma$ to exhibit a number of pathologies.
This question of closeness of $\gamma$ given closeness of $\varphi$ is related to the \emph{(domain) zipper algorithm} of Don Marshall \cite{Marshallrohde}, which seeks to approximate a conformal map $f$ to a domain $\Omega$ via a map $f_n$ which maps to a domain $\Omega_n$ whose boundary agrees with $\partial \Omega$ on a given mesh/discretization $\mathcal{Q} \subset \partial \Omega$.\footnote{Thus note all ``zippers'' in this section are distinct from Sheffield's quantum zipper in probability theory.} The map $f_n$ is built from composing $\#\mathcal{Q}$ conformal maps, where each subsequent map, in effect, draws a boundary segment between the next two points in $\mathcal{Q}$. The question here is, when $\mathcal{Q}$ is very fine (and one ``draws'' a reasonable arc between successive points), is $\partial\Omega_n$ actually uniformly close to $\partial\Omega$? There is a similar algorithm for weldings, Marshall's \emph{welding zipper algorithm}, which seeks to reconstruct $\gamma$ through composing $N$ conformal maps which ``zip up'' the partition \eqref{Ineq:Partition} discretizing $\varphi$ one pair at a time, producing a curve $\gamma_n$ whose welding $\varphi_n$ agrees with $\varphi$ at each $x_j$. The question for the welding zipper is: what conformal maps can you use for each zip to guarantee that $\gamma_n$ is close to $\gamma$?
Both versions of the zipper, it turns out, work remarkably well in practice and have become something of industry standards for numerically computing conformal maps. Proving convergence, however, has been elusive. For the domain zipper, the only proof is for when one draws hyperbolic geodesic segments between subsequent boundary points \cite{Marshallrohde}, and convergence for the welding zipper remains open (though see \cite{Mesikepp} for a partial result and further discussion).
In this subsection, we give as a corollary of Theorem \ref{Thm:DriverToWeldingConvergence} a positive convergence result to an algorithm \emph{similar} to the welding zipper. We construct curves $\gamma_n$ whose weldings match $\varphi$ on $\mathcal{P}_n$, but we create each $\gamma_n$ ``all at once'' through minimizing Loewner energy among all such curves, instead of building it through $N$ compositions.\footnote{We do not propose concrete means to actually compute the minimizers $\gamma_n$, and so we are admittedly using the term ``algorithm'' rather loosely. Our point is to allude to the welding zipper algorithm, our source of inspiration.} That is, whereas the zipper welds the first two points $x_1,y_1$ with some map $F_1$, and then welds the images $F_1(x_2),F_1(y_2)$ of the next two points under $F_1$ with some $F_2$, and so on, solving the welding problem with $F_N\circ \cdots \circ F_1$, we start with curves which already weld all pairs in $\mathcal{P}$ and minimize energy among them. This makes the problem more tractable, and we only need existing results once Theorem \ref{Cor:TimesPointwise} establishes existence of minimizers.
Let us write $|\mathcal{P}| := \max\{\, x_{j-1}-x_j, y_j-y_{j-1} \,\}_{j=1}^N$ for the norm of the partition and say $\varphi$ \emph{welds} $\mathcal{P}$ if $\varphi(x_j) = y_j$ for all $1 \leq j \leq N$.
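In code, $|\mathcal{P}|$ is just the largest gap on either side of the origin; the following Python sketch (our own, with a hypothetical function name) computes it from the two monotone lists of partition points:

```python
def partition_norm(xs, ys):
    """|P|: the largest of the gaps x_{j-1} - x_j and y_j - y_{j-1},
    with xs = [0, x_1, ..., -a] decreasing and ys = [0, y_1, ..., b] increasing."""
    gaps = [xs[j - 1] - xs[j] for j in range(1, len(xs))]
    gaps += [ys[j] - ys[j - 1] for j in range(1, len(ys))]
    return max(gaps)

print(partition_norm([0.0, -0.5, -1.0], [0.0, 0.3, 1.0]))  # 0.7
```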
\begin{theorem}\label{Cor:Zipper}
Let $\gamma:[0,T] \rightarrow \mathbb{H} \cup \{x\}$ be a finite curve of finite Loewner energy, with upwards driver $\xi \in S_0([0,T])$ and welding $\varphi:[-a,0] \rightarrow [0,b]$.
\begin{enumerate}[$(i)$]
\item\label{Thm:MinimizersExist} For any partition $\mathcal{P}$ of $[-a,b]$ as in \eqref{Ineq:Partition}, there exists a curve $\gamma_{\mathcal{P}}$ with driver $\xi_{\mathcal{P}} \in S_0$ which minimizes the Loewner energy among all curves welding $\mathcal{P}$.
\item\label{Thm:MinimizersConverge} If $\{\mathcal{P}_n\}$ is any sequence of partitions with $|\mathcal{P}_n| \rightarrow 0$, and $\{\gamma_n\}$ is a sequence of corresponding Loewner-energy minimizers from $(i)$, then in the half-plane capacity parametrizations of the curves,
\begin{align}\label{Lim:CurvesConverge}
\lim_{n \rightarrow \infty} \| \gamma_n - \gamma\|_{\infty[0,T']} =0
\end{align}
for any $[0,T'] \subset [0,T)$. Furthermore, the Loewner energies of the entire curves satisfy
\begin{align}\label{Lim:EnergiesIncrease}
\lim_{n \rightarrow \infty} I_L(\gamma_n) = I_L(\gamma).
\end{align}
If the partitions are nested, $\mathcal{P}_{n} \subset \mathcal{P}_{n+1}$ for all $n$, the sequence $I_L(\gamma_n)$ is non-decreasing.
\end{enumerate}
\end{theorem}
\noindent We note that the reason we must restrict to $[0,T'] \subset [0,T)$ in \eqref{Lim:CurvesConverge} is that, \emph{a priori}, hcap$(\gamma_n)$ could be less than hcap$(\gamma)$ for all $n$.\footnote{Recall the similar technicality due to the changing domains of the $\tau_n$ discussed in Example \ref{Eg:NoEndpoints}.} The proof will show that if we rescale all the $\gamma_n$'s to have the ``correct'' time $T$ via setting $\tilde{\gamma}_n := \sqrt{\frac{T}{T_n}}\gamma_n$, then
\begin{align}\label{Lim:CurvesConverge2}
\|\tilde{\gamma}_n - \gamma\|_{\infty[0,T]} \rightarrow 0
\end{align}
in the half-plane-capacity parametrizations. In fact, our strategy to prove \eqref{Lim:CurvesConverge} will be to first show \eqref{Lim:CurvesConverge2}.
Note also that we normalize so that $\xi(0)=0$ (corresponding to the conformal welding exchanging intervals on either side of the origin). Considering $\gamma$ as generated by $\xi$ on $[0,T]$, we thus have $\gamma(0) = x=\xi(T)$, which is not necessarily zero.
We precede the proof by collecting several known results that we will use.
\begin{proposition}[Lemma 4.2 \cite{LMR}]\label{Prop:LMR}
Let $\gamma_n, \tilde{\gamma}:[0,T] \rightarrow \mathbb{H} \cup \{0\}$ be simple curves parametrized by half-plane capacity, with $\lambda_n \in S_0([0,T])$ the downwards driving functions for $\gamma_n$. If $\|\gamma_n - \tilde{\gamma}\|_{\infty[0,T]} \rightarrow 0$ and there exists $\lambda \in C_0([0,T])$ such that $\|\lambda_n - \lambda\|_{\infty[0,T]} \rightarrow 0$, then $\tilde{\gamma} = \gamma^\lambda$. That is, $\tilde{\gamma}$ is the curve generated by $\lambda$.
\end{proposition}
\begin{proposition}[Prop. 2.1$(iii)$ \cite{ShekharFriz}]\label{Prop:Holder1/2}
If $I_L(\gamma) \leq M$, the half-plane capacity parametrization of $\gamma$ is H\"{o}lder-$1/2$ with H\"{o}lder semi-norm $|\gamma|_{1/2} \leq Ce^{CM}$ for some $C>0$.
\end{proposition}
\noindent Note that this result also follows, although without the explicit bound on $|\gamma|_{1/2}$, from \cite[proof of Lemma 4.1]{LMR} and the fact that finite-energy curves have locally-small H\"{o}lder norm.
We also need a continuity property of half-plane capacity. For a set $F \subset \mathbb{H}$, the \emph{(closed) $\epsilon$-neighborhood of $F$ in $\mathbb{H}$} is
\begin{align}\label{Def:EpsilonNeighborhood}
F^\epsilon := \mathbb{H} \cap \bigcup_{z \in F} \overline{B_\epsilon(z)},
\end{align}
and if $F$ is bounded and relatively closed in $\mathbb{H}$, $\fil(F)$ is the complement of the unbounded connected component of $F^c$.
\begin{proposition}[Lemma 4.4 \cite{Kemp}]\label{Prop:hcapUniform1}
There are positive constants $\alpha$ and $C=C(R)$ such that if a compact $\mathbb{H}$-hull $K$ satisfies $K \subset \fil(K^\epsilon) \subset B_R(x)$ for some $x \in \mathbb{R}$, then
\begin{align}\label{Ineq:hcapHolder}
\hcap(K^\epsilon) - \hcap(K) \leq C\epsilon^\alpha.
\end{align}
\end{proposition}
\noindent The actual statement in \cite{Kemp} is slightly stronger, using $\hcap(\fil(K^\epsilon))$ instead of $\hcap(K^\epsilon)$, but \eqref{Ineq:hcapHolder} suffices for our purposes (while $K^\epsilon$ may not have simply-connected complement, its half-plane capacity is still well defined through the probabilistic definition of $\hcap$ in \eqref{Def:hCapProbability}.) Recall that the Hausdorff distance between closed sets $E,F$ is
\begin{align*}
d_h(E,F) := \inf\Big\{\, \epsilon >0 \; | \; E \subset \bigcup_{z \in F} \overline{B_\epsilon(z)} \quad \text{and} \quad F \subset \bigcup_{z \in E} \overline{B_\epsilon(z)} \,\Big\}.
\end{align*}
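For finite point sets, such as sampled boundaries of hulls, the Hausdorff distance can be computed directly from the definition. The Python sketch below (an illustration of ours) encodes planar points as complex numbers:

```python
def hausdorff_distance(E, F):
    """Hausdorff distance between finite planar point sets,
    with points represented as complex numbers."""
    to_F = max(min(abs(e - f) for f in F) for e in E)
    to_E = max(min(abs(e - f) for e in E) for f in F)
    return max(to_F, to_E)

# F has the extra point 1 + i, at distance 1 from the nearest point of E
print(hausdorff_distance([0, 1], [0, 1, 1 + 1j]))  # 1.0
```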
Proposition \ref{Prop:hcapUniform1} immediately yields the following.
\begin{corollary}\label{Prop:hcapHolder}
Fix $R>0$. The half-plane capacity, as a map from the compact $\mathbb{H}$-hulls inside $\overline{B_R(0)}$ equipped with $d_h(\cdot,\cdot)$ to the reals, is H\"{o}lder continuous.
\end{corollary}
\begin{proof}
For compact $\mathbb{H}$-hulls $K_1, K_2 \subset \overline{B_R(0)}$, write $d:= d_h(K_1,K_2) \leq 2R$. Using the superscript notation of \eqref{Def:EpsilonNeighborhood}, we see by $\hcap$ monotonicity and \eqref{Ineq:hcapHolder} that
\begin{align*}
|\hcap(K_1) - \hcap(K_2)| &\leq |\hcap(K_1) - \hcap(K_1^d)| + |\hcap(K_2^{2d})-\hcap(K_2)|\\
&\leq (C +C2^\alpha)d^\alpha ,
\end{align*}
where $C=C(4R)$ from Proposition \ref{Prop:hcapUniform1}.
\end{proof}
\noindent We immediately obtain the following, which is the form of the continuity result we will use.
\begin{corollary}\label{Cor:hcapContinuousCurves}
Let $\xi_n \in S([0,T_n])$ and $\xi \in S([0,T])$ be drivers generating half-plane-capacity parametrized curves $\gamma_n:[0,T_n] \rightarrow \mathbb{H} \cup \{\xi_n(T_n)\}$ and $\gamma:[0,T] \rightarrow \mathbb{H} \cup \{\xi(T)\}$, respectively. If $\gamma_n$ converges uniformly to $\gamma$ in some parametrization, which is to say
\begin{align*}
\| \gamma_n \circ \sigma_n - \gamma \circ \sigma \|_{\infty[0,A]} \rightarrow 0
\end{align*}
for some increasing, continuous functions $\sigma_n: [0,A] \rightarrow [0,T_n]$, $\sigma:[0,A] \rightarrow [0,T]$, then $\hcap(\gamma_n) \rightarrow \hcap(\gamma)$.
\end{corollary}
We proceed with the proof of Theorem \ref{Cor:Zipper}. In \cite[Lemma 4.1]{MesikeppEnergy}, the second author proved the existence of Loewner-energy minimizers for a single pair $(x_1,y_1)$, also using Theorem \ref{Thm:DriverToWeldingConvergence}. We extend the idea for the existence of $\gamma_{\mathcal{P}}$, repeating some details for the convenience of the reader.
\begin{proof}
$(i)$ We first observe that the set
\begin{align*}
D(\mathcal{P},C) := \{\, \xi \in S_0 \; : \; I_L(\xi) \leq C, \gamma^\xi \text{ welds }\mathcal{P} \,\}
\end{align*}
is non-empty for sufficiently-large $C$. This is because one can repeatedly map up with conformal maps to the complement of circular arc segments orthogonal to $\mathbb{R}$, welding two points at a time to the base of the arc, to obtain a simple curve, driven by some $\tilde{\xi}$, whose welding $\tilde{\varphi}$ welds $\mathcal{P}$. Each circular arc segment has finite energy, and so $I_L(\tilde{\xi}) <\infty$ because there are finitely-many pairs $(x_j,y_j)$.\footnote{This iterative construction is an example of the welding zipper algorithm. See \cite[Lemma 5.7]{Mesikepp} for details on the finiteness of energy of the circular arcs, such as an explicit energy formula.} Next, take a sequence $\xi_n \in D_0 := D(\mathcal{P}, I_L(\tilde{\xi}))$, such that
\begin{align*}
\lim_{n \rightarrow \infty} I_L(\xi_n) = \inf\{\, I_L(\eta) \; : \; \eta \in D_0 \,\}.
\end{align*}
We claim we may suppose all the $\xi_n$ are defined on a universal interval $[0,T_U]$ of capacity time. Indeed, since the diameter of any curve $\tilde{\gamma}$ welding $-a$ to $b$ is comparable to $b+a$ \cite[top of p.74]{Lawler}, and
\begin{multline*}
\hcap(\tilde{\gamma}) \leq \hcap\big( \diam(\tilde{\gamma})\overline{B_1(0)}\cap \mathbb{H} \big)\\ \leq C^2(a+b)^2\hcap\big(\overline{B_1(0)} \cap \mathbb{H} \big) \leq C^2(a+b)^2
\end{multline*}
by scaling and monotonicity of $\hcap(\cdot)$ and \eqref{Eq:Diskhcap}, the times $T_n$ for $\xi_n$ to weld $-a$ to $b$ are all bounded. Furthermore, we can extend any $\xi_n$ past $T_n$ by the constant function $\xi_n(T_n)$ without adding energy, and thus some such $T_U$ exists.
Since $\{I_L(\xi_n)\}$ is bounded, by \eqref{Eq:LoewnerEnergy} and H\"{o}lder's inequality, $\{\xi_n\}$ is bounded and equicontinuous on $[0,T_U]$ and thus precompact. If $\xi_{n_k} \rightarrow \xi'$ is any subsequential uniform limit, by the lower-semicontinuity of the energy in this topology \cite[\S2.2]{WangReverse},
\begin{align*}
I_L(\xi') \leq \liminf_{k \rightarrow \infty} I_L(\xi_{n_k}) = \inf\{\, I_L(\eta) \; : \; \eta \in D_0 \,\},
\end{align*}
and so $\xi'$ is a minimizer so long as it belongs to $D_0$. That is, we must show $\xi' \in S_0$ and that $\gamma^{\xi'}$ welds $\mathcal{P}$. The first property is immediate since $I_L(\xi')< \infty$; in fact, $\gamma^{\xi'}$ is a $K$-quasiarc for some $K = K(I_L(\xi'))$ \cite[Prop. 2.1]{WangReverse}. We claim $\gamma^{\xi'}$ welds $\mathcal{P}$ by Theorem \ref{Thm:DriverToWeldingConvergence}. Indeed, by extending $\xi'$ past $T_U$ to $[0,T_U']$ with the constant value $\xi'(T_U)$ on $[T_U,T_U']$, if necessary, we may assume that $[x_N, y_N]$ is in the interior of the interval welded by $\xi'$. Similarly extending all the $\xi_{n_k}$ on $[T_U,T_U']$ to be constantly their terminal value $\xi_{n_k}(T_U)$, we still have uniform convergence on $[0,T_U']$, and Theorem \ref{Thm:DriverToWeldingConvergence} then yields
\begin{align*}
y_j = \varphi_{n_k}(x_j) \rightarrow \varphi'(x_j)
\end{align*}
for each $j$, where $\varphi_{n_k}$ and $\varphi'$ are the weldings associated to $\xi_{n_k}$ and $\xi'$, respectively. Thus $\xi' \in D_0$, and a minimizer $\gamma_{\mathcal{P}} = \gamma^{\xi'}$ exists among all curves welding $\mathcal{P}$.
\bigskip
\noindent $(ii)$ Now suppose we have a sequence of partitions $\mathcal{P}_n = \{(x_j,y_j) \}_{j=0}^{N_n}$ with $|\mathcal{P}_n| \rightarrow 0$, and let $\gamma_n$ be a minimizer for welding $\mathcal{P}_n$ with upwards driver $\xi_n \in S_0([0,T_n])$, which welds $-a$ to $b$ at time $T_n$. By extending the drivers by their ending values, we may again assume there is a single interval $[0,T_U]$ on which all the $\xi_n$ are defined, and furthermore that we have $\epsilon>0$ such that each $\xi_n$ welds an interval including \begin{align}\label{Int:bigger}
[-a-\epsilon,b+\epsilon]
\end{align}
by time $T_U$.
We show $\xi_n \rightarrow \xi$ on $[0,T_U]$ (we have also extended $\xi$ by the constant value $\xi(T)$, if needed). First note that the weldings $\varphi_n$ for $\xi_n$ converge uniformly to the welding $\varphi$ for $\xi$, which is clear from monotonicity and the agreement on $\mathcal{P}_n$ with $|\mathcal{P}_n| \rightarrow 0$. As above, uniformly-bounded energy implies $\{\xi_n\}$ is precompact, and taking any uniform limit $\xi_{n_k} \rightarrow \xi'$ on $[0,T_U]$ we have that
\begin{align*}
I_L(\xi') \leq \liminf_{k \rightarrow \infty}I_L(\xi_{n_k}) \leq I_L(\xi),
\end{align*}
and so $\xi'$ has finite energy and generates a simple curve $\gamma'$, as noted above.\footnote{Of course, ``prime'' here is notation and has nothing to do with derivative.} In particular, we find
\begin{align*}
\max\{\,\tau(-a-\epsilon/2;\xi'), \tau(b+\epsilon/2;\xi')\,\} < T_U
\end{align*}
by \eqref{Int:bigger} and Theorem \ref{Cor:TimesPointwise}, and so by Theorem \ref{Thm:DriverToWeldingConvergence}, $\xi'$ welds identically to $\xi$ on $[-a,b]$. By injectivity of the welding-to-curve map in the category of quasiarcs, the curves that $\xi'$ and $\xi$ generate by welding $[-a,0]$ to $[0,b]$ are the same, up to post-composition by an affine map $z \mapsto cz+d$ for some $c,d \in \mathbb{R}$. However, since both $\gamma'$ and $\gamma$ are normalized by the Loewner flow, we have $c=1,d=0$ and consequently $\xi' \equiv \xi$ on $[0,T_U]$.
All subsequential limits of $\{\xi_n\}$ are therefore $\xi$, and we conclude $\xi_n \xrightarrow{u} \xi$ on $[0,T_U]$. In particular, convergence of hitting times yields
\begin{align}\label{Conv:ConstructedTimesToTrue}
T_n = \tau(-a; \xi_n) \rightarrow \tau(-a; \xi) = T
\end{align}
and hence, by uniform convergence,
\begin{align}\label{Conv:CurveBases}
\xi_n(T_n) \rightarrow \xi(T).
\end{align}
It remains to show that the curves $\gamma_n$ generated by $\xi_n$ converge to $\gamma$ on any $[0,T']$. Recall that since the inverse Loewner transform $\eta \mapsto \gamma^{\eta}$ is not continuous from $S_0([0,T_U])$ to $C([0,T_U])$ \cite[Ex. 4.49]{Lawler}, this is not immediate. We start by rescaling the minimizers to all have the same capacity time $T$, setting $\alpha_n := \sqrt{T/T_n}$ and $\tilde{\gamma}_n := \alpha_n\gamma_n$, and show that the $\tilde{\gamma}_n$ converge uniformly in their half-plane capacity parametrizations to $\gamma$ on $[0,T]$. Note that the $\tilde{\gamma}_n$ have driving functions $\tilde{\xi}_n(\cdot) = \alpha_n \xi_n(\cdot/\alpha_n^2)$ which satisfy
\begin{align*}
|\tilde{\xi}_n(t)-\xi(t)| \leq& \sqrt{\frac{T}{T_n}} \Big| \xi_n\Big( t \, \frac{T_n}{T} \Big) - \xi\Big( t \, \frac{T_n}{T} \Big) \Big| + \Big| \sqrt{\frac{T}{T_n}}\xi\Big( t \, \frac{T_n}{T} \Big) - \xi(t)\Big|,
\end{align*}
where we are using the extension of $\xi$ to $[0,T_U]$ as needed. For large $n$, this is small by \eqref{Conv:ConstructedTimesToTrue}, the uniform continuity of $\xi$ on $[0,T_U]$, and the convergence $\xi_n \xrightarrow{u} \xi$, showing
\begin{align}\label{Conv:ScaledDrivers}
\|\tilde{\xi}_n - \xi\|_{\infty[0,T]} \rightarrow 0,
\end{align}
which we will use to prove
\begin{align}\label{Conv:GoalCurveConvergence}
\|\tilde{\gamma}_n - \gamma\|_{\infty[0,T]} \rightarrow 0.
\end{align}
To begin with, note that $\{ \tilde{\gamma}_n \}$ is a bounded sequence both in diameter and Loewner energy (the former by \cite[top of p.74]{Lawler} again). By Proposition \ref{Prop:Holder1/2}, the latter implies it is also equicontinuous in capacity parametrization, and thus a precompact family. Taking any uniform subsequential limit $\tilde{\gamma}_{n_k} \rightarrow \tilde{\gamma}$ on $[0,T]$, we show that $\tilde{\gamma}$ is the half-plane capacity parametrization for $\gamma$, which is to say,
\begin{align}\label{Eq:GoalSubsequentialCurveConvergence}
\tilde{\gamma}([0,t])=\gamma([0,t])
\end{align}
for each $0 \leq t \leq T$. As we are considering an arbitrary subsequential limit $\tilde{\gamma}$, if \eqref{Eq:GoalSubsequentialCurveConvergence} holds, so does \eqref{Conv:GoalCurveConvergence}.
We first show \eqref{Eq:GoalSubsequentialCurveConvergence} for $t=T$. Since the $\tilde{\gamma}_{n}$ are uniformly $K$-quasiarcs, we can write $\tilde{\gamma}_{n} = q_{n}([0,i])$ for some $K$-quasiconformal self-map $q_{n}$ of $\mathbb{H}$ that fixes $\infty$, sends $0$ to the base $\tilde{\xi}_n(T)$ of the curve, and sends $i$ to its tip $\tilde{\gamma}_{n}(T)$ \cite[Prop. 2.1]{WangReverse}. We claim that
\begin{align}\label{Ineq:PointsSeparated}
|q_{n}(i) - q_{n}(0)| = |\tilde{\gamma}_n(T) - \tilde{\gamma}_n(0)|\geq \epsilon>0
\end{align}
for all $n$. If so, then $\{q_n\}$ is a normal family, as then the points $\{0,i,\infty\}$ have images under the $q_n$ which are uniformly bounded below in spherical distance (we extend the $q_n$ by reflection $q_n(\bar{z}) = \overline{q_n(z)}$ to obtain quasiconformal mappings of the sphere $\hat{\mathbb{C}}$ and apply \cite[Thm. 2.1]{LehtoQC}). Recall that if $\tilde{\gamma}_n \subset \overline{B_R(\tilde{\gamma}_n(0))}\cap \mathbb{H}$, then $2T = \hcap(\tilde{\gamma}_n) \leq R^2$, and so there exists a point $\tilde{\gamma}_n(t_n)$ on $\tilde{\gamma}_n$ such that
\begin{align}\label{Ineq:PtFarAway}
\sqrt{T} \leq |\tilde{\gamma}_n(t_n) - \tilde{\gamma}_n(0)|,
\end{align}
say. As the $\tilde{\gamma}_n$ are uniformly $K$-quasiarcs, by Ahlfors' three point condition we have some $C=C(K)$ such that
\begin{align*}
|\tilde{\gamma}_n(0) -\tilde{\gamma}_n(t_n)| \leq C|\tilde{\gamma}_n(0) - \tilde{\gamma}_n(T)|.
\end{align*}
Combined with \eqref{Ineq:PtFarAway}, we therefore have \eqref{Ineq:PointsSeparated} and consequently normality of $\{q_n\}$.
In particular, for $\tilde{\gamma}_{n_k} = q_{n_k}([0,i])$, we may move to a further subsequence of $\{q_{n_k}\}$, which we also label the same, and obtain a locally-uniform limit $q$, which is either a point in $\mathbb{R} \cup \{\infty\}$ or a $K$-quasiconformal homeomorphism of $\mathbb{H}$ \cite[Thm. 2.3]{LehtoQC}. To see that the former cannot happen, recall by \cite[Prop. 3.1]{WangReverse} that $I_L(\tilde{\gamma}_n) \geq -8\log(\sin(\theta_n))$, where $\theta_n = \arg(\tilde{\gamma}_n(t_n) - \tilde{\gamma}_n(0))$, and so $\theta_n$ is uniformly bounded away from 0 and $\pi$. Combined with \eqref{Ineq:PtFarAway} and boundedness of $\{\tilde{\gamma}_n\}$, we conclude that the $q_{n_k}$ cannot degenerate to any $x \in \mathbb{R}$ or blow up to $\infty$, and hence that the $\tilde{\gamma}_{n_{k}}$ have a limiting Jordan curve $\tilde{\Gamma}:=q([0,i])$, where the limit is uniform in the parametrization given by the quasiconformal maps.
In particular, $\hcap(\tilde{\Gamma}) = 2T$ by Corollary \ref{Cor:hcapContinuousCurves}. For any $0 \leq t \leq T$, let $y_t \in [0,1]$ and $t_{n_k} \in [0,T]$ be such that $q([0,iy_t]) = \tilde{\Gamma}([0,t])$ and $\tilde{\gamma}_{n_k}(t_{n_k})=q_{n_k}(iy_t)$. We then have the point-wise convergence
\begin{align*}
\tilde{\gamma}_{n_k}(t_{n_k}) \rightarrow \tilde{\Gamma}(t),
\end{align*}
and since $q_{n_k} \xrightarrow{u} q$ on $[0,iy_t]$, by Corollary \ref{Cor:hcapContinuousCurves} again we conclude $t_{n_k} \rightarrow t$, and thus that
\begin{align*}
\tilde{\gamma}_{n_k}(t) \rightarrow \tilde{\Gamma}(t)
\end{align*}
by Proposition \ref{Prop:Holder1/2}. Hence our subsequential limit $\tilde{\gamma} = \tilde{\Gamma}$ is a simple curve.
By \eqref{Conv:ScaledDrivers} we have that the downwards drivers $\tilde{\lambda}_{n_k}(t) = \tilde{\xi}_{n_k}(T-t) - \tilde{\xi}_{n_k}(T)$ for the centered curves $\tilde{\gamma}_{n_k} - \tilde{\xi}_{n_k}(T)$ converge uniformly on $[0,T]$ to $\lambda(t) = \xi(T-t) - \xi(T)$, the downwards driver for $\gamma-\xi(T)$. Proposition \ref{Prop:LMR} then yields $\tilde{\gamma} = \gamma$, showing that all subsequential limits are the same, and completing the proof of \eqref{Conv:GoalCurveConvergence}.
Note \eqref{Lim:CurvesConverge} is an immediate consequence: $\gamma_n$ is defined on $[0,T'] \subset [0,T)$ for large $n$ by \eqref{Conv:ConstructedTimesToTrue}, and for $0 \leq t \leq T'$ we observe
\begin{align*}
|\gamma_n(t) - \gamma(t)| &\leq \Big|\gamma_n(t) - \sqrt{\frac{T}{T_n}}\gamma_n\Big(t \frac{T_n}{T} \Big) \Big| + \|\tilde{\gamma}_n-\gamma\|_{\infty[0,T]}\\
&\leq B\sqrt{|T-T_n|} + \|\tilde{\gamma}_n-\gamma\|_{\infty[0,T]},
\end{align*}
where the second line follows from \eqref{Conv:ConstructedTimesToTrue}, uniform boundedness of $\{\gamma_n\}$, and Proposition \ref{Prop:Holder1/2}. We conclude \eqref{Lim:CurvesConverge} holds.
We lastly turn to the energy limit \eqref{Lim:EnergiesIncrease}. By minimization,
\begin{align*}
I_L(\gamma_n) = \frac{1}{2}\int_0^{T_n} \dot{\xi}_n^2(t) dt \leq \frac{1}{2}\int_0^{T} \dot{\xi}^2(t) dt = I_L(\gamma)
\end{align*}
for all $n$, and so $\limsup_{n \rightarrow \infty} I_L(\gamma_n) \leq I_L(\gamma)$. Write $I_{L,t}(\cdot)$ for the Loewner energy on an interval $[0,t]$. Since our extended drivers satisfy $\xi_n \xrightarrow{u} \xi$ on $[0,T_U]$, by energy lower semicontinuity \cite[\S2.2]{WangReverse},
\begin{align*}
I_{L,T}(\gamma) = I_{L,T_U}(\xi) \leq \liminf_{n \rightarrow \infty} I_{L, T_U}(\xi_n) = \liminf_{n \rightarrow \infty} I_{L, T_n}(\xi_n) \leq I_{L,T}(\gamma),
\end{align*}
where we recall that extending the drivers did not add any energy. Thus $\limsup I_L(\gamma_n) \leq \liminf I_L(\gamma_n)$ and the claimed limit follows. If the partitions are nested, $\gamma_{n+1}$ also welds $\mathcal{P}_n$, showing $I_L(\gamma_n) \leq I_L(\gamma_{n+1})$.
\end{proof}
\section{Problems}\label{Sec:Problems}
We close with two problems that appear natural from our study of drivers, weldings, and hitting times.
\begin{problem}
Suppose weldings $\varphi_n, \varphi$ correspond to drivers $\xi_n, \xi \in S_0$. If all the weldings share a fixed modulus of continuity $\omega$, does $\varphi_n \xrightarrow{u} \varphi$ imply $\xi_n \xrightarrow{u} \xi$?
\end{problem}
\begin{problem}
Determine if $\xi \mapsto \tau$ is injective on a larger subcollection $S' \subset S$ (with equality, perhaps), beyond just constant drivers. Is the map a homeomorphism from $S'$ onto its image?
\end{problem}
\section{Appendix: Integral formulas relating $\xi, \tau$ and $\varphi$}\label{Appendix:Formulas}
\begin{lemma}
Let $\xi$ be an upwards driver generating a simple curve, and let $x_0$ be a point with $\tau(x_0;\xi)<\infty$. The hitting time $\tau := \tau(x_0;\xi)$ satisfies
\begin{align}\label{Eq:HittingTimeIntegral1}
\tau = \frac{1}{4}(x_0^2-x(\tau)^2)-\int_0^{\tau}\frac{\xi(t)}{x(t)-\xi(t)}dt.
\end{align}
Furthermore, if $y_0 = \varphi(x_0)$ is the image of $x_0$ under the conformal welding $\varphi$ generated by $\xi$, then
\begin{align}\label{Eq:HittingTimeIntegral2}
\varphi(x_0)^2 - x_0^2 = 4\int_0^\tau \frac{\xi(t)(y(t)-x(t))}{(\xi(t)-x(t))(y(t)-\xi(t))}dt.
\end{align}
\end{lemma}
\begin{proof}
If $x_0 = \xi(0)$ the first formula is obvious. If not, we observe that $\partial_t (x(t)^2) = -4x(t)/(x(t)-\xi(t))$, yielding
\begin{align*}
x(\tau-\epsilon)^2 - x_0^2 = -4 \int_0^{\tau-\epsilon} \frac{x(t)}{x(t)-\xi(t)}dt.
\end{align*}
Since $t \mapsto x(t)$ is continuous up to $t=\tau$, sending $\epsilon \rightarrow 0$ gives
\begin{equation*}
x(\tau)^2 - x_0^2 = -4 \int_0^{\tau} \frac{x(t)}{x(t)-\xi(t)}dt = -4 \tau - 4 \int_0^\tau \frac{\xi(t)}{x(t)-\xi(t)}dt,
\end{equation*}
which yields \eqref{Eq:HittingTimeIntegral1}.
Secondly, since $x(\tau) = y(\tau) = \xi(\tau)$, using \eqref{Eq:HittingTimeIntegral1} for both $x_0$ and $y_0$ and equating the right-hand sides yields
\begin{align*}
x_0^2 - 4\int_0^\tau \frac{\xi(t)}{x(t)-\xi(t)}dt = y_0^2 - 4\int_0^\tau \frac{\xi(t)}{y(t)-\xi(t)}dt,
\end{align*}
which is equivalent to \eqref{Eq:HittingTimeIntegral2}.
\end{proof}
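Identity \eqref{Eq:HittingTimeIntegral1} is also easy to check numerically: integrate the upwards Loewner ODE with an Euler scheme while accumulating the integral along the trajectory, then compare the elapsed time with the right-hand side. The Python sketch below (our own illustration; the linear driver is an arbitrary test case) does exactly this:

```python
def check_hitting_time_identity(x0, xi, dt=1e-6, tol=1e-3):
    """Euler scheme for dx/dt = -2/(x - xi(t)), accumulating the
    integral of xi(t)/(x(t) - xi(t)) until x is absorbed by the driver."""
    x, t, integral = x0, 0.0, 0.0
    while abs(x - xi(t)) > tol:
        gap = x - xi(t)
        integral += xi(t) / gap * dt
        x -= 2.0 * dt / gap
        t += dt
    rhs = 0.25 * (x0 * x0 - x * x) - integral
    return t, rhs   # the two agree up to discretization error

tau, rhs = check_hitting_time_identity(-0.3, lambda t: 0.5 * t)
print(abs(tau - rhs))  # small
```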
\bibliographystyle{plain}
https://arxiv.org/abs/2208.06324 | On the Connectivity and Diameter of Geodetic Graphs | A graph $G$ is geodetic if between any two vertices there exists a unique shortest path. In 1962 Ore raised the challenge to characterize geodetic graphs, but despite many attempts, such characterization still seems well beyond reach. We may assume, of course, that $G$ is $2$-connected, and here we consider only graphs with no vertices of degree $1$ or $2$. We prove that all such graphs are, in fact $3$-connected. We also construct an infinite family of such graphs of the largest known diameter, namely $5$. | \section{Geodetic Graphs and Connectivity}
In this section we prove \autoref{No2Connected}.
From here on we assume $G$ is geodetic with $\delta(G)\ge 3$. We denote the (unique) $v,u$ geodesic in $G$ by $\pi(v,u)$, and by convention we enumerate its vertices in order from $v$ to $u$. Arguing by contradiction, let $S=\{x,y\}$ be a vertex cut for which $d(x,y)$ is as small as possible, and denote this distance by $\ell$. We denote by $\Pi=\pi(x,y)$ the $x,y$ geodesic, with vertices $\Pi= x, x_1, x_2, \ldots, x_{\ell-1}, y$. Let $A_1,\ldots, A_k$ be the connected components of $G\setminus S$. Clearly, $x_1, x_2, \ldots, x_{\ell-1}$ all belong to the same connected component of $G\setminus S$, say they are in $A_1$.
\begin{lemma}\label{GeodeticComponents}
For every $i$, if $u,v\in A_i$, then $\pi_G(u,v)$ is contained in $A_i\cup \Pi$.
\end{lemma}
\begin{proof}
If $\pi_G(u,v)$ is not contained in $A_i\cup \Pi$, then it must leave $A_i$ and come back. But the only way to exit $A_i$ is via $x$ or $y$. But then $\pi_G(u,v) = \pi_G(u,x)*\Pi * \pi_G(y,v)$, since $\Pi$ is the $x,y$ geodesic. Thus $\pi_G(u,v)$ is contained in $A_i\cup \Pi$, as claimed.
\end{proof}
\begin{lemma}\label{Two Components}
The graph $G\setminus S$ has exactly two connected components, i.e., $k=2$.
\end{lemma}
\begin{proof}
Suppose toward contradiction that $A_1,A_2,A_3\neq\emptyset$. Let $\pi_i$ be an $x,y$ geodesic in the subgraph induced by $A_i\cup S$ (in particular, $\pi_1 = \Pi$). At least two of the integers $|\pi_1|, |\pi_2|, |\pi_3|$ have the same parity, so the corresponding paths form a cycle $C$ of even length. We consider two cases:
\begin{enumerate}
\item $C = \pi_1*\pi_2^{-1}$: Since $\abs{\pi_2}\ge\abs{\pi_1}+2$, we can find two vertices $v_0,v_1\in A_2$ which are antipodal points on $C$. However, $\pi_2(v_0,v_1) = \pi_G(v_0,v_1)$ by \autoref{GeodeticComponents}. But $\pi_C(v_0,x)*\Pi*\pi_C(y,v_1)$ is another $v_0,v_1$ path of the same length, contradicting geodeticity.
\item $C = \pi_2* \pi_3^{-1}$: Let $v_0,v_1$ be $C$-antipodal points with $v_0\in A_2, v_1\in A_3$.
The two arcs of $C$ that $v_0, v_1$ define are two $v_0,v_1$ paths of equal length. By assumption $G$ is geodetic so $\pi_G(v_0,v_1)$ differs from both these paths. But $\pi_G(v_0,v_1)$ must traverse either $x$ or $y$. This means e.g., that $|\pi_G(v_0,x)| < |\pi_2(v_0,x)|$ contrary to the assumption that $\pi_2$ is a shortest $x,y$ path in $A_2\cup S$.
\end{enumerate} \end{proof}
A \emph{graph contraction} between two graphs $\Gamma, \Omega$, is a function $f:V(\Gamma)\to V(\Omega)$ such that for any $vu\in E(\Gamma)$, either $f(v)f(u)\in E(\Omega)$ or $f(v) = f(u)$. Clearly, for any $u,v\in V(\Gamma)$ it holds that $d_{\Omega}(f(v),f(u))\le d_\Gamma(v,u)$. Therefore if $\Gamma$ is connected, so is $f[\Gamma]$.\\
Let $\ZZ^2_\infty$ be the graph with vertex set $\ZZ^2$ where $p, q\in \ZZ^2$ are neighbors whenever $\norm{p-q}_{\infty} = 1$.
We denote coordinates in this plane by $(\xi,\eta)$ and employ this graph as a visualization tool
for $G$. This is accomplished using the
mapping $\varphi:V(G)\to \ZZ^2$, where
\begin{align*}
\varphi(v) =\begin{cases}
(d(x,v), d(y,v)) & v\in A_1\cup S \\
(-d(y,v) + \ell, -d(x,v) + \ell) & v\in A_2 \cup S
\end{cases}
\end{align*}
(Recall that $\ell = d(x,y)$). We denote the image of $G$ in $\ZZ^2_\infty$
by $\Omega = \varphi[G]$. Clearly $\varphi$ is a graph contraction, and therefore $\varphi[A_i]$ is a connected subgraph of $\Omega$. For any $j$ define
\[
R_j = \cbk{v\in A_1\mid d(v,y) - d(v,x) = j}\qquad L_j = \cbk{v\in A_2\mid d(v,y) - d(v,x) = j}
\]
For ease of notation, we add $x$ to $R_\ell,L_\ell$ and $y$ to $R_{-\ell}$, $L_{-\ell}$. We denote $R = \bigcup_{j=-\ell}^\ell R_j$ and $L = \bigcup_{j=-\ell}^\ell L_j$. By \autoref{Two Components} $G = R\cup L$, and $\varphi[L_j],\varphi[R_j]$ are included in the straight line $\{(\xi,\xi + j)|\xi\in \ZZ\}$.
\input{TikzDrawings/phi_example_bw}
In using $\varphi(G)$ as a visualization tool, we keep in mind that $\varphi$ is not injective. We note that $\varphi[\Pi]$ is the interval between $(0,\ell)$ and $(\ell,0)$, and that by geodeticity $\varphi^{-1}((j,\ell-j))$ is the $j$-th vertex in $\Pi$. In drawing $\varphi[R]$ and $\varphi[L]$, we note that $\varphi[R]$ is ``to the right'' of $\varphi[\Pi]$, and $\varphi[L]$ is ``to the left'' of $\varphi[\Pi]$.
\begin{lemma}\label{NeighborsInPhi}
If $u,v$ are neighbors in $G$, $u\in R_j, v\in R_k$, then $\abs{k-j}\le 2$. Moreover, if $\abs{k-j} = 2$,
then the edge $\varphi(v)\varphi(u)$ is one of the two $ (\xi,\xi+j)\sim (\xi \pm1,\xi +j\mp 1)$.
\end{lemma}
\begin{proof}
A simple application of the triangle inequality.
\end{proof}
\begin{lemma}\label{lemma: existance of SE edges}
Suppose that $R_j, R_{j-2} \neq \emptyset$ whereas $R_{j-1} = \emptyset$. Then there exists an edge between $R_j$ and $R_{j-2}$.
\end{lemma}
\begin{proof}
Since $A_1$ is connected, there must be a path between $R_j$ and $R_{j-2}$. Consider a shortest such path. It must completely reside in $R_j\cup R_{j-1}\cup R_{j-2}$, and since $R_{j-1}=\emptyset$, its first step outside of $R_j$ must be to a vertex in $R_{j-2}$, as claimed.
\end{proof}
The previous two lemmas clearly apply to $L$ as well.
\begin{lemma}\label{No Antipodal Distance difference}
If $\abs{j}< \ell$, then either $R_j$ or $L_{-j}$ must be empty.
\end{lemma}
\begin{proof}
We show that if there exist vertices $v\in R_j$, $u\in L_{-j}$, then $G$ is not geodetic, because $d(u,v)$ is realized by two distinct paths. Any $u,v$ path must clearly traverse either $x$ or $y$. But the assumption that $\abs{j} < \ell$ implies that the shortest $u,x,v$ path cannot traverse $y$
and the shortest $u,y,v$ path cannot traverse $x$. In particular,
the shortest $u,x,v$ path and $u,y,v$ path in $G$ are distinct. Moreover, they have the same length, because the length of a shortest $u,y,v$ resp.\ $u,x,v$ path is $d(u,y) + d(y,v)$ resp.\ $d(u,x) + d(x,v)$. Since $d(u,y) - d(u,x) = -j = d(v,x) - d(v,y)$, they have the same length, as claimed.
\end{proof}
It follows from \autoref{No Antipodal Distance difference} that at least one of $R_0, L_0$ must be empty.
We denote below $\Pi= x, x_1, x_2, \ldots, x_{\ell-1}, y$.
In particular, if $\ell=1$, then $x_1=y$. We need the following lemma.
\begin{lemma}
\label{Find square}
If $R_{\ell-1}=R_{\ell-3}=\emptyset$, then $R_\ell= \{x\}$. In particular, $x_{1}$ is the only neighbor of $x$ in $A_1\cup\Pi$.
\end{lemma}
\begin{proof}
The vertex $x_1$ has degree at least $3$, and must therefore have a neighbor in $R$.
But by assumption $R_{\ell-1} = R_{\ell-3} = \emptyset$, so $x_1$ must have a neighbor
in $R_{\ell - 2}$. Consequently, $x_1$ is not the only vertex in $R_{\ell - 2}$.
The graph $\varphi[A_1]$ is connected, and since $\varphi^{-1}(0,\ell)=\{x\}$, it does not contain the vertex
$(0,\ell)$, nor the edge $(0,\ell)\sim (1,\ell-1)$. Assume toward contradiction that $R_\ell$ contains a vertex other than $x$. Then $\varphi[R_\ell]\setminus\{(0,\ell)\}$ and $R_{\ell -2}$ are nonempty, whereas $R_{\ell-1} = \emptyset$. By \autoref{lemma: existance of SE edges}, there exists an edge $(\xi,\xi+\ell)\sim (\xi+1,\xi+\ell - 1)$. Let $v_\xi u_\xi\in E(G)$ be a pre-image of this edge, namely $\varphi(v_\xi) = (\xi,\xi+\ell)$ and $\varphi(u_\xi) = (\xi + 1, \xi + \ell -1)$. Clearly $\pi(v_\xi,x)$ must be contained in $R_\ell$. Moreover, $\pi(u_\xi,y)$ does not go through $x$, because $d(u_\xi,y) - d(u_\xi,x) = \ell-2$. Therefore, $\pi(v_\xi,x)*\Pi$ and $v_\xi * \pi(u_\xi,y)$ are two distinct $v_\xi,y$ geodesics, contrary to the assumption of geodeticity.
\end{proof}
A similar lemma can be proved for $R_{-\ell}$: if $R_{1 - \ell} = R_{3 - \ell} = \emptyset$, then $R_{-\ell} = \{y\}$. In particular, $x_{\ell-1}$ is the only neighbor of $y$ in $A_1\cup\Pi$.
\input{TikzDrawings/findSquareLemma.tex}
We can now prove \autoref{No2Connected}.
\notwoconnected*
\begin{proof}
The argument runs as follows: The sets $R_j$ are non-empty for every other value of $j$. To wit, $R_{\ell - 2j}$ is non-empty for any $1\le j\le \ell-1$, since $R_{\ell - 2j}$ contains the vertex $x_j$. But then \autoref{No Antipodal Distance difference} implies that $L_{\ell-2j}= \emptyset$, see \autoref{fig:Stage 2}. We claim that $L_{\ell - 1}$ and $L_{1-\ell}$ are nonempty, so repeated application of \autoref{lemma: existance of SE edges} yields $L_{k}\neq\emptyset$ for any $k\not\equiv \ell \mod 2$. Therefore, $R_k = \emptyset$ for every such $k$. The assumptions of \autoref{Find square} are now satisfied, so $R_\ell = \{x\}$. But then $\{x_1,y\}$ is a vertex cut. If $xy\in E(G)$, then $x_1 = y$, contrary to $G$ being $2$-connected. Otherwise, $d(x_1,y)<d(x,y)$, contrary to the minimality of $\ell$.
It is left to justify that $L_{\ell-1},L_{1-\ell}$ are nonempty. The neighbors of
$x$ in $A_2$ reside in $L_\ell\cup L_{\ell-1}$. If $x$ has no neighbor in $L_\ell$, then it has one in $L_{\ell-1}$ and we are done; so suppose $x$ has a neighbor in $L_\ell$, and note that $y$ has a neighbor $v_y\in L_{-\ell}\cup L_{1-\ell}$. Since $A_2$ is connected, there must be a path connecting these vertices. Consider a shortest such path, and its first step outside of $L_{\ell}$. It cannot land in $L_{\ell-2}$, so it must land in $L_{\ell-1}$.
\end{proof}
\input{TikzDrawings/MainProof}
\section{Geodetic blocks of diameter 4 and 5 \label{point_flag_section}}
In this section, we construct two families of geodetic blocks, the graphs in which
have diameter $4$ and $5$ respectively. We need some notation first. Let $v$ be a vertex in a
graph $G = (V,E)$ and let $i\ge 0$ be an integer. Clearly $|d(v,x)-d(v,y)|\le 1$ for every edge $e=xy$ in $G$.
We say that $e$ is {\em $v$-horizontal} resp.\ {\em $v$-vertical} if $d(v,x)=d(v,y)$ resp.
$|d(v,x)-d(v,y)|= 1$. Note that every edge in every shortest $v\to u$ path is $v$-vertical.
\begin{proposition}
$G$ is geodetic if and only if for every $v\in V$ the $v$-vertical edges form a spanning tree.
\end{proposition}
\begin{proof}
It is easy to see that the $v$-vertical edges form a spanning subgraph in every connected graph.
Suppose that $G$ is not geodetic. We exhibit a vertex $w$ for which the $w$-vertical
edges contain a cycle, and hence do not form a spanning tree. Indeed,
since $G$ is not geodetic, it has vertices $v,u$ with two distinct shortest paths $\pi\neq\pi'$ between them.
But all the edges in $\pi,\pi'$ are $v$-vertical, and together they contain a cycle, so $w=v$ works.
Conversely let $v$ be a vertex in a geodetic graph $G$,
and let $T$ be a BFS tree rooted at $v$. Clearly all edges in $T$ are
$v$-vertical, and as we now show, all edges $xy\notin T$ are $v$-horizontal. Indeed,
if such an edge $xy$ were $v$-vertical, with $d(v,x) = i$ and $d(v,y) = i+1$,
then the tree path to $y$ and the path through $x$ would be two distinct shortest paths from $v$ to $y$, contradicting geodeticity. Hence the $v$-vertical edges are exactly the edges of $T$, a spanning tree.
\end{proof}
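The proposition is easy to check by machine on small examples. The Python sketch below (our own illustration, not part of the paper) verifies it for the Petersen graph, a well-known geodetic block, and shows that a $4$-cycle, which is not geodetic, fails the spanning-tree test.

```python
from collections import deque

def make_adj(vertices, edges):
    adj = {u: [] for u in vertices}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    return adj

def bfs_dist(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def vertical_edges_span_tree(vertices, edges, v):
    d = bfs_dist(make_adj(vertices, edges), v)
    vertical = [e for e in edges if d[e[0]] != d[e[1]]]
    # a spanning tree: exactly |V| - 1 edges, and every vertex reachable
    return (len(vertical) == len(vertices) - 1 and
            len(bfs_dist(make_adj(vertices, vertical), v)) == len(vertices))

# Petersen graph: outer 5-cycle 0..4, spokes i -- i+5, inner pentagram 5..9.
petersen_edges = ([(i, (i + 1) % 5) for i in range(5)] +
                  [(i, i + 5) for i in range(5)] +
                  [(5 + i, 5 + (i + 2) % 5) for i in range(5)])
V = range(10)
print(all(vertical_edges_span_tree(V, petersen_edges, v) for v in V))  # True

# A 4-cycle is not geodetic: from any vertex all four of its edges are vertical.
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertical_edges_span_tree(range(4), c4, 0))  # False
```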
We mostly follow the notation and terminology of \cite{van2001course}.
Let $q$ be a prime power, and
let $PG_2(q)$ and $AG_2(q)$ be a projective resp.\ affine plane of order $q$.
The point sets of these geometries are denoted by $P$, and
their sets of lines (blocks) by $\Ll$. Their point-line incidence relation
is denoted by $\Ii \subset P\times \Ll$.
Elements of $\Ii$ are called \emph{flags}. The \emph{Levi graph} of an incidence structure $\SS = (P,\Ll,\Ii)$,
denoted $Levi(\SS)$ is the bipartite graph with vertex sets $P\sqcup \Ll$,
where $p$ and $L$ are neighbors iff $p$ is incident with $L$.
The \emph{Flag graph} of $\SS$ is denoted $Flag(\SS)$. Its vertex set is $P\sqcup \Ii$.
It has two kinds of edges: an edge between each point $p$ and each flag $(p,L)$, and in addition $(p,L)\sim (p',L)$
for every two distinct points $p,p'$ of the same line $L$. As we show below,
$Flag(\SS)$ is geodetic, and has diameter $4$ when $\SS = PG_2(q)$ and $5$ if $\SS = AG_2(q)$.
To fix ideas, associated with each $L\in \Ll$
in $Flag(\SS)$ is a {\em porcupine}: a clique of size $|L|$, plus an
edge $p\sim(p,L)$ emanating from $(p,L)$ for every $p\in L$.
Therefore, to every simple path $Q$ in $Levi(\SS)$
there corresponds a simple path $\hat{Q}$ in $Flag(\SS)$. Namely,
if $Q$ traverses $L$, that is, contains $[p,L,p']$, then the corresponding steps in $\hat{Q}$ are
$[p,(p,L),(p',L),p']$, which form a simple path in $Flag(\SS)$. Recall
that a graph is $2$-connected if and only if every two of its vertices lie on a simple cycle. We conclude:
\begin{cor}\label{levi:cohen}
If $Levi(\SS)$ is $2$-connected, then so is $Flag(\SS)$.
\end{cor}
\subsection{Properties of $Flag(AG_2(q))$}
Recall the following properties of $AG_2(q)$:
\begin{enumerate}
\item \label{numofitems} It has $q^2$ points and $q^2 + q$ lines.
\item Every point is incident with $q+1$ lines.
\item \label{pointsinline}Every line has $q$ points.
\item If a point $p$ is not in a line $L$, then there exists a unique line $L'$ with $p\in L'$ such that $L$ and $L'$ are disjoint. The common practice is to say that $L$ and $L'$ are parallel and denote $L\parallel L'$.
\end{enumerate}
We denote $\mathscr{F}(q) = Flag(AG_2(q))$. Properties \ref{numofitems} and \ref{pointsinline} imply that the number of vertices in $\mathscr{F}(q)$ is $q^3 + 2q^2$.
We denote by $L_{\alpha,\beta}$ the unique line that contains the two points $\alpha,\beta$.
\begin{proposition}
\label{prop: flag_graph_2_connected}
$\mathscr{F}(q)$ is 2-connected.
\end{proposition}
\begin{proof}
We exhibit a simple cycle through any two vertices of $\mathscr{F}(q)$.
Consider all cycles of the form
\[(L_1,x_1,L_{x_1,x_2},x_2,L_2,x,L_1)\]
where $L_1, L_2$ are two distinct intersecting lines with $L_1\cap L_2 = x$, and
$x_1,x_2$ are points in $L_1,L_2$ respectively, other than $x$.
These yield cycles through any two vertices in $\mathscr{F}(q)$ other than a pair of parallel lines. Finally,
here is a simple cycle through two parallel lines $L_1, L_2$
\[(L_1,x_1,L_{x_1,x_2},x_2,L_2,x'_2,L_{x_1',x_2'},x_1',L_1).\]
Here $x_1,x_1', x_2,x_2'$ are distinct points on $L_1, L_2$. This completes the proof.
\end{proof}
\begin{theorem}
\label{thm: affine_flag_geodetic}
$\mathscr{F}(q)$ is geodetic and of diameter $5$.
\end{theorem}
\begin{proof}
We show that the collection of $v$-vertical edges forms a spanning tree for every vertex $v$ in $\mathscr{F}(q)$.
There are just two cases to consider: $v = \bf{p}$, a point in $P$ or a flag $v = \bf{(p,L)}\in \Ii$.
Let $\Vv(i)$ be the number
of $v$-vertical edges $\alpha\beta\in E$ of {\em height} $i$, i.e.,
$\{d(v,\alpha),d(v,\beta)\} = \{i-1,i\}$. Let $N_i(v)$ be the $i$-sphere centered at $v$, that is, $N_i(v) = \cbk{u\in V\mid d(v,u) = i}$. We analyze $N_i(v)$ in either case.
\paragraph{$\mathbf{v = p}$:}
\begin{enumerate}
\item Clearly $N_1(\bf{p}) = \cbk{(\bf{p},L)\in \Ii}$, so $\Vv(1) = |N_1(\bf{p})| = q+1$.
\item Each $(\bf{p},L)\in N_1(\bf{p})$ is adjacent to all the vertices in the clique defined by $L$.
This contributes $q-1$ vertical edges. Also, $N_2(\bf{p}) = \cbk{(x,L)\mid \bf{p}\in L, x\neq p}$, since
these cliques are disjoint. It also follows that $\Vv(2) = |N_1(p)|\cdot (q-1) = (q^2-1)$.
\item Let $(x,L)\in N_2(\bf{p})$. The neighbors of $(x,L)$ are $x$ and $(y,L)$ for $y\neq x$ in $L$.
All the latter are in $N_2(\bf{p})$, so every $(x,L)\in N_2(\bf{p})$ contributes exactly one vertex
to $N_3(\bf{p})$. Consequently, $N_3(v) = \cbk{x\mid x\ne \bf{p}}$, and $\Vv(3) = \Vv(2) = (q^2-1)$.
\item Each $x\ne \bf{p}$ is incident with $q+1$ lines, only one of which is $L_{\bf{p},x}$. Therefore, $x$ is adjacent
to $q$ flags $(x,L')$, one for each $L'\ne L_{\bf{p},x}$. These flags are clearly distinct, therefore $N_4(v) = \cbk{(x,L')\mid L'\ne L_{\bf{p},x}}$ and $\Vv(4) = q\cdot \Vv(3) = q(q^2-1)$.
\end{enumerate}
The calculation checks:
\[
\overbrace{(q+1)}^{\Vv(1)} + \overbrace{(q^2-1)}^{\Vv(2)} + \overbrace{(q^2-1)}^{\Vv(3)} + \overbrace{q(q^2-1)}^{\Vv(4)} = q^3+2q^2-1 = |V(\mathscr{F}(q))| - 1.
\]
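This count, as well as the corresponding count for $v = \mathbf{(p,L)}$ below, is a polynomial identity in $q$; as a trivial check of our own, both can be confirmed by evaluating the sides at many integer values (a degree-$3$ identity holding at more than $4$ points holds identically):

```python
# Verify the two vertex-count identities from the proof at many values of q.
lhs1 = lambda q: (q + 1) + (q * q - 1) + (q * q - 1) + q * (q * q - 1)
lhs2 = lambda q: ((1 + (q - 1)) + (q + (q - 1)) + 2 * q * (q - 1)
                  + (q * (q - 1) + q * (q - 1) ** 2) + q * (q - 1))
ok = all(lhs1(q) == lhs2(q) == q ** 3 + 2 * q ** 2 - 1 for q in range(2, 100))
print(ok)  # True
```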
\begin{figure}[H]
\centering
\includegraphics[scale = .3]{TikzDrawings/CaseOneGraph.pdf}
\caption{$\mathscr{F}(3)$, as seen from $v=\bf{p}$. The colors have no mathematical significance and are only intended for better visibility.}
\label{fig:my_label}
\end{figure}
\paragraph{$\mathbf{v = (p,L)}$:} This case is a bit trickier. Let $x\neq\mathbf{p}$ be a point on $\mathbf{L}$.
Let $y$ be a point not on $\mathbf{L}$, and $\ell_y$ the unique line through $y$
that is parallel to $\mathbf{L}$.
Finally, we denote by $\cbk{\bf{L}^i_x}_{i=0}^q$ the lines through $x$, where $\bf{L} = \bf{L}_x^0$.
\begin{enumerate}
\item $N_1(v) = \{\bf{p}\} \sqcup \{(x,\bf{L})\mid x\neq \bf{p}\}$, so $\Vv(1) = 1 + (q-1) = q$.
We refer below to descendants of $\bf{p}$ as the left vertices, and to descendants of $\{(x,\bf{L})\mid x\neq \bf{p}\}$ as right vertices (see \autoref{fig: second_case}).
\item The neighbors of $\bf{p}$ other than $\bf{(p,L)}$ are $ \cbk{(\bf{p},\bf{L}_\bf{p}^i)}_{i\ne 0}$, so the left side in $N_1(v)$ contributes $q$ edges of height $2$. The right vertices in $N_1(v)$ form a clique, with a porcupine structure. Therefore, each vertex of this clique contributes a single height $2$ edge, namely $(x,\bf{L})\sim x$. So $N_2(\bf{(p,L)}) = \cbk{(\bf{p},\bf{L}^i_{\bf{p}})}_{i\ne 0}\sqcup \cbk{x\mid \bf{p}\ne x\in \bf{L}}$ and $\Vv(2) = q + (q-1)$.
\item Each left vertex in $N_2(v)$ is adjacent to a clique $\cbk{(y,\bf{L}^i_\bf{p})}_{y\ne \bf{p}}$ of $q-1$ flags, so each $(\bf{p},\bf{L}^i_\bf{p})$ contributes $(q-1)$ distinct edges of height $3$. As for the right side, the edges $x\sim (x,\bf{L}^i_x)$ have height $3$, and each $x\ne \bf{p}$ contributes $q$ of them.\\ So, $N_3(v) = \cbk{(y,\bf{L}^i_\bf{p})\mid y\ne p, i\ne 0}\sqcup \cbk{(x,\bf{L}_x^i)\mid x\ne \bf{p}, i\ne 0}$ and $\Vv(3) = q(q-1) + q(q-1)$.
\item For every $y \notin \bf{L}$, there holds $L_{\bf{p},y} = \bf{L}_\bf{p}^i$ for some $i\ne 0$. Therefore, the left side vertices of $N_3(v)$ are partitioned into cliques in $N_3(v)$, and cover all $y\notin \bf{L}$. Hence each vertex contributes a unique edge $(y,L_{\bf{p},y})\sim y$ of height $4$. As for the right side: Every edge $(x,\bf{L}_x^i)$ of height $4$ is adjacent to the clique $\cbk{(y,L_x^i)\mid y\ne x}$. There are exactly $q-1$ such vertices for each $(x,\bf{L}_x^i)$. So $N_4(v) = \cbk{y\mid y\notin \bf{L}} \sqcup \cbk{(y,\bf{L}_x^i)\mid x\in \bf{L},y\notin \bf{L}, i\in [q]}$ and $\Vv(4) = q(q-1) + q(q-1)^2$.
\item The edges $(y,L_x^i)\sim y$ are horizontal (between the two sides of $N_4(v)$), so the only $5$-vertical edges are of the form $y\sim (y,\ell_y)$ - and since $\ell_y$ is unique, $\Vv(5) = q(q-1)$ and $N_5(v) = \cbk{(y,\ell_y)\mid y\notin \bf{L}}$.
\end{enumerate}
Once again, the calculation checks:
\[
\overbrace{1 + (q-1)}^{\Vv(1)} +
\overbrace{q+(q-1)}^{\Vv(2)} + \overbrace{q(q-1)+q(q-1)}^{\Vv(3)} + \overbrace{q(q-1)+q(q-1)^2}^{\Vv(4)} + \overbrace{q(q-1)}^{\Vv(5)} = q^3 + 2q^2 - 1.
\]
This analysis also establishes that $\mathscr{F}(q)$ has diameter 5.
\end{proof}
\begin{figure}[H]
\centering
\includegraphics[scale=.3]{TikzDrawings/CaseTwoGraph.pdf}
\caption{$\mathscr{F}(3)$, as seen from $v = \bf{(p,L)}$.}
\label{fig: second_case}
\end{figure}
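Theorem~\ref{thm: affine_flag_geodetic} can also be spot-checked by machine for a small prime. The Python sketch below is our own code (the coordinatisation of $AG_2(q)$ and all helper names are our choices); it builds $\mathscr{F}(3)$, confirms the vertex count $q^3+2q^2 = 45$, and verifies geodeticity (unique shortest paths, via BFS path counting) and diameter $5$.

```python
from collections import deque
from itertools import product

def flag_graph_affine(q):
    """Flag(AG_2(q)) for a prime q: points of F_q^2 together with flags
    (point, line); the lines are y = a*x + b plus the verticals x = c."""
    lines = [tuple(sorted((x, (a * x + b) % q) for x in range(q)))
             for a, b in product(range(q), repeat=2)]
    lines += [tuple((c, y) for y in range(q)) for c in range(q)]
    adj = {}
    def link(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    for L in lines:
        for i, p in enumerate(L):
            link(p, (p, L))                # point -- flag edge
            for p2 in L[i + 1:]:
                link((p, L), (p2, L))      # clique on the flags of L
    return adj

def geodetic_and_diameter(adj):
    """BFS from every vertex, counting shortest paths; the graph is
    geodetic iff every count equals 1."""
    diam, unique = 0, True
    for s in adj:
        dist, cnt, queue = {s: 0}, {s: 1}, deque([s])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w], cnt[w] = dist[u] + 1, cnt[u]
                    queue.append(w)
                elif dist[w] == dist[u] + 1:
                    cnt[w] += cnt[u]       # a second shortest path to w
        diam = max(diam, max(dist.values()))
        unique = unique and all(c == 1 for c in cnt.values())
    return unique, diam

adj = flag_graph_affine(3)
print(len(adj), geodetic_and_diameter(adj))   # 45 (True, 5)
```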
\subsection{Properties of $Flag(PG_2(q))$}
Recall the following properties of $PG_2(q)$:
\begin{enumerate}
\item It has $q^2 + q + 1$ points and the same number of lines.
\item Every point is incident with $q+1$ lines, and every line has $q+1$ points.
\end{enumerate}
These properties imply that $\abs{V(Flag(PG_2(q)))} = (q+1)^3+1$.
\begin{theorem}
$Flag(PG_2(q))$ is $2$-connected, geodetic and has diameter $4$.
\end{theorem}
\begin{proof}
By \autoref{levi:cohen}, if $Levi(PG_2(q))$ is $2$-connected, then so is
$Flag(PG_2(q))$. To show that $Levi(PG_2(q))$ is $2$-connected, it suffices to use
the first cycle that is described in the proof of \autoref{prop: flag_graph_2_connected}.
The proof of geodeticity is a slight adaptation of the proof of \autoref{thm: affine_flag_geodetic}.
The numbers change somewhat, and what's more, since no lines are parallel, we do not reach $N_5(v)$
in the second case. Consequently, the diameter is $4$.
\end{proof}
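For the smallest case the theorem can be verified directly. The sketch below is ours (we use a standard labelling of the Fano plane); it builds $Flag(PG_2(2))$ and checks that it is a cubic graph on $(q+1)^3+1 = 28$ vertices, geodetic, and of diameter $4$.

```python
from collections import deque

FANO = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
        (2, 5, 7), (3, 4, 7), (3, 5, 6)]   # a standard Fano plane PG_2(2)

adj = {}
def link(u, v):
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

for L in FANO:
    for i, p in enumerate(L):
        link(p, (p, L))                    # point -- flag edge
        for p2 in L[i + 1:]:
            link((p, L), (p2, L))          # clique on the flags of L

diam, geodetic = 0, True
for s in adj:
    dist, cnt, queue = {s: 0}, {s: 1}, deque([s])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w], cnt[w] = dist[u] + 1, cnt[u]
                queue.append(w)
            elif dist[w] == dist[u] + 1:
                cnt[w] += cnt[u]           # a second shortest path to w
    diam = max(diam, max(dist.values()))
    geodetic = geodetic and all(c == 1 for c in cnt.values())

print(len(adj), all(len(nb) == 3 for nb in adj.values()), geodetic, diam)
# 28 True True 4
```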
\section{Introduction}
In this work we consider loopless, undirected graphs $G = (V,E)$. We think of a path in $G$ as a sequence of vertices $P = (v_1,\ldots v_k)$, and the subpath of $P$ from $v_j$ to $v_l$ is denoted $P(v_j,v_l)$. The length of a path $P$ is denoted $|P|$, and path concatenation is denoted by $*$. The distance between $u$ and $v$ is $d_G(v,u)=d(v,u)$, and is the length of a shortest $u,v$ path. For clarity, we occasionally add an index indicating the graph in which some parameter or quantity is calculated.
The notion of geodetic graphs was introduced by Ore \cite{ore1962theory} as a natural extension of trees: a tree is a graph in which between any two vertices there exists a unique simple path, and hence a unique shortest path (i.e., a geodesic). Ore proposed an extended definition, and asked in which simple graphs geodesics are unique. Some simple examples are trees, complete graphs, and odd-length cycles.
Specifically, Ore raised the challenge to characterize geodetic graphs. Despite many attempts, a complete characterization still seems beyond reach.
There are easy necessary and sufficient conditions for a graph to be geodetic; the following can be easily proved:
\begin{claim}
A graph $G$ is not geodetic if and only if it contains an even circuit $C$ with two vertices $u,v\in C$ such that $d_C(u,v) = d_G(u,v) = \frac{|C|}{2}$.
\end{claim}
\begin{claim}
A graph $G$ is geodetic if and only if each block of $G$ is geodetic.
\end{claim}
Here a \emph{block} is a maximal $2$-connected component of $G$.
There is clearly no loss in generality if we restrict our attention to $2$-connected graphs.
Also, vertices of degree $1$ or $2$ give the question of geodeticity a more arithmetical flavor.
Since our emphasis is combinatorial and geometric we will concentrate on graphs whose smallest degree is at least $3$.
There are not many families of geodetic graphs that are classified in full. The only constructive classifications presently known are of planar geodetic graphs \cite{stemple1968planar}, later extended to a classification of geodetic graphs homeomorphic to a complete graph \cite{stemple1979geodeticcompletegraph}. There is a considerable body of work on geodetic graphs of diameter $2$ \cite{stemple1974geodeticdiamtwo,scapellato1986geodeticdiamtwogeometric,blokhuis1988geodeticdiamtwo}. While some constructions are known, we do not know whether they exhaust all geodetic graphs of diameter $2$; in this sense, the classification is lacking. Naturally, the question turns to higher diameters. Some properties of geodetic graphs of diameter $3$ are known \cite{parthasarathy1984geodeticdiamthree}. However, to the best of our knowledge the following is unknown:
\begin{problem}\label{diameter_three_existance}
Do there exist geodetic blocks $G$ of diameter $3$ with $\delta(G) \ge 3$?
\end{problem}
Here $\delta(G)$ is the minimal degree of $G$. Progress on this problem has been very slow. Bridgland \cite{bridgland1983geodeticconvexity} constructed a family of geodetic blocks of diameter $4$ and arbitrarily large minimal degree. This construction was later generalized
in several ways using block designs \cite{srinivasan1987construction},
yielding a family of geodetic blocks of diameter $5$, the largest diameter presently known.
Despite many attempts, we were unable to retrieve the latter paper.
We therefore present these constructions along with a different proof of their geodeticity.
A main problem that we raise is:
\begin{problem}\label{high_diameter_problem}
What is the largest possible diameter of a geodetic block with
minimal degree $\ge 3$? Can it be arbitrarily large?
\end{problem}
In the journey to classification, other properties of geodetic graphs were discovered. A graph is called \emph{self-centered} if its diameter equals its radius. Geodetic blocks of diameter $2$ are
known to have this property \cite{stemple1974geodeticdiamtwo}.
The same holds for blocks of diameter $3$ \cite{parthasarathy1984geodeticdiamthree}. Connections to other graph properties were explored by Zelinka \cite{zelinka1975geodetic} and by Gorovoy and Zamiaikou \cite{gorovoy2021geodeticantipode}, as well as connections to other fields such as algebra and group theory \cite{elder2022rewriting,nebesky2002new}. We continue these lines of research, resulting in our main theorem:
\begin{restatable}{thm}{notwoconnected}\label{No2Connected}
Every $2$-connected geodetic graph $G$ with $\delta(G)\ge 3$ must be $3$-connected. This lower bound is tight as shown by the Petersen Graph.
\end{restatable}
$\mathscr{G}(a,b)\neq \emptyset$? A body of work by Lee \cite{LEE1977165} shows, using orthogonal Latin squares, that for any prime power $q\ge 3$, $\mathscr{G}(q,q+1),\mathscr{G}(q+1,2q-1),\mathscr{G}(q+1,2q)\neq \emptyset$. For any natural number $n$, $\mathscr{G}(2,n)\neq \emptyset$; such graphs were described both in
\section{Discussion}
Geodetic blocks with $\delta(G)\ge 3$
(which, by Theorem \ref{No2Connected}, are $3$-connected) seem hard to find.
Specifically, we ask:
\begin{enumerate}
\item We recall our question whether geodetic blocks with $\delta(G)\ge 3$ can
have arbitrarily large diameter. Also, can they have arbitrarily large girth?
The two questions are closely related since the diameter of a graph is at least half its girth.
\item So far, we only know of three cubic geodetic blocks - the Petersen Graph, $K_4$ and $Flag(PG_2(2))$. Is this list exhaustive? Is the number of such graphs finite?
\item Is the study of geodetic graphs related to structural graph theory, and more
concretely to the family of \emph{even-hole-free graphs} \cite{vuvskovic2010even}? The
origin of this question is the following: if $u,v$ are two antipodal vertices of an induced even cycle
(a.k.a.\ an even hole) $C$ in a geodetic graph, then the $u,v$ geodesic is not contained in $C$.
\end{enumerate}
| {
"timestamp": "2022-08-15T02:13:16",
"yymm": "2208",
"arxiv_id": "2208.06324",
"language": "en",
"url": "https://arxiv.org/abs/2208.06324",
"abstract": "A graph $G$ is geodetic if between any two vertices there exists a unique shortest path. In 1962 Ore raised the challenge to characterize geodetic graphs, but despite many attempts, such characterization still seems well beyond reach. We may assume, of course, that $G$ is $2$-connected, and here we consider only graphs with no vertices of degree $1$ or $2$. We prove that all such graphs are, in fact $3$-connected. We also construct an infinite family of such graphs of the largest known diameter, namely $5$.",
"subjects": "Combinatorics (math.CO)",
"title": "On the Connectivity and Diameter of Geodetic Graphs",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.977713810564506,
"lm_q2_score": 0.7248702702332475,
"lm_q1q2_score": 0.7087156740746716
} |
https://arxiv.org/abs/math/0608063 | The Maslov class of Lagrangian tori and quantum products in Floer cohomology | We use Floer cohomology to prove the monotone version of a conjecture of Audin: the minimal Maslov number of a monotone Lagrangian torus in C^n is 2. Our approach is based on the study of the quantum cup product on Floer cohomology and in particular the behaviour of Oh's spectral sequence with respect to this product. As further applications we prove existence of holomorphic disks with boundaries on Lagrangians as well as new results on Lagrangian intersections. | \section{Introduction and main results} \label{S:intro}
Let $(M, \omega)$ be a tame symplectic manifold (see ~\cite{A-L-P}, also Section ~\ref{S:basic notions}).
The class of tame symplectic manifolds includes compact manifolds,
Stein manifolds, and more generally, manifolds which are
symplectically convex at infinity, as well as products of all the
above. Let $L \subset (M, \omega)$ be a closed Lagrangian
submanifold. Throughout the paper, by a closed manifold we mean a
compact manifold without boundary, and all Lagrangian submanifolds
are assumed to be closed. One of the
fundamental problems in symplectic topology is to find
restrictions on the topology of $ L $, in particular on the Maslov
class $\mu_{L}:\pi_{2}(M,L) \rightarrow \mathbb{Z}$. See
Section~\ref{S:basic notions} for the definition, and for other
basic notions from symplectic topology. Below we will be concerned
with monotone Lagrangian submanifolds. For such a Lagrangian $ L
\subset (M, \omega) $ denote by $ N_L \in \mathbb{Z}_+ $ the
minimal Maslov number, i.e. the generator of the image of $
\mu_{L} $.
Our first result deals with Lagrangian submanifolds of $ \mathbb{R}^{2n} $.
Here we endow $ \mathbb{R}^{2n} $ with its standard symplectic
structure $ \omega_{std} = dx_1 \wedge dy_1 + dx_2 \wedge dy_2 +
... + dx_n \wedge dy_n $, where $ (x_1,y_1,x_2,y_2,...,x_n,y_n) $ are standard coordinates on $ \mathbb{R}^{2n} $.
\begin{thm} \label{T:Audin conj}
Given a monotone Lagrangian embedding
of the $ n $-torus $ L = \mathbb{T}^{n} \hookrightarrow (
\mathbb{R}^{2n}, \omega_{std} ) $, its minimal Maslov number must be
$ N_{L} = 2 $.
\end{thm}
Let us remark that the minimal Maslov number of such an embedding
must be even, due to the orientability of the torus, and it is a
non-negative integer. The possibility of $ N_L = 0 $ cannot occur,
since then, by monotonicity of $ L $, the area class of $ L $ must
vanish, and this contradicts the famous result of Gromov
~\cite{Gr}, which in our case guarantees the existence of a
pseudo-holomorphic disc with a boundary on $ L $, which must have
a positive symplectic area.
\begin{thm} \label{T:Maslov 2}
Let $ L = \mathbb{T}^{n} \hookrightarrow ( \mathbb{R}^{2n}, \omega_{std} ) $ be a monotone Lagrangian embedding
of the $n$-torus and let $ J $ be an arbitrary almost complex structure on $ \mathbb{R}^{2n}
$, compatible with $ \omega_{std} $. Then for every point $ p \in L
$ there exists a $ J $-holomorphic disc $ u: (D, \partial D)
\rightarrow (\mathbb{R}^{2n},L) $ whose boundary passes through $p$, i.e. $ p \in u( \partial D) $, and whose
Maslov index is $ \mu_{L}([u]) = 2 $.
\end{thm}
Theorem~\ref{T:Audin conj} is a partial answer to a question of
Audin~\cite{Audin conj}, which asks whether the same holds without the
monotonicity assumption. Previous results in this direction were
obtained by Biran and Cieliebak, Li, Oh, Polterovich and
Viterbo~\cite{Bi-Ci,Li,Oh-Spectral,A-L-P,P,V-1,V-2}.
Our approach is based on Floer cohomology, in particular on the quantum product on Floer cohomology.
We will study the multiplicative behaviour of the spectral
sequence due to Oh, whose first page (i.e. the term $ E_1 $) is related
to the singular cohomology of the Lagrangian, and which converges to
its Floer cohomology.
The idea of the proof of Theorem~\ref{T:Audin conj} is based on an idea originally raised by Seidel.
The proof of Theorem~\ref{T:Maslov 2} uses ideas
from~\cite{Bi-Co-2,Cornea-Lalonde}. The statement of
Theorem~\ref{T:Maslov 2} can be proved more directly (using the Gromov
compactness theorem, but without any Floer theory) in the case of the
Clifford torus and of an exotic torus due to Chekanov
(see~\cite{Chekanov,E-P}), hence for every Lagrangian torus which is
Hamiltonianly isotopic to either of them. However, the full
classification of monotone tori in $ \mathbb{R}^{2n} $ is still not
known.
In this paper we use Floer cohomology with coefficients in a ring $ A = \mathbb{Z}_2 [T^{-1} , T] $,
and grade $ A $ by $ \deg T = N_{L} $ (see~\cite{Bi-Co-3,Oh-HF1}). We show
\begin{thm} \label{T:Proj space}
Let $ L = \mathbb{RP}^{n}
\hookrightarrow M $ be a monotone
Lagrangian embedding of the real projective space into a tame symplectic manifold $ M $. Assume
in addition that its minimal Maslov number is $ N_{L} \geqslant 3
$. Then $ HF^*(L,L) \cong ( H(L; \mathbb{Z}_{2} ) \otimes A )^* $. In particular $ L
$ is not Hamiltonianly displaceable. Moreover, for every
Hamiltonian diffeomorphism $ f\colon M \rightarrow M $, for which $ f(L) $
intersects $ L $ transversally, we have $ \sharp( L \cap f(L) ) \geqslant
n+1 $.
\end{thm}
After the first version of this paper was written, we received
from Fukaya, Oh, Ohta and Ono a revised version of their
work~\cite{FOOO}, in which results similar to
Theorem~\ref{T:Audin conj} were obtained.
Section~\ref{S:Proof of main results} is devoted to the proofs of
Theorems~\ref{T:Audin conj}, \ref{T:Maslov 2} and~\ref{T:Proj space}.
In Section~\ref{S:Floer Spec} we recall the definitions of Floer
cohomology and of the spectral sequence of Oh, and give the definition of
the quantum product on the Floer complex. At the end of
Section~\ref{S:Floer Spec} we state Theorems~\ref{T:Leibnitz rule}
and~\ref{T:E1}. Theorem~\ref{T:Leibnitz rule} guarantees the
multiplicativity of the Floer cohomology and of the spectral sequence
of Oh. The statement of Theorem~\ref{T:E1} is used in an essential
way in the proofs of Theorems~\ref{T:Audin conj}, \ref{T:Maslov 2}
and~\ref{T:Proj space}. Section~\ref{S:Proofs of quantum product}
contains the proofs of Theorems~\ref{T:Leibnitz rule}
and~\ref{T:E1}. Finally, Section~\ref{S:basic notions} recalls the
basic notions from symplectic topology that we use in the
present article.
\section{Proof of main results} \label{S:Proof of main results}
In this section we provide proofs of Theorems~\ref{T:Audin
conj}, \ref{T:Maslov 2} and~\ref{T:Proj space}. The tools that
we use in the proofs are the multiplicativity of the spectral
sequence of Oh and special properties of this spectral sequence. For
a detailed description of the Floer cohomology, of the spectral sequence
of Oh, and for the proof of the existence of the multiplicative structure, we
refer the reader to Section~\ref{S:Floer Spec}.
Denote by $ \{ E_{r}^{p,q} , d_{r} \} $ the spectral sequence of
Oh, with coefficients in the ring $ A = \mathbb{Z}_2 [T, T^{-1}]
$, graded by $ \deg T = N_{L} $.
The properties of it that are essential for the proofs of
Theorems~\ref{T:Audin conj}, \ref{T:Maslov 2} and~\ref{T:Proj space}
are the following (see Section~\ref{S:Floer Spec}):
\begin{itemize}
\item $E_{1}^{p,q}=H^{p+q-pN_{L}}(L, \mathbb{Z}_{2}) \otimes
A^{pN_{L}}$ , $d_{1} = [ \partial_{1} ] \otimes T $, where $ [
\partial_{1} ] : H^{p+q-pN_{L}}(L, \mathbb{Z}_{2}) \rightarrow
H^{p+1+q-(p+1)N_{L}}(L, \mathbb{Z}_{2}) $ is induced from $
\partial_{1} $, the operator that enters the definition of
the Floer differential (see Section~\ref{S:Floer Spec}).
The multiplication on $ H^{*} (L, \mathbb{Z}_{2}) $, induced
from the multiplication on $E_{1}^{p,q}$, coincides with the
standard cup product.
\item More generally, for every $r\geq 1$, $E_r^{p,q}$ has the form $E_r^{p,q} =
V_r^{p,q} \otimes A^{p N_L}$ with $d_r = \delta_r \otimes
T^r$, where $V_r^{p,q}$ are vector spaces over $\mathbb{Z}_2$
and $\delta_r$ are homomorphisms $\delta_r:V_r^{p,q} \to
V_r^{p+r,q-r+1}$ defined for every $p,q$ and satisfy $\delta_r
\circ \delta_r = 0$. We have $$V_{r+1}^{p,q} = \frac{\ker
(\delta_r: V_r^{p,q} \to
V_r^{p+r,q-r+1})}{\textnormal{image\,} (\delta_r: V_r^{p-r,
q+r-1} \to V_r^{p,q})}.$$
\item $ \{ E_{r}^{p,q}, d_{r} \}$ converges to $ HF $, i.e.
$$ E_{\infty}^{p,q} \cong \frac{F^p HF^{ p+q }(L,L)}{F^{p+1} HF^{ p+q }(L,L)}
,$$ where $ F^p HF(L,L) $ is the filtration on $ HF(L,L) $,
induced from the filtration $ F^p CF(L,L) $.
\end{itemize}
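From the bidegrees listed above one can read off the grading shift of $ \delta_{r} $ that is used repeatedly below: if $ v \in V_r^{p,q} $ has natural degree $ p+q-pN_{L} $, then $ \delta_{r} v \in V_r^{p+r,q-r+1} $ has degree
$$ (p+r) + (q-r+1) - (p+r)N_{L} = \big( p+q-pN_{L} \big) + 1 - rN_{L} , $$
so $ \delta_{r} $ decreases the natural grading by $ rN_{L} - 1 $; for $ r = 1 $ this is the shift $ N_{L} - 1 $.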
\begin{proof}[Proof of Theorem~\ref{T:Audin conj}]
Consider a Lagrangian embedding $ L = \mathbb{T}^{n}
\hookrightarrow ( \mathbb{R}^{2n}, \omega_{std} ) $, and assume,
by contradiction, that $ N_{L} \geqslant 3 $. The Floer cohomology $ HF(L,L) $
is then well-defined, and the spectral sequence of
Oh, which computes it, is multiplicative.
Look at the first page $ E_{1} $ of the spectral sequence. We know that
$V_1^{p,q} = H^{p+q-pN_L}(L;\mathbb{Z}_2)$, that the induced
product on $ V_{1} $ is the classical cup-product, and that we have
a differential $ \delta_{1} : V_{1} \rightarrow V_{1} $ which
decreases the natural grading on $ V_{1} $ by $ N_{L}-1$.
The key observation now is that
the entire cohomology ring $ H^{*}(L, \mathbb{Z}_{2}) $ is
generated by the first cohomology $ H^{1}(L, \mathbb{Z}_{2}) $,
since $ L = \mathbb{T}^{n} $. Therefore, with respect to the natural
grading on $ V_{1} $, the elements of degree $ 1 $ generate
the whole of $ V_{1} $. Now, since $ \delta_{1} $ decreases
the grading by $ N_{L} - 1 \geqslant 2 $, we have $ \delta_{1}(a) = 0 $ for every $ a \in
V_{1} $ of degree $ 1 $. Since
$ \delta_{1} $ satisfies the Leibnitz rule, the kernel $ \operatorname{Ker}(\delta_{1}) \subseteq
V_{1} $ is a subring, therefore $ \operatorname{Ker}(\delta_{1}) = V_{1} $,
and we obtain $ \delta_{1} \equiv 0 $.
Hence $ V_{2} = H(V_{1},\delta_{1}) = V_{1} $, and
$ V_{2} $ is isomorphic to $ V_{1} $ as a graded ring.
We can therefore apply the same argument to $ \delta_{2} : V_{2}
\rightarrow V_{2} $, since $ \delta_{2} $ decreases the grading by $
2N_{L} -1 \geqslant 2 $, and we get $ \delta_{2} \equiv 0 $,
and so on: by induction, $ \delta_{r} \equiv 0 $ at each stage.
Therefore $ d_{1} = d_{2} = \ldots = 0 $, and
we conclude that $ E_{1} = E_{2} =
\ldots = E_{ \infty } = HF(L,L) $. But
$ L $ is clearly Hamiltonianly displaceable in $ \mathbb{R}^{2n} $,
hence $ HF(L,L)=0 $, and we obtain $ E_{1} = 0 $. Contradiction.
\end{proof}
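The subring step in the proof can be spelled out in one line: assuming only the Leibnitz rule for $ \delta_{1} $, for any $ a, b \in \operatorname{Ker}(\delta_{1}) $ we have
$$ \delta_{1}(ab) = \delta_{1}(a)\, b + a\, \delta_{1}(b) = 0 , $$
so $ \operatorname{Ker}(\delta_{1}) $ is closed under products; as it contains the unit and all elements of degree $ 1 $, it must be the whole of $ V_{1} $.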
\begin{rem} \label{R:Audin generalized}
The same arguments as in the proof of Theorem~\ref{T:Audin conj} in
fact prove the following more general result:
Let $L \subset (M,\omega)$ be a monotone Lagrangian with $N_L\geq
2$. Assume that:
\begin{enumerate}
\item $(M,\omega)$ is a subcritical Stein manifold. (See
e.g.~\cite{Bi-NonIntersect} for the definition).
\item $H^*(L;\mathbb{Z}_{2})$ is generated by
$H^1(L;\mathbb{Z}_{2})$.
\end{enumerate}
Then $N_L=2$.
\end{rem}
\begin{proof}[Proof of Theorem~\ref{T:Maslov 2}]
First let us prove the statement for generic
$ J $; we will then use a compactness argument to
prove it for every $ J $ compatible with $ \omega_{std} $.
Let us make a generic choice of $ J $ and of a Morse function $ f:
L \rightarrow \mathbb{R} $, such that the point $ p \in L $ is the only
local maximum of $ f $, i.e. the only critical point of index equal
to $ n $ (it is easy to show that such an $ f $ exists).
Then the Floer cohomology $ HF(L,L) $ is well-defined.
By Theorem~\ref{T:Audin conj} we have
that $ N_{L}=2 $. Look at the spectral sequence of Oh, which
converges to $ HF(L,L) $. Let us show that the differential $ \delta_{1}
: V_{1} \rightarrow V_{1} $ is non-zero. The argument is
similar to the one from Theorem~\ref{T:Audin conj}.
Indeed, if on the contrary $ \delta_{1} \equiv 0 $, then $ V_{2} =
H(V_{1},\delta_{1}) = V_{1} $ as graded rings, and $ \delta_{2}
: V_{2} \rightarrow V_{2} $ is a differential which decreases
the grading by $ 2N_{L}-1 = 3 $; since $ V_{2} = V_{1} $ is
generated as a ring by its elements of degree $ 1 $, we get
$ \delta_{2} = 0 $, so $ V_{3} = H(V_{2},\delta_{2}) = V_{2} = V_{1} $
as graded rings, and so on. We would thus obtain, as in the proof of
Theorem~\ref{T:Audin conj}, a contradiction with
the fact that $ HF(L,L) = 0 $. So we have proved that $ \delta_{1}
: V_{1} \rightarrow V_{1} $ is non-zero, therefore the map $
[ \partial_{1} ] : H^{*}(L, \mathbb{Z}_{2}) \rightarrow
H^{*-1}(L, \mathbb{Z}_{2}) $ is non-zero. Let us show that
this implies that $ [ \partial_{1} ] ( [p] ) \neq 0 $, where
$ [p] \in H^{n}(L, \mathbb{Z}_{2}) $ is the generator. By the
theorems above, $ [ \partial_{1} ] : H^{*}(L, \mathbb{Z}_{2}) \rightarrow
H^{*-1}(L, \mathbb{Z}_{2}) $ is a differential and satisfies the
Leibnitz rule. Therefore, since $ H^{1}(L, \mathbb{Z}_{2}) $
generates the whole of $ H^{*}(L, \mathbb{Z}_{2}) $, the restriction of $ [ \partial_{1}
]$ to $ H^{1}(L, \mathbb{Z}_{2}) $ is non-zero. Take
some $ x_{1} \in H^{1}(L, \mathbb{Z}_{2}) $ such that $ [ \partial_{1}
](x_{1}) \neq 0 $; then, since $ [ \partial_{1}
](x_{1}) \in H^{0}(L,\mathbb{Z}_{2}) \cong \mathbb{Z}_{2} $, we
have $ [ \partial_{1} ] (x_{1}) = 1 \in
H^{*}(L,\mathbb{Z}_{2}) $. Complete $ x_{1} $ to a basis $ x_{1},x_{2}, \ldots ,x_{n} $ of
$ H^{1}(L, \mathbb{Z}_{2}) $ as a vector space over $ \mathbb{Z}_{2} $.
Then it is easy to see that the product $ x_{1}x_{2} \ldots
x_{n} = [p] \in H^{n}(L,\mathbb{Z}_{2}) $. Denote $ y =
x_{2}x_{3} \ldots x_{n} $; then, using $ x_{1}^{2} = 0 $, we get $ x_{1} ( [ \partial_{1} ]
([p])) = x_{1} ( [ \partial_{1} ](x_{1}y)) =
x_{1} ( ([ \partial_{1} ](x_{1})) y + x_{1} ([ \partial_{1} ](y))) =
x_{1}y + x_{1}^{2} ([ \partial_{1} ](y)) = x_{1}y = [p] \neq
0 $, hence $ [ \partial_{1} ]([p]) \neq 0 $. Going back
to the definition of $ [ \partial_{1} ] $, we see that the
moduli space $ \mathcal{M}_{1}(p,q) $ is non-empty for some critical point $
q $ of $ f $ with $ \operatorname{ind}_{f}(q) = n-1 $. Now, since $ p $
is the critical point of top index, the only gradient
trajectories starting at $ p $ are the constant trajectories,
and therefore the boundary of the $ J $-holomorphic disc from the
definition of $ \mathcal{M}_{1}(p,q) $ contains the point $ p
$.
We have proved the theorem for a generic choice of $ J $. For a
general $ J $, consider a sequence of generic $ J_{n} $ which
converges to $ J $. For each $ J_{n} $ we have a $ J_{n}
$-holomorphic disc $ u_{n} : (D, \partial D) \rightarrow ( \mathbb{R}^{2n},
L) $ such that $ p \in u_{n}( \partial D ) $. By the Gromov
compactness theorem (see~\cite{Gr}) there is a subsequence of
$ u_{n} $ which converges to a tree of $ J $-holomorphic discs with sphere
bubbles; however, since $ \mu(u_{n}) = N_{L} = 2 $ is
minimal, the limit consists of a single disc and no bubbling of
spheres occurs, and we therefore obtain a $ J $-holomorphic disc
whose boundary contains $ p $.
\end{proof}
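For the reader's convenience, here is the dimension count behind the non-emptiness of $ \mathcal{M}_{1}(p,q) $ in the proof above, obtained from the dimension formula for $ \mathcal{M}_{k}(x,y) $ recalled in Section~\ref{S:Floer Spec}, applied with $ k=1 $ and $ N_{L}=2 $:
$$ \dim \mathcal{M}_{1}(p,q) = \operatorname{ind}(q) - \operatorname{ind}(p) + N_{L} - 1 = (n-1) - n + 2 - 1 = 0 , $$
so this moduli space is a compact zero-dimensional manifold, and $ [\partial_{1}]([p]) \neq 0 $ forces it to be a non-empty finite set of points.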
\begin{proof}[Proof of Theorem~\ref{T:Proj space}]
The idea is similar to the one in the proof of
Theorem~\ref{T:Audin conj}. Namely, as before, the Floer
cohomology of $ L $ is well-defined, and we can compute it via the
spectral sequence of Oh, which is multiplicative.
Look at the first page $ E_{1} $ of the spectral sequence.
We have that $ H^{1}(L, \mathbb{Z}_{2}) $
generates the entire cohomology $ H^{*}(L, \mathbb{Z}_{2}) $,
hence $ V_{1} $ is generated as a ring by its elements of
degree $ 1 $, while the differential $ \delta_{1} : V_{1} \rightarrow
V_{1} $ decreases the grading by $ N_{L}-1 \geqslant 2 $. Then,
arguing as in the proof of Theorem~\ref{T:Audin conj}, we conclude
that $ E_{1} = E_{2} = \ldots = E_{ \infty } = HF(L,L) = H^{*}(L;
\mathbb{Z}_{2}) \otimes A $. Therefore $ HF(L,L) \neq 0 $, hence $ L $ is
not displaceable. Moreover for every Hamiltonian diffeomorphism $
f\colon M \rightarrow M $, for which $ f(L) $ intersects $ L $ transversally, we have $
\sharp ( L \cap f(L) ) = \dim_{ \mathbb{Z}_{2} } CF(L;f(L))
\geqslant \dim_{ \mathbb{Z}_{2} } H^{*}(L; \mathbb{Z}_{2}) = n+1 $.
\end{proof}
\section{Floer cohomology, the spectral sequence, and the quantum product} \label{S:Floer Spec}
\subsection{Floer cohomology and the spectral sequence of Oh}
We recall some basic facts of Floer theory, which will be stated
without proofs (for the proofs we refer the reader
to~\cite{Bi-Co-1,Bi-Co-2,Oh-Spectral}). Let $(M, \omega)$ be a
tame symplectic manifold and let $L \subset (M, \omega)$ be a
monotone Lagrangian submanifold with minimal Maslov number $N_L
\geq 2$. Then one can define the Floer cohomology of the pair $
(L,L) $, which we will denote by $ HF(L,L) $. As mentioned before,
we will work here with coefficients in the ring $ A = \mathbb{Z}_2
[T,T^{-1}] $, as described in~\cite{Bi-Co-3}. In fact, we will
work with an equivalent definition of $ HF(L,L) $, described
in~\cite{Bi-Co-3,Oh-Quantum}, which uses holomorphic disks rather
than holomorphic strips. We briefly recall the construction now.
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f1.eps}
\end{center}
\caption{}
\label{F:fig1}
\end{figure}
We choose a generic pair consisting of a Morse function $f\colon L \to
\mathbb{R}$ and a Riemannian metric on $ L $, and consider a
generic almost complex structure on $ M $. Denote by $ C_f^* $
the Morse complex associated to $ f $, graded by the Morse indices of
$f$. Let $ A = \mathbb{Z}_{2}[T,T^{-1}] $ be the algebra of
Laurent polynomials over $ \mathbb{Z}_{2} $, where we take $ \deg
T = N_{L} $. Take the decomposition $ A = \bigoplus_{i \in
\mathbb{Z}} A^i $, where $ A^i $ is the subspace of homogeneous
elements of degree $ i $.
We define the Floer complex as $ CF(L,L) = C_{f} \otimes A $,
which has a natural grading coming from grading on $ C_f, A $.
More specifically, $$ CF^{l}(L,L) = \bigoplus_{k \in \mathbb{Z} }
C_f^{l-kN_{L}}(L,L) \otimes A^{kN_{L}} ,$$ for each $ l \in
\mathbb{Z} $. To define the Floer differential $ d_F : CF(L,L)
\rightarrow CF(L,L) $, we introduce auxiliary operators $
\partial_0,\partial_1,\dots,\partial_{\nu} : C_{f} \rightarrow
C_{f}$, where $\nu = \big[ \frac{\dim L + 1}{N_L} \big]$.
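For example (a simple count from the definition of $ \nu $ above), for a monotone torus $ L = \mathbb{T}^{n} \subset \mathbb{R}^{2n} $ with $ N_{L} = 2 $ one has
$$ \nu = \Big[ \frac{\dim L + 1}{N_L} \Big] = \Big[ \frac{n+1}{2} \Big] , $$
so in this case there are at most $ \big[ \frac{n+1}{2} \big] $ quantum correction operators beyond the Morse operator $ \partial_0 $.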
For every pair of critical points
$ x,y \in C_{f} $ denote by $ \mathcal{M}_{k}(x,y) $ the
moduli-space of diagrams as in Figure~\ref{F:fig1}. This diagram
consists of pieces of gradient trajectories of $ -f $, joined by
somewhere injective (see~\cite{L}) pseudo-holomorphic discs $ u_{1},u_{2}, \ldots
, u_{l} $ in $ M $, with boundaries on $ L $, such that the first
piece of gradient trajectory emanates from $ x $, the last
piece converges to $ y $, and the sum
of the Maslov indices of the discs is $ \mu(u_{1}) + \mu(u_{2}) +
\ldots + \mu(u_{l}) = kN_{L} $. Let us give a precise definition
of an element of $ \mathcal{M}_{k}(x,y) $. Consider a collection
of somewhere injective pseudo-holomorphic discs $ u_{j} \colon
(D^{2}, \partial D^{2}) \rightarrow (M,L) $, $ j=1,2, \ldots ,l$,
with boundaries on $ L $, and a collection of trajectories $$
\gamma_{0}: (-\infty , a_{0}] \rightarrow L , \gamma_{1}:
[a_{0},a_{1}] \rightarrow L , \gamma_{2}: [a_{1},a_{2}]
\rightarrow L , \ldots ,$$ $$ \gamma_{l-1}: [a_{l-2},a_{l-1}]
\rightarrow L, \gamma_{l}: [a_{l-1}, + \infty ) \rightarrow L ,$$
where $ -\infty < a_{0} < a_{1} < \ldots < a_{l-1} < +\infty $. Then
we demand that $ \gamma_{0}, \gamma_{1}, \ldots , \gamma_{l} $
are gradient trajectories of $ -f $ (with respect to our
Riemannian metric on $ L $), namely
$$ \dot{\gamma_{j}}(t) = - \nabla f ( \gamma_{j}(t) ) , \quad j= 0,1,2, \ldots ,
l ,$$ with the matching conditions
$$ \lim_{t \rightarrow -\infty} \gamma_{0}(t) = x, \qquad \lim_{t \rightarrow +\infty} \gamma_{l}(t) = y, $$
$$ \gamma_{j}(a_{j}) = u_{j+1}(-1) , \quad j =0,1, \ldots , l-1 ,$$
$$ \gamma_{j}(a_{j-1}) = u_{j}(1) , \quad j =1,2, \ldots , l ,$$
and that the total Maslov index of the collection $ u_{1},u_{2}, \ldots
,u_{l} $ of discs is $$ \mu( u_{1} ) + \mu( u_{2} ) + \ldots + \mu(
u_{l} ) = kN_{L} .$$ An element of $ \mathcal{M}_{k}(x,y) $ is
then given by a collection $$ \{ (u_{1},u_{2}, \ldots ,u_{l}), (\gamma_{0},
\gamma_{1}, \ldots , \gamma_{l}) \} $$ as above, factored
by its group of inner automorphisms.
For generic choice of $ f $ and of almost complex structures (see
~\cite{Oh-HF1} for the details), $ \mathcal{M}_{k}(x,y) $ is a
manifold of dimension
$$ \dim \mathcal{M}_{k}(x,y) = \operatorname{ind} (y) - \operatorname{ind}(x) + kN_{L} - 1 $$
(see~\cite{FOOO,Oh-Quantum}). If this dimension is $ 0 $, then $
\mathcal{M}_{k}(x,y) $ is a compact zero-dimensional
manifold, so it is a finite collection of points. In this
case denote $ n_{k}(x,y) := \sharp \mathcal{M}_{k}(x,y)(\mod 2) $. Then define
$$ \partial_{k}(x) = \sum_{ y \in C_{f} ,\, \operatorname{ind}(y) - \operatorname{ind}(x) + kN_{L} - 1 =
0 } n_{k}(x,y)\,y .$$
Note that $ \partial_0 $ is the usual Morse-cohomology coboundary
operator. Each $ \partial_i $ taken alone is not compatible with the grading on $ C_f $,
since it acts as $\partial_i : C_{f}^* \to C_{f}^{*+1 - i
N_L}$.
The Floer differential $ d_F : CF^{*}(L,L) \rightarrow CF^{*+1}(L,L) $ is by definition
$ d_F = \partial_{0} \otimes 1 + \partial_{1} \otimes T + \ldots +
\partial_{\nu} \otimes T^{\nu} $. Then one can show that $
d_F $ is indeed a differential. It is well known that the homology of
the complex $ (CF(L,L),d_F) $ is canonically isomorphic to the Floer
cohomology of the pair $ (L,L) $ (see ~\cite{Bi-Co-1,Bi-Co-2} and the
references therein). Therefore we will write
$$ HF^*(L,L) = H^{*}(CF(L,L), d_F) .$$
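As a quick consistency check (a routine degree count from the definitions above), each summand of $ d_F $ raises the total grading on $ CF(L,L) $ by $ 1 $: for a generator $ x \in C_{f} $,
$$ \deg \big( \partial_{k}(x) \otimes T^{k} \big) = \big( \operatorname{ind}(x) + 1 - kN_{L} \big) + kN_{L} = \operatorname{ind}(x) + 1 . $$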
Consider now the following decreasing filtration on $ CF(L,L) $: $$
F^{p}CF(L,L) = \{ \sum x_{i} \otimes T^{n_{i}} \mid x_{i} \in
C_{f} , n_{i} \geqslant p \} .$$ It is obviously compatible with $
d_F $ (due to the monotonicity of $L$), so by a standard algebraic argument we obtain the spectral
sequence $ \{ E_{r}^{p,q}, d_{r} \}$. The following properties of
it have been proved in~\cite{Bi-NonIntersect}:
\begin{itemize}
\item $ E_{0}^{p,q} = C_{f}^{p+q-pN_{L}} \otimes
A^{pN_{L}}$ , $ d_{0} = [ \partial_{0} ] \otimes 1 $
\item $E_{1}^{p,q}=H^{p+q-pN_{L}}(L, \mathbb{Z}_{2}) \otimes
A^{pN_{L}}$ , $d_{1} = [ \partial_{1} ] \otimes T $, where $ [
\partial_{1} ] : H^{p+q-pN_{L}}(L, \mathbb{Z}_{2}) \rightarrow
H^{p+1+q-(p+1)N_{L}}(L, \mathbb{Z}_{2}) $ is induced from $
\partial_{1} $.
\item For every $r\geq 1$, $E_r^{p,q}$ has the form $E_r^{p,q} =
V_r^{p,q} \otimes A^{p N_L}$ with $d_r = \delta_r \otimes
T^r$, where $V_r^{p,q}$ are vector spaces over $\mathbb{Z}_2$
and $\delta_r$ are homomorphisms $\delta_r:V_r^{p,q} \to
V_r^{p+r,q-r+1}$ defined for every $p,q$ and satisfy $\delta_r
\circ \delta_r = 0$. Moreover $$V_{r+1}^{p,q} = \frac{\ker
(\delta_r: V_r^{p,q} \to
V_r^{p+r,q-r+1})}{\textnormal{image\,} (\delta_r: V_r^{p-r,
q+r-1} \to V_r^{p,q})}.$$
(For $r=0,1$ we have
$V_0^{p,q}=C_f^{p+q-pN_L}$, $V_1^{p,q} =
H^{p+q-pN_L}(L;\mathbb{Z}_2)$.)
\item $ \{ E_{r}^{p,q}, d_{r} \}$ collapses at the $ (\nu + 1) $-st page,
namely $ d_{r} = 0 $ for every $ r \geqslant \nu +1 $, so $
E_{r}^{p,q} = E_{\infty}^{p,q} $ for every $ r \geqslant \nu +1 $,
and the sequence converges to $ HF $, i.e.
$$ E_{\infty}^{p,q} \cong \frac{F^p HF^{ p+q }(L,L)}{F^{p+1} HF^{ p+q }(L,L)}
,$$ where $ F^p HF(L,L) $ is the filtration on $ HF(L,L) $,
induced from the filtration $ F^p CF(L,L) $.
\end{itemize}
Note that on each $ E_{r} $ and $ V_{r}
$ we have a natural grading coming from the grading on $ CF(L,L) $
and $ d_{r} : E_{r} \rightarrow E_{r} $ shifts this grading by $ 1
$. Therefore $ \delta_{r} : V_r \to V_r $ shifts the grading on $
V_r $ by $ 1-rN_{L} $, since the degree of $ T $ is $ N_{L} $. On
$ V_{1} $ this grading coincides with the natural grading on the
cohomology ring $ H^{*}(L, \mathbb{Z}_{2}) $.
\subsection{Quantum product on Floer cohomology}
Consider generic Morse functions $ f,g,h : L
\rightarrow \mathbb{R} $ and denote by $$ ( CF_{f}, d_F^{f}),
(CF_{g}, d_F^{g}), (CF_{h}, d_F^{h}) $$ the corresponding Floer
complexes. We will then define a ``quantum product'' $
\star : CF_{f} \otimes CF_{g} \rightarrow
CF_{h} $, such that the differentials satisfy the
following analog of the Leibnitz rule: $$ d_F^{h} (a \star b) =
d_F^{f}(a) \star b + a \star d_F^{g}(b) .$$ Moreover, this product is
compatible with the filtrations on the $ CF $'s, in the sense that $ \star $
maps $ F^p CF_f \otimes F^{p'}CF_g $ to $ F^{p+p'}CF_h $.
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f2.eps}
\end{center}
\caption{}
\label{F:fig2}
\end{figure}
Then the spectral sequences automatically become multiplicative, i.e. we have products $
E_{r}^{p,q}(f) \otimes E_{r}^{p',q'}(g) \rightarrow
E_{r}^{p+p',q+q'}(h) $ at each stage of the spectral sequence,
induced from $ \star $, such that the differential $
d_{r} $ satisfies the Leibnitz rule with respect to this product,
and such that the product at the $ (r+1) $-st stage comes from the
product at the $ r $-th stage. Note that in this case these products
induce products $ V_{r}^{p,q}(f) \otimes V_{r}^{p',q'}(g)
\rightarrow V_{r}^{p+p',q+q'}(h) $, the differential $
\delta_r:V_r^{p,q} \to V_r^{p+r,q-r+1} $ satisfies the Leibnitz
rule with respect to this product, and the product on $
V_{r+1} $ is induced from the product on $ V_{r} $. The
crucial observation will be that on $ V_{1} $ the induced product
coincides with the usual cup-product on $ H^{*}(L; \mathbb{Z}_{2})
$, so the subsequent products on the $ V_{r} $ are induced from this cup
product; in this sense the quantum effects are lost for $ r \geqslant
1 $. Now let us describe the operation $ \star : F^{p}CF_{f}
\otimes F^{p'}CF_{g} \rightarrow F^{p+p'}CF_{h} $. To do this, let
us introduce operations $ m_{l} : C_{f} \otimes C_{g} \rightarrow
C_{h} $ (for a more general introduction of such operations
see~\cite{Bets-Cohen,FOOO}). The operation $ m_{0} $ is the usual
product on the Morse complexes of $ f,g,h $. Let us recall its
definition. For every triple of generators $ x \in C_{f},y \in
C_{g},z \in C_{h} $, denote by $ \mathcal{M}_{0}(x,y;z) $ the
moduli-space of diagrams as in Figure~\ref{F:fig2}. This diagram
is given by trajectories $ \gamma_{f} :[0, + \infty ) \rightarrow
L, \gamma_{g} :[0, + \infty ) \rightarrow L,\gamma_{h} :( -
\infty, 0] \rightarrow L,$ such that
$$ \dot{\gamma_{f}}(t) = - \nabla f (
\gamma_{f}(t)), \quad \dot{\gamma_{g}}(t) = - \nabla g (
\gamma_{g}(t)), \quad \dot{\gamma_{h}}(t) = - \nabla h (
\gamma_{h}(t)),$$
and
$$ \lim_{t \rightarrow + \infty} \gamma_{f}(t) = x,
\quad \lim_{t \rightarrow + \infty} \gamma_{g}(t) = y, \quad \lim_{t
\rightarrow - \infty} \gamma_{h}(t) = z, $$
$$ \gamma_{f}(0)=\gamma_{g}(0)=\gamma_{h}(0).$$
Then by our assumption that $ f,g,h $ is a generic triple of Morse
functions, we get that $ \mathcal{M}_{0}(x,y;z) $ is a manifold,
and its dimension is given by $ \operatorname{ind}_{z}h - \operatorname{ind}_{x}f - \operatorname{ind}_{y}g
$. Moreover, when $\operatorname{ind}_{z}h - \operatorname{ind}_{x}f - \operatorname{ind}_{y}g = 0 $, $
\mathcal{M}_{0}(x,y;z) $ is a zero-dimensional compact manifold,
hence it consists of a finite number of points. For the case of $
\operatorname{ind}_{z}h - \operatorname{ind}_{x}f - \operatorname{ind}_{y}g = 0 $ we set $ n_{0}(x,y;z) :=
\sharp \mathcal{M}_{0}(x,y;z)(\mod 2) $. Now we define $$
m_{0}(x,y) := \sum_{ z \in C_{h}, \operatorname{ind}_{z}h -
\operatorname{ind}_{x}f - \operatorname{ind}_{y}g = 0 } n_{0}(x,y;z)z .$$ This is the classical
cup product $ C_{f}^{i} \otimes C_{g}^{j} \rightarrow C_{h}^{i+j} $:
the classical Morse differential satisfies the Leibnitz rule with respect to it, and
it induces the classical cup-product on the cohomology $ H^{*}(L;
\mathbb{Z}_{2}) $. The further operations $ m_{l} $, $ l \geqslant
1 $, will involve quantum contributions.
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f3.eps}
\end{center}
\caption{}
\label{F:fig3}
\end{figure}
Before defining the general $ m_{l} $, let us first describe $
m_{1} $. For a triple of critical points $ x \in C_{f},y \in
C_{g},z \in C_{h} $, we define the space $ \mathcal{M}_{1}(x,y;z)
$ to be the moduli-space of diagrams as in Figure~\ref{F:fig3},
where "black lines" are gradient trajectories of $ f,g,h $
respectively and the "discs" are pseudo-holomorphic somewhere
injective discs with boundaries on $ L $ and Maslov indices equal
to $ N_{L} $. Let us describe the first and the second diagrams
in Figure~\ref{F:fig3}. The first diagram is given by a
collection
$$ (u, \gamma_{f1},\gamma_{f2},\gamma_{g},\gamma_{h} ),$$ where
$ u: (D^{2},S^{1}) \rightarrow (M,L) $ is a pseudo-holomorphic
disc in $ M $ with boundary on $ L $ and $$ \gamma_{f1} :
[0,+\infty ) \rightarrow L , \gamma_{f2} : [0,a ] \rightarrow L ,
\gamma_{g} : [0,+\infty ) \rightarrow L , \gamma_{h} : ( -\infty,0
] \rightarrow L ,$$ where $ 0 < a \in \mathbb{R} $, such that
$$ \dot{\gamma}_{f1}(t)
= - \nabla f ( \gamma_{f1}(t)), \quad \dot{\gamma}_{f2}(t) = -
\nabla f ( \gamma_{f2}(t)), \quad \dot{\gamma_{g}}(t) = - \nabla g
( \gamma_{g}(t)),$$ $$ \dot{\gamma_{h}}(t) = - \nabla h (
\gamma_{h}(t)),$$
$$ \lim_{t \rightarrow + \infty} \gamma_{f1}(t) = x,
\quad \lim_{t \rightarrow + \infty} \gamma_{g}(t) = y, \quad \lim_{t
\rightarrow - \infty} \gamma_{h}(t) = z, $$
$$ \gamma_{f2}(0)=\gamma_{g}(0)=\gamma_{h}(0), \quad
\gamma_{f1}(0)=u(-1), \quad \gamma_{f2}(a)=u(1). $$
The second diagram is given by a collection $ (u,
\gamma_{f},\gamma_{g},\gamma_{h} )$, where $ u: (D^{2},S^{1})
\rightarrow (M,L) $ is a somewhere injective pseudo-holomorphic
disc in $ M $, with boundary on $ L $ and $$ \gamma_{f} :
[0,+\infty ) \rightarrow L , \gamma_{g} : [0,+\infty ) \rightarrow
L , \gamma_{h} : ( -\infty,0 ] \rightarrow L ,$$ such that
$$ \dot{\gamma_{f}}(t) = - \nabla f (
\gamma_{f}(t)), \quad \dot{\gamma_{g}}(t) = - \nabla g (
\gamma_{g}(t)), \quad \dot{\gamma_{h}}(t) = - \nabla h (
\gamma_{h}(t)),$$
$$ \lim_{t \rightarrow + \infty} \gamma_{f}(t) = x,
\quad \lim_{t \rightarrow + \infty} \gamma_{g}(t) = y, \quad \lim_{t
\rightarrow - \infty} \gamma_{h}(t) = z, $$
$$ \gamma_{f}(0)=u(1), \quad \gamma_{g}(0)=u(i), \quad \gamma_{h}(0)=u(-1). $$
The other diagrams from the picture have analogous definitions. For
generic choices of $ f,g,h $ and of almost complex structures, $
\mathcal{M}_{1}(x,y;z) $ is a manifold of dimension $ \operatorname{ind}_{z}h -
\operatorname{ind}_{x}f - \operatorname{ind}_{y}g + N_{L} $. As before, if $ \dim
\mathcal{M}_{1}(x,y;z) = 0 $, it is a zero-dimensional compact
manifold, hence it is a finite collection of points and we set $
n_{1}(x,y;z) := \sharp \mathcal{M}_{1}(x,y;z) (\mod 2)$. Now we define
$$ m_{1}(x,y) := \sum_{ z \in C_{h},\operatorname{ind}_{z}h - \operatorname{ind}_{x}f - \operatorname{ind}_{y}g
+ N_{L} = 0 } n_{1}(x,y;z)z .$$ For defining $ m_{l} $ for general $
l $ we introduce a manifold $ \mathcal{M}_{l}(x,y;z) $ for $ x \in
C_{f},y \in C_{g},z \in C_{h} $. Its points are diagrams like in
Figure~\ref{F:fig3}, but instead they include several somewhere
injective discs with total Maslov index $ lN_{L} $. In general
these diagrams are of two types, as in Figures~\ref{F:fig4} and
~\ref{F:fig5}.
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f4.eps}
\end{center}
\caption{}
\label{F:fig4}
\end{figure}
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f5.eps}
\end{center}
\caption{}
\label{F:fig5}
\end{figure}
In Figure~\ref{F:fig4}, $ D_{11},D_{12}, \ldots , D_{1k} $ are
somewhere injective pseudo-holomorphic discs with boundaries on $
L $ which connect pieces of gradient trajectories of $ f $; similarly, $
D_{21},D_{22}, \ldots , D_{2j} $ connect trajectories of $ g $, and $ D_{31},D_{32},
\ldots , D_{3m} $ trajectories of $ h $, where $ k,j,m \geqslant 0 $ and the
total Maslov index is
$$ \sum_{i,j} \mu(D_{ij}) = lN_{L} .$$
In Figure~\ref{F:fig5} we have, in addition, a disc $ D_{4} $ in the
middle, with a gradient trajectory of $ h $ going into its boundary and gradient
trajectories of $ f,g $ going out of it, and as before
the total Maslov index is
$$ \mu(D_{4}) + \sum_{i,j} \mu(D_{ij}) = lN_{L} .$$
We will denote by $ \mathcal{ \overline{M} }_{l}^{kjm} (x,y;z) $
the space of diagrams as in Figure~\ref{F:fig4} and by $ \mathcal{
\widetilde{M} }_{l}^{kjm} (x,y;z) $ the space of diagrams as in
Figure~\ref{F:fig5}. Denote also $ \mathcal{ \overline{M} }_{l}
(x,y;z) = \bigcup_{k,j,m} \mathcal{ \overline{M} }_{l}^{kjm}
(x,y;z) $, $ \mathcal{
\widetilde{M} }_{l} (x,y;z) = \bigcup_{k,j,m} \mathcal{
\widetilde{M} }_{l}^{kjm} (x,y;z) $. We have that $$
\mathcal{M}_{l}(x,y;z) = \mathcal{ \overline{M} }_{l}(x,y;z) \cup
\mathcal{ \widetilde{M} }_{l} (x,y;z)
$$ and $ \dim( \mathcal{ M }_{l}(x,y;z) ) = \operatorname{ind}_{z}h - \operatorname{ind}_{x}f -
\operatorname{ind}_{y}g + lN_{L} $. Then, as before, in the case $ \operatorname{ind}_{z}h -
\operatorname{ind}_{x}f - \operatorname{ind}_{y}g + lN_{L} = 0 $, we have that $ \mathcal{ M
}_{l}(x,y;z) $ is a finite collection of points, and we set $
n_{l}(x,y;z) := \sharp \mathcal{ M }_{l}(x,y;z)(\mod 2) $ in this case. Then,
as usual, we define
$$ m_{l}(x,y) := \sum_{ z \in C_{h}, \operatorname{ind}_{z}h - \operatorname{ind}_{x}f -
\operatorname{ind}_{y}g + lN_{L} = 0 } n_{l}(x,y;z)z $$ for generators $ x \in
C_{f}, y \in C_{g} $.
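A routine degree count, read off from the index condition above: $ m_{l} $ lowers the sum of Morse degrees by $ lN_{L} $, so after tensoring with $ T^{l} $ (of degree $ lN_{L} $) the total degree is preserved,
$$ m_{l} : C_{f}^{i} \otimes C_{g}^{j} \rightarrow C_{h}^{\,i+j-lN_{L}}, \qquad \deg\big( m_{l}(x,y) \otimes T^{l} \big) = (i + j - lN_{L}) + lN_{L} = i+j . $$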
Now we can define the quantum product $ \star : F^{p}CF_{f}
\otimes F^{p'}CF_{g} \rightarrow F^{p+p'}CF_{h} $ by $ x \star y =
m_{0}(x,y) \otimes 1 + m_{1}(x,y) \otimes T + m_{2}(x,y) \otimes
T^{2} + \ldots $ for $ x \in C_{f}, y \in C_{g} $, and then
extend it naturally to a map $ CF^{i}_{f} \otimes CF^{j}_{g}
\rightarrow CF^{i+j}_{h} $. Note that the filtrations on $ CF_{f},
CF_{g}, CF_{h} $ are compatible with this map, i.e. the image of $
F^{p}CF_{f} \otimes F^{p'}CF_{g}$ lies in $ F^{p+p'}CF_{h} $. The
sum is finite: a term $ m_{l}(x,y) $ is non-zero only if $ \operatorname{ind}_{z}h - \operatorname{ind}_{x}f - \operatorname{ind}_{y}g +
lN_{L} = 0 $ for some $ z $, hence $ l = ( \operatorname{ind}_{x}f + \operatorname{ind}_{y}g - \operatorname{ind}_{z}h
)/N_{L} \leqslant 2n/N_{L} $. The main goal is now to prove the
following theorems.
\begin{thm} \label{T:Leibnitz rule} The differentials $
d_F^{f},d_F^{g},d_F^{h} $ satisfy the
Leibnitz rule with respect to the product $ \star $:
$$ d_F^{h} (a \star b) = d_F^{f}(a) \star b + a
\star d_F^{g}(b), $$ for every $ a \in CF_{f},
b \in CF_{g} $.
\end{thm}
\begin{thm} \label{T:E1}
The product on $ V_{1} $, induced from $ \star $, coincides with the
classical cup-product on $ H^{*}(L; \mathbb{Z}_{2}) $.
\end{thm}
\section{Existence and properties of the quantum product - proofs} \label{S:Proofs of quantum product}
\begin{proof}[Proof of Theorem~\ref{T:Leibnitz rule}]
The main idea of the proof is similar to the analogous statement in
classical Morse theory, for a standard Morse differential and a
product on the Morse complex. In what follows we have to find a
compactification of the manifolds $ \mathcal{ \overline{M} }_{l}
(x,y;z) $ and $ \mathcal{ \widetilde{M} }_{l} (x,y;z) $. We are
mostly interested in the components of the boundary of codimension
$ 1 $. A point of the compactification of $ \mathcal{M}_{l}(x,y;z)
$ is a diagram, consisting of several pseudo-holomorphic discs with
boundaries on $ L $, spheres, critical points of $ f,g,h $ and
pieces of gradient trajectories of $ f,g,h $ between them. Let us
describe what can happen when we pass to the limit of a sequence of
elements of $ \mathcal{M}_{l}(x,y;z) $. First, some of the gradient
trajectories of $ f,g $ or $ h $ can "break" and new critical
points of $ f,g,h $ can appear in the diagram. Second, some of the
pieces of the gradient trajectories can "shrink" to a point, so
that we obtain two touching discs or one disc containing a
critical point. Also, looking at $ \mathcal{ \overline{M} }_{l}
(x,y;z) $, the piece of gradient trajectory containing the "middle
point" can shrink to a point, so that we get a disc containing this
"middle point". It can also happen that some of the discs split into
trees of discs, and some of the discs can bubble off a sphere. The
last thing that can happen is that for some pieces of trajectories
which have endpoints on a disc, their endpoints on that disc
converge to one another and become a single point. All these
degenerations can happen simultaneously. However, when one looks at
the codimension-$ 1 $ part of the boundary of $
\mathcal{M}_{l}(x,y;z) $, only one of these degenerations can
happen; moreover, in the case of breaking of some
trajectory, it can break only at one point and this can happen only
for one trajectory. Similarly, when we have "shrinking" of some
trajectory, only one trajectory can shrink to a point. Thirdly,
when a disc splits into discs, only one disc can
split and only into two discs, and bubbling of spheres always has
codimension $ \geqslant 2 $. Finally, when endpoints of some
trajectories lying on the boundary of some disc become one point,
we always have codimension $ \geqslant 2 $, except for the
case when it happens in $ \mathcal{ \widetilde{M} }_{l} (x,y;z) $ and
two of the trajectories which end on the boundary of the "middle
disc" have the same endpoint on this boundary in the limit.
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f6.eps}
\end{center}
\caption{}
\label{F:fig6}
\end{figure}
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f7.eps}
\end{center}
\caption{}
\label{F:fig7}
\end{figure}
Therefore, when we look only at the codimension-$ 1 $ part of $
\mathcal{M}_{l}(x,y;z) $, only the following cases can happen.
For $ \mathcal{ \overline{M} }_{l} (x,y;z) $ we can obtain:
$a$) One trajectory "breaks" and we obtain a situation as in
Figure~\ref{F:fig6}, where a new critical point $ t $ appears (this can happen with
gradient trajectories of $ f,g,h $).
$b$) Two neighboring discs in the chain become "touching" and the
trajectory which joins them collapses into a
point, as in Figure~\ref{F:fig7}.
$c$) The last disc from the chain of discs of $ g $, for example,
comes closer and closer to the "middle point", where the gradient
trajectories of $ f,g,h $ meet, until the trajectory which connects
it to this "middle point" collapses to a point (as in Figure~\ref{F:fig8}).
$d$) One of the discs splits into the union of two touching discs
(Figure~\ref{F:fig9}).
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f8.eps}
\end{center}
\caption{}
\label{F:fig8}
\end{figure}
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f9.eps}
\end{center}
\caption{}
\label{F:fig9}
\end{figure}
Denote by $$ \mathcal{M}^{a}_{l}(x,y;z),
\mathcal{M}^{b}_{l}(x,y;z),
\mathcal{M}^{c}_{l}(x,y;z),\mathcal{M}^{d}_{l}(x,y;z) $$ the
manifolds of diagrams of types $ a,b,c,d $ respectively.
Turning now to the codimension-$ 1 $ part of the compactification of $
\mathcal{ \widetilde{M} }_{l} (x,y;z) $, we see that the following
cases are possible:
$e$) One trajectory "breaks" and we obtain a situation as in
Figure~\ref{F:fig10}.
$f$) Two neighboring discs in the chain come close and the
trajectory which joins them collapses to a point, or the middle
disc comes close to some neighboring disc as in Figure~\ref{F:fig11}.
$g$) One of the discs splits into a union of two touching discs (and
we again obtain a situation as in Figure~\ref{F:fig11}).
$h$) Two trajectories touching the middle disc converge to
trajectories which touch the middle disc at the same point. We
again obtain Figure~\ref{F:fig8}.
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f10.eps}
\end{center}
\caption{}
\label{F:fig10}
\end{figure}
\begin{figure} [h]
\begin{center}
\includegraphics[scale=0.7]{f11.eps}
\end{center}
\caption{}
\label{F:fig11}
\end{figure}
As before, we denote by $$ \mathcal{M}^{e}_{l}(x,y;z),
\mathcal{M}^{f}_{l}(x,y;z),
\mathcal{M}^{g}_{l}(x,y;z),\mathcal{M}^{h}_{l}(x,y;z)
$$ the manifolds of the situations $ e,f,g,h $ respectively.
Note that
$$ (*)
\begin{cases}
\mathcal{M}^{c}_{l}(x,y;z) = \mathcal{M}^{h}_{l}(x,y;z) \\
\mathcal{M}^{b}_{l}(x,y;z) = \mathcal{M}^{d}_{l}(x,y;z) \\
\mathcal{M}^{f}_{l}(x,y;z) = \mathcal{M}^{g}_{l}(x,y;z)
\end{cases}
$$
Now let us see how this can be applied to prove the Leibniz rule.
Let us first write out what it means. Taking $ x \in C_{f}, y \in
C_{g} $, we have $$ d_F^{h}(x \star y) =
d_F^{h} ( \sum_{ i \geqslant 0 } m_{i}(x,y) \otimes
T^{i} ) = \sum_{ i,j \geqslant 0 } \partial_{j}( m_{i}(x,y) )
\otimes T^{j+i} .$$ Similarly, $$ d_F^{f}(x) \star y =
\sum_{ j,i \geqslant 0 } m_{i}(\partial_{j} x,y) \otimes T^{j+i}
$$
and $$
x \star d_F^{g}(y) = \sum_{ j,i \geqslant 0 }
m_{i}(x,\partial_{j} y) \otimes T^{j+i} .$$
Therefore, we are left
with proving that for every $ l $, $$
\sum_{ i,j \geqslant 0, i+j
= l } \partial_{j}( m_{i}(x,y) )= \sum_{ i,j \geqslant 0, i+j =
l } m_{i}(\partial_{j} x,y) + \sum_{ i,j \geqslant 0, i+j = l }
m_{i}(x,\partial_{j} y) .$$
This means that we have to show that
for every choice of generators $ x \in C_{f}, y \in C_{g}, z \in
C_{h} $ with $ \operatorname{ind}(z) = \operatorname{ind}(x) + \operatorname{ind}(y) + lN_{L} +1 $, the
total number of configurations in $ \mathcal{M}^{a}_{l}(x,y;z)
\cup \mathcal{M}^{e}_{l}(x,y;z) $ is even. For this consider the
space $ \mathcal{M}_{l}(x,y;z) $. It is a $ 1 $-dimensional
manifold, therefore its boundary consists of an even number of
points. On the other hand, from a gluing argument
(see~\cite{Fukaya-Oh1,FOOO,McDuff-Jhol,Seidel}) it follows that
this boundary is the union of
$$ \mathcal{M}^{a}_{l}(x,y;z),
\mathcal{M}^{b}_{l}(x,y;z),
\mathcal{M}^{c}_{l}(x,y;z),\mathcal{M}^{d}_{l}(x,y;z), $$
$$\mathcal{M}^{e}_{l}(x,y;z),
\mathcal{M}^{f}_{l}(x,y;z),\mathcal{M}^{g}_{l}(x,y;z),
\mathcal{M}^{h}_{l}(x,y;z)$$ (because in our case $ \dim
\mathcal{M}_{l}(x,y;z)=1 $, so in a generic situation the part of
the boundary of $ \mathcal{M}_{l}(x,y;z) $ of co-dimension bigger
than $ 1 $ must be of dimension less than $ 0 $, so it is an empty
set). Now, $ (*) $ shows that modulo $ 2 $, the total number of
points on the boundary of $ \mathcal{M}_{l}(x,y;z) $ is equal to $
\sharp \mathcal{M}^{a}_{l}(x,y;z) + \sharp
\mathcal{M}^{e}_{l}(x,y;z) $, and since the boundary consists of an
even number of points, this sum is even. This proves
Theorem~\ref{T:Leibnitz rule}.
\end{proof}
\begin{rem} \label{R:Dimension formula}
In several places we have applied the dimension formula $
\dim\mathcal{M}(A,J) = n+\mu(A) $, where $ \mathcal{M}(A,J) $
is the manifold of $ J $-holomorphic maps $ u \colon
(D^{2}, \partial D^{2}) \rightarrow (M,L) $ with $ [u] = A
\in \pi_{2}(M,L) $, in order to show that certain
configurations of gradient trajectories and
pseudo-holomorphic discs cannot appear for a generic choice of
$ J $ (because of negative dimension). However, this
dimension formula is based on a transversality argument (see~\cite{McDuff-Jhol})
which requires somewhere injectivity of the $ J $-holomorphic
discs. To solve this problem we use work of Kwon and Oh, and of
Lazzarini (see~\cite{K-O,L}). More precisely, suppose we
have such a configuration and some pseudo-holomorphic disc
$ u : (D,\partial D) \rightarrow (M,L) $ participating in it
is not somewhere injective. Then we can decompose it into a
union of almost everywhere injective
discs $ u_{1},u_{2},\ldots, u_{k} $ with multiplicities $
m_{1},m_{2},\ldots,m_{k} $ respectively, such that some
$ m_{j} >1 $ (see~\cite{K-O,L} for the precise details of this
decomposition). The decomposition preserves the relative homology class, namely
$$ [u]=m_{1}[u_{1}] + m_{2}[u_{2}] + \ldots + m_{k}[u_{k}] \in H_{2}(M,L) .$$
However, if we consider the configuration in which all
the discs $ u_{j} $ are taken with multiplicity $ 1 $, then its total
area, and hence its total Maslov index, is strictly smaller than that of the
original configuration. Therefore, by the usual dimension count,
such a configuration has negative dimension and
hence cannot appear in a generic situation, so the original
configuration cannot appear either. See~\cite{Bi-Co-1} for more details on such arguments.
\end{rem}
\begin{proof}[Proof of Theorem~\ref{T:E1}]
First look at the induced product on the level $ E_{0} $. Let us
show that it coincides with the classical product on the Morse
complex. From the standard construction of the spectral sequence
we have that
$$ E_{0}^{p,q}(f) = F^{p} CF_{f}^{p+q} / F^{p+1}
CF_{f}^{p+q} \cong V_{0}^{p,q}(f) \otimes A^{p N_L}, $$
$$E_{0}^{p',q'}(g) = F^{p'} CF_{g}^{p' + q'} / F^{p'+1}
CF_{g}^{p' + q'} \cong V_{0}^{p',q'}(g) \otimes A^{p' N_L},$$
where $ V_{0}^{p,q}(f) = C_{f}^{p+q-pN_{L}} $,
$ V_{0}^{p',q'}(g) = C_{g}^{p'+q'-p'N_{L}} $.
Now take $ \alpha \in V_{0}^{p,q}(f)$, $ \beta \in
V_{0}^{p',q'}(g) $. We can take $ \overline{\alpha} :=
\alpha \otimes T^{p} \in F^{p} CF_{f}^{p+q} $,
$ \overline{\beta} := \beta \otimes T^{p'} \in F^{p'} CF_{g}^{p'+q'} $
as pre-images of $ \alpha \otimes T^{p} \in E_{0}^{p,q}(f)
$, $ \beta \otimes T^{p'} \in E_{0}^{p',q'}(g) $ under the
natural projections $ F^{p} CF_{f}^{p+q} \rightarrow
E_{0}^{p,q}(f)$, $ F^{p'} CF_{g}^{p'+q'} \rightarrow
E_{0}^{p',q'}(g) $ respectively. Then, by definition of the product $
\star $, $$ \overline{\alpha} \star \overline{\beta} = m_{0}( \alpha ,
\beta) \otimes T^{p+p'} + m_{1}( \alpha ,
\beta) \otimes T^{p+p'+1} + $$ $$ + m_{2}(
\alpha , \beta) \otimes T^{p+p'+2} + \ldots
\in F^{p+p'} CF_{h}^{p+p'+q+q'},$$ and so the induced
product of $ \alpha \otimes T^{p} \in E_{0}^{p,q}(f) $, $
\beta \otimes T^{p'} \in E_{0}^{p',q'}(g) $
is the image of $ \overline{\alpha} \star \overline{\beta}
\in F^{p+p'} CF_{h}^{p+p'+q+q'} $ under the natural
projection $ F^{p+p'} CF_{h}^{p+p'+q+q'} \rightarrow
E_{0}^{p+p',q+q'}(h)$, which is $ m_{0}( \alpha ,
\beta) \otimes T^{p+p'} $. Therefore the induced product of
$ \alpha \in V_{0}^{p,q}(f)$, $ \beta \in
V_{0}^{p',q'}(g) $ is $ m_{0}( \alpha , \beta)
\in V_{0}^{p+p',q+q'}(h) $, which is the classical product in the Morse
complex. Note also that the differential $ \delta_{0} :V_{0}^{p,q} \to
V_{0}^{p,q+1} $ coincides with the classical Morse differential; therefore
the induced product on $ V_{1} = H( V_{0} , \delta_{0} ) $
is the classical cup-product.
\end{proof}
\section{Basic notions of symplectic topology in terms of Lagrangian Floer theory.} \label{S:basic notions}
In this section we summarize some relevant notions from symplectic
topology used in the article.
\subsection{Tame symplectic manifold}
A symplectic manifold $ (M, \omega) $ is called tame if there
exists an almost complex structure $ J $ such that the bilinear form
$ g_{ \omega , J }( \cdot,\cdot) = \omega( \cdot, J \cdot ) $ is a Riemannian metric on $ M $,
and moreover the Riemannian manifold $ (M , g_{ \omega , J } ) $ is geometrically bounded (i.e.\ its sectional curvature is bounded above and its injectivity radius is bounded below). See~\cite{A-L-P,Gr} for more details and for the relevance of this condition to the theory of pseudo-holomorphic curves.
\subsection{The Maslov class}
The Maslov class is a homomorphism $\mu_{L}:\pi_{2}(M,L)
\rightarrow \mathbb{Z}$, associated to a Lagrangian submanifold $L
\subset (M,\omega)$. To describe it, we start with the linear
case. Consider the space $\mathbb{R}^{2n} \cong \mathbb{C}^{n}$
with the standard symplectic structure. Denote by $\mathcal{L}(n)$ the
set of all Lagrangian linear subspaces of $\mathbb{R}^{2n}$. The unitary
group $U(n)$ acts transitively on $\mathcal{L}(n)$, and the stabilizer of
the Lagrangian subspace $\mathbb{R}^{n}=\{ y_{1}=\ldots=y_{n}=0\}
\subset \mathbb{C}^{n}$ is the orthogonal group $O(n) \subset
U(n)$. So $\mathcal{L}(n)$ is homeomorphic to the quotient
$U(n)/O(n)$. On $U(n)/O(n)$ we have a well-defined map $\det^{2}:
U(n)/O(n) \rightarrow S^{1} \subset \mathbb{C}$, hence we obtain a
map $\mathcal{L}(n) \rightarrow S^{1}$. The corresponding
homomorphism $\mu: \pi_{1}( \mathcal{L}(n)) \rightarrow
\mathbb{Z}$ is called the Maslov index. It can be verified that it
is an isomorphism.
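For a loop of Lagrangian lines in $\mathbb{C}$ (the case $n=1$) this index is simply the winding number of the square of the determinant, which can be checked numerically. The sketch below is purely illustrative and assumes the loop is sampled densely enough that consecutive phase increments stay below $\pi$:

```python
import cmath

def maslov_index(det2_samples):
    """Winding number of a closed loop in the unit circle, given consecutive
    samples that are dense enough (phase steps of size < pi)."""
    total = 0.0
    for a, b in zip(det2_samples, det2_samples[1:]):
        total += cmath.phase(b / a)   # phase increment, in (-pi, pi]
    return round(total / (2 * cmath.pi))

# The line spanned by e^{i*theta} in C is a Lagrangian subspace, and the
# squared-determinant map sends it to e^{2i*theta}.  The loop theta(t) = pi*t,
# t in [0,1], closes up in L(1) (theta and theta + pi span the same line),
# and its image under the map winds once around the circle.
K = 200
det2_loop = [cmath.exp(2j * cmath.pi * k / K) for k in range(K + 1)]
```

Here `maslov_index(det2_loop)` evaluates to $1$, in accordance with the fact that this loop generates $\pi_{1}(\mathcal{L}(1)) \cong \mathbb{Z}$.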
Now, consider a symplectic manifold $(M,\omega)$ and a
Lagrangian submanifold $L \subset M$. Take a disc in $M$ with
boundary lying on $L$: $ u:(D,\partial D) \rightarrow (M,L)$. We
obtain the following diagram of vector bundles:
\begin{equation*}
\begin{array}{clcr}
u^{*}T(M) & \supset & u^{*}T(L) \\
\downarrow & & \downarrow \\
D & & \partial D
\end{array}
\end{equation*}
Over each point on the disc $D$ we have a symplectic linear space
and for every point on the boundary $\partial D$ we have a
Lagrangian linear subspace of the corresponding linear symplectic
space. Now, we can symplectically trivialize the bundle $u^{*}T(M)
\rightarrow D$, and as a result we will get a loop of Lagrangians
in $\mathbb{R}^{2n}$, $\gamma: S^{1} \rightarrow \mathcal{L}(n)$.
Applying to this loop the Maslov index, we get the Maslov class
evaluated on $D$, namely, we define $\mu_{L}(D)=\mu(\gamma)$. It
can be shown that this definition does not depend on the
trivialization, and actually depends only on $[D] \in
\pi_{2}(M,L)$. Given a Lagrangian submanifold $L \subset
(M,\omega)$ we denote by $N_{L}$ the positive generator of the
image $\mu_{L}(\pi_{2}(M,L)) \subset \mathbb{Z}$. We shall refer
to $N_{L}$ as the minimal Maslov number of $L$.
\subsection{Symplectic area class}
This is a homomorphism $\omega : \pi_{2}(M,L) \rightarrow \mathbb{R}$
which computes the symplectic area of a disc: if we have a
representative $\alpha=[\varphi : (D , \partial D ) \rightarrow
(M,L)] \in \pi_{2}(M,L)$ , we define $\omega(\alpha):= \int_{D}(
\varphi^{*}\omega) $. Also here it can be shown that the
symplectic area class $\omega(\alpha)$ depends only on $\alpha
\in \pi_{2}(M,L)$.
\subsection{Monotone symplectic manifolds}
Let $(M,\omega)$ be a symplectic manifold. Denote by
$c_{1} \in H^{2}(M;\mathbb{R})$ the first Chern class of the
tangent bundle $T(M)$ viewed as a complex vector bundle (where the
complex structure on $T(M)$ is taken to be any almost complex
structure tamed by $\omega $). We say that $(M,\omega)$ is a
monotone symplectic manifold if there exists a positive real
number $\lambda>0$ such that $[\omega]=\lambda c_{1}$.
Given a Lagrangian submanifold $L \subset (M,\omega )$, we
have two homomorphisms: $\mu_{L}:\pi_{2}(M,L) \rightarrow
\mathbb{Z}$ and $\omega:\pi_{2}(M,L) \rightarrow \mathbb{R}$. We
say that $L$ is monotone if these two homomorphisms $\mu_{L},
\omega$ are proportional by some positive constant, that is,
there exists a constant $\lambda \in \mathbb{R}$, $\lambda > 0$,
such that for every $\alpha \in \pi_{2}(M,L)$ we have
$\mu_{L}(\alpha)=\lambda \omega(\alpha) $.
\subsubsection*{Acknowledgments}
I would like to thank my supervisor Paul Biran for his help and
the attention he gave to me. I am grateful to Felix Schlenk for his
comments and for helping me to improve the quality of the
exposition. I am also grateful to Leonid Polterovich, Alex Ivri
and Laurent Lazzarini for useful comments.
% https://arxiv.org/abs/1508.04126
\title{Basis construction for range estimation by phase unwrapping}
\begin{abstract}
We consider the problem of estimating the distance, or range, between two locations by measuring the phase of a sinusoidal signal transmitted between the locations. This method is only capable of unambiguously measuring range within an interval of length equal to the wavelength of the signal. To address this problem signals of multiple different wavelengths can be transmitted. The range can then be measured within an interval of length equal to the least common multiple of these wavelengths. Estimation of the range requires solution of a problem from computational number theory called the closest lattice point problem. Algorithms to solve this problem require a basis for this lattice. Constructing a basis is non-trivial and an explicit construction has only been given in the case that the wavelengths can be scaled to pairwise relatively prime integers. In this paper we present an explicit construction of a basis without this assumption on the wavelengths. This is important because the accuracy of the range estimator depends upon the wavelengths. Simulations indicate that significant improvement in accuracy can be achieved by using wavelengths that cannot be scaled to pairwise relatively prime integers.
\end{abstract}
\section{Introduction}\label{sec:intro}
\newcommand{\operatorname{lcm}}{\operatorname{lcm}}
Range (or distance) estimation is an important component of modern technologies such as electronic surveying~\cite{Jacobs_ambiguity_resolution_interferometery_1981, anderson1998surveying} and global positioning~\cite{Teunissen_GPS_LAMBDA_2006,Teunissen_GPS_1995}. Common methods of range estimation are based upon received signal strength~\cite{Chitte_RSS_Estimation2009, HingCheung_RSSbasedRangeEstimation2012}, time of flight (or time of arrival)~\cite{XinrongLi_TOA_range_estimation2004, Lanzisera_TOA_range_estimation2011}, and phase of arrival~\cite{Jacobs_ambiguity_resolution_interferometery_1981,Towers_frequency_selection_interferometry_2003,Li_distance_est_wrapped_phase}. This paper focuses on the phase of arrival method which provides the most accurate range estimates in many applications. Phase of arrival has become the technique of choice in modern high precision surveying and global positioning ~\cite{Odijk-nteger-ambiguity-resolutionPPP, Teunissen_GPS_LAMBDA_2006, Teunissen_GPS_1995}.
A difficulty with phase of arrival is that only the principal component of the phase can be observed. This limits the range that can be unambiguously estimated. One approach to address this problem is to utilise signals of multiple different wavelengths and observe the phase at each.
Range estimators from such observations have been studied by numerous authors~\cite{Teunissen_GPS_1995,Hassibi_GPS_1998,Towers_frequency_selection_interferometry_2003,Li_distance_est_wrapped_phase}. Least squares/maximum likelihood and maximum a posteriori (MAP) estimators of range have been studied by Teunissen~\cite{Teunissen_GPS_1995}, Hassibi and Boyd~\cite{Hassibi_GPS_1998}, and more recently Li~et~al.~\cite{Li_distance_est_wrapped_phase}. A key realisation is that least squares and MAP estimators can be computed by solving a problem from computational number theory known as the~\emph{closest lattice point problem}~\cite{Babai1986,Agrell2002}. Teunissen~\cite{Teunissen_GPS_1995} appears to have been the first to realise this connection.
Efficient general purpose algorithms for computing a closest lattice point require a~\emph{basis} for the lattice. Constructing a basis for the least squares estimator of range is non-trivial. Based upon the work of Teunissen~\cite{Teunissen_GPS_1995}, and under some assumptions about the distribution of phase errors, Hassibi and Boyd~\cite{Hassibi_GPS_1998} construct a basis for the MAP estimator. Their construction does not apply for the least squares estimator.\footnote{The least squares estimator is also the maximum likelihood estimator under the assumptions made by Hassibi and Boyd~\cite{Hassibi_GPS_1998}. The matrix $G$ in~\cite{Hassibi_GPS_1998} is rank deficient in the least squares and weighted least squares cases and so $G$ is not a valid lattice basis. In particular, observe that the determinant of $G$~\cite[p.~2948]{Hassibi_GPS_1998} goes to zero as the a priori assumed variance $\sigma_x^2$ goes to infinity.} This is problematic because the MAP estimator requires sufficiently accurate prior knowledge of the range, whereas the least squares estimator is accurate without this knowledge. An explicit basis construction for the least squares estimator was recently given by Li~et~al.~\cite{Li_distance_est_wrapped_phase} under the assumption that the wavelengths can be scaled to pairwise relatively prime integers. In this paper, we remove the need for this assumption and give an explicit construction in the general case. This is important because the accuracy of the range estimator depends upon the wavelengths. Simulations show that a more accurate range estimator can be obtained using wavelengths that are suitable for our basis, but are not suitable for the basis of Li~et~al.~\cite{Li_distance_est_wrapped_phase}.
The paper is organised as follows. Section \ref{sec:ls-estimator} presents the system model and defines the least squares range estimator. Section \ref{sec:lattice-theory} introduces some required properties of lattices. Section~\ref{sec:range-estim-clos} shows how the least squares range estimator is given by computing a closest point in a lattice. An explicit basis construction for these lattices is described. Simulation results are discussed in Section~\ref{sec:simulation-results} and the paper is concluded by suggesting some directions for future research.
\section{Least squares estimation of range}\label{sec:ls-estimator}
Suppose that a transmitter sends a signal $x(t) = e^{2\pi i(ft + \phi)}$ with phase $\phi$ and frequency $f$ in Hertz. The signal is assumed to propagate by line of sight to a receiver resulting in the signal
\[
y(t) = \alpha x(t - r_0/c) + w(t) = \alpha e^{2\pi i(ft + \theta)} + w(t)
\]
where $r_0$ is the distance (or range) in meters between receiver and transmitter, $c$ is the speed at which the signal propagates in meters per second, $\alpha > 0$ is the real valued amplitude of the received signal, $w(t)$ represents noise, $\theta = \phi - r_0/\lambda$ is the phase of the received signal, and $\lambda = c/f$ is the wavelength. The receiver is assumed to be \emph{synchronised}, by which it is meant that the phase $\phi$ and frequency $f$ are known to the receiver.
Our aim is to estimate $r_0$ from the signal $y(t)$. To do this we first calculate an estimate $\hat{\theta}$ of the principal component of the phase $\theta$. In optical ranging applications $\hat{\theta}$ might be given by an interferometer. In sonar or radio frequency ranging applications $\hat{\theta}$ might be obtained from the complex argument of the demodulated signal $y(t)e^{-2\pi i f t}$. Whatever the method of phase estimation, the range $r_0$ is related to the phase estimate $\hat{\theta}$ by the phase difference
\begin{equation}
Y = \sfracpart{\phi - \hat{\theta}} = \sfracpart{ r_0/\lambda + \Phi }
\end{equation}
where $\Phi$ represents phase noise and $\sfracpart{x} = x - \floor{x + \tfrac{1}{2}}$ where $\floor{x}$ denotes the greatest integer less than or equal to $x$.
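A quick numerical sketch of the centered fractional part and of the phase-difference model; the range, wavelength and noise values below are arbitrary illustrations:

```python
import math

def sfrac(x):
    """Centered fractional part <x> = x - floor(x + 1/2), with values in [-1/2, 1/2)."""
    return x - math.floor(x + 0.5)

# phase difference Y = <r0/lambda + Phi> for an arbitrary example
r0, lam, phi_noise = 17.25, 4.0, 0.01
Y = sfrac(r0 / lam + phi_noise)
```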
For all integers $k$,
\begin{equation}
Y = \sfracpart{ r_0/\lambda + \Phi } = \sfracpart{(r_0 + k\lambda)/\lambda + \Phi},
\end{equation}
and so, the range is identifiable only if $r_0$ is assumed to lie in an interval of length $\lambda$. A natural choice is the interval $[0, \lambda)$. This poses a problem if the range $r_0$ is larger than the wavelength $\lambda$. To alleviate this, a common approach is to transmit multiple signals
$x_n(t) = e^{2\pi i(f_nt + \phi)}$ for $n = 1,\dots,N$, each with a different frequency $f_n$. Now $N$ phase estimates $\hat{\theta}_1,\dots,\hat{\theta}_N$ are computed along with phase differences
\begin{equation}\label{eq:Yndefn}
Y_n = \sfracpart{\phi - \hat{\theta}_n} = \sfracpart{ r_0/\lambda_n + \Phi_n} \qquad n = 1,\dots,N
\end{equation}
where $\lambda_n = c/f_n$ is the wavelength of the $n$th signal and $\Phi_1,\dots,\Phi_N$ represent phase noise. Given $Y_1,\dots,Y_N$, a pragmatic estimator of the range $r_0$ is a minimiser of the least squares objective function
\begin{equation}
LS(r) = \sum_{n=1}^N \sfracpart{Y_n - r/\lambda_n}^2.
\end{equation}
This least squares estimator is also the maximum likelihood estimator under the assumption that the phase noise variables $\Phi_1,\dots,\Phi_N$ are independent and identically distributed, each with a zero-mean wrapped normal distribution~\cite[p.~50]{Mardia_directional_statistics}\cite[p.~76]{McKilliam2010thesis}\cite[p.~47]{Fisher1993}.
The objective function $LS$ is periodic with period equal to the smallest positive real number $P$ such that $P/\lambda_n \in \ints$ for all $n=1,\dots,N$, that is, $P = \operatorname{lcm}(\lambda_1,\dots,\lambda_N)$ is the least common multiple of the wavelengths. The range is identifiable if we assume $r_0$ to lie in an interval of length $P$. A natural choice is the interval $[0,P)$ and we correspondingly define the least squares estimator of the range $r_0$ as
\begin{equation}\label{eq:hatdininterval}
\hat{r} = \arg\min_{r \in [0,P)} LS(r).
\end{equation}
If $\lambda_n/\lambda_k$ is irrational for some $n$ and $k$ then the period $P$ does not exist and the objective function $LS$ is not periodic. In this paper we assume this is not the case and that a finite period $P$ does exist.
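When the wavelengths are commensurable they admit a common rational representation, and for reduced fractions $a_i/b_i$ the period is $P = \operatorname{lcm}(a_1,\dots,a_N)/\gcd(b_1,\dots,b_N)$. A small sketch with hypothetical wavelength values:

```python
from fractions import Fraction
from math import gcd

def lcm_of_rationals(wavelengths):
    """Smallest positive P with P/w an integer for every rational wavelength w:
    for reduced fractions a_i/b_i this is lcm(a_1,...,a_N)/gcd(b_1,...,b_N)."""
    num, den = 1, 0
    for w in wavelengths:
        f = Fraction(w)                                    # reduces to lowest terms
        num = num * f.numerator // gcd(num, f.numerator)   # running lcm of numerators
        den = gcd(den, f.denominator)                      # running gcd of denominators
    return Fraction(num, den)

# hypothetical wavelengths 3/2, 5/2 and 2 metres give period P = 30 metres
P = lcm_of_rationals([Fraction(3, 2), Fraction(5, 2), Fraction(2)])
```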
\section{Lattice theory}\label{sec:lattice-theory}
Let $\Bbf$ be the $m\times n$ matrix with linearly independent column vectors $\bbf_1,\dots,\bbf_n$ from $m$-dimensional Euclidean space $\reals^m$ with $m\geq n$. The set of vectors
\[
\Lambda=\{ \Bbf\ubf \mid \ubf \in \ints^n \}
\]
is called an $n$-dimensional \term{lattice}.
The matrix $\Bbf$ is called a \term{basis} or \term{generator} for $\Lambda$. The basis of a lattice is not unique. If $\Ubf$ is an $n \times n$ matrix with integer elements and determinant $\det\Ubf=\pm 1$ then $\Ubf$ is called a \term{unimodular matrix} and $\Bbf$ and $\Bbf\Ubf$ are both bases for $\Lambda$.
The set of integers $\ints^n$ is called the \term{integer lattice} with the $n\times n$ identity matrix $\Ibf$ as a basis.
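The fact that $\Bbf$ and $\Bbf\Ubf$ generate the same lattice is easy to verify numerically in a small hypothetical example, using the test that a point $\xbf$ lies in the lattice of $\Bbf$ exactly when $\Bbf^{-1}\xbf$ is integral:

```python
from fractions import Fraction
from itertools import product

B  = [[2, 1], [0, 3]]
U  = [[1, 1], [0, 1]]     # integer entries, det U = 1, so U is unimodular
BU = [[2, 3], [0, 3]]     # the matrix product B U

def in_lattice(basis, x):
    """x lies in the lattice of a 2 x 2 basis iff basis^{-1} x is integral."""
    (a, b), (c, d) = basis
    det = a * d - b * c
    u1 = Fraction(d * x[0] - b * x[1], det)
    u2 = Fraction(-c * x[0] + a * x[1], det)
    return u1.denominator == 1 and u2.denominator == 1

# every point generated by B is generated by BU, and vice versa
ok = True
for u in product(range(-4, 5), repeat=2):
    p = (B[0][0] * u[0] + B[0][1] * u[1], B[1][0] * u[0] + B[1][1] * u[1])
    q = (BU[0][0] * u[0] + BU[0][1] * u[1], BU[1][0] * u[0] + BU[1][1] * u[1])
    ok = ok and in_lattice(BU, p) and in_lattice(B, q)
```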
Given a lattice $\Lambda$ its \term{dual lattice}, denoted $\Lambda^*$, contains those points that have integral inner product with all points from $\Lambda$, that is,
\[
\Lambda^* = \{ \xbf \mid \xbf^\prime \ybf \in \ints \text{ for all } \ybf \in \Lambda \}.
\]
The following proposition follows as a special case of Proposition~1.3.4 and Corollary~1.3.5 of \cite{Martinet2003}.
\begin{proposition} \label{cor:intlatticedim1}
Let $\vbf\in\ints^n$, let $H$ be the $n-1$ dimensional subspace orthogonal to $\vbf$, and let
\[
\Qbf = \Ibf - \frac{\vbf\vbf^\prime}{\vbf^\prime\vbf} = \Ibf - \frac{\vbf\vbf^\prime}{\|\vbf\|^2}
\]
be the $n\times n$ orthogonal projection matrix onto $H$. The set of vectors $\ints^n\cap H$ is an $n-1$ dimensional lattice with
dual lattice $(\ints^n \cap H)^* = \{ \Qbf \zbf \mid \zbf \in \ints^n \}$.
\end{proposition}
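Proposition~\ref{cor:intlatticedim1} can be sanity-checked numerically: for a hypothetical $\vbf = (1,2,2)'$, every projected integer vector $\Qbf\zbf$ has integral inner product with every small point of $\ints^{3} \cap H$, consistent with $\Qbf\zbf$ lying in the dual lattice.

```python
from fractions import Fraction
from itertools import product

v = (1, 2, 2)                      # hypothetical integer vector
norm2 = sum(c * c for c in v)      # ||v||^2 = 9

def Q(z):
    """Orthogonal projection of z onto the subspace H orthogonal to v (exact arithmetic)."""
    dot = sum(zi * vi for zi, vi in zip(z, v))
    return tuple(Fraction(zi) - Fraction(dot * vi, norm2) for zi, vi in zip(z, v))

# small points of the lattice Z^3 cap H
H_points = [z for z in product(range(-3, 4), repeat=3)
            if sum(zi * vi for zi, vi in zip(z, v)) == 0]

# every Qz has integral inner product with every listed point of Z^3 cap H
all_integral = all(
    sum(xi * yi for xi, yi in zip(Q(z), y)).denominator == 1
    for z in product(range(-2, 3), repeat=3) for y in H_points)
```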
Given a lattice $\Lambda$ in $\reals^m$ and a vector $\ybf \in \reals^m$, a problem of interest is to find a lattice point $\xbf \in \Lambda$ such that the squared Euclidean norm $\| \ybf - \xbf \|^2 = \sum_{i=1}^m (y_i - x_i)^2$ is minimised. This is called the \term{closest lattice point problem} (or \term{closest vector problem}) and a solution is called a \term{closest lattice point} (or simply \term{closest point}) to $\ybf$ \cite{Agrell2002}.
The closest lattice point problem is known to be NP-hard~\cite{micciancio_hardness_2001, Jalden2005_sphere_decoding_complexity}. Nevertheless, algorithms exist that can compute a closest lattice point in reasonable time if the dimension is small (less than about 60)~\cite{Kannan1987_fast_general_np,schnorr_euchner_sd_1994,Viterbo_sphere_decoder_1999,Agrell2002,MicciancioVoulgaris_deterministic_jv_2013}. These algorithms have gone by the name ``sphere decoder'' in the communications engineering and signal processing literature.
Although the problem is NP-hard in general, fast algorithms are known for specific highly regular lattices~\cite{McKilliam2009CoxeterLattices,McKilliam_closest_point_lattice_first_kind_2014}. For the purpose of range estimation the dimension of the lattice will be $N-1$ where $N$ is the number of frequencies transmitted. The number of frequencies is usually small (less than 10) and, in this case, general purpose algorithms for computing a closest lattice point are fast~\cite{Agrell2002}.
\section{Range estimation and the closest lattice point problem} \label{sec:range-estim-clos}
In this section we show how the least squares range estimator $\hat{r}$ from~\eqref{eq:hatdininterval} can be efficiently computed by computing a closest point in a lattice of dimension $N-1$. The derivation is similar to those in~\cite{McKilliamFrequencyEstimationByPhaseUnwrapping2009,McKilliam_mean_dir_est_sq_arc_length2010,McKilliam_pps_unwrapping_tsp_2014}. Our notation will be simplified by the change of variable $r = P\beta$, where $P$ is the least common multiple of the wavelengths. Put $v_n = P/\lambda_n \in \ints$ and define the function
\[
F(\beta) = LS(P\beta) = \sum_{n=1}^N\sfracpart{Y_n - \beta v_n}^2.
\]
Because $LS$ has period $P$ it follows that $F$ has period $1$. If $\hat{\beta}$ minimises $F$ then $P\hat{\beta}$ minimises $LS$ and, because $\hat{r} \in [0,P)$, we have $\hat{r} = P( \hat{\beta} - \sfloor{\hat{\beta}} )$. It is thus sufficient to find a minimiser $\hat{\beta} \in \reals$ of $F$.
Observe that $\sfracpart{Y_n - \beta v_n}^2 = \min_{z \in \ints} (Y_n - \beta v_n - z)^2$ and so $F$ may equivalently be written
\[
F(\beta) = \min_{z_1,\dots,z_N \in \ints} \sum_{n=1}^N (Y_n - \beta v_n - z_n)^2.
\]
The integers $z_1,\dots,z_N$ are often called \emph{wrapping variables} and are related to the number of whole wavelengths that occur over the range $r_0$ between transmitter and receiver. The minimiser $\hat{\beta}$ can be found by jointly minimising the function
\[
F_1(\beta, z_1,\dots,z_N) = \sum_{n=1}^N (Y_n - \beta v_n - z_n)^2
\]
over the real number $\beta$ and integers $z_1,\dots,z_N$. This minimisation problem can be solved by computing a closest point in a lattice.
To see this, define column vectors
\begin{align*}
\ybf &= (Y_1,\dots,Y_N)^\prime \in \reals^N, \\
\zbf &= (z_1,\dots,z_N)^\prime \in \ints^N, \\
\vbf &= (v_1,\dots,v_N)^\prime = \left(P/\lambda_1,\dots,P/\lambda_N\right)^\prime \in \ints^N.
\end{align*}
Now
\[
F_1(\beta, z_1,\dots,z_N) = F_1(\beta, \zbf) = \| \ybf - \beta\vbf - \zbf \|^2.
\]
The minimiser of $F_1$ with respect to $\beta$ as a function of $\zbf$ is
\[
\hat{\beta}(\zbf) = \frac{(\ybf - \zbf)^\prime\vbf}{\vbf^\prime\vbf}.
\]
Substituting this into $F_1$ gives
\[
F_2(\zbf) = \min_{\beta \in \reals} F_1(\beta, \zbf) = F_1\big(\hat{\beta}(\zbf), \zbf\big) = \| \Qbf\ybf - \Qbf\zbf \|^2
\]
where $\Qbf = \Ibf - \vbf\vbf^\prime/\|\vbf\|^2$ is the orthogonal projection matrix onto the $N-1$ dimensional subspace orthogonal to $\vbf$. Denote this subspace by $H$. By Proposition~\ref{cor:intlatticedim1} the set $\Lambda = \ints^N \cap H$ is an $N-1$ dimensional lattice with dual lattice $\Lambda^* = \{ \Qbf \zbf \mid \zbf \in \ints^N \}$. We see that the problem of minimising $F_2(\zbf)$ is precisely that of finding a closest point in the lattice $\Lambda^*$ to $\Qbf\ybf \in \reals^N$. Suppose we find $\hat{\xbf} \in \Lambda^*$ closest to $\Qbf \ybf$ and a corresponding $\hat{\zbf} \in \ints^N$ such that $\hat{\xbf} = \Qbf\hat{\zbf}$. Then $\hat{\zbf}$ minimises $F_2$ and $\hat{\beta}(\hat{\zbf})$ minimises $F$. The least squares range estimator in the interval $[0,P)$ is then
\begin{equation}\label{eq:leastsquaresrangehatz}
\hat{r} = P\big(\hat{\beta}(\hat{\zbf}) - \sfloor{\hat{\beta}(\hat{\zbf})}\big).
\end{equation}
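To make the estimator concrete, the following sketch (ours; the wavelengths, true range and search box are hypothetical, and the integer search is a naive brute force rather than a closest lattice point algorithm) evaluates $\hat{\beta}(\zbf)$ in closed form for each candidate $\zbf$ and wraps the winner into $[0,P)$:

```python
import itertools
import math

def range_estimate_bruteforce(Y, v, P, zmax=3):
    """Minimise F1(beta, z) = sum_n (Y_n - beta*v_n - z_n)^2 over beta and z.

    For each candidate integer vector z in a small box, the minimising beta
    has the closed form beta_hat(z) = (Y - z)'v / v'v; the z with the
    smallest residual wins.  A real implementation would use a closest
    lattice point algorithm instead of this brute force.
    """
    N = len(Y)
    vv = sum(x * x for x in v)
    best = None
    for z in itertools.product(range(-zmax, zmax + 1), repeat=N):
        beta = sum((Y[n] - z[n]) * v[n] for n in range(N)) / vv
        F1 = sum((Y[n] - beta * v[n] - z[n]) ** 2 for n in range(N))
        if best is None or F1 < best[0]:
            best = (F1, beta)
    beta_hat = best[1]
    return P * (beta_hat - math.floor(beta_hat))  # wrap into [0, P)

# Made-up example: wavelengths 2, 3, 5, so P = lcm = 30 and v_n = P/lambda_n.
P = 30.0
v = [15, 10, 6]
r0 = 7.25                                  # true range (hypothetical)
Y = [(r0 / P * vn) % 1.0 for vn in v]      # noiseless wrapped phases
r_hat = range_estimate_bruteforce(Y, v, P)
print(r_hat)                               # ~7.25 in the noiseless case
```

In the noiseless case the unique zero of $F_1$ inside the box recovers the true range exactly; with noise the same search returns the least squares estimate.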
It remains to provide a method to compute a closest point $\hat{\xbf} \in \Lambda^*$ and a corresponding $\hat{\zbf} \in \ints^N$. In order to use known general purpose algorithms we must first provide a basis for the lattice $\Lambda^*$~\cite{Agrell2002}. The projection matrix $\Qbf$ is \emph{not} a basis because it is not full rank. As noted by Li~et~al.~\cite{Li_distance_est_wrapped_phase}, a modification of the Lenstra--Lenstra--Lov\'asz algorithm due to Pohst~\cite{Pohst_modified_LLL_reduced_rank_1987} can be used to compute a basis given $\Qbf$. However, it is preferable to have an explicit construction, and Li~et~al.~\cite{Li_distance_est_wrapped_phase} give one under the assumption that the wavelengths $\lambda_1,\dots,\lambda_N$ can be scaled to pairwise relatively prime integers, that is, that there exists $c \in \reals$ such that $\gcd(c\lambda_k,c\lambda_n) = 1$ for all $k \neq n$.\footnote{The assumption that the wavelengths are pairwise relatively prime is made implicitly in equation (75) in~\cite{Li_distance_est_wrapped_phase}.} We now remove the need for this assumption and construct a basis in the general case. As a secondary benefit, we believe our construction to be simpler than that in~\cite{Li_distance_est_wrapped_phase}. The following proposition is required.
\begin{proposition}\label{prop:unimodMbasisfinder}
Let $\mathbf{U}$ be an $N \times N$ unimodular matrix with first column given by $\vbf$. A basis for the lattice $\Lambda^*$ is given by the projection of the last $N-1$ columns of $\mathbf{U}$ orthogonally onto $H$. That is, $\Qbf\ubf_2, \dots, \Qbf\ubf_{N}$ is a basis for $\Lambda^*$ where $\ubf_1,\dots,\ubf_{N}$ are the columns of $\Ubf$.
\end{proposition}
\begin{IEEEproof}
Because $\mathbf{U}$ is unimodular, it is a basis matrix for the integer lattice $\ints^N$. So every lattice point $\zbf \in \ints^N$ can be uniquely written as $\zbf = c_1 \ubf_1 + \dots + c_{N} \ubf_{N}$ where $c_1,\dots,c_N\in\ints$. The lattice
\begin{align*}
\Lambda^* &= \{ \Qbf\zbf \mid \zbf \in \ints^N \} \\
&= \{ \Qbf (c_1 \ubf_1 + \dots + c_{N} \ubf_{N}) \mid c_1,\dots,c_N\in\ints \} \\
&= \{ c_2 \Qbf\ubf_2 + \dots + c_{N} \Qbf\ubf_{N} \mid c_{2},\dots,c_{N}\in\ints \}
\end{align*}
because $\Qbf\mathbf{u}_1 = \Qbf\vbf = \zerobf$. It follows that $\Qbf\ubf_2,\dots,\Qbf\ubf_{N}$ form a basis for $\Lambda^*$.
\end{IEEEproof}
To find a basis for $\Lambda^*$ we require a matrix $\mathbf{U}$ as described by the previous proposition. Such a matrix is given by Li~et~al.~\cite[Eq.~(76)]{Li_distance_est_wrapped_phase} under the assumption that the wavelengths can be scaled to pairwise relatively prime integers. We do not require this assumption here.
Because $P = \operatorname{lcm}(\lambda_1,\dots,\lambda_N)$ it follows that the integers $v_1,\dots,v_N$ are \emph{jointly} relatively prime, that is, $\gcd(v_1,\dots,v_N) = 1$. Define integers $g_{1},\dots,g_N$ by $g_N = v_N$ and
\[
g_k = \gcd(v_k,\dots,v_N) = \gcd(v_k,g_{k+1}), \;\; k = 1,\dots,N-1
\]
and observe that $g_{k+1}/g_k$ and $v_k/g_k$ are relatively prime integers. For $k = 1,\dots,N-1$, define the $N$ by $N$ matrix $\Abf_k$ with $m,n$th element
\[
A_{kmn} = \begin{cases}
v_k/g_k & m=n=k \\
g_{k+1}/g_k & m=k+1, n=k \\
a_k & m=k, n=k+1 \\
b_k & m=n=k+1 \\
I_{mn} & \text{otherwise}
\end{cases}
\]
where $I_{mn} = 1$ if $m=n$ and $0$ otherwise. The integers $a_k$ and $b_k$ are chosen to satisfy
\begin{equation}\label{eq:akbkexteuc}
b_k \frac{v_k}{g_k} - a_k \frac{g_{k+1}}{g_k} = 1
\end{equation}
and can be computed by the extended Euclidean algorithm. The matrix $\Abf_k$ is equal to the identity matrix everywhere except at the $2$ by $2$ block of indices $k \leq m \leq k+1$ and $k \leq n \leq k+1$.
The matrix $\Abf_k$ is unimodular for each $k$ because it has integer elements and because the determinant of this $2$ by $2$ block satisfies
\[
\left\vert\begin{array}{cc}
v_k/g_k & a_{k} \\
g_{k+1}/g_k & b_{k}
\end{array}
\right\vert = b_k \frac{v_k}{g_k} - a_k \frac{g_{k+1}}{g_k} = 1
\]
as a result of~\eqref{eq:akbkexteuc}. A matrix $\Ubf$ satisfying the requirements of Proposition~\ref{prop:unimodMbasisfinder} is now given by the product
\[
\Ubf = \prod_{k=1}^{N-1} \Abf_k = \Abf_{N-1}\Abf_{N-2}\cdots\Abf_1.
\]
That $\Ubf$ is unimodular follows immediately from the unimodularity of $\Abf_1,\dots,\Abf_{N-1}$. It remains to show that the first column of $\Ubf$ is equal to $\vbf$. Let $\vbf_1,\dots,\vbf_{N-1}$ be column vectors of length $N$ defined as
\begin{align*}
&\vbf_k = (v_1,\dots,v_k, g_{k+1},0,\dots,0)^\prime, \qquad k = 1,\dots,N-2 \\
&\vbf_{N-1} = (v_1,\dots,v_{N-1}, g_{N})^\prime = \vbf.
\end{align*}
One can readily check that $\vbf_{k+1} = \Abf_{k+1}\vbf_k$ for all $k=1,\dots,N-2$. The first column of the matrix $\Abf_1$ is $\vbf_1$ and so, by induction, the first column of the product $\prod_{k=1}^K \Abf_k$ is $\vbf_K$ for all $K = 1,\dots,N-1$. It follows that the first column of $\Ubf$ is $\vbf_{N-1} = \vbf$ as required.
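As an illustration, the construction can be transcribed directly; the following sketch (ours, with a made-up $\vbf$ corresponding to wavelengths $2,3,5$ and $P=30$) builds the matrices $\Abf_k$ via the extended Euclidean algorithm and checks that the accumulated product has first column $\vbf$:

```python
import math

def ext_gcd(p, q):
    """Return (g, x, y) with x*p + y*q = g = gcd(p, q)."""
    if q == 0:
        return p, 1, 0
    g, x, y = ext_gcd(q, p % q)
    return g, y, x - (p // q) * y

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def unimodular_with_first_column(v):
    """U = A_{N-1}...A_1 with first column v; needs gcd(v_1,...,v_N) = 1."""
    N = len(v)
    g = [0] * N
    g[N - 1] = v[N - 1]
    for k in range(N - 2, -1, -1):      # g_k = gcd(v_k, g_{k+1})
        g[k] = math.gcd(v[k], g[k + 1])
    U = [[int(i == j) for j in range(N)] for i in range(N)]
    for k in range(N - 1):              # build A_k, accumulate the product
        p, q = v[k] // g[k], g[k + 1] // g[k]
        _, x, y = ext_gcd(p, q)         # x*p + y*q = 1
        b, a = x, -y                    # so b*(v_k/g_k) - a*(g_{k+1}/g_k) = 1
        A = [[int(i == j) for j in range(N)] for i in range(N)]
        A[k][k], A[k + 1][k] = p, q
        A[k][k + 1], A[k + 1][k + 1] = a, b
        U = matmul(A, U)                # U = A_{N-1} x ... x A_1
    return U

v = [15, 10, 6]      # e.g. v_n = P/lambda_n for made-up wavelengths 2, 3, 5
U = unimodular_with_first_column(v)
print([row[0] for row in U])   # [15, 10, 6]: the first column is v
```

For this $\vbf$ the routine returns $\Ubf = \Abf_2\Abf_1$ with first column $(15,10,6)'$ and determinant $1$, as the proposition requires.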
Let $\Ubf_2$ be the $N$ by $N-1$ matrix formed by removing the first column from $\Ubf$, that is, $\Ubf_2 =(\ubf_2,\dots,\ubf_N)$. By Proposition~\ref{prop:unimodMbasisfinder} a basis for $\Lambda^*$ is given by projecting the columns of $\Ubf_2$ orthogonally onto $H$, that is, a basis matrix for $\Lambda^*$ is the $N$ by $N-1$ matrix $\Bbf = \Qbf\Ubf_2$. Given $\Bbf$, a general purpose algorithm~\cite{Agrell2002} can be used to compute $\hat{\wbf} \in \ints^{N-1}$ such that $\hat{\xbf} = \Bbf\hat{\wbf}$ is a closest lattice point in $\Lambda^*$ to $\Qbf\ybf \in \reals^{N}$. Now
\[
\hat{\xbf} = \Bbf\hat{\wbf} = \Qbf\Ubf_2\hat{\wbf} = \Qbf\hat{\zbf}
\]
and so $\hat{\zbf} = \Ubf_2\hat{\wbf} \in \ints^N$. The least squares range estimator $\hat{r}$ is then given by~\eqref{eq:leastsquaresrangehatz}.
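The complete decoding chain can be sketched as follows (ours; the closest point search is a naive brute force over a box of coefficient vectors, a stand-in for a general purpose algorithm such as that of~\cite{Agrell2002}, and the wavelengths, matrix $\Ubf$ and true range are made-up values):

```python
import itertools
import math

# Made-up example: wavelengths 2, 3, 5, so P = 30 and v = (15, 10, 6).
P = 30.0
v = [15.0, 10.0, 6.0]
# A unimodular matrix with first column v, e.g. as produced by the A_k
# construction in the text:
U = [[15, 7, 0], [10, 5, 3], [6, 3, 2]]
N = len(v)
vv = sum(x * x for x in v)

def Q(z):
    """Orthogonal projection onto H: Qz = z - v (v'z)/||v||^2."""
    s = sum(v[i] * z[i] for i in range(N)) / vv
    return [z[i] - s * v[i] for i in range(N)]

# Basis for Lambda^*: the last N-1 columns of U, projected onto H.
B = [Q([U[i][j] for i in range(N)]) for j in range(1, N)]

def decode(y, wmax=25):
    """Brute-force stand-in for a closest point algorithm: search integer
    coefficient vectors w, return (z_hat, r_hat)."""
    Qy = Q(y)
    best = None
    for w in itertools.product(range(-wmax, wmax + 1), repeat=N - 1):
        x = [sum(w[j] * B[j][i] for j in range(N - 1)) for i in range(N)]
        d = sum((Qy[i] - x[i]) ** 2 for i in range(N))
        if best is None or d < best[0]:
            best = (d, w)
    w = best[1]
    z_hat = [sum(w[j] * U[i][j + 1] for j in range(N - 1)) for i in range(N)]
    beta = sum((y[i] - z_hat[i]) * v[i] for i in range(N)) / vv
    return z_hat, P * (beta - math.floor(beta))

r0 = 7.25                                   # true range (hypothetical)
y = [(r0 / P * vn) % 1.0 for vn in v]       # noiseless wrapped phases
z_hat, r_hat = decode(y)
print(r_hat)                                # ~7.25
```

Note that the recovered $\hat{\zbf}$ may differ from the "true" wrapping vector by an integer multiple of $\vbf$; this does not matter, since $\hat{\beta}(\hat{\zbf})$ then shifts by an integer and $\hat{r}$ is unchanged after wrapping into $[0,P)$.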
\begin{figure}[t]
\centering
\begin{tikzpicture}
\selectcolormodel{gray}
\begin{groupplot}[
group style={
group name=my plots,
group size=1 by 2,
xlabels at=edge bottom,
xticklabels at=edge bottom,
vertical sep=0pt
},
legend style={
draw=none,
fill=none,
legend pos=south east,
font=\footnotesize
},
ylabel={MSE},
ylabel style={at={(0.065,0.52)}},
footnotesize,
width=9.2cm,
height=4cm,
tickpos=left,
ytick align=inside,
xtick align=inside,
xmode=log,
ymode=log,
]
\nextgroupplot[]
\addplot[mark=o,only marks,mark options={scale=1}] table {code/data/LeastSquaresA};
\addplot[mark=*,only marks,mark options={scale=0.7}] table {code/data/LeastSquaresB};
\legend{Wavelengths A, Wavelengths B}
\nextgroupplot[
xlabel={$\sigma^2$},
xlabel style={at={(0.5,0.22)}},
ytick={10000,10,0.01,0.00001},
yticklabels={$10^4$,$10^1$,$10^{-2}$,$10^{-5}$}
]
\addplot[mark=o,only marks,mark options={scale=1}] table {code/data/LeastSquaresC};
\addplot[mark=*,only marks,mark options={scale=0.7}] table {code/data/LeastSquaresD};
\legend{Wavelengths C, Wavelengths D}
\end{groupplot}
\end{tikzpicture}
\caption{Comparison of the least squares range estimator with wavelengths $A$ (top) and $C$ (bottom) suitable for the basis in~\cite{Li_distance_est_wrapped_phase} and $B$ (top) and $D$ (bottom) suitable only for the basis described in this paper. Sets $B$ and $D$ result in smaller mean square error when the noise variance $\sigma^2$ is greater than approximately $1.2\times 10^{-4}$ and $7\times 10^{-5}$ respectively.}\label{fig:Comp_with_Li1}
\end{figure}
\section{Simulation Results}\label{sec:simulation-results}
We present the results of Monte-Carlo simulations with the least squares range estimator. Simulations with $N=4$ and $N=5$ wavelengths are performed. For each case we consider two different sets of wavelengths. The first set is suitable for the basis of Li~et~al.~\cite{Li_distance_est_wrapped_phase} and was used in the simulations in~\cite{Li_distance_est_wrapped_phase}. The second set is suitable only for our basis. In each simulation the true range is $r_0 = 20$ and the phase noise variables $\Phi_1,\dots,\Phi_N$ are wrapped normally distributed, that is, $\Phi_n = \fracpart{X_n}$ where $X_1,\dots,X_N$ are independent and normally distributed with zero mean and variance $\sigma^2$. In this case, the least squares estimator is also the maximum likelihood estimator. Figure~\ref{fig:Comp_with_Li1} shows the sample mean square error for $\sigma^2$ in the range $10^{-5}$ to $10^{-2}$, with $10^7$ Monte-Carlo trials used for each value of $\sigma^2$.
For $N=4$ the two sets of wavelengths are
\[
A = \{2, 3, 5, 7\}, \;\;\; B = \{\tfrac{210}{79}, \tfrac{210}{61}, \tfrac{210}{41}, \tfrac{210}{31}\}.
\]
For both sets the wavelengths are contained in the interval $[2,7]$ and $P = 210 = \operatorname{lcm}(A) = \operatorname{lcm}(B)$, so that the identifiable range is the same. The wavelengths $A$ are pairwise relatively prime integers, are suitable for the basis of Li~et~al.~\cite{Li_distance_est_wrapped_phase}, and were used in the simulations in~\cite{Li_distance_est_wrapped_phase}. The wavelengths $B$ are not suitable for the basis of~\cite{Li_distance_est_wrapped_phase} because they cannot be scaled to pairwise relatively prime integers. To see this, observe that the smallest positive number by which we can multiply the elements of $B$ to obtain integers is $c = \tfrac{6124949}{210}$. Multiplying the elements of $B$ by $c$ we obtain the set
\[
c \times B = \{77531, 100409, 149389, 197579 \}
\]
and these elements are not pairwise relatively prime because, for example, $\gcd(77531, 100409) = 1271$. Figure~\ref{fig:Comp_with_Li1} shows the results of simulations with both sets $A$ and $B$. When the noise variance $\sigma^2$ is small, wavelengths $A$ result in a slightly smaller sample mean square error than $B$. As $\sigma^2$ increases the sample mean square error exhibits a `threshold' effect and increases suddenly. The threshold occurs at $\sigma^2 \approx 1.2\times 10^{-4}$ for wavelengths $A$ and $\sigma^2 \approx 3\times 10^{-4}$ for wavelengths $B$. Wavelengths $B$ are therefore more accurate than $A$ when $\sigma^2$ is greater than approximately $1.2\times 10^{-4}$.
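The arithmetic in this example can be checked directly; the following short script (ours, using exact rational arithmetic) reproduces the scaling $c$, the scaled set and the common factor:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

B = [Fraction(210, q) for q in (79, 61, 41, 31)]   # the wavelengths B

# Smallest positive rational c with c*b an integer for every b in B:
# lcm of the (reduced) denominators over the gcd of the numerators.
lcm = lambda x, y: x * y // gcd(x, y)
c = Fraction(reduce(lcm, (b.denominator for b in B)),
             reduce(gcd, (b.numerator for b in B)))
scaled = [int(c * b) for b in B]
print(c)                           # 6124949/210
print(scaled)                      # [77531, 100409, 149389, 197579]
print(gcd(scaled[0], scaled[1]))   # 1271, so not pairwise relatively prime
```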
For $N=5$ the two sets of wavelengths are
\[
C = \{2, 3, 5, 7, 11\}, \;\;\; D = \{\tfrac{2310}{877}, \tfrac{2310}{523}, \tfrac{2310}{277}, \tfrac{2310}{221}, \tfrac{2310}{211}\}.
\]
For both sets all wavelengths are contained in the interval $[2,11]$ and $P = 2310 = \operatorname{lcm}(C) = \operatorname{lcm}(D)$, so that the maximum identifiable range is the same. The basis of Li~et~al.~\cite{Li_distance_est_wrapped_phase} can be used for wavelengths $C$ but not for $D$. The wavelengths $C$ were used in the simulations in~\cite{Li_distance_est_wrapped_phase}. Figure~\ref{fig:Comp_with_Li1} shows the result of Monte-Carlo simulations with these wavelengths. Wavelengths $C$ result in a slightly smaller sample mean square error than $D$ when $\sigma^2$ is small, but dramatically more error for $\sigma^2$ above the threshold occurring at $\sigma^2 \approx 7 \times 10^{-5}$.
The sets $B$ and $D$ have been selected based on a heuristic optimisation criterion. The properties of this criterion are not yet fully understood and will be the subject of a future paper.
\section{Conclusion}
We have considered least squares/maximum likelihood estimation of range from observation of phase at multiple wavelengths. The estimator can be computed by finding a closest point in a lattice. This requires a basis for the lattice. Bases have previously been constructed under the assumption that the wavelengths can be scaled to relatively prime integers. In this paper, we gave a construction in the general case and indicated by simulation that this can dramatically improve range estimates. An open problem is how to select wavelengths to maximise the accuracy of the least squares estimator. We will study this problem in future research.
\bigskip

\noindent\textbf{Properties of pointed and connected Hopf algebras of finite Gelfand-Kirillov dimension} (\texttt{https://arxiv.org/abs/1202.4121})

\medskip

\noindent\textbf{Abstract.} Let $H$ be a pointed Hopf algebra. We show that under some mild assumptions $H$ and its associated graded Hopf algebra $\gr H$ have the same Gelfand-Kirillov dimension. As an application, we prove that the Gelfand-Kirillov dimension of a connected Hopf algebra is either infinity or a positive integer. We also classify connected Hopf algebras of GK-dimension three over an algebraically closed field of characteristic zero.
\section{Introduction}
The Gelfand-Kirillov dimension (or GK-dimension for short) has been a useful tool for investigating infinite-dimensional Hopf algebras. For example, Hopf algebras of low GK-dimensions are studied in \cite{BZ,GZ,Li,Z,WZZ1,WZZ2}.
It is well known that every Hopf algebra $H$ has a coradical filtration $\{H_n\}_{n=0}^{\infty}$. If $H$ is pointed with group-like elements $G$, then the associated graded algebra of $H$ with respect to the filtration $\{H_n\}_{n=0}^{\infty}$ is a graded Hopf algebra, which we denote by $\gr H$. The structure of $\gr H$ is easier to analyse in the sense that it admits a nice decomposition $\gr H\cong R\#kG$, where $R$ is a certain graded subalgebra of $\gr H$ (see \cite[Theorem 3]{R}). In the first part of this paper, we clarify the behavior of the GK-dimension of a pointed Hopf algebra under passing to the associated graded algebra. In fact, we prove the following theorem.
\begin{theorem}[(Theorem $\ref{equality}$)]\label{First}
Retain the above notation. If $R$ is finitely generated, then
\[\GK R+ \GK kG=\GK \gr H=\GK H.\]
\end{theorem}
The first equality follows from Lemma \ref{addition}, which is a generalized version of \cite[Lemma 5.5]{Z}. The proof of the second equality depends heavily on Takeuchi's construction of free Hopf algebras, which we will review briefly in Section \ref{Takeuchi}.
An interesting phenomenon is that the GK-dimension of every known Hopf algebra is either infinity or a non-negative integer. So it is tempting to conjecture that this is always true for any Hopf algebra. As positive evidence for this conjecture, we prove in Theorem \ref{integer} that the GK-dimension of a connected Hopf algebra over an algebraically closed field of characteristic zero is either infinity or a positive integer. This is basically a consequence of Theorem \ref{equality} and the following result.
\begin{theorem}[(Proposition $\ref{polynomial}$)]
Let $K$ be a connected coradically graded Hopf algebra and assume that the base field $k$ is algebraically closed of characteristic $0$. If $K$ is finitely generated, then $K$ is isomorphic as an algebra to the polynomial ring in $\ell$ variables for some $\ell\ge0$.
\end{theorem}
For the definition of coradically graded Hopf algebras, one can refer to Definition \ref{gradedHopf}. Notice that if $H$ is a connected Hopf algebra, then $\gr H$ is a connected coradically graded Hopf algebra (see Remark \ref{coradically}).
In the last two sections, with the help of the results from previous sections, we classify connected Hopf algebras of GK-dimension three over an algebraically closed field of characteristic zero. The result can be stated as follows. Note that we use $P(H)$ to denote the space of primitive elements of $H$.
\begin{theorem}[(Theorem $\ref{classification}$)]
Let $H$ be a connected Hopf algebra of GK-dimension three (over an algebraically closed field of characteristic $0$). Then $H$ is isomorphic to one of the following:
\begin{enumerate}
\item[\textup{(I)}] The enveloping algebra $U(\mathfrak{g})$ for some three-dimensional Lie algebra $\mathfrak{g}$;
\item[\textup{(II)}] The Hopf algebras $A(0, 0, 0)$, $A(0, 0, 1)$, $A(1, 1, 1)$ or $A(1, \lambda, 0)$ from Example \ref{typeA} for some $\lambda\in k$;
\item[\textup{(III)}] The Hopf algebras $B(\lambda)$ from Example \ref{typeB} for some $\lambda\in k$.
\end{enumerate}
\end{theorem}
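For orientation, here is the growth computation behind the simplest instance of case (I); this is a standard illustration we add here, not part of the classification argument. If $\mathfrak{g}$ is the three-dimensional abelian Lie algebra, the PBW theorem identifies $U(\mathfrak{g})$ with the polynomial ring $k[x,y,z]$; taking the generating subspace $V=k+kx+ky+kz$, the span $V^n$ is the space of polynomials of degree at most $n$, so

```latex
% Illustration: g abelian of dimension 3, U(g) = k[x,y,z] by PBW,
% V = k + kx + ky + kz, V^n = polynomials of degree <= n.
\[
\dim_k V^n=\binom{n+3}{3}\sim\frac{n^3}{6},
\qquad
\GK U(\mathfrak{g})=\limsup_{n\to\infty}\log_n\dim_k V^n=3,
\]
```

which exhibits GK-dimension three directly from the growth of the filtration.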
\noindent{\bf Acknowledgment.} The author thanks Professor James Zhang, Professor Dingguo Wang, Xingting Wang and Cris Negron for useful conversations and for their careful reading of this paper. The research was partially supported by the US National Science Foundation.
\section{Preliminaries}\label{pre}
Throughout this paper, $k$ denotes a base field. All algebras, coalgebras and tensor products are taken over $k$ unless otherwise stated. Given a group $G$, we will use $kG$ to denote the group algebra of $G$ over $k$.
For a coalgebra $C$, we denote the comultiplication and the counit by $\Delta$ and $\e$, respectively. Let $G(C)$ be the set of the group-like elements in $C$, and $C^+$ be the kernel of the counit. The \textbf{coradical} $C_0$ of $C$ is defined to be the sum of all simple subcoalgebras of $C$. The coalgebra $C$ is called \textbf{pointed} if $C_0=kG(C)$, and \textbf{connected} if $C_0$ is one-dimensional. Also, we use $\{C_n\}_{n=0}^{\infty}$ to denote the coradical filtration of $C$ \cite[5.2.1]{Mont}. The coalgebra $C$ is called \textbf{coradically finite} if $\dim_k C_n<\infty$ for any $n$. For a Hopf algebra $H$, we use $P(H)$ to denote the space of primitive elements of $H$.
Let $C=\bigoplus\limits_{i=0}^{\infty }C(i)$ be a graded coalgebra. We say that $C$ is \textbf{coradically graded} if $C_0=C(0)$ and $C_1=C(0)\bigoplus C(1)$. If $C$ is coradically graded, then as shown by \cite[Lemma]{CM}, $C_n=\bigoplus_{i\le n}C(i)$ for any $n\ge 0$. Now we recall the definition of graded Hopf algebras, which will be used intensively in this paper.
\begin{definition}\label{gradedHopf}
Let $H$ be a Hopf algebra with antipode $S$. If
\begin{enumerate}
\item[(1)] $H=\bigoplus\limits_{i=0}^{\infty }H(i)$ is a graded algebra,
\item[(2)] $H=\bigoplus\limits_{i=0}^{\infty }H(i)$ is a graded coalgebra,
\item [(3)]$S(H(n))\subset H(n)$ for any $n\ge 0$,
\end{enumerate}
then $H=\bigoplus\limits_{i=0}^{\infty }H(i)$ is called a \textbf{graded Hopf algebra}. If in addition,
\begin{enumerate}
\item[(4)]$H=\bigoplus\limits_{i=0}^{\infty }H(i)$ is a coradically graded coalgebra,
\end{enumerate}
then $H$ is called a \textbf{coradically graded Hopf algebra}.
\end{definition}
\begin{remark}\label{coradically}
It turns out that the notion of coradically graded Hopf algebra is very natural. For example, let $H$ be a pointed Hopf algebra with coradical filtration $\{H_n\}_{n\ge 0}$. Then the associated graded space $\gr H=\bigoplus_{n\ge 0}H_n/H_{n-1}$ is a graded Hopf algebra \cite[p. 62]{Mont}. Moreover, as mentioned in \cite[Definition 1.13]{AS}, $\gr H$ is a coradically graded coalgebra. Therefore, $\gr H$ is a coradically graded Hopf algebra.
\end{remark}
\section{Takeuchi's construction of free Hopf algebras}\label{Takeuchi}
In \cite{T}, Takeuchi proved that for any coalgebra $C$ there exists a Hopf algebra $\mathcal{H}(C)$ characterized by the following universal property:
\begin{itemize}
\item[(1)] There is a coalgebra map $i: C\rightarrow \mathcal{H}(C)$,
\item[(2)] For any Hopf algebra $H$ and coalgebra map $f: C\rightarrow H$, there is a Hopf algebra map $f': \mathcal{H}(C)\rightarrow H$ such that $f=f'i$.
\end{itemize}
The Hopf algebra $\mathcal{H}(C)$ is called the free Hopf algebra generated by $C$ \cite[Definition 1]{T}. Takeuchi showed the existence of $\mathcal{H}(C)$ by an explicit construction, which we will describe briefly.
Let $V=\bigoplus_{i=0}^{\infty}V_i$ where $V_i=C$ if $i$ is even and $V_i=C^{cop}$ if $i$ is odd. Notice that $V$ has a natural coalgebra structure. Let $S: V\rightarrow V^{cop}$ be the coalgebra map sending $(x_0, x_1, x_2, \cdots)$ to $(0, x_0, x_1, x_2, \cdots)$. Then $S$ induces a bialgebra map $S: T(V)\rightarrow T(V)^{op, cop}$. Let $I$ be the two-sided ideal of $T(V)$ generated by the set
$$\{S*Id(x)-\e(x)1\, |\, x\in V\}\bigcup \{Id*S(x)-\e(x)1 \,|\, x\in V\},$$
where $*$ represents the convolution. Moreover, $I$ is a coideal and $S(I)\subset I$. By \cite[Lemma 1]{T}, the Hopf algebra $T(V)/I$, with the antipode induced from the map $S$, is the universal object $\mathcal{H}(C)$.
Takeuchi's construction generalizes the notion of a free group generated by a set in the following sense.
\begin{proposition}[{\cite[Lemma 34]{T}}]
$\mathcal{H}(kG(C))=k\langle G(C)\rangle$, where $\langle G(C)\rangle$ is the free group generated by the set $G(C)$.
\end{proposition}
Let $C$ be a coalgebra with coradical $C_0$ and let $C=C_0\oplus V$ be a decomposition of $C$ as a $k$-space. Then $\mathcal{H}(C)$ can be realized by giving a natural Hopf structure on the algebra $\mathcal{H} (C_0)\amalg T(V)$ \cite[\S 6]{T}, where $\amalg $ denotes the coproduct in the category of algebras. Now the canonical coalgebra map $i: C\rightarrow \mathcal{H}(C)$ can be identified with the map induced by maps $C_0\rightarrow \mathcal{H}(C_0)$ and $V\rightarrow T(V)$. By this characterization, we have
\begin{lemma}[{\cite[Theorem 35]{T}}]\label{basis}
Suppose that $C$ is a pointed coalgebra and $G(C)\cup B$ is a $k$-basis for $C$. Let $X=G(C)\cup G(C)^{-1}\cup B$ and let $Y$ be the set of finite sequences $(x_1, \cdots, x_n)$ of elements of $X$ such that $(x_i, x_{i+1})$ is not of the form $(g^{\pm1}, g^{\mp1})$ where $g\in G(C)$. Set
$$\overline{x}=x_1\cdots x_n\in \mathcal{H}(C) \,\,\text{for}\,\, x=(x_1, \cdots, x_n)\in Y,$$
where by abuse of notation we still use $x_j$ for its image in $\mathcal{H}(C)$ under the canonical map $i: C\rightarrow \mathcal{H}(C)$.
Then $\{\overline{x} \,|\, x\in Y\}$ forms a $k$-basis for $\mathcal{H}(C)$.
\end{lemma}
Now we are able to prove the following proposition, which determines the coradical of $\mathcal{H}(C)$ when $C$ is pointed. For a given $k$-subspace $W$ of an algebra $R$ and any $n\ge 1$, let $W^n$ denote the $k$-subspace of $R$ spanned by products of $\le n$ elements in $W$.
\begin{proposition}\label{coradicaluniversal}
Let $C$ be a pointed coalgebra. Then the following statements are true.
\begin{enumerate}
\item[\textup{(I)}] The Hopf algebra $\mathcal{H}(C)$ is pointed and the coradical of $\mathcal{H}(C)$ is equal to the subalgebra generated by $G(C)$ and their inverses, which is isomorphic to $k\langle G(C)\rangle$.
\item[\textup{(II)}] For any $n\ge 1$, $C^n$ is a subcoalgebra of $\mathcal{H}(C)$ and any element in $G(C^n)$ can be expressed as a product of $\le n$ elements in $G(C)$.
\end{enumerate}
\end{proposition}
\begin{proof} Denote $\mathcal{H}(C)$ by $H$. It is clear that the subalgebra of $H$ generated by $G(C)$ and their inverses is contained in $H_0$. By Lemma \ref{basis}, this subalgebra is isomorphic to $k\langle G(C)\rangle$.
Choose a subset $B$ of $C$ such that $B=\bigcup_{i=1}^{\infty}B_i$ and $G(C)\bigcup B_1\bigcup\cdots\bigcup B_n$ is a $k$-basis of $C_n$ for any $n$. Let $V$ be the $k$-space spanned by $B$. By \cite[\S 6]{T}, $H\cong k\langle G(C)\rangle\amalg T(V)$ as an algebra. Hence we can define a grading on $H$ by setting $\deg g=0$ for any $g\in G(C)$ and $\deg b_i=i$ for any $b_i\in B_i$. Under this grading, $H$ becomes a graded algebra (but not necessarily a graded coalgebra) and $C$ is a graded subspace of $H$. Let $H(n)$ be the $k$-subspace of $H$ spanned by homogeneous elements of degree $n$.
Now, fix a basis of $H$ described in Lemma \ref{basis} with the chosen set $B$. Then this is a basis consisting of homogeneous elements with respect to the grading defined above. For any $m\ge 0$, let $A_m=\sum_{i=0}^{m}H(i)$.
It is clear that $\{A_m\}_{m\ge 0}$ is an algebra filtration on $H$ and $A_0$ is exactly the subalgebra of $H$ generated by $G(C)$ and their inverses. Moreover, by the choice of $B$, we have $C_m\subset A_m$ for any $m\ge 0$. Let $\overline{x}=x_1\cdots x_n$ be as in Lemma \ref{basis} and suppose that $x_j\in B_{i_j}$ (here we set $B_0=G(C)\bigcup G(C)^{-1}$). Then by definition $A_m$ is spanned by $\overline{x}$ such that $\sum_j i_j\le m$. Pick such an $\overline{x}\in A_m$.
Notice that
$\Delta(\overline{x})=\Delta(x_1)\cdots\Delta(x_n)$ and for each $j$, $\Delta(x_j)\subset \sum_{t} C_t\otimes C_{i_j-t} \subset \sum_{t} A_t\otimes A_{i_j-t}$. As a consequence,
\[\Delta(\overline{x})\in\sum_{t} A_t\otimes A_{m-t}.\]
This shows that $\{A_m\}_{m\ge 0}$ is also a coalgebra filtration. By \cite[Lemma 5.3.4]{Mont}, $H_0\subset A_0$. This proves the first statement.
For the second statement, it is easy to check that $C^n$ is a subcoalgebra of $H$. As mentioned before, $C$ is a graded $k$-subspace of $H$. This means that $C=\bigoplus_{i=0}^{\infty}C(i)$ where $C(i)=C\bigcap H(i)$. Now $G(C^n)=C^n\bigcap G(H)\subset C^n\bigcap H(0)$. Since $C$ is a graded $k$-subspace of $H$, $C^n\bigcap H(0)=C(0)^n$. Notice that $C(0)$ is spanned by $G(C)$. Hence every element in $G(C^n)$ can be expressed as a linear combination with each summand a scalar multiple of a product of $\le n$ elements in $G(C)$. Since distinct group-like elements are $k$-linearly independent by \cite[3.2.1]{Sw}, every element in $G(C^n)$ is actually a product of $\le n$ elements in $G(C)$.
\end{proof}
\begin{remark}\label{grading}
Suppose that $C=\bigoplus_{i=0}^{\infty}C(i)$ is a pointed graded coalgebra such that $C(0)=C_0$. Now $C$ has a canonical decomposition $C=C_0\oplus V$, where $V=\bigoplus_{i=1}^{\infty}C(i)$. Then by the construction of the comultiplication and the antipode on $\mathcal{H}(C_0)\amalg T(V)$ in \cite[Lemma 26, Lemma 27]{T}, $\mathcal{H}(C)=\mathcal{H}(C_0)\amalg T(V)$ becomes a graded Hopf algebra, where elements in $\mathcal{H}(C_0)$ have degree $0$ and the grading on $T(V)$ is inherited from $V$. It is clear that the canonical inclusion $i: C\rightarrow \mathcal{H}(C)$ becomes a graded coalgebra map. Moreover, if there is a graded coalgebra map $f$ from $C$ to a graded Hopf algebra $H$, then the lifting Hopf algebra map $f': \mathcal{H}(C)\rightarrow H$ is also graded.
\end{remark}
\begin{corollary}\label{approx}
Let $H$ be a pointed Hopf algebra. Then every finite-dimensional subspace $V$ of $H$ is contained in a finitely generated Hopf subalgebra.
Moreover, if $D$ is a finite-dimensional subcoalgebra of $H$ with group-like elements $G(D)$, then elements in $G(D^n)$ can be expressed as products of $\le n$ elements in $G(D)$.
\end{corollary}
\begin{proof}
Since $V$ is contained in a finite-dimensional subcoalgebra of $H$, we can assume $V=D$ is a finite-dimensional subcoalgebra. Let $C$ be a copy of $D$ as coalgebras. Then there is an injective coalgebra map $f: C\rightarrow H$ whose image is $D$. By the universal property of $\mathcal{H}(C)$, there is a Hopf algebra map $f':\mathcal{H}(C)\rightarrow H$ such that $f=f'i$, where $i$ is the inclusion $C\rightarrow \mathcal{H}(C)$. (Notice that if we did not introduce a copy $C$ of $D$, then the symbol $D$ could mean either a subcoalgebra of $H$ or a subcoalgebra of $\mathcal{H}(D)$, which may cause confusion in the proof.) By Lemma \ref{basis}, $\mathcal{H}(C)$ is a finitely generated algebra. By construction, $D$ is contained in $f'(\mathcal{H}(C))$. This proves the first claim.
Let $S=\{g_1, \cdots, g_\ell\}$ be the set of group-like elements of $C$. Then $\{f(g_1), \cdots, f(g_\ell)\}$ is the set of group-like elements of $D$. Notice that $C^n$ is a subcoalgebra of $\mathcal{H}(C)$. By \textup{(II)} of Proposition \ref{coradicaluniversal}, every group-like element of $C^n$ can be expressed as a product of $\le n$ elements in $S$. Since $C^n$ is mapped onto $D^n$ by $f'$, we have $G(D^n)=f'(G(C^n))$ by \cite[Corollary 5.3.5]{Mont}. The result then follows.
\end{proof}
\begin{corollary}\label{0finite}
Let $H$ be a finitely generated pointed Hopf algebra. Then $H_0$ is a finitely generated algebra. In fact, if $D$ is a finite-dimensional subcoalgebra of $H$ that generates $H$ as an algebra, then $H_0$ is generated by $G(D)$.
\end{corollary}
\begin{proof}
Since every finite-dimensional subspace of $H$ is contained in a finite-dimensional subcoalgebra by \cite[5.1.1]{Mont}, we can assume that $H$ is generated as an algebra by a finite-dimensional subcoalgebra $D$. By assumption $H_0$ is spanned by $G(H)$. For any $g\in G(H)$, there exists some $n$ such that $g\in D^n$. Notice that $D^n$ is a subcoalgebra of $H$. Hence $g\in G(D^n)$. Now the result follows from the second statement of Corollary \ref{approx}.
\end{proof}
\begin{remark}\label{gradedaffine}
Let $H=\bigoplus_{i=0}^{\infty}H(i)$ be a graded pointed Hopf algebra such that $H(0)$ is spanned by all group-like elements of $H$. In this case, every finite-dimensional subspace $V$ of $H$ is contained in a finitely generated graded Hopf subalgebra of $H$. In fact, without loss of generality, we can assume that $V$ is a finite-dimensional graded subspace of $H$. By a similar argument as in \cite[Theorem 5.1.1]{Mont}, $V$ is contained in a finite-dimensional graded subcoalgebra $C$ of $H$. Then the result follows from Remark \ref{grading} and an argument similar to that of Corollary \ref{approx}.
\end{remark}
We conclude this section with a proposition regarding the GK-dimension of a pointed Hopf algebra, which is a direct consequence of Corollary \ref{approx}.
\begin{proposition}\label{localfinite}
Let $H$ be a pointed Hopf algebra. Then
$$\GK H=\sup_{E}\GK E,$$
where the supremum is taken over all finitely generated Hopf subalgebras of $H$.
\end{proposition}
\section{Pointed Hopf algebras and their associated graded Hopf algebras}\label{associated}
Throughout this section, let $H$ be a pointed Hopf algebra with group-like elements $G$. We use $\gr H$ to denote the associated graded Hopf algebra of $H$ with respect to the coradical filtration.
There is a canonical Hopf projection $\psi: \gr H\rightarrow H_0$. Let $R=(\gr H)^{co\psi}$, the algebra of coinvariants of $\psi$ \cite[1.5]{AS}. By definition $R$ is a graded subalgebra of $\gr H$. In fact, it is well known that $R$ is a graded braided Hopf algebra in $^{G}_G\mathcal{YD}$, the Yetter-Drinfeld category over $G$, and
\begin{equation}\label{bidecomp}
\gr H\cong R\#H_0,
\end{equation}
as Hopf algebras.
Let $H_0^+$ be the $k$-space spanned by the elements of the form $1-g$ where $g\in G$. Notice that $HH_0^+$ is a coideal of $H$. Denote the coalgebra $H/HH_0^+$ by $\theta(H)$ and the coalgebra projection $H\rightarrow \theta(H)$ by $\pi_H$.
\begin{lemma}\label{flatness}
Retain the above notation. Then $H/H_n$ is a free (right) $H_0$-module for any $n\ge 0$.
\end{lemma}
\begin{proof}
By \cite[Lemma 1]{Ra}, for any $m\ge0$, $H_{m+1}/H_m$ is a free $H_0$-module. Hence inductively we see that $H_{n+i}/H_n$ is a free $H_0$-module for any $i\ge1$ and $H_{n+i}/H_n\cong \bigoplus\limits_{j=0}^{i-1}H_{n+j+1}/H_{n+j}$ as $H_0$-modules. As a consequence, $H/H_n\cong \bigoplus\limits_{j\ge 0}H_{n+j+1}/H_{n+j}$ as $H_0$-modules. The result then follows.
\end{proof}
\begin{lemma}\label{intersection}
Retain the above notation and let $I=HH_0^+$. Then $I\cap H_n=H_nH_0^+$ for any $n\ge 0$.
\end{lemma}
\begin{proof}
For any $n\ge 0$, we have the short exact sequence
\[0\rightarrow H_n\rightarrow H\rightarrow H/H_n\rightarrow 0.\]
Since $H/H_n$ is a free $H_0$-module, the following sequence is exact,
\[0\rightarrow H_n\otimes_{H_0}k\rightarrow H\otimes_{H_0}k\rightarrow (H/H_n)\otimes_{H_0}k\rightarrow 0.\]
This shows that the canonical map $H_n/H_nH_0^+\rightarrow H/I$ is injective, which implies that $I\cap H_n=H_nH_0^+$.
\end{proof}
Let $F_n$ be $\pi_H(H_n)\subset \theta(H)$. Then $\{F_n\}_{n\ge0}$ becomes a coalgebra filtration on $\theta(H)$. The following lemma is clear.
\begin{lemma}\label{inducedfil}
Suppose that $f: C\rightarrow D$ is a surjective coalgebra map and $C$ has a coalgebra filtration $\{A_n\}_{n\ge0}$. Let $B_n=f(A_n)$. Then $\{B_n\}_{n\ge 0}$ is a coalgebra filtration on $D$. Moreover, $f$ induces a surjective graded coalgebra map $\gr_AC\rightarrow \gr_B D$.
\end{lemma}
By this lemma, we see that there is a surjective graded coalgebra map $\gr H\rightarrow \gr_F\theta(H)$ induced by $\pi_H$.
\begin{proposition}\label{Filtration}
Retain the above notation. Then $\gr_F\theta(H)$ is isomorphic to $\theta(\gr H)$ as graded coalgebras.
\end{proposition}
\begin{proof}
By definition, $\theta(\gr H)=\gr H/(\gr H)H_0^+$. So we only have to show that the kernel of the map $\gr H\rightarrow \gr_F\theta(H)$ induced by $\pi_H$ is $(\gr H)H_0^+$. It suffices to prove that for any $n\ge 0$, the canonical map
$H_{n+1}/H_n\rightarrow \pi_H(H_{n+1})/\pi_H(H_n)$ has kernel $(H_{n+1}/H_n)H_0^+$.
Let $I=HH_0^+$. It is easy to check that
\[\ker (H_{n+1}/H_n\rightarrow \pi_H(H_{n+1})/\pi_H(H_n))=\frac{H_{n+1}\cap(H_n+I)}{H_n}=\frac{H_{n+1}\cap I+H_n}{H_n}.\]
By Lemma \ref{intersection}, $H_{n+1}\cap I=H_{n+1}H_0^+$. Therefore,
\[\frac{H_{n+1}\cap I+H_n}{H_n}=\frac{H_{n+1}H_0^++H_n}{H_n}=(H_{n+1}/H_n)H_0^+.\]
This completes the proof.
\end{proof}
Now we are able to determine the coradical filtration of $\theta(H)$ by using the following lemma.
\begin{lemma}\label{gr}
Let $C$ be a coalgebra with a coalgebra filtration $\{F_n\}$ such that $F_0=C_0$. If the associated graded coalgebra with respect to $\{F_n\}$ is coradically graded, then $\{F_n\}$ agrees with the coradical filtration of $C$.
\end{lemma}
\begin{proof}
Denote the associated graded coalgebra with respect to $\{F_n\}$ by $\gr _FC$.
By the definition of coalgebra filtration and the fact that $F_0=C_0$, it is easy to see that $F_n\subset C_n$ for any $n\ge1$. If the assertion is not true, then choose $n$ to be minimal such that $F_n\subsetneq C_n$. Pick some element $y\in C_n\backslash F_n$. Then
\begin{equation}\label{com}
\Delta(y)\in \sum\limits_{i=0}^{n}C_i\otimes C_{n-i}= C_n\otimes F_0+F_0\otimes C_n+\sum\limits_{i=1}^{n-1}F_i\otimes F_{n-i}.
\end{equation}
Suppose $y\in F_m\backslash F_{m-1}$ for some $m\ge n+1$. Let $\overline{y}$ be the corresponding non-zero element in $\gr_FC(m)$. Since $\gr _FC$ is coradically graded, $\overline{y}$ is not in $(\gr _FC)_{m-1}$. But by $(\ref{com})$, $\overline{y}$ is in $(\gr _FC)_{1}$, which is a contradiction. This completes the proof.
\end{proof}
\begin{proposition}\label{piH}
The coradical filtration of the coalgebra $\theta(H)$ is $\{\pi_H(H_n)\}_{n\ge 0}$.
\end{proposition}
\begin{proof}
By Proposition \ref{Filtration}, $\gr_F\theta(H)\cong \theta(\gr H)$ as coalgebras. By the proof of \cite[Theorem 3]{R}, $\theta(\gr H)\cong R$ as graded coalgebras, where $R$ is defined in $(\ref{bidecomp})$. By \cite[p.15]{AS}, $R$ is coradically graded. The result now follows from Lemma \ref{gr}.
\end{proof}
\section{GK-dimensions of $H$ and $\gr H$}
This section is devoted to the proof of Theorem \ref{equality}. Let $H$ be a pointed Hopf algebra with group-like elements $G$. As mentioned in the previous section, $\gr H$ has a decomposition $\gr H\cong R\#H_0$ as in $(\ref{bidecomp})$. By construction, the graded algebra $R$ is connected in the sense that $R(0)=k$. So if $R$ is finitely generated as an algebra, then it is locally finite.
\begin{lemma}\label{socole}
Retain the above notation. Let $D$ be a graded subcoalgebra of $\gr H$. Then
\[D\subset \bigoplus_{i\ge0}\bigoplus_{h\in G(D)}R(i)h. \]
\end{lemma}
\begin{proof}
Let $y$ be a non-zero element in $D$. Since $D$ is graded, we can further assume that $y$ is homogeneous of degree $s$. If $s=0$, then $y\in D(0)=kG(D)$. If $s\ge 1$, then by the decomposition (\ref{bidecomp}), we can write $y=\sum_{i=1}^{N}y_ih_i$ where the $h_i$'s are distinct group-like elements and $0\neq y_i\in R(s)$. We only need to show that $h_i\in G(D)$ for any $i$. Let $\psi: \gr H\rightarrow H_0$ be the canonical Hopf projection. Then it is obvious that $\psi$ maps $D$ onto $D(0)=kG(D)$. Now we have
\[(Id\otimes \psi)\Delta(y)=\sum_{i=1}^{N}y_ih_i\otimes h_i\in D\otimes D(0).\]
Hence $h_i\in G(D)$ by the choice of $y_i$ and $h_i$.
\end{proof}
The next lemma concerns the GK-dimension of skew group algebras. It can be viewed as a generalization of \cite[Lemma 5.5]{Z}. Let $\Gamma$ be a group and $A$ an algebra with a left $\Gamma$-action. As $k$-spaces, the skew group algebra $A*\Gamma$ is isomorphic to $A\otimes k\Gamma$. The multiplication is given by
\[(a*g)(b*h)=a(g.b)*gh,\]
where $a, b\in A$, $g, h\in \Gamma$ and $a*g$ stands for $a\otimes g$. We will omit the $*$ in $a*g$ if there is no confusion. We say that the $\Gamma$-action on $A$ is \textbf{locally finite} if any finite-dimensional subspace of $A$ is contained in a finite-dimensional $\Gamma$-submodule of $A$.
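To illustrate these definitions, consider the following example (included only for illustration): let $A=k[x]$ and let $\Gamma=\mathbb{Z}=\langle g\rangle$ act on $A$ by $g.x=qx$ for some fixed $q\in k^{\times}$. Then
\[A*\Gamma\cong k\langle x, g^{\pm1}\rangle/(gx-qxg),\]
the skew Laurent polynomial ring $k[x][g^{\pm1};\sigma]$ with $\sigma(x)=qx$. The $\Gamma$-action is locally finite since $g.x^n=q^nx^n$, so every monomial spans a $\Gamma$-submodule. Counting the monomials $x^ig^j$ shows directly that $\GK A*\Gamma=2=\GK A+\GK k\Gamma$, in accordance with the next lemma.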
\begin{lemma}\label{addition}
Let $A$ and $\Gamma$ be as above and suppose that the $\Gamma$-action on $A$ is locally finite. Then
\[\GK A*\Gamma=\GK A+ \GK k\Gamma.\]
\end{lemma}
\begin{proof} We say a subalgebra $B$ of $A$ is $\mathbf{\Gamma}$-\textbf{affine} if $B$ is generated as an algebra by a finite-dimensional $\Gamma$-submodule of $A$.
It is easy to check by the local finiteness condition that
\[\GK A=\sup\limits_B \GK B,\]
where $B$ runs over all $\Gamma$-affine subalgebras of $A$.
Next we claim that
\[\GK A*\Gamma=\sup\limits_{B, L}\GK B*L,\]
where $B$ runs over all $\Gamma$-affine subalgebras of $A$ and $L$ runs over all finitely generated subgroups of $\Gamma$. In fact, by the definition of the GK-dimension, $\GK A*\Gamma=\sup\limits_{E} \GK E$, where the supremum is taken over all finitely generated subalgebras $E$ of $A*\Gamma$. Let $V$ be a finite-dimensional generating subspace of $E$. Then there exist $g_1, \cdots, g_s\in \Gamma$ and some finite-dimensional subspace $W$ of $A$ such that $V\subset Wg_1+Wg_2+\cdots +Wg_s$. By the local finiteness condition we can further
assume that $W$ is a finite-dimensional $\Gamma$-submodule of $A$.
Let $B$ be the subalgebra of $A$ generated by $W$ and let $L$ be the subgroup of $\Gamma$ generated by $g_1, \cdots, g_s$ and their inverses. Then it is clear that $B$ is $\Gamma$-affine and $E\subset B*L$. This proves the claim.
Now,
\begin{align*}
\GK A*\Gamma&=\sup\limits_{B, L}\GK B*L\\
&=\sup\limits_{B, L}( \GK B+\GK kL)\\
&=\sup\limits_B \GK B+\sup\limits_L\GK kL\\
&=\GK A+\GK k\Gamma.
\end{align*}
The second equality follows from \cite[Lemma 5.5]{Z}. For the last equality, one just notices that $\GK k\Gamma=\sup\limits_L\GK kL$ by Proposition \ref{localfinite}.
\end{proof}
Recall from \cite[\S 5.4]{Mont} that for $g\in G$,
\[P_{1, g}(H)=\{x\in H \,|\,\Delta(x)=x\otimes1+g\otimes x\}.\]
Let $P'_{1, g}(H)$ be a subspace of $P_{1, g}(H)$ such that $P_{1, g}(H)=k(1-g)\oplus P'_{1, g}(H)$ and define
\[P'_T(H)=\bigoplus_{g\in G}P'_{1, g}(H).\]
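For example, if $H=kG$ is a group algebra, then $1-g\in P_{1, g}(H)$ for every $g\in G$, since
\[\Delta(1-g)=1\otimes 1-g\otimes g=(1-g)\otimes 1+g\otimes(1-g).\]
In fact, for a group algebra all skew primitives are of this trivial form, so $P_{1, g}(kG)=k(1-g)$ and $P'_T(kG)=0$.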
\begin{lemma}\label{1finite}
Let $H$ be a pointed Hopf algebra. Then the following statements are equivalent.
\begin{itemize}
\item[\textup{(I)}] $\dim_k P'_T(H)< \infty$,
\item[\textup{(II)}] $\dim_k R(1)<\infty$,
\item[\textup{(III)}] the graded algebra $R$ is locally finite,
\item[\textup{(IV)}] the coalgebra $\theta(H)$ is coradically finite.
\end{itemize}
\end{lemma}
\begin{proof}
By \cite[Theorem 5.4.1(1)]{Mont} and the definition of $R$, $P'_T(H)\cong R(1)$ as $k$-spaces. Hence \textup{(I)} and \textup{(II)} are equivalent. Since $R$ is a coradically graded coalgebra and $R(0)=k$, the equivalence of \textup{(II)} and \textup{(III)} follows from \cite[Lemma 2.3 (2)]{MS}. As shown in the proof of Theorem \ref{equality}, $\dim_k \pi_H(H_n)=\dim_k\bigoplus_{i=0}^{n} R(i)$. By Proposition \ref{piH}, $\{\pi_H(H_n)\}_{n\ge0}$ is the coradical filtration of $\theta(H)$. This shows that \textup{(III)} and \textup{(IV)} are equivalent.
\end{proof}
Now we are ready to prove the main theorem of this section.
\begin{theorem}\label{equality}
Retain the above notation. Suppose that $\dim_k R(1)<\infty$. Then
\begin{equation}\label{ineq}
\GK R+ \GK kG=\GK \gr H\le\GK H\le\GK kG+\gamma,
\end{equation}
where $\gamma=\varlimsup\limits_{n\rightarrow \infty}\log_n \dim_k \pi_H(H_n)=\varlimsup\limits_{n\rightarrow \infty}\log_n \dim_k \bigoplus_{i=0}^{n} R(i)$. If $R$ is a finitely generated algebra, then
\begin{equation}\label{eq}
\GK R+ \GK kG=\GK \gr H=\GK H=\GK kG+\gamma.
\end{equation}
\end{theorem}
\begin{proof}
Let $V_n=\bigoplus_{i=0}^{n} R(i)$. By Lemma \ref{1finite}, $V_n$ is finite-dimensional for any $n$. Also, $\{V_n\}_{n\ge0}$ is the coradical filtration of $R$ since $R$ is a coradically graded coalgebra. On the other hand, $R\cong \theta(\gr H)$ as graded coalgebras. It then follows from Proposition \ref{Filtration} and Proposition \ref{piH} that $V_n\cong \pi_H(H_n)$ as $k$-spaces.
As a consequence, the number $\gamma$ is well defined.
It is well known that $R\#kG$ is just $R*G$ as algebras. Moreover, the $G$-action on $R$ preserves the grading. Since every finite-dimensional subspace $V$ of $R$ is contained in $V_s$ for some $s\ge0$, we see that the $G$-action on $R$ is locally finite. Now by Lemma \ref{addition}, $\GK R+ \GK kG=\GK \gr H$. By \cite[Lemma 6.5]{KL}, $\GK\gr H\le \GK H$. Next we are going to show that $\GK H\le\GK kG+\gamma$.
Now let $C$ be a finite-dimensional subspace of $H$. Without loss of generality, we can assume that $C$ is a subcoalgebra of $H$. Let $S=G(C)$. By the choice of $C$, the set $S$ is finite. Denote by $G_S(\ell)$ the set of elements in $G$ that can be expressed
as products of $\le \ell$ elements in $S\cup S^{-1}$ and let $g_S(\ell)=|G_S(\ell)|$. By Corollary \ref{approx}, $G(C^\ell)\subset G_S(\ell)$.
Suppose that $C\subset H_N$ for some $N\ge 1$. Then $C^n\subset H_{nN}$. Let $D=\gr C^n$, the associated graded coalgebra of $C^n$ with respect to its coradical filtration. Notice that $D$ is naturally embedded in $\gr H$. Since $G(D)$ can be identified with $G(C^n)$, we have $G(D)\subset G_S(n)$.
Now by Lemma \ref{socole}, we have
\begin{align*}
D &\subset \bigoplus_{i=0}^{nN}\bigoplus_{h\in G(D)}R(i)h\\
&=\bigoplus_{h\in G(D)}V_{nN}h\subset \bigoplus_{h\in G_S(n)}V_{nN}h.
\end{align*}
As a consequence,
\[\dim_k C^n=\dim_kD\le \dim_k V_{nN}\cdot g_S(n).\]
Therefore,
\begin{align*}
\varlimsup_{n\rightarrow \infty}\log_n \dim_k C^n&\le\varlimsup_{n\rightarrow \infty}\log_n (\dim_k V_{nN}\cdot g_S(n))\\
&\le\varlimsup_{n\rightarrow \infty}\log_n \dim_k V_{nN}+ \varlimsup_{n\rightarrow \infty}\log_ng_S(n)\\
&\le \varlimsup_{n\rightarrow \infty}\log_n \dim_k V_{n}+ \varlimsup_{n\rightarrow \infty}\log_ng_S(n)\\
&\le \gamma+ \GK kG.
\end{align*}
This proves $(\ref{ineq})$.
When $R$ is finitely generated, by \cite[Proposition 6.6]{KL}, $\GK R=\varlimsup\limits_{n\rightarrow \infty}\log_n \dim_k V_n=\gamma$. Combining this fact with $(\ref{ineq})$, we have $(\ref{eq})$.
\end{proof}
\begin{remark} In Theorem \ref{equality}, if we further assume that $G$ is a finite group, then $\{H_n\}_{n\ge 0}$ is a finite filtration in the sense that $\dim_k H_n< \infty$ for any $n\ge 0$. This is true because $H_n/H_{n-1}\cong R(n)\otimes kG$ as $k$-spaces for all $n$. In this case, the result $\GK \gr H= \GK H$ follows from \cite[Proposition 6.6]{KL}.
\end{remark}
\begin{remark} It is easy to check that $\gr H$ is finitely generated if and only if both $R$ and $kG$ are finitely generated. Hence if $\gr H$ is finitely generated, then $\GK R+ \GK kG=\GK \gr H=\GK H$.
\end{remark}
If $H$ is a finitely generated pointed Hopf algebra, then $G$ is a finitely generated group by Corollary \ref{approx}. However, the finite generation of $H$ does not imply that $\dim_kR(1)<\infty$, as shown in the following example.
\begin{example}
Let the base field $k$ be $\mathbb{F}_p$, where $p$ is a prime number. Let $H=k[x]$. Then $H$ is a connected Hopf algebra where $x$ is primitive. It is well known that $\gr H\cong k[x_1, x_2, \cdots]/(x^p_1, x^p_2, \cdots)$ with $x_i$ being primitive. As a consequence, $\GK \gr H=0$ since every finitely generated subalgebra of $\gr H$ is finite-dimensional. On the other hand, $\GK H=1$.
\end{example}
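The example rests on a standard characteristic-$p$ computation: since $\binom{p}{i}\equiv 0 \pmod p$ for $0<i<p$,
\[\Delta(x^p)=(1\otimes x+x\otimes 1)^p=1\otimes x^p+x^p\otimes 1,\]
so $x^{p^i}$ is primitive for every $i\ge 0$. Hence $H_1$, and therefore $R(1)\cong P(H)$, is infinite-dimensional, and $x_i$ may be taken to be the image of $x^{p^{i-1}}$ in $\gr H$.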
The above example relies heavily on the assumption that the base field $k$ has characteristic $p$. In fact, based on known examples, it is conjectured that if the base field is of characteristic $0$, and $H$ is finitely generated with finite GK-dimension, then $R(1)$ is always finite-dimensional. Some partial results are discussed in \cite[Section 3]{WZZ2}.
We conclude this section with a straightforward corollary. For the definitions and basic properties of Yetter-Drinfeld modules and Nichols algebras, a good reference is \cite{AS}.
\begin{corollary}
Let $H$ be a pointed Hopf algebra with group-like elements $G$. If $\gr H\cong \mathcal{B}(V)\#kG$, where $V$ is a finite-dimensional left Yetter-Drinfeld module over $G$ and $\mathcal{B}(V)$ is the Nichols algebra of $V$, then
\[\GK \mathcal{B}(V)+ \GK kG=\GK \gr H= \GK H.\]
\end{corollary}
\section{Connected Hopf algebras}
This section is primarily devoted to the study of connected Hopf algebras. Let $H$ be a connected Hopf algebra. Then its associated graded Hopf algebra $\gr H$ with respect to the coradical filtration is also connected. Moreover, the natural grading on $\gr H$ makes it into a coradically graded Hopf algebra as mentioned in Section \ref{pre}. In fact, we are able to show that if the base field is algebraically closed of characteristic $0$ and $\GK H<\infty$, then $\GK H$ must be a non-negative integer $\ell$ and $\gr H$ is isomorphic to the polynomial ring in $\ell$ variables as algebras (see Proposition \ref{polynomial} and Theorem \ref{integer}). As a consequence, we derive some ring-theoretic properties of such Hopf algebras. For instance, we show that they are always domains, which reproves an unpublished result by Le Bruyn.
By \cite[Lemma 5.2.10]{Mont}, a connected bialgebra is automatically a connected Hopf algebra. Furthermore, we have the following lemma.
\begin{lemma}\label{sub}
Let $H$ be a connected Hopf algebra and $K$ a sub-bialgebra of $H$. Then $K$ is a Hopf subalgebra of $H$.
\end{lemma}
\begin{proof}
Let $S$ be the antipode of $H$. We need to show that $S(K)\subset K$. It suffices to show that $S(K_n)\subset K$ for any $n\ge 0$. When $n=0$, the statement is true since $K_0$ is spanned by the unit $1$. For any $n\ge 1$ and $c\in K_n$, by \cite[Lemma 5.3.2]{Mont}, $\Delta(c)=1\otimes c+c\otimes 1+ \sum a_i\otimes b_i$, where $a_i, b_i\in K_{n-1}$.
Since $S$ is the convolution inverse of the identity map, we have $S(c)+c+\sum a_iS(b_i)=\e(c)1$.
By the induction hypothesis, each $S(b_i)\in K$, and hence $S(c)\in K$. This completes the proof.
\end{proof}
The following technical lemma will be used frequently in the rest of the paper.
\begin{lemma}\label{regular}
Let $f: A \rightarrow B$ be a surjective algebra map. If $A$ is a Noetherian prime algebra and $\GK A< \GK B+1<\infty$, then $f$ is an isomorphism.
\end{lemma}
\begin{proof}
We only have to show that $I:=\ker f$ is zero. If not, then by Goldie's theorem, the ideal $I$ contains a regular element. Now by \cite[Proposition 3.15]{KL}, $\GK B+1\le \GK A$. But this is a contradiction.
\end{proof}
\begin{lemma}\label{commutative}
Let $K=\bigoplus_{n=0}^{\infty}K(n)$ be a graded Hopf algebra with $K(0)=k$. Then the following statements are true.
\begin{itemize}
\item[\textup{(I)}] If $K$ is generated in degree one, then $K$ is cocommutative;
\item[\textup{(II)}] If $K$ is coradically graded, then $K$ is commutative.
\end{itemize}
\end{lemma}
\begin{proof}
It is easy to check that if a Hopf algebra is generated by elements $x$ such that $\Delta(x)=\tau\Delta(x)$, where $\tau$ is the twisting map, then the Hopf algebra is cocommutative. Since $K(1)$ is spanned by primitive elements, the statement $\textup{(I)}$ is true.
For the second statement,
by Remark \ref{gradedaffine}, we can assume, without loss of generality, that $K$ is finitely generated and thus locally finite. Let $S=\bigoplus_{n=0}^{\infty}K(n)^*$ be the graded dual of $K$. Then $S$ is also a graded Hopf algebra with $S(0)=k$. By \cite[Lemma 5.5]{AS2}, $S$ is generated in degree one and thus cocommutative by $\textup{(I)}$. Hence $K$ is commutative.
\end{proof}
Notice that for any connected Hopf algebra $H$, $\gr H$ is connected coradically graded. Hence the following proposition is clear.
\begin{proposition}
Let $H$ be a connected Hopf algebra. Then $\gr H$ is commutative.
\end{proposition}
In fact, if the base field is algebraically closed of characteristic $0$, we can say more about the algebra structure of a connected coradically graded Hopf algebra.
\begin{proposition}\label{polynomial}
Let $K=\bigoplus_{n=0}^{\infty}K(n)$ be a coradically graded Hopf algebra with $K(0)=k$ and assume that the base field $k$ is algebraically closed of characteristic $0$. If $K$ is finitely generated, then $K$ is isomorphic to the polynomial ring in $\ell$ variables for some $\ell\ge0$ as algebras.
\end{proposition}
\begin{proof}
Since $K$ is finitely generated commutative, $K\cong \mathcal{O}(\Gamma)$, the coordinate ring of some algebraic group $\Gamma$ over $k$. Hence $K$ has finite global dimension. Now the result follows from \cite[III.2.5]{NO}.
\end{proof}
The previous proposition leads to the following theorem, which is a result by Le Bruyn (unpublished).
\begin{theorem}\label{domain}
Assume that the base field $k$ is algebraically closed of characteristic $0$. Let $H$ be a connected Hopf algebra. Then $H$ is a domain.
\end{theorem}
\begin{proof}
We only need to show that $\gr H$ is a domain. By Remark \ref{gradedaffine}, every finite subset of $\gr H$ is contained in a finitely generated graded Hopf subalgebra of $\gr H$. By Proposition \ref{polynomial}, such subalgebras are domains. The result then follows.
\end{proof}
\begin{remark}
In Theorem \ref{domain}, the statement fails if the base field is of characteristic $p$. For example, let $k=\mathbb{F}_p$ and $H=k[x]/(x^p)$. It is well known that $H$ has a unique connected Hopf algebra structure under which $x$ is primitive. Obviously, $H$ is not a domain.
\end{remark}
\begin{lemma}\label{GKplus1}
Assume that the base field $k$ is algebraically closed of characteristic $0$. Let $K$ be a connected coradically graded Hopf algebra and $L$ a finitely generated graded Hopf subalgebra of $K$. If $L\neq K$, then $\GK K\ge \GK L+1$.
\end{lemma}
\begin{proof}
Let $N$ be the smallest number such that $L(N)\neq K(N)$. Pick $y\in K(N)\setminus L(N)$. By the choice of $N$ we see that $\Delta(y)=1\otimes y+y\otimes 1+w$, where $w\in \bigoplus_{i=1}^{N-1}L(i)\otimes L(N-i)$. Hence the algebra $P$ generated by $L$ and $y$ is a finitely generated graded sub-bialgebra of $K$. By Lemma \ref{sub}, $P$ is a Hopf subalgebra of $K$. By replacing $K$ with $P$, we may assume that $K$ is also finitely generated. Now we have an exact sequence of Hopf algebras
\[0\rightarrow L\rightarrow K\rightarrow H\rightarrow 0,\]
in the sense that $L$ is a normal Hopf subalgebra of $K$ and $H=K/L^+K=K/KL^+$. Since $L\neq K$, the connected Hopf algebra $H$ is not isomorphic to $k$ by \cite[Theorem 4.3]{T1} and therefore $\GK H\ge1$. By taking the spectrum, we get an exact sequence of algebraic groups \cite[Theorem 5.2]{T1}
\[1\rightarrow \Gamma_1\rightarrow \Gamma_2\rightarrow \Gamma_3\rightarrow 1.\]
By \cite[7.4 Proposition B]{H}, $\dim \Gamma_2=\dim \Gamma_1+\dim \Gamma_3$, where $\dim$ represents the dimension of an affine variety. It is well known that for any affine variety $X$, $\dim X=\GK \mathcal{O}(X)$. Consequently,
\[\GK K=\GK L+\GK H.\]
Now the result follows since $\GK H\ge1$.
\end{proof}
In the next section, we will lift the result to the ungraded case in Lemma \ref{GKcrit}.
\begin{lemma}\label{finiteGK}
Assume that the base field $k$ is algebraically closed of characteristic $0$. Let $K$ be a connected coradically graded Hopf algebra. Then $K$ has finite GK-dimension if and only if $K$ is finitely generated.
\end{lemma}
\begin{proof}
If $K$ is finitely generated, then by Proposition \ref{polynomial}, $K$ has finite GK-dimension. Now assume that $K$ is not finitely generated. If $\dim_kK(1)=\infty$, then $K$ has a Hopf subalgebra isomorphic to $U(\mathfrak{g})$, where $\mathfrak{g}:=K(1)$ is an infinite-dimensional Lie algebra. Hence $\GK K=\infty$. Now assume that $\dim_k K(1)<\infty$. It suffices to show that there is a chain of Hopf subalgebras $K^{(1)}\subset K^{(2)}\subset\cdots$ such that $\GK K^{(i)}+1\le\GK K^{(i+1)}$. Let $K^{(1)}$ be the subalgebra generated by $K(1)$. Then $K^{(1)}$ is a finitely generated graded Hopf subalgebra of $K$. Since $K^{(1)}\subsetneq K$ by assumption, there is some homogeneous element $y\in K\setminus K^{(1)}$ such that $\Delta(y)=1\otimes y + y\otimes 1+w$ where $w\in (K^{(1)})^+\otimes (K^{(1)})^+$. Let $K^{(2)}$ be the subalgebra generated by $K^{(1)}$ and $y$. It is obvious that $K^{(2)}$ is again a finitely generated graded Hopf subalgebra. By Lemma \ref{GKplus1}, $\GK K^{(2)}\ge \GK K^{(1)}+1$. Now $K^{(2)}\subsetneq K$, so we can repeat the above process and get the desired chain of Hopf subalgebras. This completes the proof.
\end{proof}
Now we are able to deliver the following theorem.
\begin{theorem}\label{integer}
Assume that the base field $k$ is algebraically closed of characteristic $0$ and let $H$ be a connected Hopf algebra. Then the following statements are equivalent:
\begin{enumerate}
\item[\textup{(I)}] $\GK H< \infty$;
\item[\textup{(II)}] $\GK \gr H< \infty$;
\item[\textup{(III)}] $\gr H$ is finitely generated;
\item[\textup{(IV)}] $\gr H$ is isomorphic to the polynomial ring in $\ell$ variables for some $\ell\ge0$ as algebras.
\end{enumerate}
In this case, $\GK H=\GK \gr H$, which is a non-negative integer.
\end{theorem}
\begin{proof}
If $\GK H=\infty$, we need to show that $\GK \gr H$ is also infinite. If not, then by Lemma \ref{finiteGK}, $\gr H$ is finitely generated. Then by Theorem \ref{equality} (or \cite[Proposition 6.6]{KL}), $\GK H=\GK \gr H< \infty$, which is a contradiction. If $\GK H<\infty$, then by \cite[Lemma 6.5]{KL}, $\GK \gr H\le \GK H<\infty$. Hence $\textup{(I)}$ and $\textup{(II)}$ are equivalent. The equivalence of $\textup{(II)}$ and $\textup{(III)}$ is just Lemma \ref{finiteGK}. The equivalence of $\textup{(III)}$ and $\textup{(IV)}$ follows from Proposition \ref{polynomial}.
If one of the four conditions holds, then $\GK H=\GK \gr H$ by Theorem \ref{equality}. Moreover, in this case $\GK \gr H$ is a non-negative integer by $\textup{(IV)}$. This completes the proof.
\end{proof}
As a consequence of Theorem \ref{integer}, a connected Hopf algebra enjoys many nice ring-theoretical properties. A few of them are listed in the following corollary.
\begin{corollary}
Assume that the base field $k$ is algebraically closed of characteristic $0$ and let $H$ be a connected Hopf algebra of GK-dimension $\ell<\infty$. Then $H$ is
\begin{itemize}
\item[\textup{(I)}] a noetherian domain of global dimension $\ell$ and Krull dimension $\le \ell$;
\item[\textup{(II)}] Auslander-regular;
\item[\textup{(III)}] GK-Cohen-Macaulay, i.e., for any non-zero finitely generated $H$-module $M$,
\[j(M)+\GK M=\GK H,\]
where $j(M):=\min\{n \,|\,\Ext^n_H(M, H)\neq 0\}$.
\end{itemize}
\begin{proof}
By Theorem \ref{integer}, $\gr H$ is a noetherian domain of global dimension $\ell$ and Krull dimension $\ell$. Now by \cite[Lemma I.12.12, Theorem I.12.13]{BG} and \cite[Lemma 5.6, Corollary 6.18]{MR}, $H$ is a noetherian domain of global dimension $\le \ell$ and Krull dimension $\le \ell$. Moreover, by taking $M$ to be the trivial $H$-module $k$ in $(\textup{III})$, we have $j(k)=\ell$. This shows that the global dimension of $H$ is $\ell$.
Since $\gr H$ is noetherian, the filtration $\{H_n\}_{n\ge 0}$ on $H$ is Zariskian by \cite[2.10]{Bj}. Then the statement $\textup{(II)}$ follows from \cite[Theorem 3.9]{Bj}.
For the statement $\textup{(III)}$, we first choose a good filtration $\{M_n\}_{n\in\mathbb{Z}}$ of $M$ in the sense of \cite[Definition 5.1]{LO}. It then follows from \cite[Lemma 5.4]{LO} that $\gr M$ is a finitely generated $\gr H$-module. It is clear that $\gr H$ is GK-Cohen-Macaulay.
Hence $j_{\gr H}(\gr M)+\GK \gr M=\GK \gr H$. As mentioned in the proof of \cite[Theorem 3.9]{Bj}, $j_{\gr H}(\gr M)=j(M)$. By Theorem \ref{integer}, $\GK \gr H=\GK H$ and $\gr H$ is a finitely generated algebra. It then follows from \cite[Proposition 6.6]{KL} that $\GK \gr M=\GK M$. This completes the proof.
\end{proof}
\end{corollary}
\section{Connected Hopf algebras of GK-dimension three}
\textbf{Throughout this section, the base field $k$ is algebraically closed of characteristic zero.} We are going to classify all connected Hopf algebras of GK-dimension three. To begin with, we introduce two classes of Hopf algebras.
\begin{example}\label{typeA}
Let $A$ be the algebra generated by elements $X, Y, Z$ satisfying the following relations,
\begin{align*}
[X, Y]=0,\\
[Z, X]=\lambda_1 X+\alpha Y,\\
[Z, Y]=\lambda_2 Y,
\end{align*}
where $\alpha=0$ if $\lambda_1\neq \lambda_2$ and $\alpha=0$ or $1$ if $\lambda_1= \lambda_2$.
Then $A$ becomes a Hopf algebra via
\begin{align*}
\e(X)=0,\quad \Delta(X)=1\otimes X+ X\otimes 1,\quad S(X)=-X,\\
\e(Y)=0,\quad \Delta(Y)=1\otimes Y+ Y\otimes 1,\quad S(Y)=-Y,\\
\e(Z)=0,\quad \Delta(Z)=1\otimes Z+X\otimes Y+ Z\otimes 1,\quad S(Z)=-Z+XY.
\end{align*}
We denote this Hopf algebra by $A(\lambda_1, \lambda_2, \alpha)$.
\end{example}
\begin{example}\label{typeB}
Let $B$ be the algebra generated by elements $X, Y, Z$ satisfying the following relations,
\begin{align*}
[X, Y]=Y,\\
[Z, X]=-Z+\lambda Y,\\
[Z, Y]=\frac{1}{2}Y^2,
\end{align*}
where $\lambda \in k$.
Then $B$ becomes a Hopf algebra via
\begin{align*}
\e(X)=0,\quad \Delta(X)=1\otimes X+ X\otimes 1,\quad S(X)=-X,\\
\e(Y)=0,\quad \Delta(Y)=1\otimes Y+ Y\otimes 1,\quad S(Y)=-Y,\\
\e(Z)=0,\quad \Delta(Z)=1\otimes Z+X\otimes Y+ Z\otimes 1,\quad S(Z)=-Z+XY.
\end{align*}
We denote this Hopf algebra by $B(\lambda)$.
\end{example}
\begin{proposition}
The algebras $A(\lambda_1, \lambda_2, \alpha)$ and $B(\lambda)$ are connected Hopf algebras of GK-dimension three.
\end{proposition}
\begin{proof}
We only prove the statement for $B(\lambda)$. The case of $A(\lambda_1, \lambda_2, \alpha)$ can be proved analogously.
As mentioned in \cite[Section 1]{GZ}, to check that $B(\lambda)$ is a Hopf algebra, it suffices to check the Hopf algebra axioms on a set of algebra generators for $B(\lambda)$, namely, $X, Y$ and $Z$. This is easy and we leave it to the reader.
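For instance, the essential coassociativity check is on $Z$, where
\begin{align*}
(\Delta\otimes Id)\Delta(Z)&=1\otimes 1\otimes Z+(1\otimes X+X\otimes 1)\otimes Y+(1\otimes Z+X\otimes Y+Z\otimes 1)\otimes 1,\\
(Id\otimes \Delta)\Delta(Z)&=1\otimes (1\otimes Z+X\otimes Y+Z\otimes 1)+X\otimes (1\otimes Y+Y\otimes 1)+Z\otimes 1\otimes 1,
\end{align*}
and both sides expand to the same six terms.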
By Bergman's Diamond Lemma, the algebra $B(\lambda)$ has a $k$-linear basis of monomials
\[\{X^{w_1}Y^{w_2}Z^{w_3}\},\]
where $w_i\in \mathbb{N}$. Define the degree of $X^{w_1}Y^{w_2}Z^{w_3}$ to be $w_1+w_2+2w_3$ and let $F_n$ be the $k$-space spanned by all monomials of degree $\le n$. It is easy to check, using the defining relations, that $\{F_n\}_{n\ge 0}$ is an algebra filtration on $B(\lambda)$. Hence by \cite[Lemma 6.1 (b)]{KL},
\[\GK B(\lambda)=\varlimsup_{n\rightarrow \infty}\log_n\dim_k F_n=3.\]
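The last equality follows from the elementary count
\[\dim_k F_n=\#\{(w_1, w_2, w_3)\in \mathbb{N}^3 \,|\, w_1+w_2+2w_3\le n\}=\frac{n^3}{12}+O(n^2),\]
so that $\log_n \dim_k F_n\rightarrow 3$ as $n\rightarrow\infty$.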
Next, we claim that $\{F_n\}_{n\ge 0}$ is also a coalgebra filtration on $B(\lambda)$, i.e. $\Delta(F_n)\subset\sum_{i=0}^{n}F_i\otimes F_{n-i}$ for any $n$. Let $X^{w_1}Y^{w_2}Z^{w_3}$ be a monomial such that $w_1+w_2+2w_3\le n$. Then
\begin{align*}
\Delta(X^{w_1}Y^{w_2}Z^{w_3})=&\Delta(X)^{w_1}\Delta(Y)^{w_2}\Delta(Z)^{w_3}\\
\in& (F_0\otimes F_1+F_1\otimes F_0)^{w_1+w_2}\cdot(\sum_{i=0}^2 F_i\otimes F_{2-i})^{w_3}\\
\subset &(\sum_{i=0}^{w_1+w_2} F_i\otimes F_{w_1+w_2-i})\cdot(\sum_{i=0}^{2w_3} F_i\otimes F_{2w_3-i})\\
\subset &\sum_{i=0}^{n}F_i\otimes F_{n-i}.
\end{align*}
For the last two inclusions, we use the fact that $\{F_n\}_{n\ge 0}$ is an algebra filtration. Then it follows from \cite[Lemma 5.3.4]{Mont} that the coradical of $B(\lambda)$ is contained in $F_0$, which is one-dimensional. Hence $B(\lambda)$ is a connected coalgebra. This completes the proof.
\end{proof}
Before moving on to study connected Hopf algebras of GK-dimension three, we still need a few lemmas.
\begin{lemma}\label{GKcrit}
Let $H$ be a connected Hopf $k$-algebra of finite GK-dimension and $K$ a Hopf subalgebra of $H$. If $\GK K=\GK H$, then $K=H$.
\end{lemma}
\begin{proof}
By \cite[Lemma 5.2.12]{Mont}, $\gr K$ is naturally embedded in $\gr H$ as a graded Hopf subalgebra. Also, by Theorem \ref{integer}, $\GK\gr K=\GK K=\GK H=\GK \gr H$ and they are all finitely generated. It suffices to show that $\gr K=\gr H$. If not, by Lemma \ref{GKplus1}, $\GK \gr H\ge \GK \gr K+1$, which is a contradiction.
\end{proof}
The following proposition is a direct consequence of Lemma \ref{GKcrit}.
\begin{proposition}\label{enveloping}
Let $H$ be a connected Hopf algebra of finite GK-dimension. Then $\GK H\ge \dim_kP(H)$. If $\GK H=\dim_k P(H)$, then $H\cong U(\mathfrak{g})$ as Hopf algebras, where $\mathfrak{g}=P(H)$. If $\GK H=3$, then $\dim_k P(H)=2$ or $3$.
\end{proposition}
\begin{proof} Let $\mathfrak{g}=P(H)$. Then the injective Lie algebra map $\mathfrak{g}\hookrightarrow H$ induces a Hopf algebra map $U(\mathfrak{g})\rightarrow H$. It is well known that $P(U(\mathfrak{g}))=\mathfrak{g}$ and therefore by \cite[Corollary 5.4.7]{Mont} the Hopf map $U(\mathfrak{g})\rightarrow H$ is injective. By the PBW Theorem, $\GK U(\mathfrak{g})=\dim_k \mathfrak{g}$. Hence $\GK H\ge \GK U(\mathfrak{g})= \dim_kP(H)$. If $\GK H=\dim_k P(H)$, then we have $\GK H= \GK U(\mathfrak{g})$. Hence $H=U(\mathfrak{g})$ by Lemma \ref{GKcrit}. The last statement is from \cite[Lemma 5.11]{Z}.
\end{proof}
The following proposition is an easy consequence of Proposition \ref{enveloping}. It is also mentioned in \cite{GZ}.
\begin{proposition}\label{lessthan3}
Let $H$ be a connected Hopf algebra of GK-dimension strictly less than $3$. Then $\GK H= 0, 1$ or $2$. In fact,
\begin{enumerate}
\item[\textup{(I)}] if $\GK H=0$, then $H\cong k$, the trivial Hopf algebra;
\item[\textup{(II)}] if $\GK H=1$, then $H\cong k[x]$ with $x$ being primitive;
\item[\textup{(III)}] if $\GK H=2$, then $H\cong U(\mathfrak{g})$, where $\mathfrak{g}$ is either the $2$-dimensional abelian Lie algebra or the Lie algebra with basis $\{x, y\}$ and $[x, y]=y$.
\end{enumerate}
\end{proposition}
Now we focus on connected Hopf algebras of GK-dimension three.
The following theorem is the key to our main theorem of this section.
\begin{theorem}\label{3generators}
Let $H$ be a connected Hopf algebra of GK-dimension $\ge 3$ such that $\dim_k P(H)=2$. Then for any linearly independent primitive elements $x, y$, there exists $z\in H$ such that $\Delta(z)=1\otimes z+x\otimes y+z\otimes 1$. Moreover, if in addition $\GK H=3$, then for any such $z$, the set $\{x, y, z\}$ generates $H$ as an algebra.
\end{theorem}
We postpone the proof to the last section.
Now we are ready to deliver the main theorem of this section.
\begin{theorem}\label{classification}
Let $H$ be a connected Hopf algebra of GK-dimension three. Then $H$ is isomorphic to one of the following:
\begin{enumerate}
\item[\textup{(I)}] The enveloping algebra $U(\mathfrak{g})$ for some three-dimensional Lie algebra $\mathfrak{g}$;
\item[\textup{(II)}] The Hopf algebras $A(0, 0, 0)$, $A(0, 0, 1)$, $A(1, 1, 1)$ or $A(1, \lambda, 0)$ from Example \ref{typeA} for some $\lambda\in k$;
\item[\textup{(III)}] The Hopf algebras $B(\lambda)$ from Example \ref{typeB} for some $\lambda\in k$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Proposition \ref{enveloping}, $\dim_k P(H)$ is either $2$ or $3$. If $\dim_k P(H)=3$, then by Proposition $\ref{enveloping}$, $H\cong U(\mathfrak{g})$ as Hopf algebras, where $\mathfrak{g}=P(H)$. This gives the Hopf algebras in $\textup{(I)}$. Now we focus on the case $\dim_k P(H)= 2$.
Let $\mathfrak{h}=P(H)$. Then $\mathfrak{h}$ is a two-dimensional Lie algebra. It is well known that there are exactly two isomorphism classes of two-dimensional Lie algebras.
Case 1: The Lie algebra $\mathfrak{h}$ is spanned by $x$ and $y$ with $[x, y]=0$. By Theorem \ref{3generators}, there is $z\in H$ such that $\Delta(z)=1\otimes z+x\otimes y+z\otimes 1$ and $x, y, z$ generate $H$ as an algebra. It is easy to check that $[z, x]$ and $[z, y]$ are primitive elements. Therefore
\begin{align}\label{relation1}
[z, x]&=a_{11}x+ a_{12}y,\\
[z, y]&=a_{21}x+ a_{22}y,\nonumber
\end{align}
where $a_{ij}\in k$.
Let $P$ be a $2\times2$ invertible matrix such that $P^{-1}AP$ is a Jordan matrix, where $A=(a_{ij})$. We take $\det P=1$. Let $P=(b_{ij})$ and $P^{-1}=(c_{ij})$. Then by setting $x'=b_{11}x+b_{21}y$ and $y'=b_{12}x+b_{22}y$, the relations $(\ref{relation1})$ become
\begin{align*}
[z, x']&=\lambda_1x'+ \alpha y',\\
[z, y']&=\lambda_2y',\nonumber
\end{align*}
where $\left({\begin{array}{cc}\lambda_1 &\alpha\\ 0 &\lambda_2\end{array}}\right)$ is a Jordan matrix. Now we have
\[\Delta(z)=1\otimes z+(c_{11}x'+c_{21}y')\otimes(c_{12}x'+c_{22}y')+z\otimes 1.\]
Let $z'=z-\frac{1}{2}c_{11}c_{12}{x'}^2-\frac{1}{2}c_{21}c_{22}{y'}^2-c_{12}c_{21}x'y'$. Then a direct calculation (using that $x'$ and $y'$ are commuting primitives and that $\det P^{-1}=1$) shows that
\begin{equation}\label{comul}
\Delta(z')=1\otimes z' +x'\otimes y'+z'\otimes 1,
\end{equation}
and
\begin{align}\label{relation2}
[z', x']&=\lambda_1x'+ \alpha y',\\
[z', y']&=\lambda_2y'.\nonumber
\end{align}
Notice that $x', y', z'$ generate $H$ and $[x', y']=0$.
If $\lambda_1=\lambda_2=0$ and $\alpha=0$ (resp.\ $\alpha=1$), then there is a surjective Hopf map from $A(0, 0, 0)$ (resp.\ $A(0, 0, 1)$) to $H$ sending $X, Y, Z$ to $x', y', z'$, respectively.
If $\lambda_1=\lambda_2\neq 0$ and $\alpha=1$, then there is a surjective Hopf map from $A(1, 1, 1)$ to $H$ sending $X, Y, Z$ to $x', \frac{1}{\lambda_1}y', \frac{1}{\lambda_1}z'$, respectively.
If $\lambda_1\neq 0$ and $\alpha=0$, then there is a surjective Hopf map from $A(1, \lambda_2/\lambda_1, 0)$ to $H$ sending $X, Y, Z$ to $\frac{1}{\lambda_1}x', y', \frac{1}{\lambda_1}z'$, respectively.
If $\lambda_2\neq 0$ and $\alpha=0$, then there is a surjective Hopf map from $A(1, \lambda_1/\lambda_2, 0)$ to $H$ sending $X, Y, Z$ to $\frac{1}{\lambda_2}y', -x', \frac{1}{\lambda_2}(z'-x'y')$, respectively.
By Lemma \ref{regular}, all the above surjective Hopf maps are isomorphisms. This completes the proof of $\textup{(II)}$.
Case 2: The Lie algebra $\mathfrak{h}$ is spanned by $x$ and $y$ with $[x, y]=y$. Again by Theorem \ref{3generators}, there is $z\in H$ such that $\Delta(z)=1\otimes z+x\otimes y+z\otimes 1$ and $x, y, z$ generate $H$ as an algebra. A straightforward calculation shows that $[z, y]-\frac{1}{2}y^2$ and $[z, x]+z$ are primitive elements.
Therefore
\begin{align*}
[z, x]&=-z+a_{11}x+ a_{12}y,\\
[z, y]&=\frac{1}{2}y^2+a_{21}x+ a_{22}y,
\end{align*}
where $a_{ij}\in k$. By replacing $z$ with $z-a_{11}x$, we can assume that $a_{11}=0$. We claim that $a_{21}=a_{22}=0$. Notice that the relations between $x, y, z$ can be rewritten as
\begin{align*}
yx&= xy-y,\\
zx&= xz-z+ a_{12}y,\\
zy&= yz+\frac{1}{2}y^2+a_{21}x+a_{22}y.
\end{align*}
By these relations, we have
\[z(yx)=xyz+\frac{1}{2}xy^2+a_{21}x^2+a_{22}xy-2yz+(a_{12}-1)y^2-2a_{21}x-2a_{22}y.\]
On the other hand,
\[(zy)x=xyz+\frac{1}{2}xy^2+a_{21}x^2+a_{22}xy-2yz+(a_{12}-1)y^2-a_{22}y.\]
Since $z(yx)=(zy)x$ by associativity, we have $-2a_{21}x-2a_{22}y=-a_{22}y$, which implies that $a_{21}=a_{22}=0$. Now it is clear that there is a surjective Hopf algebra map from $B(a_{12})$ to $H$ sending $X, Y, Z$ to $x, y, z$, respectively.
By Lemma \ref{regular}, this surjective map is an isomorphism. This completes the proof of $\textup{(III)}$.
\end{proof}
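The associativity computation in Case 2 above is mechanical enough to verify by computer. The following sketch (ours, not part of the paper) encodes the three rewriting rules $yx=xy-y$, $zx=xz-z+a_{12}y$, $zy=yz+\frac{1}{2}y^2+a_{21}x+a_{22}y$ with sample numeric values for the $a_{ij}$, expands $z(yx)$ and $(zy)x$ into PBW-ordered normal form, and confirms that their difference is $-2a_{21}x-a_{22}y$, which associativity forces to vanish:

```python
from fractions import Fraction

# Sample numeric values for the scalars a_ij (hypothetical choices).
a12, a21, a22 = Fraction(3), Fraction(5), Fraction(7)

# Rewriting rules putting words into PBW order x < y < z:
#   yx -> xy - y,  zx -> xz - z + a12*y,  zy -> yz + (1/2)y^2 + a21*x + a22*y
RULES = {
    ('y', 'x'): [(('x', 'y'), Fraction(1)), (('y',), Fraction(-1))],
    ('z', 'x'): [(('x', 'z'), Fraction(1)), (('z',), Fraction(-1)), (('y',), a12)],
    ('z', 'y'): [(('y', 'z'), Fraction(1)), (('y', 'y'), Fraction(1, 2)),
                 (('x',), a21), (('y',), a22)],
}

def add(p, q):
    """Sum of two linear combinations of words, dropping zero terms."""
    r = dict(p)
    for w, c in q.items():
        r[w] = r.get(w, 0) + c
    return {w: c for w, c in r.items() if c != 0}

def mul(p, q):
    """Product: concatenate words, multiply coefficients."""
    r = {}
    for w1, c1 in p.items():
        for w2, c2 in q.items():
            r[w1 + w2] = r.get(w1 + w2, 0) + c1 * c2
    return r

def normal_form(p):
    """Rewrite until every word has its letters in nondecreasing order."""
    changed = True
    while changed:
        changed = False
        for w, c in list(p.items()):
            for i in range(len(w) - 1):
                pair = (w[i], w[i + 1])
                if pair in RULES:
                    repl = {w[:i] + rw + w[i + 2:]: c * rc for rw, rc in RULES[pair]}
                    p = add({u: cu for u, cu in p.items() if u != w}, repl)
                    changed = True
                    break
            if changed:
                break
    return p

x, y, z = {('x',): Fraction(1)}, {('y',): Fraction(1)}, {('z',): Fraction(1)}
lhs = normal_form(mul(z, normal_form(mul(y, x))))   # expand z(yx) as in the proof
rhs = normal_form(mul(normal_form(mul(z, y)), x))   # expand (zy)x as in the proof
diff = add(lhs, {w: -c for w, c in rhs.items()})
# Associativity forces this difference to vanish, i.e. a21 = a22 = 0.
assert diff == {('x',): -2 * a21, ('y',): -a22}
```

With these sample values the difference is $-10x-7y$, nonzero; in $H$ it must be zero, which is exactly the paper's conclusion $a_{21}=a_{22}=0$.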
\begin{remark} It is clear from the proof of Theorem \ref{classification} that any Hopf algebra $H$ listed in $\textup{(II)}$ and $\textup{(III)}$ has GK-dimension $3$ and $\dim_k P(H)=2$.
\end{remark}
In fact, by following the same lines as in Cases 1 and 2 of the proof of the previous theorem, we obtain the following proposition.
\begin{proposition}
Let $H$ be a connected Hopf algebra of GK-dimension $\ge 3$ such that $\dim_k P(H)=2$. Then for any linearly independent primitive elements $x, y$, there exists $z\in H$ such that $\Delta(z)=1\otimes z+x\otimes y+z\otimes 1$. Moreover, the algebra $C$ generated by $x, y, z$ is a Hopf subalgebra of $H$ and $\GK C=3$.
\end{proposition}
For the rest of this section, we are going to look closer at the Hopf algebras listed in Theorem \ref{classification} $\textup{(II)}$ and $\textup{(III)}$.
\begin{lemma}\label{dim1}
Let $H$ be a connected Hopf algebra of GK-dimension three such that $\dim_k P(H)=2$ and let $x, y, z$ be a set of generators as described in Theorem \ref{3generators}. Denote by $K$ the Hopf subalgebra generated by $x$ and $y$. Then $H_2/K_2$ is spanned by the image of $z$.
\end{lemma}
\begin{proof}
It is clear that $z\in H_2\setminus K_2$. Hence we only have to show that $H_2/K_2$ is one-dimensional. Notice that $H_1=K_1$. Therefore it suffices to show that $\gr H(2)/\gr K(2)$ is one-dimensional. By Theorem \ref{3generators}, $\overline{x}, \overline{y}\in \gr H(1)$ and $\overline{z}\in \gr H(2)$ generate $\gr H$. It is also clear that $\gr K$ is the Hopf subalgebra of $\gr H$ generated by $\overline{x}$ and $\overline{y}$. As a consequence, $\gr H(2)= k\overline{z}+(\gr H(1))^2= k\overline{z}+(\gr K(1))^2= k\overline{z}+\gr K(2)$. This completes the proof.
\end{proof}
For any Hopf algebra $H$, the commutator ideal $[H, H]$ is a Hopf ideal \cite[Lemma 3.7]{GZ}. We call $H/[H, H]$ the \textbf{abelianization} of $H$. For any $h\in H$, let $\ad(h)\in \End_k(H)$ be the linear map sending $u$ to $[h, u]$ for any $u\in H$.
\begin{proposition}\label{isoclass}
For any given $\lambda\in k$, the Hopf algebras $A(0, 0, 0)$, $A(0, 0, 1)$, $A(1, 1, 1)$ and $A(1, \lambda, 0)$ are pairwise non-isomorphic. Moreover, $A(1, \lambda, 0)\cong A(1, \gamma, 0)$ if and only if $\lambda=\gamma$ or $\lambda\gamma=1$.
\end{proposition}
\begin{proof}
We start with the first statement. It is clear from the defining relations that the abelianizations of $A(0, 0, 0)$, $A(0, 0, 1)$ and $A(1, 1, 1)$ are $A(0, 0, 0)$, $k[X, Z]$ and $k[Z]$, respectively. The abelianization of $A(1, \lambda, 0)$ is $k[Z]$ if $\lambda\neq 0$, and $k[Y, Z]$ if $\lambda=0$. Now to prove the first statement, we only have to show $A(0, 0, 1)\ncong A(1, 0, 0)$ and $A(1, 1, 1)\ncong A(1, \lambda, 0)$ for $\lambda\neq 0$.
Suppose that $f$ is a Hopf isomorphism from $H':=A(1, 1, 1)$ to $ H:=A(1, \lambda, 0)$. We label the canonical generators of $H'$ by $X', Y'$ and $Z'$. Let $K'$ (resp. $K$) be the Hopf subalgebra of $H'$ (resp. $H$) generated by $X', Y'$ (resp. $X, Y$). Then $f$ restricts to a Hopf isomorphism from $K'$ to $K$. As a consequence, $f$ induces a linear isomorphism from $H'_2/K'_2$ to $H_2/K_2$. This indicates that $f(Z')$ is of the form $aZ+u$ for some $a\in k^{\times}$ and $u\in K_2$. Now consider the maps
$$\ad(Z'): H'_1\rightarrow H'_1\,\, \text{and}\,\, \ad(f(Z')): H_1\rightarrow H_1. $$
Since $f$ restricts to a linear isomorphism from $H'_1$ to $H_1$ and $f\circ \ad(Z')=\ad(f(Z'))\circ f$, the two maps must have the same eigenvalues and the same number of independent eigenvectors. However, from the defining relations we see that $\ad(Z')$ has only one linearly independent eigenvector while $\ad(f(Z'))=\ad(aZ+u)$ has two. This shows that $A(1, 1, 1)\ncong A(1, \lambda, 0)$. If we replace $H'$ and $H$ by $A(0, 0, 1)$ and $A(1, 0, 0)$ respectively, then the above argument shows that $A(0, 0, 1)\ncong A(1, 0, 0)$. This completes the proof of the first statement.
Next, we proceed to prove the second statement. As mentioned before, the abelianization of $A(1, \lambda, 0)$ is $k[Z]$ if $\lambda\neq 0$, and $k[Y, Z]$ if $\lambda=0$. Hence $A(1, 0, 0)\ncong A(1, \lambda, 0)$ for any $\lambda\neq 0$. Now assume $A(1, \lambda, 0)\cong A(1, \gamma, 0)$; we have to show that either $\lambda=\gamma$ or $\lambda\gamma=1$. Repeat the argument in the second paragraph of the proof, taking $H'=A(1, \lambda, 0)$ and $H=A(1, \gamma, 0)$. It is easy to check from the defining relations that $\ad Z'$ has eigenvalues $\{1, \lambda\}$ and $\ad f(Z')$ has eigenvalues $\{a, a\gamma\}$. Since they have the same eigenvalues, we must have
$$\begin{cases}
1=a\\
\lambda=a\gamma
\end{cases}
\text{or}\,\,\,\,
\begin{cases}
1=a\gamma\\
\lambda=a
\end{cases}.$$
Clearly, these imply that either $\lambda=\gamma$ or $\lambda\gamma=1$.
Conversely, we only have to show that if $\lambda\gamma=1$, then $A(1, \lambda, 0)\cong A(1, \gamma, 0)$. Label the canonical generators of $A(1, \lambda, 0)$ by $X', Y'$ and $Z'$. Then there is a surjective Hopf map from $A(1, \lambda, 0)$ to $A(1, \gamma, 0)$ sending $X', Y', Z'$ to $Y, -\lambda X, \lambda (Z-XY)$, respectively. This map is an isomorphism by Lemma \ref{regular}.
\end{proof}
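The eigenvector count used in the proof above can be illustrated numerically. In the sketch below (our illustration, with sample parameter values), we take the $2\times 2$ matrices that the proof attributes to $\ad(Z')$ acting on $H'_1$ (a single Jordan block with eigenvalue $1$) and to $\ad(f(Z'))=\ad(aZ+u)$ acting on $H_1$ (diagonal), and count independent eigenvectors via the geometric multiplicity $2-\mathrm{rank}(M-\mu I)$:

```python
import numpy as np

def num_indep_eigenvectors(M):
    """Total geometric multiplicity of a 2x2 matrix over its eigenvalues."""
    total = 0
    for mu in set(np.round(np.linalg.eigvals(M), 8)):
        total += 2 - np.linalg.matrix_rank(M - mu * np.eye(2))
    return total

# ad(Z') on H'_1 for A(1,1,1): a single Jordan block with eigenvalue 1.
J = np.array([[1.0, 1.0], [0.0, 1.0]])
# ad(aZ+u) on H_1 for A(1, lambda, 0): diagonal with eigenvalues a, a*lambda
# (sample values a = 1, lambda = 1/2 chosen for illustration).
a, lam = 1.0, 0.5
D = np.diag([a, a * lam])

print(num_indep_eigenvectors(J), num_indep_eigenvectors(D))  # 1 2
```

The Jordan block has one independent eigenvector, the diagonal matrix has two, so no change of basis can intertwine them.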
\begin{proposition} $B(\lambda)\cong B(\gamma)$ if and only if $\lambda=\gamma$.
\end{proposition}
\begin{proof}
Label the canonical generators of $B(\lambda)$ by $X', Y', Z'$. Suppose that $f$ is an isomorphism from $H':=B(\lambda)$ to $H:=B(\gamma)$. By Theorem \ref{classification}, $P(H')$ (resp.\ $P(H)$) is spanned by $X', Y'$ (resp.\ $X, Y$). Notice that $f$ restricts to a linear isomorphism from $P(H')$ to $P(H)$. Hence
\begin{align*}
f(X')&=a_{11}X+a_{12}Y,\\
f(Y')&=a_{21}X+a_{22}Y,
\end{align*}
for some non-degenerate matrix $(a_{ij})$. Since $[f(X'), f(Y')]=f(Y')$, we have
\[[a_{11}X+a_{12}Y, a_{21}X+a_{22}Y]=a_{21}X+a_{22}Y.\]
By using the fact $[X, Y]=Y$ and comparing the coefficients, we find that $a_{11}=1$, $a_{21}=0$ and $a_{22}\ne 0$. Since $f$ is a coalgebra map,
\[\Delta f(Z')=(f\otimes f)\Delta(Z')=1\otimes f(Z')+(X+a_{12}Y)\otimes a_{22}Y+f(Z')\otimes 1.\]
Then it is easy to check that $f(Z')-a_{22}Z-\frac{1}{2}a_{12}a_{22}Y^2\in P(H)$. As a consequence, there exist $c, d\in k$ such that
\[f(Z')=a_{22}Z+\frac{1}{2}a_{12}a_{22}Y^2+cX+dY.\]
Since $[f(Z'), f(Y')]=\frac{1}{2}f(Y')^2$,
\[[a_{22}Z+\frac{1}{2}a_{12}a_{22}Y^2+cX+dY, a_{22}Y]=\frac{1}{2}a_{22}^2Y^2.\]
By comparing the coefficients, we find that $c=0$. Now the relation $[f(Z'), f(X')]=-f(Z')+\lambda f(Y')$ gives
\begin{equation}\label{compare}
[a_{22}Z+\frac{1}{2}a_{12}a_{22}Y^2+dY, X+a_{12}Y]=-a_{22}Z-\frac{1}{2}a_{12}a_{22}Y^2-dY+\lambda a_{22}Y.
\end{equation}
The left-hand side of (\ref{compare}) becomes
\[-a_{22}Z+a_{22}\gamma Y+\frac{1}{2}a_{12}a_{22}Y^2-a_{12}a_{22}Y^2-dY.\]
Comparing this with the right-hand side of (\ref{compare}) we have $\lambda=\gamma$. This completes the proof.
\end{proof}
We conclude the section with two propositions on the algebra structures of the Hopf algebras $A(\lambda_1, \lambda_2, \alpha)$ and $B(\lambda)$, the first of which suggests that they can be viewed as coalgebra deformations of universal enveloping algebras. However, we will not pursue this direction further.
\begin{proposition} For any choice of $(\lambda_1, \lambda_2, \alpha)$ (resp. $\lambda$), as an algebra, $A(\lambda_1, \lambda_2, \alpha)$ (resp. $B(\lambda)$) is isomorphic to the enveloping algebra of a solvable Lie algebra.
\end{proposition}
\begin{proof} For $A(\lambda_1, \lambda_2, \alpha)$, the defining relations in Example \ref{typeA} show that $A(\lambda_1, \lambda_2, \alpha)\cong U(\frak{g})$ as algebras, where $\frak{g}$ is the solvable Lie algebra spanned by $X, Y$ and $Z$. For $B(\lambda)$, let $Z':= Z-\frac{1}{2}XY$; then $B(\lambda)$ is generated by $X, Y$ and $Z'$ subject to the relations
\begin{align*}
[X, Y]&=Y,\\
[Z', X]&=-Z'+\lambda Y,\\
[Z', Y]&=0.
\end{align*}
Now it is clear that $B(\lambda)\cong U(\frak{g})$ where $\frak{g}$ is the solvable Lie algebra spanned by $X, Y$ and $Z'$.
\end{proof}
Since the Lie algebra $sl_2$ is not solvable, we have the following corollary, which suggests that $U(sl_2)$ admits no non-trivial coalgebra deformations.
\begin{corollary}
For any choice of $(\lambda_1, \lambda_2, \alpha)$ (resp. $\lambda$), the Hopf algebra $A(\lambda_1, \lambda_2, \alpha)$ (resp. $B(\lambda)$) is not isomorphic to $U(sl_2)$ as an algebra.
\end{corollary}
\section{Proof of Theorem \ref{3generators}}
This section is devoted to the proof of Theorem \ref{3generators}. The proof uses the cohomology of coalgebras, which we will recall briefly.
Let $C$ be a coaugmented coalgebra, i.e., a coalgebra equipped with a coalgebra map from the trivial coalgebra $k$ to $C$. Let $J=C^+$ be the kernel of the counit, and define the reduced comultiplication on $J$ by
\[\overline{\Delta}(c)=\Delta(c)-(1\otimes c+c\otimes 1).\]
Then the \textbf{cobar construction} $\Omega C$ on $C$ is the differential graded algebra defined as follows:
\begin{itemize}
\item As a graded algebra, $\Omega C$ is the tensor algebra $T(J)$.
\item The differential in $\Omega C$ is given by
\begin{equation*}
\partial_C^n=\sum_{i=0}^{n-1} (-1)^{i+1}
Id^{\otimes i}\otimes \overline{\Delta} \otimes Id^{\otimes (n-i-1)}.
\end{equation*}
\end{itemize}
Dually, given an augmented algebra $A$, one can construct a differential graded coalgebra $BA$, which is called the \textbf{bar construction} of $A$. See \cite[\S 19]{FHT} for basic properties of cobar and bar constructions.
\begin{lemma}\label{cohomology}
Let $C=U(\mathfrak{h})$, where $\mathfrak{h}$ is a two-dimensional Lie algebra spanned by $x$ and $y$. Then $\dim_k \h^2(\Omega C)=1$; in fact, $\h^2(\Omega C)$ is spanned by $(x\otimes y)$, the cohomology class of the cocycle $x\otimes y$.
\end{lemma}
\begin{proof}
By the PBW Theorem, the coalgebra $C$ has a basis of the form $\{x^iy^j \,|\, i, j\in \mathbb{N}\}$. It is also well known that $C$ becomes a graded coalgebra by setting $\deg x^iy^j =i+j$. Denote the $n$-th homogeneous part by $C(n)$. Now $J$ can be naturally identified with $\bigoplus_{i=1}^{\infty}C(i)$. Moreover, the graded $k$-linear dual of the graded coalgebra $C$ is isomorphic to $A=k[x_1, x_2]$ as graded algebras, where $x_i$ has degree $1$. By \cite[Lemma 8.6 (c)]{LPWZ2}, $B^\#A\cong \Omega C$ as DG algebras, where $B^\#A$ is the graded dual of the bar construction of $A$. On the other hand, by \cite[Lemma 4.2]{LPWZ}, $\h^\bullet(B^\#A)\cong \Ext^\bullet_A(k_A, k_A)$. As a consequence, $\dim_k \h^2(\Omega C)=\dim_k\Ext^2_A(k_A, k_A)=1$.
It is easy to check by definition that $x\otimes y$ is a cocycle, i.e. $\partial ^2(x\otimes y)=0$. We only have to show that $x\otimes y\notin \im\partial^1$. Suppose to the contrary that there is some $w\in C$ such that $\partial^1(w)=x\otimes y\in C(1)\otimes C(1)$. Then by a degree argument, the element $w$ is in $C(2)$, i.e. $w$ must be of the form $ax^2+ bxy+ cy^2$ for some $a, b, c\in k$. However, an easy calculation shows that $\partial^1(ax^2+ bxy+ cy^2)$ is in the $k$-subspace $V$ spanned by $x\otimes x$, $y\otimes y$ and $x\otimes y+y\otimes x$ and clearly $x\otimes y$ is not in $V$. This completes the proof.
\end{proof}
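The final degree computation can be made concrete in the abelian case $\mathfrak{h}=kx\oplus ky$, where the reduced coproduct of $x^iy^j$ is governed by binomial coefficients. The sketch below (ours, for this special case) computes the $C(1)\otimes C(1)$ component of $\overline{\Delta}$ on the quadratic monomials and checks that it is always symmetric in the two tensor factors, whereas $x\otimes y$ is not; hence $x\otimes y$ cannot be a coboundary:

```python
from math import comb

def delta_bar_11(i, j):
    """Degree-(1,1) part of the reduced coproduct of x^i y^j in k[x, y]
    (x, y primitive), as coefficients on the basis x⊗x, x⊗y, y⊗x, y⊗y."""
    coeffs = {('x', 'x'): 0, ('x', 'y'): 0, ('y', 'x'): 0, ('y', 'y'): 0}
    for a in range(i + 1):
        for b in range(j + 1):
            if a + b == 1 and (i - a) + (j - b) == 1:
                left = 'x' if a == 1 else 'y'
                right = 'x' if i - a == 1 else 'y'
                coeffs[(left, right)] += comb(i, a) * comb(j, b)
    return coeffs

# Every image of a quadratic monomial is symmetric: coeff of x⊗y = coeff of y⊗x,
# so the same holds for any linear combination a*x^2 + b*xy + c*y^2.
for (i, j) in [(2, 0), (1, 1), (0, 2)]:
    c = delta_bar_11(i, j)
    assert c[('x', 'y')] == c[('y', 'x')]

# The cocycle x⊗y is not symmetric, hence not in the image of partial^1.
target = {('x', 'x'): 0, ('x', 'y'): 1, ('y', 'x'): 0, ('y', 'y'): 0}
assert target[('x', 'y')] != target[('y', 'x')]
```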
Now we are ready to prove Theorem \ref{3generators}.
\begin{proof}[Proof of Theorem \ref{3generators}]
Let $C$ be the subalgebra of $H$ generated by $x$ and $y$. Then $C$ is a Hopf subalgebra of $H$ and $C$ is isomorphic to $U(\mathfrak{h})$ where $\mathfrak{h}$ is a two-dimensional Lie algebra.
Notice that by construction $C_1=H_1$. Let $N\ge 2$ be the least number such that $C_N\subsetneq H_N$. By \cite[Lemma 5.3.2]{Mont}, there exists $z'\in H_N\setminus C_N$ such that $\Delta(z')=1\otimes z'+z'\otimes 1+u$, where $u\in H_{N-1}\otimes H_{N-1}=C_{N-1}\otimes C_{N-1}\subset C\otimes C$. Without loss of generality, we assume that $\e(z')=0$.
Now we have two DG algebras, $(\Omega H, \partial_H)$ and $(\Omega C, \partial_C)$. In fact, $(\Omega C, \partial_C)$ can be viewed as a sub-complex of $(\Omega H, \partial_H)$. Notice that $0=\partial_H^2\partial_H^1(z')=\partial_H^2(u)=\partial_C^2(u)$, i.e. $u$ is a cocycle. We claim that $u$ represents a non-zero cohomology class in $\h^2(\Omega C)$. If not, there is $w\in C$ such that $\partial_C^1(w)=1\otimes w-\Delta(w)+w\otimes 1=u$. As a consequence, $\Delta(z'+w)=1\otimes (z'+w)+(z'+w)\otimes 1$, i.e. $z'+w$ is a primitive element in $H$. By the fact that $H_1=C_1$, $z'+w\in C_1$. But this would imply that $z'\in C$, which contradicts the choice of $z'$.
By Lemma \ref{cohomology}, the cohomology classes in $\h^2(\Omega C)$ represented by $u$ and $x\otimes y$ differ only by a non-zero scalar. Hence there exist $v\in C^+$ and $a\in k^\times$ such that $\partial^1(v)=au-x\otimes y$. Let $z=az'+v$. Then $z\notin C$ and $\Delta(z)=1\otimes z+x\otimes y+z\otimes 1$.
Next, assume that $\GK H=3$. Now we have to show that $H$ is generated by $x$, $y$ and $z$. Let $K$ be the subalgebra of $H$ generated by $x$, $y$ and $z$. Then it is easy to check that $K$ is a sub-bialgebra and thus a Hopf subalgebra of $H$ by Lemma \ref{sub}. By the construction of $K$, $C\subsetneq K$. By Lemma \ref{GKplus1}, $\GK \gr K\ge \GK \gr C+1= 3$. On the other hand, $\GK\gr K=\GK K\le \GK H=3$ since $K\subset H$. Hence $\GK K=3$. Now it follows from Lemma \ref{GKcrit} that $K=H$. This completes the proof.
\end{proof}
% Source: https://arxiv.org/abs/1202.4121, "Properties of pointed and connected Hopf algebras of finite Gelfand-Kirillov dimension"
% Source: https://arxiv.org/abs/1704.08160
\title{From Fixed-X to Random-X Regression: Bias-Variance Decompositions, Covariance Penalties, and Prediction Error Estimation}
\begin{abstract}
In statistical prediction, classical approaches for model selection and model evaluation based on covariance penalties are still widely used. Most of the literature on this topic is based on what we call the ``Fixed-X'' assumption, where covariate values are assumed to be nonrandom. By contrast, it is often more reasonable to take a ``Random-X'' view, where the covariate values are independently drawn for both training and prediction. To study the applicability of covariance penalties in this setting, we propose a decomposition of Random-X prediction error in which the randomness in the covariates contributes to both the bias and variance components. This decomposition is general, but we concentrate on the fundamental case of least squares regression. We prove that in this setting the move from Fixed-X to Random-X prediction results in an increase in both bias and variance. When the covariates are normally distributed and the linear model is unbiased, all terms in this decomposition are explicitly computable, which yields an extension of Mallows' Cp that we call ${\rm RCp}$. ${\rm RCp}$ also holds asymptotically for certain classes of nonnormal covariates. When the noise variance is unknown, plugging in the usual unbiased estimate leads to an approach that we call $\widehat{{\rm RCp}}$, which is closely related to Sp (Tukey 1967) and GCV (Craven and Wahba 1978). For excess bias, we propose an estimate based on the ``shortcut formula'' for ordinary cross-validation (OCV), resulting in an approach we call ${\rm RCp}^+$. Theoretical arguments and numerical simulations suggest that ${\rm RCp}^+$ is typically superior to OCV, though the difference is small. We further examine the Random-X error of other popular estimators. The surprising result we get for ridge regression is that, in the heavily-regularized regime, Random-X variance is smaller than Fixed-X variance, which can lead to smaller overall Random-X error.
\end{abstract}

\section{Introduction}
A statistical regression model seeks to describe the relationship
between a response $y \in \mathbb{R}$ and a covariate vector $x \in \mathbb{R}^p$,
based on training data comprised of paired observations
$(x_1,y_1),\ldots,(x_n,y_n)$. Many modern regression models are
ultimately aimed at prediction: given a new covariate value $x_0$,
we apply the model to predict the corresponding response value
$y_0$. Inference on the prediction error of
regression models is a central part of model evaluation and model
selection in statistical learning (e.g., \citealt{hastie2009elements}).
A common assumption that is used in the estimation of
prediction error is what we call a ``Fixed-X'' assumption,
where the training covariate values $x_1,\ldots,x_n$ are treated as
fixed, i.e., nonrandom, as are the covariate values at which
predictions are to be made, $x_{01},\ldots,x_{0n}$, which are also
assumed to equal the training values. In the Fixed-X setting,
the celebrated notions of optimism and degrees of freedom lead to
covariance penalty approaches to estimate the prediction performance
of a model
\citep{efron1986biased,efron2004estimation,hastie2009elements},
extending and generalizing classical approaches like Mallows' Cp
\citep{mallows1973comments} and AIC \citep{akaike1973information}.
The Fixed-X setting is one of the most common views on
regression (arguably the predominant view), and it can be found at
all points on the spectrum from cutting-edge research to
introductory teaching in statistics. This setting
combines the following two assumptions about
the problem.
\begin{enumerate}
\item[(i)] The covariate values $x_1,\ldots,x_n$ used in training are not
random (e.g., designed), and the only randomness in training is due
to the responses $y_1,\ldots,y_n$.
\item[(ii)] The covariates $x_{01},\ldots,x_{0n}$ used for
prediction exactly match $x_1,\ldots,x_n$, respectively,
and the corresponding responses $y_{01},\ldots,y_{0n}$ are independent
copies of $y_1,\ldots,y_n$, respectively.
\end{enumerate}
Relaxing assumption (i), i.e., acknowledging randomness in the
training covariates $x_1,\ldots,x_n$, and taking this randomness
into account when performing inference on estimated parameters and
fitted models, has received a good deal of attention in the
literature. But, as we see it, assumption (ii) is the critical one
that needs to be relaxed in most realistic prediction
setups. To emphasize this, we define two settings beyond the Fixed-X
one, that we call the ``Same-X'' and ``Random-X'' settings. The
Same-X setting drops assumption (i), but does not account for new
covariate
values at prediction time. The Random-X setting drops both
assumptions, and deals with predictions at new covariates
values. These will be defined more precisely in the next subsection.
\subsection{Notation and assumptions}
We assume that the training data
$(x_1,y_1),\ldots,(x_n,y_n)$ are i.i.d.\ according to some joint
distribution $P$. This is an innocuous assumption, and it
means that we can posit a relationship for the training data,
\begin{equation}
\label{eq:iid_pairs}
y_i = f(x_i) + \epsilon_i, \quad i=1,\ldots,n
\end{equation}
where $f(x)=\mathbb{E}(y|x)$, and the expectation here is taken with respect
to a draw $(x,y) \sim P$. We also assume that for $(x,y) \sim P$,
\begin{equation}
\label{eq:eps_indep_x}
\text{$\epsilon=y-f(x)$ is independent of $x$},
\end{equation}
which is less innocuous, and precludes, e.g.,
heteroskedasticity in the data. We let $\sigma^2 = \mathrm{Var}(y|x)$ denote
the constant conditional variance. It is worth pointing out that some
results in this paper can be adjusted or modified to hold when
\eqref{eq:eps_indep_x} is not assumed; but since other results hinge
critically on \eqref{eq:eps_indep_x}, we find it is more convenient
to assume \eqref{eq:eps_indep_x} up front.
For brevity, we write $Y=(y_1,\ldots,y_n) \in \mathbb{R}^n$ for the vector of
training responses, and $X \in \mathbb{R}^{n\times p}$ for the matrix of
training covariates with $i$th row $x_i$, $i=1,\ldots,n$.
We also write $Q$ for the marginal distribution of $x$ when $(x,y)
\sim P$, and $Q^n=Q \times \cdots \times Q$ ($n$ times) for the
distribution of $X$ when its $n$ rows are drawn i.i.d.\ from $Q$.
We denote by \smash{$\tilde{y}_i$} an independent copy of $y_i$, i.e.,
an independent draw from the conditional law of $y_i|x_i$, for
$i=1,\ldots,n$, and we abbreviate
\smash{$\tilde{Y}=(\tilde{y}_1,\ldots,\tilde{y}_n) \in \mathbb{R}^n$}.
These are the responses considered in the Same-X setting,
defined below.
We denote by $(x_0,y_0)$ an independent draw from $P$. This is the
covariate-response pair evaluated in the Random-X setting, also
defined below.
Now consider a model building procedure that uses the training data
$(X,Y)$ to build a prediction function \smash{$\hat{f}_n : \mathbb{R}^p \to
\mathbb{R}$}. We can associate to this procedure two notions of prediction
error:
$$
{\rm ErrS} = \mathbb{E}_{X,Y,\tilde{Y}} \bigg[\frac{1}{n} \sum_{i=1}^n
\big( \tilde{y}_i - \hat{f}_n(x_i)\big)^2\bigg]
\quad \text{and} \quad
{\rm ErrR} = \mathbb{E}_{X,Y,x_0,y_0} \big( y_0 - \hat{f}_n(x_0) \big)^2,
$$
where the subscripts on the expectations highlight the random
variables over which expectations are taken. (We omit subscripts when
the scope of the expectation is clearly understood by the context.)
The {\it Same-X} and {\it Random-X} settings
differ only in the quantity we use to measure
prediction error: in Same-X, we use {\rm ErrS}, and in Random-X, we use
{\rm ErrR}. We call {\rm ErrS}{} the Same-X prediction error and
{\rm ErrR}{} the Random-X prediction error, though we note these are also
commonly called in-sample and out-of-sample prediction
error, respectively. We also note that by exchangeability,
$$
{\rm ErrS} = \mathbb{E}_{X,Y,\tilde{y}_1} \big( \tilde{y}_1 - \hat{f}_n(x_1)\big)^2.
$$
Lastly, the {\it Fixed-X} setting is defined by the same model
assumptions as above, but with $x_1,\ldots,x_n$ viewed as nonrandom,
i.e., we assume the responses are drawn from \eqref{eq:iid_pairs},
with the errors being i.i.d. We can equivalently view this as the
Same-X setting, but where we condition on $x_1,\ldots,x_n$. In
the Fixed-X setting, prediction error is defined by
$$
{\rm ErrF} = \mathbb{E}_{Y,\tilde{Y}} \bigg[ \frac{1}{n} \sum_{i=1}^n
\big( \tilde{y}_i - \hat{f}_n(x_i)\big)^2\bigg].
$$
(Without $x_1,\ldots,x_n$ being random, the terms in the
sum above are no longer exchangeable, and so {\rm ErrF}{} does
not simplify as {\rm ErrS}{} did.)
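To make the gap between ${\rm ErrS}$ and ${\rm ErrR}$ concrete, here is a small Monte Carlo sketch (ours, not from the paper) for least squares with Gaussian covariates and a linear truth, where classical calculations give ${\rm ErrS}=\sigma^2(1+p/n)$ and, for Gaussian $X$, ${\rm ErrR}=\sigma^2\big(1+p/(n-p-1)\big)$:

```python
import numpy as np

# Setup (our choice, for illustration): n=40, p=20, sigma=1, Gaussian X,
# linear truth f(x) = x'beta, so least squares is unbiased here.
rng = np.random.default_rng(0)
n, p, sigma, reps, ntest = 40, 20, 1.0, 1000, 200
beta = np.ones(p)

errS = errR = 0.0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    Y = X @ beta + sigma * rng.standard_normal(n)
    bhat = np.linalg.lstsq(X, Y, rcond=None)[0]
    # Same-X: fresh responses at the *training* covariates
    Ytilde = X @ beta + sigma * rng.standard_normal(n)
    errS += np.mean((Ytilde - X @ bhat) ** 2)
    # Random-X: fresh covariate-response pairs from P
    X0 = rng.standard_normal((ntest, p))
    y0 = X0 @ beta + sigma * rng.standard_normal(ntest)
    errR += np.mean((y0 - X0 @ bhat) ** 2)
errS, errR = errS / reps, errR / reps

# Theory: ErrS = sigma^2 (1 + p/n) = 1.5, and for Gaussian covariates
# ErrR = sigma^2 (1 + p/(n - p - 1)), about 2.053, strictly larger.
print(f"ErrS ~ {errS:.3f}   ErrR ~ {errR:.3f}")
```

The simulated values track both formulas, and the excess of ${\rm ErrR}$ over ${\rm ErrS}$ is exactly the kind of gap the covariance penalties studied below must account for.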
\subsection{Related work}
From our perspective, much of the work encountered in statistical
modeling takes a Fixed-X view, or when treating the covariates as
random, a Same-X view. Indeed, when concerned with parameter
estimates and parameter inferences in regression models, the
randomness of new prediction points plays no role, and so the
Same-X view seems entirely appropriate. But, when focused on
prediction, the Random-X view seems more realistic as a study ground
for what happens in most applications.
On the other hand, while the Fixed-X view is common, the Same-X and
Random-X views have not exactly been ignored, either, and several
groups of researchers in statistics, but also in machine learning and
econometrics, fully adopt and argue for such random covariate views.
A scholarly and highly informative treatment of how randomness in the
covariates affects parameter estimates and inferences in regression
models is given in \citet{buja2014models,buja2016models}. We also
refer the reader to these papers for a nice review of the history of
work in statistics and econometrics on random covariate models.
It is also worth mentioning that in nonparametric regression theory,
it is common to treat the covariates as random, e.g., the book by
\citet{gyorfi2002distribution}, and the random covariate view is the
standard in what machine learning researchers call statistical
learning theory, e.g., the book by \citet{vapnik1998statistical}.
Further, a stream of recent papers in high-dimensional regression
adopt a random covariate perspective, to give just a few examples:
\citet{greenshtein2004persistence,chatterjee2013assumptionless,
dicker2013optimal,hsu2014analysis,dobriban2015high}.
In discussing statistical models with random covariates, one should
differentiate between what may be called the ``i.i.d.\ pairs'' model
and ``signal-plus-noise'' model. The former assumes i.i.d.\ draws
$(x_i,y_i)$, $i=1,\ldots,n$ from a common distribution $P$, or
equivalently i.i.d.\ draws from the model \eqref{eq:iid_pairs}; the
latter assumes i.i.d.\ draws from \eqref{eq:iid_pairs}, and
additionally assumes \eqref{eq:eps_indep_x}. The additional
assumption \eqref{eq:eps_indep_x} is not a light one, and it does not
allow for, e.g., heteroskedasticity.
The books by
\citet{vapnik1998statistical,gyorfi2002distribution} assume the
i.i.d.\ pairs model, and do not require \eqref{eq:eps_indep_x}
(though their results often require a bound on the maximum of
$\mathrm{Var}(y|x)$ over all $x$).
More specifically related to the focus of our paper is the seminal
work of \citet{breiman1992submodel}, who considered Random-X
prediction error mostly from an intuitive and empirical point of view.
A major line of work on practical covariance penalties for Random-X
prediction error in least squares regression begins with
\citet{stein1960multiple} and \citet{tukey1967discussion}, and
continues onwards throughout the late 1970s and early 1980s with
\citet{hocking1976analysis,thompson1978selection1,
thompson1978selection2,breiman1983how}. Some more recent contributions
are found in \citet{leeb2008evaluation,dicker2013optimal}.
A common theme to these works is the assumption that $(x,y)$
is jointly normal. This is a strong assumption, and is one that we
avoid in our paper (though for some results we assume $x$ is
marginally normal); we will discuss comparisons to these works later.
Through personal communication, we are aware of work in progress
by Larry Brown, Andreas Buja, and coauthors on a variant of Mallows'
Cp for a setting in which covariates are random. It is our
understanding that they take somewhat of a broader view than we do in
our proposals \smash{${\rm RCp},\text{$\widehat{{\rm RCp}}$},\text{{\rm RCp}$^+$}$}, each designed for a more
specific scenario, but resort to asymptotics in order to do so.
Finally, we must mention that an important alternative to covariance
penalties for Random-X model evaluation and selection are
resampling-based techniques, like cross-validation and bootstrap
methods (e.g.,
\citealt{efron2004estimation,hastie2009elements}). In particular,
ordinary leave-one-out cross-validation or OCV evaluates a model by
actually building $n$ separate prediction models, each one using $n-1$
observations for training, and one held-out observation for model
evaluation. OCV naturally provides an almost-unbiased estimate
of Random-X prediction error of a modeling approach (``almost'',
since training set sizes are $n-1$ instead of $n$), albeit, at a
somewhat high price in terms of variance and inaccuracy (e.g., see
\citealt{burman1989comparative,hastie2009elements}).
Altogether, OCV is an important benchmark for comparing the results of
any proposed Random-X model evaluation approach.
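For least squares, OCV does not in fact require $n$ separate fits: the classical ``shortcut formula'' expresses the leave-one-out residual as the full-sample residual divided by $1-h_{ii}$, where $h_{ii}$ is the $i$th diagonal entry of the hat matrix. A quick numerical verification of this standard identity (our sketch, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 5
X = rng.standard_normal((n, p))
y = X @ np.ones(p) + rng.standard_normal(n)

H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix
resid = y - H @ y                              # full-sample residuals
loo_shortcut = resid / (1 - np.diag(H))        # shortcut LOO residuals

# Brute force: actually refit n times, each on n-1 observations.
loo = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo[i] = y[i] - X[i] @ b

assert np.allclose(loo, loo_shortcut)
ocv_estimate = np.mean(loo_shortcut ** 2)      # OCV estimate of Random-X error
```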
\section{Decomposing and estimating prediction error}
\label{sec:decomp_pred_error}
\subsection{Bias-variance decompositions}
Consider first the Fixed-X setting, where $x_1,\ldots,x_n$ are
nonrandom. Recall the well-known decomposition of Fixed-X
prediction error (e.g., \citealt{hastie2009elements}):
$$
{\rm ErrF} = \sigma^2 + \frac{1}{n} \sum_{i=1}^n
\big(\mathbb{E}\hat{f}_n(x_i) - f(x_i)\big)^2 +
\frac{1}{n} \sum_{i=1}^n \mathrm{Var}\big(\hat{f}_n(x_i)\big)
$$
where the latter two terms on the right-hand side above are called the
(squared) {\it bias} and {\it variance} of the estimator
\smash{$\hat{f}_n$}, respectively. In the Same-X setting, the same
decomposition holds conditional on $x_1,\ldots,x_n$. Integrating out
over $x_1,\ldots,x_n$, and using exchangeability, we conclude
$$
{\rm ErrS} = \sigma^2 + \underbrace{\mathbb{E}_X
\Big(\mathbb{E}\big(\hat{f}_n(x_1)\,|\,X\big) - f(x_1)\Big)^2}_{B} +
\underbrace{\mathbb{E}_X \mathrm{Var}\big(\hat{f}_n(x_1)\,|\,X\big)
\vphantom{\Big(\Big)}}_{V}.
$$
The last two terms on the right-hand side above are integrated bias
and variance terms associated with \smash{$\hat{f}_n$}, which we denote by
$B$ and $V$, respectively. Importantly, whenever the Fixed-X variance
of the estimator \smash{$\hat{f}_n$} in question is unaffected by the
form of $f(x)=\mathbb{E}(y|x)$ (e.g., as is the case in least squares
regression), then so is the integrated variance $V$.
For Random-X, we can condition on $x_1,\ldots,x_n$ and $x_0$, and then
use similar arguments to yield the decomposition
$$
{\rm ErrR} = \sigma^2 + \mathbb{E}_{X,x_0}\Big(
\mathbb{E}\big(\hat{f}_n(x_0)\,|\,X,x_0\big) - f(x_0)\Big)^2 +
\mathbb{E}_{X,x_0} \mathrm{Var}\big(\hat{f}_n(x_0)\,|\,X,x_0\big).
$$
For reasons that will become clear in what follows,
it suits our purpose to rearrange this as
\begin{align}
\label{eq:er}
{\rm ErrR} &= \sigma^2 + B + V \\
\label{eq:eb}
&\quad +
\underbrace{\mathbb{E}_{X,x_0}\Big(\mathbb{E}\big(\hat{f}_n(x_0)\,|\,X,x_0\big) -
f(x_0)\Big)^2 - \mathbb{E}_X \Big(\mathbb{E}\big(\hat{f}_n(x_1)\,|\,X\big) -
f(x_1)\Big)^2}_{\text{$B^+$}} \\
\label{eq:ev}
&\quad +
\underbrace{\mathbb{E}_{X,x_0} \mathrm{Var}\big(\hat{f}_n(x_0)\,|\,X,x_0\big) -
\mathbb{E}_X \mathrm{Var}\big(\hat{f}_n(x_1)\,|\,X\big)}_{\text{$V^+$}}.
\end{align}
We call the quantities in \eqref{eq:eb}, \eqref{eq:ev} the {\it
excess bias} and {\it excess variance} of \smash{$\hat{f}_n$}
(``excess'' here referring to the extra amount of bias and variance
that can be attributed to the randomness of $x_0$), denoted by \text{$B^+$}{}
and \text{$V^+$}{}, respectively. We note that, by construction,
$$
{\rm ErrR} - {\rm ErrS} = \text{$B^+$} + \text{$V^+$},
$$
thus, e.g., $\text{$B^+$}+\text{$V^+$} \geq 0$ implies the Random-X
(out-of-sample) prediction error of \smash{$\hat{f}_n$} is no smaller than
its Same-X (in-sample) prediction error. Moreover, as ${\rm ErrS}$ is
easily estimated following standard practice for estimating ${\rm ErrF}$,
discussed next, we see that estimates of or bounds on $\text{$B^+$},\text{$V^+$}$
lead to estimates of or bounds on ${\rm ErrR}$.
\subsection{Optimism for Fixed-X and Same-X}
Starting with the Fixed-X setting again, we recall the definition of
optimism, e.g., as in \citet{efron1986biased,efron2004estimation,
hastie2009elements},
$$
{\rm OptF} = \mathbb{E}_{Y,\tilde{Y}} \bigg[\frac{1}{n} \sum_{i=1}^n
\big(\tilde{y}_i-\hat{f}_n(x_i)\big)^2 -
\frac{1}{n} \sum_{i=1}^n \big(y_i-\hat{f}_n(x_i)\big)^2\bigg],
$$
which is the difference between prediction error and training error.
Optimism can also be expressed as the following elegant sum of
self-influence terms,
$$
{\rm OptF} = \frac{2}{n} \sum_{i=1}^n \mathrm{Cov} \big(y_i, \hat{f}_n(x_i)\big),
$$
and furthermore, under a normal regression model (i.e., the data model
\eqref{eq:iid_pairs} with $\epsilon \sim N(0,\sigma^2)$) and some
regularity conditions on \smash{$\hat{f}_n$} (i.e., continuity and almost
differentiability as a function of $y$),
$$
{\rm OptF} = \frac{2\sigma^2}{n} \sum_{i=1}^n \mathbb{E} \bigg[\frac{\partial
\hat{f}_n(x_i)}{\partial y_i}\bigg],
$$
which is often called Stein's formula \citep{stein1981estimation}.
Optimism is an interesting and important concept because an unbiased
estimate \smash{$\widehat{{\rm OptF}}$} of {\rm OptF}{} (say, from Stein's
formula or direct calculation) leads to an unbiased estimate of
prediction error:
$$
\frac{1}{n} \sum_{i=1}^n \big(y_i-\hat{f}_n(x_i)\big)^2 + \widehat{{\rm OptF}}.
$$
When \smash{$\hat{f}_n$} is given by the least squares regression
of $Y$ on $X$ (and $X$ has full column rank), so that
\smash{$\hat{f}_n(x_i)=x_i^T (X^T X)^{-1} X^T Y$}, $i=1,\ldots,n$, it is
not hard to check that \smash{${\rm OptF}=2\sigma^2p/n$}. This is exact
and hence ``even better'' than an unbiased estimate; plugging in this
result above for \smash{$\widehat{{\rm OptF}}$} gives us Mallows'
Cp \citep{mallows1973comments}.
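Concretely, the identity \smash{${\rm OptF}=2\sigma^2p/n$} reflects the fact that \smash{$\hat{f}_n(X)=HY$} for the hat matrix $H=X(X^TX)^{-1}X^T$, so $\mathrm{Cov}(y_i,\hat{f}_n(x_i))=\sigma^2 H_{ii}$ and the covariances sum to $\sigma^2\,\mathrm{tr}(H)=\sigma^2 p$, since $H$ is a rank-$p$ projection. A minimal Python/NumPy sketch of this calculation (the dimensions, noise level, and seed are our own illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 50, 5, 1.0   # arbitrary illustrative settings

X = rng.standard_normal((n, p))
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix: fhat(X) = H Y

# Since fhat(x_i) = (H Y)_i, we have Cov(y_i, fhat(x_i)) = sigma2 * H_ii,
# and summing gives OptF = (2 sigma2 / n) tr(H) = 2 sigma2 p / n,
# because H is a rank-p projection.
opt_f = 2 * sigma2 * np.trace(H) / n
```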
In the Same-X setting, optimism can be defined similarly, except
additionally integrated over the distribution of $x_1,\ldots,x_n$,
$$
{\rm OptS} = \mathbb{E}_{X,Y,\tilde{Y}} \bigg[\frac{1}{n} \sum_{i=1}^n
\big(\tilde{y}_i-\hat{f}_n(x_i)\big)^2 -
\frac{1}{n} \sum_{i=1}^n \big(y_i-\hat{f}_n(x_i)\big)^2\bigg] =
\frac{1}{n} \sum_{i=1}^n \mathbb{E}_X
\mathrm{Cov} \big(y_i, \hat{f}_n(x_i) \,|\, X\big).
$$
Some simple results immediately follow.
\begin{prop}\mbox{}
\label{prop:opts}
\begin{enumerate}
\item[(i)] If $T(X,Y)$ is an unbiased estimator of {\rm OptF}{} in
the Fixed-X setting, for any $X$ in the support of $Q^n$, then it is
also unbiased for {\rm OptS}{} in the Same-X setting.
\item[(ii)] If {\rm OptF}{} in the Fixed-X setting does not depend on $X$
(e.g., as is true in least squares regression), then it is
equal to {\rm OptS}{} in the Same-X setting.
\end{enumerate}
\end{prop}
\noindent
Some consequences of this proposition are as follows.
\begin{itemize}
\item For the least squares regression estimator of $Y$ on $X$
(and $X$ having full column rank almost surely under $Q^n$), we have
${\rm OptF}={\rm OptS}=2\sigma^2 p/ n$.
\item For a linear smoother, where \smash{$\hat{f}_n(x_i)=s(x_i)^T Y$},
$i=1,\ldots,n$ and we denote by $S(X) \in \mathbb{R}^{n\times n}$ the
matrix with rows $s(x_1),\ldots,s(x_n)$, we have (by direct
calculation) ${\rm OptF} = 2\sigma^2\mathrm{tr}(S(X)) / n$ and
${\rm OptS} = 2\sigma^2\mathbb{E}_X[\mathrm{tr}(S(X))]/n$.
\item For the lasso regression estimator of $Y$ on $X$ (and $X$ being
in general position almost surely under $Q^n$), and a normal
data model (i.e., the model in \eqref{eq:iid_pairs},
\eqref{eq:eps_indep_x} with $\epsilon \sim N(0,\sigma^2)$),
\citet{zou2007degrees,tibshirani2012degrees,tibshirani2013lasso}
prove that for any value of the lasso tuning parameter
$\lambda > 0$ and any $X$, the Fixed-X optimism is just
\smash{${\rm OptF} = 2\sigma^2 \mathbb{E}_Y|A_\lambda(X,Y)| /n$}, where
\smash{$A_\lambda(X,Y)$} is the active set at the lasso solution
at $\lambda$ and \smash{$|A_\lambda(X,Y)|$} is its size;
therefore we also have
\smash{${\rm OptS} = 2\sigma^2 \mathbb{E}_{X,Y}|A_\lambda(X,Y)|/n$}.
\end{itemize}
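To illustrate the linear smoother case above, the following sketch uses a ridge smoother $S(X)=X(X^TX+\lambda I)^{-1}X^T$ (the value of $\lambda$ and all dimensions are hypothetical choices of ours, not from the text) and compares \smash{${\rm OptF}=2\sigma^2\mathrm{tr}(S(X))/n$} to the least squares value $2\sigma^2 p/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, sigma2, lam = 50, 5, 1.0, 2.0   # lam is a hypothetical ridge tuning value

X = rng.standard_normal((n, p))

# Ridge regression is a linear smoother: fhat(X) = S(X) Y with
# S(X) = X (X^T X + lam I)^{-1} X^T, so OptF = 2 sigma2 tr(S(X)) / n.
S = X @ np.linalg.inv(X.T @ X + lam * np.eye(p)) @ X.T
opt_f_ridge = 2 * sigma2 * np.trace(S) / n

# Least squares (lam = 0) gives tr(S) = p, i.e., OptF = 2 sigma2 p / n.
opt_f_ls = 2 * sigma2 * p / n
```

Shrinkage makes every eigenvalue of $S(X)$ strictly less than one, so the ridge optimism is strictly smaller than the least squares optimism.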
Overall, we conclude that for the estimation of prediction error,
the Same-X setting is basically identical to Fixed-X.
We will see next that the situation is different for Random-X.
\subsection{Optimism for Random-X}
For the definition of Random-X optimism, we have to now integrate over
all sources of uncertainty,
$$
{\rm OptR} = \mathbb{E}_{X,Y,x_0,y_0} \Big[\big(y_0-\hat{f}_n(x_0)\big)^2 -
\big(y_1-\hat{f}_n(x_1)\big)^2\Big].
$$
The definitions of ${\rm OptS},{\rm OptR}$ are both given by a type of
prediction error (Same-X or Random-X) minus training error, and
there is just one common way to define training error.
Hence, by subtracting training error from both sides in the
decomposition \eqref{eq:er}, \eqref{eq:eb}, \eqref{eq:ev}, we obtain
the relationship:
\begin{equation}
\label{eq:optr}
{\rm OptR} = {\rm OptS} + \text{$B^+$} + \text{$V^+$},
\end{equation}
where $\text{$B^+$},\text{$V^+$}$ are the excess bias and variance as defined in
\eqref{eq:eb}, \eqref{eq:ev}, respectively.
As a consequence of our definitions, Random-X optimism is tied to
Same-X optimism by excess bias and variance terms, as in
\eqref{eq:optr}. The practical utility of this relationship is as follows: an
unbiased estimate of Same-X optimism (which, as pointed out
in the last subsection, follows straightforwardly from
an unbiased estimate of Fixed-X optimism), combined
with estimates of excess bias and variance, leads to an estimate
for Random-X prediction error.
\section{Excess bias and variance for least squares regression}
\label{sec:least_squares_eb_ev}
In this section, we examine the case when \smash{$\hat{f}_n$} is defined
by least squares regression of $Y$ on $X$, where we assume $X$ has
full column rank (or, when viewed as random, has full column rank
almost surely under its marginal distribution $Q^n$).
\subsection{Nonnegativity of $\text{$B^+$},\text{$V^+$}$}
Our first result
concerns the signs of \text{$B^+$}{} and \text{$V^+$}.
\begin{thm}
\label{thm:least_squares_nonneg}
For \smash{$\hat{f}_n$} the least squares regression estimator, we have
both $\text{$B^+$} \geq 0$ and $\text{$V^+$} \geq 0$.
\end{thm}
\begin{proof}
We prove the result separately for \text{$V^+$}{} and \text{$B^+$}.
\smallskip\smallskip
\noindent {\it Nonnegativity of \text{$V^+$}.}
For a function $g : \mathbb{R}^p \to \mathbb{R}$, we will write
\smash{$g(X)=(g(x_1),\ldots,g(x_n))\in \mathbb{R}^n$}, the
vector whose components are given by applying $g$ to the rows of
$X$. Letting $X_0 \in \mathbb{R}^{n\times p}$ be a matrix of test covariate
values, whose rows are i.i.d.\ draws from $Q$, we note that excess
variance in \eqref{eq:ev} can be equivalently expressed as
$$
\text{$V^+$} = \mathbb{E}_{X,X_0} \frac{1}{n}
\mathrm{tr}\big[ \mathrm{Cov}\big( \hat{f}_n(X_0) \,|\, X,X_0 \big)\big] -
\mathbb{E}_X \frac{1}{n}
\mathrm{tr}\big[ \mathrm{Cov}\big( \hat{f}_n(X) \,|\, X \big)\big].
$$
Note that the second term here is just
\smash{$\mathbb{E}_X [(\sigma^2/n)\mathrm{tr}(X(X^T X)^{-1}X^T)] = \sigma^2 p/n$}.
The first term is
\begin{align}
\nonumber
\frac{\sigma^2}{n} \mathrm{tr}\big(\mathbb{E}_{X,X_0} \big[ (X^T X)^{-1}
X_0^TX_0 \big]\big) &=
\frac{\sigma^2}{n} \mathrm{tr}\big(\mathbb{E} \big[ (X^T X)^{-1} \big]
\mathbb{E} [ X_0^TX_0]\big) \\
\label{eq:integrated_var}
&= \frac{\sigma^2}{n} \mathrm{tr}\big(\mathbb{E} \big[ (X^T X)^{-1} \big]
\mathbb{E} [ X^TX]\big),
\end{align}
where in the first equality we used the independence of $X$ and $X_0$,
and in the second equality we used the identical distribution of $X$
and $X_0$. Now, by a result of \citet{groves1969note}, we know that
\smash{$\mathbb{E}[(X^T X)^{-1}]- [\mathbb{E} (X^T X)]^{-1}$} is positive
semidefinite. Thus we have
$$
\frac{\sigma^2}{n} \mathrm{tr}\big(\mathbb{E} \big[ (X^T X)^{-1} \big]
\mathbb{E} [X^TX]\big) \geq
\frac{\sigma^2}{n} \mathrm{tr}\big(\big[ \mathbb{E}(X^T X) \big]^{-1}
\mathbb{E} [X^TX]\big) = \frac{\sigma^2 p}{n}.
$$
This proves $\text{$V^+$} \geq 0$.
\smallskip\smallskip
\noindent {\it Nonnegativity of \text{$B^+$}.}
This result is actually a special case of Theorem
\ref{thm:ridge_eb_nonneg}, and its proof follows from the proof
of the latter.
\end{proof}
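The key trace inequality in the proof, $\mathrm{tr}\big(\mathbb{E}[(X^TX)^{-1}]\,\mathbb{E}[X^TX]\big) \geq p$, can also be checked numerically. The following Monte Carlo sketch (normal covariates; the dimensions, replication count, and seed are our own illustrative choices) is a sanity check, not part of the formal argument:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, reps = 50, 5, 2000   # arbitrary illustrative settings

inv_sum = np.zeros((p, p))
gram_sum = np.zeros((p, p))
for _ in range(reps):
    X = rng.standard_normal((n, p))
    G = X.T @ X
    gram_sum += G
    inv_sum += np.linalg.inv(G)

# Groves & Rothenberg: E[(X^T X)^{-1}] - [E(X^T X)]^{-1} is positive
# semidefinite, hence tr(E[(X^T X)^{-1}] E[X^T X]) >= p, giving V+ >= 0.
val = np.trace((inv_sum / reps) @ (gram_sum / reps))
```

For normal covariates the exact value is $np/(n-p-1)$, comfortably above $p$.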
An immediate consequence of this, from the relationship between
Random-X and Same-X prediction error in \eqref{eq:er}, \eqref{eq:eb},
\eqref{eq:ev}, is the following.
\begin{cor}
For \smash{$\hat{f}_n$} the least squares regression estimator, we have
${\rm ErrR} \geq {\rm ErrS}$.
\end{cor}
This simple result, that the Random-X (out-of-sample) prediction error
is always larger than the Same-X (in-sample) prediction error for
least squares regression, is perhaps not surprising; however, we have
not been able to find it proven elsewhere in the literature at the
same level of generality. We emphasize that our result only
assumes \eqref{eq:iid_pairs}, \eqref{eq:eps_indep_x}
and places no other assumptions on the distribution of errors,
distribution of covariates, or the form of $f(x)=\mathbb{E}(y|x)$.
We also note that, while this relationship may seem obvious, it is in
fact not universal. Later in Section \ref{sec:ridge_ev_neg}, we show
that the excess variance \text{$V^+$}{} in heavily-regularized ridge regression
is guaranteed to be negative, and this can even lead to ${\rm ErrR} <
{\rm ErrS}$.
\subsection{Exact calculation of \text{$V^+$}{} for normal covariates}
Beyond the nonnegativity of $\text{$B^+$},\text{$V^+$}$, it is actually easy to
quantify \text{$V^+$}{} exactly in the case that the covariates follow a normal
distribution.
\begin{thm}
\label{thm:least_squares_normal}
Assume that $Q = N(0,\Sigma)$, where $\Sigma \in \mathbb{R}^{p\times p}$ is
invertible, and $p<n-1$. Then for the least squares regression
estimator,
$$
\text{$V^+$} = \frac{\sigma^2 p}{n} \frac{p+1}{n-p-1}.
$$
\end{thm}
\begin{proof}
As the rows of $X$ are i.i.d.\ from $N(0,\Sigma)$, we have
$X^T X \sim W(\Sigma,n)$, which denotes a Wishart distribution
with $n$ degrees of freedom, and so $\mathbb{E}(X^T X)=n\Sigma$.
Similarly, \smash{$(X^T X)^{-1} \sim W^{-1} (\Sigma^{-1},n)$},
denoting an inverse Wishart with $n$ degrees of freedom,
and hence \smash{$\mathbb{E}[(X^T X)^{-1}]=\Sigma^{-1}/(n-p-1)$}. From the
arguments in the proof of Theorem \ref{thm:least_squares_nonneg},
$$
\text{$V^+$} = \frac{\sigma^2}{n} \mathrm{tr}\big(\mathbb{E} \big[ (X^T X)^{-1} \big]
\mathbb{E} [X^TX]\big) - \frac{\sigma^2 p}{n} =
\frac{\sigma^2}{n} \mathrm{tr}\bigg( I_{p\times p} \frac{n}{n-p-1} \bigg)
- \frac{\sigma^2 p}{n} = \frac{\sigma^2p}{n} \frac{p+1}{n-p-1},
$$
completing the proof.
\end{proof}
Interestingly, as we see, the excess variance
\text{$V^+$}{} does not depend on the covariance matrix $\Sigma$ in the case
of normal covariates.
Moreover, we stress that (as a consequence of our decomposition and
definition of $\text{$B^+$},\text{$V^+$}$), the above calculation does not rely on
linearity of $f(x)=\mathbb{E}(y|x)$.
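A Monte Carlo sketch can illustrate both points: below we draw normal covariates with a deliberately non-spherical covariance $\Sigma$ and compare the simulated \text{$V^+$}{} to the closed form of the theorem. All settings (dimensions, $\Sigma$, seed, replication count) are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma2, reps = 50, 5, 1.0, 4000   # arbitrary illustrative settings

# A deliberately non-spherical covariance, to illustrate that V+ is
# unaffected by Sigma.
A = rng.standard_normal((p, p))
Sigma = A @ A.T + p * np.eye(p)
L = np.linalg.cholesky(Sigma)

tr_sum = 0.0
for _ in range(reps):
    X = rng.standard_normal((n, p)) @ L.T   # rows i.i.d. N(0, Sigma)
    # E(X^T X) = n Sigma is known exactly, so only the inverse is averaged.
    tr_sum += np.trace(np.linalg.inv(X.T @ X) @ (n * Sigma))

v_plus_mc = sigma2 / n * tr_sum / reps - sigma2 * p / n
v_plus_exact = sigma2 * p / n * (p + 1) / (n - p - 1)
```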
When $f(x)$ is linear, i.e., the linear model is unbiased, it is not hard
to see that $\text{$B^+$}=0$, and the next result follows from \eqref{eq:optr}.
\begin{cor}
\label{cor:least_squares_normal}
Assume the conditions of Theorem \ref{thm:least_squares_normal}, and
further, assume that $f(x)=x^T \beta$, a linear function of $x$. Then
for the least squares regression estimator,
$$
{\rm OptR} = {\rm OptS} + \frac{\sigma^2 p}{n}
\frac{p+1}{n-p-1} = \frac{\sigma^2 p}{n}
\bigg(2+\frac{p+1}{n-p-1}\bigg)
$$
\end{cor}
For the unbiased case considered in Corollary
\ref{cor:least_squares_normal}, the same result can be found in
previous works, in particular in \citet{dicker2013optimal}, where it
is proven in the appendix. It is also similar to older results
from \citet{stein1960multiple,tukey1967discussion,
hocking1976analysis,thompson1978selection1,thompson1978selection2},
which assume the pair $(x,y)$ is jointly normal (and thus also assume
the linear model to be unbiased).
We return to these older classical results in the next section. When
bias is present, our decomposition is required, so that the above
calculation still applies to \text{$V^+$}.
\subsection{Asymptotic calculation of \text{$V^+$}{} for nonnormal covariates}
Using standard results from random matrix theory, the result of
Theorem \ref{thm:least_squares_normal} can be generalized to an
asymptotic result over a wide class of distributions.\footnote{We
thank Edgar Dobriban for help in formulating and proving this
result.}
\begin{thm}
\label{thm:least_squares_asymp}
Assume that $x \sim Q$ is generated as follows: we draw $z \in \mathbb{R}^p$,
having i.i.d.\ components $z_i \sim F$, $i=1,\ldots,p$, where $F$ is
any distribution with zero mean and unit variance, and then set
$x=\Sigma^{1/2} z$, where $\Sigma \in \mathbb{R}^{p \times p}$ is positive
definite and $\Sigma^{1/2}$ is its symmetric square root.
Consider an asymptotic setup where $p/n \to \gamma \in (0,1)$ as $n
\to \infty$. Then
$$
\text{$V^+$} \to \frac{\sigma^2 \gamma^2}{1-\gamma}
\quad \text{as $n \to \infty$}.
$$
\end{thm}
\begin{proof}
Denote by $X_n = Z_n \Sigma^{1/2}$ the training covariate
matrix, where $Z_n$ has rows $z_1,\ldots,z_n$, and we use subscripts
of $X_n,Z_n$ to denote the dependence on $n$ in our asymptotic
calculations below. Then as in the proof of Theorem
\ref{thm:least_squares_nonneg},
$$
\text{$V^+$} = \frac{\sigma^2}{n} \mathrm{tr}\big(\mathbb{E} \big[ (X_n^T X_n)^{-1} \big]
\mathbb{E} [X_n^TX_n]\big)
= \frac{\sigma^2}{n} \mathrm{tr}\big(\mathbb{E} \big[ (Z_n^T Z_n)^{-1} \big]
\mathbb{E} [Z_n^TZ_n]\big)
= \frac{\sigma^2}{n} \mathrm{tr}\big( n \mathbb{E} \big[ (Z_n^T Z_n)^{-1} \big]
\big).
$$
The second equality used the relationship $X_n=Z_n\Sigma^{1/2}$, and
the third equality used the fact that the entries of $Z_n$ are i.i.d.\
with mean 0 and variance 1. This confirms that \text{$V^+$}{} does not
depend on the covariance matrix $\Sigma$.
Further, by the Marchenko-Pastur theorem, the distribution of
eigenvalues $\lambda_1,\ldots,\lambda_p$ of \smash{$Z_n^T Z_n/n$}
converges to a fixed law, independent of $F$; more precisely, the
random measure $\mu_n$, defined by
$$
\mu_n(A) = \frac{1}{p} \sum_{i=1}^p 1\{\lambda_i \in A\},
$$
converges weakly to the Marchenko-Pastur law $\mu$. We note that
the support of $\mu$ is bounded away from zero when $\gamma<1$. As the
eigenvalues of \smash{$n(Z_n^T
Z_n)^{-1}$} are simply $1/\lambda_1,\ldots,1/\lambda_p$, we also have
that the random measure \smash{$\tilde\mu_n$}, defined by
$$
\tilde\mu_n(A) = \frac{1}{p} \sum_{i=1}^p 1\{1/\lambda_i \in A\},
$$
converges to a fixed law, call it \smash{$\tilde\mu$}. Denoting the
mean of \smash{$\tilde\mu$} by $m$, we now have
$$
\text{$V^+$} = \frac{\sigma^2}{n} \mathrm{tr}\big( n \mathbb{E} \big[ (Z_n^T Z_n)^{-1} \big]
\big) = \frac{\sigma^2 p}{n} \mathbb{E}\bigg[\frac{1}{p} \sum_{i=1}^p
\frac{1}{\lambda_i}\bigg] \to \sigma^2 \gamma m \quad \text{as $n \to
\infty$}.
$$
As this same asymptotic limit, independent of $F$, must agree with
the specific case in which $F=N(0,1)$, we can conclude from Theorem
\ref{thm:least_squares_normal} that $m=\gamma/(1-\gamma)$, which
proves the result.
\end{proof}
The next result is stated for completeness.
\begin{cor}
\label{cor:least_squares_asymp}
Assume the conditions of Theorem \ref{thm:least_squares_asymp}, and
moreover, assume that the linear model is unbiased for $n$ large
enough. Then
$$
{\rm OptR} \to \sigma^2 \gamma \frac{2-\gamma}{1-\gamma}
\quad \text{as $n \to \infty$}.
$$
\end{cor}
It should be noted that the requirement of Theorem
\ref{thm:least_squares_asymp} that the covariate vector $x$ be
expressible as $\Sigma^{1/2} z$ with the entries of $z$ i.i.d.\ is not
a minor one, and limits the set of covariate distributions for
which this result applies, as has been discussed in the literature on
random matrix theory (e.g., \citealt{elkaroui2009concentration}). In
particular, left multiplication by the square root matrix
$\Sigma^{1/2}$ performs a kind of averaging operation. Consequently,
the covariates $x$ can either
have long-tailed distributions, or have complex dependence structures,
but not both, since then the averaging will mitigate any long tail of
the distribution $F$. In our simulations in Section
\ref{sec:least_squares_sims}, we examine some settings that combine
both elements, and indeed the value of \text{$V^+$}{} in such settings can
deviate substantially from what this theory suggests.
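For a concrete case covered by the theorem, the sketch below takes $F$ to be the Rademacher distribution (our own choice; any zero-mean, unit-variance $F$ would do) and compares a Monte Carlo estimate of \text{$V^+$}{} with the limit $\sigma^2\gamma^2/(1-\gamma)$, at $\gamma=1/4$:

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, reps, sigma2 = 400, 100, 200, 1.0   # arbitrary settings; gamma = p/n = 0.25
gamma = p / n

v_plus_sum = 0.0
for _ in range(reps):
    # F = Rademacher (+/-1 with equal probability): mean 0, variance 1.
    Z = rng.choice([-1.0, 1.0], size=(n, p))
    v_plus_sum += sigma2 / n * np.trace(n * np.linalg.inv(Z.T @ Z)) - sigma2 * p / n

v_plus_mc = v_plus_sum / reps
v_plus_limit = sigma2 * gamma ** 2 / (1 - gamma)
```

At these finite dimensions the simulated value sits close to, and slightly above, the asymptotic limit, consistent with the exact normal-case formula.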
\section{Covariance penalties for Random-X least squares}
\label{sec:least_squares_cp}
We maintain the setting of the last section, taking \smash{$\hat{f}_n$} to
be the least squares regression estimator of $Y$ on $X$, where $X$ has
full column rank (almost surely under its marginal distribution $Q^n$).
\subsection{A Random-X version of Mallows' Cp}
Let us denote \smash{$\mathrm{RSS}=\|Y-\hat{f}_n(X)\|_2^2$}, and recall
Mallows' Cp \citep{mallows1973comments}, which is defined as
$\mathrm{Cp}=\mathrm{RSS}/n+2\sigma^2p/n$.
The results in Theorems \ref{thm:least_squares_normal} and
\ref{thm:least_squares_asymp} lead us to define the following
generalized covariance penalty criterion we term {\rm RCp}:
$$
{\rm RCp} = \mathrm{Cp} + \text{$V^+$} = \frac{\mathrm{RSS}}{n} +
\frac{\sigma^2 p}{n} \bigg(2+\frac{p+1}{n-p-1}\bigg).
$$
An asymptotic approximation is given by
\smash{${\rm RCp} \approx \mathrm{RSS}/n + \sigma^2 \gamma
(2+\gamma/(1-\gamma))$}, in a problem scaling where
$p/n \to \gamma \in (0,1)$.
{\rm RCp}{} is an unbiased estimate of Random-X prediction error when the
linear model is unbiased and the covariates are normally distributed,
and an asymptotically unbiased estimate of Random-X prediction
error when the conditions of Theorem \ref{thm:least_squares_asymp}
hold. As we demonstrate below, it is also quite an effective
measure, in the sense that it has much lower variance (in the
appropriate settings for the covariate distributions) compared to
other almost-unbiased measures of Random-X prediction error, such as
OCV (ordinary leave-one-out cross-validation) and GCV (generalized
cross-validation). However, in addition to the dependence on the
covariate distribution as in Theorems \ref{thm:least_squares_normal}
and \ref{thm:least_squares_asymp}, two other major
drawbacks to the use of {\rm RCp}{} in practice should be acknowledged.
\begin{enumerate}
\item[(i)] {\it The assumption that $\sigma^2$ is known.} This
obviously affects the use of Cp in Fixed-X situations as well, as
has been noted in the literature.
\item[(ii)] {\it The assumption of no bias.} It is critical to note
here the difference from Fixed-X or Same-X situations, where {\rm OptS}{}
(i.e., Cp) is independent of the bias in the model and must only
correct for the ``overfitting'' incurred by model fitting. In
contrast, in Random-X, the existence of \text{$B^+$}, which is a component of
{\rm OptR}{} not captured by the training error, requires taking it into
account in the penalty, if we hope to obtain low-bias estimates of
prediction error. Moreover, it is often desirable to assume
nothing about the form of the true model $f(x)=\mathbb{E}(y|x)$, hence it
seems unlikely that theoretical considerations like those presented
in Theorems \ref{thm:least_squares_normal} and
\ref{thm:least_squares_asymp} can lead to estimates of \text{$B^+$}.
\end{enumerate}
We now propose enhancements that deal with each of these problems separately.
\subsection{Accounting for unknown $\sigma^2$ in unbiased least
squares}
Here, we assume that the linear model is unbiased, $f(x)=x^T \beta$,
but the variance $\sigma^2$ of the noise in \eqref{eq:iid_pairs} is
unknown. In the Fixed-X setting, it is customary to replace $\sigma^2$
in covariance penalty approaches like Cp with the unbiased estimate
\smash{$\hat{\sigma}^2 = \mathrm{RSS}/(n-p)$}.
An obvious choice is to also use \smash{$\hat{\sigma}^2$} in place of
$\sigma^2$ in {\rm RCp}, leading to a generalized covariance penalty
criterion we call \smash{\text{$\widehat{{\rm RCp}}$}}:
$$
\text{$\widehat{{\rm RCp}}$} = \frac{\mathrm{RSS}}{n} +
\frac{\hat{\sigma}^2 p}{n} \bigg(2+\frac{p+1}{n-p-1}\bigg) =
\frac{\mathrm{RSS}(n-1)}{(n-p)(n-p-1)}.
$$
An asymptotic approximation, under the scaling $p/n \to \gamma \in
(0,1)$, is \smash{$\text{$\widehat{{\rm RCp}}$} \approx \mathrm{RSS}/(n (1-\gamma)^2)$}.
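The equivalence between the penalty form and the closed form of \smash{\text{$\widehat{{\rm RCp}}$}} is purely algebraic, and can be confirmed in exact rational arithmetic; the sketch below does so for a few arbitrary integer triples $(\mathrm{RSS}, n, p)$ of our choosing:

```python
from fractions import Fraction

def rcp_hat_penalty_form(rss, n, p):
    # RSS/n + (sigma_hat^2 p / n)(2 + (p+1)/(n-p-1)), sigma_hat^2 = RSS/(n-p),
    # computed in exact rational arithmetic.
    sigma_hat2 = Fraction(rss, n - p)
    return Fraction(rss, n) + sigma_hat2 * Fraction(p, n) * (2 + Fraction(p + 1, n - p - 1))

def rcp_hat_closed_form(rss, n, p):
    # RSS (n-1) / ((n-p)(n-p-1))
    return Fraction(rss * (n - 1), (n - p) * (n - p - 1))

lhs = rcp_hat_penalty_form(123, 50, 5)   # arbitrary integer RSS, n, p
rhs = rcp_hat_closed_form(123, 50, 5)
```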
This penalty, as it turns out, is exactly equivalent to the Sp
criterion of \citet{tukey1967discussion,sclove1969criteria}; see also
\citet{stein1960multiple,hocking1976analysis,
thompson1978selection1,thompson1978selection2}. These authors all
studied the case in which $(x,y)$ is jointly normal, and therefore the
linear model is assumed correct for the full model and any
submodel. The asymptotic approximation, on the other hand, is equivalent
to the GCV (generalized cross-validation) criterion of
\citet{craven1978smoothing,golub1979generalized}, though the
motivation behind the derivation of GCV is somewhat different.
Comparing \text{$\widehat{{\rm RCp}}$}{} to {\rm RCp}{} as a model evaluation criterion, we can
see the price of estimating $\sigma^2$ as opposed to knowing it, in
their asymptotic approximations. Their expectations are similar when
the linear model is true, but the variance of the asymptotic form of
\smash{\text{$\widehat{{\rm RCp}}$}} is roughly $1/(1-\gamma)^4$ times larger than that of
the asymptotic form of {\rm RCp}. So when, e.g., $\gamma = 0.5$, the
price of not knowing $\sigma^2$ translates roughly into a 16-fold
increase in the variance of the model evaluation metric. This is
clearly demonstrated in our simulation results in the next section.
\subsection{Accounting for bias and estimating \text{$B^+$}}
Next, we move to assuming nothing about the underlying regression
function $f(x)=\mathbb{E}(y|x)$, and we examine methods that account for the
resulting bias \text{$B^+$}.
First we consider the behavior of \smash{\text{$\widehat{{\rm RCp}}$}} (or
equivalently Sp) in the case that bias is present. Though this
criterion was not designed to account for bias at all, we will see
it still performs an inherent bias correction. A straightforward
calculation shows that in this case
$$
\mathbb{E}_{X,Y} \mathrm{RSS} = (n-p) \sigma^2 + nB,
$$
where recall \smash{$B=\mathbb{E}_X \|\mathbb{E}(\hat{f}_n(X)\,|\,X)-f(X)\|^2/n$},
generally nonzero in the current setting, and thus
$$
\mathbb{E}_{X,Y} \text{$\widehat{{\rm RCp}}$} = \sigma^2 \frac{n-1}{n-p-1} + B
\frac{n(n-1)}{(n-p)(n-p-1)} \approx \frac{\sigma^2}{1-\gamma} +
\frac{B}{(1-\gamma)^2},
$$
the last step using an asymptotic approximation, under the scaling
$p/n \to \gamma \in (0,1)$. Note that the second term on the
right-hand side above is the (rough) implicit estimate of integrated
Random-X bias used by \smash{\text{$\widehat{{\rm RCp}}$}}, which is larger than the
integrated Same-X bias $B$ by a factor of $1/(1-\gamma)^2$. Put
differently, \smash{\text{$\widehat{{\rm RCp}}$}} implicitly assumes that \text{$B^+$}{} is (roughly)
$1/(1-\gamma)^2-1$ times as big as the Same-X
bias. We see no reason to believe that this relationship
(between Random-X and Same-X biases) is generally correct, but it is
not totally naive either, as we will see empirically that
\smash{\text{$\widehat{{\rm RCp}}$}} still provides reasonably good estimates of Random-X
prediction error in biased situations in Section
\ref{sec:least_squares_sims}. A partial explanation is available
through a connection to OCV, as
discussed, e.g., in the derivation of GCV in
\citet{craven1978smoothing}. We return to this issue in Section
\ref{sec:discussion}.
We describe a more principled approach to estimating the
integrated Random-X bias, $B+\text{$B^+$}$, assuming knowledge of $\sigma^2$,
and leveraging a bias estimate implicit to OCV.
Recall that OCV builds $n$ models, each time leaving one observation
out, applying the fitted model to that observation, and using these
$n$ holdout predictions to estimate prediction error. Thus it gives
us an almost-unbiased estimate of Random-X prediction error ${\rm ErrR}$
(``almost'', because its training sets are all of size $n-1$ rather than
$n$). For least squares regression (and other estimators), the
well-known ``shortcut-trick'' for OCV
(e.g., \citealt{wahba1990spline,hastie2009elements}) allows us to
represent the OCV residuals in terms of weighted training residuals.
Write \smash{$\hat{f}_n^{(-i)}$} for the least squares estimator trained
on all but $(x_i,y_i)$, and $h_{ii}$ the $i$th diagonal element of
$X(X^T X)^{-1}X^T$, for $i=1,\ldots,n$. Then this trick tells
us that
$$
y_i - \hat{f}_n^{(-i)}(x_i) = \frac{y_i-\hat{f}_n(x_i)}{1-h_{ii}},
$$
which can be checked by applying the Sherman-Morrison update
formula for relating the inverse of a matrix to the inverse of its
rank-one perturbation. Hence the OCV error can be expressed as
$$
\mathrm{OCV} = \frac{1}{n} \sum_{i=1}^n
\Big(y_i-\hat{f}_n^{(-i)}(x_i)\Big)^2 = \frac{1}{n} \sum_{i=1}^n
\bigg(\frac{y_i-\hat{f}_n(x_i)}{1-h_{ii}}\bigg)^2.
$$
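The shortcut identity for the leave-one-out residuals can be verified directly against brute-force refitting; the following sketch does so on a small synthetic problem (dimensions, data-generating choices, and seed are illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 30, 4   # arbitrary small illustrative problem
X = rng.standard_normal((n, p))
Y = X @ rng.standard_normal(p) + rng.standard_normal(n)

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
loo_shortcut = (Y - H @ Y) / (1 - h)   # shortcut form of the OCV residuals

# Brute force: actually refit n times, each time leaving one observation out.
loo_brute = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta_i, *_ = np.linalg.lstsq(X[keep], Y[keep], rcond=None)
    loo_brute[i] = Y[i] - X[i] @ beta_i

max_gap = np.max(np.abs(loo_shortcut - loo_brute))
```

The two sets of residuals agree to machine precision.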
Taking an expectation conditional on $X$, we find that
\begin{align}
\nonumber
\mathbb{E}(\mathrm{OCV} | X) &=
\frac{1}{n} \sum_{i=1}^n \frac{\mathrm{Var}(y_i-\hat{f}_n(x_i) \,|\,
X)}{(1-h_{ii})^2} + \frac{1}{n} \sum_{i=1}^n
\frac{[f(x_i)-\mathbb{E}(\hat{f}_n(x_i) \,|\, X)]^2}{(1-h_{ii})^2} \\
\label{eq:ocv_decomp}
&=\frac{\sigma^2}{n} \sum_{i=1}^n \frac{1}{1-h_{ii}} +
\frac{1}{n} \sum_{i=1}^n \frac{[f(x_i)-\mathbb{E}(\hat{f}_n(x_i) \,|\,
X)]^2}{(1-h_{ii})^2},
\end{align}
where the second line uses \smash{$\mathrm{Var}(y_i-\hat{f}_n(x_i)
\,|\, X)=(1-h_{ii})\sigma^2$}, $i=1,\ldots,n$.
The above display shows
$$
\mathrm{OCV}-\frac{\sigma^2}{n} \sum_{i=1}^n \frac{1}{1-h_{ii}} =
\frac{1}{n} \sum_{i=1}^n \Big( \big(y_i-\hat{f}_n(x_i)\big)^2 - (1-h_{ii})
\sigma^2\Big) \frac{1}{(1-h_{ii})^2}
$$
is an almost-unbiased estimate of the integrated Random-X prediction
bias, $B+\text{$B^+$}$ (it is almost-unbiased, due to the almost-unbiased
status of OCV as an estimate of Random-X prediction error).
Meanwhile, an unbiased estimate of the integrated Same-X prediction
bias $B$ is
$$
\frac{\mathrm{RSS}}{n} - \frac{\sigma^2 (n-p)}{n} = \frac{1}{n}
\sum_{i=1}^n \Big(\big(y_i-\hat{f}_n(x_i)\big)^2 - (1-h_{ii}) \sigma^2 \Big).
$$
Subtracting the last display from the second to last delivers
$$
\widehat\text{$B^+$} = \frac{1}{n} \sum_{i=1}^n \Big(\big(y_i-\hat{f}_n(x_i)\big)^2 -
(1-h_{ii}) \sigma^2 \Big) \bigg(\frac{1}{(1-h_{ii})^2} - 1\bigg),
$$
an almost-unbiased estimate of the excess bias \text{$B^+$}. We now define a
generalized covariance penalty criterion that we call \text{{\rm RCp}$^+$}{} by
adding this to {\rm RCp}:
$$
\text{{\rm RCp}$^+$} = \mathrm{RCp} + \widehat\text{$B^+$}
= \mathrm{OCV} - \frac{\sigma^2}{n} \sum_{i=1}^n
\frac{h_{ii}}{1-h_{ii}} + \frac{\sigma^2 p}{n}
\bigg(1+\frac{p+1}{n-p-1}\bigg).
$$
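The second equality in the display above, rewriting ${\rm RCp}+\widehat\text{$B^+$}$ in terms of OCV, is again algebraic and holds for any data set; the sketch below checks it numerically on a deliberately biased (nonlinear-mean) example of our own construction, with $\sigma^2$ treated as known:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, sigma2 = 40, 6, 1.0   # arbitrary settings; sigma2 treated as known
X = rng.standard_normal((n, p))
Y = np.sin(3 * X[:, 0]) + rng.standard_normal(n)   # a biased (nonlinear) mean

H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)
res = Y - H @ Y
rss = np.sum(res ** 2)
ocv = np.mean((res / (1 - h)) ** 2)

rcp = rss / n + sigma2 * p / n * (2 + (p + 1) / (n - p - 1))
b_plus_hat = np.mean((res ** 2 - (1 - h) * sigma2) * (1 / (1 - h) ** 2 - 1))
rcp_plus = rcp + b_plus_hat

# The displayed OCV-based form of RCp+ should match exactly.
rcp_plus_ocv_form = (ocv - sigma2 / n * np.sum(h / (1 - h))
                     + sigma2 * p / n * (1 + (p + 1) / (n - p - 1)))
```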
It is worth pointing out that, like {\rm RCp}{} and \smash{\text{$\widehat{{\rm RCp}}$}{}}, \text{{\rm RCp}$^+$}
assumes that we are in a setting covered by Theorem
\ref{thm:least_squares_normal} or asymptotically by Theorem
\ref{thm:least_squares_asymp}, as it takes advantage of the value of
\text{$V^+$}{} prescribed by these theorems.
A key question, of course, is: what have we achieved by moving from
OCV to \text{{\rm RCp}$^+$}, i.e., can we explicitly show that \text{{\rm RCp}$^+$}{} is preferable to
OCV for estimating Random-X prediction error when its assumptions hold?
We give a partial positive answer next.
\subsection{Comparing \text{{\rm RCp}$^+$} and OCV}
\label{sec:compare_rcpp_ocv}
As already discussed, OCV is an almost-unbiased estimate of
Random-X prediction error (or an unbiased estimate of Random-X
prediction error for the procedure in question, here least squares,
applied to a training set of size $n-1$). The decomposition in
\eqref{eq:ocv_decomp} demonstrates its variance and bias components,
respectively, conditional on $X$. It should be emphasized that OCV
has the significant advantage over \text{{\rm RCp}$^+$}{} of not requiring
knowledge of $\sigma^2$ or assumptions on $Q$. Assuming that
$\sigma^2$ is known and $Q$ is well-behaved,
we can compare the two criteria for estimating Random-X prediction error in
least squares.
OCV is generally slightly conservative as an
estimate of Random-X prediction error, as models trained on
more observations are generally expected to be better. \text{{\rm RCp}$^+$}{} does
not suffer
from such slight conservativeness in the variance component, relying
on the integrated variance from theory, and in that regard
it may already be seen as an improvement. However, we will choose to
ignore this issue of conservativeness, as the difference in training
on $n-1$ versus $n$ observations is clearly small when $n$ is large.
Thus, we can approximate the mean squared error or MSE of each
method, as an estimate of Random-X prediction error, as
\begin{align*}
\mathbb{E}(\mathrm{OCV}-{\rm ErrR})^2 &\approx
\mathrm{Var}_X \big(\mathbb{E}(\mathrm{OCV}|X)\big) +
\mathbb{E}_X \big(\mathrm{Var}(\mathrm{OCV}|X)\big), \\
\mathbb{E}(\text{{\rm RCp}$^+$} - {\rm ErrR})^2 &\approx
\mathrm{Var}_X \big(\mathbb{E}(\text{{\rm RCp}$^+$}|X)\big) +
\mathbb{E}_X \big(\mathrm{Var}(\text{{\rm RCp}$^+$}|X)\big),
\end{align*}
where these two approximations would be equalities if OCV and \text{{\rm RCp}$^+$}{}
were exactly unbiased estimates of ${\rm ErrR}$. Note that
conditioned on $X$, the difference between OCV and \text{{\rm RCp}$^+$}{}
is nonrandom (conditioned on $X$, all
diagonal entries $h_{ii}$, $i=1,\ldots,n$ are nonrandom). Hence
$\mathbb{E}_X\mathrm{Var}(\mathrm{OCV}|X)=\mathbb{E}_X\mathrm{Var}(\text{{\rm RCp}$^+$}|X)$, and we are
left to compare $\mathrm{Var}_X\mathbb{E}(\mathrm{OCV}|X)$ and
$\mathrm{Var}_X\mathbb{E}(\text{{\rm RCp}$^+$}|X)$, according to the (approximate) expansions above,
to compare the MSEs of OCV and \text{{\rm RCp}$^+$}{}.
Denote the two terms in \eqref{eq:ocv_decomp} by $v(X)$ and $b(X)$,
respectively, so that $\mathbb{E}(\mathrm{OCV}|X)=v(X)+b(X)$ can be viewed as
a decomposition into variance and bias components, and note that by
construction
$$
\mathrm{Var}_X \big(\mathbb{E}(\mathrm{OCV}|X)\big) = \mathrm{Var}_X\big(v(X)+b(X)\big)
\quad\text{and}\quad
\mathrm{Var}_X \big(\mathbb{E}(\text{{\rm RCp}$^+$}|X)\big) = \mathrm{Var}_X\big(b(X)\big).
$$
It seems reasonable to believe that \smash{$\mathrm{Var}_X\mathbb{E}(\mathrm{OCV}|X)
\geq \mathrm{Var}_X\mathbb{E}(\text{{\rm RCp}$^+$}|X)$} would hold in most cases, thus
\text{{\rm RCp}$^+$}{} would be no worse than OCV. One situation in which this occurs
is the case
when the linear model is unbiased, hence $b(X)=0$ and consequently
\smash{$\mathrm{Var}_X \big(\mathbb{E}(\text{{\rm RCp}$^+$}|X)\big) = \mathrm{Var}_X\big(b(X)\big) = 0$}.
In general, \smash{$\mathrm{Var}_X\mathbb{E}(\mathrm{OCV}|X)
\geq \mathrm{Var}_X\mathbb{E}(\text{{\rm RCp}$^+$}|X)$} is guaranteed when
\smash{$\mathrm{Cov}_X(v(X),b(X)) \geq 0$}. This means that
choices of $X$ that give large variance tend to also give large
bias, which seems reasonable to assume and indeed appears to be
true in our experiments. But,
this covariance depends on the underlying mean function
$f(x)=\mathbb{E}(y|x)$ in complicated ways, and at the moment it
eludes rigorous analysis.
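As an aside, the OCV criterion for least squares need not be computed by refitting the model $n$ times: the standard hat-matrix shortcut gives the leave-one-out residuals in closed form. Below is a minimal sketch (the function names are ours, and this is the generic leave-one-out identity rather than the paper's own notation) that verifies the shortcut against explicit refitting.

```python
import numpy as np

def loocv_shortcut(X, y):
    # Hat-matrix shortcut: OCV = (1/n) * sum_i (r_i / (1 - h_ii))^2,
    # where r are the full-sample least squares residuals.
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix X (X^T X)^{-1} X^T
    r = y - H @ y
    h = np.diag(H)
    return np.mean((r / (1.0 - h)) ** 2)

def loocv_explicit(X, y):
    # Brute force: refit least squares on each leave-one-out training set.
    n = len(y)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        beta = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
        errs.append((y[i] - X[i] @ beta) ** 2)
    return np.mean(errs)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 5))
y = X.sum(axis=1) + rng.standard_normal(40)
ocv_fast = loocv_shortcut(X, y)
ocv_slow = loocv_explicit(X, y)
```

The two computations agree to numerical precision, which is why conditional on $X$ the OCV criterion is a simple weighted function of the residuals.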
\section{Simulations for least squares regression}
\label{sec:least_squares_sims}
We empirically study the decomposition of Random-X prediction
error into its various components for least squares regression in
different problem settings, and examine the performance of the
various model evaluation criteria in these settings. The only
criterion which is assumption-free and should invariably give unbiased
estimates of Random-X prediction error is OCV (modulo the slight bias
in using $n-1$ rather than $n$ training
observations). Thus we may consider OCV as the ``gold standard''
approach, and we will hold the other methods up to its standard under
different conditions, either when the assumptions they use hold or are
violated.
Before diving into the details, here is a high-level summary of the
results: {\rm RCp}{} performs very well in unbiased settings (when the mean
is linear), but very poorly in biased ones (when the mean is
nonlinear); \text{{\rm RCp}$^+$}{} and \smash{\text{$\widehat{{\rm RCp}}$}{}} perform well overall, with
\smash{\text{$\widehat{{\rm RCp}}$}{}} having an advantage and even holding a small
advantage over OCV, in essentially all settings, unbiased and biased.
This is perhaps a bit surprising since \smash{\text{$\widehat{{\rm RCp}}$}{}} is not
designed to account for bias, but then again, not as surprising once
we recall that \smash{\text{$\widehat{{\rm RCp}}$}{}} is closely related to GCV.
We perform experiments in a total of six data generating mechanisms,
based on three different distributions $Q$ for the covariate vector
$x$, and two models for $f(x)=\mathbb{E}(y|x)$, one unbiased (linear) and the
other biased (nonlinear). The three generating models for $x$ are
as follows.
\begin{itemize}
\item {\it Normal.} We choose $Q=N(0,\Sigma)$, where $\Sigma$ is
block-diagonal, containing five blocks such that all variables in a
block have pairwise correlation 0.9.
\item {\it Uniform.} We define $Q$ by taking $N(0,\Sigma)$ as above,
then applying the standard normal distribution function componentwise.
In other words, this can be seen as a Gaussian copula with
uniform marginals.
\item {\it $t(4)$.} We define $Q$ by taking $N(0,\Sigma)$ as above,
then adjust the marginal distributions appropriately, again a
Gaussian copula with $t(4)$ marginals.
\end{itemize}
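The three covariate mechanisms can be sketched in a few lines; the block structure (five equal blocks with unit variances) and the use of SciPy's normal CDF and $t$ quantile function are our reading of the construction above.

```python
import numpy as np
from scipy.stats import norm, t

def draw_x(n, p, dist, rng, rho=0.9, nblocks=5):
    # Sigma is block-diagonal with nblocks equal blocks, unit variances,
    # and pairwise correlation rho within each block.
    b = p // nblocks
    block = (1 - rho) * np.eye(b) + rho * np.ones((b, b))
    Sigma = np.kron(np.eye(nblocks), block)
    Z = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    if dist == "normal":
        return Z
    U = norm.cdf(Z)              # Gaussian copula: uniform marginals
    if dist == "uniform":
        return U
    if dist == "t4":
        return t.ppf(U, df=4)    # Gaussian copula with t(4) marginals
    raise ValueError(dist)

rng = np.random.default_rng(0)
X_unif = draw_x(200, 50, "uniform", rng)
X_t4 = draw_x(200, 50, "t4", rng)
```

Since $\Sigma$ has unit diagonal, each marginal of $Z$ is standard normal, so applying the normal CDF componentwise yields exact uniform marginals while preserving the Gaussian dependence structure.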
Note that Theorem \ref{thm:least_squares_normal} covers the
normal setting (and in fact, the covariance matrix $\Sigma$ plays no
role in the {\rm RCp}{} estimate), while the uniform and $t(4)$ settings
satisfy the assumptions of neither Theorem
\ref{thm:least_squares_normal} nor Theorem
\ref{thm:least_squares_asymp}. Also, the latter two settings differ
considerably in the nature of the distribution $Q$: finite support
versus long tails, respectively. The two generating models for $y|x$
both use $\epsilon \sim N(0,20^2)$, but differ in the specification
for the mean function $f(x)=\mathbb{E}(y|x)$, as follows.
\begin{itemize}
\item {\it Unbiased.} We set $f(x)=\sum_{j=1}^p x_j$.
\item {\it Biased.} We set $f(x) = C \sum_{j=1}^p |x_j|$.
\end{itemize}
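The response-generating step under the two settings is equally simple; the helper name and default arguments below are ours.

```python
import numpy as np

def make_response(X, biased, C=0.75, sd=20.0, rng=None):
    # Unbiased setting: f(x) = sum_j x_j; biased setting: f(x) = C * sum_j |x_j|.
    # Noise is N(0, sd^2), matching epsilon ~ N(0, 20^2) above.
    rng = np.random.default_rng() if rng is None else rng
    f = C * np.abs(X).sum(axis=1) if biased else X.sum(axis=1)
    return f + sd * rng.standard_normal(X.shape[0])

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
y_lin = make_response(X, biased=False, rng=rng)
y_nonlin = make_response(X, biased=True, C=0.75, rng=rng)
```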
The simulations discussed in the coming subsections all use $n=100$
training observations. In the ``high-dimensional'' case, we use $p=50$
variables and $C=0.75$, while in the ``low-dimensional, extreme bias''
case, we use $p=10$ and $C=100$. In both cases, we use a test set
of $10^4$ observations to evaluate Random-X quantities like
${\rm ErrR},\text{$B^+$},\text{$V^+$}$. Lastly, all figures show results averaged over 5000
repetitions.
\subsection{The components of Random-X prediction error}
We empirically evaluate $B,V,\text{$B^+$},\text{$V^+$}$ for least squares regression
fitted in the six settings (three for the distribution of
$x$ times two for $\mathbb{E}(y|x)$) in the high-dimensional
case, with $n=100$ and $p=50$. The results are shown
in Figure \ref{fig:decomp}. We can see that
the value of \text{$V^+$}{} implied by Theorem
\ref{thm:least_squares_normal} is extremely accurate for the
normal setting, and also very accurate for the short-tailed
uniform setting. However for the $t(4)$ setting, the value of
\text{$V^+$}{} is quite a bit higher than what the theory implies. In
terms of bias, we observe that for the biased settings the value of
\text{$B^+$}{} is bigger than the Same-X bias $B$, and so it must be taken into
account if we hope to obtain reasonable estimates of Random-X
prediction error ${\rm ErrR}$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.7\textwidth]{ls_n100_p50_decomp.pdf}
\caption{\it Decomposition of Random-X prediction error into its
reducible components: Same-X bias $B$, Same-X variance $V$,
excess bias \text{$B^+$}, and excess variance \text{$V^+$}, in the
``high-dimensional'' case with $n=100$ and $p=50$.}
\label{fig:decomp}
\end{figure}
\subsection{Comparison of performances in estimating prediction error}
Next we compare the performance of the proposed criteria for
estimating the Random-X prediction error of least squares over the six
simulation settings. The results in Figures \ref{fig:relative_high} and
\ref{fig:relative_low} correspond to the
``high-dimensional'' case with $n=100$ and $p=50$ and the
``low-dimensional, extreme bias'' case with $n=100$ and $p=10$,
respectively. Displayed are the MSEs in estimating the Random-X
prediction error, relative to OCV; also, the MSE for each
method is broken down into squared bias and variance components.
\begin{figure}[htb]
\centering
\includegraphics[width=0.75\textwidth]{ls_n100_p50_rel.pdf}
\caption{\it MSEs of the different methods in estimating
Random-X prediction error relative to OCV, in the
``high-dimensional'' case with $n=100$ and $p=50$. The
plot is truncated at a relative error of 3 for clarity, but the
{\rm RCp}{} relative errors continue as high as 10 in the biased
settings.}
\label{fig:relative_high}
\end{figure}
In the high-dimensional case in Figure \ref{fig:relative_high}, we
see that for the true linear models (three leftmost scenarios),
{\rm RCp}{} has by far the lowest MSE in estimating Random-X prediction
error, much better than OCV. For the normal and uniform
covariate distributions, it also has no bias in estimating this error,
as warranted by Theorem \ref{thm:least_squares_normal} for the normal
setting. For the $t(4)$ distribution, there is
already significant bias in the prediction error estimates generated
by {\rm RCp}, as is expected from the results in Figure \ref{fig:decomp};
however, if the linear model is correct then we see {\rm RCp}{} still has
three- to five-fold lower MSE compared to all other methods.
The situation changes dramatically when bias is added (three rightmost
scenarios). Now, {\rm RCp}{} is by far the worst method,
failing completely to account for large \text{$B^+$}, and its relative MSE
compared to OCV reaches as high as 10.
As for \text{{\rm RCp}$^+$}{} and \smash{\text{$\widehat{{\rm RCp}}$}{}} in the high-dimensional case, we
see that \text{{\rm RCp}$^+$}{} indeed has lower error than OCV under the normal
models as argued in Section \ref{sec:compare_rcpp_ocv}, and also in
the uniform models. This is true regardless of the presence of bias.
The difference, however, is small: between 0.1\% and 0.7\%. In these
settings, we can see \smash{\text{$\widehat{{\rm RCp}}$}{}} has even lower MSE than \text{{\rm RCp}$^+$},
with no evident bias in dealing with the biased models.
For the long-tailed $t(4)$ distribution, both \smash{\text{$\widehat{{\rm RCp}}$}{}} and
\text{{\rm RCp}$^+$}{} suffer some bias in estimating prediction error, as
expected. Interestingly, in the nonlinear model with $t(4)$
covariates (rightmost scenario), \smash{\text{$\widehat{{\rm RCp}}$}{}} suffers
significant bias in estimating prediction error, as opposed to
\text{{\rm RCp}$^+$}. However, this bias is not large enough to offset the
increased variance of \text{{\rm RCp}$^+$}{} and OCV.
\begin{figure}[htb]
\centering
\includegraphics[width=0.75\textwidth]{ls_n100_p10_rel.pdf}
\caption{\it MSEs of the different methods in estimating
Random-X prediction error relative to OCV, in the
``low-dimensional, extreme bias'' case with $n=100$ and $p=10$.}
\label{fig:relative_low}
\end{figure}
In the low-dimensional case in Figure \ref{fig:relative_low}, many of
the same conclusions apply: {\rm RCp}{} does well if the linear
model is correct, even with the long-tailed covariate distribution,
but fails completely in the presence of nonlinearity. Also, \text{{\rm RCp}$^+$}{}
performs almost identically to OCV throughout. The most important
distinction is the failure of \smash{\text{$\widehat{{\rm RCp}}$}{}} in the normal covariate,
biased setting, where it suffers significant bias in estimating the
prediction error (see circled region in the plot). This demonstrates
that the heuristic correction for \text{$B^+$} employed by \smash{\text{$\widehat{{\rm RCp}}$}{}} can
fail when the linear model does not hold, as opposed to \text{{\rm RCp}$^+$}{} and
OCV. We discuss this further in Section \ref{sec:discussion}.
\section{The effects of ridge regularization}
\label{sec:ridge}
In this section, we examine ridge regression, which behaves
similarly in some ways to least squares regression, and differently in
others. In particular, like least squares, it has nonnegative excess
bias, but unlike least squares, it can have negative excess variance,
increasingly so for larger amounts of regularization.
These results are established in the subsections below, where we study
excess bias and variance separately. Throughout, we will write
\smash{$\hat{f}_n$} for the estimator from the ridge regression of $Y$ on
$X$, i.e.,
\smash{$\hat{f}_n(x)=x^T (X^T X + \lambda
I)^{-1} X^T Y$}, where the tuning parameter $\lambda \geq 0$ is
considered arbitrary (and for simplicity, we make the dependence of
\smash{$\hat{f}_n$} on $\lambda$ implicit).
When $\lambda=0$, we must assume that $X$ has full column rank
(almost surely under its marginal distribution $Q^n$), but when
$\lambda>0$, no assumption is needed on $X$.
\subsection{Nonnegativity of \text{$B^+$}}
We prove an extension of the excess bias result in Theorem
\ref{thm:least_squares_nonneg} for least squares regression, showing
that the excess bias in ridge regression is also nonnegative.
\begin{thm}
\label{thm:ridge_eb_nonneg}
For \smash{$\hat{f}_n$} the ridge regression estimator, we have $\text{$B^+$} \geq
0$.
\end{thm}
\begin{proof}
This result is actually itself a special case of Theorem
\ref{thm:rkhs_eb_nonneg}; the latter is phrased in somewhat
different (functional) notation, so for concreteness, we give a
direct proof of the result for ridge regression here.
Let $X_0 \in \mathbb{R}^{n\times p}$ be a matrix of test covariate values,
with rows i.i.d.\ from $Q$, and let $Y_0 \in \mathbb{R}^n$ be a vector of
associated test response values. Then the excess bias in \eqref{eq:eb} can
be written as
$$
\text{$B^+$} = \mathbb{E}_{X,X_0} \frac{1}{n} \big\|
\mathbb{E}\big(\hat{f}_n(X_0) \,|\, X,X_0 \big)-f(X_0)\big\|_2^2 -
\mathbb{E}_X \frac{1}{n} \big\|
\mathbb{E}\big(\hat{f}_n(X) \,|\, X \big)-f(X)\big\|_2^2.
$$
Note \smash{$\hat{f}_n(X)=X(X^T X+\lambda I)^{-1} X^T Y$}, and by
linearity, \smash{$\mathbb{E}(\hat{f}_n(X) \,|\, X)=X(X^T X+\lambda I)^{-1} X^T
f(X)$}.
Recalling the optimization problem underlying ridge regression, we
thus have
$$
\mathbb{E}\big(\hat{f}_n(X) \,|\, X\big)= \mathop{\mathrm{argmin}}_{X\beta \in \mathbb{R}^n} \;
\| f(X) - X\beta \|_2^2 + \lambda\|\beta\|_2^2.
$$
An analogous statement holds for \smash{$\hat{f}_{0n}$}, which
we write to denote the result of the ridge regression of $Y_0$ on
$X_0$; we have
$$
\mathbb{E}\big(\hat{f}_{0n}(X_0) \,|\, X_0\big)= \mathop{\mathrm{argmin}}_{X_0\beta \in \mathbb{R}^n} \;
\| f(X_0) - X_0\beta \|_2^2 + \lambda\|\beta\|_2^2.
$$
Now write \smash{$\beta_n=(X^T X + \lambda I)^{-1} X^T f(X)$} and
\smash{$\beta_{0n}=(X_0^T X_0 + \lambda I)^{-1} X_0^T f(X_0)$} for
convenience. By optimality of \smash{$X_0\beta_{0n}$} for
the minimization problem in the last display,
$$
\| X_0 \beta_n - f(X_0)\|_2^2 + \lambda \|\beta_n\|_2^2 \geq
\| X_0 \beta_{0n} - f(X_0)\|_2^2 + \lambda \|\beta_{0n}\|_2^2,
$$
and taking an expectation over $X,X_0$ gives
\begin{align*}
\mathbb{E}_{X,X_0} \Big[\| X_0 \beta_n - f(X_0)\|_2^2 + \lambda
\|\beta_n\|_2^2\Big] &\geq
\mathbb{E}_{X_0} \Big[\| X_0 \beta_{0n} - f(X_0)\|_2^2 + \lambda
\|\beta_{0n}\|_2^2\Big] \\
&=
\mathbb{E}_X \Big[\| X \beta_n - f(X)\|_2^2 + \lambda
\|\beta_n\|_2^2\Big],
\end{align*}
where in the last line we used the fact that $(X,Y)$ and $(X_0,Y_0)$
are identical in distribution. Cancelling out the common term of
\smash{$\lambda\, \mathbb{E}_X \|\beta_n\|_2^2$} in the first and third lines
above establishes the result, since
\smash{$\mathbb{E}(\hat{f}_n(X_0) \,|\, X,X_0) = X_0\beta_n$} and
\smash{$\mathbb{E}(\hat{f}_n(X) \,|\, X)=X\beta_n$}.
\end{proof}
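The key identity in the proof, that $\beta_n=(X^T X + \lambda I)^{-1} X^T f(X)$ minimizes $\|f(X)-X\beta\|_2^2+\lambda\|\beta\|_2^2$, is easy to confirm numerically: the penalized problem is equivalent to ordinary least squares on an augmented system. A sketch with arbitrary synthetic data (the target vector standing in for $f(X)$ is of our choosing):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 30, 8, 2.5
X = rng.standard_normal((n, p))
fX = np.sin(X.sum(axis=1))       # any fixed target vector playing f(X)

# Closed form: beta_n = (X^T X + lam I)^{-1} X^T f(X).
beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ fX)

# The same beta solves min_beta ||f(X) - X beta||^2 + lam ||beta||^2,
# i.e., ordinary least squares on the augmented system
# [X; sqrt(lam) I] beta ~ [f(X); 0].
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([fX, np.zeros(p)])
beta_aug = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]
```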
\subsection{Negativity of \text{$V^+$}{} for large $\lambda$}
\label{sec:ridge_ev_neg}
Here we present two complementary results on the variance side.
\begin{prop}
\label{prop:ridge_var_decr}
For \smash{$\hat{f}_n$} the ridge regression estimator, the integrated
Random-X prediction variance,
$$
V+\text{$V^+$} = \mathbb{E}_{X,x_0} \mathrm{Var}\big(\hat{f}_n(x_0) \,|\, X,x_0\big),
$$
is a nonincreasing function of $\lambda$.
\end{prop}
\begin{proof}
As in the proofs of Theorems \ref{thm:least_squares_nonneg} and
\ref{thm:ridge_eb_nonneg}, let
$X_0 \in \mathbb{R}^{n\times p}$ be a test covariate matrix, and
notice that we can write the integrated Random-X variance as
$$
V+\text{$V^+$} = \mathbb{E}_{X,X_0} \frac{1}{n}
\mathrm{tr}\big[ \mathrm{Cov}\big( \hat{f}_n(X_0) \,|\, X,X_0 \big)\big].
$$
For a given value of $X,X_0$, we have
\begin{align*}
\frac{1}{n} \mathrm{tr}\big[ \mathrm{Cov}\big( \hat{f}_n(X_0) \,|\, X,X_0 \big)\big] &=
\frac{\sigma^2}{n} \mathrm{tr} \Big( X_0(X^T X + \lambda I)^{-1} X^T X
(X^T X + \lambda I)^{-1} X_0^T \Big) \\
&= \frac{\sigma^2}{n} \mathrm{tr} \bigg( X_0^T X_0 \sum_{i=1}^p u_iu_i^T
\frac{d_i^2}{(d_i^2 + \lambda)^2} \bigg),
\end{align*}
where the second line uses an eigendecomposition $X^T X =
U D U^T$, with $U \in \mathbb{R}^{p\times p}$ having orthonormal columns
$u_1,\ldots,u_p$ and
$D=\mathrm{diag}(d_1^2,\ldots,d_p^2)$. Taking a derivative with
respect to $\lambda$, we see
$$
\frac{d}{d\lambda} \Bigg(
\frac{1}{n} \mathrm{tr}\big[ \mathrm{Cov}\big( \hat{f}_n(X_0) \,|\, X,X_0 \big)\big]\Bigg)
= -2\frac{\sigma^2}{n} \mathrm{tr} \bigg( X_0^T X_0 \sum_{i=1}^p u_iu_i^T
\frac{\lambda d_i^2}{(d_i^2 + \lambda)^3} \bigg) \leq 0,
$$
the inequality due to the fact that $\mathrm{tr}(AB) \geq 0$ if $A,B$ are
positive semidefinite matrices. Taking an expectation and
switching the order of integration and differentiation (which is
possible because the integrand is a continuously differentiable
function of $\lambda>0$) gives
$$
\frac{d}{d\lambda} \Bigg(
\mathbb{E}_{X,X_0} \frac{1}{n} \mathrm{tr}\big[ \mathrm{Cov}\big( \hat{f}_n(X_0) \,|\, X,X_0
\big)\big]\Bigg) =
\mathbb{E}_{X,X_0} \frac{d}{d\lambda} \Bigg(
\frac{1}{n} \mathrm{tr}\big[ \mathrm{Cov}\big( \hat{f}_n(X_0) \,|\, X,X_0
\big)\big]\Bigg) \leq 0,
$$
the desired result.
\end{proof}
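Proposition \ref{prop:ridge_var_decr} is easy to check numerically for a fixed draw of $X,X_0$, since conditional on them the variance trace has the closed form used in the proof. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma2 = 40, 6, 1.0
X = rng.standard_normal((n, p))
X0 = rng.standard_normal((n, p))   # independent test covariate matrix

def var_trace(lam):
    # (1/n) tr Cov(f_hat(X0) | X, X0) = (sigma^2 / n) tr(S S^T), where
    # S = X0 (X^T X + lam I)^{-1} X^T maps Y to predictions at X0.
    S = X0 @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
    return sigma2 / n * np.trace(S @ S.T)

lams = [0.1, 1.0, 10.0, 100.0, 1000.0]
vals = [var_trace(lam) for lam in lams]
```

As the proposition asserts, the computed traces are nonincreasing in $\lambda$.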
The proposition shows that adding regularization guarantees a decrease
in variance for Random-X prediction. The same is true of the variance
in Same-X prediction. However, as we show next, as the amount
of regularization increases these two variances decrease at different
rates, a phenomenon that manifests itself in the fact that the
Random-X prediction variance is guaranteed to be smaller than the
Same-X prediction variance for large enough $\lambda$.
\begin{thm}
\label{thm:ridge_ev_neg}
For \smash{$\hat{f}_n$} the ridge regression estimator, the integrated
Same-X prediction variance and integrated Random-X prediction variance
both approach zero as $\lambda \to \infty$. Moreover, the limit of
their ratio satisfies
$$ \lim_{\lambda\to \infty}
\frac{\mathbb{E}_{X,x_0} \mathrm{Var}(\hat{f}_n(x_0) \,|\, X,x_0)}
{\mathbb{E}_{X} \mathrm{Var}(\hat{f}_n(x_1) \,|\, X)}
= \frac{ \mathrm{tr}[\mathbb{E}(X^TX) \mathbb{E}(X^TX)]}
{\mathrm{tr}[\mathbb{E}(X^TX X^TX)]} \leq 1,
$$
the last inequality reducing to an equality if and only if $x
\sim Q$ is deterministic and has no variance.
\end{thm}
\begin{proof}
Again, as in the proof of the last proposition as well as Theorems
\ref{thm:least_squares_nonneg} and \ref{thm:ridge_eb_nonneg},
let $X_0 \in \mathbb{R}^{n\times p}$ be a test covariate matrix, and
write the integrated Same-X and Random-X prediction variances as
$$
\mathbb{E}_X \frac{1}{n} \mathrm{tr}\big[ \mathrm{Cov}\big(\hat{f}_n(X) \,|\, X\big)\big]
\quad \text{and} \quad
\mathbb{E}_{X,X_0} \frac{1}{n} \mathrm{tr}\big[ \mathrm{Cov}\big(\hat{f}_n(X_0) \,|\, X,X_0
\big)\big],
$$
respectively. From the arguments in the proof of Proposition
\ref{prop:ridge_var_decr}, letting $X^T X =
U D U^T$ be an eigendecomposition with $U \in \mathbb{R}^{p\times p}$
having orthonormal columns $u_1,\ldots,u_p$ and
$D=\mathrm{diag}(d_1^2,\ldots,d_p^2)$, we have
\begin{align*}
\lim_{\lambda \to \infty} \mathbb{E}_{X,X_0} \frac{1}{n} \mathrm{tr}\big[
\mathrm{Cov}\big(\hat{f}_n(X_0) \,|\, X,X_0 \big)\big]
&= \lim_{\lambda \to \infty}\mathbb{E}_{X,X_0}
\frac{\sigma^2}{n} \mathrm{tr} \bigg( X_0^T X_0 \sum_{i=1}^p u_iu_i^T
\frac{d_i^2}{(d_i^2 + \lambda)^2} \bigg) \\
&= \mathbb{E}_{X,X_0} \lim_{\lambda \to \infty}
\frac{\sigma^2}{n} \mathrm{tr} \bigg( X_0^T X_0 \sum_{i=1}^p u_iu_i^T
\frac{d_i^2}{(d_i^2 + \lambda)^2} \bigg) = 0,
\end{align*}
where in the second line we used the dominated convergence theorem to
exchange the limit and the expectation (since \smash{$\mathbb{E}_{X,X_0} \mathrm{tr}
[X_0^T X_0 \sum_{i=1}^p u_iu_i^T (d_i^2/(d_i^2+\lambda)^2)] \leq
\mathbb{E}_{X,X_0}\mathrm{tr}(X_0^T X_0 X^T X) < \infty$}).
Similar arguments show that the integrated Same-X prediction variance
also tends to zero.
Now we consider the limiting ratio of the integrated variances,
\begin{align*}
\lim_{\lambda \to \infty}
\frac{\mathbb{E}_{X,X_0} \mathrm{tr}[ \mathrm{Cov}(\hat{f}_n(X_0) \,|\, X,X_0)]}
{\mathbb{E}_X \mathrm{tr}[ \mathrm{Cov}(\hat{f}_n(X) \,|\, X)]} &=
\lim_{\lambda \to \infty} \frac{\mathbb{E}_{X,X_0} \mathrm{tr}[
X_0(X^T X + \lambda I)^{-1} X^T X
(X^T X + \lambda I)^{-1} X_0^T]}
{\mathbb{E}_X \mathrm{tr}[X(X^T X + \lambda I)^{-1} X^T X
(X^T X + \lambda I)^{-1} X^T]} \\
&= \lim_{\lambda \to \infty} \frac{\mathbb{E}_{X,X_0} \mathrm{tr}[
\lambda^2 X_0(X^T X + \lambda I)^{-1} X^T X
(X^T X + \lambda I)^{-1} X_0^T]}
{\mathbb{E}_X \mathrm{tr}[\lambda^2 X(X^T X + \lambda I)^{-1}
X^T X (X^T X + \lambda I)^{-1} X^T]} \\
&= \frac{\lim_{\lambda \to \infty} \mathbb{E}_{X,X_0} \mathrm{tr}[
\lambda^2 X_0(X^T X + \lambda I)^{-1} X^T X
(X^T X + \lambda I)^{-1} X_0^T]}
{\lim_{\lambda \to \infty} \mathbb{E}_X \mathrm{tr}[\lambda^2 X(X^T X +
\lambda I)^{-1} X^T X (X^T X + \lambda I)^{-1} X^T]},
\end{align*}
where the last line holds provided that the numerator and denominator
both converge to finite nonzero limits, as will be confirmed by
our arguments below. We study the numerator first. Noting that
\smash{$\lambda^2 (X^T X + \lambda I)^{-1}
X^T X (X^T X + \lambda I)^{-1}-X^T X$} has eigenvalues
$$
d_i^2 \bigg(\frac{\lambda^2}{(d_i^2 + \lambda)^2}-1\bigg), \;
i=1,\ldots,p,
$$
we have that \smash{$\lambda^2 (X^T X + \lambda I)^{-1}
X^T X (X^T X + \lambda I)^{-1} \to X^T X$} as $\lambda \to \infty$,
in (say) the operator norm, implying
\smash{$\mathrm{tr}[\lambda^2 X_0 (X^T X + \lambda I)^{-1}
X^T X (X^T X + \lambda I)^{-1} X_0^T] \to \mathrm{tr}(X_0 X^T X X_0^T)$} as
$\lambda \to \infty$. Hence
\begin{align*}
\lim_{\lambda \to \infty} \mathbb{E}_{X,X_0} \mathrm{tr}\big[
\lambda^2 X_0(X^T X + \lambda I)^{-1} &X^T X
(X^T X + \lambda I)^{-1} X_0^T\big] \\
&= \mathbb{E}_{X,X_0} \lim_{\lambda \to \infty} \mathrm{tr}\big[
\lambda^2 X_0(X^T X + \lambda I)^{-1} X^T X
(X^T X + \lambda I)^{-1} X_0^T\big] \\
&= \mathbb{E}_{X,X_0} \mathrm{tr}(X_0 X^T X X_0^T) \\
&= \mathrm{tr} \big[ \mathbb{E}_X(X^T X)\mathbb{E}_{X_0}(X_0^T X_0) \big] \\
&= \mathrm{tr} \big[ \mathbb{E}_X(X^T X) \mathbb{E}_X(X^T X)\big].
\end{align*}
Here, in the first line, we applied the dominated convergence theorem
as previously, in the third we used the independence of $X,X_0$, and
in the last we used the identical distribution of $X,X_0$. Similar
arguments lead to the conclusion for the denominator
$$
\lim_{\lambda \to \infty} \mathbb{E}_X \mathrm{tr}\big[
\lambda^2 X(X^T X + \lambda I)^{-1} X^T X
(X^T X + \lambda I)^{-1} X^T\big] =
\mathrm{tr} \big[ \mathbb{E}_X(X^T XX^T X)\big],
$$
and thus we have shown that
$$
\lim_{\lambda \to \infty}
\frac{\mathbb{E}_{X,X_0} \mathrm{tr}[ \mathrm{Cov}(\hat{f}_n(X_0) \,|\, X,X_0)]}
{\mathbb{E}_X \mathrm{tr}[ \mathrm{Cov}(\hat{f}_n(X) \,|\, X)]} =
\frac{\mathrm{tr} [ \mathbb{E}_X(X^T X) \mathbb{E}_X(X^T X)]}
{\mathrm{tr} [ \mathbb{E}_X(X^T XX^T X)]},
$$
as desired. To see that the ratio on the right-hand side is at most 1,
consider
$$
A = \mathbb{E}(X^TX X^TX) - \mathbb{E}(X^TX) \mathbb{E}(X^TX),
$$
which is a symmetric matrix whose trace is
$$
\mathrm{tr}(A) = \sum_{i,j=1}^p \mathrm{Var} \big((X^T X)_{i,j}\big) \geq 0.
$$
Furthermore, the trace is zero if and only if all summands are zero,
which occurs if and only if all components of $x \sim Q$ have no
variance.
\end{proof}
In words, the theorem shows that the excess variance \text{$V^+$}{} of ridge
regression approaches zero as $\lambda \to \infty$, but it does so
from the left (negative side) of zero. As we can have cases in which
the excess bias is very small or even zero (for example, a null model
like in our simulations below), we see that ${\rm ErrR}-{\rm ErrS}=\text{$V^+$}$ can be
negative for ridge regression with a large level of regularization;
this is a striking contrast to the behavior of this gap for least
squares, where it is always nonnegative.
We finish by demonstrating this result empirically, using a simple
simulation setup with $p=100$ covariates drawn from
$Q=N(0,I)$, and training and test sets each of size $n=300$. The
underlying regression function was $f(x)=\mathbb{E}(y|x)=0$, i.e., there was
no signal, and the errors were also standard normal. We drew
training and test data from this simulation setup, fit ridge
regression estimators to the training data at various levels of $\lambda$,
and calculated the ratio of the sample versions of the Random-X and
Same-X integrated variances. We repeated this 100 times, and averaged
the results. As shown in Figure \ref{fig:ridge_ev_neg}, for values of
$\lambda$ larger than about 250, the Random-X integrated variance
is smaller than the Same-X integrated variance, and consequently the
same is true of the prediction errors (as there is no
signal, the Same-X and Random-X integrated biases are both zero).
Also shown in the figure is the theoretical limiting ratio of the
integrated variances according to Theorem \ref{thm:ridge_ev_neg},
which in this case can be calculated from the properties of Wishart
distributions to be $n^2p/(n^2p+np^2+np) \approx 0.7481$, and is in
very good agreement with the empirical limiting ratio.
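The limiting ratio for this setup can also be checked directly by Monte Carlo: for $X$ with i.i.d.\ $N(0,1)$ entries, $\mathbb{E}(X^TX)=nI$, so the numerator $\mathrm{tr}[\mathbb{E}(X^TX)\mathbb{E}(X^TX)]$ equals $n^2p$ exactly, while the denominator $\mathrm{tr}[\mathbb{E}(X^TX X^TX)] = n^2p+np^2+np$ can be estimated by simulation. A sketch at smaller $n,p$ than in the figure, for speed:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, reps = 20, 5, 4000

# Estimate the denominator tr E[(X^T X)^2] by Monte Carlo.
den = 0.0
for _ in range(reps):
    X = rng.standard_normal((n, p))
    G = X.T @ X
    den += np.trace(G @ G) / reps

# Numerator is exact: E(X^T X) = n I, so tr[E(X^TX) E(X^TX)] = n^2 p.
ratio_mc = n**2 * p / den
ratio_theory = n**2 * p / (n**2 * p + n * p**2 + n * p)
```

With these values the theoretical ratio is $n^2/(n^2+np+n) \approx 0.769$, and the Monte Carlo estimate matches it to well within simulation error.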
\begin{figure}[htb]
\centering
\includegraphics[width=0.65\textwidth]{ridge_var.pdf}
\caption{\it Ratio of Random-X integrated variance to Same-X
integrated variance for ridge regression as we vary the tuning
parameter $\lambda$, in a simple problem setting with
$p=100$ covariates and training and test sets each of size $n=300$.
For values of $\lambda$ larger than about
250, the ratio is smaller than 1, meaning the Random-X prediction
variance is smaller than the Same-X prediction variance; as the
integrated bias is zero in both settings, the same ordering also
applies to the Random-X and Same-X prediction errors.
The plotted curve is an average of the computed ratios over
100 repetitions, and the
dashed lines (hard to see, because they are very close to the
aforementioned curve) denote 95\% confidence intervals over these
repetitions. The red line is the theoretical limiting ratio of
integrated variances due to Theorem \ref{thm:ridge_ev_neg}, in good
agreement with the simulation results.}
\label{fig:ridge_ev_neg}
\end{figure}
\section{Nonparametric regression estimators}
\label{sec:nonpar}
We present a brief study of the excess bias and variance of some
common nonparametric regression estimators. In Section
\ref{sec:discussion}, we give a high-level discussion of the view on
the gap between
Random-X and Same-X prediction errors from the perspective of
empirical process theory, which is a topic that is well-studied by
researchers in nonparametric regression.
\subsection{Reproducing kernel Hilbert spaces}
Consider an estimator \smash{$\hat{f}_n$} defined by the general-form
functional optimization problem
\begin{equation}
\label{eq:rkhs}
\hat{f}_n = \mathop{\mathrm{argmin}}_{g \in \mathcal{G}} \; \sum_{i=1}^n
\big(y_i-g(x_i)\big)^2 + J(g),
\end{equation}
where $\mathcal{G}$ is a function class and $J$ is a roughness penalty
on functions. Examples of estimators of this form include the (cubic)
smoothing spline estimator in $p=1$ dimensions, in which $\mathcal{G}$
is the space of all functions that are twice differentiable and whose
second derivative is square integrable, and \smash{$J(g)=\int
g''(t)^2 \,dt$}; and more broadly, reproducing kernel Hilbert space
or RKHS estimators (in an arbitrary dimension $p$), in which
$\mathcal{G}$ is an RKHS and \smash{$J(g)=\|g\|_{\mathcal{G}}$} is the
corresponding RKHS norm.
Provided that \smash{$\hat{f}_n$} defined by \eqref{eq:rkhs} is a linear
smoother, meaning that \smash{$\hat{f}_n(x)=s(x)^T Y$} for a weight
function $s:\mathbb{R}^p \to \mathbb{R}^n$ (which can and will generally also depend
on $X$), we now show that the excess bias of \smash{$\hat{f}_n$} is always
nonnegative. We note that this result applies to smoothing
splines and RKHS estimators, since these are linear smoothers; it also
covers ridge regression, and thus generalizes the result in Theorem
\ref{thm:ridge_eb_nonneg}.
\begin{thm}
\label{thm:rkhs_eb_nonneg}
For \smash{$\hat{f}_n$} a linear smoother defined by a problem of the form
\eqref{eq:rkhs}, we have $\text{$B^+$} \geq 0$.
\end{thm}
\begin{proof}
Let us introduce a test covariate matrix
$X_0 \in \mathbb{R}^{n\times p}$ and associated response vector
$Y_0\in\mathbb{R}^n$, and write the excess bias in \eqref{eq:eb}
as
$$
\text{$B^+$} = \mathbb{E}_{X,X_0} \frac{1}{n} \big\|
\mathbb{E}\big(\hat{f}_n(X_0) \,|\, X,X_0 \big)-f(X_0)\big\|_2^2 -
\mathbb{E}_X \frac{1}{n} \big\|
\mathbb{E}\big(\hat{f}_n(X) \,|\, X \big)-f(X)\big\|_2^2.
$$
Writing \smash{$\hat{f}_n(x)=s(x)^T Y$} for a weight function
$s:\mathbb{R}^p\to\mathbb{R}^n$, let $S(X) \in \mathbb{R}^{n\times n}$ be the smoother matrix
with rows $s(x_1)^T,\ldots,s(x_n)^T$. Thus \smash{$\hat{f}_n(X)=S(X) Y$},
and by linearity, \smash{$\mathbb{E}(\hat{f}_n(X) \,|\, X )=S(X)f(X)$}. This in
fact means that we can express \smash{$g_n=\mathbb{E}(\hat{f}_n|X )$}, the
function defined by $g_n(x)=s(x)^T f(X)$, as the solution of an
optimization problem of the form \eqref{eq:rkhs},
$$
g_n = \mathop{\mathrm{argmin}}_{g \in \mathcal{G}} \; \|f(X)-g(X)\|_2^2+ J(g),
$$
where we have rewritten the loss term in a more convenient notation.
Analogously, if we denote by \smash{$\hat{f}_{0n}$} the estimator of the
form \eqref{eq:rkhs}, but fit to the test data $X_0,Y_0$ instead of
the training data $X,Y$, and \smash{$g_{0n}=\mathbb{E}(\hat{f}_{0n}|X_0)$},
then
$$
g_{0n} = \mathop{\mathrm{argmin}}_{g \in \mathcal{G}} \; \|f(X_0)-g(X_0)\|_2^2
+ J(g).
$$
By virtue of optimality of \smash{$g_{0n}$} for the problem in
the last display, we have
$$
\|g_n(X_0) - f(X_0)\|_2^2 + J(g_n) \geq
\|g_{0n}(X_0) - f(X_0)\|_2^2 + J(g_{0n})
$$
and taking an expectation over $X,X_0$ gives
\begin{align*}
\mathbb{E}_{X,X_0} \Big[\|g_n(X_0) - f(X_0)\|_2^2 + J(g_n)\Big]
&\geq \mathbb{E}_{X_0} \Big[\|g_{0n}(X_0) - f(X_0)\|_2^2 +
J(g_{0n})\Big] \\
&= \mathbb{E}_X \Big[\|g_n(X) - f(X)\|_2^2 + J(g_n)\Big],
\end{align*}
where in the equality step we used the fact that $X,X_0$ are identical
in distribution. Cancelling out the common term of \smash{$\mathbb{E}_X
J(g_n)$} from the first and third expressions proves the
result, because \smash{$\mathbb{E}(\hat{f}_n(X_0) \,|\, X,X_0) =
g_n(X_0)$} and \smash{$\mathbb{E}(\hat{f}_n(X) \,|\,
X)=g_n(X)$}.
\end{proof}
\subsection{$k$-nearest-neighbors regression}
Consider \smash{$\hat{f}_n$} the $k$-nearest-neighbors or kNN regression
estimator, defined by
$$
\hat{f}_n(x) = \frac{1}{k} \sum_{i \in N_k(x)} y_i,
$$
where $N_k(x)$ returns the indices of the $k$ nearest points among
$x_1,\ldots,x_n$ to $x$. It is immediate that the excess variance of
kNN regression is zero.
\begin{prop}
\label{prop:knn_ev_zero}
For \smash{$\hat{f}_n$} the kNN regression estimator, we have $\text{$V^+$}=0$.
\end{prop}
\begin{proof}
Simply compute
$$
\mathrm{Var}\big( \hat{f}_n(x_1) \,|\, X \big) = \frac{\sigma^2}{k},
$$
by independence of $y_1,\ldots,y_n$, and hence the points in the
nearest neighbor set $N_k(x_1)$, conditional on $X$. Similarly,
$$
\mathrm{Var}\big( \hat{f}_n(x_0) \,|\, X,x_0 \big) = \frac{\sigma^2}{k}.
$$
\end{proof}
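The computation in the proof is easy to confirm by simulation: holding $X$ fixed and redrawing the noise, the variance of the kNN fit at $x_1$ is $\sigma^2/k$. A sketch (we take $f\equiv 0$, which loses no generality since the conditional variance does not depend on $f$):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, sigma, reps = 50, 7, 2.0, 20000
x = rng.uniform(size=n)                      # fixed covariates
idx = np.argsort(np.abs(x - x[0]))[:k]       # N_k(x_1), includes x_1 itself

# With f = 0, the response is pure noise; redraw it for each repetition
# and compute the kNN fit at x_1 each time.
Y = sigma * rng.standard_normal((reps, n))
var_hat = Y[:, idx].mean(axis=1).var()       # empirical Var(f_hat(x_1) | X)
```

The empirical variance agrees with $\sigma^2/k$ up to Monte Carlo error.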
On the other hand, the excess bias is not easily computable, and is
not covered by Theorem \ref{thm:rkhs_eb_nonneg}, since kNN
cannot be written as an estimator of the form \eqref{eq:rkhs} (though
it is a linear smoother). The next result sheds some light on the
nature of the excess bias.
\begin{prop}
\label{prop:knn_eb_exact}
For \smash{$\hat{f}_n$} the kNN regression estimator, we have
$$
\text{$B^+$} = B_{n,k} - \bigg(1-\frac{1}{k}\bigg)^2 B_{n-1,k-1},
$$
where $B_{n,k}$ denotes the integrated Random-X prediction bias of
the kNN estimator fit to a training set of size $n$, and with tuning
parameter (number of neighbors) $k$.
\end{prop}
\begin{proof}
Observe
$$
\Big(\mathbb{E}\big(\hat{f}_n(x_0)\,|\,X,x_0\big)-f(x_0)\Big)^2=
\Bigg( \frac{1}{k} \sum_{i \in N_k(x_0)}
\big(f(x_i)-f(x_0)\big)\Bigg)^2,
$$
and by definition,
\smash{$\mathbb{E}_{X,x_0}[\mathbb{E}(\hat{f}_n(x_0)\,|\,X,x_0)-f(x_0)]^2=B_{n,k}$}.
Meanwhile
$$
\Big(\mathbb{E}\big(\hat{f}_n(x_1)\,|\,X\big)-f(x_1)\Big)^2=
\Bigg( \frac{1}{k} \sum_{i \in N_k(x_1)}
\big(f(x_i)-f(x_1)\big)\Bigg)^2
= \Bigg( \frac{1}{k} \sum_{i \in N_{k-1}^{-1}(x_1)}
\big(f(x_i)-f(x_1)\big)\Bigg)^2,
$$
where \smash{$N_{k-1}^{-1}(x_1)$} gives the indices of the $k-1$
nearest points among $x_2,\ldots,x_n$ to $x_1$ (together with $x_1$
itself, which is trivially one of its own $k$ nearest neighbors and
contributes zero to the sum, this set equals $N_k(x_1)$). Now notice
that $x_1$ plays the role
of the test point $x_0$ in the last display, and therefore,
\smash{$\mathbb{E}_X[\mathbb{E}(\hat{f}_n(x_1)\,|\,X)-f(x_1)]^2=((k-1)/k)^2B_{n-1,k-1}$}.
This proves the result.
\end{proof}
The above proposition suggests that, for moderate values of $k$, the
excess bias in kNN regression is likely positive.
We are comparing the integrated Random-X bias of a kNN model with $n$
training points and $k$ neighbors to that of a model with $n-1$ points
and $k-1$ neighbors; for large $n$ and moderate $k$, it seems that the
former should be larger than the latter, and in addition, the
factor of $(1-1/k)^2$ multiplying the latter term makes it even more
likely that the difference \smash{$B_{n,k} -
(1-1/k)^2 B_{n-1,k-1}$} is positive. Rephrased, using
the zero excess variance result of Proposition
\ref{prop:knn_ev_zero}: the gap in Random-X and Same-X
prediction errors, \smash{${\rm ErrR}-{\rm ErrS}=B_{n,k} -
(1-1/k)^2 B_{n-1,k-1}$}, is likely positive for large $n$ and
moderate $k$. Of course, this is not a formal proof; aside from the
choice of $k$, the shape of the underlying mean function
$f(x)=\mathbb{E}(y|x)$ obviously plays an important role here too. As a
concrete problem setting, we might try analyzing the Random-X bias
$B_{n,k}$ for $f$ Lipschitz and a scaling for $k$ such
that $k \to \infty$ but $k/n \to 0$ as $n \to \infty$, e.g., \smash{$k
\asymp \sqrt{n}$}, which ensures consistency of kNN. Typical
analyses provide upper bounds on the kNN bias in this problem
setting (e.g., see \citealt{gyorfi2002distribution}), but a more
refined analysis would be needed to compare $B_{n,k}$ to
$B_{n-1,k-1}$.
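The heuristic above can be probed by simulation. The sketch below (our own illustration; the one-dimensional setting with $f(x)=x^2$ and uniform covariates is an arbitrary choice) Monte Carlo estimates $B_{n,k}$ and $B_{n-1,k-1}$ directly from their definitions, then forms the excess bias using the $((k-1)/k)^2$ factor derived in the proof; in this setting it comes out positive.

```python
import numpy as np

rng = np.random.default_rng(1)
f = lambda x: x ** 2        # an assumed smooth mean function on [0, 1]

def knn_bias_sq(n, k, reps=5000):
    # Monte Carlo estimate of B_{n,k}: average, over fresh draws of (X, x0),
    # of the squared conditional bias of the kNN fit at x0 (no noise needed,
    # since the conditional bias does not involve the errors).
    total = 0.0
    for _ in range(reps):
        X = rng.uniform(0.0, 1.0, n)
        x0 = rng.uniform(0.0, 1.0)
        neigh = np.argsort(np.abs(X - x0))[:k]
        total += (f(X[neigh]).mean() - f(x0)) ** 2
    return total / reps

n, k = 100, 10
B_nk = knn_bias_sq(n, k)
B_n1k1 = knn_bias_sq(n - 1, k - 1)
B_plus = B_nk - ((k - 1) / k) ** 2 * B_n1k1   # the factor from the proof
print(B_nk, B_n1k1, B_plus)
```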
\section{Discussion}
\label{sec:discussion}
We have proposed and studied a division of Random-X prediction error
into components: the irreducible error $\sigma^2$, the traditional
(Fixed-X or Same-X) integrated bias $B$ and integrated variance $V$
components, and our newly defined excess bias \text{$B^+$}{} and excess
variance \text{$V^+$}{} components, such that $B+\text{$B^+$}$ gives the Random-X
integrated bias and
$V+\text{$V^+$}$ the Random-X integrated variance. For least squares
regression, we were able to quantify
\text{$V^+$}{} exactly when the covariates are normal and asymptotically when
they are drawn from a linear transformation of a product distribution,
leading to our definition of {\rm RCp}. To account for unknown error
variance $\sigma^2$, we defined \smash{\text{$\widehat{{\rm RCp}}$}} based on the usual
plug-in estimate, which turns out to be asymptotically identical to
GCV, giving this classic method a
novel interpretation. To account for \text{$B^+$}{} (when $\sigma^2$ is known
and the distribution $Q$ of the covariates is well-behaved),
we defined \text{{\rm RCp}$^+$}{}, by leveraging a Random-X bias estimate implicit to
OCV. We also briefly considered methods beyond least squares, proving
that \text{$B^+$}{} is nonnegative in all settings considered, while \text{$V^+$}{}
can become negative in the presence of heavy regularization.
We reflect on some issues surrounding our findings and possible
directions for future work.
\paragraph{Ability of \smash{\text{$\widehat{{\rm RCp}}$}{}} to account for bias.}
An intriguing phenomenon that we observe is the ability of
\smash{\text{$\widehat{{\rm RCp}}$}}/Sp and its close (asymptotic) relative GCV to deal to
some extent with \text{$B^+$}{} in estimating Random-X prediction error,
through the inflation it performs on the squared training residuals.
For GCV in particular, where recall
\smash{$\mathrm{GCV} = \mathrm{RSS}/(n(1-\gamma)^2)$}, we see that
this inflation takes a simple form: if the linear model is biased, then
the squared bias component in each residual is
inflated by $1/(1-\gamma)^2$. Comparing this to the inflation that OCV
performs, which is \smash{$1/(1-h_{ii})^2$}, on the $i$th residual,
for $i=1,\ldots,n$, we can interpret GCV as inflating the bias for
each residual by some ``averaged'' version of the elementwise factors
used by OCV. As OCV provides an almost-unbiased estimate of \text{$B^+$}{} for
Random-X prediction, GCV can get close when the diagonal elements
$h_{ii}$, $i=1,\ldots,n$ do not vary too wildly. When they do vary
greatly, GCV can fail to account for \text{$B^+$}, as in the circled region in
Figure \ref{fig:relative_low}.
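The two inflation rules are easy to compare directly on simulated data. The sketch below (our own illustration, for a well-conditioned Gaussian design where the leverages are homogeneous) computes GCV via $\mathrm{RSS}/(n(1-\gamma)^2)$ and OCV via its shortcut formula, and the two nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, sigma = 200, 5, 1.0
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
y = X @ beta + sigma * rng.standard_normal(n)

H = X @ np.linalg.solve(X.T @ X, X.T)     # hat matrix of least squares
h = np.diag(H)                            # leverages h_ii
r = y - H @ y                             # residuals
gamma = np.trace(H) / n                   # gamma = p / n here

gcv = (r ** 2).sum() / (n * (1.0 - gamma) ** 2)   # RSS / (n (1 - gamma)^2)
ocv = np.mean((r / (1.0 - h)) ** 2)               # OCV shortcut formula
print(gcv, ocv)   # nearly equal when the leverages do not vary wildly
```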
\paragraph{Alternative bias-variance decompositions.}
The integrated terms we defined are expectations of conditional
bias and variance terms, where we conditioned on both training and
testing covariates $X,x_0$. One could also consider other
conditioning schemes, leading to different decompositions. An
interesting option would be to condition on the prediction point $x_0$
only and calculate the bias and variance unconditional on the training
points $X$ before integrating, as in
\smash{$\mathbb{E}_{x_0}(\mathbb{E}(\hat{f}_n(x_0)\,|\,x_0)-f(x_0))^2$} and
\smash{$\mathbb{E}_{x_0}(\mathrm{Var}(\hat{f}_n(x_0)\,|\,x_0))$} for these alternative
notions of Random-X bias and variance, respectively. It is easy to
see that this would cause the bias (and thus excess bias) to
decrease and variance (and thus excess variance) to increase.
However, it is not clear to us that computing or bounding such new
definitions of (excess) bias and (excess) variance would be possible
even for least squares regression. Investigating the tractability of
this approach and any insights it might offer is an interesting topic
for future study.
\paragraph{Alternative definitions of prediction error.}
The overall goal in our work was to estimate the prediction error,
defined as
\smash{${\rm ErrR}=\mathbb{E}_{X,Y,x_0,y_0}(y_0-\hat{f}_n(x_0))^2$}, the squared
error integrated over all of the random variables available in
training and testing. Alternative definitions have been suggested by
some authors. \citet{breiman1992submodel}
generalized the Fixed-X setting in a manner that led them to define
\smash{$\mathbb{E}_{Y,x_0,y_0}[(y_0-\hat{f}_n(x_0))^2 \,|\, X]$} as the
prediction error quantity of interest, which can be interpreted as the
Random-X prediction error of a Fixed-X model.
\citet{hastie2009elements} emphasized the importance of the
quantity
\smash{$\mathbb{E}_{x_0,y_0}[(y_0-\hat{f}_n(x_0))^2 \,|\, X,Y]$}, which is the
out-of-sample error of the specific model we have trained on the given
training data $X,Y$. Of these two alternate definitions, the second
one is more interesting in our opinion, but investigating it
rigorously requires a different approach than what we have developed
here.
\paragraph{Alternative types of cross-validation.}
Our exposition has concentrated on comparing OCV to generalized
covariance penalty methods. We have not discussed other
cross-validation approaches, in particular, the K-fold
cross-validation (KCV) method with $K \ll n$ (e.g., $K=5$ or 10). A
supposedly well-known problem with OCV is that its estimates of
prediction error have very high variance; we indeed
observe this phenomenon in our simulations (and for least squares
estimation, the analytical form of OCV clarifies the source of this
high variance). There are some claims in the
literature that KCV can have lower variance than OCV
(\citealt{hastie2009elements}, and others), and should be
considered as the preferred CV variant for estimation of Random-X
prediction error. Systematic investigations of this issue for least
squares regression such as
\citet{burman1989comparative,arlot2010survey} actually reach the
opposite conclusion---that high variance is further compounded by
reducing $K$. Our own simulations also support this view (results not
shown), therefore we do not consider KCV to be an important benchmark
to consider beyond OCV.
\paragraph{Model selection for prediction.}
Our analysis and simulations have focused on the accuracy of
prediction error estimates provided by various
approaches. We
have not considered their utility for model selection, i.e., for
identifying the best predictive model, which differs from model
evaluation in an important way. A method can do well in the model
selection task even when it is inaccurate
or biased for model evaluation, as long as such inaccuracies are
consistent across different
models and do not affect its ability to select the better predictive
model. Hence the correlation of model evaluations using the same
training data across different models plays a central role in model
selection performance. An investigation of the correlation between
model evaluations that each of the approaches we considered here
creates is of major interest, and is left to future work.
\paragraph{Semi-supervised settings.}
Given the important role that the marginal distribution $Q$ of $x$
plays in evaluating Random-X prediction error (as expressed,
e.g., in Theorems \ref{thm:least_squares_normal} and
\ref{thm:least_squares_asymp}), it is of interest to consider
situations where, in addition to the training data, we have large
quantities of additional observations with $x$ only and no response
$y$. In the machine learning literature this situation is often
considered under the names semi-supervised learning or transductive
learning. Such data could be used, e.g., to directly estimate the
excess variance from expressions like \eqref{eq:integrated_var}.
\paragraph{General view from empirical process theory.}
This paper was focused in large part on estimating or
bounding the excess bias and variance in specific problem settings,
which led to estimates or bounds on the gap in Random-X and
Same-X prediction error, as ${\rm ErrR}-{\rm Err}=\text{$B^+$}+\text{$V^+$}$. This gap is indeed
a familiar concept to those well-versed in the theory of nonparametric
regression, and roughly speaking, standard results from empirical
process theory suggest that we should in general expect ${\rm ErrR}-{\rm ErrS}$
to be small, i.e., much smaller than either of {\rm ErrR}{} or {\rm ErrS}{} to
begin with. The connection is as follows. Note that
\begin{align*}
{\rm ErrR} - {\rm ErrS} &= \mathbb{E}_{X,Y,x_0} \big( f(x_0) - \hat{f}_n(x_0)\big)^2 -
\mathbb{E}_{X,Y} \bigg[\frac{1}{n} \sum_{i=1}^n \big( f(x_i) -
\hat{f}_n(x_i)\big)^2\bigg] \\
&= \mathbb{E}_{X,Y} \Big[ \| f-\hat{f}_n\|_{L_2(Q)}^2 -
\| f-\hat{f}_n\|_{L_2(Q_n)}^2 \Big],
\end{align*}
where we are using standard notation from nonparametric regression for
``population'' and ``empirical'' norms,
\smash{$\|\cdot\|_{L_2(Q)}$} and \smash{$\|\cdot\|_{L_2(Q_n)}$},
respectively. For an appropriate function class $\mathcal{G}$,
empirical process theory can be used to control the deviations between
\smash{$\|g\|_{L_2(Q)}$} and
\smash{$\|g\|_{L_2(Q_n)}$}, uniformly over all functions $g \in
\mathcal{G}$. Such uniformity is important, because it gives us
control on the difference in population and empirical norms for the
(random) function \smash{$g=f-\hat{f}_n$} (provided of course this
function lies in $\mathcal{G}$).
This theory applies to finite-dimensional $\mathcal{G}$ (e.g., linear
functions, which would be relevant to the case when $f$ is assumed to
be linear and \smash{$\hat{f}_n$} is chosen to be linear), and even to
infinite-dimensional classes $\mathcal{G}$, provided we have
some understanding of the entropy or Rademacher complexity of
$\mathcal{G}$ (e.g., this is true of Lipschitz functions, which would
be relevant to the analysis of $k$-nearest-neighbors regression or
kernel estimators).
Under appropriate conditions, we typically find
\smash{$\mathbb{E}_{X,Y}|\|f-\hat{f}_n\|_{L_2(Q)}^2-\|f-\hat{f}_n\|_{L_2(Q_n)}^2|
=O(C_n)$}, where $C_n$ is the
$L_2(Q)$ convergence rate of \smash{$\hat{f}_n$}
to $f$.
This is even true in an asymptotic setting in which $p$ grows with
$n$ (so $C_n$ here gets replaced by $C_{n,p}$), but such
high-dimensional results usually require more restrictions on the
distribution $Q$ of covariates.
The takeaway message: in most cases where \smash{$\hat{f}_n$} is
consistent with rate $C_n$, we should expect to see the gap being
\smash{${\rm ErrR}-{\rm ErrS} =
O(C_n)$}, whereas ${\rm ErrR},{\rm ErrS} \geq \sigma^2$, so the difference in
Same-X and Random-X prediction error is quite small (as small as the
Same-X and Random-X risk) compared to these prediction errors
themselves; said differently, we should expect to see
$\text{$B^+$},\text{$V^+$}$ being of the same order (or smaller than) $B,V$.
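The displayed identity lends itself to direct simulation. Under the assumption (ours, purely for illustration) of a linear truth and standard normal covariates, the population norm reduces to $\|\beta-\hat\beta\|_2^2$, and the following sketch Monte Carlo estimates the gap ${\rm ErrR}-{\rm ErrS}$ for least squares; it is positive, so the Same-X error is optimistic.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, sigma, reps = 100, 10, 1.0, 2000
beta = np.ones(p)

gaps = np.empty(reps)
for rep in range(reps):
    X = rng.standard_normal((n, p))
    y = X @ beta + sigma * rng.standard_normal(n)
    bhat = np.linalg.lstsq(X, y, rcond=None)[0]
    d = bhat - beta
    pop = d @ d                     # ||f - fhat||^2 under Q = N(0, I_p)
    emp = np.mean((X @ d) ** 2)     # ||f - fhat||^2 under the empirical Q_n
    gaps[rep] = pop - emp

print(gaps.mean())   # estimate of E[ ||.||^2_{L2(Q)} - ||.||^2_{L2(Qn)} ]
```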
It is worth pointing out that several interesting aspects of our study
really lie outside what can be inferred from empirical process theory.
One aspect to mention is the precision of the results: in some
settings we can characterize $\text{$B^+$},\text{$V^+$}$ individually, but (as
described above), empirical process theory would only provide a handle
on their sum. Moreover, for least squares regression estimators, with
$p/n$ converging to a nonzero constant, we are able to characterize
the exact asymptotic excess variance under some conditions on $Q$
(essentially, requiring $Q$ to be something like a rotation of a
product distribution), in Theorem \ref{thm:least_squares_asymp}; note
that this is a problem setting in which least squares is not
consistent, and could not be treated by standard results from empirical
process theory.
Lastly, empirical process theory tells us nothing about the sign of
$$
\| f-\hat{f}_n\|_{L_2(Q)}^2 - \| f-\hat{f}_n\|_{L_2(Q_n)}^2,
$$
or its expectation under $P$ (which equals ${\rm ErrR}-{\rm ErrS}=\text{$B^+$}+\text{$V^+$}$, as
described above). This 1-bit quantity is of interest to us,
since it tells us if the Same-X (in-sample) prediction error is
optimistic compared to the Random-X (out-of-sample) prediction error.
Theorems \ref{thm:least_squares_nonneg}, \ref{thm:ridge_eb_nonneg},
\ref{thm:ridge_ev_neg}, \ref{thm:rkhs_eb_nonneg} and Propositions
\ref{prop:knn_ev_zero}, \ref{prop:knn_eb_exact} all pertain to this
quantity.
\bibliographystyle{plainnat}
% Next document: "Linear Arrangement of Halin Graphs" (https://arxiv.org/abs/1509.08145)
% Abstract: We study the Optimal Linear Arrangement (OLA) problem of Halin
% graphs, one of the simplest classes of non-outerplanar graphs. We present
% several properties of OLA of general Halin graphs. We prove a lower bound
% on the cost of OLA of any Halin graph, and define classes of Halin graphs
% for which the cost of OLA matches this lower bound. We show that for these
% classes of Halin graphs, OLA can be computed in O(n log n), where n is the
% number of vertices.
\section{Introduction}
\label{sect:intro}
\input{introduction}
\section{Preliminaries}
\label{sect:prelim}
\input{preliminaries}
\section{Some Properties of OLA of Halin Graphs}
\label{sect:OLA-halin-graphs}
\input{OLA-halin-graphs}
\section{Halin Graphs With Polynomially Solvable LA Algorithm}
\label{sect:halin-solvable-case}
\input{halin-solvable-case}
\section{Conclusion and Future Work}
\label{sect:future}
\input{future}
\Hide
{\footnotesize
\printbibliography
}
{\footnotesize
\bibliographystyle{plainurl}
\subsection{Linear Arrangement of Trees}
\label{sec:LA-trees}
As one of the first non-trivial results, the OLA of trees was shown to be polynomially solvable in~\cite{goldberg1976algorithm},
and more efficient algorithms were later presented in~\cite{shiloach1979, chung1984}.
In this section we present some well-known properties of OLA of trees which
simplify the understanding of the linear arrangement of Halin graphs
in the rest of the report. For more details, one can refer to~\cite{chung1984}.
\begin{property}
\label{prop:tree-end-nodes}
Given an OLA $\varphi^ *$ for a tree $T=(V,E)$, the vertices assigned labels $1$ and $n$ are leaf nodes (the two extreme vertices are leaves of $T$).
\end{property}
\begin{definition}{Spinal path}
\label{def:tree-spine}
Given a layout $\varphi$ for a tree $T=(V,E)$, and vertices $v,u \in V$
where $\varphi(v) = 1$ and $\varphi(u) = n$, we call the path $P = (w_1 = v, w_2, \dots, w_l = u)$ connecting
$v$ and $u$ the \textbf{spine} of the tree $T$ corresponding to $\varphi$.
\end{definition}
\begin{property}
\label{prop:tree-monotone-path}
Given an OLA $\varphi^ *$ for a tree $T=(V,E)$ with spinal path $P = (w_1 = v, w_2, \dots, w_l = u)$, it is the case that $\forall 1 \le i < l: \varphi^ *(w_i) < \varphi^ *(w_{i+1})$. In other words, the function $\varphi^ *$ is monotonic along the path $P$.
\end{property}
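Both properties can be checked by brute force on a small instance. The sketch below (our own illustration, on an arbitrary 7-vertex caterpillar; not from the report) enumerates all $7!$ layouts, keeps the optimal ones, and verifies Properties \ref{prop:tree-end-nodes} and \ref{prop:tree-monotone-path} for each.

```python
import itertools

edges = [(1, 2), (2, 3), (3, 4), (4, 5), (2, 6), (4, 7)]  # a 7-vertex caterpillar
verts = sorted({v for e in edges for v in e})
deg = {v: sum(v in e for e in edges) for v in verts}
adj = {v: [u for e in edges if v in e for u in e if u != v] for v in verts}

def tree_path(a, b):
    # the unique a-b path in the tree, via DFS with parent pointers
    parent, stack = {a: None}, [a]
    while stack:
        x = stack.pop()
        for u in adj[x]:
            if u not in parent:
                parent[u] = x
                stack.append(u)
    p = [b]
    while parent[p[-1]] is not None:
        p.append(parent[p[-1]])
    return p[::-1]

def cost(phi):
    return sum(abs(phi[u] - phi[v]) for u, v in edges)

layouts = [{v: i + 1 for i, v in enumerate(perm)}
           for perm in itertools.permutations(verts)]
best = min(cost(phi) for phi in layouts)
for phi in layouts:
    if cost(phi) == best:
        lo = min(phi, key=phi.get)               # the vertex with label 1
        hi = max(phi, key=phi.get)               # the vertex with label n
        assert deg[lo] == 1 and deg[hi] == 1     # extremes are leaves
        spine = [phi[w] for w in tree_path(lo, hi)]
        assert spine == sorted(spine)            # labels monotone on the spine
print("optimal cost:", best)
```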
\begin{definition}{Spinal rooted subtree and anchored branches}
\label{def:anchored-subtree}
Given a layout $\varphi$ for a tree $T=(V,E)$ and spinal path $P = (w_1 = v, w_2, \dots, w_l = u)$,
removing all edges of $P$ leaves us with a set of subtrees $T_1, T_2, \ldots, T_l$ respectively rooted at
$w_1, w_2, \dots, w_l$. Removing a spinal vertex $w_i$ from $T_i$, where $d_T(w_i) = k > 2$, results in a set of branches $B_{i,1}, B_{i,2},\ldots,B_{i,k-2}$. Each branch $B_{i,j}$ is anchored at a vertex $v_j$ which is adjacent to $w_i$.
\end{definition}
\begin{property}
\label{prop:tree-subtrees-labeled-separately}
Consider a tree $T=(V,E)$ and its OLA $\varphi^ *$ and the corresponding spinal path $P = (w_1 = v, w_2, \dots, w_l = u)$.
Removing all the edges of $P$ results in a set of $l$ subtrees $T_1, \ldots, T_l$, respectively rooted at $w_1, \ldots, w_l$.
Then, based on $\varphi^ *$, for a fixed $i$, the vertices of $T_i$ are labeled by consecutive integers. Formally speaking:
\[
\forall 1 < i < l, \forall u \in T_{i-1}, v \in T_{i}, w \in T_{i+1} \Rightarrow \varphi^ *(u) < \varphi^ *(v) < \varphi^ *(w)
\]
Moreover $\varphi^ *$ restricted to $V(T_i)$ (denoted by $\varphi^ *_i$) is optimal for $T_i$.
\end{property}
\begin{comment}
\begin{property}
\label{prop:tree-branches-labeled-separately}
Consider a tree $T=(V,E)$ and the corresponding OLA $\varphi^ *$ and subtrees $T_1, \ldots, T_l$ after removing the edges of spinal path $P = (w_1 = v, w_2, \dots, w_l = u)$. Let $\Set{B_{i,1},\dots,B_{i,k-2}}$ be the set of \emph{branches} connected to a spinal vertex $w_i$
with degree $k > 2$. The vertices of each branch $B_{i,j}$ are labeled with continuous integers.
\end{property}
\end{comment}
% Next document: "On the 1-2-3-conjecture" (https://arxiv.org/abs/1205.3266)
% Abstract: A k-edge-weighting of a graph G is a function w: E(G)->{1,2,...,k}.
% An edge-weighting naturally induces a vertex coloring c, where for every
% vertex v in V(G), c(v) is the sum of weights of the edges adjacent to v. If
% the induced coloring c is a proper vertex coloring, then w is called a
% vertex-coloring k-edge weighting (VCk-EW). Karonski et al. (J. Combin.
% Theory Ser. B 91 (2004) 151-157) conjectured that every graph admits a
% VC3-EW; this conjecture is known as the 1-2-3-conjecture. In this paper,
% first, we study the vertex-coloring edge-weighting of the cartesian product
% of graphs. Among other results, we prove that the 1-2-3-conjecture holds for
% some infinite classes of graphs. Moreover, we explore some properties of a
% graph to admit a VC2-EW.
\section{\hspace*{-.6cm}. Introduction}
In this paper, we consider finite and simple graphs.
A \textit{$r$-vertex coloring} $c$ of $G$ is a function $c:V(G)\rightarrow\{1,2,\ldots ,r\}$. The coloring $c$ is called a \textit{proper vertex coloring} if for every two adjacent vertices $u$ and $v$, $c(u)\neq c(v)$. A graph $G$ is \textit{$r$-colorable} if $G$ has a proper $r$-vertex coloring.
A \textit{$k$-edge-weighting} of a graph $G$ is a function $w:E(G)\rightarrow\{1,2,\ldots,k\}$. An edge-weighting naturally induces a vertex coloring $c$, where for every $v\in~V(G)$, $c(v)=\sum_{e\sim v}w(e)$. The notation $e\sim v$ indicates that $e$ is an edge incident to the vertex $v$. If the induced coloring $c$ is a proper vertex coloring, then $w$ is called a \textit{vertex-coloring $k$-edge weighting} (VC$k$-EW). The minimum $k$ for which $G$ has a VC$k$-EW is denoted by $\mu(G)$.
Obviously for a graph $G$ with components $G_{1}, G_{2}, \ldots, G_{t}$; $\mu(G)=\max\{\mu(G_{i})\,|\, 1\leq i\leq t\}$.
Also, note that the vertex-coloring $k$-edge-weighting is defined for graphs with $\Delta\geq 2$, where $\Delta$ is the maximum degree of vertices in $G$. Thus, we consider connected graphs with at least three vertices.
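As a concrete illustration of these definitions (our own snippet, not from the paper), the following computes the induced coloring of a $2$-edge-weighting of the path $P_5$ and checks that it is proper, certifying $\mu(P_5)\leq 2$:

```python
edges = [(1, 2), (2, 3), (3, 4), (4, 5)]          # the path P5
w = {(1, 2): 1, (2, 3): 1, (3, 4): 2, (4, 5): 2}  # a candidate 2-edge-weighting

c = {v: 0 for v in range(1, 6)}
for (u, v), wt in w.items():
    c[u] += wt            # c(v) = sum of the weights of edges incident to v
    c[v] += wt

is_proper = all(c[u] != c[v] for u, v in edges)
print(c, is_proper)       # c = {1: 1, 2: 2, 3: 3, 4: 4, 5: 2}; proper
```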
Karo\'{n}ski et al. in \cite{123} introduced the concept of vertex-coloring $k$-edge weighting and they proposed the following conjecture.
\begin{conj}{\rm\cite{123}} {\rm($1$-$2$-$3$-conjecture)}
For every connected graph $G$ with at least three vertices, $\mu(G)\leq 3$.
\end{conj}
Addario-Berry et al. \cite{vc 30-ew} showed that for every connected graph with at least three vertices, $\mu(G)\leq 30$. Then Addario-Berry et al. \cite{vc 16-ew} improved the bound to 16. Later, Wang and Yu \cite{vc 13-ew} improved this bound to 13. Recently, Kalkowski et al. \cite{vc 5-ew} showed that for every connected graph $G$ with at least three vertices, $\mu(G)\leq 5$.
It is proved that for every $3$-colorable graph $G$, $\mu(G)\leq3$ \cite{123}. The analogous statement for $2$-colorable graphs is not true in general. The only known families of bipartite graphs with $\mu(G)=3$ are the cycles $C_{4k+2}$ and the theta graphs $\theta(1,4k_2+1,4k_3+1,\ldots,4k_r+1)$, where $\theta(1,4k_2+1,4k_3+1,\ldots,4k_r+1)$ is the graph obtained from $r$ internally disjoint simple paths of the indicated lengths with common end vertices \cite{chang}. In the following two theorems some sufficient conditions for bipartite graphs to admit a VC$2$-EW are presented.
\begin{ghazie}{\rm\cite{chang}}\label{delta=1}
If $G\ncong K_{2}$ is a connected bipartite graph with one of the following conditions, then $\mu(G)\leq 2$.
\begin{itemize}
\item
$\delta(G)=1$, where $\delta(G)$ is the minimum degree of $G$.
\item
$G$ has a part with even number of vertices.
\end{itemize}
\end{ghazie}
The notation $N[v]$ denotes the closed neighborhood $N(v)\cup \{v\}$, where $N(v)=\{u\,|\,u\in V(G),\ uv\in E(G)\}$.
\begin{ghazie}{\rm\cite{europ}}\label{G-N[v]}
Let $G\ncong K_{2}$ be a connected bipartite graph. If one of the following conditions holds, then $\mu(G)\leq2$.
\begin{itemize}
\item There exists a vertex $v$ such that $\deg(v)\notin\{\deg(u) | u\in N(v)\}$ and $G-N[v]$ is connected.
\item There exists a vertex $v$ of degree $\delta(G)$ such that
$\deg(v)\notin\{\deg(u) | u\in N(v)\}$ and $G-v$ is connected.
\item $G$ is $3$-connected.
\end{itemize}
\end{ghazie}
\begin{ghazie}{\rm\cite{chang}}\label{P_n & C_n}
Let $P_n$, $C_n$ and $K_n$, $n\geq3$, denote the path, cycle and complete graph with $n$ vertices, respectively. Then
\begin{equation*}
\mu(P_n)=
\left\{
\begin{array}{ll}
1 & n=3 \\
2 & n\geq 4
\end{array}
\right.
,\quad
\mu(C_n)=
\left\{
\begin{array}{ll}
2 & n\equiv 0\hspace*{-.35cm}\pmod4 \\
3 & \text{otherwise}
\end{array}
\right.
\text{, and}\quad
\mu(K_n)=3.
\end{equation*}
\end{ghazie}
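The values in the theorem can be corroborated by exhaustive search on small instances. The sketch below (our own, not from the paper) computes $\mu(G)$ by brute force over all $k$-edge-weightings and recovers $\mu(C_4)=2$, $\mu(C_6)=3$ (since $6\equiv 2\pmod 4$), and $\mu(K_4)=3$.

```python
from itertools import product

def mu(vertices, edges, kmax=4):
    # least k such that some k-edge-weighting induces a proper vertex coloring
    for k in range(1, kmax + 1):
        for w in product(range(1, k + 1), repeat=len(edges)):
            c = {v: 0 for v in vertices}
            for (u, v), wt in zip(edges, w):
                c[u] += wt
                c[v] += wt
            if all(c[u] != c[v] for u, v in edges):
                return k
    return None

def cycle(n):
    return list(range(n)), [(i, (i + 1) % n) for i in range(n)]

K4 = [0, 1, 2, 3], [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(mu(*cycle(4)), mu(*cycle(6)), mu(*K4))   # 2 3 3, as the theorem states
```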
The structure of this paper is as follows. In Section \ref{sec: cartesian}, we consider the $1$-$2$-$3$-conjecture for the cartesian product of graphs. First, we prove that the conjecture holds for the cartesian product of some well known families of graphs. Moreover, we prove that the cartesian product of connected bipartite graphs admits a VC$2$-EW. Then, we determine $\mu(G)$ for some well known families of graphs. In \cite{vc 16-ew} Addario-Berry et al. proved that almost all graphs admit a VC$2$-EW. The problem of classifying graphs which admit a VC$2$-EW is an open problem.
In Section \ref{sec: VC2-EW}, first, we present some sufficient conditions for a graph to admit a VC$2$-EW. Then, in Section \ref{sec: VCEW of bipartite}, aiming to find more bipartite graphs with $\mu(G)=3$, we consider a generalization of theta graphs, called generalized polygon trees, and determine $\mu(G)$ for some bipartite generalized polygon trees.
\section{\hspace*{-.6cm}. 1-2-3-conjecture for the cartesian product of graphs}\label{sec: cartesian}
The \textit{cartesian product} of two graphs $G$ and $H$, written $G\Box H$, is the graph with vertex set $V(G)\times V(H)$ specified by putting $(u,v)$ adjacent to $(u', v')$ if and only if $u=u'$ and $vv'\in E(H)$, or $v=v'$ and $uu'\in E(G)$.
In this section, we prove that if the $1$-$2$-$3$-conjecture holds for two graphs $G$ and $H$, then it also holds for $G\Box H$. Moreover, we prove that if $G$ and $H$ are bipartite graphs, then $\mu(G\Box H)\leq 2$.
\begin{ghazie}\label{max}
For every two graphs $G$ and $H$, $\mu(G\Box H)\leq \max\{\mu(G),\mu(H)\}$.
\end{ghazie}
\begin{proof}{
Let $k:=\max\{\mu(G),\mu(H)\}$ and $w_{1}:E(G)\rightarrow \{1,2,\ldots,k\}$, $w_{2}:E(H)\rightarrow \{1,2,\ldots,k\}$ be VC$k$-EW for graphs $G$ and $H$, respectively. We define $w:E(G\Box H)\rightarrow \{1,2,\ldots,k\}$, $w((u,v)(u',v))=w_{1}(uu')$, and $w((u,v)(u,v'))=w_{2}(vv')$, where
$u,u'\in V(G)$, $v,v'\in V(H)$. Since the induced coloring satisfies $c_{G\Box H}((u,v))=c_{G}(u)+c_{H}(v)$, it is easy to see that $w$ is a VC$k$-EW for $G\Box H$.
}\end{proof}
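The construction in this proof is easy to carry out explicitly. The sketch below (our own illustration) brute-forces VC$2$-EWs of $G=C_4$ and $H=P_4$, copies them onto the corresponding edges of $G\Box H$, and verifies that the induced coloring of the product is proper.

```python
from itertools import product

def find_vcew(vertices, edges, k):
    # brute-force search for a vertex-coloring k-edge-weighting (tiny graphs)
    for w in product(range(1, k + 1), repeat=len(edges)):
        c = {v: 0 for v in vertices}
        for (a, b), wt in zip(edges, w):
            c[a] += wt
            c[b] += wt
        if all(c[a] != c[b] for a, b in edges):
            return dict(zip(edges, w))
    return None

VG, EG = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)]   # G = C4, mu = 2
VH, EH = [1, 2, 3, 4], [(1, 2), (2, 3), (3, 4)]           # H = P4, mu = 2
wG, wH = find_vcew(VG, EG, 2), find_vcew(VH, EH, 2)

# Copies of G-edges inherit the G-weighting; copies of H-edges the H-weighting.
PV = [(u, v) for u in VG for v in VH]
PE, w = [], {}
for (a, b), wt in wG.items():
    for v in VH:
        PE.append(((a, v), (b, v)))
        w[PE[-1]] = wt
for (a, b), wt in wH.items():
    for u in VG:
        PE.append(((u, a), (u, b)))
        w[PE[-1]] = wt

# Induced coloring of the product: c((u,v)) = c_G(u) + c_H(v).
c = {v: 0 for v in PV}
for (x, y) in PE:
    c[x] += w[(x, y)]
    c[y] += w[(x, y)]
ok = all(c[x] != c[y] for x, y in PE)
print(ok)   # True: the product weighting is vertex-coloring
```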
By Theorem \ref{P_n & C_n} and Theorem \ref{max}, we conclude that the $1$-$2$-$3$-conjecture is true for some well known families of graphs.
\begin{cor}
The $1$-$2$-$3$-conjecture is true for the cartesian product of complete graphs, cycles and paths. For instance the following graphs satisfy the $1$-$2$-$3$-conjecture.\\
$\bullet\; C_{n}\Box P_{m}$\hspace{2.75cm}
$\bullet\; K_{n}\Box P_{m}$\hspace{2.6cm}
$\bullet\; K_{n}\Box C_{m}$\hspace{2cm}
\\
$\bullet\; C_{n_1}\Box C_{n_2}\Box\cdots\Box C_{n_k}$\hspace{.7cm}
$\bullet\; P_{n_1}\Box P_{n_2}\Box\cdots\Box P_{n_k}$\hspace{.7cm}
$\bullet\; K_{n_1}\Box K_{n_2}\Box\cdots \Box K_{n_k}$
\end{cor}
We need the following lemma to prove our main theorem in this section. In what follows $n(G)$ and $\kappa(G)$ denote the order and vertex connectivity of $G$, respectively.
\begin{lemma}\label{kapa} {\rm\cite{simon}}
For every two graphs $G$ and $H$, $$\kappa(G\Box H)=\min\{\delta(G\Box H),\kappa(H)\cdot n(G),\kappa(G)\cdot n(H)\}.$$
\end{lemma}
In the following theorem we prove that the cartesian product of every two bipartite graphs admits a VC$2$-EW.
\begin{ghazie}\label{main result}
If $G$ and $H$ are two connected bipartite graphs and $G\Box H\neq K_2$, then $\mu(G\Box H)\leq 2$.
\end{ghazie}
\begin{proof}{
It can be seen that $G\Box H$ is a connected bipartite graph if and only if $G$ and $H$ are connected bipartite graphs.
If $n(G)=n(H)=2$, then $G\Box H\cong C_4$ and by Theorem~\ref{P_n & C_n}, $\mu(G\Box H)=2$.
If $n(G), n(H)>2$ and both $G$ and $H$ have a leaf vertex, then by Theorem \ref{delta=1}
and Theorem~\ref{max}, $\mu(G\Box H)\leq 2$; otherwise $\delta(G)+\delta(H)\geq 3$. Hence, by Lemma \ref{kapa}, $\kappa(G\Box H)\geq~3$, and so by Theorem \ref{G-N[v]}, $\mu(G\Box H)\leq 2$.
Now, let $n=n(G)>2$ and $n(H)=2$. Thus, $G\Box H\cong G\Box K_2$. Let $G_1$ and $G_2$ be the induced subgraphs representing the first and the second copy of $G$, respectively. To give a VC$2$-EW for $G\Box K_2$, first we assign weight $1$ to all the edges in $G_1$ and weight $2$ to all the edges in $G_2$. We denote the unweighted edge $e$ incident to vertex $u\in V(G_1)$ by $e_u$. Thus, for every two adjacent vertices $u$ and $v$, where $u\in G_1$ and $v\in G_2$, independently of the weight of $e_u$, we have $c(u)\neq c(v)$. Now we assign a proper weight to the unweighted edges such that for every $uv\in E(G_1)$, $c(u)\neq c(v)$, which also implies $c(u)\neq c(v)$ for every edge $uv\in E(G_2)$. We do this via the following process.
Let $U_{i}=\{u\in V(G_{1})\: |\: d_{G_{1}}(u)=i\}$, $1\leq i\leq n-1$. Now consider the partition $C=\{A\: |\: A\ \text{is a component of}\; \langle U_{i}\rangle,\; 1\leq i\leq n-1\}$ of the vertices of $G_1$. Note that to get a proper weighting, for each $u\in A$ the weight of $e_u$ forces the weight of the other edges incident to the vertices in $A$ as follows. For each unweighted edge $e_v,\; v\in A$, let
\begin{equation}
w(e_{v})=
\left\{
\begin{array}{ll}
w(e_u)+1\hspace*{-.3cm}\pmod 2 & d(u,v) \ is \ odd \\[.2cm]
w(e_u) & \text{otherwise,}
\end{array}
\right.\tag{$*$}
\end{equation}
where $d(u,v)$ is the length of a shortest path between $u$ and $v$ in $G$. Thus, for every component $A$, it is enough to assign the weight of $e_u$ only for one vertex $u\in A$.
Note that, by $(*)$, for every edge $uv\in E(G_1)$, where $d_{G_1}(u)=d_{G_1}(v)$, $c(u)-c(v)=w(e_u)-w(e_v)\neq0$. Also if $|d_{G_1}(u)-d_{G_1}(v)|\geq2$, then independent to the weights of $e_u$ and $e_v$, we have $c(u)\neq c(v)$.
The following algorithm provides a desired edge weighting for $e_u$, $u\in A$.\\[.1cm]
\hspace*{.2cm}\begin{tabular}{|p{14cm}|}
\hline
\tt \hspace*{.2cm}10 \hspace*{.3cm} ${\tt h=0}$;\\[.3cm]
\hspace*{.2cm}20 \hspace*{.45cm} {\tt while} ($\mathtt{C\neq \emptyset}$)\\[.3cm]
\hspace*{.2cm}30 \hspace*{.9cm} h=h+1, {\tt choose an arbitrary component} $\mathtt{A\in C}$ {\tt and define} $\mathtt{{S^{(h)}=\{A\}}}$,\\ \hspace*{1.7cm}$\mathtt{C=C\backslash A,\; p(A)=\emptyset}$ {\tt and let} $\mathtt{w(e_{u})=1}$ {\tt for an arbitrary vertex} $\mathtt{u\in A}$;\\[.2cm]
\hspace*{.2cm}40 \hspace*{26pt} {\tt while} $\left(\begin{array}{l}
{\tt \text{\tt there exist}\ A'\in C,\ A\in S^{(h)}\ \text{\tt and edge}\ e=xy,\ \text{\tt where}\ x\in A}\\
{\tt \text{\tt and}\ y\in A'\ \text{\tt and}\ |d_{G_{1}}(x)-d_{G_{1}}(y)|=1}
\end{array} \newline \right)$\\[.4cm]
\hspace*{.2cm}50 \hspace*{1.8cm} ${\tt S^{(h)}=S^{(h)}\mathtt{\cup} \{A'\}}$, ${\tt C=C\backslash A'}$, ${\tt w(e_{y})=w(e_{x})}$, ${\tt p(A')=A}$;\\
\hline
\end{tabular}
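The weight propagation defined by rule $(*)$ can be sketched in code. The following Python illustration is ours (not from the paper); it assumes a component $A$ is given as an adjacency dictionary and that weights take values in $\{1,2\}$.

```python
from collections import deque

def propagate_weights(adj, root, root_weight):
    """Rule (*): within a component A, the weight of e_v equals the weight of
    e_root when d(root, v) is even, and the other weight in {1, 2} otherwise.
    Distances are computed by breadth-first search inside the component."""
    flip = {1: 2, 2: 1}
    dist = {root: 0}
    queue = deque([root])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return {v: root_weight if d % 2 == 0 else flip[root_weight]
            for v, d in dist.items()}
```

For a path $a$--$b$--$c$ rooted at $a$ with $w(e_a)=1$, this yields the weights $1,2,1$, as prescribed by $(*)$.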
To complete the proof, it is enough to show that $c(u)\neq c(v)$ for every two adjacent vertices $u, v$ with $|d_{G}(u)-d_{G}(v)|=1$. Without loss of generality, assume that $u\in A$, $v\in A'$ and $d_{G}(u)=d_{G}(v)-1$.
If $p(A')=A$, then there exist $x\in A$ and $y\in A'$ such that $xy\in E(G)$ and $w(e_x)=w(e_y)$. Since $G$ is bipartite, $d_{A}(x,u)+d_{A'}(y,v)$ is even; thus $d_A(x,u)$ and $d_{A'}(y,v)$ are both even or both odd. Therefore, by $(*)$, $w(e_{u})=w(e_{v})$, and since $\deg(u)\neq \deg(v)$ we have $c(u)\neq c(v)$.
If $p(A')\neq A$, then by the given algorithm there exist $A=A_1, A_2, \ldots, A_r=A'$ in $S^{(h)}$, for some $h$, where $p(A_{i+1})=A_{i}$ for $1\leq i\leq r-1$. Moreover, for each $i$, $1\leq i\leq r-1$, there exists an edge $x_iy_{i+1}$ such that $x_i, y_{i}\in A_i$, $y_r\in A_r$ and $w(e_{x_i})=w(e_{y_{i+1}})$.
Note that $|\deg(x_i)-\deg(y_{i+1})|=1$ for $1\leq i\leq r-1$ and $\deg(x_1)=\deg(y_r)-1$. Therefore relation $(*)$ implies that $r$ is even. On the other hand, since $G$ is bipartite, $d(u,x_1)+\sum_{i=2}^{r-1} d(y_{i},x_{i})+d(y_{r},v)+r$ is even; otherwise, we would get an odd cycle in $G$. Therefore, $d(u,x_1)+\sum_{i=2}^{r-1} d(y_{i},x_{i})+d(y_{r},v)$ is even. By the equalities $w(e_{x_i})=w(e_{y_{i+1}})$ and relation $(*)$, we get $w(e_u)=w(e_v)$; since $\deg(u)\neq\deg(v)$, this yields $c(u)\neq c(v)$ and completes the proof.
}\end{proof}
Obviously, for every nontrivial graph $G$, $\mu(G)=1$ if and only if $G$ has no two adjacent vertices with the same degree.
\begin{pro}\label{mu=1 for cartesian}
For every two graphs $G$ and $H$, $\mu(G\Box H)=1$ if and only if $\mu(G)=1$ and $\mu(H)=1$.
\end{pro}
\begin{proof}{
By Theorem \ref{max}, the condition is sufficient. Conversely, if $\mu(G\Box H)=1$, then $d_{G\Box H}((u,v))\neq d_{G\Box H}((u',v))$ for every edge $uu'\in E(G)$ and vertex $v\in V(H)$.
On the other hand, $d_{G\Box H}((u,v))=d_{G}(u)+d_{H}(v)$. Therefore, for every edge $uu'\in E(G)$, $d_{G}(u)\neq d_{G}(u')$. This implies $\mu(G)=1$. Similarly, $\mu(H)=1$.
}\end{proof}
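The degree identity used in the proof can be checked directly on small graphs. The following Python sketch (ours, not the paper's) builds the Cartesian product from edge lists and verifies $d_{G\Box H}((u,v))=d_G(u)+d_H(v)$.

```python
from collections import Counter

def cartesian_product(edges_g, edges_h):
    """Edges of the Cartesian product: (u, v) ~ (u', v') iff
    u = u' and vv' in E(H), or v = v' and uu' in E(G)."""
    verts_g = {x for e in edges_g for x in e}
    verts_h = {x for e in edges_h for x in e}
    prod = []
    for u in verts_g:
        for x, y in edges_h:
            prod.append(((u, x), (u, y)))
    for v in verts_h:
        for x, y in edges_g:
            prod.append(((x, v), (y, v)))
    return prod

def degrees(edges):
    """Degree of every vertex appearing in the edge list."""
    return Counter(x for e in edges for x in e)
```

For instance, taking $G=P_3$ and $H=K_2$, every vertex $(u,v)$ of $G\Box H$ has degree $d_G(u)+d_H(v)$.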
By Theorem \ref{main result} and Proposition \ref{mu=1 for cartesian}, we have the following corollary.
\begin{cor}
For every two connected bipartite graphs $G$ and $H$, at least one of which does not admit a VC$1$-EW, we have $\mu(G\Box H)=2$. In particular, for even cycles $C_n$ and $C_m$, $\mu(C_{n}\Box C_{m})=2$ and $\mu(C_n\Box P_m)=2$. Also $\mu(P_{n}\Box P_{m})=2$, and for the cube graph $Q_n=K_2\Box K_2\Box\cdots\Box K_2$ we have $\mu(Q_{n})=2$.
\end{cor}
\section{\hspace*{-.6cm}. Vertex-coloring $2$-edge-weighting}\label{sec: VC2-EW}
In this section we investigate the properties of graphs which admit a VC$2$-EW; we first present some sufficient conditions for a graph to admit one.
In the following, we consider separable graphs.
\begin{ghazie}\label{2 Blocks}
Let $G_1$ and $G_2$ be simple connected graphs with $V(G_1)\cap V(G_2)=\{v\}$, and let $G=G_1\cup G_2$. If one of the following conditions holds, then $\mu(G)\leq 2$.
\begin{itemize}
\item
$d_G(v)\geq4$ and $\forall i,\; 1\leq i\leq 2$, $\mu(G_i)\leq 2$ and $\forall u\in N(v)$, $\deg(u)\leq 2$.
\item
$d_G(v)\geq4$ and $\mu(G_1)\leq 2$ and $\forall u\in N(v)$, $\deg(u)\leq 2$ and $G_2$ is a cycle.
\item
$G_1$ and $G_2$ are cycles.
\end{itemize}
\end{ghazie}
\begin{proof}
{
Assume the first condition holds. Since $\mu(G_i)\leq 2$, there exist vertex-coloring $2$-edge-weightings $w_1:E(G_1)\rightarrow\{1,2\}$ and $w_2:E(G_2)\rightarrow\{1,2\}$. Let $w:E(G)\rightarrow\{1,2\}$ be defined as follows
$$w(e)=
\left\lbrace
\begin{array}{ll}
w_1(e) & e\in E(G_1), \\[.2cm]
w_2(e) & e\in E(G_2).
\end{array}
\right.$$
Then $c(v)\geq4$ and for every vertex $u\in N(v)$, $c(u)\leq 4$.
Now if there exists a vertex $u\in N(v)$, such that $c(u)=c(v)$, then $c(u)=4$ implies $w(uv)=2$ while $c(v)=4$ implies $w(uv)=1$, a contradiction.
Hence, $w$ is a desired edge-weighting.
Now suppose the second condition holds, and let $w_1$ be a vertex-coloring $2$-edge-weighting of $G_1$. Assign each edge $e\in E(G_1)$ the weight $w_1(e)$. For the remaining edges, start from an edge incident to $v$ and assign the weights $2,2,1,1$ periodically, so that the last edge is again incident to $v$. It is easy to check that this is a vertex-coloring $2$-edge-weighting.
Finally, suppose the last condition holds. For each $G_i$, start from an edge incident to $v$ and assign the weights $2,2,1,1$ periodically, so that the last edge is again incident to $v$. Clearly, for every two adjacent vertices $x$ and $y$ with $x,y\neq v$, $c(x)\neq c(y)$.
For every $u\in N(v)$, $c(u)\in\{2,3,4\}$, while $c(v)\geq6$. Therefore, $\mu(G)=2$.
}
\end{proof}
\begin{cor}\label{k Blocks}
Let $G$ be a simple connected graph with the blocks $B_1,B_2,\ldots ,B_r$,\ $r\geq 2$, and cut vertices $v_1,v_2,\ldots ,v_{t}$. If for every $i$, $1\leq i\leq r$, $B_i$ satisfies one of the following conditions, then $\mu(G)\leq2$.
\begin{itemize}
\item $B_i$ is a cycle.
\item $\mu(B_i)\leq2$ and $\forall v_j\in V(B_i),\; \forall u\in N(v_j),\; \deg(u)\leq 2$.
\end{itemize}
\end{cor}
\begin{ghazie}\label{thm: MSP}
Let $G\ncong C_{4k+r}$ for $r=1,2,3$ be a connected graph. If one of the following conditions holds, then $\mu(G)\leq2$.
\begin{itemize}
\item $G$ contains no edge as a maximal simple path and for every maximal simple path of $G$ which is of length $1\pmod4$ connecting two vertices $x$ and $y$, $\{\deg(x),\deg(y)\}\neq\{3\}$.
\item $G$ contains no maximal simple path of length $3\pmod4$ and for every edge $e=xy$ which is a maximal simple path of $G$, $\deg(x)\neq\deg(y)$.
\end{itemize}
\end{ghazie}
\begin{proof}
{
The edge set of $G$ is the disjoint union of its maximal simple paths. We assign the weights $1$ and $2$ to the edges of every maximal simple path, starting from one of its end-edges, periodically according to the pattern $1,1,2,2$ for maximal simple paths of length $2\pmod 4$, and the pattern $2,1,1,2$ for maximal simple paths of length $0,1$ or $3\pmod 4$. Now, if there exists an edge $uv\in E(G)$ such that $c(u)=c(v)$, then clearly this edge is an end-edge of a maximal simple path, so one of the vertices $u$ or $v$ is an end-vertex of a maximal simple path. Without loss of generality, assume that $v$ is such a vertex.
If $\deg(v)=1$, then $c(v)<c(u)$. If $\deg(v)=2$, then $G\cong C_{4k}$, and it is easy to see that the given pattern is a VC$2$-EW for $G$. Thus, we may assume that $\deg(v)\geq3$.
Assume the first condition holds. Then every vertex $u\in N(v)$ has $\deg(u)=2$, and by the given patterns $c(u)\in\{2,3,4\}$. Clearly, if $\deg(v)>4$, then $c(v)>4$ and $c(u)\neq c(v)$. On the other hand, if $\deg(v)=c(v)=4$, then by the given patterns $c(x)=2$ for every $x\in N(v)$. Thus we may assume that $\deg(v)=3$, and there are two possibilities: $c(v)=3$ or $c(v)=4$.
If $c(v)=3$, then by the given patterns, $c(x)=2$ for every $x\in N(v)$. Now let $c(v)=4$. Then one of the edges incident to $v$ has weight $2$, and by the given patterns this edge belongs to a maximal simple path of length $1\pmod4$. Replace the pattern of this path from $2,1,1,2$ with $1,1,2,2$. Since the vertex at the other end of this path is not of degree $3$, this process reduces the number of edges whose two ends have the same color. Hence, repeated applications give the desired weighting.
Now assume the second condition holds; we change the pattern of maximal simple paths of length $2\pmod4$ to $2,2,1,1$. Since every edge incident to $v$ has weight $2$, we have $c(v)=2\deg(v)\geq6$, and if $e=uv$ is a maximal simple path of $G$, then $c(u)\neq c(v)$ by assumption. So we may assume that $u$ is an internal vertex of a maximal simple path. Hence, $\deg(u)=2$, and by the given patterns $c(u)\in\{3,4\}$. Therefore $c(u)\neq c(v)$.
}
\end{proof}
\begin{cor}\label{cor: MSP}
Let $G\ncong C_{4k+r}$ for $r=1,2,3$ be a connected graph. If one of the following conditions holds, then $\mu(G)\leq2$.
\begin{itemize}
\item $G$ contains no maximal simple path of length $1\pmod 4$.
\item $G$ contains no maximal simple path of length $3\pmod4$ and no edge as a maximal simple path.
\item Every maximal simple path of $G$ is of length $4k+1$, for $k\geq1$.
\end{itemize}
\end{cor}
\begin{cor}\label{cor: p and p'}
Let $G\ncong C_{4k+r}$ for $r=1,2,3$ be a connected graph containing no edge as a maximal simple path. If $\mu(G)>2$, then $G$ has maximal simple paths $P$ and $P'$ of lengths $1\pmod4$ and $3\pmod4$, respectively.
\end{cor}
\rem{
In every VC$2$-EW of a path of length $n$, if $n\equiv 1\pmod 4$, the two end-edges get the same weight, and if $n\equiv 3\pmod 4$, they get different weights. Furthermore, if $n\not\equiv1,3\pmod4$, then every assignment of weights from $\{1,2\}$ to the two end-edges extends to a VC$2$-EW of the path.
}
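The Remark can be verified by exhaustive search for small lengths. The following Python sketch (ours, not from the paper) enumerates all VC$2$-EWs of a path with $m$ edges, using the fact that the color of an internal vertex is the sum of its two incident edge weights.

```python
from itertools import product

def path_vc2_weightings(m):
    """All vertex-coloring 2-edge-weightings of a path with m edges.
    Vertex colors: c_0 = w_0, c_i = w_{i-1} + w_i, c_m = w_{m-1}; a weighting
    is valid when adjacent vertices receive different colors."""
    valid = []
    for w in product((1, 2), repeat=m):
        c = [w[0]] + [w[i] + w[i + 1] for i in range(m - 1)] + [w[-1]]
        if all(c[i] != c[i + 1] for i in range(m)):
            valid.append(w)
    return valid
```

For $m=5$ every valid weighting has equal end-edge weights, for $m=3$ the end-edges always differ, and for $m=4$ all four end-edge combinations occur, in agreement with the Remark.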
The analogous statement is not true in the remaining cases, for example when every maximal simple path of $G$ has length $1$, or when $G$ has maximal simple paths of lengths $1$ and $3$.
The following proposition gives an example of graphs in which every maximal simple path is of length $1\pmod4$ and $\mu(G)=2$, while $\mu(K_n)=3$.
Also, Figure \ref{exm1} shows an example of graphs which contain maximal simple paths of lengths $1\pmod4$ and $3\pmod4$ with $\mu(G)=3$.
\begin{pro}\label{equitable}
For every complete $r$-partite graph $K_{n,n,\ldots ,n}$ with $r,n\geq 2$, $\mu(K_{n,n,\ldots ,n})=2$.
\end{pro}
\begin{proof}
{Let
\begin{equation*}
A=\left(%
\begin{array}{cccc}
0 & B & \cdots & B \\
B^{T} & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & B \\
B^{T} & \cdots & B^{T} & 0 \\
\end{array}%
\right)_{rn\times rn}
\end{equation*}
be the weighted adjacency matrix of $G=K_{n,n,\ldots ,n}$, where
$$B=\left(%
\begin{array}{cccc}
1 & 1 & \ldots & 1 \\
\vdots & \vdots & & \vdots \\
1 & 1 & \ldots & 1 \\
2 & 2 & \ldots & 2 \\
\end{array}
\right)_{n\times n}$$
Note that the color induced on each vertex is the sum of the entries in its corresponding row of $A$. Thus, for every two vertices $u$ and $v$ in the $i$-th and $j$-th parts of $G$, respectively, $c(u)\in\{(r-i)n+(i-1)(n+1),2(r-i)n+(i-1)(n+1)\}$ and $c(v)\in\{(r-j)n+(j-1)(n+1),2(r-j)n+(j-1)(n+1)\}$. Since $i\neq j$, we have
\begin{equation*}
(r-i)n+(i-1)(n+1) \neq (r-j)n+(j-1)(n+1)
\end{equation*}
and
\begin{equation*}
2(r-i)n+(i-1)(n+1)\neq 2(r-j)n+(j-1)(n+1)
\end{equation*}
On the other hand, $j\leq r$; hence $j-i\leq n(r-i)$, with equality if and only if $n=1$ and $j=r$. Thus $(r-j)n+(j-1)(n+1)\neq 2(r-i)n+(i-1)(n+1)$, and similarly $(r-i)n+(i-1)(n+1)\neq 2(r-j)n+(j-1)(n+1)$. Therefore, $A$ gives a desired edge-weighting.
}
\end{proof}
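The block weighting in the proof is easy to check computationally. The following Python sketch (ours) rebuilds the weighting encoded by $A$ — an edge between parts $i<j$ gets weight $2$ exactly when its endpoint in part $i$ is the last vertex of that part — and verifies that the induced coloring is proper.

```python
from itertools import combinations

def multipartite_colors(r, n):
    """Induced vertex colors for the complete r-partite graph with parts of
    size n, under the block weighting given by the matrix A above."""
    verts = [(i, a) for i in range(r) for a in range(n)]
    color = {v: 0 for v in verts}
    for (i, a), (j, b) in combinations(verts, 2):
        if i == j:
            continue  # vertices in the same part are non-adjacent
        wt = 2 if a == n - 1 else 1  # combinations yields i < j here
        color[(i, a)] += wt
        color[(j, b)] += wt
    return color

def is_proper(r, n):
    """Check that vertices in different parts receive different colors."""
    c = multipartite_colors(r, n)
    return all(c[(i, a)] != c[(j, b)]
               for (i, a), (j, b) in combinations(c, 2) if i != j)
```

For example, for $r=3$ and $n=2$ the colors of the first part are $\{4,8\}$, matching the formula $(r-i)n+(i-1)(n+1)$ with $i=1$.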
\begin{exm}\label{exm:mu=3}
By the Remark, it is easy to check that the graph shown in Figure \ref{exm1} is a bipartite graph with $\mu(G)=3$, in which $p_{i}$, $q_{i}$ and $r_{i}$ are paths of lengths $1$, $1$ and $3\pmod 4$, respectively.
\begin{figure}[ht]
\centering
{\includegraphics[scale=.5]{mu3.pdf} }
\caption{A family of $2$-connected bipartite graphs with $\mu(G)=3$\label{exm1}}
\end{figure}
\end{exm}
We end this section with the following theorem which is an improvement of Theorem \ref{thm: MSP}.
\begin{ghazie}\label{thm: our conj}
If $G\ncong C_{4k+r}$, $r=1,2,3$, is a connected graph containing no edge as a maximal simple path, then $\mu(G)\leq2$.
\end{ghazie}
\section{\hspace*{-.6cm}. Vertex-coloring edge-weighting of bipartite graphs}\label{sec: VCEW of bipartite}
It is known that every $3$-colorable graph admits a VC$3$-EW \cite{123}; there are bipartite graphs $G$ with $\mu(G)=2$ and also bipartite graphs $G$ with $\mu(G)=3$.
The only known bipartite graphs with $\mu(G)=3$ are $C_{4k+2}$ and the theta graphs $\theta(1,4k_2+1,\ldots,4k_r+1)$ given in \cite{chang}.
The \textit{theta graph} $\theta(l_{1}, l_{2},\ldots , l_{r})$ is the graph obtained from $r$ disjoint paths of lengths $l_{1}, l_{2},\ldots , l_{r}$, respectively, by identifying their end-vertices, which are called \textit{the roots} of the graph. Notice that $\theta(l_{1})=P_{1+l_{1}}$ and $\theta(l_{1},l_{2})=C_{l_{1}+l_{2}}$.
In Example \ref{exm:mu=3}, we constructed more bipartite graphs with $\mu(G)=3$. A natural question is whether there exists another well-known family of bipartite graphs with $\mu(G)=3$. To address this question, in this section we consider a generalization of theta graphs: the theta graphs are special cases of the well-known family of graphs called generalized polygon trees.
First, in the following theorem, we give a sufficient condition which guarantees that a bipartite graph admits a VC$2$-EW.
\begin{ghazie}\label{d(v)>d(u)}
If $G$ is a connected bipartite graph containing a vertex $v$ such that $\deg(v)>\deg(u)$ for every $u\in N(v)$ and $G-v$ is connected, then $\mu(G)\leq 2$.
\end{ghazie}
\begin{proof}{
Let $U$ and $W$ be the parts of the graph $G$. If $|U|\cdot|W|$ is even, the result follows by Theorem \ref{delta=1}. Thus we assume that both $|U|$ and $|W|$ are odd. Let $v\in U$ satisfy $\deg(v)>\deg(u)$ for every $u\in N(v)$, with $G-v$ connected. Since $|U-v|$ is even, by Theorem \ref{delta=1}, $G-v$ admits a VC$2$-EW such that $c(x)$ is odd for every $x\in U-v$ and $c(y)$ is even for every $y\in W$. Now we assign weight $2$ to all the edges incident to $v$.
Clearly, $c(x)$ is still odd for every $x\in U-v$ and $c(y)$ is still even for every $y\in W$. Also, $c(v)=2d_G(v)>2d_G(u)\geq c(u)$ for every $u\in N(v)$.
}
\end{proof}
\begin{ghazie} {\rm\cite{chang}}\label{theta}
Let $G=\theta(l_{1}, l_{2},\ldots , l_{r})$, where $r\geq 3$ and $1\leq l_{1}\leq l_{2}\leq\cdots\leq l_{r}$. Then,
\begin{center}
$\mu(G)=
\left\{
\begin{array}{ll}
1 & l_i=2, \; \forall i \\
3 & l_1=1 \; \text{and} \;\; l_i\equiv 1\pmod4 \; \forall i\not =1\\
2 & {\text{otherwise.}}
\end{array}
\right.$
\end{center}
\end{ghazie}
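Theorem \ref{theta} can be spot-checked by brute force on small instances. The sketch below is ours (exponential in the number of edges, so only for tiny graphs); it computes $\mu$ directly from the definition.

```python
from itertools import product

def mu(edges, kmax=3):
    """Smallest k <= kmax for which some k-edge-weighting induces a proper
    vertex coloring by incident weight sums; None if no such k exists."""
    verts = {x for e in edges for x in e}
    for k in range(1, kmax + 1):
        for w in product(range(1, k + 1), repeat=len(edges)):
            c = dict.fromkeys(verts, 0)
            for wt, (x, y) in zip(w, edges):
                c[x] += wt
                c[y] += wt
            if all(c[x] != c[y] for x, y in edges):
                return k
    return None

def theta(lengths):
    """Theta graph: internally disjoint paths of the given lengths joining
    the two roots 0 and 1."""
    edges, nxt = [], 2
    for length in lengths:
        prev = 0
        for _ in range(length - 1):
            edges.append((prev, nxt))
            prev, nxt = nxt, nxt + 1
        edges.append((prev, 1))
    return edges
```

For instance, $\mu(\theta(2,2,2))=1$ (all $l_i=2$) and $\mu(\theta(1,5,5))=3$ ($l_1=1$ and $l_2,l_3\equiv 1\pmod 4$), as the theorem predicts.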
A \textit{generalized polygon tree} is a graph defined recursively as follows. A cycle $C_{p}$ $(p\geq 3)$ is a generalized polygon tree. Next, suppose $H$ is a generalized polygon tree containing a simple path $P_{k}$, where $k\geq 1$. If $G$ is a graph obtained from the union of $H$ and a cycle $C_{r}$, where $r>k$, by identifying $P_{k}$ in $H$ with a path of length $k$ in $C_{r}$, then $G$ is also a generalized polygon tree \cite{koh}.
In the following theorem, we consider bipartite generalized polygon trees with at most three interior regions (see Figure \ref{gpt}), and by determining $\mu(G)$ we classify under which conditions they admit a VC$2$-EW.
\begin{ghazie}\label{thm:GPTs}
Let $G$ be a bipartite generalized polygon tree with at most three interior regions, as shown in Figure {\rm\ref{gpt}}, in which $p_{1},p_{2},\ldots ,p_{6}$ are paths of lengths $a$, $b$, $c$, $d$, $e$ and $f$, respectively. Then $\mu(G)=1$ if and only if one of the following conditions holds:
\begin{itemize}
\item $a=b=c=d=e=f=2$.
\item $a=b=e+f=2$ and $c=d=0$.
\item $a=d=f=2$, $b=c=1$ and $e=0$.
\item $a=d=f=2$, $b=c=2$ and $e=0$.
\item $a=b=c=d=2$ and $e=f=0$.
\end{itemize}
Moreover, $\mu(G)=3$ if and only if one of the following conditions holds:
\begin{itemize}
\item $a=b=c=d=0$ and $e+f\equiv 2\pmod 4$.
\item $b=1$, $c=d=0$ and $a,e+f\equiv1\pmod4$.
\item $b=1$, $e=f=0$ and $a,c,d\equiv 1\pmod 4$.
\item $b=c=1$, $a,d,e\equiv 1\pmod 4$ and $f\equiv 3 \pmod 4$.
\end{itemize}
Otherwise $\mu(G)=2$.
\begin{figure}[ht]
\centering
{\includegraphics[scale=.4]{GPT.pdf}}
\caption{A generalized polygon tree with three interior regions\label{gpt}}
\end{figure}
\end{ghazie}
\begin{proof}
{
We know that a graph admits a vertex-coloring $1$-edge-weighting if and only if every two adjacent vertices have different degrees. It can be verified that this condition holds if and only if one of the possibilities given in the statement occurs.
For $b$ and $c$ there are three possibilities: $b=c=0$; $b\neq 0$ and $c=0$; or $b,c\neq 0$.
Let $b=c=0$. If $a=d=0$, then $G$ is an even cycle $C_{e+f}$ and by Theorem \ref{P_n & C_n}, if $e+f\equiv 0\pmod 4$, then $\mu(G)=2$, otherwise, $\mu(G)=3$. If $a\neq 0$ or $d\neq 0$, by Corollary~\ref{k Blocks}, $\mu(G)=2$.
Now let $b\neq 0$ and $c=0$. If $d=0$, then $G\cong \theta(a,b,e+f)$ and we are done. For $d\neq 0$, $G$ is a graph with two blocks, $\theta(a,b,e+f)$ and the cycle $C_d$.
Thus, by Theorem \ref{2 Blocks}, $\mu(G)=2$ unless $\mu(\theta(a,b,e+f))=3$; in this case we give a VC$2$-EW for $G$ in Figure $3$(\ref{last}). In the figures, edges and paths are denoted by straight lines and curves, respectively, and every path is weighted periodically by the given pattern along the indicated direction.
Otherwise $b,c\neq 0$. In this case, we consider three possibilities: $b,c>1$; $b=1$ and $c>1$; or $b=c=1$.
First let $b,c>1$. In this case, if $e\neq 1$ or $f\neq 1$, then there exists a vertex $x$ with $\deg(x)\geq 3$ such that $\deg(x)>\deg(y)$ for all $y\in N(x)$ and $G-x$ is connected. Thus, by Theorem \ref{d(v)>d(u)},
$\mu(G)\leq 2$. Therefore we assume that $e=f=1$. Notice that if $a=0$ or $d=0$, then $G$ is a theta graph and we are done. Now, since $G$ is bipartite, $b+c$ is even; hence $b+c\equiv 0\pmod4$ or $b+c\equiv 2\pmod4$. For the first possibility, if there is a part of even order, we are done by Theorem \ref{delta=1}.
Otherwise both parts are of odd order and $a+d\equiv2\pmod4$. In this case there are four possibilities: $a\equiv2\pmod4$, $b,c,d\equiv0\pmod4$; or $a,b,c\equiv2\pmod4$, $d\equiv0\pmod4$; or $a,b,d\equiv1\pmod4$, $c\equiv3\pmod4$; or $a,c,d\equiv3\pmod4$, $b\equiv1\pmod4$; and in each case the pattern given in Figure $3$(\ref{2-1-1}) is a VC$2$-EW for $G$. For the latter possibility, $a+d\equiv2\pmod4$ gives a part with an even number of vertices, and if both parts have an odd number of vertices and $a+d\equiv0\pmod4$, then by the symmetry between $b$, $c$ and $a$, $d$ the same discussion gives the desired result.
Now let $b=1$ and $c>1$. If $e=f=0$, then $G\cong\theta(a,1,c,d)$. Thus we assume that $\{e,f\}\neq\{0\}$. In this case, if $e\neq1$ or $f\neq1$, then there exists a vertex $x$ with $\deg(x)\geq 3$ such that $\deg(x)>\deg(y)$ for all $y\in N(x)$ and $G-x$ is connected. Thus, by Theorem \ref{d(v)>d(u)}, $\mu(G)\leq 2$. Therefore, we assume $e=f=1$. Since $G$ is bipartite, $a$, $c$ and $d$ are odd. For the cases $a,c,d\equiv 3\pmod4$; $a\equiv 3\pmod4$, $c,d\equiv 1\pmod4$; and $a,c\equiv 1\pmod4$, $d\equiv 3\pmod4$, a VC$2$-EW is given in Figure $3$(\ref{3-1}). Otherwise, $G$ has a part of even order and, by Theorem \ref{delta=1},
$\mu(G)\leq 2$.
Now if $b=c=1$, then since $G$ is bipartite, $e+f$ is even and $a$, $d$ are odd.\\[.5cm]
\textbf{Case 1. $a,d\equiv 1\pmod 4$.} If $e+f\equiv 2\pmod 4$, then $G$ has a part with an even number of vertices and, by Theorem \ref{delta=1},
$\mu(G)\leq 2$. Thus we assume $e+f\equiv 0\pmod4$.
For the cases $e,f\equiv 0\pmod 4$ and $e,f\equiv 2\pmod 4$ we give a VC$2$-EW for $G$ in Figures $3$(\ref{1-1-1}) and $3$(\ref{1-1-2}), respectively.
For the case $e\equiv 1$, $f\equiv 3\pmod 4$, we first show that $\mu(G)\geq 3$. Suppose to the contrary that $G$ admits a VC$2$-EW. Since $a\equiv 1\pmod4$, by the Remark, the end-edges of $p_1$ incident to $u$ and $v$ have the same weight; similarly for the end-edges of $p_4$ incident to $u'$ and $v'$. Thus, the edges incident to $u$ and $v$ through $p_5$ and $p_6$ must have different weights. On the other hand, since $e\equiv 1\pmod4$, the two end-edges of $p_5$ get the same weight, while the two end-edges of $p_6$ get different weights, because $f\equiv 3\pmod4$. Therefore $c(u')=c(v')$. This contradiction implies $\mu(G)\geq 3$. On the other hand, $G$ is bipartite, hence $3$-colorable, so $\mu(G)\leq 3$. Therefore, $\mu(G)=3$.\\[.5cm]
\textbf{Case 2. $a,d\equiv 3\pmod 4$.} If $e+f\equiv 2\pmod 4$, then by Theorem \ref{delta=1},
$\mu(G)\leq 2$. Thus we assume $e+f\equiv 0\pmod 4$. Now one of three possibilities occurs: $e,f\equiv 0\pmod 4$; $e,f\equiv 2\pmod 4$; or $e\equiv 1$, $f\equiv 3\pmod 4$; and in each case the pattern given in Figure $3$(\ref{1-1-1}) is a VC$2$-EW for $G$.\\[.5cm]
\textbf{Case 3. $a\equiv 1, d\equiv 3\pmod 4$.} If $e+f\equiv 0\pmod 4$, then by Theorem \ref{delta=1},
$\mu(G)\leq 2$. Thus we assume $e+f\equiv 2\pmod 4$. For the cases $e,f\equiv1\pmod4$ or $e\equiv0\pmod4$, $f\equiv2\pmod4$, and the case $e,f\equiv3\pmod4$, we give a VC$2$-EW for $G$ in Figures $3$(\ref{1-1-2}) and $3$(\ref{1-1-1}), respectively. Notice that in the case $e\equiv0\pmod4$, $f\equiv2\pmod4$, if $e=0$, then by Theorem \ref{d(v)>d(u)}, $\mu(G)\leq2$. Also, in the case $e,f\equiv1\pmod4$, if $e=1$, we can replace the weight of the edge $uv$ with $1$ to get a VC$2$-EW.
\setcounter{figure}{0}
\renewcommand{\figurename}{{}}
\begin{tabular}{|c|c|c|}
\hline
\begin{minipage}{5.09cm}
\hspace*{-.18cm}\includegraphics[scale=.26]{last.pdf}
\captionof{figure}{\label{last}}
\end{minipage}
&
\begin{minipage}{4.32cm}
\hspace*{-.14cm}\includegraphics[scale=.26]{2-1-1.pdf}
\captionof{figure}{\label{2-1-1}}
\end{minipage}
&
\begin{minipage}{4.3cm}
\hspace*{-.14cm}\includegraphics[scale=.26]{3-1.pdf}
\captionof{figure}{\label{3-1}}
\end{minipage}
\\
\hline
\end{tabular}
\vspace*{-.3cm}
\hspace*{2.1cm}\begin{tabular}{|c|c|}
\hline
\begin{minipage}{5.09cm}
\hspace*{-.cm}\includegraphics[scale=.26]{1-1-1.pdf}
\captionof{figure}{\label{1-1-1}}
\end{minipage}
&
\begin{minipage}{5.1cm}
\includegraphics[scale=.26]{1-1-2.pdf}
\captionof{figure}{\label{1-1-2}}
\end{minipage}
\\
\hline
\end{tabular}
\renewcommand{\tablename}{{Figure}}
\addtocounter{table}{2}
\captionof{table}{A VC$2$-EW for $G$}
}
\end{proof}
\section{\hspace*{-.6cm}. Conclusion}
Let $G$ be a connected graph with $G\not\cong C_{4k+r}$ for $r=1,2,3$. If $\mu(G)>2$, then by Corollary \ref{cor: MSP}, $G$ contains a maximal simple path of length $1$. In this case there are infinite families of graphs both with $\mu(G)\leq2$ (see Proposition \ref{equitable}) and with $\mu(G)>2$ (see Theorems \ref{thm:GPTs} and \ref{theta}).
If we consider connected graphs that contain no edge as a maximal simple path and do not admit a VC$2$-EW, then Corollary \ref{cor: p and p'} guarantees that there exist maximal simple paths $P$ and $P'$ whose lengths are $1\pmod4$ and $3\pmod4$, respectively. In Theorem \ref{thm: our conj}, we proved that every connected graph $G$ with $G\not\cong C_{4k+r}$ for $r=1,2,3$ and $\mu(G)>2$ contains an edge as a maximal simple path. In other words, if $G\not\cong C_{4k+r}$ for $r=1,2,3$ is a connected graph containing no edge as a maximal simple path, then $\mu(G)\leq2$.
It was proved that every $3$-colorable graph admits a VC$3$-EW. Thus, for bipartite graphs $\mu(G)\leq3$, and bipartite graphs are divided into two classes: bipartite graphs with $\mu(G)\leq2$, say Class $1$, and bipartite graphs with $\mu(G)=3$, say Class $2$. The classification of bipartite graphs into these two classes is not a trivial problem.
It is proved that every $3$-connected bipartite graph is in Class $1$. For a graph containing cut vertices, some properties of its blocks which guarantee that $G$ is in Class $1$ are given (Theorem \ref{2 Blocks} and Corollary \ref{k Blocks}). Corollary \ref{cor: MSP} implies that every $2$-connected bipartite graph which has no ear of length $1$ in its ear decomposition belongs to Class $1$. In Theorem \ref{thm:GPTs} and Example \ref{exm:mu=3}, infinite families of $2$-connected bipartite graphs in Class $2$ are provided.
arXiv:1205.3266 (math.CO), ``On the 1-2-3-conjecture'', 2012-05-16, https://arxiv.org/abs/1205.3266.
Abstract: A $k$-edge-weighting of a graph $G$ is a function $w\colon E(G)\to\{1,2,\ldots,k\}$. An edge-weighting naturally induces a vertex coloring $c$, where for every vertex $v\in V(G)$, $c(v)$ is the sum of the weights of the edges incident to $v$. If the induced coloring $c$ is a proper vertex coloring, then $w$ is called a vertex-coloring $k$-edge-weighting (VC$k$-EW). Karonski et al.\ (J. Combin. Theory Ser. B 91 (2004) 151--157) conjectured that every graph admits a VC$3$-EW. This conjecture is known as the 1-2-3-conjecture. In this paper, first, we study the vertex-coloring edge-weighting of the Cartesian product of graphs. Among other results, we prove that the 1-2-3-conjecture holds for some infinite classes of graphs. Moreover, we explore some properties of a graph to admit a VC$2$-EW.

arXiv:2106.06510, ``Measuring the robustness of Gaussian processes to kernel choice'', https://arxiv.org/abs/2106.06510.
Abstract: Gaussian processes (GPs) are used to make medical and scientific decisions, including in cardiac care and monitoring of atmospheric carbon dioxide levels. Notably, the choice of GP kernel is often somewhat arbitrary. In particular, uncountably many kernels typically align with qualitative prior knowledge (e.g.\ function smoothness or stationarity). But in practice, data analysts choose among a handful of convenient standard kernels (e.g.\ squared exponential). In the present work, we ask: Would decisions made with a GP differ under other, qualitatively interchangeable kernels? We show how to answer this question by solving a constrained optimization problem over a finite-dimensional space. We can then use standard optimizers to identify substantive changes in relevant decisions made with a GP. We demonstrate in both synthetic and real-world examples that decisions made with a GP can exhibit non-robustness to kernel choice, even when prior draws are qualitatively interchangeable to a user.
\section{Code assets used}
\label{app:resourcesUsed}
Our experiments use the following dependencies which are listed alongside their license details:
\begin{enumerate}
\item \texttt{NumPy} \cite{numpy}, which uses the BSD 3-Clause ``New'' or ``Revised'' License.
\item \texttt{jax} \cite{jax2018_github}, which uses the Apache License, Version 2.0.
\item \texttt{scipy} \cite{2020SciPy-NMeth}, which uses the BSD 3-Clause ``New'' or ``Revised'' License.
\item \texttt{neural-tangents}~\citep{neuraltangents2020} package, which uses the Apache License, Version 2.0.
\end{enumerate}
\section{Additional details for the heart rate example}
\label{app:heartRate}
\begin{figure}
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.06]{figures/HR_nonNonrobust_data.png} &
\includegraphics[scale=0.06]{figures/HR_nonNonrobust_prior0.png} \\
\includegraphics[scale=0.06]{figures/HR_nonNonrobust_prior1.png} &
\includegraphics[scale=0.06]{figures/HR_nonNonrobust_hist.png}
\end{tabular}
\caption{Sensitivity of heart rate analysis in \cref{app:heartRate} for an example where we fail to find non-robustness.
(\emph{Top-left}): Heart rate data; notice the data is trending downwards at the end of the time series.
(\emph{Top-right}): Prior draws from our original kernel $k_0$ from \cref{heartRateKernel}). (\emph{Bottom-left}): Prior draws from our decision-changing kernel $k_1$ that achieves $F^\star = \Delta$, noise matched by color to the draws from $k_0$.
(\emph{Bottom-right}): Comparison of the difference between $k_0$ and $k_1$ (red line) to posterior hyperparameter uncertainty (histogram).}
\label{fig:app-nonNonRobustHR}
\end{figure}
Here, we give additional details for our heart rate modeling example from \cref{sec:heartRate}.
The assets from the Computing in Cardiology challenge \cite{compCardiology1,compCardiology2} are available under Creative Commons Attribution 4.0 International Public License.
According to \cite{compCardiology1}, the data was collected under the approval of appropriate institutional review boards, and personal identifiers were removed.
Following \cite{Colopy2016_identifying}, we first take the log transform of our heart rate observations $y_n$.
We then zero-mean the observations ($\sum_{n=1}^N y_n = 0$) and scale them to have unit variance ($\frac{1}{N}\sum_{n=1}^N y_n^2 = 1$).
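Concretely, this preprocessing can be written as follows (our sketch, not the authors' code):

```python
import numpy as np

def standardize_log_hr(y_bpm):
    """Log-transform heart-rate observations, then center them and scale
    to unit variance, as described above."""
    y = np.log(np.asarray(y_bpm, dtype=float))
    y -= y.mean()
    return y / y.std()
```

The result has zero mean and unit (population) standard deviation by construction.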
The kernel used by \cite{Colopy2016_identifying} to model the resulting log-scaled, standardized data is a Mat\'{e}rn $5/2$ kernel plus a squared exponential kernel:
\begin{equation}
k_0(x,x') = h_1^2 \left( 1 + \frac{\sqrt{5}\abs{x-x'}}{\lambda_1} + \frac{5\abs{x-x'}^2}{3\lambda_1^2} \right) \exp \left[ - \frac{\sqrt{5}\abs{x-x'}}{\lambda_1} \right]
+ h_2^2 \exp\left[ -\frac{\abs{x-x'}^2}{2\lambda_2^2} \right],
\label{heartRateKernel}
\end{equation}
where $h_1, h_2, \lambda_1, \lambda_2 > 0$ are kernel hyperparameters, which we set via MMLE.
While all inferences are done on the zero-mean, unit-variance log-scaled data, all of our plots and discussion are given in the untransformed (i.e.\ raw bpm) scale for ease of interpretability.
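For reference, \cref{heartRateKernel} can be implemented directly; the hyperparameter values below are illustrative placeholders, not the MMLE estimates used in the paper.

```python
import numpy as np

def k0(x, xp, h1=1.0, h2=0.5, lam1=2.0, lam2=10.0):
    """Matérn 5/2 plus squared-exponential kernel from the equation above.
    h1, h2, lam1, lam2 are placeholder values (the paper fits them by MMLE)."""
    r = np.abs(x - xp)
    matern = h1**2 * (1.0 + np.sqrt(5.0) * r / lam1
                      + 5.0 * r**2 / (3.0 * lam1**2)) * np.exp(-np.sqrt(5.0) * r / lam1)
    sqexp = h2**2 * np.exp(-r**2 / (2.0 * lam2**2))
    return matern + sqexp
```

At zero lag the kernel returns $h_1^2+h_2^2$, and it is symmetric in its arguments, as any valid covariance function must be.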
In the main text, we showed an example where our workflow in \cref{alg:workflow} discovered non-robustness in predicting whether a patient's heart rate would be likely to be above 130 BPM or not 1.5 hours in the future.
We noted that there was some evidence in the data supporting this finding: the patient's heart rate was trending upward towards the end of the observed data, so we might expect that small changes to the prior could result in significant posterior mass being placed on high heart rates.
To demonstrate that we do not always find GP analyses non-robust to the choice of the prior, we give an example here where we fail to find non-robustness.
For our example, we use a different patient from the Computing in Cardiology challenge \cite{compCardiology1,compCardiology2}.
The heart rate for this patient is plotted in \cref{fig:app-nonNonRobustHR}; notice that their heart rate is trending down at the end of the observed data.
As in \cref{sec:heartRate}, we use the constraint set and objective specified by \cref{constraintSet:stationary} (i.e.\ we constrain ourselves to stationary kernels with spectral densities close to the density of $k_0$). Following \cref{alg:workflow}, we expand the constraint set size until we achieve $F^\star = \Delta$.
We then assess whether the recovered $k_1$ is qualitatively interchangeable with $k_0$.
We plot noise-matched prior draws from $k_0$ and $k_1$ in \cref{fig:app-nonNonRobustHR}.
We see that $k_1$ has obvious qualitative deviations from $k_0$; the functions drawn from $k_1$ have noticeably larger variance (count the number of times the functions from $k_1$ pass 130 bpm).
Additionally, we see in \cref{fig:app-nonNonRobustHR} that the relative Frobenius norm between $k_0(X,X)$ and $k_1(X,X)$ is much larger than the typical deviations around $k_0$ due to hyperparameter uncertainty.
We conclude that $k_0$ and $k_1$ are not qualitatively interchangeable.
Thus, we say that we fail to find non-robustness in the sense of \cref{def:nonRobust}.
Again, this conclusion is fairly sensible: at the final observation, the patient's heart rate is below 80 BPM and is trending downwards.
It thus seems reasonable that it would take a somewhat unusual prior to predict that the patient's heart rate would suddenly spike to 130.
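The discrepancy compared against the hyperparameter histogram can be computed as follows; here we assume the relative Frobenius norm is $\|k_1(X,X)-k_0(X,X)\|_F / \|k_0(X,X)\|_F$ (our reading of the comparison, not a definition stated in this appendix):

```python
import numpy as np

def rel_frobenius(K0, K1):
    """Relative Frobenius distance between two kernel Gram matrices."""
    return np.linalg.norm(K1 - K0, ord="fro") / np.linalg.norm(K0, ord="fro")
```

For example, doubling a Gram matrix gives a relative distance of exactly one.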
The heart rate data we use necessarily contains information about individuals -- each time series of a heart rate comes from an individual patient.
The creators of this dataset had their study approved by an institutional review board, so we assume they obtained consent from all people in the dataset.
The dataset contains no immediately personally identifying information; all patients are referred to only by a number.
However, we are not aware of a proof that no personally identifying information could be extracted from this dataset.
We are confident that this dataset contains no offensive information.
The heart rate experiments were run on a laptop with a six-core i7-9750H processor.
Each of the two experiments took roughly five minutes to complete.
\section{Additional details for \co2 experiment}
\label{app:co2}
\begin{figure}
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior0.png}\\
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior1.png}\\
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior2.png}\\
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior3.png}\\
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior4.png}\\
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior5.png}\\
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior6.png}\\
\includegraphics[width=\textwidth]{figures/appendix_maunaloa-perturbed-original-prior7.png}\\
\caption{Sensitivity analysis of Mauna Loa. Each row plots noise-matched samples from a zero mean Gaussian process with original and perturbed kernel functions. These plots provide a zoomed-in view of the prior samples shown in \cref{fig:MaunaLoa}. We note that draws from $k_1$ are in-phase with those of $k_0$ (i.e.\ $k_1$ captures the seasonal maxima and minima of \co2 just as well as $k_0$ does). Overall, there is high agreement between functions sampled from the two GPs.}
\label{fig:app-maunaLoaPriors}
\end{figure}
Here, we give additional details on the \co2 experiment from \cref{sec:co2}.
Our dataset is a series of monthly \co2 levels taken from Mauna Loa in Hawaii between 1958 and January of 2021 \cite{osti_1389383}; we download our data from \href{https://scrippsco2.ucsd.edu/assets/data/atmospheric/stations/in\_situ\_co2/monthly/monthly\_in\_situ\_co2\_mlo.csv}{https://scrippsco2.ucsd.edu/assets/data/atmospheric/stations/in\_situ\_co2/monthly/monthly\_in\_situ\_co2\_mlo.csv}.
The data is freely available (no attached license).
\cite[Section 5.4.3]{RasmussenWilliams2006} predict future \co2 levels using a GP.
Their kernel is the sum of four terms:
\begin{align}
k_0(x_1, x_2) &= \theta_1^2 \exp\left( -\frac{(x_1 - x_2)^2}{2\theta_2^2} \right) \\
& + \theta_3^2 \exp\left( - \frac{(x_1-x_2)^2}{2\theta_4^2} - \frac{2\sin^2(\pi (x_1 - x_2))}{\theta_5^2} \right) \\
& + \theta_6^2 \left( 1 + \frac{(x_1 - x_2)^2}{2\theta_7^2 \theta_8} \right)^{-\theta_8} \label{co2RationalQuadratic}\\
& + \theta_9^2 \exp\left( - \frac{(x_1-x_2)^2}{2\theta_{10}^2} \right),
\end{align}
where the $\theta_i$ comprise the kernel hyperparameters (in addition to the noise variance $\sigma^2$).
The different components of this kernel encode different pieces of prior knowledge.
The two squared exponentials encode long-term trends and small-scale noise, respectively.
The rational quadratic kernel (\cref{co2RationalQuadratic}) encodes small seasonal variability in \co2 levels between different years.
The periodic kernel captures the periodic trend in \co2 levels, which peak in the summer and reach their minimum in the winter.
This periodic component is multiplied by a squared exponential to allow deviations away from exact periodicity.
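As a concrete companion to the formula above, the four-term kernel can be sketched in Python (numpy only; this helper is ours, not the authors' code, and the hyperparameter values plugged in are the fitted values reported in this section):

```python
import numpy as np

# Fitted hyperparameters reported in this section (theta_11, the noise term, is not used here).
theta = {1: 68.58, 2: 69.09, 3: 2.55, 4: 87.60, 5: 1.44,
         6: 0.66, 7: 1.18, 8: 0.74, 9: 0.18, 10: 0.13}

def k0(x1, x2, th=theta):
    """Sum-of-four-terms Mauna Loa kernel, term by term as in the display above."""
    d = x1 - x2
    long_term = th[1]**2 * np.exp(-d**2 / (2 * th[2]**2))               # long-term trend
    seasonal = th[3]**2 * np.exp(-d**2 / (2 * th[4]**2)
                                 - 2 * np.sin(np.pi * d)**2 / th[5]**2)  # quasi-periodic
    medium = th[6]**2 * (1 + d**2 / (2 * th[7]**2 * th[8]))**(-th[8])   # rational quadratic
    small = th[9]**2 * np.exp(-d**2 / (2 * th[10]**2))                  # small-scale noise
    return long_term + seasonal + medium + small
```

At $x_1 = x_2$ every exponential and rational-quadratic factor equals one, so the kernel variance is simply the sum of the four $\theta_i^2$ amplitudes.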
Similar to \cite[Section 5.4.3]{RasmussenWilliams2006}, we first compute the empirical mean of the training data's \co2 levels. We use this as the mean function for the GP.
To set the GP hyperparameters, we find that the hyperparameter values reported in \cite[Section 5.4.3]{RasmussenWilliams2006} are close to, but not exactly, the MMLE solution on our data set, because the Jacobian of the marginal log-likelihood has an entry substantively different from zero.\footnote{This discrepancy might be due to the existence of slightly different versions of the Mauna Loa data set. The original link for the data, \url{http://cdiac.esd.ornl.gov/ftp/trends/co2/maunaloa.co2}, is no longer responsive, for instance.} We set hyperparameters by $10$ random restarts of MMLE, where the solution iterates are initialized at the values reported in \cite[Section 5.4.3]{RasmussenWilliams2006}. The fitted values are $\theta_1 = 68.58$, $\theta_2 = 69.09$, $\theta_3 = 2.55$, $\theta_4 = 87.60$, $\theta_5 = 1.44$, $\theta_6 = 0.66$, $\theta_7 = 1.18$, $\theta_8 = 0.74$, $\theta_9 = 0.18$, $\theta_{10} = 0.13$, $\theta_{11} = 0.19.$ They are, for the most part, within $5\%$ of the values reported in \cite[Section 5.4.3]{RasmussenWilliams2006}.
When \cite{RasmussenWilliams2006} ran their analysis, only data up to 2003 were available.
As it turns out, their analysis significantly underestimates current \co2 levels.
We ask if a qualitatively interchangeable kernel could have changed this result.
Thus, we let $F^\star$ be the mean of the posterior in June 2020 and set our substantive change level $\Delta$ to be the observed \co2 level in June 2020.
$k_0$ is stationary, so we could search for alternative stationary kernels using our spectral density framework from \cref{sec:nearbyKernels} (\cref{constraintSet:stationary}).
However, there is good reason to think we might want to consider non-stationary prior beliefs.
Developments in technology and/or global policy could have a large impact on \co2 levels.
Thus, we might encode past / expected future changes in technology and policy into our prior beliefs, making our prior beliefs non-stationary.
Thus, we use the input warping approach from \cref{sec:nearbyKernels} (\cref{constraintSet:nonStationary}).
However, we do not input warp the entirety of $k_0$.
As we know \co2 data has a regular periodicity, we leave the periodic component of the kernel, $\exp[ - 2\sin^2(\pi(x_1 - x_2)) / \theta^2_5 ]$, unwarped.
In preliminary experiments, we input warped the entirety of $k_0$; the resulting prior draws sometimes had minima in the summer and maxima in the winter, a clear violation of our prior knowledge about \co2 levels.
We input warp all other parts of $k_0$ using a two-hidden-layer fully connected network with $50$ units and ReLU nonlinearities to parameterize $h$.
Finally, to ensure the optimal $k_1$ is finite, we use $\ell(k_1; F^\star, \Delta) = (F^\star(k_1) - \Delta)^2$ in \cref{constraintSet:nonStationary}, which guarantees that our objective is bounded below.
We plot noise-matched prior draws for $k_0$ and the $k_1$ that achieves $F^\star(k_1) = \Delta$ in \cref{fig:app-maunaLoaPriors}.
The samples from $k_1$ appropriately line up with the expected maxima and minima of \co2 levels (to see this, note that the draws from $k_1$ are in-phase with those from $k_0$, which correctly captures the seasonal maxima and minima).
The deviations between the noise-matched samples do not seem significant, so we say that $k_1$ and $k_0$ are qualitatively interchangeable.
We therefore conclude that the prediction of \co2 levels under $k_0$ is non-robust to the choice of the kernel in the sense of \cref{def:nonRobust}.
We also show the relative Frobenius norms of samples drawn from a Laplace approximation about the MMLE estimate and our perturbed kernel's Gram matrix in \cref{fig:app-ml-hist}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figures/app_maunaloa-histogram}
\caption{Relative Frobenius norms of Gram matrices constructed with samples drawn from a Laplace approximation (in blue) and our perturbed kernel (in red).}
\label{fig:app-ml-hist}
\end{figure}
This dataset is not about people, and so issues of consent and personally identifying information are not relevant.
The Mauna Loa experiments were run on a laptop with a 2.3 GHz 8-Core Intel Core i9, with 64 GB of RAM. The experiment (which optimizes from five random initializations) took about 15 minutes to run, with each seed taking about 3 minutes.
\section{More details on MNIST experiments}
\label{app:MNIST}
\begin{figure}[t]
\includegraphics[width=\textwidth]{figures/appendix_mnist-grid.png}
\caption{Additional MNIST experiments. Here we visualize the training and test set performances along the hyperparameter grid used for assessing qualitative interchangeability. The train and test accuracies exhibit high performance and low variability across the grid.}
\label{fig:app_minist_grid}
\end{figure}
MNIST data is available under Creative Commons Attribution-Share Alike 3.0.
We use the publicly available \texttt{neural-tangents}~\citep{neuraltangents2020} package for constructing the kernels in our MNIST experiments. The package is available under Apache License Version 2.0.
We follow the experimental setup of \cite{Lee2018_dnn_gp} where the authors use a Gaussian process with a kernel corresponding to a $20$ layer, infinitely wide, fully connected, deep neural network with ReLU non-linearities. They place zero mean Gaussian priors over the weights, $\mathcal{N}(0, \sigma^2_w)$, and biases, $\mathcal{N}(0, \sigma^2_b)$, and set the hyper-parameters $\sigma^2_w = 1.45$ and $\sigma^2_b = 0.28$ via a grid search over parameters to maximize held-out predictive performance. \cite{Lee2018_dnn_gp} use a GP with $C=10$ outputs (classes). They pre-process one-hot encoded output vectors to have zero mean, i.e. $y_{ic} = 0.9$ if $c$ is the correct class for the $i$th training point, and $y_{ic} = -0.1$ for all incorrect classes; input images are flattened and an overall mean is subtracted from every image. Test prediction is made by selecting a class corresponding to the GP output with mean closest to $0.9$. The resulting GP trained on one thousand images from the MNIST training set and evaluated on the MNIST test set achieves an accuracy of $92.79\%$.
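The label pre-processing and prediction rule just described can be sketched as follows (a minimal sketch; the helper names are ours, and `posterior_means` stands in for the GP posterior mean outputs at test points):

```python
import numpy as np

def encode_labels(labels, num_classes=10):
    """One-hot encode, then shift so the correct class is 0.9 and all others are -0.1."""
    Y = np.full((len(labels), num_classes), -0.1)
    Y[np.arange(len(labels)), labels] = 0.9
    return Y

def predict_classes(posterior_means):
    """Predict the class whose posterior mean output is closest to 0.9."""
    return np.argmin(np.abs(posterior_means - 0.9), axis=1)
```

Note that the shifted targets have zero mean across classes ($0.9 + 9 \times (-0.1) = 0$), which is the point of the encoding.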
In our experiments we assess the robustness of their kernel. The $28 \times 28$ MNIST images require a warping function $g : \R^{784} \rightarrow \R^{784}$. We use a fully connected multi-layer perceptron with one $784$ unit hidden layer, $784$ input, and $784$ output units with ReLU non-linearities to parametrize $g$. Let $c_0$ be the prediction under the original kernel at a target test image $\xtest$. We define $c_1 := |c_0 - 1|$ and create a ``fake'' output $y^*$ with $y^*_{c_1} = 0.9$ and $y^*_{c} = -0.1$ for $c \neq c_1$. We find parameters of $g$ by minimizing the objective in Algorithm \ref{alg:non-stationary} plugging in
\begin{equation}
\label{eq:sup:l-mnist}
\ell(k; F^\star, \Delta) = -\frac{1}{C}\sum_{c=1}^{10} \log p(y^*_c | X, \xtest, Y),
\end{equation}
i.e. the negative log-likelihood of the ``fake'' output at a particular test image $\xtest$ under the perturbed kernel; $X$ and $Y$ are the train inputs and outputs. As we discussed in the main text, directly optimizing the posterior quantity of interest $F^\star = \mu_{c_{1}}(\xtest) - \mu_{c_{0}}(\xtest)$ produces unrealistic outputs, e.g.\ $\mu_{c_{0}}(\xtest) \ll -0.1$.
Such predictions would look obviously suspicious to a user, so we would say that our supposed malicious actor has not achieved their goal in this case.
Instead, we optimize the surrogate loss in \cref{eq:sup:l-mnist}. With this surrogate loss we are able to find kernel perturbations yielding benign-looking outputs and achieving the goal of the malicious actor to change the prediction at $\xtest$ to $c_1$, i.e.\ $\mu_{c_{1}}(\xtest) \approx 0.9$ and $\mu_{c}(\xtest) \approx -0.1$ for all $c \neq c_1$.
In this case, we feel that a user would not be able to identify these predictions as obviously wrong, and so we say that the malicious actor has achieved their goal of changing the predictions of $k_0$ without detection.
\paragraph{Hyperparameter sensitivity grid} To quantify variability in the Gram matrices arising from hyperparameter uncertainty, we vary $\sigma^2_w$ over $30$ uniformly spaced points between $1.4$ and $1.5$, and $\sigma^2_b$ over $30$ uniformly spaced points between $0.23$ and $0.33$. \cref{fig:app_minist_grid} shows that over the $900$ possible hyperparameter combinations the train and test accuracies remain high and exhibit low variability.
We do not know if the creators of the MNIST dataset obtained consent from each person who wrote the digits; however, we are not aware of any issues in this regard.
The dataset contains no immediately personally identifying information, and we strongly suspect that no personally identifying information could be recovered given only pictures of an anonymous person's handwriting.
We are confident that this dataset contains no offensive information.
The experiment took approximately 55 minutes to run for a single test image. We ran the computations for the 1000 test images in parallel on a compute cluster with Intel Xeon E5-2667 v2, 3.30GHz cores, requesting one core each time.
\section{Related work}
\label{app:related_work}
TODODO
\section{Details of spectral density constraints}
\label{app:spectral_density}
Here, we give the details of how we optimize over spectral densities to produce a stationary kernel as summarized in \cref{constraintSet:stationary}. Our goal is to optimize over the set of stationary kernels.
It is not immediately clear how to enforce this constraint; however, Bochner's theorem \cite[Thm.\ 4.1]{RasmussenWilliams2006} tells us that every stationary kernel $k(\tau)$, where $\tau \in \R^D$ has a positive finite \emph{spectral measure} $\mu$ such that:
\begin{equation}
k(\tau) = \int_{\R^D } e^{2\pi i \tau^T \omega} d\mu(\omega).
\end{equation}
A common assumption in the literature on kernel discovery \cite{WilsonAdams2013_sm,Benton2019_function,Wilson2016_dkl} is to assume that $\mu$ has a density $S$ with respect to the Lebesgue measure; that is, we can write:
\begin{equation}
k(\tau) = \int_{\R^D} e^{2\pi i \tau^T \omega} S(\omega) d\omega. \label{spectralDensityGeneral}
\end{equation}
These works have shown that the class of stationary kernels with spectral densities is a rich, flexible class of kernels.
In all of our examples optimizing over spectral densities, we have $D = 1$.
We thus assume $D = 1$ in the rest of our development here.
In this case, it must be that $S$ is symmetric around the origin to obtain a real-valued $k$.
So, we can simplify \cref{spectralDensityGeneral} further as:
\begin{equation}
k(\tau) = \int_0^\infty \cos(2\pi \tau\omega) S(\omega) d\omega. \label{finalSpectralDensity}
\end{equation}
Optimizing over positive functions $S$ on the positive real line seems at least somewhat more tractable than optimizing over stationary positive-definite functions $k(\tau)$.
However, this is still an infinite dimensional optimization problem.
To recover a finite dimensional optimization problem, we follow \cite{Benton2019_function} and choose a grid $\omega_1, \dots, \omega_G$.
We can then optimize over the finite values $S(\omega_1), \dots, S(\omega_G)$ and use the trapezoidal rule to approximate the integral in \cref{finalSpectralDensity}.
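This trapezoidal discretization can be sketched as follows (a minimal sketch, with our own helper names; the trapezoidal rule is written out explicitly rather than relying on a library routine):

```python
import numpy as np

def trapezoid(y, x):
    """Plain trapezoidal rule along the last axis."""
    dx = np.diff(x)
    return np.sum(dx * (y[..., 1:] + y[..., :-1]) / 2.0, axis=-1)

def kernel_from_spectral_density(tau, omegas, S_vals):
    """Approximate k(tau) = int_0^inf cos(2 pi tau omega) S(omega) d omega on a fixed grid."""
    tau = np.asarray(tau, dtype=float)
    integrand = np.cos(2.0 * np.pi * tau[..., None] * omegas) * S_vals
    return trapezoid(integrand, omegas)
```

As a sanity check, for a Gaussian density $S(\omega) = e^{-\omega^2/2}$ the integral has the closed form $\sqrt{\pi/2}\, e^{-2\pi^2\tau^2}$, which a moderately fine grid reproduces to several digits.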
\cite{Benton2019_function} find that $G = 100$ gives reasonable performance in their experiments; we find the same in ours, and fix $G = 100$ throughout.
\cite{Benton2019_function} recommend setting $\omega_g = 2\pi g / (8\tau_{max})$, where $\tau_{max}$ is the maximum spacing between datapoints.
We find this sometimes gives inaccurate results, in the sense that computing the spectral density of $k_0$ at the grid points, $S(\omega_1), \dots, S(\omega_G)$ (via the trapezoidal rule or an exact formula), and then applying the trapezoidal rule to recover the Gram matrix $k_0(X,X)$ gives an inaccurate approximation to $k_0(X,X)$.
This is problematic in our case, as it would imply $k_0(X,X)$ is not in the constraint set for small $\eps$.
Instead, we recommend setting the $\omega_g$ as a uniform grid from $\omega_1 = 0$ up to an $\omega_G$ such that $S_0(\omega_G)$ equals the floating point epsilon ($10^{-15}$ in our experiments); some manual experimentation is required to implement this rule.
As we are only interested in kernels near $k_0$, we must put some constraint on $k_1$'s spectral density, $S_1(\omega_1), \dots, S_1(\omega_G)$.
We use a simple $\eps$-ball given by:
\begin{equation}
\max \big( 0, (1-\eps) S_0(\omega_g) \big) \leq S(\omega_g) \leq (1+\eps) S_0(\omega_g), \quad g = 1, \dots, G.
\label{spectralConstraint}
\end{equation}
As long as our posterior functional of interest $F^\star$ is a differentiable function of the kernel matrix, we can compute gradients of $F^\star$ with respect to our discretized spectral density.
Rather than manually work out the derivatives of the trapezoidal rule combined with $F^\star$, we use the automatic differentiation package \texttt{jax}\footnote{Note that \texttt{jax} does not use 64 bit floating point numbers by default. We found the increased precision given by 64 bit floating point arithmetic to be important in our experiments.} \citep{jax2018_github}.
Given a gradient of $F^\star$, we take a step in the direction of the gradient and then project the current iterate onto our constraint set in \cref{spectralConstraint} by clipping the resulting spectral density.
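One projected-gradient iteration, with the clipping projection onto the box constraint in \cref{spectralConstraint}, might look like the following (a sketch with our own names; \texttt{grad} stands in for the autodiff gradient of $F^\star$):

```python
import numpy as np

def project_onto_constraint(S, S0, eps):
    """Project a candidate spectral density onto the eps-ball around S0 by clipping."""
    lower = np.maximum(0.0, (1.0 - eps) * S0)
    upper = (1.0 + eps) * S0
    return np.clip(S, lower, upper)

def projected_gradient_step(S, S0, grad, step_size, eps):
    """Ascent step on F^star followed by projection back into the constraint set."""
    return project_onto_constraint(S + step_size * grad, S0, eps)
```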
\section{Additional details of synthetic-data experiment}
\label{app:synthetic-data}
We generated the $x$-component of the synthetic data by first drawing $25$ uniformly random numbers in $[0,5].$
To increase the density around the interpolation point $\xtest = 2.00$, we then draw $10$ uniformly random numbers in $[1.9,2.1].$
The extrapolation point $\xtest = 5.29$ lies $0.5$ to the right of the largest $x$ value drawn.
The $y$-component is defined to be
\begin{equation*}
y_i = \frac{x_i^2}{2} + \cos(\pi x_i) + \epsilon_i,
\end{equation*}
where $\epsilon_i \stackrel{iid}{\sim} \N(0, 0.01)$.
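The data-generating process above can be sketched as (note that $\N(0, 0.01)$ has standard deviation $0.1$; the helper name is ours):

```python
import numpy as np

def make_synthetic_data(rng):
    # 25 uniform x values on [0, 5], plus 10 more near the interpolation point 2.0
    x = np.concatenate([rng.uniform(0.0, 5.0, size=25),
                        rng.uniform(1.9, 2.1, size=10)])
    # y = x^2/2 + cos(pi x) + Gaussian noise with variance 0.01 (std 0.1)
    y = x**2 / 2 + np.cos(np.pi * x) + rng.normal(0.0, 0.1, size=x.shape)
    return x, y
```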
We chose the $\eps$ grid for extrapolation ($\xtest = 5.29$) to be $15$ evenly spaced values between $0.2$ and $0.8.$ The grid for interpolation ($\xtest = 2.00$) is $15$ values evenly spaced on a log scale between $10^{0.1}$ and $10^{1}.$
To discretize the spectral density, we follow \cref{app:spectral_density} in using $100$ frequencies evenly spaced from $0$ to $2.$ To optimize over nearby spectral densities, we perform constrained gradient descent with randomized initializations as in \cref{app:spectral_density}. For extrapolation ($\xtest = 5.29$), using $25$ random seeds, we detect non-robustness. For interpolation ($\xtest = 2.00$), even with $40$ random seeds, we fail to detect non-robustness.
Computing is done using a computing cluster, which has xeon-p8 computing cores. We request $7$ nodes, each using $15$ cores to run parallel experiments across both $\epsilon$ and the random seed for initialization. Total wall-clock time comes to roughly $5$ minutes.
\section{Discussion}
\label{sec:discussion}
In this paper, we proposed and implemented a workflow for measuring the sensitivity of GP inferences to the choice of the kernel function.
We used our workflow to discover substantial non-robustness in a variety of practical examples, but also showed that many analyses are not flagged as non-robust by our method.
We feel that robustness quantification, as in the present paper, will primarily have a positive impact: we hope that robustness quantification can help overcome brittle or cherry-picked machine learning analyses that will fail to yield stable conclusions in real practice.
However, a potential negative outcome of our robustness quantification machinery is that it may be repurposed for adversarially changing the predictions of a GP model.
There are many exciting directions for expanding on the present work -- both within our existing workflow and beyond. Many of our choices in the present paper were made for mathematical convenience. For instance, the constraint in our stationary objective in \cref{constraintSet:stationary} and the regularizer in our non-stationary objective in \cref{constraintSet:nonStationary} might be replaced by other notions of ``nearby'' spectral densities or ``small'' input warpings, respectively.
Additionally, our framework flags robustness but does not show how to make an analysis more robust. The instances of non-robustness we have found suggest it might be worthwhile to develop methods to robustify GP inferences to the choice of kernel. One challenge would be understanding how to best balance sensitivity, which can be desirable in that a method should adapt to the data at hand, and robustness, to encourage stability and generalization of results.
\section{Methods}
We begin with an overview of our approach:
\begin{enumerate}
\item Given a kernel $k_0$, define an $\epsilon$-set, $\mathcal{N}_\epsilon(k_0)$, centered at $k_0$. The set $\mathcal{N}_\epsilon(k_0)$ is restricted to valid kernel functions.
\item If there exists a $\tilde{k} \in \mathcal{N}_\epsilon(k_0)$ such that a prediction or inference of interest under a GP model with $k_0$ and a GP model with $\tilde{k}$ differ by $\delta$ or more, then we deem $k_0$ not $(\epsilon, \delta)$-robust.
\end{enumerate}
We now fill in details about how $\mathcal{N}_\epsilon(k_0)$ is constructed and how we search for $\tilde{k} \in \mathcal{N}_\epsilon(k_0)$.
\section{Introduction}
\input{intro}
\section{Our Workflow}
\input{overview}
\input{toy_illustration}
\section{Stationary perturbations to a model of heart rates}
\input{heart_rate}
\section{Non-stationary perturbations to a model of carbon dioxide emissions}
\input{mauna_loa}
\section{Non-stationary perturbations in classifying MNIST digits}
\input{mnist}
\input{discussion}
\section*{Acknowledgments}
This work was supported by the MIT-IBM Watson AI Lab, an NSF CAREER Award, an ARPA-E project with program director David Tew, and an ONR MURI.
\small
\bibliographystyle{plain}
\subsection{Overview of our approach}
\label{sec:overview}
\textbf{Setup and notation.} Consider data $\mathcal{D} = \{(\bx_{n}, y_{n})\}_{n = 1}^{N}$, with covariates $\bx_n \in \R^{D}$ and outcomes $y_n \in \R.$
Take a regression model $y_{n} \sim \N(f(\bx_{n}), \sigma^{2})$ with $\sigma^2 > 0$ and a Gaussian process (GP) prior on $f$; we parameterize the GP with mean zero and covariance kernel $k$: $f \sim \GP(0, k).$ Let $k_0$ be the practitioner-chosen kernel.
Typically, $k_{0}$ depends on hyperparameters $\theta$ (including $\sigma^2$); unless stated otherwise, we assume that $\theta$ is estimated using maximum marginal likelihood estimation (MMLE).
That is,
$$
\theta = \argmax_{\tilde{\theta}}p(y_{1}, \ldots, y_{N} \mid \bx_{1}, \ldots, \bx_{N}, \tilde{\theta}).
$$
We make a decision based on the posterior predictive at a fixed test point. Let $\xtest \not\in \{\bx_{1}, \ldots, \bx_{N}\}$ be the test point; the posterior predictive is $f^{\star} \mid \mathcal{D}$ with $f^{\star} := f(\xtest).$
Let $F^\star(k)$ be a scalar functional, such as a mean or quantile, of this distribution; we make the dependence on the kernel $k$ of the GP explicit. Let the level $\Delta \in \R$ represent a decision threshold in $F^\star(k)$. That is, we make one decision when $F^\star(k) \ge \Delta$ and a different one when $F^\star(k) < \Delta$.
For example, let time be a single covariate, and let outcome be the resting heart rate of a hospital patient. $F^\star(k)$ could be the 95th percentile of the GP posterior at a particular time; an alarm might trigger if $F^\star(k)$ is greater than $\Delta=$130 bpm but not otherwise \cite{Fidler2017_heartRateThreshold}.
We want to assess whether our decision would change if we used a qualitatively interchangeable kernel. Without loss of generality, we assume that $F^\star(k_0) < \Delta$ in what follows. Then we can define non-robustness.
\begin{defn}
For original kernel $k_0$, we say that our decision $F^\star(k_0) < \Delta$ is non-robust to the choice of kernel if there exists a kernel $k_1$ that is qualitatively interchangeable with $k_0$ and $F^\star(k_1) \ge \Delta$.
\label{def:nonRobust}
\end{defn}
\textbf{Workflow overview.}
Our workflow is summarized in \cref{alg:workflow}. We start by defining a set $\Keps$ of kernels that are ``$\eps$-near'' $k_0$.
We solve the optimization problem:
\begin{equation}
k_1 = \argmax_{k \in \Keps} F^\star(k).
\label{mainOptimizationProblem}
\end{equation}
Then we increase $\eps$ until the optimum of \cref{mainOptimizationProblem} changes our decision.
Finally, we check whether the decision-changing kernel, $k_1$, is qualitatively interchangeable with $k_0$.
It remains to precisely define a set of ``$\eps$-near'' kernels (\cref{sec:nearbyKernels}), to show that we can efficiently solve \cref{mainOptimizationProblem} (\cref{sec:nearbyKernels}), and to provide ways to assess qualitative interchangeability (\cref{sec:QI}). We provide a number of practical choices for each step in the remainder of this section and then illustrate on simulated and real data sets in subsequent sections.
Note that although \cref{alg:workflow} can detect non-robustness, it cannot certify robustness; it is possible, even though it may be unlikely, that there exists a qualitatively interchangeable kernel that the methodology has not detected but that still changes the decision. This point is generally true of sensitivity analyses, and the present workflow is no exception.
This observation is also similar in spirit to classical hypothesis tests: a user can reject -- but not accept -- a null hypothesis.
\begin{algorithm}
\caption{Workflow for assessing robustness of GP inferences to kernel choice}
\label{alg:workflow}
\begin{algorithmic}[1]
\State Choose initial kernel $k_0$ using prior information.
\State Choose posterior quantity of interest $F^\star$. \Comment{E.g.\ Posterior mean at test point $\xtest$}
\State Define decision threshold $\Delta$ \Comment{E.g.\ 130 bpm is an alarming resting heart rate}
\State Define ``$\eps$-near'' kernels $\Keps$, for $\eps > 0$ \Comment{\cref{sec:nearbyKernels}} \label{line:Kchoice}
\While{$F^\star(k_1) < \Delta$}
\State Expand $\Keps$ (increase $\eps$) and re-solve \cref{mainOptimizationProblem} to get $k_1$. \Comment{\cref{sec:nearbyKernels}}
\EndWhile
\State Assess whether $k_0$ and $k_1$ are qualitatively interchangeable. \Comment{\cref{sec:QI}}
\If{$k_0$ and $k_1$ qualitatively interchangeable}
\Return ``$F^\star$ is non-robust to the choice of kernel.''
\Else
\, \Return ``Failed to find that $F^\star$ is non-robust to the choice of kernel.''
\EndIf
\end{algorithmic}
\end{algorithm}
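The outer loop of \cref{alg:workflow} can be sketched as follows (a minimal sketch: \texttt{solve\_inner} stands in for the constrained optimization of \cref{mainOptimizationProblem}, \texttt{interchangeable} for the qualitative assessment of \cref{sec:QI}, and all names are ours):

```python
def robustness_workflow(F_star, Delta, solve_inner, eps_grid, interchangeable):
    """Expand the constraint set until the decision F_star(k) < Delta flips,
    then check qualitative interchangeability of the decision-changing kernel."""
    for eps in eps_grid:
        k1 = solve_inner(eps)          # argmax of F_star over K_eps
        if F_star(k1) >= Delta:        # decision flipped
            if interchangeable(k1):
                return "non-robust", k1
            return "failed to find non-robustness", k1
    return "failed to find non-robustness", None
```

On a toy problem where the best kernel in the $\eps$-ball is identified with $\eps$ itself, the loop stops at the first $\eps$ crossing the threshold.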
\subsection{Nearby kernels and efficient optimization}
\label{sec:nearbyKernels}
We give two practical examples of how to choose $\Keps$ in the present work and detail how to solve \cref{mainOptimizationProblem} in each case. In both cases, we assume we have prior information that our kernel $k$ is smooth. First, we consider the case where we assume $k$ should be stationary. Second, we allow non-stationary kernels $k$.
\textbf{Stationary kernels.}
By Bochner's theorem, every stationary kernel can be represented by a positive measure \cite[Thm.\ 4.1]{RasmussenWilliams2006}.
In the kernel discovery literature, it is common to make the additional assumption that this measure has a density $S(\omega) = \int e^{ -2\pi i \tau\omega} k(\tau) d\tau$, where $\tau = \bx - \bx'$ \cite{WilsonAdams2013_sm,Benton2019_function,Wilson2016_dkl}.
These authors show that the class of stationary kernels with a spectral density is a rich, flexible class of kernels.
Following these works, we optimize over spectral densities $S(\omega)$ -- which are just positive integrable functions on the real line -- to recover stationary kernels.
To make this optimization problem finite dimensional, we optimize the spectral density over a finite grid of frequencies $\omega$ and use the trapezoidal rule to recover $k$.
To guarantee that our recovered kernels are not overly dissimilar from $k_0$, we constrain ourselves to an $\eps$ ball in the $\ell_\infty$ norm around the spectral density of $k_0$ for some $\eps > 0$.
We summarize this constraint set and the resulting optimization objective in \cref{constraintSet:stationary}; see \cref{app:spectral_density} for more details.
\begin{algorithm}
\caption{Objective and $\Keps$ for stationary kernels}
\begin{algorithmic}[1]
\label{constraintSet:stationary}
\Statex \textbf{Objective}
\State \textbf{Input}: Frequencies $\omega_1, \dots, \omega_G$, and density values $S(\omega_1), \dots, S(\omega_G)$.
\State Use trapezoidal rule to approximate the integral $k(\tau) = \int e^{2\pi i \tau\omega} S(\omega) d\omega$.
\State \Return{ $F^\star(k)$.}
\end{algorithmic}
%
\begin{algorithmic}[1]
\Statex
\Statex \textbf{Spectral constraint defining $\Keps$}
\State \textbf{Input}: Frequencies $\omega_1, \dots, \omega_G$, density values $S(\omega_1), \dots, S(\omega_G)$, constraint set size $\eps > 0$.
\State Compute spectral density of $k_0$, $S_0(\omega_1), \dots, S_0(\omega_G)$ via trapezoidal rule or exact formula.
\State Constrain spectral density $S$ of $k$ as:
$$
\max \big( 0, (1-\eps) S_0(\omega_g) \big)
\leq
S(\omega_g)
\leq
(1+\eps) S_0(\omega_g),
\quad g = 1, \dots, G.
$$
\end{algorithmic}
\end{algorithm}
\textbf{Non-stationary kernels.}
In many modeling problems, stationarity may be a choice of convenience rather than prior conviction, or one may believe non-stationarity is probable. In either case, we wish to allow non-stationary kernels in the neighborhood $\Keps$. While there are several methods for constructing non-stationary kernels, a particularly convenient technique for our purposes relies on input warping~\cite[Sec 4.2.3]{RasmussenWilliams2006}. Given a kernel $k_0$ and a non-linear mapping $g$, we define a perturbed kernel $k(\bx, \bx') = k_0(g(\bx), g(\bx'))$.
This construction guarantees that the perturbed kernel $k$ is a kernel function as long as $k_0$ is a valid kernel. We let the function $g$ have parameters $w$ and set
$
g(\bx; w) := \bx + h(\bx; w),
$
where $h: \R^D \rightarrow \R^D$ is a small neural network with weights $w$.
By controlling the magnitude of $h$, we can control the deviations from $k_0$.
We could optimize over the set of non-stationary kernels with respect to the weights $w$, under the constraint $\|h(\bx;w)\|_2^2 \leq \eps$.
However, it is unclear how to enforce this constraint.
Instead, we select a grid of points $\tilde{\bx}_1, \ldots, \tilde{\bx}_M \in \R^D$ and add a regularizer to our objective,
$
\frac{1}{\eps} \frac{1}{M}\sum_{m=1}^M \|h(\tilde{\bx}_m; w)\|_2^2,
$
where $\eps$ controls the regularization strength.
While we would ideally regularize using the entire function $h$, we find using a grid of points to be a computationally cheap, mathematically simple, and empirically effective approximation.
We summarize our objective as a function of the network weights $w$ in \cref{constraintSet:nonStationary}.
Note that we have also changed our objective to include a generic loss $\ell$; some care needs to be taken to ensure that the optimal $k_1$ is finite.
See \cref{sec:co2,sec:MNIST} for specific implementations of $\ell$.
Given the $\hat w$ minimizing the objective in \cref{constraintSet:nonStationary}, we set $k_1(\bx, \bx') = k_0(g(\bx ; \hat w), g(\bx'; \hat w))$.
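The input-warping construction and the regularized objective can be sketched as follows (a toy sketch: a tiny fixed function stands in for the neural network $h$, and the helper names are ours):

```python
import numpy as np

def make_warped_kernel(k0, h, w):
    """k(x, x') = k0(g(x), g(x')) with g(x) = x + h(x; w)."""
    def k(x1, x2):
        return k0(x1 + h(x1, w), x2 + h(x2, w))
    return k

def regularized_objective(loss, h, w, grid, eps):
    """Generic loss plus the mean squared magnitude of h on a grid, scaled by 1/eps."""
    penalty = np.mean([np.sum(h(x, w)**2) for x in grid])
    return loss + penalty / eps
```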
\begin{algorithm}
\caption{Objective for non-stationary kernels}
\begin{algorithmic}[1]
\label{constraintSet:nonStationary}
\State \textbf{Input:} Regularizer grid points $\tilde{\bx}_1, \dots, \tilde{\bx}_M$, regularizer strength $\eps > 0$, neural network weights $w$.
\State Define neural network $h(x; w)$ parameterized by weights $w$.
\State Define $k(\bx, \bx') := k_0(\bx + h(\bx; w), \bx' + h(\bx'; w))$.
\State \Return{$\ell(k; F^\star, \Delta) + \frac{1}{\eps} \frac{1}{M}\sum_{m=1}^M ||h(\tilde{\bx}_m; w)||_2^2$ }
\end{algorithmic}
\label{alg:non-stationary}
\end{algorithm}
\subsection{Assessing qualitative interchangeability}
\label{sec:QI}
We introduce two visual assessments, similar to prior predictive checks \citep{Gabry2019_bayesianViz}, to assess qualitative interchangeability.
\textbf{Visual comparison of prior predictive draws.}
When the covariates $\bx$ are one-dimensional, we can plot a small collection of functions drawn from each of the two distributions $\GP(0, k_{0})$ and $\GP(0, k_{1}).$
To ensure that visual differences between the priors are due to actual differences in the kernels and not randomness in the draws, we use \emph{noise-matched} prior draws. To define noise-matched draws, recall that one can draw from an $N$-dimensional Gaussian distribution $\mathcal{N}(0, \Sigma)$ by computing the Cholesky decomposition $LL^\top = \Sigma$;
we then have $L z \sim \mathcal{N}(0, \Sigma)$, where $z \sim \mathcal{N}(0, I_N)$.
We say that draws from two multivariate Gaussians are noise-matched if they use the same $z \sim \mathcal{N}(0, I_N)$.
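The noise-matched construction just described can be sketched as (a minimal sketch with our own helper name; a small jitter is added for numerical stability of the Cholesky factorization):

```python
import numpy as np

def noise_matched_draws(K0, K1, num_draws, rng, jitter=1e-9):
    """Draw from N(0, K0) and N(0, K1) reusing the same standard normal z per draw."""
    n = K0.shape[0]
    L0 = np.linalg.cholesky(K0 + jitter * np.eye(n))
    L1 = np.linalg.cholesky(K1 + jitter * np.eye(n))
    Z = rng.standard_normal((n, num_draws))   # shared noise across the two priors
    return L0 @ Z, L1 @ Z
```

For example, if $K_1 = 4 K_0$, every noise-matched draw from $K_1$ is (up to jitter) exactly twice the corresponding draw from $K_0$, making the scale difference visually unmistakable.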
If the user believes the two plots express the same qualitative information, the kernels are qualitatively interchangeable under this test.
Two potential drawbacks to this method are as follows. (1) When covariates are high-dimensional, it may be difficult to visualize prior draws effectively. (2) The initial motivation for our paper was that some users may have difficulty expressing their prior beliefs; it would not be surprising, then, if some users also had difficulty visually assessing prior draws. To address these concerns, we provide a second test next.
\textbf{Gram matrix comparison.}
No matter the dimension of the covariates, we can compare the Gram matrices $k_{0}(X,X)$ and $k_{1}(X,X)$, whose $(i,j)$ entries are, respectively, $k_{0}(\bx_{i}, \bx_{j})$ and $k_{1}(\bx_{i}, \bx_{j}).$
The two matrices $k_{0}(X,X)$ and $k_{1}(X,X)$ are the prior covariance matrices of the vector $\ftrain = \ftrainvec$ implied by the $\GP(0,k_{0})$ and $\GP(0,k_{1})$ priors.
These two matrices fully characterize the prior distribution of $\ftrain.$
We cannot directly assess a matrix norm of the difference $k_1(X,X) - k_0(X,X)$ because the scale is uncalibrated.
To understand whether the difference between $k_{0}(X,X)$ and $k_{1}(X,X)$ is large, we make $R$ draws, $\{\theta^{(r)}\}_{r=1}^{R}$, from the posterior distribution $p(\theta \mid \mathcal{D})$ (or an approximation thereof) of the original kernel hyperparameters.
For each $r$, we compute the difference between $k_{0}(X,X)$ and $k^{(r)}(X,X),$ where $k^{(r)}$ has the same functional form as $k_{0}$ but with hyperparameters $\theta^{(r)}$ instead of $\theta.$
If the difference between $k_{1}(X,X)$ and $k_{0}(X,X)$ is small relative to the natural posterior variability in the difference between $k_{0}(X,X)$ and $k^{(r)}(X,X)$ across $r$, then we conclude that $k_{1}$ is qualitatively interchangeable with $k_{0}.$
In our experiments, we make the following concrete choices (though others are possible). Unless otherwise stated, we use a Laplace approximation to the posterior around the MMLE hyperparameters.
We compare Gram matrices using a relative Frobenius norm: namely, $\| k_1(X,X) - k_0(X,X) \|_{F} / \|k_0(X,X)\|_F$, where $\|\cdot \|_F$ is the Frobenius norm.
We construct a histogram of the relative Frobenius norms between $k^{(r)}$ and $k_0$ across $r$, with a marker indicating the position of the relative norm between $k_1$ and $k_0$. If the marker lies to the right of the mass of the histogram, we conclude that $k_1$ and $k_0$ are not qualitatively interchangeable; otherwise, we conclude $k_1$ and $k_0$ are qualitatively interchangeable.
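The Gram matrix comparison can be sketched in a few lines of numpy; here the "marker within the histogram mass" criterion is crudely approximated by checking whether the distance falls within the range of the reference distances (an illustrative simplification, not the paper's exact rule):

```python
import numpy as np

def rel_frobenius(K, K0):
    """Relative Frobenius distance ||K - K0||_F / ||K0||_F."""
    return np.linalg.norm(K - K0) / np.linalg.norm(K0)

def gram_comparison(K0, K1, posterior_grams):
    """Compare dist(K1, K0) against the distances induced by posterior draws
    of the hyperparameters. Returns the reference distances, the distance of
    K1, and whether K1 lies within the reference variability."""
    ref = np.array([rel_frobenius(Kr, K0) for Kr in posterior_grams])
    d1 = rel_frobenius(K1, K0)
    return ref, d1, d1 <= ref.max()
```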
\subsection{Related work}
Insofar as the covariance kernel is just a prior hyperparameter, our aims in this paper are similar in spirit to those pursued in a rich literature on Bayesian robustness and local sensitivity.
Specifically, \cite{Gustafson1996_local, Giordano2018_covariance, Liu2018_stickbreaking} carefully study how infinitesimal changes to finite-dimensional hyperparameters can affect posterior expectations.
The methods developed in those papers, however, are not as easy to apply to assess the sensitivity of more complex functionals, like quantiles, to more general changes to the covariance kernel.
Our goals are distinct from those pursued in the automated or flexible kernel discovery literature.
\cite{Duvenaud2013_compositional}, for instance, selects a kernel by iteratively searching over elementary combinations of simpler kernels.
\cite{Wilson2016_dkl}, on the other hand, increases the representational flexibility of elementary kernels by first transforming the inputs using a neural network.
\cite{WilsonAdams2013_sm} and \cite{Benton2019_function} instead model the spectral density of $k$ with, respectively, a mixture of Gaussians and an exponentiated GP.
In all of these works, the goal was to adaptively identify a kernel from a flexible class that fits the data well.
Our focus, in contrast, is to construct a new kernel that (i) encodes prior beliefs similar to those of a given kernel -- whether elicited from real prior beliefs or adaptively identified -- while (ii) producing different posterior predictive inferences.
It is also important to note a distinction between our focus on prior perturbations and the existing literature on robust GPs, which is primarily concerned with assessing sensitivity to perturbations to the data.
\cite{Jylanki2011_student-t} and \cite{Ranjan2016_em}, for instance, reduce the influence of outlying observations in GP regression by replacing the conventional Gaussian noise model with heavier-tailed alternatives.
\cite{KimGhahramani2008_outlier} and \cite{Hernandez-Lobato2011_multiclass} similarly propose GP classifiers that respectively are robust to outliers and label noise.
A more recent line of work concerns potentially adversarial perturbations to a test point $\xtest.$
In the respective contexts of regression and optimization, \cite{Cardelli2019_aaai} and \cite{Bogunovic2018_optimization} carefully control the maximal change in the value of the latent function over a local neighborhood of $\xtest.$
For classification, \cite{Blaas2020_classification} similarly bounds the maximal and minimal values of the class probabilities in a neighborhood of $\xtest$, while \cite{Smith2019_vulnerability} instead bounds the size of the input perturbation required to change the label of a confidently classified test observation.
To the best of our knowledge, the only work that assesses sensitivity to kernel specification is \cite{WangJing2021_convergence}, which establishes optimal convergence rates for GP regression when the smoothness implied by the selected covariance kernel is misspecified.
While interesting and important, their results do not provide insight into how kernel misspecification affects functionals like posterior predictive means or quantiles in finite samples, nor do they provide specific information about any given application.
\subsection{Workflow illustration on synthetic data}
\label{sec:syntheticExample}
\begin{figure}[h]
\includegraphics[width = 0.245\textwidth]{figures/synthetic_data.png}
\includegraphics[width = 0.245\textwidth]{figures/synthetic_prior_original.png}
\includegraphics[width = 0.245\textwidth]{figures/synthetic_prior_extrapolation.png}
\includegraphics[width = 0.245\textwidth]{figures/synthetic_prior_interpolation.png}
\caption{Synthetic data (\emph{far left}). Draws from the original prior $\GP(0, k_{0})$ (\emph{center left}) and alternative priors $\GP(0,k_{1})$ for extrapolation at $\xtest = 5.29$ (\emph{center right}) and interpolation at $\xtest = 2.00$ (\emph{far right}). The prior draws are noise-matched (\cref{sec:QI}). The gray-shaded area in the right three plots is the 99.7\% uncertainty interval for the original prior ($\GP(0,k_{0})$). }
\label{fig:syntheticData_all}
\end{figure}
\textbf{Data and prior.}
Before turning to real data, we illustrate our workflow with a synthetic-data example.
We consider $N=35$ data points with a single covariate; see the leftmost panel of \cref{fig:syntheticData_all}.
We assume we have qualitative prior beliefs that (i) $f$ is smooth and (ii) our beliefs about $f$ are invariant to translation along the single covariate (stationarity). In this case a standard kernel choice is squared exponential: $k_{0}(x, x') = \exp[-(1/2)(x - x')^2/\theta^{2}]$ \cite[see, e.g.,][Chap. 4]{RasmussenWilliams2006}. We estimate $\theta$ via maximum marginal likelihood estimation (MMLE).
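As an illustration of the MMLE step, the following pure-numpy sketch selects the length scale $\theta$ by grid search over the log marginal likelihood (the noise variance and grid are arbitrary illustrative choices, not the paper's settings):

```python
import numpy as np

def sq_exp(x, xp, theta):
    """Squared-exponential kernel k_0 on 1-D inputs."""
    return np.exp(-0.5 * (x[:, None] - xp[None, :]) ** 2 / theta ** 2)

def log_marginal_likelihood(theta, x, y, noise_var=0.1):
    """log p(y | x, theta) for GP(0, k_theta) with Gaussian observation noise."""
    K = sq_exp(x, x, theta) + noise_var * np.eye(len(x))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.sum(np.log(np.diag(L)))
            - 0.5 * len(x) * np.log(2 * np.pi))

def mmle_theta(x, y, grid=np.linspace(0.1, 5.0, 50)):
    """Maximum marginal likelihood estimate of the length scale by grid search."""
    lmls = [log_marginal_likelihood(t, x, y) for t in grid]
    return grid[int(np.argmax(lmls))]
```

In practice one would typically use gradient-based optimization rather than a grid, but the sketch conveys the idea.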
Four draws from the resulting prior are shown in the second panel of \cref{fig:syntheticData_all}.
\textbf{Decision boundary.}
For the purposes of this illustration, we consider two separate decisions: one at $\xtest = 2.00$, within the range of the training data (interpolation), and one at $\xtest = 5.29$, outside the range of the training data (extrapolation).
Our functional of interest at either point will be the relative change in posterior mean
$$
F^\star(k) := \frac{\mu(\xtest, k_0) - \mu(\xtest,k)}{\sigma(\xtest, k_{0})},
$$
where $\mu(\xtest, k)$ and $\sigma(\xtest,k)$ are the posterior mean and standard deviation, respectively, of $f(\xtest)$ corresponding to kernel $k.$
We suppose that we would make a different decision if the posterior mean were two posterior standard deviations away from its current value: $\Delta = 2$.
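Using standard GP regression formulas, this functional can be computed as in the following sketch (the kernel factory and noise variance are illustrative assumptions, not the paper's code):

```python
import numpy as np

def sq_exp_kernel(theta=1.0):
    """Squared-exponential kernel on 1-D inputs (illustrative stand-in for k_0)."""
    def k(A, B):
        return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / theta ** 2)
    return k

def gp_posterior(k, X, y, x_test, noise_var=0.1):
    """Posterior mean and std of f(x_test) under GP(0, k) regression."""
    K = k(X, X) + noise_var * np.eye(len(X))
    k_star = k(X, x_test)                                  # shape (N, 1)
    mu = (k_star.T @ np.linalg.solve(K, y)).item()
    var = (k(x_test, x_test) - k_star.T @ np.linalg.solve(K, k_star)).item()
    return mu, np.sqrt(var)

def F_star(k, k0, X, y, x_test):
    """Change in posterior mean, in units of the k_0 posterior std."""
    mu0, sd0 = gp_posterior(k0, X, y, x_test)
    mu, _ = gp_posterior(k, X, y, x_test)
    return (mu0 - mu) / sd0
```

By construction, $F^\star(k_0) = 0$: the functional measures deviation from the original kernel's posterior mean.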
\textbf{Nearby kernels and efficient optimization.}
Since we assume stationarity, we choose \cref{constraintSet:stationary} at line~\ref{line:Kchoice} of \cref{alg:workflow}. The left panel of \cref{fig:syntheticHistograms} shows what happens in the while loop as we increase $\eps$. The black dots (extrapolation) quickly cross the decision threshold line, so the while loop in that case is complete. The orange triangles (interpolation) do not cross the decision threshold line, even for very large $\eps$.
\begin{figure}[h]
\centering
\includegraphics[width = 0.3\textwidth]{figures/synthetic_epsilon.png}
\includegraphics[width = 0.3\textwidth]{figures/synthetic_hist_extrapolation.png}
\includegraphics[width = 0.3\textwidth]{figures/synthetic_hist_interpolation.png}
\caption{(\emph{Left}): Maximal value of the functional $F^\star$ as a function of the constraint-set size $\eps$. (\emph{Middle and right}): Comparison of the distance between $k_{0}(X,X)$ and $k_{1}(X,X)$ to the posterior variation due to hyperparameter uncertainty, for extrapolation (\emph{middle}) and interpolation (\emph{right}). The red line corresponds to our decision-changing kernel $k_1$.}
\label{fig:syntheticHistograms}
\end{figure}
\textbf{Qualitative interchangeability: Visual comparison of prior predictive draws.} Since this example is one-dimensional, it is straightforward to visualize the prior predictive draws.
For our extrapolation example ($\xtest = 5.29$), let $\kextrap$ be the solution to \cref{mainOptimizationProblem} with the smallest $\eps$ achieving $F^\star(\kextrap) \geq \Delta = 2$. The third panel of \cref{fig:syntheticData_all} shows prior predictive draws with $\kextrap$; the draws are noise-matched with the second panel. Visually, the two sets of prior predictive draws (second and third panels) are qualitatively similar.
Both are smooth and stationary by constraint; the length scale (cf.\ the number of ``wiggles'' in each function) and amplitudes seem unchanged. We say that $\kextrap$ is qualitatively interchangeable with $k_0$.
Since even the largest constraint size value considered ($\eps = 10$) is not enough to induce the required change, we define $\kinterp$ for our interpolation example ($\xtest= 2.00$) to be the solution of \cref{mainOptimizationProblem} with the final value of $\eps = 10.$
The fourth panel of \cref{fig:syntheticData_all} shows prior predictive draws with $\kinterp$; the draws are noise-matched with the second panel.
Again, by design, both sets of draws (second and fourth panels) are stationary and smooth.
However, the magnitudes of peaks and troughs with $\kinterp$ are much larger than those with $k_0$; cf.\ the gray uncertainty bands, which depict the 99.7\% uncertainty interval from the marginal of the $k_0$ GP.
Thus, we say that $\kinterp$ is not qualitatively interchangeable with $k_0$ under this test. We expect the difference would be even more pronounced at higher $\eps$.
\textbf{Qualitative interchangeability: Gram matrix comparison.} Our relative Frobenius norm histograms appear in \cref{fig:syntheticHistograms}. $\kextrap$ sits within the histogram of alternative kernels generated via hyperparameter uncertainty (center), whereas $\kinterp$ sits far outside of this uncertainty region (right).
As in the prior predictive comparison, we say that $\kinterp$ is not qualitatively interchangeable with $k_0$ under the Gram matrix comparison, whereas $\kextrap$ is.
Finally, following our workflow, we conclude that our extrapolation example is non-robust to the choice of kernel in the sense of \cref{def:nonRobust}. On the other hand, in our interpolation example, we fail to find non-robustness. | {
"timestamp": "2021-06-14T02:28:49",
"yymm": "2106",
"arxiv_id": "2106.06510",
"language": "en",
"url": "https://arxiv.org/abs/2106.06510",
"abstract": "Gaussian processes (GPs) are used to make medical and scientific decisions, including in cardiac care and monitoring of atmospheric carbon dioxide levels. Notably, the choice of GP kernel is often somewhat arbitrary. In particular, uncountably many kernels typically align with qualitative prior knowledge (e.g.\\ function smoothness or stationarity). But in practice, data analysts choose among a handful of convenient standard kernels (e.g.\\ squared exponential). In the present work, we ask: Would decisions made with a GP differ under other, qualitatively interchangeable kernels? We show how to answer this question by solving a constrained optimization problem over a finite-dimensional space. We can then use standard optimizers to identify substantive changes in relevant decisions made with a GP. We demonstrate in both synthetic and real-world examples that decisions made with a GP can exhibit non-robustness to kernel choice, even when prior draws are qualitatively interchangeable to a user.",
"subjects": "Machine Learning (stat.ML); Machine Learning (cs.LG); Computation (stat.CO)",
"title": "Measuring the robustness of Gaussian processes to kernel choice",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9653811611608242,
"lm_q2_score": 0.7341195269001831,
"lm_q1q2_score": 0.7087051613097337
} |
% https://arxiv.org/abs/2211.08939
\title{Augmented Physics-Informed Neural Networks (APINNs): A gating network-based soft domain decomposition methodology}
\begin{abstract}
In this paper, we propose the augmented physics-informed neural network (APINN), which adopts soft and trainable domain decomposition and flexible parameter sharing to further improve the extended PINN (XPINN) as well as the vanilla PINN methods. In particular, a trainable gate network is employed to mimic the hard decomposition of XPINN, which can be flexibly fine-tuned for discovering a potentially better partition. It weight-averages several sub-nets as the output of APINN. APINN does not require complex interface conditions, and its sub-nets can take advantage of all training samples rather than just part of the training data in their subdomains. Lastly, each sub-net shares part of the common parameters to capture the similar components in each decomposed function. Furthermore, following the PINN generalization theory in Hu et al. [2021], we show that APINN can improve generalization by proper gate network initialization and general domain \& function decomposition. Extensive experiments on different types of PDEs demonstrate how APINN improves the PINN and XPINN methods. Specifically, we present examples where XPINN performs similarly to or worse than PINN, so that APINN can significantly improve both. We also show cases where XPINN is already better than PINN, so APINN can still slightly improve XPINN. Furthermore, we visualize the optimized gating networks and their optimization trajectories, and connect them with their performance, which helps discover the possibly optimal decomposition. Interestingly, if initialized by different decompositions, the performances of the corresponding APINNs can differ drastically. This, in turn, shows the potential to design an optimal domain decomposition for the differential equation problem under consideration.
\end{abstract}
\section{Introduction}
Deep learning has become popular in scientific computing and is widely adopted in solving forward and inverse problems involving partial differential equations (PDEs). The physics-informed neural network (PINN) \cite{raissi2019physics} is one of the seminal works in utilizing deep neural networks to approximate PDE solutions by optimizing them to satisfy the data and physical laws governed by the PDE. Furthermore, the extended PINN (XPINN) \cite{jagtap2020extended} is a follow-up work of PINN, which first proposes space-time domain decomposition to partition the domain into several subdomains, where several sub-nets are employed to approximate the solution on their subdomains, while the solution continuity between them is enforced via interface losses. Then, its output is the ensemble of all sub-nets. The theoretical analysis of when XPINNs can improve generalization over PINNs is of great interest. The recent work of Hu et al. \cite{hu2021extended} analyzes the trade-off in XPINN generalization between the simplicity of the decomposed
target function in each subdomain and the overfitting effect due to less available training data in each subdomain, which counterbalance each other to determine if XPINN can improve generalization over PINN. However, sometimes the negative overfitting effect incurred by the less available training data in each subdomain dominates the positive effect of simpler partitioned target functions. Furthermore, XPINNs may also suffer from relatively large errors at the interfaces between
subdomains, which degrades the overall performance of XPINNs.
In this paper, we propose the \textit{Augmented PINN (APINN)}, which employs a gate network for soft domain partitioning to mimic the hard XPINN decomposition, which can be fine-tuned for a better decomposition. The gate network gets rid of the need for interface losses and weight-averages several sub-nets as the output of APINN, where each sub-net is able to utilize all training samples in the domain in order to prevent overfitting. Moreover, APINN adopts an efficient partial parameter sharing scheme for sub-nets, to capture the similar components in each decomposed function.
To further understand the benefits of our APINN, we follow the theory in \cite{hu2021extended} to theoretically analyze the generalization bound of APINN, compared to those of PINN and XPINN, which justifies our intuitive understanding of the advantages of APINN. Concretely, generalization bounds for APINNs with trainable or fixed gate networks are derived, which show the advantages of soft and trainable domain and function decomposition in APINN.
We also perform extensive experiments on several PDEs that validate the effectiveness of our APINN.
Specifically, we have examples where XPINN performs similarly to or worse than PINN, so that APINN can significantly improve both. Moreover, we present cases where XPINN is already much better than PINN, but APINN can still slightly improve XPINN.
In addition to the superior performance of APINN, we also visualize the optimized gating networks and the optimization trajectories and then relate their shapes with their performances to select the potentially best decomposition. We show that if APINN is initialized by the optimal decomposition, then it can perform even better, which suggests strategies for designing the optimal domain decomposition for a given PDE problem.
\section{Related Work}
The PINN \cite{raissi2019physics} is one of the pioneering frameworks that employs deep learning techniques to solve forward and inverse problems governed by parametrized PDEs. PINN has been successfully used to solve many problems in the field of computational science since its initial publication; for more information, see \cite{raissi2018hidden, yang2019adversarial, jagtap2022deep, haghighat2021physics, jagtap2022deepKNN}.
The original idea of domain decomposition in the PINN method was proposed in \cite{jagtap2020conservative} for nonlinear conservation laws and named the Conservative PINN (CPINN). In subsequent work, the same authors proposed XPINN \cite{jagtap2020extended} for general space-time domain decomposition, where there is a sub-PINN on each sub-domain for fitting the target function on this sub-domain, while the continuity between sub-PINNs is enforced via additional interface losses (penalty terms). The Parallel PINN \cite{shukla2021parallel} is a follow-up work to CPINN and XPINN, where CPINN and XPINN are trained on multiple GPUs or CPUs simultaneously. Parareal PINN \cite{meng2020ppinn} decomposes a longer time domain into several short-time subdomains, which can be efficiently solved by a coarse-grained (CG) solver and PINN, so that Parareal PINN shows a clear speedup over PINN in long-time integration PDEs. The main limitation of Parareal PINN is that it cannot be applied to all types of PDEs. The hp-VPINN \cite{kharazmi2021hp} proposes a variational PINN method to decompose the domain when defining a new set of test functions, while the trial functions are still neural networks defined over the whole domain. DDM \cite{li2020deep} uses the Schwarz method for overlapping domain decomposition and trains the sub-nets iteratively rather than in parallel like XPINNs and CPINNs. Also, \cite{mercier2021coarse} extends DDM through coarse space acceleration for improved convergence across a growing number of domains. \cite{li2022deep} also uses the Schwarz method, but the sub-nets are multi-Fourier feature networks instead.
The finite basis PINN (FBPINN) \cite{moseley2021finite} proposes dividing the domain into several small, overlapping sub-domains, each handled by its own PINN. Although FBPINN eliminates the need for interface conditions, our model differs from it in the following aspects. First, our domain decomposition is flexible and trainable, while FBPINN fixes the decomposition. Second, FBPINN does not allow parameter sharing among sub-networks for efficiency. Moreover, the overlapping subdomains in FBPINN can become computationally costly for multi-dimensional problems. The penalty-free neural network (PFNN) \cite{sheng2022pfnn} also proposes overlapping domain decomposition, and it differs from our model for the same reasons as FBPINN.
The GatedPINN \cite{stiller2020large} proposes to adopt the idea of a mixture of experts (MoE) \cite{shazeer2017outrageously} to modify XPINNs. Although they also use a gate network to weight-average several sub-PINNs, their GatedPINN differs from our APINN because the gate function in GatedPINN is randomly initialized, while that in our APINN is pretrained on an XPINN domain decomposition. Furthermore, they do not consider efficient parameter sharing among sub-nets to improve model expressiveness.
\cite{dong2021local,DWIVEDI2021299} also propose a domain decomposition idea similar to that of XPINNs. However, they use extreme learning machines (ELMs) in place of the neural networks in XPINNs, where only the parameters of the last layer are trained.
Based on variational principles and the deep Ritz method, D3M \cite{li2019d3m} further combines the Schwarz method for overlapping domain decomposition. Compared to our trainable domain decomposition, the domains in D3M are fixed during optimization.
To learn optimal modifications on the interfaces of different sub-domains, \cite{taghibakhshi2022learning} proposes using graph neural networks and unsupervised learning.
\cite{heinlein2021combining} presents a review on the domain decomposition method for numerical PDEs.
The first comprehensive theoretical analysis of PINNs as well as XPINNs for a prototypical nonlinear PDE, the Navier-Stokes equations, is proposed in \cite{Ryck2021ErrorAF}. The generalization abilities of PINNs and XPINNs are theoretically analyzed in \cite{hu2021extended} while the generalization and optimization capabilities of deep neural networks have been analyzed in the general field of deep learning in \cite{kawaguchi2018generalization,kawaguchi2022robustness,kawaguchi2016deep,kawaguchi2019depth,xu2021optimization,kawaguchi2021theory,kawaguchi2022understanding}.
\section{Problem Definition and Background}
\subsection{Problem Definition}
We consider partial differential equations (PDEs) defined on the bounded domain $\Omega \subset \mathbb{R}^d$, with the following form:
\begin{equation}\label{eq:PDE}
\begin{aligned}
\mathcal{L}u^*(\boldsymbol{x})&=f(\boldsymbol{x}) \ \text{in}\ \Omega, \qquad
u^*(\boldsymbol{x})=g(\boldsymbol{x}) \ \text{on}\ \partial\Omega,
\end{aligned}
\end{equation}
where $\mathcal{L}$ is a differential operator, $f$ is a given source term, and $g$ prescribes the boundary condition. For matrix norms, we denote the spectral norm by $\Vert \cdot\Vert_2$ and $l_{p,q}$ norms by $\Vert W \Vert_{p,q} = (\sum_j(\sum_k |W_{j,k}|^p)^{q/p})^{1/q}$.
In the following, we introduce the formulations of PINN \cite{raissi2019physics} and XPINN \cite{jagtap2020extended}.
\subsection{PINN and XPINN}
The PINN is motivated by optimizing neural networks to satisfy the data and physical laws governed by a PDE to approximate its solution. Given a set of $n_b$ boundary training points $\left\{\boldsymbol{x}_{b,i}\right\}_{i=1}^{n_b}\subset\partial\Omega$ and $n_r$ residual training points $\left\{\boldsymbol{x}_{r,i}\right\}_{i=1}^{n_r}\subset\Omega$, the ground truth PDE solution $u^*:\overline{\Omega}\rightarrow\mathbb{R}$ is approximated by the PINN model $u_{\boldsymbol{\theta}}$, by minimizing the training loss containing a boundary loss and a residual loss:
\begin{equation}
R_S(\boldsymbol{\theta}) = \frac{1}{n_b}\sum_{i=1}^{n_b} {|u_{\boldsymbol{\theta}}(\boldsymbol{x}_{b,i})-g(\boldsymbol{x}_{b,i})|}^2 + \frac{1}{n_r}\sum_{i=1}^{n_r} {|\mathcal{L}u_{\boldsymbol{\theta}}(\boldsymbol{x}_{r,i})-f(\boldsymbol{x}_{r,i})|}^2,
\end{equation}
where PINN learns boundary conditions in the first term, while learning the physical laws described by the PDEs in the second term.
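To make the loss concrete, here is an illustrative numpy sketch of $R_S$ for a 1-D problem with $\mathcal{L}u = u''$, using central finite differences in place of the automatic differentiation used in practice (names and the toy operator are our illustrative choices):

```python
import numpy as np

def pinn_loss(u, x_b, g_b, x_r, f_r, delta=1e-3):
    """R_S = boundary MSE + residual MSE for Lu = u'' (1-D Poisson, illustrative).
    The second derivative is approximated by central finite differences,
    standing in for the automatic differentiation used in practice."""
    boundary = np.mean((u(x_b) - g_b) ** 2)                       # first term
    u_xx = (u(x_r + delta) - 2 * u(x_r) + u(x_r - delta)) / delta ** 2
    residual = np.mean((u_xx - f_r) ** 2)                         # second term
    return boundary + residual
```

For the exact solution $u(x) = x^2$ of $u'' = 2$, both terms vanish (up to floating-point error).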
The XPINN extends PINN by decomposing the domain ${\Omega}$ into several subdomains where several sub-PINNs are employed. The continuity between the sub-PINNs is maintained via interface loss functions, and the output of XPINN is the ensemble of all sub-PINNs, each making predictions on its corresponding subdomain.
Concretely, domain $\Omega$ is decomposed into $N_D$ subdomains as $\Omega = \cup_{i=1}^{N_D} \Omega_i$. The loss of XPINN contains the sum of the PINN losses of the sub-PINNs, including boundary and residual losses, plus the interface losses using points on the interfaces of different subdomains $
\partial\Omega_i\cap\partial\Omega_j$, where $i,j\in\left\{1,2,...,N_D\right\}$ such that $
\partial\Omega_i\cap\partial\Omega_j\neq \emptyset$ to maintain the continuity between the two sub-PINNs $i$ and $j$. Specifically, the XPINN loss for the $i$-th sub-PINN is
\begin{equation}
R_S^{i}(\boldsymbol{\theta}^i) + \lambda_I \sum_{i,j:\partial\Omega_i\cap\partial\Omega_j\neq\emptyset} R_I(\boldsymbol{\theta}^i, \boldsymbol{\theta}^j),
\end{equation}
where $\lambda_I$ is the weight controlling the strength of the interface loss, $\boldsymbol{\theta}^i$ is the parameters for subdomain $i$, and each $R_S^i(\boldsymbol{\theta})$ is the PINN loss for subdomain $i$ containing boundary and residual losses, i.e.,
\begin{equation}
R_S^i(\boldsymbol{\theta}^i) = \frac{1}{n_{b,i}}\sum_{j=1}^{n_{b,i}} {|u_{\boldsymbol{\theta}^i}(\boldsymbol{x}^i_{b,j})-g(\boldsymbol{x}^i_{b,j})|}^2 + \frac{1}{n_{r,i}}\sum_{j=1}^{n_{r,i}} {|\mathcal{L}u_{\boldsymbol{\theta}^i}(\boldsymbol{x}^i_{r,j})-f(\boldsymbol{x}^i_{r,j})|}^2,
\end{equation}
where $n_{b,i}$ and $n_{r,i}$ are the number of boundary points and residual points in subdomain $i$ respectively, and $\boldsymbol{x}^i_{b,j}$ and $\boldsymbol{x}^i_{r,j}$ are the $j$-th boundary and residual training points in subdomain $i$, respectively. Furthermore, $R_I(\boldsymbol{\theta}^i, \boldsymbol{\theta}^j)$ is the interface loss between the $i$-th and $j$-th subdomains based on interface training points $\{\boldsymbol{x}^{ij}_{I,k}\}_{k=1}^{n_{I,ij}}\subset\partial\Omega_i\cap\partial\Omega_j$
\begin{equation}
\begin{aligned}
R_I(\boldsymbol{\theta}^i, \boldsymbol{\theta}^j) &= \frac{1}{n_{I,ij}} \sum_{k=1}^{n_{I,ij}}[ |u_{\boldsymbol{\theta}^i}(\boldsymbol{x}^{ij}_{I,k})- \{\{
u_{\boldsymbol{\theta}^{avg}} \}\} |^2 +\\&\ |(\mathcal{L}u_{\boldsymbol{\theta}^i}(\boldsymbol{x}^{ij}_{I,k}) - f_i(\boldsymbol{x}^{ij}_{I,k}))-(\mathcal{L}u_{\boldsymbol{\theta}^j}(\boldsymbol{x}^{ij}_{I,k}) - f_j(\boldsymbol{x}^{ij}_{I,k}))|^2 ],
\end{aligned}
\end{equation}
where $\{\{
u_{\boldsymbol{\theta}^{avg}} \}\} = u_{avg} \coloneqq ({u_{\boldsymbol{\theta}^i}(\boldsymbol{x}^{ij}_{I,k})+u_{\boldsymbol{\theta}^j}(\boldsymbol{x}^{ij}_{I,k})})/{2}$, $n_{I,ij}$ is the number of interface points between the $i$-th and $j$-th subdomains, and $\boldsymbol{x}^{ij}_{I,k}$ is the $k$-th interface point between them. The first term enforces the average-solution continuity between the $i$-th and the $j$-th sub-nets, while the second term enforces the residual continuity condition on the interface given by the $i$-th and the $j$-th sub-nets. We will refer to the XPINN model introduced above as \textbf{XPINNv1}, since it is exactly the model proposed in the original work of \cite{jagtap2020extended}.
In practice, XPINNv1 may exhibit relatively large errors near the interfaces; i.e., the interface losses in XPINNv1 do not necessarily maintain the continuity between different sub-PINNs. This is because the residual continuity condition is difficult to enforce accurately for PDEs involving higher-order derivatives.
Therefore, \cite{de2022error} additionally enforces the continuity of first-order derivatives between different sub-PINNs to resolve this issue:
\begin{equation}
R_A(\boldsymbol{\theta}^i, \boldsymbol{\theta}^j) = \frac{1}{n_{I,ij}} \sum_{k=1}^{n_{I,ij}} \sum_{m=1}^d \left| \frac{\partial u_{\boldsymbol{\theta}^i}(\boldsymbol{x}^{ij}_{I,k})}{\partial \boldsymbol{x}_m}-\frac{\partial u_{\boldsymbol{\theta}^j}(\boldsymbol{x}^{ij}_{I,k})}{\partial \boldsymbol{x}_m}\right|^2,
\end{equation}
where $d$ is the problem dimension, i.e., $\boldsymbol{x} \in \mathbb{R}^d$. With this additional term on first-order derivatives, we name the corresponding XPINN model as \textbf{XPINNv2}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{model.png}
\caption{The APINN model structure. The input $(x,t)$ is passed through the blue shared network $h$, which is then routed to $m$ distinct sub-nets $E_1, \cdots, E_m$ (red), yielding the corresponding $m$ sub-net outputs $E_i(h(x,t))$. Subsequently, APINN outputs the weighted average of the $m$ sub-net outputs based on the weights $G(x,t)_i$ (green), where $G$ is also a network mapping $(x,t)$ to the $m$-dimensional simplex $\Delta_m$. The weights $G(x,t)_i$ satisfy the partition-of-unity property, i.e., $\sum_i G(x,t)_i = 1$.}
\label{fig:model}
\end{figure}
\section{Augmented PINN (APINN)}
\subsection{Parameterization of Augmented PINN}
In this section, we introduce the model parameterization of APINN, which is graphically shown in Figure \ref{fig:model}.
We consider a shared network $h: \mathbb{R}^d \rightarrow \mathbb{R}^H$ (blue), where $d$ is the input dimension and $H$ is the hidden dimension, and $m$ sub-nets $(E_i(\boldsymbol{x}))_{i=1}^m$ (red), where each $E_i: \mathbb{R}^H \rightarrow \mathbb{R}$, and a gating network $G: \mathbb{R}^d \rightarrow \Delta^m$ (green) where $\Delta^m$ is the $m$-dimensional simplex, for weight-averaging the outputs of the $m$ sub-nets. The output of our augmented PINN (APINN) $u_{\boldsymbol{\theta}}$ parameterized by $\boldsymbol{\theta}$ is:
\begin{equation}
u_{\boldsymbol{\theta}}(\boldsymbol{x}) = \sum_{i=1}^m (G(\boldsymbol{x}))_i E_i(h(\boldsymbol{x})),
\end{equation}
where $(G(\boldsymbol{x}))_i$ is the $i$-th entry of $G(\boldsymbol{x})$, and $\boldsymbol{\theta}$ is the collection of all parameters in $h$, $G$ and $E_i$. Both $h$ and $E_i$ are trainable in our APINN, while $G$ can be either trainable or fixed. If $G$ is trainable, we name the model APINN, otherwise we call it APINN-F.
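The parameterization above can be sketched in a few lines of numpy; the tiny fixed-weight linear/tanh networks below are illustrative stand-ins for trained networks, and a softmax head makes $G$ map into the simplex:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class APINN:
    """u(x) = sum_i G(x)_i * E_i(h(x)): shared body h, m sub-nets E_i,
    and a gating network G mapping inputs to the m-simplex (illustrative)."""
    def __init__(self, d=2, H=8, m=2, seed=0):
        rng = np.random.default_rng(seed)
        self.Wh = rng.normal(size=(d, H))   # shared network h
        self.We = rng.normal(size=(m, H))   # one linear sub-net E_i per row
        self.Wg = rng.normal(size=(d, m))   # gating network G

    def gate(self, x):
        return softmax(x @ self.Wg)         # each row lies on the simplex

    def __call__(self, x):
        h = np.tanh(x @ self.Wh)            # shared representation
        E = h @ self.We.T                   # outputs of the m sub-nets
        return np.sum(self.gate(x) * E, axis=-1)
```

If all sub-nets are identical, the gate weights cancel by partition of unity and the model collapses to a single network, mirroring the degeneration argument in the proof below.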
The APINN is a universal approximator.
The detailed proof is as follows.
\begin{proof}
(The APINN is a universal approximator) Denote the function class of all neural networks by $\mathcal{NN}$; it is universal, i.e., for every continuous function $f \in C(\Omega)$ and every $\epsilon > 0$, there exists a neural network $g \in \mathcal{NN}$ such that $\sup_{\boldsymbol{x} \in \Omega} |f(\boldsymbol{x}) - g(\boldsymbol{x})| \leq \epsilon$. In addition, we denote the function class of gating networks by $\mathcal{G}$, which collects all vector-valued neural networks mapping $\mathbb{R}^d$ to $\Delta^m$.
Returning to the APINN model, denote the function class of APINN by $\mathcal{APINN}$:
\begin{equation}
\mathcal{APINN} = \left\{f \,\Big|\, \exists E_1, \cdots, E_m, h\in \mathcal{NN},\ G \in \mathcal{G}, \text{ s.t. } f = \sum_{i=1}^m G(x)_i E_i(h(x)) \right\}.
\end{equation}
If we choose $E_1 = E_2 = \cdots = E_m = E$, then APINN degenerates to a vanilla multilayer network since $\sum_{i=1}^m G(x)_i = 1$, i.e., $\mathcal{NN} \subset \mathcal{APINN}$.
Therefore, since multilayer neural networks are already universal approximators and they form a subclass of APINN models, APINN is a universal approximator.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=0.6]{APINN_1.png}
\caption{The first example of a gating network in APINN: an upper domain (middle) and a lower domain (right).}
\label{fig:APINN1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.6]{APINN_2.png}
\caption{The second example of a gating network in APINN: an inner domain (middle) and an outer domain (right).}
\label{fig:APINN2}
\end{figure}
In APINN, $G$ is pre-trained to mimic the hard and discrete decomposition of XPINN, which will be discussed in the next subsection.
If $G$ is trainable, then our model can fine-tune the pre-trained domain decomposition to further discover a better decomposition through optimization. If not, then APINN is exactly the soft version of XPINN with the corresponding hard decomposition.
APINN improves on PINN thanks to its adaptive domain decomposition and parameter efficiency.
\subsection{Explanation of the Gating Network}
In this section, we will show how the gating network $G$ can be trained to mimic XPINNs for soft domain decomposition.
Specifically, in Figure \ref{fig:APINN1} left, XPINN decomposes the entire domain $(x,t) \in \Omega = [-1, 1] \times [-1,1]$, into two subdomains: the upper one $\Omega_1 = [-1, 1] \times [0,1]$, and the lower one $\Omega_2 = [-1, 1] \times [-1,0)$, which is based on the interface $t = 0$.
The soft domain decomposition in APINN is shown in Figure \ref{fig:APINN1} (middle and right), which are the pretrained gating networks for the two sub-nets corresponding to the upper and bottom subdomains. Here, $(G(x,t))_1$ is pretrained on $\exp(t-1)$ and $(G(x,t))_2$ on $1 - \exp(t-1)$. Intuitively, the first sub-PINN focuses on where $t$ is larger, corresponding to the upper part, while the second sub-PINN focuses on where $t$ is smaller, corresponding to the bottom part.
Another example is to decompose the domain into an inner part and an outer part, as shown in Figure \ref{fig:APINN2}.
In particular, we decompose the entire domain $(x,t) \in \Omega = [0, 1] \times [0,1]$, into two subdomains: the inner one $\Omega_1 = [0.25, 0.75] \times [0.25,0.75]$, and the outer one $\Omega_2 = \Omega \setminus \Omega_1$.
The soft domain decomposition is generated by the gating functions $(G(x,t))_1$ pretrained on $\exp(-5(x - 0.5)^2 - 5(t-0.5)^2)$ and $(G(x,t))_2$ pretrained on $1 - \exp(-5(x - 0.5)^2 - 5(t-0.5)^2)$, such that the first sub-net concentrates on the inner part near $(x,t)=(0.5,0.5)$, while the second sub-net focuses on the rest of the domain.
The gating network can also be adapted for complex domains like the L-shape domain or even high-dimensional domains by properly choosing the corresponding gating function.
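For the first example, the pretraining targets can be written down directly. The sketch below (numpy, on an illustrative grid of our own choosing) checks that the pair forms a partition of unity and that the first gate indeed concentrates where $t$ is larger:

```python
import numpy as np

# Pretraining targets for the gate in the upper/lower decomposition:
# (G(x,t))_1 = exp(t - 1) and (G(x,t))_2 = 1 - exp(t - 1).
t = np.linspace(-1.0, 1.0, 201)
g1 = np.exp(t - 1.0)         # first sub-net: weight grows with t (upper domain)
g2 = 1.0 - np.exp(t - 1.0)   # second sub-net: complementary weight (lower domain)
```

In practice, $G$ would be pretrained by regressing its outputs onto such targets (e.g., with a mean-squared loss) before PINN training starts.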
\subsection{Difference in the position of $h$}
We have three options for building the model of APINN. First, the simplest idea is that if we omit the parameter sharing in our APINN, then the model becomes:
\begin{equation}\label{eq:model1}
u_\theta(\boldsymbol{x}) = \sum_{i=1}^m (G(\boldsymbol{x}))_i E_i(\boldsymbol{x}).
\end{equation}
The proposed model in this paper is
\begin{equation}\label{eq:model2}
u_\theta(\boldsymbol{x}) = \sum_{i=1}^m (G(\boldsymbol{x}))_i E_i(h(\boldsymbol{x})).
\end{equation}
Another method for parameter sharing is to place $h$ outside the weighted average of the sub-nets:
\begin{equation}\label{eq:model3}
u_\theta(\boldsymbol{x}) = h\left(\sum_{i=1}^m (G(\boldsymbol{x}))_i E_i(\boldsymbol{x})\right).
\end{equation}
Compared to the first model, our model given in equation (\ref{eq:model2}) adopts parameter sharing for each sub-PINN to improve parameter efficiency. Equation (\ref{eq:model2}) generalizes equation (\ref{eq:model1}): choosing the shared network $h$ to be the identity mapping recovers equation (\ref{eq:model1}). Intuitively, the functions learned by the sub-PINNs should share some similarity, since they are parts of the same target function. The network-sharing prior in our model explicitly exploits this intuition and is therefore more parameter efficient.
Compared to the model given in equation (\ref{eq:model3}), our model is more interpretable. In particular, our model in equation (\ref{eq:model2}) is a weighted average of $m$ sub-PINNs $E_i \circ h$, so we can visualize each $E_i \circ h$ to observe what function it is learning. For equation (\ref{eq:model3}), by contrast, there is no clear function decomposition because $h$ is applied outside the weighted average, so the individual learned function components cannot be visualized.
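The three options can be contrasted directly in code. The toy linear sub-nets, the toy gate, and the scalar $\tanh$ standing in for the outer $h$ in the third variant are all our own simplifications:

```python
import numpy as np

# Three parameter-sharing options for APINN (toy, illustrative networks).
rng = np.random.default_rng(1)
d, H, m, n = 2, 8, 2, 4
W_h = rng.normal(size=(d, H))   # shared feature network for model 2
W_E = rng.normal(size=(m, H))   # sub-nets acting on shared features
V_E = rng.normal(size=(m, d))   # sub-nets acting on raw inputs (models 1, 3)

def gate(x):                    # toy softmax gate mapping into the simplex
    z = np.stack([x[:, 0], -x[:, 0]], axis=1)
    return np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)

def model1(x):  # no sharing: sum_i G_i(x) E_i(x)
    return (gate(x) * (x @ V_E.T)).sum(axis=1)

def model2(x):  # proposed: sum_i G_i(x) E_i(h(x))
    return (gate(x) * (np.tanh(x @ W_h) @ W_E.T)).sum(axis=1)

def model3(x):  # shared map outside: h(sum_i G_i(x) E_i(x))
    return np.tanh(model1(x))   # scalar tanh stands in for the outer h

x = rng.normal(size=(n, d))
outputs = [model1(x), model2(x), model3(x)]
```

Only the second variant exposes the interpretable decomposition into $m$ functions $E_i \circ h$; in the third, the outer map mixes the components, which is exactly the interpretability issue discussed above.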
\section{Theoretical Analysis}
\begin{comment}
We will prove the theorems, but will not validate them numerically in the experiments, due to the following reasons:
\begin{enumerate}
\item First, the bound is too loose to predict the generalization errors of different models. In PINN, we only need to compute the complexity of one network. In XPINN, we are required to compute the weighted average of the complexity of the several sub-nets. However, in APINN, we have a gate net and several PINN sub-nets, and we have to multiply their complexities together, which explodes. In the experiments, this case usually happens, in both the Poisson's and the Burgers' equations.
\item Second, following \cite{liu2022adaptive,liu2021discrete}, we just utilize the theory to provide intuition to understand the effectiveness of our model.
\end{enumerate}
\end{comment}
\subsection{Preliminaries}
To facilitate the statement of our main generalization bound, we first define several quantities related to the network parameters. For a network $u_{\boldsymbol{\theta}}(\boldsymbol{x})=W_L \sigma (W_{L-1} \sigma(\cdots \sigma(W_1\boldsymbol{x})\cdots))$, we denote $M(l) = \lceil \Vert W_l \Vert_2 \rceil$ and $N(l) = \lceil \frac{\Vert W_l - A_l \Vert_{2,1}}{\Vert W_l \Vert_2} \rceil$ for fixed reference matrices $A_l$, where $A_l$ can vary for different networks. We define its complexity as follows:
\begin{equation}
R_i(u_{\boldsymbol{\theta}}) = \left(\prod_{l=1}^L M(l)\right)^{i+1} \left(\sum_{l=1}^L N(l)^{2/3}\right)^{3/2}, \quad i \in \{0,1,2\},
\end{equation}
where $i$ signifies the order of derivative, i.e., $R_i$ denotes the complexity of the $i$-th derivative of the network.
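A numeric sketch of these quantities for a toy 3-layer network follows. Taking the reference matrices $A_l = 0$ is one common choice and an assumption here, and the $(2,1)$-norm is computed as the sum of column 2-norms:

```python
import numpy as np

# Complexity quantities M(l), N(l), and R_i for a toy 3-layer network,
# with reference matrices A_l = 0 (an illustrative choice).
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 2)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))]

def spectral_norm(W):
    return np.linalg.svd(W, compute_uv=False)[0]     # ||W||_2

def two_one_norm(W):
    return np.linalg.norm(W, axis=0).sum()           # ||W||_{2,1}: sum of column norms

M = [int(np.ceil(spectral_norm(W))) for W in weights]
N = [int(np.ceil(two_one_norm(W) / spectral_norm(W))) for W in weights]

def R(i):
    # R_i = (prod_l M(l))^(i+1) * (sum_l N(l)^(2/3))^(3/2)
    return np.prod(M) ** (i + 1) * sum(nl ** (2 / 3) for nl in N) ** 1.5

complexities = [R(i) for i in (0, 1, 2)]
```

Since every $M(l) \geq 1$, $R_i$ is non-decreasing in $i$: higher-order derivatives of the network are at least as complex as the network itself.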
We further denote the corresponding $M(l)$, $N(l)$, and $R_i$ quantities of the sub-PINN $E_j\circ h$ by $M_j(l)$, $N_j(l)$, and $R_i(E_j \circ h)$, and those of the gate network $G$ by $M_G(l)$, $N_G(l)$, and $R_i(G)$.
The training loss and test loss of a model $u_{\boldsymbol{\theta}}(\boldsymbol{x})$ are the same as those of PINN, i.e.,
\begin{equation}
\begin{aligned}
R_S(\boldsymbol{\theta}) &= R_{S\cap\partial\Omega}(\boldsymbol{\theta}) +R_{S\cap\Omega}(\boldsymbol{\theta}) \\
&=\frac{1}{n_b}\sum_{i=1}^{n_b} {|u_{\boldsymbol{\theta}}(\boldsymbol{x}_{b,i})-g(\boldsymbol{x}_{b,i})|}^2 + \frac{1}{n_r}\sum_{i=1}^{n_r} {|\mathcal{L}u_{\boldsymbol{\theta}}(\boldsymbol{x}_{r,i})-f(\boldsymbol{x}_{r,i})|}^2.\\
R_{{D}}(\boldsymbol{\theta})&= R_{D\cap\partial\Omega}(\boldsymbol{\theta}) +R_{D\cap\Omega}(\boldsymbol{\theta}) \\&=\mathbb{E}_{\text{Unif}(\partial\Omega)} {|u_{\boldsymbol{\theta}}(\boldsymbol{x})-g(\boldsymbol{x})|}^2 + \mathbb{E}_{\text{Unif}(\Omega)} {|\mathcal{L}u_{\boldsymbol{\theta}}(\boldsymbol{x})-f(\boldsymbol{x})|}^2.
\end{aligned}
\end{equation}
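Given the model and operator evaluations at the sampled points, the empirical loss $R_S$ is simply a sum of two mean-squared errors; a minimal sketch:

```python
import numpy as np

# Empirical PINN-style training loss: boundary MSE plus residual MSE.
# u_b: model values at n_b boundary points, g_b: boundary data g,
# Lu_r: operator L applied to the model at n_r residual points, f_r: source f.
def empirical_loss(u_b, g_b, Lu_r, f_r):
    boundary = np.mean((np.asarray(u_b) - np.asarray(g_b)) ** 2)
    residual = np.mean((np.asarray(Lu_r) - np.asarray(f_r)) ** 2)
    return boundary + residual
```

For instance, with boundary mismatches of $1$ at two points and a residual mismatch of $2$ at one point, the loss is $1 + 4 = 5$.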
Since the following assumption holds for a vast variety of PDEs, we can bound the test $L_2$ error by the test boundary and residual losses:
\begin{assumption}\label{assumption:L2}
Assume that the PDE satisfies the following norm constraint:
\begin{equation}
\begin{aligned}
C_1 \Vert u \Vert_{L_2(\Omega)} \leq \Vert \mathcal{L}u \Vert_{L_2(\Omega)} + \Vert u \Vert_{L_2(\partial\Omega)}, \qquad \forall u \in \mathcal{NN}_{L}, \forall L,
\end{aligned}
\end{equation}
where the positive constant $C_1$ does not depend on $u$ but on the domain and the coefficients of the operators $\mathcal{L}$, and the function class $\mathcal{NN}_{L}$ contains all $L$-layer neural networks.
\end{assumption}
The following assumption is widely adopted in related works \cite{Luo2020TwoLayerNN,hu2021extended}.
\begin{assumption}
\label{assumption:bounded}
(Symmetry and boundedness of $\mathcal{L}$). Throughout the analysis in this paper, we assume the differential operator $\mathcal{L}$ in the PDE satisfies the following conditions.
The operator $\mathcal{L}$ is a linear second-order differential operator in a non-divergence form, i.e.,
$(\mathcal{L}u^*)(\boldsymbol{x}) = \sum_{\alpha=1,\beta=1}^d \boldsymbol{A}_{\alpha \beta} (\boldsymbol{x})u^*_{x_\alpha x_\beta} (\boldsymbol{x})+ \sum_{\alpha=1}^d \boldsymbol{b}_\alpha (\boldsymbol{x})u^*_{x_\alpha} (\boldsymbol{x})+ c(\boldsymbol{x}) u^*(\boldsymbol{x})$,
where all $\boldsymbol{A}_{\alpha \beta},\boldsymbol{b}_\alpha, c:\Omega \rightarrow \mathbb{R}$ are given coefficient functions, $u^*_{x_\alpha}$ denotes the first-order partial derivative of $u^*$ with respect to its $\alpha$-th argument (the variable $x_\alpha$), and $u^*_{x_\alpha x_\beta}$ the second-order partial derivative with respect to $x_\alpha$ and $x_\beta$.
Furthermore, there exists a constant $K>0$ such that for all $\boldsymbol{x}\in\Omega=[-1,1]^d$ and all $\alpha,\beta\in[d]$, we have $A_{\alpha\beta}=A_{\beta\alpha}$, and
$A_{\alpha\beta}$, $b_\alpha$, $c$ are all $K$-Lipschitz with absolute values bounded by $K$.
\end{assumption}
\subsection{A Tradeoff in XPINN Generalization}
In this subsection, we review the tradeoff in XPINN generalization introduced in \cite{hu2021extended}. Two factors counterbalance each other in XPINN generalization: the simplicity of the decomposed target function within each subdomain, thanks to the domain decomposition, and the extra complexity and overfitting caused by the scarcity of training data within each subdomain. When the former effect dominates, XPINN outperforms PINN; otherwise, PINN outperforms XPINN. When the two factors strike a balance, XPINN and PINN perform similarly.
\subsection{APINN with Non-Trainable Gate Network}
In this section, we state the generalization bound for APINN with a non-trainable gate network.
Since the gate network is fixed, the only complexity comes from the sub-PINNs. The following theorem holds for any gate function $G$.
\begin{theorem}\label{thm:1}
Suppose Assumption \ref{assumption:bounded} holds. For any $\delta\in(0,1)$, with probability at least $1-\delta$ over the choice of random samples $S=\left\{\boldsymbol{x}_{i}\right\}_{i=1}^{n_b+n_r} \subset \overline{\Omega}$ with $n_b$ boundary points and $n_r$ residual points, we have the following generalization bound for an APINN model $u_{\boldsymbol{\theta}_S}(\boldsymbol{x}) = \sum_{i=1}^m (G(\boldsymbol{x}))_i E_i(h(\boldsymbol{x}))$:
\begin{equation}
\begin{aligned}
R_{D \cap \partial \Omega}(\boldsymbol{\theta}_S) &\leq R_{S\cap \partial \Omega}(\boldsymbol{\theta}_S) + \Tilde{O}\left(\frac{\sum_{j=1}^m\max_{\boldsymbol{x} \in \partial \Omega}\Vert G(\boldsymbol{x})_j \Vert_{\infty}R_0(E_j \circ h)}{n_{b}^{1/2}}+ \sqrt{\frac{\log(4/\delta(E))}{n_{b}}} \right).\\
R_{D \cap \Omega}(\boldsymbol{\theta}_S) &\leq R_{S\cap \Omega}(\boldsymbol{\theta}_S) + \Tilde{O}\left(\frac{\sum_{i=0}^2\sum_{j=1}^m\max_{\boldsymbol{x} \in \partial \Omega}\left\| \text{vec}\left( \frac{\partial^i G(\boldsymbol{x})_j}{\partial \boldsymbol{x}^i} \right) \right\|_{\infty}R_{2-i}(E_j \circ h)}{n_{r}^{1/2}}+ \sqrt{\frac{\log(4/\delta(E))}{n_{r}}} \right),
\end{aligned}
\end{equation}
where
$
\delta(E) = \frac{\delta}{{\prod_{l=1}^L\prod_{j \in \{1,\cdots,m\}} M_j(l)(M_j(l)+1)N_j(l)(N_j(l)+1)}}.
$
\end{theorem}
Intuition: The first term is the training loss, and the third is the probability term, in which we divide the probability $\delta$ into $\delta(E)$ for a union bound over all parameters in $E_j \circ h$. The second term is the Rademacher complexity of the model.
For the boundary loss, the network $u_{\boldsymbol{\theta}_S}(\boldsymbol{x}) = \sum_{j=1}^m (G(\boldsymbol{x}))_j E_j(h(\boldsymbol{x}))$ is not differentiated. Hence, each $E_j(h(\boldsymbol{x}))$ contributes $R_0(E_j \circ h)$, and each $(G(\boldsymbol{x}))_j$ contributes the factor $\max_{\boldsymbol{x} \in \partial \Omega}\Vert G(\boldsymbol{x})_j \Vert_{\infty}$, since it is fixed and bounded by that quantity.
For the residual loss, the case of the second term is similar. Note that the second-order derivative of APINN is
\begin{equation}
\frac{\partial^2 u_{\boldsymbol{\theta}_S}(\boldsymbol{x})}{\partial \boldsymbol{x}^2} =\sum_{i=0}^2 \binom{2}{i} \sum_{j=1}^m \frac{\partial^i (G(\boldsymbol{x}))_j}{\partial \boldsymbol{x}^i} \frac{\partial^{2-i}E_j(h(\boldsymbol{x}))}{\partial \boldsymbol{x}^{2-i}}.
\end{equation}
Consequently, each $\frac{\partial^{2-i}E_j(h(\boldsymbol{x}))}{\partial \boldsymbol{x}^{2-i}}$ contributes $R_{2-i}(E_j \circ h)$, while each $\frac{\partial^i (G(\boldsymbol{x}))_j}{\partial \boldsymbol{x}^i}$ contributes $\max_{\boldsymbol{x} \in \partial \Omega}\left\| \text{vec}\left( \frac{\partial^i G(\boldsymbol{x})_j}{\partial \boldsymbol{x}^i} \right) \right\|_{\infty}$ since it is fixed.
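The expansion above is the second-order Leibniz product rule, and it can be sanity-checked symbolically with sympy; the specific gate component and sub-net output below are toy choices of our own:

```python
import sympy as sp

# Verify d^2(G*E)/dx^2 = sum_i C(2, i) * G^(i) * E^(2-i) for toy G, E.
x = sp.symbols('x')
G = sp.exp(x - 1)       # toy gate component
E = sp.sin(3 * x)       # toy sub-net output
lhs = sp.diff(G * E, x, 2)
rhs = sum(sp.binomial(2, i) * sp.diff(G, x, i) * sp.diff(E, x, 2 - i)
          for i in range(3))
```

The constant binomial factors are absorbed into the $\Tilde{O}(\cdot)$ notation of the bound.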
\subsection{Explain the Effectiveness of APINN via Theorem \ref{thm:1}}
In this section, we explain the effectiveness of APINNs using Theorem \ref{thm:1}, which shows that the benefit of APINN comes from (1) soft domain decomposition, (2) getting rid of interface losses, (3) general target function decomposition, and (4) the fact that each sub-PINN of APINN is provided with all the training data, which prevents overfitting.
For the boundary loss of APINN, we can apply Theorem \ref{thm:1} to each of the APINN's soft subdomains. Specifically, for the $k$-th sub-net in the $k$-th soft subdomain of APINN, i.e., the $\Omega_k, k\in\left\{1,2,...,m\right\}$, the bound is
\begin{equation}
R_{D \cap \Omega_k}(\boldsymbol{\theta}_S) \leq R_{S \cap \Omega_k}(\boldsymbol{\theta}_S)+ \Tilde{O}\left(\frac{\sum_{j=1}^m\max_{\boldsymbol{x} \in \partial \Omega_k}\Vert G(\boldsymbol{x})_j \Vert_{\infty}R_0(E_j \circ h)}{n_{b,k}^{1/2}}+ \sqrt{\frac{\log(4/\delta(E))}{n_{b,k}}} \right),
\end{equation}
where $n_{b,k}$ is the number of training boundary points in the $k$-th subdomain.
If the gate net mimics the hard decomposition of XPINN, then we assume that the $k$-th sub-PINN $E_k$ focuses on $\Omega_k$; in particular, $\Vert G(\boldsymbol{x})_j \Vert_{\infty} \leq \overline{c}$ for $j \neq k$, where $\overline{c}$ approaches zero. Note that Theorem \ref{thm:1} does not rely on any requirement on the quantity $\overline{c}$; we make this assumption only for illustration. Then, the bound reduces to
\begin{equation}
\begin{aligned}
R_{D \cap \Omega_k}(\boldsymbol{\theta}_S) &\leq R_{S \cap \Omega_k}(\boldsymbol{\theta}_S)+ \Tilde{O}\left(\frac{\Vert G(\boldsymbol{x})_k \Vert_{\infty}R_0(E_k \circ h) + \overline{c}\sum_{j \neq k}R_0(E_j \circ h)}{n_{b,k}^{1/2}}+ \sqrt{\frac{\log(4/\delta(E))}{n_{b,k}}} \right)\\
&\approx R_{S \cap \Omega_k}(\boldsymbol{\theta}_S)+ \Tilde{O}\left(\frac{R_0(E_k \circ h)}{n_{b,k}^{1/2}}+ \sqrt{\frac{\log(4/\delta(E))}{n_{b,k}}} \right),
\end{aligned}
\end{equation}
which is exactly the bound of XPINN if the domain decomposition is hard.
Therefore, APINN has the benefit of XPINN, i.e., it can decompose the target function into several simpler parts in some sub-domains.
Furthermore, since APINN does not require the complex interface losses, its training loss $R_S(\boldsymbol{\theta}_S)$ is usually smaller than that of XPINN, and it is free from errors near the interface.
In addition to soft domain decomposition, even if the output of $G$ does not concentrate on certain sub-domains, i.e., does not mimic XPINN, APINN still enjoys the benefit of general function decomposition, and each sub-PINN of APINN is provided with all training data, which prevents overfitting.
Concretely, for boundary loss of APINN, the complexity term of the model is $$\frac{\sum_{j=1}^m\max_{\boldsymbol{x} \in \partial \Omega}\Vert G(\boldsymbol{x})_j \Vert_{\infty}R_0(E_j \circ h)}{n_b^{1/2}},$$ which is a weighted average of the complexity of all sub-PINNs.
Note that, similar to PINN, if we view APINN on the entire domain, then all sub-PINNs are able to take advantage of all training samples, thus preventing overfitting.
Ideally, the weighted sum of the complexities of the parts is smaller than the complexity of the whole. To be more specific, if we train a single PINN $u_{\boldsymbol{\theta}}$, the complexity term is $R_0(u_{\boldsymbol{\theta}})$.
If APINN is able to decompose the target function into several simpler parts such that their complexity weighted sum is smaller than the complexity of PINN, then APINN can outperform PINN.
\subsection{APINN with Trainable Gate Network}
In this section, we state the generalization bound for APINN with a trainable gate network. In this case, both the gate network and the $m$ sub-PINNs contribute to the complexity of the APINN model, influencing generalization at the same time.
\begin{theorem}\label{thm:2}
Suppose Assumption \ref{assumption:bounded} holds. For any $\delta\in(0,1)$, with probability at least $1-\delta$ over the choice of random samples $S=\left\{\boldsymbol{x}_{i}\right\}_{i=1}^{n_b+n_r} \subset \overline{\Omega}$ with $n_b$ boundary points and $n_r$ residual points, we have the following generalization bound for an APINN model $u_{\boldsymbol{\theta}_S}(\boldsymbol{x}) = \sum_{i=1}^m (G(\boldsymbol{x}))_i E_i(h(\boldsymbol{x}))$:
\begin{equation}
\begin{aligned}
R_{D \cap \partial \Omega}(\boldsymbol{\theta}_S) &\leq R_{S\cap \partial \Omega}(\boldsymbol{\theta}_S) + \Tilde{O}\left(\frac{R_0(G)+\sum_{j=1}^m R_0(E_j \circ h)}{n_{b}^{1/4}}+ \sqrt{\frac{\log(4/\delta(G,E))}{n_{b}}} \right).\\
R_{D \cap \Omega}(\boldsymbol{\theta}_S) &\leq R_{S\cap \Omega}(\boldsymbol{\theta}_S) + \Tilde{O}\left(\frac{\sum_{i=0}^2\left(R_i(G)+\sum_{j=1}^mR_{2-i}(E_j \circ h)\right)}{n_{r}^{1/4}}+ \sqrt{\frac{\log(4/\delta(G,E))}{n_{r}}} \right),
\end{aligned}
\end{equation}
where
$
\delta(G,E) = \frac{\delta}{{\prod_{l=1}^L\prod_{j \in \{1,\cdots,m,G\}} M_j(l)(M_j(l)+1)N_j(l)(N_j(l)+1)}}.
$
\end{theorem}
Intuition: The argument is similar to that of Theorem \ref{thm:1}. Here, we treat the APINN model as a whole. Now, $G(\boldsymbol{x})$ contributes its complexity $R_i(G)$, rather than its infinity norm, since it is trainable rather than fixed.
\begin{figure}
\centering
\includegraphics[scale=0.5]{Burgers_Sol.png}
\includegraphics[scale=0.5]{Burgers_Point.pdf}
\caption{The Burgers equation. Left: ground truth solution. Right: training points of XPINN.}
\label{fig:burgers1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.8]{Burgers_Loss_L2.png}
\caption{The Burgers equation. Train loss and relative $L_2$ error. Blue: Adam optimization. Red: L-BFGS finetuning. Green: final convergence point. In this case, Adam can already train the model to convergence, so additional L-BFGS converges fast due to its stopping criterion.}
\end{figure}
\subsection{Explain the Effectiveness of The APINN via Theorem \ref{thm:2}}
By Theorem \ref{thm:2}, besides the benefits explained by Theorem \ref{thm:1}, a good initialization of soft decomposition inspired by XPINN helps generalization. If this is the case, the trained gate network's parameters will not deviate significantly from their initialization. Consequently, $N_j(l)$ quantities for all $j \in \{1,\cdots,m,G\}$ and $l\in\{1,\cdots,L\}$ will be smaller, and thus $R_i(G)$ will be smaller, decreasing the right hand side of the bound stated in Theorem \ref{thm:2}, which means good generalization.
\section{Computational Experiments}
\subsection{The Burgers Equation}
The one-dimensional viscous Burgers equation is given by
\begin{equation}
\begin{aligned}
& u_t + uu_x - \frac{0.01}{\pi}u_{xx} = 0, x \in [-1,1], t\in [0,1].\\
& u(0,x) = -\sin(\pi x).\\
& u(t,-1) = u(t,1) = 0.
\end{aligned}
\end{equation}
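The PDE residual that the PINN minimizes can be formed symbolically; the sketch below (sympy) evaluates it for a toy trial function of our own choosing, which satisfies the initial and boundary conditions but is not the true solution:

```python
import sympy as sp

# Burgers residual r = u_t + u*u_x - (0.01/pi)*u_xx for a toy trial function.
x, t = sp.symbols('x t')
nu = sp.Rational(1, 100) / sp.pi
u_trial = -sp.sin(sp.pi * x) * sp.exp(-t)   # toy ansatz, not the true solution
residual = (sp.diff(u_trial, t) + u_trial * sp.diff(u_trial, x)
            - nu * sp.diff(u_trial, x, 2))
```

A PINN penalizes the squared residual at the collocation points together with the initial/boundary mismatches; here the ansatz meets $u(0,x) = -\sin(\pi x)$ and $u(t,\pm 1) = 0$ exactly but leaves a nonzero residual.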
The difficulty of the Burgers equation lies in the steep region near $x = 0$ where the solution changes rapidly, which is hard for PINNs to capture. The ground truth solution is visualized in Figure \ref{fig:burgers1} left. In this case, XPINN performs badly near the interface. Thus, APINN improves on XPINN, especially in the accuracy near the interface, both by getting rid of the interface losses and by improving parameter efficiency.
\subsubsection{PINN and Hard XPINN}
For the PINN, we use a 10-layer tanh network of width 20 with 3441 parameters, and provide 300 boundary points and 20000 residual points. We use a weight of 20 on the boundary loss and 1 on the residual loss. We train the PINN with the Adam optimizer at a learning rate of 8e-4 for 100k epochs.
XPINNv1 decomposes the domain based on whether $x > 0$.
The weights for boundary loss, residual loss, interface boundary loss, and interface residual loss are 20, 1, 20, 1, respectively.
XPINNv2 shares the same decomposition as XPINNv1, but its weights for boundary loss, residual loss, interface boundary loss, and interface first-order derivative continuity loss are 20, 1, 20, 1, respectively.
The sub-nets are 6-layer tanh networks of width 20 with 3522 parameters in total, and we provide 150 boundary points and 10000 residual points for each sub-net in XPINN. The number of interface points is 1000.
The training points of XPINNs are visualized in Figure \ref{fig:burgers1} right.
We train XPINNs by the Adam optimizer with 8e-4 learning rate for 100k epochs.
Both models are finetuned by the L-BFGS optimizer until convergence after Adam optimization.
\begin{table}[]
\centering
\caption{Results for the Burgers' equation.}
\label{tab:burgers}
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & PINN & XPINNv1 & XPINNv2 & -\\ \hline
Rel. $L_2$ & 1.620E-3$\pm$7.632E-4 & 1.490E-1$\pm$6.781E-3 & 1.304E-1$\pm$7.256E-3 & -\\ \hline
Model & APINN-X-F & APINN-X & APINN-M-F & APINN-M \\ \hline
Rel. $L_2$ & 1.293E-3$\pm$4.629E-4 & \textbf{9.109E-4$\pm$3.689E-4} & 1.375E-3$\pm$6.732E-4 & 1.137E-3$\pm$7.675E-4 \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.82]{burgers_error.png}
\caption{{The Burgers equation. Left: error plot of APINN. Right: error plot of XPINNv1. Note that APINN and XPINNv1 share the same colorbar. Compared to the error of XPINNv1, that of APINN is negligible.}}
\label{fig:burgers2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1]{SXPINN_Gate_burgers_0.png}
\includegraphics[scale=1]{SXPINN_Gate_burgers_2.png}
\includegraphics[scale=1]{SXPINN_Gate_MPINN_burgers_0.png}
\includegraphics[scale=1]{SXPINN_Gate_MPINN_burgers_2.png}
\caption{The Burgers equation: APINN gate nets $G_1$ after convergence at the last epoch.
The gate for the second subnet, $G_2$, follows from the partition-of-unity property $G_1 + G_2 = 1$.
First row: those of APINN-X with two different random seeds. Relative $L_2$ errors = 7.541E-4, 8.034E-4.
Second row: those of APINN-M with two different random seeds. Relative $L_2$ errors = 6.936E-4, 8.284E-4.}
\label{fig:burgers_gate}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.35]{Burgers_0.png}
\includegraphics[scale=0.35]{Burgers_1.png}
\includegraphics[scale=0.35]{Burgers_2.png}
\includegraphics[scale=0.35]{Burgers_3.png}
\caption{The Burgers equation: visualization of the gating network $G_1$ optimization trajectory via four snapshots, at epoch = 0, 1E4, 2E4, 3E4, from left to right and from top to bottom.
The gate for the second subnet, $G_2$, follows from the partition-of-unity property of gating networks, i.e., $\sum_i G_i = 1$. }
\label{gif:burgers}
\end{figure}
\subsubsection{APINN}
To mimic the hard decomposition based on whether $x > 0$, we pretrain the gate net $G$ on the function $(G(x, t))_1 = 1 - (G(x, t))_2 = \exp(x-1)$, so that the first sub-PINN focuses on where $x$ is larger and the second sub-PINN focuses on where $x$ is smaller. The corresponding model is named APINN-X.
In addition, we pretrain the gate net $G$ on $(G(x, t))_1 = 1 - (G(x, t))_2 = 0.8$ to mimic multi-level PINN (MPINN) \cite{anonymous2022multilevel}, where the first sub-net focuses on the majority part, while the second one is responsible for the minority part. The corresponding model is named APINN-M.
All networks have a width of 20. The numbers of layers in the gate network, sub-PINN networks, and shared network are 2, 4, and 3, respectively, with 3462 / 3543 parameters depending on whether the gate network is trainable.
All models are finetuned by the L-BFGS optimizer until convergence after Adam optimization.
\subsubsection{Results}The results for the Burgers equation are shown in Table \ref{tab:burgers}. The reported relative $L_2$ errors are averaged over 10 independent runs, where each run reports its best $L_2$ error over the whole optimization process. The error plots of APINN-X and XPINNv1 are visualized in Figure \ref{fig:burgers2} left and right, respectively.
\begin{itemize}
\item XPINN performs much worse than PINN, due to the large error near the interface, where the steep region is located.
\item APINN-X performs the best because its parameterization is more flexible than that of PINN and it does not require interface conditions as XPINN does, so it can model the steep region well.
\item APINN-M performs worse than APINN-X, which means that MPINN initialization is worse than the XPINN one in this Burgers problem.
\item APINN-X-F with a fixed gate function performs slightly worse than PINN and APINN, which justifies the flexibility of trainable domain decomposition. However, even without fine-tuning the domain decomposition, APINN-X-F can still outperform XPINN significantly, which shows the effectiveness of soft domain partition.
\end{itemize}
\subsubsection{Visualization of Gating Networks}
Some representative optimized gating networks after convergence are visualized in Figure \ref{fig:burgers_gate}.
In the first row, we visualize two gate nets of APINN-X. Although their optimized gates differ, they retain the original left-and-right decomposition, with only the interface position changing. Thus, their $L_2$ errors are similar.
In the second row, we show two gate nets of APINN-M. Their performances differ considerably, and they weight the two subnets differently: the third figure uses a weight of $\approx 0.9$ for subnet-1 and $\approx 0.1$ for subnet-2, while the fourth uses $\approx 0.6$ and $\approx 0.4$, respectively. This suggests that the training of MPINN-type decomposition is unstable, that APINN-M is worse than its XPINN counterpart in the Burgers problem, and that the weighting in MPINN-type decomposition is crucial to its final performance.
From these examples, we can see that initialization is crucial for APINN's success. Despite the optimization, the trained gate will still be similar to the initialization.
Furthermore, we visualize the optimization trajectory of the gating network for the first subnet in the Burgers equation in Figure \ref{gif:burgers}, where the snapshots show the gating net at epoch = 0, 1E4, 2E4, and 3E4. The gate for the second subnet, $G_2$, follows from the partition-of-unity property $G_1 + G_2 = 1$. The trajectory is smooth, and the gating net gradually converges by shifting the interface from left to right.
\begin{figure}
\centering
\includegraphics[scale=0.5]{Helmholtz_Sol.png}
\includegraphics[scale=0.5]{Helmholtz_Point.pdf}
\caption{The Helmholtz equation. Left: ground truth solution. Right: training points of XPINN.}
\label{fig:helmholtz1}
\end{figure}
\subsection{Helmholtz Equation}
The Helmholtz equation arises in physics problems including seismology, electromagnetic radiation, and acoustics; it is given by
\begin{equation}
\begin{aligned}
& u_{xx} + u_{yy} + k^2 u = q(x,y), x \in [-1,1], y\in [-1,1].\\
& u(-1,y) = u(1,y) = u(x,-1) = u(x,1) = 0.\\
& q(x, y) = \left(- (a_1 \pi)^2 - (a_2 \pi)^2 + k^2 \right)\sin(a_1\pi x)\sin(a_2\pi y).
\end{aligned}
\end{equation}
The analytic solution is
\begin{equation}
u(x, y) = \sin(a_1\pi x)\sin(a_2\pi y),
\end{equation}
and is shown in Figure \ref{fig:helmholtz1} left.
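That the stated solution solves the Helmholtz equation with this forcing is a one-line symbolic check; the particular values of $a_1$, $a_2$, $k$ below are placeholders, since the section does not fix them:

```python
import sympy as sp

# Check u_xx + u_yy + k^2 u = q for u = sin(a1*pi*x) * sin(a2*pi*y).
x, y = sp.symbols('x y')
a1, a2, k = 1, 4, 1    # placeholder constants; the paper's values may differ
u = sp.sin(a1 * sp.pi * x) * sp.sin(a2 * sp.pi * y)
q = (-(a1 * sp.pi) ** 2 - (a2 * sp.pi) ** 2 + k ** 2) * u
residual = sp.diff(u, x, 2) + sp.diff(u, y, 2) + k ** 2 * u - q
```

The zero boundary condition also holds automatically, since $\sin(a_1\pi x)$ vanishes at $x = \pm 1$ for integer $a_1$ (and likewise in $y$).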
In this case, XPINNv1 performs worse than PINN due to the large errors near the interface. With additional regularization, XPINNv2 reduces the relative $L_2$ error by 47\% compared to PINN, but it still performs worse than our APINN due to the overfitting caused by the limited training data available in each sub-domain.
\begin{table}[]
\centering
\caption{Results for the Helmholtz equation.}
\label{tab:helmholtz}
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & PINN & XPINNv1 & XPINNv2 & -\\ \hline
Rel. $L_2$ & 2.438E-3$\pm$5.196E-4 & 5.222E-2$\pm$4.001E-3 & 1.297E-3$\pm$1.786E-4 & - \\ \hline
Model & APINN-X-F & APINN-X & APINN-M-F & APINN-M \\ \hline
Rel. $L_2$ & 1.554E-3$\pm$3.203E-4 & \textbf{1.275E-3$\pm$4.710E-4} & 1.911E-3$\pm$3.850E-4 & 1.477E-3$\pm$5.679E-4\\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.8]{Helmholtz_Loss_L2.png}
\caption{The Helmholtz equation. Train loss and relative $L_2$ error during optimization. Blue: Adam optimization. Red: L-BFGS finetuning. Green: final convergence point. L-BFGS automatically stops since its convergence criterion is satisfied.}
\end{figure}
\subsubsection{PINN and Hard XPINN}
For PINN, we provide 400 boundary and 10000 residual points.
The XPINN decomposes the domain based on whether $y > 0$, whose training points are shown in Figure \ref{fig:helmholtz1} right.
We provide 200 boundary points, 5000 residual points, and 400 interface points for the two sub-nets in XPINN.
Other settings of PINN and XPINN are the same as those in the Burgers equation.
\begin{figure}
\centering
\includegraphics[scale=0.35]{helmholtz_error1.png}
\includegraphics[scale=0.35]{helmholtz_error2.png}
\caption{{The Helmholtz equation. Left: error plot of XPINNv1. Middle: error plot of APINN. Right: error plot of XPINNv2.}}
\label{fig:helmholtz2}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.7]{SXPINN_Gate_helmholtz_1.png}
\includegraphics[scale=0.7]{SXPINN_Gate_helmholtz_2.png}
\includegraphics[scale=0.7]{SXPINN_Gate_helmholtz_3.png}
\caption{The Helmholtz equation: APINN-X gate nets $G_1$ after convergence at the last epoch. Their relative $L_2$ errors are similar.}
\label{fig:helmholtz_gate}
\end{figure}
\subsubsection{APINN}
We pretrain the gate net $G$ on the function $(G(x, y))_1 = 1-(G(x, y))_2 = \exp(y-1)$ to mimic XPINN, and on $(G(x, y))_1 = 1-(G(x, y))_2 = 0.8$ to mimic MPINN.
For other experimental settings, please refer to the introduction of APINN in the Burgers equation.
\subsubsection{Results}
The results for the Helmholtz equation are shown in Table \ref{tab:helmholtz}. The reported relative $L_2$ errors are averaged over 10 independent runs, where each run reports its lowest error during optimization. The error plots of XPINNv1, APINN-X, and XPINNv2 are visualized in Figure \ref{fig:helmholtz2} left, middle, and right, respectively.
\begin{itemize}
\item XPINNv1 performs the worst, since its interface loss cannot enforce the interface continuity satisfactorily.
\item XPINNv2 performs significantly better than PINN, but worse than APINN-X, because it slightly overfits within the two sub-domains due to the small number of training samples available to each sub-net, compared with APINN-X.
\item APINN-M performs worse than APINN-X due to bad initialization of the gating network.
\item The errors of APINN, XPINNv2 and PINN concentrate near the boundary, which is due to the gradient pathology \cite{wang2021understanding}.
\end{itemize}
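Throughout the result tables, the relative $L_2$ error is the standard ratio $\lVert u_\theta - u \rVert_2 / \lVert u \rVert_2$ over the test points. A minimal sketch of the metric (the vectorized version used in practice is equivalent):

```python
import math

def rel_l2_error(pred, true):
    """Relative L2 error over a test grid: ||pred - true||_2 / ||true||_2."""
    num = math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)))
    den = math.sqrt(sum(t ** 2 for t in true))
    return num / den
```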
\subsubsection{Visualization of Optimized Gating Networks}
The randomness of this problem is smaller, so that the final relative $L_2$ errors of different runs are similar.
Some representative optimized gating networks after convergence of APINN-X are visualized in Figure \ref{fig:helmholtz_gate}.
Specifically, every gating network maintains approximately the original decomposition into an upper and a lower domain, despite the fact that the interfaces change a bit in each run.
From these observations, the XPINN-type decomposition into an upper and a lower domain is already satisfactory for this problem. We also notice that XPINN outperforms PINN here, which is consistent with this observation.
Furthermore, we visualize the optimization trajectory of the gating network for the first subnet in the Helmholtz equation in Figure \ref{gif:helmholtz}, where each snapshot shows the gating net at epochs 0 to 5E2, six snapshots in all. The gate for the second subnet, $G_2$, follows from the partition-of-unity property of the gating networks, i.e., $\sum_i G_i = 1$. The trajectory is similar to that in the Burgers equation, although here the gating net converges much faster.
\begin{figure}
\centering
\includegraphics[scale=0.45]{Helmholtz_0.png}
\includegraphics[scale=0.45]{Helmholtz_1.png}
\includegraphics[scale=0.45]{Helmholtz_2.png}
\includegraphics[scale=0.45]{Helmholtz_3.png}
\includegraphics[scale=0.45]{Helmholtz_4.png}
\includegraphics[scale=0.45]{Helmholtz_5.png}
\caption{The Helmholtz equation: visualization of the gating network $G_1$ optimization trajectory via six snapshots, at epoch = 0, 1E2, 2E2, 3E2, 4E2, and 5E2, from left to right and from top to bottom.
The gate for the second subnet, $G_2$, follows from the partition-of-unity property of the gating networks, i.e., $\sum_i G_i = 1$.}
\label{gif:helmholtz}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{KG_Sol.png}
\includegraphics[scale=0.5]{KG_Point.pdf}
\caption{The Klein-Gordon equation. Left: ground truth solution. Right: training points of XPINN.}
\label{fig:KG1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.8]{KG_Loss_L2.png}
\caption{The Klein-Gordon equation. Train loss and relative $L_2$ error during optimization. Green: final convergence point. L-BFGS automatically stops since its convergence criterion is satisfied.}
\end{figure}
\begin{table}[]
\centering
\caption{Results for the Klein-Gordon equation.}
\label{tab:KG}
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & PINN & XPINNv1 & XPINNv2 & -\\ \hline
Rel. $L_2$ & 3.565E-3$\pm$9.412E-4 & 5.980E-1$\pm$7.601E-2 & 3.700E-3$\pm$2.741E-4 & - \\ \hline
Model & APINN-X-F & APINN-X & APINN-M-F & APINN-M\\ \hline
Rel. $L_2$ & 3.195E-3$\pm$8.112E-4 & {3.030E-3$\pm$1.474E-3} & 3.197E-2$\pm$6.253E-3 & \textbf{2.846E-3$\pm$8.568E-4}\\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.36]{KG_error1.pdf}
\includegraphics[scale=0.5]{KG_error2.png}
\caption{{The Klein-Gordon equation. Left: error plot of XPINNv1. Middle: error plot of APINN-X. Right: error plot of XPINNv2. Since XPINNv1 exhibits large errors near the interface, it has its own colorbar; on its scale, the errors of the other models would be negligible. APINN and XPINNv2 share the same colorbar for clear comparison, since their error scales are similar.}}
\label{fig:KG2}
\end{figure}
\subsection{Klein-Gordon Equation}
In modern physics, the Klein-Gordon equation is used in a wide variety of fields, such as particle physics, astrophysics, cosmology, and classical mechanics. It is given by
\begin{equation}
\begin{aligned}
& u_{tt} - u_{xx} + u^3 = f(x,t), x \in [0,1], t\in [0,1].\\
& u(x,0) = x, \quad u_t(x,0) = 0.\\
& u(x,t) = h(x,t), x\in\{0,1\}, t\in[0,1].
\end{aligned}
\end{equation}
Its boundary and initial conditions are given by the ground truth solution:
\begin{equation}
u(x, t) = x \cos(5 \pi t) + (xt)^3,
\end{equation}
and it is shown in Figure \ref{fig:KG1} left. In this case, XPINNv1 performs worse than PINN due to the large errors near the interface induced by unsatisfactory continuity between sub-nets, while XPINNv2 performs similarly to PINN. APINN performs much better than XPINNv1, and better than PINN and XPINNv2.
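Since the problem is manufactured from the exact solution, the forcing $f$ follows by substituting $u$ into the operator, $f = u_{tt} - u_{xx} + u^3$. The sketch below carries out that substitution by hand and cross-checks the derivatives with central finite differences.

```python
import math

PI = math.pi

def u(x, t):
    """Manufactured solution u(x,t) = x cos(5*pi*t) + (x t)^3."""
    return x * math.cos(5 * PI * t) + (x * t) ** 3

def f(x, t):
    """Forcing from substituting u into u_tt - u_xx + u^3 (hand-derived:
    u_tt = -25 pi^2 x cos(5 pi t) + 6 x^3 t,  u_xx = 6 x t^3)."""
    u_tt = -25 * PI ** 2 * x * math.cos(5 * PI * t) + 6 * x ** 3 * t
    u_xx = 6 * x * t ** 3
    return u_tt - u_xx + u(x, t) ** 3

def f_fd(x, t, h=1e-4):
    """Same quantity with central finite differences, as a check."""
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_tt - u_xx + u(x, t) ** 3

err = max(abs(f(x, t) - f_fd(x, t)) for x, t in [(0.3, 0.7), (0.5, 0.2), (0.9, 0.9)])
```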
\subsubsection{PINN and Hard XPINN}
The experimental settings of PINN and XPINN are identical to those of the previous Helmholtz equation, with the exception that XPINN now decomposes the domain based on whether $x > 0.5$ and Adam optimization is performed for 200k epochs.
\subsubsection{APINN}
We pretrain the gate net $G$ on the function $(G(x, t))_1 = 1 - (G(x, t))_2 = \exp(-x)$ to mimic XPINN, and on $(G(x, t))_1 = 1- (G(x, t))_2 = 0.8$ to mimic MPINN.
For other experimental settings, please refer to the introduction of APINN in the first equation.
\subsubsection{Results}
The results for the Klein-Gordon equation are shown in Table \ref{tab:KG}. The reported relative $L_2$ errors are averaged over 10 independent runs. The error plots of XPINNv1, APINN-X, and XPINNv2 are visualized in Figure \ref{fig:KG2} left, middle, and right, respectively.
\begin{itemize}
\item XPINNv1 performs the worst, since the interface loss of XPINNv1 cannot enforce the interface continuity well, while XPINNv2 performs similarly to PINN, since the two factors in XPINN generalization reach a balance.
\item APINN performs better than all XPINNs and PINNs, and APINN-M is slightly better than APINN-X.
\end{itemize}
\begin{table}[]
\centering
\caption{Results for the Wave equation.}
\label{tab:wave}
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & PINN & XPINNv2 & APINN-X & APINN-M\\ \hline
Rel. $L_2$ & 1.900E-3$\pm$3.375E-4 & 1.378E-3$\pm$2.424E-4 & {1.492E-3$\pm$7.041E-4} & \textbf{1.299E-3$\pm$2.941E-4} \\ \hline
\end{tabular}
\end{table}
\subsection{Wave Equation}
We consider a wave problem given by
\begin{equation}
\begin{aligned}
& u_{tt} = 4 u_{xx}, x \in [0,1], t\in [0,1].
\end{aligned}
\end{equation}
The boundary and initial conditions are given by the ground truth solution:
\begin{equation}
u(x, t) = \sin(\pi x) \cos(2 \pi t),
\end{equation}
and is shown in Figure \ref{fig:wave1} left.
In this example, XPINN is already significantly better than PINN because its relative $L_2$ error is 27\% lower than that of PINN.
However, APINN still performs slightly better than XPINN, even if XPINN is already good enough.
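One can quickly confirm that the stated solution indeed solves $u_{tt} = 4u_{xx}$: $u_{tt} = -4\pi^2 u$ and $u_{xx} = -\pi^2 u$, so the residual vanishes identically. A finite-difference check over the domain:

```python
import math

def u(x, t):
    """Ground truth solution u(x,t) = sin(pi x) cos(2 pi t)."""
    return math.sin(math.pi * x) * math.cos(2 * math.pi * t)

def residual_fd(x, t, h=1e-4):
    """u_tt - 4 u_xx via central differences; ~0 for the exact solution."""
    u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    return u_tt - 4.0 * u_xx

worst = max(abs(residual_fd(0.1 * i, 0.09 * j))
            for i in range(1, 10) for j in range(1, 10))
```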
\begin{figure}
\centering
\includegraphics[scale=0.5]{Wave_Sol.png}
\includegraphics[scale=0.5]{Wave_Point.pdf}
\caption{The Wave equation. Left: ground truth solution. Right: training points of XPINN.}
\label{fig:wave1}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.8]{Wave_Loss_L2.png}
\caption{The Wave equation. Train loss and relative $L_2$ error during optimization. Blue: Adam optimization. Red: L-BFGS finetuning. Green: final convergence point. L-BFGS automatically stops since its convergence criterion is satisfied.}
\end{figure}
\begin{figure}
\includegraphics[scale=0.6]{Wave_error.png}
\caption{{The Wave equation. Left: error plot of PINN. Middle: error plot of XPINNv2. Right: error plot of APINN.}}
\label{fig:wave2}
\end{figure}
\subsubsection{PINN and Hard XPINN}
We use a 10-layer tanh network with 3441 neurons, and provide 400 boundary points and 10,000 residual points for PINN. We use a weight of 20 on the boundary loss and a unit weight on the residual loss. We train PINN using the Adam optimizer for 100k epochs at an 8E-4 learning rate.
XPINN decomposes the domain based on whether $t > 0.5$.
The weights for boundary loss, residual loss, interface boundary loss, interface residual loss, and interface first-order derivative continuity loss are 20, 1, 20, 0, 1, respectively. The sub-nets are 6-layer tanh networks of width 20 with 3522 neurons in total, and we provide 200 boundary points, 5000 residual points, and 400 interface points for all sub-nets in XPINN. The training points of XPINN are visualized in Figure \ref{fig:wave1} right. We train XPINN using the Adam optimizer for 100k epochs at a learning rate of 1E-4.
\subsubsection{APINN}
The APINNs mimic XPINN by pretraining on $(G(x, t))_1 = 1-(G(x, t))_2= \exp(-t)$ and mimic MPINN by pretraining on $(G(x, t))_1 =1-(G(x, t))_2= 0.8$.
For other experimental settings, please refer to the introduction of APINN in the first equation.
\begin{figure}
\centering
\includegraphics[scale=1]{SXPINN_Gate_XPINN_Wave_0.png}
\includegraphics[scale=1]{SXPINN_Gate_XPINN_Wave_1.png}
\includegraphics[scale=1]{SXPINN_Gate_MPINN_Wave_0.png}
\includegraphics[scale=1]{SXPINN_Gate_MPINN_Wave_1.png}
\caption{The Wave equation: APINN gate nets $G_1$ after convergence at the last epoch.
The gate for the second subnet, $G_2$, follows from the partition-of-unity property of the gating networks, i.e., $\sum_i G_i = 1$.
First row: those of APINN-X with two different random seeds. Relative $L_2$ errors = 1.477E-3, 1.527E-3.
Second row: those of APINN-M with two different random seeds. Relative $L_2$ errors = 1.055E-3, 1.315E-3.}
\label{fig:wave_gate}
\end{figure}
\subsubsection{Results}
The results for the wave equation are shown in Table \ref{tab:wave}. The reported relative $L_2$ errors are averaged over 10 independent runs; for each run, we report the error at the epoch with the smallest training loss among the last 10\% of epochs. The error plots of PINN, XPINNv2, and APINN-X are visualized in Figure \ref{fig:wave2} left, middle, and right, respectively.
\begin{itemize}
\item Although XPINN is already much better than PINN and reduces the relative $L_2$ error of PINN by 27\%, APINN can still slightly improve over XPINN and performs the best among all models. In particular, APINN-M outperforms APINN-X.
\end{itemize}
\subsubsection{Visualization of Optimized Gating Networks}
Some representative optimized gating networks after convergence are visualized in Figure \ref{fig:wave_gate}. The first row shows the gate networks of optimized APINN-X, while the second row shows those of APINN-M.
In this case, the variance is much smaller, and the optimized gate nets maintain the characteristics at initialization, i.e., those of APINN-X remain an upper-and-lower decomposition and those of APINN-M remain a multi-level partition. Gate nets under the same initialization are also similar in different independent runs, which is consistent with their similar performances.
\begin{comment}
\subsection{Poisson's Equation}
In this subsection, we consider a Poisson equation with residual discontinuity, which is given by $u_{xx} + u_{yy} = f$, where $(x, y) \in [0, 1] \times [0, 1]$, and $f$ is given by $f(x, y) = 1$ for $(x,y)\in[0.25, 0.75] \times [0.25, 0.75]$, and $f(x, y) = 0$ for the rest of the domain. The boundary condition is zero. The ground truth solution is visualized in Figure \ref{fig:poisson} left.
In this case, XPINN performs much worse than PINN, since the former incurs huge errors near the interface, due to the continuity conditions near the interface. Since APINN does not require the interface condition and is parameter-efficient, it performs better than both XPINN and PINN.
\begin{figure}
\centering
\includegraphics[scale=0.5]{Figures/Poisson_Sol.png}
\includegraphics[scale=0.5]{Figures/Poisson_Point.pdf}
\caption{Poisson's equation. Left: ground truth solution. Right: training points of XPINN.}
\label{fig:poisson}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.8]{Fig_loss_l2/Poisson_Loss_L2.png}
\caption{Poisson's equation. Average train loss and relative $L_2$ error during optimization of one run.}
\end{figure}
\subsubsection{PINN and Hard XPINN}
For PINN, we use a 10-layer tanh network, and provide 80 boundary points and 400 residual points. We use 20 for the weight on the boundary and unity for the weight for the residual. It is trained by Adam \cite{kingma2014adam} with 1e-3 learning rate for 60k epochs.
For domain decomposition of (hard) XPINN, sub-domain 1 contains the area $(x,y)\in[0.25, 0.75]\times [0.25, 0.75]$, while sub-domain 2 contains the rest of the domain. The weights for boundary loss, residual loss, interface boundary loss, interface residual loss and interface first-order derivative loss are 20, 1, 20, 20, 20, respectively. The sub-nets are 6-layer tanh networks, and we provide 0 and 80 boundary points for sub-nets 1 and 2, respectively.
We choose this structure to ensure that PINN and XPINN have a similar number of parameters.
We also provide 100 and 300 residual points for them, respectively. The number of interface points is 1000.
The training points for XPINN are shown in Figure \ref{fig:poisson} right.
We train XPINN by the Adam optimizer with 1e-3 learning rate for 60k epochs.
In the XPINN generalization paper \cite{hu2021extended}, it is shown that XPINNs either perform poorly on the boundary or near the interface of the two sub-domains. Therefore, XPINNs fail in this problem, and we expect that APINN, without additional continuity losses, can improve over XPINN.
\begin{table}[]
\centering
\caption{Results for the Poisson's equation.}
\label{tab:poisson}
\begin{tabular}{|c|c|c|c|}
\hline
Model & PINN & XPINNv1 & XPINNv2\\ \hline
Rel. $L_2$ & 4.436E-2$\pm$ 2.721E-2 & 4.022E-1$\pm$ 1.005E-3 & 8.578E-2$\pm$ 9.856E-3 \\ \hline
Model & APINN-F & APINN & APINN-O\\ \hline
Rel. $L_2$& 4.263E-2 $\pm$ 1.189E-2 & \textbf{3.978E-2 $\pm$ 2.061E-2} & \textcolor{red}{2.749E-2 $\pm$ 1.432E-2}\\\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.9]{Figures/SXPINN_Gate_poisson_0.png}
\includegraphics[scale=0.9]{Figures/SXPINN_Gate_poisson_3.png}
\caption{Poisson's equation: Visualization of optimized gate networks $G_1$ after convergence of two independent runs with different seeds. Left: relative $L_2$ error = 6.599E-2. Right: relative $L_2$ error = 1.900E-2.}
\label{fig:poisson_gate}
\end{figure}
\subsubsection{APINN}
We pretrain the gate net $G$ on the Gaussian function $(G(x, y))_1 = 1-(G(x, y))_2 = \exp(-(x-0.5)^2 - (y-0.5)^2)$ to force the first sub-net to focus on the central part and the second sub-net on the rest of the domain. For other experimental settings, please refer to the introduction of APINN in the first equation.
\subsubsection{Results}
The results for the Poisson's equation are shown in Table \ref{tab:poisson}. The reported relative $L_2$ errors are averaged over 10 independent runs; for each run, we report the error at the epoch with the smallest training loss among the last 10\% of epochs. The key observations are as follows.
\begin{itemize}
\item Although XPINN is given an intuitive decomposition, it still performs worse than PINN due to the complicated interface conditions, which induce large errors near the interface. Our APINN performs the best, which shows the advantages of dispensing with the interface loss and of being parameter-efficient.
\item Even APINN-F with fixed gate network can outperform PINN and XPINN, which shows the advantages of soft decomposition and parameter sharing.
\end{itemize}
\subsubsection{Visualization of Optimized Gating Network}
We visualize several representative optimized gating networks $G_1$ after convergence as well as their corresponding relative $L_2$ error in Figure \ref{fig:poisson_gate}.
For good APINN models with low errors, the gating networks are almost constant functions with $(G(x,y))_1 > 0.9$ and $(G(x,y))_2 < 0.1$, which means that these good APINNs are actually mimicking an MPINN that puts most weight on the first sub-PINN.
By contrast, worse APINN models with larger errors do not follow such a formulation: their concentration on the first sub-net is not enough.
Furthermore, we visualize the optimization trajectory of the gating network for subnet 1 in the Poisson's equation in Figure \ref{gif:poisson}, where the snapshots show the gating net $G_1$ at epochs 0, 1E4, 2E4, 3E4, 4E4, and 5E4. Although the change is not obvious, the weight of the gating network for the first subnet gradually increases, to mimic an MPINN-type \cite{anonymous2022multilevel} decomposition.
\begin{figure}
\centering
\includegraphics[scale=0.3]{APINN_GIF/Poisson_0.png}
\includegraphics[scale=0.3]{APINN_GIF/Poisson_1.png}
\includegraphics[scale=0.3]{APINN_GIF/Poisson_2.png}
\includegraphics[scale=0.3]{APINN_GIF/Poisson_3.png}
\includegraphics[scale=0.3]{APINN_GIF/Poisson_4.png}
\includegraphics[scale=0.3]{APINN_GIF/Poisson_5.png}
\caption{Poisson equation: visualization of the gating network $G_1$ optimization trajectory via snapshots at epoch = 0, 1E4, 2E4, 3E4, 4E4, and 5E4.}
\label{gif:poisson}
\end{figure}
\subsubsection{Additional Experiment: APINN with Good Initialization}
Here, we directly initialize the gating network to mimic good APINNs, through pretraining on $(G(x,y))_1=0.9$ and $(G(x,y))_2=0.1$. The corresponding model is named APINN-O, whose relative $L_2$ error is 2.749E-2, which performs even better than APINN.
Therefore, the soft decomposition discovered by the trainable gating function is really better. Our APINN shows the potential to find out the optimal decomposition for a given problem.
We visualize the two subnets of APINN-O after optimization in Figure \ref{fig:poisson_subnet}. Although subnet-2 is given only 0.1$\times$ weight, it is not converging to zero, but it is learning a part of the target function, mimicking MPINN. APINN benefits from general target function decomposition, and the MPINN-type initialization is optimal, which is supported by our Theorem \ref{thm:2}, stating that APINN with good gate network initialization generalizes better.
\begin{figure}
\centering
\includegraphics[scale=0.9]{Figures/Subnet_poisson.png}
\caption{Poisson's equation. Two subnets in APINN-O.}
\label{fig:poisson_subnet}
\end{figure}
\end{comment}
\subsection{Boussinesq-Burger Equation}
Here we consider the Boussinesq-Burger system, which is a nonlinear water wave model consisting of two unknowns. A thorough understanding of such a model's solutions is important in order to apply it to harbor and coastal designs.
The Boussinesq-Burger equation under consideration is given by
\begin{equation}
\begin{aligned}
u_t = 2 u u_x + \frac{1}{2}v_x, \quad v_t = \frac{1}{2}v_{xxx} + 2 (uv)_x, \quad x \in [-10,15], t \in [-3,2],
\end{aligned}
\end{equation}
where the Dirichlet boundary conditions and the ground truth solution are given in \cite{lin2022two}; the solution is shown in Figure \ref{fig:BB} (left and middle) for the unknowns $u$ and $v$, respectively.
In this experiment, we consider a system of PDEs, and try XPINN and APINN with more than two subdomains.
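The residual losses for this system penalize $r_1 = u_t - 2uu_x - \tfrac{1}{2}v_x$ and $r_2 = v_t - \tfrac{1}{2}v_{xxx} - 2(uv)_x$. The sketch below forms these residuals with central finite differences (the actual models use automatic differentiation) and checks them on simple closed-form inputs.

```python
def residuals_fd(u, v, x, t, h=1e-2):
    """Residuals of the Boussinesq-Burger system,
        r1 = u_t - 2 u u_x - v_x / 2,
        r2 = v_t - v_xxx / 2 - 2 (u v)_x,
    evaluated with central finite differences at the point (x, t)."""
    d_t = lambda f: (f(x, t + h) - f(x, t - h)) / (2 * h)
    d_x = lambda f: (f(x + h, t) - f(x - h, t)) / (2 * h)
    d_xxx = lambda f: (f(x + 2 * h, t) - 2 * f(x + h, t)
                       + 2 * f(x - h, t) - f(x - 2 * h, t)) / (2 * h ** 3)
    uv = lambda a, b: u(a, b) * v(a, b)
    r1 = d_t(u) - 2 * u(x, t) * d_x(u) - 0.5 * d_x(v)
    r2 = d_t(v) - 0.5 * d_xxx(v) - 2 * d_x(uv)
    return r1, r2

# Check on the closed-form inputs u = t, v = x, where r1 = 0.5 and r2 = -2t.
r1, r2 = residuals_fd(lambda x, t: t, lambda x, t: x, 1.0, 0.5)
```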
\begin{figure}
\centering
\includegraphics[scale=0.35]{BB_u.png}
\includegraphics[scale=0.35]{BB_v.png}
\includegraphics[scale=0.35]{BB_Points_4.png}
\caption{The Boussinesq-Burger equation. Left and middle: ground truth solution for $u$ and $v$. Right: training points of XPINN4 with four subdomains.}
\label{fig:BB}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.8]{BB_Loss_L2.png}
\caption{The Boussinesq-Burger equation. Train loss and relative $L_2$ error during optimization. In this case, Adam can already train the model to convergence, so additional L-BFGS converges fast due to its stopping criterion.}
\end{figure}
\subsubsection{PINN and Hard XPINN}
For PINN, we use a 10-layer tanh network, and provide 400 boundary points and 10,000 residual points. We use a weight of 20 on the boundary loss and a unit weight on the residual loss. It is trained by Adam \cite{kingma2014adam} with an 8E-4 learning rate for 100k epochs.
For domain decomposition of (hard) XPINN, we design two different strategies.
First, an XPINN with two subdomains decomposes the domain based on whether $t > -0.5$.
The sub-nets are 6-layer tanh networks of width 20, and we provide 200 boundary points and 5000 residual points for every sub-net in XPINN.
Second, an XPINN4 decomposes the domain at $t = -1.75$, $-0.5$, and $0.75$ into four subdomains, whose training points are visualized in Figure \ref{fig:BB} right.
The sub-nets in XPINN4 are 4-layer tanh networks of width 20, and we provide 100 boundary points and 2500 residual points for every sub-net in XPINN4.
The number of interface points is 400.
The weights for boundary loss, residual loss, interface boundary loss, interface residual loss, and interface first-order derivative continuity loss are 20, 1, 20, 0, 1, respectively.
We use the Adam optimizer to train XPINN and XPINN4 for 100k epochs with an 8E-4 learning rate.
To make a fair comparison, the parameter counts in PINN, XPINN, and XPINN4 are 6882, 7044, and 7368, respectively.
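The four-way hard decomposition used by XPINN4 amounts to a threshold lookup on $t \in [-3, 2]$. The sketch below implements it; whether a cut point belongs to the earlier or the later subdomain is our implementation choice, not specified in the text.

```python
def subdomain_index(t, cuts=(-1.75, -0.5, 0.75)):
    """Map t in [-3, 2] to one of the four XPINN4 subdomains (index 0..3)."""
    for i, c in enumerate(cuts):
        if t <= c:  # cut points assigned to the earlier subdomain (our choice)
            return i
    return len(cuts)

labels = [subdomain_index(t) for t in (-2.5, -1.0, 0.0, 1.5)]
```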
\begin{figure}
\centering
\includegraphics[scale=0.26]{BB_Pretrained_Gate_1.png}
\includegraphics[scale=0.26]{BB_Pretrained_Gate_2.png}
\includegraphics[scale=0.26]{BB_Pretrained_Gate_3.png}
\includegraphics[scale=0.26]{BB_Pretrained_Gate_4.png}
\caption{The Boussinesq-Burger equation. APINN4-X pretrained gate nets, with four-dimensional output for weighted averaging the four subnets.}
\label{fig:BB_pretrained_gate}
\end{figure}
\subsubsection{APINN}
For APINN with two subdomains, we pretrain the gate net $G$ of APINN-X on the function $(G(x, t))_1 = 1 - (G(x, t))_2 = \exp(0.35\,(t - 2))$ to mimic XPINN, and pretrain that of APINN-M on the function $(G(x, t))_1 = 1 - (G(x, t))_2 = 0.8$ to mimic MPINN.
In APINN-X and APINN-M, all networks have a width of 20. The numbers of layers in the gate network, sub-PINN networks, and shared network are 2, 4, and 5, respectively, with 6945 parameters in total.
For APINN with four subdomains, we pretrain the gate net $G$ of APINN4-X on the function $(G(x, t))_i = u_i(x,t) / (\sum_{i=1}^4 u_i(x,t))$, where $u_1(x,t)=\exp(t-2), u_2(x,t)=\exp(-|t - \frac{1}{3}|), u_3(x,t)=\exp(-|t + \frac{4}{3}|), u_4(x,t)=\exp(-3 - t)$, to mimic XPINN.
Furthermore, we pretrain that of APINN4-M on the function $(G(x, t))_1 = 0.8$, and $(G(x, t))_{2,3,4}=\frac{1}{15}$, to mimic MPINN.
The pretrained gate functions of APINN4-X are visualized in Figure \ref{fig:BB_pretrained_gate}.
In APINN4-X and APINN4-M, $h$ and $G$ have width 20, while each $E_i$ has width 18. The numbers of layers in the gate network, sub-PINN networks, and the shared network are 2, 4, and 3, respectively, with 7046 parameters in total.
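The four APINN4-X pretraining targets above are positive bumps centered in the respective subdomains, normalized so that they form a partition of unity. A quick pure-Python check:

```python
import math

def gate4_x(t):
    """APINN4-X gate targets (G)_i = u_i / sum_j u_j, from the text."""
    u = [
        math.exp(t - 2.0),
        math.exp(-abs(t - 1.0 / 3.0)),
        math.exp(-abs(t + 4.0 / 3.0)),
        math.exp(-3.0 - t),
    ]
    s = sum(u)
    return [ui / s for ui in u]

g_early = gate4_x(-3.0)  # start of the time domain
g_late = gate4_x(2.0)    # end of the time domain
```

At the two ends of the time domain $[-3, 2]$, the last and first sub-nets respectively dominate, matching the top-to-bottom XPINN4 decomposition in time.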
\begin{table}[]
\centering
\caption{Relative $L_2$ error for the function $u$ in the Boussinesq-Burger equation.}
\label{tab:BB_u}
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & PINN & XPINN & XPINN4 & / \\ \hline
Rel. $L_2$ & 1.470E-02$\pm$6.297E-03 & 1.456E-02$\pm$6.391E-03&3.254E-02$\pm$1.025E-02
& / \\ \hline
Model & APINN-M & APINN-X & APINN4-M & APINN4-X \\ \hline
Rel. $L_2$ & \bf1.091E-02$\pm$4.588E-03 & 1.388E-02$\pm$4.310E-03 & 1.328E-02$\pm$8.099E-03
& 2.559E-02$\pm$6.554E-03
\\ \hline
\end{tabular}
\end{table}
\begin{table}[]
\centering
\caption{Relative $L_2$ error for the function $v$ in the Boussinesq-Burger equation.}
\label{tab:BB_v}
\begin{tabular}{|c|c|c|c|c|}
\hline
Model & PINN & XPINN & XPINN4 & / \\ \hline
Rel. $L_2$ & 1.106E-01$\pm$4.498E-02 & 9.786E-02$\pm$3.485E-02
& 2.706E-01$\pm$9.078E-02
& / \\ \hline
Model & APINN-M & APINN-X & APINN4-M & APINN4-X \\ \hline
Rel. $L_2$ & \bf8.185E-02$\pm$2.973E-02 &9.623E-02$\pm$2.446E-02 &9.616E-02$\pm$5.397E-02
& 1.676E-01$\pm$4.946E-02
\\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.75]{SXPINN_BB_Error_U.png}
\includegraphics[scale=0.75]{SXPINN_BB_Error_V.png}
\caption{The Boussinesq-Burger equation: Error of APINN-M.}
\label{fig:BB_Error}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.75]{SXPINN_Gate_MPINN_BB_0.png}
\includegraphics[scale=0.75]{SXPINN_Gate_MPINN_BB_1.png}
\includegraphics[scale=0.7]{SXPINN_Gate_XPINN_BB_0.png}
\includegraphics[scale=0.7]{SXPINN_Gate_XPINN_BB_4.png}
\caption{The Boussinesq-Burger equation: visualization of trained gate networks $G_1$ of APINN-M (first row) and APINN-X (second row), after convergence, with two different random seeds for each model. Their relative $L_2$ errors are similar for the same type of APINNs.
}
\label{fig:BB_gate}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.23]{SXPINN_Gate1_XPINN_BB4_0.pdf}
\includegraphics[scale=0.23]{SXPINN_Gate2_XPINN_BB4_0.pdf}
\includegraphics[scale=0.23]{SXPINN_Gate3_XPINN_BB4_0.pdf}
\includegraphics[scale=0.23]{SXPINN_Gate4_XPINN_BB4_0.pdf}
\caption{The Boussinesq-Burger equation: Visualization of the four trained gate networks $G(x,t)_{1,2,3,4}$ in APINN4-X in one independent run, corresponding to the four subnets.}
\label{fig:BB4_gate_X}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.25]{SXPINN_Gate1_MPINN_BB4_1.pdf}
\includegraphics[scale=0.25]{SXPINN_Gate2_MPINN_BB4_1.pdf}
\includegraphics[scale=0.25]{SXPINN_Gate3_MPINN_BB4_1.pdf}
\includegraphics[scale=0.25]{SXPINN_Gate4_MPINN_BB4_1.pdf}
\caption{The Boussinesq-Burger equation: Visualization of the four trained gate networks $G(x,t)_{1,2,3,4}$ in APINN4-M in one independent run, corresponding to the four subnets.}
\label{fig:BB4_gate_M}
\end{figure}
\subsubsection{Results}
The results for the Boussinesq-Burger equation are shown in Tables \ref{tab:BB_u} and \ref{tab:BB_v}. The reported relative $L_2$ errors are averaged over 10 independent runs; for each run, we report the error at the epoch with the smallest training loss among the last 10\% of epochs. The key observations are as follows.
\begin{itemize}
\item APINN-M performs the best.
\item APINN and XPINN with four sub-nets do not perform as well as their two sub-net counterparts, which may be due to the tradeoff between target function complexity and number of training samples in XPINN generalization. Also, more subdomains do not necessarily contribute to parameter efficiency.
\item The error of the best performing APINN-M is visualized in Figure \ref{fig:BB_Error}, which is concentrated near the steep regions, where the solution changes rapidly.
\end{itemize}
\subsubsection{Visualization of Optimized Gating Network}
We visualize several representative optimized gating networks after convergence with similar relative $L_2$ errors in Figures \ref{fig:BB_gate}, \ref{fig:BB4_gate_X} and \ref{fig:BB4_gate_M}, for the APINNs with two subnets, APINN4-X and APINN4-M, respectively. Note that the variance of this Boussinesq-Burger equation is smaller, so these models have similar performances.
The key observation is that the gate nets after optimization maintain the characteristics at initialization, especially for APINN-M.
Specifically, for APINN-M, the optimized gate networks do not change much from the initialization.
For APINN-X, although the position and slope of the interfaces between subdomains change, the optimized APINN-X is still partitioning the whole domain into four upper-to-bottom parts.
Therefore, we have the following conclusions.
\begin{itemize}
\item Initialization is crucial to the success of APINN, which is reflected in the performance gaps between APINN-M and APINN-X, since the gate networks after optimization maintain the characteristics at initialization.
\item APINN with one kind of initialization can hardly be optimized into another kind. For instance, we seldom see the gate nets of APINN-M optimized to resemble the decomposition of XPINNs.
\item These observations are consistent with our Theorem \ref{thm:2}, which states that a good initialization of the gate net contributes to better generalization, since the gate net does not need to change significantly from its initialization.
\item However, based on our extensive experiments, trainable gate nets still contribute to generalization due to the positive fine-tuning effect, although training cannot turn an MPINN-type APINN into an XPINN-type APINN, and vice versa.
\end{itemize}
Furthermore, we visualize the optimization trajectory of the gating network for all subnets in the Boussinesq-Burger equation in Figure \ref{gif:BB} in the Appendix, where each snapshot is the gating net at epochs = 0, 10, 20, 30, 40, and 50. The change is fast and continuous.
\section{Summary}
In this paper, we propose the Augmented Physics-Informed Neural Networks (APINN) method, which employs a trainable, fine-tunable gate network for soft domain partitioning that can mimic the hard domain decomposition of the eXtended PINN (XPINN). The gate network, which satisfies the partition-of-unity property, weight-averages several sub-networks to form the output of APINN. Moreover, APINN adopts partial parameter sharing among the sub-nets. It has the following advantages over the state-of-the-art generalized space-time domain decomposition based XPINN method:
\begin{itemize}
\item APINN does not require the complicated interface losses used to maintain continuity between different sub-networks (sub-PINNs), because the gate network decomposes the entire domain in a soft way; this also contributes to better convergence and lower training loss.
\item The gate network can mimic the hard decomposition of XPINN, such that APINN enjoys the advantage of XPINN in that it can decompose the complicated target function into several simpler parts to reduce the complexity and improve the generalizability of each sub-network.
\item The trainable gate network enables fine-tuning the domain decomposition to discover a better function and domain decomposition into simpler parts, contributing to better generalization based on \cite{hu2021extended}.
\item The parameter sharing in APINN utilizes the essential idea that each sub-PINN is learning one part of the same target function, so that the commonality can be well captured by the shared part.
\item Each sub-network in APINN takes advantage of all training samples within the domain to prevent over-fitting. By contrast, sub-networks in XPINN can only utilize part of the training samples.
\end{itemize}
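A minimal sketch of the APINN output rule summarized above: a partition-of-unity gate weight-averages the sub-net outputs, $u(x,t) = \sum_i G_i(x,t)\, N_i(x,t)$. The softmax-over-linear-features gate and the toy sub-nets below are hypothetical stand-ins (the paper uses small MLPs for both, with partial parameter sharing among the sub-nets), but the convex weighting is the essential mechanism.

```python
import math

def apinn_forward(x, t, subnets, gate_params):
    """APINN output: partition-of-unity gate G weight-averaging sub-nets.
    The gate here is a softmax over linear features (a hypothetical stand-in
    for the small gate MLP used in the paper)."""
    z = [a * x + b * t + c for (a, b, c) in gate_params]
    m = max(z)
    e = [math.exp(v - m) for v in z]
    G = [v / sum(e) for v in e]
    out = sum(g * n(x, t) for g, n in zip(G, subnets))
    return out, G

# Two toy sub-nets standing in for the sub-PINNs.
subnets = [lambda x, t: math.sin(x + t), lambda x, t: x * t]
out, G = apinn_forward(0.3, 0.5, subnets, [(1.0, 0.0, 0.0), (-1.0, 0.0, 0.0)])
```

Because the gate weights are non-negative and sum to one, the APINN output always lies between the minimum and maximum of the sub-net outputs at each point.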
All of the benefits are justified empirically on various PDEs and theoretically in \cite{hu2021extended} using the PINN generalization theory.
More specifically, we prove the generalization bound for APINNs with fixed and trainable gate networks. Since APINNs with certain gate networks can recover PINN and XPINN, they enjoy the advantages of both models thanks to their trainability and flexibility. It is shown that APINN benefits from general domain and function decomposition, which reduces the complexity of the optimized networks and improves generalization.
In terms of parallelization, APINN shares more data points as well as parameters across sub-nets than XPINN, and thus can be more expensive than the XPINN method.
\section*{Acknowledgment}
A. D. Jagtap and G. E. Karniadakis would like to acknowledge the funding by OSD/AFOSR MURI Grant FA9550-20-1-0358, and the US Department of Energy (DOE) PhILMs project (DE-SC0019453).
https://arxiv.org/abs/0708.2336 | Unsatisfiable Linear k-CNFs Exist, for every k | We call a CNF formula linear if any two clauses have at most one variable in common. Let Linear k-SAT be the problem of deciding whether a given linear k-CNF formula is satisfiable. Here, a k-CNF formula is a CNF formula in which every clause has size exactly k. It was known that for k >= 3, Linear k-SAT is NP-complete if and only if an unsatisfiable linear k-CNF formula exists, and that they do exist for k >= 4. We prove that unsatisfiable linear k-CNF formulas exist for every k. Let f(k) be the minimum number of clauses in an unsatisfiable linear k-CNF formula. We show that f(k) is Omega(k2^k) and O(4^k*k^4), i.e., minimum size unsatisfiable linear k-CNF formulas are significantly larger than minimum size unsatisfiable k-CNF formulas. Finally, we prove that, surprisingly, linear k-CNF formulas do not allow for a larger fraction of clauses to be satisfied than general k-CNF formulas. | \section{Introduction}
A CNF formula $F$ (conjunctive normal form) over a variable set $V$ is
a set of clauses; a clause is a set of literals; a literal is either a
variable $x \in V$ or its negation $\bar{x}$. A CNF formula $F$, or
short, a CNF $F$, is called a $k$-CNF if $|C| = k$ for every $C \in
F$. Define $\ensuremath{{\rm vbl}}(x)=\ensuremath{{\rm vbl}}(\bar{x}):=x$ for $x \in V$,
$\ensuremath{{\rm vbl}}(C):=\{\ensuremath{{\rm vbl}}(l) \ | \ l \in C\}$ and $\ensuremath{{\rm vbl}}(F) := \bigcup_{C\in F}
\ensuremath{{\rm vbl}}(C)$. For example, $\ensuremath{{\rm vbl}}(\{\bar{x},y,\bar{z}\}) = \{x,y,z\}$. A
(partial) assignment $\alpha$ is a (partial) function $V \rightarrow
\{0,1\}$. It can be extended to negated variables by $\alpha(\bar{x})
:= \neg \alpha(x)$. A clause is {\em satisfied} by $\alpha$ if at
least one literal in it evaluates to $1$, and a formula is satisfied
if every clause is satisfied. {\em Applying} a partial assignment
$\alpha$ means removing from $F$ every clause satisfied by $\alpha$,
and from the remaining clauses removing all literals evaluating to
$0$. The
resulting formula is denoted by $F^{[\alpha]}$.\\
Consider a set system $S$ of sets of cardinality $k$ over some ground
set $V$, i.e. a $k$-uniform hypergraph. We say $S$ is a $k$-set
system. We call $S$ {\em linear} if $|A \cap B| \leq 1$ for any $A, B
\in S, \ A \ne B$. We do not use any deep results from hypergraph
theory in this paper. Nevertheless, for definitions and basic terminology of
hypergraphs we refer the reader to~\cite{handbook} or~\cite{berge}.\\
A CNF $F$ is {\em linear} if $|\ensuremath{{\rm vbl}}(C) \cap \ensuremath{{\rm vbl}}(D)| \leq 1$ for all
clauses $C,D\in F, \ C \ne D$. The set system $\{ \ensuremath{{\rm vbl}}(C),\ C \in
F\}$ is called the {\em skeleton} of $F$. If $F$ is a $k$-CNF, then
its skeleton is a $k$-uniform hypergraph, which is linear if
$F$ is linear. Note that the converse does not hold in general:
The formula $\{\{x,y\},\{\bar{x},\bar{y}\}\}$ is not linear, but its skeleton
is $\{\{x,y\}\}$, thus linear.\\
\textbf{Examples:} The formula
$\{\{\bar{x}_1,x_2\},\{\bar{x}_2,x_3\},\{\bar{x}_3,x_4\},\{\bar{x}_4,x_1\}\}$
is a linear $2$-CNF, whereas
$\{\{x_1,x_2,x_3\},\{\bar{x}_2,x_3,x_4\}\}$ is not linear.\\
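These definitions translate directly into code. The following Python sketch (using an assumed encoding of literals as signed integers, so $x_i$ is $i$ and $\bar{x}_i$ is $-i$) implements $\ensuremath{{\rm vbl}}$, the linearity test, and $F^{[\alpha]}$, and checks the two examples above:

```python
# Literals are encoded as nonzero integers (an assumption of this sketch):
# variable x_i is i, its negation \bar{x}_i is -i. A clause is a frozenset
# of literals and a CNF is a list of clauses.

def vbl(clause):
    """Variable set of a clause."""
    return {abs(l) for l in clause}

def is_linear(F):
    """Any two distinct clauses share at most one variable."""
    return all(len(vbl(F[i]) & vbl(F[j])) <= 1
               for i in range(len(F)) for j in range(i + 1, len(F)))

def apply_assignment(F, alpha):
    """F^[alpha] for a partial assignment alpha: variable -> bool."""
    result = []
    for C in F:
        if any(abs(l) in alpha and alpha[abs(l)] == (l > 0) for l in C):
            continue                 # clause satisfied by alpha: drop it
        result.append(frozenset(l for l in C if abs(l) not in alpha))
    return result

# The two examples from the text:
assert is_linear([frozenset({-1, 2}), frozenset({-2, 3}),
                  frozenset({-3, 4}), frozenset({-4, 1})])
assert not is_linear([frozenset({1, 2, 3}), frozenset({-2, 3, 4})])
```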
\subsection*{Previous Results}
Let $k$-SAT be the problem of deciding whether a given $k$-CNF is
satisfiable. It is well-known that $k$-SAT is NP-complete for $k \geq
3$. Define Linear $k$-SAT to be the corresponding decision problem for
linear $k$-CNFs. Porschen, Speckenmeyer and Randerath~\cite{porschen}
observed that Linear $k$-SAT is NP-complete if and only if there
exists an unsatisfiable linear $k$-CNF. They proved the existence of
unsatisfiable linear $k$-CNFs for $k=2,3$. In~\cite{porschen-neu},
Porschen, Speckenmeyer and Zhao prove existence for $k=4$.
Up to now, for $k\geq 5$ the question whether unsatisfiable linear $k$-CNFs exist
has been open.
\subsection*{Our Contribution}
We show that unsatisfiable linear $k$-CNFs exist for any $k$, hence
establishing NP-completeness of Linear $k$-SAT for all $k\geq 3$.
Further, let $f(k)$ denote the size of a smallest unsatisfiable linear
$k$-CNF. We prove that $f(k) \in O(k^4 4^k)$ and, using the
Lov\'{a}sz Local Lemma, show that $f(k) \in \Omega(k2^k)$. This is in
contrast to the general (non-linear) case, where
we know that unsatisfiable $k$-CNFs with $2^k$ clauses exist.\\
Having established $f(k) \in O(k^4 4^k)$, we are still looking for
explicit constructions of unsatisfiable linear $k$-CNFs. We give a
construction using $\leq t(k)$ clauses, for $t(0) := 1$ and $t(k+1) :=
t(k)2^{t(k)}$, i.e., a tower-like function. Compared to the gigantic
growth of $t(k)$, even $k^4 4^k$ seems very modest.
\section{Preliminaries}
Denote by $L(n,k)$ the maximum number of sets a linear $k$-set system
over $n$ elements can have. In this section, we give some bounds on
$L(n,k)$. Everything in this section is standard graph and hypergraph
theory. The following upper bound is an easy observation.
See Theorem 3 in Chapter 1 of~\cite{berge} for example.
\begin{lemma}
\begin{displaymath}
L(n,k) \leq \frac{n(n-1)}{k(k-1)}
\end{displaymath}
\label{upper-bound-set-system}
\end{lemma}
\textbf{Proof.} Let $S$ be a linear $k$-system over $n$ elements.
There are ${n \choose 2}$ pairs of elements, and each $k$-set in $S$
contains ${k \choose 2}$ pairs. Since each pair is present in at most
one set, we obtain
$|S| \leq {n \choose 2}/{k \choose 2}$. $\hfill\Box$\\
If this upper bound is achieved, then every pair of elements occurs in
exactly one set, and the set system $S$ is also called a {\em Steiner
system}. For existence of Steiner systems for specific values of $n$
and $k$ see for example~\cite{lindner}. At this point, we only give a
proof of existence of Steiner systems for $k$ being a prime power.
\begin{lemma}
For every prime power $k$, there are infinitely many $n$ such that
$$
L(n,k) = \frac{n(n-1)}{k(k-1)}
$$
\end{lemma}
\textbf{Proof.} Let $k$ be any prime power, and let $\ensuremath{\mathbb{F}_k}$ be the
finite field of cardinality $k$. Let $\ensuremath{\mathbb{F}_k}^d$ be the $d$-dimensional
vector space over $\ensuremath{\mathbb{F}_k}$. It has $n = |\ensuremath{\mathbb{F}_k}^d| = k^d$ elements, called
{\em points}. For $x, y \in \ensuremath{\mathbb{F}_k}^d$ and $y \ne 0$, the set $\{x +
\lambda y \ | \ \lambda \in \ensuremath{\mathbb{F}_k}\}$ is called a {\em line}. A line
contains exactly $k$ points, and the vector space $\ensuremath{\mathbb{F}_k}^d$ has
$$
\frac{{n \choose 2}}{{k \choose 2}} = \frac{n(n-1)}{k(k-1)}
$$
lines: Every pair of distinct points $a$ and $b$ lies on exactly one
line, namely $\{a + \lambda (b-a) \big| \lambda \in \ensuremath{\mathbb{F}_k}\}$, and each
line contains ${k \choose 2}$ pairs of distinct points. Note that
two lines intersect in at most one point. Let $S$ be the $k$-set
system of all lines in $\ensuremath{\mathbb{F}_k}^d$. Then $S$ is a linear set system over
$n$ points, and $|S| = \frac{n(n-1)}{k(k-1)}$. $\hfill\Box$\\
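This construction is a finite object and can be checked mechanically. The sketch below is restricted to prime $k$ (so that $\mathbb{Z}_p$ is a field; prime powers would require genuine field arithmetic): it enumerates the lines of $(\mathbb{Z}_p)^d$ and verifies the line count and pairwise intersections for $p=3$, $d=2$:

```python
from itertools import product

def lines(p, d):
    """All lines of the affine space (Z_p)^d; p must be prime so that
    Z_p is a field (prime powers would need genuine field arithmetic)."""
    points = list(product(range(p), repeat=d))
    out = set()
    for x in points:
        for y in points:
            if any(y):                      # direction y != 0
                out.add(frozenset(tuple((xi + lam * yi) % p
                                        for xi, yi in zip(x, y))
                                  for lam in range(p)))
    return out

p, d = 3, 2
S = lines(p, d)
n = p ** d
assert len(S) == n * (n - 1) // (p * (p - 1))        # 12 lines for p=3, d=2
assert all(len(a & b) <= 1 for a in S for b in S if a != b)
```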
If $k$ is not a prime power, we have the following weaker bound on $L(n,k)$.
\begin{lemma} For any $n,k \in \mathbb{N}$,
\begin{displaymath}
L(n,k) \geq \frac{2n(n-1)}{k^2(k-1)^2}\ .
\end{displaymath}
\label{lower-bound-set-system}
\end{lemma}
\textbf{Proof.} Recall that any simple graph $G$ on $n$ vertices with
maximum degree $\Delta$ has an independent set $I \subseteq V$ with
$|I| \geq \frac{n}{\Delta+1}$. This follows from a greedy construction: As
long as $G$ is not empty, pick a vertex and insert it into $I$. Remove
it and all its $\leq \Delta$ neighbors. In every
step, $\leq \Delta+1$ vertices are removed, hence we add
at least $\frac{n}{\Delta+1}$ vertices to $I$.\\
For $n,k \in \mathbb{N}$, define a graph as follows: The vertices of
the graph are all ${n \choose k}$ $k$-sets over $n$ elements, and two
sets are connected by an edge if they share more than one element.
Each independent set of the graph corresponds to a linear $k$-set
system over these $n$ elements. We estimate the maximum degree of
this graph. Let $s$ be a $k$-set. How many sets share two or more
elements with $s$? There are $k \choose 2$ possibilities to fix $2$
elements to be included in the neighbor set $s'$, and ${n-2 \choose
k-2}$ possibilities to choose the rest. Of course, this will
overcount the number of such sets. Hence there are at most ${k \choose
2}{n-2 \choose k-2}$ sets sharing two or more elements with $s$.
Since $s$ itself is counted among those, we have $\Delta+1\leq
{k \choose 2}{n-2 \choose k-2}$. The graph itself has ${n \choose k}$
vertices, hence
$$
L(n,k) \geq \frac{{n \choose k}}{{k \choose 2}{n-2 \choose k-2}} \ ,
$$
and the lemma follows from a simple calculation.$\hfill \Box$\\
\section{Unsatisfiable Linear $k$-CNF Formulas and NP-Hardness}
In this section, we prove the existence of unsatisfiable linear $k$-CNFs for
any $k$ and establish upper bounds on $|F|$, the number of
clauses in such a formula. Porschen, Speckenmeyer and
Randerath~\cite{porschen} already stated that for $k \geq 3$, Linear
$k$-SAT is NP-hard if there exists an unsatisfiable linear $k$-CNF.
To keep this paper self-contained, we include a proof of this result.
\begin{theorem}[Porschen, Speckenmeyer and Randerath~\cite{porschen}]
For any $k\geq 3$, Linear $k$-SAT is NP-complete if there exists an
unsatisfiable linear $k$-CNF.
\label{theorem-np-complete}
\end{theorem}
\textbf{Proof.} We reduce $k$-SAT to Linear $k$-SAT. Since $k$-SAT is
NP-complete for $k \geq 3$, this will prove the theorem. Let $F$ be a
$k$-CNF. We transform it to a linear $k$-CNF $F'$ such that $F$ is
satisfiable iff $F'$ is. Let $F$ have $m$ clauses and $n$ variables.
For a variable $x$ let $d(x)$ denote the number of times $x$ appears
in $F$. Replace each $x$ by $d(x)$ new variables $x_1, \dots,
x_{d(x)}$. To ensure that $F$ is satisfiable iff $F'$ is, we force
these variables to take on the same truth value by adding $d(x)$
implication clauses $\{\bar{x}_1, x_2\},
\{\bar{x}_2,x_3\},\dots,\{\bar{x}_{d(x)-1}, x_{d(x)}\},
\{\bar{x}_{d(x)}, x_1\}$. Clearly, the new formula $F'$ is linear, and
it is satisfiable iff $F$ is. However, $F'$ is not a $k$-CNF. We
remedy this by adding $k-2$ new variables to each implication clause
and forcing each of them to $0$ by adding a {\em forcer}. A
$\bar{y}$-forcer is a satisfiable linear $k$-CNF all of whose satisfying
assignments set $y$ to $0$. Such a formula can be obtained by taking any minimal
unsatisfiable linear $k$-CNF formula $G$ with $y\in\ensuremath{{\rm vbl}}(G)$ and removing
from $G$ all clauses containing the literal $y$. Adding a $\bar{y}$-forcer to $F'$
for each variable $y$ we added to the implication clauses guarantees
that $F'$ is satisfiable iff $F$ is. $F'$
is a linear $k$-CNF, and the proof is complete. $\hfill\Box$
\subsection{Existence of Unsatisfiable Linear $k$-CNFs}
We will complete the NP-completeness proof of Linear $k$-SAT by
showing that unsatisfiable linear $k$-CNFs exist, for any $k \geq 0$.
This answers the main open question from Porschen, Speckenmeyer and
Randerath~\cite{porschen} and establishes the NP-completeness of
Linear $k$-SAT for all $k\geq 3$.
\begin{theorem}
For any $k \in \mathbb{N}_0$, there are unsatisfiable linear $k$-CNFs.
\label{theorem-existence}
\end{theorem}
\textbf{Proof.} We prove this by induction on $k$. For $k=0$, the
formula $F=\{\{\}\}$ containing only the empty clause is linear and
unsatisfiable. For the induction step, let $F=\{C_1,\dots,C_m\}$ be
an unsatisfiable linear $k$-CNF. We will construct an unsatisfiable
linear $(k+1)$-CNF formula $F'$. Create $m$ new variables $x_1, \dots,
x_m$. For a clause $D=\{u_1, \dots, u_m\}$ with $u_i \in \{x_i,
\bar{x}_i\}$, define
$$
F \otimes D :=
\left\{ C_i \cup \{u_i\} \ | \ i=1,\dots,m\right\} \ .
$$
$F \otimes D$ is a linear $(k+1)$-CNF formula and, since $F$ is unsatisfiable,
every assignment satisfying $F\otimes D$ satisfies $D$: some $C_i$ is
falsified, so the added literal $u_i$ must evaluate to $1$. Create $2^m$ variable disjoint copies
$F_1,\dots, F_{2^m}$ of $F$, i.e., $\ensuremath{{\rm vbl}}(F_i) \cap \ensuremath{{\rm vbl}}(F_j) =
\emptyset$ for $i \ne j$. By choosing $2^m$ different sign patterns,
we create $2^m$ distinct $m$-clauses $D_1, \dots, D_{2^m}$ over the
variables $x_i$. The formula $\{D_1, \dots, D_{2^m}\}$ is
unsatisfiable. Hence
$$
F' := \bigcup_{i=1}^{2^m} F_i \otimes D_i
$$
is unsatisfiable, as well.
Clearly, $F'$ is a linear $(k+1)$-CNF. $\hfill\Box$\\
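The induction step can be checked mechanically for small $k$. The following Python sketch (with an assumed encoding of literals as signed integers) builds $F'$ from $F$ and verifies by brute force that the resulting $2$-CNF with $t(2)=8$ clauses is linear and unsatisfiable:

```python
from itertools import product

def nvars(F):
    return max((abs(l) for C in F for l in C), default=0)

def shift(F, offset):
    """A variable-disjoint copy of F: every variable index shifted up by offset."""
    return [frozenset(l + offset if l > 0 else l - offset for l in C) for C in F]

def next_formula(F):
    """One induction step: from an unsatisfiable linear k-CNF F with m clauses,
    build F' as the union of F_i (x) D_i over all 2^m sign patterns D_i."""
    m, v = len(F), nvars(F)
    Fp, offset = [], m              # variables 1..m serve as the new x_1..x_m
    for signs in product((1, -1), repeat=m):
        Fi = shift(F, offset)       # fresh variable-disjoint copy of F
        offset += v
        Fp += [C | {s * (i + 1)} for i, (C, s) in enumerate(zip(Fi, signs))]
    return Fp

def is_sat(F):
    n = nvars(F)
    return any(all(any((l > 0) == a[abs(l) - 1] for l in C) for C in F)
               for a in product((False, True), repeat=n))

F = [frozenset()]                   # the unsatisfiable 0-CNF {{}}
for _ in range(2):
    F = next_formula(F)             # k = 1, then k = 2: t(1)=2, t(2)=8 clauses
assert len(F) == 8 and not is_sat(F)
assert all(len({abs(l) for l in C} & {abs(l) for l in D}) <= 1
           for C in F for D in F if C != D)     # linearity
```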
This proof constitutes an explicit construction, but note the gigantic
growth of the size of the constructed formulas: Let $t(k)$ denote the
number of clauses of the $k$-CNF formula generated in this
construction. Then $t(k+1) = t(k)2^{t(k)}$, so we have $t(1)=2$,
$t(2)=8$, $t(3)=2048$, $t(4)=2048 \times 2^{2048}$. Fortunately, there is a
much better upper bound, obtained by a probabilistic argument.\\
\begin{theorem}
For every $k \in \mathbb{N}_0$, there exists an unsatisfiable linear $k$-CNF $F$ with
\begin{displaymath}
|F| \in O\left(k^4 4^{k}\right) \ .
\end{displaymath}
\label{theorem-upper-bound-unsatisfiable-formula}
\end{theorem}
\textbf{Proof.} Fix any $k \in \mathbb{N}_0$. Let $V$ be a set of $n$
variables, $n$ to be specified later. Let $S$ be a linear $k$-set
system over $V$ and write $m := |S|$. From each $s \in S$, build a
$k$-clause by choosing uniformly at random one of the $2^k$ possible
sign patterns. Do this independently for each $s \in S$ and obtain a
linear $k$-CNF $F$. Fix an assignment $\alpha$. For every set $s \in
S$, the probability that the clause $C$ built from $s$ is satisfied by
$\alpha$ is $1-2^{-k}$. Since the sign pattern of each clause is
chosen independently, we obtain
$$
\Pr\left( \alpha \textnormal { satisfies } F \right) = \left(1-2^{-k}\right)^m
$$
There are $2^n$ different truth assignments to $V$, thus the
probability that at least one of them satisfies $F$ can be estimated
by the union bound:
$$
\Pr \left( F \textnormal{ is satisfiable } \right) \leq 2^n
\left(1-2^{-k}\right)^m
$$
If $2^n \left(1-2^{-k}\right)^m < 1$, then there exists an
unsatisfiable linear $k$-CNF with $m$ clauses and $n$ variables. Since
$1+x < e^x$ for all $x \ne 0$, we have $2^n\left(1-2^{-k}\right)^m <
e^{n\ln 2-m2^{-k}}$, and
\begin{eqnarray}
e^{n\ln 2-m2^{-k}}
& \leq & 1 \Leftrightarrow \nonumber \\
n\ln 2 - \frac{m}{2^k} & \leq & 0 \Leftrightarrow \nonumber \\
2^kn\ln 2 & \leq & m \ . \label{bound-n-m}
\end{eqnarray}
That is, if $m \geq 2^kn\ln 2$, then the random formula $F$ is
unsatisfiable with positive probability.
By Lemma \ref{lower-bound-set-system} we know that there is a linear
$k$-set system $S$ over $n$ elements of size
\begin{eqnarray}
m = \left\lceil \frac{2n(n-1)}{k^2(k-1)^2} \right\rceil\geq
\left\lceil\frac{2n^2}{k^4}\right\rceil
\ .
\label{bound-m}
\end{eqnarray}
Since $m$
grows superlinearly in $n$, we see that for sufficiently large $n$
inequality (\ref{bound-n-m}) holds, which implies that there is an
unsatisfiable linear $k$-CNF of size $m$ over $n$ variables. To
obtain an upper bound on $n$ and $m$, plug (\ref{bound-m}) into
(\ref{bound-n-m}):
\begin{eqnarray*}
2^kn\ln 2 & \leq & \frac{2n^2}{k^4}\\
n & \geq & \frac{k^4 2^k \ln 2}{2}
\end{eqnarray*}
Since we are interested in the order of growth for large $k$ rather than
in constant factors, write
$$
m \in \Theta\left( \frac{n^2}{k^4} \right) = \Theta \left( k^4 4^k\right) \ .
$$
Therefore, there is an unsatisfiable linear $k$-CNF $F$ over
$n \in \Theta\left(k^42^k\right)$ variables having
$m \in \Theta \left( k^4 4^k\right)$ clauses, and the theorem follows.
$\hfill \Box$\\
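The random-signs argument can be illustrated for $k=2$, where every simple graph is a linear $2$-set system: the complete graph $K_7$ supplies $m = 21 \geq 2^2 \cdot 7 \ln 2$ edges, so the union bound gives $\Pr(F \textnormal{ satisfiable}) \leq 2^7 (3/4)^{21} \approx 0.3$. A small Monte Carlo sketch (illustrative only, not part of the proof):

```python
import random
from itertools import combinations, product

k, n = 2, 7
# Every simple graph is a linear 2-set system; K_7 has 21 >= 2^k * n * ln 2 edges.
skeleton = list(combinations(range(1, n + 1), k))

def is_sat(F):
    return any(all(any((l > 0) == a[abs(l) - 1] for l in C) for C in F)
               for a in product((False, True), repeat=n))

random.seed(0)                       # make the experiment deterministic
unsat = 0
for _ in range(50):
    # choose a random sign pattern for each edge of the skeleton
    F = [frozenset(random.choice((v, -v)) for v in s) for s in skeleton]
    unsat += not is_sat(F)
# Union bound: P(satisfiable) <= 2^7 * (3/4)^21 ~ 0.3, so unsatisfiable
# samples must show up among 50 trials.
assert unsat >= 1
```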
This is the best upper bound we have. It is much better than the
explicit construction of Theorem~\ref{theorem-existence}, but it is
still far away from the best lower bound of $\Omega(k2^k)$.
\section{Partial Satisfaction in Linear $k$-CNF Formulas}
It is well known that every unsatisfiable $k$-CNF contains at least
$2^k$ clauses. This bound is tight, since the $k$-CNF $F_k$ consisting
of all $2^k$ clauses over some variable set $V$, $|V| =k$, is
unsatisfiable. Further, for every $k$-CNF, there exists an assignment
satisfying at least $(1 -2^{-k})|F|$ clauses. This can be seen by
choosing a random assignment and calculating the expected number of
satisfied clauses. This bound is also tight, as $F_k$ demonstrates.
This is interesting: The upper bound on the fraction of clauses one can
always satisfy is achieved by a smallest unsatisfiable formula. Since
unsatisfiable {\em linear} $k$-CNFs are much larger than $2^k$, as we
will see, one might suspect that linear $k$-CNFs are more amenable to
partial satisfaction than general $k$-CNFs, i.e., that for at least
some $k$, there is an $r_k > (1-2^{-k})$ such that every linear
$k$-CNF $F$ admits an assignment satisfying $\geq r_k |F|$ of its
clauses. However, this is not true:
\begin{theorem}
For every $k \in \mathbb{N}$ and $\delta > 0$, there is a linear $k$-CNF
$F_{k,\delta}$ such that every assignment leaves at least a
$(1-\delta)2^{-k}$ fraction of all clauses unsatisfied.
\label{theorem-partial-sat}
\end{theorem}
\textbf{Proof.} The proof is similar to the probabilistic proof of
Theorem \ref{theorem-upper-bound-unsatisfiable-formula}: Given $k$,
fix a linear $k$-set system $S$ over a ground set $V$. Let $n := |V|$, $m :=
|S|$, both to be determined later. Fix an assignment $\alpha$ on
$V$ and build a random formula $F$ over the skeleton $S$ by randomly
choosing the signs of the literals in every clause. Let $F = \{C_1,
\dots, C_m\}$ and define $m$ random variables $X_i$ by
\begin{eqnarray*}
X_i = \left\{
\begin{array}{ll}
0 & \ \textnormal{ if } \alpha \textnormal{ satisfies } C_i, \\
1 & \ \textnormal{ otherwise. }
\end{array}
\right.
\end{eqnarray*}
Define $X := \sum_{i=1}^m X_i$. Observe that $\ensuremath{\mathbf{E}}[X_i] = 2^{-k}$
and $\mu := \ensuremath{\mathbf{E}}[X] = 2^{-k}m$. We want to bound the probability that fewer
than $(1-\delta)2^{-k}m$ clauses are left unsatisfied by $\alpha$.
First observe that the $X_i$ are independently identically distributed
binary random variables with expectation $2^{-k}$. Therefore, $X$
has a binomial distribution with expectation
$\mu = 2^{-k}m$, and Chernoff's inequality yields
\begin{eqnarray*}
\Pr\left\{X < (1-\delta)\mu\right\} & < & e^{-\frac{\mu \delta^2}{2}}
\end{eqnarray*}
For a derivation of this inequality see e.g.~\cite{motwani}. Applying
the union bound, we estimate
\begin{eqnarray*}
\Pr\left\{\exists \alpha \textnormal{ leaving } \leq
(1-\delta)\mu \textnormal{ clauses unsatisfied }\right\}
& \leq & 2^n e^{-\frac{\mu \delta^2}{2}}
\end{eqnarray*}
and want the last term to be smaller than $1$. By
Lemma~\ref{lower-bound-set-system}, we can choose $m \geq \frac{2n^2}{k^4}$
and calculate
\begin{eqnarray}
n \ln 2 - \frac{\mu \delta^2}{2} & < & 0 \nonumber \\
\frac{\mu \delta^2}{2} = 2^{-k-1}m\delta^2 & > & n \ln 2 \nonumber\\
m & > &2^{k+1} n \ln 2 \delta^{-2} \label{maxsat-ineq} \\
\frac{2n^2}{k^4} & > &2^{k+1} n \ln 2 \delta^{-2} \nonumber \\
n & > & k^42^k \ln 2 \delta^{-2}
\end{eqnarray}
For every fixed $k$ and $\delta > 0$, we can make the last
inequality true by choosing $n$ sufficiently large. Therefore, there
is a positive probability that the randomly chosen formula $F$ does
not have a truth assignment satisfying more than
$(1- (1-\delta)2^{-k})m$ clauses. $\hfill\Box$\\
By setting $\epsilon = \delta2^{-k}$, we see that there is a linear
$k$-CNF $F$ in which no more than $\left(1-2^{-k}+\epsilon\right)|F|$
clauses can be satisfied. Note that the proof of Theorem\
\ref{theorem-partial-sat} is not specific to linear CNFs. For a more
general setting, call a property of formulas {\em structural} if it
only depends on the skeleton of the formula, not on its signs. For a
structural property $\P$, let $\ensuremath{{\rm ex}}_{\P}(n,k)$ be the maximum number of
clauses a $k$-CNF over $n$ variables having property $\P$ can have.
\begin{theorem}
Let $\P$ be a structural property of CNFs. If for fixed $k$,
$\ensuremath{{\rm ex}}_{\P}(n,k)$ grows superlinearly in $n$, then for every $\epsilon >
0$, there exists a $k$-CNF $F_{\epsilon}$ with property $\P$ for which no truth
assignment $\alpha$ satisfies more than $(1-2^{-k}+\epsilon)
|F_{\epsilon}|$ clauses.
\end{theorem}
\section{Lower Bounds}
After having established that $f(k) \in O(k^4 4^k)$, we want to obtain
lower bounds on $f(k)$. To be more precise, we show that $f(k) \in
\Omega(k2^k)$. We prove this by repeated application of the Lov\'{a}sz
Local Lemma. For a formula $F$, define the {\em neighborhood} of a
clause $C$ to be
$$\Gamma(C) := \left\{D \in F \ \big| \ \ensuremath{{\rm vbl}}(D) \cap
\ensuremath{{\rm vbl}}(C) \ne \emptyset, \ C \ne D \right\} \ .
$$
It follows from the Local Lemma that a $k$-CNF with $|\Gamma(C)| \leq
\frac{1}{4}2^k$ for every clause $C$ is satisfiable (the constant
$\frac{1}{4}$ can be improved upon). Conversely, if $F$ is
unsatisfiable, it contains a clause $C$ with a large neighborhood. We
find a partial assignment $\alpha$ on $\ensuremath{{\rm vbl}}(C)$ that satisfies $C$ and
a large part of its neighborhood, say at least $c 2^k$ clauses, for
some constant $c$. Since $F$ is linear, applying $\alpha$ deletes at
most one literal from any clause in $\Gamma(C)$, hence $F^{[\alpha]}$
is a $(k-1)$-CNF. Here, we can again apply the Local Lemma and satisfy
$c 2^{k-1}$ clauses, and so on. Repeating $k$ times, we have
satisfied at least $c (2^k + 2^{k-1} + \dots + 2^1) = c 2^{k+1} - 2c$
clauses. Unfortunately, this is not enough. We must somehow take
advantage of the fact that though $F^{[\alpha]}$ contains
$(k-1)$-clauses, the neighborhood of a clause in $F^{[\alpha]}$ cannot
contain too many of them.
\begin{lemma}[Lov\'{a}sz Local Lemma]
Let $A_1, \dots, A_m$ be events in some probability space, and let
$G$ be a graph with vertices $A_1,\dots, A_m$ and edges $E$ such
that each $A_i$ is mutually independent of all the events $\left\{
A_j \ | \ \{A_i, A_j\} \not \in E, \ i \ne j \right\}$. If there
exist real numbers $0 < \gamma_i < 1$ for $i = 1, \dots, m$
satisfying
$$
\Pr(A_i) \leq \gamma_i \prod_{j: \{A_i,A_j\} \in E} (1-\gamma_j)
$$
for all $i = 1,\dots,m$, then
$$
\Pr (A_1 \cup A_2 \cup \dots \cup A_m) < 1
$$
\label{lemma-lovasz}
\end{lemma}
For a proof of the Lov\'{a}sz Local Lemma and different versions,
see e.g.~\cite{probmeth}.
\begin{lemma}
Let $F$ be a CNF not containing any clause of size $\leq 1$.
If for any $C \in F$ it holds that
$$
\sum_{D \in \Gamma(C)} 2^{-|D|} \leq \frac{1}{4}
$$
then $F$ is satisfiable.
\label{lemma-local}
\end{lemma}
\textbf{Proof.} This is an application of the Lov\'{a}sz Local Lemma.
Let the probability space be the set of all truth assignments to the
$n$ variables in $F$ with the uniform distribution. Write $F=\{C_1,\dots,
C_m\}$ and let $A_i$ be the event that a random
assignment $\alpha$ does not satisfy clause $C_i$. Let $G$ be the graph
where $A_i$ and $A_j$ are connected if $C_i$ and $C_j$ have a variable in
common, and let $\gamma_i := 2^{1-|C_i|} < 1$. For each $i=1,\dots,m$, we have
\begin{eqnarray*}
& & \gamma_i \prod_{j: \{A_i,A_j\} \in E} (1-\gamma_j) \\
& \geq \quad & \gamma_i \left(1 - \sum_{C_j \in \Gamma(C_i)} \gamma_j\right)\\
& = \quad & 2^{1-|C_i|} \left(1 - 2\sum_{C_j \in \Gamma(C_i)}2^{-|C_j|} \right)\\
& \geq \quad & 2^{-|C_i|} = \Pr(A_i)
\end{eqnarray*}
Hence, by Lemma~\ref{lemma-lovasz}, the probability that $\alpha$ leaves
some clause unsatisfied is $< 1$, and thus with positive probability,
$\alpha$ satisfies $F$. Therefore, $F$ is satisfiable.$\hfill\Box$\\
\begin{definition}
Let $k$ be fixed. An $[l,k]$-CNF is a CNF $F$ with $l \leq |C| \leq k$
for every $C\in F$. For an $[l,k]$-CNF $F$ and a variable $x$, let
$$
d_F(x) := \left| \left\{ C \in F \ \big| \ x \in \ensuremath{{\rm vbl}}(C),\ |C| \leq
k-1\right\}\right|
$$
If there is no danger of confusion, we will simply write $d(x)$.
Further, let
$$
d(F) := \max_{x \in \ensuremath{{\rm vbl}}(F) } d_F(x)
$$
Finally, define
\begin{eqnarray*}
\Gamma'(C) & := & \left\{ D \in \Gamma(C) \ \big| \ |D| \leq k-1\right\} \\
\Gamma'_x(C) & := & \left\{D \in \Gamma'(C) \ \big| \ x \in \ensuremath{{\rm vbl}}(D)\right\}.
\end{eqnarray*}
\end{definition}
\begin{lemma}
Let $F$ be an $[l,k]$-CNF. Then for any clause $C \in F$
$$
|\Gamma'(C)| \leq kd(F)
$$
\label{lemma-shrunk-neighborhood}
\end{lemma}
\textbf{Proof.} We simply calculate
$$
\left|\Gamma'(C)\right| = \left| \bigcup_{x \in \ensuremath{{\rm vbl}}(C)} \Gamma'_x(C) \right|
\leq \sum_{x \in \ensuremath{{\rm vbl}}(C)} \left|\Gamma'_x(C)\right|
\leq \sum_{x \in \ensuremath{{\rm vbl}}(C)} d_F(x) \leq |C|\, d(F) \leq kd(F)
$$
and the lemma follows.$\hfill\Box$\\
We need a lemma that states that after setting the variables of $C$
such that $C$ and a large part of its neighborhood is satisfied,
$d(F)$ does not increase too much.
\begin{lemma}
Let $F$ be a linear $[l,k]$-CNF and $C$ be any clause in $F$. Let
$\alpha$ be any assignment that is defined only on the variables of
$C$. If $\alpha$ satisfies $C$, then $d(F^{[\alpha]}) \leq d(F) +
k$, and $F^{[\alpha]}$ is a linear $[l-1,k]$-CNF.
\end{lemma}
\textbf{Proof.} Since $F$ is linear, $C$ is the only clause containing
more than one variable in the domain of $\alpha$. Since $\alpha$
satisfies $C$, every clause loses at most one literal, and hence
$F^{[\alpha]}$ is an $[l-1,k]$-CNF. Surely, it is linear as well. To
bound the amount by which $d(F)$ increases, consider any $y \in
\ensuremath{{\rm vbl}}(F)$. If $y$ is set by $\alpha$, then $d_{F^{[\alpha]}}(y)=0$.
Otherwise, $d_{F^{[\alpha]}}(y)$ is at most $d_F(y)$ plus the number
of clauses that count additionally towards $d(y)$, i.e., clauses $D$
with $y \in \ensuremath{{\rm vbl}}(D)$, $|D| = k$ and $|D^{[\alpha]}|=k-1$. Clearly, $D
\in \Gamma(C)$, otherwise its size would not decrease under $\alpha$.
For each $x\in\ensuremath{{\rm vbl}}(C)$, there is at most one such clause $D \in
\Gamma(C)$, since $x, y \in \ensuremath{{\rm vbl}}(D)$ and $F$ is linear. Hence there
are at most $|C| \leq k$ such clauses, thus $d_{F^{[\alpha]}}(y) \leq
d_F(y)+k$. Therefore, $d(F^{[\alpha]}) \leq d(F)+k$ holds as
well.$\hfill\Box$
\begin{corollary}
Let $F$ be an unsatisfiable linear $[l,k]$-CNF for $l \geq 2$. There is a
partial assignment $\alpha$ such that $F^{[\alpha]}$ is an
$[l-1,k]$-CNF, $d(F^{[\alpha]}) \leq d(F)+k$, and $\alpha$ satisfies
at least
$$
\frac{l-1}{2l} \left(2^{k-2} - kd(F)2^{k-l} \right)
$$
clauses of $F$.
\label{corollary-neighborhood}
\end{corollary}
\textbf{Proof.} By Lemma\ \ref{lemma-local}, we know that if $F$ is
unsatisfiable, there is a clause $C \in F$ with
$$
\sum_{D \in \Gamma(C)} 2^{-|D|} > \frac{1}{4}
$$
Further, using Lemma\ \ref{lemma-shrunk-neighborhood}, we can estimate
$$
\sum_{D \in \Gamma(C)} 2^{-|D|} \leq | \Gamma(C)|2^{-k} + kd(F)2^{-l}
$$
And thus, solving for $|\Gamma(C)|$,
\begin{eqnarray}
|\Gamma(C)| > 2^{k-2} - kd(F)2^{k-l}
\label{ineq-neighborhood}
\end{eqnarray}
Let $x_1, \dots, x_{|C|}$ be the variables of $C$ and let
$\Gamma_{x_i}(C) := \left\{D \in \Gamma(C) \ \big| \ x_i \in \ensuremath{{\rm vbl}}(D)\right\}$.
Since $F$ is linear, the sets $\Gamma_{x_i}(C)$ are pairwise disjoint, so by
the pigeonhole principle there is an $x_i$ such that $|\Gamma_{x_i}(C)| \leq
|\Gamma(C)| / |C|$. Set $\alpha(x_i)$ such that $\alpha$ satisfies $C$. For
each of the remaining $|C|-1$ variables $x_j$ of $C$, set $\alpha(x_j)$ such
that it satisfies at least half of the clauses in $\Gamma_{x_j}(C)$. Overall,
we satisfy at least
$$
\frac{|C|-1}{2|C|} | \Gamma(C)|
$$
clauses. Inequality (\ref{ineq-neighborhood}) and the fact that $|C| \geq l$ imply the lemma.$\hfill\Box$\\
\begin{theorem}
Let $f(k) := \min \left\{ |F| \ \big| \ F \textnormal{ is an unsatisfiable linear } k\textnormal{-CNF}\right\}$.
Then $f(k) \in \Omega\left(k2^k\right)$.
\end{theorem}
\textbf{Proof.} Let $F$ be an unsatisfiable linear $k$-CNF. We show
that $|F| \in \Omega\left(k2^k\right)$. Define $F_i$ for $0 \leq i
\leq k-1$ as follows: $F_0 := F$. For $0 \leq i \leq k-2$, apply
Corollary~\ref{corollary-neighborhood} on $F_i$ and let $\alpha_i$ be
a partial assignment as described in the corollary. Define $F_{i+1} =
F_i^{[\alpha_i]}$. It follows that $F_i$ is an unsatisfiable
$[k-i,k]$-CNF, $d(F_i) \leq ik$, and $\alpha_i$ satisfies at least
$$
M(i) := \frac{k-i-1}{2(k-i)} \left(2^{k-2} - k^2i2^i\right)
$$
clauses of $F_i$, i.e., $|F_i| - |F_{i+1}| \geq M(i)$. Hence, for any
$0 \leq j \leq k-1$, we obtain
\begin{eqnarray*}
|F| & \geq & \quad \sum_{i=0}^{j-1} M(i) \geq \frac{k-j-1}{2(k-j)} \left( j2^{k-2} -
k^2\sum_{i=0}^{j-1} i2^i\right) \\
& \geq & \quad \frac{k-j-1}{2(k-j)} \left( j2^{k-2} - k^2 j2^j\right) \ .
\end{eqnarray*}
Plugging in for example $j = k - 3\log k$ yields the claimed bound of
$|F| \in \Omega\left(k2^k\right)$. $\hfill\Box$
\section{Small Unsatisfiable Linear $3$-CNFs}
In this section, we construct small unsatisfiable linear $3$-CNFs. However,
we do not know the exact value of $f(3)$. Consider the formula
$$\{\{\bar{x}_1,x_2\}, \{\bar{x}_2,x_3\},\dots,\{\bar{x}_{n-1},
x_n\}, \{\bar{x}_n, x_1\}\} \ .$$ Every assignment satisfying it sets
all $x_i$ to $1$ or all to $0$.
We use this formula as a gadget for
building so-called {\em forcers}. A formula $F$ is called a {\em
$C$-forcer} if every assignment satisfying $F$ satisfies $C$.
Define
$$
F_6 := \{\{\bar{x}_1,x_2\}, \{\bar{x}_2,x_3\}, \{\bar{x}_3,x_4\},
\{\bar{x}_4,x_1\}, \{x_1,x_3\}, \{\bar{x}_2,\bar{x}_4\}\}
$$
and note that it is unsatisfiable. For a clause $\{u,v,w\}$, define
$$
F_6 (\{u,v,w\}) := \{\{\bar{x}_1,x_2,u\}, \{\bar{x}_2,x_3,v\},
\{\bar{x}_3,x_4,u\}, \{\bar{x}_4,x_1,v\}, \{x_1,x_3,w\},
\{\bar{x}_2,\bar{x}_4,w\}\} \ .
$$
This formula is a linear $3$-CNF and a $\{u,v,w\}$-forcer. A nice
property of this forcer is that no two variables of the forced clause
occur together in any clause of the forcer. Taking the union of
the forcers $F_6(C)$ for all $8$ clauses $C$ over $\{u,v,w\}$ (and
renaming the variables $x_i$ each time, to ensure linearity), we
obtain an unsatisfiable linear $3$-CNF with $48$ clauses. This is
essentially the construction used in the proof of
Theorem~\ref{theorem-existence}.\\
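Both the unsatisfiability of $F_6$ and the forcer property of $F_6(\{u,v,w\})$ are finite checks. The following Python sketch (with variables numbered $1,\dots,7$, an assumed encoding) verifies them, and the linearity of the forcer, by brute force:

```python
from itertools import product

X1, X2, X3, X4, U, V, W = range(1, 8)
F6 = [{-X1, X2}, {-X2, X3}, {-X3, X4}, {-X4, X1}, {X1, X3}, {-X2, -X4}]
# F_6({u,v,w}): append the extra literals u, v, u, v, w, w clause by clause
F6C = [C | {e} for C, e in zip(F6, [U, V, U, V, W, W])]

def satisfies(a, F):
    """a[i] holds the truth value of variable i; literals are signed ints."""
    return all(any(a[abs(l)] == (l > 0) for l in C) for C in F)

def assignments(nv):
    for bits in product((False, True), repeat=nv):
        yield dict(enumerate(bits, start=1))

assert not any(satisfies(a, F6) for a in assignments(4))   # F_6 is unsatisfiable
for a in assignments(7):                                   # forcer property:
    if satisfies(a, F6C):
        assert a[U] or a[V] or a[W]                        # every model satisfies {u,v,w}
assert all(len({abs(l) for l in C} & {abs(l) for l in D}) <= 1
           for C in F6C for D in F6C if C is not D)        # F_6({u,v,w}) is linear
```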
We can improve the above construction. Define
\begin{eqnarray*}
F_8 (\{u,v\}) & := & \{\{\bar{x}_1,x_2,u\}, \{\bar{x}_2,x_3,v\},
\{\bar{x}_3,x_4,u\}, \{\bar{x}_4,x_5,v\}, \{\bar{x}_5,x_6,u\},
\{\bar{x}_6,x_1,v\}, \\
& & \{x_1,x_3,x_5\},\{\bar{x}_2,\bar{x}_4,\bar{x}_6\} \} \ .
\end{eqnarray*}
This is a $\{u,v\}$-forcer with $8$ clauses. Building $4$ forcers
for the clauses $\{u,v\},\{u,\bar{v}\},\{\bar{u},v\},\{\bar{u},\bar{v}\}$,
we obtain an unsatisfiable linear $3$-CNF with $32$ clauses.\\
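The forcer property of $F_8(\{u,v\})$ can likewise be checked by brute force over its $8$ variables (a sketch, again with a signed-integer literal encoding):

```python
from itertools import product

U, V = 7, 8
# F_8({u,v}) with internal variables x_1..x_6 numbered 1..6
F8 = [{-1, 2, U}, {-2, 3, V}, {-3, 4, U}, {-4, 5, V}, {-5, 6, U},
      {-6, 1, V}, {1, 3, 5}, {-2, -4, -6}]

def satisfies(a, F):
    return all(any(a[abs(l)] == (l > 0) for l in C) for C in F)

sat_found = False
for bits in product((False, True), repeat=8):
    a = dict(enumerate(bits, start=1))
    if satisfies(a, F8):
        sat_found = True
        assert a[U] or a[V]          # every model satisfies the forced clause {u,v}
assert sat_found                     # ... and the forcer itself is satisfiable
assert all(len({abs(l) for l in C} & {abs(l) for l in D}) <= 1
           for C in F8 for D in F8 if C is not D)   # F_8({u,v}) is linear
```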
We go on: Consider $\{\{u,v\},\{u,\bar{v}\},\{\bar{u},v\},
\{\bar{u},\bar{v},w\}, \{\bar{u},\bar{v},\bar{w}\}\}$. Clearly, this formula
is unsatisfiable. Build the forcer $F_8(C)$ for the three $2$-clauses.
Then build $F_6(\{\bar{u},\bar{v},w\})$. Finally, add
$\{\bar{u},\bar{v},\bar{w}\}$ and obtain an unsatisfiable linear $3$-CNF
with $3 \times 8+6+1 = 31$ clauses.\\
The trick here was that we can afford to enforce one clause $C$ not by
using $F_6(C)$, but by directly including it into our final formula.
Of course, we must keep that final formula linear, hence we cannot apply
this trick too often. However, we can tweak the formula such that we
can apply this trick twice. Consider
\begin{eqnarray*}
F_w & = & \{ \{u,v,w\}, \{\bar{u},v,w\}, \{\bar{v},w\}\} = \{C_1,C_2,C_3\}\\
F_{\bar{w}} & = & \{ \{\bar{w},x,y\}, \{\bar{w},x,\bar{y}\},
\{\bar{w},\bar{x}\}\} = \{D_1,D_2,D_3\} \ .
\end{eqnarray*}
The formula $F_w$ is a $\{w\}$-forcer, and $F_{\bar{w}}$ is a
$\{\bar{w}\}$-forcer. We can build a linear formula from which $F_w$
and $F_{\bar{w}}$ can be derived:
$$
F := F_6(C_2) \cup F_6(D_2) \cup F_8(C_3) \cup F_8(D_3) \cup \{C_1\}
\cup \{D_1\}
$$
Here, we applied the above trick of directly including a desired clause
into the final formula twice, namely to $C_1$ and $D_1$. This is
an unsatisfiable linear $3$-CNF with $6+6+8+8+1+1 = 30$ clauses.
\section{Conclusion}
We showed that the size of a smallest unsatisfiable linear $k$-CNF is
in $\Omega\left(k2^k\right) \cap O\left(k^4 4^k\right)$. However, the
best constructive upper bound is a tower-like function. It is
desirable to find a way to {\em construct} unsatisfiable $k$-CNFs of
reasonable size, since this will give much better insight into the
structure of those formulas than a probabilistic proof.\\
One approach would be to use hypergraph vertex coloring problems: If
one translates the $k$-colorability problem of a $d$-uniform linear
hypergraph into a CNF in the natural way, one obtains a linear CNF
with clauses of size $k$ and $d$. There are linear $d$-uniform
hypergraphs with arbitrarily large chromatic number, for any $d$. This
follows, e.g., from the Hales--Jewett theorem~\cite{hales-jewett} on
combinatorial lines. However, the bounds obtained from this theorem
are tower-like, too.
\bibliographystyle{splncs}
https://arxiv.org/abs/0901.4389 | Spectral fluctuation properties of constrained unitary ensembles of Gaussian-distributed random matrices | We investigate the spectral fluctuation properties of constrained ensembles of random matrices (defined by the condition that a number N(Q) of matrix elements vanish identically; that condition is imposed in unitarily invariant form) in the limit of large matrix dimension. We show that as long as N(Q) is smaller than a critical value (at which the quadratic level repulsion of the Gaussian unitary ensemble of random matrices may be destroyed) all spectral fluctuation measures have the same form as for the Gaussian unitary ensemble.
\section{Introduction}
\label{int}
We investigate the spectral fluctuation properties of the constrained
unitary ensembles of Gaussian--distributed random matrices (CGUE)
introduced in Ref.~\cite{Pap06}. Constrained ensembles of random
matrices deserve interest because they represent entire classes of
non--canonical random--matrix ensembles that were proposed to mimic
typical properties of interacting many--fermion systems more closely
than do the canonical ensembles (the Gaussian Orthogonal, Unitary, and
Symplectic Ensembles)~\cite{Meh91}. Examples of constrained ensembles
are the Embedded Gaussian Orthogonal Ensemble~\cite{Mon75,Ben03} and
the Two--Body Random Ensemble~\cite{Fre70,Boh71,Pap07}. The
constraints essentially require certain matrix elements or linear
combinations of matrix elements to vanish. It is difficult to deal
with the constrained ensembles analytically because they lack the
invariance properties that give analytical access to the canonical
ensembles. That difficulty is overcome by imposing the condition of
unitary, orthogonal, or symplectic invariance on the constrained
ensembles. It is expected that that condition leaves the spectral
fluctuations unchanged (whereas the eigenfunctions acquire the same
distribution as for the canonical ensembles). Here we focus on the
case of unitary invariance, i.e., on the CGUE.
Some spectral properties of the CGUE were exhibited in
Ref.~\cite{Pap06}. In particular, the following sufficient condition
for level repulsion was given. The quadratic level repulsion
characteristic of the GUE (the unitarily invariant ensemble of
Gaussian--distributed Hermitean random matrices) also prevails for the
CGUE provided that the
number $N_Q$ of constraints does not exceed a critical value,
\begin{equation}
N_Q < N^{\rm crit}_Q
\label{1}
\end{equation}
with $N^{\rm crit}_Q$ defined in Eq.~(\ref{A1}) below.
In the present paper we go beyond Ref.~\cite{Pap06} and address the
spectral fluctuation properties of the CGUE in the limit of large
matrix dimension $N \gg 1$. We do so for $N_Q < N^{\rm crit}_Q$. For
$N_Q = 0$ the CGUE coincides with the GUE. For $N_Q \geq N^{\rm
crit}_Q$ the form of the constraints does not seem to permit
definitive analytical statements. It has remained an open question to
what extent the spectral fluctuation properties of the CGUE (beyond
the statement of sheer level repulsion) are the same as or differ from
those of the GUE for $0 < N_Q < N^{\rm crit}_Q$. We prove the
following
\noindent
{\bf Theorem}: For matrix dimension $N \gg 1$ and the number of
constraints $N_Q <N^{\rm crit}_Q$ (with $N^{\rm crit}_Q$ defined in
Eq.~(\ref{A1}) below), the spectral fluctuation measures of the CGUE
coincide with those of the GUE save for correction terms of order $1 /
N$.
To make the paper self--contained, we collect in Section~\ref{def}
some definitions and results given in Ref.~\cite{Pap06}. In
Section~\ref{mod} we slightly modify the definition of the constrained
ensembles so as to remove a singularity. In Section~\ref{asym} we
discuss the form of the constraints in the limit $N \gg 1$. Our proof
is given in Section~\ref{proof}. It is based on an approach developed
in Ref.~\cite{Hac95}. Section~\ref{dis} contains a discussion. Some
technical details are presented in an Appendix.
\section{Definitions}
\label{def}
We consider Hermitean matrices acting on a Hilbert space ${\cal H}$
of dimension $N$. For any two such matrices $A$ and $B$, we introduce
the canonical scalar product in terms of the trace
\begin{equation}
\langle A | B \rangle \equiv {\rm Tr} (A B) \ .
\label{scalar}
\end{equation}
This allows us to define an orthonormal basis of $N^2$ Hermitean
basis matrices $B_\alpha = B^{\dagger}_\alpha$ which obey
\begin{equation}
\label{basis}
\langle B_\alpha|B_\beta\rangle \equiv {\rm Tr} (B_\alpha B_\beta)
= \delta_{\alpha \beta}
\label{orth}
\end{equation}
and
\begin{equation}
\sum_{\alpha = 1}^{N^2} | B_\alpha \rangle \langle B_\alpha | =
{\bf 1}_N
\label{compl}
\end{equation}
where ${\bf 1}_N$ is the unit matrix in $N$ dimensions. Any Hermitean
matrix $H$ acting on ${\cal H}$ can be expanded in terms of the $N^2$
Hermitean basis matrices $B_\alpha$ as
\begin{equation}
\label{expan}
H=\sum_{\alpha=1}^{N^2} h_\alpha B_\alpha.
\end{equation}
The Gaussian Unitary Ensemble (GUE) of random matrices is obtained by
assuming that the expansion coefficients $h_\alpha$ are uncorrelated
Gaussian--distributed real random variables with mean value zero and a
common variance. For the GUE, the probability density $W(H)$ of the
matrix elements of $H$ has the form
\begin{equation}
W(H){\rm d} [H] = {\cal N}^{\rm GUE} \exp \bigg( - \frac{N}{2
\lambda^2} \langle H | H \rangle \bigg) {\rm d} [H] \ .
\label{GUE}
\end{equation}
Here ${\rm d} [H]$ is the product of the differentials of all
independent matrix elements, ${\cal N}^{\rm GUE}$ is a normalization
factor, and $2 \lambda$ is the radius of Wigner's semicircle.
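As a numerical aside (not part of the derivation), the scaling in Eq.~(\ref{GUE}) can be checked by sampling: reading the variances off the weight, $H_{ii} \sim {\cal N}(0, \lambda^2/N)$ and the real and imaginary parts of the off-diagonal elements have variance $\lambda^2/(2N)$, and the spectrum should fill Wigner's semicircle of radius $2\lambda$. A Python sketch:

```python
import numpy as np

# Sample an N x N GUE matrix with density ~ exp(-N/(2*lam**2) * Tr H^2):
# H_ii ~ N(0, lam^2/N); off-diagonal real/imag parts ~ N(0, lam^2/(2N)).
def sample_gue(N, lam=1.0, rng=None):
    rng = rng or np.random.default_rng()
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    return (A + A.conj().T) * lam / (2.0 * np.sqrt(N))

rng = np.random.default_rng(0)
N, lam = 400, 1.0
eigs = np.linalg.eigvalsh(sample_gue(N, lam, rng))

# The spectrum should be confined to Wigner's semicircle of radius 2*lam.
assert np.all(np.abs(eigs) < 2.3 * lam)          # edge fluctuations are O(N^{-2/3})
assert np.mean(np.abs(eigs) < 2.0 * lam) > 0.95  # bulk lies inside [-2*lam, 2*lam]
```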
We introduce the constraints by considering two orthogonal subspaces
labeled ${\cal P}$ and ${\cal Q}$ with dimensions $N_P$ and $N_Q = N^2
- N_P$, respectively. These are defined in terms of orthogonal
projection operators
\begin{eqnarray}
{\cal P} &=& \sum_{p =1}^{N_P} | B_p \rangle \langle B_p
| \ , \nonumber \\
{\cal Q} &=& \sum_{q = N_P + 1}^{N^2} | B_q \rangle \langle B_q
| \ .
\label{A3}
\end{eqnarray}
We have
\begin{equation} {\cal P}^{\dagger} = {\cal P} \ , \ {\cal Q}^{\dagger} =
{\cal Q} \ , \ {\cal P}^2 = {\cal P} \ , \ {\cal Q}^2 = {\cal Q} \
, \ {\cal P} {\cal Q} = 0 \ , \ {\cal P} + {\cal Q} = {\bf 1}_N \ .
\label{A4}
\end{equation}
Constraints can be formulated in the form $\langle B_q | H \rangle = 0$
for all $q = N_P + 1, \ldots, N^2$, i.e., ${\cal Q} | H \rangle = 0$. In
the CGUE such constraints are used in
unitarily invariant form. The CGUE is defined by the probability
density $W_{\cal P}$ of the matrix elements of $H$ given by \begin{eqnarray}
W_{\cal P}(H) {\rm d} [ H ] &=& {\cal N}^{\rm GUE} \exp{\left( -
\frac{N} {2 \lambda^2} \langle H | H\rangle \right)} {\rm d} [ H ]
\nonumber \\ && \times \int {\rm d} [U] \bigg( \prod_q \ \delta (
\sqrt{\frac{N}{2 \pi \lambda^2}} \langle U B_q U^{\dagger} | H \rangle
) \bigg) \ .
\label{A7b}
\end{eqnarray}
The integral ${\rm d}[U]$ extends over the unitary group in $N$
dimensions. The Haar measure of the unitary group is normalized to
one, i.e.,
\begin{equation}
\int {\rm d} [U] = 1 \ .
\end{equation}
We diagonalize the matrix $H$ with the help of a unitary matrix $V$,
\begin{equation}
H = V x V^{\dagger} \ ,
\label{A8}
\end{equation}
where $x={\rm diag}(x_1,\ldots,x_N)$ is the diagonal matrix of the
eigenvalues. The integration measure becomes
\begin{equation}
{\rm d} [H] \propto \Delta^2(x) {\rm d} [x] {\rm d} [V] \ ,
\label{dh}
\end{equation}
where ${\rm d} [x]$ is the product of the differentials of the $N$
eigenvalues, where ${\rm d} [V]$ is the Haar measure of the unitary
group in $N$ dimensions, and where $\Delta(x)$ denotes the Vandermonde
determinant
\begin{equation}
\Delta(x) = \prod_{1 \le \mu < \nu \le N} (x_\mu - x_\nu) \ .
\label{van}
\end{equation}
Eq.~(\ref{dh}) shows that eigenvalues and eigenvectors of the CGUE are
uncorrelated random variables. The joint probability distribution
$P_{\cal P}(x)$ of the eigenvalues is given by
\begin{equation}
P_{\cal P}(x) = {\cal N}_0 \exp{\left( - \frac{N}{2 \lambda^2} \langle
x | x \rangle \right)} \Delta^2(x) F_{\cal P}(H) \ ,
\label{A8a}
\end{equation}
where
\begin{eqnarray}
F_{\cal P}(H) &\equiv& \int {\rm d} [ U ] \bigg( \prod_q \delta (
\sqrt{ \frac{N}{2 \pi \lambda^2} } \langle B_q | U H U^{\dagger}
\rangle ) \bigg) \nonumber \\
&=& \int {\rm d} [ U ] \bigg( \prod_q \delta ( \sqrt{ \frac{N}{2 \pi
\lambda^2} } \langle B_q | U x U^{\dagger} \rangle ) \bigg)
\label{A9a}
\end{eqnarray}
is a function of the eigenvalues $\{ x_\mu \}$ only, and where ${\cal
N}_0$ is another irrelevant normalization factor. Comparison of
Eq.~(\ref{A7b}) and Eq.~(\ref{GUE}) shows that the eigenvalue
distribution of the CGUE differs from that of the GUE by the factor
$F_{\cal P}(H)$. GUE--type level repulsion is contained in the factor
$\Delta^2(x)$ in Eq.~(\ref{A8a}), and such level repulsion will
prevail also in the CGUE unless $F_{\cal P}(H)$ is singular whenever
two eigenvalues coincide. In Ref.~\cite{Pap06} it was shown that
$F_{\cal P}(H)$ cannot be singular if the number $N_Q$ of constraints
obeys the inequality
\begin{equation}
N_Q < N^{\rm crit}_Q = N ( N - 1 ) / 2 - \sum_{j = 1}^J L_j ( L_j - 1
) / 2\ .
\label{A1}
\end{equation}
Here it is assumed that the matrix $B = \sum_q s_q B_q$ with real
coefficients $s_q$ possesses asymptotically (all $s_q$ large) $J$ sets
of degenerate eigenvalues with multiplicities $L_j$, $j = 1,
\ldots, J$.
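Eq.~(\ref{A1}) is elementary to evaluate; the following small helper (ours, purely illustrative) computes $N^{\rm crit}_Q$ from $N$ and the multiplicities $L_j$:

```python
def nq_crit(N, multiplicities):
    # Eq. (A1): N(N-1)/2 - sum_j L_j (L_j - 1)/2.
    # multiplicities = sizes L_j of the degenerate eigenvalue sets of B;
    # non-degenerate eigenvalues (L_j = 1) contribute nothing and may be omitted.
    return N * (N - 1) // 2 - sum(L * (L - 1) // 2 for L in multiplicities)

assert nq_crit(4, []) == 6   # no degeneracies: the most room for constraints
assert nq_crit(4, [4]) == 0  # one fully degenerate block: none at all
assert nq_crit(5, [2, 2]) == 8
```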
\section{Modified Form of the Constraints}
\label{mod}
The function $F_{\cal P}(H)$ embodies the constraints. Therefore, it
is the central object of study in this paper. The treatment of
$F_{\cal P}(H)$ simplifies when all confining matrices $B_q$ are
traceless. We believe that that case is physically the more
interesting one, for the following reason. We show in the Appendix
that whenever the $B_q$s are not traceless, there always exists an
orthogonal transformation of the set $\{B_q\}$ to a new set
$\{\tilde{B}_q\}$ such that $F_{\cal P}(H)$ is unchanged and that all
$\tilde{B}_q$ with $q > N_P + 1$ are traceless. The one constraining
matrix $\tilde{B}_{N_P + 1}$ that is not traceless is the sum of a
traceless part and a multiple of the unit matrix. But constraining
$H$ with a multiple of the unit matrix means that we constrain the
centroid of the spectrum of $H$. We cannot think of a physically
interesting situation where such a constraint would be
meaningful. This is why we focus attention on the case where all $B_q$
are traceless,
\begin{equation}
\langle B_q \rangle = 0 \ {\rm for \ all} \ q = N_P + 1, N_P + 2,
\ldots, N^2 \ ,
\label{traceless}
\end{equation}
and treat the more general case where the condition~(\ref{traceless})
is violated, in the Appendix. Here and in the sequel we use the symbol
$\langle A \rangle$ to denote the trace of the matrix $A$. This is
consistent with the definition~(\ref{scalar}).
For the developments in Section~\ref{proof} we note the following
properties of $F_{\cal P}(H)$. The function $F_{\cal P}(H)$ is real
(this follows from Eq.~(\ref{A9a})) and positive definite (this is
seen when we write the defining delta functions as limits of
Gaussians). Using Fourier transformation, we can write $F_{\cal P}(H)$
in Eq.~(\ref{A9a}) as an $N_Q$--fold Fourier integral,
\begin{equation}
F_{\cal P}(H) = \bigg( \frac{\lambda^2}{2 \pi N} \bigg)^{N_Q / 2}
\prod_{q = 1}^{N_Q} \int {\rm d} s_q \int {\rm d} [U] \ \exp \{ i
\langle B(s) | U H U^\dag \rangle \}
\label{F1}
\end{equation}
where
\begin{equation}
B(s) = \sum_q s_q B_q
\label{F2}
\end{equation}
and where $s$ stands for the set $\{s_1, \ldots, s_{N_Q} \}$. The
integral over the unitary group can be worked out and with ${\rm d} s
= \prod_q {\rm d} s_q$ yields (see Ref.~\cite{Pap06})
\begin{equation}
F_{\cal P}(H) \propto \bigg( \frac{\lambda^2}{2 \pi N} \bigg)^{N_Q
/ 2} \int {\rm d} s \ \frac{\det \exp \{ i x_\mu b_\nu(s) \} }{
\Delta(x) \Delta(b(s))} \ .
\label{F6}
\end{equation}
Here the $b_\nu(s)$ are the eigenvalues of the matrix $B(s)$, and
$\Delta (b)$ is the Vandermonde determinant of the $b_\nu(s)$, see
Eq.~(\ref{van}). In Ref.~\cite{Pap06} it was shown that the integrals
over $s$ converge if the condition~(\ref{A1}) is met. Moreover,
inspection of Eq.~(\ref{F6}) shows that $F_{\cal P}(H)$ is not
singular when two eigenvalues $x_\mu, x_\nu$ coincide.
However, because of the form of the constraints (Eq.~(\ref{A9a})), the
function $F_{\cal P}(H)$ is singular when all eigenvalues of $H$
coincide. To see this we define
\begin{equation}
\tilde{H} = H - \frac{{\bf 1}_N}{N} \langle H \rangle \ ,
\label{F6a}
\end{equation}
use the assumption~(\ref{traceless}) and rewrite Eq.~(\ref{F1}) in
the form
\begin{equation}
F_{\cal P}(H) = F_{\cal P}(\tilde{H}) = \bigg( \frac{\lambda^2}{2 \pi
N} \bigg)^{N_Q / 2} \int {\rm d} s \int {\rm d} [U] \ \exp \{ i \langle
B(s) | U \tilde{H} U^\dag \rangle \} \ .
\label{F4}
\end{equation}
Eq.~(\ref{F4}) shows that $F_{\cal P}(\tilde{H})$ is singular when
$\tilde{H} = 0$, i.e., when all eigenvalues of $H$ coincide. The
singularity mirrors a singularity in the definition~(\ref{A9a}) of
$F_{\cal P}(H)$. Indeed, when $x_\mu = x_\nu = y$ for all $\mu, \nu =
1, \ldots, N$, the Dirac deltas in the definition~(\ref{A9a}) take the
form $\delta(y \langle B_q \rangle)$. Because of
Eq.~(\ref{traceless}) each of these terms is singular. We avoid the
singularity by modifying the definition of $F_{\cal
P}(\tilde{H})$. Instead of $F_{\cal P}(\tilde{H})$, we consider the
constraining function
\begin{equation}
\tilde{F}_{\cal P}(\tilde{H}) = \bigg( \frac{\langle \tilde{H}^2
\rangle}{N \lambda^2} \bigg)^{N_Q / 2} F_{\cal P}(\tilde{H}) \ .
\label{F5}
\end{equation}
The factor in front of $F_{\cal P}(\tilde{H})$ guarantees that
$\tilde{F}_{\cal P}(\tilde{H})$ is not singular at $\tilde{H} = 0$. At
the same time, that factor is a function of the sum of the eigenvalues
$x_\mu$ only. Thus, that factor cannot modify the correlations of
close--lying eigenvalues $x_\mu$, and the spectral fluctuation
properties of the constrained ensembles defined by the constraining
functions $F_{\cal P}(\tilde{H})$ and $\tilde{F}_{\cal P}(\tilde{H})$
are the same. Moreover, for real $x_\mu$ the function $\tilde{F}_{\cal
P}(\tilde{H})$ is real and positive definite. According to
Eq.~(\ref{F6}) $\tilde{F}_{\cal P}(\tilde{H})$ is not singular for
finite values of the $x_\mu$. Inspection shows that when one of the
eigenvalues, $x_\mu$ say, tends to infinity, $\tilde{F}_{\cal
P}(\tilde{H})$ cannot grow more strongly than some power of
$x_\mu$. That growth is much weaker than the Gaussian suppression of
large eigenvalues in Eq.~(\ref{A7b}). Hence the confinement of the
spectrum to a finite interval characteristic of the GUE persists also
for the CGUE with constraining function $\tilde{F}_{\cal
P}(\tilde{H})$ although the shape of the average spectrum may be
modified.
Collecting everything, we have
\begin{equation}
\tilde{F}_{\cal P}(\tilde{H}) = \bigg( \frac{\langle (\tilde{H})^2
\rangle }{2 \pi N^2} \bigg)^{N_Q / 2} \int {\rm d} [U] \int {\rm d}
s \ \exp \{ i \langle U B(s) U^\dag | \tilde{H} \rangle \} \ .
\label{F7}
\end{equation}
It is convenient to introduce the new variables $t_q = \lambda s_q$.
Then
\begin{equation}
\tilde{F}_{\cal P}(\tilde{H}) = \bigg( \frac{\langle (\tilde{H})^2
\rangle}{2 \pi \lambda^2 N^2} \bigg)^{N_Q / 2} \int {\rm d} [U] \int
{\rm d} t \ \exp \{ i \langle U B(t) U^\dag | (\tilde{H} / \lambda)
\rangle \} \ .
\label{F7a}
\end{equation}
This shows that $\tilde{F}_{\cal P}(\tilde{H})$ depends on $\tilde{H}$
only via the dimensionless ratio $\tilde{H} / \lambda$, as expected.
Because of unitary invariance, $\tilde{F}_{\cal P}(\tilde{H})$ can
depend only on unitary invariants constructed from $\tilde{H} /
\lambda$. The only such invariants are the normalized traces of
$\tilde{H}^n / \lambda^n$ with positive integer $n$. For $N \gg 1$
this is shown explicitly in the next Section. The probability density
for the Hamiltonian matrices of the CGUE is given by
\begin{equation}
\tilde{W}_{\cal P}(H) {\rm d} [ H ] = \tilde{\cal N} \exp{ \left( -
\frac{N} {2 \lambda^2} \langle H | H\rangle \right)} \tilde{F}_{\cal
P}(\tilde{H}) \ {\rm d} [ H ] \ .
\label{F8}
\end{equation}
The substitution of $F_{\cal P}(\tilde{H})$ by $\tilde{F}_{\cal
P}(\tilde{H})$ also modifies the normalization factor of
$\tilde{W}_{\cal P}$ but that is irrelevant for what follows.
\section{Asymptotic Form of $\tilde{F}_{\cal P}(\tilde{H})$ for $N
\gg 1$}
\label{asym}
For $N \gg 1$ we now display explicitly the dependence of
$\tilde{F}_{\cal P}(\tilde{H})$ on the normalized unitary invariants
$(1 / N) \langle (\tilde{H} / \lambda)^n \rangle$ with positive integer
$n$. We mention in passing that the terms of leading order in a
systematic expansion of $\tilde{F}_{\cal P}(\tilde{H})$ in powers of
$N_Q / N^2$ can also be obtained from the Harish--Chandra Itzykson
Zuber integral~\cite{Itz80,Zin03}, or from the standard supersymmetry
approach~\cite{Efe97,Ver85}. We use the assumption~(\ref{traceless})
and discuss the case where not all constraining matrices are traceless
in the Appendix. Then $\langle B(t) \rangle = 0$. We consider the
expressions
\begin{equation}
\int {\rm d} [U] \ ( i \langle (\tilde{H} / \lambda) | U B(t) U^\dag
\rangle )^k
\label{B4}
\end{equation}
with $k$ positive integer. Such expressions are generated when the
exponential in Eq.~(\ref{F7a}) is expanded in a Taylor series. To
calculate the integral over the unitary group we use a method valid
for $N \gg 1$~\cite{Pro02,Bro96}. To leading order in $1 / N$ the
integral can be done using Wick contraction on the matrices $U$, the
rules being
\begin{equation}
U_{\mu \nu} U^\dag_{\rho \sigma} \to (1 / N) \delta_{\mu \sigma}
\delta_{\nu \rho} \ {\rm and} \ U_{\mu \nu} U_{\rho \sigma} \to 0 \ .
\label{B5}
\end{equation}
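The contraction rules~(\ref{B5}) can be tested by Monte Carlo over Haar-random unitaries; for a single pair $U_{\mu \nu} U^\dag_{\rho \sigma}$ the rule is in fact the exact Haar average. A Python sketch (Haar sampling via QR with the standard phase fix is our own choice of method):

```python
import numpy as np

def haar_unitary(N, rng):
    # QR of a complex Ginibre matrix, with the standard phase fix on the
    # diagonal of R, yields a Haar-distributed unitary.
    Z = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diagonal(R)
    return Q * (d / np.abs(d))

rng = np.random.default_rng(1)
N, samples = 4, 10000
acc = np.zeros((N, N, N, N), dtype=complex)
for _ in range(samples):
    U = haar_unitary(N, rng)
    acc += np.einsum('mn,rs->mnrs', U, U.conj().T)
acc /= samples

# Contraction rule: <U_{mn} Udag_{rs}> = delta_{ms} delta_{nr} / N.
exact = np.einsum('ms,nr->mnrs', np.eye(N), np.eye(N)) / N
assert np.max(np.abs(acc - exact)) < 0.03  # Monte Carlo error ~ 1/sqrt(samples)
```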
Terms of higher order are obtained in a similar fashion and lead to
similar results but are not considered here. We first look at a few
simple cases. For $k = 1$ the expression~(\ref{B4}) vanishes. For $k =
2$ and $k = 3$ we obtain $( i^2 / N) [ (1 / N) \langle (\tilde{H} /
\lambda)^2 \rangle ]$ $\langle B^2(t) \rangle$ and $2! \, (i^3 / N^2)
[ (1 / N) \langle (\tilde{H} / \lambda)^3 \rangle ] \langle B^3(t)
\rangle$, respectively. For $k = 4$ Wick contraction generates two
terms. One is proportional to the square of the $k = 2$ term just
considered. The other is given by $3! ( i^4 / N^3) [ (1 / N) \langle
(\tilde{H} / \lambda)^4 \rangle ] \langle B^4(t) \rangle$. For $k =
5$, Wick contraction generates two types of terms: The product of the
$k = 2$ term and the $k = 3$ term, and a new term given by $4! ( i^5
/ N^4)$ $[ (1 / N) \langle (\tilde{H} / \lambda)^5 \rangle ]$ $\langle
B^5(t) \rangle$.
For the general expression~(\ref{B4}) we consider all partitions of
$k$ into sets of positive integers $k_1, k_2, \ldots, k_f$ greater than
unity such that $\sum_{i = 1}^f k_i = k$. To leading order in $1 / N$,
expression~(\ref{B4}) is given by the sum over all such partitions,
the contribution of each partition being $i^{k} {k \choose k_1} {k -
k_1 \choose k_2} \times \ldots \times {k - k_1 - \ldots - k_{f - 1}
\choose k_f} \prod_{i = 1}^f (k_i - 1)! ( 1 / N^{k_i - 1} ) [ (1 / N)
\langle (\tilde{H} / \lambda)^{k_i} \rangle ] \langle B^{k_i}(t)
\rangle$. The terms of higher order in $1 / N$ also involve products
of traces of powers of $\tilde{H} / \lambda$ and of traces of $B$, the
difference being that at least one trace of a power of $\tilde{H} /
\lambda$ is multiplied by at least two traces of powers of $B$ such
that the sum of the exponents of $B$ equals the exponent of $\tilde{H}
/ \lambda$. It is shown below that $\langle B^n \rangle$ and $\prod_i
\langle B^{n_i} \rangle$ with $\sum_i n_i = n$ are of the same order
in $N$ so the neglect of such terms is legitimate.
We conclude that to leading order in $1 / N$, the integral over the
unitary group in Eq.~(\ref{F7a}) is given by
\begin{eqnarray}
&& \int {\rm d} [U] \ \exp \{ i \langle U B(t) U^\dag | (\tilde{H} /
\lambda) \rangle \} \nonumber \\
&& \qquad = \exp \{ \sum_{n \geq 2} (1 / n) (i^n / N^{n - 1}) [ (1 / N)
\langle (\tilde{H} / \lambda)^n \rangle ] \langle B^n(t) \rangle \} \ .
\label{B6}
\end{eqnarray}
For $N \gg 1$ the constraining function $\tilde{F}$ is then given by
\begin{eqnarray}
\tilde{F}_{\cal P}(\tilde{H}) &=& \bigg( \frac{\langle (\tilde{H})^2
\rangle}{2 \pi \lambda^2 N^2} \bigg)^{N_Q / 2} \nonumber \\
&\times& \int {\rm d} t \ \exp \{ \sum_{n \geq 2} (1 / n) (i^n / N^{n
- 1} ) [ (1 / N ) \langle (\tilde{H} / \lambda )^n \rangle] \langle
B^n(t) \rangle \} \ .
\label{B7}
\end{eqnarray}
It may seem that because of the factors $N^{n - 1}$ the terms of
higher order in $n$ in Eqs.~(\ref{B6}) and (\ref{B7}) can be
neglected. We now show that for $N_Q \sim N^2$ this is not the
case. In Eq.~(\ref{B7}) we replace the Cartesian integration variables
$t_q$ by polar coordinates $\{ r, \Omega \}$ in $N_Q$ dimensions where
\begin{equation}
r^2 = \sum_q t^2_q
\label{B8}
\end{equation}
and where $\Omega$ stands for the angular variables. We write
\begin{equation}
B(t) = r B(\Omega)
\label{B9}
\end{equation}
and since $\langle B_q | B_{q'} \rangle = \delta_{q q'}$ have
\begin{equation}
\langle B^2(\Omega) \rangle = 1 \ .
\label{B10}
\end{equation}
Let $b_\mu(\Omega)$ denote the
$N$ real eigenvalues of $B(\Omega)$. Then $\sum_\mu b^2_\mu(\Omega) =
1$ and $|b_\mu(\Omega)|
\leq 1$ for all $\mu = 1, \ldots, N$. For integer $n > 2$ this implies
that
\begin{equation}
| \langle B^n(\Omega) \rangle | \leq 1 \ .
\label{B11}
\end{equation}
This, incidentally, justifies the omission of terms of order $1 / N$
above and shows that $\langle B^n(\Omega) \rangle$ and $\langle
(\tilde{H} / \lambda)^n \rangle$ are characteristically different: The
first expression is (at most) of order unity while the second is of
order $N$. That is why we always carry the second expression in the
form $(1 / N) \langle (\tilde{H} / \lambda)^n \rangle$.
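The chain of bounds leading to Eq.~(\ref{B11}) is easy to test numerically; the sketch below (ours) stands a single random traceless Hermitean matrix, normalized to $\langle B^2 \rangle = 1$, in for $B(\Omega)$:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 6
# A random traceless Hermitian matrix normalized to <B^2> = Tr B^2 = 1,
# standing in for B(Omega) on the unit sphere of constraint space.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
B = (A + A.conj().T) / 2
B -= (np.trace(B) / N) * np.eye(N)
B /= np.sqrt(np.trace(B @ B).real)

b = np.linalg.eigvalsh(B)
assert abs(np.sum(b**2) - 1.0) < 1e-9    # sum of b_mu^2 = <B^2> = 1
assert np.all(np.abs(b) <= 1.0 + 1e-12)  # hence each |b_mu| <= 1
for n in range(3, 8):
    assert abs(np.sum(b**n)) <= 1.0 + 1e-9  # |<B^n>| <= sum |b_mu|^n <= 1
```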
Using the transformation to polar coordinates we observe that the term
with $n = 2$ in Eq.~(\ref{B7}) gives a Gaussian integral in
$r$. Expanding the terms with $n > 2$ in the exponent in a Taylor
series we are led to consider radial integrals of the form
\begin{equation}
\int {\rm d} r \ r^{N_Q - 1 + 2 k} \exp \{ - c r^2 \} \ .
\label{B12}
\end{equation}
Here $c$ is a constant, and the additional power of $r$ is even (written
as $2 k$) since terms with odd powers vanish. Compared to the leading
term ($k = 0$) these
integrals yield a factor $(N_Q + 2k - 2) (N_Q + 2 k - 4) \times \ldots
\times N_Q$. (We assume for simplicity that $N_Q$ is even). If the
expansion of the exponential converges sufficiently rapidly so that
for $N_Q \sim N^2$ we need consider only terms with $k \ll N_Q$ then
every power of $r$ in the exponential in Eq.~(\ref{B7}) effectively
carries a factor $\sqrt{N_Q}$, and the series in $n$ proceeds
effectively in powers of $N_Q / N^2$. For $N_Q \sim N^2$ that factor
is of order unity.
While it is, thus, not permitted to terminate for $N_Q \sim N^2$ the
series in $n$ in Eq.~(\ref{B7}) with the first few terms, rapid
convergence of the Taylor expansion of the exponential around the
Gaussian form is assured by the following property of the matrix
$B(\Omega)$ defined in Eq.~(\ref{B9}). Each term in the Taylor
expansion of the right--hand side of Eq.~(\ref{B7}) around the
Gaussian form ($n = 2$) generates a factor of the form
\begin{equation}
\int {\rm d} \Omega \ \prod_i \langle B^{k_i}(\Omega) \rangle \ {\rm
where} \ \sum_i k_i = 2 k \ {\rm is \ even \ and \ all} \ k_i
\geq 3 \ .
\label{B13}
\end{equation}
That expression can also be written as
\begin{equation}
\int {\rm d} \Omega \ \prod_i \sum_{\mu = 1}^N b^{k_i}_\mu(\Omega) \ .
\label{B14}
\end{equation}
In magnitude, each eigenvalue $b_\mu(\Omega)$ is bounded by unity. It
is, therefore, safe to expect that on averaging over the
$N_Q$--dimensional unit sphere, each eigenvalue is of order $1 /
\sqrt{N}$ so that the expression in Eq.~(\ref{B14}) is of order $N^{-
k} \Omega(N_Q)$. Here $\Omega(N_Q)$ is the surface of the unit
sphere in $N_Q$ dimensions. That shows that only few terms in the
expansion are expected to contribute significantly even for $N_Q \sim
N^2$. The statement holds {\it a fortiori} for $N_Q \ll N^2$.
\section{Proof}
\label{proof}
To calculate the influence of the constraints in Eq.~(\ref{F8}) on the
spectrum for $N \gg 1$, we use the approach developed in
Ref.~\cite{Hac95} based on the supersymmetry
method~\cite{Efe97,Ver85}. We only sketch the essential steps, using
the definitions and notation of Refs.~\cite{Ver85} and
\cite{Hac95}. The average level density and all correlation functions
are obtained with the help of a generating functional $Z$ which is
written as
\begin{equation}
Z = \int {\rm d} \Psi \big \langle \exp \big \{ \frac{i}{2}
\Psi^{\dag} {\bf L}^{1/2} {\bf G} {\bf L}^{1/2} \Psi \big \} \big
\rangle_{H}
\label{C1}
\end{equation}
where
\begin{equation}
{\bf G} = {\bf H} - {\bf E} + {\bf M} \ .
\label{C1a}
\end{equation}
The average over the ensemble is denoted by angular brackets with an
index $H$ while for the trace we continue to use angular brackets
without index as in Eq.~(\ref{scalar}). The symbol $\Psi$ stands for a
supervector the dimension of which depends on the particular
correlation function under study. The same is true of the matrices
${\bf H}$ (the Hamiltonian), ${\bf E}$ (the energy), and ${\bf
M}$. The matrix ${\bf M}$ is of order $O(N^{-1})$ and contains energy
differences, source terms and possible couplings to channels.
Differentiation with respect to the source terms generates the
particular correlation function of interest.
The invariance of CGUE under unitary transformations implies that the
integrand in Eq.~(\ref{C1}) depends upon $\Psi$ and $\Psi^{\dag}$ only
via the invariant form
\begin{equation}
A_{\alpha \beta} = N^{-1} L^{1/2}_{\alpha \alpha} \sum_{\mu = 1}^N
\Psi_{\mu \alpha} \Psi^{\dag}_{\mu \beta} L^{1/2}_{\beta \beta} \ .
\label{C2}
\end{equation}
Here $\alpha$ and $\beta$ are matrix indices in superspace while $\mu$
runs over the $N$ basis states of Hilbert space ${\cal H}$. We
introduce a supermatrix $\sigma$ with the same dimension and symmetry
properties as $A$ by writing $Z$ as an integral over a delta function,
\begin{equation}
Z = \int {\rm d} \Psi \int {\rm d} \sigma \ \delta(\sigma - A) \big
\langle \exp \big \{ \frac{i}{2} \Psi^{\dag} {\bf L}^{1/2} {\bf G}
{\bf L}^{1/2} \Psi \big \} \big \rangle_{H} \ .
\label{C3}
\end{equation}
The delta function is replaced by its Fourier transform, and the
multiple Gaussian integral over the supervector $\Psi$ is performed to
yield
\begin{equation}
Z = \int {\rm d} \sigma \int {\rm d} \tau \ \exp\big\{ \frac{i}{2} N \
{\rm trg} ( \tau \sigma) \big\} \big \langle \exp \big \{ - \frac{1}{2}
{\rm tr} \ {\rm trg}\ln [{\bf G} - \lambda \tau] \big \} \big
\rangle_{H} \ .
\label{C4}
\end{equation}
The remaining superintegrations over $\tau$ and $\sigma$ are
eventually done for $N \to \infty$ with the help of the saddle--point
approximation. Prior to that step, the average over the ensemble is
performed using Eq.~(\ref{F8}). We first integrate over the unitary
group. To leading order in $N^{-1}$ we obtain
\begin{eqnarray}
&& \big \langle \exp \big \{ - \frac{1}{2} {\rm tr} \ {\rm trg} \ln
[{\bf G} - \lambda \tau] \big \} \big \rangle_{H} = \big \langle \exp
\big \{ - \frac{1}{2} {\rm tr} \ {\rm trg} \ln {\bf D} \big \}
\nonumber \\
&& \qquad \qquad \times \exp \big\{ - \frac{1}{2} {\rm tr} \ {\rm trg}
\ln \big[ 1 + \frac{1}{N} {\rm tr} {\bf D}^{-1} {\bf M} \big] \big \} \big
\rangle_{H} \ .
\label{C5}
\end{eqnarray}
Here ${\bf D} = {\bf x} - {\bf E} - \lambda \tau$ is diagonal in
Hilbert space, and the angular brackets now stand for the remaining
integration over the eigenvalues $\{x_\mu\}$. Under inclusion of the
terms which arise from $\tilde{W}_{\cal P}$ in Eq.~(\ref{F8}), the
exponent of the integrand is given by
\begin{eqnarray}
&& - \frac{1}{2} {\rm tr} \ {\rm trg} \ln {\bf D} - \frac{1}{2} {\rm
tr} \ {\rm trg} \ln \big[ 1 + \frac{1}{N} {\rm tr} {\bf D}^{-1} {\bf M}
\big] \nonumber \\
&& \qquad + 2 \sum_{\mu < \nu} \ln | x_\mu - x_\nu | - \frac{N}{2
\lambda^2} \sum_\mu x^2_\mu + \ln \tilde{F}_{\cal P}(\tilde{H}) \ .
\label{C6}
\end{eqnarray}
Following Ref.~\cite{Hac95}, we perform the eigenvalue integration
using the saddle--point approximation for $N \gg 1$. In
expression~(\ref{C6}), all terms in the first line are at most of
order $N$ while the first two terms in the second line are of order
$N^2$. Omitting the terms in the first line but keeping $\ln
\tilde{F}_{\cal P}(\tilde{H})$ (depending on the value of $N_Q$, that
function may or may not be of order $N^2$), we put the derivatives of
the resulting expression with respect to the $x_\mu$ equal to zero and
obtain the $N$ saddle--point equations
\begin{equation}
x_\mu = \frac{2 \lambda^2}{N} \sum_{\sigma \neq \mu} \frac{1}{x_\mu
- x_\sigma} + \frac{2 \lambda^2}{N} \frac{1}{\tilde{F}_{\cal P}
(\tilde{H})} \frac{\partial \tilde{F}_{\cal P}(\tilde{H})} {\partial
x_\mu} \ , \ \mu = 1, \ldots, N \ .
\label{C7}
\end{equation}
Without the term $\ln \tilde{F}_{\cal P}(\tilde{H})$ in
expression~(\ref{C6}), the saddle--point equations would have taken the
standard GUE form
\begin{equation}
x_\mu = \frac{2 \lambda^2}{N} \sum_{\sigma \neq \mu} \frac{1}{x_\mu
- x_\sigma} \ .
\label{C8}
\end{equation}
To prepare for the treatment of Eq.~(\ref{C7}) we recall how
Eqs.~(\ref{C8}) are solved in Ref.~\cite{Bre78}. The variables $x_\mu /
\lambda$ are replaced by a single dimensionless continuous variable,
$x_\mu / \lambda \to \varepsilon$. The normalized level density $\rho(\varepsilon)$
with $\int {\rm d} \varepsilon \ \rho(\varepsilon) = 1$ of the ensemble (which at this
point is an unknown function) is introduced, and Eq.~(\ref{C8}) is
written in the form
\begin{equation}
\varepsilon = 2 P\int {\rm d} \varepsilon' \ \frac{\rho(\varepsilon')}{\varepsilon - \varepsilon'} \ .
\label{C9}
\end{equation}
Here $P\int$ stands for the principal--value integral. Eq.~(\ref{C9})
has an electrostatic analogue and can be solved using the theory of
analytic functions. The result is Wigner's semicircle law for
$\rho(\varepsilon)$. The generating functional is subsequently taken at the
saddle--point values for the $x_\mu$. All summations over $x_\mu$ in
$Z$ are, thus, replaced by integrations over $\varepsilon$ with $\rho(\varepsilon)$ as
weight function.
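As a quick numerical sanity check (not part of the derivation), one can verify that the semicircle density indeed solves Eq.~(\ref{C9}). The sketch below evaluates the principal-value integral on a midpoint grid; placing the singularity exactly between two nodes makes the symmetric contributions cancel, so the midpoint sum converges to the principal value. The grid size and test points are arbitrary choices.

```python
import numpy as np

# Wigner's semicircle density on [-2, 2] (unit second moment), the
# solution of the saddle-point equation (C9).
def rho(e):
    return np.sqrt(np.maximum(4.0 - e**2, 0.0)) / (2.0 * np.pi)

# Midpoint grid on [-2, 2]; choosing eps on the node boundaries puts the
# singularity exactly between two nodes, so the midpoint sum converges
# to the principal-value integral.
n = 400_000
h = 4.0 / n
nodes = -2.0 + (np.arange(n) + 0.5) * h

def pv_integral(eps):
    # P int d(eps') rho(eps') / (eps - eps')
    return float(np.sum(rho(nodes) / (eps - nodes)) * h)

# Eq. (C9): 2 P int d(eps') rho(eps') / (eps - eps') = eps.
for eps in (-1.0, 0.3, 0.7):
    assert abs(2.0 * pv_integral(eps) - eps) < 1e-3

# The density is normalized to one, as required.
assert abs(float(np.sum(rho(nodes))) * h - 1.0) < 1e-6
print("rho solves Eq. (C9) and is normalized")
```

The same grid trick fails if the singularity coincides with a node, which is why the test points are chosen on the node-boundary lattice.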
In applying that same method to Eqs.~(\ref{C7}) we introduce the (yet
unknown) normalized average level density $\rho_{\cal P}(\varepsilon)$ of the
constrained ensemble in the sum on the right--hand side of
Eq.~(\ref{C7}). We also have to implement the change of variables
$x_\mu / \lambda \to \varepsilon$ in $\tilde{F}_{\cal P}(\tilde{H})$ and its
derivative. As for $\tilde{F}_{\cal P}(\tilde{H})$, this is done by
replacing everywhere in Eq.~(\ref{B7}) the term $(1 / N) \langle (H /
\lambda)^n \rangle$ by $\langle \varepsilon^n \rangle = \int {\rm d} \varepsilon \
\varepsilon^n \rho_{\cal P}(\varepsilon)$. The form of $\tilde{W}(H)$ in Eq.~(\ref{F8})
implies that $\rho_{\cal P}(\varepsilon) = \rho_{\cal P}(-\varepsilon)$ so that only
terms with $n$ even survive, and we obtain
\begin{eqnarray}
\tilde{F}_{\cal P} &=& \bigg( \frac{\langle \varepsilon^2 \rangle}{2 \pi
N} \bigg)^{N_Q / 2} \nonumber \\
&& \times \int {\rm d} t \ \exp \{ \sum_{n \geq 1} (1 / (2 n)) ((-1)^n
/ N^{2 n - 1} ) \langle \varepsilon^{2 n} \rangle \langle B^{2 n}(t) \rangle
\} \ .
\label{C10}
\end{eqnarray}
This is a function of the unknown level density $\rho_{\cal P}$ with a
rapidly converging Taylor expansion around the Gaussian term ($n = 1$).
In the derivative of $\tilde{F}_{\cal P}$, we substitute $x_\mu /
\lambda \to \varepsilon$ after differentiating with respect to $x_\mu$. We
obtain
\begin{eqnarray}
\frac{\lambda}{N} \frac{\partial \tilde{F}_{\cal P}}{\partial x_\mu}
&=& \frac{N_Q \varepsilon}{N^2 \langle \varepsilon^2 \rangle} \tilde{F}_{\cal P} +
\bigg( \frac{\langle \varepsilon^2 \rangle}{2 \pi N} \bigg)^{N_Q / 2}
\sum_{n \geq 1} \frac{(-1)^n \varepsilon^{2 n - 1}}{N^{2 n + 1}} \int {\rm
d} t \ \langle B^{2 n}(t) \rangle \nonumber \\
&& \times \exp \{ \sum_{n \geq 1} (1 / (2 n)) ((-1)^n / N^{2 n - 1} )
\langle \varepsilon^{2 n} \rangle \langle B^{2 n}(t) \rangle \} \ .
\label{C11}
\end{eqnarray}
The right--hand side of Eq.~(\ref{C11}) is a polynomial of odd order
in $\varepsilon$ with rapidly decreasing coefficients.
As a result, the saddle--point equations~(\ref{C7}) take the form
of Eq.~(\ref{C9}), with $\varepsilon$ replaced by an odd--order polynomial in
$\varepsilon$ with rapidly decreasing coefficients. These coefficients depend
on $\rho_{\cal P}(\varepsilon)$; the solution of Eq.~(\ref{C11}) must,
therefore, proceed iteratively, with the GUE average level density as
a starting point for calculating $\langle \varepsilon^n \rangle$. In
Ref.~\cite{Pap06} it was shown perturbatively that $\rho(E)$ and
$\rho_{\cal P}(E)$ differ by a term of order $N_Q / N^2$. Therefore,
we expect the two level densities to differ significantly when $N_Q
\sim N^2$.
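Since the iteration starts from the GUE average level density, the starting values of the moments $\langle \varepsilon^{2n} \rangle$ are those of the semicircle law; for the unit-variance normalization used here these are the Catalan numbers $1, 2, 5, 14, \ldots$ A short check (the grid size is an arbitrary choice):

```python
import numpy as np
from math import comb

# Even moments of the unit-variance semicircle law, the starting values
# <eps^(2n)> of the iteration; they equal the Catalan numbers.
def semicircle_moment(order, grid=1_000_000):
    h = 4.0 / grid
    e = -2.0 + (np.arange(grid) + 0.5) * h          # midpoint grid on [-2, 2]
    dens = np.sqrt(np.maximum(4.0 - e**2, 0.0)) / (2.0 * np.pi)
    return float(np.sum(dens * e**order) * h)

for n in range(1, 5):
    catalan = comb(2 * n, n) // (n + 1)
    assert abs(semicircle_moment(2 * n) - catalan) < 1e-4
print([comb(2 * n, n) // (n + 1) for n in range(1, 5)])   # [1, 2, 5, 14]
```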
Returning to Eq.~(\ref{C6}) we perform the integration over the
variables $x_\mu$ by taking their values at the saddle points. That
means, for instance, that we write
\begin{equation}
\frac{\lambda}{N} \sum_\mu \frac{1}{x_\mu - E - \lambda \tau} \to
\int {\rm d} E' \frac{\rho_{\cal P}(E' / \lambda)}{E' - E - \lambda
\tau} \ .
\label{C12}
\end{equation}
This is the essential step: The summations over the eigenvalues
$x_\mu$ disappear in all expressions in the integrand of $Z$. Each
such summation is replaced by an energy integral involving the level
density $\rho_{\cal P}(\varepsilon)$ of the constrained ensemble. This is the
only place where the constraints show up in the calculation. From here
on the calculation of the correlation functions for the CGUE and that
for the GUE run completely in parallel~\cite{Hac95}. It follows that
all correlation functions of the CGUE have the same form as their GUE
counterparts except that we have to replace the local average level
spacing of the GUE by that of the CGUE. This proves our theorem.
\section{Discussion}
\label{dis}
Our theorem holds in the limit $N \gg 1$ and for $N_Q < N^{\rm
crit}_Q$. The average level density of the CGUE may differ from that
of the GUE but all correlation functions have the same form for both
ensembles. Although our result is perhaps expected, to the best of our
knowledge this is the first time that GUE--type statistics has been
analytically proved for a class of ensembles different from the GUE.
Deviations of order $1 / N$ from the asymptotic form of the GUE
statistics exist, of course, even for the pure GUE and are expected
{\it a fortiori} for the CGUE. Our proof specifically applies to the
case of unitary invariance. We believe, however, that a corresponding
result holds also for the other symmetries.
The proof of the theorem rests on the fact that in the limit $N \gg 1$
and for $N_Q < N^{\rm crit}_Q$, the constraining function
$\tilde{F}_{\rm P}(\tilde{H})$ is free of singularities. The proof
holds independently of any specific properties of the constraining
matrices $B_q$. What happens for $N \gg 1$ but $N_Q \geq N^{\rm
crit}_Q$? That seems to depend on specific properties of the
constraints which determine the eigenvalues $b_\mu(s)$ and, thus, the
convergence properties of the integrals over $s$. Therefore, generic
statements about the spectral fluctuation properties of the CGUE
probably cannot be made for $N_Q \geq N^{\rm crit}_Q$.
One may speculate that with $N_Q$ increasing beyond the value $N^{\rm
crit}_Q$, the spectral fluctuations of the CGUE remain GUE--like until
$N_P = N^2 - N_Q$ is reduced to the value $N_P = N$ (where $H_P =
\sum_p h_p B_p$ may be a linear combination of $N$ commuting matrices
and, thus, integrable). But that speculation is surely incorrect.
Indeed, random band matrices with a band width less than or of order
$\sqrt{N}$ are known~\cite{Cas90,Fyo91} to possess localized
eigenfunctions and a Poisson spectrum. For such matrices, the number
$N_Q$ of constraints is at least of order $N^2 - N \sqrt{N}$ and, for
$N \gg 1$, obviously much larger than $N^{\rm crit}_Q$ but still much
smaller than $N^2 - N$.
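The constraint count for a band matrix can be made explicit. Under the (assumed) convention that band width $w$ means only entries with $|i - j| \leq w$ are non-zero, an $N \times N$ Hermitean matrix has $N + 2 ( N w - w (w + 1)/2 )$ free real parameters, so for $w \sim \sqrt{N}$ the number of constraints is of order $N^2 - N \sqrt{N}$, as stated. A minimal sketch:

```python
from math import isqrt

def n_constraints_band(N, w):
    # Free real parameters of an N x N Hermitean band matrix: N diagonal
    # entries plus two per off-diagonal entry with 0 < j - i <= w.
    free = N + 2 * (N * w - w * (w + 1) // 2)
    return N * N - free          # N_Q = total parameters minus free ones

def n_constraints_band_brute(N, w):
    free = N + 2 * sum(1 for i in range(N) for j in range(i + 1, N) if j - i <= w)
    return N * N - free

# The closed form agrees with direct counting.
for N, w in ((8, 2), (12, 3), (30, 5)):
    assert n_constraints_band(N, w) == n_constraints_band_brute(N, w)

# For w ~ sqrt(N), N_Q is of order N^2 - N * sqrt(N).
N = 10_000
assert n_constraints_band(N, isqrt(N)) >= N * N - 2 * N * isqrt(N)
print(n_constraints_band(N, isqrt(N)))    # 98000100
```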
It is of interest to discuss the embedded random $k$--body ensembles
EGUE($k$) (see Ref.~\cite{Mon75} and the review~\cite{Ben03}) in the
light of these considerations. The EGUE($k$) models a Fermionic
many--body system with $k$--body interactions: $m$ identical spinless
Fermions occupy $l$ degenerate single--particle states. The Hilbert
space is spanned by $N = {l \choose m}$ Slater determinants. To
construct the $k$--body interaction operators, we denote by
$a^\dag_\mu$ and $a^{}_\mu$ the creation and destruction operators for
a Fermion in the single--particle state labelled $\mu$ with $\mu = 1,
\ldots, l$. Let $\mu_1, \ldots, \mu_m$ with $1 \leq \mu_i \leq l$ for
all $i$ denote a set of $m$ non--equal integers, and analogously for
$\nu_1, \ldots, \nu_m$ with $1 \leq \nu_j \leq l$ for all $j$. Then a
general interaction operator has the form $A(\{ \mu_i \}, \{ \nu_j \})
= \prod_{i = 1}^m a^{\dag}_{\mu_i} \prod_{j = 1}^m a^{}_{\nu_j}$. In
the Hilbert space of Slater determinants, the $N^2$ Hermitean
operators $A(\{ \mu_i \}, \{ \nu_j \})$ $+$ $A^\dag(\{ \mu_i \}, \{
\nu_j \})$ and $i [ A( \{ \mu_i \}, \{ \nu_j \})$ $-$ $A^\dag( \{
\mu_i \}, \{ \nu_j \}) ]$ play the very same role as do the matrices
$B_\alpha$ introduced in Section~\ref{def} in the Hilbert space ${\cal
H}$. From the general form of $A$, a $k$--body operator is obtained by
imposing the condition that a subset of $m - k$ elements of the set
$\{ \mu_i \}$ is identically equal to a subset of $m - k$ elements of
the set $\{ \nu_j \}$. There are ${l \choose m} {m \choose k} {l - m
\choose k}$ such $k$--body operators. The EGUE($k$) is obtained by
writing the Hamiltonian as a linear combination of all $k'$--body
operators with $k' \leq k$ and with coefficients that are real random
Gaussian--distributed variables.
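The operator count quoted above can be checked directly; the values $l = 12$, $m = 4$ are those used in the EGUE($2$) example discussed in this section.

```python
from math import comb

# Number of k-body operators: C(l, m) * C(m, k) * C(l - m, k), as quoted
# in the text.
def n_kbody(l, m, k):
    return comb(l, m) * comb(m, k) * comb(l - m, k)

l, m = 12, 4                    # the EGUE(2) example values
N = comb(l, m)                  # dimension of the Slater-determinant space
n_int = n_kbody(l, m, 1) + n_kbody(l, m, 2)   # one- plus two-body terms
N_Q = N * N - n_int             # all remaining parameters are constrained

assert N == 495
assert n_int == (4 * 8 + 6 * 28) * N == 200 * N
assert N_Q == 295 * N
print(N, n_int // N, N_Q // N)  # 495 200 295
```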
Among the EGUE($k$), the EGUE($2$) has received particular attention
because it mimics a Hamiltonian with two--body interactions, a form
typical for Fermionic many--body systems like atoms or nuclei. One of
the central questions (undecided so far) has been whether for $N \gg
1$ the spectral fluctuation properties of EGUE($2$) are GUE--like.
Numerical simulations~\cite{Ben03} suggest that the answer is
affirmative. However, these simulations are typically done for small
values of $l$ and $m$, with $l$ around $12$ and $m$ around $4$ or
so. But for these values the number of one-- plus two--body
interaction terms is ${12 \choose 4} [ 4 \times 8 + 6 \times 28 ] =
200 {12 \choose 4}$. The number of constraints is accordingly given by
$N_Q = {12 \choose 4} [ 495 - 200 ] = {12 \choose 4} \times 295$. That
figure is not much larger than $N^{\rm crit}_Q = {12 \choose 4} \times
247$, so that it is difficult to draw firm conclusions. It would be
more informative to investigate numerically large values of $l$ and
$m$ but that is prohibitively difficult. For $l \gg m \gg 1$ we have
$N \approx l^m$, and the number of independent two--body operators is
approximately $l^m m^2 l^2$. In other words, there are only $m^2 l^2$
non--zero interaction matrix elements in every row and column of the
matrix representation of the Hamiltonian for the EGUE($2$), much fewer
than for a banded random matrix where that number would be
approximately $\sqrt{N} = l^{m / 2}$. Put differently, the number
$N_Q$ of constraints for the EGUE($2$) is much bigger than it is for a
banded random matrix. That fact suggests that mixing in the EGUE($2$)
is weaker than it is for a banded random matrix, and that EGUE($2$)
has Poissonian level statistics. On the other hand, in a banded random
matrix it takes approximately $\sqrt{N}$ different interaction matrix
elements to connect two arbitrary states in Hilbert space. In the
EGUE($2$) that number is only $m / 2$, i.e., less even than $(1 / 2)
\ln N$. This fact suggests that mixing of the states in Hilbert space
is much more efficient for the EGUE($2$) than it is for a banded
random matrix, and the question remains undecided. But the discussion
suggests that for $N_Q \geq N^{\rm crit}_Q$, the form of the
constraints (and not just their sheer number) becomes important in
determining the spectral fluctuation properties of CGUE.
Another frequently used ensemble that simulates the nuclear many--body
system is the two--body random ensemble (TBRE), see
Refs.~\cite{Fre70,Boh71} and the review~\cite{Pap07}. Actually that
ensemble is taken to be invariant under time reversal and, thus, has
orthogonal rather than unitary symmetry. For simplicity we disregard
this fact. The single--particle states belonging to a major shell of
the nuclear shell model are occupied by a number of nucleons. The
resulting Slater determinants are coupled to states with fixed total
spin $J$ and isospin $T$ and are written as $| J T \mu \rangle$. The
running index $\mu$ has a typical range $R$ from several tens ($J$
large) to several thousand or more ($J$ small). Level statistics can
be meaningfully discussed only for large $R$. It is assumed that the
interaction between nucleons is of two--body type. Within a major
shell, the number of independent two--body matrix elements $v_\alpha$
is small (of order $10$ or $10^2$) compared to the large values of $R$
that are of interest. These matrix elements are taken to be
uncorrelated Gaussian--distributed random variables. This defines the
TBRE. For a given set of states $| J T \mu \rangle$, the matrix
representation of the Hamiltonian $H_{\rm TB}$ of the TBRE takes the
form
\begin{equation}
(H_{\rm TB})_{\mu \nu} = \sum_\alpha v_\alpha C^{J T}_{\mu
\nu}(\alpha) \ .
\label{E1}
\end{equation}
The matrices $C^{J T}_{\mu \nu}(\alpha)$ are fixed by the major shell
and by the quantum numbers $J$ and $T$ under consideration but have
some properties in common with matrices drawn from a canonical
random--matrix ensemble. Again, it is of interest whether in the limit
of $R \gg 1$ the TBRE generically obeys GUE (or GOE) level
statistics. Numerical results and semi--analytical arguments both
support such a hypothesis. Unfortunately, the matrices $C^{J T}_{\mu
\nu}(\alpha)$ are not accessible analytically so far. Therefore, it
does not even seem possible to determine the number $N_Q$ of
constraints that would characterize the TBRE matrix~(\ref{E1}), and we
cannot apply our results to that ensemble.
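Since the matrices $C^{J T}_{\mu \nu}(\alpha)$ are not analytically accessible, any numerical experiment has to use stand-ins. The sketch below builds a TBRE-like matrix from Eq.~(\ref{E1}), with fixed random real symmetric matrices playing the role of the $C^{J T}(\alpha)$; this choice is purely an illustrative assumption, not the shell-model construction itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sketch of Eq. (E1): H_TB = sum_alpha v_alpha * C(alpha).  Real symmetric
# random matrices serve as stand-ins for the shell-model matrices
# C^{JT}(alpha), which are not analytically accessible.
R, n_alpha = 50, 10                       # basis size, number of v_alpha
C = [(A + A.T) / 2 for A in rng.standard_normal((n_alpha, R, R))]
v = rng.standard_normal(n_alpha)          # uncorrelated Gaussian v_alpha
H_TB = sum(va * Ca for va, Ca in zip(v, C))

# Time-reversal invariance: H_TB is real symmetric (orthogonal symmetry).
assert np.allclose(H_TB, H_TB.T)
print(H_TB.shape)    # (50, 50)
```

With such stand-ins one can histogram spectra over many draws of the $v_\alpha$, but no conclusion about the true TBRE matrices follows from this.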
The authors of Ref.~\cite{Pap06} considered not only the CGUE but in
addition also what they called ``Deformed Gaussian Ensembles''. Here
the delta functions in Eq.~(\ref{A9a}) are replaced by Gaussians, and
the constraining function $F_{\cal P}(H)$ is everywhere regular.
Following the arguments in Section~\ref{proof} we conclude that the
spectral fluctuations of the deformed ensembles coincide with those of
the GUE. In other words, constraints affect the spectral fluctuation
properties only if they constrain the relevant matrix elements to the
value zero (and not to very small non--zero values). As a consequence,
in the CGUE the transition from GUE to Poisson level statistics is not
a continuous process (where the level statistics would be smoothly
deformed) but actually happens discontinuously. These statements apply
on the ``macroscopic'' level where the values of the coefficients
$h_q$ of the constraining matrices $B_q$ are compared with those of
the $h_p$ multiplying $B_p$. If, on the other hand, the $h_q$ are
measured in units of the mean level spacing (i.e., in effect, on a
scale $1 / \sqrt{N}$ compared to the scale of the $h_p$) then the
transition from GUE to Poisson level statistics is expected to be
smooth and to allow for intermediate forms of the level
statistics. That expectation is supported by transitions between
symmetries like the GOE $\to$ GUE transition, and by many examples of
partially chaotic systems that show intermediate level statistics. We
have not attempted to introduce a correspondingly scaled
parametrization for the deformed ensembles. Such a step would be
meaningful only in the immediate vicinity of the transition point from
GUE to Poisson level statistics. That point is not known analytically,
however.
\section*{Acknowledgments}
ZP thanks the members of the Max-Planck-Institut f{\"u}r Kernphysik in
Heidelberg for their hospitality and support, acknowledges support
by the Czech Science Foundation in Prague under grant no. 202/09/0084,
and thanks J. Kvasil and P. Cejnar for stimulating discussions.
\section*{Appendix: Constraining matrices with non--zero traces}
It is convenient to relabel the indices $q$ so that they run from $1$
to $N_Q$. We use Eq.~(\ref{F1}) and rotate the basis $B_q \to
\tilde{B}_q = \sum_{q'} O_{q q' } B_{q'}$ with the help of an orthogonal
transformation $O_{q q'}$ such that $\tilde{B}_1$ points in the
direction of the unit matrix ${\bf 1}_N$. Then,
\begin{eqnarray}
\langle \tilde{B}_q | \tilde{B}_{q'} \rangle &=& \delta_{q q'} \ {\rm
for \ all} \ q,q' \ , \nonumber \\
\langle \tilde{B}_q \rangle &=& 0 \ {\rm for \ all} \ q > 1
\ .
\label{AB2}
\end{eqnarray}
We apply the same orthogonal transformation to the variables $t_q$ so
that $t_q \to \tilde{t}_q = \sum_{q'} O_{q q' } t_{q'}$ and obtain
\begin{equation}
F_{\cal P}(H) = \bigg( \frac{\lambda^2}{2 \pi N} \bigg)^{N_Q/2} \int
\prod_q {\rm d} \tilde{t}_q \int {\rm d} [U] \ \exp \big( i \sum_q
\tilde{t}_q \langle \tilde{B}_q | U H U^{\dag} \rangle \big) \ .
\label{AB1a}
\end{equation}
We write $\tilde{B}_1$ as the sum of a traceless part and of a multiple
of the unit matrix,
\begin{equation}
\tilde{B}_1 = \bigg( \tilde{B}_1 - \langle \tilde{B}_1 \rangle \frac{{\bf
1}_N}{N} \bigg) + \langle \tilde{B}_1 \rangle \frac{{\bf 1}_N}{N} \ .
\label{AB3}
\end{equation}
By construction, the traceless part is orthogonal to all the
$\tilde{B}_q$'s with $q > 1$ and has norm
\begin{equation}
\langle \tilde{B}_1 - \langle \tilde{B}_1 \rangle \frac{{\bf 1}_N}{N} |
\tilde{B}_1 - \langle \tilde{B}_1 \rangle \frac{{\bf 1}_N}{N} \rangle =
1 - \frac{1}{N} \langle \tilde{B}_1 \rangle^2 = \alpha^2 \ .
\label{AB4}
\end{equation}
Because of the first of Eqs.~(\ref{AB2}), the eigenvalues $b_{1 \mu}$
with $\mu = 1, \ldots, N$ of $\tilde{B}_1$ obey $\sum_\mu b^2_{1 \mu}
= 1$. We maximize $\langle \tilde{B}_1 \rangle= \sum_\mu b_{1 \mu}$
under that constraint and find that
\begin{equation}
- \sqrt{N} \leq \langle \tilde{B}_1 \rangle \leq \sqrt{N} \ .
\label{AB5}
\end{equation}
Therefore,
\begin{equation}
0 \leq \alpha^2 \leq 1 \ .
\label{AB6}
\end{equation}
We define $\alpha$ as the positive root of $\alpha^2$ and write
Eq.~(\ref{AB3}) in the form
\begin{equation}
\tilde{B}_1 = \alpha \hat{B}_1 + \langle \tilde{B}_1 \rangle
\frac{{\bf 1}_N}{N} \ .
\label{AB3a}
\end{equation}
Then, $\hat{B}_1$ has trace zero and norm one. We define $\hat{B}_q =
\tilde{B}_q$ for all $q > 1$ and have
\begin{equation}
\langle \hat{B}_q \rangle = 0 \ {\rm for \ all} \ q \ {\rm and} \
\langle \hat{B}_q | \hat{B}_{q'} \rangle = \delta_{q q'} \ {\rm for
\ all} \ q,q' \ .
\label{AB7}
\end{equation}
For $\alpha = 0$ the matrix $\tilde{B}_1$ is a multiple of the unit
matrix, and the integral over $\tilde{t}_1$ in Eq.~(\ref{AB1a}) yields
a multiple of the delta function for $\langle H \rangle$ while the
remaining $N_Q - 1$ integrations over the $\tilde{t}_q$ with $q > 1$
are treated as in Section~\ref{asym}. For $\alpha = 1$ the matrix
$\tilde{B}_1$ is actually traceless; that case was treated in
Section~\ref{asym}. Therefore, we consider $\alpha$ only in the open
interval
\begin{equation}
0 < \alpha < 1 \ .
\label{AB6a}
\end{equation}
We rewrite Eq.~(\ref{AB1a}) by using the decomposition~(\ref{AB3a}), by
rescaling $\alpha \tilde{t}_1 \to \tilde{t}_1$, by introducing
spherical polar coordinates $\{ t, \Omega\}$ in $N_Q$ dimensions, and
by defining the matrix $B(\Omega)$ as
\begin{equation}
t B(\Omega) = \sum_q \tilde{t}_q \hat{B}_q \ .
\label{AB8}
\end{equation}
The function $F_{\cal P}(H)$ takes the form
\begin{eqnarray}
F_{\cal P}(H) &=& \bigg( \frac{\lambda^2}{2 \pi N} \bigg)^{N_Q / 2}
\frac{1}{\alpha} \int {\rm d} t \ t^{N_Q - 1} {\rm d} \Omega \exp \big(
i \tilde{t}_1(t, \Omega) \langle \tilde{B}_1 \rangle \langle H \rangle /
(\alpha N) \big) \nonumber \\
&& \times \int {\rm d} [U] \ \exp \big( i t \langle B(\Omega) | U H
U^{\dag} \rangle \big) \label{AB1b} \ .
\end{eqnarray}
Here $\tilde{t}_1(t, \Omega)$ stands for the rescaled old integration
variable $\tilde{t}_1$ as expressed in terms of the new integration
variables $\{t, \Omega\}$. The function $F_{\cal P}(H)$ diverges for
$\langle H \rangle \to 0$. The divergence is removed by multiplying
$F_{\cal P}(H)$ with $[\langle H / \lambda \rangle^2]^{N_Q / 2}$.
As in the case of the substitution $F_{\cal P} \to \tilde{F}_{\cal P}$
in Section~\ref{mod}, we expect that this step does not affect the
spectral fluctuations of the ensemble. The matrix $B(\Omega)$ is
traceless by construction, and the integration over the unitary group
can be carried out as in Section~\ref{asym}. From here on we proceed
as in Section~\ref{proof}.
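The basis rotation of this appendix is easy to check numerically. The sketch below orthonormalizes a few random Hermitean matrices with respect to $\langle A | B \rangle = {\rm tr} (A B)$ (assumed to be the inner product used here, with $\langle B \rangle = {\rm tr}\, B$), rotates the basis so that $\tilde{B}_1$ carries the full component along ${\bf 1}_N$, and verifies Eqs.~(\ref{AB2}) and (\ref{AB4})--(\ref{AB6}). The dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
N, NQ = 6, 4

def inner(a, b):                 # <A|B> = tr(AB), assumed inner product
    return np.trace(a @ b).real

# A few random Hermitean matrices, Gram-Schmidt orthonormalized.
B = []
for _ in range(NQ):
    A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    A = (A + A.conj().T) / 2
    for Bq in B:
        A = A - inner(Bq, A) * Bq
    B.append(A / np.sqrt(inner(A, A)))

# Orthogonal rotation O whose first row is parallel to t_q = tr(B_q)/sqrt(N),
# so that B~_1 carries the full component along the unit matrix.
t = np.array([np.trace(Bq).real / np.sqrt(N) for Bq in B])
M = np.column_stack([t, rng.standard_normal((NQ, NQ - 1))])
O = np.linalg.qr(M)[0].T
Bt = [sum(O[q, qp] * B[qp] for qp in range(NQ)) for q in range(NQ)]

# Eq. (AB2): orthonormality survives, and tr(B~_q) = 0 for all q > 1.
for q in range(NQ):
    for qp in range(NQ):
        assert abs(inner(Bt[q], Bt[qp]) - (q == qp)) < 1e-8
    if q > 0:
        assert abs(np.trace(Bt[q]).real) < 1e-8

# Eqs. (AB4)-(AB6): alpha^2 = 1 - tr(B~_1)^2 / N lies in [0, 1].
alpha2 = 1.0 - np.trace(Bt[0]).real ** 2 / N
assert -1e-12 <= alpha2 <= 1.0 + 1e-12
print("alpha^2 =", round(alpha2, 6))
```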
\section*{References}
| {
"timestamp": "2009-01-28T13:55:54",
"yymm": "0901",
"arxiv_id": "0901.4389",
"language": "en",
"url": "https://arxiv.org/abs/0901.4389",
"abstract": "We investigate the spectral fluctuation properties of constrained ensembles of random matrices (defined by the condition that a number N(Q) of matrix elements vanish identically; that condition is imposed in unitarily invariant form) in the limit of large matrix dimension. We show that as long as N(Q) is smaller than a critical value (at which the quadratic level repulsion of the Gaussian unitary ensemble of random matrices may be destroyed) all spectral fluctuation measures have the same form as for the Gaussian unitary ensemble.",
"subjects": "Mathematical Physics (math-ph)",
"title": "Spectral fluctuation properties of constrained unitary ensembles of Gaussian-distributed random matrices"
} |
https://arxiv.org/abs/1407.8336 | Induced Matchings in Graphs of Maximum Degree 4 | For a graph $G$, let $\nu_s(G)$ be the induced matching number of $G$. We prove the sharp bound $\nu_s(G)\geq \frac{n(G)}{9}$ for every graph $G$ of maximum degree at most $4$ and without isolated vertices that does not contain a certain blown up $5$-cycle as a component. This result implies a consequence of the well known conjecture of Erdős and Nešetřil, saying that the strong chromatic index $\chi_s'(G)$ of a graph $G$ is at most $\frac{5}{4}\Delta(G)^2$, because $\nu_s(G)\geq \frac{m(G)}{\chi_s'(G)}$ and $n(G)\geq \frac{2m(G)}{\Delta(G)}$. Furthermore, it is shown that there is a polynomial-time algorithm that computes induced matchings of size at least $\frac{n(G)}{9}$. |
\section{Introduction}
For a graph $G$, a set $M$ of edges is an \emph{induced matching} of $G$ if
no two edges in $M$ have a common endvertex and no edge of $G$ joins two edges in $M$.
The maximum number of edges that form an induced matching in $G$ is the {\it strong matching number $\nu_s(G)$ of $G$}.
Unlike the well investigated matching number \cite{lopl}, which can be determined in polynomial time \cite{ed},
it is known that the computation of the strong matching number is NP-hard even in very restricted graph classes as for example bipartite subcubic graphs \cite{ca1,lo,stva}.
The chromatic index $\chi'(G)$ and the strong chromatic index $\chi_s'(G)$ are the least numbers $k$
such that the edge set of $G$ can be partitioned in $k$ matchings and $k$ strong matchings, respectively.
While Vizing's Theorem gives $\chi'(G)\in\{\Delta(G),\Delta(G)+1\}$ \cite{vi}
where $\Delta(G)$ is the maximum degree of $G$,
no comparable result holds for the strong chromatic index.
In fact, Erd{\H{o}}s and Ne{\v{s}}et{\v{r}}il \cite{fashgytu2} conjectured $\chi_s'(G)\leq \frac{5}{4}\Delta(G)^2$,
which would be best-possible for even maximum degree,
as shown by the graph obtained from a $5$-cycle by replacing every vertex by an independent set of order $\frac{\Delta(G)}{2}$.
In the case $\Delta(G)=4$, we denote this graph by $C_5^2$.
A simple greedy algorithm only gives $\chi_s'(G)\leq 2\Delta(G)^2-2\Delta(G)+1$,
and the best general result is due to Molloy and Reed who proved $\chi_s'(G)\leq 1.998\Delta(G)^2$ for sufficiently large maximum degree \cite{more}.
For subcubic graphs, Erd{\H{o}}s and Ne{\v{s}}et{\v{r}}il's conjecture was verified, to be precise $\chi_s'(G)\leq 10$ \cite{an, hoqitr}.
For $\Delta(G)=4$, Erd{\H{o}}s and Ne{\v{s}}et{\v{r}}il's conjecture claims $\chi_s'(G)\leq 20$ while the best known upper bound is $22$ \cite{cr}.
If $\Delta(G)\leq 4$, then their conjecture implies $\nu_s(G)\geq \frac{m(G)}{20}$.
In the present paper, I prove this consequence by showing $\nu_s(G)\geq \frac{n(G)}{10}$ if $\Delta(G)\leq 4$ and $G$ has no isolated vertices; note that $\frac{m(G)}{20}\leq \frac{\Delta(G)n(G)}{40}\leq \frac{n(G)}{10}$.
Furthermore, if $G$ does not contain $C_5^2$ as a component, then the result can be strengthened to $\nu_s(G)\geq \frac{n(G)}{9}$.
Both results are best possible.
Moreover, since the proof is constructive, it is easy to extract a polynomial-time algorithm which computes induced matchings of the guaranteed size.
For subcubic planar graphs, Kang, Mnich and M\"uller \cite{kamnmu} showed that $\nu_s(G)\geq \frac{m(G)}{9}$.
This was generalized by Rautenbach et al.~\cite{jorasa}, who proved $\nu_s(G)\geq \frac{m(G)}{9}$ for every subcubic graph $G$ without $K_{3,3}^+$ as a component
where $K_{3,3}^+$ is obtained from a $5$-cycle by replacing the vertices by independent sets of orders $1,1,1,2,$ and $2$, respectively;
equivalently, $K_{3,3}^+$ is obtained from a $K_{3,3}$ by subdividing exactly one edge once.
In particular, $\nu_s(G)\geq \frac{m(G)}{9}=\frac{n(G)}{6}$ for a cubic graph $G$.
Recently, I proved $\nu_s(G)\geq \frac{n(G)}{(\lceil\frac{\Delta}{2}\rceil+1) (\lfloor\frac{\Delta}{2}\rfloor+1)}$
if $G$ is a graph of sufficiently large maximum degree $\Delta$ and without isolated vertices \cite{jo}.
Our main result is the following.
\begin{theorem}\label{result four detail}
If $G$ is a graph of maximum degree at most $4$,
then
\begin{align*}\label{result4}
\nu_s(G)\geq \frac{n(G)-i(G)-n_5(G)}{9}
\end{align*}
where $n_5(G)$ is the number of components of $G$ that are isomorphic to $C_5^2$
and $i(G)$ is the number of isolated vertices of $G$.
\end{theorem}
\noindent
Let $G$ be a graph with $\Delta(G)\leq 4$.
Since the graph $C_5^2$ has order $10$,
we obtain $n_5(G)\leq \frac{n(G)-i(G)}{10}$.
Moreover,
$\Delta(G)\leq 4$ implies $n(G)-i(G)\geq \frac{m(G)}{2}$.
Therefore, $\nu_s(G) \geq\frac{m(G)}{20}$ and if $n_5(G)=0$, then $\nu_s(G) \geq\frac{n(G)-i(G)}{9}$ and hence $\nu_s(G) \geq \frac{m(G)}{18}$.
In view of the graph $C_5^2$ and the graph obtained from a triangle by attaching two pendent vertices at every vertex, respectively,
Theorem \ref{result four detail} is best-possible.
However, I was not able to construct a graph $G$ without a component isomorphic to $C_5^2$ such that $\nu_s(G)= \frac{m(G)}{18}$.
Let $H$ be the graph obtained from a $5$-cycle by replacing the vertices by independent sets of orders $1,1,1,3,$ and $3$, respectively.
If $G$ is the graph obtained from two disjoint copies of $H$ by identifying the unique vertices of degree~$2$ in the two copies,
then $m(G)=34$ and $\nu_s(G)=2=\frac{m(G)}{17}$.
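Both extremal examples mentioned above can be verified by brute force. The sketch below computes $\nu_s$ for small graphs directly from the definition (feasible only for very few edges, since it enumerates edge subsets).

```python
from itertools import combinations

def nu_s(n, edges):
    """Brute-force strong matching number: largest set of pairwise
    independent edges (no shared endvertex, no edge of G joining them)."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    def indep(e, f):
        return all(a != b and b not in adj[a] for a in e for b in f)
    best = 0
    for k in range(1, len(edges) + 1):
        if any(all(indep(e, f) for e, f in combinations(M, 2))
               for M in combinations(edges, k)):
            best = k
        else:
            break
    return best

# C_5^2: each vertex of a 5-cycle blown up into an independent set of
# size 2; it attains the n/10 bound.
c52 = [(2 * i + a, 2 * ((i + 1) % 5) + b)
       for i in range(5) for a in (0, 1) for b in (0, 1)]
assert nu_s(10, c52) == 1

# Triangle with two pendent vertices attached to every vertex: order 9,
# nu_s = 1 = n/9, so the n/9 bound is sharp.
tri = [(0, 1), (1, 2), (2, 0), (0, 3), (0, 4), (1, 5), (1, 6), (2, 7), (2, 8)]
assert nu_s(9, tri) == 1
print(nu_s(10, c52), nu_s(9, tri))    # 1 1
```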
\begin{conjecture}
If $G$ is a graph of maximum degree at most $4$ and no component is isomorphic to $C_5^2$,
then $\nu_s(G)\geq \frac{m(G)}{17}$.
\end{conjecture}
\noindent
We use standard notation and terminology.
For a graph $G$, let $V(G)$ and $E(G)$ be its vertex set and edge set, respectively.
Let the \textit{order} and the \textit{size} of $G$ be defined by $|V(G)|$ and $|E(G)|$, respectively.
For a vertex $v$ of $G$, let $d_G(v)$ be its degree,
$N_G(v)$ be the set of neighbors of $v$, and
$N_G[v]=N_G(v)\cup \{v\}$.
If the graph is
clear from the context, we only write $d(v)$, $N(v)$, and $N[v]$, respectively.
If $d_G(v)=k$ holds for a non-negative integer $k$, then we say that $v$ is a \textit{degree}-$k$ vertex in $G$.
A set $I$ of vertices of $G$ is \textit{independent} if no edge of $G$ joins two vertices in $I$.
Two edges $e$ and $f$ are independent if they do not share a common vertex and no edge of $G$ joins an endvertex of $e$ to an endvertex of $f$.
The rest of the paper is devoted to the proof of Theorem \ref{result four detail}.
\section{Proof of Theorem \ref{result four detail}}
For a contradiction, we assume that $G$ is a counterexample of minimum order.
Since the statement of the theorem is linear in terms of the components, $G$ is connected.
It is easy to see that $n_5(G)=i(G)=0$.
By a sequence of claims, we establish several properties of $G$ in order to derive a final contradiction.
All claims follow a common pattern.
We mark particular (pairwise independent) edges and delete the set $S$ of all vertices of $G$ at distance at most $1$ from these edges.
We denote the resulting graph by $G'$.
Note that $n_5(G')=0$.
By the choice of $G$, we know that $\nu_s(G')\geq \frac{1}{9}(n(G')-i(G'))$.
Afterwards, we obtain a contradiction by considering a maximum induced matching of $G'$ together with the marked edges of $G$;
we only have to show that $|S|+i(G')\leq 9k$ where $k$ is the number of marked edges.
In all our cases $k$ is $1$ or $2$.
Throughout the proof we denote by $I'$ the set of isolated vertices of $G'$.
Note that a vertex in $I'$ has all its neighbors in $S$.
\begin{claim}\label{c1}
If $v$ is a vertex of degree at least $2$,
then $v$ is adjacent to at least two vertices of degree at least $2$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c1}]
For a contradiction, we assume that $v$ is adjacent to at most one vertex $w$ of degree at least $2$.
If $w$ does not exist, then $G$ is a path of order $3$, which is a contradiction.
Thus we may assume that $w$ exists.
Let $u$ be a degree-$1$ vertex adjacent to $v$ and we mark the edge $uv$.
Recall that $S$ is the set of vertices of $G$ that are at distance at most $1$ to some marked edge.
Thus $|S|\leq 5$.
Moreover,
all isolated vertices of $G'=G-S$ are adjacent to $w$.
This implies that $|S|+i(G')\leq 8$, which is a contradiction and completes the proof of Claim \ref{c1}.
\end{proof}
\begin{claim}\label{c2}
If $u_1$ and $u_2$ are distinct degree-$1$ vertices,
then $u_1$ and $u_2$ do not have a common neighbor.
\end{claim}
\begin{proof}[Proof of Claim \ref{c2}]
Assume for a contradiction that $v$ is the common neighbor of $u_1$ and $u_2$.
By Claim \ref{c1},
$v$ has degree $4$.
Let $w_1$ and $w_2$ be the neighbors of $v$ beside $u_1$ and $u_2$.
We mark the edge $u_1v$.
This implies that $|S|=5$.
By Claim \ref{c1}, $w_1$ and $w_2$ are adjacent to at most two degree-$1$ vertices;
that is, if $i(G')\geq 5$,
then $i(G')=5$, $w_1$ and $w_2$ are adjacent to two degree-$1$ vertices, respectively, and there is a degree-$2$ vertex adjacent to both $w_1$ and $w_2$;
thus $G$ is a graph of order $10$ and $\nu_s(G)=2$, which is a contradiction.
Therefore, $i(G')\leq 4$ and hence $|S|+i(G')\leq 9$, which is a contradiction.
\end{proof}
\begin{claim}\label{c3}
If $u_1$ and $u_2$ are degree-$1$ vertices,
then ${\rm dist}(u_1,u_2)\not=4$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c3}]
Assume for a contradiction that ${\rm dist}(u_1,u_2)=4$ and let $v_i$ be the neighbor of $u_i$ for $i\in\{1,2\}$.
We mark $u_1v_1$ and $u_2v_2$ and hence $|S|\leq 9$.
Note that, by Claim \ref{c2}, there are at most five degree-$1$ vertices in $V(G)\setminus S$ adjacent to a vertex in $S$.
Furthermore, there are at most $14$ edges joining $S$ and vertices of $G'$.
Thus $i(G')\leq 9$ and hence $|S|+i(G')\leq 18$.
\end{proof}
\begin{claim}\label{c4}
$\delta(G)\geq 2$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c4}]
Assume for a contradiction that there is a degree-$1$ vertex $u$ in $G$ and let $v$ be its neighbor.
We mark $uv$.
If $d_G(v)\leq 3$, then Claim \ref{c2} immediately implies $|S|+i(G')\leq 8$.
Thus we may assume that $w_1,w_2,w_3$ are the neighbors of $v$ beside $u$ and hence $|S|=5$.
Suppose $\{w_1,w_2,w_3\}$ is an independent set in $G$.
By Claims \ref{c2} and \ref{c3}, there is at most one degree-$1$ vertex in $V(G)\setminus S$ adjacent to a vertex in $S$.
Since there are at most nine edges joining $S$ and vertices of $G'$,
we have $i(G')\geq 5$ only if $G$ is a graph of order $10$ and exactly one degree-$1$ vertex and four degree-$2$ vertices are adjacent to $w_1,w_2,w_3$;
this implies that $\nu_s(G)\geq 2\geq \frac{n(G)}{9}$, which is a contradiction.
Suppose now that $G[\{w_1,w_2,w_3\}]$ contains an edge but is not a triangle.
By Claims \ref{c2} and \ref{c3}, there are at most two degree-$1$ vertices in $V(G)\setminus S$ adjacent to a vertex in $S$.
Since there are at most seven edges joining $S$ and vertices of $G'$,
we conclude $i(G')\leq 4$.
Suppose now that $G[\{w_1,w_2,w_3\}]$ is a triangle.
By Claim \ref{c3}, there are at most three degree-$1$ vertices in $V(G)\setminus S$ adjacent to a vertex in $S$.
Since there are at most three edges joining $S$ and vertices of $G'$,
we conclude $i(G')\leq 3$.
\end{proof}
\begin{claim}\label{c5}
Degree-$2$ vertices are adjacent only to degree-$4$ vertices.
\end{claim}
\begin{proof}[Proof of Claim \ref{c5}]
We assume for a contradiction that $u$ is a degree-$2$ vertex and $v$ is a neighbor of $u$ such that $d_G(v)\leq 3$.
We mark $uv$ and hence $|S|\leq 5$.
This implies that at most nine edges join $S$ and vertices of $G'$.
By Claim \ref{c4}, this implies $i(G')\leq 4$.
\end{proof}
\begin{claim}\label{c6}
Every degree-$4$ vertex is adjacent to at most two degree-$2$ vertices.
\end{claim}
\begin{proof}[Proof of Claim \ref{c6}]
Assume for a contradiction that there is a degree-$4$ vertex $v$ which has at least three neighbors of degree $2$.
Let $u$ be one of these neighbors and mark $uv$.
Thus $|S|\leq 6$.
Note that at most eight edges join $S$ and $V(G)\setminus S$.
If $i(G')\geq 4$, then all isolated vertices of $G'$ have degree $2$ in $G$.
Thus a degree-$2$ vertex of $S$ and a degree-$2$ vertex of $I'$ share a common edge, which contradicts Claim \ref{c5}.
Thus we may assume that $i(G')\leq 3$ and so $|S|+i(G')\leq 9$, which is a contradiction.
\end{proof}
\begin{claim}\label{c7}
Every degree-$4$ vertex is adjacent to at most one degree-$2$ vertex.
\end{claim}
\begin{proof}[Proof of Claim \ref{c7}]
Assume for a contradiction that there is a degree-$4$ vertex $v$ with two neighbors $u_1,u_2$ of degree $2$.
Let $w$ be the neighbor of $u_1$ beside $v$.
We mark $u_1v$ and hence $|S|\leq 6$.
If $|S|\leq 5$, then there are at most six edges joining $S$ and $V(G)\setminus S$ and hence $i(G')\leq 3$.
Thus we may assume $|S|=6$.
For a contradiction, we assume that $i(G')\geq 4$.
Note that at most $10$ edges join $S$ and $I'$.
Suppose $u_2$ is adjacent to a vertex in $I'$, then, by Claim \ref{c5}, $I'$ contains a vertex of degree $4$.
Thus $I'$ contains three degree-$2$ vertices.
However, $w$ is adjacent to two of them and hence in total adjacent to at least three degree-$2$ vertices, which is a contradiction to Claim \ref{c6}.
Thus we may assume that $u_2$ is not adjacent to a vertex in $I'$.
Note that $I'$ either contains four degree-$2$ vertices or three degree-$2$ vertices and one degree-$3$ vertex.
In both cases $w$ is adjacent to at least two degree-$2$ vertices in $I'$, which is a contradiction to Claim \ref{c6}.
\end{proof}
\begin{claim}\label{c8}
$\delta(G)\geq 3$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c8}]
For a contradiction, we assume that there is a degree-$2$ vertex $u$ and if possible choose $u$ to be contained in a $C_4$.
Let $v,w$ be the neighbors of $u$.
We mark $uv$ and hence $|S|\leq 6$.
If $|S|\leq 5$, then at most eight edges join $S$ and $I'$, which implies that $i(G')\leq 4$, which is a contradiction.
Thus we may assume $|S|=6$.
If the graph $G[S]$ has size at least $7$,
then there are at most eight edges joining $S$ and $I'$ and because $w$ is only adjacent to vertices of degree at least $3$ (Claim \ref{c7}),
we obtain $i(G')\leq 3$, which is a contradiction.
Suppose now that $G[S]$ is a graph of size $6$.
Hence at most $10$ edges join $S$ and $I'$.
Assume for contradiction that $i(G')\geq 4$.
If $i(G')\geq 5$, then Claim \ref{c7} yields the contradiction.
Thus we assume that $i(G')=4$ and hence $I'$ contains at least two degree-$2$ vertices $x,y$.
By Claim \ref{c7}, $x,y$ have distinct neighbors in $S$ and hence $w$ is adjacent to $x$ or $y$.
However, $w$ is adjacent to $u$, which is a contradiction to Claim \ref{c7}.
Thus we may assume that $G[S]$ is a graph of size $5$; that is, $G[S]$ is a tree and thus $u$ is not contained in a $C_4$.
Moreover, by our choice of $u$, no degree-$2$ vertex is contained in a $C_4$.
This implies that $I'$ contains no degree-$2$ vertex
because such a vertex cannot be adjacent to $w$ (Claim \ref{c7}) and if both neighbors in $S$ are distinct from $w$, then it is contained in a $C_4$.
Note that at most $12$ edges join $S$ and $I'$.
If $i(G')\geq 4$, then $i(G')=4$ and all vertices in $I'$ are degree-$3$ vertices and all vertices in $S\setminus\{u,v\}$ are degree-$4$ vertices.
Thus $n(G)=10$ and $\nu_s(G)\geq 2$, which is a contradiction.
\end{proof}
\begin{claim}\label{c9}
The set of degree-$3$ vertices is an independent set.
\end{claim}
\begin{proof}[Proof of Claim \ref{c9}]
Assume for a contradiction that two degree-$3$ vertices $u,v$ are adjacent.
We mark $uv$.
Note that $|S|\leq 6$ and
at most $12$ edges join $S$ and $I'$.
If $|S|+i(G')\geq 10$, then $|S|=6$, $i(G')=4$, all vertices in $I'$ and $S\setminus\{u,v\}$ are degree-$3$ and degree-$4$ vertices, respectively, and $n(G)=10$.
It is easy to see that $\nu_s(G)\geq 2$, which is a contradiction.
\end{proof}
\begin{claim}\label{c10}
No degree-$3$ vertex is contained in a triangle.
\end{claim}
\begin{proof}[Proof of Claim \ref{c10}]
Assume for a contradiction that a degree-$3$ vertex $u$ is contained in a triangle $uvwu$.
We mark $uv$.
Note that $|S|\leq 6$ and at most $11$ edges join $S$ and $I'$.
This implies that $i(G')\leq 3$.
\end{proof}
\begin{claim}\label{c11}
$G$ is not a graph of order $10$ and minimum degree $3$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c11}]
We show that $\nu_s(G)\geq 2$ holds for every connected graph $G\not=C_5^2$ of order $10$ with $\delta(G)=3$ such that the set of degree-$3$ vertices forms an independent set and no degree-$3$ vertex is contained in a triangle.
Since the number of degree-$3$ vertices is even, we suppose first that there are two degree-$3$ vertices $u_1,u_2$.
If ${\rm dist}(u_1,u_2)\geq 4$, then $\nu_s(G)\geq 2$ trivially holds.
Note that, for every edge $xy$, we may assume that $G-(N[x]\cup N[y])$ is edgeless; otherwise $xy$ together with any edge of $G-(N[x]\cup N[y])$ forms an induced matching of size $2$.
Let $v_1, v_2, v_3$ be the neighbors of $u_1$.
Suppose ${\rm dist}(u_1,u_2)= 3$.
Let $w_1,w_2,w_3$ be the neighbors of $v_1$ beside $u_1$.
We mark $u_1v_1$ and thus $S=N[u_1]\cup N[v_1]$.
Since ${\rm dist}(u_1,u_2)= 3$, we conclude $u_2\notin S$ and hence $N(u_2)=\{w_1,w_2,w_3\}$.
Since $\{u_2w_i , u_1v_j\}$ must not be an induced matching of size two for any $i\in \{1,2,3\}$ and $j\in \{2,3\}$,
we conclude that both $v_2$ and $v_3$ are adjacent to $w_1,w_2,w_3$.
Thus at most six edges leave $S$ but exactly $10$ edges leave $V(G)\setminus S$ towards $S$, which is a contradiction.
Suppose ${\rm dist}(u_1,u_2)= 2$.
By symmetry, let $v_1$ be a common neighbor of $u_1$ and $u_2$.
We mark $u_1v_1$ and thus $S=N[u_1]\cup N[v_1]$.
Note that $V(G)\setminus S$ is a set of three degree-$4$ vertices in $G$.
Hence, by using that there are $12$ edges leaving $V(G)\setminus S$, there is exactly one edge within $S\setminus \{u_1,v_1\}$.
By symmetry, we assume that $v_2$ has only neighbors in $V(G)\setminus S$; that is, $v_2$ is adjacent to all vertices in $V(G)\setminus S$.
Moreover, $u_2$ has at least one non-neighbor $w$ in $V(G)\setminus S$.
This implies that $\{u_2v_1,v_2w\}$ is an induced matching of size $2$.
This completes the case that $G$ contains exactly two degree-$3$ vertices.
Next, we suppose that $G$ contains exactly four degree-$3$ vertices $u_1,\ldots,u_4$.
Suppose there is a vertex $v_1$ that is adjacent to $u_1,\ldots,u_4$.
We mark $u_1v_1$.
Since $12$ edges leave $V(G)\setminus S$, the graph $G[S]$ is a tree.
Let $w$ be a non-neighbor of $u_2$ in $V(G)\setminus S$ and $v_2$ be a neighbor of $u_1$ beside $v_1$.
Since $w$ has four neighbors, we conclude $wv_2\in E(G)$.
This implies that $\{u_2v_1,v_2w\}$ is an induced matching of size $2$.
Suppose there is a vertex $v_1$ that is adjacent to exactly three degree-$3$ vertices, say $u_1,u_2,u_3$.
Let $v_2,v_3$ be the neighbors of $u_1$ beside $v_1$.
We mark $u_1v_1$.
Hence $u_4$ is contained in the independent set $V(G)\setminus S$ and adjacent to $v_2,v_3$ and the degree-$4$ neighbor of $v_1$.
Hence, by using that there are $11$ edges leaving $V(G)\setminus S$, there is exactly one edge within $S\setminus \{u_1,v_1\}$.
By symmetry, we assume that $u_2v_2\notin E(G)$.
This implies that $\{u_2v_1,u_4v_2\}$ is an induced matching of size $2$.
Suppose there is a vertex $v_1$ that is adjacent to exactly two degree-$3$ vertices, say $u_1,u_2$, but no vertex is adjacent to more than two degree-$3$ vertices.
This implies that $u_3,u_4\in V(G)\setminus S$.
Let $v_2,v_3$ be the neighbors of $u_1$ beside $v_1$.
We mark $u_1v_1$.
Since no degree-$3$ vertex is contained in a triangle and $\{u_1,\ldots,u_4\}$ is an independent set,
$u_2$ is adjacent to $v_2$ or $v_3$; by symmetry, say $v_2$.
Because there are $10$ edges leaving $V(G)\setminus S$,
there are exactly two edges within $S\setminus \{u_1,v_1\}$.
Thus there is a degree-$4$ neighbor $x$ of $v_1$ that has only neighbors in $V(G)\setminus S$.
Furthermore, there is a non-neighbor $w$ of $v_2$ in $V(G)\setminus S$ (note that $w=u_3$ or $w=u_4$ is possible).
This implies that $\{u_1v_2,wx\}$ is an induced matching of size $2$.
Suppose now that all degree-$4$ vertices are adjacent to at most one degree-$3$ vertex.
This implies that there are at least three times as many degree-$4$ vertices as degree-$3$ vertices, which contradicts the fact that $G$ has order $10$.
This completes the case that $G$ contains exactly four degree-$3$ vertices.
If $G$ contains at least six degree-$3$ vertices, then at least $18$ edges leave the set of degree-$3$ vertices,
but at most $16$ edges leave the set of degree-$4$ vertices, which is the final contradiction.
\end{proof}
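The exceptional role of $C_5^2$ in Claim \ref{c11} can be checked directly by brute force. The following sketch is not part of the paper; here $C_5^2$ is taken to be the $5$-cycle with every vertex blown up into two independent vertices (the blown-up $5$-cycle of the abstract), a $4$-regular graph of order $10$.

```python
from itertools import combinations

def nu_s(n, edges):
    """Brute-force induced matching number: the largest k for which some
    k edges of G are pairwise vertex-disjoint and no edge of G joins
    endpoints of two distinct chosen edges."""
    adj = [set() for _ in range(n)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    best = 0
    for k in range(1, len(edges) + 1):
        found = False
        for cand in combinations(edges, k):
            vs = {x for e in cand for x in e}
            if len(vs) < 2 * k:
                continue  # two chosen edges share a vertex
            if all(y not in adj[x]
                   for e, f in combinations(cand, 2)
                   for x in e for y in f):
                found = True
                break
        if not found:
            break  # induced matchings are subset-closed, so stop here
        best = k
    return best

# Blown-up 5-cycle: vertex i of C_5 becomes {2i, 2i+1}, with a complete
# bipartite join between consecutive pairs (4-regular, order 10).
blowup = [(2 * i + a, 2 * ((i + 1) % 5) + b)
          for i in range(5) for a in (0, 1) for b in (0, 1)]
```

Here `nu_s(10, blowup)` returns $1$, so this graph indeed violates $\nu_s(G)\geq n(G)/9$ with $n(G)=10$.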
A vertex $v$ is a \textit{cut vertex} of a graph $G$ if $G-v$ has more components than $G$
and a \textit{block} is a maximal $2$-connected subgraph of $G$.
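The definition of a cut vertex can be illustrated by a small brute-force routine (an illustration only, not part of the paper): a vertex is a cut vertex exactly when deleting it increases the number of components.

```python
def cut_vertices(adj):
    """Vertices whose removal increases the number of components,
    computed directly from the definition; adj maps each vertex to
    its set of neighbours."""
    def components(removed):
        seen, count = set(), 0
        for s in adj:
            if s in removed or s in seen:
                continue
            count += 1
            stack = [s]
            while stack:
                x = stack.pop()
                if x in seen:
                    continue
                seen.add(x)
                stack.extend(adj[x] - removed - seen)
        return count
    base = components(set())
    return {v for v in adj if components({v}) > base}
```

For instance, the middle vertex of a path is a cut vertex, while a $2$-connected graph (a single block) has none.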
\begin{claim}\label{c12}
There is no cut vertex $v$ such that there is a block $B$ of order $10$ with $v\in V(B)$ such that $B$ contains only one cut vertex of $G$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c12}]
For a contradiction, we assume that such a configuration exists.
Suppose there is an edge $f$ in $B$ at distance at least $2$ from $v$.
We mark $f$ and thus $|S|+i(G')\leq 9$, which is a contradiction.
Thus all edges in $B$ are at distance at most $1$ from $v$.
Note that, since $v$ has a neighbor outside $B$, there are at most three vertices of $B$ at distance $1$ from $v$; hence, as $\delta(G)\geq 3$, there are at most three vertices of $B$ at distance $2$ from $v$, so $B$ has order at most $7$, which is a contradiction.
\end{proof}
Note the following: suppose we mark an edge incident to a degree-$3$ vertex and a degree-$4$ vertex.
Then $|S|=7$.
If $i(G')=3$, then Claims \ref{c11} and \ref{c12} imply that $G'$ has non-trivial components and at least two edges join $S$ and these non-trivial components.
This simplifies the proofs of the next claims significantly.
\begin{claim}\label{c13}
No degree-$4$ vertex is adjacent to four degree-$3$ vertices.
\end{claim}
\begin{proof}[Proof of Claim \ref{c13}]
For a contradiction, we assume that there is a vertex $v$ that is adjacent to four degree-$3$ vertices $u_1,\ldots, u_4$.
We mark $u_1v$; that is, $|S|=7$ and at most $12$ edges join $S$ and $I'$.
Every vertex in $I'$ has degree $4$ because there are only two degree-$4$ vertices which might be adjacent to a vertex in $I'$ (Claim \ref{c9}).
Thus $i(G')=3$ and $G$ is a graph of order $10$, which is a contradiction to Claim \ref{c11}.
\end{proof}
\begin{claim}\label{c14}
No degree-$4$ vertex is adjacent to three degree-$3$ vertices.
\end{claim}
\begin{proof}[Proof of Claim \ref{c14}]
For a contradiction, we assume that there is a vertex $v$ that is adjacent to three degree-$3$ vertices $u_1,u_2,u_3$.
If possible choose $v$ and $u_1$ such that $u_1v$ is contained in a $C_4$.
Let $v_1,v_2$ be the neighbors of $u_1$ beside $v$.
We mark $u_1v$; that is, $|S|=7$ and at most $13$ edges join $S$ and $I'$.
If $i(G')\geq 4$, then two degree-$3$ vertices share a common edge, which contradicts Claim \ref{c9}.
For contradiction, we assume that $i(G')=3$.
By Claims \ref{c11} and \ref{c12}, at least two edges join $S$ and $V(G)\setminus (S \cup I')$.
Hence at most $11$ edges join $S$ and $I'$.
Thus there is at least one degree-$3$ vertex $w\in I'$ that is adjacent to the three degree-$4$ vertices in $S$.
If there are three degree-$3$ vertices in $I'$, then all are adjacent to $v_1$ which is a contradiction to Claim \ref{c13}.
If there are two degree-$3$ vertices in $I'$, then $S$ induces a tree and both degree-$3$ vertices in $I'$ are adjacent to both $v_1$ and $v_2$.
Thus $v_1$ is a vertex adjacent to three degree-$3$ vertices and $v_1w$ is contained in a $C_4$, which is a contradiction to our choice of $u_1$ and $v$.
Therefore,
$I'$ contains two degree-$4$ vertices and the degree-$3$ vertex $w$, and exactly two edges join $S$ and $V(G)\setminus(S \cup I')$.
Thus there are two vertices $x,y\notin \{u_1,v\}$ such that one is a neighbor of $u_1$, one is a neighbor of $v$, and
$x$ has no neighbor in $V(G)\setminus(S \cup I')$ but $y$ does.
Marking $x$ with its neighbor in $\{u_1,v\}$ instead of $u_1v$ leads to a contradiction and this completes the proof of the claim.
\end{proof}
\begin{claim}\label{c15}
No degree-$4$ vertex is adjacent to two degree-$3$ vertices.
\end{claim}
\begin{proof}[Proof of Claim \ref{c15}]
For a contradiction, we assume that there is a vertex $v$ that is adjacent to two degree-$3$ vertices $u_1,u_2$.
If possible choose $v$ and $u_1$ such that $u_1v$ is contained in a $4$-cycle $C$ and if possible choose $C$ to contain two degree-$3$ vertices.
Let $v_1,v_2$ be the neighbors of $u_1$ beside $v$.
We mark $u_1v$; that is, $|S|=7$ and at most $14$ edges join $S$ and $I'$.
For a contradiction, we assume $i(G')\geq 3$.
Let $x_1,x_2$ be the degree-$4$ neighbors of $v$.
Note that each of $v_1,v_2$ is adjacent to at most one degree-$3$ vertex of $I'$ and each of $x_1,x_2$ to at most two;
that is, $I'$ contains at most two degree-$3$ vertices.
If $I'$ contains two degree-$3$ vertices $w_1,w_2$,
then, by symmetry of $v_1, v_2$, we obtain $N(w_i)=\{v_i,x_1,x_2\}$.
Thus $x_1$ is a degree-$4$ vertex adjacent to two degree-$3$ vertices and $x_1w_1x_2w_2x_1$ is a $4$-cycle containing two degree-$3$ vertices.
Our choice of $v$ implies that there is an edge joining $u_2$ and $v_1$ or $v_2$, which is a contradiction to Claim \ref{c14}.
Thus we assume that $I'$ contains at most one degree-$3$ vertex.
A degree counting argument implies that $i(G')=3$ and hence at least $11$ edges join $S$ and $I'$.
Claims \ref{c11} and \ref{c12} imply that there are at most $12$ edges joining $S$ and $I'$.
It follows that $S$ induces a tree.
Suppose $v_1$ has no neighbor in $V(G)\setminus (S\cup I')$.
Thus $N(v_1)=I'\cup \{u_1\}$.
Let $w$ be a non-neighbor of $u_2$ in $I'$.
Marking $v_1w$ instead of $u_1v$ leads to a contradiction because the edges $v_1w$ and $u_1v$ are independent
and at most one vertex in $G- (S\cup I')$ becomes an isolated vertex.
Thus, by symmetry in $\{v_1,v_2\}$, both $v_1$ and $v_2$ have a neighbor in $V(G)\setminus (S\cup I')$.
By symmetry in $\{x_1,x_2\}$, we may assume that $x_1$ has no neighbor in $V(G)\setminus (S\cup I')$.
Hence $N(x_1)=I'\cup \{v\}$.
Let $w'$ be a non-neighbor of $v_1$ in $I'$.
Similar as above, marking $x_1w'$ instead of $u_1v$ leads to a contradiction.
\end{proof}
\begin{claim}\label{c16}
$\delta(G)\geq 4$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c16}]
For a contradiction, we assume that there is a vertex $v$ which is adjacent to a degree-$3$ vertex $u$.
Choose $u,v$ such that $uv$ is contained in as many $4$-cycles as possible.
Let $v_1,v_2$ be the neighbors of $u$ beside $v$ and $x_1,x_2,x_3$ be the neighbors of $v$ beside $u$.
We mark $uv$; that is, $|S|=7$ and at most $15$ edges join $S$ and $I'$.
If $I'$ does not contain a degree-$3$ vertex, then $i(G')\leq 3$.
Suppose $I'$ contains a degree-$3$ vertex $w$.
By Claim \ref{c15}, we conclude $N(w)=\{x_1,x_2,x_3\}$
and thus, by our choice of $uv$,
the set $S\setminus\{u,v\}$ induces a graph of size at least $2$,
which in turn implies that at most $11$ edges join $S$ and $I'$.
Hence $i(G')\leq 3$.
Therefore,
we may assume that $i(G')=3$, and Claims \ref{c11} and \ref{c12} imply that there are at most $13$ edges joining $S$ and $I'$.
If $I'$ contains a degree-$3$ vertex, then with the same argumentation as above there are at least two edges within $S\setminus \{u,v\}$ but then at most nine edges join $S$ and $I'$,
which is a contradiction to the fact that there is at most one degree-$3$ vertex in $I'$.
Thus we may assume that $I'$ contains three degree-$4$ vertices.
A degree sum argument implies that $S$ induces a tree and exactly three edges join $S$ and $V(G)\setminus(S \cup I')$.
If these three edges are incident to a common vertex $z\notin S$ and $z$ has degree~$3$, then $z$ is contained in a $4$-cycle and this contradicts the choice of $u$ and $v$ because $S$ induces a tree.
Thus deleting vertices in $S$ does not lead to isolated vertices in $G-(S \cup I')$.
Suppose $v_1$ or $v_2$, by symmetry say $v_1$, has at least one neighbor in $V(G)\setminus(S \cup I')$.
Let $w\in I'$ be a non-neighbor of $v_1$.
By symmetry in $\{x_1,x_2,x_3\}$, we conclude that $x_1$ has no neighbor in $V(G)\setminus(S \cup I')$ and hence marking $x_1w$ instead of $uv$ leads to a contradiction.
Therefore, we may assume that $N(v_i)=\{u\}\cup I'$ for $i\in \{1,2\}$.
By symmetry, $x_1$ has a neighbor in $V(G)\setminus(S \cup I')$ and a non-neighbor $w\in I'$.
Marking $v_1w$ instead of $uv$ leads to a contradiction, which completes the proof of the claim.
\end{proof}
\begin{claim}\label{c17}
$G$ is triangle-free.
\end{claim}
\begin{proof}[Proof of Claim \ref{c17}]
For a contradiction, we assume that there is an edge $uv$ which is contained in a triangle.
Choose $uv$ such that it is contained in as many triangles as possible.
We mark $uv$.
If $uv$ is contained in at least two triangles,
then $|S|\leq 6$ and at most $10$ edges join $S$ and $I'$.
Thus $i(G')\leq 2$, which is a contradiction.
Therefore, we assume that $uv$ is contained in one triangle $uvwu$ only.
Moreover, we choose the triangle edge $uv$ such that it is contained in as many $4$-cycles as possible.
Thus $|S|=7$ and at most $14$ edges join $S$ and $I'$.
Hence $i(G')=3$.
Furthermore, $S\setminus\{u,v\}$ induces a graph on at most one edge.
Let $x\in I'$.
If $xw \in E(G)$, then either $uw$ or $vw$ is contained in a triangle and in two $4$-cycles, which is a contradiction to our choice of $uv$.
This implies that $S\setminus \{u,v,w\} \cup I'$ induces a complete bipartite graph.
Let $y\in N(x)$.
Marking $xy$ instead of $uv$ leads to a contradiction.
\end{proof}
Since we may assume from now on that $G$ is triangle-free,
we use the following notation in the remaining part of the proof.
The marked edge will be denoted by $uv$.
Moreover, let $N(u)=\{v,u_1,u_2,u_3\}$ and $N(v)=\{u,v_1,v_2,v_3\}$.
Note that all these vertices are distinct and that $|S|=8$.
Furthermore, let $S'=S\setminus\{u,v\}$.
This implies that at most $18$ edges join $I'$ and $S$.
For a contradiction, we assume that $i(G')\geq 2$; that is, at least eight edges join $I'$ and $S'$.
Let $I'=\{w_1,w_2,\ldots \}$.
Let $m_{S'}$ be the number of edges in $G[S']$.
If $m_{S'}\geq 6$, then at most six edges join $I'$ and $S'$, which is a contradiction.
Since $G$ is triangle-free, $G[S']$ is bipartite.
\begin{claim}\label{c18}
$\Delta(G[S'])\leq 2$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c18}]
By symmetry, we assume for contradiction that $v_1$ is adjacent to $u_1,u_2,u_3$.
Suppose first that $m_{S'}\geq 4$ and hence $i(G')=2$.
By symmetry, $u_1v_2\in E(G)$.
If $u_1v_3 \in E(G)$,
then $i(G')=2$ and $N(w_i)=\{u_2,u_3,v_2,v_3\}$ for $i\in\{1,2\}$.
Thus $G=C_5^2$, which is a contradiction.
Hence we suppose $u_1v_3\notin E(G)$.
We mark instead of $uv$ the edge $u_1v_1$.
If $u_1$ has a neighbor in $I'$, say $w_1$, then $w_2v_3$ is independent of $u_1v_1$, which leads to a contradiction.
If $u_1$ has no neighbor in $I'$, then $w_1v_3w_2$ is a path independent from $u_1v_1$, which leads to a contradiction.
Therefore, we may suppose $m_{S'}= 3$.
If $i(G')=3$, then $n(G)=11$ and $u_1$ has a non-neighbor in $I'$, say $w_1$.
Hence $uu_1$ together with $w_1v_2$ is an induced matching of size~$2$, which is a contradiction.
Thus $i(G')=2$ and four edges join $S'$ and $V(G)\setminus(S \cup I')$.
The fact that $i(G')=2$ and $m_{S'}= 3$ imply that $v_2$ and $v_3$ have at least one neighbor in $V(G)\setminus(S \cup I')$.
By symmetry in $\{u_1,u_2,u_3\}$, we assume that $u_1$ has no neighbor in $V(G)\setminus(S \cup I')$.
Thus marking $u_1v_1$ instead of $uv$ leads to a contradiction.
\end{proof}
\begin{claim}\label{c19}
$G[S']$ contains no two independent edges.
\end{claim}
\begin{proof}[Proof of Claim \ref{c19}]
By symmetry, we assume that $u_1v_1$ and $u_2v_2$ are independent edges.
This implies that at most $14$ edges join $S'$ and $I'$ and hence $i(G')\in \{2,3\}$.
We now mark $u_1v_1$ and $u_2v_2$ instead of $uv$.
Suppose first that $i(G')=3$.
Let $S''=N[u_1]\cup N[u_2]\cup N[v_1]\cup N[v_2]$ and $G''= G-S''$.
It is easily checked that $|S''|+i(G'')\leq 18$, which is a contradiction.
Therefore, we may assume $i(G')=2$.
Suppose that $m_{S'}\geq 3$.
Hence at most four edges join $S$ and $V(G)\setminus(S \cup I')$.
This implies that $|S''\cup S \cup I'|\leq 14$ and at most $12$ edges join $S''\cup S \cup I'$ and $V(G)\setminus (S''\cup S \cup I')$;
that is, $|S''\cup S \cup I'|+i(G'')\leq |S''|+i(G'')\leq 17$, which is a contradiction.
Therefore, we may assume $m_{S'}=2$.
Since $G$ is triangle-free, we conclude that $w_i$ for $i\in \{1,2\}$ is adjacent to $u_3,v_3$ and to exactly one vertex of the two marked edges, respectively.
Thus both $u_3$ and $v_3$ have exactly one neighbor in $V(G)\setminus(S \cup I')$.
Moreover, exactly four edges join $\{u_1v_1, u_2v_2\}$ and $V(G)\setminus(S \cup I')$;
that is, $|S''\cup S \cup I'|\leq 14$ and at most $14$ edges join $S''\cup S \cup I'$ and $V(G)\setminus(S'' \cup S \cup I')$,
which implies the contradiction $|S''\cup S \cup I'|+i(G'')\leq |S''|+i(G'')\leq 17$.
\end{proof}
Claim \ref{c19} implies that $G[S']$ has at most one nontrivial component.
Claims \ref{c17}, \ref{c18} and \ref{c19} imply that the nontrivial component of $G[S']$, if it exists, is
a $C_4$, a $P_4$, a $P_3$ or a $P_2$.
\begin{claim}\label{c20}
If it exists, then the nontrivial component of $G[S']$ is not a $C_4$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c20}]
For a contradiction, we assume that the nontrivial component of $G[S']$ is a $C_4$.
By symmetry, $u_1v_1u_2v_2u_1$ is this $C_4$.
Since at most $10$ edges join $S$ and $I'$, we have $i(G')=2$.
Furthermore, since $G$ is triangle-free and by symmetry in $\{w_1,w_2\}$, we may assume that $N(w_1)=\{u_1,u_2,u_3,v_3\}$ and $N(w_2)=\{u_3,v_1,v_2,v_3\}$.
Marking $u_1v_1$ instead of $uv$ leads to a contradiction to Claim \ref{c19}.
\end{proof}
\begin{claim}\label{c21}
If it exists, then the nontrivial component of $G[S']$ is not a $P_4$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c21}]
For a contradiction, we assume that the nontrivial component $P$ of $G[S']$ is a $P_4$.
By symmetry, $u_1v_2u_2v_1$ is this $P_4$; that is,
at most $12$ edges join $S$ and $I'$ and hence $i(G')\in \{2,3\}$.
Suppose first $i(G')=3$.
This implies that $n(G)=11$ and $N(u_3)=\{w_1,w_2,w_3,u\}$; by symmetry, say $w_1$ is nonadjacent to $u_2$ and $v_2$.
Now $\{u_3w_1,u_2v_2\}$ is an induced matching of size $2$, which is a contradiction.
Thus we may assume that $i(G')=2$;
that is, exactly four edges join $S$ and $V(G)\setminus(S \cup I')$.
Since $w_i$ for $i\in\{1,2\}$ has at most two neighbors in $P$,
we conclude that $w_i$ is adjacent to $u_3$ and $v_3$.
Moreover,
$u_3$ and $v_3$ have a neighbor in $V(G)\setminus(S \cup I')$.
Suppose at least one of the endvertices of $P$ is adjacent to both $w_1$ and $w_2$, say $u_1$.
Marking $uu_1$ instead of $uv$ leads to a contradiction because $v_3$ has a neighbor in $V(G)\setminus(S \cup I')$.
Thus we may assume that $u_2$ and $v_2$ have a neighbor in $I'$.
Then, marking $u_2v_2$ instead of $uv$ leads to a contradiction because $u_3$ and $v_3$ have a neighbor in $V(G)\setminus(S \cup I')$.
\end{proof}
In the following we choose $uv$ such that $m_{S'}$ is maximal.
\begin{claim}\label{c22}
If it exists, then the nontrivial component of $G[S']$ is not a $P_3$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c22}]
For a contradiction, we assume that the nontrivial component $P$ of $G[S']$ is a $P_3$.
By symmetry, $u_1v_1u_2$ is this $P_3$.
If $v_1$ has a neighbor in $I'$, say $w_1$, then $N(w_1)=\{u_3,v_1,v_2,v_3\}$ because $G$ is triangle-free.
The neighborhood of $vv_1$ induces a graph of size at least $4$, which is a contradiction to the previous claims.
Thus we may assume that $v_1$ has a neighbor in $V(G)\setminus(S \cup I')$.
If $u_1$ or $u_2$, say $u_1$, has a neighbor in $I'$, say $w_1$,
then, since $w_1$ is adjacent to $u_2$ or $u_3$,
the neighborhood of $uu_1$ induces a graph on at least three edges, which is a contradiction to the previous claims.
Thus we may assume that both $u_1$ and $u_2$ have two neighbors in $V(G)\setminus(S \cup I')$.
Since $G$ is $4$-regular, $w_1$ is adjacent to at least one vertex in $P$, which is a contradiction to our assumptions.
\end{proof}
\begin{claim}\label{c23}
If it exists, then the nontrivial component of $G[S']$ is not a $P_2$.
\end{claim}
\begin{proof}[Proof of Claim \ref{c23}]
For a contradiction, we assume that the nontrivial component of $G[S']$ is an edge.
By symmetry, let $u_1v_1$ be this edge.
If $u_1$ or $v_1$, say $u_1$, is adjacent to a vertex in $I'$, say $w_1$,
then $w_1$ is nonadjacent to $u_2,u_3,v_1$ according to our choice of $uv$, which is a contradiction
to the $4$-regularity of $G$.
This implies that the neighborhood of every vertex in $I'$ is $\{u_2,u_3,v_2,v_3\}$.
Since $i(G')\geq 2$,
the neighborhood of the edge $u_2w_1$ induces a graph on at least two edges, which is a contradiction to our choice of $uv$.
\end{proof}
Claims \ref{c17}-\ref{c23} imply that $G$ has girth at least $5$.
This implies that every vertex in $V(G)\setminus S$ has at most two neighbors in $S$ and hence $i(G')=0$, which is the final contradiction.
\qed
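The abstract mentions a polynomial-time algorithm producing induced matchings of size at least $n(G)/9$. One natural reading of the proof strategy — a hypothetical sketch, not the authors' stated algorithm — is a greedy procedure that repeatedly marks an edge $uv$ minimising $|S|+i(G')$ and deletes $S$ together with the vertices it isolates; the output is always an induced matching, since every later edge avoids the closed neighbourhood of each marked edge.

```python
def greedy_induced_matching(adj):
    """Hypothetical greedy sketch: repeatedly mark an edge uv minimising
    |S| + i(G') with S = N[u] ∪ N[v], delete S and the vertices it
    isolates, and repeat; adj maps each vertex to its neighbour set."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # local mutable copy
    matching = []
    while True:
        edges = [(u, v) for u in adj for v in adj[u] if u < v]
        if not edges:
            break
        def cost(edge):
            u, v = edge
            S = adj[u] | adj[v] | {u, v}
            rest = set(adj) - S
            return len(S) + sum(1 for x in rest if not (adj[x] & rest))
        u, v = min(edges, key=cost)
        S = adj[u] | adj[v] | {u, v}
        rest = set(adj) - S
        gone = S | {x for x in rest if not (adj[x] & rest)}
        matching.append((u, v))
        adj = {x: adj[x] - gone for x in adj if x not in gone}
    return matching
```

On a $6$-cycle this returns a matching of size $2$, the induced matching number of $C_6$.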
| {
"timestamp": "2014-08-01T02:07:51",
"yymm": "1407",
"arxiv_id": "1407.8336",
"language": "en",
"url": "https://arxiv.org/abs/1407.8336",
"abstract": "For a graph $G$, let $\\nu_s(G)$ be the induced matching number of $G$. We prove the sharp bound $\\nu_s(G)\\geq \\frac{n(G)}{9}$ for every graph $G$ of maximum degree at most $4$ and without isolated vertices that does not contain a certain blown up $5$-cycle as a component. This result implies a consequence of the well known conjecture of Erdős and Nešetřil, saying that the strong chromatic index $\\chi_s'(G)$ of a graph $G$ is at most $\\frac{5}{4}\\Delta(G)^2$, because $\\nu_s(G)\\geq \\frac{m(G)}{\\chi_s'(G)}$ and $n(G)\\geq \\frac{m(G)\\Delta(G)}{2}$. Furthermore, it is shown that there is polynomial-time algorithm that computes induced matchings of size at least $\\frac{n(G)}{9}$.",
"subjects": "Combinatorics (math.CO)",
"title": "Induced Matchings in Graphs of Maximum Degree 4"
} |
https://arxiv.org/abs/1610.05674 | The $p$-curvature conjecture and monodromy about simple closed loops | The Grothendieck-Katz $p$-curvature conjecture is an analogue of the Hasse Principle for differential equations. It states that a set of arithmetic differential equations on a variety has finite monodromy if its $p$-curvature vanishes modulo $p$, for almost all primes $p$. We prove that if the variety is a generic curve, then every simple closed loop on the curve has finite monodromy. | \section{Introduction}
The Grothendieck-Katz $p$-curvature conjecture posits the existence of a full set of algebraic solutions to arithmetic differential equations in characteristic 0, given the existence of solutions after reducing modulo a prime for almost all primes.
More precisely, let $R \subset \mathbb{C}$ be a finitely generated $\mathbb{Z}$-algebra. Suppose that $X$ is a smooth connected variety over $\Spec R$ and let $(V,\nabla)$ be a vector bundle on $X$ equipped with a flat connection. For almost every prime number $p$, we can reduce $X, V$ and $\nabla$ to obtain a vector bundle with connection on the $\Spec R/p$ scheme, $X/p$. Associated to such a system is an invariant $\psi_p$, its $p$-curvature, whose vanishing is equivalent to the existence of a full set of solutions to $\nabla$ modulo $p$.
\begin{conjecture}[Grothendieck-Katz $p$-curvature conjecture]
Suppose that for almost all primes $p$, the $p$-curvature of $(V/p,\nabla/p)$ vanishes. Then the differential equation $(V,\nabla)$ has a full set of algebraic solutions, i.e. it becomes trivial on a finite etale cover of $X$.
\end{conjecture}
Katz in \cite{Katz} proved that if the $p$-curvatures vanish for almost all $p$, then $(V,\nabla)$ has regular singular points, so that the conclusion of the conjecture is equivalent to asking that the monodromy representation of $(V,\nabla)$ is finite. Moreover, he showed that $(V,\nabla)$ has finite local monodromy. More precisely, if $X \hookrightarrow \overline{X}$ is a smooth compactification of $X$ with $\overline{X} \setminus X$ a normal crossings divisor, Katz proved that the monodromy about a boundary component is finite.
The main theorem of this paper is:
\begin{Theorem}\label{gen}
Let $(V,\nabla)$ be a vector bundle with connection on a generic proper smooth curve of genus $g$, such that almost all $p$-curvatures vanish. Then the monodromy about any simple closed loop is finite.
\end{Theorem}
Here, by generic, we mean that the image of the map from $\Spec R$ to $\mathcal{M}_{g}$ (the moduli space of genus $g$ curves) contains the generic point of $\mathcal{M}_{g}$. We are also able to prove results for open curves:
\begin{Theorem}\label{P1}
Let $(V,\nabla)$ be a vector bundle with connection on $\mathbb{P}^1 \setminus \{Q_1, \hdots, Q_d\}$ where $\{Q_1, \hdots, Q_d\}$ is a generic set of $d$ points, $d >2$. If the $p$-curvatures vanish for almost all primes $p$, then the monodromy about any simple closed loop is finite.
\end{Theorem}
We note that Theorem \ref{P1}, when applicable, generalises Katz's local monodromy theorem. Katz proves that any simple closed loop which encloses exactly one point has finite order. However, nothing is said about loops which enclose more than just a single point. Theorem \ref{P1} shows that any simple closed loop, though it may bound several points, has finite order in the monodromy representation, so long as the points are generic.
Another result, which combines Theorems \ref{gen} and \ref{P1} is
\begin{Theorem}\label{genpts}
Let $(V, \nabla)$ be a vector bundle with connection on the generic $d$-punctured curve. If the $p$-curvatures vanish for almost all primes $p$, then the monodromy about every simple closed loop is finite.
\end{Theorem}
We prove Theorems \ref{gen}, \ref{P1} and \ref{genpts} by using a beautiful suggestion of Emerton and Kisin, which allows us to view global monodromy as local monodromy. Given a family of smooth algebraic curves degenerating to a nodal curve, the singular curve can be realised by contracting an appropriately chosen simple closed loop in one of the smooth fibers. Indeed, locally about the node, we get a family of holomorphic annuli degenerating to the node. The annuli approximate a punctured disc, and this approximation gets better and better as one approaches the node.
With this idea as our guiding principle, we prove that when given a vector bundle on the smooth part of such a family with a connection relative to the base, the $p$-curvatures vanishing imply the finiteness of monodromy about the vanishing loop. The precise result is stated as Theorem \ref{thm:tech}. The key step is to prove a rigid-analytic version of the $p$-curvature conjecture:
suppose $S$ is a finitely-generated $\mathbb{Z}$-algebra, and let $(V,\nabla)$ be a vector bundle with connection on a closed rigid-analytic annulus $\mathcal{X}$ (with coordinate $t$) over $S((q))$. We also assume that $(V,\nabla)$ has a cyclic vector. If the $p$-curvatures vanish for almost all primes $p$, we prove that $(V,\nabla)$ pulled back to $\mathcal{X} \times_{S((q))} K((q))$ has finite monodromy, where $K$ is the fraction field of $S$.
The exact result we prove is stated as Theorem \ref{thm:use}.
This allows us to prove that $(V,\nabla)$ pulled back to the family of holomorphic annuli has finite monodromy, thereby completing the proof of Theorem \ref{thm:tech}. We remark that this is an instance where the theory of rigid-analytic varieties over an equicharacteristic-zero base is used to prove a result in arithmetic.
We note that Katz's theorem on local monodromy cannot be applied to prove Theorem \ref{thm:use}. Indeed, in Katz's setting, all the singularities are poles of finite order (in $t$), whereas we have to deal with series in $t$ with infinitely many negative powers. A key step is to prove that $(V,\nabla)$ extends to an integral model of $\mathcal{X}$ which has special fiber $\mathbb{G}_m$.
The paper is organised as follows. In \S 2, we state Theorem \ref{thm:tech} precisely and infer from it Theorem \ref{gen}. We spend Sections 3--5 proving Theorem \ref{thm:tech}. In \S3, we formulate and prove Theorem \ref{thm:use}, and a more general result. In \S4, we give a rigid criterion for a family of connections on holomorphic annuli to have solutions. In \S5, we put together the results in the previous two sections to finish the proof of Theorem \ref{thm:tech}. Finally in \S6, we demonstrate the proof of the other applications of Theorem \ref{thm:tech}.
\subsection*{Acknowledgements}
It is a pleasure to thank my advisor Mark Kisin, for suggesting that I work on this problem, and also for many conversations and ideas. I also thank George Boxer, Anand Patel and Yunqing Tang for several helpful discussions. Finally, I thank Shiva Shankar and Yunqing Tang for reading previous versions of this paper, and for offering very helpful comments.
\section{Statement of the main result}
\subsection{$p$-curvature}
Let $R_p$ be an $\mathbb{F}_p$-algebra, and let $X$ be a scheme smooth over $\Spec R_p$. Suppose that $(V,\nabla)$ is a vector bundle on $X$ with flat connection relative to $R_p$, and let $\mathcal{N}$ be its sheaf of sections. Suppose that $D$ is a section of $\Der(X/R_p)$, the sheaf of $R_p$-linear derivations on $X$. The data of $\nabla$ gives, for every such $D$, a map
$$\nabla(D): \mathcal{N} \rightarrow \mathcal{N}.$$
This map is $R_p$-linear, and obeys the Leibniz rule. The $p$-curvature $\psi_p$ is defined to be the mapping of sheaves
$$\Der(X/R_p) \rightarrow \End_{\mathcal{O}_X}(\mathcal{N})$$
by setting
$$\psi_p(D) = \nabla(D)^p - \nabla(D^p).$$
Note that $D^p$ is a derivation, as we are in characteristic $p >0$. It is easy to see that $\psi_p(D)$, which a priori lies only in $\End_{R_p}(\mathcal{N})$, is actually an element of $\End_{\mathcal{O}_X}(\mathcal{N})$. It is a well-known fact due to Cartier (Section 5.2 in \cite{Katz}) that $\psi_p$ being identically zero is equivalent to $\nabla$ having a full set of solutions.
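For orientation, here is the classical rank-one example (standard, and not needed in the sequel): take $X = \mathbb{A}^1 = \Spec R_p[t]$, $\mathcal{N} = \mathcal{O}_X$, $D = \frac{d}{dt}$ (so that $D^p = 0$ as a derivation, whence $\nabla(D^p) = 0$), and $\nabla(D) = D + a$ for a function $a$. Jacobson's formula then gives

```latex
% Rank-one p-curvature on the affine line: since D^p = 0 we have \nabla(D^p) = 0, and
\psi_p(D) \;=\; \nabla(D)^p - \nabla(D^p) \;=\; (D+a)^p \;=\; a^p + D^{p-1}(a),
% which is multiplication by a function, hence O_X-linear, as asserted above.
% For p = 2 one checks directly: (D+a)^2 = D^2 + 2aD + (a' + a^2) = a^2 + a'.
```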
\subsection{Simple closed loops}
Let $C$ be a smooth algebraic curve (not necessarily projective) over $\Spec \mathbb{C}$, and let $C^{\an}$ denote the associated complex analytic space.
\begin{definition}
We say that $\gamma \subset C^{\an}$ is a simple closed loop if it is the image of an injective continuous map $f_{\gamma}: S^1 \rightarrow C^{\an}$.
\end{definition}
Given a vector bundle $V$ with connection $\nabla$ on $C^{\an}$, we say that $\gamma$ has finite order in the monodromy representation if for some $b \in \gamma$, the class of $f_{\gamma}$ in $\pi_1(C^{\an},b)$ has finite order in the monodromy representation based at $b$. Note that this definition is independent of the choice of $b \in \gamma$ and of the parameterization map $f_{\gamma}$.
\begin{definition}
Two simple closed loops $\gamma$ and $\gamma'$ of $C^{\an}$ are isotopic if there is a map $I:S^1 \times [0,1]\rightarrow C^{\an}$ with the following properties:
\begin{enumerate}
\item $I(S^1 \times \{0\}) = \gamma$ and $I(S^1 \times \{1\}) = \gamma'$;
\item the restriction of $I$ to $S^1 \times \{a\}$ is an embedding into $C^{\an}$ for each $a \in [0,1]$.
\end{enumerate}
\end{definition}
We now make precise what we mean for a simple closed loop to contract to a node in a family of curves. To that end, suppose that $\overline{B}$ is a variety over $\Spec \mathbb{C}$. Let $\mathcal{C} \rightarrow \overline{B}$ be a family of algebraic curves over $\overline{B}$ which is generically smooth. Let $B \subset \overline{B}$ be the open subvariety over which $\mathcal{C}$ is smooth. Suppose that there exists a closed point $s_0 \in \overline{B} \setminus B$ such that the fiber $\mathcal{C}_{s_0}$ over $s_0$ is a nodal curve, with $P \in \mathcal{C}_{s_0}$ a node. Let $s \in B(\mathbb{C})$ and suppose that $\gamma \subset \mathcal{C}^{\an}_s$ is a simple closed loop.
\begin{definition}
We say that $\gamma$ contracts to the node $P$ if there exists a continuous map $J: S^1 \times [0,1] \rightarrow \mathcal{C}^{\an}$ such that
\begin{enumerate}
\item For $a<1$, $J(S^1 \times \{a\})$ is a simple closed loop contained entirely within a single smooth fiber of $\mathcal{C}^{\an}$.
\item $J(S^1 \times \{0\}) = \gamma$ and $J(S^1 \times \{1\}) = \{P\}$.
\end{enumerate}
\end{definition}
Clearly $J$ induces a map $\beta: [0,1] \rightarrow \overline{B}^{\an}$ such that $\beta([0,1))$ is contained in $B^{\an}$. We say that $\gamma$ contracts to the node via $\beta$. It follows from the definitions that if $\gamma$ and $\gamma'$ are isotopic simple closed loops contained in some smooth fiber, then $\gamma$ contracts to the node if and only if $\gamma'$ does.
\subsection{The generic curve}
Suppose that $\overline{B}$ is a variety over a number field. Let $\mathcal{C} \rightarrow \overline{B}$ be as above. We now state the main technical result, from which Theorem \ref{gen} follows.
\begin{Theorem}\label{thm:tech}
Notation as above. Suppose that $(V, \nabla)$ is a vector bundle on $\mathcal{C}_{|_B}$ with connection relative to $B$, such that almost all $p$-curvatures vanish. Let $s \in B(\mathbb{C})$. Suppose that $\gamma\subset \mathcal{C}_s^{\an}$ is a simple closed loop which contracts to the node. Then the image of $\gamma$ in the monodromy representation has finite order.
\end{Theorem}
We end this section by demonstrating how Theorem \ref{thm:tech} is used to prove Theorem \ref{gen}. We first prove the following lemma, which will allow us to reduce Theorem \ref{gen} to the case of {\it the} generic curve. Let $X$ and $Y$ be irreducible varieties over $\mathbb{C}$, and let $f: X \rightarrow Y$ be a map with generically connected fibers. Suppose that $U \subset X$ and $V \subset Y$ denote smooth open subvarieties of $X$ and $Y$, such that $f$ restricted to $U$ is a smooth morphism. Suppose that $x \in U^{\an}$, and $y = f(x)$.
\begin{Lemma}\label{pathlifting}
Let $X, Y, f$ be as above, and let $\beta: [0,1] \rightarrow X^{\an}$ be a path such that $\beta(0) = x$ and $\beta([0,1)) \subset U^{\an}$. Let $\alpha: [0,1] \rightarrow V^{\an}$ be any loop based at $y$. Then a suitable reparameterization of the path $\alpha * (f \circ \beta)$ can be lifted to $X^{\an}$.
\end{Lemma}
\begin{proof}
The path $\alpha$ can be lifted to a path $\tilde{\alpha}: [0,1] \rightarrow U^{\an}$ because the map $f: U \rightarrow V$ is smooth. Because the fibers of $f$ are connected, there exists a path $\alpha'$ contained inside $f^{-1}(y)^{\an}$ which connects $\tilde{\alpha}(0)$ and $x$. The path $\tilde{\alpha}*\alpha'*\beta$ is clearly a lift of $\alpha * (f \circ \beta)$ (up to reparameterization).
\end{proof}
\begin{proof}[Proof of Theorem \ref{gen}]
If $g = 0$, then the curve is simply connected, and so there is no monodromy. If $g = 1$, then the fundamental group is abelian, and by \cite{Bost} the monodromy representation has finite image. Therefore, we assume that $g \geq 2$. We deal with the cases $g = 2$ and $g > 2$ separately, because the moduli problem of genus-two curves is not fine at any point (every genus-two curve carries the hyperelliptic involution).
\subsubsection*{Case 1: $g > 2$}
Let $\mathcal{M}_{g}$ be the moduli space of genus $g$ curves over $\mathbb{Q}$. Let $\overline{\mathcal{M}_{g}}$ be its compactification (as constructed by Knudsen in \cite{Knudsen}; see also \cite[Chapter 4]{Harris}). We recall the following facts about $\mathcal{M}_{g}$, $\overline{\mathcal{M}_{g}}$ and mapping class groups of curves.
\begin{enumerate}
\item The generic genus $g$ curve has no automorphisms, for $g>2$. \label{itm:one}
\item There exists a boundary divisor $D_{\irr} \subset \overline{\mathcal{M}_{g}}$, consisting of irreducible nodal curves. The generic irreducible nodal curve has no automorphisms. \label{itm:two}
\item For every $g_1 + g_2 = g$ with $g_i \geq 1$, there exists a boundary divisor $D_{g_1,g_2}$ consisting of reducible curves with two smooth irreducible components, of genera $g_1$ and $g_2$, meeting at a node. If both $g_i > 1$, the generic such curve has no automorphisms. \label{itm:three}
\item By Facts \ref{itm:one}, \ref{itm:two} and \ref{itm:three}, there exists an open subvariety $\overline{U} \subset \overline{\mathcal{M}_{g}}$ which contains the generic points of $D_{\irr}$ and of the $D_{g_1,g_2}$ with $g_i > 1$, and over which the moduli problem is fine. Let $\mathcal{C} \rightarrow \overline{U}$ be the corresponding family of curves. Let $\overline{U} \cap \mathcal{M}_{g} = U$ ($U$ can be assumed to be affine). \label{itm:four}
\item For $s \in U(\mathbb{C})$ let $\Sigma_s$ denote the mapping class group of $\mathcal{C}_s^{\an}$. There is a surjective map from $\pi_1(U(\mathbb{C}),s)$ to $\Sigma_s$. The action of $\pi_1(U(\mathbb{C}),s)$ (by parallel transport) on the isotopy classes of simple closed loops of $\mathcal{C}_s^{\an}$ factors through this map. \label{itm:five}
\item The mapping class group $\Sigma_s$ acts transitively on the isotopy classes of non-separating simple closed loops. If two separating simple closed loops break $\mathcal{C}_s$ into pieces with the same genera, then the isotopy classes of these two loops lie in the same $\Sigma_s$-orbit (see \cite{Farb}). \label{itm:six}
\end{enumerate}
We will first prove the theorem for {\it the} generic curve, i.e. we will prove that every simple closed loop of $\mathcal{C}_s$ has finite order in the monodromy representation. There exists a proper normal irreducible variety $\overline{\mathcal{M}'}$ which is a double cover of $\overline{\mathcal{M}_{g}}$ branched over the generic point of $D_{1,g-1}$, and which is \'etale over the generic points of all the other boundary divisors. Let $D' \subset \overline{\mathcal{M}'}$ denote the pullback of $D_{1,g-1}$. The family of curves over $\overline{\mathcal{M}'}$ extends to the generic point of $D'$. In order to prove the result for the generic curve, it suffices to prove it for the family over $\overline{\mathcal{M}'}$. Let $U' \subset \overline{\mathcal{M}'}$ denote the pullback of $U \subset \overline{\mathcal{M}_{g}}$.
We now apply Theorem \ref{thm:tech} as follows.
Suppose that $s' \in U'(\mathbb{C})$. Let $s \in U(\mathbb{C})$ be its image. Let $\beta$ be a path joining $s \in U(\mathbb{C})$ and $s_0 \in D_{\irr}(\mathbb{C}) \cap \overline{U}(\mathbb{C})$. This corresponds to a non-separating simple closed loop $\gamma \subset \mathcal{C}_s$ contracting to the nodal point of $\mathcal{C}_{s_0}^{\an}$ along $\beta$. By applying Theorem \ref{thm:tech}, we see that $\gamma$ acts with finite order. By Facts \ref{itm:five} and \ref{itm:six}, the action of $\pi_1(U(\mathbb{C}),s)$ on the set of isotopy classes of non-separating simple closed loops is transitive. Therefore, by modifying $\beta$ by a loop $\alpha$ based at $s$, every non-separating simple closed loop $\gamma' \subset \mathcal{C}_s^{\an}$ can be contracted to the node of $\mathcal{C}_{s_0}^{\an}$. Lifting this path to $\overline{\mathcal{M}'}$, we see that every non-separating simple closed loop in $\mathcal{C}_{s'}$ can also be contracted to the node of a boundary point of $\overline{\mathcal{M}'}$. The result follows.
For a simple closed loop which breaks $\mathcal{C}_{s'}^{\an}$ into two pieces both of which have genus greater than $1$, the same argument applies.
Finally, we claim that the fundamental group of $U'(\mathbb{C})$ acts transitively on the isotopy classes of simple closed loops on $\mathcal{C}_{s'}^{\an}$ which break $\mathcal{C}_{s'}^{\an}$ into two pieces, one of which has genus $1$. Indeed, the element of $\pi_1(U(\mathbb{C}),s)$ corresponding to the local monodromy about $D_{1,g-1}$ acts trivially on such loops, and together with the identity it forms a full set of coset representatives for the index-two subgroup $\pi_1(U'(\mathbb{C}),s')$, establishing the claim. Therefore, the above argument applies to prove the result in the case of the family over $\overline{\mathcal{M}'}$, and therefore for the generic family.
Now, let $\mathcal{C} \rightarrow \Spec R$ be any generic proper smooth genus $g$ curve. We replace $\Spec R$ with $\Spec R \times_{\overline{\mathcal{M}_{g}}} U'$ (note that $U'$ is affine, and therefore $\Spec R \times_{\overline{\mathcal{M}_{g}}} U'$ is affine; for ease of notation, we continue to denote it by $\Spec R$). There exists a proper normal irreducible variety $X$ along with a map $f: X \rightarrow \overline{\mathcal{M}'}$ with the following properties:
\begin{enumerate}
\item The map $\Spec R \rightarrow \overline{\mathcal{M}'}$ factors through $X \rightarrow \overline{\mathcal{M}'}$.
\item The map $\Spec R \rightarrow X$ is generically finite.
\item The map $X \rightarrow \overline{\mathcal{M}'}$ has generically connected fibers.
\end{enumerate}
By replacing $\Spec R$ with a birational variety, we may assume that $X$ is a proper normal variety. Let $x \in X(\mathbb{C})$ denote a point at which $f$ is smooth (such an $x$ exists because $f$ is generically smooth), and let $C$ denote the fiber of $\mathcal{C}$ over $x$. Given any simple closed curve, we may apply Lemma \ref{pathlifting} to find a path $\beta \subset X^{\an}$ which contracts the chosen simple closed curve to a point. Finally, we replace $\Spec R$ with a normal proper variety $X'$ birational to it. Then the map $X' \rightarrow X$ is generically \'etale, and is branched over the generic points of the boundary divisors of $X$. By using the path lifting property for branched covers, we see that every simple closed curve as above on a generic fiber over $X'$ can be contracted to a point. We apply Theorem \ref{thm:tech} to conclude the finiteness of monodromy.
\subsubsection*{Case 2: $g = 2$}
Consider the family of projective curves $\mathcal{C} \rightarrow \mathcal{M}_{0,6}$ with affine equations
$$y^2 = x(x-1)(x-\lambda_1)(x-\lambda_2)(x-\lambda_3).$$
The same arguments used in Case 1 for the family over $\overline{\mathcal{M}'}$ apply to prove Theorem \ref{gen} for this family. Suppose that $\mathcal{C} \rightarrow H$ is some other family of smooth genus 2 curves over an irreducible base $H$. We claim that there exists a generically \'etale map $H' \rightarrow H$ such that the family $\mathcal{C} \times_H H'$ is pulled back from $\mathcal{M}_{0,6}$. If this is the case, the arguments of Case 1 again apply to prove our result.
We now prove the claim. Because $\mathcal{C} \rightarrow H$ is a family of genus two curves, it is hyperelliptic Zariski-locally over $H$, i.e. there exists a degree-2 map $\mathcal{C} \rightarrow H \times \mathbb{P}^1$, once we replace $H$ by a Zariski-open subset. The image of the ramification locus in $H \times \mathbb{P}^1$ gives a degree-six \'etale cover of $H$. By choosing $H'' \rightarrow H$ to be an appropriate \'etale cover and pulling the family back to $H''$, we may assume that the degree-six \'etale cover has six connected components. This yields a canonical map $H'' \rightarrow \mathcal{M}_{0,6}$, and the family $\mathcal{C} \rightarrow H''$ is locally a quadratic twist of the pullback family from $\mathcal{M}_{0,6}$. By shrinking $H''$ if necessary, and taking a double cover $H'$ of $H''$, we can get rid of the quadratic twist and assume that $\mathcal{C} \times_{H''} H'$ is indeed the pullback family from $\mathcal{M}_{0,6}$, as required. The theorem follows.
\end{proof}
\section{Rigid computations}
In this section, we work with rigid-analytic varieties over equicharacteristic bases. All our spaces are of dimension 1, and a good reference for this case is \cite{Kedlaya}. For a treatment of higher-dimensional varieties, see \cite{Remmert} and \cite{Tate}.
Let $S \subset \mathbb{C}$ be a $\mathbb{Z}$-algebra of finite type and let $K$ be its field of fractions. By inverting finitely many elements of $S$, we may assume that $S \cap \overline{\mathbb{Q}} = \mathcal{O}$. Here, $\mathcal{O}$ is the ring of integers of some number field, sufficiently localized. Then for almost all primes $\mathfrak{p} \subset \mathcal{O}$, $S/\mathfrak{p}$ is also an integral domain, with fraction field $K_{\mathfrak{p}}$ of the same transcendence degree over $\mathbb{F}_p$ as $K$ over $\mathbb{Q}$. Given an element of $K$, its $\mathfrak{p}$-adic valuation still makes sense, and if this valuation is non-negative, it makes sense to reduce the element modulo $\mathfrak{p}$ to obtain an element of $K_{\mathfrak{p}}$.
Consider the ring of power series over $K$ (\textit{resp.} $K_{\mathfrak{p}}$) in the variable $q$, $K[[q]]$ (\textit{resp.} $K_{\mathfrak{p}}[[q]]$), and its field of fractions $K((q))$ (\textit{resp.} $K_{\mathfrak{p}}((q))$). These fields are complete with respect to the discrete $q$-adic valuation (which is non-archimedean). We consider vector bundles with connection on rigid-analytic annuli over $K((q))$ and $K_{\mathfrak{p}}((q))$. We make the following definitions:
\begin{definition}
\begin{enumerate}
\item $\mathcal{X}_{r_1,r_2}\ ($\textit{resp.} $\mathcal{X}_{r_1,r_2,\mathfrak{p}})$ is defined to be the closed rigid annulus over $K((q))\ ($\textit{resp.} $K_{\mathfrak{p}}((q)))$ with inner radius $r_1$ and outer radius $r_2$.
\item $\mathcal{X}\ ($\textit{resp.} $\mathcal{X}_{\mathfrak{p}})$ is defined to be the closed rigid annulus over $K((q))\ ($\textit{resp.} $K_{\mathfrak{p}}((q)))$ with inner and outer radii 1.
\item For any rigid-analytic annulus $\mathcal{Y}$ as above, $\mathcal{O}^{\rig}(\mathcal{Y})$ is defined to be the corresponding affinoid ring (of rigid functions).
\item For $\mathcal{Y}$ as above, $\mathcal{O}^{\rig}_{\le 1}(\mathcal{Y})$ is defined to be the subring of $\mathcal{O}^{\rig}(\mathcal{Y})$ consisting of functions bounded by $1$.
\end{enumerate}
\end{definition}
For example, $\mathcal{O}^{\rig}(\mathcal{X}) = K((q))\langle t^{-1},t \rangle$ is the ring of infinitely tailed Laurent series in the variable $t$ with coefficients in $K((q))$, with the property that the coefficients at either end converge to zero. The ring of functions bounded by $1$ is $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X}) = K[[q]] \langle t^{-1},t\rangle$, the ring of infinitely tailed Laurent series in the variable $t$ with coefficients in $K[[q]]$, with the property that the coefficients at either end converge to zero. Similarly, $\mathcal{O}^{\rig}(\mathcal{X}_{r_1,r_2}) \subset K((q))[[t^{-1},t]]$ consists of elements of the form $\sum a_kt^k$ with the property that $|a_k|r_i^k \rightarrow 0$ as $|k| \rightarrow \infty$ for $i = 1,2$. A function $f$ is in $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X}_{r_1,r_2})$ if it is in $\mathcal{O}^{\rig}(\mathcal{X}_{r_1,r_2})$ and satisfies the following (for all $k$, and for $i=1,2$):
\begin{equation}\label{growthcond}
|a_{k}|r_i^k \le 1
\end{equation}
More often than not, we will use the language of the above inequalities, instead of saying that $f \in \mathcal{O}^{\rig}_{\le 1}$.
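To illustrate \eqref{growthcond} (a toy example, not used later), note that boundedness by $1$ depends on the radii and not only on the coefficients:

```latex
% The function f = q^{-1} t has the single coefficient a_1 = q^{-1}, with |a_1| = |q|^{-1} > 1.
% On the annulus with r_1 = r_2 = |q| one has
|a_1|\, r_i \;=\; |q|^{-1}\,|q| \;=\; 1 \;\le\; 1,
\qquad\text{so}\qquad f \in \mathcal{O}^{\rig}_{\le 1}(\mathcal{X}_{|q|,|q|}),
% whereas on the unit annulus X (r_1 = r_2 = 1) we get |a_1| = |q|^{-1} > 1,
% so f does not lie in O^{rig}_{<=1}(X).
```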
In this section, the vector bundles with connection that we consider will be those which can be reduced modulo $\mathfrak{p}$ for almost all primes $\mathfrak{p} \subset \mathcal{O}$ (we will soon make precise what we mean by this), and we prove for them a rigid-analytic version of the Grothendieck-Katz $p$-curvature conjecture.
Before stating the main theorem of this section, we remark that we will exclusively work with annuli with ``rational" radii, i.e. with inner and outer radii rational powers of $|q|$. By extending scalars to fields obtained by adjoining rational powers of $q$, and by substituting $q^{\alpha}t$ for $t$ (for rational $\alpha$), we will be able to move back and forth between different annuli, without disrupting any of the integrality properties (with respect to primes $\mathfrak{p}$) we might have assumed. Henceforth, by annulus, we will always mean an annulus with ``rational" inner and outer radii. Further, whenever we move back and forth between different annuli, we will leave implicit the operations of adjoining appropriate roots of $q$, and the replacement of $t$ by the appropriate $q^{\alpha}t$.
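Concretely, the computation behind this convention is the following elementary rescaling (with $\alpha \in \mathbb{Q}$, after adjoining $q^{\alpha}$ to the base field):

```latex
% If f(t) = \sum_k a_k t^k, then substituting q^{\alpha} t for t gives
f(q^{\alpha}t) \;=\; \sum_k a_k q^{\alpha k} t^k,
\qquad
|a_k q^{\alpha k}|\, r_i^{\,k} \;=\; |a_k|\,\bigl(|q|^{\alpha} r_i\bigr)^{k},
% so t -> q^{\alpha} t identifies functions on the annulus with radii
% (|q|^{\alpha} r_1, |q|^{\alpha} r_2) with functions on the annulus with radii (r_1, r_2),
% and it preserves boundedness by 1, hence the integrality properties above.
```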
We now state the main theorems of this section:
\begin{Theorem}\label{thm:main}
Let $(V,\nabla)$ be a vector bundle with connection on $\mathcal{X}_{r_1,r_2}$, and suppose that there exists a cyclic vector with respect to which all data can be reduced modulo $\mathfrak{p}$ for almost all primes $\mathfrak{p} \subset \mathcal{O}$. If the $p$-curvatures vanish for almost all $p$, then there exists a basis of flat sections after finite pullback.
\end{Theorem}
\begin{Theorem}\label{thm:use}
Let $(V,\nabla)$ be a vector bundle with connection on $\mathcal{X}$, and suppose that there exists a cyclic vector with respect to which all data can be reduced modulo $\mathfrak{p}$ for almost all primes $\mathfrak{p} \subset \mathcal{O}$. Suppose that the $p$-curvatures vanish for almost all $p$. Then:
\begin{enumerate}
\item The coefficient matrix of the connection with respect to the cyclic basis has entries in $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X})$.
\item After finite pullback, there exists a basis of solutions over $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X})$.
\end{enumerate}
\end{Theorem}
Notice that $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X})/(q) = K[t,t^{-1}]$, which is the coordinate ring of $\mathbb{G}_m$ over $K$. Part 1 of Theorem \ref{thm:use} proves that $(V, \nabla)$ actually extends to a very special integral model, namely the one with special fiber $\mathbb{G}_m$. Part 2 posits the existence of solutions over this integral model.
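The identification $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X})/(q) = K[t,t^{-1}]$ can be verified directly (a routine check):

```latex
% An element f = \sum_k a_k t^k of O^{rig}_{<=1}(X) = K[[q]]<t^{-1},t> has a_k in K[[q]]
% with v_q(a_k) -> \infty as |k| -> \infty. Hence
a_k \not\equiv 0 \pmod{q} \iff v_q(a_k) = 0,
% which holds for only finitely many k. So f mod q is a finite Laurent polynomial,
% and every element of K[t,t^{-1}] visibly lifts; the quotient is the coordinate
% ring of G_m over K.
```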
We now say a word about what we mean by reducing data modulo primes. Given a vector bundle with connection over a rigid $K((q))$-annulus, we would like to produce vector bundles with connections over the corresponding rigid $K_{\mathfrak{p}}((q))$-annuli, for almost all $\mathfrak{p}$. First of all, note that vector bundles over closed rigid-analytic annuli are always trivial, because the ring of functions is a principal ideal domain (for instance, see \cite[Proposition 7.3.2]{Kedlaya}). Therefore, given a basis, it makes sense to reduce the vector bundle modulo $\mathfrak{p}$ by simply looking at trivial vector bundles on the appropriate annulus over $K_{\mathfrak{p}}((q))$, with a basis of sections indexed by the same set. However, if we now add a connection, matters are not as clear. Fixing a basis and writing out the connection in terms of that basis (and the ubiquitous coordinate $t$), it might not be possible to reduce the connection modulo these primes.
We say that $a \in K((q))$ can be reduced modulo almost all primes if its $\mathfrak{p}$-adic valuation is non-negative for almost all primes. We say that $f \in \mathcal{O}^{\rig}(\mathcal{X}_{r_1,r_2})$ can be reduced modulo almost all primes, if the set of primes $\mathfrak{p}$ with the property that some coefficient $a$ of $f$ has negative $\mathfrak{p}$-adic valuation, is finite. Here, we think of $f$ as an infinitely tailed Laurent series in the coordinate $t$ with coefficients in $K((q))$. In this case, $f$ can be reduced modulo almost all primes $\mathfrak{p}$ to give functions on $\mathcal{X}_{r_1,r_2,\mathfrak{p}}$. Also, note that a function $f$ which is bounded by $1$ will reduce to a bounded-by-$1$ function on the corresponding mod-$\mathfrak{p}$ annulus. In fact, if the reduction of $f$ modulo almost all primes is bounded by $1$, then $f$ will also be bounded by $1$.
Henceforth, we will always work with the data of a vector bundle with connection along with a basis of sections of the vector bundle, such that the connection matrix with respect to that basis can be reduced modulo almost all primes. Once we have a vector bundle with connection on a $K_{\mathfrak{p}}((q))$-annulus, the notion of its $p$-curvature makes complete sense. When we talk of a vector bundle with connection having vanishing $p$-curvatures, we implicitly mean that the basis which we're working with has the property that the connection matrix can be reduced modulo almost all primes $\mathfrak{p}$, and that the induced vector bundle with connection over $K_{\mathfrak{p}}$-annuli have vanishing $p$-curvatures.
\subsection{Extension to an integral model}
For the rest of this section, denote by $D$ the derivation $t\frac{d}{dt}$. In this subsection, we prove a statement slightly more general than the first statement in Theorem \ref{thm:use}.
\begin{Proposition}
Let $(W, \nabla)$ be a vector bundle with connection on $\mathcal{X}$, together with a cyclic vector $v$, such that the $p$-curvatures vanish for infinitely many $p$. Then, the entries of the connection matrix (with respect to the corresponding cyclic basis) lie in $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X})$.
\end{Proposition}
Note that we only need infinitely many $p$-curvatures to vanish.
\begin{proof}
Let $n$ be the rank of $W$. Suppose that the connection matrix (with respect to our cyclic basis $\bf{e}$) is
\begin{equation*}
A=
\left(
\begin{array}{cccc}
&&& f_0\\
1& &&f_1\\
&\ddots & &\vdots\\
&&1&f_{n-1}
\end{array}
\right)
\end{equation*}
Let $f_k(t) = \sum a_{k,i}t^i$. By replacing $t$ by $t^{1/n!}$, we may assume that $a_{k,i} = 0$ unless $n!\vert i$. Assume, for contradiction, that some $a_{k,i}$ has absolute value greater than $1$. Let $\ell = \max_{k,i}{ |a_{k,i}|^{\frac{1}{n-k}}}$, and let $\nu = \max_{|a_{k,i}| = \ell^{n-k}}{ (\frac{i}{n-k})}$.
Consider the following basis ${\bf f}$ of the vector bundle
\begin{equation*}
{\bf f} =
\left(
\begin{array}{cccc}
1&&&\\
&t^{-\nu}&&\\
&&\ddots&\\
&&&t^{-(n-1)\nu}
\end{array}
\right)
{\bf e} = T {\bf e}
\end{equation*}
A direct computation shows that
\begin{equation*}
\nabla(D){\bf f} = B{\bf f}
\end{equation*}
where
\begin{equation*}
B=
\left(
\begin{array}{cccc}
0&&&\\
&-\nu&&\\
&&\ddots&\\
&&&-(n-1)\nu
\end{array}
\right)
+
\left(
\begin{array}{cccc}
&&& t^{-(n-1)\nu}f_0\\
t^{\nu}&&&t^{-(n-2)\nu}f_1\\
&\ddots & &\vdots\\
&&t^{\nu}&f_{n-1}
\end{array}
\right)
\end{equation*}
The entries of $B$ can still be reduced modulo almost all primes $\mathfrak{p} \subset \mathcal{O}$. Let $t^{-(n-k -1)\nu}f_k(t) = \sum b_{k,i}t^i$. The definition of $\nu$ implies that $|b_{k,i}| < \ell^{n-k}$ for $i > \nu$, and that $|b_{k,i}| \le \ell^{n-k}$, with equality holding for at least some pair $(k, \nu)$. Write the connection matrix $B$ as $\sum\nolimits_{i \in \mathbb{Z}}t^i B_i$. The $B_i$ have the property that as $|i| \rightarrow \infty$, $B_i \rightarrow \bf{0}$. For $i$ different from $0, \nu$, $B_i$ has non-zero entries only in its last column. We have that \\
\begin{equation*}
B_0=
\left(
\begin{array}{cccc}
0&&&b_{0,0}\\
&-\nu&&b_{1,0}\\
&&\ddots&\\
&&&-(n-1)\nu + b_{n-1,0}
\end{array}
\right)
\end{equation*}
\\and \\
\begin{equation*}
B_{\nu}=
\left(
\begin{array}{cccc}
&&& b_{0,\nu}\\
1& &&b_{1,\nu}\\
&\ddots & &\vdots\\
&&1&b_{n-1,\nu}
\end{array}
\right)
\end{equation*}
\\
We now reduce modulo a prime $\mathfrak{p} \subset \mathcal{O}$, large enough so that the sizes of the $b_{k,\nu}$ don't change when reduced modulo $\mathfrak{p}$. Suppose also that the $p$-curvature vanishes, and that $p$ is larger than $2n$. For ease of notation, we use the same symbols to denote the connection matrix and its entries modulo $\mathfrak{p}$.
The characteristic polynomial of $B_{\nu}$ splits over a separable extension of $K_\mathfrak{p}((q))$ (because $p>2n$). Therefore, by \cite[Lemma 3]{Kedlaya1}, there exists an integer $m$ such that the field $L = \overline{K_\mathfrak{p}}((q^{1/m}))$ contains all the eigenvalues of $B_{\nu}$.
In the remainder of the proof, we establish the following statement, which supplies the necessary contradiction.
\begin{statement}
There exists a vector $w$ with entries in $L$, such that $\nabla(D^p)(w) \neq \nabla(D)^p(w)$.
\end{statement}
Let $\lambda$ be an eigenvalue with the largest norm. Clearly, $|\lambda| = \ell$. Let $w = [\theta_0, ... , \theta_{n-1}]^t$ be an eigenvector with eigenvalue $\lambda$. We may assume that $|\theta_{n-1}|=1$, because the last coordinate of an eigenvector of $B_{\nu}$ with non-zero eigenvalue cannot be $0$. It is easy to check that $|\theta_k| \le \ell^{n-1-k}$.
Let $\nabla(D)^p w = \sum w_i t^i$. We describe $w_{\nu p}$ as a sum of terms as follows:
\begin{equation*}
w_{\nu p}t^{\nu p} = \sum_{W \in \mathcal{I}} Ww
\end{equation*}
where $\mathcal{I}$ is the set of all length-$p$ words in the letters $B_i$ and $D$, with the property that the sum of all the subscripts $i$ occurring in $W$ is $\nu p$. The word $W$ acts on $w$ as follows: $B_i$ multiplies vectors by $B_i t^i$, and $D$ just acts on the power of $t$ in the usual way.
For a vector $v$, let $v[k]$ denote its $(k+1)^{th}$ entry (therefore, the last entry of $v$ would be denoted by $v[n-1]$). We will show that the word $W_0 = B_{\nu}B_{\nu} \hdots B_{\nu}$ has the property that $|W_0w[n-1]|$ is strictly larger than $|Ww[n-1]|$ for every $W \in \mathcal{I}$ with $W \neq W_0$. To that end, we make the following claim, which we prove after finishing the proof of this proposition.
\begin{Claim}
Suppose that $vt^a = [\phi_0, ... ,\phi_{n-1}]^t$ is a vector with the property that for all $k$, $|\phi_k| \le \ell^{n-k-1} |\theta_{n-1}|$ (resp. $|\phi_k| < \ell^{n-k-1} |\theta_{n-1}|$). Then, the coordinates of $\lambda w$ and $C v$ satisfy the same inequalities, where $C$ stands for either $B_i$ or $D$.
\end{Claim}
The proposition follows from the claim as follows: let $w^j$ and $w_0^j$ be the vectors obtained by applying the first $j$ letters of $W$ and $W_0$ respectively to $w$. The claim implies $|w^j[k]| \le \ell^{n-k-1}|w_0^j[k]|$ for any $0\le k \le n-1$. However, $W$, differing from $W_0$, must contain the letter $D$, or a letter of the form $B_i$ where $i>\nu$. In either case, such a letter would render the inequality strict, i.e. if such a letter first occurred at the $j_0^{th}$ stage, then $|w^{j_0}[k]| < \ell^{n-k-1}|w_0^{j_0}[k]|$. According to the claim, this strictness would persist through the application of the rest of the word, i.e. $|w^{j}[k]| < \ell^{n-k-1}|w_0^{j}[k]|$ for $j \ge j_0$. Therefore, $|w_{\nu p}[n-1]| = |B_{\nu}^p w [n-1]| = \ell^p$. On the other hand, since $D^p = D$, all the entries of $\nabla(D^p)(w) = \nabla(D)(w)$ are bounded by $\ell^{2n} < \ell^p$, which establishes the statement, as required.
\end{proof}
\begin{proof}[Proof of Claim]
The claim is clearly true if $C$ stands for $D$, because $\ell > 1$ and applying $D$ does not increase the size of any of the coordinates. For the rest of the proof, we ignore the factor of $t$, as it changes nothing. If $C$ stands for $B_i$ for $i$ different from $0$ and $\nu$, then $|Cv[k]| = |b_{k,i}\phi_{n-1}| \le$ (resp. $<$) $\ell^{n-k}|\theta_{n-1}|$, as required. If $C = B_0$, then $Cv[k] = -k\nu\phi_k + b_{k,0}\phi_{n-1}$, which is at most (resp. less than) $\ell^{n-k}|\theta_{n-1}|$ in absolute value, because $|\phi_{n-1}| \le$ (resp. $<$) $|\theta_{n-1}|$ and $|b_{k,0}| \le \ell^{n-k}$, as required.
Finally, suppose that $C = B_{\nu}$. For brevity, let $b_{k}=b_{k,\nu}$. We have that
\begin{equation*}
Cv = [b_0\phi_{n-1}, \phi_0 + b_1\phi_{n-1}, ... ,\phi_{n-2} + b_{n-1}\phi_{n-1}]^t.
\end{equation*}
We must show that $|Cv[k]| \le$ (resp. $<$) $\ell^{n-k}|\theta_{n-1}|$. This is clear for $Cv[0]$. We have $Cv[k] = \phi_{k-1} + b_k\phi_{n-1}$. The bound on $b_k$ shows that $|b_k\phi_{n-1}| \le$ (resp. $<$) $\ell^{n-k}|\theta_{n-1}|$. Therefore it suffices to show that $|\phi_{k-1}| \le$ (resp. $<$) $\ell^{n-k}|\theta_{n-1}|$, which holds by hypothesis. This proves the claim.
\end{proof}
By appropriately substituting $q^{\alpha}t$ for $t$, the corollary below directly follows:
\begin{Corollary}\label{grow}
Let $(V, \nabla)$ be a vector bundle with connection on $\mathcal{X}_{r_1,r_2}$, together with a cyclic vector $v$, such that the $p$-curvatures vanish for infinitely many $p$. Then, the entries of the connection matrix (with respect to the corresponding cyclic basis) satisfy \eqref{growthcond} and therefore lie in $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X}_{r_1,r_2})$.
\end{Corollary}
\subsection{Existence of rigid solutions in a special case}
We devote this subsection to proving Theorem \ref{thm:use}.
\begin{proof}[Proof of Theorem \ref{thm:use}]
Reducing $(V,\nabla)$ modulo $q$, we get a vector bundle with connection on $\mathbb{G}_m$ over $K$, with almost all $p$-curvatures vanishing. After sufficient pullback, we may assume that $(V,\nabla)$ has a full set of solutions modulo $q$ (the $p$-curvature conjecture for $\mathbb{G}_m$ is known -- see \cite{Katz}, \cite{Andre}, or \cite{Bost}). Therefore, changing basis by an element of $GL_n(K[t,t^{-1}])$, we may suppose that the connection is given by $\nabla(D) \textbf{e} = A \textbf{e}$, where $A \in M_n(\mathcal{O}^{\rig}_{\le 1}(\mathcal{X}))$ satisfies $A \equiv \textbf{0} \mod{q}$. We now use an inductive argument to find a basis with respect to which the connection matrix $\nabla(D)$ is constant (we will use the exact same argument in the proof of the more general theorem too): assume that modulo $q^m$, the connection matrix is constant, and trivial modulo $q$. Then, modulo $q^{m+1}$, the connection matrix is of the form $A_{0} + q^mB_m(t)$, where $A_{0}$ has constant entries (which are $0$ modulo $q$), and $B_m(t)$ is a matrix with entries in $R[t^{-1},t]$, with no constant term. We pick a new set of coordinates
\begin{equation*}
{\bf f_{m+1}} = (I - q^mC_m(t)){\bf f_m}
\end{equation*}
where $\bf{f_m}$ is the old set of coordinates and where $C_m(t)$ satisfies
\begin{equation*}
D(C_m(t)) = B_m(t)
\end{equation*}
(note that we assumed $B_m(t)$ has no constant term).
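Since $B_m(t)$ has no constant term, $C_m(t)$ can be written down coefficientwise; the following is a sketch of that computation (assuming, as elsewhere in the paper, the derivation $D = t\frac{d}{dt}$, and with $b_i$ denoting the matrix coefficients of $B_m(t)$):

```latex
B_m(t) = \sum_{i \neq 0} b_i t^i
\quad\Longrightarrow\quad
C_m(t) = \sum_{i \neq 0} \frac{b_i}{i}\, t^i,
\qquad
D(C_m(t)) = t\frac{d}{dt}\, C_m(t) = \sum_{i \neq 0} b_i t^i = B_m(t).
```

This is also the source of the finitely many primes inverted at each stage, remarked upon below: dividing by $i$ inverts the primes dividing the (finitely many) indices $i$ appearing in $B_m(t)$.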
Modulo $q^{m+1}$,
\begin{equation*}
\begin{array}{rcl}
\nabla(D)(\bf{f_{m+1}}) &=& \left((A_{0} + q^mB_m(t))(I - q^mC_m(t)) - q^m\displaystyle D(C_m(t))\right) \bf{f_m}\\[.15in]
&=&A_{0} (I - q^mC_m(t))\displaystyle\bf{f_m} \\[.15in]
&=& A_{0}\displaystyle \bf{f_{m+1}}.
\end{array}
\end{equation*}
Therefore, with respect to the coordinates $\bf{f_{m+1}}$, the connection matrix modulo $q^{m+1}$ is just $A_{0}$. The product \begin{equation*}
\prod\limits_{m=1}^{\infty}(I - q^mC_m(t))
\end{equation*}
clearly converges on the annulus $|t| = 1$. Therefore, $\nabla(D)\textbf{f} = A_0 \textbf{f}$, where $A_0 \in qM_n(K[|q|])$ and $\textbf{f} = \prod\limits_{m=1}^{\infty}(I - q^mC_m(t)) \textbf{e}$. While we might need to invert infinitely many integers in order to change basis from $\textbf{e}$ to $\textbf{f}$, note that at each stage (that is, modulo $q^m$) we only invert finitely many primes. Therefore, with respect to $\textbf{f}$, for almost every prime ideal $\mathfrak{p}$, the vector bundle with connection has vanishing $p$-curvatures (modulo $q^m$). This is true for every $m$, but the set of primes to be omitted might grow with $m$. The following lemma implies that $A_0 = \textbf{0}$. The theorem follows.
\end{proof}
\begin{Lemma}\label{A_0}
Let $A_0 \in qM_n(K[|q|])$, and let $V$ be a vector bundle along with a basis $\textbf{f}$ and a connection $\nabla$, such that the connection matrix of $\nabla$ is $A_0$. Suppose that for each $m \in \mathbb{N}$, the $p$-curvatures of $\nabla$ vanish modulo ${q^m}$ for all primes outside a finite set (which could depend on $m$). Then $A_0 =\textbf{0}$ and $\textbf{f}$ is a basis of horizontal sections.
\end{Lemma}
\begin{proof}
Being $\textbf{0}$ modulo $q$, $A_0$ is topologically nilpotent, and therefore nilpotent modulo $q^m$. For large enough primes $p$, the $p$-curvature (modulo $q^m$) vanishes, and $A_0^p \equiv \textbf{0} \mod{q^m}$. We have
\begin{equation*}
\begin{array}{rcl}
\displaystyle {\bf 0} &=& \nabla(D)^p - \nabla(D^p)\\[.12in]
\displaystyle & = & A_0^p - A_0 \\[.12in]
\displaystyle & = & -A_0
\end{array}
\end{equation*}
with all equalities read modulo $(\mathfrak{p},q^m)$. This means that $A_0$ (mod $q^m$) is $\bf{0}$ modulo $\mathfrak{p}$ for almost all primes $\mathfrak{p} \subset \mathcal{O}$, and so $A_0 \equiv \textbf{0} \mod{q^m}$. Now letting $m$ tend to infinity, we see that $A_0 = \textbf{0}$.
The claim about $\textbf{f}$ follows immediately.
\end{proof}
\subsection{Proof of the $p$-curvature conjecture for rigid-analytic annuli}
We first prove a lemma.
\begin{Lemma}\label{lem}
Let $(V,\nabla)$ be a vector bundle with connection on $\mathcal{X}_{r_1,r_2}$, such that $r_1 \le 1 \le r_2$. Suppose that the entries of the connection matrix $A = \sum A_n t^n$ (with respect to a chosen basis ${\bf e}$) satisfy \eqref{growthcond}. Suppose further that $A$ modulo $q$ is constant and semisimple with integer eigenvalues. Then there exist
\begin{enumerate}
\item A basis $\textbf{f}$ of $V$ such that the connection matrix is trivial modulo $q$.
\item A closed subannulus $\mathcal{X}_{r'_1,r'_2}$ of $\mathcal{X}_{r_1,r_2}$ such that $\mathcal{X}_{r'_1,r'_2} \supset \mathcal{X}$, and such that the connection matrix with respect to $\textbf{f}$ satisfies \eqref{growthcond} on $\mathcal{X}_{r'_1,r'_2}$. Further, $r'_1 < 1$ if $r_1 < 1$, and $r'_2>1$ if $r_2 >1$.
\end{enumerate}
\end{Lemma}
\begin{proof}
We remark that if $r_1 <1$ and $r_2 >1$, then \eqref{growthcond} implies that $A$ modulo $q$ is constant. Suppose that $X$ is an element of $GL_n(K)$ which diagonalises $A$ modulo $q$. Pick ${\bf f_0} = X \bf{e}$. The connection matrix with respect to ${\bf f_0}$ is $X^{-1}AX$. Note that the entries (which we denote by $a_{kj}(t)$) of the new connection matrix (which we still denote by $A$) satisfy \eqref{growthcond} on the annulus with radii $r_i$, and that $A$ is now diagonal with integer entries modulo $q$.
Suppose that
\begin{equation*}
A_0 \equiv
\left(
\begin{array}{cccc}
d_0&&&\\
&d_1&&\\
&&\ddots & \\
&&&d_{n-1}
\end{array}
\right)
\pmod{q}
\end{equation*}
where the $d_i$ are the integer eigenvalues alluded to just above. Let $\textbf{f}$ be defined to be
\begin{equation*}
{\bf f} =
\left(
\begin{array}{cccc}
t^{-d_0}&&&\\
&t^{-d_1}&&\\
&&\ddots & \\
&&&t^{-d_{n-1}}
\end{array}
\right)
{\bf f_0}
\end{equation*}
The connection matrix with respect to $\textbf{f}$ is trivial modulo $q$, thus proving the first assertion. We now demonstrate the existence of $\mathcal{X}_{r'_1,r'_2}$. Denote by $a'_{kj}(t)$ the entries of the new connection matrix. For $k \neq j$, we have
\begin{equation*}
a'_{kj}(t) = t^{d_k - d_j}a_{kj}(t),
\end{equation*}
and we have that
\begin{equation*}
a'_{kk}(t) = a_{kk}(t) - d_k.
\end{equation*}
The $r_i = 1$ case follows directly from the previous two equations, so suppose that $r_1 < 1$. The $a'_{kk}(t)$ satisfy \eqref{growthcond} on $\mathcal{X}_{r_1,r_2}$, and the $a'_{kj}(t)$ satisfy the strict version of \eqref{growthcond} on $\mathcal{X}$. Fixing $k \neq j$, suppose that $a_{kj}(t) = \sum b_i t^i$. Then $a'_{kj}(t) = t^{d_k-d_j} \sum b_i t^i$. Let $m=d_k-d_j$. We have
\begin{equation*}
|b_i|r_1^i \le 1, \qquad |b_i| < 1.
\end{equation*}
Both quantities tend to zero as $|i|$ tends to infinity. To demonstrate the existence of $\mathcal{X}_{r'_1,r'_2}$, we need to find some $r'_1 < 1$ so that the following inequalities hold for all values of $i$:
\begin{equation*}
|b_i|{r'_1}^{i+m} \le 1, \qquad |b_i| < 1.
\end{equation*}
Note that if $m \ge 0$, then the inequalities automatically hold with $r'_1 = r_1$. Assume therefore that $m < 0$. For $i \ge -m$, any $r'_1 \le 1$ works. As $|b_i| < 1$ for all $i$, it is possible to choose an $r'_1<1$ such that $|b_i|{r'_1}^{i+m}<1$ for $0 \le i \le -m$. This ensures that the required inequalities hold for $i \ge 0$. Further refining our choice so that ${r'_1}^{-1+m} \le r_1^{-1}$, the required inequalities hold for negative $i$ too, thereby demonstrating the existence of $r'_1$. The exact same argument works for $r_2$ as well, proving the lemma.
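Spelling out these constraints, one admissible choice can be made explicit (a sketch under the notation above, with $m < 0$): any $r'_1$ with

```latex
\max\Bigl(\, r_1^{\,1/(1-m)},\ \max_{0 \le i < -m} |b_i|^{\,1/(-i-m)} \,\Bigr)
\;<\; r'_1 \;<\; 1
```

works. Every term inside the maximum is $<1$ (because $r_1 < 1$, $1-m > 1$ and $|b_i| < 1$), the first term forces ${r'_1}^{-1+m} \le r_1^{-1}$, and the second forces $|b_i|{r'_1}^{i+m} < 1$ for $0 \le i < -m$.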
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:main}]
Let the connection matrix be $A = \sum A_it^i$. Choose $r$ with $r_1 \le r \le r_2$ of the form $r =|q|^{\alpha}$ ($\alpha \in \mathbb{Q}$). By replacing $t$ by $t/q^{\alpha}$, we may assume that $r=1$. The constant term of the new connection matrix stays the same. By Corollary \ref{grow}, the entries of the connection matrix have coefficients in $K[|q|]$. Reducing modulo $q$, we obtain a vector bundle with connection over the ring $K[t^{-1},t]$, the coordinate ring of $\mathbb{G}_m$ over $K$.
According to Katz, the vector bundle with connection on $\mathbb{G}_m$ over $K$ has regular singularities at 0 and $\infty$. Therefore, with respect to a cyclic basis, the entries of the connection matrix $\nabla(D)$ have to be regular functions on $\mathbb{P}^1$ (this follows directly from \cite[Theorem 11.9]{Katz}), and so $\nabla(D)$ has to be a constant matrix. Therefore, $A \equiv A_0 \mod{q}$. Note that this does not depend on the initial choice of $r$.
The vanishing of the $p$-curvatures implies that $A_0$ modulo $q$ is semisimple with rational eigenvalues. Pulling back from the rigid annulus by the map $t \mapsto s^m$ for some $m$, a common multiple of all the denominators, we get a vector bundle with connection on the rigid annulus with inner and outer radii $r_1^{1/m}$ and $r_2^{1/m}$ respectively. With respect to the same basis, and the derivation $D = s\frac{d}{ds}$, the connection matrix gets multiplied by a factor of $m$. For ease of notation, we replace $s$ by $t$ and $r_i^{1/m}$ by $r_i$. $A_0$ is therefore semisimple with integer eigenvalues modulo $q$. We again remark that the pullback does not depend on the $r$ chosen.
We now solve the differential equation in a subannulus containing $\mathcal{X}$. There exists a subannulus $\mathcal{X}_{r'_1,r'_2}$ of $\mathcal{X}_{r_1,r_2}$, satisfying the conclusions of Lemma \ref{lem}. Note that the $p$-curvatures still make sense and vanish for almost every $p$. This is because both the change of basis matrices used in the lemma make sense and are invertible modulo almost every $p$.
We now use the same inductive argument as in the proof of Theorem \ref{thm:use}, the base case of which has just been demonstrated: letting $\textbf{f}_m$ and $C_m$ be as before, we change basis by the matrix $I - q^mC_m(t)$ to make $\nabla(D)$ constant modulo $q^{m+1}$. Therefore, with respect to the basis $\textbf{f} = \prod\limits_{m=1}^{\infty}(I - q^mC_m(t)) \textbf{e}$, the connection matrix is constant and satisfies the hypothesis of Lemma $\ref{A_0}$. The connection has a full set of solutions (by Lemma \ref{A_0}) wherever the product $\prod\limits_{m=1}^{\infty}(I - q^mC_m(t))$ converges.
According to the lemma below, this converges on a subannulus containing $|t|=1$, with inner and outer radii $r''_i$, where $r''_1 < 1$ if $r'_1$ is, and $r''_2 > 1$ if $r'_2$ is. Hence, it is possible to cover $\mathcal{X}_{r_1,r_2}$ by a finite number of closed subannuli such that on each subannulus there exists a full set of solutions. These solutions clearly glue on the intersections and therefore form global solutions on the entire rigid annulus.
\end{proof}
\begin{Lemma}
Notation as in the proof of the theorem. The infinite product $\prod\limits_{m=1}^{\infty}(I - q^mC_m(t))$ converges on a subannulus $\mathcal{X}_{r''_1,r''_2} \supset \mathcal{X}$. Further, $r''_1 < 1$ if $r'_1$ is, and $r''_2 > 1$ if $r'_2$ is.
\end{Lemma}
\begin{proof}
We have that $\prod\limits_{m=1}^{\infty}(I - q^mC_m(t))$ converges on $\mathcal{X}$. We first prove that $q^mC_m(t)$ also satisfies the inequalities \eqref{growthcond} with respect to the annulus with radii $r'_i$. Suppose that $_mA(t)$ is the connection matrix at the $m^{th}$ stage (i.e. with respect to the basis ${\bf f_m}$). If $q^mC_m(t)$ satisfies \eqref{growthcond}, then so do $D(q^mC_m(t))$ and $(I-q^mC_m(t))^{\pm 1}$. We have
\begin{equation*}
\displaystyle _{m+1}A(t) = (I-q^mC_m(t))^{-1} {_m}A(t)(I-q^mC_m(t)) + (I-q^mC_m(t))^{-1}D(q^mC_m(t))
\end{equation*}
Therefore, if $_mA(t)$ and $q^mC_m(t)$ satisfy $\eqref{growthcond}$, then $_{m+1}A(t)$ also does.
On the other hand, $_mA(t)$ has no non-constant terms in which the power of $q$ is lower than $m$. Therefore, if $_mA(t)$ satisfies \eqref{growthcond}, so does $q^mB_m(t)$, and hence so does $q^mC_m(t)$. As $_1A(t)$ clearly satisfies \eqref{growthcond}, all the $q^mC_m(t)$ do too.
To finish the proof, note that $q^mC_m(t)$ is bounded by $|q|^m$ on $|t|=1$, and by $1$ on $|t| =r'_1$. The non-archimedean analogue of the Hadamard three-circle theorem (\cite[Proposition 7.2.3]{Kedlaya}) implies that for every positive $\epsilon$, $q^mC_m(t)$ is bounded by $|q^{c_{\epsilon}}|^m$ on $|t|=r'_1+\epsilon$, where $c_{\epsilon}$ is positive and independent of $m$. It follows that $\prod\limits_{m=1}^{\infty}(I - q^mC_m(t))$ converges on the subannulus with inner radius $r'_1 + \epsilon$ for any positive $\epsilon$. The same argument works for $r'_2$ as well, and the lemma follows.
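The three-circle bound can be spelled out: writing $g_m = q^mC_m(t)$ and using the log-convexity of the sup norm in $\log|t|$ (a sketch, with $\theta_{\epsilon}$ the interpolation weight),

```latex
\sup_{|t| = r'_1 + \epsilon} |g_m|
\;\le\;
\Bigl(\sup_{|t| = r'_1} |g_m|\Bigr)^{1-\theta_{\epsilon}}
\Bigl(\sup_{|t| = 1} |g_m|\Bigr)^{\theta_{\epsilon}}
\;\le\;
|q|^{m\theta_{\epsilon}},
\qquad
\theta_{\epsilon} = \frac{\log(r'_1+\epsilon) - \log r'_1}{-\log r'_1} \in (0,1],
```

so that $c_{\epsilon} = \theta_{\epsilon}$ works, and is indeed positive and independent of $m$.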
\end{proof}
\section{Families of holomorphic connections}
In this section, we give a rigid criterion for a holomorphic family of vector bundles with connections on certain families of annuli to have global solutions.
Let $X$ and $E$ be a holomorphic annulus and disc with coordinates $t$ and $q$ respectively, with respect to which $X$ and $E$ are centered at $0$. All holomorphic functions on $X \times E$ can be expressed as series in $t$, $t^{-1}$ and $q$, which converge absolutely on any compact subset of $X \times E$.
The field $\mathbb{C}((q))$ equipped with $\nu_q$, the $q$-adic valuation, is a complete non-archimedean field. As in \S 3, let $\mathcal{X}$ be the closed $q$-adic annulus over $\mathbb{C}((q))$ with inner and outer radii $1$.
Let $f \in \mathcal{O}^{\hol}(X\times E)$, $f(t,q) = \sum_{n \in \mathbb{Z}} a_n(q) t^n$ where the $a_n(q) \in \mathbb{C}[|q|]$. If $\nu_q(a_n(q)) \rightarrow \infty$ as $|n| \rightarrow \infty$, then $f$ converges on the rigid annulus $\mathcal{X}$, and we say that $f \in \mathcal{O}^{\rig}_{\le 1}(\mathcal{X})$ (notation as in \S 3, with $R = \mathbb{C}[|q|]$). The main result we prove in this section is the following:
\begin{Proposition}\label{csol}
Let $V$ be a holomorphic rank $n$ vector bundle on $X \times E$, along with a connection $\nabla$ relative to $E$. Suppose that the connection matrix $A$ with respect to a basis $\textbf{e}$ has the following properties:
\begin{enumerate}
\item $A$ has entries in $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X})$.
\item The vector bundle with connection on $\mathcal{X}$ whose connection matrix is $A$ has a full set of rigid solutions over $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X})$.
\end{enumerate}
Then $\nabla$ has a full set of solutions.
\end{Proposition}
\subsection{Preliminaries}
The data of $(V, \nabla)$ as in Proposition \ref{csol} is the same as a holomorphic family of connections $\nabla_{ X\times \{Q\}}$ on $V_{ X \times \{Q\}}$, parameterised by points $Q \in E$. A section $s$ is a solution if it is in the kernel of $\nabla$. The set of solutions forms an $\mathcal{O}^{\hol}(E)$-module. We first show the existence of local solutions.
\begin{Lemma}\label{holex}
Let $Y$ be a simply connected open subset of $\mathbb{C}$, and $E_d$ be a holomorphic $d$-dimensional polydisc. Suppose that $(V, \nabla)$ is a vector bundle on $Y \times E_d$ equipped with a connection relative to $E_d$. There exists a full set of solutions for the connection $\nabla$.
\end{Lemma}
\begin{proof}
It suffices to prove the lemma in the case where $Y$ is a disc. Suppose that the coordinate on $Y$ is $y$, with respect to which $Y$ is centered at 0. Let ${\bf q} = (q_1, \hdots, q_d)$ be a set of coordinates on $E_d$, with respect to which $E_d$ is centered at ${\bf 0} = (0,\hdots, 0)$. Let $\textbf{e}$ be a basis of sections of $V$. We have that
$$\nabla(\frac{d}{dy})\textbf{e} = A(y,{\bf q})\textbf{e}, \qquad A(y,{\bf q}) = \sum_{n \ge 0} A_n({\bf q})y^n,$$
where the $A_n({\bf q})$ are power series in the $q_i$ converging on $E_d$. Because $A(y,{\bf q})$ converges on $Y \times E_d$, the series $\sum_{n \ge 0} A_n({\bf q_0})y^n$ converges on $Y$ for every ${\bf q_0} \in E_d$. We claim that there exists a unique matrix $U(y,{\bf q})$ with entries in $\mathbb{C}[|y,q_1, \hdots ,q_d|]$ such that $U(y,{\bf q}) \equiv \textbf{I} \mod{y}$, and such that $U(y,{\bf q})\textbf{e}$ form a formal set of solutions for the connection. Indeed, this follows by writing $U(y,{\bf q}) = \sum_{i=0}^{\infty}U_i({\bf q})y^i$, and solving the equation $\nabla(\frac{d}{dy})U\textbf{e} = \textbf{0}$ recursively. The solution is seen to be
$$U_{i+1}({\bf q}) = -\sum_{j=0}^{i}A_j({\bf q})U_{i-j}({\bf q})/(i+1).$$
Note that if we specialized to a point ${\bf q} = {\bf q_0}$, then $U(y,{\bf q_0})\textbf{e}$ would be a set of holomorphic solutions of $(V,\nabla)$ pulled back to $Y \times \{{\bf q_0}\}$ (this well-known fact is also proved in \cite[Theorem 6.2.1]{Kedlaya}). This implies the convergence of $U(y,{\bf q})$ on all of $Y \times E_d$, proving the lemma.
\end{proof}
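As a sanity check on the recursion (not part of the argument), in the scalar case with constant $A(y,{\bf q}) \equiv a$ the recursion $U_{i+1} = -\sum_{j \le i} A_j U_{i-j}/(i+1)$ collapses to $U_{i+1} = -aU_i/(i+1)$, whose solution is the Taylor series of $e^{-ay}$. A minimal numerical sketch (the function name `solve_formal` is ours, purely illustrative):

```python
from math import factorial, isclose

def solve_formal(A_coeffs, N):
    """Scalar instance of the recursion U_{i+1} = -sum_{j<=i} A_j * U_{i-j} / (i+1),
    normalised by U_0 = 1 (the condition U = I mod y in the lemma)."""
    U = [1.0]
    for i in range(N):
        s = sum(A_coeffs[j] * U[i - j] for j in range(min(i + 1, len(A_coeffs))))
        U.append(-s / (i + 1))
    return U

# Constant scalar "connection matrix" A(y) = a: the formal solution is exp(-a*y).
a = 2.0
U = solve_formal([a], 10)
expected = [(-a) ** i / factorial(i) for i in range(11)]
assert all(isclose(u, e) for u, e in zip(U, expected))
```

The recursion visibly divides by $i+1$ at each step, which is why the formal solution converges wherever the connection matrix does, as the specialisation argument in the proof confirms.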
For the rest of this section, we fix a point $P\in X$. For every point $Q \in E$, the connection $\nabla_{X\times \{Q\}}$ gives the monodromy representation
$$\rho_{(P,Q)}: \pi_1(X,P) \rightarrow \rm{GL}(V_{(P,Q)}).$$
Recall that we have fixed a basis $\textbf{e}$ of $V$, and so we can (and will) think of the monodromy representation as
$$\rho_{(P,Q)}: \pi_1(X,P) \rightarrow \rm{GL}_n(\mathbb{C}).$$
Because $\pi_1(X,P) = \mathbb{Z}$, the data of $\rho_{(P,Q)}$ is the same as the data of a matrix $\mathcal{M}_{(P,Q)} \in \rm{GL}_n(\mathbb{C})$, where $\mathcal{M}_{(P,Q)}$ is the image under $\rho_{(P,Q)}$ of the generator of $\pi_1$ given by the loop going around $0$ counterclockwise. By Lemma \ref{holex}, this gives a holomorphic map $\mathcal{M}_P$ from $E$ to $\rm{GL}_n(\mathbb{C})$, or an element $\mathcal{M}_P \in \rm{GL}_n(\mathcal{O}^{\hol}(E))$. Notice that if we change basis by $U(t,q) \in \rm{GL}_n(\mathcal{O}^{\hol}(X \times E))$, then $\mathcal{M}_P$ changes by conjugation by $U(P,q) \in \rm{GL}_n(\mathcal{O}^{\hol}(E))$. Henceforth, whenever we refer to monodromy matrices $\mathcal{M} \in \rm{GL}_n(\mathcal{O}^{\hol}(E))$, we will mean with respect to the basis $\textbf{e}$ and the point $P$ (unless explicitly stated otherwise).
\begin{Proposition}
With the setup as above, the vector bundle with connection has a full set of holomorphic solutions if the monodromy matrix is the identity.
\end{Proposition}
\begin{proof}
This follows trivially from the existence of local solutions, as posited by Lemma \ref{holex}.
\end{proof}
\begin{Lemma}
Let $Y$ be a simply connected open subset of $\mathbb{C}$, and let $V$ be a vector bundle on $Y \times E$ along with a basis $\textbf{e}$, equipped with a connection relative to $E$. Suppose that $\nabla_1$ and $\nabla_2$ are two connections whose connection matrices $A_i$ (with respect to the basis $\textbf{e}$) are congruent modulo $q^m$. Then their solutions are also congruent modulo $q^m$.
\end{Lemma}
\begin{proof}
The question is local in $Y$, so we assume that $Y$ is a disc, with coordinate $y$. Fix a point $P \in Y$. Let $g_1(y,q) \in \rm{GL}_n(\mathcal{O}(Y \times E))$ satisfy $-D(g_1) = A_1g_1$ (the existence of $g_1$ is guaranteed by Lemma \ref{holex}). Then $-D(g_1) \equiv A_2g_1 \mod{q^m}$. Therefore, changing basis by $g_1$, we may assume that $A_1 = \textbf{0}$ and $A_2 \equiv \textbf{0} \mod{q^m}$.
Suppose that $g_2(y,q) \in \rm{GL}_n(\mathcal{O}(Y \times E))$ satisfies $-D(g_2) = A_2g_2$. Replacing $g_2$ by $g_2(y,q)g_2^{-1}(P,q)$, we may assume that $g_2(P,q) = \textbf{I}$. As $A_2 \equiv \textbf{0} \mod q^m$, we may write $g_2(y,q) = C(q) + q^mg_3(y,q)$. Evaluating at $y=P$, we see that $C(q) \equiv \textbf{I} \mod q^m$, whence $g_2 \equiv \textbf{I} \mod q^m$, proving the lemma.
\end{proof}
\begin{Lemma}\label{cong}
Let $V,\textbf{e}$ be a vector bundle along with a basis on $X \times E$. Suppose that $\nabla_1$ and $\nabla_2$ are connections on $V$ relative to $E$. If the connection matricies of $\nabla_i$ are congruent modulo $q^m$, then $\mathcal{M}_1$ is congruent to $\mathcal{M}_2$ modulo $q^m$, where $\mathcal{M}_i$ is the family of monodromy representations corresponding to $\nabla_i$.
\end{Lemma}
\begin{proof}
Let $\mathbb{C}_l = \mathbb{C} \setminus l$, where $l$ is some half-line originating at $0$. Define $X_0 \subset X$ and $X'_0 \subset X$ to be the open subsets obtained by intersecting $X$ with $\mathbb{C}_{l_0}$ and $\mathbb{C}_{l'_0}$, where $l_0$ and $l'_0$ are non-parallel half-lines originating at $0$ and not passing through $P$. As $X_0$ and $X'_0$ are simply connected, the restrictions of the $\nabla_i$ to both these spaces have full sets of solutions. Fix a point $P'$ in the connected component of $X_0 \cap X'_0$ not containing $P$.
Suppose that $\{s_{i}\}_{\alpha}$ (with $\alpha$ ranging from $1$ to $n$) form $\mathcal{O}^{\hol}(E)$-bases of solutions for the $\nabla_i$ pulled back to $X_0\times E$, such that the $\{s_1\}_{\alpha}$ are congruent to the $\{s_2\}_{\alpha}$ modulo $q^m$. Similarly, let $\{s'_{i}\}$ be $\mathcal{O}^{\hol}(E)$-bases of solutions for the $\nabla_i$ pulled back to $X'_0 \times E$, which are congruent modulo $q^m$, and such that $\{s_{i}\} = \{s'_i\}$ on the connected component of $X_0 \cap X'_0$ not containing $P$. The monodromy operator $\mathcal{M}_i$ corresponding to our chosen generator of $\pi_1(X)$ is precisely the element of $GL_n(\mathcal{O}^{\hol}(E))$ satisfying $\mathcal{M}_i(\{s'_i\}_{\{P\}\times E}) = \{s_i\}_{\{P\}\times E}$. By the previous lemma, $\{s_1\}_{\{P\}\times E}$ is congruent to $\{s_2\}_{\{P\}\times E}$ and $\{s'_1\}_{\{P\} \times E}$ is congruent to $\{s'_2\}_{\{P\} \times E}$, both modulo $q^m$. It follows that $\mathcal{M}_1$ is congruent to $\mathcal{M}_2$ modulo $q^m$, as required.
\end{proof}
\subsection{Constructing holomorphic solutions}
We now prove Proposition \ref{csol}, using the results proved in the previous subsection.
\begin{proof}[Proof of Proposition \ref{csol}]
Let $\mathcal{M} \in GL_n(\mathcal{O}^{\hol}(E))$ be the monodromy matrix of $\nabla$. By hypothesis, there exists a matrix $g^{\rig} \in \rm{GL}_n(\mathcal{O}^{\rig}_{\le 1}(\mathcal{X}))$ such that $D(g^{\rig}) + Ag^{\rig} = \textbf{0}$. For any $m \in \mathbb{N}$, there exists a matrix $g^{\hol}_m \in \rm{GL}_n(\mathcal{O}^{\hol}(E\times X))$ such that $g^{\hol}_m \equiv g^{\rig} \mod{q^m}$. Therefore, after changing basis of the vector bundle by $g^{\hol}_m$, the new connection matrix $A_m$ is congruent to $\textbf{0}$ modulo $q^m$. By Lemma \ref{cong}, $\mathcal{M}$ (with respect to the old basis) is conjugate to $\textbf{I}$ modulo $q^m$, and therefore $\mathcal{M}$ equals $\textbf{I}$ modulo $q^m$. It follows that $\mathcal{M} = \textbf{I}$, and the proposition follows.
\end{proof}
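The final step of the proof uses only the following elementary fact about $\mathcal{O}^{\hol}(E)$, recorded here for clarity:

```latex
\mathcal{M} \equiv \textbf{I} \pmod{q^m} \ \text{ for all } m
\;\Longrightarrow\;
\mathcal{M} - \textbf{I} \in \bigcap_{m \ge 1} q^m M_n\bigl(\mathcal{O}^{\hol}(E)\bigr) = \textbf{0},
```

since a holomorphic function on the disc $E$ whose Taylor expansion at $q = 0$ vanishes to every order is identically zero.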
\subsection{Monodromy operators for more general families of connections}
Let $U$ be a path-connected complex manifold, and let $\mathcal{A} \rightarrow U$ be a family of annuli. Suppose that $(V,\nabla)$ is a rank-$n$ vector bundle on $\mathcal{A}$ with connection relative to $U$. Because we have not assumed that there is a section $U\rightarrow \mathcal{A}$, there needn't exist a global monodromy operator as in the case of \S 4.1. However, the following result does hold true:
\begin{Lemma}\label{forfutureuse}
For every $u \in U$, there exists a holomorphic map $\mathcal{M}: N_u \rightarrow \rm{GL}_n(\mathbb{C})$ such that:
\begin{enumerate}
\item $N_u$ is a simply connected open neighbourhood of $u$.
\item For every $u' \in N_u$, $\mathcal{M}(u')$ is the monodromy matrix of $(V, \nabla)$ restricted to $\mathcal{A}_{u'}$.
\end{enumerate}
\end{Lemma}
Because $N_u$ is simply connected, there exists a compatible choice of generators for the groups $H_1(\mathcal{A}_{u'},\mathbb{Z})$, and we choose one such. For a vector bundle with connection on an annulus, the monodromy representation is only well defined up to conjugation. When we say that $\mathcal{M}(u')$ is the monodromy matrix of $(V, \nabla)$ restricted to $\mathcal{A}_{u'}$, we mean that the conjugacy class of $\mathcal{M}(u')$ equals the conjugacy class of the monodromy representation evaluated on the chosen generator of $H_1(\mathcal{A}_{u'},\mathbb{Z})$.
\begin{Proposition}\label{futureuse}
Setting as above. Suppose that there exists an open subset $U' \subset U$ such that for every $u' \in U'$, $(V,\nabla)$ pulled back to $\mathcal{A}_{u'}$ has finite monodromy, of order $m$ independent of $u'$. Then for every $u \in U$, $(V,\nabla)$ pulled back to $\mathcal{A}_{u}$ has finite monodromy.
\end{Proposition}
\begin{proof}
This is a direct application of Lemma \ref{forfutureuse}. Let $u\in U$ be any point, and let $\beta$ be a path connecting $u$ and $u'$ ($u' \in U'$ some point). There exists a finite open cover $\{N_i \}$ of $\beta$ where each $N_i$ satisfies the conclusion of Lemma \ref{forfutureuse}. Assume that if we omit any $N_i$, we no longer have a cover. Let $\mathcal{M}_i$ be the corresponding monodromy operators.
By holomorphicity, if any $N_i$ intersects $U'$, then $\mathcal{M}_i^m$ is the constant function ${\bf I}$. Further, if $N_i$ and $N_j$ intersect, and if $\mathcal{M}_i^m$ is ${\bf I}$, then so is $\mathcal{M}_j^m$. Some $N_i$ must intersect $U'$ (as $u' \in \beta$). Therefore, the connectedness of $\beta$ and the assumptions on the $N_i$ give us that each $\mathcal{M}_i^m$ must equal ${\bf I}$, and so $\mathcal{A}_u$ has finite monodromy. As $u$ was arbitrary, the result follows.
\end{proof}
We conclude this section by proving Lemma \ref{forfutureuse}.
\begin{proof}[Proof of Lemma \ref{forfutureuse}]
Fix $u \in U$, and let $a \in \mathcal{A}_u$ be any point. By the implicit function theorem, there exists a neighbourhood of $a \in \mathcal{A}$ which is biholomorphic to $N_a \times X_a$, where $N_a \subset U$ is open and equals a polydisc, and $X_a$ is an open disc inside $\mathbb{C}$. By Lemma \ref{holex}, $(V,\nabla)$ has a full set of solutions on $N_a \times X_a$. Therefore, there exist local solutions to $\nabla$.
Now, fix a point $a \in \mathcal{A}_u$ and a neighbourhood $N_a \times X_a$ as above, and let $\{e_i \}$ be a basis of solutions. As there exist local solutions, we may analytically continue the $\{e_i\}$ around the annulus, until we come back to the point $a$. Suppose that the continuations of the $\{e_i \}$, in some neighbourhood $N'_a \times X'_a$ of $a$, are $\{e'_i\}$. Let $X_u = X_a \cap X_a'$, and let $N_u \subset N_a \cap N'_a$ be any simply connected neighbourhood of $u$. Because the sections $\{e_i \}$ and $\{ e'_i \}$ are holomorphic, there exists $\mathcal{M} \in \rm{GL}_n(\mathcal{O}^{\hol}(N_u \times X_u))$ such that $\mathcal{M} \{ e_i\} = \{ e'_i\}$. Because the sections are solutions of $\nabla$, $\mathcal{M}$ actually lives in $\rm{GL}_n(\mathcal{O}^{\hol}(N_u))$. This gives the required $\mathcal{M}: N_u \rightarrow \rm{GL}_n(\mathbb{C})$.
\end{proof}
\section{Proof of the main technical result}
\subsection{Setup}
Notation is as in Theorem \ref{thm:tech}. Let $d$ be the dimension of $B$. Let $Y = \overline{B} \setminus B$ be the nodal locus. We may assume that $Y$ is a generically smooth divisor (and so, has dimension $d-1$). Indeed, blowing up at the point $s_0$ allows us to assume that the nodal locus is divisorial, and by normalizing, we may assume that it is generically smooth. We also assume that $\gamma_s$ contracts to the node $P \in \mathcal{C}_{s_0}$, where $s_0$ can be assumed to be a smooth point of $Y$.
Let $F$ be the function field of $B$, and $K$ be the function field of $Y$. Suppose that the rational function $q \in F$ is a local uniformising parameter for $Y$. Let $S \subset F$ be the corresponding discrete valuation ring: its maximal ideal is generated by $q$, and its residue field is $K$. The special fiber $\mathcal{C} \times \Spec K$ is a nodal curve with node $P$, and the generic fiber $\mathcal{C} \times \Spec F$ is a smooth curve.
Let $\mathcal{C}^{/P}$ be the complete local ring of $\mathcal{C}$ at $P$, and let $\mathcal{C}^{h,P}$ be the henselization of the local ring of $\mathcal{C}$ at $P$. There exists a positive integer $a$, and functions $x,y \in \mathcal{C}^{h,P}$ such that $\mathcal{C}^{/P} \simeq K[|x,y,q|]/(xy - q^a)$.
\begin{definition}
A $\mathbb{C}$-point of $Y$ is said to be generic enough if it is induced by an embedding $K \hookrightarrow \mathbb{C}$.
\end{definition}
Pick $x_1, \hdots, x_{d-1}$, elements of $K$ that form a transcendence basis. Then the $x_i$ and $q$ give holomorphic coordinates in a holomorphic neighbourhood in $B$ of any generic enough point $s \in Y(\mathbb{C})$. Let $s_q$ be the $1$-dimensional manifold obtained by fixing the values of the $x_i$ and letting $q$ vary (with $|q|$ small enough). Pulling back $\mathcal{C}$ to $s_q$ gives a holomorphic one-parameter family of complex curves. Call this family $\mathcal{C}_{s_q}$. We first establish the result for this family.
\begin{Lemma}
Notation as above. Let $\gamma \subset \mathcal{C}_{s_{q'}}$ (for some non-zero value of $q = q'$) be a simple closed loop which contracts to the node at $q = 0$. Then $\gamma$ has finite order (independent of the point $s$) in the monodromy representation.
\end{Lemma}
\begin{proof}
We set things up so as to apply Theorem \ref{thm:use} and Proposition \ref{csol}. Let $x,y$ be as in the beginning of this section. Consider the $q$-adic formal scheme over $K[|q|]$ (the completion of $S$ at $q$) obtained by completing $\mathcal{C}$ along its special fiber. Suppose that $\mathscr{C}$ is the rigid space over $K((q))$ associated to the generic fiber of the formal scheme.
The formal tube of $\mathscr{C}$ over $P$ is an open rigid annulus $\mathcal{X}^o$. The inner and outer radii of $\mathcal{X}^o$ are $|q|^a$ and $1$. We have that $\mathcal{O}^{\rig}_{\le 1}(\mathcal{X}^o)= \mathcal{C}^{/P}$. The pullback of $(V,\nabla)$ to $\mathcal{X}^o$ descends to the ring $\mathcal{C}^{h,P}[1/q]$. By the cyclic basis theorem (for instance, see \cite[Theorem 4.4.2]{Kedlaya}), there exists a function $f \in \mathcal{C}^{h,P}[1/q]$ such that $(V,\nabla)$ pulled back to $\mathcal{C}^{h,P}[1/fq]$ is the trivial vector bundle with a basis $\textbf{e}$ cyclic for $\nabla$, with $D= x\frac{d}{dx}$.
Let $\alpha < 1$ be a positive rational number such that $f$ has no zeros in the rigid subannulus $\mathcal{X}'$ of $\mathcal{X}^o$ given by $|x| = |q|^{\alpha}$. By pulling back by a map of the form $q \mapsto q^m$, we may assume that $\alpha$ is a positive integer. We also have that $\alpha < a$ (recall that $\mathcal{X}^o$ has inner radius $|q|^a$).
Let $t = x /q^{\alpha}$. Note that $t \in \mathcal{C}^{h,P}[1/q]$. Then $1/f$ has a series expansion in terms of $t, t^{-1}$ which converges on the rigid annulus $\mathcal{X} \subset \mathcal{X}^o$, given by $|t| = 1$. Notice that all the above data can be reduced modulo $p$ for almost all primes, because the functions $x,t,f$ are elements of $\mathcal{C}^{h,P}[1/q]$ and the basis $\textbf{e}$ is defined over $\mathcal{C}^{h,P}[1/q]$. Further, almost all $p$-curvatures vanish.
A holomorphic neighbourhood of the node is described locally by the conditions $xy = q^a$, $|x|,|y|,|q| < \epsilon$. Again, $x,y$ are as above, and $\epsilon > 0$ is some real number that is small enough. In terms of $t$, the neighbourhood is now given by $0 < |q| < \epsilon$, $|q|^{a-\alpha}/\epsilon < |t| < \epsilon/|q|^{\alpha}$. Suppose that $E$ is the open disc centered at $q=0$ with radius $\epsilon$, and let $E^{\times}$ equal $E \setminus \{0\}$. Choose $X'$ to be a holomorphic annulus centered at $t=0$ such that $X' \times E^{\times}$ is a subset of the above neighbourhood, and such that the series $f$ converges on $X' \times E^{\times}$. Fixing an appropriate value of $q$ with $|q| = \epsilon' \le \epsilon$, the series expansion of $1/f$ converges on a complex analytic subannulus $X$ of $X'$. This series has only finitely many negative powers of $q$, because it converges on $\mathcal{X}$. Therefore, the series of $1/f$ has to converge for any non-zero value of $q$ with $|q| < \epsilon'$. Replace $\epsilon$ with $\epsilon'$.
We have set things up so that the (cyclic) basis $\textbf{e}$ of $V$, and the entries of the connection matrix with respect to $\textbf{e}$, converge both on the rigid annulus $\mathcal{X}$ and on the family of holomorphic annuli $X \times E^{\times}$, with at worst a pole of finite order at $q=0$. We now apply Theorem \ref{thm:use}, part 2, to deduce the existence of rigid solutions (after finite pullback). By Theorem \ref{thm:use}, part 1, the entries of the connection matrix do not have poles in $q$, and so $(V,\nabla)$ extends to $X \times E$. We may apply Proposition \ref{csol} to deduce that $(V ,\nabla)$ has finite monodromy. The result follows.
\end{proof}
\begin{Lemma}\label{lem:nbd}
Notation and setting as above. Then, in a complex analytic neighbourhood $U$ of $s_0$, the vanishing loop in each smooth fiber acts with finite order (independent of the fiber) in the monodromy representation.
\end{Lemma}
\begin{proof}
The set of generic enough points of $Y$ is dense (in the analytic topology). Therefore, for an appropriate open neighbourhood $U$ of $s_0$, the set of points $s_q$ (where $s\in Y(\mathbb{C})$ ranges over generic enough points of $Y$) is dense in $U$. By the above lemma, the vanishing loop acts with finite order (independent of the point) for a dense set of points in $U$. The lemma follows from the holomorphicity of $\nabla$ and Lemma \ref{forfutureuse}.
\end{proof}
\subsection{Completion of the proof}
\begin{proof}[Proof of Theorem \ref{thm:tech}]
Let $\beta$ be a path connecting $s$ to $s_0$ such that $\gamma_s$ can be pinched off along $\beta$. If $d$ (the dimension of $B$) is greater than $1$, we may assume that $\beta$ does not intersect itself. Otherwise, replace $\overline{B}$ with $\overline{B} \times \mathbb{P}^1$ and pull everything back by the projection to $B$ so that $d > 1$.
Suppose that $U' \subset \mathcal{B}^{\an}$ is a contractible open set containing the intersection of $\beta$ with the interior of $B$. Then $\gamma_s$ defines a holomorphic family of annuli $X_{U'}$ over $U'$. Therefore, showing that $\gamma_s$ acts with finite order in the monodromy representation is the same as proving that $(V, \nabla)$ pulled back to $X_{U'}$ has finite monodromy. This is true by Lemma \ref{lem:nbd} and Proposition \ref{futureuse}. The theorem follows.
\end{proof}
\section{Applications}
\subsection{The universal family over $\mathcal{M}_{0,d}$}
Let $\mathcal{M}_{0,d}$ be the moduli space of $d$ ordered points on $\mathbb{P}^1$, with $d>3$, and let $\mathcal{C} \rightarrow \mathcal{M}_{0,d}$ be the universal family. To account for the $\textrm{PSL}_2$ action on $\mathbb{P}^1$, we may assume that the first three marked points are $\{0,1,\infty\}$ in that order. $\mathcal{M}_{0,d}$ can then be identified with ${\mathbb{P}^1}^{d-3} \setminus \Delta$, the locus of $(Q_4,Q_5, \hdots, Q_{d}) \in {\mathbb{P}^1}^{d-3}$ where the $Q_i$ are all distinct and different from $\{0,1,\infty\}$. Let the ordered marked points on $\mathcal{C}$ be $Q_1, Q_2, \hdots, Q_d$, where $Q_1, Q_2, Q_3$ equal $0,1,\infty$ respectively. Note that $\mathcal{C}\rightarrow \mathcal{M}_{0,d}$ is defined over $\mathbb{Z}[1/m]$ for some appropriate $m$. Let $\mathcal{C}^o$ denote the open subscheme $\mathcal{C}\setminus \{Q_1, \hdots, Q_d\}$. Every simple closed loop $\gamma$ induces a partition $\{Q_1, \hdots, Q_d\} = S_1 \cup S_2$ of the marked points into two subsets. Conversely, every partition into two sets is induced by some simple closed loop $\gamma$.
\begin{definition}
Call a partition $\{Q_1, \hdots, Q_d\} = S_1 \cup S_2$ {\it nice} if $Q_1, Q_2, Q_3 \in S_1$ and $S_2 \neq \emptyset$.
\end{definition}
We now prove Theorem \ref{P1}.
\begin{proof}[Proof of Theorem \ref{P1}]
As in the proof of Theorem \ref{gen}, it suffices to prove the result in the case of {\it the} generic family. Let $\overline{\mathcal{M}_{0,d}}$ be the compactification of $\mathcal{M}_{0,d}$ constructed in \cite{Knudsen}. The family $\mathcal{C} \rightarrow \mathcal{M}_{0,d}$ extends to a family $\overline{\mathcal{C}} \rightarrow \overline{\mathcal{M}_{0,d}}$. In words, the compactification can be described as follows: the non-compactness of $\mathcal{M}_{0,d}$ arises from the fact that the points $Q_i$ are not allowed to coincide, so to ``approach" the boundary of $\mathcal{M}_{0,d}$ is to ``let some of the points $Q_i$ collide" (see also \cite[Chapter 3, G]{Harris}). The boundary divisors correspond to two-set partitions $\{Q_1,\hdots, Q_d\} = S_1 \cup S_2$ with $|S_i|>1$. If the partition is not nice, assume without loss of generality that $|S_2 \cap \{0,1,\infty\}| = 1$. Regardless of the niceness of the partition, one can approach the boundary divisor corresponding to $S_1 \cup S_2$ by allowing the points of $S_2$ to collide (in case the partition is not nice, allow the $Q_i \in S_2$ with $i>3$ to move towards the remaining marked point of $S_2$). The family of curves degenerates with the loop inducing this partition getting pinched to a point.
With this description in hand, we prove the theorem. Look at the partition induced by $\gamma$. If $|S_2| = 1$, then Katz's original theorem applies. Otherwise, the discussion above shows that $\gamma$ can be pinched off to a point by approaching the boundary divisor corresponding to the partition. Therefore, Theorem \ref{thm:tech} applies to the family $\mathcal{C}^o \rightarrow \mathcal{M}_{0,d}$, and the result follows.
\end{proof}
\subsection{The generic $d$-pointed curve}
Let $\mathcal{M}_{g,d}$ denote the moduli space of curves with $d$ marked points. If $g \geq 2$ or $d \geq 2$, there exists an open subscheme $U \subset \mathcal{M}_{g,d}$ over which the moduli problem is fine. In this case let $\overline{\mathcal{M}_{g,d}}$ be the compactification of $\mathcal{M}_{g,d}$.
In case $g = d = 1$, we consider the Legendre family of elliptic curves $y^2 = x(x-1)(x- \lambda)$ over $\mathbb{P}^1 \setminus \{0,1,\infty\}$. In either case, let $\mathcal{C}$ be the family of curves over either $\mathcal{M}_{g,d}$ or $\mathbb{P}^1 \setminus \{0,1,\infty\}$, with the marked points removed. We now prove Theorem \ref{genpts} using Theorem \ref{thm:tech}.
\begin{proof}[Proof of Theorem \ref{genpts}]
As in the case of Theorem \ref{gen}, it suffices to prove the result for {\it the} generic family, or the Legendre family (depending on whether $(g,d) = (1,1)$ or not). The proof mainly consists of combining the arguments used to prove Theorems \ref{P1} and \ref{gen}. We will therefore content ourselves with simply sketching the salient points of the proof.
The mapping class group of a genus one curve with a single puncture acts transitively on the set of isotopy classes of non-trivial separating simple closed loops, and on the set of isotopy classes of non-separating simple closed loops. Therefore, by choosing an appropriate path to $0, 1$ or $\infty$, every simple closed loop in a smooth fiber of the Legendre family can be contracted to a point.
Therefore, we assume that $d>1$ or $g>1$. Let $s \in U(\mathbb{C})$ and let $\gamma \subset \mathcal{C}^{\an}_s$ be a simple closed loop. Suppose first that the image of $\gamma$ in the fundamental group of the compactified curve is trivial. This means that $\gamma$ is separating, and one of the connected components of $\mathcal{C}_s \setminus \gamma$ is homeomorphic to the complex unit disc with finitely many punctures given by $Q_1, \hdots, Q_i$. If $i = 0$ then $\gamma$ is contractible. If $i = 1$, then Katz's theorem applies. If $i> 1$, then approach the boundary of $\mathcal{M}_{g,d}$ by allowing the points $Q_1, \hdots, Q_i$ to ``come together". This boundary component of $\overline{\mathcal{M}_{g,d}}$ is fine. The family of curves degenerates with $\gamma$ getting pinched to a point, and the result follows by applying Theorem \ref{thm:tech}.
Otherwise, we proceed as in the proof of Theorem \ref{gen}. The results about the transitivity of the mapping class group acting on the isotopy classes of simple closed loops still hold; i.e., the action on non-separating simple closed loops is transitive, and two separating loops are in the same orbit if and only if the homeomorphism classes of the components for the first loop are the same as those for the second. In case $\gamma$ is non-separating, we degenerate to the boundary divisor $D_{\irr}$ consisting of irreducible nodal $d$-pointed curves, which is generically fine. Using the same method as in Theorem \ref{gen}, we may apply Theorem \ref{thm:tech} to prove the result for non-separating loops.
If $\gamma$ is separating, we use the boundary divisors $D_{g_1,d_1,g_2,d_2}$ given by reducible nodal curves, whose components are two irreducible smooth $d_i$-pointed curves of genus $g_i$, satisfying $d_1 + d_2 = d$, $g_1 + g_2 = g$ and $g_1 \leq g_2$. Unless $g_1 = 1$ and $d_1 = 0$, the generic point of this divisor is fine and the result follows by applying Theorem \ref{thm:tech}.
Finally, if $g_1 = 1$ and $d_1 = 0$, we pull back to an appropriate cover of $\overline{\mathcal{M}_{g,d}}$ until the moduli problem becomes fine just as in Theorem \ref{gen}, and then apply Theorem \ref{thm:tech} appropriately.
\end{proof}
| {
"timestamp": "2016-10-19T02:07:40",
"yymm": "1610",
"arxiv_id": "1610.05674",
"language": "en",
"url": "https://arxiv.org/abs/1610.05674",
"abstract": "The Grothendieck-Katz $p$-curvature conjecture is an analogue of the Hasse Principle for differential equations. It states that a set of arithmetic differential equations on a variety has finite monodromy if its $p$-curvature vanishes modulo $p$, for almost all primes $p$. We prove that if the variety is a generic curve, then every simple closed loop on the curve has finite monodromy.",
"subjects": "Number Theory (math.NT)",
"title": "The $p$-curvature conjecture and monodromy about simple closed loops"
} |
https://arxiv.org/abs/1511.03828 | Multiply union families in $\mathbb{N}^n$ | Let $A\subset \mathbb{N}^{n}$ be an $r$-wise $s$-union family, that is, a family of sequences with $n$ components of non-negative integers such that for any $r$ sequences in $A$ the total sum of the maximum of each component in those sequences is at most $s$. We determine the maximum size of $A$ and its unique extremal configuration provided (i) $n$ is sufficiently large for fixed $r$ and $s$, or (ii) $n=r+1$. | \section{Introduction}
Let $\N:=\{0,1,2,\ldots\}$ denote the set of non-negative integers,
and let $[n]:=\{1,2,\ldots,n\}$.
Intersecting families in $2^{[n]}$ or $\{0,1\}^n$ are one of the main objects in
extremal set theory. The equivalent dual form of an intersecting family
is a union family, which is the subject of this paper.
In \cite{FT} Frankl and Tokushige proposed to consider such problems not
only in $\{0,1\}^n$ but also in $[q]^n$. They determined
the maximum size of 2-wise $s$-union families (i) in $[q]^n$ for $n>n_0(q,s)$,
and (ii) in $\N^3$ for all $s$ (the definitions will be given shortly).
In this paper we extend their results
and determine the maximum size and structure of $r$-wise $s$-union
families in $\N^{n}$ for the following two cases: (i) $n\geq n_0(r,s)$,
and (ii) $n=r+1$.
Much research has been done for the case of families in $\{0,1\}^n$,
and there are many challenging open problems.
The interested reader is referred to \cite{F1,F2,FT0,T1,T2}.
For a vector $\xx\in\R^n$, we write $x_i$ or $(\xx)_i$
for the $i$th component, so $\xx=(x_1,x_2,\ldots,x_n)$.
Define the {\sl weight} of $\aa\in\N^n$ by
\[
|\aa|:=\sum_{i=1}^n a_i.
\]
For a finite number of vectors $\aa,\bb,\ldots,\zz\in \N^n$ define the
join $\aa\myor \bb\myor\cdots\myor\zz$ by
\[
(\aa\myor \bb\myor\cdots\myor\zz)_i:=\max\{a_i,b_i,\ldots,z_i\},
\]
and we say that $A\subset \N^n$ is $r$-wise $s$-{\sl union} if
\[
|\aa_1\myor \aa_2\myor\cdots\myor\aa_r|\leq s\text{ for all }
\aa_1,\aa_2,\ldots,\aa_r\in A.
\]
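For readers who like to experiment, the join and the $r$-wise $s$-union condition are straightforward to test by brute force. The following Python sketch (an illustration added here, not part of the paper; all names are ours) checks the condition for a small family in $\N^3$.

```python
from itertools import combinations_with_replacement

def join(*vecs):
    """Componentwise maximum (the join) of vectors in N^n."""
    return tuple(max(comps) for comps in zip(*vecs))

def weight(v):
    """The weight |v|: the sum of the components."""
    return sum(v)

def is_r_wise_s_union(A, r, s):
    """True iff |a_1 v ... v a_r| <= s for all a_1,...,a_r in A.
    Choices with repetition suffice, since the join is idempotent."""
    return all(weight(join(*choice)) <= s
               for choice in combinations_with_replacement(A, r))

# The full down set of (1,1,1) in N^3 is 2-wise 3-union ...
A = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
assert is_r_wise_s_union(A, 2, 3)
# ... but adding (2,1,1) breaks the condition: |(2,1,1) v (2,1,1)| = 4 > 3.
assert not is_r_wise_s_union(A + [(2, 1, 1)], 2, 3)
```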
In this paper we address the following problem.
\begin{prob}
For given $n,r$ and $s$, determine the maximum size $|A|$ of $r$-wise
$s$-union families $A\subset\N^n$.
\end{prob}
To describe candidates $A$ that give the maximum size to the above problem,
we need some more definitions.
Let us introduce a partial order $\prec$ in $\R^n$.
For $\aa, \bb\in \R^n$ we let $\aa\prec \bb$ iff
$a_i\leq b_i$ for all $1\leq i\leq n$.
Then we define a {\sl down set} for $\aa\in \N^n$ by
\[
\DD(\aa):=\{\cc\in \N^n: \cc\prec \aa\},
\]
and for $A\subset \N^n$ let
\[
\DD(A):=\bigcup_{\aa\in A}\DD(\aa).
\]
We also introduce $\UU(\aa,d)$, which can be viewed as part of the sphere
centered at $\aa\in\N^n$ with radius $d\in\N$, defined by
\[
\UU(\aa,d):=\{\aa+{\boldsymbol \epsilon}\in \N^n:
{\boldsymbol\epsilon}\in \N^n,\, |{\boldsymbol\epsilon}|=d\}.
\]
We say that $\aa\in \N^n$ is a {\sl balanced partition},
if all $a_i$'s are as close to each other as possible, more precisely,
$|a_i-a_j|\leq 1$ for all $i,j$.
Let $\one:=(1,1,\ldots,1)\in \N^n$.
For $r,s,n,d\in\N$ with $0\leq d\leq\lfloor\frac sr\rfloor$
and $\aa\in\N^n$ with $|\aa|=s-rd$ let us define a family $K$ by
\begin{equation}\label{def:K}
K= K(r,n,\aa,d):=
\bigcup_{i=0}^{\lfloor\frac du\rfloor}\DD(\UU(\aa+i\one,d-ui)),
\end{equation}
where $u=n-r+1$.
This is the candidate family. Intuitively $K$ is a union of balls,
and the corresponding centers and radii are chosen so that $K$ is
$r$-wise $s$-union as we will see in Claim~\ref{claim1} in the next section.
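A point $\xx$ lies in $\DD(\UU(\bb,e))$ exactly when its total excess above $\bb$, namely $\sum_j\max\{0,x_j-b_j\}$, is at most $e$, so membership in $K$ is easy to test and Claim~\ref{claim1} can be spot-checked by enumeration. The following Python sketch (ours, for illustration only) does this for one small choice of parameters.

```python
from itertools import product, combinations_with_replacement

def in_ball(x, center, radius):
    """x in D(U(center, radius)): the total excess of x above the
    center, sum_j max(0, x_j - c_j), is at most the radius."""
    return sum(max(0, xj - cj) for xj, cj in zip(x, center)) <= radius

def K_members(r, n, a, d, box):
    """Enumerate K(r, n, a, d) inside {0, ..., box-1}^n; a box with
    box > max(a) + d suffices, since no coordinate in K exceeds that."""
    u = n - r + 1
    layers = [(tuple(aj + i for aj in a), d - u * i)
              for i in range(d // u + 1)]
    return [x for x in product(range(box), repeat=n)
            if any(in_ball(x, c, rad) for c, rad in layers)]

# One small instance: r = 2, n = 3, a = (1,1,0), d = 2, so s = |a| + rd = 6.
r, n, a, d = 2, 3, (1, 1, 0), 2
s = sum(a) + r * d
K = K_members(r, n, a, d, box=max(a) + d + 1)
# Claim 1 in this instance: K is r-wise s-union.
assert all(sum(max(c) for c in zip(*choice)) <= s
           for choice in combinations_with_replacement(K, r))
```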
\begin{conj}
Let $r\geq 2$ and $s$ be positive integers.
If $A\subset\N^{n}$ is $r$-wise $s$-union, then
\[
|A|\leq\max_{0\leq d\leq\lfloor\frac sr\rfloor}
\left| K(r,n,\aa,d)\right|,
\]
where $\aa\in\N^{n}$ is a balanced partition with $|\aa|=s-rd$.
Moreover if equality holds, then $A=K(r,n,\aa,d)$ for some
$0\leq d\leq\lfloor\frac sr\rfloor$.
\end{conj}
We first verify the conjecture when $n$ is sufficiently large for fixed $r,s$.
Let $\ee_i$ be the $i$-th standard base of $\R^n$, that is,
$(\ee_i)_j=\delta_{ij}$. Let $\tilde\ee_0=\zero$, and
$\tilde\ee_i=\sum_{j=1}^i\ee_j$ for $1\leq i\leq n$, e.g., $\tilde\ee_n=\one$.
\begin{thm}\label{thm1}
Let $r\geq 2$ and $s$ be fixed positive integers.
Write $s=dr+p$ where $d$ and $p$ are non-negative integers with $0\leq p<r$.
Then there exists an $n_0(r,s)$ such that if $n>n_0(r,s)$
and $A\subset \N^n$ is $r$-wise $s$-union, then
\[
|A|\leq\left| \DD(\UU(\tilde\ee_p,d))\right|.
\]
Moreover if equality holds, then $A$ is isomorphic to
$\DD(\UU(\tilde\ee_p,d))=K(r,n,\tilde\ee_p,d)$.
\end{thm}
We mention that the case $A\subset\{0,1\}^n$ of
the Conjecture is posed in \cite{F1} and partially solved in \cite{F1,F2},
and the case $r=2$ of Theorem~\ref{thm1} is proved in \cite{FT}
in a slightly stronger form.
We also notice that if $A\subset\{0,1\}^n$ is
2-wise $(2d+p)$-union, then Katona's $t$-intersection theorem \cite{Katona}
states that $|A|\leq |\DD(\UU(\tilde\ee_p,d)\cap\{0,1\}^n)|$ for all $n\geq s$.
Next we show that the conjecture is true if $n=r+1$.
We also verify the conjecture on general $n$ if $A$ satisfies some additional
properties described below.
Let $A\subset\N^{n}$ be $r$-wise $s$-union. For $1\leq i\leq n$ let
\begin{equation}\label{def:mi}
m_i:=\max\{x_i:\xx\in A\}.
\end{equation}
If $n-r$ divides $|\mm|-s$, then we define
\begin{equation}\label{def:d}
d:=\frac{|\mm|-s}{n-r}\geq 0,
\end{equation}
and for $1\leq i\leq n$ let
\begin{equation}\label{def:ai}
a_i:=m_i-d,
\end{equation}
and we assume that $a_i\geq 0$.
In this case we have $|\aa|=s-rd$.
Since $|\aa|\geq 0$ it follows that
$d\leq\lfloor\frac sr\rfloor$.
For $1\leq i\leq n$ define $P_i\in\N^n$ by
\begin{equation}\label{def:Pi}
P_i:=\aa+d\ee_i,
\end{equation}
where $\ee_i$ denotes the $i$th standard base, for example,
$P_2=(a_1,a_2+d,a_3,\ldots,a_n)$.
\begin{thm}\label{mainthm}
Let $A\subset\N^{n}$ be $r$-wise $s$-union.
Assume that the sequences $P_i$ are well-defined and
\begin{equation}\label{assumption}
\{P_1,\ldots,P_n\}\subset A.
\end{equation}
Then it follows that
\[
|A|\leq\max_{0\leq d'\leq\lfloor\frac sr\rfloor}
\left| K(r,n,\aa',d')\right|,
\]
where $\aa'\in\N^{n}$ is a balanced partition with $|\aa'|=s-rd'$.
Moreover if equality holds, then
$A=K(r,n,\aa',d')$ for some $0\leq d'\leq\lfloor\frac sr\rfloor$.
\end{thm}
We will show that the assumption \eqref{assumption} is satisfied
when $n=r+1$, see Corollary~\ref{cor} in the last section.
Notation: For $\aa,\bb\in\N^n$ we define
$\aa\setminus\bb\in\N^n$ by $(\aa\vee\bb)-\bb$, in other words,
$(\aa\setminus\bb)_i:=\max\{a_i-b_i,0\}$.
The support of $\aa$ is defined by
$\supp(\aa):=\{j:a_j>0\}$.
\section{Proof of Theorem~\ref{thm1} --- the case when $n$ is large}
Let $r,s$ be given, and let $s=dr+p$, $0\leq p<r$.
We consider the situation $n\to\infty$ for fixed $r,s,d$, and $p$.
\begin{claim}
$|\DD(\UU(\tilde\ee_p,d))|=
\sum_{j=0}^p\binom pj\binom{n-j+d}d=(2^p/d!)n^d+O(n^{d-1})$.
\end{claim}
\begin{proof}
By definition we have
\[
\DD(\UU(\tilde\ee_p,d))=\{\xx+\yy\in\N^n:|\xx|\leq d, \,\yy\prec{\tilde\ee_p}\}.
\]
We rewrite the RHS by classifying vectors according to their supports.
For $I\subset[p]$ let ${\tilde\ee_p}|_I$ be the restriction of
${\tilde\ee_p}$ to $I$, that is,
$({\tilde\ee_p}|_I)_i$ is $1$ if $i\in I$ and $0$ otherwise, and let
\[
R(I):=\{{\tilde\ee_p}|_I+\zz:\supp(\zz)\subset I\sqcup([n]\setminus[p]),\,|\zz|\leq d\}.
\]
Then we have
$\DD(\UU(\tilde\ee_p,d))=\bigsqcup_{I\subset[p]} R(I)$.
For each $I\in\binom{[p]}i$
the number of $\zz$ in $R(I)$ equals the number of nonnegative
integer solutions of $z_1+z_2+\cdots+z_{i+(n-p)}\leq d$.
Thus it follows that $|R(I)|=\binom{n-(p-i)+d}d$, and
\[
|\DD(\UU(\tilde\ee_p,d))|=\sum_{i=0}^p\binom pi\binom{n-(p-i)+d}d
=\sum_{j=0}^p\binom pj\binom{n-j+d}d.
\]
The RHS is further rewritten using
$\binom{n-j+d}d=n^d/d!+O(n^{d-1})$ and
$\sum_{j=0}^p\binom pj=2^p$, as needed.
\end{proof}
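The count in the claim is easy to confirm numerically. The Python sketch below (our addition, not part of the proof) compares the closed formula with a direct enumeration of $\DD(\UU(\tilde\ee_p,d))$ for several small parameter values.

```python
from itertools import product
from math import comb

def ball_size_bruteforce(n, p, d):
    """|D(U(e~_p, d))| by enumeration: count x in N^n whose excess
    above e~_p = (1,...,1,0,...,0) (p ones) has weight at most d.
    Every coordinate of such an x is at most d + 1."""
    e = [1] * p + [0] * (n - p)
    return sum(1 for x in product(range(d + 2), repeat=n)
               if sum(max(0, xi - ei) for xi, ei in zip(x, e)) <= d)

def ball_size_formula(n, p, d):
    """The closed form sum_{j=0}^{p} C(p,j) C(n-j+d, d) from the claim."""
    return sum(comb(p, j) * comb(n - j + d, d) for j in range(p + 1))

# Agreement on a grid of small parameters.
for n in range(3, 7):
    for p in range(3):
        for d in range(3):
            assert ball_size_bruteforce(n, p, d) == ball_size_formula(n, p, d)
```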
Let $A\subset\N^n$ be $r$-wise $s$-union with maximal size.
So $A$ is a down set. We will show that
$|A|\leq |\DD(\UU(\tilde\ee_p,d))|$.
First suppose that there is a $t$ with $2\leq t\leq r$ such that
$A$ is $t$-wise $(dt+p)$-union, but not $(t-1)$-wise $(d(t-1)+p)$-union.
In this case, by the latter condition,
there are $\bb_1,\ldots,\bb_{t-1}\in A$ such that
$|\bb|\geq d(t-1)+p+1$, where $\bb=\bb_1\vee\cdots\vee\bb_{t-1}$.
Then, by the former condition, for every $\aa\in A$
it follows that $|\aa\vee\bb|\leq dt+p$, so $|\aa\setminus\bb|\leq d-1$.
This gives us
\[
A\subset \{\xx+\yy\in\N^n: |\xx|\leq d-1,\, \yy\prec\bb\}.
\]
There are $\binom{n+(d-1)}{d-1}$
choices for $\xx$ satisfying $|\xx|\leq d-1$.
On the other hand, the number of $\yy$ with $\yy\prec\bb$ is independent
of $n$ (so it is a constant depending on $r$ and $s$ only).
In fact $|\bb|\leq (t-1)s<rs$, and there are less than $2^{rs}$ choices
for $\yy$. Thus we get $|A|<\binom{n+(d-1)}{d-1}2^{rs}=O(n^{d-1})$
and we are done.
Next we suppose that
\begin{equation}\label{dt+p union}
\text{$A$ is $t$-wise $(dt+p)$-union for all $1\leq t\leq r$. }
\end{equation}
The case $t=1$ gives us $|\aa|\leq d+p$ for every $\aa\in A$.
If $p=0$, then this means that $A\subset\DD(\UU(\zero,d))$,
which finishes the proof for this case.
So, from now on, we assume that $1\leq p<r$.
We will see that
there is a $u$ with $u\geq 1$ such that there exist
$\bb_1,\ldots,\bb_u\in A$ satisfying
\begin{equation}\label{def:u}
|\bb|=u(d+1),
\end{equation}
where $\bb:=\bb_1\vee\cdots\vee\bb_u$. In fact \eqref{def:u} holds for
$u=1$, since otherwise $A\subset\DD(\UU(\zero,d))$ and we are done.
On the other hand,
setting $t=p+1 \leq r$ in \eqref{dt+p union},
we see that $A$ is $(p+1)$-wise $((p+1)(d+1)-1)$-union,
and \eqref{def:u} fails if $u=p+1$.
So we choose maximal $u$ with $1\leq u\leq p$ satisfying \eqref{def:u}, and fix
$\bb=\bb_1\vee\cdots\vee\bb_u$. By this maximality, for every $\aa\in A$,
it follows that $|\aa\vee\bb|\leq(u+1)(d+1)-1$, and
\begin{equation}\label{a-b<=d}
|\aa\setminus\bb|=|\aa\vee\bb|-|\bb|\leq d.
\end{equation}
Using \eqref{a-b<=d} we have $A\subset\bigcup_{i=0}^d A_i$, where
\[
A_i:=\{\xx+\yy\in A: |\xx|=i,\, \yy\prec\bb\}.
\]
Then we have $|A_i|\leq \binom{n+i}i2^{|\bb|}$. Noting that
$|\bb|\leq u(d+1)<r(d+1)=O(1)$
it follows $\sum_{i=0}^{d-1}|A_i|=O(n^{d-1})$.
So the size of $A_d$ is essential.
We naturally identify $\aa\in A$ with a subset of
$[n]\times\{1,\ldots,d+p\}$. Formally let
\[
\phi(\aa):=\{(i,j):1\leq i\leq n,\, 1\leq j\leq a_i\},
\]
for example, if $\aa=(1,0,2)$, then $\phi(\aa)=\{(1,1),(3,1),(3,2)\}$.
Define $m=m(d)$ to be $r+1$ if $d=1$ and $dr$ if $d\geq 2$.
We say that $\bb'\prec\bb$ is \emph{rich} if there exist $m$ vectors
$\cc_1,\ldots,\cc_m$ of weight $d$
such that $\bb'\vee\cc_j\in A$ for every $j$, and
the $m+1$ subsets $\phi(\cc_1),\ldots,\phi(\cc_m),\phi(\bb)$ are
pairwise disjoint.
In this case $\bb''\vee\cc_j\in A$ for all $\bb''\prec\bb'$
because $A$ is a down set. This means that richness is hereditary, namely,
if $\bb'$ is rich and $\bb''\prec\bb'$, then $\bb''$ is rich as well.
Informally, $\bb'$ is rich if it can be extended
to a $(|\bb'|+d)$-element subset of $A$ in $m$ ways disjointly outside $\bb$.
We are comparing our family $A$ with the reference family
$\DD(\UU(\tilde\ee_p,d))$, and we define $\tilde\bb$, which plays the
role of $\tilde\ee_p$ in our family, namely, let us define
\[
\tilde\bb:=\bigvee \{\bb'\prec\bb:\text{$\bb'$ is rich}\}.
\]
\begin{claim}\label{q<=p}
$|\tilde\bb|\leq p$.
\end{claim}
\begin{proof}
Suppose the contrary. Then $|\tilde\bb|>p$ and we can find rich
$\bb'_1,\bb'_2,\ldots,\bb'_{p+1}$ (with repetition if necessary)
such that
$|\bb'_1\vee\cdots\vee\bb'_{p+1}|\ge p+1$.
Since richness is hereditary we may assume that
$|\bb'_1\vee\cdots\vee\bb'_{p+1}|= p+1$.
Let $\cc_1^{(i)},\ldots,\cc_m^{(i)}$ support the richness of $\bb'_i$.
By definition $\phi(\cc_1^{(i)}),\ldots,\phi(\cc_m^{(i)})$ and $\phi(\bb)$
are pairwise disjoint.
Let $\aa_1:=\bb'_1\vee\cc_{j_1}^{(1)}\in A$, say, $j_1=1$.
Then choose $\aa_2:=\bb'_2\vee\cc_{j_2}^{(2)}$ so that
$\phi({\cc_{j_1}^{(1)}})$ and $\phi(\cc_{j_2}^{(2)})$
are disjoint.
If $i\leq p$, then having chosen $\aa_1,\ldots,\aa_i$,
we have used only the $id$ elements of $\bigcup_{l=1}^i\phi(\cc_{j_l}^{(l)})$,
which intersect at most $id$ of the pairwise disjoint sets
$\phi(\cc_1^{(i+1)}),\ldots,\phi(\cc_m^{(i+1)})$.
Then, since $id\leq pd< rd\leq m$,
we still have some $\cc_{j_{i+1}}^{(i+1)}$, which is
disjoint from any already chosen vectors.
So we can continue this procedure until we get
$\aa_{p+1}:=\bb'_{p+1}\vee\cc_{j_{p+1}}^{(p+1)}\in A$
such that all
$\phi(\cc_{j_1}^{(1)}),\ldots,\phi(\cc_{j_{p+1}}^{(p+1)})$
and $\phi(\bb)$ are disjoint.
However, these vectors yield that
\begin{align*}
|\aa_1\vee\cdots\vee\aa_{p+1}|&
=|\bb'_1\vee\cdots\vee\bb'_{p+1}|
+|\cc_{j_1}^{(1)}|+\cdots+|\cc_{j_{p+1}}^{(p+1)}|\\
&= (p+1) + (p+1)d =(p+1)(d+1),
\end{align*}
which contradicts \eqref{dt+p union} at $t=p+1$.
\end{proof}
If $\yy\prec\bb$ is not rich, then
\[
\{\phi(\xx):\xx+\yy\in A_d, |\xx|=d\}
\]
is a family of $d$-element subsets on $(d+p)n$ vertices, which has no
$m$ pairwise disjoint subsets (so the matching number is $m-1$ or less).
Thus, by the Erd\H os matching theorem \cite{Erdos}, the size of this family is
$O(n^{d-1})$.
There are at most $2^{|\bb|}=O(1)$ choices for non-rich $\yy\prec\bb$, and
we can conclude that the number of vectors
in $A_d$ coming from non-rich $\yy$ is $O(n^{d-1})$.
Then the remaining vectors in $A_d$ come from rich $\yy\prec\tilde\bb$, and
the number of such vectors is at most $2^{|\tilde\bb|}\binom{n+d}d$.
Note also that $\sum_{i=0}^{d-1}|A_i|=O(n^{d-1})$.
Consequently we get
\[
|A|\leq 2^{|\tilde\bb|}\binom{n+d}d+O(n^{d-1})
=(2^{|\tilde\bb|}/d!)\,n^d+O(n^{d-1}).
\]
Recall that the reference family is of size
$(2^p/d!)n^d+O(n^{d-1})$, and
$|\tilde\bb|\leq p$ from Claim~\ref{q<=p}.
So we only need to deal with the case when $|\tilde\bb|=p$ and
there are exactly $2^p$ rich vectors.
In other words, $\tilde\bb=\tilde\ee_p$ (after renaming coordinates if
necessary) and every $\bb'\prec\tilde\ee_p$ is rich.
We show that $A\subset\DD(\UU(\tilde\ee_p,d))$. Suppose the contrary, then
there is an $\aa\in A$ such that
$|\aa'|\geq d+1$, where $\aa'=\aa\setminus\tilde\ee_p$.
Since $A$ is a down set we may assume that $|\aa'|=d+1$.
Now $\tilde\ee_p$ is rich and
let $\cc_1,\ldots,\cc_m$ be vectors assured by the richness.
We remark that $m-(d+1)\geq r-1$. In fact, if $d=1$ then
$m-(d+1)=r-1$, and if $d\geq 2$ then
$m-(d+1)=(r-1)(d-1)+r-2\geq r-1$.
So we may assume that $\phi(\cc_1),\ldots,\phi(\cc_{r-1})$
are pairwise disjoint and disjoint from $\phi(\aa)$ as well.
Let $\aa_i:=\tilde\ee_p\vee\cc_i\in A$ for $1\leq i\leq r-1$.
Then we get
\begin{align*}
|\aa\vee\aa_1\vee\cdots\vee\aa_{r-1}|
&=|\tilde\ee_p\vee\aa'|+|\cc_1|+\cdots+|\cc_{r-1}|\\
&=(p+d+1)+(r-1)d=dr+p+1=s+1,
\end{align*}
which contradicts that $A$ is $r$-wise $s$-union.
This completes the proof of Theorem~\ref{thm1}.
\section{The polytope $\bP$ and proof of Theorem~\ref{mainthm}}
Let $\aa=(a_1,\ldots,a_n)\in\N^n$ with $|\aa|=s-rd$ for some $d\in\N$.
We introduce a convex polytope $\bP\subset\R^{n}$,
which will play a key role in our proof.
This polytope is defined by the following
$n+\binom n1+\binom n2+\cdots +\binom {n}{n-r+1}$ inequalities:
\begin{align}
x_i&\geq 0 &&\hskip -5em\text{if } 1\leq i\leq n,\label{L:eq1}\\
\sum_{i\in I}x_i&\leq \sum_{i\in I}a_i+d
&&\hskip -5em\text{if }1\leq |I|\leq n-r+1,\, I\subset [n].\label{L:eq2}
\end{align}
Namely,
\[
\bP:=\{\xx\in\R^{n}: \text{$\xx$ satisfies \eqref{L:eq1} and \eqref{L:eq2}}\}.
\]
Let $L$ denote the integer lattice points in $\bP$:
\[
L= L(r,n,\aa,d):=\{\xx\in\N^{n}:\xx\in\bP\}.
\]
\begin{la}\label{lemma1}
The two sets $K$ (defined by \eqref{def:K})
and $L$ are the same, and $r$-wise $s$-union.
\end{la}
\begin{proof}
This lemma is a consequence of the following three claims.
\begin{claim}\label{claim1}
The set $K$ is $r$-wise $s$-union.
\end{claim}
\begin{proof}
Let $\xx_1,\xx_2,\ldots,\xx_r\in K$. We show that
$|\xx_1\myor\xx_2\myor\cdots\myor\xx_r|\leq s$.
We may assume that $\xx_j\in\UU(\aa+i_j\one,d-ui_j)$, where $u=n-r+1$.
We may also assume that $i_1\geq i_2\geq \cdots \geq i_r$.
Let $\bb:=\aa+i_1\one$. Then, informally,
$|\xx\setminus\bb|:=|(\xx\myor\bb)-\bb|$
counts the excess of $\xx$ above $\bb$, more precisely, it is
$\sum_{j\in[n]}\max\{0,x_j-b_j\}$. Thus we have
\begin{align*}
|\xx_1\myor\xx_2\myor\cdots\myor\xx_r| &\leq
|\bb|+\sum_{j=1}^r|\xx_j\setminus\bb|\\
&\leq|\aa|+ni_1+\sum_{j=1}^r\big((d-ui_j)-(i_1-i_j)\big)\\
&=|\aa|+dr+(n-r)i_1-\sum_{j=1}^r(u-1)i_j\\
&=s-(n-r)\sum_{j=2}^ri_j\leq s,
\end{align*}
as required.
\end{proof}
\begin{claim}\label{claim2}
$K\subset L$.
\end{claim}
\begin{proof}
Let $\xx\in K$. We show that $\xx\in L$, that is,
$\xx$ satisfies \eqref{L:eq1} and \eqref{L:eq2}.
Since \eqref{L:eq1} is clear by definition of $K$,
we show \eqref{L:eq2}. To this end we may assume that
$\xx\in\UU(\aa+i\one,d-ui)$, where $u=n-r+1$ and $i\leq\lfloor\frac du\rfloor$.
Let $I\subset[n]$ with $1\leq|I|\leq u$. Then $i|I|\leq ui$. Thus it follows
\[
\sum_{j\in I}x_j\leq\sum_{j\in I}a_j+i|I|+(d-ui)\leq
\sum_{j\in I}a_j+d,
\]
which confirms \eqref{L:eq2}.
\end{proof}
\begin{claim}\label{claim3}
$K\supset L$.
\end{claim}
\begin{proof}
Let $\xx\in L$. We show that $\xx\in K$, that is,
there exists some $i'$ such that
$0\leq i'\leq \lfloor\frac d{n-r+1}\rfloor$ and
\[
|\xx \setminus (\aa+i'\one)| \leq d-(n-r+1)i'.
\]
We write $\xx$ as
\[
\xx=(a_1+i_1,a_2+i_2,\ldots,a_{n}+i_{n}),
\]
where we may assume that
$d\geq i_1\geq i_2\geq \cdots \geq i_{n}$.
We notice that some $i_j$ can be negative.
Since $\xx\in L$ it follows from \eqref{L:eq2}
(a part of the definition of $L$) that
if $1\leq|I|\leq n-r+1$ and $I\subset[n]$, then
\[
\sum_{j\in I}i_j\leq d.
\]
Let $J:=\{j:x_j\geq a_j\}$ and we argue separately by the size of $|J|$.
If $|J|\leq n-r+1$, then we may choose $i'=0$. In fact,
\begin{align*}
|\xx\setminus\aa|&=\max\{0,i_1\}+\max\{0,i_2\}+\cdots+\max\{0,i_{n-r+1}\}\\
&=\max\bigg\{\sum_{j\in I}i_j: I\subset [n-r+1]\bigg\}\leq d.
\end{align*}
If $|J|\geq n-r+2$, then we may choose $i'=i_{n-r+2}$.
In fact, by letting $i':=i_{n-r+2}$, we have
\begin{align*}
|\xx\setminus(\aa+i'\one)|&=
(i_1-i')+(i_2-i')+\cdots+(i_{n-r+1}-i')\\
&\leq d-(n-r+1)i'.
\end{align*}
We need to check $0\leq i'\leq\lfloor\frac d{n-r+1}\rfloor$.
It follows from $|J|\geq n-r+2$ that $i'\geq 0$.
Also $d\geq i_1\geq i_2\geq \cdots\geq i_{n-r+2}$ and
$i_1+i_2+\cdots+i_{n-r+1}\leq d$ yield
$i'\leq\lfloor\frac d{n-r+1}\rfloor$.
\end{proof}
This completes the proof of Lemma~\ref{lemma1}.
\end{proof}
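Lemma~\ref{lemma1} can also be spot-checked by machine. The Python sketch below (an illustration we add, not part of the paper) compares membership in $K$ with membership in $L$ point by point on a small instance; coordinates outside the box occur in neither set, since the singleton inequalities force $x_i\leq a_i+d$.

```python
from itertools import product, combinations

def in_K(x, r, n, a, d):
    """Membership in K: x lies in some layer D(U(a + i*1, d - u*i))."""
    u = n - r + 1
    return any(sum(max(0, xj - (aj + i)) for xj, aj in zip(x, a)) <= d - u * i
               for i in range(d // u + 1))

def in_L(x, r, n, a, d):
    """Membership in L: the inequalities over all index sets I with
    1 <= |I| <= n - r + 1 (non-negativity holds by construction)."""
    u = n - r + 1
    return all(sum(x[i] for i in I) <= sum(a[i] for i in I) + d
               for k in range(1, u + 1)
               for I in combinations(range(n), k))

# A small instance: r = 3, n = 4, a = (2,1,1,0), d = 2, so s = |a| + rd = 10.
r, n, a, d = 3, 4, (2, 1, 1, 0), 2
box = max(a) + d + 1  # every coordinate in K or L is at most max(a) + d
for x in product(range(box), repeat=n):
    assert in_K(x, r, n, a, d) == in_L(x, r, n, a, d)
```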
Let
\[
\sigma_k(\aa):=\sum_{K\in\binom{[n]}k}\prod_{i\in K}a_i
\]
be the $k$th elementary symmetric polynomial of $a_1,\ldots,a_n$.
\begin{la}\label{lemma2}
The size of $K(r,n,\aa,d)$ is given by
\begin{align*}
|K(r,n,\aa,d)|&=
\sum_{j=0}^{n}\binom{d+j}j\sigma_{n-j}(\aa)\\
&\qquad+\sum_{i=1}^{\lfloor\frac du\rfloor}
\sum_{j=u+1}^{n}
\left(
\binom{d-ui+j}j-\binom{d-ui+u}j
\right)\sigma_{n-j}(\aa+i\one),
\end{align*}
where $u=n-r+1$.
Moreover, for fixed $n,r,d$ and $|\aa|$, this size is maximized
if and only if $\aa$ is a balanced partition.
\end{la}
\begin{proof}
For $J\subset[n]$ let $\xx|_J$ be the restriction of $\xx$ to $J$, that is,
$(\xx|_J)_i$ is $x_i$ if $i\in J$ and $0$ otherwise.
First we count the vectors in the base layer $\DD(\UU(\aa,d))$.
To this end we partition this set into $\bigsqcup_{J\subset[n]}A_0(J)$, where
\[
A_0(J)=\{\aa|_J+\ee+\bb:\supp(\ee)\subset J,\,|\ee|\leq d,\,
\supp(\bb)\subset[n]\setminus J,\, b_i<a_i\text{ for }i\not\in J\}.
\]
The number of vectors $\ee$ with the above property is equal to the number of
non-negative integer solutions of the inequality
$x_1+x_2+\cdots+x_{|J|}\leq d$, which is $\binom{d+|J|}{|J|}$.
The number of vectors $\bb$ is clearly $\prod_{l\in[n]\setminus J}a_l$.
Thus we get
\[
\sum_{J\in\binom{[n]}j} |A_0(J)|=
\sum_{J\in\binom{[n]}j}\binom{d+|J|}{|J|}\prod_{l\in[n]\setminus J}a_l
=\binom{d+j}{j}\sigma_{n-j}(\aa),
\]
and $|\DD(\UU(\aa,d))|=\sum_{j=0}^n\binom{d+j}{j}\sigma_{n-j}(\aa)$.
Next we count the vectors in the $i$th layer:
\[
\DD(\UU(\aa+i\one,d-ui))\setminus
\left(\bigcup_{j=0}^{i-1}\DD(\UU(\aa+j\one,d-uj))\right).
\]
For this we partition the above set into $\bigsqcup_{J\subset[n]}A_i(J)$, where
\begin{align*}
A_i(J)=\{(\aa+i\one)|_J+\ee+\bb:&\supp(\ee)\subset J,\,
d-u(i-1)-|J|<|\ee|\leq d-ui,\\
&\supp(\bb)\subset [n]\setminus J,\, b_l<a_l+i\text{ for }l\not\in J\}.
\end{align*}
In this case we need $d-u(i-1)<|J|+|\ee|$ because the vectors satisfying
the opposite inequality are already counted in the lower layers
$\bigcup_{j<i}A_j(J)$.
We also notice that $d-u(i-1)-|J|<d-ui$ implies that $|J|>u$.
So $A_i(J)=\emptyset$ for $|J|\leq u$.
Now we count the number of vectors $\ee$ in $A_i(J)$, or equivalently,
the number of non-negative integer solutions of
\[
d-u(i-1)-|J|<x_1+x_2+\cdots+x_{|J|}\leq d-ui.
\]
This number is $\binom{d-ui+j}j-\binom{d-ui+u}j$, where $j=|J|$.
On the other hand, the number of vectors $\bb$ in $A_i(J)$ is
$\prod_{l\in[n]\setminus J}(a_l+i)$. Consequently we get
\[
\sum_{J\subset[n]}|A_i(J)|=
\sum_{j=u+1}^n\left(\binom{d-ui+j}j-\binom{d-ui+u}j\right)
\sigma_{n-j}(\aa+i\one).
\]
Summing this term over $1\leq i\leq \lfloor\frac du\rfloor$ we finally
obtain the second term of the RHS of $|K|$ in the statement of this lemma.
Then, for fixed $|\aa|$, the size of $K$ is maximized when $\sigma_{n-j}(\aa)$
and $\sigma_{n-j}(\aa+i\one)$ are maximized. By the property of symmetric
polynomials, this happens if and only if $\aa$ is a balanced partition,
see e.g., Theorem~52 in section 2.22 of \cite{HLP}.
\end{proof}
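Both statements of Lemma~\ref{lemma2} can be confirmed on small instances. The Python sketch below (our addition, for illustration) implements the counting formula verbatim, checks it against a brute-force enumeration of $K$, and observes that among the tested partitions the balanced one gives the largest family.

```python
from itertools import product, combinations
from math import comb, prod

def sigma(k, a):
    """k-th elementary symmetric polynomial of the entries of a."""
    return sum(prod(a[i] for i in S) for S in combinations(range(len(a)), k))

def K_size_formula(r, n, a, d):
    """The formula for |K(r, n, a, d)| from Lemma 2."""
    u = n - r + 1
    total = sum(comb(d + j, j) * sigma(n - j, a) for j in range(n + 1))
    for i in range(1, d // u + 1):
        shifted = tuple(aj + i for aj in a)
        total += sum((comb(d - u * i + j, j) - comb(d - u * i + u, j))
                     * sigma(n - j, shifted) for j in range(u + 1, n + 1))
    return total

def K_size_bruteforce(r, n, a, d):
    """|K| by enumerating the layers D(U(a + i*1, d - u*i)) directly."""
    u = n - r + 1
    box = max(a) + d + 1
    return sum(1 for x in product(range(box), repeat=n)
               if any(sum(max(0, xj - (aj + i)) for xj, aj in zip(x, a))
                      <= d - u * i for i in range(d // u + 1)))

r, n, d = 3, 4, 2
partitions = [(1, 1, 1, 1), (2, 1, 1, 0), (4, 0, 0, 0)]  # all with |a| = 4
sizes = {}
for a in partitions:
    sizes[a] = K_size_formula(r, n, a, d)
    assert sizes[a] == K_size_bruteforce(r, n, a, d)
# The balanced partition maximizes |K| among these choices.
assert max(sizes, key=sizes.get) == (1, 1, 1, 1)
```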
\begin{proof}[Proof of Theorem~\ref{mainthm}]
Let $A\subset\N^n$ be an $r$-wise $s$-union family satisfying \eqref{assumption}.
For $I\subset[n]$ let
\[
m_I:=\max\bigg\{\sum_{i\in I}x_i:\xx\in A\bigg\}.
\]
\begin{claim}
If $I\subset[n]$ and $1\leq|I|\leq n-r+1$, then
\[
m_I=\sum_{i\in I}a_i+d.
\]
\end{claim}
\begin{proof}
Choose $j\in I$. By \eqref{assumption} we have $P_j\in A$ and
\begin{equation}\label{eq:m_I}
m_I\geq\sum_{i\in I}(P_j)_i=\sum_{i\in I}a_i+d.
\end{equation}
We need to show that this inequality is actually an equality.
Let $[n]=I_1\sqcup I_2\sqcup\cdots\sqcup I_r$ be a partition of $[n]$ into non-empty parts with $I_1=I$; such a partition exists since $|I|\leq n-r+1$.
Then it follows that
\begin{align*}
s\geq m_{I_1}+m_{I_2}+\cdots+ m_{I_r}
\geq\sum_{i\in[n]}a_i+rd = s,
\end{align*}
where the first inequality follows from the $r$-wise $s$-union property of
$A$, and the second follows from the analogue of \eqref{eq:m_I} for each part.
Since both ends equal $s$, all of these inequalities must be equalities;
in particular \eqref{eq:m_I} holds with equality, as needed.
\end{proof}
By this claim, if $\xx\in A$ and $1\leq|I|\leq n-r+1$, then we have
\[
\sum_{i\in I}x_i\leq m_I=\sum_{i\in I}a_i+d.
\]
This means that $A\subset L$.
Finally the theorem follows from Lemmas~\ref{lemma1} and \ref{lemma2}.
\end{proof}
\begin{cor}\label{cor}
If $n=r+1$, then the conjecture is true.
\end{cor}
\begin{proof}
Let $n=r+1$ and let $A\subset\N^{r+1}$ be an $r$-wise $s$-union family of maximum size.
Define $\mm$ by \eqref{def:mi}.
Since $n-r=1$ we can define $d$ by \eqref{def:d}.
Then define $\aa$ by \eqref{def:ai}.
We need to verify $a_i\geq 0$ for all $i$.
To this end we may assume that $m_1\geq m_2\geq\cdots\geq m_{r+1}$.
Then $a_i\geq a_{r+1}=m_{r+1}-d$, so it suffices to show $m_{r+1}\geq d$.
Since $A$ is $r$-wise $s$-union it follows that $m_1+m_2+\cdots+m_r\leq s$.
This together with the definition of $d$ implies
$d=|\mm|-s\leq m_{r+1}$, as needed.
So we can properly define $P_i$ by \eqref{def:Pi}.
Next we check that $\xx\in A$ satisfies \eqref{L:eq1} and \eqref{L:eq2}.
By definition we have $x_i\leq m_i=a_i+d$, so we have \eqref{L:eq1}.
Since $A$ is $r$-wise $s$-union, we have
\[
(x_1+x_2)+m_3+\cdots+m_{r+1}\leq s,
\]
or equivalently,
\[
(x_1+x_2)+(a_3+d)+\cdots+(a_{r+1}+d)\leq s=|\aa|+rd.
\]
Rearranging we get $x_1+x_2\leq a_1+a_2+d$, and we get the other cases
similarly, so we obtain \eqref{L:eq2}. Thus $A\subset L$
and the result follows from Lemmas~\ref{lemma1} and \ref{lemma2}.
\end{proof}
\section*{Acknowledgement}
The authors thank the anonymous referees of the European Journal of Combinatorics
for pointing out the error in Claim~1 of the earlier version and for many helpful suggestions
which improved the presentation of this paper.
% Source: https://arxiv.org/abs/1612.09083 (A Constant Optimization of the Binary Indexed Tree Query Operation)
\section{Problem Motivation}
A Prefix Sum is defined as the sum of the first $n$ elements of an array, where $ 1 \leq n \leq size(array) $. The problem can be traditionally solved on an array $arr$ by creating a prefix sum array $pre$ such that
\begin{equation}
pre[1] = arr[1]
\end{equation}
\begin{equation}
pre[i] = pre[i-1] + arr[i]
\end{equation}
(The above equations follow 1-based indexing)
\\\\
The Query Operation is defined as calculating the Prefix Sum of any index. The Update Operation is defined as assigning a new value to any index in the $arr$ array.
\\\\
It is easy to see that the above $pre$ array can handle the Query Operation in $O(1)$ time, since it only requires a single memory call. However, the Update Operation is highly inefficient without the use of any data structure, taking $O(n)$ time in the worst case (when updating the first element of the array, since every prefix sum then changes).
\\\\
The Segment Tree and the Binary Indexed Tree (BIT) \cite{fenwick1994new} are two structures which can handle both the Query and the Update Operation in $O(\log{n})$ time. In practice, however, a BIT is more efficient than a Segment Tree due to a smaller constant factor. In this paper, we present an alternative to the BIT, which we will call the Fibonacci Indexed Tree (FIT), for the sake of convenience. It can be shown that in the worst case, the FIT takes about $\log_{\phi^2} n$ computations for the Query Operation and about $\log_{\phi} n$ computations for the Update Operation.
\\\\
The problem of handling the above two operations simultaneously on a collection of data is an important one. It arises in problems such as the Line-of-Sight Problem and in the implementation of multiple algorithms such as Radix Sort, lexical comparison of strings and Arithmetic Coding for data compression \cite{blelloch1990prefix} \cite{witten1987arithmetic}. When extended to two dimensions, Prefix Sums (the sum of all elements in a prefix rectangle) can be used in image processing and geographical information systems \cite{samet1984geographic}.
\section{Preliminaries}
Before introducing the data structure, we will first tackle the few definitions and concepts which are required to understand its mechanism. We will be discussing the following topics:
\begin{itemize}
\item Zeckendorf's Theorem \cite{zeckendorf1972representation}
\item Fibonacci Coding
\item Least Significant Used Fibonacci
\item Mechanism of the Binary Indexed Tree
\end{itemize}
\subsection{Zeckendorf's Theorem}
This theorem states that every positive integer can be expressed as the sum of distinct non-consecutive terms of the Fibonacci Sequence of numbers.
Example: $46$ can be expressed as $46 = 34 + 8 + 3 + 1$. This is called the Zeckendorf Representation of the number. The theorem also states that there exists only one Zeckendorf Representation of every positive integer.
\\\\
The Zeckendorf Representation of a number can be obtained by using a Greedy Algorithm. At every step, simply take the largest Fibonacci number smaller than the required number and subtract it. Now, repeat the step until a Fibonacci number is obtained. All the Fibonacci Numbers that were subtracted and the number finally obtained together make up the Zeckendorf Representation.
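The greedy procedure can be sketched in a few lines (illustrative code, not part of the original paper), returning the Zeckendorf terms in decreasing order:

```python
def zeckendorf(n):
    # greedy: repeatedly subtract the largest Fibonacci number not exceeding n
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

# zeckendorf(46) returns [34, 8, 3, 1], matching the example above
```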
\subsection{Fibonacci Coding}
An alternate way to write the Zeckendorf Representation of a number is 'Fibonacci Coding'. For a number $N$, we define the Fibonacci Coding of the number as follows:
\begin{equation}
N = \sum_{i=0}^{k-1} d(i)F(i+2)
\end{equation}
where $F(i)$ is the $i$th Fibonacci number (with $F(1)=F(2)=1$), $d(i)$ is the $i$th digit in the Fibonacci Coding which is either one or zero, and $k$ is the number of digits in the Fibonacci Coding. This means that the $i$th digit of the Fibonacci Coding is $1$ if the $(i+2)$th Fibonacci number is part of its Zeckendorf Representation, and $0$ otherwise.
\\\\
For example, $46$ is written as $46 = 34 + 8 + 3 + 1$. The following are the values for $F(i+2)$ and $d(i)$:
\begin{table}[ht]
\centering
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
i & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 \\ \hline
F(i+2) & 1 & 2 & 3 & 5 & 8 & 13 & 21 & 34 \\ \hline
d(i) & 1 & 0 & 1 & 0 & 1 & 0 & 0 & 1 \\ \hline
\end{tabular}
\end{table}
Thus, the Fibonacci Coding of $46$ is given as $10101001$.
\\\\
However, note that the most significant Fibonacci number is positioned to the right (the place value of digits increases from left to right). This is opposite to what we observe in the decimal or binary representation of numbers, where the place value of the digit increases from right to left. Thus, for the purposes of this paper, we will use the reverse Fibonacci Coding. For the sake of convenience, we will call it the Fibonacci Coding of the number. Therefore, the Fibonacci Coding of $46$ is given as $10010101$.
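The reversed, most-significant-first coding used from here on can be generated as follows (an illustrative sketch, not code from the paper):

```python
def fib_coding(n):
    # most-significant-first digit string over the place values 1, 2, 3, 5, 8, ...
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    digits = []
    for f in reversed(fibs):   # greedy subtraction, emitting one digit per place
        if f <= n:
            digits.append('1')
            n -= f
        else:
            digits.append('0')
    return ''.join(digits).lstrip('0')

# fib_coding(46) returns '10010101'
```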
\subsection{Least Significant Used Fibonacci}
Formally, the Least Significant Bit of a positive integer is defined as the bit position in the binary representation of the number, which gives its units value, that is, determines whether the number is odd or even. However, for the purposes of this paper, we will define the "Least Significant Used Bit" (LSUB): the smallest power of two which is part of the Binary Representation. For instance, the Binary Representation of $40$ is $101000$. Therefore, the LSUB of $40$ is $8$.
\\\\
Similarly, we can also define the "Least Significant Used Fibonacci" (LSUF) of a number as the smallest Fibonacci Number which occurs in the Fibonacci Coding of a number. For instance, the Fibonacci Coding of $45$ is $10010100$. Thus, the LSUF of $45$ is $3$.
\\\\
In the following sections, we will use $lsub(x)$ and $lsuf(x)$ to denote the least significant used bit and the least significant used Fibonacci of $x$, respectively.
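Both helpers admit short implementations (a sketch, not from the paper: `lsub` uses the standard two's-complement trick, and this `lsuf` simply reruns the greedy subtraction and keeps the last term taken):

```python
def lsub(x):
    # lowest set bit of x, e.g. lsub(40) == 8
    return x & -x

def lsuf(x):
    # smallest Fibonacci number in the Zeckendorf representation of x
    fibs = [1, 2]
    while fibs[-1] <= x:
        fibs.append(fibs[-1] + fibs[-2])
    last = None
    for f in reversed(fibs):
        if f <= x:
            last = f
            x -= f
    return last

# lsub(40) returns 8 and lsuf(45) returns 3, matching the examples above
```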
\subsection{Mechanism of the Binary Indexed Tree}
Before moving on to the working of the Fibonacci Indexed Tree, we must be familiar with how a Binary Indexed Tree calculates prefix sums. Let's define an array $bit$, which stores the Binary Indexed Tree. Each element of the $bit$ array stores the sum of a certain range in the $arr$ array (the original array which contains our data).
\begin{figure}[ht]
\includegraphics[width = \textwidth]{bitshot}
\caption{How sums are stored in the $bit$ array}
\label{fig:fig1}
\end{figure}
Figure \ref{fig:fig1} shows how sums are stored in the $bit$ array. For instance, $bit[12]$ stores the sum of elements from $arr[9]$ to $arr[12]$. The range of values that $bit[x]$ stores is given as follows:
\begin{equation}
bit[x] = \sum_{i = y}^{x} arr[i] \quad ; \quad y = x-lsub(x)+1
\end{equation}
\subsubsection{Query Operation}
To calculate the prefix sum ending at $n$, we use the following algorithm:
\\
\begin{algorithm}[H]
\caption{BIT Query Operation}
\begin{algorithmic}[1]
\Function{queryb}{$x$}
\State $sum := 0$
\While{$x > 0$}
\State $sum := sum + bit[x]$
\State $x := x - lsub(x)$
\EndWhile
\State \Return $sum$
\EndFunction
\end{algorithmic}
\end{algorithm}
It can be seen that the loop in the above code runs the same number of times as the number of ones in the Binary Representation of $n$, since each iteration turns the LSUB into a zero. Now, the Binary Representation of $n$ contains at most $\log_{2} n$ digits. Therefore, in the worst case, the loop runs $\log_{2} n$ times (when $n = 2^k - 1$ for some positive integer $k$).
\subsubsection{Update Operation}
As stated earlier, the advantage of using a Binary Indexed Tree (BIT) over the $pre$ array is that the BIT can support the Update Operation, that is, we can change the value of an element in the $arr$ array in $O(\log{n})$ time, rather than $O(n)$ time. To update an element and set its value to $c$, the following algorithm must be used:
\\
\begin{algorithm}[H]
\caption{BIT Update Operation}
\begin{algorithmic}[1]
\Function{updateb}{$x, c$}
\State $temp := c - arr[x]$
\State $n := $ size of $arr$
\While{$x \leq n$}
\State $bit[x] := bit[x] + temp$
\State $x := x + lsub(x)$
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
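For reference, the two operations above fit in a few lines of runnable code (a sketch, not from the paper; this `add` takes the delta directly, so the paper's assignment update corresponds to calling `add(x, c - arr[x])`):

```python
class BIT:
    """1-indexed Binary Indexed Tree over an array of n zeros."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)

    def query(self, x):
        # prefix sum arr[1] + ... + arr[x]
        s = 0
        while x > 0:
            s += self.tree[x]
            x -= x & -x          # subtract the least significant used bit
        return s

    def add(self, x, delta):
        # point update arr[x] += delta
        while x <= self.n:
            self.tree[x] += delta
            x += x & -x          # move to the next covering node
```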
\section{The Fibonacci Indexed Tree}
The essential mechanism of the Fibonacci Indexed Tree (FIT) is the same as that of the BIT: we declare an array $fit$, which contains the data structure. As before, $fit[i]$ stores the sum of a certain range of values in the original $arr$ array, where our data is stored.
\begin{figure}[H]
\includegraphics[width = \textwidth]{fitshot}
\caption{How sums are stored in the $fit$ array}
\label{fig:fig2}
\end{figure}
Figure \ref{fig:fig2} shows how sums are stored in the $fit$ array. For instance, $fit[11]$ stores the sum of elements from $arr[9]$ to $arr[11]$. The range of values that $fit[x]$ stores is given as follows:
\begin{equation}
fit[x] = \sum_{i = y}^{x} arr[i] \quad ; \quad y = x-lsuf(x)+1
\end{equation}
Notice that the only thing different in Equation (5) from Equation (4) is the use of the $lsuf$ function instead of the $lsub$ function. We will also use the following equation to prove the time complexities of the Query and the Update Operation:
\begin{equation}
\lim_{n \to \infty} \frac{F_{n+1}}{F_{n}} = \phi
\end{equation}
\subsection{Query Operation}
\begin{algorithm}[H]
\caption{FIT Query Operation}
\begin{algorithmic}[1]
\Function{queryf}{$x$}
\State $sum := 0$
\While{$x > 0$}
\State $sum := sum + fit[x]$
\State $x := x - lsuf(x)$
\EndWhile
\State \Return $sum$
\EndFunction
\end{algorithmic}
\end{algorithm}
The loop in the above code runs the same number of times as the number of ones in the Fibonacci Coding of $x$. There are approximately $\log_{\phi} x$ Fibonacci numbers less than $x$. This follows from Equation (6). Thus, there are $\log_{\phi} x$ digits in the Fibonacci Coding of $x$. This would imply that in the worst case, approximately $\log_{\phi} x$ computations would be required (when $x = F(k) - 1$, where $F(k)$ is the $k$th Fibonacci Number). However, that is not the case.
\\\\
No two consecutive positions in the Fibonacci Coding can be $1$, otherwise the two consecutive Fibonacci Numbers will add up to form the next Fibonacci Number, thus eliminating both of the original numbers. Thus, in the worst case, only half of the digits in the Fibonacci Coding are ones. Therefore, the number of computations required becomes $\tfrac{1}{2}\log_{\phi} n = \log_{\phi^2} n$.
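This constant can also be checked empirically (an illustrative snippet, not from the paper). Counting loop iterations directly: near $10^6$, the BIT worst case $2^{20}-1$ needs $20$ iterations, while the FIT worst case of comparable size, $1346268 = F(31)-1$ (with $F(1)=F(2)=1$), needs only $15$:

```python
def query_steps_bit(x):
    # loop iterations of the BIT query: the number of ones in binary
    steps = 0
    while x > 0:
        x -= x & -x
        steps += 1
    return steps

def query_steps_fit(x):
    # loop iterations of the FIT query: the number of Zeckendorf terms
    fibs = [1, 2]
    while fibs[-1] <= x:
        fibs.append(fibs[-1] + fibs[-2])
    steps = 0
    for f in reversed(fibs):
        if f <= x:
            x -= f
            steps += 1
    return steps

# query_steps_bit(2**20 - 1) returns 20; query_steps_fit(1346268) returns 15
```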
\\
\begin{center}
\begin{tikzpicture}
\begin{semilogxaxis}[
axis lines = left,
xlabel = $n$,
ylabel = {$f(n)$},
legend pos = north west,
xmax = 1e6,
ymax = 20
]
\addplot [
domain=1:3e5,
samples=20,
color=red,
]
{ln(x)/ln((3 + sqrt(5))/2)};
\addlegendentry{$\log_{\phi^2}{n}$}
\addplot [
domain=1:3e5,
samples=20,
color=blue,
]
{ln(x)/ln(2)};
\addlegendentry{$\log_{2}{n}$}
\end{semilogxaxis}
\end{tikzpicture}
\end{center}
\subsection{Update Operation}
\begin{algorithm}[H]
\caption{FIT Update Operation}
\begin{algorithmic}[1]
\Function{updatef}{$x, c$}
\State $temp := c - arr[x]$
\State $n := $ size of $arr$
\While{$x \leq n$}
\State $fit[x] := fit[x] + temp$
\State $x := x + prefib(lsuf(x))$
\EndWhile
\EndFunction
\end{algorithmic}
\end{algorithm}
In the above code, the $prefib(x)$ function returns the previous Fibonacci Number of $x$, where $x$ itself should be a Fibonacci Number. The motive of the above algorithm and the BIT Update Operation is the same: moving the Least Significant Used Bit/Fibonacci by (at least) one place to the left. By adding $prefib(lsuf(x))$, we get a one in two consecutive positions of the Fibonacci Coding, which add up to form the next Fibonacci number. This is slightly different from Algorithm 2 (the BIT Update Operation), where we simply add $lsub(x)$ to $x$.
\\\\
The worst case occurs when $x = F(k) + 1$, for some positive integer $k$. This is because in each iteration of the loop, the position of the Least Significant Used Fibonacci shifts by a single place. Thus, approximately $\log_{\phi} x$ computations are required to perform the entire update operation, which is more than what we have in the BIT Update Operation.
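Putting the pieces together, the FIT can be prototyped as below (an illustrative sketch, not code from the paper: `lsuf` and `prefib` are recomputed naively by greedy subtraction on each call, whereas a serious implementation would obtain them much faster):

```python
class FIT:
    """1-indexed Fibonacci Indexed Tree over an array of n zeros (sketch)."""

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)
        self.fibs = [1, 2]                    # distinct Fibonacci numbers 1, 2, 3, 5, ...
        while self.fibs[-1] <= n:
            self.fibs.append(self.fibs[-1] + self.fibs[-2])

    def _lsuf(self, x):
        # smallest term of the Zeckendorf representation of x
        last = None
        for f in reversed(self.fibs):
            if f <= x:
                last = f
                x -= f
        return last

    def _prefib(self, f):
        # previous Fibonacci number, with prefib(1) taken to be 1
        i = self.fibs.index(f)
        return self.fibs[i - 1] if i > 0 else 1

    def query(self, x):
        # prefix sum arr[1] + ... + arr[x]
        s = 0
        while x > 0:
            s += self.tree[x]
            x -= self._lsuf(x)
        return s

    def add(self, x, delta):
        # point update arr[x] += delta
        while x <= self.n:
            self.tree[x] += delta
            x += self._prefib(self._lsuf(x))
```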
\\
\begin{center}
\begin{tikzpicture}
\begin{semilogxaxis}[
axis lines = left,
xlabel = $n$,
ylabel = {$f(n)$},
legend pos = north west,
xmax = 1e6,
ymax = 30
]
\addplot [
domain=1:3e5,
samples=20,
color=red,
]
{ln(x)/ln((1 + sqrt(5))/2)};
\addlegendentry{$\log_{\phi}{n}$}
\addplot [
domain=1:3e5,
samples=20,
color=blue,
]
{ln(x)/ln(2)};
\addlegendentry{$\log_{2}{n}$}
\end{semilogxaxis}
\end{tikzpicture}
\end{center}
\section{Conclusion and Scope for Further Study}
The proposed data structure is essentially a modification of the Binary Indexed Tree with a certain trade-off: faster queries for slower updates. For most practical purposes however, this trade-off is a favourable one. The most common use of Prefix sums is calculating "range sums", that is, for given positions $l$ and $r$, calculating the following:
\begin{equation}
\sum_{i=l}^{r} arr[i] = query(r) - query(l-1)
\end{equation}
As we can see, two $query$ operations are needed to obtain a single "range sum". On the other hand, updating a single element of the array requires only one call to the $update$ function. Thus, having a faster $query$ function is preferable for most practical applications such as the Arithmetic Coding Algorithm for data compression \cite{witten1987arithmetic}.
\\\\
Like the Binary Indexed Tree, the Fibonacci Indexed Tree can also be extended to multiple dimensions \cite{mishra2013new}. In the 2D variant of the BIT, which is used in image processing and geographical information systems \cite{samet1984geographic}, $4$ $query$ operations are required to calculate the "2D range sum". Moreover, the complexity of one $query$ operation is $O((\log{n})^2)$. Thus, switching to a Fibonacci Indexed Tree has true merit in this case.
% Source: https://arxiv.org/abs/1801.07634 (Khovanov homology detects the trefoils)
\section{Introduction}
\label{sec:intro}
Khovanov homology assigns to a knot $K\subset S^3$ a bigraded abelian group \[\Kh(K)=\bigoplus_{i,j}\Kh^{i,j}(K)\] whose graded Euler characteristic recovers the Jones polynomial of $K$.
In their landmark paper \cite{km-khovanov}, Kronheimer and Mrowka proved that Khovanov homology detects the unknot, answering a categorified version of the famous open question below.
\begin{question} Does the Jones polynomial detect the unknot?
\end{question}
The following question is perhaps even more difficult.
\begin{question}\label{ques:jones-trefoil} Does the Jones polynomial detect the trefoils?
\end{question}
The goal of this paper is to prove that Khovanov homology detects the right- and left-handed trefoils, $T_+$ and $T_-$, answering a categorified version of Question \ref{ques:jones-trefoil}.
Recall that $\Kh(T_+)$ and $\Kh(T_-)$ are both isomorphic to $\Z^4\oplus\Z/2\Z$ but are supported in different bigradings. Our main result is the following.
\begin{theorem}\label{thm:kh-detects-trefoil}
$\Kh(K)\cong \Z^4\oplus\Z/2\Z$ if and only if $K$ is a trefoil.
\end{theorem}
As a bigraded theory, Khovanov homology therefore detects each of $T_+$ and $T_-$.
Like Kronheimer and Mrowka's unknot detection result, Theorem \ref{thm:kh-detects-trefoil} relies on a relationship between Khovanov homology and instanton Floer homology. More surprising is that our proof also hinges fundamentally on ideas from contact geometry. Essential tools include the invariant of contact 3-manifolds with boundary we defined in the instanton Floer setting \cite{bs-shi}; our naturality result for sutured instanton homology \cite{bs-naturality}; and an instanton Floer version of Honda's bypass exact triangle, established here.
We describe below how our main Theorem \ref{thm:kh-detects-trefoil} follows from a certain result, Theorem \ref{thm:khi-fibered}, in the instanton Floer setting. We then explain how the latter theorem can be used to strengthen a result of Kronheimer and Mrowka on $SU(2)$ representations of the knot group. Finally, we outline both the ideas which motivated our approach to Theorem \ref{thm:khi-fibered} and the proof itself, and along the way we state a bypass exact triangle for instanton Floer homology.
\subsection{Trefoils and reduced Khovanov homology} We first note that Theorem \ref{thm:kh-detects-trefoil} follows from the detection result below for reduced Khovanov homology $\Khr$.
\begin{theorem}\label{thm:khr-detects-trefoil}
$\dim_\Z\Khr(K)=3$ if and only if $K$ is a trefoil.
\end{theorem}
To see how Theorem \ref{thm:kh-detects-trefoil} follows, let us suppose
$\Kh(K)\cong \Z^4\oplus\Z/2\Z.$ Then \[\Kh(K;\Z/2\Z)\cong (\Z/2\Z)^6\] by the Universal Coefficient Theorem. Recall the general facts that\begin{enumerate}
\item $\Kh(K)$ and $\Khr(K)$ fit into an exact triangle \[ \xymatrix@C=-15pt@R=15pt{
\Kh(K) \ar[rr] & & \Khr(K) \ar[dl] \\
& \Khr(K); \ar[ul] & \\
} \]
\item $\Kh(K;\Z/2\Z)\cong \Khr(K;\Z/2\Z)\oplus \Khr(K;\Z/2\Z)$.
\end{enumerate} The first implies that $\dim_\Z\Khr(K)\geq 2$ while the second implies that \[\Khr(K;\Z/2\Z) \cong(\Z/2\Z)^3.\] These together force $\dim_\Z\Khr(K)=3$ by another application of the UCT. Therefore, $K$ is a trefoil by Theorem \ref{thm:khr-detects-trefoil}. We describe below how Theorem \ref{thm:khr-detects-trefoil} follows from Theorem \ref{thm:khi-fibered}.
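For concreteness, the first Universal Coefficient Theorem computation in this argument can be written out; summing over all bigradings (the $\operatorname{Tor}$ term appears with a degree shift, which does not affect the total dimension count), we have
\[
\Kh(K;\Z/2\Z)\;\cong\;\big(\Kh(K)\otimes\Z/2\Z\big)\oplus\operatorname{Tor}\big(\Kh(K),\Z/2\Z\big)\;\cong\;(\Z/2\Z)^5\oplus\Z/2\Z\;\cong\;(\Z/2\Z)^6.
\]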
\subsection{Trefoils and instanton Floer homology}
To prove that Khovanov homology detects the unknot, Kronheimer and Mrowka established in \cite{km-khovanov} a spectral sequence relating Khovanov homology and singular instanton knot homology, the latter of which assigns to a knot $K\subset Y$ an abelian group $\Inat(Y,K)$. In particular, they proved that \begin{equation*}\label{eqn:kh-i} \dim_\Z \Khr(K) \geq \dim_\Z \Inat(S^3,K).\end{equation*} Kronheimer and Mrowka moreover showed that the right side is odd and greater than one for nontrivial knots. Theorem \ref{thm:khr-detects-trefoil} therefore follows immediately from the result below.
\begin{theorem}\label{thm:skhi-detects-trefoil}
If $\dim_\Z \Inat(S^3,K)=3$ then $K$ is a trefoil.
\end{theorem}
We prove Theorem \ref{thm:skhi-detects-trefoil} using yet another knot invariant. The instanton knot Floer homology of a knot $K\subset Y$ is a ${\mathbb{C}}$-module defined in \cite{km-excision} as the sutured instanton homology of the knot complement with two oppositely oriented meridional sutures, \[\KHI(Y,K):=\SHI(Y(K),\Gamma_\mu):=\SHI(Y\ssm\nu(K),\mu\cup -\mu).\] It is related to singular instanton knot homology as follows \cite[Proposition 1.4]{km-khovanov}, \begin{equation*}\label{eqn:khi-i}\KHI(Y,K) \cong \Inat(Y,K)\otimes_\Z {\mathbb{C}}.\end{equation*}
Theorem \ref{thm:skhi-detects-trefoil} therefore follows immediately from the result below.
\begin{theorem}\label{thm:khi-detects-trefoil}
If $\dim_{\mathbb{C}} \KHI(S^3,K)=3$ then $K$ is a trefoil.
\end{theorem}
Our proof of Theorem \ref{thm:khi-detects-trefoil} makes use of some additional structure on $\KHI$. Namely, if $\Sigma$ is a Seifert surface for $K$ then $\KHI(Y,K)$ may be endowed with a symmetric Alexander grading, \[\KHI(Y,K)=\bigoplus_{i=-g(\Sigma)}^{g(\Sigma)}\KHI(Y,K,[\Sigma],i), \] where \[\KHI(Y,K,[\Sigma],i)\cong\KHI(Y,K,[\Sigma],-i) \text{ for all } i.\] This grading depends only on the relative homology class of the surface in $H_2(Y,K)$. We will omit this class from the notation when it is unambiguous, as when $Y=S^3$. Kronheimer and Mrowka proved in \cite{km-excision} that if $K$ is fibered with fiber $\Sigma$ then \[\KHI(Y,K,[\Sigma],g(\Sigma))\cong {\mathbb{C}}.\] Moreover, they showed \cite{km-excision,km-alexander} that the Alexander grading completely detects genus and fiberedness when $Y=S^3$. Specifically, \begin{align}
\label{eqn:genus}&\KHI(S^3,K,g(K))\neq 0 \textrm{ and } \KHI(S^3,K,i)=0 \textrm{ for } i>g(K)\\
\label{eqn:genusfibered}&\KHI(S^3,K,g(K))\cong {\mathbb{C}} \textrm{ if and only if } K \textrm{ is fibered},\end{align} exactly as in Heegaard knot Floer homology.
We claim that Theorem \ref{thm:khi-detects-trefoil} (and therefore each preceding theorem) follows from the result below, which states that the instanton knot Floer homology of a fibered knot is nontrivial in the next-to-top Alexander grading.
\begin{theorem}\label{thm:khi-fibered}
Suppose $K$ is a genus $g>0$ fibered knot in $Y\not\cong \#^{2g}(S^1\times S^2)$ with fiber $\Sigma$. Then $\KHI(Y,K,[\Sigma],g{-}1)\neq 0$.
\end{theorem}
To see how Theorem \ref{thm:khi-detects-trefoil} follows, let us
suppose that \[\dim_{\mathbb{C}} \KHI(S^3,K)=3.\] Then $\KHI(S^3,K)$ is supported in Alexander gradings $0$ and $\pm g(K)$ by symmetry and genus detection \eqref{eqn:genus}. Note that $g(K)\geq 1$ since $K$ is otherwise the unknot and \[\dim_{\mathbb{C}} \KHI(S^3,K)=1,\] a contradiction. So we have that \[\KHI(S^3,K,i)\cong \begin{cases}
{\mathbb{C}},&i=g(K),\\
{\mathbb{C}},&i=0,\\
{\mathbb{C}},&i=-g(K).
\end{cases}\] The fiberedness detection \eqref{eqn:genusfibered} therefore implies that $K$ is fibered. But Theorem \ref{thm:khi-fibered} then forces $g(K)=1$. We conclude that $K$ is a genus one fibered knot. It follows that $K$ is either a trefoil or the figure eight knot, but $\KHI$ of the latter is 5-dimensional, so $K$ is a trefoil.
In summary, we have shown that Theorem \ref{thm:khi-fibered} implies all of the other results above including that Khovanov homology detects the trefoils. The bulk of this paper is therefore devoted to proving Theorem \ref{thm:khi-fibered}. Before outlining its proof in detail below, we describe an application of Theorem \ref{thm:skhi-detects-trefoil} to $SU(2)$ representations of the knot group.
\subsection{Trefoils and $SU(2)$ representations}
\label{ssec:trefoil-reps}
Given a knot $K$ in the 3-sphere, consider the representation variety \[\mathscr{R}(K,\mathbf{i}) = \{\rho:\pi_1(S^3\ssm K)\to SU(2)\mid \rho(\mu)=\mathbf{i}\},\] where $\mu$ is a chosen meridian and \[\mathbf{i}=\left[\begin{array}{cc}
i&0\\
0&-i
\end{array}\right].\] Recall that the representation variety of a trefoil $T$ is given by \[\mathscr{R}(T,\mathbf{i}) \cong \{*\} \sqcup S^1,\] where $*$ is the reducible homomorphism in $\mathscr{R}(T,\mathbf{i})$ and $S^1$ is the unique conjugacy class of irreducibles. We conjecture that $\mathscr{R}(K,\mathbf{i})$ detects the trefoil.
\begin{conjecture}
$\mathscr{R}(K,\mathbf{i}) \cong \{*\} \sqcup S^1$ if and only if $K$ is a trefoil.
\end{conjecture}
We prove this conjecture modulo an assumption of nondegeneracy, using Theorem \ref{thm:khi-detects-trefoil} together with the relationship between $\mathscr{R}(K,\mathbf{i})$ and $\KHI$ described in \cite[Section 7.6]{km-excision} and \cite[Section 4.2]{km-alexander}.
The rough idea is that points in $\mathscr{R}(K,\mathbf{i})$ should correspond to critical points of the Chern-Simons functional whose Morse-Bott homology computes $\KHI(S^3,K)$; the reducible corresponds to a single critical point while conjugacy classes of irreducibles ought to correspond to circles of critical points. In other words, the reducible should contribute 1 generator and each class of irreducibles should contribute 2 generators (generators of the homology of the corresponding circle of critical points) to a chain complex which computes $\KHI(S^3,K)$. This heuristic holds true as long as the circles of critical points corresponding to irreducibles are nondegenerate in the Morse-Bott sense.
Thus, if $n(K)$ is the number of conjugacy classes of irreducibles and the corresponding circles of critical points are nondegenerate then \[\dim_{\mathbb{C}}\KHI(S^3,K)\leq 1+2n(K).\] Theorem \ref{thm:khi-detects-trefoil} therefore implies the following.
\begin{theorem}
\label{thm:rep-detects-trefoil}
Suppose there is one conjugacy class of irreducible homomorphisms in $\mathscr{R}(K,\mathbf{i})$. If these homomorphisms are nondegenerate, then $K$ is a trefoil.
\end{theorem}
This improves upon a result of Kronheimer and Mrowka \cite[Corollary 7.20]{km-excision} which under the same hypotheses concludes only that $K$ is fibered.
\subsection{The proof of Theorem \ref{thm:khi-fibered}}
\label{ssec:outline}The rest of this introduction is devoted to explaining the proof of Theorem \ref{thm:khi-fibered}. This result and its proof were inspired by work of Baldwin and Vela-Vick \cite{bvv}, who proved the following analogous result in Heegaard knot Floer homology.
\begin{theorem}
\label{thm:hfk-fibered}
Suppose $K$ is a genus $g>0$ fibered knot in $Y\not\cong \#^{2g}(S^1\times S^2)$ with fiber $\Sigma$. Then $\HFK(Y,K,[\Sigma],g{-}1)\neq 0$.\footnote{The conclusion of this theorem also holds for $Y\cong \#^{2g}(S^1\times S^2)$.}
\end{theorem}
Theorem \ref{thm:hfk-fibered} can be used to give new proofs that the dimension of $\HFK$ detects the trefoil \cite{hw} and that $L$-space knots are prime \cite{krcatovich}. It has no bearing, however, on whether Khovanov homology detects the trefoils, as there is no known relationship between Khovanov homology and Heegaard knot Floer homology.
We summarize below the proof of Theorem \ref{thm:hfk-fibered} from \cite{bvv} and then explain how it can be reformulated in a manner that is translatable to the instanton Floer setting.
Suppose $K$ is a fibered knot as in Theorem \ref{thm:hfk-fibered} and let $(\Sigma,h)$ be an open book corresponding to the fibration of $K$ with $g(\Sigma)=g,$ supporting a contact structure $\xi$ on $Y$.
When it suits us, we are free to assume in proving Theorem \ref{thm:khi-fibered} that $h$ is \emph{not right-veering}, meaning that $h$ sends some arc in $\Sigma$ \emph{to the left} at one of its endpoints, as shown in Figure \ref{fig:left} and made precise in \cite{hkm-rv}.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $a$ at 47 28
\pinlabel $h(a)$ at 20 28
\pinlabel $p$ at 41 -3
\pinlabel $\Sigma$ at 65 57
\endlabellist
\centering
\includegraphics[width=2cm]{Figures/left}
\caption{$h$ sends $a$ to the left at $p$.
}
\label{fig:left}
\end{figure}
To see that we can make this assumption without loss of generality, note that one of $h$ or $h^{-1}$ is not right-veering since otherwise $h=\operatorname{id}$ and $Y\cong \#^{2g}(S^1\times S^2)$. If $h$ is right-veering then we can use the fact that knot Floer homology is invariant under reversing the orientation of $Y$ and consider instead the knot $K\subset -Y$ with open book $(\Sigma,h^{-1})$.
Recall that the knot Floer homology of $-K\subset -Y$ is the homology of the associated graded object of a filtration
\[\euscript{F}_{-g}\subset\euscript{F}_{1-g}\subset \dots \subset \euscript{F}_{g}=\CF(-Y)\] of the Heegaard Floer complex of $-Y$ induced by the knot. By careful inspection of a Heegaard diagram for $-K\subset -Y$ adapted to the open book $(\Sigma,h)$, Baldwin and Vela-Vick prove:
\begin{lemma}
\label{lem:nonrv} If the monodromy $h$ is not right-veering then there exist $c\in\euscript{F}_{-g}$ and $d\in \euscript{F}_{1-g}$ such that $[c]$ generates $H_*(\euscript{F}_{-g})\cong\Z/2\Z$ and $\partial d=c$.
\end{lemma}
To see how Lemma \ref{lem:nonrv} implies Theorem \ref{thm:hfk-fibered}, let us assume that the monodromy $h$ is not right-veering. Given $c$ and $d$ as guaranteed by Lemma \ref{lem:nonrv}, it is then an easy exercise to see that $\partial\circ\partial=0$ and $[c]$ nonzero imply that the class \[[d]\in H_*(\euscript{F}_{1-g}/\euscript{F}_{-g})=\HFK(-Y,-K,-[\Sigma],1-g)\] is nonzero. Theorem \ref{thm:hfk-fibered} then follows from the symmetry \[\HFK(Y,K,[\Sigma],g-1)\cong\HFK(-Y,-K,-[\Sigma],1-g).\]
Our strategy is to translate a version of this proof to the instanton Floer setting. Of course, it does not translate readily. For one thing, it makes use of Heegaard diagrams in an essential way. For another, it relies on a description of knot Floer homology as coming from a filtration of the Floer complex of the ambient manifold, for which there is no analogue in $\KHI$.
Our solution to these difficulties starts with a reformulation of Lemma \ref{lem:nonrv} in terms of the \emph{minus} version of knot Floer homology, which assigns to a knot a module over the polynomial ring $(\mathbb{Z}/2\mathbb{Z})[U]$. Specifically, we observe that Lemma \ref{lem:nonrv} can be recast as follows:
\begin{lemma}
\label{lem:nonrv2} If the monodromy $h$ is not right-veering then the generator of \[\HFKm(-Y,K,[\Sigma],g)\cong\Z/2\Z\] is in the kernel of multiplication by $U.$
\end{lemma}
It may seem as though this reformulation of Lemma \ref{lem:nonrv} makes translation even \emph{more} difficult, as there is no analogue of $\HFKm$ whatsoever in the instanton Floer setting. Surprisingly, however, it is Lemma \ref{lem:nonrv2} that proves most amenable to translation.
Our approach is inspired by work of Etnyre, Vela-Vick, and Zarev, who provide in \cite{evvz} a more contact-geometric description of $\HFKm$ with its $(\Z/2\Z)[U]$-module structure. As we show below, their work enables a proof of Lemma \ref{lem:nonrv2} in terms of sutured Floer homology groups, bypass attachment maps, and contact invariants. The value for us in proving Lemma \ref{lem:nonrv2} from this perspective is that while there is no analogue of $\HFKm$ in the instanton Floer setting, there \emph{are} instanton Floer analogues of these groups, bypass maps, and contact invariants, due to Kronheimer and Mrowka \cite{km-excision} and the authors \cite{bs-shi}. We are thus able to port key elements of this alternative proof of Lemma \ref{lem:nonrv2} to the instanton Floer setting and, with (substantial) additional work, use these elements to prove Theorem \ref{thm:khi-fibered}. Below, we:
\begin{itemize}
\item review the work of \cite{evvz}, tailored to the case of our fibered knot $K$,
\item prove Lemma \ref{lem:nonrv2} from this \emph{direct limit} point of view,
\item outline in detail the proof of Theorem \ref{thm:khi-fibered}, based on these ideas.
\end{itemize}
As the binding of the open book $(\Sigma,h)$, the knot $K$ is naturally a transverse knot in $(Y,\xi)$. Moreover, $K$ has a Legendrian approximation $\mathcal{K}_0^-$ with Thurston-Bennequin invariant \[tb_\Sigma(\mathcal{K}_0^-)=-1.\] For each $i\geq 1$, let $\mathcal{K}^{\pm}_i$ be the result of negatively Legendrian stabilizing the knot $\mathcal{K}^-_0$ $i-1$ times and then positively/negatively stabilizing the result one additional time. Note that each $\mathcal{K}^-_i$ is also a Legendrian approximation of $K$. Let \[(Y(K),\Gamma_i,\xi^\pm_i)\] be the contact manifold with convex boundary and dividing set $\Gamma_i$ obtained by removing a standard neighborhood of $\mathcal{K}_i^\pm$ from $Y$. These contact manifolds are related to one another via positive and negative bypass attachments. By work of Honda, Kazez, and Mati{\'c} in \cite{hkm-tqft}, these bypass attachments induce maps on sutured Floer homology, \[\psi_i^\pm:\SFH(-Y(K),-\Gamma_i)\to \SFH(-Y(K),-\Gamma_{i+1})\] for each $i$, which satisfy \[
\psi_i^-(\EH(\xi_i^-)) = \EH(\xi_{i+1}^-)\textrm{ and } \psi_i^+(\EH(\xi_i^-)) = \EH(\xi_{i+1}^+),
\]
where $\EH$ refers to the Honda-Kazez-Mati{\'c} contact invariant defined in \cite{hkm-sutured}. The main result of \cite{evvz} says that $\HFKm(-Y,K)$ is isomorphic to the direct limit \begin{equation*}\label{eqn:limit}\SFH(-Y(K),-\Gamma_0)\xrightarrow{\psi_0^-}\SFH(-Y(K),-\Gamma_1)\xrightarrow{\psi_1^-}\SFH(-Y(K),-\Gamma_2)\xrightarrow{\psi_2^-}\cdots\end{equation*} of these sutured Floer homology groups and the negative bypass attachment maps. Moreover, under this identification, multiplication by $U$ is the map on this limit induced by the positive bypass attachment maps $\psi_i^+$.
We now observe that Lemma \ref{lem:nonrv2} has a very natural interpretation and proof in this direct limit formulation.
The first step is to identify the element of the direct limit which corresponds to the generator of $\HFKm(-Y,K,[\Sigma],g)$.
For this, recall that Vela-Vick proved in \cite{vv} that the transverse binding $K$ has nonzero invariant \[\euscript{T}(K)\in\HFKm(-Y,K),\] where $\euscript{T}$ refers to the transverse knot invariant defined by Lisca, Ozsv{\'a}th, Stipsicz, and Szab{\'o} in \cite{loss}. Moreover, this class lies in Alexander grading \begin{equation}\label{eqn:alexgradingcalc}(sl(K)+1)/2=g \end{equation} according to \cite{loss}.
So $\euscript{T}(K)$ is the generator of \[\HFKm(-Y,K,[\Sigma],g)\cong \Z/2\Z.\] But Etnyre, Vela-Vick, and Zarev proved that $\euscript{T}(K)$ corresponds to the element of the direct limit represented by the contact invariant \[\EH(\xi_i^-)\in\SFH(-Y(K),-\Gamma_i)\] for any $i$.
It follows that $U \euscript{T}(K)$ corresponds to the element of the limit represented by \[\psi_0^+(\EH(\xi_0^-))=\EH(\xi_1^+).\]
Lemma \ref{lem:nonrv2} therefore follows from the lemma below.
\begin{lemma}
\label{lem:bypassclaim1} If the monodromy $h$ is not right-veering then $\psi_0^+(\EH(\xi_0^-))=\EH(\xi_1^+)=0.$\end{lemma}
But this lemma follows immediately from the result below since the $\EH$ invariant vanishes for overtwisted contact manifolds.
\begin{lemma}
\label{lem:otstab}
If the monodromy $h$ is not right-veering then $\xi_1^+$ is overtwisted.
\end{lemma}
This concludes our alternative proof of Lemma \ref{lem:nonrv2}, modulo the proof of Lemma \ref{lem:otstab} which we provide in Section \ref{sec:proof}.
We now describe in detail our proof of Theorem \ref{thm:khi-fibered}, inspired by these ideas.
The instanton Floer analogues of $\EH(\xi_i^\pm)$ and $\psi_i^\pm$ are the contact invariants \[\cinvt(\xi_i^\pm)\in\SHI(-Y(K),-\Gamma_i)\] and bypass attachment maps \[\phi_i^\pm:\SHI(-Y(K),-\Gamma_i)\to\SHI(-Y(K),-\Gamma_{i+1})\] we defined in \cite{bs-shi}. Guided by the discussion above, our approach to proving Theorem \ref{thm:khi-fibered} begins with the following analogue of Lemma \ref{lem:bypassclaim1}.
\begin{lemma}\label{lem:bypassclaim}If the monodromy $h$ is not right-veering then $\phi_0^+(\cinvt(\xi_0^-))=\cinvt(\xi_1^+)=0$.\end{lemma}
We note that Lemma \ref{lem:bypassclaim} follows immediately from Lemma \ref{lem:otstab} since our contact invariant $\theta$ vanishes for overtwisted contact manifolds, just as the $\EH$ invariant does.
Unfortunately, Lemma \ref{lem:bypassclaim} does not automatically imply Theorem \ref{thm:khi-fibered} in the same way that Lemma \ref{lem:bypassclaim1} implies Theorem \ref{thm:hfk-fibered}, as the latter implication ultimately makes use of structure that is unavailable in the instanton Floer setting. Indeed, proving Theorem \ref{thm:khi-fibered} from the starting point of Lemma \ref{lem:bypassclaim} requires some additional ideas, as explained below.
First, we recall that in \cite{stipsicz-vertesi}, Stipsicz and V{\'e}rtesi proved that the \emph{hat} version of the Lisca, Ozsv{\'a}th, Stipsicz, Szab{\'o} transverse invariant, \[\widehat{\euscript{T}}(K)\in\HFK(-Y,K),\] can be described as the $\EH$ invariant of the contact manifold obtained by attaching a certain bypass to the complement of a standard neighborhood of \emph{any} Legendrian approximation of $K$. In particular, the contact manifold resulting from these Stipsicz-V{\'e}rtesi bypass attachments is independent of the Legendrian approximation. Inspired by this, we define an element \[{\kinvt}(K):=\phi^{SV}_i(\cinvt(\xi_i^-))\in \KHI(-Y,K),\] where \begin{equation*}\label{eqn:mapsv}\phi_i^{SV}:\SHI(-Y(K),-\Gamma_i)\to\SHI(-Y(K),-\Gamma_\mu)=\KHI(-Y,K)\end{equation*} is the map our work \cite{bs-shi} assigns to the Stipsicz-V{\'e}rtesi bypass attachment. Since each $\mathcal{K}_i^-$ is a Legendrian approximation of $K$, the contact manifold obtained from these attachments, and hence $\kinvt(K)$, is independent of $i$.
We prove that the $\kinvt$ invariant of the transverse binding $K$ lies in the top Alexander grading, just as in Heegaard Floer homology:
\begin{theorem}
\label{thm:ttopgrading}
$\kinvt(K)\in\KHI(-Y,K,[\Sigma],g)$.
\end{theorem}
Moreover, we prove the following analogue of Vela-Vick's result \cite{vv} that the transverse binding of an open book has nonzero Heegaard Floer invariant.
\begin{theorem}
\label{thm:nonzero}
$\kinvt(K)$ is nonzero.
\end{theorem}
\begin{remark}
To be clear, Theorems \ref{thm:ttopgrading} and \ref{thm:nonzero} hold without any assumption on $h$.\end{remark}
\begin{remark}Our proof of Theorem \ref{thm:nonzero} relies on formal properties of our contact invariants as well as the surgery exact triangle and adjunction inequality in instanton Floer homology. In fact, our argument can be ported directly to the Heegaard Floer setting to give a new proof of Vela-Vick's theorem.
\end{remark}
The task remains to put all of these pieces together to conclude Theorem \ref{thm:khi-fibered}. This involves proving a bypass exact triangle in sutured instanton homology analogous to Honda's triangle in sutured Heegaard Floer homology. In Section \ref{sec:bypass} we prove the following.
\begin{theorem}
\label{thm:bypass}
Suppose $\Gamma_1,\Gamma_2,\Gamma_3\subset \partial M$ is a 3-periodic sequence of sutures related by the moves in a bypass triangle as in Figure \ref{fig:bypass-triangle2}. Then there is an exact triangle
\[ \xymatrix@C=-35pt@R=30pt{
\SHI(-M,-\Gamma_1) \ar[rr] & & \SHI(-M,-\Gamma_2) \ar[dl] \\
& \SHI(-M,-\Gamma_3), \ar[ul] & \\
} \]
in which the maps are the corresponding bypass attachment maps.
\end{theorem}
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\alpha_1$ at 24 118
\pinlabel $\alpha_2$ at 148 117
\pinlabel $\alpha_3$ at 96 38
\pinlabel $\Gamma_1$ at -3 145
\pinlabel $\Gamma_2$ at 173 144
\pinlabel $\Gamma_3$ at 85 -8
\endlabellist
\centering
\includegraphics[width=4.7cm]{Figures/bypass_triangle2}
\caption{The bypass triangle. Each picture shows the arc $\alpha_i$ along which a bypass is attached to achieve the next set of sutures in the triangle. The gray and white regions indicate the negative and positive regions, respectively.}
\label{fig:bypass-triangle2}
\end{figure}
For example, we show that the map $\phi_0^+$ fits into a bypass exact triangle of the form
\begin{equation}\label{eqn:bypasstriintro} \xymatrix@C=-25pt@R=35pt{
\SHI(-Y(K),-\Gamma_0) \ar[rr]^{\phi_0^+} & & \SHI(-Y(K),-\Gamma_1) \ar[dl]^{\phi_1^{SV}} \\
& \KHI(-Y,K) \ar[ul]^C. & \\
} \end{equation}
To prove Theorem \ref{thm:khi-fibered}, let us now assume that the monodromy $h$ is not right-veering. Then \[\phi_0^+(\cinvt(\xi_0^-))=0\] by Lemma \ref{lem:bypassclaim}. Exactness of the triangle \eqref{eqn:bypasstriintro} then tells us that there is a class $x\in\KHI(-Y,K)$ such that \[C(x)=\cinvt(\xi_0^-).\]
The composition \begin{equation}\label{eqn:phicompintro}\phi_0^{SV}\circ C: \KHI(-Y,K)\to \KHI(-Y,K)\end{equation} therefore satisfies \begin{equation*}\phi_0^{SV}(C(x))=\kinvt(K),\end{equation*} which is nonzero by Theorem \ref{thm:nonzero}. It follows that the class $x$ is nonzero as well.
Although the map in \eqref{eqn:phicompintro} is not \emph{a priori} homogeneous with respect to the Alexander grading, we prove that it shifts the grading by at most 1. On the other hand, this composition is trivial on the top summand \[\KHI(-Y,K,[\Sigma],g)\cong{\mathbb{C}}\] since by Theorems \ref{thm:ttopgrading} and \ref{thm:nonzero} this summand is generated by $\kinvt(K)=\phi_1^{SV}(\cinvt(\xi_1^-))$, and \[C(\kinvt(K)) = C(\phi_1^{SV}(\cinvt(\xi_1^-)))=0\] by exactness of the triangle \eqref{eqn:bypasstriintro}. This immediately implies the result below.
\begin{theorem}
The component of $x$ in $\KHI(-Y,K,[\Sigma],g-1)$ is nonzero.
\end{theorem}
Theorem \ref{thm:khi-fibered} then follows from the symmetry \[\KHI(Y,K,[\Sigma],g-1)\cong \KHI(-Y,K,[\Sigma],g-1).\]
This completes our outline of the proof of Theorem \ref{thm:khi-fibered}. There are several challenges involved in making this outline rigorous. The most substantial and interesting of these has to do with the Alexander grading, as described below.
\subsection{On the Alexander grading}
Kronheimer and Mrowka define the Alexander grading on $\KHI$ by embedding the knot complement in a particular closed $3$-manifold. On the other hand, the argument outlined above relies on the contact invariants in $\SHI$ we defined in \cite{bs-shi} and our naturality results from \cite{bs-naturality} (the latter tell us that different choices in the construction of $\SHI$ yield groups that are \emph{canonically} isomorphic, which is needed to talk sensibly about maps between $\SHI$ groups). Both require that we use a much larger class of \emph{closures}. Accordingly, one obstacle we had to overcome was showing that the Alexander grading can be defined in this broader setting in such a way that it agrees with the one Kronheimer and Mrowka defined (so that it still detects genus and fiberedness). We hope this contribution might prove useful for other purposes as well.
\subsection{Organization} Section \ref{sec:bkgnd} provides the necessary background on instanton Floer homology, sutured instanton homology, and our contact invariants. We also prove several results in this section which do not appear elsewhere but are familiar to experts. In Section \ref{sec:alex}, we give a more robust definition of the Alexander grading associated with a properly embedded surface in a sutured manifold. In Section \ref{sec:bypass}, we prove a bypass exact triangle in sutured instanton homology. In Section \ref{sec:leg}, we define invariants of Legendrian and transverse knots in $\KHI$ and establish some of their basic properties. In Section \ref{sec:proof}, we prove Theorem \ref{thm:khi-fibered} according to the outline above. As discussed, this theorem implies the other theorems stated above, including our main result that Khovanov homology detects the trefoil.
\subsection{Acknowledgments} We thank Chris Scaduto and Shea Vela-Vick for helpful conversations. We also thank Etnyre, Vela-Vick, and Zarev for their beautiful article \cite{evvz} which inspired certain parts of our approach. Finally, we would like to acknowledge the debt this paper owes to the foundational work of Kronheimer and Mrowka.
\section{Background}
\label{sec:bkgnd}
\subsection{Instanton Floer homology}
\label{ssec:instanton} This section provides the necessary background on instanton Floer homology. Our discussion is borrowed from \cite{km-excision}, though we include proofs of some propositions and lemmas which are familiar to experts but do not appear explicitly elsewhere. Our description of the surgery exact triangle is taken from \cite{scaduto}.
Let $(Y,\alpha)$ be an \emph{admissible pair}; that is, a closed, oriented 3-manifold $Y$ and a closed, oriented 1-manifold $\alpha \subset Y$ intersecting some embedded surface transversally in an odd number of points. We associate the following data to this pair:
\begin{itemize}
\item A Hermitian line bundle $w \to Y$ with $c_1(w)$ Poincar\'e dual to $\alpha$;
\item A $U(2)$ bundle $E \to Y$ equipped with an isomorphism $\cinvt: \wedge^2 E \to w$.
\end{itemize}
The \emph{instanton Floer homology} $I_*(Y)_\alpha$ is the Morse homology of the Chern-Simons functional on the space of $SO(3)$ connections on $\operatorname{ad}(E)$ modulo determinant-1 gauge transformations, as in \cite{donaldson-book}. It is a ${\mathbb{Z}}/8{\mathbb{Z}}$-graded ${\mathbb{C}}$-module.
\begin{notation}
Given disjoint oriented $1$-manifolds $\alpha,\eta\subset Y$ we will use the shorthand \[I_*(Y)_{\alpha+\eta}:=I_*(Y)_{\alpha\sqcup\eta}\] as it will make the notation cleaner in what follows.
\end{notation}
For each even-dimensional class $\Sigma \in H_d(Y)$, there is an operator \[\mu(\Sigma):I_*(Y)_\alpha\to I_{*+d-4}(Y)_\alpha,\] as described in \cite{donaldson-kronheimer}. These operators are additive in that \[\mu(\Sigma_1+\Sigma_2)=\mu(\Sigma_1)+\mu(\Sigma_2).\] Moreover, any two such operators commute. Using work of Mu\~noz \cite{munoz}, Kronheimer and Mrowka prove the following in \cite[Corollary~7.2]{km-excision}.
\begin{theorem}
\label{thm:simultaneouseigenvalues} Suppose $R$ is a closed surface in $Y$ of positive genus with $\alpha\cdot R$ odd. Then the simultaneous eigenvalues of the operators $\mu(R)$ and $\mu(\pt)$ on $I_*(Y)_\alpha$ belong to a subset of the pairs
\[ (i^r(2k), (-1)^r\cdot 2) \]
for $0 \leq r \leq 3$ and $0 \leq k \leq g(R)-1$.
\end{theorem}
With this, they make the following definition.
\begin{definition}Given $Y,\alpha,R$ as in Theorem \ref{thm:simultaneouseigenvalues}, define
\[ I_*(Y|R)_\alpha\subset I_*(Y)_\alpha \]
to be the simultaneous $(2g(R)-2,2)$-eigenspace of $(\mu(R),\mu(\pt))$ on $I_*(Y)_\alpha$.\footnote{We will use \emph{eigenspace} to mean \emph{generalized eigenspace} as these operators may not be diagonalizable.}
\end{definition}
The commutativity of these operators implies that for any closed surface $\Sigma\subset Y$ the operator $\mu(\Sigma)$ acts on $I_*(Y|R)_\alpha$. Moreover, Kronheimer and Mrowka obtain the following bounds on the spectrum of this operator \emph{without} the assumption that $\alpha\cdot \Sigma$ is odd \cite[Proposition~7.5]{km-excision}.
\begin{proposition} \label{prop:mu-spectrum}
For any closed surface $\Sigma\subset Y$ of positive genus, the eigenvalues of \[\mu(\Sigma):I_*(Y|R)_\alpha\to I_{*-2}(Y|R)_\alpha\] belong to the set of even integers between $2-2g(\Sigma)$ and $2g(\Sigma)-2$.
\end{proposition}
\begin{lemma}
\label{lem:symmetric}
If $g(R)=1$ then the $m$-eigenspace of $\mu(\Sigma)$ acting on $I_*(Y|R)_\alpha$ is isomorphic to its $-m$-eigenspace for each $m$.
\end{lemma}
\begin{proof} Suppose $g(R)=1$. Then the $m$-eigenspace of $\mu(\Sigma)$ acting on $I_*(Y|R)_\alpha$ is the simultaneous $(m,0,2)$-eigenspace of the operators $(\mu(\Sigma),\mu(R),\mu(\pt))$ on $I_*(Y)_\alpha$. Recall that $I_*(Y)_\alpha$ is a $\Z/8\Z$-graded group. We may thus write an element of this group as $(c_0,c_1,c_2,c_3,c_4,c_5,c_6,c_7)$, where $c_i$ is in grading $i$ mod 8. It then follows immediately from the fact that $\mu(\Sigma)$ and $\mu(R)$ are degree 2 operators and $\mu(\pt)$ is a degree 4 operator that the map which sends \[(c_0,c_1,c_2,c_3,c_4,c_5,c_6,c_7) \textrm{ to }(c_0,c_1,-c_2,-c_3,c_4,c_5,-c_6,-c_7)\] defines an isomorphism from the
$(m,0,2)$-eigenspace of $(\mu(\Sigma),\mu(R),\mu(\pt))$ to the $(-m,0,2)$-eigenspace of these operators.
\end{proof}
Suppose $(Y_1,\alpha_1)$ and $(Y_2,\alpha_2)$ are admissible pairs. A cobordism $(W,\nu)$ from the first pair to the second induces a map \begin{equation*}\label{eqn:maptwo}I_*(W)_\nu:I_*(Y_1)_{\alpha_1}\to I_*(Y_2)_{\alpha_2}\end{equation*} which depends up to sign only on the homology class $[\nu]\subset H_2(W,\partial W)$ and the isomorphism class of $(W,\nu)$, where two such pairs are isomorphic if they are diffeomorphic by a map which intertwines the boundary identifications (the surface $\nu$ specifies a bundle over $W$ restricting to the bundles on the boundary specified by $\alpha_1$ and $\alpha_2$). Moreover, if $\Sigma_1\subset Y_1$ and $\Sigma_2\subset Y_2$ are homologous in $W$ then \begin{equation}\label{eqn:commute}\mu(\Sigma_2)(I_*(W)_\nu(x)) = I_*(W)_\nu(\mu(\Sigma_1)x),\end{equation} which implies the following.
\begin{lemma}
\label{lem:commute}
Suppose $x\in I_*(Y_1)_{\alpha_1}$ is in the $m$-eigenspace of $\mu(\Sigma_1)$. Then $I_*(W)_\nu(x)$ is in the $m$-eigenspace of $\mu(\Sigma_2)$.
\end{lemma}
\begin{proof}
Since $x\in I_*(Y_1)_{\alpha_1}$ is in the $m$-eigenspace of $\mu(\Sigma_1)$, there exists an integer $N$ such that \[(\mu(\Sigma_1)-m)^Nx=0.\] The relation \eqref{eqn:commute} then implies that \[(\mu(\Sigma_2)-m)^NI_*(W)_\nu(x) = I_*(W)_\nu\big((\mu(\Sigma_1)-m)^Nx\big)=0,\] which confirms that $I_*(W)_\nu(x)$ is in the $m$-eigenspace of $\mu(\Sigma_2)$.
\end{proof}
A similar result holds if $(Y_1,\alpha_1)$ is the disjoint union of two admissible pairs \[(Y_1,\alpha_1) = (Y_1^a,\alpha_1^a)\sqcup(Y_1^b,\alpha_1^b).\] In this case, $(W,\nu)$ induces a map \begin{equation*}\label{eqn:mapthree}I_*(W)_\nu:I_*(Y_1^a)_{\alpha_1^a}\otimes I_*(Y_1^b)_{\alpha_1^b}\to I_*(Y_2)_{\alpha_2}.\end{equation*} Moreover, if \[\Sigma_1^a\sqcup \Sigma_1^b\subset Y_1^a\sqcup Y_1^b\] is homologous in $W$ to $\Sigma_2\subset Y_2$ then \begin{equation}\label{eqn:commutethree}\mu(\Sigma_2)\big(I_*(W)_\nu(x\otimes y)\big) = I_*(W)_\nu\big(\mu(\Sigma_1^a)x\otimes y\big)+I_*(W)_\nu\big(x\otimes \mu(\Sigma_1^b)y\big),\end{equation} which implies the following analogue of Lemma \ref{lem:commute}.
\begin{lemma}
\label{lem:commutethree}
Suppose $x\in I_*(Y_1^a)_{\alpha_1^a}$ is in the $m$-eigenspace of $\mu(\Sigma_1^a)$ and $y\in I_*(Y_1^b)_{\alpha_1^b}$ is in the $n$-eigenspace of $\mu(\Sigma_1^b)$. Then $I_*(W)_\nu(x\otimes y)$ is in the $(m+n)$-eigenspace of $\mu(\Sigma_2)$.
\end{lemma}
\begin{proof}
Under the hypotheses of the lemma, there exists an integer $N>0$ such that \[(\mu(\Sigma_1^a)-m)^Nx=(\mu(\Sigma_1^b)-n)^Ny=0.\] It follows easily from the relation \eqref{eqn:commutethree} that
\begin{multline*}
\big(\mu(\Sigma_2) - (m+n)\big)^{2N}I_*(W)_{\nu}(x\otimes y)
=\\ \sum_{j=0}^{2N} {2N\choose j} I_*(W)_{\nu}\big((\mu(\Sigma_1^a)-m)^jx\otimes (\mu(\Sigma_1^b)-n)^{2N-j}y\big).
\end{multline*}
Each term in this sum vanishes since either $j\geq N$ or $2N-j\geq N$, which confirms that $I_*(W)_{\nu}(x\otimes y)$ lies in the $(m+n)$-eigenspace of $\mu(\Sigma_2)$.
\end{proof}
Lemmas \ref{lem:commute} and \ref{lem:commutethree} will be used repeatedly in Section \ref{sec:alex}. They are also used to prove the next proposition and its corollary, which will in turn be important in the proof of Theorem \ref{thm:khi-fibered} in Section \ref{sec:proof}. In particular, Proposition \ref{prop:grading-shift} will be used to constrain the Alexander grading shift of the map $\phi^{SV}_0\circ C$ described in Section \ref{ssec:outline}.
Suppose, for the proposition below, that $(W,\nu)$ is a cobordism from $(Y_1,\alpha_1)$ to $(Y_2,\alpha_2)$ and that $R_1\subset Y_1$ and $R_2\subset Y_2$ are closed surfaces of the same positive genus which are homologous in $W$ with $\alpha_1\cdot R_1$ and $\alpha_2\cdot R_2$ odd. Then Lemma \ref{lem:commute} implies that $I_*(W)_\nu$ restricts to a map \[I_*(W)_\nu:I_*(Y_1|R_1)_{\alpha_1}\to I_*(Y_2|R_2)_{\alpha_2}.\]
\begin{proposition}
\label{prop:grading-shift}
Suppose $\Sigma_1\subset Y_1$ and $\Sigma_2 \subset Y_2$ are closed surfaces
and $F \subset W$ is a closed surface of genus $k \geq 1$ and self-intersection $0$ such that
\[ \Sigma_1 + F = \Sigma_2 \]
in $H_2(W)$. If $x \in I_*(Y_1|R_1)_{\alpha_1}$ belongs to the $2m$-eigenspace of $\mu(\Sigma_1)$, then we can write \[ I_*(W)_\nu(x) = y_{2m-2k+2} + y_{2m-2k+4} + \dots + y_{2m+2k-2}, \]
where each $y_\lambda$ lies in the $\lambda$-eigenspace of the action of $\mu(\Sigma_2)$ on $I_*(Y_2|R_2)_{\alpha_2}$.
\end{proposition}
\begin{proof} First, suppose $\nu\cdot F$ is odd. Consider the cobordism \[(\overline{W},\overline{\nu}):(Y_1,\alpha_1)\sqcup (F\times S^1,\alpha_F)\to(Y_2,\alpha_2)\] obtained from $W$ by removing a tubular neighborhood $F\times D^2$ of $F$. We may assume that $\alpha_F=\nu\cap (F\times S^1)$ intersects a fiber $F$ in an odd number of points. Then for $x \in I_*(Y_1|R_1)_{\alpha_1}$ we have \[I_*(W)_{\nu}(x) = I_*(\overline{W})_{\overline{\nu}}(x\otimes \psi),\] where $I_*(\overline{W})_{\overline{\nu}}$ is the cobordism map \[I_*(\overline{W})_{\overline{\nu}}:I_*(Y_1|R_1)_{\alpha_1}\otimes I_*(F\times S^1)_{\alpha_F}\to I_*(Y_2|R_2)_{\alpha_2}\] and $\psi$ is the relative invariant of the $4$-manifold $(F\times D^2,\nu\cap (F\times D^2))$.
From the discussion above, we can write \[\psi=\psi^{-2}+\psi^2,\] where $\psi^i$ is in the $i$-eigenspace of the operator $\mu(\pt)$ on $I_*(F\times S^1)_{\alpha_F}$. Recall that an element $x \in I_*(Y_1|R_1)_{\alpha_1}$ lies in the $2$-eigenspace of $\mu(\pt)$ on $I_*(Y_1)_{\alpha_1}$ by definition. Since a point in either $Y_1$ or $F\times S^1$ is homologous to a point in $Y_2$, Lemma \ref{lem:commutethree} implies that $I_*(\overline{W})_{\overline{\nu}}(x\otimes \psi^{-2})$ lies in both the $(+2)$- and $(-2)$-eigenspaces of $\mu(\pt)$ on $I_*(Y_2)_{\alpha_2}$. Thus, \[I_*(\overline{W})_{\overline{\nu}}(x\otimes \psi^{-2})=0.\]We therefore have that \[I_*(W)_{\nu}(x) = I_*(\overline{W})_{\overline{\nu}}(x\otimes \psi^2).\] From the discussion above, we can write $\psi^2$ as a sum \[\psi^2 = \psi_{2-2k} + \psi_{4-2k}+\dots +\psi_{2k-4} + \psi_{2k-2},\] where each $\psi_\lambda$ is in the $\lambda$-eigenspace of the operator $\mu(F)$ on $I_*(F\times S^1)_{\alpha_F}.$
Suppose $x \in I_*(Y_1|R_1)_{\alpha_1}$ belongs to the $2m$-eigenspace of the operator $\mu(\Sigma_1)$ as in the proposition. It then follows from Lemma \ref{lem:commutethree} that \[I_*(\overline{W})_{\overline{\nu}}(x\otimes \psi_\lambda)\] lies in the $(2m+\lambda)$-eigenspace of $\mu(\Sigma_2)$ for each $\lambda$. We may therefore write \[I_*(\overline{W})_{\overline{\nu}}(x\otimes \psi)=y_{2m-2k+2} + y_{2m-2k+4} + \dots + y_{2m+2k-2},\] where \[y_\lambda := I_*(\overline{W})_{\overline{\nu}}(x\otimes \psi_{\lambda-2m})\] is in the $\lambda$-eigenspace of $\mu(\Sigma_2)$.
Now suppose $\nu\cdot F$ is even. We claim that there is a surface $G\subset W$ homologous to $2F$ of genus $2k-1$. Let $F\times D^2$ be a tubular neighborhood of $F$ in $W$. Let $F'$ be a parallel copy of $F$ in $F\times D^2$. We cut $F$ open along a non-separating curve $c$, cut $F'$ open along a parallel curve $c'$, and glue these cut open surfaces together in a way that is consistent with their orientations and results in a connected surface $G$ of genus $2k-1$. Figure \ref{fig:cut} shows how we modify $F\sqcup F'$ in $A\times D^2$ to obtain $G$, where $A$ is an annular neighborhood of $c$ in $F$.
\begin{figure}[ht]
\labellist
\tiny\hair 2pt
\pinlabel $p$ at 18 54
\pinlabel $p'$ at 20 84
\endlabellist
\centering
\includegraphics[width=8.2cm]{Figures/cut}
\caption{A schematic for the modification of $F\sqcup F'$ in $A\times D^2$ to obtain $G$. Left, $A\times D^2$ is represented by $I\times D^2$ while $A\subset F$ and its parallel copy $A'\subset F'$ are represented by the horizontal segments $I\times\{p\}$ and $I\times \{p'\}$. Taking the product of these pictures with $S^1$ is a local model for the actual modification.}
\label{fig:cut}
\end{figure}
By tubing $G$ to a copy of $R_2$, we obtain a closed surface $F''\subset W$ homologous to $2F-R_2$ of genus $2k-1+r$, where $r=g(R_2)$. (We write $F''$ since $F'$ already denotes the parallel copy of $F$ above.) This surface has $\nu\cdot F''$ odd and self-intersection $0$, and we have the relation \[2\Sigma_1+F''=2\Sigma_2-R_2\] in $H_2(W)$. Now suppose that $x \in I_*(Y_1|R_1)_{\alpha_1}$ belongs to the $2m$-eigenspace of $\mu(\Sigma_1)$. Then $x$ belongs to the $4m$-eigenspace of $\mu(2\Sigma_1)$. The argument in the previous case tells us that we can write \[ I_*(W)_\nu(x) = z_{4m-2(2k-1+r)+2} + z_{4m-2(2k-1+r)+4} + \dots + z_{4m+2(2k-1+r)-2}, \] where each $z_\lambda$ lies in the $\lambda$-eigenspace of the action of $\mu(2\Sigma_2-R_2)$ on $I_*(Y_2|R_2)_{\alpha_2}$. Then
\[ \big(2\mu(\Sigma_2) - (\lambda+2r-2)\big)^nz_\lambda = \sum_{j=0}^n {n\choose j} \big(\mu(2\Sigma_2-R_2)-\lambda\big)^j\big(\mu(R_2) - (2r-2)\big)^{n-j}z_\lambda, \]
and the right side is again zero for $n$ large enough, meaning that $z_\lambda$ is in the $((\lambda+2r-2)/2)$-eigenspace of $\mu(\Sigma_2)$. Since \[4m-2(2k-1+r)+2\leq \lambda\leq 4m+2(2k-1+r)-2,\] we have that \[2m-2k+1\leq(\lambda+2r-2)/2\leq 2m+2k+2r-3.\] Since the eigenvalues of $\mu(\Sigma_2)$ must also be even integers, we see that the minimum eigenvalue of $\mu(\Sigma_2)$ showing up in the expansion of $I_*(W)_\nu(x)$ into eigenvectors of $\mu(\Sigma_2)$ is $2m-2k+2$. Applying the same argument but for a surface $F''$ homologous to $2F+R_2$ of genus $2k-1+r$ and satisfying \[2\Sigma_1+F''=2\Sigma_2+R_2\] shows that the maximum eigenvalue of $\mu(\Sigma_2)$ showing up in the expansion of $I_*(W)_\nu(x)$ is $2m+2k-2$. This proves the result.
\end{proof}
For the corollary below, suppose $Y,\alpha,R$ are such that $I_*(Y|R)_\alpha$ is defined.
\begin{corollary} \label{cor:grading-shift}
Suppose $\Sigma_1,\Sigma_2 \subset Y$ are closed surfaces of the same genus $g\geq 1$ and $F \subset Y$ is a closed surface of genus $k \geq 1$ such that
\[ \Sigma_1 + F = \Sigma_2 \]
in $H_2(Y)$. If $x \in I_*(Y|R)_\alpha$ belongs to the $2m$-eigenspace of $\mu(\Sigma_1)$, then we can write \[ x = x_{2m-2k+2} + x_{2m-2k+4} + \dots + x_{2m+2k-2}, \]
where each $x_\lambda$ lies in the $\lambda$-eigenspace of the action of $\mu(\Sigma_2)$ on $I_*(Y|R)_\alpha$.
\end{corollary}
\begin{proof}
Simply apply Proposition \ref{prop:grading-shift} to the product cobordism from $(Y,\alpha)$ to itself.
\end{proof}
We will make repeated use of the surgery exact triangle in instanton Floer homology. This triangle goes back to Floer but appears in the form below in work of Scaduto \cite{scaduto}.
Suppose $K$ is a framed knot in $Y$. Let $\alpha$ be an oriented $1$-manifold in $Y$. Let $Y_i$ denote the result of $i$-surgery on $K$, and let $\alpha_i$ be the induced $1$-manifold in $Y_i$, for $i=0,1$.
\begin{theorem}
\label{thm:exacttri}
There is an exact triangle \[ \xymatrix@C=-5pt@R=30pt{
I_*(Y)_{\alpha+ K} \ar[rr]^{I_*(W)_\kappa} & &I_*(Y_0)_{\alpha_0} \ar[dl]^{I_*(W_0)_{\kappa_0}} \\
&I_*(Y_1)_{\alpha_1}\ar[ul]^{I_*(W_1)_{\kappa_1}} & \\
} \] as long as $(Y,\alpha\sqcup K),$ $(Y_0,\alpha_0)$, and $(Y_1,\alpha_1)$ are all admissible pairs.
\end{theorem}
Recall that each manifold in the surgery exact triangle is obtained from the preceding one via integer surgery, as shown in Figure \ref{fig:surgery}. The maps in the exact triangle are induced by the associated $2$-handle cobordisms \[Y\xrightarrow{\,W\,}Y_0\xrightarrow{\,W_0\,}Y_1\xrightarrow{\,W_1\,}Y,\] equipped with certain 2-dimensional cobordisms \[\alpha\sqcup K\xrightarrow{\, \,\kappa\,\, }\alpha_0\xrightarrow{\, \,\kappa_0\,\, }\alpha_1\xrightarrow{\, \,\kappa_1 \,\,}\alpha\sqcup K\] between the $1$-manifolds on the ends.
\begin{figure}[ht]
\labellist
\tiny\hair 2pt
\pinlabel $K$ at 25 100
\pinlabel $\mu_0$ at 62.5 51
\pinlabel $-1$ at 96.5 51
\pinlabel $0$ at 26.5 51
\pinlabel $0$ at 26.5 17.5
\pinlabel $1$ at 61.5 17.5
\pinlabel $\mu_1$ at 96.5 17.5
\pinlabel $0$ at 59 100
\pinlabel $1$ at 94 100
\pinlabel $0$ at 25 66.5
\pinlabel $0$ at 59.5 66.5
\pinlabel $0$ at 93.5 66.5
\pinlabel $1$ at 25 33
\pinlabel $1$ at 59.5 33
\pinlabel $1$ at 93.5 33
\small
\pinlabel $Y$ at 14 1.5
\pinlabel $Y_0$ at 49 1
\pinlabel $Y_1$ at 83 1
\endlabellist
\centering
\includegraphics[width=4cm]{Figures/surgery2}
\caption{Each manifold in the surgery exact triangle is obtained by integer surgery on the red knot in the preceding manifold.}
\label{fig:surgery}
\end{figure}
Note that there are two additional surgery exact triangles involving these same 3-manifolds and 4-dimensional cobordisms, corresponding to surgeries on $\mu_0\subset Y_0$ and $\mu_1\subset Y_1$ as shown in Figure \ref{fig:surgery}. The only differences between these three triangles are the $1$-manifolds used to define the instanton Floer groups and the $2$-dimensional cobordisms between those $1$-manifolds.
\subsection{Sutured instanton homology} This section provides the necessary background on sutured instanton homology. Our discussion is adapted from \cite{bs-naturality,bs-shi}, though the basic construction of $\SHI$ is of course due to Kronheimer and Mrowka \cite{km-excision}.
\subsubsection{Closures of sutured manifolds} We first recall the following definition.
\begin{definition}A \emph{balanced sutured manifold} $(M,\Gamma)$ is a compact, oriented 3-manifold $M$ together with an oriented multicurve $\Gamma\subset \partial M$ whose components are called \emph{sutures}. Letting \[R(\Gamma) = \partial M\smallsetminus\Gamma,\] oriented as a subsurface of $\partial M$, it is required that:
\begin{itemize}
\item neither $M$ nor $R(\Gamma)$ has closed components,
\item $R(\Gamma) = R_+(\Gamma)\sqcup R_-(\Gamma)$ with $\partial R_+(\Gamma) = -\partial R_-(\Gamma) = \Gamma$, and
\item $\chi(R_+(\Gamma)) = \chi(R_-(\Gamma))$.
\end{itemize}
\end{definition}
The following examples will be important for us.
\begin{example}
\label{eg:prodsutured}
Suppose $S$ is a compact, connected, oriented surface with nonempty boundary. The pair \[(H_S,\Gamma_S):=(S\times[-1,1],\partial S\times\{0\})\] is called a \emph{product sutured manifold}.
\end{example}
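As a quick consistency check (not needed in what follows), note that this pair is balanced: up to isotopy, $R_{\pm}(\Gamma_S)\cong S\times\{\pm 1\}\cong S$, so \[\chi(R_+(\Gamma_S)) = \chi(S) = \chi(R_-(\Gamma_S)),\] and neither $H_S$ nor $R(\Gamma_S)$ has closed components since $\partial S$ is nonempty.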
\begin{example}
\label{eg:knotcomplement}
Given a knot $K$ in a closed, oriented $3$-manifold $Y$, let \[(Y(K),\Gamma_\mu) := (Y\ssm\nu(K),\mu\cup -\mu),\] where $\nu(K)$ is a tubular neighborhood of $K$ and $\mu$ and $-\mu$ are oppositely oriented meridians.
\end{example}
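Again as a quick check, this pair is balanced: $\partial Y(K)$ is a torus, and the two oppositely oriented meridians divide it into two annuli $R_{\pm}(\Gamma_\mu)$, so \[\chi(R_+(\Gamma_\mu)) = \chi(R_-(\Gamma_\mu)) = 0.\]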
\begin{definition} An \emph{auxiliary surface} for $(M,\Gamma)$ is a compact, connected, oriented surface $T$ with the same number of boundary components as components of $\Gamma$.\end{definition} Suppose $T$ is an auxiliary surface for $(M,\Gamma)$, that $A(\Gamma)\subset\partial M$ is a tubular neighborhood of $\Gamma$, and that \[h:\partial T\times[-1,1]\xrightarrow{\cong} A(\Gamma)\] is an orientation-reversing diffeomorphism.
\begin{definition} We form a \emph{preclosure} of $M$ \begin{equation*}\label{eqn:bF}M'=M\cup \big(T\times [-1,1]\big)\end{equation*} by gluing $T\times[-1,1]$ to $M$ according to $h$.
\end{definition}
This preclosure has two diffeomorphic boundary components, \[\partial M' = \partial_+M'\sqcup\partial_-M',\] where \[ \partial_+M':=\big(R_+(\Gamma)\cup T\big)\cong\big(R_-(\Gamma)\cup -T\big)=:\partial_-M'.\] Let $R:=\partial_+M'$ and choose an orientation-reversing diffeomorphism \[\varphi:\partial_+M'\xrightarrow{\cong}\partial_-M'\] which fixes a point $q\in T$.
We form a closed $3$-manifold \[Y=M'\cup \big(R\times[1,3]\big)\] by gluing $R\times[1,3]$ to $M'$ according to the maps
\begin{align*}
\operatorname{id}&:R\times\{1\}\to \partial_+M',\\
\varphi&:R\times\{3\}\to \partial_-M'.
\end{align*}
Let $\alpha\subset Y$ be the curve formed as the union of the arcs \[\{q\}\times[-1,1]\textrm{ in } M' \textrm{ and }\{q\}\times [1,3]\textrm{ in } R\times[1,3].\] Choose a nonseparating curve $\eta\subset R\ssm\{q\}$.
For convenience, we will also use $R$ to denote the \emph{distinguished surface} $R\times\{2\}\subset Y$ and $\eta$ to denote the curve $\eta\times\{2\}\subset R\times\{2\}\subset Y$.
\begin{definition} We refer to the tuple $\data = (Y,R,\eta,\alpha)$ together with the embeddings $M\hookrightarrow Y$ and $R\times[1,3]\hookrightarrow Y$ as a \emph{closure} of $(M,\Gamma)$ as long as $g(R)\geq 1$.\footnote{In \cite{bs-shi}, we called such a tuple a \emph{marked odd closure}.}
\end{definition}
The \emph{genus} $g(\data)$ refers to the genus of $R$.
\begin{remark} Suppose $\data = (Y,R,\eta,\alpha)$ is a closure of $(M,\Gamma)$. Then, the tuple \[-\data:=(-Y,-R,-\eta,-\alpha)\] is a closure of $-(M,\Gamma):=(-M,-\Gamma).$ \end{remark}
\subsubsection{Sutured instanton homology}
\label{ssec:shi}
Following Kronheimer and Mrowka \cite{km-excision}, we make the definition below.
\begin{definition}
Given a closure $\data = (Y,R,\eta,\alpha)$ of $(M,\Gamma)$, the \emph{sutured instanton homology}\footnote{In \cite{bs-naturality}, we called this the \emph{twisted sutured instanton homology} and denoted it by $\underline{SHI}(\data)$ instead.} of $\data$ is the ${\mathbb{C}}$-module $\SHI(\data) = I_*(Y|R)_{\alpha+ \eta}.$
\end{definition}
Kronheimer and Mrowka proved that, up to isomorphism, $\SHI(\data)$ is an invariant of $(M,\Gamma)$.
In \cite{bs-naturality}, we constructed for any two closures $\data,\data'$ of $(M,\Gamma)$ of genus at least two a \emph{canonical} isomorphism \[\Psi_{\data,\data'}:\SHI(\data)\to\SHI(\data')\] which is well-defined up to multiplication in ${\mathbb{C}}^\times$. In particular, these isomorphisms satisfy, up to multiplication in ${\mathbb{C}}^\times$, \[\Psi_{\data,\data''} = \Psi_{\data',\data''}\circ\Psi_{\data,\data'}\] for any triple $\data,\data',\data''$ of such closures. The groups $\SHI(\data)$ and isomorphisms $\Psi_{\data,\data'}$ ranging over closures of $(M,\Gamma)$ of genus at least two thus define what we called a \emph{projectively transitive system of ${\mathbb{C}}$-modules} in \cite{bs-naturality}.
\begin{definition} The \emph{sutured instanton homology of $(M,\Gamma)$} is the projectively transitive system of ${\mathbb{C}}$-modules $\SHI(M,\Gamma)$ defined by the groups and canonical isomorphisms above. \end{definition}
\begin{remark}The isomorphisms $\Psi_{\data,\data'}$ are defined using $2$-handle and excision cobordisms. We will provide more details in Section \ref{sec:alex} where we show that these isomorphisms respect the Alexander gradings associated to certain properly embedded surfaces in $(M,\Gamma)$.
\end{remark}
The following result of Kronheimer and Mrowka \cite[Proposition 7.8]{km-excision} will be important for us. We sketch their proof below so that we can refer to this construction later.
\begin{proposition}
\label{prop:prodsutured}
Suppose $(H_S,\Gamma_S)$ is a product sutured manifold as in Example \ref{eg:prodsutured}. Then $\SHI(H_S,\Gamma_S)\cong{\mathbb{C}}.$
\end{proposition}
\begin{proof}
Let $T$ be an auxiliary surface for $(H_S,\Gamma_S)$. Form a preclosure by gluing $T\times[-1,1]$ to $S\times[-1,1]$ according to a map \[h:\partial T\times[-1,1]\to\partial S\times[-1,1]\] of the form $f\times \operatorname{id}$ for some diffeomorphism $f:\partial T\to \partial S$. This preclosure is then a product \[M'=(S\cup T)\times[-1,1].\] To form a closure, we let $R=S\cup T$ and glue $R\times[1,3]$ to $M'$ by maps
\begin{align*}
\operatorname{id}&:R\times\{1\}\to \partial_+M',\\
\varphi&:R\times\{3\}\to \partial_-M'
\end{align*}
as usual, where $\varphi$ fixes a point $q$ on $T$. We then define $\eta$ and $\alpha$ as described earlier to obtain a closure $\data_S = (Y,R,\eta,\alpha)$, where \[Y = R\times_{\varphi}S^1\] is the surface bundle over the circle with fiber $R$ and monodromy $\varphi$. We then have that \[\SHI(\data_S)=I_*(R\times_\varphi S^1|R)_{\alpha+\eta}\cong{\mathbb{C}},\] by \cite[Proposition 7.8]{km-excision}. \end{proof}
As mentioned in the introduction, Kronheimer and Mrowka define in \cite[Section 7.6]{km-excision} the instanton Floer homology of a knot as follows.
\begin{definition}
\label{def:khi}
Suppose $K$ is a knot in a closed, oriented $3$-manifold $Y$. The \emph{instanton knot Floer homology} of $K$ is given by \[\KHI(Y,K):=\SHI(Y(K),\Gamma_\mu),\] where $(Y(K),\Gamma_\mu)$ is the knot complement with two meridional sutures as in Example \ref{eg:knotcomplement}.
\end{definition}
\subsubsection{Contact handle attachments and surgery}
\label{sssec:handles}
In \cite{bs-shi}, we defined maps on $\SHI$ associated to contact handle attachments and surgeries; see Figure \ref{fig:handles} for illustrations of such handle attachments. We describe the constructions of these maps below.
\begin{figure}[ht]
\labellist
\hair 2pt
\pinlabel $c$ at 300 277
\endlabellist
\centering
\includegraphics[width=7cm]{Figures/handles}
\caption{Left, a $1$-handle attachment. Right, a $2$-handle attachment. Recall that a $2$-handle is attached along an annular neighborhood of a curve $c$ which intersects $\Gamma$ in exactly two points, as shown. The gray regions represent $R_-(\Gamma)$ and the white regions $R_+(\Gamma)$. }
\label{fig:handles}
\end{figure}
First, we consider contact $1$-handle attachments.
Suppose $(M',\Gamma')$ is obtained from $(M,\Gamma)$ by attaching a contact $1$-handle. Then one can show that any closure $\data' = (Y,R,\eta,\alpha)$ of the first is also naturally a closure $\data = (Y,R,\eta,\alpha)$ of the second (the only differences between these closures are the embeddings $M,M'\hookrightarrow Y$). We therefore define the $1$-handle attachment map \[\operatorname{id}:\SHI(-\data)\to\SHI(-\data')\] to be the identity map on instanton Floer homology. For closures of genus at least two, these maps commute with the canonical isomorphisms described above and thus define a map \[H_1:\SHI(-M,-\Gamma)\to\SHI(-M',-\Gamma'),\] as in \cite[Section 3.2]{bs-shi}.
Next, we consider contact $2$-handle attachments.
Suppose $(M',\Gamma')$ is obtained from $(M,\Gamma)$ by attaching a contact $2$-handle along a curve $c\subset \partial M$. Let $\data = (Y,R,\eta,\alpha)$ be a closure of $(M,\Gamma)$. We proved in \cite[Section 3.3]{bs-shi} that $\partial M$-framed surgery on $c\subset Y$ naturally yields a closure $\data'=(Y',R,\eta,\alpha)$ of $(M',\Gamma')$, where $Y'$ is the surgered manifold. Let \[W:-Y\to-Y'\] be the cobordism obtained from $-Y\times[0,1]$ by attaching a $2$-handle along $c\times\{1\}\subset -Y\times\{1\}$, and let $\nu$ be the cylinder \[\nu=(-\alpha\sqcup-\eta)\times [0,1].\] The fact that $c$ is disjoint from $R$ means that $-R\subset-Y$ and $-R\subset-Y'$ are isotopic in $W$.
Since these surfaces have the same genus, Lemma \ref{lem:commute} implies that the induced map \[I_*(W)_\nu:I_*(-Y)_{-\alpha-\eta}\to I_*(-Y')_{-\alpha-\eta}\]
restricts to a map \[I_*(W)_\nu:\SHI(-\data)\to\SHI(-\data').\]
For closures of genus at least two, these maps commute with the canonical isomorphisms and therefore define a map \[H_2:\SHI(-M,-\Gamma)\to\SHI(-M',-\Gamma'),\] as shown in \cite[Section 3.3]{bs-shi}.
Finally, we consider surgeries.
Suppose $(M',\Gamma')$ is obtained from $(M,\Gamma)$ via $(+1)$-surgery on a framed knot $K\subset M$. A closure $\data = (Y,R,\eta,\alpha)$ of $(M,\Gamma)$ naturally gives rise to a closure $\data'=(Y',R,\eta,\alpha)$ of $(M',\Gamma')$, where $Y'$ is the surgered manifold as in the $2$-handle attachment case. The $2$-handle cobordism $(W,\nu)$ corresponding to this surgery induces a map \[I_*(W)_\nu:\SHI(-\data)\to\SHI(-\data')\] as in the previous case. We showed in \cite[Section 3.3]{bs-shi} that for closures of genus at least two, these maps commute with the canonical isomorphisms and thus give rise to a map \begin{equation*}\label{eqn:leg-surg}F_K:\SHI(-M,-\Gamma)\to\SHI(-M',-\Gamma').\end{equation*}
\subsection{The contact invariant}
\label{ssec:contact} We assume familiarity with contact structures and open books. The background material on partial open books is introduced in large part to establish common notation. In what follows, we write $(M,\Gamma,\xi)$ to refer to a contact manifold $(M,\xi)$ with nonempty convex boundary and dividing set $\Gamma$; we call such a triple a \emph{sutured contact manifold}.
\begin{definition}
\label{def:pob} A \emph{partial open book} is a triple $(S,P,h)$, where:
\begin{itemize}
\item $S$ is a connected, oriented surface with nonempty boundary,
\item $P$ is a subsurface of $S$ formed as the union of a neighborhood of $\partial S$ with $1$-handles in $S$, and
\item $h:P\to S$ is an embedding which restricts to the identity on $\partial P\cap \partial S$.
\end{itemize}
\end{definition}
\begin{definition}
A \emph{basis} for a partial open book $(S,P,h)$ is a collection \[\mathbf{c}=\{c_1,\dots,c_n\}\] of disjoint, properly embedded arcs in $P$ such that $S\ssm \mathbf{c}$ deformation retracts onto $S\ssm P$; essentially, the \emph{basis arcs} in $\mathbf{c}$ specify the cores of $1$-handles used to form $P$.
\end{definition}
A partial open book specifies a sutured contact manifold as follows.
Suppose $(S,P,h)$ is a partial open book with basis $\mathbf{c}=\{c_1,\dots,c_n\}$. Let $\xi_S$ be the unique tight contact structure on the handlebody \[H_S=S\times[-1,1] \textrm{ with dividing set }\Gamma_S = \partial S\times\{0\}.\]
For $i=1,\dots,n$, let $\gamma_i$ be the curve on $\partial H_S$ given by \begin{equation}\label{eqn:basishandle}\gamma_i=(c_i\times\{1\})\cup (\partial c_i\times [-1,1])\cup (h(c_i)\times\{-1\}).\end{equation}
\begin{definition} We define $M(S,P,h,\mathbf{c})$ to be the sutured contact manifold obtained from $(H_S,\Gamma_S,\xi_S)$ by attaching contact $2$-handles along the curves $\gamma_1,\dots,\gamma_n$ above.
\end{definition}
\begin{remark}
Up to a canonical isotopy class of contactomorphisms, $M(S,P,h,\mathbf{c})$ does not depend on the choice of basis $\mathbf{c}$.
\end{remark}
\begin{definition}
\label{def:pobd}
A \emph{partial open book decomposition} of $(M,\Gamma,\xi)$ consists of a partial open book $(S,P,h)$ together with a contactomorphism \[M(S,P,h,\mathbf{c})\xrightarrow{\cong}(M,\Gamma,\xi)\] for some basis $\mathbf{c}$ of $(S,P,h)$.
\end{definition}
\begin{remark} We will generally conflate partial open book decompositions with partial open books. Given a partial open book decomposition as above, we will simply think of $(M,\Gamma,\xi)$ as being equal to $M(S,P,h,\mathbf{c})$. In particular, we will view $(M,\Gamma,\xi)$ as obtained from $(H_S,\Gamma_S,\xi_S)$ by attaching contact $2$-handles along the curves $\gamma_1,\dots,\gamma_n$ in \eqref{eqn:basishandle}.
\end{remark}
Half of the \emph{relative Giroux correspondence}, proven in \cite{hkm-sutured}, states the following.
\begin{theorem}
Every $(M,\Gamma,\xi)$ admits a partial open book decomposition.
\end{theorem}
\begin{example}
Given an open book decomposition $(\Sigma, h)$ of a closed contact manifold $(Y,\xi)$, the triple \[(S=\Sigma, P=\Sigma\ssm D^2, h|_P)\] is a partial open book decomposition for the complement of a Darboux ball in $(Y,\xi)$.
\end{example}
\begin{example}
\label{eq:legendrian-complement}
Given an open book decomposition $(\Sigma, h)$ of a closed contact manifold $(Y,\xi)$ with a Legendrian knot $K$ realized on the page $\Sigma$, the triple \[(S=\Sigma, P=\Sigma\ssm\nu(K), h|_P)\] is a partial open book decomposition for the complement of a standard neighborhood of $K$.
\end{example}
To define the contact invariant of $(M,\Gamma,\xi)$, we choose a partial open book decomposition $(S,P,h)$ of $(M,\Gamma,\xi)$. Let $\mathbf{c}=\{c_1,\dots,c_n\}$ be a basis for $(S,P,h)$ and let \[\gamma_1,\dots,\gamma_n\subset \partial H_S\] be the corresponding curves as in \eqref{eqn:basishandle}. Let \[H:\SHI(-H_S,-\Gamma_S)\to\SHI(-M,-\Gamma)\] be the composition of the maps associated to contact $2$-handle attachments along the curves $\gamma_1,\dots,\gamma_n$, as described in Section \ref{sssec:handles}. Recall from Proposition \ref{prop:prodsutured} that \[\SHI(-H_S,-\Gamma_S)\cong{\mathbb{C}},\] and let $\mathbf{1}$ be a generator of this group. The following is from \cite[Definition 4.2]{bs-shi}.
\begin{definition}
\label{def:contact} We define the contact class to be
\[\cinvt(M,\Gamma,\xi):= H(\mathbf{1})\in\SHI(-M,-\Gamma).\]
\end{definition} As the notation suggests, this class does not depend on the partial open book decomposition of $(M,\Gamma,\xi)$. Indeed, we proved the following in \cite[Theorem 4.3]{bs-shi}.
\begin{theorem}
$\cinvt(M,\Gamma,\xi)$ is an invariant of the sutured contact manifold $(M,\Gamma,\xi)$.
\end{theorem}
We will often think about the contact class in terms of closures. For that point of view, let $\data_S = (Y,R,\eta,\alpha)$ be a closure of $(H_S,\Gamma_S)$ with \[Y=R\times_\varphi S^1\] as in the proof of Proposition \ref{prop:prodsutured}. As mentioned in the definition of the contact $2$-handle attachment maps, performing $\partial H_S$-framed surgery on the curves \[\gamma_1,\dots,\gamma_n\subset \partial H_S\subset Y\] naturally yields a closure $\data=(Y',R,\eta,\alpha)$ of $(M,\Gamma)$. We then define \[\cinvt(M,\Gamma,\xi,\data):=I_*(V)_\nu(\mathbf{1}),\] where \[I_*(V)_\nu:I_*(-Y|{-}R)_{-\alpha-\eta}\to I_*(-Y'|{-}R)_{-\alpha-\eta}\] is the map associated to the corresponding $2$-handle cobordism $V$, $\nu=(-\alpha\sqcup-\eta)\times[0,1]$ is the usual cylindrical cobordism, and $\mathbf{1}$ is a generator of \[I_*(-Y|{-}R)_{-\alpha-\eta}\cong {\mathbb{C}}.\] In particular, this class is well-defined up to multiplication in ${\mathbb{C}}^\times$. Our invariance result is then the statement that the classes $\cinvt(M,\Gamma,\xi,\data)$ defined in this manner, for any partial open book decompositions and closures of genus at least two, are related by the canonical isomorphisms between the groups assigned to different closures.
\begin{remark}
For convenience of notation, we will often use the shorthand $\cinvt(\xi)$ and $\cinvt(\xi,\data)$ to denote the classes $\cinvt(M,\Gamma,\xi)$ and $\cinvt(M,\Gamma,\xi,\data)$, respectively.
\end{remark}
Below are some important properties of the contact class, all proven in \cite[Section 4.2]{bs-shi}. In brief, the contact class vanishes for overtwisted contact manifolds and behaves naturally with respect to contact handle attachment and contact $(+1)$-surgery on Legendrian knots.
\begin{theorem}
\label{thm:zero-overtwisted}
If $(M,\Gamma,\xi)$ is overtwisted then $\cinvt(\xi)=0$.
\end{theorem}
\begin{theorem}
\label{thm:handletheta2} Suppose $(M',\Gamma',\xi')$ is the result of attaching a contact $i$-handle to $(M,\Gamma,\xi)$ for $i=1$ or $2$. Then the associated map \[H_i:\SHI(-M,-\Gamma)\to\SHI(-M',-\Gamma')\] defined in Section \ref{sssec:handles} sends $\cinvt(\xi)$ to $\cinvt(\xi')$.
\end{theorem}
\begin{theorem}
\label{thm:legendrian-surgery}
Suppose $K$ is a Legendrian knot in $(M,\Gamma,\xi)$ and that $(M',\Gamma',\xi')$ is the result of contact $(+1)$-surgery on $K$. Then the associated map \[F_K:\SHI(-M,-\Gamma)\to\SHI(-M',-\Gamma')\] defined in Section \ref{sssec:handles} sends $\cinvt(\xi)$ to $\cinvt(\xi')$.
\end{theorem}
We end this background section by proving a variant of Theorem \ref{thm:legendrian-surgery} needed in Sections \ref{sec:leg} and \ref{sec:proof}. Roughly, we would like to say that the map $F_K$ fits into a surgery exact triangle where the third term involves the Floer homology of the sutured manifold obtained by $0$-surgery on $K$. The issue is that the surface $\nu$ in the $2$-handle cobordism used in defining $F_K$ is generally different from the surface $\kappa$ used in defining the map in the surgery triangle. We show below that the latter map still sends $\cinvt(\xi)$ to $\cinvt(\xi')$. We begin with some setup.
Suppose $(M',\Gamma')$ is obtained from $(M,\Gamma)$ via $(+1)$-surgery on a framed knot $K\subset M$. Let $\data = (Z,R,\eta,\alpha)$ be a closure of $(M,\Gamma)$, and let \[\data_1=(Z_1,R,\eta,\alpha)\textrm{ and }
\data_0=(Z_0,R,\eta,\alpha)\] be the tuples obtained from $\data$ by performing $1$- and $0$-surgery on $K\subset Z$.
These are naturally closures of the sutured manifolds
\[(M,\Gamma)\textrm{ and }
(M_1(K),\Gamma)=(M',\Gamma')\textrm{ and }
(M_0(K),\Gamma).
\] Note that $-Z_1$ and $-Z_0$ are obtained from surgeries on $ K\subset -Z$,
\begin{align*}
-Z_1&\cong (-Z)_{-1}(K),\\
-Z_0&\cong (-Z)_{0}(K).
\end{align*}
By Theorem \ref{thm:exacttri} (and Lemma \ref{lem:commute}), there is an exact triangle
\begin{equation}\label{eqn:surgeryexacttriangleZ} \xymatrix@C=-35pt@R=35pt{
I_*(-Z|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W)_{{\kappa}}} & &I_*(-Z_1|{-}R)_{-\alpha -\eta} \ar[dl]^{I_*(W_{0})_{{\kappa}_0}} \\
&I_*(-Z_0|{-}R)_{-\alpha -\eta+ \mu}\ar[ul]^{I_*(W_{1})_{{\kappa}_1}}. & \\
} \end{equation}
Here, $\mu$ is the curve in $-Z_0$ corresponding to the meridian of $ K\subset -Z$, as in Figure \ref{fig:meridian}, such that $0$- and $1$-surgeries on $\mu$ produce $-Z$ and $-Z_1$.
\begin{figure}[ht]
\labellist
\tiny\hair 2pt
\pinlabel $\mu$ at 26.5 16.5
\pinlabel $0$ at 61 16.5
\pinlabel $1$ at 95 16.5
\pinlabel $0$ at 60.5 34
\pinlabel $0$ at 26 34
\pinlabel $0$ at 94.5 34
\small
\pinlabel $-Z_0$ at 12 1
\pinlabel $-Z$ at 46 1
\pinlabel $-Z_1$ at 81 1
\endlabellist
\centering
\includegraphics[width=4cm]{Figures/meridian2}
\caption{The curve $\mu$ in $-Z_0$ on which $0$- and $1$-surgeries produce $-Z$ and $-Z_1$.}
\label{fig:meridian}
\end{figure}
Since $K\subset M$ and $\kappa$ is a surface in the product portion $M\times I$ of $W$, the maps \[I_*(W)_{{\kappa}}:\SHI(-\data)\to\SHI(-\data_1)\] defined in this way commute with the canonical isomorphisms relating the groups assigned to different closures, and therefore define a map \[G_K:\SHI(-M,-\Gamma)\to\SHI(-M',-\Gamma'),\] exactly as in the proof that the map $F_K$ is well-defined in \cite[Section 3.3]{bs-shi}.
As mentioned above, the only difference between the definitions of $G_K$ and $F_K$ is that the former is defined using the surface $\kappa$ while the latter is defined using the cylinder $\nu$. Although these maps are not \emph{a priori} equal, we can still prove the following.
\begin{theorem}
\label{thm:legendrian-surgery2} Suppose $K$ is a Legendrian knot in the interior of $(M,\Gamma,\xi)$ and that $(M',\Gamma',\xi')$ is the result of contact $(+1)$-surgery on $K$. Then the associated map \[G_K:\SHI(-M,-\Gamma)\to\SHI(-M',-\Gamma')\] sends $\cinvt(\xi)$ to $\cinvt(\xi')$.
\end{theorem}
\begin{proof}
Let $(S,P,h)$ be a partial open book decomposition for $(M,\Gamma,\xi)$ such that $K$ is Legendrian realized on the page $S$. Let $\mathbf{c}=\{c_1,\dots,c_n\}$ be a basis for $(S,P,h)$ so that $(M,\Gamma,\xi)$ is obtained from $(H_S,\Gamma_S,\xi_S)$ by attaching contact $2$-handles along the corresponding curves \[\gamma_1,\dots,\gamma_n\subset \partial H_S\] defined in \eqref{eqn:basishandle}. We will view $K$ as a knot \[K\subset S\times\{0\}\subset H_S,\] where the contact framing of $K$ agrees with the framing induced by $S\times\{0\}$.
Let $\data_S = (Y,R,\eta,\alpha)$ be a closure of $(H_S,\Gamma_S)$ with $Y=R\times_\varphi S^1$ as constructed in the proof of Proposition \ref{prop:prodsutured}. Let \[\data_{S,1} = (Y_1,R,\eta,\alpha)\textrm{ and }\data_{S,0} = (Y_0,R,\eta,\alpha)\] be the tuples obtained from $\data_S$ by performing $1$- and $0$-surgery on $K\subset Y$ with respect to the framing of $K$ induced by $S\times\{0\}$. These are naturally closures of the sutured manifolds \[(H_{S,1},\Gamma_S)\cong (H_S,\Gamma_S)\textrm{ and }(H_{S,0},\Gamma_S)\] obtained from the corresponding surgeries on $K\subset H_S$.
Let
\begin{equation*}\data = (Z,R,\eta,\alpha)\textrm{ and }
\data_1=(Z_1,R,\eta,\alpha)\textrm{ and }
\data_0=(Z_0,R,\eta,\alpha)
\end{equation*} be the tuples obtained from \[\data_S\textrm{ and }\data_{S,1}\textrm{ and }\data_{S,0}\] by performing $\partial H_S$-framed surgeries on the curves $\gamma_1,\dots,\gamma_n$ in \[Y\textrm{ and }Y_1\textrm{ and }Y_0,\] respectively. These are naturally closures of the sutured manifolds
\[(M,\Gamma)\textrm{ and }
(M_1(K),\Gamma)=(M',\Gamma')\textrm{ and }
(M_0(K),\Gamma)
\] as above. Our aim is to show that the map \[I_*(W)_{{\kappa}}:I_*(-Z|{-}R)_{-\alpha -\eta}\to I_*(-Z_1|{-}R)_{-\alpha -\eta}\] which appears in the surgery exact triangle \eqref{eqn:surgeryexacttriangleZ} sends $\cinvt(\xi,\data)$ to $\cinvt(\xi',\data_1)$.
For this, first note that there is also a surgery exact triangle
\[ \xymatrix@C=-35pt@R=35pt{
I_*(-Y|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(X)_{{\kappa}}} & &I_*(-Y_1|{-}R)_{-\alpha -\eta} \ar[dl]^{I_*(X_{0})_{{\kappa}_0}} \\
&I_*(-Y_0|{-}R)_{-\alpha -\eta+ \mu}\ar[ul]^{I_*(X_{1})_{{\kappa}_1}}. & \\
} \]
Note that $-Y_0$ is obtained by performing $0$-surgery on a copy of $K$ in a fiber of $-Y=-R\times_\varphi S^1$ with respect to the fiber framing. But this means that the surface $-R\subset -Y_0$ is homologous to a surface of genus $g(R)-1$ obtained by surgering a fiber along $K$. It follows that \[I_*(-Y_0|{-}R)_{-\alpha -\eta+ \mu}=0\] since in this case $2g(R)-2$ is not an eigenvalue of the operator $\mu(R)$ on $I_*(-Y_0)_{-\alpha -\eta+ \mu}$ by Theorem \ref{thm:simultaneouseigenvalues}. The map \[I_*(X)_{{\kappa}}:I_*(-Y|{-}R)_{-\alpha -\eta}\to I_*(-Y_1|{-}R)_{-\alpha -\eta} \] is therefore an isomorphism. Now, we have a commutative diagram \[
\xymatrix@C=15pt@R=30pt{
I_*(-Y|{-}R)_{-\alpha -\eta} \ar[d]_{I_*(V)_{\nu}}\ar[rr]^{I_*(X)_{{\kappa}}}_{\cong} & &I_*(-Y_1|{-}R)_{-\alpha -\eta} \ar[d]^{I_*(V_1)_{\nu}}\\
I_*(-Z|{-}R)_{-\alpha -\eta} \ar[rr]_{I_*(W)_{{\kappa}}} & &I_*(-Z_1|{-}R)_{-\alpha -\eta}, }\] where $V$ and $V_1$ are the cobordisms corresponding to the surgeries on $\gamma_1,\dots,\gamma_n$, and these $\nu$ are the usual cylindrical cobordisms. Recall that \[
\cinvt(\xi,\data) = I_*(V)_{\nu}(\mathbf{1}) \textrm{ and }
\cinvt(\xi',\data_1) = I_*(V_1)_{\nu}(\mathbf{1})\]
where the elements $\mathbf{1}$ refer to generators of \[I_*(-Y|{-}R)_{-\alpha -\eta}\cong I_*(-Y_1|{-}R)_{-\alpha -\eta}\cong {\mathbb{C}}.\] The commutativity of the diagram above then implies that \[I_*(W)_{\kappa}(\cinvt(\xi,\data))=\cinvt(\xi',\data_1),\] up to multiplication in ${\mathbb{C}}^{\times}$, as desired.\end{proof}
\section{The Alexander grading}
\label{sec:alex}
In \cite[Section 7.6]{km-excision}, Kronheimer and Mrowka explain how a Seifert surface for a knot $K\subset Y$ gives rise to an Alexander grading on the instanton knot Floer homology $\KHI(Y,K)$ defined in Definition \ref{def:khi}. In their construction, they use a genus one closure of $(Y(K),\Gamma_\mu)$ formed with an annular auxiliary surface. In this section, we describe how a properly embedded surface in any sutured manifold which intersects the sutures twice gives rise to an Alexander grading on $\SHI$, using closures of any genus. In particular, we prove that this grading is preserved by the canonical isomorphisms relating the groups assigned to different closures.
Suppose $(M,\Gamma)$ is a balanced sutured manifold and $\Sigma$ is a properly embedded, oriented surface in $M$ with one boundary component such that the closed curve $\sigma=\Sigma\cap \partial M$ intersects $\Gamma$ transversely in two points, $p_+$ and $p_-$, and let \[\sigma_{\pm} = \sigma\cap R_{\pm}(\Gamma),\] as shown in Figure \ref{fig:surfaceclosure}. To define the Alexander grading on $\SHI(M,\Gamma)$ associated with $\Sigma$, we first cap off $\Sigma$ in a closure of $(M,\Gamma)$.
\begin{figure}[ht]
\labellist
\tiny \hair 2pt
\pinlabel $p_+$ at 94 87
\pinlabel $p_-$ at 204 87
\pinlabel $\sigma_+$ at 68 99
\pinlabel $\sigma_+$ at 230 99
\pinlabel $\sigma_-$ at 148 98
\pinlabel $\Sigma$ at 55 59
\pinlabel $\Sigma'$ at 332 59
\pinlabel $\sigma_+$ at 342 99
\pinlabel $\sigma_-$ at 427 98
\pinlabel $\tau_+$ at 358 143
\pinlabel $\tau_-$ at 387 143
\pinlabel $\tau_-$ at 467 143
\pinlabel $\tau_+$ at 496 143
\endlabellist
\centering
\includegraphics[width=12.3cm]{Figures/surfaceclosure}
\caption{Left, a portion of the manifold $M$ with $\Gamma$ shown in red. The surface $\Sigma$ is shown with oriented boundary in blue. Right, the preclosure $M'$ formed by attaching $T\times[-1,1]$. The $1$-handle $\tau\times[-1,1]$ glues to $\Sigma$ to form the properly embedded surface $\Sigma'$ with boundary shown in blue. }
\label{fig:surfaceclosure}
\end{figure}
Let $T$ be an auxiliary surface with an identification \[f:\partial T\xrightarrow{\cong}\Gamma.\] Let $A(\Gamma)=\Gamma\times[-1,1]$ be an annular neighborhood of $\Gamma$ such that $\sigma$ intersects $A(\Gamma)$ in the arcs \[\{p_+\}\times[-1,1]\textrm{ and }\{p_-\}\times[-1,1],\] and let \[h:\partial T\times[-1,1]\xrightarrow{\cong} A(\Gamma)\] be the diffeomorphism $h=f\times\operatorname{id}$. Let $M'$ be the preclosure obtained by gluing $T\times[-1,1]$ to $M$ according to $h$. Let $\tau$ be a nonseparating, properly embedded arc in $T$ with endpoints at $p_+$ and $p_-$. Let $\Sigma'$ be the properly embedded surface in $M'$ obtained as the union of $\Sigma$ with the 1-handle $\tau\times[-1,1]$, as shown in Figure \ref{fig:surfaceclosure}. The boundary of $\Sigma'$ consists of the two circles \[c_\pm:=\sigma_\pm\cup\tau_\pm\subset \partial_{\pm}M',\] where $\tau_\pm = \tau\times\{\pm 1\}$. The fact that $\tau$ is nonseparating in $T$ implies that the circles $c_\pm$ are nonseparating in $\partial_\pm M'$. In forming a closure, we can therefore choose a diffeomorphism \[\varphi:\partial_+M'\to\partial_-M'\] which identifies $c_+$ with $c_-$. Let $R=\partial_+M'$ and let \[Y= M'\cup \big(R\times[1,3]\big)\] be the closed manifold formed according to $\varphi$ in the usual manner. We define \[\alpha=\big(\{q\}\times[-1,1]\big)\cup\big(\{q\}\times[1,3]\big)\] in the usual way, for some $q\in T$ fixed by $\varphi$, and choose for $\eta\subset R$ a curve which intersects $c_+$ in one point.
\begin{definition}We say that a closure $\data = (Y,R,\eta,\alpha)$ defined as above is \emph{adapted to $\Sigma$}.
\end{definition}
\noindent Let \[\overline\Sigma=\Sigma'\cup \big( c_+\times[1,3]\big)\] be the closed surface in $Y$ obtained as the union of $\Sigma'$ with the annulus $c_+\times[1,3]\subset R\times[1,3]$. Note that $\overline\Sigma$ is obtained from $\Sigma$ by capping off its boundary with a punctured torus, so that \[g(\overline \Sigma)= g(\Sigma)+1.\] We use $\overline\Sigma$ to define an Alexander grading on $\SHI(\data)$, generalizing the construction of \cite[Section~7.6]{km-excision}, as follows.
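As a sanity check on this genus formula, one can compute Euler characteristics directly, using the fact that the cap $(\tau\times[-1,1])\cup(c_+\times[1,3])$ is a punctured torus glued to $\Sigma$ along the circle $\partial\Sigma$: \[\chi(\overline\Sigma) = \chi(\Sigma) + \chi\big(\overline\Sigma\ssm \operatorname{int}(\Sigma)\big) - \chi(S^1) = (1-2g(\Sigma)) + (-1) - 0 = -2g(\Sigma),\] so the closed surface $\overline\Sigma$ has genus $g(\Sigma)+1$, as claimed.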
\begin{definition}
\label{def:alexgrading} Given a closure $\data$ of $(M,\Gamma)$ adapted to $\Sigma$, the sutured instanton homology of $\data$ \emph{in Alexander grading $i$ relative to $\Sigma$} is the $2i$-eigenspace of the operator \[\mu(\overline\Sigma):\SHI(\data)\to \SHI(\data).\] We denote this eigenspace by $\SHI(\data,[\Sigma],i)$.\end{definition}
\begin{remark} As the notation above suggests, the Alexander grading on $(M,\Gamma)$ relative to $\Sigma$ depends only on the class \[[\Sigma]\in H_2(M, \sigma)\] as this class determines the homology class of $\overline\Sigma$, and the operator $\mu(\overline\Sigma)$ depends only on $[\overline\Sigma]$.\end{remark}
\begin{lemma}
\label{lem:gradingbound}
The group $\SHI(\data,[\Sigma],i)$ is trivial for $|i|> g(\Sigma)$. Thus, \[\SHI(\data) = \bigoplus_{i=-g(\Sigma)}^{g(\Sigma)}\SHI(\data,[\Sigma],i).\]
\end{lemma}
\begin{proof}
This follows immediately from Proposition \ref{prop:mu-spectrum}, which implies that the eigenvalues of $\mu(\overline\Sigma)$ on $\SHI(\data)$ belong to the set of even integers from $2-2g(\overline\Sigma)$ to $2g(\overline\Sigma)-2$. Since $g(\overline\Sigma)=g(\Sigma)+1$, any eigenvalue $2i$ therefore satisfies $|2i|\leq 2g(\Sigma)$, i.e. $|i|\leq g(\Sigma)$.
\end{proof}
We next prove that this construction gives a well-defined Alexander grading on $\SHI(M,\Gamma)$.
\begin{theorem}
\label{thm:alexwelldefined}
Suppose $\data$ and $\data'$ are closures of $(M,\Gamma)$ adapted to $\Sigma$. For each $i$, we have \[\SHI(\data,[\Sigma],i)\cong\SHI(\data',[\Sigma],i).\] Moreover, when $\data$ and $\data'$ have genus at least two, the canonical isomorphism $\Psi_{\data,\data'}$ restricts to an isomorphism \[\Psi_{\data,\data'}:\SHI(\data,[\Sigma],i)\xrightarrow{\cong}\SHI(\data',[\Sigma],i)\] for each $i$. \end{theorem}
\begin{proof}
Suppose \[\data=(Y,R,\eta,\alpha) \,\textrm{ and }\, \data' = (Y',R',\eta',\alpha')\] are closures of $(M,\Gamma)$ adapted to $\Sigma$. Let us suppose first that $g(\data)=g(\data')$. Since the curves $c_+$ and $\eta$ are essential in $R$ and intersect in one point, we can find a diffeomorphism \begin{equation}\label{eqn:isoR}R\xrightarrow{\cong} R'\end{equation} which identifies $c_+,\eta\subset R$ with the corresponding $c_+',\eta'\subset R'$. We can also ensure that this map sends the point $q$ defining $\alpha$ to the corresponding point $q'$ defining $\alpha'$.
Note that for genus $1$ closures, there is a unique isotopy class of such diffeomorphisms since the complements \[R\ssm(c_+\cup\eta) \textrm{ and } R'\ssm(c_+'\cup\eta')\] are disks in this case. Thus, we automatically have \[\SHI(\data,[\Sigma],i)\cong \SHI(\data',[\Sigma],i)\] when $g(\data) = g(\data')=1$.
More generally, the diffeomorphism in \eqref{eqn:isoR} allows us to view $\data$ and $\data'$ as formed from the \emph{same} preclosure $M'$, according to different diffeomorphisms \[\varphi,\varphi':\partial_+M'\to\partial_-M',\] where $\varphi^{-1}\varphi'$ is a diffeomorphism of $R$ fixing $(c_+,\eta,q)$. We may factor $\varphi^{-1}\varphi'$ as a composition of positive and negative Dehn twists around curves \[a_1,\dots,a_n\subset R\] disjoint from $(c_+,\eta,q)$. This then allows us to view $\data'$ as obtained from $\data$ via $(\pm 1)$-surgeries on copies \[a_i\times\{t_i\}\subset R\times\{t_i\}\subset R\times[1,3]\subset Y\] of the $a_i$.
Suppose first that only positive Dehn twists appear in the factorization of $\varphi^{-1}\varphi'$. Let $(W,\nu)$ be the associated cobordism from $Y$ to $Y'$, obtained from $Y\times[0,1]$ by attaching $(-1)$-framed $2$-handles along the $a_i\times\{t_i\}$ in $Y\times \{1\}$, with $\nu = (\alpha\sqcup\eta)\times[0,1]$. Since $R$ and $R'$ are isotopic in $W$ and \[2g(R)-2=2g(R')-2,\]
Lemma \ref{lem:commute} implies that the induced map \[I_*(W)_\nu:I_*(Y)_{\alpha+\eta}\to I_*(Y')_{\alpha'+\eta'}\] restricts to a map \begin{equation}\label{eqn:mapiso}I_*(W)_\nu:\SHI(\data)\to\SHI(\data').\end{equation} The map in \eqref{eqn:mapiso} defines the canonical isomorphism $\Psi_{\data,\data'}$ when both closures have genus at least 2 \cite[Definition~9.10]{bs-naturality}. Since the $a_i$ are disjoint from $c_+$, the surgery curves $a_i\times\{t_i\}$ are disjoint from the annulus $c_+\times[1,3]$ which caps off the surface $\Sigma'$ to form $\overline \Sigma\subset Y$. The capped off surfaces $\overline\Sigma\subset Y$ and $\overline\Sigma'\subset Y'$ are therefore isotopic in $W$. Lemma \ref{lem:commute} then implies that $I_*(W)_\nu$ restricts to an isomorphism \[I_*(W)_\nu:\SHI(\data,[\Sigma],i)\xrightarrow{\cong}\SHI(\data',[\Sigma],i)\] for each $i$, as claimed.
If both positive and negative Dehn twists appear in the factorization of $\varphi^{-1}\varphi'$ then the isomorphism relating $\SHI(\data)$ and $\SHI(\data')$ is defined as the composition of a map as in \eqref{eqn:mapiso} with the inverse of such a map, so the same argument applies.
Next, suppose $\data=(Y,R,\eta,\alpha)$ is a closure of $(M,\Gamma)$ adapted to $\Sigma$ as in the beginning of this section, and let us borrow the notation used there. We will construct a closure $\data'$ adapted to $\Sigma$ with \[g(\data')=g(\data)+1\] and show that the isomorphism relating $\SHI(\data)$ and $\SHI(\data')$ preserves Alexander gradings.
To start, let $d\subset T$ be a closed curve dual to the arcs $\tau\subset T$ and $\eta\cap T$ (we can choose $\eta$ so that these arcs are parallel). Let us assume that the map \[\varphi:\partial_+M'\to\partial_-M'\] used to form $Y$ identifies the curves \[d\times\{\pm 1\}\subset \partial T\times[-1,1]\subset \partial_{\pm}M'.\] Let $F$ be a closed genus 2 surface containing parallel curves $\boldsymbol{\tau}$ and $\boldsymbol{\eta}$ and a curve $\mathbf{d}$ dual to both. Let $T'$ be the surface obtained by cutting $T$ and $F$ open along $d$ and $ \mathbf{d}$ and gluing these cut-open surfaces together according to a homeomorphism of their boundaries which identifies $d\cap \tau$ and $d\cap \eta$ with $ \mathbf{d}\cap \boldsymbol{\tau}$ and $ \mathbf{d}\cap \boldsymbol{\eta}$, respectively, as shown in Figure \ref{fig:splice}.
\begin{figure}[ht]
\labellist
\tiny \hair 2pt
\pinlabel $R_+(\Gamma)$ at 25 23
\pinlabel $R_+(\Gamma)$ at 345 22
\pinlabel $T$ at 60 24
\pinlabel $F$ at 177 23
\pinlabel $T'$ at 382 23
\pinlabel $\tau$ at 60 97
\pinlabel $\sigma_{+}$ at 28 97
\pinlabel $d$ at 92 65
\pinlabel $\eta$ at 25 53
\pinlabel $\boldsymbol{\tau}$ at 179 97
\pinlabel $\mathbf{d}$ at 142 65
\pinlabel $\boldsymbol{\eta}$ at 178 52
\pinlabel $\sigma_{+}$ at 347 96
\pinlabel $\tau'$ at 382 98
\pinlabel $\eta'$ at 347 54
\endlabellist
\centering
\includegraphics[width=14cm]{Figures/splice}
\caption{Left, the genus $2$ surface $F$ and a portion of $R = R_+(\Gamma)\cup T$ in an example where $T$ has genus $0$ and $2$ boundary components. Right, the result of cutting and regluing to form $R' = R_+(\Gamma)\cup T'$. In performing this operation, we increase the genus of the closure by one.}
\label{fig:splice}
\end{figure}
The union $\tau'=\tau\cup \boldsymbol{\tau}$ is then a nonseparating, properly embedded arc in $T'$. Let \[\data'=(Y',R',\eta'=\eta\cup \boldsymbol{\eta},\alpha'=\alpha)\] be the closure of $(M,\Gamma)$ adapted to $\Sigma$ formed using the auxiliary surface $T'$ and a diffeomorphism $\varphi'$ which restricts to \[\varphi:\partial_+M'\ssm (d\times\{1\})\to\partial_-M'\ssm (d\times\{-1\})\] and to \[\operatorname{id}:(F\ssm \mathbf{d})\times\{1\}\to(F\ssm \mathbf{d})\times\{-1\}.\] In particular, $\varphi'$ identifies the two circles \[c'_\pm:=\sigma_\pm\cup\tau'_\pm\] so that $\Sigma$ caps off to a closed surface $\overline\Sigma'$ in $Y'$ in the usual manner.
Note that $Y'$ is obtained by cutting $Y$ and $F\times S^1$ open along the tori \[d\times S^1=(d\times[-1,1])\cup (d\times[1,3])\] and $\mathbf{d}\times S^1$, respectively, and regluing. From this point of view, the capped off surface $\overline\Sigma'$ is obtained by cutting $\overline\Sigma$ and the torus $\boldsymbol{\tau}\times S^1\subset F\times S^1$ open along essential curves and regluing. There is a standard excision-type cobordism \[(W,\nu):(Y,\alpha\sqcup \eta)\sqcup(F\times S^1,\boldsymbol{\eta})\to(Y',\alpha'\sqcup\eta')\] associated to this cutting and regluing, as described in \cite[Section 3]{km-excision} and \cite[Section 9.3.2]{bs-naturality}, and a corresponding map \[I_*(W)_\nu: I_*(Y)_{\alpha+ \eta}\otimes I_*(F\times S^1)_{\boldsymbol{\eta}}\to I_*(Y')_{\alpha'+ \eta'}.\]
Kronheimer and Mrowka prove in \cite[Proposition 7.9]{km-excision} that the $(2,2)$-eigenspace of the operator $(\mu(F),\mu(\pt))$ acting on $I_*(F\times S^1)_{\boldsymbol{\eta}}$ is one-dimensional. Let $\Theta$ be a generator of this eigenspace. Then, since the disjoint union of surfaces \[R\sqcup F\subset Y\sqcup(F\times S^1)\] is homologous in $W$ to $R'\subset Y'$, and \[2g(R')-2 = (2g(R)-2)+2,\] Lemma \ref{lem:commutethree} implies that $I_*(W)_\nu(\cdot,\Theta)$ defines a map \begin{equation}\label{eqn:thetamap}I_*(W)_\nu(\cdot,\Theta): \SHI(\data)\to\SHI(\data').\end{equation}
The map in \eqref{eqn:thetamap} is an isomorphism \cite[Section 3]{km-excision}. Moreover, it agrees with the canonical isomorphism $\Psi_{\data,\data'}$ defined in \cite{bs-naturality} when $g(\data)\geq 2$.
Since $\boldsymbol\eta$ intersects the torus $\mathbf{d}\times S^1\subset F\times S^1$ in a single point, Theorem \ref{thm:simultaneouseigenvalues} says that the only eigenvalue of $\mu(\mathbf{d}\times S^1)$ acting on $I_*(F\times S^1)_{\boldsymbol{\eta}}$ is $0$. It follows that the $2$-eigenspace of $\mu(\pt)$ on $I_*(F\times S^1)_{\boldsymbol{\eta}}$ is simply the group we denote by $I_*(F\times S^1|\,\mathbf{d}\times S^1)_{\boldsymbol{\eta}}.$ We therefore have that \[\Theta\in I_*(F\times S^1|\,\mathbf{d}\times S^1)_{\boldsymbol{\eta}}.\] Note that the only eigenvalue of $\mu(\boldsymbol{\tau}\times S^1)$ acting on $I_*(F\times S^1|\,\mathbf{d}\times S^1)_{\boldsymbol{\eta}}$ is $0$ by Proposition \ref{prop:mu-spectrum} since $\boldsymbol{\tau}\times S^1$ is a torus. In particular, $\Theta$ is in the $0$-eigenspace of this operator $\mu(\boldsymbol{\tau}\times S^1)$.
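To make the last step concrete: if, as in the standard eigenvalue bounds for such operators, Proposition \ref{prop:mu-spectrum} constrains the eigenvalues of $\mu(S)$ for a closed surface $S$ to even integers $\lambda$ with $|\lambda|\leq 2g(S)-2$, then the claim for the torus $\boldsymbol{\tau}\times S^1$ is immediate from the arithmetic
\[
g(\boldsymbol{\tau}\times S^1)=1 \quad\Longrightarrow\quad |\lambda|\;\leq\; 2\cdot 1-2\;=\;0,
\]
so that $\lambda=0$ is the only possible eigenvalue.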
Then, since the disjoint union of surfaces \[ \overline\Sigma\sqcup (\boldsymbol{\tau}\times S^1)\subset Y\sqcup(F\times S^1)\] is homologous in $W$ to $\overline\Sigma'\subset Y'$, Lemma \ref{lem:commutethree} implies that the map $I_*(W)_\nu(\cdot,\Theta)$ restricts to an isomorphism \[I_*(W)_\nu(\cdot,\Theta): \SHI(\data,[\Sigma],i)\xrightarrow{\cong}\SHI(\data',[\Sigma],i)\] for each $i$.
Finally, for \emph{any} two closures $\data$ and $\data'$ of $(M,\Gamma)$ adapted to $\Sigma$, we define the isomorphism (the canonical isomorphism if both closures have genus at least 2) \[\SHI(\data)\to \SHI(\data')\] to be a composition of isomorphisms defined as above. This completes the proof.
\end{proof}
Given Theorem \ref{thm:alexwelldefined}, we make the following definition for a surface $\Sigma$ in $(M,\Gamma)$ as above.
\begin{definition}
The sutured instanton homology of $(M,\Gamma)$ \emph{in Alexander grading $i$ relative to $\Sigma$} is the projectively transitive system of ${\mathbb{C}}$-modules \[\SHI(M,\Gamma,[\Sigma],i)\] consisting of the groups $\SHI(\data,[\Sigma],i)$ for $g(\data)\geq 2$ together with the canonical isomorphisms between them.
\end{definition}
\begin{remark}
Note that \[\SHI(M,\Gamma) = \bigoplus_{i=-g(\Sigma)}^{g(\Sigma)}\SHI(M,\Gamma,[\Sigma],i),\] by Lemma \ref{lem:gradingbound}.
\end{remark}
\begin{remark}
\label{rmk:symmetry} If $(M,\Gamma)$ admits a genus one closure then the Alexander grading on $\SHI(M,\Gamma)$ with respect to $\Sigma$ is symmetric by Lemma \ref{lem:symmetric}. That is,
\[\SHI(M,\Gamma,[\Sigma],i)\cong\SHI(M,\Gamma,[\Sigma],-i)\] for each $i$.
\end{remark}
Following \cite[Section 7.6]{km-excision}, we make the definition below.
\begin{definition}
\label{def:khialex}Given a knot $K$ in a closed, oriented $3$-manifold $Y$ with Seifert surface $\Sigma$, the instanton knot Floer homology of $K$ \emph{in Alexander grading $i$ relative to $\Sigma$} is \[\KHI(Y,K,[\Sigma],i):=\SHI(Y(K),\Gamma_\mu,[\Sigma],i).\] When the relative homology class of $\Sigma$ is unambiguous we will omit it from the notation.
\end{definition}
\begin{remark}
In \cite{km-excision}, Kronheimer and Mrowka defined the Alexander grading on $\KHI$ in exactly the same way as we do, but using genus one closures only. It follows from Theorem \ref{thm:alexwelldefined} that the Alexander grading we give in Definition \ref{def:khialex} agrees with theirs up to isomorphism.
\end{remark}
\begin{remark}
\label{rmk:symmetryknots} Since the knot complement $(Y(K),\Gamma_\mu)$ admits a genus one closure, the Alexander grading on $\KHI(Y,K)$ relative to $\Sigma$ is symmetric by Remark \ref{rmk:symmetry}. That is,
\[\KHI(Y,K,[\Sigma],i)\cong\KHI(Y,K,[\Sigma],-i)\] for each $i$.
\end{remark}
Kronheimer and Mrowka proved the following three results. The first follows from \cite[Theorem 7.18]{km-excision} and the discussion at the end of \cite[Section 7.6]{km-excision}; the second is \cite[Proposition 7.16]{km-excision}; and the third is \cite[Proposition 4.1]{km-alexander}. As explained in the introduction, all three are important in our proof of Theorem \ref{thm:khi-detects-trefoil}.
\begin{theorem}
\label{thm:fiberedY}
If $K\subset Y$ is fibered with fiber $\Sigma$ then $\KHI(Y,K,[\Sigma],g(K))\cong{\mathbb{C}}$.
\end{theorem}
\begin{theorem}
\label{thm:genus}
For $K\subset S^3$, $\KHI(S^3,K,g(K))\neq 0$.
\end{theorem}
\begin{theorem}
\label{thm:fiberedS}
For $K\subset S^3$, $\KHI(S^3,K,g(K))\cong{\mathbb{C}}$ if and only if $K$ is fibered.
\end{theorem}
\section{The bypass exact triangle}
\label{sec:bypass}
In this section, we prove the bypass exact triangle, stated in the introduction as Theorem \ref{thm:bypass}. Our proof is very similar to the proof we gave in \cite[Section 5]{bs-shm} for sutured monopole homology, and we will rely on the topological ideas developed there. In the instanton Floer setting, however, we must be a bit careful with the bundles involved in the exact triangle.
Suppose $(M,\Gamma)$ is a balanced sutured manifold and $\alpha\subset \partial M$ is an arc which intersects $\Gamma$ transversally in three points, including both endpoints of $ \alpha$. A \emph{bypass move} along $\alpha$ replaces $\Gamma$ with a new set of sutures $\Gamma'$ which differ from $\Gamma$ in a neighborhood of $\alpha$, as shown in Figure \ref{fig:bypass-move}.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\alpha$ at 22 28
\endlabellist
\centering
\includegraphics[width=4.5cm]{Figures/bypass}
\caption{A bypass move along the arc $\alpha$, with $\Gamma$ on the left and $\Gamma'$ on the right. The gray and white regions indicate the negative and positive regions, respectively.}
\label{fig:bypass-move}
\end{figure}
A bypass move can be achieved by attaching a contact $1$-handle along disks in $\partial M$ centered at the endpoints of $ \alpha$ and then attaching a contact $2$-handle along the union $\beta$ of $\alpha$ with an arc on the boundary of this $1$-handle, as shown in Figure \ref{fig:bypass-handles}. We refer to this sequence of handle attachments as a \emph{bypass attachment along $\alpha$}, following \cite{honda-lens, ozbagci}. A bypass attachment along $\alpha$ therefore gives rise to a morphism \[\phi_\alpha:\SHI(-M,-\Gamma)\to \SHI(-M,-\Gamma')\] which is the composition of the corresponding contact $1$- and $2$-handle attachment maps defined in Section \ref{ssec:shi}. Theorem \ref{thm:handletheta2} implies the following.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\alpha$ at 51 57
\pinlabel $\beta$ at 242 39
\endlabellist
\centering
\includegraphics[width=8cm]{Figures/bypass_handles2}
\caption{Performing a bypass move by attaching a contact $1$-handle at the endpoints of $\alpha$ and a contact $2$-handle along $\beta$.}
\label{fig:bypass-handles}
\end{figure}
\begin{proposition}
\label{prop:bypass}
Suppose $(M,\Gamma',\xi')$ is obtained from $(M,\Gamma,\xi)$ by attaching a bypass along $\alpha$. Then the induced map $\phi_\alpha$ sends $\cinvt(\xi)$ to $\cinvt(\xi')$.
\end{proposition}
Figure \ref{fig:bypass-triangle} shows a sequence of bypass moves, performed in some fixed neighborhood in $\partial M$, resulting in a 3-periodic sequence of sutures on $M$. Such a sequence is called a \emph{bypass triangle}. Unpublished work of Honda shows that a bypass triangle gives rise to a \emph{bypass exact triangle} in sutured Heegaard Floer homology. We proved a similar result in the monopole Floer setting in \cite[Theorem 5.2]{bs-shm}. Here, we prove the analogue for sutured instanton homology, stated below.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\alpha_1$ at 24 118
\pinlabel $\alpha_2$ at 148 117
\pinlabel $\alpha_3$ at 96 38
\pinlabel $\Gamma_1$ at -3 145
\pinlabel $\Gamma_2$ at 173 144
\pinlabel $\Gamma_3$ at 85 -8
\endlabellist
\centering
\includegraphics[width=4.5cm]{Figures/bypass_triangle2}
\caption{The bypass triangle. Each picture shows the attaching arc used to achieve the next set of sutures in the triangle.}
\label{fig:bypass-triangle}
\end{figure}
{
\renewcommand{\thetheorem}{\ref{thm:bypass}}
\begin{theorem}
Suppose $\Gamma_1,\Gamma_2,\Gamma_3\subset \partial M$ is a 3-periodic sequence of sutures resulting from successive bypass moves along arcs $\alpha_1,\alpha_2,\alpha_3$ as in Figure \ref{fig:bypass-triangle}. Then there is an exact triangle
\[ \xymatrix@C=-25pt@R=35pt{
\SHI(-M,-\Gamma_1) \ar[rr]^{\phi_{\alpha_1}} & & \SHI(-M,-\Gamma_2) \ar[dl]^{\phi_{\alpha_2}} \\
& \SHI(-M,-\Gamma_3), \ar[ul]^{\phi_{\alpha_3}} & \\
} \]
in which $\phi_{\alpha_1}, \phi_{\alpha_2},\phi_{\alpha_3}$ are the corresponding bypass attachment maps.
\end{theorem}
\addtocounter{theorem}{-1}
}
\begin{proof}
The main idea behind the proof is that there are closures of these three sutured manifolds which are related as in the surgery exact triangle of Theorem \ref{thm:exacttri}. This was first shown in the proof of \cite[Theorem 5.2]{bs-shm} by an argument we review below. Unlike in the monopole Floer case, though, it is not obvious that the cobordism maps in the surgery exact triangle relating the instanton Floer groups of these closures are the same as those which induce the bypass attachment maps on $\SHI$, as the bundles on these cobordisms are different in the two cases. However, we show that one can choose closures so that any \emph{two} of the three maps agree with those that induce the bypass attachment maps. This allows us to prove exactness of the triangle in Theorem \ref{thm:bypass} at each group, one group at a time.
Note that by enlarging our local picture slightly, we can think of the arcs $\alpha_1,\alpha_2,\alpha_3$ as being arranged as in Figure \ref{fig:bypass-setup} with respect to $\Gamma_1$. We may therefore view \[(M,\Gamma_2)\,\,{\rm and} \,\,(M,\Gamma_3)\,\, {\rm and}\,\,(M,\Gamma_1)\] as being obtained from $(M,\Gamma_1)$ by attaching bypasses along the arcs \[\alpha_1\,\,{\rm and}\,\,\alpha_1,\alpha_2\,\, {\rm and}\,\, \alpha_1,\alpha_2, \alpha_3,\] respectively. As described above, attaching a bypass along $\alpha_i$ amounts to attaching a contact $1$-handle $h_i$ along disks centered at the endpoints of $\alpha_i$ and then attaching a contact $2$-handle along a curve $\beta_i$ which extends $\alpha_i$ over the handle, as in Figure \ref{fig:bypass-setup}. Let $(Z_1,\gamma_1)$ be the sutured manifold obtained by attaching all three $h_1,h_2,h_3$ to $(M,\Gamma_1)$, as in Figure \ref{fig:bypass-setup}.
For $i=1,2,3$, let $(Z_{i+1},\gamma_{i+1})$ be the result of attaching a contact $2$-handle to $(Z_i,\gamma_i)$ along $\beta_i$. Then
\begin{align*}
(Z_1,\gamma_1)&= (M,\Gamma_1)\cup h_1\cup h_2\cup h_3,\\
(Z_2,\gamma_2)&= (M,\Gamma_2)\cup h_2\cup h_3,\\
(Z_3,\gamma_3)&= (M,\Gamma_3)\cup h_3,\\
(Z_4,\gamma_4)&= (M,\Gamma_1).
\end{align*}
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\alpha_1$ at 61 79
\pinlabel $\alpha_2$ at 106 181
\pinlabel $\alpha_3$ at 151 282
\pinlabel $\beta_1$ at 769 70
\pinlabel $\beta_2$ at 815 172
\pinlabel $\beta_3$ at 861 272
\endlabellist
\centering
\includegraphics[width=11cm]{Figures/bypass_setup2}
\caption{Left, another view of the arcs $\alpha_1,\alpha_2,\alpha_3$ of attachment for the bypasses in the triangle, where the suture shown here is $\Gamma_1$. Middle, a view of $(Z_1,\gamma_1)$, obtained by attaching the contact 1-handles $h_1,h_2,h_3$ to $(M,\Gamma_1).$ Right, the attaching curves $\beta_1,\beta_2,\beta_3$ for the contact $2$-handles.}
\label{fig:bypass-setup}
\end{figure}
Recall from Section \ref{ssec:shi} that contact $1$-handle attachment has little effect on the level of closures. Specifically, a closure of a sutured manifold after a $1$-handle attachment can also be viewed naturally as a closure of the sutured manifold before the $1$-handle attachment, and the corresponding $1$-handle attachment morphism is simply the identity map. We therefore have canonical identifications
\[\SHI(-Z_i,-\gamma_i)\cong \SHI(-M,-\Gamma_i),\] for $i=1,2,3,4$, where the subscript of $\Gamma_i$ is taken mod $3$. In particular, $\SHI(-Z_4,-\gamma_4)$ is canonically identified with $\SHI(-Z_1,-\gamma_1)$.
Therefore, to prove Theorem \ref{thm:bypass}, it suffices to prove that there is an exact triangle
\begin{equation}\label{eqn:exacttriz} \xymatrix@C=-25pt@R=35pt{
\SHI(-Z_1,-\gamma_1) \ar[rr]^{H_{\beta_1}} & & \SHI(-Z_2,-\gamma_2) \ar[dl]^{H_{\beta_2}} \\
& \SHI(-Z_3,-\gamma_3), \ar[ul]^{H_{\beta_3}}& \\
} \end{equation} where $H_{\beta_i}$ is the map associated to contact $2$-handle attachment along $\beta_i$.
Recall that on the level of closures, contact $2$-handle attachment corresponds to surgery. Specifically, if $\data_i = (Y_i,R,\eta,\alpha)$ is a closure of $(Z_i,\gamma_i)$, then there is a closure of $(Z_{i+1},\gamma_{i+1})$ of the form $\data_{i+1} = (Y_{i+1},R,\eta,\alpha)$, where $Y_{i+1}$ is the result of $(\partial Z_i)$-framed surgery on $\beta_i\subset Y_i$. The map $H_{\beta_i}$ is induced by the $2$-handle cobordism map
\[I_*(W_i)_{\nu}:I_*(-Y_i|{-}R)_{-\alpha-\eta}\to I_*(-Y_{i+1}|{-}R)_{-\alpha-\eta}\] corresponding to this surgery, where $\nu$ is the usual cylindrical cobordism \[\nu=(-\alpha\sqcup -\eta)\times [0,1].\] In order to prove the exactness of the triangle \eqref{eqn:exacttriz} at $\SHI(-Z_2,-\gamma_2)$, say, it therefore suffices to prove exactness of the sequence
\begin{equation}\label{eqn:exacttriy} \xymatrix@C=18pt@R=35pt{
I_*(-Y_1|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W_1)_{\nu}} & &I_*(-Y_2|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W_2)_{\nu}} & &I_*(-Y_3|{-}R)_{-\alpha -\eta}.
} \end{equation}
Our approach is to find a closure $\data_1$ of $(Z_1,\gamma_1)$ such that the surgeries relating the $-Y_i$ above are exactly the sort one encounters in the surgery exact triangle of Theorem \ref{thm:exacttri}, as depicted in Figure \ref{fig:surgery}. Fortunately, the proof of \cite[Theorem 5.2]{bs-shm} shows that for \emph{any} closure $\data_1$,
\begin{itemize}
\item $W_1$ is the cobordism associated to $0$-surgery on some $K=\beta_1\subset-Y_1$,
\item $W_2$ is the cobordism associated to $(-1)$-surgery on a meridian $\mu_1\subset -Y_2$ of $K$,
\item $W_3$ is the cobordism associated to $(-1)$-surgery on a meridian $\mu_2\subset -Y_3$ of $\mu_1$,
\end{itemize}
as desired.
Theorem \ref{thm:exacttri} then says that there is an exact sequence of the form
\begin{equation}\label{eqn:exacttriy2} \xymatrix@C=18pt@R=35pt{
I_*(-Y_1|{-}R)_{-\alpha -\eta+ K} \ar[rr]^{I_*(W_1)_{\kappa_1}} & &I_*(-Y_2|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W_2)_{\kappa_2}} & &I_*(-Y_3|{-}R)_{-\alpha -\eta}.
} \end{equation}
Note that this sequence \eqref{eqn:exacttriy2} is subtly different from that in \eqref{eqn:exacttriy}. For one thing, the Floer group on the left is defined using the $1$-manifold $-\alpha\sqcup -\eta\sqcup K$ rather than $-\alpha\sqcup -\eta.$ On the other hand, this group depends, up to isomorphism, only on the homology class of this $1$-manifold, and we can find a closure $\data_1$ such that $K$ is null-homologous in $-Y_1$. Indeed, the construction at the beginning of Section \ref{sec:alex}, illustrated in Figure \ref{fig:surfaceclosure}, shows that for any curve (like $K=\beta_1$) in the boundary of a sutured manifold which intersects the sutures twice, we can find a closure in which this curve bounds a once-punctured torus such that the framing on the curve induced by this surface agrees with the framing induced by the boundary. Let $\data_1$ be such a closure. Then \eqref{eqn:exacttriy2} becomes the exact sequence
\begin{equation}\label{eqn:exacttriy3} \xymatrix@C=18pt@R=35pt{
I_*(-Y_1|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W_1)_{\overline{\kappa}_1}} & &I_*(-Y_2|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W_2)_{\kappa_2}} & &I_*(-Y_3|{-}R)_{-\alpha -\eta}
} \end{equation}
where $\overline{\kappa}_1$ is the cobordism given as the composition of the once-punctured torus viewed as a cobordism from $\emptyset$ to $K$ with $\kappa_1$. To deduce \eqref{eqn:exacttriy} from \eqref{eqn:exacttriy3}, it suffices to show that \begin{equation}\label{eqn:mapsequal}I_*(W_1)_\nu = I_*(W_1)_{\overline{\kappa}_1} \textrm{ and }I_*(W_2)_\nu = I_*(W_2)_{\kappa_2}.\end{equation} Recall that these maps depend only on the relative homology classes of the various $2$-dimensional cobordisms involved. Since we have chosen a closure $\data_1$ in which $K$ is null-homologous in $Y_1$ and $Y_2$ is obtained via $0$-surgery on $K$ with respect to the framing induced by the once-punctured torus providing the null-homology, we have that \[b_1(Y_1)=b_1(Y_2)-1=b_1(Y_3).\] The long exact sequence of the pair $(W_1,\partial W_1)$ shows in this case that a relative homology class in $H_2(W_1,\partial W_1)$ is determined by its boundary. Since $\nu$ and $\overline{\kappa}_1$ have the same boundary, they represent the same class, which implies the first equality in \eqref{eqn:mapsequal}; the second equality follows likewise. This proves that the triangle in \eqref{eqn:exacttriz} is exact at $\SHI(-Z_2,-\gamma_2)$. Identical arguments show that it is exact at the other groups, completing the proof of Theorem \ref{thm:bypass}.
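The step about relative classes can be spelled out via a fragment of the long exact sequence of the pair,
\[
H_2(W_1)\longrightarrow H_2(W_1,\partial W_1)\xrightarrow{\;\partial\;} H_1(\partial W_1):
\]
if the first map vanishes, which is what the Betti number count above is used to guarantee, then $\partial$ is injective, so the equality of boundaries $\partial[\nu]=\partial[\overline{\kappa}_1]$ in $H_1(\partial W_1)$ forces $[\nu]=[\overline{\kappa}_1]$ in $H_2(W_1,\partial W_1)$.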
\end{proof}
\section{Invariants of Legendrian and transverse knots}
\label{sec:leg}
In this section, we define invariants of Legendrian and transverse knots in instanton knot Floer homology. As described in the introduction, our construction is motivated by Stipsicz and V{\'e}rtesi's interpretation \cite{stipsicz-vertesi} of the Legendrian and transverse knot invariants in Heegaard Floer homology defined by Lisca, Ozsv{\'a}th, Stipsicz, and Szab{\'o} \cite{loss}, and is nearly identical to a previous construction of the authors in monopole knot Floer homology \cite{bs-legendrian}. These invariants and their properties will be important in our proof of Theorem \ref{thm:khi-fibered} in the next section, as outlined in the introduction.
Suppose $K$ is an oriented Legendrian knot in a closed contact $3$-manifold $(Y,\xi)$. Let \begin{equation*}\label{eqn:complement1}(Y(K),\Gamma_K,\xi_{K})\end{equation*} be the sutured contact manifold obtained by removing a standard neighborhood of $K$ from $(Y,\xi)$. Attaching a bypass to this knot complement along the arc $c\subset \partial Y(K)$ shown in Figure \ref{fig:bypasses2} yields the knot complement with its two meridional sutures. We refer to this operation as a \emph{Stipsicz-V{\'e}rtesi bypass attachment} as it was studied extensively by those authors in \cite{stipsicz-vertesi}. Let us denote by \begin{equation}\label{eqn:complement}(Y(K),\Gamma_\mu,\xi_{\mu,K})\end{equation} the sutured contact manifold obtained via this attachment. Inspired by \cite[Theorem 1.1]{stipsicz-vertesi}, we define the Legendrian invariant $\linvt(K)$ to be the contact invariant of this manifold.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $-\mu$ at 170 0
\pinlabel $+$ at 105 85
\pinlabel $-$ at 162 135
\pinlabel $c$ at 145 49
\endlabellist
\centering
\includegraphics[width=2.5cm]{Figures/bypasses2}
\caption{The boundary of the complement of a standard neighborhood of $K$, with dividing set $\Gamma_K$ in red. The Stipsicz-V{\'e}rtesi bypass is attached along the arc $c$. The $\pm$ indicate the regions $R_\pm(\Gamma_{K})$.}
\label{fig:bypasses2}
\end{figure}
\begin{definition} Given a Legendrian knot $K\subset (Y,\xi)$, let \[\linvt(K):=\cinvt(\xi_{\mu,K})\in\SHI(-Y(K),-\Gamma_\mu)=\KHI(-Y,K).\] This class is by construction an invariant of the Legendrian knot type of $K$.
\end{definition}
Stipsicz and V{\'e}rtesi observed in the proof of \cite[Theorem 1.5]{stipsicz-vertesi} that the sutured contact manifold $(Y(K),\Gamma_\mu,\xi_{\mu,K})$, and therefore the class $\linvt(K)$, is invariant under negative Legendrian stabilization of $K$.
This then enables us to define an invariant of transverse knots in $(Y,\xi)$ via Legendrian approximation, as below, since any two Legendrian approximations of a transverse knot are related by negative Legendrian stabilization.
\begin{definition}
Given a transverse knot $K\subset (Y,\xi)$ with Legendrian approximation $\mathcal{K}$, let
\[\kinvt(K):=\linvt(\mathcal{K})\in\KHI(-Y,K).\] This class is an invariant of the transverse knot type of $K$.
\end{definition}
\begin{remark}
\label{rmk:sv}Given a transverse knot $K\subset (Y,\xi)$ with Legendrian approximation $\mathcal{K}$, we have \[\kinvt(K)=\linvt(\mathcal{K})=\cinvt(\xi_{\mu,\mathcal{K}})=\phi^{SV}(\cinvt(\xi_\mathcal{K}))\] where \[\phi^{SV}:\SHI(-Y(K),-\Gamma_\mathcal{K})\to\KHI(-Y,K)\] is the map our theory associates to the Stipsicz-V{\'e}rtesi bypass attachment.
\end{remark}
Below, we prove some results about the Legendrian invariant $\linvt$ which will be important in Section \ref{sec:proof}. Our proofs are similar to those of analogous results in the monopole Floer setting \cite{bs-legendrian, sivek-legendrian}. First, we establish the following notation.
\begin{notation}Given a closed contact 3-manifold $(Y,\xi)$, we will denote by $Y(1)$ the sutured contact manifold \[Y(1)=(Y\ssm B^3, \Gamma_{S^1},\xi|_{Y\ssm B^3})\] obtained by removing a Darboux ball from $(Y,\xi)$, with dividing set $\Gamma_{S^1}$ consisting of a single curve on the boundary. In particular, we will write $\cinvt(Y(1))$ for the contact invariant of this sutured contact manifold.
\end{notation}
The result below is an analogue of \cite[Proposition 3.13]{bs-legendrian}.
\begin{lemma}
\label{lem:unknot}
Suppose $U\subset (Y,\xi)$ is a Legendrian unknot with $tb(U)=-1$ contained inside a Darboux ball in $(Y,\xi)$. Then there is an isomorphism \[\SHI(-Y(1))\to\KHI(-Y,U)\] which sends $\cinvt(Y(1))$ to $\linvt(U)$.
\end{lemma}
\begin{proof}
Attaching a contact $1$-handle to $Y(1)$ results in a sutured contact manifold of the form \[(Y(U),\Gamma_\mu,\xi').\] The associated contact $1$-handle attachment isomorphism \begin{equation}\label{eqn:isomorphism}\SHI(-Y(1))\to\SHI(-Y(U),-\Gamma_\mu)=\KHI(-Y,U)\end{equation} identifies $\cinvt(Y(1))$ with $\cinvt(\xi')$. It thus suffices to check that $\xi'$ is isotopic to the contact structure $\xi_{\mu,U}$ used to define $\linvt(U)$, obtained by removing a standard neighborhood of $U$ and attaching a Stipsicz-V{\'e}rtesi bypass. Since we can arrange that this operation and the contact $1$-handle attachment both take place in a Darboux ball, it suffices to check this in the case \[(Y,\xi)=(S^3,\xi_{std}).\] But in this case, $(Y(U),\Gamma_\mu)$ is a solid torus with two longitudinal sutures. As there is a unique isotopy class of tight contact structures on the solid torus with this dividing set, we need only check that $\xi'$ and $\xi_{\mu,U}$ are both tight.
The class $\cinvt(Y(1))$ is nonzero since $\xi_{std}$ is Stein fillable \cite[Theorem 1.4]{bs-shi}. It follows that $\cinvt(\xi')$ is nonzero as well since the isomorphism \eqref{eqn:isomorphism} identifies this class with $\cinvt(Y(1))$. This implies that $\xi'$ is tight by Theorem \ref{thm:zero-overtwisted}.
The fact that $\xi_{\mu,U}$ is tight follows from the fact that the Heegaard Floer Legendrian invariant of the $tb=-1$ unknot in $(S^3,\xi_{std})$ is nonzero, since this invariant agrees with the Heegaard Floer contact invariant of $\xi_{\mu,U}$ according to \cite[Theorem 1.1]{stipsicz-vertesi}. (One can also prove tightness directly---i.e., without using Heegaard Floer homology---though it takes more room.)
\end{proof}
The result below follows immediately from Theorem \ref{thm:legendrian-surgery}.
\begin{lemma}
\label{lem:leg}
Let $K$ and $S$ be disjoint Legendrian knots in $(Y,\xi)$, and let $(Y',\xi')$ be the contact manifold obtained by contact $(+1)$-surgery on $S$. If $K$ has image $K'$ in $Y'$ then there is a map \[\KHI(-Y,K)\to\KHI(-Y',K')\] which sends $\linvt(K)$ to $\linvt(K')$.\qed
\end{lemma}
The following is an analogue of \cite[Theorem 5.2]{sivek-legendrian} and \cite[Corollary 3.15]{bs-legendrian}.
\begin{lemma}
\label{lem:plusone}Suppose $K$ is a Legendrian knot in $(Y,\xi)$ and that $(Y',\xi')$ is the result of contact $(+1)$-surgery on $K$. Then there is a map \[\KHI(-Y,K)\to\SHI(-Y'(1))\] which sends $\linvt(K)$ to $\cinvt(Y'(1))$.
\end{lemma}
\begin{proof}
Let $S$ be a Legendrian pushoff of $K$ with an extra positive twist around $K$, as in Figure \ref{fig:legpushoff}.
\begin{figure}[ht]
\labellist
\tiny \hair 2pt
\pinlabel $K$ at 15 10
\pinlabel $S$ at 15 28
\endlabellist
\centering
\includegraphics[width=3.3cm]{Figures/legpushoff}
\caption{$S$ is obtained by adding a positive twist to a Legendrian pushoff of $K$.}
\label{fig:legpushoff}
\end{figure}
As in the proof of \cite[Proposition 1]{ding-geiges-handles}, the image of $K$ in contact $(+1)$-surgery on $S$ is a $tb=-1$ Legendrian unknot $U\subset(Y',\xi')$. By Lemma \ref{lem:leg}, there is a map \[\KHI(-Y,K)\to\KHI(-Y',U)\] which sends $\linvt(K)$ to $\linvt(U)$. But by Lemma \ref{lem:unknot}, we also have an isomorphism \[\KHI(-Y',U)\to\SHI(-Y'(1))\] which sends $\linvt(U)$ to $\cinvt(Y'(1)).$ Composing these two maps gives the desired map.
\end{proof}
The lemmas above culminate in Theorem \ref{thm:tb} below, which is an analogue of \cite[Corollary 5.4]{sivek-legendrian} and \cite[Corollary 3.16]{bs-legendrian}. The exact triangle argument used in its proof was inspired by Lisca and Stipsicz's proof of \cite[Theorem 1.1]{lisca-stipsicz}. We will use Theorem \ref{thm:tb} in our proof of Theorem \ref{thm:nonzero} in the next section, which states that $\kinvt$ is nonzero for transverse bindings of open books.
\begin{theorem}
\label{thm:tb}
Suppose $K$ is a Legendrian knot in $(S^3,\xi_{std})$ with 4-ball genus $g_4(K)>0$ such that $tb(K)=2g_4-1$. Then $\linvt(K)\neq 0.$
\end{theorem}
\begin{proof}
Choose a Darboux ball disjoint from $K$. Let $S^3(1)$ denote the sutured contact manifold obtained by removing this ball from $(S^3,\xi_{std})$. Note that contact $(+1)$-surgery on $K\subset S^3(1)$ results in a sutured contact manifold \[S^3_{tb+1=2g_4}(K)(1),\] which is topologically the result of $2g_4$-surgery on $K$ with respect to its Seifert framing, minus a ball. We have that $\cinvt(S^3(1))$ is nonzero since $\xi_{std}$ is Stein fillable \cite[Theorem 1.4]{bs-shi}. We will use the surgery exact triangle and the adjunction inequality to show that the map \begin{equation}\label{eqn:injective}G_K:\SHI(-S^3(1))\to\SHI(-S^3_{2g_4}(K)(1))\end{equation} defined at the end of Section \ref{ssec:contact} is injective. But we know that \[G_K(\cinvt(S^3(1)))=\cinvt(S^3_{2g_4}(K)(1))\] by Theorem \ref{thm:legendrian-surgery2}. This will show that $\cinvt(S^3_{2g_4}(K)(1))$ is nonzero as well, which will then imply that $\linvt(K)$ is nonzero by Lemma \ref{lem:plusone}, proving the theorem.
Let $\data = (Z,R,\eta,\alpha)$ be a closure of $S^3(1)$ and let \begin{equation*}
\data_1=(Z_1,R,\eta,\alpha)\textrm{ and }
\data_0=(Z_0,R,\eta,\alpha)
\end{equation*} be the tuples obtained from $\data$ by performing $1$- and $0$-surgeries on $K\subset Z$ with respect to its contact framing. These tuples are naturally closures of \[S^3_{2g_4}(K)(1)\textrm{ and } S^3_{2g_4-1}(K)(1),\] respectively.
By Theorem \ref{thm:exacttri}, there is an exact triangle
\[ \xymatrix@C=-35pt@R=30pt{
I_*(-Z|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W)_{{\kappa}}} & &I_*(-Z_1|{-}R)_{-\alpha -\eta} \ar[dl]^{I_*(W_{0})_{{\kappa}_0}} \\
&I_*(-Z_0|{-}R)_{-\alpha -\eta+ \mu}\ar[ul]^{I_*(W_{1})_{{\kappa}_1}}, & \\
} \]
where $\mu$ is the curve in $-Z_0$ corresponding to the meridian of $ K\subset -Z$, as in Figure \ref{fig:meridian}.
Note that $W_{1}$ is topologically the same as the cobordism from $Z$ to $Z_0$ obtained from $Z\times[0,1]$ by attaching a $(2g_4-1)$-framed $2$-handle along $K\subset Z\times\{1\}$ with respect to its Seifert framing. This cobordism contains a closed surface $\overline\Sigma$ of genus $g_4$ and self-intersection \[\overline\Sigma\cdot\overline\Sigma=2g_4-1,\] obtained by capping off the surface $\Sigma\subset Z\times[0,1]$ bounded by $K$ which attains \[g(\Sigma)=g_4(K)\] with the core of the $2$-handle. The surface $\overline\Sigma$ violates the adjunction inequality \cite[Theorem 1.1]{km-gauge2}, which implies that \[I_*(W_{1})_{{\kappa}_1}\equiv 0.\]
Therefore, \[I_*(W)_{{\kappa}}:I_*(-Z|{-}R)_{-\alpha -\eta} \to I_*(-Z_1|{-}R)_{-\alpha -\eta} \] is injective. But this is the map which defines the map $G_K$ in \eqref{eqn:injective}. This completes the proof of Theorem \ref{thm:tb} as discussed above.
\end{proof}
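The numerical bookkeeping in this proof is elementary: contact $(+1)$-surgery on a Legendrian knot with $tb=2g_4-1$ is topological $(tb+1)=2g_4$-surgery, and the capped-off surface has self-intersection $2g_4-1 > 2g_4-2 = 2g(\overline\Sigma)-2$, so the adjunction bound fails for every $g_4>0$. A quick sanity check of this arithmetic (helper names are ours, not from the paper; the inequality below is our reading of the relevant adjunction constraint):

```python
def surgery_framing(tb):
    """Topological framing of contact (+1)-surgery: contact framing + 1 = tb + 1."""
    return tb + 1

def violates_adjunction(genus, self_intersection):
    """The adjunction constraint used here requires S.S <= 2g(S) - 2 for the
    cobordism map to be nonzero; return True if the surface violates it."""
    return self_intersection > 2 * genus - 2

for g4 in range(1, 10):
    tb = 2 * g4 - 1                       # hypothesis of the theorem
    assert surgery_framing(tb) == 2 * g4  # the surgery coefficient in the proof
    # capped surface in W_1: genus g4, self-intersection 2g4 - 1
    assert violates_adjunction(g4, 2 * g4 - 1)
```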
\section{Proof of Theorem \ref{thm:khi-fibered}}
\label{sec:proof}
In this section we prove Theorem \ref{thm:khi-fibered}, restated below, as outlined in the introduction.
{
\renewcommand{\thetheorem}{\ref{thm:khi-fibered}}
\begin{theorem}
Suppose $K$ is a genus $g>0$ fibered knot in $Y\not\cong \#^{2g}(S^1\times S^2)$ with fiber $\Sigma$. Then $\KHI(Y,K,[\Sigma],g-1)\neq 0$.
\end{theorem}
\addtocounter{theorem}{-1}
}
Suppose for the rest of this section that $K$ is a fibered knot as in the hypothesis of Theorem \ref{thm:khi-fibered}. Let $(\Sigma,h)$ be an open book corresponding to the fibration of $K$ with $g(\Sigma)=g,$ supporting a contact structure $\xi$ on $Y$.
We begin with some preliminaries.
\begin{definition}
Given a properly embedded arc $a\subset \Sigma$, we say that $h$ sends $a$ \emph{to the left at an endpoint $p$} if $h(a)$ is not isotopic to $a$ and if, after isotoping $h(a)$ so that it intersects $a$ minimally, $h(a)$ is to the left of $a$ near $p$, as shown in Figure \ref{fig:left2}.
\end{definition}
This definition and the one below are due to Honda, Kazez, and Mati{\'c} \cite{hkm-rv}.
\begin{definition} $h$ is \emph{not right-veering} if it sends some arc to the left at one of its endpoints.
\end{definition}
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $a$ at 47 28
\pinlabel $h(a)$ at 19 28
\pinlabel $p$ at 41 -3
\pinlabel $\Sigma$ at 65 57
\endlabellist
\centering
\includegraphics[width=2cm]{Figures/left}
\caption{$h$ sends $a$ to the left at $p$.
}
\label{fig:left2}
\end{figure}
\begin{remark}For the proof of Theorem \ref{thm:khi-fibered}, we may assume without loss of generality that the monodromy $h$ is not right-veering.
Indeed, note that one of $h$ or $h^{-1}$ is not right-veering since otherwise \[h=\operatorname{id} \textrm{ and } Y\cong \#^{2g}(S^1\times S^2).\] If $h$ is right-veering then we use the fact that $\KHI$ is invariant under reversing the orientation of $Y$ and consider instead the knot $K\subset -Y$ with open book $(\Sigma,h^{-1})$.
We will make clear below where we are relying on the assumption that $h$ is not right-veering; most of the results in this section do not require it.
\end{remark}
\begin{remark}Since $\KHI$ is invariant under reversing the orientation of $Y$,
it suffices to prove that \[\KHI(-Y,K,[\Sigma],g-1)\neq 0.\] That is what we do in this section.
\end{remark}
With these preliminaries out of the way, we may proceed to the proof of Theorem \ref{thm:khi-fibered}.
As the binding of the open book $(\Sigma,h)$, the knot $K$ is naturally a transverse knot in $(Y,\xi)$. Let $\mathcal{K}_0^-$ denote the Legendrian approximation of $K$ realized on a page of the open book $(S,f)$ obtained by positively stabilizing $(\Sigma,h)$ as in Figure \ref{fig:surface3}. In particular, \[f=h\circ D_\gamma\] is the composition of $h$ with a positive Dehn twist around the curve $\gamma$ shown in the figure.
Note that \begin{equation}\label{eqn:tbneg1}tb_\Sigma(\mathcal{K}_0^-)=-1.\end{equation} We will see later (Remark \ref{rmk:tbneg1}) why this is important.
Let
$\mathcal{K}_1^{\pm}$ denote the positive/negative Legendrian stabilization of $\mathcal{K}_0^-$. In particular, $\mathcal{K}_1^-$ is also a Legendrian approximation of $K$. Let \[(Y(K),\Gamma_i,\xi^\pm_i)\] denote the contact manifold with convex boundary and dividing set $\Gamma_i$ obtained by removing a standard neighborhood of $\mathcal{K}_i^\pm$ from $Y$.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\Sigma$ at 97 249
\pinlabel $S$ at 425 249
\pinlabel $\mathcal{K}_0^-$ at 610 93
\tiny
\pinlabel $\gamma$ at 396 91
\endlabellist
\centering
\includegraphics[width=6.8cm]{Figures/surface3}
\caption{Left, the fiber surface for $K$. Right, a stabilization of the open book $(\Sigma,h)$ obtained by attaching a $1$-handle and composing $h$ with a positive Dehn twist around the curve $\gamma$ in red, with $\mathcal{K}=\mathcal{K}_0^-$ realized on the page $S$.
}
\label{fig:surface3}
\end{figure}
Recall from the previous section (Remark \ref{rmk:sv}) that the transverse invariant $\kinvt(K)$ is given by \[\kinvt(K) = \phi_0^{SV}(\cinvt(\xi_0^-)) = \phi_1^{SV}(\cinvt(\xi_1^-))\in\KHI(-Y,K)\] where \[\phi_i^{SV}:\SHI(-Y(K),-\Gamma_i)\to\SHI(-Y(K),-\Gamma_\mu)=\KHI(-Y,K)\] is the map induced by a Stipsicz-V{\'e}rtesi bypass attachment. In the case $i=0$, this bypass is attached along the arc $c$ shown in Figure \ref{fig:bypasses}. Stipsicz and V{\'e}rtesi also showed in the proof of \cite[Theorem 1.5]{stipsicz-vertesi} that \[(Y(K),\Gamma_1,\xi^+_1)\textrm{ is obtained from }(Y(K),\Gamma_0,\xi^-_0)\] by attaching a bypass along the arc $p$ shown in Figure \ref{fig:bypasses}. Letting
\[\phi_0^+:\SHI(-Y(K),-\Gamma_0)\to\SHI(-Y(K),-\Gamma_1)\] denote the associated bypass attachment map, as in the introduction, we have that \[\phi_0^+(\cinvt(\xi_0^-))=\cinvt(\xi_1^+).\]
Recall from the introduction that our proof of Theorem \ref{thm:khi-fibered} begins with the following lemma.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $-\mu$ at 170 0
\pinlabel $+$ at 105 85
\pinlabel $-$ at 162 135
\pinlabel $p$ at 75 115
\pinlabel $c$ at 145 49
\tiny
\pinlabel $\partial\Sigma$ at 447 131
\endlabellist
\centering
\includegraphics[width=5.5cm]{Figures/bypasses}
\caption{The boundary of the complement of a standard neighborhood of $\mathcal{K}_0^-$ with dividing set $\Gamma_0$ in red. The Stipsicz-V{\'e}rtesi bypass is attached along the arc $c$. Attaching a bypass along $p$ yields the complement of a standard neighborhood of $\mathcal{K}_1^+$. Right, the blue curve is the intersection of the boundary of the fiber surface $\Sigma$ of $K$ with this torus.}
\label{fig:bypasses}
\end{figure}
{
\renewcommand{\thetheorem}{\ref{lem:bypassclaim}}
\begin{lemma}
If the monodromy $h$ is not right-veering then $\phi_0^+(\cinvt(\xi_0^-))=\cinvt(\xi_1^+)=0.$
\end{lemma}
\addtocounter{theorem}{-1}
}
This in turn follows immediately from the lemma below, by Theorem \ref{thm:zero-overtwisted}.
{
\renewcommand{\thetheorem}{\ref{lem:otstab}}
\begin{lemma}
If the monodromy $h$ is not right-veering then $\xi_1^+$ is overtwisted.
\end{lemma}
\addtocounter{theorem}{-1}
}
\begin{remark}
Baker and Onaran proved in \cite[Theorem 5.2.3]{baker-onaran} a similar result under the strictly stronger assumption that $(\Sigma,h)$ is the negative stabilization of another open book.
\end{remark}
\begin{proof}[Proof of Lemma \ref{lem:otstab}]
Suppose $h$ is not right-veering. Let $a$ be a properly embedded arc in $\Sigma$ with endpoints $p$ and $q$ such that $h$ sends $a$ to the left at $p$. We can arrange that the $1$-handle added to $\Sigma$ in forming the stabilization $(S,f)$ and the curve $\gamma$ are as shown in the middle of Figure \ref{fig:surface}. Note in particular that $a$ intersects the Legendrian realization $\mathcal{K}_0^-\subset S$ in one point and with negative sign.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\Sigma$ at 106 258
\pinlabel $S$ at 434 257
\pinlabel $S'$ at 769 254
\pinlabel $a$ at 233 118
\pinlabel $\mathcal{K}_0^-$ at 620 100
\pinlabel $\mathcal{K}_1^+$ at 955 98
\tiny
\pinlabel $p$ at 208 68
\pinlabel $q$ at 118 68
\pinlabel $\gamma$ at 396 96
\endlabellist
\centering
\includegraphics[width=11cm]{Figures/surface}
\caption{Left, the fiber surface for $K$ and the arc $a$ which is sent to the left at $p$ by the monodromy $h$. Middle, the stabilization $(S,f)$ with $\mathcal{K}_0^-$ realized on the page. Right, a further stabilization with $\mathcal{K}_1^+$ realized on the page $S'$.
}
\label{fig:surface}
\end{figure}
Given this setup, there is a standard way to realize $\mathcal{K}_1^+$ on a page of the open book $(S',f')$ obtained by positively stabilizing $(S,f)$, as shown in Figure \ref{fig:posstab}. Namely, $\mathcal{K}_1^+$ is given by the curve obtained by pushing $\mathcal{K}_0^-$ along the arc $a$ and then over the $1$-handle that was added in the stabilization. The right of Figure \ref{fig:surface} provides another view of this realization $\mathcal{K}_1^+\subset S'$.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $S$ at 15 34
\pinlabel $S'$ at 178 34
\pinlabel $a$ at 78 29
\pinlabel $\mathcal{K}_0^-$ at 122 27
\pinlabel $\mathcal{K}_1^+$ at 285 27
\tiny
\pinlabel $p$ at 72 -1
\pinlabel $q$ at 72 50
\endlabellist
\centering
\includegraphics[width=8.7cm]{Figures/posstab}
\caption{Left, the Legendrian knot $\mathcal{K}_0^-$ on a page of the open book $(S,f)$. Right, the positive Legendrian stabilization $\mathcal{K}_1^+$ on a page of the stabilized open book $(S',f')$, where $f'$ is the composition of $f$ with the positive Dehn twist around the curve in red.
}
\label{fig:posstab}
\end{figure}
To show that $\xi_1^+$ is overtwisted, recall from Example \ref{eq:legendrian-complement} that \[(S',P=S'\ssm \nu(\mathcal{K}_1^+), f'|_{P})\] is a partial open book for the complement $(Y(K),\Gamma_1,\xi_1^+).$ Note that $a$ is an arc in $P$. Moreover, $f'|_{P}$ sends $a$ to the left at $p$ since $h$ does. This means that $f'|_{P}$ is not right-veering. Work of Honda, Kazez, and Mati{\'c} \cite[Proposition 4.1]{hkm-sutured} then says that $\xi_1^+$ is overtwisted.
\end{proof}
We now prove the remaining theorems which combine with Lemma \ref{lem:bypassclaim} to prove Theorem \ref{thm:khi-fibered}, as outlined in the introduction. First, we have the following.
{
\renewcommand{\thetheorem}{\ref{thm:ttopgrading}}
\begin{theorem}
$\kinvt(K)\in\KHI(-Y,K,[\Sigma],g)$.
\end{theorem}
\addtocounter{theorem}{-1}
}
\begin{proof}
We will first prove that \begin{equation}\label{eqn:otheralex}\cinvt(\xi_0^-)\in \SHI(-Y(K),-\Gamma_0,[\Sigma],g).\end{equation}
As in Example \ref{eq:legendrian-complement}, a partial open book for the complement $(Y(K),\Gamma_0,\xi_0^-)$ is given by \[(S,P=S\ssm \nu(\mathcal{K}_0^-), f|_P).\] Let $\mathbf{c} = \{c_1,\dots,c_n\}$ be a basis for $P$ such that the endpoints of the basis arcs intersect $\gamma$ as shown in the middle of Figure \ref{fig:handleseifert}. Let \[\gamma_1,\dots,\gamma_n\] be the corresponding curves on the boundary of $H_{S}=S\times[-1,1]$ as defined in \eqref{eqn:basishandle}, so that \[(Y(K),\Gamma_0,\xi_0^-)\] is obtained from \[(H_{S},\Gamma_{S},\xi_{S})\] by attaching contact $2$-handles along the $\gamma_i$. By Definition \ref{def:contact}, the element $\cinvt(\xi_0^-)$ is then the image of the generator \[\mathbf{1}=\cinvt(\xi_{S})\in\SHI(-H_{S},-\Gamma_{S})\] under the map associated to these $2$-handle attachments.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $\Sigma$ at 218 33
\pinlabel $S$ at 538 33
\pinlabel $K$ at 33 52
\pinlabel $\mathcal{K}_0^-$ at 360 52
\pinlabel $\gamma$ at 520 76
\tiny
\pinlabel $c_i$ at 470 5
\pinlabel $\gamma_i$ at 790 5
\endlabellist
\centering
\includegraphics[width=12cm]{Figures/handleseifert}
\caption{Left, a portion of $\Sigma$ near its boundary with the knot $K$. Middle, the stabilized surface $S$ with $\mathcal{K}_0^-$. The basis arcs $c_i$ are shown in green. Right, $(H_{S},\Gamma_{S})$ with a copy of the Seifert surface $\Sigma$ properly embedded in $H_{S}$ as shown in blue. Note that this copy $\Sigma$ is disjoint from the curves $\gamma_i$. Here, the red curves represent $\Gamma_S$.
}
\label{fig:handleseifert}
\end{figure}
We claim that \begin{equation}\label{eqn:calex}\cinvt(\xi_{S})\in\SHI(-H_{S},-\Gamma_{S},[\Sigma],g).\end{equation}
To prove this, we let $\data_{S} = (Z,R,\eta,\alpha)$ be a closure of $(H_{S},\Gamma_{S})$ adapted to a properly embedded copy of $\Sigma$ in $H_S$ as shown on the right of Figure \ref{fig:handleseifert}. Let us assume that this closure is formed using an annular auxiliary surface. That is, to form $Z$ we first form a preclosure $M'$ by gluing a thickened annulus to $H_S$ in such a way that $\Sigma$ extends to a surface $\Sigma'\subset M'$, obtained from $\Sigma$ by adding a $1$-handle, as shown in the lower left of Figure \ref{fig:torusdifference}.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $B$ at 510 145
\endlabellist
\centering
\includegraphics[width=7.2cm]{Figures/torusdifference2}
\caption{Top left, the product $(H_{\Sigma},\Gamma_\Sigma)$ with a copy of the surface $\Sigma$ shown in blue and properly embedded. Top right, the product $(H_{S},\Gamma_{S})$. Bottom left, the preclosure $M'$ of $(H_{S},\Gamma_{S})$ formed by gluing on a thickened annulus. The surface $\Sigma$ extends by a $1$-handle to the surface $\Sigma'$ shown in blue, with boundary curves $c_\pm$. Bottom right, $\Sigma'$ is isotopic to the subsurface of $R=\partial_+M'$ shown in blue. This subsurface differs from $R$ by the white annulus $B$.
}
\label{fig:torusdifference}
\end{figure}
Then $R$ is defined to be $\partial_+M'$ and $Z$ is formed by gluing $R\times[1,3]$ to $M'$ as specified by a diffeomorphism \[\varphi:\partial_+M'\to\partial_-M'\] which identifies the two boundary components \[c_\pm\subset \partial M'\] of $\Sigma'$. The surface $\Sigma'$ caps off in $Z$ to a closed surface $\overline\Sigma$, which is the union of $\Sigma'$ with the annulus \[A=c_+\times [1,3]\subset R\times[1,3].\]
We claim that $\overline\Sigma$ and $R$ differ in $H_2(Z)$ by a torus. Indeed, if we remove the annulus $A$ from $\overline\Sigma$ we recover $\Sigma'$, which is isotopic to the subsurface of $R$ shown in blue in the lower right of Figure \ref{fig:torusdifference}. The union of this subsurface with the white annulus $B$ is equal to $R$. So, the difference $\overline\Sigma-R$ is homologous to the torus $T=A\cup-B$ obtained as the union of these two annuli. Note that \[2g(R)-2=2g(\overline\Sigma)-2=2g.\] By definition, then, the class $\cinvt(\xi_{S},\data_{S})$ is an element of the $2g$-eigenspace of the operator \[\mu(R):I_*(-Z)_{-\alpha-\eta}\to I_*(-Z)_{-\alpha-\eta}.\] Corollary \ref{cor:grading-shift} then implies that it is also an element of the $2g$-eigenspace of the operator \[\mu(\overline\Sigma):I_*(-Z|{-}R)_{-\alpha-\eta}\to I_*(-Z|{-}R)_{-\alpha-\eta}\] since $\overline\Sigma$ and $R$ differ by a torus. But the latter eigenspace is, by definition, $\SHI(-\data_{S},[\Sigma],g)$. This implies that $\cinvt(\xi_{S})$ lies in Alexander grading $g$, as claimed in \eqref{eqn:calex}.
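The identity $2g(R)-2=2g(\overline\Sigma)-2=2g$ used above is pure Euler-characteristic bookkeeping: a $1$-handle drops $\chi$ by one, and capping the two boundary circles with an annulus leaves $\chi$ unchanged. A minimal sketch of this computation (helper names are ours):

```python
def euler_with_boundary(genus, boundary_components):
    # chi of a compact orientable surface with boundary
    return 2 - 2 * genus - boundary_components

for g in range(1, 6):
    chi_seifert = euler_with_boundary(g, 1)        # Sigma: genus g, one boundary circle
    chi_prime = chi_seifert - 1                    # attaching a 1-handle lowers chi by 1
    assert chi_prime == euler_with_boundary(g, 2)  # Sigma': still genus g, boundary c_+-
    chi_capped = chi_prime + 0                     # gluing the annulus A (chi = 0)
    genus_capped = (2 - chi_capped) // 2           # the resulting surface is closed
    assert genus_capped == g + 1                   # g(capped Sigma) = g + 1
    assert 2 * genus_capped - 2 == 2 * g           # the eigenvalue used in the proof
```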
To show that $\cinvt(\xi_0^-)$ lies in Alexander grading $g$, as in \eqref{eqn:otheralex}, recall that on the level of closures, this class is the image of $\cinvt(\xi_S,\data_S)$ under the map induced by the cobordism associated to $\partial H_S$-framed surgeries on $\gamma_1,\dots,\gamma_n\subset Z$. The right of Figure \ref{fig:handleseifert} shows that these curves do not intersect the surface $\Sigma$ in $H_S$, which means that they are disjoint from the capped off surface $\overline\Sigma\subset Z$. This means that the capped off surfaces defining the Alexander gradings on the two ends of this cobordism are isotopic in the cobordism. Lemma \ref{lem:commute} then implies the cobordism map respects the Alexander grading, which proves the claim.
To complete the proof of Theorem \ref{thm:ttopgrading}, we need only show that the bypass attachment map $\phi_0^{SV}$ preserves Alexander grading, keeping in mind that \[\kinvt(K)=\phi_0^{SV}(\cinvt(\xi_0^-)).\] The argument for this is the same as above. The map $\phi_0^{SV}$ is the composition of the maps associated to attaching a $1$-handle with feet at the endpoints of the arc $c$ in Figure \ref{fig:bypasses} and then attaching a $2$-handle along the union of $c$ with an arc passing once over this 1-handle. Call this union $s$. Suppose $\data=(Z,R,\alpha,\eta)$ is a closure of the manifold obtained after the $1$-handle attachment, adapted to $\Sigma$. Then it is also a closure of $(Y(K),\Gamma_0)$, and, on the level of closures, $\phi_0^{SV}$ is induced by the cobordism associated to surgery on $s\subset Z$. The right of Figure \ref{fig:bypasses} shows that the arc $c$ does not intersect the surface $\Sigma\subset Y(K)$, which implies that the surgery curve $s$ is disjoint from the capped off surface $\overline\Sigma\subset Z$. This cobordism map therefore respects the Alexander grading, as argued above, which proves the claim.
\end{proof}
\begin{remark}
\label{rmk:tbneg1} It was important at the end of the proof of Theorem \ref{thm:ttopgrading} that the arc $c$ was disjoint from the surface $\Sigma\subset Y(K)$. This claim relied on the depiction of $\partial \Sigma$ in Figure \ref{fig:bypasses}, so it behooves us to justify this depiction. This is where the observation \[tb_\Sigma(\mathcal{K}_0^-)=-1\] in \eqref{eqn:tbneg1} comes into play. First, this condition implies that \[tb_\Sigma(\mathcal{K}_1^+)=-2.\] These two Thurston-Bennequin calculations then imply that each component of $\Gamma_0$ and $\Gamma_1$ intersects $\partial \Sigma$ in $1$ and $2$ points, respectively. This forces $\partial\Sigma$ to be as indicated in Figure \ref{fig:bypasses}.
\end{remark}
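The Thurston-Bennequin bookkeeping in this remark is the standard effect of Legendrian stabilization, which lowers $tb$ by one (and changes $rot$ by $\pm 1$). A small sketch of this count (helper names are ours; $rot$ is set to $0$ purely for illustration, since only $tb$ is used here):

```python
def stabilize(tb, rot, sign):
    """Legendrian stabilization: tb drops by 1, rot changes by sign (+1 or -1)."""
    return tb - 1, rot + sign

tb0, rot0 = -1, 0                        # tb_Sigma(K_0^-) = -1, as in (eqn:tbneg1)
tb1, rot1 = stabilize(tb0, rot0, +1)     # K_1^+ is a positive stabilization
assert tb1 == -2                         # tb_Sigma(K_1^+) = -2, as in the remark
# Each dividing curve on the boundary of a standard neighborhood meets the
# Seifert-framed longitude |tb_Sigma| times, giving the 1 and 2 intersection
# points for components of Gamma_0 and Gamma_1 recorded above.
assert (abs(tb0), abs(tb1)) == (1, 2)
```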
Next, we prove the following analogue of Vela-Vick's theorem \cite{vv} that the connected binding of an open book has nonzero transverse invariant in Heegaard Floer homology.
{
\renewcommand{\thetheorem}{\ref{thm:nonzero}}
\begin{theorem}
$\kinvt(K)$ is nonzero.
\end{theorem}
\addtocounter{theorem}{-1}
}
\begin{proof}
We first show that there is \emph{some} genus $g$ fibered knot $K'$ with open book $(\Sigma,\phi)$ such that the corresponding transverse invariant $\kinvt(K')$ is nonzero. We then use the surgery exact triangle to show that there is an isomorphism \begin{equation}
\label{eqn:isoknots}\KHI(-Y',K',[\Sigma],g)\xrightarrow{\cong} \KHI(-Y'',K'',[\Sigma],g)\end{equation} for \emph{any} two such fibered knots which sends $\kinvt(K')$ to $\kinvt(K'').$ Theorem \ref{thm:nonzero} will follow.
For the first claim, let $K'$ be the braid with two strands and $2g+1$ positive crossings. Note that $K'$ has genus $g$. As a positive braid, $K'$ is fibered with $g_4(K')=g$, with open book supporting the tight contact structure on $S^3$. As the transverse binding of this open book, we have that \[sl(K') = 2g-1.\] Let $\mathcal{K}'$ be the Legendrian representative of $K'$ shown in Figure \ref{fig:legbraid}. We have that \[tb(\mathcal{K}')=2g-1=2g_4(K')-1 \text{ and } rot(\mathcal{K}')=0.\] Theorem \ref{thm:tb} therefore implies that \[\linvt(\mathcal{K}')\neq 0.\] We claim that $\mathcal{K}'$ is a Legendrian approximation of $K'$. For this, simply note that the transverse pushoff $K''$ of $\mathcal{K}'$ has self-linking number \[sl(K'')=tb(\mathcal{K}')+rot(\mathcal{K}')=2g-1.\] The transverse simplicity of torus knots, proven by Etnyre and Honda \cite{etnyre-honda}, then implies that $K''$ is transversely isotopic to the transverse binding $K'$. In other words, $\mathcal{K}'$ is a Legendrian approximation of $K'$. It then follows that \[\kinvt(K'):=\linvt(\mathcal{K}')\] is nonzero, as desired.
\begin{figure}[ht]
\labellist
\tiny
\pinlabel $2g{+}1$ at 60 45
\pinlabel $\cdots$ at 64.8 31.5
\endlabellist
\centering
\includegraphics[width=4.5cm]{Figures/legbraid}
\caption{A Legendrian approximation $\mathcal{K}'$ of the torus braid $K'=T(2g+1,2)$.
}
\label{fig:legbraid}
\end{figure}
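The classical invariants of the $(2,2g+1)$-torus braid quoted above follow from standard braid formulas: a positive braid knot closure on $n$ strands with $w$ crossings has Seifert genus $(w-n+1)/2$, and its transverse closure has self-linking number $w-n$. A quick check of these numbers for the family used in the proof (helper names are ours):

```python
def positive_braid_genus(strands, crossings):
    """Seifert genus of the knot closure of a positive braid: 2g = w - n + 1."""
    return (crossings - strands + 1) // 2

def braid_self_linking(strands, writhe):
    """Self-linking number of the transverse closure of a braid: sl = w - n."""
    return writhe - strands

for g in range(1, 8):
    n, w = 2, 2 * g + 1                    # the (2, 2g+1)-torus braid K'
    assert positive_braid_genus(n, w) == g
    sl = braid_self_linking(n, w)
    assert sl == 2 * g - 1                 # sl(K') = 2g - 1
    tb, rot = 2 * g - 1, 0                 # the representative of Figure fig:legbraid
    assert tb == 2 * positive_braid_genus(n, w) - 1   # hypothesis of Theorem thm:tb
    assert tb + rot == sl                  # pushoff K'' has sl = 2g - 1
```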
Suppose next that $K'\subset Y'$ is a genus $g$ fibered knot with open book $(\Sigma,\phi)$. Let $K''\subset Y''$ be the fibered knot corresponding to the open book $(\Sigma,\phi\circ D_{\beta}^{-1})$, where $D_{\beta}$ is a positive Dehn twist around a nonseparating curve $\beta\subset \Sigma$. We first prove the isomorphism \eqref{eqn:isoknots} for this pair.
Note that $\beta$ can be viewed as a Legendrian knot in the contact structure on $Y'$ corresponding to $(\Sigma,\phi)$, with contact framing equal to the framing induced by $\Sigma$. In this case, $Y''$ is the result of contact $(+1)$-surgery on $\beta$. Let \[(Y'(K'),\Gamma_\mu,\xi_{\mu,\mathcal{K}'})\textrm{ and } (Y''(K''),\Gamma_\mu,\xi_{\mu,\mathcal{K}''})\] be the sutured contact manifolds obtained by removing standard neighborhoods of Legendrian approximations $\mathcal{K}'$ and $\mathcal{K}''$ of $K'$ and $K''$ and attaching Stipsicz-V{\'e}rtesi bypasses, as described in the previous section. Then $\beta$ is naturally a Legendrian knot in the first of these sutured contact manifolds, as $\Sigma$ is a subsurface of a page of a partial open book for this contact manifold. The second sutured contact manifold above is the result of contact $(+1)$-surgery on $\beta$. Moreover, we have that \begin{equation}\label{eqn:Tcontact}\kinvt(K'):=\theta(\xi_{\mu,\mathcal{K}'})\textrm{ and }\kinvt(K''):=\theta(\xi_{\mu,\mathcal{K}''})\end{equation} as per Remark \ref{rmk:sv}.
Let $\data = (Z,R,\alpha,\eta)$ be a closure of $(Y'(K'),\Gamma_\mu)$ adapted to $\Sigma$. Let
\[\data_1=(Z_1,R,\eta,\alpha)\textrm{ and }
\data_0=(Z_0,R,\eta,\alpha)
\] be the tuples obtained from $\data$ by performing $1$- and $0$-surgery on $\beta$ with respect to the framing induced by $\Sigma$. These are naturally closures of the sutured manifolds
\[(Y''(K''),\Gamma_\mu) =(Y'(K')_1(\beta),\Gamma_\mu)\textrm{ and } (Y'(K')_0(\beta),\Gamma_\mu),\] adapted to $\Sigma$ in each case.
By Theorem \ref{thm:exacttri}, there is a surgery exact triangle
\[ \xymatrix@C=-35pt@R=30pt{
I_*(-Z|{-}R)_{-\alpha -\eta} \ar[rr]^{I_*(W)_{{\kappa}}} & &I_*(-Z_1|{-}R)_{-\alpha -\eta} \ar[dl] \\
&I_*(-Z_0|{-}R)_{-\alpha -\eta+ \mu},\ar[ul] & \\
} \] where $\mu$ is the curve in $-Z_0$ corresponding to the meridian of $ \beta\subset -Z$.
We can push $\beta$ slightly off of $\Sigma$ to ensure that the capped off surfaces $\overline{\Sigma}$ in these closures are isotopic in the cobordisms between them. Lemma \ref{lem:commute} then implies that the maps in this exact triangle respect the eigenspace decompositions associated with the operators $\mu(\overline\Sigma)$. In particular, we have an exact triangle \[ \xymatrix@C=-35pt@R=30pt{
\SHI(-\data,[\Sigma],g) \ar[rr]^{I_*(W)_{{\kappa}}} & &\SHI(-\data_1,[\Sigma],g) \ar[dl] \\
&I_*(-Z_0|{-}R)_{-\alpha -\eta+ \mu}^{(2g)},\ar[ul] & \\
} \] where the third group denotes the $2g$-eigenspace of the operator $\mu(\overline\Sigma)$ on $I_*(-Z_0|{-}R)_{-\alpha -\eta+ \mu}$. But the surface $\overline\Sigma$ is homologous in $-Z_0$ to a surface of genus \[g(\overline\Sigma)-1=g\] obtained by surgering $\overline\Sigma$ along $\beta$, so Proposition \ref{prop:mu-spectrum} tells us that this $2g$-eigenspace is trivial. The map $I_*(W)_{{\kappa}}$ is therefore an isomorphism. Recall that this map gives rise to the map we denote by $G_\beta$ in Section \ref{ssec:contact}. We have shown above that $G_\beta$ restricts to an isomorphism \[G_\beta:\KHI(-Y',K',[\Sigma],g)\to\KHI(-Y'',K'',[\Sigma],g).\] Moreover, this map sends $\kinvt(K')$ to $\kinvt(K'')$ by Theorem \ref{thm:legendrian-surgery2} and \eqref{eqn:Tcontact}.
Finally, since any two genus $g$ fibered knots $K'\subset Y'$ and $K''\subset Y''$ with fiber $\Sigma$ have open books with monodromies related by positive and negative Dehn twists around nonseparating curves in $\Sigma$, we conclude that there is for any two such knots an isomorphism \[\KHI(-Y',K',[\Sigma],g)\to \KHI(-Y'',K'',[\Sigma],g)\] sending $\kinvt(K')$ to $\kinvt(K'').$ As discussed above, this completes the proof of Theorem \ref{thm:nonzero}.
\end{proof}
The bypass attachment along the arc $p$ in Figure \ref{fig:bypasses} fits into a bypass triangle as shown in Figure \ref{fig:bypasstri2}. Note that the arc of attachment in the upper right is precisely the arc defining the Stipsicz-V{\'e}rtesi bypass attachment which induces the map $\phi_1^{SV}$.
\begin{figure}[ht]
\labellist
\small \hair 2pt
\pinlabel $+$ at 402 370
\pinlabel $-$ at 459 420
\pinlabel $+$ at 782 370
\pinlabel $=$ at 905 421
\pinlabel $-$ at 837 420
\tiny
\pinlabel $p$ at 345 428
\pinlabel $\phi_0^+$ at 578 451
\pinlabel $\phi_1^{SV}$ at 758 261
\pinlabel $C$ at 410 260
\endlabellist
\centering
\includegraphics[width=11.5cm]{Figures/bypasstri2}
\caption{The arcs of attachment for the bypass exact triangle \eqref{eqn:bypasstri}. The arc in the upper right specifies a Stipsicz-V{\'e}rtesi bypass attachment.
}
\label{fig:bypasstri2}
\end{figure}
By Theorem \ref{thm:bypass}, there is a corresponding bypass exact triangle of the form
\begin{equation}\label{eqn:bypasstri} \xymatrix@C=-25pt@R=35pt{
\SHI(-Y(K),-\Gamma_0) \ar[rr]^{\phi_0^+} & & \SHI(-Y(K),-\Gamma_1) \ar[dl]^{\phi_1^{SV}} \\
& \KHI(-Y,K) \ar[ul]^C. & \\
} \end{equation}
To prove Theorem \ref{thm:khi-fibered}, let us now assume that the monodromy $h$ is not right-veering. Then \[\phi_0^+(\cinvt(\xi_0^-))=0\] by Lemma \ref{lem:bypassclaim}. Exactness of the triangle \eqref{eqn:bypasstri} then tells us that there is a class \[x\in\KHI(-Y,K)\] such that \[C(x)=\cinvt(\xi_0^-).\]
The composition \begin{equation}\label{eqn:phicomp}\phi_0^{SV}\circ C: \KHI(-Y,K)\to \KHI(-Y,K)\end{equation} therefore satisfies \begin{equation}\label{eqn:xmapsto}\phi_0^{SV}(C(x))=\kinvt(K),\end{equation} which is nonzero by Theorem \ref{thm:nonzero}. Thus, $x$ is nonzero.
The class $x$ is not \emph{a priori} homogeneous with respect to the Alexander grading on $\KHI(-Y,K)$. However, we prove the following, which completes the proof of Theorem \ref{thm:khi-fibered}.
\begin{theorem}
The component of $x$ in $\KHI(-Y,K,[\Sigma],g-1)$ is nonzero.
\end{theorem}
\begin{proof}
The composite map $\phi_0^{SV}\circ C$ is ultimately induced by a cobordism \[(W,\nu):(-Y_1,-\alpha\sqcup-\eta)\to (-Y_2,-\alpha\sqcup-\eta)\] from a closure \[-\data_1=(-Y_1,-R,-\eta,-\alpha)\textrm{ of }(-Y(K),-\Gamma_\mu)\] to another closure \[-\data_2=(-Y_2,-R,-\eta,-\alpha)\textrm{ of }(-Y(K),-\Gamma_\mu),\] corresponding to surgery on curves away from the embedded copies of $-Y(K)$ in these $-Y_i$. We can arrange that these two closures are adapted to $\Sigma$. The capped off surfaces $\overline\Sigma_a\subset -Y_1$ and $\overline\Sigma_b\subset -Y_2$ in these closures are unions of $\Sigma$ with once-punctured tori, \[\overline\Sigma_a=\Sigma\cup T_a\textrm{ and } \overline\Sigma_b=\Sigma\cup T_b.\] The surfaces $\Sigma\subset Y_i$ are isotopic within $W$ as they are contained on the boundary of a product region \[-Y(K)\times I\subset W.\] We therefore have that \begin{equation}\label{eqn:difference}\overline\Sigma_a + F = \overline\Sigma_b\end{equation} in $H_2(W)$, where $F$ is the genus two surface in $W$ with self-intersection $0$ given as the union \[F=T_b\cup (\partial \Sigma\times I) \cup -T_a\] of these once-punctured tori. Let $y \in \SHI(-\data_1)$ be the element representing the class $x \in \KHI(-Y,K).$ Write \[y=y_{-g}+y_{1-g}+\dots +y_{g-1}+y_g\] where $y_i\in \SHI(-\data_1,[\Sigma],i)$.
We know from \eqref{eqn:xmapsto} that the map $I_*(W)_\nu$ sends $y$ to a representative of $\kinvt(K)$, which must be a nonzero element of $\SHI(-\data_2,[\Sigma],g)$ by Theorems \ref{thm:ttopgrading} and \ref{thm:nonzero}.
It follows from Proposition \ref{prop:grading-shift} and the relation \eqref{eqn:difference}, however, that the only components of $y$ whose images can have nonzero components in $\SHI(-\data_2,[\Sigma],g)$ are $y_{g-1}$ and $y_g$, since $g(F)=2$. On the other hand, we claim that \[I_*(W)_\nu(y_g)=0.\] This will prove that $y_{g-1}$ must be nonzero, proving the theorem. For this claim, note that \[\KHI(-Y,K,[\Sigma],g)\cong {\mathbb{C}}\] since $K$ is fibered. This group is therefore generated by $\kinvt(K)$ by Theorems \ref{thm:ttopgrading} and \ref{thm:nonzero}. If $y_g$ is zero then we are done. If not, then $y_g$ represents a nonzero multiple of $\kinvt(K)$, so \[I_*(W)_\nu(y_g) = 0 \textrm{ if and only if }\phi_0^{SV}(C(\kinvt(K)))= 0.\] Thus, it remains to show that the latter is zero. For this, recall that \[\kinvt(K) = \phi_1^{SV}(\cinvt(\xi_1^-)).\] It then follows that \[C(\kinvt(K)) = C(\phi_1^{SV}(\cinvt(\xi_1^-)))=0\] by the exactness of the triangle \eqref{eqn:bypasstri}.
\end{proof}
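The filtering step in this last proof is a simple grading count: since $g(F)=2$, the cobordism map can shift the Alexander grading by at most $g(F)-1=1$ (this bound on the shift is our reading of Proposition \ref{prop:grading-shift}), so among the components $y_{-g},\dots,y_g$ only $y_{g-1}$ and $y_g$ can reach the top grading $g$. A sketch of this count (helper names are ours):

```python
def reachable_gradings(i, surface_genus):
    """Alexander gradings reachable from grading i, assuming the cobordism map
    shifts the grading by at most surface_genus - 1."""
    s = surface_genus - 1
    return set(range(i - s, i + s + 1))

g = 4                                     # any fiber genus g > 0 behaves the same way
components = range(-g, g + 1)             # y = y_{-g} + ... + y_{g-1} + y_g
hits_top = [i for i in components if g in reachable_gradings(i, 2)]  # g(F) = 2
assert hits_top == [g - 1, g]             # only y_{g-1} and y_g can reach grading g
```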
\bibliographystyle{alpha}
% arXiv:1801.07634, ``Khovanov homology detects the trefoils'' (math.GT; math.SG), 2018-01-24.
% Abstract: We prove that Khovanov homology detects the trefoils. Our proof incorporates an
% array of ideas in Floer homology and contact geometry. It uses open books; the contact
% invariants we defined in the instanton Floer setting; a bypass exact triangle in sutured
% instanton homology, proven here; and Kronheimer and Mrowka's spectral sequence relating
% Khovanov homology with singular instanton knot homology. As a byproduct, we also strengthen
% a result of Kronheimer and Mrowka on $SU(2)$ representations of the knot group.
% https://arxiv.org/abs/2210.11770
\title{Hamilton completion and the path cover number of sparse random graphs}
\begin{abstract}
We prove that for every $\varepsilon > 0$ there is $c_0$ such that if $G\sim G(n,c/n)$, $c\ge c_0$, then with high probability $G$ can be covered by at most $(1+\varepsilon)\cdot \frac{1}{2}ce^{-c} \cdot n$ vertex disjoint paths, which is essentially tight. This is equivalent to showing that, with high probability, at most $(1+\varepsilon)\cdot \frac{1}{2}ce^{-c} \cdot n$ edges can be added to $G$ to create a Hamiltonian graph.
\end{abstract}
\section{Introduction} \label{sec-intro}
A classical result by Koml\'{o}s and Szemer\'{e}di \cite{KS83}, and independently by Bollob\'{a}s \cite{B84}, states that if $p=p(n)=(\log n + \log \log n + f(n))/n$ then
\begin{eqnarray*}
\lim _{n\to \infty} \mathbb{P} (G(n,p)\text{ is Hamiltonian}) =
\begin{cases}
1 & \text{if } f(n) \to \infty ;\\
0 & \text{if } f(n) \to -\infty .
\end{cases}
\end{eqnarray*}
In light of this, one cannot expect a Hamilton cycle to exist in a random graph $G(n,p)$ when $p$ is below the sharp threshold stated above. This invites a quantitative question: typically, how close is a random graph below the threshold to being Hamiltonian? To answer this question, a measure of ``distance'' from Hamiltonicity should first be defined.
One example of such a measure is the maximum length of a cycle in the graph, denoted $L_{\max}(G)$; if $p$ is above the Hamiltonicity threshold then typically $L_{\max}(G(n,p)) = n$. Frieze \cite{F86} showed that if $c>0$ is a large enough constant with respect to $\varepsilon$ then typically
$$
L_{\max}(G(n,c/n)) \ge (1-(1+\varepsilon )ce^{-c})n.
$$
Up to the value of $\varepsilon$, this bound is tight. This is due to the fact that, with high probability, there are $(c+1+o(1))\cdot e^{-c}n$ vertices of degree less than 2 in $G(n,c/n)$, and such vertices cannot be a part of any cycle. Following that, Anastos and Frieze \cite{AF21} extended this result by showing that, in fact, there is a function $f$ such that $L_{\max}(G(n,c/n))/n \xrightarrow{a.s.} f(c)$. The function $f(c)$ can be expressed as a series with explicitly computable terms, the first few of which are computed in their paper.
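As a sanity check on the degree-count heuristic behind this tightness claim, the following sketch (a rough simulation, not from the paper; the parameters $n=2000$, $c=5$ and the seed are arbitrary choices) samples $G(n,c/n)$ and compares the fraction of vertices of degree less than 2 with $(c+1)e^{-c}$.

```python
import math
import random

def low_degree_fraction(n, c, seed=0):
    """Sample G(n, c/n) and return the fraction of vertices of degree < 2."""
    rng = random.Random(seed)
    p = c / n
    deg = [0] * n
    for u in range(n):
        for v in range(u + 1, n):
            if rng.random() < p:
                deg[u] += 1
                deg[v] += 1
    return sum(1 for d in deg if d < 2) / n

observed = low_degree_fraction(2000, 5.0)
predicted = 6 * math.exp(-5.0)  # (c + 1) e^{-c} with c = 5
```

For these parameters the observed fraction concentrates around the prediction, matching the $(c+1+o(1))e^{-c}n$ count quoted above.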
In a recent paper Nenadov, Sudakov and Wagner \cite{NSW} presented a general measure for the distance of a graph from a given property: deficiency. The \textit{deficiency} of a graph $G$ with respect to a graph property $\mathcal{P}$, denoted $\text{def}(G,\mathcal{P} )$, is equal to the smallest integer $t$ such that $G * K_t$ has the property $\mathcal{P}$. Here, $G*H$ is the \emph{join} of $G$ and $H$, that is, the graph with vertex set $V(G) \uplus V(H)$ and edge set $E(G)\cup E(H) \cup \{\{u,w\} \mid u\in V(G),\, w\in V(H)\}$.
With respect to the property of Hamiltonicity, the notion of deficiency is equivalent to another natural measure of distance from Hamiltonicity: the number of edges separating a graph from the nearest Hamiltonian graph. In other words, the deficiency of $G$ with respect to Hamiltonicity is equal to the smallest integer $t$ such that there is a graph $H$ with $e(H)=t$ for which $G\cup H$ is Hamiltonian. Evidently, this is also equal to the \emph{path cover number} of $G$, defined as follows.
\begin{defin}
Let $G$ be a graph. The \emph{path cover number} of $G$, denoted by $\mu (G)$, is 0 if $G$ is Hamiltonian, and otherwise it is the minimal number $k\in \mathbb N$ such that there is a covering of $V(G)$ by $k$ vertex disjoint paths.
\end{defin}
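To make the definition concrete, here is a brute-force computation of $\mu(G)$ for tiny graphs (an illustrative sketch, not part of the paper). It uses the fact that every cover by vertex-disjoint paths arises from an ordering of the vertices cut at its non-edges, and that $\mu(G)=0$ exactly when some cyclic ordering uses only edges.

```python
from itertools import permutations

def path_cover_number(n, edges):
    """Brute-force mu(G) for a tiny graph on vertices 0..n-1.

    mu(G) = 0 iff G is Hamiltonian; otherwise it is the least k such
    that V(G) can be covered by k vertex-disjoint paths.  Every such
    cover comes from an ordering of the vertices cut at the non-edges,
    so scanning all orderings suffices.
    """
    adj = {frozenset(e) for e in edges}
    best = n  # n singleton paths always work
    for perm in permutations(range(n)):
        # a cyclic ordering using only edges is a Hamilton cycle
        if all(frozenset((perm[i], perm[(i + 1) % n])) in adj for i in range(n)):
            return 0
        breaks = sum(1 for i in range(n - 1)
                     if frozenset((perm[i], perm[i + 1])) not in adj)
        best = min(best, breaks + 1)
    return best
```

For example, a path on 4 vertices has $\mu = 1$, a 4-cycle has $\mu = 0$, and the star $K_{1,3}$ has $\mu = 2$, in line with the edge-addition formulation above.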
Henceforth we will denote $\text{def}(G,\mathcal{H} )$, where $\mathcal{H}$ is the property of Hamiltonicity, by $\mu (G)$, as the two quantities are equal.
In the above-mentioned paper, Nenadov et al. proved a tight combinatorial result: an inequality relating $|V(G)|$, $|E(G)|$ and $\mu (G)$ that holds for every graph $G$, together with examples of every size for which the inequality holds with equality.
When random graphs are considered, an immediate bound on $\mu (G)$, similar to the bound on the maximum cycle length, can be derived from the fact that every path in a disjoint path covering of $G$ contains at most two vertices of degree 1, or a single vertex of degree 0. From this we get that, with high probability for $G\sim G(n,c/n)$, one has $\mu (G) \ge (\frac{1}{2}c+1+o(1))\cdot e^{-c}n$. As in the case of the maximum cycle length, this trivial bound turns out to be tight, as shown in the following theorem, which is the main result of this paper.
\begin{thm}\label{main}
For every $\varepsilon > 0$ there is $c_0>1$ such that if $G\sim G(n,c/n)$, $c\ge c_0$, then with high probability $$(1-\varepsilon)\cdot \frac{1}{2}ce^{-c} \cdot n \le \mu (G) \le (1+\varepsilon)\cdot \frac{1}{2}ce^{-c} \cdot n.$$
\end{thm}
In Section \ref{sec-per} we introduce notation and definitions to be used later in the paper, as well as some auxiliary results required for our proof. In Section \ref{sec:proof} we present our proof for Theorem \ref{main}.
\section{Preliminaries} \label{sec-per}
Throughout the paper we use the following graph theoretic notation.
For a graph $G=(V,E)$ and two disjoint vertex subsets $U,W\subseteq V$, $E_G(U,W)$ denotes the set of edges of $G$ with exactly one endpoint in $U$ and one endpoint in $W$, and $e_G(U,W)=|E_G(U,W)|$.
Similarly, $E_G(U)$ denotes the set of edges spanned by a subset $U$ of $V$, and $e_G(U)$ its size.
The (external) neighbourhood of a vertex subset $U$, denoted by $N_G(U)$, is the set of vertices in $V\setminus U$ adjacent to a vertex of $U$.
The degree of a vertex $v\in V$, denoted by $d_G(v)$, is the number of edges of $G$ incident to $v$; more generally, for a subset $S\subseteq V$, $d_G(v,S)$ denotes the number of neighbours of $v$ in $S$.
For $u,v\in G$ we let $\text{dist}_G(u,v)$ denote the distance in $G$ between $u$ and $v$, that is the length of a shortest path in $G$ connecting them (or $\text{dist}_G(u,v)=\infty$ if $u$ and $v$ are not connected by any path), where $\text{dist}_G(v,v)$ is defined to be the minimal length of a cycle containing $v$ (or $\text{dist}_G(v,v) = \infty$ if $v$ is not contained in any cycle).
While using the above notation we occasionally omit $G$ if the identity of the specific graph is clear from the context.
We suppress the rounding signs to simplify the presentation.
The following auxiliary results are required for our proof.
\begin{lemma}\label{binom-rv}
Let $1\le k \le n$ be integers, $0 < p < 1$, and let $X\sim \Dist{Bin} (n,p)$. Then the following inequalities hold:
\begin{enumerate}
\item $\mathbb{P} (X \ge k) \le \left( \frac{enp}{k} \right) ^k$\,;
\item $\mathbb{P} (X = k) \le \left( \frac{enp}{k(1-p)} \right) ^k\cdot e^{-np}$\,.
\end{enumerate}
\noindent If, additionally, $k\le np$, then
\begin{enumerate}[resume]
\item $\mathbb{P} (X \le k) \le (k+1)\cdot \left( \frac{enp}{k(1-p)} \right) ^k\cdot e^{-np}$\,.
\end{enumerate}
\end{lemma}
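The first two inequalities follow from the standard estimate $\binom{n}{k}\le (en/k)^k$. A quick numeric check of inequalities 1 and 2 against the exact binomial tail (illustrative only; the parameters $n=200$, $p=0.05$ are arbitrary):

```python
import math

def upper_tail(n, p, k):
    """Exact P(Bin(n, p) >= k)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def point_mass(n, p, k):
    """Exact P(Bin(n, p) = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)
```

Comparing `upper_tail(n, p, k)` with $(enp/k)^k$ and `point_mass(n, p, k)` with $(enp/(k(1-p)))^k e^{-np}$ over a range of $k$ confirms both bounds numerically.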
\begin{lemma}\label{chernoff}
{\em (Chernoff bound for binomial lower tail, see e.g. \cite{CHER})} Let $X\sim \Dist{Bin} (n,p)$ and $\delta >0$. Then
$$
\mathbb{P} (X <(1-\delta )np) \le \exp \left( -\frac{\delta ^2 np}{2} \right) .
$$
\end{lemma}
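The lower-tail bound can likewise be checked numerically against the exact binomial distribution (an illustrative sketch with arbitrary parameters, not part of the paper):

```python
import math

def lower_tail(n, p, k):
    """Exact P(Bin(n, p) < k)."""
    return sum(math.comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k))

def chernoff_bound(n, p, delta):
    """The bound exp(-delta^2 np / 2) from the lemma."""
    return math.exp(-delta**2 * n * p / 2)
```

For instance, with $n=500$, $p=0.1$ (so $np=50$) the exact probability $\mathbb{P}(X < (1-\delta)np)$ sits below the Chernoff bound for every tested $\delta$.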
\begin{lemma} \label{lemma:const-degree-vrtcs} {\em (see e.g. \cite{FK}, Lemma 3.3)}
Let $d\in \mathbb{N}$, $c>0$ and let $G\sim G(n,c/n)$. Then, with high probability,
$$
\left| |\{ v\in G \mid d(v) = d \} | - e^{-c} \cdot \frac{c^d}{d!} \cdot n \right| = o\left( n^{0.6} \right).
$$
\end{lemma}
\begin{defini}\label{def:2core}
The \emph{connected 2-core} of a graph $G$, denoted $C_2(G)$, is the 2-core of the largest connected component of $G$. That is, the maximum size subgraph of the largest connected component that has minimum degree at least 2.
\end{defini}
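The connected 2-core is easy to compute: find the largest component, then repeatedly delete vertices of degree less than 2. A minimal sketch (illustrative, not from the paper), for a graph given as an adjacency dictionary mapping each vertex to the set of its neighbours:

```python
from collections import deque

def connected_two_core(adj):
    """C_2(G): the 2-core of the largest connected component of G."""
    # find the largest connected component by BFS
    seen, best = set(), set()
    for s in adj:
        if s in seen:
            continue
        comp, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    q.append(v)
        seen |= comp
        if len(comp) > len(best):
            best = comp
    # iteratively prune vertices of degree < 2 within the core
    core = set(best)
    changed = True
    while changed:
        changed = False
        for u in list(core):
            if len(adj[u] & core) < 2:
                core.remove(u)
                changed = True
    return core
```

For a triangle with a pendant vertex plus a separate edge, the function returns exactly the triangle.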
\begin{lemma} \label{lemma:sizeof2core} {\em (see e.g. \cite{FK}, Lemma 2.16)}
Let $\varepsilon > 0$. Then there is $C=C(\varepsilon )$ such that, if $c\ge C$ and $G\sim G(n,c/n)$, then, with high probability, $|V(C_2(G))| \ge (1-(c+1)e^{-c}-(1+\varepsilon )c^2e^{-2c})n.$
\end{lemma}
We will also use the following stronger variant of the Azuma-Hoeffding inequality due to Warnke.
\begin{lemma} \label{lemma:martingales} {\em (An immediate consequence of \cite{WAR}, Theorem 1.2)}
Let $G\sim G(n,p)$, let $f$ be a graph theoretic function, and let $X_0,X_1,...,X_n$ be the corresponding vertex exposure martingale. Further assume that there is a graph property $\mathcal{P}$ and a positive integer $d$ such that, for every $G_1,G_2$ graphs on $V$ such that $E(G_1)\triangle E(G_2) \subseteq \{v\}\times V$ for some $v\in V$, the following holds:
\begin{eqnarray*}
|f(G_1)-f(G_2) | \le
\begin{cases}
d & \text{if } G_1,G_2\in \mathcal{P} ;\\
n & \text{otherwise} .
\end{cases}
\end{eqnarray*}
Then
$$
\mathbb{P} \left( X_n \ge X_0 + t \ \wedge \ G\in \mathcal{P} \right) \le \exp \left( -\frac{t^2}{2n(d+1)^2} \right) .
$$
\end{lemma}
\section{Proof of Theorem \ref{main}} \label{sec:proof}
In this section we present a proof of Theorem \ref{main}. Recall that only the upper bound in the statement requires a proof. Throughout the proof we assume that $c$ is large enough (with respect to $\varepsilon$) without stating this explicitly.
\textbf{Proof outline:} Let $V_1\subseteq V(G)$ be the set of vertices of degree 1. We prove the claim by showing that for a large enough value of $c$, with high probability, if we add to $G$ the edges of a (partial) matching on $V_1$, the resulting graph contains a cycle of length at least $(1-\frac{1}{2}\varepsilon ce^{-c})n$. Since, with high probability, $|V_1|\le (1+\frac{1}{2}\varepsilon)ce^{-c}n$, by removing the added matching from the cycle we get a set of at most $(1+\frac{1}{2}\varepsilon)\frac{1}{2}ce^{-c}n$ paths that cover at least $(1-\frac{1}{2}\varepsilon ce^{-c})n$ vertices. The remaining vertices can now be covered by paths of length 0 to establish the result.
In Section \ref{sec:cores} we study the structure of $G$, and show its typical properties that will help with proving the likely existence of a long cycle. We then describe a construction of an auxiliary graph $G^*$ and a matching $M\subseteq G^*$, such that if $G^*$ contains a Hamilton cycle that contains all the edges of $M$ then $G$ along with a matching on $V_1$ defined by $M$ contains a sufficiently long cycle.
Finally, in Section \ref{sec:hamiltonian} we prove that indeed, with high probability, $G^*$ contains such a Hamilton cycle. Here, we will employ adapted variations on methods used in the setting of finding Hamilton cycles in expanding graphs, namely, rotations and extensions. For an overview on this subject we refer the reader to \cite{SURV}, Section 2.3.
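The accounting step of the outline rests on a simple fact: deleting $k\ge 1$ edges of a cycle leaves exactly $k$ vertex-disjoint paths covering the same vertex set. A small sketch of this bookkeeping (illustrative, not from the paper):

```python
def split_cycle(cycle, removed_edges):
    """Delete the given cycle edges and return the resulting paths.

    `cycle` is a list of vertices; its edges are the consecutive pairs
    plus the wrap-around edge.  Removing k >= 1 of these edges leaves
    exactly k vertex-disjoint paths covering all vertices of the cycle.
    """
    n = len(cycle)
    removed = {frozenset(e) for e in removed_edges}
    cuts = [i for i in range(n)
            if frozenset((cycle[i], cycle[(i + 1) % n])) in removed]
    if not cuts:
        return [cycle[:]]  # nothing removed: still one closed walk
    paths = []
    for j, i in enumerate(cuts):
        start = (i + 1) % n
        end = cuts[(j + 1) % len(cuts)]
        path, k = [], start
        while True:  # walk from start to end inclusive
            path.append(cycle[k])
            if k == end:
                break
            k = (k + 1) % n
        paths.append(path)
    return paths
```

In the proof, the cycle has length at least $(1-\frac{1}{2}\varepsilon ce^{-c})n$ and the removed edges are those of the added matching, so the number of resulting paths equals the number of matching edges used.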
\subsection{Properties of $G$ and its 2-core} \label{sec:cores}
Recall that $V_1$ denotes the set of vertices in $G$ of degree 1. Define the following three subsets of $V(G)$.
\begin{eqnarray*}
&&\text{SMALL} = \text{SMALL} (G) = \{ v\in V(G) \mid d_G(v,V(G)\setminus N(V_1)) < c/1000 \}; \\
&&\text{LARGE} = \text{LARGE} (G) = \{ v\in V(G) \mid d_G(v) > 20c \}; \\
&&\text{CLOSE} = \text{CLOSE} (G) = \{ v\in \text{SMALL} \mid \exists u\in \text{SMALL} : \text{dist}(v,u) \le 4 \} . \\
\end{eqnarray*}
Notice that $V_1 \subseteq \text{SMALL}$.
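The following sketch computes $V_1$, $\text{SMALL}$ and $\text{LARGE}$ for a graph given as an adjacency dictionary, following the displayed definitions (illustrative only; $\text{CLOSE}$ is omitted since it additionally requires short-distance computations, and the test uses an artificially large $c$ so that the threshold $c/1000$ exceeds 1 on a toy graph):

```python
def classify(adj, c):
    """Compute V1, SMALL and LARGE as defined above.

    `adj` maps each vertex to the set of its neighbours; SMALL collects
    vertices with fewer than c/1000 neighbours outside N(V1), and LARGE
    the vertices of degree greater than 20c.
    """
    V1 = {v for v in adj if len(adj[v]) == 1}
    NV1 = set().union(*(adj[v] for v in V1)) if V1 else set()
    outside = set(adj) - NV1  # V(G) \ N(V1)
    SMALL = {v for v in adj if len(adj[v] & outside) < c / 1000}
    LARGE = {v for v in adj if len(adj[v]) > 20 * c}
    return V1, SMALL, LARGE
```

On a path $0$-$1$-$2$-$3$ with $c=2000$, both endpoints lie in $V_1$ and, as observed above, $V_1 \subseteq \text{SMALL}$.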
\begin{lemma} \label{lemma:properties}
With high probability $G$ has the following properties:
\begin{enumerate}[{label=\textbf{(P\arabic*)}}]
\item
\label{item:V1-is-small}
$(1-n^{-0.4})\cdot ce^{-c}\cdot n \le |V_1| \le (1+n^{-0.4})\cdot ce^{-c}\cdot n$;
\item
\label{item:SMALL-is-small}
$|\text{SMALL} | \le e^{-0.9c}\cdot n$;
\item
\label{item:BIG-is-small}
$|\text{LARGE} | \le 10^{-6}\cdot n$;
\item
\label{item:CLOSE-is-small}
$|\text{CLOSE} | \le e^{-1.8c}\cdot n$;
\item
\label{item:not-too-many-edges}
every $U\subseteq V(G)$ with $|U| \le 10^{-5}\cdot n$ has $e(U) < 10^{-4}c\cdot |U|$;
\item
\label{item:not-too-few-edges}
every $U,W\subseteq V(G)$ disjoint with $|U| = 10^{-6}\cdot n$ and $|W|=\frac{1}{5}n$ satisfy: $e(U,W) \ge 10^{-7} c\cdot n$.
\end{enumerate}
\end{lemma}
\begin{proof}
For each of the given properties, we bound the probability that $G\sim G(n,p)$ fails to uphold it.
\begin{itemize}
\item[\ref{item:V1-is-small}.]
This is an immediate consequence of Lemma \ref{lemma:const-degree-vrtcs}, with $d=1$.
\item[\ref{item:SMALL-is-small}.]
Since $|N(V_1)|\le |V_1|$, assuming $G$ has \ref{item:V1-is-small}, the probability that $|\text{SMALL} | \ge s \coloneqq e^{-0.9c}n$ is at most the probability that there is a set $U$ of size $s $ and a set $W$ of size $t=2 ce^{-c}\cdot n$ with less than $\frac{c}{1000}\cdot s$ edges between $U$ and $V(G)\setminus (U\cup W)$. The probability for this is at most
\begin{eqnarray*}
& & \binom{n}{s}\cdot \binom{n}{t}\cdot \mathbb{P} \left( \Dist{Bin} \left( s(n-s-t), p \right) < \frac{cs}{1000} \right) \\
& \le & \left(\frac{en}{s} \right) ^{s} \cdot \left(\frac{en}{t} \right) ^{t} \cdot \frac{cs}{1000}\cdot \mathbb{P} \left( \Dist{Bin} \left( s(n-s-t), p \right) = \frac{c}{1000} \cdot s \right) \\
& \le & O(n)\cdot e^{0.91c\cdot s} \cdot e^{2c\cdot t} \left( \frac{1000es(n-s-t)p}{cs(1-p)} \right) ^{10^{-3}cs}\cdot e^{-s(n-s-t)p} \\
& \le & e^{0.92c\cdot s} \cdot (1001e) ^{10^{-3}cs}\cdot e^{-0.99c\cdot s} \\
& \le & e^{-0.06cs} \\
& = & o(1).
\end{eqnarray*}
\item[\ref{item:BIG-is-small}.]
If there are more than $10^{-6} n$ vertices with degree more than $20c$, then in particular there is a set $U\subseteq V(G)$ of size $10^{-6}n$ with $e(U)+e(U,V(G)\setminus U) \ge 10^{-5}cn$. The probability of this is at most
\begin{eqnarray*}
& & \binom{n}{10^{-6}n}\cdot \mathbb{P} \left( \Dist{Bin} \left( \binom{10^{-6}n}{2}+10^{-6}\cdot(1-10^{-6})n^2, p \right) \ge 10^{-5}c n \right) \\
& \le & 2^n \cdot \left( \frac{e\cdot 10^{-6}n^2\cdot p}{10^{-5}c n} \right) ^{10^{-5}c n} \\
& \le & 2^n \cdot \left( \frac{e}{10} \right) ^{10^{-5}c n} \\
& = & o(1).
\end{eqnarray*}
\item[\ref{item:CLOSE-is-small}.]
In order to bound the probability that $G$ fails to satisfy \ref{item:CLOSE-is-small} we aim to apply Lemma \ref{lemma:martingales} with $f(G) = |\text{CLOSE} (G)|$ and $\mathcal{P}$ the property $\Delta (G) \le \log n$. First, we bound $X_0 = \mathbb{E}\left[ f(G) \right]$ from above.
Let $Y_1$ denote the random variable counting cycles in $G$ of length 3 or 4, and $Y_2$ the random variable counting the paths in $G$ of length at most 4 such that both their endpoints are in $\text{SMALL}$. Then, deterministically, $|\text{CLOSE} | \le Y_1 + 2Y_2$, and therefore $\mathbb{E}[|\text{CLOSE} |] \le \mathbb{E}[Y_1] + 2\mathbb{E}[Y_2]$. It is already known that $\mathbb{E}[Y_1]=O(1)$ (see e.g. \cite{FK}, Theorem 5.4), so it remains to bound $\mathbb{E}[Y_2]$.
For a given pair of vertices $u,v\in V(G)$ and a path $P$ of length $\ell \le 4$ between them, the probability that $P\subseteq G$ and $u,v\in \text{SMALL}$ is at most $p^{\ell}$ times the probability of the event: at least one of the vertices $u,v$ has at least $10^{-4}c$ neighbours in $V(G)\setminus V(P)$ such that each of these neighbours has a neighbour of degree 1, or $e_G(\{u,v\},V(G)\setminus V(P)) \le 2(10^{-3} + 10^{-4})c$. The probability of this event is at most
\begin{eqnarray*}
& & 2\cdot (n^2p^2)^{c/10000}\cdot (1-p)^{c(n-5)/10000} + \mathbb{P} \left( \Dist{Bin} \left( 2(n-\ell -1),p \right) \le 2(10^{-3} + 10^{-4})c \right) \\
& \le & e^{-c^2/20000} + c\cdot \left( 2000e \right) ^{c/450}\cdot e^{-2c} \\
& \le & e^{-1.95c}.
\end{eqnarray*}
From this we get
$$
\mathbb{E}[Y_2] \le \sum _{\ell =1}^4 n^{\ell +1}p^{\ell}e^{-1.95c} \le c^4e^{-1.9c}n,
$$
and therefore $\mathbb{E}[|\text{CLOSE} |] \le \mathbb{E}[Y_1] + 2\mathbb{E}[Y_2] \le e^{-1.85c}n$.
In order to apply Lemma \ref{lemma:martingales} it remains to determine the parameter $d$. We claim that setting $d = 3\log ^7 n$ satisfies the conditions of the lemma. Let $G_1,G_2$ be two graphs on $V$ with maximum degree at most $\log n$, that differ from each other only in edges incident to $v\in V$. It suffices to show that $\text{CLOSE} (G_1) \triangle \text{CLOSE} (G_2)$ only contains vertices of distance at most 7 from $v$ in either $G_1$ or $G_2$. Indeed, let $u$ be (without loss of generality) in $\text{CLOSE} (G_1) \setminus \text{CLOSE} (G_2)$. This can occur for two reasons.
\begin{enumerate}
\item $v$ is part of a path of length at most 4 in $G_1$ between $u$ and another vertex, or a cycle of length at most 4 that contains $u$. This can only occur if $\text{dist}_{G_1}(u,v) \le 3$.
\item $u$ or a vertex of distance at most 4 from $u$ are in $\text{SMALL} (G_1) \setminus \text{SMALL} (G_2)$. Call this vertex $w$ (possibly $w=u$). Going from $G_2$ to $G_1$, this can happen because $\{v,w\} \in E(G_2)$, or because changing the edges of $v$ caused another neighbour of $w$ to be moved into $N_{G_1}(V_1(G_1))$. In any of these cases the distance between $v$ and $w$ is at most 3 in one of the graphs $G_1$ or $G_2$, and therefore the distance between $u$ and $v$ is at most 7 in that graph.
\end{enumerate}
Now we get by Lemma \ref{lemma:martingales}
\begin{eqnarray*}
& & \mathbb{P} \left( |\text{CLOSE} | > \mathbb{E}[|\text{CLOSE} |] + n^{2/3} \right) \\
& \le & \mathbb{P} \left( |\text{CLOSE} | > \mathbb{E}[|\text{CLOSE} |] + n^{2/3} \ \wedge \ \Delta (G) \le \log n \right) + \mathbb{P} \left( \Delta (G) > \log n \right) \\
& \le & \exp \left( -\frac{n^{4/3}}{2n(3\log ^7n+1)^2} \right) + n\cdot \binom{n}{\log n}\cdot \left( \frac{c}{n} \right) ^{\log n} \\
& = & o(1),
\end{eqnarray*}
which implies that $|\text{CLOSE} | \le e^{-1.8c}n$ with high probability.
\item[\ref{item:not-too-many-edges}.]
The probability that there is a set $U\subseteq V(G)$ of size $k\le 10^{-5}n$ that contradicts \ref{item:not-too-many-edges} is at most
$$
\binom{n}{k}\cdot \mathbb{P} \left( \Dist{Bin} \left( \binom{k}{2},p \right) \ge 10^{-4}ck \right)
\le \left( \frac{en}{k} \right) ^k \cdot \left( \frac{10^4ek^2p}{2ck} \right) ^{10^{-4}ck}
\le \left( \frac{10^4ek}{2n} \right) ^{10^{-5}ck} .
$$
Observe that if $k\le \log n$ this expression is of order $n^{-\Omega (1)}$. By the union bound the probability that $G$ does not have \ref{item:not-too-many-edges} is at most
$$
\sum _{k=1}^{10^{-5}n} \left( \frac{10^4ek}{2n} \right) ^{10^{-5}ck}
\le \log n \cdot n^{-\Omega (1)} + \sum _{k=\log n}^{10^{-5}n} \left( \frac{e}{20} \right) ^{10^{-5}ck}
= o(1).
$$
\item[\ref{item:not-too-few-edges}.]
Since there are at most $3^n$ ways to choose two disjoint subsets of $V(G)$, by the union bound and by Lemma \ref{chernoff}, the probability that there are disjoint $U,W\subseteq V(G)$ with $|U| = 10^{-6}\cdot n$ and $|W|=\frac{1}{5}n$ and less than $10^{-7} c\cdot n$ edges between them is at most $3^n\cdot e^{-\frac{1}{4}\cdot 10^{-7}cn} = o(1).$
\end{itemize}
\end{proof}
Towards constructing $G^*$, we now define the set $\text{BAD} = \text{BAD} (G)\subseteq V(G)$, whose vertices we would like to exclude from $G^*$. Let $X\subseteq V(G)$ be a minimum size set such that $d_G(v,\text{SMALL} \cup X) \le 1$ for every $v\notin X$, let $Y \coloneqq \{ v\in V(G) \mid d_G(v) =2,\ d_G(v,X) = 1 \}$, and set $\text{BAD} \coloneqq X \cup Y$.
Observe that $X$, and therefore $\text{BAD} $, are well defined: if $X_1,X_2 \subseteq V(G)$ satisfy $\forall v\notin X_i: d_G(v,\text{SMALL} \cup X_i) \le 1$, $i=1,2$, then $\forall v\notin X_1\cap X_2: d_G(v,\text{SMALL} \cup (X_1\cap X_2)) \le 1$ also holds, and therefore there is a unique smallest set $X$ that satisfies the condition.
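The greedy process used in the proof below (repeatedly adding the first vertex with degree at least 2 into $\text{SMALL}\cup X$) also gives a direct way to compute $X$. A sketch, with a hypothetical toy graph in the example (illustrative, not from the paper):

```python
def build_X(adj, SMALL):
    """Greedily build the minimum set X of the definition above: while
    some vertex v outside X has >= 2 neighbours in SMALL ∪ X, add the
    first such vertex (vertices scanned in a fixed order)."""
    X = set()
    changed = True
    while changed:
        changed = False
        for v in sorted(adj):
            if v in X:
                continue
            if len(adj[v] & (SMALL | X)) >= 2:
                X.add(v)
                changed = True
                break
    return X
```

After the process terminates, every vertex outside $X$ has at most one neighbour in $\text{SMALL}\cup X$, as required.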
\begin{lemma} \label{lemma:bad-is-small}
With high probability $| \text{BAD} | \le 3\sqrt{c}e^{-c}n$.
\end{lemma}
\begin{proof}
We first show that, with high probability, $|X|\le x\coloneqq \sqrt{c}e^{-c}n$. Let $A$ denote the complementary event, and consider the vertices of $X$ in their order of addition to $X$, according to the following process: while there is a vertex outside $X$ with degree at least 2 into $\text{SMALL} \cup X$, add the first such vertex (according to some pre-determined order). Assume that $A$ happened, that is, assume that $|X|\ge x$, and let $X'$ be the set containing the first $x$ vertices added to $X$. Then there is a subset $Z\subseteq \text{SMALL}$ such that $|Z| \le 2x$ and $e(Z\cup X') \ge 2x$. By the definition of $\text{SMALL}$, this means that the degree of every vertex in $Z$ into $V(G)\setminus N(V_1)$ is at most $\frac{c}{1000}$.
Therefore, assuming $G$ has property $\ref{item:V1-is-small}$, $A$ is contained in the following event: there are sets $X',Z,S,T\subseteq V(G)$ such that the following hold:
\begin{enumerate}
\item $|X'| = x,\ |Z|\le 2x,\ |T|=|S|\le 2ce^{-c}n$;
\item $e(Z\cup X') \ge 2x$;
\item $e(Z,V(G)\setminus (X'\cup Z\cup S\cup T)) \le \frac{c}{1000}\cdot |Z| $;
\item all the vertices in $T$ have degree 1, and their unique neighbour is in $S$, so that no two vertices in $T$ share their neighbour.
\end{enumerate}
For given sets $X',Z,S,T$ of respective sizes $x,z,s,s$, the probability that all the conditions above are satisfied is at most
\begin{equation}\label{eqn:bad}
p^s(1-p)^{s(n-1)}\cdot \mathbb{P} \left( \Dist{Bin} \left( \binom{3x}{2},p \right) \ge 2x \right) \cdot \mathbb{P} \left( \Dist{Bin} \left( z(n-z-x-2s),p \right) \le 10^{-3}cz \right)
\end{equation}
We first use Lemma \ref{binom-rv} to bound from above some of the terms in Equation \ref{eqn:bad}, under the assumption $s \le 2ce^{-c}n$.
First,
$$
\mathbb{P} \left( \Dist{Bin} \left( \binom{3x}{2},p \right) \ge 2x \right) \le \left( \frac{9ex^2p}{4x} \right) ^{2x} \le e^{-1.9cx}.
$$
Next, if $s \le 2ce^{-c}n$ then
$$
\mathbb{P} \left( \Dist{Bin} \left( z(n-z-x-2s),p \right) \le 10^{-3}cz \right) \le n\cdot \left( \frac{1000ez\cdot np}{cz\cdot (1-p)} \right) ^{10^{-3}cz} \cdot e^{-0.99cz} \le e^{-0.9cz}.
$$
Altogether we get that the expected number of such sets is at most
\begin{eqnarray*}
& & \binom{n}{s}\cdot (np\cdot e^{-(n-1)p})^s \binom{n}{x} \binom{n}{z} \cdot \mathbb{P} \left( \Dist{Bin} \left( \binom{3x}{2},p \right) \ge 2x \right) \cdot \mathbb{P} \left( \Dist{Bin} \left( z(n-z-x-2s),p \right) \le 10^{-3}cz \right)\\
& \le & \left( \frac{en}{x} \right) ^x \cdot \left( \frac{en}{z} \right) ^z \cdot \exp \left( 2ce^{-c}n-1.9cx-0.9cz \right) \\
& \le & \exp \left( e^{-c}n+2ce^{-c}n-0.7cx \right) \\
& \le & e^{-cx/2};
\end{eqnarray*}
where in the second inequality we used the fact that $\left( \frac{e\cdot e^{-c}n}{z} \right) ^z \le e^{e^{-c}n}$, since the expression is maximized when $z=e^{-c}n$.
Finally, summing over all $O(n^2) = o(e^{cx/2})$ possibilities for $s,z$, by the union bound, we get that the probability that sets $X',Z,S,T\subseteq V(G)$ satisfying the above-listed conditions exist, and therefore $\mathbb{P} (A)$, is of order $o(1)$.
To finish the proof we observe that every vertex in $Y\setminus X$ has a unique neighbour in $X$, and that for two vertices $u,v\in Y\setminus (\text{CLOSE} \cup X) \subseteq \text{SMALL} \setminus \text{CLOSE}$ these unique neighbours in $X$ are distinct, since otherwise $\text{dist}(u,v) \le 2$, a contradiction. It follows that $|Y\setminus X|\le |X|+|\text{CLOSE} |$ and therefore, due to property \ref{item:CLOSE-is-small}, with high probability we have
$$
|\text{BAD} |\le |X|+|Y\setminus X| \le 2|X| + |\text{CLOSE} | \le 3\sqrt{c}e^{-c}n .
$$
\end{proof}
Recall Definition \ref{def:2core} and let $C_2=C_2(G)$. Now set $ V(G^*) = V(C_2) \setminus \left( \text{CLOSE} \cup \text{BAD} \right) $, set $M$ to be a pairing on the vertex set $N_G(V_1) \cap V(G^*)$ by order, that is, the smallest vertex of $N_G(V_1) \cap V(G^*)$ is paired to the second smallest and so on, so that all vertices are paired except possibly the largest one, and set $E(G^*) = E_G(V(G^*))\cup E(M)$. Observe that since $V(G^*) \cap \text{CLOSE} = \emptyset$, no two vertices in $V(M)$ are connected by an edge of $G^*\setminus M$. Also observe that $d_{G^*}(v,V(G^*)\setminus V(M)) \ge 2$ for every $v\in V(G^*)$. Indeed, if $v\in V(G^*)\setminus \text{SMALL}$ then it cannot have more than one neighbour in $\text{BAD} \cup \text{CLOSE} \subseteq \text{BAD} \cup \text{SMALL}$, since otherwise it would be a member of $\text{BAD}$, contradicting the definition of $V(G^*)$. On the other hand, if $v\in V(G^*)\cap \text{SMALL}$ then $v$ has no neighbours in $\text{CLOSE}$, since otherwise it would be in $\text{CLOSE}$ itself, no neighbours in $\text{BAD}$ if it has degree 2 into $V(G^*)\setminus V(M)$, and no more than one neighbour in $\text{BAD}$ if it has degree more than 2 into $V(G^*)\setminus V(M)$. In any of these cases we conclude that $d_{G^*}(v,V(G^*)\setminus V(M)) \ge 2$.
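The pairing step of this construction, in code form (a minimal sketch; the argument `vertices` plays the role of $N_G(V_1)\cap V(G^*)$):

```python
def pair_in_order(vertices):
    """Pair the given vertex set in increasing order: the smallest with
    the second smallest, and so on; with an odd count the largest
    vertex stays unpaired, exactly as in the construction of M."""
    s = sorted(vertices)
    return [(s[i], s[i + 1]) for i in range(0, len(s) - 1, 2)]
```

For example, `pair_in_order({5, 2, 9, 1})` yields the two matching edges $(1,2)$ and $(5,9)$.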
\begin{lemma} \label{lemma:Gstar-is-big}
With high probability $|V(G^*)| \ge (1-(1+\frac{1}{4}\varepsilon)\cdot ce^{-c})\cdot n$.
\end{lemma}
\begin{proof}
The lemma follows immediately from property \ref{item:CLOSE-is-small} in Lemma \ref{lemma:properties}, from the fact that, with high probability, $|V(C_2)| \ge (1-(1+\frac{1}{8}\varepsilon)\cdot ce^{-c})\cdot n$ due to Lemma \ref{lemma:sizeof2core} with $\frac{1}{8}\varepsilon$, and from Lemma \ref{lemma:bad-is-small}.
\end{proof}
\begin{lemma} \label{lemma:M-is-big}
With high probability $|V(M)| \ge (1-\frac{1}{4}\varepsilon )\cdot ce^{-c}n $.
\end{lemma}
\begin{proof}
By Lemmas \ref{lemma:const-degree-vrtcs} and \ref{lemma:sizeof2core}, with high probability
$$
|V(G)\setminus (V_1 \cup V_0 \cup C_2)| \le 2c^2e^{-2c}n.
$$
By this inequality, and by the high probability bounds on the sizes of $\text{BAD}$ and $\text{CLOSE}$ we get from \ref{item:CLOSE-is-small} and Lemma \ref{lemma:bad-is-small}, we get
$$
|V(M)| \ge |V_1 \setminus (\text{CLOSE} \cup \text{BAD} )| - |V(G)\setminus (V_1 \cup V_0 \cup C_2)| \ge (1-\frac{1}{4}\varepsilon )\cdot ce^{-c}n.
$$
\end{proof}
Recall that $M$ is a (partial) pairing of $N_G(V_1)$. For every $v\in V(M)$ denote by $v'$ an arbitrary neighbour of $v$ in $V_1$, and let $V_1^*\coloneqq \{v' \mid v\in V(M)\}\subseteq V_1$. Define a matching $M'$ on $V_1^*$ by matching the pair $v',u'$ for every $\{v,u\}\in E(M)$.
In the next section we show that, with high probability, $G^*$ contains a Hamilton cycle that contains all the edges of $M$, and show that this implies Theorem \ref{main}.
\subsection{$M$-Hamiltonicity of $G^*$} \label{sec:hamiltonian}
Call a path $P\subseteq G^*$ an $M$\emph{-path} if for every edge $e\in M$, either $e\in E(P)$ or $e\cap V(P) = \emptyset$, and similarly define an $M$\emph{-cycle}. Say that a graph containing $M$ is $M$\emph{-Hamiltonian} if it contains a Hamilton $M$-cycle. Note that, by its definition, a Hamilton $M$-cycle must contain all the edges of $M$. For a non-$M$-Hamiltonian subgraph $H\subseteq G$ we say that $e\notin E(H)$ is an $M$\emph{-booster} with respect to $H$ if $H\cup \{e\}$ is $M$-Hamiltonian, or a maximum length $M$-path contained in $H\cup \{e\}$ is strictly longer than a maximum length $M$-path contained in $H$. Finally, say that a graph $\Gamma$ on $V(G^*)$ is an $M$\emph{-expander} if $M\subseteq \Gamma$, and for every $U\subseteq V(G^*)$ with $|U| \le \frac{1}{4}n$ the inequality $\left| N_{\Gamma}(U) \setminus V(M) \right| \ge 2|U|$ holds.
\begin{lemma} \label{lemma:many-boosters}
For every $M$-expander $\Gamma$ on $V(G^*)$ such that $\Gamma$ is not $M$-Hamiltonian, $\binom{V(G^*)}{2}$ contains at least $\frac{1}{32}n^2$ $M$-boosters with respect to $\Gamma$.
\end{lemma}
A similar statement is made in \cite{AF21}, Section 3.5. For completeness, and since the authors of that paper use tools somewhat different from ours, we include our own proof of this claim.
\begin{proof}
Let $\Gamma \supseteq M$ be an $M$-expander with no Hamilton $M$-cycle. Recall first the definition of a P{\'o}sa rotation of a path and its pivot (see \cite{POS}). Let $P=(v_0,...,v_{\ell})\subseteq \Gamma$ be a path. If $\{v_i,v_{\ell}\} \in E(\Gamma )$ then we say that the path $(v_0,...,v_{i-1},v_i,v_{\ell},v_{\ell -1},...,v_{i+1})$ is obtained from $P$ by a rotation with fixed point $v_0$ and pivot $v_i$.
Say that a rotation of an $M$-path $P$ is $M$\emph{-respecting} if the pivot is not a vertex in $V(M)$, and observe that if $P'$ is obtained from $P$ by an $M$-respecting rotation then $P'$ is also an $M$-path. Indeed, no new vertices were added to the path in the process, and the unique removed edge cannot be an edge of $M$, since it is incident to the pivot vertex, which is not in $V(M)$. For a path $P$ with $v$ one of its endpoints, denote by $\text{END}_M (P,v)$ the set of all endpoints (other than $v$) of paths that can be obtained from $P$ by a sequence of $M$-respecting rotations with fixed point $v$.
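In code, a single rotation with fixed point $v_0$ and pivot $v_i$ is just a reversal of the path's tail. A minimal sketch (illustrative; checking that $\{v_i,v_\ell\}$ is an edge, and that the pivot avoids $V(M)$ for an $M$-respecting rotation, is left to the caller):

```python
def rotate(path, i):
    """Rotation of path = (v0, ..., vl) with fixed point v0 and pivot
    path[i]: returns (v0, ..., v_{i-1}, v_i, vl, v_{l-1}, ..., v_{i+1}).
    Valid when {path[i], path[-1]} is an edge of the graph."""
    return path[:i + 1] + path[:i:-1]
```

Note that a rotation changes only one endpoint of the path and preserves its vertex set, which is why sequences of $M$-respecting rotations generate the endpoint sets $\text{END}_M(P,v)$ used below.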
We now claim that if $P=(v_0,...,v_{\ell})$ is a maximal $M$-path in $\Gamma$ then
$$
N_{\Gamma}(\text{END}_M (P,v_0)) \setminus V(M) \subseteq N_{P}(\text{END}_M (P,v_0)) .
$$
Indeed, assume otherwise. Since $P$ is assumed to be a maximal $M$-path, it must be that $N_{\Gamma}(\text{END}_M (P,v_0)) \setminus V(M) \subseteq V(P)$. Then there are $v_i\in N_{\Gamma}(\text{END}_M (P,v_0)) \setminus V(M)$ and $u\in \text{END}_M (P,v_0)$ such that $\{u,v_i\}\in E(\Gamma )$ and such that $v_{i-1},v_{i+1} \notin \text{END}_M (P,v_0)$. Let $P'$ be an $M$-path from $v_0$ to $u$ that can be obtained from $P$ by a sequence of $M$-respecting rotations, and let $w$ be the successor of $v_i$ on $P'$. Observe that, since $v_{i-1},v_i,v_{i+1} \notin \text{END}_M (P,v_0)$, it must be that $w\in \{v_{i-1},v_{i+1}\}$. But since $v_i\notin V(M)$, and in particular $\{v_i,w\}\notin E(M)$, a rotation of $P'$ with fixed point $v_0$ and pivot $v_i$ is $M$-respecting, implying $w\in \text{END}_M (P,v_0)$, a contradiction.
Now, since we assumed that $\Gamma$ is $M$-expanding and $|N_{P}(U)|<2|U|$ for every set $U\subset V(P)$ that includes an endpoint of $P$, this implies that if $P=(v_0,...,v_{\ell})$ is maximal then $|\text{END}_M (P,v_0)| > n/4$.
To complete the proof, for $u\in \text{END}_M (P,v)$ denote by $P_u$ an $M$-path from $v$ to $u$ obtained by $M$-respecting rotations. We claim that if $P$ is maximal and $v$ is an endpoint of $P$ then any edge of $\binom{V(G^*)}{2}$ between $u\in \text{END}_M (P,v)$ and $\text{END}_M (P_u,u)$ is an $M$-booster. Indeed, if $e$ is such an edge then $P_u\cup \{e\}$ is a cycle of length $|V(P)|$. If $P$ was a Hamilton $M$-path then we have obtained a Hamilton $M$-cycle. Otherwise, since $M$-expansion implies connectivity, there is an outgoing edge $\{x,y\}\in E(\Gamma )$, where $x\in V(P)$ and $y\notin V(P)$. Since $M$ is a matching, at least one of the edges incident to $v$ in $P_u$ is not in $E(M)$, which implies the existence of a strictly longer $M$-path in $\Gamma \cup \{e\}$. Since there are at least $\frac{1}{2}\cdot \left( \frac{n}{4} \right) ^2 = \frac{1}{32}n^2$ such edges, this finishes the proof.
\end{proof}
\begin{lemma} \label{lemma:exist-a-booster}
With high probability, for every $M$-expander $\Gamma \subseteq G^*$ that is not $M$-Hamiltonian and has at most $\frac{c}{900}n$ edges, $E(G^*)$ contains an $M$-booster with respect to $\Gamma$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:many-boosters}, $\binom{V(G^*)}{2}$ contains at least $\frac{1}{32}n^2$ $M$-boosters with respect to any such expander. We use the union bound over all choices for the triple $\Gamma , V(G^*), V(M)$ to bound the probability that $G^*$ contains such an expander but none of its $M$-boosters, here using the fact that $V(M) \subseteq V(G^*) \subseteq V(G)$, that is, the pair $V(M),V(G^*)$ defines a partition of $V(G)$ into three sets, and therefore there are at most $3^n$ choices for the pair $V(M),V(G^*)$. We get
$$
\sum _{k=n}^{\frac{c}{900}n} 3^n \cdot \binom{\binom{n}{2}}{k}\cdot p^k \cdot (1-p)^{\frac{1}{32}n^2}
\le \sum _{k=n}^{\frac{c}{900}n} 3^n \cdot \left( \frac{en^2p}{2k} \right) ^k \cdot e^{-\frac{c}{32}n}
\le cn\cdot 3^n \cdot \left( 450e \right) ^{\frac{c}{900}n} \cdot e^{-\frac{c}{32}n}
= o(1),
$$
where in the last inequality we used the fact that $\left( \frac{en^2p}{2k} \right) ^k$ is increasing when $k\le \frac{c}{900}n$, and therefore if $k\le \frac{c}{900}n$ then this size is at most $\left( 450e \right) ^{\frac{c}{900}n}$.
\end{proof}
\begin{lemma} \label{lemma:gamma}
With high probability $G^*$ contains an $M$-expander $\Gamma _0$ with at most $\frac{c}{950}n$ edges.
\end{lemma}
\begin{proof}
We describe a construction of a random subgraph $\Gamma _0$ of $G^*$ with at most $\frac{c}{950}n$ edges and prove that if $G$ has properties \ref{item:V1-is-small}-\ref{item:not-too-few-edges} then the constructed subgraph is an $M$-expander with positive probability, which implies existence.
Construct a random subgraph $\Gamma _0$ of $G^*$ as follows. For every $v\in V(G^*)$ set $E_v$ to be $E_{G^*}(v,V(G^*)\setminus V(M))$ in the case $v\in \text{SMALL} $, and otherwise set $E_v$ to be a subset of $E_{G^*}(v,V(G^*)\setminus V(M))$ of size $\frac{c}{1000}$, chosen uniformly at random. The random subgraph $\Gamma_0$ is the subgraph of $G^*$ with edge set $M\cup \bigcup _{v\in V(G^*)}E_v$. Observe that, since the minimum degree of a vertex into $V(G^*)\setminus V(M)$ in $G^*$ is at least 2, this is also true for a graph $\Gamma_0$ drawn this way, and that $e(\Gamma_0 ) \le \frac{c}{1000}n+\frac{1}{2}n\le \frac{c}{950}n$.
We bound from above the probability that $\Gamma_0$ contains a subset $U$ with at most $n/4$ vertices with less than $2|U|$ neighbours in $V(G^*)\setminus V(M)$. Let $|U|=k\le \frac{n}{4}$, and denote $U_1 = U\cap \text{SMALL} ,U_2 = U\setminus U_1$ and $k_1,k_2$ the sizes of $U_1,U_2$ respectively. Observe that $V(\Gamma_0 ) \cap \text{CLOSE} = \emptyset$ implies that \emph{(i)} every vertex in $U_2$ has at most one neighbour in $U_1 \cup N_G(U_1) \cup V(M) \subseteq \text{SMALL} \cup N_G(\text{SMALL} )$, and therefore $|N_{\Gamma_0}(U_2) \cap (U_1 \cup N_{\Gamma_0}(U_1) \cup V(M))| \le k_2$; and \emph{(ii)} distinct vertices in $\text{SMALL} (G)$ have non-intersecting neighbourhoods, and therefore $|N_{\Gamma_0}(U_1) \setminus V(M)| \ge 2k_1$.
First we show that if $k_2 \le 2\cdot 10^{-6}n$ then $|N_{\Gamma_0}(U) \setminus V(M)|\ge 2|U|$ with probability 1. Indeed, in this case we claim that $|N_{\Gamma_0}(U_2)\setminus V(M)| \ge 4k_2$. Suppose not; then $U_2 \cup (N_{\Gamma_0}(U_2)\setminus V(M))$ is a set of size at most $5k_2 \le 10^{-5}n$ which spans at least $\frac{c}{1000}\cdot k_2 - e(U_2)$ edges. But by \ref{item:not-too-many-edges}, $U_2$ spans at most $10^{-4}ck_2$ edges, and $U_2 \cup (N_{\Gamma_0}(U_2)\setminus V(M))$ spans at most $5\cdot 10^{-4}ck_2$ edges, so we get $$10^{-3}c\cdot k_2 - 10^{-4}ck_2 \le 5\cdot 10^{-4}ck_2,$$ a contradiction. Now
\begin{eqnarray*}
|N_{\Gamma_0}(U) \setminus V(M)| & \ge & |N_{\Gamma_0}(U_1)\setminus (U_2\cup V(M))| + |N_{\Gamma_0}(U_2)\setminus (N_{\Gamma_0}(U_1) \cup U_1 \cup V(M)) | \\
& \ge & 2 k_1 - k_2 + 4k_2 - k_2 \\
& \ge & 2k_1+2k_2=2|U|.
\end{eqnarray*}
Now assume that $2\cdot 10^{-6}n\le k_2 \le \frac{1}{4}n$. We show that, with positive probability, $|N_{\Gamma_0}(U) \setminus V(M)| \ge 2|U|$ for every such $U\subseteq V(G^*)$ with $|U|\le \frac{n}{4}$. Suppose otherwise; then $|V(G^*)\setminus (U \cup N_{\Gamma_0}(U) \cup V(M))| \ge \frac{1}{5}n$. In particular, by \ref{item:BIG-is-small} there are disjoint sets $U'\subseteq U_2 \setminus \text{LARGE} $ and $W\subseteq V(G^*)\setminus (U \cup N_{\Gamma_0}(U) \cup V(M))$, of sizes $10^{-6}n$ and $\frac{1}{5}n$ respectively, such that $e_{\Gamma_0}(U',W)=0$. Observe that by \ref{item:not-too-few-edges}, $e_{G^*}(U',W) \ge e_{G}(U',W) \ge 10^{-7}cn$. For a given pair of subsets $U',W$, the probability of this event is at most
\begin{eqnarray*}
\prod_{u\in U'}\mathbb{P} (e_{\Gamma_0}(u,W)=0) & \le & \prod _{u \in U'} \frac{\binom{e_{G^*}(u,V(G^*)\setminus V(M)) - e_{G^*}(u,W)}{10^{-3}c}}{\binom{e_{G^*}(u,V(G^*)\setminus V(M))}{10^{-3}c}} \\
& \le & \prod _{u \in U'} e^{-\frac{c}{1000}\cdot \frac{e_{G^*}(u,W)}{d_{G^*}(u)}}\\
& \le & \exp \left( {-\frac{c}{1000}\cdot \frac{e_{G^*}(U',W)}{20c}} \right) \\
& \le & \exp \left( -10^{-12}\cdot cn \right).
\end{eqnarray*}
Since there are at most $3^n$ choices for the pair of subsets $U',W$, by the union bound the probability that some such pair with no $\Gamma_0$-edges between its parts exists is at most $3^n\cdot \exp \left( -10^{-12}\cdot cn \right) = o(1)$ for large enough $c$. Consequently, the random subgraph $\Gamma_0$ is an $M$-expander with probability $1-o(1)$, implying that $G^*$ contains a sparse $M$-expander, as claimed.
\end{proof}
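The passage in the proof above from the ratio of binomial coefficients to the exponential uses the elementary bound $\binom{d-s}{m}/\binom{d}{m}\le e^{-ms/d}$, applied with $m=10^{-3}c$, $d=d_{G^*}(u)$ and $s=e_{G^*}(u,W)$. A quick numerical sanity check of this bound (purely illustrative, not part of the argument):

```python
from math import comb, exp

def ratio_bound_holds(d, s, m):
    """Check binom(d-s, m) / binom(d, m) <= exp(-m*s/d)."""
    return comb(d - s, m) / comb(d, m) <= exp(-m * s / d)

# Spot-check the estimate over a range of parameters.
assert all(ratio_bound_holds(d, s, m)
           for d in (10, 50, 200)
           for s in range(d)
           for m in range(d - s + 1))
```

The bound follows from $\binom{d-s}{m}/\binom{d}{m} = \prod_{i=0}^{m-1}\frac{d-s-i}{d-i} \le (1-s/d)^m \le e^{-ms/d}$.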
\begin{lemma}
With high probability $G^*$ is $M$-Hamiltonian.
\end{lemma}
\begin{proof}
This is an immediate consequence of Lemmas \ref{lemma:exist-a-booster} and \ref{lemma:gamma}. By Lemma \ref{lemma:gamma}, with high probability, $G^*$ contains an $M$-expander $\Gamma _0\subseteq G^*$ with at most $\frac{c}{950}n$ edges. By Lemma \ref{lemma:exist-a-booster}, with high probability, if $\Gamma _0$ is not $M$-Hamiltonian then $E(G^*)$ contains an $M$-booster $e_1$ with respect to $\Gamma _0$, then an $M$-booster $e_2$ with respect to $\Gamma _0 \cup \{e_1\}$, and so on. After adding at most $n$ such $M$-boosters, the resulting subgraph is already $M$-Hamiltonian, and consequently so is $G^*$.
\end{proof}
To finish our proof, we now claim that $G^*$ being $M$-Hamiltonian with high probability implies Theorem \ref{main}. Indeed, if $C$ is a Hamilton $M$-cycle in $G^*$, then since $E(G^*)\subseteq E(G\cup M)$, it is an $M$-cycle in $G\cup M$ of length $|V(G^*)|$. Therefore, the cycle $C'$ obtained by replacing every edge $\{u,v\}$ of $M$ with the 3-path $(u,u',v',v)$ is contained in $G\cup M'$, and has length
$$
|V(C')| = |V(C)| + |V(M')| = |V(G^*)| + |V(M)|,
$$
which, by Lemmas \ref{lemma:Gstar-is-big} and \ref{lemma:M-is-big}, is at least
$$
(1-(1+\varepsilon /4)\cdot ce^{-c})\cdot n + (1-\varepsilon /4)\cdot ce^{-c}n = \left( 1-\frac{1}{2}\varepsilon ce^{-c} \right) \cdot n .
$$
As discussed earlier, by removing $M'$ from this cycle we are left with at most $(1+\frac{1}{2}\varepsilon )\frac{1}{2}ce^{-c}n$ vertex-disjoint paths that cover at least $\left( 1-\frac{1}{2}\varepsilon ce^{-c} \right) \cdot n$ vertices, and therefore by using at most $\frac{1}{2}\varepsilon ce^{-c} n$ additional paths of length 0 to cover the remaining vertices we obtain a disjoint path covering of $G$ with at most $(1+\varepsilon )\frac{1}{2}ce^{-c}n$ paths, thus finishing the proof. \hfill $\square$
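The last display is a simple identity in $\varepsilon$, $c$ and $n$; a numerical spot-check (illustrative only):

```python
from math import exp, isclose

def identity_holds(eps, c, n):
    """Check (1-(1+eps/4)ce^{-c})n + (1-eps/4)ce^{-c}n = (1 - eps*ce^{-c}/2)n."""
    lhs = (1 - (1 + eps / 4) * c * exp(-c)) * n + (1 - eps / 4) * c * exp(-c) * n
    rhs = (1 - 0.5 * eps * c * exp(-c)) * n
    return isclose(lhs, rhs, rel_tol=1e-12)

assert all(identity_holds(eps, c, 1e6)
           for eps in (0.1, 0.5, 1.0) for c in (5.0, 20.0, 50.0))
```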
% End of arXiv:2210.11770, "Hamilton completion and the path cover number of sparse random graphs" (math.CO).
% arXiv:1511.00166
\title{Extension of Chebfun to periodic functions}
\begin{abstract}
Algorithms and underlying mathematics are presented for numerical computation with periodic functions via approximations to machine precision by trigonometric polynomials, including the solution of linear and nonlinear periodic ordinary differential equations. Differences from the nonperiodic Chebyshev case are highlighted.
\end{abstract}
\section{Introduction}
It is well known that trigonometric representations of periodic
functions and Chebyshev polynomial representations of nonperiodic functions
are closely related. Table~\ref{parallels} lists some of the
parallels between these two situations. Chebfun, a software
system for computing with functions and solving ordinary
differential equations~\cite{battlestref,chebbook,cacm},
relied entirely on Chebyshev representations in its
first decade. This paper describes
its extension to periodic problems initiated by
the first author and released with Chebfun Version 5.1
in December 2014.
\begin{table}[h]
\caption{Some parallels between trigonometric and Chebyshev
settings. The row of contributors' names is just a sample
of some key figures.}
\label{parallels}
\begin{center}
\begin{tabular}{c|c}
{\bf Trigonometric} & {\bf Chebyshev} \\ \hline
\vrule height 14 pt width 0 pt $t\in \pipi$ & $x\in \ones$ \\
periodic & nonperiodic \\
$\exp(ikt)$ & $T_k(x)$ \\
trigonometric polynomials & algebraic polynomials \\
equispaced points & Chebyshev points \\
trapezoidal rule & Clenshaw--Curtis quadrature \\
companion matrix & colleague matrix \\
Horner's rule & Clenshaw recurrence \\
Fast Fourier Transform & Fast Cosine Transform \\
Gauss, Fourier, Zygmund$,\dots$ & Bernstein, Lanczos, Clenshaw$,\dots$ \\
\end{tabular}
\end{center}
\end{table}
Though Chebfun is a software product, the main focus of this
paper is mathematics and algorithms rather than software
{\em per se.} What makes this subject interesting is that
the trigonometric/Chebyshev parallel, though close, is not
an identity. The experience of building a software system
based first on one kind of representation and then extending
it to the other has given the Chebfun team a uniquely intimate
view of the details of these relationships. We begin this
paper by listing ten differences between Chebyshev
and trigonometric formulations that we have found important.
This will set the stage for presentations of the problems of
trigonometric series, polynomials, and projections (Section 2),
trigonometric interpolants, aliasing, and barycentric formulas
(Section 3), approximation theory and quadrature (Section 4), and various
aspects of our algorithms (Sections 5--7).
{\em 1. One basis or two.} For working with polynomials on
$\ones$, the only basis functions one needs are the Chebyshev
polynomials $T_k(x)$. For trigonometric polynomials on $\pipi$,
on the other hand, there are two equally good equivalent
choices: complex exponentials $\exp(ikt)$, or sines and
cosines $\sin(kt)$ and $\cos(kt)$.
The former is mathematically
simpler; the latter is mathematically more elementary and
provides a framework for dealing with even and odd symmetries.
A fully useful software system for periodic functions needs
to offer both kinds of representation.
{\em 2. Complex coefficients.}
In the $\exp(ikt)$ representation, the expansion coefficients
of a real periodic function are complex. Mathematically,
they satisfy certain symmetries, and a software system needs to
enforce these symmetries to avoid
imaginary rounding errors. Polynomial approximations
of real nonperiodic functions, by contrast, do not lead to
complex coefficients.
{\em 3. Even and odd numbers of parameters.}
A polynomial of degree $n$ is determined by $n+1$ parameters,
a number that may be odd or even. A trigonometric
polynomial of degree $n$, by contrast, is determined by $2n+1$
parameters, always an odd number, as a consequence of the
$\exp(\pm int)$ symmetry. For most purposes it is unnatural
to speak of trigonometric polynomials with an even number of
degrees of freedom. Even numbers make sense, on the other hand,
in the special case of trigonometric polynomials defined by
interpolation at equispaced points, if one imposes the symmetry
condition that the interpolant of the $(-1)^j$ sawtooth should
be real, i.e., a cosine rather than a complex exponential.
Here distinct formulas are needed for the even and odd cases.
{\em 4. The effect of differentiation.}
Differentiation lowers the degree of an algebraic polynomial,
but it does not lower the degree of a trigonometric polynomial; indeed
it enhances the weight of its highest-degree components.
{\em 5. Uniform resolution across the interval.}
Trigonometric representations have uniform properties across
the interval of approximation,
but polynomials are nonuniform, with much greater
resolution power near the ends of $\ones$ than near the
middle~\cite[chap.~22]{atap}.
{\em 6. Periodicity and translation-invariance.}
The periodicity of trigonometric representations means
that a periodic chebfun constructed on $\pipi$, say, can be perfectly
well evaluated at $10\pi$ or $100\pi$; nonperiodic chebfuns have
no such global validity. Thus, whereas
interpolation and extrapolation are utterly different for
polynomials, they are not so different in the trigonometric case.
A subtler consequence of translation
invariance is explained in the footnote on p.~\pageref{footn}.
{\em 7. Operations that break periodicity.}
A function that is smooth and periodic may lose these properties
when restricted to a subinterval or subjected to
operations like rounding or absolute value.
This elementary fact has the consequence that
a number of operations on periodic chebfuns require
their conversion to nonperiodic form.
{\em 8. Good and bad bases.}
The functions $\exp(ikt)$ or $\sin(kt)$ and $\cos(kt)$
are well-behaved by any measure, and nobody would normally
think of using any other basis functions for representing
trigonometric functions. For polynomials, however,
many people would reach for the basis of monomials $x^k$
before the Chebyshev polynomials $T_k(x)$. Unfortunately,
the monomials are exponentially ill-conditioned on $\ones$:
a degree-$n$ polynomial of size $1$ on $\ones$ will typically
have coefficients of order $2^n$ when expanded in the basis
$1,x,\dots,x^n$. Use of this basis will cause trouble in
almost any numerical calculation unless $n$ is very small.
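This coefficient growth is easy to observe. A small NumPy illustration (ours, not part of the text's argument): the single Chebyshev mode $T_{20}$, which is bounded by $1$ on $\ones$, has leading coefficient $2^{19}$ when converted to the monomial basis.

```python
import numpy as np

n = 20
cheb = np.zeros(n + 1)
cheb[n] = 1.0                            # the single Chebyshev mode T_20
mono = np.polynomial.chebyshev.cheb2poly(cheb)

# |T_20(x)| <= 1 on [-1,1], yet its monomial coefficients reach 2^19:
assert np.isclose(mono[-1], 2.0 ** (n - 1))
assert np.max(np.abs(mono)) >= 2.0 ** (n - 1)
```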
{\em 9. Good and bad interpolation points.}
For interpolation of periodic functions, nobody would normally
think of using any interpolation points other than equispaced.
For interpolation of nonperiodic functions by polynomials,
however, equispaced points lead to exponentially ill-conditioned
interpolation problems~\cite{ptk,runge}. The mathematically
appropriate choice is not obvious until one learns it: Chebyshev
points, quadratically clustered near $\pm 1$.
{\em 10. Familiarity.}
All the world knows and trusts Fourier analysis.
By contrast, experience with Chebyshev polynomials is often
the domain of experts, and it is not as widely appreciated that
numerical computations based on polynomials can be trusted.
Historically, points 8 and 9 of this list have led to
this mistrust.
The book {\em Approximation Theory and Approximation Practice}~\cite{atap}
summarizes the mathematics and algorithms of Chebyshev technology
for nonperiodic functions. The present paper was written
with the goal in mind of compiling analogous information
in the trigonometric case. In particular,
Section 2 corresponds to Chapter 3 of~\cite{atap}, Section 3 to
Chapters 2, 4, and 5, and Section 4 to Chapters 6, 7, 8, 10, and 19.
\section{Trigonometric series, polynomials, and projections}
Throughout this paper, we assume $f$ is
a Lipschitz continuous periodic function on
$\pipi$.
Here and in all our statements about periodic
functions, the interval $\pipi$ should be understood
periodically: $t=0$ and $t=2\pi$ are identified,
and any smoothness assumptions apply across this point in the
same way as for $t\in (0,2\pi)$~\cite[chap.~1]{katznelson}.
It is known that $f$ has a unique trigonometric series,
absolutely and uniformly convergent, of the form
\begin{equation}
f(t) = \sum_{k=-\infty}^\infty \ck e^{ikt},
\label{series1}
\end{equation}
with Fourier coefficients
\begin{equation}
\ck = {1\over 2\pi} \int_0^{2\pi} f(t) e^{-ikt} dt.
\label{coeffs1}
\end{equation}
(All coefficients in our discussions are
in general complex, though in cases of certain symmetries
they will be purely real or imaginary.)
Equivalently, we have
\begin{equation}
f(t) = \sum_{k=0}^\infty \ak \cos(kt) + \sum_{k=1}^\infty \bk \sin(kt),
\label{series2}
\end{equation}
with $a_0^{} = c_0^{}$ and
\begin{equation}
\ak = {1\over \pi} \int_0^{2\pi} f(t) \cos(kt) dt, \quad
\bk = {1\over \pi} \int_0^{2\pi} f(t) \sin(kt) dt \qquad \rlap{$(k\ge 1).$}
\label{coeffs2}
\end{equation}
The formulas (\ref{coeffs2}) can be derived by matching
the $e^{ikt}$ and $e^{-ikt}$ terms of (\ref{series2}) with
those of (\ref{series1}), which yields the identities
\begin{equation}
\ck = {a_k^{}\over 2} + {\bk\over 2i},\quad
\cmk = {a_k^{}\over 2} - {\bk\over 2i} \qquad \rlap{$(k\ge 1),$}
\label{abccoeffs1}
\end{equation}
or equivalently,
\begin{equation}
\ak = \ck + \cmk, \quad \bk = i(\ck - \cmk) \qquad\rlap{$(k\ge 1).$}
\label{abccoeffs2}
\end{equation}
Note that if $f$ is real, then (\ref{coeffs2}) implies that
$\ak$ and $\bk$ are real. The coefficients
$\ck$ are generally complex, and (\ref{abccoeffs1}) implies that they
satisfy $\cmk = \overline{c}_k^{}$.
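The identities (\ref{abccoeffs1})--(\ref{abccoeffs2}) and the conjugate symmetry are easy to confirm numerically. A small NumPy sketch (illustrative; it computes the coefficients by the discrete method of Section 3, which is exact here because the test function is a real trigonometric polynomial of low degree):

```python
import numpy as np

N = 9                                    # odd number of sample points
t = 2 * np.pi * np.arange(N) / N
f = 2 * np.cos(t) + 3 * np.sin(2 * t)    # real trigonometric polynomial

c = np.fft.fft(f) / N                    # c[k] = c_k for 0 <= k <= n

def ck(k):
    return c[k % N]                      # negative wavenumbers wrap around

# Conjugate symmetry for real f: c_{-k} = conj(c_k)
assert np.allclose(ck(-1), np.conj(ck(1)))
# a_k = c_k + c_{-k} and b_k = i (c_k - c_{-k})
assert np.isclose(ck(1) + ck(-1), 2.0)         # a_1 = 2
assert np.isclose(1j * (ck(2) - ck(-2)), 3.0)  # b_2 = 3
```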
The {\em degree $n$ trigonometric projection} of $f$ is the function
\begin{equation}
\fn(t) = \sum_{k=-n}^n \ck e^{ikt},
\label{trigpoly1}
\end{equation}
or equivalently
\begin{equation}
\fn(t) = \sum_{k=0}^n \ak \cos(kt) + \sum_{k=1}^n \bk \sin(kt).
\label{trigpoly2}
\end{equation}
More generally, we say that a function of the form
(\ref{trigpoly1})--(\ref{trigpoly2}) is a
{\em trigonometric polynomial of degree $n$}, and
we let $\Pn$ denote the $(2n+1)$-dimensional vector
space of all such polynomials.
The trigonometric projection $\fn$
is the least-squares approximant to $f$ in $\Pn$, i.e., the
unique best approximation to $f$ in the $L^2$ norm over $\pipi$.
\section{Trigonometric interpolants, aliasing, and barycentric formulas}
Mathematically, the simplest degree $n$ trigonometric approximation of
a periodic function $f$ is its trigonometric projection
(\ref{trigpoly1})--(\ref{trigpoly2}). This approximation
depends on the values of $f(t)$ for all $t\in\pipi$ via
(\ref{coeffs1}) or (\ref{coeffs2}). Computationally, a simpler
approximation of $f$ is its degree $n$ {\em trigonometric
interpolant}, which only depends on the values at
certain interpolation points.
In our basic configuration, we wish to interpolate $f$ in equispaced
points by a function $\pn \in \Pn$.
Since the dimension of $\Pn$ is $2n+1$, there should be
$2n+1$ interpolation points. We take these
{\em trigonometric points} to be
\begin{equation}
\tk = {2\pi k\over N}, \qquad 0\le k \le N-1
\label{trigpts}
\end{equation}
with $N=2n+1$.
The trigonometric interpolation problem goes back at least to
the young Gauss's calculations of the orbit of the asteroid
Ceres in 1801~\cite{gauss}.
It is known that there exists a unique interpolant
$\pn\in\Pn$ to any set of data values $\fk = f(\tk)$.
Let us write $\pn$ in the form
\begin{equation}
\pn(t) = \sum_{k=-n}^n \ckt e^{ikt},
\label{interp1}
\end{equation}
or equivalently
\begin{equation}
\pn(t) = \sum_{k=0}^n \akt \cos(kt) + \sum_{k=1}^n \bkt \sin(kt),
\label{interp2}
\end{equation}
for some coefficients $\tilde c_{-n}^{},\dots,\tilde c_n^{}$
or equivalently $\tilde a_0^{},\dots,\tilde a_n^{}$
and $\tilde b_1^{},\dots,\tilde b_n^{}$.
The coefficients $\ckt$ and $\ck$ are related by
\begin{equation}
\ckt = \sum_{j=-\infty}^\infty c_{k + jN}^{}
\qquad\rlap{$(|k| \le n)$}
\label{aliasing1}
\end{equation}
(the {\em Poisson summation formula}),
and similarly $\akt$/$\bkt$ and $\ak$/$\bk$ are related by
$\tilde a_0^{} = \sum_{j=0}^\infty a_{jN}^{}$ and
\begin{equation}
\akt = \ak + \sum_{j=1}^\infty (a_{k+jN}^{} + a_{-k+jN}^{}),
\quad \bkt = \bk + \sum_{j=1}^\infty (b_{k+jN}^{} - b_{-k+jN}^{})
\label{aliasing2}
\end{equation}
for $1\le k \le n$.
We can derive these formulas by considering the
phenomenon of {\em aliasing.} For all~$j$, the functions
$\exp(i[k+jN]t)$ take the same
values at the trigonometric points (\ref{trigpts}). This implies that
$f$ and the trigonometric polynomial (\ref{interp1}) with
coefficients defined by (\ref{aliasing1}) take the same
values at these points. In other words, (\ref{interp1}) is the
degree $n$ trigonometric interpolant to $f$.
A similar argument justifies (\ref{interp2})--(\ref{aliasing2}).
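The aliasing phenomenon can be seen directly in a few lines (an illustration, not from the text): on the $N$-point grid the modes $e^{i(k+jN)t}$ and $e^{ikt}$ are indistinguishable, so a pure high mode is credited entirely to its alias.

```python
import numpy as np

N = 7                                   # N = 2n + 1 grid points, n = 3
t = 2 * np.pi * np.arange(N) / N
k, j = 2, 3                             # any wavenumber k and integer shift j

# exp(i(k+jN)t) and exp(ikt) agree at every grid point: they are aliases.
assert np.allclose(np.exp(1j * (k + j * N) * t), np.exp(1j * k * t))

# Hence the 7-point grid credits the pure mode e^{i 9 t} to its alias e^{i 2 t}:
c = np.fft.fft(np.exp(1j * 9 * t)) / N
assert np.isclose(c[2], 1.0) and np.allclose(np.delete(c, 2), 0.0)
```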
Another interpretation of the coefficients $\ckt, \akt, \bkt$ is
that they are equal to the approximations to $\ck, \ak, \bk$
one gets if the integrals (\ref{coeffs1}) and (\ref{coeffs2}) are
approximated by the periodic trapezoidal quadrature rule with $N$
points~\cite{tw}:
\begin{equation}
\ckt = {1\over N} \sum_{j=0}^{N-1} \fj e^{-ik\tj} ,
\label{coeffs3}
\end{equation}
\begin{equation}
\akt = {2\over N} \sum_{j=0}^{N-1} \fj \cos(k\tj), \quad
\bkt = {2\over N} \sum_{j=0}^{N-1} \fj \sin(k\tj) \qquad\rlap{$(k\ge 1).$}
\label{coeffs4}
\end{equation}
To prove this, we note that the trapezoidal rule computes
the same Fourier coefficients for $f$ as for $\pn$, since
they take the same values at the grid points;
but these must be equal to the true Fourier coefficients of $\pn$,
since the $N=(2n+1)$-point trapezoidal rule is exactly correct
for $e^{-2int}, \dots, e^{2int}$, hence for any
trigonometric polynomial of degree $2n$, hence in particular for any
trigonometric polynomial of degree $n$ times an exponential $\exp(-ikt)$
with $|k|\le n$.
From (\ref{coeffs3})--(\ref{coeffs4}) it is evident that
the discrete Fourier coefficients
$\ckt$, $\akt$, $\bkt$ can be computed by
the Fast Fourier Transform (FFT), which, in fact, Gauss invented
for this purpose.
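For instance (an illustrative NumPy check, not Chebfun code), the FFT output scaled by $1/N$ coincides with the direct evaluation of (\ref{coeffs3}):

```python
import numpy as np

N = 11
t = 2 * np.pi * np.arange(N) / N
f = np.exp(np.cos(t))                   # samples of a smooth periodic function

# Direct evaluation of (coeffs3): c~_k = (1/N) sum_j f_j exp(-i k t_j)
direct = np.array([np.sum(f * np.exp(-1j * k * t)) / N for k in range(N)])
assert np.allclose(np.fft.fft(f) / N, direct)
```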
Suppose one wishes to evaluate the interpolant $\pn(t)$ at certain points
$t$. One good algorithm is to compute
the discrete Fourier coefficients and then evaluate the
corresponding series (\ref{interp1}) or (\ref{interp2}) at the required points.
Alternatively, another good approach is to
perform interpolation directly by means of the {\em barycentric
formula} for trigonometric interpolation, introduced by
Salzer~\cite{salzer} and later simplified by Henrici~\cite{henrici}:
\begin{equation}
\pn(t) = \sum_{k=0}^{N-1} (-1)^k f_k \csc({t-\tk\over 2})
\left/\, \sum_{k=0}^{N-1} (-1)^k \csc({t-\tk\over 2}) \right.
\rlap{\quad ($N$ odd).}
\label{bary1}
\end{equation}
(If $t$ happens to be exactly equal to a grid point $\tk$,
one takes $\pn(t) = \fk$.)
The work involved in this formula
is just $O(N)$ operations per evaluation, and stability has
been established (after a small modification) in~\cite{AX}.
In practice, we find the Fourier coefficients and barycentric
formula methods equally effective.
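A minimal NumPy implementation of (\ref{bary1}) (illustrative; it omits the stability modification mentioned above, and assumes the evaluation points avoid the grid points, where the limiting value $\pn(\tk)=\fk$ applies):

```python
import numpy as np

def trig_bary(t, tk, fk):
    """Barycentric formula (bary1) for odd N; assumes t avoids the nodes tk."""
    t = np.asarray(t, dtype=float)
    num = np.zeros_like(t)
    den = np.zeros_like(t)
    for k in range(len(tk)):
        w = (-1.0) ** k / np.sin((t - tk[k]) / 2)   # (-1)^k csc((t - t_k)/2)
        num += w * fk[k]
        den += w
    return num / den

N = 7                                    # reproduces degree n = 3 exactly
tk = 2 * np.pi * np.arange(N) / N
g = lambda t: np.cos(t) + np.sin(3 * t) / 2
tt = np.linspace(0.1, 6.0, 50)           # off-grid evaluation points
assert np.allclose(trig_bary(tt, tk, g(tk)), g(tt))
```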
In the above discussion, we have assumed that the number
of interpolation points, $N$, is odd. However,
trigonometric interpolation, unlike trigonometric projection,
makes sense for an even number of degrees of freedom too
(see e.g.~\cite{faber,kress,zygmund});
it would be surprising if
FFT codes refused to accept input vectors of even lengths!
Suppose $n\ge 1$ is given and we wish to
interpolate $f$ in $N=2n$ trigonometric points (\ref{trigpts})
rather than $N=2n+1$. This is one data value less than usual for
a trigonometric polynomial of this degree,
and we can lower the number of degrees of
freedom in (\ref{interp1}) by imposing the condition
\begin{equation}
\tilde c_{-n}^{} = \tilde c_n^{}
\label{cond1}
\end{equation}
or equivalently in (\ref{interp2}) by imposing the condition
\begin{equation}
\tilde b_n^{} = 0.
\label{cond2}
\end{equation}
This amounts to prescribing that the trigonometric interpolant
through sawtoothed data of the form $\fk = (-1)^k$
should be $\cos(nt)$ rather than some other function such as
$\exp(int)$---the only choice that ensures that
real data will lead to a real interpolant.
An equivalent prescription is that an arbitrary number $N$
of data values, even or odd, will be interpolated by a linear
combination of the first $N$ terms of the sequence
\begin{equation}
1,\, \cos(t),\, \sin(t),\, \cos(2t),\, \sin(2t),\, \cos(3t),\, \dots.
\label{specialset}
\end{equation}
In this case of trigonometric interpolation with $N$ even, the formulas
(\ref{trigpts})--(\ref{coeffs4}) still hold, except that
(\ref{aliasing1}) and (\ref{coeffs3}) must be multiplied by $1/2$
for $k = \pm n$. FFT codes, however, do not
store the information that way. Instead, following (\ref{cond1}), they
compute $\tilde c_{-n}^{}$ by (\ref{coeffs3}) with $2/N$ instead
of $1/N$ out front---thus effectively storing
$\tilde c_{-n}^{} +\tilde c_n^{}$ in the place of $\tilde c_{-n}^{}$---and
then apply (\ref{interp1}) with the $k=n$ term omitted.
This gives the right result for values of $t$ on
the grid, but not at points in-between.
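The prescription for even $N$ can be checked in a few lines (illustrative NumPy, not Chebfun internals): the sawtooth $(-1)^j$ sampled at $N=2n$ points has a single nonzero discrete coefficient, at the Nyquist mode, and splitting that coefficient symmetrically as in (\ref{cond1}) yields the real interpolant $\cos(nt)$.

```python
import numpy as np

n = 4
N = 2 * n                               # an even number of points
t = 2 * np.pi * np.arange(N) / N
f = (-1.0) ** np.arange(N)              # sawtooth data f_j = (-1)^j

c = np.fft.fft(f) / N
assert np.allclose(c, np.eye(N)[n])     # single nonzero coefficient, at k = n

# Splitting the Nyquist coefficient symmetrically gives a real interpolant,
# (c[n]/2) (e^{int} + e^{-int}) = cos(nt), rather than e^{int}:
tt = np.linspace(0, 2 * np.pi, 33)
p = (c[n] / 2) * (np.exp(1j * n * tt) + np.exp(-1j * n * tt))
assert np.allclose(p, np.cos(n * tt))
```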
Note that the conditions (\ref{cond1})--(\ref{specialset}) are
very much tied to
the use of the sample points (\ref{trigpts}). If the grid
were translated uniformly, then different relationships
between $c_n^{}$ and $c_{-n}^{}$ or $a_n^{}/b_n^{}$ and
$a_{-n}^{}/b_{-n}^{}$ would be appropriate in
(\ref{cond1})--(\ref{cond2}) and
different basis functions in (\ref{specialset}), and if the grid
were not uniform, then it would be hard to justify any particular
choices at all for even $N$.
For these reasons, even numbers of degrees of freedom make sense
in equispaced interpolation but not in other trigonometric
approximation contexts, in general.
Henrici~\cite{henrici} provides a modification
of the barycentric formula (\ref{bary1}) for the equispaced case $N=2n$.
\section{Approximation theory and quadrature}
The basic question of approximation theory is, will
approximants to a function $f$ converge as the degree
is increased, and how fast?
The formulas of the last two
sections enable us to derive theorems addressing this question
for trigonometric projection and interpolation.
(For finer points of trigonometric approximation theory,
see~\cite{meinardus}.)
The smoother $f$ is, the faster its Fourier coefficients
decrease, and the faster the convergence of the approximants.
(If $f$ were merely continuous rather than
Lipschitz continuous, then the trigonometric version of the Weierstrass
approximation theorem~\cite[Section I.2]{katznelson}
would ensure that it could be approximated
arbitrarily closely by trigonometric polynomials,
but not necessarily by projection or interpolation.)
Our first theorem asserts that Fourier coefficients
decay algebraically if $f$ has a finite number of derivatives,
and geometrically
if $f$ is analytic. Here and in Theorem~\ref{thm2} below,
we make use of the notion of the {\em total variation,} $V$, of
a periodic function $\varphi$ on $\pipi$, defined
in the usual way as the supremum of all sums $\sum_{i=1}^n
|\varphi(x_i)-\varphi(x_{i-1})|$, where $\{x_i\}$
are ordered points in $\pipi$ with $x_0 = x_n$; $V$ is equal to
the $1$-norm of $\varphi'$, interpreted if necessary
as a Riemann--Stieltjes integral~\cite[Section I.4]{katznelson}.
Thus, in the notation of Theorem~\ref{thm1} below, $|\sin(t)|$ on $\pipi$,
for example, corresponds to $\nu =1$, and $|\sin(t)|^3$ to $\nu =3$.
All our theorems continue to assume that $f$ is $2\pi$-periodic.
\begin{theorem}
\label{thm1}
If $f$ is $\nu\ge 0$ times differentiable and
$f^{(\nu)}$ is of bounded variation $V$ on $\pipi$, then
\begin{equation}
|\ck| \le {V\over 2\pi |k|^{\nu+1}}.
\label{est1}
\end{equation}
If $f$ is analytic with $|f(t)|\le M$ in the open strip of
half-width $\alpha$ around the real axis in the complex $t$-plane, then
\begin{equation}
|\ck| \le M e^{-\alpha|k|} .
\label{est2}
\end{equation}
\end{theorem}
\begin{proof}
The bound (\ref{est1}) can be derived by integrating (\ref{coeffs1}) by
parts $\nu+1$ times.
Equation (\ref{est2}) can be derived by shifting the interval
of integration $\pipi$ of (\ref{coeffs1}) downward in
the complex plane for $k>0$, or upward for $k<0$, by a
distance arbitrarily close to $\alpha$; see~\cite[Section 3]{tw}.
\end{proof}
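As a concrete check of (\ref{est1}) (our illustration, not from the text): $f(t)=|\sin t|$ has $\nu=1$, and one computes that $f'$ has total variation $V=8$ on $\pipi$, so the theorem gives $|c_k^{}|\le 4/(\pi k^2)$. The FFT produces the aliased coefficients $\ckt$ rather than $\ck$, but on a fine grid the difference is negligible:

```python
import numpy as np

N = 4096
t = 2 * np.pi * np.arange(N) / N
c = np.fft.fft(np.abs(np.sin(t))) / N   # aliased coefficients, ~ c_k here

k = np.arange(1, 200)
bound = 4 / (np.pi * k**2)              # V/(2 pi k^2) with V = 8
assert np.all(np.abs(c[k]) <= bound + 1e-10)
```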
To apply Theorem~\ref{thm1} to trigonometric approximations, we
note that the error in the degree $n$ trigonometric
projection (\ref{trigpoly1}) is
\begin{equation}
f(t) - \fn(t) = \sum_{|k|>n} \ck e^{ikt} ,
\label{error1}
\end{equation}
a series that converges absolutely and uniformly by the Lipschitz
continuity assumption on $f$. Similarly, (\ref{aliasing1}) implies
that the error in trigonometric interpolation is
\begin{equation}
f(t) - \pn(t) = \sum_{|k|>n} \ck (e^{ikt} - e^{ik'\kern -1pt t}),
\label{error2}
\end{equation}
where $k' = \hbox{mod}(k+n,2n+1)-n$ is the index that $k$ gets
aliased to on the $(2n+1)$-point grid, i.e., the integer of absolute value
$\le n$ congruent to $k$ modulo $2n+1$.
These formulas give us bounds on the error in trigonometric
projection and interpolation.
\begin{theorem}
\label{thm2}
If $f$ is $\nu\ge 1$ times differentiable and
$f^{(\nu)}$ is of bounded variation $V$ on $\pipi$, then
its degree $n$ trigonometric projection and interpolant satisfy
\begin{equation}
\|f - \fn\|_\infty^{} \le {V\over \pi\kern .7pt \nu\kern .7pt n^\nu}, \qquad
\|f - \pn\|_\infty^{} \le {2V\over \pi\kern .7pt \nu \kern .7pt n^\nu}.
\label{est3}
\end{equation}
If $f$ is analytic with $|f(t)|\le M$ in the open strip of
half-width $\alpha$ around the real axis in the complex $t$-plane, they
satisfy
\begin{equation}
\|f-\fn\|_\infty^{} \le {2M e^{-\alpha n}\over e^\alpha-1}, \qquad
\|f-\pn\|_\infty^{} \le {4M e^{-\alpha n}\over e^\alpha-1} .
\label{est4}
\end{equation}
\end{theorem}
\begin{proof}
The estimates (\ref{est3}) follow by bounding the
tails (\ref{error1}) and (\ref{error2}) with (\ref{est1}), and
(\ref{est4}) likewise by bounding them with (\ref{est2}).
\end{proof}
A slight variant of this argument gives an estimate for quadrature.
If $I$ denotes the integral of a function $f$ over $\pipi$
and $\IN$ its approximation by the $N$-point
periodic trapezoidal rule, then from (\ref{coeffs1}) and (\ref{coeffs3}),
we have $I = 2\pi c_0^{}$ and $\IN = 2\pi \tilde c_0^{}$.
By (\ref{aliasing1}) this implies
\begin{equation}
\IN - I = 2\pi \sum_{j\ne 0} c_{jN}^{},
\label{trapest}
\end{equation}
which gives the following result.
\begin{theorem}
\label{thm4}
If $f$ is $\nu\ge 1$ times differentiable and
$f^{(\nu)}$ is of bounded variation $V$ on $\pipi$, then
the $N$-point periodic trapezoidal rule
approximation to its integral over $\pipi$ satisfies
\begin{equation}
|\IN - I| \le {4 V\over N^{\nu+1}}.
\label{trap1}
\end{equation}
If $f$ is analytic with $|f(t)|\le M$ in the open strip of
half-width $\alpha$ around the real axis in the complex $t$-plane,
it satisfies
\begin{equation}
|\IN-I| \le {4\pi M \over e^{\alpha N}-1}.
\label{trap2}
\end{equation}
\end{theorem}
\begin{proof}
These results follow by bounding (\ref{trapest}) with
(\ref{est1}) and (\ref{est2}) as in the proof of Theorem~\ref{thm2}.
From (\ref{est1}), the bound one gets
is $2V\zeta(\nu+1)/N^{\nu+1}$, where $\zeta$ is the
Riemann zeta function, which we have simplified by
the inequality $\zeta(\nu+1)\le \zeta(2) < 2$ for $\nu\ge 1$.
The estimate (\ref{trap2}) originates with Davis~\cite{davis};
see also~\cite{kress,tw}.
\end{proof}
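The geometric rate (\ref{trap2}) is easy to observe numerically (an illustration, not from the text): $f(t) = 1/(2+\cos t)$ is periodic and analytic, with $\int_0^{2\pi} f(t)\,dt = 2\pi/\sqrt{3}$.

```python
import numpy as np

def trap(N):
    """N-point periodic trapezoidal rule for f(t) = 1/(2 + cos t) on [0, 2pi]."""
    t = 2 * np.pi * np.arange(N) / N
    return (2 * np.pi / N) * np.sum(1.0 / (2 + np.cos(t)))

exact = 2 * np.pi / np.sqrt(3.0)
errs = [abs(trap(N) - exact) for N in (4, 8, 16, 32)]
# Each doubling of N roughly squares the error, down to machine precision.
assert errs[0] < 1e-1 and errs[1] < 1e-3 and errs[2] < 1e-7 and errs[3] < 1e-13
```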
Finally, in a section labeled ``Approximation theory'' we must mention another
famous candidate for periodic function
approximation: best approximation in the $\infty$-norm.
Here the trigonometric version of the Chebyshev alternation theorem holds,
assuming $f$ is real.
This result is illustrated below in Figure~\ref{bestapprox}.
\begin{theorem}
Let $f$ be real and continuous on
the periodic interval $\pipi$. For each $n\ge 0$,
$f$ has a unique best approximant $\pns\in \Pn$ with
respect to the norm $\|\cdot\|_\infty^{}$, and $\pns$ is characterized
by the property that the error curve $(f-\pns)(t)$ equioscillates on
$[\kern .5pt 0,2\pi)$ between at least $2n+2$ equal extrema $\pm\|f-\pns\|_\infty^{}$ of
alternating signs.
\end{theorem}
\begin{proof}
See~\cite[Section 5.2]{meinardus}.
\end{proof}
\section{Trigfun computations}
Building on the mathematics of the past three sections,
Chebfun was extended in 2014 to incorporate trigonometric
representations of periodic functions alongside its traditional
Chebyshev representations.
(Here and in
the remainder of the paper, we assume the reader is familiar
with Chebfun.)
Our convention is that a {\em trigfun} is a
representation via coefficients $\ck$ as in (\ref{trigpoly1})
of a sufficiently smooth
periodic function $f$ on an interval by a trigonometric
polynomial of adaptively determined degree, the aim always being
accuracy of 15 or 16 digits relative
to the $\infty$-norm of the function on the interval.
This follows the same pattern as traditional Chebyshev-based
chebfuns, which are representations of nonperiodic functions
by polynomials, and a trigfun is not a distinct object from a
chebfun but a particular type of chebfun. The default interval,
as with ordinary chebfuns, is $\ones$, and other intervals are
handled by the obvious linear
transplantation.\footnote{\label{footn}Actually,
one aspect of the transplantation is not obvious, an indirect
consequence of the translation-invariance
of trigonometric functions.
The nonperiodic function $f(x) = x$ defined on $[-1,1]$, for
example, has Chebyshev coefficients $a_0^{}=0$ and $a_1^{} = 1$,
corresponding to the expansion $f(x) = 0T_0^{}(x) + 1T_1^{}(x)$.
Any user will expect the transplanted function $g(x) = x-1$
defined on $[0,2]$ to have the same coefficients $a_0^{}=0$
and $a_1^{} = 1$, corresponding to the transplanted expansion
$g(x) = 0T_0^{}(x-1) + 1T_1^{}(x-1)$, and this is what Chebfun
delivers. By contrast, consider the periodic function $f(t)
= \cos t$ defined on $[-\pi,\pi]$ and its transplant $g(t) =
\cos(t-\pi) = -\cos t$ on $\pipi$. A user will expect the
expansion coefficients of $g$ to be not the same as those
of $f$, but their negatives! This is because we expect to
use the same basis functions $\exp(ikx)$ or $\cos(kx)$ and
$\sin(kx)$ on any interval of length $2\pi$, however translated.
The trigonometric part of Chebfun is designed accordingly.}
For example, here we construct and plot a trigfun for $\cos(t) +
\sin(3t)/2$ on $\pipi$:
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{fig1a.eps}
\end{center}
\caption{\label{fig1a} The trigfun representing
$f(t) = \cos(t)+ \sin(3t)/2$ on $\pipi$. One can evaluate
$f$ with\/ {\tt f(t)}, compute its definite integral with\/ {\tt sum(f)}
or its maximum with\/ {\tt max(f)},
find its roots with {\tt roots(f)}, and so on.
}
\end{figure}
{\small
\topspace\begin{verbatim}
>> f = chebfun('cos(t) + sin(3*t)/2', [0 2*pi], 'trig'), plot(f)
\end{verbatim}
\bottomspace\par}
\noindent
The plot appears in Figure~\ref{fig1a}, and
the following text output is produced, with the flag {\tt trig} signalling
the periodic representation.
{\small
\topspace\begin{verbatim}
f =
chebfun column (1 smooth piece)
interval length endpoint values trig
[ 0, 6.3] 7 1 1
\end{verbatim}
\bottomspace\par}
\noindent
We see that Chebfun has determined that this function
$f$ is of length $N=7$. This means
that there are $7$ degrees of freedom, i.e., $f$ is a trigonometric polynomial
of degree $n=3$, whose coefficients we can extract with
{\tt c = trigcoeffs(f)}, or in cosine/sine form
with {\tt [a,b] = trigcoeffs(f)}.
Note that the Chebfun constructor does not analyze its input
symbolically, but just evaluates the function at trigonometric points
(\ref{trigpts}), and from this information the degree
and the values of the coefficients are determined.
The constructor also detects when a function is real.
A trigfun constructed in the ordinary manner
is always of odd length $N$, corresponding to a trigonometric
polynomial of degree $n = (N-1)/2$, though it is
possible to make even-length trigfuns by explicitly specifying $N$.
To construct a trigfun, Chebfun samples the function on grids of
size $16, 32, 64,\dots$ and tests the resulting discrete Fourier
coefficients for convergence down to relative machine
precision. (Powers of 2 are used
since these are particularly efficient for the FFT, even though
the result will ultimately be trimmed to an odd number
of points. As with non-trigonometric Chebfun,
the engineering details are complicated and under ongoing
development.) When convergence is achieved, the
series is chopped at an appropriate point and the degree reduced
accordingly.
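In outline, this sample-double-and-chop loop looks as follows. The NumPy sketch below is a simplified stand-in, not Chebfun's actual code; Chebfun's chopping heuristic is considerably more refined, and the tolerance and tail test here are illustrative choices of ours.

```python
import numpy as np

def trig_resolve(f, tol=1e-12, max_N=2**16):
    """Adaptively resolve a smooth 2*pi-periodic function f by a
    trigonometric polynomial: sample on grids of size 16, 32, 64, ...,
    and stop when the tail of the Fourier coefficients is negligible."""
    N = 16
    while N <= max_N:
        t = 2 * np.pi * np.arange(N) / N            # equispaced points on [0, 2*pi)
        c = np.fft.fftshift(np.fft.fft(f(t))) / N   # coefficients for wavenumbers -N/2..N/2-1
        k = np.arange(-N // 2, N // 2)
        scale = np.abs(c).max()
        # crude convergence test: the highest-wavenumber part of the
        # spectrum should be below tolerance (Chebfun's real test is subtler)
        if np.abs(c[np.abs(k) >= 3 * N // 8]).max() <= tol * scale:
            n = np.abs(k[np.abs(c) > tol * scale]).max()   # chop to degree n
            return c[(k >= -n) & (k <= n)], n
        N *= 2
    raise RuntimeError("function not resolved -- is it smoothly periodic?")
```

For $\exp(\sin t)$ this loop stops on the $N=32$ grid and chops to a degree close to the $13$ reported above (the exact degree depends on the tolerance used).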
Once a trigfun has been created, computations can be carried out
in the usual Chebfun fashion via overloads of familiar
MATLAB commands. For example,
{\small
\topspace\begin{verbatim}
>> sum(f.^2)
ans = 3.926990816987241
\end{verbatim}
\bottomspace\par}
\noindent
This number is computed by integrating the trigonometric
representation of $f^2$, i.e., by returning the number $2\pi c_0^{}$
corresponding to the trapezoidal rule applied to $f^2$ as
described around Theorem~\ref{thm4}.
The default 2-norm is the square root of this result,
{\small
\topspace\begin{verbatim}
>> norm(f)
ans = 1.981663648803005
\end{verbatim}
\bottomspace\par}
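In NumPy terms (a sketch of the idea, not Chebfun's code), the periodic trapezoidal rule behind these two computations is nothing more than $2\pi$ times the mean of equispaced samples, and it is exact as soon as the grid resolves the integrand's trigonometric degree:

```python
import numpy as np

def trapz_periodic(f, N=64):
    """Integrate a 2*pi-periodic function over one period by the
    trapezoidal rule: 2*pi times the mean of N equispaced samples."""
    t = 2 * np.pi * np.arange(N) / N
    return 2 * np.pi * np.mean(f(t))

# f^2 for f(t) = cos(t) + sin(3t)/2 is a trigonometric polynomial of
# degree 6, so the rule is exact: the integral is pi + pi/4 = 5*pi/4.
val = trapz_periodic(lambda t: (np.cos(t) + np.sin(3 * t) / 2) ** 2)
```

The value $5\pi/4 = 3.9269908169872414\ldots$ agrees with the {\tt sum} output above.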
\noindent
Derivatives of functions are computed
by the overloaded command {\tt diff}.
(In the unusual case where a trigfun has been constructed of
even length, differentiation will increase its length by $1$.)
The zeros of $f$ are found with {\tt roots}:
{\small
\topspace\begin{verbatim}
>> roots(f)
ans =
1.263651122898791
4.405243776488583
\end{verbatim}
\bottomspace\par}
\noindent
and Chebfun determines maxima and minima by first computing
the derivative, then checking all of its roots:
{\small
\topspace\begin{verbatim}
>> max(f)
ans = 1.389383416980387
\end{verbatim}
\bottomspace\par}
\noindent
Concerning the algorithm used for periodic rootfinding,
one approach would be to solve a companion
matrix eigenvalue problem, and $O(n^2)$ algorithms for
this task have recently been developed~\cite{amvw}.
When development of these methods settles down, they may be incorporated
in Chebfun. For the moment, trigfun rootfinding
is done by first converting the problem to nonperiodic Chebfun form using
the standard Chebfun constructor,
whereupon we take advantage of Chebfun's $O(n^2)$ recursive interval
subdivision strategy~\cite{bt}.
This shifting to subintervals for rootfinding is an
example of an operation that breaks periodicity as mentioned in
item 7 of the introduction.
The main purpose of the periodic part of Chebfun is to
enable machine precision computation with periodic
functions that are not exactly trigonometric polynomials.
For example,
$\exp(\sin t)$ on $\pipi$ is represented by a trigfun
of length $27$, i.e., a
trigonometric polynomial of degree 13:
{\small
\topspace\begin{verbatim}
g = chebfun('exp(sin(t))', [0 2*pi], 'trig')
g =
chebfun column (1 smooth piece)
interval length endpoint values trig
[ 0, 6.3] 27 1 1
\end{verbatim}
\bottomspace\par}
\noindent
The coefficients can be plotted on a log scale with the
command {\tt plotcoeffs(g)},
and the result, shown in
Figure~\ref{fig2}, reveals the faster-than-geometric decay
characteristic of an entire function.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{fig2.eps}
\end{center}
\caption{\label{fig2} Absolute values of the Fourier coefficients of
the trigfun for\/ $\exp(\sin t)$ on $\pipi$.
This is an entire function (analytic throughout
the complex $t$-plane), and in accordance with Theorem~$\ref{thm1}$, the
coefficients decrease faster than geometrically.}
\end{figure}
Figure~\ref{twomore} shows trigfuns and coefficient plots
for $f(t)=\tanh(5\cos(5t))$ and
$g(t)=\exp(-1/\max\{0, 1-t^2/4\})$ on
$[-\pi, \pi]$. The latter is $C^\infty$ but not analytic.
Figure~\ref{signals} shows a further pair of examples that
we call an ``AM signal'' and an ``FM signal''. These are
among the preloaded functions available with
{\tt cheb.gallerytrig}, Chebfun's trigonometric analogue
of the MATLAB {\tt gallery} command.
\begin{figure}
\begin{center}
\includegraphics[scale=.6]{twomore.eps}
\end{center}
\caption{\label{twomore} Trigfuns of\/
$\tanh(5\cos(5t))$ and\/ $\exp(-1/\max\{0, 1-t^2/4\})$
(upper row) and corresponding absolute values
of Fourier coefficients (lower row).}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[scale=.6]{signals.eps}
\end{center}
\caption{\label{signals} Trigfuns of the ``AM signal''
$\cos(50t)(1+\cos(5t)/5)$ and the ``FM signal''
$\cos(50t+4\sin(5t))$ (upper row) and corresponding absolute values
of Fourier coefficients (lower row).}
\end{figure}
Computation with trigfuns, as with nonperiodic chebfuns,
is carried out by
a continuous analogue of floating point arithmetic~\cite{cacm}.
To illustrate the ``rounding'' process involved, note that the
trigfuns of Figure~\ref{twomore} have degrees
555 and 509, respectively. Mathematically,
their product is of degree 1064. Numerically, however,
Chebfun achieves 16-digit accuracy with degree 556.
Here is a more complicated example of Chebfun rounding adapted
from~\cite{cacm}, where it is computed with nonperiodic
representations.
{\small
\topspace\begin{verbatim}
f = chebfun(@(t) sin(pi*t), 'trig')
s = f
for j = 1:15
f = (3/4)*(1 - 2*f.^4), s = s + f
end
\end{verbatim}
\bottomspace\par}
\noindent
This program takes 15 steps of an iteration that in principle
quadruples the degree at each step, giving a function
$s$ at the end of degree
$4^{15} = \hbox{1,073,741,824}$. In actuality,
however, because of the rounding to 16 digits,
the degree comes out nearly a million times smaller, at 1148. This function
is plotted in Figure~\ref{logistic}. Following~\cite{cacm},
we can compute the roots of $s-8$
in half a second on a desktop machine:
{\small
\topspace\begin{verbatim}
>> roots(s-8)
ans =
-0.992932107411876
-0.816249934290177
-0.798886729723433
-0.201113270276572
-0.183750065709828
-0.007067892588112
0.346696120418255
0.401617073482111
0.442269489632475
0.557730510367530
0.598382926517899
0.653303879581760
\end{verbatim}
\bottomspace\par}
\noindent
The integral with {\tt sum(s)} is
$15.265483825826763$, correct except in the last two digits.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{logistic.eps}
\end{center}
\caption{\label{logistic} After fifteen steps of an iteration,
this periodic function has degree $1148$ in its Chebfun representation
rather than the mathematically exact figure {\rm 1,073,741,824}.}
\end{figure}
If one tries to construct a trigfun by sampling
a function that is not smoothly periodic,
Chebfun will by default go up to length $2^{16}$ and then
issue a warning:
{\small
\topspace\begin{verbatim}
>> h = chebfun('exp(t)', [0 2*pi], 'trig')
Warning: Function not resolved using 65536 pts.
Have you tried a non-trig representation?
\end{verbatim}
\bottomspace\par}
\noindent
On the other hand, computations that are known
to break periodicity or smoothness will result in the representation
being automatically cast from a trigfun to a chebfun.
For example, here we define $g$ to be the
absolute value of the function
$f(t) = \cos(t) + \sin(3t)/2$ of Figure~\ref{fig1a}.
The system detects that $f$ has zeros,
implying that $g$ will probably not be smooth, and
accordingly constructs it not as a trigfun but as an ordinary chebfun with
several pieces:
{\small
\topspace\begin{verbatim}
>> f = chebfun('cos(t) + sin(3*t)/2', [0 2*pi], 'trig'), g = abs(f)
g =
chebfun column (3 smooth pieces)
interval length endpoint values
[ 0, 1.3] 17 1 3.8e-16
[ 1.3, 4.4] 25 1.8e-15 7.3e-16
[ 4.4, 6.3] 20 -6.6e-18 1
Total length = 62.
\end{verbatim}
\bottomspace\par}
\noindent
Similarly, if you add or multiply a trigfun and a chebfun,
the result is a chebfun.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{fig1b.eps}
\end{center}
\caption{\label{fig1b} When the absolute value of
the trigfun $f$ of Figure~$\ref{fig1a}$ is computed,
the result is a nonperiodic chebfun with three smooth pieces.}
\end{figure}
\section{Applications}
Analysis of periodic functions and signals is one of the oldest
topics of mathematics and engineering. Here we give
six examples of how a system for automating such computations may be useful.
{\em Complex contour integrals.}
Smooth periodic integrals arise ubiquitously in complex analysis.
For example, suppose we wish to determine the number of zeros of
$f(z) = \cos(z) - z$ in the
complex unit disk. The answer is given by
\begin{equation}
m = {1\over 2\pi i} \int {f'(z)\over f(z) } dz
= {1\over 2\pi i} \int {1\over f(z) } {df\over dt} dt
\label{contourint}
\end{equation}
if $z = \exp(it)$ with $t\in \pipi$.
With periodic Chebfun, we can compute $m$ by
{\small
\topspace\begin{verbatim}
>> z = chebfun('exp(1i*t)', [0 2*pi], 'trig');
>> f = cos(z) - z;
>> m = real(sum(diff(f)./f)/(2i*pi))
m = 1.000000000000000
\end{verbatim}
\bottomspace\par}
\noindent
Changing the integrand from $f'(z)/f(z)$ to $z f'(z)/f(z)$
gives the location of the zero, correct to all digits displayed.
{\small
\topspace\begin{verbatim}
>> z0 = real(sum(z.*diff(f)./f)/(2i*pi))
z0 = 0.739085133215161
\end{verbatim}
\bottomspace\par}
\noindent
(The {\tt real} commands are included to remove imaginary
rounding errors.)
For wide-ranging extensions of calculations like these, including
applications to matrix eigenvalue problems, see~\cite{akt}.
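The same two integrals are easy to reproduce outside Chebfun. In the NumPy sketch below the trapezoidal rule is applied directly to samples on the unit circle; the function and the expected answers are those of the example above.

```python
import numpy as np

# (1/(2*pi*i)) times the contour integral of g(z) dz over the unit
# circle, z = exp(i*t): with dz = i*z dt this is just the mean of
# g(z)*z over equispaced t, and the trapezoidal rule converges
# geometrically since the integrand is analytic near the circle.
def contour_mean(g, N=256):
    z = np.exp(1j * 2 * np.pi * np.arange(N) / N)
    return np.mean(g(z) * z)

f  = lambda z: np.cos(z) - z          # analytic; one zero in the unit disk
fp = lambda z: -np.sin(z) - 1         # its derivative
m  = contour_mean(lambda z: fp(z) / f(z)).real        # number of zeros
z0 = contour_mean(lambda z: z * fp(z) / f(z)).real    # location of the zero
```

With 256 sample points both quantities come out correct essentially to machine precision.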
{\em Linear algebra.}
Chebfun does not work from explicit formulas: to construct a
function, it is only necessary to be able to evaluate it.
This is an extremely useful feature for linear algebra calculations.
For example, the matrix
\begin{equation}
\def\r#1{\phantom{xx}\llap{$#1$}}
\def\rr#1{\phantom{xx,}\llap{$#1$}}
A = {1\over 3}
\pmatrix{
\r{2} & \rr{-2i} & \r{1} & \r{1} \cr
\r{2i} & \rr{-2} & \r{0} & \r{2} \cr
\r{-2} & \rr{0} & \r{1} & \r{2} \cr
\r{0} & \rr{i} & \r{0} & \r{2}
}
\end{equation}
has all its eigenvalues in the unit disk. A question with
the flavor of control and stability theory is, what is the
maximum resolvent norm $\|(zI-A)^{-1}\|$ for $z$ on the unit
circle? We can calculate the answer with the code below, which
constructs a periodic chebfun of degree $n=569$. The maximum is
$27.68851$, attained with $z = \exp(0.454596\kern.5pt i)$.
{\small
\topspace\begin{verbatim}
A = [2 -2i 1 1; 2i -2 0 2; -2 0 1 2; 0 1i 0 2]/3, I = eye(4)
ff = @(t) 1/min(svd(exp(1i*t)*I-A))
f = chebfun(ff, [0 2*pi], 'trig', 'vectorize')
[maxval,maxpos] = max(f)
\end{verbatim}
\bottomspace\par}
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{resolve.eps}
\end{center}
\caption{\label{resolve} Resolvent norm $\|(zI-A)^{-1}\|$
for a $4\times 4$ matrix $A$ with $z= e^{it}$ on
the unit circle.}
\end{figure}
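Without Chebfun, the same quantity can be estimated by brute-force sampling. The sketch below simply evaluates $1/\sigma_{\min}(e^{it}I-A)$ on a fine grid; the adaptive Chebfun computation above is far more economical, needing only a degree-569 representation.

```python
import numpy as np

A = np.array([[2, -2j, 1, 1],
              [2j, -2, 0, 2],
              [-2, 0, 1, 2],
              [0, 1j, 0, 2]]) / 3
I4 = np.eye(4)

def resolvent_norm(t):
    """|| (zI - A)^{-1} || = 1 / sigma_min(zI - A) with z = e^{it}."""
    z = np.exp(1j * t)
    return 1.0 / np.linalg.svd(z * I4 - A, compute_uv=False)[-1]

t = np.linspace(0, 2 * np.pi, 20001)
vals = np.array([resolvent_norm(s) for s in t])
tmax, vmax = t[vals.argmax()], vals.max()
```

The grid maximum reproduces the values reported above to the resolution of the grid.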
{\em Circular convolution and smoothing.}
The circular or periodic convolution of two functions
$f$ and $g$ with period $T$ is defined by
\begin{equation}
(f*g)(t) := \int_{t_0}^{t_0 + T} g(s)f(t-s)ds,
\end{equation}
where $t_0$ is arbitrary. Circular convolutions can be
computed for trigfuns with the {\tt circconv} function, whose algorithm
consists of coefficientwise multiplication in Fourier space.
For example, here is a trigonometric interpolant through
$201$ samples of a smooth function plus noise, shown
in the upper-left panel of Figure~\ref{noisy}.
{\small
\topspace\begin{verbatim}
N = 201, tt = trigpts(N, [-pi pi])
ff = exp(sin(tt)) + 0.05*randn(N,1)
f = chebfun(ff, [-pi pi], 'trig')
\end{verbatim}
\bottomspace\par}
\noindent
The high wave numbers can be smoothed by convolving
$f$ with a mollifier. Here we use
a Gaussian of standard deviation $\sigma=0.1$ (numerically
periodic for $\sigma\le 0.35$). The result is
shown in the upper-right panel of the figure.
{\small
\topspace\begin{verbatim}
gaussian = @(t,sigma) 1/(sigma*sqrt(2*pi))*exp(-0.5*(t/sigma).^2)
g = @(sigma) chebfun(@(t) gaussian(t,sigma), [-pi pi], 'trig')
h = circconv(f, g(0.1))
\end{verbatim}
\bottomspace\par}
\begin{figure}
\begin{center}
\includegraphics[scale=.6]{noisy.eps}
\end{center}
\caption{\label{noisy} Circular convolution of a noisy function with
a smooth mollifier.}
\end{figure}
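The underlying algorithm is easy to emulate with the FFT. Sampling both functions on the same equispaced grid, circular convolution becomes coefficientwise multiplication; the factor $2\pi/N$ makes the discrete sum a trapezoidal approximation to the integral, exact once the grid resolves both factors. A NumPy sketch of the idea, not Chebfun's {\tt circconv} implementation:

```python
import numpy as np

def circconv_samples(fvals, gvals):
    """Circular convolution (period 2*pi) of two periodic functions given
    by samples on the same equispaced grid of size N: coefficientwise
    multiplication in Fourier space, scaled by the grid spacing 2*pi/N."""
    N = len(fvals)
    return (2 * np.pi / N) * np.fft.ifft(np.fft.fft(fvals) * np.fft.fft(gvals)).real

# check against an integral known in closed form:
# (cos * cos)(t) = integral of cos(s) cos(t-s) ds = pi cos(t)
t = 2 * np.pi * np.arange(32) / 32
h = circconv_samples(np.cos(t), np.cos(t))
```

Since the integrand here is a trigonometric polynomial of degree 2, the 32-point computation is exact to rounding error.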
{\em Fourier coefficients of non-smooth functions.}
A function $f$ that is not smoothly periodic will at best
have a very slowly converging trigonometric series, but still,
one may be interested in its Fourier coefficients. These can
be computed by applying {\tt trigcoeffs} to a chebfun representation
of $f$ and specifying how many coefficients are required; the
integrals (\ref{coeffs1}) are then evaluated numerically by
Chebfun's standard method of Clenshaw--Curtis quadrature.
For example,
Figure~\ref{runge} shows a portrayal of the Gibbs phenomenon
from Runge's 1904 book together with its Chebfun equivalent computed
in a few seconds with the commands
{\small
\topspace\begin{verbatim}
t = chebfun('t', [-pi pi]), f = (abs(t) < pi/2)
for N = 2*[1 3 5 7 21 79] + 1
c = trigcoeffs(f, N)
fN = chebfun(c, [-pi pi], 'coeffs', 'trig')
plot(fN, 'interval', [0 4*pi])
end
\end{verbatim}
\bottomspace\par}
\begin{figure}
\begin{center}
\includegraphics[scale=.58]{rungefig.eps}
\includegraphics[scale=.327]{runge.eps}~~
\end{center}
\caption{\label{runge} On the left, a figure from
Runge's\/ $1904$ book\/
{\em Theorie und Praxis der Reihen\/}~{\rm\cite{rungebook}}.
On the right, the
equivalent computed with periodic Chebfun. Among other things, this
figure illustrates that a trigfun can be accurately evaluated outside
its interval of definition.}
\end{figure}
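Outside Chebfun, the same coefficients can be computed by applying ordinary quadrature to the defining integrals $(\ref{coeffs1})$, integrating over the smooth pieces of $f$ separately. The sketch below (ours; Chebfun uses Clenshaw--Curtis, here we use Gauss--Legendre from NumPy) treats the square wave of this example, which equals $1$ on $[-\pi/2,\pi/2]$ and $0$ elsewhere; the closed form $c_k = \sin(k\pi/2)/(\pi k)$ is classical.

```python
import numpy as np

def square_wave_coeffs(K, nodes=60):
    """Fourier coefficients c_k, k = -K..K, of the 2*pi-periodic square
    wave f(t) = 1 for |t| < pi/2, 0 otherwise: integrate e^{-ikt} over
    the single smooth piece [-pi/2, pi/2] with Gauss-Legendre quadrature."""
    x, wts = np.polynomial.legendre.leggauss(nodes)
    t = (np.pi / 2) * x                  # map [-1, 1] -> [-pi/2, pi/2]
    w = (np.pi / 2) * wts
    k = np.arange(-K, K + 1)
    return (np.exp(-1j * np.outer(k, t)) @ w) / (2 * np.pi)

c = square_wave_coeffs(10)
```

For this $f$ the exact values are $c_0 = 1/2$ and $c_k = \sin(k\pi/2)/(\pi k)$ for $k\ne 0$, and the computed coefficients match to machine precision.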
{\em Interpolation in unequally spaced points.}
Very little attention has been given to trigonometric
interpolation in unequally spaced points, but the
barycentric formula (\ref{bary1}) for odd $N$ and Henrici's
generalization for even $N$ have been extended
to this setting by Salzer and Berrut~\cite{berrut}. Chebfun
makes these formulas available through the
command {\tt chebfun.interp1}, just as has long been
true for interpolation by algebraic polynomials. For
example, the code
{\small
\topspace\begin{verbatim}
t = [-3 -2 -1 0 .5 1 1.5 2 2.5]
p = chebfun.interp1(t, abs(t), 'trig', [-pi pi])
\end{verbatim}
\bottomspace\par}
\noindent
interpolates the function $|t|$ on $[-\pi,\pi]$
in the 9 points indicated by a trigonometric
polynomial of degree $n=4$.
The interpolant is shown
in Figure~\ref{interpdemo} together with the analogous
curve for equispaced points.
\begin{figure}
\begin{center}
\includegraphics[scale=.6]{interpdemo.eps}
\end{center}
\caption{\label{interpdemo}
Trigonometric interpolation of $|t|$ in unequally spaced
points with
the generalized barycentric formula implemented in {\tt chebfun.interp1}.}
\end{figure}
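For odd $N$ in arbitrary distinct points $t_0,\dots,t_{N-1}$, the barycentric formula takes the form $p(t) = \sum_k w_k f_k \csc\bigl((t-t_k)/2\bigr) \big/ \sum_k w_k \csc\bigl((t-t_k)/2\bigr)$ with weights $w_k = \prod_{j\ne k} 1/\sin\bigl((t_k-t_j)/2\bigr)$. A NumPy sketch (ours, not Chebfun's {\tt interp1} code; evaluation exactly at a node would need a special case, omitted here):

```python
import numpy as np

def trig_interp_odd(tk, fk):
    """Trigonometric interpolant through an odd number of distinct points
    tk (Salzer's barycentric formula); returns a callable p(t)."""
    tk = np.asarray(tk, dtype=float)
    fk = np.asarray(fk, dtype=float)
    # barycentric weights w_k = 1 / prod_{j != k} sin((t_k - t_j)/2)
    d = np.sin((tk[:, None] - tk[None, :]) / 2)
    np.fill_diagonal(d, 1.0)
    w = 1.0 / d.prod(axis=1)

    def p(t):
        t = np.atleast_1d(np.asarray(t, dtype=float))
        c = w / np.sin((t[:, None] - tk[None, :]) / 2)   # w_k csc((t - t_k)/2)
        return (c @ fk) / c.sum(axis=1)
    return p

# degree-4 interpolant through the 9 points of the example above;
# cos(t) lies in the interpolation space, so it is reproduced exactly
tk = np.array([-3, -2, -1, 0, .5, 1, 1.5, 2, 2.5])
p = trig_interp_odd(tk, np.cos(tk))
```

Since the nine points determine a unique trigonometric interpolant of degree $4$, any function already in that space, such as $\cos t$, is recovered exactly.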
{\em Best approximation, CF approximation, and rational functions.}
Chebfun has long had a dual role: it is a tool
for computing with functions, and also a tool for exploring
principles of approximation theory, including
advanced ones. The trigonometric side of
Chebfun extends this second aspect to periodic problems. For example,
Chebfun's new
{\tt trigremez} command can compute best approximants
with equioscillating error curves as described in
Theorem~\ref{thm4}~\cite{javed}.
Here is an example that generates the error curve displayed
in Figure~\ref{bestapprox}, with error $12.1095909$.
{\small
\topspace\begin{verbatim}
f = chebfun('1./(1.01-sin(t-2))', [0 2*pi], 'trig')
p = trigremez(f,10), plot(f-p)
\end{verbatim}
\bottomspace\par}
\noindent
Chebfun is also acquiring other capabilities for trigonometric
polynomial and rational approximation, including
Carath\'eodory--Fej\'er (CF) near-best approximation via singular
values of Hankel matrices, and these will be described elsewhere.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{bestapprox.eps}
\end{center}
\caption{\label{bestapprox} Error curve in degree\/ $n=10$ best
trigonometric approximation to
$f(t) = 1/(1.01-\sin(t-2))$ over $\pipi$. The curve equioscillates between
$2n+2 = 22$ alternating extrema.}
\end{figure}
\section{Periodic ODEs, operator exponentials, and eigenvalue problems}
A major capability of Chebfun is
the solution of linear and nonlinear ordinary differential
equations (ODEs), as well as integral equations,
by applying the backslash command to a ``chebop'' object.
We have extended these capabilities to periodic problems, both
scalars and systems.
See~\cite{eastham}
for the theory of existence and uniqueness
of solutions to periodic ODEs, which goes back to Floquet in the 1880s,
a key point being the avoidance of nongeneric
configurations corresponding to eigenmodes.
Chebfun's algorithm for linear ODEs amounts to
an automatic spectral collocation method wrapped up so that
the user need not be aware of the discretization. With
standard Chebfun, these are Chebyshev spectral methods, and now
with the periodic extension, they are Fourier
spectral methods~\cite{boydspec}.
The problem is solved on grids of size 32, 64, and so on until
the system judges that the Fourier coefficients have converged
down to the level of noise, and the series is then truncated
at an appropriate point.
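The essence of a Fourier spectral method is that differentiation is diagonal in Fourier space, so a constant-coefficient periodic linear ODE reduces to a division of coefficients; variable coefficients, such as the $\cos(t)\kern.5pt u$ term in the example below, couple the modes and lead to a banded rather than diagonal system. Here is a minimal NumPy illustration (ours, not Chebfun's code) on the constant-coefficient problem $u' + 2u = \cos t$, whose exact periodic solution is $u = (2\cos t + \sin t)/5$:

```python
import numpy as np

N = 32
t = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)        # integer wavenumbers 0, 1, ..., -1

# Solve u' + 2u = cos(t) with periodic boundary conditions:
# in Fourier space, (i*k + 2) * u_hat_k = f_hat_k for every mode k.
f_hat = np.fft.fft(np.cos(t))
u = np.fft.ifft(f_hat / (1j * k + 2)).real
```

Because the right-hand side is a degree-1 trigonometric polynomial, the computed $u$ agrees with the exact solution to rounding error.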
For example, consider the problem
\begin{equation}
0.001(u'' + u') - \cos(t) u = 1, \qquad 0\le t \le 6\pi
\label{odelin}
\end{equation}
with periodic boundary conditions.
The following Chebfun code produces the solution
plotted in Figure~\ref{figodelin} in half a second
on a laptop. Note that the trigonometric discretizations are
invoked by the flag \verb|L.bc = 'periodic'|.
{\small
\topspace\begin{verbatim}
L = chebop(0,6*pi)
L.op = @(x,u) 0.001*diff(u,2) + 0.001*diff(u) - cos(x).*u
L.bc = 'periodic'
u = L\1
\end{verbatim}
\bottomspace\par}
\begin{figure}
\begin{center}
\includegraphics[scale=.55]{odelin.eps}
\end{center}
\caption{\label{figodelin} Solution of the linear periodic ODE\/
$(\ref{odelin})$ as a trigfun of degree\/ $168$, computed by
an automatic Fourier spectral method.}
\end{figure}
\noindent
This trigfun is of degree 168,
and the residual reported by {\tt norm(L*u-1)} is
$1\times 10^{-12}$.
As always, $u$ is a chebfun; its maximum, for example, is
$\hbox{\tt max(u)} =66.928$.
For periodic nonlinear ODEs, Chebfun applies trigonometric
analogues of the algorithms developed by Driscoll and Birkisson
in the Chebyshev case~\cite{bd1,bd2}. The whole solution is carried out
by a Newton or damped Newton iteration formulated
in a continuous mode (``solve then discretize'' rather than
``discretize then solve''), with Jacobian matrices replaced
by Fr\'echet derivative operators implemented by means of automatic
differentiation and automatic spectral discretization.
For example, suppose we seek a solution of the nonlinear problem
\begin{equation}
0.004u'' + uu' - u = \cos(2\pi t), \qquad t\in [-1,1]
\label{nonlinprob}
\end{equation}
with periodic boundary conditions. After seven Newton
steps, the Chebfun commands below
produce the result shown in Figure~\ref{fignonlin}, of
degree $n = 362$, and
the residual norm {\tt norm(N(u)-rhs,'inf')} is reported as
$8\times 10^{-9}$.
{\small
\topspace\begin{verbatim}
N = chebop(-1,1)
N.op = @(x,u) .004*diff(u,2) + u.*diff(u) - u
N.bc = 'periodic'
rhs = chebfun('cos(2*pi*t)', 'trig')
u = N\rhs
\end{verbatim}
\bottomspace\par}
\begin{figure}
\begin{center}
\includegraphics[scale=.55]{odenonlin.eps}
\end{center}
\caption{\label{fignonlin} Solution of the nonlinear periodic ODE\/
$(\ref{nonlinprob})$ computed by iterating the Fourier
spectral method within a continuous form of
Newton iteration. Executing\/
{\tt max(diff(u))} shows that the maximum of $u'$ is
$32.094$.}
\end{figure}
Chebfun's overload of the MATLAB {\tt eigs} command solves
linear ODE eigenvalue problems by, once again, automated spectral
collocation discretizations~\cite{driscoll}. This too has
been extended to periodic problems, with Fourier discretizations
replacing Chebyshev. For example, a famous periodic eigenvalue
problem is the Mathieu equation
\begin{equation}
-u'' + 2 q \cos(2t) u = \lambda u, \qquad t\in \pipi,
\label{mathieueq}
\end{equation}
where $q$ is a parameter. The commands below give
the plot shown in Figure~\ref{mathieu}.
{\small
\topspace\begin{verbatim}
q = 2
L = chebop(@(x,u) -diff(u,2)+2*q*cos(2*x).*u, [0 2*pi], 'periodic')
[V,D] = eigs(L,5), plot(V)
\end{verbatim}
\bottomspace\par}
\begin{figure}
\begin{center}
\includegraphics[scale=.55]{mathieu.eps}
\end{center}
\caption{\label{mathieu} First five eigenfunctions of the Mathieu
equation $(\ref{mathieueq})$ with $q=2$, computed with {\tt eigs}.}
\end{figure}
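The Fourier discretization behind such a computation is simple to write down: in the basis $e^{ikt}$, the operator $-d^2/dt^2$ is diagonal with entries $k^2$, and multiplication by $2q\cos(2t) = q\kern1pt e^{2it} + q\kern1pt e^{-2it}$ couples modes $k$ and $k\pm 2$. A NumPy sketch (not Chebfun's implementation; the truncation order $K$ is an illustrative choice):

```python
import numpy as np

def mathieu_matrix(q, K):
    """Truncated Fourier discretization of -u'' + 2q cos(2t) u on the
    modes e^{ikt}, k = -K..K: diagonal k^2 plus a coupling q between
    modes two apart."""
    k = np.arange(-K, K + 1)
    A = np.diag(k.astype(float) ** 2)
    for i in range(len(k) - 2):
        A[i, i + 2] = A[i + 2, i] = q   # the q*e^{+-2it} coupling terms
    return A

lam = np.sort(np.linalg.eigvalsh(mathieu_matrix(2.0, 40)))[:5]
```

For $q=0$ the matrix is diagonal and the eigenvalues are exactly $k^2$; for $q=2$ the smallest eigenvalues converge rapidly in $K$ to the Mathieu characteristic values.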
So far as we know, Chebfun is the only system offering
such convenient solution of ODEs and related
problems, now in the periodic as well as nonperiodic case.
We have also implemented a periodic analogue of Chebfun's
{\tt expm} command for computing
exponentials of linear operators, which
we omit discussing here for reasons of space. All the capabilities
mentioned in this section can be explored with Chebgui, the graphical
user interface written by Birkisson, which now invokes
trigonometric spectral discretizations when periodic boundary
conditions are specified.
\section{Discussion}
Chebfun is an open-source project written in MATLAB and
hosted on GitHub; details and the user's guide can be found at
{\tt www.chebfun.org}~\cite{chebbook}.
About thirty people have contributed to
its development over the years, and at present there are about
ten developers based mainly at the University of Oxford.
During 2013--2014 the code was redesigned and rewritten as
version 5 (first released June 2014) in
the form of about 100,000 lines of code realizing
about 40 classes. The aim of this redesign was to enhance Chebfun's
modularity, clarity, and extensibility, and the introduction
of periodic capabilities, which had not been planned in
advance, was the first big test of this extensibility.
We were pleased to
find that the modifications proceeded smoothly.
The central new feature is a new
class {\tt @trigtech} in parallel to the existing
{\tt @chebtech1} and {\tt @chebtech2}, which work with
polynomial interpolants in first- and second-kind Chebyshev points,
respectively.
About half the classes of Chebfun are concerned with representing
functions, and the remainder are mostly
concerned with ODE discretization and automatic
differentiation for solution of nonlinear problems, whether
scalar or systems, possibly with nontrivial block structure.
The incorporation of periodic problems into this second, more
advanced part of Chebfun was achieved by introducing
a new class {\tt @trigcolloc} matching
{\tt @chebcolloc1} and {\tt @chebcolloc2}.
About a dozen software projects in various computer languages have been
modeled on Chebfun, and a partial
list can be found at {\tt www.chebfun.org}.\ \ One of
these, Fourfun, is a MATLAB system for periodic functions
developed independently of the
present work by Kristyn McLeod, a student of former Chebfun
developer Rodrigo Platte~\cite{mcleod}. Another that also has
periodic and differential equations
capabilities is ApproxFun, written in Julia by Sheehan
Olver and former Chebfun developer
Alex Townsend~\cite{julia}.\footnote{Platte created
Chebfun's edge detection algorithm for fast splitting
of intervals. Townsend extended Chebfun
to two dimensions.} We think the enterprise of
numerical computing with functions is here to stay,
but cannot predict what systems or languages may be dominant,
say, twenty years from now. For the moment, only Chebfun offers
the breadth of capabilities entailed in the vision of
MATLAB-like functionality for continuous functions and operators in
analogy to the long-familiar methods for discrete
vectors and matrices.
In this article we
have not discussed Chebfun computations with two-dimensional
periodic functions,
which are under development. For example, we are
investigating capabilities for solution of time-dependent PDEs on a
periodic spatial domain and for PDEs in two space dimensions, one or
both of which are periodic. A particularly interesting prospect is
to apply such representations to computation with functions
on disks and spheres.
For computing with vectors and matrices, although MATLAB codes are
rarely the fastest in execution, their convenience makes them
nevertheless the best tool for many applications.
We believe that Chebfun, including now its extension to periodic problems,
plays the same role for numerical computing with functions.
\section*{Acknowledgements}
This work was carried out in collaboration with the rest of
the Chebfun team, whose names are listed at
{\tt www.chebfun.org}.\ \ Particularly active in this phase of the project have
been Anthony Austin, \'Asgeir Birkisson,
Toby Driscoll, Nick Hale, Hrothgar (an Oxford graduate
student who has just a single name),
Alex Townsend, and Kuan Xu.
We are grateful to all of these people for their suggestions
in preparing this paper.
The first author would like to thank the Oxford University Mathematical
Institute, and in particular the Numerical Analysis Group, for hosting
and supporting his sabbatical visit in
2014, during which this research was initiated.
| {
"timestamp": "2015-11-03T02:10:18",
"yymm": "1511",
"arxiv_id": "1511.00166",
"language": "en",
"url": "https://arxiv.org/abs/1511.00166",
"abstract": "Algorithms and underlying mathematics are presented for numerical computation with periodic functions via approximations to machine precision by trigonometric polynomials, including the solution of linear and nonlinear periodic ordinary differential equations. Differences from the nonperiodic Chebyshev case are highlighted.",
"subjects": "Numerical Analysis (math.NA)",
"title": "Extension of Chebfun to periodic functions",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.9861513873424045,
"lm_q2_score": 0.7185944046238981,
"lm_q1q2_score": 0.7086428690563463
} |
https://arxiv.org/abs/2106.12726 | Images of multilinear polynomials on $n\times n$ upper triangular matrices over infinite fields | In this paper we prove that the image of multilinear polynomials evaluated on the algebra $UT_n(K)$ of $n\times n$ upper triangular matrices over an infinite field $K$ equals $J^r$, a power of its Jacobson ideal $J=J(UT_n(K))$. In particular, this shows that the analogue of the Lvov-Kaplansky conjecture for $UT_n(K)$ is true, solving a conjecture of Fagundes and de Mello. To prove that fact, we introduce the notion of commutator-degree of a polynomial and characterize the multilinear polynomials of commutator-degree $r$ in terms of its coefficients. It turns out that the image of a multilinear polynomial $f$ on $UT_n(K)$ is $J^r$ if and only if $f$ has commutator degree $r$. | \section{Introduction}
Let $K$ be an infinite field and let $M_n(K)$ denote the algebra of $n\times n$ matrices over $K$. A famous open problem, the Lvov-Kaplansky conjecture, asserts that the image of a multilinear polynomial (in noncommutative variables) on $M_n(K)$ is a vector space. It is well known that this is equivalent to proving that the image of a multilinear polynomial is one of the following: $\{0\}$, $K$ (viewed as the set of scalar matrices), $sl_n(K)$ (the set of traceless matrices) or $M_n(K)$.
Although proving that some subset is a vector space may at first sight seem a simple problem, a solution to the Lvov-Kaplansky conjecture is known only for $n=2$ \cite{Kanel2, Malev}. The case $n=3$ has seen interesting progress, but no solution \cite{Kanel3}.
This conjecture has motivated other studies related to images of polynomials. For instance, papers on images of Lie and Jordan polynomials, and on images of polynomials on other algebras, have been published since then. For the most recent results on images of polynomials, we recommend the survey \cite{Survey}.
The particular case of $n\times n$ upper triangular matrices over a field $K$, $UT_n(K)$, was first studied by the second named author and Fagundes in \cite{FagundesdeMello}. The authors proved that if $K$ is infinite, the image of a multilinear polynomial of degree up to four on $UT_n(K)$ is $\{0\}$, $J$ or $J^2$, where $J=J(UT_n(K))$ is the Jacobson radical of $UT_n(K)$, which consists of the strictly upper triangular matrices. In the same paper, the authors conjecture that the image of a multilinear polynomial on $UT_n(K)$ is a vector space. After proving that the linear span of the image of an arbitrary multilinear polynomial on $UT_n(K)$ equals $J^r$ for some $r$, they restate the above conjecture as:
\begin{conjecture}\label{LKUTn}
Let $K$ be an infinite field and let $f\in \KX$ be a multilinear polynomial. Then the image of $f$ on $UT_n(K)$ equals $J^r$ for some $r$.
\end{conjecture}
In \cite{deMello}, the second named author verified this conjecture for $n=3$ and polynomials of arbitrary degree, but until now a complete solution to this conjecture was not known.
The paper \cite{FagundesdeMello} also motivated further research on images of polynomials on $UT_n(K)$. For instance, the images of multilinear polynomials on $J$ (and also on $J^k$, for $k\geq 1$) were completely described by Fagundes in \cite{Fagundes}. In \cite{WangHomogeneous} the authors give a complete description of the images of completely homogeneous polynomials on $UT_2$ and in \cite{WangArbitrary} they give a complete description of the image of polynomials with zero constant term on $UT_2$ when $K$ is an algebraically closed field. Some results were also obtained in the nonassociative setting. Let $K(n,*)$ denote the Lie algebra of $n\times n$ skew-symmetric elements of $UT_n(K)$ with respect to an involution of $UT_n(K)$. In \cite{FrancaUrure1} the authors give a complete description of the image of multihomogeneous Lie polynomials evaluated on $K(3,*)$ and $K(4,*)$ and of all multilinear Lie polynomials whose image evaluated on $K(m,*)$ is the set of strictly upper triangular skew-symmetric matrices (with respect to transpose-like and symplectic-like involutions). In \cite{FrancaUrure2} the authors proved that the image of any multilinear Jordan polynomial with at most three variables evaluated on the Jordan algebra $S(n,*)$ (of $n\times n$ symmetric elements of $UT_n(K)$ with respect to the transpose-like involution on $UT_n(K)$) is a vector space.
In this paper, we prove Conjecture \ref{LKUTn}, i.e., that the image on $UT_n(K)$ of an arbitrary multilinear polynomial over an infinite field $K$ is $J^r$ for some $r$. To do so, we introduce the notion of \emph{commutator-degree} of a polynomial and classify the polynomials of a given commutator-degree in terms of their coefficients. Finally, we prove that a multilinear polynomial has commutator-degree $r$ if and only if its image on $UT_n(K)$ is $J^r$; that is, our result not only describes the possible images of a given multilinear polynomial, but also, for each possible image, describes the set of multilinear polynomials with that image.
\section{Preliminaries}
In this paper, unless otherwise stated, $K$ will denote an infinite field of arbitrary characteristic and all algebras are associative.
If $X$ is a set, we denote by $\KX$ the free associative algebra freely generated by $X$. The elements of $\KX$ are polynomials in the noncommutative variables of $X$.
Given an algebra $A$ over $K$, and a polynomial $f=f(x_1, \dots, x_m)\in \KX$, we denote also by $f$ the map \[\begin{array}{cccc}
f: & A^m & \longrightarrow & A \\
& (a_1,\dots, a_m) & \mapsto & f(a_1, \dots, a_m)
\end{array}\]
The image of such a map is called \emph{the image of $f$ on $A$} and will be denoted by $f(A)$.
Images of polynomials have long been studied in the language of polynomial identities and central polynomials. If $A$ is an algebra over $K$, a polynomial $f\in \KX$ is a polynomial identity (PI) for $A$ if $f$ vanishes under all substitutions of its variables by elements of $A$. If every evaluation of $f$ on elements of $A$ yields a central element of $A$, we say that $f$ is a central polynomial. In the language of images of polynomials, a polynomial identity of $A$ is a polynomial whose image is $\{0\}$, and a central polynomial of $A$ is a polynomial whose image is contained in the center of $A$. An algebra $A$ satisfying a nonzero polynomial identity is called a PI-algebra.
For a given PI-algebra $A$, the set $Id(A) = \{f\in \KX \,|\, f \text{ is an identity of A}\}$ is an ideal of $\KX$ which is invariant under any endomorphism of the algebra $\KX$, i.e., it is invariant under substitution of variables by arbitrary elements of $\KX$. An ideal with this property is called a T-ideal of $\KX$. We refer the reader to \cite{Drensky, G-Z} for the basic theory of PI-algebras.
Given a set of polynomials $\mathcal F \subseteq \KX$, we say that an ideal $I$ is the T-ideal generated by $\mathcal F$ if $I$ is the smallest T-ideal containing $\mathcal F$. In this case we say that $\mathcal F$ is a basis for $I$, or that the elements of $I$ follow from the elements of $\mathcal F$, and we write $I=\langle \mathcal F \rangle ^T$.
Throughout the paper, $UT_n(K)$ will denote the algebra of $n\times n$ upper triangular matrices over a given field $K$. If $1\leq i \leq j \leq n$, we denote by $E_{i,j}$ the $n\times n$ matrix with $1$ in the entry $(i, j)$ and $0$ elsewhere. These will be called matrix units. In particular, the set of all $E_{i,j}$ with $1\leq i \leq j \leq n$ is a basis for the vector space $UT_n(K)$.
The set of strictly upper triangular matrices is the Jacobson radical of $UT_n(K)$. It is an ideal of $UT_n(K)$ and will be denoted by $J$. If $r\in\{1, \dots, n\}$, the power $J^r$ is the linear span of the set of all $E_{i,j}$ with $j-i \geq r$. For convenience, we denote $J^0=UT_n(K)$.
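As a quick numerical sanity check (an illustration, not part of the paper's argument), one can verify the description of $J^r$ on random integer matrices; the sketch below assumes NumPy is available.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 4, 2

def random_strict_upper(n):
    """Random integer matrix supported on the entries (i, j) with j > i."""
    A = rng.integers(-5, 6, size=(n, n))
    return np.triu(A, k=1)

# A product of r strictly upper triangular matrices lands in J^r:
# every entry (i, j) with j - i < r must vanish.
P = random_strict_upper(n)
for _ in range(r - 1):
    P = P @ random_strict_upper(n)
assert np.all(np.tril(P, k=r - 1) == 0)

# J^n = 0: a product of n strictly upper triangular n x n matrices is zero.
Q = random_strict_upper(n)
for _ in range(n - 1):
    Q = Q @ random_strict_upper(n)
assert np.all(Q == 0)
print("J^r checks passed")
```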
A finite basis for the identities of $UT_n(K)$ was first described by Yu.\ N.\ Maltsev in the characteristic zero case. This result was later extended to arbitrary infinite fields by several authors. The classical proof can be found in \cite[Chapter 5]{Drensky}. An interesting proof based on a theorem of J.\ Lewin can be found in \cite[Chapter 1]{G-Z}.
\begin{theorem}\label{IDsUTn}
If $K$ is an infinite field, the polynomial identities of $UT_n(K)$ follow from the identity
\[[x_1, x_2]\cdots [x_{2n-1}, x_{2n}] = 0.\]
\end{theorem}
Here $[x,y]:=xy-yx$ denotes the commutator of $x$ and $y$.
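The identity above is easy to test numerically for small $n$: a commutator of two upper triangular matrices has zero diagonal, so a product of $n$ such commutators vanishes in $UT_n(K)$. The following sketch (an illustration, not part of the paper; it assumes NumPy) checks this on random integer matrices.

```python
import numpy as np

rng = np.random.default_rng(1)

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return A @ B - B @ A

def random_ut(n):
    """Random integer upper triangular n x n matrix."""
    return np.triu(rng.integers(-5, 6, size=(n, n)))

for n in (2, 3):
    # [x_1, x_2] ... [x_{2n-1}, x_{2n}] evaluated on random elements of UT_n
    P = np.eye(n, dtype=int)
    for _ in range(n):
        P = P @ comm(random_ut(n), random_ut(n))
    assert np.all(P == 0)
print("product of n commutators vanishes on UT_n")
```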
A polynomial $f(x_1, \dots, x_m)\in \KX$ is called multihomogeneous of multidegree $(n_1, \dots, n_m)$ if in each monomial of $f$ the variable $x_i$ has degree $n_i$. The polynomial $f$ is called multilinear if it is multihomogeneous of multidegree $(1, \dots, 1)$. It is clear that if the polynomial $f$ is multilinear, the corresponding map $f$ is a multilinear map between the vector spaces $A^m$ and $A$. Also, if $f(x_1, \dots, x_m)$ is multilinear, one can write
\[f(x_1, \dots, x_m)=\sum_{\sigma\in S_m}\alpha_{\sigma}x_{\sigma(1)}\cdots x_{\sigma(m)},\]
for some $\alpha_{\sigma}\in K$, where $S_m$ denotes the symmetric group on $\{1, \dots, m\}$.
It is well known that if $K$ is an infinite field, any T-ideal is generated by multihomogeneous polynomials, and if $K$ has characteristic zero, by multilinear polynomials. This follows from the next fact, which can be found in \cite{BresarKlep}.
\begin{lemma}
Let $V$ be a vector space over $K$ and let $U$ be a subspace of $V$. Let $c_0, c_1, \dots, c_n \in V$ be such that $\sum_{i=0}^n\lambda^ic_i\in U$ for at least $n+1$ pairwise distinct scalars $\lambda$. Then $c_i\in U$ for each $i\in\{0, \dots, n\}$.
\end{lemma}
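The lemma is a Vandermonde argument: $n+1$ vectors of the form $\sum_i \lambda_j^i c_i$ determine the $c_i$ via an invertible Vandermonde matrix, and each $c_i$ is then a linear combination of elements of $U$. A small numerical illustration (not from the paper; it assumes NumPy), with $V=\mathbf{R}^3$ and $U$ the first coordinate axis:

```python
import numpy as np

# Take V = R^3 and U = span{e1} (vectors with last two coordinates zero).
# Choose c_0, c_1, c_2 in U; then sum_i lambda^i c_i lies in U for every
# lambda, and the Vandermonde system recovers each c_i from 3 such sums.
c = np.array([[1.0, 0, 0], [2.0, 0, 0], [-3.0, 0, 0]])  # rows c_0, c_1, c_2
lams = np.array([1.0, 2.0, 3.0])                         # 3 distinct scalars
V = np.vander(lams, 3, increasing=True)                  # V[j, i] = lams[j]**i
sums = V @ c                                             # row j = sum_i lams[j]**i * c_i
recovered = np.linalg.solve(V, sums)                     # invert the Vandermonde matrix
assert np.allclose(recovered, c)
# Each recovered c_i is a combination of vectors lying in U, hence lies in U.
assert np.allclose(recovered[:, 1:], 0)
print("Vandermonde argument verified")
```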
In this paper we also need the notion of a Lie ideal of an associative algebra $A$. If $I\subseteq A$ is a vector subspace of $A$, we say that $I$ is a Lie ideal of $A$ if for any $x\in A$ and $y\in I$ we have $[x,y]\in I$. As examples, the sets $\{0\}$, $K$, $sl_n(K)$ and $M_n(K)$ are Lie ideals of $M_n(K)$. Actually, these were shown to be the only Lie ideals of $M_n(K)$, provided the characteristic of $K$ is not 2 or $n\neq 2$ (see \cite{Herstein}).
Lie ideal structures appear naturally when studying images of polynomials. The following is a particular case of \cite[Theorem 2.3]{BresarKlep}.
\begin{theorem}\label{LieIdeal}
Let $K$ be an infinite field, and let $A$ be an algebra over $K$. Then for every polynomial $f = f(x_1, \dots, x_m)\in \KX$, the linear span of $f(A)$ is a Lie ideal of $A$.
\end{theorem}
A Lie ideal $I$ of $\KX$ will be called a T-Lie ideal if it is closed under endomorphisms of $\KX$. We say that a T-Lie ideal $I$ is generated as a T-Lie ideal by a subset $\mathcal F\subseteq \KX$ if $I$ is the smallest T-Lie ideal containing $\mathcal F$. In this case we write $I=\langle \mathcal F \rangle^{TL}$.
\section{The Commutator-Degree of a Polynomial}
In this section we introduce the notion of the \emph{commutator-degree} of a polynomial $f\in \KX$, and give a characterization of multilinear polynomials of commutator-degree $r$ in terms of their coefficients.
In order to define such notion, one first needs to observe that we have a strictly descending chain of T-ideals of $\KX$,
\[\KX \supsetneqq \langle [x_1,x_2] \rangle^T\supsetneqq \langle [x_1,x_2][x_3,x_4]\rangle^T\supsetneqq\langle [x_1,x_2][x_3,x_4][x_{5},x_{6}]\rangle^T\supsetneqq \cdots\]
\begin{definition}
Let $f\in \KX$. We say that $f$ has \emph{commutator-degree} $r$ if $f\in \langle [x_1,x_2][x_3,x_4]\cdots [x_{2r-1},x_{2r}] \rangle^T$ and $f\not \in \langle [x_1,x_2][x_3,x_4]\cdots [x_{2r+1},x_{2r+2}] \rangle^T$.
\end{definition}
For convenience we say that $f$ has commutator-degree 0 if $f\in \KX$ but $f\not\in \langle [x_1, x_2] \rangle^T$.
An immediate consequence of Theorem \ref{IDsUTn} is the following proposition.
\begin{proposition}\label{identities}
Let $f\in \KX$. Then $f$ has commutator-degree $r$ if and only if $f\in Id(UT_r)$ and $f\not\in Id(UT_{r+1})$.
\end{proposition}
The definition of commutator-degree was inspired by the following result (see \cite[Lemma 3]{deMello}). In \cite{deMello} the lemma is stated without proof, so we present a proof here for the sake of completeness.
\begin{lemma}\label{8}
Let $A$ be a unitary algebra over $K$ and let \[f(x_1,\dots,x_m)=\sum_{\sigma\in S_m} \alpha_{\sigma} x_{\sigma(1)} \cdots x_{\sigma(m)}\] be a multilinear polynomial in $\KX$.
\begin{enumerate}
\item If $\sum_{\sigma \in S_m}\alpha_{\sigma}\neq 0$, then $f(A) = A$.
\item $f\in \langle [x_1,x_2]\rangle ^T$ if and only if $\sum_{\sigma \in S_m} \alpha_{\sigma}=0$.
\end{enumerate}
\end{lemma}
\begin{proof}
To prove (1), one just needs to substitute $m-1$ of the variables by $1\in A$ and the remaining variable by $(\sum_{\sigma\in S_m}\alpha_\sigma)^{-1} a$, where $a\in A$ is an arbitrary element. This implies $f(A)=A$.
To prove (2), we first assume that $f\in \langle [x_1,x_2]\rangle ^T$. Then, using the identity $[xy,z]=x[y,z]+[x,z]y$, it is enough to verify that the coefficients sum to zero for a polynomial of the form $x_1\cdots x_{i-1} [x_i,x_{i+1}]x_{i+2}\cdots x_m$, which is obvious.
Conversely, assume $\sum_{\sigma\in S_m}\alpha_\sigma=0$. Then for any $\tau\in S_m$ we have $\alpha_{\tau}=-\sum_{\sigma\neq \tau}\alpha_\sigma$, and $f$ can be written as
\[f(x_1, \dots, x_m)=\sum_{\substack{\sigma\in S_m \\ \sigma\neq \tau}}\alpha_{\sigma}(x_{\sigma(1)}\cdots x_{\sigma(m)}-x_{\tau(1)} \cdots x_{\tau(m)}).\]
By the above, it is enough to prove that for each $\sigma\neq \tau$, $x_{\sigma(1)}\cdots x_{\sigma(m)}-x_{\tau(1)} \cdots x_{\tau(m)}\in \langle [x_1, x_2]\rangle^T$.
We do it by induction on $m$. If $m=2$, we have $x_1x_2-x_2x_1\in \langle[x_1,x_2]\rangle^T$. Now assume the claim is true for $m-1$. Up to relabeling the variables, we may assume $\tau=id$ (the identity permutation). For a given $\sigma\in S_m$, let $j=\sigma(1)$. Then we can write
\begin{align*}
& x_{\sigma(1)}\cdots x_{\sigma(m)}-x_1 \cdots x_m\\
&= x_{\sigma(1)}\cdots x_{\sigma(m)}-x_jx_1\cdots x_{j-1}x_{j+1}\cdots x_m + x_jx_1\cdots x_{j-1}x_{j+1}\cdots x_m - x_1 \cdots x_m\\
&= x_j(x_{\sigma(2)}\cdots x_{\sigma(m)}-x_1\cdots x_{j-1}x_{j+1}\cdots x_m) + [x_j,x_1\cdots x_{j-1}]x_{j+1}\cdots x_m.
\end{align*}
The induction hypothesis implies that \[x_j(x_{\sigma(2)}\cdots x_{\sigma(m)}-x_1\cdots x_{j-1}x_{j+1}\cdots x_m)\in \langle[x_1, x_2]\rangle^T,\]
and since $[x_j,x_1\cdots x_{j-1}]x_{j+1}\cdots x_m\in \langle[x_1,x_2]\rangle^T$ trivially, we obtain \[x_{\sigma(1)}\cdots x_{\sigma(m)}-x_1 \cdots x_m\in \langle[x_1, x_2]\rangle^T\] and the lemma is proved.
\end{proof}
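Part (1) of the lemma is easy to see in action. The toy check below (illustrative only, not part of the paper; it assumes NumPy) uses $f(x_1,x_2)=x_1x_2+x_2x_1$, whose coefficients sum to $2\neq 0$, and recovers an arbitrary target matrix in $UT_3$ by the substitution described in the proof.

```python
import numpy as np

# f(x1, x2) = x1 x2 + x2 x1, so the sum of coefficients is 2 != 0.
def f(a1, a2):
    return a1 @ a2 + a2 @ a1

n = 3
rng = np.random.default_rng(2)
A = np.triu(rng.integers(-5, 6, size=(n, n))).astype(float)  # arbitrary target in UT_3

# Substitute x1 = 1 (the identity matrix) and x2 = (coefficient sum)^{-1} A = A / 2.
val = f(np.eye(n), A / 2)
assert np.allclose(val, A)
print("the target matrix is attained, as Lemma (1) predicts")
```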
The above lemma makes it possible to determine whether a polynomial lies in the T-ideal generated by $[x_1, x_2]$ just from relations satisfied by its coefficients, i.e., it characterizes the multilinear polynomials of commutator-degree 0, and shows that such polynomials have image equal to $UT_n(K)$. The aim of this section is to generalize this characterization to multilinear polynomials of commutator-degree greater than 0, in order to apply it to the solution of the Lvov-Kaplansky conjecture for upper triangular matrices.
The following can be found in \cite{FagundesdeMello}.
\begin{lemma}\label{units}
If $f$ is a multilinear polynomial, any evaluation of $f$ on upper triangular matrix units is either zero or some nonzero multiple of an upper triangular matrix unit.
\end{lemma}
Let now $f=f(x_1, \dots, x_m) \in \KX$ be a multilinear polynomial and write it as \[f(x_1,\dots,x_m)=\sum_{\sigma\in S_m} \alpha_\sigma x_{\sigma(1)}\cdots x_{\sigma(m)}.\]
We now define some constants depending on the coefficients of $f$ and some subsets of $\{1,\dots,m\}$.
Let $k\leq m$ and let $h_1,\dots,h_k\in \{1,2,\dots,m\}$ be such that $h_1+\cdots+h_k\leq m$. Let $t=(t_1,\dots,t_k)$ be a tuple of elements of $\{1,2,\dots,m\}$ with $t_1<t_2<\cdots <t_k$, and let $T=(T_1,T_2,\dots,T_k)$ be a tuple of pairwise disjoint subsets of $\{1,2,\dots,m\}$ with $|T_i|=h_i-1$, for $i\in \{1, 2, \dots, k\}$.
If $k\geq 1$, let us denote by $S(k,T,t)$ the subset of $S_m$ consisting of all permutations $\sigma$ satisfying:
\begin{itemize}
\item $\sigma(\{1,2,\cdots,h_1-1\})=T_1$
\item $\sigma(h_1)=t_1$
\item for $i\in \{2,\dots, k\}$, $\sigma(\{h_1+\cdots +h_{i-1}+1, \dots, h_1+\cdots + h_i-1\})=T_i$
\item for $i\in \{2,\dots, k\}$, $\sigma(h_1+\cdots +h_i)=t_i$
\end{itemize}
If $k=0$, we set $S(0)=S_m$.
And now consider the following sums of coefficients of $f$:
\[\beta^{(0)}=\sum_{\sigma \in S_m}\alpha_{\sigma} \ \ \ \text{ and } \ \ \ \beta^{(k,T,t)}=\sum_{\sigma \in S(k,T,t)}\alpha_{\sigma}\]
The next two lemmas show how the above constants can be used to determine whether the multilinear polynomial $f$ is an identity for upper triangular matrices of a given order. In particular, the first one generalizes Lemma \ref{8}.
\begin{lemma}\label{is}
Let $f(x_1,\dots,x_m)=\sum_{\sigma\in S_m} \alpha_\sigma x_{\sigma(1)}\cdots x_{\sigma(m)}$ be a multilinear polynomial in $\KX$. If $\beta^{(k, T, t)}=0$ for all $k< n$ and for all $T$ and $t$, then $f\in Id(UT_n)$.
\end{lemma}
\begin{proof}
Since $f$ is multilinear, it is enough to verify that $f$ vanishes under substitution of variables by matrix units. So let $r_1, \dots, r_m$ be matrix units. Among them, assume that $E_{i_1,j_1}, \dots, E_{i_k,j_k}$ are the strictly upper triangular ones. If $k\geq n$, then $f$ vanishes, so we may assume $k<n$. Let $r=f(r_1, \dots, r_m)$. Up to a permutation of indexes, we may assume
\[i_1<j_1=i_2<j_2=\cdots = i_k<j_k\] otherwise $r=0$. For $r_l\not\in\{E_{i_1,j_1}, \dots, E_{i_k,j_k}\}$, we must have
\[r_l\in\{E_{i_1,i_1},E_{i_2,i_2}, \dots, E_{i_k,i_k}, E_{j_k,j_k}\}\] otherwise, $r=0$.
For each $\sigma\in S_m$, the product $r_{\sigma(1)}\cdots r_{\sigma(m)}$ is nonzero only if it coincides with
\[(E_{i_1,i_1}\cdots E_{i_1,i_1})E_{i_1,i_2}(E_{i_2,i_2}\cdots E_{i_2,i_2})E_{i_2,i_3}\cdots E_{i_k,j_k}(E_{j_k,j_k}\cdots E_{j_k,j_k})=E_{i_1,j_k}.\]
Thus let $t_1, \dots, t_k$ be such that $r_{t_l}=E_{i_l,j_l}$ and let \[T_l=\{i\,|\,r_i=E_{i_l,i_l}\}, \quad l=1, \dots, k.\]
From the considerations above, we obtain
\begin{align*}
f(r_1, \dots, r_m) & =\sum_{\sigma\in S_m}\alpha_\sigma r_{\sigma(1)}\cdots r_{\sigma(m)}\\
& =\sum_{\sigma\in S(k,T,t)}\alpha_\sigma r_{\sigma(1)}\cdots r_{\sigma(m)}\\
& =\beta^{(k,T,t)}E_{i_1,j_k}=0.
\end{align*}
So, in any case, $f(r_1, \dots, r_m)=0$.
\end{proof}
\begin{lemma}\label{isnot}
Let $f(x_1,\dots,x_m)=\sum_{\sigma\in S_m} \alpha_\sigma x_{\sigma(1)}\cdots x_{\sigma(m)}$ be a multilinear polynomial in $\KX$ and let $n>0$. If there exist $T$ and $t$ such that $\beta^{(n, T, t)}\neq 0$, then $f\not\in Id(UT_{n+1})$.
\end{lemma}
\begin{proof}
Let us consider the following evaluation of $f$ on matrix units of $UT_{n+1}$:
\begin{itemize}
\item $x_{t_i}\mapsto E_{i,i+1}$, for $i\in\{1,\dots,n\}$.
\item $x_i\mapsto E_{j,j}$, if $i\in T_j$, for $j\in\{1,\dots,n\}$.
\item $x_i\mapsto E_{n+1,n+1}$ otherwise.
\end{itemize}
Under such an evaluation, the only non-vanishing monomials are those indexed by $\sigma \in S(n,T,t)$, and for any such $\sigma$ the corresponding monomial evaluates to $E_{1,n+1}$. Hence $f$ evaluates to $(\sum_{\sigma\in S(n,T,t)}\alpha_\sigma)E_{1,n+1}=\beta^{(n,T,t)}E_{1,n+1}$, which is nonzero. Then $f\not \in Id(UT_{n+1})$.
\end{proof}
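For the smallest instance $n=1$, $f=[x_1,x_2]$, the evaluation in the proof can be written out explicitly: with $t=(1)$, $h_1=1$ and $T_1=\emptyset$ we have $S(1,T,t)=\{id\}$ and $\beta^{(1,T,t)}=\alpha_{id}=1$. The sketch below (illustrative only, not from the paper; it assumes NumPy) confirms that the prescribed evaluation returns $\beta^{(1,T,t)}E_{1,2}$.

```python
import numpy as np

def E(i, j, n=2):
    """Matrix unit E_{i,j} in M_n (1-based indices)."""
    M = np.zeros((n, n), dtype=int)
    M[i - 1, j - 1] = 1
    return M

# f = [x1, x2] = x1 x2 - x2 x1; take n = 1, t = (1), T_1 empty,
# so S(1, T, t) = {id} and beta^(1,T,t) = alpha_id = 1 != 0.
x1, x2 = E(1, 2), E(2, 2)        # x_{t_1} -> E_{1,2}, the remaining variable -> E_{2,2}
val = x1 @ x2 - x2 @ x1
assert np.array_equal(val, E(1, 2))  # = beta^(1,T,t) E_{1,n+1}, nonzero
print("f is not an identity of UT_2")
```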
As an immediate consequence of Lemma \ref{is} and Lemma \ref{isnot}, we have the main result of this section, which is a characterization of the multilinear polynomials of commutator-degree $r$ in terms of their coefficients.
\begin{theorem}\label{equivalent}
Let $f(x_1,\dots,x_m)=\sum_{\sigma\in S_m} \alpha_\sigma x_{\sigma(1)}\cdots x_{\sigma(m)}$ be a multilinear polynomial in $\KX$. Then the following assertions are equivalent.
\begin{enumerate}
\item The polynomial $f$ has commutator-degree $r\geq 1$;
\item $f\in Id(UT_r)\setminus Id(UT_{r+1})$;
\item $\beta^{(k, T, t)}=0$ for all $k< r$ and all $T$ and $t$, and $\beta^{(r, T, t)}\neq 0$ for some $T$ and $t$.
\end{enumerate}
\end{theorem}
To illustrate the concept of commutator-degree, we compute the commutator-degree of the standard polynomials.
\begin{example}
Let $St_m(x_1,\dots, x_m)=\sum_{\sigma\in S_m}(-1)^{\sigma} x_{\sigma(1)}\cdots x_{\sigma(m)}$ be the standard polynomial of degree $m$. Then for any $r$, $St_{2r}$ and $St_{2r+1}$ have commutator-degree $r$.
\end{example}
\begin{proof}
By the well-known Amitsur-Levitzki theorem, $St_{2r}$ is a polynomial identity for $M_r(K)$, and hence for $UT_r$. Since $St_{2r+1}$ is a consequence of $St_{2r}$, it is also an identity for $UT_r$. The staircase lemma shows that these two polynomials are not identities for $UT_{r+1}$.
\end{proof}
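Both halves of the argument can be checked by brute force for $r=2$: $St_4$ vanishes on $UT_2$, while the staircase evaluation $E_{1,1},E_{1,2},E_{2,2},E_{2,3}$ gives a nonzero value in $UT_3$. The following sketch (illustrative only, not part of the paper; it assumes NumPy) sums over all of $S_4$ directly.

```python
import numpy as np
from itertools import permutations

def sign(perm):
    """Sign of a permutation given as a tuple, via its inversion count."""
    s, p = 1, list(perm)
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def standard(mats):
    """Evaluate the standard polynomial St_m on a list of m square matrices."""
    n = mats[0].shape[0]
    total = np.zeros((n, n), dtype=int)
    for perm in permutations(range(len(mats))):
        P = np.eye(n, dtype=int)
        for k in perm:
            P = P @ mats[k]
        total += sign(perm) * P
    return total

def E(i, j, n):
    M = np.zeros((n, n), dtype=int)
    M[i - 1, j - 1] = 1
    return M

# St_4 vanishes on UT_2 (it is even an identity of M_2 by Amitsur-Levitzki):
rng = np.random.default_rng(3)
mats2 = [np.triu(rng.integers(-3, 4, size=(2, 2))) for _ in range(4)]
assert np.all(standard(mats2) == 0)

# ...but the staircase E_{1,1}, E_{1,2}, E_{2,2}, E_{2,3} shows that St_4 is
# not an identity of UT_3, so St_4 has commutator-degree 2.
stair = [E(1, 1, 3), E(1, 2, 3), E(2, 2, 3), E(2, 3, 3)]
assert np.any(standard(stair) != 0)
print("St_4 lies in Id(UT_2) but not in Id(UT_3)")
```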
Actually, the above is a particular case of a more general result (see \cite[Lemma 9.1.2]{G-Z}) that the block-triangular matrix algebra $UT(d_1, \dots, d_s)$ satisfies the standard identity $St_k$ if and only if the sizes of the blocks $d_1, \dots, d_s$ satisfy $2(d_1+\cdots+d_s)\leq k$.
The study of the standard polynomials and a characterization of their multilinear consequences in terms of their coefficients may be an interesting tool to approach the Lvov-Kaplansky conjecture for $M_n(K)$, since we know from the celebrated Amitsur-Levitzki theorem that, for any $n$, $M_n(K)$ satisfies $St_{2n}$. Although we have computed the commutator-degree of the standard polynomials, this is too coarse a measure to decide whether a given polynomial is a consequence of some standard identity. One can argue as was done for the commutator-degree to define a more refined tool, which we call the \emph{St-degree} of a polynomial: consider the descending chain of T-ideals
\[\KX \supsetneqq \langle St_{2}\rangle^T \supsetneqq \langle St_{3}\rangle^T \supsetneqq \langle St_{4}\rangle^T\supsetneqq \cdots \]
A polynomial $f$ has St-degree $k$ if $f\in \langle St_{k}\rangle^T$ and $f \not \in \langle St_{k+1}\rangle^T$.
\begin{problem}
Characterize the St-degree of multilinear polynomials in terms of their coefficients.
\end{problem}
\section{Images of multilinear polynomials on $UT_n(K)$}
In this section we present the main result of the paper, which gives a complete solution of the Lvov-Kaplansky conjecture for the algebra of $n\times n$ upper triangular matrices, i.e., a solution for Conjecture \ref{LKUTn}. Moreover, for each possible subspace which is the image of some multilinear polynomial, we give a complete description of the set of all multilinear polynomials having such an image.
Before that, we recall a well-known result about commutative polynomials over infinite fields.
\begin{lemma}\label{nonroot}
Let $K$ be an infinite field and let $f(x_1,\dots,x_m)\in K[x_1,\dots, x_m]$ be a nonzero polynomial. Then, there exists $(a_1, \dots, a_m)\in K^m$ such that $f(a_1, \dots, a_m)\neq 0$.
\end{lemma}
In the proof of the main result, we will need the following consequence of the above:
\begin{lemma}
Let $K$ be an infinite field and let $f_1(x_1, \dots, x_m), \dots, f_k(x_1, \dots, x_m)$ be nonzero polynomials in $K[x_1, \dots, x_m]$. Then, there exist $a_1, \dots, a_m\in K$ such that $f_i(a_1, \dots, a_m)\neq 0$, for $i=1, \dots, k$.
\end{lemma}
\begin{proof}
Let $f(x_1, \dots, x_m)=\prod_{i=1}^{k}f_i(x_1, \dots, x_m)$. Since each $f_i$ is a nonzero polynomial, $f$ is also a nonzero polynomial, and the above lemma guarantees the existence of elements $a_1, \dots, a_m\in K$ such that $f(a_1, \dots, a_m)\neq 0$. The lemma is proved once one observes that
\[\prod_{i=1}^{k}f_i(a_1, \dots, a_m) = f(a_1, \dots, a_m)\neq 0.\]
\end{proof}
\begin{theorem}\label{main}
Let $f\in \KX$ be a multilinear polynomial. Then the image of $f$ on $UT_n(K)$ is $J^r$ if and only if $f$ has commutator-degree $r$.
\end{theorem}
\begin{proof}
Let $f(x_1,\dots,x_m)$ be a multilinear polynomial of commutator-degree $r$. We will show that the image of $f$ on $UT_n(K)$ is $J^r$. We may assume that $r<n$, otherwise, $f$ is an identity for $UT_n(K)$.
Let $t=(t_1,\dots, t_r)$ and $T=(T_1, \dots, T_r)$ be such that $\beta^{(r, T, t)}\neq 0$. We define $T_{r+1}=\{1,\dots, m\}\setminus (T_1\cup \cdots \cup T_r \cup \{t_1, \dots, t_r\})$.
Given $A\in J^r$, we can write \[A=\sum_{q=r}^{n-1}\sum_{k=1}^{n-q} a_{k,k+q}E_{k,k+q}.\] We will show that there exists a suitable evaluation of variables $x_1\mapsto c_1, \dots, x_m\mapsto c_m$ such that $f(c_1,\dots, c_m)=A$.
Before that, we will consider some matrices with entries in the polynomial algebra $K[d_j^{(i)}, y_{j}^{(i)}, x_{k, l}]$:
\begin{itemize}
\item $D_i=\sum_{j=1}^n d_{j}^{(i)}E_{j,j}$, for $i\in \{1, \dots, r+1\}$.
\item $N_i=\sum_{j=1}^{n-1}y_j^{(i)}E_{j,j+1}$, for $i\in \{1, \dots, r\}$.
\item $X=\sum_{k<l}x_{k,l}E_{k,l}$.
\end{itemize}
Now we define the matrices $c_i$ depending on the variables $d_j^{(i)}$, $y_{j}^{(i)}$ and $x_{k, l}$, so that we can later substitute these variables by elements of $K$, yielding a solution to the equation $f(c_1, \dots, c_m)=A$.
The evaluation is given as follows:
\begin{itemize}
\item $c_j=D_i$, if $j\in T_i$, for $i\in \{1,\dots, r+1\}$.
\item $c_{t_i}=N_i$, if $i\in \{1, \dots, r-1\}$.
\item $c_{t_r}=X$.
\end{itemize}
Observe that if $r=1$, the matrices $N_i$ do not appear.
Under such evaluation, since diagonal matrices commute, we obtain $f(c_1, \dots, c_m)$ is equal to
\[\sum_{j=1}^{n}\sum_{i,\sigma}\gamma_{i,\sigma}D^{[i^{(1)}]}N_{\sigma(1)}D^{[i^{(2)}]}\cdots D^{[i^{(j)}]} X D^{[i^{(j+1)}]} \cdots D^{[i^{(r)}]}N_{\sigma(r)}D^{[i^{(r+1)}]},\]
where:
\begin{itemize}
\item $D^{[i^{(k)}]}=D_1^{i_1^{(k)}}D_2^{i_2^{(k)}}\cdots D_{r+1}^{i_{r+1}^{(k)}}$,
\item $\displaystyle \sum_{i,\sigma} = \sum_{\substack{\sigma \in S_r\\ \sigma(j)=r}} \sum_{p=1}^{r+1}\sum_{i_p^{(1)}+\cdots +i_{p}^{(r+1)}=|T_p|}$
\item Each $\gamma_{i,\sigma}\in K$ depends on the $(r+1)^2$-tuple $i=(i_1^{(1)}, \dots, i_{r+1}^{(r+1)})$ and on $\sigma\in S_r$.
\item If $i_0$ is the tuple satisfying $i_{q}^{(s)}=0$, for $q\neq s$, and if $id$ is the identity permutation of $S_r$, then $\gamma_{i_0, {id}}=\beta^{(r,T,t)}$ (which is nonzero by hypothesis).
\end{itemize}
Let us write $D^{[i^{(k)}]}=\sum_{j=1}^n \delta_j^{(k)}E_{j,j}$. Of course, for each $j$ and $k$, we have
\[\delta_{j}^{(k)}=\left(d_j^{(1)}\right)^{i_1^{(k)}} \cdots \left(d_j^{(r+1)}\right)^{i_{r+1}^{(k)}}.\]
Computations with the above matrices yield
\[ D^{[i^{(1)}]}N_{\sigma(1)}D^{[i^{(2)}]}\cdots N_{\sigma(j-1)}D^{[i^{(j)}]}
= \sum_{k=1}^{n-j+1} \xi_{k}E_{k,k+j-1} \]
where \[\xi_{k}=\prod_{s=1}^{j}\left(\delta_{k+s-1}^{(s)}\right)\prod_{s=1}^{j-1}\left(y_{k+s-1}^{(\sigma(s))}\right)\] and
\[ D^{[i^{(j+1)}]}N_{\sigma(j+1)} \cdots D^{[i^{(r)}]}N_{\sigma(r)}D^{[i^{(r+1)}]}
= \sum_{k=1}^{n-r+j} \eta_{k}E_{k,k+r-j} \]
where \[\eta_{k}=\prod_{s=1}^{r-j+1}\left(\delta_{k+s-1}^{(j+s)}\right)\prod_{s=1}^{r-j}\left(y_{k+s-1}^{(\sigma(j+s))}\right).\]
By multiplying the two expressions above with $X$ in the correct order, we obtain
\begin{align*}
& D^{[i^{(1)}]}N_{\sigma(1)}D^{[i^{(2)}]}\cdots D^{[i^{(j)}]} X D^{[i^{(j+1)}]} \cdots D^{[i^{(r)}]}N_{\sigma(r)}D^{[i^{(r+1)}]} \\
= & \sum_{q=r}^{n-1}\sum_{k=1}^{n-q} \Omega_{i,j,q,k}x_{k+j-1,k+j-1+q}E_{k,k+q}
\end{align*}
where
\[\Omega_{i,j,q,k}=\left(\prod_{s=1}^{j}\delta_{k+s-1}^{(s)}\right)\left(\prod_{s=1}^{j-1} y_{k+s-1}^{(\sigma(s))}\right)
\left(\prod_{s=1}^{r-j+1}\delta_{k+j+q-r+s-1}^{(j+s)}\right)\left(\prod_{s=1}^{r-j} y_{k+j+q-r+s-1}^{(\sigma(j+s))}\right)\]
If we write \[\Delta_{j,q,k}=\sum_{\substack{\sigma\in S_r \\ \sigma(j)=r}}\sum_{i}\gamma_{i,\sigma}\Omega_{i,j,q,k},\] we obtain
\begin{align*}
f(c_1, \dots, c_m) & = \sum_{q=r}^{n-1}\sum_{k=1}^{n-q} \sum_{j=1}^{r} \Delta_{j,q,k}x_{k+j-1,k+j-1+q}E_{k,k+q}
\end{align*}
To obtain a solution for $f(c_1, \dots, c_m)=A$, we need to show that the following system of equations in variables $x_{k,l}$, $1\leq k<l\leq n$, has a solution.
\[\sum_{j=1}^{r}\Delta_{j,q,k}x_{k+j-1,k+j-1+q}=a_{k, k+q},\quad q=r, \dots, n-1, k=1, \dots, n-q.\]
The above equations can be written as
\[\Delta_{r,q,k}x_{k+r-1,k+r-1+q} + \sum_{j=1}^{r-1}\Delta_{j,q,k}x_{k+j-1,k+j-1+q}=a_{k, k+q}.\]
Let us assume that there exists an evaluation of the variables $d_j^{(i)}$, $y_{j}^{(i)}$ by elements of $K$ such that under this evaluation we have $\Delta_{r, q, k}\neq 0$, for each $q, k$. Then a solution of the above system of equations is obtained by setting $x_{k,l}=0,$ for $1\leq k\leq r-1$, $k+1\leq l \leq n$, and obtaining $x_{k,l}$ for all $k$ and $l$ recursively from the following formula:
\[x_{k+r-1,k+r-1+q} = \frac{a_{k, k+q} - \sum_{j=1}^{r-1}\Delta_{j,q,k}x_{k+j-1,k+j-1+q}}{\Delta_{r,q,k}}.\]
The existence of such an evaluation of the variables $d_j^{(i)}$, $y_{j}^{(i)}$ by elements of $K$ such that $\Delta_{r, q, k}\neq 0$ for each $q, k$ is a consequence of Lemma \ref{nonroot}, once we prove that for each $q$ and $k$, $\Delta_{r,q,k}$ is a nonzero polynomial.
To prove that, let us recall that
\begin{align*}
\Delta_{r,q,k} = & \sum_{\sigma\in S_{r-1}}\sum_{i}\gamma_{i,\sigma}\left(\prod_{s=1}^{r}\delta_{k+s-1}^{(s)}\right)\left(\prod_{s=1}^{r-1}y_{k+s-1}^{(\sigma(s))}\right)\delta_{k+q}^{(r+1)}\\
= & \sum_{\sigma\in S_{r-1}}\sum_{i}\gamma_{i,\sigma}\left(\prod_{s=1}^{r}\prod_{l=1}^{r+1} \left(d_{k+s-1}^{(l)}\right)^{i_l^{(s)}}\right)\left(\prod_{s=1}^{r-1}y_{k+s-1}^{(\sigma(s))}\right)\prod_{l=1}^{r+1} \left(d_{k+q}^{(l)}\right)^{i_l^{(r+1)}}.
\end{align*}
Since \[\displaystyle \sum_{i} = \sum_{p=1}^{r+1}\sum_{i_p^{(1)}+\cdots +i_{p}^{(r+1)}=|T_p|},\] we observe that the multi-homogeneous degree of each monomial indexed by the $(r+1)^2$-tuple $i$ and by the permutation $\sigma$ is uniquely determined by $i$ and $\sigma$. Since $\gamma_{i_0, id}=\beta^{(r,T,t)}\neq0$, the polynomial $\Delta_{r,q,k}$ is nonzero for each $q$ and $k$, and this finishes the proof of the theorem.
\end{proof}
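The smallest nontrivial instance of the theorem, $f=[x_1,x_2]$ on $UT_2(K)$, can be verified directly: $f$ has commutator-degree 1, every value of $f$ lies in $J$, and every element $aE_{1,2}$ of $J$ is attained as $[E_{1,1},aE_{1,2}]$. A numerical sketch (illustrative only, not part of the paper; it assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)

def comm(A, B):
    return A @ B - B @ A

# f = [x1, x2] has commutator-degree 1; the theorem says f(UT_2) = J.
# (i) Every value of [x1, x2] on UT_2 is strictly upper triangular:
for _ in range(100):
    A, B = (np.triu(rng.integers(-5, 6, size=(2, 2))) for _ in range(2))
    C = comm(A, B)
    assert C[0, 0] == 0 and C[1, 0] == 0 and C[1, 1] == 0

# (ii) Every element a E_{1,2} of J is attained: [E_{1,1}, a E_{1,2}] = a E_{1,2}.
a = 7
E11 = np.array([[1, 0], [0, 0]])
E12 = np.array([[0, 1], [0, 0]])
assert np.array_equal(comm(E11, a * E12), a * E12)
print("image of [x1, x2] on UT_2 is exactly J")
```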
\section{Final Considerations}
We start the final considerations with a remark on the main theorem.
\begin{remark}
Observe that for each possible image of a multilinear polynomial on $UT_n(K)$ (i.e., $J^r$), Theorem \ref{main} also describes the set of all multilinear polynomials with such image.
\end{remark}
Now let us consider images of not necessarily multilinear polynomials. It was shown in \cite{WangHomogeneous} that if $f$ is not a multilinear polynomial, its image may not be a vector subspace of $UT_n(K)$. Nevertheless, for each $r$, we may study the set of all polynomials (not necessarily multilinear) with image contained in $J^r$. We have:
\begin{proposition}
Assume $K$ is a field of characteristic zero. For each $r$, the set \[I^r(UT_n)=\{f\in \KX\,|\, f(UT_n)\subseteq J^r\}\] is a T-ideal of $\KX$. Moreover, for each $r$, $I^r(UT_n)= Id(UT_r)$. In particular, it is generated as a T-ideal by the polynomial $[x_1, x_2]\cdots [x_{2r-1}, x_{2r}]$.
\end{proposition}
\begin{proof}
The fact that $I^r(UT_n)$ is a T-ideal of $\KX$ follows directly from the fact that $J^r$ is an ideal of $UT_n(K)$ (and does not depend on the characteristic of $K$).
Since any T-ideal of $\KX$ over a field of characteristic zero is generated by its multilinear polynomials, the proposition follows directly from Theorem \ref{main} and Theorem \ref{equivalent}.
\end{proof}
The above result raises some questions when images of polynomials over matrix algebras (or even for algebras in general) are considered.
Assuming the Lvov-Kaplansky conjecture is true, the possible images of multilinear polynomials on $M_n(K)$ are $\{0\}$, $K$ (viewed as the set of scalar matrices), $sl_n(K)$ and $M_n(K)$.
The set of polynomials with image $\{0\}$ is the set of polynomial identities of $M_n(K)$, which is a T-ideal of $\KX$. The set of polynomials with image contained in $K$ is the set of central polynomials, denoted by $C(M_n(K))$, which is a T-space of $\KX$ (actually, a more suitable name would be T-subalgebra): it is a subalgebra of $\KX$ which is invariant under endomorphisms of the algebra $\KX$. Such subsets of $\KX$ have been extensively studied, but in light of the Lvov-Kaplansky conjecture, one can also ask about the set of polynomials with image contained in $sl_n(K)$. These were implicitly studied in the paper \cite{BresarKlep}. In order to make this explicit, we define the following set, which we call the set of \emph{traceless polynomials} and denote by $TL(M_n(K))$.
\[TL(M_n(K))=\{f(x_1, \dots, x_m)\in \KX\,|\, tr(f(a_1, \dots, a_m))=0, \text{ for all } a_1, \dots, a_m\in M_n(K)\}.\]
An obvious but interesting remark is the following.
\begin{remark}
The intersection of the set of traceless polynomials with the set of central polynomials of $M_n(K)$ is the ideal of identities of $M_n(K)$. With the above notation:
\[Id(M_n(K))=C(M_n(K))\cap TL(M_n(K)).\]
\end{remark}
We notice that $TL(M_n(K))$ has an interesting structure.
\begin{proposition}
Let $TL(M_n(K))$ be as above. Then $TL(M_n(K))$ is a T-Lie ideal of $\KX$. Moreover, the T-Lie ideal generated by $[x_1, x_2]$ is contained in $TL(M_n(K))$, i.e.
\[\langle[x_1, x_2] \rangle^{TL}\subseteq TL(M_n(K)).\]
\end{proposition}
\begin{proof}
It follows directly from the fact that $tr([A,B])=0$ for any $A, B\in M_n(K)$.
\end{proof}
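The fact used in the proof, $tr([A,B])=0$, is immediate from $tr(AB)=tr(BA)$; a one-line numerical check (illustrative only, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
A, B = rng.integers(-9, 10, size=(2, 3, 3))  # two random integer 3x3 matrices
# tr(AB) = tr(BA), so the trace of the commutator vanishes exactly.
assert np.trace(A @ B - B @ A) == 0
print("tr([A, B]) = 0")
```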
We now observe that this is a particular case of a more general result.
\begin{proposition}
Let $A$ be an algebra over $K$ and let $V_f\subseteq A$ be the linear span of the image of some polynomial $f$ on $A$. Then the set
\[\{g\in \KX\,|\, g(A)\subseteq V_f\}\]
is a T-Lie ideal of $\KX$.
\end{proposition}
\begin{proof}
This is an easy consequence of Theorem \ref{LieIdeal}.
\end{proof}
Last, we present a description of the set of traceless polynomials of $M_n(K)$ modulo $Id(M_n(K))$. To that end, we recall the result below \cite[Theorem 4.5]{BresarKlep}. We say that $f$ is cyclically equivalent to a polynomial $g$ if $f-g$ is a sum of commutators in $\KX$.
\begin{theorem}
Assume the characteristic of $K$ is zero. Let
$f\in \KX$, and let us write $\mathcal L= \text{span } f(M_n(K))$. Then exactly one of the following four possibilities holds:
\begin{enumerate}
\item $f$ is an identity of $M_n(K)$; in this case $\mathcal L = 0$;
\item $f$ is a central polynomial of $M_n(K)$; in this case $\mathcal L=K$;
\item $f$ is not an identity of $M_n(K)$, but is cyclically equivalent to an identity of $M_n(K)$; in this case $\mathcal L = sl_n(K)$;
\item $f$ is not a central polynomial of $M_n(K)$ and is not cyclically equivalent to an identity of $M_n(K)$; in this case $\mathcal L = M_n(K)$.
\end{enumerate}
\end{theorem}
\begin{theorem}
Assume the characteristic of $K$ is zero. Then the T-Lie ideal $TL(M_n(K))$ is generated (modulo $Id(M_n(K))$) by $[x_1, x_2]$ as a T-Lie ideal. Equivalently, $TL(M_n(K))$ is the T-Lie ideal generated by $Id(M_n(K))\cup \{[x_1,x_2]\}$.
\end{theorem}
\begin{proof}
This is an easy consequence of the above theorem. Indeed, let $f\in TL(M_n(K))$. If $f$ were neither an identity nor cyclically equivalent to an identity of $M_n(K)$, then the linear span of $f(M_n(K))$ would be $K$ or $M_n(K)$; in both cases we would not have $f(M_n(K))\subseteq sl_n(K)$. Hence $f$ is an identity of $M_n(K)$ or is cyclically equivalent to one, and therefore lies in the T-Lie ideal generated by $Id(M_n(K))\cup\{[x_1,x_2]\}$. The theorem is proved.
\end{proof}
\begin{remark}
Working modulo $Id(M_n(K))$ is equivalent to working in the algebra of generic $n\times n$ matrices, $\mathcal G_n$ (see \cite[Chapter 7]{Drensky} for a nice exposition on the algebra of generic matrices). In this language, the last theorem says that if an element of $\mathcal G_n$ has trace zero, then it is a sum of commutators of elements of $\mathcal G_n$, as one can see in \cite[Theorem 5.1]{BresarKlep}.
\end{remark}
\section*{Funding}
T. C. de Mello was supported by grants \#2018/15627-2 and \#2018/23690-6, São Paulo Research Foundation (FAPESP).
https://arxiv.org/abs/2212.12072

Completing the solution of the directed Oberwolfach problem with cycles of equal length

In this paper, we give a solution to the last outstanding case of the directed Oberwolfach problem with tables of uniform length. Namely, we address the two-table case with tables of odd length. We prove that the complete symmetric digraph on $2m$ vertices, denoted $K^*_{2m}$, admits a resolvable decomposition into directed cycles of odd length $m$. This completely settles the directed Oberwolfach problem with tables of uniform length.

\section{Introduction}
In this paper, we address the last open case of the directed Oberwolfach problem with tables of uniform length, namely the case with two tables of odd length. A variation of the celebrated Oberwolfach problem, the directed Oberwolfach problem asks whether $t$ conference attendees can be seated at $k$ round tables seating $m_1, m_2, \ldots, m_k$ guests, respectively, over the course of $t-1$ nights, with the crux being that each guest is to be seated to the right of every other guest exactly once. In Problem \ref{prob:ini} below, we formulate this problem when tables are of the same length in graph-theoretic terms.
\begin{prob}
\label{prob:ini}
Let $\alpha$ and $m$ be positive integers. Identify all values of $\alpha$ and $m$ for which $K^*_{\alpha m}$ admits a factorization into directed cycles of length $m$.
\end{prob}
Observe that a factorization of $K^*_{\alpha m}$ into directed cycles of length $m$ is equivalent to a resolvable Mendelsohn design with blocks of size $m$ \cite{ColDin}.
First, we point out that a solution to the original Oberwolfach problem with tables of uniform length can be found in \cite{AlsSch} and \cite{HuaKot}. Now, as with many cycle decomposition problems, Problem \ref{prob:ini} was first solved for small $m$ values. In \cite{BerGerSot}, Bermond et al. prove that $K^*_{3\alpha}$ admits a factorization into directed cycles of length three if and only if $\alpha \neq 2$. Then, in \cite{BenZha}, Bennett and Zhang show that, except for possibly $\alpha=3$, the digraph $K^*_{4\alpha}$ admits a factorization into directed cycles of length four if and only if $\alpha \neq 1$. A construction for $\alpha=3$ is given by Adams and Bryant \cite{AdaBry}. Abel et al. \cite{Abel} settle the existence of a factorization of $K^*_{5\alpha}$ into directed cycles of length five for $\alpha \geqslant 103$ with a few possible exceptions. In \cite{Til}, Tillson settles the existence of a directed Hamiltonian decomposition of $K^*_m$ for $m$ even. An important step was taken by Burgess and \v{S}ajna \cite{BurSaj}, who used recent advances \cite{Liu} to solve Problem \ref{prob:ini} when $m$ is even and when $\alpha$ and $m$ are odd. As for the case $m$ odd and $\alpha$ even, Burgess and \v{S}ajna showed that, if $K^*_{2m}$ admits a factorization into directed cycles of length $m$, then $K^*_{\alpha m}$ also admits a factorization into directed cycles of length $m$. Therefore, to completely resolve the directed Oberwolfach problem with cycles of uniform length, it suffices to construct a factorization of $K_{2m}^*$ into directed cycles of odd length $m$. This problem has proven to be particularly difficult and has thus far only been solved for a finite number of cases.
\begin{theorem} \cite{BurFranSaj} \label{BurFraSaj}
Let $m$ be an odd integer such that $5 \leqslant m \leqslant 49$. The digraph $K^*_{2m}$ admits a factorization into directed cycles of length $m$.
\end{theorem}
The main result of this paper is stated in Theorem \ref{thm:main} below.
\begin{theorem}
\label{thm:main}
The digraph $K^*_{2m}$ admits a factorization into directed cycles of length $m$ for all odd $m\geqslant 5$.
\end{theorem}
Theorem \ref{thm:main} implies a complete resolution of the directed Oberwolfach problem with tables of uniform length.
We now provide an outline of this paper. In Section 2, we give key definitions. In Sections 3 and 4, we show that three particular classes of digraphs on $2m$ vertices admit a factorization into directed cycles of odd length $m$. These decompositions are then used in Section 5 to construct a factorization of $K^*_{2m}$ into directed $m$-cycles.
\section{Preliminaries}
In this paper, all directed graphs (digraphs for short) are strict, meaning that they contain no loops or repeated arcs. If $G$ is a digraph [graph], we denote its vertex set by $V(G)$ and its arc set [edge set] by $A(G)$ [$E(G)$]. We denote the complete graph on $m$ vertices by $K_m$. The \textit{complete symmetric digraph}, denoted $K^*_m$, is the digraph on $m$ vertices such that, for any two distinct vertices $x$ and $y$, we have $(x, y), (y, x) \in A(K^*_m)$. The symbol $\vec{C}_m$ denotes the directed cycle on $m$ vertices. We denote the length of a directed path $P$ (dipath for short) by $\textrm{len}(P)$. Two dipaths of a digraph $G$, $P=v_1v_2\ldots v_n$ and $Q=v_nv_{n+1}\ldots v_m$, can be \textit{concatenated} to form the directed walk $PQ=v_1v_2\ldots v_{n-1}v_nv_{n+1}\ldots v_m$ in $G$. The digraph consisting of $k$ disjoint copies of a digraph $G$ is denoted $kG$.
\begin{definition} \rm
\label{def:1}
Let $G$ be a digraph. A set $\{H_1, H_2, \ldots, H_k\}$ of sub-digraphs of $G$ is a \textit{decomposition of $G$} if $\{A(H_1), A(H_2), \ldots, A(H_k)\}$ is a partition of $A(G)$. If $G$ admits such a decomposition, we write $G=H_1\oplus H_2 \oplus \ldots \oplus H_k$.
\end{definition}
A definition analogous to Definition \ref{def:1} holds for undirected graphs.
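Since a decomposition is determined entirely by arc sets, the condition in the definition above can be checked mechanically. The following sketch (the helper name \texttt{is\_decomposition} is ours, not the paper's) verifies the partition condition on a toy example:

```python
def is_decomposition(G_arcs, subgraph_arc_sets):
    """{H_1, ..., H_k} decomposes G iff their arc sets partition A(G):
    no arc occurs twice, and every arc of G is covered."""
    arcs = [a for H in subgraph_arc_sets for a in H]
    return len(arcs) == len(set(arcs)) and set(arcs) == set(G_arcs)

# K*_3 (all ordered pairs of distinct vertices) decomposes into two directed triangles.
K3 = {(u, v) for u in range(3) for v in range(3) if u != v}
t1 = [(0, 1), (1, 2), (2, 0)]
t2 = [(0, 2), (2, 1), (1, 0)]
print(is_decomposition(K3, [t1, t2]))   # True
```

The same check works for undirected graphs by treating each edge as an unordered pair.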
\begin{definition}\rm
Let $G$ be a digraph. A \textit{directed 2-factor} of $G$ is a spanning sub-digraph of $G$ comprised of disjoint directed cycles of $G$. A \textit{$\vec{C}_m$-factor} is a directed 2-factor of $G$ in which all directed cycles are of length $m$. A \textit{$\vec{C}_m$-factorization} is a decomposition of $G$ into $\vec{C}_m$-factors.
\end{definition}
If a digraph $G$ admits a $\vec{C}_m$-factorization, then $|V(G)|$ and $|A(G)|$ are necessarily multiples of $m$. Hence, Problem \ref{prob:ini} asks us to identify the values of $\alpha$ and $m$ for which this necessary condition is also sufficient for $K^*_{\alpha m}$.
\begin{definition} \rm
The \textit{wreath product} of digraphs $G$ and $H$, denoted $G\wr H$, is the digraph with vertex set $V(G) \times V(H)$. In $G \wr H$, there is an arc from $(g_1, h_1)$ to $(g_2, h_2)$ if and only if either $(g_1, g_2) \in A(G)$, or $g_1=g_2$ and $(h_1, h_2) \in A(H)$.
\end{definition}
\begin{definition} \rm
Let $C \subseteq \{1,2, \ldots, \lfloor \frac{m}{2} \rfloor\}$. The \textit{circulant} with \textit{connection set} $C$, denoted $X(m, C)$, is the graph with vertex set $\mathds{Z}_m$ and edge set $\{\{x, y\} \, |\, \min(x-y, m-(x-y)) \in C\}$, with computations done modulo $m$. Let $D \subseteq \{1,2, \ldots, m-1\}$. The \textit{directed circulant} with \textit{connection set} $D$, denoted $\vec{X}(m, D)$, is the digraph with vertex set $\mathds{Z}_m$ and arc set $\{(x, y)\, | \, y-x \in D\}$, with computations done modulo $m$.
\end{definition}
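The directed circulant and the wreath product combine to produce the digraphs studied in the next sections. Below is a minimal sketch (function names are ours; vertices follow the paper's convention $x_a=(a,x)$) that builds $\vec{X}(m, \{1, 3\})\wr \overline{K}_2$ and $\vec{X}(m, \{1, 3\})\wr K^*_2$ and counts their arcs:

```python
from itertools import product

def directed_circulant(m, D):
    """Arc set of the directed circulant on Z_m: arc (x, y) iff (y - x) mod m lies in D."""
    return {(x, (x + d) % m) for x in range(m) for d in D}

def wreath(G_verts, G_arcs, H_verts, H_arcs):
    """Arc set of G wr H on V(G) x V(H): arc ((g1,h1),(g2,h2)) iff
    (g1,g2) in A(G), or g1 == g2 and (h1,h2) in A(H)."""
    verts = list(product(G_verts, H_verts))
    return {(u, v) for u in verts for v in verts
            if (u[0], v[0]) in G_arcs or (u[0] == v[0] and (u[1], v[1]) in H_arcs)}

m = 7
L = wreath(range(m), directed_circulant(m, {1, 3}), "xy", set())                     # L_{2m}
G = wreath(range(m), directed_circulant(m, {1, 3}), "xy", {("x", "y"), ("y", "x")})  # G_{2m}
print(len(L), len(G))   # 8m = 56 arcs and 8m + 2m = 70 arcs
```

Each arc of the first factor contributes four arcs of the product (two choices on each side), which is why $H_{2m}$ and $L_{2m}$ have $8m$ arcs while $G_{2m}$ has $10m$.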
\section{Decomposition of $\vec{X}(m, \{\pm1\}) \wr \overline{K}_2$ and $\vec{X}(m, \{1, 3\}) \wr \overline{K}_2$}
In the following two sections, we will show that three classes of digraphs admit a $\vec{C}_m$-factorization. First, we introduce some definitions and notations pertaining to these digraphs.
\begin{notation}
\label{not:2di}
Let $m$ be an odd integer. We let $H_{2m}=\vec{X}(m, \{\pm1\})\, \wr \,\overline{K}_2$, $L_{2m}=\vec{X}(m, \{1, 3\})\, \wr \, \overline{K}_2$, and $G_{2m} =\vec{X}(m, \{1, 3\})\, \wr \, K^*_2$. Let $V(\overline{K}_2)=V(K^*_2)=\{x, y\}$. Then, let $V(H_{2m})=V(L_{2m})=V(G_{2m})=\{ x_a, y_b\, |\ a,b \in \mathds{Z}_m\}$ where $ x_a=(a,x)$ and $y_b=(b, y)$.
\end{notation}
In Figure \ref{fig:defini}, we give an example of $L_{22}$. Note that, in this paper, edges in all figures are assumed to be arcs oriented from left to right. Vertical arcs of the form $(x_i, y_i)$ or $(y_i, x_i)$ are oriented accordingly.
\begin{figure} [htpb]
\begin{tikzpicture}[thick, every node/.style={circle,draw=black,fill=black!90, inner sep=1.5}, scale=1.1]
\node (x0) at (0.0,2.0) [label=above:$\scriptstyle x_0 $] {};
\node (x1) at (1.0,2.0) [label=above:$\scriptstyle x_1 $] {};
\node (x2) at (2.0,2.0) [label=above:$\scriptstyle x_2$] {};
\node (x3) at (3.0,2.0) [label=above:$\scriptstyle x_3 $] {};
\node (x4) at (4.0,2.0) [label=above:$\scriptstyle x_4 $] {};
\node (x5) at (5.0,2.0) [label=above:$\scriptstyle x_5 $] {};
\node (x6) at (6.0,2.0) [label=above:$\scriptstyle x_6 $] {};
\node (x7) at (7.0,2.0) [label=above:$\scriptstyle x_7 $] {};
\node (x8) at (8.0,2.0) [label=above:$\scriptstyle x_8 $] {};
\node (x9) at (9.0,2.0) [label=above:$\scriptstyle x_9 $] {};
\node (x10) at (10.0,2.0) [label=above:$\scriptstyle x_{10} $] {};
\node (x11) at (11,2.0)[draw=gray, fill=gray] [label=above:$\scriptstyle x_{0} $] {};
\node (x12) at (12,2.0)[draw=gray, fill=gray][label=above:$\scriptstyle x_{1} $] {};
\node (x13) at (13,2.0) [draw=gray, fill=gray][label=above:$\scriptstyle x_{2} $] {};
\node(y0) at (0,0) [label=below:$\scriptstyle y_0$]{};
\node(y1) at (1,0) [label=below:$\scriptstyle y_1$]{};
\node(y2) at (2,0) [label=below:$\scriptstyle y_2$]{};
\node(y3) at (3,0) [label=below:$\scriptstyle y_3$]{};
\node(y4) at (4,0) [label=below:$\scriptstyle y_4$]{};
\node(y5) at (5,0) [label=below:$\scriptstyle y_5$]{};
\node(y6) at (6,0) [label=below:$\scriptstyle y_6$]{};
\node(y7) at (7,0) [label=below:$\scriptstyle y_7$]{};
\node(y8) at (8,0) [label=below:$\scriptstyle y_8$]{};
\node(y9) at (9,0) [label=below:$\scriptstyle y_9$]{};
\node(y10) at (10,0) [label=below:$\scriptstyle y_{10}$]{};
\node(y11) at (11,0) [draw=gray, fill=gray][label=below:$\scriptstyle y_{0}$]{};
\node(y12) at (12,0) [draw=gray, fill=gray][label=below:$\scriptstyle y_{1}$]{};
\node(y13) at (13,0) [draw=gray, fill=gray] [label=below:$\scriptstyle y_{2}$]{};
\path[every node/.style={font=\sffamily\small}]
(x0) edge [color2, very thick] (x1)
(x1) edge [color2, very thick] (x2)
(x2) edge [color2, very thick] (x3)
(x3) edge [color2, very thick] (x4)
(x4) edge [color2, very thick] (x5)
(x5) edge [color2, very thick] (x6)
(x6) edge [color2, very thick] (x7)
(x7) edge [color2, very thick] (x8)
(x8) edge [color2, very thick] (x9)
(x9) edge [color2, very thick] (x10)
(x10) edge [color2, very thick] (x11)
(y0) edge [color2, very thick] (y1)
(y1) edge [color2, very thick] (y2)
(y2) edge [color2, very thick] (y3)
(y3) edge [color2, very thick] (y4)
(y4) edge [color2, very thick] (y5)
(y5) edge [color2, very thick] (y6)
(y6) edge [color2, very thick] (y7)
(y7) edge [color2, very thick] (y8)
(y8) edge [color2, very thick] (y9)
(y9) edge [color2, very thick] (y10)
(y10) edge [color2, very thick] (y11)
(x0) edge [color2, very thick] (y1)
(x1) edge [color2, very thick] (y2)
(x2) edge [color2, very thick] (y3)
(x3) edge [color2, very thick] (y4)
(x4) edge [color2, very thick] (y5)
(x5) edge [color2, very thick] (y6)
(x6) edge [color2, very thick] (y7)
(x7) edge [color2, very thick] (y8)
(x8) edge [color2, very thick] (y9)
(x9) edge [color2, very thick] (y10)
(x10) edge [color2, very thick] (y11)
(y0) edge [color2, very thick] (x1)
(y1) edge [color2, very thick] (x2)
(y2) edge [color2, very thick] (x3)
(y3) edge [color2, very thick] (x4)
(y4) edge [color2, very thick] (x5)
(y5) edge [color2, very thick] (x6)
(y6) edge [color2, very thick] (x7)
(y7) edge [color2, very thick] (x8)
(y8) edge [color2, very thick] (x9)
(y9) edge [color2, very thick] (x10)
(y10) edge [color2, very thick] (x11)
(y0) edge [color2, very thick] (x3)
(y1) edge [color2, very thick] (x4)
(y2) edge [color2, very thick] (x5)
(y3) edge [color2, very thick] (x6)
(y4) edge [color2, very thick] (x7)
(y5) edge [color2, very thick] (x8)
(y6) edge [color2, very thick] (x9)
(y7) edge [color2, very thick] (x10)
(y8) edge [color2, very thick] (x11)
(y9) edge [color2, very thick] (x12)
(y10) edge [color2, very thick] (x13)
(x0) edge [color2, very thick] (y3)
(x1) edge [color2, very thick] (y4)
(x2) edge [color2, very thick] (y5)
(x3) edge [color2, very thick] (y6)
(x4) edge [color2, very thick] (y7)
(x5) edge [color2, very thick] (y8)
(x6) edge [color2, very thick] (y9)
(x7) edge [color2, very thick] (y10)
(x8) edge [color2, very thick] (y11)
(x9) edge [color2, very thick] (y12)
(x10) edge [color2, very thick] (y13)
(x0) edge [color2, very thick, bend right=20] (x3)
(x1) edge [color2, very thick, bend right=20] (x4)
(x2) edge [color2, very thick, bend right=20] (x5)
(x3) edge [color2, very thick, bend right=20] (x6)
(x4) edge [color2, very thick, bend right=20] (x7)
(x5) edge [color2, very thick, bend right=20] (x8)
(x6) edge [color2, very thick, bend right=20] (x9)
(x7) edge [color2, very thick, bend right=20] (x10)
(x8) edge [color2, very thick, bend right=20] (x11)
(x9) edge [color2, very thick, bend right=20] (x12)
(x10) edge [color2, very thick, bend right=20] (x13)
(y0) edge [color2, very thick, bend left=20] (y3)
(y1) edge [color2, very thick, bend left=20] (y4)
(y2) edge [color2, very thick, bend left=20] (y5)
(y3) edge [color2, very thick, bend left=20] (y6)
(y4) edge [color2, very thick, bend left=20] (y7)
(y5) edge [color2, very thick, bend left=20] (y8)
(y6) edge [color2, very thick, bend left=20] (y9)
(y7) edge [color2, very thick, bend left=20] (y10)
(y8) edge [color2, very thick, bend left=20] (y11)
(y9) edge [color2, very thick, bend left=20] (y12)
(y10) edge [color2, very thick, bend left=20] (y13);
\end{tikzpicture}
\caption{The digraph $L_{22}$. Arcs are oriented from left to right.}
\label{fig:defini}
\end{figure}
When constructing $L_{2m}$ and $G_{2m}$, we chose $\vec{X}(m, \{1,3\})$ as the first factor. One could obtain simpler digraphs by selecting $\vec{X}(m, \{1,2\})$. In fact, if $\vec{X}(m, \{1,2\})\, \wr \, \overline{K}_2$ and $\vec{X}(m, \{1,2\})\, \wr \, K^*_2$ were to admit a $\vec{C}_m$-factorization, then a proof akin to the proof of this paper's main result, found in Section 5, would hold. However, it can be shown that $\vec{X}(m, \{1,2\})\, \wr \, \overline{K}_2$ does not admit a $\vec{C}_m$-factorization.
Let $G \in \{H_{2m}, L_{2m}, G_{2m}\}$. Arcs of $G$ of the form $(x_a, x_b)$, $(x_a, y_b)$, $(y_a, y_b)$, and $(y_a, x_b)$ are said to be of \textit{difference $b-a$}, with differences computed modulo $m$ and taken in $\{0, 1, \ldots, m-1\}$. Let $W=v_0 v_1 \ldots v_n$ be a directed walk in $G$ whose consecutive arcs have differences $d_0, d_1, \ldots, d_{n-1}$. If $t=d_0+d_1+\cdots+d_{n-1}$, then we say that the arcs of $W$ \textit{sum to $t$}. A \textit{type-$k$} cycle of $G$ is a directed $m$-cycle whose arcs sum to $km$ for some positive integer $k$. It follows from the definitions of $L_{2m}$ and $G_{2m}$ that the arcs of a directed $m$-cycle in these digraphs sum to at most $3m$.
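The difference sum and the type of a directed cycle are straightforward to compute. A small sketch (names are ours; vertices are encoded as pairs $(a, s)$ with $s\in\{\texttt{"x"},\texttt{"y"}\}$, and each difference is taken as its representative in $\{0,\ldots,m-1\}$):

```python
def walk_type(walk, m):
    """For a closed directed walk given as a vertex list (first vertex repeated
    at the end), sum the arc differences (b - a) mod m and return k with sum = k*m."""
    total = sum((b - a) % m for (a, _), (b, _) in zip(walk, walk[1:]))
    assert total % m == 0   # the differences of a closed walk telescope to 0 mod m
    return total // m

m = 7
c1 = [(i, "x") for i in range(m)] + [(0, "x")]            # x_0 x_1 ... x_{m-1} x_0
c3 = [((3 * i) % m, "x") for i in range(m)] + [(0, "x")]  # m arcs of difference 3
print(walk_type(c1, m), walk_type(c3, m))   # 1 3
```

Since gcd$(3,7)=1$, the second cycle visits all seven indices and its $m$ arcs of difference 3 sum to $3m$, so it is a type-3 cycle.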
In Lemmas \ref{lem:1} and \ref{lem:2} and Proposition \ref{thm:cand}, we construct a $\vec{C}_m$-factorization of $H_{2m}$, $L_{2m}$, and $G_{2m}$, respectively. In these decompositions, each $\vec{C}_m$-factor consists of two disjoint copies of $\vec{C}_m$. The digraphs $H_{2m}$ and $L_{2m}$ require four $\vec{C}_m$-factors in their respective decompositions, while $G_{2m}$ requires five $\vec{C}_m$-factors.
\begin{lemma}
\label{lem:1}
Let $m \geqslant 5$ be an odd integer. The digraph $H_{2m}$ admits a $\vec{C}_m$-factorization.
\end{lemma}
\begin{proof}
Below, we build eight directed cycles of length $m$:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&C^0=x_0 x_1x_2x_3x_4x_5\ldots x_{m-3} x_{m-2} x_{m-1} x_0; \\
&C^2=x_0 y_1 y_2 x_3y_4x_5 \ldots y_{m-3} x_{m-2} y_{m-1} x_0; \\
&C^4=y_0 y_1x_2 y_3x_4y_5\ldots x_{m-3} y_{m-2} x_{m-1} y_0; \\
&C^6=y_0 x_1 y_2 y_3y_4y_5 \ldots y_{m-3} y_{m-2} y_{m-1} y_0; \\
\end{aligned}
$
\par}
\medskip
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&C^1= y_0 y_{m-1} y_{m-2} y_{m-3} \ldots y_5y_4 y_3 y_2 y_1 y_0;\\
&C^3=y_0 x_{m-1} y_{m-2} x_{m-3} \ldots y_5x_4y_3 x_2 x_1 y_0;\\
&C^5= x_0 y_{m-1} x_{m-2} y_{m-3} \ldots x_5y_4 x_3 y_2 x_1x_0;\\
&C^7=x_0 x_{m-1} x_{m-2} x_{m-3} \ldots x_5x_4x_3 x_2 y_1 x_0.\\
\end{aligned}
$
\par}
\medskip
\end{multicols}
It can be verified that, for each even $i$, the directed cycles $C^i$ and $C^{i+1}$ are vertex-disjoint, and that each arc of $H_{2m}$ occurs precisely once in these eight directed cycles. Therefore, the set $\{C^0 \cup C^1, C^2\cup C^3, C^4\cup C^5, C^6\cup C^7\}$ is a $\vec{C}_m$-factorization of $H_{2m}$. \end{proof}
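The eight cycles follow regular alternation patterns, so the verification can also be done by computer for small $m$. The sketch below is our reconstruction of those patterns from the displayed cycles (not part of the paper); it checks that the cycles partition the arcs of $H_{2m}$:

```python
def eight_cycles(m):
    """C^0, ..., C^7 from the proof of the lemma, for odd m >= 5.
    Vertices are pairs (a, s) with s in {"x", "y"}, following x_a = (a, x)."""
    x = lambda i: (i % m, "x")
    y = lambda i: (i % m, "y")
    close = lambda vs: vs + [vs[0]]
    return [
        close([x(i) for i in range(m)]),
        close([y(0)] + [y(i) for i in range(m - 1, 0, -1)]),
        close([x(0), y(1), y(2)] + [x(i) if i % 2 else y(i) for i in range(3, m)]),
        close([y(0)] + [x(i) if i % 2 == 0 else y(i) for i in range(m - 1, 1, -1)] + [x(1)]),
        close([y(0), y(1)] + [x(i) if i % 2 == 0 else y(i) for i in range(2, m)]),
        close([x(0)] + [y(i) if i % 2 == 0 else x(i) for i in range(m - 1, 0, -1)]),
        close([y(0), x(1)] + [y(i) for i in range(2, m)]),
        close([x(0)] + [x(i) for i in range(m - 1, 1, -1)] + [y(1)]),
    ]

def is_factorization_of_H(m):
    """Every arc of H_{2m} (differences +1 and -1) occurs exactly once."""
    arcs = [a for C in eight_cycles(m) for a in zip(C, C[1:])]
    H = {((a, s), ((a + d) % m, t)) for a in range(m)
         for s in "xy" for t in "xy" for d in (1, m - 1)}
    return len(arcs) == len(set(arcs)) and set(arcs) == H

print(all(is_factorization_of_H(m) for m in (5, 7, 9, 11)))   # True
```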
We proceed by building a $\vec{C}_m$-factorization of $L_{2m}$ for all odd $m\geqslant 13$ that are not divisible by three.
\begin{lemma}
\label{lem:2}
Let $m\geqslant 13$ be an odd integer. The digraph $L_{2m}$ admits a $\vec{C}_m$-factorization if and only if $3 \nmid m$.
\end{lemma}
\begin{proof} To prove necessity, we first show that $L_{2m}$ does not contain a type-2 directed $m$-cycle. Suppose that $L_{2m}$ contains a type-2 directed $m$-cycle comprised of $k_1$ arcs of difference 1 and $k_2$ arcs of difference 3. Since $k_1+k_2=m$ and $m$ is odd, the parities of $k_1$ and $k_2$ must differ. Yet, if $2m=k_1+3k_2$, then $k_1$ and $k_2$ must be of the same parity, a contradiction. Now, the differences of the $8m$ arcs of $L_{2m}$ sum to $4m\cdot 1+4m\cdot 3=16m$, and a $\vec{C}_m$-factorization of $L_{2m}$ consists of eight directed $m$-cycles, each of type 1, 2, or 3. Since no type-2 directed $m$-cycle exists, such a factorization must contain exactly four type-1 and four type-3 directed $m$-cycles.
Suppose that $3\,|\,m$. Then one cannot construct a directed walk of length $m$ in $L_{2m}$ using only arcs of difference 3 without repeating at least one vertex. Therefore, the digraph $L_{2m}$ contains no type-3 directed $m$-cycle, and thus does not admit a $\vec{C}_m$-factorization.
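The counting step behind the non-existence of type-2 cycles is elementary and can be confirmed mechanically: the system $k_1+k_2=m$, $k_1+3k_2=2m$ forces $k_2=m/2$, which is impossible for odd $m$. A sketch (names are ours):

```python
def type2_arc_counts(m):
    """All pairs (k1, k2) with k1 + k2 = m (cycle length) and
    k1 + 3*k2 = 2*m (a type-2 difference sum), i.e. k2 = m/2."""
    return [(k1, m - k1) for k1 in range(m + 1) if k1 + 3 * (m - k1) == 2 * m]

assert all(type2_arc_counts(m) == [] for m in range(5, 100, 2))  # impossible for odd m
print(type2_arc_counts(14))   # [(7, 7)]: for even m the count alone rules nothing out
```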
For sufficiency, we construct a $\vec{C}_m$-factorization with four $\vec{C}_m$-factors, each consisting of one type-1 directed cycle of length $m$ and one type-3 directed cycle of length $m$. Let $m=p+6k$ with $p\in \{1,5\}$ and $k\geqslant 2$.
\noindent {\bf Case 1:} $p=1$. To construct our first $\vec{C}_m$-factor, we start by building eight dipaths as follows:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&W_0=y_0y_1 y_2 x_3 x_4 x_5y_{6} y_{7} x_{8} x_{9} x_{10}y_{11} x_{12} y_{13};\\
&X_0=x_2 y_5 y_{8} x_{11}x_{14};\\
&Y_0=x_1 y_4 x_{7} y_{10}x_{13}; \\
&Z_0=x_0 y_3 x_{6} y_{9}y_{12}x_{15}.
\end{aligned}
$
\par}
\medskip
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&Q_0=y_{13}y_{14} y_{15} \ldots y_{m-2}y_{m-1}y_0;\\
&R_0=x_{14}x_{17} x_{20} \ldots x_{m-5}x_{m-2}x_1;\\
&S_0=x_{13}x_{16} x_{19} \ldots x_{m-6}x_{m-3}x_0; \\
&T_0=x_{15}x_{18} x_{21} \ldots x_{m-4}x_{m-1}x_2.
\end{aligned}
$
\par}
\medskip
\end{multicols}
\noindent See Figure \ref{fig:m5.1.1} for an illustration of the dipaths $W_0, X_0, Y_0,$ and $Z_0$.
Next, we address the case $m=13$. In that case, observe that $y_{13}=y_0$, $x_{14}=x_1$, $x_{13}=x_0$, and $x_{15}=x_2$, so that $W_0$ closes into a directed $13$-cycle and $X_0$, $Y_0$, and $Z_0$ concatenate into a second directed $13$-cycle. Therefore, we can form the following $\vec{C}_{13}$-factor: $F_0=W_0 \cup X_0Y_0Z_0$.
\begin{figure}[h!]
\begin{FlushLeft}
\begin{subfigure}[l]{1 \textwidth}
\begin{tikzpicture}[thick, every node/.style={circle,draw=black,fill=black!90, inner sep=1.2}, scale=0.65]
\node (x0) at (0.0,1.0) [label=above:$\scriptstyle x_0 $] {};
\node (x1) at (1.0,1.0) [label=above:$\scriptstyle x_1 $] {};
\node (x2) at (2.0,1.0) [label=above:$\scriptstyle x_2$] {};
\node (x3) at (3.0,1.0) [label=above:$\scriptstyle x_3 $] {};
\node (x4) at (4.0,1.0) [label=above:$\scriptstyle x_4 $] {};
\node (x5) at (5.0,1.0) [label=above:$\scriptstyle x_5 $] {};
\node (x6) at (6.0,1.0) [label=above:$\scriptstyle x_6 $] {};
\node (x7) at (7.0,1.0) [label=above:$\scriptstyle x_7 $] {};
\node (x8) at (8.0,1.0) [label=above:$\scriptstyle x_8 $] {};
\node (x9) at (9.0,1.0) [label=above:$\scriptstyle x_9 $] {};
\node (x10) at (10.0,1.0) [label=above:$\scriptstyle x_{10} $] {};
\node (x11) at (11,1.0) [label=above:$\scriptstyle x_{11} $] {};
\node (x12) at (12,1.0)[label=above:$\scriptstyle x_{12} $] {};
\node (x13) at (13,1.0) [label=above:$\scriptstyle x_{13} $] {};
\node (x14) at (14,1.0) [label=above:$\scriptstyle x_{14} $] {};
\node (x15) at (15,1.0) [label=above:$\scriptstyle x_{15} $] {};
\node (x16) at (16,1.0) [label=above:$\scriptstyle x_{16} $] {};
\node (x17) at (17,1.0) [label=above:$\scriptstyle x_{17} $] {};
\node (x18) at (18,1.0) [label=above:$\scriptstyle x_{18} $] {};
\node (x19) at (19,1.0) [label=above:$\scriptstyle x_{19} $] {};
\node (x20) at (20,1.0) [label=above:$\scriptstyle x_{20} $] {};
\node (x21) at (21,1.0) [label=above:$\scriptstyle x_{21} $] [label=right:$\ldots$]{};
\node(y0) at (0,0) [label=below:$\scriptstyle y_0$]{};
\node(y1) at (1,0) [label=below:$\scriptstyle y_1$]{};
\node(y2) at (2,0) [label=below:$\scriptstyle y_2$]{};
\node(y3) at (3,0) [label=below:$\scriptstyle y_3$]{};
\node(y4) at (4,0) [label=below:$\scriptstyle y_4$]{};
\node(y5) at (5,0) [label=below:$\scriptstyle y_5$]{};
\node(y6) at (6,0) [label=below:$\scriptstyle y_6$]{};
\node(y7) at (7,0) [label=below:$\scriptstyle y_7$]{};
\node(y8) at (8,0) [label=below:$\scriptstyle y_8$]{};
\node(y9) at (9,0) [label=below:$\scriptstyle y_9$]{};
\node(y10) at (10,0) [label=below:$\scriptstyle y_{10}$]{};
\node(y11) at (11,0) [label=below:$\scriptstyle y_{11}$]{};
\node(y12) at (12,0) [label=below:$\scriptstyle y_{12}$]{};
\node(y13) at (13,0) [label=below:$\scriptstyle y_{13}$]{};
\node(y14) at (14,0) [label=below:$\scriptstyle y_{14}$]{};
\node(y15) at (15,0) [label=below:$\scriptstyle y_{15}$]{};
\node(y16) at (16,0) [label=below:$\scriptstyle y_{16}$]{};
\node (y17) at (17,0.0) [label=below:$\scriptstyle y_{17} $] {};
\node (y18) at (18,0.0) [label=below:$\scriptstyle y_{18} $] {};
\node (y19) at (19,0.0) [label=below:$\scriptstyle y_{19} $] {};
\node (y20) at (20,0.0) [label=below:$\scriptstyle y_{20} $] {};
\node (y21) at (21,0.0) [label=below:$\scriptstyle y_{21} $] [label=right:$\ldots$]{};
\path[every node/.style={font=\sffamily\small}]
(x0) edge [very thick, color7](y3)
(y3) edge [very thick, color7](x6)
(x6) edge [very thick, color7](y9)
(y9) edge [very thick, color7, bend left=20](y12)
(y12) edge [very thick, color7](x15)
(x15) edge [very thick, color7, bend right=20](x18)
(x18) edge [very thick, color7, bend right=20](x21)
(x1) edge [very thick, color5](y4)
(y4) edge [very thick, color5](x7)
(x7) edge [very thick, color5](y10)
(y10) edge [very thick, color5](x13)
(x13) edge [very thick, color5, bend right=20](x16)
(x16) edge [very thick, color5, bend right=20](x19)
(x2) edge [very thick, color4](y5)
(y5) edge [very thick, color4, bend left=20](y8)
(y8) edge [very thick, color4](x11)
(x11) edge [very thick, color4, bend right=20](x14)
(x14) edge [very thick, color4, bend right=20](x17)
(x17) edge [very thick, color4, bend right=20](x20)
(y0) edge [very thick, color3](y1)
(y1) edge [very thick, color3](y2)
(y2) edge [very thick, color3](x3)
(x3) edge [very thick, color3](x4)
(x4) edge [very thick, color3](x5)
(x5) edge [very thick, color3](y6)
(y6) edge [very thick, color3] (y7)
(y7) edge [very thick, color3] (x8)
(x8) edge [very thick, color3](x9)
(x9) edge [very thick, color3](x10)
(x10) edge [very thick, color3](y11)
(y11) edge [very thick, color3](x12)
(x12) edge [very thick, color3](y13)
(y13) edge [very thick, color3](y14)
(y14) edge [very thick, color3](y15)
(y15) edge [very thick, color3](y16)
(y16) edge [very thick, color3](y17)
(y17) edge [very thick, color3](y18)
(y18) edge [very thick, color3](y19);
\end{tikzpicture}
\caption{The dipaths $W_0Q_0$ (red), $X_0R_0$ (dark blue), $Y_0S_0$ (light blue), and $Z_0T_0$ (green).}
\label{fig:m5.1.1}
\end{subfigure}
\begin{subfigure}[l]{1.0\textwidth}
\hfill
\begin{tikzpicture}[thick, every node/.style={circle,draw=black,fill=black!90, inner sep=1.2}, scale=1]
\node (x0) at (0.0,1.0) [label={[label distance=-0.2cm]above:$\scriptstyle x_{m-6}$}][label=left:$\ldots$] {};
\node (x1) at (1.0,1.0) [label={[label distance=-0.2cm]above:$\scriptstyle x_{m-5}$}]{};
\node (x2) at (2.0,1.0)[label={[label distance=-0.2cm]above:$\scriptstyle x_{m-4}$}]{};
\node (x3) at (3.0,1.0)[label={[label distance=-0.2cm]above:$\scriptstyle x_{m-3}$}]{};
\node (x4) at (4.0,1.0) [label={[label distance=-0.2cm]above:$\scriptstyle x_{m-2}$}]{};
\node (x5) at (5.0,1.0)[label={[label distance=-0.2cm]above:$\scriptstyle x_{m-1}$}]{};
\node (x6) at (6.0,1.0)[draw=gray, fill=gray] [label=above:$\scriptstyle x_{0} $] {};
\node (x7) at (7.0,1.0) [draw=gray, fill=gray] [label=above:$\scriptstyle x_{1} $] {};
\node (x8) at (8.0,1.0) [draw=gray, fill=gray] [label=above:$\scriptstyle x_{2} $] {};
\node(y0) at (0,0) [label={[label distance=-0.2cm]below:$\scriptstyle y_{m-6}$}][label=left:$\ldots$] {};
\node(y1) at (1,0) [label={[label distance=-0.2cm]below:$\scriptstyle y_{m-5}$}]{};
\node(y2) at (2,0) [label={[label distance=-0.2cm]below:$\scriptstyle y_{m-4}$}]{};
\node(y3) at (3,0) [label={[label distance=-0.2cm]below:$\scriptstyle y_{m-3}$}]{};
\node(y4) at (4,0) [label={[label distance=-0.2cm]below:$\scriptstyle y_{m-2}$}]{};
\node(y5) at (5,0) [label={[label distance=-0.2cm]below:$\scriptstyle y_{m-1}$}]{};
\node(y6) at (6,0) [draw=gray, fill=gray] [label=below:$\scriptstyle y_{0}$]{};
\node(y7) at (7,0) [draw=gray, fill=gray] [label=below:$\scriptstyle y_{1}$]{};
\node(y8) at (8,0) [draw=gray, fill=gray] [label=below:$\scriptstyle y_{2}$]{};
\path[every node/.style={font=\sffamily\small}]
(x1) edge [very thick, color4, bend right=20](x4)
(x4) edge [very thick, color4, bend right=20](x7)
(x2) edge [very thick, color7, bend right=20](x5)
(x5) edge [very thick, color7, bend right=20](x8)
(x0) edge [very thick, color5, bend right=20](x3)
(x3) edge [very thick, color5, bend right=20](x6)
(y0) edge [very thick, color3](y1)
(y1) edge [very thick, color3](y2)
(y2) edge [very thick, color3](y3)
(y3) edge [very thick, color3](y4)
(y4) edge [very thick, color3](y5)
(y5) edge [very thick, color3](y6);
\end{tikzpicture}
\caption{The last six arcs of $Q_0$ (red) and the last two arcs of $R_0$ (dark blue), $S_0$ (light blue), and $T_0$ (green).}
\label{fig:m5.1.2}
\end{subfigure}
\end{FlushLeft}
\caption{The key subdipaths in the construction of $F_0$ for $p=1$.}
\label{fig2}
\end{figure}
In Figure \ref{fig2}, we illustrate $Q_0, R_0, S_0$, and $T_0$. Now, suppose that $m>13$. The dipath $Q_0$ is of length $m-13=6(k-2)$, while the dipaths in $\{R_0, S_0, T_0\}$ are each of length $2(k-2)$. We also point out that the dipaths in $\{Q_0, R_0, S_0, T_0\}$ are pairwise disjoint. We can then form the following directed $m$-cycles:
\begin{center}
$C^0=W_0Q_0$, and $C^1=X_0R_0Y_0S_0Z_0T_0$.
\end{center}
\noindent It can be verified that $C^0$ and $C^1$ are both of length $13+6(k-2)=m$, that they are vertex-disjoint, and that neither repeats a vertex except for its endpoints. In turn, this means that together they span the digraph $L_{2m}$, and thus they form a $\vec{C}_m$-factor denoted $F_0$.
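This verification also lends itself to a computer check. The sketch below (our encoding of the dipaths, not the paper's; vertices are pairs $(a,s)$ with indices reduced modulo $m$, so the degenerate $m=13$ case is handled uniformly) builds $C^0$ and $C^1$ and confirms the factor property:

```python
def f0_factor(m):
    """C^0 = W_0Q_0 and C^1 = X_0R_0Y_0S_0Z_0T_0 for m = 1 + 6k, k >= 2."""
    X = lambda i: (i % m, "x")
    Y = lambda i: (i % m, "y")
    W0 = [Y(0), Y(1), Y(2), X(3), X(4), X(5), Y(6), Y(7),
          X(8), X(9), X(10), Y(11), X(12), Y(13)]
    Q0 = [Y(i) for i in range(13, m)] + [Y(0)]
    X0 = [X(2), Y(5), Y(8), X(11), X(14)]
    R0 = [X(i) for i in range(14, m - 1, 3)] + [X(1)]
    Y0 = [X(1), Y(4), X(7), Y(10), X(13)]
    S0 = [X(i) for i in range(13, m - 2, 3)] + [X(0)]
    Z0 = [X(0), Y(3), X(6), Y(9), Y(12), X(15)]
    T0 = [X(i) for i in range(15, m, 3)] + [X(2)]

    def concat(*paths):                       # glue dipaths sharing an endpoint
        out = list(paths[0])
        for p in paths[1:]:
            assert out[-1] == p[0]
            out.extend(p[1:])
        return out

    return concat(W0, Q0), concat(X0, R0, Y0, S0, Z0, T0)

for m in (13, 19, 25, 31):
    C0, C1 = f0_factor(m)
    for C in (C0, C1):                        # each is a directed m-cycle
        assert C[0] == C[-1] and len(set(C[:-1])) == m
    assert set(C0) | set(C1) == {(a, s) for a in range(m) for s in "xy"}
print("F0 is a C_m-factor for every tested m")
```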
\pagebreak
We proceed by building our second $\vec{C}_m$-factor $F_1$. First, we build four dipaths:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&W_1=x_0 x_1 x_2 y_3 x_4 y_5 y_{6} x_{7} x_{8} y_{9} y_{10} x_{11} y_{12} x_{13};\\
&X_1=y_2 x_5 y_{8} y_{11} y_{14};\\
&Y_1=y_1 y_4 y_{7} x_{10} y_{13}; \\
&Z_1=y_0 x_3 x_{6} x_{9} x_{12} y_{15}.
\end{aligned}
$
\par}
\medskip
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&Q_1=x_{13}x_{14} x_{15} \ldots x_{m-2} x_{m-1} x_0;\\
&R_1=y_{14} y_{17}y_{20} \ldots y_{m-5} y_{m-2}y_{1};\\
&S_1=y_{13}y_{16} y_{19} \ldots y_{m-6} y_{m-3} y_0; \\
&T_1=y_{15}y_{18} y_{21} \ldots y_{m-4} y_{m-1} y_2.
\end{aligned}
$
\par}
\medskip
\end{multicols}
If $m=13$, then $W_1 \cup X_1Y_1Z_1$ is a $\vec{C}_{13}$-factor of $L_{26}$. If $m>13$, we form the following pair of directed cycles of length $m$:
\begin{center}
$C^2=W_1Q_1$, and $C^3=X_1R_1Y_1S_1Z_1T_1$.
\end{center}
It can be verified that $C^2$ and $C^3$ are disjoint and that their union spans $L_{2m}$. Therefore, we have a $\vec{C}_m$-factor $F_1=C^2\cup C^3$.
Now, we form our third $\vec{C}_m$-factor, $F_2$. We do so by first forming eight dipaths as follows:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&W_2=x_0 y_1 x_2 x_3 y_4x_5 x_{6} x_{7} y_{8} y_{9} x_{10} x_{11} x_{12} x_{13}; \\
&X_2=y_2 y_5 x_{8} y_{11} x_{14};\\
&Y_2=x_1 x_4 y_{7} y_{10} y_{13};\\
&Z_2=y_0 y_3 y_{6} x_{9} y_{12} y_{15};
\end{aligned}
$
\par}
\medskip
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&Q_2= x_{13} y_{14} x_{15} y_{16} \ldots y_{m-3} x_{m-2} y_{m-1} x_0;\\
&R_2= x_{14} y_{17} x_{20} y_{23} \ldots x_{m-5} y_{m-2} x_1;\\
&S_2= y_{13}x_{16} y_{19} x_{22}\ldots y_{m-6} x_{m-3} y_0;\\
&T_2=y_{15} x_{18} y_{21} x_{24}\ldots y_{m-4} x_{m-1} y_2.
\end{aligned}
$
\par}
\medskip
\end{multicols}
If $m=13$, then $C^4=W_2$ and $C^5=X_2Y_2Z_2$ form a $\vec{C}_{13}$-factor of $L_{26}$. Otherwise, we form the $\vec{C}_m$-factor $F_2=C^4 \cup C^5$, where $C^4=W_2Q_2$ and $C^5=X_2R_2Y_2S_2Z_2T_2$.
\pagebreak
Lastly, we build our fourth $\vec{C}_m$-factor $F_3$. We start with the following eight dipaths:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&W_3=y_0x_1 y_2 y_3y_4y_5 x_{6} y_{7}y_{8}x_{9}y_{10} y_{11} y_{12} y_{13};\\
&X_3=x_2 x_5 x_{8}x_{11}y_{14};\\
&Y_3=y_1 x_4 x_{7}x_{10} x_{13};\\
&Z_3=x_0 x_3 y_{6}y_{9}x_{12} x_{15};
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_3=y_{13}x_{14} y_{15} x_{16} \ldots y_{m-2} x_{m-1} y_0;\\
&R_3=y_{14}x_{17} y_{20} x_{23} \ldots y_{m-5} x_{m-2} y_{1};\\
&S_3=x_{13}y_{16} x_{19} y_{22} \ldots x_{m-6} y_{m-3} x_0;\\
&T_3=x_{15} y_{18} x_{21} y_{24} \ldots x_{m-4} y_{m-1} x_2.
\end{aligned}
$
\par}
\medskip
\end{multicols}
If $m=13$, then $W_3 \cup X_3Y_3Z_3$ is a $\vec{C}_{13}$-factor. If $m>13$, then the union of the following two directed cycles of length $m$ is the $\vec{C}_m$-factor $F_3$: $C^6=W_3Q_3$ and $C^7=X_3R_3Y_3S_3Z_3T_3$.
It is laborious yet routine to verify that each arc of $L_{2m}$ appears exactly once in $\{F_0, F_1, F_2,$ $F_3\}$. Therefore, the set $\{F_0, F_1, F_2, F_3\}$ is a $\vec{C}_m$-factorization of $L_{2m}$.
\noindent {\bf Case 2:} $p=5$. First, we build a set of 12 dipaths, each of length $2k-3$ or $2k-2$:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&R'_0=x_{14} x_{17} x_{20} \ldots x_{m-6} x_{m-3} x_0;\\
&S'_0= x_{15}x_{18} x_{21} \ldots x_{m-5} x_{m-2}x_1;\\
&T'_0=x_{13}x_{16} x_{19} \ldots x_{m-4} x_{m-1} x_2;\\
&R'_1=y_{14}y_{17} y_{20} \ldots y_{m-6} y_{m-3} y_{0};\\
&S'_1=y_{15} y_{18} y_{21} \ldots y_{m-5} y_{m-2} y_1;\\
&T'_1=y_{13} y_{16} y_{19} \ldots y_{m-4}y_{m-1} y_2;\\
\end{aligned}
$
\par}
\medskip
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&R'_2=x_{14} y_{17} x_{20} y_{23} \ldots y_{m-6} x_{m-3} y_0;\\
&S'_2=y_{15} x_{18} y_{21} x_{24} \ldots x_{m-5} y_{m-2} x_1;\\
&T'_2=y_{13} x_{16} y_{19} x_{22} \ldots y_{m-4} x_{m-1} y_2;\\
&R'_3=y_{14} x_{17} y_{20} x_{23} \ldots x_{m-6} y_{m-3} x_0; \\
&S'_3=x_{15} y_{18} x_{21} y_{24} \ldots y_{m-5} x_{m-2} y_{1}; \\
&T'_3=x_{13}y_{16} x_{19} y_{22} \ldots x_{m-4} y_{m-1} x_2.\\
\end{aligned}
$
\par}
\medskip
\end{multicols}
We will also use certain dipaths from Case 1 to construct a $\vec{C}_m$-factorization. Observe that, for each $i\in \{0,1,2,3\}$, dipaths in $\{Q_i, R'_i, S'_i, T'_i\}$ are pairwise disjoint. We now build eight directed cycles of length $m$:
\medskip
{\centering
$ \displaystyle
\begin{aligned}
& C^0=W_0Q_0; & C^4=X_0R'_0Z_0S'_0Y_0T'_0; \\
& C^1=W_1Q_1; & C^5=X_1R'_1Z_1S'_1Y_1T'_1;\\
& C^2=W_2Q_2; & C^6=X_2R'_2Z_2S'_2Y_2T'_2;\\
& C^3=W_3Q_3; & C^7=X_3R'_3Z_3S'_3Y_3T'_3.
\end{aligned}
$
\par}
\medskip
For $i=0,1,2,3$, it can be verified that $F_i=C^i \cup C^{i+4}$ is a $\vec{C}_m$-factor of $L_{2m}$. Furthermore, it can also be verified that each arc of $L_{2m}$ occurs exactly once in $\{F_0, F_1, F_2, F_3\}$. Consequently, the set $\{F_0, F_1, F_2, F_3\}$ is a $\vec{C}_m$-factorization of $L_{2m}$. \end{proof}
\section{Decomposition of $G_{2m}$}
The purpose of this section is to construct a $\vec{C}_m$-factorization of $G_{2m}$. First, we introduce some notation.
\begin{notation} \rm
\label{not:bas}
Let $m=p+12k$ for some positive integer $k$ and $p\in\{11, 13, 17,19\}$. We let $V_0=\{x_0, x_1, \ldots, x_{p-1}\}\cup \{y_0, y_1, \ldots, y_{p-1}\}$, and $V_i=\{x_{p+12(i-1)}, x_{p+12(i-1)+1}, \ldots, x_{p+12i-1}\} \cup \{y_{p+12(i-1)}, y_{p+12(i-1)+1}, \ldots,$ $y_{p+12i-1}\}$ for $i=1,\ldots, k$.
\end{notation}
Observe that $V_0$ contains $2p$ vertices while each $V_i$ contains $24$ vertices for $i=1,2,\ldots , k$. Next, we define two functions on $V(G_{2m})$.
\begin{definition} \rm
Let $m=p+12k$ for some positive integer $k$ and $p$ odd. Define a function $\rho:V(G_{2m}) \rightarrow V(G_{2m})$ as follows: $\rho(x_i)=x_{i+1}$ and $\rho(y_i)=y_{i+1}$ with addition done modulo $m$. Define another function $\pi: V(G_{2m}) \rightarrow V_0$ as follows: $\pi(x_i)=x_j$ and $\pi(y_i)=y_j$ for $j \in \{0,1,\ldots, p-1\}$ such that $i \equiv j \ (\textrm{mod}\ p)$.
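Both maps are simple index manipulations; here is a sketch (ours, with vertices again encoded as pairs $(a,s)$):

```python
def rho(v, m, power=1):
    """rho^power: shifts indices, rho(x_i) = x_{i+1}, with addition modulo m."""
    i, s = v
    return ((i + power) % m, s)

def pi(v, p):
    """pi: reduces indices modulo p, mapping every vertex of G_{2m} into V_0."""
    i, s = v
    return (i % p, s)

m, p = 23, 11                                   # m = p + 12k with k = 1
assert rho((m - 1, "x"), m) == (0, "x")         # wraps around
assert rho((5, "x"), m, power=12) == (17, "x")  # rho^12, the shift applied to dipaths below
assert pi((p + 3, "y"), p) == (3, "y")
print("ok")
```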
\end{definition}
The aim of Lemmas \ref{lem:list2}, \ref{lem:list1}, and \ref{lem:list3} is to reduce the decomposition of $G_{2m}$ into $\vec{C}_m$-factors to the existence of a set of 32 dipaths of specific lengths. The properties that these dipaths must satisfy are given in Definitions \ref{defn:basic2} and \ref{defn:basic1}.
\begin{definition} \rm
\label{defn:basic2}
Let $m=p+12k$ for some non-negative integer $k$ and $p$ odd. The set $\{W, X, Y, Z\}\cup \{Q, R, S, T\}$ is called a \textit{type-2 basic set of dipaths of $G_{2m}$} if it satisfies the following properties.
\begin{enumerate} [label=\textbf{C\arabic*.}]
\item Dipaths in $\{W, X, Y, Z\}$ are pairwise arc-disjoint; likewise for dipaths in $\{Q, R, S, T\}$.
\item The concatenations $\pi(W)\pi(X)$ and $\pi(Y)\pi(Z)$ are possible in $G_{2p}$, and their union forms a $\vec{C}_p$-factor of $G_{2p}$.
\item We have $\textrm{len}(Q)+\textrm{len}(R)= \textrm{len}(S)+\textrm{len}(T)=12$.
\item Each of $Q, R, S,$ and $T$ is an $(x_t, x_{t+12})$-dipath or a $(y_t, y_{t+12})$-dipath in $G_{2m}$ with $t \in \{p, p+1, p+2\}$.
\item The following concatenations are possible: $WQ, XR, YS,$ and $ZT$.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lem:list2}
Let $m=p+12k$ with $p$ odd and $k$ a non-negative integer. Suppose that $\{W, X, Y, Z\} \cup\{P_0^0, P_0^1, Q_0^0, Q_0^1\}$ is a type-2 basic set of dipaths of $G_{2m}$. Then, for each $i \in \{0,1\}$ and $j \in \{1,2, \ldots, k-1\}$, let $P_j^i=\rho^{12j}(P^i_0) $ and $Q^i_j=\rho^{12j}(Q^i_0)$. These dipaths can be concatenated as follows:
\begin{center}
$C^0=WP_0^0P^0_1\ldots P^0_{k-1}XP^1_0P_1^1\ldots P^1_{k-1}$; $C^1=YQ_0^0Q_1^0\ldots Q_{k-1}^0ZQ_0^1Q_1^1\ldots Q_{k-1}^1$.
\end{center}
\noindent Moreover, the directed walks $C^0$ and $C^1$ are type-2 directed $m$-cycles and $C^0 \cup C^1$ is a $\vec{C}_m$-factor of $G_{2m}$.
\end{lemma}
\begin{proof}
Let $t \in \{p, p+1, p+2\}$. If $P^0_0$ is an $(x_t, x_{t+12})$-dipath, then $P^0_j$ is an $(x_{t+12j}, x_{t+12(j+1)})$-dipath. Otherwise, if $P^0_0$ is a $(y_t, y_{t+12})$-dipath, then $P^0_j$ is a $(y_{t+12j}, y_{t+12(j+1)})$-dipath. The two dipaths $P^0_j$ and $P^0_{j+1}$ share precisely one vertex, namely $x_{t+12(j+1)}$ in the former case, or $y_{t+12(j+1)}$ in the latter case. A similar observation holds for $P^1_j$ and $P^1_{j+1}$. Lastly, if $|j-i|>1$, then $P^0_i$ and $P^0_j$ are disjoint; likewise for $P^1_i$ and $P^1_j$. We then form the following two dipaths:
\begin{center}
$I_0=P_0^0P^0_1\ldots P^0_{k-1}$ and $I_1=P^1_0P_1^1\ldots P^1_{k-1}$.
\end{center}
\noindent By C3 of Definition \ref{defn:basic2}, it follows that $\textrm{len}(I_0)+\textrm{len}(I_1)=12k$.
Properties C2, C4 and C5 of Definition \ref{defn:basic2} jointly imply that, if $x_t$ ($y_t$) is the last vertex of $W$, then $x_{t+12k}$ $(y_{t+12k})$ (subscripts are modulo $m$) is the first vertex of $X$. Similarly, if $x_t$ $(y_t)$ is the last vertex of $X$, then $x_{t+12k}$ $(y_{t+12k})$ is the first vertex of $W$. Therefore, given that $x_t$ ($y_t$) is the first vertex of $P^0_0$, it follows that $x_{t+12k}$ $(y_{t+12k}$ respectively) is the last vertex of $P^0_{k-1}$. Consequently, the last vertex of $I_0$ is the first vertex of $X$. A similar observation holds for $I_1$ and $W$. By C2 of Definition \ref{defn:basic2}, it follows that $\textrm{len}(W)+\textrm{len}(X)=p$. Hence, we see that $C^0=WI_0XI_1$ is a directed cycle of length $m$. A similar reasoning applies to $C^1$.
By C1 of Definition \ref{defn:basic2}, dipaths in $\{ P^0_0, P^1_0, Q^0_0, Q^1_0\}$ are pairwise disjoint. Therefore, for each $j$, dipaths in $\{P^0_j, P^1_j, Q^0_j, Q^1_j\}$ are also pairwise disjoint. Since dipaths in $\{W, X, Y, Z\}$ are pairwise disjoint, it follows that $C^0$ and $C^1$ are disjoint and thus form a $\vec{C}_m$-factor.\end{proof}
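The index-shifting argument in this proof can be illustrated concretely: applying $\rho^{12}$ to a dipath advances every subscript by 12, so consecutive rotated copies overlap in exactly one vertex and glue end to end. The sample dipath below is hypothetical (we do not check that its arcs actually lie in $G_{2m}$).

```python
def rotate(path, s, m):
    """Apply rho^s to a dipath given as a list of (class, index) vertices."""
    return [(c, (i + s) % m) for (c, i) in path]

m = 35                                               # p = 11, k = 2
P0 = [('x', 11), ('x', 15), ('x', 18), ('x', 23)]    # an (x_11, x_23)-dipath
P1 = rotate(P0, 12, m)                               # an (x_23, x_0)-dipath
assert P1[0] == P0[-1]                # endpoints glue for concatenation
assert len(set(P0) & set(P1)) == 1    # exactly one shared vertex
```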
\begin{definition} \rm
\label{defn:basic1}
Let $m=p+12k$ for $p$ odd and $k$ a non-negative integer. The set $\{X, Y\}\cup \{R,S\}$ is called a \textit{type-1 basic set of dipaths of $G_{2m}$} if it satisfies the following properties.
\begin{enumerate}[label=\textbf{C\arabic*.}]
\item Dipaths $X$ and $Y$ are disjoint; likewise $R$ and $S$.
\item The union of $\pi(X)$ with $\pi(Y)$ is a $\vec{C}_p$-factor of $G_{2p}$.
\item We have $\textrm{len}(R)=\textrm{len}(S)=12$.
\item Each of $R$ and $S$ is an $(x_t, x_{t+12})$-dipath or a $(y_t, y_{t+12})$-dipath with $t \in \{p, p+1, p+2\}$.
\item The following concatenations are possible: $XR$ and $YS$.
\end{enumerate}
\end{definition}
\begin{lemma}
\label{lem:list1}
Let $m=p+12k$ with $p$ odd and $k$ a non-negative integer. Suppose that $\{X, Y\} \cup\{P_0,Q_0\}$ is a type-1 basic set of dipaths of $G_{2m}$. Then, for each $j \in \{1,2, \ldots, k-1\}$, let $P_j=\rho^{12j}(P_0)$ and $Q_j=\rho^{12j}(Q_0)$. These dipaths can be concatenated as follows:
\begin{center}
$C^0=XP_0P_1\ldots P_{k-1}$; $C^1=YQ_0Q_1\ldots Q_{k-1}$.
\end{center}
\noindent Moreover, the directed walks $C^0$ and $C^1$ each form a type-1 directed $m$-cycle and $C^0\cup C^1$ is a $\vec{C}_m$-factor of $G_{2m}$.
\end{lemma}
\begin{proof}
If the last vertex of $X$ is $x_t$ for some $t\in \{p, p+1, p+2\}$, then the first vertex of $P_0$ is also $x_t$. We then see that $P_j$ must be an $(x_{t+12j}, x_{t+12(j+1)})$-dipath for all $j$. If the last vertex of $X$ is $y_t$, then $P_j$ must be a $(y_{t+12j}, y_{t+12(j+1)})$-dipath for all $j$. Consequently, $P_j$ and $P_{j+1}$ share precisely one vertex. Otherwise, if $|i-j|>1$, then $P_i$ and $P_j$ do not have a common vertex. Hence, these dipaths can be concatenated as follows: $C^0=XP_0P_1\ldots P_{k-1}$.
Observe that C2, C4, and C5 of Definition \ref{defn:basic1} jointly imply that, if the last vertex of $X$ is $x_t$ ($y_t$), then the first vertex of $X$ is of the form $x_{t+12k}$ ($y_{t+12k}$). Therefore, the directed walk $C^0$ is closed and has no repeated vertices. By C3 of Definition \ref{defn:basic1}, each $P_j$ must be of length 12 and $X$ must be of length $p$ by C2 of Definition \ref{defn:basic1}. Consequently, we have that $C^0$ is a directed $m$-cycle. A similar argument applies to $C^1$.
Next, by property C1, we have that $X$ and $Y$ are disjoint. Moreover, since $P_0$ and $Q_0$ are also disjoint, it follows that $P_j$ and $Q_j$ are disjoint for all $j$. Similarly, it also follows that $P_i$ and $Q_j$ are also disjoint when $i \neq j$. In conclusion, the pair of directed cycles $C^0$ and $C^1$ forms a $\vec{C}_m$-factor of $G_{2m}$. \end{proof}
\begin{lemma}
\label{lem:list3}
If $G_{2m}$ admits three type-2 basic sets of dipaths and two type-1 basic sets of dipaths such that the dipaths in these five sets are pairwise arc-disjoint, then $G_{2m}$ admits a $\vec{C}_m$-factorization.
\end{lemma}
\begin{proof}
Let $A_1, A_2$, and $A_3$ be type-2 basic sets of dipaths and $A_4$ and $A_5$ be type-1 basic sets of dipaths such that the 32 dipaths in $S=A_1\cup A_2 \cup A_3 \cup A_4 \cup A_5$ are pairwise arc-disjoint.
By Lemma \ref{lem:list2}, each $A_i$ for $i \in \{1,2,3\}$ gives rise to a $\vec{C}_m$-factor $F_i$ of $G_{2m}$ consisting of two type-2 directed $m$-cycles. In addition, by Lemma \ref{lem:list1}, each $A_i$ for $i \in \{4,5\}$ gives rise to a $\vec{C}_m$-factor $F_i$ consisting of two type-1 directed $m$-cycles. It remains to be shown that the $F_i$, with $i \in \{1,2,\ldots, 5\}$, are pairwise arc-disjoint.
Suppose that the arc $a=(x_s, x_t)$, where $x_s\in V_j$ for some $j>1$, occurs twice in the $\vec{C}_m$-factors $F_1, \ldots, F_5$. By the construction of $F_1, \ldots, F_5$ from Lemmas \ref{lem:list2} and \ref{lem:list1}, it follows that the arc $a'=(x_{s-12(j-1)}, x_{t-12(j-1)})$ also occurs twice in $F_1, \ldots, F_5$. However, the tail of $a'$ lies in $V_1$, so $a'$ appears in a dipath of $S$. By assumption, the arc $a'$ appears exactly once in $S$, a contradiction.
An analogous argument applies if $a$ is of the form $(y_s, y_t)$, $(y_s, x_t)$, and $(x_s, y_t)$. Therefore, each arc of $G_{2m}$ occurs at most once in $F_1,\ldots, F_5$. Since $G_{2m}$ is of degree five, it follows that every arc occurs exactly once. Therefore, the set $\{F_1, \ldots, F_5\}$ is a $\vec{C}_m$-factorization of $G_{2m}$.
\end{proof}
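The arc-disjointness bookkeeping in this argument is mechanical, and a small helper makes the hypothesis of Lemma \ref{lem:list3} checkable by computer; the toy dipaths below are made up for illustration and are not taken from the paper's constructions.

```python
from itertools import combinations

def arcs(path):
    """Set of arcs (ordered vertex pairs) traversed by a dipath."""
    return set(zip(path, path[1:]))

def pairwise_arc_disjoint(paths):
    """True iff no arc is used by two distinct dipaths in the list."""
    return all(arcs(a).isdisjoint(arcs(b)) for a, b in combinations(paths, 2))

# Toy dipaths on integer vertices:
assert pairwise_arc_disjoint([[0, 1, 2], [2, 3, 0]])
assert not pairwise_arc_disjoint([[0, 1, 2], [3, 1, 2]])  # both use arc (1, 2)
```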
The implications of Lemma \ref{lem:list3} are simple. To construct a $\vec{C}_m$-factorization of $G_{2m}$, it suffices to build three type-2 basic sets of dipaths and two type-1 basic sets of dipaths such that the dipaths in the union of these five sets are pairwise arc-disjoint.
In the proof of Proposition \ref{thm:cand}, we consider four cases, one for each $p\in \{11, 13, 17, 19\}$. This yields a construction for each congruence class modulo 12 with $m$ odd and $m \not\equiv 0 \ (\textrm{mod}\ 3)$. In each case, we construct three type-2 basic sets of dipaths and two type-1 basic sets. It is straightforward, though tedious, to verify that the hypothesis of Lemma \ref{lem:list3} is satisfied by these five sets of dipaths. To aid in the verification of these properties, we illustrate all dipaths constructed in the proof of Proposition \ref{thm:cand} in Appendix A.
\begin{prop}
\label{thm:cand}
Let $m\geqslant 11$ be an odd integer with $m \not\equiv 0 \ (\textrm{mod}\ 3)$. The digraph $G_{2m}$ admits a $\vec{C}_m$-factorization.
\end{prop}
\begin{proof}
Let $m=p+12k$ for $p \in \{11,13,17,19\}$ and $k$ a non-negative integer. Throughout this proof, we shall refer to the notation introduced in Notation \ref{not:bas}. In each case, we construct three type-2 basic sets of dipaths $L_0, L_1$, and $L_2$, and two type-1 basic sets of dipaths $L_3$ and $L_4$. In each case, it can then be verified that the dipaths in $L_0\cup L_1\cup L_2 \cup L_3 \cup L_4$ are pairwise arc-disjoint, thereby satisfying the hypothesis of Lemma \ref{lem:list3}.
\noindent {\bf Case 1:} $p=11$. Dipaths for this case are illustrated in Figures \ref{fig:m11.1.1}-\ref{fig:m11.5.2}.
Dipaths in type-2 basic set $L_0=\{X_0, Y_0, W_0, Z_0\}\cup\{Q_0, R_0, S_0, T_0\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_0=x_0 x_3 x_4 y_7 x_7 y_8 y_{11};\\
&X_0=y_0 y_1 y_4 y_5 x_8 x_{11};\\
&Y_0=x_1 y_2 x_5 x_6 x_9 x_{10} y_{10} x_{13};\\
&Z_0=x_2 y_3 y_6 y_9 x_{12};
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_0=y_{11} y_{14} y_{17} y_{18} y_{19} x_{22} y_{23};\\
&R_0=x_{11} y_{12} y_{13} x_{16} x_{17} y_{20} x_{23};\\
&S_0=x_{13} x_{14} x_{15} y_{16} x_{19} x_{20} x_{21} y_{22} x_{25};\\
&T_0=x_{12} y_{15} x_{18} y_{21} x_{24}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-2 basic set $L_1=\{X_1, Y_1, W_1, Z_1\}\cup\{Q_1, R_1, S_1, T_1\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_1=x_0 y_1 x_4 y_5 y_8 y_9 x_9 x_{12};\\
&X_1=x_1 y_4x_7 y_{10} x_{11};\\
&Y_1=y_0 y_3x_3 x_6 y_6 y_7 x_{10} y_{13};\\
&Z_1=y_2 x_2 x_5 x_8 y_{11};
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_1=x_{12} y_{12} y_{15} x_{15} x_{18} y_{18} y_{21} x_{21} x_{24} ;\\
&R_1=x_{11} y_{14} x_{17} x_{20} x_{23};\\
&S_1=y_{13} x_{13} y_{16} x_{16} y_{19} x_{19} y_{22} x_{22} y_{25};\\
&T_1=y_{11} x_{14} y_{17} y_{20} y_{23}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-2 basic set $L_2=\{X_2, Y_2, W_2, Z_2\}\cup\{Q_2, R_2, S_2, T_2\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_2=x_0 y_3 x_6 y_9 y_{10}y_{11};\\
&X_2=y_0 x_3 y_4 y_7 x_8 y_8 x_{11};\\
&Y_2=y_1 x_1 x_4 x_7 x_{10} x_{13};\\
&Z_2=x_2 y_2 y_5 x_5 y_6 x_9 y_{12};
\medskip
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_2=y_{11} x_{12} y_{13} y_{16} y_{17} x_{20} y_{23};\\
&R_2=x_{11} x_{14} x_{17} x_{18} y_{19} y_{22} x_{23};\\
&S_2=x_{13} y_{14} y_{15} x_{16} x_{19} y_{20} y_{21} x_{22} x_{25};\\
&T_2=y_{12} x_{15} y_{18} x_{21} y_{24}.
\end{aligned}
$
\par}
\end{multicols}
\pagebreak
Dipaths in type-1 basic set $L_3=\{X_3, Y_3,\}\cup\{ R_3, S_3\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&X_3=x_0 y_0 x_1y_1 x_2 x_3 y_6 x_7x_8y_9 x_{10}x_{11};\\
&Y_3=y_2 y_3 x_4y_4x_5y_5 x_6 y_7 y_8 x_9 y_{10} y_{13};\\
\medskip
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&\hspace{-0.6cm} R_3=x_{11} y_{11} y_{12} x_{12} x_{13} x_{16} y_{17} x_{17} y_{18} x_{18} x_{19} x_{22} x_{23};\\
&\hspace{-0.6cm} S_3=y_{13} x_{14} y_{14} x_{15} y_{15} y_{16} y_{19} x_{20} y_{20} x_{21}y_{21} y_{22} y_{25}.\\
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-1 basic set $L_4=\{X_4, Y_4,\}\cup\{ R_4, S_4\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&X_4=y_0 x_0 x_1 x_2y_5 y_6 x_6x_7 y_7 y_{10} x_{10} y_{11};\\
&Y_4=y_1 y_2 x_3 y_3 y_4 x_4 x_5 y_8 x_8 x_8 y_9 y_{12}; \\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&\hspace{-0.6cm} R_4=y_{11} x_{11} x_{12} x_{15} x_{16} y_{16} x_{17}y_{17} x_{18} x_{21} x_{22} y_{22} y_{23};\\
&\hspace{-0.6cm} S_4=y_{12} x_{13} y_{13}y_{14} x_{14}y_{15}y_{18}x_{19} y_{19} y_{20} x_{20}y_{21} y_{24}.\\
\end{aligned}
$
\par}
\end{multicols}
It is tedious, but straightforward, to verify that, for each vertex $x_i$ ($y_i$), each of the five arcs with tail $x_i$ ($y_i$) appears in a dipath in $\{W_j, X_j, Y_j, Z_j: j=0,1,2\} \cup \{X_j, Y_j: j=3,4\}$ exactly once. An analogous claim holds for vertices $x_i$ ($y_i$) in $V_1$ and the set of dipaths $\{Q_j, R_j, S_j, T_j: j=0,1,2\}\cup \{R_j, S_j: j=3,4\}$. It follows that the dipaths in $L_0\cup \ldots \cup L_4$ are pairwise arc-disjoint and thus satisfy the hypothesis of Lemma \ref{lem:list3}. As a result, the digraph $G_{2m}$ admits an R$\vec{C}_m$-D.
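Part of this verification is easily mechanized. The sketch below encodes the four $V_0$-dipaths of $L_0$ and checks property C1 (pairwise arc-disjointness) together with the length identity $\textrm{len}(W_0)+\textrm{len}(X_0)=\textrm{len}(Y_0)+\textrm{len}(Z_0)=p$ noted in the proof of Lemma \ref{lem:list2}; the remaining sets can be checked in the same way.

```python
from itertools import combinations

def arcs(path):
    """Set of arcs (ordered vertex pairs) traversed by a dipath."""
    return set(zip(path, path[1:]))

x = lambda i: ('x', i)
y = lambda i: ('y', i)

# The four V_0-dipaths of L_0 in Case 1 (p = 11):
W0 = [x(0), x(3), x(4), y(7), x(7), y(8), y(11)]
X0 = [y(0), y(1), y(4), y(5), x(8), x(11)]
Y0 = [x(1), y(2), x(5), x(6), x(9), x(10), y(10), x(13)]
Z0 = [x(2), y(3), y(6), y(9), x(12)]

paths = [W0, X0, Y0, Z0]
# C1: pairwise arc-disjoint.
assert all(arcs(a).isdisjoint(arcs(b)) for a, b in combinations(paths, 2))
# Length identity: len(W0) + len(X0) = len(Y0) + len(Z0) = p = 11.
assert (len(W0) - 1) + (len(X0) - 1) == 11
assert (len(Y0) - 1) + (len(Z0) - 1) == 11
```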
\noindent {\bf Case 2:} $p=13$. Dipaths for this case are illustrated in Figures \ref{fig:m13.1.1}-\ref{fig:m13.5.2}.
Dipaths in type-2 basic set $L_0=\{X_0, Y_0, W_0, Z_0\}\cup\{Q_0, R_0, S_0, T_0\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_0=y_0 x_1 x_2 y_5 x_6 x_9 y_{12} x_{13};\\
&X_0=x_0 y_3 y_4 y_7 y_{10} x_{10} y_{13};\\
&Y_0=y_1 x_4 x_5 x_8 y_9 x_{12} y_{15};\\
&Z_0=y_2 x_3 y_6 x_7 y_8 x_{11} y_{11} y_{14};\\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_0=x_{13} y_{16} y_{19} x_{20} x_{21} y_{24} x_{25}; \\
&R_0= y_{13} x_{14} x_{15} y_{18} x_{19} y_{22} y_{25};\\
&S_0=y_{15} x_{16} y_{17} x_{18} y_{21} x_{22} y_{23} x_{24} y_{27};\\
&T_0=y_{14} x_{17} y_{20} x_{23} y_{26}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-2 basic set $L_1=\{X_1, Y_1, W_1, Z_1\}\cup\{Q_1, R_1, S_1, T_1\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_1=x_0 y_0 x_3 x_6 x_7 y_{10}y_{11} x_{14};\\
&X_1=x_1 y_4 x_5 y_6 y_{9} x_{10} x_{13};\\
&Y_1=y_2 y_5 x_8 x_{11} y_{14};\\
&Z_1=y_1 x_2 y_3 x_4 y_7 y_8 x_9 x_{12} y_{12}y_{15};\\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_1=x_{14} x_{17} x_{20} x_{23} x_{26};\\
&R_1= x_{13} y_{13} y_{16} x_{16} x_{19} y_{19} y_{22} x_{22} x_{25};\\
&S_1=y_{14} y_{17} y_{20} y_{23} y_{26};\\
&T_1=y_{15} x_{15} x_{18} y_{18} y_{21} x_{21} x_{24} y_{24} y_{27};\\
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-2 basic set $L_2=\{X_2, Y_2, W_2, Z_2\}\cup\{Q_2, R_2, S_2, T_2\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_2=x_0 x_3 y_3 y_6 x_9 y_{10} y_{13};\\
&X_2=y_0 y_1 y_4 x_7 x_8 y_{11} x_{12} x_{13};\\
&Y_2=x_2 y_2 x_5 x_6 y_7 x_{10} x_{11} x_{14};\\
&Z_2=x_1 x_4 y_5 y_8 y_9 y_{12} x_{15};\\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_2=y_{13}y_{14} y_{15} x_{18} x_{19} x_{22} y_{25};\\
&R_2=x_{13} x_{16} y_{19} y_{20} y_{21} x_{24} x_{25};\\
&S_2=x_{14} y_{17} x_{20} y_{23} x_{26} ;\\
&T_2=x_{15} y_{16} x_{17} y_{18} x_{21} y_{22} x_{23} y_{24} x_{27}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-1 basic set $L_3=\{X_3, Y_3,\}\cup\{ R_3, S_3\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&X_3=y_0 x_0 y_1 x_1 y_2 y_3 x_6 y_6 y_7 x_7 x_{10} y_{11} y_{12} y_{13};\\
&Y_3=x_2 x_3 x_4 y_4 y_5 x_5 y_8 x_8 x_9 y_9 y_{10} x_{11} x_{12} x_{15};\\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&R_3= y_{13} x_{13} y_{14} x_{14} y_{15} y_{18} y_{19} x_{19} y_{20} x_{20} y_{21} y_{24} y_{25};\\
&S_3=x_{15} x_{16} y_{16} y_{17} x_{17} x_{18} x_{21} x_{22}y_{22} y_{23} x_{23} x_{24} x_{27}.\\
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-1 basic set $L_4=\{X_4, Y_4,\}\cup\{ R_4, S_4\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&X_4=x_0 x_1 y_1 y_2 x_2 x_5 y_5y_6 x_6 y_9 x_9 x_{10} y_{10} x_{13};\\
&Y_4=y_0 y_3 x_3 y_4 x_4x_7 y_7 x_8 y_8 y_{11}x_{11} y_{12} x_{12} y_{13};\\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&R_4= x_{13}x_{14} y_{14} x_{15} y_{15} y_{16} x_{19} x_{20} y_{20} x_{21} y_{21} y_{22} x_{25};\\
&S_4=y_{13} x_{16} x_{17} y_{17} y_{18} x_{18} y_{19} x_{22} x_{23} y_{23}y_{24} x_{24} y_{25}.\\
\end{aligned}
$
\par}
\end{multicols}
Once again, it is tedious but straightforward to verify that $\{L_0, L_1, L_2, L_3, L_4\}$ satisfies the hypothesis of Lemma \ref{lem:list3}.
\noindent {\bf Case 3:} $p=17$. Dipaths for this case are illustrated in Figures \ref{fig:m17.1.1}-\ref{fig:m17.5.2}.
Dipaths in $L_0=\{X_0, Y_0, W_0, Z_0\}\cup\{Q_0, R_0, S_0, T_0\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_0=y_0 x_0y_3x_6y_7x_8 x_{11} y_{12}x_{15}x_{18};\\
&X_0=x_1 y_2 x_5 y_6 y_9 x_{10} x_{13} y_{14} y_{17};\\
&Y_0=y_1 y_4y_5y_8 x_9 x_{12}y_{13} y_{16}x_{16}x_{19};\\
&Z_0=x_2 x_3 x_4x_7 y_{10} y_{11}x_{14} y_{15} y_{18};
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_0=x_{18} y_{21} x_{24} y_{27} x_{30};\\
&R_0= y_{17} x_{17}y_{20} x_{20} y_{23} x_{23} y_{26} x_{26} y_{29};\\
&S_0=x_{19}y_{19} x_{22}y_{22} x_{25} y_{25} x_{28}y_{28} x_{31};\\
&T_0=y_{18} x_{21}y_{24} x_{27} y_{30}.
\end{aligned}
$
\par}
\medskip
\end{multicols}
Dipaths in type-2 basic set $L_1=\{X_1, Y_1, W_1, Z_1\}\cup\{Q_1, R_1, S_1, T_1\}$ are constructed as follows:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&W_1=x_1 x_4y_4 x_7 y_7 x_{10} y_{10} y_{13} y_{14} x_{17};\\
&X_1=x_0 x_3 y_3 y_6 x_6 x_9 y_{12} y_{15} x_{18};\\
&Y_1=y_0 y_1 y_2 y_5 x_8 y_{11} x_{11} x_{14} x_{15} y_{16} x_{19};\\
&Z_1=x_2 x_5 y_8 y_9 x_{12} x_{13} x_{16} y_{17};
\end{aligned}
$
\par}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&Q_1=x_{17} y_{18} y_{19} y_{22} x_{23} y_{24} y_{25} y_{28}x_{29};\\
&R_1= x_{18} x_{21} x_{24} x_{27} x_{30};\\
&S_1=x_{19} x_{20} y_{21} x_{22} x_{25} x_{26} y_{27} x_{28} x_{31};\\
&T_1=y_{17} y_{20} y_{23} y_{26} y_{29}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-2 basic set $L_2=\{X_2, Y_2, W_2, Z_2\}\cup\{Q_2, R_2, S_2, T_2\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_2=y_0 x_3 x_6 x_7 y_8y_{11}y_{14}x_{14} x_{17}; \\
&X_2=x_0 x_1 y_4 x_5 x_8 y_9y_{12} x_{13} y_{16} y_{17};\\
&Y_2=y_1 x_2 y_5 y_6x_9x_{10} y_{13} x_{16} y_{19} ; \\
&Z_2= y_2 y_3 x_4 y_7y_{10} x_{11} x_{12} y_{15}x_{15} y_{18};\\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_2=x_{17} x_{18}x_{19}y_{22} y_{23} x_{26} x_{29};\\
&R_2= y_{17}x_{20} x_{23}x_{24} x_{25}y_{28} y_{29};\\
&S_2=y_{19}y_{20} x_{21} x_{22} y_{25}y_{26}x_{27} x_{28} y_{31};\\
&T_2=y_{18} y_{21} y_{24} y_{27} y_{30}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-1 basic set $L_3=\{X_3, Y_3,\}\cup\{ R_3, S_3\}$ are constructed as follows:
\smallskip
{\centering
$ \displaystyle
\begin{aligned}
&X_3=x_0 y_0 x_1 y_1 x_4 y_5 x_5 x_6 y_6 x_7 x_{10} x_{11} y_{11} y_{12} x_{12} x_{15} x_{16} x_{17};\\
&Y_3=y_2 x_2 y_3 x_3 y_4 y_7 y_8 x_8 x_9 y_9y_{10} x_{13} y_{13} x_{14}y_{14} y_{15} y_{16} y_{19} ;\\
&R_3= x_{17} y_{17} x_{18} y_{18} x_{19} x_{22} x_{23} y_{23} x_{24} y_{24} x_{25} x_{28} x_{29} ;\\
&S_3=y_{19} x_{20} y_{20} y_{21} x_{21}y_{22} y_{25} x_{26}y_{26} y_{27}x_{27}y_{28} y_{31}.\\
\end{aligned}
$
\par}
\pagebreak
Dipaths in type-1 basic set $L_4=\{X_4, Y_4,\}\cup\{ R_4, S_4\}$ are constructed as follows:
{\centering
$ \displaystyle
\begin{aligned}
&X_4=x_0 y_1 x_1x_2y_2 x_3 y_6 y_7 x_7 x_8 y_8 x_{11} y_{14} x_{15} y_{15} x_{16} y_{16} x_{17};\\
&Y_4=y_0 y_3 y_4 x_4 x_5 y_5 x_6 y_9 x_9 y_{10} x_{10} y_{11} x_{12} y_{12} y_{13} x_{13} x_{14}y_{17};\\
&R_4= x_{17} x_{20} x_{21} y_{21} y_{22} x_{22} y_{23} y_{24} x_{24} y_{25} x_{25} y_{26} x_{29};\\
&S_4=y_{17} y_{18} x_{18}y_{19} x_{19} y_{20} x_{23} x_{26} x_{27} y_{27}y_{28} x_{28} y_{29}.\\
\end{aligned}
$
\par}
\medskip
Similarly to Cases 1 and 2, it can be verified that $\{L_0, L_1, L_2, L_3, L_4\}$ satisfies the hypothesis of Lemma \ref{lem:list3}.
\noindent {\bf Case 4:} $p=19$. Dipaths for this case are illustrated in Figures \ref{fig:m19.1.1}-\ref{fig:m19.5.2}.
Dipaths in $L_0=\{X_0, Y_0, W_0, Z_0\}\cup\{Q_0, R_0, S_0, T_0\}$ are constructed as follows:
\begin{multicols}{2}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&W_0=y_0y_1 x_4 y_4 y_7 x_{10} x_{13} y_{14} y_{17} x_{20};\\
&X_0=x_1 y_2 y_5 y_6 x_7 y_{10} x_{11} y_{12} y_{13} y_{16} y_{19} ;\\
&Y_0=x_0 y_3 x_3 x_6 y_9 x_9 x_{12} y_{15} x_{18} x_{21};\\
&Z_0=x_2 x_5 y_8 x_8 y_{11} x_{14} x_{15} x_{16} x_{17} y_{18} x_{19};\\
\end{aligned}
$
\par}
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&Q_0=x_{20} y_{21} x_{22} y_{23} x_{26} y_{27} x_{28} y_{29} x_{32};\\
&R_0= y_{19} y_{22} y_{25} y_{28} y_{31};\\
&S_0=x_{21} x_{24} x_{27} x_{30} x_{33};\\
&T_0=x_{19} y_{20} x_{23} y_{24} x_{25} y_{26} x_{29} y_{30} x_{31}.
\end{aligned}
$
\par}
\medskip
\end{multicols}
Dipaths in type-2 basic set $L_1=\{X_1, Y_1, W_1, Z_1\}\cup\{Q_1, R_1, S_1, T_1\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_1=y_2x_2 y_5 x_8 y_9 y_{12} x_{12} y_{13} x_{16} y_{17} x_{17} x_{20};\\
&X_1=x_1 x_4 x_7 y_8 x_{11} x_{14} y_{15} y_{18} y_{21};\\
&Y_1=x_0 y_1 y_4 x_5 y_6 x_9 x_{10} y_{11} y_{14} x_{15} x_{18} y_{19};\\
&Z_1=y_0 x_3 y_3 x_6 y_7 y_{10} x_{13} y_{16} x_{19};\\
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_1=x_{20} x_{21} y_{22} x_{23} x_{26} x_{27} y_{28} x_{29} x_{32};\\
&R_1=y_{21} y_{24} y_{27} y_{30} y_{33};\\
&S_1=y_{19} y_{20} y_{23} x_{24} x_{25} x_{28}y_{31};\\
&T_1=x_{19} x_{22} y_{25} y_{26} y_{29} x_{30} x_{31}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-2 basic set $L_2=\{X_2, Y_2, W_2, Z_2\}\cup\{Q_2, R_2, S_2, T_2\}$ are constructed as follows:
\begin{multicols}{2}
{\centering
$ \displaystyle
\begin{aligned}
&W_2=y_2x_5 y_5 x_6 x_9 y_{10} y_{13} x_{13} x_{14} y_{17} y_{20};\\
&X_2=y_1 x_1 y_4 x_7 x_{10} x_{11} y_{14} x_{17} x_{18} y_{21};\\
&Y_2=x_0 y_0 y_3 y_6 y_9 x_{12} x_{15} y_{18} x_{21};\\
&Z_2=x_2 x_3 x_4 y_7 x_8 y_8 y_{11} y_{12} y_{15} y_{16} x_{16} x_{19};
\end{aligned}
$
\par}
{\centering
$ \displaystyle
\begin{aligned}
&Q_2=y_{20} x_{20} x_{23} y_{23} y_{26} x_{26} x_{29} y_{29} y_{32};\\
&R_2= y_{21} x_{24} y_{27} x_{30} y_{33};\\
&S_2=x_{21} y_{24} x_{27} y_{30} x_{33};\\
&T_2=x_{19} y_{19} x_{22} y_{22} x_{25} y_{25} x_{28} y_{28} x_{31}.
\end{aligned}
$
\par}
\end{multicols}
Dipaths in type-1 basic set $L_3=\{X_3, Y_3,\}\cup\{ R_3, S_3\}$ are constructed as follows:
\smallskip
{\centering
$ \displaystyle
\begin{aligned}
&X_3=y_0 x_1 y_1 x_2 y_2 y_3 x_4 x_5 x_6 y_6 y_7 x_7 x_8 x_{11} y_{11} x_{12} y_{12} x_{13}x_{16} y_{19};\\
&Y_3=x_0 x_3 y_4 y_5 y_8 x_9 y_9 y_{10} x_{10} y_{13} x_{14} y_{14} y_{15} x_{15} y_{16} x_{17} y_{17} y_{18} x_{18} x_{19};\\
&R_3= y_{19} x_{20} y_{20} y_{21} x_{21} x_{22} x_{25} y_{28} y_{29} x_{29} x_{30} y_{30} y_{31};\\
&S_3=x_{19}y_{22} y_{23} x_{23} x_{24} y_{24} y_{25} x_{26} y_{26} y_{27} x_{27} x_{28} x_{31}.
\end{aligned}
$
\par}
\pagebreak
Dipaths in type-1 basic set $L_4=\{X_4, Y_4,\}\cup\{ R_4, S_4\}$ are constructed as follows:
\medskip
{\centering
$ \displaystyle
\begin{aligned}
&X_4=y_1 y_2 x_3y_6 x_6x_7 y_7 y_8 y_9 x_{10} y_{10} y_{11} x_{11} x_{12}x_{13} y_{13} y_{14} x_{14} x_{17} y_{20};\\
&Y_4=y_0 x_0 x_1 x_2 y_3 y_4 x_4y_5 x_5 x_8 x_9 y_{12} x_{15} y_{15} x_{16}y_{16}y_{17} x_{18} y_{18}y_{19};\\
&R_4=y_{20} x_{21} y_{21} y_{22}x_{22} x_{23} y_{26} x_{27} y_{27} y_{28}x_{28} x_{29} y_{32};\\
&S_4= y_{19} x_{19} x_{20} y_{23} y_{24}x_{24} y_{25} x_{25}x_{26} y_{29} y_{30}x_{30}y_{31}.
\end{aligned}
$
\par}
\medskip
Lastly, it can be verified that $\{L_0, L_1, L_2, L_3, L_4\}$ satisfies the hypothesis of Lemma \ref{lem:list3}. \end{proof}
\section{Factorization of $K_{2m}^*$}
In this section, we use Lemmas \ref{lem:list2} and \ref{lem:list1} and Proposition \ref{thm:cand} to construct a $\vec{C}_m$-factorization of $K^*_{2m}$. Theorems \ref{berm}, \ref{Ng2}, and \ref{Ng} and Lemmas \ref{lem:disj} and \ref{lem:deco} are tools that we will use to obtain the desired construction.
\begin{theorem} \cite{BerFavMah}
\label{berm}
Every connected 4-regular Cayley graph on a finite abelian group has a hamiltonian decomposition.
\end{theorem}
\begin{theorem}\cite{Ng2} \label{Ng2}
Let $m$ and $n$ be positive integers. The digraph $\vec{C}_m \wr K^*_n$ admits a $\vec{C}_{mn}$-decomposition.
\end{theorem}
\begin{theorem}\cite{Ng} \label{Ng}
Let $m$ and $n$ be odd integers. Then, the digraph $\vec{C}_m \wr \vec{C}_n$ admits a $\vec{C}_{mn}$-decomposition.
\end{theorem}
Observe that, in Theorems \ref{Ng2} and \ref{Ng}, both decompositions are necessarily $\vec{C}_{mn}$-factorizations since we are decomposing both digraphs into directed hamiltonian cycles.
\begin{lemma}\cite{BurSaj}
\label{lem:disj}
If a digraph $H$ admits an R$\vec{C}_m$-D, then $kH$ admits a $\vec{C}_m$-factorization.
\end{lemma}
\begin{lemma} \cite{BurSaj}
\label{lem:deco}
Let $\{H_1, H_2, \ldots, H_k\}$ be a decomposition of digraph $G$ into spanning sub-digraphs. If each $H_i$ admits a $\vec{C}_m$-factorization, then $G$ admits a $\vec{C}_m$-factorization.
\end{lemma}
In Lemma \ref{lem:2}, it is shown that $L_{2m}$ does not admit an R$\vec{C}_m$-D when $3\mid m$. We circumvent this particular case using Lemma \ref{thm:reduction} below, which reduces Problem \ref{prob:ini} to the case where $m$ is odd and $m \not\equiv 0 \ (\textrm{mod}\ 3)$.
\begin{lemma}
\label{thm:reduction}
Suppose that $m$ is odd. If $K^*_{2m}$ admits a $\vec{C}_m$-factorization, then $K^*_{2(3m)}$ admits a $\vec{C}_{3m}$-factorization.
\end{lemma}
\begin{proof}
Assume that $K^*_{2m}$ admits a $\vec{C}_m$-factorization. Let $F_1, F_2, \ldots, F_{2m-1}$ be the corresponding $\vec{C}_m$-factors, so that for each $k \in \{1, 2, \ldots, 2m-1 \}$, we have $F_k \cong 2\vec{C}_m$. Then, we obtain the following decomposition of $K^*_{2(3m)}$ into spanning sub-digraphs:
\medskip
{\centering
$ \displaystyle
\begin{aligned}
K^*_{2(3m)}&=K^*_{2m}\wr K^*_3;\\
&=(F_1\oplus F_2\oplus\ldots \oplus F_{2m-1})\wr K^*_3;\\
&=F_1 \wr K^*_3\oplus F_2 \wr \overline{K}_3 \oplus \ldots \oplus F_{2m-1} \wr \overline{K}_3.\\
\end{aligned}
$
\par}
\medskip
Observe that $F_1\wr K^*_3=2(\vec{C}_m\wr K^*_3)$ and $F_i\wr \overline{K}_3= 2(\vec{C}_m \wr \overline{K}_3)$ for $i>1$. Theorem \ref{Ng}, in conjunction with Lemma \ref{lem:disj}, implies that $2(\vec{C}_{m} \wr K^*_3)$ admits an R$\vec{C}_{3m}$-D. Similarly, by Theorem \ref{Ng2} and Lemma \ref{lem:disj}, the digraph $2(\vec{C}_m \wr \overline{K}_3)$ also admits an R$\vec{C}_{3m}$-D. In conclusion, Lemma \ref{lem:deco} implies that $K^*_{2(3m)}$ admits a $\vec{C}_{3m}$-factorization. \end{proof}
Observe that Lemma \ref{thm:reduction} is vacuous when $m=3$ because $K^*_6$ does not admit a $\vec{C}_3$-factorization \cite{BerGerSot}. If $m \equiv 3\ (\textrm{mod}\ 6)$ and $m>9$, then $m=3^rt$, where $t \equiv 1,5\ (\textrm{mod}\ 6)$ or $t=9$, and $r$ is a positive integer. Then, Lemma \ref{thm:reduction} implies that it suffices to solve Problem \ref{prob:ini} for $m \equiv 1, 5\ ( \textrm{mod} \ 6)$ and $m=9$. In the latter case, a solution can be found in \cite{BurFranSaj}.
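The peeling-off-threes step described above can be sketched as follows (the helper name is ours):

```python
def reduce_m(m):
    """Peel factors of 3 from m until the remainder t satisfies
    t ≡ 1, 5 (mod 6) or t = 9, returning (r, t) with m = 3**r * t."""
    r = 0
    while m % 3 == 0 and m != 9:
        m //= 3
        r += 1
    return r, m

assert reduce_m(45) == (2, 5)    # 45 = 3^2 * 5, and 5 ≡ 5 (mod 6)
assert reduce_m(27) == (1, 9)    # powers of 3 reduce to the base case t = 9
assert reduce_m(35) == (0, 35)   # nothing to peel when 3 does not divide m
```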
Now, we proceed with this paper's main theorem, Theorem \ref{thm:main}, restated below for convenience's sake.
\noindent {\bf Theorem 3.} Let $m$ be an odd integer such that $m \geqslant 5$. The digraph $K^*_{2m}$ admits a $\vec{C}_m$-factorization.
\begin{proof}
Lemma \ref{thm:reduction} implies that it suffices to find a factorization for all $m \equiv 1, 5\ (\textrm{mod}\ 6)$ and for $m=9$. Theorem \ref{BurFraSaj} provides a solution for $m=9$.
Suppose that $m \equiv 1, 5\ (\textrm{mod}\ 6)$ and $m \geqslant 13$. We begin by strategically decomposing the graph $K_m$. If $m \equiv 1\ (\textrm{mod}\ 4)$, then
\begin{center}
$K_m=X(m, \{1,3\})\oplus X(m, \{2,4\})\oplus X(m, \{5,6\})\oplus \ldots \oplus X(m, \{\frac{m-3}{2}, \frac{m-1}{2}\})$.
\end{center}
\noindent If $m \equiv 3\ (\textrm{mod}\ 4)$, then
\begin{center}
$K_m=X(m, \{1,3\})\oplus X(m, \{2\})\oplus X(m, \{4,5\})\oplus \ldots \oplus X(m, \{\frac{m-3}{2}, \frac{m-1}{2}\})$.
\end{center}
\noindent Each spanning subgraph in these decompositions of $K_m$ is connected, and each, apart from $X(m, \{1,3\})$ and (when $m \equiv 3\ (\textrm{mod}\ 4)$) the hamiltonian cycle $X(m, \{2\})$, is 4-regular. Hence Theorem \ref{berm} implies that $K_m$ can be decomposed into one copy of $X(m, \{1,3\})$ and $\frac{m-5}{2}$ hamiltonian cycles. Each hamiltonian cycle of $K_m$ is isomorphic to $X(m, \{1\})$. It follows that $K^*_m$ admits a decomposition into one copy of $\vec{X}(m, \{\pm1,\pm3\})$ and $\frac{m-5}{2}$ copies of $\vec{X}(m, \{\pm 1\})$.
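The grouping of connection classes in the two displays can be checked mechanically; the function below is our own restatement of the decomposition, and it verifies that the classes $1,\ldots,\frac{m-1}{2}$ are partitioned.

```python
def connection_classes(m):
    """Group the connection classes 1, ..., (m-1)/2 of K_m as in the
    decomposition: {1,3} first, then pairs of consecutive classes
    (with the singleton {2} when m ≡ 3 (mod 4))."""
    top = (m - 1) // 2
    if m % 4 == 1:
        return [{1, 3}, {2, 4}] + [{s, s + 1} for s in range(5, top, 2)]
    else:                                   # m ≡ 3 (mod 4)
        return [{1, 3}, {2}] + [{s, s + 1} for s in range(4, top, 2)]

for m in (13, 19):
    groups = connection_classes(m)
    flat = sorted(c for g in groups for c in g)
    assert flat == list(range(1, (m - 1) // 2 + 1))   # a partition
```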
Consequently, we have the following decomposition of $K_{2m}^*$ into spanning sub-digraphs:
\medskip
{\centering
$ \displaystyle
\begin{aligned}
K^*_{2m}&=K^*_{m}\wr K^*_2;\\
&=(\vec{X}(m, \{\pm1,\pm3\})\oplus \vec{X}(m, \{\pm 1\})\oplus\ldots \oplus \vec{X}(m, \{\pm 1\}))\wr K^*_2;\\
&=\vec{X}(m ,\{\pm1,\pm3\}) \wr K^*_2\oplus \vec{X}(m, \{\pm 1\})\wr \overline{K}_2 \oplus \ldots \oplus \vec{X}(m, \{\pm 1\})\wr \overline{K}_2;\\
&=\vec{X}(m, \{\pm1,\pm3\}) \wr K^*_2\oplus H_{2m} \oplus \ldots \oplus H_{2m};\\
&=L_{2m} \oplus G_{2m} \oplus H_{2m} \oplus \ldots \oplus H_{2m}.\\
\end{aligned}
$
\par}
\medskip
By Lemma \ref{lem:1}, the digraph $H_{2m}$ admits a $\vec{C}_m$-factorization. Furthermore, by Lemma \ref{lem:2} and Proposition \ref{thm:cand}, respectively, the digraphs $L_{2m}$ and $G_{2m}$ also admit R$\vec{C}_m$-Ds. As a result, Lemma \ref{lem:deco} implies that $K^*_{2m}$ admits a $\vec{C}_m$-factorization.
\end{proof}
Theorem 3, in conjunction with results from \cite{Abel}, \cite{AdaBry}, \cite{BenZha}, \cite{BerGerSot}, \cite{BurSaj}, \cite{BurFranSaj} and \cite{Til}, implies a resolution of the directed Oberwolfach problem with tables of uniform length, as stated in Theorem \ref{thm:final}.
\begin{theorem}
\label{thm:final}
The digraph $K^*_{\alpha m}$ admits a $\vec{C}_m$-factorization if and only if $(\alpha, m) \not\in \{ (1,6), (1, 4),$ $(2, 3) \}$.
\end{theorem}
\noindent{\bf \large Acknowledgements}
\noindent The author would like to thank Mateja \v{S}ajna for her guidance and support over the course of this project. The author would also like to thank the NSERC CGS-D scholarship program for its financial support.
https://arxiv.org/abs/1204.6422 | Conflict-free coloring with respect to a subset of intervals | Given a hypergraph H = (V, E), a coloring of its vertices is said to be conflict-free if for every hyperedge S \in E there is at least one vertex in S whose color is distinct from the colors of all other vertices in S. The discrete interval hypergraph Hn is the hypergraph with vertex set {1,...,n} and hyperedge set the family of all subsets of consecutive integers in {1,...,n}. We provide a polynomial time algorithm for conflict-free coloring any subhypergraph of Hn, we show that the algorithm has approximation ratio 2, and we prove that our analysis is tight, i.e., there is a subhypergraph for which the algorithm computes a solution which uses twice the number of colors of the optimal solution. We also show that the problem of deciding whether a given subhypergraph of Hn can be colored with at most k colors has a quasipolynomial time algorithm. | \section{Introduction
\label{sec:intro}
A hypergraph $H$ is a pair $(V,\E)$, where $V$ is a finite set
and $\E$ is a family of non-empty subsets of $V$. We denote by
$\Positives$ the set of positive integers and by $\Naturals$
the set of non-negative integers.
\begin{definition}\label{def:origcf}
Let $H=(V,\E)$ be a hypergraph and let
$C \colon V \rightarrow \Positives$ be a coloring.
We say that $C$ is a {\em conflict-free coloring} (cf-coloring in short) if for
every hyperedge $e \in \E$ there exists a color $i \in \Positives$
such that $\card{e \cap C^{-1}(i) }=1$. That is, every
hyperedge $e \in \E$ contains some vertex whose color is unique
in $e$.
\end{definition}
The study of cf-coloring was initiated in the work of Even et al.
\cite{ELRS} and of Smorodinsky \cite{SmPHD} and was extended in
numerous other works (cf.\
\cite{cf9,AS08,BCOScpc2010,CheilarisCUNYthesis2009,%
cf7,CKS2009talg,cf5,HS02,Lev-TovP09,CFPT09,cf1}).
The study was initially motivated by its application to frequency
assignment for cellular networks. A cellular network consists of
two kinds of nodes: \emph{base stations} and \emph{mobile
clients}. Base stations have fixed positions, modeled by a finite set of
points in the plane, and provide the
backbone of the network. Every
base station emits at a fixed frequency. If a client
wants to establish a link with a base station it has to tune
itself to this base station's frequency. Clients, however, can be
in the range of many different base stations. To avoid
interference, the system must assign frequencies to base stations
in the following way: For any closed disk $d$ in the plane
(representing the communication range of a client located at the
center of this disk), there must be at least one
base station which is contained in $d$ and has a frequency that is not
used by any other base station contained in $d$. Since frequencies
are limited and costly, a scheme that reuses frequencies, where
possible, is desirable.
Here is a more general, formal definition:
Let $P$ be a set of $n$
points in the plane and let $\R$ be a family of regions in the
plane (e.g., all closed discs). We
denote by $H=H_{\R}(P)$ the hypergraph on the set $P$ whose
hyperedges are all subsets $P'$ that can be cut off from $P$ by a
region in $\R$. That is, all subsets $P'$ such that there exists
some region $r \in \R$ with $r\cap P = P'$. We refer to such a
hypergraph as the hypergraph \emph{induced by $P$ with respect to
$\R$}.
Now, consider the hypergraph induced by a set of $n$ \emph{collinear}
points with respect to the family of closed disks in the plane.
It is not difficult to see that this hypergraph is isomorphic to the
hypergraph induced by a set of $n$ real numbers with respect to the family of
closed intervals,
which is also isomorphic to the following discrete interval hypergraph.
\begin{definition}
Let $[n] = \{1,\dots,n\}$.
For $s \leq t$, $s, t \in [n]$, we define the {(discrete) interval}
$[s,t] = \{i \in [n] \mid s \leq i \leq t\}$.
The \emph{discrete interval hypergraph} $H_n$ has vertex set $[n]$ and
hyperedge set
$\I_n = \{ [s,t] \mid {s \leq t}\text{, }{s, t \in [n]} \}$.
\end{definition}
It is not difficult to prove that
$\lfloor \log_2 n \rfloor + 1$ colors are necessary and sufficient
in order to cf-color $H_n$
(see, e.g., \cite{ELRS}).
An online variation of this cf-coloring problem in which
vertices appear one by one and the algorithm has to commit to
a color for each point as soon as it appears,
maintaining the conflict-free property
of the point set at every time,
was introduced in \cite{cf7} and further studied in
\cite{BCS2008talg}.
In this paper, we are interested in cf-coloring subhypergraphs of
$H_n$ of the following form:
$H = ([n], I)$, where $I \subseteq \I_{n}$.
Then, $H$ is a hypergraph induced by $n$ points on the real line
with respect to a \emph{subset} of all possible intervals.
Cf-colorings of such hypergraphs were studied
in the online setting in \cite{BCS2008talg}.
Katz et al., in \cite{cf2}, claim a polynomial time 4-approximation
algorithm for cf-coloring any such hypergraph $H$ (in the offline setting).
Studying cf-coloring for subhypergraphs of geometric hypergraphs
can be justified by applications where only a given subset of the
hyperedge set is required to have the conflict-free property.
In section~\ref{sec:hittingsetalg}, we describe an algorithm
for computing cf-colorings for general hypergraphs,
based on hitting sets.
In section~\ref{sec:2approx}, we show how the above algorithm
and an appropriate choice of the hitting set can give a 2-approximation
polynomial time
algorithm for cf-coloring a subhypergraph of the discrete interval
hypergraph, improving on the 4-approximation algorithm of Katz et al.
In section~\ref{sec:tight}, we show that the above analysis is tight, i.e.,
there are subhypergraphs of $H_n$ for which
the algorithm computes a cf-coloring with twice the optimal
(minimum) number of colors.
In section~\ref{sec:quasip}, we show that the
decision problem whether a given
subhypergraph of $H_n$ can be cf-colored with
at most $k$ colors has a quasipolynomial time algorithm;
this implies that this decision problem is probably not
NP-complete.
\section{A hitting-set algorithm for conflict-free coloring}
\label{sec:hittingsetalg}
In this section, we present an algorithm for conflict-free
coloring a hypergraph. It is based on repeatedly computing a
minimal hitting set in hypergraphs.
\begin{definition}
A \emph{hitting set} of a hypergraph $H = (V,\E)$
is a subset $S \subseteq V$ such that for every $e \in \E$
there exists some $v \in S$ with $v \in e$. A hitting set $S$ is
\emph{minimal} if for every $v \in S$, $S \setminus \{v\}$
is not a hitting set.
\end{definition}
In the literature, a conflict-free coloring is an
assignment of colors (positive integers) to the vertices of the hypergraph.
In this work, we introduce and
consider a slight variation of conflict-free coloring, in which
we allow some vertices to not be assigned colors, as long as
in every hyperedge, there
exists a vertex with assigned color that is
uniquely occurring in the hyperedge.
In other words, we allow the coloring function
$C\colon V \to \Positives$ in definition~\ref{def:origcf}
to be a partial function. Alternatively, we
can use a special color `0' given to vertices that are not
assigned any positive color and obtain a total function $C \colon V
\to \Naturals$. Then, we arrive at the following variant of
definition~\ref{def:origcf}.
\begin{definition}
Let $H=(V,\E)$ be a hypergraph and let
$C \colon V \rightarrow \Naturals$ be a coloring.
We say that $C$ is a {\em conflict-free coloring} if for
every hyperedge $S \in \E$ there exists a color $i \in \Positives$
such that $\card{S \cap C^{-1}(i) }=1$.
We denote by
$\chicf(H)$ the minimum integer $k$ for which $H$ admits a
cf-coloring with colors in $\{0,\dots,k\}$.
\end{definition}
\begin{remark}
We claim that this variation of conflict-free coloring,
with the partial coloring function or the placeholder color `0',
is interesting from the point of view
of applications. As mentioned in section~\ref{sec:intro},
vertices model base stations in a cellular network. A vertex with
no positive color assigned to it can model a situation where a base station
is not activated at all, and therefore the base station does not
consume energy.
One can also think of a bi-criteria optimization problem where a
conflict-free assignment of frequencies has to be found with small
number of frequencies (in order to conserve the frequency spectrum)
and few activated base stations (in order to conserve energy).
\end{remark}
We describe algorithm~\ref{alg:hitset} for conflict-free coloring any
hypergraph $H = (V,\E)$.
\begin{algorithm}[htbp]
\caption{A hitting set algorithm for conflict-free coloring $H=(V,\E)$}
\label{alg:hitset}
\begin{algorithmic}
\STATE $\ell \gets 0$; $V^0 \gets V$; $\E^0 \gets \E$
\WHILE{$\E^\ell \neq \emptyset$}
\STATE $S^\ell \gets \text{a minimal hitting set for $(V^\ell, \E^\ell)$}$
\STATE color every $v \in V^\ell \setminus S^\ell$ with color $\ell$
\STATE $V^{\ell+1} \gets S^\ell$
\STATE $\E^{\ell+1} \gets \{e \cap S^{\ell} \mid e \in \E^\ell \text{ and }
|e \cap S^{\ell}| > 1 \}$
\STATE $\ell \gets \ell+1$
\ENDWHILE
\STATE
\textbf{if} $V^\ell \neq \emptyset$
\textbf{then} color every $v \in V^\ell$ with color $\ell$
\textbf{end if}
\end{algorithmic}
\end{algorithm}
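Algorithm~\ref{alg:hitset} is short enough to transcribe directly. The following Python sketch is our own illustration, not part of the paper's formal development; the minimal hitting set is obtained by a naive removal pass, which suffices for small hypergraphs:

```python
def minimal_hitting_set(V, E):
    """Naive minimal hitting set: start from all of V, drop redundant vertices."""
    S = set(V)
    for v in sorted(V):
        if all((e & S) - {v} for e in E):  # every edge still hit without v
            S.discard(v)
    return S

def cf_color(vertices, edges):
    """Algorithm 1: conflict-free coloring via repeated minimal hitting sets."""
    color = {}
    V = set(vertices)
    E = [set(e) for e in edges]
    level = 0
    while E:
        S = minimal_hitting_set(V, E)
        for v in V - S:                    # vertices outside the hitting set get `level`
            color[v] = level
        V = S
        E = [e & S for e in E if len(e & S) > 1]  # edges not yet uniquely hit
        level += 1
    for v in V:                            # leftover vertices get the final color
        color[v] = level
    return color
```

Any procedure returning a minimal hitting set can be substituted for `minimal_hitting_set`; the lemmas below use only minimality.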
\begin{lemma}
Algorithm~\ref{alg:hitset} terminates.
\end{lemma}
\begin{proof}
At every iteration of the loop, there is some hyperedge $e \in \E^\ell$
for which $|e \cap S^{\ell}| = 1$.
This follows from the minimality of $S^\ell$.
Thus, $|\E^\ell| > |\E^{\ell+1}|$. Therefore, the number of
hyperedges decreases at every iteration of the loop, and
necessarily reaches zero after a finite number of iterations of
the loop.
\end{proof}
\begin{lemma}
Algorithm~\ref{alg:hitset} produces a conflict-free coloring.
\end{lemma}
\begin{proof}
We first show that
for every hyperedge $e \in \E$, there is some $\ell$ for which
$\cardin{{e} \cap S^{\ell}} = 1$.
Notice that for every iteration $i>0$, we have
$S^{i-1} \supseteq S^i$.
Since $S^0$ is a hitting set of $\E$, $\cardin{e \cap S^0} \geq 1$;
if $\cardin{e \cap S^0} = 1$, take $\ell = 0$.
Otherwise, consider the maximum $i$ for which
$\cardin{e \cap S^i} > 1$.
Then, hyperedge $e \cap S^i = e \cap V^{i+1}$
belongs to $\E^{i+1}$ and has to be hit by $S^{i+1}$, i.e.,
$(e \cap S^{i}) \cap S^{i+1} = e \cap S^{i+1}$ is non-empty
and thus $\cardin{e \cap S^{i+1}} = 1$,
because of the maximality of $i$; take $\ell = i+1$.
Let $v$ be the unique element of ${e} \cap S^{\ell}$.
Vertex $v$ is colored with some color greater than
$\ell$ by the algorithm and all other vertices of $e$ are colored
with colors which are at most of value $\ell$. Thus, $e$ has the
conflict-free property.
\end{proof}
\section{A 2-approximation algorithm for a set of intervals}
\label{sec:2approx}
We use algorithm~\ref{alg:hitset}, described in the previous section,
to conflict-free color a subhypergraph of $H_n$ which is comprised of a given
subset $I \subseteq \I_n$ of intervals.
It is necessary to specify how to compute the minimal hitting set.
The minimal hitting set $S$ is computed as follows
(in fact, we compute a minimum cardinality hitting set, but we do
not need this stronger fact):
\begin{quote}
First, we compute a special independent set of intervals $F \subseteq I$
(i.e., in $F$ no two intervals have a common vertex).
We compute this independent set $F$ of intervals incrementally.
Initially, there is nothing in the independent set. We scan
vertices from $1$ to $n$ and we include in the independent set the
interval $[i,j] \in I$ with minimum $j$ such that $[i,j]$ does not
intersect anything already in the independent set. After computing
$F$, for every interval $[i,j] \in F$, we take in $S$ the vertex $j$
(i.e., the maximum or rightmost vertex).
\end{quote}
\begin{lemma}
$S$ is a minimal hitting set.
\end{lemma}
\begin{proof}
Set $S$ is a hitting set because no interval is completely contained
between two vertices in $S$, no interval ends before the first
interval in $F$, and no interval starts after the last interval in
$F$; otherwise such intervals would be chosen in the independent set
$F$. Set $S$ is minimal, because removing any element $j$ of it, means
that the interval with right endpoint $j$ in $F$ is not hit any
more.
\end{proof}
\begin{remark}
The computation of the maximal (in fact maximum)
independent set of intervals given above
is also known as a solution to the activity selection problem.
See for example \cite[section~16.1]{CLRS}.
\end{remark}
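The greedy computation of $F$ and of its right endpoints $S$ admits a direct sketch (ours, with intervals represented as pairs $(i,j)$):

```python
def interval_hitting_set(intervals):
    """Minimal hitting set for a family of discrete intervals (i, j).

    Greedily pick a maximal independent family F by increasing right
    endpoint (activity selection), then return the right endpoints.
    """
    S = []
    last = None                       # right endpoint of the last interval put in F
    for i, j in sorted(intervals, key=lambda iv: iv[1]):
        if last is None or i > last:  # (i, j) is disjoint from all chosen intervals
            S.append(j)
            last = j
    return set(S)
```

On $I_2 = \{[1,2],[3,3],[2,4]\}$ this selects $F=\{[1,2],[3,3]\}$ and returns the hitting set $\{2,3\}$.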
Notice that the time complexity of the algorithm is $O(n\log{n})$:
We sort the intervals according to their right endpoints.
Then, at every iteration of the loop we can choose the hitting set
in linear time. There is at most a logarithmic number of
iterations of the loop, because
$\chicf(H) \leq \chicf(H_n) = \floor{\log_{2}{n}}+1$.
We intend to compare colorings produced by the above algorithm
with optimal colorings.
We define recursively the following families of sets of intervals
of $\Positives$.
\begin{definition}
Family $\calJ_1$ exactly contains all singleton sets of intervals.
For $k>1$,
set of intervals $I$ is in family $\calJ_k$
if and only if it can be expressed as a
union $I = L \cup R \cup \{\iota\}$,
where both $L$, $R \in \calJ_{k-1}$,
no interval from $L$ has a
common point with an interval from $R$,
and interval $\iota$
includes every interval
in $L$ and every interval in $R$.
We refer to a set of intervals in family $\calJ_k$ as a
\emph{$\calJ_k$ configuration}.
\end{definition}
\begin{lemma}
Any conflict-free coloring uses at least $k$ colors for a set of
intervals that is a superset of a $\calJ_k$ configuration.
\end{lemma}
\begin{proof}
We use induction on $k$. For $k=1$, the statement is trivially true.
Assume it is true for $k$, we will prove it for $k+1$. Assume, for
the sake of contradiction, that there is a conflict-free coloring $C$
with just $k$ colors of a set of intervals $I'$ that is a superset
of a $\calJ_{k+1}$ configuration $I$.
Then, by definition of $\calJ_{k+1}$,
$I = L \cup R \cup \{\iota\}$,
where both $L$, $R \in \calJ_{k}$,
no interval from $L$ has a
common point with an interval from $R$,
and interval $\iota$
includes every interval
in $L$ and every interval in $R$.
By the inductive hypothesis,
the points contained in intervals of $L$ use $k$ colors
and also
the points contained in intervals of $R$ use $k$ colors.
The above two pointsets are disjoint
and the interval $\iota$ includes both pointsets.
As a result, $\iota$ is not
conflict-free colored, which is a contradiction.
\end{proof}
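For small $k$, the lower bound can be checked by brute force. The sketch below (our illustration) builds the minimal $\calJ_k$ configuration on $2^{k-1}$ points and exhaustively searches for cf-colorings; the lemma guarantees at least $k$ colors, and for these minimal configurations $k$ colors also suffice:

```python
from itertools import product

def make_Jk(k, lo=1):
    """Minimal J_k configuration on points lo, lo+1, ...; returns (intervals, hi)."""
    if k == 1:
        return [(lo, lo)], lo
    L, mid = make_Jk(k - 1, lo)      # left J_{k-1} part
    R, hi = make_Jk(k - 1, mid + 1)  # disjoint right J_{k-1} part
    return L + R + [(lo, hi)], hi    # plus an interval covering both

def min_cf_colors(intervals, n):
    """Smallest k such that some coloring [n] -> {0..k} is conflict-free."""
    for k in range(n + 1):
        for col in product(range(k + 1), repeat=n):
            if all(any(c > 0 and col[i - 1:j].count(c) == 1
                       for c in range(1, k + 1))
                   for i, j in intervals):
                return k
    return None
```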
We are now ready to bound the approximation ratio of the proposed
algorithm.
\begin{theorem}
The conflict-free coloring algorithm for hypergraphs with respect to
a subset of intervals is a 2-approximation algorithm.
\end{theorem}
\begin{proof}
It is enough to prove that if
some hyperedge (or interval), say $\iota$,
reaches iteration with $\ell=k-1$ of the loop (i.e.,
the algorithm uses at least $k$ colors),
then the input contains as a subset a
$\calJ_{\lceil k/2 \rceil}$ configuration
and moreover this configuration is entirely contained in $\iota$.
We prove it by induction. For $k=1,2$, it is true,
because there is at least one interval in the input, and therefore
at least one non-zero color is needed in any optimal coloring.
For $k > 2$, assume there is a vertex $v$ that gets color $k$. Then
at iteration with $\ell=k-1$ of the loop there is an interval $\iota$ with
its rightmost vertex being $v \in S^{\ell}$ (see figure~\ref{fig:jkfind}).
\begin{figure}[htbp]
\centering
\begin{tikzpicture}
\coordinate (u) at ( 2,0);
\coordinate (w) at ( 5,0);
\coordinate (v) at (10,0);
\node [below] at (u) {$u$};
\node [below] at (w) {$w$};
\node [below] at (v) {$v$};
\begin{scope}[gray,dashed]
\draw ( 2,0)--( 2,3.3);
\draw ( 5,0)--( 5,3.3);
\draw (10,0)--(10,3.3);
\end{scope}
\fill (u) circle (2pt);
\fill (w) circle (2pt);
\fill (v) circle (2pt);
\draw [thick] (1,1)--(10,1);
\node [right] at (10,1) {$\iota$};
\draw [thick] (0,2)--(2,2);
\node [right] at (2,2) {$\iota'_1$};
\draw [thick] (4,2)--(10,2);
\node [right] at (10,2) {$\iota'_2$};
\draw [thick] (3,3)--(5,3);
\node [right] at (5,3) {$\iota''_1$};
\draw [thick] (8,3)--(10,3);
\node [right] at (10,3) {$\iota''_2$};
\end{tikzpicture}
\caption{Intervals in an input using $k$ colors}
\label{fig:jkfind}
\end{figure}
Since $\iota$ was not removed
in the previous iteration $\ell-1$, there were two vertices of
$\iota$ in
$S^{\ell-1}$, say $u$ and $v$, with $u < v$. Also, since $u$ and $v$
are in $S^{\ell-1}$ there are two intervals with them as right
endpoints in the independent set computed at iteration $\ell-1$, say
$\iota'_1$ and $\iota'_2$. Since $\iota'_2$ was not removed in the iteration
$\ell-2$, there were two vertices of $\iota'_2$ in $S^{\ell-2}$,
say $w$ and $v$,
with $u < w < v$. Also, since $u$, $w$, and $v$ are in $S^{\ell-2}$
there are three intervals with them as right endpoints in the
independent set computed at iteration $\ell-2$; call $\iota''_1$ the one
ending at $w$ and $\iota''_2$ the one ending at $v$. Since the three
intervals are independent, $\iota''_1$ and $\iota''_2$ start after $u$,
therefore they are fully contained in $\iota$ (which contains $u$).
By the inductive hypothesis, since each of $\iota''_1$, $\iota''_2$ reaches
iteration $\ell-2$, each of them entirely contains a
$\calJ_{\lceil (k-2)/2 \rceil}$ configuration, and, since $\iota''_1$
and $\iota''_2$ are disjoint, together with $\iota$ they constitute a
$\calJ_{\lceil k/2 \rceil}$ configuration.
\end{proof}
\section{A tight instance for the 2-approximation algorithm}
\label{sec:tight}
For $k \geq 2$, we intend to define an input $I_k$ that is a tight instance
for the approximation algorithm, i.e., an instance that forces the algorithm
to use at least twice the number of colors in an optimal coloring.
Before doing that, we define some notation
that will prove useful.
\begin{definition}
Given a set of intervals $I$ and a natural number $d$, we define
$I^{+d}$ to be the set of intervals, where all intervals of $I$
are shifted $d$ to the right, i.e.,
\[I^{+d} = \{[i+d,j+d] \mid [i,j] \in I \}.\]
\end{definition}
\begin{definition}
Given a set of intervals $I$, we define the \emph{length} of $I$, denoted
$\length(I)$ to be the rightmost point occurring in any of the
intervals of $I$ minus the leftmost point occurring in any of the
intervals of $I$ plus one.
\end{definition}
Now, we are ready to proceed with the definition of the tight
instance.
\begin{definition}
For $k=2$ the input $I_2$ has length equal to
four and consists of three intervals.
\[I_2 = \{[1,2], [3,3], [2,4] \}\]
For $k > 2$ the input is defined recursively as follows.
\[I_{k+1} = I_k \cup I_k^{+\length(I_k)} \cup
\{[\length(I_k)-k+1, 2\length(I_k)+1]\}\]
\end{definition}
Abusing notation,
we call the $I_k$ component the \emph{left} $I_k$ part of $I_{k+1}$
and the $I_k^{+\length(I_k)}$ component the \emph{right} $I_k$ part of
$I_{k+1}$. These left and right parts are disjoint.
Input $I_4$ is shown in figure~\ref{fig:inputk4}.
Moreover, in the figure,
under the vertices of the input we give the coloring produced by
the 2-approximation
algorithm and then an optimal conflict-free coloring.
\begin{comment}
\begin{figure}[htbp]
\centering
\input{inputk2ext}
\medskip
\makebox[3.3cm][l]{0\hfill 1\hfill 2\hfill 0}
\medskip
\makebox[3.3cm][l]{1\hfill 0\hfill 1\hfill 0}
\caption{Input $I_2$ and conflict-free colorings}
\label{fig:inputk2}
\end{figure}
\begin{figure}[htbp]
\centering
\input{inputk3ext}
\makebox[8.3cm][l]{%
0\hfill 1\hfill 2\hfill 0\hfill
0\hfill 1\hfill 3\hfill 0\hfill 0\ }
\makebox[8.3cm][l]{%
1\hfill 0\hfill 1\hfill 2\hfill 1\hfill 0\hfill 1\hfill 0\hfill
0\ }
\caption{Input $I_3$ and conflict-free colorings}
\label{fig:inputk3}
\end{figure}
\end{comment}
\begin{figure}[htbp]
\centering
\input{inputk4ext}
\makebox[12.9cm][l]{%
0\hfill 1\hfill 2\hfill 0\hfill 0\hfill 1\hfill 3\hfill 0\hfill
0\hfill 0\hfill 1\hfill 2\hfill 0\hfill 0\hfill 1\hfill 4\hfill
0\hfill 0\hfill 0\ }
\makebox[12.9cm][l]{%
1\hfill 0\hfill 1\hfill 2\hfill 1\hfill 0\hfill 1\hfill 0\hfill
0\hfill 1\hfill 0\hfill 1\hfill 2\hfill 1\hfill 0\hfill 1\hfill
0\hfill 0\hfill 0\ }
\caption{Input $I_4$, algorithm cf-coloring, and optimal cf-coloring}
\label{fig:inputk4}
\end{figure}
\begin{comment}
\begin{figure}[htbp]
\centering
\input{inputk5ext}
\medskip
\makebox[13.472cm][l]{%
0\hfill 1\hfill 2\hfill 0\hfill 0\hfill 1\hfill 3\hfill
0\hfill 0\hfill 0\hfill 1\hfill 2\hfill 0\hfill 0\hfill
1\hfill 4\hfill 0\hfill 0\hfill 0\hfill
0\hfill 1\hfill 2\hfill 0\hfill 0\hfill 1\hfill 3\hfill
0\hfill 0\hfill 0\hfill 1\hfill 2\hfill 0\hfill 0\hfill
1\hfill 5\hfill 0\hfill 0\hfill 0\hfill
\ \ }
\medskip
\makebox[13.472cm][l]{%
1\hfill 0\hfill 1\hfill 2\hfill 1\hfill 0\hfill 1\hfill 0\hfill 0\hfill
1\hfill 0\hfill 1\hfill 2\hfill 1\hfill 0\hfill 1\hfill 0\hfill 0\hfill 3\hfill
1\hfill 0\hfill 1\hfill 2\hfill 1\hfill 0\hfill 1\hfill 0\hfill 0\hfill
1\hfill 0\hfill 1\hfill 2\hfill 1\hfill 0\hfill 1\hfill 0\hfill 0\hfill 0\hfill
\ \ }
\caption{Input $I_5$ and conflict-free colorings}
\label{fig:inputk5}
\end{figure}
\end{comment}
It is not difficult to see that the length of the instance
satisfies the recurrence relation
\begin{equation}
\length(I_{k+1}) = 2\length(I_k) + 1 , \label{eq:Iklength}
\end{equation}
which implies, since $\length(I_2) = 4$, that
\(\length(I_k) = 5 \cdot 2^{k-2} - 1\).
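The recursive definition of $I_k$ and the closed form for its length are easy to check programmatically (our sketch, with intervals as $(i,j)$ pairs):

```python
def shift(I, d):
    """Shift every interval of I by d to the right."""
    return [(i + d, j + d) for i, j in I]

def length(I):
    """Rightmost point minus leftmost point, plus one."""
    return max(j for _, j in I) - min(i for i, _ in I) + 1

def make_Ik(k):
    """The tight instance I_k, k >= 2, following the recursive definition."""
    I = [(1, 2), (3, 3), (2, 4)]                        # I_2
    for m in range(2, k):                               # build I_{m+1} from I_m
        n = length(I)
        I = I + shift(I, n) + [(n - m + 1, 2 * n + 1)]  # level-(m+1) interval
    return I
```

For example, `make_Ik(3)` yields the seven intervals of $I_3$, of length $9 = 5\cdot 2 - 1$.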
Another notion that will prove useful is the \emph{level} of each
interval in the above instance that we define in the following.
\begin{definition}
In input $I_2$, intervals $[1,2]$ and $[3,3]$ are of level 1 and
interval $[2,4]$ is of level 2. In the recursively defined instance
\[I_{k+1} = I_k \cup I_k^{+\length(I_k)} \cup
\{[\length(I_k)-k+1, 2\length(I_k)+1]\}\]
the intervals of the $I_k$ part have
the same levels as the corresponding intervals in the $I_k$ instance,
the intervals of the $I_k^{+\length(I_k)}$ part have the same levels
as the corresponding intervals of the $I_k$ instance before the
`$+\length(I_k)$' operation,
and interval $[\length(I_k)-k+1, 2\length(I_k)+1]$ has level $k+1$.
\end{definition}
In fact, in
figure~\ref{fig:inputk4}
the vertical coordinate
of each interval signifies its level, with higher intervals having
higher level.
\begin{lemma}\label{lem:leftmostklevel}
For $k \geq 3$, in $I_k$, the leftmost point of the level-$k$
interval coincides with the rightmost level-$1$ interval (a single
point) in the left $I_{k-1}$ part of $I_k$.
\end{lemma}
\begin{proof}
We prove by induction that the rightmost level $1$
interval of the left $I_{k-1}$ part of $I_k$ is at position
$\length(I_{k-1})-(k-1)+1$.
For $I_3$, the rightmost level 1 interval of the left $I_2$ part
of $I_3$ consists of point $4-(3-1)+1=3$.
By the inductive hypothesis, the rightmost level $1$
interval of the left $I_{k-1}$ part of $I_k$ is at $\length(I_{k-1})-(k-1)+1$.
Then for $I_{k+1}$, the rightmost level $1$ interval of its
left $I_{k}$ part is at
\[
\length(I_{k-1})-(k-1)+1 + \length(I_{k-1}) =
(2\length(I_{k-1})+1) - k + 1 = \length(I_k)-k+1.
\]
The last equality is implied by equation~\eqref{eq:Iklength}.
\end{proof}
\begin{lemma}\label{lem:IkcontainsJk2}
Instance $I_k$ contains a $\calJ_{\ceil{k/2}}$ configuration
as a subset.
\end{lemma}
\begin{proof}
By induction. It is true for $k=2$ and $k=3$, because $I_2$
contains a $\calJ_1$ configuration and $I_3$ contains a $\calJ_2$
configuration. For $k>3$, in instance $I_k$, the interval of level
$k$ contains completely a copy of $I_{k-1}$, in which two disjoint
copies of $I_{k-2}$ are contained. By the inductive hypothesis, in
each copy of $I_{k-2}$, a $\calJ_{\ceil{(k-2)/2}}$ configuration
is contained. These two disjoint $\calJ_{\ceil{(k-2)/2}}$
configurations, together with the level $k$ interval constitute a
$\calJ_{\ceil{k/2}}$ configuration in $I_k$.
\end{proof}
\begin{lemma}\label{lem:Ikcolk2}
There is a conflict-free coloring of $I_k$ with $\ceil{k/2}$ colors.
\end{lemma}
\begin{proof}
We define recursively a coloring of $I_k$ that uses
$\ceil{k/2}$ colors and we prove by induction that it is
conflict-free.
For $k=2$ the coloring is $1010$, which can be easily checked to
be conflict-free.
If $k$ is odd, take a coloring of $I_{k-1}$ and in its rightmost
position use color $\ceil{k/2}$, concatenate a coloring of
$I_{k-1}$, and then concatenate color `0'. By induction, the left
$I_{k-1}$ part is conflict-free because we started with a conflict-free
coloring and we introduced a new color $\ceil{k/2}$, the right
$I_{k-1}$ part is conflict-free because it is colored with a
conflict-free coloring. The level $k$ interval
is conflict-free because of color $\ceil{k/2}$ that occurs
uniquely.
If $k$ is even, with $k>2$,
take a coloring of $I_{k-1}$, concatenate a coloring of
$I_{k-1}$, and then concatenate color `0'. By induction, the left
$I_{k-1}$ part is conflict-free because it is colored with a
conflict-free coloring, the right
$I_{k-1}$ part is conflict-free because it is colored with a
conflict-free coloring. The level $k$ interval
is conflict-free because of color $\ceil{k/2}$ that occurs in the
right $I_{k-1}$ part and because its leftmost point, by
lemma~\ref{lem:leftmostklevel}, is to the right of the
$\ceil{k/2}$ color occurring in the left $I_{k-2}$ part of the
left $I_{k-1}$ part.
\end{proof}
\begin{corollary}
An optimal coloring of $I_k$ uses $\ceil{k/2}$ colors.
\end{corollary}
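The recursive coloring of lemma~\ref{lem:Ikcolk2} can be written out explicitly (our sketch; positions are $1,\dots,\length(I_k)$ and `0` is the placeholder color):

```python
def opt_coloring(k):
    """The ceil(k/2)-coloring of I_k from the lemma, as a list over positions."""
    if k == 2:
        return [1, 0, 1, 0]
    c = opt_coloring(k - 1)
    if k % 2 == 1:                    # odd k: new color ceil(k/2) at the rightmost
        c = c[:-1] + [(k + 1) // 2]   # position of the left I_{k-1} copy
    return c + opt_coloring(k - 1) + [0]

def is_cf(intervals, col):
    """Check the conflict-free property: a positive color unique per interval."""
    return all(any(c > 0 and col[i - 1:j].count(c) == 1
                   for c in set(col[i - 1:j]))
               for i, j in intervals)
```

For $k=3$ this yields $1,0,1,2,1,0,1,0,0$, and for $k=4$ the optimal coloring shown in figure~\ref{fig:inputk4}.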
We now describe a family of hypergraphs that arise after the first
iteration of the while loop of the 2-approximation algorithm,
if the initial input is $I_k$.
\begin{definition}
The instance $L_0$ is on one vertex, namely the vertex set is $\{1\}$,
and contains no
interval, i.e., $L_0 = \{\}$. The length of instance $L_0$ is
defined to be 1. For $k \geq 0$, $L_{k+1}$ is defined
recursively, as follows.
\[L_{k+1} = L_k \cup L_k^{+\length(L_k)} \cup
\{[\length(L_k),2\length(L_k)]\}
\]
\end{definition}
It is not difficult to see that the length satisfies the
recurrence relation $\length(L_{k+1})=2\length(L_k)$,
which implies $\length(L_k)=2^k$. We say that
$L_{k+1}$ consists of a left $L_k$ part, a right $L_k$ part,
and the interval $[2^{k},2^{k+1}]$.
\begin{proposition}\label{prop:Ikalgk}
The 2-approximation algorithm colors $I_k$ with $k$ colors.
\end{proposition}
\begin{proof}
Assume input $I_k$ is given to the 2-approximation algorithm.
In the iteration of the while loop
where the algorithm colors points with color
$\ell$ ($\ell = 0, 1,
\dots$), the algorithm considers a
hypergraph $H_\ell$. We will prove that the algorithm
considers the hypergraphs
\[H_0 = I_k, H_1 = L_{k-1}, \dots, H_{k-1} = L_1, H_k = L_0, \]
and then it terminates, i.e., it uses $k$ colors. We say that
$H_i$ is \emph{followed} by $H_{i+1}$, to show that two
hypergraphs $H_i$, $H_{i+1}$ are considered successively by the
algorithm, in that order.
First, we prove that for every $k \geq 2$, $I_k$ is followed by
$L_{k-1}$, by induction on $k$.
It is not difficult to see that, when $I_k$ is considered,
the independent set of intervals chosen consists of all level 1
intervals of $I_k$ and the hitting set that is chosen
consists of the right endpoints of all level 1 intervals of $I_k$
(a formal proof can be carried out by induction on $k$).
For $k=2$ it is not difficult to check that $I_2$ is followed by
$L_1$. For $k>2$, $I_k$ consists of a left $I_{k-1}$ part, which
induces a left $L_{k-2}$ part, and a right $I_{k-1}$ part, which
induces a right $L_{k-2}$ part (we use the inductive hypothesis).
From lemma~\ref{lem:leftmostklevel},
the leftmost point of the level $k$
interval is the same as the rightmost level $1$ interval in the
left $I_{k-1}$ part of $I_k$, and therefore the level $k$
interval induces
an interval that starts from the last point of the left $L_{k-2}$
part of the hypergraph that follows $I_k$
and ends at the last point of the right $L_{k-2}$
part of the hypergraph that follows $I_k$.
To summarize, $I_k$ is followed by a left $L_{k-2}$
part, a right $L_{k-2}$ part, and the interval $[2^{k-2},2^{k-1}]$, i.e., it is
$L_{k-1}$.
Then, we prove that for $k > 0$, $L_k$ is followed by $L_{k-1}$,
by induction on $k$.
For $k=1$, it is not difficult to see that for $L_1$ the interval
$[1,2]$ is chosen and its right endpoint, i.e., $2$,
makes up the hitting set. Then, easily, $L_1$ is followed by $L_0$.
For $k>1$,
when $L_k$ is considered, the independent set of intervals that is
chosen consists of the intervals of length two
of the left $L_{k-1}$ part
\[\{[1,2],[3,4], \dots, [2^{k-1}-1,2^{k-1}]\}\]
and the intervals of length two of the right
$L_{k-1}$ part
\[\{[2^{k-1}+1,2^{k-1}+2],[2^{k-1}+3,2^{k-1}+4],
\dots, [2^{k}-1,2^{k}]\} . \]
Therefore the hitting set is
\[ \{2,4, \dots, 2^{k-1}\} \cup
\{2^{k-1}+2, 2^{k-1}+4, \dots, 2^k\} =
\{i \mid i \text{ even},\ 2 \leq i \leq 2^k\}
\]
and consists of $2^{k-1}$ elements.
By induction, after removal of the points of the hitting set,
the left $L_{k-1}$ part induces a $L_{k-2}$ part, and the right
$L_{k-1}$ part induces a $L_{k-2}$ part. The interval
$[2^{k-1},2^k]$ of $L_k$ contains all points in
$\{2^{k-1}+2,2^{k-1}+4,\dots,2^k\}$
of the right $L_{k-1}$ part
and just point $2^{k-1}$ of the left $L_{k-1}$ part, and therefore
induces $[2^{k-2}, 2^{k-1}]$ in the hypergraph that follows $L_k$.
To summarize, $L_k$ is followed by a left $L_{k-2}$
part, a right $L_{k-2}$ part, and the interval $[2^{k-2},2^{k-1}]$, i.e., it is
$L_{k-1}$.
Finally, we prove that when $L_0$ is reached, no hypergraph
follows, and the algorithm
terminates. This is true, because $L_0$ contains no interval
(hyperedge).
\end{proof}
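The behaviour stated in proposition~\ref{prop:Ikalgk} can be observed directly by combining the greedy hitting set with the coloring loop (our sketch; the greedy scan is stated for hyperedges that are intervals of the induced vertex order, which is the case for every hypergraph $H_\ell$ above):

```python
def greedy_hitting_set(E):
    """Right endpoints of a greedily chosen pairwise-disjoint subfamily of E."""
    S, last = set(), None
    for e in sorted(E, key=max):
        if last is None or min(e) > last:  # e is disjoint from all chosen edges
            last = max(e)
            S.add(last)
    return S

def two_approx(n, intervals):
    """The 2-approximation algorithm on ([n], intervals); point -> color."""
    color = {}
    V = set(range(1, n + 1))
    E = [set(range(i, j + 1)) for i, j in intervals]
    level = 0
    while E:
        S = greedy_hitting_set(E)
        for v in V - S:
            color[v] = level
        V = S
        E = [e & S for e in E if len(e & S) > 1]
        level += 1
    for v in V:
        color[v] = level
    return color
```

On $I_2$ this produces the coloring $0,1,2,0$; in general it colors $I_k$ with maximum color $k$, while by lemma~\ref{lem:Ikcolk2} maximum color $\lceil k/2 \rceil$ suffices.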
\begin{remark}
From the above proof of proposition~\ref{prop:Ikalgk},
it is immediate that if $L_k$ is given as an input to the
2-approximation algorithm, the following sequence of hypergraphs
\[H_0 = L_k, H_1 = L_{k-1}, \dots, H_{k-1} = L_1, H_k = L_0 \]
is considered in the iterations of the while loop.
Moreover, it can also be proved, with a proof similar to those
of lemmata~\ref{lem:IkcontainsJk2} and~\ref{lem:Ikcolk2},
that an optimal coloring for $L_k$
uses $\ceil{k/2}$ colors.
Therefore, the family of instances $L_k$ is also a family of tight
instances for the 2-approximation algorithm.
However, the family of instances $I_k$ has the additional property
that no two intervals in it share a common right endpoint.
\end{remark}
\section{A quasipolynomial time algorithm}
\label{sec:quasip}
Consider the decision problem \textsc{CFSubsetIntervals}:
\begin{quote}
``Given a subhypergraph $H=([n],I)$
of the discrete interval hypergraph $H_n$ and a natural number $k$,
is it true that
$\chicf(H) \leq k$?''
\end{quote}
Notice that the above problem is non-trivial
only when $k < \floor{\log_2 n}+1$; if
$k \geq \floor{\log_2 n} + 1$ the answer is always yes, since
$\chicf(H_n) = \floor{\log_2 n} + 1$.
Algorithm~\ref{alg:nondet} is a non-deterministic
algorithm for \textsc{CFSubsetIntervals}.
The algorithm scans points from $1$ to $n$, tries
non-deterministically every color in $\{0,\dots,k\}$
at the current point and checks if all intervals in $I$
ending at the current point have the conflict-free property.
If some interval in $I$ does not have the conflict-free property under
a non-deterministic assignment, the algorithm answers `no'.
If all intervals in $I$ have the conflict-free property
under some non-deterministic assignment, the algorithm answers
`yes'.
We check if an interval in $I$
that ends at the current point, say $t$,
has the conflict-free property
in the following space-efficient way.
For every color $c$ in $\{0,\dots,k\}$, we keep track of:
\begin{enumerate}
\item[(a)]
the closest point to $t$ colored with $c$ in variable $p_c$
and
\item[(b)]
the second closest point to $t$ colored with $c$ in variable
$s_c$.
\end{enumerate}
Then, color $c$ occurs exactly once in $[j,t] \in I$
if and only if $s_c < j \leq p_c$.
\begin{algorithm}[htb!]
\caption{A non-deterministic algorithm deciding whether
$\chicf(H) \leq k$ for $H=([n],I)$}
\label{alg:nondet}
\begin{algorithmic}
\FOR{$c \gets 0 \text{ to } k$}
\STATE $s_c \gets 0$
\STATE $p_c \gets 0$
\ENDFOR
\FOR{$t \gets 1 \text{ to } n$}
\STATE choose $c$ non-deterministically from $\{0,\dots,k\}$
\STATE $s_{c} \gets p_{c}$
\STATE $p_{c} \gets t$
\FOR{$j \in \{j \mid [j,t] \in I\}$}
\STATE IntervalConflict $\gets$ True
\FOR{$c \gets 1 \text{ to } k$}
\IF{$s_c < j \leq p_c$}
\STATE IntervalConflict $\gets$ False
\ENDIF
\ENDFOR
\IF {IntervalConflict}
\RETURN{NO}
\ENDIF
\ENDFOR
\ENDFOR
\RETURN{YES}
\end{algorithmic}
\end{algorithm}
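A deterministic realization of algorithm~\ref{alg:nondet} (our sketch, not the paper's own code) memoizes the search on the state $(t, p, s)$; since there are at most $n^{O(k)}$ distinct states, this illustrates the quasipolynomial bound when $k = O(\log n)$:

```python
from functools import lru_cache

def cf_decision(n, intervals, k):
    """Decide whether ([n], intervals) admits a cf-coloring with colors {0..k}.

    Deterministic search over the nondeterministic algorithm's states:
    a state is (t, p, s), where p[c] / s[c] are the closest and second
    closest positions < t colored c.
    """
    ends = {}  # t -> left endpoints j of the intervals [j, t] in the input
    for j, t in intervals:
        ends.setdefault(t, []).append(j)

    @lru_cache(maxsize=None)
    def search(t, p, s):
        if t > n:
            return True
        for c in range(k + 1):          # nondeterministic color choice for point t
            p2 = p[:c] + (t,) + p[c + 1:]
            s2 = s[:c] + (p[c],) + s[c + 1:]
            # every interval [j, t] needs a positive color occurring exactly once
            if all(any(s2[d] < j <= p2[d] for d in range(1, k + 1))
                   for j in ends.get(t, [])):
                if search(t + 1, p2, s2):
                    return True
        return False

    return search(1, (0,) * (k + 1), (0,) * (k + 1))
```

For instance, the full discrete interval hypergraph $H_4$ requires $\lfloor \log_2 4 \rfloor + 1 = 3$ as the maximum color, so the decision procedure rejects $k=2$ and accepts $k=3$.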
\begin{lemma}
The space complexity of algorithm \ref{alg:nondet} is $O(\log^{2}{n})$.
\end{lemma}
\begin{proof}
Since $k = O(\log n)$ and each point position can be encoded with
$O(\log n)$ bits, the arrays $p$ and $s$ (indexed by color)
take space $O(\log^2 n)$. All other variables in the algorithm can
be implemented in $O(\log n)$ space.
Therefore the above non-deterministic algorithm has space
complexity $O(\log^2 n)$.
\end{proof}
\begin{corollary}
\textsc{CFSubsetIntervals} has a quasipolynomial time deterministic
algorithm.
\end{corollary}
\begin{proof}
By standard computational complexity theory arguments
(see, e.g., \cite{PapadimitriouCC}), we can transform
Algorithm~\ref{alg:nondet}
to a deterministic algorithm solving the same problem with time
complexity $2^{O(\log^2 n)}$, i.e., \textsc{CFSubsetIntervals} has a
quasipolynomial time deterministic algorithm.
\end{proof}
\section{Discussion and open problems}
The exact complexity of computing an optimal cf-coloring for a
subhypergraph of the discrete interval hypergraph remains an
open problem. We have provided a 2-approximation algorithm.
One might try to improve the approximation ratio, find a
polynomial time approximation scheme, or even find a polynomial
time exact algorithm.
The last possibility is supported by the fact that
the decision version of the problem, \textsc{CFSubsetIntervals},
is unlikely to be NP-complete: if it were, every NP-complete problem
would have a quasipolynomial time algorithm.
It would also be interesting to study the complexity
of computing optimal conflict-free colorings for subhypergraphs of
other geometric hypergraphs,
like the hypergraph induced by a set of $n$ points in the plane
with respect to
a given set of closed disks in the plane.
Finally, we introduced a slightly different cf-coloring function
$C\colon V \to \Naturals$, for which vertices colored with `0' cannot
act as uniquely-colored vertices in a hyperedge.
Naturally, one could try to study the bi-criteria optimization
problem, in which there are two minimization goals: (a) the number
of colors used, $\max_{v \in V}C(v)$ (minimization of frequency
spectrum use), and (b) the number of vertices with positive colors,
$\cardin{\{v \in V \mid C(v)>0\}}$ (minimization of activated
base stations).
\subsubsection*{Acknowledgments.} We wish to thank Matya Katz and
Asaf Levin for helpful discussions concerning the problems
studied in this paper.
\bibliographystyle{plain}
\title{The harmonicity of slice regular functions}
% arXiv:1902.08165 -- https://arxiv.org/abs/1902.08165
\begin{abstract}
In this article we investigate harmonicity, Laplacians, mean value theorems and related topics in the context of quaternionic analysis. We observe that a Mean Value Formula for slice regular functions holds true and is a consequence of the well known Representation Formula for slice regular functions over $\mathbb{H}$. Motivated by this observation, we construct three order-two differential operators whose kernels contain the slice regular functions, answering positively the question: is a slice regular function over $\mathbb{H}$ (the analogue of a holomorphic function over $\mathbb{C}$) ``harmonic'' in some sense, i.e., is it in the kernel of some order-two differential operator over $\mathbb{H}$? Finally, some applications are deduced, such as a Poisson Formula for slice regular functions over $\mathbb{H}$ and a Jensen's Formula for semi-regular ones.
\end{abstract}
\section{Introduction}
In \cite{gentilistruppa1} and \cite{gentilistruppa}, Gentili and Struppa gave the following definition of {\it slice regular function} over the quaternions:
\begin{definition}
Let $\Omega$ be a domain in $\H$. A real differentiable function $f \colon \Omega \to \H $ is said to be slice regular if, $\forall \,\, I \in \S =\{ q \in \H \, : \, \Re(q)=0 , \, |q| =1 \}$,
its restriction $f_{I}$ to the complex line
${\mathbb C}_I= \mathbb{R} + \mathbb{R} I$ passing through the origin and containing $1$ and $I$ is holomorphic on $\Omega \cap {\mathbb C}_I$,
which is equivalent to require that, $\forall \, I \, \in \S ,$
\begin{equation}\label{defdbar}
\overline{\partial}_I f (x+yI) := \frac{1}{2} ( \frac{\partial}{\partial x} + I \frac{\partial}{\partial y})f_I (x+yI)=0
\end{equation}
on $\Omega \cap {\mathbb C}_I.$
\end{definition}
Later the notion of ``slice regularity'' was generalized to algebras
other than $\H$ (\cite{CSS09,ghiloniperotti}).
For simplicity, slice regular functions are sometimes simply
called ``regular functions''.
Let $D\subset\mathbb{C}$ be any symmetric set
with respect to the real axis.
A function $F=F_{1}+ F_{2}\imath:D\rightarrow \H\otimes{\mathbb{C}}$
such that $F(\overline z)=\overline{F(z)}$ is said to be a
\textit{stem function}.
Let $\Omega_D=\{\alpha+\beta I:\alpha,\beta\in{\mathbb R}, I\in\S,
\alpha+\beta i\in D\}$.
A function $f:\Omega_{D}\rightarrow \H$ is said to be a \textit{(left) slice
function} if it is induced by a stem function $F=F_{1}+F_2\imath $ defined on $D$
in the following way: for any $\alpha+I\beta\in\Omega_{D}$,
\begin{equation*}
f(\alpha+I\beta)=F_{1}(\alpha+i\beta)+I F_{2}(\alpha+i\beta).
\end{equation*}
\noindent
If a stem function $F$ induces the slice function $f$, we will write
$f=\mathcal{I}(F)$.
\begin{proposition}
Let $D$ be a symmetric domain in ${\mathbb C}$ which intersects ${\mathbb R}$
and let $\Omega_D\subset\H$ be defined as above.
Then a {\em slice function} $f:\Omega_D\to \H$ is slice
regular if and only
if its stem function $F:D\to \H\otimes{\mathbb C}$ is holomorphic.
\end{proposition}
(See Proposition~8 of \cite{ghiloniperotti}.)
These notions have been studied extensively in recent years:
see, for example, the many results on slice regular functions from the unit ball of $\H $ to itself (\cite{bisigentili}, \cite{bisigentilitrends}, \cite{bs12}, \cite{bisistoppato}, \cite{bs17}) and on entire slice regular functions (\cite{BW}).\\
Classically, mean value theorems are closely related to harmonicity.
We investigate mean value properties for quaternionic functions.
We prove (Proposition~\ref{gen-mean}) that a slice regular function $f$
fulfills
\[
f(a+bI)
=\frac{1}{2\pi}
\int_\S \int_0^{2\pi}
(1-IJ)f(a+bJ+re^{J\theta})
d\theta
d\mu(J), \ \forall a,b\in{\mathbb R}, I\in\S
\]
where $\mu$ is a probability measure on $\S$ which is
invariant under the involution $J\mapsto -J$.
Conversely, we show that every continuous function $f:\H\to\H$ with
this mean value property must be the sum of a regular and an
anti-regular function (Theorem~\ref{char-harm}).
We also show that for any point $p\in\H$ and every $3$-sphere $S$
containing $p$ in the interior, there exists a $\H$-valued measure
on $S$ such that $f(p)=\int_S f(q)d\mu(q)$ for every slice regular function
$f$ (Theorem~\ref{H-measure}).
Over the field of complex numbers, the mean value property is equivalent to harmonicity.
It is therefore natural to ask whether slice regular functions lie in the kernel of some order-two differential operator over $\H $: in Section~8 we answer this question positively
by constructing three order-two differential operators whose kernels contain the slice regular functions.
The first one is $\Delta_*$, introduced in
Definition~\ref{def-laplace-star}.
For slice functions it is defined everywhere, for other functions
only outside ${\mathbb R}$.
On each slice ${\mathbb C}_I$ the operator $\Delta_*$ acts as the complex
Laplacian if we identify ${\mathbb C}_I\simeq{\mathbb C}$ and $\H={\mathbb C}_I\oplus J{\mathbb C}_I
\simeq{\mathbb C}^2$ (with $J$ being an imaginary unit orthogonal to $I$).
If $f$ is a slice function, then $\Delta_*(f)$ is again a slice function
and $\Delta_*(f)=\mathcal{I}(\Delta F)$
for $f=\mathcal{I}(F)$.
The second order-two differential operator is $\Delta'$:
\[
(\Delta'f)(q)=\left(\Delta_* \int_{\S}R_wfd\mu(w)\right) (q)
\]
Here $R_w$ is an averaging operator which we define based on rotations,
cf.~section \ref{sect-rot}.
We observe that $(\Delta' f)(q)=\frac{1}{2}\Delta_* (Tr(f))(q)=
\frac{1}{2}\Delta_* (f+f^c)(q)$. For the definition of $f^c$ see Definition 2.10. \\
The third order-two differential operator is $\Delta''$:
$$
(\Delta'' f)(q)=(\Delta_* N(f))(q)=(\Delta_* (f \cdot f^c)) (q)
$$
While $\Delta_*$ and $\Delta'$ are $\mathbb{R}$-linear operators, $\Delta''$ is not linear; nevertheless, a sort of
Leibniz rule for $(f * g)$ holds for $\Delta''$
(Proposition~\ref{product-rule}).
For the definition of the slice product, denoted by $*$, see Definition 2.7.\\
Our main results on $\Delta_*$ and $\Delta'$ are the following:
\begin{theorem}[Theorem~\ref{harm-star}]
Let $f:\Omega_D\to\H$ be a $C^2$ slice function.
Assume that $D$ is simply-connected.
Then $\Delta_*f$ vanishes identically if and only if $f$
can be written as the sum of a regular function $g$ and an
anti-regular function $h$.
\end{theorem}
For the definition of anti-regular function see Remark 2.5.
\begin{proposition}[Proposition~\ref{real-part}]
Let $h:\H \to\mathbb{R}$ be a
slice function with $\Delta'h=0$.
Then there exists a slice-preserving regular function $f$ such that $h=\Re \, (f)$.
\end{proposition}
\begin{proposition}[Proposition~\ref{isol-zero}]
Let $u \colon \H \to \mathbb{R}$ be
a $C^2$-function such that $\Delta' u=0$ outside ${\mathbb R}$.
Then $u$ admits no isolated zero in any real point $a\in{\mathbb R}$.
\end{proposition}
Finally, we provide a Jensen's formula.
\begin{proposition}[Jensen's Formula; Proposition \ref{jensen}]
Let $\Omega=\Omega_D$ be a circular
domain of $\H $ and
let $f:\Omega\rightarrow\H \cup\{ \infty \}$ be a
semi-regular function.
Let $\rho>0$ be such that the closed ball $\overline{\mathbb{B}_{\rho}}$, centered at $0$ with radius $\rho$,
is contained in $\Omega$, $f(0)\neq 0,\infty$,
and $f(y)\neq 0, \infty$ for any $y\in\partial \mathbb{B}_{\rho}$.
Let $\mu$ be a probability measure on $\S$.
Then:
\begin{equation}
\log|f(0)|
\le
\frac{1}{2\pi \mu(\S)} \int_0^{2\pi} \int_{\S }\log|f(\rho\cos\theta +\rho\sin\theta I)|\,d\mu(I)\,d\theta
-\sum_{|p_k|<\rho} m_k\log \frac{\rho}{|p_k|}
\end{equation}
where $div(f)=\sum_k m_k\{p_k\}$.
\end{proposition}
\vskip0.3cm
We hope that this paper can provide new ideas for studying slice regular functions and their ``harmonic properties'' on the slice regular
quaternionic manifolds recently introduced by Bisi-Gentili in \cite{BG} and Angella-Bisi in \cite{AnB}.
\section{Prerequisites about quaternionic functions}
In this section we will overview the main notions and results needed
for our aims.
First of all, let us denote by $\mathbb {H}$ the real algebra
of quaternions. An element $x\in\H $ is usually written as
$x=x_{0}+ix_{1}+jx_{2}+kx_{3}$, where $i^{2}=j^{2}=k^{2}=-1$ and $ijk=-1$.
Given a quaternion $x$ we introduce a conjugation in $\H $ (the usual one),
as $x^{c}=x_{0}-ix_{1}-jx_{2}-kx_{3}$; with this conjugation we define the real
part of $x$ as $\Re(x):=(x+x^{c})/2$ and the imaginary part as $\Im(x):=(x-x^{c})/2$.
With the notion of conjugation just defined
we can write the euclidean square norm of a
quaternion $x$ as $|x|^{2}=xx^{c}$.
The subalgebra of real numbers will be identified, of course,
with the set
$\mathbb{R}:=\{x\in\H \,\mid\,\Im(x)=0\}.$
Now, if $x\in\H \setminus \mathbb{R}$ is such that $\Re (x)=0$,
then the imaginary part of $x$ satisfies
$(\Im(x)/|\Im(x)|)^{2}=-1$. More precisely, any imaginary quaternion
$I=ix_{1}+jx_{2}+kx_{3}$ such
that $x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1$ is an imaginary unit. The set of imaginary units
is then a real $2$-sphere and it will be conveniently denoted as follows:
\begin{equation*}
\S :=\{x\in\H \,\mid\, x^{2}=-1\}=\{x\in\H \,\mid\,
\Re(x)=0, \, |x|=1\}.
\end{equation*}
With the previous notation, any $x\in\H $ can be written as $x=\alpha+I\beta$,
where $\alpha,\beta\in\mathbb{R}$ and $I\in\S $.
Given any $I\in\S $ we will denote the real subspace of $\H $
generated by $1$ and $I$ as:
\begin{equation*}
\mathbb{C}_{I}:=\{x\in\H \,\mid\,x=\alpha+I\beta, \alpha,\beta\in\mathbb{R}\}.
\end{equation*}
Sets of the previous kind will be called \textit{slices}.
We denote the $2-$sphere with center
$\alpha\in\mathbb{R}$ and radius $|\beta|$ (passing through $\alpha+I\beta\in\H $), as:
\begin{equation*}
\S _{\alpha+I\beta}:=\{x\in\H \,\mid\,x=\alpha+J\beta, J\in\S \}.
\end{equation*}
Obviously, if $\beta=0$, then $\S _{\alpha}=\{\alpha\}$.
\subsection{Slice functions and regularity}
In this part we will recall the main definitions and features of slice functions.
The theory of slice functions was introduced in \cite{ghiloniperotti} as a tool to generalize
the theory of quaternionic regular functions defined on particular domains introduced in \cite{gentilistruppa1, gentilistruppa},
to more general domains and to
alternative $*-$algebras.
\noindent
The complexification of $\H $ is defined to be the real tensor product
between $\H $ itself and $\mathbb{C}$:
\begin{equation*}
\H _{\mathbb{C}}:=\H \otimes_{\mathbb{R}}\mathbb{C}:=\{p+q\imath \,\mid\, p,q\in\H \}.
\end{equation*}
(Here $\imath=1 \otimes i$.)
Note that $\H\tensor\C $ has a natural structure of an associative
algebra induced by the algebra structures of $\H$ and ${\mathbb C}$.
Explicitly, the product on $\H\tensor\C$ is given as follows:
if $p_{1}+ q_{1}\imath , p_{2}+ q_{2}\imath$ belong to $\H\tensor\C $,
then,
\begin{equation*}
(p_{1}+ q_{1}\imath)(p_{2}+ q_{2}\imath)=p_{1}p_{2}-q_{1}q_{2}+(p_{1}q_{2}+q_{1}p_{2})\imath.
\end{equation*}
The usual complex conjugation $\overline{p+q\imath }=p-q\imath $ commutes with
the following involution (the quaternionic conjugation)
$(p+q\imath )^{c}=p^{c}+ q^{c}\imath$.
We introduce now the class of subsets of $\H $ where our functions will be defined.
\begin{definition}\label{def-circular}
Given any set $D\subseteq\mathbb{C}$,
we define its \textit{circularization} as the
subset in $\H $ defined as follows:
\begin{equation*}
\Omega_{D}:=\{\alpha+I\beta\,\mid\,\alpha,\beta\in{\mathbb R},
\alpha+i\beta\in D, I\in\S \}.
\end{equation*}
Such subsets of $\H $ are called \textit{circular} sets.
If $D\subset \mathbb{C}$ is
an open connected subset such that $D\cap\mathbb{R}\neq\emptyset$,
then $\Omega_{D}$
(which is again open and connected and intersects the real line ${\mathbb R}$)
is called a \textit{slice domain}
(see \cite{genstostru}).
\end{definition}
Note that for any subset $D\subset {\mathbb C}$ the circularization
$\Omega_D$ coincides with the circularization $\Omega_{D^s}$ of the
symmetrized domain $D^s=\{z:z\in D\text{ or }
\bar z\in D\}$.
From now on, $\Omega_{D}\subset\H $ will always denote a circular domain arising as circularization of a symmetric domain $D\subset{\mathbb C}$.
\begin{definition}
Let $D\subset\mathbb{C}$ be any symmetric set with respect to the real axis. A function $F=F_{1}+F_2\imath :D\rightarrow \H\tensor\C $ such that $F(\overline z)=\overline{F(z)}$ is said to be a \textit{stem function}.
\noindent
A function $f:\Omega_{D}\rightarrow\H $ is said to be a \textit{(left) slice
function} if it is induced by a stem function $F=F_{1}+F_2\imath $ defined on $D$
in the following way:
\begin{equation}\label{eq-stem}
f(\alpha+I\beta)=F_{1}(\alpha+i\beta)+I F_{2}(\alpha+i\beta).
\end{equation}
for all $\alpha+i\beta\in D$ and
all $I\in \S$.
\noindent
If a stem function $F$ induces the slice function $f$, we will write $f=\mathcal{I}(F)$.
The set of slice functions defined on a certain circular domain $\Omega_{D}$ will
be denoted by $\mathcal{S}(\Omega_{D})$.
\end{definition}
\begin{lemma}\label{stem-basic}
Let $f:\Omega_D\to\H$ be a function defined on a circular
domain $\Omega_D$. If there exists a function $F=F_1+F_2\imath: D\to\H\tensor\C$
such that
equation~\eqref{eq-stem} holds, then
$F$ is a stem function, i.e., $F_1(z)=F_1(\bar z)$
and $F_2( z)=-F_2(\bar z)$.
\end{lemma}
\begin{proof}
We observe that for all $z=\alpha+i \beta\in D$ and all $I\in\S$
we have
\[
f(\alpha+I\beta)=F_{1}(\alpha+i\beta)+I F_{2}(\alpha+i\beta)
\]
and
\[
f(\alpha+(-I)(-\beta))=F_{1}(\alpha-i\beta)-I F_{2}(\alpha-i\beta),
\]
implying the statement.
\end{proof}
Examples of (left) slice functions are polynomials and
functions given by power series in the variable $x\in\H $ with all coefficients on the right, i.e., a power series
\begin{equation*}
\sum_{k=0}^{+ \infty} x^{k}a_{k},\quad \{a_{k}\}\subset\H ,
\end{equation*}
if convergent, defines a slice function.
A function $f:\Omega_D\to\H$ is a {\em slice function}
if and only if it obeys the following {``representation formula''}:
\[
f(x+yJ)=\frac{1-JI}2f(x+yI)+ \frac{1+JI}2f(x-yI)\,\, \forall x,y\in{\mathbb R}, \,\, \forall
I,J\in\S
\]
(see \cite{genstostru, ghiloniperotti}).
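As a numerical sanity check (not part of the paper), one can encode quaternions as 4-tuples and verify the representation formula for the slice regular function $f(q)=q^2$:

```python
# Verify the representation formula for f(q) = q^2, with quaternions
# encoded as tuples (w, x, y, z) = w + xi + yj + zk.

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def qscale(t, a):
    return tuple(t * u for u in a)

def f(q):  # the slice regular function q -> q^2
    return qmul(q, q)

x, y = 0.7, -1.3                 # arbitrary real coordinates
I = (0.0, 1.0, 0.0, 0.0)         # the imaginary unit i
J = (0.0, 0.0, 0.6, 0.8)         # another element of S (Re = 0, norm 1)
one = (1.0, 0.0, 0.0, 0.0)

lhs = f(qadd((x, 0.0, 0.0, 0.0), qscale(y, J)))            # f(x + yJ)
half_m = qscale(0.5, qadd(one, qscale(-1.0, qmul(J, I))))  # (1 - JI)/2
half_p = qscale(0.5, qadd(one, qmul(J, I)))                # (1 + JI)/2
rhs = qadd(qmul(half_m, f(qadd((x, 0.0, 0.0, 0.0), qscale(y, I)))),
           qmul(half_p, f(qadd((x, 0.0, 0.0, 0.0), qscale(-y, I)))))
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))
```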
\subsubsection{Regularity}
Let now $D\subset \mathbb{C}$ be an open set and $z=\alpha+i\beta\in D$. Given a stem function $F=F_1+F_2\imath :D\rightarrow \mathbb{H}_\mathbb{C}$
of class $C^1$, then
\begin{equation*}
\frac{\partial F}{\partial z},\frac{\partial F}{\partial\bar{z}}:D\rightarrow\mathbb{H}_\mathbb{C}\simeq{\mathbb C}^4,
\end{equation*}
are defined as usual, i.e.,
\begin{equation*}
\frac{\partial F}{\partial z}=\frac{1}{2}\left(\frac{\partial F}{\partial \alpha}-\imath \frac{\partial F}{\partial \beta}\right)\quad \mbox{ and }\quad \frac{\partial F}{\partial \bar{z}}=\frac{1}{2}\left(\frac{\partial F}{\partial \alpha}+\imath \frac{\partial F}{\partial \beta}\right).
\end{equation*}
They are again stem functions.
Let $f$ be a slice function induced by a stem function $F$ (i.e.~$f=\mathcal{I}
(F)$) and let $q\in\H$, $q\in{\mathbb C}_I$, $I\in \S$.
Then
\begin{equation}\label{cullen}
(\partial_If)(q)=\mathcal{I}\left(\frac{\partial F}{\partial z}\right)(q),
\quad
(\bar \partial_If)(q)=
\mathcal{I}\left(\frac{\partial F}{\partial \overline{z}}\right)(q).
\end{equation}
These derivatives are
also called ``Cullen derivatives''.
We are now in the position to define slice regular functions (see Definition 8 in \cite{ghiloniperotti}).
\begin{definition}
Let $\Omega_{D}$ be a circular open set.
A slice function $f=\mathcal{I}(F)\in\mathcal{S}(\Omega_{D})$
is \textit{(left) regular} if its stem function $F$ is holomorphic.
The set of regular functions will be denoted by
\begin{equation*}
\mathcal{SR}(\Omega_{D}):=\{f\in\mathcal{S}(\Omega_{D})\,|\,f=\mathcal{I}(F), \,\,\, F:D\rightarrow \H\tensor\C \mbox{ holomorphic}\}.
\end{equation*}
\end{definition}
The set of regular functions is a real vector space and a right $\H $-module.
In the case in which $\Omega_{D}$ is a slice domain, the definition of regularity is equivalent to the one given in \cite{genstostru}.
\begin{remark}
A function $f=\mathcal{I}(F)\in\mathcal{S}^{1}(\Omega_{D})$ is
called \textit{(left) anti-regular} if its stem function $F$ is anti-holomorphic.
\end{remark}
We recall a key lemma of this theory that will be useful later on (see \cite{genstostru}).
\begin{lemma}[Splitting]\label{splitting}
Let $f$ be a regular function defined on an open set $\Omega$ of $\mathbb{H}.$ Then, for any $I \in \mathbb{S}$ and for any $J \in \mathbb{S}$ with $J \perp I,$ there exist two holomorphic functions
$g_I,h_I \colon \Omega \cap \mathbb{C}_I \to \mathbb{C}_I$ such that, for all $z = x + y I$,
$$
f_I (z)=g_I(z)+h_I(z)J
$$
where $f_I$ is the restriction of $f$ to $\mathbb{C}_I.$
\end{lemma}
\subsubsection{Product of slice functions and their zero set}
In general, the pointwise product of two slice functions is not a slice
function.
However, there is a product, called the ``slice product'', which
does turn pairs of slice functions into slice functions.
The following notion is of great importance in the theory.
For the following basic facts on this ``slice product''
see
\cite{ghiloniperotti}
and \cite{genstostru}.
\begin{definition}
Let $f=\mathcal{I}(F)$, $g=\mathcal{I}(G)$ both belonging to $\mathcal{S}(\Omega_{D})$ then the \textit{slice product} of $f$ and $g$ is the slice function
\begin{equation*}
f* g:=\mathcal{I}(FG)\in\mathcal{S}(\Omega_{D}).
\end{equation*}
\end{definition}
Explicitly, if $F=F_{1}+F_2\imath $ and $G=G_{1}+ G_{2}\imath$ are stem functions, then
\begin{equation*}
FG=F_{1}G_{1}-F_{2}G_{2}+(F_{1}G_{2}+F_{2}G_{1})\imath .
\end{equation*}
\begin{definition}
A slice function $f=\mathcal{I}(F)
\in {\mathcal S}(\Omega_D)$
is called \textit{slice-preserving} if,
for all $J\in \S $, $f(\Omega_{D}\cap\mathbb{C}_{J})\subset \mathbb{C}_{J}$.
\end{definition}
Slice-preserving functions satisfy the following characterization.
\begin{proposition}\label{slpreschar}
Let $f=\mathcal{I}(F_{1}+F_2\imath )$ be a slice function. Then $f$ is slice preserving if and only if the $\H $-valued components $F_{1}$, $F_{2}$ are real valued.
\end{proposition}
Since real numbers commute with all quaternions, this has the
following consequence:
Let $f, g\in\mathcal{S}(\Omega_{D})$. If $f$ is slice preserving, then
\begin{equation*}
(f*g)(x)=f(x)g(x).
\end{equation*}
If $f$ and $g$ are both slice-preserving, then
$fg=f*g=g*f=gf$.
As stated in \cite{genstostru}, if
$f$ is a regular function defined on $\mathbb{B}_{\rho}$, the ball of center 0 and radius $\rho,$ then it is slice preserving if and only if $f$ can be expressed as a power series of the form
\begin{equation*}
f(x)=\sum_{n\in\mathbb{N}}x^{n}a_{n},
\end{equation*}
with $a_{n}$ real numbers.
The following definitions are taken from \cite{genstostru,ghiloniperotti}.
\begin{definition}
Let $f=\mathcal{I}(F)\in\mathcal{S}(\Omega_{D})$, then also $F^{c}(z)=F(z)^{c}:=F_{1}(z)^{c}+F_2(z)^{c}\imath $ is a stem function.
We set
\begin{itemize}
\item $f^{c}:=\mathcal{I}(F^{c})\in\mathcal{S}(\Omega_{D})$, the \textit{slice conjugate} of $f$;
\item $f^{s}:=f^c* f$, the \textit{symmetrization} of $f$.
\end{itemize}
\end{definition}
\begin{remark}
We have that $(FG)^{c}=G^{c}F^{c}$, and so $(f* g)^{c}=g^{c}* f^{c}$.
In particular,
$f^{s}=(f^{s})^{c}$.
Moreover,
\begin{equation*}
(f* g)^{s}=(f)^{s}(g)^{s}\quad \mbox{and }\quad (f^{c})^{s}= f^{s}.
\end{equation*}
Another observation is that, if $f$ is slice preserving,
then $f^{c}=f$ and so $f^{s}=f^{2}$.
\end{remark}
Frequently, the sum $f+f^c$ is denoted by $Tr(f)$.
\subsubsection{Zeros of regular functions}
\label{sect-zero}
We are going now to recall some key facts about the zeros of a slice function.
Let $f:\Omega_{D}\rightarrow \H $ be any slice function with
zero locus
\[
\mathcal{Z}(f) = \{ x\in\Omega_D : f(x)=0 \}.
\]
Let $x\in\mathcal{Z}(f)$.
There are the following three possibilities:
\begin{itemize}
\item $x\in\mathbb{R}$, i.e., $x$ is a \textit{real} zero;
\item $x$ a \textit{punctual (non-real)} zero, i.e.,
$x\notin\mathbb{R}$ and $\S _{x}\cap\mathcal{Z}(f)=\{x\}$;
\item $x$ a \textit{spherical zero}, i.e.,
$x\notin\mathbb{R}$ and $\S _{x}\subset\mathcal{Z}(f)$.
\end{itemize}
The inclusion
\begin{equation}\label{zerosofprod}
\mathcal{Z}(f)\subset\mathcal{Z}(f*g),
\end{equation}
holds for any two slice functions $f,g:\Omega_{D}\rightarrow\H $,
while in general $\mathcal{Z}(g)\not\subset\mathcal{Z}(f*g)$.
What is true in general is the following equality:
\begin{equation*}
\bigcup_{x\in \mathcal{Z}(f* g)} \S _{x}=\bigcup_{x\in \mathcal{Z}(f)\cup \mathcal{Z}(g)}\S _{x}
\end{equation*}
\begin{theorem}[\cite{genstostru}]
Let $f\in\mathcal{SR}(\Omega_{D})$. If $\S _{x}\subset\Omega_{D}$ then the zeros of $f^{c}$ on
$\S _{x}$ are in bijective correspondence with those of $f$. Moreover $f^{s}$ vanishes exactly on the sets $\S _{x}$
on which $f$ has a zero.
\end{theorem}
\subsubsection{Identity principle}
\begin{theorem}[Identity principle, \cite{gentilistruppa1}, \cite{gentilistruppa}]\label{identity}
Let $\Omega_D$ be a slice domain. Given $f=\mathcal{I}(F):\Omega_{D}\rightarrow \H $ a regular function, if there exists a $J\in\S $ such that $(\Omega_{D}\cap\mathbb{C}_{J})\cap\mathcal{Z}(f)$ admits an accumulation point, then $f\equiv 0$ on $\Omega_{D}$.
\end{theorem}
\begin{corollary}\label{cor-id}
Let $f$ be a regular function on a circular slice domain $\Omega_D$.
If there exists a convergent sequence
of distinct numbers $p_n=x_n+iy_n$ in $D$
such that $f$ has at least one zero on every sphere of the form
\[
\Sigma_n=\{ x_n+Iy_n: I\in\S\},
\]
then $f$ vanishes identically.
\end{corollary}
\begin{proof}
Under the assumptions of the corollary, the symmetrization
$f^s=f^c*f$ vanishes identically on each $\Sigma_n$. Hence
$(\Omega_{D}\cap\mathbb{C}_{J})\cap\mathcal{Z}(f^s)$ contains
an accumulation point for any $J\in\S$. Consequently $f$
must vanish identically.
\end{proof}
\subsubsection{Multiplicities of zeros}
Let $f\in\mathcal{SR}(\Omega_D)$ be such that $f^{s}$ does not vanish identically. Given $n\in\mathbb{N}$ and $x \in \mathcal{Z}(f)$,
we say that $x$ is a zero of $f$
of \textit{total multiplicity} $n$, and we will denote it by $m_f(x )=n$,
if $((q-x )^{s})^n\mid f^{s}$ and $((q-x )^{s})^{n+1}\nmid f^{s}$.
If $m_f(x )=1$, then $x $ is called a \textit{simple zero} of $f$.
\begin{lemma}\label{right-factor}
Let $f$ be a regular function on a circular domain
with $f(p)=0$. Then there exists
$\tilde p\in\S_p$ and a regular function $g$
such that
$f(q)=g(q)*(q-\tilde p)$.
\end{lemma}
\begin{proof}
There is an element $a\in\S_p$ such that $f^c(a)=0$,
implying that there exists a regular function $h$
with
$f^c(q)=(q-a)*h(q)$. It follows that
$f(q)=h^c(q)*(q-a^c)$.
\end{proof}
\subsubsection{Semi$-$regular functions and their poles}
We now recall the concept of ``semi-regular functions'',
which are the quaternionic analogue of meromorphic functions.
Here our main references are
\cite{genstostru} and \cite{ghilperstosing}.
\begin{definition}
Let $f=\mathcal{I}(F)\in\mathcal{SR}(\Omega_{D})$.
We call the \textit{slice reciprocal} of $f$ the slice function
\begin{equation*}
f^{-*}:\Omega_{D}\setminus \mathcal{Z}(f^{s})\rightarrow \mathbb{H}, \qquad f^{-*}=\mathcal{I}((F^cF)^{-1}F^c).
\end{equation*}
\end{definition}
From the previous definition it follows that, if $x\in\Omega_D\setminus \mathcal{Z}(f^{s})$, then
\begin{equation*}
f^{-*}(x)=(f^{s}(x))^{-1}f^{c}(x).
\end{equation*}
The regularity of
$f^{-*}$ on $\Omega_{D}\setminus \mathcal{Z}(f^{s})$
follows from the last equality.
If $f$ is slice preserving, then $f^{c}=f$ and so $f^{-*}=f^{-1}$ where it is defined. Moreover $(f^{c})^{-*}=(f^{-*})^{c}$.
\begin{proposition}
Let $f\in\mathcal{SR}(\Omega_{D})$ such that $\mathcal{Z}(f)=\emptyset$, then $f^{-*}\in\mathcal{SR}(\Omega_{D})$ and
\begin{equation*}
f* f^{-*}=f^{-*}* f=1.
\end{equation*}
\end{proposition}
The concept of a semi-regular function has been
introduced in \cite{ghilperstosing}, 11.1-11.2.
For our purposes the crucial property of semi-regular functions
is that every semi-regular function $f$ may locally be written in the
form $f=g^{-*}*h$ with $g,h$ slice regular functions.
\begin{lemma}\label{lem-4.1}
Let $f$ be a slice function, $x,y\in\mathbb{R}$, $I,J\in\S$.
Then
\[
f(x+yI)+f(x-yI)=
f(x+yJ)+f(x-yJ)
\]
\end{lemma}
\begin{proof}
Due to the representation formula we have
\[
f(x+yJ)=\frac{1-JI}2f(x+yI)+ \frac{1+JI}2f(x-yI)
\]
and
\[
f(x-yJ)=\frac{1+JI}2f(x+yI)+ \frac{1-JI}2f(x-yI)
\]
Adding both above equalities yields the assertion of the lemma.
\end{proof}
\section{Divisors}\label{sect-div}
In complex analysis, the divisor of a holomorphic function is
the formal sum of its zeroes, counted with the respective multiplicities.
We propose that for a quaternionic slice regular function defined on $\Omega_D$
the divisor should
be defined as a formal sum of points in the closed upper half-plane intersected
with $D$, i.e., in $\{z\in D:\Im(z)\ge 0\}$.
\begin{definition}
Let $\Omega_D$ be a slice domain
and let $f$ be a slice regular
function on $\Omega_D$. Let $D^+=D\cap\{ z\in{\mathbb C}: \Im(z)\ge 0\}$.
Then the ``(slice) divisor'' $div(f)$ of $f$ is defined as the
formal ${\mathbb Z}$-linear combination $\sum_{z\in D^+} m_z(f)\{z\}$
where for $z=x+yi$ the multiplicity $m_z(f)$ is defined as follows:
$m_z(f)=m$ if in a neighbourhood of $\S_{x+yI}=
\{x+yJ:J\in\S\}$ the function $f$ can be written as
\[
f(q)=(q-a_1)*\ldots*(q-a_m)*g(q)
\]
with $a_i\in \S_{x+yI}$ and $g$ being a slice regular function without zeros
on $\S_{x+yI}$.
\end{definition}
Standard facts on zeros of slice regular functions (see \ref{sect-zero})
guarantee us the following properties:
\begin{itemize}
\item
$div(f*g)=div(f)+div(g)$ if both $f$ and $g$ are slice regular on $\Omega_D$.
\item
If $p_k=a_k+I_kb_k$
(with $a_k,b_k\in{\mathbb R}, b_k\ge 0, I_k\in\S $) are the isolated zeros with multiplicity
$n_k$ and $\S_{c_k+Jd_k}$ are the spherical zeros with multiplicity
$m_k$, then
\[
div(f) = \sum_k \left( n_k\{a_k+ib_k\} + 2 m_k\{c_k+id_k\} \right)
\]
\item
$\{z\in D^+: m_z(f)>0\}$ is discrete in $D^+$
(for $f\not\equiv 0$).
\end{itemize}
For example, let $I,J\in\S$ with $I\ne J$ and
consider $f(q)=(q-I)*(q-J)=q^2-q(I+J)+IJ$.
Then $f$ has a zero
only at $I$ while $g(q)=(q-J)*(q-I)$ has a zero only at $J$, but the
divisor is the same:
\[
div(f)=div(q-I)+div(q-J)=div(g)=2\{i\}.
\]
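This example can be checked by direct computation. The sketch below (not from the paper) implements the slice product of polynomials with right quaternionic coefficients as a convolution of coefficient lists, and confirms that $f=(q-I)*(q-J)$ vanishes at $I$ but not at $J$:

```python
# Sketch: the slice product of polynomials sum q^k a_k (coefficients on
# the right) is the convolution (sum q^k a_k)*(sum q^k b_k)
#   = sum_n q^n (sum_k a_k b_{n-k}).
# Quaternions are tuples (w, x, y, z) = w + xi + yj + zk.

def qmul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def star(f, g):  # slice product of two coefficient lists
    h = [(0.0, 0.0, 0.0, 0.0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = qadd(h[i + j], qmul(a, b))
    return h

def evaluate(poly, q):  # f(q) = sum q^k a_k
    power, out = (1.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 0.0)
    for a in poly:
        out = qadd(out, qmul(power, a))
        power = qmul(power, q)
    return out

I = (0.0, 1.0, 0.0, 0.0)
J = (0.0, 0.0, 1.0, 0.0)
neg = lambda q: tuple(-u for u in q)
one = (1.0, 0.0, 0.0, 0.0)

f = star([neg(I), one], [neg(J), one])   # (q - I) * (q - J)
assert evaluate(f, I) == (0, 0, 0, 0)    # I is a zero of f
assert evaluate(f, J) != (0, 0, 0, 0)    # but J is not
```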
This notion of a divisor is easily extended
from (slice) regular to semi-regular functions,
since semi-regular functions may locally be written in the form
$f=g^{-*}*h$ with $g,h$ slice regular.
If $z=x+yi$ is a point in a symmetric domain $D$ and $f$ is semi-regular
on $\Omega_D$, we choose a sufficiently small symmetric
domain $D'$ with $z\in D'\subset D$ such that $f$
may be written in the form $g^{-*}*h$ on $\Omega_{D'}$.
Then we define $div(f)=div(h)-div(g)$ on $D'$.
\begin{warning}
In complex analysis, a meromorphic function $f$ is holomorphic
iff $div(f)\ge 0$. The analog quaternionic statement is {\em not}
true. For example, let $I,J\in\S$ and
consider
\[
(q-I)*(q-J)\frac{1}{q^2+1}= (q^2-q(I+J)+IJ)\frac{1}{q^2+1}
\]
This is a semi-regular function whose divisor is zero,
although it is not slice regular unless $I=-J$.
\end{warning}
\section{A mean value theorem}
\begin{proposition}[General Mean Value Formula]\label{GMVF}
\label{gen-mean}
Let $\mu$ be a probability measure on $\S$
which is
invariant under $J\mapsto -J$.
Let $f$ be a slice regular
function on a slice domain $\Omega_D$
induced by a stem function $F:D\to\H\tensor\C$.
Let $a,b,r\in{\mathbb R}$, $b\ge 0$, $r>0$ and $I\in\S$
such that $\Omega_D$ contains the closed ball with radius $r$
and center $a+bI$.
Then we have:
\begin{align*}
F_1(a+bi)&=\frac{1}{4\pi}
\int_\S\int_0^{2\pi}
f(a+bJ+re^{J\theta})+f(a-bJ+re^{-J\theta})
d\theta d\mu(J)\\
&=\frac{1}{2\pi}
\int_\S\int_0^{2\pi}
f(a+bJ+re^{J\theta})
d\theta d\mu(J)
\end{align*}
as well as
\[
F_2(a+bi)= -\frac{1}{2\pi}
\int_\S J \int_0^{2\pi}
f(a+bJ+re^{J\theta})
d\theta d\mu(J)
\]
and therefore
\begin{align*}
f(a+bI)& = F_1(a+bi)+IF_2(a+bi)\\
&=\frac{1}{2\pi}
\int_\S \int_0^{2\pi}
(1-IJ)f(a+bJ+re^{J\theta})
d\theta
d\mu(J)\\
\end{align*}
\end{proposition}
\begin{proof}
This follows from combining the complex mean value theorem
with the formulae relating slice and stem functions.
\end{proof}
\begin{corollary}\label{meanvalue}
Let $f$ be a regular function, $r>0$, $a\in{\mathbb R}$.
Then
\[
f(a)=\frac1{2\pi}\int_{\S}\int_0^{2\pi}
f(a+r\cos\theta+r\sin\theta I)
d\theta d\mu(I)
\]
for any probability measure $\mu$ on $\S$
which is invariant under the involution $J\mapsto -J$.
\end{corollary}
\begin{proof}
This is a special case of Proposition~\ref{gen-mean} with $b=0$.
\end{proof}
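The mean value formula of Corollary~\ref{meanvalue} lends itself to a quick numerical sanity check (a sketch, not part of the argument: quaternions are modelled as numpy 4-vectors, $f(q)=q^3$ serves as test function, and $\mu$ is sampled as the uniform measure on $\S$, which is invariant under $J\mapsto -J$):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as 4-vectors (1, i, j, k)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def f(q):
    # test function f(q) = q^3, slice regular
    return qmul(q, qmul(q, q))

rng = np.random.default_rng(0)
a, r = 0.5, 1.0
thetas = np.linspace(0.0, 2*np.pi, 64, endpoint=False)
acc, n_units = np.zeros(4), 200
for _ in range(n_units):
    v = rng.normal(size=3)                     # random imaginary unit I (samples mu)
    I = np.concatenate(([0.0], v / np.linalg.norm(v)))
    for t in thetas:
        # point a + r cos(t) + r sin(t) I on the circle of radius r in the slice C_I
        q = np.array([a + r*np.cos(t), 0.0, 0.0, 0.0]) + r*np.sin(t)*I
        acc += f(q)
mean = acc / (n_units * len(thetas))
print(mean)   # close to (a^3, 0, 0, 0) = (0.125, 0, 0, 0)
```

Within each slice the circle average of a polynomial is exact up to quadrature error, so the result agrees with $f(a)=a^3$ essentially to machine precision.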
\begin{remark}
Note that in Corollary~\ref{meanvalue} we integrate over the
sphere with radius $r$ and center $a$, but not with respect to the
Euclidean volume element $dV$ on this $3$-sphere.
This is crucial.
For example, $\int_{||q||=1} q^2\,dV<0$, hence
$\int_{||q||=1} f(q)\,dV\ne f(0)$ for $f(q)=q^2$.
\end{remark}
\subsection{Characterization of harmonicity}
A continuous function on ${\mathbb C}$ is harmonic if and only if
it satisfies the mean value property.
We derive a similar criterion in the quaternionic setup.
\begin{theorem}\label{char-harm}
Let $\mu$ be a probability measure on $\S$ which is
invariant under the involution $J\mapsto -J$.
Let $f:\H\to\H$ be a continuous function and for $p=a+bI\in\H$
and $r>0$ define
\[
M_{p,r}=
\frac{1}{2\pi}
\int_\S \int_0^{2\pi}
(1-IJ)f(a+bJ+re^{J\theta})
d\theta
d\mu(J).
\]
Then $f$ is harmonic (in the sense of being the
sum of a regular and an anti-regular function) if and only if
\begin{equation}\label{eq-mr}
M_{p,r}=f(p)\quad\quad \forall p\in\H, r>0.
\end{equation}
\end{theorem}
\begin{proof}
We assume that \eqref{eq-mr} holds.
The function $f$ is a slice function if and only if it satisfies the
representation formula.
Hence, under \eqref{eq-mr}, $f$ is a slice function iff
\[
f(a+bH)=\frac{1-HI}2 M_{p,r}+\frac{1+HI}2 M_{p^c,r}
\]
for all $a,b\in{\mathbb R}$, $H,I\in\S$ and $p=a+bI$.
This identity can be verified by explicit calculation:
\begin{align*}
& \frac{1-HI}2 M_{p,r}+\frac{1+HI}2 M_{p^c,r}\\
& =
\frac{1}{4\pi}
\int_\S \int_0^{2\pi}
\left((1-HI)(1-IJ)+(1+HI)(1+IJ)\right)
f(a+bJ+re^{J\theta})d\theta d\mu(J)\\
& =
\frac{1}{2\pi}
\int_\S \int_0^{2\pi}
\left( 1-HJ \right)
f(a+bJ+re^{J\theta})d\theta d\mu(J)\\
&= M_{a+bH,r}=f(a+bH).
\end{align*}
Thus $f$ is a slice function induced by some stem function $F$.
This stem function can easily be determined as $F=F_1\otimes 1+F_2\otimes i$ with
\begin{align*}
F_1(a+bi) &
= \frac{1}{2\pi}
\int_\S \int_0^{2\pi}
f(a+bJ+re^{J\theta})\,d\theta\, d\mu(J)\\
F_2(a+bi) &
= - \frac{1}{2\pi}
\int_\S \int_0^{2\pi}
Jf(a+bJ+re^{J\theta})\,d\theta\, d\mu(J).
\end{align*}
Since $\mu$ is invariant under $J\mapsto -J$, we have
\begin{align*}
F_1(a+bi) &
= \frac{1}{4\pi}
\int_\S \int_0^{2\pi}
\left( f(a+bJ+re^{J\theta}) + f(a-bJ+re^{-J\theta}) \right)
d\theta\, d\mu(J)\\
& = \frac{1}{2\pi}\int_0^{2\pi} F_1(a+bi+re^{i\theta})\, d \theta.
\end{align*}
Thus $F_1$ satisfies the ordinary mean value property for functions
defined on ${\mathbb C}$ and therefore must be harmonic. Similar arguments
apply to $F_2$.
As a result we see that $F$ is the sum of a holomorphic
and an antiholomorphic function from ${\mathbb C}$ to $\H\otimes_{{\mathbb R}}{\mathbb C}$ and
consequently $f:\H\to\H$ is the sum of a regular and an anti-regular
function.
For the opposite direction, assume that $f$ is the sum of a regular
function and an anti-regular function.
Then \eqref{eq-mr} follows immediately from
Proposition~\ref{gen-mean}.
\end{proof}
\section{Generalized Representation Formula}
For slice regular functions, the formula below
already appeared in \cite{CGSS}: see Theorem 3.2.
Here we give a new proof and we deduce some consequences.
\begin{proposition}\label{gen-rep-formel}
Let $f$ be a slice function (not necessarily regular)
and let $I,J,H\in\S$
(not necessarily orthogonal). Assume that $J\ne I$, $H\ne I$.
Then the following equality holds:
\begin{gather*}
\left( \frac{1+JI}2 \right)^{-1}
f(x+yJ) - \left(\frac{1+HI}2\right)^{-1}f(x+yH) \\
=
\left( \left( \frac{1+JI}2 \right)^{-1}
\left( \frac{1-JI}2 \right)
- \left(\frac{1+HI}2\right)^{-1}
\left(\frac{1-HI}2\right)\right) f(x+yI)
\end{gather*}
\end{proposition}
\begin{proof}
We have
\[
f(x+yJ)=\frac{1-JI}2f(x+yI)+ \frac{1+JI}2f(x-yI)
\]
and
\[
f(x+yH)=\frac{1-HI}2f(x+yI)+ \frac{1+HI}2f(x-yI).
\]
A linear combination of both equations yields:
\begin{gather*}
\left(\frac{1+HI}2\right) \left( \frac{1+JI}2 \right)^{-1}
f(x+yJ) - f(x+yH) \\
=
\left(\left(\frac{1+HI}2\right) \left( \frac{1+JI}2 \right)^{-1}
\left( \frac{1-JI}2 \right)
- \left(\frac{1-HI}2\right)\right) f(x+yI)
\end{gather*}
and
\begin{gather*}
\left( \frac{1+JI}2 \right)^{-1}
f(x+yJ) - \left(\frac{1+HI}2\right)^{-1}f(x+yH) \\
=
\left( \left( \frac{1+JI}2 \right)^{-1}
\left( \frac{1-JI}2 \right)
- \left(\frac{1+HI}2\right)^{-1}
\left( \frac{1-HI}2\right)\right) f(x+yI)
\end{gather*}
\end{proof}
\begin{lemma}\label{lem-inj}
Fix $I\in\S$. For $J\ne I$ we define
\[
R(J)=\left( \frac{1+JI}2 \right)^{-1}
\left( \frac{1-JI}2 \right)
\]
Then $R:\S\setminus\{I\}\to\H $ is injective, and
$R(J)=0$ iff $J=-I$.
Furthermore $\lim_{J\to I}|R(J)| = +\infty$.
\end{lemma}
\begin{proof}
Since $I,J$ are purely imaginary, we have $\overline{JI}=IJ$.
Therefore
\[
\left( \frac{1+JI}2 \right)^{-1}=2\frac{1+IJ}{|1+JI|^2}
\]
and therefore
\[
R(J)= \frac{(1+IJ)(1-JI)}{|1+JI|^2}
=\frac{IJ-JI}{|1+JI|^2}
\]
Since $\overline{IJ}=JI$, $\frac{IJ-JI}{2}=\frac{IJ-\overline{IJ}}{2}$
denotes the vector part of $IJ$.
Define $r=\Re \, (IJ)$. Observe that $r\in(-1,+1]$.
Using $|IJ|=1$ we know that the vector part of $IJ$ has norm
$\sqrt{1-r^2}$.
Therefore
\[
|R(J)|^2= 4\frac{1-r^2}{|1+JI|^4}
\]
Now $|1+JI|^2=|1+\Re \, (JI)|^2+|\Im \, (JI)|^2$ and
therefore $|1+JI|^4=(2+2r)^2$ implying
\[
|R(J)|^2=4\frac{1-r^2}{(2+2r)^2}=\frac{(1+r)(1-r)}{(1+r)^2}
=\frac{1-r}{(1+r)}=\left( -1 +\frac{2}{1+r}\right).
\]
We observe that
the map
\[
r\mapsto \left( -1 +\frac{2}{1+r}\right)
\]
is evidently an injective map from $(-1, +1]$ to $\mathbb{R}^+$.
As a consequence, we obtain: if $|R(J)|=|R(H)|$ for some $J,H\in\S \setminus \{ I \} $,
then $\Re \, (IJ)=\Re \, (IH)$.
On the other hand, the vector part of $IJ$ equals
\[
\frac{1}{2}\left(IJ-\overline{IJ}\right)
=\frac12 R(J) |1+JI|^2 = R(J) (1+r)
\]
because $|1+JI|^2=2+2r$.
Hence the vector parts of $IJ$ and $IH$ also have to agree
as soon as $R(J)=R(H)$. Since then both the real and the vector parts
of $IJ$ and $IH$ coincide, we get $IJ=IH$ and therefore $J=H$.
\end{proof}
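The closed formula $|R(J)|^2=\frac{1-r}{1+r}$ obtained in the proof, as well as the vanishing of $R$ at $J=-I$, can be checked numerically (a sketch outside the paper; quaternions are modelled as numpy 4-vectors and `qmul`, `qinv`, `imunit` are hypothetical helper names):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as 4-vectors (1, i, j, k)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qinv(q):
    # quaternionic inverse: conjugate divided by the squared norm
    conj = np.array([q[0], -q[1], -q[2], -q[3]])
    return conj / np.dot(q, q)

def imunit(rng):
    # random imaginary unit, i.e., a uniform point of the sphere S
    v = rng.normal(size=3)
    return np.concatenate(([0.0], v / np.linalg.norm(v)))

one = np.array([1.0, 0.0, 0.0, 0.0])
rng = np.random.default_rng(1)
for _ in range(100):
    I, J = imunit(rng), imunit(rng)
    JI = qmul(J, I)
    R = qmul(qinv((one + JI) / 2), (one - JI) / 2)   # R(J)
    r = qmul(I, J)[0]                                # r = Re(IJ)
    assert np.isclose(np.dot(R, R), (1 - r) / (1 + r))

# R(J) = 0 for J = -I:
I = imunit(rng)
mJI = qmul(-I, I)
assert np.allclose(qmul(qinv((one + mJI) / 2), (one - mJI) / 2), 0.0)
print("ok")
```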
\begin{proposition}\label{gen-repr}
Fix $I\in\S$.
Then there exists a continuous map $M=(M_1,M_2):\S\times\S\setminus
D_\S\to\H \times\H $ (where $D_\S$ denotes the diagonal, i.e.,
$D_\S=\{(q,q):q\in\S\}$)
such that
\begin{equation} \label{repgenfor}
f(x+yI)=M_1(J,H)f(x+yJ) + M_2(J,H) f(x+yH) \quad \forall (J,H)\in\S\times\S\setminus D_\S
\end{equation}
for all $x,y\in{\mathbb R}$ and every regular function $f$.
\end{proposition}
\begin{proof}
First assume that $I,J,H$ are pairwise distinct.
Then the statement follows from Proposition \ref{gen-rep-formel}
with
\[
M_1(J,H)=\left( R(J)-R(H) \right)^{-1} \left( \frac{1+JI}2 \right)^{-1}
\]
and
\[
M_2(J,H)=- \left( R(J)-R(H) \right)^{-1} \left( \frac{1+HI}2 \right)^{-1}.
\]
(Note that $1+JI\ne 0$, resp.~$1+HI\ne 0$, because of our assumptions
$J\ne I$, $H\ne I$. Note further that $R(J)-R(H)\ne 0$
due to $J\ne H$ and the injectivity statement of Lemma \ref{lem-inj}.)
Next we claim that the functions $M_i$ do extend continuously
to the points where $J=I$ or $H=I$, i.e., extend continuously to all
of $\S\times\S\setminus
D_\S$.
Consider the case where $J$ approaches $I$. Since
we excluded the diagonal $D_\S$, we may fix $H\ne I$.
Now
\begin{align*}
M_1(J,H) &=\left( R(J)-R(H) \right)^{-1} \left( \frac{1+JI}2 \right)^{-1}\\
&= \left( \frac{1+JI}2 \left( R(J)-R(H) \right) \right)^{-1}\\
&= \left( \frac{1+JI}2 R(J) - \frac{1+JI}2R(H) \right)^{-1}\\
&= \left( \frac{1-JI}2 - \frac{1+JI}2R(H) \right)^{-1} \\
\end{align*}
implying
\[
\lim_{J\to I} M_1(J,H)
= \left( \frac{2}2 - 0\cdot R(H) \right)^{-1} =1
\]
In a similar way one proves
\[
\lim_{J\to I} M_2(J,H)=0
\]
and
the analog statement for $H\to I$.
\end{proof}
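Formula \eqref{repgenfor} with the above $M_1,M_2$ can be tested numerically (a sketch, not part of the paper: quaternions as numpy 4-vectors, $f(q)=q^2$ as test function, and randomly chosen units $I,J,H$, which are pairwise distinct with probability one):

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as 4-vectors (1, i, j, k)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qinv(q):
    conj = np.array([q[0], -q[1], -q[2], -q[3]])
    return conj / np.dot(q, q)

def imunit(rng):
    v = rng.normal(size=3)
    return np.concatenate(([0.0], v / np.linalg.norm(v)))

one = np.array([1.0, 0.0, 0.0, 0.0])
rng = np.random.default_rng(2)
I, J, H = imunit(rng), imunit(rng), imunit(rng)

def R(K):
    # R(K) = ((1+KI)/2)^{-1} * ((1-KI)/2), with the fixed unit I
    KI = qmul(K, I)
    return qmul(qinv((one + KI) / 2), (one - KI) / 2)

M = qinv(R(J) - R(H))
M1 = qmul(M, qinv((one + qmul(J, I)) / 2))
M2 = -qmul(M, qinv((one + qmul(H, I)) / 2))

f = lambda q: qmul(q, q)                 # test function f(q) = q^2
x, y = 0.3, 0.7
point = lambda K: np.array([x, 0.0, 0.0, 0.0]) + y * K
lhs = f(point(I))
rhs = qmul(M1, f(point(J))) + qmul(M2, f(point(H)))
print(np.allclose(lhs, rhs))             # True
```

Note that $M_1$ and $M_2$ multiply from the left, as required by the noncommutativity of $\H$.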
\begin{remark}
We observe that our formula \eqref{repgenfor} coincides with the Representation Formula of Proposition 6 in \cite{ghiloniperotti},
where $M_1(J,H)=(I-H)(J-H)^{-1}$ and $M_2(J,H)=-(I-J)(J-H)^{-1}.$
\end{remark}
\section{Rotations}
\label{sect-rot}
For every $w\in\H ^*$ let $S_w:\H \to\H $ denote the map given by
$S_w(q)=w^{-1}qw$. This is an orthogonal transformation of $\mathbb{R}^4$ which fixes
$\mathbb{R}$ pointwise.
Observe that $S_w^{-1}=S_{w^{-1}}$.
\begin{lemma}
Let $I,J,K$ be orthogonal imaginary units.
Then $S_I:\H \to\H $ is a linear map acting as $id$
on ${\mathbb C}_I=\left<1,I\right>_\mathbb{R}$
and as $-id$ on ${\mathbb C}_I^\perp= \left<J,K\right>_\mathbb{R}$.
\end{lemma}
\begin{proof}
Follows easily from explicit calculations.
\end{proof}
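The basic properties of the maps $S_w$ (orthogonality, fixing ${\mathbb R}$, the action of $S_I$ on an orthogonal basis, and the composition rule $S_{vw}=S_w\circ S_v$ noted further below) are easy to confirm numerically; the following is a sketch outside the paper, with quaternions as numpy 4-vectors and `S(w, q)` a hypothetical helper implementing $S_w(q)=w^{-1}qw$:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as 4-vectors (1, i, j, k)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

def qinv(q):
    conj = np.array([q[0], -q[1], -q[2], -q[3]])
    return conj / np.dot(q, q)

def S(w, q):
    # S_w(q) = w^{-1} q w
    return qmul(qinv(w), qmul(q, w))

one = np.array([1.0, 0.0, 0.0, 0.0])
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
k = np.array([0.0, 0.0, 0.0, 1.0])

# S_I acts as the identity on <1, I> and as -id on <J, K>:
assert np.allclose(S(i, one), one) and np.allclose(S(i, i), i)
assert np.allclose(S(i, j), -j) and np.allclose(S(i, k), -k)

# S_w is an isometry of R^4 fixing the reals, and S_{vw} = S_w o S_v:
rng = np.random.default_rng(3)
v, w, q = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
assert np.isclose(np.linalg.norm(S(w, q)), np.linalg.norm(q))
assert np.allclose(S(w, one), one)
assert np.allclose(S(qmul(v, w), q), S(w, S(v, q)))
print("ok")
```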
The following lemma is a well known result (see for example
\cite{GHS}, Prop.~2.22, page 28), but for the reader's convenience
we prefer to give our own proof here.
\begin{lemma}\label{generate-so}
The group of all orientation preserving orthogonal transformations
of $\left<I,J,K\right>_\mathbb{R}$ is generated by the
transformations $S_w$ with $w\in\S$.
\end{lemma}
\begin{proof}
The group is $SO(3,\mathbb{R})$.
For each $k\in{\mathbb N},$ let $\Sigma_k$ denote the
set of all compositions $S_{w_1}\circ\ldots\circ S_{w_{2k}}$ with $w_1,\ldots,w_{2k}\in\S$.
Then $\Sigma=\cup_k\Sigma_k$ is the group generated by all the $S_w$.
($\Sigma$ is evidently a semigroup and in fact a group, because
$(S_w)^{-1}=S_{w^{-1}}$.)
$\Sigma$ is connected, because each $\Sigma_k$ is connected and $\Sigma_{k} \subseteq \Sigma_{k+1}.$
On the other hand,
it is not commutative, since e.g.~$S_I$ and $S_{(I+J)/\sqrt 2}$
do not commute.
However, by standard Lie theory, $SO(3,\mathbb{R})$ has no non-commutative
connected subgroups except $SO(3,\mathbb{R})$ itself.
\end{proof}
\begin{lemma}\label{aver-sw}
Let $q\in\H$.
Let $\mu$ denote the (unique) probability measure on $\S$ which is
invariant under all rotations.
Then
\[
\Re \, (q)=\int_{\S} S_w(q)d\mu(w)
\]
\end{lemma}
\begin{proof}
The map $H:q\mapsto \int_{\S} S_w(q)d\mu(w)$ is ${\mathbb R}$-linear.
Let $v\in\S$. Then $S_v$ is an orthogonal transformation.
Due to the invariance of the measure $\mu$, we have:
\[
S_v(H(q))
=\int_{\S} S_v\left(S_w(q)\right)d\mu(w)
=\int_{\S} S_w(q)\, d(S_v^*\mu)(w)=\int_{\S} S_w(q)d\mu(w)=H(q).
\]
(Here $S_v^*\mu$ denotes the pull-back of the measure $\mu$ by the map $S_v$.)
It follows that
\[
H(q)\in\{x\in\H:S_w(x)=x ,\,\, \forall w \in \mathbb{S} \}={\mathbb R}\, , \,\, \forall q\in\H.
\]
We observe that $\int_{\S} S_w(q)d\mu(w)=q, \,\, \forall q\in{\mathbb R}$,
because $S_w(q)=q, \,\, \forall q\in{\mathbb R}, w\in\S$.
Now let $q$ be in the orthogonal complement of ${\mathbb R}$, i.e., in the
real vector subspace $V$ of $\H$ spanned by $I,J,K$.
Since the integral is linear, and $V$ is stabilized by every $S_w$
($w\in\S$)
it follows that
\[
H(q)=\int_{\S} S_w(q)d\mu(w)\in V , \,\,\, \forall q\in V.
\]
Combined with the fact $H(q)\in{\mathbb R}, \,\, \forall q\in\H$,
we obtain
\[
H(q)=\int_{\S} S_w(q)d\mu(w)\in V\cap{\mathbb R}=\{0\}, \,\,\, \forall q\in V.
\]
Thus
\[
\Re \, (q)=\int_{\S} S_w(q)d\mu(w)
\]
for every $q\in {\mathbb R}$ and every $q\in V$. By ${\mathbb R}$-linearity of the map $H$,
it follows that this equality holds for all $q\in\H$.
\end{proof}
\begin{definition}\label{def-rot}
A function $f:\H\to\H$ is called
{\em ``rotationally invariant''} if it is invariant under all
orthogonal transformations of the space of imaginary elements.
\end{definition}
\begin{remark} This class of functions has been studied in
\cite{GMP} where they are called {\em ``circular''} functions.
\end{remark}
\begin{lemma}\label{LemRotInv}
For a function $f:\H\to\H$ the following properties are
equivalent:
\begin{enumerate}
\item
$f$ is {\em rotationally invariant}.
\item
$f(q)=f(S_wq)$ for all $q\in\H$, $w\in\S$.
\item
$f(x+yI)=f(x+yJ)$ for all $x,y\in{\mathbb R}$, $I,J\in\S$.
\item
$f$ is induced by a stem function $F$ with $F({\mathbb C})\subset\H\otimes{\mathbb R}$,
i.e., $F_2=0$ for $F=F_1+F_2\imath $.
\end{enumerate}
\end{lemma}
\begin{proof}
First we show $(3)\Rightarrow(4)$: Given such a function $f$, we define
$F:{\mathbb C}\to \H\otimes{\mathbb C}$ as $F(x+iy)=f(x+yI)\otimes 1$ for any $I\in\S$.
Since $(3)$ implies $f(x+yI)=f(x+y(-I))$, we have $F(z)=F(\bar z)$.
Combined with $F({\mathbb C})\subset\H\otimes {\mathbb R}$ it follows
that $\overline{F(z)}=F(z)=F(\bar z)$.
Consequently $F$ fulfills $F(\bar z)=\overline{F(z)}$ and is a
stem function.
The implications $(4)\Rightarrow (3)\iff(1)\Rightarrow (2)$ are obvious.
The implication $(2)\Rightarrow(1)$
follows from Lemma~\ref{generate-so}.
\end{proof}
\begin{definition}\label{defRw}
Let $\Omega$ be a circular\footnote{in the sense of
Definition~\ref{def-circular}} subset of $\H$ and let
$f:\Omega\to \H$ be a function. Let $w\in\S$.
Then we define a function $R_wf:\Omega\to \H$ as
\[
(R_wf)(q)=S_w^{-1}\left(f(S_w(q))\right).
\]
\end{definition}
\begin{lemma}\label{lemstemrot}
Let $\Omega_D$ be a slice domain, and let
$f:\Omega_D \to\H $ be a slice function induced by a stem function
$F:D\to\H \otimes\mathbb{C}$.
Then $R_wf$ is induced by $S_{w^{-1}}(F)$ with $S_{w^{-1}}$
acting via the first
factor of the tensor product $\H \otimes\mathbb{C}$.
\end{lemma}
\begin{proof}
We have
\[
f(x+yI)=F_1(x+yi)+IF_2(x+yi)\ \ \ (x,y\in\mathbb{R}, I\in\S,
x+yi\in D)
\]
and
\[
R_wf(q)=w\left( f(w^{-1}qw) \right)w^{-1}.
\]
Now
$w^{-1}qw=x+yw^{-1}Iw$ for $q=x+yI$.
Furthermore $w^{-1}Iw\in\S$ and consequently
\[
f(x+yw^{-1}Iw)=
F_1(x+yi) +w^{-1}IwF_2(x+yi).
\]
Therefore
\begin{align*}
(R_wf)(x+yI)&=
w\left(f\left(x+yw^{-1}Iw\right)\right)w^{-1}\\
&=
w\left(
F_1(x+yi) +w^{-1}IwF_2(x+yi)
\right)w^{-1}\\
&= wF_1(x+yi)w^{-1}+IwF_2(x+yi)w^{-1}
\end{align*}
Therefore (using Lemma~\ref{stem-basic})
$R_wf$ is induced by the stem function
\[
S_{w^{-1}}F=\left(S_{w^{-1}}F_1\right)
\otimes 1 +
\left(S_{w^{-1}}F_2\right)\otimes i
\]
\end{proof}
\begin{corollary}
Let $f$ denote a slice regular function
on a slice domain $\Omega_D$ and let $w\in\S$.
Then $R_wf:\Omega_D\to\H$ is a slice regular function, too.
\end{corollary}
\begin{proof}
As a slice regular function, $f$ is induced by a holomorphic
stem function $F:D\to\H\tensor\C$. Holomorphicity of $F$ implies that
\[
z\mapsto \left( S_{w^{-1}}F\right)(z)=w(F(z))w^{-1}
\]
is likewise holomorphic. Hence $R_wf$ is slice regular.
\end{proof}
\begin{corollary}
Under the assumptions of Lemma \ref{lemstemrot},
\[
g=\int_{\S} R_w(f)d\mu(w)
\] is
induced by the stem function $R(F)$ where $R$ denotes the
real part in the first factor of the tensor product, i.e.,
$R(a\otimes b)=(\Re \, a)\otimes b$,
for $a\in\H$, $b\in{\mathbb C}$.
\end{corollary}
\begin{proof}
By Lemma \ref{lemstemrot}, $g$ is induced by $\int_{\S} S_{w^{-1}}F\,d\mu(w)$.
Aided by Lemma~\ref{stem-basic}, this implies the assertion, because
\[
\int_{\S} S_{w^{-1}}(q)d\mu(w)=\Re \, (q)
\] for every $q\in\H $ (Lemma~\ref{aver-sw}).
\end{proof}
\begin{lemma}
Let $f$ be a slice regular function
given by a convergent power series
$f=\sum_{k=0}^{+ \infty} q^ka_k$.
Then the following holds:
\[
\int_\S R_wfd\mu(w) =\sum_k q^k\Re \, (a_k)=\frac{1}{2}(f+f^c) = \frac{1}{2} Tr(f).
\]
\end{lemma}
We observe that $R_{vw}=R_v\circ R_w$ and $S_{vw}=S_w\circ S_v$
for $v,w\in\H ^*$.
\begin{definition}\label{dirder}
For $v\in\H$ let $\partial_v$ denote the directional derivative in the
direction of $v$, i.e.,
\[
(\partial_vf)(q)=\lim_{t\to 0, t\in{\mathbb R}^*}\frac{f(q+vt)-f(q)}{t}.
\]
\end{definition}
Next we discuss
differential operators $\partial_*$, $\bar\partial_*$ for
differentiable functions on $\H\setminus{\mathbb R}$.
These operators were first introduced
by Ghiloni and Perotti
in \cite{ghilperglobal}
as $\vartheta$ resp.~$\bar\vartheta$. For slice functions, they coincide
with the operators $\partial/\partial x$
and $\partial/\partial x^c$ introduced
in \cite{ghiloniperotti}.
\begin{definition}
Let $\Omega$ be a domain in $\H$
and let $f:\Omega\to\H$ be a $C^1$-function.
Let $I\in\S$ and $q\in({\mathbb C}_I\setminus{\mathbb R})\cap\Omega$.
We define
\begin{align*}
\left(\partial_*f\right)(q) &=
\left(\frac 12\left(\partial_1-I\partial_I\right) f\right)(q)\\
\left(\bar\partial_*f\right)(q) &=
\left(\frac 12\left(\partial_1+I\partial_I\right) f\right)(q).
\end{align*}
(with $\partial_1$, $\partial_I$ being directional derivatives,
cf.~Definition~\ref{dirder}).
\end{definition}
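For a concrete slice regular function these operators can be approximated by finite differences; on a slice ${\mathbb C}_I$ one expects $\bar\partial_* f=0$ and $\partial_* f = 2q$ for $f(q)=q^2$. The following is a numerical sketch, not part of the paper, with quaternions as numpy 4-vectors:

```python
import numpy as np

def qmul(p, q):
    # Hamilton product of quaternions represented as 4-vectors (1, i, j, k)
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                     a1*b2 + b1*a2 + c1*d2 - d1*c2,
                     a1*c2 - b1*d2 + c1*a2 + d1*b2,
                     a1*d2 + b1*c2 - c1*b2 + d1*a2])

f = lambda q: qmul(q, q)                   # f(q) = q^2, slice regular
one = np.array([1.0, 0.0, 0.0, 0.0])
I = np.array([0.0, 1.0, 0.0, 0.0])         # work on the slice C_I with I = i
q = np.array([0.4, 0.9, 0.0, 0.0])         # q = 0.4 + 0.9 I, off the real axis
h = 1e-6

def dirder(v):
    # central finite difference for the directional derivative (partial_v f)(q)
    return (f(q + h * v) - f(q - h * v)) / (2 * h)

d1, dI = dirder(one), dirder(I)
d_star = (d1 - qmul(I, dI)) / 2            # (partial_* f)(q)
dbar_star = (d1 + qmul(I, dI)) / 2         # (bar-partial_* f)(q)
print(np.allclose(d_star, 2 * q, atol=1e-5),
      np.allclose(dbar_star, np.zeros(4), atol=1e-5))
```

Since $f$ is quadratic, the central differences are exact up to rounding, and both checks succeed.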
\begin{remark}
Let $\Omega$ be a domain.
Let $I,J$ be orthogonal imaginary units, $f:\Omega\to\H$,
$g,h:\Omega\to{\mathbb C}_I$ be $C^1$-functions with $f=g+hJ$.
Then $\bar\partial_* f$ vanishes on $\Omega\cap{\mathbb C}_I$ if
and only if the restrictions of $g,h$ are holomorphic
functions from $\Omega\cap{\mathbb C}_I$ to ${\mathbb C}_I$.
\end{remark}
Next we define a Laplacian:
\begin{definition}\label{def-laplace-star}
Let $\Omega$ be a domain in $\H$, let $f$ be a $C^2$-function on $\Omega$
and $q\in\Omega\setminus{\mathbb R}$.
We define
\[
(\Delta_* f)(q)=\left( 4\partial_*\bar\partial_* f\right)(q).
\]
\end{definition}
\begin{remark}
\begin{enumerate}
\item
$ \Delta_* = 4\partial_*\bar\partial_* = 4\bar\partial_*\partial_*$.
\item
For orthogonal imaginary units $I,J\in\S$ the restriction of $\Delta_*f$ to
$\Omega\cap{\mathbb C}_I$ vanishes if and only if
$f|_{{\mathbb C}_I}=g+hJ$ with $g,h:{\mathbb C}_I\to{\mathbb C}_I$ harmonic in the sense
of complex analysis.
\end{enumerate}
\end{remark}
\begin{lemma}
Let $I\in\S$. Then $\Delta_*f$ vanishes along $\mathbb{C}_I$
for every (slice) regular function $f$ and every anti-regular function $f$.
\end{lemma}
\begin{proof}
This is clear, because the restriction of a regular function $f$ to a complex
line ${\mathbb C}_I$ can be written as $f(z)=f_1(z)+f_2(z)J$
where $f_i:{\mathbb C}_I\to{\mathbb C}_I,$ for $i=1,2,$ are
entire functions with respect to the complex structure on ${\mathbb C}_I$
and $J$ orthogonal to $I$, by the splitting Lemma \ref{splitting}.
\end{proof}
Let us now discuss the case where $f$ is a slice function, i.e.,
induced by a stem function $F$.
\begin{proposition}\label{delta-stem}
Let $\Omega_D$ be a circular domain, arising as the circularization
of a symmetric domain $D$ in ${\mathbb C}$.
Let $f:\Omega_D \to\H $ be a slice function
which is induced by a stem function $F:D\to\H \otimes\mathbb{C}$.
Then $\partial_* f$, $\bar\partial_*f$ and $\Delta_*f$ are slice functions
on $\Omega_D\setminus{\mathbb R}$ induced by the stem functions
$\frac {\partial F}{\partial z}$, $\frac{\partial F}{\partial \bar z}$
resp.~$\Delta F$.
\end{proposition}
\begin{proof}
This may be deduced from the definitions of the respective operators
using the representation formula for slice functions.
See \cite{ghilperglobal},~Theorem 2.2.
\end{proof}
In particular,
for slice functions these operators $\partial_*$, $\bar\partial_*$
and $\Delta_*$ are
well-defined everywhere, including at the real points, whereas for
arbitrary $C^1$-functions they are defined only outside ${\mathbb R}$.
\begin{corollary}
A slice function $f$ is annihilated by $\Delta_*$ if and only if its stem
function $F$ is harmonic.
\end{corollary}
\begin{corollary}
Let $f$ be a $C^2$ slice function on a circular domain $\Omega_D$.
If $\Delta_*f\equiv 0$, then $f$ is real-analytic.
\end{corollary}
\begin{corollary}
Let $f:\H\to\H$ be a $C^2$ slice function.
If $f$ is bounded and $\Delta_*f$ vanishes identically,
then $f$ is constant.
\end{corollary}
\begin{proof}
$\Delta_*f\equiv 0$ implies the harmonicity of the stem function $F$.
Now boundedness of $f$ implies boundedness of $F$ which leads to a
contradiction unless $F$ (and therefore also $f$) is constant.
\end{proof}
\begin{theorem}\label{harm-star}
Let $\Omega_D$ be a slice domain.
Let $f:\Omega_D\to\H$ be a $C^2$ slice function.
Assume that $D$ is simply-connected.
\begin{enumerate}
\item
$\Delta_*f$ is vanishing identically if and only if $f$
can be written as a sum of a regular function $g$ and an
anti-regular function $h$.
\item Assume $\Delta_*f\equiv 0$.
Let $D_{\mathbb R}=D\cap{\mathbb R}$.
Then $f$ can be written as a sum of
a slice-preserving regular function $g$ and a slice-preserving
anti-regular function $h$ if and only if
\[
f(x)\in{\mathbb R},\ \forall x\in D_{\mathbb R}\quad\text{ and }(\partial_*f)(x)\in{\mathbb R},
\ \forall x\in D_{\mathbb R},
\]
which in turn holds if and only if
\[
f(x)\in{\mathbb R},\ \forall x\in D_{\mathbb R}\quad\text{ and }(\bar\partial_*f)(x)\in{\mathbb R},
\ \forall x\in D_{\mathbb R},
\]
or if and only if
\[
\exists \,\, p\in D\cap{\mathbb R}:f(p)\in{\mathbb R},
(\bar\partial_*f)(x)\in{\mathbb R}, \ \forall x\in D_{\mathbb R}\quad\text{ and }
(\partial_*f)(x)\in{\mathbb R},
\ \forall x\in D_{\mathbb R}.
\]
\end{enumerate}
\end{theorem}
\begin{proof}
Let $F$ be the stem function inducing $f$.
Then $\Delta_*f$ is a slice function induced by the stem function
$\Delta F$.
Now $F$ is a map from $D$ to the complex vector space $\H\tensor\C$.
Hence $\Delta F$ vanishes iff $F$ is harmonic iff $F=G+H$ for
a holomorphic function $G:D\to\H\tensor\C$ and an anti-holomorphic
function $H:D\to\H\tensor\C$.
We have to verify that $G,H$ may be taken to be stem functions.
To state it more precisely:
We have to show:
{\em If $F:D\to\H\tensor\C$ is a map such that $\overline{F(\bar z)}
=F(z)$ and such that $F=G+H$ for a holomorphic map $G$ and an
anti-holomorphic map $H$, then $G$ and $H$ may be chosen in such a way
that $\overline{G(\bar z)}
=G(z)$ and $\overline{H(\bar z)}
=H(z)$.}
Now $G(z)-\overline{G(\bar z)}$ is holomorphic,
$H(z)-\overline{H(\bar z)}$ is anti-holomorphic, and
\[
\left( G(z)-\overline{G(\bar z)} \right)+
\left( H(z)-\overline{H(\bar z)} \right)=0
\]
because of $F=G+H$ and $\overline{F(\bar z)}
=F(z)$.
It follows that there is a constant $c$ such that
\[
\left( G(z)-\overline{G(\bar z)} \right)=c=
-\left( H(z)-\overline{H(\bar z)} \right).
\]
We observe that $c$ is totally imaginary and that
\[
\overline{G(\bar z)-c/2}=\overline{G(\bar z)} +c/2=
G(z) - c/2
\]
Thus, by replacing $G$ with $G- c/2$
we may turn $G$ into a stem function.
Correspondingly we replace $H$ by $H+c/2$.
Finally we let $g$, resp.~$h$, be the slice functions
induced by the stem functions $G$, resp.~$H$, and observe that
$g$ is regular, because $G$ is holomorphic and $h$ is anti-regular,
because $H$ is anti-holomorphic.
Now we demonstrate $(2)$.
First we observe that
\[
\partial_*f+\bar\partial_*f = \partial_1f
\]
Hence, if at some point two of these three values are real, then so is the third.
Next we claim:
{\em $f(\Omega_D\cap {\mathbb R})\subset{\mathbb R}$ if and only
if $(\partial_1f)$ is real on $\Omega_D\cap{\mathbb R}$ and
$\exists \,\, p\in \Omega_D\cap {\mathbb R}$ such that $f(p)\in{\mathbb R} $.}
The direction ``$\implies$'' is obvious.
To verify the converse, let $I$ denote the connected component of
$\Omega_D\cap{\mathbb R}$ containing $p$. Since $\partial_1f$ is real on $I$
and $f(p)\in{\mathbb R}$, integrating $\partial_1f$ along $I$ yields
$f(I)\subset{\mathbb R}$. This implies that the stem function
$F$ maps $I$ into ${\mathbb R}\otimes{\mathbb C}\subset\H\otimes{\mathbb C}$.
Because $F:D\to\H\tensor\C$ is a holomorphic map, we may now conclude
with the identity principle that $F(D)\subset {\mathbb R}\otimes{\mathbb C}$.
Using $\overline{F(\bar z)}=F(z)$, this implies that $F({\mathbb R}\cap D)\subset {\mathbb R}$
which in turn (via the representation formula) implies $f(\Omega_D\cap{\mathbb R})\subset{\mathbb R}$.
This completes the proof, because a slice function $f$ on a slice
domain $\Omega_D$ is slice-preserving if and only if the stem function
$F$ satisfies $F(D\cap{\mathbb R})\subset{\mathbb R}\otimes_{\mathbb R}\R\subset \H\tensor\C$.
\end{proof}
\begin{remark}
We observe that in \cite{perotti0} a function in the kernel of $\Delta_*$ is called {\em {slice-harmonic}}.
\end{remark}
\begin{definition}
\[
(\Delta'f)(q)=\left(\Delta_*\int_{\S}R_wf\,d\mu(w)\right) (q).
\]
We observe that $(\Delta' f)(q)=\frac{1}{2}\Delta_* (Tr(f))(q)=\frac{1}{2}\Delta_* (f+f^c)(q)$. \\
Analogously we can introduce another second order operator in the following way:
$$
(\Delta'' f)(q)=(\Delta_* N(f))(q)=(\Delta_* (f \cdot f^c)) (q).
$$
While $\Delta '$ and $\Delta_*$
are linear operators, $\Delta''$ is not a linear operator.
\end{definition}
Nevertheless, a product formula holds for $\Delta ''$:
\begin{proposition}\label{product-rule}
Let $f,g$ be slice functions.
Then
\[
\Delta''(f*g)=(f^s)*\Delta'' g + (\Delta'' f) * (g^s)
+4\left( \partial_*(f^s)* \bar\partial_*(g^s) \right)
+4\left( \bar\partial_*(f^s)* \partial_*(g^s) \right)
\]
\end{proposition}
\begin{proof}
First we note that
\[
\Delta''(f*g)=\Delta_*(N(f*g))=\Delta_*((f*g)^s)=
\Delta_*(f*g*g^c *f^c)
\]
Next we recall that (with respect to the slice product $*$) a slice
preserving function commutes with every other slice function.
Since $g*g^c$ is always slice preserving, we obtain:
\begin{align*}
&\Delta_*(f*g*g^c *f^c)=\Delta_*(f*f^{c}*g*g^c)\\
=&
\Delta_*(f^s* g^s)=4\partial_*\bar\partial_*(f^s\cdot g^s)\\
=& 4\partial_* \left( (\bar\partial_*f^s)\cdot g^s
+ f^s\cdot(\bar\partial_*g^s) \right)\\
=& 4(\partial_* \bar\partial_*f^s)\cdot g^s
+ 4(\bar\partial_*f^s)\cdot (\partial_* g^s)
+ 4(\partial_*f^s)\cdot(\bar\partial_*g^s)
+ 4f^s\cdot(\partial_*\bar\partial_*g^s) \\
=& f^s*(\Delta'' g) + (\Delta'' f) * g^s
+ 4(\bar\partial_*f^s)\cdot (\partial_* g^s)
+4(\partial_*f^s)\cdot(\bar\partial_*g^s)
\end{align*}
\end{proof}
\begin{remark}
In \cite{CGCS} the following global first order differential operator
was introduced:
$$
G(x)=(x_1^2 +x_2^2 + x_3^2) \frac{\partial}{\partial x_0} +(x_1 i +x_2 j + x_3 k) \cdot \sum_{\ell=1}^3 x_\ell \frac{\partial}{\partial x_\ell}
$$
where $x=x_0 + x_1 i + x_2 j + x_3 k.$
Direct calculations verify readily that $G=y^2\bar\partial_*$ where
\[
y=|\Im(x)|=
\sqrt{\sum_{\ell=1}^3 x_\ell^2}.
\]
Whereas the operator $\Delta_*$ is defined everywhere only if applied
to slice functions, $G$ is everywhere defined for any $C^1$-function.
Still, we believe that the additional factor $y^2$ (which guarantees the
applicability of the operator even to non-slice functions) is
somewhat unnatural.
In particular, if we try to construct a second order differential operator,
we see that
\[
\bar G G=y^4\Delta_* -2Iy^3\bar\partial_*.
\]
Thus $\bar G G$, even applied to slice functions on a complex slice
${\mathbb C}_I$, is not merely a multiple of the ordinary complex
Laplacian.
Moreover, $\bar GG$ annihilates regular functions, but not
anti-regular functions. \\
We observe that in \cite{ghilperglobal} a global operator related to $G$ was studied.
\end{remark}
\begin{remark}
It may happen that a function $f:\H\to\H$ satisfies both
$\bar\partial_*Tr(f)=0$ and $\bar\partial_* N(f)=0$, but
nevertheless is not
regular. For example, let $I,J\in\S$ be orthogonal
(i.e.~$IJ=-JI$), and consider the function
\[
f:q\mapsto I\cos(\Re q)+J\sin(\Re q).
\]
This is a slice function (induced by the stem function
$z\mapsto I\otimes\cos(\Re z) + J\otimes \sin(\Re z)$).
It is not regular because it is not open,
although both $Tr(f)=f+f^c=0$ and $N(f)=ff^c=1$ are
regular (in fact constant) functions.
\end{remark}
\begin{definition}
A function $f:\H\to\H$ is {\em rotationally equivariant}
if $R_I(f)=f$ for all $I\in\S$.%
\footnote{see definition~\ref{defRw} for the notion $R_I(f)$.}
\end{definition}
\begin{lemma}\label{LemRotEq}
For a function $f:\H\to\H$, the following conditions are equivalent:
\begin{enumerate}
\item
$f$ is rotationally equivariant.
\item
$R_I(f)=f$ for all $I\in\S$.
\item
$S_I(f(q))=f(S_I(q))$ for
all $I\in\S$ and $q\in\H$.
\item
If $(g,q)\mapsto g\cdot q$ denotes the ${\mathbb R}$-linear action of
$SO(3,{\mathbb R})$ on $\H$ which is trivial on ${\mathbb R}$ and which acts by the natural
orthogonal transformations on $\left<I,J,K\right>_{\mathbb R}\cong{\mathbb R}^3$,
then
$f:\H\to\H$ is equivariant for this action, i.e., $g\cdot f(q)=f(g\cdot q)$
for all $g\in SO(3,{\mathbb R})$, $q\in \H$.
\item
$f$ is induced by a stem function $F:{\mathbb C}\to \H\otimes{\mathbb C}$ with
$F({\mathbb C})\subset {\mathbb R}\otimes{\mathbb C}$,
i.e., both $F_1$ and $F_2$ are real-valued for $F=F_1+ F_2\imath$.
\item
$f$ is induced by a stem function $F$ and $f$ is slice-preserving,
i.e., $f({\mathbb C}_I)\subset{\mathbb C}_I$, $\forall I\in\S$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equivalence $(5)\iff(6)$ is well-known.
$(1)\iff(2)\iff(3)$ follow directly from the respective definitions.
Lemma~\ref{generate-so} implies $(3)\iff(4)$.
$(5)\Rightarrow(2)$ is easy.
Finally, assume $(3)$. Fix $I\in\S$. Then $(3)$ implies
$S_I(f(q))=f(S_I(q))$. For $q\in {\mathbb C}_I$ we have $S_I(q)=q$ and therefore
$S_I(f(q))=f(q)$, which implies $f(q)\in{\mathbb C}_I$. Thus $f({\mathbb C}_I)\subset {\mathbb C}_I$.
We define functions $g,h:{\mathbb C}\to{\mathbb R}$ such that
\[
f(x+yI)=g(x+yi)+h(x+yi)I\ \ (x,y\in{\mathbb R})
\]
Now, for every $J\in\S$, there is an orthogonal transformation $\phi$
fixing ${\mathbb R}$ with $\phi(I)=J$. Using $(4)$, we have
\begin{align*}
f(x+yJ) &=f(\phi(x+yI))=\phi( f(x+yI))\\
&=\phi\left(
g(x+yi)+h(x+yi)I\right)= g(x+yi)+h(x+yi)J
\end{align*}
Thus $f:\H\to\H$ is induced by the stem function $F$ given by
$F(x+yi)=g(x+yi) \otimes 1 + h(x+yi)\otimes i$.
(The fact $\overline{F(z)}=F(\bar z)$
follows from Lemma~\ref{stem-basic}.)
This completes the proof of $(3)\Rightarrow(5)$.
\end{proof}
\begin{lemma}\label{stem-real-fct}
Let $h:\H\to{\mathbb R}$.
Then the following conditions are equivalent:
\begin{enumerate}
\item
$h$ is induced by a stem function.
\item
$h$ obeys the representation formula.
\item
$h$ is rotationally equivariant.
\item
$h$ is rotationally invariant.
\end{enumerate}
Moreover, in this case the stem function $H$
can be defined as
\[
H(x+yi)=h(x+yI)\otimes 1
\]
for all $x,y\in{\mathbb R}$, $I\in\S$.
\end{lemma}
\begin{proof}
Assume that $h$ is induced by a stem function $H$.
The formula
\[
h(x+yI)=F_1(x+yi) + I F_2(x+yi)\ \forall x,y\in{\mathbb R}, I\in\S
\]
combined with $h(q)\in{\mathbb R},\ \forall q\in\H$, implies that $F_2$ vanishes
and that $F_1({\mathbb C})\subset{\mathbb R}$.
Then $h$ is rotationally equivariant (by Lemma~\ref{LemRotEq}, $(5)\Rightarrow(1)$)
and rotationally invariant (by Lemma~\ref{LemRotInv}, $(4)\Rightarrow(1)$).
Conversely, Lemma~\ref{LemRotEq}, $(1)\Rightarrow(5)$,
resp.~Lemma~\ref{LemRotInv}, $(1)\Rightarrow(4)$, implies that $h$ admits
a stem function if $h$ is rotationally equivariant, resp.~rotationally
invariant.
\end{proof}
\begin{proposition}\label{real-part}
Let $h:\H \to\mathbb{R}$ be a
slice
function with $\Delta'h=0$.
Then
there exists a slice-preserving
regular function $f$ such that $h=\Re \, (f)$.
\end{proposition}
\begin{proof}
Since $h$ is real-valued and a slice function, it is also
rotationally equivariant and rotationally invariant
(due to Lemma~\ref{stem-real-fct}).
Being rotationally equivariant implies $\Delta'h=\Delta_*h$.
Being rotationally invariant implies
\[
h(x+yI)=h(x-yI) \,\, \forall x,y\in{\mathbb R},\ \forall
I\in\S
\]
which in turn implies that $\partial_I h$ vanishes on the real line.
It follows that $(\partial_*h)(x)=\frac 12 (\partial_1 h)(x)\in{\mathbb R}$
for all $x\in{\mathbb R}$.
Hence all the assumptions of Theorem~\ref{harm-star} (2)
are fulfilled, and we may deduce that $h=g+k$ where
$g$ is regular, $k$ is anti-regular and both $g$ and $k$
are slice-preserving.
The condition $h(q)\in{\mathbb R}$ is equivalent to $h(q)-\overline{h(q)}=0$.
Therefore
\[
g(q)-\overline{k(q)}=\overline{g(q)}-k(q)\,\, \forall q\in\H.
\]
The left hand side of this equation is a regular function, while the
right hand side is anti-regular. Thus both must be constant.
That the right hand side is constant, implies:
\[
k(q)= \overline{g(q)}-\overline{g(0)}+k(0)\,\, \forall q\in\H.
\]
Hence
\[
h(q)=g(q)+k( q)=g(q)+\overline{g(q)}-\overline{g(0)}+k(0)
=2 \Re\left(g(q)\right)-\overline{g(0)}+k(0)
\ \ \forall q\in\H.
\]
Finally, $h(\H)\subset{\mathbb R}$ implies $-\overline{g(0)}+k(0)\in{\mathbb R}$.
This proves the assertion for
\[
f(q)=2 g(q)-\overline{g(0)}+k(0).
\]
\end{proof}
\begin{proposition}
Let $u$ be a real-valued,
rotationally invariant (in the sense of Definition \ref{def-rot})
$C^2$-function.
Let $f$ be regular.
Then
\[
\Delta'(u\circ f)=\int_{\S}
\left( (\Delta_* u)\circ (R_wf)\right)
|(\partial_*f)(S_wq)|^2\, d\mu(w).
\]
\end{proposition}
\begin{proof}
Note that a real valued rotationally invariant function $u:\H\to{\mathbb R}$ is
automatically a slice function. (Lemma \ref{stem-real-fct}.)
Hence $\Delta'u$ is well defined.
The claim of the proposition
is a consequence of the classical complex formula \\
$\Delta (u \circ f)= \left( (\Delta u)\circ f\right)
|f'(z)|^2$ for holomorphic $f$
(applied on ${\mathbb C}_I$) and of the definition of $\Delta '$. For more details, see the proof of the next proposition.
\end{proof}
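The planar identity invoked in the proof above can be sanity-checked by finite differences. The following sketch is purely illustrative (all names are ad hoc, and it is not part of the argument): it tests $\Delta(u\circ f)=\left((\Delta u)\circ f\right)|f'|^2$ for $u(z)=|z|^2$ (so $\Delta u=4$) and the holomorphic map $f(z)=z^2$.

```python
# Finite-difference check of the planar identity
#   Delta(u o f) = ((Delta u) o f) * |f'|^2
# for holomorphic f; here u(z) = |z|^2 (Delta u = 4) and f(z) = z^2,
# so u(f(z)) = |z|^4 and both sides equal 16|z|^2.

def u(z):
    return abs(z) ** 2

def f(z):
    return z * z

def laplacian(g, z, h=1e-4):
    # standard 5-point stencil for the planar Laplacian of g at z
    return (g(z + h) + g(z - h) + g(z + 1j * h) + g(z - 1j * h) - 4 * g(z)) / h**2

z0 = 0.7 + 0.4j
lhs = laplacian(lambda z: u(f(z)), z0)
rhs = 4 * abs(2 * z0) ** 2          # (Delta u)(f(z0)) * |f'(z0)|^2
assert abs(lhs - rhs) < 1e-3
```

The stencil error for this quartic integrand is of order $h^2$, so the tolerance above is generous.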
\begin{proposition}\label{armcomp}
Let $u \colon \H \to\mathbb{R}$ be a rotationally invariant
$C^2$-function with $\Delta'u=0$.
Let $f \colon \H \to \H $ be a slice-preserving regular
function. Then $u \circ f$ satisfies $\Delta' (u\circ f)=0.$
\end{proposition}
\begin{proof}
First observe that $u$ is a slice function,
because it is real valued and rotationally
invariant (Lemma \ref{stem-real-fct}).
Next we prove that, under the hypotheses of the proposition,
\begin{equation}\label{invrot}
R_w (u \circ f)= u \circ (R_w (f)) \,\,\, \forall \,\, w \in \S .
\end{equation}
Indeed $\forall \,\, w \in \S $:
\begin{align*}
R_w ( u\circ f) &= w(u(f(w^{-1}qw)))w^{-1} = w(u(R_w (f)))w^{-1} \\
&= S_w ^{-1} (u \circ R_w (f)) = u ( R_w (f)).
\end{align*}
Then, by definition, $\Delta' (u \circ f) =\Delta_* \int\limits_{w \in \S } R_w (u \circ f) \, d \mu(w).$ \\
By \eqref{invrot},
$$\Delta_* \int\limits_{\S } R_w(u \circ f) \, d \mu(w)=\Delta_* \int\limits_{\S } u \circ R_w(f) \, d \mu(w)= \int\limits_{ \S } \Delta_* (u \circ R_w f) \, d \mu(w) =$$
$$=\int\limits_{\S }
\left((\Delta_* u)\circ (R_w(f))\right){|(R_w(f))'|^2} \, d \mu(w)=\int\limits_{\S }
\left((\Delta_* u)\circ (R_w(f))\right){|f' (S_w(q))|^2} \, d \mu(w)=0$$
where $g' : =\partial_* g$.
\end{proof}
\begin{proposition}\label{isol-zero}
Let $u \colon \H \to \mathbb{R}$ be
a $C^2$-function such that $\Delta' u\equiv 0 $ on $\H\setminus{\mathbb R}$.
Then
$u$ admits no real isolated zero.
\end{proposition}
\begin{remark}
At a real point $\Delta'u$ is defined only if $u$ is a slice function.
But in the proposition $\Delta'u$ is only considered for points
outside ${\mathbb R}$. Hence one does not need to require $u$ to be
a slice function.
\end{remark}
\begin{proof}
Assume the contrary, i.e., assume that
$u$ has an isolated zero at a point $a\in{\mathbb R}$.
Then there exists an open neighborhood
$W$ of $a$ such that $u$ has no zero on $W\setminus\{a\}$.
For dimension reasons, $W\setminus\{a\}$ is connected. Thus $u$ is either
everywhere positive or everywhere negative on $W\setminus\{a\}$.
Without loss of generality, assume that $u>0$ on $W\setminus\{a\}$.
Define
\[
\tilde u(q)=\int_\S \left(R_wu\right)(q) \, d\mu(w).
\]
For $q$ sufficiently close to $a$ (but $q\ne a$) we have
$\S_q\subset W\setminus\{a\}$. For such $q$ we
have
\[
(R_wu)(q)>0\ \forall w\in\S
\]
and therefore $\tilde u(q)>0$.
On the other hand, $\tilde u(a)=u(a)=0$, because $a\in{\mathbb R}$.
Thus $\tilde u$ has a strict local minimum at $a$.
By construction $\Delta'u=\Delta_*\tilde u$
on $\H\setminus{\mathbb R}$.
Fix $I\in\S$.
By definition, on ${\mathbb C}_I$ the operator $\Delta_*$ agrees
with the ordinary complex Laplacian, if we identify ${\mathbb C}_I\simeq{\mathbb C}$ as usual.
Thus $\tilde u$ restricts to a $C^2$-function on ${\mathbb C}_I$ which
is harmonic on ${\mathbb C}_I\setminus{\mathbb R}$.
By continuity of $\Delta\tilde u$, the function $\tilde u$ is
harmonic on the whole of ${\mathbb C}_I$.
Thus we obtain a harmonic function on ${\mathbb C}_I\simeq{\mathbb C}$ with a strict local
minimum at $a$.
This is impossible.
\end{proof}
\section{A kind of Poisson formula}
\subsection{$\H$-valued measures}
\begin{theorem}\label{H-measure}
Let $p\in\H $ and let $S\subset\H $ be a $3$-dimensional
sphere such that $p$ is in its interior.
Let $\Omega$ denote a circular domain containing both $S$ and its
interior.
Then there exists an $\H $-valued measure $\mu$ on $S$ which is absolutely
continuous with respect to the euclidean measure such that
\[
f(p)=\int_{S} f(q)d\mu(q)
\]
for every regular function $f$ defined on $\Omega$.
\end{theorem}
\begin{proof}
We first discuss the special case where $p\in\mathbb{R}$. In this case for
each $I\in\S$ the restriction of $f$ to $\mathbb{C}_I$ is holomorphic,
and $\mathbb{C}_I$ and $S$ intersect in a $1$-dimensional sphere which
contains $p$ in its interior. We thus may construct $d\mu$ first
taking the measure on $S\cap\mathbb{C}_I$ given by the classical Poisson
formula, and then averaging over $I\in\S$ with respect to any
probability measure on $\S.$
Now assume $p\not\in\mathbb{R}$. Fix $I\in\S$ such that $p\in\mathbb{C}_I$.
Let $c=s+vJ$ ($s,v\in\mathbb{R}, v>0, J\in\S$)
denote the center of the sphere $S$ and $\rho$ its radius.
Define $\bar G=\{t+yi\in\mathbb{C}: t,y\in\mathbb{R}, y\ge 0, \exists \,\, H\in\S: t+yH\in S\}$.
Then
\[
\bar G = \{ (t+yi): \exists \,\, H\in\S : (t-s)^2 + |yH-vJ|^2 = \rho^2 \}.
\]
We observe that $\S$ is connected and
that $H\mapsto |yH-vJ|^2$ (for fixed $y,v>0,J\in\S$) defines
a continuous map which evidently takes its maximum in $H=-J$
(with $(y+v)^2$ as its value) and
its minimum in $H=J$ (with value $|y-v|^2$).
From this we may deduce:
\[
\bar G = \{ (t+yi): |t-s|\le\rho,\quad
|y-v| \le \sqrt{ \rho^2 -|t-s|^2} \le |y+v|
\}.
\]
Let us now fix $t\in\mathbb{R}$ and investigate for which $y>0$ we
have $t+iy\in\bar G$. We define $K=\sqrt{ \rho^2 -|t-s|^2}$
and
obtain the following condition:
\begin{align*}
&|y-v| \le \sqrt{ \rho^2 -|t-s|^2}=K \le |y+v|\\
\iff & |y-v| \le K \le |y+v|\\
\iff & v-K\le y \le v+K \text{ and }-v+K\le y\\
\iff & |v-K| \le y \le v+K.
\end{align*}
It follows that the interior $G$ of $\bar G$ is simply-connected and
therefore biholomorphic to the unit disc.
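The interval computation above can be checked numerically. The sketch below is only illustrative (ad hoc names; exact rational arithmetic via `fractions` avoids spurious boundary rounding): it compares the two equivalent conditions on a grid of positive values.

```python
import itertools
from fractions import Fraction

def cond_original(y, v, K):
    # |y - v| <= K <= |y + v|; for y, v > 0 one has |y + v| = y + v
    return abs(y - v) <= K <= y + v

def cond_interval(y, v, K):
    # |v - K| <= y <= v + K
    return abs(v - K) <= y <= v + K

# exact rationals, so borderline cases are decided correctly
grid = [Fraction(n, 7) for n in range(1, 40)]
assert all(cond_original(y, v, K) == cond_interval(y, v, K)
           for y, v, K in itertools.product(grid, repeat=3))
```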
Let $\tilde p=x+yi\in{\mathbb C}$ be such that $x,y\in{\mathbb R}$, $y\ge 0$ and
$p=x+yH$ for some $H\in\S$.
Then $\tilde p$ lies in $G$.
We fix such a biholomorphic map $\psi:G\to \Delta$
and recall
that it extends
continuously to the respective boundaries.
We may and do require $\psi(\tilde p)=0$.
By the classical mean value theorem
\[
F(0)=\int_0^1\int_0^{2\pi} F(re^{i\theta}) \frac {d\theta}{2\pi} \, d\sigma(r)
\]
for every holomorphic function $F:\mathbb{C}\to\mathbb{C}$ and
every probability measure $\sigma$ on $[0,1]$.
Pulling-back with $\psi$ yields a probability measure $d\xi$ on $G$
such that
\begin{equation}\label{eq-6-1}
F(\tilde p)=\int_G F(w)d\xi(w)
\end{equation}
for every holomorphic function $F$. The measure $d\xi$ constructed
in this way is
absolutely continuous, if the measure $\sigma$ on $[0,1]$ used in the
construction is taken to be absolutely continuous.
For each point $t+yi\in G$ we have a $2$-sphere $t+y\S$ in $\H $.
Let $V$ denote the ``imaginary subspace'' of $\H$,
i.e., the ${\mathbb R}$-vector
subspace of $\H$ generated by $\S$.
The intersection of the $3$-sphere $S$ with
the real affine subspace $t+V$ is a sphere
(of dimension $\le 2$). Thus $S\cap(t+y\S)$ is an intersection
of two spheres in a three-dimensional real affine space and
therefore again a sphere.
We let $\eta$ denote the involution defined by sending each
element of $\Sigma_{t,y}=S\cap (t+y\S)$ to its antipodal element.
Since $\Sigma_{t,y}\subset (t+y\S)$, for every
$q\in\Sigma_{t,y}$ there are $J,H\in\S$ such that
$q=t+yJ$ and $\eta(q)=t+yH$.
By the generalized representation formula
(Proposition~\ref{gen-repr})
we have
\[
f(t+yI)= M_1(J,H) f(q) + M_2(J,H) f(\eta(q)) \quad \forall q\in\Sigma_{t,y}
\]
for every regular function $f$.
With $m_1(q):=M_1(J,H)$ and $m_2(q):=M_2(J,H)$ we
obtain continuous functions
$m_i:\Sigma_{t,y} \to\H ,$ for $i=1,2,$ such that
\[
f(t+yI)= m_1(q) f(q) + m_2(q) f(\eta(q)) \quad \forall q\in\Sigma_{t,y}
\]
for every regular function $f$.
In particular
\[
f(t+yI)= \int_{q\in \Sigma_{t,y}} m_1(q) f(q) + m_2(q) f(\eta(q)) d\alpha(q)
\]
for every probability measure $\alpha$ on $\Sigma_{t,y}$.
Hence we may choose an absolutely continuous probability measure
$\beta_{t,y}$
on $\Sigma_{t,y}$
such that
\begin{equation}\label{eq-6-2}
f(t+yI)= \int_{\Sigma_{t,y}} f(q) \, d\beta_{t,y}(q)
\end{equation}
for every regular function $f$.
We recall that regular functions restrict to
holomorphic ones on $\mathbb{C}_I$.
We may therefore combine the above constructions
(see equations~\eqref{eq-6-1},\eqref{eq-6-2}) to obtain
\[
f(p)=\int_{t+yi\in G} \int_{q\in \Sigma_{t,y}}
f(q)
\, d\beta_{t,y}(q) \, d\xi(t+yi)
\]
(with $\Sigma_{t,y}=S\cap(t+y\S)$).
\end{proof}
\subsection{Poisson's formula}
\begin{proposition}[Poisson's Formula]
Let $\mu$ denote a probability measure on $\S$.
Let $u \colon \overline{\mathbb{B}_R} \to \mathbb{R}$ be
a rotationally invariant $C^2$- function.
Assume that $\Delta'u\equiv 0$.
\footnote{$\Delta'u$ is defined on ${\mathbb R}$ for slice functions only.
This is no problem, because every rotationally invariant function
is a slice function.}
Let $a \in \mathbb{R}$.
Then the following formula holds:
$$
u(a)=\frac{1}{2\pi} \int\limits_{\S }
\int\limits_{0}^{2\pi}
\frac{R^2-a^2}{|R \cdot e^{I\theta}-a|^2}u(R \cdot e^{I\theta})d\theta d\mu(I)
$$
\end{proposition}
\begin{proof}
Due to Lemma~\ref{stem-real-fct} the function $u$ is a {\em slice function}.
Thus we may conclude from Proposition~\ref{real-part} that $u$ is the
real part of a slice-preserving regular function $f$. Therefore
for each $I\in\S$, the restriction of $u$ to ${\mathbb C}_I$ is the real part
of a holomorphic function from ${\mathbb C}_I$ to ${\mathbb C}_I$ and the above formula
follows from the complex Poisson formula.
\end{proof}
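The reduction to the classical Poisson formula on each slice can be illustrated numerically. The sketch below (ad hoc names, not part of the proof) verifies the one-slice identity for the harmonic function $u(z)=\Re(z^2)$ at a real point $a$ with $|a|<R$.

```python
import math

# One-slice numerical check of the Poisson formula
#   u(a) = (1/2pi) \int_0^{2pi} (R^2 - a^2)/|R e^{it} - a|^2 u(R e^{it}) dt
# for the harmonic function u(z) = Re(z^2) and a real point a, |a| < R.
R, a = 2.0, 0.5

def u(z):
    return (z * z).real   # Re(z^2) is harmonic in the plane

N = 100_000
total = 0.0
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N      # midpoint rule on [0, 2pi]
    z = R * complex(math.cos(t), math.sin(t))
    total += (R**2 - a**2) / abs(z - a) ** 2 * u(z)
approx = total / N
assert abs(approx - u(a)) < 1e-6         # u(a) = a^2 = 0.25
```

The midpoint rule converges spectrally for this smooth periodic integrand, so the tolerance is comfortable.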
\begin{remark}
As in Proposition \ref{GMVF}, the above Poisson formula can be generalized to any $a \in \mathbb{H}$, with an integration on the circularization of $\partial \Delta(a,r) \cup \partial\Delta(\overline{a},r)$ instead of an integration on $\partial \mathbb{B}_R$.
\end{remark}
\section{A Jensen's Formula}
The goal of this section is to prove a quaternionic version of Jensen's formula.
For this purpose we need Blaschke factors.
\subsection{Quaternionic $\rho$-Blaschke factors}
In this subsection we are going to reproduce some results proved in \cite{irenebla,shur} for a modification of quaternionic Blaschke factors.
\begin{definition}
Given $\rho>0$ and $a\in\H $ with
$|a|<\rho$, we define the $\rho$\textit{-Blaschke factor} at $a$
as the following semi-regular function on $\mathbb{H}$:
\begin{equation*}
B_{a,\rho}:\H \rightarrow\hat{\H },\quad
B_{a,\rho}(q):=(\rho^{2}-q\bar a)*(\rho(q-a))^{-*}.
\end{equation*}
\end{definition}
We observe that
\begin{align*}
B_{a,\rho}
&= (\rho^2-q\bar a)*(q-\bar a)
\left(\rho(q^2-q(a+\bar a)+|a|^2)\right)^{-*}\\
&= (q^2(-\bar a) +q(\rho^2+\bar a ^2) -\rho^2\bar a)\left(\rho(q^2-q(a+\bar a)+|a|^2)\right)^{-1}
\end{align*}
(using that $g(q)^{-*}=(g(q))^{-1}$ for any slice-preserving
function $g$, hence in particular for $g(q)=\rho(q^2-q(a+\bar a)+|a|^2)$).
In particular,
\[
|B_{a,\rho}(0)|=\left| \frac{\rho}{a}\right|, \,\, \textrm{if} \,\, a \neq 0
\]
and
\[
|B_{a,\rho}^{-*}(0)|=\left| \frac{a}{\rho}\right|.
\]
\begin{remark}\label{zerobla}
We observe that
$(B_{a,\rho})^{-*}$ has a zero of multiplicity one at $a$
and no other zeros or poles in ${\mathbb B}_\rho$.
\end{remark}
The following is a consequence of Theorem 5.5 of \cite{irenebla}.
\begin{theorem}\label{borderbla}
Given $\rho>0$ and $a\in\H $ with $|a|<\rho$, the
$\rho$-Blaschke factor $B_{a,\rho}$ has the following properties:
\begin{itemize}
\item it satisfies $B_{a,\rho}(\H \setminus\mathbb{B}_{\rho})\subset {\mathbb B}_1$ and $B_{a,\rho}(\mathbb{B}_{\rho})\subset \H \setminus{\mathbb B}_1$;
\item it sends the boundary $\partial\mathbb{B}_{\rho}$
to the boundary $\partial {\mathbb B}_1$.
\end{itemize}
\end{theorem}
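Both mapping properties can be probed numerically with plain quaternion arithmetic, using the closed form for $B_{a,\rho}$ with slice-preserving denominator computed earlier (so the pointwise quotient below has the same modulus as the regular quotient). All helper names are ad hoc, and this is an illustrative sketch, not part of the proof.

```python
import math, random

# Quaternions as 4-tuples (t, x, y, z) = t + x i + y j + z k.
def qmul(p, q):
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(q): return (q[0], -q[1], -q[2], -q[3])
def qabs(q):  return math.sqrt(sum(c * c for c in q))
def qinv(q):
    n2 = sum(c * c for c in q)
    return tuple(c / n2 for c in qconj(q))
def qadd(*qs): return tuple(sum(cs) for cs in zip(*qs))
def qscale(s, q): return tuple(s * c for c in q)

def blaschke(q, a, rho):
    # Pointwise value of B_{a,rho} via the closed form
    #   B(q) = (q^2(-abar) + q(rho^2 + abar^2) - rho^2 abar)
    #          * (rho (q^2 - 2 Re(a) q + |a|^2))^{-1},
    # whose denominator has real coefficients (slice-preserving).
    abar = qconj(a)
    q2 = qmul(q, q)
    num = qadd(qmul(q2, qscale(-1.0, abar)),
               qmul(q, qadd((rho**2, 0.0, 0.0, 0.0), qmul(abar, abar))),
               qscale(-rho**2, abar))
    den = qscale(rho, qadd(q2, qscale(-2 * a[0], q),
                           (qabs(a)**2, 0.0, 0.0, 0.0)))
    return qmul(num, qinv(den))

random.seed(0)
a, rho = (0.3, 0.5, -0.2, 0.1), 2.0
for _ in range(100):
    v = [random.gauss(0, 1) for _ in range(4)]
    q = qscale(rho / math.sqrt(sum(c * c for c in v)), tuple(v))
    assert abs(qabs(blaschke(q, a, rho)) - 1.0) < 1e-9   # boundary -> boundary
# |B(0)| = rho/|a| > 1: the ball is sent outside the closed unit ball
assert abs(qabs(blaschke((0.0, 0.0, 0.0, 0.0), a, rho)) - rho / qabs(a)) < 1e-9
```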
\subsection{Jensen's formula}
First we prove a variant of Jensen's formula for the special
case where there are neither zeros nor poles.
\begin{proposition}\label{jensen-ni-ni}
Let $\rho>0$ and let $f$ be a regular function defined in a
neighbourhood of $\overline{\mathbb{B}_{\rho}}$. Assume that $f$ has no
zeros in $\overline{\mathbb{B}_{\rho}}$.
Let $\mu$ be a probability measure on $\S$.
Then
\[
\log|f(0)| \le
\displaystyle\frac{1}{2\pi}\displaystyle \int_0^{2\pi} \int_{\S }\log|f(\rho\cos\theta +\rho\sin\theta I)|d\mu(I)d\theta
\]
with equality if $f$ is slice-preserving.
\end{proposition}
\begin{proof}
Fix an imaginary unit $I$.
Choose another imaginary unit $J$ such that
$IJ=-JI$
(i.e., $I$ and $J$ are supposed to be orthogonal).
Thus, using the ``Splitting Lemma'' \ref{splitting},
there are two holomorphic functions $g,h$ with values in ${\mathbb C}_I$
defined on a neighborhood
of $\bar\Delta_\rho=\{z\in{\mathbb C}_I:|z|\le\rho\}$ such that
\[
f(q)=g(q)+h(q)J,\ \ \forall q\in\Delta_\rho={\mathbb B}_\rho\cap{\mathbb C}_I.
\]
Then $|f(q)|^2=|g(q)|^2+|h(q)|^2$.
Now, $\log\left( |g|^2+|h|^2\right)$ is subharmonic
for any two holomorphic functions $g,h:{\mathbb C}\to{\mathbb C}$. Thus we have
subharmonicity of $\log|f|^2$ and consequently
\[
\log|f(0)|\le \frac{1}{2\pi}\int_0^{2\pi}
\log|f(\rho e^{It})|\,dt.
\]
Finally, by integration over the sphere of imaginary units we obtain
the assertion.
\end{proof}
In order to deal with the general case (where the function $f$ is
allowed to have zeros or poles) we need some preparations.
\begin{lemma}
Let $f,g$ be regular functions on an open neighborhood of
$\partial {\mathbb B}_\rho=\{q\in\H:|q|=\rho\}$.
Assume that $|g(q)|=1$ for all $q\in\partial {\mathbb B}_\rho$.
Then
$|f(q)|=|(f*g)(q)|$ and $|g^{-*}(q)|=1$ for all
$q\in\partial {\mathbb B}_\rho$.
\end{lemma}
\begin{proof}
The formula
\[
|p^{-1}qp|= |q|,\ \ \forall p\in\H^*,q\in\H
\]
implies that
$ f(q)^{-1}qf(q)\in\partial{\mathbb B}_\rho$ whenever $q\in\partial{\mathbb B}_\rho$.
Combined with
\[
(f*g)(q)=f(q)g\left( f(q)^{-1}qf(q) \right)\quad
\text{for $q$ with $f(q)\ne 0$}
\]
and $|g(q)|=1\ \forall q\in\partial{\mathbb B}_\rho$ we obtain
\[
|(f*g)(q)|=|f(q)|\ \ \forall q\in\partial{\mathbb B}_\rho.
\]
If we apply this equation to $f=g^{-*}$, we obtain
\[
1=|(g^{-*}*g)(q)|=|(g^{-*})(q)|\ \ \forall q\in\partial{\mathbb B}_\rho.
\]
\end{proof}
\begin{proposition}\label{factorization}
Let $f$ be a semi-regular function on a neighborhood of
$\bar {\mathbb B}_\rho$, with neither zeros nor poles on $\partial {\mathbb B}_{\rho}$.
Then there exist ``$\rho$-Blaschke factors'' $B_1,\ldots,B_r$
and a regular function without zeros $f_0$
such
that:
\[
f=f_0*B_1*\ldots *B_r.
\]
\end{proposition}
Here a function $B$ is called a $\rho$-Blaschke factor if $B=B_{a,\rho}$
or $B=B_{a,\rho}^{-*}$ for some element $a\in {\mathbb B}_\rho$.
\begin{proof}
Let $g,h$ be regular functions such that $f=g^{-*}*h$.
First we claim that there exist $\rho$-Blaschke factors $B_1,\ldots, B_s$
and a regular function $\tilde h$ without zeros
such that $f=g^{-*}*\tilde h*B_1*\ldots* B_s$.
We proceed recursively. If $h$ admits a zero
in a point $a\in{\mathbb B}_\rho$, then there exists
an element $b\in\S_a$ such that $h=h_0*(q-b)$. Recall
that
\[
B_{b,\rho}=(\rho^2-q\bar b)*(\rho(q-b))^{-*}
= (\rho(q-b))^{-*} *(\rho^2-q\bar b)
\]
and therefore
\[
(q-b)=\frac 1{\rho}\left( \rho^2-q\bar b\right)
* B_{b,\rho}^{-*}.
\]
Thus
$f=g^{-*}*h_1*B_{b,\rho}^{-*}$
with
\[
h_1=\frac 1{\rho}\, h_0 * \left( \rho^2-q\bar b\right)
\]
being regular.
Repeating this procedure recursively, we obtain a regular function
$\tilde h$ without zeros in $\bar{\mathbb B}_\rho$
and $\rho$-Blaschke factors $B_1,\ldots, B_s$
such that
\[
f=g^{-*}*\tilde h*B_1*\ldots* B_s.
\]
Define
\[
f_1=g^c*(\tilde h^{-*})^c.
\]
Then
$f_1$ is regular and
\[
f=(f_1^{-*})^c*B_1*\ldots* B_s.
\]
Repeating the above process, we obtain a regular function $\phi$
without zeros and $\rho$-Blaschke factors $B'_1,\ldots B'_t$
such that
\[
f_1=\phi* (B'_1*\ldots * B'_t).
\]
Consequently
\begin{align*}
f &=\left(\left( \phi* (B'_1*\ldots * B'_t) \right)^{-*}\right)^c
*B_1*\ldots* B_s \\
&
=(\phi^{-*})^c * \left( ((B'_1)^{-*})^c\right) * \ldots * \left(((B'_t)^{-*})^c\right)
* \left( B_1*\ldots* B_s \right).
\end{align*}
Since each $((B'_i)^{-*})^c$ is again a $\rho$-Blaschke factor, this is a factorization of the required form.
\end{proof}
\begin{proposition}[Jensen's Formula]\label{jensen}
Let $\Omega=\Omega_D$ be a circular
domain of $\H $ and
let $f:\Omega\rightarrow\H \cup\{ \infty \}$ be a
semi-regular function.
Let $\rho>0$ be such that the ball $\overline{\mathbb{B}_{\rho}}\subset\Omega$, $f(0)\neq 0,\infty$ and such that $f(y)\neq 0, \infty$ for any $y\in\partial \mathbb{B}_{\rho}$. \\
Assume that (for the function $f$)
\begin{itemize}
\item $\{r_{k}\}_{k=1,2,\ldots}$ are the punctual zeros,
\item $\{ w_n \}_{n=1,2,\ldots}$ are the punctual real poles,
\item $\{\S _{a_{i}}\}_{i=1,2,\ldots}$ are the spherical zeros,
\item $\{\S _{b_{j}}\}_{j=1,2,\ldots}$ are the spherical poles,
\end{itemize}
each repeated according to its multiplicity.
Let $\mu$ be a probability measure on $\S$.
Then:
\begin{align*}
\log|f(0)|
\le &
\displaystyle\frac{1}{2\pi }\displaystyle \int_0^{2\pi} \int_{\S }\log|f(\rho\cos\theta +\rho\sin\theta I)|d\mu(I)d\theta \, \\
& -\displaystyle\sum_{|r_{k}|<\rho}\left(\log\displaystyle\frac{\rho}{|r_{k}|}\right)
+ \displaystyle\sum_{|w_{n}|<\rho}\left(\log\displaystyle\frac{\rho}{|w_{n}|}\right)\\
&- 2\displaystyle\sum_{|a_{i}|<\rho}\left(\log\displaystyle\frac{\rho}{|a_{i}|}\right)+ 2\displaystyle\sum_{|b_{j}|<\rho}\left(\log\displaystyle\frac{\rho}{|b_{j}|} \right).
\end{align*}
\end{proposition}
Using the language of divisors as explained in section
\ref{sect-div}
we may reformulate this as follows:
\begin{proposition}[Jensen's Formula]
Let $\Omega=\Omega_D$ be a circular
domain of $\H $ and
let $f:\Omega\rightarrow\H \cup\{ \infty \}$ be a
semi-regular function.
Let $\rho>0$ be such that the ball
$\overline{\mathbb{B}_{\rho}}\subset\Omega$, $f(0)\neq 0,\infty$,
and $f(y)\neq 0, \infty$ for any $y\in\partial \mathbb{B}_{\rho}$.
Let $\mu$ be a probability measure on $\S$.
Then:
\begin{equation}\label{jensenNreg}
\log|f(0)|
\le
\frac{1}{2\pi }\int_0^{2\pi} \int_{\S }\log|f(\rho\cos\theta +\rho\sin\theta I)|\,d\mu(I)\,d\theta
\;-\;\sum_{|p_k|<\rho} m_k\log \frac{\rho}{|p_k|}
\end{equation}
for $div(f)=\sum m_k\{p_k\}$.
\end{proposition}
\begin{proof}
If $f$ has neither zeros, nor poles, this is
Proposition~\ref{jensen-ni-ni}.
For the general case, we
consider
$div(f)=\sum_k m_k\{p_k\}$.
First of all, since $\overline{\mathbb{B}_{\rho}}\subset\Omega$,
it follows from Corollary~\ref{cor-id} that
there are only finitely many $k$ with $m_k\ne 0$,
hence the sums in the statement are finite
and $\sum_k |m_k|<\infty$.
From Proposition~\ref{factorization} we deduce that $f$ may be represented
in the form
\[
f=f_0*B_1*\ldots *B_r
\]
where $f_0$ is regular on a neighborhood of $\bar{\mathbb B}_\rho$
with neither zeros nor poles and each $B_i$ equals
$B_{p_i,\rho}^{\epsilon_i *}$
for some $p_i\in{\mathbb B}_\rho$ and $ \epsilon_i\in\{+1,-1\}$.
Now
\[
\log|f(0)|=\log|f_0(0)| +\sum_i \log |B_i(0)|=
\log|f_0(0)| -\sum_i \epsilon_i\log \frac{\rho}{|p_i|}
\]
On the other hand
\[
|f(q)|=|f_0(q)| \ \ \forall q\in\partial {\mathbb B}_\rho
\]
because $|B_i(q)|=1$ for all $i\in\{1,\ldots,r\}$ and all $q\in\partial {\mathbb B}_\rho$.
Furthermore,
\[
\log|f_0(0)| \le
\displaystyle\frac{1}{2\pi}\displaystyle \int_0^{2\pi} \int_{\S }\log|f_0(\rho\cos\theta +\rho\sin\theta I)|d\mu(I)d\theta
\]
(with equality if $f$ is slice-preserving)
due to Proposition~\ref{jensen-ni-ni}.
Combining these facts, we obtain the assertion.
\end{proof}
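The equality case for slice-preserving functions can be illustrated on one slice. The sketch below (ad hoc names, illustrative only) checks Jensen's formula with equality for the slice-preserving polynomial $f(q)=q^2-\tfrac14$, $\rho=1$, whose divisor consists of the two punctual zeros $\pm\tfrac12$, each with multiplicity one.

```python
import math

# Equality case of Jensen's formula on one slice for f(q) = q^2 - 1/4,
# rho = 1:
#   log|f(0)| = mean_theta log|f(e^{i theta})| - sum_k m_k log(rho/|p_k|),
# with zeros p = +-1/2 of multiplicity 1.

def f(z):
    return z * z - 0.25

N = 100_000
mean = sum(math.log(abs(f(complex(math.cos(2 * math.pi * (k + 0.5) / N),
                                  math.sin(2 * math.pi * (k + 0.5) / N)))))
           for k in range(N)) / N
rhs = mean - 2 * math.log(1.0 / 0.5)   # two zeros of modulus 1/2
assert abs(math.log(abs(f(0))) - rhs) < 1e-6   # log(1/4) on both sides
```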
\begin{remark}
For every semi-regular function $f,$ its symmetrization
$f^s=N(f)=f*f^c$ is slice-preserving.
Therefore:
\begin{equation}
\begin{array}{ccc}
\log|f^s(0)|
&=
\displaystyle\frac{1}{2\pi }\displaystyle \int_0^{2\pi} \int_{\S }\log|f^s(\rho\cos\theta +\rho\sin\theta I)|d\mu(I)d\theta \, + \\
& -2{\displaystyle\sum_{|p_k|<\rho} m_k\log \frac{\rho}{|p_k|} }
\quad
\\
\end{array}
\end{equation}
for $div(f)=\sum m_k\{p_k\}$.
However, there is no similar formula for $Tr(f)=f+f^c$, because
$div(f+f^c)$ is not completely determined by $div(f)$ whereas
$div(f^s)=2\cdot div(f)$.
\end{remark}
\begin{definition}
Let $f$ be a slice regular function on $\mathbb{B}_R.$ For all $0 < r < R$ we define:
$$
M_f (r) = \sup\limits_{|q|=r} |f(q)| \,\,;
$$
$$
P_f (r) = \textrm{number of punctual zeros of $f$ in $\overline{\mathbb{B}_r}$, counted with multiplicities} \,\, ;
$$
$$
S_f (r) = \textrm{number of spherical zeros of $f$ in $\overline{\mathbb{B}_r}$, counted with multiplicities};
$$
\vskip0.3cm
$$
n_f(r)=P_f(r)+2S_f(r).
$$
\end{definition}
Then
\[
n_f(r)=\sum_{|a_k|\le r} m_k
\]
for a regular function $f$ with divisor $div(f)=\sum m_k\{a_k\}$.
\begin{corollary}
Let $f$ be a slice regular function defined in a neighborhood of $\overline{\mathbb{B}_R}$ and such that $f(0) \neq 0.$ Then
\begin{equation}\label{eq2}
n_f(r)=P_f(r)+2S_f(r)
\le \frac{\log M_{f} (R) -\log |f(0)| }{\log R - \log r}
\end{equation}
for any $0 <r <R.$
\end{corollary}
\begin{proof}
First we observe that
\[
\log M_f(R)\ge
\displaystyle\frac{1}{2\pi }\displaystyle \int_0^{2\pi} \int_{\S }\log|f(R\cos\theta +R\sin\theta I)|d\mu(I)d\theta.
\]
Therefore Jensen's inequality \eqref{jensenNreg} implies:
\begin{align*}
\log M_f(R) - \log|f(0)| &\ge
\sum_{|p_k|<R} m_k\log \frac{R}{|p_k|}\\
&= \left( \sum_{|p_k|<R} m_k\log R \right)
- \sum_{|p_k|<R} m_k\log|p_k|\\
&= n_f(R)\log R - \sum_{|p_k|<R} m_k\log|p_k|\\
&= n_f(R)\log R - \sum_{|p_k|\le r} m_k\log|p_k|
-\sum_{r<|p_k|<R} m_k\log|p_k|\\
&\ge n_f(R)\log R - \sum_{|p_k|\le r} m_k\log r
-\sum_{r<|p_k|<R} m_k\log R\\
&= n_f(R)\log R - n_f(r)\log r
-(n_f(R)-n_f(r))\log R\\
&= n_f(r)\left( \log R - \log r \right).
\end{align*}
Thus
\begin{equation}\label{eq42}
\log M_f(R) - \log|f(0)| \ge n_f(r)\left( \log R - \log r \right)
\end{equation}
for all $0<r<R$ such that $f$ has no zeros on $\partial{\mathbb B}_R$.
By continuity (in $R$) it follows that \eqref{eq42} holds
without any assumption whether there are zeros on $\partial{\mathbb B}_R$
or not.
This yields the assertion.
\end{proof}
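A one-slice numerical instance of this bound (illustrative only, with ad hoc names): for the slice-preserving polynomial $f(q)=q^2-\tfrac14$, with zeros $\pm\tfrac12$, one has $n_f(r)=2$ for $r\ge\tfrac12$, and the right-hand side of \eqref{eq2} can be estimated on the unit sphere.

```python
import math

# Check of the zero-counting bound
#   n_f(r) <= (log M_f(R) - log|f(0)|) / (log R - log r)
# on one slice, for f(q) = q^2 - 1/4 (zeros +-1/2, so n_f(0.6) = 2).

def f(z):
    return z * z - 0.25

R, r = 1.0, 0.6
M = max(abs(f(complex(math.cos(2 * math.pi * k / 10_000),
                      math.sin(2 * math.pi * k / 10_000))))
        for k in range(10_000))      # sampled sup of |f| on |q| = R
bound = (math.log(M) - math.log(abs(f(0)))) / (math.log(R) - math.log(r))
assert 2 <= bound                    # n_f(0.6) = 2 does not exceed the bound
```

Here $M\approx 1.25$ and the bound evaluates to roughly $3.15$, comfortably above $n_f(0.6)=2$.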
\begin{corollary}
Let $f:{\mathbb B}_1\to{\mathbb B}_1$ be a regular function with $f(0)\ne 0$.
Then there is no zero of $f$ in ${\mathbb B}_r$ for any $r<|f(0)|$.
\end{corollary}
\begin{proof}
Assume the contrary. Then $n_f(r)\ge 1$ while
\[
\lim_{R\to 1} \frac{\log M_{f} (R) -\log |f(0)| }{\log R - \log r}
\le \frac{-\log|f(0)|}{-\log r}<1
\]
leading to a contradiction.
\end{proof}
The interested reader can find in \cite{AB} and in \cite{perotti} other results about Jensen's formula, in a slightly different context.
\section*{Acknowledgements}
The two authors were partially supported by GNSAGA of INdAM and by INdAM itself.
C. Bisi was also partially supported by PRIN \textit{Variet\'a reali e complesse:
geometria, topologia e analisi armonica}. The two authors would also like to thank the anonymous referees for their insightful comments.
% arXiv:1902.08165 --- The harmonicity of slice regular functions
% https://arxiv.org/abs/1902.08165
% https://arxiv.org/abs/2209.12540
\title{The Generalized Cluster Complex: Refined Enumeration of Faces and Related Parking Spaces}
\begin{abstract}
The generalized cluster complex was introduced by Fomin and Reading, as a natural extension of the Fomin--Zelevinsky cluster complex coming from finite type cluster algebras. In this work, to each face of this complex we associate a parabolic conjugacy class of the underlying finite Coxeter group. We show that the refined enumeration of faces (respectively, positive faces) according to this data gives an explicit formula in terms of the corresponding characteristic polynomial (equivalently, in terms of Orlik--Solomon exponents). This characteristic polynomial originally comes from the theory of hyperplane arrangements, but it is conveniently defined via the parabolic Burnside ring. This makes a connection with the theory of parking spaces: our results eventually rely on some enumeration of chains of noncrossing partitions that were obtained in this context. The precise relations between the formulas counting faces and the one counting chains of noncrossing partitions are combinatorial reciprocities, generalizing the one between Narayana and Kirkman numbers.
\end{abstract}
\section{Introduction}
The {\it cluster complex} of a finite type cluster algebra was introduced by Fomin and Zelevinsky~\cite{fominzelevinsky}. It is a simplicial complex, which can be built using {\it almost positive roots} as its vertex set. It can be seen as the dual of a corresponding {\it associahedron}. A natural extension is the {\it generalized cluster complex}, defined by Fomin and Reading~\cite{fominreading} via {\it colored almost-positive roots}. Although there is no related cluster algebra nor associahedron in this case, it can be given a representation theoretic interpretation via quiver representations~\cite{thomas,zhu}. Most importantly, this is a simplicial complex with nice enumerative and topological properties~\cite{athanasiadistzanaki1,athanasiadistzanaki2,fominzelevinsky,stumpthomaswilliams,tzanaki}. In particular, its number of facets is the {\it Fuß-Catalan number}
\[
\Cat^{(m)}(W) := \frac{1}{|W|} \prod_{i=1}^n (m h + e_i + 1),
\]
and its number of positive facets is the {\it positive Fuß-Catalan number}
\[
\Cat^{(m)}_+(W) := \frac{1}{|W|} \prod_{i=1}^n (m h + e_i - 1).
\]
Here, $W$ is a finite and irreducible real reflection group, $h$ is its Coxeter number, and $e_1,\dots,e_n$ is the sequence of exponents. Moreover, $m$ is the number of different colors that a positive root can have. See Section~\ref{sec:defclus} for details.
Consider a {\it flat}, {\it i.e.}, an element $X$ in the intersection lattice $L(W)$ generated by the reflecting hyperplanes of $W$. There is an associated hyperplane arrangement on $X$ called the {\it restricted arrangement} (a general reference on this subject is Orlik and Terao~\cite{orlikterao}). It turns out to be a {\it free arrangement}, so that its {\it characteristic polynomial} $\chi_X(t)$ is factorized in the form $\prod_{i=1}^{\dim(X)} (t-b_i^X)$ where the roots $b_i^X$ are positive integers called the {\it Orlik-Solomon exponents}. (Two flats that are congruent under the action of $W$ have the same characteristic polynomial and we consider that indices are orbits of flats.) It has been established that these characteristic polynomials can be used to refine the enumeration of some Catalan families. In particular, Sommers~\cite{sommers2} considers certain ideals in Lie theory (that correspond to the combinatorial notion of {\it nonnesting partitions}). He showed in~\cite[Theorem~5.7]{sommers2} that the number of such ideals (respectively, positive ideals) is the Catalan number $\Cat(W) := \Cat^{(1)}(W)$ (respectively, the positive Catalan number $\Cat_+(W) := \Cat^{(1)}_+(W)$). Moreover, he showed in~\cite[Proposition~6.6]{sommers2} that the number of ideals (respectively, positive ideals) associated to the orbit of a flat $X$ under a certain natural map is
\[
\frac{ \chi_X(h+1) }{[N(W_X):W_X]},
\quad \text{ respectively,} \quad
\frac{ \chi_X(h-1) }{[N(W_X):W_X]}.
\]
Here, the denominator is the index of the parabolic subgroup $W_X$ in its normalizer, see Section~\ref{sec:parking} for details. In particular, the numbers in the previous equation add up to $\Cat(W)$ and $\Cat_+(W)$, respectively.
Another Catalan family consists of {\it noncrossing partitions} (see~\cite{baumeisteretal} for a recent survey). They are particularly important here because of the close connection to the cluster complex. A refined enumeration of these objects akin to Sommers' exists: indeed Athanasiadis and Reiner~\cite[Theorem~6.3]{athanasiadisreiner} had previously shown that such refined enumerations of nonnesting and noncrossing partitions coincide (without giving the explicit formulas in terms of characteristic polynomials). However, this coincidence is proved via a case-by-case check using the finite type classification and finding a more conceptual explanation is still an open problem. The most promising attempt in this direction is {\it parking space theory}, introduced by Armstrong, Reiner and Rhoades~\cite{armstrongreinerrhoades}: among various other features, it gives a representation theoretic framework to prove this kind of refined enumeration. In particular, the characteristic polynomials $\chi_X(t)$ naturally appear there via an identity in the parabolic Burnside ring (see Section~\ref{sec:parking}, and Orlik and Solomon~\cite{orliksolomon}), rather than via hyperplane arrangements. This theory was also extended in the Fuß-Catalan setting by Rhoades~\cite{rhoades}: $m$-element chains of noncrossing partitions are counted by $\Cat^{(m)}(W)$, and a natural refinement gives the numbers:
\[
\frac{\chi_X(mh+1)}{[N(W_X):W_X]}.
\]
See Theorem~\ref{theo_park} for the precise statement.
The main goal of this work is to give a refined enumeration of faces in the generalized cluster complex. In Corollary~\ref{formulas_gammagammaplus}, we show that the number of faces (respectively, positive faces) associated to the orbit of a flat $X$ under a certain natural map is
\begin{align} \label{eq:introformulagamma}
(-1)^{\dim(X)} \frac{\chi_X(-mh-1)}{[N(W_X):W_X]},
\quad\text{ respectively, }\quad
(-1)^{\dim(X)} \frac{\chi_X(-mh+1)}{[N(W_X):W_X]}.
\end{align}
A notable difference with the refined enumeration of nonnesting and noncrossing partitions is that these numbers don't add up to the Fuß-Catalan numbers: rather, $\Cat^{(m)}(W)$ and $\Cat^{(m)}_+(W)$ appear as particular cases when $X$ is the minimal flat of the intersection lattice (which corresponds to the enumeration of facets and positive facets). Our derivation is case-free, but eventually relies on Theorem~\ref{theo_park} mentioned above (for which the only known proof is via a case-by-case check).
There are a few preliminaries that are interesting on their own, as well as nice consequences. Let us outline those both by giving the detailed organization of this article:
\begin{itemize}
\item Section~\ref{sec:background} contains some background material.
\item {\bf Parking space theory} is reviewed in Section~\ref{sec:parking}, following~\cite{armstrongreinerrhoades,rhoades}. The aim is to present Theorems~\ref{theo_park} and~\ref{theo_fpark}, on which our results rely. Sections~\ref{parkingproof1} and~\ref{parkingproof2} contain proofs of the latter theorem (more precisely, we show it is a consequence of the former). Eventually, it reoccurs in Section~\ref{sec:clusterpark} which contains a representation theoretic interpretation of our refined enumeration in~\eqref{eq:introformulagamma} via {\it cluster parking functions}. These form a simplicial complex whose (equivariant) Euler characteristic is what we call a {\it cluster parking space}.
\item The {\bf generalized cluster complex} is reviewed in Section~\ref{sec:defclus}, following~\cite{bradywatt,fominreading,fominzelevinsky,tzanaki}. This section also contains the definition of the natural map from faces to orbits of flats. The formulas in terms of characteristic polynomials is obtained Section~\ref{sec:reciprocities}, via combinatorial reciprocities which make a link with chains of noncrossing partitions. It is also proved in Section~\ref{sec:bij} via a bijection, again making a link with certain chains of noncrossing partitions.
In Section~\ref{sec:fhvectors}, we give some consequences concerning the $f$-vectors and $h$-vectors of the generalized cluster complex. In particular, an identity can be seen as a refinement of the relations between $f$- and $h$-vectors. Eventually, Section~\ref{sec:recfaces} provides a recursion satisfied by the left-hand side of~\eqref{eq:introformulagamma}, proved via the combinatorics of the generalized cluster complex.
\item {\bf Minimal factorizations} of the Coxeter element are ubiquitous in the context of noncrossing partitions and cluster complexes. In Section~\ref{sec:factor}, we get a formula for a $q$-enumeration of certain minimal factorizations (where one factor is in a given parabolic conjugacy class and the others are reflections). This is related to the enumeration of faces of the generalized cluster complex, via a recursion which is equivalent to that in Section~\ref{sec:recfaces}.
\item {\bf Two order relations} on noncrossing partitions, denoted $\sqsubset$ and $\ll$, are used throughout. We introduced them in~\cite{bianejosuat}, and in some sense they refine the absolute order (used to define the lattice structure on noncrossing partitions). They are useful to prove an identity on parking spaces (Section~\ref{parkingproof2}), to give bijections between faces of the generalized cluster complex and certain chains of noncrossing partitions (Section~\ref{sec:bij}), and to define the $q$-statistic in our $q$-enumeration of minimal factorizations mentioned above (Section~\ref{sec:factor}).
\end{itemize}
\subsection*{Acknowledgements}
We thank Frédéric Chapoton for suggesting that we investigate the generating function $\phi$ (defined in Section~\ref{sec:factor}), which was a motivation for the whole project. We also thank Philippe Biane for fruitful discussions throughout.
\section{Preliminary definitions}
\label{sec:background}
Throughout this work, $W$ is a finite real reflection group of rank $n$. We do not assume that it is irreducible, unless stated otherwise. Its geometric representation is an $n$-dimensional Euclidean space $V \simeq \mathbb{R}^n$. Let $T \subset W$ denote the set of reflections, and $S\subset T$ a set of simple reflections that we write $S = \{s_1, \dots , s_n\}$. The standard parabolic subgroup $W_I$ for $I\subset S$ is the subgroup of $W$ generated by $I$. Any subgroup conjugate to some $W_I$ for $I\subset S$ is called a parabolic subgroup. The {\it support} of $w\in W$ is:
\[
\supp(w) := \min\big\{ I\subset S \;:\; w\in W_I \big\}
\]
where the minimum is taken with respect to inclusion, and $w$ is said to have {\it full support} if $\supp(w)=S$.
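For instance, in $W=\mathfrak{S}_4$ (type $A_3$) with simple reflections $s_i=(i,i+1)$, the element
\[
w = (1\,2)(3\,4) = s_1 s_3
\]
satisfies $\supp(w)=\{s_1,s_3\}$: it lies in $W_{\{s_1,s_3\}}\simeq\mathfrak{S}_2\times\mathfrak{S}_2$, and in no smaller standard parabolic subgroup.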
\subsection{The intersection lattice}
\begin{defi}
For each $w\in W$, we denote
\[
\Fix(w) := \ker(w-I) = \big\{ v\in V\;:\; w(v)=v \big\}.
\]
The \emph{intersection lattice} of $W$ is defined as
\[
L(W) := \big\{ \Fix(w) \;: \; w\in W \big\}.
\]
By convention, the order relation on $L(W)$ is reversed inclusion.
\end{defi}
This is indeed a lattice, and the join operation is given by intersection of subspaces. It is order-isomorphic to the lattice of parabolic subgroups of $W$ (where the order is inclusion) via
\begin{align} \label{def_WX}
X \mapsto W_X := \{ w\in W \;:\; X\subset \Fix(w) \}.
\end{align}
There is a natural action of $W$ on $L(W)$ induced by the action of $W$ on $V$, and the corresponding action on parabolic subgroups is by conjugation. Let $\sim$ denote the equivalence relation given by the orbit decomposition.
The apparent clash of notation between $W_I$ and $W_X$ is dealt with by identifying the powerset of $S$ with a subposet of $L(W)$, via
\[
I \mapsto \Fix\big( \prod_{s\in I} s \big).
\]
(The order of the product is irrelevant.) Implicitly, it is assumed that $I,J,K,\dots$ are in this subposet, while $X,Y,Z,\dots$ are general flats in $L(W)$. In particular, $I \sim J$ means that the standard parabolic subgroups $W_I$ and $W_J$ are conjugate. This convention will be used when some objects are indexed sometimes by $I,J,\dots$ and sometimes by $X,Y,\dots$.
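To illustrate the terminology, let $W=\mathfrak{S}_n$ act on $V=\{x\in\mathbb{R}^n \,:\, x_1+\dots+x_n=0\}$ by permuting coordinates. Then $\Fix(w)$ is cut out by the equations $x_i=x_j$ for $i,j$ in a common cycle of $w$, so that $L(W)$ is the lattice of set partitions of $\{1,\dots,n\}$, $W_X$ is the Young subgroup preserving the blocks of the corresponding partition, and two flats lie in the same $W$-orbit iff the associated set partitions have the same block sizes.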
\subsection{The noncrossing partition lattice}
Noncrossing partitions are defined with respect to a {\it standard Coxeter element} $c$, which is the product of all the simple reflections in $S$. By reindexing the set $S=\{s_1,\dots,s_n\}$, we can assume $c=s_1 \cdots s_n$.
\begin{defi}
The {\it reflection length} of $w\in W$ is defined by
\[
\ell(w) := n-\dim(\Fix(w)).
\]
It is also the minimal integer $k$ such that $w$ is a product of $k$ reflections, and any such factorization $w=t_1 \cdots t_k$ is called {\it minimal}.
\end{defi}
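For instance, for $W=\mathfrak{S}_n$ (of type $A_{n-1}$, hence of rank $n-1$), we have $\ell(w) = n - c(w)$ where $c(w)$ is the number of cycles of $w$: a transposition has reflection length $1$, and an $n$-cycle has reflection length $n-1$.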
\begin{defi}
The {\it absolute order} on $W$ is defined by $w_1 \leq w_2 $ if $\ell(w_1) + \ell(w_1^{-1}w_2) = \ell(w_2)$. Alternatively, $w_1 \leq w_2 $ iff some minimal reflection factorization of $w_1$ is a subword of some minimal reflection factorization of $w_2$. The {\it noncrossing partition lattice} of $W$ (with respect to a standard Coxeter element $c$), denoted $\NC(W)$, is the order ideal of elements below $c$ in the absolute order.
\end{defi}
This is a widely studied object, and we refer to Baumeister {\it et~al.}~\cite{baumeisteretal} for a recent survey. Note that we have a map $w\mapsto \Fix(w)$ from $\NC(W)$ to $L(W)$. This map is injective, increasing and rank-preserving. Moreover, $w_1,w_2 \in \NC(W)$ are conjugate (in $W$) iff $\Fix(w_1)$ and $\Fix(w_2)$ are in the same orbit under the action of $W$.
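In the case $W=\mathfrak{S}_n$ with $c = s_1\cdots s_{n-1} = (1\,2\,\cdots\,n)$, sending $w\in\NC(W)$ to the set partition of $\{1,\dots,n\}$ formed by its cycles identifies $\NC(W)$ with the classical lattice of noncrossing partitions of $\{1,\dots,n\}$, which explains the terminology.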
\begin{lemm} \label{lemm:bij_xi}
The following two sets are in bijection:
\begin{itemize}
\item \emph{parabolic conjugacy classes}, {\it i.e.}, conjugacy classes $\mathcal{X}\subset W$ such that $\mathcal{X}\cap \NC(W) \neq \varnothing$,
\item $L(W)/W$, {\it i.e.}, orbits of flats under the action of $W$.
\end{itemize}
\end{lemm}
\begin{proof}
We refer to \cite{orliksolomon} (see Lemma~(3.4), Lemma~(3.5), and the lines thereafter). Let us briefly describe the explicit bijections.
To each parabolic conjugacy class $\mathcal{X}$, we associate the orbit of $\Fix(w)$ for some arbitrary $w\in\mathcal{X}$. In the other direction, consider a flat $X$ defined up to the action of $W$. There is $I\subset S$ such that $W_X$ is conjugate to the standard parabolic subgroup $W_I$, and to the orbit of $X$ we associate the conjugacy class of $\prod_{s\in I}s$ (the order of the product is irrelevant).
\end{proof}
For $X \in L(W)$, the parabolic conjugacy class corresponding to its orbit in $L(W)/W$ (via the previous bijection) is denoted $\mathcal{X}$. We sometimes use this bijection implicitly, and the notation makes clear which objects are meant. For example, if $w\in\NC(W)$ then the condition $w\in\mathcal{X}$ is equivalent to $\Fix(w)\sim X$.
\begin{lemm} \label{label:lemmWfix}
For each $w\in \NC(W)$, the parabolic subgroup $W_{\Fix(w)}$ is the minimal parabolic subgroup of $W$ containing $w$.
\end{lemm}
\begin{prop} \label{prop:wcox}
A noncrossing partition $w\in\NC(W)$ is a standard Coxeter element of $W_{\Fix(w)}$. This means there is a factorization $w = t_1 \cdots t_k$ (called the \emph{canonical factorization}, unique up to commutations among the factors) where the elements $t_1, \dots , t_k$ are the simple generators of $W_{\Fix(w)}$.
\end{prop}
This has been observed by several authors. We refer to \cite[Proposition~3.1]{bianejosuat} for a discussion.
\subsection{A tale of two orders}
The lattice structure of $\NC(W)$ (the absolute order) can be refined: we introduced in~\cite{bianejosuat} two partial order relations $\sqsubset$ and $\ll$ with many combinatorial properties. In particular, they are useful to deal with the combinatorics of the cluster complex.
\begin{defi}[\cite{bianejosuat}]
Let $w\in \NC(W)$, and write its canonical factorization $w = t_1 \cdots t_k$. We define $\sqsubset$ and $\ll$ on $\NC(W)$ by:
\begin{itemize}
\item $v\sqsubset w$ if $v$ can be written as a subword of $t_1 \cdots t_k$ (so that $v\leq w$, in particular),
\item $v\ll w$ if $v\leq w$ and $v$ has full support in $W_{\Fix(w)}$, {\it i.e.}, each $t_i$ for $1\leq i \leq k$ appears at least once in any factorization $v = t_{i_1} t_{i_2} \cdots$.
\end{itemize}
\end{defi}
Note that these partial orders are such that $u\sqsubset v \Rightarrow u\leq v$ and $u\ll v \Rightarrow u\leq v$.
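As a small example, take $W=\mathfrak{S}_3$ with $S=\{s_1,s_2\}$ and $c=s_1s_2$, so that $\NC(W) = \{e,\, s_1,\, s_2,\, s_1s_2s_1,\, c\}$. The canonical factorization of $c$ being $s_1s_2$, the elements $v\sqsubset c$ are $e$, $s_1$, $s_2$ and $c$, while the only reflection $v$ with $v\ll c$ is $s_1s_2s_1$, the unique reflection with full support.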
Another characterization of these partial orders makes a close connection with the {\it Bruhat order}, denoted $\leq_B$. This will be used in Section~\ref{sec:factor}. It states that the cover relations for $\sqsubset$ and $\ll$ are such that:
\[
u \sqsubset\hspace{-3.4mm}\cdot\hspace{3.4mm} v \;\Leftrightarrow\; u\lessdot v \text{ and } u\leq_B v, \qquad
u \ll\hspace{-2.5mm}\cdot\hspace{2.5mm} v \;\Leftrightarrow\; u\lessdot v \text{ and } u\geq_B v.
\]
Let us give some other properties, mostly taken from~\cite{bianejosuat}.
\begin{prop} \label{prop:order_uvw}
For each $u,w \in\NC(W)$ such that $u \leq w$, there exists a unique $v \in\NC(W)$ such that $u \ll v \sqsubset w$.
\end{prop}
\begin{proof}
Let us first consider the case where $w$ is maximal, {\it i.e.}, $w$ is the Coxeter element $c$. In this case, let $I \subset S$ be the support of $u$, and let $v$ be the unique element $v\sqsubset c$ which is the product of the $s_i$ for $i\in I$. It is easily checked that it satisfies $u \ll v \sqsubset w$, and that it is unique. The general case follows by the same procedure in the parabolic subgroup $W_{\Fix(w)}$.
\end{proof}
\begin{prop}[{\cite[Corollary~4.10]{bianejosuat}}] \label{lemm:bool}
Let $w\in\NC(W)$. We have:
\begin{itemize}
\item The two orders $\leq$ and $\sqsubset$ agree on the set $\{ v\in\NC(W)\;:\; v\sqsubset w \}$. The resulting poset is a boolean lattice of order $2^{\ell(w)}$, containing all elements that can be written as subwords of the canonical factorization of $w$.
\item The two orders $\leq$ and $\ll$ agree on the set $\{ v\in\NC(W)\;:\; v\gg w \}$. The resulting poset is a boolean lattice of order $2^{\#\supp(w) - \ell(w)}$, and its maximal element is the unique $w'$ such that $w\ll w' \sqsubset c$ (given by the previous proposition).
\end{itemize}
\end{prop}
\section{Parking spaces and their characters}
\label{sec:parking}
The goal of this section is to present some background on parking space theory, and to introduce a ``prime'' analog of the parking space. The main result about this prime parking space is Theorem~\ref{theo_fpark}.
\subsection{The parabolic Burnside ring}
We first need some preliminaries about characters of $W$, due to Orlik and Solomon~\cite{orliksolomon}. See also Geck and Pfeifer~\cite[Chapter~2.4]{geckpfeiffer}.
We will generally use bold symbols to denote characters. In particular, ${\bf 1}$ denotes the trivial character (of a group which is clear from the context).
\begin{defi}
For $I\subset S$, let $ \bPhi_I := \mathbf{ind}_{W_I}^W(\bf 1)$ (the trivial character of $W_I$ induced to $W$). The {\it parabolic Burnside ring} $R(W)$ of $W$ is the ring linearly generated by $(\bPhi_I)_{I\subset S}$ (as a subring of the character ring of $W$).
\end{defi}
It is not obvious that the linear span of $(\bPhi_I)_{I\subset S}$ is indeed a ring. We refer to~\cite[Section~2.4.3]{geckpfeiffer}. It also follows from {\it loc.~cit.}~that a basis of $R(W)$ is $(\bPhi_I)_{I\in \Theta}$, where $\Theta$ is a set of representatives of subsets of $S$ modulo the equivalence relation $\sim$ as in Lemma~\ref{lemm:bij_xi}. Using implicitly one of the bijections from Lemma~\ref{lemm:bij_xi}, we identify $\Theta$ with a set of representatives for the quotient $L(W)/W$. We thus write $\bPhi_X$ in place of $\bPhi_I$ for $X\in L(W)$ such that $W_X$ and $W_I$ are conjugate.
Note that $\bPhi_{S}$ is the trivial character of $W$, and the unit of $R(W)$. Also, it can be seen that $\bPhi_I$ is the character of the representation $\mathbb{C}^{ W / W_I }$ (the linearization of the group action on $W/ W_I$ where $W$ acts by left multiplication on the cosets).
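In type $A$, these are familiar objects: for $W=\mathfrak{S}_n$ and $W_I$ the Young subgroup $\mathfrak{S}_\lambda$ associated to a composition $\lambda$ of $n$, the character $\bPhi_I$ is the permutation character of the action of $\mathfrak{S}_n$ on the cosets $\mathfrak{S}_n/\mathfrak{S}_\lambda$, whose image under the Frobenius characteristic map is the complete homogeneous symmetric function $h_\lambda$.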
\begin{rema}
Let us mention that $R(W)$ is a subring of the {\it Burnside ring} of $W$ (which is linearly generated by characters of the representations $\mathbb{C}^{ W / W' }$ where $W'$ is any subgroup of $W$). The terminology comes from the fact that here we only consider parabolic subgroups.
\end{rema}
\begin{rema} \label{carac_OS}
The algebra $\mathbb{Q}\otimes R(W)$ is the space of functions $\chi : W \to \mathbb{Q}$ such that the value $\chi(w)$ only depends on the orbit of $\Fix(w)$ in $L(W)/W$.
This characterization is essentially due to Orlik and Solomon~\cite{orliksolomon}.
\end{rema}
The {\it sign character} of $W$, denoted $\boldsymbol{\epsilon}$, is defined by
\[
\boldsymbol{\epsilon}(w) := (-1)^{\ell(w)} = (-1)^{n-\dim\Fix(w)}.
\]
This character is usually defined with the Coxeter length rather than the reflection length, but the two lengths have the same parity, as each reflection has odd Coxeter length. The following lemma describes how multiplication by $\boldsymbol{\epsilon}$ acts on $R(W)$ (note that taking $J=S$ shows that $\boldsymbol{\epsilon}\in R(W)$, which can also be seen via the previous remark).
\begin{lemm}[Solomon~\cite{solomon}] \label{lemm_solomon}
For any $J\subset S$, we have
\begin{equation}\label{epstensor}
\boldsymbol{\epsilon} \otimes \bPhi_J = \sum_{I\subset J} (-1)^{\#I} \bPhi_I.
\end{equation}
\end{lemm}
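As a sanity check, taking $J=S$ in~\eqref{epstensor} gives
\[
\boldsymbol{\epsilon} = \sum_{I\subset S} (-1)^{\#I} \bPhi_I,
\]
since $\bPhi_S$ is the unit of $R(W)$. In type $A_1$ this reads $\boldsymbol{\epsilon} = \bPhi_\varnothing - \bPhi_S$: the regular character (with values $2$ and $0$) minus the trivial character indeed gives the values $1$ and $-1$ of the sign character.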
Now, define a class function $\bPsi_t: W \to \mathbb{Z}$ for each integer $t$ by
\[
\bPsi_t(w) = t^{\dim \Fix(w)}.
\]
In particular, $\bPsi_{-1} = (-1)^n \boldsymbol{\epsilon}$.
Note that $\bPsi_t \in \mathbb{Q} \otimes R(W)$ for each $t\in \mathbb{Z}$. This is easily seen from the characterization of $\mathbb{Q} \otimes R(W)$ stated in Remark~\ref{carac_OS}. In the more general context of complex reflection groups, Ito and Okada~\cite{itookada} gave a classification of the positive integers $t$ such that $\bPsi_t$ is in the positive integral span of $(\bPhi_I)_{I\in \Theta}$ ({\it i.e.}, $\bPsi_t$ is the character of some representation of $W$). The important cases for us are the following: this property holds for $\bPsi_t$ if $t=mh+1$ with $m$ a nonnegative integer, or $t=mh-1$ with $m$ a positive integer, where $h$ is the Coxeter number of $W$.
\begin{rema}
The results of Ito and Okada~\cite{itookada} also show the following. For any finite complex reflection group with Coxeter number $h$, it is still true that $\bPsi_{mh+1}$ ($m\geq 0$) is the genuine character of a group representation. But this does not hold for $\bPsi_{mh-1}$ ($m>0$) beyond the real case.
\end{rema}
In the next statement, we treat $t$ as a formal variable rather than an integer. Also, recall from the introduction that $N(W_X)$ is the normalizer of $W_X$ in $W$.
\begin{prop}[Orlik and Solomon~\cite{orliksolomon}] \label{prop:psik_expansion}
In $\mathbb{Q}[t] \otimes R(W)$, there is an expansion:
\begin{equation} \label{psik_expansion}
\bPsi_t = \sum_{ X \in \Theta} \frac{\chi_X(t)}{[N(W_X):W_X]}
\bPhi_X
\end{equation}
where $\chi_X(t)$ is a polynomial in $t$ called the \emph{characteristic polynomial} of $X$. It can be defined using the Möbius function $\mu$ of $L(W)$ by
\begin{equation} \label{eq:def_chi}
\chi_X(t)
=
\sum_{Y\in L(W),\; Y\geq X} \mu(X,Y) t^{\dim (Y)}
\end{equation}
and can be factorized in the form:
\[
\chi_X(t) = \prod_{i=1}^{\dim(X)} (t-b_i^X)
\]
where the roots $b_i^X$ are positive integers called the \emph{Orlik-Solomon exponents} of $X$.
\end{prop}
See~\cite{orlikterao} for tables containing the Orlik-Solomon exponents for all irreducible $W$ in the finite type classification. When $X$ is the minimal element of $L(W)$ ({\it i.e.}, the full-dimensional subspace of the geometric representation $V$ of $W$), the associated Orlik-Solomon exponents are the exponents $e_1,\dots,e_n$ of $W$ (classically defined by considering eigenvalues of the Coxeter element, see~\cite{humphreys}).
From the previous proposition, we see that there are at least two ways to compute the characteristic polynomials (or the Orlik-Solomon exponents, by taking their roots):
\begin{itemize}
\item we can use the intersection lattice and its Möbius function via Equation~\eqref{eq:def_chi},
\item we can use Equation~\eqref{psik_expansion} and some character calculations.
\end{itemize}
Let us make the second point more explicit. The values of the characters $\bPhi_I$ on parabolic conjugacy classes can be organized in a square matrix (with rows and columns indexed by $\Theta$) called the {\it parabolic table of marks}. An algorithm to compute it is given by Geck and Pfeiffer~\cite[Section~2.4]{geckpfeiffer}. Since the values of the character $\bPsi_t$ are explicit, we can get the coefficients $([N(W_I):W_I]^{-1} \chi_I(t))$ by inverting the parabolic table of marks.
Another way to compute these characteristic polynomials is given by Sommers in~\cite[Propositions~4.7 and~5.1]{sommers1}. We will give another method below, by giving a recursion satisfied by the numbers $\gamma(W,\mathcal{X},m)$ (see Section~\ref{sec:recfaces}).
The denominator $[ N(W_X) : W_X ]$ that appears above can be written differently. Orlik and Solomon~\cite{orliksolomon} showed that:
\begin{equation} \label{factornu}
\frac{1}{[ N(W_X) : W_X ]} = \frac{ \nu(X) }{ \prod_{i=1}^{\dim(X)} (b_i^X + 1)}
\end{equation}
where $\nu(X)$ is the number of $J\subset S$ such that $W_J \sim W_X$. Indeed, this follows by plugging $t=-1$ into the previous proposition and using Lemma~\ref{lemm_solomon}. As a check of what happens in the case of the symmetric group (type A), let $\lambda$ be a partition of $n$ and $\mathfrak{S}_\lambda$ the corresponding Young subgroup of $\mathfrak{S}_n$. Let $\mu_i$ be the multiplicity of $i$ in $\lambda$. The normalizer $N(\mathfrak{S}_\lambda)$ is a semidirect product $\mathfrak{S}_\lambda \rtimes (\prod_i \mathfrak{S}_{\mu_i} )$, and it follows that $[N(\mathfrak{S}_\lambda):\mathfrak{S}_\lambda] = \prod_i \mu_i!$.
Finally, let us mention the following (see also \cite[Theorem~7.4.2]{haiman}):
\begin{prop}
If $t>0$ is such that $\bPsi_t$ is the character of a representation, the multiplicity of the trivial character in $\bPsi_t$ is the \emph{rational Catalan number}:
\[
\frac{1}{|W|} \prod_{i=1}^n(t+e_i).
\]
\end{prop}
\begin{proof}
The trivial character is orthogonal to the other irreducible characters, and it follows that this multiplicity is:
\[
\frac{1}{|W|} \sum_{w\in W} \bPsi_t (w).
\]
The result then follows from the Shephard-Todd formula~\cite{shephardtodd}:
\[
\sum_{w\in W} t^{\dim \Fix(w)} = \prod_{i=1}^n (t+e_i).
\]
\end{proof}
In the two cases which are relevant to this work, we get that the multiplicities of the trivial character in $\bPsi_{mh+1}$ and $\bPsi_{mh-1}$ are respectively $\Cat^{(m)}(W)$ and $\Cat_+^{(m)}(W)$.
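For instance, for $W=\mathfrak{S}_3$ (type $A_2$, with exponents $1,2$ and $h=3$), the Shephard-Todd formula gives
\[
\sum_{w\in W} t^{\dim \Fix(w)} = t^2+3t+2 = (t+1)(t+2).
\]
At $t=h+1=4$ the multiplicity of the trivial character is $5\cdot 6/6 = 5 = \Cat^{(1)}(W)$, and at $t=h-1=2$ it is $3\cdot 4/6 = 2 = \Cat^{(1)}_+(W)$.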
\subsection{Parking spaces}
We review the theory of parking spaces, introduced by Armstrong, Reiner and Rhoades~\cite{armstrongreinerrhoades}. This is a very brief account, which mostly aims at some enumeration formulas for chains of noncrossing partitions. In particular, we simply consider characters of $W$ whereas the general theory deals with characters of $W$ times a cyclic group. Also, we focus on the {\it noncrossing parking space} but other kinds of parking spaces exist.
Assume that $W$ is irreducible, and let $h$ be its Coxeter number. A {\it parking space} is a representation of $W$ having $\bPsi_{h+1}$ as its character. The name comes from the fact that in type $A_n$, the symmetric group $\mathfrak{S}_{n+1}$ acting on parking functions of length $n+1$ is such a parking space. More generally, a $t$-parking space (for an integer $t\geq 1$) is a representation of $W$ having $\bPsi_{t}$ as its character. In the case where $W$ is a Weyl group, there is a root lattice $Q$ acted on by $W$. It follows from Sommers~\cite[Proposition~3.9]{sommers1} that the quotient $Q/tQ$ is a $t$-parking space for the natural action of $W$, under some conditions on $t$ (being coprime to $h$ is sufficient). For the symmetric group $\mathfrak{S}_n$, other $t$-parking spaces are given by the action on {\it rational parking functions} (see~\cite{armstrongloehrwarrington} for details).
Two $t$-parking spaces are isomorphic as representations of $W$, since by definition they have the same character. However, it might be difficult to find an explicit isomorphism, and it is therefore interesting to consider various kinds of parking spaces. The {\it noncrossing parking space} from~\cite{armstrongreinerrhoades} is defined in terms of noncrossing partitions. Rhoades' generalization~\cite{rhoades} in the Fuß-Catalan setting is:
\begin{equation} \label{parkbigoplus}
\bigoplus_{\substack{ w_1,\dots, w_m \in\NC(W) \\ w_1\leq \dots \leq w_m }}
\mathbb{C}^{ W / W_{\Fix(w_1)} }.
\end{equation}
It can be seen as the linearization of the $W$-set of $m$-{\it parking functions}:
\begin{align} \label{def:PF}
\PF(W,m) := \biguplus_{\substack{ w_1,\dots, w_m \in\NC(W) \\ w_1\leq \dots \leq w_m }}
\{(w_1,\dots,w_m ) \} \times W / W_{\Fix(w_1)}
\end{align}
(where $W$ acts on the second factor). Rather than the representation itself, we mostly consider its character:
\begin{align}
\label{parkdef}
\mathbf{park}_{W,m} := \sum_{X\in \Theta} \kappa(W,X,m) \bPhi_X \in R(W)
\end{align}
where
\begin{align} \label{def:kappa}
\kappa(W, X, m) := \#\Big\{ (w_1,\dots,w_m) \in \NC(W)^m \; : \; w_1\leq \dots \leq w_{m} \text{ and } \Fix(w_1) \sim X \Big\}.
\end{align}
\begin{theo}[\cite{rhoades}, except for $E_7$, $E_8$] \label{theo_park}
Assume that $W$ is irreducible, with Coxeter number $h$. Then, the representation in~\eqref{parkbigoplus} is an $(mh+1)$-parking space, i.e., $\mathbf{park}_{W,m} = \bPsi_{mh+1}$. Equivalently:
\begin{equation} \label{eq:formula_kappa}
\forall X \in \Theta,
\quad
\kappa(W,X,m) = \frac{\chi_X(mh+1)}{[N(W_X):W_X]}.
\end{equation}
\end{theo}
\begin{proof}
Note that the equivalence between the two statements follows from Proposition~\ref{prop:psik_expansion}. Since the infinite families of the finite type classification have been treated (see Table~3 in~\cite{rhoades}), it remains only to deal with finitely many exceptional groups. This can be done by computer.
\end{proof}
Let us give some comments about the computer verification mentioned in this proof. First note that the computation is possible because we have here a finite statement about polynomials in $m$, whereas Rhoades' general conjecture involves a cyclic group of order $mh+1$ and is therefore of a different nature. A naive computation of the set $\NC(W)$ might be lengthy, but an efficient way to get all noncrossing partitions is to use the characterization in~\cite[Section~4.5]{bianejosuat}. It says that there is a binary relation $\between$ on $T$ (that depends on $c$) such that each $w\in\NC(W)$ can be identified with a set of pairwise-related reflections (explicitly, this is the set of simple reflections of $W_{\Fix(w)}$). It gives a quick way to get all noncrossing partitions, and for each $w\in \NC(W)$ we also have the canonical factorization $w=t_1 \cdots t_k$ into simple reflections of the parabolic subgroup $W_{\Fix(w)}$. Now, the right-hand side of~\eqref{def:kappa} can be rewritten
\[
\sum_{ w \in \mathcal{X} } \Cat^{(m-1)}(W_{\Fix(w^{-1}c)}).
\]
Using the formula for Fuß-Catalan numbers in terms of exponents (which we can use here since we have the Coxeter type of $W_{\Fix(w^{-1}c)}$ via the canonical factorization of $w^{-1}c$), we can compute this sum in reasonable time even for $E_7$ and $E_8$.
Now, let $\NC'(W) \subset \NC(W)$ denote the subset of elements with full support. The ``prime'' analog of the noncrossing parking space is now defined as
\begin{equation} \label{fparkbigoplus}
\bigoplus_{\substack{ w_1,\dots, w_{m-1} \in\NC(W), \; w_m\in \NC'(W) \\ w_1\leq \dots \leq w_m }}
\mathbb{C}^{ W / W_{\Fix(w_1)} }.
\end{equation}
Again, we mostly consider its character:
\begin{equation}
\label{fparkdef}
\mathbf{park\mathrlap{'}}_{W,m} := \sum_{X\in \Theta} \kappa^+(W,X,m) \bPhi_X
\end{equation}
where
\begin{align}
\begin{split}
\kappa^+(W, X, m) := \#\Big\{ (w_1,\dots,w_m) \in \NC(W)^{m-1} \times \NC'(W) \; : \; w_1\leq \dots \leq w_{m} \\ \text{ and } \Fix(w_1) \sim X \Big\}.
\end{split}
\end{align}
In complete analogy with Theorem~\ref{theo_park}, we have the following. Even though it has not explicitly appeared in the literature, it is not particularly surprising: some known enumeration formulas clearly suggest that the parameter $mh-1$ should appear in the present situation (see~\cite{athanasiadistzanaki2}, concerning the positive part of the generalized cluster complex, and~\cite{sommers2}, concerning strictly positive ideals in the root poset).
\begin{theo} \label{theo_fpark}
Assume that $W$ is irreducible, with Coxeter number $h$. The representation in~\eqref{fparkbigoplus} is an $(mh-1)$-parking space, i.e., $\mathbf{park\mathrlap{'}}_{W,m} = \bPsi_{mh-1}$. Equivalently:
\begin{equation} \label{eq:formula_kappaplus}
\forall X \in \Theta,
\quad
\kappa^+(W,X,m) = \frac{\chi_X(mh-1)}{[N(W_X):W_X]}.
\end{equation}
\end{theo}
Theorem~\ref{theo_fpark} can be approached along the same path as Theorem~\ref{theo_park}.
Rather than doing that, we will prove that Theorems~\ref{theo_park} and~\ref{theo_fpark} are equivalent. More precisely, we give two proofs of this equivalence, which follow a similar pattern:
\begin{itemize}
\item In Section~\ref{parkingproof1}, we show that the characters $\bPsi_{mh+1}$ and $\bPsi_{mh-1}$ satisfy a kind of inclusion-exclusion (which is straightforward for the characters $\mathbf{park}_{W,m}$ and $\mathbf{park\mathrlap{'}}_{W,m}$).
\item In Section~\ref{parkingproof2}, we show that the characters $\mathbf{park}_{W,m}$ and $\mathbf{park\mathrlap{'}}_{W,m}$ satisfy $\mathbf{park\mathrlap{'}}_{W,m} = (-1)^n \boldsymbol{\epsilon} \otimes \mathbf{park}_{W,-m} $ (the corresponding identity for $\bPsi_{mh+1}$ and $\bPsi_{mh-1}$ being clear).
\end{itemize}
In each case, the equivalence of the two theorems follows. These two proofs are case-free (they do not rely on the finite type classification), so it might happen that both have a role to play in a fully combinatorial and case-free approach to parking space theory.
\section{Proof of Theorem~\ref{theo_fpark} via the W-Laplacian}
\label{parkingproof1}
We begin this section by stating a property of the characters $\mathbf{park}_{W,m}$ and $\mathbf{park\mathrlap{'}}_{W,m}$.
\begin{prop} \label{lemm:inclexcl}
We have the ``inclusion-exclusion'' formulas:
\begin{align*}
\mathbf{park}_{W,m} &= \sum_{I\subset S} \mathbf{ind}_{W_I}^W (\mathbf{park\mathrlap{'}}_{W_I,m}),\\
\mathbf{park\mathrlap{'}}_{W,m} &= \sum_{I\subset S} (-1)^{\#S - \#I}\mathbf{ind}_{W_I}^W (\mathbf{park}_{W_I,m}).
\end{align*}
\end{prop}
\begin{proof}
First note that if $I\subset J \subset S$, we have $\mathbf{ind}_{W_J}^W ( \mathbf{ind}_{W_I}^{W_J} ({\bf 1}) ) = \mathbf{ind}_{W_J}^W({\bf 1})$. This transitivity property follows from the general theory of group characters. With this at hand, these identities follow from the same lines as the classical inclusion-exclusion principle.
\end{proof}
Our goal is to show that the characters $\bPsi_{mh+1}$ and $\bPsi_{mh-1}$ satisfy the same relations. First note that these two characters are only defined in the irreducible case, and we need to extend them in the natural multiplicative way. We refer to these as {\it abstract parking spaces}, as opposed to other kinds of parking spaces which are defined as characters of some explicit representation of $W$.
\begin{defi}
If $W$ is irreducible with Coxeter number $h$, we define $\mathbf{park}^{\op{abs}}_{W,m} := \bPsi_{mh+1}$ and $\mathbf{park'}^{\op{abs}}_{W,m} := \bPsi_{mh-1}$. Otherwise, write $W = \prod_{i=1}^r W_i$ for its decomposition into irreducible factors. Note that there is an isomorphism
\[
R(W) \simeq \bigotimes_{i=1}^r R(W_i)
\]
and define:
\begin{alignat*}{2}
\mathbf{park}^{\op{abs}}_{W,m} &:=\mathbf{park}^{\op{abs}}_{W_1,m} \otimes \cdots \otimes \mathbf{park}^{\op{abs}}_{W_r,m}, \\
\mathbf{park'}^{\op{abs}}_{W,m} &:= \mathbf{park'}^{\op{abs}}_{W_1,m} \otimes \cdots \otimes \mathbf{park'}^{\op{abs}}_{W_r,m}.
\end{alignat*}
More explicitly, there is a natural identification $L(W) \simeq \prod_{i=1}^r L(W_i)$. So $w=(w_1,\dots, w_r) \in W$ determines flats $Z_i \in L(W_i)$ by:
\[
\Fix(w) = \prod_{i=1}^r Z_i.
\]
We then have:
\begin{align*}
\mathbf{park}^{\op{abs}}_{W,m}(w) &= \prod_{i=1}^r(mh_i+1)^{\dim(Z_i)},\\ \mathbf{park'}^{\op{abs}}_{W,m}(w) &= \prod_{i=1}^r(mh_i-1)^{\dim(Z_i)},
\end{align*}
where $h_i$ is the Coxeter number of $W_i$.
\end{defi}
Since the values of these characters at $w\in W$ only depend on $\Fix(w)$, we can describe their parabolic inductions in terms of the geometry of hyperplane arrangements.
\begin{lemm} \label{lemm:evalchar}
For $X \in L(W)$, $w\in W$ and $Z=\Fix(w)$, we have:
\[
\Big(\mathbf{ind}_{W_X}^W\mathbf{park'}^{\op{abs}}_{W_X,m}\Big) (w)
=
[N(W_X):W_X]\cdot\sum_{\substack{ Y \in L(W), \\ Y \sim X \text{ and } Y \subset Z }}
\mathbf{park'}^{\op{abs}}_{W_{Y},m} (w).
\]
\end{lemm}
\begin{proof}
By the general formula for induction of characters, we have
\[
\Big(\mathbf{ind}_{W_X}^W\mathbf{park'}^{\op{abs}}_{W_X,m}\Big)(w)
=
\sum_{y\in W/W_X, \; y^{-1} w y\in W_X}
\mathbf{park'}^{\op{abs}}_{W_X,m}(y^{-1} w y).
\]
Note that we have:
\[
\mathbf{park'}^{\op{abs}}_{W_{(y\cdot X)},m}( w )
=
\mathbf{park'}^{\op{abs}}_{W_X,m}(y^{-1} w y),
\]
as can be seen by using the inner automorphism $w\mapsto y^{-1} w y$ (and its extension to $L(W)$, which is given by the action of $y$).
Moreover, we have:
\[
y^{-1} w y\in W_X
\;\Leftrightarrow\;
X \subset \Fix(y^{-1} w y)
\;\Leftrightarrow\;
X \subset y^{-1} \cdot \Fix(w)
\;\Leftrightarrow\;
y\cdot X \subset Z.
\]
We can thus rewrite the sum and get:
\[
\Big(\mathbf{ind}_{W_X}^W\mathbf{park'}^{\op{abs}}_{W_X,m}\Big)(w)
=
\sum_{y\in W/W_X, \; y\cdot X \subset Z }
\mathbf{park'}^{\op{abs}}_{W_{(y\cdot X)},m}( w ).
\]
By letting $ Y = y \cdot X$, this can be rewritten:
\begin{align*}
\Big(\mathbf{ind}_{W_X}^W\mathbf{park'}^{\op{abs}}_{W_X,m}\Big)(w)
=
[N(W_X):W_{X}] \cdot \sum_{\substack{ Y\in L(W) \\ Y\sim X \text{ and } Y \subset Z } } \mathbf{park'}^{\op{abs}}_{W_{Y},m}(w).
\end{align*}
To get the equality, it suffices to check that $[N(W_{X}):W_{X}]$ is the number of $y\in W / W_X $ such that $y\cdot X = Y$, for each $Y \sim X$. This is the orbit-stabilizer theorem; indeed, $N(W_X)$ is the subgroup of $W$ consisting of the elements $w\in W$ such that $w\cdot X = X$.
\end{proof}
\begin{rema} \label{evalcharrema}
The previous lemma holds with $\mathbf{park}^{\op{abs}}$ in place of $\mathbf{park'}^{\op{abs}}$, with a completely similar proof.
\end{rema}
Every parabolic subgroup $W_X$ of $W$ is a (possibly reducible) reflection group, and we would like a simpler notation for the values of the parking characters $\mathbf{park}^{\op{abs}}_{W_X,m}$ and $\mathbf{park'}^{\op{abs}}_{W_X,m}$. For this reason, we introduce the following notation.
\begin{notation}[multisets of Coxeter numbers]
Let $W$ be an irreducible reflection group, and $X, Z \in L(W)$ such that $X\subset Z$. Assume that $W_X=W_1 \times \cdots \times W_r$ is the decomposition into irreducible factors. Write $Z = \prod_{i=1}^r Z_i$, using as above $L(W_X) \simeq \prod_{i=1}^r L(W_i)$.
Note that this isomorphism is such that
\[
\dim(Z) - \dim(X) = \sum_{i=1}^r \dim( Z_i),
\]
which can be seen by comparing the rank functions on each side. Then we write the \emph{multiset of Coxeter numbers} associated to $X$ and $Z$ as:
\[
(h_i(X,Z))_{1 \leq i \leq \dim(Z)-\dim(X)}
:=
( \underbrace{h_1,\dots,h_1}_{\op{dim}(Z_1)\text{ times}},\dots,\underbrace{h_r,\dots,h_r}_{\op{dim}(Z_r)\text{ times}} ),
\]
where $h_i$ is the Coxeter number of $W_i$. With this notation, we can write the parking space characters for parabolic subgroups: for each $w\in W_X$ with $\Fix(w)=Z$, we have:
\begin{align*}
\mathbf{park}^{\op{abs}}_{W_X,m}(w)
&=
\prod_{i=1}^{\op{dim}(Z)-\op{dim}(X)}(mh_i(X,Z)+1), \\ \mathbf{park'}^{\op{abs}}_{W_X,m}(w)
&=
\prod_{i=1}^{\dim(Z)-\dim(X)}(mh_i(X,Z)-1).
\end{align*}
\end{notation}
What we need is a relation involving the Coxeter numbers of $W$ and those of its parabolic subgroups that generalizes the one which appeared in~\cite[Theorem~8.8]{chapuydouvropoulos}. Recall that this was given as follows: if $W$ is irreducible with Coxeter number $h$,
\begin{equation}\label{EQ: chapuy-douvr}
(h+t)^n
=
\sum_{X\in L(W) } \Big(\prod_{i=1}^{n-\dim(X)} h_i(X) \Big) \cdot t^{\op{dim}(X)}
\end{equation}
where $h_i(X)$ for $1\leq i \leq n-\dim(X)$ is the particular case of $(h_i(X,Z))_{1\leq i \leq \dim(Z)-\dim(X)}$ when $Z$ is the minimal element of $L(W)$. Such a generalization (see \cite[\S4]{douvr_recursions} for details) is the following:
\begin{prop}[Corollary of the restricted Laplacian recursion] \label{prop:laplacian}
For any $X,Z\in L(W)$ such that $X\subset Z$, we have the following relation between Coxeter numbers:
\begin{equation} \label{eq:corollaryoflaplacianrec}
\prod_{i=1}^{\dim(Z)-\dim(X)} \big(h_i(X,Z)+t\big)
=
\sum_{X\subset Y \subset Z}
\Bigg(\prod_{i=1}^{\dim(Z)-\dim(Y)} h_i(Y,Z) \Bigg) t^{\dim(Y)-\dim(X)}.
\end{equation}
\end{prop}
\begin{proof}
For $X,Z\in L(W)$ such that $X\subset Z$, consider the set
\[
\mathcal{A}^{X,Z}
:=
\Big\{
Y/X \; : \;
X\subset Y\subset Z,\; \dim(Y) = \dim(Z)-1
\Big\}
\]
which is a hyperplane arrangement in the quotient $Z/X$ (the essentialization of the restricted localization $\mathcal{A}_X^Z$). When $X$ is the $0$-dimensional subspace, we just denote $\mathcal{A}^{Z} = \mathcal{A}^{X,Z}$.
Generalizing~\cite{chapuydouvropoulos}, to this hyperplane arrangement there is an associated laplacian $\mathcal{L}_{X,Z}$ for which each hyperplane $K\in\mathcal{A}^{X,Z}$ is taken with multiplicity equal to $h_1(K,Z)$. Its characteristic polynomial is the left-hand side of \eqref{eq:corollaryoflaplacianrec}. This follows analogously to~\cite[Proposition~3.13]{chapuydouvropoulos}.
Now, the laplacian recursion~\cite[Proposition~8.3]{chapuydouvropoulos} relates the characteristic polynomial of $\mathcal{L}_{X,Z}$ to the determinants of $\mathcal{L}_{Y,Z}$ for $X\subset Y\subset Z$, and gives precisely the above identity.
\end{proof}
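As a computational illustration (outside the formal development), Equation~\eqref{EQ: chapuy-douvr} can be checked in type $A_2$. The following Python sketch hard-codes the flats of the $A_2$ arrangement together with the Coxeter numbers of the parabolics $W_X$; all this data is listed by hand.

```python
# Check of (h+t)^n = sum_X (prod_i h_i(X)) * t^(dim X) in type A2 (W = S_3),
# where n = 2 and h = 3.  The flat data below (orbit sizes and Coxeter
# numbers of the parabolics W_X) is entered by hand and is standard for A2.

n, h = 2, 3

# entries: (dim X, number of flats X of this kind, prod_i h_i(X))
flats = [
    (2, 1, 1),   # X = V: W_X is trivial, empty product of Coxeter numbers
    (1, 3, 2),   # X one of the 3 reflection lines: W_X of type A1, h = 2
    (0, 1, 9),   # X = {0}: W_X = W of type A2, Coxeter numbers (3, 3)
]

lhs = lambda t: (h + t) ** n
rhs = lambda t: sum(cnt * p * t ** d for d, cnt, p in flats)

# two degree-2 polynomials agreeing on 11 points must be equal
assert all(lhs(t) == rhs(t) for t in range(-5, 6))
```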
The hyperplane arrangement $\mathcal{A}^{Z}$ introduced above is called a {\it restricted arrangement}, see~\cite{orliksolomon,orlikterao}. The number $r(\mathcal{A}^{Z})$ of regions in the complement of this arrangement is given by
\begin{equation}
r(\mathcal{A}^{Z})
=
(-1)^{\dim(Z)} \chi_Z(-1)
=
\nu(Z) \cdot [ N(W_Z) : W_Z ].
\end{equation}
What we need is the latter equality, see~\cite[Equations~(4.1) and~(4.2)]{orliksolomon}. It can also be obtained by plugging $t=-1$ in~\eqref{psik_expansion}, using~\eqref{factornu} and~\eqref{epstensor} in the case $J=S$.
\begin{prop}
Assume that $W$ is irreducible, with Coxeter number $h$. For any $Z\in L(W)$, we have the following relation between Coxeter numbers:
\[
(mh+1)^{\dim(Z)}
=
\sum_{X\subset Z}r(\mathcal{A}^X) \cdot \prod_{i=1}^{\dim(Z)-\dim(X)} (m h_i(X,Z) - 1).
\]
\end{prop}
\begin{proof}
Consider Equation~\eqref{eq:corollaryoflaplacianrec} in the case where $X$ is the $0$-dimensional subspace, so that $h_i(X,Z)=h$. By replacing $t$ with $t/m$ and multiplying both sides by $m^{\op{dim}(Z)}$, we get
\[
(mh+t)^{\dim(Z)}=\sum_{Y\subset Z}\Big(\prod_{i=1}^{\dim(Z)-\dim(Y)} mh_i(Y,Z)\Big) \cdot t^{\dim(Y)}.
\]
By plugging $t^{\dim(Y)}=\sum_{X\subset Y}\chi(\mathcal{A}^X,t)$ (which is the inverse of \eqref{eq:def_chi}), we obtain
\begin{align*}
(mh+t)^{\dim(Z)}
&=
\sum_{X\subset Y \subset Z} \Big(\prod_{i=1}^{\op{dim}(Z)-\op{dim}(Y)}mh_i(Y,Z)\Big) \cdot \chi_{X}(t) \\
&=
\sum_{X \subset Z} \Big( \sum_{X \subset Y \subset Z } \prod_{i=1}^{\dim(Z)-\dim(Y)} mh_i(Y,Z) \Big) \cdot \chi_X(t).
\end{align*}
Again using~\eqref{eq:corollaryoflaplacianrec} (with $t=1$), this gives:
\[
(mh+t)^{\op{dim}(Z)}=\sum_{X\subset Z}\Big(\prod_{i=1}^{\op{dim}(Z)-\op{dim}(X)}\big(mh_i(X,Z)+1\big)\Big)\cdot \chi_X(t).
\]
Now, replacing $(m,t)$ with $(-m,-1)$ and getting rid of the signs, this becomes
\[
(mh+1)^{\op{dim}(Z)}
=
\sum_{X\subset Z} \Big( \prod_{i=1}^{\op{dim}(Z)-\op{dim}(X)} \big(mh_i(X,Z)-1\big) \Big) \cdot
(-1)^{\dim(X)} \chi_X(-1).
\]
This is precisely what we needed to prove.
\end{proof}
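The proposition can likewise be checked by hand in type $A_2$, taking $Z=V$. In the sketch below, the region counts $r(\mathcal{A}^X)$ and the multisets of Coxeter numbers are listed manually for the arrangement of $3$ lines in the plane.

```python
# Check of (mh+1)^(dim Z) = sum_{X in Z} r(A^X) * prod_i (m h_i(X,Z) - 1)
# in type A2 (n = 2, h = 3), taking Z = V.  Region counts r(A^X) and the
# Coxeter-number data are entered by hand for the A2 arrangement.

h = 3

# entries: (number of flats X of this kind, r(A^X), list of h_i(X, V))
flats = [
    (1, 1, [3, 3]),  # X = {0}: empty arrangement in a point, r = 1
    (3, 2, [2]),     # X a line: A^X is one point in the line, r = 2
    (1, 6, []),      # X = V: A^V is the full arrangement, r = 6 regions
]

def prod(xs):
    out = 1
    for x in xs:
        out *= x
    return out

for m in range(1, 6):
    rhs = sum(cnt * r * prod(m * hi - 1 for hi in his) for cnt, r, his in flats)
    assert rhs == (m * h + 1) ** 2
```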
\begin{theo}
We have:
\begin{equation} \label{theo_parkabsind}
\mathbf{park}^{\op{abs}}_{W,m}
=
\sum_{I\subset S} \mathbf{ind}_{W_I}^W ( \mathbf{park'}^{\op{abs}}_{W_I,m} ).
\end{equation}
\end{theo}
\begin{proof}
We assume that $W$ is irreducible, and let $h$ be its Coxeter number. We let the reader check that the reducible case follows.
Let $\bR$ denote the character in the right-hand side of~\eqref{theo_parkabsind}. Using the notation $\nu(X)$ (see Equation~\eqref{factornu}), we have:
\[
\bR
=
\sum_{ X \in \Theta}
\nu(X) \cdot \mathbf{ind}_{W_X}^W (\mathbf{park'}^{\op{abs}}_{W_X,m}).
\]
Let $w\in W$ and $Z=\Fix(w)$. Lemma~\ref{lemm:evalchar} tells us that the evaluation of the character is:
\begin{align*}
\bR(w)
=
\bR(w)
=
\sum_{X \in \Theta} \nu(X) \cdot [N(X):W_X] \cdot \sum_{\substack{ X'\in L(W), \\ X'\sim X\text{ and } X'\subset Z}}
\mathbf{park'}^{\op{abs}}_{W_{X'},m}(w).
This can be rewritten as a single sum over $X'$, and we get:
\begin{align*}
\bR(w)
&=
\sum_{\substack{ X\in L(W), \\ X\subset Z }} \nu(X) \cdot [N(X):W_X]\cdot
\mathbf{park'}^{\op{abs}}_{W_X,m}(w) \\
&=
\sum_{X\subset Z}r(\mathcal{A}^X) \cdot
\mathbf{park'}^{\op{abs}}_{W_X,m}(w)
\end{align*}
Indeed, recall $\nu(X) \cdot [N(X):W_X] = (-1)^{\dim(X)} \chi_X(-1) = r(\mathcal{A}^X)$. From Proposition~\ref{prop:laplacian}, this becomes $\bR(w) = (mh+1)^{\dim(Z)}$, so that $\bR = \mathbf{park}^{\op{abs}}_{W,m}$.
\end{proof}
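As a sanity check, the identity~\eqref{theo_parkabsind} can be verified by brute force for $W=S_3$ (type $A_2$, $n=2$, $h=3$), using the closed formulas recalled above for the characters on each standard parabolic. The parabolic data in the sketch below is entered by hand.

```python
from itertools import permutations
from fractions import Fraction

# Brute-force check of park_{W,m} = sum_I ind_{W_I}^W(park'_{W_I,m}) for
# W = S_3 (type A2, n = 2, h = 3).  For a parabolic W_I of rank r with
# Coxeter number h_I, the closed form recalled above gives, for w in W_I,
#   park'_{W_I,m}(w) = (m h_I - 1)^(dim Fix(w) - (n - r)),
# while park_{W,m}(w) = (mh + 1)^(dim Fix(w)).  Data entered by hand.

G = list(permutations(range(3)))
mul = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(sorted(range(3), key=lambda i: a[i]))

def fixdim(w):  # dimension of the fixed space in the reflection representation
    seen, ncyc = set(), 0
    for i in range(3):
        if i not in seen:
            ncyc += 1
            while i not in seen:
                seen.add(i)
                i = w[i]
    return ncyc - 1

def subgroup(gens):
    H = {(0, 1, 2)}
    while True:
        new = {mul(g, x) for g in gens for x in H} - H
        if not new:
            return H
        H |= new

s1, s2 = (1, 0, 2), (0, 2, 1)
# (subgroup, rank, Coxeter number); h_I is irrelevant when the rank is 0
parabolics = [(subgroup([]), 0, 1), (subgroup([s1]), 1, 2),
              (subgroup([s2]), 1, 2), (subgroup([s1, s2]), 2, 3)]

def induced(H, f, w):  # ind_H^G(f)(w), as a Fraction
    conj = [mul(mul(inv(x), w), x) for x in G]
    return Fraction(sum(f(v) for v in conj if v in H), len(H))

for m in range(1, 5):
    for w in G:
        rhs = sum(induced(H, lambda v: (m * hI - 1) ** (fixdim(v) - (2 - r)), w)
                  for H, r, hI in parabolics)
        assert rhs == (3 * m + 1) ** fixdim(w)
```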
\begin{theo} \label{theo:abs}
We have:
\[
\mathbf{park'}^{\op{abs}}_{W,m}
=
\sum_{I\subset S} (-1)^{\#S-\#I} \mathbf{ind}_{W_I}^W ( \mathbf{park}^{\op{abs}}_{W_I,m} ).
\]
\end{theo}
\begin{proof}
It is straightforward to adapt the proof of the previous theorem. We can also see it as a consequence: substituting $-m$ for $m$ in~\eqref{theo_parkabsind} and tensoring with $\boldsymbol{\epsilon}$ gives:
\begin{align*}
\mathbf{park'}^{\op{abs}}_{W,m}
&=
(-1)^n \sum_{I\subset S} \boldsymbol{\epsilon} \otimes \mathbf{ind}_{W_I}^W ( \mathbf{park'}^{\op{abs}}_{W_I,-m} ) \\
&=
(-1)^n \sum_{I\subset S} \mathbf{ind}_{W_I}^W ( \boldsymbol{\epsilon} \otimes \mathbf{park'}^{\op{abs}}_{W_I,-m} ) \\
&=
(-1)^n \sum_{I\subset S} \mathbf{ind}_{W_I}^W ( (-1)^{\#I} \mathbf{park}^{\op{abs}}_{W_I,m} ).
\end{align*}
We used the identity
\[
\boldsymbol{\epsilon} \otimes \mathbf{ind}_{W_I}^W (\chi)
=
\mathbf{ind}_{W_I}^W (\boldsymbol{\epsilon} \otimes \chi)
\]
(where the latter $\boldsymbol{\epsilon}$ denotes the sign character of $W_I$, which is also the restriction of the sign character of $W$, and $\chi$ is any character of $W_I$), which easily follows from the general formula for induction:
\[
\mathbf{ind}_{W_I}^W (\boldsymbol{\epsilon} \otimes \chi)(w)
=
\frac{1}{|W_I|}
\sum_{x\in W, \; x^{-1} w x \in W_I}
(\boldsymbol{\epsilon} \otimes \chi) (x^{-1} w x)
=
\frac{ \boldsymbol{\epsilon}(w) }{|W_I|}
\sum_{x\in W, \; x^{-1} w x \in W_I}
\chi (x^{-1} w x).
\]
\end{proof}
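The same brute-force check for $W=S_3$ applies to this signed identity; as before, the parabolic data in the sketch is entered by hand.

```python
from itertools import permutations
from fractions import Fraction

# Brute-force check of
#   park'_{W,m} = sum_I (-1)^(#S-#I) ind_{W_I}^W(park_{W_I,m})
# for W = S_3 (type A2, n = 2, h = 3).  Here, for a parabolic of rank r
# with Coxeter number h_I and w in W_I,
#   park_{W_I,m}(w) = (m h_I + 1)^(dim Fix(w) - (n - r)),
# and the left-hand side is (mh - 1)^(dim Fix(w)).  Data entered by hand.

G = list(permutations(range(3)))
mul = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(sorted(range(3), key=lambda i: a[i]))

def fixdim(w):
    seen, ncyc = set(), 0
    for i in range(3):
        if i not in seen:
            ncyc += 1
            while i not in seen:
                seen.add(i)
                i = w[i]
    return ncyc - 1

def subgroup(gens):
    H = {(0, 1, 2)}
    while True:
        new = {mul(g, x) for g in gens for x in H} - H
        if not new:
            return H
        H |= new

s1, s2 = (1, 0, 2), (0, 2, 1)
parabolics = [(subgroup([]), 0, 1), (subgroup([s1]), 1, 2),
              (subgroup([s2]), 1, 2), (subgroup([s1, s2]), 2, 3)]

def induced(H, f, w):
    conj = [mul(mul(inv(x), w), x) for x in G]
    return Fraction(sum(f(v) for v in conj if v in H), len(H))

for m in range(1, 5):
    for w in G:
        rhs = sum((-1) ** (2 - r) *
                  induced(H, lambda v: (m * hI + 1) ** (fixdim(v) - (2 - r)), w)
                  for H, r, hI in parabolics)
        assert rhs == (3 * m - 1) ** fixdim(w)
```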
We have thus proved that $\mathbf{park}^{\op{abs}}_{W,m}$ and $\mathbf{park'}^{\op{abs}}_{W,m}$ satisfy the same identities as $\mathbf{park}_{W,m}$ and $\mathbf{park\mathrlap{'}}_{W,m}$ in Proposition~\ref{lemm:inclexcl}.
\begin{proof}[Proof of Theorem~\ref{theo_fpark}]
We have $\mathbf{park}_{W_I,m} = \mathbf{park}^{\op{abs}}_{W_I,m}$ for all $I\subset S$ by Theorem~\ref{theo_park}. From the second identity in Proposition~\ref{prop:inclexcl} and the previous theorem, we immediately get $\mathbf{park'}^{\op{abs}}_{W,m} = \mathbf{park\mathrlap{'}}_{W,m}$.
\end{proof}
\section{Proof of Theorem~\ref{theo_fpark} via combinatorial matrices}
\label{parkingproof2}
The goal of this section is to prove Identity~\eqref{eq:relationchar} below, as the proof of Theorem~\ref{theo_fpark} will immediately follow.
\begin{prop} \label{prop:relationchar}
We have:
\begin{equation} \label{eq:relationchar}
\mathbf{park\mathrlap{'}}_{W,m} = \bPsi_{-1} \otimes \; \mathbf{park}_{W,-m}.
\end{equation}
\end{prop}
\begin{proof}[Proof of Theorem~\ref{theo_fpark}]
If $W$ is irreducible, we have $\mathbf{park}_{W,m} = \bPsi_{mh+1}$ by Theorem~\ref{theo_park}. The right-hand side of \eqref{eq:relationchar} is then $\bPsi_{-1} \otimes \bPsi_{-mh+1} = \bPsi_{mh-1}$. We thus get $\mathbf{park\mathrlap{'}}_{W,m} = \bPsi_{mh-1}$. The reducible case follows immediately.
\end{proof}
The proof of Proposition~\ref{prop:relationchar} relies on combinatorial properties of the two order relations $\sqsubset$ and $\ll$ on $\NC(W)$. We begin with a few definitions and lemmas.
Below, we identify elements in $R(W)$ with column vectors indexed by $\Theta$. We also use matrices with rows and columns indexed by $\Theta$. In both cases, we use bold fonts as we do for elements of $R(W)$. Note that $\otimes$ still denotes the product of $R(W)$, but we use no symbol for matrix products.
\begin{defi}
We define three matrices $\bQ$, $\bR$, and $\bN$ by:
\begin{align*}
\bQ_{X,Y} &= \# \Big\{ v \in\NC(W,c) \; : \;
\Fix(v) \sim X \text{ and } v\leq w \Big\}, \\
\bR_{X,Y} &= \# \Big\{ v \in\NC(W,c) \; : \;
\Fix(v) \sim X \text{ and } v\ll w \Big\}, \\
\bN_{X,Y} &= \# \Big\{ v \in\NC(W,c) \; : \;
\Fix(v) \sim X \text{ and } v\sqsubset w \Big\},
\end{align*}
where $w$ is a fixed element of $\NC(W)$ such that $\Fix(w)\sim Y$. Also we define $\bD$ as the diagonal matrix such that, for all $X\in \Theta$:
\[
\bD_{X,X} = (-1)^{n - \dim X}.
\]
Moreover, $\bI$ denotes the identity matrix.
\end{defi}
It turns out that these matrices do not depend on the chosen element $w$.
Note that the matrices $\bQ$, $\bR$, and $\bN$ are unitriangular (upon ordering $\Theta$ so that dimension is increasing).
\begin{lemm} \label{lemm:solomon2}
For any $\bX \in R(W)$, we have: $\boldsymbol{\epsilon} \otimes \bX = \bD \bN \bX$.
\end{lemm}
\begin{proof}
This is a rewriting of Lemma~\ref{lemm_solomon}.
\end{proof}
\begin{lemm} \label{lemm:ninverse}
We have $ (\bD \bN)^2 = \bI$.
\end{lemm}
\begin{proof}
This follows from the previous lemma, since $\boldsymbol{\epsilon} \otimes \boldsymbol{\epsilon} = {\bf 1}$. Alternatively, one can see the relation $\bN^{-1} = \bD \bN \bD$ as a reformulation of the inclusion-exclusion principle on subsets of $S$.
\end{proof}
\begin{lemm} \label{lemm:RNQ}
We have $\bR \bN = \bQ$.
\end{lemm}
\begin{proof}
A combinatorial expansion of the product $\bR \bN$ shows that the $X,Y$-coefficient is the number of pairs $(u,v)$ such that $u\ll v \sqsubset w$ and $\Fix(u) \sim X$, where $w$ is a fixed element such that $\Fix(w) \sim Y$. So this is an immediate consequence of Proposition~\ref{prop:order_uvw}.
\end{proof}
\begin{lemm} \label{lemm:Qinverse}
We have $\bQ^{-1} = (\bD \bN) \bQ (\bD \bN)$.
\end{lemm}
\begin{proof}
A combinatorial expansion of the product $ (\bR \bD)^2$ shows that the $X,Y$-coefficient is:
\[
\sum_{\substack{ u,v \in \NC(W) \\ u \ll v \ll w, \; \Fix(u) \sim X} }
(-1)^{\ell(v) + \ell(w)}
\]
where $w \in \NC(W)$ is such that $\Fix(w) \sim Y$. By Lemma~\ref{lemm:bool}, we get an alternating sum over a Boolean lattice. So this is $0$ if $X \neq Y$, and $1$ if $X=Y$ (the two signs cancel each other out). We thus have $ (\bR \bD)^2 = \bI $.
Since $\bR = \bQ \bN^{-1}$ by Lemma~\ref{lemm:RNQ}, we get $ (\bQ \bN^{-1} \bD)^2 = \bI $, so that:
\[
\bQ^{-1} = \bN^{-1} \bD \bQ \bN^{-1} \bD.
\]
We have $ \bN^{-1} \bD = \bD \bN$ from Lemma~\ref{lemm:ninverse}, and the result follows.
\end{proof}
\begin{defi}
Let $\bU$ denote the column vector that corresponds to the trivial character ${\bf 1} \in R(W)$ under our convention. Explicitly, it is given by:
\[
\bU_X = \begin{cases}
1 &\text{if $X$ is the maximal element of $L(W)$}, \\
0 &\text{otherwise}.
\end{cases}
\]
\end{defi}
Indeed, by~\eqref{def_WX} we have $W_X = W$ if $X$ is the $0$-dimensional subspace. It follows that $\bPhi_X = \mathbf{ind}_{W_X}^W({\bf 1}) = {\bf 1}$.
\begin{prop}
We have:
\begin{align}
\label{park_matrices}
\mathbf{park}_{W,m} &= \bQ^m \bU, \\
\mathbf{park\mathrlap{'}}_{W,m} &= \bQ^{m-1} \bR \bU.
\end{align}
\end{prop}
\begin{proof}
By a combinatorial expansion of the product $\bQ^m \bU$, we find that the coefficient of $\bPhi_X$ is the number of chains $w_1 \leq w_2 \leq \dots \leq w_m \leq c$ in $\NC(W)$ with $\Fix(w_1) \sim X$. Doing the same in the product $\bQ^{m-1} \bR \bU$, we find that the coefficient of $\bPhi_X$ is the number of chains $w_1 \leq w_2 \leq \dots \leq w_m \ll c$ in $\NC(W)$ with $\Fix(w_1) \sim X$. We thus recover the combinatorial definitions of $\mathbf{park}_{W,m}$ and $\mathbf{park\mathrlap{'}}_{W,m}$ in \eqref{parkdef} and~\eqref{fparkdef}.
\end{proof}
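For $W=S_3$ (with $\Theta$ ordered by increasing dimension as $[V,\text{line},\{0\}]$), all the matrices above can be written down by hand: $\bQ$ from $\NC(S_3)$, $\bN$ by reading off the matrix of $\boldsymbol{\epsilon}\otimes-$ in the basis $(\bPhi_X)$ as in Lemma~\ref{lemm:solomon2}, and $\bR$ is then forced by $\bR\bN=\bQ$. The following sketch checks the lemmas and the proposition on this hand-entered data.

```python
# Hand-computed matrices for W = S_3 (type A2), Theta = [V, line, {0}]:
# Q comes from NC(S_3), N from the matrix of epsilon (x) - in the basis
# (Phi_X), and R from the relation RN = Q.  Phi lists the values of the
# characters Phi_X at the classes (identity, transposition, 3-cycle).

Q = [[1, 1, 1], [0, 1, 3], [0, 0, 1]]
N = [[1, 1, 1], [0, 1, 2], [0, 0, 1]]
R = [[1, 0, 0], [0, 1, 1], [0, 0, 1]]
D = [[1, 0, 0], [0, -1, 0], [0, 0, 1]]  # D_XX = (-1)^(n - dim X), n = 2
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def mat(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

DN = mat(D, N)
assert mat(DN, DN) == I3                  # (DN)^2 = I
assert mat(R, N) == Q                     # RN = Q
assert mat(Q, mat(DN, mat(Q, DN))) == I3  # Q^{-1} = (DN) Q (DN)

Phi = [[6, 0, 0], [3, 1, 0], [1, 1, 1]]   # Phi_V, Phi_line, Phi_{0}
U = [0, 0, 1]
fixdims = [2, 1, 0]

for m in range(1, 6):
    qm, qm1r = U, matvec(R, U)
    for _ in range(m):
        qm = matvec(Q, qm)
    for _ in range(m - 1):
        qm1r = matvec(Q, qm1r)
    park = [sum(qm[x] * Phi[x][j] for x in range(3)) for j in range(3)]
    parkp = [sum(qm1r[x] * Phi[x][j] for x in range(3)) for j in range(3)]
    assert park == [(3 * m + 1) ** d for d in fixdims]   # park = Q^m U
    assert parkp == [(3 * m - 1) ** d for d in fixdims]  # park' = Q^(m-1) R U
```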
\begin{proof}[Proof of Proposition~\ref{prop:relationchar}]
By~\eqref{park_matrices} and Lemma~\ref{lemm:solomon2}, we have:
\[
\boldsymbol{\epsilon} \otimes \mathbf{park}_{W,-m}
=
\bD \bN \bQ^{-m} \bU.
\]
We have $\bQ^{-1} = (\bD \bN) \bQ (\bD \bN)^{-1}$ by Lemma~\ref{lemm:Qinverse}, so we get:
\[
\boldsymbol{\epsilon} \otimes \mathbf{park}_{W,-m}
=
(\bD\bN)^2 \bQ^{m} (\bD\bN)^{-1} \bU.
\]
We have $(\bD\bN)^2 = \bI$ by Lemma~\ref{lemm:ninverse}, so we get:
\[
\boldsymbol{\epsilon} \otimes \mathbf{park}_{W,-m}
=
\bQ^m \bN^{-1} \bD \bU.
\]
We have $\bQ \bN^{-1} = \bR$ by Lemma~\ref{lemm:RNQ}, and $\bD \bU = (-1)^n \bU$ is easily checked. We thus get $ (-1)^n \bQ^{m-1} \bR \bU$ on the right-hand side of the previous equation. By the previous proposition, this is $ (-1)^n \mathbf{park\mathrlap{'}}_{W,m}$. This completes the proof since $(-1)^n \boldsymbol{\epsilon} = \bPsi_{-1}$.
\end{proof}
\section{The generalized cluster complex}
\label{sec:defclus}
We review Fomin and Reading's generalized cluster complex, introduced in~\cite{fominreading}, and Tzanaki's characterization from~\cite{tzanaki}. Afterwards, we introduce in Definition~\ref{def_facetoTheta} the natural way to associate a parabolic conjugacy class to each face of this complex. This will be the basis of the refined enumeration of faces.
Throughout this section, we assume that $c$ is a \emph{bipartite Coxeter element}: we can write $S= S_+ \uplus S_-$ where each of $S_+$ and $S_-$ consists of pairwise commuting reflections, and $c = c_+ c_- $ where $c_\pm$ is the product of the elements in $S_\pm$.
Let $\Phi \subset V$ be a \emph{root system} for $W$ (in the sense of Coxeter groups), $\Phi_+\subset \Phi$ a set of positive roots, and $\Delta \subset \Phi_+$ the corresponding set of simple roots. We denote by $R:\Phi \to T$ the natural map that sends $\alpha$ to the unique $t\in T$ such that $t(\alpha)=-\alpha$. Finally, let $\Delta_+$ and $\Delta_-$ be the sets of simple roots such that $R(\Delta_+) = S_+$ and $R(\Delta_-)=S_-$.
\subsection{Colored almost-positive roots}
Let $[m]=\{1,\dots,m\}$. We think of it as a set of ``colors'' that we use to define {\it colored roots}.
\begin{defi}[{Fomin and Reading~\cite{fominreading}}]
An {\it $m$-colored root} is a pair $(\alpha,i)$, denoted $\alpha^i$ for short, where $\alpha\in\Phi$ and $i\in [m]$. It is {\it positive} if $\alpha\in\Phi_+$. It is {\it almost-positive} if either $\alpha\in\Phi_+$, or $\alpha\in -\Delta$ and $i=1$. We denote $\Phi^{(m)}_{\geq -1}$ the set of almost-positive $m$-colored roots, and $\Phi^{(m)}_{+}$ the set of positive $m$-colored roots.
\end{defi}
When $m=1$, we can ignore colors and $\Phi^{(1)}_{\geq -1}$ is thus identified with $\Phi_{\geq -1} := \Phi_+ \cup (-\Delta)$. It is the set of {\it almost-positive roots} introduced by Fomin and Zelevinsky to define the cluster complex~\cite{fominzelevinsky}. In general, we also identify $\rho \in -\Delta$ with the colored root $\rho^{1}$ (as no confusion can arise).
Fomin and Zelevinsky~\cite{fominzelevinsky} defined a rotation $\mathcal{R}$ on $\Phi_{\geq -1}$ by $\mathcal{R} = \mathcal{R}_+ \circ \mathcal{R}_-$, where:
\[
\mathcal{R}_+ (\rho) =
\begin{cases}
\rho & \text{ if } \rho \in -\Delta_-, \\
c_+(\rho) & \text{otherwise},
\end{cases}
\qquad
\mathcal{R}_- (\rho) =
\begin{cases}
\rho & \text{ if } \rho \in -\Delta_+, \\
c_-(\rho) & \text{otherwise}.
\end{cases}
\]
Explicitly, we can check that:
\[
\mathcal{R}(\rho)
=
\begin{cases}
-\rho &\text{if } \rho \in (-\Delta_+) \cup \Delta_-, \\
c(\rho) &\text{otherwise}.
\end{cases}
\]
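In type $A_2$, one can verify this explicit formula against the definition $\mathcal{R} = \mathcal{R}_+ \circ \mathcal{R}_-$ directly. The sketch below writes roots in coordinates over the simple basis, with $\Delta_+=\{\alpha_1\}$ and $\Delta_-=\{\alpha_2\}$ (so $c_+=s_1$ and $c_-=s_2$); it also checks that $\mathcal{R}$ has a single orbit of size $h+2=5$ on $\Phi_{\geq -1}$.

```python
# Check, in type A2, that the explicit formula for R agrees with R_+ o R_-.
# Roots are written in coordinates over the simple basis (alpha1, alpha2),
# with Delta_+ = {alpha1}, Delta_- = {alpha2}, so that c_+ = s1, c_- = s2.

a1, a2 = (1, 0), (0, 1)
neg = lambda r: (-r[0], -r[1])
s1 = lambda r: (r[1] - r[0], r[1])   # reflection in the root alpha1
s2 = lambda r: (r[0], r[0] - r[1])   # reflection in the root alpha2
c = lambda r: s1(s2(r))

almost_positive = [a1, a2, (1, 1), neg(a1), neg(a2)]

Rminus = lambda r: r if r == neg(a1) else s2(r)   # fixes -Delta_+
Rplus = lambda r: r if r == neg(a2) else s1(r)    # fixes -Delta_-
R = lambda r: Rplus(Rminus(r))

# explicit formula: R(rho) = -rho on (-Delta_+) u Delta_-, else c(rho)
Rexp = lambda r: neg(r) if r in (neg(a1), a2) else c(r)

assert all(R(r) == Rexp(r) for r in almost_positive)
assert all(R(r) in almost_positive for r in almost_positive)

# a single orbit of size h + 2 = 5 on the almost-positive roots
orbit, r = {a1}, R(a1)
while r != a1:
    orbit.add(r)
    r = R(r)
assert len(orbit) == 5
```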
Following Fomin and Reading~\cite{fominreading},
the higher order rotation $\mathcal{R}_m$ acting on $\Phi_{\geq -1}^{(m)}$ is defined as follows:
\begin{equation} \label{def:Rm1}
\mathcal{R}_m(\alpha^i)
=
\begin{cases}
\alpha^{i+1} & \text{ if } \alpha\in\Phi_+ \text{ and } i<m, \\
\mathcal{R}(\alpha)^1 & \text{ otherwise.}
\end{cases}
\end{equation}
It will be useful to make the second case more explicit:
\begin{align} \label{def:Rm_2}
\mathcal{R}_m(\alpha^m)
&=
\begin{cases}
(-\alpha)^1 & \text{ if } \alpha\in \Delta_-, \\
c(\alpha)^1 & \text{ if } \alpha\in \Phi_+ \backslash \Delta_-,
\end{cases}
\\
\mathcal{R}_m(\alpha^1)
&=
\begin{cases}
(-\alpha)^1 & \text{ if } \alpha\in -\Delta_+, \\
c(\alpha)^1 & \text{ if } \alpha\in -\Delta_-.
\end{cases}
\end{align}
\begin{prop}[\cite{fominreading}] \label{prop:mrotationorbits}
Assume $W$ is irreducible, with Coxeter number $h$. Then each orbit $\omega$ for the action of $\mathcal{R}_m$ on $\Phi^{(m)}_{\geq -1}$ satisfies:
\begin{itemize}
\item either $\# \omega = \frac{mh+2}2$ and $\# (-\Delta) \cap \omega = 1$,
\item or $\# \omega = mh+2$ and $\# (-\Delta) \cap \omega = 2$.
\end{itemize}
Moreover, in the latter case the two elements $-\delta_i$ and $-\delta_j$ of $(-\Delta) \cap \omega$ are related by $w_\circ(\delta_i)=\delta_j$, where $w_\circ$ is the longest element of $W$.
\end{prop}
Note that the previous proposition says that negative simple roots have the same proportion in each orbit.
This is similar to a classical result by Steinberg~\cite{steinberg} for the action of the bipartite Coxeter element on $T$ by conjugation. It states that each orbit is:
\begin{itemize}
\item either an $h/2$-element set containing one simple reflection,
\item or an $h$-element set containing two simple reflections.
\end{itemize}
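Steinberg's statement is immediate to check for $W=S_3$ ($h=3$): conjugation by the bipartite Coxeter element permutes the three transpositions in a single $h$-element orbit containing both simple reflections, as the following sketch confirms.

```python
# Check of Steinberg's orbit statement for W = S_3 (type A2, h = 3): the
# bipartite Coxeter element c = s1 s2 acts by conjugation on the three
# reflections with a single h-element orbit containing two simples.

mul = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(sorted(range(3), key=lambda i: a[i]))

s1, s2 = (1, 0, 2), (0, 2, 1)
c = mul(s1, s2)
T = [s1, s2, (2, 1, 0)]  # the three transpositions of S_3

orbits, seen = [], set()
for t in T:
    if t not in seen:
        orb, u = set(), t
        while u not in orb:
            orb.add(u)
            u = mul(mul(c, u), inv(c))
        orbits.append(orb)
        seen |= orb

assert len(orbits) == 1 and len(orbits[0]) == 3   # one orbit of size h
assert s1 in orbits[0] and s2 in orbits[0]        # containing two simples
```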
\subsection{The compatibility relation}
The generalized cluster complex can be defined as the flag simplicial complex associated to a binary relation on $\Phi^{(m)}_{\geq-1}$, called {\it compatibility}.
\begin{defi}[\cite{fominreading}] \label{def:defcompat}
The {\it compatibility relation} $\mathrel{\|}$ is the unique symmetric and irreflexive binary relation on $\Phi^{(m)}_{\geq -1}$, characterized by the conditions:
\begin{itemize}
\item for all $\alpha,\beta\in \Phi^{(m)}_{\geq -1}$, we have $\alpha \mathrel{\|} \beta$ iff $\mathcal{R}_m(\alpha) \mathrel{\|} \mathcal{R}_m(\beta)$,
\item if $\alpha\in -\Delta$ and $\beta\in \Phi^{(m)}_{\geq -1}$, we have $\alpha \mathrel{\|} \beta$ iff the simple root $-\alpha$ does not appear in the expansion of $\beta$ (forgetting its color) as a linear combination of simple roots.
\end{itemize}
\end{defi}
In particular, the elements of $-\Delta$ are mutually compatible by the second condition. Note that these two conditions suffice to decide if $\alpha \mathrel{\|} \beta$ holds, for any $\alpha,\beta \in \Phi^{(m)}_{\geq -1}$. Indeed, we can proceed as follows:
\begin{itemize}
\item By Proposition~\ref{prop:mrotationorbits}, there exists $i$ such that $\mathcal{R}_m^i(\alpha)$ is in $-\Delta$,
\item By the first condition above, $\alpha \mathrel{\|} \beta$ holds iff $\mathcal{R}_m^i(\alpha) \mathrel{\|} \mathcal{R}_m^i(\beta)$ holds,
\item By the second condition above, we can decide if $\mathcal{R}_m^i(\alpha) \mathrel{\|} \mathcal{R}_m^i(\beta)$ by expanding $\mathcal{R}_m^i(\beta)$ in terms of simple roots.
\end{itemize}
In particular, it follows that there exists at most one such relation $\mathrel{\|}$. Fomin and Reading~\cite{fominreading} proved that such a relation does exist.
\begin{defi}[\cite{fominreading}]
The {\it generalized cluster complex} $\Upsilon(W,m)$ is the flag simplicial complex generated by the compatibility relation $\|$ of Definition~\ref{def:defcompat}: its vertex set is $\Phi^{(m)}_{\geq -1}$, and its faces are sets of pairwise compatible vertices. The {\it positive part} of $\Upsilon(W,m)$, denoted $\Upsilon^+(W,m)$, is its full subcomplex having $\Phi^{(m)}_{+}$ as vertex set. In the case $m=1$, we write $\Upsilon(W) := \Upsilon(W,1)$ and $\Upsilon^+(W) := \Upsilon^+(W,1)$.
\end{defi}
Before stating more properties of $\Upsilon(W,m)$ and $\Upsilon^+(W,m)$, let us give some reformulations of the definition.
\subsection{Reflection ordering}
An alternative characterization of the complex $\Upsilon(W,m)$ can be given via a reflection ordering. It is due to Brady and Watt~\cite{bradywatt} in the case $m=1$, and Tzanaki~\cite{tzanaki} in the general case.
\begin{prop}[\cite{steinberg}]
\label{prop:reforder}
There exists an indexing $\Phi_{\geq -1} = ( \alpha_i )_{1\leq i \leq nh/2+n}$ with the following properties:
\begin{itemize}
\item $c(\alpha_i) = \alpha_{i+n}$ for $1\leq i \leq nh/2$,
\item $-\Delta_- = \{ \alpha_1 , \dots , \alpha_r \}$ and $\Delta_+ = \{ \alpha_{r+1} , \dots , \alpha_n \}$ (where $r$ is the cardinality of $\Delta_-$),
\item $\Delta_- = \{ \alpha_{nh/2+1} , \dots , \alpha_{nh/2+r} \}$ and $-\Delta_+ = \{ \alpha_{nh/2+r+1} , \dots , \alpha_{nh/2+n} \}$.
\end{itemize}
\end{prop}
Note that the first two conditions above uniquely define the sequence. It remains to check the last condition, and the fact that each element of $\Phi_{\geq -1}$ appears exactly once. In fact, Steinberg's construction in~\cite{steinberg} gives an indexing of the whole root system $\Phi = ( \alpha_i )_{1\leq i \leq nh}$ from which we can extract the above indexing of $\Phi_{\geq -1}$.
\begin{rema}
The map $\mathcal{R}$ is essentially a rotation through this indexing:
\[
\mathcal{R}(\alpha_i) =
\begin{cases}
\alpha_{i+n} & \text{ if } 1\leq i \leq nh/2, \\
w_\circ (\alpha_{i-nh/2}) & \text{ otherwise}
\end{cases}
\]
where $w_\circ$ is the longest element in $W$.
\end{rema}
The indexing of the previous proposition defines a total order $\prec$ on $\Phi_{\geq -1}$ by the condition $\alpha_i\prec\alpha_j \Longleftrightarrow i<j$. We refer to it as the {\it reflection ordering}.
\begin{prop}[{Brady and Watt~\cite[Section~8]{bradywatt}}]
Let $(\rho_i)_{1\leq i \leq n}$ be a tuple of $n$ distinct elements of $\Phi_{\geq -1}$, ordered so that $\rho_1 \succ \rho_2 \succ \dots \succ \rho_n$. Then it is a facet of $\Upsilon(W)$ iff $c = R(\rho_1) R(\rho_2) \cdots R(\rho_n)$.
\end{prop}
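For $W=S_3$ ($m=1$), the criterion is easy to run by hand. In the sketch below, the reflection ordering computed from Proposition~\ref{prop:reforder} is $(-\alpha_2,\ \alpha_1,\ \alpha_1+\alpha_2,\ \alpha_2,\ -\alpha_1)$ (with $\Delta_-=\{\alpha_2\}$ and $\Delta_+=\{\alpha_1\}$), and only the corresponding reflections are encoded.

```python
# Check of the Brady--Watt facet criterion for W = S_3 (type A2, m = 1).
# The reflection ordering (alpha_1, ..., alpha_5) is computed by hand:
#   (-alpha2, alpha1, alpha1+alpha2, alpha2, -alpha1),
# and refl lists the corresponding reflections R(alpha_i).

mul = lambda a, b: tuple(a[b[i]] for i in range(3))

s1, s2, s12 = (1, 0, 2), (0, 2, 1), (2, 1, 0)  # (01), (12), (02)
c = mul(s1, s2)

refl = [s2, s1, s12, s2, s1]
is_pos = [False, True, True, True, False]  # is alpha_{i+1} a positive root?

# facets: pairs rho_1 > rho_2 (indices i > j) with R(rho_1) R(rho_2) = c
facets = [(i, j) for i in range(5) for j in range(i)
          if mul(refl[i], refl[j]) == c]

assert len(facets) == 5                     # Cat(A2) = 5 facets in total
positive = [f for f in facets if all(is_pos[k] for k in f)]
assert len(positive) == 2                   # Cat_+(A2) = 2 positive facets
```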
The extension to general $m$ is as follows. Define an indexing $\Phi^{(m)}_{\geq -1} = ( \beta_i )_{1\leq i \leq mnh/2+n}$, and accordingly a total order $\prec$ on $\Phi^{(m)}_{\geq -1}$ (we omit the dependence on $m$ in the notation), as the concatenation of the three following sequences:
\begin{itemize}
\item $\alpha^1_1,\dots,\alpha^1_r$ (elements of $-\Delta_-$ with color $1$),
\item $ \alpha^m_{r+1}, \dots, \alpha^m_{nh/2+r},
\alpha^{m-1}_{r+1}, \dots, \alpha^{m-1}_{nh/2+r},
\dots, \alpha^{1}_{r+1}, \dots, \alpha^{1}_{nh/2+r}$ (colored positive roots),
\item $\alpha^1_{nh/2+r+1},\dots,\alpha^1_{nh/2+n}$ (elements of $-\Delta_+$ with color $1$).
\end{itemize}
Note that the rotation $\mathcal{R}_m$ has {\it a priori} no simple description using this order, unlike in the case $m=1$.
\begin{prop}[Tzanaki~\cite{tzanaki}] \label{prop:tzanaki}
Let $(\rho_i)_{1\leq i \leq n}$ be a tuple of $n$ distinct elements of $\Phi^{(m)}_{\geq -1}$, ordered so that $\rho_1 \succ \rho_2 \succ \dots \succ \rho_n$. Then it is a facet of $\Upsilon(W,m)$ iff $c = R(\rho_1) \cdots R(\rho_n)$.
\end{prop}
In what follows, we generally assume that each face $f\in\Upsilon(W,m)$ is indexed in decreasing order, and write $f = \{ \rho_1 \succ \rho_2 \succ \cdots \succ \rho_k \}$.
This completes the definition and characterization of the generalized cluster complex. Note that our exposition is not exhaustive: another construction of $\Upsilon(W,m)$ is via {\it subword complexes}, see Stump, Thomas and Williams~\cite{stumpthomaswilliams}.
\subsection{Other properties}
Fomin and Reading proved various properties of their generalized cluster complex, some of which will be useful in the present work. First, $\Upsilon(W,m)$ and $\Upsilon^+(W,m)$ are pure of dimension $n-1$. Their numbers of facets are, respectively, the Fuß-Catalan number $\Cat^{(m)}(W)$ and the positive Fuß-Catalan number $\Cat_+^{(m)}(W)$. Another property that naturally follows from the definition is:
\begin{prop}
Let $\rho \in \Delta$, $s=R(\rho)$, and denote the irreducible factors of $W_{(s)}$ by $W_1$, $W_2$, etc. Then the link of the vertex $-\rho$ in $\Upsilon(W,m)$ is the join $\Upsilon(W_1,m) \star \Upsilon(W_2,m) \star \cdots$.
\end{prop}
By the above result, it is natural to extend the definition of $\Upsilon(W,m)$ in the reducible case, by declaring that it is the join of the simplicial complexes $\Upsilon(W',m)$ where $W'$ runs through the irreducible factors of $W$.
The results of Brady and Watt and their extension by Tzanaki make clear that the cluster complex is related to minimal factorizations of the Coxeter element. In particular, to each face we can associate a noncrossing partition in a natural way:
\begin{defi}
\label{def_facetoTheta0}
Let $f = \{ \rho_1 \succ \dots \succ \rho_k \} \in \Upsilon(W,m)$. We define
\[
\prod f := R(\rho_1) \cdots R(\rho_k) \in \NC(W).
\]
\end{defi}
To see that this is well-defined, first note that from Proposition~\ref{prop:tzanaki} we have $R(\rho_1) \cdots R(\rho_k) \allowbreak = c$ if $k=n$ ({\it i.e.}, $f$ is a facet). In general, since a face is a subset of a facet we get that $R(\rho_1) \cdots R(\rho_k)$ is a subword of a minimal reflection factorization of $c$. It follows that $\prod f \in \NC(W)$.
\begin{prop} \label{lemm:upsilonfusscat}
For $w\in \NC(W)$, we have:
\[
\# \Big\{ f \in \Upsilon^+(W,m) \;:\; \prod f = w \Big\}
=
\Cat^{(m)}_+(W_{\Fix(w)}).
\]
\end{prop}
\begin{proof}
Essentially, it is possible to identify those $f \in \Upsilon^+(W,m)$ such that $\prod f = w$ with facets of $\Upsilon^+(W_{\Fix(w)},m)$. However, some care is needed to do that. Although $w$ is a Coxeter element of $W_{\Fix(w)}$ by Proposition~\ref{prop:wcox}, it might not be a bipartite Coxeter element. We thus need to use the generalized cluster complex associated to an arbitrary standard Coxeter element, following the definition of Stump, Thomas, and Williams~\cite{stumpthomaswilliams}.
An alternative path is to first get the result for $m=1$. Indeed,~\cite[Proposition~6.3]{bianejosuat} gives a characterization of the compatibility relation on (uncolored) positive roots which is clearly stable under restriction to parabolic subgroups, allowing the identification mentioned above. Now, we have:
\begin{equation} \label{iddd}
\# \Big\{ f \in \Upsilon^+(W,m) \;:\; \prod f = w \Big\}
=
\sum_{w = w_1 \cdots w_m }
\prod_{i=1}^m
\# \Big\{ f \in \Upsilon^+(W) \;:\; \prod f = w_i \Big\},
\end{equation}
where we sum over length-additive factorizations with $m$ factors. Indeed, this can be proved bijectively: to $f \in \Upsilon^+(W,m)$ we associate $f_1,\dots,f_m \in \Upsilon^+(W)$ where $f_i$ contains roots in $f$ of color $i$. Using the case $m=1$ of the proposition, we get
\[
\# \Big\{ f \in \Upsilon^+(W,m) \;:\; \prod f = w \Big\}
=
\sum_{w = w_1 \cdots w_m }
\prod_{i=1}^m \Cat_+(W_{\Fix(w_i)}).
\]
The result follows from the identity:
\[
\Cat^{(m)}_+(W_{\Fix(w)})
=
\sum_{w = w_1 \cdots w_m } \prod_{i=1}^m \Cat_+(W_{\Fix(w_i)}),
\]
where we sum over length-additive factorizations with $m$ factors. This identity can be proved as follows: since it is invariant under conjugating $w$, we can assume $w$ is a bipartite Coxeter element of $W_{\Fix(w)}$, and the result comes from double counting of facets in $\Upsilon(W_{\Fix(w)},m)$ as above.
\end{proof}
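The last identity in the proof can be checked directly for $W=S_3$ and $m=2$, with $\NC(S_3)$ computed from reflection lengths; the $\Cat_+$ values of the parabolic types are entered by hand.

```python
from itertools import permutations

# Check of Cat_+^(m)(W) = sum over length-additive c = w_1 w_2 of
# Cat_+(W_Fix(w_1)) * Cat_+(W_Fix(w_2)), for W = S_3 (type A2), m = 2.
# Hand-entered values: Cat_+(trivial) = 1, Cat_+(A1) = 1, Cat_+(A2) = 2,
# and Cat_+^(2)(A2) = m(3m+1)/2 at m = 2, i.e. 7.

G = list(permutations(range(3)))
mul = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(sorted(range(3), key=lambda i: a[i]))

def rlen(w):  # reflection length = n - dim Fix(w) = 3 - (number of cycles)
    seen, ncyc = set(), 0
    for i in range(3):
        if i not in seen:
            ncyc += 1
            while i not in seen:
                seen.add(i)
                i = w[i]
    return 3 - ncyc

c = mul((1, 0, 2), (0, 2, 1))
NC = [w for w in G if rlen(w) + rlen(mul(inv(w), c)) == rlen(c)]
assert len(NC) == 5   # Cat(A2) = 5

cat_plus = {0: 1, 1: 1, 2: 2}   # Cat_+ by reflection length of the factor
total = sum(cat_plus[rlen(w)] * cat_plus[rlen(mul(inv(w), c))] for w in NC)
assert total == 7     # Cat_+^(2)(A2)
```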
\subsection{The map to orbit of flats}
The {\it Kreweras complement} is an anti-automorphism of $\NC(W)$ defined by $K(w) := c w^{-1}$. We use here a variant adapted to the bipartite Coxeter element:
\begin{lemm} \label{lemm:bipartitekreweras}
The map $w\mapsto c_+ w c_-$ is an involutive anti-automorphism of $\NC(W)$.
\end{lemm}
See for example~\cite[Proposition~3.5]{bianejosuat}.
\begin{defi}
\label{def_facetoTheta}
For $f \in \Upsilon(W,m)$, we define
\[
\underline{f} := c_+ \big( \prod f \big) c_-,
\]
which is in $\NC(W)$ by the previous lemma. The natural map to orbits of flats is:
\begin{align*}
\Upsilon(W,m) & \to L(W)/W \\
f &\mapsto \text{ the orbit of } \Fix(\underline{f}).
\end{align*}
Accordingly, we define for any $X\in\Theta$:
\begin{align*}
\Upsilon(W, X, m)
&:=
\big\{
f\in \Upsilon(W, m)
\;:\;
\Fix(\underline{f}) \sim X
\big\}, \\
\Upsilon^+(W, X, m)
&:=
\big\{
f\in \Upsilon^+(W, m)
\;:\;
\Fix(\underline{f}) \sim X
\big\},
\end{align*}
and their cardinalities
\begin{align*}
\gamma(W, X, m )
&:=
\# \Upsilon(W, X, m), \\
\gamma^+(W, X, m )
&:=
\# \Upsilon^+(W, X, m).
\end{align*}
\end{defi}
We could actually take the usual Kreweras complement $K(w)$ instead of its bipartite variant. Indeed, $c w^{-1}$ and $c_+ w c_-$ are conjugate (this easily follows from the fact that each element is conjugate to its inverse, see~\cite[Corollary~3.2.14]{geckpfeiffer}). The bipartite Kreweras complement is somewhat more adapted to the present situation.
In the next sections, we give the explicit formulas for the quantities $\gamma$ and $\gamma^+$. Another important property is that the natural map in the previous definition is invariant under the rotation $\mathcal{R}_m$ (see Proposition~\ref{prop:invariance}).
As a final remark for this section, observe that $\gamma(W, X, m )$ and $ \gamma^+(W, X, m )$ are polynomial in $m$. Indeed, the quantity in Proposition~\ref{lemm:upsilonfusscat} is polynomial (from the definition of Fuß-Catalan numbers), and by summing over a finite set of $w$ we get $\gamma(W, X, m )$ or $ \gamma^+(W, X, m )$.
\section{Combinatorial reciprocities}
\label{sec:reciprocities}
In this section, we prove combinatorial reciprocities between the quantities $\kappa$, $\kappa^+$ on one side and $\gamma^+$, $\gamma$ on the other side. We deduce the formulas for $\gamma^+$, $\gamma$ in terms of characteristic polynomials (or Orlik-Solomon exponents), knowing those for $\kappa$ and $\kappa^+$ (see Corollary~\ref{formulas_gammagammaplus}).
\begin{theo} \label{reciprocity}
We have
\begin{align}
\label{eq:reciprocity1}
(-1)^k \kappa(W, X, -m)
&=
\gamma^+(W, X, m), \\
\label{eq:reciprocity2}
(-1)^k \kappa^+(W, X, -m)
&=
\gamma(W, X, m).
\end{align}
\end{theo}
Before proving this, we need to state inclusion-exclusion formulas that relate $\kappa(W, X, m)$ to $\kappa^+(W, X, m)$ on one side, and $\gamma(W, X, m)$ to $\gamma^+(W, X,m)$ on the other side. To do this, it is convenient to extend the definitions of these quantities. Using the bijection from Lemma~\ref{lemm:bij_xi}, we write $\kappa(W, \mathcal{X} , m )$ in place of $\kappa(W, X , m )$ and similarly for $\kappa^+$, $\gamma$ and $\gamma^+$. Finally, we extend this definition in an additive way: if $\mathcal{Y} \subset W$ is a disjoint union of parabolic conjugacy classes, say $\mathcal{Y} = \uplus_{i=1}^j \mathcal{X}_i$, we write:
\[
\kappa(W, \mathcal{Y} , m )
=
\sum_{i=1}^j \kappa(W, \mathcal{X}_i , m )
\]
and similarly for $\kappa^+$, $\gamma$, and $\gamma^+$. In particular, for each standard parabolic subgroup $W_I \subset W$ and $\mathcal{X}$ a parabolic conjugacy class of $W$, the intersection $\mathcal{X} \cap W_I$ is such a union.
Now, we can state:
\begin{prop} \label{prop:inclexcl}
We have:
\begin{align}
\label{IE1}
\kappa(W, \mathcal{X} , m )
&=
\sum_{I \subset S}
\kappa^+(W_I, \mathcal{X} \cap W_I , m ),\\
\label{IE2}
\kappa^+(W, \mathcal{X} , m )
&=
\sum_{I \subset S} (-1)^{n-\#I}
\kappa(W_I, \mathcal{X} \cap W_I , m ),
\end{align}
and
\begin{align}
\label{IE3}
\gamma( W, \mathcal{X} , m )
&=
\sum_{I \subset S}
\gamma^+(W_I, \mathcal{X} \cap W_I , m ),\\
\label{IE4}
\gamma^+(W, \mathcal{X} , m )
&=
\sum_{I \subset S} (-1)^{n-\#I}
\gamma(W_I, \mathcal{X} \cap W_I , m ).
\end{align}
\end{prop}
\begin{proof}
The number $\kappa^+(W_I, \mathcal{X} \cap W_I , m )$ counts chains $w_1 \leq \dots \leq w_m$ in $\NC(W_I)$ where $w_1 \in \mathcal{X} \cap W_I $ and $w_m$ has full support in $W_I$. By summing over $I$, we get the first equation. The second can be deduced from it, essentially by the same argument as the classical inclusion-exclusion principle.
Let us fix $I\subset S$. Then the map
\[
f \mapsto f^+ := f \cap \Phi^{(m)}_+
\]
is a bijection from faces $f\in \Upsilon(W,m)$ such that $f\cap (-\Delta) = \{ -\delta_i \;:\; i\in S\backslash I \}$ to $\Upsilon^+(W_I,m)$. To check the relation between $\underline{f}$ and $\underline{f^+}$, note that with $f$ as above we have:
\[
\prod f
=
\big( \prod_{i\in S^+ \backslash I } s_i \big)
\big( \prod f^+ \big)
\big( \prod_{i\in S^- \backslash I } s_i \big),
\]
so that
\[
\underline{f}
=
\big( \prod_{i\in S^+ \cap I } s_i \big)
\big( \prod f^+ \big)
\big( \prod_{i\in S^- \cap I } s_i \big) = \underline{f^+}.
\]
The last equality comes from the fact that $\underline{f^+}$ is defined with respect to the bipartite Coxeter element of $W_I$. It follows that $\underline{f} \in \mathcal{X}$ if and only if $\underline{f^+} \in \mathcal{X}\cap W_I$. By summing over $I\subset S$, we get~\eqref{IE3}. The inverse relation~\eqref{IE4} follows as before.
\end{proof}
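The inversion pattern behind Equations~\eqref{IE1}--\eqref{IE4} is Möbius inversion over the boolean lattice of subsets of $S$. The following is a minimal Python sketch of that pattern (our own illustration; the arbitrary test data stands in for the actual values of $\kappa$ and $\kappa^+$):

```python
from itertools import combinations

def subsets(S):
    """All subsets of the finite set S, as frozensets."""
    s = list(S)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def zeta_transform(f, S):
    """g(I) = sum over J subset of I of f(J): the pattern of (IE1) and (IE3)."""
    return {I: sum(f[J] for J in subsets(I)) for I in subsets(S)}

def moebius_transform(g, S):
    """f(I) = sum over J subset of I of (-1)^{|I|-|J|} g(J): the pattern of (IE2) and (IE4)."""
    return {I: sum((-1) ** (len(I) - len(J)) * g[J] for J in subsets(I))
            for I in subsets(S)}

S = {1, 2, 3}
f = {I: 7 * len(I) + 1 for I in subsets(S)}  # arbitrary test data
g = zeta_transform(f, S)
assert moebius_transform(g, S) == f  # the two transforms invert each other
```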
\begin{proof}[Proof of Theorem~\ref{reciprocity}]
Recall that $\kappa(W, \mathcal{X}, m)$ counts chains $w_1 \leq \dots \leq w_m$ in $\NC(W)$ where $w_1 \in \mathcal{X}$. If $w_1$ is fixed, the other elements $w_2,\dots,w_m$ can be mapped to $w_1^{-1} w_2, \dots ,\allowbreak w_1^{-1} w_m$, and are thus in bijection with $(m-1)$-element chains in $\NC(W)$ whose top element is below $w_1^{-1}c$. The number of such tuples $(w_2,\dots,w_m)$ is thus given by the Fuß-Catalan number $\Cat^{(m-1)}( W_{\Fix(w_1^{-1}c)} )$. We get:
\begin{equation} \label{eq:kappacat}
\kappa(W, \mathcal{X}, m)
=
\sum_{w \in \mathcal{X} } \Cat^{(m-1)}( W_{\Fix(w^{-1}c)} ).
\end{equation}
Note that we have a combinatorial reciprocity for Fuß-Catalan numbers:
\begin{align}
\Cat^{(-m)}(W) = (-1)^n \Cat^{(m-1)}_+(W).
\end{align}
This relation follows from the two formulas in terms of the exponents given in the introduction, and the fact that the (increasingly sorted) exponents satisfy $e_i = h-e_{n+1-i}$. Using this reciprocity, Equation~\eqref{eq:kappacat} gives:
\[
(-1)^k \kappa(W,\mathcal{X},-m)
=
\sum_{w \in \mathcal{X} } (-1)^k \Cat^{(-m-1)}( W_{\Fix(w^{-1}c)} )
=
\sum_{w \in \mathcal{X} } \Cat_+^{(m)}( W_{\Fix(w^{-1}c)} ).
\]
Since $w^{-1}c$ is conjugate to $c_+ w c_-$, we also have:
\[
(-1)^k \kappa(W,\mathcal{X},-m)
=
\sum_{w \in \mathcal{X} } \Cat_+^{(m)}( W_{\Fix(c_+ w c_-)} ).
\]
By Proposition~\ref{lemm:upsilonfusscat}, each term $\Cat_+^{(m)}( W_{\Fix(c_+ w c_-)} )$ is the number of positive faces $f \in \Upsilon^+(W,m)$ such that $\prod f = c_+ w c_-$. This is also the number of positive faces $f \in \Upsilon^+(W,m)$ such that $\underline{f} = w$. So the sum is $\gamma^+(W, \mathcal{X}, m)$ by definition, and we have proved the first equation.
Then, substituting $-m$ for $m$ in \eqref{IE2}, we get:
\[
\kappa^+( W, \mathcal{X} , -m )
=
\sum_{I \subset S} (-1)^{n-\#I}
\kappa( W_I, \mathcal{X} \cap W_I , -m ).
\]
Using~\eqref{eq:reciprocity1}, which we have just proved, we get:
\[
\kappa( W_I , \mathcal{X} \cap W_I, -m )
=
(-1)^{\#I - (n-k) } \gamma^+( W_I , \mathcal{X} \cap W_I , m ).
\]
To check the sign, note that in Theorem~\ref{reciprocity} the integer $k = \dim(X)$ is $n-\ell(w)$ for some $w\in\mathcal{X}$. Here, the elements of $\mathcal{X} \cap W_I$ have reflection length $n-k$ in $W_I$ (which is easily seen to be the same as their reflection length in $W$). Since $W_I$ has rank $\#I$, the sign is thus given by the difference $\#I - (n-k)$.
The previous two equations thus give:
\[
\kappa^+(W, \mathcal{X}, -m )
=
(-1)^k \sum_{I \subset S}
\gamma^+(W_I, \mathcal{X}\cap W_I , m ).
\]
From \eqref{IE3}, the right-hand side of the previous equation is $(-1)^k \gamma(W, \mathcal{X}, m)$. We have thus proved the second equation.
\end{proof}
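The Fuß-Catalan reciprocity used in the proof above is easy to check numerically. Here is a minimal sketch (our own; it assumes the standard product formulas $\Cat^{(m)}(W)=\prod_i \frac{mh+e_i+1}{e_i+1}$ and $\Cat^{(m)}_+(W)=\prod_i \frac{mh+e_i-1}{e_i+1}$, with exponent data for a few irreducible types hard-coded):

```python
from fractions import Fraction

def cat(exponents, h, m):
    """Fuss-Catalan number Cat^(m)(W) = prod_i (mh + e_i + 1)/(e_i + 1)."""
    r = Fraction(1)
    for e in exponents:
        r *= Fraction(m * h + e + 1, e + 1)
    return r

def cat_plus(exponents, h, m):
    """Positive Fuss-Catalan number Cat^(m)_+(W) = prod_i (mh + e_i - 1)/(e_i + 1)."""
    r = Fraction(1)
    for e in exponents:
        r *= Fraction(m * h + e - 1, e + 1)
    return r

# (exponents, Coxeter number) for a few irreducible types
data = {"A2": ([1, 2], 3), "A3": ([1, 2, 3], 4), "B3": ([1, 3, 5], 6)}
for name, (exps, h) in data.items():
    n = len(exps)
    for m in range(1, 6):
        # Cat^(-m)(W) = (-1)^n Cat^(m-1)_+(W)
        assert cat(exps, h, -m) == (-1) ** n * cat_plus(exps, h, m - 1)
```

The check relies on the duality $e_i = h - e_{n+1-i}$ of the exponents, exactly as in the argument above.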
\begin{coro} \label{formulas_gammagammaplus}
We have:
\begin{align} \label{eq:formulagamma}
\gamma(W,X,m)
&=
(-1)^{\dim(X)} \frac{\chi_X(-mh-1)}{[N(W_X):W_X]}, \\
\gamma^+(W,X,m)
&=
(-1)^{\dim(X)} \frac{\chi_X(-mh+1)}{[N(W_X):W_X]}.
\end{align}
\end{coro}
\begin{proof}
Using the combinatorial reciprocities in Theorem~\ref{reciprocity}, this follows from the corresponding formulas for $\kappa$ and $\kappa^+$ in Equations~\eqref{eq:formula_kappa} and~\eqref{eq:formula_kappaplus}.
\end{proof}
\section{Bijections between faces of the generalized cluster complex and chains of noncrossing partitions}
\label{sec:bij}
We give another proof of the formulas in Corollary~\ref{formulas_gammagammaplus}, via a bijection which is of independent interest. The combinatorial reciprocities in the previous section gave connections $\kappa \leftrightarrow \gamma^+ $ and $\kappa^+ \leftrightarrow \gamma$ (schematically), but here the bijection will give connections $\kappa \leftrightarrow \gamma $ and $\kappa^+ \leftrightarrow \gamma^+$.
\begin{prop} \label{prop:bijgammachi}
We have:
\begin{align}
\label{EQ:gammakappa}
\gamma(W,X,m)
&=
\sum_{Y \in \Theta} \bN_{X,Y} \kappa(W,Y,m), \\
\label{EQ:gammakappa2}
\gamma^+(W,X,m)
&=
\sum_{Y \in \Theta} \bN_{X,Y} \kappa^+(W,Y,m).
\end{align}
\end{prop}
Note that these two identities are related: each can be deduced from the other, using the combinatorial reciprocities in~\eqref{eq:reciprocity1} and~\eqref{eq:reciprocity2} and the inverse of the matrix $\bN$. Here we prove both identities directly: the bijection proving the first one, suitably restricted, also proves the second one.
One of our main tools is the following:
\begin{lemm} \label{lemm:intervalsfaces}
For any $u,w \in\NC(W)$ such that $u\leq w$, the elements $v\in\NC(W)$ such that $u\sqsubset v \ll w$ are in bijection with faces $f \in \Upsilon^+(W)$ such that $\prod f = u^{-1} w$.
\end{lemm}
\begin{proof}
This follows from the results in~\cite{bianejosuat} (though that reference only deals with the case where $u$ is the minimal element, this slight generalization is proved similarly). More explicitly, the construction is as follows.
Start from $f=\{\alpha_1,\dots, \alpha_k\}$ as above, and let $t_i = R(\alpha_i)$. By Lemmas~8.7 and 8.8 from~\cite{bianejosuat}, we can reindex the elements of $f$ (switching pairs of orthogonal reflections) so that $u \sqsubset ut_1 \sqsubset \null \dots \null \sqsubset ut_1 \cdots t_j \ll u t_1 \cdots t_{j+1} \ll \null\dots\null \ll ut_1 \cdots t_k = w$. Then the bijection sends $f$ to $ut_1 \cdots t_j$.
In the other direction, write $u^{-1} v = t_1 \cdots t_k$ where the factors are the simple reflections of $W_{\Fix(u^{-1}v)}$, and similarly $v^{-1} w = u_1 \cdots u_j$ where the factors are the simple reflections of $W_{\Fix(v^{-1}w)}$. Then $R(f)$ contains the reflections
\begin{itemize}
\item $t_k \cdots t_{i+1} t_i t_{i+1} \cdots t_k$ for $1\leq i \leq k$,
\item $u_1 \cdots u_{i-1} u_i u_{i-1} \cdots u_1$ for $1\leq i \leq j$.
\end{itemize}
See~\cite[Section~8]{bianejosuat} for details.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:bijgammachi}]
We give a bijection between faces $f \in \Upsilon(W,X,m)$ and chains of noncrossing partitions of the form $
w_0 \sqsubset w_1 \leq \dots \leq w_m$ such that $\Fix(w_0) \sim X$.
Let us start from a chain in $\NC(W)$ as above. Using Proposition~\ref{prop:order_uvw}, a chain $w_0 \sqsubset w_1 \leq \dots \leq w_m \leq c $ can be completed in a unique way as a chain:
\[
w_0 \sqsubset w_1 \ll w'_1 \sqsubset w_2 \ll w'_2 \sqsubset \dots \sqsubset w_{m-1} \ll w'_{m-1} \sqsubset w_m \ll w'_m \sqsubset c.
\]
Let us denote $w'_0 := w_0$ for convenience. Each element $w_i$ (with $1\leq i \leq m$) is such that $w'_{i-1} \sqsubset w_i \ll w'_i$. Via Lemma~\ref{lemm:intervalsfaces}, it corresponds bijectively to a face $f_i \in \Upsilon^+(W)$ such that
\begin{equation} \label{eq:prod_ra}
\prod f_i = (w'_{i-1})^{-1} w'_i.
\end{equation}
Define $f\in \Upsilon(W,m)$ as follows:
\begin{itemize}
\item $f$ contains the colored positive roots $\alpha^i$ for $\alpha\in f_i$,
\item $f$ contains the negative roots $-\delta_i$ for $i \in S\backslash \supp(w'_m)$.
\end{itemize}
To check that this is indeed in $\Upsilon(W,m)$, first note that from~\eqref{eq:prod_ra} we get
\begin{equation} \label{prod_ra2}
\prod f^+ = (w'_{0})^{-1} w'_m
\end{equation}
with the notation $f^+ = f \cap \Phi^{(m)}_+$ as before. Indeed, the order is such that (reading this product from left to right) we first read roots with color $1$, then roots with color $2$, {\it etc}. By Proposition~\ref{prop:tzanaki}, this shows that $f^+ \in \Upsilon^+(W,m)$. Then, each element $-\delta_i \in f $ is such that $s_i \notin \supp(w_m)$. Consequently, $s_i \notin \supp(t)$ for each $t\in T$ such that $t\leq w_m$. Thus this $-\delta_i$ is compatible with the roots in $f^+$, and it follows that $f\in \Upsilon(W,m)$.
To compute $\underline{f}$, let us keep the notation as in the proof of Proposition~\ref{prop:inclexcl}, so that $f \cap (-\Delta) = \{ -\delta_i \;:\; i\in S\backslash I \}$ for some $I\subset S$. From~\eqref{prod_ra2}, we get
\[
\prod f =
\big( \prod_{i\in S^+ \backslash I} s_i \big)
(w'_{0})^{-1} w'_m
\big( \prod_{i\in S^- \backslash I} s_i \big),
\]
so that
\[
\underline{f}
=
\big( \prod_{i\in S^+ \cap I} s_i \big)
(w'_{0})^{-1} w'_m
\big( \prod_{i\in S^- \cap I} s_i \big).
\]
By definition, we have
\[
w'_m =
\big( \prod_{i\in S^+\cap I} s_i \big)
\big( \prod_{i\in S^-\cap I} s_i \big).
\]
Indeed, the condition $w'_m \sqsubset c$ implies that a reduced factorization of $w'_m$ can be extracted from the canonical one of $c = c_+ c_-$, and the definition of $f$ implies that the elements that appear are those indexed by $I$. From the previous two equations, we get:
\[
\underline{f} =
\big( \prod_{i\in S^+\cap I} s_i \big)
(w'_{0})^{-1}
\big( \prod_{i\in S^+\cap I} s_i \big).
\]
We thus have $f\in \Upsilon(W,X,m)$, since $\Fix(\underline{f}) \sim \Fix( (w'_0)^{-1} ) \sim \Fix( w'_0 ) \sim X$.
Describing the inverse bijection is straightforward. Starting from $f\in \Upsilon(W,m)$, define:
\begin{itemize}
\item $w'_m \sqsubset c$ is the product of the reflections $R(\alpha)$ for $\alpha \in (-\Delta) \backslash f$,
\item $ (w'_{i-1})^{-1} w'_i$ is the product of $R(\alpha^i)$ over positive roots $\alpha^i$ (of color $i$) in $f$.
\end{itemize}
From $w'_{i-1}$, $w'_i$, and the positive roots of color $i$ in $f$, we use the inverse bijection from Lemma~\ref{lemm:intervalsfaces} to get $w_i$ such that $w'_{i-1} \sqsubset w_i \ll w'_i$. We omit the details of checking that the two maps are mutually inverse.
We thus have proved~\eqref{EQ:gammakappa}. Finally, observe that when we restrict this bijection to chains such that $w_m \ll c$, we get $w'_m = c$ (keeping the same notation), and the corresponding faces $f \in \Upsilon(W,\mathcal{X},m)$ contain no vertex in $-\Delta$. So the bijection proves~\eqref{EQ:gammakappa2} as well.
\end{proof}
In the case where $X$ is the minimal flat, our bijection specializes to a bijection between facets of $\Upsilon(W,m)$ and chains $w_1\leq\dots \leq w_m$. Such a bijection was first given via representation theory by Buan, Reiten, and Thomas~\cite{buanreitenthomas}. A more combinatorial one, and various related bijections, were given by Stump, Thomas, and Williams~\cite{stumpthomaswilliams}.
To end this section, we use the previous bijection to get:
\begin{proof}[Proof of Corollary~\ref{formulas_gammagammaplus}]
From Lemma~\ref{lemm:solomon2} and $\bPsi_{-1} = (-1)^n \boldsymbol{\epsilon}$, we have:
\[
(-1)^n \bD \bPsi_{-t} = \bN \bPsi_{t}.
\]
By taking the coefficients via Proposition~\ref{prop:psik_expansion} and evaluating at $t=mh+1$ and $t=mh-1$, this respectively gives:
\begin{align} \label{eq:kreweras_reciprocity}
(-1)^{\dim(X)} \frac{\chi_X(-mh-1)}{[N(W_X):W_X]}
&=
\sum_{Y \in \Theta} \bN_{X,Y}
\frac{\chi_Y(mh+1)}{[N(W_Y):W_Y]}, \\
(-1)^{\dim(X)} \frac{\chi_X(-mh+1)}{[N(W_X):W_X]}
&=
\sum_{Y \in \Theta} \bN_{X,Y}
\frac{\chi_Y(mh-1)}{[N(W_Y):W_Y]}.
\end{align}
Using the formulas for $\kappa$ and $\kappa^+$ obtained in \eqref{eq:formula_kappa} and \eqref{eq:formula_kappaplus}, we see that the right-hand sides of the previous two equations give, respectively, the right-hand sides in~\eqref{EQ:gammakappa} and~\eqref{EQ:gammakappa2}.
By identifying the left-hand sides of the previous two equations with those of~\eqref{EQ:gammakappa} and~\eqref{EQ:gammakappa2}, we immediately obtain the formulas for $\gamma$ and $\gamma^+$ in Corollary~\ref{formulas_gammagammaplus}.
\end{proof}
\section{Cluster parking spaces}
\label{sec:clusterpark}
Via Proposition~\ref{prop:psik_expansion}, the formulas we obtained in Corollary~\ref{formulas_gammagammaplus} imply:
\begin{align} \label{euler_char1}
\sum_{X\in \Theta} (-1)^{\dim(X)} \gamma(W,X,m) \bPhi_X
&=
\bPsi_{-mh-1} = (-1)^n \boldsymbol{\epsilon} \otimes \bPsi_{mh+1}, \\
\label{euler_char2}
\sum_{X\in \Theta} (-1)^{\dim(X)} \gamma^+(W,X,m) \bPhi_X
&=
\bPsi_{-mh+1} = (-1)^n \boldsymbol{\epsilon} \otimes \bPsi_{mh-1}.
\end{align}
This suggests a representation-theoretic interpretation, which is the goal of this section. The following definition extends one given in~\cite{delcroixogerjosuatvergesrandazzo} (in the case of type A, with the ``linear'' Coxeter element), in the context of a poset of parking functions.
\begin{defi}
The set of {\it cluster parking functions} is:
\[
\CPF(W,m)
:=
\biguplus_{ f \in \Upsilon(W,m) }
\{f\}
\times
W / W_{\Fix(\underline{f})}.
\]
An element is thus a pair $(f, wW_{\Fix(\underline{f})})$ where $f \in \Upsilon(W,m)$ and $w\in W$. Denote by $\CPF^+(W,m) \allowbreak \subset \CPF(W,m) $ the subset of such pairs where $f\in \Upsilon^+(W,m)$. Moreover:
\begin{itemize}
\item $W$ acts on $\CPF(W,m)$ and $\CPF^+(W,m)$ via the natural left action on the quotient $W/W_{\Fix(\underline{f})}$ (as in the case of parking functions, see Equation~\eqref{def:PF}),
\item a partial order on $\CPF(W,m)$ (respectively, on $\CPF^+(W,m)$) is given by
\[
(f', w'W_{\Fix(\underline{f'})}) \leq (f, wW_{\Fix(\underline{f})})
\quad \Longleftrightarrow \quad
f' \subset f \text{ and } w'W_{\Fix(\underline{f'})} = wW_{\Fix(\underline{f'})}.
\]
\end{itemize}
\end{defi}
To see that the order is well-defined, note that $f' \subset f$ implies $\underline{f} \leq \underline{f'}$, then $\Fix(\underline{f'}) \subset \Fix(\underline{f})$ and $W_{\Fix(\underline{f})} \subset W_{\Fix(\underline{f'})}$. So the coset $wW_{\Fix(\underline{f})}$ unambiguously defines a coset $wW_{\Fix(\underline{f'})}$ via the natural quotient map.
Also, note that $\CPF(W,m)$ and $\CPF^+(W,m)$ are ranked posets where the rank function is $(f, wW_{\Fix(\underline{f})}) \mapsto \# f$.
There is a similarity with the set $\PF(W) := \PF(W,1)$ of parking functions. The precise formulation is the following: we can see $\CPF(W,m)$ as the (setwise) fiber product of $\Upsilon(W,m)$ and $\PF(W)$ over $\NC(W)$, along the maps:
\[
f \mapsto \underline{f}
\qquad \text{ and } \qquad
(w_1, w_2W_{\Fix(w_1)}) \mapsto w_1.
\]
And the same holds with $\Upsilon^+(W,m)$ and $\PF^+(W)$ in place of $\Upsilon(W,m)$ and $\PF(W)$.
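The fiber-product description can be sketched in a few lines. In this toy illustration (our own; `faces`, `parking`, and the two projection maps are hypothetical stand-ins for $\Upsilon(W,m)$, $\PF(W)$, and the maps displayed above), the base plays the role of $\NC(W)$:

```python
def fiber_product(A, B, pA, pB):
    """Setwise fiber product of A and B over a common base:
    pairs (a, b) with pA(a) == pB(b)."""
    return [(a, b) for a in A for b in B if pA(a) == pB(b)]

# Stand-in data: three "faces" mapping to base points "w" or "u",
# and three "parking functions" whose first coordinate is the base point.
faces = ["f1", "f2", "f3"]
parking = [("w", 1), ("w", 2), ("u", 1)]
to_base_faces = {"f1": "w", "f2": "w", "f3": "u"}.__getitem__
to_base_parking = lambda p: p[0]

cpf = fiber_product(faces, parking, to_base_faces, to_base_parking)
assert len(cpf) == 5  # f1, f2 each pair with two parking functions; f3 with one
```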
\begin{lemm}
Let $E \subset \NC(W)$, and denote $\wedge E$ its meet in the lattice $\NC(W)$. Then we have:
\[
W_{\Fix(\wedge E)}
=
\bigcap_{w\in E} W_{\Fix(w)}.
\]
\end{lemm}
\begin{proof}
An explicit construction of $\wedge E$ is given by Brady and Watt~\cite{bradywatt}, and it follows that:
\[
\big\{t \in T \;:\; t\leq \wedge E\big\}
=
\bigcap_{w\in E} \big\{t \in T \;:\; t\leq w\big\}.
\]
Moreover, each set $\big\{t \in T \;:\; t\leq w\big\}$ is the set of reflections in $W_{\Fix(w)}$; since a parabolic subgroup is generated by the reflections it contains, and an intersection of parabolic subgroups is again parabolic, this completes the proof.
\end{proof}
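The lemma can be illustrated on a toy example (our own sketch, for $W=S_3$ with Coxeter element $c=(1\,2\,3)$; each element of $\NC(W)$ is encoded by the set of reflections below it, and the meet is found via intersection of these sets):

```python
# Toy model of NC(W) for W = S_3 with Coxeter element c = (1 2 3):
# each element is recorded with the set of reflections below it.
below = {
    "e": frozenset(),
    "(12)": frozenset({"(12)"}),
    "(13)": frozenset({"(13)"}),
    "(23)": frozenset({"(23)"}),
    "c": frozenset({"(12)", "(13)", "(23)"}),
}

def meet(E):
    """Meet of a subset E of NC(W), via intersection of reflection sets."""
    target = frozenset.intersection(*(below[w] for w in E))
    # the meet is the unique element whose reflection set is the intersection
    candidates = [w for w, s in below.items() if s == target]
    assert len(candidates) == 1
    return candidates[0]

assert meet({"(12)", "(13)"}) == "e"   # two incomparable atoms meet at the identity
assert meet({"c", "(23)"}) == "(23)"
```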
Note that $\NC(W)$ is embedded (as a subposet, but not a sublattice) in the lattice of parabolic subgroups of $W$, via the map $w\mapsto W_{\Fix(w)}$. In the latter lattice, a meet is clearly given by taking an intersection of subgroups. So the previous lemma says that the meet of $E$ as a subset of $\NC(W)$ coincides with its meet in the lattice of parabolic subgroups. (The analogous statement about the join does not hold.)
\begin{prop}
The poset $\CPF(W,m)$ is the face poset of a simplicial complex, which we still denote by $\CPF(W,m)$. The same holds for $\CPF^+(W,m)$.
\end{prop}
\begin{proof}
Observe that in the definition of the order relation, the coset $w' W_{\Fix(\underline{f'})}$ is uniquely defined by $f'$ and $w$. It follows that the elements below $(f, wW_{\Fix(\underline{f})})$ can be identified with subsets of $f$, and form a boolean lattice of rank $\# f$.
Now, consider $(f, wW_{\Fix(\underline{f})}) \in \CPF(W,m)$, and write $f = \{ \alpha_1 , \dots , \alpha_k \}$. The rank $1$ elements below $(f, wW_{\Fix(\underline{f})})$ are $(\{\alpha_i\}, wW_{\Fix(\underline{\{\alpha_i\}})})$ for $1\leq i \leq k$. We easily recover $f$ as $\cup_{i=1}^k \{\alpha_i\}$. Note that $\prod f$ is the join of the elements $R(\alpha_i)$ for $1\leq i \leq k$. Using Lemma~\ref{lemm:bipartitekreweras}, we see that $\underline{f}$ is the meet of the elements $\underline{\{\alpha_i\}}$ for $1\leq i \leq k$. Using the previous lemma, we get:
\[
wW_{\Fix(\underline{f})}
=
\bigcap_{i=1}^k wW_{\Fix(\underline{\{\alpha_i\}})}.
\]
What we have shown is that each element of $\CPF(W,m)$ can be recovered from the rank $1$ elements below it. This completes the proof: the simplicial complex is realized by identifying each element with the set of rank $1$ elements below it. The same arguments show the result for $\CPF^+(W,m)$.
\end{proof}
Let $\Delta$ be a finite simplicial complex, endowed with an action of $W$ preserving dimension and inclusion of faces. The {\it equivariant Euler characteristic} of $\Delta$ is:
\[
\chi(\Delta)
=
\sum_{i = -1}^{\dim(\Delta)} (-1)^i C_i(\Delta)
\]
where $C_i(\Delta)$, the $i$th chain space, is the free abelian group over $i$-dimensional faces of $\Delta$. By the {\it Hopf trace formula} (see~\cite[Theorem~2.3.9]{wachs}), $\chi(\Delta)$ is also equal to the alternating sum of the (reduced and simplicial) homology groups of $\Delta$ over $\mathbb{Z}$. This definition gives $\chi(\Delta)$ as an element of the representation ring of $W$, but we can equally well see it as an element of its character ring. Also note that ``equivariant Euler characteristic'' is not quite standard terminology. The Euler characteristic is the evaluation of this character at $1$, and its evaluations at other group elements are called {\it Lefschetz numbers}.
\begin{theo} We have:
\[
\chi(\CPF(W,m)) = (-1)^n \boldsymbol{\epsilon} \otimes \bPsi_{mh+1},
\quad \text{ and } \quad
\chi(\CPF^+(W,m)) = (-1)^n \boldsymbol{\epsilon} \otimes \bPsi_{mh-1}.
\]
\end{theo}
\begin{proof}
By definition of cluster parking functions, each $f\in\Upsilon(W,X,m)$ contributes a term $\bPhi_X$ to the $k$th chain space $C_k(\CPF(W,m))$ (with $k=\dim(X)$). We can thus identify $\chi(\CPF(W,m))$ and $\chi(\CPF^+(W,m))$ with the left-hand sides of~\eqref{euler_char1} and~\eqref{euler_char2}, respectively. The right-hand sides give the expected formulas.
\end{proof}
It would be very interesting to investigate whether these simplicial complexes are shellable. If this holds, the equivariant Euler characteristic is (up to the sign $(-1)^n$) the top homology group:
\begin{align}
\chi(\CPF(W,m)) &= (-1)^n \tilde H_{n-2}(\CPF(W,m)),
\\
\chi(\CPF^+(W,m)) &= (-1)^n \tilde H_{n-2}(\CPF^+(W,m)).
\end{align}
Assume that this property holds. Knowing the combinatorial interpretation of the characters $\bPsi_{mh+1}$ and $\bPsi_{mh-1}$ from Section~\ref{sec:parking}, our results raise the following problem: find an explicit basis $(Z_p)_{p\in \PF(W,m)}$ of $\tilde H_{n-2}(\CPF(W,m))$ such that $w\cdot Z_p = (-1)^{\ell(w)} Z_{w\cdot p}$, and also an explicit basis $(Z'_p)_{p\in \PF'(W,m)}$ of $\tilde H_{n-2}(\CPF^+(W,m))$ with the same property.
\begin{rema}
By a combinatorial version of the Hopf-Lefschetz theorem due to Baclawski and Björner~\cite{baclawskibjorner}, the character $\chi(\CPF(W,m))$ evaluated at $w\in W$ is the Euler characteristic of the fixed-point subcomplex $\CPF(W,m)^w$. It would be interesting to describe these fixed-point subcomplexes explicitly.
\end{rema}
\section{\texorpdfstring{A refinement of the $f$- to $h$- transformation}{A refinement of the f- to h-transformation}}
\label{sec:fhvectors}
Throughout this section, let $(f_i)_{-1\leq i \leq n-1}$ denote the $f$-vector of $\Upsilon(W,m)$, {\it i.e.}, $f_i$ is the number of $i$-dimensional faces in the complex. Similarly, let $(f^+_i)_{-1\leq i \leq n-1}$ denote the $f$-vector of $\Upsilon^+(W,m)$. In terms of $\gamma$ and $\gamma^+$, we thus have:
\begin{align} \label{eq:formula_fk}
f_{k-1} = \sum_{X\in \Theta, \; \dim(X)=k} \gamma(W, X, m)
\quad \text{ and } \quad
f^+_{k-1} = \sum_{X\in \Theta, \; \dim(X)=k} \gamma^+(W, X, m).
\end{align}
First, let us explain how our formulas for $\gamma$ completely explain some numerology concerning $f_k$ obtained in~\cite[Section~8]{fominreading}. By examining the tables in \cite[Appendix~C]{orlikterao}, we note that the Orlik-Solomon exponents $b_i^X$ of $X\in \Theta$ (with $\dim(X)=k$) often look like $e_1,\dots,e_k$, namely the $k$ smallest exponents of $W$ (in fact, this happens exactly for the so-called \cite[\S3.1.5]{williams_thesis} coincidental types $A_n=S_{n+1}$, $B_n$, $I_2(m)$, $H_3$, see \cite[\S3.3]{miller_foulkes}).
Now, consider the formula for $f_k$ in \eqref{eq:formula_fk} as a polynomial in $m$. If some integer $b$ is an Orlik-Solomon exponent for all $X \in \Theta$ with $\dim(X) = k$, each term contains a factor $(mh+1-b)$ so that the sum $f_k$ also has this factor. Also, we can observe that there is an equivalence between the two statements:
\begin{itemize}
\item The integer $b$ is an Orlik-Solomon exponent for all $X\in\Theta$ with $\dim(X) = k$.
\item The integer $b$ is an Orlik-Solomon exponent for all $X\in\Theta$ with $\dim(X) \geq k$.
\end{itemize}
(This is easily checked on a case-by-case basis.) Note that it might {\it a priori} happen that $f_k$ contains a factor $mh+1-b$ even though some of the terms in the sum of~\eqref{eq:formula_fk} do not contain it. This actually never happens. The upshot of this discussion is:
\begin{prop}
An exponent $e_i$ of $W$ has level $j$ in the sense of Fomin and Reading~\cite{fominreading} if and only if $e_i$ is an Orlik-Solomon exponent of all $X\in \Theta$ with $\dim(X) \geq j$.
\end{prop}
Now, consider the $h$-vector $(h_i)_{0\leq i \leq n}$ of $\Upsilon(W,m)$ and $(h^+_i)_{0\leq i \leq n}$ of $\Upsilon^+(W,m)$, defined in terms of the $f$-vector via the polynomial relation:
\begin{align} \label{eq:fh-transform}
\sum_{i=0}^n f_{i-1} z^{n-i}
=
\sum_{i=0}^n h_i (z+1)^{n-i},
\qquad
\sum_{i=0}^n f^+_{i-1} z^{n-i}
=
\sum_{i=0}^n h^+_i (z+1)^{n-i}.
\end{align}
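The transformation in~\eqref{eq:fh-transform} determines the $h$-vector from the $f$-vector by a triangular system. Here is a minimal sketch (our own; the boundary-of-a-triangle test case is not from the paper, and `f[i]` stands for $f_{i-1}$, `h[i]` for $h_i$):

```python
from math import comb

def f_to_h(f, n):
    """Compute (h_0, ..., h_n) from (f_{-1}, ..., f_{n-1}) via
    sum_i f_{i-1} z^{n-i} = sum_i h_i (z+1)^{n-i}.
    Comparing coefficients of z^{n-j} gives the triangular system
    f[j] = sum_{i <= j} h[i] * C(n-i, j-i), solved here for h[j]."""
    h = [0] * (n + 1)
    for j in range(n + 1):
        h[j] = f[j] - sum(h[i] * comb(n - i, j - i) for i in range(j))
    return h

# f-vector of the boundary of a triangle (n = 2): 1 empty face, 3 vertices, 3 edges
assert f_to_h([1, 3, 3], 2) == [1, 1, 1]
```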
These $h$-vectors have been obtained by Athanasiadis and Tzanaki~\cite{athanasiadistzanaki1,athanasiadistzanaki2} and coincide with the Fuß-Narayana numbers of Armstrong~\cite[Chapter~5]{armstrong}. These results can be stated as follows:
\begin{prop}[Athanasiadis and Tzanaki~\cite{athanasiadistzanaki1,athanasiadistzanaki2}] \label{prop:hvectors}
For $0\leq k\leq n$, we have:
\begin{align*}
h_{n-k} &= \#\Big\{ (w_1,\dots,w_m) \in \NC(W)^m \; : \; \ell(w_1)=k \text{ and } w_1\leq \dots \leq w_m \Big\}, \\
h^+_{n-k} &= \#\Big\{ (w_1,\dots,w_m) \in \NC(W)^m \; : \; \ell(w_1)=k, \; w_1\leq \dots \leq w_m, \text{ and } \supp(w_m)=S \Big\}.
\end{align*}
\end{prop}
Note that the right-hand sides above are respectively the sums
\begin{align*}
\sum_{X\in \Theta, \; \dim(X)=k} \kappa(W,X,m),
\quad \text{ and }
\quad
\sum_{X\in \Theta, \; \dim(X)=k} \kappa^+(W,X,m),
\end{align*}
similar to the sums in Equation~\eqref{eq:formula_fk}. So, our Equations~\eqref{EQ:gammakappa} and~\eqref{EQ:gammakappa2} can be seen as a refinement of the $f$- to $h$-vector transformation, the latter being obtained by summing over $X\in\Theta$ of a given dimension $k$.
In order to make that more explicit, let us show how the bijection from Section~\ref{sec:bij} gives the formulas above for $h_k$ and $h^+_k$.
\begin{proof}[Proof of Proposition~\ref{prop:hvectors}]
Via the bijection from Section~\ref{sec:bij}, faces of $\Upsilon(W,m)$ of dimension $k-1$ are in bijection with chains $w_0 \sqsubset w_1 \leq \dots \leq w_m$ in $\NC(W)$ such that $\ell(w_0)=n-k$. So, we get:
\[
\sum_{k=0}^n f_{k-1} z^{n-k}
=
\sum_{ w_0 \sqsubset w_1 \leq \dots \leq w_m }
z^{\ell(w_0)}.
\]
Since the order ideal of elements below $w_1$ for $\sqsubset$ is a boolean lattice, the binomial theorem gives:
\[
\sum_{k=0}^n f_{k-1} z^{n-k}
=
\sum_{ w_1 \leq \dots \leq w_m }
(1+z)^{\ell(w_1)}.
\]
By comparing with~\eqref{eq:fh-transform}, we immediately get the combinatorial formula for $h_k$. The one for $h^+_k$ is obtained similarly.
\end{proof}
It is well-known that a shelling of a simplicial complex can be used to find its $h$-vector. We refer to~\cite{wachs}. Explicitly, suppose we have a shelling of $\Upsilon(W,m)$, {\it i.e.}, an indexing $F_1,F_2,\dots$ of its facets such that for each $j$ the (geometric) intersection
\begin{align} \label{def:shelling}
\big( \bigcup_{i=1}^{j-1} F_i \big) \cap F_j
\end{align}
is purely $1$-codimensional in $F_j$. Then $h_k$ is the number of indices $j$ such that the intersection in~\eqref{def:shelling} is the union of $n-k$ facets of $F_j$. It would be very interesting to show that our Equation~\eqref{EQ:gammakappa} can also be obtained via such a shelling of $\Upsilon(W,m)$, and similarly Equation~\eqref{EQ:gammakappa2} from a shelling of $\Upsilon^+(W,m)$.
\section{A recursion for the refined enumeration of faces}
\label{sec:recfaces}
We give a recursion satisfied by the numbers $\gamma(W , X , m)$, in the spirit of Fomin and Reading's recursion for the $f$-vector of $\Upsilon(W,m)$. Our main tool is also the rotation $\mathcal{R}_m$, and the method relies on the following key result.
\begin{prop} \label{prop:invariance}
For each face $f \in \Upsilon(W,m)$, the elements $\underline{f}$ and $\underline{\mathcal{R}_m(f)}$ are conjugate in $W$. In other words, $\Upsilon(W,X,m)$ is stable under the action of $\mathcal{R}_m$.
\end{prop}
\begin{proof}
Denote $f = \{ \rho_1 \succ \dots \succ \rho_k \}$, so that $\prod f = R(\rho_1 ) \cdots R(\rho_k) \in\NC(W,c)$ and $\underline{f} = c_+ (\prod f) c_-$. There is a factorization $\prod f = w_+ w_1 \cdots w_m w_-$ obtained as follows: $w_+$ (respectively, $w_i$, $w_-$) is obtained from $R(\rho_1 ) \cdots R(\rho_k)$ by keeping the factors $R(\rho_j)$ such that $\rho_j\in -\Delta_+$ (respectively, such that $\rho_j$ is a positive root with color $i$, such that $\rho_j\in -\Delta_-$). This is possible because the definition of these factors agrees with the total order $\prec$ on $\Phi^{(m)}_{\geq-1}$: the elements in $-\Delta_+$ come first in $\rho_1,\rho_2,\dots$, {\it etc}. Using similar notation, the noncrossing partition associated to the face $f' = \mathcal{R}_m(f)$ is denoted $ \prod f' = w'_+ w'_1 \cdots w'_m w'_- $.
To determine the factors of $\prod f'$, we need to refine the factorization of $\prod f$. Note that from Proposition~\ref{prop:reforder}, we get
\[
\{ \alpha_{nh/2-n+s+1} , \dots , \alpha_{nh/2} \} = c^{-1}(-\Delta_+).
\]
We write
\[
w_m = w_{m,3} w_{m,2} w_{m,1}
\]
where $w_{m,3}$ contains the factors $R(\alpha^m)$ where $\alpha \in \Delta_-$, $w_{m,2}$ contains the factors $R(\alpha^m)$ where $\alpha \in c^{-1}(-\Delta_+)$, and $w_{m,1}$ contains the other factors $R(\alpha^m)$. As above, this is possible because this definition agrees with the ordering on $\Phi^{(m)}_{\geq-1}$.
By examining the action of $\mathcal{R}_m$, we can check:
\begin{align*}
w'_+ &= c w_{m,2} c^{-1}, \\
w'_1 &= (c w_{m,1} c^{-1} ) (c w_- c^{-1}) w_+, \\
w'_i &= w_{i-1} \text{ for } 2\leq i \leq m,\\
w'_- &= w_{m,3}.
\end{align*}
For example, $w'_- = w_{m,3}$ comes from the case $\mathcal{R}_m(\alpha^m) = -\alpha$ if $\alpha \in \Delta_-$.
Gathering the factors gives:
\begin{align*}
\prod f' &= ( c w_{m,2} c^{-1} ) ( c w_{m,1} c^{-1} \cdot c w_- c^{-1} \cdot w_+ ) w_1 \cdots w_{m-1} ( w_{m,3} ) \\
&= ( c w_{m,2} w_{m,1} w_- c^{-1} \cdot w_+ ) w_1 \cdots w_{m-1} w_{m,3} \\
&= ( c w_{m,2} w_{m,1} w_- c^{-1} )
\big( \prod f \big)
(w_{m,2} w_{m,1} w_-)^{-1}.
\end{align*}
Using $c=c_+c_-$ and the definition of $\underline{f}$, it follows:
\[
\underline{f'}
=
c_- (w_{m,2} w_{m,1} w_-) c_-
\underline{f}
c_- (w_{m,2} w_{m,1} w_-)^{-1} c_-.
\]
So $\underline{f'}$ and $\underline{f}$ are conjugate.
\end{proof}
\begin{prop} \label{prop:clus_reduc}
If $W = W_1 \times W_2$ and $X = X_1 \times X_2 $, we have:
\begin{align} \label{eq:gammareducible}
\gamma(W, X, m )
=
\gamma(W_1, X_1, m ) \cdot \gamma(W_2, X_2, m ).
\end{align}
\end{prop}
\begin{proof}
This follows from $\Upsilon(W, X, m)$ being the join of $\Upsilon(W_1, X_1, m)$ and $\Upsilon(W_2, X_2, m)$.
\end{proof}
To state the next proposition, we extend the definition of $\gamma(W, X, m)$ in the same way as we did for $\kappa$ and $\kappa^+$ in Section~\ref{sec:reciprocities}. We use parabolic conjugacy classes rather than flats, and if $\mathcal{Y} = \uplus_{i=1}^j \mathcal{X}_i$ is a disjoint union of parabolic conjugacy classes, we write:
\begin{align} \label{eq:gamma_uplus}
\gamma(W, \mathcal{Y}, m)
=
\sum_{i=1}^j \gamma(W, \mathcal{X}_i, m).
\end{align}
\begin{prop} \label{recfaces}
Assume that $W$ is irreducible and let $h$ be its Coxeter number. We have:
\begin{equation}\label{eqrecfaces}
\gamma(W , \mathcal{X}, m) = \frac{mh+2}{2k} \sum_{s\in S} \gamma( W_{(s)} , \mathcal{X}\cap W_{(s)} , m ).
\end{equation}
\end{prop}
\begin{proof}
Following the idea in~\cite{fominreading} used to count facets of $\Upsilon(W,m)$, we consider {\it pointed faces} and let:
\[
\Upsilon^\bullet(W, X, m)
:=
\big\{
(f,\rho) \;:\;
f\in \Upsilon(W, X, m),\;
\rho \in f
\big\}.
\]
The result comes from double counting. First, we clearly have
\[
\# \Upsilon^\bullet(W, \mathcal{X}, m)
=
k \cdot \# \Upsilon(W, \mathcal{X}, m)
=
k\cdot \gamma(W, \mathcal{X}, m),
\]
as each face $f\in \Upsilon(W, \mathcal{X}, m)$ contains $k$ vertices.
Second, we consider the action of $\mathcal{R}_m$ on $\Upsilon^\bullet(W, \mathcal{X}, m)$ by $\mathcal{R}_m\big( (f,\rho) \big) = \big(\mathcal{R}_m(f),\mathcal{R}_m(\rho)\big)$. Let $\omega \subset \Phi_{\geq -1}^{(m)}$ be an orbit for the action of $\mathcal{R}_m$. We will show that
\begin{align} \label{equation_to_prove}
\#\big\{ (f,\rho) \in \Upsilon^\bullet(W, \mathcal{X}, m) \;:\; \rho \in \omega \big\}
=
\frac{mh+2}{2}
\sum_{s\in R( (-\Delta) \cap \omega) }\gamma( W_{(s)} , \mathcal{X}\cap W_{(s)}, m ).
\end{align}
By summing over the orbits $\omega$, it will follow that $\# \Upsilon^\bullet(W, \mathcal{X}, m)$ is $k$ times the right-hand side of~\eqref{eqrecfaces} (since the union of the sets $R( (-\Delta) \cap \omega)$ is $R(-\Delta) = S$).
To prove~\eqref{equation_to_prove}, first observe that the cardinality on the left-hand side of~\eqref{equation_to_prove} actually does not depend on $\rho$. Indeed, the rotation $\mathcal{R}_m$ immediately gives a bijection between the set associated to $\rho$ and that associated to $\mathcal{R}_m(\rho)$.
Consider the first case of Proposition~\ref{prop:mrotationorbits}: $\#\omega =\frac{mh+2}{2}$ and $\omega$ contains one element of $-\Delta$, which we denote $-\delta_i$. By the previous observation, we get
\begin{align}
\#\big\{ (f,\rho) \in \Upsilon^\bullet(W, \mathcal{X}, m) \;:\; \rho \in \omega \big\}
=
\frac{mh+2}{2}
\#\big\{ f \in \Upsilon(W, \mathcal{X}, m) \;:\; -\delta_i \in f \big\}.
\end{align}
The cardinality on the right-hand side is also $\gamma(W_{(s_i)}, \mathcal{X} \cap W_{(s_i)} ,m )$. We thus get the term indexed by $s_i$ in \eqref{eqrecfaces} (up to the factor $k$).
Now, consider the second case of Proposition~\ref{prop:mrotationorbits}: $\#\omega =mh+2$ and $\omega$ contains two elements of $-\Delta$, which we denote $-\delta_i$ and $-\delta_j$. In this case, we get:
\begin{align}
\#\big\{ (f,\rho) \in \Upsilon^\bullet(W, \mathcal{X}, m) \;:\; \rho \in \omega \big\}
=
(mh+2) \cdot
\#\big\{ f \in \Upsilon(W, \mathcal{X}, m) \;:\; -\delta_i \in f \big\}.
\end{align}
Here, we get the two terms indexed by $s_i$ and $s_j$ in~\eqref{eqrecfaces} (up to the factor $k$). Indeed these two terms are equal (since $s_i$ and $s_j$ are conjugate), and combine to give $(mh+2)$ times one of the two terms.
\end{proof}
To finish this section, note that Equations~\eqref{eq:gammareducible}, \eqref{eq:gamma_uplus}, and \eqref{eqrecfaces} give an inductive procedure to compute $\gamma(W, X, m)$ as follows:
\begin{itemize}
\item if $\mathcal{X}$ is the conjugacy class of the Coxeter element, we have $\gamma(W, \mathcal{X},m) = 1$ (initial case),
\item if $W$ is reducible, use \eqref{eq:gammareducible},
\item if $W$ is irreducible, use~\eqref{eqrecfaces}, then use~\eqref{eq:gamma_uplus} on each summand (in particular $\gamma(W_{(s)} , \mathcal{X}\cap W_{(s)}, m ) = 0$ if $\mathcal{X}\cap W_{(s)}=\varnothing$),
\item repeat the previous two steps, until each term can be treated by the initial case.
\end{itemize}
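For instance, in type $A_2$ with $\mathcal{X}=\{e\}$ (the class of the identity), both maximal standard parabolic subgroups have type $A_1$, and in type $A_1$ (where $h=2$) the recursion~\eqref{eqrecfaces} reduces to the initial case. The procedure thus gives
\[
\gamma(A_1,\{e\},m)=\frac{2m+2}{2}\cdot 1=m+1,
\qquad
\gamma(A_2,\{e\},m)=\frac{3m+2}{2\cdot 2}\,\big((m+1)+(m+1)\big)=\frac{(3m+2)(m+1)}{2},
\]
which is the Fuss--Catalan number $\prod_{i=1}^{2}\frac{mh+d_i}{d_i}$ counting the facets of the generalized cluster complex of type $A_2$ (for $m=1$, the $5$ facets of the pentagon).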
It is also interesting to make explicit the intersection $\mathcal{X}\cap W_{(s)}$ in the various cases of the finite type classification. The common situation is that $\mathcal{X}\cap W_{(s)}$ is itself a single conjugacy class of $W_{(s)}$. This comes from the fact that there is often a unique conjugacy class of $W_{(s)}$ of a given Coxeter type (see the tables in~\cite[Appendix~C]{orliksolomon}), and all elements of $\mathcal{X} \cap W_{(s)}$ have the same Coxeter type. The exceptions are given explicitly as follows.
First, let $W$ be of type $D_n$, where $n>4$ is odd. Assume that $\mathcal{X}$ contains elements of type $A_{n-2}$, and that $W_{(s)}$ is the standard parabolic subgroup of type $D_{n-1}$. There is a unique order~$2$ automorphism of the diagram of type $D_{n-1}$ (unless $n=5$; we leave the details to the reader), and it defines an outer automorphism of $W_{(s)}$. Then $\mathcal{X}\cap W_{(s)}$ is the union of two conjugacy classes, which are images of each other under this outer automorphism.
In the case of $E_8$, there is a unique conjugacy class of type $A_5$, but the intersection with the parabolic subgroup of type $E_7$ is the union of two conjugacy classes. These are given by the following five-element subsets (circled) of the Dynkin diagram of $E_7$:
\[
\begin{tikzpicture}[scale=0.7]
\draw[fill=black] (0,0) circle (0.1cm);
\draw[fill=black] (1,0) circle (0.1cm);
\draw[fill=black] (2,0) circle (0.1cm);
\draw[fill=black] (3,0) circle (0.1cm);
\draw[fill=black] (4,0) circle (0.1cm);
\draw[fill=black] (5,0) circle (0.1cm);
\draw[fill=black] (2,1) circle (0.1cm);
\draw (0,0) -- (1,0);
\draw (1,0) -- (2,0);
\draw (2,0) -- (3,0);
\draw (3,0) -- (4,0);
\draw (4,0) -- (5,0);
\draw (2,0) -- (2,1);
\draw (2,1) circle (0.2cm);
\draw (2,0) circle (0.2cm);
\draw (5,0) circle (0.2cm);
\draw (4,0) circle (0.2cm);
\draw (3,0) circle (0.2cm);
\end{tikzpicture} \; ,\qquad
\begin{tikzpicture}[scale=0.7]
\draw[fill=black] (0,0) circle (0.1cm);
\draw[fill=black] (1,0) circle (0.1cm);
\draw[fill=black] (2,0) circle (0.1cm);
\draw[fill=black] (3,0) circle (0.1cm);
\draw[fill=black] (4,0) circle (0.1cm);
\draw[fill=black] (5,0) circle (0.1cm);
\draw[fill=black] (2,1) circle (0.1cm);
\draw (0,0) -- (1,0);
\draw (1,0) -- (2,0);
\draw (2,0) -- (3,0);
\draw (3,0) -- (4,0);
\draw (4,0) -- (5,0);
\draw (2,0) -- (2,1);
\draw (1,0) circle (0.2cm);
\draw (2,0) circle (0.2cm);
\draw (5,0) circle (0.2cm);
\draw (4,0) circle (0.2cm);
\draw (3,0) circle (0.2cm);
\end{tikzpicture} \; .
\]
The same phenomenon appears when $\mathcal{X}$ has type $A_1 \times A_3$; the two conjugacy classes in $E_7$ are then given by:
\[
\begin{tikzpicture}[scale=0.7]
\draw[fill=black] (0,0) circle (0.1cm);
\draw[fill=black] (1,0) circle (0.1cm);
\draw[fill=black] (2,0) circle (0.1cm);
\draw[fill=black] (3,0) circle (0.1cm);
\draw[fill=black] (4,0) circle (0.1cm);
\draw[fill=black] (5,0) circle (0.1cm);
\draw[fill=black] (2,1) circle (0.1cm);
\draw (0,0) -- (1,0);
\draw (1,0) -- (2,0);
\draw (2,0) -- (3,0);
\draw (3,0) -- (4,0);
\draw (4,0) -- (5,0);
\draw (2,0) -- (2,1);
\draw (2,1) circle (0.2cm);
\draw (5,0) circle (0.2cm);
\draw (4,0) circle (0.2cm);
\draw (3,0) circle (0.2cm);
\end{tikzpicture} \; ,\qquad
\begin{tikzpicture}[scale=0.7]
\draw[fill=black] (0,0) circle (0.1cm);
\draw[fill=black] (1,0) circle (0.1cm);
\draw[fill=black] (2,0) circle (0.1cm);
\draw[fill=black] (3,0) circle (0.1cm);
\draw[fill=black] (4,0) circle (0.1cm);
\draw[fill=black] (5,0) circle (0.1cm);
\draw[fill=black] (2,1) circle (0.1cm);
\draw (0,0) -- (1,0);
\draw (1,0) -- (2,0);
\draw (2,0) -- (3,0);
\draw (3,0) -- (4,0);
\draw (4,0) -- (5,0);
\draw (2,0) -- (2,1);
\draw (1,0) circle (0.2cm);
\draw (5,0) circle (0.2cm);
\draw (4,0) circle (0.2cm);
\draw (3,0) circle (0.2cm);
\end{tikzpicture} \; .
\]
Finally, this also occurs when $\mathcal{X}$ has type $A_1^3$; the two conjugacy classes in $E_7$ are then given by:
\[
\begin{tikzpicture}[scale=0.7]
\draw[fill=black] (0,0) circle (0.1cm);
\draw[fill=black] (1,0) circle (0.1cm);
\draw[fill=black] (2,0) circle (0.1cm);
\draw[fill=black] (3,0) circle (0.1cm);
\draw[fill=black] (4,0) circle (0.1cm);
\draw[fill=black] (5,0) circle (0.1cm);
\draw[fill=black] (2,1) circle (0.1cm);
\draw (0,0) -- (1,0);
\draw (1,0) -- (2,0);
\draw (2,0) -- (3,0);
\draw (3,0) -- (4,0);
\draw (4,0) -- (5,0);
\draw (2,0) -- (2,1);
\draw (2,1) circle (0.2cm);
\draw (5,0) circle (0.2cm);
\draw (3,0) circle (0.2cm);
\end{tikzpicture} \; ,\qquad
\begin{tikzpicture}[scale=0.7]
\draw[fill=black] (0,0) circle (0.1cm);
\draw[fill=black] (1,0) circle (0.1cm);
\draw[fill=black] (2,0) circle (0.1cm);
\draw[fill=black] (3,0) circle (0.1cm);
\draw[fill=black] (4,0) circle (0.1cm);
\draw[fill=black] (5,0) circle (0.1cm);
\draw[fill=black] (2,1) circle (0.1cm);
\draw (0,0) -- (1,0);
\draw (1,0) -- (2,0);
\draw (2,0) -- (3,0);
\draw (3,0) -- (4,0);
\draw (4,0) -- (5,0);
\draw (2,0) -- (2,1);
\draw (1,0) circle (0.2cm);
\draw (5,0) circle (0.2cm);
\draw (3,0) circle (0.2cm);
\end{tikzpicture} \; .
\]
\section{The recursion for counting factorizations}
\label{sec:factor}
In this final section, we consider a class of minimal factorizations of the Coxeter element and their refined enumeration via a $q$-statistic defined in terms of the orders $\sqsubset$ and $\ll$. This kind of refined enumeration was first considered in~\cite{josuatverges} and reinterpreted in~\cite{bianejosuat}; these references deal with reflection factorizations of the Coxeter element.
Here we consider more general factorizations where the first factor lies in a given parabolic conjugacy class $\mathcal{X}$. They were considered by the first author in~\cite{douvropoulos}. As for the refined enumeration, the resulting polynomial $\phi(W,\mathcal{X},q)$ is shown to satisfy a recursion equivalent to the one obtained in Section~\ref{sec:recfaces} for $\gamma(W, \mathcal{X}, m)$. Consequently, the formula for $\gamma$ in Corollary~\ref{formulas_gammagammaplus} yields a formula for $\phi(W,\mathcal{X},q)$ in terms of the Orlik-Solomon exponents.
\begin{defi}
Let $\mathcal{X}$ be a parabolic conjugacy class, and let $k$ be such that the elements of $\mathcal{X}$ have reflection length $n-k$. We define $\mathfrak{F}(W,\mathcal{X},c)$ as the set of length-additive factorizations $c=wt_1\cdots t_k$ where $w\in\mathcal{X}$ and $t_1,\dots,t_k \in T$. We define a statistic $\nil$ on this set by
\[
\nil( w t_1 \cdots t_k )
:=
\# \big\{ i \in \{1,\dots,k\} \;: \; wt_1\cdots t_{i-1} \ll wt_1\cdots t_i \big\},
\]
and the associated generating function:
\[
\phi(W,\mathcal{X},q)
:=
\sum_{ w t_1 \cdots t_k \in \mathfrak{F}(W , \mathcal{X} , c)}
q^{\nil(wt_1\cdots t_k)}.
\]
\end{defi}
Let us comment on this definition. First, note that the factorizations above are {\it length-additive}, i.e., $\ell(c) = \ell(w) + \ell(t_1) +\dots + \ell(t_k)$. Also, it is not {\it a priori} obvious from the definition that the generating function does not depend on the Coxeter element. This property will be apparent from the recursion given below (in particular, it is part of the induction hypothesis). We anticipate this by omitting the dependence on $c$ from the notation.
Since the factorizations are minimal, the two elements $wt_1\cdots t_{i-1}$ and $wt_1\cdots t_i$ in the definition of the $\nil$ statistic form a cover relation in $\NC(W)$. As these two elements differ by multiplying with a reflection, they are comparable in the {\it Bruhat order}, denoted $\leq_B$. We have:
\begin{align}
wt_1\cdots t_{i-1} \ll wt_1\cdots t_i
\quad &\Longleftrightarrow\quad
wt_1\cdots t_{i-1} \geq_B wt_1\cdots t_i, \\
wt_1\cdots t_{i-1} \sqsubset wt_1\cdots t_i
\quad &\Longleftrightarrow\quad
wt_1\cdots t_{i-1} \leq_B wt_1\cdots t_i.
\end{align}
Though it is interesting to make the connection with the order $\ll$ used throughout this work, here it will be adequate to think in terms of the Bruhat order and write:
\[
\nil( w t_1 \cdots t_k )
=
\# \big\{ i \in \{1,\dots,k\} \;: \; wt_1\cdots t_{i-1} \geq_B wt_1\cdots t_i \big\}.
\]
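This description makes the statistic easy to test in type $A$, where $W=S_n$, $T$ is the set of transpositions, the Coxeter length is the inversion number, and (a standard fact) two elements differing by a reflection satisfy $u >_B ut$ if and only if $u$ has more inversions than $ut$. The following brute-force sketch (the function names are ours) computes the coefficients of $\phi(W,\mathcal{X},q)$ for the identity class:

```python
from itertools import product

def compose(p, q):
    # (p o q)[i] = p[q[i]]: apply q first, then p
    return tuple(p[q[i]] for i in range(len(p)))

def inversions(p):
    # Coxeter length of a permutation = number of inversions
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def phi_identity_class(n):
    """Coefficients of phi(S_n, {e}, q): q-count of the minimal factorizations
    c = t_1 ... t_{n-1} of a Coxeter element into transpositions, weighted by nil."""
    ident = tuple(range(n))
    transpositions = []
    for i in range(n):
        for j in range(i + 1, n):
            t = list(ident)
            t[i], t[j] = t[j], t[i]
            transpositions.append(tuple(t))
    c = tuple((i + 1) % n for i in range(n))  # standard Coxeter element s_1 s_2 ... s_{n-1}
    coeffs = {}
    for ts in product(transpositions, repeat=n - 1):
        w, prefixes = ident, [ident]
        for t in ts:
            w = compose(w, t)
            prefixes.append(w)
        if w != c:
            continue
        # nil = number of steps where the inversion number (the Bruhat order) drops
        nil = sum(1 for a, b in zip(prefixes, prefixes[1:]) if inversions(a) > inversions(b))
        coeffs[nil] = coeffs.get(nil, 0) + 1
    return coeffs

print(phi_identity_class(3))  # {0: 2, 1: 1}, i.e. phi(A_2, {e}, q) = 2 + q
print(phi_identity_class(4))  # coefficients of phi(A_3, {e}, q)
```

For $S_4$ this returns the coefficients of $6+8q+2q^2$, in agreement with the closed formula $\frac{n!}{|W|}\prod_{i}(d_i+q(h-d_i))$ stated at the end of this section.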
Let us recall some facts about {\it inversions}, which are closely connected to the Bruhat order. Recall from Section~\ref{sec:defclus} that $\Phi_+$ (respectively, $\Phi_-$) is the set of positive roots (respectively, negative roots). We call:
\begin{itemize}
\item $\alpha\in\Phi_+$ an {\it inversion} of $w\in W$ if $w(\alpha)\in\Phi_-$,
\item $t\in T$ a {\it right inversion} of $w\in W$ if $wt$ has smaller Coxeter length than $w$.
\end{itemize}
These two notions are related: $\alpha\in \Phi_+$ is an inversion of $w$ if and only if $R(\alpha)$ is a right inversion of $w$.
\begin{lemm} \label{inv_rem1}
There is a bijection between the right inversions of $c$ and $S$. It sends $t$ to the unique $s\in S$ such that $ct \in W_{(s)}$.
\end{lemm}
\begin{proof}
A reduced expression of $c$ is $c=s_1\cdots s_n$. By a well-known fact, it follows that $c$ has $n$ right inversions $t_1,\dots,t_n$, explicitly given by:
\[
c t_i = s_1 \cdots \check{s_i} \cdots s_n
\]
where $\check{s_i}$ means that $s_i$ is omitted in the product. The result follows straightforwardly.
\end{proof}
\begin{lemm} \label{lemm:orbits_roots}
Assume that $W$ is irreducible, and let $h$ be its Coxeter number. Each orbit for the action of $c$ on $\Phi$ has cardinality $h$ and contains exactly one inversion of $c$.
\end{lemm}
\begin{proof}
The fact that the orbits have cardinality $h$ follows from Steinberg's indexing of the root system (which we mentioned in Section~\ref{sec:defclus}, in relation with the reflection ordering). Though this concerns the bipartite Coxeter element, the other standard Coxeter elements are conjugate to it, and the result follows.
Next, we show that each orbit $\omega \subset \Phi$ contains at least one inversion. Since $c$ has $n$ inversions by the previous lemma, it will follow that each of the $n$ orbits contains exactly one inversion by the pigeonhole principle. So, let $\omega \subset \Phi$ be an orbit. Since $h$ is the order of $c$, we have:
\[
0 = c^h - I = (c-I)\Big(\sum_{i=0}^{h-1} c^i\Big).
\]
As $1$ is not an eigenvalue of $c$ (see~\cite[Section~3.19]{humphreys} for details), $c-I$ is invertible. Therefore
\[
\sum_{i=0}^{h-1} c^i = 0.
\]
Since the orbit of any $\alpha\in\omega$ has cardinality $h$, evaluating the linear operators on both sides of this equation at such an $\alpha$ gives $\sum_{\alpha\in\omega}\alpha = 0$. So $\omega$ contains both positive and negative roots. We deduce that $\omega$ contains at least one positive root whose image by $c$ is negative, {\it i.e.}, an inversion of $c$.
\end{proof}
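In type $A_{n-1}$ this is easy to check directly: the roots are $e_i-e_j$ ($i\neq j$), encoded below as ordered pairs $(i,j)$, a Coxeter element acts as an $n$-cycle on indices, and a positive root ($i<j$) is an inversion of $c$ precisely when $c(i)>c(j)$. A small verification sketch (the encoding is ours):

```python
from itertools import combinations

def check_orbits(n):
    """Orbits of a Coxeter element on the roots of A_{n-1}: each has size h = n
    and contains exactly one inversion; returns the number of orbits (= rank)."""
    c = tuple((i + 1) % n for i in range(n))      # standard Coxeter element (an n-cycle)
    act = lambda r: (c[r[0]], c[r[1]])            # action on the root e_i - e_j
    pos = list(combinations(range(n), 2))         # positive roots: pairs with i < j
    roots = pos + [(j, i) for (i, j) in pos]
    seen, orbits = set(), []
    for r in roots:
        if r in seen:
            continue
        orb, x = [r], act(r)
        while x != r:
            orb.append(x)
            x = act(x)
        seen.update(orb)
        orbits.append(orb)
    h = n                                         # Coxeter number of A_{n-1}
    for orb in orbits:
        invs = [r for r in orb if r[0] < r[1] and act(r)[0] > act(r)[1]]
        assert len(orb) == h and len(invs) == 1
    return len(orbits)

print(check_orbits(4))  # 3 (= rank of A_3)
```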
The next proposition is similar to Proposition~\ref{prop:mrotationorbits} (and to Steinberg's result described afterwards). Here, however, the statement holds for any standard Coxeter element, not just the bipartite Coxeter element.
\begin{prop}\label{propcconj}
Assume that $W$ is irreducible, and let $h$ be its Coxeter number. Then each orbit $\omega \subset T$ for the action of $c$ by conjugation satisfies:
\begin{itemize}
\item either $\#\omega = h/2$ and it contains one right inversion of $c$,
\item or $\#\omega = h$ and it contains two right inversions of $c$.
\end{itemize}
\end{prop}
\begin{proof}
The action of $c$ by conjugation on the set $T$ of reflections is related to the action of $c$ on roots by
\begin{equation}
R \big( c(\alpha) \big) = c \cdot R(\alpha) \cdot c^{-1}.
\end{equation}
So, this essentially follows from Lemma~\ref{lemm:orbits_roots}. Let $\omega \subset T$ be an orbit. Note that $R^{-1}(\omega) \subset \Phi$ is stable under the action of $c$ and under the map $-I$.
First suppose that $R^{-1}(\omega)$ is a single orbit under the action of $c$. Since the map $R$ is 2-to-1, $\omega$ has cardinality $h/2$. By Lemma~\ref{lemm:orbits_roots}, $R^{-1}(\omega)$ contains one inversion of $c$, and it follows that $\omega$ contains one right inversion of $c$.
Now suppose that $R^{-1}(\omega)$ is the union of two orbits. Since the map $R$ is 2-to-1, $\omega$ has cardinality $h$. By Lemma~\ref{lemm:orbits_roots}, $R^{-1}(\omega)$ contains two inversions of $c$, and it follows that $\omega$ contains two right inversions of $c$.
\end{proof}
To state the next result, we first need to slightly extend the definition of $\phi$, completely analogously to what we did for $\gamma$ and $\kappa$. If $\mathcal{Y} = \uplus_{i=1}^j \mathcal{X}_i$ is a disjoint union of parabolic conjugacy classes, we define:
\begin{equation} \label{union_sum}
\phi( W, \mathcal{Y} , q ) := \sum_{i=1}^j \phi( W, \mathcal{X}_i , q ).
\end{equation}
As before, this situation occurs when we consider the intersection of a parabolic conjugacy class with a parabolic subgroup.
\begin{lemm} \label{lemm:induc}
Let $t \in T$. There is an integer $i$ such that $c^i t c^{-i}$ is a right inversion of $c$, and this right inversion corresponds to an element $s\in S$ via the bijection of Lemma~\ref{inv_rem1}. With this notation, we have:
\begin{align} \label{lemma_sumt_k}
\sum_{ \substack{ wt_1 \cdots t_k \in \mathfrak{F}(W,\mathcal{X},c), \\[1mm] t_k=t } }
q^{\nil(wt_1 \cdots t_k )}
=
\begin{cases}
\phi( W_{(s)}, \mathcal{X} \cap W_{(s)} , q ) & \text{ if $t$ is a right inversion of $c$}, \\
q \phi( W_{(s)}, \mathcal{X} \cap W_{(s)} , q ) & \text{ otherwise}.
\end{cases}
\end{align}
\end{lemm}
\begin{proof}
Suppose first that $t$ is a right inversion of $c$, so that $ct \in W_{(s)}$. It follows that $wt_1\cdots t_{k-1} \leq_B w t_1 \cdots t_k$, so this pair of elements does not contribute to the $\nil$ statistic. Since $ct$ is $s_1\cdots s_n$ with $s$ omitted (see the proof of Lemma~\ref{inv_rem1}), it is a standard Coxeter element of $W_{(s)}$. To each factorization $ c = w t_1 \cdots t_{k}$ as in the left-hand side of~\eqref{lemma_sumt_k}, we associate $ct = w t_1 \cdots t_{k-1}$, which is again a minimal factorization. We thus immediately obtain the generating function $\phi( W_{(s)}, \mathcal{X} \cap W_{(s)} , q )$.
Consider the second case. We now have $wt_1\cdots t_{k-1} \geq_B w t_1 \cdots t_k$, so this pair of elements contributes $1$ to the $\nil$ statistic, giving the factor $q$. As in the previous case, the sum can be seen as a sum over factorizations of $ct$. We thus get $q \phi( W_{\Fix(ct)}, \mathcal{X} \cap W_{\Fix(ct)} , q )$, but we have
\[
\phi( W_{\Fix(ct)}, \mathcal{X} \cap W_{\Fix(ct)} , q )
=
\phi( W_{(s)}, \mathcal{X} \cap W_{(s)} , q )
\]
since $W_{\Fix(ct)}$ is conjugate to $W_{(s)}$ in $W$.
\end{proof}
\begin{prop} \label{recfactors}
Assume that $W$ is irreducible, and let $h$ be its Coxeter number. We have:
\begin{equation} \label{eqrecfactors}
\phi(W, \mathcal{X}, q) = \frac{2+q(h-2)}{2} \sum_{s\in S} \phi( W_{(s)} , \mathcal{X}\cap W_{(s)}, q ).
\end{equation}
\end{prop}
\begin{proof}
We partition the set $\mathfrak{F}(W,\mathcal{X},c)$ into subsets $\mathfrak{F}_\omega$, where $\omega$ are the orbits for the action of $c$ on $T$:
\[
\mathfrak{F}_\omega
:=
\{ wt_1\cdots t_k \in \mathfrak{F}(W,\mathcal{X},c) \;:\; t_k \in \omega \},
\]
so that $\phi(W, \mathcal{X}, q)$ is obtained by summing the generating functions of the sets $\mathfrak{F}_\omega$.
We first consider the first case of Proposition~\ref{propcconj}, so that $\#\omega = \frac{h}{2}$ and $\omega$ contains a right inversion $t \in T$ of $c$. Let $s\in S$ be such that $ct \in W_{(s)}$. Using Lemma~\ref{lemm:induc}, we get:
\begin{align} \label{eq:omega_sum1}
\sum_{ w t_1 \cdots t_k \in \mathfrak{F}_\omega } q^{\nil(wt_1 \cdots t_k)}
=
(1+q(\tfrac{h}{2}-1)) \phi( W_{(s)}, \mathcal{X} \cap W_{(s)} , q ).
\end{align}
Now, we consider the other case and assume that $\#\omega = h$. By Proposition~\ref{propcconj}, this orbit contains two right inversions of $c$, that we denote $t,t' \in T$. Also, let $s,s'\in S$ such that $ct \in W_{(s)}$ and $ct' \in W_{(s')}$. Using Lemma~\ref{lemm:induc}, we get:
\begin{align}
\sum_{ w t_1 \cdots t_k \in \mathfrak{F}_\omega } q^{\nil(wt_1 \cdots t_k)}
=
(2+q(h-2)) \phi( W_{(s)} , \mathcal{X}\cap W_{(s)}, q )
\end{align}
which can also be written:
\begin{align} \label{eq:omega_sum2}
\sum_{ w t_1 \cdots t_k \in \mathfrak{F}_\omega } q^{\nil(wt_1 \cdots t_k)}
=
(1+q(\tfrac{h}{2}-1))
\big( \phi(W_{(s)} , \mathcal{X}\cap W_{(s)}, q )
+\phi( W_{(s')} , \mathcal{X}\cap W_{(s')}, q )
\big).
\end{align}
By summing \eqref{eq:omega_sum1} and \eqref{eq:omega_sum2} over all orbits, we get the result. The fact that each $s\in S$ appears exactly once in the sum follows from Lemma~\ref{inv_rem1}.
\end{proof}
As for the reducible case, we have:
\begin{prop}
Suppose $W=W_1\times W_2$. For $i\in\{1,2\}$, let $\mathcal{X}_i$ be a parabolic conjugacy class in $W_i$, and let $k_i$ be such that the elements of $\mathcal{X}_i$ have reflection length $n_i-k_i$, where $n_i$ is the rank of $W_i$. Then:
\begin{equation} \label{prop_redcase}
\phi(W_1\times W_2, \mathcal{X}_1\times\mathcal{X}_2, q ) = \binom{k_1+k_2}{k_1}
\phi(W_1,\mathcal{X}_1,q)
\phi(W_2,\mathcal{X}_2,q).
\end{equation}
\end{prop}
\begin{proof}
The Coxeter element is denoted $c=(c_1,c_2)$. For $i \in \{1,2\}$, the projection $W_1\times W_2 \to W_i$ onto each factor gives a map $\mathfrak{F}(W_1\times W_2, \mathcal{X}_1 \times \mathcal{X}_2, (c_1,c_2) ) \to \mathfrak{F}(W_i, \mathcal{X}_i, c_i )$ (ignoring the factors $t_j$ that are not reflections of $W_i$, as they project to the identity of $W_i$). Together, these two maps give one map
\[
\mathfrak{F}(W_1\times W_2, \mathcal{X}_1 \times \mathcal{X}_2, (c_1,c_2) )
\to
\mathfrak{F}(W_1, \mathcal{X}_1, c_1 ) \times \mathfrak{F}(W_2, \mathcal{X}_2, c_2 ).
\]
It is easily seen that this map is $\binom{k_1+k_2}{k_1}$-to-$1$. Indeed, we recover the initial factorization $w t_1 \cdots t_{k_1+k_2}$ if we know which $t_i$ correspond to reflections of $W_1$ (or of $W_2$), and this amounts to a choice of $k_1$ indices among $k_1+k_2$.
It is also easily seen that the $q$-statistic is preserved by the above map, as the Bruhat order on $W_1\times W_2$ identifies with the product of the two Bruhat orders.
\end{proof}
Gathering the previous propositions, we have an inductive way to compute $\phi(W, \mathcal{X}, q)$:
\begin{itemize}
\item if $\mathcal{X}$ is the conjugacy class of the Coxeter element, we have $\phi(W, \mathcal{X}, q) = 1$ (initial case),
\item if $W$ is reducible, use~\eqref{prop_redcase},
\item if $W$ is irreducible, use~\eqref{eqrecfactors}, then use~\eqref{union_sum} on each term,
\item repeat the previous two steps until each term can be treated via the initial case.
\end{itemize}
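To illustrate the procedure, take $W$ of type $A_3$ (so $h=4$) with $\mathcal{X}=\{e\}$; the three maximal standard parabolic subgroups $W_{(s)}$ have types $A_2$, $A_1\times A_1$, and $A_2$. Since in type $A_1$ (where $h=2$) the recursion~\eqref{eqrecfactors} reduces to the initial case, we get
\[
\phi(A_1,\{e\},q)=\frac{2+q(2-2)}{2}\cdot 1=1,
\qquad
\phi(A_2,\{e\},q)=\frac{2+q}{2}\,(1+1)=2+q,
\]
while \eqref{prop_redcase} gives $\phi(A_1\times A_1,\{e\},q)=\binom{2}{1}\cdot 1\cdot 1=2$. Therefore
\[
\phi(A_3,\{e\},q)=\frac{2+2q}{2}\,\big((2+q)+(2+q)+2\big)=(1+q)(6+2q)=6+8q+2q^{2},
\]
and at $q=1$ we recover $16=\frac{3!\cdot 4^{3}}{24}$, the number of minimal reflection factorizations of a Coxeter element of $S_4$.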
\begin{theo}
Let $\mathcal{X}$ be a parabolic conjugacy class, and $k$ such that the elements of $\mathcal{X}$ have reflection length $n-k$. We have:
\begin{align} \label{eq:phigamma}
\phi(W,\mathcal{X},q) = k! (1-q)^k \gamma\big( W , \mathcal{X} , \tfrac{q}{1-q} \big).
\end{align}
\end{theo}
\begin{proof}
Up to a rescaling of the polynomials, the recursion computing $\phi$ is equivalent to the one we showed $\gamma$ satisfies (see Section~\ref{sec:recfaces}). The equality follows by induction.
\end{proof}
From the formula for $\gamma$ in terms of a characteristic polynomial, we immediately obtain the following formula for $\phi$. It is conveniently stated in terms of the Orlik-Solomon exponents.
\begin{coro}
Let $\mathcal{X}$ be a parabolic conjugacy class, let $X = \Fix(w)$ for some $w\in\mathcal{X}$, $k=\dim(X)$, and recall that $b_1^X,\dots, b_k^X$ are the Orlik-Solomon exponents of $X$. We have:
\begin{align*}
\phi(W,\mathcal{X},q)
=
\frac{k!}{[ N(W_X) : W_X ]} \prod_{i=1}^k \Big( b_i^{X} + 1 + q( h - b_i^{X} - 1) \Big).
\end{align*}
\end{coro}
\begin{proof}
This follows from~\eqref{eq:phigamma}, together with the formula for $\gamma$ in~\eqref{eq:formulagamma}.
\end{proof}
In particular, the case $q=1$ gives:
\begin{align*}
\phi(W,\mathcal{X},1)
=
\frac{k! h^k}{[ N(W_X) : W_X ]}.
\end{align*}
This was obtained by the first author~\cite[Theorem~99]{douvropoulos} by geometric methods (using the Lyashko-Looijenga morphism). On the other hand, the case where $X$ is the maximal element of $L(W)$ corresponds to the $q$-enumeration of minimal reflection factorizations of $c$. Another particular case is the following. The second author showed in \cite{josuatverges} (see also~\cite[Section~4.6]{bianejosuat}) that:
\begin{align*}
\phi(W,\{0\},q)
=
\frac{n!}{|W|} \prod_{i=1}^n (d_i + q(h-d_i))
\end{align*}
(where we use the {\it degrees} $d_i=e_i+1$ rather than the exponents $e_i$). This is a one-parameter refinement of the Deligne-Looijenga number
\[
\frac{n! h^n}{|W|}.
\]
\section{Further questions}
In Corollary~\ref{formulas_gammagammaplus}, we give a refinement of the $f$-vector of the generalized cluster complex $\Upsilon(W,m)$ that looks \emph{almost} like a flag $f$-vector for it (as in \cite[Ch.III,\S4]{stanley}; almost, because it is indexed by \emph{classes} of subsets of $[n]$). This is surprising because $\Upsilon(W,m)$ is not a balanced complex. The Coxeter complex of $W$, though, is balanced, and it is natural to ask (we thank Vic Reiner for these questions) whether there is a connection between these objects that can explain the appearance of this almost flag $f$-vector. Is this part of a more general theory for non-balanced complexes?
In Section~\ref{sec:clusterpark}, we defined cluster parking functions (and their positive counterpart). As explained there, it would be very interesting to prove that these complexes are wedges of spheres, and to find an explicit homology basis.
In Proposition~\ref{prop:invariance}, we showed that $\Upsilon(W,X,m)$ is stable under the action of $\mathcal{R}_m$. It would be interesting to find a cyclic sieving phenomenon for this action (see~\cite{reinersommers} for similar and possibly related results).
| {
    "arxiv_id": "2209.12540",
    "url": "https://arxiv.org/abs/2209.12540",
    "title": "The Generalized Cluster Complex: Refined Enumeration of Faces and Related Parking Spaces",
    "subjects": "Combinatorics (math.CO); Representation Theory (math.RT)",
    "abstract": "The generalized cluster complex was introduced by Fomin and Reading, as a natural extension of the Fomin-Zelevinsky cluster complex coming from finite type cluster algebras. In this work, to each face of this complex we associate a parabolic conjugacy class of the underlying finite Coxeter group. We show that the refined enumeration of faces (respectively, positive faces) according to this data gives an explicit formula in terms of the corresponding characteristic polynomial (equivalently, in terms of Orlik-Solomon exponents). This characteristic polynomial originally comes from the theory of hyperplane arrangements, but it is conveniently defined via the parabolic Burnside ring. This makes a connection with the theory of parking spaces: our results eventually rely on some enumeration of chains of noncrossing partitions that were obtained in this context. The precise relations between the formulas counting faces and the one counting chains of noncrossing partitions are combinatorial reciprocities, generalizing the one between Narayana and Kirkman numbers."
} |
https://arxiv.org/abs/2011.13808 | Asymptotic behavior and zeros of the Bernoulli polynomials of the second kind | The main aim of this article is a careful investigation of the asymptotic behavior of zeros of Bernoulli polynomials of the second kind. It is shown that the zeros are all real and simple. The asymptotic expansions for the small, large, and the middle zeros are computed in more detail. The analysis is based on the asymptotic expansions of the Bernoulli polynomials of the second kind in various regimes.

\section{Introduction}
The Bernoulli polynomials of the second kind $b_{n}$ are defined by the generating function
\begin{equation}
\sum_{n=0}^{\infty}b_{n}(x)\frac{t^{n}}{n!}=\frac{t}{\ln(1+t)}(1+t)^{x}, \quad |t|<1.
\label{eq:gener_func_BP_sec}
\end{equation}
Up to a shift, they coincide with the generalized Bernoulli polynomials of order $n$, precisely,
\begin{equation}
b_{n}(x)=B_{n}^{(n)}(x+1),
\label{eq:BP_sec_rel_B}
\end{equation}
where the generalized Bernoulli polynomials of order $a\in{\mathbb{C}}$ are defined by the generating function
\begin{equation}
\sum_{n=0}^{\infty}B_{n}^{(a)}(x)\frac{t^{n}}{n!}=\left(\frac{t}{e^{t}-1}\right)^{a}e^{xt}, \quad |t|<2\pi.
\label{eq:gener_B_n_a}
\end{equation}
Among the numerous branches of mathematics where the generalized Bernoulli polynomials play a significant role, we emphasize the theory of finite integration and interpolation which is nicely exposed in the classical treatise of N\"{o}rlund~\cite{norlund_24} (unfortunately still not translated to English).
In contrast to the Bernoulli polynomials of the first kind~$B_{n}\equiv B_{n}^{(1)}$, the Bernoulli polynomials of the second kind appear less frequently. Still, a great deal of research focuses on the study of their various generalizations and their combinatorial, algebraic, and analytical properties; let us mention at least~\cite{carlitz-sm61,guo_etal-rmjm16,gupta_prabhakar-ijpam80,kim_etal-ade14,roman84,srivastava-choi01, srivastava-todorov_jmaa88}. Concerning the Bernoulli polynomials of the first kind~$B_{n}$, a significant part of the research is devoted to the asymptotic properties of their zeros, which exhibit a fascinating complexity and structure: the asymptotic zero distribution of~$B_{n}$, for $n\to\infty$, was obtained in~\cite{boyer-goh_aam07}; for results on the asymptotic behavior of the real zeros of~$B_{n}$ for $n$ large, see~\cite{delange_86,delange_91,efimov_fm08,inkeri-auts59,veselov-ward_jmaa05}; the proof that the zeros of~$B_{n}$ are actually all simple is in~\cite{brillhart-jram69,dilcher_aa08}; and further results on the number of real zeros of~$B_{n}$ can be found in~\cite{edwards-leeming_jat12,leeming_jat89}.
On the other hand, it seems that the asymptotic behavior of zeros of the Bernoulli polynomials of the second kind has not been studied yet. This contrast was a motivation for the present work whose main goal is to fill this gap. To this end, let us mention that the asymptotic behavior of the polynomials $B_{n}^{(a)}$, for $n$ large and the order~$a$ not depending on~$n$, was studied in~\cite{lopez-temme_jmaa99,lopez-temme_jmaa10}.
Throughout the paper, we prefer to work with the polynomials $B_{n}^{(n)}$ rather than the Bernoulli polynomials of the second kind~$b_{n}$ themselves; this makes no difference due to the simple relation~\eqref{eq:BP_sec_rel_B}. First, we prove that, unlike the zeros of the Bernoulli polynomials of the first kind, the zeros of the second kind Bernoulli polynomials can only be real. Moreover, all zeros are simple, located in the interval~$[0,n]$, and interlace with the integers $1,2,\dots,n$ (Theorem~\ref{thm:Ber_sec_zer_real}).
Next, we focus on the small zeros of~$B_{n}^{(n)}$, i.e., the zeros located at a fixed distance from the origin. The proof of their asymptotic behavior (Theorem~\ref{thm:asympt_zer_small}) is based on a complete local uniform asymptotic formula for~$B_{n}^{(n)}$ (Theorem~\ref{thm:Ber_sec_compl_asympt}). It turns out that the zeros of~$B_{n}^{(n)}$ are distributed symmetrically around the point $n/2$, and hence we also obtain asymptotic formulas for the large zeros, i.e., the zeros located at a fixed distance from $n$.
Further, the asymptotic behavior of the zeros of~$B_{n}^{(n)}$ located at a fixed distance from a point~$\alpha n$, where $\alpha\in(0,1)$, is examined. The analysis uses an interesting integral representation of $B_{n}^{(n)}$ which may be of independent interest (Theorem~\ref{thm:beta_integr_repre}). With the aid of Laplace's method, the leading term of the asymptotic expansion of~$B_{n}^{(n)}(z+\alpha n)$, as $n\to\infty$, is deduced (Theorem~\ref{thm:asympt_ber_alpha}), and limit formulas for the zeros located around $\alpha n$ then follow (Corollary~\ref{cor:lim_zer_alp}). Particular attention is paid to the middle zeros, i.e., the case $\alpha=1/2$, where more detailed results are obtained. First, a complete local uniform asymptotic expansion for $B_{n}^{(n)}(z+n/2)$, as $n\to\infty$, is derived (Theorem~\ref{thm:asympt_ber_alpha1/2}). As a consequence, we obtain several terms of the asymptotic expansion of the middle zeros (Theorem~\ref{thm:asympt_middle_zer}).
The asymptotic formulas for~$B_{n}^{(n)}$ that are used to analyze the zeros can be viewed as the asymptotic expansion of the scaled polynomials $B_{n}^{(n)}(nz)$ in the oscillatory region $z\in(0,1)$ and close to the edge points $z=0$ and $z=1$. To complete the picture, the leading term of the asymptotic expansion of $B_{n}^{(n)}(nz)$ in the non-oscillatory regime, i.e., the zero-free region $z\in{\mathbb{C}}\setminus[0,1]$, is derived (Theorem~\ref{thm:asympt_ber_non-osc}) using the saddle point method. Finally, we formulate several open research problems at the end of the paper.
\section{Asymptotic behavior and zeros of the Bernoulli polynomials of the second kind}
First, we recall several basic properties of the polynomials $B_{n}^{(a)}$ that are used in this section. For $n\in\mathbb{N}$ and $x,a\in{\mathbb{C}}$, the identities
\begin{equation}
B_{n}^{(a)}(x+1)=B_{n}^{(a)}(x)+nB_{n-1}^{(a-1)}(x), \quad \frac{\partial}{\partial x}B_{n}^{(a)}(x)=nB_{n-1}^{(a)}(x)
\label{eq:ber_a_ident1}
\end{equation}
and
\begin{equation}
B_{n}^{(a)}(-x)=(-1)^{n}B_{n}^{(a)}(x+a)
\label{eq:symmetry_id_genBP}
\end{equation}
can be derived readily from~\eqref{eq:gener_B_n_a}, see also~\cite[Chp.~4, Sec.~2]{roman84}. Next, by making use of the special value
\begin{equation}
B_{n-1}^{(n)}(x)=\prod_{k=1}^{n-1}(x-k)
\label{eq:B_n-1_n_explicit}
\end{equation}
together with the identities from~\eqref{eq:ber_a_ident1}, one easily deduces the well-known integral formula
\begin{equation}
B_{n}^{(n)}(x)=\int_{0}^{1}\prod_{k=1}^{n}(x+y-k)\,{\rm d} y.
\label{eq:int_repre_simple}
\end{equation}
\subsection{Real and simple zeros}
An elementary proof of the reality and simplicity of the zeros of the Bernoulli polynomials of the second kind is based on an inspection of the integer values of these polynomials.
\begin{thm}\label{thm:Ber_sec_zer_real}
The zeros of $B_{n}^{(n)}$ are real and simple. In addition, if $x_{1}^{(n)}<x_{2}^{(n)}<\dots<x_{n}^{(n)}$ denote the zeros of $B_{n}^{(n)}$,
then
\begin{equation}
k-1<x_{k}^{(n)}<k,
\label{eq:loc_zeros_first}
\end{equation}
for $1\leq k\leq n$. Further, the zeros of $B_{n}^{(n)}$ are distributed symmetrically around the value $n/2$, i.e.,
\begin{equation}
x_{k}^{(n)}=n-x_{n-k+1}^{(n)},
\label{eq:symmetry_zeros}
\end{equation}
for $1\leq k\leq n$. In particular, $x_{n}^{(2n-1)}=n-1/2$.
\end{thm}
\begin{proof}
We start with the integral representation~\eqref{eq:int_repre_simple} which implies that
\[
B_{n}^{(n)}(k)=\int_{0}^{1}\prod_{j=1}^{n}\left(y+k-j\right){\rm d} y=
(-1)^{n+k}\int_{0}^{1}\left[\prod_{i=0}^{k-1}\left(y+i\right)\right]\left[\prod_{j=1}^{n-k}\left(j-y\right)\right]{\rm d} y,
\]
for any integer $0\leq k\leq n$. Since each factor in the above integral is positive for $y\in(0,1)$, the whole integral has to be positive and therefore
\begin{equation}
(-1)^{n+k}B_{n}^{(n)}(k)>0,
\label{eq:int_val_sign}
\end{equation}
for $0\leq k\leq n$.
Consequently, the signs of the values $B_{n}^{(n)}(k)$ alternate for $0\leq k\leq n$, and hence there is at least one root of $B_{n}^{(n)}$ in each interval $(k-1,k)$ for $1\leq k\leq n$. Since the polynomial $B_{n}^{(n)}$ is of degree $n$, there is exactly one zero in each interval $(k-1,k)$, for $1\leq k\leq n$. Thus, $B_{n}^{(n)}$ has $n$ distinct roots located in the intervals $(k-1,k)$, $1\leq k\leq n$, and these roots are necessarily simple.
The symmetry of the distribution of the roots around $n/2$ follows readily from the identity~\eqref{eq:symmetry_id_genBP} which implies
\begin{equation}
B_{n}^{(n)}\left(\frac{n}{2}-x\right)=(-1)^{n}B_{n}^{(n)}\left(\frac{n}{2}+x\right)\!,
\label{eq:symmetry_ber_sec}
\end{equation}
for $n\in\mathbb{N}_{0}$.
\end{proof}
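The proof above can be illustrated numerically. The following sketch (an illustration, not part of the proof) locates the $n$ zeros by bisection, one per interval $(k-1,k)$, and checks the symmetry $x_{k}^{(n)}=n-x_{n-k+1}^{(n)}$ for $n=6$.

```python
def B_nn(n, x):
    """B_n^{(n)}(x) = int_0^1 prod_{k=1}^n (x+y-k) dy, by exact
    integration of the polynomial in y (float coefficients)."""
    coeffs = [1.0]
    for k in range(1, n + 1):
        c = x - k
        new = [0.0] * (len(coeffs) + 1)
        for j, a in enumerate(coeffs):   # multiply by (y + c)
            new[j + 1] += a
            new[j] += a * c
        coeffs = new
    return sum(a / (j + 1) for j, a in enumerate(coeffs))

def zeros(n, tol=1e-12):
    """One simple zero per interval (k-1, k), found by bisection;
    the bracketing sign change is guaranteed by (-1)^{n+k} B_n^{(n)}(k) > 0."""
    roots = []
    for k in range(1, n + 1):
        lo, hi = k - 1.0, float(k)
        flo = B_nn(n, lo)
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if B_nn(n, mid) * flo > 0:
                lo, flo = mid, B_nn(n, mid)
            else:
                hi = mid
        roots.append(0.5 * (lo + hi))
    return roots

n = 6
x = zeros(n)
print(x)
```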
According to the Gauss--Lucas theorem, the zeros of
\[
B_{k}^{(n)}(x)=\frac{k!}{n!}\frac{{\rm d}^{n-k}}{{\rm d} x^{n-k}}B_{n}^{(n)}(x),
\]
where $0\leq k\leq n$, are located in the convex hull of the zeros of $B_{n}^{(n)}$ which is, by Theorem~\ref{thm:Ber_sec_zer_real}, a subset of the interval~$(0,n)$.
\begin{cor}
For all $n\in\mathbb{N}_{0}$ and $0\leq k\leq n$, the zeros of $B_{k}^{(n)}$ are located in $(0,n)$.
\end{cor}
\subsection{The asymptotic expansion of $B_{n}^{(n)}(z)$ and small and large zeros}
First, we derive a complete locally uniform asymptotic expansion of~$B_{n}^{(n)}$ in negative powers of $\log n$. As an application, this expansion allows us to derive asymptotic formulas for the zeros of~$B_{n}^{(n)}$ that are located at a fixed distance from the origin or from the point~$n$, for $n$ large.
In the proof of the asymptotic expansion of~$B_{n}^{(n)}$, we will make use of a particular case of Watson's lemma given below; for the more general version and its proof, see, e.g., \cite[Chp.~3, Thm.~3.1]{olver_97} or \cite[Sec.~I.5]{wong01}.
\begin{lem}[Watson]\label{lem:Watson}
Let $f(u)$ be a function of a positive variable $u$ such that
\begin{equation}
f(u)\sim\sum_{m=0}^{\infty}a_{m}u^{m+\lambda-1}, \quad \mbox{ as }u\to0+,
\label{eq:f_asympt_watson}
\end{equation}
where $\lambda>0$. Then one has the complete asymptotic expansion
\begin{equation}
\int_{0}^{\infty}e^{-xu}f(u){\rm d} u\sim\sum_{m=0}^{\infty}\Gamma\!\left(m+\lambda\right)\frac{a_{m}}{x^{m+\lambda}}, \quad \mbox{ as }x\to\infty,
\label{eq:int_asympt_watson}
\end{equation}
provided that the integral converges absolutely for all sufficiently large $x$.
\end{lem}
\begin{rem}\label{rem:uniform_watson}
If additionally the coefficients $a_{m}=a_{m}(\xi)$ in~\eqref{eq:f_asympt_watson} depend continuously on a parameter $\xi\in K$ where $K$ is a compact subset of ${\mathbb{C}}$ and the asymptotic expansion~\eqref{eq:f_asympt_watson} is uniform in $\xi\in K$, then the expansion~\eqref{eq:int_asympt_watson} holds uniformly in $\xi\in K$ provided that the integral converges uniformly in $\xi\in K$ for all sufficiently large~$x$. This variant of Watson's lemma can be easily verified by a slight modification of the proof given, for example, in~\cite[Chp.~3, Thm.~3.1]{olver_97}.
\end{rem}
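A concrete instance of Watson's lemma (an illustrative sketch, not from the text): take $f(u)=1/(1+u)$, so that $\lambda=1$ and $a_{m}=(-1)^{m}$, and the lemma predicts $\int_{0}^{\infty}e^{-xu}(1+u)^{-1}\,{\rm d} u\approx\sum_{m}(-1)^{m}m!\,x^{-m-1}$, with the truncation error for this integral bounded by the first omitted term.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

x = 30.0
# 1/(1+u) ~ sum_m (-1)^m u^m as u -> 0+, so Watson's lemma gives
#   int_0^inf e^{-xu}/(1+u) du ~ sum_m (-1)^m m!/x^{m+1}.
# The tail beyond u = 3 is below e^{-90}, hence negligible here.
integral = simpson(lambda u: math.exp(-x * u) / (1 + u), 0.0, 3.0, 3000)
partial = sum((-1) ** m * math.factorial(m) / x ** (m + 1) for m in range(4))
print(integral, partial)
```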
Yet another auxiliary statement, this time an inequality for the Gamma function with a complex argument, will be needed to obtain the desired asymptotic expansion of $B_{n}^{(n)}(z)$ for $n\to\infty$.
\begin{lem}\label{lem:gamma_ineq}
For all $s\in[0,1]$ and $z\in{\mathbb{C}}$ with $\Re z>1$, one has
\[
\left|z^{s}\frac{\Gamma(z-s)}{\Gamma(z)}-1\right|\leq\frac{2s}{\Re z -1}\left|\frac{z+1}{z-s}\right|.
\]
\end{lem}
\begin{proof}
Let $f_{z}(s):=z^{s}\Gamma(z-s)$, $\Re z>1$, and $s\in[0,1]$. Then, by the Lagrange theorem,
\begin{equation}
|f_{z}(s)-f_{z}(0)|\leq|f_{z}'(s^{*})|s,
\label{eq:dif_f_Lagrange}
\end{equation}
for some $s^{*}\in(0,s)$. The differentiation of~$f_{z}(s)$ with respect to~$s$ yields
\begin{equation}
f_{z}'(s)=z^{s}\Gamma(z-s)\left(\log z -\psi(z-s)\right)\!,
\label{eq:der_f_z}
\end{equation}
where $\psi=\Gamma'/\Gamma$ is the Digamma function. Recall that \cite[Eq.~5.9.13]{dlmf}
\[
\psi(z)=\log z + \int_{0}^{\infty}\left(\frac{1}{t}-\frac{1}{1-e^{-t}}\right)e^{-t z}{\rm d} t,
\]
for $\Re z>0$. From the above formula, one deduces that, for $\Re z>1$ and $s\in[0,1]$,
\[
|\psi(z-s)-\log(z-s)|\leq\int_{0}^{\infty}e^{-t(\Re z-1)}{\rm d} t=\frac{1}{\Re z-1},
\]
where we used that
\[
\left|\frac{1}{t}-\frac{1}{1-e^{-t}}\right|<1, \quad \forall t>0.
\]
Further, one has
\[
|\log(z-s)-\log z|\leq\sum_{n=1}^{\infty}\frac{1}{n}\frac{s^{n}}{|z|^{n}}\leq\frac{1}{|z|-1}\leq\frac{1}{\Re z -1},
\]
for $\Re z>1$ and $s\in[0,1]$. Consequently, we get from~\eqref{eq:der_f_z} the estimate
\[
|f_{z}'(s)|\leq\frac{2}{\Re z-1}\left|z^{s}\Gamma(z-s)\right|,
\]
for all $s\in[0,1]$ and $\Re z>1$.
Recalling~\eqref{eq:dif_f_Lagrange}, the statement is proven, if we show that
\[
\left|\frac{z^{s}\,\Gamma(z-s)}{\Gamma(z)}\right|\leq\left|\frac{z+1}{z-s}\right|,
\]
for $s\in[0,1]$ and $\Re z>1$. To this end, we apply the inequality~\cite[Eq.~5.6.8]{dlmf}
\begin{equation}
\left|\frac{\Gamma(z+a)}{\Gamma(z+b)}\right|\leq\frac{1}{|z|^{b-a}},
\label{eq:gamma_ratio_ineq}
\end{equation}
which holds true provided that $b-a\geq1$, $a\geq0$, and $\Re z>0$. Then
\[
\left|\frac{\Gamma(z-s)}{\Gamma(z)}\right|=\left|\frac{z(z+1)}{z-s}\right|\left|
\frac{\Gamma(z+1-s)}{\Gamma(z+2)}\right|\leq\frac{1}{|z|^{s}}\left|\frac{z+1}{z-s}\right|,
\]
where we applied~\eqref{eq:gamma_ratio_ineq} with $a=1-s$ and $b=2$.
\end{proof}
Now, we are in a position to deduce the complete asymptotic expansion of the polynomials $B_{n}^{(n)}(z)$ for $n\to\infty$.
\begin{thm}\label{thm:Ber_sec_compl_asympt}
The asymptotic expansion
\begin{equation}
(-1)^{n}\frac{n^{z}}{n!}B_{n}^{(n)}(z)\sim\sum_{k=0}^{\infty}\frac{c_{k}(z)}{\log^{k+1}n}, \quad n\to\infty,
\label{eq:asympt_ber_sec}
\end{equation}
holds locally uniformly in $z\in{\mathbb{C}}$, where
\begin{equation}
c_{k}(z)=\frac{{\rm d}^{k}}{{\rm d} z^{k}}\frac{1}{\Gamma(1-z)}.
\label{eq:coeff_c_asympt}
\end{equation}
\end{thm}
\begin{proof}
The integral formula~\eqref{eq:int_repre_simple} can be rewritten in terms of the Gamma functions as
\[
B_{n}^{(n)}(z)=(-1)^{n}\int_{0}^{1}\frac{\Gamma(1-s-z+n)}{\Gamma(1-s-z)}{\rm d} s.
\]
Using this together with Lemma~\ref{lem:gamma_ineq}, we obtain
\begin{align*}
&\left|\frac{B_{n}^{(n)}(z)}{\Gamma(1-z+n)}-(-1)^{n}\int_{0}^{1}\frac{{\rm d} s}{(1-z+n)^{s}\Gamma(1-s-z)}\right|\\
&\leq\frac{2|n+2-z|}{n-\Re z}\int_{0}^{1}\frac{s{\rm d} s}{|n+1-z-s||(1-z+n)^{s}\Gamma(1-s-z)|},
\end{align*}
for $\Re z<n$. Since $1/\Gamma$ is an entire function, for any compact set $K\subset{\mathbb{C}}$ there is a constant $C>0$ such that
\[
\sup_{s\in[0,1]}\sup_{z\in K}\frac{1}{|\Gamma(1-z-s)|}\leq C.
\]
Moreover, one has
\[
\left|\frac{n+2-z}{n+1-z-s}\right|\leq\frac{n+2+|z|}{n-|z|},
\]
for $s\in[0,1]$ and $|z|<n$.
Hence we have the estimate
\[
\left|\frac{B_{n}^{(n)}(z)}{\Gamma(1-z+n)}-(-1)^{n}\int_{0}^{1}\frac{{\rm d} s}{(1-z+n)^{s}\Gamma(1-s-z)}\right|\leq\frac{2C}{n-\Re z}\frac{n+2+|z|}{n-|z|}\int_{0}^{1}\frac{s{\rm d} s}{|1-z+n|^{s}},
\]
provided that $|z|<n$. Moreover, since
\[
\int_{0}^{1}\frac{s{\rm d} s}{|1-z+n|^{s}}=\frac{|1-z+n|-\log|1-z+n|-1}{|1-z+n|\log^{2}|1-z+n|}\leq\frac{1}{\log^{2}|1-z+n|},
\]
we conclude that
\begin{align}
\frac{B_{n}^{(n)}(z)}{\Gamma(1-z+n)}&=(-1)^{n}\int_{0}^{1}\frac{{\rm d} s}{(1-z+n)^{s}\Gamma(1-s-z)}+O\left(\frac{1}{n\log^{2}n}\right)\nonumber\\
&=(-1)^{n}\int_{0}^{1}\frac{{\rm d} s}{n^{s}\Gamma(1-s-z)}+O\left(\frac{1}{n}\right)\!,\label{eq:towards_asympt_exp_inproof0}
\end{align}
as $n\to\infty$, locally uniformly in $z\in{\mathbb{C}}$.
The analyticity of the reciprocal Gamma function implies
\[
\frac{1}{\Gamma(1-s-z)}=\sum_{k=0}^{\infty}c_{k}(z)\frac{s^{k}}{k!},
\]
where
\[
c_{k}(z)=\frac{{\rm d}^{k}}{{\rm d} s^{k}}\bigg|_{s=0}\frac{1}{\Gamma(1-s-z)}=\frac{{\rm d}^{k}}{{\rm d} z^{k}}\frac{1}{\Gamma(1-z)}.
\]
Moreover, $c_{k}$ is an entire function for any $k\in\mathbb{N}_{0}$. Consequently, if $\chi_{(0,1)}$ denotes the indicator function of the interval $(0,1)$, then
\[
f_{z}(s):=\frac{\chi_{(0,1)}(s)}{\Gamma(1-s-z)}\sim\sum_{k=0}^{\infty}c_{k}(z)\frac{s^{k}}{k!}, \quad s\to0+,
\]
and the application of Lemma~\ref{lem:Watson} yields
\begin{equation}
\int_{0}^{1}\frac{{\rm d} s}{n^{s}\Gamma(1-s-z)}=\int_{0}^{\infty}e^{-s\log n}f_{z}(s){\rm d} s\sim\sum_{k=0}^{\infty}\frac{c_{k}(z)}{\log^{k+1}n}, \quad n\to\infty.
\label{eq:asympt_int_log_watson}
\end{equation}
This asymptotic formula is again locally uniform in $z\in{\mathbb{C}}$. To see this, one may proceed as follows. For $m\in\mathbb{N}_{0}$, we have
\[
\left|\frac{1}{\Gamma(1-s-z)}-\sum_{k=0}^{m}c_{k}(z)\frac{s^{k}}{k!}\right|\leq\left|\frac{{\rm d}^{m+1}}{{\rm d} s^{m+1}}\bigg|_{s=s^{*}}\frac{1}{\Gamma(1-s-z)}\right|\frac{s^{m+1}}{(m+1)!}=|c_{m+1}(z+s^{*})|\frac{s^{m+1}}{(m+1)!},
\]
where $s^{*}\in(0,s)$. Now, if $K\subset{\mathbb{C}}$ is compact, then, by the analyticity of $c_{m+1}$, there is a constant $C'>0$ such that
\[
\sup_{s\in[0,1]}\sup_{z\in K}|c_{m+1}(z+s)|\leq C'.
\]
Consequently, the error term in the expansion~\eqref{eq:asympt_int_log_watson} is majorized by
\[
\frac{C'}{(m+1)!}\int_{0}^{\infty}\frac{s^{m+1}}{n^{s}}{\rm d} s=\frac{C'}{\log^{m+2} n},
\]
for all $z\in K$; cf.\ also Remark~\ref{rem:uniform_watson}. Thus, \eqref{eq:towards_asympt_exp_inproof0} together with~\eqref{eq:asympt_int_log_watson} implies that
\begin{equation}
\frac{B_{n}^{(n)}(z)}{\Gamma(1-z+n)}\sim(-1)^{n}\sum_{k=0}^{\infty}\frac{c_{k}(z)}{\log^{k+1}n}, \quad n\to\infty,
\label{eq:asympt_ber_sec_inproof}
\end{equation}
locally uniformly in $z\in{\mathbb{C}}$.
Finally, it follows from the well-known Stirling asymptotic expansion that
\[
\frac{1}{\Gamma(1-z+n)}=\frac{n^{z}}{n!}\left(1+O\left(\frac{1}{n}\right)\right), \quad n\to\infty,
\]
locally uniformly in $z\in{\mathbb{C}}$. By applying the above formula in~\eqref{eq:asympt_ber_sec_inproof}, we arrive at the statement.
\end{proof}
\begin{rem}
Using different arguments based on a contour integration, a more general result than~\eqref{eq:asympt_ber_sec} was already obtained by N{\"o}rlund in~\cite{norlund_rcmp61} (in French). Nemes derived a complete asymptotic expansion of the Bernoulli numbers of the second kind $B_{n}^{(n)}(1)/n!$ in~\cite{nemes_jis11}, which is the particular case $z=1$ of Theorem~\ref{thm:Ber_sec_compl_asympt}. Another asymptotic formula for the Bernoulli numbers of the second kind was deduced by Van Veen in~\cite{vanveen_51}.
\end{rem}
By using Theorem~\ref{thm:Ber_sec_compl_asympt}, we immediately get the following corollary,
where the second statement follows from the Hurwitz theorem, see~\cite[Thm.~2.5, p.~152]{conway_78}, and
the fact that the zeros of $1/\Gamma$ are located at non-positive integers and are simple.
\begin{cor}\label{cor:hurwitz_small_zer}
It holds
\[
\lim_{n\to\infty}(-1)^{n}\frac{n^{z}\log n}{n!}B_{n}^{(n)}(z)=\frac{1}{\Gamma(1-z)}
\]
uniformly in compact subsets of ${\mathbb{C}}$. Consequently, for $k\in\mathbb{N}$, one has
\[
\lim_{n\to\infty}x_{k}^{(n)}=k.
\]
\end{cor}
\begin{rem}
Both sequences above converge quite slowly. The error terms turn out to decay as $1/\log n$, for $n\to\infty$.
\end{rem}
Since all the coefficients of the powers of $1/\log n$ in the asymptotic expansion~\eqref{eq:asympt_ber_sec} are known,
we can, in principle, compute the asymptotic expansion of the zero $x_{k}^{(n)}$ for any fixed $k\in\mathbb{N}$, as $n\to\infty$, to an arbitrary order. However, the coefficients of the powers of $1/\log n$ quickly become complicated and no closed formula for them was found. We provide the first three terms of the asymptotic expansion.
\begin{thm}\label{thm:asympt_zer_small}
For $k\in\mathbb{N}$, we have the asymptotic expansion
\[
x_{k}^{(n)}=k-\frac{1}{\log n}-\frac{\psi(k)}{\log^{2}n}+O\left(\frac{1}{\log^{3}n}\right)\!, \quad \mbox{ as } n\to\infty,
\]
where $\psi=\Gamma'/\Gamma$ is the Digamma function.
\end{thm}
\begin{rem}\label{rem:asympt_zer_large}
Theorem~\ref{thm:asympt_zer_small} gives the asymptotic behavior of the zeros located at a fixed distance from $0$.
As a consequence of the symmetry~\eqref{eq:symmetry_zeros}, we also know the asymptotic behavior of the zeros
located at a fixed distance from $n$. Namely,
\[
x_{n-k+1}^{(n)}=n-k+\frac{1}{\log n}+\frac{\psi(k)}{\log^{2}n}+O\left(\frac{1}{\log^{3}n}\right)\!, \quad \mbox{ as } n\to\infty,
\]
for any fixed $k\in\mathbb{N}$.
\end{rem}
\begin{proof}
We fix $k\in\mathbb{N}$ and introduce $\epsilon_{n}^{(k)}:=k-x_{k}^{(n)}$. Substituting $z=k-\epsilon_{n}^{(k)}$ in~\eqref{eq:coeff_c_asympt}, we get
\begin{equation}
c_{j}\left(k-\epsilon_{n}^{(k)}\right)
=(-1)^{j}\frac{{\rm d}^{j}}{{\rm d} z^{j}}\bigg|_{z=\epsilon_{n}^{(k)}}\frac{1}{\Gamma(1-k+z)}=(-1)^{j}\frac{{\rm d}^{j}}{{\rm d} z^{j}}\bigg|_{z=\epsilon_{n}^{(k)}}\frac{(z+1-k)_{k}}{\Gamma(z+1)},
\label{eq:c_j_tow_exp_inproof}
\end{equation}
where $(a)_{k}:=a(a+1)\dots(a+k-1)$ is the Pochhammer symbol.
Recall that, for $k\in\mathbb{N}$,
\begin{equation}
(z+1-k)_{k}=\sum_{l=1}^{k}s(k,l)z^{l},
\label{eq:pochham_expand}
\end{equation}
where $s(k,l)$ are the Stirling numbers of the first kind; see~\cite[Sec.~26.8]{dlmf}. The Stirling numbers can be defined recursively but here we will make use only of the special values
\begin{equation}
s(k,1)=(-1)^{k-1}(k-1)! \quad\mbox{ and }\quad s(k,2)=(-1)^{k-1}(k-1)!\left(\gamma+\psi(k)\right)\!,
\label{eq:stirling_spec_val}
\end{equation}
for $k\in\mathbb{N}$. We will also need the Maclaurin series for the reciprocal Gamma function~\cite[Eqs.~(5.7.1) and~(5.7.2)]{dlmf}
\begin{equation}
\frac{1}{\Gamma(z+1)}=\sum_{m=0}^{\infty}\pi_{m}z^{m},
\label{eq:recip_gamma_expand}
\end{equation}
where
\[
\pi_{0}=1 \quad \mbox{ and } \quad m\pi_{m}=\gamma \pi_{m-1}+\sum_{j=2}^{m}(-1)^{j+1}\zeta(j)\pi_{m-j}, \quad \mbox{ for } m\in\mathbb{N},
\]
and $\zeta$ stands for the Riemann zeta function. In particular,
\begin{equation}
\pi_{0}=1 \quad\mbox{ and }\quad \pi_{1}=\gamma.
\label{eq:recip_gamma_spec_val}
\end{equation}
By using~\eqref{eq:pochham_expand} and~\eqref{eq:recip_gamma_expand} in~\eqref{eq:c_j_tow_exp_inproof}, we obtain
\begin{equation}
c_{j}\left(k-\epsilon_{n}^{(k)}\right)=(-1)^{j}j!\sum_{m=j}^{\infty}\binom{m}{j}X_{m}^{(k)}\left(\epsilon_{n}^{(k)}\right)^{\!m-j},
\label{eq:c_j_exp_inproof}
\end{equation}
where
\begin{equation}
X_{m}^{(k)}:=\sum_{i=1}^{\min(k,m)}s(k,i)\pi_{m-i}.
\label{eq:def_X_m_k}
\end{equation}
It follows from Theorem~\ref{thm:Ber_sec_compl_asympt} that $\epsilon_{n}^{(k)}=O(1/\log n)$. Therefore we may write
\begin{equation}
\epsilon_{n}^{(k)}=\frac{\mu_{n}^{(k)}}{\log n},
\label{eq:def_mu_n_k}
\end{equation}
where $\mu_{n}^{(k)}$ is a bounded sequence. If we take the first two terms from the expansion~\eqref{eq:asympt_ber_sec}
and use that the left-hand side of~\eqref{eq:asympt_ber_sec} vanishes at $z=x_{k}^{(n)}$, we arrive at the equation
\[
c_{0}\!\left(k-\epsilon_{n}^{(k)}\right)\log n+c_{1}\!\left(k-\epsilon_{n}^{(k)}\right)+O\left(\frac{1}{\log n}\right)=0, \quad \mbox{ as } n\to\infty.
\]
With the aid of~\eqref{eq:c_j_exp_inproof} and~\eqref{eq:def_mu_n_k}, the above equation can be written as
\[
X_{1}^{(k)}\mu_{n}^{(k)}-X_{1}^{(k)}+O\left(\frac{1}{\log n}\right)=0, \quad \mbox{ as } n\to\infty,
\]
which implies
\[
\mu_{n}^{(k)}=1+O\left(\frac{1}{\log n}\right)\!, \quad \mbox{ as } n\to\infty,
\]
since $X_{1}^{(k)}\neq0$. Consequently, we get
\[
x_{k}^{(n)}=k-\epsilon_{n}^{(k)}=k-\frac{1}{\log n}+O\left(\frac{1}{\log^{2}n}\right)\!, \quad \mbox{ as } n\to\infty.
\]
Similarly, by writing
\begin{equation}
\epsilon_{n}^{(k)}=\frac{1}{\log n}+\frac{\nu_{n}^{(k)}}{\log^{2} n},
\label{eq:def_nu_n_k}
\end{equation}
where $\nu_{n}^{(k)}$ is a bounded sequence, repeating the same procedure that uses the first three terms of the asymptotic expansion~\eqref{eq:asympt_ber_sec}, we compute another term in the expansion of $\epsilon_{n}^{(k)}$,
for $n\to\infty$. More precisely, it follows from~\eqref{eq:asympt_ber_sec} that
\[
c_{0}\!\left(k-\epsilon_{n}^{(k)}\right)\log^{2} n+c_{1}\!\left(k-\epsilon_{n}^{(k)}\right)\log n +
c_{2}\!\left(k-\epsilon_{n}^{(k)}\right)+O\left(\frac{1}{\log n}\right)=0, \quad \mbox{ as } n\to\infty.
\]
By using~\eqref{eq:c_j_exp_inproof} and~\eqref{eq:def_nu_n_k}, the above equation implies that
\[
X_{1}^{(k)}\nu_{n}^{(k)}+X_{2}^{(k)}+O\left(\frac{1}{\log n}\right)=0, \quad \mbox{ as } n\to\infty.
\]
Since
\[
X_{1}^{(k)}=(-1)^{k-1}(k-1)! \quad\mbox{ and }\quad X_{2}^{(k)}=(-1)^{k}(k-1)!\psi(k),
\]
as one computes from~\eqref{eq:def_X_m_k} by using the special values~\eqref{eq:stirling_spec_val}, \eqref{eq:recip_gamma_spec_val}, and the identity $\psi(1)=-\gamma$, see~\cite[Eq.~(5.4.12)]{dlmf}, we conclude that
\[
\nu_{n}^{(k)}=\psi(k)+O\left(\frac{1}{\log n}\right)\!, \quad \mbox{ as } n\to\infty.
\]
The above formula used in~\eqref{eq:def_nu_n_k} implies the asymptotic expansion of~$x_{k}^{(n)}$ from the statement.
\end{proof}
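The recursion for the coefficients $\pi_{m}$ used in the proof can be checked numerically. In the sketch below (an illustration; the floating-point values of $\gamma$, $\zeta(2)$, $\zeta(3)$ are hard-coded), $\pi_{2}$ and $\pi_{3}$ are computed from the recursion and the truncated series is compared with $1/\Gamma(1+z)$ at $z=0.1$.

```python
import math

gamma_e = 0.5772156649015329                 # Euler--Mascheroni constant
zeta = {2: math.pi ** 2 / 6, 3: 1.2020569031595943}

# Recursion: m*pi_m = gamma*pi_{m-1} + sum_{j=2}^m (-1)^{j+1} zeta(j) pi_{m-j}
pi_ = [1.0, gamma_e]                         # pi_0 = 1, pi_1 = gamma
for m in (2, 3):
    s = gamma_e * pi_[m - 1]
    s += sum((-1) ** (j + 1) * zeta[j] * pi_[m - j] for j in range(2, m + 1))
    pi_.append(s / m)

# Compare the truncated Maclaurin series of 1/Gamma(1+z) with the exact value.
z = 0.1
partial = sum(pi_[m] * z ** m for m in range(4))
exact = 1.0 / math.gamma(1.0 + z)
print(pi_, abs(exact - partial))
```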
\begin{rem}
Note that for $k=1$, the coefficients~\eqref{eq:def_X_m_k} simplify. Namely, $X_{0}^{(1)}=0$ and $X_{m}^{(1)}=\pi_{m-1}$, for $m\in\mathbb{N}$. Without going into details, we write down several other terms in the asymptotic expansion of the smallest zero:
\begin{align*}
x_{1}^{(n)}&=1-\frac{1}{\log n}+\frac{\gamma}{\log^{2}n}-\frac{\gamma^{2}-\pi^{2}/6}{\log^{3}n}
+\frac{\gamma^{3}-\gamma\pi^{2}/2+3\zeta(3)}{\log^{4}n}\\
&-\frac{\gamma^{4}-\gamma^{2}\pi^{2}+12\gamma\zeta(3)-\pi^{4}/90}{\log^{5}n}
+O\left(\frac{1}{\log^{6}n}\right)\!, \quad \mbox{ as } n\to\infty.
\end{align*}
\end{rem}
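The expansion of the smallest zero can be probed numerically. The sketch below (an illustration under the stated truncations, with a hard-coded value of $\gamma$) evaluates $B_{n}^{(n)}$ in exact rational arithmetic, since floating point suffers from catastrophic cancellation near a root for large $n$, and bisects on $[1/2,1]$ for $n=100$; the two-term prediction $k-1/\log n-\psi(k)/\log^{2}n$ with $\psi(1)=-\gamma$ should beat the one-term one.

```python
from fractions import Fraction
import math

def B_nn_exact(n, x):
    """B_n^{(n)}(x) = int_0^1 prod_{k=1}^n (x+y-k) dy in exact rationals."""
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        c = Fraction(x) - k
        new = [Fraction(0)] * (len(coeffs) + 1)
        for j, a in enumerate(coeffs):   # multiply by (y + c)
            new[j + 1] += a
            new[j] += a * c
        coeffs = new
    return sum(a / (j + 1) for j, a in enumerate(coeffs))

def smallest_zero(n, iters=20):
    """Bisect for x_1^(n) on [1/2, 1]; exact signs, so no cancellation."""
    lo, hi = Fraction(1, 2), Fraction(1)
    slo = 1 if B_nn_exact(n, lo) > 0 else -1
    for _ in range(iters):
        mid = (lo + hi) / 2
        if (1 if B_nn_exact(n, mid) > 0 else -1) == slo:
            lo = mid
        else:
            hi = mid
    return float((lo + hi) / 2)

n = 100
gamma_e = 0.5772156649015329
x1 = smallest_zero(n)
L = math.log(n)
pred1 = 1 - 1 / L                        # one-term prediction
pred2 = 1 - 1 / L + gamma_e / L ** 2     # two-term prediction (psi(1) = -gamma)
print(x1, pred1, pred2)
```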
\subsection{Another integral representation and more precise localization of zeros}
Further asymptotic analysis as well as an improvement of the localization~\eqref{eq:loc_zeros_first} will rely on an integral representation for the polynomials~$B_{n}^{(n)}$ derived below. This integral representation appeared recently in the article~\cite{blagouchine_i18} of Blagouchine, see~\cite[Eq.~(57)]{blagouchine_i18} and Appendix therein. We provide an alternative proof. We remark that the formula is a generalization of Schr\"{o}der's integral representation for the Bernoulli numbers of the second kind, see~\cite{blagouchine_jis17,schroder_zmp80}.
All fractional powers appearing below are taken with their principal values.
\begin{thm}\label{thm:beta_integr_repre}
One has
\begin{equation}
B_{n}^{(n)}(z)=(-1)^{n}\frac{n!}{\pi}\int_{0}^{\infty}\frac{u^{z-1}}{(1+u)^{n}}\frac{\pi\cos\pi z-\log(u)\sin\pi z}{\pi^{2}+\log^{2}u}{\rm d} u,
\label{eq:integr_repre_ber_sec}
\end{equation}
for $n\in\mathbb{N}$ and $0<\Re z<n$.
\end{thm}
\begin{proof}
We again start with the integral formula~\eqref{eq:int_repre_simple} expressing the integrand as a ratio of the Gamma functions:
\[
B_{n}^{(n)}(z)=\int_{0}^{1}\prod_{j=1}^{n}(s+z-j)\,{\rm d} s=\int_{0}^{1}\frac{\Gamma(s+z)}{\Gamma(s+z-n)}{\rm d} s.
\]
By using the well-known identity
\[
\Gamma(z)\Gamma(1-z)=\frac{\pi}{\sin\pi z},
\]
we may rewrite the integral formula for $B_{n}^{(n)}(z)$ as
\begin{equation}
B_{n}^{(n)}(z)=\frac{1}{\pi}\int_{0}^{1}\sin\left(\pi(s+z) -\pi n\right)\Gamma\left(s+z\right)\Gamma\left(1-s-z+n\right){\rm d} s.
\label{eq:tow_int_repre_inproof0}
\end{equation}
Recall that for $u,v\in{\mathbb{C}}$, $\Re u>0$, $\Re v>0$, it holds
\[
\frac{\Gamma(u)\Gamma(v)}{\Gamma(u+v)}=2\int_{0}^{\pi/2}\sin^{2u-1}\theta\cos^{2v-1}\theta{\rm d}\theta,
\]
see \cite[Eqs.~5.12.2 and~5.12.1]{dlmf}. Using the above formula in~\eqref{eq:tow_int_repre_inproof0}, we get
\begin{equation}
B_{n}^{(n)}(z)=(-1)^{n}\frac{2n!}{\pi}\int_{0}^{1}\sin\left(\pi(s+z)\right)\int_{0}^{\pi/2}\sin^{2s+2z-1}\theta\cos^{2n-2s-2z+1}\theta{\rm d}\theta{\rm d} s,
\label{eq:tow_int_repre_inproof1}
\end{equation}
where $z$ has to be restricted such that $0<\Re z<n$ to guarantee that both arguments of the Gamma functions in~\eqref{eq:tow_int_repre_inproof0} are of positive real part.
By Fubini's theorem, we may change the order of integration in~\eqref{eq:tow_int_repre_inproof1}. After some elementary manipulations with the trigonometric functions, we arrive at the expression
\begin{equation}
B_{n}^{(n)}(z)=(-1)^{n}\frac{2n!}{\pi}\int_{0}^{\pi/2}\cos^{2n}\theta\tan^{2z}\theta\int_{0}^{1}\sin\left(\pi(s+z)\right)\tan^{2s-1}\theta{\rm d} s {\rm d}\theta,
\label{eq:tow_int_repre_inproof2}
\end{equation}
for $0<\Re z<n$.
As the last step, we evaluate the inner integral in~\eqref{eq:tow_int_repre_inproof2}. We can make use of the elementary integral
\[
\int e^{ax}\sin bx\,{\rm d} x=\frac{e^{ax}}{a^{2}+b^{2}}\left(a\sin bx- b\cos bx\right)\!,
\]
to compute that
\begin{equation}
\int_{0}^{1}\sin\left(\pi(s+z)\right)\tan^{2s-1}\theta{\rm d} s=\frac{\pi\cos\pi z-2\log(\tan\theta)\sin\pi z}{\sin\theta\cos\theta\left(\pi^{2}+4\log^{2}\tan\theta\right)},
\label{eq:inner_int_comp_inproof}
\end{equation}
for $\theta\in(0,\pi/2)$. By using~\eqref{eq:inner_int_comp_inproof} in~\eqref{eq:tow_int_repre_inproof2}, we arrive at the integral representation
\begin{equation}
B_{n}^{(n)}(z)=2(-1)^{n}\frac{n!}{\pi}\int_{0}^{\pi/2}\cos^{2n-2}\theta\tan^{2z-1}\theta\frac{\pi\cos \pi z-2\log(\tan\theta)\sin \pi z}{\pi^{2}+4\log^{2}\tan\theta}{\rm d}\theta,
\label{eq:integr_repre_ber_sec_trigon}
\end{equation}
for $n\in\mathbb{N}$ and $0<\Re z<n$. Substituting for $\tan^{2}\theta=u$ in~\eqref{eq:integr_repre_ber_sec_trigon}, we arrive at the formula~\eqref{eq:integr_repre_ber_sec}.
\end{proof}
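As an independent numerical cross-check of~\eqref{eq:integr_repre_ber_sec} (an illustration, not part of the proof), one can compare the representation with the elementary formula~\eqref{eq:int_repre_simple} for, say, $n=4$ and $z=3/2$; the substitution $u=e^{t}$ tames the integral over $(0,\infty)$.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

n, z = 4, 1.5

# Right-hand side of the representation after substituting u = e^t,
# so that u^{z-1} du = u^z dt and log u = t.
def integrand(t):
    u = math.exp(t)
    return (u ** z / (1 + u) ** n
            * (math.pi * math.cos(math.pi * z) - t * math.sin(math.pi * z))
            / (math.pi ** 2 + t ** 2))

rhs = ((-1) ** n * math.factorial(n) / math.pi
       * simpson(integrand, -40.0, 40.0, 8000))

# Left-hand side from the elementary formula int_0^1 prod_k (z+y-k) dy.
lhs = simpson(lambda y: math.prod(z + y - k for k in range(1, n + 1)),
              0.0, 1.0, 200)
print(rhs, lhs)
```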
As a first application of the integral formula~\eqref{eq:integr_repre_ber_sec}, we improve the localization of the zeros of~$B_{n}^{(n)}$ given by the inequalities~\eqref{eq:loc_zeros_first}. It turns out that each zero is located in one half of the respective interval between two consecutive integers.
To do so, we rewrite the integral in~\eqref{eq:integr_repre_ber_sec} in a slightly different form. Writing it as the sum of two integrals, over $(0,1)$ and $(1,\infty)$ respectively, and substituting $u=1/\tilde{u}$ in the second one (omitting the tilde afterwards), one obtains the formula
\begin{equation}
B_{n}^{(n)}(z)=(-1)^{n}\frac{n!}{\pi}\int_{0}^{1}\frac{1}{(1+u)^{n}}\left[\rho_{z}(u)+u^{n}\rho_{-z}(u)\right]{\rm d} u,
\label{eq:integr_repre_ber_sec_reform1}
\end{equation}
for $n\in\mathbb{N}$ and $0<\Re z<n$, where
\[
\rho_{z}(u):=u^{z-1}\frac{\pi\cos\pi z-\log(u)\sin\pi z}{\pi^{2}+\log^{2}u}.
\]
Below, $\lfloor x\rfloor$ denotes the integer part of $x\in{\mathbb{R}}$.
\begin{thm}
For $n\in\mathbb{N}$, one has
\[
k-\frac{1}{2}<x_{k}^{(n)}<k, \quad \mbox{ if}\quad 1\leq k \leq \bigg\lfloor \frac{n}{2}\bigg\rfloor,
\]
and
\[
k<x_{k+1}^{(n)}<k+\frac{1}{2}, \quad \mbox{ if}\quad \bigg\lfloor \frac{n+1}{2}\bigg\rfloor\leq k \leq n-1.
\]
Recall that $x_{n}^{(2n-1)}=n-1/2$.
\end{thm}
\begin{rem}
The global localization of the zeros in the intervals of fixed lengths given above is the best possible. Indeed, as shown below, the zeros of $B_{n}^{(n)}$ located around $n/2$ cluster at half-integers as $n\to\infty$, while
the zeros located in a left neighborhood of the point~$n$ (or in a right neighborhood of~$0$) cluster at integers as $n\to\infty$.
\end{rem}
\begin{proof}
Clearly, if the first set of inequalities is established then the second one follows readily from the symmetry~\eqref{eq:symmetry_zeros}.
For $1\leq k \leq \lfloor n/2\rfloor$, write $z=k-1/2$ in~\eqref{eq:integr_repre_ber_sec_reform1}. Since
\[
\rho_{k-1/2}(u)=-u^{2k-1}\rho_{-k+1/2}(u)=(-1)^{k}u^{k-3/2}\frac{\log u}{\pi^{2}+\log^{2}u},
\]
we have
\[
B_{n}^{(n)}\left(k-\frac{1}{2}\right)=(-1)^{n+k}\frac{n!}{\pi}\int_{0}^{1}\frac{u^{k-3/2}\left(1-u^{n-2k+1}\right)}{(1+u)^{n}}\frac{\log u}{\pi^{2}+\log^{2}u}{\rm d} u.
\]
The integrand is obviously a negative function on $(0,1)$ for any $1\leq k \leq \lfloor n/2\rfloor$. Hence
\[
(-1)^{n+k+1}B_{n}^{(n)}\left(k-\frac{1}{2}\right)>0, \quad \mbox{ for}\quad 1\leq k \leq \bigg\lfloor \frac{n}{2}\bigg\rfloor.
\]
Taking also~\eqref{eq:int_val_sign} into account, we observe that the values
$B_{n}^{(n)}(k)$ and $B_{n}^{(n)}(k-1/2)$ differ in sign for $1\leq k \leq \lfloor n/2\rfloor$. Hence
\[
x_{k}^{(n)}\in\left(k-\frac{1}{2},k\right)\!, \quad \mbox{ for}\quad 1\leq k \leq \bigg\lfloor \frac{n}{2}\bigg\rfloor.
\]
\end{proof}
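The sign pattern established in the proof can be verified exactly for small $n$. The sketch below (an illustration) evaluates $B_{n}^{(n)}(k-1/2)$ in rational arithmetic for $n=7$ and $1\leq k\leq\lfloor n/2\rfloor$, checking that $(-1)^{n+k+1}B_{n}^{(n)}(k-1/2)>0$.

```python
from fractions import Fraction

def B_nn_exact(n, x):
    """B_n^{(n)}(x) via exact integration of prod_{k=1}^n (x+y-k) over [0,1]."""
    coeffs = [Fraction(1)]
    for k in range(1, n + 1):
        c = Fraction(x) - k
        new = [Fraction(0)] * (len(coeffs) + 1)
        for j, a in enumerate(coeffs):   # multiply by (y + c)
            new[j + 1] += a
            new[j] += a * c
        coeffs = new
    return sum(a / (j + 1) for j, a in enumerate(coeffs))

n = 7
signs = []
for k in range(1, n // 2 + 1):
    v = B_nn_exact(n, Fraction(2 * k - 1, 2))   # B_n^{(n)}(k - 1/2), exact
    signs.append((-1) ** (n + k + 1) * (1 if v > 0 else -1))
print(signs)   # all entries should be +1
```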
\subsection{Asymptotic expansion of $B_{n}^{(n)}\left(z+\alpha n\right)$ and consequences for zeros}
Theorem~\ref{thm:asympt_zer_small} and Remark~\ref{rem:asympt_zer_large} provide information on the asymptotic behavior of the small and large zeros of the Bernoulli polynomials of the second kind. Bearing in mind the symmetry~\eqref{eq:symmetry_ber_sec}, the zeros of $B_{n}^{(n)}$ located ``in the middle'', i.e., around the point~$n/2$, are of interest, too. More generally, we will study the asymptotic behavior of the zeros of $B_{n}^{(n)}$ that are traced along the positive real line at a speed $\alpha\in(0,1)$ by investigating the asymptotic behavior of $B_{n}^{(n)}(z+\alpha n)$, for $n\to\infty$, focusing particularly on the case $\alpha=1/2$.
In order to derive the asymptotic behavior of $B_{n}^{(n)}(z+\alpha n)$ for $n$ large, we apply Laplace's method to the integral representation obtained in Theorem~\ref{thm:beta_integr_repre}. We refer the reader to~\cite[Sec.~3.7]{olver_97} for a general description of Laplace's method. Here we use a variant of Laplace's method adjusted to the situation at hand. In particular, we need the case where the extreme point is an interior point of the integration interval, an easy modification of the standard form of Laplace's method, where the extreme point is assumed to be an endpoint of the integration interval.
\begin{lem}[Laplace's method with an interior extreme point]\label{lem:laplace}
Let $f$ be a real-valued and $g$ a complex-valued continuous function on~$(0,\infty)$, both independent of~$n$. Assume further that $f$ and $g$ are analytic at a point $a\in(0,\infty)$ where $f$ has a unique global minimum in $(0,\infty)$. Let $f_{k}$ and $g_{k}$ be the coefficients of the Taylor series
\begin{equation}
f(u)=f(a)+\sum_{k=0}^{\infty}f_{k}(u-a)^{k+2} \quad\mbox{ and }\quad g(u)=\sum_{k=0}^{\infty}g_{k}(u-a)^{k+\ell},
\label{eq:tayl_ser_laplace}
\end{equation}
where $\ell\in\{0,1\}$ and $g_{0}\neq0$. Suppose moreover that $f_{0}\neq0$. Then, for $n\to\infty$, one has
\begin{equation}
\int_{0}^{\infty}e^{-nf(u)}g(u){\rm d} u=\frac{2\sqrt{\pi}}{\ell+1}e^{-nf(a)}\left[\frac{c_{\ell}}{n^{\ell+1/2}}+O\left(\frac{1}{n^{\ell+3/2}}\right)\!\right]
\label{eq:int_asympt_laplace}
\end{equation}
provided that the integral converges absolutely for all $n$ sufficiently large. The coefficients $c_{\ell}$ are expressible in terms of the coefficients $f_{k}$ and $g_{k}$ as follows:
\[
c_{0}=\frac{g_{0}}{2f_{0}^{1/2}} \quad\mbox{ and }\quad c_{1}=\frac{2f_{0}g_{1}-3f_{1}g_{0}}{4f_{0}^{5/2}}.
\]
\end{lem}
\begin{rem}\label{rem:uniform_laplace}
Suppose that the coefficients $g_{k}=g_{k}(\xi)$ in~\eqref{eq:tayl_ser_laplace} depend continuously on an additional parameter $\xi\in K$ where $K$ is a compact subset of ${\mathbb{C}}$ and the power series for $g$ in~\eqref{eq:tayl_ser_laplace} converges uniformly in $\xi\in K$. Then the asymptotic expansion~\eqref{eq:int_asympt_laplace} holds uniformly in $\xi\in K$ as well provided that the integral converges uniformly in $\xi\in K$ for all $n$ sufficiently large.
\end{rem}
Now, we are ready to deduce an asymptotic expansion of $B_{n}^{(n)}(z+\alpha n)$ for $n\to\infty$.
\begin{thm}\label{thm:asympt_ber_alpha}
For $\alpha\in(0,1)$ fixed, the asymptotic expansion
\begin{align}
\frac{(-1)^{n}\sqrt{n}}{n!\,\alpha^{\alpha n}(1-\alpha)^{(1-\alpha)n}}B_{n}^{(n)}(z+\alpha n)=&\sqrt{\frac{2}{\pi}}\frac{\alpha^{z-1/2}(1-\alpha)^{-z-1/2}}{\pi^{2}+\tau_{\alpha}^{2}}\nonumber\\
&\times\left[\pi\cos(\pi z+\pi\alpha n)-\tau_{\alpha}\sin(\pi z+\pi\alpha n)\right]+O\left(\frac{1}{n}\right)
\label{eq:asympt_ber_alpha}
\end{align}
holds locally uniformly in $z\in{\mathbb{C}}$ as $n\to\infty$, where
\begin{equation}
\tau_{\alpha}:=\log\frac{\alpha}{1-\alpha}.
\label{eq:def_tau}
\end{equation}
\end{thm}
\begin{proof}
By writing $z+\alpha n$ instead of $z$ in~\eqref{eq:integr_repre_ber_sec}, one obtains
\begin{equation}
(-1)^{n}\frac{\pi}{n!} B_{n}^{(n)}(z+\alpha n)=I_{1}(n)\cos(\pi z+\pi\alpha n)-I_{2}(n)\sin(\pi z+\pi\alpha n),
\label{eq:B_n_lin_comb_trig}
\end{equation}
for $-\alpha n<\Re z<(1-\alpha)n$, where
\begin{equation}
I_{i}(n):=\int_{0}^{\infty}e^{-nf(u)}g_{i}(u){\rm d} u, \quad i\in\{1,2\},
\label{eq:def_I_12}
\end{equation}
and
\begin{equation}
f(u):=\log(1+u)-\alpha\log u,
\label{eq:def_f}
\end{equation}
\vskip2pt
\begin{equation}
g_{1}(u):=\frac{\pi u^{z-1}}{\pi^{2}+\log^{2} u}\quad \mbox{ and } \quad g_{2}(u):=\frac{u^{z-1}\log u}{\pi^{2}+\log^{2}u}.
\label{eq:def_g_12}
\end{equation}
The integrals~\eqref{eq:def_I_12} are in the suitable form for the application of Laplace's method.
One easily verifies that the function $f$ defined by~\eqref{eq:def_f} attains its unique global minimum at the point
\[
u_{\alpha}:=\frac{\alpha}{1-\alpha}.
\]
Further, the functions $f$ and $g_{1}$, $g_{2}$ from~\eqref{eq:def_g_12} are analytic in a neighborhood of $u_{\alpha}$ having the expansions
\[
f(u)=f(u_{\alpha})+\frac{(1-\alpha)^{3}}{2\alpha}(u-u_{\alpha})^{2}+O\left((u-u_{\alpha})^{3}\right)\!,
\]
and
\begin{equation}
g_{i}(u)=g_{i}(u_{\alpha})+O\left(u-u_{\alpha}\right),
\label{eq:exp_g_12}
\end{equation}
for $u\to u_{\alpha}$. Moreover, the expansions for $g_{i}$ in~\eqref{eq:exp_g_12} are locally uniform in $z\in{\mathbb{C}}$, as one readily checks by elementary means.
Suppose first that $\alpha\neq1/2$. Then $g_{1}(u_{\alpha})\neq0$ as well as $g_{2}(u_{\alpha})\neq0$ and Lemma~\ref{lem:laplace} applies to both $I_{1}(n)$ and $I_{2}(n)$ with $\ell=0$ resulting in the asymptotic formulas
\begin{equation}
I_{1}(n)=\alpha^{\alpha n}(1-\alpha)^{(1-\alpha)n}\left[\frac{\sqrt{2\pi^{3}}\alpha^{z-1/2}(1-\alpha)^{-z-1/2}}{\pi^{2}+\log^{2}\left(\alpha/(1-\alpha)\right)}\frac{1}{\sqrt{n}}+O\left(\frac{1}{n^{3/2}}\right)\right]
\label{eq:asympt_I_1_alp}
\end{equation}
and
\[
I_{2}(n)=\alpha^{\alpha n}(1-\alpha)^{(1-\alpha)n}\left[\frac{\sqrt{2\pi}\alpha^{z-1/2}(1-\alpha)^{-z-1/2}}{\pi^{2}+\log^{2}\left(\alpha/(1-\alpha)\right)}\log\left(\frac{\alpha}{1-\alpha}\right)\frac{1}{\sqrt{n}}+O\left(\frac{1}{n^{3/2}}\right)\right]\!,
\]
for $n\to\infty$.
By plugging the above expressions for $I_{i}(n)$, $i\in\{1,2\}$, into~\eqref{eq:B_n_lin_comb_trig} one gets the expansion~\eqref{eq:asympt_ber_alpha}.
If $\alpha=1/2$, then $g_{2}(u_{\alpha})=0$ and hence Lemma~\ref{lem:laplace} applies to $I_{2}(n)$ with $\ell=1$. It follows that $2^{n}I_{2}(n)=O(n^{-3/2})$, as $n\to\infty$, for $\alpha=1/2$. The asymptotic expansion~\eqref{eq:asympt_I_1_alp} for $I_{1}(n)$ remains unchanged even if $\alpha=1/2$. In total, taking again~\eqref{eq:B_n_lin_comb_trig} into account, we see that the expansion~\eqref{eq:asympt_ber_alpha} remains valid also for $\alpha=1/2$ since $\tau_{\alpha}$ vanishes in this case.
In order to conclude that the expansion~\eqref{eq:asympt_ber_alpha} is locally uniform in $z\in{\mathbb{C}}$, it suffices to check that the integrals $I_{i}(n)$, $i\in\{1,2\}$, converge locally uniformly in $z\in{\mathbb{C}}$ for all $n$ sufficiently large; see Remark~\ref{rem:uniform_laplace}. Let $K\subset{\mathbb{C}}$ be a compact set. Then $|\Re z|\leq C$ for all $z\in K$ and some $C>0$. Concerning, for instance, $I_{1}(n)$, it holds that
\[
\int_{0}^{\infty}\left|\frac{u^{\alpha n+z-1}}{(1+u)^{n}}\frac{1}{\pi^{2}+\log^{2}u}\right|{\rm d} u\leq
\int_{0}^{1}u^{\alpha n-C-1}{\rm d} u+\int_{1}^{\infty}\frac{u^{\alpha n+C-1}}{(1+u)^{n}}{\rm d} u.
\]
For $n$ sufficiently large, the first integral on the right-hand side above can be majorized by~$1$ and the second integral converges at infinity because $\alpha<1$. Consequently, the integral $I_{1}(n)$ converges uniformly in $z\in K$ for all $n$ large enough. A similar reasoning shows that the same is true for $I_{2}(n)$ which concludes the proof.
\end{proof}
In the particular case when $\alpha=1/2$, Theorem~\ref{thm:asympt_ber_alpha} yields the following limit formulas
that can be compared with Dilcher's limit formulas for the Bernoulli polynomials of the first kind~\cite[Cor.~1]{dilcher_jat87}.
\begin{cor}
One has
\[
\lim_{n\to\infty}(-1)^{n}\frac{2^{2n-1}\sqrt{n}}{(2n)!}\beta_{2n}(z)=\frac{\cos\pi z}{\pi^{3/2}}
\]
and
\[
\lim_{n\to\infty}(-1)^{n}\frac{2^{2n}\sqrt{n}}{(2n+1)!}\beta_{2n+1}(z)=\frac{\sin \pi z}{\pi^{3/2}}
\]
locally uniformly in ${\mathbb{C}}$, where
\[
\beta_{n}(z):=B_{n}^{(n)}\left(z+\frac{n}{2}\right).
\]
\end{cor}
We can combine Theorem~\ref{thm:asympt_ber_alpha} and the Hurwitz theorem~\cite[Thm.~2.5, p.~152]{conway_78} in order to deduce the asymptotic behavior of the zeros of $B_{n}^{(n)}$ located around the point $\alpha n$ for $n$ large.
\begin{cor}\label{cor:lim_zer_alp}
Let $\alpha\in(0,1)$. Then, for any $\ell\in{\mathbb{Z}}$, one has
\[
\lim_{n\to\infty}\left(x_{\lfloor\alpha n\rfloor+\ell}^{(n)}-\lfloor\alpha n\rfloor\right)=\ell-1+\frac{1}{\pi}\arccot\frac{\tau_{\alpha}}{\pi},
\]
where $\tau_{\alpha}$ is defined by~\eqref{eq:def_tau} and $\lfloor x\rfloor$ denotes the integer part of a real number $x$.
\end{cor}
\begin{proof}
Replacing $z$ by $z-\alpha n+\lfloor\alpha n\rfloor$ in Theorem~\ref{thm:asympt_ber_alpha}, one deduces the limit formula
\[
\lim_{n\to\infty}\frac{(-1)^{n+\lfloor\alpha n\rfloor}\sqrt{n}}{n!\,\alpha^{\lfloor\alpha n\rfloor}(1-\alpha)^{n-\lfloor\alpha n\rfloor}}B_{n}^{(n)}\left(z+\lfloor\alpha n\rfloor\right)=C_{\alpha}(z)\left(\pi\cos\pi z-\tau_{\alpha}\sin\pi z\right)\!,
\]
where $C_{\alpha}(z)\neq0$ and the convergence is locally uniform in $z\in{\mathbb{C}}$. By the Hurwitz theorem, the zeros of the polynomial
\begin{equation}
z\mapsto B_{n}^{(n)}\left(z+\lfloor\alpha n\rfloor\right)
\label{eq:ber_subseq_alpha}
\end{equation}
cluster at the zeros of the function
\[
z\mapsto \pi\cos\pi z-\tau_{\alpha}\sin\pi z
\]
which coincide with the solutions of the secular equation
\begin{equation}
\cot\pi z=\frac{\tau_{\alpha}}{\pi}.
\label{eq:secul_eq}
\end{equation}
Further, it follows from~\eqref{eq:loc_zeros_first} that the zeros $x_{\lfloor\alpha n \rfloor+\ell}^{(n)}-\lfloor\alpha n\rfloor$ of~\eqref{eq:ber_subseq_alpha} satisfy
\[
\ell-1<x_{\lfloor\alpha n \rfloor+\ell}^{(n)}-\lfloor\alpha n\rfloor<\ell.
\]
Consequently, one has
\[
\lim_{n\to\infty}\left(x_{\lfloor\alpha n\rfloor+\ell}^{(n)}-\lfloor\alpha n\rfloor\right)=\zeta_{\ell},
\]
where $\zeta_{\ell}$ is the unique solution of~\eqref{eq:secul_eq} such that $\zeta_{\ell}\in(\ell-1,\ell)$. Finally, it suffices to note that $\zeta_{\ell}=\ell-1+\zeta$, where $\zeta$ fulfills
\[
\cot\pi\zeta=\frac{\tau_{\alpha}}{\pi} \quad \mbox{ and }\quad \zeta\in(0,1).
\]
\end{proof}
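As a numerical sanity check (not part of the argument above), one can confirm that $\zeta=\frac{1}{\pi}\arccot\left(\tau_{\alpha}/\pi\right)$, with $\arccot$ taking its values in $(0,\pi)$, is indeed the unique root of the secular equation~\eqref{eq:secul_eq} in $(0,1)$. The sketch below does this for $\alpha=1/3$, where $\tau_{1/3}=-\log 2$.

```python
from mpmath import mp, cot, atan, log, pi, findroot

mp.dps = 30
tau = -log(2)  # tau_alpha for alpha = 1/3

# unique root of the secular equation cot(pi*zeta) = tau/pi in (0, 1)
zeta_num = findroot(lambda s: cot(pi*s) - tau/pi, 0.6)

# closed form: zeta = (1/pi) * arccot(tau/pi) with arccot ranging in (0, pi),
# i.e. arccot(w) = pi/2 - arctan(w)
zeta_formula = (pi/2 - atan(tau/pi))/pi
```

In particular $\zeta\approx0.569123$, so the limit points of the middle zeros for $\alpha=1/3$ are $\ell-1+\zeta$, $\ell\in{\mathbb{Z}}$.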
\begin{example}
If we put $\alpha=1/3$, then $\tau_{1/3}=-\log 2$. Passing to the subsequences $n_{k}=3k,3k+1,3k+2$, respectively, in Corollary~\ref{cor:lim_zer_alp}, one obtains
\begin{align*}
\lim_{k\to\infty}\left(x_{k+\ell}^{(3k)}-k\right)=\lim_{k\to\infty}\left(x_{k+\ell}^{(3k+1)}-k\right)
=\lim_{k\to\infty}\left(x_{k+\ell}^{(3k+2)}-k\right)&=\ell-1+\frac{1}{\pi}\arccot\left(-\frac{\log 2}{\pi}\right)\\
&\approx\ell-0.430877,
\end{align*}
for any $\ell\in{\mathbb{Z}}$.
\end{example}
By taking $\alpha=1/2$ and either $n_{k}=2k$ or $n_{k}=2k+1$ in Corollary~\ref{cor:lim_zer_alp}, we get
\[
\lim_{k\to\infty}\left(x_{k+\ell}^{(2k)}-k\right)=\lim_{k\to\infty}\left(x_{k+\ell}^{(2k+1)}-k\right)
=\ell-\frac{1}{2},
\]
for $\ell\in{\mathbb{Z}}$ fixed. Thus, in contrast to the small or large zeros of $B_{n}^{(n)}$ that cluster at integers as shown in Theorem~\ref{thm:asympt_zer_small} and Remark~\ref{rem:asympt_zer_large}, the zeros of $B_{n}^{(n)}$ around the middle point $n/2$ cluster at half-integers as $n\to\infty$. Our next goal is to deduce more precise asymptotic expansions for the middle zeros of $B_{n}^{(n)}$.
To do so, we need to investigate the asymptotic behavior of $B_{n}^{(n)}(z+n/2)$, for $n\to\infty$, more closely.
A complete asymptotic expansion will be obtained by using the classical form of the Laplace method, see~\cite[Sec.~3.7]{olver_97}, applied together with Perron's formula for the expansion coefficients~\cite[p.~103]{wong01} adjusted slightly to our needs.
\begin{lem}[Laplace's method and Perron's formula]\label{lem:laplace_perron}
Let $f$ be a real-valued and $g$ a complex-valued continuous function on~$(0,\infty)$, both independent of~$n$. Assume further that $f$ and $g$ are analytic at the origin, where $f$ attains its unique global minimum. Let the Maclaurin expansions of $f$ and $g$ be of the form
\[
f(u)=\sum_{k=0}^{\infty}f_{k}u^{k+2} \quad\mbox{ and }\quad g(u)=\sum_{k=0}^{\infty}g_{k}u^{k+\ell},
\]
where $\ell\in\mathbb{N}_{0}$ and $f_{0}\neq0$ as well as $g_{0}\neq0$. Then, for $n\to\infty$, one has
\[
\int_{0}^{\infty}e^{-nf(u)}g(u){\rm d} u\sim\sum_{k=0}^{\infty}\Gamma\left(\frac{k+\ell+1}{2}\right)\frac{c_{k}}{n^{(k+\ell+1)/2}}
\]
provided that the integral converges absolutely for all $n$ sufficiently large. Perron's formula for the coefficients $c_{k}$ yields
\[
c_{k}=\frac{1}{2k!}\frac{{\rm d}^{k}}{{\rm d} u^{k}}\bigg|_{u=0}\frac{g(u)u^{k+1}}{\left(f(u)\right)^{(k+\ell+1)/2}}, \quad k\in\mathbb{N}_{0}.
\]
\end{lem}
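To illustrate the lemma on a toy example (not taken from the present setting), put $f(u)=u^{2}+u^{3}$ and $g(u)=1$, so that $\ell=0$ and $f_{0}=g_{0}=1$; the first three coefficients $c_{k}$ produced by Perron's formula can then be checked against a direct numerical evaluation of the integral.

```python
import sympy as sp
from mpmath import mp, quad, exp, gamma, inf

# toy data for the lemma: f(u) = u^2 + u^3, g(u) = 1, hence ell = 0
u = sp.symbols('u', positive=True)
f = u**2 + u**3

# Perron's formula: c_k = (1/(2*k!)) * d^k/du^k [g(u)*u^(k+1)/f(u)^((k+ell+1)/2)] at u = 0
c = []
for k in range(3):
    expr = sp.simplify(u**(k + 1)/f**sp.Rational(k + 1, 2))
    c.append(sp.Rational(1, 2)/sp.factorial(k)*sp.limit(sp.diff(expr, u, k), u, 0))

# compare the three-term expansion with the integral evaluated at n = 200
mp.dps = 30
n = 200
numeric = quad(lambda s: exp(-n*(s**2 + s**3)), [0, 1, inf])
approx = sum(gamma(mp.mpf(k + 1)/2)*mp.mpf(ck.p)/ck.q/n**(mp.mpf(k + 1)/2)
             for k, ck in enumerate(c))
```

The three-term sum reproduces the integral up to the expected $O(n^{-2})$ remainder.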
\begin{thm}\label{thm:asympt_ber_alpha1/2}
For $n\to\infty$, the complete asymptotic expansion
\begin{align}
\frac{(-2)^{n}\sqrt{n}}{n!} B_{n}^{(n)}\left(z+\frac{n}{2}\right)&\sim
\pi^{1/2}\cos\left(\pi z+\frac{\pi n}{2}\right)\sum_{k=0}^{\infty}\frac{p_{k}(z)}{2^{2k}k!}\frac{1}{n^{k}}\nonumber\\
&-\pi^{-1/2}\sin\left(\pi z+\frac{\pi n}{2}\right)\sum_{k=0}^{\infty}\frac{q_{k}(z)}{2^{2k+1}k!}\frac{1}{n^{k+1}}
\label{eq:asympt_ber_alp_half}
\end{align}
holds locally uniformly in $z\in{\mathbb{C}}$. The coefficients $p_{k}$ and $q_{k}$ are polynomials given by the formulas
\[
p_{k}(z)=\sum_{j=0}^{k}\binom{2k}{2j}\omega_{j}^{(k)}z^{2k-2j} \quad\mbox{ and } \quad
q_{k}(z)=\sum_{j=0}^{k}\binom{2k+1}{2j}\omega_{j}^{(k+1)}z^{2k+1-2j},
\]
where
\[
\omega_{j}^{(k)}=\frac{{\rm d}^{2j}}{{\rm d} x^{2j}}\bigg|_{x=0}\,\frac{x^{2k+1}}{(\pi^{2}+x^{2})\log^{k+1/2}\cosh(x/2)}.
\]
\end{thm}
\begin{rem}
The first several coefficients $p_{k}$ and $q_{k}$ read
\begin{align*}
p_{0}(z)&=\frac{2\sqrt{2}}{\pi^{2}}, \quad p_{1}(z)=\frac{2\sqrt{2}}{\pi^{4}}\left(8\pi^{2}z^{2}+\pi^{2}-16\right)\!,\\
p_{2}(z)&=\frac{2\sqrt{2}}{\pi^{6}}\left(64\pi^{4}z^{4}+16(5\pi^{2}-48)\pi^{2}z^{2}+\pi^{4}-160\pi^{2}+1536\right)\!,
\end{align*}
and
\begin{align*}
q_{0}(z)&=\frac{16\sqrt{2}z}{\pi^{2}}, \quad q_{1}(z)=\frac{16\sqrt{2}z}{\pi^{4}}\left(8\pi^{2}z^{2}+5\pi^{2}-48\right)\!,\\
q_{2}(z)&=\frac{16\sqrt{2}z}{3\pi^{6}}\left(192\pi^{4}z^{4}+80(7\pi^{2}-48)\pi^{2}z^{2}+91\pi^{4}-3360\pi^{2}+23040 \right)\!.
\end{align*}
\end{rem}
\begin{proof}
The starting point is the equation~\eqref{eq:B_n_lin_comb_trig} with $\alpha=1/2$:
\begin{equation}
(-1)^{n}\frac{\pi}{n!} B_{n}^{(n)}\left(z+\frac{n}{2}\right)=I_{1}(n)\cos\left(\pi z+\frac{\pi n}{2}\right)-I_{2}(n)\sin\left(\pi z+\frac{\pi n}{2}\right)\!,
\label{eq:B_n_lin_comb_trig_alpha_half}
\end{equation}
which holds true if $|\Re z|<n/2$ and where
\[
I_{1}(n)=\int_{0}^{\infty}\left(\frac{u^{1/2}}{1+u}\right)^{\!n}\frac{\pi u^{z-1}\,{\rm d} u}{\pi^{2}+\log^{2}u}
\quad\mbox{ and }\quad I_{2}(n)=\int_{0}^{\infty}\left(\frac{u^{1/2}}{1+u}\right)^{\!n}\frac{u^{z-1}\log u\,{\rm d} u}{\pi^{2}+\log^{2}u}.
\]
First we split the integral $I_{1}(n)$ at $u=1$ into two integrals, one from $0$ to $1$ and one from $1$ to $\infty$. Next, we substitute $u=e^{-x}$ in the first integral and $u=e^{x}$ in the second one. This results in the formula
\begin{equation}
I_{1}(n)=\frac{\pi}{2^{n-1}}\int_{0}^{\infty}\frac{\cosh(xz)}{\pi^{2}+x^{2}}\frac{{\rm d} x}{\cosh^{n}\!\left(x/2\right)}.
\label{eq:int_I_1_alp_half}
\end{equation}
Similarly one shows that
\begin{equation}
I_{2}(n)=\frac{1}{2^{n-1}}\int_{0}^{\infty}\frac{x\sinh(xz)}{\pi^{2}+x^{2}}\frac{{\rm d} x}{\cosh^{n}\!\left(x/2\right)}.
\label{eq:int_I_2_alp_half}
\end{equation}
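These substitutions are elementary but easy to get wrong; as a sanity check, the two expressions for $I_{1}(n)$ can be compared numerically (here for the sample values $n=6$ and $z=3/10$, which satisfy $|\Re z|<n/2$).

```python
from mpmath import mp, quad, log, cosh, pi, inf, mpf

mp.dps = 25
n, z = 6, mpf(3)/10   # sample values with |Re z| < n/2

# original form of I_1(n) for alpha = 1/2
orig = quad(lambda u: (u**mpf('0.5')/(1 + u))**n * pi*u**(z - 1)/(pi**2 + log(u)**2),
            [0, 1, inf])

# hyperbolic form obtained after substituting u = e^{-x} on (0,1) and u = e^{x} on (1,oo)
subst = pi/2**(n - 1)*quad(lambda x: cosh(x*z)/((pi**2 + x**2)*cosh(x/2)**n),
                           [0, inf])
```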
To the integral in~\eqref{eq:int_I_1_alp_half}, we may apply Lemma~\ref{lem:laplace_perron} with
\[
f(x)=\log\cosh\!\left(\frac{x}{2}\right), \quad g(x)=\frac{\cosh(xz)}{\pi^{2}+x^{2}},
\]
and $\ell=0$ getting the expansion
\begin{equation}
I_{1}(n)\sim\frac{\pi}{2^{n-1}}\sum_{k=0}^{\infty}\Gamma\left(\frac{k+1}{2}\right)\frac{c_{k}}{n^{(k+1)/2}},
\label{eq:I_1_from_Laplace_alp_half}
\end{equation}
where
\[
c_{k}=\frac{1}{2k!}\frac{{\rm d}^{k}}{{\rm d} x^{k}}\bigg|_{x=0}\frac{\cosh(xz)}{\pi^{2}+x^{2}}
\frac{x^{k+1}}{\log^{(k+1)/2}\cosh(x/2)}.
\]
Notice that $c_{2k-1}=0$ for $k\in\mathbb{N}$. Next, by using the Leibniz rule, one gets
\[
c_{2k}=\frac{1}{2(2k)!}\sum_{j=0}^{k}\binom{2k}{2j}\frac{{\rm d}^{2k-2j}}{{\rm d} x^{2k-2j}}\bigg|_{x=0}\!\left(\cosh(xz)\right)\frac{{\rm d}^{2j}}{{\rm d} x^{2j}}\bigg|_{x=0}\frac{x^{2k+1}}{(\pi^{2}+x^{2})\log^{k+1/2}\cosh(x/2)}
\]
which yields
\begin{equation}
c_{2k}=\frac{1}{2(2k)!}\sum_{j=0}^{k}\binom{2k}{2j}z^{2k-2j}\omega_{j}^{(k)}=\frac{p_{k}(z)}{2(2k)!},
\label{eq:c_2k_eq_p_k}
\end{equation}
where the notation from the statement has been used.
By substituting from~\eqref{eq:c_2k_eq_p_k} and \eqref{eq:I_1_from_Laplace_alp_half} in the equation~\eqref{eq:B_n_lin_comb_trig_alpha_half}, one arrives at the first asymptotic series on the right-hand side of~\eqref{eq:asympt_ber_alp_half}. In order to deduce the second expansion on the right-hand side of~\eqref{eq:asympt_ber_alp_half}, one proceeds in a similar fashion applying Lemma~\ref{lem:laplace_perron} to the integral~\eqref{eq:int_I_2_alp_half} this time with $\ell=2$. The local uniformity of the expansion can be justified using the analytic dependence of the integrands in~\eqref{eq:int_I_1_alp_half} and~\eqref{eq:int_I_2_alp_half} on~$z$.
\end{proof}
Theorem~\ref{thm:asympt_ber_alpha1/2} allows one to compute the coefficients in the asymptotic expansion of the zeros of $B^{(n)}_{n}$ located at a fixed distance from $n/2$, for $n\to\infty$, similarly to what was done in Theorem~\ref{thm:asympt_zer_small} for the small zeros based on the asymptotic expansion from Theorem~\ref{thm:Ber_sec_compl_asympt}. Since the proof of the statement below is completely analogous to the proof of~Theorem~\ref{thm:asympt_zer_small}, with the only exception that the asymptotic formula of Theorem~\ref{thm:asympt_ber_alpha1/2} is used, it is omitted.
\begin{thm}\label{thm:asympt_middle_zer}
For any $k\in{\mathbb{Z}}$, one has
\[
x_{n+k}^{(2n)}=n+k-\frac{1}{2}-\frac{2k-1}{\pi^{2}n}-\frac{(2k-1)(\pi^{2}-12)}{2\pi^{4}n^{2}}+O\left(\frac{1}{n^{3}}\right)\!, \quad n\to\infty,
\]
and
\[
x_{n+k+1}^{(2n+1)}=n+k+\frac{1}{2}-\frac{2k}{\pi^{2}n}+\frac{12k}{\pi^{4}n^{2}}+O\left(\frac{1}{n^{3}}\right)\!, \quad n\to\infty.
\]
\end{thm}
\begin{rem}
Note the difference between the polynomial decay in the asymptotic expansion of the middle zeros of~$B_{n}^{(n)}$ and the logarithmic decay in the asymptotic formulas for the small and large zeros (Theorem~\ref{thm:asympt_zer_small} and Remark~\ref{rem:asympt_zer_large}).
\end{rem}
\begin{rem}
More detailed expansions for the zeros located most closely to~$n/2$ read
\begin{align*}
x_{n}^{(2n)}=n-\frac{1}{2}+\frac{1}{\pi^{2}n}+\frac{\pi^{2}-12}{2\pi^{4}n^{2}}
&+\frac{3\pi^{4}-100\pi^{2}+720}{12\pi^6n^{3}}\\
&\hskip48pt+\frac{3\pi^{6}-216\pi^{4}+3856\pi^{2}-20160}{24\pi^8n^{4}} +O\left(\frac{1}{n^{5}}\right)
\end{align*}
and
\[
x_{n+2}^{(2n+1)}=n+\frac{3}{2}-\frac{2}{\pi^{2}n}+\frac{12}{\pi^{4}n^{2}}
-\frac{3\pi^{4}-40\pi^{2}+720}{6\pi^{6}n^{3}}+\frac{14(3\pi^{4}-44\pi^{2}+360)}{3\pi^{8}n^{4}}
+O\left(\frac{1}{n^{5}}\right)\!,
\]
for $n\to\infty$.
\end{rem}
\begin{rem}
The numbers $D_{2n}^{(2n)}:=4^{n}B_{2n}^{(2n)}(n)$ are known as the N\"{o}rlund $D$-numbers and appear in formulas for numerical integration, see~\cite[Chp.~8, \S~7]{norlund_24}. Theorem~\ref{thm:asympt_ber_alpha1/2} implies that
\[
(-1)^{n}\frac{\sqrt{2n}}{(2n)!}D_{2n}^{(2n)}\sim\sqrt{\pi}\sum_{k=0}^{\infty}\frac{\omega_{k}^{(k)}}{2^{3k}k!}\frac{1}{n^{k}}, \quad n\to\infty.
\]
Explicitly, the first three terms read
\[
(-1)^{n}\frac{\sqrt{2n}}{(2n)!}D_{2n}^{(2n)}=\frac{2\sqrt{2}}{\pi^{3/2}} + \frac{\pi^{2}-16}{2\sqrt{2}\pi^{7/2}n} + \frac{\pi^{4}-160\pi^{2}+1536}{32\sqrt{2}\pi^{11/2}n^{2}}+O\left(\frac{1}{n^{3}}\right)\!, \quad n\to\infty.
\]
\end{rem}
\begin{rem}
Another special case of Theorem~\ref{thm:asympt_ber_alpha1/2} yields an asymptotic expansion for the coefficients
\[
K_{2n}:=\frac{1}{(2n)!}B_{2n}^{(2n)}\left(n-\frac{1}{2}\right)
\]
appearing in the Gauss--Encke formula~\cite{slavic_ubpefsmf75}. Their complete asymptotic expansion reads
\begin{align*}
K_{2n}&\sim\frac{(-1)^{n}}{\sqrt{2\pi}4^{n+1}}\sum_{k=0}^{\infty}\frac{q_{k}\!\left(-1/2\right)}{2^{3k}k!}\frac{1}{n^{k+3/2}}\\
&=\frac{(-1)^{n+1}}{\sqrt{2\pi}2^{2n+3}}\sum_{k=0}^{\infty}\frac{1}{2^{5k}k!}\left(\sum_{j=0}^{k}\binom{2k+1}{2j}4^{j}\omega_{j}^{(k+1)}\right)\frac{1}{n^{k+3/2}}, \quad n\to\infty,
\end{align*}
which explicitly yields
\[
K_{2n}=\frac{(-1)^{n+1}}{2^{2n-1}\pi^{5/2}n^{3/2}}\left(1+\frac{7\pi^{2}-48}{8\pi^{2}n}+\frac{3(27\pi^{4}-480\pi^{2}+2560)}{64\pi^{4}n^{2}}+O\left(\frac{1}{n^{3}}\right)\right)\!, \quad n\to\infty.
\]
This is a generalization of the asymptotic approximations by Steffensen and Slavi\'{c}, see~\cite{slavic_ubpefsmf75,steffensen_saj24}.
\end{rem}
\subsection{The asymptotic behavior outside the oscillatory region}
We can scale the argument of $B_{n}^{(n)}$ by $n$ and consider the polynomials
$B_{n}^{(n)}(nz)$. Their zeros are located in the interval $(0,1)$ for all $n\in\mathbb{N}$ which follows from~\eqref{eq:loc_zeros_first}. Consequently, the function
$z\mapsto B_{n}^{(n)}(nz)$ oscillates in $(0,1)$ and the formula~\eqref{eq:asympt_ber_alpha} shows the asymptotic behavior in the oscillatory region
\begin{align*}
B_{n}^{(n)}(nx)=(-1)^{n}&n!\,\sqrt{\frac{2}{\pi n}}\frac{x^{xn-1/2}(1-x)^{(1-x)n-1/2}}{\pi^{2}+\log^{2}\left(x/(1-x)\right)}\\
&\times\left[\pi\cos(\pi xn)-\log\left(\frac{x}{1-x}\right)\sin(\pi xn)+O\left(\frac{1}{n}\right)\right]\!,
\end{align*}
for $x\in(0,1)$ fixed, as $n\to\infty$. In addition, the asymptotic behavior near the edges $x=0$ and $x=1$ can be obtained from Theorem~\ref{thm:Ber_sec_compl_asympt} and the symmetry relation
\begin{equation}
B_{n}^{(n)}(z)=(-1)^{n}B_{n}^{(n)}(n-z),
\label{eq:B_n_symm}
\end{equation}
which follows from~\eqref{eq:symmetry_id_genBP}.
To complete the picture, it remains to deduce the asymptotic behavior of $B_{n}^{(n)}(nz)$ for $z$ outside the interval $[0,1]$. The asymptotic analysis is based on the following variant of the saddle point method taken from~\cite[Thm.~7.1, Chp.~4]{olver_97}; see also Perron's method in~\cite[Sec.~II.5]{wong01}.
\begin{thm}[the saddle point method]\label{thm:saddle-point}
Let the following assumptions hold:
\begin{enumerate}[{\upshape i)}]
\item Functions $f$ and $g$ are independent of $n$, single valued, and analytic in a region $M\subset{\mathbb{C}}$.
\item The integration path~$\gamma$ is independent of $n$ and its range is located in~$M$ with a possible exception of the end-points.
\item There is a point $\xi_{0}$ located on the path~$\gamma$ which is not an end-point and is such that $f'(\xi_{0})=0$ and $f''(\xi_{0})\neq0$ (i.e., $\xi_{0}$ is a simple saddle point of~$f$).
\item The integral
\[
\int_{\gamma}g(\xi)e^{-n f(\xi)}{\rm d}\xi
\]
converges absolutely for all $n$ sufficiently large.
\item One has
\[
\Re\left(f(\xi)-f(\xi_{0})\right)>0,
\]
for all $\xi\neq\xi_{0}$ that lie on the range of~$\gamma$.
\end{enumerate}
Then
\begin{equation}
\int_{\gamma}g(\xi)e^{-n f(\xi)}{\rm d}\xi=g(\xi_{0})e^{-nf(\xi_{0})}\sqrt{\frac{2\pi}{nf''(\xi_{0})}}\left(1+O\left(\frac{1}{n}\right)\!\right)\!, \quad \mbox{ as } n\to\infty.
\label{eq:asympt_saddle-point}
\end{equation}
\end{thm}
\begin{rem}\label{rem:unif_saddle-point}
A uniform version of the saddle point method will be used again. In our case, the function $f=f(\cdot,z)$ from Theorem~\ref{thm:saddle-point} will depend analytically on an additional variable $z\in K$ where $K$ is a compact subset of ${\mathbb{C}}$. In order to conclude that the expansion~\eqref{eq:asympt_saddle-point} holds uniformly in $z\in K$, it suffices to require the assumptions (i)-(iii) and (v) to remain valid for all $z\in K$ and the integral from (iv) to converge absolutely and uniformly in $z\in K$ for all $n$ sufficiently large. The reader is referred to~\cite{neuschel_a12} for a more general uniform version of the saddle point method and the proof.
\end{rem}
Our starting point is the contour integral representation
\begin{equation}
B_{n}^{(n)}(z)=\frac{n!}{2\pi{\rm i}}\oint_{\gamma}\frac{(1+\xi)^{z-1}}{\xi^{n}\log(1+\xi)}{\rm d}\xi, \quad z\in{\mathbb{C}},
\label{eq:Ber_cont_int_repre}
\end{equation}
which follows readily from the generating function formula~\eqref{eq:gener_func_BP_sec} and~\eqref{eq:BP_sec_rel_B}. The curve $\gamma$ can be any Jordan curve with $0$ in its interior that does not cross the branch cut $(-\infty,-1]$ of the integrand, i.e., the range of $\gamma$ has to be located in ${\mathbb{C}}\setminus\left((-\infty,-1]\cup\{0\}\right)$.
The principal branches of the multi-valued functions like the logarithm are chosen if not stated otherwise.
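As a quick numerical check of~\eqref{eq:Ber_cont_int_repre}, note that the generating function gives $B_{2}^{(2)}(z)=z^{2}-2z+5/6$; the contour integral over the circle $|\xi|=1/2$ (an admissible choice of $\gamma$) reproduces this value. The sample point $z=1.7$ below is arbitrary.

```python
from mpmath import mp, quad, exp, log, pi, factorial, mpf

mp.dps = 25
n, z = 2, mpf('1.7')   # sample order and argument
r = mpf('0.5')         # any 0 < r < 1 keeps gamma inside C \ ((-oo,-1] u {0})

def integrand(theta):
    xi = r*exp(1j*theta)   # gamma parametrized by the circle, d(xi) = i*xi*d(theta)
    return (1 + xi)**(z - 1)/(xi**n*log(1 + xi))*1j*xi

val = factorial(n)/(2*pi*1j)*quad(integrand, [0, 2*pi])
exact = z**2 - 2*z + mpf(5)/6   # B_2^{(2)}(z) from the generating function
```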
Writing $nz$ instead of $z$ in~\eqref{eq:Ber_cont_int_repre}, the contour integral can be written in the form
\begin{equation}
B_{n}^{(n)}(nz)=\frac{n!}{2\pi{\rm i}}\oint_{\gamma}g(\xi)e^{-nf(\xi,z)}{\rm d}\xi, \quad z\in{\mathbb{C}},
\label{eq:Ber_scaled_cont_int_repre}
\end{equation}
where
\[
f(\xi,z)=\log\xi-z\log(1+\xi) \quad \mbox{ and } \quad g(\xi)=\frac{1}{(1+\xi)\log(1+\xi)},
\]
which is suitable for the application of the saddle point method. Without loss of generality, we may restrict $z\in{\mathbb{C}}\setminus[0,1]$ to the half-plane $\Re z\leq 1/2$ due to the symmetry~\eqref{eq:B_n_symm}.
The assumptions (i),(ii), and (iv) of Theorem~\ref{thm:saddle-point} can be readily checked. Concerning the assumption~(iii), one finds that the point
\[
\xi_{0}:=\frac{1}{z-1}
\]
is the only solution of $\partial_{\xi}f(\xi,z)=0$ and is simple.
The most difficult part is the justification of the assumption~(v) of~Theorem~\ref{thm:saddle-point}. The idea is to investigate the level curves in the $\xi$-plane determined by the equation
\[
\Re f(\xi,z)=\Re f(\xi_{0},z)
\]
with $z\in{\mathbb{C}}\setminus[0,1]$ and $\Re z\leq 1/2$ being fixed. This level curve, denoted as $\Omega_{0}$, is the common boundary of the two open sets
\[
\Omega_{\pm}:=\left\{\xi\in{\mathbb{C}}\setminus\left((-\infty,-1]\cup\{0\}\right) \mid \Re f(\xi,z)\gtrless\Re f(\xi_{0},z)\right\}.
\]
Since $\Re f(\cdot,z)$ is harmonic in ${\mathbb{C}}\setminus\left((-\infty,-1]\cup\{0\}\right)$, the curves of~$\Omega_{0}$ have no end-point in ${\mathbb{C}}\setminus\left((-\infty,-1]\cup\{0\}\right)$ and, moreover, they cannot form a loop that lies, together with its interior, in ${\mathbb{C}}\setminus\left((-\infty,-1]\cup\{0\}\right)$; see, for instance,~\cite[Lemma~24 and~26]{shapiro-stampach_ca17}. Hence the only possible loop of~$\Omega_{0}$ encircles the origin (the singularity of $f(\cdot,z)$), and the level curves of $\Omega_{0}$ either go to~$\infty$ or end at the cut $(-\infty,-1)$. In general, $\Omega_{0}$ need not be connected, but the curves of~$\Omega_{0}$ can intersect only at the point $\xi_{0}$, since it is the only stationary point of $f(\cdot,z)$.
To fulfill the assumption~(v) of~Theorem~\ref{thm:saddle-point}, we have to show that the Jordan curve~$\gamma$ can be homotopically deformed to a Jordan curve which crosses $\xi_{0}$ and is located entirely in~$\Omega_{+}$ with the only exception of the saddle point $\xi_{0}$. The following lemma will be used to justify the assumption~(v).
\begin{lem}\label{lem:justif_v}
Let $z\in{\mathbb{C}}\setminus[0,1]$, $\Re z\leq 1/2$, be fixed. For any $\theta\in(-\pi,\pi]$, there exists $\xi\in{\mathbb{C}}\setminus\left((-\infty,-1]\cup\{0\}\right)$ with $\arg\xi=\theta$ such that $\xi\in\Omega_{0}$.
\end{lem}
\begin{proof}
We show that $\Omega_{0}$ has a non-empty intersection with any complex ray in ${\mathbb{C}}\setminus(-\infty,0]$. First, assume $\theta\in(-\pi,\pi)$. Since
\[
\lim_{r\to0+}\Re f\left(re^{{\rm i}\theta},z\right)=-\infty \quad \mbox{ and } \quad \lim_{r\to\infty}\Re f\left(re^{{\rm i}\theta},z\right)=\infty,
\]
and $\Re f\left(re^{{\rm i}\theta},z\right)$ is continuous in $r\in(0,\infty)$, there exists $r=r(\theta)>0$ such that $r(\theta)e^{{\rm i}\theta}\in\Omega_{0}$.
Second, we verify that $\Omega_{0}$ intersects the interval $(-1,0)$. If $\Re z>0$, the task is easy since
\[
\lim_{x\to-1+}\Re f(x,z)=\infty \quad \mbox{ and } \quad \lim_{x\to0-}\Re f(x,z)=-\infty,
\]
and $\Re f(x,z)$ is continuous in $x\in(-1,0)$.
Suppose $\Re z<0$. Then $1/(\Re z-1)\in(-1,0)$. We show that $1/(\Re z-1)\in\Omega_{+}$, i.e.,
\begin{equation}
\Re f\left(\frac{1}{\Re z-1},z\right)>\Re f\left(\frac{1}{z-1},z\right)\!.
\label{eq:Ref_ineq_inproof}
\end{equation}
Then $\Omega_{0}\cap(-1,0)\neq\emptyset$ because $\Omega_{-}$ contains a neighborhood of $0$.
To verify the inequality~\eqref{eq:Ref_ineq_inproof}, we introduce the auxiliary function
\[
\chi(z):=\Re\left(f\left(\frac{1}{\Re z-1},z\right)-f\left(\frac{1}{z-1},z\right)\right)\!
\]
and show that $\chi(z)>0$ for $\Re z<0$. Noticing that
\[
\frac{\partial}{\partial\xi}\bigg|_{\xi=1/(\Re z-1)}\Re f(\xi,z)=0
\]
one computes that
\[
\frac{\partial \chi}{\partial \Re z}(z)=\log\left|\frac{z}{z-1}\right|-\log\frac{\Re z}{\Re z-1}.
\]
It is easy to check that the above expression is positive if $\Re z<0$ and $\Im z\neq0$. Hence, if $\Im z\neq0$, then $\chi$ is a strictly increasing function of $\Re z\in(-\infty,0)$. Taking also into account that
\[
\lim_{\Re z\to-\infty}\chi(z)=0,
\]
one infers that $\chi(z)>0$ whenever $\Re z<0$ and $\Im z\neq 0$.
If $\Re z<0$ and $\Im z=0$, then $1/(\Re z -1)$ coincides with the saddle point $\xi_{0}$, and hence $\Omega_{0}$ intersects the interval $(-1,0)$, too. Finally, if $\Re z=0$ and $\Im z\neq0$, it suffices to check that
\[
\Re f\left(\frac{1}{z-1},z\right)<0,
\]
since
\[
\lim_{x\to-1+}\Re f(x,z)=0 \quad \mbox{ and } \quad \lim_{x\to0-}\Re f(x,z)=-\infty.
\]
The verification of the above inequality is a matter of elementary analysis.
\end{proof}
Now we are in a position to deduce the asymptotic formula for $B_{n}^{(n)}(nz)$, as $n\to\infty$, in the non-oscillatory regime where $z\in{\mathbb{C}}\setminus[0,1]$.
\begin{thm}\label{thm:asympt_ber_non-osc}
One has
\begin{equation}
B_{n}^{(n)}(nz)=\frac{n!}{\sqrt{2\pi n}}\frac{(z-1)^{n}}{z\log\left(z/(z-1)\right)}\left(\frac{z}{z-1}\right)^{nz+1/2}\left(1+O\left(\frac{1}{n}\right)\!\right)\!,
\label{eq:asympt_ber_nonosc}
\end{equation}
for $n\to\infty$ locally uniformly in $z\in{\mathbb{C}}$ bounded away from the interval~$[0,1]$.
\end{thm}
\begin{rem}
Note that $z/(z-1)\in(-\infty,0)$ if and only if $z\in(0,1)$. Hence the
leading term in the asymptotic formula~\eqref{eq:asympt_ber_nonosc} is an analytic function of $z$ in ${\mathbb{C}}\setminus[0,1]$ with a branch cut~$[0,1]$.
\end{rem}
\begin{proof}
Assume first that $z\in{\mathbb{C}}\setminus[0,1]$ and $\Re z\leq1/2$. For the application of Theorem~\ref{thm:saddle-point} to the contour integral in~\eqref{eq:Ber_scaled_cont_int_repre}, it remains to justify the crucial assumption~(v). Lemma~\ref{lem:justif_v} implies that the level curve set~$\Omega_{0}$ includes a loop which encircles the origin, intersects the interval $(-1,0)$, and passes through the saddle point $\xi_{0}$. Moreover, the interior of this loop
is a subset of $\Omega_{-}$. Indeed, no other loop can separate the interior, because in such a case there would be a domain in which $f(\cdot,z)$ is analytic and $\Re f(\cdot,z)$ is constant on its boundary. This would imply that $f(\cdot,z)$ is constant by the maximum principle for harmonic functions.
The only crossing of curves in~$\Omega_{0}$ occurs at the point~$\xi_{0}$, where exactly two curves cross at the angle $\pi/2$ since the saddle point~$\xi_{0}$ is simple. Two of the outgoing arcs close up into the loop around the origin. The remaining two arcs either continue to $\infty$ or end at the cut~$(-\infty,-1]$. These curves cannot cross the loop at a point different from $\xi_{0}$ since $\xi_{0}$ is the only stationary point of $f(\cdot,z)$. Thus, with the only exception of the point~$\xi_{0}$, a right neighborhood of the loop, if traversed counter-clockwise, is a subset of~$\Omega_{+}$; see Figure~\ref{fig:assum_v}.
As a result, one observes that there exists a Jordan curve with $0$ in its interior, crossing the interval $(-1,0)$, and located entirely in the set~$\Omega_{+}$ except for the single point~$\xi_{0}$, which belongs to the image of this curve. This is a possible choice of the descent path satisfying the assumption~(v) of~Theorem~\ref{thm:saddle-point}, into which the curve~$\gamma$ in~\eqref{eq:Ber_scaled_cont_int_repre} can be homotopically deformed.
The asymptotic formula~\eqref{eq:asympt_ber_nonosc} follows from the application of Theorem~\ref{thm:saddle-point} and is determined up to a sign since the branch of the square root in the asymptotic formula~\eqref{eq:asympt_saddle-point} has not been specified. The local uniformity of the expansion can be justified using the fact that the integrand of~\eqref{eq:Ber_scaled_cont_int_repre} depends analytically on~$z$, see Remark~\ref{rem:unif_saddle-point}.
Using~\eqref{eq:int_val_sign} and the fact that all zeros of $B_{n}^{(n)}$ are positive, we get $(-1)^{n}B_{n}^{(n)}(x)>0$ if $x<0$. By inspection of the obtained asymptotic formula, one shows that the correct choice of the sign is plus, which results in~\eqref{eq:asympt_ber_nonosc}. Finally, if $z\in{\mathbb{C}}\setminus[0,1]$ with $\Re z>1/2$, one uses the symmetry~\eqref{eq:B_n_symm} together with the already obtained asymptotic formula, extending the validity of~\eqref{eq:asympt_ber_nonosc} to all $z\in{\mathbb{C}}\setminus[0,1]$.
\end{proof}
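As a numerical illustration (not part of the proof), $B_{n}^{(n)}(nz)$ can be evaluated through the representation~\eqref{eq:Ber_scaled_cont_int_repre} and compared with the leading term of~\eqref{eq:asympt_ber_nonosc}; for the sample values $n=100$ and $z=2$ the relative deviation is already at the level of the $O(1/n)$ error term. The periodic contour integral is computed with the trapezoidal rule, which converges rapidly for analytic periodic integrands.

```python
from mpmath import mp, exp, log, pi, sqrt, factorial, mpf

mp.dps = 40
n, z = 100, mpf(2)        # sample point well away from the interval [0, 1]
r, M = mpf('0.5'), 4096   # circle of radius 1/2, M-point trapezoidal rule

def integrand(theta):
    xi = r*exp(1j*theta)
    return (1 + xi)**(n*z - 1)/(xi**n*log(1 + xi))*1j*xi

contour = sum(integrand(2*pi*k/M) for k in range(M))*(2*pi/M)
Bval = (factorial(n)/(2*pi*1j)*contour).real

# leading term of the asymptotic formula
lead = (factorial(n)/sqrt(2*pi*n)*(z - 1)**n/(z*log(z/(z - 1)))
        *(z/(z - 1))**(n*z + mpf('0.5')))
rel_err = abs(Bval/lead - 1)
```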
\begin{figure}[htb!]
\includegraphics[width=0.9\textwidth]{assum_v.png}
\caption{An illustration of the level curves $\Omega_{0}$ (blue solid line) and a possible choice of the curve $\gamma$ that fulfill the assumption~(v) of Theorem~\ref{thm:saddle-point} (red dashed line) for $z=1/2+{\rm i}/6$.}
\label{fig:assum_v}
\end{figure}
\begin{rem}\label{rem:asympt_zero_distr}
It is not very surprising that the sequence of zero-counting measures of $B_{n}^{(n)}(nz)$, i.e., the uniform probability measures supported on the roots of $B_{n}^{(n)}(nz)$:
\[
\mu_{n}=\frac{1}{n}\sum_{k=1}^{n}\delta_{x_{k}^{(n)}/n},
\]
converges weakly to the uniform probability measure supported on the interval $[0,1]$. This can be verified by using the Cauchy transform of~$\mu_{n}$ and the asymptotic formula~\eqref{eq:asympt_ber_nonosc}. Indeed, for $z\in{\mathbb{C}}\setminus[0,1]$, one has
\[
\lim_{n\to\infty}\int_{{\mathbb{C}}}\frac{{\rm d}\mu_{n}(\xi)}{z-\xi}=\lim_{n\to\infty}\frac{1}{n}\frac{\partial}{\partial z} \log B_{n}^{(n)}(nz)=\log\frac{z}{z-1}=:C_{\mu}(z).
\]
Now, the Stieltjes--Perron inversion formula implies that the sequence $\{\mu_{n}\}$ converges weakly to the absolutely continuous measure supported on~$[0,1]$ whose density reads
\[
\frac{{\rm d}\mu}{{\rm d} x}(x)=\lim_{\epsilon\to0+}\frac{1}{\pi}\Im C_{\mu}(x-{\rm i}\epsilon)=1,
\]
for $x\in[0,1]$.
\end{rem}
\section{Final remarks and open problems}
To conclude, after a short remark concerning the Euler polynomials of the second kind, we indicate several research problems related to the Bernoulli polynomials of the second kind. These interesting problems appeared during the work on this paper and remain unsolved.
\subsection{A remark on the Euler polynomials of the second kind}
The Bernoulli polynomials are often studied jointly with the Euler polynomials since they share many similar properties~\cite{norlund_24}. The Euler polynomials of higher order are defined by the generating function
\[
\sum_{n=0}^{\infty}E_{n}^{(a)}(x)\frac{t^{n}}{n!}=\left(\frac{2}{1+e^{t}}\right)^{a}e^{xt}.
\]
The second identity in~\eqref{eq:ber_a_ident1} remains valid in the same form if $B_{n}^{(a)}$ is replaced by $E_{n}^{(a)}$. On the other hand, there is no simple expression for $E_{n-1}^{(n)}$ comparable with~\eqref{eq:B_n-1_n_explicit}. Hence, following the same steps as in the case of the polynomials~$B_{n}^{(n)}$ that resulted in~\eqref{eq:int_repre_simple}, one cannot deduce a simple integral expression for $E_{n}^{(n)}$. The simple integral formula~\eqref{eq:int_repre_simple} for $B_{n}^{(n)}$ was the crucial ingredient that allowed us to obtain the most important results of this paper. To the best of the author's knowledge, no similar integral formula for $E_{n}^{(n)}$ is known. Moreover, numerical experiments indicate that the zeros of $E_{n}^{(n)}$ are not all real.
\subsection{Open problem: Alternative proofs of the reality of zeros}
Although the proof of the reality of zeros of~$B_{n}^{(n)}$ used in Theorem~\ref{thm:Ber_sec_zer_real} is elementary, it would be interesting to find another proof that would not be based on the particular values of $B_{n}^{(n)}$.
One way of proving the reality of zeros of a polynomial is
based on finding a Hermitian matrix whose characteristic polynomial coincides
with the studied polynomial. This is a familiar fact, for example, for orthogonal
polynomials, which are the characteristic polynomials of Jacobi matrices. Since various recurrence formulas are known for the generalized Bernoulli polynomials, one may believe that there exists a Hermitian matrix $A_{n}$ with explicitly expressible elements such that $B_{n}^{(n)}(x)=\det(x-A_{n})$. We stress, however, that no such matrix has been found.
Concerning this problem, one can, for instance, show the linear recursion
\[
B_{n+1}^{(n+1)}(x)=(x-n)B_{n}^{(n)}(x)-\sum_{k=0}^{n}\binom{n}{k}\frac{b_{n-k+1}}{n-k+1}B_{k}^{(k)}(x), \quad n\in\mathbb{N}_{0},
\]
by making use of~\eqref{eq:gener_func_BP_sec}, where $b_{n}:=b_{n}(0)=B_{n}^{(n)}(1)$. As a consequence, $B_{n+1}^{(n+1)}$ is the characteristic polynomial of the lower Hessenberg matrix
\[
A_{n}=\begin{pmatrix}
b_{1} & 1 & 0 & 0 & \dots & 0 \\
\binom{1}{0}\frac{1}{2}b_{2} & b_{1}+1 & 1 & 0 & \dots & 0 \\
\binom{2}{0}\frac{1}{3}b_{3} & \binom{2}{1}\frac{1}{2}b_{2} & b_{1}+2 & 1 & \dots & 0 \\
\vdots & \vdots & \vdots & \vdots & \dots & \vdots \\
\binom{n}{0}\frac{1}{n+1}b_{n+1} & \binom{n}{1}\frac{1}{n}b_{n} & \binom{n}{2}\frac{1}{n-1}b_{n-1} & \binom{n}{3}\frac{1}{n-2}b_{n-2} & \dots & b_{1}+n
\end{pmatrix}\!.
\]
However, $A_{n}$ is clearly not Hermitian and the reality (and simplicity) of its eigenvalues is by no means obvious.
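Both the recursion and the Hessenberg realization are readily verified symbolically for small~$n$; the sketch below computes $B_{k}^{(k)}$ directly from the generating function and checks that $\det(x-A_{3})=B_{4}^{(4)}(x)$.

```python
import sympy as sp

t, x = sp.symbols('t x')

def B(k):
    """B_k^{(k)}(x) = k! * [t^k] (t/(e^t - 1))^k * e^{xt}."""
    if k == 0:
        return sp.Integer(1)
    ser = sp.series((t/(sp.exp(t) - 1))**k * sp.exp(x*t), t, 0, k + 1).removeO()
    return sp.expand(sp.factorial(k)*ser.coeff(t, k))

b = lambda k: B(k).subs(x, 1)   # b_k = B_k^{(k)}(1)

N = 3   # check det(x - A_N) = B_{N+1}^{(N+1)}(x)
A = sp.zeros(N + 1, N + 1)
for i in range(N + 1):
    A[i, i] = b(1) + i            # diagonal entries b_1 + i
    if i < N:
        A[i, i + 1] = 1           # superdiagonal of ones
    for j in range(i):
        A[i, j] = sp.binomial(i, j)*b(i - j + 1)/(i - j + 1)

charpoly = sp.expand((x*sp.eye(N + 1) - A).det())
```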
\subsection{Open problem: Beyond the reality of the zeros, a positivity}
A certain positivity property of convolution-like sums of generalized Bernoulli polynomials seems to hold. This positivity would imply the reality of the zeros of~$B_{n}^{(n)}$.
It follows from the operational (umbral) calculus that~\cite[p.~94]{roman84}
\[
B_{n}^{(a)}(x+y)=\sum_{k=0}^{n}\binom{n}{k}B_{n-k}^{(a)}(x)y^{k},
\]
for any $x,y,a\in{\mathbb{C}}$ and $n\in\mathbb{N}_{0}$. With the aid of the above formula, one can verify the identity
\begin{equation}
|B_{n}^{(n)}(x+{\rm i} y)|^{2}=|B_{n}^{(n)}(x)|^{2}+\sum_{k=1}^{\lfloor\frac{n}{2}\rfloor} \alpha_{n,k}(x)y^{2k} + \sum_{k=0}^{\lfloor\frac{n-1}{2}\rfloor}\beta_{n,k}(x) y^{2n-2k},
\label{eq:B_n_n_real_complex}
\end{equation}
where
\[
\alpha_{n,k}(x):=(-1)^{k}\sum_{l=0}^{2k}(-1)^{l}\binom{n}{l}\binom{n}{2k-l}B_{n-l}^{(n)}(x)B_{n-2k+l}^{(n)}(x)
\]
and
\[
\beta_{n,k}(x):=(-1)^{k}\sum_{l=0}^{2k}(-1)^{l}\binom{n}{l}\binom{n}{2k-l}B_{l}^{(n)}(x)B_{2k-l}^{(n)}(x).
\]
\begin{conjecture}
For all $x\in{\mathbb{R}}$ and $n\in\mathbb{N}$, one has
\[
\alpha_{n,k}(x)\geq0,\; \mbox{ for } \; 1\leq 2k\leq n,
\;\;\mbox{ and }\;\;
\beta_{n,k}(x)\geq0,\; \mbox{ for } \; 0\leq 2k\leq n-1.
\]
\end{conjecture}
If the above conjecture holds true, one would obtain from~\eqref{eq:B_n_n_real_complex}, for example, the inequality
\[
|B_{n}^{(n)}(x+{\rm i} y)|^{2}\geq\left(B_{n}^{(n)}(x)\right)^{\!2}+y^{2n},
\]
where we used that $\beta_{n,0}(x)=1$. From this inequality, the reality of zeros of $B_{n}^{(n)}$ immediately follows.
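The identity~\eqref{eq:B_n_n_real_complex} itself (though of course not the conjecture) is straightforward to confirm symbolically for small~$n$; the sketch below checks it for $n=4$, together with the positivity of $\alpha_{4,k}$ at a few sample points.

```python
import sympy as sp

t, x, y = sp.symbols('t x y', real=True)
n = 4

# B_m^{(n)}(x), m = 0, ..., n, from the generating function (t/(e^t - 1))^n * e^{xt}
ser = sp.series((t/(sp.exp(t) - 1))**n * sp.exp(x*t), t, 0, n + 1).removeO()
Bm = [sp.expand(sp.factorial(m)*ser.coeff(t, m)) for m in range(n + 1)]

alpha = lambda k: (-1)**k*sum((-1)**l*sp.binomial(n, l)*sp.binomial(n, 2*k - l)
                              *Bm[n - l]*Bm[n - 2*k + l] for l in range(2*k + 1))
beta = lambda k: (-1)**k*sum((-1)**l*sp.binomial(n, l)*sp.binomial(n, 2*k - l)
                             *Bm[l]*Bm[2*k - l] for l in range(2*k + 1))

# |B_n^{(n)}(x + iy)|^2 = B_n^{(n)}(x + iy) * B_n^{(n)}(x - iy) (real coefficients)
lhs = sp.expand(Bm[n].subs(x, x + sp.I*y)*Bm[n].subs(x, x - sp.I*y))
rhs = sp.expand(Bm[n]**2 + sum(alpha(k)*y**(2*k) for k in range(1, n//2 + 1))
                + sum(beta(k)*y**(2*n - 2*k) for k in range((n - 1)//2 + 1)))
```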
\subsection{Open problem: A transition between the asymptotic zero distributions of the Bernoulli polynomials of the first and second kind}
The asymptotic zero distribution~$\mu_{0}$ for Bernoulli polynomials of the first kind $B_{n}=B_{n}^{(1)}$ was found by Boyer and Goh in~\cite{boyer-goh_aam07}. It is a weak limit of the sequence of the zero-counting measures associated with the polynomials~$B_{n}$. The measure~$\mu_{0}$ is absolutely continuous, supported on certain arcs of analytic curves in ${\mathbb{C}}$, and its density is also described in~\cite{boyer-goh_aam07}. On the other hand, the asymptotic zero distribution~$\mu_{1}$ of the polynomials $B_{n}^{(n)}$ is simply the uniform probability measure supported on the interval $[0,1]$, see Remark~\ref{rem:asympt_zero_distr}.
The two measures $\mu_{0}$ and $\mu_{1}$ can be viewed as two extreme points of the asymptotic zero distribution $\mu_{\lambda}$ of the polynomials
\begin{equation}
z\mapsto B_{n}^{(1-\lambda+\lambda n)}(nz),
\label{eq:ber_lam}
\end{equation}
where $\lambda\in[0,1]$. The order of the above generalized Bernoulli polynomial is nothing but the convex combination of $1$ and $n$. The measures $\mu_{\lambda}$ seem to be absolutely continuous and to depend continuously on~$\lambda$. An interesting research problem would be to describe the support (the zero attractor) as well as the density of $\mu_{\lambda}$ for $\lambda\in(0,1)$. For an illustration of approximate supports of $\mu_{\lambda}$ for several values of~$\lambda$, see Figure~\ref{fig:trans_lam}.
\begin{figure}[htb!]
\includegraphics[width=0.99\textwidth]{trans_lambda.png}
\caption{Plots of the zeros of the polynomials~\eqref{eq:ber_lam} in the complex plane for $n=500$ and $\lambda\in\{0,1/4,1/2,3/4\}$ illustrating how the zero attractor of the Bernoulli polynomials (case $\lambda=0$) deforms into the interval~$[0,1]$ (case $\lambda=1$) as $\lambda$ changes from $0$ to $1$.}
\label{fig:trans_lam}
\end{figure}
\subsection{Open problem: A uniform asymptotic expansion of $B_{n}^{(n)}(nx)$ for all $x\in[0,1]$}
As pointed out by an anonymous referee, it would also be an interesting open problem to find an asymptotic approximation for $B_{n}^{(n)}(nx)$ that holds uniformly for $x\in[0,1]$. Such an approximation, if it exists, should involve special functions reflecting the behavior of $B_{n}^{(n)}(nx)$ near the edge points $x=0$ and $x=1$.
\section*{Acknowledgement}
The author is grateful to two referees for careful reports that improved the paper.
Further, the author acknowledges financial support from the Ministry of Education, Youth and Sports of the Czech Republic
under project no. CZ.02.1.01/0.0/0.0/16\_019/0000778.
\bibliographystyle{acm}
https://arxiv.org/abs/math/0603131 | Two step flag manifolds and the Horn conjecture | We give a simplification of Belkale's geometric proof of the Horn conjecture. Our approach uses the geometry of two-step flag manifolds to explain the occurrence of the Horn inequalities in a very straightforward way. The arguments for both necessity and sufficiency of the Horn inequalities are fairly conceptual when viewed in this framework. We provide examples to illustrate the method of proof. | \section{Introduction}
\subsection{General approach}
Horn's conjecture \cite{H} was originally formulated as a recursive
method for solving a problem concerning the eigenvalues of
Hermitian matrices. However, as a consequence of work of Klyachko
\cite{Kly}, Horn's conjecture can be
reformulated as saying that the non-vanishing Schubert intersection
numbers for Grassmannians satisfy a certain recursion. This recursion
was first proved by Knutson and Tao \cite{KT}, using combinatorial
methods. Later, a geometric proof was given by
Belkale \cite{B}, which was the inspiration for this paper.
For our purposes, a Schubert problem will refer to a collection
of Schubert varieties $\bar{\Omega}_{\sigma_1}, \ldots,
\bar{\Omega}_{\sigma_s}$,
on some partial flag variety $G/P$. A Schubert problem is non-vanishing
if the product of the corresponding cohomology classes is non-zero.
Equivalently, a Schubert problem is non-vanishing if and only
if the general translates $\bar{\Omega}^{F_1}_{\sigma_1}, \ldots,
\bar{\Omega}^{F_s}_{\sigma_s}$ of these Schubert varieties have
a non-empty generically transverse intersection. This observation
allows one to study the vanishing question inside the tangent
space of $G/P$.
In an effort to better understand the geometry behind Horn's conjecture,
we study the tangent spaces of Schubert
varieties of two-step flag manifolds, and show how these are related
to the problem.
The two-step flag manifold
$$Fl(d,r,\mathbb{C}^n) = \{ (S,V)\ |\ S \subset V \subset \mathbb{C}^n,
\ \dim S = d,\ \dim V = r\}$$
fibres over the Grassmannian $Gr(r,\mathbb{C}^n)$, with fibre $Gr(d,V)$ at
a point $V \in Gr(r,\mathbb{C}^n)$. A Schubert problem
on $Gr(r,\mathbb{C}^n)$ can be ``lifted'' in a number of
different ways to a Schubert problem on $Fl(d,r,\mathbb{C}^n)$, in such a way
that a non-empty transverse intersection on $Gr(r,\mathbb{C}^n)$ lifts to
a non-empty transverse
intersection on $Fl(d,r,\mathbb{C}^n)$. Each way of lifting corresponds to a
non-vanishing Schubert problem inside the fibre $Gr(d,V)$. However,
it is sometimes possible to see that the intersection on $Fl(d,r,\mathbb{C}^n)$
is non-transverse for some trivial reasons. These trivial conditions
are seen to be the Horn inequalities.
The difficult part of the Horn conjecture is to show the sufficiency
of the Horn inequalities. In our approach this amounts to showing that
a non-transverse intersection on
$Gr(r,\mathbb{C}^n)$ lifts to something upstairs which is not only non-transverse,
but non-transverse for the aforementioned trivial reasons. However,
once we have all the appropriate machinery in place, this turns
out to be almost as straightforward as the ``easy'' direction of Horn's
conjecture.
The two-step flag manifolds are not a necessary part
of the argument. In principle the entire argument could be formulated
inside the tangent space of the Grassmannian. This would probably
even lead to a shorter proof. However, it is our opinion that the
geometry of the two-step flag manifold plays a fundamental role in
this picture, and to undermine its role would be remiss.
Although there are a number of new ideas in this paper, we do not
claim to be presenting an original or independent proof of the
Horn conjecture. Our objective in writing this was to better understand
the argument in \cite{B}, to simplify it in places, and to show
how it relates to some of our own previous work \cite{P1}.
A few of the results have been taken directly from \cite{B},
while some of the others merely contain old ideas which have been
dressed up in a new context. We will try to indicate whenever possible
which ideas and results have been borrowed.
\subsection{Partial flag manifolds}
\subsubsection{Schubert varieties and Schubert positions}
Let $G = GL(n)$, and let $P \subset G$ be a parabolic subgroup, the
stabiliser subgroup of some $k$-step flag
$$V^0 = \{0\}=V^0_0 \subsetneq V^0_1 \subsetneq \cdots
\subsetneq V^0_k \subsetneq V^0_{k+1} = \mathbb{C}^n.$$
We will assume for later convenience that the $V^0_i$ are coordinate
subspaces.
Let $d_j = \dim V^0_j$.
Then $G/P$ is a partial
flag manifold (of type $0<d_1<\cdots<d_k<n$).
Let
$$F = \{0\}=F_0 \subsetneq F_1 \subsetneq \cdots \subsetneq F_{n-1}
\subsetneq F_n = \mathbb{C}^n$$
be a full flag on $\mathbb{C}^n$. Let $B(F) \subset G$ denote its
stabiliser.
Let $\sigma = \sigma_1 \ldots \sigma_n$ be a string of length $n$, where
$\sigma_l \in \{0, \ldots, k\}$, and the number of $j$'s in this string
is $d_{k-j+1}-d_{k-j}$ (with the conventions $d_0 = 0$ and $d_{k+1} = n$).
It is a basic fact that $G/P$ has finitely many
$B(F)$-orbits, and that these orbits are indexed by the set of possible
$\sigma$ as follows.
Define
$$\Omega^F_\sigma = \{V \in G/P\ \big|\
\sum_{j=1}^k \left(\dim V_j \cap F_l - \dim V_j \cap F_{l-1}\right) = \sigma_l
\ \mbox{ for } l = 1, \ldots, n\}.$$
$\Omega^F_\sigma$ is a $B(F)$-orbit, called the {\bf Schubert cell}
of $\sigma$ with respect to the flag $F$. When the flag $F$ is
understood, or irrelevant, we shall also denote this $\Omega_\sigma$.
The {\bf Schubert variety} of $\sigma$, with respect to the flag $F$, is its
closure $\bar{\Omega}^F_\sigma$, which represents the Schubert class
$S^\sigma \in H^*(G/P)$.
\begin{example}
If the $\sigma_l$ are weakly decreasing, then $\Omega_\sigma$ consists of
a single point which is a subflag of $F$.
If the $\sigma_l$ are weakly increasing, then $\Omega_\sigma$ is a dense
open subvariety of $G/P$, and $S^\sigma = 1 \in H^*(G/P)$.
\end{example}
In general we can easily calculate the dimension of this orbit:
$$\dim \Omega_\sigma = \#\{l<l'\ |\ \sigma_l < \sigma_{l'}\}.$$
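In code this inversion count is immediate; a minimal sketch (the encoding of $\sigma$ as a string of digits, and the function name, are ours):

```python
def dim_schubert_cell(sigma):
    """dim Omega_sigma = number of pairs l < l' with sigma_l < sigma_{l'}."""
    return sum(1
               for l in range(len(sigma))
               for lp in range(l + 1, len(sigma))
               if int(sigma[l]) < int(sigma[lp]))

# Weakly decreasing strings give a point, weakly increasing a dense cell:
print(dim_schubert_cell("110"))   # 0  (a point in Gr(2, 3))
print(dim_schubert_cell("011"))   # 2  (the dense cell in Gr(2, 3))
```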
If $V \in G/P$, then the {\bf Schubert position} of $V$ with respect
to the flag $F$ is the unique $\sigma$ such that
$V \in \Omega^F_\sigma$. When we have multiple flags $F^1, \ldots, F^s$
on $\mathbb{C}^n$, the Schubert position of $V$ will be
the $s$-tuple $(\sigma^1, \ldots \sigma^s)$, where $\sigma^i$ is the
Schubert position of $V$ with respect to $F^i$.
\subsubsection{$01$-strings from $\sigma$}
From $\sigma$ we construct {\em $01$-strings} in two different ways.
First, for each pair $(u,v)$, with $0 \leq u < v \leq k$, we define
a string $\sigma(uv)$. This is constructed as follows: we
delete from $\sigma = \sigma_1 \ldots \sigma_n$ every
$\sigma_l \notin \{u,v\}$. What remains will be a string consisting
only of `$u$'s and `$v$'s. To convert this into a $01$-string,
we change every `$u$' to a {\rm `$0$'} , and every `$v$' to a {\rm `$1$'} .
\begin{example}
If $\sigma=01312230132$, then $\sigma(13)$ is produced as follows:
$$\sigma = 0\mathbf{131}22\mathbf{3}0\mathbf{13}2
\mapsto \mathbf{131313} \mapsto 010101 = \sigma(13).$$
\end{example}
Each $\sigma(uv)$ corresponds to a Schubert cell on a different
Grassmannian. In section \ref{sec:twostep} we investigate the
significance of these $\sigma(uv)$ on a two-step flag manifold.
In a completely different spirit, we define
strings $\sigma[j] = \sigma[j]_1 \ldots \sigma[j]_n$,
for $1 \leq j \leq k$, as follows.
$$ \sigma[j]_l =
\begin{cases}
0,& \text{if $\sigma_l \leq k-j$} \\
1,& \text{if $\sigma_l > k-j$.}
\end{cases}
$$
These have a very natural geometric significance. The
$k$-step flag variety has $k$ different projections onto
Grassmannians $Gr(d_j, \mathbb{C}^n)$. The image of the Schubert
cell $\Omega_\sigma$ under the projection onto $Gr(d_j, \mathbb{C}^n)$
is the Schubert cell $\Omega_{\sigma[j]}$.
\begin{example}
If $\sigma=2103210$, then
\begin{align*}
\sigma[1]&=0001000 \\
\sigma[2]&=1001100 \\
\sigma[3]&=1101110
\end{align*}
\end{example}
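Both constructions are purely combinatorial; the following sketch (function names ours) reproduces the two examples above:

```python
def sigma_uv(sigma, u, v):
    """sigma(uv): delete every entry not in {u, v}, then send u -> 0, v -> 1."""
    kept = [c for c in sigma if c in (str(u), str(v))]
    return "".join("0" if c == str(u) else "1" for c in kept)

def sigma_bracket(sigma, j, k):
    """sigma[j]: entry l becomes '0' if sigma_l <= k - j and '1' otherwise."""
    return "".join("0" if int(c) <= k - j else "1" for c in sigma)

print(sigma_uv("01312230132", 1, 3))                      # 010101
print([sigma_bracket("2103210", j, 3) for j in (1, 2, 3)])
# ['0001000', '1001100', '1101110']
```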
\subsubsection{Notation for Grassmannians}
When $P$ is a maximal parabolic, $G/P \cong Gr(r,n)$
is a Grassmannian,
and we can equivalently index the Schubert varieties and Schubert classes
by partitions. Let $\Lambda(r,n-r)$ denote the set of partitions
with $r$ parts, whose largest part is at most $n-r$, i.e.
$$\Lambda(r,n-r) =
\{ \lambda = \mathbf{0} \leq \lambda_1 \leq \cdots \leq \lambda_r
\leq \mathbf{n-r} \}.$$
There is a simple correspondence between these partitions and $01$-strings:
given a partition $\lambda \in \Lambda(r,n-r)$, one can construct a
string as follows.
\begin{verse}
0. Set $\lambda_0 = 0$ and $\lambda_{r+1} = n-r$; begin with the empty string and $l=1$. \\
1. Append $\lambda_l - \lambda_{l-1}$ {\rm `$0$'}s . \\
2. If $l=r+1$ stop. \\
3. Append a {\rm `$1$'} . \\
4. Increment $l$ and repeat steps 1-4.\\
\end{verse}
\begin{example}
$\lambda = \mathbf{0} \leq 0 \leq 1 \leq 3 \leq 3 \leq \mathbf{5}$,
corresponds to the string $101001100$.
\end{example}
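The correspondence is easy to implement in both directions, using the conventions $\lambda_0 = 0$ and $\lambda_{r+1} = n-r$ for the bold endpoints (a sketch; the function names are ours):

```python
def partition_to_string(lam, n):
    """01-string of lam in Lambda(r, n-r): between consecutive '1's insert
    lam_l - lam_{l-1} '0's, with lam_0 = 0 and lam_{r+1} = n - r."""
    r = len(lam)
    ext = [0] + list(lam) + [n - r]
    s = ""
    for l in range(1, r + 2):
        s += "0" * (ext[l] - ext[l - 1])
        if l == r + 1:
            break
        s += "1"
    return s

def string_to_partition(s):
    """Inverse map: lambda_j = number of '0's before the j-th '1'."""
    lam, zeros = [], 0
    for c in s:
        if c == "0":
            zeros += 1
        else:
            lam.append(zeros)
    return lam

print(partition_to_string([0, 1, 3, 3], 9))   # 101001100, as in the example
print(string_to_partition("101001100"))       # [0, 1, 3, 3]
```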
In the case of Grassmannians, we will sometimes find it more convenient to
use $\Lambda(r,n-r)$
to index our Schubert classes and Schubert varieties. We
therefore denote these $S^\lambda$ and $\bar{\Omega}_\lambda$ respectively.
It is worth noting at this time that
$$\dim \Omega_\lambda = \sum_{l=1}^{r} \lambda_l =: |\lambda|.$$
\subsubsection{Induced flags}
Whenever we have a full flag $F$ on a vector space $W$, and $V \subset W$
is a subspace, we get induced flags $F_V$ and $F_{W/V}$ on $V$ and $W/V$
respectively. These can be thought of as follows.
Consider the chain of subspaces
$$F_V = \bigg(
\{0\}=F_0 \cap V \subset F_1 \cap V \subset \cdots \subset F_{n-1} \cap V
\subset F_n \cap V = V \bigg).$$
Since $\dim V$ may be less than $\dim W$, this chain need no
longer be a flag, as some of the inclusions $F_{i-1} \cap V \subset F_i \cap V$
may become equalities. However, by eliminating all repeated elements,
one gets a full flag on $V$.
we keep correspond precisely to the {\rm `$1$'}s in the Schubert position
of $V$ with respect to $F$.
Similarly we can construct a full flag on $W/V$. Let $\Pi:W \to W/V$
denote the quotient map. Then
$$F_{W/V}=
\bigg(
\{0\}=\Pi(F_0) \subset \Pi(F_1) \subset \cdots \subset \Pi(F_{n-1})
\subset \Pi(F_n) = W/V \bigg).$$
Again by eliminating repeated elements we obtain a full flag.
In this case, we keep the elements corresponding to the {\rm `$0$'}s in the
Schubert position of $V$.
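For coordinate flags and coordinate subspaces, both constructions amount to splitting the ordered basis of $F$ according to the Schubert position; a toy sketch (the encoding is ours, not from the text):

```python
def schubert_position(V, F):
    """01-string of a coordinate subspace V (a set of basis indices) with
    respect to a coordinate flag F (an ordering of the basis indices):
    sigma_l = 1 exactly when the l-th basis vector of F lies in V."""
    return "".join("1" if a in V else "0" for a in F)

def induced_flags(V, F):
    """F_V keeps the basis vectors of F lying in V (the '1's of the
    position); F_{W/V} keeps those outside V (the '0's), in the order of F."""
    return [a for a in F if a in V], [a for a in F if a not in V]

F = [3, 2, 5, 1, 4]   # the flag <e3> < <e3,e2> < ... on W = C^5
V = {2, 4}            # V = <e2, e4>
print(schubert_position(V, F))   # 01001
print(induced_flags(V, F))       # ([2, 4], [3, 5, 1])
```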
\subsection{The generalised Horn conjecture}
We can now state Horn's conjecture in the language of Schubert calculus.
Actually we will give the slightly more general statement which appears
in \cite{B}.
To simplify the notation in the statement a little, any time we
write $S^\lambda \in H^*(Gr(a,b))$ (or otherwise assume that $\lambda$
indexes such a class), then we implicitly assume
$$\lambda = \mathbf{0} \leq \lambda_1 \leq \cdots \leq \lambda_a \leq
\mathbf{b-a} \in \Lambda(a,b-a).$$
\begin{definition}
\label{def:hornineq}
Let $\lambda^1, \ldots, \lambda^s \in \Lambda(r,n-r)$.
Suppose that for every $d$, $1 \leq d \leq r$,
and every non-zero product
$S^{\mu^1} \cdots S^{\mu^s} \in H^*(Gr(d,r))$, the inequality
\begin{equation}
\label{eqn:hornineq}
\sum_{i=1}^s \sum_{k=1}^d \lambda^i_{\mu^i_k +k} \geq (s-1)d(n-r)
\end{equation}
holds. In this case we say that the {\bf Horn inequalities} hold
for $\lambda^1, \ldots, \lambda^s$.
\end{definition}
\begin{theorem}[Generalised Horn conjecture]
The product
$S^{\lambda^1} \cdots S^{\lambda^s} \in H^*(Gr(r,n))$ is non-zero
if and only if the Horn inequalities hold for
$\lambda^1, \ldots, \lambda^s$.
\end{theorem}
The standard form
of Horn's conjecture assumes that the product of
$S^{\lambda^1} \cdots S^{\lambda^s}$ is of top degree, whereas
this formulation (which appears in \cite{B}) does not. To accommodate this,
definition \ref{def:hornineq} allows an inequality for all
non-zero products in $H^*(Gr(d,r))$ rather than just those
of top degree. This generalisation is easily shown to
imply the standard statement.
Definition \ref{def:hornineq}
differs slightly from the usual definition of the Horn inequalities
in another
small way. Normally one does not include an inequality for the case
$d=r$; one
simply uses $d$ for which $1 \leq d \leq r-1$. The $d=r$ inequality
simply amounts to saying that
$$\sum_{i=1}^s \mathop{\mathrm{codim}} \Omega_{\lambda^i} \leq \dim Gr(r,n),$$
i.e. this covers the case where the product vanishes for dimensional
reasons. If we assume that $S^{\lambda^1} \cdots S^{\lambda^s}$ is of
top degree then this inequality is always satisfied.
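As a small numerical illustration of the $d=r$ case (the sample partitions below are our own choice): with $\dim \Omega_\lambda = |\lambda|$, the $d=r$ inequality $\sum_i |\lambda^i| \geq (s-1)r(n-r)$ is exactly the codimension bound above.

```python
def codim(lam, r, n):
    """codim of Omega_lambda in Gr(r, n); recall dim Omega_lambda = |lambda|."""
    return r * (n - r) - sum(lam)

# s = 2 sample partitions in Lambda(4, 5), i.e. Schubert classes on Gr(4, 9):
r, n = 4, 9
lams = [[0, 1, 3, 3], [3, 3, 3, 5]]
total_codim = sum(codim(l, r, n) for l in lams)
print(total_codim, r * (n - r))   # 19 20
# 19 <= 20: the d = r Horn inequality holds, i.e. this pair does not
# vanish for dimension reasons alone.
```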
\section{Schubert calculus in the tangent space}
\subsection{General statements for $G/P$}
Let $F^1, \ldots, F^s$ be flags on $\mathbb{C}^n$.
To determine whether
a product $S^{\sigma^1} \cdots S^{\sigma^s} \in H^*(Gr(r,n))$
vanishes, it is sufficient to consider whether the Schubert varieties
$\bar{\Omega}^{F^i}_{\sigma^i}$ have a point of intersection
when the flags $F^i$ are sufficiently generic. This is due to
the Kleiman-Bertini theorem \cite{Kl}, which tells us that
if $F^i$ are generic, these Schubert varieties will intersect
transversely (in positively oriented points). Moreover, this point
of intersection can be assumed
to be inside the open cells $\Omega^{F^i}_{\sigma^i}$.
Thus we can take the following approach. Choose the flags $F^i$
such that $V \in G/P$ is an intersection point of the Schubert
varieties $\Omega^{F^i}_{\sigma^i}$. Subject to this restriction,
the flags $F^i$ should be as generic as possible. We call generic
flags with this restriction {\bf almost generic} for $V$.
The question then
becomes whether or not the Schubert varieties with respect to
almost generic flags intersect transversely.
If they
intersect transversely, then
Schubert varieties with respect to
fully generic flags must have a point of
intersection; however if they do not, then the point of intersection
is artificial, and fully generic flags will not give any
point of intersection.
More importantly, this calculation can be
done on the level of tangent spaces. We have the following basic
result which appears in \cite{B, P1}.
\begin{lemma}
\label{lem:tangent1}
$S^{\sigma^1} \cdots S^{\sigma^s} = 0 \in H^*(G/P)$ if and only
if the intersection of subspaces
$$\bigcap_{i=1}^s T_V \Omega^{F^i}_{\sigma^i} \subset T_V G/P$$
is non-transverse for $F^i$ almost generic.
\end{lemma}
Our $V$ can be any point of $G/P$, so let us take $V=V^0$. Then
$P$ acts transitively on the flags $F^i$ such that
$V^0 \in \Omega^{F^i}_{\sigma^i}$. Thus to calculate
$T_{V^0} \Omega^{F^i}_{\sigma^i}$, for generic $F^i$, it suffices
to compute it for a special $F^i$ and consider the action of a
generic element of $P$ on the
tangent space $T_{V^0} G/P = \mathfrak{g}/\mathfrak{p}$.
There is always at least one
coordinate flag $\hat{F}(\sigma^i)$ such that
$V^0 \in \Omega^{\hat{F}(\sigma^i)}_{\sigma^i}$; we will use one of these.
The standard flag on $\mathbb{C}^n$ is
$$F^{\mathrm{std}} = \{0\} \subsetneq \langle x_1 \rangle
\subsetneq \langle x_1, x_2 \rangle \subsetneq \cdots
\subsetneq \langle x_1, x_2, \ldots, x_{n-1} \rangle \subsetneq \mathbb{C}^n.$$
The Weyl group $S_n$ acts transitively on the set of coordinate flags
of $\mathbb{C}^n$; thus each $\hat{F}(\sigma^i) = w_i \cdot F^{\mathrm{std}}$ for some $w_i \in S_n$.
Of all possible choices for the $\hat{F}(\sigma^i)$, we will always
choose $\hat{F}(\sigma^i) = w_i \cdot F^{\mathrm{std}}$ with $w_i$ minimal
in the Bruhat order.
Denote the tangent space to the Schubert variety
$\Omega^{\hat{F}(\sigma)}_{\sigma}$ at $V^0$ by
$$\hat{Z}_\sigma = T_{V^0} \Omega^{\hat{F}(\sigma)}_{\sigma}
\subset \mathfrak{g}/\mathfrak{p}.$$
Let $Z_\sigma$ denote a generic $P$-translate of $\hat{Z}_\sigma$. This
notation will be convenient, as we will often need to consider
intersections
$\bigcap_{i=1}^s p_i \hat{Z}_{\sigma^i}$ where $p_i \in P$ are
generic; we can now write this simply as $\bigcap_{i=1}^s Z_{\sigma^i}$.
For the special cases of Grassmannians and two-step flag manifolds, we will
use the letters $X$ or $Y$ respectively instead of $Z$.
Lemma \ref{lem:tangent1} can be stated equivalently as follows:
\begin{lemma}
\label{lem:tangent2}
$S^{\sigma^1} \cdots S^{\sigma^s} = 0 \in H^*(G/P)$ if and only
if the intersection of subspaces
$$\bigcap_{i=1}^s Z_{\sigma^i} \subset \mathfrak{g}/\mathfrak{p}$$
is non-transverse.
\end{lemma}
\begin{remark}
Although we have eliminated the flags $F^i$ from this statement,
we will sometimes wish to think of $Z_{\sigma^i}$ as being determined
by generic flags, rather than as generic translates of
$\hat{Z}_{\sigma^i}$.
\end{remark}
\begin{remark}
Whenever we use the notation $Z_{\sigma^i}$, we tacitly assume (unless
otherwise specified) that underlying flags $F^i$ are almost generic.
If the $F^i$ are almost generic, we say that
$Z_{\sigma^1}, \ldots, Z_{\sigma^s}$ are in {\bf general position}.
This is, of course, equivalent to the statement that
$Z_{\sigma^i}$ is a \emph{generic} $P$-translate of
$\hat{Z}_{\sigma^i}$.
\end{remark}
To calculate a particular $\hat{Z}_{\sigma}$, we use the following fact.
\begin{proposition}
\label{prop:calcZ}
$\hat{Z}_{\sigma}$ is the image of $\mathfrak{b}(\hat{F}(\sigma))$ under
$\mathfrak{g} \to \mathfrak{g}/\mathfrak{p}$.
\end{proposition}
\begin{proof}
As $\Omega^{\hat{F}(\sigma)}_{\sigma}$ is a $B(\hat{F}(\sigma))$-orbit,
the tangent space is generated by $\mathfrak{b}(\hat{F}(\sigma))$.
\end{proof}
\subsection{Grassmannians}
\label{sec:grassmannian}
\subsubsection{The tangent space to a Grassmannian Schubert variety}
To distinguish the
Grassmannian as a special case, we use the notation
$\hat{X}_{\lambda}$ for $\hat{Z}_\sigma$,
and $X_\lambda$ for the generic translate $Z_\sigma$,
where $\lambda$ is the partition corresponding to the $01$-string $\sigma$.
We will take $V^0$ to be the coordinate subspace
$\langle x_{n-r+1}, \ldots, x_n \rangle \in Gr(r,n)$.
We will now identify the subspace $\hat{X}_\lambda$.
Put $V = V^0$, and $Q = \mathbb{C}^n/V$.
Now $\mathfrak{g}/\mathfrak{p}$ can be naturally identified with $\mathrm{Hom}(V, Q)$,
so we can view $\hat{X}_\lambda$ as a set of homomorphisms $\phi:V \to Q$.
Both $V$ and $Q$ inherit full flags from the standard flag. $V$ inherits the
flag
$$F^{\mathrm{std}}_V = \{0\} \subsetneq \langle x_{n-r+1} \rangle
\subsetneq \langle x_{n-r+1}, x_{n-r+2} \rangle \subsetneq \cdots
\subsetneq \langle x_{n-r+1}, \ldots, x_{n-1} \rangle \subsetneq V$$
and $Q$ inherits the image of
$$F^{\mathrm{std}}_Q = V \subsetneq V + \langle x_{1} \rangle
\subsetneq V + \langle x_{1}, x_{2} \rangle \subsetneq \cdots
\subsetneq V + \langle x_{1}, \ldots, x_{n-r-1} \rangle \subsetneq \mathbb{C}^n$$
under the quotient map $\mathbb{C}^n \to Q$.
\begin{proposition}
Under these identifications,
$$\hat{X}_\lambda =
\{\phi \in \mathrm{Hom}(V,Q)\ |\ \phi((F^{\mathrm{std}}_V)_l) \subset
(F^{\mathrm{std}}_Q)_{\lambda_l},\ l=1, \ldots, r\}.
$$
\end{proposition}
\begin{proof}
Let $\sigma$ denote the $01$-string corresponding to $\lambda$, and put
$$
\alpha_k =
\begin{cases}
\sum_{l=1}^k 1-\sigma_l, &\text{if $\sigma_k = 0$} \\
n-r+\sum_{l=1}^k \sigma_l, &\text{ if $\sigma_k = 1$.}
\end{cases}
$$
So $\alpha_k \leq n-r$ if $\sigma_k=0$, and $\alpha_k >n-r$ if $\sigma_k=1$.
Then the flag $\hat{F}(\sigma)$ is
$$\hat{F}(\sigma)
= \{0\} \subsetneq \langle x_{\alpha_1} \rangle
\subsetneq \langle x_{\alpha_1}, x_{\alpha_2}\rangle \subsetneq \cdots
\subsetneq \langle x_{\alpha_1}, \ldots, x_{\alpha_{n-1}} \rangle
\subsetneq \mathbb{C}^n.$$
To see this, note that $V \cap \hat{F}(\sigma)_l$ jumps in dimension
exactly when $\sigma_l=1$, and $\alpha$ is the smallest permutation that
accomplishes this.
By proposition \ref{prop:calcZ}, $\hat{X}_\lambda$ is identified with
$\mathfrak{b}(\hat{F}(\sigma))/\mathfrak{p}$.
Let $\mathfrak{n}$ denote the orthogonal complement to $\mathfrak{p}$.
Since $V$ is a coordinate subspace and $\hat{F}(\sigma)$ is a coordinate
flag, the image of $\mathfrak{b}(\hat{F}(\sigma))$ in $\mathfrak{g}/\mathfrak{p}$
coincides with the image of $\mathfrak{b}(\hat{F}(\sigma)) \cap \mathfrak{n}$.
Thus it suffices to determine $\mathfrak{b}(\hat{F}(\sigma)) \cap \mathfrak{n}$.
We see that an element $\phi \in \mathfrak{n}$ preserves $\hat{F}(\sigma)$
if and only if,
for each $j \in \{n-r+1, \ldots, n\}$,
$$\phi(x_j) \in \langle x_{\alpha_1}, x_{\alpha_2}, \ldots,
x_{\alpha_m} \rangle, \quad \mbox{where } \alpha_m = j.$$
But since
$\mathop{\mathrm{Image}}(\phi) \subset \langle x_1, \ldots, x_{n-r} \rangle$, this
is equivalent to
$$\phi(x_j) \in V + \langle x_1, \ldots, x_l \rangle
= (F^{\mathrm{std}}_Q)_l$$
where $l$ is the number of {\rm `$0$'}s before the $m^{\rm th}$ {\rm `$1$'} of $\sigma$.
If $x_j$ is the $j'^{\,\rm th}$ basis vector of $V$, i.e.\ $j' = j-(n-r)$, then
the $m^{\rm th}$ {\rm `$1$'} is the $j'^{\,\rm th}$ one, so $l = \lambda_{j'}$.
\end{proof}
\begin{example}
In $Gr(4, \mathbb{C}^9)$,
if $\lambda = \mathbf{0} \leq 0 \leq 1 \leq 3 \leq 3 \leq \mathbf{5}$,
then with respect to the standard coordinate bases on $V$ and $Q$,
$$\hat{X}_\lambda = \
\left (
\begin{matrix}
0 & * & * & * \\
0 & 0 & * & * \\
0 & 0 & * & * \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{matrix}
\right)
$$
In general, if we write $\hat{X}_\lambda$ in this form, the number of
$*$'s in column $j$ will be $\lambda_j$.
\end{example}
The action of $P$ on $\mathrm{Hom}(V,Q)$ is given by
$$
\left(
\begin{matrix}
A & 0 \\
C & B
\end{matrix}
\right)
\cdot \phi
= A \phi B^{-1}.
$$
To compute the generic intersection of two subspaces
$X_\lambda \cap X_\mu$, it suffices to take $X_\lambda = \hat{X}_\lambda$,
and $X_\mu = p \cdot \hat{X}_\mu$, where
$$ p =
\left(
\begin{matrix}
& &1& & & \\
& \reflectbox{$\ddots$} & & &\mbox{\Huge $0$}& \\
1& & & & & \\
& & & & &1 \\
&\mbox{\Huge $0$}& & &\reflectbox{$\ddots$}& \\
& & &1& & \\
\end{matrix}
\right)
$$
This is the tangent space analogue of intersecting a Schubert variety
with an opposite Schubert variety.
\begin{example}
\label{ex:oppositeintersection}
If $\lambda = \mathbf{0} \leq 0 \leq 1 \leq 3 \leq 3 \leq \mathbf{5}$,
$\mu = \mathbf{0} \leq 3 \leq 3 \leq 3 \leq 5 \leq \mathbf{5}$, then
\begin{align*}
X_\lambda \cap X_\mu &=
\left (
\begin{matrix}
0 & * & * & * \\
0 & 0 & * & * \\
0 & 0 & * & * \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{matrix}
\right)
\cap
\left (
\begin{matrix}
* & 0 & 0 & 0 \\
* & 0 & 0 & 0 \\
* & * & * & * \\
* & * & * & * \\
* & * & * & *
\end{matrix}
\right) \\
&=
\left (
\begin{matrix}
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
0 & 0 & * & * \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0
\end{matrix}
\right)
\end{align*}
The intersection is codimension $18$; however the expected codimension
is $\mathop{\mathrm{codim}} X_\lambda + \mathop{\mathrm{codim}} X_\mu = 13+6 =19$. Thus
$S^\lambda S^\mu = 0 \in H^*(Gr(r,n))$.
\end{example}
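The computation in this example reduces to intersecting the two star patterns, since the generic translate by the antidiagonal $p$ just reverses rows and columns; a sketch (the set-of-positions encoding is ours):

```python
def star_pattern(lam):
    """Free entries of hat{X}_lambda in Hom(V, Q): column j (1-indexed)
    has stars in rows 1..lambda_j, so column j carries lambda_j stars."""
    return {(i, j) for j, lj in enumerate(lam, start=1)
                   for i in range(1, lj + 1)}

def opposite(pattern, r, q):
    """Translate by the antidiagonal permutation p: reverse rows and columns."""
    return {(q + 1 - i, r + 1 - j) for (i, j) in pattern}

r, n = 4, 9
q = n - r
lam = [0, 1, 3, 3]   # lambda from the example
mu = [3, 3, 3, 5]    # mu from the example

X_lam = star_pattern(lam)
X_mu = opposite(star_pattern(mu), r, q)
inter = X_lam & X_mu

dim_hom = r * q                                              # dim Hom(V,Q) = 20
codim_inter = dim_hom - len(inter)                           # 20 - 2 = 18
expected = (dim_hom - len(X_lam)) + (dim_hom - len(X_mu))    # 13 + 6 = 19
print(codim_inter, expected)   # 18 19: the codimensions fail to add,
                               # so S^lam S^mu = 0 in H^*(Gr(4, 9))
```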
\subsubsection{$X_\lambda$ as a set of homomorphisms}
\label{sec:grassmannianhoms}
Let $\phi \in X_\lambda \subset \mathrm{Hom}(V,Q)$.
This represents a direction in which we can
perturb $V$ and remain in the tangent space to the Schubert
variety. The kernel, $\ker \phi$ represents the maximal subspace of $V$ which
is preserved by this perturbation. One of the key ideas which we take
from Belkale's proof is to examine $\ker \phi$ for a generic
$\phi \in \bigcap_{i=1}^s X_{\lambda^i}$. For now, however, let us
restrict our attention to a single $X_\lambda$.
Let us recall that $X_\lambda$ was actually constructed as the
tangent space of a Schubert variety relative
to some generic flag $F$ on $\mathbb{C}^n$. Thus $V$ carries an induced
flag $F_V$, and $Q$ carries an induced flag $F_Q$. The action
of $GL(V) \times GL(Q)$ on $\mathrm{Hom}(V,Q)$ corresponds to changing
these induced flags. $F_V$ and $F_Q$ carry all the
relevant information (for our purposes) about the original flag
$F$, and moreover, the action of $GL(V) \times GL(Q)$ on
$Flags(V) \times Flags(Q)$ is transitive. Thus we shall stop
thinking about $F_V$ and $F_Q$ as the induced flags of $F$, and
instead think of them as an independent pair of generic flags
on $V$ and $Q$ respectively.
\begin{remark}
\label{rmk:genericinducedflags}
We have in fact just observed that the induced flags on $V$
and $\mathbb{C}^n/V$ are generic, if $V \in Gr(r,n)$ is a generic
point of intersection of Schubert varieties in general position.
\end{remark}
Let $S$ be a subspace of $V$ whose Schubert position relative to the
flag $F_V$ is $\rho$. Put $d = \dim S$.
If $\phi \in \mathrm{Hom}(V,Q)$ and $\ker \phi
\supset S$, then $\phi$ descends to a map $\tilde{\phi} \in \mathrm{Hom}(V/S,Q)$.
We now consider the space
$$X_\lambda\big / S = \{\tilde\phi \in \mathrm{Hom}(V/S,Q)\ |\ \phi \in X_\lambda,
\ \ker\phi \supset S\} = X_\lambda \cap \mathrm{Hom}(V/S,Q).$$
\begin{lemma}
$X_\lambda\big / S = X_{\lambda'}$, where $\lambda' \in \Lambda(r-d, n-r)$
is the subpartition
$$\lambda' = \mathbf{0} \leq \lambda_{k_1} \leq \cdots \leq
\lambda_{k_{r-d}} \leq \mathbf{n-r}$$
where $k_j$ are the positions of the {\rm `$0$'}s of $\rho$.
The flag on $V/S$ associated to $X_{\lambda'}$ is the induced
flag $(F_V)_{V/S}=:F_{V/S}$.
\end{lemma}
\begin{proof}
The action of $P$ simply gives a change of basis on $V$
and $Q$. Thus it suffices to prove this in the case where
$F = \hat{F}(\lambda)$, in which case $X_\lambda = \hat{X_\lambda}$ and
$F_V = F^{\mathrm{std}}_V$.
We must verify that for every $\phi \in X_\lambda\big / S$, that
$\phi ((F_{V/S})_j) \subset (F_Q)_{\lambda'_j}$. However, this is
straightforward, as $(F_{V/S})_j$ is the image under the quotient map
of $(F_V)_{k_j}$ which maps to
$(F_Q)_{\lambda_{k_j}} =(F_Q)_{\lambda'_j}$. Thus $X_\lambda\big / S \subset
X_{\lambda'}$.
Moreover, every element
of $X_{\lambda'}$ can be seen to have a unique lifting to
$\{\phi \in X_\lambda\ |\ \ker \phi \supset S\}$. To see this, note
that by a change of coordinates which preserves $F_V$ we can make
$S$ a coordinate subspace, in which case this is
obvious. Thus $X_\lambda\big / S = X_{\lambda'}$.
\end{proof}
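The passage $\lambda \mapsto \lambda'$ is again purely combinatorial: keep the parts of $\lambda$ at the positions of the {\rm `$0$'}s of $\rho$. A two-line sketch (encoding ours):

```python
def quotient_partition(lam, rho):
    """lambda' in Lambda(r-d, n-r): the subpartition of lam at the positions
    of the '0's of rho, the Schubert position of S inside V."""
    return [lj for lj, c in zip(lam, rho) if c == "0"]

lam = [0, 1, 3, 3]   # lambda in Lambda(4, 5)
rho = "0110"         # S is 2-dimensional (two '1's)
print(quotient_partition(lam, rho))   # [0, 3], a partition in Lambda(2, 5)
```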
\subsubsection{Genericity of $S$}
The one remaining fact we will need is that when $S$ is the special
subspace $S = \ker \phi$, $\phi \in \bigcap_{i=1}^s X_{\lambda^i}$,
then
the flags $F_S$ and $F_{V/S}$ are actually {\em generic} flags.
It is a priori
conceivable that by choosing this particular $S$ (its definition
involves the flags $F^i$), we could have undone all the genericity
we had before, thereby
ending up in a position where all the flags $F^i_{V/S}$ are not
generic with respect to each other (for example, they could all be equal).
This would be most unfortunate; however luckily it does not happen.
Let $\rho^i$ denote the Schubert position of $S \subset V$ with
respect to the flag $F^i_V$. Thus
$S \in \bigcap_{i=1}^s \Omega^{F^i_V}_{\rho^i}$. We must show
that $S$ is in fact a generic point of this intersection of Schubert
varieties. This is sufficient, since the induced flags at
a generic point of an intersection of Schubert varieties are generic
(c.f. remark \ref{rmk:genericinducedflags}).
The idea is to fix generic flags $F^i_V$,
while the flags $F^i_Q$ vary.
We show that there cannot exist a subvariety
$T \subset \bigcap_{i=1}^s \Omega^{F^i_V}_{\rho^i}$ such that
$\ker \phi \in T$, for a generic choice of
$\phi \in \bigcap_{i=1}^s X_{\lambda^i}$, and generic induced flags
$F^i_Q$. This is a consequence of a generalisation of the Kleiman
moving lemma, due to Belkale.
\begin{lemma}[{\rm Belkale \cite{B}}]
\label{lem:moving}
Let $H$ be an algebraic group acting on a variety $X$.
Suppose $\pi: X \to Y$ is an $H$-invariant fibration, such that $H$
acts transitively on the fibres. Let $Z_i \subset X$ and
$Y_i=\pi(Z_i) \subset Y$, $i \in \{1, \ldots, s\}$, be subvarieties
such that $\pi|_{Z_i}: Z_i \to Y_i$ is a fibration. Put
$Y_0 = \bigcap_{i=1}^s Y_i$, and $X_0 = \pi^{-1}(Y_0)$. Let
$T \subset Y_0$, be a subvariety. Then for generic
$h_i \in H$, the intersection
$$\pi^{-1}(T) \cap \left ( \bigcap_{i=1}^s h_i \cdot Z_i \right )$$
has the expected dimension as an intersection inside $X_0$.
That is,
\begin{equation}
\label{eqn:moving}
\mathop{\mathrm{codim}} \pi^{-1}(T) \cap \left ( \bigcap_{i=1}^s h_i \cdot Z_i \right )
= \mathop{\mathrm{codim}} \pi^{-1}(T) +
\sum_{i=1}^s \mathop{\mathrm{codim}} (Z_i \cap X_0)
\end{equation}
(here $\mathop{\mathrm{codim}}$ means ``codimension inside $X_0$'').
\end{lemma}
In this example,
$$X = \mathrm{Hom}_d(V,Q)
= \{\psi \in \mathrm{Hom}(V,Q)\ |\ \dim \ker \psi = d\},$$
$$Y = Gr(d,V), \qquad \pi(\psi) = \ker(\psi).$$
The group $H$ is $GL(Q)$, which we note preserves the kernel of
$\psi \in Z_i$. Moreover, $GL(Q)$ acts transitively on the
induced flags $F^i_Q$.
The subvariety $Z_i \subset X_{\lambda^i} \subset X$ consists of
those elements of
$X_{\lambda^i}$ whose kernel is in Schubert position $\rho^i$.
Thus a generic translate $h_i \cdot Z_i$ is simply $X_{\lambda^i}$
for a different generic choice of flags $F^i_Q$.
Thus the image $\pi(\psi)$ of a generic point in the intersection
$\psi \in \bigcap_{i=1}^s h_i \cdot Z_i$, is just the kernel of a
generic element of $\bigcap_{i=1}^s X_{\lambda^i}$.
The image of $Z_i$ under $\pi$ is the open Schubert cell
$Y_i = \Omega^{F^i_V}_{\rho^i}$. We wish to show that there is
no subvariety $T \subset \bigcap Y_i$ such that
$\ker \psi \in T$ for all choices above. In other words, for
any subspace $T$ we can choose $\psi$ and generic flags $F^i_Q$
such that $\psi \in \bigcap_{i=1}^s X_{\lambda^i}$ and
$\ker \psi \notin T$. Equivalently, we must show there exists
$\psi \in \bigcap h_i \cdot Z_i$, with $\psi \notin \pi^{-1}(T)$.
But this is clear from lemma \ref{lem:moving}, since otherwise
the codimensions in equation (\ref{eqn:moving}) would not add up.
\begin{corollary}
\label{cor:genericdescent}
Let $S = \ker \phi$ for a generic $\phi \in \bigcap_{i=1}^s
X_{\lambda^i}$. Then the $X_{\lambda^i}\big / S = X_{{\lambda^i}'}$
are in general position. Moreover
there is an element $\tilde \phi \in \bigcap_{i=1}^s X_{{\lambda^i}'}
\subset \mathrm{Hom}(V/S,Q)$ such that $\ker \tilde \phi = \{0\}$.
\end{corollary}
\begin{proof}
The fact that the $X_{\lambda^i}\big / S$ are generic simply means that
the induced flags $F^i_{V/S}$ and $F^i_{Q}$ are generic, which is
what we have just shown. For the second statement, since
$\ker \phi = S$, $\phi$ descends to a well-defined map
$\tilde \phi:V/S \to Q$ with
$\tilde \phi \in \bigcap_{i=1}^s X_{\lambda^i}\big / S$ and
$\ker \tilde \phi = \{0\}$.
\end{proof}
\begin{corollary}
\label{cor:transversereduction}
The intersection $\bigcap_{i=1}^s X_{\lambda^i}\big / S$ is transverse.
\end{corollary}
The argument can be seen as a special case of \cite[Lemma 2.18]{B}.
\begin{proof}
Since the $X_{\lambda^i}\big / S = X_{{\lambda^i}'}$
are linear subspaces of $\mathrm{Hom}(V/S,Q)$, their
intersection is necessarily equidimensional. Thus it suffices to
show that the intersection is transverse on an open subset
which contains a point of intersection.
Consider the space
$\mathrm{Hom}_0(V/S,Q) = \{\psi \in \mathrm{Hom}(V/S,Q)\ |\ \ker \psi = \{0\}\}.$
This is a homogeneous space under the action of $GL(V/S) \times GL(Q)$,
and it is a Zariski open subset of $\mathrm{Hom}(V/S,Q)$.
By corollary \ref{cor:genericdescent}
the $X_{{\lambda^i}'}$ are generic translates of $\hat X_{{\lambda^i}'}$
by elements of $GL(V/S) \times GL(Q)$. So by Kleiman-Bertini, the
intersection
$$\bigcap_{i=1}^s \left(\mathrm{Hom}_0(V/S,Q) \cap X_{{\lambda^i}'}\right)$$
is a transverse intersection. Moreover, it contains the point
$\tilde \phi$
which shows that the intersection $\bigcap_{i=1}^s X_{{\lambda^i}'}$
is transverse.
\end{proof}
\subsection{Two-step flag manifolds}
\label{sec:twostep}
\subsubsection{The tangent space to a Schubert variety in a two-step
flag manifold}
Let $0<d<r<n$. We consider the two-step flag manifold $Fl(d,r,\mathbb{C}^n)$.
For two-step flag manifolds, we use the notation
$\hat{Y}_{\sigma}$ for $\hat{Z}_\sigma$,
and $Y_\sigma$ for the generic translate $Z_\sigma$.
Here $\sigma$ is a $012$-string, with $d$ {\rm `$2$'}s, $r-d$ {\rm `$1$'}s, and
$n-r$ {\rm `$0$'}s. Let $\eta_1 < \eta_2 < \cdots <\eta_{n-r}$
denote the positions of the {\rm `$0$'}s (i.e.\ $\sigma_{\eta_m}=0$ for
$m \leq n-r$).
Let $\eta_{n-r+1} < \cdots < \eta_{n-d}$ denote the positions
of the {\rm `$1$'}s, and $\eta_{n-d+1} < \cdots < \eta_{n}$ denote the
positions of the {\rm `$2$'}s.
Our first objective is to describe the space $\hat{Y}_{\sigma}$.
Let $V$ denote the coordinate subspace
$V = \langle x_{n-r+1}, \ldots, x_n \rangle$, and $S$ denote the
coordinate subspace
$S = \langle x_{n-d+1}, \ldots, x_n \rangle$. (Eventually $V$ and
$S$ will play the same roles as they did in section
\ref{sec:grassmannian}.) We take our base flag $V^0$ to be
the coordinate two-step flag
$$V^0 = \{0\} \subset S \subset V \subset \mathbb{C}^n.$$
Now $\mathfrak{g}/\mathfrak{p}$ has a basis descending from the standard basis
$$\big\{ E_{jk}
\ \big|\ \mbox{$j \leq n-r < k$ or $j \leq n-d < k$}
\big\}$$
where $E_{jk}$ is the image under $\mathfrak{g} \to \mathfrak{g}/\mathfrak{p}$ of the $n \times n$
matrix whose only non-zero entry is a {\rm `$1$'} in the $(j,k)$-position.
These basis vectors naturally partition $\mathfrak{g}/\mathfrak{p}$ into three blocks:
the upper left block is spanned by those $E_{jk}$ such that
$1 \leq j \leq n-r$, $n-r < k \leq n-d$; the lower right block is
spanned by those $E_{jk}$ such that
$n-r < j \leq n-d$, $n-d < k \leq n$, and the upper right block is
spanned by those $E_{jk}$ such that
$1 \leq j \leq n-r$, $n-d < k \leq n$. The first two are
naturally viewed as subspaces, while the last is more naturally
viewed as a quotient space.
Since $S$ and $V$ are coordinate subspaces, $\hat{Y}_{\sigma}$ is
spanned by some subset of the $E_{jk}$.
\begin{proposition}
$\hat{Y}_{\sigma} = \langle E_{jk}\ |\ \eta_j < \eta_k\rangle$.
\end{proposition}
\begin{proof}
$\eta = \eta_1 \ldots \eta_n$ is the element of $S_n$
such that $\eta^{-1} \cdot F^{\mathrm{std}} = \hat{F}(\sigma)$. Thus
$\mathfrak{b}(\hat{F}(\sigma)) = \eta^{-1} \cdot \mathfrak{b}(F^{\mathrm{std}})$. One
can easily check that the image in $\mathfrak{g}/\mathfrak{p}$ is
$\{E_{jk}\ |\ \eta_j < \eta_k\}$.
\end{proof}
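To make this recipe concrete, here is a short Python sketch (the helper names are ours, added for illustration and not part of the original text) that computes $\eta$ and the support of $\hat{Y}_\sigma$ directly from a $012$-string; it reproduces the data of example \ref{ex:Y} below.

```python
def eta_from_string(sigma):
    """Positions (1-indexed) of the '0's, then the '1's, then the '2's."""
    return [j + 1 for c in "012" for j, s in enumerate(sigma) if s == c]

def Yhat_support(sigma):
    """Pairs (j, k) indexing the basis vectors E_{jk} of g/p that span
    Yhat_sigma, i.e. those basis vectors with eta_j < eta_k."""
    n = len(sigma)
    nr = sigma.count("0")               # n - r
    nd = nr + sigma.count("1")          # n - d
    eta = eta_from_string(sigma)
    return {(j, k)
            for j in range(1, nd + 1) for k in range(nr + 1, n + 1)
            if ((j <= nr < k) or (j <= nd < k)) and eta[j - 1] < eta[k - 1]}
```

For $\sigma = 021010201$ this gives $\eta = 146835927$, and the support consists of the $13$ starred entries in the matrix picture of example \ref{ex:Y}.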
\begin{example}
\label{ex:Y}
Let $d=2$, $r=5$, $n=9$. Then $\mathfrak{g}/\mathfrak{p}$ looks like
$$
\left(
\begin{matrix}
\cdot & \cdot & \cdot & \cdot & * & * & * & * & * \\
\cdot & \cdot & \cdot & \cdot & * & * & * & * & * \\
\cdot & \cdot & \cdot & \cdot & * & * & * & * & * \\
\cdot & \cdot & \cdot & \cdot & * & * & * & * & * \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & * & * \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & * & * \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & * & * \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot
\end{matrix}
\right)
$$
Let $\sigma = 021010201$. Then $\eta = 146835927$, and
$$\hat{Y}_{\sigma} =
\left(
\begin{matrix}
\cdot & \cdot & \cdot & \cdot & * & * & * & * & * \\
\cdot & \cdot & \cdot & \cdot & 0 & * & * & 0 & * \\
\cdot & \cdot & \cdot & \cdot & 0 & 0 & * & 0 & * \\
\cdot & \cdot & \cdot & \cdot & 0 & 0 & * & 0 & 0 \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 0 & * \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 0 & * \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & 0 & 0 \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot \\
\cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot & \cdot
\end{matrix}
\right )$$
Note that the above pictures of $\mathfrak{g}/\mathfrak{p}$ and $\hat{Y}_\sigma$
break up into three blocks,
$$ \left(
\begin{matrix}
* & * & * \\
0 & * & * \\
0 & 0 & * \\
0 & 0 & *
\end{matrix}
\right)
\qquad
\left(
\begin{matrix}
* & * \\
0 & * \\
0 & * \\
0 & 0
\end{matrix}
\right)
\qquad
\left(
\begin{matrix}
0 & * \\
0 & * \\
0 & 0
\end{matrix}
\right )
$$
and each block contains a Young-diagram shaped picture. These
pictures are $\hat{X}_{\sigma(01)}$, $\hat{X}_{\sigma(02)}$ and
$\hat{X}_{\sigma(12)}$.
\end{example}
\subsubsection{Grassmannian problems in the two-step flag manifold}
In example \ref{ex:Y} one can see that $\mathfrak{g}/\mathfrak{p}$ consists of
three rectangular blocks, and the diagram representing
$\hat{Y}_\sigma$ is shaped like a Young diagram when restricted to
each of these blocks. Thus these are actually diagrams for
some $\hat{X}_\lambda$.
Inside $\mathfrak{g}/\mathfrak{p}$ there are two natural subspaces, corresponding
to the fibres of the forgetful maps $G/P \to Gr(r,\mathbb{C}^n)$
and $G/P \to Gr(d,\mathbb{C}^n)$. The subspaces are $\mathrm{Hom}(S,V)$,
and $\mathrm{Hom}(V/S, Q)$ (where $Q = \mathbb{C}^n/V$). In terms of our basis
for $\mathfrak{g}/\mathfrak{p}$,
these correspond to the lower right
and upper left blocks respectively. Note that these subspaces
are in fact $P$-invariant. Thus the quotient map
$$\Pi_{\mathrm{Hom}(S,Q)} : \mathfrak{g}/\mathfrak{p} \to \mathrm{Hom}(S, Q) =
(\mathfrak{g}/\mathfrak{p})\big/ \big(\mathrm{Hom}(S,V) \oplus \mathrm{Hom}(V/S,Q) \big )$$
is $P$-equivariant.
By simply noting that the dimensions are correct, we see that it
is possible to view $X_{\sigma(01)}$ as a subspace of $\mathrm{Hom}(V/S,Q)$,
$X_{\sigma(12)} \subset \mathrm{Hom}(S,V)$, and
$X_{\sigma(02)} \subset \mathrm{Hom}(S,Q)$. However, better than this,
we have the following result.
\begin{proposition} We have the following identifications.
\begin{enumerate}
\item $Y_\sigma \cap \mathrm{Hom}(V/S,Q) = X_{\sigma(01)}$.
\item $Y_\sigma \cap \mathrm{Hom}(S,V) = X_{\sigma(12)}$.
\item $\Pi_{\mathrm{Hom}(S,Q)}(Y_\sigma) = X_{\sigma(02)}$.
\end{enumerate}
Moreover, if $Y_{\sigma^i}$, $i \in \{1, \ldots, s\}$
are in general position,
then so are $X_{\sigma^i(01)}$, $X_{\sigma^i(12)}$,
and $X_{\sigma^i(02)}$.
\end{proposition}
\begin{proof}
Since the spaces $\mathrm{Hom}(V/S,Q), \mathrm{Hom}(S,V) \subset \mathfrak{g}/\mathfrak{p}$ are
$P$-invariant, it suffices to check statements 1--3 for $\hat{Y}_\sigma$.
For statement 1, let $\zeta_1< \cdots < \zeta_{n-r}$ and
$\zeta_{n-r+1} < \cdots <\zeta_{n-d}$ denote the
positions of the {\rm `$0$'}s and {\rm `$1$'}s respectively for $\sigma(01)$, and
let $\lambda$ denote the partition corresponding to $\sigma(01)$.
Note that $j < \lambda_k$ if and only if
$\zeta_j < \zeta_k$ if and only if $\eta_j < \eta_k$.
Thus we have
$$\hat{Y}_\sigma \cap \mathrm{Hom}(V/S,Q) =
\langle E_{jk}\ |\
\mbox{$1 \leq j \leq n-r$, $n-r < k \leq n-d$, and $\eta_j < \eta_k$}
\rangle$$
On the other hand,
\begin{align*}
\hat{X}_{\sigma(01)} &=
\langle E_{jk}\ |\ E_{jk}((F^{\mathrm{std}}_{V/S})_l) \subset
(F^{\mathrm{std}}_Q)_{\lambda_l},\ \forall l \rangle \\
&=
\langle E_{jk}\ |\ E_{jk}((F^{\mathrm{std}}_{V/S})_k) \subset
(F^{\mathrm{std}}_Q)_{\lambda_k} \rangle \\
&=
\langle E_{jk}\ |\ (F^{\mathrm{std}}_Q)_j \subset
(F^{\mathrm{std}}_Q)_{\lambda_k} \rangle \\
&=
\langle E_{jk}\ |\ j < \lambda_k \rangle \\
&=
\langle E_{jk}\ |\ \zeta_j < \zeta_k \rangle
\end{align*}
To see the genericity, note that $P$ acts on $\mathrm{Hom}(V/S,Q)$,
and contains the subgroup $GL(V/S) \times GL(Q)$ (acting in the
usual way). Thus if $Y_{\sigma^i}$ are generic, so are
$X_{\sigma^i(01)}$.
A similar argument holds for $\sigma(12)$ and $\sigma(02)$.
\end{proof}
Even more importantly, we can sometimes use the transversality
(or lack thereof) for these Grassmannian Schubert problems to deduce
transversality in the two-step flag manifold.
\begin{lemma}
\label{lem:splitting}
Let $\sigma^1, \ldots, \sigma^s$ be $012$-strings, giving rise
to Schubert classes on the two-step flag manifold $Fl(d,r,\mathbb{C}^n)$.
\begin{enumerate}
\item
If $\bigcap_{i=1}^s X_{\sigma^i(02)}$ is non-transverse, then
$\bigcap_{i=1}^s Y_{\sigma^i}$ is non-transverse.
\item
If $\bigcap_{i=1}^s X_{\sigma^i(01)}$,
$\bigcap_{i=1}^s X_{\sigma^i(12)}$, and
$\bigcap_{i=1}^s X_{\sigma^i(02)}$ are all transverse, then
$\bigcap_{i=1}^s Y_{\sigma^i}$ is transverse.
\end{enumerate}
\end{lemma}
\begin{proof}
These follow from elementary facts about intersections in quotient
spaces and subspaces.
\end{proof}
One important special case of this lemma is the following, which
is a variant on the vanishing criterion in \cite{P1}.
\begin{corollary}
\label{cor:almosthorn}
If $\sum_{i=1}^s \dim X_{\sigma^i(02)} < (s-1)d(n-r)$ then
$\bigcap_{i=1}^s Y_{\sigma^i}$ is non-transverse.
\end{corollary}
\begin{proof}
If $\sum_{i=1}^s \dim X_{\sigma^i(02)} < (s-1)d(n-r)$ then
$\bigcap_{i=1}^s X_{\sigma^i(02)}$ is non-transverse for dimensional
reasons.
\end{proof}
\subsubsection{The strings $\sigma[1]$ and $\sigma[2]$}
We can view $\mathrm{Hom}(V,Q)$ and $\mathrm{Hom}(S, \mathbb{C}^n/S)$ as quotient
spaces of $\mathfrak{g}/\mathfrak{p}$ as well: $\mathrm{Hom}(V,Q) = (\mathfrak{g}/\mathfrak{p})\big/\mathrm{Hom}(S,V)$
and $\mathrm{Hom}(S, \mathbb{C}^n/S) = (\mathfrak{g}/\mathfrak{p}) \big/\mathrm{Hom}(V/S,Q)$.
Let $\Pi_{\mathrm{Hom}(V,Q)}$ and $\Pi_{\mathrm{Hom}(S, \mathbb{C}^n/S)}$ denote the
projection maps.
Counting {\rm `$1$'}s and {\rm `$0$'}s, we see that $X_{\sigma[2]}$ and $X_{\sigma[1]}$
are of the correct dimensions to view
$X_{\sigma[2]} \subset \mathrm{Hom}(V,Q)$ and
$X_{\sigma[1]} \subset \mathrm{Hom}(S, \mathbb{C}^n/S)$.
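For reference, the string operations used throughout can be sketched in Python. The conventions below are our reading of the text's notation (and are assumptions, not quoted definitions): $\sigma(ab)$ keeps only the letters $a,b$ and renames them $0,1$, while $\sigma[1]$ marks the positions of the {\rm `$2$'}s and $\sigma[2]$ marks the positions of the {\rm `$1$'}s and {\rm `$2$'}s.

```python
def restrict(sigma, a, b):
    """sigma(ab): keep only the letters a and b, renaming a -> '0', b -> '1'."""
    return "".join("0" if c == a else "1" for c in sigma if c in (a, b))

def bracket(sigma, k):
    """sigma[k]: 01-string with '1' exactly where sigma_l >= 3 - k, so
    sigma[1] marks the '2's and sigma[2] marks the '1's and '2's."""
    return "".join("1" if int(c) >= 3 - k else "0" for c in sigma)
```

For $\sigma = 021010201$ these conventions give $\sigma(12) = 10010$, $\sigma(02) = 010010$, $\sigma[2] = 011010101$ and $\sigma[1] = 010000100$.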
\begin{proposition}
$\Pi_{\mathrm{Hom}(V,Q)}(Y_\sigma)$ is an $X_{\sigma[2]}$, and
$\Pi_{\mathrm{Hom}(S,\mathbb{C}^n/S)}(Y_\sigma)$ is an $X_{\sigma[1]}$; however,
they are not in general position.
\end{proposition}
\begin{proof}
Let $F$ be a flag such that $(S,V) \in \Omega^F_{\sigma}$. Then
$V \in \Omega^F_{\sigma[2]}$, and $S \in \Omega^F_{\sigma[1]}$.
It follows that $Y_\sigma = \mathfrak{b}(F)/\mathfrak{p}$ maps surjectively
onto $X_{\sigma[2]} = \mathfrak{b}(F)/\mathop{\mathrm{Stab}}(V)$ and
$X_{\sigma[1]} = \mathfrak{b}(F)/\mathop{\mathrm{Stab}}(S)$. The quotient maps are
precisely $\Pi_{\mathrm{Hom}(V,Q)}$ and $\Pi_{\mathrm{Hom}(S, \mathbb{C}^n/S)}$.
They are not in general position, since the flag $F$ is
almost generic for the two-step flag
$\{0\} \subset S \subset V \subset \mathbb{C}^n$, but not
necessarily almost generic for $V$ or $S$.
\end{proof}
We can now tie this to the material in section
\ref{sec:grassmannianhoms}.
\begin{proposition}
Fix a flag $F$ such that $(S,V) \in \Omega^{F}_{\sigma}$.
View $X_{\sigma[2]} \subset \mathrm{Hom}(V,Q)$ and $X_{\sigma(01)} \subset \mathrm{Hom}(V/S,Q)$,
as being tangent spaces to Schubert varieties relative the flag $F$
(or its induced flags). Then
$X_{\sigma[2]}\big / S = X_{\sigma(01)}$.
\end{proposition}
\begin{proof}
Relative to the flag $F$,
$X_{\sigma[2]}\big / S = \Pi_{\mathrm{Hom}(V,Q)}(Y_\sigma) \cap \mathrm{Hom}(V/S,Q)$ and
$X_{\sigma(01)} = Y_\sigma \cap \mathrm{Hom}(V/S,Q)$. These are the same,
since $\Pi_{\mathrm{Hom}(V,Q)}$ restricted to $\mathrm{Hom}(V/S,Q)$ is the identity map.
\end{proof}
\section{Proof of Horn's conjecture}
\label{sec:proof}
\subsection{The two-step flag manifold as a fibration}
Consider the map $q: Fl(d,r,\mathbb{C}^n) \to Gr(r,\mathbb{C}^n)$ defined by
$q(S,V) = V$. This is a fibration, and the fibre over $V$ is
simply the Grassmannian $Gr(d,V)$.
Schubert varieties behave well under this fibration. Let $F$ be
a flag on $\mathbb{C}^n$, and $\sigma$ a $012$-string representing a
Schubert class on $Fl(d,r, \mathbb{C}^n)$.
Then the restriction of $q$ to the Schubert cell $\Omega^F_\sigma$
is also a fibration. We have
$q(\Omega^F_\sigma) = \Omega^F_{\sigma[2]}$ and the fibre over $V$
is the Schubert cell $\Omega^{F_V}_{\sigma(12)}$.
Thus from this picture, we see that
intersection of Schubert varieties in
$Fl(d,r, \mathbb{C}^n)$ is related to two Schubert intersection problems in
Grassmannians: one on the base space $Gr(r,\mathbb{C}^n)$ and one on
the fibre $Gr(d,V)$. The precise relationship is as follows:
\begin{lemma}
Let $F^1, \ldots, F^s$ be generic flags on $\mathbb{C}^n$.
Assume that $S^{\sigma^1(12)} \cdots S^{\sigma^s(12)} \neq 0 \in H^*(Gr(d,r))$.
Then $\bigcap_{i=1}^s \Omega^{F^i}_{\sigma^i}$ has a point of intersection
if and only if $\bigcap_{i=1}^s \Omega^{F^i}_{\sigma^i[2]}$ has
a point of intersection.
\end{lemma}
\begin{proof}
The ``only if'' direction is clear: if
$\bigcap_{i=1}^s \Omega^{F^i}_{\sigma^i}$ is non-empty, then so is
its image under $q$.
Thus suppose
$I = \bigcap_{i=1}^s \Omega^{F^i}_{\sigma^i[2]}$ is non-empty.
The condition $S^{\sigma^1(12)} \cdots S^{\sigma^s(12)} \neq 0 \in H^*(Gr(d,r))$ is equivalent to
the Schubert problem in the fibre
$\bigcap_{i=1}^s \Omega^{F_V^i}_{\sigma^i(12)}$ having a point of
intersection for any $V$ which induces generic flags $F^i_V$. But
by remark \ref{rmk:genericinducedflags}, a generic $V \in I$ has
this property. Thus there is a point in
$\bigcap_{i=1}^s \Omega^{F^i}_{\sigma^i}$ over a generic point in
$I$. In particular this intersection is non-empty.
\end{proof}
This simple lemma gives us a way (in fact many different ways) of
turning a Grassmannian Schubert intersection problem into a
Schubert intersection problem
on a two-step flag manifold. Note that given any $01$-strings
$\tau$ and $\rho$ with the correct number of {\rm `$0$'}s and {\rm `$1$'}s, we can
always construct a $012$-string $\sigma$ such that $\sigma[2]=\tau$ and
$\sigma(12) = \rho$.
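The construction is mechanical: replace the $k$-th {\rm `$1$'} of $\tau$ by a {\rm `$2$'} exactly when the $k$-th letter of $\rho$ is a {\rm `$1$'}. A Python sketch (our own illustrative helper) makes this explicit:

```python
def lift(tau, rho):
    """Lift the 01-string tau by rho: the k-th '1' of tau becomes a '2'
    exactly when the k-th letter of rho is '1', so that the resulting
    012-string sigma satisfies sigma[2] = tau and sigma(12) = rho.
    Assumes len(rho) equals the number of '1's in tau."""
    it = iter(rho)
    return "".join(c if c == "0" else ("2" if next(it) == "1" else "1")
                   for c in tau)
```

For instance, lifting $100110$ by $101$ gives $200120$, and lifting $010011$ by $101$ gives $020012$, matching the liftings computed in the examples at the end of this section.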
\begin{definition}
We call $(\sigma^1, \ldots, \sigma^s)$ a {\bf lifting} of
$({\tau^1}, \ldots, {\tau^s})$, if $\sigma^i[2] = \tau^i$,
$\sigma^i(12) = \rho^i$ and
$S^{\rho^1} \cdots S^{\rho^s} \neq 0 \in H^*(Gr(d,r))$.
\end{definition}
Thus we see that there is a lifting corresponding to each
$s$-tuple $(\rho^1, \ldots, \rho^s)$ such that
$S^{\rho^1} \cdots S^{\rho^s} \neq 0 \in H^*(Gr(d,r))$.
If we allow $d$ to vary between $1$ and $r$, we also have one Horn
inequality for each such tuple. This is no
coincidence: we are about to see that each Horn inequality
arises from applying corollary \ref{cor:almosthorn} to a lifting.
\subsection{Necessity of the Horn inequalities}
We show that each lifting of a Grassmannian Schubert problem gives
rise to a Horn inequality. This argument will establish the
necessity of the Horn inequalities.
Suppose $\lambda^1, \ldots, \lambda^s \in \Lambda(r,n-r)$, with
$$S^{\lambda^1} \cdots S^{\lambda^s} \neq 0 \in H^*(Gr(r,\mathbb{C}^n))$$
Then $\bigcap_{i=1}^s \Omega^{F^i}_{\lambda^i}$ contains a point
of intersection. Thus so does any lifting of this Schubert problem.
Let $(\sigma^1, \ldots, \sigma^s)$ be such a lifting, say, corresponding
to an $s$-tuple of $01$-strings $(\rho^1, \ldots, \rho^s)$.
Let $\mu^i$ denote the partition corresponding to $\rho^i$.
Since
$\bigcap_{i=1}^s \Omega^{F^i}_{\sigma^i}$ is non-empty for generic
flags, the intersection of tangent spaces
$\bigcap_{i=1}^s Y_{\sigma^i}$ must be transverse. By
corollary \ref{cor:almosthorn} this means that
\begin{equation}
\label{eqn:althorn}
\sum_{i=1}^s \dim X_{\sigma^i(02)} \geq (s-1)d(n-r)
\end{equation}
is a necessary inequality.
We now compute $\dim X_{\sigma(02)}$ (for ease of notation we are fixing
$i$ and omitting the superscript $i$ from $\sigma, \rho, \mu, \lambda$).
Let $z(j)$ denote the number
of {\rm `$0$'}s in the list $\sigma_1, \cdots, \sigma_{j-1}$.
\begin{align*}
\dim X_{\sigma(02)} &= \#\{(j',j)\ |\ j'<j,\ \sigma_{j'}=0,\ \sigma_j=2\} \\
&= \sum_{j\ |\ \sigma_j =2} \lambda_{j - z(j)} \\
&= \sum_{k=1}^d \lambda_{\text{position of $k^{\rm th}$ {\rm `$1$'} in $\rho$}} \\
&= \sum_{k=1}^d \lambda_{\mu_k+k}
\end{align*}
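This chain of equalities can be verified mechanically. The following Python sketch (the helper names are ours, for illustration only) checks it on the string $\sigma = 021010201$ of example \ref{ex:Y}:

```python
def dim_X02(sigma):
    """Count the pairs j' < j with sigma_{j'} = '0' and sigma_j = '2'."""
    zeros = total = 0
    for c in sigma:
        if c == "0":
            zeros += 1
        elif c == "2":
            total += zeros
    return total

def partition_of(tau):
    """Increasing partition of a 01-string: the k-th part is the number
    of '0's preceding the k-th '1'."""
    parts, zeros = [], 0
    for c in tau:
        if c == "0":
            zeros += 1
        else:
            parts.append(zeros)
    return parts

sigma = "021010201"
lam = partition_of("".join("1" if c in "12" else "0" for c in sigma))  # from sigma[2]
rho = "".join("1" if c == "2" else "0" for c in sigma if c != "0")     # sigma(12)
positions = [l + 1 for l, c in enumerate(rho) if c == "1"]             # mu_k + k
assert dim_X02(sigma) == sum(lam[p - 1] for p in positions) == 4
```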
Rewriting (\ref{eqn:althorn}) based on this calculation,
we obtain
$$
\sum_{i=1}^s \sum_{k=1}^d \lambda^i_{\mu^i_k +k} \geq (s-1)d(n-r),
$$
which is exactly the inequality (\ref{eqn:hornineq}).
Thus we see that the inequality (\ref{eqn:althorn}) is precisely the
Horn inequality corresponding to $(\mu_1, \ldots, \mu_s)$.
\subsection{Sufficiency of the Horn inequalities}
To prove sufficiency of the Horn inequalities, we must show that
whenever $S^{\lambda^1} \cdots S^{\lambda^s} = 0 \in H^*(Gr(r,\mathbb{C}^n))$,
there is a Horn inequality violated by the $\lambda^i$. We
have already seen that a violation of the Horn inequalities gives
rise to a non-transverse intersection
$\bigcap_{i=1}^s X_{\sigma^i(02)}$ for
a lifting $(\sigma^1, \ldots, \sigma^s)$ of
$(\lambda^1, \ldots, \lambda^s)$. We would now like to prove the
reverse: a nontransverse intersection
$\bigcap_{i=1}^s X_{\sigma^i(02)}$ leads to a violation of a Horn
inequality. This requires an inductive argument.
\subsubsection{The inductive step}
\begin{lemma}
\label{lem:induction}
Let $\lambda^1, \ldots, \lambda^s \in \Lambda(r,n-r)$, and
let $(\sigma^1, \ldots, \sigma^s)$ be a lifting of
$(\lambda^1, \ldots, \lambda^s)$ to Schubert varieties in
$Fl(d,r,\mathbb{C}^n)$. Let $\mu^i$
be the partition corresponding to $\sigma^i(02)$.
If $\lambda^1, \ldots, \lambda^s$ satisfy all of their
Horn inequalities, then $\mu^1, \ldots, \mu^s$ must
satisfy all of theirs.
\end{lemma}
This lemma allows us to argue by induction. Suppose the
Horn conditions are sufficient for all integers $d$, with $d<r$.
Suppose $S^{\lambda^1} \cdots S^{\lambda^s} = 0 \in H^*(Gr(r,\mathbb{C}^n))$.
We show that one of two things must be true.
\begin{enumerate}
\item The product is zero in cohomology for dimensional reasons.
\begin{center} \emph{or} \end{center}
\item There is a lifting of $(\lambda^1, \ldots, \lambda^s)$
to $(\sigma^1, \ldots, \sigma^s)$ such that the intersection
$\bigcap_{i=1}^s X_{\sigma^i(02)}$ is non-transverse.
\end{enumerate}
In the first case, the Horn inequality for $d=r$ is violated.
In the second case, our inductive hypothesis tells us that
some Horn inequality is violated by
$\sigma^1(02), \ldots, \sigma^s(02)$. So by
lemma \ref{lem:induction}, there must be some Horn inequality violated
by $\lambda^1, \ldots, \lambda^s$.
\begin{proof}
Let $V \subset \mathbb{C}^n$, with $\dim V = r$ and
fix flags $F^1, \ldots, F^s$ on $\mathbb{C}^n$ which are almost generic
for $V$.
Let $S \subset V$, with $\dim S = d$, and
$S$ in Schubert position $\sigma^i(12)$ with respect to
the induced flag $F^i_V$.
We work inside the two-step flag manifold $Fl(d',d,V)$.
Let $(\chi^1, \ldots, \chi^s)$ be a lifting of
$(\sigma^1(12), \ldots, \sigma^s(12))$. Of course, since
this is a lifting,
$$S^{\chi^1(12)} \cdots S^{\chi^s(12)} \neq 0 \in H^*(Gr(d',S)).$$
Also, the intersection of
Schubert varieties
$\bigcap_{i=1}^s \Omega^{F^i_V}_{\sigma^i(12)}$ is non-empty, since
it contains the point $S$. Thus
$\bigcap_{i=1}^s \Omega^{F^i_V}_{\chi^i}$ is non-empty.
Now we consider the fibration $p:Fl(d',d,V) \to Gr(d',V)$. As
in the case with $q$, the fibration $p$ maps
Schubert varieties to Schubert varieties. The image is
$p(\Omega^{F}_\chi) = \Omega^F_{\chi[1]}$. Thus we see that
$$S^{\chi^1[1]} \cdots S^{\chi^s[1]} \neq 0 \in H^*(Gr(d',V)).$$
We now check that the Horn inequality for $\mu^1, \ldots, \mu^s$
corresponding to $(\chi^1(12), \ldots, \chi^s(12))$ is identical
to the Horn inequality for $\lambda^1, \ldots, \lambda^s$, corresponding
to $(\chi^1[1], \ldots, \chi^s[1])$. The two inequalities are
\begin{equation}
\label{eqn:lambdaeqn}
\sum_{i=1}^s \sum_{k=1}^{d'}
\lambda^i_{\text{position of the $k^{\rm th}$ {\rm `$1$'} in $\chi^i[1]$}}
\geq (s-1)d'(n-r)
\end{equation}
and
\begin{equation}
\label{eqn:mueqn}
\sum_{i=1}^s \sum_{k=1}^{d'}
\mu^i_{\text{position of the $k^{\rm th}$ {\rm `$1$'} in $\chi^i(12)$}}
\geq (s-1)d'(n-r).
\end{equation}
Now, $\lambda^i_l = \mu^i_{l'}$, where $l-l'$ is the number of
{\rm `$0$'}s in the list $\chi^i_1, \cdots, \chi^i_{l-1}$; thus $l'$ is the
position in $\chi^i(12)$ corresponding to the position $l$ in $\chi^i$.
But $\chi^i[1]_l = 1$ if and only if $\chi^i_l = 2$ if and only if
$\chi^i(12)_{l'} = 1$. Thus we see that
$$
\lambda^i_{\text{position of the $k^{\rm th}$ {\rm `$1$'} in $\chi^i[1]$}}
=\mu^i_{\text{position of the $k^{\rm th}$ {\rm `$1$'} in $\chi^i(12)$}}
$$
and the two Horn inequalities (\ref{eqn:lambdaeqn}) and
(\ref{eqn:mueqn}) are the same.
Since every Horn inequality for
$\mu^1, \ldots, \mu^s$ arises in this way, if
$(\lambda^1, \ldots, \lambda^s)$ satisfies all its Horn inequalities,
then so does $(\mu^1, \ldots, \mu^s)$.
\end{proof}
\begin{remark} It is perhaps most natural to view this lemma
as a statement about the three-step flag manifold
$Fl(d',d,r,\mathbb{C}^n)$. We take a Schubert problem on $Gr(r,\mathbb{C}^n)$
and lift it to $Fl(d,r,\mathbb{C}^n)$. We then lift this again, to
a problem on $Fl(d',d,r,\mathbb{C}^n)$, given by $0123$-strings
$\omega^1, \ldots, \omega^s$. We can then interpret both
inequalities (\ref{eqn:lambdaeqn}) and (\ref{eqn:mueqn}), as the
statement $\sum_{i=1}^s \dim X_{\omega^i(03)} \geq (s-1)d'(n-r)$,
which can be viewed as a Horn inequality for both $\lambda^i$
and $\mu^i$.
\end{remark}
\subsubsection{Proof of sufficiency}
We now have all the ingredients in place to prove the sufficiency
of the Horn conditions.
\begin{proof}[Proof of Horn's conjecture]
As always, let $V \subset \mathbb{C}^n$, with $\dim V = r$, and fix
almost generic flags $F^i$.
Consider a generic $\phi \in \bigcap_{i=1}^s X_{\lambda^i}$. Let
$S$ be the kernel of $\phi$, say in Schubert position $\rho^i$ with
respect to the generic induced flags $F^i_V$ on $V$. Let
$(\sigma^1, \ldots, \sigma^s)$ be the lifting of
$(\lambda^1, \ldots, \lambda^s)$ by $(\rho^1, \ldots, \rho^s)$.
Assume that
$S^{\lambda^1} \cdots S^{\lambda^s} = 0 \in H^*(Gr(r,\mathbb{C}^n))$.
Thus
$\bigcap_{i=1}^s Y_{\sigma^i}$
is non-transverse.
There are two possibilities. Either $S=V$ or $\dim S < \dim V$.
In the first case, we must have $\phi=0$, which means that
$\bigcap_{i=1}^s X_{\lambda^i} = \{0\}$ (otherwise, $\phi=0$ would
not be a generic choice). If
$\sum_{i=1}^s \mathop{\mathrm{codim}} X_{\lambda^i} = r(n-r)$ then this is a transverse
intersection, contradicting $S^{\lambda^1} \cdots S^{\lambda^s} = 0$.
Thus $\sum_{i=1}^s \mathop{\mathrm{codim}} X_{\lambda^i} > r(n-r)$, which violates
the Horn inequality for $d=r$.
If $\dim S< \dim V$,
we note the following facts:
$$\bigcap_{i=1}^s X_{\sigma^i(12)} =
\bigcap_{i=1}^s X_{\rho^i}$$
is a transverse intersection, since the Schubert varieties
$\Omega^{F^i_V}_{\rho^i}$ are in general position, and contain
$S$ as a point of intersection. Also
$$\bigcap_{i=1}^s X_{\sigma^i(01)} =
\bigcap_{i=1}^s X_{\sigma^i[2]}\big / S
= \bigcap_{i=1}^s X_{\lambda^i}\big / S$$
is a transverse intersection, by corollary
\ref{cor:transversereduction}.
If $\bigcap_{i=1}^s X_{\sigma^i(02)}$ is also transverse, then
by lemma \ref{lem:splitting}, $\bigcap_{i=1}^s Y_{\sigma^i}$ would
be transverse. Thus it is not a transverse intersection, and
so by the inductive argument, some Horn inequality is violated.
\end{proof}
\subsection{Examples}
The proof of sufficiency describes a procedure for finding a Horn
inequality which is violated, when
$S^{\lambda^1} \cdots S^{\lambda^s} = 0.$
We now give some examples to show what happens in this process,
in the simplest case, $s=2$. (The sufficiency of the
Horn inequalities is an easy fact when $s=2$; nevertheless, this case
illustrates the method of the proof fairly
adequately.)
\begin{example}
\label{ex:simple}
Let
$$\lambda^1
= {\bf 0} \leq 0 \leq 3 \leq 3 \leq {\bf 4}$$
and
$$\lambda^2
= {\bf 0} \leq 1 \leq 3 \leq 3 \leq {\bf 4}$$
As in example \ref{ex:oppositeintersection}, we can illustrate
$X_{\lambda^1}$ and $X_{\lambda^2}$ in general position as
$$
X_{\lambda^1} =
\left (
\begin{matrix}
0 & * & * \\
0 & * & * \\
0 & * & * \\
0 & 0 & 0
\end{matrix}
\right )
\qquad
X_{\lambda^2} =
\left (
\begin{matrix}
0 & 0 & 0 \\
+ & + & 0 \\
+ & + & 0 \\
+ & + & +
\end{matrix}
\right )
$$
We'll write these both in a single diagram as
$$
\begin{tabular}{ccc} \hline
\multicolumn{1}{|c}{} &\multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$_+$} \\\hline
\end{tabular}
$$
Take a generic point $\phi \in X_{\lambda^1} \cap X_{\lambda^2}$,
e.g.
$$ \phi =
\left (
\begin{matrix}
0 & 0 & 0 \\
0 & 3 & 0 \\
0 & 8 & 0 \\
0 & 0 & 0 \\
\end{matrix}
\right )
$$
The kernel of $\phi$ has Schubert position $(101, 101)$. We
use this Schubert position to lift $(\lambda^1, \lambda^2)$:
$$
\begin{tabular}{ccc} \hline
\multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$_+$} \\\hline
& \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$^*$} \\\cline{2-3}
\end{tabular}
$$
The upper right block is non-transverse for dimensional reasons.
Thus we are led to consider the Horn inequality corresponding to
$(101,101)$,
i.e.
$$\lambda^1_1 + \lambda^1_3 + \lambda^2_1 + \lambda^2_3 \geq 8.$$
The positions of the {\rm `$1$'}s in $(101,101)$ are the indices which
appear on the left hand side. We
see that this inequality is violated by $(\lambda^1, \lambda^2)$,
as
$$\lambda^1_1 + \lambda^1_3 + \lambda^2_1 + \lambda^2_3 =
0+3+1+3 <8.$$
\end{example}
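The arithmetic of this example is small enough to check by machine; the following snippet (values transcribed from the example above) confirms the violation:

```python
# Partitions lambda^1 = (0,3,3) and lambda^2 = (1,3,3), read off from
# the chains above; the Horn inequality for (101, 101) uses indices 1, 3.
lam1, lam2 = (0, 3, 3), (1, 3, 3)
indices = (1, 3)                      # positions of the '1's in 101
lhs = sum(lam1[i - 1] for i in indices) + sum(lam2[i - 1] for i in indices)
assert lhs == 7 and lhs < 8           # the required bound >= 8 fails
```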
\begin{example}
Let $$\lambda^1
= {\bf 0} \leq 0 \leq 2 \leq 3 \leq 3 \leq 3 \leq 4 \leq {\bf 4}$$
and
$$\lambda^2
= {\bf 0} \leq 1 \leq 1 \leq 3 \leq 3 \leq 3 \leq 3 \leq {\bf 4}.$$
We illustrate $X_{\lambda^1}$ and $X_{\lambda^2}$ as:
$$
\begin{tabular}{cccccc} \hline
\multicolumn{1}{|c}{} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$^*_+$} \\\hline
\end{tabular}
$$
Let $\phi \in X_{\lambda^1} \cap X_{\lambda^2}$ be a generic
element, e.g.
$$ \phi =
\left (
\begin{matrix}
0 & 0 & 0 & 0 & 0 & 0 \\
0 & 5 & 6 & 7 & 0 & 0 \\
0 & 0 & 8 & 9 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1
\end{matrix}
\right )
$$
The kernel of $\phi$ has Schubert position $(100110,010011)$. We
use this Schubert position to lift $(\lambda^1, \lambda^2)$:
$$
\begin{tabular}{cccccc} \hline
\multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{} &\multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{$_+$} &\multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c}{$_+$} &\multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$}& \multicolumn{1}{|c}{$_+$} &\multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$_+$} \\\hline
& & & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\cline{4-6}
& & & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\cline{4-6}
& & & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$_+$} \\\cline{4-6}
\end{tabular}
$$
The upper left and lower right blocks have a transverse intersection.
However, the upper right block
$$
\begin{tabular}{ccc} \hline
\multicolumn{1}{|c}{} &\multicolumn{1}{|c}{$^*$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$^*_+$} & \multicolumn{1}{|c|}{$^*$} \\\hline
\multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c}{$_+$} & \multicolumn{1}{|c|}{$_+$} \\\hline
\end{tabular}
$$
does not. This block represents $X_{{\lambda^1}'} \cap X_{{\lambda^2}'}$
for
$${\lambda^1}'
= {\bf 0} \leq 0 \leq 3 \leq 3 \leq {\bf 4}$$
$${\lambda^2}'
= {\bf 0} \leq 1 \leq 3 \leq 3 \leq {\bf 4}$$
In example \ref{ex:simple} we found that the Horn inequality
${\lambda^1}'_1 + {\lambda^1}'_3 + {\lambda^2}'_1 + {\lambda^2}'_3 \geq 8$,
which corresponds to $(101,101)$, is violated.
To find
a corresponding Horn inequality which is violated by
$(\lambda^1, \lambda^2)$ we lift the Schubert position of
$\ker \phi$ by $(101, 101)$ to get $(200120, 020012)$. The positions
of the {\rm `$2$'}s give the indices which appear in the relevant inequality.
In this case we find that the Horn inequality
$$\lambda^1_1 + \lambda^1_5 + \lambda^2_2 + \lambda^2_6 \geq 8$$
is violated. Indeed
$$\lambda^1_1 + \lambda^1_5 + \lambda^2_2 + \lambda^2_6 =
0+3+1+3 <8.$$
\end{example}
https://arxiv.org/abs/1508.06985 | Bilevel Polynomial Programs and Semidefinite Relaxation Methods | A bilevel program is an optimization problem whose constraints involve another optimization problem. This paper studies bilevel polynomial programs (BPPs), i.e., all the functions are polynomials. We reformulate BPPs equivalently as semi-infinite polynomial programs (SIPPs), using Fritz John conditions and Jacobian representations. Combining the exchange technique and Lasserre type semidefinite relaxations, we propose numerical methods for solving both simple and general BPPs. For simple BPPs, we prove the convergence to global optimal solutions. Numerical experiments are presented to show the efficiency of proposed algorithms.

\section{Introduction}
We consider the {\it bilevel polynomial program} (BPP):
\begin{equation} \label{bilevel:pp}
(P): \left\{
\begin{aligned}
F^* := \min\limits_{x\in \mathbb{R}^n,y\in \mathbb{R}^p}&\ F(x,y) \\
\text{s.t.} \quad &\ G_i(x,y)\geq 0, \, i=1,\cdots,m_1, \\
& \ y\in S(x),
\end{aligned}
\right.
\end{equation}
where $F$ and all $G_i$ are real polynomials
in $(x,y)$, and $S(x)$ is the set of global minimizers
of the following lower level program,
which is parameterized by $x$,
\begin{equation} \label{df:P(x)}
\quad
\min\limits_{ z\in \mathbb{R}^p} \quad f(x,z) \quad
\text{s.t.} \quad g_j(x,z) \geq 0,\, j=1,\cdots, m_2.
\end{equation}
In \reff{df:P(x)}, $f$ and each $g_j$ are polynomials in $(x,z)$.
For convenience, denote
\[
{
Z(x) :=\{ z \in \mathbb{R}^p \mid g_j(x,z) \geq 0,\, j=1,\cdots, m_2 \},
}
\]
the feasible set of \reff{df:P(x)}.
The inequalities $G_i(x,y) \geq 0$ are called upper (or outer) level constraints,
while $g_j(x,z)\geq 0$ are called lower (or inner) level constraints.
When $m_1=0$ (resp., $m_2=0$),
there are no upper (resp., lower) level constraints.
Similarly, $F(x,y)$ is the upper level (or outer) objective,
and $f(x,z)$ is the lower level (or inner) objective.
Denote the set
\begin{equation} \label{set:U}
\mathcal{U}: =\left\{(x,y)
\left|\begin{array}{c}
G_i(x,y)\geq 0 \, (i=1,\cdots, m_1), \, \\
g_j(x,y) \geq 0\, (j=1,\cdots, m_2)
\end{array} \right.
\right\}.
\end{equation}
Then the feasible set of $(P)$ is the intersection
\begin{equation} \label{set:feasible:P}
\mathcal{U}\cap \{(x,y): y\in S(x)\}.
\end{equation}
Throughout the paper, we assume that for all $(x,y)\in \mathcal{U}$,
$S(x)\neq \emptyset$ and consequently the feasible set of $(P)$ is nonempty.
When the lower level feasible set $Z(x)\equiv Z$ is independent of $x$,
we call the problem $(P)$ a {\it simple bilevel polynomial program} (SBPP).
Despite the name, the SBPP is not mathematically simple
but actually quite challenging.
SBPPs have important applications in economics,
e.g., the moral hazard model of the principal-agent problem \cite{mirrlees1999theory}.
When {the feasible set of the lower level program} $Z(x)$ depends on $x$, the problem $(P)$ is called a
{\it general bilevel polynomial program} (GBPP). GBPP
is also an effective modelling tool for many applications in various fields;
see e.g. \cite{Dempebook,Dempebook2} and the references {therein}.
\subsection{Background}
The bilevel program is a class of difficult optimization problems.
Even for the case where all the functions are linear,
the problem is NP-hard \cite{ben1990computational}.
A general approach for solving bilevel programs is to
transform them into single level optimization problems.
A commonly used technique is to replace the {lower level program}
by its Karush-Kuhn-Tucker (KKT) conditions.
When the lower level program {involves} inequality constraints,
the reduced problem becomes a so-called
{\it mathematical program with equilibrium constraints} (MPEC) \cite{luo1996mathematical,opac-b1102982}.
If the {lower level program} is nonconvex,
the optimal solution of a bilevel program may not even be a stationary point of
the reduced single level optimization {problem} by using the KKT conditions.
This was shown by a counterexample due to Mirrlees~\cite{mirrlees1999theory}.
Moreover, even if the lower level {program} is convex,
{it was shown in
\cite{Dempe2012MP} that a local solution to
the MPEC obtained by replacing the lower level program
by its KKT conditions may not be a local solution to
the original bilevel program.
Recently, \cite{allende2013solving} proposed to replace the lower level program with its Fritz John conditions instead of its KKT conditions. However, it was shown in \cite{Dempe2016} that {the} same difficulties remain, i.e., solutions to the MPEC obtained by replacing the lower level program by its Fritz John conditions may
{not be} the solutions to the original bilevel program.}
An alternative approach for solving BPPs is to use the value function
\cite{outrata1990numerical,Jane1995},
which gives an equivalent reformulation.
However, the optimal solution of the bilevel
{program} may not be a stationary point of the value function reformulation.
To overcome this difficulty, {\cite{YeSIAM2010} proposed to combine
the KKT and the value function reformulations.}
Over the past two decades, many numerical algorithms were proposed
for solving bilevel programs. However, most of them assume that
the {lower level program} is convex, with few exceptions {\cite{SBLYe2013,Mitsos2006Thesis,
mitsos2008global,outrata1990numerical,xu2013smoothing,xu2014smoothing,XuYeZhang}.
In \cite{Mitsos2006Thesis,mitsos2008global},
an algorithm using the branch and bound in combination
with the exchange technique was proposed to find approximate global optimal solutions.
Recently, the smoothing techniques were used to
find stationary points of the valued function or
the combined reformulation of simple bilevel programs
\cite{SBLYe2013,xu2013smoothing,xu2014smoothing,XuYeZhang}. }
In general, it is quite difficult to find global {minimizers} of nonconvex optimization problems. However, when the functions are polynomials, there exists much work
on computing global optimizers,
by using Lasserre type semidefinite relaxations \cite{Las01}.
We refer to \cite{Las09,ML2014} for the recent work in this area.
Recently, Jeyakumar, Lasserre, Li and Pham \cite{Lasbilevel2015}
worked on simple bilevel polynomial programs.
When the lower level program \reff{df:P(x)} is convex for each fixed $x$,
they transformed \reff{bilevel:pp} into a single level polynomial program,
by using Fritz John conditions with multipliers
to replace the lower level program, and
solved it globally by Lasserre type relaxations.
When \reff{df:P(x)} is nonconvex for some $x$,
they proposed to approximate the value function of the lower level program
by a sequence of polynomials, to reformulate
\reff{bilevel:pp} by the value function approach
with the approximate lower level programs,
and to solve the resulting sequence of polynomial programs globally
by Lasserre type relaxations.
The work \cite{Lasbilevel2015} is very inspiring,
because polynomial optimization techniques were proposed to solve BPPs.
In this paper, we also use Lasserre type semidefinite relaxations
to solve BPPs, but we make different reformulations,
by using Jacobian representations
and the exchange technique in semi-infinite programming.
\subsection{From BPP to SIPP}
A bilevel {program} can be reformulated as a
{\it semi-infinite program} (SIP). Thus, the classical methods
(e.g., the exchange method {\cite{Infinitely1976,Hettich1993,WangSIPP2014}}) for SIPs can be applied to solve bilevel programs.
For ease of exposition, we consider SBPPs at the moment,
i.e., the feasible set $Z(x)\equiv Z$ in \reff{df:P(x)} is independent of $x$.
Before reformulating BPPs as SIPs, we first establish the following fact:
\begin{equation} \label{df:H(xyz)}
y\in S(x) \Longleftrightarrow y\in Z, \quad\, H(x,y,z)
\geq 0 \,\, (\forall~z\in Z),
\end{equation}
where $H(x,y,z) := f(x,z) - f(x,y)$.
Clearly, the ``$\Rightarrow$" direction is true.
Let us prove {the reverse direction}. Let $v(x)$ denote the value function:
\begin{equation} \label{def:v(x)}
v(x) \,:= \, \inf_{z\in Z } f(x,z).
\end{equation}
If $(x,y)$ satisfies the right hand side conditions in \reff{df:H(xyz)},
then
\[
\inf\limits_{z\in Z} \, H(x,y,z) =v(x) -f(x,y)\geq 0.
\]
Since $y\in Z$, we have
$v(x)-f(x,y)\leq 0$. Combining these two inequalities, we get
\[
v(x)={\inf_{z\in Z}} f(x,z) = f(x,y)
\]
and hence $ y\in S(x).$
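The fact \reff{df:H(xyz)} can be checked numerically on a toy instance (ours, not from the paper): take $f(x,z) = z^2 - 2xz$ and $Z = [-1,1]$, so that $S(x) = \{x\}$ for $|x| \leq 1$. The sketch below discretizes $Z$ by a fine grid as a crude stand-in for exact optimization and tests both directions of the equivalence at $x = 0.4$.

```python
# Toy lower level program: f(x,z) = z^2 - 2xz over Z = [-1,1];
# for |x| <= 1 its unique global minimizer is z = x.
f = lambda x, z: z * z - 2.0 * x * z
H = lambda x, y, z: f(x, z) - f(x, y)
Z = [i * 0.01 - 1.0 for i in range(201)]   # fine grid standing in for Z
v = lambda x: min(f(x, z) for z in Z)      # value function v(x)

x, y_opt, y_bad = 0.4, 0.4, -0.2
# y_opt is in S(x): H(x, y_opt, z) >= 0 on Z (up to rounding tolerance)
assert all(H(x, y_opt, z) >= -1e-9 for z in Z)
# y_bad is not in S(x): H is negative for some z in Z
assert any(H(x, y_bad, z) < 0.0 for z in Z)
# and then v(x) = f(x, y_opt), as in the argument above
assert abs(v(x) - f(x, y_opt)) < 1e-12
```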
By the fact \reff{df:H(xyz)}, the problem $(P)$ is equivalent to
\begin{equation} \label{bilevel:sipp:reform}
(\widetilde{P}): \left\{
\begin{array}{rl}
{F}^* := \min\limits_{x \in \mathbb{R}^n, \, y\in {Z}} & F(x,y) \\
\text{s.t.}& \ G_i(x,y)\geq 0,\ i=1,\dots, m_1, \\
& \ H(x,y,z)\geq 0,~\forall~ z\in Z.
\end{array}
\right.
\end{equation}
The problem $(\widetilde{P})$ is a semi-infinite polynomial program (SIPP),
if the set $Z$ is infinite.
Hence, the exchange method can be used to solve $(\widetilde{P})$.
Suppose $Z_k$ is a finite grid of $Z$.
Replacing $Z$ by $Z_k$ in $(\widetilde{P})$, we get:
\begin{equation} \label{pop:tilde(Pk)}
(\widetilde{P}_k): \left\{
\begin{aligned}
{F}^*_k := \min\limits_{x \in \mathbb{R}^n, \, y\in {Z} }&\ F(x,y) \\
\text{s.t.} &\ G_i(x,y)\geq 0,\ i=1,\dots, m_1, \\
&\ H(x,y,z)\geq 0, ~\forall~ z\in Z_k .
\end{aligned}
\right.
\end{equation}
{The feasible set of $(\widetilde{P}_k)$
contains that of $(\widetilde{P})$}. Hence,
\[
{F}^*_k \leq {F}^*.
\]
Since $Z_k$ is a finite set, $(\widetilde{P}_k)$
is a polynomial optimization problem. If, for some $Z_k$,
we can get an optimizer $(x^k,y^k)$ of $(\widetilde{P}_k)$ such that
\begin{equation} \label{v(xk)-f(xkyk)}
v(x^k) - f(x^k,y^k) \geq 0,
\end{equation}
then $y^k\in S(x^k)$ and $(x^k,y^k)$ is feasible for $(\widetilde{P})$.
In that case, $(x^k,y^k)$ must be a global optimizer of $(\widetilde{P})$.
Otherwise, if \reff{v(xk)-f(xkyk)} fails to hold,
then there exists $z^{k}\in Z$ such that
\[
f(x^k, z^k) - f(x^k, y^k) < 0.
\]
For such a case, we can construct the new grid set as
\[
Z_{k+1} := Z_k\cup \{z^{k}\},
\]
and then solve the new problem $(\widetilde{P}_{k+1})$
with the grid set $Z_{k+1}$. Repeating this process,
we can get an algorithm for solving $(\widetilde{P})$ approximately.
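To make the exchange process concrete, here is a minimal sketch on a toy instance (ours, not from the paper), in which brute-force grid search stands in for the polynomial optimization solvers: the upper objective is $F(x,y) = (x-0.5)^2 + (y-1)^2$, the lower objective is $f(x,z) = (z-x)^2$ with $Z = [-1,1]$, and the exact optimum is $x^* = y^* = 0.75$ with $F^* = 0.125$.

```python
# Toy SBPP: min F(x,y) = (x-0.5)^2 + (y-1)^2  s.t.  y in S(x),
# where S(x) = argmin_{z in [-1,1]} (z-x)^2.  Optimum: x = y = 0.75.
F = lambda x, y: (x - 0.5)**2 + (y - 1.0)**2
f = lambda x, z: (z - x)**2
H = lambda x, y, z: f(x, z) - f(x, y)

grid = [i * 0.25 - 1.0 for i in range(9)]     # coarse grid for x and y
Zfine = [i * 0.01 - 1.0 for i in range(201)]  # fine grid standing in for Z
eps, Zk = 1e-6, []                            # Z_0 is empty
for k in range(20):
    # solve the subproblem: minimize F subject to H(x,y,z) >= -eps, z in Zk
    cand = [(x, y) for x in grid for y in grid
            if all(H(x, y, z) >= -eps for z in Zk)]
    xk, yk = min(cand, key=lambda s: F(*s))
    z_new = min(Zfine, key=lambda z: f(xk, z))  # lower-level minimizer at xk
    if H(xk, yk, z_new) >= -eps:                # v(xk) - f(xk,yk) >= -eps
        break                                   # (xk,yk) is feasible, stop
    Zk.append(z_new)                            # exchange: enlarge the grid

print(xk, yk, F(xk, yk))  # -> 0.75 0.75 0.125
```

On this instance the loop stops after only two exchange steps; as discussed next, for harder problems the grid set typically needs to grow much larger before the subproblems approximate the SIPP well.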
How does the above approach work in computational practice?
Does it converge to global optimizers?
Each subproblem $(\widetilde{P}_k)$ is a polynomial optimization problem,
which is generally nonconvex.
Theoretically, it is NP-hard to solve polynomial optimization globally.
However, in practice, it can be solved successfully
by Lasserre type semidefinite relaxations (cf.~\cite{Las01,Las09}).
Recently, it was shown in \cite{NiefiniteLas} that
Lasserre type semidefinite relaxations are generally
tight for solving polynomial optimization problems.
About the convergence,
we can see that $\{ F_k^* \}$ is a sequence of monotonically
increasing lower bounds for the global optimal value $F^*$, i.e.,
\[
F_1^* \leq \cdots \leq F_k^* \leq {F}^*_{k+1} \leq \cdots \leq {F}^*.
\]
By a standard analysis for SIP (cf.~\cite{Hettich1993}),
one can expect the convergence $F_k^* \to F^*$, under some conditions.
However, we would like to point out that
the above exchange process typically converges
{\it very slowly} for solving BPPs. A major reason is that
the feasible set of $(\widetilde{P}_k)$ is {\it much {larger}} than
that of $(\widetilde{P})$. Indeed, the dimension of the feasible set of
$(\widetilde{P}_k)$ is typically larger than that of $(\widetilde{P})$.
This is because, for every feasible $(x,y)$ in $(\widetilde{P})$,
$y$ must also satisfy the optimality conditions of the lower level
program \reff{df:P(x)}. Meanwhile, the $y$ in $(\widetilde{P}_k)$
need not satisfy such optimality conditions.
Typically, for $(\widetilde{P}_k)$ to approximate $(\widetilde{P})$
reasonably well, the grid set $Z_k$ should be {\it very big}.
In practice, the above standard exchange method is not efficient
for solving BPPs.
\subsection{Contributions}
In this paper, we propose an efficient computational method for solving BPPs.
First, we transform a BPP into an equivalent SIPP,
by using Fritz John conditions and Jacobian representations.
Then, we propose a new algorithm for solving BPPs,
by using the exchange technique and Lasserre type
semidefinite relaxations.
For each $(x,y)$ that is feasible for \reff{bilevel:pp},
$y$ is a minimizer for the lower level program \reff{df:P(x)}
parameterized by $x$.
If some constraint qualification conditions are satisfied,
the KKT conditions hold. If such qualification conditions fail to hold,
the KKT conditions might not be satisfied. However,
the Fritz John conditions always hold for \reff{df:P(x)}
(cf.~\cite[\S3.3.5]{Bertsekas1990}
{and
\cite{bental1976JOTA} for optimality conditions
for convex programs without constraint qualifications).
}
So, we can add the Fritz John conditions to $(\widetilde{P})$,
while the problem is not changed.
\iffalse
However since $y^k$ obtained would have to be at least a Fritz John point of the lower level problem $P(x^k)$, applying the exchange method to $(\widetilde{P})$ with the Fritz John condition added would speed up the convergence significantly.
Alternatively if a certain constraint qualification for the lower level problem {$\mathcal{L}(x)$} holds, then one may also consider adding the KKT condition to speed up the convergence. The advantage of using the Fritz John condition instead of the KKT condition is that no constraint qualifications are required for the lower level problem.
\fi
A disadvantage of using Fritz John conditions
is the usage of multipliers, which need to be treated as new variables.
Typically, introducing multipliers makes the
polynomial program much harder to solve, because of
the additional variables.
To overcome this difficulty, the technique in \cite[\S2]{Nie2011Jacobian}
can be applied to avoid the usage of multipliers.
This technique is known as Jacobian representations
for optimality conditions.
The above observations motivate us to solve
bilevel polynomial programs, by combining Fritz John conditions,
Jacobian representations, Lasserre relaxations,
and the exchange technique.
Our major results are as follows:
\begin{itemize}
\item Unlike some prior methods for solving BPPs,
we do not assume the KKT conditions hold for the {lower level program} \reff{df:P(x)}.
Instead, we use the Fritz John conditions.
This is because the KKT conditions may fail to hold for the {lower level program} \reff{df:P(x)},
while the Fritz John conditions always hold.
By using Jacobian representations, the usage of multipliers can be avoided.
This greatly improves the computational efficiency.
\item For simple bilevel polynomial programs,
we propose an algorithm using
Jacobian representations, Lasserre relaxations
and the exchange technique.
Its convergence to global minimizers is proved.
The numerical experiments show that it
is efficient for solving SBPPs.
\item For general bilevel polynomial programs,
we can apply the same algorithm, using
Jacobian representations, Lasserre relaxations
and the exchange technique. The numerical experiments show that it
works well for some GBPPs, although it is not guaranteed to
find global optimizers in general. However,
its convergence to global optimality
can be proved under some additional assumptions.
\end{itemize}
The paper is organized as follows: In Section~\ref{sc:prlm}, we review
some preliminaries in polynomial optimization and Jacobian representations.
In Section~\ref{sc:sbpp}, we propose a method for solving
simple bilevel polynomial programs and prove its convergence.
In Section~\ref{sec:GBPP}, we consider general bilevel polynomial programs
and show how the algorithm works.
In Section~\ref{sc:numexp}, we present numerical experiments to demonstrate
the efficiency of the proposed methods.
{
In Section~\ref{sc:condis}, we draw some conclusions
and discuss our method.}
\section{Preliminaries}
\label{sc:prlm}
\noindent {\bf Notation.}
The symbol $\mathbb{N}$ (resp., $\mathbb{R}$, $\mathbb{C}$) denotes the set of nonnegative integers (resp., real numbers, complex numbers). For an integer $n>0$, $[n]$ denotes the set $\{1,\cdots,n\}$.
For $x := (x_1, \ldots, x_n) $ and $\alpha := (\alpha_1, \ldots, \alpha_n)$,
denote the monomial
\[
x^\alpha \, := \, x_1^{\alpha_1}\cdots x_n^{\alpha_n}.
\]
For a finite set $T$, $|T|$ denotes its cardinality.
The symbol $\mathbb{R}[x] := \mathbb{R}[x_1,\cdots,x_n]$
denotes the ring of polynomials in
$x:=(x_1,\cdots,x_n)$ with real coefficients whereas
$\mathbb{R}[x]_k$ denotes its subspace of polynomials of degree at most $k$.
For a polynomial $p\in {\mathbb R}[x]$, {define the set product
\[
p\cdot {\mathbb R}[x] : =\{pq \mid q\in {\mathbb R}[x]\}.
\]
It is the principal ideal generated by $p$}. For a symmetric matrix
$W$, $W\succeq 0$ (resp., $\succ 0$) means that $W$ is positive
semidefinite (resp., definite). For a vector $u\in \mathbb{R}^n$, $\| u \|$
denotes the standard Euclidean norm.
The gradient of a function $f(x)$ is denoted as $\nabla f(x)$.
If $f(x,z)$ is a function in both $x$ and $z$,
then $\nabla_z f(x,z)$ denotes the gradient with respect to $z$.
For an optimization problem, {\tt argmin} denotes
the set of its optimizers.
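As a minimal sketch (ours, not part of the paper), the monomial notation $x^\alpha$ can be evaluated directly:

```python
import math

def monomial(x, alpha):
    # x^alpha := x_1^{alpha_1} * ... * x_n^{alpha_n}
    return math.prod(xi ** ai for xi, ai in zip(x, alpha))

print(monomial([2.0, 3.0], [1, 2]))  # x^alpha = 2^1 * 3^2 = 18.0
```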
\subsection{Polynomial optimization}\label{Poly}
An ideal $I$ in $\mathbb{R}[x]$ is a subset of $\mathbb{R}[x]$
such that $ I \cdot \mathbb{R}[x] \subseteq I$
and $I+I \subseteq I$. For a tuple $p=(p_1,\ldots,p_r)$ in $\mathbb{R}[x]$,
$I(p)$ denotes the smallest ideal containing all $p_i$, i.e.,
\[
I(p) = p_1 \cdot \mathbb{R}[x] + \cdots + p_r \cdot \mathbb{R}[x].
\]
The $k$th {\it truncation} of the ideal $I(p)$,
denoted as $I_k(p)$, is the set
\[
p_1 \cdot \mathbb{R}[x]_{k-\deg(p_1)} + \cdots + {p_r} \cdot \mathbb{R}[x]_{k-\deg(p_r)}.
\]
For the polynomial tuple $p$, denote its real zero set
\begin{align*}
\mc{V}(p) := \{v \in \mathbb{R}^n \mid \, p(v) = 0 \}.
\end{align*}
A polynomial $\sigma \in \mathbb{R}[x]$ is said to be a sum of squares (SOS)
if $\sigma = a_1^2+\cdots+ a_k^2$ for some $a_1,\ldots, a_k \in \mathbb{R}[x]$.
The set of all SOS polynomials in $x$ is denoted as $\Sigma[x]$.
For a degree $m$, denote the truncation
\[
\Sigma[x]_m := \Sigma[x] \cap \mathbb{R}[x]_m.
\]
For a tuple $q=(q_1,\ldots,q_t)$,
its {\it quadratic module} is the set
\[
Q(q):= \Sigma[x] + q_1 \cdot \Sigma[x] + \cdots + q_t \cdot \Sigma[x].
\]
The $k$-th truncation of $Q(q)$ is the set
\[
\Sigma[x]_{2k} + q_1 \cdot \Sigma[x]_{d_1} + \cdots + q_t \cdot \Sigma[x]_{d_t}
\]
where each $d_i = 2k - \deg(q_i)$. For the tuple $q$,
denote the basic semialgebraic set
\[
\mc{S}(q) := \{ v \in \mathbb{R}^n \mid q(v) \geq 0 \}.
\]
For the polynomial tuples $p$ and $q$ as above,
if $f \in I(p) + {Q(q)}$, then clearly $f \geq 0$ on the set
$\mc{V}(p) \cap \mc{S}(q)$. However, the reverse is not necessarily true.
The sum $I(p) + Q(q)$ is said to be {\it archimedean}
if there exists $b \in I(p) + Q(q)$ such that
$\mc{S}(b) = \{ v \in \mathbb{R}^n \mid b(v) \geq 0\}$ is a compact set in $\mathbb{R}^n$. {Putinar \cite{Putinar1993} proved that
if a polynomial $f > 0$ on $\mc{V}(p) \cap \mc{S}(q)$ and if $I(p) + Q(q)$ is archimedean},
then $f \in I(p) + Q(q)$. When $f$ is only nonnegative
(but not strictly positive) on $\mc{V}(p) \cap \mc{S}(q)$, we still have
$f \in I(p) + Q(q)$, under some general conditions
(cf.~\cite{NiefiniteLas}).
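To make the membership statement concrete, here is a small hand-made example (ours, not from the cited references): with $p = (x_1^2 + x_2^2 - 1)$ and $f = 1 - x_2^2$, the identity $f = (-1)\cdot p + x_1^2$ exhibits $f \in I(p) + \Sigma[x]$, which certifies $f \geq 0$ on $\mc{V}(p)$. The sketch below checks the identity and the nonnegativity numerically.

```python
import math, random

random.seed(0)
p = lambda x1, x2: x1**2 + x2**2 - 1.0   # generator of the ideal I(p)
f = lambda x1, x2: 1.0 - x2**2

# The certificate f = (-1)*p + x1^2 is a polynomial identity on all of R^2:
for _ in range(100):
    x1, x2 = random.uniform(-2.0, 2.0), random.uniform(-2.0, 2.0)
    assert abs(f(x1, x2) - (-p(x1, x2) + x1**2)) < 1e-9

# Hence f >= 0 on the real variety V(p), here the unit circle:
for j in range(360):
    t = math.radians(j)
    assert f(math.cos(t), math.sin(t)) >= -1e-12
```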
Now, we review Lasserre type semidefinite relaxations in polynomial optimization.
More details can be found in \cite{Las01,Las09,ML2014}.
Consider the general polynomial optimization problem:
\begin{equation} \label{pro::opti}
\left\{\begin{array}{rl}
f_{\min}:= \quad \underset{x\in{\mathbb R}^n}{\min} & f(x) \\
\text{s.t.} & p(x) = 0, \, q(x) \geq 0,
\end{array} \right.
\end{equation}
where $f \in \mathbb{R}[x]$ and $p,q$ are tuples of polynomials.
The feasible set of \reff{pro::opti}
is precisely the intersection $\mc{V}(p) \cap \mc{S}(q)$.
Lasserre's hierarchy of semidefinite relaxations
for solving \reff{pro::opti} is ($k=1,2,\ldots$):
\begin{equation} \label{k:SOS:las}
\left\{\begin{array}{rl}
f_k := \, \max & \gamma \\
\text{s.t.} & f-\gamma \in I_{2k}(p) + Q_k(q).
\end{array} \right.
\end{equation}
When the set $I(p) + Q(q)$ is archimedean, Lasserre proved the convergence
\[
f_k \quad \to \quad f_{\min},\quad \text{as }k\rightarrow \infty.
\]
{If there exists $k<\infty$ such that $f_k =f_{\min}$,
Lasserre's hierarchy is said to have finite convergence.
{Under the archimedeanness and some standard conditions
in optimization known to be generic (i.e., linear independence constraint qualification, strict complementarity and second order sufficiency conditions)},
Lasserre's hierarchy has finite convergence.
This was recently shown in \cite{NiefiniteLas}.
{On the other hand, there exist special polynomial optimization problems
for which Lasserre's hierarchy fails to have finite convergence.
But, such special problems belong to a set of measure zero
in the space of input polynomials, as shown in \cite{NiefiniteLas}.}
Moreover, we can also get global minimizers of
\reff{pro::opti} by using the flat extension or flat truncation condition
(cf.~\cite{Nie2011certify}).
The optimization problem \reff{k:SOS:las} can be formulated as
a semidefinite program, so it can be solved by semidefinite programming packages
(e.g., SeDuMi \cite{sedumi}, SDPT3 \cite{sdpt3}).
A convenient and efficient software for using
Lasserre relaxations is {\tt GloptiPoly~3} \cite{Gloptipoly}.
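As a toy illustration of finite convergence (our example, not taken from the above references), consider minimizing $f(x) = x^4 - 2x^2$ over $\mathbb{R}$, so that $p,q$ are empty and $I_{2k}(p) + Q_k(q) = \Sigma[x]_{2k}$. The global minimum is $f_{\min} = -1$, attained at $x = \pm 1$, and
\[
f(x) - (-1) \, = \, x^4 - 2x^2 + 1 \, = \, (x^2 - 1)^2 \, \in \, \Sigma[x]_4.
\]
Hence the relaxation \reff{k:SOS:las} of order $k = 2$ already gives $f_2 = f_{\min} = -1$; for any $\gamma > -1$, $f - \gamma$ is negative at $x = 1$ and thus cannot be SOS.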
\subsection{Jacobian representations}
We consider the polynomial optimization problem
that is similar to the {lower level program} \reff{df:P(x)}:
\begin{equation} \label{pro::opti:2}
\underset{z\in{\mathbb R}^p}{\min} \quad f(z) \quad
\text{s.t.} \quad g_1(z)\ge 0, \ldots, g_m(z) \geq 0,
\end{equation}
where $f, g_1, \ldots, g_m \in{\mathbb R}[z]:=\mathbb{R}[z_1,\ldots,z_p]$.
Let $Z$ be the feasible set of \reff{pro::opti:2}.
For $z\in Z$, let $J(z)$ denote the index set of
active constraining functions at $z$.
Suppose $z^*$ is an optimizer of \reff{pro::opti:2}.
By the Fritz John condition (cf.~\cite[\S3.3.5]{Bertsekas1990}), there exists
{$ (\mu_0,\mu_1, \ldots, \mu_m) \ne 0 $} such that
\begin{equation} \label{FJ:cond}
\mu_0\nabla f(z^*) -\sum\limits_{i=1}^m\mu_i \nabla g_i(z^*) =0,
\quad \mu_ig_i(z^*)=0 \ (i\in [m]).
\end{equation}
{A point $z^*$ satisfying \reff{FJ:cond} is called a Fritz John point.}
If we only consider active constraints, the above is then reduced to
\begin{equation} \label{act:FJ:cd}
\mu_0\nabla f(z^*) -\sum\limits_{i\in J(z^*)}\mu_i \nabla g_i(z^*) =0.
\end{equation}
The condition~\reff{FJ:cond} uses multipliers $\mu_0, \ldots, \mu_m$,
which are often not known in advance. If we consider them as new variables,
then it would increase the number of variables significantly.
For the index set $J = \{ i_1, \ldots, i_k\}$, denote the matrix
\[
B[J,z]:= \begin{bmatrix} \nabla f(z) & \nabla g_{i_1} (z) & \cdots
& \nabla g_{i_k} (z) \end{bmatrix}.
\]
Then condition~\reff{act:FJ:cd} means that the matrix
$B[J(z^*), z^*]$ is rank deficient, i.e.,
\[
\mbox{rank} \, B[J(z^*), z^*] \, \leq \, | J(z^*)|.
\]
The matrix $B[J(z^*), z^*]$ depends on the active set $J(z^*)$,
which is typically unknown in advance.
The technique in \cite[\S2]{Nie2011Jacobian} can be applied to
get explicit equations for Fritz John points,
without using multipliers $\mu_i$.
For a subset $J=\{i_1, \ldots, i_k\} \subseteq [m]$
with cardinality $|J| \leq \min\{m,p-1\}$,
write its complement as $J^c := [m]\backslash J.$ Then
\[
B[J,z] \mbox{ is rank deficient } \Longleftrightarrow \mbox{ all $(k+1) \times (k+1)$ minors of $B[J,z]$ are zeros}.
\]
There are in total $\binom{p}{k+1}$ equations defined by {such} minors.
However, this number can be significantly reduced by
using the method in \cite[\S2]{Nie2011Jacobian}.
The number of equations characterizing that $B[J,z]$ is rank deficient
can be reduced to
\[
\ell(J) \, := \, p(k+1)-(k+1)^2+1.
\]
It is much smaller than $\binom{p}{k+1}$.
For brevity, we do not repeat the construction of
these defining polynomials.
Interested readers {are referred to
\cite[\S2]{Nie2011Jacobian} for the details.
List all the defining polynomials,
which make $ B[J,z]$ {rank deficient}, as
\begin{equation} \label{jacob:monior}
\eta^J_1, \ldots, \eta^J_{\ell(J)}.
\end{equation}\noindent}Consider the products of
these polynomials with {$g_j$'s}:
\begin{equation} \label{eta:dot:g}
\eta^J_1 \cdot \Big( \underset{ j \in J^c}{\Pi} g_j \Big), \quad \ldots, \quad
\eta^J_{\ell(J)} \cdot \Big( \underset{ j \in J^c}{\Pi} g_j \big).
\end{equation}
They are all polynomials in $z$. The active set $J(z)$
is undetermined, unless $z$ is known.
We consider all possible polynomials as in (\ref{eta:dot:g}),
for all $J \subseteq [m]$, and collect them together.
For convenience of notation, denote all such polynomials as
\begin{equation} \label{jac:poly:psi}
\psi_1, \quad \ldots, \quad \psi_{L},
\end{equation}
where the number
\begin{align*}
L \,\,&= \, \sum\limits_{J \subseteq [m],|J| \leq \min\{m,p-1\} }
\ell(J) \\
& = \sum\limits_{ 0 \leq k \leq \min\{m,p-1\} }
\binom{m}{k} \big( p(k+1) - (k+1)^2 +1 \big).
\end{align*}
When $m$ and $p$ are large, the number $L$ can be very large.
This is an unfavorable feature of Jacobian representations.
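The formula for $L$ is easy to tabulate; the sketch below (ours) evaluates it for small $m, p$ and shows how quickly it grows.

```python
from math import comb

def num_jacobian_polys(m, p):
    # L = sum_{k=0}^{min(m, p-1)} C(m, k) * ( p(k+1) - (k+1)^2 + 1 )
    return sum(comb(m, k) * (p * (k + 1) - (k + 1)**2 + 1)
               for k in range(min(m, p - 1) + 1))

# For m = 0 only J = {} occurs, with ell = p:
print(num_jacobian_polys(0, 3))   # -> 3
# L grows quickly with m and p:
print(num_jacobian_polys(6, 5))   # -> 267
```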
We point out that the Fritz John points can be characterized
by using the polynomials $\psi_1, \ldots, \psi_L$.
Define the set of all Fritz John points:
\begin{equation} \label{KFJ:pts}
K_{FJ} := \left\{z\in \mathbb{R}^p \left|
\begin{array}{c}
\exists (\mu_0,\mu_1, \ldots, \mu_m) \ne 0, \,
\mu_ig_i(z) = 0 \,(i\in [m]), \\
\mu_0 \nabla f(z) - \sum\limits_{i=1}^m\mu_i \nabla g_i(z)=0.
\end{array}
\right. \right\}.
\end{equation}
Let $W$ be the set of real zeros of polynomials $\psi_j(z)$, i.e.,
\begin{equation} \label{len2}
W \, = \, \{z\in \mathbb{R}^p \mid \psi_1(z) = \cdots = \psi_L(z)=0\}.
\end{equation}
{It is interesting to note that the sets $K_{FJ}$ and $W$ are equal.}
\begin{lemma}\label{lemma:gcp}
For $K_{FJ}, W$ as in \reff{KFJ:pts}-\reff{len2},
it holds that $K_{FJ} = W$.
\end{lemma}
\begin{proof} First, we prove that $W\subseteq K_{FJ}$.
Choose an arbitrary $u\in W$, and let $J(u)$ be the active set at $u$.
If $|J(u)| \geq p$, then the gradients $\nabla f(u)$
and $\nabla g_j(u)$ ($j\in J(u)$) must be linearly dependent,
so $u \in K_{FJ}$. Next, we suppose $|J(u)| < p$.
Note that $g_j(u)>0$ for all $j\in J(u)^c$.
By the construction, some of $\psi_1, \ldots, \psi_L$
are the polynomials as in \reff{eta:dot:g}
\[
{\eta_t^{J(u)}} \cdot \Big( \underset{ j \in J(u)^c}{\Pi} g_j \Big).
\]
Thus, $\psi_1(u) = \cdots = \psi_L(u) = 0$, together with $g_j(u)>0$
for $j\in J(u)^c$, implies that all the polynomials $\eta_t^{J(u)}$
vanish at $u$. By their definition, we know the matrix
$B[J(u),u]$ does not have full column rank.
This means that $u\in K_{FJ}$.
Second, we show that $K_{FJ} \subseteq W$.
Choose an arbitrary $u\in K_{FJ}$.
\begin{itemize}
\item Case I: $J(u)=\emptyset$.
Then $\nabla f(u)=0$. The first column of the matrix $B[\emptyset,u]$ is zero,
so all $\eta_t^\emptyset$ and $\psi_j$
vanish at $u$, and hence $u\in W$.
\item Case II: $J(u) \not =\emptyset$. Let $I\subseteq [m]$
be an arbitrary index set with $|I|\leq \min\{m,p-1\}$.
If $J(u)\not \subseteq I$, then at least one $j\in I^c$ belongs to $J(u)$.
Thus, at least one $j \in I^c$ satisfies $g_j(u)=0$, so all the polynomials
\[
{\eta_t^{I}} \cdot \Big( \underset{ j \in I^c}{\Pi} g_j \Big)
\]
vanish at $u$.
If $J(u) \subseteq I$, then $\mu_i g_i(u)=0$ implies that
$\mu_i=0$ for all $i\in I^c$.
By definition of $K_{FJ}$, the matrix {$B[I,u]$}
does not have full column rank.
So, the minors $\eta_i^I$ of {$B[I,u]$} vanish at $u$.
By the construction of $\psi_i$, we know all $\psi_i$ vanish at $u$,
so $ u\in W$.
\end{itemize}
The proof is completed by combining the above two cases.
\end{proof}
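Lemma~\ref{lemma:gcp} can be sanity-checked on a tiny instance (our toy example): take $p=1$, $m=1$, $f(z) = (z-2)^2$ and $g_1(z) = 1 - z^2$. Only $J = \emptyset$ satisfies $|J| \leq p-1 = 0$, so the single polynomial in \reff{jac:poly:psi} is $\psi_1(z) = f'(z)\, g_1(z) = 2(z-2)(1-z^2)$, and its zero set $\{-1, 1, 2\}$ equals $K_{FJ}$.

```python
# f(z) = (z-2)^2, g(z) = 1 - z^2; the only Jacobian polynomial is
# psi(z) = f'(z) * g(z), whose real zero set should equal K_FJ.
psi = lambda z: 2.0 * (z - 2.0) * (1.0 - z * z)
# K_FJ: z = 2 (f' = 0, take mu_1 = 0); z = +-1 (g active, and the 1 x 2
# matrix B[{1}, z] is always rank deficient).
K_FJ = {-1.0, 1.0, 2.0}
for z in [-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 3.0]:
    assert (abs(psi(z)) < 1e-12) == (z in K_FJ)
print("K_FJ = W verified on sample points")
```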
\section{Simple bilevel polynomial programs}
\label{sc:sbpp}
In this section, we study simple bilevel polynomial programs (SBPPs)
and give an algorithm for computing global optimizers.
For SBPPs as in \reff{bilevel:pp}, the feasible set $Z(x)$
for the {lower level program} \reff{df:P(x)} is independent of $x$.
Assume that $Z(x)$ is constantly equal to the semialgebraic set
\begin{equation} \label{Z:sbpp}
Z:=\{z\in \mathbb{R}^p \mid g_1(z) \geq 0, \ldots, g_{m_2}(z) \geq 0\},
\end{equation}
for given polynomials $g_1, \ldots, g_{m_2}$ in $z:=(z_1,\ldots, z_p)$.
For each pair $(x,y)$ that is feasible in \reff{bilevel:pp},
$y$ is an optimizer for \reff{df:P(x)} which now becomes
\begin{equation} \label{inopt:minfg}
\min\limits_{ z\in \mathbb{R}^p} \quad f(x,z) \quad
\text{s.t.} \quad g_1(z) \geq 0, \ldots, g_{m_2}(z) \geq 0.
\end{equation}
Note that the inner objective $f$ still depends on $x$.
So, $y$ must be a Fritz John point of \reff{inopt:minfg}, i.e.,
there exists {$ (\mu_0,\mu_1, \ldots, \mu_{m_2}) \ne 0 $} satisfying
\[
\mu_0 \nabla_z f(x,y) - \sum_{j \in [m_2] } \mu_j \nabla_z g_j(y) = 0,
\quad \mu_j g_j(y) = 0 \, (j \in [m_2]).
\]
Let $K_{FJ}(x)$ denote the set of all Fritz John points of \reff{inopt:minfg}.
The set $K_{FJ}(x)$ can be characterized by Jacobian representations.
Let $\psi_1, \ldots, \psi_L$ be the polynomials constructed as in
\reff{jac:poly:psi}. Note that each $\psi_j$
is now a polynomial in $(x,z)$, because the objective of
\reff{inopt:minfg} depends on $x$. Thus, each $(x,y)$
feasible for \reff{bilevel:pp} satisfies
\[
\psi_1(x,y) = \cdots = \psi_L(x,y) = 0.
\]
For convenience of notation, denote the polynomial tuples
\begin{equation} \label{sc3:xi:psi}
\xi :=\big( G_1, \ldots, G_{m_1}, g_1, \ldots, g_{m_2} \big), \quad
\psi := \big( \psi_1, \ldots, \psi_L \big).
\end{equation}
{We call $\psi(x,y)=0$} a {\it Jacobian equation}.
Then, the SBPP as in \reff{bilevel:pp} is equivalent to
the following SIPP:
\begin{equation} \label{sipp:FsGH}
\left\{
\begin{array}{rl} F^* := \min\limits_{x \in \mathbb{R}^n, y \in \mathbb{R}^p} & \, F(x,y) \\
\text{s.t.} & \, \psi(x,y) = 0, \, \xi(x,y) \geq 0, \\
& \, H(x,y,z) \geq 0, \, \forall~z\in Z.
\end{array} \right.
\end{equation}
In the above, $H(x,y,z)$ is defined as in \reff{df:H(xyz)}.
\subsection{A semidefinite algorithm for SBPP}
We have seen that the SBPP \reff{bilevel:pp} is equivalent to
\reff{sipp:FsGH}, which is an SIPP. So, we can apply
the {exchange} method to solve it.
The basic idea of ``exchange" is that
we replace $Z$ by a finite grid set $Z_k$ in \reff{sipp:FsGH},
and then solve it for a global minimizer $(x^k, y^k)$ by Lasserre relaxations.
If $(x^k, y^k)$ is feasible for \reff{bilevel:pp}, we stop;
otherwise, we compute global minimizers of $H(x^k,y^k, z)$ and
add them to $Z_k$. Repeat this process until the convergence condition is met.
We call $(x^*,y^*)$ a global minimizer of \reff{bilevel:pp},
up to a tolerance parameter $\epsilon>0$, if $(x^*,y^*)$
is a global minimizer of the following approximate SIPP:
\begin{equation} \label{pop:tilde(Pepsilon)}
\left\{
\begin{array}{rl}
{F}^*_\epsilon := \min\limits_{x \in \mathbb{R}^n ,y \in \mathbb{R}^p} & F(x,y) \\
\text{s.t.} \quad & \, \psi(x,y) = 0, \, \xi(x,y) \geq 0, \\
& \ H(x,y,z)\geq -\epsilon, ~\forall~ z\in Z.
\end{array}
\right.
\end{equation}
Summarizing the above, we get the following algorithm.
\begin{alg}\label{alg:exchange:method}
(A Semidefinite Relaxation Algorithm for SBPP.)
\medskip
\noindent\textbf{Input:} Polynomials $F$, $f$,
$G_1,\ldots, G_{m_1}$, $g_1,\ldots, g_{m_2}$
for the SBPP as in \reff{bilevel:pp},
a tolerance parameter $\epsilon \geq 0$, and a maximum number $k_{\max}$
{of iterations}.
\medskip
\noindent\textbf{Output:} The set $\mc{X}^*$ of
global minimizers of \reff{bilevel:pp}, up to the tolerance $\epsilon$.
\medskip
\begin{enumerate}[\itshape Step 1]
\item Let $Z_0=\emptyset$, $\mc{X}^*= \emptyset$ and $k=0$.
\item
\label{the Jacobian representation}
Apply Lasserre relaxations to solve
\begin{equation} \label{alg:simple:sub1}
(P_k): \left\{
\begin{array}{rl}
F_k^* :=\min\limits_{x \in \mathbb{R}^n, y \in \mathbb{R}^p} &\, F(x,y)\\
\text{s.t.} &\, \psi(x,y) = 0, \, \xi(x,y) \geq 0, \\
&\, H(x,y,z)\geq 0 \, (\forall~z\in Z_k),
\end{array}
\right.
\end{equation}
and get the set
$S_k = \{(x^k_1,y_1^k),\cdots,(x^k_{r_k},y^k_{r_k})\}$
of its global minimizers.
\item \label{step3}
For each $i=1,\cdots,r_k$, do the following:
\begin{enumerate}[\upshape (a)] \label{item::step3}
\item Apply Lasserre relaxations to solve
\begin{equation} \label{alg:simple:sub2}
(Q^k_i):\quad \left\{
\begin{array}{rl}
v^k_i := \min\limits_{z \in \mathbb{R}^p} &\ H(x^k_i,y^k_i,z)\\
\text{s.t.}&\, \psi(x_i^k, z) = 0, \\
& \, g_1(z) \geq 0, \ldots, g_{m_2}(z) \geq 0,
\end{array}
\right.
\end{equation}
and get the set $T^k_i=\left\{z^k_{i,j} :\ j=1,\cdots,t^k_i\right\}$
of its global minimizers.
\item
If $v^k_i \geq - \epsilon$,
then update
$\mathcal{X}^* : =\mathcal{X}^* \cup \{(x^k_i,y_i^k)\}$.
\end{enumerate}
\item
If $\mathcal{X}^*\neq \emptyset$ or $k>k_{\max}$, stop;
otherwise, {update $Z_{k}$ to $Z_{k+1}$ as}
\begin{equation} \label{Z:k+1:sbpp}
Z_{k+1} :=Z_{k} \cup T^k_1 \cup \cdots \cup T^k_{r_k}.
\end{equation}
Let $k:=k+1$ and go to Step
\ref{the Jacobian representation}.
\end{enumerate}
\end{alg}
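To make the control flow of Algorithm \ref{alg:exchange:method} concrete, the exchange loop can be sketched in a few lines of Python. This is only an illustration: the solver arguments `solve_Pk` and `solve_Q` are hypothetical placeholders (in the algorithm they are Lasserre relaxations), and the demo instance is a toy SIPP, $\min x$ s.t. $x - z^2 \geq 0$ for all $z \in [-1,1]$, whose optimum $x^* = 1$ admits closed-form subproblem solutions.

```python
def exchange(solve_Pk, solve_Q, eps=1e-8, k_max=50):
    """Generic exchange loop: refine a finite grid Z_k until the
    discretized solution is feasible for the full SIPP (Steps 1-4)."""
    Zk = set()                      # Step 1: Z_0 = empty
    for k in range(k_max + 1):
        xk = solve_Pk(Zk)           # Step 2: solve discretized problem (P_k)
        vk, zk = solve_Q(xk)        # Step 3: check semi-infinite feasibility
        if vk >= -eps:              # feasible up to tolerance: done
            return xk, k
        Zk.add(zk)                  # Step 4: add the violating point, repeat
    return None, k_max

# Toy SIPP: minimize x subject to H(x, z) = x - z^2 >= 0 for all z in [-1, 1].
# (P_k) has the closed-form solution x_k = max_{z in Z_k} z^2,
# and the separation problem (Q) is minimized at z = 1, with value x - 1.
toy_Pk = lambda Zk: max((z * z for z in Zk), default=0.0)
toy_Q = lambda x: (x - 1.0, 1.0)

x_star, iters = exchange(toy_Pk, toy_Q)
print(x_star, iters)   # 1.0 1
```

One iteration suffices here because the single violating point $z = 1$ already determines the binding constraint of the full problem.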
For the exchange method to solve the SIPP \reff{sipp:FsGH} successfully,
the two subproblems~\reff{alg:simple:sub1} and \reff{alg:simple:sub2}
need to be solved globally in each iteration.
This can be done by Lasserre's hierarchy of
semidefinite relaxations (cf.~\S\ref{Poly}).
\begin{itemize}
\item [A)]
For solving \reff{alg:simple:sub1} by Lasserre's hierarchy, we get a sequence of
monotonically increasing lower bounds for $F_k^*$, say,
$\{ \rho_\ell \}_{\ell=1}^\infty$, that is,
\[
\rho_1 \leq \cdots \leq \rho_\ell \leq \cdots \leq F_k^*.
\]
Here, $\ell$ is a relaxation order.
If for some value of $\ell$
we get a feasible point $(\hat{x}, \hat{y})$ for \reff{alg:simple:sub1}
such that $F(\hat{x}, \hat{y}) = \rho_\ell$, then we must have
\begin{equation} \label{eq:F(xy)=Fk}
F(\hat{x}, \hat{y}) = F_k^* = \rho_\ell,
\end{equation}
and conclude that $(\hat{x}, \hat{y})$ is a global minimizer.
This certifies that the Lasserre relaxation of order $\ell$ is exact
and that \reff{alg:simple:sub1} is solved globally,
i.e., Lasserre's hierarchy has finite convergence.
As recently shown in \cite{NiefiniteLas},
Lasserre's hierarchy has finite convergence when archimedeanness holds
together with some standard optimality conditions that are well known
to be generic in optimization (namely, the linear independence
constraint qualification, strict complementarity,
and the second order sufficiency condition).
\item [B)]{
For a given polynomial optimization problem, there exists a sufficient
(and almost necessary) condition for detecting whether or not
Lasserre's hierarchy has finite convergence:
the {\it flat truncation} condition, proposed in \cite{Nie2011certify}.
It was proved in \cite{Nie2011certify} that
Lasserre's hierarchy has finite convergence
if the flat truncation condition is satisfied.
When the flat truncation condition holds,
we can also get the point
$(\hat{x}, \hat{y})$ in \reff{eq:F(xy)=Fk}.
In all of our numerical examples, the flat truncation condition
is satisfied, so we know that Lasserre relaxations
solved them exactly.}
{There exist special optimization problems
for which Lasserre relaxations are not exact (see, e.g., \cite[Chapter 5]{Las09}).
Even in the worst case, where Lasserre's hierarchy fails to have
finite convergence,
flat truncation is still the right condition for checking asymptotic convergence.}
This is proved in \cite[\S3]{Nie2011certify}.
\item [C)]
In computational practice, semidefinite programs cannot be solved
exactly, because round-off errors always exist in computers.
Therefore, if $F(\hat{x}, \hat{y}) \approx \rho_\ell$,
it is reasonable to claim that
\reff{alg:simple:sub1} is solved globally.
This numerical issue is a common feature of most computational methods.
\item [D)]
For the same reasons as above, the subproblem~\reff{alg:simple:sub2}
can also be solved globally by Lasserre's relaxations.
Moreover, \reff{alg:simple:sub2} uses the equation
$\psi(x_i^k,z)=0$, obtained from Jacobian representation.
As shown in \cite{Nie2011Jacobian}, Lasserre's hierarchy of relaxations,
in combination with Jacobian representations, always has finite convergence,
under some nonsingularity conditions.
This result has been improved in \cite[Theorem 3.9]{GuoWang2014}
under weaker conditions.
Flat truncation can be used to detect
the convergence (cf.~\cite[\S4.2]{Nie2011certify}).
\item [E)]
{
For all $\epsilon_{1}> \epsilon_2 >0$, it is easy to see that
$F_{\epsilon_1}^*\leq F_{\epsilon_2}^*\leq F^*$; hence the feasible regions
and optimal values of the approximate bilevel problems are monotone
with respect to $\epsilon$. Indeed, one can prove that
$\lim\limits_{\epsilon\rightarrow 0^+}~F_{\epsilon}^*=F^* $
and that the optimal solutions depend continuously on $\epsilon$;
see \cite[Theorem 4.1]{SBLYe2013} for the result and a detailed proof.
However, we point out that if $\epsilon >0$ is not small enough,
then the solution of the approximate bilevel program may be very different
from that of the original bilevel program;
see \cite[Example 4.1]{Mitsos2006Thesis}.
}
\item [F)] In Step~3 of Algorithm~\ref{alg:exchange:method},
the value of $v_i^k$ measures the feasibility of
$(x_i^k, y_i^k)$ in \reff{sipp:FsGH}.
This is because $(x_i^k, y_i^k)$ is a feasible point for \reff{sipp:FsGH}
if and only if $v_i^k \geq 0$.
By using the exchange method,
the subproblem \reff{alg:simple:sub1} is only an
approximation for \reff{sipp:FsGH},
so typically we have $v_i^k < 0$ {if $(x_i^k, y_i^k)$ is infeasible for \reff{sipp:FsGH}}. The closer $v_i^k$ is to zero, the better \reff{alg:simple:sub1} approximates \reff{sipp:FsGH}.
\end{itemize}
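The flat truncation condition discussed in remark B) amounts to comparing the ranks of two nested truncated moment matrices, $\operatorname{rank} M_t(y) = \operatorname{rank} M_{t-d}(y)$. The following self-contained sketch (our own illustration, not the implementation used in the paper) checks this for a univariate moment sequence generated by a Dirac measure, for which the sequence is flat of rank one and the minimizer can be read off as $y_1/y_0$.

```python
def moment_matrix(y, t):
    """Univariate truncated moment matrix M_t(y): entry (i, j) is y_{i+j}."""
    return [[y[i + j] for j in range(t + 1)] for i in range(t + 1)]

def rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    m, n, r = len(A), len(A[0]), 0
    for col in range(n):
        piv = max(range(r, m), key=lambda i: abs(A[i][col]), default=None)
        if piv is None or abs(A[piv][col]) < tol:
            continue                     # column is (numerically) zero below r
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, m):
            f = A[i][col] / A[r][col]
            for j in range(col, n):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

# Moment sequence of the Dirac measure at x* = 0.5: y_k = 0.5**k.
y = [0.5 ** k for k in range(5)]
t, d = 2, 1
flat = rank(moment_matrix(y, t)) == rank(moment_matrix(y, t - d))
x_star = y[1] / y[0]          # minimizer extracted from a rank-1 flat sequence
print(flat, x_star)           # True 0.5
```

In actual moment-SOS software the same rank test is applied to the multivariate moment matrix returned by the SDP solver, with a numerical tolerance playing the role of `tol`.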
\subsection{Two features of the algorithm}
As in the introduction, we do not apply the exchange method directly
to \reff{bilevel:sipp:reform}, but instead to \reff{sipp:FsGH}.
Both \reff{bilevel:sipp:reform} and \reff{sipp:FsGH}
are SIPPs that are equivalent to the SBPP \reff{bilevel:pp}.
As the numerical experiments will show, the SIPP \reff{sipp:FsGH}
is much easier to solve by the exchange method.
This is because the Jacobian equation $\psi(x,y)=0$ in \reff{sipp:FsGH}
makes it much easier for \reff{alg:simple:sub1}
to approximate \reff{sipp:FsGH} accurately.
Typically, for a finite grid set $Z_k$ of $Z$,
the feasible sets of \reff{sipp:FsGH} and \reff{alg:simple:sub1}
have the same dimension. However, the feasible set of
\reff{bilevel:sipp:reform} has smaller dimension than that of \reff{pop:tilde(Pk)}.
Thus, it is usually very difficult for \reff{pop:tilde(Pk)} to approximate
\reff{bilevel:sipp:reform} accurately, by choosing a finite set $Z_k$.
In contrast, it is often much easier for \reff{alg:simple:sub1} to approximate
\reff{sipp:FsGH} accurately. We illustrate this fact
by the following example.
\iffalse
solving $(\widetilde{P}_k)$ and $P(x^k_i)
$ directly, we add the redundant constraints $\psi_j(x_i^k,z) =0,~j\in [L]$ and solve the problems $(P_k)$ and $(Q_i^k)$ respectively.
This method is inspired by the \emph{Jacobian semidefinite relaxations} proposed by Nie in \cite{Nie2011Jacobian} and improved in \cite{FENGUO} in which the Jacobian representation is used to represent the relaxed KKT points.
The advantage of using the relaxed Fritz John condition instead of the relaxed KKT condition is that no matter whether the global minimizer of $P(x^k_i)$ satisfies the KKT condition or not, by adding the redundant polynomial equations, Lasserre's semidefinite relaxations always have finite convergence under some generic assumptions.
{So Algorithm \ref{alg:exchange:method} can continue until the SBPP problem $(P)$ is solved globally even if at some $x_i^{k}$, the global minimum of the lower level program $P(x_{i}^{k})$ is achieved at a minimizer which does not satisfy the KKT condition. }
Before studying the convergence properties we present two examples to illustrate our algorithm. The first example shows that the exchange method without using the Jacobian representation of the generalized critical points may not work and the second example demonstrates that our algorithm can solve SBPP globally even in the case where the lower level optimal solution does not satisfy the KKT condition.
\fi
\begin{exm} \label{exm:appendix:9new}
(\cite[Example 3.19]{BIEXM})
Consider the SBPP:
\begin{equation} \label{exm:comparison}
\left\{
\begin{array}{rl}
\min\limits_{x\in {\mathbb R},y\in {\mathbb R}} &~~ F(x,y): = xy-y+\frac{1}{2}y^2\\
\text{s.t.} & 1-x^2 \geq 0, \quad 1 - y^2 \geq 0, \\
& ~~
y\in S(x) := \underset{1-z^2 \geq 0}{\tt argmin} \quad
f(x,z): = -xz^2+\frac{1}{2}z^4.
\end{array}
\right.
\end{equation}
Since $f(x,z) = \frac{1}{2}(z^2-x)^2-\frac{1}{2}x^2$, one can see that
\[
S(x)=\left\{
\begin{aligned}
& 0,\quad \quad x\in [-1,0),\\
& \pm \sqrt{x}, \quad x\in [0,1].
\end{aligned}
\right.
\]
Therefore, the outer objective $F(x,y)$ can be expressed as
\[
F(x,y)=\left\{
\begin{aligned}
& 0,\quad \quad \quad \quad\quad \quad \quad x\in [-1,0),\\
& \frac{1}{2}x\pm (x-1)\sqrt{x}, \quad x\in [0,1].
\end{aligned}
\right.
\]
So, the optimal solution and the optimal value of \reff{exm:comparison}
are given, with $a = \frac{\sqrt{13}-1}{6}$, by
\[
(x^*,y^*) = (a^2,a) \approx (0.1886, \, 0.4343),
\quad F^* = \frac{1}{2}a^2+a^3-a \approx -0.2581.
\]
If Algorithm \ref{alg:exchange:method}
is applied without using the Jacobian {equation} $\psi(x,y)=0$,
the computational results are shown in Table \ref{Table:comparision:1}.
The problem \reff{exm:comparison} cannot be solved reasonably well.
In contrast, if we apply Algorithm \ref{alg:exchange:method}
with the Jacobian equation $\psi(x,y)=0$, then \reff{exm:comparison}
is solved very well. The computational results are shown in
Table~\ref{Table:comparision:2}.
It takes only two iterations for the algorithm to converge.
\begin{table}[htb]
\caption{Computational results without $\psi(x,y) =0$}
\label{Table:comparision:1}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|} \hline
{\tt Iter} $k$ & $(x_i^k,y_i^k)$ & $z_{i,j}^k$ & $F_k^*$ & $v_i^k$ \\ \hline
0 & (-1,1) & 4.098e-13 & -1.5000 & -1.5000 \\
\hline
1 & (0.1505, 0.5486) & $\pm 0.3879$ & -0.3156 & -0.0113 \\
\hline
2 & (0.0752, 0.3879) & $\pm 0.2743$ & -0.2835& -0.0028\\
\hline
3 & (0.2088, 0.5179) & $\pm 0.4569 $ & -0.2754 & -0.0018\\
\hline
4 & cannot be solved & ... & ... & ... \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
\begin{table}[htb]
\caption{Computational results with $\psi(x,y) =0$ }
\label{Table:comparision:2}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|} \hline
{\tt Iter} $k$ & $(x_i^k,y_i^k)$ & {$z_{i,j}^k$} & $F_k^*$ & $v_i^k$ \\ \hline
0 & (-1,1) & 3.283e-21 & -1.5000 & -1.5000 \\
\hline
1 & (0.1886,0.4342) & $\pm 0.4342$ & -0.2581 & -3.625e-12 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
\end{exm}
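As a quick numerical sanity check (ours, not part of the original example), the closed-form optimum above is easy to reproduce: on the branch $y = \sqrt{x}$ the outer objective reduces to $h(a) = \frac{1}{2}a^2 + a^3 - a$ with $a = \sqrt{x}$, and the stationarity condition $3a^2 + a - 1 = 0$ gives $a = \frac{\sqrt{13}-1}{6}$.

```python
import math

# Closed-form minimizer from the stationarity condition 3a^2 + a - 1 = 0.
a = (math.sqrt(13) - 1) / 6
x_opt, y_opt = a * a, a
F_star = 0.5 * a ** 2 + a ** 3 - a

# Brute-force check over both branches y = +-sqrt(x) of S(x) for x in [0, 1];
# on [-1, 0) we have S(x) = {0} and F = 0, which is never the minimum here.
F = lambda x, y: x * y - y + 0.5 * y ** 2
grid = [i / 10 ** 5 for i in range(10 ** 5 + 1)]
F_min = min(min(F(x, math.sqrt(x)), F(x, -math.sqrt(x))) for x in grid)
F_min = min(F_min, 0.0)

print(round(F_star, 4), round(F_min, 4))   # -0.2581 -0.2581
```

This matches the value reported in Table~\ref{Table:comparision:2} after two iterations of the algorithm.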
For the lower level program \reff{df:P(x)},
the KKT conditions may fail to hold.
In such a case, the classical methods,
which replace \reff{df:P(x)} by its KKT conditions,
do not work at all. However, such problems can still be
solved efficiently by Algorithm~\ref{alg:exchange:method}.
The following are two such examples.
\begin{exm} (\cite[Example 2.4]{Dempe2012MP}) Consider the following SBPP:
\begin{equation}\label{exm:comparsion:1}
F^*:=\min\limits_{x\in {\mathbb R},y\in {\mathbb R}} (x-1)^2+y^2\quad \text{s.t.}\quad
y\in S(x) := \underset{ z\in Z := \{z\in {\mathbb R}| z^2\leq 0\}}{\tt argmin}~x^2z.
\end{equation}
It is easy to see that the global minimizer of this problem is
$(x^*,y^*) = (1,0)$. The set $Z=\{0\}$ is convex.
By using the multiplier variable $\lambda$,
we get a single level optimization problem:
\begin{equation*} \left\{
\begin{aligned}
r^* := \min\limits_{x\in {\mathbb R},y\in {\mathbb R},\lambda\in {\mathbb R}}&\ (x-1)^2+y^2\\
\text{s.t.}&\ x^2+2\lambda y =0, \, \lambda \geq 0,\, y^2\leq 0,\, \lambda y^2=0.\\
\end{aligned}
\right.
\end{equation*}
The feasible points of this problem are $(0,0,\lambda)$
with $\lambda\geq 0$. We have $r^* = 1>F^*$.
The KKT reformulation approach fails in this example,
since $y^*\in S(x^*)$ is not a KKT point.
We solve the SBPP problem \reff{exm:comparsion:1}
by Algorithm~\ref{alg:exchange:method}. The Jacobian equation is
$\psi(x,y) = x^2y^2=0$, and we reformulate the problem as:
\begin{equation*} \left\{
\begin{aligned}
s^* := \min\limits_{x\in {\mathbb R},y\in {\mathbb R}}&\ (x-1)^2+y^2\\
\text{s.t.}&\ x^2(z-y)\geq 0,\quad \forall z\in Z,\\
&\ \psi(x,y) = x^2y^2=0.
\end{aligned}
\right.
\end{equation*}
Strictly speaking, this problem is not an SIPP, since the set $Z$
contains only one point. At the initial step,
we find its optimal solution $(x^*,y^*) = (1,0)$,
and it is easy to check that $\min\limits_{z\in Z} H(x^*,y^*,z) = 0$,
which certifies that it is the global minimizer of
the SBPP problem \reff{exm:comparsion:1}.
\end{exm}
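A brute-force check (our own, for illustration) makes the gap between the bilevel optimum and the KKT reformulation explicit: since $S(x)=\{0\}$ for every $x$, the bilevel problem reduces to minimizing $(x-1)^2$ with $y = 0$, whereas the KKT system forces $x = 0$.

```python
# Lower level: S(x) = argmin_{z^2 <= 0} x^2*z = {0} for every x,
# so any feasible pair has y = 0 and the bilevel problem reduces to
# minimizing the upper objective (x - 1)^2 + y^2 with y = 0.
upper = lambda x, y: (x - 1) ** 2 + y ** 2
xs = [i / 1000 for i in range(-2000, 2001)]          # grid on [-2, 2]
x_best = min(xs, key=lambda x: upper(x, 0.0))
F_star = upper(x_best, 0.0)

# KKT reformulation: x^2 + 2*lambda*y = 0 with y^2 <= 0 forces y = 0,
# and then x = 0; its optimal value r* = 1 misses the true optimum F* = 0.
r_star = upper(0.0, 0.0)

print(x_best, F_star, r_star)   # 1.0 0.0 1.0
```

The discrepancy $r^* = 1 > F^* = 0$ is exactly the failure of the KKT reformulation noted above.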
\begin{exm} \label{exm:degenerate}
Consider the SBPP:
\begin{equation} \label{exm:degenerate:P}
\left\{
\begin{array}{rl}
\min\limits_{x\in \mathbb{R}, \, y \in \mathbb{R}^2} &~~ F(x,y): = x+y_1+y_2\\
\text{s.t.} & ~~ x-2 \geq 0, \quad 3-x \geq 0, \\
& ~~ y\in S(x) := \underset{z\in Z}{\tt argmin} \quad
f(x,z): = x(z_1+z_2), \\
\end{array}
\right.
\end{equation}
where the set $Z$ is defined by the inequalities:
\[
g_1(z):=z^2_1-z_2^2 - (z^2_1+z^2_2)^2 \geq 0,
\quad g_2(z) := z_1\geq 0.
\]
\iffalse
\begin{figure}
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=1.0\textwidth]{Figure1.pdf}
\caption{}
\label{fig:exm:denegerate:1}
\end{subfigure}%
\quad \quad \quad
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=1.0\textwidth]{Figure2.pdf}
\caption{}
\label{fig:exm:denegerate:2}
\end{subfigure}
\caption{\small{(A) Feasible region $Z$ of the lower level program in (\ref{exm:degenerate:P}). (B) Feasible region of the lower level program with the Jacobian representation.}}
\label{fig:exm:degenerate:all}
\end{figure}
The set $Z$ is shown in the shaded area in Figure 1(A).
\fi
For all $x\in [2,3]$, one can check that $S(x) = \{(0,0)\}$.
Clearly, the global minimizer of \reff{exm:degenerate:P} is
$(x^*,y^*) = (2,0,0)$, and the optimal value $F^*= 2$. At $z^*= (0,0) $,
\[
\nabla_{z} f(x,z^*) = \begin{bmatrix} x \\ x\end{bmatrix},
\nabla_z g_1(z^*) = \begin{bmatrix} 0 \\ 0\end{bmatrix},
\nabla_z g_2(z^*) = \begin{bmatrix} 1 \\ 0\end{bmatrix}.
\]
The KKT condition does not hold for the {lower level program},
since $\nabla_{z} f(x,z^*)$ is not a linear combination
of $\nabla_z g_1(z^*)$ and $\nabla_z g_2(z^*)$.
By \cite[Proposition 3.4]{NiefiniteLas}, Lasserre relaxations in \reff{k:SOS:las}
do not have finite convergence for solving the {lower level program}.
One can check that, for all feasible $x$,
\[
K_{FJ}(x) \, = \, \{(0,0), \, (0.8990,0.2409)\},
\]
where the two points are the solutions of the equations
$g_1(z)=0$ and $z_1+z_2+2(z_2-z_1)(z_1^2+z_2^2)=0$.
\iffalse
Now we illustrate that we can use the Jacobian representation to represent the set $K(x)$.
Since $m_2=2, p=2$, Assumption \ref{assumption:1} holds automatically. $\bar{m}=\min\{m_2, p-1\}=1$.
The index sets $I$ such that $I\subseteq [2], |I|\leq \bar{m}$ are $I=\emptyset, \{1\}, \{2\}$.
When $I=\emptyset, B^\emptyset(z)=\nabla_z f(x,z)= \begin{bmatrix} x \\ x\end{bmatrix}.$ Since $I^c=\{1,2\}$ and $len(\emptyset)=2$,
there are $p=2$ minimum defining polynomials which are the maximum minors of size $1$ defined by (\ref{defingeq}):
$$\eta^\emptyset_1:=\nabla_{z_1}f(x,z)=x, \eta_2^\emptyset :=\nabla_{z_2} f(x,z)=x.$$
Since $\emptyset^c=\{1,2\}$, by (\ref{len}), we have
$$\psi_i^\emptyset(x,z)= xg_1(z)g_2(z), \,~i=1,2.$$
When $I=\{1\}$, $B^{\{1\}} (z)=[\nabla_z f(x,z) \ \nabla g_1(z)]$ has $2$ rows and $2$ columns.
By (\ref{lendef}), $len (\{1\})=1$ and so there is one minimum defining polynomial which are the maximum minors of size $2$ defined by (\ref{defingeq}):
$$\eta^{\{1\}}_1=-x( z_1+z_2+2(z_2-z_1)(z_1^2+z_2^2)).$$
Since $\{1\}^c=\{2\}$, $$\psi_i^{\{1\}}(x,z)=\eta^{\{1\}}_1g_2(z)=-x( z_1+z_2+2(z_2-z_1)(z_1^2+z_2^2))z_1.$$
When $I=\{2\}$, $B^{\{2\}} (z)=[\nabla_z f(x,z) \ \nabla g_2(z)]= \left [\begin{array}{ll}
x&1\\
x&0
\end{array}\right ]$ has $2$ rows and $2$ columns.
By (\ref{lendef}), $len (\{1\})=1$ and so there is one minimum defining polynomial which are the maximum minors of size $2$ defined by (\ref{defingeq}):
$$\eta^{\{2\}}_1=-x.$$
Since $\{2\}^c=\{1\}$, $$\psi_1^{\{2\}}=\eta^{\{2\}}_1g_1(z)=-xg_1(z).$$
Hence by the Jacobian representation, we get $L:=4$ equations to define the generalized critical points:
$$\psi^\emptyset_1=0, \psi^\emptyset_2=0, \psi^{\{1\}}_1=0, \psi^{\{2\}}_1=0 .$$
Let {
\begin{equation*}
\begin{aligned}
\varphi_1(x,z) & := z^2_1-z_2^2 - (z^2_1+z^2_2)^2 = 0,\\
\varphi_2(x,z) &:= z_1\left( z_1+z_2+2(z_2-z_1)(z_1^2+z_2^2)\right) =0,\\
\psi_1(x,z) & := x\varphi_1(x,z),\quad \psi_2(x,z) := x\varphi_2(x,z).
\end{aligned}
\end{equation*}
Since $x\not =0$, we have shown that the set of generalized critical points can be represented by
$$\varphi_1(x,z)=0,\varphi_2(x,z)=0.$$}
The $0$-level sets of the polynomials $\varphi_1(x,z)$ and $\varphi_2(x,z)$ are shown in Figure 1(B) which intersect at just two generalized critical points $\{(0,0), (0.8990,0.2409)\}$. Hence $$W(x)=\{(0,0), (0.8990,0.2409)\}.$$
So we have shown that $\{(0,0)\}=S(x)\subset W(x)$ for each $x\in X$. By the Jacobian relaxation, instead of solving the original lower level program
we solve the equivalent lower level program:
\begin{equation*} P(x): \left\{
\begin{aligned}
\min\limits_{z\in Z} &\ f(x,z)\\
\text{s.t.} &\ \psi_{1}(x,z) = \psi_{2}(x,z) = 0.
\end{aligned}
\right.
\end{equation*}
Since the real algebraic variety
$V_{\mathbb R}(\psi)=\{z\in {\mathbb R}^2:\psi_1(x,z)=0, \ \psi_2(x,z)=0\}$ is finite, by \cite[Theorem 1.1]{Nie2013real}, Lasserre's semidefinite relaxation (Algorithm \ref{alg:finit:cons:putinar}) has finite convergence guarantee on this problem.
\fi
By the Jacobian representation of $K_{FJ}(x)$, we get
\[
\psi(x,z) = \Big(
xg_1(z)g_2(z), \quad -xz_1( z_1+z_2+2(z_2-z_1)(z_1^2+z_2^2)), \quad
-xg_1(z)
\Big).
\]
Next, we apply Algorithm \ref{alg:exchange:method}
to solve \reff{exm:degenerate:P}.
Indeed, for $k=0$, $Z_0 = \emptyset$, we get
\[ (x_1^0,y_1^0) \approx ( 2.0000,0.0000,0.0000), \]
which is the true global minimizer.
We also get
\[
z_1^0 \approx (4.6320, -4.6330)\times 10^{-5}, \quad
v_1^0 \approx -5.2510\times 10^{-8}.
\]
For a small value of $\epsilon$ (e.g., $10^{-6}$),
Algorithm \ref{alg:exchange:method}
terminates successfully with the global minimizer of
\reff{exm:degenerate:P}.
\end{exm}
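For illustration (the paper computes these points via Lasserre relaxations), the nonzero Fritz John point in the example above can be recovered by Newton's method applied to the defining system $g_1(z)=0$, $z_1+z_2+2(z_2-z_1)(z_1^2+z_2^2)=0$; the sketch below uses a hand-coded $2\times 2$ Newton step.

```python
def newton2(f, jac, z, iters=50):
    """Newton's method for a 2x2 system: solve jac(z) * dz = -f(z)."""
    for _ in range(iters):
        (a, b), (c, d) = jac(z)
        f1, f2 = f(z)
        det = a * d - b * c
        z = (z[0] - (d * f1 - b * f2) / det,
             z[1] - (a * f2 - c * f1) / det)
    return z

# System defining the Fritz John points of the lower level program.
def f(z):
    z1, z2 = z
    s = z1 * z1 + z2 * z2
    return (z1 * z1 - z2 * z2 - s * s,        # g_1(z) = 0
            z1 + z2 + 2 * (z2 - z1) * s)      # second defining equation

def jac(z):
    z1, z2 = z
    s = z1 * z1 + z2 * z2
    return ((2 * z1 - 4 * z1 * s, -2 * z2 - 4 * z2 * s),
            (1 - 2 * s + 4 * z1 * (z2 - z1), 1 + 2 * s + 4 * z2 * (z2 - z1)))

z = newton2(f, jac, (0.9, 0.25))
print(z)   # close to (0.8990, 0.2409)
```

Starting near the reported point, the iteration converges quadratically; the other solution $(0,0)$ is a singular root of the system, which is one more symptom of the degeneracy of this example.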
\subsection{Convergence analysis}
We study the convergence properties of Algorithm \ref{alg:exchange:method}.
For theoretical analysis,
one is mostly interested in its performance when
the tolerance parameter $\epsilon =0$ or
the maximum iteration number $k_{\max}=\infty$.
\begin{theorem} \label{theorem:exchange:simple}
For the simple bilevel polynomial program as in \reff{bilevel:pp},
assume the lower level program is as in \reff{inopt:minfg}.
Suppose the subproblems $(P_k)$ and each $(Q_i^k)$
are solved globally by Lasserre relaxations.
\begin{itemize}
\item [(i)]
Assume $\epsilon =0$.
If Algorithm \ref{alg:exchange:method} stops for some
$ k < k_{\max}$,
then each $(x^*,y^*) \in \mc{X}^*$
is a global minimizer of \reff{bilevel:pp}.
\item [(ii)] Assume $\epsilon =0$, $k_{\max}=\infty$, and the union
$\cup_{k\geq 0} Z_k$ is bounded.
Suppose Algorithm \ref{alg:exchange:method}
does not stop and each $S_k \ne \emptyset$ is finite.
Let $(x^*,y^*)$ be an arbitrary accumulation point of
the set $\cup_{k\geq 0} S_k$. If the value function $v(x)$,
as in \reff{def:v(x)}, is continuous at $x^*$,
then $(x^*, y^*)$ is a global minimizer of
the SBPP problem \reff{bilevel:pp}.
\item [(iii)]
Assume $k_{\max}=\infty$, the union $\cup_{k\geq 0} Z_k$ is bounded,
and the set $\Xi = \{(x,y): \, \psi(x,y) =0, \, \xi(x,y) \geq 0\}$ is compact.
{Let $\Xi_1 =\{x : \, \exists y, \, (x,y)\in \Xi\}$,
which is the projection of $\Xi$ onto the $x$-space.
Suppose $v(x)$ is continuous on $\Xi_1$.}
Then, for all $\epsilon>0$, Algorithm \ref{alg:exchange:method}
must terminate within finitely many steps,
and each {$(\bar{x}, \bar{y})\in \mc{X}^*$}
is a global minimizer of the approximate SIPP (\ref{pop:tilde(Pepsilon)}).
\end{itemize}
\end{theorem}
\begin{proof}
(i) The SBPP \reff{bilevel:pp} is equivalent to \reff{sipp:FsGH}.
Note that each optimal value $F_k^* \leq F^*$ and the sequence
$\{ F_k^* \}$ is monotonically increasing.
If Algorithm \ref{alg:exchange:method} stops at the $k$-th iteration,
then each $(x^*,y^*) \in {\mathcal{X}^*}$ is feasible for \reff{sipp:FsGH},
and also feasible for \reff{bilevel:pp}, so it holds that
\[
F^* \geq F_{k}^* = F(x^*, y^*) \geq F^*.
\]
This implies that $(x^*,y^*)$ is a global optimizer of problem \reff{bilevel:pp}.
(ii) Suppose Algorithm \ref{alg:exchange:method} does not stop
and each $S_k \ne \emptyset$ is finite.
For each accumulation point $(x^*, y^*)$ of the union $\cup_{k\geq 0} S_k$,
there exists a sequence $\{ k_\ell \}$ of integers such that
$k_\ell \to \infty$ as $\ell \to \infty$ and {
\[
(x^{k_\ell}, y^{k_\ell}) \to (x^*, y^*), \quad \mbox{where each } (x^{k_\ell}, y^{k_\ell}) \in S_{k_\ell}.
\]
Since the feasible set of problem $(P_{k_{\ell}})$ contains the one for problem \reff{bilevel:pp}, we have $ F^*_{k_\ell} = F(x^{k_\ell}, y^{k_\ell})\leq F^* \text{ and } $ hence $F(x^*,y^*)\leq F^*$ by the continuity of $F$. To show the opposite inequality it suffices to show that $(x^*,y^*)$ is feasible for problem \reff{bilevel:pp}. {Recall that
the function $\xi$ is defined as in \reff{sc3:xi:psi}.} Since $\xi(x^{k_\ell},y^{k_\ell}) \geq 0$ and $\psi(x^{k_\ell},y^{k_\ell}) =0$, by the continuity of the mappings $\xi, \psi$, we have $\xi(x^*,y^*)
\geq 0$
and $\psi(x^{*},y^{*}) =0$.}
Define the function
\begin{equation}\label{def:phi}
\phi(x,y):= {\inf\limits_{z\in Z}} H(x,y,z).
\end{equation}
Clearly, $\phi(x,y) = v(x) - f(x,y)$,
and $\phi(x^*,y^*) =0$ if and only if $(x^*,y^*)$
is a feasible point for \reff{bilevel:pp}.
By the definition of $v(x)$ as in \reff{def:v(x)}
and the continuity of $v(x)$ at $x^*$, we always have
$
\phi(x^*, y^*)\leq 0.
$
To prove $\phi(x^*,y^*) =0$, it remains to show $\phi(x^*, y^*) \geq 0$.
For all $k^{\prime}$ and for all $k_{\ell} \geq k^{\prime}$,
the point $(x^{k_\ell},y^{k_\ell})$ is feasible for the subproblem $(P_{k^\prime})$, so
\[
H(x^{k_\ell},y^{k_\ell},z)\geq 0\quad \forall z\in Z_{k^\prime}.
\]
Letting $\ell \to \infty$, we then get
\begin{equation} \label{H(x*y*z)>=0}
H(x^*,y^*,z)\geq 0\quad \forall z\in Z_{k^\prime}.
\end{equation}
The above is true for all $k^\prime$. In Algorithm~\ref{alg:exchange:method},
for each $k_{\ell}$, there exists $z^{k_\ell} \in T_i^{k_\ell}$,
for some $i$, such that
\[
{\phi(x^{k_\ell},y^{k_\ell})} = H(x^{k_\ell},y^{k_\ell},z^{k_\ell}).
\]
Since $ z^{k_\ell} \in Z_{k_\ell+1}$, by \reff{H(x*y*z)>=0}, we know
\[
H(x^*,y^*,z^{k_\ell} )\geq 0.
\]
Therefore, it holds that
\begin{equation}
\begin{array}{rcl}
\phi(x^*, y^*) & = & \phi(x^{k_\ell},y^{k_\ell})+\phi(x^*,y^*)
-\phi(x^{k_\ell},y^{k_\ell})\\
& \geq & [H(x^{k_\ell},y^{k_\ell},z^{k_\ell})
- H(x^*, y^*,z^{k_\ell})] + \\
& & [\phi(x^*, y^*)-\phi(x^{k_\ell},y^{k_\ell})].
\end{array}
\end{equation}
Since $z^{k_\ell}$ belongs to the bounded set $\cup_{k\geq 0} Z_k$,
there exists a subsequence $z^{k_{\ell,j}}$ such that $z^{k_{\ell,j}} \to z^* \in Z$.
The polynomial $H(x,y,z)$ is continuous at $(x^*, y^*, z^*)$. Since $v(x)$ is continuous at $x^*$,
$\phi(x,y) = v(x) - f(x,y)$ is also continuous at $(x^*, y^*)$.
Letting $\ell \to \infty$, we get $\phi(x^*, y^*)\geq 0$. Thus, $(x^*, y^*)$ is feasible for \reff{sipp:FsGH} and so $F(x^*,y^*) \geq F^*$.
Earlier, we already proved $F(x^*,y^*) \leq F^*$,
so $(x^*, y^*)$ is a global optimizer of \reff{sipp:FsGH},
i.e., $(x^*, y^*)$ is a global minimizer of
the SBPP problem \reff{bilevel:pp}.
(iii)
Suppose, by contradiction, that the algorithm does not stop within finitely many steps.
Then there exists a sequence $\{(x^k, y^k, z^k)\}$ such that
$(x^k,y^k) \in S_k$, $z^k \in \cup_{i=1}^{r_k} T_i^k$, and
\[
H(x^k, y^k, z^k) < -\epsilon
\]
for all $k$. Note that $(x^k,y^k) \in \Xi$ and $z^k \in Z_{k+1}$.
{By the assumption that $\Xi$ is compact and
$\cup_{k\geq 0} Z_k$ is bounded, the sequence $\{(x^k, y^k, z^k)\}$
has a convergent subsequence, say,
\[
(x^{k_\ell}, y^{k_\ell}, z^{k_\ell}) \, \to \, {(x^*,y^*,z^*)} \qquad
\mbox{ as } \quad \ell \to \infty.
\]
So, it holds that ${(x^*,y^*)}\in \Xi,~z^*\in Z$ and
$H{(x^*,y^*,z^*)} \leq -\epsilon$.
Since $\Xi$ is compact, the projection set $\Xi_1$ is also compact,
hence ${x^*} \in \Xi_1$. By the assumption, we know
$v(x)$ is continuous at ${x^*}$.
As in the proof of (ii), we have $\phi{(x^*,y^*)} =0$; hence ${(x^*,y^*)}$ is a feasible point for \reff{bilevel:pp}, and we get $$H{(x^*,y^*,z^*)} = f{(x^*,z^*)}-f{(x^*,y^*)}\geq 0.$$ However, this contradicts
$H{(x^*,y^*,z^*)} \leq -\epsilon$.
Therefore, Algorithm \ref{alg:exchange:method} must terminate within finitely many steps.
}
{Now suppose Algorithm \ref{alg:exchange:method} terminates within finitely many steps at $(\bar{x},\bar{y})\in \mathcal{X}^*$ with $\epsilon>0$. Then $(\bar{x}, \bar{y})$ is feasible for the approximate SIPP (\ref{pop:tilde(Pepsilon)}), and hence}
it is a global minimizer of (\ref{pop:tilde(Pepsilon)}).
\end{proof}
In Theorem \ref{theorem:exchange:simple},
we assumed that the subproblems $(P_k)$ and $(Q_i^k)$
can be solved globally by Lasserre relaxations.
This is a reasonable assumption;
see the remarks A)--D) after Algorithm~\ref{alg:exchange:method}.
In items (ii)--(iii),
the value function $v(x)$ is assumed to be continuous at certain points.
This can be ensured under some conditions;
the {\it restricted inf-compactness} (RIC) is one such condition.
The value function $v(x)$ is said to have RIC at $x^*$ if $v(x^*)$ is finite and
there exist a compact set $\Omega$ and a positive number $\epsilon_0$,
such that for all $\| x - x^* \| < \epsilon_0 $ with $v(x) <v(x^*)+\epsilon_0$,
there exists $z\in S(x) \cap \Omega$.
For instance, if the set $Z$ is compact, or the lower level objective
$f(x^*,z)$ is weakly coercive in $z$ with respect to set $Z$, i.e.,
\[ \lim_{z\in Z, \|z\|\rightarrow \infty} f(x^*,z)=\infty, \]
then $v(x)$ has restricted inf-compactness at $x^*$; see, e.g.,
\cite[\S6.5.1]{Clarke}.
Note that the union $\cup_{k\geq 0} Z_k$ is contained in $Z$.
So, if $Z$ is compact
then $\cup_{k\geq 0} Z_k$ is bounded.
\begin{prop}\label{valuefunction}
For the SBPP problem \reff{bilevel:pp},
assume the lower level program is as in \reff{inopt:minfg}.
If the value function $v(x)$ has restricted inf-compactness at $x^*$,
then $v(x)$ is continuous at $x^*$.
\end{prop}
\begin{proof}
On one hand, since the lower level constraint is independent of $x$,
the value function $v(x)$ is always upper semicontinuous
\cite[Theorem 4.22 (1)]{Bank}. On the other hand,
since the restricted inf-compactness holds it follows from \cite[page 246]{Clarke}
(or see the proof of \cite[Theorem 3.9]{Guo-L-Y-Z})
that $v(x)$ is lower semicontinuous.
Therefore $v(x)$ is continuous at $x^*$.
\end{proof}
\section{General Bilevel Polynomial Programs}
\label{sec:GBPP}
In this section, we study general bilevel polynomial programs as in \reff{bilevel:pp}.
For GBPPs, the feasible set $Z(x)$ of the lower level program \reff{df:P(x)}
varies as $x$ changes, i.e., the constraining polynomials $g_j(x,z)$
depend on $x$.
For each pair $(x,y)$ that is feasible for \reff{bilevel:pp},
$y$ is an optimizer for the {lower level program \reff{df:P(x)} parameterized by $x$},
so $y$ must be a Fritz John point of \reff{df:P(x)}, i.e.,
there exists {$ (\mu_0,\mu_1, \ldots, \mu_{m_2}) \ne 0 $} satisfying
\[
\mu_0 \nabla_z f(x,y) - \sum_{j \in [m_2] } \mu_j \nabla_z g_j(x,y) = 0,
\quad \mu_j g_j(x,y) = 0 \, (j \in [m_2]).
\]
For convenience, we still use $K_{FJ}(x)$ to denote the set
of Fritz John points of \reff{df:P(x)} at $x$.
The set $K_{FJ}(x)$ consists of common zeros of some polynomials.
As in \reff{pro::opti:2}, choose the polynomials
$(f(z), g_1(z), \ldots, g_m(z))$ to be
$(f(x,z), g_1(x,z), \ldots, g_{m_2}(x,z))$,
whose coefficients depend on $x$.
Then, construct $\psi_1, \ldots, \psi_L$ in the same way as in
\reff{jac:poly:psi}. Each $\psi_j$ is also a polynomial in $(x,z)$.
Thus, every $(x,y)$ feasible in \reff{bilevel:pp} satisfies
$\psi_j(x,y)=0$, for all $j$. For convenience of notation, we still denote
the polynomial tuples $\xi, \psi$ as in \reff{sc3:xi:psi}.
We have seen that \reff{bilevel:pp} is equivalent to
the {generalized} semi-infinite polynomial program
($H(x,y,z)$ is as in \reff{df:H(xyz)}):
\begin{equation} \label{sip::GBPP}
\left\{
\begin{array}{rl} F^* := \min\limits_{x\in \mathbb{R}^n, \, y\in \mathbb{R}^p} & \, F(x,y) \\
\text{s.t.} & \, \psi(x,y) = 0, \, \xi(x,y) \geq 0, \\
& \, H(x,y,z) \geq 0, \, \forall~z\in Z(x).
\end{array} \right.
\end{equation}
Note that the constraint $H(x,y,z) \geq 0$ in \reff{sip::GBPP}
is required for $z\in Z(x)$, which depends on $x$.
Algorithm \ref{alg:exchange:method} can also be applied to solve \reff{sip::GBPP}.
We first give an example showing how it works.
\begin{exm}\label{exm:A7}
(\cite[Example 3.23]{BIEXM})
Consider the GBPP:
\begin{equation}\label{exm:A7:p}
\left\{
\begin{aligned}
\min\limits_{x,y\in [-1,1]}& \quad x^2\\
\text{s.t.} & \quad 1+x-9x^2-y\leq 0, \\
& \quad y\in \underset{z\in[-1,1]}{\tt argmin} \{z \ \ \text{s.t.}~z^2(x-0.5)\leq 0\}.
\end{aligned}
\right.
\end{equation}
By simple calculations, one can show that
\[
Z({x}) = \left\{
\begin{aligned}
& \{ 0 \},\quad \quad \quad \quad x\in (0.5,1],\\
& [-1,1], \quad x\in [-1,0.5],
\end{aligned}
\right.
\quad
S(x) = \left\{
\begin{aligned}
& \{ 0 \},\quad \quad\quad x\in (0.5,1],\\
& \{ -1 \}, \quad x\in [-1,0.5].
\end{aligned}
\right.
\]
Let $\mathcal{U} = \{(x,y)\in [-1,1]^2: \, 1+x-9x^2-y\leq 0, \, y^2(x-0.5)\leq 0\}$.
The feasible set of \reff{exm:A7:p} is:
\[
\mathcal{F}:= \Big(
\left\{(x,0): x\in (0.5,1]\right\}\cup\left\{(x,-1): x\in [-1,0.5]\right\}
\Big)\cap \mathcal{U}.
\]
One can show that the global minimizer and the optimal value are
\[
(x^*,y^*) = \left(\frac{1-\sqrt{73}}{18},-1\right)
\approx (-0.4191, -1), \quad
F^* = \left(\frac{1-\sqrt{73}}{18}\right)^2 \approx 0.1757.
\]
By the Jacobian representation of Fritz John points, we get the polynomial
\[
\psi(x,y) = (x-0.5)y^2(y^2-1).
\]
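Both facts can be verified directly: $\psi$ vanishes on the graph of $S$, and $x^*$ is the negative root of $9x^2 - x - 2 = 0$, obtained by substituting $y=-1$ into $1+x-9x^2-y=0$ (a quick sanity check, not part of the algorithm):

```python
from math import sqrt

def psi(x, y):                  # Jacobian-representation polynomial above
    return (x - 0.5) * y**2 * (y**2 - 1)

def S(x):                       # lower-level solution map computed above
    return 0.0 if x > 0.5 else -1.0

# psi vanishes at every point (x, S(x)) with x in [-1, 1]
assert all(psi(k/100, S(k/100)) == 0.0 for k in range(-100, 101))

x = (1 - sqrt(73)) / 18         # negative root of 9x^2 - x - 2 = 0
assert abs(9*x**2 - x - 2) < 1e-12
print(round(x, 4), round(x**2, 4))  # -0.4191 0.1757
```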
We apply Algorithm \ref{alg:exchange:method} to solve \reff{exm:A7:p}.
The computational results are reported in Table~\ref{table:exmA7}.
\begin{table}[htb]
\caption{Results of Algorithm \ref{alg:exchange:method} for solving \reff{exm:A7:p}.}
\label{table:exmA7}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|} \hline
{\tt Iter} $k$ & $(x_i^k,y_i^k)$ & $z_{i,j}^k$ & $F_k^*$ & $v_i^k$ \\ \hline
0 & (0.0000, 1.0000) & -1.0000 & 0.0000 & -2.0000 \\ \hline
1 & (-0.4191, -1.0000) & -1.0000 & 0.1757 & -2.4e-11 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
As one can see, Algorithm~\ref{alg:exchange:method} takes two iterations
to solve \reff{exm:A7:p} successfully.
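The run in Table~\ref{table:exmA7} can be reproduced by a toy version of the exchange iteration, with crude grid search standing in for the Lasserre relaxations (a sketch under our own grids and tolerances, not the paper's solver; we restrict to the zeros $y\in\{0,\pm 1\}$ of $\psi$, since the remaining component $x=0.5$ of $\{\psi=0\}$ does not affect the iterates here):

```python
import numpy as np

# Toy exchange iteration for the GBPP above, with grid search in place of
# the Lasserre relaxations (verification sketch only; grids/tolerances are ours).
def Z(x):                               # discretized lower-level feasible set
    z = np.linspace(-1, 1, 401)
    return z[z**2 * (x - 0.5) <= 1e-9]

def H(x, y, z):
    return z - y                        # f(x,z) - f(x,y) with inner objective f = z

# candidate points with psi(x,y) = 0: restrict to the zeros y in {0, +1, -1}
cands = [(x, y) for x in np.linspace(-1, 1, 401) for y in (-1.0, 0.0, 1.0)
         if 1 + x - 9*x**2 - y <= 1e-9 and y**2 * (x - 0.5) <= 1e-9]

Zk = []
for k in range(10):
    # (P_k): minimize x^2 subject to H(x,y,z) >= 0 for the points z in Zk
    x, y = min(((x, y) for (x, y) in cands
                if all(H(x, y, z) >= -1e-9 for z in Zk)), key=lambda p: p[0]**2)
    # (Q^k): check whether y is lower-level optimal at x
    zs = Z(x)
    z = zs[np.argmin(H(x, y, zs))]
    if H(x, y, z) >= -1e-6:
        break                           # y in S(x) up to tolerance
    Zk.append(z)

print(round(x, 3), round(y, 3))         # close to the optimizer (-0.4191, -1)
```

The iterates match the table: $(0,1)$ at $k=0$, then $(\approx -0.419,-1)$ at $k=1$.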
\iffalse
Let $\mathcal{X} = \{ \psi(x,y) = 0, y^2(x-0.5)\leq 0\}\cap \mathcal{U}$.
\iffalse
\begin{figure}
\centering
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=1.15\textwidth]{Figure3.pdf}
\caption{}
\label{fig:exm:general:1}
\end{subfigure}%
\quad \quad \quad
\begin{subfigure}{0.4\textwidth}
\includegraphics[width=1.2\textwidth]{Figure4.pdf}
\caption{}
\label{fig:exm:general:2}
\end{subfigure}
\caption{\small{(A) Feasible region $\mathcal{F}$ (intersection of the red lines with the purple shaded area). (B) Relaxed feasible region $\mathcal{X}$ (intersection of the red lines with the purple shaded area).}}
\label{fig:exm:general:all}
\end{figure}
\fi
Then \reff{exm:A7:p} is equivalent to the SIPP:
\begin{equation*} \label{exm:A7:p:ref}
\left\{
\begin{aligned}
\min\limits_{(x,y)\in \mathcal{X}} & \quad x^2\\
\text{s.t.} & \quad z-y\geq 0 ,~\forall~z\in Z({x}).
\end{aligned}
\right.
\end{equation*}
At the first step of Algorithm~\ref{alg:GBPP}, we solve
\[
(P_0):\quad F_0^*: = \min\limits_{(x,y)\in \mathcal{X}} x^2,
\]
and get
\[
(x^0,y^0) = (0.0000,1.0000), \quad F(x^0,y^0) = 1.7537\times 10^{-9}.
\]
To check if $y^0\in S(x^0)$ or not, we solve
\[
(Q^0): v^0: = \min\limits_{z\in Z({x^0})}~ H(x^0,y^0,z) = z-y^0,
\]
and obtain that
\[
z^0 = -1.0000,\quad v^0=H(x^0,y^0,z^0) = -2.0000<0.
\]
That is, $y^0 \not\in S(x^0)$.
Let $Z_1=\{z^0\}$, and solve
\begin{equation*}
\begin{aligned}
(P_1):~~F_1^*:=&\min\limits_{(x,y)\in \mathcal{X}} F(x,y) = x^2\\
& \quad \text{s.t.}~~z^0-y \geq 0
\end{aligned}
\end{equation*}
and we get the optimal solution
$$(x^1,y^1) = (-0.4191,-1.0000), \quad F(x^1,y^1) = 0.1757.$$
Solve the optimization \textcolor{red}{problem}:
\[
(Q^1): v^1: = \min\limits_{z\in Z({x^1})}~ H(x^1,y^1,z) = z-y^1,
\]
we get
\[
z^1 = -1.0000,\quad v^1=H(x^1,y^1,z^1) = -2.4076\times 10^{-11}.
\]
Since $v^1 \approx 0$, Algorithm \ref{alg:GBPP} terminates,
and return the point $(x^1,y^1)$.
It is indeed the global minimizer of \reff{exm:A7:p}.
\fi
\qed
\end{exm}
However, we would like to point out that
Algorithm~\ref{alg:exchange:method} might not solve GBPPs globally.
The following is such an example.
\begin{exm}\label{exm:A12}
(\cite[Example 5.2]{BIEXM})
Consider the GBPP:
\begin{equation} \label{exm:A12:p}
\left\{ \begin{array}{rl}
\min\limits_{x \in \mathbb{R}, y\in \mathbb{R}} & \quad (x-3)^2+(y-2)^2\\
\text{s.t.} & \quad 0 \leq x \leq 8, \quad y \in S(x),
\end{array} \right.
\end{equation}
where $S(x)$ is the set of minimizers of the optimization problem
\[
\left\{ \begin{array}{rl}
\min\limits_{z \in \mathbb{R}} & (z-5)^2\\
\text{s.t.}~& 0\leq z \leq 6, \quad -2x+z-1\leq 0, \\
& x-2z+2\leq 0, \quad x+2z-14\leq 0.
\end{array} \right.
\]
It can be shown that
\[
S(x) = \left\{
\begin{aligned}
& \{ 1+2x \},& x\in [0,2],\\
& \{ 5 \}, & x\in (2,4], \\
& \{ 7-\frac{x}{2} \},& x\in (4,6],\\
& {\emptyset},& x\in (6,8].
\end{aligned}
\right.
\]
The feasible set of \reff{exm:A12:p} is thus the set
\[
\mathcal{F}: =\{(x,y) \mid x\in [0,6], \,\, y\in S(x)\}.
\]
It consists of three connected line segments.
%
%
One can easily check that the global optimizer and the optimal value are
\[
(x^*,y^*) = (1,3), \quad \quad F^* = 5.
\]
The polynomial $\psi$ in the Jacobian representation is
\[
\psi(x,y) \, = \, (-2x+y-1)(x-2y+2)(x+2y-14)y(y-6)(y-5).
\]
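The claimed optimum of \reff{exm:A12:p} can be cross-checked by brute force: solve the lower level program on a grid for each $x$ and minimize $F$ over the resulting graph of $S$ (a verification sketch with our own grids, unrelated to the algorithm):

```python
import numpy as np

def lower_argmin(x):
    # minimize (z-5)^2 over the lower-level feasible set above
    z = np.linspace(0, 6, 1201)
    ok = (-2*x + z - 1 <= 1e-9) & (x - 2*z + 2 <= 1e-9) & (x + 2*z - 14 <= 1e-9)
    z = z[ok]
    return None if z.size == 0 else z[np.argmin((z - 5)**2)]

best = (np.inf, None)
for x in np.linspace(0, 8, 1601):
    y = lower_argmin(x)
    if y is None:            # S(x) is empty for x in (6, 8]
        continue
    F = (x - 3)**2 + (y - 2)**2
    if F < best[0]:
        best = (F, (x, y))

F_star, (x_star, y_star) = best
print(round(F_star, 4), round(x_star, 4), round(y_star, 4))  # 5.0 1.0 3.0
```

This recovers $(x^*,y^*)=(1,3)$ with $F^*=5$, as stated above.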
We apply Algorithm \ref{alg:exchange:method} to solve \reff{exm:A12:p}.
The computational results are reported in Table~\ref{table:exmA12p}.
\begin{table}[htb]
\caption{Results of Algorithm \ref{alg:exchange:method} for solving \reff{exm:A12:p}.}
\label{table:exmA12p}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|} \hline
{\tt Iter} $k$ & $(x_i^k,y_i^k)$ & $z_{i,j}^k$ & $F_k^*$ & $v_i^k$ \\ \hline
0 & (2.7996, 2.3998) & 5.0021 & 0.2000 & -6.7611 \\ \hline
1 & (2.9972, 5.0000) & 5.0021 & 9.0001 & 4.41e-6 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
For $\epsilon =10^{-6}$, Algorithm \ref{alg:exchange:method} stops at $k=1$
and returns the point $(2.9972, 5.0000)$,
which is not a global minimizer.
\iffalse
Let
\begin{equation*}
\mathcal{X}:=\left\{(x,y)\in [0,8]\times [0,6]\left|
\begin{aligned}
& \psi(x,y)=0,-2x+y-1\leq 0,\\
& x-2y+2\leq 0,x+2y-14\leq 0
\end{aligned}
\right.
\right\}.
\end{equation*}
%
%
Clearly, $\mathcal{F}\subseteq \mathcal{X}$.
\iffalse
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{Figure5.pdf}
\caption{\small{Feasible region $\mathcal{F}$ of bilevel program \reff{exm:A12:p} (in blue lines). Relaxed feasible region $\mathcal{X}$ (the intersection of the red lines with the yellow triangle). Point (3,2) in blue star.}}
\label{fig:exm:fail:GBLPP}
\end{figure}
\fi
Problem (\ref{exm:A12:p}) is equivalent to the generalized SIPP:
\begin{equation} \label{exm:A12:p:ref}
\left\{
\begin{aligned}
\min\limits_{(x,y)\in \mathcal{X}} & \quad (x-3)^2+(y-2)^2\\
\text{s.t.} & \quad (z-5)^2 - (y-5)^2\geq 0 ,~\forall~z\in Z({x}).
\end{aligned}
\right.
\end{equation}
By Algorithm \ref{alg:GBPP}, we first let $Z_0=\emptyset $ and then solve
\[
(P_0):~~F_0^*:=\min\limits_{(x,y)\in \mathcal{X}}~ (x-3)^2+(y-2)^2.
\]
We get
\[
(x^0,y^0) = (2.7996,2.3998),~~ F_0^* = 0.2000.
\]
To check if $y^0\in S(x^0)$ or not, we solve
\[
(GQ^0): v^0: = \min\limits_{z\in Z({x^0})}~ H(x^0,y^0,z) = (z-5)^2 - (y^0-5)^2,
\]
and get
\[
z^0 = 5.0021, \quad v^0 = -6.7610<0.
\]
Since the optimal value $v^0 < 0$, we let $Z_1=\{z^0\}$, solve
\begin{equation*}
\begin{aligned}
(P_1):~~F_1^*:=&\min\limits_{(x,y)\in \mathcal{X}} F(x,y) = (x-3)^2+(y-2)^2 \\
& \quad \text{s.t.}~~(z^0-5)^2 - (y -5)^2 \geq 0.
\end{aligned}
\end{equation*}
and get
$$(x^1,y^1) = (2.9864,5.0000),~~F_1^* = 9.0101.$$
Solve the problem
\[
(Q^1): v^1: = \min\limits_{z\in Z({x^1})}~ H(x^1,y^1,z) = (z-5)^2 - (y^1-5)^2.
\]
and we get
\[
z^1 \approx 5.0007,\quad v^1 \approx 6.6667\times 10^{-6}.
\]
Hence, $y^1\in S(x^1)$ (up to a tiny numerical error).
So, Algorithm \ref{alg:exchange:method} stops at $k=1$,
and returns the point $(x^1,y^1)$.
However, $(x^1,y^1)$ is not the global minimizer of \reff{exm:A12:p}.
For this case, Algorithm \ref{alg:exchange:method}
fails to find a global minimizer.
\fi
However, it is interesting to note that
the computed solution $(2.9972, 5.0000) \approx (3,5)$ is
a local optimizer of problem \reff{exm:A12:p}.
\qed
\end{exm}
Why does Algorithm \ref{alg:exchange:method} fail to
find a global minimizer in Example~\ref{exm:A12}?
By adding the point $z^0 = 5.0021$ computed at iteration $k=0$
to the discrete subset $Z_1$,
the feasible set of $(P_1)$ becomes
\[
\{ x\in X,\, y\in Z(x)\} \cap \{ \psi(x,y) = 0 \} \cap \{ |y -5| \leq 0.0021\}.
\]
It does not include the unique global optimizer $(x^*,y^*)=(1,3)$.
In other words, the reason is that
$H(x^*,y^*,z^0) \geq 0$ fails to hold
and hence by adding $z^0$, the true optimal solution
$(x^*,y^*)$ is not in the feasible region of problem $(P_1)$.
From the above example, we observe that the difficulty
for solving GBPPs globally comes from
the dependence of the lower level feasible set on $x$.
For a global optimizer $(x^*,y^*)$,
it is possible that $H(x^*,y^*,z_{i,j}^k) \not\geq 0$
for some $z_{i,j}^k$ at some step,
i.e., $(x^*,y^*)$ may fail to satisfy the newly added constraint in $(P_{k+1})$:
$H(x,y,z_{i,j}^k)\geq 0$. In other words,
$(x^*,y^*)$ may not be feasible for the subproblem $(P_{k+1})$.
Let $\mathcal{X}_k$ be the feasible set of problem $(P_{k})$.
Since $Z_k\subseteq Z_{k+1}$, we have $\mathcal{X}_{k+1}\subseteq \mathcal{X}_k$
and $(x^*,y^*)$ is not feasible for $(P_{\ell})$, for all $\ell \geq k+1$.
In such a case, Algorithm \ref{alg:exchange:method}
will fail to find a global optimizer.
However, this will not happen for SBPPs, since {$Z(x)\equiv Z$} for all $x$.
For all $z\in Z$, we have $H(x^*,y^*,z)\geq 0$, i.e., $(x^*,y^*)$
is feasible for all subproblems $(P_k)$.
This is why Algorithm \ref{alg:exchange:method}
converges to global optimal solutions
when solving SBPPs. However, under some further conditions,
Algorithm \ref{alg:exchange:method} can solve GBPPs globally.
\iffalse
Our goal is to use the exchange method to solve the problem (\ref{sip::GBPP}). For Algorithm \ref{alg:exchange:method} as an exchange method to converge, one needs the following condition on the optimal values:
\[
F_0^*\leq F_1^* \leq \cdots \leq F_k^* \leq {F}^*_{k+1} \leq \cdots \leq {F}^*.
\]
Since $Z_k\subseteq Z_{k+1}$ by our construction,
we always have monotonicity $F_k^*\leq F_{k+1}^*$.
At the iteration $k$, if $Z_{k+1}=Z_k\cup \{z^k\}$, \textcolor{blue}{we hope that there exists at least one global minimizer $(\bar{x},\bar{y})$ of problem $(P)$ satisfying $H(\bar{x},\bar{y},z)\geq 0$ for all $z\in Z_{k+1}$, i.e., the new added constraint $H(x,y,z^k)\geq 0$ do not delete all of the global minimizers of problem $(P)$. Then the feasible region of $(P_k)$ would contain at least one global optimal solution of $(P)$ and consequently $F_{k+1}^* \leq F^*$. But in general when $Z(x)$ depends on $x$, the set of global minimizers of $(P)$ are unknown and it might happen that the new added inequality $H(x,y,z^k)\geq 0$ in problem $(P_{k+1})$ fails to hold for all global minimizers $(\bar{x},\bar{y})$, then we might have $F_{k+1}^*> F^*$.
However if we can ensure that there exists at least one global minimizer $(\bar{x},\bar{y})$ of BLPP problem $(P)$ satisfying the inequality $H(\bar{x},\bar{y},z)\geq 0$ for all $z\in Z_k$, for all $k\in \mathbb{N}$,} then all the proof for Theorem \ref{theorem:exchange:simple} go through without changes and we obtain the following convergence result.
\fi
\begin{theorem} \label{theorem:exchange:general}
For the general bilevel polynomial program as in \reff{bilevel:pp},
assume that the lower level program is as in \reff{df:P(x)}
and the minimum value $F^*$ is achievable at a point
$(\bar{x},\bar{y})$ such that
$H(\bar{x},\bar{y},z)\geq 0$ for all $z\in Z_k$ and for all $k$.
Suppose $(P_k)$ and $(Q_i^k)$
are solved globally by Lasserre relaxations.
\begin{itemize}
\item [(i)] Assume $\epsilon=0$. If
Algorithm \ref{alg:exchange:method} stops
for some $k < k_{\max}$, then each $(x^*,y^*) \in \mc{X}^*$
is a global minimizer of the GBPP problem \reff{bilevel:pp}.
\item [(ii)]
Assume $\epsilon=0$, $k_{\max}=\infty$, and the union
$\cup_{k\geq 0} Z_k$ is bounded.
Suppose Algorithm \ref{alg:exchange:method}
does not stop and each $S_k \ne \emptyset$ is finite.
Let $(x^*,y^*)$ be an arbitrary accumulation point of
the set $\cup_{k\geq 0} S_k$. If the value function $v(x)$,
defined as in \reff{def:v(x)}, is continuous at $x^*$,
then $(x^*, y^*)$ is a global minimizer of the GBPP problem \reff{bilevel:pp}.
\item [(iii)]
Assume $k_{\max}=\infty$,
the union $\cup_{k\geq 0} Z_k$ is bounded,
and the set $\Xi = \{(x,y):\, \psi(x,y) =0, \xi(x,y) \geq 0\}$ is compact.
{Let $\Xi_1 = \{x :\, \exists y, (x,y)\in \Xi\}$,
the projection of $\Xi$ onto the $x$-space.
Suppose $v(x)$ is continuous on $\Xi_1$.}
Then, for all $\epsilon>0$, Algorithm \ref{alg:exchange:method}
must terminate within finitely many steps.
\end{itemize}
\end{theorem}
\begin{proof} By the assumption, the point $(\bar{x},\bar{y})$
is feasible for the subproblem $(P_k)$, for all $k$.
Hence, we have $F_k^*\leq F^*$.
The rest of the proof is the same as the proof of Theorem
\ref{theorem:exchange:simple}.
\end{proof}
In the above theorem, the existence of the point
$(\bar{x},\bar{y})$ satisfying the requirement may be hard to check.
If $v(x)$ has restricted inf-compactness at $x^*$
and the Mangasarian-Fromovitz constraint qualification (MFCQ)
holds at all solutions of the lower level problem (\ref{df:P(x)}),
then the value function $v(x)$ is Lipschitz continuous at $x^*$;
see \cite[Corollary 1]{Clarke}.
Recently, it was shown in \cite[Corollary 4.8]{Guo-L-Y-Z}
that the MFCQ can be replaced by a weaker condition called
quasinormality in the above result.
\section{Numerical experiments}
\label{sc:numexp}
In this section, we present numerical experiments for solving BPPs.
In Algorithm~\ref{alg:exchange:method},
the polynomial optimization subproblems are solved by
Lasserre semidefinite relaxations,
implemented in software {\tt Gloptipoly~3} \cite{Gloptipoly}
and the SDP solver {\tt SeDuMi} \cite{sedumi}.
The computation is implemented with Matlab R2012a on a
MacBook Pro 64-bit OS X (10.9.5) system with 16GB memory and 2.3 GHz Intel Core i7 CPU.
In the algorithms, we set the parameters
$k_{\max}= 20$ and $\epsilon = 10^{-5}$.
In reporting computational results,
we use $(x^*,y^*)$ to denote the computed global optimizers,
$F^*$ to denote the {value of the outer objective function $F$ at $(x^*,y^*)$},
$v^*$ to denote $\inf_{z\in Z }~H(x^*,y^*,z)$,
{\tt Iter} to denote the total number of iterations for convergence,
and {\tt Time} to denote the CPU time taken to solve the problem
(in seconds unless stated otherwise).
When $v^* \geq -\epsilon$,
the computed point $(x^*,y^*)$
is considered a global minimizer of $(P)$,
up to the tolerance $\epsilon$. Mathematically,
to solve BPPs exactly, we would need to set $\epsilon =0$; however,
in computational practice round-off errors always exist,
so we choose $\epsilon >0$ to be a small number.
\subsection{Examples of SBPPs}
\begin{exm} \label{exm:appendix:13}
(\cite[Example 3.26]{BIEXM})
Consider the SBPP:
\begin{equation*}
\left\{
\begin{array}{rl}
\min\limits_{x\in {\mathbb R}^2,y\in {\mathbb R}^3} & x_1y_1+x_2y_2^2+x_1x_2y^3_3\\
\text{s.t.} & x\in [-1,1]^2, \,\, 0.1-x_1^2\leq 0, \\
& 1.5-y_1^2-y_2^2-y_3^2\leq 0, \\
& -2.5+y_1^2+y_2^2+y_3^2\leq 0, \\
& y\in S(x), \\
\end{array}
\right.
\end{equation*}
where $S(x)$ is the set of minimizers of
\[
\min\limits_{z\in [-1,1]^3} \quad
x_1z_1^2+x_2z_2^2+(x_1-x_2)z_3^2.
\]
It was shown in \cite[Example 3.26]{BIEXM} that the unique global optimal solution is
$$x^* = (-1,-1), \quad y^* = (1,\pm 1,-\sqrt{0.5}).$$
Algorithm~\ref{alg:exchange:method} terminates after one iteration.
It takes about 14.83 seconds. We get
\[
x^* \approx (-1,-1), \quad y^* \approx (1,\pm 1,-0.7071),
\]
\[
F^* \approx -2.3536, \quad v^* \approx -5.71 \times 10^{-9}.
\]
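The reported optimal value can be verified by substituting the optimizer into the outer objective $F = x_1y_1 + x_2y_2^2 + x_1x_2y_3^3$ (a sanity check):

```python
from math import sqrt

# plug the reported optimizer into the outer objective of the SBPP above
x1, x2 = -1.0, -1.0
y1, y2, y3 = 1.0, 1.0, -sqrt(0.5)   # either sign of y2 gives the same value
F = x1*y1 + x2*y2**2 + x1*x2*y3**3
print(round(F, 4))  # -2.3536
```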
\iffalse
New added equations generated by the Jacobian representation are:
\begin{equation*}
\begin{aligned}
\psi_1(x,y) &:= 2x_1y_1(y_1^2-1)(y_2^2-1)(y_3^2-1)=0,\\
\psi_2(x,y) &: = 2x_2y_2(y_1^2-1)(y_2^2-1)(y_3^2-1)=0,\\
\psi_3(x,y) &: = 2y_3(x_1-x_2)(y_1^2-1)(y_2^2-1)(y_3^2-1)=0,\\
\psi_4(x,y) &: = 4x_2y_1y_2(y_2^2-1)(y_3^2-1)=0,\\
\psi_5(x,y) &: = 4y_1y_3(x_1-x_2)(y_2^2-1)(y_3^2-1)=0,\\
\psi_6(x,y) &: = 4x_1y_1y_2(y_1^2-1)(y_3^2-1)=0,\\
\psi_7(x,y) &:= 4y_2y_3(x_1-x_2)(y_1^2-1)(y_3^2-1)=0,\\
\psi_8(x,y) &: = 4x_1y_1y_3(y_1^2-1)(y_2^2-1)=0,\\
\psi_9(x,y) &:= 4x_2y_2y_3(y_1^2-1)(y_2^2-1)=0,\\
\psi_{10}(x,y) &: = 8y_1y_2y_3(x_1-x_2)(y_3^2-1)=0,\\
\psi_{11}(x,y) &:= 8y_1y_2y_3x_2(1-y_2^2)=0,\\
\psi_{12}(x,y) &:= 8y_1y_2y_3x_1(y_1^2-1)=0.\\
\end{aligned}
\end{equation*}
\fi
\end{exm}
\begin{exm} Consider the SBPP:
\begin{equation}\label{SBPP:exm:5.2}
\left\{
\begin{array}{rl}
\min\limits_{x\in {\mathbb R}^2,y\in {\mathbb R}^3} & x_1y_1+x_2y_2+x_1x_2y_1y_2y_3\\
\text{s.t.} & x\in [-1,1]^2, \,\, y_1y_2-x_1^2\leq 0, \\
& y\in S(x), \\
\end{array}
\right.
\end{equation}
where $S(x)$ is the set of minimizers of
\[
\left\{ \begin{array}{rl}
\min\limits_{z\in {\mathbb R}^3} & x_1z_1^2+x_2^2z_2z_3-z_1z_3^2\\
\text{s.t.}~& 1\leq z_1^2+z_2^2+z_3^2\leq 2.
\end{array} \right.
\]
Algorithm~\ref{alg:exchange:method} terminates after one iteration.
It takes about 13.45 seconds. We get
\[
x^* \approx (-1,-1), \,\quad y^* \approx (1.1097,0.3143,-0.8184),
\]
\[
F^* \approx -1.7095, \, \quad
v^* \approx -1.19 \times 10^{-9}.
\]
By Theorem \ref{theorem:exchange:simple},
we know $(x^*,y^*)$ is a global optimizer,
up to a tolerance around $10^{-9}$.
\end{exm}
\begin{exm}\label{example:mitsos}
We consider some test problems from \cite{BIEXM}.
For convenience of display, we choose the problems
that have common constraints $x \in [-1,1]$
for the outer level program and $z \in [-1,1]$
for the inner level program.
When Algorithm~\ref{alg:exchange:method} is applied,
all these SBPPs are solved successfully.
The outer objective $F(x,y)$, the inner objective $f(x,z)$,
the global optimizers $(x^*,y^*)$,
the number of consumed iterations {\tt Iter}, the CPU time taken to solve the problem,
the optimal value $F^*$, and the value $v^*$ are reported in Table~\ref{SBPP:small}.
In all problems, {except Ex. 3.18 and Ex. 3.19,}
the optimal solutions we obtained coincide with those given in \cite{BIEXM}.
For Ex.~3.18, the global optimal solution for minimizing the upper level objective $-x^2+y^2$ subject to the constraints
$x,y\in [-1,1]$ is $x^*=1$, $y^*=0$. It is easy to check that $y^*=0$ is optimal for the lower level problem parameterized by $x^*=1$, and hence $(x^*,y^*)=(1,0)$ is also the unique global minimizer for the SBPP in Ex.~3.18.
{For Ex.~3.19, as shown in \cite{BIEXM},}
the optimal solution must have $x^*\in (0,1)$.
For such $x^*$, $S(x^*)=\{\pm \sqrt{x^*}\}$. Plugging $y=\pm \sqrt{x}$ into the upper level objective gives $F(x,y)= \pm x\sqrt{x}\mp\sqrt{x}+\frac{x}{2}$, and the minimum over $0<x<1$ occurs on the branch $y=\sqrt{x}$. Minimizing $F(x,y)= x\sqrt{x}-\sqrt{x}+\frac{x}{2}$ over $0<x<1$ then gives
$x^*=(\frac{\sqrt{13}-1}{6})^2{\approx 0.1886}$,
$y^*=\frac{\sqrt{13}-1}{6}{\approx 0.4343}.$
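This closed form can be checked numerically: $t=\sqrt{x^*}$ is the positive root of $3t^2+t-1=0$, which is the stationarity condition for the branch $y=\sqrt{x}$ (a sanity check):

```python
from math import sqrt

t = (sqrt(13) - 1) / 6      # t = sqrt(x*), positive root of 3t^2 + t - 1 = 0
x, y = t**2, t
# stationarity of h(x) = x^(3/2) - x^(1/2) + x/2 on (0, 1):
# h'(x) = (3/2)sqrt(x) - 1/(2 sqrt(x)) + 1/2 = 0  <=>  3x + sqrt(x) - 1 = 0
assert abs(3*x + t - 1) < 1e-12
print(round(x, 4), round(y, 4))  # 0.1886 0.4343
```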
\begin{table}[htb]
\caption{Results for some SBPP problems in \cite{BIEXM}.
They have the common constraints $x \in [-1,1]$ and $z \in [-1,1]$.
}
\label{SBPP:small}
\centering
\begin{scriptsize}
\begin{tabular}{|c|l|c|c|c|c|c|} \hline
Problem & \qquad \qquad SBPP & $(x^*,y^*)$ & {\tt Iter}
& {\tt Time} & $F^*$ & $v^*$ \\ \hline
Ex. 3.14 & $\begin{array}{l} F = (x-1/4)^2 + y^2 \\ f = z^3/3 - xz \end{array} $
& (0.2500, 0.5000) & 2 & 0.49 & 0.2500 & -5.7e-10 \\ \hline
Ex. 3.15 & $\begin{array}{l} F = x+y \\ f = xz^2/2-z^3/3 \end{array} $
& (-1.0000, 1.0000) & 2 & 0.42 & 2.79e-8 & -4.22e-8 \\ \hline
Ex. 3.16 & $\begin{array}{l} F = 2x+y \\ f = -xz^2/2- z^4/4 \end{array} $
& (-0.5, -1), (-1, 0) & 2 & 0.47 & -2.0000 & -6.0e-10 \\ \hline
Ex. 3.17 & $\begin{array}{l} F = (x+1/2)^2 + y^2/2 \\ f = xz^2/2+z^4/4 \end{array} $
& $(-0.2500, \pm 0.5000)$ & 4 & 1.12 & 0.1875 & -8.3e-11 \\ \hline
Ex. 3.18 & $\begin{array}{l} F = -x^2 + y^2 \\ f = xz^2- z^4/2 \end{array} $
& (1.0000, 0.0000) & 2 & 0.44 & -1.0000 & -3.1e-13 \\ \hline
Ex. 3.19 & $\begin{array}{l} F = xy - y + y^2/2 \\ f = -xz^2+z^4/2 \end{array} $
& (0.1886, 0.4343) & 2 & 0.41 & -0.2581 & -3.6e-12 \\ \hline
Ex. 3.20 & $\begin{array}{l} F = (x-1/4)^2 + y^2 \\ f = z^3/3-x^2z \end{array} $
& (0.5000, 0.5000) & 2 & 0.38 & 0.3125 & -1.1e-10 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
\end{exm}
\iffalse
\begin{table}[htb]\caption{Results for small SBPPs as in \cite{BIEXM} } \label{SBPP:small}
\centering
\begin{scriptsize}
\begin{tabular}{|l |c|c|c|c|c|c|c|} \hline
No. & $x^*$ & $y^*$ & $z^*$ & $\text{Iter}$ & $F^*$ & $|v^*|$ \\
\hline \cite[Example 3.11]{BIEXM} & 0.0000 & -0.8000 & -0.8000 & 1 & -0.8000 & 1.84e-9 \\ \hline
\cite[Example 3.12]{BIEXM}* & 0.0312 & -0.0000& 3.68e-9 & 10 & -0.0312 & 4.88e-4\\
\hline
\cite[Example 3.13]{BIEXM}* & -0.0216 & 1.0000 & -1 & 10 & -1.0216 & 2.02e-5 \\
\hline
\cite[Example 3.14]{BIEXM} & 0.2500 & 0.5000 & -1~\text{and}~0.5 & 2 & 0.2500& 5.7e-10 \\
\hline
\cite[Example 3.15]{BIEXM} & -1 & 1 & 1 & 2 & 2.79e-8 & 4.22e-8 \\
\hline
\cite[Example 3.16]{BIEXM} & -0.5 and -1 & -1 and 0 & -1 and 6.16e-9& 2 & -2.0000 & 6.0e-10 \\
\hline
\cite[Example 3.17]{BIEXM} & -0.2500 & $\pm 0.5000$ & $\pm0.5000$ & 4 & 0.1875 & 8.3e-11 \\
\hline
\cite[Example 3.18]{BIEXM} & 1 & 0& -1.05e-16& 2 & -1 & 3.1e-13\\
\hline
\cite[Example 3.19]{BIEXM} & 0.1886 & 0.4342 & $\pm 0.4342$ & 2 & -0.2581 & 3.6e-12\\
\hline
\cite[Example 3.20]{BIEXM} & 0.5000 & 0.5000 & -1 ~\text{and}~0.5 & 2 & 0.3125& 1.1e-10 \\
\hline
\cite[Example 3.21]{BIEXM} & 0.6452 & -0.4381 & -0.4381 & 2 & 0.1939 & 5.5e-10 \\
\hline
\cite[Example 3.24]{BIEXM} & 0.2105 & 1.7990 & 0.2430 and 1.7990 & 2 & -1.7547 & 1.70e-9 \\
\hline
\cite[Example 3.26]{BIEXM} & (-1,-1) & $(1,\pm 1,-0.7071)$ & $(1,\pm 1,-0.7071)$ & 1 & -2.3536 & 5.71e-9\\
\hline
\end{tabular}
\end{scriptsize}
\end{table}
\fi
\begin{exm}Consider the SBPP:
\begin{equation}\label{exm:bigger:SBPP}
\left\{
\begin{array}{rl}
\min\limits_{x\in {\mathbb R}^4,y\in {\mathbb R}^4} & x^2_1y_1+x_2y_2+x_3y_3^2+x_4y_4^2\\
\text{s.t.} & \|x\|^2 \leq 1, \,\, y_1y_2-x_1\leq 0,\\
& y_3y_4-x_3^2\leq 0,\,\, y\in S(x), \\
\end{array}
\right.
\end{equation}
where $S(x)$ is the set of minimizers of
\[
\left\{ \begin{array}{rl}
\min\limits_{z\in {\mathbb R}^4} & z_1^2-z_2(x_1+x_2)-(z_3+z_4)(x_3+x_4)\\
\text{s.t.}~& \|z\|^2 \leq 1,\,\, z_2^2+z_3^2+z_4^2-z_1\leq 0.
\end{array} \right.
\]
We apply Algorithm \ref{alg:exchange:method} to solve \reff{exm:bigger:SBPP}.
The computational results are reported in Table~\ref{table:exm:bigger:SBPP}. As one can see, Algorithm \ref{alg:exchange:method} stops when $k=4$
and solves \reff{exm:bigger:SBPP} successfully.
It takes about 20 minutes to solve the problem.
By Theorem \ref{theorem:exchange:simple},
we know the point $(x_i^k,y_i^k)$ obtained at $k=4$
is a global optimizer for
\reff{exm:bigger:SBPP}, up to a tolerance around $10^{-8}$.
\begin{table}[htb]
\caption{Results of Algorithm \ref{alg:exchange:method} for solving \reff{exm:bigger:SBPP}.}
\label{table:exm:bigger:SBPP}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|} \hline
{\tt Iter} $k$ & $(x_i^k,y_i^k)$ & $F_k^*$ & $v_i^k$ \\ \hline
0 & (-0.0000,1.0000,-0.0000,0.0000,0.6180,-0.7862,~0.0000,~0.0000) & -0.7862 & -1.6406 \\
\hline
1 & (0.0000,-0.0000,0.0000,-1.0000,0.6180, -0.0000,0.0000,-0.7862) & -0.6180 & -0.3458 \\
& (0.0003,-0.0002,-0.9999,0.0000,0.6180,~0.0001,-0.7861,-0.0000) & -0.6180 & -0.3458 \\
\hline
2 & (0.0000,-0.0000,-0.8623,-0.5064,0.6180,-0.0000,-0.6403,-0.4561) & -0.4589 & -0.0211 \\
\hline
3 & (0.0000,-0.0000,-0.7098,-0.7042,0.6180,-0.0000,-0.5570,-0.5548) & -0.4371 & -6.37e-5\\ \hline
4 & (0.0000,-0.0000,-0.7071,-0.7071,0.6180,0.0000,-0.5559,-0.5559)
& -0.4370& -2.27e-8 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
\end{exm}
An interesting special case of SBPPs is when
the inner level program has no constraints, i.e.,
$Z = \mathbb{R}^p$. In this case, the set $K_{FJ}(x)$ of Fritz John points
is just the set of critical points of the inner objective $f(x,z)$,
and the polynomial tuple $\psi(x,y)$ is given by
\[
\psi(x,y)=\Big( \frac{\partial f}{\partial z_1}(x,y),
\ldots, \frac{\partial f}{\partial z_p}(x,y) \Big).
\]
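As a toy illustration (the $f$ below is our own choice, not from the paper), take $f(x,z)=(z_1-x)^2+(z_2-x^2)^2$ with $Z=\mathbb{R}^2$: its unique unconstrained minimizer is $z=(x,x^2)$, and $\psi(x,y)=\nabla_z f(x,y)$ vanishes exactly there:

```python
# psi(x, y) = grad_z f(x, y) for the unconstrained lower level with
# f(x, z) = (z1 - x)^2 + (z2 - x^2)^2  (illustrative choice of f)
def psi(x, y):
    return (2*(y[0] - x), 2*(y[1] - x**2))

x = 0.7
y = (x, x**2)                 # the unique lower-level minimizer
assert psi(x, y) == (0.0, 0.0)
print("psi vanishes at the minimizer")
```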
\begin{exm} \label{exm:sbpp:Z=R^p}
(SBPPs with $Z=\mathbb{R}^p$)
Consider random SBPPs with ball conditions on $x$
and no constraints on $z$:
\begin{equation}\label{exm:random:prob:uncons}
\left\{
\begin{aligned}
F^*:=\min\limits_{x\in \mathbb{R}^n, \, y\in {\mathbb R}^{p}}&~~ F(x,y)\\
\text{s.t.} & ~~ \|x\|^2 \leq 1, \quad
y \in \underset{z\in {\mathbb R}^p}{\tt argmin} f(x,z),
\end{aligned}
\right.
\end{equation}
where $F(x,y)$ and $f(x,z)$ are generated randomly as
\begin{equation*}
\begin{aligned}
F(x,y) & := a_1^{T}[u]_{2d_1-1}+ \| B_1[u]^{d_1} \|^2,\\
f(x,z) & := a_2^{T} [x]_{2d_2-1}+ a_3^T [z]_{2d_2-1} +
\left \| B_2 \begin{pmatrix} [x]^{d_2} \\ [z]^{d_2} \end{pmatrix} \right\|^2.
\end{aligned}
\end{equation*}
In the above, $x=(x_1,\ldots,x_n),~y=(y_1,\ldots,y_p),~ z= (z_1,\ldots,z_p)$,
$u=(x,y)$ and $d_1, d_2\in \mathbb{N}$.
The symbol $[x]_d$ denotes the vector of monomials in $x$
and of degrees $\leq d$, while $[x]^d$ denotes the vector of monomials
in $x$ and of degrees equal to $d$.
The symbols $[y]_d, [y]^d, [u]^d$ are defined in the same way.
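For concreteness, these monomial vectors can be generated as follows (an illustrative helper evaluating the monomials at a numeric point; the within-degree ordering is our own choice):

```python
from itertools import combinations_with_replacement
from math import prod

def mono_upto(x, d):   # [x]_d : monomials of degree <= d, evaluated at x
    return [prod(c) for k in range(d + 1)
            for c in combinations_with_replacement(x, k)]

def mono_exact(x, d):  # [x]^d : monomials of degree exactly d, evaluated at x
    return [prod(c) for c in combinations_with_replacement(x, d)]

print(mono_upto([2, 3], 2))   # [1, 2, 3, 4, 6, 9]
print(mono_exact([2, 3], 2))  # [4, 6, 9]
```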
\iffalse
\begin{table}[htb]
\caption{ Results for random SBPPs
as in \reff{exm:random:prob:uncons}.}
\label{Table:random:SBPP:X:uncons}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline
$n$ & $p$ & $d_1$ & $d_2$ & {\tt AvgIter}
& {\tt AvgTime} & {\tt Avg($v^*$)} \\ \hline
2 & 3 & 3 & 2 & 1.9 & 00:02 & -2.9e-7 \\ \hline
3 & 3 & 2 & 2 & 1.6 & 00:07 & -3.7e-7 \\ \hline
3 & 3 & 3 & 2 & 1.7 & 00:07 & -2.6e-7 \\ \hline
4 & 2 & 2 & 2 & 1.4 & 00:06 & -2.4e-7 \\ \hline
4 & 3 & 2 & 2 & 2.3 & 00:41 & -6.4e-7 \\ \hline
5 & 2 & 2 & 2 & 1.9 & 00:33 & -8.1e-7 \\ \hline
5 & 3 & 2 & 2 & 1.8 & 10:04 & -3.8e-7 \\ \hline
6 & 2 & 2 & 2 & 2.0 & 09:56 & -1.5e-6 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
\fi
\begin{table}[htb]
\caption{Results for random SBPPs
as in \reff{exm:random:prob:uncons}.}
\label{Table:random:SBPP:X:uncons}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|r|r|r|} \hline
\multirow{2}{*}{$n$ } & \multirow{2}{*}{$p$} & \multirow{2}{*}{ $d_1$} & \multirow{2}{*}{$d_2$} & \multicolumn{3}{c|}{{\tt Iter} } & \multicolumn{3}{c|}{{\tt Time} } & \multicolumn{3}{c|}{{\tt $v^*$}} \\ \cline{5-13}
& & & & {\tt Min} & {\tt Avg} & {\tt Max} & {\tt Min}& {\tt Avg} & {\tt Max} & {\tt Min\quad} & {\tt Avg\quad ~ }& {\tt Max\quad } \\ \hline
2 & 3 & 3 & 2 & 1 & 1.9 & 5 & 00:01 & 00:02 & 00:06 & -3.8e-6 & -2.9e-7 & -4.32e-8 \\ \hline
3 & 3 & 2 & 2 & 1& 1.6 & 2 & 00:04 & 00:07 & 00:09 & -4.0e-6 & -3.7e-7 & -1.1e-10 \\ \hline
3 & 3 & 3 & 2 & 1 & 1.7 & 2& 00:04 & 00:07 & 00:10 & -2.0e-6 & -2.6e-7 & -7.4e-11 \\ \hline
4 & 2 & 2 & 2 & 1 & 1.4 & 3 & 00:04 & 00:06 & 00:09 & -3.0e-6& -2.4e-7 & -4.9e-12 \\ \hline
4 & 3 & 2 & 2 & 1 & 2.3 & 5 & 00:15 & 00:41 & 01:36 & -5.3e-6 & -6.4e-7 & -4.67e-9 \\ \hline
5 & 2 & 2 & 2 & 1 & 1.9 & 4 & 00:14 & 00:33 & 01:13 & -3.5e-6 & -8.1e-7 & -4.3e-11\\ \hline
5 & 3 & 2 & 2 & 1 & 1.8 & 3 & 06:30 & 10:04 & 11:56 & -1.1e-6 & -3.8e-7 & -1.9e-10 \\ \hline
6 & 2 & 2 & 2 & 1 & 2.0 & 4 & 04:02 & 09:56 & 17:39 & -6.2e-6 & -1.5e-6 & -5.57e-7 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
We test the performance of Algorithm~\ref{alg:exchange:method}
for solving SBPPs in the form~\reff{exm:random:prob:uncons}.
The computational results are reported in Table~\ref{Table:random:SBPP:X:uncons}.
For each case in the table, we randomly generated 20 instances.
The columns {\tt Min}, {\tt Avg} and {\tt Max} under {\tt Iter}
report the minimum, average and maximum numbers of iterations
taken by Algorithm~\ref{alg:exchange:method};
the columns under {\tt Time} and $v^*$ report the same statistics
for the consumed computational time and for the optimal values $v^*$.
The computational time is given in the format {\tt mn:sc},
with {\tt mn} and {\tt sc} standing for minutes and seconds, respectively.
As we can see, these SBPPs were solved successfully.
In Table~\ref{Table:random:SBPP:X:uncons}, the computational times
in the last two rows are much larger than those in the previous rows.
This is because the newly added Jacobian equation
$\psi(x,y)=0$ contains more polynomials of higher degrees. Consequently,
in order to solve $(P_k)$ and $(Q_i^k)$ globally by Lasserre relaxations,
the relaxation orders need to be higher.
This makes the semidefinite relaxations more difficult to solve.
\end{exm}
\begin{exm} (Random SBPPs with ball conditions)
Consider the SBPP:
\begin{equation} \label{exm:random:prob:2}
\left\{
\begin{aligned}
\min\limits_{x\in \mathbb{R}^n, \,y\in \mathbb{R}^p}&~~ F(x,y)\\
\text{s.t.} & ~~ \|x\|^2 \leq 1,
\quad y \in \underset{ \|z\|^2 \leq 1}{\tt argmin}
f(x,z).
\end{aligned}
\right.
\end{equation}
The outer and inner objectives $F(x,y)$, $f(x,z)$ are generated as
\[
F(x,y) = a^{T}[(x,y)]_{2d_1}, \quad
f(x,z)= \begin{pmatrix} [x]_{d_2} \\ [z]_{d_2} \end{pmatrix}^T B
\begin{pmatrix} [x]_{d_2} \\ [z]_{d_2} \end{pmatrix}.
\]
The entries of the vector $a$ and the matrix $B$ are randomly generated
according to Gaussian distributions. The symbols like
$[(x,y)]_{2d_1}$ are defined in the same way as in Example~\ref{exm:sbpp:Z=R^p}.
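For concreteness, the instance generation can be sketched in Python; the helper name {\tt monomial\_vector}, the dimensions, and the random seed below are our own illustrative choices, not specifications from the paper.

```python
import itertools
import numpy as np

def monomial_vector(x, d):
    """[x]_d: all monomials of x of total degree <= d, grouped by degree."""
    n = len(x)
    monos = []
    for total in range(d + 1):
        for alpha in itertools.combinations_with_replacement(range(n), total):
            val = 1.0
            for i in alpha:
                val *= x[i]
            monos.append(val)
    return np.array(monos)

rng = np.random.default_rng(0)
n, p, d1, d2 = 3, 2, 2, 2

# Random coefficients for F(x,y) = a^T [(x,y)]_{2 d1}
len_F = len(monomial_vector(np.zeros(n + p), 2 * d1))
a = rng.standard_normal(len_F)

# Random Gram matrix for f(x,z) = ([x]_{d2}; [z]_{d2})^T B ([x]_{d2}; [z]_{d2})
len_x = len(monomial_vector(np.zeros(n), d2))
len_z = len(monomial_vector(np.zeros(p), d2))
B = rng.standard_normal((len_x + len_z, len_x + len_z))

def F(x, y):
    return a @ monomial_vector(np.concatenate([x, y]), 2 * d1)

def f(x, z):
    v = np.concatenate([monomial_vector(x, d2), monomial_vector(z, d2)])
    return v @ B @ v
```

The length of $[x]_d$ for $n$ variables is $\binom{n+d}{d}$, which the sketch reproduces implicitly.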
We apply Algorithm~\ref{alg:exchange:method} to solve \reff{exm:random:prob:2}.
The computational results are reported in Table~\ref{Table:random:SBPP:X:exchange}.
The meanings of the columns {\tt Iter}, {\tt Time} and $v^*$,
with their {\tt Min}, {\tt Avg} and {\tt Max} statistics,
are the same as in Example~\ref{exm:sbpp:Z=R^p}.
As we can see, the SBPPs as in \reff{exm:random:prob:2}
can be solved successfully by Algorithm~\ref{alg:exchange:method}.
\begin{table}[htb]
\caption{Results for random SBPPs in
\reff{exm:random:prob:2}.}
\label{Table:random:SBPP:X:exchange}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|r|r|r|} \hline
\multirow{2}{*}{$n$ } & \multirow{2}{*}{$p$} & \multirow{2}{*}{ $d_1$} & \multirow{2}{*}{$d_2$} & \multicolumn{3}{c|}{{\tt Iter} } & \multicolumn{3}{c|}{{\tt Time} } & \multicolumn{3}{c|}{{\tt $v^*$}} \\ \cline{5-13}
& & & & {\tt Min} & {\tt Avg} & {\tt Max} & {\tt Min}& {\tt Avg} & {\tt Max} & {\tt Min\quad} & {\tt Avg\quad ~ }& {\tt Max\quad } \\ \hline
3 & 2 & 2 & 2 & 1 & 2.6 & 6 & 00:01 & 00:03 & 00:06 & -7.4e-7 & -1.4e-7 & 2.0e-9 \\ \hline
3 & 3 & 2 & 2 & 1 & 2.7 & 6 & 00:03 & 00:09 & 00:21 & -2.6e-6 & -6.5e-7 &-1.5e-9 \\ \hline
3 & 3 & 3 & 2 & 1& 3.0 & 5 & 00:03 & 00:09 & 00:17 & -2.9e-6 & -3.6e-7 & -1.1e-9 \\ \hline
4 & 2 & 2 & 2 & 1 & 3.5 & 8 & 00:03 & 00:20 & 00:43 & -1.8e-6 & -5.0e-7 & 1.4e-9 \\ \hline
4 & 3 & 2 & 2 & 1 & 2.6 & 5 & 00:12 & 00:31 & 01:01 & -2.9e-6 & -3.0e-7 & 1.8e-9 \\ \hline
5 & 2 & 2 & 2 & 1 & 3.7 & 11 & 00:11 & 00:43 & 02:06 & -3.9e-6 & -1.7e-7 & -3.4e-9 \\ \hline
5 & 2 & 3 & 2 & 1 & 3.4 & 10 & 00:10 & 00:41 & 02:15 & -3.6e-6 & -5.4e-7 & -1.5e-9 \\
\hline
6 & 2 & 2 & 2 & 1 & 2.6 & 6 & 03:21 & 09:17 & 22:41 & -4.3e-6 & -5.7e-7 & 5.8e-10 \\ \hline
6 & 2 & 3 & 2 & 1 & 2.4 & 5 & 03:15 & 08:23 & 17:42 & -6.2e-7 & -1.5e-7 & 2.7e-10 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
\end{exm}
\subsection{Examples of GBPPs}
\begin{exm}\label{Ex5.7}
Consider the GBPP:
\begin{equation} \label{exm:GBPP:1}
\left\{
\begin{array}{rl}
\min\limits_{x\in {\mathbb R}^2,y\in {\mathbb R}^3} & \frac{1}{2}x_1^2y_1+x_2y_2^2-(x_1+x_2^2)y_3\\
\text{s.t.} & x\in [-1,1]^2, \,\, x_1+x_2-x_1^2-y_1^2-y_2^2\geq 0,\\
& y\in S(x), \\
\end{array}
\right.
\end{equation}
where $S(x)$ is the set of minimizers of
\[
\left\{ \begin{array}{rl}
\min\limits_{z\in {\mathbb R}^3} & x_2(z_1z_2z_3+z_2^2-z_3^3)\\
\text{s.t.}~& x_1-z_1^2-z_2^2-z_3^2\geq 0,\,\, 1-2z_2z_3\geq 0.
\end{array} \right.
\]
We apply Algorithm~\ref{alg:exchange:method} to solve \reff{exm:GBPP:1}. The algorithm terminates at iteration $k=0$.
It takes about 10.18 seconds to solve the problem. We get
\[
x^* \approx (1,1), \quad y^* \approx (0,0,1),
\quad F_0^* \approx -2, \quad v^* \approx -2.95 \times 10^{-8}.
\]
Since $Z_0 = \emptyset$,
we have $F_0^* \leq F^*$ (the global minimum value). Moreover,
$(x^*, y^*)$ is feasible for \reff{exm:GBPP:1}, so $F(x^*, y^*) \geq F^*$.
Therefore, $F(x^*, y^*) = F^*$ and $(x^*,y^*)$ is a global optimizer,
up to a tolerance around $10^{-8}$.
\end{exm}
\begin{exm}
Consider the GBPP:
\begin{equation}\label{exm:bigger:GBPP}
\left\{
\begin{array}{rl}
\min\limits_{x\in {\mathbb R}^4,y\in {\mathbb R}^4} & (x_1+x_2+x_3+x_4)(y_1+y_2+y_3+y_4)\\
\text{s.t.} & \|x\|^2 \leq 1, \,\, y_3^2-x_4\leq 0,\\
&\ y_2y_4-x_1\leq 0, \,\, y\in S(x), \\
\end{array}
\right.
\end{equation}
where $S(x)$ is the set of minimizers of
\[
\left\{ \begin{array}{rl}
\min\limits_{z\in {\mathbb R}^4} & x_1z_1+x_2z_2+0.1z_3+0.5z_4-z_3z_4\\
\text{s.t.}~& z_1^2+2z_2^2+3z_3^2+4z_4^2\leq x_1^2+x_3^2+x_2+x_4,\\
& z_2z_3-z_1z_4\geq 0.
\end{array} \right.
\]
We apply Algorithm \ref{alg:exchange:method} to solve \reff{exm:bigger:GBPP}.
The computational results are reported in Table~\ref{table:exm:bigger:GBPP}.
Algorithm \ref{alg:exchange:method} stops with $k=1$.
It takes about 490.65 seconds to solve the problem.
We are not sure whether the point
$(x_i^k, y_i^k)$ computed at $k=1$ is a global optimizer.
\begin{table}[htb]
\caption{Results of Algorithm \ref{alg:exchange:method} for solving \reff{exm:bigger:GBPP}.}
\label{table:exm:bigger:GBPP}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|} \hline
{\tt Iter} $k$ & $(x_i^k,y_i^k)$ & $F_k^*$ & $v_i^k$ \\ \hline
0 & (0.5442,0.4682,0.4904,0.4942,-0.7792,-0.5034,-0.2871,-0.1855) & -3.5050 & -0.0391 \\
\hline
1 & (0.5135,0.5050,0.4882,0.4929,-0.8346,-0.4104,-0.2106,-0.2887) & -3.4880 & 3.29e-9 \\
\hline
\end{tabular}
\end{scriptsize}
\end{table}
\end{exm}
\begin{table} [htb]
\caption{Results for some GBPPs} \label{GBPP:small}
\centering \begin{scriptsize}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|c|l|c|c|}
\hline
No. &
\quad \quad \quad \quad \quad\quad \quad \quad \quad\quad \quad \quad \quad Small GBPPs & \multicolumn{2}{|c| }{Results} \\
\hline
\multirow{6}{*}{ 1} & \multirow{6}{*}{ $ \left\{
\begin{array}{rl} \min\limits_{x\in {\mathbb R},y\in {\mathbb R}} & -x-y \\
\text{s.t.} & y \in S(x) := \underset{z\in Z(x)}{\tt argmin}~ z\\
& Z(x):=\{z\in {\mathbb R}| -x+z\geq 0,\, -z\geq 0\}.
\end{array} \right. $ } & $F^*$ & -2.78e-13 \\ \cline{3-4}
& & Iter & 1\\ \cline{3-4}
& & $x^*$& 3.82e-14 \\ \cline{3-4}
& & $y^*$& 2.40e-13 \\ \cline{3-4}
& & $v^*$ & -7.43e-13 \\ \cline{3-4}
& & {\tt Time} & 0.19 \\
\hline
\multirow{6}{*}{ 2} & \multirow{6}{*}{ $\left\{
\begin{array}{rl} \min\limits_{x\in {\mathbb R},y \in {\mathbb R}}& \quad (x-1)^2+y^2\\
\text{s.t.} & x\in [-3,2],\, y \in S(x) := \underset{z\in Z(x)}{\tt argmin}~z^3-3z\\
& Z(x):=\{z\in {\mathbb R}|z\geq x\}.
\end{array} \right. $} & $F^*$ & 0.9999 \\ \cline{3-4}
& & Iter & 2\\ \cline{3-4}
& & $x^*$& 0.9996\\ \cline{3-4}
& & $y^*$& 1.0000 \\ \cline{3-4}
& & $v^*$ & -4.24e-9\\ \cline{3-4}
& & {\tt Time} & 0.57 \\
\hline
\multirow{6}{*}{ 3} & \multirow{6}{*}{ $ \left\{
\begin{array}{rl} \min\limits_{x\in {\mathbb R},y\in {\mathbb R}}& (x-0.6)^2+y^2\\
\text{s.t.} & x,y \in [-1,1],\, y \in S(x) := \underset{z\in Z(x)}{\tt argmin}~f(x,z)=z^4+\frac{4}{30}(1-x)z^3\\
& +(0.16x-0.02x^2-0.4)z^2+(0.004x^3-0.036x^2+0.08x)z,\\
& Z(x):=\{z\in {\mathbb R}|0.01(1+x^2)\leq z^2,\, z\in [-1,1] \}.
\end{array} \right. $ } & $F^*$ & 0.1917 \\ \cline{3-4}
& & Iter & 2\\ \cline{3-4}
& & $x^*$& 0.6436\\ \cline{3-4}
& & $y^*$& -0.4356 \\ \cline{3-4}
& & $v^*$ & 2.18e-10\\ \cline{3-4}
& & {\tt Time} & 0.52 \\
\hline
\multirow{6}{*}{4} & \multirow{6}{*}{ $ \left\{
\begin{array}{rl} \min\limits_{x\in {\mathbb R},y\in {\mathbb R}^2}& \quad x^3y_1+y_2\\
\text{s.t.} & x\in [0,1],\,y\in [-1,1]\times [0,100],\, y \in S(x) := \underset{z\in Z(x)}{\tt argmin} -z_2\\
& Z(x):=\{z\in {\mathbb R}^2|xz_1\leq 10,\, z_1^2+xz_2\leq 1, z\in [-1,1]\times [0,100] \}.
\end{array} \right. $ } & $F^*$ & 1 \\ \cline{3-4}
& & Iter & 1\\ \cline{3-4}
& & $x^*$& 1 \\ \cline{3-4}
& & $y^*$& (0,1) \\ \cline{3-4}
& & $v^*$ & 3.45e-8\\ \cline{3-4}
& & {\tt Time} & 1.83 \\
\hline
\multirow{6}{*}{ 5} & \multirow{6}{*}{ $\left\{
\begin{array}{rl} \min\limits_{x\in {\mathbb R},y\in {\mathbb R}^2}& -x-3y_1+2y_2\\
\text{s.t.} & x\in [0,8], y\in [0,4]\times [0,6],\, y\in S(x) = \underset{z\in Z(x)}{\tt argmin}
-z_1\\
& Z(x):=\left\{z\in {\mathbb R}^2\left|\begin{array}{c}-2x+z_1+4z_2\leq 16, 8x+3z_1-2z_2\leq 48\\
2x-z_1+3z_2\geq 12,z\in [0,4]\times [0,6] \end{array}
\right. \right\}.
\end{array} \right. $ } & $F^*$ & -13 \\ \cline{3-4}
& & Iter & 1\\ \cline{3-4}
& & $x^*$& 5 \\ \cline{3-4}
& & $y^*$& (4,2) \\ \cline{3-4}
& & $v^*$ & 3.95e-6 \\ \cline{3-4}
& & {\tt Time} & 0.38 \\
\hline
\multirow{6}{*}{ 6} & \multirow{6}{*}{ $ \left\{
\begin{array}{rl} \min\limits_{x \in\mathbb{R}^2 ,y\in\mathbb{R}^2 }&\ -y_2\\
\text{s.t.}&\ y_1y_2=0,x\geq 0,\, y\in S(x):= \underset{z\in Z(x)}{\tt argmin} ~z_1^2+(z_2+1)^2 \\
&\ Z(x):=\left\{z\in {\mathbb R}^2\left|\begin{array}{c}(z_1-x_1)^2+(z_2-1-x_1)^2\leq 1,\\
(z_1+x_2)^2+(z_2-1-x_2)^2\leq 1\end{array}
\right. \right\}. \end{array}
\right. $ } & $F^*$ & -1 \\ \cline{3-4}
& & Iter & 2\\ \cline{3-4}
& & $x^*$& (0.71,0.71) \\ \cline{3-4}
& & $y^*$& (0,1) \\ \cline{3-4}
& & $v^*$ & -3.77e-10\\ \cline{3-4}
& & {\tt Time} & 0.60 \\
\hline
\multirow{6}{*}{7} & \multirow{6}{*}{ $ \left\{
\begin{array}{rl} \min\limits_{x\in {\mathbb R}^2,y\in {\mathbb R}^2}& -x_1^2-3x_2-4y_1+y_2^2\\
\text{s.t.} & (x,y) \geq 0, {-x_1^2}-2x_2+4\geq 0, y\in S(x):= \underset{z\in Z(x)}{\tt argmin}
~z_1^2-5z_2\\
& Z(x):=\left\{z\in {\mathbb R}^2\left|\begin{array}{c}~x_1^2-2x_1+x_2^2-2z_1+z_2 +3\geq 0,\\
x_2+3z_1-4z_2-4\geq 0\end{array}
\right. \right\}.
\end{array} \right. $ } & $F^*$ & -12.6787 \\ \cline{3-4}
& & Iter & 2\\ \cline{3-4}
& & $x^*$& (0,2) \\ \cline{3-4}
& & $y^*$& (1.88,0.91) \\ \cline{3-4}
& & $v^*$ & 2.40e-6 \\ \cline{3-4}
& & {\tt Time} & 10.52 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
\begin{exm}
In this example we consider some GBPP examples given in the literature. The problems and the computational results are displayed in Table \ref{GBPP:small}.
Problem 1 is \cite[Example 3.1]{allende2013solving} and the optimal solution $(x^*,y^*)=(0,0)$ is reported.
Problem 2 is \cite[Example 4.2]{YeSIAM2010} and the optimal solution
$(x^*,y^*)=(1,1)$ is reported.
Problem~3 is \cite[Example 3.22]{BIEXM}.
As shown in \cite{BIEXM},
the optimal solution is attained at a point satisfying $0< x<1$ and
$y=-0.5+0.1x$. For $(x,y)$ satisfying these conditions, the lower level constraint $0.01(1+x^2)-y^2\leq 0$ is inactive. Plugging $y=-0.5+0.1x$ into the upper level objective, the bilevel program reduces to minimizing the convex function
$(x-0.6)^2+(-0.5+0.1x)^2$. Hence the optimal solution is
$(x^*,y^*)=(\frac{65}{101}, -\frac{44}{101})$.
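This closed-form solution can be double-checked in exact rational arithmetic; the following sketch (ours, not part of the paper's method) reproduces the substitution argument.

```python
from fractions import Fraction

# After substituting y = -0.5 + 0.1 x, the upper objective becomes the
# convex quadratic q(x) = (x - 0.6)^2 + (-0.5 + 0.1 x)^2.
# Its derivative q'(x) = 2(x - 0.6) + 0.2(-0.5 + 0.1 x) = 2.02 x - 1.3
# vanishes at x* = 1.3 / 2.02:
x_star = Fraction(13, 10) / Fraction(202, 100)   # = 65/101
y_star = Fraction(-1, 2) + Fraction(1, 10) * x_star

assert x_star == Fraction(65, 101)
assert y_star == Fraction(-44, 101)

# Consistent with the numerical solution (0.6436, -0.4356) in the table:
assert abs(float(x_star) - 0.6436) < 5e-5
assert abs(float(y_star) - (-0.4356)) < 5e-5
```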
Problem 4 can be found in \cite[Example 4.2]{BIEXM}
with the optimal solution $(x^*,y^*)=(1,0,1)$ reported.
Problem 5 can be found in \cite[Example 5.1]{BIEXM} where
the optimal solution $(x^*,y^*)=(5,4,2)$ is derived.
Problem 6 is \cite[Example 3.1]{Dempe2012MP}.
{As shown in \cite{Dempe2012MP},}
the optimal solution is $(x^*,y^*)=(\sqrt{0.5}, \sqrt{0.5})$.
Problem 7 was originally given in \cite[Example 3]{Bard}
and analyzed in \cite{allende2013solving}.
It was reported in \cite{allende2013solving} that the optimal solution is
$x^*=(0,2), y^* \approx (1.875, 0.9062)$.
In fact we can show that the optimal solution is
$x^*=(0,2), y^*=(\frac{15}{8}, \frac{29}{32})$ as follows.
Since the upper objective is separable in $x$ and $y$, it is easy to show that the optimal solution for
the problem
$$\min_{(x_1,x_2)\geq 0} -x_1^2-3x_2-4y_1+y_2^2
\quad \mbox{ s.t. } \quad -x_1^2-2x_2+4 \geq 0$$
with $y_1,y_2$ fixed
is $x^*_1=0, x_2^*=2$. Since $y^*=(\frac{15}{8}, \frac{29}{32})$ is the optimal solution to the lower level problem parameterized by $x^*=(0,2)$, we conclude that the optimal solution is $x^*=(0,2), y^*=(\frac{15}{8}, \frac{29}{32})$.
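The claim that $x^*=(0,2)$ minimizes the $x$-part of the upper objective can also be confirmed by a brute-force grid search; the sketch below and its grid resolution are our own choices, not part of the algorithm.

```python
import itertools

# Grid search over the feasible region {x >= 0, x1^2 + 2 x2 <= 4} for the
# x-part of the upper objective, -x1^2 - 3 x2 (the y-terms are fixed).
best = None
steps = 401
for i, j in itertools.product(range(steps), repeat=2):
    x1, x2 = 2.0 * i / (steps - 1), 2.0 * j / (steps - 1)
    if x1 ** 2 + 2 * x2 <= 4.0:
        val = -x1 ** 2 - 3 * x2
        if best is None or val < best[0]:
            best = (val, x1, x2)

val, x1, x2 = best
# The minimum -6 is attained at (x1, x2) = (0, 2), as claimed.
```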
From Table \ref{GBPP:small}, we can see that Algorithm \ref{alg:exchange:method} stops in very few steps with
global optimal solutions for all problems.
\end{exm}
\section{Conclusions and discussions}
\label{sc:condis}
This paper studies how to solve
both simple and general bilevel polynomial programs.
We reformulate them as equivalent semi-infinite polynomial programs,
using Fritz John conditions and Jacobian representations.
Then we apply the exchange technique and Lasserre type
semidefinite relaxations to solve them.
For solving SBPPs, we propose Algorithm~\ref{alg:exchange:method}
and prove its convergence
to global optimal solutions.
Algorithm~\ref{alg:exchange:method} can also be applied to GBPPs,
although its convergence to global optimizers is then not guaranteed.
Under some additional assumptions, however,
GBPPs can still be solved globally by Algorithm~\ref{alg:exchange:method}.
Extensive numerical experiments are provided
to demonstrate the efficiency of the proposed method.
{To see the advantages of our method,
we would like to make some comparisons with
two existing methods for solving bilevel polynomial programs.
The first one is the value function approximation approach
proposed by Jeyakumar, Lasserre, Li and Pham \cite{Lasbilevel2015};
the second one is the branch and bound approach proposed by
Mitsos, Lemonidis and Barton \cite{mitsos2008global}.
}
\subsection{Comparison with the value function approximation approach}
For solving SBPPs with convex lower level programs,
a semidefinite relaxation method was proposed in
\cite[\S3]{Lasbilevel2015},
under the assumption that the lower level programs satisfy
both the nondegeneracy condition and the Slater condition.
It uses multipliers, appearing in the Fritz John conditions,
as new variables in sum-of-squares type representations.
For SBPPs with nonconvex lower level programs,
it was proposed in \cite[\S4]{Lasbilevel2015}
to solve the following $\epsilon$-approximation problem
(for a tolerance parameter $\epsilon>0$)
\begin{equation} \label{bilevel:pp:eps}
(P^k_{\epsilon}): \left\{
\begin{aligned}
F_{\epsilon}^{k} := \min\limits_{x\in \mathbb{R}^n,y\in \mathbb{R}^p}&\ F(x,y) \\
\text{s.t.} \quad &\ G_i(x,y)\geq 0, \, i=1,\cdots,m_1, \\
&\ g_j(y)\geq 0, j=1,\dots, m_2,\\
& \ f(x,y)-J_{k}(x)\leq \epsilon.
\end{aligned}
\right.
\end{equation}
In the above, $J_{k}(x)\in {\mathbb R}_{2k}[x]$ is a $\frac{1}{k}$-solution
for approximating the nonsmooth value function
$v(x)$ \cite[Algorithm~4.5]{Lasbilevel2015}. For a given parameter $\epsilon >0$,
the method in \cite[\S4]{Lasbilevel2015}
finds the approximating polynomial $J_k(x)$ first,
and then solves $(P^k_{\epsilon})$ by Lasserre type semidefinite relaxations.
Theoretically, $\epsilon>0$ can be chosen arbitrarily small.
However, in computational practice, when $\epsilon>0$ is very small,
the degree $2k$ needs to be chosen very high,
and then it is hard to compute $J_k(x)$.
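To illustrate why approximating the value function is delicate, here is a toy one-dimensional sketch; the function $f$, the grids, and the plain least-squares fit below are our stand-ins, not the sum-of-squares construction of \cite{Lasbilevel2015}.

```python
import numpy as np

# Toy 1-D illustration: approximate the value function
# v(x) = min_{z in [-1,1]} f(x,z) of a nonconvex lower level objective
# by polynomials of increasing degree 2k.
def f(x, z):
    return z ** 4 - x * z ** 2 + 0.1 * z

xs = np.linspace(-1.0, 1.0, 201)
zs = np.linspace(-1.0, 1.0, 2001)
v = np.array([f(x, zs).min() for x in xs])   # value function on a grid

l2_errors = []
for k in (1, 2, 4):
    coeffs = np.polyfit(xs, v, 2 * k)        # least-squares fit of degree 2k
    l2_errors.append(np.linalg.norm(np.polyval(coeffs, xs) - v))

# Enlarging the polynomial space cannot increase the least-squares error,
# but a uniformly accurate fit of the nonsmooth v needs high degrees.
assert l2_errors[2] <= l2_errors[0] + 1e-9
```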
In the following, we give an example to compare our
Algorithm~\ref{alg:exchange:method} and the method in \cite[\S4]{Lasbilevel2015}.
\begin{exm} Consider the following SBPP:
\begin{equation}\label{exm:comparsion:3}
\left\{
\begin{aligned}
F^*:=\min\limits_{x\in {\mathbb R}^2,y\in {\mathbb R}^2}&\ y_1^3(x_1^2-3x_1x_2)-y_1^2y_2+y_2x_2^3\\ \quad \text{s.t.}&\ ~x\in [-1,1]^2,\ y_2+y_1(1-x_1^2)\geq 0,\ y\in S(x),
\end{aligned}
\right.
\end{equation}
where $S(x)$ is the solution set of the following optimization problem:
\begin{equation*}
v(x):= \min\limits_{z\in {\mathbb R}^2}~z_1z^2_2-z^3_2-z^2_1(x_2-x_1^2)\quad\text{s.t.}\quad z_1^2+z_2^2\leq 1.
\end{equation*}
The computational results of applying Algorithm~\ref{alg:exchange:method}
are shown in Table \ref{table:exm:comparsion:3}. It took only two steps
to solve the problem successfully. The set $\mathcal{U}$ is compact.
For each $x$, $S(x)\neq \emptyset$, since the lower level program minimizes a polynomial over a nonempty compact set. The value function $v(x)$ of the lower level program is continuous. The feasible set of problem \reff{exm:comparsion:3}
is nonempty and compact.
{
At the iteration $k=1$, the value $v_i^k$ is almost zero,
so the point $(0.5708,-1.0000,-0.1639,0.9865)$
is a global optimizer of problem \reff{exm:comparsion:3},
up to a tolerance around $10^{-9}$.
}
\begin{table}[htb]
\caption{Computational results of Algorithm \ref{alg:exchange:method} for solving \reff{exm:comparsion:3}.}
\label{table:exm:comparsion:3}
\centering
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|} \hline
{\tt Iter} $k$ & $(x_i^k,y_i^k)$ & {$z_{i,j}^k$} & $F_k^*$ & $v_i^k$ \\ \hline
0 & ( 1.0000,-1.0000,-1.0000,0.0000) & (-0.1355,0.9908)& -4.0000 & -3.0689\\
& (-1.0000,1.0000,-1.0000,0.0000)&(-0.2703,0.9628) & -4.0000
& -1.1430 \\ \hline
1 & (0.5708,-1.0000,-0.1639,0.9865) & (-0.1638,0.9865) &-1.0219& -4.76e-9 \\ \hline
\end{tabular}
\end{scriptsize}
\end{table}
Next, we apply the method in \cite[\S4]{Lasbilevel2015}.
We use the software {\sf Yalmip} \cite{YALMIP}
to compute the approximating polynomial $J_k(x)\in{\mathbb R}_{2k}[x]$,
as in \cite[Algorithm~4.5]{Lasbilevel2015}. After that,
we solve the problem {$(P_{\epsilon}^{k})$} by Lasserre type semidefinite relaxations,
for a parameter $\epsilon > 0$. Let $F_{\epsilon}^{k}$ denote
the optimal value of \reff{bilevel:pp:eps}.
The computational results are shown in Table \ref{table:exm:comparsion:3:Las}.
As $\epsilon$ approaches $0$, we can see that $F_{\epsilon}^{k}$
approaches the true optimal value $F^* \approx -1.0219$.
Since the method in \cite{Lasbilevel2015} depends on the choice of $\epsilon>0$,
we do not compare the computational time.
In applications, the optimal value $F^*$ is typically unknown.
An interesting question for research
is how to select a value of $\epsilon>0$
that guarantees $F_{\epsilon}^{k}$ is close enough to $F^*$.
\begin{table}[htb]
\caption{Computational results of the method in \cite[\S4]{Lasbilevel2015}.}
\label{table:exm:comparsion:3:Las}
\centering
\begin{scriptsize}
\begin{tabular}{|l|c|c|c|} \hline
\quad $\epsilon$ & $F_{\epsilon}^{2}$ & $F_{\epsilon}^{3}$ & $F_{\epsilon}^{4}$ \\ \hline
1.0 & -3.4372 & -3.6423 & -3.6439 \\
0.5 & -1.5506 & -1.5909 & -1.5912
\\
0.25 & -1.2718 & -1.2746 & -1.2750 \\
0.125 & -1.1746 & -1.1775 & -1.1779 \\
0.05 & -1.1193 & -1.1224 & -1.1228 \\
0.01 & -1.0897 & -1.0930 & -1.0934 \\
0.005 & -1.0858 & -1.0892 & -1.0897 \\
0.001 & -1.0827 & -1.0862 & -1.0867 \\
0.0001 & -1.0820 & -1.0855 & -1.0860 \\
\hline
\end{tabular}
\end{scriptsize}
\end{table}
\end{exm}
% https://arxiv.org/abs/1803.00281
\title{Strong subgraph $k$-connectivity bounds}
\begin{abstract}
Let $D=(V,A)$ be a digraph of order $n$, $S$ a subset of $V$ of size $k$ and $2\le k\leq n$. Strong subgraphs $D_1, \dots , D_p$ containing $S$ are said to be internally disjoint if $V(D_i)\cap V(D_j)=S$ and $A(D_i)\cap A(D_j)=\emptyset$ for all $1\le i<j\le p$. Let $\kappa_S(D)$ be the maximum number of internally disjoint strong digraphs containing $S$ in $D$. The strong subgraph $k$-connectivity is defined as $$\kappa_k(D)=\min\{\kappa_S(D)\mid S\subseteq V, |S|=k\}.$$ A digraph $D=(V, A)$ is called minimally strong subgraph $(k,\ell)$-connected if $\kappa_k(D)\geq \ell$ but for any arc $e\in A$, $\kappa_k(D-e)\leq \ell-1$. In this paper, we first give a sharp upper bound for the parameter $\kappa_k(D)$ and then study the minimally strong subgraph $(k,\ell)$-connected digraphs.
\end{abstract}
\section{Introduction}\label{sec:intro}
The generalized $k$-connectivity $\kappa_k(G)$ of a graph $G=(V,E)$
was introduced by Hager \cite{Hager} in 1985 ($2\le k\le |V|$).
For a graph $G=(V,E)$ and a set $S\subseteq V$ of at least two
vertices, an {\em $S$-Steiner tree} or, simply, an {\em $S$-tree}
is a subgraph
$T$ of $G$ which is a tree with $S\subseteq V(T)$. Two $S$-trees
$T_1$ and $T_2$ are said to be {\em internally disjoint} if
$E(T_1)\cap E(T_2)=\emptyset$ and $V(T_1)\cap V(T_2)=S$. The {\em
generalized local connectivity} $\kappa_S(G)$ is the maximum number
of internally disjoint $S$-trees in $G$. For an integer $k$ with
$2\leq k\leq n$, the {\em generalized $k$-connectivity} is defined
as
$$\kappa_k(G)=\min\{\kappa_S(G)\mid S\subseteq V(G), |S|=k\}.$$
Observe that
$\kappa_2(G)=\kappa(G)$.
If $G$ is disconnected and the vertices of $S$ lie in different connected components, we have $\kappa_S(G)=0$.
Thus, $\kappa_k(G)=0$ for a disconnected graph $G$.
Generalized
connectivity of graphs has become an established area in graph
theory, see a recent monograph \cite{Li-Mao5} by Li and Mao on
generalized connectivity of undirected graphs.
To extend generalized $k$-connectivity to directed graphs, Sun, Gutin, Yeo and Zhang
\cite{Sun-Gutin-Yeo-Zhang} observed that
in the definition of $\kappa_S(G)$, one can replace ``an $S$-tree''
by ``a connected subgraph of $G$ containing $S$.'' Therefore, Sun et al. \cite{Sun-Gutin-Yeo-Zhang}
defined {\em strong subgraph $k$-connectivity} by replacing
``connected'' with ``strongly connected'' (or, simply, ``strong'')
as follows. Let $D=(V,A)$ be a
digraph of order $n$, $S$ a subset of $V$ of size $k$ and
$2\le k\leq n$. Strong subgraphs $D_1, \dots , D_p$ containing $S$
are said to be {\em
internally disjoint} if $V(D_i)\cap V(D_j)=S$ and $A(D_i)\cap
A(D_j)=\emptyset$ for all $1\le i<j\le p$.
Let $\kappa_S(D)$ be the maximum number of
internally disjoint strong digraphs containing $S$ in $D$. The {\em
strong subgraph $k$-connectivity} is defined as
$$\kappa_k(D)=\min\{\kappa_S(D)\mid S\subseteq V, |S|=k\}.$$
By definition, $\kappa_2(D)=0$ if $D$ is not strong.
Despite the definition of strong subgraph $k$-connectivity being similar to that of generalized $k$-connectivity,
the former is somewhat more complicated than the latter. Let us first consider a simple reason for our claim above.
For a graph $G$, let $\overleftrightarrow{G}$ denote the digraph obtained from $G$ by replacing every edge $xy$ with two arcs $xy$ and $yx$.
While minimal connected spanning subgraphs of undirected graphs are all trees, even a simple digraph $\overleftrightarrow{C_n}$ has two types of such strong subgraphs: a directed cycle and $\overleftrightarrow{P_n}$. A less trivial reason is given in the next paragraph.
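Both types of minimal strong spanning subgraphs of $\overleftrightarrow{C_n}$ can be verified with a short reachability test; the sketch below (ours, not from the paper) checks strong connectivity by breadth-first search in the digraph and in its reverse.

```python
from collections import deque

def is_strong(n, arcs):
    """Strong connectivity of a digraph on vertices 0..n-1: every vertex is
    reachable from 0 both in the digraph and in its reverse."""
    def reaches_all(adj):
        seen, queue = {0}, deque([0])
        while queue:
            u = queue.popleft()
            for w in adj.get(u, []):
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        return len(seen) == n

    fwd, rev = {}, {}
    for u, w in arcs:
        fwd.setdefault(u, []).append(w)
        rev.setdefault(w, []).append(u)
    return reaches_all(fwd) and reaches_all(rev)

n = 5
# Two types of minimal strong spanning subgraphs of <->C_n:
cycle = [(i, (i + 1) % n) for i in range(n)]                 # directed Hamilton cycle
bipath = [(i, i + 1) for i in range(n - 1)] + \
         [(i + 1, i) for i in range(n - 1)]                  # <->P_n

assert is_strong(n, cycle)
assert is_strong(n, bipath)
# Removing any arc from the directed cycle destroys strong connectivity:
assert all(not is_strong(n, [arc for arc in cycle if arc != e]) for e in cycle)
```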
The main aim of \cite{Sun-Gutin-Yeo-Zhang} was to study complexity
of computing $\kappa_k(D)$ for an arbitrary digraph $D$, for a
semicomplete digraph $D$, and for a symmetric digraph $D$. In
particular, Sun et al. proved that for all fixed integers $k\ge 2$
and $\ell\ge 2$ it is NP-complete to decide whether $\kappa_S(D)\ge
\ell$ for an arbitrary digraph $D$ and a vertex set $S$ of $D$ of
size $k$. Since deciding the same problem for generalized
$k$-connectivity of undirected graphs is polynomial
time solvable \cite{Li-Li}, it is clear that computing strong
subgraph $k$-connectivity is somewhat harder than computing
generalized $k$-connectivity.
We will postpone discussion of further results from \cite{Sun-Gutin-Yeo-Zhang}
until Subsection \ref{sec:compres} and now overview new results obtained in this paper.
First, we improve the following tight bound used in \cite{Sun-Gutin-Yeo-Zhang}
\begin{equation}\label{eq:1}
\kappa_k(D)\le \min\{\delta^-(D),\delta^+(D)\}
\end{equation}
for a digraph $D$, where $\delta^-(D)$ and
$\delta^+(D)$ are the minimum in-degree and out-degree of $D$, respectively.
We will show a new sharp bound $\kappa_k(D)\le \kappa(D)$, where $\kappa(D)$ is the strong connectivity of $D$.
Note that $\kappa(D)\le \min\{\delta^-(D),\delta^+(D)\}.$ Interestingly, for undirected graphs $G$, $\kappa_k(G)\le \kappa(G)$ holds only for $k\le 6$ \cite{Li,Li-Mao}.
In what follows, $n$ will denote the number of vertices of the digraph under consideration.
A digraph $D=(V(D), A(D))$ is called {\em minimally strong subgraph
$(k,\ell)$-connected} if $\kappa_k(D)\geq \ell$ but for any arc
$e\in A(D)$, $\kappa_k(D-e)\leq \ell-1$.
Let $\mathfrak{F}(n,k,\ell)$ be the set of all minimally strong
subgraph $(k,\ell)$-connected digraphs with order $n$. We define
$$F(n,k,\ell)=\max\{|A(D)| \mid D\in \mathfrak{F}(n,k,\ell)\}$$ and
$$f(n,k,\ell)=\min\{|A(D)| \mid D\in \mathfrak{F}(n,k,\ell)\}.$$ We
further define $$Ex(n,k,\ell)=\{D\mid D\in \mathfrak{F}(n,k,\ell),
|A(D)|=F(n,k,\ell)\}$$ and $$ex(n,k,\ell)=\{D\mid D\in
\mathfrak{F}(n,k,\ell), |A(D)|=f(n,k,\ell)\}.$$
Using the Hamilton cycle decomposition theorem of Tillson
\cite{Tillson} (Theorem \ref{thm01}), it is not hard to see that
$f(n,k,n-1)=F(n,k,n-1)=n(n-1)$ and that the only extremal digraph is
the complete digraph on $n$ vertices. However, computing
$f(n,k,n-2)$ and $F(n,k,n-2)$ appears to be harder. In Theorem
\ref{thme}, we characterize minimally
strong subgraph $(2,n-2)$-connected digraphs.
The characterization implies that
$f(n,2,n-2)=n(n-1)-2\lfloor
n/2\rfloor$, $F(n,2,n-2)=n(n-1)-3$. We will also prove the
lower bound $f(n,k,\ell)\ge n\ell$ and describe some cases when
$f(n,k,\ell)= n\ell$. Finally, we will show that
$F(n,n,\ell)\leq 2\ell(n-1)$ and $F(n,k,1)=2(n-1).$
We leave it as an open problem to obtain a sharp upper bound on
$F(n,k,\ell)$ for every $k\ge 2$ and $\ell \ge 2$.
\subsection{Algorithms and Complexity Results} \label{sec:compres}
Let $k \ge 2$ and $\ell\ge 2$ be fixed integers. By reduction from
the {\sc Directed 2-Linkage} problem, Sun et al.
\cite{Sun-Gutin-Yeo-Zhang} proved that deciding whether
$\kappa_S(D)\ge \ell$ is NP-complete for a $k$-subset $S$ of $V(D)$.
Thomassen \cite{Thom} showed that for every positive integer $p$
there are digraphs which are strongly $p$-connected, but which
contain a pair of vertices not belonging to the same cycle. This
implies that for every positive integer $p$ there are strongly $p$-connected
digraphs $D$ such that $\kappa_2(D)=1$ \cite{Sun-Gutin-Yeo-Zhang}.
The above negative results motivate studying strong subgraph
$k$-connectivity for special classes of digraphs. In
\cite{Sun-Gutin-Yeo-Zhang}, Sun et al. showed that
the problem of deciding whether $\kappa_k(D)\ge \ell$ for
semicomplete digraphs is polynomial-time solvable for fixed $k$ and
$\ell$. The main tool used in their proof is a recent {\sc Directed
$k$-Linkage} theorem
of Chudnovsky, Scott and Seymour
\cite{Chud-Scott-Seymour}.
A digraph $D$ is {\em symmetric} if for every arc
$xy$ of $D$, $D$ also contains the arc $yx$. In other words, a
symmetric digraph $D$ can be obtained from its underlying undirected
graph $G$ by replacing each edge of $G$ with the corresponding arcs
of both directions, that is, $D=\overleftrightarrow{G}.$ Sun et al.
\cite{Sun-Gutin-Yeo-Zhang} showed that for any connected graph $G$,
the parameter $\kappa_2(\overleftrightarrow{G})$ can be computed in
polynomial time. This result is best possible in the following
sense, unless P$=$NP. Let $D$ be a symmetric digraph and $k\geq 3$ a
fixed integer. Then it is NP-complete to decide whether
$\kappa_S(D)\geq \ell$ for $S\subseteq V(D)$ with $|S|=k$
\cite{Sun-Gutin-Yeo-Zhang}.
\section{New sharp upper bound of $\kappa_k(D)$}\label{sec:smallk}
To prove a new bound on $\kappa_k(D)$ in Theorem \ref{thmb}, we will use the following proposition of Sun et al. \cite{Sun-Gutin-Yeo-Zhang}.
\begin{pro}\label{thm03}
Let $2\leq k\leq n$. For a strong digraph $D$ of order $n$, we have
$$1\leq \kappa_k(D)\leq n-1.$$ Moreover, both bounds are sharp, and
the upper bound holds if and only if $D\cong
\overleftrightarrow{K}_n$, $2\leq k\leq n$ and $k\not\in \{4,6\}$.
\end{pro}
\begin{thm}\label{thmb}
Let $D$ be a strong digraph of order $n$. For $k\in \{2,\dots ,n\}$ with $n\ge \kappa(D)+k,$ we have
$$\kappa_k(D)\leq \kappa(D).$$ Moreover, the bound is sharp.
\end{thm}
\begin{pf}
For $k=2$, assume that $\kappa(D)=\kappa(x,y)$ for some
$\{x,y\}\subseteq V(D)$. It follows from the strong subgraph connectivity definition that
$\kappa_{\{x,y\}}(D)\leq \kappa(x,y)$, so $\kappa_2(D)\leq
\kappa_{\{x,y\}}(D)\leq \kappa(x,y)= \kappa(D).$
We now consider the case of $k\ge 3$. If $\kappa(D)=n-1$, then we
have $\kappa_k(D)\leq n-1=\kappa(D)$ by Proposition
\ref{thm03}. If $\kappa(D)=n-2$, then there are two vertices, say $u$
and $v$, such that $uv\not\in A(D)$; hence $D\not\cong
\overleftrightarrow{K}_n$ and $\kappa_k(D)\leq
n-2=\kappa(D)$ by Proposition \ref{thm03}. If
$1\leq \kappa(D)\leq n-3$, then there exists a $\kappa(D)$-vertex
cut, say $Q$, for two vertices $u,v$ in $D$ such that there is no
$u-v$ path in $D-Q$. Let $S=\{u,v\}\cup S'$ where $S'\subseteq
V(D)\setminus (Q\cup \{u,v\})$ and $|S'|=k-2$. Since $u$ and $v$ are
in different strong components of $D-Q$, any strong subgraph
containing $S$ in $D$ must contain a vertex in $Q$. By the
definition of $\kappa_S(D)$ and $\kappa_k(D)$, we have
$\kappa_k(D)\leq \kappa_S(D)\leq |Q|=\kappa(D)$.
For the sharpness of the bound,
consider the following digraph $D$. Let $D$ be a symmetric digraph
whose underlying undirected graph is $K_{k}\bigvee
\overline{K}_{n-k}$~($n\geq 3k$), i.e. the graph obtained from
disjoint graphs $K_{k}$ and $\overline{K}_{n-k}$ by adding all edges
between the vertices in $K_{k}$ and $\overline{K}_{n-k}$.
Let $V(D)=W\cup U$, where $W=V(K_k)=\{w_i\mid 1\leq i\leq k\}$ and
$U=V(\overline{K}_{n-k})=\{u_j\mid 1\leq j\leq n-k\}$. Let $S$ be any $k$-subset of
vertices of $V(D)$ such that $|S\cap U|=s$ ($s\leq k$) and $|S\cap
W|=k-s$. Without loss of generality, let $w_i\in S$ for $1\leq i\leq
k-s$ and $u_j\in S$ for $1\leq j\leq s$. For $1\leq i\leq k-s$, let
$D_i$ be the symmetric subgraph of $D$ whose underlying undirected
graph is the tree $T_i$ with edge set $$\{w_iu_1, w_iu_2, \dots , w_iu_s,
u_{k+i}w_1, u_{k+i}w_2, \dots , u_{k+i}w_{k-s}\}.$$ For
$k-s+1\leq j\leq k$, let $D_j$ be the symmetric subgraph of $D$
whose underlying undirected graph is the tree $T_j$ with edge set $$\{w_ju_1,
w_ju_2, \dots , w_ju_s, w_jw_1, w_jw_2, \dots ,
w_jw_{k-s}\}.$$ Observe that $\{D_i\mid 1\leq i\leq
k-s\}\cup \{D_j\mid k-s+1\leq j\leq k\}$ is a set of $k$ internally
disjoint strong subgraphs containing $S$, so $\kappa_S(D)\geq k$, and
then $\kappa_k(D)\geq k$. Combining this with the bound that
$\kappa_k(D)\leq \kappa(D)$ and the fact that $\kappa(D)\leq
\min\{\delta^+(D), \delta^-(D)\}=k$, we obtain $\kappa_k(D)=
\kappa(D)=k$.
\end{pf}
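The witness family above is concrete enough to check mechanically. The following Python sketch (our own illustration on a small hypothetical instance with $k=3$, $s=1$, $n=9\ge 3k$; the helper names are ours, not from the paper) builds the symmetric digraph of $K_k\bigvee \overline{K}_{n-k}$ together with the trees $T_i$, and verifies that they form $k$ internally disjoint strong subgraphs containing $S$.

```python
from itertools import combinations

def is_strong(vertices, arcs):
    """Strong connectivity via one forward and one backward reachability search."""
    def closure(adj, start):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj.get(u, ()):
                if w not in seen:
                    seen.add(w); stack.append(w)
        return seen
    fwd, bwd = {}, {}
    for u, w in arcs:
        fwd.setdefault(u, []).append(w)
        bwd.setdefault(w, []).append(u)
    v0 = next(iter(vertices))
    return closure(fwd, v0) == set(vertices) == closure(bwd, v0)

def sym(edges):
    """Symmetric digraph: both orientations of each undirected edge."""
    return {(a, b) for a, b in edges} | {(b, a) for a, b in edges}

# Hypothetical small instance: k = 3, s = 1, n = 9 >= 3k.
k, s, n = 3, 1, 9
W = [f"w{i}" for i in range(1, k + 1)]
U = [f"u{j}" for j in range(1, n - k + 1)]
D = sym([(w, u) for w in W for u in U] + list(combinations(W, 2)))
S = set(W[:k - s]) | set(U[:s])

witnesses = []
for i in range(1, k - s + 1):          # trees T_i of the first kind
    witnesses.append(sym([(f"w{i}", f"u{j}") for j in range(1, s + 1)]
                         + [(f"u{k+i}", f"w{j}") for j in range(1, k - s + 1)]))
for j in range(k - s + 1, k + 1):      # trees T_j of the second kind
    witnesses.append(sym([(f"w{j}", f"u{t}") for t in range(1, s + 1)]
                         + [(f"w{j}", f"w{t}") for t in range(1, k - s + 1)]))

verts = lambda A: {v for arc in A for v in arc}
assert all(A <= D and S <= verts(A) and is_strong(verts(A), A) for A in witnesses)
assert all(verts(A) & verts(B) == S and not (A & B)
           for A, B in combinations(witnesses, 2))
assert len(witnesses) == k             # hence kappa_S(D) >= k
```

The same checks go through for other admissible choices of $k$, $s$ and $n\ge 3k$.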
\section{Minimally strong subgraph $(k,\ell)$-connected digraphs}\label{sec:minimally}
Below we will use the following Hamilton cycle decomposition theorem of
Tillson.
\begin{thm}\cite{Tillson}\label{thm01}
The arcs of $\overleftrightarrow{K}_n$ can be decomposed into
Hamiltonian cycles if and only if $n\neq 4,6$.
\end{thm}
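For a small case the theorem can be verified directly: for $n=5$ the two undirected Hamiltonian cycles $1\,2\,3\,4\,5$ and $1\,3\,5\,2\,4$ decompose $K_5$, so orienting each of them in both directions yields four directed Hamiltonian cycles partitioning the $20$ arcs of $\overleftrightarrow{K}_5$. A short Python sketch of this check (our illustration, not part of any proof):

```python
def dicycle(order):
    """Arc set of the directed Hamiltonian cycle visiting `order` in turn."""
    return {(order[i], order[(i + 1) % len(order)]) for i in range(len(order))}

c1, c2 = (1, 2, 3, 4, 5), (1, 3, 5, 2, 4)
# Each undirected cycle contributes its two orientations.
cycles = [dicycle(c1), dicycle(c1[::-1]), dicycle(c2), dicycle(c2[::-1])]

all_arcs = {(u, v) for u in range(1, 6) for v in range(1, 6) if u != v}
assert set().union(*cycles) == all_arcs                # every arc is covered
assert sum(len(c) for c in cycles) == len(all_arcs)    # ... exactly once
```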
The following observation will be used in the sequel.
\begin{pro}\label{thm02}\cite{Sun-Gutin-Yeo-Zhang}
If $D'$ is a strong spanning digraph of a strong digraph $D$, then
$\kappa_k(D')\leq \kappa_k(D)$.
\end{pro}
By the definition of a minimally strong subgraph $(k,\ell)$-connected
digraph, we obtain the following observation.
\begin{pro}\label{lem1}
A digraph $D$ is minimally strong subgraph $(k,\ell)$-connected if
and only if $\kappa_k(D)= \ell$ and $\kappa_k(D-e)= \ell-1$ for any
arc $e\in A(D)$.
\end{pro}
\begin{pf} The ``if" direction is clear by definition, so we only
need to prove the ``only if" direction. Let $D$ be a minimally
strong subgraph $(k,\ell)$-connected digraph. By definition, we have
$\kappa_k(D)\geq \ell$ and $\kappa_k(D-e)\leq \ell-1$ for any arc
$e\in A(D)$. For any set $S \subseteq V(D)$ with $|S|=k$, there
is a set $\mathcal{D}$ of $\kappa_k(D)$ internally disjoint strong subgraphs containing
$S$. As $e$ belongs to at most one element of $\mathcal{D}$, we have
$\kappa_S(D-e)\geq \kappa_k(D)-1$ for every such $S$, and so $\kappa_k(D-e)\geq \kappa_k(D)-1$.
Together with $\kappa_k(D)\geq \ell$ and $\kappa_k(D-e)\leq \ell-1$, this forces
$\kappa_k(D)=\ell$ and $\kappa_k(D-e)=\ell-1$.
\end{pf}
A digraph $D$ is {\em minimally strong} if $D$ is strong but $D-e$ is not for every arc $e$ of $D$.
\begin{pro}\label{thmc}
The following assertions hold:\\
$(i)$~A digraph $D$ is minimally strong subgraph $(k,1)$-connected
if and only if $D$ is a minimally strong digraph;\\
$(ii)$~For $k\neq 4,6$, a digraph $D$ is minimally strong subgraph
$(k,n-1)$-connected if and only if $D\cong
\overleftrightarrow{K}_n$.
\end{pro}
\begin{pf}
To prove (i), it suffices to show that a digraph $D$ is strong if and only if $\kappa_k(D)\ge 1.$ If $D$ is strong, then $D$ itself is a strong subgraph containing every vertex set $S$ of size $k$. Conversely, let $\kappa_k(D)\ge 1$ and let $u,v$ be any pair of vertices of $D$. Choose a vertex set $S$ of size $k$ with $u,v\in S$; a strong subgraph of $D$ containing $S$ provides a path from $u$ to $v$ in $D$. Hence $D$ is strong.
Part (ii) follows from Proposition \ref{thm03}.
\end{pf}
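The reduction in part (i) rests on testing strong connectivity, which amounts to two reachability searches from an arbitrary vertex: one along the arcs and one along the reversed arcs. A minimal Python sketch of this test (our illustration only):

```python
def is_strong(vertices, arcs):
    """A digraph is strong iff some vertex v0 reaches every vertex and is
    reached from every vertex; check both with depth-first searches."""
    def closure(adj, start):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj.get(u, ()):
                if w not in seen:
                    seen.add(w); stack.append(w)
        return seen
    fwd, bwd = {}, {}
    for u, w in arcs:
        fwd.setdefault(u, []).append(w)
        bwd.setdefault(w, []).append(u)
    v0 = next(iter(vertices))
    return closure(fwd, v0) == set(vertices) == closure(bwd, v0)

n = 6
V = set(range(1, n + 1))
cycle = {(i, i % n + 1) for i in range(1, n + 1)}   # dicycle 1 -> 2 -> ... -> n -> 1
assert is_strong(V, cycle)                          # so kappa_k(C_n) >= 1 for every k
assert not is_strong(V, cycle - {(n, 1)})           # a directed path is not strong
```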
The following result characterizes minimally
strong subgraph $(2,n-2)$-connected digraphs.
\begin{thm}\label{thme}
A digraph $D$ is minimally strong subgraph $(2,n-2)$-connected if and only if $D$ is a digraph obtained from the complete digraph
$\overleftrightarrow{K}_n$ by deleting an arc set $M$ such that $\overleftrightarrow{K}_n[M]$ is a 3-cycle
or a union of $\lfloor n/2\rfloor$ vertex-disjoint 2-cycles. In particular,
we have $f(n,2,n-2)=n(n-1)-2\lfloor n/2\rfloor$, $F(n,2,n-2)=n(n-1)-3$.
\end{thm}
\begin{pf}
Let $D\cong \overleftrightarrow{K}_n-M$ be a digraph obtained from
the complete digraph $\overleftrightarrow{K}_n$ by deleting an arc
set $M$. Let $V(D)=\{u_i\mid 1\leq i\leq n\}$.
Firstly, we will
consider the case that $\overleftrightarrow{K}_n[M]$ is a 3-cycle
$u_1u_2u_3u_1$. We now prove that $\kappa_2(D)=n-2$. By
(\ref{eq:1}), we have $\kappa_2(D)\leq \min\{\delta^+(D),
\delta^-(D)\}=n-2$. Let $S=\{u,v\} \subseteq V(D)$; we just consider
the case that $u=u_1,v=u_2$ since the other cases are similar. Let
$D_1$ be a subdigraph of $D$ with $V(D_1)=\{u_1,u_2,u_3\}$ and
$A(D_1)=\{u_1u_3, u_3u_2, u_2u_1\}$; for $2\leq i\leq n-2$, let
$D_i$ be a subdigraph of $D$ with $V(D_i)=\{u_1,u_2,u_{i+2}\}$ and
$A(D_i)=\{u_1u_{i+2}, u_2u_{i+2}, u_{i+2}u_1, u_{i+2}u_2\}$.
Clearly, $\{D_i\mid 1\leq i\leq n-2\}$ is a set of $n-2$ internally
disjoint strong subgraphs containing $S$, so $\kappa_S(D)\geq n-2$
and $\kappa_2(D)\geq n-2$. Hence, $\kappa_2(D)= n-2$.
For any $e\in A(D)$, without loss of generality, one of the two digraphs in Figure \ref{figure1}
is a subgraph of $\overleftrightarrow{K}_n[M\cup \{e\}]$,
so if the following claim holds, then we must have
$\kappa_2(D-e)\leq \kappa_2(D') \leq n-3$ by Proposition
\ref{thm02}, and so $D$ is minimally strong subgraph
$(2,n-2)$-connected. Now it suffices to prove the following claim.
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[scale=0.80]{figure1.eps}
\end{center}
\caption{Two graphs for Claim 1.}\label{figure1}
\end{figure}
\noindent \textbf {Claim 1.} If $\overleftrightarrow{K}_n[M']$ is isomorphic
to one of the two graphs in Figure \ref{figure1}, then $\kappa_2(D')\leq
n-3$, where $D'=\overleftrightarrow{K}_n-M'$.
\noindent {\it Proof of Claim 1.} We first show that $\kappa_2(D')\leq n-3$ if
$\overleftrightarrow{K}_n[M']$ is the digraph of Figure \ref{figure1} $(a)$. Let
$S=\{u_2, u_4\}$; we will prove that $\kappa_S(D')\leq n-3$, and
then we are done. Suppose, to the contrary, that $\kappa_S(D')\geq n-2$; then there
exists a set of $n-2$ internally disjoint strong subgraphs
containing $S$, say $\{D_i\mid 1\leq i\leq n-2\}$. If both of the
two arcs $u_2u_4$ and $u_4u_2$ belong to the same $D_i$, say $D_1$,
then for $2\leq i\leq n-2$, each $D_i$ contains at least one vertex
and at most two vertices of $\{u_i\mid 1\leq i\leq n, i\neq 2,4\}$.
Furthermore, there is at most one $D_i$, say $D_2$, that contains
exactly two vertices of $\{u_i\mid 1\leq i\leq n, i\neq 2,4\}$. We
just consider the case that $u_1,u_3\in V(D_2)$ since the other
cases are similar. In this case, we must have that each vertex of
$\{u_i\mid 5\leq i\leq n\}$ belongs to exactly one digraph from
$\{D_i\mid 3\leq i\leq n-2\}$ and vice versa.
However, this is impossible since the vertex set $\{u_2, u_4,
u_5\}$ cannot induce a strong subgraph of $D'$ containing $S$, a
contradiction.
So we now assume that each $D_i$ contains at most one
of $u_2u_4$ and $u_4u_2$. Without loss of generality, we may assume that
$u_2u_4\in A(D_1)$ and $u_4u_2\in A(D_2)$. In this case, we must
have that each vertex of $\{u_i\mid 1\leq i\leq n, i\neq 2,4\}$
belongs to exactly one digraph from $\{D_i\mid 1\leq i\leq n-2\}$ and vice versa.
However, this is
also impossible since the vertex set $\{u_2, u_4, u_5\}$ cannot
induce a strong subgraph of $D'$ containing $S$, a contradiction.
Hence, we
have $\kappa_2(D')\leq n-3$ in this case. For the case that $\overleftrightarrow{K}_n[M']$ is
the digraph of Figure \ref{figure1} $(b)$, we can choose
$S=\{u_2, u_3\}$ and prove that $\kappa_S(D')\leq n-3$ with a
similar argument, and so $\kappa_2(D')\leq n-3$ in this case. This
completes the proof of the claim.
\vspace{2mm}
Secondly, we consider the case that $\overleftrightarrow{K}_n[M]$
is a union of $\lfloor n/2\rfloor$ vertex-disjoint 2-cycles. Without
loss of generality, we may assume that $M=\{u_{2i-1}u_{2i}, u_{2i}u_{2i-1}\mid 1\leq
i\leq \lfloor n/2\rfloor\}$. We just consider the case that
$S=\{u_1, u_3\}$ since the other cases are similar. In this case,
let $D_1$ be the subgraph of $D$ with $V(D_1)=\{u_1, u_3\}$ and
$A(D_1)=\{u_1u_3, u_3u_1\}$; let $D_2$ be the subgraph of $D$ with
$V(D_2)=\{u_1,u_2,u_3,u_4\}$ and $A(D_2)=\{u_1u_4, u_4u_1, u_2u_4,
u_4u_2, u_2u_3, u_3u_2\}$; for $3\leq i\leq n-2$, let $D_i$ be the
subgraph of $D$ with $V(D_i)=\{u_1,u_3,u_{i+2}\}$ and
$A(D_i)=\{u_1u_{i+2}, u_3u_{i+2}, u_{i+2}u_1, u_{i+2}u_3\}$.
Clearly, $\{D_i\mid 1\leq i\leq n-2\}$ is a set of $n-2$ internally
disjoint strong subgraphs containing $S$, so $\kappa_S(D)\geq n-2$
and then $\kappa_2(D)\geq n-2$. By (\ref{eq:1}), we have
$\kappa_2(D)\leq \min\{\delta^+(D), \delta^-(D)\}=n-2$. Hence,
$\kappa_2(D)= n-2$. Let $e\in A(D)$; clearly $e$ must be incident with
at least one vertex of $\{u_i\mid 1\leq i\leq 2\lfloor
n/2\rfloor\}$. Then we have that $\kappa_2(D-e)\leq
\min\{\delta^+(D-e), \delta^-(D-e)\}=n-3$ by (\ref{eq:1}).
Hence, $D$ is minimally strong subgraph $(2,n-2)$-connected.
\vspace{2mm} Now let $D$ be minimally strong subgraph
$(2,n-2)$-connected. By Proposition~\ref{thm03}, we
have that $D\not \cong \overleftrightarrow{K}_n$, that is, $D$ can
be obtained from a complete digraph $\overleftrightarrow{K}_n$ by
deleting a nonempty arc set $M$. To end our argument, we need the
following three claims. Let us start from a simple yet useful
observation.
\begin{pro}\label{pro:HT}
No pair of arcs in $M$ has
a common head or tail.
\end{pro}
{\it Proof of Proposition \ref{pro:HT}.}
By (\ref{eq:1}) no pair of arcs in $M$ has
a common head or tail, as otherwise we would have $\kappa_2(D)\le n-3$.
\vspace{3mm}
\noindent \textbf {Claim 2.} $|M|\geq 3$.
\noindent {\it Proof of Claim 2.} Suppose that $|M|\le 2$. We may assume that $|M|=2$,
as the case of $|M|=1$ can be treated in a similar and simpler way.
First let the arcs of $M$ have no common vertex; without loss of generality, $M=\{u_1u_2,u_3u_4\}$. Then $\kappa_2(D-u_2u_1)=n-2$,
as $D-u_2u_1$ is a supergraph of $\overleftrightarrow{K}_n-M_0$, where $M_0$ is a union of $\lfloor n/2\rfloor$
vertex-disjoint 2-cycles containing the 2-cycles $u_1u_2u_1$ and $u_3u_4u_3$. Thus, $D$ is not minimally strong
subgraph $(2,n-2)$-connected. Next let the arcs of $M$ have exactly one common vertex. By Proposition \ref{pro:HT}, without loss of generality, $M=\{u_1u_2,u_2u_3\}$.
Then $\kappa_2(D-u_3u_1)=n-2$, as we showed in the beginning of the proof of this theorem. Thus, $D$ is not minimally strong
subgraph $(2,n-2)$-connected.
Finally, let the arcs of $M$ have the same vertices, i.e., without loss
of generality, $M=\{u_1u_2,u_2u_1\}$. As in the first case, $\kappa_2(D-u_3u_4)=n-2$ and $D$ is not minimally strong
subgraph $(2,n-2)$-connected.
\vspace{3mm}
\noindent \textbf {Claim 3.} If $|M|= 3$, then $\overleftrightarrow{K}_n[M]$
is a 3-cycle.
\noindent {\it Proof of Claim 3.} Suppose that $D$ is minimally strong subgraph
$(2,n-2)$-connected, but $\overleftrightarrow{K}_n[M]$
is not a 3-cycle.
By Proposition \ref{pro:HT}, no pair of arcs in $M$ has
a common head or tail.
Thus, $\overleftrightarrow{K}_n[M]$ must be isomorphic to one of the graphs in
Figures \ref{figure1} and \ref{figure3}. If
$\overleftrightarrow{K}_n[M]$ is isomorphic to one of the graphs in
Figure \ref{figure1}, then $\kappa_2(D)\leq n-3$ by Claim 1 and so
$D$ is not minimally strong subgraph $(2,n-2)$-connected, a contradiction.
For an arc set $M_0$ such that
$\overleftrightarrow{K}_n[M_0]$ is a union of $\lfloor n/2\rfloor$
vertex-disjoint 2-cycles, by the argument before, we know that
$\overleftrightarrow{K}_n-M_0$ is minimally strong subgraph
$(2,n-2)$-connected. For the case that $\overleftrightarrow{K}_n[M]$
is isomorphic to $(a)$ or $(b)$ in Figure \ref{figure3}, we have
that $\overleftrightarrow{K}_n-M_0$ is a proper spanning subdigraph of
$\overleftrightarrow{K}_n-M$ for a suitable choice of $M_0\supset M$, so $D=\overleftrightarrow{K}_n-M$
cannot be minimally strong subgraph $(2,n-2)$-connected, which also
produces a contradiction. Hence, the claim holds.
\begin{figure}[!hbpt]
\begin{center}
\includegraphics[scale=0.80]{figure3.eps}
\end{center}
\caption{Two graphs for Claim 3.}\label{figure3}
\end{figure}
\vspace{3mm}
\noindent \textbf {Claim 4.} If $|M|> 3$, then $\overleftrightarrow{K}_n[M]$
is a union of $\lfloor n/2\rfloor$ vertex-disjoint 2-cycles.
\noindent {\it Proof of Claim 4.} Suppose that $D$ is minimally strong subgraph
$(2,n-2)$-connected, but $\overleftrightarrow{K}_n[M]$
is not a union of $\lfloor n/2\rfloor$ vertex-disjoint 2-cycles.
By Claim 1 and Proposition \ref{thm02}, we
have that $\overleftrightarrow{K}_n[M]$ does not contain either graph in Figure
\ref{figure1} as a subgraph. Then $\overleftrightarrow{K}_n[M]$
does not contain a path of length
at least three. Hence, the underlying undirected graph of $M$ has at least
two connected components. By the fact that if $\overleftrightarrow{K}_n[M]$ is a
3-cycle, then $\overleftrightarrow{K}_n-M$ is minimally
strong subgraph $(2,n-2)$-connected, we conclude that
$\overleftrightarrow{K}_n[M]$ does not contain a cycle of length
three. By Claim 1, $\overleftrightarrow{K}_n[M]$ does not
contain a path of length two. By Proposition \ref{pro:HT}, no pair of arcs in $M$ has
a common head or tail. Hence, each connected component of
$\overleftrightarrow{K}_n[M]$ must be a 2-cycle or an arc. Since $D$ is
minimally strong subgraph $(2,n-2)$-connected, no connected component of $\overleftrightarrow{K}_n[M]$
is an arc, so $\overleftrightarrow{K}_n[M]$ is a union of vertex-disjoint 2-cycles.
Moreover, if there were fewer than $\lfloor n/2\rfloor$ of them, then for an arc $e$ joining two
vertices not covered by $M$ the digraph $D-e$ would contain $\overleftrightarrow{K}_n-M_0$ as a
spanning subgraph for some union $M_0\supset M$ of $\lfloor n/2\rfloor$ vertex-disjoint
2-cycles, so $\kappa_2(D-e)=n-2$, contradicting minimality.
We have arrived at a contradiction, proving Claim 4.
Hence, if a digraph $D$ is minimally strong subgraph
$(2,n-2)$-connected, then $D\cong \overleftrightarrow{K}_n-M$, where
$\overleftrightarrow{K}_n[M]$ is a cycle of order three or a union
of $\lfloor n/2\rfloor$ vertex-disjoint 2-cycles.
Now the claimed values of $F(n,2,n-2)$ and $f(n,2,n-2)$ can easily be verified.
\end{pf}
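The first witness family in the proof above (the case where $\overleftrightarrow{K}_n[M]$ is a 3-cycle) can be checked mechanically for a small $n$. The Python sketch below (our illustration, with $n=6$ and $S=\{u_1,u_2\}$) rebuilds the subgraphs $D_1,\dots,D_{n-2}$ and verifies that they are internally disjoint strong subgraphs of $D$ containing $S$.

```python
from itertools import combinations

def is_strong(vertices, arcs):
    """Strong connectivity via one forward and one backward reachability search."""
    def closure(adj, start):
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for w in adj.get(u, ()):
                if w not in seen:
                    seen.add(w); stack.append(w)
        return seen
    fwd, bwd = {}, {}
    for u, w in arcs:
        fwd.setdefault(u, []).append(w)
        bwd.setdefault(w, []).append(u)
    v0 = next(iter(vertices))
    return closure(fwd, v0) == set(vertices) == closure(bwd, v0)

n = 6
V = range(1, n + 1)
M = {(1, 2), (2, 3), (3, 1)}                      # the deleted 3-cycle u1 u2 u3 u1
D = {(u, v) for u in V for v in V if u != v} - M
S = {1, 2}

family = [{(1, 3), (3, 2), (2, 1)}]               # D_1 on {u1, u2, u3}
for i in range(2, n - 1):                          # D_i on {u1, u2, u_{i+2}}
    family.append({(1, i + 2), (2, i + 2), (i + 2, 1), (i + 2, 2)})

verts = lambda A: {v for arc in A for v in arc}
assert all(A <= D and S <= verts(A) and is_strong(verts(A), A) for A in family)
assert all(verts(A) & verts(B) == S and not (A & B)
           for A, B in combinations(family, 2))
assert len(family) == n - 2                        # hence kappa_S(D) >= n - 2
```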
Note that Theorem \ref{thme} implies that
$Ex(n,2,n-2)=\{\overleftrightarrow{K_n}-M\}$ where $M$ is an arc set
such that $\overleftrightarrow{K}_n[M]$ is a directed 3-cycle, and
$ex(n,2,n-2)=\{\overleftrightarrow{K_n}-M\}$ where $M$ is an arc set
such that $\overleftrightarrow{K}_n[M]$ is a union of $\lfloor
n/2\rfloor$ vertex-disjoint directed 2-cycles.
The following result concerns a sharp lower bound for the parameter
$f(n,k,\ell)$.
\begin{thm}\label{thma} For $2\leq k\leq n$, we have
$$f(n,k,\ell)\geq n\ell.$$
Moreover, the following assertions hold:\\
$(i)$~If $\ell=1$, then $f(n,k,\ell)=n$;\\
$(ii)$~If $2\leq \ell\leq n-1$ and $k=n\not\in \{4,6\}$, then
$f(n,n,\ell)=n\ell$;\\
$(iii)$~If $n$ is even and $\ell = n-2$, then $f(n,2,\ell)=n\ell.$
\end{thm}
\begin{pf}
By (\ref{eq:1}), for all digraphs $D$ and $k \geq 2$ we have
$\kappa_k(D) \leq \delta^+(D)$ and $\kappa_k(D) \leq \delta^-(D)$.
Hence for each $D$ with $\kappa_k(D)=\ell$, we have that
$\delta^+(D), \delta^-(D)\geq \ell$, so $|A(D)|\geq n\ell$ and then
$f(n,k,\ell)\geq n\ell.$
For the case that $\ell=1$, let $D$ be a dicycle
$\overrightarrow{C_n}$. Clearly, $D$ is minimally strong subgraph
$(k,1)$-connected, and we know $|A(D)|=n$, so $f(n,k,1)= n$.
For the case that $k=n \not\in \{4,6\}$ and $2\leq \ell\leq n-1$,
let $D\cong \overleftrightarrow{K_n}$. By Theorem \ref{thm01}, $D$
can be decomposed into $n-1$ Hamiltonian cycles $H_i~(1\leq i\leq
n-1)$. Let $D_{\ell}$ be the spanning subdigraph of $D$ with arc
set $A(D_{\ell})=\bigcup_{1\leq i\leq \ell}{A(H_i)}$. Clearly, we
have $\kappa_n(D_{\ell})\geq \ell$ for $2\leq \ell\leq n-1$.
Furthermore, by (\ref{eq:1}), we have $\kappa_n(D_{\ell})\leq
\ell$ since the in-degree and out-degree of each vertex in $D_{\ell}$
are both $\ell$. Hence, $\kappa_n(D_{\ell})= \ell$ for $2\leq
\ell\leq n-1$. For any $e\in A(D_{\ell})$, we have
$\delta^+(D_{\ell}-e)=\delta^-(D_{\ell}-e)=\ell-1$, so
$\kappa_n(D_{\ell}-e)\leq \ell-1$ by (\ref{eq:1}).
Thus, $D_{\ell}$ is minimally strong
subgraph $(n,\ell)$-connected. As $|A(D_{\ell})|=n\ell$, we have
$f(n,n,\ell)\leq n\ell$. From the lower bound that $f(n,k,\ell)\geq
n\ell$, we have $f(n,n,\ell)= n\ell$ for the case that $2\leq
\ell\leq n-1, n\not\in \{4,6\}$.
Part (iii) follows directly from Theorem
\ref{thme}.
\end{pf}
To prove two upper bounds on the number of arcs in a minimally strong subgraph $(k,\ell)$-connected digraph, we will use the following result, see e.g. \cite{Bang-Jensen-Gutin}.
\begin{thm}\label{2n-2-thm}
Every strong digraph $D$ on $n$ vertices has a strong spanning subgraph $H$ with at most $2n-2$ arcs, with equality only if $H$ is a symmetric digraph whose underlying undirected graph is a tree.
\end{thm}
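One standard way to see the $2n-2$ bound is to take the union of an out-branching and an in-branching rooted at the same vertex: each branching has $n-1$ arcs, every vertex is reachable from the root in the first and reaches the root in the second, so the union is strong. The Python sketch below (our illustration on a hypothetical 5-vertex digraph, not the proof in \cite{Bang-Jensen-Gutin}) computes both branchings as BFS trees.

```python
from collections import deque

def reach_tree(vertices, adj, root):
    """BFS tree from `root`: returns {child: parent} covering all vertices."""
    parent, seen, queue = {}, {root}, deque([root])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, ()):
            if w not in seen:
                seen.add(w); parent[w] = u; queue.append(w)
    assert seen == set(vertices)      # holds because the digraph is strong
    return parent

# A sample strong digraph on 5 vertices with a few redundant arcs.
V = {1, 2, 3, 4, 5}
arcs = {(1, 2), (2, 3), (3, 4), (4, 5), (5, 1), (1, 3), (3, 5), (2, 5), (4, 2)}
fwd, bwd = {}, {}
for u, w in arcs:
    fwd.setdefault(u, []).append(w)
    bwd.setdefault(w, []).append(u)

# Out-branching from 1 (BFS along the arcs) and in-branching to 1
# (BFS along the reversed arcs, with tree arcs re-reversed).
out_tree = {(p, c) for c, p in reach_tree(V, fwd, 1).items()}
in_tree = {(c, p) for c, p in reach_tree(V, bwd, 1).items()}
H = out_tree | in_tree                # strong: all reach 1 and 1 reaches all
assert H <= arcs and len(H) <= 2 * len(V) - 2
```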
\begin{pro}
We have
$(i)$~$F(n,n,\ell)\le 2\ell(n-1)$;
$(ii)$~For every $k$ $(2\le k\le n)$, $F(n,k,1)=2(n-1)$ and $Ex(n,k,1)$ consists of
symmetric digraphs whose underlying undirected graphs are trees.
\end{pro}
\begin{pf}
$(i)$~Let $D=(V,A)$ be a minimally strong subgraph
$(n,\ell)$-connected digraph, and let $D_1,\dots ,D_{\ell}$ be
arc-disjoint strong spanning subgraphs of $D$. Since $D$ is
minimally strong subgraph $(n,\ell)$-connected and $D_1,\dots
,D_{\ell}$ are pairwise arc-disjoint,
$|A|=\sum_{i=1}^{\ell}|A(D_i)|.$ Thus, by Theorem \ref{2n-2-thm},
$|A|\le 2\ell(n-1).$
$(ii)$~In the proof of Proposition \ref{thmc} we showed that a digraph $D$ is strong if and only if $\kappa_k(D)\ge 1.$ Hence a digraph is minimally strong subgraph $(k,1)$-connected if and only if it is minimally strong. By Theorem \ref{2n-2-thm}, every such digraph $D$ satisfies $|A(D)|\le 2(n-1)$, and if $D \in Ex(n,k,1)$ then $|A(D)|=2(n-1)$ and $D$ is a symmetric digraph whose underlying undirected graph is a tree.
\end{pf}
\section{Discussion}
Perhaps the most interesting result of this paper is the characterization of minimally
strong subgraph $(2,n-2)$-connected digraphs. As a simple consequence of the characterization, we can
determine the values of $f(n,2,n-2)$ and $F(n,2,n-2)$. It would be interesting to determine $f(n,k,n-2)$ and $F(n,k,n-2)$ for every value of $k\ge 3$.
(Obtaining characterizations of all minimally strong subgraph $(k,n-2)$-connected digraphs for $k\ge 3$ seems a very difficult problem.)
It would also be interesting to find a sharp upper bound for $F(n,k,\ell)$ for all $k\ge 2$ and $\ell\ge 2$.
\vskip 1cm \noindent {\bf Acknowledgements.} Yuefang Sun was
supported by National Natural Science Foundation of China (No.
11401389). Gregory Gutin was partially supported by Royal Society
Wolfson Research Merit Award.
\section{Introduction}
In Dynamical Systems and Differential Equations it is important to determine the stability of trajectories
and a well known technique for this purpose is to find a Lyapunov function.
In order to fix ideas consider a continuous flow $\phi\colon \mathbb{R}\times X\to X$
on a compact metric space $(X,\dist)$ with a singular (or equilibrium) point $p\in X$,
i.e., $\phi_t(p)=p$ for all $t\in\mathbb{R}$.
A Lyapunov function for $p$
is a continuous non-negative function that vanishes only at $p$
and strictly decreases along the orbits close to $p$.
Recall that $p$ is \emph{stable} if for all $\epsilon>0$ there is $\delta>0$ such that if $\dist(x,p)<\delta$ then $\dist(\phi_t(x),p)<\epsilon$ for all $t\geq 0$.
We say that $p$ is \emph{asymptotically stable} if it is stable and there is $\delta_0>0$ such that
if $\dist(x,p)<\delta_0$ then $\phi_t(x)\to p$ as $t\to+\infty$.
The existence of a Lyapunov function for an equilibrium point implies the asymptotic
stability of the equilibrium point.
A remarkable result, first proved by Massera in \cite{Massera49}, is the converse: every asymptotically stable singular point
admits a Lyapunov function.
Later, other authors obtained Lyapunov functions with different methods,
see for example \cites{BaSz,Conley78}.
In \cite{Hur} a generalization is proved in the context of arbitrary metric spaces.
The purpose of the present paper is to develop a different technique that allows us to
construct Lyapunov functions for different dynamical systems as: isolated sets, expansive homeomorphisms and continuum-wise expansive homeomorphisms.
Our techniques are based on the size function $\mu$ introduced by Whitney in \cite{Whitney33}.
In order to motivate our work let us show how to construct a Lyapunov function for an
asymptotically stable singular point.
Denote by $\mathcal{K}(X)$ the set of non-empty compact subsets of $X$.
In the set $\mathcal{K}(X)$ we consider the Hausdorff distance
$\dist_H$ making $(\mathcal{K}(X),\dist_H)$ a metric space.
Recall that
\[
\dist_H(A,B)=\inf\{\epsilon>0:A\subset B_\epsilon(B)\hbox{ and } B\subset B_\epsilon(A)\},
\]
where $B_\epsilon(C)=\cup_{x\in C} B_\epsilon(x)$ and $B_\epsilon(x)$ is the usual ball of radius $\epsilon$
centered at $x$.
See \cite{Nadler} for more on the Hausdorff metric.
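For finite point sets the infimum in this definition is attained and reduces to a max-min formula, which makes the metric easy to compute. A Python sketch (our illustration, for finite subsets of the plane):

```python
import math

def hausdorff(A, B):
    """Hausdorff distance between finite non-empty point sets in the plane;
    for finite sets the inf-over-epsilon definition reduces to this
    max of the two directed max-min distances."""
    def directed(P, Q):
        return max(min(math.dist(p, q) for q in Q) for p in P)
    return max(directed(A, B), directed(B, A))

A = [(0.0, 0.0), (1.0, 0.0)]
B = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
assert hausdorff(A, A) == 0.0
assert hausdorff(A, B) == 1.0   # (1, 1) is at distance 1 from A; A sits inside B
```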
A \emph{size function} is a continuous map $\mu\colon\mathcal{K}(X)\to\mathbb{R}$ satisfying:
\begin{enumerate}
\item $\mu(A)\geq 0$ with equality if and only if $A$ has only one point,
\item if $A\subset B$ and $A\neq B$ then $\mu(A)<\mu(B)$.
\end{enumerate}
In \cite{Whitney33} it is proved that size functions exist for every compact metric space.
\begin{thm}
\label{estAs}
If $\phi$ is a continuous flow on $X$ with an asymptotically stable singular point $p$ then
there are an open set $U$ containing $p$ and a continuous function $V\colon U\to \mathbb{R}$ satisfying:
\begin{enumerate}
\item $V(x)\geq 0$ for all $x\in U$ with equality if and only if $x=p$ and
\item if $t>0$ and $\{\phi_s(x): s\in[0,t]\}\subset U$ then $V(\phi_t(x))<V(x)$.
\end{enumerate}
\end{thm}
\begin{proof}
By the conditions on $p$ there are $\delta_0,\delta>0$ such that if $\dist(x,p)<\delta$ then
$\phi_t(x)\in B_{\delta_0}(p)$ for all $t\geq 0$ and $\phi_t(x)\to p$ as $t\to\infty$.
Define $U=B_\delta(p)$ and $V\colon U\to \mathbb{R}$ as
\[
V(x)=\mu(\{\phi_t(x):t\geq 0\}\cup\{p\})
\]
where $\mu$ is a size function.
Since $\phi_t(x)\to p$ we have that
\begin{equation}\label{ecuO}
O(x)=\{\phi_t(x):t\geq 0\}\cup\{p\}
\end{equation}
is a compact set for all $x\in U$.
Notice that if $t>0$ then $O(\phi_t(x))\subset O(x)$ and the inclusion is proper.
Therefore, $V(\phi_t(x))<V(x)$ because $\mu$ is a size function.
Also notice that $V(p)=0$ and $V(x)>0$ if $x\neq p$.
In order to prove the continuity of $V$,
we will prove the continuity of $O\colon U\to \mathcal{K}(X)$, the map defined by (\ref{ecuO}).
Since $\mu$ is continuous we will conclude the continuity of $V$.
Let us prove the continuity of $O$ at $x\in U$.
Take $\epsilon>0$.
By the asymptotic stability of $p$ there are $\rho,T>0$ such that
if $y\in B_\rho(x)$ then $\phi_t(y)\in B_{\epsilon/2}(p)$ for all $t\geq T$.
By the continuity of the flow, there is $r>0$ such that if
$y\in B_r(x)$ then $\dist(\phi_t(x),\phi_t(y))<\epsilon$ for all
$t\in[0,T]$.
Now it is easy to see that
if $y\in B_{\min\{\rho,r\}}(x)$ then
$\dist_H(O(x),O(y))<\epsilon$, proving the continuity of $O$ at $x$ and consequently the continuity of $V$.
\end{proof}
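The construction in the proof can be imitated numerically. The sketch below (our illustration; all parameters are ad hoc) uses the contracting flow $\phi_t(x)=e^{-t}x$ on $[0,1]$ with equilibrium $p=0$, approximates the orbit closure $O(x)$ by sampling, builds a Whitney-style size function from a finite sample of the space, and observes that $V=\mu\circ O$ decreases along orbits and vanishes only at $p$.

```python
import math

def mu(A, X):
    """Whitney-style size function on a list of reals A; the finite sample X
    of the space plays the role of the dense sequence q_1, q_2, ..."""
    total = 0.0
    for i, q in enumerate(X, start=1):
        dists = [abs(q - a) for a in A]
        total += (max(dists) - min(dists)) / 2 ** i
    return total

phi = lambda t, x: math.exp(-t) * x          # contracting flow on [0, 1], p = 0
X = [j / 20 for j in range(21)]              # sample points of the space

def V(x, T=40.0, steps=4000):
    """V(x) = mu(O(x)), with the forward orbit closure O(x) approximated by
    sampling phi_t(x) on [0, T] and adjoining the equilibrium p = 0."""
    orbit = [phi(T * k / steps, x) for k in range(steps + 1)] + [0.0]
    return mu(orbit, X)

values = [V(phi(t, 1.0)) for t in (0.0, 0.5, 1.0, 2.0)]
assert values[0] > values[1] > values[2] > values[3] > 0.0   # V decreases
assert V(0.0) == 0.0                                         # V vanishes at p
```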
Let us recall that size functions can be easily defined.
A variation of the construction given in \cite{Whitney33}, adapted for compact metric spaces, is the following.
Let $q_1,q_2,q_3,\dots$ be a sequence dense in $X$.
Define $\mu_i\colon \mathcal{K}(X)\to\mathbb{R}$ as
\[
\mu_i(A)=\max_{x\in A}\dist(q_i,x)-\min_{x\in A}\dist(q_i,x).
\]
The following formula defines a size function $\mu\colon \mathcal{K}(X)\to\mathbb{R}$
\[
\mu(A)=\sum_{i=1}^\infty \frac{\mu_i(A)}{2^i},
\]
as proved in \cite{Whitney33}.
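For a finite metric space this construction can be carried out verbatim, taking the whole space as the dense sequence. The Python sketch below (our illustration) checks the two defining properties of a size function on subsets of a small grid in the plane.

```python
import math

def size_function(X):
    """Whitney-style size function on subsets of a finite metric space X
    (points in the plane), following the dense-sequence construction above;
    here all of X serves as the dense sequence q_1, q_2, ..."""
    def mu(A):
        total = 0.0
        for i, q in enumerate(X, start=1):
            dists = [math.dist(q, a) for a in A]
            total += (max(dists) - min(dists)) / 2 ** i
        return total
    return mu

X = [(x / 4, y / 4) for x in range(5) for y in range(5)]   # a 5x5 grid
mu = size_function(X)
A = [(0.0, 0.0)]
B = [(0.0, 0.0), (0.5, 0.0)]
C = [(0.0, 0.0), (0.5, 0.0), (0.5, 0.5)]
assert mu(A) == 0.0            # singletons have size zero
assert 0.0 < mu(B) < mu(C)     # strictly monotone under proper inclusion here
```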
In Section \ref{secLyapIso} we extend Theorem \ref{estAs} by
constructing a Lyapunov function for an isolated invariant set.
For the study of expansive homeomorphisms (see Definition \ref{dfExp}), Lewowicz introduced Lyapunov functions in \cite{Lew}.
He proved that expansiveness is equivalent to the existence of such a function.
In Section \ref{secLyapHomeo} we give a different proof of this result by constructing a Lyapunov function
defined for compact subsets of the space.
In \cite{Kato} Kato introduced another form of expansiveness called continuum-wise expansiveness (see Definition \ref{dfCwExp}).
With our techniques we prove that continuum-wise expansiveness is equivalent to the existence
of a Lyapunov function on subcontinua of the space.
\section{Lyapunov Functions for Isolated Sets}
\label{secLyapIso}
In this section we consider continuous flows on compact metric spaces.
The purpose is to construct a Lyapunov function for an isolated set of the flow using a size function.
First we consider the case of an isolated set consisting of a point.
\subsection{Isolated Singularities}
Let $\phi$ be a continuous flow on a compact metric space $(X,\dist)$.
A point $p\in X$ is \emph{singular} for $\phi$ if $\phi_t(p)=p$ for all $t\in\mathbb{R}$.
A singular point $p\in X$ is \emph{isolated} if there is an open \emph{isolating neighborhood} $U$ of $p$ such that
if $\phi_\mathbb{R}(x)\subset U$ then $x=p$.
\begin{df}
\label{dfAdNei}An open set $U$ is an \emph{adapted neighborhood} of an isolated singular point $p\in U$ if
for every orbit segment $l\subset\clos(U)$ with extreme points in $U$ it holds that
$l\subset U$.
\end{df}
Given a set $A\subset X$ and $x\in A$ denote by $\comp_x(A)$ the connected component of $A$ that contains the point $x$.
\begin{prop}
Every isolated singular point has an adapted neighborhood.
\end{prop}
\begin{proof}
Let $r>0$ be such that $\clos(B_r(p))$ is contained in an isolating neighborhood of $p$.
For $\rho\in (0,r)$ define the set
\[
U_\rho=\{x\in B_r(p):\comp_x(\phi_\mathbb{R}(x)\cap B_r(p))\cap B_\rho(p)\neq\emptyset\}.
\]
By the continuity of the flow we have that $U_\rho$ is an open set for all $\rho\in(0,r)$.
Let us prove that if $\rho$ is sufficiently small then $U_\rho$ is an adapted neighborhood.
By contradiction, suppose that there are $\rho_n\to 0$, $a_n,b_n\in U_{\rho_n}$, $t_n\geq 0$ such that $b_n=\phi_{t_n}(a_n)$ and
$l_n=\phi_{[0,t_n]}(a_n)\subset \clos(U_{\rho_n})$ but $l_n$ is not contained in $U_{\rho_n}$.
Then there is $s_n\in(0,t_n)$ such that $\phi_{s_n}(a_n)\in\partial B_r(p)$.
Also, there must be $u_n<0$ and $v_n>0$ such that $\phi_{u_n}(a_n),\phi_{v_n}(b_n)\in B_{\rho_n}(p)$.
But a limit point of $\phi_{s_n}(a_n)$ contradicts that $\clos(B_r(p))$ is contained in an isolating neighborhood of $p$.
\end{proof}
Fix an isolated point $p$ with an adapted neighborhood $U$.
Consider the sets
\[
\begin{array}{l}
\displaystyle W^s_U(p)=\{x\in U : \lim_{t\to+\infty}\phi_t(x)= p\hbox{ and } \phi_{\mathbb{R}^+}(x)\subset U\},\\
\displaystyle W^u_U(p)=\{x\in U : \lim_{t\to-\infty}\phi_t(x)= p\hbox{ and } \phi_{\mathbb{R}^-}(x)\subset U\}.
\end{array}
\]
For $x\in U$ define the orbit segments
\[
\begin{array}{l}
O^+_U(x)=\comp_x(U\cap \phi_{[0,+\infty)}(x)),\\
O^-_U(x)=\comp_x(U\cap \phi_{(-\infty,0]}(x)).
\end{array}
\]
Define $C=X\setminus U$ and let $V_p^+,V_p^-\colon U\to \mathcal{K}(X)$ be defined as
\[
\left\{
\begin{array}{l}
V_p^+(x)=\clos(O^+_U(x)\cup W^u_U(p))\cup C,\\
V_p^-(x)=\clos(O^-_U(x)\cup W^s_U(p))\cup C.
\end{array}
\right.
\]
\begin{df}
A \emph{Lyapunov function} for an isolated point $p$ is a continuous map $V\colon U\to\mathbb{R}$
defined in a neighborhood of $p$ such that if $t>0$ and $\phi_{[0,t]}(x)\subset U\setminus\{p\}$
then
$V(x)>V(\phi_t(x))$.
\end{df}
\begin{thm}
\label{LyapSing}
If $p$ is an isolated point and $U$ is an adapted neighborhood of $p$
then the maps $V_p^+$ and $V_p^-$ are continuous
in $U$.
If in addition, $\mu$ is a size function on $\mathcal{K}(X)$
then $V\colon U\to\mathbb{R}$ defined as
\[
V(x)=\mu(V_p^+(x))-\mu(V_p^-(x))
\]
is a Lyapunov function for $p$.
\end{thm}
\begin{proof}
Let us prove the continuity of $V_p^+$ by contradiction.
Assume that $x_n\to x\in U$ and $V^+_p(x_n)\to K$ in the Hausdorff distance, but $K\neq V^+_p(x)$.
By the definitions we have that
\begin{equation}\label{incluComp}
\clos(W^u_U(p))\cup C\subset K\cap V^+_p(x).
\end{equation}
Recall that $C$ was defined as the complement of $U$ in $X$.
Take a point $y\in (K\setminus V^+_p(x)) \cup (V^+_p(x) \setminus K)$.
By the inclusion (\ref{incluComp}) we know that $y\notin \clos(W^u_U(p))\cup C$.
We divide the proof in two cases.
\emph{Case 1}. Suppose first that $y\in K\setminus V^+_p(x)$.
Since $y\in K$ there is a sequence $t_n\geq 0$ such that
$\phi_{t_n}(x_n)\to y$ and $\phi_{[0,t_n]}(x_n)\subset U$.
If $t_n\to\infty$ then $x\in W^s_U(p)$.
Consequently, $y\in W^u_U(p)$, which is a contradiction.
Therefore $t_n$ is bounded. Without loss of generality assume that $t_n\to\tau\geq 0$ and then $\phi_\tau(x)=y$.
Thus $\phi_{[0,\tau]}(x)\subset\clos(U)$.
Since $y\notin C$ we have that $y\in U$.
Now, since $U$ is an adapted neighborhood we conclude that $\phi_{[0,\tau]}(x)\subset U$ and
then $y\in O^+_U(x)\subset V^+_p(x)$.
This contradiction finishes this case.
\emph{Case 2}. Now assume that $y\in V^+_p(x)\setminus K$. In this case we have that
$y=\phi_s(x)$ for some $s\geq 0$ and $\phi_{[0,s]}(x)\subset U$.
Then $\phi_s(x_n)\to y$, and since $\phi_{[0,s]}(x_n)\subset U$ for all large $n$, we get $y\in K$.
This contradiction proves that $V^+_p$ is continuous in $U$.
The continuity of $V^-_p$ is proved in a similar way.
Let us show that $V$ is a Lyapunov function for $p$.
The continuity of $V$ in $U$ follows by the continuity of $V^+_p$, $V^-_p$ and the size function $\mu$.
Now take $x\in U\setminus \{p\}$.
We will show that $V$ decreases along the orbit segment of $x$ contained in $U$.
Notice that if $t>0$ and $\phi_{[0,t]}(x)\subset U$, then $O^+_U(\phi_t(x))\subset O^+_U(x)$,
so $V^+_p(\phi_t(x))\subset V^+_p(x)$ and therefore $\mu(V^+_p(\phi_t(x)))\leq \mu(V^+_p(x))$.
Equality can only hold if $x\in W^u_U(p)$.
But in this case we have that $x\notin W^s_U(p)$,
because $W^u_U(p)\cap W^s_U(p)=\{p\}$,
and the symmetric argument gives $\mu(V^-_p(\phi_t(x)))>\mu(V^-_p(x))$.
In either case $V(\phi_t(x))<V(x)$, so $V$ is a Lyapunov function for $p$.
\end{proof}
\subsection{Isolated Sets}
Let $\phi\colon\mathbb{R}\times X\to X$ be a continuous flow on a compact metric space $X$.
Consider a $\phi$-invariant set $\Lambda\subset X$, i.e., $\phi_t(\Lambda)=\Lambda$ for all $t\in\mathbb{R}$.
We say that $\Lambda$ is an \emph{isolated set} with \emph{isolating neighborhood} $U$ if
$U$ is an open neighborhood of $\Lambda$ and $\phi_\mathbb{R}(x)\subset U$ implies $x\in\Lambda$.
\begin{df}
A \emph{Lyapunov function} for an isolated set $\Lambda$ is a continuous function $V\colon U\to \mathbb{R}$
defined on an open set $U$ containing $\Lambda$ such that:
\begin{enumerate}
\item $V(x)=0$ if and only if $x\in\Lambda$,
\item if $\phi_{[0,t]}(x)\subset U\setminus\Lambda$ then $V(x)>V(\phi_t(x))$.
\end{enumerate}
\end{df}
Let us show how the construction of a Lyapunov function for an isolated set
can be reduced to the case of an isolated singular point.
\begin{thm}
\label{LyapIso}
Every isolated set admits a Lyapunov function.
\end{thm}
\begin{proof}
Consider the set $Y=(X\setminus \Lambda)\cup\{\Lambda\}$.
On $Y$ define the distance $\di$ as
\[
\di(x,y)=\min\{\dist(x,y),\dist(x,\Lambda)+\dist(y,\Lambda)\}.
\]
It is easy to see that $(Y,\di)$ is a compact metric space.
Also, the flow $\phi$ induces naturally a flow $\phi'$ on $Y$
with $\Lambda$ as an isolated singular point.
Consider from Theorem \ref{LyapSing} a Lyapunov function for $\Lambda$ as an isolated singular point of $\phi'$.
This function naturally defines a Lyapunov function for $\Lambda$ as an isolated set of $\phi$.
\end{proof}
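The distance $\di$ above is the standard quotient construction collapsing $\Lambda$ to a point. The following sketch (Python; the finite metric space is a toy example of ours, not from the paper) checks that the triangle inequality survives the collapse.

```python
import itertools

def quotient_dist(d, Lam, x, y):
    """di(x, y) = min(d(x, y), d(x, Lam) + d(y, Lam)),
    where d(z, Lam) is the distance from z to the collapsed set."""
    dL = lambda z: min(d(z, l) for l in Lam)
    return min(d(x, y), dL(x) + dL(y))

# Toy space: integer points on a line, collapsing Lam = {3, 4} to a point.
pts = [0, 1, 2, 3, 4, 5, 6]
d = lambda a, b: abs(a - b)
Lam = [3, 4]

# The quotient distance still satisfies the triangle inequality:
for x, y, z in itertools.product(pts, repeat=3):
    assert quotient_dist(d, Lam, x, z) <= \
        quotient_dist(d, Lam, x, y) + quotient_dist(d, Lam, y, z)
```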
\section{Applications to homeomorphisms}
\label{secLyapHomeo}
Let $f\colon X\to X$ be a homeomorphism of a compact metric space $(X,\dist)$.
An $f$-invariant set $\Lambda$ is \emph{isolated} if there is an open neighborhood $U$ of $\Lambda$ such that
$f^n(x)\in U$ for all $n\in\mathbb{Z}$ implies that $x\in \Lambda$.
\begin{thm}
\label{teoLyapDisc}
Every isolated set $\Lambda$ for a homeomorphism $f$ admits a Lyapunov function,
that is, a continuous map $V\colon U\subset X\to\mathbb{R}$ defined on a neighborhood of $\Lambda$
such that:
\begin{enumerate}
\item $V(x)=0$ if and only if $x\in\Lambda$,
\item $V(x)>V(f(x))$ if $x,f(x)\in U\setminus\Lambda$.
\end{enumerate}
\end{thm}
\begin{proof}
Consider $\phi\colon \mathbb{R}\times X_f\to X_f$
the suspension of $f$.
Consider $i\colon X\to X_f$ a homeomorphism onto its image such that
$i(X)$ is a global cross section of $\phi$.
It is easy to see that $\Lambda$ is an isolated set for $f$ if and only if
$\Lambda_f=\phi_\mathbb{R}(i(\Lambda))$ is an isolated set for $\phi$.
Now consider a Lyapunov function $V'$ for $\Lambda_f$.
A Lyapunov function for $f$ can be defined by $V(x)=V'(i(x))$.
\end{proof}
\begin{df}
\label{dfExp}
A homeomorphism $f\colon X\to X$ of a compact metric space is \emph{expansive} if
there is $\alpha>0$ (an \emph{expansive constant}) such that if $x\neq y$ then there is $n\in\mathbb{Z}$ such that $\dist(f^n(x),f^n(y))>\alpha$.
\end{df}
Recall that $\mathcal{K}(X)$ denotes the compact metric space of compact subsets of $X$ with the Hausdorff metric.
Denote by $\mathcal{F}_1=\{A\in\mathcal{K}(X):|A|=1\}$ where $|A|$ denotes the cardinality of $A$.
Given a homeomorphism $f\colon X\to X$ define the homeomorphism
$f'\colon \mathcal{K}(X)\to \mathcal{K}(X)$ as $f'(A)=\{f(x): x\in A\}$.
Notice that $\mathcal{F}_1$ is invariant under $f'$.
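For finite subsets, the Hausdorff metric on $\mathcal{K}(X)$ is a max--min computation; here is a minimal sketch (Python, with a toy metric of our choosing).

```python
def hausdorff(A, B, d):
    """Hausdorff distance between finite nonempty subsets A, B
    of a metric space with distance function d."""
    return max(
        max(min(d(a, b) for b in B) for a in A),   # how far A sticks out of B
        max(min(d(a, b) for a in A) for b in B),   # how far B sticks out of A
    )

d = lambda p, q: abs(p - q)
assert hausdorff({0.0, 1.0}, {0.0, 1.0, 3.0}, d) == 2.0  # the point 3 is 2 away from {0, 1}
```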
\begin{coro}
\label{LyapExp}
For a homeomorphism $f\colon X\to X$ the following statements are equivalent:
\begin{enumerate}
\item $f$ is an expansive homeomorphism,
\item $\mathcal{F}_1$ is an isolated set for $f'$,
\item there is a continuous function $V\colon U\subset \mathcal{K}(X)\to\mathbb{R}$ defined on a neighborhood of $\mathcal{F}_1$
such that $V(A)=0$ if and only if $A\in \mathcal{F}_1$ and
$V(A)>V(f'(A))$ if $A,f'(A)\in U\setminus \mathcal{F}_1$.
\end{enumerate}
\end{coro}
\begin{proof}
($1\to 2$).
Let $\delta$ be an expansive constant and
define $$U=\{A\in\mathcal{K}(X):\diam(A)<\delta\}.$$
It is easy to see that $U$ is an isolating neighborhood of $\mathcal{F}_1$.
($2\to 3$). It follows by Theorem \ref{teoLyapDisc}.
($3\to 1$). Take $\delta>0$ such that if
$\dist(x,y)\leq\delta$ then $\{x,y\}\in U$.
Let us prove that $\delta$ is an expansive constant for $f$.
Assume by contradiction that $\dist(f^n(x),f^n(y))\leq\delta$ for all $n\in\mathbb{Z}$
and $x\neq y$.
Define $A=\{x,y\}$.
We have that $V(f'^n(A))$ is a decreasing sequence.
Since $A\notin\mathcal{F}_1$, we have $V(A)\neq 0$; assume that $V(A)<0$, the case $V(A)>0$ being analogous (letting $n\to-\infty$ instead).
Let $B$ be an accumulation point of $f'^n(A)$ as $n\to+\infty$.
Now it is easy to see that $B\in U\setminus \mathcal{F}_1$ and also $V(B)=V(f'(B))$.
This contradiction proves the theorem.
\end{proof}
Recall that a \emph{continuum} is a compact connected set.
Denote by $\mathcal{C}(X)=\{C\in\mathcal{K}(X):C\hbox{ is connected} \}$ the space of continua of $X$.
\begin{df}
\label{dfCwExp}
A homeomorphism $f\colon X\to X$ is \emph{continuum-wise expansive} if there is
$\delta>0$ such that if $C\in\mathcal{C}(X)$ and
$\diam(f^n(C))\leq\delta$ for all $n\in \mathbb{Z}$ then $C\in \mathcal{F}_1$.
\end{df}
A \emph{Lyapunov function} for a continuum-wise expansive homeomorphism is a continuous function $V\colon U\subset \mathcal{C}(X)\to \mathbb{R}$ defined on a neighborhood
of $\mathcal{F}_1$ in $\mathcal{C}(X)$ such that
$V(\{x\})=0$ for all $x\in X$ and
$V(f'(C))<V(C)$ if $C\notin \mathcal{F}_1$ and $C,f'(C)\in U$.
\begin{coro}
For a homeomorphism $f\colon X\to X$ the following statements are equivalent:
\begin{enumerate}
\item $f$ is a continuum-wise expansive homeomorphism,
\item $\mathcal{F}_1$ is an isolated set for $f'\colon\mathcal{C}(X)\to\mathcal{C}(X)$,
\item there is a continuous function $V\colon U\subset \mathcal{C}(X)\to\mathbb{R}$ defined on an open set $U\subset\mathcal{C}(X)$ containing $\mathcal{F}_1$
such that $V(A)=0$ if and only if $A\in \mathcal{F}_1$ and $V(A)>V(f'(A))$ if $A,f'(A)\in U\setminus \mathcal{F}_1$.
\end{enumerate}
\end{coro}
\begin{proof}
The proof is similar to the proof of Corollary \ref{LyapExp}.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{BaSz}{book}{
author={N. P. Bhatia},
author={G. P. Szeg\"o},
title={Dynamical Systems: Stability Theory and Applications},
publisher={Springer-Verlag},
series={Lect. Not. in Math.},
volume={35},
year={1967}}
\bib{Conley78}{book}{
author={C. Conley},
title={Isolated invariant sets and the Morse index},
year={1978},
publisher={AMS}}
\bib{Hur}{article}{
author={M. Hurley},
title={Lyapunov Functions and Attractors in Arbitrary Metric Spaces},
year={1998},
journal={Proc. of the Am. Math. Soc.},
volume={126},
pages={245--256}}
\bib{Kato}{article}{
author={H. Kato},
title={Continuum-wise expansive homeomorphisms},
journal={Can. J. Math.},
volume={45},
number={3},
year={1993},
pages={576--598}}
\bib{Lew}{article}{
author={J. Lewowicz},
year={1980},
title={Lyapunov Functions and Topological Stability},
journal={J. Diff. Eq.},
volume={38},
pages={192--209}}
\bib{Massera49}{article}{
author={J. L. Massera},
title={On Liapunoff's Conditions of Stability},
journal={Ann. of Math.},
volume={50},
number={3},
pages={705--721},
year={1949}}
\bib{Nadler}{book}{
author={S. Nadler Jr.},
title={Hyperspaces of Sets},
publisher={Marcel Dekker Inc. New York and Basel},
year={1978}}
\bib{Whitney33}{article}{
author={H. Whitney},
title={Regular families of curves},
journal={Ann. of Math.},
number={34},
year={1933},
pages={244--270}}
\end{biblist}
\end{bibdiv}
\end{document} | {
"timestamp": "2014-04-03T02:09:49",
"yymm": "1404",
"arxiv_id": "1404.0595",
"language": "en",
"url": "https://arxiv.org/abs/1404.0595",
"abstract": "In this paper we present a technique for constructing Lyapunov functions based on Whitney's size functions. Applications to asymptotically stable equilibrium points, isolated sets, expansive homeomorphisms and continuum-wise expansive homeomorphisms are given.",
"subjects": "Dynamical Systems (math.DS)",
"title": "Lyapunov functions via Whitney's size functions"
} |
https://arxiv.org/abs/2006.14525 | Conjugation Curvature in Solvable Baumslag-Solitar Groups | For an element in $BS(1,n) = \langle t,a | tat^{-1} = a^n \rangle$ written in the normal form $t^{-u}a^vt^w$ with $u,w \geq 0$ and $v \in \mathbb{Z}$, we exhibit a geodesic word representing the element and give a formula for its word length with respect to the generating set $\{t,a\}$. Using this word length formula, we prove that there are sets of elements of positive density of positive, negative and zero conjugation curvature, as defined by Bar Natan, Duchin and Kropholler. | \section{Introduction}
The notion of discrete Ricci curvature for Cayley graphs of finitely generated groups was introduced by Bar-Natan, Duchin and Kropholler in \cite{BDK} as {\em conjugation curvature}. Their work is based on that of Ollivier on metric Ricci curvature for graphs and non-manifold geometries \cite{YO1,YO2,YO3,YO4}. One considers whether on average, ``corresponding points'' on spheres of the same radius are closer together or farther apart than the centers of the spheres. Negative conjugation curvature occurs when, on average, such points are farther apart than the centers of the spheres, and positive curvature when they are closer together.
In the context of the Cayley graph of a finitely generated group, there is a natural interpretation of corresponding points; if $g_1,g_2 \in G=\langle S | R \rangle$ are the centers of two spheres of the same radius, then we consider the distance between $g_1w$ and $g_2w$ for $w \in G$. Without loss of generality, we use the isometric action of a group on its Cayley graph to translate $g_1$ to the identity, and then $d_S(w,hw) = d_S(e,w^{-1}hw)=d_S(e,h^w)$ where $h=g_1^{-1}g_2$. The conjugation curvature $\kappa_r(h)$ is then defined to be
\[
\kappa_r(h) = \frac{l(h) - \frac{1}{|S_n(r)|} \sum_{w \in S_n(r)} l(h^w)}{l(h)}
\]
that is, the difference between the word length of $h$ with respect to $S$ and the average word length of the conjugates of $h$ by all $w$ in the sphere $S_n(r)$ of radius $r$ centered at the identity in the Cayley graph $\Gamma(G,S)$,
scaled by the word length of $h$.
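The definition is easy to evaluate exactly on a finite Cayley graph. The sketch below (Python; the choice of the symmetric group $S_3$ with a transposition and a $3$-cycle as generators is our toy example, not one considered in \cite{BDK}) computes word lengths by breadth-first search and returns $\kappa_1$ as an exact rational.

```python
from collections import deque
from fractions import Fraction

def compose(p, q):
    # permutations as tuples; (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def word_lengths(gens, identity):
    """BFS word lengths l(g) on the Cayley graph of the finite group <gens>."""
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for s in gens:
            h = compose(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

def kappa(h, r, gens, identity):
    """kappa_r(h) = (l(h) - average of l(w^{-1} h w) over the sphere of radius r) / l(h)."""
    dist = word_lengths(gens, identity)
    sphere = [w for w, d in dist.items() if d == r]
    avg = Fraction(sum(dist[compose(inverse(w), compose(h, w))] for w in sphere),
                   len(sphere))
    return (dist[h] - avg) / dist[h]

# Toy example: S_3 with generating set {a, r, r^{-1}},
# a = transposition (0 1), r = 3-cycle (0 1 2).
e, a, r = (0, 1, 2), (1, 0, 2), (1, 2, 0)
gens = [a, r, inverse(r)]
assert kappa(a, 1, gens, e) == Fraction(-2, 3)  # conjugating lengthens a on average
assert kappa(r, 1, gens, e) == 0                # all conjugates of r have length 1
```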
Bar-Natan, Duchin and Kropholler prove a variety of results using this definition of the conjugation curvature of a Cayley graph when $r=1$. In particular, it is always zero at central elements, and any finite group has identically zero conjugation curvature when $S=G$. This definition depends heavily on the generating set, and most groups considered in \cite{BDK} are viewed with respect to a natural generating set. For some specific groups, they obtain strong conclusions. For example, if $G$ is a right angled Artin group with the standard generating set, they obtain the dichotomy that for all $g$ in the group, $\kappa(g) = 0$ if and only if $g$ is central, otherwise $\kappa(g)<0$. Additionally, if every element of a group has zero conjugation curvature, then the group is virtually abelian. For the Heisenberg group they show that there is a set of elements of positive density with each type of conjugation curvature: positive, negative and zero.
In this paper we show that the solvable Baumslag-Solitar groups
\[
BS(1,n)= \langle t,a | tat^{-1} = a^n \rangle
\]
for $n>1$ have a positive density of elements of positive, negative and zero conjugation curvature for $r$ in a bounded set of values. To prove this, we require a detailed understanding of the shape of geodesics in the Cayley graph of $BS(1,n)$ with respect to the standard generating set $\{t,a\}$. Multiple people have studied geodesics in $BS(1,n)$, and while all reach similar conclusions, in each case the motivating questions frame the results in a unique way.
We begin with the standard normal form on $BS(1,n)$ and express each element uniquely as $g=t^{-u}a^vt^w$ for $u,v,w \in \mathbb{Z}$ with $u,w \geq 0$ where $n | v$ implies that $uw=0$.
We describe a deterministic algorithm which takes as input the triple $u,v,w$ and
produces a geodesic representative of the element.
These geodesics come in four basic ``shapes''. Our algorithm is lattice-based, and yields a succinct formula for
the word length of $g$ with respect to the generating set $\{t,a\}$.
In \cite{EH}, Elder and Hermiller produce a rubric for geodesics in $BS(1,n)$ and show that each $g \in BS(1,n)$ can be represented by a geodesic path which has one of their specified forms. This exhaustive and detailed work is illuminating, but it does not link a given element of $BS(1,n)$ of the form $g=t^{-u}a^vt^w$ with a particular geodesic, which is what we require.
Elder in \cite{Elder} takes the first constructive approach to producing a geodesic for a given $g \in BS(1,n)$, exhibiting an algorithm to do so which runs in linear time and $O(n \log n)$ space, where $n$ is the length of an initial string of group generators. Elder is motivated by the complexity of this algorithm, as it allows him to conclude that the bounded geodesic length problem can be solved in linear time for $BS(1,n)$. This problem asks whether given a word of length $n$ in the generators of $G=\langle S | R \rangle$ and a nonnegative integer $k$, one can decide whether the geodesic length of the word is at most $k$. This problem is NP-complete for free metabelian groups in their standard generating set \cite{MRUV} and hence the problem of finding an explicit geodesic representative for a given string of generators is NP-hard in the general case. While Elder produces the same geodesic paths that we find below, the fact that we are unconcerned with the complexity of the process streamlines our exposition, and we extend these common ideas by producing a word length formula at the conclusion of the algorithm.
Diekert and Laun in \cite{DL} exhibit an algorithm which produces geodesic paths for elements of $BS(m,n) = \langle t,a | ta^m{t^{-1}} = a^n \rangle$ when $m |n$. When $m=1$ they produce identical geodesic paths to those in \cite{Elder} and below. They are also mainly concerned with the complexity of their algorithm; in general it is quadratic in the length of the initial string of generators and simplifies to linear when $m=1$. However their methods are quite different from those of Elder in \cite{Elder}. Neither \cite{DL} nor \cite{Elder} draw conclusions about word length in $BS(1,n)$. Burillo and Elder in \cite{BE} prove metric estimates for word length in $BS(m,n)$ and use them to compute a lower bound on the growth rate of $BS(m,n)$.
Our method for producing a geodesic representative for $g = t^{-u}a^vt^w$ in $BS(1,n)$ allows us to investigate the growth rate of $BS(1,n)$ in \cite{TW_growth}.
We use our techniques to show that the set of paths describing one shape of geodesics forms a regular language, and we exhibit a finite state automaton which accepts it. It is sufficient to understand this set of geodesic paths, as it has the same growth rate as the entire group. As an immediate
consequence, $BS(1,n)$ has rational growth, and we are able to obtain a simple expression for its growth rate, which was first computed by Collins, Edjvet and Gill in \cite{CEG}.
This paper is organized as follows. Section~\ref{section:background} presents a brief introduction to the solvable Baumslag-Solitar groups.
Section~\ref{section:min_rep} introduces our lattice-based methods and constructs a geodesic path for each $g = t^{-u}a^vt^w$ in $BS(1,n)$. The results in this section provide numerical conditions on recognizing when certain paths are geodesic; these conditions are used in \cite{TW_growth} to compute the growth rate of $BS(1,n)$.
Section~\ref{sec:growth} contains some introductory remarks on growth, as well as an overview of the results of \cite{TW_growth}, where it is shown that a certain set of geodesics forms a regular language whose growth rate is identical to the growth rate of $BS(1,n)$.
Section~\ref{sec:curvature} includes explicit descriptions of three infinite families of elements which have, respectively, positive, zero and negative conjugation curvature, when $n \geq 3$. Using our results
about growth rate from \cite{TW_growth}, we prove that these families have positive density in $BS(1,n)$.
Section~\ref{sec:n=2_curvature} contains analogous results for $n=2$.
In Section~\ref{sec:technical_lemmas} we prove several technical lemmas stated in Section~\ref{section:min_rep}.
\section{A geometric model for solvable Baumslag-Solitar groups}
\label{section:background}
For $n \in \mathbb{N}$ with $n > 1$, the solvable Baumslag-Solitar group $BS(1,n)$ has presentation $$BS(1,n) = \langle a,t | tat^{-1} = a^n \rangle.$$
We consider elements of $BS(1,n)$ in the standard normal form, namely each $g \in BS(1,n)$ can be written uniquely as $t^{-u}a^vt^w$ where $u,v,w \in \mathbb{Z}$ and $u,w \geq 0$, with the additional requirement that if $n|v$ then $uw=0$.
If $n|v$ but $uw \neq 0$ then the group relator can be applied to simplify the normal form expression. When we write $g=t^{-u}a^vt^w$ we assume that these conditions are satisfied.
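The reduction is mechanical: while $n\mid v$ and $uw\neq 0$, the relator lets us rewrite $a^v = ta^{v/n}t^{-1}$ and absorb the new $t$-letters into the outer powers. A minimal sketch (Python; the function name is ours):

```python
def normal_form(u, v, w, n):
    """Reduce t^{-u} a^v t^w: while n | v and u, w > 0,
    replace a^v by t a^{v/n} t^{-1}, cancelling one t on each side."""
    assert u >= 0 and w >= 0 and n > 1
    while u > 0 and w > 0 and v % n == 0:
        u, v, w = u - 1, v // n, w - 1
    return u, v, w

assert normal_form(2, 4, 3, 2) == (0, 1, 1)   # t^{-2} a^4 t^3 = a t in BS(1,2)
assert normal_form(1, 6, 0, 3) == (1, 6, 0)   # w = 0: already in normal form
```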
The group $BS(1,n)$ for $n > 1$ acts properly discontinuously and cocompactly by isometries on a metric 2-complex $X_n$ which is well described in the literature; see, for example, \cite{FM1} or \cite{freden}. Topologically, this complex is the product $T_n \times \mathbb{R}$ where $T_n$ is a regular tree of valence $n+1$. We equip $T_n$ with a height function $h:T_n \rightarrow \mathbb{R}$ so that vertices which differ by a single edge map to adjacent integers; this is well defined after an initial choice of vertex at height $0$.
Metrically, for any line $l \subset T_n$ where the heights of the vertices along $l$ map bijectively to $\mathbb{Z}$, the plane $l \times \mathbb{R} \subset T_n \times \mathbb{R}$ is a combinatorial model of the hyperbolic plane. The 1-skeleton of this plane is tiled with the ``horobrick'' labeled by the group relator, depicted in Figure~\ref{fig:horobrick}; a part of this plane when $n=2$ is depicted in Figure~\ref{fig:plane}.
\begin{figure}[ht!]
\begin{tikzpicture}[>=stealth',shorten >=2pt,auto, node distance=5cm, semithick,scale=.75]
\tikzstyle{every node}=[draw,shape=circle]
\draw [->] (0,0) node[circle,fill,inner sep=1pt](a){} -- (1,0) node[draw=none,fill=none,font=\scriptsize,midway,below] {$a$};
\draw [->] (1,0) node[circle,fill,inner sep=1pt](b){} -- (2,0) node[draw=none,fill=none,font=\scriptsize,midway,below] {$a$};
\draw [dashed] (2,0) node[circle,fill,inner sep=1pt](b){} -- (3,0) node[draw=none,fill=none,font=\scriptsize,midway,below] {};
\draw [->] (3,0) node[circle,fill,inner sep=1pt](b){} -- (4,0) node[draw=none,fill=none,font=\scriptsize,midway,below] {$a$};
\draw [->] (4,0) node[circle,fill,inner sep=1pt](b){} -- (5,0) node[draw=none,fill=none,font=\scriptsize,midway,below] {$a$};
\draw [->] (0,0) node[circle,fill,inner sep=1pt](b){} -- (0,2) node[draw=none,fill=none,font=\scriptsize,midway,left] {$t$};
\draw [->] (5,0) node[circle,fill,inner sep=1pt](b){} -- (5,2) node[draw=none,fill=none,font=\scriptsize,midway,right] {$t$};
\draw [->] (0,2) node[circle,fill,inner sep=1pt](a){} -- (5,2) node[draw=none,fill=none,font=\scriptsize,midway,above] {$a$}
node[circle,fill,inner sep=1pt](c){};
\end{tikzpicture}
\caption{The ``horobrick'' which tiles the 1-skeleton of $X_n$; its boundary is
labeled by the relator $tat^{-1}a^{-n}$.}
\label{fig:horobrick}
\end{figure}
\begin{figure}[ht!]
\begin{tikzpicture}[scale=.75]
\draw (0,0) -- (0,4) -- (8,4) -- (8,0) -- (0,0);
\draw (0,1) -- (8,1);
\draw (0,2) -- (8,2);
\draw (0,3) -- (8,3);
\draw (0,4) -- (8,4);
\draw (4,3) -- (4,0);
\draw (2,2) -- (2,0);
\draw (6,2) -- (6,0);
\draw(1,1) -- (1,0);
\draw(3,1) -- (3,0);
\draw(5,1) -- (5,0);
\draw(7,1) -- (7,0);
\draw [thick, dotted] (0,0) --(0,-.75);
\draw [thick, dotted] (1,0) --(1,-.75);
\draw [thick, dotted] (2,0) --(2,-.75);
\draw [thick, dotted] (3,0) --(3,-.75);
\draw [thick, dotted] (4,0) --(4,-.75);
\draw [thick, dotted] (5,0) --(5,-.75);
\draw [thick, dotted] (6,0) --(6,-.75);
\draw [thick, dotted] (7,0) --(7,-.75);
\draw [thick, dotted] (8,0) --(8,-.75);
\draw [thick, dotted] (0.5,0) --(0.5,-.75);
\draw [thick, dotted] (1.5,0) --(1.5,-.75);
\draw [thick, dotted] (2.5,0) --(2.5,-.75);
\draw [thick, dotted] (3.5,0) --(3.5,-.75);
\draw [thick, dotted] (4.5,0) --(4.5,-.75);
\draw [thick, dotted] (5.5,0) --(5.5,-.75);
\draw [thick, dotted] (6.5,0) --(6.5,-.75);
\draw [thick, dotted] (7.5,0) --(7.5,-.75);
\end{tikzpicture}
\caption{Part of a plane in $X_2$.}
\label{fig:plane}
\end{figure}
Using the metric on the topological planes in $X_n$ inherited from $\mathbb{R}^2$, where a single horizontal segment with label $a$ at height $0$ is defined to have length $1$, it follows that a single horizontal
segment at height $i$ in $X_n$ has length $n^i$.
Thus, if $g=t^{-u}a^vt^w$, the signed sum of the lengths of the horizontal edges in a path representing $g$ (counting $a^{-1}$-edges negatively) is $n^{-u}v$. Since any two topological planes in $X_n$ agree beneath a horocycle, that is, in our model a line of the form $y = c$ for $c \in \mathbb{Z}$, each $g = t^{-u}a^vt^w$ has a well defined $x$-coordinate given by $n^{-u}v$, as does the terminal point of any path in the generators $\{a^{\pm 1},t^{\pm 1}\}$.
Let
\begin{equation}
\label{eqn:eta}
\eta=t^{e_0}a^{f_0} t^{e_1}a^{f_1}\cdots t^{e_k} a^{f_k}
\end{equation}
be a path in $X_n$ from the identity to $g = t^{-u}a^vt^w$.
Then each instance of the generator $a^{\pm 1}$ corresponds to a horizontal edge with length $n^h$ where $h$ is the height of the edge in $X_n$. The endpoint of $\eta$ then has Euclidean $x$-coordinate given by
\[
x=n^{-u}v = \sum_{i=0}^k f_i n^{\sum_{j=0}^ie_j}.
\]
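The height and $x$-coordinate of the endpoint are easy to track letter by letter. A sketch (Python; the encoding 'a','A','t','T' for $a,a^{-1},t,t^{-1}$ is our convention):

```python
from fractions import Fraction

def endpoint(word, n):
    """(x-coordinate, height) of the path spelled by word in X_n:
    an a-edge (or inverse) at height h contributes +n^h (or -n^h) to x."""
    x, h = Fraction(0), 0
    for c in word:
        if c in 'aA':
            x += (1 if c == 'a' else -1) * Fraction(n) ** h
        else:
            h += 1 if c == 't' else -1
    return x, h

# For g = t^{-u} a^v t^w the x-coordinate is n^{-u} v and the height is w - u:
n, u, v, w = 3, 2, 7, 1
x, h = endpoint('T' * u + 'a' * v + 't' * w, n)
assert x == Fraction(v, n ** u) and h == w - u
```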
Our approach to finding geodesic words builds on \cite{EH},
where it is shown that any geodesic must have one of
several possible forms. To each of these forms, we associate
a list of ``digits'' similar to the $f_i$ in Equation~\eqref{eqn:eta} and
related to the horizontal distance traversed by the path.
By carefully considering these
digits, we give conditions which certify that a path of this form
has minimal length.
\section{Representations of integers and geodesic paths}
\label{section:min_rep}
\subsection{The digit lattice}
In order to produce a geodesic representative of a given group
element \hbox{$g = t^{-u}a^vt^w$}, we must understand how to
efficiently represent the integer $v$ as a sequence of signed
digits with bounded absolute value and how to translate this sequence into
a geodesic word. We formalize the concept of digit sequences
using the direct sum $\bigoplus_{i\in\mathbb{N}} \mathbb{Z}$, where we take the convention that
$0 \in \mathbb{N}$. Given a vector
${\bf x} = (x_0, x_1, \dots) \in \bigoplus_{i\in\mathbb{N}}\mathbb{Z}$,
define the function
$\Sigma:\bigoplus_{i\in\mathbb{N}}\mathbb{Z} \rightarrow \mathbb{R}$ by
\[
\Sigma({\bf x})=\sum_{i \in \mathbb{N}} x_in^{i}.
\]
For any $v \in \mathbb{Z}$, let $\mathcal{L}_v = \Sigma^{-1}(v)$ be the
set of vectors ${\bf x} \in \bigoplus_{i\in\mathbb{N}}\mathbb{Z}$ with
$\Sigma({\bf x}) = v$. For any vector ${\bf x}$, we will always write
the coordinates with a matching non-bold letter, e.g. $x_i$, and
we will define
${k_{\xx}}$ as the index of the final nonzero coordinate, so ${k_{\xx}}+1$ is the ``length'' of
${\bf x}$.
We will refer to the coordinates of vectors as either ``coordinates''
or ``digits'' depending on what makes the most intuitive sense
in context.
For clarity, we will often write
vectors in $\bigoplus_{i\in\mathbb{N}}\mathbb{Z}$ or $\mathcal{L}_v$ as finite
sequences ${\bf x} = (x_0, \dots, x_{{k_{\xx}}})$, assuming that $x_i = 0$
for $i > {k_{\xx}}$ and $x_{{k_{\xx}}} \neq 0$.
Define a set
of vectors $\{ {\bf w}^{(i)} \}_{i \in \mathbb{N}}$ whose coordinates $w^{(i)}_j$ are given by
\[
w^{(i)}_j = \left\{ \begin{array}{ll} 1 & \textnormal{if $j=i+1$} \\
-n & \textnormal{if $j=i$} \\
0 & \textnormal{otherwise}
\end{array}\right.
\]
That is,
\[
{\bf w}^{(i)} = (0, \ldots\, , 0\, ,\, w^{(i)}_i, w^{(i)}_{i+1}, 0, \ldots) =
(0, \underset{\ldots}{\ldots}, \underset{i-1}{0}, \underset{i}{-n}, \underset{i+1}{1}, \underset{i+2}{0},\, \underset{\ldots}{\ldots})
\]
where we indicate the index of each entry in the second expression.
\begin{lemma}\label{lemma:L_span}
The set $\mathcal{L}_0$ is a lattice spanned by $\{ {\bf w}^{(i)} \}_{i \in \mathbb{N}}$.
\end{lemma}
\begin{proof}
As $\mathcal{L}_0$ is discrete and clearly closed under addition
and negation, it is a lattice, so we must show
that the ${\bf w}^{(i)}$ span.
Given ${\bf x} \in \mathcal{L}_0$, we will produce a finite linear
combination of the ${\bf w}^{(i)}$ such that ${\bf x} + \sum_{i}\alpha_i{\bf w}^{(i)}$ is
the vector consisting entirely of zeros, which
proves the claim.
Suppose that $j>0$ is the
maximum index such that $x_j \ne 0$. Then the
$j^{\textnormal{th}}$ coordinate of ${\bf x} - x_j{\bf w}^{(j-1)}$ will
be $0$. By induction, we conclude that there is some linear combination
of the ${\bf w}^{(i)}$ such that ${\bf y} = {\bf x} + \sum_i \alpha_i {\bf w}^{(i)}$
has $y_i = 0$ for $i>0$. However, as
\[
y_0 = \Sigma({\bf y}) = \Sigma({\bf x} + \sum_i \alpha_i {\bf w}^{(i)}) = \Sigma({\bf x}) = 0
\]
we conclude
that $y_0 = 0$ as well.
\end{proof}
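The proof is an explicit top-down elimination and can be sketched numerically (Python; the sample vector is ours). Subtracting $x_j{\bf w}^{(j-1)}$ clears digit $j$ and adds $nx_j$ to digit $j-1$.

```python
def sigma(x, n):
    """Sigma(x) = sum_i x_i * n^i."""
    return sum(d * n**i for i, d in enumerate(x))

n = 3
x = [-6, -1, 1]                  # sigma(x) = -6 - 3 + 9 = 0, so x lies in L_0
assert sigma(x, n) == 0

# Clear digits from the top down, as in the proof: subtracting x_j * w^(j-1)
# zeroes digit j and adds n * x_j to digit j - 1.
y = list(x)
for j in range(len(y) - 1, 0, -1):
    y[j - 1] += n * y[j]
    y[j] = 0
assert all(d == 0 for d in y)    # sigma is preserved, forcing y_0 = 0 as well
```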
\begin{lemma}\label{lemma:L_addition}
For any ${\bf x} \in \mathcal{L}_v$, we have
$\mathcal{L}_{v+w} = {\bf x} + \mathcal{L}_w$. In particular,
$\mathcal{L}_v = {\bf x} + \mathcal{L}_0$, so $\mathcal{L}_v$
is an affine lattice.
\end{lemma}
\begin{proof}
It is immediate from the linearity of the function $\Sigma$
that given ${\bf x} \in \mathcal{L}_v$ and ${\bf y} \in \mathcal{L}_w$ we have
${\bf x} + {\bf y} \in \mathcal{L}_{v+w}$, so $\mathcal{L}_v + \mathcal{L}_w
\subseteq \mathcal{L}_{v+w}$.
Given any ${\bf x} \in \mathcal{L}_v$ and ${\bf z} \in \mathcal{L}_{v+w}$, define ${\bf y}$ by
$y_i = z_i - x_i$. By the linearity of $\Sigma$, we have
${\bf y} \in \mathcal{L}_w$ and ${\bf z} = {\bf x} + {\bf y}$. We conclude that for any
$v,w$ we have $\mathcal{L}_{v+w} \subseteq {\bf x} + \mathcal{L}_w$, completing the proof.
\end{proof}
\begin{ex}\label{example:L_v}
As an example, let $n=3$, $v=7$, and
\[
{\bf x} = (\underset{x_0}{7}, \underset{x_1}{0},\, \underset{\ldots}{\ldots}),
\qquad
{\bf y} = (\underset{y_0}{1}, \underset{y_1}{2}, \underset{y_2}{0},\, \underset{\ldots}{\ldots}) ,
\qquad
{\bf z} = (\underset{z_0}{1}, \underset{z_1}{-1}, \underset{z_2}{1},
\underset{z_3}{0},\, \underset{\ldots}{\ldots}).
\]
Then ${\bf x},{\bf y},{\bf z} \in \mathcal{L}_v$. Note that ${\bf y} = {\bf x} + 2{\bf w}^{(0)}$
and ${\bf z} = {\bf y} + {\bf w}^{(1)}$.
\end{ex}
As one might expect from the spanning set $\{{\bf w}^{(i)}\}$, the digits of
the vectors in $\mathcal{L}_0$ have an interesting ordinal relationship. If the
most significant digit is large in absolute value, there must be a less significant digit with greater absolute value. This is intuitive: to balance a high power of $n$ with a sum of smaller powers of $n$, we require many smaller powers of $n$.
We formalize this in Lemma~\ref{lemma:L0_big_digits}, which compares
the coefficients of the ${\bf w}^{(i)}$ to the resulting digits that must be
present in their vector sum.
\begin{lemma}\label{lemma:L0_big_digits}
Let ${\bf x} \in \mathcal{L}_0$ with
${\bf x} = \sum_i \alpha_i {\bf w}^{(i)}$.
For any $j$ such that $|\alpha_j| \ge m$,
there is $i\le j$ with $|x_i| > m(n-1)$.
\end{lemma}
\begin{proof}
The proofs are symmetric depending on the sign of $\alpha_j$,
so we assume without loss of generality that $\alpha_j > 0$
and hence $\alpha_j \geq m$. It follows that
$x_j = \alpha_{j-1} - \alpha_j n \leq \alpha_{j-1} - mn$,
where if $j=0$, we define $\alpha_{j-1}=0$.
It suffices to prove the lemma for the minimal $j$ such that $|\alpha_j| \ge m$,
so we may assume that $j$ is minimal with this property.
Then $|\alpha_{j-1}|<m$,
so $x_j +mn \leq \alpha_{j-1}<m$. It follows that
$x_j < -m(n-1)$, as required.
\end{proof}
The following corollary is immediate.
\begin{corollary}\label{corollary:L0_at_least_n}
Let ${\bf x} \in \mathcal{L}_0$. If there is an $i$ so that $x_i \ne 0$,
then there is a $j\le i$ with $|x_j| \ge n$.
\end{corollary}
\begin{proof}
Write ${\bf x} = \sum_i \alpha_i {\bf w}^{(i)}$.
Since $x_i \ne 0$, we must have $\alpha_i \ne 0$ or $\alpha_{i-1} \ne 0$.
Then apply Lemma~\ref{lemma:L0_big_digits} with $m=1$.
\end{proof}
Lemma~\ref{lemma:L0_nonunique_digit} shows that there is no vector in $\mathcal{L}_0$
with a single nonzero digit.
\begin{lemma}\label{lemma:L0_nonunique_digit}
If ${\bf x} \in \mathcal{L}_0$ has some nonzero digit, then there must be at least
two nonzero digits in ${\bf x}$.
\end{lemma}
\begin{proof}
Suppose there is $j$ so $x_j$ is the only nonzero digit of ${\bf x}$.
Then by the definition of $\mathcal{L}_0$, we have
\[
0 = \Sigma({\bf x}) = \sum_i x_in^i = x_jn^j.
\]
This contradicts the assumption that $x_j \ne 0$.
\end{proof}
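Corollary~\ref{corollary:L0_at_least_n} and Lemma~\ref{lemma:L0_nonunique_digit} are easy to sanity-check by brute force over short digit vectors (a sketch in Python with $n=2$; the digit range and vector length are arbitrary truncations of ours).

```python
from itertools import product

n = 2
sigma = lambda x: sum(d * n**i for i, d in enumerate(x))

for x in product(range(-2 * n, 2 * n + 1), repeat=4):
    if sigma(x) == 0 and any(d != 0 for d in x):
        i = max(j for j, d in enumerate(x) if d != 0)     # last nonzero digit
        # Corollary: some digit at index j <= i satisfies |x_j| >= n.
        assert any(abs(x[j]) >= n for j in range(i + 1))
        # Lemma: no vector of L_0 has exactly one nonzero digit.
        assert sum(1 for d in x if d != 0) >= 2
```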
\subsection{Geodesics from digit sequences}
Given a group element $g = t^{-u}a^vt^w$, we define a map
$\eta_{u,v,w}:\mathcal{L}_v \to \{a^{\pm},t^{\pm}\}^*$ which takes
a vector ${\bf x} = (x_0, \dots, x_{k_{\xx}}) \in \mathcal{L}_v$ to a word
$\eta_{u,v,w}({\bf x})$
representing $g$ in the following way:
\[
\eta_{u,v,w}({\bf x}) =
\left\{
\begin{array}{lll}
t^{-u} a^{x_0} t a^{x_{1}} \cdots t a^{x_{{k_{\xx}}}}t^{w-{k_{\xx}}}
& \textnormal{if ${k_{\xx}} \leq w$} & \textnormal{(shape 1)} \\
t^{{k_{\xx}}-u} a^{x_{{k_{\xx}}}} {t^{-1}} a^{x_{{k_{\xx}}-1}} \cdots {t^{-1}} a^{x_0} t^w
& \textnormal{if $w < {k_{\xx}} \leq u$} & \textnormal{(shape 2)} \\
t^{-u}a^{x_0}ta^{x_{1}} \cdots ta^{x_{{k_{\xx}}}} t^{w-{k_{\xx}}}
& \textnormal{if $u \leq w<{k_{\xx}}$} & \textnormal{(shape 3)} \\
t^{{k_{\xx}}-u}a^{x_{{k_{\xx}}}}{t^{-1}} a^{x_{{k_{\xx}}-1}} \cdots {t^{-1}} a^{x_{0}} t^{w}
& \textnormal{if $w<u<{k_{\xx}}$} & \textnormal{(shape 4).}
\end{array}\right .
\]
Shapes 1 and 3, and likewise shapes 2 and 4, are given by identical expressions; they differ only in the signs of the exponents $w-{k_{\xx}}$ and ${k_{\xx}}-u$.
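To make the four shapes concrete, here is a Python sketch that assembles $\eta_{u,v,w}({\bf x})$ as a string over the letters \texttt{a}, \texttt{A}, \texttt{t}, \texttt{T}, where \texttt{A} and \texttt{T} stand for $a^{-1}$ and $t^{-1}$ (an encoding of our own choosing):

```python
def eta(u, v, w, x):
    """Sketch of the word eta_{u,v,w}(x) over {a, A, t, T}, where A = a^-1
    and T = t^-1.  The parameter v is determined by x and is kept only to
    mirror the notation."""
    def apow(e):                       # a^e as a string
        return ('a' if e >= 0 else 'A') * abs(e)
    def tpow(e):                       # t^e as a string
        return ('t' if e >= 0 else 'T') * abs(e)
    k = len(x) - 1
    if k <= w or u <= w < k:           # shapes 1 and 3: digits read upward
        word = tpow(-u) + apow(x[0])
        for i in range(1, k + 1):
            word += 't' + apow(x[i])
        return word + tpow(w - k)
    else:                              # shapes 2 and 4: digits read downward
        word = tpow(k - u) + apow(x[k])
        for i in range(k - 1, -1, -1):
            word += 'T' + apow(x[i])
        return word + tpow(w)
```

For example, with $n=2$, $u=w=0$ and ${\bf x}=(1,3)$ (so $v=7$), the result is the shape-3 word $a\,t\,a^3\,t^{-1}$ of length $6$.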
\begin{remark}
We only consider triples $(u,v,w)$ which correspond to elements $g = t^{-u}a^vt^w$ in normal form. In particular, we only allow $n|v$ if $uw=0$, which places restrictions on these triples.
This has implications about which vectors ${\bf x} \in \mathcal{L}_v$ we consider.
If the first digit of ${\bf x}$ is $0$, then $n|v$, and for any choice of $u,w \geq 0$ we require that either $u=0$ or $w=0$.
\end{remark}
We now show that the length of each path above is given by one
of two expressions. Here, $| \cdot |$ denotes the actual length of the
given path, not the word length with respect to the generating set $\{a^{\pm 1},t^{\pm 1} \}$ in the group of the element it represents.
\begin{lemma}\label{lemma:length_formula}
For ${\bf x} = (x_0, \dots, x_{k_{\xx}}) \in \mathcal{L}_v$ and $u,w \geq 0$ we have
\[
|\eta_{u,v,w}({\bf x})| =
\left\{
\begin{array}{lll}
\Vert {\bf x} \Vert_1 + u + w & \textnormal{if ${k_{\xx}} \leq \max(u,w)$} & \textnormal{(shapes 1 and 2)} \\
\Vert {\bf x} \Vert_1 + 2{k_{\xx}} - |u-w| & \textnormal{otherwise} & \textnormal{(shapes 3 and 4)}
\end{array}\right. .
\]
\end{lemma}
\begin{proof}
To prove the lemma, we add the absolute values of the exponents in the above expressions for $\eta_{u,v,w}({\bf x})$. Accounting for the signs of the expressions, the first formula follows immediately.
For the second case, we compute the length of a path of shape 3:
\[
|\eta_{u,v,w}({\bf x})| = \Vert {\bf x} \Vert_1 + u + {k_{\xx}} + {k_{\xx}} - w
\]
and for shape 4:
\[
|\eta_{u,v,w}({\bf x})| = \Vert {\bf x} \Vert_1 + ({k_{\xx}}-u) + {k_{\xx}} + w.
\]
Considering the relative magnitudes of $u,w$ and ${k_{\xx}}$, we see that the two expressions combine into the second formula of the lemma.
\end{proof}
In Lemma~\ref{lemma:length_formula}, the two cases divide according
to whether ${k_{\xx}} \le \max(u,w)$ or ${k_{\xx}} > \max(u,w)$. We remark that
the formulas agree in the boundary case when
${k_{\xx}} = \max(u,w)$. To see this, note that when ${k_{\xx}}=\max(u,w)$,
\begin{align*}
2{k_{\xx}} - |u-w| & = 2\max(u,w) - |u-w| \\
& = 2\max(u,w) - \max(u,w) + \min(u,w) \\
& = u + w.
\end{align*}
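The two-case length formula transcribes directly; a sketch (note that $v$ plays no role):

```python
def eta_length(u, w, x):
    """|eta_{u,v,w}(x)| per the length formula: ||x||_1 + u + w when
    k_x <= max(u, w), and ||x||_1 + 2*k_x - |u - w| otherwise."""
    k = len(x) - 1
    l1 = sum(abs(d) for d in x)
    if k <= max(u, w):
        return l1 + u + w
    return l1 + 2 * k - abs(u - w)
```

As computed above, the two branches agree on the boundary ${k_{\xx}} = \max(u,w)$, so the choice of branch there is immaterial.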
If we have a vector ${\bf x} \in \mathcal{L}_v$ and add ${\bf z} \in \mathcal{L}_0$, both the
$\ell^1$ norm and the vector length may change.
Choose $u,w \geq 0$. If ${k_{\xx}}$ and $k_{{\bf x}+{\bf z}}$ have different order relationships to $\max(u,w)$,
computing $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf x}+{\bf z})|$ may require different formulas from Lemma~\ref{lemma:length_formula}.
The next lemma explicitly computes this change.
\begin{lemma}\label{lemma:length_formula_change}
Let ${\bf x} \in \mathcal{L}_v$ and ${\bf z} \in \mathcal{L}_0$ with $u,w \geq 0$. If $k_{{\bf x} + {\bf z}} > {k_{\xx}}$, then
\[
|\eta_{u,v,w}({\bf x}+{\bf z})| - |\eta_{u,v,w}({\bf x})| =
\Vert {\bf x} + {\bf z} \Vert_1 - \Vert {\bf x} \Vert_1 + 2\max(0, k_{{\bf x}+{\bf z}} - \max({k_{\xx}},u,w)).
\]
If $k_{{\bf x} + {\bf z}} < {k_{\xx}}$, then
\[
|\eta_{u,v,w}({\bf x}+{\bf z})| - |\eta_{u,v,w}({\bf x})| =
\Vert {\bf x} + {\bf z} \Vert_1 - \Vert {\bf x} \Vert_1 - 2\max(0, {k_{\xx}} - \max(k_{{\bf x}+{\bf z}},u,w)).
\]
\end{lemma}
\begin{proof}
If both ${k_{\xx}} \leq \max(u,w)$ and $k_{{\bf x} + {\bf z}} \leq \max(u,w)$, or both ${k_{\xx}} > \max(u,w)$ and $k_{{\bf x} + {\bf z}} > \max(u,w)$, then we use the same
length formula from Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x}+{\bf z})|$ and $|\eta_{u,v,w}({\bf x})|$.
In the first case, the formulas in the lemma follow from the fact that both maximum terms vanish.
In the second case, the maximum term evaluates to $k_{{\bf x}+{\bf z}} - {k_{\xx}}$ or ${k_{\xx}} - k_{{\bf x}+{\bf z}}$, respectively, again giving rise to the two formulas in the lemma.
We consider the remaining cases.
Suppose that ${k_{\xx}} \le \max(u,w) \le k_{{\bf x} + {\bf z}}$ and one of the inequalities is strict.
Thus we use the first length formula from Lemma~\ref{lemma:length_formula} to compute $|\eta_{u,v,w}({\bf x})|$, and the second length formula from Lemma~\ref{lemma:length_formula} to compute $|\eta_{u,v,w}({\bf x}+{\bf z})|$.
It follows that
\begin{align*}
|\eta_{u,v,w}({\bf x}+{\bf z})| - |\eta_{u,v,w}({\bf x})| &= \Vert {\bf x} + {\bf z} \Vert_1 - \Vert {\bf x} \Vert_1
+2k_{{\bf x} + {\bf z}} -|u-w| - u - w\\
&= \Vert {\bf x} + {\bf z} \Vert_1 - \Vert {\bf x} \Vert_1
+2k_{{\bf x} + {\bf z}} -2\max(u,w). \\
\end{align*}
Since ${k_{\xx}} \le \max(u,w)$, we have $\max(u,w) = \max({k_{\xx}},u,w)$, so the desired formula holds in this case.
Suppose that $k_{{\bf x} + {\bf z}} \le \max(u,w) \le {k_{\xx}}$ and one of the inequalities is strict.
Using the appropriate length formula from Lemma~\ref{lemma:length_formula}, we then compute
\begin{align*}
|\eta_{u,v,w}({\bf x}+{\bf z})| - |\eta_{u,v,w}({\bf x})| &= \Vert {\bf x} + {\bf z} \Vert_1 - \Vert {\bf x} \Vert_1
+u+w -2{k_{\xx}} + |u-w|\\
&= \Vert {\bf x} + {\bf z} \Vert_1 - \Vert {\bf x} \Vert_1
-2{k_{\xx}} +2\max(u,w) \\
\end{align*}
and again we have the desired formula, completing the proof.
\end{proof}
\begin{lemma}\label{lemma:minimal_is_geodesic}
If ${\bf x} \in \mathcal{L}_v$ is such that $|\eta_{u,v,w}({\bf x})|$ is minimal,
then $\eta_{u,v,w}({\bf x})$ is a geodesic representing the
group element $g = t^{-u}a^vt^w$.
\end{lemma}
\begin{proof}
This is a corollary of Proposition~2.3 of~\cite{EH}, where it is shown that
there must be a geodesic representing $g$ which has one of shapes $1$--$4$.
For the geodesic $\xi$ guaranteed by~\cite{EH}, there is a vector ${\bf y} \in \mathcal{L}_v$ so that $\eta_{u,v,w}({\bf y}) = \xi$.
For a given ${\bf x} \in \mathcal{L}_v$, one can form two possible induced paths which are potentially geodesic: one in shape $1$ or $3$, depending on whether $w<{k_{\xx}}$, and one in shape $2$ or $4$. The word length formulas in Lemma~\ref{lemma:length_formula} allow us to choose the shorter of these paths as the output of $\eta_{u,v,w}$, and hence $\eta_{u,v,w}$ describes a geodesic path to $g$.
\end{proof}
If we are given $g = t^{-u}a^vt^w$ and want to find a geodesic
for $g$, then by Lemma~\ref{lemma:minimal_is_geodesic}, it suffices
to find a vector ${\bf x} \in \mathcal{L}_v$ such that
$|\eta_{u,v,w}({\bf x})|$ is minimal; we will refer to such an
${\bf x}$ as a minimal vector. By Lemma~\ref{lemma:L_addition},
this is equivalent to minimizing
$|\eta_{u,v,w}({\bf x}+{\bf z})|$, where ${\bf x} \in \mathcal{L}_v$ is any
vector and ${\bf z} \in \mathcal{L}_0$. Lemma~\ref{lemma:reduce_to_box} shows that some vectors ${\bf x}$ are easily altered in this way to reduce $|\eta_{u,v,w}({\bf x})|$. We will refer to the change from ${\bf x}$ to ${\bf x}+{\bf z}$ in this way as reducing ${\bf x}$.
For $n \geq 3$, let ${\mathcal B}_v^{u,w} \subseteq \mathcal{L}_v$ be the set of
${\bf x} = (x_0, \dots, x_{k_{\xx}}) \in \mathcal{L}_v$ satisfying the following conditions.
\begin{enumerate}
\item If $i<{k_{\xx}}$, then $|x_i| \le \left\lfloor\frac{n}{2}\right\rfloor$.
\item If $i={k_{\xx}} < \max(u,w)$, then $|x_i| \le \left\lfloor\frac{n}{2}\right\rfloor$.
\item If $i={k_{\xx}} \ge \max(u,w)$, then $|x_i| \le \left\lfloor\frac{n}{2}\right\rfloor + 1$.
\end{enumerate}
When $n=2$, we define ${\mathcal B}_v^{u,w}$ as above, replacing the third inequality with $|x_i| \le \left\lfloor\frac{n}{2}\right\rfloor + 2$.
The ``box'' ${\mathcal B}_v^{u,w}$ contains all digit sequences whose digits are
uniformly bounded as described above; most digits are bounded by $\left\lfloor\frac{n}{2}\right\rfloor$, but when ${k_{\xx}} \geq \max(u,w)$ we allow the most significant digit to be slightly larger.
For context, our plan is to find minimal vectors in ${\mathcal B}_v^{u,w}$, and the
modified bound on the final digit results from shortening $|\eta_{u,v,w}({\bf x})|$
in certain cases where it is more efficient to have one larger digit than
two smaller digits at the end of the vector.
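The digit bounds defining ${\mathcal B}_v^{u,w}$ give an immediate membership test; a Python sketch of the bounds alone (it does not check that $\Sigma({\bf x})=v$):

```python
def in_box(x, u, w, n):
    """Digit-bound test for membership in B_v^{u,w}: every digit is bounded
    by n//2, except the most significant one when k >= max(u, w), where the
    bound relaxes to n//2 + 1 (or n//2 + 2 when n = 2)."""
    k = len(x) - 1
    top = n // 2 + (2 if n == 2 else 1)
    for i, d in enumerate(x):
        bound = top if (i == k and k >= max(u, w)) else n // 2
        if abs(d) > bound:
            return False
    return True
```

For instance, with $n=2$ and $u=w=0$ the vector $(1,3)$ passes (relaxed top bound $3$), while $(1,4)$ does not.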
In Lemma~\ref{lemma:reduce_to_box} below we show that given ${\bf x} \in
\mathcal{L}_v$, we can find a vector ${\bf y} \in {\mathcal B}_v^{u,w} \subseteq \mathcal{L}_v$ so that
$|\eta_{u,v,w}({\bf y})| \le |\eta_{u,v,w}({\bf x})|$.
Since $\eta_{u,v,w}({\bf x})$ and $\eta_{u,v,w}({\bf y})$
represent the same group element, this implies that finding a geodesic for a
group element is equivalent to searching for a minimal vector within ${\mathcal B}_v^{u,w}$. For such ${\bf x}$ and ${\bf y}$, we will
write ${\bf y} \le_{u,w} {\bf x}$ to mean
that $|\eta_{u,v,w}({\bf y})| \le |\eta_{u,v,w}({\bf x})|$. Note that
although $\le_{u,w}$ is transitive, it is not a partial order because it is
not antisymmetric. However, it still makes sense to refer to vectors
as being minimal with respect to the relation.
\begin{lemma}\label{lemma:reduce_to_box}
If ${\bf x} \in \mathcal{L}_v$ but ${\bf x} \notin {\mathcal B}_v^{u,w}$, then there exists
${\bf z} \in \mathcal{L}_0$ so that ${\bf x} + {\bf z} \in {\mathcal B}_v^{u,w}$
and
\[
{\bf x} + {\bf z} \le_{u,w} {\bf x}.
\]
Consequently, if ${\bf x} \in {\mathcal B}_v^{u,w}$ is minimal, then $\eta_{u,v,w}({\bf x})$
is geodesic.
\end{lemma}
\begin{proof}
Let ${\bf x} \in \mathcal{L}_v$. There are two conditions which might be violated to imply
${\bf x} \notin {\mathcal B}_v^{u,w}$. In both cases, we will examine the minimal $i$
such that $|x_i|$ contradicts a defining condition of ${\mathcal B}_v^{u,w}$ and decrease this coordinate without affecting any $x_j$ with
$j < i$. Combined with control over the length of
$\eta_{u,v,w}({\bf x})$ as we do this, the lemma then follows
by induction on $i$.
First suppose conditions (1) or (2) of membership in ${\mathcal B}_v^{u,w}$
are violated for some minimal index $i$. That is, $|x_i| > \left\lfloor\frac{n}{2}\right\rfloor$.
Without loss of generality, assume that $x_i>0$.
Let ${\bf z} = {\bf w}^{(i)}$ and consider ${\bf y} = {\bf x}+{\bf z} = {\bf x} + {\bf w}^{(i)}$. Then $y_i = x_i -n$, so
\[
|y_i| \le |x_i| - 1,
\]
and $y_{i+1} = x_{i+1} + 1$, so
\[
|y_{i+1}| \le |x_{i+1}| + 1.
\]
Thus $\Vert {\bf y} \Vert_1 \le \Vert {\bf x} \Vert_1$.
If ${k_{\xx}} < \max(u,w)$, then both $|\eta_{u,v,w}({\bf y})|$ and $|\eta_{u,v,w}({\bf x})|$ are
computed using the first formula in Lemma~\ref{lemma:length_formula}, that is, for shapes 1 and 2, and $\Vert {\bf y} \Vert_1 \le \Vert {\bf x} \Vert_1$ gives
$|\eta_{u,v,w}({\bf y})| \le |\eta_{u,v,w}({\bf x})|$. Otherwise, since adding ${\bf w}^{(i)}$ with $i < {k_{\xx}}$ cannot increase the most significant index, we have $k_{\bf y} \le {k_{\xx}}$, and comparing the applicable length formulas from Lemma~\ref{lemma:length_formula} again gives
$|\eta_{u,v,w}({\bf y})| \le |\eta_{u,v,w}({\bf x})|$. In either case ${\bf y} \le_{u,w} {\bf x}$.
Now suppose $n \geq 3$ and the third condition of membership in ${\mathcal B}_v^{u,w}$ is violated, so $|x_i| \ge \left\lfloor\frac{n}{2}\right\rfloor + 2$
where $i = {k_{\xx}} \ge \max(u,w)$. Assume without loss of
generality that $x_i > 0$.
Let ${\bf z} = {\bf w}^{(i)}$ and consider ${\bf y} = {\bf x}+{\bf z}={\bf x} + {\bf w}^{(i)}$. Since $i={k_{\xx}}$,
we have $x_{{k_{\xx}}+1}=0$ and $y_{{k_{\xx}}+1} = 1$, so $k_{\bf y} = k_{\bf x} + 1$, and
we use the length formula from Lemma~\ref{lemma:length_formula} for shapes 3 and 4 to compute
\begin{align*}
|\eta_{u,v,w}({\bf y})| &= |\eta_{u,v,w}({\bf x})| +
(\Vert {\bf y} \Vert_1 - \Vert {\bf x} \Vert_1) + 2(k_{\bf y}-{k_{\xx}}) \\
&=|\eta_{u,v,w}({\bf x})| +
|y_{{k_{\xx}}}| - |x_{{k_{\xx}}}|+ |y_{{k_{\xx}} +1}| + 2(k_{\bf y}-{k_{\xx}}) \\
&= |\eta_{u,v,w}({\bf x})| +
|y_{{k_{\xx}}}| - |x_{{k_{\xx}}}| + 3.
\end{align*}
We know that $y_{{k_{\xx}}} = x_{{k_{\xx}}} -n$. If $x_{{k_{\xx}}} \ge n$, then
$|y_{{k_{\xx}}}| = |x_{{k_{\xx}}}| - n$. Otherwise,
$|y_{{k_{\xx}}}| \le |x_{{k_{\xx}}}| - 4$ if $n$ is even and
$|y_{{k_{\xx}}}| \le |x_{{k_{\xx}}}| - 3$ if $n$ is odd. In both cases,
$|\eta_{u,v,w}({\bf y})| \le |\eta_{u,v,w}({\bf x})|$, that is ${\bf y} \le_{u,w} {\bf x}$.
Note that $k_{{\bf y}} = {k_{\xx}}+1$, but as ${k_{\xx}}$ is the maximal index in ${\bf x}$
we know that all digits of ${\bf y}$ now satisfy the bounds for ${\mathcal B}_v^{u,w}$, and the induction stops.
There remains the special case of violating the third condition of membership in ${\mathcal B}_v^{u,w}$
when $n=2$. If $|x_{{k_{\xx}}}| \ge \left\lfloor\frac{n}{2}\right\rfloor + 3 = 4$, let ${\bf z} = 2 {\bf w}^{({k_{\xx}})}$ and consider
${\bf y} = {\bf x}+{\bf z} = {\bf x} + 2 {\bf w}^{({k_{\xx}})}$.
Again, ${\bf y} \in {\mathcal B}_v^{u,w}$ and $k_{\bf y} = {k_{\xx}}+1$, so the induction stops.
An analogous calculation shows that
$|\eta_{u,v,w}({\bf y})| \le |\eta_{u,v,w}({\bf x})|$, that is ${\bf y} \le_{u,w} {\bf x}$.
\end{proof}
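The reduction in the proof above is essentially balanced carry propagation. The following simplified sketch pushes every digit into $[-\lfloor n/2\rfloor,\lfloor n/2\rfloor]$ by repeatedly adding multiples of ${\bf w}^{(i)}$; it deliberately ignores the relaxed bound on the most significant digit, so the one-digit shortening at the top handled in the proof may still apply to its output:

```python
def reduce_digits(x, n):
    """Carry-style reduction: repeatedly add s * w^(i) (subtract s*n in
    slot i, add s in slot i+1) until every digit lies in [-n//2, n//2].
    The relaxed top-digit bound of B_v^{u,w} is deliberately not used."""
    x = list(x)
    i = 0
    while i < len(x):
        if abs(x[i]) > n // 2:
            s = 1 if x[i] > 0 else -1
            if i + 1 == len(x):
                x.append(0)           # make room for the carry
            x[i] -= s * n
            x[i + 1] += s
        else:
            i += 1
    while len(x) > 1 and x[-1] == 0:  # drop spurious leading zeros
        x.pop()
    return x
```

The procedure preserves $\Sigma({\bf x})$ at every step, since each ${\bf w}^{(i)}$ evaluates to zero.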
\begin{ex}\label{example:BS12_digit_bound}
The definition of ${\mathcal B}_v^{u,w}$ has a special case for $n=2$. Here we give
an example to show that it is necessary. Let $u=w=0$ and $v=7$. Define
\[
{\bf x} = (\underset{x_0}{1}, \underset{x_1}{3}),
\qquad \textnormal{ and } \qquad
{\bf y} = (\underset{y_0}{1}, \underset{y_1}{1}, \underset{y_2}{1}) = {\bf x} + {\bf w}^{(1)}.
\]
Note that ${\bf x},{\bf y} \in \mathcal{L}_7$, and we can compute
$|\eta_{0,7,0}({\bf x})| = 6$ and
$|\eta_{0,7,0}({\bf y})| = 7$. That is, ${\bf x} \le_{u,w} {\bf y}$. Using the
digit bound of $\left\lfloor\frac{n}{2}\right\rfloor+1$ in the definition of ${\mathcal B}_7^{0,0}$ for
$n=2$ therefore cannot be correct. By enumerating all the vectors
in ${\mathcal B}_7^{0,0}$, we could check that ${\bf x}$ is actually minimal in
${\mathcal B}_7^{0,0}$ and it would follow from
Lemma~\ref{lemma:reduce_to_box} that $\eta_{0,7,0}({\bf x})$ is geodesic.
In Section~\ref{section:geodesics_n_even_2}, we will give
some simple conditions to certify that ${\bf x}$ is minimal without requiring
this enumeration.
\end{ex}
If we consider two vectors ${\bf x},{\bf y} \in {\mathcal B}_v^{u,w}$ which differ by a sum of $\mathcal{L}_0$ basis vectors, we retain some control over the lengths of ${\bf x}$ and ${\bf y}$.
\begin{lemma}\label{lemma:no_L0_parts}
Let ${\bf x},{\bf y} \in {\mathcal B}_v^{u,w}$ with ${k_{\y}} \ge {k_{\xx}}$ and ${\bf y}-{\bf x} = \sum_{i=j}^\ell \alpha_i{\bf w}^{(i)}$,
where $\alpha_j \ne 0$ and $\alpha_\ell \ne 0$.
Then ${k_{\xx}} \ge j$ and ${k_{\y}} > \ell$.
\end{lemma}
\begin{proof}
We first prove that ${k_{\y}} > \ell$. Because $y_{\ell+1} - x_{\ell+1} = \alpha_{\ell}$,
at least one of $y_{\ell+1}$, $x_{\ell+1}$ must be nonzero. Combined with the
hypothesis that ${k_{\y}} \ge {k_{\xx}}$, we have ${k_{\y}} > \ell$.
To prove that ${k_{\xx}} \ge j$, observe that because ${k_{\y}} > \ell \ge j$, we know $|y_j| \le \left\lfloor\frac{n}{2}\right\rfloor$.
Since $x_j - y_j = -\alpha_j n$, we see that $x_j \ne 0$. Therefore ${k_{\xx}} \ge j$.
\end{proof}
For the remainder of this paper, when we write ${\bf y} = {\bf x}+ \sum_{i=j}^{\ell} \alpha_i {\bf w}^{(i)}$ we will always assume that $\alpha_j \neq 0$ and $\alpha_\ell \neq 0$.
\subsection{Minimal vectors for $n$ odd}
\label{section:geodesics_n_odd}
Let $g = t^{-u}a^vt^w \in BS(1,n)$ for $n$ odd.
Lemma~\ref{lemma:reduce_to_box} shows that ${\mathcal B}_v^{u,w}$ is nonempty
and that if ${\bf x} \in \mathcal{L}_v$ is a
minimal vector in ${\mathcal B}_v^{u,w}$, then $\eta_{u,v,w}({\bf x})$ is geodesic for $g$.
We now show that when $n$ is odd, the set ${\mathcal B}_v^{u,w}$ contains at most two vectors.
\begin{lemma}\label{lemma:odd_box}
Let $n \geq 3$ be odd and ${\bf x} \in {\mathcal B}_v^{u,w}$.
\begin{enumerate}[itemsep=5pt]
\item If $k_{\bf x} < \max(u,w)$, then $|{\mathcal B}_v^{u,w}| = 1$.
\item If $k_{\bf x} \ge \max(u,w)$, then $|{\mathcal B}_v^{u,w}| \le 2$.
If $|{\mathcal B}_v^{u,w}| = 2$, then ${\mathcal B}_v^{u,w}$ has the form
\[
{\mathcal B}_v^{u,w} = \{{\bf x}, {\bf x} + \epsilon{\bf w}^{(k_{\bf x})}\},
\]
where $\epsilon \in \{-1,1\}$. Moreover, ${\bf y}\in{\mathcal B}_v^{u,w}$ is not minimal
if and only if ${k_{\y}} > \max(u,w)$ and the final digits of ${\bf y}$ are
$(\delta\left\lfloor\frac{n}{2}\right\rfloor, -\delta)$,
where $\delta \in \{\pm 1\}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $|{\mathcal B}_v^{u,w}| \ge 2$.
Let ${\bf x} \in {\mathcal B}_v^{u,w}$ be of minimal length,
and let ${\bf y} \in {\mathcal B}_v^{u,w}$ be any other vector.
In particular, ${k_{\xx}} \le {k_{\y}}$. Set
${\bf z} = {\bf y} - {\bf x} = \sum_i \alpha_i {\bf w}^{(i)} \in {\mathcal L}_0$.
It follows from Lemma~\ref{lemma:L0_big_digits} that there is some minimal $j$ with $|z_j| \geq n$, and that $0 \leq j < {k_{\y}}$.
If $j < {k_{\xx}} \le {k_{\y}}$, then $|x_j|,|y_j|\le \left\lfloor\frac{n}{2}\right\rfloor = \frac{n-1}{2}$, and the maximum difference between $x_j$ and $y_j$ is $2 \left\lfloor\frac{n}{2}\right\rfloor < n$ because $n$ is odd.
Thus we
cannot have $|z_j| = |x_j - y_j| \ge n$, contradicting the choice of $j$.
We conclude that $j \ge {k_{\xx}}$.
If $j > {k_{\xx}}$, we have $z_{j'} = y_{j'}$ for $j' \geq j$. It follows that
$|z_{j}| = |y_{j}| \geq n$; this bound on $|y_j|$ violates the
definition of ${\mathcal B}_v^{u,w}$, hence this does not occur.
It remains to consider $j = {k_{\xx}}$. Note that if ${k_{\xx}} < \max(u,w)$ then
$|x_j|,|y_j| \le \left\lfloor\frac{n}{2}\right\rfloor$ and it is impossible to have $|x_{k_{\xx}} - y_{k_{\xx}}| \ge n$.
So we must have ${k_{\xx}} \ge \max(u,w)$. In this case, we find exactly
one additional vector in ${\mathcal B}_v^{u,w}$. As ${\bf x}$ and ${\bf y}$ lie in ${\mathcal B}_v^{u,w}$,
we know that $|x_{{k_{\xx}}} - y_{{k_{\xx}}}| \leq n$, and without loss
of generality we assume that $x_{{k_{\xx}}} > 0$. The only way that the
inequality $|x_{k_{\xx}} - y_{k_{\xx}}| \ge n$ can be satisfied is if
$x_{{k_{\xx}}} = \left\lfloor\frac{n}{2}\right\rfloor+1$ or $y_{k_{\xx}} = -(\left\lfloor\frac{n}{2}\right\rfloor + 1)$.
\begin{itemize}
\item If $x_{{k_{\xx}}} = \left\lfloor\frac{n}{2}\right\rfloor+1$, then $y_{k_x}=-\left\lfloor\frac{n}{2}\right\rfloor$ and $\alpha_j = 1$.
Thus, $y_{{k_{\xx}}+1} = 1-\alpha_{{k_{\xx}}+1}n$, so $\alpha_{{k_{\xx}}+1} = 0$. We similarly
conclude that $\alpha_{j'} = 0$ for all $j'>{k_{\xx}}$. So ${\bf y} = {\bf x} + \wf{{k_{\xx}}}$.
\item If $y_{k_{\xx}}=-(\left\lfloor\frac{n}{2}\right\rfloor+1)$, then ${k_{\y}} = {k_{\xx}}$ and $x_{k_{\xx}} = \left\lfloor\frac{n}{2}\right\rfloor$. As in
the previous bullet, $y_{{k_{\xx}}+1} = 1$. This is impossible, though, because
${k_{\y}}={k_{\xx}} < {k_{\xx}} + 1$.
\end{itemize}
Thus we have shown that if $|{\mathcal B}_v^{u,w}| \ge 2$, then ${k_{\xx}} \ge \max(u,w)$
and ${\mathcal B}_v^{u,w} = \{{\bf x}, {\bf x} + \wf{{k_{\xx}}}\}$, so $|{\mathcal B}_v^{u,w}| = 2$.
We have also shown
that this case only occurs in the first bullet above; in this case we have ${k_{\y}} = {k_{\xx}}+1$, and it follows from the second length formula in Lemma~\ref{lemma:length_formula} that
$|\eta_{u,v,w}({\bf y})| = |\eta_{u,v,w}({\bf x})| + 2$.
This proves the final statement of the lemma.
\end{proof}
Lemma~\ref{lemma:odd_box} shows that ${\mathcal B}_v^{u,w}$ can contain at most
two vectors, and contains two vectors only if ${k_{\xx}} \ge \max(u,w)$
for some ${\bf x} \in {\mathcal B}_v^{u,w}$. The statement in Lemma~\ref{lemma:odd_box}
is not an ``if and only if''; it is possible that
${k_{\xx}} \ge \max(u,w)$ and $|{\mathcal B}_v^{u,w}|=1$.
Lemma~\ref{lemma:odd_box} implies that finding a geodesic representative
of $g = t^{-u}a^vt^w$ in $BS(1,n)$ when $n$ is odd is actually quite
straightforward: find any vector in $\mathcal{L}_v$, reduce its digits so that it lies in ${\mathcal B}_v^{u,w}$, and check its most significant digits to ascertain minimality.
An immediate consequence of Lemma~\ref{lemma:odd_box} is that when $n$ is odd, $\le_{u,w}$ is a total order on ${\mathcal B}_v^{u,w}$.
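For odd $n$ this recipe is short enough to sketch in full: the balanced base-$n$ expansion, with digits in $[-\lfloor n/2\rfloor,\lfloor n/2\rfloor]$, is unique, and by Lemma~\ref{lemma:odd_box} the only possible improvement is merging a trailing pair $(\delta\lfloor n/2\rfloor,-\delta)$ into the single digit $-\delta(\lfloor n/2\rfloor+1)$. A hypothetical helper:

```python
def minimal_vector_odd(v, u, w, n):
    """Sketch for odd n >= 3: unique balanced base-n digits of v, then the
    final-digit merge of the odd-n lemma when it shortens the word."""
    assert n % 2 == 1 and n >= 3
    x, q = [], v
    while True:                        # balanced expansion, digits in [-(n-1)/2, (n-1)/2]
        r = q % n
        if r > n // 2:
            r -= n
        x.append(r)
        q = (q - r) // n
        if q == 0:
            break
    k = len(x) - 1
    # a trailing pair (d * n//2, -d) is never minimal once k > max(u, w):
    # merge it into the single digit -d * (n//2 + 1), lowering k by one
    if k > max(u, w) and abs(x[k]) == 1 and x[k - 1] == -x[k] * (n // 2):
        x = x[:k - 1] + [x[k] * (n // 2 + 1)]
    return x
```

The merge shortens the word by exactly $2$, matching the final statement of Lemma~\ref{lemma:odd_box}.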
\subsection{Minimal vectors for $n$ even}
\label{section:geodesics_n_even}
Let $g = t^{-u} a^v t^w \in BS(1,n)$ for $n$ even. In this case, the
set ${\mathcal B}_v^{u,w}$ can contain many more vectors, as well as multiple minimal vectors.
In order to choose a unique minimal vector in ${\mathcal B}_v^{u,w}$, we redefine ${\bf x} \le_{u,w} {\bf y}$
for $n$ even so that it is a total order on ${\mathcal B}_v^{u,w}$.
For $n$ even, let $|{\bf x}|$ and $|{\bf y}|$ denote
the vectors of the absolute values of the coordinates of ${\bf x}$ and ${\bf y}$,
respectively, and define
${\bf x} \le_{u,w} {\bf y}$ if and only if
\begin{itemize}[itemsep=5pt]
\item $|\eta_{u,v,w}({\bf x})| < |\eta_{u,v,w}({\bf y})|$, or
\item $|\eta_{u,v,w}({\bf x})| = |\eta_{u,v,w}({\bf y})|$ and
$|{\bf x}| \leq |{\bf y}|$ in the lexicographic order that ranks lower
indexed coordinates as more significant.
\end{itemize}
The relation is strict, that is, ${\bf x} <_{u,w}{\bf y}$, if and only if
$|\eta_{u,v,w}({\bf x})| < |\eta_{u,v,w}({\bf y})|$, or the word lengths are equal and $|{\bf x}| < |{\bf y}|$.
Since the relation may depend on the absolute values of the coordinates of ${\bf x}$ and ${\bf y}$, it is not {\em a priori} the case
that a minimal vector is unique, or even that the relation given
is an order, which necessitates Lemma~\ref{lemma:abs_lex_unique}.
As $n$ is even, we will write
$\frac{n}{2}$ in place of $\left\lfloor\frac{n}{2}\right\rfloor$ for simplicity for the duration
of Section~\ref{section:geodesics_n_even}.
\begin{lemma}\label{lemma:abs_lex_unique}
Let $n$ be even. For any $u,w \in \mathbb{N}$ and $v \in \mathbb{Z}$, the relation $\le_{u,w}$ is a
total order on ${\mathcal B}_v^{u,w}$.
\end{lemma}
\begin{proof}
Let ${\bf x},{\bf y} \in {\mathcal B}_v^{u,w}$ be given. If
$|\eta_{u,v,w}({\bf y})| \ne |\eta_{u,v,w}({\bf x})|$ then ${\bf x} <_{u,w} {\bf y}$ or
${\bf y} <_{u,w} {\bf x}$ and we are done. Otherwise, the relation is determined
by the lexicographic order of $|{\bf x}|$ and $|{\bf y}|$. Since
the lexicographic order is a total order, the only way for
${\bf x} \le_{u,w} {\bf y}$ and ${\bf y} \le_{u,w} {\bf x}$ to both hold is if $|{\bf x}| = |{\bf y}|$.
Suppose this is the case, so $|{\bf x}| = |{\bf y}|$ and thus
$|x_i| = |y_i|$ for all $i$.
In particular,
$x_i - y_i$ must be even.
Since ${\bf x} - {\bf y} \in \mathcal{L}_0$,
we can write ${\bf x} - {\bf y} = \sum_i \alpha_i{\bf w}^{(i)}$. Let $j$ be the maximal index
such that $\alpha_j \ne 0$. Then $x_{j+1} - y_{j+1} = \alpha_j$,
so $\alpha_j$ must be even. By assumption, $\alpha_j \ne 0$,
so $|\alpha_j| \ge 2$. It follows from Lemma~\ref{lemma:L0_big_digits} that
there is some $\ell \le j$ with $|x_\ell - y_\ell| > 2(n-1) = 2n-2$,
and as $x_\ell - y_\ell$ is even, we have $|x_\ell - y_\ell| \ge 2n$.
The largest possible digits in $|{\bf x}|$ and $|{\bf y}|$ are $\frac{n}{2} + 1$ (or
$\frac{n}{2} +2$ if $n=2$), which can only occur at index ${k_{\xx}}$ or ${k_{\y}}$, respectively.
However, $\ell \neq {k_{\xx}}$ and $\ell \neq {k_{\y}}$ because $x_{j+1} = -y_{j+1} \neq 0$, and $\ell < j+1$.
It follows that $|x_\ell|,|y_\ell| \le \frac{n}{2}$,
so $|x_\ell - y_\ell| \le n$, contradicting our earlier inequality $|x_\ell - y_\ell| > 2n-2$.
We conclude that there is no coordinate
in which ${\bf x}$ and ${\bf y}$ differ, so ${\bf x} = {\bf y}$.
\end{proof}
\begin{ex}\label{example:absolute_order}
As an example, let $n=4$, $v=26$,
and $u,w \ge 3$
and
\[
{\bf x} = (\underset{x_0}{2}, \underset{x_1}{2}, \underset{x_2}{1}, \underset{x_3}{0},
\, \underset{\ldots}{\ldots}),
\qquad
{\bf y} = (\underset{y_0}{-2}, \underset{y_1}{-1}, \underset{y_2}{2}, \underset{y_3}{0},
\, \underset{\ldots}{\ldots}).
\]
Then
\[
|\eta_{u,v,w}({\bf x})| \; = \; \Vert {\bf x} \Vert_1 + u+ w \; = \; 5 + u+ w
\; = \; \Vert {\bf y} \Vert_1 + u+ w \; = \; |\eta_{u,v,w}({\bf y})|,
\]
so the word lengths $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$ are equal,
but ${\bf y} <_{u,w} {\bf x}$ in the absolute lexicographic order. We will see
in Example~\ref{example:absolute_minimal} that both ${\bf x}$ and ${\bf y}$ are minimal,
but ${\bf y}$ is the unique lexicographically minimal vector in ${\mathcal B}_v^{u,w}$.
\end{ex}
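The order $<_{u,w}$ for even $n$ can be realized as a Python sort key (our own helper; the two vectors being compared should be padded to equal length for the lexicographic component to be meaningful):

```python
def key_even(x, u, w):
    """Sort key realizing <=_{u,w} for even n: word length first, then the
    lexicographic order on absolute digit values (index 0 most significant)."""
    k = len(x) - 1
    l1 = sum(abs(d) for d in x)
    length = l1 + u + w if k <= max(u, w) else l1 + 2 * k - abs(u - w)
    return (length, [abs(d) for d in x])
```

On the vectors of Example~\ref{example:absolute_order}, both keys report word length $11$, and the tie is broken lexicographically in favour of ${\bf y}$.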
Although the question of minimality is more complicated for even $n$,
there are relatively simple conditions which allow us to determine whether ${\bf x} \in {\mathcal B}_v^{u,w}$ is minimal, and if not, to find ${\bf y} \in {\mathcal B}_v^{u,w}$ with ${\bf y} \le_{u,w} {\bf x}$.
We now give a brief overview of this strategy, with precise details included in the lemmas below.
Recall that for any vector ${\bf x} \in {\mathcal B}_v^{u,w}$, there is ${\bf z} \in \mathcal{L}_0$
such that ${\bf x} + {\bf z} \in {\mathcal B}_v^{u,w}$ is minimal, and we can write ${\bf z}$ as a
linear combination of ${\bf w}^{(i)}$.
Apart possibly from the most significant digit of ${\bf x}$, every digit satisfies $|x_j| \le \frac{n}{2}$.
If ${\bf w}^{(i)}$ is the lowest indexed basis vector in ${\bf z}$, so $|z_i| \geq n$, we must have $|x_i| = \frac{n}{2}$ and $|z_i| = n$ to ensure that ${\bf x} + {\bf z} \in {\mathcal B}_v^{u,w}$.
That is, potential reductions can only occur
when ${\bf x}$ contains the digit $\pm\frac{n}{2}$. By examining the digits
of ${\bf x}$ which follow this initial $\pm\frac{n}{2}$, we can determine whether the original
vector is minimal.
For the remainder of this section, we consider only $n \geq 4$ and prove analogous results for $n=2$ in Section~\ref{section:geodesics_n_even_2}.
Let ${\bf x} \in {\mathcal B}_v^{u,w}$ and define a \emph{run} ${\bf r} \subseteq {\bf x}$ to be a sequence
of consecutive digits ${\bf r} = (x_j, \dots, x_\ell)$ such that
$|x_j| = \frac{n}{2}$ and $|x_i| \in \{\frac{n}{2}-1, \frac{n}{2}, \frac{n}{2} +1\}$
for all $j < i \le \ell$, and the sign $\ensuremath{\textnormal{sign}}(x_i)$ is constant
for all $j \le i \le \ell$. We denote this sign by
$\epsilon_{\bf r} =\ensuremath{\textnormal{sign}}(x_j)$.
We retain the indexing of the coordinates of ${\bf r}$ from ${\bf x}$; that is, for clarity, the ``first'' coordinate of ${\bf r}$ is $x_j$ rather than $r_0$.
We remark that the digit bounds defining ${\mathcal B}_v^{u,w}$ imply that if
$|x_i| = \frac{n}{2} + 1$, then in fact $i=\ell={k_{\xx}}$.
The \emph{length} of the run is $\ell - j +1$.
We focus below on understanding possible runs contained in a vector ${\bf x} \in {\mathcal B}_v^{u,w}$; adding an appropriate linear combination of basis vectors ${\bf w}^{(i)}$ to a run yields a vector ${\bf y}$ which may satisfy ${\bf y} <_{u,w} {\bf x}$.
Define the \emph{weight} of a run ${\bf r}$ to be
\[
\ensuremath{\textnormal{weight}}({\bf r}) = 3\#\left\{\frac{n}{2}+1\right\} + \left(\#\left\{\frac{n}{2}\right\}-1\right)
- \#\left\{\frac{n}{2} -1\right\}.
\]
That is, three times the number of occurrences
of the digit $\epsilon_{\bf r}(\frac{n}{2}+1)$ in the run, which is either $0$ or $1$, plus one less than the number
of occurrences of the digit $\epsilon_{\bf r}\frac{n}{2}$, minus the number of occurrences of the digit $\epsilon_{\bf r}(\frac{n}{2}-1)$ in the run.
This
rather strange formula will capture the change in
$\Vert{\bf x}\Vert_1$ which arises from adding a linear combination of ${\bf w}^{(i)}$ to
the run ${\bf r}$.
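The weight formula is easy to misread, so here is a direct transcription (a sketch; \texttt{r} holds the digits of the run, all sharing the sign $\epsilon_{\bf r}$):

```python
def run_weight(r, n):
    """weight(r) = 3*#{eps*(n/2+1)} + (#{eps*(n/2)} - 1) - #{eps*(n/2-1)},
    where eps is the common sign of the run's digits."""
    eps = 1 if r[0] > 0 else -1
    count = lambda d: sum(1 for digit in r if digit == eps * d)
    return 3 * count(n // 2 + 1) + (count(n // 2) - 1) - count(n // 2 - 1)
```

For $n=4$, the run $(2,2,2)$ has weight $2$, while $(2,1,2)$ has weight $0$.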
\begin{lemma}\label{lemma:change_formula}
Let $n\ge 4$ be even.
Let ${\bf x} \in {\mathcal B}_v^{u,w}$ contain a run ${\bf r} = (x_j, \dots, x_\ell)$.
Let ${\bf y} = {\bf x} + \epsilon_{\bf r}\sum_{i=j}^\ell{\bf w}^{(i)}$. Then
$\Vert{\bf y}\Vert_1 = \Vert {\bf x} \Vert_1 - \ensuremath{\textnormal{weight}}({\bf r})
+ |x_{\ell+1} + \epsilon_{\bf r}| - |x_{\ell+1}|$.
\end{lemma}
\begin{proof}
The proof is just the computation of the change in absolute
value for each digit $x_j, \dots, x_{\ell+1}$.
Note that the largest indexed basis vector in the above sum is ${\bf w}^{(\ell)}$, and thus the digits of ${\bf x}$ affected by this sum are $x_j, \dots ,x_{\ell+1}$.
We have
$|y_j| = |x_j|$, and for $j < i \le \ell$,
$|y_i| = |x_i - \epsilon_{\bf r}(n - 1)|$, so
\begin{itemize}
\item If $|x_i| = \frac{n}{2} -1$, then $|y_i| = \frac{n}{2}$.
\item If $|x_i| = \frac{n}{2}$, then $|y_i| = \frac{n}{2} - 1$.
\item If $|x_i| = \frac{n}{2} + 1$, then $|y_i| = \frac{n}{2} -2$.
\end{itemize}
In all cases, this change is accounted for by $\ensuremath{\textnormal{weight}}({\bf r})$; counting
one fewer instance of $\frac{n}{2}$ is necessary to account for the fact that
the absolute value of $x_j$ does not change. All that remains
is to account for the difference between $|y_{\ell+1}|$ and
$|x_{\ell+1}|$, which is the final part of the expression.
\end{proof}
The search for a minimal vector is simplified if we are able to consider adding to ${\bf x} \in {\mathcal B}_v^{u,w}$ only linear combinations of consecutively indexed basis vectors, with no zero coefficients in between. The following lemma proves that this is sufficient, and relies on the fact that our lexicographic order treats lower-indexed digits as more significant.
Lemma~\ref{lemma:single_run_from_multiple_runs} shows that if ${\bf y}$ is a minimal vector in ${\mathcal B}_v^{u,w}$ obtained from ${\bf x}$ by adding two sums of $\mathcal{L}_0$ basis vectors whose index sets are separated from each other, then adding only the sum with smaller indices will produce a vector which precedes ${\bf x}$ in the order $<_{u,w}$.
\begin{lemma}\label{lemma:single_run_from_multiple_runs}
Let $n\ge 2$ be even. Let ${\bf x},{\bf y} \in {\mathcal B}_v^{u,w}$, and let ${\bf y}$ be minimal.
Suppose that we have
\[
{\bf y} = {\bf x} + \sum_{i=j}^\ell \alpha_i{\bf w}^{(i)} + \sum_{i>\ell+1} \alpha_i {\bf w}^{(i)}.
\]
That is, ${\bf y}$ is obtained from ${\bf x}$ by adding a linear combination of ${\bf w}^{(i)}$,
where $\wf{\ell+1}$ is omitted from the sum. Then
\[
{\bf x} + \sum_{i=j}^\ell \alpha_i{\bf w}^{(i)} <_{u,w} {\bf x}.
\]
Furthermore, if ${k_{\xx}} \le \ell+1$, then $\alpha_i = 0$ for $i > \ell$.
\end{lemma}
\begin{proof}
We first prove the last claim of the lemma. Suppose ${k_{\xx}} \le \ell+1$ and
$\alpha_j \ne 0$ for some $j > \ell+1$; that is, suppose the right summand is nonzero, and take $j$ minimal with this property.
Then $y_j = -\alpha_jn$ and ${k_{\y}} > j$, which is impossible in ${\mathcal B}_v^{u,w}$ for any value of $n$.
The idea of the proof is that because $\wf{\ell+1}$ does not appear in the linear
combination of vectors, the effects of the two summands
on both the $\ell^1$ vector norm and the lexicographic order are independent.
Let ${\bf a} = \sum_{i=j}^\ell \alpha_i {\bf w}^{(i)}$ and
${\bf b} = \sum_{i>\ell+1} \alpha_i {\bf w}^{(i)}$, so ${\bf y} = {\bf x} + {\bf a} + {\bf b}$.
The lemma follows immediately if ${\bf b}=0$, that is, ${\bf b}$ is an empty sum.
Otherwise, ${\bf b}$ is nonzero and it follows from Lemma~\ref{lemma:no_L0_parts} that ${k_{\xx}},k_{{\bf x}+{\bf b}},{k_{\y}} > \ell+1$.
Therefore, adding ${\bf a}$ does not affect
the length of ${\bf x}$ or ${\bf x} + {\bf b}$, although it might be the case that ${k_{\xx}} \ne k_{{\bf x} + {\bf b}}$.
As ${k_{\y}} = k_{{\bf x}+{\bf b}}$, we use the same word length formula from Lemma~\ref{lemma:length_formula} to compute the lengths of each pair of geodesics which are compared below.
Therefore we have
\begin{align*}
|\eta_{u,v,w}({\bf y})| - |\eta_{u,v,w}({\bf x}+{\bf b})| & = \Vert {\bf y} \Vert_1 - \Vert {\bf x} + {\bf b} \Vert_1 = \sum_{i=j}^{\ell+1} |x_i + a_i| - |x_i| \\
|\eta_{u,v,w}({\bf x}+{\bf a})| - |\eta_{u,v,w}({\bf x})| & = \Vert {\bf x}+{\bf a} \Vert_1 - \Vert {\bf x} \Vert_1 = \sum_{i=j}^{\ell+1} |x_i + a_i| - |x_i|. \\
\end{align*}
Because ${\bf y}$ is minimal, the first difference is at most zero.
As the rightmost terms in each set of equations above are equal, we conclude that $|\eta_{u,v,w}({\bf x}+{\bf a})| \le |\eta_{u,v,w}({\bf x})|$ as well.
If the inequality is strict, it follows immediately that ${\bf x}+{\bf a} <_{u,w} {\bf x}$. If there is equality,
as Lemma~\ref{lemma:abs_lex_unique} proves that $<_{u,w}$ is a total order, there must be
some lexicographic difference between ${\bf x}$ and ${\bf x} + {\bf a}$.
An increase in lexicographic order would contradict the minimality of ${\bf y}$, as it would follow that
${\bf x} + {\bf b} <_{u,w} {\bf x} + {\bf a} + {\bf b} = {\bf y}$.
We conclude that
${\bf x} + {\bf a} <_{u,w} {\bf x}$, completing the proof.
\end{proof}
When the weight of a run is positive, we have additional control over the digits of ${\bf r} \subseteq {\bf x}$.
\begin{lemma}\label{lemma:adjacent_digits_from_weight}
Let $n\ge 4$ be even. If ${\bf r}$ is a run in ${\bf x} \in {\mathcal B}_v^{u,w}$ and
$\ensuremath{\textnormal{weight}}({\bf r}) > 0$, and
${\bf r}$ does not contain a digit with absolute value $\frac{n}{2} + 1$, then
${\bf r}$ contains a pair of adjacent digits $\frac{n}{2}$.
\end{lemma}
\begin{proof}
By the definition of weight, if $\ensuremath{\textnormal{weight}}({\bf r}) > 0$, then we must have at least
two more digits with absolute value $\frac{n}{2}$ than $\frac{n}{2}-1$.
Arrange digits of the form $\frac{n}{2}$ and $\frac{n}{2}-1$ in a sequence: if no two $\frac{n}{2}$ digits were adjacent, each $\frac{n}{2}$ digit after the first would be preceded by an $\frac{n}{2}-1$ digit, so the surplus of $\frac{n}{2}$ digits would be at most one.
A surplus of at least two therefore forces two digits $\frac{n}{2}$ to be adjacent.
\end{proof}
If ${\bf x} \in {\mathcal B}_v^{u,w}$ and ${\bf r}= (x_j, \dots, x_\ell)$ is a run in
${\bf x}$, then we say that
${\bf x}$ can be {\em reduced at} ${\bf r}$ if
\[
{\bf y} <_{u,w} {\bf x},
\]
where ${\bf y} = {\bf x} + \epsilon_{\bf r} \sum_{i=j}^\ell{\bf w}^{(i)} \in {\mathcal B}_v^{u,w}$.
In order to determine whether a vector ${\bf x}$ can be reduced at
${\bf r}$, we use Lemma~\ref{lemma:change_formula} combined with
conditions on
$\ensuremath{\textnormal{weight}}({\bf r})$, the change in absolute value of the
digit $x_{\ell+1}$, the lexicographic change, and, if ${k_{\xx}} < \max(u,w)$, the change in
the length of ${\bf x}$.
Generally, if a vector can be reduced,
it can be reduced at a run. There is a special case which does not follow this rule, given in the following lemma.
\begin{lemma}\label{lemma:even_reduction_end}
Let $n \ge 4$ be even, and ${\bf x} \in {\mathcal B}_v^{u,w}$. If
$k_{\bf x} > \max(u,w)$ and
the final digits of ${\bf x}$ are $(\delta (\frac{n}{2}-1), -\delta)$
or $(\delta \frac{n}{2}, -\delta)$, where $\delta \in \{-1,1\}$,
then ${\bf x}$ is not minimal.
\end{lemma}
\begin{proof}
The cases for $\delta$ are symmetric, so we assume without loss of generality that $\delta = 1$. For either sequence of digits, consider
${\bf y} = {\bf x} + \wf{k_{\bf x}-1}$.
We have $\Vert {\bf y} \Vert_1 \le \Vert {\bf x} \Vert_1 + 1$.
Note that ${k_{\y}} = {k_{\xx}} -1$,
and by the assumption on ${k_{\xx}}$, we use the length formula in Lemma~\ref{lemma:length_formula} for paths of shape 3 and 4 to
compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
Therefore,
\[
|\eta_{u,v,w}({\bf y})| \le \Vert {\bf x} \Vert_1 +1 +2({k_{\xx}}-1) - |u-w| = |\eta_{u,v,w}({\bf x})|-1,
\]
so ${\bf x}$ is not minimal.
\end{proof}
In certain cases, sequences of digits with absolute value $\frac{n}{2}$ and the same sign form runs at which ${\bf x}$ can be reduced.
\begin{lemma}\label{lemma:even_reduction_adjacent_digits}
Let $n\ge 4$ be even and ${\bf x} \in {\mathcal B}_v^{u,w}$. Suppose there is a maximal
sequence $x_j = x_{j+1} = \dots = x_\ell = \pm \frac{n}{2}$ of length at
least $2$ with ${k_{\xx}} > \ell$ and $|x_{\ell+1}| < \frac{n}{2}$.
Then ${\bf x}$ can be reduced at the run ${\bf r}=(x_j, \dots, x_\ell) \subseteq {\bf x}$.
\end{lemma}
\begin{proof}
Let
${\bf y} = {\bf x} + \epsilon_{\bf r}\sum_{i=j}^\ell{\bf w}^{(i)}$.
The assumptions on ${k_{\xx}}$ and $|x_{\ell+1}|$ guarantee that
${\bf y} \in {\mathcal B}_v^{u,w}$ and ${k_{\y}} \le {k_{\xx}}$.
The digits in ${\bf r}$ ensure that $\ensuremath{\textnormal{weight}}({\bf r}) \geq 1$.
To compare $\Vert {\bf x} \Vert_1$ and $\Vert {\bf y} \Vert_1$ we use the equation given in Lemma~\ref{lemma:change_formula}, namely
\[\Vert{\bf y}\Vert_1 = \Vert {\bf x} \Vert_1 - \ensuremath{\textnormal{weight}}({\bf r})
+ |x_{\ell+1} + \epsilon_{\bf r}| - |x_{\ell+1}|.
\]
As it is always true that $|x_{\ell+1} + \epsilon_{\bf r}| - |x_{\ell+1}| \in \{\pm 1\}$, we see that $\ensuremath{\textnormal{weight}}({\bf r}) - (|x_{\ell+1} + \epsilon_{\bf r}| - |x_{\ell+1}|) \geq 0$, and thus $\Vert {\bf y} \Vert_1 \leq \Vert {\bf x} \Vert_1$.
As ${k_{\y}} \in \{{k_{\xx}},{k_{\xx}}-1\}$, we use the same length formula from Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
If $\Vert {\bf y} \Vert_1 < \Vert {\bf x} \Vert_1$ and we use the first length formula, the result follows immediately;
if we use the second length formula, we also rely on the fact that ${k_{\y}} \leq {k_{\xx}}$ to conclude that ${\bf y} <_{u,w} {\bf x}$.
If $\Vert {\bf y} \Vert_1 = \Vert {\bf x} \Vert_1$, then
${\bf r} = (\epsilon_{\bf r} \frac{n}{2},\epsilon_{\bf r} \frac{n}{2})$ and without loss of generality we assume that $\epsilon_{\bf r}=1$.
Then
\[
(y_j,y_{j+1},y_{j+2}) = \left(-\frac{n}{2},-\left(\frac{n}{2}-1\right),x_{j+2}+1\right)
\]
and thus ${\bf y}$ precedes ${\bf x}$ in the lexicographic order, so ${\bf y} <_{u,w} {\bf x}$ in this case as well.
\end{proof}
We are interested in conditions which are both necessary and sufficient to conclude that ${\bf x} \in {\mathcal B}_v^{u,w}$ is not minimal.
This stronger statement is contained in Proposition~\ref{lemma:even_minimal_characterization}.
\begin{proposition}
\label{lemma:even_minimal_characterization}
Let $n\ge 4$ be even, and let ${\bf x} \in {\mathcal B}_v^{u,w}$.
Then ${\bf x}$ is not minimal if and only if
\begin{itemize}
\item there is a run in ${\bf x}$ at which ${\bf x}$ can be reduced, or
\item Lemma~\ref{lemma:even_reduction_end} applies to ${\bf x}$.
\end{itemize}
\end{proposition}
\begin{proof}
If one of the two conditions in the proposition is satisfied, then ${\bf x}$ is
not minimal, so the proof reduces to showing the converse.
Let ${\bf x}, {\bf y} \in {\mathcal B}_v^{u,w}$ with ${\bf y}$ a minimal vector, and set ${\bf z} = {\bf y} - {\bf x}$. Let $j$ be the minimal index
such that $x_j \ne y_j$ and write ${\bf z} = \sum_{i=j}^\ell \alpha_i {\bf w}^{(i)}$. It follows from Lemma~\ref{lemma:L0_big_digits} that $|z_j| \ge n$. To satisfy
the digit bounds on ${\mathcal B}_v^{u,w}$, we must then have $|z_j| = n$. Assume without
loss of generality that $x_j \ge 0$, so $z_j = -n$ and $y_j = x_j - n$.
The proof now reduces to cases corresponding to the possible values of $x_j$.
\begin{enumerate}[itemsep=5pt]
\item Case 1: $x_j = \frac{n}{2} + 1$. This digit can only occur if $j = k_{\bf x}$ and ${k_{\xx}} \geq \max(u,w)$; it follows that $\alpha_j = 1$.
This means that $y_{k_{\xx}} = -(\frac{n}{2} - 1)$ and $y_{{k_{\xx}}+1} = 1- \alpha_{{k_{\xx}}+1}n$, so we must have
$\alpha_{{k_{\xx}}+1}=0$ and $k_{\bf y} = k_{\bf x} + 1 > \max(u,w)$.
We use the second formula from Lemma~\ref{lemma:length_formula}
to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$. We see that
\begin{align*}
|\eta_{u,v,w}({\bf y})| & = \Vert {\bf x} \Vert_1+ |y_{{k_{\xx}}}| - |x_{{k_{\xx}}}| + |y_{{k_{\xx}}+1}| + 2({k_{\xx}} +1) - |u-w| \\
& = |\eta_{u,v,w}({\bf x})| + |y_{{k_{\xx}}}| - |x_{{k_{\xx}}}| + |y_{{k_{\xx}}+1}| + 2 \\
& = |\eta_{u,v,w}({\bf x})| + 1,
\end{align*}
contradicting our assumption that ${\bf y}$ is minimal. Thus this case does not occur.
\item Case 2: $x_j = \frac{n}{2}$. This is the most involved case; it is proven in Section~\ref{sec:technical_lemmas} as Lemma~\ref{lemma:lastcase_lemma3.20}.
\item Case 3: $x_j = \frac{n}{2} -1$. In this case, $y_j = -\frac{n}{2} -1$, so $k_{\bf y} = j$.
Since $|y_j| = \frac{n}{2}+1$, it follows from the definition of ${\mathcal B}_v^{u,w}$ that ${k_{\y}} \geq \max(u,w)$.
As the length of ${\bf y}$ is determined, we must have $x_{j+1} = -1$, and $k_{\bf x} = j+1$. Then ${k_{\xx}} > {k_{\y}} \geq \max(u,w)$, and we see that ${\bf x}$ satisfies the conditions of Lemma~\ref{lemma:even_reduction_end}.
\item Case 4: $x_j < \frac{n}{2}-1$. Here $y_j < -(\frac{n}{2} +1)$, contradicting the fact that ${\bf y} \in {\mathcal B}_v^{u,w}$.
\end{enumerate}
These four cases complete the proof of the proposition.
\end{proof}
It can be computationally difficult to check whether ${\bf x}$ contains a run at which it can be reduced.
When ${k_{\xx}} < w$, Proposition~\ref{lemma:adjacent_digits} presents straightforward observable conditions which guarantee that ${\bf x}$ contains a run at which it can be reduced.
This prompts the following definition: if $g = t^{-u}a^vt^w$ and ${\bf x} \in {\mathcal B}_v^{u,w}$ with ${k_{\xx}} < \max(u,w)$,
we say that $\eta_{u,v,w}({\bf x})$ has \emph{strict shape 1}.
We rely on Proposition~\ref{lemma:adjacent_digits} when computing the growth rate of $BS(1,n)$ in \cite{TW_growth}; namely, we show in \cite{TW_growth} that the set of geodesics of strict shape 1 forms a regular language whose growth rate is the same as the growth rate of $BS(1,n)$. We use a corollary of this result below to show that the sets of elements with positive, negative and zero conjugation curvature which we exhibit in Sections~\ref{sec:curvature} and~\ref{sec:n=2_curvature} have positive density in $BS(1,n)$.
\begin{proposition}\label{lemma:adjacent_digits}
Let $n>2$ be even and ${\bf x} \in {\mathcal B}_v^{u,w}$ with ${k_{\xx}} < \max(u,w)$.
Then ${\bf x}$ is not minimal if and only if one of the following holds, for $\delta \in \{\pm 1\}$.
\begin{itemize}[itemsep=5pt]
\item There are two adjacent digits in ${\bf x}$
of the form $(\delta\frac{n}{2},\delta\frac{n}{2})$.
\item There are two adjacent digits in ${\bf x}$
of the form $(\delta\frac{n}{2},x_i)$ with $\ensuremath{\textnormal{sign}}(x_i) = -\ensuremath{\textnormal{sign}}(\delta)$.
\end{itemize}
\end{proposition}
\begin{proof}
By applying Proposition~\ref{lemma:even_minimal_characterization} and
observing that Lemma~\ref{lemma:even_reduction_end} does not apply to ${\bf x}$,
the proof reduces to showing that there is a run at which ${\bf x}$ can
be reduced if and only if one of the above conditions holds.
First observe that in each of the two cases of the proposition there is a run at which ${\bf x}$ can be reduced.
Consider the run which is the maximal sequence of digits $\delta\frac{n}{2}$ containing the digit(s) in the statement of the proposition, that is, ${\bf r} = (\delta\frac{n}{2},\delta\frac{n}{2}, \dots ,\delta\frac{n}{2}) = (x_j, \dots ,x_\ell)$, where $j \leq \ell$. Let
\[
{\bf y} = {\bf x} + \sum_{i=j}^\ell \delta {\bf w}^{(i)},
\]
and compute $\ensuremath{\textnormal{weight}}({\bf r}) = \ell-j \geq 0$.
As ${k_{\xx}} < \max(u,w)$ and ${k_{\y}} \leq {k_{\xx}}+1$, we have ${k_{\y}} \leq \max(u,w)$ and thus we use the first length formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
Note that the two formulas agree when ${k_{\y}} = \max(u,w)$.
As this formula does not take into account the length of the vectors, any change in word length results from a change in $\ell^1$ norm between ${\bf x}$ and ${\bf y}$.
It follows from Lemma~\ref{lemma:change_formula} that
\[
\Vert {\bf x} \Vert_1 - \Vert {\bf y} \Vert_1 = \ensuremath{\textnormal{weight}}({\bf r}) + |x_{\ell+1}|-|x_{\ell+1}+\delta| .
\]
Suppose that $\ensuremath{\textnormal{weight}}({\bf r}) = \ell-j \geq 1$, so there are at least two digits of the form $\delta \frac{n}{2}$.
We know that $|x_{\ell+1}|-|x_{\ell+1}+\delta| \in \{\pm 1\}$
and hence $\Vert {\bf x} \Vert_1 - \Vert {\bf y} \Vert_1 \geq 0$.
If the inequality is strict, it follows that ${\bf x}$ can be reduced at ${\bf r}$, that is, ${\bf x}$ is not minimal.
If there is equality, notice that the change from $x_{j+1}$ to $y_{j+1}$ is a lexicographic reduction, as $|x_{j+1}| = \frac{n}{2}$ and $|y_{j+1}| = \frac{n}{2} -1$.
Thus ${\bf y} <_{u,w} {\bf x}$, that is, ${\bf x}$ is not minimal.
Suppose that $\ensuremath{\textnormal{weight}}({\bf r})=\ell-j=0$, so we are in the second case of the proposition. In this case, $|x_{\ell+1}|-|x_{\ell+1}+\delta| =1$, so $\Vert {\bf x} \Vert_1 - \Vert {\bf y} \Vert_1 = 1>0$. Thus ${\bf y} <_{u,w} {\bf x}$ and we conclude that ${\bf x}$ is not minimal.
Now we must show the converse: if
${\bf x}$ can be reduced at a run ${\bf r}$, then
one of the conditions in the
statement of the proposition holds.
Let ${\bf r} = (x_j, \dots, x_\ell)$ be such a run,
and ${\bf y} = {\bf x} + \epsilon_{\bf r}\sum_{i=j}^\ell{\bf w}^{(i)}$.
Again note that ${k_{\y}} \leq {k_{\xx}}+1$, so the assumption that ${k_{\xx}} < \max(u,w)$ means that, as above, we use the first length formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
Thus any change in word length results from a change in $\ell^1$ norm between ${\bf x}$ and ${\bf y}$.
As ${\bf y} <_{u,w} {\bf x}$, we know that $|\eta_{u,v,w}({\bf y})| \le |\eta_{u,v,w}({\bf x})|$,
so
\begin{align*}
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y})|
& = \Vert{\bf x}\Vert_1 - \Vert {\bf y} \Vert_1 \\
& = \ensuremath{\textnormal{weight}}({\bf r}) + |x_{\ell+1}| - |x_{\ell+1} + \epsilon_{\bf r}| \\
& \ge 0.
\end{align*}
Consider $x_{\ell+1}$.
We know that $|x_{\ell+1}| - |x_{\ell+1} + \epsilon_{\bf r}| \in \{\pm 1\}$, and we consider the two possibilities in turn.
\begin{enumerate}[itemsep=5pt]
\item[(a)] If $|x_{\ell+1}| - |x_{\ell+1} + \epsilon_{\bf r}| = 1$, then
$\ensuremath{\textnormal{weight}}({\bf r}) \ge -1$.
\smallskip
\begin{itemize}[itemsep=5pt]
\item If $\ensuremath{\textnormal{weight}}({\bf r})=-1$, then $|\eta_{u,v,w}({\bf x})| = |\eta_{u,v,w}({\bf y})|$,
so in order to have ${\bf y} <_{u,w} {\bf x}$, we must have a lexicographic reduction from ${\bf x}$ to ${\bf y}$, that is, ${\bf y}$ precedes ${\bf x}$ in the lexicographic order,
meaning there must be a decrease in absolute value from $|x_{j+1}|$ to $|y_{j+1}|$.
There are two ways this can occur: $x_{j+1} = \delta\frac{n}{2}$ and the run
has length at least 2, or $\ensuremath{\textnormal{sign}}(x_{j+1}) = -\ensuremath{\textnormal{sign}}(\delta)$ and the
run has length 1. In either case, one of the conditions of the proposition
is satisfied.
\item If $\ensuremath{\textnormal{weight}}({\bf r}) = 0$, then ${\bf r}$ contains exactly one more digit
$\delta\frac{n}{2}$ than it does $\delta(\frac{n}{2}-1)$.
Either the first condition of the proposition is satisfied, or ${\bf r}$ ends with $\delta\frac{n}{2}$,
and because $|x_{\ell+1}| > |x_{\ell+1} + \epsilon_{\bf r}|$, we must have $\ensuremath{\textnormal{sign}}(x_{\ell+1}) = -\ensuremath{\textnormal{sign}}(\delta)$,
so the second condition of the proposition is satisfied.
\item If $\ensuremath{\textnormal{weight}}({\bf r}) > 0$, then ${\bf r}$ contains at least two more
digits $\delta\frac{n}{2}$ than it does digits $\delta(\frac{n}{2}-1)$, so
the first condition of the proposition is satisfied.
\end{itemize}
\item[(b)] If $|x_{\ell+1}| - |x_{\ell+1} + \epsilon_{\bf r}| = -1$, then
$\ensuremath{\textnormal{weight}}({\bf r}) \ge 1$, and we conclude as in the third case of (a) above that the first condition of the proposition is satisfied.
\end{enumerate}
In all cases, we have shown that one of the two conditions of the
proposition is satisfied.
\end{proof}
\begin{ex}\label{example:absolute_minimal}
Proposition~\ref{lemma:adjacent_digits} provides a straightforward
way to ensure that a vector corresponding to a geodesic of strict shape 1 is minimal. We revisit
Example~\ref{example:absolute_order}; recall
$n=4$, $v=26$, and $u,w \ge 3$.
Consider
\[
{\bf x} = (\underset{x_0}{2}, \underset{x_1}{2}, \underset{x_2}{1}, \underset{x_3}{0},
\, \underset{\ldots}{\ldots})
\qquad
\text{ and }
\qquad
{\bf y} = (\underset{y_0}{-2}, \underset{y_1}{-1}, \underset{y_2}{2}, \underset{y_3}{0},
\, \underset{\ldots}{\ldots}).
\]
Note that ${\bf x}$ satisfies the first condition of
Proposition~\ref{lemma:adjacent_digits}, so is not
minimal. Indeed, ${\bf y} <_{u,w} {\bf x}$. However, no
condition of Proposition~\ref{lemma:adjacent_digits} applies to ${\bf y}$, so ${\bf y}$ is minimal.
\end{ex}
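The pattern test of Proposition~\ref{lemma:adjacent_digits} is purely local, so it can be sketched directly in code. The following is our own illustration (the helper names are ours, not from the paper): a vector is given as its list of digits, $n>2$ is even, and ${k_{\xx}} < \max(u,w)$ is assumed so that every digit has absolute value at most $\frac{n}{2}$.

```python
def sign(z):
    """Sign of an integer: +1, -1, or 0."""
    return (z > 0) - (z < 0)

def has_reducible_pattern(x, n):
    """Sketch of Proposition adjacent_digits (even n > 2, k_x < max(u, w)):
    return True iff x contains adjacent digits (d*n/2, d*n/2), or
    (d*n/2, x_i) with sign(x_i) == -sign(d), for d in {+1, -1}."""
    half = n // 2
    for a, b in zip(x, x[1:]):
        if abs(a) == half and (b == a or sign(b) == -sign(a)):
            return True
    return False

# On the vectors of the example above with n = 4: x = (2, 2, 1, 0, ...)
# matches the first condition, while y = (-2, -1, 2, 0, ...) matches neither.
```

By the proposition, a `True` result certifies that the vector is not minimal, and a `False` result certifies minimality, without computing any word lengths.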
\subsection{Minimal vectors for $n=2$}
\label{section:geodesics_n_even_2}
In this section, we provide statements analogous
to Propositions~\ref{lemma:even_minimal_characterization}
and~\ref{lemma:adjacent_digits} for
the special case of $n=2$.
The reason the statements and proofs of
Section~\ref{section:geodesics_n_even} do not apply directly is the
fact that when ${k_{\xx}} \geq \max(u,w)$ the absolute value of the most significant digit
of a vector ${\bf x} \in {\mathcal B}_v^{u,w}$
for $n=2$ is bounded by $\frac{n}{2} + 2$, rather than $\frac{n}{2}+1$.
For the remainder
of this section we assume that $n=2$.
We defer the proofs of the main propositions in this section to Section~\ref{sec:technical_n=2}, as they are similar in structure to the proofs in Section~\ref{section:geodesics_n_even}. However, we clarify below the slight differences between a run when $n>2$ and $n=2$, as well as a difference which may arise when a vector ${\bf x} \in {\mathcal B}_v^{u,w}$ can be reduced at a run ${\bf r}$.
When $n=2$, define a \emph{run} ${\bf r}$ in ${\bf x} \in {\mathcal B}_v^{u,w}$ to be a sequence
of consecutive digits ${\bf r} = (x_j, \dots, x_\ell)$ of ${\bf x}$ such that
$|x_j| = \frac{n}{2}=1$ and for all $i$ with $j \le i \le \ell$ we have
$\ensuremath{\textnormal{sign}}(x_i) = \ensuremath{\textnormal{sign}}(x_j)$ or $\ensuremath{\textnormal{sign}}(x_i) = 0$. We denote this sign by
$\epsilon_{\bf r} =\ensuremath{\textnormal{sign}}(x_j)$. We remark that the digit bounds
defining ${\mathcal B}_v^{u,w}$ imply that if
$|x_i| \in \{\frac{n}{2} + 1, \frac{n}{2} + 2\} = \{2,3\}$, then in fact $i=\ell={k_{\xx}}$ and ${k_{\xx}} \geq \max(u,w)$.
The \emph{length} of the run is $\ell - j +1$.
Thus we define a run to either
\begin{itemize}
\item begin with $1$, consist of a word in $\{0,1\}^*$ and possibly conclude with the digit $2$ or $3$, or
\item begin with $-1$, consist of a word in $\{0,-1\}^*$ and possibly conclude with the digit $-2$ or $-3$.
\end{itemize}
If ${\bf r} = (x_j, \dots, x_\ell) \subseteq {\bf x} \in {\mathcal B}_v^{u,w}$ is a run, we say that ${\bf x}$ can be reduced
at ${\bf r}$ if
\[
{\bf x} + \epsilon_{\bf r} \left[\sum_{i=j}^{\ell-1} \wf{i} + \alpha_{\ell}\wf{\ell}\right] <_{u,w} {\bf x},
\]
where $\alpha_{\ell} \in \{1,2\}$.
The possibility that $\alpha_\ell = 2$ does
not occur when $n>2$.
Thus our conditions for determining the minimality of ${\bf x} \in {\mathcal B}_v^{u,w}$ are slightly different when $n=2$.
Because the digit bounds in ${\mathcal B}_v^{u,w}$ when $n=2$ allow a final digit with absolute
value as large as $3$, one might suppose that we need to consider linear
combinations of ${\bf w}^{(i)}$ with final coefficient as large as $3$. However, the following lemmas
give us more control over these coefficients.
\begin{lemma}\label{lemma:n=2_unequal_lengths}
Let $n=2$ and ${\bf x},{\bf y} \in {\mathcal B}_v^{u,w}$. Let ${\bf y}-{\bf x} = \sum_{i=j}^\ell \alpha_i {\bf w}^{(i)}$.
If $\alpha_{k_{\xx}} \ne 0$, then ${k_{\y}} > {k_{\xx}}$.
\end{lemma}
\begin{proof}
Let $m$ be maximal so that $\alpha_m \ne 0$. Since $\alpha_{k_{\xx}} \ne 0$, we have
$m \ge {k_{\xx}}$. Then $y_{m+1} = \alpha_m$, so ${k_{\y}} = m+1 > {k_{\xx}}$.
\end{proof}
\begin{lemma}
\label{lemma:less_than_6}
Let $n=2$ and ${\bf x},{\bf y} \in {\mathcal B}_v^{u,w}$ with ${k_{\xx}} \le {k_{\y}}$. Let ${\bf y}-{\bf x} = \sum_{i=j}^\ell \alpha_i {\bf w}^{(i)}$.
Then $|\alpha_i| \le 2$, with $|\alpha_i|=2$
only possible if $i = {k_{\xx}} < {k_{\y}}$.
\end{lemma}
\begin{proof}
It follows from Lemma~\ref{lemma:no_L0_parts} that $\ell < {k_{\y}}$;
we prove the lemma by induction on $i$. First suppose $i<{k_{\xx}} \leq {k_{\y}}$, so $|x_i|,|y_i| \leq 1$.
Writing $y_i - x_i = \alpha_{i-1}-2\alpha_i$ and applying the induction assumption that $|\alpha_{i-1}| \le 1$, it follows that $2 |\alpha_i| \leq 3$ and thus $|\alpha_i| \leq 1$.
If $i={k_{\xx}}$ it follows from Lemma~\ref{lemma:n=2_unequal_lengths} that ${k_{\xx}} < {k_{\y}}$. Now we have the bounds $|x_i| \leq 3$ and $|y_i| \leq 1$.
Writing $y_i - x_i = \alpha_{i-1}-2\alpha_i$ and applying the induction assumption that $|\alpha_{i-1}| \le 1$, it follows that $2 |\alpha_i| \leq 5$ and thus $|\alpha_i| \leq 2$.
If $i={k_{\xx}}+1$, the same analysis with $|\alpha_{i-1}| \leq 2$ shows that $|\alpha_i| \leq 1$. It then follows from previous arguments that for ${k_{\xx}} < i < {k_{\y}}$ we have $|\alpha_i| \leq 1.$
\end{proof}
The next lemma, analogous to Lemma~\ref{lemma:even_reduction_end}, describes a situation where ${\bf x} \in {\mathcal B}_v^{u,w}$ is not minimal but does not
necessarily contain a run at which it can be reduced.
\begin{lemma}
\label{lemma:n=2_notminimal}
Let $n=2$ and ${\bf x} \in {\mathcal B}_v^{u,w}$. If ${k_{\xx}} > \max(u,w)$ and ${\bf x}$ ends in
the digits $(0, \delta)$ for $\delta \in \{\pm 1\}$, then ${\bf x}$ is not minimal.
\end{lemma}
\begin{proof}
It is easily checked that adding $- \delta {\bf w}^{({k_{\xx}}-1)}$ to ${\bf x}$
increases $\Vert {\bf x} \Vert_1$ by $1$ and reduces the length of the vector by $1$.
Since ${k_{\xx}} > \max(u,w)$, we use the second length formula in
Lemma~\ref{lemma:length_formula} to compute $|\eta_{u,v,w}({\bf x})|$ and
$|\eta_{u,v,w}({\bf x}- \delta {\bf w}^{({k_{\xx}}-1)})|$, so
\[
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf x}- \delta {\bf w}^{({k_{\xx}}-1)})| = -1 + 2 = 1,
\]
and hence ${\bf x}$ is not minimal.
\end{proof}
In Lemma~\ref{lemma:n=2_110} we identify several digit patterns which imply that a vector ${\bf x} \in {\mathcal B}_v^{u,w}$ is not minimal.
Moreover, the existence of one of these patterns guarantees that ${\bf x}$ contains a run at which it can be reduced.
We defer the proof of Lemma~\ref{lemma:n=2_110} to Section~\ref{sec:technical_lemmas}.
\begin{lemma}
\label{lemma:n=2_110}
Let $n=2$ and suppose that ${\bf x} \in {\mathcal B}_v^{u,w}$ and $\delta \in \{\pm 1\}$.
If any of the following occur, then there is a run at which
${\bf x}$ can be reduced, and hence ${\bf x}$ is not minimal.
\begin{enumerate}[itemsep=5pt]
\item ${\bf x}$ contains the digits $(\delta, -\delta\alpha)$ for $\alpha > 0$.
\item ${k_{\xx}} \ne \max(u,w)$ and ${\bf x}$ ends in the digits $(\delta, \delta)$.
\item ${\bf x}$ contains the digits $(\delta, \delta, \alpha)$ for any $\alpha$.
\end{enumerate}
\end{lemma}
It follows from Lemma~\ref{lemma:n=2_110} that the
only way ${\bf x}$ can be minimal and contain the digit sequence $(1,1)$
is if ${k_{\xx}} = \max(u,w)$ and these digits occur at the end of ${\bf x}$.
\begin{remark}
\label{remark:digit_sequences}
Let ${\bf x} \in {\mathcal B}_v^{u,w}$ and ${\bf r} \in \{0,1\}^*$ be a run in ${\bf x}$. If ${\bf r}$ contains at least
two more occurrences of the digit 1 than the digit 0,
then either ${\bf r}$ contains the sequence $(1, 1, 0)$ or ${\bf r}$ ends in $(1, 1)$.
In the first situation, ${\bf x}$ can be reduced at the run $(1, 1, 0)$.
In the second situation, if $x_{{k_{\xx}}} \in {\bf r}$ and ${k_{\xx}} \neq \max(u,w)$, then by Lemma~\ref{lemma:n=2_110},
${\bf x}$ can be reduced at the run $(1,1)$.
\end{remark}
The following lemma is the analog of Proposition~\ref{lemma:even_minimal_characterization} for the case $n=2$.
One direction of the proof is clear, and we defer the remainder of the proof to Section~\ref{sec:technical_lemmas}.
\begin{proposition}
\label{lemma:lemma318_n=2}
Let $n=2$ and ${\bf x} \in {\mathcal B}_v^{u,w}$. Then ${\bf x}$ is not minimal if and only if one of the following occurs.
\begin{itemize}[itemsep=5pt]
\item There is a run at which ${\bf x}$ can be reduced.
\item Lemma~\ref{lemma:n=2_notminimal} applies to ${\bf x}$.
\end{itemize}
\end{proposition}
Note that the conclusion of Proposition~\ref{lemma:n=2_adjacent_digits} is
identical to that of Proposition~\ref{lemma:adjacent_digits} when $n=2$.
The different analysis of the case $n=2$ leads us to separate the propositions.
It follows from Proposition~\ref{lemma:n=2_adjacent_digits} that to determine whether ${\bf x} \in {\mathcal B}_v^{u,w}$ is minimal, where ${\bf x}$ corresponds to a geodesic of strict shape 1, it is sufficient to consider adjacent pairs of coordinates, and rule out two specific patterns.
\begin{proposition}
\label{lemma:n=2_adjacent_digits}
Let $n=2$ and ${\bf x} \in {\mathcal B}_v^{u,w}$ and ${k_{\xx}} < \max(u,w)$.
Then ${\bf x}$ is not minimal if and only if
${\bf x}$ contains a digit sequence of the form $(\delta, \delta)$
or $(\delta, -\delta)$, for $\delta \in \{\pm 1\}$.
\end{proposition}
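As with the even case, the criterion of Proposition~\ref{lemma:n=2_adjacent_digits} reduces to scanning adjacent digit pairs. A minimal sketch (our own helper, assuming $n=2$ and ${k_{\xx}} < \max(u,w)$, so every digit lies in $\{-1,0,1\}$):

```python
def n2_not_minimal(x):
    """Sketch of Proposition n=2_adjacent_digits (n = 2, k_x < max(u, w)):
    x is not minimal iff it contains adjacent digits (d, d) or (d, -d)
    for d in {+1, -1}; with digits in {-1, 0, 1}, this holds exactly
    when two adjacent digits are both nonzero."""
    for a, b in zip(x, x[1:]):
        if abs(a) == 1 and abs(b) == 1:
            return True
    return False
```

In other words, for strict shape 1 and $n=2$, minimality amounts to every pair of nonzero digits being separated by at least one zero.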
\section{Growth and Regular Languages}
\label{sec:growth}
Given $u,w$, and ${\bf x}$ with ${k_{\xx}} <\max(u,w)$,
Lemma~\ref{lemma:odd_box} and Propositions~\ref{lemma:adjacent_digits} and~\ref{lemma:n=2_adjacent_digits} provide a straightforward
way to determine whether ${\bf x} \in {\mathcal B}_v^{u,w}$ is minimal,
that is, whether $\eta_{u,v,w}({\bf x})$ is a geodesic, by examining the
digits of ${\bf x}$.
Recall that if ${k_{\xx}} < \max(u,w)$,
we say that $\eta_{u,v,w}({\bf x})$ has \emph{strict shape 1}.
In \cite{TW_growth} we prove that the set of vectors
${\bf x}$ for which there are $u,w$ so that $\eta_{u,v,w}({\bf x})$
is geodesic and has strict shape 1 forms a regular language, denoted ${\mathcal D}_n$.
This language is not a language of geodesic paths, merely of vectors which yield geodesic paths of strict shape $1$ for some choice of $u$ and $w$.
Let $\mathcal O_n$ denote the corresponding language of geodesic paths of strict shape $1$.
In \cite{TW_growth} we show that $\mathcal O_n$ is also a regular language, exhibiting finite state automata which accept these two languages.
The finite state automaton accepting ${\mathcal D}_n$ has a number of states which does not depend on $n$.
We use it to produce a finite state automaton accepting $\mathcal O_n$ by performing a ``digit expansion'' procedure which produces an automaton whose number of states does depend on $n$.
The salient piece of information about this machine is that it has one strongly connected component which determines its growth rate.
We refer the reader to \cite{TW_growth} for additional details of this procedure.
Figure~\ref{fig:fsa_n=2} depicts the finite state automaton accepting ${\mathcal O}_2$.
While the analogous automaton for $n>2$ is more complex, it shares the feature that there is one strongly connected component, and one additional component containing the state $s_{{t^{-1}}}$, which accounts for the initial string of the letter ${t^{-1}}$ at the start of an accepted word.
This fact will be referred to below in the proof of Lemma~\ref{lemma:middle_vector}.
\begin{figure}[ht!]
\tikzset{every state/.style={minimum size=2em}}
\begin{center}
\resizebox {\columnwidth} {!} {
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto, node distance=5cm,
semithick]
\node[state,accepting] (A) {$s_{0,0}$};
\node[state] (A1) [above right of =A, yshift=-0.5cm] {$s_{0,1}$};
\node[state] (A_1) [above left of =A, yshift=-0.5cm] {$s_{0,-1}$};
\node[state] (T) [above of =A] {$s_{t^{-1}}$};
\node[state] (start) [above of =T,yshift=-1.5cm] {$\texttt{start}$};
\node[state,accepting] (B) [right of =A] {$s_{1,0}$};
\node[state, draw=none] (BB) [right of=B] {};
\node[state,accepting] (C) [left of = A] {$s_{2,0}$};
\node[state, draw=none] (CC) [left of=C] {};
\path(start)
edge node[left] {$t^{-1}$} (T)
edge [bend left] node[right, pos=0.25] {$t$} (A)
edge [bend left] node[left] {$a$} (A1)
edge [bend right] node[right] {$a^{-1}$} (A_1);
\path(T)
edge [loop below] node[below] {$t^{-1}$} (T)
edge node[above] {$a$} (A1)
edge node[above] {$a^{-1}$} (A_1);
\path(A)
edge [loop below] node[below] {$t$} (A)
edge node[right]{$a$} (A1)
edge node[right]{$a^{-1}$} (A_1);
\path(A1)
edge node[above] {$t$} (B);
\path(A_1)
edge node[above] {$t$} (C);
\path(B)
edge node[above]{$t$} (A)
edge [shorten >=75pt, dashed] node[above] {$a$} (BB)
edge [shorten >=75pt, dashed] node[below] {$a^{-1}$} (BB);
\path(C)
edge node[above] {$t$} (A)
edge [shorten >=75pt, dashed] node[above] {$a$} (CC)
edge [shorten >=75pt,dashed] node[below] {$a^{-1}$} (CC);
\end{tikzpicture}
}
\end{center}
\caption{The finite state automaton
accepting the language $\mathcal O_2$ of geodesics of strict shape 1 in $BS(1,2)$.
Accept states are indicated with a double circle.}
\label{fig:fsa_n=2}
\end{figure}
Recall that the growth rate of a sequence $\{f(N)\}_{N=1}^\infty$ is $\lambda$ if
\[
\lim_{N\to\infty} \frac{\log f(N)}{N\log\lambda} = 1.
\]
Equivalently, we write $f(N) = \Theta(\lambda^N)$; that is,
there are constants $A,B>0$ such that $$A \lambda^N \le f(N) \le B \lambda^N$$
for sufficiently large $N$.
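The limit in this definition can be checked on a small example; the function $f$ below is our own illustration, not a sequence from the paper.

```python
import math

# For f(N) = 3 * 2**N + N we have f(N) = Theta(2**N), so the growth
# rate is lambda = 2, and log f(N) / (N log 2) approaches 1 as N grows.
def f(N):
    return 3 * 2**N + N

ratio = math.log(f(200)) / (200 * math.log(2))
# ratio is 1 + log(3)/(200 log 2) up to a negligible correction, close to 1.
```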
For any set ${\mathcal A} \subset BS(1,n)$, we use the notation ${\mathcal A}(N)$ to denote the set of elements of ${\mathcal A}$ with word length $N$ with respect to the generating set $\{a,t\}$.
A main result of \cite{TW_growth} is the following theorem, which shows that to understand the growth rate of $BS(1,n)$ it is sufficient to understand the growth rate of the sequence $\OOn$.
Let $S_n(N)$ denote the sphere
of radius $N$ in $BS(1,n)$. The growth rate of $BS(1,n)$, or of any finitely generated group,
is defined to be the growth rate of the sequence $\{|S_n(N)|\}_{N \in \mathbb{N}}$.
We say that ${\mathcal A}$ has positive density in $BS(1,n)$ if there is some $\epsilon >0$ so that for all sufficiently large $N$,
\[
\epsilon < \frac{|{\mathcal A}(N)|}{|S_n(N)|} < 1-\epsilon.
\]
If ${\mathcal A}$ has the same growth rate as $BS(1,n)$ it follows immediately that ${\mathcal A}$ has positive density in $BS(1,n)$.
\begin{theorem}[\cite{TW_growth}, Corollary 5.4]
\label{corollary:shapes_injection}
In the notation above, we have
\[
|\mathcal O_n(N)| \le |S_n(N)| \le 20|\mathcal O_n(N+3)|.
\]
Consequently,
the growth rates of the sequences $\OOn$ and $\{|S_n(N)|\}_{N \in \mathbb{N}}$ are identical.
\end{theorem}
This growth rate is computed explicitly in \cite{TW_growth}.
The following lemma allows us to effectively compute the growth rate of the
function which counts the number of accepted paths of a given length
in a finite state automaton.
\begin{lemma}[\cite{TW_growth}, Lemma 4.2]
\label{lemma:fsa_growth}
Let $F$ be a finite state automaton with state set $S$.
Let $f(N)$ denote the number of accepted paths in $F$ of length $N$,
and for each $s \in S$, let $f_s(N)$ denote the number of accepted
paths in $F$ of length $N$ beginning at state $s$.
Let $S_1, \dots, S_c$ be the strongly connected components in $F$.
\begin{enumerate}
\item For each $i$, the growth rate of $f_s$ is constant over all $s \in S_i$.
\item The growth rate of $f$ is the maximum of the growth rates of the $S_i$.
\end{enumerate}
\end{lemma}
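Lemma~\ref{lemma:fsa_growth} can be illustrated by counting accepted paths with powers of the transfer matrix of an automaton. The machine below is a toy example of our own, not one of the automata from the paper.

```python
def count_paths(M, start, accept, N):
    """Count length-N paths in the digraph with adjacency matrix M
    from state `start` to any state in `accept`."""
    vec = [1 if s == start else 0 for s in range(len(M))]
    for _ in range(N):
        vec = [sum(vec[i] * M[i][j] for i in range(len(M)))
               for j in range(len(M))]
    return sum(vec[s] for s in accept)

# Toy machine with one strongly connected component: state 0 has a
# self-loop and an edge to state 1, and state 1 returns to state 0.
# Path counts satisfy f(N+1) = f(N) + f(N-1), so the growth rate is the
# golden ratio, the dominant eigenvalue of M.
M = [[1, 1],
     [1, 0]]
f = [count_paths(M, 0, {0, 1}, N) for N in range(12)]
```

Successive ratios $f(N+1)/f(N)$ converge to the dominant eigenvalue, which is how the growth rate of an accepted language can be computed in practice.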
To compute the growth rate of $\OOn$, we must account for the fact that the number of states in the finite state automaton accepting $\mathcal O_n$ depends on $n$, while the number of states in the finite state automaton accepting ${\mathcal D}_n$ is constant.
We do this via a matrix equation of fixed size, where the entries are growth series for paths beginning, respectively, in each state of the automaton accepting $\mathcal O_n$.
That is, we trade a computation
with arbitrarily large matrices over the integers (computing
an eigenvalue) for a computation with fixed size matrices with
entries which are infinite series.
The following basic fact about exponential growth will be referred to frequently in Sections~\ref{sec:curvature} and~\ref{sec:n=2_curvature}. A proof is included in \cite{TW_growth}.
\begin{lemma}[\cite{TW_growth}, Lemma 4.1]
\label{lemma:exp_growth}
Suppose that $f(N) = \Theta(\lambda^N)$ with $\lambda > 1$.
\begin{enumerate}[itemsep=5pt]
\item Both $f(N+k)$ and $\sum_{i=1}^Nf(i)$ are $\Theta(\lambda^N)$.
\item If $f(N)$ and $g(N)$ are $ \Theta(\lambda^N)$, there are $N_0,d>0$ so that $f(N)/g(N) > d$ for $N > N_0$.
\end{enumerate}
\end{lemma}
\section{Conjugation curvature in $BS(1,n)$}
\label{sec:curvature}
We begin this section with several results about minimal vectors which follow
from the technology developed in
Section~\ref{section:min_rep}. We combine this
with our understanding of growth rates from Section~\ref{sec:growth} to study the density of elements whose conjugation curvature is, respectively, positive, negative and zero.
Recall that the conjugation curvature $\kappa_r(h)$ is defined to be
\[
\kappa_r(h) = \frac{l(h) - \frac{1}{|S_n(r)|} \sum_{w \in S_n(r)} l(h^w)}{l(h)},
\]
that is, the difference between the word length of $h$ and the average word length of the conjugates of $h$ by all $w$ in the sphere $S_n(r)$ of radius $r$ in the Cayley graph $\Gamma(G,S)$,
scaled by the word length of $h$.
We show that $BS(1,n)$ has a positive density of elements with
$\kappa_r(g)<0$ and $\kappa_r(g)=0$, where $r$ is allowed to
assume a finite range of values. Additionally, when $r=1$ we show that
$BS(1,n)$ has a positive density of elements with $\kappa_1(g)>0$.
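Readers may find it helpful to see $\kappa_1$ evaluated by brute force on a small example. The following Python sketch is our own illustration, not part of the paper's machinery: it uses the standard faithful action of $BS(1,n)$ on ${\bf Z}[1/n]$ by affine maps $x \mapsto n^k x + q$ to hash group elements, computes exact word lengths with respect to $\{a,t\}$ by breadth-first search, and then evaluates the displayed formula for $\kappa_1$. All helper names (\texttt{mul}, \texttt{ball}, \texttt{kappa1}) are ours.

```python
from fractions import Fraction

n = 2  # work in BS(1,2); the construction below works for any n >= 2

# Faithful representation of BS(1,n) = <a, t | t a t^{-1} = a^n> by affine
# maps x -> n^k x + q with q in Z[1/n]; a acts as x -> x + 1, t as x -> n x.
A = (0, Fraction(1))
T = (1, Fraction(0))

def mul(g, h):
    """Composition of affine maps: (g * h)(x) = g(h(x))."""
    k1, q1 = g
    k2, q2 = h
    return (k1 + k2, Fraction(n) ** k1 * q2 + q1)

def inv(g):
    k, q = g
    return (-k, -(Fraction(n) ** (-k)) * q)

GENS = [A, inv(A), T, inv(T)]

def ball(radius):
    """Exact word lengths w.r.t. {a, t} of every element in the ball, via BFS."""
    e = (0, Fraction(0))
    dist, frontier = {e: 0}, [e]
    for d in range(1, radius + 1):
        nxt = []
        for g in frontier:
            for s in GENS:
                h = mul(g, s)
                if h not in dist:
                    dist[h] = d
                    nxt.append(h)
        frontier = nxt
    return dist

def kappa1(g, dist):
    """kappa_1(g): average over the sphere S_n(1) = {a, a^-1, t, t^-1}."""
    lg = dist[g]
    avg = Fraction(sum(dist[mul(mul(s, g), inv(s))] for s in GENS), 4)
    return (lg - avg) / lg

dist = ball(8)
# For g = a in BS(1,2): l(a) = 1 and the four conjugates have lengths 1, 1, 2, 3.
print(kappa1(A, dist))  # -3/4
```

For example, for $g=a$ in $BS(1,2)$ the conjugates $a^a, a^{a^{-1}}, a^t, a^{t^{-1}}$ have lengths $1,1,2,3$, giving $\kappa_1(a) = (1 - \frac{7}{4})/1 = -\frac{3}{4}$.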
\subsection{Conjugation curvature when $r=1$.}
\label{sec:r=1}
When computing $\kappa_r(g)$ for $g = t^{-u}a^vt^w$ we must be able to evaluate
$l(g^p)$ where $p=s_1s_2 \cdots s_r$, and each
$s_i \in \{a^{\pm 1},t^{\pm 1}\}$.
We begin by understanding how $u$, $v$, and $w$ change under conjugation
by a single generator of $BS(1,n)$. This enables us to characterize when ${\bf x}$ is a minimal vector for both $g$ and $g^s$, for $s \in \{a^{\pm 1},t^{\pm 1}\}$, which in turn allows us to compute
the change in word length and thus $\kappa_1(g)$.
We outline this idea with the simplifying assumption that
$n \nmid v$ and $uw>0$.
Let $g= t^{-u}a^vt^w$. With these assumptions, the four conjugates
of $g$ by the generators are as follows.
\begin{enumerate}[itemsep=5pt]
\item $g^t=t(t^{-u}a^vt^w){t^{-1}} = t^{-(u-1)}a^vt^{w-1}$
\item $g^{{t^{-1}}}={t^{-1}}(t^{-u}a^vt^w)t = t^{-(u+1)}a^vt^{w+1}$
\item $g^{a}=a( t^{-u}a^vt^w)a^{-1} = t^{-u}a^{n^u+v-n^w}t^w$
\item $g^{{a^{-1}}}=a^{-1}( t^{-u}a^vt^w)a = t^{-u}a^{-n^u+v+n^w}t^w$
\end{enumerate}
When $uw=0$, we obtain $g^t=t^{-u}a^{nv}t^w$, and the remaining
conjugates are unchanged.
If $n|v$ then $uw=0$ and we obtain $g^{{t^{-1}}} = t^{-u} a^{\frac{v}{n}} t^w$;
the remaining conjugates are unchanged. Observe that the
formulas above demonstrate how $u,v,w$ change under conjugation.
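These identities (including the two special cases) hold at the level of group elements and can be checked mechanically: under the faithful action of $BS(1,n)$ on ${\bf Z}[1/n]$, the element $t^{-u}a^vt^w$ is the affine map $x \mapsto n^{w-u}x + vn^{-u}$. The following Python check is our own and merely samples parameter values; the helper \texttt{as\_map} is not from the paper.

```python
from fractions import Fraction
from itertools import product

def as_map(u, v, w, n):
    """t^{-u} a^v t^w acts on Z[1/n] as x -> n^(w - u) x + v n^(-u)."""
    return (w - u, Fraction(v) * Fraction(n) ** (-u))

for n, u, v, w in product([2, 3, 5], [1, 2, 3], [-7, -1, 1, 5, 9], [1, 2, 4]):
    k, q = as_map(u, v, w, n)
    s = Fraction(n)
    # (1) g^t = t g t^{-1} acts as x -> n * g(x / n)
    assert (k, s * q) == as_map(u - 1, v, w - 1, n)
    #     ... and also equals t^{-u} a^{nv} t^w, the form used when uw = 0
    assert (k, s * q) == as_map(u, n * v, w, n)
    # (2) g^{t^{-1}} acts as x -> g(n x) / n
    assert (k, q / s) == as_map(u + 1, v, w + 1, n)
    if v % n == 0:  # the form t^{-u} a^{v/n} t^w used when n | v
        assert (k, q / s) == as_map(u, v // n, w, n)
    # (3) g^a = a g a^{-1} acts as x -> g(x - 1) + 1
    assert (k, q + 1 - s**k) == as_map(u, n**u + v - n**w, w, n)
    # (4) g^{a^{-1}} acts as x -> g(x + 1) - 1
    assert (k, q - 1 + s**k) == as_map(u, -(n**u) + v + n**w, w, n)
```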
In order to determine the geodesic lengths of $g^{a^{\pm 1}}$ and
$g^{t^{\pm 1}}$, we begin with a minimal vector
${\bf x} \in {\mathcal B}_v^{u,w}$, so $\eta_{u,v,w}({\bf x})$ is a geodesic representing $g$.
We must then find a minimal vector in one of
${\mathcal B}_v^{u-1,w-1}$,
${\mathcal B}_v^{u+1,w+1}$, ${\mathcal B}_{n^u+v-n^w}^{u,w}$,
and ${\mathcal B}_{-n^u+v+n^w}^{u,w}$ in order to calculate the word length of the appropriate conjugate of $g$.
Sometimes this is straightforward, for example, in
Lemma~\ref{lemma:zero_curv} with the assumption that $w=u$.
We begin with some convenient corollaries
of the results in Sections~\ref{section:geodesics_n_odd}
and~\ref{section:geodesics_n_even} which will allow us to recognize minimal vectors under specific conditions.
\begin{lemma}\label{lemma:still_minimal}
If ${\bf x}$ is a minimal vector in ${\mathcal B}_v^{u,w}$ then ${\bf x}$ is a minimal vector in ${\mathcal B}_v^{u',w'}$ for any pair $u',w'$
so that $\max(u,w)$ and $\max(u',w')$ have the same ordinal relationship to ${k_{\xx}}$.
\end{lemma}
\begin{proof}
The hypotheses for Lemma~\ref{lemma:odd_box} when $n$ is odd, Proposition~\ref{lemma:even_minimal_characterization} when $n>2$ is even, and Proposition~\ref{lemma:lemma318_n=2} when $n=2$ depend only on the ordinal relationship between $\max(u,w), \ \max(u',w')$ and ${k_{\xx}}$.
Thus it follows from the appropriate lemma that ${\bf x}$ is minimal in ${\mathcal B}_v^{u',w'}$.
\end{proof}
Given ${k_{\xx}} > \max(u,w)$, Lemma~\ref{lemma:still_minimal_2} extends the conclusion of Lemma~\ref{lemma:still_minimal} by relaxing the condition that ${k_{\xx}} > \max(u',w')$ to allow ${k_{\xx}} \geq \max(u',w')$.
\begin{lemma}\label{lemma:still_minimal_2}
If ${\bf x}$ is a minimal vector in ${\mathcal B}_v^{u,w}$ and ${k_{\xx}} > \max(u,w)$,
then ${\bf x}$ is a minimal vector in ${\mathcal B}_v^{u',w'}$ for any pair
$u',w'$ with ${k_{\xx}} \ge \max(u',w')$.
\end{lemma}
Before proving Lemma~\ref{lemma:still_minimal_2}, we prove the following two lemmas, which describe the change in word length as $w$ is incremented and as $u$ is decremented, respectively, while all other parameters are unchanged.
\begin{lemma}\label{lemma:still_minimal_2_claim_1}
For any ${\bf x} \in \mathcal{L}_v$,
\[
|\eta_{u,v,w+1}({\bf x})| =
|\eta_{u,v,w}({\bf x})| +
\left\{\begin{array}{ll}
\phantom{-}1 & \textnormal{if $\max(u,w) \ge {k_{\xx}}$} \\
\phantom{-}1 & \textnormal{if $\max(u,w) < {k_{\xx}}$ and $u > w$} \\
-1 & \textnormal{if $\max(u,w) < {k_{\xx}}$ and $u \le w$}
\end{array}\right.
\]
An identical equality holds if we exchange the roles of $u$ and $w$.
\end{lemma}
\begin{proof}
The situation is symmetric in $u$ and $w$, so it suffices to
consider only $w$. If $\max(u,w) \ge {k_{\xx}}$, then we can use the first
length formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w+1}({\bf x})|$,
and the lemma is immediate. If $\max(u,w) < {k_{\xx}}$, then we can use
the second length formula in Lemma~\ref{lemma:length_formula} to compute the lengths of both paths, so the sign of the change in length
depends on the order of $u$ and $w$ as given. Note that we are using the fact that if
$\max(u,w) = {k_{\xx}}$ then the formulas in Lemma~\ref{lemma:length_formula} agree.
\end{proof}
\begin{lemma}\label{lemma:still_minimal_2_claim_2}
For any ${\bf x} \in \mathcal{L}_v$,
\[
|\eta_{u-1,v,w}({\bf x})| =
|\eta_{u,v,w}({\bf x})| +
\left\{\begin{array}{ll}
-1 & \textnormal{if $\max(u,w) \ge {k_{\xx}}$} \\
\phantom{-}1 & \textnormal{if $\max(u,w) < {k_{\xx}}$ and $u > w$} \\
-1 & \textnormal{if $\max(u,w) < {k_{\xx}}$ and $u \le w$}
\end{array}\right.
\]
An identical equality holds if we exchange the roles of $u$ and $w$.
\end{lemma}
\begin{proof}
The proof is analogous to that of Lemma~\ref{lemma:still_minimal_2_claim_1};
we observe the effect of subtracting $1$ from $u$ in both length
formulas in Lemma~\ref{lemma:length_formula}.
\end{proof}
We now prove Lemma~\ref{lemma:still_minimal_2}.
\begin{proof}[Proof of Lemma~\ref{lemma:still_minimal_2}]
If ${k_{\xx}} > \max(u',w')$, the conclusion follows directly from Lemma~\ref{lemma:still_minimal}.
We address the case when ${k_{\xx}} = \max(u',w')$. Let
${\bf y} \in {\mathcal B}_v^{u',w'}$. First note that because $\max(u,w) < \max(u',w')$, we also have ${\bf y} \in {\mathcal B}_v^{u,w}$.
As ${\bf x} \in {\mathcal B}_v^{u,w}$ is minimal, we know that ${\bf x} <_{u,w} {\bf y}$. We will show that
${\bf x} <_{u',w'} {\bf y}$, which proves the lemma.
If ${k_{\y}} \ge {k_{\xx}}$, then we use
the same length formula to compute all the lengths in the next equation, whether we consider ${\bf x}$ and ${\bf y}$ in ${\mathcal B}_v^{u,w}$ or ${\mathcal B}_v^{u',w'}$. It follows that
\[
|\eta_{u,v,w}({\bf x})| - |\eta_{u',v,w'}({\bf x})| = |\eta_{u,v,w}({\bf y})| - |\eta_{u',v,w'}({\bf y})|.
\]
That is, the effect on the path length by changing $u$ and $w$ to $u'$ and $w'$ is the
same for ${\bf x}$ and ${\bf y}$. Thus ${\bf x} <_{u,w} {\bf y}$ if and only if
${\bf x} <_{u',w'} {\bf y}$.
For the remainder of the proof, we assume that ${k_{\y}} < {k_{\xx}}$; in this case we may need different length formulas from Lemma~\ref{lemma:length_formula} to compute $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$, as well as $|\eta_{u',v,w'}({\bf x})|$ and $|\eta_{u',v,w'}({\bf y})|$.
In this situation, we assume without loss of generality
that $\max(u',w') = w' \ge u'$.
Since ${k_{\xx}} = \max(u',w') > \max(u,w)$, it follows that $w'>w$.
View this increase in value as repeated additions of the number $1$,
and apply
Lemma~\ref{lemma:still_minimal_2_claim_1} to conclude that
\[
|\eta_{u,v,w'}({\bf x})| - |\eta_{u,v,w}({\bf x})| = \Delta,
\]
where $|\Delta| \le w'-w$.
To compute the analogous difference for ${\bf y}$, let $w'' = w+\epsilon$ for some integer $0 \leq \epsilon < w'-w$.
If $\max(u,w'') \geq {k_{\y}}$ then $|\eta_{u,v,w''+1}({\bf y})| = |\eta_{u,v,w''}({\bf y})| + 1$.
If $\max(u,w'') < {k_{\y}}$ then the change in path length depends on the ordinal relationship between $u$ and $w''$, in which case we have
\[
|\eta_{u,v,w''+1}({\bf y})| - |\eta_{u,v,w''}({\bf y})| = |\eta_{u,v,w''+1}({\bf x})| - |\eta_{u,v,w''}({\bf x})|.
\]
Combining these two possibilities yields
\[
|\eta_{u,v,w'}({\bf y})| - |\eta_{u,v,w}({\bf y})| \ge \Delta.
\]
To analyze the analogous change in path length as the $u$ coordinate is varied, we are hampered by the fact that we do not know the ordinal relationship between $u$ and $u'$.
However, we do know that ${k_{\y}} < {k_{\xx}} = \max(u',w') = w'$, and hence we use the first length formula in Lemma~\ref{lemma:length_formula} to compute both
$|\eta_{u,v,w'}({\bf x})|$ and $|\eta_{u,v,w'}({\bf y})|$, as well as $|\eta_{u',v,w'}({\bf x})|$ and $|\eta_{u',v,w'}({\bf y})|$.
Thus
\[
|\eta_{u',v,w'}({\bf x})| - |\eta_{u,v,w'}({\bf x})| =
|\eta_{u',v,w'}({\bf y})| - |\eta_{u,v,w'}({\bf y})| = u'-u.
\]
Combining our analysis, we have
\[
|\eta_{u',v,w'}({\bf x})| - |\eta_{u,v,w}({\bf x})| \le
|\eta_{u',v,w'}({\bf y})| - |\eta_{u,v,w}({\bf y})|,
\]
or equivalently,
\[
|\eta_{u',v,w'}({\bf x})| - |\eta_{u',v,w'}({\bf y})| \le
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y})| \le 0,
\]
where the right inequality follows from the fact that ${\bf x} <_{u,w} {\bf y}$.
If the inequality is strict, it follows that ${\bf x} <_{u',w'} {\bf y}$.
If there is equality, then there must be a lexicographic reduction from ${\bf y}$ to ${\bf x}$.
Since the digits of ${\bf x}$ and ${\bf y}$ do not change whether we consider them in ${\mathcal B}_v^{u,w}$ or ${\mathcal B}_{v}^{u',w'}$, the same lexicographic reduction allows us to conclude that ${\bf x} <_{u',w'} {\bf y}$.
Thus in either case, ${\bf x}$ is a minimal vector in ${\mathcal B}_v^{u',w'}$.
\end{proof}
The next lemma covers the special case when $n|v$, and thus if ${\bf x} \in {\mathcal B}_v^{u,w}$ is a minimal vector, we know that $x_0 = 0$.
\begin{lemma}
\label{lemma:change_in_v}
Let $g = t^{-u}a^vt^w$ where $n|v$, and ${\bf x} \in {\mathcal B}_v^{u,w}$ is a minimal vector with ${k_{\xx}} > \max(u,w)$. Then
${\bf y}$ is a minimal vector in ${\mathcal B}_{v'}^{u,w}$ where $y_i = x_{i+1}$ for $0 \leq i \leq {k_{\xx}}-1$ and $v' = \frac{v}{n}$.
\end{lemma}
\begin{proof}
Suppose that ${\bf y} \in {\mathcal B}_{v'}^{u,w}$ is not minimal. Then there is ${\bf z} \in \mathcal{L}_0$
such that ${\bf y} + {\bf z} <_{u,w} {\bf y}$. Define ${\bf z}' \in \mathcal{L}_0$ by prepending a digit $0$ to ${\bf z}$,
so $z'_0 = 0$ and $z'_i = z_{i-1}$ for $i>0$.
As the digits of ${\bf x}$ and ${\bf y}$ are identical but
simply shifted by one index, we have $\Vert {\bf x} \Vert_1 = \Vert {\bf y} \Vert_1$
and $\Vert {\bf x} + {\bf z}' \Vert_1 = \Vert {\bf y} + {\bf z} \Vert_1$.
It follows as well that ${k_{\y}} = {k_{\xx}}-1 \geq \max(u,w)$.
Additionally, the change in vector
lengths is the same, so ${k_{\xx}} - k_{{\bf x} + {\bf z}'} = {k_{\y}} - k_{{\bf y} + {\bf z}}$, and any relevant lexicographic
change occurs in both pairs of vectors.
If $k_{{\bf y} + {\bf z}} \ge {k_{\y}}\geq \max(u,w)$, then we use the second length formula in Lemma~\ref{lemma:length_formula} to conclude that
\[
|\eta_{u,v,w}({\bf x} + {\bf z}')| - |\eta_{u,v,w}({\bf x})| = |\eta_{u,v,w}({\bf y} + {\bf z})| - |\eta_{u,v,w}({\bf y})|.
\]
As any relevant lexicographic change occurs in both pairs of vectors, and we know that ${\bf y} + {\bf z} <_{u,w} {\bf y}$, it follows that ${\bf x} + {\bf z}' <_{u,w} {\bf x}$, a contradiction.
If $k_{{\bf y} + {\bf z}} < {k_{\y}}$ we do not know the ordinal relationship between $k_{{\bf x} + {\bf z}'}$, respectively
$k_{{\bf y} + {\bf z}}$, and $\max(u,w)$. However, we can apply Lemma~\ref{lemma:length_formula_change} to compute
\begin{align*}
|\eta_{u,v,w}({\bf y} + {\bf z})| - |\eta_{u,v,w}({\bf y})| &=
\Vert {\bf y} + {\bf z} \Vert_1 - \Vert {\bf y} \Vert_1 - 2\max(0, {k_{\y}} - \max(k_{{\bf y}+{\bf z}},u,w)) \\
|\eta_{u,v,w}({\bf x} + {\bf z}')| - |\eta_{u,v,w}({\bf x})| &=
\Vert {\bf x} + {\bf z}' \Vert_1 - \Vert {\bf x} \Vert_1 - 2\max(0, {k_{\xx}} - \max(k_{{\bf x}+{\bf z}'},u,w)).
\end{align*}
Recall that $\Vert {\bf x} \Vert_1 = \Vert {\bf y} \Vert_1$
and $\Vert {\bf x} + {\bf z}' \Vert_1 = \Vert {\bf y} + {\bf z} \Vert_1$,
and $y_i = x_{i+1}$ for $0 \leq i \leq {k_{\xx}}-1$.
It follows that ${k_{\y}} - \max(k_{{\bf y}+{\bf z}},u,w) \le {k_{\xx}} - \max(k_{{\bf x}+{\bf z}'},u,w)$.
Thus
\[
|\eta_{u,v,w}({\bf y} + {\bf z})| - |\eta_{u,v,w}({\bf y})| \ge |\eta_{u,v,w}({\bf x} + {\bf z}')| - |\eta_{u,v,w}({\bf x})|.
\]
As ${\bf y} + {\bf z} <_{u,w} {\bf y}$, it follows that ${\bf x} + {\bf z}' <_{u,w} {\bf x}$, a contradiction.
\end{proof}
Let $g = t^{-u}a^vt^w$ and ${\bf x} \in {\mathcal B}_v^{u,w}$.
When computing $\kappa_1(g)$ it is often more straightforward to determine $l(g^{t^{\pm 1}})$ than $l(g^{a^{\pm 1}})$.
We introduce a restriction which will allow us to easily determine when the vectors corresponding to $l(g^a)$ and $l(g^{{a^{-1}}})$ are minimal in the appropriate ${\mathcal B}_v^{u,w}$.
This restriction is not meant
to be exhaustive; rather it gives us control over a broad range
of vectors ${\bf x}$ for which $\eta_{u,v,w}({\bf x})$ has shape $3$ or $4$.
Let $g=t^{-u}a^vt^w$ with
$
v_+=n^u+v-n^w \text{ and } v_-= -n^u+v+n^w.
$
Recall that $g^a = t^{-u}a^{v_+}t^w$ and $g^{{a^{-1}}} = t^{-u}a^{v_-}t^w$.
Beginning with a vector ${\bf x} \in \mathcal{L}_v$,
\begin{itemize}[itemsep=5pt]
\item to obtain a vector in $\mathcal{L}_{v_+}$, one can add the digit $1$ to $x_u$ and subtract $1$ from $x_w$. Denote the resulting vector by $\rho_{u,-w}({\bf x})$, and
\item to obtain a vector in $\mathcal{L}_{v_-}$, one can subtract the digit $1$ from $x_u$ and add $1$ to $x_w$. Denote the resulting vector by $\rho_{-u,w}({\bf x})$.
\end{itemize}
Note that $\eta_{u,v_+,w}(\rho_{u,-w}({\bf x})) = g^{a}$ and $\eta_{u,v_-,w}(\rho_{-u,w}({\bf x})) = g^{{a^{-1}}}$.
It is possible that the length of $\rho_{u,-w}({\bf x})$ or $\rho_{-u,w}({\bf x})$ will differ from the length of ${\bf x}$.
If $u > {k_{\xx}}$ or $w > {k_{\xx}}$, forming $\rho_{u,-w}({\bf x})$ and $\rho_{-u,w}({\bf x})$ will change digits with indices greater than ${k_{\xx}}$, creating a longer vector.
If, on the other hand, $w<u = {k_{\xx}}$ and $x_u = 1$, the length of $\rho_{-u,w}({\bf x})$ will be less than the length of ${\bf x}$.
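As an illustration (ours, with an arbitrary digit vector that need not be minimal), one can confirm that $\rho_{u,-w}$ and $\rho_{-u,w}$ really change the represented value by $\pm(n^u - n^w)$, and that padding occurs exactly when $u$ or $w$ exceeds ${k_{\xx}}$. Here \texttt{sigma} plays the role of the function $\Sigma$ from Section~\ref{section:min_rep}.

```python
# Check that rho_{u,-w} and rho_{-u,w} produce digit vectors representing
# v_+ = n^u + v - n^w and v_- = -n^u + v + n^w, where Sigma(x) = sum_i x_i n^i.

n = 3

def sigma(x):
    return sum(d * n**i for i, d in enumerate(x))

def rho(x, u, w, sign):
    """sign = +1 gives rho_{u,-w}; sign = -1 gives rho_{-u,w}.
    The vector is zero-padded when u or w exceeds the last index of x."""
    y = list(x) + [0] * (max(u, w) + 1 - len(x))
    y[u] += sign
    y[w] -= sign
    return y

x = [2, -1, 0, 2, 1]      # an arbitrary digit vector; v = sigma(x)
v = sigma(x)
for u, w in [(0, 2), (1, 3), (2, 6)]:
    # the pair (2, 6) pads the vector, illustrating how it can get longer
    assert sigma(rho(x, u, w, +1)) == n**u + v - n**w
    assert sigma(rho(x, u, w, -1)) == -(n**u) + v + n**w
```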
We say that ${\bf x} \in {\mathcal B}_v^{u,w}$ is {\em strongly minimal} if
both $\rho_{u,-w}({\bf x}) \in {\mathcal B}_{v_+}^{u,w}$ and
$\rho_{-u,w}({\bf x}) \in {\mathcal B}_{v_-}^{u,w}$ are minimal.
When $n$ is odd, ${\bf x}$ will be strongly minimal if we restrict $1<|x_{u}|,|x_{w}| < \left\lfloor\frac{n}{2}\right\rfloor$. When $n$ is even, ${\bf x}$ will be strongly minimal if we restrict $1<|x_{u}|,|x_{w}| < \frac{n}{2}-3$. While these are not the only conditions which guarantee that an element is strongly minimal, they are easily met for odd $n$ and for even $n>10$. The notion of a strongly minimal element allows for a clean statement of later theorems.
The following lemma gives a simple example of a family of elements $g \in BS(1,n)$ whose conjugation curvature satisfies $\kappa_1(g) = 0$.
\begin{lemma}\label{lemma:zero_curv}
Let $g= t^{-u}a^vt^u$ for $u \in \mathbb{N}$ and $v \in \mathbb{Z}$, and
let ${\bf x} \in {\mathcal B}_v^{u,u}$ be minimal. If $u \notin \{{k_{\xx}}-1,{k_{\xx}},
{k_{\xx}}+1\}$ then
$\kappa_1(g) =0$.
\end{lemma}
\begin{proof}
As $u \notin \{{k_{\xx}}-1,{k_{\xx}},{k_{\xx}}+1\}$, it follows from
Lemma~\ref{lemma:still_minimal} that ${\bf x}$ is also minimal in both
${\mathcal B}_v^{u-1,u-1}$ and ${\mathcal B}_v^{u+1,u+1}$.
The assumption that $w=u$ immediately implies that $g = g^a = g^{a^{-1}}$.
The restriction on the values of $u$ allows us to use the same length formula from Lemma~\ref{lemma:length_formula} to compute $l(g)$, $l(g^t)$, and $l(g^{{t^{-1}}})$, and thus
\begin{align*}
\kappa_1(g)l(g)
&= l(g) - \frac{1}{4}\left(l(g^t) + l(g^{t^{-1}}) + l(g^a) + l(g^{a^{-1}})\right) \\
& = l(g) - \frac{1}{4}\left(|\eta_{u-1,v,u-1}({\bf x})| + |\eta_{u+1,v,u+1}({\bf x})| + 2l(g)\right) \\
& = l(g) - \frac{1}{4}\left( l(g) + l(g) + 2l(g)\right),
\end{align*}
where the last equality follows from Lemmas~\ref{lemma:still_minimal_2_claim_1} and~\ref{lemma:still_minimal_2_claim_2}.
As $l(g) \geq 1$, this simplifies
to $\kappa_1(g) = 0$.
\end{proof}
When $g = t^{-u}a^vt^w$ is represented by a geodesic of shape 3 or 4, Theorems~\ref{thm:type1curv} and~\ref{thm:type2curv} provide broad conditions on when $\kappa_1(g)$ is negative or zero.
Later theorems in Section~\ref{sec:curvature} allow for analogous conclusions about $\kappa_r(g)$ for a range of values of $r$.
In order to express a variety of conditions in a concise way,
it will be helpful to introduce the notation $\delta_C$, where $C$ is a logical
expression and $\delta_C = 1$ if $C$ is satisfied and $0$ otherwise.
\begin{theorem}\label{thm:type1curv}
Let $g = t^{-u} a^vt^w \neq e$ and let ${\bf x} \in {\mathcal B}_v^{u,w}$ be strongly
minimal with ${k_{\xx}}> \max(u,w)$. The conjugation curvature
$\kappa_1(g)$ then satisfies:
\begin{enumerate}
\item $\kappa_1(g) = 0$ iff
$\delta_{u\ne w}(\delta_{x_u=0} + \delta_{x_w=0}) = 0$ and either
$uw > 0$ or $n|v$.
\item $\kappa_1(g) < 0$ otherwise.
\end{enumerate}
\end{theorem}
Any $g \in BS(1,n)$ to which Theorem~\ref{thm:type1curv}
applies can be represented by a geodesic path of shape $3$ or $4$.
When $g$ is represented by a geodesic path of shape $1$ or $2$
we obtain an analogous theorem, stated below for completeness.
Its proof involves checking many cases nearly identical to those
in the proof of Theorem~\ref{thm:type1curv}. As we do not need
Theorem~\ref{thm:type2curv} in later results, we include its
statement but leave its proof to the interested reader.
\begin{theorem}\label{thm:type2curv}
Let $g = t^{-u} a^vt^w \neq e$ and let ${\bf x} \in {\mathcal B}_v^{u,w}$ be strongly
minimal with ${k_{\xx}} \le \max(u,w)$.
The conjugation curvature $\kappa_1(g)$ then satisfies:
\begin{enumerate}[itemsep=5pt]
\item $\kappa_1(g)=0$ iff $u,w>{k_{\xx}}$ with $u = w$.
\item $\kappa_1(g)<0$ otherwise.\qed
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:type1curv}]
We compute the effect on word length of conjugation by each
of the generators of $BS(1,n)$. Since ${k_{\xx}}>\max(u,w)$, the second length formula from Lemma~\ref{lemma:length_formula} is always used to compute $l(g)$.
{\bf Case 1: conjugation by $t$.} If $uw>0$, then as computed above
$g^t = t^{-(u-1)}a^vt^{w-1}$ and ${k_{\xx}} > \max(u-1,w-1)$,
so it follows from Lemma~\ref{lemma:still_minimal} that ${\bf x}$ is minimal in ${\mathcal B}_v^{u-1,w-1}$ and we use the second length formula from Lemma~\ref{lemma:length_formula} to compute $l(g^t)$.
It follows that
$l(g^t) = l(g)$.
If $uw=0$, then $g^t = t^{-u}a^{vn}t^{w}$.
Let ${\bf y} = (y_0,y_1, \cdots ,y_{{k_{\y}}})$ be obtained from ${\bf x}$ by defining $y_0=0$ and $y_i = x_{i-1}$ for $1 \leq i \leq {k_{\y}}$.
Then ${\bf y}$ is the vector in ${\mathcal B}_{vn}^{u,w}$ such that $\eta_{u,vn,w}({\bf y}) = g^t$.
Observe that ${k_{\y}} = {k_{\xx}} + 1$, so $\max(u,w) < {k_{\xx}} < {k_{\y}}$.
If ${\bf y}$ were not minimal, it would follow from Lemma~\ref{lemma:change_in_v} that ${\bf x}$ was not minimal, a contradiction.
As $\max(u,w) < {k_{\xx}} < {k_{\y}}$ we use the second length formula in Lemma~\ref{lemma:length_formula} to compute $l(g^t)$, and thus $l(g^t) = |\eta_{u,nv,w}({\bf y})| = |\eta_{u,v,w}({\bf x})|+2 = l(g)+2.$
In summary:
\[
l(g^t) = \left\{\begin{array}{ll} l(g) & \textnormal{if $uw > 0$} \\
l(g)+2 & \textnormal{if $uw=0$.}
\end{array}\right.
\]
{\bf Case 2: conjugation by ${t^{-1}}$.} If $n \nmid v$, then
$g^{t^{-1}} = t^{-(u+1)}a^vt^{w+1}$,
and $ {k_{\xx}} \ge \max(u+1,w+1)$.
Lemma~\ref{lemma:still_minimal_2} guarantees that
${\bf x} \in {\mathcal B}_v^{u+1,w+1}$ is minimal.
As ${k_{\xx}} \ge \max(u+1,w+1)$, we use the
second length formula from Lemma~\ref{lemma:length_formula} to compute $l(g^{{t^{-1}}})$, noting that the two formulas agree when
${k_{\xx}} = \max(u,w)$. It follows that
$l(g^{t^{-1}}) = l(g)$.
If $n|v$, then $uw=0$ and $g^{{t^{-1}}}=t^{-u}a^{\frac{v}{n}}t^w$.
Note that since $n|v$, the least significant digit of ${\bf x} \in \mathcal{L}_v$ is $0$.
Let ${\bf y} = (y_0,y_1, \cdots ,y_{{k_{\y}}})$ be obtained from ${\bf x}$ by defining $y_i = x_{i+1}$ for $0 \leq i \leq {k_{\y}} = {k_{\xx}}-1$, that is, each entry of ${\bf x}$ is
shifted left by one position to create ${\bf y}$.
Then ${\bf y}$ is the vector in ${\mathcal B}_{\frac{v}{n}}^{u,w}$ corresponding to $g^{{t^{-1}}}$.
It follows from Lemma~\ref{lemma:change_in_v} that ${\bf y} \in {\mathcal B}_{\frac{v}{n}}^{u,w}$ is minimal.
As ${k_{\y}} \geq \max(u,w)$, we use the second length formula in Lemma~\ref{lemma:length_formula} to compute $l(g^{{t^{-1}}})$ and see that
$l(g^{{t^{-1}}}) = |\eta_{u,\frac{v}{n},w}({\bf y})| = |\eta_{u,v,w}({\bf x})|-2 = l(g)-2$.
In summary,
\[
l(g^{{t^{-1}}}) = \left\{\begin{array}{ll}
l(g) & \textnormal{if $n\nmid v$} \\
l(g)-2 & \textnormal{if $n|v$.}
\end{array}\right.
\]
{\bf Case 3: conjugation by $a^{\pm 1}$.} Let $v_+ = n^u+v-n^w$ and ${\bf x}_+ = \rho_{u,-w}({\bf x})$
and $v_- = -n^u+v+n^w$ and ${\bf x}_- = \rho_{-u,w}({\bf x})$.
Since ${\bf x}$ is strongly minimal, we have that ${\bf x}_+ \in {\mathcal B}_{v_+}^{u,w}$
and ${\bf x}_- \in {\mathcal B}_{v_-}^{u,w}$ are minimal,
so computing $l(g^a)$ and $l(g^{a^{-1}})$ reduces to applying the length
formula to compute $|\eta_{u,v_+,w}({\bf x}_+)|$ and $|\eta_{u,v_-,w}({\bf x}_-)|$.
The assumptions that ${\bf x}$ is strongly minimal and ${k_{\xx}} > \max(u,w)$ ensure that
changing the digits at indices $u$ and $w$ does not alter the length of the vector ${\bf x}$;
hence $k_{{\bf x}_+} = k_{{\bf x}_-} = {k_{\xx}}$, and the change in length between $\eta_{u,v,w}({\bf x})$ and $\eta_{u,v_+,w}({\bf x}_+)$, respectively $\eta_{u,v,w}({\bf x})$ and $\eta_{u,v_-,w}({\bf x}_-)$,
reduces to the change in absolute value between the coordinates with indices $u$ and $w$.
If $u=w$, then the changes to $x_u=x_w$ sum to zero, and we have $l(g^a) = l(g^{a^{-1}}) = l(g)$.
Otherwise,
\begin{align*}
l(g^a) &= |\eta_{u,v_+,w}({\bf x}_+)|
= l(g) + |x_u + 1| - |x_u| + |x_w - 1| - |x_w| \\
l(g^{a^{-1}}) &= |\eta_{u,v_-,w}({\bf x}_-)|
= l(g) + |x_u - 1| - |x_u| + |x_w + 1| - |x_w|
\end{align*}
Observe that if $x_u \ne 0$, then $|x_u + 1| + |x_u - 1| - 2|x_u| = 0$,
and if $x_u = 0$, then $|x_u + 1| + |x_u - 1| - 2|x_u| = 2$; analogous statements hold for $x_w$.
Thus we have shown that
\[
l(g^a) + l(g^{a^{-1}}) = 2l(g) + 2\delta_{u \neq w}(\delta_{x_u=0} + \delta_{x_w=0}).
\]
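The absolute-value dichotomy just used is elementary and easy to confirm mechanically; a throwaway check of ours:

```python
# |x+1| + |x-1| - 2|x| equals 2 when x = 0 and 0 otherwise; this is what
# produces the indicator terms delta_{x_u = 0} and delta_{x_w = 0} above.
for x in range(-50, 51):
    assert abs(x + 1) + abs(x - 1) - 2 * abs(x) == (2 if x == 0 else 0)
```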
Combining the above computations with those for $l(g^{t^{\pm 1}})$, note that
if $n|v$ then $uw=0$, so if $uw>0$ then $n\nmid v$. Hence
\[
l(g^t) + l(g^{t^{-1}}) + l(g^a) + l(g^{a^{-1}})
=
\left\{\begin{array}{ll}
4l(g) + 2\delta_{u\ne w}(\delta_{x_u=0} + \delta_{x_w=0})
& \textnormal{if $uw>0$ or $n|v$} \\
4l(g) + 2 + 2\delta_{u\ne w}(\delta_{x_u=0} + \delta_{x_w=0})
& \textnormal{if $uw=0$ and $n \nmid v$}
\end{array}\right.
\]
The theorem follows immediately from this formula.
\end{proof}
The following two facts arise in the proof of
Theorem~\ref{thm:type1curv}, and we state them below in Lemma~\ref{lemma:type1_facts} for easy
reference, noting that the second is true in greater generality
than the context of Theorem~\ref{thm:type1curv}.
\begin{lemma}\label{lemma:type1_facts}
Let $g = t^{-u}a^vt^w \in BS(1,n)$ where ${\bf x} \in {\mathcal B}_v^{u,w}$ is minimal and $0 < \max(u,w) \leq {k_{\xx}}-1$.
\begin{enumerate}[itemsep=5pt]
\item If $uw>0$ and $n \nmid v$ then $l(g) = l(g^t) = l(g^{t^{-1}})$.
\item If ${\bf x}$ is strongly minimal and $x_ux_w>0$ then
$l(g) = l(g^a) = l(g^{a^{-1}})$.\qed
\end{enumerate}
\end{lemma}
\subsection{Sets of positive density in $BS(1,n)$}
\label{sec:sets_of_pos_density}
Theorem~\ref{corollary:shapes_injection}
states that the growth rate of the sequence $\OOn$
is the same as the growth rate of $BS(1,n)$. In order to show
that a subset of elements of $BS(1,n)$ has positive density, we subdivide this subset according to word length, which is always computed with respect to the generating set $\{a,t\}$ for $BS(1,n)$.
Let $f(N)$ be the function which counts the number of elements in this subset of a given word length $N$.
We will show that the growth rate of $\{f(N)\}_{N \in \mathbb{N}}$ is identical to that of $\OOn$.
Let ${\mathcal Q}_n \subset \mathcal O_n$ be the subset of geodesic words which do not begin with ${t^{-1}}$ and which end with exactly one $t$, excluding the word $t$ itself. Let ${\mathcal Q}_n(N)$ be the set of such words of word length $N$; we denote the size of ${\mathcal Q}_n(N)$ by $q_n(N)$.
\begin{lemma}
\label{lemma:middle_vector}
The growth rate of $\{q_n(N)\}_{N \in \mathbb{N}}$ is the same as the growth rate of $\mathcal O_n$, that is, $q_n(N) = \Theta(\lambda_n^N)$.
\end{lemma}
\begin{proof}
This follows from Lemma~\ref{lemma:fsa_growth} and the
structure of the finite state automaton accepting $\mathcal O_n$. This automaton
has two strongly connected components: one containing only the state
$s_{{t^{-1}}}$ and one containing the digit expansions
of the states $s_i$ of $\mathcal D_n$. The latter component determines
the growth rate of $\mathcal O_n$, so as long as we do not alter this
strongly connected component, we leave the growth rate unchanged.
Modify the finite state automaton by:
\begin{itemize}
\item Removing $s_{t^{-1}}$
\item Removing from the set of accept states the state which accepts strings of the form $t^n$; this state is labeled as $s_{0,0}$ in Figure~\ref{fig:fsa_n=2}.
\end{itemize}
The resulting finite state automaton accepts exactly those
paths in $\mathcal O_n$ which do not begin with any power of $t^{-1}$
and which end with exactly one $t$. That is, it accepts
exactly the geodesic words in $\mathcal{Q}_n$. Although we have changed the accept
states, the set of edges in the main strongly connected component
is unchanged. Thus
$\{q_n(N)\}_{N \in \mathbb{N}}$ and $\OOn$ have the same growth rate. That is,
$q_n(N) = \Theta(\lambda_n^N)$.
\end{proof}
\subsection{Detecting minimal vectors}
To show that $BS(1,n)$ has a positive density of elements of positive, zero and negative conjugation curvature, we construct in each case a family of words by concatenating a prefix, ``middle'' and suffix.
The middle segment of each word is always chosen to be an element of ${\mathcal Q}_n$, so it is accepted by the finite state automaton adapted from $\mathcal O_n$ described in Section~\ref{sec:sets_of_pos_density} and corresponds to a minimal vector.
In the next sections, we will vary the prefix and suffix in order to construct
examples of elements with the desired conjugation curvature.
The following lemma will be useful to certify that
the growth rate of these special geodesics is comparable to
that of the whole group.
Recall that for any set ${\mathcal A}$ of elements of $BS(1,n)$, we will always use the notation $\mathcal{A}(N)$ for $N \in \mathbb{N}$ to denote the elements of ${\mathcal A}$ whose word length with respect to the generating set $\{a,t\}$ of $BS(1,n)$ is $N$.
\begin{lemma}\label{lemma:prefix_suffix_growth_rate}
Let $\mathcal{A} \subseteq BS(1,n)$ be a set of geodesic words of the form
$p\xi s$, where $p,s$ are constant and $\xi$ may be any word in $\mathcal{Q}_n$.
Then the growth rate of $\{|\mathcal{A}(N)|\}_{N \in \mathbb{N}}$ is the same as
the growth rate of $BS(1,n)$ and thus $\mathcal{A}$ has positive density in
$BS(1,n)$.
\end{lemma}
\begin{proof}
It follows from Theorem~\ref{corollary:shapes_injection} and Lemma~\ref{lemma:middle_vector} that
the growth rate of $\{q_n(N)\}_{N \in \mathbb{N}} = \{|\mathcal{Q}_n(N)|\}_{N \in \mathbb{N}}$
is the same as that of $BS(1,n)$. By construction, we have
$|\mathcal{A}(N+|p|+|s|)| = |\mathcal{Q}_n(N)|$, so by
Lemma~\ref{lemma:exp_growth} the growth rates of
$\{|\mathcal{A}(N)|\}_{N \in \mathbb{N}}$ and $|\mathcal{Q}_n(N)|$ are the same,
and the lemma follows.
\end{proof}
The next series of lemmas shows that words constructed in this way are geodesic; that is, the corresponding vector of consecutive exponents of the generator $a$ in each word is minimal.
Suppose $g = t^{-u}a^vt^w \in BS(1,n)$ is constructed as in Lemma~\ref{lemma:prefix_suffix_growth_rate}, for some choice of nonempty prefix, suffix and middle word $\xi \in {\mathcal Q}_n$. Let ${\bf x} \in {\mathcal B}_v^{u,w}$ denote the associated vector of consecutive exponents of the generator $a$ in $g$, and let ${\bf x}'$ be the vector of consecutive exponents of the generator $a$ in $\xi$.
Note that ${\bf x}'$ is minimal
because $\xi \in {\mathcal Q}_n$.
The following series of lemmas presents simple criteria which allow us to conclude that ${\bf x}$ is minimal by relating the minimality of ${\bf x} \in {\mathcal B}_v^{u,w}$ to the minimality of ${\bf x}' \in {\mathcal B}_{v'}^{u',w'}$, for a choice of $u',v',w'$ specified below.
We make the following convention with regard to indexing ${\bf x}$ and ${\bf x}'$. Let ${\bf x} = (x_0, \cdots ,x_{k_{\xx}})$ and ${\bf x}' = (x_m, \cdots ,x_\ell)$ where $0 < m \leq \ell < {k_{\xx}}$.
When we consider ${\bf x}'$ as an independent vector, we will continue to write it as ${\bf x}' = (x_m, \cdots ,x_\ell)$ rather than shifting the indices so that they begin at $0$.
We make this choice to retain the context of ${\bf x}' \subseteq {\bf x}$.
When we want to add a linear combination of $\mathcal{L}_0$ basis vectors to ${\bf x}'$, we must index them accordingly and write ${\bf x}' + \sum_{i=m}^{\ell-1} \alpha_i {\bf w}^{(i)}$ to obtain the vector $(x'_m-\alpha_mn, \cdots ,x'_\ell+\alpha_{\ell-1})$.
When we compute $\Sigma({\bf x}')$, we evaluate the sum $\Sigma({\bf x}') = \sum_{i=m}^\ell x'_in^{i-m}$.
Lemma~\ref{lemma:run_in_the_middle} shows that if a vector ${\bf x}$ constructed in this way can be reduced at a run ${\bf r} \subseteq {\bf x}' \subseteq {\bf x}$ which does not contain the final digit of ${\bf x}'$, then ${\bf x}'$ can also be reduced at ${\bf r}$. By ``$\subseteq$'' here we mean a subsequence of consecutive digits. In what follows, we will use $v'$ to denote $\Sigma({\bf x}')$, where the function $\Sigma$ is defined in Section~\ref{section:min_rep}.
\begin{lemma}
\label{lemma:run_in_the_middle}
Let ${\bf x} \in {\mathcal B}_v^{u,w}$ be constructed as above with ${k_{\xx}} > \max(u,w)$ and ${\bf x}' = (x_m, \cdots ,x_s) \subseteq {\bf x}$, where ${\bf x}' \in {\mathcal B}_{v'}^{0,s-m+2}$.
If ${\bf r} = (x_j, \cdots ,x_{\ell}) \subset {\bf x}'$ with $m \leq j \leq \ell < s < {k_{\xx}}$ is a run at which ${\bf x}$ can be reduced, then ${\bf x}'$ can be reduced at ${\bf r}$.
\end{lemma}
\begin{proof}
Let ${\bf x},{\bf x}',u,v,w$ and ${\bf r}$ be as in the statement of the lemma, with $u'=0$ and $w' = s-m+2$.
As ${\bf x}$ can be reduced at ${\bf r}$, when $n \geq 3$ we have
$${\bf x} + \delta \sum_{i=j}^{\ell} {\bf w}^{(i)} <_{u,w} {\bf x}$$ for a choice of $\delta \in \{\pm 1\}$.
When $n=2$ the same inequality holds, where we can choose the coefficients of the ${\bf w}^{(i)}$ to be identically $\delta$ because the run ${\bf r}$ does not contain the final digit of ${\bf x}$, as discussed in Section~\ref{section:geodesics_n_even_2}.
Note that $\eta_{u',v',w'}({\bf x}')$ is a geodesic of strict shape $1$ and $\eta_{u,v,w}({\bf x})$ is a geodesic of shape $3$, so we use different formulas to compute their length.
As the suffix is nonempty, and ${\bf r} \subset {\bf x}'$ does not contain the final digit of ${\bf x}'$, the vectors ${\bf x}$ and ${\bf x}+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)}$ are the same length.
Thus we use the second length formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf x}+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)})|$.
If ${\bf x}'$ and ${\bf x}'+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)}$ have the same length, then both $|\eta_{u',v',w'}({\bf x}')|$ and $|\eta_{u',v',w'}({\bf x}'+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)})|$
are computed using the first length formula in Lemma~\ref{lemma:length_formula}.
It might be the case that the length of ${\bf x}'+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)}$ is one less than the length of ${\bf x}'$.
As the condition for the first length formula is that $\K{x'} \leq \max(u',w')$, we see that if the length of the vector decreases but $u'$ and $w'$ are unchanged, we use the same length formula from Lemma~\ref{lemma:length_formula} to compute $|\eta_{u',v',w'}({\bf x}'+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)})|$.
This ensures that the change in word length in either case reflects only the change in $\ell^1$ norm between the vectors.
Thus
\begin{align*}
|\eta_{u,v,w}({\bf x}+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)})| - |\eta_{u,v,w}({\bf x})|
&=
|\eta_{u',v',w'}({\bf x}'+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)})| - |\eta_{u',v',w'}({\bf x}')| \\
&=
\Vert {\bf x}'+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)} \Vert_1 - \Vert {\bf x}' \Vert_1 \leq 0.
\end{align*}
If the final inequality is strict, it is clear that ${\bf x}'$ can be reduced at ${\bf r}$. If there is equality, then the lexicographic reduction which occurs between ${\bf x}$ and ${\bf x}+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)}$ will also occur between ${\bf x}'$ and ${\bf x}'+\delta \sum_{i=j}^{\ell} {\bf w}^{(i)}$ and hence ${\bf x}'$ can be reduced at ${\bf r}$, that is,
\[
{\bf x}'+ \delta \sum_{i=j}^{\ell} {\bf w}^{(i)} <_{u',w'} {\bf x}'.
\]
\end{proof}
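The computations in this proof rest on one invariant: adding $\delta \sum_{i=j}^{\ell} {\bf w}^{(i)}$ changes the digits of a vector without changing its value $\Sigma({\bf x}) = \sum_i x_i n^i$, so comparing word lengths reduces to comparing $\ell^1$ norms (and, in ties, the lexicographic part of $<_{u,w}$). A small Python sketch of this invariant, under the assumption that ${\bf w}^{(i)}$ is the relation vector with $-n$ in coordinate $i$ and $+1$ in coordinate $i+1$, which matches the digit computations in this section:

```python
def value(x, n):
    # base-n value Sigma(x) = sum of x_i * n^i of a digit vector x
    return sum(d * n**i for i, d in enumerate(x))

def add_run(x, n, j, ell, delta):
    # add delta * sum_{i=j}^{ell} w^{(i)} to x, assuming w^{(i)} is the
    # relation vector with -n in coordinate i and +1 in coordinate i+1
    y = list(x)
    for i in range(j, ell + 1):
        y[i] -= delta * n
        y[i + 1] += delta
    return y

n = 4
x = [2, 2, 1]                  # a hypothetical digit vector with run (2, 2)
y = add_run(x, n, 0, 1, +1)    # y = [-2, -1, 2]
assert value(y, n) == value(x, n)            # the value v is preserved
assert sum(map(abs, y)) == sum(map(abs, x))  # here the l^1 norms tie, so the
                                             # lexicographic comparison decides
```

In this example the $\ell^1$ norms are equal, which is exactly the tie case handled by the lexicographic reduction in the proof.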
Lemma~\ref{lemma:next_digit_0} extends Lemma~\ref{lemma:run_in_the_middle} when $n$ is even to conclude that if ${\bf x}$ can be reduced at a run ${\bf r} \subseteq {\bf x}'\subset {\bf x}$ which contains the final digit $x_{\ell}$ of ${\bf x}'$ and $x_{\ell+1} = 0$, then ${\bf x}'$ can also be reduced at ${\bf r}$.
\begin{lemma}
\label{lemma:next_digit_0}
Let $n$ be even and ${\bf x} \in {\mathcal B}_v^{u,w}$ be constructed as in Lemma~\ref{lemma:run_in_the_middle} with ${k_{\xx}} > \max(u,w)$ and ${\bf x}' = (x_m, \cdots ,x_{\ell}) \subseteq {\bf x}$ with ${\bf x}' \in {\mathcal B}_{v'}^{0,\ell-m+2}$ and $v' = \Sigma({\bf x}')$.
If ${\bf r} = (x_j, \cdots ,x_{\ell}) \subseteq {\bf x}'$ is a run at which ${\bf x}$ can be reduced and $x_{\ell+1}=0$, then ${\bf x}'$ can be reduced at ${\bf r}$.
\end{lemma}
\begin{proof}
As ${\bf x}$ can be reduced at ${\bf r}$, when $n \geq 4$ it follows immediately that
\[
{\bf x}+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)} <_{u,w} {\bf x}.
\]
When $n=2$ the same inequality holds, where we can choose the coefficients of the ${\bf w}^{(i)}$ to be identically $\epsilon_{{\bf r}}$ because the run ${\bf r}$ does not contain the final digit of ${\bf x}$, as discussed in Section~\ref{section:geodesics_n_even_2}.
By construction, ${\bf s}$ must have at least two digits, as $x_{\ell+1} = 0$ and ${\bf x}$ cannot end with the digit $0$.
This ensures that ${\bf r}$ does not contain the final two digits of ${\bf x}$ and hence the vectors ${\bf x}$ and ${\bf x}+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)}$ have the same length.
As ${k_{\xx}} > \max(u,w)$, the second length formula in Lemma~\ref{lemma:length_formula} is used to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf x} + \epsilon_{{\bf r}}\sum_{i=j}^{\ell} {\bf w}^{(i)})|$.
When considering ${\bf x}'$, take $u'=0$ and $w'=\ell-m+2$.
We now show that the lengths of $\eta_{u',v',w'}({\bf x}')$ and $\eta_{u',v',w'}({\bf x}'+\epsilon_{{\bf r}}\sum_{i=j}^{\ell} {\bf w}^{(i)})$ are computed using the same word length formula by noting the relationship between $w'$ and the length of the vector in each case.
\begin{itemize}[itemsep=5pt]
\item As ${\bf x}' \in {\mathcal B}_{v'}^{0,\ell-m+2}$, we have $w'=\ell-m+2 > \ell-m+1 = \K{{\bf x}'}$.
\item If ${\bf y} = {\bf x}'+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)}$, then $\K{{\bf y}} = \K{{\bf x}'}+1 = \ell-m+2$, so $w'=\ell-m+2 = \K{{\bf y}}$.
\end{itemize}
As the two length formulas in Lemma~\ref{lemma:length_formula} agree when $\max(u',w') = {k_{\y}}$, we see that the lengths of both $\eta_{u',v',w'}({\bf x}')$ and $\eta_{u',v',w'}({\bf x}'+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)})$ are computed using the first length formula in Lemma~\ref{lemma:length_formula}, which does not rely on the length of ${\bf x}'$ or ${\bf y}$.
Thus in both cases the difference in word length between the geodesics arising from each pair of vectors is exactly the difference in $\ell^1$ norm between the vectors.
We then compute
\begin{align*}
0 \leq |\eta_{u,v,w}({\bf x})|-|\eta_{u,v,w}({\bf x}+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)})|
&=
\ensuremath{\textnormal{weight}}({\bf r}) + |x_{\ell+1}| - |x_{\ell+1} + \epsilon_{{\bf r}}| \\
&= \ensuremath{\textnormal{weight}}({\bf r}) + |\epsilon_{{\bf r}}| \\
&= |\eta_{u',v',w'}({\bf x}')| - |\eta_{u',v',w'}({\bf x}'+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)})|
\end{align*}
where $\ensuremath{\textnormal{weight}}({\bf r})$ is defined in Section~\ref{section:geodesics_n_even}. The equality in the first line is proven in Lemma~\ref{lemma:change_formula} for $n \geq 4$ and is easily checked for $n=2$. The transition from the first line to the middle line relies on the fact that $x_{\ell+1} =0$. The transition from the last line to the middle line relies on the fact that ${\bf x}'+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)}$ has a leading coefficient of $1$ not present in ${\bf x}'$.
If the initial inequality is strict, it is clear that ${\bf x}'$ can be reduced at ${\bf r}$. If there is equality, then the lexicographic reduction which occurs between ${\bf x}$ and ${\bf x}+\epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)}$ will also occur between ${\bf x}'$ and ${\bf x}'+\epsilon_{{\bf r}}\sum_{i=j}^{\ell} {\bf w}^{(i)}$, as ${\bf x}' \subseteq {\bf x}$ and has highest index equal to $\ell$. Thus ${\bf x}'$ can be reduced at ${\bf r}$, that is,
\[
{\bf x}'+ \epsilon_{{\bf r}} \sum_{i=j}^{\ell} {\bf w}^{(i)} <_{u',w'} {\bf x}'.
\]
\end{proof}
Combining the previous two lemmas allows us to show that if $n$ is even and $g = t^{-u}a^vt^w$ is constructed from a prefix, suffix and $\xi \in {\mathcal Q}_n$ with a bound on the absolute value of the exponents in $p$ and $s$, then the resulting vector ${\bf x}$ of consecutive exponents of the generator $a$ is minimal in ${\mathcal B}_v^{u,w}$. We prove Lemma~\ref{lemma:prefix-suffix} only for even $n \geq 4$. If $n$ is odd and ${\bf x} \in {\mathcal B}_v^{u,w}$ is as above, the minimality of ${\bf x}$ is determined solely by inspection of the final two digits of ${\bf x}$, as described in Lemma~\ref{lemma:odd_box}.
\begin{lemma}
\label{lemma:prefix-suffix}
Let $n \geq 4$ be even and suppose $g = p \xi s$ where $\xi \in {\mathcal Q}_n$ and $p$ and $s$ are as follows:
\begin{itemize}[itemsep=5pt]
\item $p = t^{-u} a^{p_0}ta^{p_1} \cdots ta^{p_m}ta^0t$ for $m \geq 0$, where ${\bf p} = (p_0,p_1, \cdots ,p_m,0)$ is the vector of consecutive exponents of the generator $a$ in $p$ and $u \geq 0$,
\item $s = a^0ta^{s_0}ta^{s_1}t \cdots ta^{s_{m'}}t^{-h}$ for $m' \geq 0$, where ${\bf s} = (0,s_0,s_1, \cdots ,s_{m'})$ is the vector of consecutive exponents of the generator $a$ in $s$ and $0 \leq h \leq m'$, and
\item $(s_{m'-1},s_{m'})$ is not equal to either $(\delta(\frac{n}{2}-1),-\delta)$ or $(\delta \frac{n}{2},-\delta)$, for $\delta\in \{\pm 1\}$.
\end{itemize}
Further assume that for all $i$, $|p_i| \leq \frac{n}{2}-1$ and $|s_i| < \frac{n}{2}-1$.
Let ${\bf x}'$ denote the vector of consecutive exponents of the generator $a$ in $\xi$, and ${\bf x} = {\bf p} {\bf x}' {\bf s}$ the analogous vector for $g$. Then ${\bf x} \in {\mathcal B}_v^{u,w}$, where $v = \Sigma({\bf x}) = \sum_{i=0}^{{k_{\xx}}} x_i n^i$, $w = {k_{\xx}}-h$ and ${\bf x}$ is minimal.
\end{lemma}
Using the notation in the statement of Lemma~\ref{lemma:prefix-suffix}, if ${\bf r} \subset {\bf x}$ denotes a run, we write ${\bf r} \cap {\bf p}$ to denote any common digits of ${\bf x}$ contained in both ${\bf r}$ and ${\bf p}$, with the analogous definition for ${\bf r} \cap {\bf s}$.
\begin{proof}
Note that by construction, ${\bf x} \in {\mathcal B}_v^{u,w}$, where $v = \Sigma({\bf x}) = \sum_{i=0}^{{k_{\xx}}} x_i n^i$ and we choose $w = {k_{\xx}} -h$ so that $g$ is a geodesic of shape $3$. We must show that ${\bf x}$ is minimal.
The definition of ${\mathcal Q}_n$ ensures that ${\bf x}' \in {\mathcal B}_{v'}^{0,\K{{\bf x}'}+1}$, where $v' = \Sigma({\bf x}')$, and ${\bf x}'$ is minimal.
Suppose ${\bf x}$ is not minimal. Inspecting the final two digits of ${\bf x}$, that is, the final two digits of ${\bf s}$, we see that Lemma~\ref{lemma:even_reduction_end} does not apply, and it follows from Proposition~\ref{lemma:even_minimal_characterization} that ${\bf x}$ contains a run ${\bf r}$ at which it can be reduced.
The final exponent of $t$ in the definition of ${\bf s}$ implies that $\max(u,w) < {k_{\xx}}$.
The digit restrictions on ${\bf p}$ force ${\bf r} \cap {\bf p} = \emptyset$, and hence when $n \geq 4$, no run has its first digit in ${\bf p}$. The fact that the first digit of ${\bf s}$ is $0$ implies that ${\bf r} \cap {\bf s} = \emptyset$, and hence no run with first digit in ${\bf x}'$ can be continued into ${\bf s}$. Thus we conclude that ${\bf r} \subset {\bf x}'$.
If ${\bf r}$ does not contain the last digit of ${\bf x}'$, it follows from Lemma~\ref{lemma:run_in_the_middle} that ${\bf x}'$ can be reduced at ${\bf r}$, that is, ${\bf x}'$ is not minimal, a contradiction.
Assuming that ${\bf r}$ contains the last digit of ${\bf x}'$, as the initial digit of ${\bf s}$ is $0$, it follows from Lemma~\ref{lemma:next_digit_0} that ${\bf x}'$ can be reduced at ${\bf r}$, that is, ${\bf x}'$ is not minimal, a contradiction.
Thus we conclude that ${\bf x}$ is minimal.
\end{proof}
The following remark codifies the changes we consider when $g \in BS(1,n)$ is conjugated by a single generator as well as a string of generators. It will be referenced repeatedly throughout the following sections.
\begin{remark}
\label{remark:conjugation}
Let $g = t^{-u}a^vt^w$ with ${\bf x} \in {\mathcal B}_v^{u,w}$ a minimal vector.
For any $q \in BS(1,n)$ define $u(q),v(q)$ and $w(q)$ so that $g^q = t^{-u(q)}a^{v(q)}t^{w(q)}$ in normal form.
We make the following observations about $g^c$ for $c \in \{t^{\pm 1},a^{\pm 1}\}$.
\begin{itemize}[itemsep=5pt]
\item When $c = t^{\pm 1}$, we have $|u-w| = |u(c)-w(c)|$ and $v(c) = v$. Moreover, ${\bf x}$ can be viewed as an element of ${\mathcal B}^{u\mp1,w\mp1}_{v}$, and it is again minimal.
\item When $c = a^{\pm 1}$ we have $u(c) = u$ and $w(c) = w$, so it is again true that $|u-w| = |u(c)-w(c)|$. As discussed earlier, $v(c) = v_+ = n^u+v-n^w$ when $c = a$ and $v(c) = v_-= -n^u+v+n^w$ when $c = {a^{-1}}$.
Observe that $\rho_{u,-w}({\bf x})$ is a vector in ${\mathcal B}^{u(c),w(c)}_{v_+}$ in the former case, and $\rho_{-u,w}({\bf x})$ is a vector in ${\mathcal B}^{u(c),w(c)}_{v_-}$ in the latter; in either case we denote this vector by ${\bf x}(c)$.
\end{itemize}
When conjugating $g$ by a word $q = q_1q_2 \cdots q_m$, following the above steps for each successive conjugation by $q_i$ creates a vector ${\bf x}(q) \in {\mathcal B}^{u(q),w(q)}_{v(q)}$.
In the remainder of this paper, given $g,q$ and ${\bf x}$, the notation ${\bf x}(q)$ denotes the vector created in this way, which may or may not be minimal.
Note that $\eta_{u(q),v(q),w(q)}({\bf x}(q)) = g^q$.
Moreover, the above two conditions imply that $|u-w| = |u(q)-w(q)|$. This fact will be useful when ${\bf x}(q)$ is minimal and $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u(q),v(q),w(q)}({\bf x}(q))|$ are both computed using the second length formula in Lemma~\ref{lemma:length_formula}.
\end{remark}
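The bookkeeping in this remark can be sketched as a short routine (the helper name and the sign convention for conjugation by $t^{\pm 1}$ are our assumptions; the routine records only $(u,w,{\bf x})$, and the resulting vector need not be minimal):

```python
def value(x, n):
    # base-n value of a digit vector: sum x_i * n^i
    return sum(d * n**i for i, d in enumerate(x))

def conjugate_step(u, w, x, c):
    """One conjugation step g -> g^c for c in {'t', 'T', 'a', 'A'}
    (capitals denote inverses).  Pure bookkeeping on (u, w, x):
    conjugation by t shifts u and w together, conjugation by a
    alters the digits x_u and x_w in opposite directions."""
    x = list(x)
    if c in ('t', 'T'):
        d = -1 if c == 't' else 1   # sign convention is an assumption
        return u + d, w + d, x      # digits unchanged, |u - w| preserved
    d = 1 if c == 'a' else -1       # c is a^{+1} or a^{-1}
    x[u] += d                       # x_u gains d ...
    x[w] -= d                       # ... and x_w loses d
    return u, w, x

n, u, w, x = 4, 1, 3, [0, 1, 0, 1]
u2, w2, x2 = conjugate_step(u, w, x, 'a')
assert value(x2, n) == value(x, n) + n**u - n**w   # matches v_+ above
u3, w3, _ = conjugate_step(u, w, x, 't')
assert abs(u3 - w3) == abs(u - w)                  # |u - w| is invariant
```

The two assertions check exactly the two facts used later: conjugation by $a^{\pm 1}$ changes the value by $\pm(n^u - n^w)$, and conjugation by $t^{\pm 1}$ preserves $|u-w|$.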
\subsection{A positive density set of elements with positive
conjugation curvature}
When considering positive conjugation curvature, we require
$r=1$ and provide slightly different examples depending on
the parity of $n$.
\begin{theorem}
\label{thm:positive_kappa}
The group $BS(1,n)$ has a positive density of elements $g$ with $\kappa_1(g) >0$, for $n \geq 3$.
\end{theorem}
\begin{proof}
For $n \geq 3$, let ${\mathcal P}_n$ denote the set of words of the form $ g=p \xi s$ where $\xi \in {\mathcal Q}_n$ and
\begin{itemize}[itemsep=5pt]
\item $p = t^{-1}ata^{\left\lfloor\frac{n}{2}\right\rfloor}ta^{(-1)^n}ta^0t$ with vector of consecutive exponents of the generator $a$ given by ${\bf p} = (1, \left\lfloor\frac{n}{2}\right\rfloor,(-1)^n,0)$, and
\item $s = a^0tata^0tat^{-2}$ with vector of consecutive exponents of the generator $a$ given by ${\bf s}=(0,1,0,1)$.
\end{itemize}
Note that by construction, $u =1$ and $w = {k_{\xx}}-2$.
Let ${\bf x}'$ denote the vector of consecutive exponents of the generator $a$ in $\xi$. As $p$ ends with the generator $t$ and $\xi$ ends with the generator $t$, we can write ${\bf x} = {\bf p} {\bf x}' {\bf s}$ as the vector of consecutive exponents of the generator $a$ in $g$. Let ${\bf x} = (x_0, \cdots ,x_{{k_{\xx}}})$ and ${\bf x}' = (x_m, \cdots ,x_s)$ for $0 < m \leq s < {k_{\xx}}$.
We first show that ${\bf x} \in {\mathcal B}_v^{1,{k_{\xx}}-2}$ is minimal, where $v = \sum_{i=0}^{{k_{\xx}}} x_in^i$. The definition of ${\mathcal Q}_n$ ensures that ${\bf x}' \in {\mathcal B}_{v'}^{0,\K{{\bf x}'}+1}$ is minimal, with $u'=0$ and $w' = \K{{\bf x}'}+1$ and $v' = \Sigma({\bf x}')$. By construction, ${\bf x} \in {\mathcal B}_v^{1,{k_{\xx}}-2}$. If $n$ is odd, it follows from Lemma~\ref{lemma:odd_box} that ${\bf x}$ is minimal.
If $n \geq 4$ is even, suppose that ${\bf x}$ is not minimal. Inspection of $(x_{{k_{\xx}}-1},x_{{k_{\xx}}}) = (0,1)$ shows that Lemma~\ref{lemma:even_reduction_end} does not apply, so it follows from Proposition~\ref{lemma:even_minimal_characterization} that ${\bf x}$ contains a run ${\bf r} = (x_j, \cdots ,x_{\ell})$ at which it can be reduced.
If ${\bf r} \cap {\bf p}$ is nonempty and $n>4$, then ${\bf r} = (\frac{n}{2})$ and it is clear by inspection that ${\bf x}$ cannot be reduced at ${\bf r}$.
If $n=4$, then ${\bf p}$ contains runs of the form $(2)$ and $(2,1)$, neither of which is a run at which ${\bf x}$ can be reduced.
Since the first digit in ${\bf r}$ must be $\pm \frac{n}{2}$, and any remaining digits either $\pm \frac{n}{2}$ or $\pm(\frac{n}{2} -1)$, we see that ${\bf r} \subseteq {\bf x}'$.
If ${\bf r}$ does not contain the final digit of ${\bf x}'$, it follows from Lemma~\ref{lemma:run_in_the_middle} that ${\bf x}'$ can be reduced at ${\bf r}$, contradicting the fact that ${\bf x}' \in {\mathcal B}_{v'}^{0,\K{{\bf x}'}+1}$ is minimal.
Thus it must be the case that $\ell=s$.
As the first digit of ${\bf s}$ is $x_{\ell+1}$, which equals $0$,
it follows immediately from Lemma~\ref{lemma:next_digit_0} that ${\bf x}'$ can be reduced at ${\bf r}$, a contradiction.
Thus we conclude that ${\bf x} \in {\mathcal B}_v^{1,{k_{\xx}}-2}$ is minimal.
That is, every word in $\mathcal{P}_n$ is geodesic,
and it follows from Lemma~\ref{lemma:prefix_suffix_growth_rate} that $\mathcal{P}_n$
has positive density in $BS(1,n)$.
It remains to show that every $g \in \mathcal{P}_n$ has $\kappa_1(g) > 0$, which requires us to compute $l(g^{t^{\pm 1}})$ and $l(g^{a^{\pm 1}})$.
First consider $l(g^t)$ and $l(g^{{t^{-1}}})$.
Given the normal forms for $g^t$ and $g^{{t^{-1}}}$, we see that ${\bf x} \in \mathcal{L}_v$ is a vector such that $\eta_{u \pm 1,v,w\pm 1}({\bf x}) = g^{t^{\mp 1}}$. As ${k_{\xx}} > \max(u+1,w+1)$, it follows from Lemma~\ref{lemma:type1_facts} that $l(g^t) = l(g^{{t^{-1}}}) = l(g)$.
We next compute $l(g^a)$ and $l(g^{{a^{-1}}})$.
Recall that $g^a = t^{-u} a^{v_+}t^w$ where $u=1, \ w = {k_{\xx}}-2$
and $v_+ = n^u+v-n^w$.
Consider $\rho_{u,-w}({\bf x}) = {\bf p}' {\bf x}' {\bf s}' \in \mathcal{L}_{v_+}$, where $\rho_{-u,w}({\bf x})$ and $\rho_{u,-w}({\bf x})$ are defined in Section~\ref{sec:r=1}.
Here we have ${\bf p}' = (1, \left\lfloor\frac{n}{2}\right\rfloor+1,(-1)^n,0)$ and ${\bf s}' = (0,0,0,1)$.
As written, $\rho_{u,-w}({\bf x})$ is not minimal. However, adding the basis vector ${\bf w}^{(1)}$
and reusing the notation yields $\rho_{u,-w}({\bf x}) = {\bf p}' {\bf x}' {\bf s}'$, where
\[
{\bf p}' =
\left\{\begin{array}{ll}
(1, -\frac{n}{2}+1, 2, 0) & \textnormal{if $n$ is even}\\
(1, -\left\lfloor\frac{n}{2}\right\rfloor, 0, 0) & \textnormal{if $n$ is odd}
\end{array}
\right.
\]
and ${\bf s}' = (0,0,0,1)$ is unchanged. We assess below whether this new form of $\rho_{u,-w}({\bf x})$ is minimal.
Recall that $g^{{a^{-1}}} = t^{-u}a^{v_-}t^w$ where $u=1, \ w = {k_{\xx}}-2$ and $v_- = -n^u+v+n^w$.
Consider $\rho_{-u,w}({\bf x}) = {\bf p}'' {\bf x}' {\bf s}'' \in \mathcal{L}_{v_-}$ where ${\bf p}'' = (1, \left\lfloor\frac{n}{2}\right\rfloor-1,(-1)^n,0)$ and ${\bf s}'' = (0,2,0,1)$.
If $n=3$ then ${\bf s}''$ contains $2=\left\lfloor\frac{n}{2}\right\rfloor+1$ before the final digit. However, adding the basis vector ${\bf w}^{({k_{\xx}}-2)}$ yields the new suffix vector ${\bf s}'' = (0,-1,1,1)$.
If $n$ is odd it follows from Lemma~\ref{lemma:odd_box} that $\rho_{-u,w}({\bf x})$ and $\rho_{u,-w}({\bf x})$ are minimal. If $n \geq 4$ is even, we assess whether $\rho_{-u,w}({\bf x})$ and $\rho_{u,-w}({\bf x})$ are minimal in two cases.
\begin{itemize}[itemsep=5pt]
\item If $n=4$ it is possible that either the prefix or suffix vector in $\rho_{-u,w}({\bf x})$ or $\rho_{u,-w}({\bf x})$ contains a run of the form $(2)$. It is easily checked that neither vector can be reduced at such a run.
\item If $\rho_{-u,w}({\bf x})$ or $\rho_{u,-w}({\bf x})$ is not minimal, as the final two digits of ${\bf s}$ are unchanged from those of ${\bf x}$, Lemma~\ref{lemma:even_reduction_end} does not apply, and it follows from Proposition~\ref{lemma:even_minimal_characterization} that the vector in question contains a run ${\bf r}$ at which it can be reduced.
By inspection of the digits in the prefix and suffix, we see that ${\bf r} \subseteq {\bf x}'$. It follows from Lemma~\ref{lemma:run_in_the_middle} or~\ref{lemma:next_digit_0} that ${\bf x}'$ is not minimal, a contradiction.
\end{itemize}
We conclude that $\rho_{u,-w}({\bf x})$ and $\rho_{-u,w}({\bf x})$ are minimal in ${\mathcal B}_{v_+}^{u,w}$ and ${\mathcal B}_{v_-}^{u,w}$, respectively.
In all cases, we see that the vectors ${\bf x}$, $\rho_{u,-w}({\bf x})$ and $\rho_{-u,w}({\bf x})$ all have the same length, and additionally
the values of $u$ and $w$ are not altered when $g$ is conjugated by $a$ or $a^{-1}$.
Thus any change in word length between the corresponding elements arises from a change in the $\ell^1$ norm of the respective vectors.
As $\Vert \rho_{-u,w}({\bf x}) \Vert_1 = \Vert {\bf x} \Vert_1$ it follows that $l(g^{{a^{-1}}}) = l(g)$.
Comparing the $\ell^1$ norms of ${\bf x}$ and $\rho_{u,-w}({\bf x})$, it follows that $l(g^a) = l(g) - 1$ if $n$ is even and $l(g^a) = l(g)-2$ if $n$ is odd.
In all cases, these computations show that $\kappa_1(g) > 0$.
\end{proof}
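The final $\ell^1$ comparison in the proof can be checked numerically for a representative even case; a sketch for $n = 6$, using the prefix and suffix digit vectors and the normalized prefix ${\bf p}'$ given above (the helper names are ours):

```python
def l1(x):
    # l^1 norm of a digit vector
    return sum(abs(d) for d in x)

def value(x, n):
    # base-n value of a digit vector: sum x_i * n^i
    return sum(d * n**i for i, d in enumerate(x))

n, u = 6, 1
p  = [1, n // 2, 1, 0]         # prefix digits of x (even n, so (-1)^n = 1)
pp = [1, -n // 2 + 1, 2, 0]    # normalized prefix p' after conjugating by a
s  = [0, 1, 0, 1]              # suffix digits of x
sp = [0, 0, 0, 1]              # suffix s' after conjugating by a

# conjugation by a adds n^u to the value carried by the prefix,
# while the l^1 norm of the prefix is unchanged
assert value(pp, n) == value(p, n) + n**u
assert l1(pp) == l1(p)
# the suffix loses 1 from its l^1 norm, giving l(g^a) = l(g) - 1
assert l1(sp) == l1(s) - 1
```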
\subsection{A positive density set of elements with zero conjugation curvature}
\label{sec:zero}
In this section we construct a family of elements $g \in BS(1,n)$ for $n \geq 3$ with $\kappa_r(g) =0$, where $r$ is allowed to assume any value between 1 and $\max(1,\lfloor \frac{n}{4} \rfloor-1)$.
For any choice of $r$ in this range, the set of elements constructed has positive density in $BS(1,n)$.
\begin{theorem}\label{prop:zero_kappa_r}
The group $BS(1,n)$ for $n \geq 3$ has a positive density of elements $g$ with
$\kappa_r(g) =0$ for $r \leq \max(1,\lfloor \frac{n}{4} \rfloor-1)$.
\end{theorem}
\begin{proof}
First suppose that $n \geq 8$, so that $\lfloor \frac{n}{4} \rfloor>1$.
Let $b=\lfloor \frac{n}{4} \rfloor$ and take $1 \leq r\leq b-1$.
Construct a set of elements ${\mathcal Z}_n$ so that $g \in {\mathcal Z}_n$ is the concatenation $p \xi s$, where $p$ and $s$ are words specified below and $\xi \in {\mathcal Q}_n$. Namely,
\begin{itemize}[itemsep=5pt]
\item $p = t^{-(r+1)}a^{b}ta^{b}t \cdots a^{b}ta^0t$ with $2r+2$ repetitions of $a^{b}$, and corresponding vector of consecutive exponents of $a$ given by ${\bf p} = (b,b, \cdots ,b,0)$ of length $2r+3$.
\item $s=a^0ta^{b}ta^{b} \cdots ta^{b}t^{-(r+4)}$ with $2r+5$ repetitions of $a^{b}$, and corresponding vector of consecutive exponents of $a$ given by ${\bf s} = (0,b,b, \cdots ,b)$ of length $2r+6$.
\end{itemize}
Observe that $u = r+1$ and $w = {k_{\xx}} - (r+4)$.
Let ${\bf x}'$ be the vector of consecutive exponents of the generator $a$ in $\xi$. According to the definition of ${\mathcal Q}_n$, we have ${\bf x}' \in {\mathcal B}_{v'}^{0,\K{{\bf x}'}+1}$, where $v' = \Sigma({\bf x}')$, and by assumption ${\bf x}'$ is minimal.
Inspection of the initial and final letters of $p,\xi$ and $s$ allows us to write ${\bf x} = {\bf p} {\bf x}' {\bf s}$ as the vector of consecutive exponents of the generator $a$ in $g=p \xi s$. By construction, ${\bf x} \in {\mathcal B}_{v}^{r+1,{k_{\xx}}-(r+4)}$. We now show that ${\bf x}$ is minimal.
As $n \geq 8$, we know that $\left\lfloor\frac{n}{2}\right\rfloor-\lfloor \frac{n}{4} \rfloor > 1$ and $1 \leq r \leq \lfloor \frac{n}{4}\rfloor -1$.
Note that the final two digits of ${\bf x}$ are $(b,b)$; if $n$ is odd, it follows immediately from Lemma~\ref{lemma:odd_box} that ${\bf x}$ is minimal.
When $n$ is even, as $|p_i| \leq \left\lfloor\frac{n}{2}\right\rfloor-1$ and $|s_i| < \left\lfloor\frac{n}{2}\right\rfloor-1$ it follows from Lemma~\ref{lemma:prefix-suffix} that ${\bf x} \in {\mathcal B}_v^{u,w}$ is minimal.
This shows that when $n \geq 8$, every word in $\mathcal{Z}_n$ is geodesic
and it follows from Lemma~\ref{lemma:prefix_suffix_growth_rate} that $\mathcal{Z}_n$
has positive density in $BS(1,n)$.
It remains to show that for every $g \in \mathcal{Z}_n$ we have $\kappa_r(g) = 0$.
For later reference, as $\eta_{u,v,w}({\bf x})$ has shape $3$, we compute $l(g) = \Vert {\bf x} \Vert_1 + 2{k_{\xx}} - |u-w|$.
Let $q = q_1\dots q_r$. In order to show $\kappa_r(g) = 0$, we will first
show that ${\bf x}(q)$ is minimal for all such $q$, where ${\bf x}(q)$ is defined in Remark~\ref{remark:conjugation}. Since ${\bf x}$ is also minimal, we can then
use the appropriate length formula to compute first $l(g^q)$ and then $\kappa_r(g)$.
To prove the minimality of ${\bf x}(q)$, let $q'=q_1\dots q_j$ for $0 \le j \le r-1$.
For any generator $c \in \{a^{\pm 1}, t^{\pm 1}\}$, we will iteratively construct ${\bf x}(q'c)$
from ${\bf x}(q')$.
Following the notation in Remark~\ref{remark:conjugation}, recall that
\begin{enumerate}[itemsep=5pt]
\item when $c=t^{\pm 1}$ we have
\begin{enumerate}[itemsep=5pt]
\item $u(q'c) = u(q')\pm 1$ and $w(q'c) = w(q') \pm 1$,
\item $|u(q')-w(q')| = |u(q'c)-w(q'c)|$, and
\item ${\bf x}(q'c) = {\bf x}(q')$.
\end{enumerate}
\item when $c = a^{\pm 1}$, we have $u(q'c) = u(q')$ and $w(q'c) = w(q')$, and
we construct ${\bf x}(q'c)$ from ${\bf x}(q')$ by altering two coordinates.
Namely, when $c = a$ we have $$x(q'c)_{u(q')} =x(q'c)_{u(q'c)}= x(q')_{u(q')}+1$$ and
$$x(q'c)_{w(q')}=x(q'c)_{w(q'c)} = x(q')_{w(q')}-1,$$ with the signs reversed when $c = a^{-1}$.
\end{enumerate}
In other words, to obtain $u(q'c), w(q'c)$ and ${\bf x}(q'c)$ from $u(q'), w(q')$ and ${\bf x}(q')$, we either alter
$u$ and $w$ by $1$, or we alter the digits $x_{u(q')}$ and $x_{w(q')}$ by $1$.
The vector ${\bf x}$ has the following form:
\[
{\bf x} =
(\overset{{\bf p}}{\overbrace{\underset{0}{b},\dots, \underset{u=r+1}{b}, \dots, b,\underset{2r+2}{0}}}, {\bf x}',
\overset{{\bf s}}{\overbrace{\underset{{k_{\xx}}-(2r+5)}{0}, b, \dots, \underset{w={k_{\xx}}-(r+4)}{b}, \dots, \underset{{k_{\xx}}}{b}}}),
\]
where we have indicated relevant indices below the vector. As ${\bf x}(q)$ is obtained from ${\bf x}$ by
applying steps (1) or (2) above a total of $r$ times, once for each generator in $q$, we see that ${\bf x}(q)$ must have the following three properties:
\begin{itemize}[itemsep=5pt]
\item ${\bf x}'$ is unchanged between ${\bf x}$ and ${\bf x}(q)$,
\item the final two digits of ${\bf x}$ and ${\bf x}(q)$ are identical, and
\item there are upper and lower bounds on the digits in ${\bf p}(q)$ and ${\bf s}(q)$.
Since $b = \lfloor \frac{n}{4} \rfloor$ and $r \leq \max(1,\lfloor \frac{n}{4} \rfloor-1)$,
we have $1 \leq x(q)_i \le 2\lfloor\frac{n}{4}\rfloor -1$ for any digit $x(q)_i$ in ${\bf p}(q)$ or ${\bf s}(q)$.
\end{itemize}
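The digit bounds in the last item can be sanity-checked for a concrete even case; a sketch for $n = 8$ (so $b = 2$), under the assumption, consistent with steps (1) and (2) above, that each conjugation moves an affected digit by at most $1$:

```python
n = 8
b = n // 4             # the repeated prefix/suffix digit b = floor(n/4)
r = b - 1              # the largest conjugation radius allowed here
# a digit equal to b moves by at most 1 per conjugation, so after
# r conjugations it lies in the interval [b - r, b + r]
lo, hi = b - r, b + r
assert lo >= 1             # digits in p(q) and s(q) stay positive
assert hi <= 2 * b - 1     # matching the stated upper bound
assert hi <= n // 2 - 1    # hence |x(q)_i| <= n/2 - 1, with equality
                           # only in the extreme case q = a^{±r}
```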
If $n$ is odd, it follows from Lemma~\ref{lemma:odd_box} and inspection of the final two digits of
${\bf s}(q)$ that ${\bf x}(q)$ is minimal.
If $n$ is even, then all digits in ${\bf p}(q)$ and ${\bf s}(q)$ satisfy $|x(q)_i| \le \frac{n}{2} -1$.
If $|x(q)_i| < \frac{n}{2} -1$ for all digits in ${\bf s}(q)$,
the minimality of ${\bf x}(q)$ follows from Lemma~\ref{lemma:prefix-suffix}.
If there is some $i \geq {k_{\xx}} - (2r+5)$ with
$|x(q)_i| = \frac{n}{2} -1$, then we must have $q = a^{\pm r}$ and $i = w$.
In this case, suppose ${\bf x}(q)$ is not minimal. Since Lemma~\ref{lemma:even_reduction_end} does not apply to ${\bf x}(q)$,
it follows from Proposition~\ref{lemma:even_minimal_characterization} that there is a run ${\bf r}$ in ${\bf x}(q)$
at which it can be reduced. As ${\bf r}$ begins with a digit $x_j$ satisfying $|x_j| = \frac{n}{2}$, ${\bf r} \cap {\bf p}(q) = \emptyset$. As the first digit of ${\bf s}(q)$ is $0$, and when $n \geq 4$ is even the digit $0$ cannot be part of a run, we have ${\bf r} \cap {\bf s}(q) = \emptyset$ as well, so
${\bf r} \subseteq {\bf x}'$.
If ${\bf r}$ does not contain the final digit of ${\bf x}'$, it follows from Lemma~\ref{lemma:run_in_the_middle} that ${\bf x}'$ can be reduced at ${\bf r}$. Otherwise, as the first digit of ${\bf s}(q)$ is $0$, it follows from Lemma~\ref{lemma:next_digit_0} that ${\bf x}'$ can be reduced at ${\bf r}$. In either case, this contradicts the fact that ${\bf x}'$ is minimal. Thus ${\bf x}(q)$ is minimal for all $q$ as above with $|q| = r$.
It remains to compute $l(g^q)$.
As $k_{{\bf x}(q)} = {k_{\xx}} > \max(u(q),w(q))$,
we use the second length formula in Lemma~\ref{lemma:length_formula} to
compute $l(g^q)$.
We have
\begin{align*}
l(g^q) &= \Vert {\bf x}(q) \Vert_1 + 2k_{{\bf x}(q)} - |u(q) - w(q)| \\
&= \Vert {\bf x}(q) \Vert_1 + 2{k_{\xx}} - |u - w|\\
&= l(g) + \Vert {\bf x}(q) \Vert_1 - \Vert {\bf x} \Vert_1
\end{align*}
where the second line follows from the first by the properties in Remark~\ref{remark:conjugation}, namely that ${k_{\xx}} = k_{{\bf x}(q)}$ and $|u(q) - w(q)| = |u-w|$.
Inspecting the final equality above, we note that with each successive conjugation by a letter in $q$, we add and subtract $1$ in different coordinates in our vector.
As $b = \lfloor \frac{n}{4} \rfloor$ and $r<b$, any digit which differs between ${\bf x}$ and ${\bf x}(q)$ is positive.
Thus the net change to the $\ell^1$ norm of the vector after each successive conjugation is $0$, so $\Vert {\bf x}(q) \Vert_1 = \Vert {\bf x} \Vert_1$ and thus $l(g^q) = l(g)$.
As $q$ was any string of length $r$, it follows that $\kappa_r(g) = 0$ when $n \ge 8$.
We now verify the result when $3 \leq n \leq 7$, in which case we take $b=r=1$.
Let $g \in {\mathcal Z}_n$ be constructed as above; we first show that the corresponding vector ${\bf x}$ of consecutive exponents of the generator $a$ is minimal.
\begin{itemize}[itemsep=5pt]
\item If $n$ is odd, it follows from inspection of the final two digits $(b,b)$ of ${\bf x}$ and Lemma~\ref{lemma:odd_box} that ${\bf x}$ is minimal.
\item When $n=6$ it follows from Lemma~\ref{lemma:prefix-suffix} that ${\bf x}$ is minimal, as $|s_i| < \left\lfloor\frac{n}{2}\right\rfloor-1 = 2$ for all $s_i \in {\bf s}$.
\item When $n=4$, Lemma~\ref{lemma:prefix-suffix} does not apply to ${\bf x}$.
Suppose that $n=4$ and ${\bf x}$ is not minimal.
Inspection of the final two digits of ${\bf x}$ shows that Lemma~\ref{lemma:even_reduction_end} does not apply, and hence it follows from Proposition~\ref{lemma:even_minimal_characterization} that ${\bf x}$ contains a run ${\bf r}$ at which it can be reduced.
However, when $n=4$ any run must begin with $\pm 2$, and hence ${\bf r} \subseteq {\bf x}'$.
It then follows from Lemma~\ref{lemma:run_in_the_middle} or~\ref{lemma:next_digit_0} that ${\bf x}'$ is not minimal, a contradiction. Thus ${\bf x}$ is minimal.
\end{itemize}
As ${\bf x}$ is minimal, every word in $\mathcal{Z}_n$ for $3 \leq n \leq 7$ is geodesic.
It then follows from Lemma~\ref{lemma:prefix_suffix_growth_rate} that $\mathcal{Z}_n$ has positive density in $BS(1,n)$ for $3 \leq n \leq 7$.
It remains to show that any $g$ in one of these sets has $\kappa_1(g) = 0$.
For all $3 \leq n \leq 7$ it follows immediately from Lemma~\ref{lemma:type1_facts} that $l(g^t) = l(g^{{t^{-1}}}) = l(g)$.
Consider the prefix and suffix vectors for $g^a$ and $g^{a^{-1}}$. That is, $g^a$ has
\begin{itemize}
\item prefix vector ${\bf p}(a) = (1,1,2,1,0)$ and $u(a)=2$, and
\item suffix vector ${\bf s}(a) = (0,1,0,1,1,1,1,1)$ where the second $0$ has index $w(a) = {k_{\xx}}-5$.
\end{itemize}
In the corresponding vectors for $g^{{a^{-1}}}$ we have $x({a^{-1}})_{u({a^{-1}})} = 0$ and $x({a^{-1}})_{w({a^{-1}})}=2$.
As the argument is completely analogous, we consider only the case of $g^a$.
If $n \in \{5,7\}$, it follows directly from Lemma~\ref{lemma:odd_box} that ${\bf x}(a)$ is minimal. If $n=6$ the same conclusion follows from Lemma~\ref{lemma:prefix-suffix}.
If $n=4$, inspection of the final two digits of ${\bf s}(a)$ shows that Lemma~\ref{lemma:even_reduction_end} does not apply, and if ${\bf x}(a)$ was not minimal, it would follow from Proposition~\ref{lemma:even_minimal_characterization} that ${\bf x}(a)$ contained a run ${\bf r}$ at which it could be reduced.
It is easily checked that if ${\bf r} \subset {\bf p}(a)$ then ${\bf r} = (2)$ or ${\bf r} = (2,1)$ and ${\bf x}(a)$ cannot be reduced at ${\bf r}$.
We conclude that ${\bf r} \subseteq {\bf x}'$, and then it follows from Lemma~\ref{lemma:run_in_the_middle} or~\ref{lemma:next_digit_0} that ${\bf x}'$ is not minimal, a contradiction. Thus ${\bf x}(a)$ is minimal when $n=4$.
When $n=3$ we must first replace ${\bf x}(a)$ with ${\bf x}(a)+{\bf w}^{(2)}+{\bf w}^{(3)}$. Note that both vectors have the same $\ell^1$ norm, but ${\bf x}(a)+{\bf w}^{(2)}+{\bf w}^{(3)} \in {\mathcal B}_{v(a)}^{u(a),w(a)}$. Keeping the same notation for this vector, it then follows from Lemma~\ref{lemma:odd_box} that ${\bf x}(a)$ is minimal.
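The $n=3$ normalization just performed can be verified digit by digit, again assuming ${\bf w}^{(i)}$ has entry $-n$ in coordinate $i$ and $+1$ in coordinate $i+1$, which matches the computation above:

```python
n = 3

def value(x):
    # base-3 value of a digit vector: sum x_i * 3^i
    return sum(d * n**i for i, d in enumerate(x))

def add_w(x, i):
    # add the relation vector w^{(i)}: -n at coordinate i, +1 at i+1
    y = list(x)
    y[i] -= n
    y[i + 1] += 1
    return y

pa = [1, 1, 2, 1, 0]             # leading digits of x(a), containing the 2
norm = add_w(add_w(pa, 2), 3)    # add w^{(2)} + w^{(3)}
assert norm == [1, 1, -1, -1, 1]
assert value(norm) == value(pa)                   # same value
assert sum(map(abs, norm)) == sum(map(abs, pa))   # same l^1 norm
```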
For all $3 \leq n \leq 7$, as $k_{{\bf x}(a)} = {k_{\xx}}$ and $\max(u(a),w(a)) < k_{{\bf x}(a)}$, the same length formula from Lemma~\ref{lemma:length_formula} is used to compute both $l(g)$ and $l(g^a)$.
It follows that $l(g^a) = l(g) +1-1 = l(g)$. The analogous argument applies to $l(g^{{a^{-1}}})$ and we conclude that for $3 \leq n \leq 7$ every $g \in {\mathcal Z}_n$ satisfies $\kappa_1(g) = 0$.
\end{proof}
\subsection{A positive density set of elements with negative conjugation curvature}
\label{section:negative}
Our most robust result for conjugation curvature is Theorem~\ref{thm:neg_kappa_r}, which finds a positive density of elements $g \in BS(1,n)$ with $\kappa_r(g)<0$ for the widest range of $r$.
\begin{theorem}\label{thm:neg_kappa_r}
For $n \geq 3$, the group $BS(1,n)$ has a positive density of elements with $\kappa_r(g)<0$, for any $1 \leq r \leq \max(1,\lfloor \frac{n}{2} \rfloor-2)$.
\end{theorem}
\begin{proof}
Choose $r$ so that $1 \leq r \leq \max(1,\lfloor \frac{n}{2} \rfloor-2)$.
Construct a set of words ${\mathcal N}_n$ so that $g \in {\mathcal N}_n$ is formed by $p \xi s$ where $p$ and $s$ are words specified below, and $\xi \in {\mathcal Q}_n$. Namely,
\begin{itemize}[itemsep=5pt]
\item $p=t^{-(r+1)}at^{2r+3}$, with corresponding vector ${\bf p} = (1,0,0,\cdots ,0)$ of consecutive exponents of the generator $a$, of length $2r+3$.
\item $s=t^{2r+5}at^{-(r+4)}$, with corresponding vector ${\bf s} = (0,0,\cdots ,0,1)$ of consecutive exponents of the generator $a$, of length $2r+6$.
\end{itemize}
Let ${\bf x}'$ be the vector of consecutive exponents of the generator $a$ in $\xi$. According to the definition of ${\mathcal Q}_n$, we know that ${\bf x}' \in {\mathcal B}_{v'}^{0,\K{{\bf x}'}+1}$ where $v' = \Sigma({\bf x}')$, and ${\bf x}'$ is minimal.
Inspection of the initial and final letters of $p,\xi$ and $s$ allows us to write ${\bf x} = {\bf p} {\bf x}' {\bf s}$ as the vector of consecutive exponents of the generator $a$ in $g$. By construction, ${\bf x} \in {\mathcal B}_v^{r+1,{k_{\xx}}-(r+4)}$.
When $n$ is odd, as the final two digits of ${\bf x}$ are $(0,1)$, it follows from Lemma~\ref{lemma:odd_box} that ${\bf x}$ is minimal.
When $n$ is even, it follows immediately from Lemma~\ref{lemma:prefix-suffix} that ${\bf x}$ is minimal.
The minimality of ${\bf x}$ ensures that
every word in $\mathcal{N}_n$ is geodesic,
and it follows from Lemma~\ref{lemma:prefix_suffix_growth_rate} that $\mathcal{N}_n$
has positive density in $BS(1,n)$.
It remains to show that every $g \in \mathcal{N}_n$ satisfies $\kappa_r(g) < 0$.
Let $q = q_1\dots q_r \in S_n(r)$. In order to show that $\kappa_r(g) < 0$, we will first
show that ${\bf x}(q)$ is minimal for all such $q$, where ${\bf x}(q)$ is defined in Remark~\ref{remark:conjugation}. Since ${\bf x}$ is also minimal, we can then
use the appropriate length formula to compute first $l(g^q)$ and then $\kappa_r(g)$.
The proof now follows the outline of the proof of Theorem~\ref{prop:zero_kappa_r}; recall the rules specified there for obtaining $u(q),w(q)$ and ${\bf x}(q)$ from $u,w$ and ${\bf x}$, which follow from Remark~\ref{remark:conjugation}.
As the prefix and suffix vectors ${\bf p}$ and ${\bf s}$ differ from those in Theorem~\ref{prop:zero_kappa_r}, applying those rules in the present case yields the following conclusions. In particular, any digit $x_i$ which differs in ${\bf x}$ and ${\bf x}(q)$
\begin{itemize}[itemsep=5pt]
\item has $1 \leq i \leq 2r+1$ or ${k_{\xx}} - (2r+4) \leq i \leq {k_{\xx}} -4$. It follows that $k_{{\bf x}(q)} = {k_{\xx}}$, and that ${\bf x}$ and ${\bf x}(q)$ share the same final two digits.
\item is $0$ in ${\bf x}$ and, as $1 \leq r \leq\max(1,\left\lfloor\frac{n}{2}\right\rfloor-2)$, satisfies $|x(q)_i| < \frac{n}{2} -1$.
\end{itemize}
It follows from these two facts that ${\bf x}(q)$ satisfies the conditions of Lemma~\ref{lemma:prefix-suffix} and hence is minimal.
As $k_{{\bf x}(q)} = {k_{\xx}}$ and the upper bound on $r$ ensures that $u(q)<w(q) < k_{{\bf x}(q)}$, to compute both $l(g)$ and $l(g^q)$ we use the second formula in Lemma~\ref{lemma:length_formula}.
It follows from Remark~\ref{remark:conjugation} that $|u-w| = |u(q)-w(q)|$.
Thus the difference between $l(g^q)$ and $l(g)$ is exactly the difference in the respective $\ell^1$ norms of ${\bf x}(q)$ and ${\bf x}$.
Moreover, any digit which differs between ${\bf x}$ and ${\bf x}(q)$ is $0$ in ${\bf x}$ and nonzero in ${\bf x}(q)$. Thus, writing $I$ for the set of indices at which ${\bf x}$ and ${\bf x}(q)$ differ,
\[
\Vert {\bf x}(q) \Vert_1 - \Vert {\bf x} \Vert_1 = \sum_{i \in I} |x(q)_i| \geq 0,
\]
so $l(g^q) \geq l(g)$.
When $q=a^{\pm r}$, the same analysis shows that $l(g^{a^r}) = l(g^{a^{-r}}) = l(g) +2r$ and hence the above inequality is sometimes strict.
Thus we conclude that
\[
\sum_{q \in S_n(r)}l(g^q) > |S_n(r)|\,l(g),
\]
and it follows that all $g \in {\mathcal N}_n$ have $\kappa_r(g)<0$ for $r \leq \max(1,\left\lfloor\frac{n}{2}\right\rfloor -2)$.
\end{proof}
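As a sanity check on the group-theoretic setup (not part of the proof), recall that $BS(1,n)$ acts faithfully by affine maps $x \mapsto n^k x + b$. The sketch below, with hypothetical helper names of our choosing, verifies the defining relation $tat^{-1}=a^n$ in this model and checks that, for a sample word $g = p\xi s$ as in the proof, the conjugate $g^a$ is a genuinely different group element.

```python
from fractions import Fraction

# Represent an element of BS(1,n) = <a,t | t a t^-1 = a^n> as the affine
# map x -> n^k * x + b, encoded as the pair (k, b) with b in Z[1/n].
def mul(f, g, n):
    # composition f o g: apply g first, then f
    k1, b1 = f
    k2, b2 = g
    return (k1 + k2, Fraction(n) ** k1 * b2 + b1)

def inv(f, n):
    k, b = f
    return (-k, -Fraction(n) ** (-k) * b)

def word(s, n):
    # evaluate a word over a, A = a^-1, t, T = t^-1 in the affine model
    gens = {'a': (0, Fraction(1)), 'A': (0, Fraction(-1)),
            't': (1, Fraction(0)), 'T': (-1, Fraction(0))}
    e = (0, Fraction(0))
    for c in s:
        e = mul(e, gens[c], n)
    return e
```

For example, with $n=3$ and $r=1$ the prefix is $p = t^{-2}at^{5}$ and the suffix is $s = t^{7}at^{-5}$; conjugating $g = p\,a\,s$ by $a$ changes the element, consistent with the strict length inequality used at the end of the proof.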
\section{Conjugation curvature in $BS(1,2)$}
\label{sec:n=2_curvature}
When $n=2$ we again find that $BS(1,2)$ contains a positive density of elements $g$ with positive, negative and zero conjugation curvature $\kappa_1(g)$.
When $n=2$ and ${\bf x} \in {\mathcal B}_v^{u,w}$, a run is a string in $\{0,1\}^*$ or $\{0,-1\}^*$
which begins with $\pm 1$.
We redefine the weight of a run ${\bf r}$ when $n=2$ to account for the slight differences between the cases $n=2$ and $n>2$.
Given ${\bf x} \in {\mathcal B}_v^{u,w}$, define the weight of a run ${\bf r} = (x_j, \dots, x_\ell)$ to be
\[
\ensuremath{\textnormal{weight}}({\bf r}) = \left(\#\left\{1\right\}-1\right)-\#\left\{ 0\right\}
\]
where $\#\left\{1\right\}$ denotes the number of occurrences of the digit $\epsilon_{\bf r}=\ensuremath{\textnormal{sign}}(x_j)$ in ${\bf r}$ and $\#\left\{0\right\}$ is defined analogously.
Note that unlike the case of the weight of a run for $n>2$, digits
with absolute value greater than $1$ are not considered
when computing weight.
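In code, the $n=2$ weight of a run can be computed directly from this definition; the helper below is an illustrative sketch (the name \texttt{weight\_n2} is ours, not the paper's).

```python
def weight_n2(run):
    """Weight of a run for n = 2: one less than the number of occurrences
    of the leading sign epsilon_r, minus the number of zeros.  By the
    definition of a run, every digit is 0 or equals epsilon_r."""
    eps = 1 if run[0] > 0 else -1   # epsilon_r = sign of the first digit
    return (run.count(eps) - 1) - run.count(0)
```

For instance, the run $(1,0,1,1,0)$ has weight $(3-1)-2 = 0$, and $(-1,0,0)$ has weight $(1-1)-2 = -2$.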
If ${\bf s} = (s_i, \cdots ,s_{\ell}) \subseteq {\bf x}$ is a string of digits from either $\{0,1\}^*$ or $\{0,-1\}^*$, define $\beta({\bf s})$ to be the difference between the number of zeros in ${\bf s}$ and the number of ones in ${\bf s}$.
Lemma~\ref{lemma:n=2runs} demonstrates that if ${\bf x}$ can be reduced at a run ${\bf r}$ of the form
$(\lambda,0, \cdots ,0,\lambda, r_1, \cdots ,r_q)$, where $\lambda \in \{ \pm 1\}$, then there is a shorter run at which ${\bf x}$ can be reduced.
\begin{lemma}
\label{lemma:n=2runs}
Let $n=2$ and ${\bf x} \in {\mathcal B}_v^{u,w}$ with
\[
{\bf r} = (x_j, \cdots ,x_{\ell}) = (\lambda,0, \cdots ,0,\lambda, r_1, \cdots ,r_q) \subseteq {\bf x}
\]
a run where $x_j = x_m = \lambda$, with $m$ the index of the second occurrence of $\lambda$, and $\ell \leq {k_{\xx}}-2$, for $\lambda \in \{\pm 1\}$.
Then
\[
{\bf x} + \lambda \sum_{i=m}^{\ell} {\bf w}^{(i)} \; <_{u,w} \; {\bf x}+ \lambda \sum_{i=j}^{\ell} {\bf w}^{(i)}.
\]
\end{lemma}
In other words, if ${\bf x}$ can be reduced at ${\bf r}$ and $\ell \le {k_{\xx}}-2$,
then Lemma~\ref{lemma:less_than_6} applies, so the reduction takes the form on the right.
It follows from Lemma~\ref{lemma:n=2runs} that with respect to $<_{u,w}$ it is better to reduce ${\bf x}$ at the shorter run $(\lambda,r_1,\dots,r_q)$.
\begin{proof}
With ${\bf r} = (x_j, \cdots ,x_{\ell}) = (\lambda,0, \cdots ,0,\lambda, r_1, \cdots ,r_q) \subseteq {\bf x}$ as in the statement of the lemma, so $x_j = x_m = \lambda$, we have
\[
\Vert {\bf x}+\lambda \sum_{i=j}^{\ell} {\bf w}^{(i)} \Vert_1 = \Vert {\bf x} \Vert_1 + (m-1-j)-1+\beta(r_1, \cdots ,r_q) + |x_{\ell+1}+\lambda| - |x_{\ell+1}|
\]
where $m-1-j$ is the length of the first string of zeros, and we subtract $1$ to account for the change in $x_m = \lambda$.
By construction, $m-j \geq 2$.
In contrast,
\[
\Vert {\bf x}+\lambda\sum_{i=m}^{\ell} {\bf w}^{(i)} \Vert_1 = \Vert {\bf x} \Vert_1 + \beta(r_1, \cdots ,r_q) + |x_{\ell+1}+\lambda| - |x_{\ell+1}|.
\]
As ${\bf r}$ does not contain the final two digits of ${\bf x}$, the vectors $ {\bf x} + \lambda\sum_{i=j}^{\ell} {\bf w}^{(i)}$ and $ {\bf x}+\lambda\sum_{i=m}^{\ell} {\bf w}^{(i)}$ have the same length.
It follows that the same word length formula from Lemma~\ref{lemma:length_formula} is used to compute both $|\eta_{u,v,w}({\bf x}+\lambda\sum_{i=j}^{\ell} {\bf w}^{(i)})|$ and $|\eta_{u,v,w}({\bf x}+\lambda\sum_{i=m}^{\ell} {\bf w}^{(i)})|$.
Thus the difference in word length between the two paths is exactly the difference in $\ell^1$ norm between the vectors.
If $m-j>2$, the lemma follows from comparing the expressions for the $\ell^1$ norms of the two vectors.
If $m-j=2$, then the two vectors have the same $\ell^1$ norm. However, as the second digit of ${\bf r}$ is $0$, we see that ${\bf x} + \lambda\sum_{i=m}^{\ell} {\bf w}^{(i)}$ precedes $ {\bf x}+\lambda\sum_{i=j}^{\ell} {\bf w}^{(i)}$ in the lexicographic order, and the lemma follows.
\end{proof}
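To see the comparison in Lemma~\ref{lemma:n=2runs} concretely, the sketch below performs the two candidate reductions on a sample vector, assuming the basis vectors are ${\bf w}^{(i)} = -n\,{\bf e}_i + {\bf e}_{i+1}$ (our reading of the earlier definitions; \texttt{reduce\_at} is a hypothetical helper).

```python
def reduce_at(x, j, ell, lam, n=2):
    """Return x + lam * sum_{i=j}^{ell} w^(i), with w^(i) = -n*e_i + e_{i+1}."""
    y = list(x)
    for i in range(j, ell + 1):
        y[i] -= n * lam           # the -n*e_i part of w^(i)
        if i + 1 < len(y):
            y[i + 1] += lam       # the +e_{i+1} part of w^(i)
    return y

def l1(v):
    return sum(abs(d) for d in v)
```

Taking $x = (1,0,0,1,1,0)$ with the run ${\bf r} = (1,0,0,1,1)$ (so $j=0$, $m=3$, $\ell=4$, $\lambda=1$), reducing at the full run gives $\ell^1$ norm $4$, while reducing at the shorter run starting at $m$ gives norm $3$, matching the lemma's conclusion; both reductions preserve $\sum_i x_i n^i$, as they must.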
When constructing examples of elements of different curvatures when $n=2$, we again create families of geodesics as the concatenation $p \xi s$ where $p$ is a prefix and $s$ is a suffix, each of a specified form, and $\xi \in {\mathcal Q}_2$.
In an effort to simplify the discussion, we will phrase everything in terms of the vectors of exponents of the generator $a$ in the prefix, suffix, and $\xi$.
Let $Q_2$ be the set of vectors satisfying either of the following equivalent conditions:
\[
Q_2 = \{{\bf x} \, | \, {\bf x} \textnormal{ is minimal in ${\mathcal B}_{\Sigma({\bf x})}^{0,{k_{\xx}}+1}$} \}
= \{{\bf x} \, | \, \eta_{0,\Sigma({\bf x}),{k_{\xx}}+1}({\bf x}) \in \mathcal Q_2\}.
\]
We will be interested in filtering $Q_2$ by the lengths of the associated geodesics in $\mathcal Q_2$.
Thus we define $Q_2(N) = \{{\bf x} \in Q_2 \, | \, |\eta_{0,\Sigma({\bf x}),{k_{\xx}}+1}({\bf x})| = N\}$.
\begin{lemma}\label{lemma:n=2_Q2_growth_rate}
The growth rate of $\{|Q_2(N)|\}_{N \in \mathbb{N}}$ is the same as the growth rate of $BS(1,2)$.
\end{lemma}
\begin{proof}
The lemma follows immediately from the definition of $Q_2$ and Lemma~\ref{lemma:middle_vector}.
\end{proof}
Lemma~\ref{lemma:n=2_prefix_suffix_growth_rate} rephrases Lemma~\ref{lemma:prefix_suffix_growth_rate} using vectors instead of geodesics.
\begin{lemma}\label{lemma:n=2_prefix_suffix_growth_rate}
Let $\mathcal{A}$ be a set of triples $(u,w,{\bf x})$, where ${\bf x}$ is minimal in ${\mathcal B}_{\Sigma({\bf x})}^{u,w}$. Let
\[\mathcal{A}(N) = \{(u,w,{\bf x}) \in \mathcal{A} \, |\, |\eta_{u,\Sigma({\bf x}),w}({\bf x})| = N\}.\] Suppose that $|\mathcal{A}(N)| = |Q_2(N+c)|$ for some constant $c \in \mathbb{Z}$.
Then the set of geodesics
\[ \{\eta_{u,\Sigma({\bf x}),w}({\bf x}) \mid (u,w,{\bf x}) \in \mathcal{A}\} \]
has positive density in $BS(1,2)$.
\end{lemma}
\begin{proof}
The lemma follows immediately from the definition of $\mathcal{A}$ and Lemmas~\ref{lemma:exp_growth} and ~\ref{lemma:n=2_Q2_growth_rate}.
\end{proof}
To show that $BS(1,2)$ has a positive density of elements with positive, negative and zero conjugation curvature,
our strategy is to construct a variety of sets $\mathcal{A}$
as in Lemma~\ref{lemma:n=2_prefix_suffix_growth_rate}, where the vector
${\bf x}$ is the concatenation ${\bf p}{\bf x}'{\bf s}$ of a prefix, suffix, and vector
${\bf x}' \in Q_2$. We will always have $u=2$ and $w={k_{\xx}}-\epsilon$
for a particular value of $\epsilon \in \{1,2\}$ chosen later.
Our prefixes have the following forms:
\begin{itemize}[itemsep=5pt]
\item ${\bf p}_1(b) = (1,0,b,0,-1,0,1,0)$, for $b \in \{0,\pm 1\}$.
\item ${\bf p}_2(b_1,b_2) = (1,0,b_1,b_2,0,-1,0)$ where $(b_1,b_2) \in \{(1,0),(0,1),(0,0)\}$.
\end{itemize}
Our suffixes have the following form:
\begin{itemize}
\item ${\bf s}(c_1,c_2,c_3) = (0,0,-1,0,c_1,c_2,c_3)$, where
\[(c_1,c_2,c_3) \in \{(0,1,2),(0,0,2),(0,0,3),(-1,0,3),(1,0,2)\}.\]
\end{itemize}
In order to simplify subsequent proofs, we will combine a number
of prefix-suffix pairs into one large set $\mathcal V_2$ of vectors, prove
that any vector in $\mathcal V_2$ is minimal, and choose subsets of
$\mathcal V_2$ whose elements have, respectively, positive, zero and negative conjugation curvature.
Define $\mathcal V_2$ to be the set of all triples of the following forms; in all cases, ${\bf x}'$ is an arbitrary vector in $Q_2$.
\begin{itemize}
\item $(2,w,{\bf x})$, where $w = {k_{\xx}}-1$ and ${\bf x} = {\bf p}_2(b_1,b_2){\bf x}'{\bf s}(c_1,c_2,c_3)$, for
\[
((b_1,b_2),(c_1,c_2,c_3)) \in \{((1,0),(0,1,2)),\,((0,1),(0,0,2)),\,((0,0),(0,0,3))\}
\]
\item $(2,w,{\bf x})$, where $w = {k_{\xx}}-2$ and ${\bf x} = {\bf p}_2(b_1,b_2){\bf x}'{\bf s}(c_1,c_2,c_3)$, for
\[
((b_1,b_2),(c_1,c_2,c_3)) \in \{((1,0),(0,1,2)),\,((0,1),(1,0,2)),\,((0,0),(-1,0,3))\}
\]
\item $(2,w,{\bf x})$, where $w = {k_{\xx}}-2$ and ${\bf x} = {\bf p}_1(b){\bf x}'{\bf s}(c_1,c_2,c_3)$, for
\[
(b,(c_1,c_2,c_3)) \in \{(0,(0,1,2)),\,(1,(1,0,2)),\,(-1,(-1,0,3))\}
\]
\end{itemize}
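The nine families defining $\mathcal V_2$ can be enumerated mechanically. The sketch below (with variable names of our choosing) builds each prefix--suffix pair and checks two facts used repeatedly later: every suffix has length $7$ and ends in a positive digit, and $w$ differs from ${k_{\xx}}$ by $1$ or $2$.

```python
def p1(b):
    return (1, 0, b, 0, -1, 0, 1, 0)

def p2(b1, b2):
    return (1, 0, b1, b2, 0, -1, 0)

def suf(c1, c2, c3):
    return (0, 0, -1, 0, c1, c2, c3)

# Each entry is (prefix, suffix, k_x - w) for one of the nine families of V_2.
V2_families = (
    [(p2(*b), suf(*c), 1) for b, c in
     [((1, 0), (0, 1, 2)), ((0, 1), (0, 0, 2)), ((0, 0), (0, 0, 3))]] +
    [(p2(*b), suf(*c), 2) for b, c in
     [((1, 0), (0, 1, 2)), ((0, 1), (1, 0, 2)), ((0, 0), (-1, 0, 3))]] +
    [(p1(b), suf(*c), 2) for b, c in
     [(0, (0, 1, 2)), (1, (1, 0, 2)), (-1, (-1, 0, 3))]]
)
```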
We begin with a technical lemma required to prove
that any vector in $\mathcal V_2$ is minimal.
It states that if ${\bf x}$ is constructed as in $\mathcal V_2$, and ${\bf x}$ can be reduced at a run ${\bf r}$,
then there is also a run ${\bf r}'$ at which ${\bf x}$ can be reduced which does not overlap the suffix vector.
Moreover, with respect to the $<_{u,w}$ order, it is better to reduce ${\bf x}$ at ${\bf r}'$ than at ${\bf r}$.
\begin{lemma}
\label{lemma:big_suffix}
Let $(2,w,{\bf x}) \in \mathcal V_2$, with ${\bf x} = {\bf p} {\bf x}' {\bf s} \in {\mathcal B}_{\Sigma({\bf x})}^{u,w}$.
Let ${\bf r} = (x_j, \cdots ,x_{\ell}) \subseteq {\bf x}$ be a run at which
${\bf x}$ can be reduced with $\ell \leq {k_{\xx}}-1$.
Suppose that ${\bf r}$ is the concatenation ${\bf r}' {\bf r}_{{\bf s}}$ where
${\bf r}' = {\bf r} \cap {\bf p}{\bf x}' = (x_j, \cdots ,x_m) \neq \emptyset$ and
${\bf r}_{{\bf s}} = {\bf r} \cap {\bf s} =(x_{m+1}, \cdots ,x_{\ell})\neq \emptyset$.
Then
\[
{\bf x}+ \epsilon_{{\bf r}} \sum_{i=j}^{m}{\bf w}^{(i)} \; <_{u,w} \; {\bf x} + \epsilon_{{\bf r}} \sum_{i=j}^{\ell}{\bf w}^{(i)} \; <_{u,w} \; {\bf x}.
\]
\end{lemma}
\begin{proof}
Considering the combinations of prefixes and suffixes for elements in ${\mathcal V}_2$, note that if $|{\bf r}_{{\bf s}}|>2$ then ${\bf r}$ contains the digit $-1$, and thus $\epsilon_{\bf r} = -1$.
In particular, ${\bf r}$ cannot contain the final digit of ${\bf s}$, which in all cases is positive.
As ${\bf r}$ cannot contain the final digit of ${\bf s}$, it follows from Lemma~\ref{lemma:less_than_6} that the result ${\bf y}$
of reducing ${\bf x}$ at ${\bf r}$ is
\[
{\bf y} = {\bf x} + \epsilon_{\bf r}\sum_{i=j}^\ell {\bf w}^{(i)}.
\]
That is, the coefficient $2\epsilon_{\bf r}$ does not appear in the above sum.
When $\ell \leq {k_{\xx}} -2$, it is clear that ${k_{\y}} = {k_{\xx}}$.
When $\ell={k_{\xx}}-1$, that is, ${\bf r}$ terminates at the penultimate digit of ${\bf x}$, the final digit of ${\bf y}$ is $x_{{k_{\xx}}} -1$ which, by inspection, in all cases is either $1$ or $2$.
So it is again the case that ${k_{\y}} = {k_{\xx}}$.
Let ${\bf y}' = {\bf x} + \epsilon_{\bf r}\sum_{i=j}^m {\bf w}^{(i)}$.
As the length of ${\bf s}$ is $7$, it is immediate that ${\bf r}'$ does not contain the final two digits of ${\bf x}$ and hence $k_{{\bf y}'} = {k_{\y}} = {k_{\xx}}$.
The elements of ${\mathcal V}_2$ are constructed so that
${k_{\xx}} > \max(u,w)$; as $k_{{\bf y}'} = {k_{\y}} = {k_{\xx}}$ we use the second word length formula in Lemma~\ref{lemma:length_formula} to compute $|\eta_{u,\Sigma({\bf x}),w}({\bf x})|$, $|\eta_{u,\Sigma({\bf y}),w}({\bf y})|$ and $|\eta_{u,\Sigma({\bf y}'),w}({\bf y}')|$.
Thus any change in word length arises from a change in $\ell^1$ norm between the vectors.
Consider
\begin{align*}
\Vert {\bf x} \Vert_1 - \Vert {\bf y} \Vert_1 &= \ensuremath{\textnormal{weight}}({\bf r}) + |x_{\ell+1}| - |x_{\ell+1} + \epsilon_{\bf r}| \\
\Vert {\bf x} \Vert_1 - \Vert {\bf y}' \Vert_1 &= \ensuremath{\textnormal{weight}}({\bf r}') + |x_{m+1}| - |x_{m+1} + \epsilon_{\bf r}|.
\end{align*}
As ${\bf y} <_{u,w} {\bf x}$ we know that the first difference is nonnegative. To prove the lemma, we must show that
\[
\left( \Vert {\bf x} \Vert_1 - \Vert {\bf y}' \Vert_1 \right) - \left( \Vert {\bf x} \Vert_1 - \Vert {\bf y} \Vert_1 \right) = \Vert {\bf y} \Vert_1 - \Vert {\bf y}' \Vert_1 \geq 0,
\]
and when there is equality, that there is a lexicographic reduction from ${\bf y}$ to ${\bf y}'$.
For a string of digits $(x_s, \cdots ,x_t)$ in either $\{0,1\}^*$ or $\{0,-1\}^*$, let $\beta(x_s, \cdots ,x_t)$ denote the difference between the number of nonzero entries and the number of zero entries (for such strings, this is the negative of the quantity $\beta$ defined earlier in this section).
With this notation, we can write $\ensuremath{\textnormal{weight}}({\bf r}) = \ensuremath{\textnormal{weight}}({\bf r}') + \beta({\bf r}_{{\bf s}})$.
It follows that
\begin{align*}
\Vert {\bf y} \Vert_1 - \Vert {\bf y}' \Vert_1 &= -\beta({\bf r}_{{\bf s}}) + |x_{m+1}| - |x_{m+1} + \epsilon_{\bf r}| - |x_{\ell+1}| + |x_{\ell+1}+ \epsilon_{\bf r}| \\
&= -\beta({\bf r}_{{\bf s}})-1 - |x_{\ell+1}| + |x_{\ell+1}+ \epsilon_{\bf r}|,
\end{align*}
recalling that $x_{m+1} = 0$.
We know that ${\bf r}_{{\bf s}}$ is a prefix of ${\bf s}$. When $|{\bf r}_{{\bf s}}| >2$ we must have $\epsilon_{\bf r} = -1$; for shorter runs it is possible that $\epsilon_{\bf r} = 1$.
As $|{\bf s}| =7$ it is straightforward to list the possible prefixes of ${\bf s}$ which could constitute a run and compute the quantity above.
We do not consider any prefix of ${\bf s}$ which contains a $-1$ and a $+1$, as these digits cannot both occur in a run.
These values are compiled in Table~\ref{table:suffix}, and we see in all cases that they are non-negative.
\begin{table}[ht!]
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
${\bf r}_{{\bf s}}$ & $x_{\ell+1}$ & $\epsilon_{\bf r}$ & $\beta({\bf r}_{{\bf s}})$ & $-\beta({\bf r}_{{\bf s}})-1 - |x_{\ell+1}| + |x_{\ell+1}+ \epsilon_{\bf r}|$\\
\hline
$(0)$ & 0 & 1 & -1 & 1 \\
$(0,0)$ & -1 & 1 & -2 & 0 \\
\hline
$(0)$ & 0 & -1 & -1 & 1 \\
$(0,0)$ & -1 & -1 & -2 & 2 \\
$(0,0,-1)$ & 0 & -1 & -1 & 1 \\
$(0,0,-1,0)$ & 0 & -1 & -2 & 2 \\
$(0,0,-1,0)$ & -1 & -1 & -2 & 2 \\
$(0,0,-1,0)$ & 1 & -1 & -2 & 0 \\
$(0,0,-1,0,0)$ & 1 & -1 & -3 & 1 \\
$(0,0,-1,0,0)$ & 0 & -1 & -3 & 3 \\
$(0,0,-1,0,-1)$ & 0 & -1 & -1 & 1 \\
$(0,0,-1,0,0,0)$ & 2 & -1 & -4 & 2 \\
$(0,0,-1,0,0,0)$ & 3 & -1 & -4 & 2 \\
$(0,0,-1,0,-1,0)$ & 3 & -1 & -2 & 0 \\
\hline
\end{tabular}
\caption{For each possible ${\bf r}_{{\bf s}} \subset {\bf s}$ and value of $\epsilon_{\bf r}$, we compute the quantity $\Vert {\bf y} \Vert_1 - \Vert {\bf y}' \Vert_1 = -\beta({\bf r}_{{\bf s}})-1 - |x_{\ell+1}| + |x_{\ell+1}+ \epsilon_{\bf r}|$.}
\label{table:suffix}
\end{center}
\end{table}
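The entries of Table~\ref{table:suffix} can be verified mechanically; the sketch below recomputes $\beta({\bf r}_{{\bf s}})$ (nonzero entries minus zero entries, as in this proof) and the final column for every row, confirming non-negativity.

```python
def beta(s):
    # number of nonzero entries minus number of zero entries
    nonzero = sum(1 for d in s if d != 0)
    return nonzero - (len(s) - nonzero)

# rows: (r_s, x_{l+1}, eps_r, expected beta, expected final column)
rows = [
    ((0,),                 0,  1, -1, 1),
    ((0, 0),              -1,  1, -2, 0),
    ((0,),                 0, -1, -1, 1),
    ((0, 0),              -1, -1, -2, 2),
    ((0, 0, -1),           0, -1, -1, 1),
    ((0, 0, -1, 0),        0, -1, -2, 2),
    ((0, 0, -1, 0),       -1, -1, -2, 2),
    ((0, 0, -1, 0),        1, -1, -2, 0),
    ((0, 0, -1, 0, 0),     1, -1, -3, 1),
    ((0, 0, -1, 0, 0),     0, -1, -3, 3),
    ((0, 0, -1, 0, -1),    0, -1, -1, 1),
    ((0, 0, -1, 0, 0, 0),  2, -1, -4, 2),
    ((0, 0, -1, 0, 0, 0),  3, -1, -4, 2),
    ((0, 0, -1, 0, -1, 0), 3, -1, -2, 0),
]

for rs, xl, eps, b, col in rows:
    assert beta(rs) == b
    assert -beta(rs) - 1 - abs(xl) + abs(xl + eps) == col
    assert col >= 0
```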
When Table~\ref{table:suffix} shows that $-\beta({\bf r}_{{\bf s}})-1 - |x_{\ell+1}| + |x_{\ell+1}+ \epsilon_{\bf r}| > 0$, the lemma follows immediately.
When $-\beta({\bf r}_{{\bf s}})-1 - |x_{\ell+1}| + |x_{\ell+1}+ \epsilon_{\bf r}| = 0$,
we show that there must be a lexicographic reduction from ${\bf y}$ to ${\bf y}'$.
Consider the digits in each vector with indices $m+1$ and $m+2$.
Since ${\bf s}$ begins with $(0,0)$, we see that $|y'_{m+1}| = 1$ and $|y'_{m+2}| = 0$.
Comparing to $|y_{m+1}| = 1$ and $|y_{m+2}| = 1$ we note the lexicographic reduction and conclude that ${\bf y}' <_{u,w} {\bf y}$ in this case as well.
\end{proof}
We now show that the vector in every triple of $\mathcal V_2$ is minimal.
\begin{lemma}
\label{lemma:n=2_prefix_suffix}
For all $(2,w,{\bf x}) \in \mathcal V_2$, we have that ${\bf x} \in {\mathcal B}_{\Sigma({\bf x})}^{2,w}$ is minimal.
\end{lemma}
\begin{proof}
Write ${\bf x} = {\bf p}{\bf x}'{\bf s}$ as in the definition of $\mathcal V_2$.
By construction, ${\bf x}' \in {\mathcal B}_{\Sigma({\bf x})}^{0,\K{{\bf x}'}+1}$ is minimal.
Suppose that ${\bf x}$ is not minimal.
Inspection of the final two digits of ${\bf s}$ shows that Lemma~\ref{lemma:n=2_notminimal} does not apply to ${\bf x}$, and thus
it follows from Proposition~\ref{lemma:lemma318_n=2} that there is a run ${\bf r} = (x_j, \cdots ,x_{\ell}) \subseteq {\bf x}$ at which ${\bf x}$ can be reduced.
As the nonzero digits of a run all share the same sign, notice from
the form of the possible suffixes that
either ${\bf r}$ does not contain the final digit of the suffix, or ${\bf r} = (1,2)$ or $(1,0,2)$.
In the latter cases, it is easily checked that ${\bf x}$ cannot be reduced at ${\bf r}$.
In the former case, we may apply Lemma~\ref{lemma:big_suffix}. Thus in all cases we may assume that ${\bf r} \cap {\bf s} = \emptyset$.
It then follows from Lemma~\ref{lemma:less_than_6} that we can write
\[
{\bf y} = {\bf x} + \epsilon_{\bf r}\sum_{i=j}^\ell {\bf w}^{(i)}
\]
to denote the result of reducing ${\bf x}$ at the run ${\bf r}$. That is, no coefficient in the linear combination of basis vectors above is $2\epsilon_{\bf r}$.
We now show that ${\bf r} \cap {\bf p} = \emptyset$.
It is easily checked that if ${\bf r} \subseteq {\bf p}$ then ${\bf x}$ cannot be reduced at ${\bf r}$.
Suppose that ${\bf r} \cap {\bf p}$ is nonempty. For prefix ${\bf p}_i$, set $\lambda = (-1)^{i+1}$. Then ${\bf r} \cap {\bf p}$
must begin $(\lambda,0)$.
If there is a subsequent digit of $\lambda$ in ${\bf r} \cap {\bf x}'$, consider the first occurrence of $\lambda$ in ${\bf r} \cap {\bf x}'$.
It follows from Lemma~\ref{lemma:n=2runs} that it is at least as effective to reduce ${\bf x}$ at the run ${\bf r}'\subset {\bf x}'$, where ${\bf r}' \subseteq {\bf r}$ is the run beginning with the first occurrence of $\lambda$ in ${\bf r} \cap {\bf x}'$.
If there is no digit $\lambda$ in ${\bf r} \cap {\bf x}'$, then ${\bf r} \cap {\bf x}'$
has no nonzero digits.
That is, ${\bf r} \cap {\bf p} = (\lambda,0)$ and ${\bf r} \cap {\bf x}' = (0,0, \cdots,0)$ consists of $d$ zeros, for some $d \geq 1$.
Then we have
\[
\Vert {\bf x} + \epsilon_{\bf r} \sum_{i=j}^{\ell} {\bf w}^{(i)} \Vert_1 = \Vert {\bf x} \Vert_1 + (d+1) + |x_{\ell+1}+\epsilon_{\bf r}| - |x_{\ell+1}| > \Vert {\bf x} \Vert_1
\]
where the inequality follows from the facts that $|x_{\ell+1}+\epsilon_{\bf r}| - |x_{\ell+1}| \in \{\pm 1\}$ and $d \geq 1$.
We conclude that ${\bf x}$ cannot be reduced at ${\bf r}$, a contradiction. Thus we may assume that ${\bf r} \subseteq {\bf x}'$.
If ${\bf r}$ does not contain the final digit of ${\bf x}'$, it follows from Lemma~\ref{lemma:run_in_the_middle} that ${\bf x}'$ can be reduced at ${\bf r}$, a contradiction.
If ${\bf r}$ does contain the final digit of ${\bf x}'$, as the first digit of each suffix is $0$, it follows from Lemma~\ref{lemma:next_digit_0} that ${\bf x}'$ can be reduced at ${\bf r}$, a contradiction. Thus we conclude that ${\bf x}\in {\mathcal B}_{\Sigma({\bf x})}^{2,w}$ is minimal.
\end{proof}
We now prove that $BS(1,2)$ contains a positive density of elements with positive, negative and zero conjugation curvature.
\begin{theorem}
\label{thm:n=2_examples}
$BS(1,2)$ contains a positive density of elements $g$ with positive, negative and zero conjugation curvature $\kappa_1(g)$.
\end{theorem}
\begin{proof}
Define three sets of triples:
\begin{enumerate}
\item $\mathcal{P}_2 = \{(2,w,{\bf x}) \, | \, {\bf x} = {\bf p}_2(1,0){\bf x}'{\bf s}(0,1,2), \, {\bf x}' \in Q_2, \, w = {k_{\xx}}-1\}$.
\item $\mathcal{Z}_2 = \{(2,w,{\bf x}) \, | \, {\bf x} = {\bf p}_2(1,0){\bf x}'{\bf s}(0,1,2), \, {\bf x}' \in Q_2, \, w = {k_{\xx}}-2\}$.
\item $\mathcal{N}_2 = \{(2,w,{\bf x}) \, | \, {\bf x} = {\bf p}_1(0){\bf x}'{\bf s}(0,1,2), \, {\bf x}' \in Q_2, \, w = {k_{\xx}}-2\}$.
\end{enumerate}
Each of these is a subset of $\mathcal V_2$, so it follows from Lemma~\ref{lemma:n=2_prefix_suffix}
in each case that ${\bf x} \in B_{\Sigma({\bf x})}^{u,w}$ is minimal.
In addition, the prefix, suffix, and size of $w$ relative to the vector length
are all fixed within each set.
It follows that $|\mathcal{A}(N)| = |Q_2(N+c)|$
for $\mathcal{A} = \mathcal{P}_2,\mathcal{Z}_2,\mathcal{N}_2$. The value of $c$ varies between the subsets, but is constant for each one.
This equality is realized by the bijection taking a triple $(2,w,{\bf x})$ to the subvector ${\bf x}' \subseteq {\bf x}$.
Hence by Lemma~\ref{lemma:n=2_prefix_suffix_growth_rate},
the set of geodesics
$\{\eta_{2,\Sigma({\bf x}),w}({\bf x}) \mid (2,w,{\bf x}) \in \mathcal{A}\}$
has positive density in $BS(1,2)$
for $\mathcal{A} = \mathcal{P}_2,\mathcal{Z}_2,\mathcal{N}_2$.
It remains to show that $\kappa_1(\eta_{u,\Sigma({\bf x}),w}({\bf x}))$
is positive, zero, and negative, for $(2,w,{\bf x}) \in \mathcal{P}_2,\mathcal{Z}_2$, and $\mathcal{N}_2$, respectively.
For an arbitrary such triple, let $g = \eta_{2,\Sigma({\bf x}),w}({\bf x})$.
We must compare $l(g)$ to
\[
\frac{1}{4}\left[l(g^t)+l(g^{{t^{-1}}})+l(g^a)+l(g^{{a^{-1}}})\right].
\]
As the first digit of ${\bf p}$ in all cases is nonzero, we know that $n$ does not divide $\Sigma({\bf x})$.
As $\max(u,w) \leq {k_{\xx}}-1$, it follows from the first statement of Lemma~\ref{lemma:type1_facts} that $l(g) = l(g^t)=l(g^{{t^{-1}}})$.
Following Remark~\ref{remark:conjugation}, ${\bf x}(a) = \rho_{u,-w}({\bf x})$, the vector
in which we replace the digits $x_u$ and $x_w$, respectively, with $x_u+1$ and $x_w-1$. Note that these digits always lie in ${\bf p}$ and ${\bf s}$, respectively. For the remainder of this proof, we will write $u$, even though we always have $u=2$, for consistency.
Similarly, ${\bf x}({a^{-1}}) = \rho_{-u,w}({\bf x})$, the vector
in which we replace the digits $x_u$ and $x_w$, respectively, with $x_u-1$ and $x_w+1$.
We consider the three possible cases: $(2,w,{\bf x})$ in ${\mathcal P}_2$, ${\mathcal Z}_2$, and ${\mathcal N}_2$.
Write $\rho_{u,-w}({\bf x}) = {\bf p}_+ {\bf x}' {\bf s}_-$ and $\rho_{-u,w}({\bf x}) = {\bf p}_- {\bf x}' {\bf s}_+$.
Write $v = \Sigma({\bf x})$, and let $v_+ = n^u+v-n^w$ and $v_- = -n^u+v+n^w$.
It is not always true that
$\rho_{u,-w}({\bf x})$ and $\rho_{-u,w}({\bf x})$ are minimal, or even elements of ${\mathcal B}_{v_{\pm}}^{u,w}$.
However, in all cases, we add a small linear combination of basis
vectors ${\bf w}^{(i)}$ in order to modify these vectors so that the triples $(u,w,\rho_{u,-w}({\bf x}))$ and $(u,w,\rho_{-u,w}({\bf x}))$ are in $\mathcal V_2$.
In order to avoid a plethora of notation, we will retain the same notation for the modified vectors.
Let $(2,w,{\bf x}) \in {\mathcal P}_2$.
\begin{enumerate}[itemsep=5pt]
\item We have ${\bf p}_+ =(1,0,2,0,0,-1,0)$ and ${\bf s}_- = (0,0,-1,0,0,0,2)$, but $(u,w,{\bf p}_+ {\bf x}' {\bf s}_-)$ is not in $\mathcal V_2$.
Adding the basis vector ${\bf w}^{(2)}$ modifies the prefix to ${\bf p}_+ =(1,0,0,1,0,-1,0)$, and now $(2,w,{\bf p}_+ {\bf x}' {\bf s}_-) \in {\mathcal V}_2$.
\item We have ${\bf p}_- =(1,0,0,0,0,-1,0)$ and ${\bf s}_+ =(0,0,-1,0,0,2,2)$, but $(u,w,{\bf p}_-{\bf x}' {\bf s}_+)$ is not in $\mathcal V_2$.
Adding the basis vector ${\bf w}^{({k_{\xx}}-1)}$ modifies the suffix to ${\bf s}_+ =(0,0,-1,0,0,0,3)$, and now
$(2,w,{\bf p}_- {\bf x}' {\bf s}_+) \in {\mathcal V}_2$.
\end{enumerate}
Let $(2,w,{\bf x}) \in {\mathcal Z}_2$.
\begin{enumerate}[itemsep=5pt]
\item We have ${\bf p}_+ =(1,0,2,0,0,-1,0)$ and ${\bf s}_- = (0,0,-1,0,-1,1,2)$, but $(u,w,{\bf p}_+ {\bf x}' {\bf s}_-)$ is not in $\mathcal V_2$.
Adding the linear combination ${\bf w}^{(2)}+{\bf w}^{({k_{\xx}}-2)}$ modifies the prefix to ${\bf p}_+ =(1,0,0,1,0,-1,0)$ and the suffix to ${\bf s}_- = (0,0,-1,0,1,0,2)$. It is now the case that $(2,w,{\bf p}_+ {\bf x}' {\bf s}_-) \in \mathcal V_2$.
\item We have ${\bf p}_- =(1,0,0,0,0,-1,0)$ and ${\bf s}_+ =(0,0,-1,0,1,1,2)$, but $(2,w,{\bf p}_- {\bf x}' {\bf s}_+)$ is not in ${\mathcal V}_2$.
Adding the sum ${\bf w}^{({k_{\xx}}-2)} + {\bf w}^{({k_{\xx}}-1)}$ modifies the suffix to ${\bf s}_+ =(0,0,-1,0,-1,0,3)$, and
now $(2,w,{\bf p}_- {\bf x}' {\bf s}_+) \in {\mathcal V}_2$.
\end{enumerate}
Let $(2,w,{\bf x}) \in {\mathcal N}_2$.
\begin{enumerate}[itemsep=5pt]
\item We have ${\bf p}_+ =(1,0,1,0,-1,0,1,0)$ and ${\bf s}_- = (0,0,-1,0,-1,1,2)$, but $(u,w,{\bf p}_+ {\bf x}' {\bf s}_-)$ is not in $\mathcal V_2$. Adding ${\bf w}^{({k_{\xx}}-2)}$ modifies the suffix to ${\bf s}_-=(0,0,-1,0,1,0,2)$, and now $(2,w,{\bf p}_+ {\bf x}' {\bf s}_-) \in \mathcal V_2$.
\item We have ${\bf p}_- =(1,0,-1,0,-1,0,1,0)$ and ${\bf s}_+ =(0,0,-1,0,1,1,2)$, but $(u,w,{\bf p}_- {\bf x}' {\bf s}_+)$ is not in $\mathcal V_2$. Adding ${\bf w}^{({k_{\xx}}-2)} + {\bf w}^{({k_{\xx}}-1)}$ modifies the suffix to ${\bf s}_+ =(0,0,-1,0,-1,0,3)$, and now $(2,w,{\bf p}_- {\bf x}' {\bf s}_+) \in \mathcal V_2$.
\end{enumerate}
Notice that in all three cases, after suitable modification we
have $(2,w,{\bf p}_+ {\bf x}' {\bf s}_-), (2,w,{\bf p}_- {\bf x}' {\bf s}_+) \in \mathcal V_2$,
so these vectors correspond to geodesic paths representing $g^a$ and $g^{a^{-1}}$.
As $\max(u,w)<{k_{\xx}}$ for ${\bf x}$ and for these two vectors, we use the same word
length formula to compute $l(g), \ l(g^a)$ and $l(g^{{a^{-1}}})$, so any difference in word length arises from a difference in $\ell^1$ norm between the corresponding vectors ${\bf x}, \ \rho_{u,-w}({\bf x})$ and $\rho_{-u,w}({\bf x})$.
When $(2,w,{\bf x}) \in {\mathcal P}_2$,
inspection shows that $\Vert \rho_{u,-w}({\bf x}) \Vert_1 = \Vert {\bf x} \Vert_1-1$ and $\Vert \rho_{-u,w}({\bf x}) \Vert_1 = \Vert {\bf x} \Vert_1-1$.
It follows that $l(g^a) = l(g^{{a^{-1}}}) = l(g)-1$.
Together we see that $l(g^t)+l(g^{{t^{-1}}})+l(g^a)+l(g^{{a^{-1}}}) = 4l(g)-2$ and thus $\kappa_1(g)>0$.
When $(2,w,{\bf x}) \in {\mathcal Z}_2$, inspection shows that $\Vert \rho_{u,-w}({\bf x}) \Vert_1 = \Vert {\bf x} \Vert_1$ and $\Vert \rho_{-u,w}({\bf x}) \Vert_1 = \Vert {\bf x} \Vert_1$.
It follows that $l(g^a) = l(g^{{a^{-1}}}) = l(g)$.
Together we see that $l(g^t)+l(g^{{t^{-1}}})+l(g^a)+l(g^{{a^{-1}}}) = 4l(g)$ and thus $\kappa_1(g)=0$.
When $(2,w,{\bf x}) \in {\mathcal N}_2$,
inspection shows that $\Vert \rho_{u,-w}({\bf x}) \Vert_1 = \Vert {\bf x} \Vert_1 + 1$ and $\Vert \rho_{-u,w}({\bf x}) \Vert_1 = \Vert {\bf x} \Vert_1+2$.
It follows that $l(g^a) = l(g)+1$ and $l(g^{{a^{-1}}}) = l(g)+2$.
Together we see that $l(g^t)+l(g^{{t^{-1}}})+l(g^a)+l(g^{{a^{-1}}}) > 4l(g)$ and thus $\kappa_1(g)<0$.
\end{proof}
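Since ${\bf x}'$ is untouched by the conjugations, the $\ell^1$-norm bookkeeping in the proof above reduces to finite checks on the prefix and suffix vectors. The sketch below (names are ours) tabulates, for each of the three cases, the change in $\Vert\cdot\Vert_1$ passing to the modified vectors representing $g^a$ and $g^{a^{-1}}$.

```python
def l1(v):
    return sum(abs(d) for d in v)

# For each case: (prefix, suffix,
#                 (modified p_+, modified s_-) for g^a,
#                 (modified p_-, modified s_+) for g^{a^{-1}})
cases = {
    'P2': ((1, 0, 1, 0, 0, -1, 0), (0, 0, -1, 0, 0, 1, 2),
           ((1, 0, 0, 1, 0, -1, 0), (0, 0, -1, 0, 0, 0, 2)),
           ((1, 0, 0, 0, 0, -1, 0), (0, 0, -1, 0, 0, 0, 3))),
    'Z2': ((1, 0, 1, 0, 0, -1, 0), (0, 0, -1, 0, 0, 1, 2),
           ((1, 0, 0, 1, 0, -1, 0), (0, 0, -1, 0, 1, 0, 2)),
           ((1, 0, 0, 0, 0, -1, 0), (0, 0, -1, 0, -1, 0, 3))),
    'N2': ((1, 0, 0, 0, -1, 0, 1, 0), (0, 0, -1, 0, 0, 1, 2),
           ((1, 0, 1, 0, -1, 0, 1, 0), (0, 0, -1, 0, 1, 0, 2)),
           ((1, 0, -1, 0, -1, 0, 1, 0), (0, 0, -1, 0, -1, 0, 3))),
}

def deltas(case):
    p, s, (pp, sm), (pm, sp) = cases[case]
    base = l1(p) + l1(s)
    return (l1(pp) + l1(sm) - base, l1(pm) + l1(sp) - base)
```

The resulting pairs are $(-1,-1)$, $(0,0)$ and $(1,2)$, exactly the norm changes claimed for $\mathcal P_2$, $\mathcal Z_2$ and $\mathcal N_2$.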
\section{Technical Lemmas}
\label{sec:technical_lemmas}
In this section we include the proofs of the major propositions stated in Sections~\ref{section:geodesics_n_even} and~\ref{section:geodesics_n_even_2}.
The first lemma completes the remaining case of Proposition~\ref{lemma:even_minimal_characterization}.
\begin{lemma}
\label{lemma:lastcase_lemma3.20}
Let $n \geq 4$ be even and ${\bf x}, \ {\bf y} \in {\mathcal B}_v^{u,w}$ with ${\bf x}$ not minimal and ${\bf y}$ minimal, so
\[
{\bf y} = {\bf x} + \sum_{i=j}^\ell \alpha_i {\bf w}^{(i)}.
\]
Let $x_j = \frac{n}{2}$. Then either
\begin{itemize}
\item there is a run in ${\bf x}$ at which ${\bf x}$ can be reduced, or
\item Proposition~\ref{lemma:adjacent_digits} applies to ${\bf x}$.
\end{itemize}
\end{lemma}
\begin{proof}
As $x_j = \frac{n}{2}$ and ${\bf y} \in {\mathcal B}_v^{u,w}$, the digit constraints on ${\mathcal B}_v^{u,w}$ force $\alpha_j = 1$ and thus $y_j = -\frac{n}{2}$.
We proceed to consider the subsequent digits of ${\bf x}$ and ${\bf y}$.
Note that $y_{j+1} = x_{j+1} + 1 - \alpha_{j+1}n$.
For now, suppose that $|x_{j+1}|,|y_{j+1}| \le \frac{n}{2}$, so
\[
\left| 1 - \alpha_{j+1} n\right| \le n,
\]
and hence $\alpha_{j+1} \in \{0, 1\}$. Suppose that
$\alpha_{j+1} = 1$; then we can make the same argument to conclude
that $x_{j+1} \in \{\frac{n}{2},\frac{n}{2}-1\}$ and $\alpha_{j+2} \in \{0,1\}$. We can continue in this way until
one of the following occurs, for some minimal index $\ell \geq j$:
\begin{enumerate}[itemsep=5pt]
\item[(a)] $\alpha_{\ell+1} = 0$, or
\item[(b)] $|x_{\ell+1}| > \frac{n}{2}$ or $|y_{\ell+1}| > \frac{n}{2}$.
\end{enumerate}
First suppose that $\alpha_{\ell+1}=0$, which implies that $\alpha_i = 1$ for $j \leq i \leq \ell$. Writing ${\bf y} = {\bf x} + {\bf z}$, we thus have
\[
{\bf z} = \sum_{i=j}^\ell {\bf w}^{(i)} + \sum_{i>\ell+1}\alpha_i {\bf w}^{(i)},
\]
so we have
\[
(z_j, z_{j+1}, \dots, z_{\ell}, z_{\ell+1}) = (-n, -(n-1), \dots, -(n-1), 1).
\]
The digit bounds on ${\mathcal B}_v^{u,w}$ force $x_i \in \{\frac{n}{2}, \frac{n}{2}-1\}$ for $j\le i \le \ell$.
That is, we have identified a run
${\bf r} = (x_j, \dots, x_\ell)$ in ${\bf x}$ with $\epsilon_{\bf r} = 1$ such that
for the minimal vector ${\bf y} \in {\mathcal B}_v^{u,w}$, we have
\[
{\bf y} = {\bf x} + \epsilon_{\bf r}\sum_{i=j}^\ell{\bf w}^{(i)}
+ \sum_{i>\ell+1}\alpha_i {\bf w}^{(i)}.
\]
It follows immediately from Lemma~\ref{lemma:single_run_from_multiple_runs} that ${\bf x}$ can be reduced at ${\bf r}$.
Next suppose that
$|x_{\ell+1}| = \frac{n}{2} + 1$ or $|y_{\ell+1}| = \frac{n}{2} + 1$. These
imply, respectively, that ${k_{\xx}}=\ell+1$ or ${k_{\y}}=\ell+1$.
We have $y_{\ell+1} - x_{\ell+1} = 1 - \alpha_{\ell+1}n$, so in fact it cannot
be that both $|x_{\ell+1}| = \frac{n}{2} + 1$ and $|y_{\ell+1}| = \frac{n}{2} + 1$,
since then the difference would be even. Thus we have
\[
|1 - \alpha_{\ell+1}n| \le n + 1,
\]
so $\alpha_{\ell+1} \in \{-1,0,1\}$. If $\alpha_{\ell+1} = 0$,
then the argument from case (a) applies. Otherwise, we are in one of
the following cases, which we indicate with letters and
address below:
\begin{center}
\begin{tabular}{c|c|c}
& $|x_{\ell+1}| = \frac{n}{2} + 1$ & $|y_{\ell+1}| = \frac{n}{2} + 1$ \\
\hline
$\alpha_{\ell+1} = 1$ & (A) & (C) \\
$\alpha_{\ell+1} = -1$ & (B) & (D) \\
\end{tabular}
\end{center}
\begin{itemize}[itemsep=5pt]
\item[(A)] Since $\alpha_{\ell+1} =1$, we must have $x_{\ell+1} = \frac{n}{2} + 1$, that is, this digit is positive and ${k_{\xx}} = \ell+1$.
Thus we can write
\[ {\bf y} = {\bf x} + \sum_{i=j}^{\ell+1} {\bf w}^{(i)} \]
and as ${\bf y}$ is minimal we have found a run at which ${\bf x}$ can be reduced.
\item[(B)] Since $\alpha_{\ell+1} = -1$, we conclude that $x_{\ell+1} = -(\frac{n}{2}+1)$ and ${k_{\xx}} = \ell+1 \geq \max(u,w)$
and $y_{\ell+1} = \frac{n}{2}$ and ${k_{\y}} = {k_{\xx}} + 1$.
Thus the final two digits of ${\bf y}$ are $(\frac{n}{2},-1)$.
As ${k_{\y}} > {k_{\xx}} \ge \max(u,w)$,
Lemma~\ref{lemma:even_reduction_end} applies to ${\bf y}$, contradicting its
minimality. Thus this case does not occur.
\item[(C)]
Since $\alpha_{\ell+1} = 1$, we conclude $y_{\ell+1} = -(\frac{n}{2} + 1)$ and ${k_{\y}} = \ell+1 \geq \max(u,w)$.
The digits $x_{\ell+1}$ and $x_{\ell+2}$ are then determined, namely $(x_{\ell+1},x_{\ell+2}) = (\frac{n}{2}-2,-1)$.
Set ${\bf r} = (x_j, \dots, x_\ell)$; note that ${\bf r}$ is a run and we have
\[
\Vert {\bf x} \Vert_1 - \Vert {\bf y} \Vert_1 = \ensuremath{\textnormal{weight}}({\bf r}) -2.
\]
As ${k_{\xx}} > {k_{\y}} \ge \max(u,w)$, we use the second length formula in Lemma~\ref{lemma:length_formula} to compute the difference
\[
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y})| = \ensuremath{\textnormal{weight}}({\bf r}) -2 + 2({k_{\xx}} - {k_{\y}}) \ge \ensuremath{\textnormal{weight}}({\bf r}).
\]
As ${\bf y} <_{u,w} {\bf x}$, we must have either $\ensuremath{\textnormal{weight}}({\bf r}) > 0$ or $\ensuremath{\textnormal{weight}}({\bf r}) = 0$
and ${\bf y}$ lexicographically smaller than ${\bf x}$.
If $\ensuremath{\textnormal{weight}}({\bf r}) > 0$, then by
Lemma~\ref{lemma:adjacent_digits_from_weight} there must be a pair
of adjacent digits $\frac{n}{2}$ in ${\bf x}$.
Extending this pair to a maximal sequence of digits of the form $\frac{n}{2}$, we see that it must terminate before the final
digit in ${\bf x}$.
Moreover, the digit immediately following this sequence has absolute value less than $\frac{n}{2}$
because the sequence is maximal and $x_{\ell+1} = \frac{n}{2} -2$.
It follows immediately from
Lemma~\ref{lemma:even_reduction_adjacent_digits} that there is a run at which ${\bf x}$ can be reduced.
\smallskip
\noindent
If $\ensuremath{\textnormal{weight}}({\bf r}) = 0$
but ${\bf y}$ precedes ${\bf x}$ in the lexicographic order, we must have $x_{j+1} = \frac{n}{2}$. In particular $j+1 \le \ell$.
We have therefore found two adjacent digits $\frac{n}{2}$ and can repeat the argument
above and apply Lemma~\ref{lemma:even_reduction_adjacent_digits} to conclude that
${\bf x}$ contains a run at which it can be reduced.
\item[(D)] Since $\alpha_{\ell+1} = -1$, it is easily verified that $z_{\ell+1} = n+1$ and we must have $x_{\ell+1} = -\frac{n}{2}$. It follows that $y_{\ell+1} = \frac{n}{2}+ 1$ and thus ${k_{\y}} = \ell+1$.
As $y_{\ell+2}=0$, we must have $x_{\ell+2} = 1$, so ${k_{\xx}} \ge {k_{\y}} + 1 \ge \max(u,w) + 1$.
If ${k_{\xx}} > \ell+2$, then ${\bf y}$ would have nonzero digits with index greater than $\ell+1$, contradicting ${k_{\y}} = \ell+1$. Thus the final two digits of ${\bf x}$ are $(-\frac{n}{2},1)$, and Lemma~\ref{lemma:even_reduction_end} applies to ${\bf x}$.
\end{itemize}
These cases prove Lemma~\ref{lemma:lastcase_lemma3.20} and complete the proof of Proposition~\ref{lemma:even_minimal_characterization}.
\end{proof}
\subsection{The special case $n=2$.}
\label{sec:technical_n=2}
When $n=2$ the digit bounds on ${\mathcal B}_v^{u,w}$ allow for the possibility that $x_{{k_{\xx}}} = \frac{n}{2}+2 = 3$ for ${\bf x} \in {\mathcal B}_v^{u,w}$.
As this case does not occur when $n \geq 4$ is even, we have additional cases and slight variations in approach when $n=2$.
For easy reference, we restate the lemma and propositions from Section~\ref{section:geodesics_n_even_2}, which we prove below.
\noindent
{\bf Lemma~\ref{lemma:n=2_110}.} \emph{ Let $n=2$ and suppose that ${\bf x} \in {\mathcal B}_v^{u,w}$ and $\delta \in \{\pm 1\}$.
If any of the following occur, then there is a run at which
${\bf x}$ can be reduced, and hence ${\bf x}$ is not minimal.
\begin{enumerate}[itemsep=5pt]
\item ${\bf x}$ contains the digits $(\delta, -\delta\alpha)$ for $\alpha > 0$.
\item ${k_{\xx}} \ne \max(u,w)$ and ${\bf x}$ ends in the digits $(\delta, \delta)$.
\item ${\bf x}$ contains the digits $(\delta, \delta, \alpha)$ for any $\alpha$.
\end{enumerate} }
\begin{proof}
Let $j$ be the index in ${\bf x}$ of the first digit $\delta$ in any case above.
In case (1), note that in ${\bf y}={\bf x} + \delta \wf{j}$ the digits $(\delta, -\delta\alpha)$ have been replaced by $(-\delta,-\delta(\alpha-1))$.
Since $\alpha \geq 1$, we see that $\Vert {\bf x} + \delta \wf{j} \Vert_1 =\Vert {\bf x} \Vert_1 -1$.
That is, ${\bf x}$ is not minimal.
If these digits are not the final digits of ${\bf x}$, or if they are but $\alpha >1$, then ${k_{\y}} = {k_{\xx}}$.
Thus, regardless of the length formula used to compute $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf x} + \delta \wf{j})|$, the change in word length is
determined by the change in $\ell^1$ norm, and thus $ {\bf x} + \delta \wf{j} <_{u,w} {\bf x}$.
If these are the final digits of ${\bf x}$, and $\alpha = 1$, writing ${\bf y} = {\bf x} + \delta \wf{j}$, we have ${k_{\y}} = {k_{\xx}}-1$.
If ${k_{\xx}} \le \max(u,w)$ then any change in word length between $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf x} + \delta \wf{j})|$ is determined by the change in $\ell^1$ norm, and hence $ {\bf x} + \delta \wf{j} <_{u,w} {\bf x}$.
If ${k_{\xx}} > \max(u,w)$, then the fact that ${k_{\y}} = {k_{\xx}}-1$ implies
\[
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf x} + \delta \wf{j})| = \Vert {\bf x} \Vert_1 - \Vert {\bf x} + \delta \wf{j} \Vert_1 + 2({k_{\xx}} - {k_{\y}}) > \Vert {\bf x} \Vert_1 - \Vert {\bf x} + \delta \wf{j} \Vert_1 >0.
\]
We conclude that $ {\bf x} + \delta \wf{j} <_{u,w} {\bf x}$, so ${\bf x}$ is not minimal.
In case (2) with ${k_{\xx}} > \max(u,w)$ consider ${\bf y}={\bf x} - \delta\wf{j}$.
We see that ${k_{\y}} = {k_{\xx}}-1 \geq \max(u,w)$ and thus we use the second word length formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
It is easily checked that $\Vert {\bf y} \Vert_1 = \Vert {\bf x} \Vert_1 +1$ and it follows that $|\eta_{u,v,w}({\bf y})| < |\eta_{u,v,w}({\bf x})|$, so ${\bf x}$ is not minimal.
In case (2) with ${k_{\xx}} < \max(u,w)$, let ${\bf y} = {\bf x} + \delta(\wf{j} + \wf{j+1})$.
It is easily checked that $\Vert {\bf y} \Vert_1 = \Vert {\bf x} \Vert_1$.
Also note that there is a lexicographic decrease between $x_{j+1} = \delta$ and $y_{j+1} =0$.
As both geodesic lengths are computed using the first word length formula in Lemma~\ref{lemma:length_formula}, which does not depend on the length of the vector, we see that ${\bf y} <_{u,w} {\bf x}$, so ${\bf x}$ is not minimal.
In case (3), if $\ensuremath{\textnormal{sign}}(\alpha) = -\ensuremath{\textnormal{sign}}(\delta)$ then this reduces to case (1).
Without loss of generality, assume that $\delta = 1$ and $\ensuremath{\textnormal{sign}}(\alpha) = \ensuremath{\textnormal{sign}}(\delta)$.
\begin{itemize}[itemsep=5pt]
\item If $\alpha\in \{0,2\}$, let ${\bf y} = {\bf x}+\delta({\bf w}^{(j)}+{\bf w}^{(j+1)})$.
It is easily checked that $\Vert {\bf y} \Vert_1 = \Vert {\bf x} \Vert_1$
and that $|x_{j+1}| = 1$ while $|y_{j+1}| = 0$, demonstrating a lexicographic reduction from ${\bf x}$ to ${\bf y}$.
Note that ${k_{\xx}}={k_{\y}}$ for either value of $\alpha$ because the last digit of ${\bf x}$ cannot be $0$,
so regardless of the length formula used to compute both $|\eta_{u,v,w}({\bf y})|$ and $|\eta_{u,v,w}({\bf x})|$, it follows that ${\bf y} <_{u,w} {\bf x}$.
\item If $\alpha = 3$, then $\alpha$ is the final digit in ${\bf x}$.
Let ${\bf y} = {\bf x} + {\bf w}^{({k_{\xx}}-2)}+{\bf w}^{({k_{\xx}}-1)}+2{\bf w}^{({k_{\xx}})}$. It is easily seen that ${\bf y}$ ends in the digits $(-1,0,0,2)$ and ${k_{\y}} = {k_{\xx}}+1$.
As $x_{{k_{\xx}}}=3$ we must have ${k_{\xx}} \geq \max(u,w)$ and thus use the second formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf y})|$ and $|\eta_{u,v,w}({\bf x})|$.
As $\Vert {\bf y} \Vert_1 = \Vert {\bf x} \Vert_1 -2$, we see that
\[
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y})| = 2+2({k_{\xx}} - {k_{\y}}) = 2-2=0.
\]
As $|x_{{k_{\xx}}-2}| = |y_{{k_{\xx}}-2}|$ while $x_{{k_{\xx}}-1}=1$ and $y_{{k_{\xx}}-1}=0$, there is a lexicographic reduction and hence ${\bf y} <_{u,w} {\bf x}$.
\item If $\alpha = 1$, then let $(x_j, \dots, x_\ell)$ be a maximal sequence consisting entirely of the digit $1$. Note that $\ell-j \ge 2$.
First suppose that ${k_{\xx}} < \max(u,w)$. Set ${\bf y} = {\bf x} + \sum_{i=j}^\ell {\bf w}^{(i)}$.
Because $x_{\ell+1} \ne 1$ and ${k_{\xx}} < \max(u,w)$, we must have $x_{\ell+1} = 0$.
\smallskip
We can then compute the digits
\[(x_j,x_{j+1}, \cdots ,x_{\ell+1}) = (\delta, \delta, \cdots ,\delta,0)
\]
of ${\bf x}$ and
\[ (y_j,y_{j+1}, \cdots ,y_{\ell+1}) = (-\delta,0,\cdots ,0,1)\] of ${\bf y}$.
If ${k_{\xx}} = \ell$ then ${k_{\y}} = {k_{\xx}}+1$. If ${k_{\xx}} >\ell$ then ${k_{\y}} = {k_{\xx}}$. In either case, ${k_{\y}} \le {k_{\xx}} +1$, and we use the first length formula to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$, and conclude that
\[
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y})| = \ell - j - 1 > 0.
\]
Therefore ${\bf y} <_{u,w} {\bf x}$, and $(x_j, \dots, x_\ell)$ is a run at which ${\bf x}$ can be reduced.
\smallskip
If ${k_{\xx}} \ge \max(u,w)$, then consider $x_{\ell+1}$. If $x_{\ell+1} \in \{-3,-2,-1,2,3\}$,
then this situation is covered by previous cases. As we are assuming that we have a maximal
subsequence of digits $1$, we know $x_{\ell+1} \ne 1$. Thus $x_{\ell+1} = 0$.
We now consider the cases ${k_{\xx}} = \ell$ and ${k_{\xx}} > \ell$.
\smallskip
If ${k_{\xx}} = \ell$, then set ${\bf y} = {\bf x} + \sum_{i=j}^{\ell-1} {\bf w}^{(i)}$ and note that ${k_{\y}} = {k_{\xx}}$. Thus we use the second
length formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$
and $|\eta_{u,v,w}({\bf y})|$, and so the difference between the geodesic lengths comes from the difference between the $\ell^1$ norms of the vectors. We have $\Vert {\bf y} \Vert_1 \le \Vert {\bf x} \Vert_1$, and in the case of equality we note that
$x_{j+1} = 1$ while $y_{j+1}=0$, so there is a lexicographic reduction from ${\bf x}$ to ${\bf y}$.
Thus $(x_j,\dots, x_{\ell-1})$ is a run at which ${\bf x}$ can be reduced.
\smallskip
If ${k_{\xx}} > \ell$, then set ${\bf y} = {\bf x} + \sum_{i=j}^{\ell} {\bf w}^{(i)}$.
The digits $(x_j, \cdots ,x_\ell)$ and $(y_j, \cdots ,y_\ell)$ are as computed above.
As ${k_{\xx}} > \ell$ we know that ${k_{\y}} = {k_{\xx}}$ and thus
we use the second length formula in Lemma~\ref{lemma:length_formula} to
compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
It follows that $|\eta_{u,v,w}({\bf x})|-|\eta_{u,v,w}({\bf y})|>0$, so
$(x_j,\dots, x_\ell)$ is a run at which ${\bf x}$ can be reduced.
\end{itemize}
\end{proof}
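The digit manipulations in the proofs above can be checked mechanically. The following is a minimal Python sketch, assuming (consistently with the replacements computed in this section) that ${\bf w}^{(j)}$ contributes the digits $(-n,1)$ at positions $j$ and $j+1$ of a digit vector:

```python
def add_w(x, j, coeff, n=2):
    """Return x + coeff * w^(j) as a digit vector, where w^(j) is assumed
    to contribute -n at position j and +1 at position j+1."""
    y = list(x) + [0] * max(0, j + 2 - len(x))  # pad with zeros if needed
    y[j] -= coeff * n
    y[j + 1] += coeff
    return y

def l1(x):
    """The l^1 norm of a digit vector."""
    return sum(abs(d) for d in x)
```

For example, with $\delta = 1$ and $\alpha = 2$ the digits $(1,-2)$ become $(-1,-1)$ and the $\ell^1$ norm drops by one, as in case (1); and starting from the final digits $(1,1,3)$, adding ${\bf w}^{({k_{\xx}}-2)}+{\bf w}^{({k_{\xx}}-1)}+2{\bf w}^{({k_{\xx}})}$ produces the final digits $(-1,0,0,2)$, as in the case $\alpha = 3$.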
{\bf Proposition~\ref{lemma:lemma318_n=2}.} {\em Let $n=2$ and ${\bf x} \in {\mathcal B}_v^{u,w}$. Then ${\bf x}$ is not minimal if and only if one of the following occurs.
\begin{itemize}[itemsep=5pt]
\item There is a run at which ${\bf x}$ can be reduced.
\item Lemma~\ref{lemma:n=2_notminimal} applies to ${\bf x}$.
\end{itemize}}
\begin{proof}
It is clear that if either condition applies to ${\bf x} \in {\mathcal B}_v^{u,w}$, then ${\bf x}$ is not minimal. We must show the converse.
Let ${\bf x} \in {\mathcal B}_v^{u,w}$ and ${\bf y} = {\bf x}+{\bf z} \in {\mathcal B}_v^{u,w}$ be minimal, for some
${\bf z} = \sum_{i=j}^\ell \alpha_i {\bf w}^{(i)} \in \mathcal{L}_0 $, where $j$ is the minimal index
such that $x_j \ne y_j$. It follows from Lemma~\ref{lemma:less_than_6} that
$|\alpha_j| \in \{1,2\}$, equivalently that $|z_j| \in \{2,4\}$.
Without loss of generality, we assume that in the sum defining ${\bf z}$ all $\alpha_i \neq 0$ for $j \leq i \leq \ell$.
If this is not the case, let $j+1 \leq m < \ell$ be the minimal index with $\alpha_m = 0$, and let ${\bf z}' = \sum_{i=j}^{m-1} \alpha_i {\bf w}^{(i)}$. It follows from Lemma~\ref{lemma:single_run_from_multiple_runs} that ${\bf x} + {\bf z}' <_{u,w} {\bf x}$. Moreover, if ${k_{\xx}} \leq \ell$ then ${\bf y}$ is minimal as initially written.
In the arguments below, unless ${k_{\xx}} \leq \ell$, we use only the fact that ${\bf y} <_{u,w} {\bf x}$. This allows us to assume for the remainder of the proof that $\alpha_i \neq 0$ for all $j \leq i \leq \ell$.
Assume without loss of generality that $x_j \ge 0$. We consider the possible values of $\alpha_j \in \{\pm 1, \pm 2\}$.
If $\alpha_j =-2$ then $z_j=4$, hence $y_j\geq 4$. So this case does not occur.
If $\alpha_j = 2$ then $z_j=-4$. We consider $x_j \in \{0,1,2,3\}$.
\begin{enumerate}[itemsep=5pt]
\item If $x_j = 0$ then $y_j=-4$, violating the digit bounds on ${\mathcal B}_v^{u,w}$. So this case does not occur.
\item If $x_j \in \{2,3\}$ then $j = {k_{\xx}} \ge \max(u,w)$, and $\alpha_{k_{\xx}} \neq 0$. It follows from Lemma~\ref{lemma:n=2_unequal_lengths} that ${k_{\y}} >{k_{\xx}}$.
Thus when computing $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$ we use the second length formula in Lemma~\ref{lemma:length_formula}.
\smallskip
\noindent
As $j={k_{\xx}}$, for $j<i\leq {k_{\y}}$ we have $y_i = z_i$; moreover, for $j<i<{k_{\y}}$ we have $|y_i| \leq 1$.
Writing ${\bf z} = 2{\bf w}^{({k_{\xx}})} + \sum_{i={k_{\xx}}+1}^{{k_{\y}}-1} \alpha_i {\bf w}^{(i)}$ we see that the conditions $y_i = z_i$ and $|y_i| \leq 1$ force $\alpha_i=1 $ for all $i$.
So ${\bf z}$ has one of the following two forms:
\begin{itemize}
\item $(-4,2)$, or
\item $(-4,0,-1,-1, \cdots ,-1,1)$.
\end{itemize}
\smallskip
\noindent
When $x_j = 2$, so $y_j = -2$, it is immediate that $\Vert {\bf y} \Vert_1 > \Vert {\bf x} \Vert_1$.
When $x_j = 3$, so $y_j = -1$, it is easily checked that $\Vert {\bf y} \Vert_1 \geq \Vert {\bf x} \Vert_1-1$.
As ${k_{\y}} \geq {k_{\xx}}+1$, it follows from the second length formula in Lemma~\ref{lemma:length_formula} that $|\eta_{u,v,w}({\bf x})|<|\eta_{u,v,w}({\bf y})|$, contradicting our assumption that ${\bf y} <_{u,w} {\bf x}$. So these cases do not occur.
\item If $x_j=1$, then $y_j = -3$ and we conclude that ${k_{\y}} = j$.
Thus for $j<m \leq \ell+1$ we must have $x_m = -z_m$ and $|x_m| \leq 1$ for $m<\ell+1$.
As in the previous case, we conclude that $\alpha_m = 1$ for $j+1 \leq m \leq \ell$, and hence ${\bf z} = (-4,2)$ or ${\bf z} = (-4,0,-1,-1, \cdots ,-1,1)$.
\smallskip
\begin{itemize}[itemsep=5pt]
\item If ${\bf z} = (-4,2)$ then ${\bf x}$ ends in $(x_j,x_{j+1})=(1,-2)$.
As ${\bf y} <_{u,w} {\bf x}$ we conclude that $(x_j) = (1)$ is a run at which ${\bf x}$ can be reduced.
\item If ${\bf z}=(-4,0,1)$ then ${\bf x}$ ends in $(1,0,-1)$ in which case Lemma~\ref{lemma:n=2_notminimal} applies to ${\bf x}$.
\item If ${\bf z} = (-4,0,-1,-1, \cdots ,-1,1)$ then ${\bf x}$ ends in $(x_j, \cdots ,x_{\ell+1}) = (1,0,1,1, \cdots ,1,-1)$.
We show that ${\bf r}=(x_{j+2}, \cdots ,x_\ell) = (1,1, \cdots ,1)$ is a run at which ${\bf x}$ can be reduced.
Let ${\bf q} = {\bf x}+\sum_{i=j+2}^{\ell} {\bf w}^{(i)}$. Then by construction $k_{\bf q}<{k_{\xx}}$ and $\Vert {\bf q} \Vert_1 < \Vert {\bf x} \Vert_1$.
As $\max(u,w) \leq {k_{\y}} < k_{\bf q}<{k_{\xx}}$, it then follows from the second length formula in Lemma~\ref{lemma:length_formula} that ${\bf q} <_{u,w} {\bf x}$, that is, ${\bf r}$ is a run at which ${\bf x}$ can be reduced.
\end{itemize}
\end{enumerate}
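The two candidate digit patterns for ${\bf z}$ appearing in the case $\alpha_j = 2$ above can be verified directly. A small sketch, under the same assumption as before that ${\bf w}^{(i)}$ contributes the digits $(-n,1)$ at positions $i$ and $i+1$:

```python
def z_vector(ell, n=2):
    """Digits of z = 2*w^(0) + sum_{i=1}^{ell} w^(i) for n = 2, assuming
    w^(i) contributes -n at position i and +1 at position i+1."""
    z = [0] * (ell + 2)
    z[0] -= 2 * n  # the leading coefficient alpha_j = 2
    z[1] += 2
    for i in range(1, ell + 1):  # trailing coefficients alpha_i = 1
        z[i] -= n
        z[i + 1] += 1
    return z
```

This reproduces $(-4,2)$ for $\ell = 0$, the degenerate pattern $(-4,0,1)$ for $\ell = 1$, and $(-4,0,-1,\dots,-1,1)$ for $\ell \ge 2$.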
If $\alpha_j = -1$ then $z_j = 2$. We consider $x_j \in \{0,1,2,3\}$.
\begin{enumerate}[itemsep=5pt]
\item If $x_j \in \{2,3\}$ then $|y_j| >3$, hence these cases do not occur.
\item If $x_j \in \{0,1\}$ then $y_j \in \{2,3\}$, so $j = {k_{\y}} \geq \max(u,w)$.
Moreover, for $j+1 \leq m \leq \ell+1$ we have $x_m = -z_m$.
For $j+1 \leq m \leq \ell$ it is also true that $|x_m| \leq 1$, from which it follows that in the definition of ${\bf z}$ we have $\alpha_m =1$ for $j+1 \leq m \leq \ell$.
Thus ${\bf z} = (2,-1)$ or ${\bf z} = (2,1,1, \cdots ,1,-1)$.
\begin{itemize}[itemsep=5pt]
\item
If ${\bf z} = (2,-1)$ it follows that ${\bf x}$ ends in either $(x_j,x_{j+1}) = (0,1)$, so Lemma~\ref{lemma:n=2_notminimal} applies to ${\bf x}$, or $(1,1)$,
in which case $(x_j) = (1)$ is a run at which ${\bf x}$ can be reduced.
\item If ${\bf z} = (2,1,1, \cdots ,1,-1)$ it follows that ${\bf x}$ ends in $(x_j,-1,-1, \cdots ,-1,1)$.
Then $(x_{j+1}, \cdots ,x_\ell) = (-1,-1, \cdots ,-1)$ is a run at which ${\bf x}$ can be reduced.
The proof is identical to case (3) when $\alpha_j = 2$ with a change of sign.
\end{itemize}
\end{enumerate}
If $\alpha_j = 1$ then $z_j = -2$. We first make three observations about the form of ${\bf z}$.
First, for $j<m < \min({k_{\xx}},{k_{\y}})$ we have $z_m = \alpha_{m-1}-\alpha_m n$ and $|x_m| \leq 1$, which requires
$\alpha_m \in \{\pm 1\}$.
If $\alpha_m \alpha_{m+1} = -1$ for $j+2\leq m+1 < \min({k_{\xx}}, {k_{\y}})$ then $|z_{m+1}| = 3$, which requires either ${k_{\xx}} = m+1$ or ${k_{\y}} = m+1$. However, $m+1 < \min({k_{\xx}}, {k_{\y}})$ so we conclude that the sign of $\alpha_i$ is constant for $j+1 \leq i < \min({k_{\xx}}, {k_{\y}})$.
Our assumption that $\alpha_j = 1$ allows us to write
\[ {\bf z} = \sum_{i=j}^{\min({k_{\xx}},{k_{\y}})-1} {\bf w}^{(i)} + \sum_{i=\min({k_{\xx}},{k_{\y}})}^{\ell} \alpha_i {\bf w}^{(i)}.\]
The second observation is that for $\min({k_{\xx}},{k_{\y}})<m\leq\ell+1$ we have either $|x_m| = |z_m|$ or $|y_m| = |z_m|$.
The digit bounds on ${\mathcal B}_v^{u,w}$ again force $|\alpha_m| = 1$
for $\min({k_{\xx}},{k_{\y}}) \leq m \leq \ell$.
If $\alpha_m \alpha_{m+1} = -1$ for $m$ in this range, then $|z_{m+1}| = 3$ and hence either $|x_{m+1}| = 3$ or $|y_{m+1}| = 3$.
In either case the digit bounds on ${\mathcal B}_v^{u,w}$ are violated and so this does not occur.
This allows us to write
\[ {\bf z} = \sum_{i=j}^{\ell-1} {\bf w}^{(i)} + \alpha_\ell {\bf w}^{(\ell)}.\]
Third, recall that we are not assuming that ${\bf y}$ is necessarily minimal, but simply that ${\bf y} <_{u,w} {\bf x}$.
However, if ${k_{\xx}} \le \ell$, then we can assume ${\bf y}$ is minimal.
We consider $x_j \in \{0,1,2,3\}$.
\begin{enumerate}[itemsep=5pt]
\item If $x_j = 0$ then $y_j = -2$ so $j = {k_{\y}} \geq \max(u,w)$.
It follows that for $j+1 \leq m \leq \ell+1$ we have $x_m = -z_m$ and for $j+1 \leq m \leq \ell$ we have $|x_m| \leq 1$.
The latter condition forces $\alpha_m = 1$ in the definition of ${\bf z}$ for $j+1 \leq m \leq \ell$.
Thus ${\bf z} = (-2,1)$ or ${\bf z} = (-2,-1, \cdots ,-1,1)$.
\smallskip
\begin{itemize}[itemsep=5pt]
\item If ${\bf z} = (-2,1)$ then ${\bf x}$ ends in $(0,-1)$ and Lemma~\ref{lemma:n=2_notminimal} applies to ${\bf x}$.
\item If ${\bf z} = (-2,-1, \cdots ,-1,1)$, then $(x_{j+1}, \cdots ,x_\ell) = (1,1, \cdots ,1)$ is a run at which ${\bf x}$ can be reduced.
The proof of this is identical to the proof of case (3) when $\alpha_j=2$ with a change of sign.
\end{itemize}
\item If $x_j \in \{2,3\}$ then $y_j \in \{0,1\}$ and $j = {k_{\xx}} \geq \max(u,w)$.
\smallskip
As $\alpha_{{k_{\xx}}} \neq 0$ it follows from Lemma~\ref{lemma:n=2_unequal_lengths} that ${k_{\y}} > {k_{\xx}} \geq \max(u,w)$ and thus we use the second length formula in Lemma~\ref{lemma:length_formula} to compute both
$|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
\smallskip
As $j = {k_{\xx}}$, for $j+1 \leq m \leq {k_{\y}}$ we have $y_m = z_m$, and for $j+1 \leq m < {k_{\y}}$ we have $|y_m| \leq 1$.
These conditions force $\alpha_m=1$ for all $j+1 \leq m \leq \ell$ in the definition of ${\bf z}$, and thus ${\bf z} = (-2,1)$ or ${\bf z} = (-2,-1, \cdots ,-1,1)$.
\smallskip
\begin{itemize}[itemsep=5pt]
\item If ${\bf z} = (-2,1)$ then ${\bf y}$ ends either in $(0,1)$ or $(1,1)$. We see that $\Vert {\bf y} \Vert_1 = \Vert {\bf x} \Vert_1 - 1$.
\item If ${\bf z} = (-2,-1, \cdots ,-1,1)$ then ${\bf y}$ ends in $(x_j-2,-1,-1, \cdots ,-1,1)$ where the first digit is $0$ when $x_j = 2$ and $1$ when $x_j = 3$.
We see that $\Vert {\bf y} \Vert_1 \geq \Vert {\bf x} \Vert_1$.
\end{itemize}
\smallskip
As ${k_{\y}} > {k_{\xx}} \geq \max(u,w)$ it then follows from the second word length formula in Lemma~\ref{lemma:length_formula} that ${\bf x} <_{u,w} {\bf y}$, contradicting our assumption that ${\bf y} <_{u,w} {\bf x}$.
So this case does not occur.
\item If $x_j = 1$, we consider the possible values for $\alpha_\ell \in \{\pm 1, \pm 2\}$.
In all cases, the digit bounds on ${\mathcal B}_v^{u,w}$ imply that for $j \leq i \leq \ell-1$ we have $x_i \in \{0,1\}$.
\smallskip
\begin{itemize}[itemsep=5pt]
\item If $\alpha_\ell = 1$ and $x_\ell \ge 0$, then by definition $(x_j, \cdots ,x_{\ell})$ is a run at which ${\bf x}$ can be reduced.
\item If $\alpha_\ell = 1$ and $x_\ell < 0$, we have $x_\ell \in \{-1,-2,-3\}$.
For all possible values of $x_\ell$, note that $y_\ell < -1$, so ${k_{\y}} = \ell \ge \max(u,w)$. Since $y_{\ell+1} =0$, we have $x_{\ell+1} = -1$. Thus $x_\ell \notin \{-2,-3\}$.
If $x_\ell = -1$, then ${\bf x}$ ends with the digits $(-1,-1)$, and it follows from Lemma~\ref{lemma:n=2_110} that there is a run at which ${\bf x}$ can be reduced.
\item If $\alpha_{\ell} = 2$, then $z_\ell = -3$.
In order for ${\bf y}$ to satisfy the digit bounds on ${\mathcal B}_v^{u,w}$,
we must have $x_\ell \ge 0$, and hence $(x_j, \cdots ,x_{\ell})$ is a run at which ${\bf x}$ can be reduced.
\item If $\alpha_\ell = -2$ then $(z_{\ell},z_{\ell+1}) = (5,-2)$.
Hence $x_\ell \in \{-3,-2\}$, and ${k_{\xx}} = {k_{\y}} =\ell$. It follows that $x_{\ell+1} = y_{\ell+1} = 0$, which is impossible since $z_{\ell+1} = -2 \neq 0$. So this case does not occur.
\item If $\alpha_\ell = -1$, the same reasoning as in the above cases shows that for $j \leq i \leq \ell-1$ we have $x_i \in \{0,1\}$.
As $\alpha_{\ell} = -1$, the final digits of ${\bf z}$ are $(z_\ell,z_{\ell+1}) = (3,-1)$ and either ${k_{\xx}} = \ell$ or ${k_{\y}} = \ell$.
\smallskip
If ${k_{\y}} = \ell$, then it follows from the second observation above that ${k_{\xx}} = {k_{\y}}+1$ and $(x_{\ell},x_{\ell+1}) = (0,1)$ or $(-1,1)$. In the first case Lemma~\ref{lemma:n=2_notminimal} applies to ${\bf x}$.
In the second case, $(x_\ell) = (-1)$ is a run at which ${\bf x}$ can be reduced.
It is straightforward to verify that regardless of which length formula from Lemma~\ref{lemma:length_formula} is used, ${\bf x} - {\bf w}^{(\ell)} <_{u,w} {\bf x}$.
\smallskip
If ${k_{\xx}} = \ell$, then ${k_{\y}} = {k_{\xx}}+1$ and $(y_{\ell},y_{\ell+1})=(x_{\ell}+3, -1)$.
As $y_{\ell+1} \neq 0$ we must have $|y_\ell| \leq 1$, so $x_{{k_{\xx}}} \in \{-3,-2\}$ and ${k_{\xx}} \geq \max(u,w)$.
As ${k_{\xx}} = \ell$, recall that we can assume ${\bf y}$ is minimal.
As ${k_{\y}} > {k_{\xx}} \geq \max(u,w)$, we use the second length formula in Lemma~\ref{lemma:length_formula} to compute both $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
As ${\bf y}$ is minimal, we have
\begin{align*}
0 & \le |\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y})| \\
& = \Vert {\bf x} \Vert_1 - \Vert {\bf y} \Vert_1 + 2({k_{\xx}} -{k_{\y}}) \\
& = \ensuremath{\textnormal{weight}}(x_j,\dots,x_{\ell-1}) + |x_{\ell}| - |x_{\ell} +3| - 1 - 2.
\end{align*}
Hence $\ensuremath{\textnormal{weight}}(x_j,\dots,x_{\ell-1}) \ge 0$.
\smallskip
Let ${\bf r} = (x_j,\dots, x_{\ell-1})$; note that we do not include $x_\ell$ in ${\bf r}$ as $x_\ell<-1$. We show that ${\bf y}' = {\bf x} + \sum_{i=j}^{\ell-1}{\bf w}^{(i)} \le_{u,w} {\bf x}$, that is, ${\bf x}$ can be reduced at ${\bf r}$.
First note that as ${k_{\y}} = {k_{\xx}}+1 = \ell+1$ the fact that the maximal index in ${\bf r}$ is $\ell-1$ ensures that $k_{{\bf y}'} = {k_{\xx}}$.
Consequently, we use the same length formula to compute $|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y}')|$, so any difference between the lengths of the geodesics arises from the change in $\ell^1$ norm between the two vectors.
We compute
\begin{align*}
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y}')| &= \ensuremath{\textnormal{weight}}(x_j,\dots,x_{\ell-1}) + |x_{\ell}| - |x_{\ell} +1| \\
&> \ensuremath{\textnormal{weight}}(x_j,\dots,x_{\ell-1}) + |x_{\ell}| - |x_{\ell} +3| - 1 - 2 \\
&\ge 0,
\end{align*}
where the second, strict, inequality holds because $x_{{k_{\xx}}} = x_{\ell} \in \{-3,-2\}$.
Thus ${\bf x}$ can be reduced at ${\bf r}$.
\end{itemize}
\end{enumerate}
Considering all possible combinations of values of $\alpha_j$ and $x_j$, we have in each case shown that the combination does not arise, that Lemma~\ref{lemma:n=2_notminimal} applies to ${\bf x}$, or that ${\bf x}$ contains a run at which it can be reduced, proving the proposition.
\end{proof}
{\bf Proposition~\ref{lemma:n=2_adjacent_digits}.} {\em Let $n=2$ and ${\bf x} \in {\mathcal B}_v^{u,w}$ and ${k_{\xx}} < \max(u,w)$.
Then ${\bf x}$ is not minimal if and only if
${\bf x}$ contains the digits $(\delta, \delta)$
or $(\delta, -\delta)$, for $\delta \in \{\pm 1\}$. }
\begin{proof}
Suppose that ${\bf x}$ contains one of the above sequences of digits. As ${k_{\xx}} < \max(u,w)$, it follows from Lemma~\ref{lemma:n=2_110} that ${\bf x}$ is not minimal.
It remains to show the converse.
Suppose that ${\bf x}$ is not minimal. Since ${k_{\xx}} < \max(u,w)$, it follows from
Proposition~\ref{lemma:lemma318_n=2} that Lemma~\ref{lemma:n=2_notminimal} does not apply to ${\bf x}$, so there must be a run ${\bf r} = (x_j, \dots, x_\ell)$
at which ${\bf x}$ can be reduced.
Without loss of
generality, we assume that $\epsilon_{\bf r} = 1$. That is, there
is ${\bf z} = \sum_{i=j}^{\ell-1}\wf{i} + \alpha_{\ell}\wf{\ell}$ with
$\alpha_\ell \in \{1,2\}$ so that ${\bf y} = {\bf x} + {\bf z} <_{u,w} {\bf x}$.
As ${k_{\xx}} < \max(u,w)$ and ${k_{\y}} \leq {k_{\xx}}+1 \leq \max(u,w)$, we use the first length formula in Lemma~\ref{lemma:length_formula} to compute both
$|\eta_{u,v,w}({\bf x})|$ and $|\eta_{u,v,w}({\bf y})|$.
This formula does not depend on the values of ${k_{\xx}}$ and ${k_{\y}}$.
As ${\bf y} <_{u,w} {\bf x}$, we have
\begin{align*}
|\eta_{u,v,w}({\bf x})| - |\eta_{u,v,w}({\bf y})| &= \ensuremath{\textnormal{weight}}({\bf r}) + |x_{\ell+1}| - |y_{\ell+1}| \ge 0 .
\end{align*}
Moreover, if
the above difference is zero,
there must be a lexicographic reduction from ${\bf x}$ to ${\bf y}$.
Recall that $y_{\ell+1} = x_{\ell+1} + \alpha_\ell$ and $\alpha_\ell \in \{1,2\}$. As ${k_{\xx}} < \max(u,w)$ we know that $|x_{\ell+1}| \leq 1$.
First suppose that $x_{\ell+1}\ge 0$.
If $\alpha_\ell = 2$, then $ |x_{\ell+1}| - |y_{\ell+1}| = -2$, so $\ensuremath{\textnormal{weight}}({\bf r}) - 2 \ge 0$.
Recalling the definition of $\ensuremath{\textnormal{weight}}({\bf r})$, we see that ${\bf r}$ must contain at least three more $1$'s than $0$'s.
If $\alpha_\ell = 1$, then $ |x_{\ell+1}|-|y_{\ell+1}| = -1$,
and ${\bf r}$ must contain at least two more $1$'s than $0$'s.
In either case, it follows from Remark~\ref{remark:digit_sequences} that ${\bf r}$ either contains the digits $(1,1,0)$ or ends in $(1,1)$; either way, ${\bf r}$ contains $(1,1)$.
Next suppose that $x_{\ell+1} < 0$, which implies $x_{\ell+1}=-1$. We consider the possibilities for $x_\ell$.
\begin{enumerate}[itemsep=5pt]
\item If $x_\ell = 1$, then ${\bf x}$ contains the digits $(1,-1)$.
\item If $x_\ell = 0$ and $\alpha_\ell = 2$, then ${\bf z}$ ends
with the digits $(-3,2)$ forcing ${\bf y}$ to contain the digits $(-3,1)$, contradicting the digit restrictions on ${\mathcal B}_v^{u,w}$. So this case does not occur.
\item If $x_\ell = 0$ with $\alpha_\ell = 1$, then $|x_{\ell+1}|-|y_{\ell+1}| = 1$ and we have $\ensuremath{\textnormal{weight}}({\bf r}) +1 \geq 0$.
\smallskip
\begin{enumerate}[itemsep=5pt]
\item If $\ensuremath{\textnormal{weight}}({\bf r}) = -1$, so $|\eta_{u,v,w}({\bf x})| = |\eta_{u,v,w}({\bf y})|$,
there must be a lexicographical reduction from ${\bf x}$ to ${\bf y}$.
Since $x_\ell=0$ and the first digit of ${\bf r}$ is $x_j = 1$, we conclude that ${\bf r}$ has length at least 2.
To ensure the lexicographic reduction requires $x_{j+1} = 1$ and thus ${\bf r}$
begins with the digits $(1,1)$.
\item If $\ensuremath{\textnormal{weight}}({\bf r}) +1 > 0$, there must be at least one more occurrence of the digit 1 than the digit 0 in ${\bf r}$. Since we know that $x_j = 1$ and $x_\ell = 0$, this condition forces ${\bf r}$ to contain the digits $(1,1)$.
\end{enumerate}
\end{enumerate}
\end{proof}
\bibliographystyle{plain}
\bigskip
\noindent
{\it Source:} ``Conjugation Curvature in Solvable Baumslag-Solitar Groups,'' arXiv:2006.14525 (https://arxiv.org/abs/2006.14525).
\bigskip
\noindent
{\bf How to Choose a Champion} (arXiv: physics/0612217, https://arxiv.org/abs/physics/0612217)

\medskip
\noindent
{\bf Abstract.} League competition is investigated using random processes and scaling techniques. In our model, a weak team can upset a strong team with a fixed probability. Teams play an equal number of head-to-head matches and the team with the largest number of wins is declared to be the champion. The total number of games needed for the best team to win the championship with high certainty, $T$, grows as the cube of the number of teams, $N$, i.e., $T\sim N^3$. This number can be substantially reduced using preliminary rounds, in which teams play a small number of games and only the top teams advance to the next round. When there are $k$ rounds, the total number of games needed for the best team to emerge as champion, $T_k$, scales as $T_k\sim N^{\gamma_k}$ with $\gamma_k=1/[1-(2/3)^{k+1}]$; for example, $\gamma_k=9/5,\,27/19,\,81/65$ for $k=1,2,3$. These results suggest an algorithm for inferring the best team using a schedule that is linear in $N$. We conclude that the league format is an ineffective method of determining the best team, and that sequential elimination from the bottom up is fair and efficient.

\section{Introduction}
Competition is ubiquitous in physical, biological, sociological, and
economical processes. Examples include ordering kinetics where large
domains grow at the expense of small ones \cite{gss,ajb}, evolution
where fitter species thrive at the expense of weaker species
\cite{sjg}, social stratification where humans vie for social status
\cite{btd,msk,bvr}, and the business world where companies compete for
market share \cite{ms,ra}.
The world of sports provides an ideal laboratory for modeling
competition because game data are accurate, abundant, and
accessible. Moreover, since sports competitions are typically
head-to-head, sports can be viewed as an interacting particle system,
enabling analogies with physical systems that evolve via binary
interactions \cite{tmp,pn,brv,ta}. For instance, sports nicely
demonstrate that the outcome of a single competition is not
predictable \cite{tl,bvr1}. Over the past century the lower seeded
team had an astounding $44\%$ chance of defeating a higher seeded team
in baseball \cite{bvr1}. The same is true for other competitions in
arts, science, and politics. This inherent randomness has profound
consequences. Even after a long series of competitions, the best team
does not always finish first.
To understand how randomness affects the outcome of multiple
competitions, we study an idealized system. In our model league, there
are $N$ teams ranked from best to worst, so that in each match there
is a well-defined favorite and underdog. We assume that the weaker
team can defeat the stronger team with a fixed probability. Using
random walk properties and scaling techniques analogous to those used
in polymer physics \cite{dg,de}, we study the rank of the champion as
a function of the number of teams and the number of games. We find
that a huge number of games, $T\sim N^3$, is needed to guarantee that the
best team becomes the champion.
We suggest that a more efficient strategy to decide champions is to
set up preliminary rounds where a small number of games is played and
based on the outcome of these games, only the top teams advance to the
next round. In the final championship round, $M$ teams play a
sufficient number of games, of order $M^3$, to decide the champion. Using $k$
carefully constructed preliminary rounds, the required number of
games, $T_k$, can be reduced significantly
\begin{equation}
\label{tk}
T_k\sim N^{\gamma_k}\qquad \gamma_k=\frac{1}{1-(2/3)^{k+1}}.
\end{equation}
Remarkably, it is possible to approach the optimal limit of linear
scaling using a large number of preliminary rounds.
\section{League competition}
Our model league consists of $N$ teams that compete in head-to-head
matches. We assume that each team has an innate strength and that no
two teams are equal. The teams are ranked from $1$ (the best team) to
$N$ (the worst team). This ranking is fixed and does not evolve with
time. The teams play a fixed number of head-to-head games, and each
game produces a winner and a loser. In our model, the stronger (lower
seed) team is considered to be the favorite and the weaker (higher
seed) team is considered to be the underdog. The outcome of each match
is stochastic: the underdog wins with the upset probability $0<q<1/2$
and the favorite wins with the complementary probability $p=1-q$. The
team with the largest number of wins is the champion.
Since the better team does not necessarily win a game, the best team
does not necessarily win the championship. In this study, we address
the following questions: How many games are needed for the best team
to finish first? What is the typical rank of a champion decided by a
relatively small number of games? What is the optimal way to choose a
champion?
We answer these questions using scaling techniques. Consider the $n$th
ranked team with $1\leq n\leq N$. This team is inferior to a fraction
$\frac{n-1}{N-1}$ of the $N-1$ remaining teams and superior to a
fraction $\frac{N-n}{N-1}$ of the teams. Therefore, the probability
$P_n$ that this team wins a game against a randomly chosen opponent is
a linear combination of the probabilities $p$ and $q$,
\begin{equation}
\label{pn1}
P_n=p\,\frac{N-n}{N-1}+q\,\frac{n-1}{N-1}.
\end{equation}
Using $p=1-q$, the probability $P_n$ can be rewritten as follows
\begin{equation}
\label{pn}
P_n=p-(2p-1)\frac{n-1}{N-1}.
\end{equation}
The latter varies linearly with rank: it is largest for the best team,
$P_1=p$, and smallest for the worst team, $P_N=q$.
Now, suppose that the $n$th team plays $t$ games, each against a
randomly chosen opponent. The number of wins it accumulates, $w_n(t)$,
is a random quantity that grows as follows
\begin{equation}
w_n(t+1)=
\begin{cases}
w_n(t)+1&{\rm with\ probability\ }P_n\\
w_n(t) &{\rm with\ probability\ }1-P_n.\\
\end{cases}
\end{equation}
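The update rule above is easy to check numerically. The following sketch (our illustration, with arbitrary parameter choices) simulates the win record of the $n$th ranked team and compares the sample mean and variance of $w_n(t)$ with $P_n t$ and $P_n(1-P_n)t$:

```python
import random

def win_record_stats(n, N, q, t, trials, seed=0):
    """Monte Carlo mean and variance of w_n(t), the win record of the
    n-th ranked team after t games against random opponents, where the
    per-game win probability is P_n = p - (2p - 1)(n - 1)/(N - 1)."""
    rng = random.Random(seed)
    p = 1 - q
    P = p - (2 * p - 1) * (n - 1) / (N - 1)
    wins = [sum(rng.random() < P for _ in range(t)) for _ in range(trials)]
    mean = sum(wins) / trials
    var = sum((w - mean) ** 2 for w in wins) / trials
    return P, mean, var

P, mean, var = win_record_stats(n=3, N=10, q=0.25, t=1000, trials=2000)
print(mean, P * 1000)           # sample mean vs. W_n(t) = P_n t
print(var, P * (1 - P) * 1000)  # sample variance vs. P_n (1 - P_n) t
```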
The initial condition is $w_n(0)=0$. The number of wins performs a
biased random walk and as a result, when the number of games is large,
the quantity $w_n(t)$ is well-characterized by its average
\hbox{$W_n(t)=\langle w_n(t)\rangle$} and its standard deviation
$\sigma_n(t)$, defined via \hbox{$\sigma^2_n(t)=\langle
w_n^2(t)\rangle-\langle w_n(t)\rangle^2$}. Here, the brackets denote
averaging over infinitely many realizations of the random
process. Since the outcome of a game is completely independent of all
other games, the average number of wins and the variance in the number
of wins are both proportional to the number of games played
\begin{subequations}
\begin{align}
\label{vn}
W_n(t)&=P_n\,t\\
\label{wn}
\sigma^2_n(t)&=P_n(1-P_n)\,t.
\end{align}
\end{subequations}
Both of these quantities follow from the behavior after one game:
since $w_n(1)=1$ with probability $P_n$ and $w_n(1)=0$ with
probability $1-P_n$, then \hbox{$\langle w_n(1)\rangle=\langle
w_n^2(1)\rangle=P_n$}. Moreover, the distribution of the number of
wins is binomial and for large $t$, it approaches a Gaussian, fully
characterized by the average and the standard deviation \cite{nvk}.
The quantities $W_n$ and $\sigma_n$ can be used to understand key
features of this system. Let us assume that each team plays $t$ games
against randomly selected opponents and compare the best team with the
$n$th ranked team. Since $P_1>P_n$, the best team accumulates wins at
a faster rate, and after playing sufficiently many games, the best
team should be ahead. However, since there is a diffusive-like
uncertainty in the number of wins, \hbox{$\sigma_n\sim \sqrt{t}$}, it
is possible that the $n$th ranked team has more wins when $t$ is
small. The number of wins of the $n$th team is comparable with that
of the best team as long as $W_1(t)-W_n(t)\propto \sigma_1(t)$, or
\begin{equation}
\label{compare}
(2p-1)\,\frac{n-1}{N-1}\,t\propto \sqrt{t}.
\end{equation}
Since the diffusion coefficient $D_n=P_n(1-P_n)$ in (\ref{wn}) varies
only weakly with $n$, $pq\leq D_n\leq 1/4$, we ignore this weak
dependence. When these two teams have a comparable number of wins, they
have comparable chances to finish first. Hence, Eq.~(\ref{compare})
yields the characteristic rank of the champion, $n_*$, as a function
of the number of teams $N$ and the number of games $t$
\begin{equation}
\label{nt}
n_*\sim \frac{N}{\sqrt{t}}.
\end{equation}
Since we are primarily interested in the behavior as a function of $t$
and $N$, the dependence on the probability $p$ is henceforth left
implicit. As expected, the champion becomes stronger as the number of
games increases (recall that small $n$ represents a stronger team). By
substituting $n_*\sim 1$ into (\ref{nt}), we deduce that the total
number of games, $t_*$, needed for the best team to win is $t_*\sim
N^2$.
Since each of the $N$ teams plays $t_*\sim N^2$ games, the total
number of games required for the best team to emerge as the champion
with high certainty grows as the cubic power of the number of teams,
\begin{equation}
\label{tn}
T\sim N^3.
\end{equation}
This result has significant implications. In most sports leagues, two
teams face each other a fixed number of times, usually once or twice.
The corresponding total of $\sim N^2$ games is much smaller
than (\ref{tn}). In this common league format, the typical rank of
the champion scales as $n_*\sim \sqrt{N}$. Such a season is much too
short as it enables weak teams to win championships. Indeed, it is
not uncommon for the top two teams to trade places until the very end
of the season or for two teams to tie for first, a clear indication
that the season length is too short.
We may also consider the probability distribution $Q_n(t)$ for the
$n$th ranked team to win after $t$ games. We expect that the scale
$n_*$ characterizes the entire distribution function,
\begin{equation}
\label{qn-scaling}
Q_n\sim \frac{1}{n_*}\,\psi\left(\frac{n}{n_*}\right).
\end{equation}
Assuming $\psi(0)$ is finite, the probability that the best team wins
scales as $Q_1\sim 1/n_*$. This quantity first grows,
\hbox{$Q_1(t)\sim\sqrt{t}/N$} when $t\ll N^2$, and then saturates,
$Q_1(t)\approx 1$ when $t\gg N^2$.
The likelihood of major upsets is quantified by the tail of the
scaling function $\psi(z)$. Generally, the champion wins $pt$ games
(we neglect the diffusive correction). The probability that the
weakest team becomes champion by reaching that many wins is
\hbox{$Q_N(t)\sim {t\choose pt}q^{pt}p^{qt}\sim (q/p)^{(p-q)t}$} where
the asymptotic behavior follows from the Stirling formula
\hbox{$\ln t!\sim t\ln t-t$}. We conclude that the probability of the
weakest team winning decays exponentially with the number of games,
$Q_N(t)\sim \exp(-\,{\rm const}\times t)$. Yet, from (\ref{qn-scaling})
and (\ref{nt}), $Q_N(t)\sim \psi\left(\sqrt{t}\,\right)$, and therefore,
the tail of the probability distribution is Gaussian
\begin{equation}
\label{tail}
\psi(z)\sim \exp\left(-\,{\rm const}\times z^2\right)
\end{equation}
as $z\to\infty$ thereby implying that upset champions are extremely
improbable. We note that single-elimination tournaments produce upset
champions with a much higher probability because the corresponding
distribution function has an algebraic tail \cite{brv}. We conclude
that leagues have a much narrower range of outcomes and in this sense,
leagues are more fair than tournaments.
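The exponential decay rate quoted above can be checked directly. The sketch below (ours, with $q=1/4$ chosen for illustration) evaluates $\ln\left[{t\choose pt}q^{pt}p^{qt}\right]/t$ exactly via the log-gamma function and compares it with the predicted rate $(p-q)\ln(q/p)$:

```python
import math

def log_upset_prob(t, q):
    """Exact log of C(t, pt) q^{pt} p^{qt}: the probability that the weakest
    team (per-game win probability q) accumulates pt wins out of t games."""
    p = 1 - q
    pt, qt = p * t, q * t
    log_binom = math.lgamma(t + 1) - math.lgamma(pt + 1) - math.lgamma(qt + 1)
    return log_binom + pt * math.log(q) + qt * math.log(p)

q = 0.25
p = 1 - q
rate = (p - q) * math.log(q / p)  # predicted decay rate of (q/p)^{(p-q)t}
for t in (1000, 4000, 16000):
    print(t, log_upset_prob(t, q) / t, rate)
```

The ratio converges to the predicted rate as $t$ grows, with the expected $O(\ln t/t)$ Stirling correction.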
\section{Preliminary Rounds}
With such a large number of games, the ordinary league format is
highly inefficient. How can we devise a schedule that produces the
best team as the champion with the least number of games? The answer
involves preliminary rounds. In a preliminary round, teams play a
small number of games and only the top teams advance to the next
round.
Let us consider a two stage format. The first stage is a preliminary
round where teams play $t_1$ games and then, the teams are ranked
according to the outcome of these games. The top $M\ll N$ teams
advance to the final round \cite{clean}, and the rest are
eliminated. The final championship round proceeds via a league format
with plenty of games to guarantee that the best team ends up at the
top.
We assume that the number of teams advancing to the second round grows
sub-linearly
\begin{equation}
\label{m-def}
M\sim N^{\alpha_1},
\end{equation}
with $\alpha_1<1$. Of course, we had better not eliminate the best team.
The number of games $t_1$ required for the top team to finish no worse
than $M$th place is obtained by substituting $n_*\sim M$ into
(\ref{nt}), $t_1\sim N^2/M^2$. Since each of the $N$ teams plays
$t_1$ games, the total number of games in the preliminary round is of
the order \hbox{$Nt_1\sim N^3/M^2\sim N^{3-2\alpha_1}$}. Directly from
(\ref{tn}), the number of games in the final round is $M^3\sim
N^{3\alpha_1}$. Adding these two contributions, the total number of
games, $T_1$, is
\begin{equation}
\label{t1-eq}
T_1\sim N^{3-2\alpha_1}+N^{3\alpha_1}.
\end{equation}
This quantity grows algebraically with the number of teams, $T_1\sim
N^{\gamma_1}$ with $\gamma_1={\rm max}(3-2\alpha_1,3\alpha_1)$ and
this exponent is minimal, $\gamma_1=9/5$, when
\begin{equation}
\label{alpha1}
\alpha_1=3/5.
\end{equation}
Consequently, $t_1\sim N^{4/5}$.
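The optimization of $\gamma_1$ can be verified with a one-line grid search (our sketch; the grid resolution is arbitrary):

```python
# Grid search for the exponent alpha_1 minimizing
# gamma_1(a) = max(3 - 2a, 3a); the two branches balance at a = 3/5.
best_a, best_g = min(
    ((a / 1000, max(3 - 2 * (a / 1000), 3 * (a / 1000))) for a in range(1, 1000)),
    key=lambda pair: pair[1],
)
print(best_a, best_g)  # close to 3/5 and 9/5
```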
Thus, it is possible to significantly improve upon the ordinary league
format using a two-stage procedure. The first stage is a preliminary
round in which each of the $N$ teams plays $t_1 \sim N^{4/5}$ games
and then the top $M\sim N^{3/5}$ teams advance to the final round. The
rest of the teams are eliminated. The first preliminary round
requires $N^{9/5}$ games. In the final round the remaining teams play
in a league with each of the possible ${M\choose 2}$ pairs of teams
playing each other $M$ times. Again the number of games is $N^{9/5}$
so that in total,
\begin{equation}
T_1\sim N^{9/5}
\end{equation}
games are played. This is a substantial improvement over ordinary
$N^3$ league play.
Multiple preliminary rounds further reduce the number of games.
Introducing an additional round, there are now three stages: the first
preliminary round, the second preliminary round, and the championship
round. Out of the first round $N^{\alpha_2}$ teams proceed to the
second round and then, $N^{\alpha_1\alpha_2}$ teams proceed to the
championship round. The total number of games $T_2$ is a
straightforward generalization of (\ref{t1-eq})
\begin{equation}
\label{t2-eq}
T_2\sim N^{3-2\alpha_2}+N^{\alpha_2(3-2\alpha_1)}+N^{3\alpha_1\alpha_2}.
\end{equation}
These three terms account respectively for the first round, the second
round, and the final round. The first term is analogous to the first
term in (\ref{t1-eq}), and the last two terms are obtained by
replacing $N$ with $N^{\alpha_2}$ in (\ref{t1-eq}). The total number
of games is minimal when all three terms are of the same
magnitude. Comparing the last two terms gives $3-2\alpha_1=3\alpha_1$
and therefore, (\ref{alpha1}) is recovered. Comparing the first two
terms gives
\begin{equation}
\label{alpha2-eq}
3-2\alpha_2=\alpha_2(3-2\alpha_1).
\end{equation}
Thus, $\alpha_2=15/19$ and since $\alpha_2>\alpha_1$, the first
elimination is less drastic than the second one. The total number of
games, $T_2\sim N^{27/19}$, represents a further improvement.
These results indicate that it is possible to systematically reduce
the total number of games via successive preliminary rounds that lead
to the final championship round. In the most general case, there are
$k$ preliminary rounds in addition to the final round. The number of
teams advancing to the second round, $M_k$, grows as follows
\begin{equation}
\label{mk}
M_k\sim N^{\alpha_k}.
\end{equation}
From (\ref{alpha2-eq}), the exponent $\alpha_k$ obeys the recursion
relation $3-2\alpha_{k+1}=\alpha_{k+1}(3-2\alpha_k)$ or equivalently,
\begin{equation}
\label{alpha-eq}
\alpha_{k+1}=\frac{3}{5-2\alpha_k}.
\end{equation}
By using $\alpha_1=3/5$ we deduce the initial element in this series,
$\alpha_0=0$. Introducing the transformation $\alpha_k=a_k/a_{k+1}$
reduces (\ref{alpha-eq}) to the Fibonacci-like recursion
\hbox{$3a_{k+2}=5a_{k+1}-2a_k$}. The general solution of this equation
is \hbox{$a_k=A\,r_1^k+B\,r_2^k$} where $r_1=1$ and $r_2=2/3$ are the
two roots of the quadratic equation $3r^2=5r-2$. The coefficients
follow from the zeroth element: $\alpha_0=0$ implies $a_0=0$ and
consequently, $a_k=A\left[1-(2/3)^k\right]$. Therefore,
\begin{equation}
\label{alpha-sol}
\alpha_k=\frac{1-\left(2/3\right)^k}{1-\left(2/3\right)^{k+1}}.
\end{equation}
The exponent $\alpha_k\approx 1-\frac{1}{3}\left(\frac{2}{3}\right)^k$
(for $k \gg 1$) converges exponentially to one (Table 1). This means
that the number of teams advancing from the first to the second
preliminary round is increasing with the total number of preliminary
rounds played. Nonetheless, the fraction of teams that are eliminated
$1-N^{\alpha_k-1}$ converges to one as $N\to\infty$. Hence, nearly
all of the teams are eliminated in large leagues.
\begin{table}[t]
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\,$k$\,&\,0\,&\,1\,&\,2\,&\,3\,&\,4\,&\,5\,&$\,\infty$\,\\
\hline
$\alpha_k$&0&$\frac{3}{5}$&$\frac{15}{19}$&$\frac{57}{65}$&$\frac{195}{211}$&$\frac{633}{665}$&1\\
$\beta_k$ &1&$\frac{4}{5}$&$\frac{8}{19} $&$\frac{16}{65}$&$\frac{32}{211} $&$\frac{64}{665} $&0\\
$\gamma_k$&3&$\frac{9}{5}$&$\frac{27}{19}$&$\frac{81}{65}$&$\frac{243}{211}$&$\frac{729}{665}$&1\\
\hline
\end{tabular}
\caption{The exponents $\alpha_k$, $\beta_k$, and $\gamma_k$
characterizing $M_k$, the number of teams advancing from the first
round, $t_k$, the number of games played by a team in the first round,
and $T_k$, the total number of games, as a function of the number of
preliminary rounds $k$.}
\end{table}
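The exponents in Table 1 can be regenerated exactly with rational arithmetic. The sketch below (ours) iterates the recursion $\alpha_{k+1}=3/(5-2\alpha_k)$, checks it against the closed form (\ref{alpha-sol}), and applies the relations $\beta_k=2(1-\alpha_k)$ and $\gamma_k=3-2\alpha_k$:

```python
from fractions import Fraction

# Regenerate the exponents of Table 1 with exact rational arithmetic:
# alpha_{k+1} = 3/(5 - 2 alpha_k), beta_k = 2(1 - alpha_k), gamma_k = 3 - 2 alpha_k.
rows = []
alpha = Fraction(0)  # alpha_0 = 0
r = Fraction(2, 3)
for k in range(6):
    closed = (1 - r**k) / (1 - r**(k + 1))  # closed form for alpha_k
    assert alpha == closed
    rows.append((k, alpha, 2 * (1 - alpha), 3 - 2 * alpha))
    alpha = 3 / (5 - 2 * alpha)
for row in rows:
    print(*row)
```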
The number of games played by a team in the first round, $t_k$,
follows from (\ref{mk})
\begin{equation}
\label{tauk}
t_k\sim N^{\beta_k},\qquad \beta_k=2(1-\alpha_k).
\end{equation}
Since $\beta_k\to 0$ as \hbox{$k\to\infty$}, only a small number of
games is played in the opening round. Using $T_k\sim Nt_k$, we arrive
at our main result (\ref{tk}) where $\gamma_k=3-2\alpha_k$.
Surprisingly, the total number of games is roughly linear in the
number of teams
\begin{equation}
\label{linear}
T_k\sim N,
\end{equation}
when a large number of preliminary rounds is used, i.e.,
\hbox{$k\to\infty$} \cite{small}. Clearly, this linear scaling is
optimal since every team must play at least once. The asymptotic
behavior $\gamma_k\approx 1+\left(\frac{2}{3}\right)^{k+1}$ implies
that in practice, a small number of preliminary rounds suffices. For
example, $\gamma_4=\frac{243}{211}=1.15165$ (Table I).
We emphasize that in a $k$-round format, the top $N^{\alpha_k}$ teams
proceed to the second round, out of which the top
$N^{\alpha_{k-1}\alpha_k}$ teams proceed to the third round, and so
on. The number of teams proceeding from the $k$th round to the
championship round is $M\sim N^{\alpha_1\alpha_2\cdots\alpha_k}$. From
(\ref{linear}) and $T\sim M^3$, the size of the championship round
approaches
\begin{equation}
\label{M-scaling}
M\sim N^{1/3}
\end{equation}
as $k\to \infty$. This is the optimal size of a playoff that produces
the best champion using the least number of games.
\section{Numerical Simulations}
Our scaling analysis is heuristic: we assumed that $N$ is very large
and we ignored numerical constants. To verify the applicability of
our asymptotic results to moderately sized leagues, we performed
numerical simulations with $N$ teams that play an equal number of $t$
games against randomly selected opponents. The outcome of each game is
stochastic: with probability $p$ the favorite wins and with
probability $q=1-p$, the underdog wins. We present simulation results
for $q=1/4$.
\begin{figure}[h]
\includegraphics*[width=0.425\textwidth]{fig1.eps}
\caption{The average rank of the champion, $\langle n_*\rangle$, of a
league with $N$ teams after $t$ games. The simulation results
represent an average over $10^3$ independent realizations with
$N=10^2$, $10^3$, and $10^4$. A line of slope $-1/2$, predicted by
Eq.~(\ref{nt}), is plotted as a reference.}
\end{figure}
The most important theoretical prediction is the relation (\ref{nt})
between the rank of the winner, the number of games, and the size of
the league. To test this prediction, we measured the average rank of
the winner as a function of the number of games $t$, for leagues of
various sizes. In the simulations, it is convenient to shift the rank
by one: the teams are ranked from $n=0$ (the best team) to $n=N-1$
(the worst team). With this definition, the average rank decreases
indefinitely with $t$. The simulations show that $n_*/N^{1/2}\sim
(t/N)^{-1/2}$, thereby confirming the theoretical prediction
(figure 1).
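A minimal version of this simulation can be sketched as follows (ours, with small sizes and a fixed seed for speed; random pairings are drawn so that each team plays roughly $t$ games):

```python
import random

def league_champion_rank(N, t, q, rng):
    """One realization: N*t/2 random pairings, so each team plays about t
    games; the favorite (lower index) wins with probability p = 1 - q.
    Returns the shifted rank (0 = best team) of the team with most wins."""
    p = 1 - q
    wins = [0] * N
    for _ in range(N * t // 2):
        i = rng.randrange(N)
        j = rng.randrange(N - 1)
        if j >= i:
            j += 1  # opponent distinct from team i
        fav, und = (i, j) if i < j else (j, i)
        wins[fav if rng.random() < p else und] += 1
    # ties broken toward the lower index; an arbitrary choice
    return max(range(N), key=lambda n: (wins[n], -n))

rng = random.Random(2)
N, q, trials = 100, 0.25, 100
avg = {t: sum(league_champion_rank(N, t, q, rng) for _ in range(trials)) / trials
       for t in (64, 256)}
print(avg)  # n_* ~ N/sqrt(t): quadrupling t should roughly halve the rank
```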
\begin{figure}[h]
\includegraphics*[width=0.4\textwidth]{fig2.eps}
\caption{The average number of games $\langle T\rangle$ needed for the
best team to emerge as the champion of a league with $N$ teams. The
simulation results, representing an average over $10^3$ independent
realizations, are compared with the theoretical prediction
(\ref{tn}).}
\end{figure}
To validate (\ref{tn}), we simulated leagues with a large enough
number of games, so that the best team wins with certainty. For every
realization there is a number of games $T$ after which the champion
takes the lead for good. The average of this random variable, $\langle
T\rangle$, measured from the simulations, is in excellent agreement
with the theoretical prediction (figure 2).
The simulations also confirm that the scale $n_*$ characterizes the
entire distribution as in (\ref{qn-scaling}). Numerically, we find
that the tail of the scaling function is super-exponential,
$\psi(z)\sim \exp(-z^\mu)$ with $\mu > 1$. The observed tail behavior
is consistent with $\mu=2$, although the numerical evidence is not
conclusive.
\begin{figure}[h]
\includegraphics*[width=0.4\textwidth]{fig3.eps}
\caption{The rank distribution of the league winner for ordinary league
format ($t=N$). Shown is the
scaled distribution $\sqrt{N}\,Q_n(t=N)$ versus the scaling variable
$n/\sqrt{N}$. The simulation data were obtained using $10^6$
independent Monte Carlo runs.}
\end{figure}
To verify our prediction that multiple elimination rounds, following
the format suggested above, reduce the number of games, we simulated a
single elimination round ($k=1$). In the first stage, a total of
$N^{9/5}$ games are played. All teams are then ranked according to
the number of wins and the top $M=N^{3/5}$ teams proceed to the
championship round. This final round has an ordinary league format
with a total of $M^3$ games. We simulated three leagues of respective
sizes $N=10^1$, $N=10^2$, and $N=10^3$, and observed that the best
team wins with a frequency of $70\%$. The champion is among the top
three teams in $98\%$ of the cases (these percentages are independent
of $N$). As a reference, in an ordinary league with a total of $N^3$
games, the best team also wins with a likelihood of
$70\%$. Remarkably, even for as few as $N=10$ teams, the one
preliminary round format reduces the number of games by a factor
$>10$. We conclude that the scaling results are useful at moderate
league size $N$.
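A minimal implementation of this one-round experiment is sketched below (ours; the tie-breaking toward the better seed is an arbitrary choice):

```python
import random

def two_stage_champion(N, q, rng):
    """One realization of the k=1 format: a preliminary round with about
    N^{9/5} total games, after which the top M ~ N^{3/5} teams play a
    final round of M^3 games. Returns the champion (0 = best team)."""
    p = 1 - q

    def play(teams, total_games):
        wins = {i: 0 for i in teams}
        for _ in range(total_games):
            i, j = rng.sample(teams, 2)
            fav, und = (i, j) if i < j else (j, i)
            wins[fav if rng.random() < p else und] += 1
        # rank by wins, ties broken toward the better seed (arbitrary)
        return sorted(teams, key=lambda i: (-wins[i], i))

    ranking = play(list(range(N)), round(N ** 1.8))
    M = round(N ** 0.6)
    return play(ranking[:M], M ** 3)[0]

rng = random.Random(1)
trials = 400
freq = sum(two_stage_champion(10, 0.25, rng) == 0 for _ in range(trials)) / trials
print(freq)  # the text reports that the best team wins about 70% of the time
```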
\section{Imperfect champions}
Let us relax the condition that the best team must win and implement a
less rigorous championship round. Given a total of $T \sim M^c$ games
with $1 \leq c\leq 3$, each team plays $t\sim M^{c-1}$ games. From
(\ref{nt}), the typical rank of the winner scales as
\begin{equation}
\label{abc}
n_*\sim M^{\frac{3-c}{2}}.
\end{equation}
Suppose that there are infinitely many preliminary rounds. The
analysis in Section III reveals that the total number of games scales
linearly, $T\sim M^c\sim N$, and consequently, $M\sim
N^{1/c}$. Therefore, there is a scaling relation between the rank of
the winner and the number of teams $n_*\sim N^{\frac{3-c}{2c}}$.
Indeed, the value $c=3$ produces the best champion. The common league
format ($c=2$) leads to $n_*\sim N^{1/4}$, an improvement over the
ordinary $N^{1/2}$ behavior.
If there is one preliminary round, Eq.~(\ref{t1-eq}) becomes $T_1\sim
N^{3-2\alpha_1}+N^{c\alpha_1}$ and therefore, $\alpha_1=3/(2+c)$.
Generally for $k$ preliminary rounds, the exponent $\alpha_k$
satisfies the recursion relation (\ref{alpha-eq}), and the scaling
relations $\gamma_k=3-2\alpha_k$ and $\beta_k=2(1-\alpha_k)$ remain
valid. We quote the value
\begin{equation}
\gamma_k=\frac{1}{1-\frac{c-1}{c}\left(\frac{2}{3}\right)^k}
\end{equation}
that characterizes the total number of games, $T\sim
N^{\gamma_k}$. From $T\sim M^c\sim N^{\gamma_k}$, we conclude $M\sim
N^{\gamma_k/c}$. Substituting this relation into (\ref{abc}) yields
\begin{equation}
\label{sub}
n_*\sim N^{\nu_k},\qquad \nu_k=\frac{\gamma_k(3-c)}{2c}.
\end{equation}
Using ordinary league play ($c=2$) and one preliminary round,
N^{3/2}$ games are sufficient to produce an imperfect champion of
typical rank $n_*\sim N^{3/8}$. Finally, we note that if each team
plays a finite number of games ($c=1$), all of the teams have a
comparable chance of winning because $\nu_k = \gamma_k \equiv 1$.
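The exponents for general $c$ can be tabulated exactly (our sketch; rational arithmetic avoids rounding):

```python
from fractions import Fraction

def exponents(c, kmax):
    """gamma_k = 1/(1 - ((c-1)/c)(2/3)^k) and nu_k = gamma_k (3 - c)/(2c)
    for a championship round with T ~ M^c total games."""
    r = Fraction(2, 3)
    out = []
    for k in range(kmax + 1):
        gamma = 1 / (1 - Fraction(c - 1, c) * r**k)
        out.append((k, gamma, gamma * Fraction(3 - c, 2 * c)))
    return out

for k, gamma, nu in exponents(c=2, kmax=3):
    print(k, gamma, nu)  # k = 1 gives gamma = 3/2 and nu = 3/8
```

Setting $c=3$ recovers the exponents $\gamma_k=1/[1-(2/3)^{k+1}]$ of Eq.~(\ref{tk}).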
\section{Conclusions}
In summary, we studied the dynamics of league competition with fixed team
strength and a finite upset probability. We demonstrated that ordinary
league play where all teams play an equal number of games requires a
very large number of games for the best team to win with certainty. We
also showed that a series of preliminary rounds with a small but
sufficient number of games to successively eliminate the weakest teams is
a fair and efficient way to identify the champion. We obtained scaling
laws for the number of advancing teams and the number of games in each
preliminary round. Interestingly, it is possible to determine the best
team by having teams play, on average, only a finite number of games
(independent of league size). The optimal size of the final
championship round scales as the one-third power of the number of
teams.
Empirical validation of these results with real data may be possible
using sports leagues, for example. The challenge is that the inherent
strength of each team is not known. In professional sports, a team's
budget can serve as a proxy for its strength. With this definition,
the average rank of the American baseball World Series champion, over
the past 30 years, equals 6. There are however huge fluctuations:
while the top team won 7 times, a team ranked as low as 26 (2003
Florida Marlins) also won.
With wide ranging applications, including for example evolution
\cite{jk,smd}, leadership statistics is a challenging extreme
statistics problem because the record of one team constrains the
records of all other teams. Our scaling approach, based on the record
a fixed team, ignores such correlations. While these correlations do
not affect the scaling laws, they do affect the distribution of
outcomes such as the distribution of the rank of the winner, and the
distribution of the number of games needed for the best team to take
the lead for good. Other interesting questions include the expected
number of distinct leaders, and the number of lead changes as a
function of league size \cite{kr,bk}.
\noindent{\bf Acknowledgments.} We thank David Roberts for useful
discussions. We acknowledge financial support from DOE grant
DE-AC52-06NA25396.
{
  "timestamp": "2006-12-22T04:22:18",
  "yymm": "0612",
  "arxiv_id": "physics/0612217",
  "language": "en",
  "url": "https://arxiv.org/abs/physics/0612217",
  "abstract": "League competition is investigated using random processes and scaling techniques. In our model, a weak team can upset a strong team with a fixed probability. Teams play an equal number of head-to-head matches and the team with the largest number of wins is declared to be the champion. The total number of games needed for the best team to win the championship with high certainty, T, grows as the cube of the number of teams, N, i.e., T ~ N^3. This number can be substantially reduced using preliminary rounds where teams play a small number of games and subsequently, only the top teams advance to the next round. When there are k rounds, the total number of games needed for the best team to emerge as champion, T_k, scales as follows, T_k ~N^(\\gamma_k) with gamma_k=1/[1-(2/3)^(k+1)]. For example, gamma_k=9/5,27/19,81/65 for k=1,2,3. These results suggest an algorithm for how to infer the best team using a schedule that is linear in N. We conclude that league format is an ineffective method of determining the best team, and that sequential elimination from the bottom up is fair and efficient.",
  "subjects": "Physics and Society (physics.soc-ph); Statistical Mechanics (cond-mat.stat-mech); Probability (math.PR)",
  "title": "How to Choose a Champion"
}
https://arxiv.org/abs/2007.06652
On even entries in the character table of the symmetric group
Abstract: We show that almost every entry in the character table of $S_n$ is even as $n\to\infty$. This resolves a conjecture of Miller. We similarly prove that almost every entry in the character table of $S_n$ is zero modulo $3,5,7,11,$ and $13$ as $n\to\infty$, partially addressing another conjecture of Miller.
\section{Introduction}\label{sec1}
In~\cite{Miller2019}, Miller conjectured that the proportion of odd entries in the character table of $S_n$ goes to zero as $n$ goes to infinity, based on computational data for $n$ up to $76$. It has been known for a long time, due to work of McKay~\cite{McKay1972}, that only a vanishingly small proportion of characters of the symmetric group have odd degree. Recently, Gluck~\cite{Gluck2019} showed that, more generally, the density of odd entries tends to zero in columns of the character table corresponding to cycle types whose odd factor has few distinct cycle lengths. This is still a very sparse set of columns, however. Morotti~\cite{Morotti2020} has also shown that, for any fixed prime $p$, a different sparse set of columns consists almost completely of multiples of $p$.
In this paper, we prove Miller's conjecture.
\begin{theorem}\label{thm1.1}
The density of odd entries in the character table of $S_n$ goes to $0$ as $n\to\infty$.
\end{theorem}
We will present two proofs of this result. The first is shorter and specific to the prime $2$. The second is longer, but generalizes to some other primes, and is quantitatively stronger. While the conjecture resolved by Theorem~\ref{thm1.1} is the main focus of~\cite{Miller2019}, Miller also presents data for the number of entries in the character table of $S_n$ (for $n$ up to $38$) that are divisible by $3$ and $5$. Based on this, he conjectured that, for any fixed prime $p$, the proportion of the character table of $S_n$ that is not divisible by $p$ tends to zero as $n$ goes to infinity. We also verify this conjecture for $p\leq 13$.
\begin{theorem}\label{thm1.2}
Let $p=2,3,5,7,11,$ or $13$. There exists $\delta_p>0$ such that the density of entries of the character table of $S_n$ that are not divisible by $p$ is at most $O_p(n^{-\delta_p})$.
\end{theorem}
In contrast to the situation modulo primes, it appears from the data in~\cite{Miller2019} that the proportion of zeros in the character table of $S_n$ may tend to a positive constant. Our methods do not say anything on this matter, though very recently Larsen and Miller~\cite{LarsenMiller2020} have shown that the proportion of nonzero entries of character tables of groups of Lie type goes to zero as the rank goes to infinity.
The remainder of this paper is organized as follows. In Section~\ref{sec2}, we recall some useful facts about characters of the symmetric group and set up the proofs of Theorems~\ref{thm1.1} and~\ref{thm1.2}. We prove Theorem~\ref{thm1.1} in Section~\ref{sec3} using a short argument specific to the prime $2$, and prove the more general Theorem~\ref{thm1.2} in Section~\ref{sec4}.
\section*{Acknowledgments}
The author thanks Persi Diaconis for bringing this problem to her attention and David Gluck and Alex Miller for helpful comments on earlier drafts of this paper.
The author is supported by the NSF Mathematical Sciences Postdoctoral Research Fellowship Program under Grant No. DMS-1903038.
\section{Preliminaries and setting up the arguments}\label{sec2}
For any two partitions $\lambda$ and $\mu$ of $n$, let $\chi_{\mu}^{\lambda}$ denote the value of the character of $S_n$ corresponding to the partition $\lambda$ on the conjugacy class of permutations with cycle type $\mu$. The proofs of Theorems~\ref{thm1.1} and~\ref{thm1.2} will require a couple of basic results about characters of the symmetric group. The first gives us a useful condition for determining character values modulo primes (see, for example, Section~3 of~\cite{OdlyzkoRains2000} or Proposition~1 of~\cite{Miller2019}).
\begin{lemma}\label{lem2.1}
Let $\lambda$ and $\mu$ be partitions of $n$. If $\nu$ is another partition of $n$ that can be obtained from $\mu$ by replacing $p$ parts of the same size $m$ by one part of size $pm$, then we have
\[
\chi_{\mu}^{\lambda}\equiv\chi_{\nu}^\lambda\Mod{p}.
\]
\end{lemma}
To state the next result, we will need to recall some concepts from the study of partitions. For any box $u$ in the Young diagram of a partition $\lambda$, the \textit{hook-length} of $u$ is the number of boxes directly to the right of $u$ plus the number of boxes directly below $u$ plus $1$. To illustrate, the Young diagram of $\lambda=(5,3,2,1,1)$ below has each box labeled with its hook-length.
\begin{center}
\begin{figure}[h]
\ytableausetup{mathmode}
\begin{ytableau}
9 & 6 & 4 & 2 & 1 \\
6 & 3 & 1 \\
4 & 1 \\
2 \\
1 \\
\end{ytableau}
\caption{Hook-lengths for $\lambda=(5,3,2,1,1)$.}
\label{fig1}
\end{figure}
\end{center}
For any positive integer $t$, we say that a partition is a \textit{$t$-core} if its Young diagram contains no hooks of length divisible by $t$. From Figure~\ref{fig1} we see that $(5,3,2,1,1)$ is a $5$-core. Because a Young diagram has a border strip of length $t$ if and only if it has $t$ as a hook-length, the following result is an immediate consequence of the Murnaghan--Nakayama rule (see also the proof of Theorem~1 of~\cite{Miller2019}).
\begin{lemma}\label{lem2.2}
Let $\lambda$ and $\mu$ be partitions of $n$. If $\mu$ has a part of size $t$ and $\lambda$ is a $t$-core, then $\chi_{\mu}^\lambda=0$.
\end{lemma}
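The hook-lengths in Figure~\ref{fig1}, and the fact that $(5,3,2,1,1)$ is a $5$-core, can be verified with a short script (ours, added for illustration):

```python
def hook_lengths(partition):
    """Hook length of box (i, j) in the Young diagram: arm + leg + 1."""
    cols = [sum(1 for part in partition if part > j) for j in range(max(partition))]
    return [
        [(row - j - 1) + (cols[j] - i - 1) + 1 for j in range(row)]
        for i, row in enumerate(partition)
    ]

H = hook_lengths((5, 3, 2, 1, 1))
print(H)  # the first row should read 9 6 4 2 1, as in Figure 1
is_5_core = all(h % 5 != 0 for row in H for h in row)
print(is_5_core)
```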
Finally, we will also need the fact that most partitions of $n$ are $t$-cores whenever $t$ is substantially larger than $\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$, which happens to be the expected size of the largest part of a random partition of $n$.
\begin{lemma}\label{lem2.3}
Suppose that $t=\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}+\f{\sqrt{6}}{\pi}M\sqrt{n}$ for some $0\leq M\leq 10\log{n}$. Then there are at least $p(n)(1-O(\f{\log{n}}{e^{M}}))$ $t$-core partitions of $n$.
\end{lemma}
\begin{proof}
From Lemma~5 of~\cite{Morotti2020}, the number of $t$-core partitions of $n$ is at least $p(n)-(t+1)p(n-t)$. By taking the first term in Rademacher's formula~\cite{Rademacher1937} for $p(n)$, one gets the explicit estimate
\[
p(n)=\f{1+O(n^{-1/2})}{4\sqrt{3}n}e^{\pi\sqrt{2n/3}}.
\]
Thus,
\begin{align*}
1-(t+1)\f{p(n-t)}{p(n)}&= 1-O\left(\sqrt{n}\log{n}\cdot\exp\left(\pi\sqrt{\f{2}{3}}\cdot\left(\sqrt{n-t}-\sqrt{n}\right)\right)\right) \\
&= 1-O\left(\sqrt{n}\log{n}\cdot\exp\left(\pi\sqrt{\f{2n}{3}}\cdot\left(\sqrt{1-\f{t}{n}}-1\right)\right)\right) \\
&= 1-O\left(\sqrt{n}\log{n}\cdot\exp\left(-\f{\pi}{\sqrt{6}}\cdot\f{t}{\sqrt{n}}\right)\right),
\end{align*}
which equals $1-O(\f{\log{n}}{e^M})$ by recalling the expression for $t$.
\end{proof}
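As a numerical sanity check (our addition, with helper names of our choosing), one can compare this first-term estimate against the exact value of $p(n)$ computed by the standard dynamic programme; already at $n=500$ the two agree to within a few percent.

```python
import math

def partition_count(n):
    """Exact p(n) via the standard 'coin counting' dynamic programme."""
    dp = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            dp[total] += dp[total - part]
    return dp[n]

def main_term(n):
    """First term of Rademacher's formula: e^{pi sqrt(2n/3)} / (4 sqrt(3) n)."""
    return math.exp(math.pi * math.sqrt(2 * n / 3)) / (4 * math.sqrt(3) * n)
```

For instance `main_term(500) / partition_count(500)` is about $1.02$, consistent with the $1+O(n^{-1/2})$ relative error.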
For context, Erd\H{o}s and Lehner~\cite{ErdosLehner1941} showed that a random partition of $n$ has a part of size at least $\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}+\f{\sqrt{6}}{\pi}\log{\f{\sqrt{6}}{\pi}}+\f{\sqrt{6}}{\pi}M\sqrt{n}$ with probability $1-e^{-e^{-M}}$, so the largest part $\mu_1$ of a typical partition $\mu$ of $n$ will not be large enough for Lemma~\ref{lem2.3} to give a nontrivial lower bound on the number of partitions of $n$ that are $\mu_1$-cores. Lemmas~\ref{lem2.2} and~\ref{lem2.3} taken together only guarantee that the character table of $S_n$ contains an $\Omega(\f{1}{\log{n}})$-proportion of zeros.
Now fix a prime $p$. For any partition $\mu$ of $n$, let $\tilde{\mu}$ denote the partition of $n$ obtained by repeatedly replacing any $p$ parts of the same size $m$ in $\mu$ with one part of size $pm$ until there are no parts appearing more than $p-1$ times. For example, if $p=2$ and $\mu=(6,2,1,1,1)$, then $\tilde{\mu}=(6,4,1)$. By Lemma~\ref{lem2.1}, if $\chi^\lambda_{\tilde{\mu}}=0$, then $\chi_{\mu}^\lambda$ is a multiple of $p$. Our strategy will thus be to show that $\chi_{\tilde{\mu}}^\lambda=0$ for almost all pairs of partitions $\lambda$ and $\mu$ of $n$. We will do this by showing that the partition $\tilde{\mu}$ typically has largest part $\tilde{\mu}_1$ significantly larger than the largest part of a random partition when $p\leq 13$ -- so large that almost all partitions of $n$ are $\tilde{\mu}_1$-cores. Theorems~\ref{thm1.1} and~\ref{thm1.2} will then follow from Lemmas~\ref{lem2.1},~\ref{lem2.2}, and~\ref{lem2.3}.
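The merging operation $\mu\mapsto\tilde{\mu}$ is purely mechanical, and can be sketched in a few lines of Python (our illustration; the function name is ours):

```python
from collections import Counter

def tilde(parts, p):
    """Repeatedly replace any p equal parts of size m by one part of
    size p*m until every part size occurs at most p-1 times."""
    counts = Counter(parts)
    changed = True
    while changed:
        changed = False
        for m, c in list(counts.items()):
            if c >= p:
                counts[m] = c % p
                counts[p * m] += c // p
                changed = True
    return tuple(sorted((m for m, c in counts.items() for _ in range(c)),
                        reverse=True))
```

This reproduces the example above, $\widetilde{(6,2,1,1,1)}=(6,4,1)$ for $p=2$, and of course the total being partitioned is preserved.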
To prove that $\tilde{\mu}_1$ is typically large, we study the following partition statistic. For any positive integer $k$ relatively prime to $p$ and partition $\mu$ of $n$, set
\[
M_{\mu}^{(k)}:=\sum_{\substack{\mu_i\in\mu \\ \mu_i=kp^j}}\mu_i.
\]
So, $M_{\mu}^{(k)}$ is the sum of all parts of $\mu$ of the form $k$ times a power of $p$. For example, when $p=2$ we have $M_{(6,2,1,1,1)}^{(1)}=5=M_{(6,4,1)}^{(1)}$. In fact, not only does $M^{(k)}_{\mu}=M^{(k)}_{\tilde{\mu}}$ hold for all $k$ and all partitions $\mu$, but the parts of the form $kp^j$ appearing in $\tilde{\mu}$ can even be read off from the base-$p$ expansion of $\f{1}{k}M_{\tilde{\mu}}^{(k)}$. That is, the base-$p$ expansion of $\f{1}{k}M_{\mu}^{(k)}$ has digit $i$ in the $p^j$ place if and only if $\tilde{\mu}$ has exactly $i$ parts of size $kp^j$. It thus suffices to show that there is a $\gamma_p>0$ such that, for almost every $\mu$, there exists some $k$ for which
\begin{equation}\label{eq2.1}
\left\lfloor \log_p{\f{M_\mu^{(k)}}{k}}\right\rfloor\geq \log_p\f{(1+\gamma_p)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k},
\end{equation}
since this will guarantee that $\tilde{\mu}$ has a part of size at least $(1+\gamma_p)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$, which is easily large enough for Lemma~\ref{lem2.3} to give us a strong bound on the number of partitions that are not $\tilde{\mu}_1$-cores.
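To make the definition concrete, the following sketch (ours, not the paper's) computes $M_\mu^{(k)}$ directly:

```python
def M(parts, k, p):
    """Sum of the parts of the partition equal to k times a power of p."""
    def is_k_times_power_of_p(m):
        if m % k:
            return False
        q = m // k
        while q % p == 0:
            q //= p
        return q == 1
    return sum(m for m in parts if is_k_times_power_of_p(m))
```

On the running example with $p=2$, $M^{(1)}_{(6,2,1,1,1)}=M^{(1)}_{(6,4,1)}=5$, and $5=101_2$ matches $(6,4,1)$ having one part $4=1\cdot 2^2$, no part $2$, and one part $1=1\cdot 2^0$.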
For each fixed $k$, the quantity $M_{\mu}^{(k)}$ turns out to have average size approximately equal to $\f{\sqrt{6}}{2\pi\log{p}}\sqrt{n}\log{n}$. Note that $\f{1}{\log{2}}>1$, while $\f{1}{\log{p}}<1$ whenever $p>2$. Thus, to prove Theorem~\ref{thm1.1}, it will suffice to show that $M^{(k)}_{\mu}$ is typically close to its average for enough values of $k$ that~\eqref{eq2.1} must hold for one of them. It turns out that proving this for $k=1,3,$ and $5$ is enough, so we will just have to compute the first and second moments of $M^{(1)}_{\mu},M^{(3)}_{\mu},$ and $M^{(5)}_{\mu}$ and invoke Chebyshev's inequality. Section~\ref{sec3} contains the details of this argument.
Since $M_\mu^{(k)}$ is typically small for each individual $k$ when $p>2$, we will need to consider many $k$'s simultaneously to handle larger primes. It turns out that, for each fixed $k\leq n^{1/2-\ve_1(p)}$, the quantity $M_\mu^{(k)}$ has size $<(1+\ve_2(p))\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$ with probability at most $1-\f{1}{n^{1/2-\ve_3(p)}}$ for some $\ve_1(p),\ve_2(p)$, and $\ve_3(p)$ satisfying $\ve_3(p)>\ve_1(p)$. Thus, if the $M_\mu^{(k)}$ were all independent for $k$ up to $n^{1/2-\ve_1(p)}$, we would be able to prove Theorem~\ref{thm1.2} for all primes $p$. However, we can only prove independence when $\ve_1(p)>1/4$, and it turns out that $\ve_1(p)>1/4$ if and only if $p\leq 13$, which explains the restriction to these primes in Theorem~\ref{thm1.2}. Section~\ref{sec4} contains the details of this argument.
\section{Proof of Theorem~\ref{thm1.1}}\label{sec3}
Fix $p=2$ for this section and let $\sum_{\mu\vdash n}$ and $\E_{\mu\vdash n}$ denote the sum and average, respectively, over partitions $\mu$ of $n$. In the following lemma, we compute the first two moments of $M_\mu^{(k)}$.
\begin{lemma}\label{lem3.1}
For any odd $k$, we have
\[
\E_{\mu\vdash n}M_\mu^{(k)}=\f{\sqrt{6}}{2\pi\log{2}}\sqrt{n}\log{n}(1+o_k(1))
\]
and
\[
\E_{\mu\vdash n}(M_{\mu}^{(k)})^2=\left(\f{\sqrt{6}}{2\pi\log{2}}\sqrt{n}\log{n}\right)^2(1+o_k(1)).
\]
\end{lemma}
\begin{proof}
Let $P(x):=\sum_{n=0}^\infty p(n)x^n$ denote the generating function of the number of partitions of $n$ and
\[
H_k(x,u):=\prod_{j=0}^{\infty}\f{1}{1-(ux)^{k2^j}}\prod_{\ell\neq k2^j}\f{1}{1-x^\ell}=\sum_{n=0}^\infty\sum_{m=0}^\infty h(n,m)x^nu^{m}
\]
denote the bivariate generating function of the number of partitions $\mu$ of $n$ with $M_{\mu}^{(k)}=m$. By logarithmically differentiating $H_k(x,u)$ with respect to $u$ and then setting $u=1$, one sees that the generating function for $\sum_{\mu\vdash n}M^{(k)}_{\mu}$ is
\[
P(x)\sum_{j=0}^{\infty}\f{k2^jx^{k2^j}}{1-x^{k2^j}}=:P(x)F_k(x).
\]
Similarly, by logarithmically differentiating $H_k(x,u)$ twice with respect to $u$ and then setting $u=1$, one also sees that the generating function for $\sum_{\mu\vdash n}(M^{(k)}_{\mu})^2$ is
\[
P(x)\left(F_k(x)^2+\sum_{j=0}^\infty\f{k^24^jx^{k2^j}}{(1-x^{k2^j})^2}\right)=:P(x)G_k(x).
\]
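Both identities can be verified by brute force for small $n$; the snippet below (our independent check, with helper names of our choosing, specialised to $p=2$) confirms that the coefficient of $x^n$ in $P(x)F_k(x)$ equals $\sum_{\mu\vdash n}M^{(k)}_{\mu}$.

```python
def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def sum_M(n, k):
    """Sum over partitions mu of n of M_mu^{(k)} (parts of the form k*2^j)."""
    special = set()
    q = k
    while q <= n:
        special.add(q)
        q *= 2
    return sum(sum(m for m in mu if m in special) for mu in partitions(n))

def coeff_PF(n, k):
    """Coefficient of x^n in P(x)*F_k(x), using
    F_k(x) = sum_j k 2^j x^{k 2^j} / (1 - x^{k 2^j})."""
    dp = [1] + [0] * n          # dp[m] = p(m)
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            dp[total] += dp[total - part]
    total = 0
    q = k
    while q <= n:
        for ell in range(1, n // q + 1):
            total += q * dp[n - ell * q]
        q *= 2
    return total
```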
Asymptotics for $\sum_{\mu\vdash n}M^{(k)}_{\mu}$ and $\sum_{\mu\vdash n}(M^{(k)}_{\mu})^2$ can be gotten by a standard saddle-point argument. The argument is so standard, in fact, that the sort of saddle-point analysis that we would need to do has already been carried out in general by Grabner, Knopfmacher, and Wagner~\cite{GrabnerKnopfmacherWagner2014} for any quantity whose generating function takes the form $P(x)H(x)$ with $H$ sufficiently well-behaved:
\begin{theorem}[Theorem~2.2 of~\cite{GrabnerKnopfmacherWagner2014}]\label{thm3.2}
Suppose that $H$ is a function on the complex disk satisfying
\begin{enumerate}
\item $|H(z)|=O\left(\exp\left(\f{C}{1-|z|^\eta}\right)\right)$ for some $C>0$ and $0<\eta<1$ as $|z|\to 1^-$, and
\item $\f{H(e^{-t+iu})}{H(e^{-t})}\to 1$ uniformly in $|u|\leq C' t^{1+\eta'}$ as $t\to 0^+$ for some $C'>0$ and $0<\eta'<\f{1-\eta}{2}$.
\end{enumerate}
Then the $n^{th}$ coefficient of the generating function $P(x)H(x)$ equals
\[
p(n)H(e^{-\pi/\sqrt{6n}})(1+o(1))+O(\exp(-C''n^{1/2-\eta'}))
\]
for some constant $C''>0$.\footnote{The dependence of the $o(1)$ term on the estimates in (1) and (2) is not made explicit in the statement of the result in~\cite{GrabnerKnopfmacherWagner2014}, though a short inspection of the proof shows that the dependence is good enough to give us bounds of $O_k(\f{1}{\log{n}})$ for the $o_k(1)$ terms in the conclusion of Lemma~\ref{lem3.1}. This would ultimately also lead to a bound of $O(\f{1}{\log{n}})$ in Theorem~\ref{thm1.1}. Since we get a stronger bound in Theorem~\ref{thm1.2}, we will not put any effort into obtaining explicit bounds for the $o_k(1)$ terms in Lemma~\ref{lem3.1}.}
\end{theorem}
It thus remains to verify that $F_k$ and $G_k$ satisfy the hypotheses of Theorem~\ref{thm3.2} and estimate $F_k(e^{-\pi/\sqrt{6n}})$ and $G_k(e^{-\pi/\sqrt{6n}})$. Noting that $F_k(e^{-v})=\sum_{j=0}^\infty\f{k2^je^{-vk2^j}}{1-e^{-vk2^j}}$ converges whenever $\RE{v}>0$ and using basic properties of the Mellin transform (as can be found in~\cite{FlajoletGourdonDumas1995}), we see that $F_k(e^{-t})$ has Mellin transform equal to
\[
\mathcal{M}[F_k](s):=\f{k^{-(s-1)}}{1-2^{-(s-1)}}\Gamma(s)\zeta(s),
\]
and thus equals
\begin{equation}\label{eq3.1}
F_k(e^{-t})=\f{1}{2\pi i}\int_{2-i\infty}^{2+i\infty}\f{k^{-(s-1)}}{1-2^{-(s-1)}}\Gamma(s)\zeta(s)t^{-s}ds
\end{equation}
whenever $t>0$ by Mellin inversion. The function $\mathcal{M}[F_k](s)t^{-s}$ has a double pole at $s=1$, simple poles at $s=0,-1,-3,-5,\dots$, and simple poles at $s=1+\f{2\pi i\ell}{\log{2}}$ for $\ell\in\Z\setminus\{0\}$. The pole at $s=1$ has residue
\[
-\f{\log{t}+\log{k}}{t\log{2}}+\f{1}{2t},
\]
the poles at $-m$ for $m\in\Z_{\geq 0}\setminus 2\N$ have residue
\[
\f{(-1)^mk^{m+1}\zeta(-m)}{m!(1-2^{m+1})}t^m,
\]
and the poles at $1+\f{2\pi i\ell}{\log{2}}$ for $\ell\in\Z\setminus\{0\}$ have residue
\[
\f{k^{-\f{2\pi i\ell}{\log{2}}}\Gamma(1+\f{2\pi i\ell}{\log{2}})\zeta(1+\f{2\pi i\ell}{\log{2}})}{\log{2}}t^{-1-\f{2\pi i\ell}{\log{2}}}.
\]
We use the rapid decay of $\Gamma$ in vertical strips to push the contour in~\eqref{eq3.1} all the way to the left and deduce that $F_k(e^{-t})$ equals
\[
-\f{\log{t}+\log{k}}{t\log{2}}+\f{1}{2t}+\sum_{m=0}^\infty\f{(-1)^mk^{m+1}\zeta(-m)}{m!(1-2^{m+1})}t^m+\sum_{\ell\neq 0}\f{k^{-\f{2\pi i\ell}{\log{2}}}\Gamma(1+\f{2\pi i\ell}{\log{2}})\zeta(1+\f{2\pi i\ell}{\log{2}})}{t\log{2}}t^{-\f{2\pi i\ell}{\log{2}}},
\]
from which it follows that $F_k(e^{-v})=-\f{\log{v}}{v\log{2}}+O_k(\f{1}{|v|})$ when $|\IM{v}|\leq \RE{v}$ as $\RE{v}\to 0^+$. A similar analysis shows that
$G_k(e^{-t})-F_k(e^{-t})^2$ equals
\[
-\f{\log{t}+\log{k}}{t^2\log{2}}+\f{2+\log{2}}{2t^2\log{2}}+\sum_{m=0}^\infty\f{(-1)^{m}k^{m+2}\zeta(-m-1)}{m!(1-2^{m+2})}t^{m}+\sum_{\ell\neq 0}\f{k^{-\f{2\pi i\ell}{\log{2}}}\Gamma(2+\f{2\pi i\ell}{\log{2}})\zeta(1+\f{2\pi i\ell}{\log{2}})}{t^2\log{2}}t^{-\f{2\pi i\ell}{\log{2}}},
\]
from which it follows that $G_k(e^{-v})=(\f{\log{v}}{v\log{2}})^2+O_k(\f{|\log{v}|}{|v|^2})$ when $|\IM(v)|\leq \RE{v}$ as $\RE{v}\to 0^+$.
Since $|F_k(e^{-t+iu})|\leq F_k(e^{-t})$ and $|G_k(e^{-t+iu})|\leq G_k(e^{-t})$ for all $t>0$ and $u\in[0,2 \pi)$ by our original expressions for $F_k$ and $G_k$, the asymptotic expressions for $F_k(e^{-t})$ and $G_k(e^{-t})$ imply that condition~(1) of Theorem~\ref{thm3.2} is easily satisfied for $H=F_k$ and $H=G_k$ for any $\eta>0$. When $|u|\leq t^{4/3}$ (say) we also get that $\f{F_k(e^{-t+iu})}{F_k(e^{-t})}=1+O_k(\f{1}{|\log{t}|})$ and $\f{G_k(e^{-t+iu})}{G_k(e^{-t})}=1+O_k(\f{1}{|\log{t}|})$, so that condition~(2) of Theorem~\ref{thm3.2} is satisfied as well. The lemma now follows by noting that $F_k(e^{-\pi/\sqrt{6n}})=\f{\sqrt{6}}{2\pi\log{2}}\sqrt{n}\log{n}(1+o_k(1))$ and $G_k(e^{-\pi/\sqrt{6n}})=(\f{\sqrt{6}}{2\pi\log{2}}\sqrt{n}\log{n})^2(1+o_k(1))$.
\end{proof}
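The expansion for $F_k(e^{-t})$ can be sanity-checked numerically (our addition, with names of our choosing): for $k=1$ the sum of the leading terms, $-\f{\log t}{t\log 2}+\f{1}{2t}+\f{1}{2}$, should match a direct evaluation of the series up to tiny oscillatory and higher-order corrections.

```python
import math

def F1(t):
    """Direct evaluation of F_1(e^{-t}) = sum_j 2^j e^{-t 2^j}/(1 - e^{-t 2^j})."""
    total, j = 0.0, 0
    while t * 2**j < 50:          # remaining terms are below 2^j * e^{-50}
        total += 2**j / math.expm1(t * 2**j)
        j += 1
    return total

t = 0.01
approx = -math.log(t) / (t * math.log(2)) + 1 / (2 * t) + 0.5
```

At $t=0.01$ the two values agree to well within $0.05$.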
Now we can prove Theorem~\ref{thm1.1}.
\begin{proof}[Proof of Theorem~\ref{thm1.1}]
By Lemma~\ref{lem3.1} and Chebyshev's inequality, we get that
\[
M_\mu^{(1)},M_\mu^{(3)},M_\mu^{(5)}\geq\left(\f{1}{\log{2}}-\f{1}{100}\right)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}
\]
for all but a $o(1)$-proportion of partitions $\mu$ of $n$. For such $\mu$, we thus have
\begin{align*}
\left\lfloor\log_2\f{M_{\mu}^{(k)}}{k}\right\rfloor&\geq \log_2\f{(\f{1}{\log{2}}-\f{1}{100})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k}-\left\{\log_2\f{(\f{1}{\log{2}}-\f{1}{100})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k}\right\}
\\
&=\log_2\f{(1+\f{1}{100})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k}+\log_2\f{\f{1}{\log{2}}-\f{1}{100}}{1+\f{1}{100}}-\left\{\log_2\f{(\f{1}{\log{2}}-\f{1}{100})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k}\right\}
\end{align*}
for $k=1,3,$ and $5$, where $\{r\}$ denotes the fractional part of $r$. Note that $\log_21=0$, $\log_23=1.58\dots$, $\log_2{5}=2.32\dots$, and $\log_2\f{\f{1}{\log{2}}-\f{1}{100}}{1+\f{1}{100}}=0.504\dots$. So, for all $n$, we certainly have
\[
\left\{\log_2\f{(\f{1}{\log{2}}-\f{1}{100})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k}\right\}\leq \log_2\f{\f{1}{\log{2}}-\f{1}{100}}{1+\f{1}{100}}
\]
for at least one of $k=1,3,$ or $5$, since every $r\in[0,1)$ satisfies $\{r-\log_2{k}\}\leq \log_2\f{\f{1}{\log{2}}-\f{1}{100}}{1+\f{1}{100}}$ for at least one of $k=1,3,$ or $5$. We conclude that $\tilde{\mu}$ has a part of size at least $(1+\f{1}{100})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$ for all but a $o(1)$-proportion of partitions $\mu$, so that Theorem~\ref{thm1.1} follows by Lemmas~\ref{lem2.1},~\ref{lem2.2}, and~\ref{lem2.3}.
\end{proof}
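The covering claim for $k=1,3,5$ (that every $r\in[0,1)$ satisfies $\{r-\log_2 k\}\leq 0.504\dots$ for some $k$) amounts to three arcs of length $0.504\dots$, based at $0$, $\{\log_2 3\}$ and $\{\log_2 5\}$, covering the unit circle. A brute-force grid check, our addition, confirms this with room to spare:

```python
import math

threshold = math.log2((1 / math.log(2) - 1 / 100) / (1 + 1 / 100))  # 0.504...

def min_frac(r, ks=(1, 3, 5)):
    """min over k of the fractional part of r - log2(k)."""
    return min((r - math.log2(k)) % 1.0 for k in ks)

worst = max(min_frac(i / 10000) for i in range(10000))
```

In fact the worst case is about $0.415$ (attained just below $r=1$), comfortably below the threshold.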
\section{Proof of Theorem~\ref{thm1.2}}\label{sec4}
Let $q_p(n)$ denote the number of partitions of $n$ into powers of $p$ and, for $k_1,\dots,k_M\in\N$, let $r_{k_1,\dots,k_M;p}(n)$ denote the number of partitions of $n$ into parts not of the form $k_ip^j$. To prove Theorem~\ref{thm1.2}, it suffices to bound
\begin{equation}\label{eq4.1}
\f{1}{p(n)}\sum_{\substack{\ell_i\leq (1+\delta_p)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n} \\ k_i\mid \ell_i \\ i=1,\dots,M}}r_{k_1,\dots,k_M;p}\left(n-\sum_{i=1}^M\ell_i\right)\prod_{i=1}^Mq_p\left(\f{\ell_i}{k_i}\right)
\end{equation}
for some $k_1,\dots,k_M$ all satisfying
\[
\left\{\log_p\f{(1+\delta_p)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k_i}\right\}\leq\log_p\f{1+\delta_p}{1+\f{\delta_p}{2}},
\]
since, like in the proof of Theorem~\ref{thm1.1}, this will guarantee that, outside of a set of partitions $\mu$ of $n$ of density~\eqref{eq4.1}, $\tilde{\mu}$ has a part of size at least $(1+\f{\delta_p}{2})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$. We start by showing that, under certain conditions on $k_1,\dots,k_M$,~\eqref{eq4.1} is approximately the product of the probabilities that $M_{\mu}^{(k_i)}\leq(1+\delta_p)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$.
\begin{lemma}\label{lem4.1}
Let $M\leq n^{1/4-\ve}$, $k_1,\dots,k_M\leq n^{1/4-\ve}$, and $\gamma>0$. We have
\begin{align*}
\f{1}{p(n)}\sum_{\substack{\ell_i\leq \gamma\sqrt{n}\log{n} \\ k_i\mid \ell_i \\ i=1,\dots,M}}&r_{k_1,\dots,k_M;p}\left(n-\sum_{i=1}^M\ell_i\right)\prod_{i=1}^Mq_p\left(\f{\ell_i}{k_i}\right)\\
&=\prod_{i=1}^M\left(\f{1}{p(n)}\sum_{\substack{\ell_i\leq \gamma\sqrt{n}\log{n} \\ k_i\mid\ell_i}}r_{k_i;p}(n-\ell_i)q_p\left(\f{\ell_i}{k_i}\right)\right)(1+O_p(n^{-\ve}))
\end{align*}
as $n\to\infty$.
\end{lemma}
\begin{proof}
We first obtain an asymptotic, valid for $m\geq\f{n}{2}$, for $r(m)=r_{k_1,\dots,k_M;p}(m)$, whose generating function is
\[
P(x)\prod_{i=1}^M\prod_{j=0}^{\infty}(1-x^{k_ip^j})=:P(x)H_{k_1,\dots,k_M;p}(x)=P(x)H(x).
\]
As in Section~\ref{sec3}, we can do this using a saddle-point argument, though $H$ is not quite well behaved enough for the results of~\cite{GrabnerKnopfmacherWagner2014} to apply. Doing the saddle-point analysis from scratch does not take very long, however, and does not stray far from the arguments in~\cite{GrabnerKnopfmacherWagner2014}. So, setting $t_0=\f{\pi}{\sqrt{6m}}$, we have that $r_{k_1,\dots,k_M;p}(m)$ equals
\begin{align*}
&\int\limits_{|u|\leq t_0^{5/4}}P(e^{-t_0+iu})H(e^{-t_0+iu})e^{-m(-t_0+iu)}du+\int\limits_{t_0^{5/4}<|u|\leq\pi}P(e^{-t_0+iu})H(e^{-t_0+iu})e^{-m(-t_0+iu)}du\\
&=:I_1+I_2.
\end{align*}
We will first deal with $I_1$, which contributes the main term of the asymptotic. Note that we can write
\[
\log{H(e^{-v})}=-\sum_{i=1}^M\sum_{j=0}^{\infty}\sum_{\ell=1}^\infty\f{1}{\ell}e^{-v\ell k_ip^j}
\]
when $\RE{v}>0$. So, $\log{H(e^{-t})}$ has Mellin transform equal to
\[
-\sum_{i=1}^M k_i^{-s}\Gamma(s)\zeta(s+1)(1-p^{-s})^{-1},
\]
and thus equals
\begin{equation}\label{eq4.2}
-\f{1}{2\pi i}\sum_{i=1}^M\int_{\f{1}{2}-i\infty}^{\f{1}{2}+i\infty}(tk_i)^{-s}\Gamma(s)\zeta(s+1)(1-p^{-s})^{-1} ds
\end{equation}
using Mellin inversion. The poles of
\[
(tk_i)^{-s}\Gamma(s)\zeta(s+1)(1-p^{-s})^{-1}
\]
consist of a triple pole at $s=0$ with residue
\[
\f{(\log{t})^2}{2\log{p}}+\f{2\log{k_i}-\log{p}}{2\log{p}}\log{t}+\f{6(\log{k_i})^2-6\log{k_i}\log{p}+(\log{p})^2+\gamma'}{12\log{p}},
\]
where $\gamma'$ is an absolute constant, simple poles at $s=-r$ for all $r\in\{1\}\cup 2\N$ with residue
\[
(-1)^{r}\zeta(1-r)\f{(tk_i)^{r}}{r!(1-p^{r})},
\]
and simple poles at $s=\f{2\pi i \ell}{\log{p}}$ for all $\ell\in\Z\sm\{0\}$ with residue
\[
\Gamma\left(\f{2\pi i \ell}{\log{p}}\right)\zeta\left(1+\f{2\pi i \ell}{\log{p}}\right)\f{(tk_i)^{-2\pi i\ell/\log{p}}}{\log{p}}.
\]
We push the contour in~\eqref{eq4.2} all the way to the left using the rapid decay of $\Gamma$ in vertical strips to deduce that $\log{H(e^{-v})}$ equals
\begin{align*}
-\sum_{j=1}^M\bigg[\f{(\log{v})^2}{2\log{p}}&+\f{2\log{k_j}-\log{p}}{2\log{p}}\log{v}+\f{6(\log{k_j})^2-6\log{k_j}\log{p}+(\log{p})^2+\gamma'}{12\log{p}}\\
&+\sum_{r=1}^\infty(-1)^{r}\zeta(1-r)\f{(vk_j)^{r}}{r!(1-p^{r})}+\sum_{|r|>0}\Gamma\left(\f{2\pi i r}{\log{p}}\right)\zeta\left(1+\f{2\pi i r}{\log{p}}\right)\f{(vk_j)^{-2\pi ir/\log{p}}}{\log{p}}\bigg]
\end{align*}
when $\RE{v}<n^{-1/4}$ and $|\IM{v}|\leq\RE{v}$. As a consequence, we have
\[
\log{\f{H(e^{-t_0+iu})}{H(e^{-t_0})}}=h(M,m)u+O_p(m^{-\ve})
\]
whenever $|u|\leq t_0^{5/4}$ for some $|h(M,m)|=O_p(M\sqrt{m}\log{m})$. We also have
\[
\log\f{P(e^{-t_0+iu})}{P(e^{-t_0})}=\f{\pi^2}{6}\left(i\f{u}{t_0^2}-\f{u^2}{t_0^3}-i\f{u^3}{t_0^4}+\f{u^4}{t_0^5}\right)+O(m^{-1/8})
\]
whenever $|u|\leq t_0^{5/4}$ (see, for example, Section VIII.6 of~\cite{FlajoletSedgewick2009}). So, using that $\f{\pi^2}{6}\cdot\f{u}{t_0^2}=mu$, we get that
\[
I_1=P(e^{-t_0})H(e^{-t_0})e^{mt_0}(1+O_p(m^{-\ve}))\int\limits_{|u|\leq t_0^{5/4}}\exp\left(\f{\pi^2}{6}\left(-\f{u^2}{t_0^3}-i\f{u^3}{t_0^4}+\f{u^4}{t_0^5}\right)+h(M,m)u\right)du.
\]
Making the change of variables $u\mapsto \f{u}{\f{\pi}{\sqrt{3}}t_0^{-3/2}}$ in the integral above gives that $I_1$ equals $\f{\sqrt{3}}{\pi}P(e^{-t_0})H(e^{-t_0})e^{mt_0}(1+O_p(m^{-\ve}))t_0^{3/2}$ times the quantity
\[
\int\limits_{|u|\leq \f{\pi}{\sqrt{3}}t_0^{-1/4}}\exp\left(\f{\sqrt{3}h(M,m)t_0^{3/2}}{\pi}u-\f{u^2}{2}-\f{\sqrt{3t_0}}{2\pi}iu^3+\f{3t_0}{2\pi^2}u^4\right)du=:I_1'.
\]
Note that
\[
I_1'=\int\limits_{-t_0^{-\ve}}^{t_0^{-\ve}}\exp\left(\f{\sqrt{3}h(M,m)t_0^{3/2}}{\pi}u-\f{u^2}{2}-\f{\sqrt{3t_0}}{2\pi}iu^3+\f{3t_0}{2\pi^2}u^4\right)du+O(e^{-\Omega_p(m^{\ve})}),
\]
so that
\[
I_1'=(1+O_p(m^{-\ve/2}))\int\limits_{-t_0^{-\ve}}^{t_0^{-\ve}}\exp\left(-\f{u^2}{2}\right)du.
\]
The Gaussian integral above can be extended to one over all of $\R$ at the cost of an error of $O(e^{-m^{\ve}})$, from which it follows that
\[
I_1=p(m)H(e^{-\pi/\sqrt{6m}})(1+O_p(m^{-\ve/2})).
\]
To bound $I_2$, note that
\[
|H(e^{-t_0+iu})|\leq \prod_{i=1}^M\prod_{j=0}^\infty(1+e^{-t_0k_ip^j})<\prod_{i=1}^M\prod_{j=0}^\infty\f{1}{1-e^{-t_0k_ip^j}}=\f{1}{H(e^{-t_0})}\leq\exp(O_p(M(\log{m})^2))
\]
for all $u\in[0,2\pi)$. We combine this with the bound
\[
\left|\f{P(e^{-t+iu})}{P(e^{-t})}\right|\leq\exp\left(-\f{1}{t(1+(\f{\pi t}{2u})^2)}+O(t)\right)
\]
from Lemma~3.1 of~\cite{GrabnerKnopfmacherWagner2014}, which is valid for all $|u|\leq \pi$ as $t\to 0^+$, to get that
\[
I_2\leq P(e^{-t_0})\exp(t_0m-m^{1/4}+O_p(m^{\f{1}{4}-\ve}(\log{m})^2)),
\]
which is at most $p(m)\exp(-\Omega_p(m^{1/4}))$ for $m$ sufficiently large by the standard estimate for $p(m)$. We thus conclude that
\begin{equation}\label{eq4.3}
r_{k_1,\dots,k_M;p}(m)=p(m)H(e^{-\pi/\sqrt{6m}})(1+O_p(m^{-\ve/2})).
\end{equation}
Now, to finish, we use that
\[
\f{p\left(n-\sum_{i=1}^M\ell_i\right)}{p(n)}=(1+O(n^{-1/4}))\prod_{i=1}^M\f{p(n-\ell_i)}{p(n)}
\]
and
\[
H_{k_1,\dots,k_M;p}(e^{-\pi/\sqrt{6(n-\sum_{i=1}^M\ell_i)}})=(1+O_p(n^{-2\ve}(\log{n})^2))\prod_{i=1}^MH_{k_i;p}(e^{-\pi/\sqrt{6(n-\ell_i)}})
\]
whenever $\ell_1,\dots,\ell_M\leq\gamma\sqrt{n}\log{n}$ to get
\[
\f{r_{k_1,\dots,k_M;p}(n-\sum_{i=1}^M\ell_i)}{p(n)}=(1+O_p(n^{-2\ve}(\log{n})^2))\prod_{i=1}^M\f{r_{k_i;p}(n-\ell_i)}{p(n)},
\]
from which the conclusion of the lemma follows.
\end{proof}
To finish the proof of Theorem~\ref{thm1.2}, we will also need a result of Mahler~\cite{Mahler1940} that counts the number of partitions of an integer into powers of any fixed positive integer.
\begin{lemma}[Mahler,~\cite{Mahler1940}]\label{lem4.2}
The number of partitions of $n$ into powers of $p$ equals
\[
\exp\left(\f{1}{2\log{p}}\left(\log\f{n/p}{\log{n/p}}\right)^2+\left(\f{1}{2}+\f{1}{\log{p}}+\f{\log\log{p}}{\log{p}}\right)\log{n}+O_p(\log\log{n})\right).
\]
\end{lemma}
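The quantity $q_p(n)$ is straightforward to compute exactly, which gives a way to spot-check such asymptotics for small $n$; below is a standard dynamic programme (our illustration, with names of our choosing).

```python
def q(n, p):
    """Number of partitions of n into powers of p (parts 1, p, p^2, ...)."""
    dp = [1] + [0] * n
    power = 1
    while power <= n:
        for total in range(power, n + 1):
            dp[total] += dp[total - power]
        power *= p
    return dp[n]
```

For $p=2$ this is the binary partition function: $q_2(0),\dots,q_2(10)=1,1,2,2,4,4,6,6,10,10,14$.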
Now we can prove Theorem~\ref{thm1.2}.
\begin{proof}[Proof of Theorem~\ref{thm1.2}]
We show that there exist $\delta_p,\ve_p,\gamma_p'>0$ such that, for any collection of $M\geq Cn^{1/4-\ve_p}$ distinct $k_1,\dots,k_M$ between $\f{1}{p^2}n^{1/4-\ve_p}$ and $n^{1/4-\ve_p}$ and for all partitions $\mu$ of $n$ outside of a set of density $\exp(-\Omega_{p,C}(n^{\gamma'_p}))$, there exists some $i\in\{1,\dots,M\}$ such that $M_\mu^{(k_i)}\geq(1+\delta_p)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$. That there are $\Omega_p(n^{1/4-\ve_p})$ many $k_1,\dots,k_M$ in the interval $[\f{1}{p^2}n^{1/4-\ve_p},n^{1/4-\ve_p}]$ that all satisfy
\[
\left\{\log_p\f{(1+\delta_p)\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}}{k}\right\}\leq\log_p\f{1+\delta_p}{1+\f{\delta_p}{2}}
\]
(once $\delta_p$ has been fixed) is an immediate consequence of the fact that $\{\log_p{k}\}$ has distribution function $\f{p^t-1}{p-1}$ in intervals of the form $[p^a,p^{a+1}]$. As in the proof of Theorem~\ref{thm1.1}, these two results together imply that $\tilde{\mu}$ has a part of size at least $(1+\f{\delta_p}{2})\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$ with probability at least $1-\exp(-\Omega_p(n^{\gamma_p'}))$, from which Theorem~\ref{thm1.2} will immediately follow using Lemmas~\ref{lem2.1},~\ref{lem2.2}, and~\ref{lem2.3}.
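The distributional fact used here is easy to confirm empirically: for $k$ ranging over $[p^a,p^{a+1})$, the proportion with $\{\log_p k\}\leq t$ approaches $\f{p^t-1}{p-1}$. The check below, taking $p=2$ and $a=10$, is our addition.

```python
import math

def empirical_cdf(t, p, a):
    """Fraction of integers k in [p^a, p^(a+1)) with {log_p k} <= t."""
    lo, hi = p**a, p**(a + 1)
    hits = sum(1 for k in range(lo, hi)
               if (math.log(k) / math.log(p)) % 1.0 <= t)
    return hits / (hi - lo)
```

With $t=1/2$ the empirical proportion over $[2^{10},2^{11})$ is within $10^{-2}$ of $\sqrt{2}-1$.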
Set
\[
f_p(k):=\f{1}{p(n)}\sum_{\substack{\ell\leq \gamma\sqrt{n}\log{n} \\ k\mid \ell}}r_{k;p}(n-\ell)q_p\left(\f{\ell}{k}\right)
\]
for every $k\leq n^{1/4-\ve}$. By~\eqref{eq4.3}, Lemma~\ref{lem4.2}, and the standard estimate for $p(n)$, the proportion $\f{r_{k;p}(n-\ell)q_p\left(\f{\ell}{k}\right)}{p(n)}$ equals a quantity that is $\exp(O_p(\log\log{n}))$ times
\[
\exp\left(-\f{(\log(t_0k))^2}{2\log{p}}+\f{\log(t_0k)}{2}+\f{\left(\log\f{\ell/(pk)}{\log(\ell/(pk))}\right)^2}{2\log{p}}+\left(\f{1}{2}+\f{1}{\log{p}}+\f{\log\log{p}}{\log{p}}\right)\log\f{\ell}{k}-\f{\pi \ell}{\sqrt{6n}}\right),
\]
where $t_0=\f{\pi}{\sqrt{6(n-\ell)}}$. When $p=2$ and $\ell\leq\f{1}{10000}\sqrt{n}\log{n}$, the above is $O(\f{1}{n})$, and when $p$ is an arbitrary prime, $\f{1}{p^2}n^{1/4-\ve}\leq k \leq n^{1/4-\ve}$, and $\ell=\gamma\f{\sqrt{6}}{2\pi}\sqrt{n}\log{n}$, the above equals
\[
\exp\left(\left(\f{(\f{1}{4}+\ve)(1+\log(\f{2\gamma}{p(1+4\ve)})+\log\log{p})}{\log{p}}-\f{\gamma}{2}\right)\log{n}+O_p(\log\log{n})\right).
\]
Now consider the function
\[
g_p(\gamma,\ve):=\f{(\f{1}{4}+\ve)(1+\log(\f{2\gamma}{p(1+4\ve)})+\log\log{p})}{\log{p}}-\f{\gamma}{2}.
\]
It follows from a small amount of calculus that when $p=2$ and $\ve_2=\f{(2+2\delta)\log{2}-1}{4}$, we have
\[
g_2(\gamma,\ve_2)<-\f{1}{4}-\ve_2+\f{1}{4}\left(\delta+(2+2\delta)\log\left(1-\f{2\delta}{4+4\delta}\right)\right)
\]
whenever $\gamma<1+\f{\delta}{2}$. By fixing $\delta>0$ sufficiently small and using that $\log\left(1-\f{2\delta}{4+4\delta}\right)<-\f{2\delta}{4+4\delta}$, we thus get that $f_2(k)=O(\f{(\log{n})^{O(1)}}{n^{\Omega(1)}})$ whenever $\gamma<1+\f{\delta}{2}$ and $\f{1}{4}n^{1/4-\ve_2}\leq k\leq n^{1/4-\ve_2}$. This implies that there exists a $\gamma_2'$ such that, for any $\f{1}{4}n^{1/4-\ve_2}\leq k_1,\dots,k_M\leq n^{1/4-\ve_2}$ with $M\geq Cn^{1/4-\ve_2}$, we have
\[
\prod_{i=1}^Mf_2(k_i)=O(\exp(-\Omega_C(n^{\gamma'_2}))),
\]
from which the desired result for $p=2$ now follows from Lemma~\ref{lem4.1}.
When $p>2$, we have
\[
f_p(k)=1-\f{1}{p(n)}\sum_{\substack{\ell>\gamma\sqrt{n}\log{n} \\ k\mid \ell}}r_{k;p}(n-\ell)q_p\left(\f{\ell}{k}\right),
\]
so that
\[
f_p(k)\leq 1-\f{\gamma\beta\sqrt{n}\log{n}}{kp(n)}\min_{\substack{\gamma\sqrt{n}\log{n}<\ell\leq \gamma(1+\beta)\sqrt{n}\log{n} \\ k\mid \ell}}r_{k;p}(n-\ell)q_p\left(\f{\ell}{k}\right)
\]
for any $\beta>0$. When $\delta>0$, it follows from some more calculus that $g_p(1+\delta,\ve)$ is decreasing as $\ve$ increases and attains a maximum of $-\f{1}{2}+\f{1-2\delta\log{p}-\log{\f{p}{2-2\delta}}+\log\log{p}}{4\log{p}}$ at $\ve=0$ as $\ve$ ranges over $[0,\f{1}{4}]$. When $p>13$, the quantity $\f{1-2\delta\log{p}-\log{\f{p}{2-2\delta}}+\log\log{p}}{4\log{p}}$ is negative for all $\delta>0$, but when $p\leq 13$ it is positive for $\delta>0$ sufficiently small. As a consequence, there exist $\delta_p,\ve_p,\alpha_p>0$ satisfying $\alpha_p>\ve_p$ such that $f_p(k)\leq 1-\Omega_p(n^{-(1/4-\alpha_p)})$ whenever $\f{1}{p^2}n^{1/4-\ve_p}\leq k\leq n^{1/4-\ve_p}$. Thus, for any $\f{1}{p^2}n^{1/4-\ve_p}\leq k_1,\dots,k_M\leq n^{1/4-\ve_p}$ with $M\geq Cn^{1/4-\ve_p}$, we have
\[
\prod_{i=1}^Mf_p(k_i)=O_p(\exp(-\Omega_{p,C}(n^{\gamma'_p})))
\]
for $\gamma_p':=\alpha_p-\ve_p$, from which the desired result for $2<p\leq 13$ now follows from Lemma~\ref{lem4.1}.
\end{proof}
\bibliographystyle{plain}
| {
"timestamp": "2020-07-28T02:24:31",
"yymm": "2007",
"arxiv_id": "2007.06652",
"language": "en",
"url": "https://arxiv.org/abs/2007.06652",
"abstract": "We show that almost every entry in the character table of $S_n$ is even as $n\\to\\infty$. This resolves a conjecture of Miller. We similarly prove that almost every entry in the character table of $S_n$ is zero modulo $3,5,7,11,$ and $13$ as $n\\to\\infty$, partially addressing another conjecture of Miller.",
"subjects": "Combinatorics (math.CO); Number Theory (math.NT); Representation Theory (math.RT)",
"title": "On even entries in the character table of the symmetric group"
} |
https://arxiv.org/abs/2012.09792 | Deciding when two curves are of the same type | Given two closed curves in a surface, we propose an algorithm to detect whether they are of the same type or not. | \section{Introduction}
Whitehead's algorithm \cite{MR1503309} serves to determine if two elements $\gamma$ and $\eta$ in a finitely generated non-abelian free group $\BF$ differ by an automorphism of the latter, that is if there is $\phi\in\Aut(\BF)$ with $\phi(\gamma)=\eta$. An algorithm solving the same problem for surface groups is due to Levitt and Vogtmann \cite{MR1783856}. Both algorithms use different methods, but the basic strategy is surprisingly similar, consisting of two separate steps.
In a first step one brings the involved elements into a suitable ``normal form''. In the case of the free group one uses Whitehead automorphisms to reduce their lengths step by step. In the case of surface groups one conjugates the given elements so that they are represented by paths with minimal self-intersection number. In either case the time complexity of this first step is at worst quadratic: the complexity of the involved elements is strictly reduced at every step and one stops when it is no longer possible to do so.
Once we are done with step one we go into the second step. Here the goal is to decide if two elements in normal form belong to the same connected component of the graph whose vertex set is the set of elements in normal form and where two vertices are joined if they differ by an elementary move: a Whitehead automorphism in the case of the free group and, in the case of the surface group, sliding a strand over a double point followed by a self-homeomorphism of the surface. Note that although the graph is infinite, the algorithm terminates because the function which associates to each vertex its complexity is constant on the connected components of the graph, meaning that the involved connected components are finite. However, the number of elements in normal form and with given complexity is exponential, meaning that a priori the second step has exponential time complexity. In fact, it seems that the running time of the Whitehead algorithm is only known to be polynomial in the case of free groups of rank $n=2$ \cite{Khan}. We do not know of any efficient estimate on the running time of the Levitt-Vogtmann algorithm.
The goal of this note is to give a different algorithm, one of polynomial time complexity, to decide if two elements of a surface group differ by an automorphism:
\begin{theorem}\label{thm-main1}
Let $\Sigma$ be a closed orientable connected surface of genus $g\ge 2$. There is a polynomial time algorithm which decides if two elements $\gamma,\eta\in\pi_1(\Sigma,*)$ differ by a group automorphism and which, if possible, finds $\phi\in\Aut(\pi_1(\Sigma,*))$ with $\phi(\gamma)=\eta$.
\end{theorem}
In the statement of Theorem \ref{thm-main1}, polynomial time means that the running time is bounded from above by a polynomial function in $\max\{\vert\gamma\vert,\vert\eta\vert\}$ where $\vert\cdot\vert$ stands for the word length with respect to any finite generating set.
Note that the conjugacy problem in the surface group $\pi_1(\Sigma,*)$ has a linear time solution because the latter is hyperbolic \cite{Holt}. This means that to prove Theorem \ref{thm-main1} it is enough to give an algorithm deciding if there is an outer automorphism $\phi\in\Out(\pi_1(\Sigma,*))$ which maps the conjugacy class of $\gamma$ to that of $\eta$. From this point of view, it is easy to frame the whole problem in topological rather than algebraic terms. For one, conjugacy classes of elements in the fundamental group correspond to free homotopy classes of curves. But not only that: we can also identify $\Out(\pi_1(\Sigma,*))$ with the (full) mapping class group
$$\Map^\pm(\Sigma) = \Homeo(\Sigma) / \Homeo_0(\Sigma)$$
of $\Sigma$, where $\Homeo(\Sigma)$ is the group of self-homeomorphisms of $\Sigma$ and $\Homeo_0(\Sigma)$ is its identity component. In those terms we can basically rephrase Theorem \ref{thm-main1} as asserting that there is a polynomial time algorithm deciding if two curves are of the {\em same type}, meaning that the corresponding free homotopy classes differ by a mapping class.
The following is then our main result:
\begin{theorem}\label{thm-main}
Let $\Sigma$ be a compact orientable connected surface whose genus $g\ge 2$ and number of boundary components $b$, satisfy $2-2g-b<0$. There is a polynomial time algorithm which decides if two elements $\gamma,\eta\in\pi_1(\Sigma,*)$ are of the same type or not, finding if possible some $\phi\in\Map^\pm(\Sigma)$ with $\eta=\phi(\gamma)$.
\end{theorem}
The following is the basic strategy of the proof of Theorem \ref{thm-main}, at least in the central case that $\gamma$ and $\eta$ are primitive and fill. We produce in polynomial time a collection $\mathcal C} \newcommand{\calD}{\mathcal D$ of mapping classes with the following properties:
\begin{itemize}
\item Any mapping class $\phi$ with $\phi(\gamma)=\eta$ belongs to $\mathcal C} \newcommand{\calD}{\mathcal D$.
\item There are polynomially many elements in $\mathcal C} \newcommand{\calD}{\mathcal D$ and for each one of them it takes polynomial time to decide if $\phi(\gamma)=\eta$.
\end{itemize}
Once we have this list then we just have to check for each $\phi\in\mathcal C} \newcommand{\calD}{\mathcal D$ if $\phi(\gamma)=\eta$. If we find one then $\gamma$ and $\eta$ are of the same type. If not, then not.
\begin{remark}
As it is evident from the given sketch, our algorithm actually solves a more general problem than the one we are discussing here. Indeed, if $G\subset\Map^\pm(\Sigma)$ is an arbitrary subgroup whose membership problem is solvable in polynomial time, then we can decide, again in polynomial time, if two filling curves $\gamma$ and $\eta$ differ by an element of $G$: our algorithm gives in polynomial time a complete list of all those $\phi\in\Map^\pm(\Sigma)$ with $\phi(\gamma)=\eta$, meaning that we just have to check one after the other if any of them belongs to $G$. For $\gamma$ and $\eta$ non-filling, things are a bit more complicated but it all boils down then to being able to check in polynomial time if a subsurface of $\Sigma$ is in the $G$-orbit of another subsurface. The reader will have no difficulty filling in the details after studying the proof of Theorem \ref{thm-main}.
\end{remark}
This paper is organised as follows. In section \ref{sec:preli} we fix some terminology that we will use throughout the paper and prove a lemma explicitly relating two notions of length of a curve: length with respect to a triangulation and intersection number with a filling curve.
In section \ref{sec: swiss guy} we present some basic tasks that one can perform in polynomial time, such as checking if two curves are freely homotopic or detecting if two simple multicurves are of the same type.
Armed with all these tools we prove Theorem \ref{thm-main} in section \ref{proof-thm-main}. In the proof we first work under additional assumptions, for example that the curves are primitive, dealing with the remaining cases at the end of the proof. Once Theorem \ref{thm-main} has been proved, Theorem \ref{thm-main1} follows easily.
In section \ref{sec: dehn-thurston} we then sketch a small variation of the proof of Theorem \ref{thm-main} (for filling curves) replacing the triangulations we will have been using until that point by Dehn-Thurston coordinates. Working with Dehn-Thurston coordinates is more involved than working with triangulations---and this is why we choose the latter for the bulk of the paper---but it gives slightly tighter results. For example we get a polynomial of degree $18g+6b-14$ bounding the complexity of the algorithm.
\subsection*{Acknowledgements.} The first author would like to thank Monika Kudlinska for explaining to him her algorithm \cite{Monika} to detect if a curve is filling. The present paper benefited very much from Kudlinska's work.
The second author thanks IRMAR, Universit\'e de Rennes 1 for its hospitality while this work was initiated.
She would also like to thank Hugo Parlier for making this collaboration happen.
\section{Preliminaries}\label{sec:preli}
Let from now on $\Sigma$ be as in Theorem \ref{thm-main}, that is, a compact orientable connected surface of genus $g\ge 2$ and $b$ boundary components satisfying $2-2g-b<0$. Let us also fix a base-point $*\in\Sigma$ and let $\vert\cdot\vert$ stand for the word length of elements in $\pi_1(\Sigma,*)$ with respect to some finite generating set.
\subsection{Terminology}
Throughout this note we will tacitly identify free homotopy classes of un-oriented curves with actual representatives, referring to both as {\em curves}. In particular, we will also identify curves with the conjugacy classes of elements (and their inverses) in $\pi_1(\Sigma)$. A closed curve is \textit{essential} if it is neither homotopically trivial nor homotopic to a boundary component, and it is {\em primitive} if it is not homotopic to any proper power of another curve. A primitive essential curve is {\em simple} if the corresponding free homotopy class has representatives which are embedded. A {\em multicurve} is a finite union of distinct essential primitive curves. If the components of a multicurve are simple and have disjoint representatives, then the multicurve is a {\em simple multicurve}. A multicurve is {\em ordered} if its components are ordered. Ordered multicurves will be decorated with an arrow on top, for example $\vec\gamma$ or $\vec P$. The prime example of a simple multicurve, ordered or not, is a pants decomposition, that is, a collection of $3g-3+b$ disjoint essential simple curves whose complement consists of $2g-2+b$ pairs of pants. A multicurve $\gamma$ is \textit{filling} if we have $\iota(\gamma,\eta)>0$ for every essential curve $\eta$, where $\iota(\cdot,\cdot)$ stands for the geometric intersection number. Filling multicurves are never simple, but the union of simple curves might well be filling.
The mapping class group $\Map^{\pm}(\Sigma)$ acts on the sets of curves, multicurves, simple curves, ordered simple multicurves, etc., and we say that two such are \textit{of the same type} if they are in the same $\Map^{\pm}(\Sigma)$-orbit. Note that if two ordered pants decompositions $\vec P$ and $\vec P'$ are of the same type then there are many mapping classes sending one to the other. Indeed, if $\phi\in\Map^{\pm}(\Sigma)$ is such that $\phi(\vec P)=\vec P'$ then the set of those mapping classes $\psi$ with $\psi(\vec P)=\vec P'$ agrees with the set of those of the form $\phi\circ\sigma$ where $\sigma\in\Stab_{\Map^{\pm}(\Sigma)}(\vec P)$. The stabiliser of an ordered pants decomposition $\vec P$ admits a simple description, especially if we restrict our attention to the {\em pure mapping class group}
$$P\Map(\Sigma)=\left\{\phi\in\Map^{\pm}(\Sigma)\ \middle\vert\begin{array}{l}
\text{ $\phi$ orientation preserving and mapping }\\
\text{ each boundary component to itself}\end{array}\right\}.$$
Indeed, the stabiliser in the pure mapping class group of an ordered pants decomposition $\vec P$ is nothing other than the subgroup generated by the Dehn twists $T_{P_i}$ along the components $P_1,\dots,P_{3g-3+b}$ of $\vec P$, together with the center $C(\Map(\Sigma))$ of the mapping class group. The center $C(\Map(\Sigma))$ is trivial unless the pair $(g,b)$ consisting of genus and number of boundary components of $\Sigma$ is one of the following: $(1,1)$, $(1,2)$ or $(2,0)$. In those cases its only non-trivial element is the hyper-elliptic involution. We refer the reader to \cite{MR2850125} for background on the mapping class group.
\subsection{Curves in normal form with respect to a triangulation}
Suppose now that $\CT$ is a triangulation of $\Sigma$, and denote its set of edges by $E(\CT)$ and its 1-skeleton by $\CT^{(1)}$. A closed immersed 1-dimensional manifold $\lambda$ in $\Sigma$ is in {\em normal form} with respect to $\CT$, or just $\CT$-normal, if it avoids all the vertices of $\CT$, is transversal to the edges, and intersects each 2-simplex $\Delta$ in a collection of arcs whose endpoints are in different edges of $\Delta$. The {\em $\CT$-length} of a multicurve $\gamma$ in $\Sigma$ is then defined to be the smallest possible number of times that a $\CT$-normal representative of the free homotopy class $\gamma$ meets the edges of the triangulation. Using slightly different words:
$$\ell_\CT(\gamma)=\min\{\vert\alpha\cap\CT^{(1)}\vert\text{ where }\alpha\text{ is a }\CT\text{-normal representative of }\gamma\}.$$
We prove next the following relation between $\CT$-lengths and intersection numbers:
\begin{lemma}\label{mainlem}
If $\alpha$ is a filling multicurve in $\Sigma$ then we have
$$\ell_\CT(\beta)\le\ell_{\CT}(\alpha)\cdot\iota(\beta,\alpha)\cdot(1+\iota(\beta,\beta))$$
for every other multicurve $\beta$ in $\Sigma$. If moreover $\Sigma$ is closed then the statement remains true even after deleting the factor $1+\iota(\beta,\beta)$.
\end{lemma}
\begin{proof}
Abusing notation, we may identify $\alpha$ and $\beta$ with suitably chosen representatives. In other words, we may suppose that $\alpha$ and $\beta$ are actual curves instead of merely free homotopy classes, that they are in $\CT$-normal form and in general position with respect to each other, and that moreover we have
$$\vert\alpha\cap\beta\vert=\iota(\alpha,\beta)\text{, }\ell_{\CT}(\alpha)=\vert\alpha\cap\CT\vert\text{ and }\ell_{\CT}(\beta)=\vert\beta\cap\CT\vert.$$
Note that the assumption that $\alpha$ fills means that each component $\Delta$ of $\Sigma\setminus\alpha$ is either a disk or an annulus containing a boundary component. If $\Sigma$ is closed then only disks are possible.
Note also that the boundary $\D\Delta$ of each such disk meets $\CT$ at most $2\ell_{\CT}(\alpha)$ times. Now, the curve $\alpha$ cuts $\beta$ into segments $I_1,\dots,I_r$ where $r=\iota(\alpha,\beta)$. Each such segment $I_i$ is contained in a connected component $\Delta$ of $\Sigma\setminus\alpha$ and can be homotoped relative to its endpoints into $\D\Delta$. Since $\beta$ is supposed to have the minimal number of intersections with $\CT$ we deduce that
$$\vert I_i\cap\CT\vert\le\frac 12\vert\D\Delta\cap\CT\vert\le\ell_\CT(\alpha)$$
if $\Delta$ is a disk. If $\Delta$ is an annulus then $I_i$ could wrap a few times around, but wrapping around forces its self-intersection number, and thus that of $\beta$ to grow, meaning that we have
$$\vert I_i\cap\CT\vert\le\vert\D\Delta\cap\CT\vert\cdot(1+\iota(\beta,\beta))\le\ell_\CT(\alpha)\cdot(1+\iota(\beta,\beta))$$
It follows that
$$\ell_\CT(\beta)=\vert\beta\cap\CT\vert=\sum_{i=1}^{\iota(\alpha,\beta)}\vert I_i\cap\CT\vert\le\ell_\CT(\alpha)\cdot\iota(\alpha,\beta)$$
if the surface is closed and that
$$\ell_\CT(\beta)=\vert\beta\cap\CT\vert=\sum_{i=1}^{\iota(\alpha,\beta)}\vert I_i\cap\CT\vert\le\ell_\CT(\alpha)\cdot\iota(\alpha,\beta)\cdot(1+\iota(\beta,\beta))$$
in general.
\end{proof}
Suppose now that we are given a triangle $\Delta$ with sides $e_1,e_2,e_3$ and recall that whenever we have three integers $m_1,m_2,m_3\ge 0$ satisfying
\begin{equation}\label{eq-star}
m_1+m_2+m_3\in 2\BZ, \
m_1\le m_2+m_3, \
m_2\le m_3+m_1,
\text{ and }
m_3\le m_1+m_2,
\end{equation}
then there is a configuration $J(m_1,m_2,m_3)$, unique up to isotopy, consisting of $\frac{m_1+m_2+m_3}2$ disjoint simple arcs in $\Delta$, each one with endpoints in different sides, and such that $J(m_1,m_2,m_3)$ meets $e_i$ exactly $m_i$ times.
\begin{figure}[h]
\labellist
\small\hair 2pt
\pinlabel {$m_3 = 3$} at 540 320
\pinlabel {$m_1 = 4$} at 0 320
\pinlabel {$m_2 = 5$} at 280 70
\endlabellist
\centering
\includegraphics[width=0.5\textwidth]{picture01}
\caption{Disjoint simple arcs in a triangle $\Delta$.}
\end{figure}
Returning to our triangulation $\CT$ let $\mathcal A=\mathcal A(\CT)$ be the set of vectors $\vec m=(m_e)_{e\in E(\CT)}\in\BZ_{\ge 0}^{E(\CT)}$ such that the triple $(m_{e_i},m_{e_j},m_{e_k})$ satisfies \eqref{eq-star} whenever $e_i$, $e_j$, and $e_k$ are the three edges of a triangle in $\CT$. We say that elements in $\mathcal A$ are {\em admissible}.
Suppose that $\vec m\in\mathcal A$ is admissible. Since conditions \eqref{eq-star} are satisfied for every triangle in $\CT$ we get an arc system $J(m_{e_i},m_{e_j},m_{e_k})$ for each triangle in $\CT$ (with sides $e_i$, $e_j$, and $e_k$). Gluing all those arc-systems to each other we get a simple, $\CT$-normal, embedded 1-manifold $\gamma_{\vec m}$ with
\begin{equation}\label{I am sick of this}
\vert\gamma_{\vec m}\cap e\vert=m_e
\end{equation}
for every edge $e\in E(\CT)$ of $\CT$.
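Conditions \eqref{eq-star} are completely mechanical to verify. As an illustration, here is a Python sketch (the encoding of the triangulation as a list of edge-triples is our own hypothetical choice) which tests admissibility and recovers, for an admissible triple, the number of arcs of $J(m_1,m_2,m_3)$ joining each pair of sides, namely $\frac{m_i+m_j-m_k}{2}$:

```python
def triple_admissible(m1, m2, m3):
    """Check the parity and triangle-inequality conditions (eq-star)."""
    return ((m1 + m2 + m3) % 2 == 0
            and m1 <= m2 + m3
            and m2 <= m3 + m1
            and m3 <= m1 + m2)

def corner_counts(m1, m2, m3):
    """Number of arcs of J(m1, m2, m3) joining the side pairs
    (e1, e2), (e2, e3) and (e3, e1), namely (m_i + m_j - m_k) / 2."""
    assert triple_admissible(m1, m2, m3)
    return ((m1 + m2 - m3) // 2, (m2 + m3 - m1) // 2, (m3 + m1 - m2) // 2)

def admissible(m, triangles):
    """m: dict edge -> non-negative integer; triangles: list of edge-triples.
    Checks (eq-star) for every triangle of the triangulation."""
    return all(triple_admissible(m[a], m[b], m[c]) for (a, b, c) in triangles)
```

For the triple of the figure above, $(m_1,m_2,m_3)=(4,5,3)$, the arcs split between the corners as $3+2+1=6=\frac{4+5+3}2$.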
\section{Some things we can do in polynomial time}\label{sec: swiss guy}
Continuing with the same notation, suppose that we are given a triangulation $\CT$ of $\Sigma$ and a system of generators for the fundamental group $\pi_1(\Sigma,*)$. We think of the generators as being represented by actual $\CT$-transversal curves.
We recall/discuss now some procedures for which there are polynomial time algorithms. It is very important to keep in mind that we can run those algorithms inside other such algorithms and still have polynomial time: $P(Q(X))$ is a polynomial in $X$ if $P(X)$ and $Q(X)$ are. We will also use $\Pol(X)$ to denote a polynomial in $X$, but we will allow the concrete polynomial to change over time. For example, $\Pol(\Pol(X))=\Pol(X)$ is a perfectly sound statement. Similarly for polynomials of several variables. All polynomials under consideration will have positive coefficients and will only be fed positive quantities.
\begin{remark}
We should now point out that while having implementable algorithms is nice and dandy, this is not the standard we are aiming at. For us, an algorithm is just a clear procedure that your chosen, not extraordinarily creative but very reliable, punctual, clean, rule-following and decently trained Swiss worker would be able to implement in polynomial time. It goes without saying that he or she would be equipped with unlimited amounts of supplies like paper or multicolour pens. For example, to check if the multicurve $\gamma_{\vec m}$ corresponding to an admissible vector $\vec m\in\BZ_{\ge 0}^{E(\CT)}$ is connected one could ask our personal assistant to {\em first please draw carefully the multicurve with a pencil and a ruler, then take a black marker and, without lifting the marker, follow the line drawn in pencil until it closes up. To conclude check if there is some pencil left or if all has been covered by the marker}. In the first case the multicurve was not connected, in the latter it was.
\end{remark}
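The marker procedure can also be abstracted: the arcs of a multicurve glue up along their endpoints on the edges of the triangulation, and connectedness of the resulting 1-manifold is a plain graph-connectivity question. A minimal Python sketch, assuming the gluing has already been encoded as a list of point labels and a list of glued pairs (this encoding is our own ad-hoc choice), using union-find:

```python
def count_components(points, arcs):
    """points: labels for the intersection points of the multicurve with the
    1-skeleton; arcs: pairs (p, q) of labels joined by an arc. Returns the
    number of connected components of the glued-up 1-manifold."""
    parent = {p: p for p in points}

    def find(p):
        # find the representative of p, with path halving
        while parent[p] != p:
            parent[p] = parent[parent[p]]
            p = parent[p]
        return p

    components = len(parent)
    for p, q in arcs:
        rp, rq = find(p), find(q)
        if rp != rq:
            parent[rp] = rq
            components -= 1
    return components
```

The multicurve is connected exactly when this count is 1.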
\subsection*{(R1) Homotope a $\CT$-transversal curve into $\CT$-normal form}
Take your curve $\gamma$ and check if there are unallowed arcs of intersection with 2-simplices of $\CT$, that is, a subarc $I$ of $\gamma$ contained in a 2-simplex $\Delta$ of $\CT$ and whose endpoints $\D I$ belong to the same side of $\Delta$. If you find one such unallowed arc then homotope it away into the adjacent 2-simplex. Repeat until there are no more unallowed arcs. Once there are no unallowed arcs we are, by definition, in $\CT$-normal form. This process is polynomial in $\vert\gamma\cap\CT\vert$.
\subsection*{(R2) Pass to/from $\CT$-normal curves from/to words in the generators}
Let $X\subset\Sigma$ be the dual graph of $\CT$, adding if necessary an arc from the base point $*$ to $X$. Take a maximal tree $T\subset X$. Each (oriented) edge in $X\setminus T$ represents an element in $\pi_1(\Sigma,*)$. Write it in terms of the given generators. Now, represent the free homotopy class of your given curve $\gamma$ in $\CT$-normal form by a curve in $X$. Write the associated word. This process is polynomial in $\vert\gamma\cap\CT\vert$.
Conversely, a word $\gamma$ in the generators is a particular kind of curve in $\Sigma$. Run (R1) to freely homotope it to a curve in $\CT$-normal form. This process is polynomial in $\vert\gamma\vert$.
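The tree-and-generators bookkeeping of (R2) can be sketched in Python as follows. This is a toy version under simplifying assumptions of ours: the graph is encoded by vertex and edge lists, parallel edges are ignored, and each non-tree edge carries a generator label with a chosen orientation:

```python
from collections import deque

def spanning_tree(vertices, edges):
    """BFS spanning tree of a graph; edges are pairs (u, v).
    Returns the set of tree edges, each stored as a frozenset."""
    adj = {v: [] for v in vertices}
    for (u, v) in edges:
        adj[u].append(v)
        adj[v].append(u)
    root = vertices[0]
    seen, tree, queue = {root}, set(), deque([root])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                tree.add(frozenset((u, w)))
                queue.append(w)
    return tree

def path_to_word(path, tree, labels):
    """Convert a closed edge-path (list of oriented edges (u, v)) into a word:
    tree edges contribute nothing, a non-tree edge contributes its generator,
    inverted if traversed against its chosen orientation. labels maps each
    non-tree edge (as a frozenset) to (generator name, chosen orientation)."""
    word = []
    for (u, v) in path:
        e = frozenset((u, v))
        if e in tree:
            continue
        gen, orientation = labels[e]
        word.append(gen if (u, v) == orientation else gen + "^-1")
    return word
```

For a square graph with one non-tree edge labelled $a$, the loop around the square reads off the word $a$ (or $a^{-1}$ when traversed backwards).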
\begin{remark}
There are many algorithms dealing with words in the generating set, or at least where the input is given via words. In the light of (R2) we can make use of all those algorithms. We also get that, when it comes to the complexity, it does not matter if we consider $\vert\gamma\cap\CT\vert$ or $\vert\gamma\vert$. Unless we want to stress which one is the right one, we will just say {\em the complexity of} $\gamma$.
\end{remark}
\subsection*{(R3) Check if elements are trivial or equal}
Surface groups are hyperbolic. This implies that they have linear time solvable word problem, meaning that it takes linear time in the complexity of $\gamma$ to detect if a word $\gamma$ represents the trivial element in $\pi_1(\Sigma,*)$. In particular it also takes linear time to decide if two words represent the same element in $\pi_1(\Sigma,*)$.
\subsection*{(R4) Check if elements are conjugate}
Surface groups have, again because they are hyperbolic, a linear time solvable conjugacy problem \cite{Holt}. This means that it takes linear time in the complexity of the curves $\gamma$ and $\eta$ to decide if they are freely homotopic to each other or not.
\subsection*{(R5) Compute intersection numbers}
There are many algorithms to compute the intersection number $\iota(\gamma,\eta)$ between curves $\gamma,\eta$. A rather effective one is due to Despr\'e and Lazarus \cite{Despre-Lazarus}. Their algorithm takes quadratic time in the complexity of the curves.
\subsection*{(R6) Decide if two triangulated surfaces are homeomorphic by a homeomorphism with prescribed boundary values}
Suppose that we are given two triangulated oriented surfaces $S_1,S_2$ and that we have a bijection $\pi_0(\D S_1)\to\pi_0(\D S_2)$. We want to figure out if there is a homeomorphism $S_1\to S_2$ inducing the given bijection. Well, we start by determining if there is a bijection $\pi_0(S_1)\to\pi_0(S_2)$ compatible with the one given between the sets of boundary components. If this is not the case then we are done. Otherwise compare the Euler characteristic of each component of $S_1$ with that of the corresponding component of $S_2$. If all those numbers agree then there is a homeomorphism, and if not then not. This process is linear in the size of the triangulations, that is, for example, in the number of 2-simplices.
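Schematically, the component-wise test of (R6) is pure counting: compute $\chi=V-E+F$ from each triangulation and compare it, together with the number of boundary components, using that connected compact oriented surfaces with equal invariants are homeomorphic. A Python sketch, with each component encoded as a hypothetical tuple of counts:

```python
def euler_characteristic(n_vertices, n_edges, n_triangles):
    """chi = V - E + F for a triangulated surface component."""
    return n_vertices - n_edges + n_triangles

def same_homeo_type(comp1, comp2):
    """Each comp is (n_vertices, n_edges, n_triangles, n_boundary).
    Connected compact oriented surfaces are homeomorphic iff they have the
    same Euler characteristic and the same number of boundary components."""
    chi1 = euler_characteristic(*comp1[:3])
    chi2 = euler_characteristic(*comp2[:3])
    return chi1 == chi2 and comp1[3] == comp2[3]
```

For example, a one-triangle disk and a two-triangle square both have $\chi=1$ and one boundary component, hence the same homeomorphism type; counts with $\chi=0$ and two boundary circles (an annulus) are rejected.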
\subsection*{(R7) If possible, give a proper orientation preserving homotopy equivalence (with prescribed boundary values) between two triangulated surfaces}
As in (R6) suppose that we are given two triangulated oriented surfaces $S_1,S_2$ and consider $\D S_1$ and $\D S_2$ with their induced orientations. Fix also a bijection $\pi_0(\D S_1)\to\pi_0(\D S_2)$. If possible, we want to construct a homotopy equivalence $S_1\to S_2$ which maps each component of $\D S_1$ homeomorphically to the corresponding component of $\D S_2$. Recall that such a homotopy equivalence exists if and only if there is a homeomorphism. We run (R6) to check if this is the case. If not then we dedicate our time to better things. If yes, what we actually get out of (R6) is a bijection between connected components of $S_1$ and $S_2$ compatible with the bijection of boundary components and which maps components of $S_1$ to homeomorphic components of $S_2$. Said shorter, we might assume from now on that $S_1$ and $S_2$ are connected and homeomorphic. We also barycentrically subdivide the triangulations a few times to avoid degenerate cases; $10\vert\chi(S_1)\vert$ times definitely suffice.
We start by removing an open 2-simplex $\Delta_i$ from each one of the surfaces $S_i$. Starting on the new boundary component $\D\Delta_i$ we can collapse 2-cells one by one, obtaining thus a retraction of each surface $S_i\setminus\Delta_i$ to a graph $X_i$ contained in the 1-skeleton and with $\D S_i\subset X_i$. Let also $\hat\Delta_i$ be the disk obtained when we cut $S_i$ along $X_i$; it is easy to give a homeomorphism between $\Delta_i$ and $\hat\Delta_i$.
Choose now maximal trees $T_1\subset X_1$ and $T_2\subset X_2$ such that the intersection of $T_i$ with each one of the boundary components of $S_i$ is connected. Note that each homeomorphism $X_1/T_1\to X_2/T_2$ which maps the edge of $X_1/T_1$ corresponding to some component of $\D S_1$ to the corresponding edge of $X_2/T_2$ can be lifted to a homotopy equivalence $X_1\to X_2$ preserving the bijection between boundary components. To decide if this homotopy equivalence extends to a homotopy equivalence $S_1\to S_2$ we just need to check that it extends over $\hat\Delta_1$. This is the case if and only if the curve $\D\hat\Delta_1$ is homotopically trivial in $S_2$ -- use (R3) to check this. If the homotopy equivalence extends, then extend it (we leave to the reader the details of how to do that). Otherwise try a different, by which we mean non-homotopic, homeomorphism $X_1/T_1\to X_2/T_2$ which maps the edge of $X_1/T_1$ corresponding to some component of $\D S_1$ to the corresponding edge of $X_2/T_2$. There are finitely many homotopy classes of homeomorphisms to check and one of them works, so run them all if needed. The complexity of this process is polynomial in the sizes of the given triangulations.
\begin{remark}
Note that the bound for the complexity also implies that other quantities, for example the Lipschitz constant, associated to the obtained proper homotopy equivalence are also polynomial in the complexity of the input. Recall also that by the Baer-Dehn-Nielsen theorem \cite{MR2850125}, every proper homotopy equivalence $\Sigma\to\Sigma$ is properly homotopic to a homeomorphism, and that every two properly homotopic homeomorphisms are homotopic via homeomorphism. This means that every proper homotopy equivalence $\Sigma\to\Sigma$ represents a unique mapping class.
\end{remark}
\begin{remark}
If instead of merely a bijection $\pi_0(\D S_1)\to\pi_0(\D S_2)$ we actually have a homotopy class $[\D S_1 \to \D S_2]$ of homotopy equivalences, we can also detect if there is a homeomorphism from $S_1$ to $S_2$ inducing the given homotopy class of boundary map; and if it exists, we can construct one such homeomorphism.
All of this in polynomial time.
We leave the details to the reader.
\end{remark}
\subsection*{(R8) Construct a triangulation adapted to a simple topological multicurve}
Suppose that we are given a simple multicurve $\gamma=\gamma_{\vec m}$ in terms of the admissible vector $\vec m\in\mathcal A(\CT)\subset \BZ_{\ge 0}^{E(\CT)}$. Draw the multicurve and cut each 2-simplex of $\CT$ along $\gamma$. In this way we get $\vert\CT\vert+\ell_\CT(\gamma)$ pieces. Among these pieces, leave the triangles as they are and add diagonals to triangulate squares, pentagons and hexagons. We obtain in this way a triangulation which contains $\gamma$ as a simplicial subcomplex. The time complexity of the process is polynomial in the complexities of $\gamma$ and $\CT$.
\subsection*{(R9) Detect if two simple multicurves have the same topological type}
It suffices to discuss the case of ordered multicurves, leaving the unordered case to the reader. Anyways, the first step is to run (R8) to construct triangulations adapted to both simple multicurves. Then use (R6) to decide if there is a homeomorphism of the complement of the first one to the complement of the other one, with boundary values given by the fact that we are mapping the curves in order and in an orientation preserving way. If such a homeomorphism exists, then so does a homeomorphism $\Sigma\to\Sigma$ mapping the first curve to the second one. And if not, then not. The complexity is polynomial in the complexity of the curves and the triangulations.
\subsection*{(R10) Check if a curve $\gamma$ is filling and, if this is not the case, detect the up to isotopy smallest subsurface $\Sigma(\gamma)$ containing $\gamma$}
As we mentioned in the introduction, Kudlinska gave in \cite{Monika} a polynomial time algorithm detecting if a curve $\gamma$ fills the surface or not. Her algorithm is based on the idea that to check if a curve is filling, it suffices to check if it has positive intersection number with a finite number of curves, where the number depends on the complexity of the original curve. We sketch a small modification of her algorithm, using triangulations instead of Dehn-Thurston coordinates.
The starting point of Kudlinska's algorithm was the observation that if a hyperbolic geodesic $\gamma$ fails to be filling, then the boundary of the subsurface filled by $\gamma$ is an essential curve disjoint from $\gamma$ and shorter than $\gamma$. Here is a combinatorial version of this fact:
\begin{lemma}\label{lemblablabla}
A $\CT$-normal curve $\gamma$ is filling if and only if we have $\iota(\gamma,\eta)>0$ for every $\CT$-normal simple multicurve $\eta$ with $\ell_\CT(\eta)\le 2\ell_{\CT}(\gamma)$.
\end{lemma}
\begin{remark}
The number $2$ in the statement of Lemma \ref{lemblablabla} can be deleted.
\end{remark}
We get from the lemma that $\gamma$ is filling unless we find an admissible vector $\vec m\in\mathcal A(\CT)\subset\BZ_{\ge 0}^{E(\CT)}$ with norm less than $2\ell_{\CT}(\gamma)$ and such that the submanifold $\gamma_{\vec m}$ satisfying \eqref{I am sick of this} for each $e\in E(\CT)$ is a multicurve with $\iota(\gamma_{\vec m},\gamma)=0$. There are polynomially many vectors satisfying the given bound, and it takes polynomial time to check if $\gamma_{\vec m}$ is a multicurve and if the intersection number vanishes, meaning that we can guarantee in polynomial time that $\gamma$ is filling if this is the case. Also, if $\gamma$ is not filling then we will find in polynomial time an essential simple curve $\eta_1$ disjoint from $\gamma$. Cut $\Sigma$ along $\eta_1$ and let $\Sigma_1$ be the connected component of $\Sigma\setminus\eta_1$ which contains $\gamma$. Run the same process again to detect if $\gamma$ fills $\Sigma_1$. If this is the case then we have $\Sigma(\gamma)=\Sigma_1$. If not, find an essential curve $\eta_2\subset\Sigma_1$ disjoint from $\gamma$, let $\Sigma_2$ be the connected component of $\Sigma_1\setminus\eta_2$ containing $\gamma$, and so on... This process runs at most $3g-3+b$ times and it is polynomial at all steps, meaning that in polynomial time we have found the essential subsurface $\Sigma(\gamma)$ of $\Sigma$ filled by $\gamma$.
The running time of this process is polynomial in the complexity of $\gamma$ (but not of the triangulation).
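The enumeration just described can be sketched concretely: for a fixed triangulation there are only $O(L^{\vert E(\CT)\vert})$ vectors of norm at most $L$, a number which is polynomial in $L$. A Python sketch, where the supplied predicate stands for the per-triangle test \eqref{eq-star} (the names are placeholders of ours):

```python
from itertools import product

def candidate_vectors(edges, L, admissible):
    """Enumerate all vectors m in Z_{>=0}^edges with norm at most L that pass
    the supplied admissibility predicate. For a fixed triangulation there are
    O(L^{|edges|}) vectors to inspect, i.e. polynomially many in L."""
    for values in product(range(L + 1), repeat=len(edges)):
        if sum(values) <= L:
            m = dict(zip(edges, values))
            if admissible(m):
                yield m
```

One would then feed each candidate to the multicurve and intersection-number tests described above.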
\begin{proof}[Proof of Lemma \ref{lemblablabla}]
To begin with let $\gamma'$ be a curve which is freely homotopic to $\gamma$ and has minimal self-intersection. For example, we could obtain such a $\gamma'$ by avoiding monogons and bigons. This means in particular that there is such a $\gamma'$ with
$$\ell_\CT(\gamma')\le\ell_\CT(\gamma).$$
Now, since $\gamma'$ has minimal self-intersection number, we get that the, up to isotopy, smallest essential subsurface $\Sigma(\gamma')$ containing $\gamma'$ can be obtained as a regular neighborhood of the union of $\gamma'$ with all components of $\Sigma\setminus\gamma'$ which are disks or annuli containing a boundary component of $\Sigma$. Note that $\gamma'$, and hence $\gamma$ because they both represent the same homotopy class, fails to be filling if and only if $\eta=\D\Sigma(\gamma')$ is not empty. Note also that in the latter case $\eta$ is a $\CT$-normal simple multicurve with $\ell_\CT(\eta)\le 2\ell_{\CT}(\gamma)$ and satisfying
$$\iota(\gamma,\eta)=\iota(\gamma',\eta)=0.$$
We are done.
\end{proof}
\begin{remark}
As we mentioned above, Kudlinska used Dehn-Thurston coordinates to parametrise simple curves. She also worked with the hyperbolic length instead of $\ell_\CT(\gamma)$. Although it is a bit more complicated to set up than what we just did, her approach has some advantages and in section \ref{sec: dehn-thurston} we will use both Dehn-Thurston coordinates and the hyperbolic metric when we want a more effective bound for the running time of our algorithms. We stress that, other than that, we just followed Kudlinska's algorithm.
\end{remark}
Anyways, once our personal Swiss assistant has become proficient in these 10 routines we can describe the algorithms needed to prove the main results of this paper.
\section{The algorithms}\label{proof-thm-main}
In this section we prove the two results announced in the introduction.
\begin{proof}[Proof of Theorem \ref{thm-main}]
To prove the theorem we just have to describe the promised algorithm.
We start by fixing an ordered pants decomposition $\vec P$, a simple multicurve $B$ which together with $\vec P$ fills $\Sigma$, and a triangulation $\CT$ of $\Sigma$ with respect to which $\vec P$ and $B$ admit simplicial embedded representatives meeting in only $\iota(P,B)$ points.
We will first prove the theorem under the following
\begin{quote}
{\bf additional assumptions}: the center of the mapping class group is trivial and the given elements $\gamma,\eta\in\pi_1(\Sigma,*)$ are primitive.
\end{quote}
We will discuss how to deal with the general case once we have proved the theorem under these additional assumptions.
\medskip
Anyways, we start the proof. Our inputs are primitive elements $\gamma,\eta\in\pi_1(\Sigma,*)$, written as words with respect to some finite generating set. For starters we can assume that these words are not only reduced but also cyclically reduced. Note also that we can use (R3) to check if either $\gamma$ or $\eta$ is simple and that we can use (R9) to detect if simple curves have the same type or not. We assume from now on that neither $\gamma$ nor $\eta$ is simple.
\subsection*{Step 1: Reduce to the filling case}
We can first run (R10) to decide if the curve $\gamma$ is filling, detecting, if this is not the case, the smallest subsurface $\Sigma(\gamma)\subset\Sigma$ filled by $\gamma$. Run (R8) to get a triangulation of $\Sigma(\gamma)$ and proceed in the same way with $\Sigma(\eta)$, then using (R6) to detect if $\Sigma(\gamma)$ and $\Sigma(\eta)$ are homeomorphic. If this is not the case then \underline{$\gamma$ and $\eta$ are not of the same type}.
Suppose that $\Sigma(\gamma)$ and $\Sigma(\eta)$ are homeomorphic and list all possible homotopy classes $[\D\Sigma(\gamma) \to \D\Sigma(\eta)]$ of homotopy equivalences.
Using (R6), decide for each one of these homotopy classes if it is induced by some homeomorphism $\Sigma\to\Sigma$ which maps $\Sigma(\gamma)\to\Sigma(\eta)$.
If there is no homotopy class of homotopy equivalences for which such a homeomorphism exists then \underline{$\gamma$ and $\eta$ are not of the same type}.
Let $\CB$ be the list of all homotopy classes of homotopy equivalences $[\D\Sigma(\gamma) \to \D\Sigma(\eta)]$ realised by some homeomorphism of $\Sigma$ mapping $\Sigma(\gamma)$ to $\Sigma(\eta)$, and for each $\pi\in\CB$ run (R7) to get a proper homotopy equivalence
$$\phi_\pi:\Sigma(\gamma)\to\Sigma(\eta)$$
inducing $\pi$. Then we are facing the following alternative:
\begin{itemize}
\item If there are $\pi\in\CB$ and a pure mapping class $\psi\in P\Map(\Sigma(\eta))$ such that $\psi(\phi_\pi(\gamma))$ is freely homotopic to $\eta$ then \underline{$\gamma$ and $\eta$ are of the same type}.
\item Otherwise \underline{they are not of the same type}.
\end{itemize}
We have to figure out, in polynomial time, if such $\pi$ and $\psi$ exist. Since the cardinality of $\CB$ is bounded in terms of $g$ and $b$, and since we can treat each $\pi$ individually, it suffices to be able to check in polynomial time if for fixed $\pi\in\CB$ the pure mapping class $\psi$ exists or not. A priori this sounds very similar to the initial problem, replacing only $\gamma$ by $\phi_\pi(\gamma)$ and the mapping class group by the pure mapping class group. It is indeed very similar. However we are now in a situation where the given curves fill the surface. This means that, all things considered, we are reduced to having to give a polynomial time algorithm solving the following problem:
\begin{quote}
{\em Given two primitive filling curves $\gamma$ and $\eta$ in $\Sigma$, determine if there is $\phi\in P\Map(\Sigma)$ with $\phi(\gamma)=\eta$.}
\end{quote}
This is what we will do from now on. As we already mentioned in the introduction, the basic strategy is to produce in polynomial time a collection $\mathcal C\subset P\Map(\Sigma)$ of mapping classes, the candidates, with the following properties:
\begin{itemize}
\item[(C1)] If $\phi\in P\Map(\Sigma)$ is such that $\phi(\gamma)=\eta$, then $\phi\in\mathcal C$.
\item[(C2)] There are polynomially many elements in $\mathcal C$ and it takes polynomial time to decide if $\phi(\gamma)=\eta$ for each one of them individually.
\end{itemize}
Once we have this list, we just have to check for each $\phi\in\mathcal C$ whether $\phi(\gamma)=\eta$. If we find one then \underline{$\gamma$ and $\eta$ are of the same type}, otherwise \underline{they aren't}.
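Schematically, this final check is a single loop over the candidate list; in the following Python sketch all names are placeholders of ours for the routines of the previous section (in particular the equality test would be the conjugacy check (R4)):

```python
def find_mapping_class(candidates, apply_to_gamma, eta, freely_homotopic):
    """Return the first candidate phi with phi(gamma) freely homotopic to eta,
    or None if no candidate works. apply_to_gamma computes phi(gamma);
    freely_homotopic is the free-homotopy test. All names are placeholders."""
    for phi in candidates:
        if freely_homotopic(apply_to_gamma(phi), eta):
            return phi
    return None
```

Since the list has polynomially many entries and each test is polynomial, the whole loop is polynomial.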
\medskip
To produce the desired list $\mathcal C$ of candidates we start by making a preliminary list, this time consisting of pants decompositions:
\subsection*{Step 2: Get the set $\CP$ of plausible images of $\vec P$}
Suppose that $\vec\alpha$ is an ordered pants decomposition satisfying $\vec\alpha=\phi(\vec{P})$ for some $\phi\in P\Map(\Sigma)$ with $\phi(\gamma)=\eta$. Applying Lemma \ref{mainlem} to the filling curve $\eta$ and to $\vec\alpha$ we get
\begin{equation}\label{eq-11}
\ell_\CT(\vec\alpha)
\le \ell_{\CT}(\eta)\cdot\iota(\vec\alpha,\eta)=\ell_{\CT}(\eta)\cdot\iota(\phi(\vec{P}),\phi(\gamma))=\ell_{\CT}(\eta)\cdot\iota(\vec{P},\gamma).
\end{equation}
\begin{remark}
For later use note that, replacing $\vec{P}$ by $B$ in this computation we also get
\begin{equation}\label{eq-10}
\ell_\CT(B')\le\ell_{\CT}(\eta)\cdot\iota(B,\gamma)
\end{equation}
for any multicurve $B'$ which could arise as $B'=\phi(B)$ for some $\phi\in P\Map(\Sigma)$ with $\phi(\gamma)=\eta$.
\end{remark}
Now, for every admissible $\vec m\in\mathcal A(\CT)\subset\BZ_{\ge 0}^{E(\CT)}$ with
$$\Vert\vec m\Vert\stackrel{\text{def}}=\sum\vert m_e\vert\le\ell_{\CT}(\eta)\cdot\iota(\vec{P},\gamma)$$
we consider the submanifold $\gamma_{\vec m}$ satisfying \eqref{I am sick of this} for each $e\in E(\CT)$ and check if it is a pants decomposition. To do that we have to check if its components are essential (R3), if no two distinct components are freely homotopic to each other (R4), and if the connected components of the complement are pairs of pants (R6). If $\gamma_{\vec m}$ is not a pants decomposition, throw it away while proclaiming it to be dust of the earth. Otherwise, that is if $\gamma_{\vec m}$ is a pants decomposition, take the $2^{3g-3+b}\cdot(3g-3+b)!$ ordered oriented multicurves obtained by considering all possible orderings and orientations of the components of $\gamma_{\vec m}$. For every one of those ordered oriented multicurves we check if they have the same type as $\vec P$. Those who don't are dumped into the dustbin of history, while we preciously keep the others in the set
$$\CP=\left\{
\vec\alpha\middle\vert\begin{array}{l}
\text{ ordered oriented multicurve of type }\vec P\text{ whose underlying multicurve }\alpha\\\text{ arises as }\gamma_{\vec m}\text{ for some admissible }\vec m\in\mathcal A(\CT)\text{ with }\Vert\vec m\Vert\le\ell_{\CT}(\eta)\cdot\iota(\vec{P},\gamma)
\end{array}\right\}.$$
Note that, by the preceding discussion, it takes polynomial time to produce the set $\CP$ and that \eqref{eq-11} holds for every $\vec\alpha\in\CP$.
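To make the enumeration in Step 2 concrete: for a fixed triangulation there are only polynomially many vectors to examine, since the number of $\vec m\in\BZ_{\ge 0}^{E}$ with $\Vert\vec m\Vert\le N$ is $\binom{N+E}{E}$. A short Python sketch of this enumeration (the function name is ours; the admissibility and pants-decomposition checks (R3), (R4), (R6) are omitted):

```python
def bounded_vectors(num_edges, max_norm):
    """Yield every vector m in Z_{>=0}^num_edges with ||m|| = sum(m) <= max_norm."""
    if num_edges == 0:
        yield ()
        return
    for m0 in range(max_norm + 1):  # weight assigned to the first edge
        for rest in bounded_vectors(num_edges - 1, max_norm - m0):
            yield (m0,) + rest
```

For a fixed number of edges $E$ this count is $O(N^E)$, matching the polynomial-time claim above.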
So far all we have is a collection of pants decompositions. What we want is a collection of mapping classes of homotopy equivalences.
\subsection*{Step 3: Get a preliminary list $\mathcal C_0$ of maps}
Take now $\vec\alpha\in\CP$ and run (R8) to get an adapted triangulation. Then we run (R7) on each connected component of $\Sigma\setminus\vec\alpha$ to get a proper homotopy equivalence
$$\phi_{\vec\alpha}:\Sigma\to\Sigma$$
mapping each boundary component to itself and $\vec P$ to $\vec\alpha$. Let
$$\mathcal C_0=\{\phi_{\vec\alpha}\text{ for }\vec\alpha\in\CP\}$$
be the collection of maps obtained in this way.
As remarked after describing (R7) we can bound the Lipschitz constant of $\phi_{\vec\alpha}\in\mathcal C_0$ polynomially in terms of $\ell_\CT(\vec\alpha)$, meaning in particular that
$$\ell_\CT(\phi_{\vec\alpha}(\beta))\le\Pol(\ell_{\CT}(\vec\alpha))\cdot \ell_\CT(\beta)$$
for every curve $\beta\subset\Sigma$. Taking \eqref{eq-11} into consideration, and reminding the reader that the concrete polynomial $\Pol$ might change from line to line, we get
\begin{equation}\label{eq-12}
\ell_\CT(\phi_{\vec\alpha}(\beta))\le\Pol\left(\ell_{\CT}(\eta),\iota(\vec{P},\gamma),\ell_\CT(\beta)\right)
\end{equation}
for every $\phi_{\vec\alpha}\in\mathcal C_0$ and every curve $\beta$.
For the convenience of the reader we summarise what we have achieved after running Step 2 and Step 3:
\begin{quote}
{\bf Outcome so far:} {\em We have produced in polynomial time a collection $\mathcal C_0$ of polynomially many homotopy equivalences $\phi_{\vec\alpha}:\Sigma\to\Sigma$ with the following properties:
\begin{itemize}
\item If $\phi\in P\Map(\Sigma)$ is such that $\phi(\gamma)=\eta$, then there is $\phi_{\vec\alpha}\in\mathcal C_0$ with $\phi(\vec P)=\phi_{\vec\alpha}(\vec P)$.
\item The inequality \eqref{eq-12} holds for every $\phi_{\vec\alpha}\in\mathcal C} \newcommand{\calD}{\mathcal D_0$ and every curve $\beta\subset\Sigma$.
\end{itemize}}
\end{quote}
As we already mentioned earlier, we get from the Dehn-Nielsen-Baer theorem that every proper homotopy equivalence $\Sigma\to\Sigma$ represents a mapping class. We can thus think of $\mathcal C_0\subset P\Map(\Sigma)$. The collection $\mathcal C_0$ is going to be contained in our definitive list of candidates.
\subsection*{Step 4: Get our list $\mathcal C\subset P\Map(\Sigma)$ of candidates}
Suppose that we have $\phi\in P\Map(\Sigma)$ with $\phi(\gamma)=\eta$ and let $\phi_0\in\mathcal C_0$ be the element satisfying
$$\phi(\vec P)=\phi_0(\vec P).$$
This means that $\phi$ and $\phi_0$ differ by an element of the stabiliser of the ordered oriented pants decomposition $\vec P$, that is by a multi-twist along $\vec P$. In other words there is an $\vec n\in\BZ^{3g-3+b}$ with
\begin{equation}\label{eating curry}
\phi=\phi_0\circ T_{\vec P}^{\vec n}
\end{equation}
where $T_{\vec P}^{\vec n}=T_{P_1}^{n_1}\circ\dots\circ T_{P_{3g-3+b}}^{n_{3g-3+b}}$ is the multi-twist along $\vec P$ which twists $n_i$ times around the component $P_i$. Note that we can rewrite $\phi$ also as
$$\phi=\phi_0\circ T_{\vec P}^{\vec n}=\phi_0\circ T_{\vec P}^{\vec n}\circ\phi^{-1}_0\circ\phi_0=T_{\phi_0(\vec P)}^{\vec n}\circ\phi_0$$
where $T_{\phi_0(\vec P)}^{\vec n}$ is the multi-twist which twists $n_i$ times around the $i$-th component of $\phi_0(\vec P)$. Since $\phi(\gamma)=\eta$ we thus get that
$$(T_{\phi_0(\vec P)}^{\vec n}\circ\phi_{0})(\gamma)=\eta.$$
In particular we get from \eqref{eq-10} that
$$\ell_\CT((T_{\phi_0(\vec P)}^{\vec n}\circ\phi_{0})(B))\le\ell_{\CT}(\eta)\cdot\iota(B,\gamma),$$
and thus, using that $B,P\subset\CT^{(1)}$, that
\begin{equation}\label{eq-13}
\iota(B+P,T_{\phi_0(\vec P)}^{\vec n}(\phi_{0}(B)))\le\ell_{\CT}(\eta)\cdot\iota(B,\gamma).
\end{equation}
We are now ready to make use of the following lemma due to Ivanov \cite[Lemma 4.2]{Ivanov}:
\begin{lemma}\label{Ivanov}
Let $\alpha_1,\dots,\alpha_s$ be disjoint simple curves in $\Sigma$ and let $T=T_{\alpha_1}^{n_1}\cdot\ldots\cdot T_{\alpha_s}^{n_s}$ be the multitwist which twists $n_i$ times around $\alpha_i$. Then we have
$$\iota(T(\delta),\delta')\ge \sum_i(\vert n_i\vert-2)\cdot\iota(\delta,\alpha_i)\cdot\iota(\alpha_i,\delta')-\iota(\delta,\delta')$$
for all simple multicurves $\delta,\delta'$ in $\Sigma$.
\end{lemma}
Applying this lemma to $T=T_{\phi_0(\vec P)}^{\vec n}$, to $\delta=\phi_0(B)$, and individually to $\delta'=P$ and $\delta'=B$, summing the two resulting estimates, we get:
$$\iota(T_{\phi_0(\vec P)}^{\vec n}(\phi_0(B)),P+B)
\ge \sum_i(\vert n_i\vert-2)\cdot\iota(\phi_0(B),\phi_0(P_i))\cdot\iota(\phi_0(P_i),P+B)-\iota(\phi_0(B),P+B).$$
Taking into account that
$$\iota(\phi_0(B),\phi_0(P_i))=\iota(B,P_i)\ge 1\text{ and }\iota(\phi_0(P_i),P+B)\ge 1$$
we get the rather coarse estimate
$$\iota(T_{\phi_0(\vec P)}^{\vec n}(\phi_0(B)),P+B)
\ge \sum_i(\vert n_i\vert-2)-\iota(\phi_0(B),P+B).$$
Using \eqref{eq-12}, \eqref{eq-13}, and some algebra we get:
\begin{equation}\label{eq-14}
\sum_i\vert n_i\vert\le \Pol(\ell_{\CT}(\eta),\ell_{\CT}(\gamma)).
\end{equation}
We thus have established that any $\phi\in P\Map(\Sigma)$ with $\phi(\gamma)=\eta$ belongs to the set
$$\mathcal C=\{\phi_0\circ T_{\vec P}^{\vec n}\text{ where }\phi_0\in\mathcal C_0\text{ and }\vec n\in\BZ^{3g-3+b}\text{ satisfies \eqref{eq-14}}\}.$$
This means that the set $\mathcal C$ satisfies property (C1). It also satisfies property (C2). With this we are done with the construction of our algorithm and thus with the proof of Theorem \ref{thm-main}... under the additional assumptions that the center of the mapping class group $\Map(\Sigma)$ is trivial and that the elements $\gamma$ and $\eta$ are primitive. We deal with these two issues individually.
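To illustrate the size of the final candidate list: for fixed topology there are only polynomially many twist vectors $\vec n$ with $\sum_i\vert n_i\vert$ bounded as in \eqref{eq-14}. A Python sketch of their enumeration (the function name is ours):

```python
def twist_vectors(dim, max_norm):
    """Yield every n in Z^dim with sum(|n_i|) <= max_norm (candidate twist exponents)."""
    if dim == 0:
        yield ()
        return
    for n0 in range(-max_norm, max_norm + 1):  # twists around the first curve
        for rest in twist_vectors(dim - 1, max_norm - abs(n0)):
            yield (n0,) + rest
```

For fixed dimension the number of such vectors is polynomial in the norm bound, so the list $\mathcal C$ stays polynomial in size.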
\subsubsection*{What to do if the center of $\Map(\Sigma)$ is not trivial?} First we should make clear where we used this assumption. Well, if the center is not trivial then we do not get \eqref{eating curry}, but rather only that
$$\phi=\phi_0\circ T^{\vec n}_{\vec P}\circ\tau$$
where $\tau\in C(\Map(\Sigma))$. This is not tragic because, as we mentioned earlier, there is at most a single non-trivial element in $C(\Map(\Sigma))$, namely the hyper-elliptic involution $\tau_0$. It follows that we just have to check if there is some $\vec n\in\BZ^{3g-3+b}$ with
$$\text{either }(\phi_0\circ T^{\vec n}_{\vec P})(\gamma)=\eta\text{ or }(\phi_0\circ T^{\vec n}_{\vec P})(\gamma^{\tau_0})=\eta$$
where $\gamma^{\tau_0}=\tau_0(\gamma)$. This means that we just have to run the algorithm above twice, once for $\gamma$ and once for $\gamma^{\tau_0}$. It is probably unnecessary to say so, but twice a polynomial is still a polynomial. We have thus dropped the assumption that $C(\Map(\Sigma))$ is trivial.
\subsubsection*{What to do with elements which are not primitive?}
Let us drop now the assumption that the elements $\gamma$ and $\eta$ are primitive. In \cite{Despre-Lazarus}, Despr\'e and Lazarus show that in surface groups the primitive positive root of an element can be computed in polynomial time. This means that in polynomial time we get $\bar\gamma,\bar\eta\in\pi_1(\Sigma,*)$ primitive and $r,s\ge 1$ with
$$\gamma=\bar\gamma^r\text{ and }\eta=\bar\eta^s.$$
Now, since (non-trivial) abelian subgroups of surface groups are contained in unique maximal cyclic subgroups, we get that $\bar\gamma,\bar\eta,r$ and $s$ are unique. It follows that $\gamma$ and $\eta$ are of the same type if and only if the primitive elements $\bar\gamma$ and $\bar\eta$ are of the same type and $r=s$. Since we can check that in polynomial time, we are done.
\end{proof}
Having proved Theorem \ref{thm-main} we come now to the proof of Theorem \ref{thm-main1}. There is not much to say:
\begin{proof}[Proof of Theorem \ref{thm-main1}]
Suppose that we are given two elements $\gamma,\eta\in\pi_1(\Sigma,*)$ in the fundamental group of a closed connected surface, and let $\bar\gamma$ and $\bar\eta$ be their conjugacy classes. The elements $\gamma$ and $\eta$ differ by an automorphism $\phi\in\Aut(\pi_1(\Sigma,*))$ if and only if the classes $\bar\gamma$ and $\bar\eta$ differ by an exterior automorphism $\bar\phi\in\Out(\pi_1(\Sigma,*))$. Now, from the Dehn-Nielsen-Baer theorem \cite{MR2850125}, we get an identification $\Out(\pi_1(\Sigma,*))\simeq\Map(\Sigma)$ between the group of exterior automorphisms and the mapping class group. This identification is compatible with the identification between conjugacy classes in $\pi_1(\Sigma,*)$ and free homotopy classes of curves in $\Sigma$. It follows that $\gamma$ and $\eta$ differ by an automorphism if and only if the two associated (free homotopy classes of) curves $\bar\gamma$ and $\bar\eta$ are of the same type. We can decide that in polynomial time thanks to the algorithm used to prove Theorem \ref{thm-main}. Moreover, if $\bar\gamma$ and $\bar\eta$ differ by a mapping class then we get some $\bar\phi\in\Map(\Sigma)=\Out(\pi_1(\Sigma,*))$ with $\bar\phi(\bar\gamma)=\bar\eta$. Referring to a representative of $\bar\phi$ by the same symbol, all that remains to do is to find $g\in\pi_1(\Sigma,*)$ with $\bar\phi(\gamma)=g\eta g^{-1}$. Since, by the proof of Theorem \ref{thm-main}, the Lipschitz constant of $\bar\phi$ is polynomially bounded by the complexity of $\gamma$ and $\eta$, what we have to do is to find a conjugating element in polynomial time. This is well known to be possible because $\pi_1(\Sigma,*)$ is hyperbolic \cite{Holt}. We are done.
\end{proof}
\section{A slightly more efficient approach}\label{sec: dehn-thurston}
Although clearly possible, it would be painful and frustrating to try to give actual estimates for the running time of the algorithm used to prove Theorem \ref{thm-main}: evidently it would be painful, and it would be frustrating because one would get truly horrible bounds. This is why we now sketch a variation of the same algorithm, using Dehn-Thurston coordinates instead of triangulations, getting a not-that-bad estimate for the running time. For the sake of concreteness we will focus on the most interesting case:
\begin{quote}
{\bf Assumption.} {\em Let $\gamma$ and $\eta$ be primitive filling curves in $\Sigma$.}
\end{quote}
We refer to \cite{Thesis} for a detailed and improved version of the results in this section, with explicit values for the involved constants.
Anyways, as all along, we are aiming to determine if there is $\phi\in P\Map(\Sigma)$ with $\phi(\gamma)=\eta$. As we did above, we fix an ordered pants decomposition $\vec P = (P_1,\dots,P_{3g-3+b})$, and a simple multicurve $B$ with the property that $\iota(B,P_i)=2$ for all $i=1,\dots,3g-3+b$. We also fix a hyperbolic metric on $\Sigma$ and let
$$L=\max\{\ell_\Sigma(\gamma),\ell_\Sigma(\eta)\}.$$
In the sequel we will write $\const$ for any constant which depends only on $\vec P$, $B$ and the metric, but might change from line to line.
Anyways, we denote by
$$\Pi=\Pi_{\vec P,B}:\calD\to\mathcal M\CL_\BZ(\Sigma)$$
the Dehn-Thurston parametrisation \cite{Penner-Harer} associated to $\vec P$ and $B$. Here $\mathcal M\CL_\BZ(\Sigma)$ is the set of all integrally weighted simple multicurves and $\calD$ is the set of all pairs $(\vec m,\vec t)\in\BZ_{\ge 0}^{3g-3+b}\times\BZ^{3g-3+b}$ such that $t_i\ge 0$ whenever $m_i=0$ and such that $m_{i_1}+m_{i_2}+m_{i_3}$ is even whenever the components $P_{i_1},P_{i_2}$ and $P_{i_3}$ bound a pair of pants. If $\gamma=\Pi(\vec m,\vec t)$ then we let
$$\ell_{\vec P,B}(\gamma) := \sum_{i=1}^{3g-3+b} m_i + \sum_{i=1}^{3g-3+b} |t_i|$$
be the \textit{combinatorial length} of a multicurve $\gamma$ with respect to $\vec P$ and $B$.
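To fix ideas, the membership condition for $\calD$ and the combinatorial length translate directly into code. In the Python sketch below we assume, purely for concreteness, that the pants decomposition is encoded as a list of index triples, one triple per pair of pants; this encoding and the function names are ours:

```python
def in_domain(m, t, pants_triples):
    """Check that (m, t) lies in the domain D of the Dehn-Thurston
    parametrisation: t_i >= 0 whenever m_i == 0, and the m-coordinates of
    the three curves bounding each pair of pants have even sum."""
    if any(mi == 0 and ti < 0 for mi, ti in zip(m, t)):
        return False
    return all((m[i] + m[j] + m[k]) % 2 == 0 for i, j, k in pants_triples)

def combinatorial_length(m, t):
    """ell_{P,B} of the multicurve with Dehn-Thurston coordinates (m, t)."""
    return sum(m) + sum(abs(ti) for ti in t)
```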
It follows from general principles (for example the Milnor-\v Svarc lemma) that the combinatorial length of a curve, its hyperbolic length, as well as its intersection number with $\vec P+B$ are all comparable to each other. We thus have:
\begin{lemma}\label{lot of noise1}
With notation as above we have
$$\frac{1}{\const} \cdot\ell_{\vec P,B}(\gamma) \le \iota(\gamma,\vec P+B) \le\const\cdot\ell_{\vec P,B}(\gamma)$$
for every simple multicurve $\gamma$.\qed
\end{lemma}
We also have the following version of Lemma \ref{mainlem}.
\begin{lemma}\label{lot of noise2}
Let $\gamma$ be a filling curve and $\alpha$ a set of curves in $\Sigma$. Then we have
$$\iota(\alpha, \beta) \le
\const \cdot \ell_\Sigma(\alpha) \cdot \ell_\Sigma(\gamma) \cdot (\iota(\beta,\beta)+1)\cdot \iota(\beta,\gamma)$$
for any multicurve $\beta$ in $\Sigma$.
If moreover $\Sigma$ is closed then the statement remains true even after deleting the factor $\iota(\beta,\beta)+1$.\qed
\end{lemma}
Suppose now that $\phi\in P\Map(\Sigma)$ is such that $\phi(\gamma)=\eta$. Recalling that the value of the constant may change from line to line, we get from these two lemmas that
\begin{align*}
\ell_{\vec P,B}(\phi(\vec P))
&\le\const\cdot\iota(\phi(\vec P),\vec P+B)\\
&\le \const \cdot \ell_\Sigma(\vec P+B) \cdot \ell_\Sigma(\eta) \cdot \iota(\phi(\vec P),\eta)\\
&=\const \cdot \ell_\Sigma(\eta) \cdot \iota(\phi(\vec P),\phi(\gamma))\\
&\le \const \cdot L \cdot \iota(\vec P,\gamma)\\
&\le \const\cdot L^2.
\end{align*}
This means that
$$\phi(\vec{P})\in\CP\stackrel{\text{def}}=\left\{\vec\alpha\
\middle\vert\begin{array}{l}
\text{ ordered oriented multicurve of type } \vec P\text{ whose underlying multicurve}\\\text{ arises as }\Pi(\vec{m},\vec{t})
\text{ for some } (\vec{m},\vec{t})\in\calD \text{ with } \Vert (\vec{m},\vec{t}) \Vert \le\const\cdot L^2
\end{array}\right\}.$$
How long does it take to produce $\CP$? Well, for any vector $(\vec m,\vec t)\in\calD$ with $\Vert(\vec m,\vec t)\Vert\le\const \cdot {L^2}$ it takes $O(L^2)$ time to check if the associated multicurve $\alpha=\Pi_{\vec P,B}(\vec m,\vec t)$ is a pants decomposition and, if this is the case, to draw an embedded copy of the dual graph. Once we have the dual graph we can determine, for all possible orders $\vec\alpha$ of the components of $\alpha$, if $\vec\alpha$ is of type $\vec P$: just compare the two dual graphs. But even more, knowing how to map $\vec P$ and its dual graph to $\vec\alpha$ and its dual graph, we actually get a concrete homotopy equivalence doing exactly that.
This means that it takes $O(L^2)$ time to check if some $(\vec m,\vec t)\in\calD$ with $\Vert(\vec m,\vec t)\Vert\le\const\cdot L^2$ corresponds to $\vec\alpha\in\CP$ and, if this is the case, to construct a concrete proper homotopy equivalence $\phi_{\vec\alpha}$ sending $\vec P$ to $\vec\alpha$. Since there are at most $L^{4(3g-3+b)}$ vectors to check we get:
\begin{lemma}
It takes time $O(L^{12g-10+4b})$ to (1) produce $\CP$, and (2) for each $\vec\alpha\in\CP$ produce a proper homotopy equivalence $\phi_{\vec\alpha}:\Sigma\to\Sigma$ mapping $\vec P$ to $\vec\alpha$ and satisfying
$$\iota(\phi_{\vec\alpha}(\beta),\vec P+B) \le \const \cdot \iota(\beta,\vec P+B) \cdot L^2$$
for every simple multicurve $\beta$. \qed
\end{lemma}
Still assuming that $\phi(\gamma)=\eta$ we note that we have
$$\phi=\phi_{\phi(\vec P)}\circ T^{\vec n}_{\vec P}$$
for some multitwist $T^{\vec n}_{\vec P}$ along $\vec P$. Arguing as above, that is, using Ivanov's lemma, we get a bound on the norm of $\vec n$:
$$\Vert\vec n\Vert\le\const\cdot L^2.$$
On the other hand, if we are given $\vec\alpha\in\CP$ and $\vec n \in \BZ^{3g-3+b}$ with norm at most $\const\cdot L^2$, it takes $O(L^2)$ time to check if $(\phi_{\vec\alpha}\circ T_{\vec P}^{\vec n})(\gamma)=\eta$. Taking all of this together we get:
\begin{quote}
{\em It takes $O(L^{18g+6b-14})$ time to detect if $\gamma$ and $\eta$ are of the same type.}
\end{quote}
We are done.
\bibliographystyle{amsplain}
% https://arxiv.org/abs/2012.09792, Deciding when two curves are of the same type
% https://arxiv.org/abs/2111.02471, Generalized Integer Splines on Arbitrary Graphs
\section{Paths to Zero}
Of particular interest are the so-called \textbf{paths to zero}. To introduce these, we recall the following ideas from graph theory:
\begin{definition}
Let $G$ be a graph with vertex set $V$ and edge set $E$. A \textbf{path in $G$} is a sequence of vertices and edges $\mathcal{P} = \left\{p_{0}, e_{1}, p_{1}, e_{2}, \ldots, e_{n}, p_{n}\right\}$ such that each edge $e_{i}$ joins $p_{i-1}$ and $p_{i}$, and $p_{i} \neq p_{j}$ for $i \neq j$.
\end{definition}
We say that $\mathcal{P}$ is a path from $p_{0}$ to $p_{n}$, and we say the vertices $p_{1}$, $p_{2}$, \ldots, $p_{n-1}$ are the intermediate vertices. Note that we are allowing the possibility for multiple edges to join two vertices; hence the necessity of specifying both the vertices and the edges along the path.
\begin{definition}
Let $\mathcal{P}$ be a path in $(G, A)$, along edges $e_{1}$, $e_{2}$, \ldots, $e_{n}$ with weights $a_{1}, a_{2}, \ldots, a_{n}$. We define $W(\mathcal{P}) = (a_{1}, a_{2}, \ldots, a_{n})$.
\end{definition}
In other words, $W(\mathcal{P})$ is the greatest common divisor of the weights of the edges in path $\mathcal{P}$. Informally, we will refer to the GCD of the path $\mathcal{P}$.
\begin{definition}
Let $i, j$ be given, and let $P$ be the set of all paths between $i,j$. We define $M(i,j) = [\{W(\mathcal{P})\}_{\mathcal{P} \in P}]$.
\end{definition}
In other words, given vertices $i, j$, $M(i,j)$ is the LCM of the GCDs of all paths between $i,j$.
Consequently:
\begin{theorem}
\label{theorem:GCD of LCM}
Let $(G, A)$ be an edge-weighted graph. If $x = (x_{1}, x_{2}, \ldots, x_{n})$ is a spline in $S_{G}(A)$, then $x_{i} \equiv x_{j} \bmod M(i,j)$.
\end{theorem}
\begin{proof}
Let $P$ be the set of all paths between $i,j$, and let $\mathcal{P}$ be any path in $P$. By Corollary \ref{corr:LCM of moduli}, $x_{i} \equiv x_{j} \bmod W(\mathcal{P})$. Since this is true for all paths in $P$, then by Corollary \ref{corr:GCD of moduli}, $x_{i} \equiv x_{j} \bmod M(i,j)$.
\end{proof}
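On small examples $M(i,j)$ can be computed directly from the definition by exhaustively enumerating simple paths. A naive Python sketch of ours (adequate only for illustration, since the number of simple paths can grow exponentially):

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

def M(edges, i, j):
    """LCM of W(P) over all simple paths P from i to j.
    edges: list of (u, v, weight); multiple edges between u, v are allowed."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    gcds = []
    def dfs(u, visited, g):
        if u == j:
            gcds.append(g)
            return
        for v, w in adj.get(u, []):
            if v not in visited:
                dfs(v, visited | {v}, gcd(g, w))
    dfs(i, {i}, 0)  # gcd(0, w) == w, so g accumulates the GCD along the path
    return reduce(lcm, gcds)
```

For instance, on a $3$-cycle with weights $4, 6, 10$ on the edges $12$, $23$, $31$, this gives $M(1,2) = [4, (10,6)] = [4, 2] = 4$.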
We define the following:
\begin{definition}
Let $(G, A)$ be an edge-weighted graph, and let $x = (x_{1}, x_{2}, \ldots, x_{n})$ be a spline in $S_{G}(A)$. Let $x_{i} = 0$ for one or more $i$. Let $k$ be any vertex in $G$ with $x_{k} \neq 0$. A \textbf{path to zero} is any path from $k$ to $i$, where $x_{j} \neq 0$ for every intermediate vertex $j$.
\end{definition}
\begin{lemma}
Let $x$ be a spline in $S_{G}(A)$, and let $x_{i} = 0$ for at least one $i$. If $x_{j} \neq 0$, then $x_{j} \equiv 0 \bmod P$, where $P$ is the least common multiple of the greatest common divisors of the edge weights along all paths to $0$ from $j$.
\end{lemma}
\begin{proof}
This follows immediately from Theorem \ref{theorem:GCD of LCM}.
\end{proof}
Of particular interest are splines whose first $k$ components are $0$. We assume a collapse ordering $V$, and define:
\begin{definition}
The \textbf{flow up class} $\mathcal{F}_{k}$ is the set of splines in $S_{G}(A)$ whose first nonzero component is the $(k+1)$-st component.
\end{definition}
It will be convenient to designate $\mathcal{F}_{0}$ to be the set of splines whose first component is non-zero.
Let $x$ be a spline in $\mathcal{F}_{k}$, and let $L(x)$ designate the leading (nonzero) component. We say that $x$ is \textbf{minimal} if, for any $y \in \mathcal{F}_{k}$, $\lvert L(x) \rvert \leq \lvert L(y) \rvert$.
Let $\mathcal{B} = \left\{B_{0}, B_{1}, B_{2}, \ldots, B_{n-1}\right\}$, where $B_{i} \in \mathcal{F}_{i}$ (so the first non-zero component of $B_{i}$ will be the $(i+1)$-st component). We now raise the question: What elements $B_{i}$ form a minimal basis for $S_{G}(A)$?
We prove the following:
\begin{lemma}
\label{lemma star collapse invariant}
Let $i, j \in V$ be given, and let $(G', A')$ be a graph produced by a star deletion on $(G, A)$ that leaves vertices $i, j \in V'$. Then $M(i,j)$, for paths in $(G, A)$, will equal $M(i,j)$ for paths in $(G', A')$.
\end{lemma}
\begin{proof}
Let $M$ be the set of GCDs of the edges along the paths from $i$ to $j$ in $(G, A)$, and $M'$ be the corresponding set for $(G', A')$.
Let $\mathcal{P}$ be a path from $i$ to $j$ in $(G, A)$, and note that $W(\mathcal{P}) \in M$. If $\mathcal{P}$ does not include the deleted vertex, then $W(\mathcal{P})$ will be an element of $M'$, since a path in $(G', A')$ exists with the same edge weights.
On the other hand, suppose $\mathcal{P}$ passes through the deleted vertex. Let $W$ be the set of weights on $\mathcal{P}$. Under star deletion, this path will become a new path in $(G', A')$; let $W'$ be the set of weights along this new path. The only difference between $W$ and $W'$ is that the weights $p_{1}$, $p_{2}$ of the two edges of $\mathcal{P}$ joining to the deleted vertex will be replaced with the weight $(p_{1}, p_{2})$ of a single edge. But since $((a, b), c) = (a, b, c)$, it follows that the GCD of the weights in $W$ will be the same as the GCD of the weights in $W'$. Hence $W(\mathcal{P})$ will also be an element of $M'$.
Thus $M \subset M'$. Conversely, every path from $i$ to $j$ in $(G', A')$ either already exists in $(G, A)$, or uses one of the new clique edges and so lifts to a path in $(G, A)$ through the deleted vertex; the same GCD computation then shows $M' \subset M$.
Consequently $M = M'$, and so the LCM of the elements of $M$ will also be the LCM of the elements of $M'$. Thus if $M(i,j)$ is the LCM of the GCD of all paths from $i$ to $j$ in $(G, A)$, it remains the LCM of the GCD of all paths from $i$ to $j$ in $(G', A')$.
\end{proof}
Consequently:
\begin{theorem}
The minimal spline in $\mathcal{F}_{k}$ has leading term equal to the LCM of the GCD of all paths from $k + 1$ to zero.
\end{theorem}
\begin{proof}
Let $M$ be the LCM of the GCD of all paths from $k+1$ to zero in $(G, A)$.
By repeatedly applying star collapse on vertices $i > k + 1$, we obtain a graph $(G^{*}, A^{*})$ with vertices $1, 2, 3, \ldots, k + 1$ and edge set $E^{*}$.
The edges in $(G^{*}, A^{*})$ fall into one of two categories: either they join vertices $i, j$ where $i, j \leq k$; or (without loss of generality) they join some vertex $i < k+1$ to vertex $j = k +1$.
In the former case, $x_{i} = 0$, $x_{j} = 0$ solves the corresponding congruences. The latter case gives a system of congruences of the form $x_{k+1} \equiv 0 \bmod a_{e}$, $e \in E^{*}$.
Since star collapse does not alter the LCM of the GCD of the paths to zero, it follows that $x_{k+1} = M$ is the least positive solution to the system of congruences.
We can then use Theorem \ref{theorem: spline expansion} to find the remaining values $x_{k+2}, x_{k+3}, \ldots, x_{n}$.
\end{proof}
As an example, consider the graph in Figure \ref{fig:collapse0}, with collapse ordering $V = \left\{A, B, E, C, D\right\}$. The minimal basis $\left\{B_{0}, B_{1}, B_{2}, B_{3}, B_{4}\right\}$ will be the following:
First, $B_{0} = (1, 1, 1, 1, 1)$, which will always be a basis vector for any $(G, A)$.
Next, let $x_{A} = 0$. The second component of the basis vector will be the LCM of the GCD of the edge weights of all paths from $B$ to $A$. We note there are the following paths:
\begin{itemize}
\item
$B$ to $A$, with weight $8$ and GCD $8$,
\item
$B$ to $C$ to $A$, with edge weights $6, 9$ and
$(6, 9) = 3$,
\item
$B$ to $E$ to $A$, with edge weights $10, 14$ and $(10, 14) = 2$,
\item
$B$ to $C$ to $E$ to $A$, with edge weights $6, 3, 14$ and GCD $(6, 3, 14) = 1$. (Hereafter we will simply list the edge weights in the GCD computation)
\item
$B$ to $D$ to $E$ to $A$, with GCD $(12, 9, 14) = 1$,
\item
$B$ to $D$ to $E$ to $C$ to $A$, with GCD $(12, 9, 3, 9) = 3$
\end{itemize}
The LCM of these GCDs is $[8, 3, 2, 1, 1, 3] = 24$, so $L(B_{1}) = 24$.
Now let $x_{A}, x_{B} = 0$. The third component of the basis vector $B_{2}$ will be the LCM of the GCD of the edge weights of all paths from $E$ to either $A$ or $B$. We note these paths are:
\begin{itemize}
\item
$E$ to $A$, GCD $14$,
\item
$E$ to $B$, GCD $10$,
\item
$E$ to $C$ to $A$, GCD $(3, 9) = 3$,
\item
$E$ to $C$ to $B$, GCD $(3, 6) = 3$,
\item
$E$ to $D$ to $B$, GCD $(9, 12) = 3$,
\end{itemize}
The LCM of these GCDs will be $[14, 10, 3, 3, 3] = 210$, so $L(B_{2}) = 210$.
Now let $x_{A}, x_{B}, x_{E} = 0$. The fourth component of the basis vector $B_{3}$ will be the LCM of the GCD of the edge weights of all paths from $C$ to $A, B$ or $E$. We note that paths are:
\begin{itemize}
\item
$C$ to $A$, GCD $9$,
\item
$C$ to $B$, GCD $6$,
\item
$C$ to $E$, GCD $3$,
\end{itemize}
The LCM of these GCDs will be $[9, 6, 3] = 18$, so $L(B_{3}) = 18$.
Finally let $x_{A}, x_{B}, x_{E}, x_{C} = 0$. The fifth component of the basis vector $B_{4}$ will be the LCM of the GCD of the edge weights of all paths from $D$ to $A, B, E, C$; note that in fact the only paths to zero from $D$ are $D$ to $E$ (GCD $9$) and $D$ to $B$ (GCD $12$), so the LCM of the GCDs will be $[9, 12] = 36$, and so $L(B_{4}) = 36$.
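The computations in this example are easily checked mechanically. In the Python sketch below the edge weights are the ones read off from the paths listed above (so the figure itself is not needed); `leading_term`, a name of ours, enumerates the paths to zero from a given vertex and returns the LCM of their GCDs:

```python
from math import gcd
from functools import reduce

def lcm(a, b):
    return a * b // gcd(a, b)

# Edge weights of the example graph, as read off the paths listed above.
EDGES = [("A", "B", 8), ("B", "C", 6), ("C", "A", 9), ("B", "E", 10),
         ("E", "A", 14), ("C", "E", 3), ("D", "E", 9), ("D", "B", 12)]

def leading_term(edges, source, zeros):
    """LCM of the GCDs over all paths to zero from `source`: simple paths
    that end at a vertex of `zeros` and have no zero intermediate vertex."""
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
        adj.setdefault(v, []).append((u, w))
    gcds = []
    def dfs(u, visited, g):
        for v, w in adj[u]:
            if v in zeros:
                gcds.append(gcd(g, w))  # a path to zero stops at its first zero
            elif v not in visited:
                dfs(v, visited | {v}, gcd(g, w))
    dfs(source, {source}, 0)
    return reduce(lcm, gcds)

# Leading terms of the flow-up basis computed above:
assert leading_term(EDGES, "B", {"A"}) == 24
assert leading_term(EDGES, "E", {"A", "B"}) == 210
assert leading_term(EDGES, "C", {"A", "B", "E"}) == 18
assert leading_term(EDGES, "D", {"A", "B", "E", "C"}) == 36
```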
\section{Proof of Minimality}
We now prove an important result:
\begin{theorem}
The LCM of the GCD of all paths to $0$ from $v_{i}$ is invariant under a reduction operation.
\end{theorem}
\begin{proof}
There are two reduction operations, so we will consider them separately.
Let $\mathcal{P} = \left\{p_{1}, p_{2}, \ldots, p_{n}\right\}$ be the set of all paths to $0$ from vertex $i$.
\end{proof}
\section*{Introduction}
Consider a graph $G$ with integer edge weights. Label the vertices so that if two vertices are joined by an edge with weight $w$, then their vertex labels are congruent modulo $w$. This leads to the following questions:
\begin{itemize}
\item
Given arbitrary edge weights, can we always construct a vertex labeling?
\item
Can we find a finite set that generates all vertex labelings?
\end{itemize}
In \cite{julia}, Tymoczko et al.\ introduced the study of generalized splines on a graph with edges weighted by ideals in a commutative ring $R$. Their foundational paper concludes with many open questions, primarily about the $R$-module structure of sets of splines and how much this structure is determined by properties of the ring and/or the underlying graph. A subsequent survey article \cite{julia2} describes the role that generalized splines play in geometry and topology.
In this paper we set $R$ to be $\mathbb{Z}$, the ring of integers, although the results in this paper hold, with minor modifications, when $R$ is a Euclidean ring. We answer both of the above questions affirmatively, and in doing so find a complete characterization of spline modules over $\mathbb{Z}$ for arbitrary graphs.
Generalized splines are a generalization of polynomial splines, which are ubiquitous in applied math, computer graphics and approximation theory. They are often defined to be piecewise polynomial functions on a polyhedral subdivision of $\mathbb{R}^d$. Alfeld and others \cite{Alfeld} studied the question of finding dimensions and bases for vector spaces of splines of restricted polynomial degree. In \cite{billera1, billera2}, Billera pioneered the use of algebraic techniques in the study of polynomial splines. When the polynomial degrees are unrestricted, the set of all splines on a polyhedral complex will be both a ring and an $R$-module, where $R$ is a polynomial ring in $d$ variables over a field.
The algebraic study of polynomial splines led to a representation of splines as vertex labels of the dual graph of the polyhedral complex \cite{rose4, rose1}, where the edges are labeled with linear forms. In this new representation, a set of vertex labels is a spline if each edge label divides the difference between its two vertex labels. This representation easily extends to arbitrary rings, and in fact arises in geometry and topology.
When $R$ is a principal ideal domain, generalized spline modules are always free of rank $n$, the number of vertices of the graph. In \cite{smith students}, Handschy et al.\ constructed ``flow-up class'' bases for $n$-cycle graphs with integer edge weights. In \cite{liu} we proved that flow-up class bases exist for spline modules over arbitrary graphs with integer edge weights, and expanded the class of graphs for which a flow-up class basis could be constructed.
The main result of this paper is the construction of flow-up class bases for the spline module $S_G$, for any arbitrary graph $G$ with integer edge weights. To do this, we introduce two collapsing operations that transform $G$ into a graph with fewer vertices or edges. Using these operations, we can reduce any connected graph to a point. Then we show that each of these operations induces a surjective $\mathbb{Z}$-module map of the associated spline modules. The main algebraic result is that $S_G$ is the direct sum of the kernels of these maps. From there, all that is left to do is to construct a basis for each of the kernels, and show that their pre-images form a basis for $S_G$. A nice feature of this construction is that the first non-zero entry of each basis element, viewed as an $n$-tuple of $\mathbb{Z}^n$, can be written in terms of least common multiples and greatest common divisors of a subset of the edge weights.
\section{Preliminaries}
We begin with a formal definition of a generalized integer spline. Let $G$ be a graph with $n$ vertices, edge set $E$, and $A : E \rightarrow \mathbb{N}$ an assignment of positive integer weights to edges of $G$.
We denote the set of splines on $(G, A)$ by $S_{G}(A)$, or just $S_G$ when there is no ambiguity.
\begin{definition} Let $G$ be a graph with $n$ vertices.
A {\textbf{generalized integer spline}} on $G$ is an $n$-tuple $(g_{1}, g_{2}, \ldots, g_{n}) \in {\mathbb{Z}}^ n$ such that if vertices $v_i$ and $v_j$ are joined by $e \in E$, then $g_{i} \equiv g_{j} \bmod A(e)$. We refer to these equations as the \textbf{defining equations} for $S_G$.
\end{definition}
\begin{example} Let $G$ be a 3-cycle with edge labels $a,b,c$ as in Figure~\ref{figure3cycle}. By the definition above, $(g_1,g_2,g_3) \in \mathbb{Z}^3$ is a spline in $S_G$ if the following defining equations, one for each edge, are satisfied.
\begin{align*}
g_1 &\equiv g_2 \bmod {a}\\
g_2 &\equiv g_3 \bmod {b}\\
g_3 &\equiv g_1 \bmod {c}
\end{align*}
\end{example}
\begin{figure}
\begin{center}
\kthree
\end{center}
\caption{A $3$-cycle}
\label{figure3cycle}
\end{figure}
{\textbf{Notation:}} Let $(r_1, r_2, \ldots r_m)$ denote the GCD and $[r_1, r_2, \ldots r_m]$ denote the LCM of the set $\{r_1, r_2, \ldots r_m\}$.
The following lemma will be used in later sections. The proofs are straightforward, so we omit them.
\begin{lemma} Let $a, b, c, m, n, x \in \mathbb{N}$.
\label{lemma:gcd}
\begin{enumerate}
\item $(a,(b,c)) = (a,b,c)$.
\item $[a,[b,c]] = [a,b,c]$.
\item $[(a,c),(b,c)] = ([a,b],c)$.
\item $x \equiv a \bmod m $ and $x \equiv a \bmod n$ if and only if $x \equiv a \bmod [m, n]$.
\end{enumerate}
\end{lemma}
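These identities are easy to spot-check numerically; a small sketch using Python's built-in `math.gcd` and `math.lcm` (the test values are arbitrary):

```python
from math import gcd, lcm

for a, b, c, m, n in [(4, 6, 10, 8, 12), (9, 15, 25, 6, 10), (7, 7, 7, 5, 5)]:
    assert gcd(a, gcd(b, c)) == gcd(a, b, c)               # part 1
    assert lcm(a, lcm(b, c)) == lcm(a, b, c)               # part 2
    assert lcm(gcd(a, c), gcd(b, c)) == gcd(lcm(a, b), c)  # part 3
    # part 4: x ≡ a (mod m) and x ≡ a (mod n)  iff  x ≡ a (mod [m, n])
    for x in range(-60, 60):
        assert ((x - a) % m == 0 and (x - a) % n == 0) == ((x - a) % lcm(m, n) == 0)
```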
We will need the non-coprime version of the Chinese Remainder Theorem, which we state for convenience.
\begin{theorem}[Non-Coprime Chinese Remainder Theorem]
\label{theorem:CRT}
Let $a_{i} \in \mathbb{Z}$ and $ m_{i} \in \mathbb{N}$ for $i$ between 1 and $n$. The system of congruences
\begin{align*}
x &\equiv a_1 \bmod {m_1}\\
x &\equiv a_2 \bmod {m_2}\\
&\vdots \\
x &\equiv a_n \bmod {m_n}
\end{align*}
has a solution if and only if
$$a_{i} \equiv a_{j} \bmod (m_i, m_j)$$ for all $i, j$ with $1 \leq i, j \leq n$.
If a solution exists, then it is unique modulo $[m_1, m_2, \dots , m_n ]$.
\end{theorem}
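A standard way to make this theorem effective is to merge the congruences pairwise; the sketch below is one possible implementation (ours, not part of the paper), returning the solution together with its modulus, or `None` when the pairwise compatibility condition fails:

```python
from math import gcd

def solve_crt(congruences):
    """Iteratively merge constraints x ≡ a (mod m).

    Returns (x, M) with the solution unique mod M = [m_1, ..., m_n],
    or None if some pair fails a_i ≡ a_j (mod (m_i, m_j))."""
    x, M = 0, 1
    for a, m in congruences:
        g = gcd(M, m)
        if (a - x) % g:
            return None  # incompatible pair of congruences
        # solve x + M*t ≡ a (mod m), i.e. (M/g) t ≡ (a-x)/g (mod m/g);
        # gcd(M/g, m/g) = 1, so the modular inverse exists
        t = ((a - x) // g) * pow(M // g, -1, m // g) % (m // g)
        x += M * t
        M = M // g * m
        x %= M
    return x, M
```

For instance, `solve_crt([(2, 12), (2, 15), (2, 8)])` returns `(2, 120)`, matching the uniqueness modulus $[12,15,8]=120$, while `solve_crt([(0, 4), (1, 6)])` returns `None` since $0 \not\equiv 1 \pmod{(4,6)}$.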
\section{Collapsing Operations on $G$}
In this section, we define two operations on $(G, A)$ that yield another edge-weighted graph $(G', A')$ with either fewer vertices, or the same number of vertices and fewer edges. We will show that for any $(G, A)$ and vertices $v_i$ and $v_j$, there exists a sequence of operations yielding the weighted edge $(K_{2}, A_{*})$ for some weight $A_* \in \mathbb{N}$. Moreover, we will show that any spline on $(K_{2}, A_{*})$ can be extended to a spline on $(G, A)$. We then use this collapse sequence to construct a $\mathbb{Z}$-module basis for $S_{G}$.
\begin{definition} Let $G$ be a graph and $v$ a vertex of $G$ of degree $d$.
\begin{enumerate}
\item The {\textbf{star of $v$}}, denoted $st(v)$, is the subgraph consisting of $v$ and all edges that contain $v$.
\item A {\textbf{$d$-clique}} is a subgraph that is a complete graph on $d$ vertices.
\end{enumerate}
\end{definition}
The first collapse operation replaces the star of $v$ with a $d$-clique, where $d$ is the degree of $v$, with the neighbors of $v$ as its vertices. All other vertices and edges in the graph remain the same, although the new graph may have multiple edges. We call this a {\textbf{star-clique}} operation, as we are deleting a vertex star and then adding new edges among the $d$ adjacent vertices to form a $d$-clique. This generalizes the standard ${\mathbf{Y-\Delta}}$ transform that takes a 3-star to the 3-clique $K_3$. We now define this process more formally.
\begin{definition} Let $v$ be a vertex of $G$ of degree $d$, with adjacent vertices $\{v_1,\ldots,v_d\}$ and incident edge weights $\{a_1,\ldots,a_d\}$. The {\textbf{star-clique}} operation transforms $(G,A)$ to $(G_v, A_v)$ as follows.
\begin{itemize}
\item Remove $v$ and the edges of $st(v)$, but keep the adjacent vertices.
\item Add edges between adjacent vertices to form a $d$-clique.
\item Label the new edge between $v_i$ and $v_j$ with $(a_i,a_j)$.
\end{itemize}
\end{definition}
\begin{example}
Let $v_{2}$ be a vertex of degree $1$, with adjacent vertex $v_{1}$ and incident edge $e$ with weight $a$. The star-clique operation replaces the star of $v_{2}$, an edge, with a 1-clique, which is the single vertex $v_{1}$. We call this a \textbf{leaf deletion}. See Figure \ref{fig:leafdelete}.
\end{example}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{leafdelete1}
&\includegraphics[scale=0.3]{leafdelete2}
\end{tabular}
\end{center}
\caption{Leaf Deletion}
\label{fig:leafdelete}
\end{figure}
\begin{example}
Let $v_{2}$ have degree 2, with adjacent vertices $v_{1}$ and $v_{3}$ and incident edges with weights $a$ and $b$. The {\textbf{star-clique}} operation replaces the star of $v_{2}$, i.e. the path $v_{1}v_{2}v_{3}$, with the edge $v_{1}v_{3}$. This is a {\textbf{path contraction}}, with edge weights $a$ and $b$ replaced by $(a, b)$ (see Figure \ref{fig:pathcontract}).
\end{example}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{pathcontract1}
&\includegraphics[scale=0.3]{pathcontract2}
\end{tabular}
\end{center}
\caption{Path Contraction Operation on $v_2$}
\label{fig:pathcontract}
\end{figure}
\begin{example}
Let $v_{4}$ be a vertex of degree $3$, with adjacent vertices $v_{1}$, $v_{2}$ and $v_{3}$ and incident edge weights $c,a,b$. The {\textbf{star-clique}} operation replaces the star of $v_{4}$, a ``$Y$-graph,'' with a triangle with vertices $v_{1}, v_{2}, v_{3}$. The new edges will have weights $(a,c),(a,b),(b,c)$, as shown in Figure \ref{fig:starclique}.
\end{example}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{starclique1}
&\includegraphics[scale=0.3]{starclique2}
\end{tabular}
\end{center}
\caption{Star-clique Operation ($Y$-$\Delta$ Transform)}
\label{fig:starclique}
\end{figure}
The new graph $(G_v,A_v)$ has one fewer vertex, but may no longer be a simple graph if some of the $v_i$'s were already connected in $G$.
In order to turn $G_v$ into a simple graph, so that we can apply the star-clique operation again, we introduce a second operation, {\textbf{edge collapse}}, that collapses multiple edges between the same pair of vertices (see Figure \ref{fig:edgecollapse}).
\begin{definition} Let $e_1, e_2, \ldots, e_r$ be multiple edges between vertices $v$ and $w$ in $G$ with labels $\{a_1,\ldots,a_r\}$. The {\textbf{edge-collapse}} operation transforms $(G,A)$ to $(G', A')$ as follows.
\begin{itemize}
\item Remove $e_1, e_2, \ldots e_r$.
\item Add a new edge $e$ between $v$ and $w$.
\item Label $e$ with $[a_1,a_2,\ldots, a_r]$.
\end{itemize}
\end{definition}
\begin{example}
Let $v_{1}, v_{2}$ be two vertices joined by edges of weights $a, b$. The \textbf{edge-collapse} operation replaces these edges with a single edge of weight $[a, b]$ (see Figure \ref{fig:edgecollapse}).
\end{example}
\begin{figure}
\begin{center}
\begin{tabular}{ccc}
\includegraphics[scale=0.3]{edgecollapse1}
&\includegraphics[scale=0.3]{edgecollapse2}
\end{tabular}
\end{center}
\caption{Edge Collapse}
\label{fig:edgecollapse}
\end{figure}
\begin{lemma}
\label{lemma:connectivity}
Let $(G, A)$ be a simple, connected graph and let $v$ be a vertex of $G$. Then $(G_v, A_v)$ is also connected.
\end{lemma}
\begin{proof}
We consider two cases.
\begin{itemize}
\item
Case 1: If $G_v$ is the empty graph or has only one vertex, then it is vacuously connected.
\item
Case 2: Assume $G_{v}$ has two or more vertices, and let $x$ and $y$ be distinct vertices of $G_v$. Then $x$ and $y$ are also vertices of $G$, since the star-clique operation introduces no new vertices. Since $G$ is connected, there is a path $P$ connecting $x$ and $y$ in $G$. There are two cases.
\begin{itemize}
\item
First, assume $P$ does not pass through the vertex $v$. The removal of $v$ does not affect $P$, so $P$ is also a path in $G_{v}$.
\item
Now suppose $P$ passes through the vertex $v$. Then $P$ contains a subpath $pvq$, where $p$ and $q$ are adjacent to $v$. When we form $G_v$, the star-clique operation adds an edge $pq$, so we can form a path $P_v$ from $x$ to $y$ in $G_v$ by replacing the edges $pv$ and $vq$ with the edge $pq$.
\end{itemize}
\end{itemize}
\end{proof}
A graph can always be collapsed to a single edge, and even further to a single vertex.
\begin{lemma}
\label{theorem:reducetok2}
Let $(G, A)$ be a connected edge-weighted graph, and let $v$ and $w$ be distinct vertices of $G$.
\begin{enumerate}
\item We can collapse $G$ to the graph $K_2$ with vertices $v$ and $w$.
\item We can collapse $G$ to the vertex $v$.
\end{enumerate}
\end{lemma}
\begin{proof}
Choose a vertex other than $v$ and $w$ and do a star-clique operation. This will reduce the number of vertices by one. Now do an edge collapse to remove multiple edges. Repeat until $v$ and $w$ are the only vertices left, and remove any multiple edges. Since the graphs at each stage are connected, by Lemma \ref{lemma:connectivity}, we are left with a single edge between $v$ and $w$.
For part 2, a star-clique operation on $w$ will transform the edge $vw$ into the single vertex $v$.
\end{proof}
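The two collapsing operations, and the collapse loop used in this proof, can be sketched computationally; the dictionary-based graph encoding and function names below are our own illustrative choices:

```python
from math import gcd, lcm
from itertools import combinations

def star_clique(edges, v):
    """Star-clique on v: delete v and join its neighbors pairwise.

    edges: {frozenset({u, w}): weight} for a simple edge-weighted graph.
    Returns a multigraph {frozenset({u, w}): [weights]}."""
    inc = {next(iter(e - {v})): w for e, w in edges.items() if v in e}
    new = {}
    for e, w in edges.items():
        if v not in e:
            new.setdefault(e, []).append(w)
    # new edge between each pair of neighbors, weighted by the GCD
    for u, x in combinations(inc, 2):
        new.setdefault(frozenset({u, x}), []).append(gcd(inc[u], inc[x]))
    return new

def edge_collapse(multi):
    """Collapse parallel edges, weighting the survivor by the LCM."""
    return {e: lcm(*ws) for e, ws in multi.items()}

def collapse_to(edges, keep):
    """Collapse a connected simple graph onto the vertex set `keep`."""
    for v in {u for e in edges for u in e} - set(keep):
        edges = edge_collapse(star_clique(edges, v))
    return edges

# Y-graph with center v4 and hypothetical weights 6, 15, 8
Y = {frozenset({1, 4}): 6, frozenset({2, 4}): 15, frozenset({3, 4}): 8}
# star-clique on v4 gives the triangle with weights (6,15)=3, (6,8)=2, (15,8)=1
triangle = edge_collapse(star_clique(Y, 4))
```

Collapsing $Y$ down to the vertices $1$ and $2$ with `collapse_to(Y, [1, 2])` yields a single edge of weight $3$, regardless of the order in which the other vertices are removed.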
\section{The Spline Modules $S_G$ and $S_{G'}$}
Now that we have constructed the graph collapsing operations, we describe their effects on the spline module $S_G$. In order to do this, we must first fix an ordering of the vertices $v_1,\ldots, v_n$ of $G$.
Next, we partition $S_G$ into \textbf{flow-up classes} $\mathcal{F}_{i}$, which are subsets of splines whose first $i$ entries are zero and whose $(i+1)$st entry is non-zero. A \textbf{flow-up class basis} is a basis that consists of an element from each non-zero flow-up class.
\begin{definition}
For $0 \leq i < n$, the {\textbf{$i$th flow-up class}}, denoted $\mathcal{F}_i$, is the set of splines of the form $F = (0, 0, \ldots, 0, f_{i+1}, \ldots, f_{n})$ where $f_{i+1}$ is non-zero.
We also define $\mathcal{F}_{n} = \left\{\mathbf{0}\right\}$, where $\mathbf{0} = (0,0,\ldots, 0)$ is the zero spline.
\end{definition}
We now define the leading term of a spline to be its first non-zero entry.
\begin{definition}
Let $F \in \mathcal{F}_{i}$ where $i \neq n$, so that $F = (0, 0, \ldots, 0, f_{i+1}, \ldots, f_{n})$ and $f_{i+1} \ne 0$. $L(F) = f_{i+1}$ is the \textbf{leading term} of $F$.
\end{definition}
\begin{definition}
A spline $B \in \mathcal{F}_i$ is a \textbf{minimal element} of $\mathcal{F}_i$ if $L(B)>0$ and, for all $F \in \mathcal{F}_i$, $L(B) \leq |L(F)|$.
\end{definition}
For example, $\mathbf{1} = (1, 1, \ldots, 1)$ has leading term $1$, which is minimal in $\mathbb{N}$. Thus, $\mathbf{1}$ is a minimal element of $\mathcal{F}_{0}$.
{\textbf{Note:}} Although minimal elements of $\mathcal{F}_i$ have the same leading term, the other terms are not in general unique.
Our strategy for constructing a basis for $S_G$ is to find a minimal element from each $\mathcal{F}_i$, for $i < n$. The theorem below asserts that this set will always form a basis, called a {\textbf{flow-up class basis}}.
\begin{theorem}[\cite{liu}]
Let $(G,A)$ be an edge-weighted graph with $n$ vertices. Let $\mathcal{B} = \left\{B_0, B_1,\dots, B_{n-1}\right\}\subset S_G$, where each $B_i$ is a minimal element of $\mathcal{F}_i$. Then $\mathcal{B}$ is a $\mathbb{Z}$-module basis for $S_G$.
\end{theorem}
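For a small example, the theorem can be checked by brute force; the sketch below builds minimal flow-up elements for a 3-cycle with hypothetical weights $a,b,c = 4, 6, 10$ (our own choice) and verifies that every spline with small entries is an integer combination of them:

```python
from math import gcd, lcm
from itertools import product

# 3-cycle with hypothetical weights a, b, c
a, b, c = 4, 6, 10
edges = {(1, 2): a, (2, 3): b, (3, 1): c}

def is_spline(g):
    return all((g[i - 1] - g[j - 1]) % w == 0 for (i, j), w in edges.items())

# one minimal element per flow-up class for these weights:
B0 = (1, 1, 1)                    # L(B0) = 1
B1 = (0, lcm(a, gcd(b, c)), 10)   # L(B1) = [a,(b,c)] = 4; g3 = 10 solves the congruences
B2 = (0, 0, lcm(b, c))            # L(B2) = [b,c] = 30
basis = [B0, B1, B2]
assert all(is_spline(B) for B in basis)

def in_span(s):
    """Successive elimination against the flow-up (triangular) basis."""
    s = list(s)
    for i, B in enumerate(basis):
        q, r = divmod(s[i], B[i])
        if r:
            return False
        s = [x - q * y for x, y in zip(s, B)]
    return not any(s)

# every spline with small non-negative entries is an integer combination
for g in product(range(30), repeat=3):
    if is_spline(g):
        assert in_span(g)
```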
We now look at the effect of collapsing operations on flow-up classes and minimal elements. We begin with an example.
\begin{example}
Let $G$ be the $Y$-graph with vertices $v_1,v_2,v_3,v_4$ and edge labels $a$, $b$, $c$, as in Figure \ref{fig:starclique}. The star-clique operation on $v_{4}$ yields a $3$-cycle $G'$ with edge labels $(a,b)$, $(b,c)$, and $(a,c)$.
Let $(g_1,g_2,g_3,g_4) \in S_G$. The defining equations on $G$ are
\begin{align}
\label{eqn:yd}
\begin{array}{rcl}
g_4 &\equiv &g_1 \bmod c\\
g_4 &\equiv &g_2 \bmod a\\
g_4 &\equiv &g_3 \bmod b
\end{array}
\end{align}
By Theorem \ref{theorem:CRT}, Equation \ref{eqn:yd} has a solution $g_4$ if and only if
\begin{align}
\label{eqn:yd2}
\begin{array}{rcl}
g_1 &\equiv &g_2 \bmod (a,c)\\
g_2 &\equiv &g_3 \bmod (a,b)\\
g_3 &\equiv &g_1 \bmod (b,c)
\end{array}
\end{align}
Note that these are in fact the defining equations of $S_{G'}$. Thus the projection map
$$\phi:S_G \rightarrow S_{G'}$$ defined by $\phi(g_1,g_2,g_3,g_4) = (g_1,g_2,g_3)$ is well defined by the forward direction of Theorem \ref{theorem:CRT}, and surjective by the reverse direction. Since it is a projection map, it is also a $\mathbb{Z}$-module homomorphism. Let $K$ be the kernel of $\phi$. Then $\mathbf{g}$ is in $K = \phi^{-1}(\mathbf{0})$ if and only if $\mathbf{g} = (0,0,0,g_4)$ in $S_G$. Substituting these values into Equation \ref{eqn:yd}, we get
\begin{align}
\label{eqn:yd3}
\begin{array}{rcl}
g_4 &\equiv &0 \bmod a\\
g_{4} &\equiv &0 \bmod b\\
g_4 &\equiv &0 \bmod c
\end{array}
\end{align}
which has solutions $g_4 \equiv 0 \bmod [a,b,c]$. Thus $\mathbf{g} = (0,0,0,[a,b,c])$ is a minimal element of $\mathcal{F}_3$ and forms a basis for $K$.
\end{example}
We can formalize this process for any graph.
\begin{theorem}
\label{theorem:surjective}
Let $(G,A)$ be an edge-weighted simple graph on vertices $\{v_1,\ldots, v_n\}$ and let $(G_n,A_n)$ be the result of a star-clique operation on vertex $v_n$. The map $\phi:S_G \rightarrow S_{G_n}$ defined by $\phi(g_1,g_2,\ldots, g_n) = (g_1,g_2,\ldots ,g_{n-1})$ is a surjective $\mathbb{Z}$-module homomorphism with kernel $\mathcal{F}_{n-1}\cup {\mathcal{F}}_{n} $.
\end{theorem}
\begin{proof}
The first thing to note is that the only edges affected by the star-clique operation are those incident to the vertex $v_n$; denote its neighbors by $v_{i_1},\ldots ,v_{i_d}$ and the corresponding edge weights by $a_{i_1},\ldots ,a_{i_d}$. Thus the defining equations for $S_G$ and $S_{G_n}$ only differ in that $S_G$ includes the equations
\begin{align*}
g_n &\equiv g_{i_1}\bmod a_{i_1}\\
g_n &\equiv g_{i_2} \bmod a_{i_2}\\
&\vdots \\
g_n &\equiv g_{i_d} \bmod a_{i_d}
\end{align*}
whereas $S_{G_n}$ includes the equations
$$g_{i_j} \equiv g_{i_k} \bmod (a_{i_j},a_{i_k}), \text{ for all } 1 \leq j < k \leq d$$
By Theorem \ref{theorem:CRT}, the first set of equations implies the second, hence $\phi$ is well defined. Also by Theorem \ref{theorem:CRT}, the second set of equations implies there exists a $g_n$ satisfying the first set, and since the remaining equations do not involve $g_n$, every spline in $S_{G_n}$ is the image of some spline in $S_G$; hence $\phi$ is surjective.
Now let $K$ denote the kernel of $\phi$. Then $\mathbf{g} \in K = \phi^{-1}(\mathbf{0}) $ if and only if $\mathbf{g} = (0,\ldots,0,g_n)$ in $S_G$. If $g_n \neq 0$, then $\mathbf{g}$ has exactly $n-1$ zeroes, hence is an element of $ {\mathcal{F}}_{n-1}$. If $g_n = 0$, then $\mathbf{g} = \mathbf{0}$, the only element of $ {\mathcal{F}}_{n}$.
\end{proof}
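The surjectivity argument can be illustrated numerically on the $Y$-graph example; in the sketch below the weights $a,b,c = 4, 6, 10$ are our own hypothetical choice, and we brute-force the lift $g_4$:

```python
from math import gcd, lcm
from itertools import product

# Y-graph: center v4 joined to v1, v2, v3; hypothetical weights
a, b, c = 4, 6, 10   # v4-v2 weight a, v4-v3 weight b, v4-v1 weight c
M = lcm(a, b, c)     # lifts of g4 are unique modulo [a, b, c]

def triangle_spline(g1, g2, g3):
    """Defining equations of S_{G'} after the star-clique operation on v4."""
    return ((g1 - g2) % gcd(a, c) == 0 and
            (g2 - g3) % gcd(a, b) == 0 and
            (g3 - g1) % gcd(b, c) == 0)

def lifts(g1, g2, g3):
    """All g4 in one period making (g1, g2, g3, g4) a spline on the Y-graph."""
    return [g4 for g4 in range(M)
            if (g4 - g1) % c == 0 and (g4 - g2) % a == 0 and (g4 - g3) % b == 0]

# phi is surjective: a triple extends to a spline exactly when it satisfies
# the triangle equations, and then the extension is unique mod [a, b, c]
for g in product(range(12), repeat=3):
    assert (len(lifts(*g)) == 1) == triangle_spline(*g)
```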
\begin{theorem}
Let $(G,A)$ be an edge-weighted graph and let $(G',A')$ be the result of an edge-collapse operation on multiple edges $e_1,e_2,\ldots, e_r$. Then $S_{G'} = S_{G}$.
\end{theorem}
\begin{proof}
Suppose that the edges $e_1,e_2,\ldots, e_r$ each connect vertices $v$ and $w$, and have edge labels $a_1,a_2,\ldots, a_r$. The label for the new edge $e$ in $(G',A')$ is $[a_1,a_2,\ldots, a_r]$. Thus the only difference between the defining equations is that the $r$ equations
$g_v \equiv g_w \bmod a_i$ in $S_G$ are replaced by the single equation $g_v \equiv g_w \bmod [a_1,a_2,\ldots, a_r]$ in $S_{G'}$. By Lemma \ref{lemma:gcd}, these two sets of equations have the same solutions, hence $S_{G'} = S_{G}$.
\end{proof}
In other words, collapsing multiple edges changes the graph, but not the corresponding spline module. Putting these theorems together we now state our main result of this section.
\begin{theorem}
\label{theorem:reductionmap}
Let $(G',A')$ be the result of a sequence of collapsing operations on an edge-weighted graph $(G,A)$ that removes the vertices $v_{r+1},\dots, v_n$ and collapses any multiple edges. Then the map $\phi:S_G \rightarrow S_{G'}$ defined by $\phi(g_1,g_2,\ldots, g_n) = (g_1,g_2,\ldots, g_r)$ is a surjective $\mathbb{Z}$-module homomorphism with kernel $\mathcal{F}_{r}\cup \cdots \cup \mathcal{F}_{n-1} \cup \mathcal{F}_{n} $.
\end{theorem}
\begin{proof}
By Theorem \ref{theorem:surjective}, each star-clique operation induces a surjective $\mathbb{Z}$-module homomorphism, and each edge collapse leaves the spline module unchanged. Since $\phi$ is a composition of finitely many surjective $\mathbb{Z}$-module homomorphisms, it is itself a surjective $\mathbb{Z}$-module homomorphism.

Let $K$ denote the kernel of $\phi$. Then $\mathbf{g} \in K = \phi^{-1}(\mathbf{0}) $ if and only if $\mathbf{g} = (0,\ldots,0,g_{r+1},\ldots, g_n)$ in $S_G$. Since $\mathbf{g}$ has at least $r$ leading zeroes, it is an element of $ \mathcal{F}_{i}$ for some $i \geq r$.
\end{proof}
\begin{corollary}
Let $(G,A)$ be an edge-weighted graph on $n$ vertices and $v_i$ and $v_j$ be distinct vertices of $G$. The map $\phi:S_G \rightarrow S_{K_2}$ defined by $\phi(g_1,g_2,\ldots, g_n) = (g_i,g_j)$ is a surjective $\mathbb{Z}$-module homomorphism.
\end{corollary}
\begin{proof}
We start with $(G,A)$ and delete vertices one by one, collapsing any multiple edges that arise, until only $v_i$ and $v_j$ remain. By Theorem \ref{theorem:reductionmap}, the result now follows.
\end{proof}
\section{A basis for $S_G$}
We now use the map $\phi$ to compare flow-up classes and minimal elements of $S_G$ and $S_{G'}$.
\begin{theorem}
\label{theorem:phitheorem}
Let $(G',A')$ be the result of collapsing operations that remove vertices $v_{r+1},\ldots,v_n$. Define $\phi: S_G \rightarrow S_{G'}$ by $\phi(g_1,g_2,\ldots, g_n)=(g_1,g_2,\ldots, g_{r})$. Suppose $i < r$, $F \in {\mathcal{F}_{i}}$, and $K$ is the kernel of $\phi$. Then,
\begin{enumerate}
\item $\phi({\mathcal{F}_{i}}) = {\mathcal{F}'_{i}}$, where $\mathcal{F}'_{i}$ denotes the $i$th flow-up class of $S_{G'}$.
\item $L(F)=L(\phi(F))$.
\item $F$ is minimal if and only if $\phi(F)$ is minimal.
\item Let $\mathcal{B}'$ and $\mathcal{K}$ be flow-up class bases for $S_{G'}$ and $K$, respectively. Let $\mathcal{B}$ consist of a pre-image of each element of $\mathcal{B}'$. Then $\mathcal{B}\cup\mathcal{K}$ is a flow-up class basis of $S_G$.
\item As $\mathbb{Z}$-modules, $S_G \cong S_{G'} \oplus K$.
\end{enumerate}
\end{theorem}
\begin{proof}
\ \\
\begin{enumerate}
\item If $F = (0,\ldots,0,f_{i+1},\ldots, f_n)$ has $i$ leading zeroes, then $\phi(F) = (0,\ldots,0,f_{i+1},\ldots, f_r)$ also has $i$ leading zeroes (note that $i < r$, so the entry $f_{i+1}$ survives the projection).
\item $L(F) = f_{i+1} = L(\phi(F))$.
\item If $F$ is minimal but $\phi(F)$ is not, then there is some $M'$ in ${\mathcal{F}'_{i}}$ such that $|L(M')| < |L(\phi(F))|$. Let $M$ be in the inverse image of $M'$. By (2), $L(M) = L(M')$, so $|L(M)| < |L(\phi(F))| = |L(F)|$, which contradicts the minimality of $F$. Conversely, suppose $F' = \phi(F)$ is minimal but $F$ is not. Then there exists $M$ in ${\mathcal{F}_{i}}$ such that $|L(M)| < |L({F})|$. Then $L(M)=L(\phi(M))$, which implies $|L(\phi(M))| < |L({F'})|$, contradicting the minimality of $F'$.
\item Since $\mathcal{B}'$ and $\mathcal{K}$ are flow-up class bases, their leading terms are minimal in $S_{G'}$ and $K$, respectively. Let $\mathcal{B}$ be a set of inverse images of elements of $\mathcal{B}'$. By (3), these elements are minimal, and there is one from each flow-up class $\mathcal{F}_{0}, \ldots , \mathcal{F}_{r-1} $. To complete a basis for $S_G$, we need a minimal element from each remaining non-zero flow-up class $\mathcal{F}_{r}, \ldots , \mathcal{F}_{n-1}$. By Theorem \ref{theorem:reductionmap}, $K=\mathcal{F}_{r}\cup \cdots \cup \mathcal{F}_{n-1} \cup \mathcal{F}_{n}$. Thus any flow-up class basis for $K$ completes a basis of $S_G$, making $\mathcal{B}\cup\mathcal{K}$ a flow-up class basis for $S_G$.
\item This follows from (4).
\end{enumerate}
\end{proof}
In fact, we can describe $S_G$ as the direct sum of rank 1 kernels of collapsing maps. First, we define a complete collapse sequence of graphs.
\begin{definition} Let $(G,A)$ be a simple edge-weighted graph with $n$ vertices.
A {\textbf{complete collapse sequence}} is a set of graphs $\{G_n, \ldots, G_1\}$, where $G = G_{n}$ and, for $2\leq i \leq n$, $G_{i-1}$ is the result of collapsing operations that remove the vertex $v_i$ from $G_{i}$.
\end{definition}
Note that each $G_i$ contains $i$ vertices and that $G_1$ is a graph consisting of the single vertex $v_1$.
\begin{theorem}
\label{thm:completecollapse}
Let $\{G_k\}$ be a complete collapse sequence of $(G,A)$.
Define $$S_{G_n} \xrightarrow{\phi_n} S_{G_{n-1}} \cdots \xrightarrow{\phi_3} S_{G_2}\xrightarrow{\phi_{2}} S_{G_1}\xrightarrow{\phi_{1}} {0}$$
by $\phi_{i}(g_1,g_2,\ldots, g_{i})=(g_1,g_2,\ldots, g_{i-1})$ for $i>1$, and $\phi_1(g_1) = 0$.
Let $K_i$ denote the kernel of $\phi_i$. Then
\begin{enumerate}
\item $S_G \cong {K}_1 \oplus {K}_2 \oplus \cdots \oplus {{K}}_n$.
\item $K_i$ is a rank 1 submodule of $S_{G_{i}}$.
\item Let $st(v_{i})$ in $G_{i}$ have edge weights $\{c_1,\ldots, c_d\}$. Then $m_1= (1)$ generates $K_1$, and $m_i = (0,0,\ldots, [c_1,\ldots, c_d])$ generates $K_i$ for $i>1$.
\item For each $i<n$, let $M_i$ in $S_G$ be a pre-image of $m_i$ under the map $\phi_{i+1} \circ \cdots \circ \phi_{n-1} \circ \phi_{n}$, and let $M_n=m_n$. Then $\{M_1, \ldots, M_n\}$ is a flow-up class basis for $S_G$, where $M_i \in {\mathcal{F}_{i-1}}$.
\end{enumerate}
\end{theorem}
\begin{proof}
By Theorem \ref{theorem:phitheorem}, $S_{G_{i}} \cong S_{G_{i-1}} \oplus K_{i}$ for all $i>1$. Applying this for $i=n$ and $i=n-1$ gives $S_{G} \cong S_{G_{n-1}} \oplus K_{n}$ and $S_{G_{n-1}} \cong S_{G_{n-2}} \oplus K_{n-1}$. Putting these together, we have $$S_G \cong S_{G_{n-2}} \oplus K_{n} \oplus K_{n-1}.$$ Continuing in this manner down to $i=2$, we get $S_G \cong K_n \oplus K_{n-1} \oplus \cdots \oplus K_{2} \oplus S_{G_1}$. The last map $\phi_1: S_{G_1} \rightarrow 0$ is the zero map, so $K_1 = S_{G_1}$, giving us the first result.

Since $G_1$ has no edges, and hence no defining equations, $S_{G_1}= \mathbb{Z}$. Thus $K_1 = \mathbb{Z}$, which is generated by the element $(1)$. For $i>1$, elements of $K_i$ have the form $(0,0,\ldots,0,g_{i})$, so $K_i$ is isomorphic to a non-zero submodule of $\mathbb{Z}$, hence has rank 1. An element of $K_i$ labels the vertex $v_{i}$ with $g_{i}$ and all other vertices with 0, so the defining equations become $g_{i} \equiv 0 \bmod A(e_j)$ for each edge $e_j$ incident to $v_{i}$, and $0 \equiv 0 \bmod A(e)$ for all other edges. The incident edges are precisely the edges in $st(v_{i})$, which have weights $\{c_1,\ldots, c_d\}$. Then $(0,0,\ldots,0,g_{i}) \in K_i$ if and only if $g_{i} \equiv 0 \bmod c_j$ for all $j$, which by Lemma \ref{lemma:gcd} is equivalent to $g_{i} \equiv 0 \bmod [c_1,\ldots, c_d]$. So $m_i = (0,0,\ldots, [c_1,\ldots, c_d])$ is a minimal element of $K_i$, which means $m_i$ generates $K_i$.

Finally, since $m_i$ is a minimal element of the $(i-1)$st flow-up class of $S_{G_{i}}$, any pre-image $M_i$ in $S_G$ is a minimal element of ${\mathcal{F}}_{i-1}$, by Theorem \ref{theorem:phitheorem}. Thus the $M_i$'s form a flow-up class basis of $S_G$.
\end{proof}
This gives a procedure for constructing a basis $\{M_1, \ldots, M_n\}$ for $S_G$. For $i>1$, $M_i$ has leading term $[c_1,\ldots, c_d]$, where $\{c_1,\ldots, c_d\}$ are the weights of the edges incident to vertex $v_{i}$ in $G_{i}$, and we can use the Chinese Remainder Theorem to explicitly construct the other terms of $M_i$. When $i=1$, $G_1$ is a single vertex with no edges. Since there are no restrictions, $m_1=(1)$, and it is easy to see that $M_1=(1,\ldots,1)$ is always a spline in $S_G$.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[scale=0.3]{k4collapse1}
&\includegraphics[scale=0.3]{k4collapse2}\\
Figure \ref{fig:basis}a
&Figure \ref{fig:basis}b\\
\includegraphics[scale=0.3]{k4collapse3}
&\includegraphics[scale=0.3]{k4collapse4}\\
Figure \ref{fig:basis}c
&Figure \ref{fig:basis}d
\end{tabular}
\begin{tabular}{c}
\includegraphics[scale=0.3]{k4collapse5}\\
Figure \ref{fig:basis}e
\end{tabular}
\caption{Basis for $K_{4}$}
\label{fig:basis}
\end{center}
\end{figure}
\begin{example}
\label{example:k4}
Let $G = K_{4}$ with the sequence of collapsing operations in Figures \ref{fig:basis}a through \ref{fig:basis}e.
In order to find $M_4$, we set $g_{1} = 0$, $g_{2} = 0$, $g_{3} = 0$ in Figure \ref{fig:basis}a.
This gives us the following equations.
\begin{align*}
g_{4} &\equiv 0 \bmod 12\\
g_{4} &\equiv 0 \bmod 15\\
g_{4} &\equiv 0 \bmod 8
\end{align*}
By Theorem \ref{thm:completecollapse}, the leading term $g_4$ will be the LCM of the edge weights incident to $v_4$, so $g_{4} =[12, 15, 8] = 120$. This gives basis element $M_{4} = (0, 0, 0, 120)$.
Next let $g_{1} = 0$, $g_{2} = 0$ in (Figure \ref{fig:basis}c). The equations are
\begin{align*}
g_{3} &\equiv 0 \bmod 20\\
g_{3} &\equiv 0 \bmod 12
\end{align*}
whose least positive solution is $g_{3} =[12, 20] = 60$. We can let $g_{4} = 0$, which gives us a basis element $M_{3} = (0, 0, 60, 0)$.
Now consider Figure \ref{fig:basis}e and let $g_{1} = 0$. Then $g_{2} \equiv 0 \bmod 12$, so $g_{2} = 12$ is minimal. We are guaranteed values for $g_3$ and $g_4$ by the Chinese Remainder Theorem, but in this case we easily find a basis element $M_{2} = (0, 12, 12, 12)$.
Finally, we can always set $M_{1} = \left(1,1,1,1\right)$.
\end{example}
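The LCM arithmetic in this example is easy to confirm with Python's `math.lcm`:

```python
from math import lcm

# leading terms found during the collapse of K4 in this example
assert lcm(12, 15, 8) == 120   # L(M_4), from the edges incident to v4
assert lcm(12, 20) == 60       # L(M_3), after collapsing v4
```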
In the next section we give an alternate construction of $L(M_i)$ in terms of the original edge weights of $G$; this construction does not require collapsing $G$ to the graph $G_i$.
\section{Leading Terms of Basis Elements}
Let $(G,A)$ be an edge-weighted graph and let $(G', A')$ be the result of a star-clique operation removing vertex $v$ from $G$. Let $w$ and $u$ be vertices in $G$ distinct from $v$. We introduce the following notation.
\begin{itemize}
\item $(p)$ = the GCD of the edge labels of a path $p$.
\item $\mathcal{P}$ = the set of paths between $w$ and $u$ in $G$.
\item ${\mathcal{Q}}$ = the set of paths between $w$ and $u$ in $G'$.
\item $[\mathcal{P}]$ = the LCM of the $(p)$'s, where $p \in \mathcal{P}$.
\item $[\mathcal{Q}]$ = the LCM of the $(q)$'s, where $q \in \mathcal{Q}$.
\end{itemize}
Next, we show that $[\mathcal{P}]$ is invariant under the star-clique operation.
\begin{lemma}
\label{lemma:LCM invariance 1}
Let $v$ be a vertex of $G$ and let $(G', A')$ be the result of a star-clique operation on $v$. Let $w, u$ be vertices in $G$ distinct from $v$. Then $[\mathcal{P}] = [\mathcal{Q}]$.
\end{lemma}
\begin{proof}
We partition $\mathcal{P}$ into two sets:
\begin{itemize}
\item $\mathcal{P}_+$ = paths that include $v$;
\item ${\mathcal{P}}_-$ = paths not including $v$.
\end{itemize}
Let $p$ be a path in $\mathcal{P}_-$. Since this path does not include $v$, it is also a path in $G'$, hence $p$ is in $\mathcal{Q}$. Now let $p$ be a path in $\mathcal{P}_+$ with edge labels $c_{1}, c_{2}, \ldots, c_{n}$. Since $p$ includes $v$, it contains two vertices $x$ and $y$ and two edges $e$ and $f$ incident to $v$, with edge labels $c_i$ and $c_{i+1}$. Thus,
$$(p) = (c_{1}, c_{2}, \ldots, c_i, c_{i+1}, \ldots, c_{n})$$
However, by the star-clique construction, there will be an edge in $G'$ directly between $x$ and $y$ with edge label $(c_i, c_{i+1})$. Consequently there is a path $p'$ in $G'$ with
$$(p') = (c_{1}, c_{2}, \ldots, (c_{i}, c_{i+1}), \ldots, c_{n}).$$
But since $(a, (b, c)) = (a, b, c)$, we have $(p) = (p')$.
Now let $q = e_1\cdots e_n$ be a path in $\mathcal{Q}$ with $$(q) = (c_{1}, c_{2}, \ldots, c_{n}).$$ If an edge $e_i$ is not an edge in $G$, then its vertices $x$ and $y$ must be adjacent to $v$ in $G$. Thus there are edges $e_x$ and $e_y$ in $G$ with weights $c_x$ and $c_y$, which means $e_i$ in $G'$ has weight $c_i =(c_x,c_y)$. Let $q_i = e_1\cdots e_{i-1}e_xe_ye_{i+1}\cdots e_n$; then
$$(q_i) = (c_{1}, c_{2}, \ldots, c_{i-1},c_x,c_y, c_{i+1}, \ldots, c_{n})$$
$$ = (c_{1}, c_{2}, \ldots, c_{i-1},(c_x,c_y), c_{i+1}, \ldots, c_{n})$$
$$ = (c_{1}, c_{2}, \ldots, c_{i-1},c_i, c_{i+1}, \ldots, c_{n}) = (q)$$
Since we can do this for each such edge $e_i$, we see that there is a path $q^+$ in $G$ with $(q^+) = (q)$. Consequently, the GCDs used to compute $[\mathcal{P}]$ are identical to the GCDs used to compute $[\mathcal{Q}]$, so $[\mathcal{P}]=[\mathcal{Q}]$.
\end{proof}
We now prove that $[\mathcal{P}]$ is invariant under edge-collapse.
\begin{lemma}
\label{lemma:LCM invariance 2}
Let $(G', A')$ be the result of an edge-collapse operation on $G$, and let $w, u$ be vertices of $G$. Then $[\mathcal{P}] = [\mathcal{Q}]$.
\end{lemma}
\begin{proof}
Let $d_1,d_2,\ldots,d_r$ with weights $c_1,c_2,\ldots, c_r$ be the edges in $G$ that are collapsed to an edge $d$ with weight $c=[c_1,c_2,\ldots,c_r]$ in $G'$.
We can partition $\mathcal{P}$ and $\mathcal{Q}$ into two sets:
\begin{itemize}
\item $\mathcal{P}_1$ = paths that include any of the $d_i$
\item $\mathcal{P}_2$ = paths that include none of the $d_i$
\item $\mathcal{Q}_1$ = paths that include $d$
\item $\mathcal{Q}_2$ = paths that don't include $d$
\end{itemize}
Let $p \in\mathcal{P}_2$. Since $p$ does not include any $d_i$, all of its edges are in $G'$, so $p \in \mathcal{Q}_2.$ Similarly, if $p \in \mathcal{Q}_2$ then $p \in \mathcal{P}_2.$ Thus $\mathcal{P}_2 = \mathcal{Q}_2$.
Now let $p_1 \in \mathcal{P}_1$ contain the edge $d_1$, with remaining edges $e_1,\ldots, e_s$ (lying in both $G$ and $G'$) and edge weights $a_1,\ldots, a_s$. Clearly, for each $1 < i \leq r$ there is a path $p_i \in \mathcal{P}_1$ with edges $d_i, e_1,\ldots, e_s $. Thus we have
\begin{align*}
(p_1) &= (c_1, a_{1}, a_{2}, \ldots, a_{s})\\
(p_2) &= (c_2, a_{1}, a_{2}, \ldots, a_{s})\\
&\vdots \\
(p_r) &= (c_r, a_{1}, a_{2}, \ldots, a_{s})
\end{align*}
These all correspond to the single path $p'$ in $\mathcal{Q}_1$ with edges $d, e_1,\ldots, e_s$
and $$(p') = ([c_1,c_2,\ldots,c_r], a_{1}, a_{2}, \ldots, a_{s})$$
Similarly, any $p' \in \mathcal{Q}_1$ corresponds to the paths $p_1,\ldots,p_r$ in $\mathcal{P}_1$.
Let $a = (a_{1}, a_{2}, \ldots, a_{s})$. By repeated applications of Lemma \ref{lemma:gcd}, we have
\begin{align*}
[(p_1),\dots, (p_r)]
&=[(c_1, a_{1}, a_{2}, \ldots, a_{s}), \ldots, (c_r, a_{1}, a_{2}, \ldots, a_{s})]\\
&= [(c_1, a), \ldots, (c_r, a)]\\
&= ([c_1, \ldots, c_r], a)\\
&= (p')
\end{align*}
Let $\mathcal{Q}_1 = \{x_1,\dots, x_m\}$, $\mathcal{Q}_2 = \{y_1,\dots, y_k\}$, and $y = [(y_1),\dots, (y_k)]$. Then each $x_i \in \mathcal{Q}_1$ corresponds to $r$ different paths $x_{ij} \in \mathcal{P}_1$, and by the above, $(x_i) =[(x_{i1}),\dots, (x_{ir})]$.
By repeated applications of Lemma \ref{lemma:gcd} we have
\begin{align*}
[\mathcal{Q}]
&= [(x_1),\dots, (x_m), (y_1),\dots, (y_k)]\\
&= [(x_1),\dots, (x_m), y]\\
&= [[(x_{11}),\dots, (x_{1r})],\dots, [(x_{m1}),\dots, (x_{mr})], y]\\
&= [(x_{11}),\dots, (x_{1r}),\dots, (x_{m1}),\dots, (x_{mr}), (y_1),\dots, (y_k)]\\
&= [\mathcal{P}]
\end{align*}
where the last equality holds because the paths $x_{ij}$ are precisely the paths in $\mathcal{P}_1$ and $\mathcal{Q}_2 = \mathcal{P}_2$.
\end{proof}
We have the following important corollary.
\begin{corollary}
\label{corollary:lcmgcd}
Let $w,u$ be two vertices in both $G$ and $G'$. The LCM of the GCDs of the edge labels over all paths from $w$ to $u$ is invariant under the collapsing operations taking $G$ to $G'$.
\end{corollary}
Using this result, we can construct leading terms of basis elements in terms of LCMs and GCDs of the edge weights of $G$.
\begin{theorem}
\label{theo:LCM of GCD}
Let $M_1,M_2,\ldots, M_{n}$ be a flow-up class basis for $S_G$, and let $\mathcal{P}_{ij}$ denote the set of paths from $v_i$ to $v_j$ in $G$, where $j<i$. Then $L(M_1) = 1$ and, for $i>1$, $L(M_{i}) = [ [\mathcal{P}_{i1}],[\mathcal{P}_{i2}],\ldots, [\mathcal{P}_{i,i-1}] ]$.
\end{theorem}
\begin{proof}
$M_1 = (1,\ldots, 1)$ is minimal in ${\mathcal{F}}_0$ since 1 is minimal in $\mathbb{N}$. We know from Theorem \ref{thm:completecollapse} that $L(M_{i}) = L(m_i)$, where $m_i = (0,0,\ldots, [c_1,\ldots, c_r])$ is a minimal element of $K_i$ and $c_j$ is the label of the edge between $v_i$ and $v_j$ in $G_i$, for $1 \leq j \leq r$. (If necessary we can temporarily reorder the vertices adjacent to $v_i$ in $G_i$ so that they are $v_1,\ldots,v_r$.) For each $j \leq r$, the edge $e_j$ with label $c_j$ is the only path from $v_i$ to $v_j$ in $G_i$, so by Corollary \ref{corollary:lcmgcd} we have $c_j =[\mathcal{P}_{ij}]$. Then $L(M_{i}) = [c_1,\ldots, c_r]= [ [\mathcal{P}_{i1}],[\mathcal{P}_{i2}],\ldots, [\mathcal{P}_{ir}] ]$. If $r=i-1$, we are done. If $r<i-1$, let $p$ be a path in $G_i$ from $v_i$ to $v_k$, where $r < k < i$. This path starts with an edge $e_j$ from $v_i$ to one of its neighbors $v_j$ with $j\leq r$. Let $b$ be the GCD of the weights of the remaining edges of $p$. Then $(p) = (c_j, b)$. Since $(p)$ divides $c_j$, we have $[c_j, (p)] = c_j$ and thus $[c_1,\ldots, c_r] = [c_1,\ldots, c_r, (p)]$.
Since we can do this for any path from $v_i$ to $v_k$, we get $[c_1,\ldots, c_r] = [c_1,\ldots, c_r, [\mathcal{P}_{ik}]]$, and by varying $k$ we conclude $L(M_{i}) = [c_1,\ldots, c_r, [\mathcal{P}_{i,r+1}],\ldots, [\mathcal{P}_{i,i-1}]]= [ [\mathcal{P}_{i1}],[\mathcal{P}_{i2}],\ldots, [\mathcal{P}_{i,i-1}] ]$.
\end{proof}
We see from this proof that we don't need to check all paths from $v_i$ to all vertices $v_j$, just the paths from $v_i$ to $v_j$ whose interior vertices all have subscripts bigger than $i$. An easy way to identify these paths is to label each of the vertices $v_{1}$ to $v_{i-1}$ with a $0$, and use only paths from $v_i$ that stop at the first $0$ they reach.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
$\kfour{0.35}{g_{1}}{g_{2}}{g_{3}}{g_{4}}$
\end{tabular}
\end{center}
\caption{Edge-weighted $K_{4}$}
\label{fig:k4 graph}
\end{figure}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\raisebox{1in}{Figure \ref{fig:nocollapse}a}
&\includegraphics[scale=0.35]{spline_nocollapse1}\\
\raisebox{1in}{Figure \ref{fig:nocollapse}b}
&\includegraphics[scale=0.35]{spline_nocollapse2}\\
\raisebox{1in}{Figure \ref{fig:nocollapse}c}
&\includegraphics[scale=0.35]{spline_nocollapse3}
\end{tabular}
\end{center}
\caption{Finding Leading Terms Using Theorem \ref{theo:LCM of GCD}}
\label{fig:nocollapse}
\end{figure}
\begin{example}
We will use Theorem \ref{theo:LCM of GCD} to compute the leading terms of a flow class basis for $S_{G}$, the $K_{4}$ graph of Figure \ref{fig:k4 graph}.
\begin{enumerate}
\item First, we have $L(M_1)=1$.
\item In order to find $L(M_{2})$ we set $g_1=0$ as in Figure \ref{fig:nocollapse}a. Then $L(M_{2})$ is the LCM of the GCDs of the edge weights along all paths from $g_{2}$ to any vertex labeled $0$. These paths are:
\begin{itemize}
\item
$g_{2}g_{1}$, which has edge weight $6$,
\item
$g_{2}g_{4}g_{1}$, which has edge weights $12, 15$
\item
$g_{2} g_{3}g_{1}$, which has edge weights $20, 6$
\item
$g_{2}g_{4}g_{3}g_{1}$, which has edge weights $15, 8, 6$
\item
$g_{2}g_{3}g_{4}g_{1}$, which has edge weights $20, 8, 12$
\end{itemize}
Thus $L(M_{2}) = [6,(12,15),(20,6),(15,8,6),(20,8,12)]=[6, 3, 2, 1, 4] = 12$.
\item For $L(M_{3})$, we set both $g_{1}$, $g_{2}$ equal $0$ as in Figure \ref{fig:nocollapse}b.
The paths from $g_{3}$ to any vertex labeled $0$ are:
\begin{itemize}
\item
$g_{3}g_{2}$, with edge weight $20$,
\item
$g_{3}g_{1}$, with edge weight $6$,
\item
$g_{3}g_{4}g_{1}$, with edge weights $8, 12$
\item
$g_{3}g_{4}g_{2}$, with edge weights $8, 15$
\end{itemize}
Thus $L(M_{3}) = [20, 6, (8,12), (8,15)]=[20, 6, 4, 1] = 60$.
\item Finally, to compute $L(M_{4})$ we set $g_{1}$, $g_{2}$, $g_{3}$ equal to $0$, as in Figure \ref{fig:nocollapse}c, and find the paths from $g_{4}$ to a vertex labeled $0$:
\begin{itemize}
\item
$g_{4}g_{1}$, with weight $12$,
\item
$g_{4}g_{2}$, with weight $15$,
\item
$g_{4}g_{3}$, with weight $8$,
\end{itemize}
Then $L(M_{4})= [12, 15, 8] = 120$.
\end{enumerate}
\end{example}
Thus the leading terms of $M_1$, $M_2$, $M_3$, $M_4$ are $1$, $12$, $60$, and $120$ respectively. These agree with the leading terms found in Example \ref{example:k4}, illustrating that Theorems \ref{thm:completecollapse} and \ref{theo:LCM of GCD} provide two different ways to construct the same basis for $S_G$.
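The computation in the example can be replayed mechanically. The following sketch is our own code, not from the paper; the $K_4$ edge weights are an assumption read off from the paths listed above ($g_1g_2 = g_1g_3 = 6$, $g_1g_4 = 12$, $g_2g_3 = 20$, $g_2g_4 = 15$, $g_3g_4 = 8$). For each $v_i$ it enumerates the simple paths to lower-indexed vertices whose interior vertices all have index greater than $i$, and takes the LCM of the path GCDs.

```python
from math import gcd
from functools import reduce
from itertools import permutations

# Assumed edge weights of the K4 example (vertices 1..4 stand for g_1..g_4),
# read off from the paths listed in the example above.
edges = {frozenset(e): w for e, w in [((1, 2), 6), ((1, 3), 6), ((1, 4), 12),
                                      ((2, 3), 20), ((2, 4), 15), ((3, 4), 8)]}

def lcm(a, b):
    return a * b // gcd(a, b)

def leading_term(i, n=4):
    """L(M_i): LCM, over paths from v_i to some v_j (j < i) whose interior
    vertices all have index > i, of the GCD of the path's edge weights."""
    if i == 1:
        return 1
    higher = list(range(i + 1, n + 1))
    vals = []
    for j in range(1, i):                     # endpoint with smaller index
        for r in range(len(higher) + 1):      # choose interior vertices
            for mid in permutations(higher, r):
                path = (i,) + mid + (j,)
                ws = [edges[frozenset(path[t:t + 2])]
                      for t in range(len(path) - 1)]
                vals.append(reduce(gcd, ws))  # GCD along this path
    return reduce(lcm, vals)                  # LCM over all such paths

print([leading_term(i) for i in range(1, 5)])  # [1, 12, 60, 120]
```

The output reproduces the leading terms $1$, $12$, $60$, $120$ found above.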
\bibliographystyle{amsplain}
% arXiv:2111.02471 -- "Generalized Integer Splines on Arbitrary Graphs" (math.CO)
% Abstract: Generalized integer splines on a graph $G$ with integer edge weights are integer vertex labelings such that if two vertices share an edge in $G$, the vertex labels are congruent modulo the edge weight. We introduce collapsing operations that reduce any simple graph to a single vertex, carrying with it the edge weight information. This corresponds to a sequence of surjective maps between the associated spline modules, leading to an explicit construction of a module basis in terms of the edge weights.
% arXiv:1108.0622 -- "The rational cohomology of the mapping class group vanishes in its virtual cohomological dimension"
% Abstract: Let Mod_g be the mapping class group of a genus g >= 2 surface. The group Mod_g has virtual cohomological dimension 4g-5. In this note we use a theorem of Broaddus and the combinatorics of chord diagrams to prove that H^{4g-5}(Mod_g; Q) = 0.
\section{Introduction}
Let $\Mod_g$ be the mapping class group of a closed, oriented, genus $g\geq 2$ surface, and let ${\mathcal M}_g$
be the moduli space of genus $g$ Riemann surfaces. It is well-known
that for each $i\geq 0$
\[H^i(\Mod_g;\Q) \cong H^i({\mathcal M}_g;\Q).\]
It is a fundamental open problem to determine the maximal $i$ for which these vector
spaces are nonzero. Harer \cite{Ha} proved that the \emph{virtual cohomological dimension}
$\vcd(\Mod_g)$ equals $4g-5$. More precisely, he proved that $H^{4g-5}(\Mod_g;\St_g \otimes\, \Q) \neq 0$
for a certain $\Mod_g$-module $\St_g$ (see below for details) and
that $H^{i}(\Mod_g;V \otimes \Q)=0$ for all $i > 4g-5$ and all $\Mod_g$-modules $V$.
Thus the first step of the problem above is to determine whether $H^{4g-5}(\Mod_g;\Q)\neq 0$. The purpose of this note is to answer this question.
Let $\Mod_{g,\ast}$ (resp.\ $\Mod_{g,1}$) denote the mapping class group of the genus $g$ surface with one marked point (resp.\ one boundary component).
\begin{theorem}\label{theorem:main}
For any $g\geq 2$,
\[H^{4g-5}(\Mod_g;\Q)=H^{4g-5}({\mathcal M}_g;\Q)=0.
\]
Further, the rational cohomology of $\Mod_{g,\ast}$ (resp.\ the integral cohomology of $\Mod_{g,1}$) vanishes in its virtual cohomological dimension.
\end{theorem}
This theorem was announced some years ago by Harer, but he has informed us that his proof will not appear. We recently learned that Morita--Sakasai--Suzuki \cite{MSS} have independently found a proof of Theorem~\ref{theorem:main} using a completely different method. They apply a theorem of Kontsevich on graph homology to their computation of a generating set for a certain symplectic Lie algebra. Our proof combines some results about
the combinatorics of chord diagrams with the work of Broaddus \cite{Br}
on the Steinberg module of $\Mod_g$. We thank Allen Hatcher and Takuya Sakasai for their comments on an earlier version of this paper, and John Harer for informing us about the paper \cite{MSS} and his own work.
Theorem~\ref{theorem:main} is consistent with the well-studied analogy between mapping class groups and arithmetic groups. For example, Theorem~1.3 of Lee--Szczarba \cite{LS} states that the rational cohomology of $\SL(n,\Z)$ vanishes in its cohomological dimension.
\section{Background}
We begin by briefly summarizing previous results that make our computation possible; for details see Broaddus \cite{Br}.
\para{Teichm\"{u}ller space and its boundary} Let $S_g$ be a connected, closed orientable surface of genus $g\geq 2$.
Let $\C_g$ be the \emph{curve complex} of $S_g$ defined by Harvey \cite{Harv}, i.e.\ the flag complex whose $k$-simplices are the $(k+1)$-tuples of distinct free homotopy classes of simple closed curves in $S_g$ that can be realized disjointly. Harer \cite{Ha} proved that $\C_g$ is homotopy equivalent to a wedge of spheres $\bigvee_{i=1}^\infty S^{2g-2}$.
There exists a constant $\delta>0$ such that any two closed geodesics on a hyperbolic surface of length $\leq \delta$ are disjoint (the \emph{Margulis constant} for hyperbolic surfaces). Let $\Tthick$ be the Teichm\"uller space of marked hyperbolic surfaces diffeomorphic to $S_g$ having no closed geodesic of length $<\delta$. It is known that $\Tthick$ is a $(6g-6)$-dimensional manifold with corners. Ivanov \cite{Iv} proved that $\mathcal{T}_g^{\thick}$ is contractible and that its boundary $\partial \Tthick$ is homotopy equivalent to $\C_g$. Briefly, for each simplex $\sigma$ of $\C_g$, let $\T_{\sigma}$ be the subset of
$\partial \Tthick$ consisting of surfaces where each curve in $\sigma$ has length $\delta$.
Each $\T_{\sigma}$ is contractible, and $\T_{\sigma} \cap \T_{\sigma'} = \emptyset$ unless $\sigma \cup \sigma'$ is a simplex of $\C_g$, in which case $\T_{\sigma} \cap \T_{\sigma'} = \T_{\sigma \cup \sigma'}$.
\para{Duality in the mapping class group} The mapping class group $\Mod_g$ acts properly discontinuously on $\Tthick$ with finite stabilizers. Defining
$\Mthick = \Tthick / \Mod_g$, it follows that
$H^*(\Mod_g;\Q)\cong H^*(\Mthick;\Q)$. Mumford's compactness criterion states that $\Mthick$ is compact.
Combining this with the previous two paragraphs, the work of Bieri--Eckmann \cite[Theorem~6.2]{BE} shows that
$\vcd(\Mod_g)=4g-5$ and that
\begin{equation}
\label{eq:duality}
H^{4g-5}(\Mod_g;\Q)\cong H_0(\Mod_g;H_{2g-2}(\C_g;\Q)).
\end{equation}
In fact, we can say more. Let $\St_g$ denote the \emph{Steinberg module}, i.e.\ the $\Mod_g$-module
$H_{2g-2}(\C_g;\Z)$. Then $\St_g\otimes\, \Q$ is the rational \emph{dualizing module} for $\Mod_g$, meaning that
\[H^{4g-5-k}(\Mod_g;M\otimes\Q)\cong H_k(\Mod_g;M\otimes \St_g\otimes\, \Q)\]
for any $k$ and any $M$. Moreover $\St_g$ is also the dualizing module for $\Mod_{g,*}$ and $\Mod_{g,1}$,
which act on $\St_g$ via the natural surjections $\Mod_{g,*}\to \Mod_g$ and $\Mod_{g,1}\to \Mod_g$ \cite{Ha}. This implies that for $\nu=\vcd(\Mod_{g,*})=4g-3$ we have $H^{\nu-k}(\Mod_{g,*};M\otimes\Q)\cong H_k(\Mod_{g,*};M\otimes \St_g\otimes\, \Q)$. For $\Mod_{g,1}$ we obtain a similar result with $\nu=\cd(\Mod_{g,1})=4g-2$, except that since $\Mod_{g,1}$ is torsion-free the result holds integrally: $H^{\nu-k}(\Mod_{g,1};M)\cong H_k(\Mod_{g,1};M\otimes \St_g)$.
\para{An alternate model for $\St_g$} Fix a finite-volume hyperbolic metric on $S_g-\{\ast\}$. Another model for $\St_g$ comes from the \emph{arc complex} $\A_g$, the flag complex whose $k$-simplices are the disjoint $(k+1)$-tuples of simple geodesics on $S_g-\{\ast\}$ beginning and ending at the cusp $\ast$. Let $\A_g^\infty$ be the subcomplex consisting of collections of geodesics $\gamma_1,\ldots,\gamma_{k+1}$ for which $S-\bigcup \gamma_i$ has some non-contractible component. Harer proved that $\A_g^\infty$ is homotopy equivalent to $\C_g$ \cite{Ha}, and that $\A_g$ is contractible \cite{Ha2} (see also \cite{Hat}). Thus
\[\St_g=H_{2g-2}(\C_g)\simeq H_{2g-2}(\A_g^\infty)\simeq H_{2g-1}(\A_g/\A_g^\infty).\]
\para{Chord diagrams} By examining how the geodesics are arranged in a neighborhood of $\ast$, an $(n-1)$-simplex of $\A_g$ can be encoded by an $n$-chord diagram; see \cite[\S4.1]{Br}. An \emph{ordered $n$-chord diagram} is an ordered sequence $U = (u_1,\ldots,u_n)$, where $u_i$ is an unordered pair of
distinct points on $S^1$ (a \emph{chord}) and
$u_i \cap u_j = \emptyset$ if $i \neq j$.
We will visually depict $U$ by drawing arcs connecting
the points in each $u_i$ (see Figure \ref{figure:1} for examples).
Two ordered chord diagrams are identified if they differ by an orientation-preserving
homeomorphism of the circle.
\para{Filling systems} An unlabeled \emph{$k$-filling system} of genus $g$ is a $(2g+k)$-chord diagram satisfying the conditions described in \cite[\S4.1]{Br}: no chord should be parallel to another chord or to the boundary circle, and the chords should determine exactly $k+1$ boundary cycles. These conditions, which guarantee that these chords define a simplex of $\A_g-\A_g^\infty$, have the following simple combinatorial formulation. Given $U = (u_1,\ldots,u_n)$, consider two permutations of the $2n$ points $u_1\cup\cdots\cup u_n$: let $\omega$ be the $2n$-cycle which takes each point to the point immediately adjacent in the clockwise direction, while $\tau$ exchanges the two points of each chord $u_i$ and thus is a product of $n$ transpositions. Then a $(2g+k)$-chord diagram is a $k$-filling system of genus $g$ if $\tau\circ \omega$ has $k+1$ orbits, none of which have length 1 or 2. Finally, let $t_i$ be the straight line in $D^2$ connecting the two points of $u_i$. Then we say that $U$ is \emph{disconnected} if the set $t_1\cup \cdots\cup t_n\subset D^2$ is not connected.
\para{The chord diagram chain complex}
Fix a genus $g$, and set $n=2g+k$. Let $\U_k$ be the free abelian group spanned by ordered $k$-filling systems of genus
$g$ modulo the following relation. For $\sigma\in S_n$
and $U = (u_1,\ldots,u_n)$, define $\sigma\cdot U= (u_{\sigma(1)},\ldots,u_{\sigma(n)})$. We impose the relation $\sigma\cdot U = (-1)^{\sigma} U$.
The differential $\partial : \U_k \rightarrow \U_{k-1}$ is defined as follows. Consider
an ordered $k$-filling system $U = (u_1,\ldots,u_n)$ of genus $g$. For $1 \leq i \leq n$, let
$\partial_i U$ equal $(u_1,\ldots,\widehat{u_i},\ldots,u_n)$ if this is an ordered
$(k-1)$-filling system of genus $g$; otherwise, let $\partial_i U = 0$.
Then \[\partial(U) = \sum_{i=1}^n (-1)^{i-1} \partial_i U.\]
\para{Broaddus's results}
We will need the following theorem of Broaddus \cite{Br}. Recall that if $\Gamma$ is a group
and $M$ is a $\Gamma$-module, then the \emph{module of coinvariants}, denoted $M_{\Gamma}$, is
the quotient $M / \langle g \cdot m - m\,|\,g \in \Gamma, m \in M\rangle$. Let $X$ be the $0$-filling system of genus $g$ depicted in Figure \ref{figure:1}a.
\begin{theorem}[{Broaddus \cite{Br}}]
\label{theorem:broaddus}For each $g\geq 0$, the following hold.
\begin{enumerate}[(i)]
\item $(\St_g)_{\Mod_g} \cong \U_0 / \partial(\U_1)$.
\item The abelian group $\U_0 / \partial(\U_1)$
is spanned by the image $[X]\in \U_0/\partial(\U_1)$ of $X\in \U_0$.
\item If $v$ is a disconnected $0$-filling system of genus $g$, then the image
of $v$ in $\U_0 / \partial(\U_1)$ is $0$.
\end{enumerate}
\end{theorem}
\noindent
For part (i) of Theorem \ref{theorem:broaddus}, see \cite[Proposition 3.3]{Br} together with the
remark preceding \cite[Example 4.1]{Br}; for part (ii), see
\cite[Theorem 4.2]{Br}; and for part (iii), see
\cite[Proposition 4.5]{Br}.
\section{Proof of Theorem~\ref{theorem:main}}
For any group $\Gamma$ and any $\Gamma$-module $M$, recall that $H_0(\Gamma;M) = M_{\Gamma}$.
Since the actions of $\Mod_{g,\ast}$ and $\Mod_{g,1}$ on $\St_g$ factor through $\Mod_g$, to
prove Theorem~\ref{theorem:main} it suffices by \eqref{eq:duality}
to show that $(\St_g)_{\Mod_g} = 0$.
By Theorem \ref{theorem:broaddus}(i), this is equivalent to showing that
$\U_0 / \partial(\U_1) = 0$.
For $v \in \U_0$, let $[v]$ denote the associated element of $\U_0 / \partial(\U_1)$.
Let $X = (x_1,\ldots,x_{2g})$ be the $0$-filling system depicted in Figure \ref{figure:1}(a).
By Theorem \ref{theorem:broaddus}(ii), it is enough to show that $[X]=0$.
Let $Y = (x_1,\ldots,x_{2g},y)$ be the $1$-filling system depicted in Figure \ref{figure:1}(b).
Observe that
\[\partial_1 Y = (x_2,\ldots,x_{2g},y) = (x_1,\ldots,x_{2g}) = X,\]
where the second equality holds since the indicated chord diagrams differ by
an orientation preserving homeomorphism of $S^1$. Similarly, $\partial_{2g+1}Y = X$.
Also, $\partial_2 Y = 0$ (resp.\ $\partial_{2g}Y = 0$) by definition, since the chord $x_1$ (resp.\ $y$) becomes parallel to the boundary. We thus have
\[\partial(Y) = 2X + \sum_{i=3}^{2g-1} (-1)^{i-1} \partial_i Y.\]
For $3 \leq i \leq 2g-1$, the $0$-filling system $\partial_i Y$ is disconnected, so Theorem \ref{theorem:broaddus}(iii) implies that $[\partial_i Y]=0$. We conclude that
$2[X]=0$.
\Figure{figure:1}{ChordDiagrams}{(a) The oriented $0$-filling system $X = (x_1,\ldots,x_{2g})$. For concreteness, we depict
it for $g = 3$. In general, $X$ has $2g$ chords arranged in the same pattern as the chords shown.\\
(b) The $1$-filling system $Y = (x_1,\ldots,x_{2g},y)$. The chord $y$ intersects the chord $x_{2g}$.\\
(c) The $1$-filling system $Z = (z,x_1,\ldots,x_{2g})$. The chord $z$ intersects both $x_1$ and $x_{2g}$.}
Now consider the $1$-filling system $Z = (z,x_1,\ldots,x_{2g})$ depicted in Figure \ref{figure:1}(c).
Removing any chord from Figure \ref{figure:1}(c) yields Figure \ref{figure:1}(a) up to rotation,
so $\partial_i Z=\pm X$ for each $i$. In fact, it is clear that $\partial_1 Z=X$, that $\partial_2 Z = -X$, that
$\partial_3 Z = X$, and so on, with $\partial_i Z = (-1)^{i-1} X$. This shows that
\[\partial (Z) = X + X + \cdots + X = (2g+1)X,\]
so $(2g+1)[X] = 0$.
Summing up, we have shown that $2[X] = (2g+1)[X] = 0$. Since $2$ and $2g+1$ are coprime, it follows that $[X] = (2g+1)[X] - g \cdot 2[X] = 0$, as desired.
% arXiv:2108.02848 -- "Construction and application of provable positive and exact cubature formulas"
% Abstract: Many applications require multi-dimensional numerical integration, often in the form of a cubature formula. These cubature formulas are desired to be positive and exact for certain finite-dimensional function spaces (and weight functions). Although there are several efficient procedures to construct positive and exact cubature formulas for many standard cases, it remains a challenge to do so in a more general setting. Here, we show how the method of least squares can be used to derive provable positive and exact formulas in a general multi-dimensional setting. Thereby, the procedure only makes use of basic linear algebra operations, such as solving a least squares problem. In particular, it is proved that the resulting least squares cubature formulas are ensured to be positive and exact if a sufficiently large number of equidistributed data points is used. We also discuss the application of provable positive and exact least squares cubature formulas to construct nested stable high-order rules and positive interpolatory formulas. Finally, our findings shed new light on some existing methods for multivariate numerical integration and under which restrictions these are ensured to be successful.
\section{Introduction}
\label{sec:introduction}
Numerical integration is an omnipresent technique in applied mathematics, engineering, and many other sciences.
Prominent examples include numerical differential equations \cite{hesthaven2007nodal,quarteroni2008numerical,ames2014numerical}, machine learning \cite{murphy2012machine}, finance \cite{glasserman2013monte}, and biology \cite{manly2006randomization}.
\subsection{Problem Statement}
In many cases, the problem can be formulated as follows.
Let $d \geq 2$ and $\Omega \subset \R^d$ be bounded with positive volume and boundary of measure zero (in the sense of Lebesgue).
Given a function $f:\Omega \to \R$, we seek to approximate the continuous integral
\begin{equation}\label{eq:int}
I[f] := \int_{\Omega} f(\boldsymbol{x}) \omega(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
\end{equation}
with Riemann integrable \emph{weight function} $\omega: \Omega \to \R^+_0$ that is assumed to be positive almost everywhere.
A prominent approach is to approximate \cref{eq:int} by a weighted finite sum over the function values $f(\mathbf{x}_n)$ at some \emph{data points} $\mathbf{x}_1,\dots,\mathbf{x}_N$, denoted by
\begin{equation}\label{eq:CF}
C_N[f] := \sum_{n=1}^N w_n f(\mathbf{x}_n).
\end{equation}
Usually, $C_N$ is referred to as an \emph{$N$-point cubature formula (CF)} and ${w_1,\dots,w_N \in \R}$ are called \emph{cubature weights}.
For a 'good' CF, the following properties are often required:
\begin{enumerate}[label=(P\arabic*)]
\item \label{item:P1}
All data points should lie inside of $\Omega$. That is, $\mathbf{x}_n \in \Omega$ for all $n=1,\dots,N$.
\item \label{item:P2}
The CF should be \emph{positive}.
That is, $w_n > 0$ for all $n=1,\dots,N$.\footnote{Some authors require the cubature weights to only be nonnegative.
However, if $w_n = 0$, then the corresponding data point can---and should---be removed from the CF to avoid an unnecessary loss of efficiency.}
\end{enumerate}
See one of the many monographs and reviews \cite{haber1970numerical,stroud1971approximate,mysovskikh1980approximation,engels1980numerical,cools1997constructing,davis2007methods} on CFs.
Moreover, in many applications, CFs are desired which are exact for certain finite-dimensional function spaces.
Let $\mathcal{F}_K(\Omega)$ denote a $K$-dimensional function space spanned by ${\varphi_1,\dots,\varphi_K:\Omega \to \R}$.
It shall be assumed that all the moments $m_k := I[\varphi_k]$, $k=1,\dots,K$, exist.
Then, the $N$-point CF \cref{eq:CF} is said to be \emph{$\mathcal{F}_K(\Omega)$-exact} if
\begin{equation}\label{eq:exactness}
C_N[f] = I[f] \quad \forall f \in \mathcal{F}_K(\Omega).
\end{equation}
Usual choices for $\mathcal{F}_K(\Omega)$ include the space of all algebraic and trigonometric polynomials up to a certain degree $m$, respectively denoted by $\mathbb{P}_m(\R^d)$ and $\Pi_m(\R^d)$.
Another prominent example is approximation spaces of radial basis functions (RBFs) \cite{buhmann2000radial,buhmann2003radial,wendland2004scattered,fasshauer2007meshfree,fornberg2015primer}.
See \cite{sommariva2005integration,sommariva2006numerical,sommariva2006meshless,punzi2008meshless,aziz2012numerical,fuselier2014kernel,reeger2016numericalA,reeger2016numericalB,reeger2018numerical,sommariva2021rbf} and references therein on related integration methods.
\subsection{Previous Works}
The existence of positive and $\mathcal{F}_K(\Omega)$-exact CFs with at most $K$ data points is ensured by the following theorem originating from Tchakaloff's work \cite{tchakaloff1957formules} (also see \cite{davis1968nonnegative,bayer2006proof}).
\begin{theorem}[Tchakaloff, 1957]\label{thm:Tchakaloff}
Given is \cref{eq:int} with nonnegative $\omega$ and a $K$-dimensional function space $\mathcal{F}_K(\Omega)$.
Then there exist $N$ data points $\mathbf{x}_1,\dots,\mathbf{x}_N \in \Omega$ and positive weights $w_1,\dots,w_N$ with $N \leq K$ such that the corresponding $N$-point CF is $\mathcal{F}_K(\Omega)$-exact (and positive).
\end{theorem}
However, it should be noted that Tchakaloff's original proof---in particular, involving the theory of convex bodies---is not constructive in nature.
A constructive, yet in general not computationally practical, proof was provided in \cite{davis1967construction}.
At the same time, it should be pointed out that \cref{thm:Tchakaloff} only provides an upper bound for the number of data points that are needed for a positive and $\mathcal{F}_K(\Omega)$-exact CF.
Indeed, for standard domains (e.\,g.\ $\Omega = [0,1]^d$) and weight functions (e.\,g.\ $\omega \equiv 1$) as well as classical functions spaces (e.\,g.\ algebraic polynomials), it is possible to construct CFs that use even fewer points \cite{mcnamee1967construction,maeztu1989symmetric,genz1996fully,bos2017trivariate,bos2017polynomial,benouahmane2019near,van2021cubature}.
Such CFs are referred to as minimal or near-minimal CFs and usually utilize some kind of symmetry in the domain, weight function, and function space.
Unfortunately, they are often limited to algebraic polynomials of low total degrees or specific domains and weight functions.
In passing, we also mention Smolyak (sparse grid) CFs \cite{smolyak1963quadrature,bungartz2004sparse}, which rely on a product domain and a separable weight function.
Another approach is the use of optimization-based methods \cite{fliege1999distribution,taylor2000algorithm,taylor2007cardinal,ryu2015extensions,jakeman2018generation,keshavarzzadeh2018numerical} to construct near-minimal CFs (sometimes based on heuristic arguments).
Unfortunately, the success of these methods often depends on the initial guess for the data points and they can sometimes yield points outside of the computational domain or negative cubature weights.
We shall also mention some recent works on constructing nested sampling-based positive CFs \cite{van2020adaptive,van2020generating}.
However, it should be noted that these are developed based on an approximate notion of exactness; in the sense that $I[f]$ in the exactness condition \cref{eq:exactness} is replaced by a discrete approximation $I^{(M)}[f] = \frac{1}{M} \sum_{m=1}^M f(\mathbf{y}_m)$ with $M > N$ and random samples $\mathbf{y}_m$, $m=1,\dots,M$.
In fact, one might argue that the method proposed in \cite{van2020generating} is closer to subsampling \cite{davis1967construction,wilson1969general,wilhelmsen1976nearest,piazzon2017caratheodoryLS,piazzon2017caratheodory,seshadri2017effectively,van2017non} (sometimes also called importance sampling, compressed sampling, or Tchakaloff subsampling).
A similar argument can also be made for CFs based on nonnegative least squares (NNLS); see \cite{huybrechs2009stable,sommariva2015compression,glaubitz2020stableQRs}.
Finally, the arguably most general approach to construct ('probable') positive and exact CFs was recently proposed in \cite{migliorati2020stable}.
The idea there was to derive randomized CFs based on first approximating the unknown integrand $f$ by a discrete least squares (LS) approximation \cite{cohen2013stability,cohen2017optimal,guo2020constructing} and to exactly integrate this approximation then.
Thereby, the discrete LS approximation was based on the values of $f$ at random data points (samples).
The authors proved in \cite{migliorati2020stable} that the resulting randomized CFs are positive and exact (as well as accurate) with a high probability if the number of random data points is sufficiently large.
To the best of our knowledge, this result has not been carried over to the deterministic setting (in the sense of the data points not coming from fully-probabilistic random samples) yet.
We also already briefly mention the two recent works \cite{nakatsukasa2018approximate,hayakawa2021monte}, which we shall address in more detail below.
In a nutshell, although many approaches exist to construct \emph{provable} positive and exact CFs for certain special cases, it remains a challenge to do so in a general deterministic setting.
\subsection{Our Contribution}
We propose a simple procedure to construct provable positive and exact CFs in a general high-dimensional setting.
The procedure is based on the idea of LS quadrature formulas (QFs).
These were introduced in \cite{wilson1970necessary,wilson1970discrete} (originally for equidistant points and $\omega \equiv 1$) and generalized in \cite{huybrechs2009stable,glaubitz2020stableQRs} (still in one dimension though).
Recently, LS formulas have been extended to higher dimensions in \cite{glaubitz2020stableCFs}.
However, this was done under the restriction of the LS-CF being exact for algebraic polynomials up to a certain (total) degree, rather than a general finite-dimensional function space.
The theoretical backbone of the present work is the extension of this result to a general class of finite-dimensional function spaces.
We show that---under certain assumptions outlined below---with an appropriate weighting, LS-CFs are ensured to be positive if a sufficiently large number of (equidistributed) data points is used.
Numerically, we found that the number of data points $N$ has to scale roughly like the squared dimension of the finite-dimensional function space $\mathcal{F}_K(\Omega)$.
Moreover, we provide an error analysis for the corresponding positive and exact LS-CF, discuss potential applications, and address some connections to other integration methods.
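To make the LS-CF idea concrete, here is a minimal sketch under simplifying assumptions: it is not the authors' implementation, and the weighting matrix of the actual LS-CF construction is dropped, so the cubature weights are simply the minimum-norm solution of the exactness system. We take $\Omega = [0,1]^2$, $\omega \equiv 1$, and $\mathcal{F}_K(\Omega) = \mathbb{P}_3(\R^2)$.

```python
import numpy as np

# Sketch of an LS-CF: minimum-norm weights solving the exactness system
# Phi @ w = moments, on Omega = [0,1]^2 with omega = 1 and total degree <= 3.
rng = np.random.default_rng(0)
m_deg = 3
exps = [(a, b) for a in range(m_deg + 1) for b in range(m_deg + 1 - a)]
K = len(exps)                                # dim P_3(R^2) = 10
X = rng.random((20 * K, 2))                  # N >> K data points in Omega

Phi = np.stack([X[:, 0]**a * X[:, 1]**b for a, b in exps])       # K x N
moments = np.array([1 / ((a + 1) * (b + 1)) for a, b in exps])   # I[x^a y^b]
w, *_ = np.linalg.lstsq(Phi, moments, rcond=None)  # least-norm weights

# Exactness holds by construction; positivity is what the positivity result
# predicts once sufficiently many equidistributed points are used.
print(abs(w @ (X[:, 0] * X[:, 1]**2) - 1 / 6) < 1e-8)  # exact for x*y^2
print(bool((w > 0).all()))
```

Here random points stand in for an equidistributed sequence, and the exactness check integrates $f(x,y) = xy^2$, whose exact integral over $[0,1]^2$ is $1/6$.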
To prove the aforementioned (conditional) positivity of appropriately weighted LS-CFs, we leverage some interesting connections between different fields.
These include linear algebra and LS problems, discrete orthonormal functions, and equidistributed sequences originating from number theory.
The sufficient conditions for this result (summarized in \cref{cor:main}) to hold, are the following:
\begin{enumerate}[label=(R\arabic*)]
\item \label{item:restr_domain}
The integration domain $\Omega \subset \R^d$ is bounded with boundary of measure zero.
\item \label{item:restr_weight}
The weight function $\omega: \Omega \to \R_0^+$ is Riemann integrable\footnote{For a bounded function on a compact set, being Riemann integrable is equivalent to being continuous almost everywhere (in the sense of Lebesgue).} and positive almost everywhere.
\item \label{item:restr_space}
The $K$-dimensional vector space $\mathcal{F}_K(\Omega)$ is spanned by a basis $\{ \varphi_k \}_{k=1}^K$ of continuous and bounded functions $\varphi_k: \Omega \to \R$, $k=1,\dots,K$.
Furthermore, the vector space $\mathcal{F}_K(\Omega)$ contains constants. In particular, $1 \in \mathcal{F}_K(\Omega)$.
\end{enumerate}
We shall already provide a short discussion of the above restrictions.
\ref{item:restr_domain} essentially ensures the existence (and simple construction) of an equidistributed sequence in $\Omega$.
\ref{item:restr_weight}, on the other hand, warrants the existence of a certain continuous inner product (and corresponding orthonormal functions) that can be approximated by a sequence of discrete inner products (and corresponding discrete orthonormal functions).
Finally, \ref{item:restr_space} is needed for technical reasons and is utilized in the proof of the preliminary \cref{lem:2} (and therefore in the proof of our main result, \cref{thm:main}).
That said, we believe that parts of \ref{item:restr_space} can be further relaxed and might address this in a forthcoming work.
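As an illustration of \ref{item:restr_domain}, a concrete equidistributed sequence is easy to produce. The following sketch (our own construction, not from the paper) uses a Kronecker/Weyl sequence on $[0,1]^2$, whose equidistribution is classical, restricted to a disk; since the disk's boundary has measure zero, the fraction of points landing inside converges to its area.

```python
import numpy as np

# Kronecker/Weyl sequence x_n = ({n*sqrt(2)}, {n*sqrt(3)}) in [0,1]^2,
# restricted to a disk of radius 1/4 centered at (1/2, 1/2).
alpha = np.array([np.sqrt(2.0), np.sqrt(3.0)])
n = np.arange(1, 20001)[:, None]
pts = np.mod(n * alpha, 1.0)                       # 20000 points in [0,1]^2
inside = ((pts - 0.5)**2).sum(axis=1) <= 0.25**2   # membership in the disk
print(abs(inside.mean() - np.pi * 0.25**2) < 0.02) # fraction ~ area of disk
```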
\subsection{Implications and Potential Applications}
Our findings---in particular the provable positivity and exactness of LS-CFs---imply a simple\footnote{Indeed, the procedure only utilizes simple operations from linear algebra, such as solving an LS problem.} procedure to construct (deterministic) stable high-order CFs in a general setting.
This can be interpreted as an extension of the results on stable high-order randomized CFs recently introduced in \cite{migliorati2020stable} to a deterministic setting.
Moreover, we discuss the application of the provable positive and exact LS-CF to construct interpolatory CFs (see \cref{sub:interpol_CFs}).
These use a smaller number, $N = K$, of data points, supported on the larger set of data points corresponding to the LS-CF, and can be obtained by several different subsampling strategies.
It should also be noted that for a fixed finite-dimensional function space $\mathcal{F}_K(\Omega)$ and an increasing number $N$ of data points, the LS-CFs discussed here might be interpreted as high-order corrections to Monte Carlo (MC)---for random data points---and quasi-Monte Carlo (QMC)---for low-discrepancy data points---methods.
See \cref{rem:MC_correction} for more details.
In particular, this reveals an interesting connection to \cite{nakatsukasa2018approximate} and potential applications of LS-CFs for variance reduction in MC and QMC methods.
We shall also briefly mention the implication of the present work to several other approaches to find positive (interpolatory) CFs.
These include NNLS \cite{lawson1995solving,huybrechs2009stable,sommariva2015compression,glaubitz2020stableQRs} and different optimization strategies \cite{glaubitz2020stableCFs,hayakawa2021monte} based on linear programming \cite{bloomfield1983least,dantzig1998linear,dantzig2006linear,gill1991numerical,vanderbei2020linear}.
Sometimes, these approaches are justified by Tchakaloff's theorem (\cref{thm:Tchakaloff}).
Indeed, Tchakaloff's theorem ensures the existence of a solution to the respective optimization problem if an appropriate set of data points is considered.
That said, Tchakaloff's theorem provides no information about which data points should be used (it only states their existence and an upper bound for their number).
Hence, when applying the above-mentioned procedures without care to an arbitrary set of data points, they cannot always be expected to actually result in a positive interpolatory CF.
The present work provides such an assurance in the sense that \cref{cor:main}, combined with the subsampling strategies discussed in \cref{sub:interpol_CFs}, tells us the following:
At least some of the positive interpolatory CFs predicted by Tchakaloff's theorem are supported on a sufficiently large set of equidistributed points in $\Omega$.
In passing, we shall also mention potential applications of positive and exact LS-CFs to construct Euclidean CFs \cite{trefethen2017cubature,trefethen2017multivariate,bos2018bernstein,trefethen2021exactness} as well as stable finite element methods \cite{glaubitz2020stable}.
\subsection{Advantages and Pitfalls}
The advantage of the positive and exact CFs discussed in the present work lies in their generality and simple construction.
That said, they are neither minimal nor near-minimal.
Hence, if the reader is only interested in a certain standard case for which efficient (near-)minimal CFs are readily available, it is certainly advantageous to use these.
The positive and exact CFs presented here, on the other hand, arguably find their greatest utility when a CF is desired for a non-standard domain, weight function, or function space.
They might also be of advantage when one is given a fixed and prescribed set of (scattered) data points, which is often the case in applications.
Finally, for reasons of computational efficiency, in general, we only recommend using LS-CFs in moderate dimensions (usually $d=1,2,3$).
In higher dimensions, they---like many other CFs---might fall victim to the curse of dimensionality \cite{bellman1966dynamic,kuo2005lifting}.
Also see \cref{sec:error} for more details.
Of course, this might depend on the underlying function space $\mathcal{F}_K(\Omega)$.
In fact, it would be an interesting topic of future research to investigate if the choice of the function space $\mathcal{F}_K(\Omega)$ (e.\,g.\ using separable (low-rank) \cite{beylkin2009multivariate,chevreuil2015least} or sparse approximations \cite{chkifa2015breaking,cohen2015approximation,adcock2017compressed}) can remedy decreasing efficiency in high dimensions.
\subsection{Outline}
The rest of this work is organized as follows.
In \cref{sec:prelim}, we collect some preliminaries, in particular, on unisolvent and equidistributed sequences.
Next, \cref{sec:LS} contains our main theoretical results; i.\,e., exactness and conditional positivity of LS-CFs are proved.
In \cref{sec:error}, an error analysis for these CFs is provided.
Two specific applications of the provable positive and exact LS-CFs are discussed in \cref{sec:applications}.
These include the simple construction of stable high-order sequences of CFs (\cref{sub:const_procedure}) and positive interpolatory CFs (\cref{sub:interpol_CFs}).
Numerical experiments are presented in \cref{sec:numerical} and some concluding thoughts are offered in \cref{sec:summary}.
\subsection{Open Access Code}
The MATLAB code used to generate the numerical tests presented here is open access and can be found at \cite{glaubitz2021github}.
\section{Preliminaries: Unisolvent and Equidistributed Sequences}
\label{sec:prelim}
Here, we shall provide a few preliminary results.
In particular, these address the connection between the exactness of CFs and unisolvent and equidistributed sequences.
\subsection{Exactness and Unisolvency}
\label{sub:prelim_unisolvent}
It is easy to note that the exactness condition \cref{eq:exactness} is equivalent to the data points ${X_N = \{ \mathbf{x}_n\}_{n=1}^N}$ and weights of a CF solving the nonlinear system
\begin{equation}\label{eq:ex-nonlin-system}
\underbrace{
\begin{pmatrix}
\varphi_1(\mathbf{x}_1) & \dots & \varphi_1(\mathbf{x}_N) \\
\vdots & & \vdots \\
\varphi_K(\mathbf{x}_1) & \dots & \varphi_K(\mathbf{x}_N)
\end{pmatrix}}_{=: \Phi(X_N)}
\underbrace{
\begin{pmatrix}
w_1 \\ \vdots \\ w_N
\end{pmatrix}}_{=: \mathbf{w}}
=
\underbrace{
\begin{pmatrix}
m_1 \\ \vdots \\ m_K
\end{pmatrix}}_{=: \mathbf{m}}.
\end{equation}
However, solving \cref{eq:ex-nonlin-system} can be highly nontrivial, and doing so by brute force may result in a CF whose points lie outside of the integration domain or whose weights are negative \cite{haber1970numerical}.
That said, the situation changes if a fixed set of data points is considered.
Then $\Phi(X_N) = \Phi$, and \cref{eq:ex-nonlin-system} becomes a linear system:
\begin{equation}\label{eq:ex-system}
\Phi \mathbf{w} = \mathbf{m}.
\end{equation}
Note that for $K < N$ this becomes an underdetermined linear system.
Such systems are well known to have either no solution or infinitely many solutions.
The latter case arises when we restrict ourselves to unisolvent sets of data points.
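For illustration, the exactness conditions \cref{eq:ex-system} can be assembled and solved with a few lines of code. The following Python sketch uses the hypothetical choices $\Omega = [-1,1]$, $\omega \equiv 1$, and $\mathcal{F}_K(\Omega) = \operatorname{span}\{1, x, x^2\}$ with $N = 5$ equally spaced data points; these choices are for demonstration only and are not prescribed by the text.

```python
import numpy as np

# A minimal sketch of the exactness conditions Phi w = m, assuming the
# hypothetical setting Omega = [-1, 1], omega = 1, F_K = span{1, x, x^2}
# (K = 3), and N = 5 fixed, equally spaced data points.
phi = [lambda x: np.ones_like(x), lambda x: x, lambda x: x**2]
m = np.array([2.0, 0.0, 2.0 / 3.0])    # moments int_{-1}^{1} x^k dx, k = 0, 1, 2

x = np.linspace(-1.0, 1.0, 5)          # data points X_N
Phi = np.array([f(x) for f in phi])    # K x N matrix Phi(X_N)

# Any w with Phi w = m yields an F_K-exact CF; lstsq picks the minimal-norm one.
w = np.linalg.lstsq(Phi, m, rcond=None)[0]
assert np.allclose(Phi @ w, m)         # the exactness conditions hold
```

Since $K = 3 < N = 5$, the system is underdetermined and `lstsq` returns one of the infinitely many solutions (the minimal Euclidean-norm one).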
\begin{definition}[Unisolvent Point Sets]
A set of points $X_N = \{\mathbf{x}_n\}_{n=1}^N \subset \R^d$ is called \emph{$\mathcal{F}_K(\Omega)$-unisolvent} if
\begin{equation}
f(\mathbf{x}_n) = 0, \ n=1,\dots,N \implies f(\boldsymbol{x}) = 0, \ \forall \boldsymbol{x} \in \Omega
\end{equation}
holds for all $f \in \mathcal{F}_K(\Omega)$.
\end{definition}
That is, a set of points is $\mathcal{F}_K(\Omega)$-unisolvent if the only function $f \in \mathcal{F}_K(\Omega)$ that interpolates zero data at these points is the zero function.
Assuming that the set of data points $X_N$ is $\mathcal{F}_K(\Omega)$-unisolvent, we obtain the following result.
\begin{lemma}\label{lem:solution-space}
Let $K < N$ and $X_N = \{\mathbf{x}_n\}_{n=1}^N$ be $\mathcal{F}_K(\Omega)$-unisolvent.
Then, the linear system \cref{eq:ex-system} induces an $(N-K)$-dimensional
affine linear subspace of solutions,
\begin{equation}\label{eq:sol-space}
W := \left\{ \mathbf{w} \in \R^N \mid \Phi\mathbf{w}=\mathbf{m} \right\}.
\end{equation}
\end{lemma}
\begin{proof}
The case $\mathcal{F}_K(\Omega) = \mathbb{P}_m(\R^d)$ was shown in \cite{glaubitz2020stableCFs}.
It is easy to verify that the same arguments carry over to the general case discussed here.
\end{proof}
Next, a simple sufficient criterion for $X_N \subset \Omega$ to be $\mathcal{F}_K(\Omega)$-unisolvent is provided.
The criterion is based on sequences that are dense in $\Omega \subset \R^d$.
Recall that $(\mathbf{x}_n)_{n \in \mathbb{N}} \subset \R^d$ is called \emph{dense in $\Omega$} if
\begin{equation}\label{eq:dense}
\forall \boldsymbol{x} \in \Omega \ \, \forall \varepsilon > 0 \ \, \exists n \in \mathbb{N}: \quad
\norm{ \boldsymbol{x} - \mathbf{x}_n } < \varepsilon.
\end{equation}
That is, every point in $\Omega$ can be approximated arbitrarily accurately by an element of the sequence $(\mathbf{x}_n)_{n \in \mathbb{N}}$.\footnote{Note that in the context of the present work the definition \cref{eq:dense} is independent of the norm since $\Omega$ is located in a finite-dimensional space.}
\begin{lemma}\label{lem:unisolvent}
Let $(\mathbf{x}_n)_{n \in \mathbb{N}} \subset \Omega$ be dense in $\Omega$ and let $X_N = \{ \mathbf{x}_n \}_{n=1}^N$.
Moreover, let $\mathcal{F}_K(\Omega)$ be spanned by continuous functions $\varphi_1,\dots,\varphi_K:\Omega \to \R$.
In this case, there exists an $N_0 \in \mathbb{N}$ such that $X_N$ is $\mathcal{F}_K(\Omega)$-unisolvent for every $N \geq N_0$.
\end{lemma}
\begin{proof}
Assume that the assertion is wrong.
Then, there exists an $f \in \mathcal{F}_K(\Omega)$ with $f \not\equiv 0$ such that ${f(\mathbf{x}_n) = 0}$ for all $n \in \mathbb{N}$.
Yet, since $(\mathbf{x}_n)_{n \in \mathbb{N}}$ is dense in $\Omega$, this either contradicts $f \not\equiv 0$ or $f$ being continuous.
\end{proof}
If there exists an $N_0 \in \mathbb{N}$ such that $X_N = \{ \mathbf{x}_n \}_{n=1}^N$ is $\mathcal{F}_K(\Omega)$-unisolvent for every $N \geq N_0$, as in \cref{lem:unisolvent}, $(\mathbf{x}_n)_{n \in \mathbb{N}}$ is referred to as an \emph{$\mathcal{F}_K(\Omega)$-unisolvent} sequence.
Thus, \cref{lem:unisolvent} states that every dense sequence is also $\mathcal{F}_K(\Omega)$-unisolvent.
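Unisolvency can also be checked numerically: writing $f = \sum_k c_k \varphi_k$, the condition $f(\mathbf{x}_n) = 0$ for all $n$ reads $\Phi(X_N)^T \mathbf{c} = \mathbf{0}$, so $X_N$ is $\mathcal{F}_K(\Omega)$-unisolvent precisely when the $K \times N$ matrix $\Phi(X_N)$ has full row rank $K$. A small Python sketch, for the hypothetical choice $\mathcal{F}_K(\Omega) = \mathbb{P}_2(\R)$ restricted to $[-1,1]$:

```python
import numpy as np

# Numerical unisolvency check (a sketch): X_N is F_K-unisolvent iff
# Phi(X_N)^T c = 0 forces c = 0, i.e., iff the K x N matrix Phi(X_N)
# has full row rank K.  Here F_K = P_2 restricted to [-1, 1].
x = np.linspace(-1.0, 1.0, 5)
Phi = np.vander(x, 3, increasing=True).T       # rows: 1, x, x^2 at the nodes
assert np.linalg.matrix_rank(Phi) == 3         # X_5 is P_2-unisolvent

# Two points can never be P_2-unisolvent: a nonzero parabola vanishes there.
Phi2 = np.vander(np.array([-0.5, 0.5]), 3, increasing=True).T
assert np.linalg.matrix_rank(Phi2) < 3
```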
\subsection{Equidistributed Sequences}
\label{sub:prelim_equidistributed}
We just saw that density is a sufficient condition for unisolvency.
This will be handy for the subsequent construction of provable positive and exact LS-CFs.
Another important property will be for the sequence of data points $(\mathbf{x}_n)_{n \in \mathbb{N}} \subset \Omega$ to satisfy
\begin{equation}\label{eq:equi}
\lim_{N \to \infty} \frac{|\Omega|}{N} \sum_{n=1}^N g(\mathbf{x}_n)
= \int_\Omega g(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
\end{equation}
for all measurable bounded functions $g:\Omega \to \R$ that are continuous almost everywhere (in the sense of Lebesgue).
Here, $|\Omega|$ denotes the $d$-dimensional volume of $\Omega$.
It is easy to note that $(\mathbf{x}_n)_{n \in \mathbb{N}}$ being dense in $\Omega$ is not sufficient for \cref{eq:equi} to hold.\footnote{However, it is interesting to note that every dense sequence can be rearranged into an (equidistributed) sequence satisfying \cref{eq:equi}.}
Yet, in his celebrated work \cite{weyl1916gleichverteilung} from 1916 Weyl showed that \cref{eq:equi} can be connected to $(\mathbf{x}_n)_{n \in \mathbb{N}}$ being \emph{equidistributed} (also called \emph{uniformly distributed}).
We shall recall that there are many well-known equidistributed sequences for $d$-dimensional hypercubes with radius $R$, denoted by $C_R^{(d)} = [-R,R]^d$.
These include (i) grids of equally spaced points with an appropriate ordering and (ii) low-discrepancy sequences.
The latter were developed to minimize the upper bound provided by the famous Koksma--Hlawka inequality \cite{hlawka1961funktionen,niederreiter1992random}, used in QMC methods \cite{caflisch1998monte,dick2013high,trefethen2017cubature}.
A special case of low-discrepancy sequences are the Halton points \cite{halton1960efficiency}, which are a generalization of the one-dimensional van der Corput points; see, for example, \cite[Erste Mitteilung]{van1935verteilungsfunktionen}.
To not exceed the scope of this work, we refer to the monograph \cite{kuipers2012uniform} for more details on equidistributed sequences.
That said, we shall demonstrate how equidistributed sequences can be constructed for general bounded domains with a boundary of measure zero.
\begin{remark}[Construction of Equidistributed Sequences for General Domains]\label{rem:eq-points}
Let ${\Omega \subset \R^d}$ be bounded with a boundary of measure zero.
Then we can find an $R > 0$ such that $\Omega$ is contained in the hypercube $C_R^{(d)}$.
Now let $(\mathbf{y}_n)_{n \in \mathbb{N}}$ be an equidistributed sequence in such a hypercube, e.\,g., the Halton points.
Then an equidistributed sequence in $\Omega$, $(\mathbf{x}_n)_{n \in \mathbb{N}}$, is given by the subsequence of $(\mathbf{y}_n)_{n \in \mathbb{N}}$ for which all elements outside of $\Omega$ have been removed:
\begin{equation}
\mathbf{y}_n \in (\mathbf{x}_n)_{n \in \mathbb{N}} \iff \mathbf{y}_n \in \Omega.
\end{equation}
Let us briefly verify that the resulting subsequence $(\mathbf{x}_n)_{n \in \mathbb{N}}$ is an equidistributed sequence in $\Omega$.
To this end, let $g:\Omega \to \R$ be a measurable bounded function that is continuous almost everywhere.
We can extend $g$ to the hypercube $C_R^{(d)}$ by setting $g$ equal to zero outside of $\Omega$.
This extension, denoted by $\tilde{g}: C_R^{(d)} \to \R$, will still be measurable, bounded, and continuous almost everywhere.
The latter follows from $\Omega$ having a boundary of measure zero.
Thus, by construction of $(\mathbf{x}_n)_{n \in \mathbb{N}}$, we get
\begin{eq}\label{eq:const_eq}
\lim_{N \to \infty} \frac{|\Omega|}{N} \sum_{n=1}^N g(\mathbf{x}_n)
= \lim_{M \to \infty} \frac{| C_R^{(d)} |}{M} \sum_{n=1}^M \tilde{g}(\mathbf{y}_n)
= \int_{C_R^{(d)}} \tilde{g}(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
= \int_{\Omega} g(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}.
\end{eq}
Here, $M$ is the unique integer such that $\mathbf{x}_N = \mathbf{y}_M$.\footnote{The elements of $(\mathbf{y}_n)_{n \in \mathbb{N}}$ are assumed to be distinct.}
Furthermore, the first equality in \cref{eq:const_eq} essentially follows from the equidistributed sequence $(\mathbf{y}_n)_{n \in \mathbb{N}}$ satisfying
\begin{eq}
\lim_{M \to \infty} \frac{ | \{\mathbf{y}_1,\dots,\mathbf{y}_M\} \cap \Omega | }{M} = \frac{|\Omega|}{|C_R^{(d)}|}.
\end{eq}
Note that $| \{\mathbf{y}_1,\dots,\mathbf{y}_M\} \cap \Omega | = N$.
Again, we refer to \cite{kuipers2012uniform} for more details.
\end{remark}
Finally, it should be stressed that equidistributed sequences are dense sequences with a specific ordering.
This preliminary section is therefore closed by summarizing the above findings in the following corollary.
\begin{corollary}\label{cor:equid}
Let $\Omega \subset \R^d$ be bounded with a boundary of measure zero.
Furthermore, let $(\mathbf{y}_n)_{n \in \mathbb{N}}$ be an equidistributed sequence in the hypercube $C_R^{(d)}$, where $\Omega \subset C_R^{(d)}$, and let $(\mathbf{x}_n)_{n \in \mathbb{N}}$ be the subsequence of $(\mathbf{y}_n)_{n \in \mathbb{N}}$ that only contains the elements in $\Omega$.
Then $(\mathbf{x}_n)_{n \in \mathbb{N}}$ is equidistributed in $\Omega$.
In particular, it satisfies \cref{eq:equi} and is $\mathcal{F}_K(\Omega)$-unisolvent.
\end{corollary}
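The construction of \cref{rem:eq-points} is straightforward to implement. The following Python sketch generates two-dimensional Halton points in $C_1^{(2)} = [-1,1]^2$ (via van der Corput sequences in bases $2$ and $3$) and subsamples them to the unit disk, a hypothetical choice of $\Omega$; the sanity checks compare against $|\Omega| = \pi$ and $\int_\Omega x_1^2 \, \mathrm{d}\boldsymbol{x} = \pi/4$.

```python
import numpy as np

def van_der_corput(n, base):
    """n-th element of the van der Corput sequence in the given base."""
    q, bk = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        q += r * bk
        bk /= base
    return q

def halton(M):
    """First M two-dimensional Halton points in [0, 1]^2 (bases 2 and 3)."""
    return np.array([[van_der_corput(n, 2), van_der_corput(n, 3)]
                     for n in range(1, M + 1)])

# Equidistributed points in Omega = unit disk, obtained by subsampling Halton
# points in the enclosing square C_1^(2) = [-1, 1]^2, as in the remark above.
Y = 2.0 * halton(4000) - 1.0
X = Y[np.sum(Y**2, axis=1) <= 1.0]

# Sanity checks: the fraction of kept points approaches |Omega| / |C_1^(2)|
# = pi / 4, and (|Omega| / N) sum_n g(x_n) approaches int_Omega g for g = x_1^2.
assert abs(len(X) / 4000 - np.pi / 4.0) < 0.05
est = (np.pi / len(X)) * np.sum(X[:, 0] ** 2)
assert abs(est - np.pi / 4.0) < 0.05
```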
\section{Provable Positive and Exact Least Squares Cubature Formulas}
\label{sec:LS}
In this section, it is demonstrated how provable positive and $\mathcal{F}_K(\Omega)$-exact LS-CFs can be constructed by using a sufficiently large set of equidistributed data points.
This is done by generalizing the LS approach from previous works \cite{wilson1970necessary,wilson1970discrete,huybrechs2009stable,glaubitz2020shock,glaubitz2020stableQRs,glaubitz2020stableCFs}.
Notably, the procedure relies only on basic operations from linear algebra, such as determining an LS solution of an underdetermined linear system.
\subsection{Formulation as a Least Squares Problem}
\label{sub:LS-problem}
Let $(\mathbf{x}_n)_{n \in \mathbb{N}}$ be an $\mathcal{F}_K(\Omega)$-unisolvent sequence in $\Omega$ and let $X_N = \{\mathbf{x}_n\}_{n=1}^N$.
As noted before, an $\mathcal{F}_K(\Omega)$-exact CF with data points $X_N$ can be constructed by determining a vector of cubature weights that solves the linear system of exactness conditions \cref{eq:ex-system}.
For $N > K$, \cref{eq:ex-system} induces an $(N-K)$-dimensional affine linear subspace of solutions $W \subset \R^N$.
All of these yield an $\mathcal{F}_K(\Omega)$-exact CF.
The LS approach consists of finding the unique solution $\mathbf{w} \in W$ that minimizes a weighted Euclidean norm:
\begin{equation}\label{eq:LS-sol}
\mathbf{w}^{\mathrm{LS}} = \argmin_{\mathbf{w} \in W} \ \norm{ R^{-1/2} \mathbf{w} }_{2}.
\end{equation}
This vector is called the \emph{LS solution} of $\Phi \mathbf{w} = \mathbf{m}$.
Here, $R^{-1/2}$ is a diagonal \emph{weight matrix}, given by
\begin{equation}
R^{-1/2} = \diag\left( 1/\sqrt{r_1}, \dots, 1/\sqrt{r_N} \right), \quad
r_n > 0, \quad n=1,\dots,N.
\end{equation}
If $r_n = 0$ is allowed (e.\,g., when $\omega(\mathbf{x}_n) = 0$), this is interpreted as the constraint $w_n = 0$.
The corresponding $N$-point CF
\begin{equation}\label{eq:LS-CF}
C^{\mathrm{LS}}_N[f] = \sum_{n=1}^N w_n^{\mathrm{LS}} f(\mathbf{x}_n)
\end{equation}
is called an \emph{LS-CF}.
In \cref{sub:positivity} it is shown that choosing the \emph{discrete weights} $r_n$ as
\begin{equation}
r_n = \frac{\omega(\mathbf{x}_n) |\Omega|}{N}, \quad n=1,\dots,N,
\end{equation}
results in the LS-CFs to be $\mathcal{F}_K(\Omega)$-exact and positive if $N$ is sufficiently large.
The above choice for the discrete weights is made for technical reasons and might seem to be artificial.
That said, it is discussed in \cref{rem:MC_correction} that the corresponding LS-CFs can be interpreted as high-order corrections of MC and QMC methods.
Finally, it is convenient to note that---at least formally---the LS solution is explicitly given by \cite{cline1976l_2}
\begin{equation}\label{eq:LS-sol2}
\mathbf{w}^{\mathrm{LS}} = R \Phi^T (\Phi R \Phi^T)^{-1} \mathbf{m}.
\end{equation}
Here, $R^{1/2} \Phi^T (\Phi R \Phi^T)^{-1}$ is the Moore--Penrose pseudoinverse of $\Phi R^{1/2}$; see \cite{ben2003generalized}.
In \cref{sub:char}, \cref{eq:LS-sol2} will be simplified by utilizing discrete orthonormal bases.
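For concreteness, the closed form \cref{eq:LS-sol2} can be evaluated directly. The Python sketch below does so in the hypothetical setting $\Omega = [-1,1]$, $\omega \equiv 1$, $\mathcal{F}_K(\Omega) = \mathbb{P}_3$ spanned by monomials, with $N = 50$ equally spaced data points and the discrete weights $r_n = |\Omega|\,\omega(\mathbf{x}_n)/N$ advocated above; for this $N$ the resulting weights are indeed all positive, in line with \cref{sub:positivity}.

```python
import numpy as np

# Direct evaluation of w^LS = R Phi^T (Phi R Phi^T)^{-1} m (a sketch), assuming
# Omega = [-1, 1], omega = 1, F_K = P_3 spanned by monomials, N = 50 equally
# spaced points, and the discrete weights r_n = |Omega| omega(x_n) / N = 2 / N.
N, K = 50, 4
x = np.linspace(-1.0, 1.0, N)
Phi = np.vander(x, K, increasing=True).T                        # K x N
m = np.array([(1.0 - (-1.0) ** (k + 1)) / (k + 1) for k in range(K)])
R = np.diag(np.full(N, 2.0 / N))

w_ls = R @ Phi.T @ np.linalg.solve(Phi @ R @ Phi.T, m)

assert np.allclose(Phi @ w_ls, m)      # the LS-CF is F_K-exact
assert np.all(w_ls > 0)                # and positive for this (large enough) N
```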
\subsection{Continuous and Discrete Orthonormal Bases}
\label{sub:ON-bases}
Under the restrictions \ref{item:restr_domain}--\ref{item:restr_space},
\begin{equation}\label{eq:cont-inner-prod}
\scp{u}{v} = \int_{\Omega} u(\boldsymbol{x}) v(\boldsymbol{x}) \omega(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}, \quad
\norm{u} = \sqrt{\scp{u}{u}}
\end{equation}
defines an inner product and a corresponding norm on $\mathcal{F}_K(\Omega)$.
In particular, \cref{eq:cont-inner-prod} induces an orthonormal basis $\{ \pi_k \}_{k=1}^K$ of $\mathcal{F}_K(\Omega)$.
That is, the functions $\pi_1,\dots,\pi_K$ span the space $\mathcal{F}_K(\Omega)$ while satisfying
\begin{equation}
\scp{\pi_k}{\pi_l} = \delta_{k,l} :=
\begin{cases}
1 &; \ k = l, \\
0 &; \ k \neq l,
\end{cases}
\quad k,l = 1,\dots,K.
\end{equation}
Henceforth, we refer to such a basis as a \emph{continuous orthonormal basis} and its elements are denoted by $\pi_k(\cdot;\omega)$.
Analogously, assuming that ${X_N^+ = \{ \, \mathbf{x}_n \mid \omega(\mathbf{x}_n) > 0, \ n=1,\dots,N \, \}}$ is $\mathcal{F}_K(\Omega)$-unisolvent,
\begin{equation}\label{eq:disc-inner-prod}
[u,v]_N = \sum_{n=1}^N r_n u(\mathbf{x}_n) v(\mathbf{x}_n), \quad
\norm{u}_N = \sqrt{[u,u]_N}
\end{equation}
defines a discrete inner product and a corresponding norm on $\mathcal{F}_K(\Omega)$.
Also \cref{eq:disc-inner-prod} induces an orthonormal basis.
This basis satisfies
\begin{equation}
[\pi_k,\pi_l]_N = \delta_{k,l}, \quad k,l = 1,\dots,K,
\end{equation}
while spanning $\mathcal{F}_K(\Omega)$.
It is therefore referred to as a \emph{discrete orthonormal basis} and its elements are denoted by $\pi_k(\cdot;\mathbf{r})$.
Both bases can be constructed, for instance, by Gram--Schmidt orthonormalization applied to the same initial basis $\{\varphi_k\}_{k=1}^K$.
That is, the orthonormal bases are respectively given by
\begin{equation}\label{eq:Gram-Schmidt}
\begin{aligned}
\tilde{\pi}_k(\boldsymbol{x};\omega) & = \varphi_k(\boldsymbol{x}) - \sum_{l=1}^{k-1} \scp{\varphi_k}{\pi_l(\cdot;\omega)} \pi_l(\boldsymbol{x};\omega), \quad
&& \pi_k(\boldsymbol{x};\omega) = \frac{\tilde{\pi}_k(\boldsymbol{x};\omega)}{\norm{\tilde{\pi}_k(\cdot;\omega)}}, \\
\tilde{\pi}_k(\boldsymbol{x};\mathbf{r}) & = \varphi_k(\boldsymbol{x}) - \sum_{l=1}^{k-1} [\varphi_k,\pi_l(\cdot;\mathbf{r})]_N \pi_l(\boldsymbol{x};\mathbf{r}), \quad
&& \pi_k(\boldsymbol{x};\mathbf{r}) = \frac{\tilde{\pi}_k(\boldsymbol{x};\mathbf{r})}{\norm{\tilde{\pi}_k(\cdot;\mathbf{r})}_N},
\end{aligned}
\end{equation}
for $\boldsymbol{x} \in \Omega$.
See \cite{gautschi1997numerical,trefethen1997numerical,gautschi2004orthogonal} (or \cite{glaubitz2020shock,glaubitz2020stableCFs}).
It shall be assumed that the vector space $\mathcal{F}_K(\Omega)$ contains constants; see \ref{item:restr_space}.
Hence, w.\,l.\,o.\,g., $\varphi_1 \equiv 1$ and therefore
\begin{equation}
\pi_1(\cdot;\omega) \equiv \frac{1}{\norm{1}}, \quad
\pi_1(\cdot;\mathbf{r}) \equiv \frac{1}{\norm{1}_N}.
\end{equation}
This will be convenient to prove that LS-CFs are positive.
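Both orthonormalizations in \cref{eq:Gram-Schmidt} amount to only a few lines of code. The following Python sketch performs the discrete variant w.\,r.\,t.\ $[\cdot,\cdot]_N$ for the (hypothetical) monomial basis on equally spaced points in $[-1,1]$ and verifies the orthonormality $[\pi_k,\pi_l]_N = \delta_{k,l}$ as well as $\pi_1(\cdot;\mathbf{r}) \equiv 1/\|1\|_N$.

```python
import numpy as np

# Discrete Gram-Schmidt w.r.t. [u, v]_N = sum_n r_n u(x_n) v(x_n), applied to
# the monomials {1, x, x^2} on equally spaced points in [-1, 1] (a sketch).
N, K = 20, 3
x = np.linspace(-1.0, 1.0, N)
r = np.full(N, 2.0 / N)                     # r_n = |Omega| omega(x_n) / N

V = np.vander(x, K, increasing=True).T      # row k: varphi_k at the nodes
P = np.empty_like(V)                        # row k: pi_k(. ; r) at the nodes
for k in range(K):
    p = V[k].copy()
    for l in range(k):
        p -= np.sum(r * V[k] * P[l]) * P[l]     # subtract [varphi_k, pi_l]_N pi_l
    P[k] = p / np.sqrt(np.sum(r * p * p))       # normalize w.r.t. ||.||_N

assert np.allclose(P @ np.diag(r) @ P.T, np.eye(K))   # [pi_k, pi_l]_N = delta_kl
assert np.allclose(P[0], 1.0 / np.sqrt(2.0))          # pi_1 = 1 / ||1||_N
```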
\begin{remark}
Note that we only utilize Gram--Schmidt orthonormalization for theoretical purposes.
In our implementation, the LS solution $\mathbf{w}^{\mathrm{LS}}$ is computed using the MATLAB function \emph{lsqminnorm}.
This function uses a pivoted QR decomposition of $A = \Phi R^{1/2}$; see \cite{trefethen1997numerical,golub2012matrix,horn2012matrix,strang2019linear}.
The cost for this is $\mathcal{O}(N K^2)$.
While we have not tested this in our implementation, it should still be noted that alternatively iterative solvers, such as the preconditioned conjugate gradient (PCG), might be used to reduce the costs to $\mathcal{O}(N K)$.
That said, such methods usually rely on sparsity to be efficient and can be more prone to numerical inaccuracies.
\end{remark}
\subsection{Characterization of the Least Squares Solution}
\label{sub:char}
Observe that the matrix product $\Phi R \Phi^T$ in the explicit representation of the LS solution \cref{eq:LS-sol2} can be interpreted as a Gram matrix w.\,r.\,t.\ the discrete inner product \cref{eq:disc-inner-prod}:
\begin{equation}
\Phi R \Phi^T =
\begin{pmatrix}
[\varphi_1,\varphi_1]_N & \dots &
[\varphi_1,\varphi_K]_N \\
\vdots & & \vdots \\
[\varphi_K,\varphi_1]_N & \dots &
[\varphi_K,\varphi_K]_N \\
\end{pmatrix}
\end{equation}
Thus, if the linear system \cref{eq:ex-system} is formulated w.\,r.\,t.\ the discrete orthonormal basis $\{\pi_k(\cdot ; \mathbf{r})\}_{k=1}^K$, one gets $\Phi R \Phi^T = I$.
In this case, \cref{eq:LS-sol2} reduces to
\begin{equation}\label{eq:LS-sol3}
\mathbf{w}^{\mathrm{LS}} = R \Phi^T \mathbf{m}.
\end{equation}
In particular, the LS weights are explicitly given by
\begin{equation}\label{eq:LS-sol-explicit}
w_n^{\mathrm{LS}} = r_n \sum_{k=1}^K \pi_k( \mathbf{x}_n ; \mathbf{r}) I[ \pi_k( \, \cdot \, ; \mathbf{r} ) ], \quad n=1,\dots,N.
\end{equation}
This representation is another important ingredient to prove that the LS-CFs are conditionally positive.
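The representation \cref{eq:LS-sol-explicit} can be verified numerically by tracking the Gram--Schmidt coefficients: writing $\pi_k = \sum_j A_{kj} \varphi_j$ gives $I[\pi_k(\cdot;\mathbf{r})] = \sum_j A_{kj} m_j$ by linearity. In the Python sketch below (hypothetical setting: $\Omega=[-1,1]$, $\omega \equiv 1$, monomial basis), the weights computed via \cref{eq:LS-sol-explicit} coincide with the minimal-norm solution \cref{eq:LS-sol2}.

```python
import numpy as np

# Verify w_n^LS = r_n sum_k pi_k(x_n; r) I[pi_k(. ; r)] against eq. (LS-sol2)
# in the hypothetical setting Omega = [-1, 1], omega = 1, F_K = span{1, x, x^2}.
N, K = 30, 3
x = np.linspace(-1.0, 1.0, N)
r = np.full(N, 2.0 / N)
Phi = np.vander(x, K, increasing=True).T
m = np.array([2.0, 0.0, 2.0 / 3.0])             # monomial moments

# Discrete Gram-Schmidt, tracking coefficients A with pi_k = sum_j A[k, j] varphi_j,
# so that I[pi_k(. ; r)] = (A m)_k by linearity of I.
A = np.eye(K)
P = Phi.astype(float).copy()
for k in range(K):
    for l in range(k):
        c = np.sum(r * P[k] * P[l])
        P[k] -= c * P[l]
        A[k] -= c * A[l]
    nrm = np.sqrt(np.sum(r * P[k] ** 2))
    P[k] /= nrm
    A[k] /= nrm

w_explicit = r * (P.T @ (A @ m))                # eq. (LS-sol-explicit)
w_minnorm = np.diag(r) @ Phi.T @ np.linalg.solve(Phi @ np.diag(r) @ Phi.T, m)
assert np.allclose(w_explicit, w_minnorm)
```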
\subsection{Positivity of Least Squares Cubature Formulas}
\label{sub:positivity}
First, two technical lemmas are presented.
Together they enable us to show that the LS weights are all positive if a sufficiently large number of equidistributed data points is used.
\begin{lemma}\label{lem:1}
Let $\Omega \subset \R^d$ be bounded and assume that
\begin{equation}\label{eq:cond1}
\lim_{N \to \infty} [u,v]_N = \scp{u}{v} \quad \forall u,v \in \mathcal{F}_K(\Omega).
\end{equation}
Furthermore, let $(u_N)_{N \in \mathbb{N}}, (v_N)_{N \in \mathbb{N}} \subset \mathcal{F}_K(\Omega)$ and $u, v \in \mathcal{F}_K(\Omega)$ such that
\begin{equation}\label{eq:cond2}
\lim_{N \to \infty} u_N = u, \quad
\lim_{N \to \infty} v_N = v \quad
\text{in } \ ( \mathcal{F}_K(\Omega), \| \cdot \|_{L^\infty(\Omega)} ),
\end{equation}
where $u,v: \Omega \to \R$ are assumed to be bounded.
Then,
\begin{equation}
\lim_{N \to \infty} [u_N,v_N]_N = \scp{u}{v}.
\end{equation}
\end{lemma}
Recall that $\scp{\cdot}{\cdot}$ and $[\cdot,\cdot]_N$ denote the continuous and discrete inner products \cref{eq:cont-inner-prod} and \cref{eq:disc-inner-prod}, respectively.
The corresponding norms are denoted by $\|\cdot\|$ and $\|\cdot\|_N$.
Moreover, $\| \cdot \|_{L^\infty(\Omega)}$ is the usual supremum norm with $\norm{f}_{L^\infty(\Omega)} = \sup_{\boldsymbol{x} \in \Omega} |f(\boldsymbol{x})|$.
\begin{proof}
We start by noting that
\begin{equation}
\begin{aligned}
\left| \scp{u}{v} - [u_N,v_N]_N \right|
\leq & \left| \scp{u}{v} - [u,v]_N \right| + \left| [u,v]_N - [u_N,v]_N \right| \\
& + \left| [u_N,v]_N - [u_N,v_N]_N \right|.
\end{aligned}
\end{equation}
The first term on the right-hand side converges to zero due to \cref{eq:cond1}.
For the second term, the Cauchy--Schwarz inequality gives
\begin{equation}
\left| [u,v]_N - [u_N,v]_N \right|^2
= \left| [u-u_N,v]_N \right|^2
\leq \norm{ u-u_N }_N^2 \norm{ v }_N^2.
\end{equation}
Furthermore, \cref{eq:cond1} implies $\| v \|_N^2 \to \| v \|^2$ for $N \to \infty$.
Finally, the H\"older inequality and \cref{eq:cond2} yield
\begin{equation}
\norm{ u-u_N }_N^2 \leq \norm{ 1 }_N^2 \norm{ u-u_N }_{L^\infty(\Omega)}^2 \to 0,
\quad N \to \infty.
\end{equation}
Thus, the second term converges to zero as well.
A similar argument can be used to show that the third term converges to zero.
\end{proof}
Next, we demonstrate that the discrete orthonormal functions $\pi_k(\cdot,\mathbf{r})$ converge uniformly to the continuous orthonormal functions $\pi_k(\cdot,\omega)$ if the discrete inner product converges to the continuous one for all elements of $\mathcal{F}_K(\Omega)$.
\begin{lemma}\label{lem:2}
Let $\Omega \subset \R^d$ be bounded, let $\{ \varphi_k \}_{k=1}^K$ be a basis of $\mathcal{F}_K(\Omega)$ consisting of continuous and bounded functions, and assume that \cref{eq:cond1} holds.
Moreover, let $\{ \pi_k(\cdot,\omega) \}_{k=1}^K$ and $\{ \pi_k(\cdot,\mathbf{r}) \}_{k=1}^K$ respectively denote the continuous and discrete orthonormal bases constructed from $\{ \varphi_k \}_{k=1}^K$ by Gram--Schmidt orthonormalization \cref{eq:Gram-Schmidt}.
Then,
\begin{equation}
\lim_{N \to \infty} \pi_k(\cdot,\mathbf{r}) = \pi_k(\cdot,\omega) \quad
\text{in } \ ( \mathcal{F}_K(\Omega), \|\cdot\|_{L^\infty(\Omega)} ).
\end{equation}
\end{lemma}
Note that $\mathbf{r} = (r_1,\dots,r_N)$ and the discrete orthonormal basis therefore depends on $N \in \mathbb{N}$.
\begin{proof}
The assertion is proven by induction.
For $k=1$, the assertion is trivial and essentially follows from ${\|1\|_N \to \|1\|}$ for ${N \to \infty}$.
%
Next, it is argued that if the assertion holds for the first $k-1$ orthonormal basis functions, then it also holds for the $k$-th one.
To this end, let $l \in \{1,2,\dots,k-1\}$ and assume that
\begin{equation}
\pi_l(\cdot;\mathbf{r}) \to \pi_l(\cdot;\omega) \ \text{ in } \ ( \mathcal{F}_K(\Omega), \|\cdot\|_{L^\infty(\Omega)} ), \quad N \to \infty.
\end{equation}
Recall that by Gram--Schmidt orthonormalization, the $k$-th orthonormal basis functions are given by \cref{eq:Gram-Schmidt}.
Hence, \cref{lem:1} implies
\begin{equation}
[\varphi_k,\pi_l(\cdot;\mathbf{r})]_N \to \scp{\varphi_k}{\pi_l(\cdot;\omega)}, \quad
N \to \infty,
\end{equation}
and therefore
\begin{equation}
\tilde{\pi}_k(\cdot;\mathbf{r}) \to \tilde{\pi}_k(\cdot;\omega) \ \text{ in } \ ( \mathcal{F}_K(\Omega), \|\cdot\|_{L^\infty(\Omega)} ),
\quad N \to \infty.
\end{equation}
Here, $\tilde{\pi}_k(\cdot;\mathbf{r})$ and $\tilde{\pi}_k(\cdot;\omega)$ respectively denote the unnormalized basis functions.
\cref{lem:1} yields
${\|\tilde{\pi}_k(\cdot;\mathbf{r})\|_N \to \|\tilde{\pi}_k(\cdot;\omega)\|}$
for $N \to \infty$.
This implies
\begin{equation}
\pi_k(\cdot;\mathbf{r}) \to \pi_k(\cdot;\omega) \ \text{ in } \ ( \mathcal{F}_K(\Omega), \|\cdot\|_{L^\infty(\Omega)} ),
\quad N \to \infty,
\end{equation}
which completes the proof.
\end{proof}
\cref{lem:1} and \cref{lem:2} essentially state that if the discrete inner product converges to the continuous inner product, then also the discrete orthonormal basis functions converge (uniformly) to the corresponding continuous ones.
Finally, this observation enables us to prove the following theorem.
\begin{theorem}[The LS-CF is Conditionally Positive]\label{thm:main}
Given are $\Omega \subset \R^d$, $\omega: \Omega \to \R_0^+$, and $\mathcal{F}_K(\Omega) \subset C(\Omega)$ such that the restrictions \ref{item:restr_weight} and \ref{item:restr_space} are satisfied.
Moreover, let $(\mathbf{x}_n)_{n \in \mathbb{N}} \subset \Omega$ be $\mathcal{F}_K(\Omega)$-unisolvent and $(r_n)_{n \in \mathbb{N}} \subset \R^+$ such that
\begin{equation}\label{eq:cond-thm}
\lim_{N \to \infty} \sum_{n=1}^N r_n u(\mathbf{x}_n) v(\mathbf{x}_n)
= \int_\Omega \omega(\boldsymbol{x}) u(\boldsymbol{x}) v(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
\quad \forall u, v \in \mathcal{F}_K(\Omega).
\end{equation}
Then, there exists an $N_0 \in \mathbb{N}$ such that for all $N \geq N_0$ the corresponding LS-CF \cref{eq:LS-CF} is positive.
\end{theorem}
\begin{proof}
First, it should be noted that $(\mathbf{x}_n)_{n \in \mathbb{N}}$ being $\mathcal{F}_K(\Omega)$-unisolvent ensures the existence of a discrete inner product and therefore of a discrete orthonormal basis.
Let us denote this basis by $\{ \pi_k(\cdot,\mathbf{r}) \}_{k=1}^K$ and the corresponding continuous orthonormal basis by $\{ \pi_k(\cdot,\omega) \}_{k=1}^K$.
W.\,l.\,o.\,g., it can be assumed that both bases are constructed by applying Gram--Schmidt orthonormalization to the same initial basis $\{ \varphi_k \}_{k=1}^K$.
Hence, the LS weights are explicitly given by \cref{eq:LS-sol-explicit} for $N \geq K$.
Next, let
\begin{equation}
\varepsilon_k := [\pi_k(\cdot,\mathbf{r}),1]_N - \scp{\pi_k(\cdot,\mathbf{r})}{1}.
\end{equation}
Then, the LS weights can be rewritten as
\begin{equation}
w_n^{\mathrm{LS}}
= r_n \left( \pi_1(\mathbf{x}_n;\mathbf{r}) [\pi_1(\cdot;\mathbf{r}),1]_N
- \sum_{k=1}^K \varepsilon_k \pi_k(\mathbf{x}_n;\mathbf{r}) \right).
\end{equation}
Note that, w.\,l.\,o.\,g., $\varphi_1 \equiv 1$ and $\pi_1(\cdot,\mathbf{r}) \equiv 1/\|1\|_N$.
This yields
\begin{equation}
\pi_1(\mathbf{x}_n;\mathbf{r}) [\pi_1(\cdot;\mathbf{r}),1]_N = 1.
\end{equation}
Hence, the assertion ($w_n^{\text{LS}} > 0$ for all $n=1,\dots,N$) is equivalent to
\begin{equation}\label{eq:assertion2-thm}
\sum_{k=1}^K \varepsilon_k \pi_k(\mathbf{x}_n;\mathbf{r}) < 1, \quad n=1,\dots,N.
\end{equation}
At the same time, \cref{eq:cond-thm} and \cref{lem:2} imply that every element of the discrete orthonormal basis converges uniformly to the corresponding element of the continuous orthonormal basis.
In particular, for every $k=1,\dots,K$, the sequence ${(\pi_k(\cdot,\mathbf{r}))_{N \in \mathbb{N}} \subset \mathcal{F}_K(\Omega)}$ is uniformly bounded.
Thus, there exists a constant $C > 0$ such that
\begin{equation}
\sum_{k=1}^K \varepsilon_k \pi_k(\mathbf{x}_n;\mathbf{r})
\leq C \sum_{k=1}^K \left| \varepsilon_k \right|.
\end{equation}
Moreover, \cref{lem:1} implies $\lim_{N \to \infty} \varepsilon_k = 0$ for all $k=1,\dots,K$.
Hence, there exists an $N_0 \geq K$ such that
\begin{equation}
\left| \varepsilon_k \right| < \frac{1}{CK}, \quad k=1,\dots,K,
\end{equation}
for $N \geq N_0$.
Finally, this yields \cref{eq:assertion2-thm} and therefore the assertion.
\end{proof}
A simple consequence of \cref{thm:main} is the subsequent corollary in which the special case of equidistributed data points is considered.
\begin{corollary}\label{cor:main}
Given are $\Omega \subset \R^d$, $\omega: \Omega \to \R_0^+$, and $\mathcal{F}_K(\Omega) \subset C(\Omega)$ such that the restrictions \ref{item:restr_weight} and \ref{item:restr_space} are satisfied.
Let $(\mathbf{x}_n)_{n \in \mathbb{N}} \subset \Omega$ be an equidistributed sequence with $\omega(\mathbf{x}_n) > 0$ for all $n \in \mathbb{N}$.
Then, there exists an $N_0 \in \mathbb{N}$ such that for all $N \geq N_0$ and discrete weights
\begin{equation}\label{eq:discr_weights}
r_n = \frac{|\Omega| \omega(\mathbf{x}_n)}{N}, \quad n=1,\dots,N,
\end{equation}
the corresponding LS-CF \cref{eq:LS-CF} is positive and $\mathcal{F}_K(\Omega)$-exact.
\end{corollary}
\begin{proof}
Recall that the equidistributed sequence $(\mathbf{x}_n)_{n \in \mathbb{N}} \subset \Omega$ satisfies \cref{eq:equi} for all measurable bounded functions that are continuous almost everywhere and, by \cref{cor:equid}, is $\mathcal{F}_K(\Omega)$-unisolvent.
In particular, \cref{eq:cond-thm} holds for $(\mathbf{x}_n)_{n \in \mathbb{N}}$ and the discrete weights $(r_n)_{n \in \mathbb{N}}$ defined as in \cref{eq:discr_weights}.
In combination with \ref{item:restr_weight} and \ref{item:restr_space}, \cref{thm:main} therefore implies the assertion.
\end{proof}
Note that restriction \ref{item:restr_domain} is not necessary for \cref{cor:main} to hold.
However, following \cref{rem:eq-points}, \ref{item:restr_domain} ensures the existence---and simple construction---of an equidistributed sequence in $\Omega$.
\section{Convergence and Error Analysis}
\label{sec:error}
Here, we address the convergence of positive and exact LS-CFs.
\subsection{The Lebesgue Inequality and Convergence}
\label{sub:error_Leb}
Assume that the positive LS-CF ${C_N}$ is exact for all functions from the finite-dimensional function space $\mathcal{F}_K(\Omega)$.
Moreover, let $f:\Omega \to \R$ be a continuous function, and let us denote the best approximation of $f$ from $\mathcal{F}_K(\Omega)$ w.\,r.\,t.\ the $L^\infty(\Omega)$-norm by $\hat{s}$ (assuming that it exists).
That is,
\begin{equation}\label{eq:Lebesgue}
\hat{s} = \argmin_{s \in \mathcal{F}_K(\Omega)} \norm{ f - s }_{L^{\infty}(\Omega)}
\quad \text{with} \quad
\norm{ f - s }_{L^{\infty}(\Omega)} = \sup_{\boldsymbol{x} \in \Omega} | f(\boldsymbol{x}) - s(\boldsymbol{x}) |.
\end{equation}
Then, since $C_N[\hat{s}] = I[\hat{s}]$ by exactness, the following error bound holds:
\begin{equation}\label{eq:L-inequality}
\begin{aligned}
| C_N[f] - I[f] |
& \leq \| I \|_{\infty} \| f - \hat{s} \|_{L^{\infty}(\Omega)}
+ \| C_N \|_{\infty} \| f - \hat{s} \|_{L^{\infty}(\Omega)} \\
& = \left( \| I \|_{\infty} + \| C_N \|_{\infty} \right) \left( \inf_{ s \in \mathcal{F}_K(\Omega) } \norm{ f - s }_{L^{\infty}(\Omega)} \right).
\end{aligned}
\end{equation}
Inequality \cref{eq:L-inequality} is commonly known as the Lebesgue inequality; see, e.\,g., \cite{van2020adaptive} or \cite[Theorem 3.1.1]{brass2011quadrature}.
It is most often encountered in the context of polynomial interpolation \cite{brutman1996lebesgue,ibrahimoglu2016lebesgue} but straightforwardly carries over to numerical integration.
In this context, the operator norms $\|I\|_\infty$ and $\|C_N\|_{\infty}$ are respectively given by
\begin{eq}
\|I\|_\infty
= \int_\Omega \omega(\boldsymbol{x}) \, \mathrm{d} \boldsymbol{x}
= I[1], \quad
\|C_N\|_{\infty}
= \sum_{n=1}^N |w_n|.
\end{eq}
Recall that the CF $C_N$ is positive and exact for constants (we assume that $\mathcal{F}_K(\Omega)$ contains the constant functions).
Thus, we have
\begin{eq}
\|C_N\|_{\infty} = C_N[1] = I[1] = \|I\|_\infty.
\end{eq}
In particular, this implies that the Lebesgue inequality \cref{eq:L-inequality} simplifies to
\begin{eq}\label{eq:L-inequality2}
| C_N[f] - I[f] |
\leq 2 \| I \|_{\infty} \left( \inf_{ s \in \mathcal{F}_K(\Omega) } \norm{ f - s }_{L^{\infty}(\Omega)} \right).
\end{eq}
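The stability identity and the simplified Lebesgue bound can be checked numerically in a few lines. The following Python sketch (illustrative only; the paper's accompanying code is in MATLAB) uses the composite midpoint rule on $[0,1]$---a positive CF with weights $1/N$ that is exact for constants---and the best constant approximation of an increasing integrand:

```python
import numpy as np

# Composite midpoint rule on [0, 1]: a positive CF (w_n = 1/N) that is
# exact for constants, so ||C_N||_inf = C_N[1] = I[1] = 1.
N = 16
x = (np.arange(N) + 0.5) / N
w = np.full(N, 1.0 / N)

f = np.exp
I_f = np.e - 1.0            # I[f] = int_0^1 e^x dx
C_f = w @ f(x)              # C_N[f]

# Stability identity for positive CFs that are exact for constants
assert np.isclose(np.abs(w).sum(), w.sum())     # ||C_N||_inf = C_N[1]

# Lebesgue bound with F = span{1}: for increasing f, the best constant
# approximation has sup-norm error (f(1) - f(0)) / 2
best_err = (f(1.0) - f(0.0)) / 2.0
assert abs(C_f - I_f) <= 2.0 * 1.0 * best_err   # the simplified bound
```

The actual quadrature error here is of order $N^{-2}$, far below the Lebesgue bound, which only reflects how well constants approximate $e^x$.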
Based on \cref{eq:L-inequality}, we can note the following:
Assume that we are given a sequence of positive CFs $(C_N)_{N \in \mathbb{N}}$ with $C_N$ being exact for $\mathcal{F}_K(\Omega)$, where $K = K(N)$.
Sequences of CFs are usually referred to as \emph{cubature rules (CRs)}.
Let $\mathcal{F}_K(\Omega) \subset \mathcal{F}_{K+1}(\Omega)$ for all $K \in \mathbb{N}$
and let $\bigcup_{K \in \mathbb{N}} \mathcal{F}_K(\Omega)$ be dense in $C(\Omega)$ w.\,r.\,t.\ the $L^{\infty}(\Omega)$-norm.
If $K(N) \to \infty$ for $N \to \infty$, then $(C_N)_{N \in \mathbb{N}}$ converges to the continuous integral, $I$, for all continuous functions.
That is, for all $f \in C(\Omega)$,
\begin{eq}
C_N[f] \to I[f], \quad \text{for } N \to \infty, \quad \text{in } (\R,|\cdot|).
\end{eq}
Note that the above argument relies on the union of all the finite-dimensional function spaces $\mathcal{F}_K(\Omega)$ being dense in $C(\Omega)$ w.\,r.\,t.\ the $L^{\infty}(\Omega)$-norm.
That said, the same argument can also be made for any other---potentially smaller or larger---function space in which this union is dense.
Finally, it should be stressed that the particular rate of convergence of $(C_N[f])_{N \in \mathbb{N}}$ to $I[f]$ as $N \to \infty$ depends on the smoothness of the function $f$ as well as on the finite-dimensional function spaces $\mathcal{F}_K(\Omega)$.
In particular, a more detailed error analysis based on \cref{eq:L-inequality2} relies on some knowledge about the quality of the $L^{\infty}(\Omega)$ best approximation from $\mathcal{F}_K(\Omega)$.
Results of this flavor are usually referred to as Jackson-type theorems and are a subject of constructive function theory \cite{natanson1961constructive}.
\subsection{Error Analysis for Analytic Functions}
\label{sub:error_analytical}
For the sake of simplicity, we now restrict ourselves to analytic functions on the $d$-dimensional hypercube $\Omega = [0,1]^d$.
Moreover, we assume that $(C_N)_{N \in \mathbb{N}}$ is a CR with $C_N$ being positive and exact for all $d$-dimensional polynomials up to total degree $m = m(N)$.\footnote{
The relation between the number of data points $N$ and the maximum total degree $m$ remains to be addressed.
}
That is, $\mathcal{F}_K(\Omega) = \mathbb{P}_m(\R^d)$.
In this case, the following result holds for the LS-CFs.
\begin{lemma}\label{lem:conv}
Let $f:[0,1]^d \to \R$ be analytic in an open set containing $[0,1]^d$.
Then the CR $(C_N)_{N \in \mathbb{N}}$, where $C_N$ is positive and $\mathbb{P}_m(\R^d)$-exact, $m = m(N)$, has error
\begin{eq}
|I[f] - C_N[f]| = \mathcal{O} \left( \exp( -c m / \sqrt{d} ) \right)
\end{eq}
for some constant $c > 0$.
\end{lemma}
\begin{proof}
Since $f$ is analytic in an open set containing $[0,1]^d$, it can be approximated by a $d$-dimensional polynomial of total degree $m$ as
\begin{eq}\label{eq:conv_proof}
\inf_{s \in \mathbb{P}_m(\R^d)} \| f - s \|_{L^{\infty}([0,1]^d)}
= \mathcal{O} \left( \exp( -c m / \sqrt{d} ) \right);
\end{eq}
see \cite{trefethen2017multivariate,bos2018bernstein}.
Here, $c$ is a constant depending on the distance---measured in terms of the radius of the associated Bernstein ellipse---between $[0,1]^d$ and the nearest singularity of $f$ (if there is any).
Finally, combining \cref{eq:conv_proof} with the Lebesgue inequality \cref{eq:L-inequality2} immediately yields the assertion.
\end{proof}
Let us briefly address the relation between the number of data points $N$ and the maximum total degree $m$ for which the LS-CF is still positive.
First, it should be noted that the dimension of $\mathbb{P}_m(\R^d)$ is $K = \binom{d+m}{d}$.
This implies the asymptotic relation ${\lim_{m \to \infty} K/m^d = 1/d!}$; in particular, $K = \mathcal{O}(m^d)$.
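For concreteness, the dimension count and its growth rate can be verified directly; the helper `dim_poly` below is a hypothetical name introduced for illustration:

```python
from math import comb, factorial

def dim_poly(d, m):
    """Dimension of the space of d-variate polynomials of total degree <= m."""
    return comb(d + m, d)

assert dim_poly(2, 2) == 6    # 1, x, y, x^2, xy, y^2
assert dim_poly(3, 4) == 35

# K / m^d approaches 1/d! as m grows, so K = O(m^d)
d = 3
ratios = [dim_poly(d, m) / m**d for m in (10, 100, 1000)]
print(ratios, 1 / factorial(d))
```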
Furthermore, in \cref{sub:num_ratio}, we observe the relation between $K$ and $N$ to be of the form $N = \mathcal{O}(K^2)$.
Assuming this relation to actually hold, we get the following version of \cref{lem:conv}.
\begin{corollary}
Let $f:[0,1]^d \to \R$ be analytic in an open set containing $[0,1]^d$ and let $(C_N)_{N \in \mathbb{N}}$ be a CR with $C_N$ being positive and $\mathbb{P}_m(\R^d)$-exact.
Assume furthermore that the asymptotic relation $N = \mathcal{O}(K^2)$ holds with $K = \dim \mathbb{P}_m(\R^d)$. Then
\begin{eq}\label{eq:conv_rate}
|I[f] - C_N[f]| = \mathcal{O} \left( \exp( -c N^{1/(2d)} / \sqrt{d} ) \right)
\end{eq}
for some constant $c > 0$.
\end{corollary}
\begin{proof}
Recall that $g(N) \in \mathcal{O}(h(N))$ if and only if there exists a constant $M>0$ such that $g(N) \leq M h(N)$ for sufficiently large $N$.
Henceforth, let $c>0$ be a generic constant.
The assertion follows from noting that there exists a constant $M > 0$ such that
\begin{eq}
\begin{aligned}
|I[f] - C_N[f]|
& \leq M \exp( -c m / \sqrt{d} ) \\
& \leq M \exp( -c K^{1/d} / \sqrt{d} ) \\
& \leq M \exp( -c N^{1/(2d)} / \sqrt{d} )
\end{aligned}
\end{eq}
for sufficiently large $N$.
Here, the first inequality follows from \cref{lem:conv}, the second one from the fact that $K = \mathcal{O}(m^d)$, and the third one from the assumption that $N = \mathcal{O}(K^2)$.
\end{proof}
We point out already here that for the positive interpolatory CFs discussed in \cref{sub:interpol_CFs}, we have $N = K$ (rather than just $N = \mathcal{O}(K^2)$) and therefore
\begin{eq}
|I[f] - C_N[f]| = \mathcal{O} \left( \exp( -c N^{1/d} / \sqrt{d} ) \right)
\end{eq}
instead of \cref{eq:conv_rate}.
In passing, we note that in some recent works \cite{trefethen2017cubature,trefethen2017multivariate,trefethen2021exactness} it was argued that at least for a certain class of functions (analytic in the hypercube with singularities outside), neither the total nor the maximum degree would be the optimal choice.
Instead, it was proposed to consider polynomials up to a fixed Euclidean degree.
Also see the work \cite{bos2018bernstein}.
\section{Some Applications}
\label{sec:applications}
We discuss two applications of the provably positive and exact LS-CFs.
These address the simple construction of positive high-order CRs (\cref{sub:const_procedure}) and positive interpolatory CFs (\cref{sub:interpol_CFs}).
In both cases, the procedure again only relies on basic linear algebra operations.
\subsection{A Simple Procedure To Construct Positive High-Order Cubature Rules}
\label{sub:const_procedure}
Building upon the theoretical results from \cref{sec:LS}, we propose a simple procedure to construct positive high-order CRs.
The procedure only relies on basic operations from linear algebra, such as solving an LS problem and computing the rank of a matrix.
Henceforth, we make the same assumptions as in \cref{cor:main}.
That is, $\Omega \subset \R^d$ is bounded with a boundary of measure zero, and the weight function $\omega: \Omega \to \R_0^+$ is Riemann integrable and positive almost everywhere.
Moreover, let $(\mathbf{x}_n)_{n \in \mathbb{N}}$ be an equidistributed sequence in $\Omega$ with $\omega(\mathbf{x}_n) > 0$ for all $n \in \mathbb{N}$.
Furthermore, we are given an increasing sequence of function spaces $( \mathcal{F}_K(\Omega) )_{K \in \mathbb{N}}$ with $1 \in \mathcal{F}_K(\Omega) \subset C(\Omega)$, $\mathcal{F}_K(\Omega) \subset \mathcal{F}_{K+1}(\Omega)$, and $\dim \mathcal{F}_K(\Omega) = K$ for all $K \in \mathbb{N}$.
For the sake of simplicity, we shall assume that $\bigcup_{K \in \mathbb{N}} \mathcal{F}_K(\Omega)$ is dense in $C(\Omega)$ w.\,r.\,t.\ the $L^{\infty}(\Omega)$-norm.
Following the discussion in \cref{sub:error_Leb}, this will ensure convergence of the subsequent CR for all continuous functions.
Under these assumptions, the procedure works as follows:
In every step, we increase the dimension $K$, therefore considering increasingly broad function spaces $\mathcal{F}_K(\Omega)$.
For fixed $K$, a positive and $\mathcal{F}_K(\Omega)$-exact CF is found by increasing the number of data points until the corresponding LS-CF $C_N$ is positive.
\cref{algo:LS-CF} contains an informal summary of the procedure for fixed $K$.
\begin{algorithm}
\caption{Constructing Positive High-Order Cubature Formulas}
\label{algo:LS-CF}
\begin{algorithmic}[1]
\State{$N = K$, $r = 0$, and $w_{\text{min}} = 0$}
\While{$r < K$ or $w_{\text{min}} < 0$}
\State{$X_N = \{\mathbf{x}_n\}_{n=1}^N$}
\State{Compute the matrix $\Phi = \Phi(X_N)$}
\State{Compute the rank of $\Phi$: $r = \text{rank}(\Phi)$}
\If{$r = K$}
\State{Compute the LS weights $\mathbf{w}^{\text{LS}}$ as in \cref{eq:LS-sol}}
\State{Determine the smallest weight: $w_{\text{min}} = \min( \mathbf{w}^{\text{LS}} )$}
\EndIf
\State{$N = 2N$}
\EndWhile
\end{algorithmic}
\end{algorithm}
It should be recalled that \cref{algo:LS-CF} is guaranteed to terminate by the theoretical findings presented in \cref{sec:prelim} and \cref{sec:LS}.
In particular, \cref{cor:equid} tells us that for a sufficiently large number of (equidistributed) data points, these will be $\mathcal{F}_K(\Omega)$-unisolvent.
This is equivalent to the rows of $\Phi$ being linearly independent ($\rank \Phi = K$).
Hence, $r = K$ is ensured for sufficiently large $N$.
At the same time, \cref{cor:main} implies that the LS weights $\mathbf{w}^{\text{LS}}$ are positive for a sufficiently large $N$.
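The loop of \cref{algo:LS-CF} can be sketched in a few lines. The following Python snippet is an illustrative sketch only (the reference implementation is the MATLAB code of \cite{glaubitz2021github}); it assumes the explicit weighted LS solution $\mathbf{w}^{\mathrm{LS}} = R \Phi^T (\Phi R \Phi^T)^{-1} \mathbf{m}$ with $R = \operatorname{diag}(r_1,\dots,r_N)$ and moment vector $\mathbf{m}$, cf.\ \cref{eq:LS-sol}, and uses Halton points and bivariate monomials of total degree at most two on $\Omega = [0,1]^2$ with $\omega \equiv 1$:

```python
import numpy as np

def halton(n, base):
    """First n points of the van der Corput sequence in the given base."""
    out = np.zeros(n)
    for i in range(n):
        f, k = 1.0, i + 1
        while k > 0:
            f /= base
            out[i] += f * (k % base)
            k //= base
    return out

# Bivariate monomials of total degree <= 2 on [0,1]^2, omega = 1
exps = [(a, b) for a in range(3) for b in range(3 - a)]
K = len(exps)                                    # K = 6
mom = np.array([1.0 / ((a + 1) * (b + 1)) for a, b in exps])

N = K
while True:                                      # the while loop of Algorithm 1
    pts = np.column_stack([halton(N, 2), halton(N, 3)])
    Phi = np.array([pts[:, 0]**a * pts[:, 1]**b for a, b in exps])
    if np.linalg.matrix_rank(Phi) == K:          # unisolvent data points
        r = np.full(N, 1.0 / N)                  # discrete weights |Omega| w / N
        G = (Phi * r) @ Phi.T                    # Phi R Phi^T
        w = r * (Phi.T @ np.linalg.solve(G, mom))
        if w.min() > 0:                          # positive LS-CF found
            break
    N *= 2

assert np.allclose(Phi @ w, mom)                 # exact on all six monomials
```

By the results above, the doubling loop is guaranteed to terminate: for sufficiently many Halton points the rank condition and the positivity of the weights both hold.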
\begin{remark}[Monte Carlo CFs]\label{rem:MC_correction}
The LS-CFs discussed above can be seen as high-order corrections to QMC methods \cite{metropolis1949monte,caflisch1998monte,dick2013high} in the case that low-discrepancy data points are used.
Recall that the weights in QMC integration are $w_n = |\Omega| \omega(\mathbf{x}_n) / N$.
At the same time, the LS weights are explicitly given by
$w_n^{\mathrm{LS}} = r_n \sum_{k=1}^K \pi_k( \mathbf{x}_n ; \mathbf{r}) I[ \pi_k( \, \cdot \, ; \mathbf{r} ) ]$, see \cref{eq:LS-sol-explicit}.
Here, $\{ \pi_k( \, \cdot \, ; \mathbf{r}) \}_{k=1}^K$ is a discrete orthonormal basis.
For fixed $K$ and an increasing number of data points, $\pi_k( \mathbf{x}_n ; \mathbf{r}) I[ \pi_k( \, \cdot \, ; \mathbf{r} ) ]$ converges to the Kronecker delta $\delta_{1,k}$.
Hence, the difference between the QMC and LS weights converges to zero.
\end{remark}
\begin{remark}[Exact Integration of Discrete LS Approximations]\label{rem:DLS_approx}
The LS-CF $C_N[f]$ corresponds to exact integration of the following discrete LS approximation of $f$ from the finite-dimensional function space ${\mathcal{F}_K(\Omega) = \mathrm{span} \{\varphi_1,\dots,\varphi_K\}}$ (assuming it is unique):
\begin{equation}
\hat{f}(\boldsymbol{x}) = \sum_{k=1}^K c_k \varphi_k(\boldsymbol{x})
\quad \text{s.t.} \quad
\norm{ R^{1/2} ( \Phi^T \mathbf{c} - \mathbf{f} ) }_2
\ \text{is minimized},
\end{equation}
where $\mathbf{c} = (c_1,\dots,c_K)^T$.
That is, $C_N[f] = I[\hat{f}]$.
This can be seen by representing $\hat{f}$ w.\,r.\,t.\ a basis of discrete orthonormal functions $\{ \pi_k(\cdot;\mathbf{r}) \}_{k=1}^K$ corresponding to the discrete inner product $[\cdot,\cdot]_N$ as in \eqref{eq:disc-inner-prod}.
Then,
\begin{equation}
\hat{f}(\boldsymbol{x}) = \sum_{k=1}^K [ f, \pi_k(\cdot;\mathbf{r}) ]_N \pi_k(\boldsymbol{x};\mathbf{r}).
\end{equation}
Integration therefore yields
\begin{equation}
I[\hat{f}]
= \sum_{k=1}^K [ f, \pi_k(\cdot;\mathbf{r}) ]_N I[\pi_k(\cdot;\mathbf{r})]
= \sum_{n=1}^N r_n \left( \sum_{k=1}^K \pi_k(\mathbf{x}_n;\mathbf{r}) I[\pi_k(\cdot;\mathbf{r})] \right) f(\mathbf{x}_n)
= C_N[f].
\end{equation}
The last equality follows from \cref{eq:LS-sol-explicit}.
Building upon this connection, in \cite{migliorati2020stable} high-order randomized CFs for independent random points were constructed.
These were shown to be positive and exact with a high probability if the number of (random) data points is sufficiently larger than $K$.
In particular, it was stated that the proportionality between $N$ and $K$ should be at least quadratic.
This is in accordance with the results presented here.
\end{remark}
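The identity $C_N[f] = I[\hat{f}]$ is easy to verify numerically. The setup below---monomials up to degree three on $[0,1]$, uniform discrete weights, and random nodes---is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 50, 4                              # nodes; dim of span{1, x, x^2, x^3}
x = rng.random(N)
r = np.full(N, 1.0 / N)                   # discrete weights
Phi = np.vander(x, K, increasing=True).T  # Phi[k, n] = x_n^k
mom = 1.0 / np.arange(1, K + 1)           # I[x^k] = 1/(k+1) on [0, 1]

# weighted LS cubature weights: w = R Phi^T (Phi R Phi^T)^{-1} m
G = (Phi * r) @ Phi.T
w = r * (Phi.T @ np.linalg.solve(G, mom))

fvals = np.sin(x)
# discrete LS fit of f w.r.t. the inner product with weights r
c = np.linalg.solve(G, (Phi * r) @ fvals)  # normal equations
I_fit = c @ mom                            # exact integral of the LS fit

assert np.isclose(w @ fvals, I_fit)        # C_N[f] = I[f_hat]
```

Both sides reduce to $\mathbf{m}^T (\Phi R \Phi^T)^{-1} \Phi R \mathbf{f}$, so the identity holds up to rounding.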
\subsection{Constructing Positive Interpolatory Cubature Formulas}
\label{sub:interpol_CFs}
Next, we describe how the provably positive and exact LS-CFs can be used to construct interpolatory CFs with the same properties.
In contrast to LS-CFs, interpolatory CFs use only $N = K$ data points, where $K$ denotes the dimension of the finite-dimensional function space $\mathcal{F}_K(\Omega)$ for which the original LS-CF is exact.
In fact, there exist many different approaches to this task.
For instance, in \cite{bremer2010nonlinear} a nonlinear optimization procedure is used to downsample formulas one data point at a time (although the work focuses on the one-dimensional case).
It is also possible to formulate \cref{eq:LS-sol} as a basis pursuit problem \cite{chen2001atomic,boyd2004convex} which can then be solved, for instance, by linear programming tools \cite{bloomfield1983least,dantzig1998linear,dantzig2006linear,gill1991numerical,vanderbei2020linear}.
See \cite{glaubitz2020stableCFs}.
Another option would be Caratheodory--Tchakaloff subsampling \cite{piazzon2017caratheodory,piazzon2017caratheodoryLS,bos2019catchdes}, which may be implemented using linear (or quadratic) programming.
Finally, NNLS \cite{lawson1995solving} could be used to recover a sparse nonnegative weight vector (see \cite{huybrechs2009stable} for the univariate case and \cite{sommariva2015compression} for the multivariate case).
Further works include \cite{wilhelmsen1976nearest,seshadri2017effectively,van2017non}.
Another procedure is a method due to Steinitz \cite{steinitz1913bedingt} (also see \cite{davis1967construction,wilson1969general}).
While this method might be less efficient than some of the aforementioned approaches, it is fairly simple and---as the construction of the positive LS-CFs---only relies on basic operations from linear algebra.
Details on Steinitz' method can be found in \cref{app:Steinitz}.
\section{Numerical Results}
\label{sec:numerical}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/points_Halton}
\caption{Halton points}
\label{fig:points_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/points_Sobol}
\caption{Sobol points}
\label{fig:points_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/points_random}
\caption{Random points}
\label{fig:points_ranom}
\end{subfigure}%
%
\caption{Illustration of the different types of ($N=64$) data points on $\Omega = [-1,1]^2$}
\label{fig:points}
\end{figure}
We now present several numerical tests that illustrate our theoretical findings and demonstrate the performance of the positive high-order LS-CFs and the corresponding interpolatory CFs (obtained by subsampling) discussed in \cref{sub:const_procedure} and \cref{sub:interpol_CFs}, respectively.
The MATLAB code used to generate the subsequent numerical results is openly available at \cite{glaubitz2021github}.
All tests are performed for three different types of data points:
(1) Halton points, which are deterministic and from a low-discrepancy sequence\footnote{Recall that low-discrepancy sequences are a subclass of equidistributed sequences.}, see \cite{halton1960efficiency,niederreiter1992random,kuipers2012uniform};
(2) Sobol points, which are an example of a quasi-random low-discrepancy sequence, see \cite{sobol1967distribution,niederreiter1992random,kuipers2012uniform};
(3) Random points, uniformly distributed and produced in MATLAB by a pseudorandom number generator.
An illustration of these points is provided in \cref{fig:points}.
\subsection{Ratio Between $N$ and $K$}
\label{sub:num_ratio}
We start by investigating the relation between the number of data points, $N$, and the dimension of the finite-dimensional function space, $K$, for which the corresponding LS-CF is positive.
Recall that \cref{cor:main} (also see \cref{thm:main}) essentially states that the (weighted) LS-CF is ensured to be positive for fixed $K$ if a sufficiently large number $N$ of (equidistributed) data points is used.
Let $N(K)$ denote the lowest number of data points for the LS-CF (with discrete weights as in \cref{cor:main}) to be positive.
Numerically, we found the relation between $K$ and $N(K)$ to be of the form $N(K) = C K^s$ with $s$ being close to or even below $2$ in many different cases.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_algebraic_Halton}
\caption{Halton, polynomials}
\label{fig:ratio_dim2_cube_1_algebraic_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_algebraic_Sobol}
\caption{Sobol, polynomials}
\label{fig:ratio_dim2_cube_1_algebraic_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_algebraic_random}
\caption{Random, polynomials}
\label{fig:ratio_dim2_cube_1_algebraic_random}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_trig_Halton}
\caption{Halton, trigonometric}
\label{fig:ratio_dim2_cube_1_trig_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_trig_Sobol}
\caption{Sobol, trigonometric}
\label{fig:ratio_dim2_cube_1_trig_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_trig_random}
\caption{Random, trigonometric}
\label{fig:ratio_dim2_cube_1_trig_random}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_cubic_Halton}
\caption{Halton, cubic PHS-RBFs}
\label{fig:ratio_dim2_cube_1_cubic_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_cubic_Sobol}
\caption{Sobol, cubic PHS-RBFs}
\label{fig:ratio_dim2_cube_1_cubic_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_cubic_random}
\caption{Random, cubic PHS-RBFs}
\label{fig:ratio_dim2_cube_1_cubic_random}
\end{subfigure}%
%
\caption{Ratio between $K$ and $N = N(K)$ for $\Omega = [-1,1]^2$ and $\omega \equiv 1$.
Compared are the LS-CF, the resulting interpolatory CF obtained by subsampling, and the product Legendre rule.
\label{fig:ratio_2d_cube_1}
\end{figure}
\begin{table}[tb]
\renewcommand{\arraystretch}{1.3}
\centering
\begin{tabular}{c c c c c c c c c c c}
\toprule
\multicolumn{11}{c}{LS-CF for the Cube Based on Algebraic Polynomials} \\ \hline
\multicolumn{3}{c}{} & \multicolumn{4}{c}{$\omega \equiv 1$} & & \multicolumn{3}{c}{$\omega(\boldsymbol{x}) = (1-x_1^2)^{1/2} \dots (1-x_q^2)^{1/2}$} \\ \hline
$q$ & \multicolumn{2}{c}{} & Legendre & Halton & Sobol & random & & Halton & Sobol & random \\ \hline
$2$ & s & & 1.9e-0 & 1.9e-0 & 1.2e-0 & 1.2e-0
& & 1.9e-0 & 1.0e-0 & 1.7e-0 \\
& C & & 3.0e-1 & 9.9e-2 & 1.2e-0 & 3.4e-0
& & 9.2e-2 & 2.1e-0 & 1.3e-0 \\ \hline
$3$ & s & & 2.8e-0 & 1.4e-0 & 1.4e-0 & 1.4e-0
& & 2.4e-0 & 1.5e-0 & 1.8e-0 \\
& C & & 2.1e-1 & 4.4e-1 & 4.2e-1 & 2.9e-0
& & 2.9e-3 & 2.2e-1 & 7.1e-1 \\ \hline
\\
\end{tabular}
\\
\begin{tabular}{c c c c c c c c c c}
\multicolumn{10}{c}{LS-CF for the Ball Based on Algebraic Polynomials} \\ \hline
\multicolumn{3}{c}{} & \multicolumn{3}{c}{$\omega \equiv 1$} & & \multicolumn{3}{c}{$\omega(\boldsymbol{x}) = \| \boldsymbol{x} \|_2^{1/2}$} \\ \hline
$q$ &\multicolumn{2}{c}{} & Halton & Sobol & random & & Halton & Sobol & random \\ \hline
$2$ & s & & 1.8e-0 & 1.5e-0 & 1.5e-0
& & 1.5e-0 & 1.3e-0 & 1.8e-0 \\
& C & & 1.7e-1 & 5.5e-1 & 9.5e-1
& & 5.0e-1 & 1.1e-0 & 3.0e-1 \\ \hline
$3$ & s & & 1.0e-1 & 1.3e-0 & 1.3e-0
& & 1.0e-1 & 1.4e-0 & 1.2e-0 \\
& C & & 4.4e-0 & 9.7e-1 & 2.4e-0
& & 5.7e-0 & 7.7e-1 & 3.0e-0 \\ \hline
\\
\end{tabular}
\\
\begin{tabular}{c c c c c c}
\multicolumn{6}{c}{LS-CF for the Two-Dimensional Cube, $\omega \equiv 1$} \\ \hline
$\mathcal{F}_K(\Omega)$ & \multicolumn{2}{c}{} & Halton & Sobol & random \\ \hline
algebraic polynomials
& s & & 1.9e-0 & 1.2e-0 & 1.2e-0 \\
& C & & 9.9e-2 & 1.2e-0 & 3.4e-0 \\ \hline
trigonometric polynomials
& s & & 1.2e-0 & 1.4e-0 & 1.3e-0 \\
& C & & 1.3e-0 & 7.1e-1 & 1.3e-0 \\ \hline
cubic PHS-RBFs
& s & & 9.9e-1 & 1.3e-0 & 1.6e-0 \\
& C & & 2.0e-0 & 5.3e-1 & 6.0e-1 \\ \hline
\bottomrule
\end{tabular}
\caption{LS fit for the parameters $C$ and $s$ in the model $N(K) = C K^s$}
\label{tab:LS-fit}
\end{table}
\cref{fig:ratio_2d_cube_1} illustrates this for the two-dimensional hypercube $\Omega = [-1,1]^2$ with weight function $\omega \equiv 1$.
The corresponding LS-CFs were constructed to be exact for algebraic polynomials up to a fixed total degree (\cref{fig:ratio_dim2_cube_1_algebraic_Halton,fig:ratio_dim2_cube_1_algebraic_Sobol,fig:ratio_dim2_cube_1_algebraic_random}), trigonometric polynomials up to a fixed degree (\cref{fig:ratio_dim2_cube_1_trig_Halton,fig:ratio_dim2_cube_1_trig_Sobol,fig:ratio_dim2_cube_1_trig_random}), and cubic polyharmonic RBFs (PHS-RBFs) augmented with a constant (\cref{fig:ratio_dim2_cube_1_cubic_Halton,fig:ratio_dim2_cube_1_cubic_Sobol,fig:ratio_dim2_cube_1_cubic_random}).
Recall that trigonometric polynomials of degree $m \in \mathbb{N}$ or less are linear combinations $\sum_{|\boldsymbol{\alpha}| \leq m} c_{\boldsymbol{\alpha}} t_{\boldsymbol{\alpha}}$ of the trigonometric monomials $t_{\boldsymbol{\alpha}}(\boldsymbol{x}) = \prod_{j=1}^d \exp(2 \pi i \alpha_j x_j)$ with $i^2 = -1$ and $\boldsymbol{\alpha} \in \mathbb{Z}^d$.
It is required that $c_{\boldsymbol{\alpha}}$ and $c_{-\boldsymbol{\alpha}}$ are complex conjugates so that the resulting polynomial is real-valued.
We denote the space of trigonometric polynomials up to degree $m$ by $\Pi_m(\R^d)$.
For more details on RBFs, in particular on cubic PHS-RBFs, we refer to the monographs \cite{buhmann2000radial,buhmann2003radial,wendland2004scattered,fasshauer2007meshfree,fornberg2015primer} and references therein.
\cref{fig:ratio_2d_cube_1} also reports the ratio between $K$ and $N(K)$ for the product Legendre rule and the positive interpolatory CF.
The positive interpolatory CF was obtained from the LS-CF by subsampling using the Steinitz method (we justify this choice in \cref{sub:comp_subsample}).
The numerically observed values for $C$ and $s$ in the assumed relation $N(K) = C K^s$ for several further cases are listed in \cref{tab:LS-fit}.
The reported parameters $C$ and $s$ were obtained by performing an LS fit based on the values of $K$ and $N = N(K)$ for total degrees ${m \in \{0,1,\dots,10\}}$.
We note that the results reported here appear to be in accordance with similar observations made in previous works for certain special cases; see \cite{wilson1970necessary,huybrechs2009stable,glaubitz2020stableQRs,glaubitz2020shock} (for univariate LS-QFs) and \cite{glaubitz2020stableCFs} (for multivariate LS-CFs based on polynomials).
Interestingly, similar ratios were also observed in other contexts, such as discrete LS function approximations \cite{cohen2013stability,cohen2017optimal} and stable high-order randomized CFs based on these \cite{migliorati2020stable}.
\subsection{Comparison of Different Subsampling Methods}
\label{sub:comp_subsample}
Before any further investigation, we shall briefly discuss the subsampling method used to obtain a positive interpolatory CF from a given positive LS-CF, both exact for the same function space $\mathcal{F}_K(\Omega) \subset C(\Omega)$.
A few options to construct such positive interpolatory CFs were discussed in \cref{sub:interpol_CFs}.
Indeed, the body of literature concerning this task is becoming increasingly large; see \cite{davis1967construction,wilson1969general,wilhelmsen1976nearest,huybrechs2009stable,bremer2010nonlinear,sommariva2015compression,piazzon2017caratheodory,piazzon2017caratheodoryLS,seshadri2017effectively,van2017non,bos2019catchdes,glaubitz2020stableCFs} and references therein.
An exhaustive comparison of all these approaches would exceed the scope of the present manuscript.
That said, we shall at least provide a rudimentary comparison of three especially simple methods to construct the aforementioned interpolatory CF.
These methods are:
(i) Steinitz' method, see \cite{steinitz1913bedingt,davis1967construction,wilson1969general} as well as \cref{app:Steinitz};
(ii) NNLS \cite{lawson1995solving} (also see \cite{huybrechs2009stable} for the univariate case and \cite{sommariva2015compression} for the multivariate case); and
(iii) basis pursuit \cite{chen2001atomic,boyd2004convex} formulated as a linear programming problem (also see \cite{glaubitz2020stableCFs}).
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsTime_dim2_Halton}
\caption{Efficiency, Halton}
\label{fig:comparison_efficiency_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsTime_dim2_Sobol}
\caption{Efficiency, Sobol}
\label{fig:comparison_efficiency_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsTime_dim2_random}
\caption{Efficiency, random}
\label{fig:comparison_efficiency_random}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsError_dim2_Halton}
\caption{Exactness, Halton}
\label{fig:comparison_error_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsError_dim2_Sobol}
\caption{Exactness, Sobol}
\label{fig:comparison_error_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsError_dim2_random}
\caption{Exactness, random}
\label{fig:comparison_error_random}
\end{subfigure}%
%
\caption{
A comparison between different subsampling methods to construct a positive interpolatory CF from a positive LS-CF.
All tests are performed for the domain $\Omega = [-1,1]^2$ with $\omega \equiv 1$.
The CFs are exact for algebraic polynomials up to an (increasing) total degree $m$.
}
\label{fig:subsampling_comparison}
\end{figure}
\cref{fig:subsampling_comparison} provides a comparison of the interpolatory CFs resulting from these three methods.
Thereby, the underlying positive LS-CF was constructed for the two-dimensional hypercube $[-1,1]^2$ with $\omega \equiv 1$ and made exact for algebraic polynomials up to an increasing total degree $m \in \{0,1,\dots,14\}$.
For this case all three methods produced positive ($w_n > 0$ for $n=1,\dots,N$) and interpolatory ($N = K$) CFs.
Thus, in \cref{fig:subsampling_comparison} we focus only on the efficiency and ``exactness'' of these methods.
Efficiency is measured by the time it took the respective method to produce a positive interpolatory CF from a given positive LS-CF.
The normalized times are displayed in \cref{fig:comparison_efficiency_Halton,fig:comparison_efficiency_Sobol,fig:comparison_efficiency_random}.
Based on these results it might be argued that NNLS is the most efficient method, while the Steinitz approach is the least efficient one.
On the other hand, ``exactness'' is measured by the error in the exactness conditions of the produced interpolatory CF.
This is measured by the residual of the moment conditions, $\|\Phi \mathbf{w} - \mathbf{m}\|_2$.
Such errors can be introduced by finite-precision arithmetic (rounding errors) or by a method not arriving at a solution within the performed number of iterations.
In fact, we can observe in \cref{fig:comparison_error_Halton,fig:comparison_error_Sobol,fig:comparison_error_random} that especially the NNLS method suffers from undesirably high errors.
Regarding exactness, we observe that the Steinitz method performs best.
We also report that for even higher total degrees, $m$, the basis pursuit approach sometimes did not produce a positive interpolatory CF at all.
Henceforth, we restrict ourselves to positive interpolatory CFs constructed by the Steinitz method, even though we observed it to be inferior in terms of efficiency.
A more detailed investigation and comparison of different subsampling strategies would be of interest.
\subsection{Polynomial-Based Cubature Formulas on the Hypercube}
We now investigate the accuracy of the positive high-order LS-CFs and corresponding interpolatory CFs (obtained by the Steinitz method).
To this end, we consider the $d$-dimensional hypercube $\Omega = [0,1]^d$ with $\omega \equiv 1$ and the following Genz test functions \cite{genz1984testing} (also see \cite{van2020adaptive}):
\begin{equation}\label{eq:Genz}
\begin{aligned}
g_1(\boldsymbol{x})
& = \prod_{i=1}^d \left( a_i^{-2} + (x_i - b_i)^2 \right)^{-1} \quad
&& \text{(product peak)}, \\
g_2(\boldsymbol{x})
& = \left( 1 + \sum_{i=1}^d a_i x_i \right)^{-(d+1)} \quad
&& \text{(corner peak)}, \\
g_3(\boldsymbol{x})
& = \exp \left( - \sum_{i=1}^d a_i^2 ( x_i - b_i )^2 \right) \quad
&& \text{(Gaussian)}
\end{aligned}
\end{equation}
These functions are designed to each exhibit a characteristic that is difficult for numerical integration routines.
The vectors $\mathbf{a} = (a_1,\dots,a_d)^T$ and $\mathbf{b} = (b_1,\dots,b_d)^T$ contain (randomly chosen) shape and translation parameters, respectively.
For each case, the experiment was repeated $20$ times, with the vectors $\mathbf{a}$ and $\mathbf{b}$ being drawn randomly from $[0,1]^d$.
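For the product peak $g_1$, the reference value factorizes over the coordinates, which makes it convenient for testing integration routines. The following Python sketch (with fixed, purely illustrative parameter values in place of the random draws used in the experiments) checks the closed form against a simple product midpoint rule:

```python
import numpy as np

def genz_product_peak(x, a, b):
    """Genz' product peak g_1; x has shape (..., d)."""
    return np.prod(1.0 / (a**(-2.0) + (x - b)**2), axis=-1)

def product_peak_integral(a, b):
    # int_0^1 dx / (a^-2 + (x - b)^2) = a * (arctan(a(1-b)) + arctan(ab)),
    # so the integral over [0,1]^d is the product of these 1D values
    return np.prod(a * (np.arctan(a * (1 - b)) + np.arctan(a * b)))

d = 2
a = np.array([0.7, 0.4])      # illustrative shape parameters
b = np.array([0.25, 0.6])     # illustrative translation parameters

# product midpoint rule on a fine grid as a cheap cross-check
n = 400
t = (np.arange(n) + 0.5) / n
X = np.stack(np.meshgrid(t, t, indexing="ij"), axis=-1).reshape(-1, d)
approx = genz_product_peak(X, a, b).mean()

assert np.isclose(approx, product_peak_integral(a, b), rtol=1e-3)
```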
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim2_1_Halton_Steinitz}
\caption{$g_1$, Halton}
\label{fig:accuracy_Genz1_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim2_1_Sobol_Steinitz}
\caption{$g_1$, Sobol}
\label{fig:accuracy_Genz1_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim2_1_random_Steinitz}
\caption{$g_1$, random}
\label{fig:accuracy_Genz1_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim2_1_Halton_Steinitz}
\caption{$g_2$, Halton}
\label{fig:accuracy_Genz2_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim2_1_Sobol_Steinitz}
\caption{$g_2$, Sobol}
\label{fig:accuracy_Genz2_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim2_1_random_Steinitz}
\caption{$g_2$, random}
\label{fig:accuracy_Genz2_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim2_1_Halton_Steinitz}
\caption{$g_3$, Halton}
\label{fig:accuracy_Genz3_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim2_1_Sobol_Steinitz}
\caption{$g_3$, Sobol}
\label{fig:accuracy_Genz3_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim2_1_random_Steinitz}
\caption{$g_3$, random}
\label{fig:accuracy_Genz3_dim2_1_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for Genz' test functions on $\Omega = [0,1]^2$ with $\omega \equiv 1$.
The LS- and interpolatory CFs are based on multivariate polynomials of increasing total degree ($\mathcal{F}_K(\Omega) = \mathbb{P}_m(\R^2)$).
}
\label{fig:errors_cube_1_dim2}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim3_1_Halton_Steinitz}
\caption{$g_1$, Halton}
\label{fig:accuracy_Genz1_dim3_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim3_1_Sobol_Steinitz}
\caption{$g_1$, Sobol}
\label{fig:accuracy_Genz1_dim3_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim3_1_random_Steinitz}
\caption{$g_1$, random}
\label{fig:accuracy_Genz1_dim3_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim3_1_Halton_Steinitz}
\caption{$g_2$, Halton}
\label{fig:accuracy_Genz2_dim3_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim3_1_Sobol_Steinitz}
\caption{$g_2$, Sobol}
\label{fig:accuracy_Genz2_dim3_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim3_1_random_Steinitz}
\caption{$g_2$, random}
\label{fig:accuracy_Genz2_dim3_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim3_1_Halton_Steinitz}
\caption{$g_3$, Halton}
\label{fig:accuracy_Genz3_dim3_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim3_1_Sobol_Steinitz}
\caption{$g_3$, Sobol}
\label{fig:accuracy_Genz3_dim3_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim3_1_random_Steinitz}
\caption{$g_3$, random}
\label{fig:accuracy_Genz3_dim3_1_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for Genz' test functions on $\Omega = [0,1]^3$ with $\omega \equiv 1$.
The LS- and interpolatory CFs are based on multivariate polynomials of increasing total degree ($\mathcal{F}_K(\Omega) = \mathbb{P}_m(\R^3)$).
}
\label{fig:errors_cube_1_dim3}
\end{figure}
\cref{fig:errors_cube_1_dim2,fig:errors_cube_1_dim3} illustrate the errors of different CFs for this test case in two ($d=2$) and three ($d=3$) dimensions, respectively.
Compared are the positive high-order LS-CF, the corresponding interpolatory CF, the (Q)MC method, and a product Legendre method.
In this context, the term ``(Q)MC'' refers to a CF of the form
\begin{equation}
C_N[g] = \frac{| \Omega |}{N} \sum_{n=1}^N g(\mathbf{x}_n)
\quad \text{with} \quad
g = f \omega.
\end{equation}
If the data points are sampled randomly, this corresponds to an MC method; if they instead stem from a (semi- or fully) deterministic low-discrepancy sequence, it corresponds to a QMC method \cite{niederreiter1992random,caflisch1998monte,dick2013high}.
In these tests, the (Q)MC method used the same data points as the LS-CF, while the product Legendre method used a product grid of one-dimensional Gauss--Legendre points.
It can be observed from \cref{fig:errors_cube_1_dim2,fig:errors_cube_1_dim3} that the product Legendre rule yields the most accurate results in most cases, followed by the interpolatory CF and the underlying positive high-order LS-CF.
The (Q)MC method, on the other hand, is found to yield the least accurate approximation in all cases.
Of course, the situation can be expected to change if higher dimensions, $d \gg 3$, are considered.
Indeed, it should be stressed that neither the product Legendre rule nor the LS-CF or corresponding interpolatory CF can overcome the curse of dimensionality \cite{bellman1966dynamic,kuo2005lifting}.
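For concreteness, the equal-weight (Q)MC rule above can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' implementation; the textbook Halton construction, the Gaussian-type test integrand, and all parameter choices below are our own assumptions.

```python
# Minimal sketch of the equal-weight rule C_N[g] = |Omega|/N * sum_n g(x_n)
# on Omega = [0,1]^2, comparing random (MC) and Halton (QMC) data points.
import math
import random

def van_der_corput(n, base):
    """Radical inverse of n (n = 1, 2, ...) in the given base."""
    q, denom = 0.0, 1.0
    while n > 0:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton_points(num, bases=(2, 3)):
    """First `num` points of the Halton sequence in [0,1]^len(bases)."""
    return [tuple(van_der_corput(n, b) for b in bases)
            for n in range(1, num + 1)]

def equal_weight_cf(g, points, volume=1.0):
    """C_N[g] = |Omega|/N * sum_n g(x_n)."""
    return volume * sum(g(x) for x in points) / len(points)

# Gaussian-type integrand with a = (1,1), b = (0.5,0.5); its exact
# integral over [0,1]^2 is (sqrt(pi) * erf(0.5))^2.
g = lambda x: math.exp(-sum((xi - 0.5) ** 2 for xi in x))
exact = (math.sqrt(math.pi) * math.erf(0.5)) ** 2

N = 4096
err_qmc = abs(equal_weight_cf(g, halton_points(N)) - exact)
random.seed(0)
err_mc = abs(equal_weight_cf(
    g, [(random.random(), random.random()) for _ in range(N)]) - exact)
```

For a smooth integrand like this one, the Halton-based estimate is typically noticeably more accurate than the purely random one at equal $N$, in line with the classical MC and QMC error behavior.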
\subsection{Comparing Different Function Spaces}
Next, we demonstrate the performance of positive LS-CFs for other finite-dimensional function spaces, not necessarily consisting of algebraic polynomials up to a certain total (or any other type of) degree.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_LS_dim2_1_Halton_Steinitz}
\caption{$g_1$, Halton}
\label{fig:accuracy_Genz1_compareSapces_LS_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_LS_dim2_1_Sobol_Steinitz}
\caption{$g_1$, Sobol}
\label{fig:accuracy_Genz1_compareSapces_LS_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_LS_dim2_1_random_Steinitz}
\caption{$g_1$, random}
\label{fig:accuracy_Genz1_compareSapces_LS_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_LS_dim2_1_Halton_Steinitz}
\caption{$g_2$, Halton}
\label{fig:accuracy_Genz2_compareSapces_LS_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_LS_dim2_1_Sobol_Steinitz}
\caption{$g_2$, Sobol}
\label{fig:accuracy_Genz2_compareSapces_LS_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_LS_dim2_1_random_Steinitz}
\caption{$g_2$, random}
\label{fig:accuracy_Genz2_compareSapces_LS_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_LS_dim2_1_Halton_Steinitz}
\caption{$g_3$, Halton}
\label{fig:accuracy_Genz3_compareSapces_LS_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_LS_dim2_1_Sobol_Steinitz}
\caption{$g_3$, Sobol}
\label{fig:accuracy_Genz3_compareSapces_LS_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_LS_dim2_1_random_Steinitz}
\caption{$g_3$, random}
\label{fig:accuracy_Genz3_compareSapces_LS_dim2_1_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for Genz' test functions of stable high-order LS-CFs based on different function spaces.
Considered are algebraic polynomials (``LS, poly.''), trigonometric functions (``LS, trig.''), and cubic PHS-RBFs (``LS, cubic PHS-RBF'').
All tests are performed for $\Omega = [0,1]^2$ and $\omega \equiv 1$.
}
\label{fig:accuracy_compareSapces_LS}
\end{figure}
\cref{fig:accuracy_compareSapces_LS} illustrates this by comparing the positive high-order LS-CFs that are exact for algebraic polynomials up to a fixed total degree, trigonometric polynomials up to a fixed degree, and cubic PHS-RBFs, respectively.
Indeed, positive high-order LS-CFs can be constructed just as easily for nonpolynomial function spaces; admittedly, a look at the minor changes that are necessary in the corresponding code \cite{glaubitz2021github} might be a more convincing demonstration.
At the same time, we note based on \cref{fig:accuracy_compareSapces_LS} that neither trigonometric polynomials nor cubic PHS-RBFs yield an improvement compared to algebraic polynomials.
The same observation can be made for the corresponding interpolatory CFs; however, for the sake of space, we do not present these results here.
Arguably, nonpolynomial function spaces might prove effective in situations where we have some prior knowledge about the otherwise unknown integrand $f$ that can then be encoded in the function space.
For instance, the integral of a periodic function $f$ might be better approximated by a CF based on trigonometric polynomials than by one based on algebraic polynomials.
Another example that comes to mind are divergence-free solutions of certain PDEs, whose integrals might be best approximated by a CF that is constructed to be exact for divergence-free basis functions.
It would be of interest to further investigate such strategies, which might be labeled as \emph{physics-informed numerical integration}, in future work.
\subsection{Some Other Domains and Weight Functions}
Here, we consider three test cases that involve either a non-constant weight function or the two-dimensional ball $B_2 = \{ (x_1,x_2)^T \in \R^2 \mid x_1^2 + x_2^2 \leq 1 \}$ instead of the hyper-cube $C_2 = [-1,1]^2$:
\begin{alignat}{3}
& \Omega = C_2, \quad
&& \omega(x_1,x_2) = (1-x_1^2)^{1/2} (1-x_2^2)^{1/2}, \quad
&& f(x_1,x_2) = \arccos(x_1) \arccos(x_2); \label{eq:testA} \\
& \Omega = B_2, \quad
&& \omega \equiv 1, \quad
&& f(x_1,x_2) = (1 + x_1^2 + x_2^2 )^{-1} + \sin(x_1); \label{eq:testB} \\
& \Omega = B_2, \quad
&& \omega(x_1,x_2) = (x_1^2 + x_2^2)^{1/4}, \quad
&& f(x_1,x_2) = (1 + x_1^2 + x_2^2 )^{-1} + \sin(x_1). \label{eq:testC}
\end{alignat}
The results can be found in \cref{fig:accuracy_misc}.
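As a sanity check for test case \cref{eq:testB}, note that the $\sin(x_1)$ term integrates to zero over $B_2$ by symmetry, so the exact integral is $\pi \ln 2$. The following hit-or-miss MC sketch is our own illustration (it is not one of the CFs compared in the figure) and reproduces this value:

```python
# Test case (testB): Omega = B_2 (unit ball), omega = 1,
# f(x1,x2) = 1/(1 + x1^2 + x2^2) + sin(x1).
# The sin term integrates to zero by symmetry; the 1/(1+r^2) part gives
# 2*pi * int_0^1 r/(1+r^2) dr = pi * ln(2).
import math
import random

def f(x1, x2):
    return 1.0 / (1.0 + x1 ** 2 + x2 ** 2) + math.sin(x1)

def mc_ball(num, seed=0):
    """Hit-or-miss MC on B_2: sample the bounding square [-1,1]^2 and
    keep only the points that fall inside the ball."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num):
        x1, x2 = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x1 ** 2 + x2 ** 2 <= 1.0:
            total += f(x1, x2)
    # |square| = 4; the indicator handles the restriction to the ball.
    return 4.0 * total / num

exact = math.pi * math.log(2.0)  # ~ 2.1776
```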
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc1_dim2_cube_C2k_Halton_Steinitz}
\caption{Test \cref{eq:testA}, Halton}
\label{fig:accuracy_misc1_dim2_cube_C2k_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc1_dim2_cube_C2k_Sobol_Steinitz}
\caption{Test \cref{eq:testA}, Sobol}
\label{fig:accuracy_misc1_dim2_cube_C2k_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc1_dim2_cube_C2k_random_Steinitz}
\caption{Test \cref{eq:testA}, random}
\label{fig:accuracy_misc1_dim2_cube_C2k_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc2_dim2_ball_1_Halton_Steinitz}
\caption{Test \cref{eq:testB}, Halton}
\label{fig:accuracy_misc2_dim2_ball_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc2_dim2_ball_1_Sobol_Steinitz}
\caption{Test \cref{eq:testB}, Sobol}
\label{fig:accuracy_misc2_dim2_ball_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc2_dim2_ball_1_random_Steinitz}
\caption{Test \cref{eq:testB}, random}
\label{fig:accuracy_misc2_dim2_ball_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc3_dim2_ball_sqrt_Halton_Steinitz}
\caption{Test \cref{eq:testC}, Halton}
\label{fig:accuracy_misc3_dim2_ball_sqrt_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc3_dim2_ball_sqrt_Sobol_Steinitz}
\caption{Test \cref{eq:testC}, Sobol}
\label{fig:accuracy_misc3_dim2_ball_sqrt_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc3_dim2_ball_sqrt_random_Steinitz}
\caption{Test \cref{eq:testC}, random}
\label{fig:accuracy_misc3_dim2_ball_sqrt_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for three different test cases \cref{eq:testA,eq:testB,eq:testC}.
Compared are the polynomial-based LS-CF, the corresponding interpolatory CFs, the (Q)MC method using the same data points as the LS-CF, and a (transformed) product Legendre rule.
}
\label{fig:accuracy_misc}
\end{figure}
Here, the LS-CFs are constructed to be exact for algebraic polynomials up to an increasing total degree $m$.
Moreover, the LS-CF and the corresponding interpolatory CF are again compared to a (Q)MC method applied to the same data points as the LS-CF and to a (transformed) product Legendre rule.
The latter was applied to the integrand $f \omega$.
We can note from \cref{fig:accuracy_misc} that in many of the tests the positive interpolatory CF---and sometimes even the LS-CF---yield more accurate results than the (transformed) product Legendre rule.
\subsection{A Nonstandard Domain}
We end this section with a short demonstration of the performance of the positive high-order LS-CFs and the corresponding interpolatory CFs on a nonstandard domain.
These are again constructed to be exact for algebraic polynomials up to an increasing total degree.
To this end, we consider the two-dimensional domain $\Omega$ that is illustrated in \cref{fig:nonstandard_domian} with weight function $\omega \equiv 1$.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\begin{center}
\begin{tikzpicture}[domain = -2.5:4.5, scale=.55, line width=0.6pt]
\draw[->,>=stealth] (-2.5,0) -- (4.5,0) node[below] {$x$};
\draw[->,>=stealth] (0,-2.5) -- (0,4.5) node[right] {$y$};
\draw (-2,0.1) -- (-2,-0.1) node [below] {-1};
\draw (2,0.1) -- (2,-0.1) node [below] {1};
\draw (4,0.1) -- (4,-0.1) node [below] {2};
\draw (-0.1,-2) -- (0.1,-2) node [left] {-1 \ };
\draw (-0.1,2) -- (0.1,2) node [left] {1 \ };
\draw (-0.1,4) -- (0.1,4) node [left] {2 \ };
\draw[blue] (0,0) circle [radius = 2];
\draw[blue] (2,2) rectangle (4,4);
\draw (0,2) node (A) {};
\draw (2,2.5) node (B) {};
\draw[blue] (1,3) node (C) {\Large $\Omega$};
\draw (A) to[out=60, in=180] (C);
\draw (B) to[out=180, in=0] (C);
\end{tikzpicture}
\end{center}
\caption{Domain}
\label{fig:nonstandard_domian}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_nonstandard_dim2_combi_1_Halton_Steinitz}
\caption{Halton points}
\label{fig:accuracy_nonstandard_dim2_combi_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_nonstandard_dim2_combi_1_Sobol_Steinitz}
\caption{Sobol points}
\label{fig:accuracy_nonstandard_dim2_combi_1_Sobol_Steinitz}
\end{subfigure}%
%
\caption{
A two-dimensional nonstandard domain and errors for $\omega \equiv 1$ and $f(x_1,x_2) = \exp( -x_1^2 - x_2^2 )$.
}
\label{fig:nonstandard}
\end{figure}
\cref{fig:accuracy_nonstandard_dim2_combi_1_Halton_Steinitz,fig:accuracy_nonstandard_dim2_combi_1_Sobol_Steinitz} report on the errors of the LS-CF and the corresponding interpolatory CF compared to a (Q)MC method.
The test function is ${f(x_1,x_2) = \exp( -x_1^2 - x_2^2 )}$.
Again, the (Q)MC method used the same data points as the LS-CF.
It should be noted that even in this case of a nonstandard domain, the positive high-order LS-CF and the corresponding interpolatory CF display encouraging rates of convergence.
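To make the setup reproducible, the following sketch integrates the test function over this domain with a plain hit-or-miss MC rule. Reading the domain off \cref{fig:nonstandard_domian} as the disjoint union of the unit disk and the square $[1,2]^2$ is our own interpretation of the figure, not a definition taken from the text.

```python
# Hit-or-miss MC for f(x1,x2) = exp(-x1^2 - x2^2) on the nonstandard
# domain, interpreted here as (unit disk) union ([1,2] x [1,2]).
import math
import random

def in_domain(x1, x2):
    return x1 ** 2 + x2 ** 2 <= 1.0 or (1.0 <= x1 <= 2.0 and 1.0 <= x2 <= 2.0)

def mc_nonstandard(num, seed=0):
    """Sample the bounding box [-1,2]^2 and keep points inside the domain."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num):
        x1, x2 = rng.uniform(-1, 2), rng.uniform(-1, 2)
        if in_domain(x1, x2):
            total += math.exp(-x1 ** 2 - x2 ** 2)
    return 9.0 * total / num  # |box| = 3 * 3 = 9

# Reference value: pi*(1 - e^{-1}) over the disk plus a product of
# one-dimensional Gaussian integrals over the square [1,2]^2.
ref = math.pi * (1 - math.exp(-1)) + \
      (math.sqrt(math.pi) / 2 * (math.erf(2) - math.erf(1))) ** 2
```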
\section{Numerical Results}
\label{sec:numerical}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/points_Halton}
\caption{Halton points}
\label{fig:points_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/points_Sobol}
\caption{Sobol points}
\label{fig:points_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/points_random}
\caption{Random points}
\label{fig:points_ranom}
\end{subfigure}%
%
\caption{Illustration of the different types of ($N=64$) data points on $\Omega = [-1,1]^2$.}
\label{fig:points}
\end{figure}
\subsection{Ratio Between $N$ and $K$}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_algebraic_Halton}
\caption{Halton, polynomials}
\label{fig:ratio_dim2_cube_1_algebraic_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_trig_Halton}
\caption{Halton, trigonometric}
\label{fig:ratio_dim2_cube_1_trig_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_cubic_Halton}
\caption{Halton, cubic PHS-RBFs}
\label{fig:ratio_dim2_cube_1_cubic_Halton}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_algebraic_Sobol}
\caption{Sobol, polynomials}
\label{fig:ratio_dim2_cube_1_algebraic_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_trig_Sobol}
\caption{Sobol, trigonometric}
\label{fig:ratio_dim2_cube_1_trig_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_cubic_Sobol}
\caption{Sobol, cubic PHS-RBFs}
\label{fig:ratio_dim2_cube_1_cubic_Sobol}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_algebraic_random}
\caption{Random, polynomials}
\label{fig:ratio_dim2_cube_1_algebraic_random}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_trig_random}
\caption{Random, trigonometric}
\label{fig:ratio_dim2_cube_1_trig_random}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/ratio_dim2_cube_1_cubic_random}
\caption{Random, cubic PHS-RBFs}
\label{fig:ratio_dim2_cube_1_cubic_random}
\end{subfigure}%
%
\caption{Ratio between $N$ and $K$ for $\Omega = [-1,1]^2$ and $\omega \equiv 1$.
Compared are the LS-CF, the resulting interpolatory CF obtained by subsampling using NNLS, and the product Legendre rule.}
\label{fig:ratio_2d_cube_1}
\end{figure}
\begin{table}[tb]
\renewcommand{\arraystretch}{1.3}
\centering
\begin{tabular}{c c c c c c c c c c c}
\toprule
\multicolumn{11}{c}{LS-CF for the Cube Based on Algebraic Polynomials} \\ \hline
\multicolumn{3}{c}{} & \multicolumn{4}{c}{$\omega \equiv 1$} & & \multicolumn{3}{c}{$\omega(\boldsymbol{x}) = (1-x_1^2)^{1/2} \dots (1-x_d^2)^{1/2}$} \\ \hline
$d$ & \multicolumn{2}{c}{} & Legendre & Halton & Sobol & random & & Halton & Sobol & random \\ \hline
$2$ & s & & 1.9e-0 & 1.9e-0 & 1.2e-0 & 1.2e-0
& & 1.9e-0 & 1.0e-0 & 1.7e-0 \\
& C & & 3.0e-1 & 9.9e-2 & 1.2e-0 & 3.4e-0
& & 9.2e-2 & 2.1e-0 & 1.3e-0 \\ \hline
$3$ & s & & 2.8e-0 & 1.4e-0 & 1.4e-0 & 1.4e-0
& & 2.4e-0 & 1.5e-0 & 1.8e-0 \\
& C & & 2.1e-1 & 4.4e-1 & 4.2e-1 & 2.9e-0
& & 2.9e-3 & 2.2e-1 & 7.1e-1 \\ \hline
\\
\end{tabular}
%
\begin{tabular}{c c c c c c c c c c}
\multicolumn{10}{c}{LS-CF for the Ball Based on Algebraic Polynomials} \\ \hline
\multicolumn{3}{c}{} & \multicolumn{3}{c}{$\omega \equiv 1$} & & \multicolumn{3}{c}{$\omega(\boldsymbol{x}) = \| \boldsymbol{x} \|_2^{1/2}$} \\ \hline
$d$ &\multicolumn{2}{c}{} & Halton & Sobol & random & & Halton & Sobol & random \\ \hline
$2$ & s & & 1.8e-0 & 1.5e-0 & 1.5e-0
& & 1.5e-0 & 1.3e-0 & 1.8e-0 \\
& C & & 1.7e-1 & 5.5e-1 & 9.5e-1
& & 5.0e-1 & 1.1e-0 & 3.0e-1 \\ \hline
$3$ & s & & 1.0e-1 & 1.3e-0 & 1.3e-0
& & 1.0e-1 & 1.4e-0 & 1.2e-0 \\
& C & & 4.4e-0 & 9.7e-1 & 2.4e-0
& & 5.7e-0 & 7.7e-1 & 3.0e-0 \\ \hline
\\
\end{tabular}
%
\begin{tabular}{c c c c c c}
\multicolumn{6}{c}{LS-CF on the Two-Dimensional Cube, $\omega \equiv 1$} \\ \hline
$\mathcal{F}_K(\Omega)$ & \multicolumn{2}{c}{} & Halton & Sobol & random \\ \hline
algebraic polynomials
& s & & 1.9e-0 & 1.2e-0 & 1.2e-0 \\
& C & & 9.9e-2 & 1.2e-0 & 3.4e-0 \\ \hline
trigonometric polynomials
& s & & 1.2e-0 & 1.4e-0 & 1.3e-0 \\
& C & & 1.3e-0 & 7.1e-1 & 1.3e-0 \\ \hline
cubic PHS-RBFs
& s & & 9.9e-1 & 1.3e-0 & 1.6e-0 \\
& C & & 2.0e-0 & 5.3e-1 & 6.0e-1 \\ \hline
\bottomrule
\end{tabular}
\caption{LS fit for the parameters $C$ and $s$ in the model $N = C K^s$.}
\label{tab:LS-fit}
\end{table}
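The parameters in \cref{tab:LS-fit} stem from a linear LS fit in log-log space, since the model $N = C K^s$ becomes $\log N = \log C + s \log K$. A minimal sketch of such a fit, using synthetic data in place of the actual $(K, N)$ pairs from the experiments:

```python
# Fit N = C * K^s by ordinary least squares on log N = log C + s * log K.
import math

def fit_power_law(K_vals, N_vals):
    """Return (C, s) from an LS line fit in log-log coordinates."""
    xs = [math.log(k) for k in K_vals]
    ys = [math.log(n) for n in N_vals]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    s = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    C = math.exp(ybar - s * xbar)
    return C, s

# Synthetic consistency check: data generated exactly from N = 2 * K^1.5,
# so the fit should recover C = 2 and s = 1.5 up to rounding.
K = [10, 20, 40, 80, 160]
N = [2 * k ** 1.5 for k in K]
C, s = fit_power_law(K, N)
```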
\subsection{Error Analysis}
Next, we provide an error analysis, considering the domain $\Omega = [-1,1]^d$ and the following Genz test functions \cite{genz1984testing} (also see \cite{van2020adaptive}):
\begin{equation}\label{eq:Genz}
\begin{aligned}
g_1(\boldsymbol{x})
& = \cos\left( 2 \pi b_1 + \sum_{i=1}^d a_i x_i \right) \quad
&& \text{(oscillatory)}, \\
g_2(\boldsymbol{x})
& = \prod_{i=1}^d \left( a_i^{-2} + (x_i - b_i)^2 \right)^{-1} \quad
&& \text{(product peak)}, \\
g_3(\boldsymbol{x})
& = \left( 1 + \sum_{i=1}^d a_i x_i \right)^{-(d+1)} \quad
&& \text{(corner peak)}, \\
g_4(\boldsymbol{x})
& = \exp \left( - \sum_{i=1}^d a_i^2 ( x_i - b_i )^2 \right) \quad
&& \text{(Gaussian)}
\end{aligned}
\end{equation}
These functions are designed to exhibit different characteristics that are challenging for numerical integration routines.
The vectors $\mathbf{a} = (a_1,\dots,a_d)^T$ and $\mathbf{b} = (b_1,\dots,b_d)^T$ contain (randomly chosen) shape and translation parameters, respectively.
For each case, the experiment was repeated $20$ times, with the vectors $\mathbf{a}$ and $\mathbf{b}$ being drawn randomly from $[0,1]^d$ in each repetition.
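For reference, the four Genz test functions above can be transcribed directly; the fixed parameter choices in the consistency check below are our own and are not the randomly drawn values used in the experiments.

```python
# Direct transcription of the four Genz test functions; a and b are the
# shape and translation parameter vectors, x the evaluation point.
import math

def g1(x, a, b):  # oscillatory
    return math.cos(2 * math.pi * b[0] + sum(ai * xi for ai, xi in zip(a, x)))

def g2(x, a, b):  # product peak
    p = 1.0
    for ai, xi, bi in zip(a, x, b):
        p *= 1.0 / (ai ** -2 + (xi - bi) ** 2)
    return p

def g3(x, a, b):  # corner peak
    d = len(x)
    return (1.0 + sum(ai * xi for ai, xi in zip(a, x))) ** -(d + 1)

def g4(x, a, b):  # Gaussian
    return math.exp(-sum(ai ** 2 * (xi - bi) ** 2
                         for ai, xi, bi in zip(a, x, b)))
```

With $\mathbf{x} = \mathbf{b} = \boldsymbol{0}$ and $\mathbf{a} = \boldsymbol{1}$, all four functions evaluate to $1$, which serves as a quick sanity check of the transcription.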
\subsubsection{Polynomial-Based Cubature Formulas on the Hyper-Cube}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim2_Halton_Steinitz}
\caption{$g_1$, Halton}
\label{fig:accuracy_Genz1_dim2_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim2_Sobol_Steinitz}
\caption{$g_1$, Sobol}
\label{fig:accuracy_Genz1_dim2_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim2_random_Steinitz}
\caption{$g_1$, random}
\label{fig:accuracy_Genz1_dim2_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim2_Halton_Steinitz}
\caption{$g_2$, Halton}
\label{fig:accuracy_Genz2_dim2_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim2_Sobol_Steinitz}
\caption{$g_2$, Sobol}
\label{fig:accuracy_Genz2_dim2_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim2_random_Steinitz}
\caption{$g_2$, random}
\label{fig:accuracy_Genz2_dim2_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim2_Halton_Steinitz}
\caption{$g_3$, Halton}
\label{fig:accuracy_Genz3_dim2_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim2_Sobol_Steinitz}
\caption{$g_3$, Sobol}
\label{fig:accuracy_Genz3_dim2_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim2_random_Steinitz}
\caption{$g_3$, random}
\label{fig:accuracy_Genz3_dim2_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_dim2_Halton_Steinitz}
\caption{$g_4$, Halton}
\label{fig:accuracy_Genz4_dim2_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_dim2_Sobol_Steinitz}
\caption{$g_4$, Sobol}
\label{fig:accuracy_Genz4_dim2_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_dim2_random_Steinitz}
\caption{$g_4$, random}
\label{fig:accuracy_Genz4_dim2_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for Genz' test functions of stable high-order LS-CFs and the corresponding positive interpolatory CFs (obtained by subsampling) compared to other CFs.
The LS- and interpolatory CFs are based on multivariate polynomials of increasing total degree ($\mathcal{F}_K(\Omega) = \mathbb{P}_m(\R^2)$).
All tests are performed for the domain $\Omega = [-1,1]^2$, the weight function $\omega \equiv 1$, and different data points.
}
\label{fig:errors_cube_1_dim2}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim3_1_Halton_Steinitz}
\caption{$g_1$, Halton}
\label{fig:accuracy_Genz1_dim3_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim3_1_Sobol_Steinitz}
\caption{$g_1$, Sobol}
\label{fig:accuracy_Genz1_dim3_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_dim3_1_random_Steinitz}
\caption{$g_1$, random}
\label{fig:accuracy_Genz1_dim3_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim3_1_Halton_Steinitz}
\caption{$g_2$, Halton}
\label{fig:accuracy_Genz2_dim3_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim3_1_Sobol_Steinitz}
\caption{$g_2$, Sobol}
\label{fig:accuracy_Genz2_dim3_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_dim3_1_random_Steinitz}
\caption{$g_2$, random}
\label{fig:accuracy_Genz2_dim3_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim3_1_Halton_Steinitz}
\caption{$g_3$, Halton}
\label{fig:accuracy_Genz3_dim3_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim3_1_Sobol_Steinitz}
\caption{$g_3$, Sobol}
\label{fig:accuracy_Genz3_dim3_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_dim3_1_random_Steinitz}
\caption{$g_3$, random}
\label{fig:accuracy_Genz3_dim3_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_dim3_1_Halton_Steinitz}
\caption{$g_4$, Halton}
\label{fig:accuracy_Genz4_dim3_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_dim3_1_Sobol_Steinitz}
\caption{$g_4$, Sobol}
\label{fig:accuracy_Genz4_dim3_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_dim3_1_random_Steinitz}
\caption{$g_4$, random}
\label{fig:accuracy_Genz4_dim3_1_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for Genz' test functions of stable high-order LS-CFs and the corresponding positive interpolatory CFs (obtained by subsampling) compared to other CFs.
The LS- and interpolatory CFs are based on multivariate polynomials of increasing total degree ($\mathcal{F}_K(\Omega) = \mathbb{P}_m(\R^3)$).
All tests are performed for the domain $\Omega = [-1,1]^3$, the weight function $\omega \equiv 1$, and different data points.
}
\label{fig:errors_cube_1_dim3}
\end{figure}
\subsubsection{Comparing Different Function Spaces}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_LS_dim2_1_Halton_Steinitz}
\caption{$g_1$, Halton}
\label{fig:accuracy_Genz1_compareSapces_LS_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_LS_dim2_1_Sobol_Steinitz}
\caption{$g_1$, Sobol}
\label{fig:accuracy_Genz1_compareSapces_LS_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_LS_dim2_1_random_Steinitz}
\caption{$g_1$, random}
\label{fig:accuracy_Genz1_compareSapces_LS_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_LS_dim2_1_Halton_Steinitz}
\caption{$g_2$, Halton}
\label{fig:accuracy_Genz2_compareSapces_LS_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_LS_dim2_1_Sobol_Steinitz}
\caption{$g_2$, Sobol}
\label{fig:accuracy_Genz2_compareSapces_LS_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_LS_dim2_1_random_Steinitz}
\caption{$g_2$, random}
\label{fig:accuracy_Genz2_compareSapces_LS_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_LS_dim2_1_Halton_Steinitz}
\caption{$g_3$, Halton}
\label{fig:accuracy_Genz3_compareSapces_LS_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_LS_dim2_1_Sobol_Steinitz}
\caption{$g_3$, Sobol}
\label{fig:accuracy_Genz3_compareSapces_LS_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_LS_dim2_1_random_Steinitz}
\caption{$g_3$, random}
\label{fig:accuracy_Genz3_compareSapces_LS_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_compareSapces_LS_dim2_1_Halton_Steinitz}
\caption{$g_4$, Halton}
\label{fig:accuracy_Genz4_compareSapces_LS_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_compareSapces_LS_dim2_1_Sobol_Steinitz}
\caption{$g_4$, Sobol}
\label{fig:accuracy_Genz4_compareSapces_LS_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_compareSapces_LS_dim2_1_random_Steinitz}
\caption{$g_4$, random}
\label{fig:accuracy_Genz4_compareSapces_LS_dim2_1_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for Genz' test functions of stable high-order LS-CFs based on different function spaces.
Considered are polynomials of increasing total degree (``LS, poly.''), trigonometric functions (``LS, trig.''), and cubic PHS-RBFs (``LS, cubic PHS-RBF'').
All tests are performed for the domain $\Omega = [-1,1]^2$, the weight function $\omega \equiv 1$, and different data points.
}
\label{fig:accuracy_compareSapces_LS}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_interpol_dim2_1_Halton_Steinitz}
\caption{$g_1$, Halton}
\label{fig:accuracy_Genz1_compareSapces_interpol_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\caption{$g_1$, Sobol}
\label{fig:accuracy_Genz1_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz1_compareSapces_interpol_dim2_1_random_Steinitz}
\caption{$g_1$, random}
\label{fig:accuracy_Genz1_compareSapces_interpol_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_interpol_dim2_1_Halton_Steinitz}
\caption{$g_2$, Halton}
\label{fig:accuracy_Genz2_compareSapces_interpol_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\caption{$g_2$, Sobol}
\label{fig:accuracy_Genz2_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz2_compareSapces_interpol_dim2_1_random_Steinitz}
\caption{$g_2$, random}
\label{fig:accuracy_Genz2_compareSapces_interpol_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_interpol_dim2_1_Halton_Steinitz}
\caption{$g_3$, Halton}
\label{fig:accuracy_Genz3_compareSapces_interpol_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\caption{$g_3$, Sobol}
\label{fig:accuracy_Genz3_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz3_compareSapces_interpol_dim2_1_random_Steinitz}
\caption{$g_3$, random}
\label{fig:accuracy_Genz3_compareSapces_interpol_dim2_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_compareSapces_interpol_dim2_1_Halton_Steinitz}
\caption{$g_4$, Halton}
\label{fig:accuracy_Genz4_compareSapces_interpol_dim2_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\caption{$g_4$, Sobol}
\label{fig:accuracy_Genz4_compareSapces_interpol_dim2_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_Genz4_compareSapces_interpol_dim2_1_random_Steinitz}
\caption{$g_4$, random}
\label{fig:accuracy_Genz4_compareSapces_interpol_dim2_1_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors of interpolatory CFs for Genz' test functions, based on different function spaces.
Considered are polynomials of increasing total degree (``interpol., poly."), trigonometric functions (``interpol., trig."), and cubic PHS-RBFs (``interpol., cubic PHS-RBF").
All tests are performed for the domain $\Omega = [-1,1]^2$, the weight function $\omega \equiv 1$, and different data points.
}
\label{fig:accuracy_compareSapces_interpol}
\end{figure}
\subsubsection{Some Other Domains and Weight Functions}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc1_dim2_cube_C2k_Halton_Steinitz}
\caption{Test case (1), Halton}
\label{fig:accuracy_misc1_dim2_cube_C2k_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc1_dim2_cube_C2k_Sobol_Steinitz}
\caption{Test case (1), Sobol}
\label{fig:accuracy_misc1_dim2_cube_C2k_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc1_dim2_cube_C2k_random_Steinitz}
\caption{Test case (1), random}
\label{fig:accuracy_misc1_dim2_cube_C2k_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc2_dim2_ball_1_Halton_Steinitz}
\caption{Test case (2), Halton}
\label{fig:accuracy_misc2_dim2_ball_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc2_dim2_ball_1_Sobol_Steinitz}
\caption{Test case (2), Sobol}
\label{fig:accuracy_misc2_dim2_ball_1_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc2_dim2_ball_1_random_Steinitz}
\caption{Test case (2), random}
\label{fig:accuracy_misc2_dim2_ball_1_random_Steinitz}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc3_dim2_ball_sqrt_Halton_Steinitz}
\caption{Test case (3), Halton}
\label{fig:accuracy_misc3_dim2_ball_sqrt_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc3_dim2_ball_sqrt_Sobol_Steinitz}
\caption{Test case (3), Sobol}
\label{fig:accuracy_misc3_dim2_ball_sqrt_Sobol_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_misc3_dim2_ball_sqrt_random_Steinitz}
\caption{Test case (3), random}
\label{fig:accuracy_misc3_dim2_ball_sqrt_random_Steinitz}
\end{subfigure}%
%
\caption{
Errors for three different test cases:
(1) The cube $[-1,1]^2$ with $\omega(x_1,x_2) = (1-x_1^2)^{1/2} (1-x_2^2)^{1/2}$ and test function $f(x_1,x_2) = \arccos(x_1) \arccos(x_2)$;
(2) The ball $\{ (x_1,x_2)^T | x_1^2 + x_2^2 \leq 1 \}$ with $\omega \equiv 1$ and test function $f(x_1,x_2) = (1 + x_1^2 + x_2^2 )^{-1} + \sin(x_1)$;
(3) The ball $\{ (x_1,x_2)^T | x_1^2 + x_2^2 \leq 1 \}$ with $\omega(x_1,x_2) = (x_1^2 + x_2^2)^{1/4}$ and test function $f(x_1,x_2) = (1 + x_1^2 + x_2^2 )^{-1} + \sin(x_1)$.
Compared are polynomial-based LS- and interpolatory CFs as well as a (Q)MC method using the same data points as the LS-CF, and a product Legendre rule.
}
\label{fig:accuracy_misc}
\end{figure}
\subsection{A Non-Standard Domain}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\begin{center}
\begin{tikzpicture}[domain = -2.5:4.5, scale=.65, line width=0.6pt]
\draw[->,>=stealth] (-2.5,0) -- (4.5,0) node[below] {$x$};
\draw[->,>=stealth] (0,-2.5) -- (0,4.5) node[right] {$y$};
\draw (-2,0.1) -- (-2,-0.1) node [below] {-1};
\draw (2,0.1) -- (2,-0.1) node [below] {1};
\draw (4,0.1) -- (4,-0.1) node [below] {2};
\draw (-0.1,-2) -- (0.1,-2) node [left] {-1 \ };
\draw (-0.1,2) -- (0.1,2) node [left] {1 \ };
\draw (-0.1,4) -- (0.1,4) node [left] {2 \ };
\draw[blue] (0,0) circle [radius = 2];
\draw[blue] (2,2) rectangle (4,4);
\draw (0,2) node (A) {};
\draw (2,2.5) node (B) {};
\draw[blue] (1,3) node (C) {\Large $\Omega$};
\draw (A) to[out=60, in=180] (C);
\draw (B) to[out=180, in=0] (C);
\end{tikzpicture}
\end{center}
\caption{Domain}
\label{fig:nonstandard_domian}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_nonstandard_dim2_combi_1_Halton_Steinitz}
\caption{Halton points}
\label{fig:accuracy_nonstandard_dim2_combi_1_Halton_Steinitz}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/accuracy_nonstandard_dim2_combi_1_Sobol_Steinitz}
\caption{Sobol points}
\label{fig:accuracy_nonstandard_dim2_combi_1_Sobol_Steinitz}
\end{subfigure}%
%
\caption{
A two-dimensional nonstandard domain and errors for $\omega \equiv 1$ and $f(x_1,x_2) = \exp( -x_1^2 - x_2^2 )$.
The LS- and corresponding interpolatory CFs were based on algebraic polynomials of an increasing total degree.
}
\label{fig:nonstandard}
\end{figure}
\subsection{Comparison for Different Subsampling Methods to Construct Interpolatory Cubature Formulas}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsTime_dim2_Halton}
\caption{Efficiency, Halton}
\label{fig:comparison_efficiency_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsTime_dim2_Sobol}
\caption{Efficiency, Sobol}
\label{fig:comparison_efficiency_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsTime_dim2_random}
\caption{Efficiency, random}
\label{fig:comparison_efficiency_random}
\end{subfigure}%
\\
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsError_dim2_Halton}
\caption{Exactness, Halton}
\label{fig:comparison_error_Halton}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsError_dim2_Sobol}
\caption{Exactness, Sobol}
\label{fig:comparison_error_Sobol}
\end{subfigure}%
~
\begin{subfigure}[b]{0.33\textwidth}
\includegraphics[width=\textwidth]{%
plots/compare_interpolCFs_NvsError_dim2_random}
\caption{Exactness, random}
\label{fig:comparison_error_random}
\end{subfigure}%
%
\caption{A comparison between different subsampling methods to construct a positive interpolatory CF from a positive LS-CF.
All tests are performed for the domain $\Omega = [-1,1]^2$, the weight function $\omega \equiv 1$, and different data points.
Moreover, the CFs are exact for 2-dimensional polynomials up to an (increasing) total degree $m$, i.\,e., $\mathcal{F}_K(\Omega) = \mathbb{P}_m(\R^2)$.}
\label{fig:subsampling_comparison}
\end{figure}
\section{Concluding Thoughts}
\label{sec:summary}
We presented a simple procedure to construct provably positive and exact CFs in a general multi-dimensional setting.
It was proved that, under relatively mild restrictions, such CFs always exist---and can be determined by the method of LS---when a sufficiently large number of equidistributed data points is used.
This extends previous results on LS formulas in one dimension \cite{wilson1970necessary,wilson1970discrete,huybrechs2009stable,glaubitz2020stableQRs} as well as in multiple dimensions \cite{glaubitz2020stableCFs} (the latter restricted to function spaces of algebraic polynomials).
At the same time, our findings can also be seen as an extension of the stable high-order randomized CFs discussed in \cite{migliorati2020stable}, which are positive and exact with a high probability, into a deterministic framework.
Furthermore, similarities with certain methods for variance reduction in the context of MC and QMC methods \cite{nakatsukasa2018approximate} should be noted.
Indeed, the weighted LS-CF discussed in the present work can be interpreted as high-order corrections of (Q)MC methods.
Our results indicate that such high-order corrections are ensured to yield positive formulas if a sufficiently large number of equidistributed (e.\,g.\ low-discrepancy) data points is used.
Finally, it is possible to combine the provably positive and exact LS-CFs with subsampling methods, thereby constructing positive interpolatory CFs.
While such CFs are already predicted by Tchakaloff's theorem (see \cref{thm:Tchakaloff}), it is in general not clear how the data points should be chosen.
Our findings indicate that some of the positive interpolatory CFs predicted by Tchakaloff's theorem (they are not unique) are supported on sets of equidistributed points.
This can be considered an important---yet sometimes missing---justification and design criterion (for the data points) in CFs constructed via optimization strategies, including NNLS as well as linear programming approaches.
In a forthcoming work, we will address to what extent the restrictions \ref{item:restr_domain}, \ref{item:restr_weight}, and \ref{item:restr_space} can be relaxed.
In particular, we are interested in mitigating the restriction that constants have to be included in the finite-dimensional function spaces.
This would allow us to also apply the proposed LS-CF to broader function spaces, including RBF approximations that are not augmented with a polynomial (not even a constant).
The existence of such CFs is connected to summation-by-parts operators \cite{kreiss1974finite,kreiss1977existence,strand1994summation,hicken2013summation,svard2014review,fernandez2014review}, which were recently conjectured \cite{glaubitz2021stabilizing,glaubitz2021towards} to be a potential key to a systematic development of energy stable RBF methods for conservation laws.
It would also be of interest whether combining the LS-CF with appropriate function spaces, such as separable (low-rank) \cite{beylkin2009multivariate,chevreuil2015least} or sparse approximations \cite{chkifa2015breaking,cohen2015approximation,adcock2017compressed}, might remedy the curse of dimensionality for these methods.
Although we believe the findings presented here to be encouraging, further research in this direction is needed and certainly welcome.
\section{Steinitz' Method}
\label{app:Steinitz}
We are given a $K$-dimensional function space $\mathcal{F}_K(\Omega)$ and a positive, $\mathcal{F}_K(\Omega)$-exact CF,
\begin{equation}\label{eq:pos-CF}
C_N[f] = \sum_{n=1}^N w_n f(\mathbf{x}_n).
\end{equation}
Here, $N$ denotes the number of data points in a generic sense.
If $N> K$, one successively reduces the number of data points until $N \leq K$ by going over to an appropriate subset of data points, while preserving positivity and $\mathcal{F}_K(\Omega)$-exactness.
Recall that the vector space $\mathcal{F}_K(\Omega)$ has dimension $K$, and so does its algebraic dual space (the space of all linear functionals defined on $\mathcal{F}_K(\Omega)$).
In particular, among the $N$ linear functionals
\begin{equation}
L_n[f] = f(\mathbf{x}_n), \quad n=1,\dots,N,
\end{equation}
at most $K$ are linearly independent.
That is, if $N > K$, there exists a vector of coefficients $\mathbf{a} = (a_1, \dots, a_N)^T$ such that
\begin{equation}\label{eq:vec-a}
a_1 L_1[f] + \dots + a_N L_N[f] = 0 \quad \forall f \in \mathcal{F}_K(\Omega)
\end{equation}
and $a_n > 0$ for at least one $n$.
Let ${\sigma= \max_{1 \leq n \leq N} a_n/w_n}$.
Then, $\sigma > 0$, $\sigma w_n - a_n \geq 0$ for all $n$, and $\sigma w_n - a_n = 0$ for at least one $n$.
From \cref{eq:vec-a} one therefore has
\begin{equation}
I[f] = \frac{\sigma w_1 - a_1}{\sigma} L_1[f] + \dots + \frac{\sigma w_N - a_N}{\sigma} L_N[f]
\quad \forall f \in \mathcal{F}_K(\Omega).
\end{equation}
Note that one of the coefficients is zero and, together with the corresponding linear functional (data point), can be removed.
Hence, on $\mathcal{F}_K(\Omega)$, the integral $I$ can be expressed as a linear combination of not more than $N-1$ of the linear functionals $L_1,\dots,L_N$ with positive coefficients.
Iterating this process, one finally arrives at a positive interpolatory CF with $N \leq K$ data points that is exact for all $f \in \mathcal{F}_K(\Omega)$.
\begin{algorithm}[tb]
\caption{The Steinitz Method}
\label{algo:Steinitz}
\begin{algorithmic}[1]
\While{$K < N$}
\State{Compute $\Phi = \Phi(X)$ and $\text{null}(\Phi)$}
\State{Determine $\mathbf{a} \in \text{null}(\Phi) \setminus \{\mathbf{0}\}$ s.\,t.\ $a_n > 0$ for at least one $n$ (see \cref{rem:a})}
\State{Compute $\sigma = \max_{n} a_n/w_n$}
\State{Overwrite the cubature weights: $w_n = (\sigma w_n - a_n)/\sigma$}
\State{Remove all zero weights as well as the corresponding data points}
\State{$N = N - \#\{ \, w_n \mid w_n = 0, \ n=1,\dots,N \, \}$}
\EndWhile
\end{algorithmic}
\end{algorithm}
An algorithmic description of the Steinitz method is provided in \cref{algo:Steinitz}.
Thereby, ${\text{null}(\Phi) = \{ \mathbf{a} \in \R^N \mid \Phi \mathbf{a} = \mathbf{0} \}}$ denotes the null space of the matrix $\Phi$.
\begin{remark}\label{rem:a}
Note that \cref{eq:vec-a} is equivalent to $\mathbf{a} \in \text{null}(\Phi)$.
Essentially every ${\mathbf{a} \in \text{null}(\Phi) \setminus \{\mathbf{0}\}}$ can be used (if $a_n \leq 0$ for all $n=1,\dots,N$, one can go over to $-\mathbf{a}$).
Moreover, it was shown in \cref{lem:solution-space} that $\text{null}(\Phi)$ has dimension $N-K$.
Hence, as long as $N > K$, such a vector of coefficients $\mathbf{a}$ can always be found.
\end{remark}
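For concreteness, the elimination loop of \cref{algo:Steinitz} can be sketched in a few lines of NumPy. This is a minimal illustration rather than the implementation used for the experiments; the SVD-based computation of a null-space vector, the function name, and the zero-weight tolerance are our own choices.

```python
import numpy as np

def steinitz_subsample(X, w, basis, tol=1e-12):
    """Reduce a positive, F_K-exact CF with N > K nodes to one with at
    most K nodes, following the Steinitz elimination described above.

    X     : (N,) or (N, d) array of data points
    w     : (N,) array of positive cubature weights
    basis : list of K callables spanning F_K(Omega)
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    K = len(basis)
    while len(w) > K:
        # Phi[k, n] = phi_k(x_n); the condition (eq:vec-a) reads Phi a = 0
        Phi = np.array([[phi(x) for x in X] for phi in basis])
        a = np.linalg.svd(Phi)[2][-1]    # some nonzero vector in null(Phi)
        if a.max() <= 0:                 # ensure a_n > 0 for at least one n
            a = -a
        sigma = np.max(a / w)            # sigma > 0 by construction
        w = (sigma * w - a) / sigma      # new weights: >= 0, at least one is 0
        keep = w > tol                   # drop the zeroed node(s)
        X, w = X[keep], w[keep]
    return X, w
```

Starting, for instance, from a composite Simpson rule on $[-1,1]$ (positive and exact for quadratic polynomials), the loop returns a positive rule on at most three nodes with the same first three moments.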
\section*{Acknowledgements}
\bibliographystyle{siamplain}
% https://arxiv.org/abs/2106.06878
% Probabilistic Group Testing with a Linear Number of Tests
\begin{abstract}
In probabilistic nonadaptive group testing (PGT), we aim to characterize the number of pooled tests necessary to identify a random $k$-sparse vector of defectives with high probability. Recent work has shown that $n$ tests are necessary when $k = \omega(n/\log n)$. It is also known that $O(k \log n)$ tests are necessary and sufficient in other regimes. This leaves open the important sparsity regime where the probability of a defective item is $\sim 1/\log n$ (or $k = \Theta(n/\log n)$), where the number of tests required is linear in $n$. In this work we aim to exactly characterize the number of tests in this sparsity regime. In particular, we seek to determine the number of defectives $\lambda(\alpha)n / \log n$ that can be identified if the number of tests is $\alpha n$. In the process, we give upper and lower bounds on the exact point at which individual testing becomes suboptimal, and the use of a carefully constructed pooled test design is beneficial.
\end{abstract}
\section{Introduction}
Group testing is a sparse recovery problem where we aim to recover a small set of $k$ ``defective'' items from among $n$ total items using pooled tests. Originally introduced in the context of testing blood samples for diseases where multiple samples can be combined together \cite{dorfman1943detection}, it has since seen a variety of applications, including recently COVID-19 testing (see for instance \cite{yelin2020evaluation,
gollier2020group, eberhardt2020multi}, though there are many more such works).
More formally, we let $\bfx \in \set{0, 1}^n$ represent the vector of items where a 1 indicates defectivity. A test vector $\bfm \in \set{0, 1}^n$ specifies a subset of items to be tested; the result is
\begin{equation*}
y = \bigvee_{i:\bfm_i = 1} \bfx_i,
\end{equation*}
the logical OR of the entries of $\bfx$ specified by the test.
We focus here on the nonadaptive setting, where all tests must be specified in advance of seeing any results. In this setting we write $M \in \set{0,1}^{T \times n}$ for the matrix of $T$ tests (rows), and $\bfy$ for the vector of all test results. We assume the vector of defectives $\bfx$ is generated randomly, the details of which are discussed in \cref{sec:priors}.
This is known as \emph{probabilistic group testing} (PGT), meaning that we are interested in determining how many tests are necessary for the probability of error (for some particular decoding method) to approach 0 as $n$ becomes large. This is in contrast to \emph{combinatorial group testing}, where we insist that any set of defectives is recoverable (i.e., probability of error is 0 for any $n$).
In general, we are interested in the question of how the number of tests $T$ must scale as a function of both $n$ and $k$. This behavior can be quite different depending on the scaling of $k$ relative to $n$, so the problem is often broken down further into different ``regimes'' for $k$. In this work we prove both lower and upper bounds on $T$ for the specific regime $k = \Theta\left(\frac{n}{\log n}\right)$, or equivalently when the probability of an item being defective is $\Theta\left(\frac1{\log n}\right)$; this regime has traditionally seen little attention, but recent work has shown that this is exactly the scaling of $k$ for which $\Omega(n)$ tests are necessary. As $n$ tests trivially suffice by testing each item individually, it is of interest to determine the exact crossover point at which individual testing becomes suboptimal, and this crossover happens to lie in the $k = \Theta\left(\frac{n}{\log n}\right)$ regime.
\subsection{Related Work}
\label{sec:related_work}
Much of the initial work on group testing concerned the zero-error combinatorial setting, where a single test matrix must correctly classify every possible defective vector. The literature on this variant is vast -- see for instance the book of Du and Hwang \cite{du2000combinatorial}. We focus the rest of this section on the low-error probabilistic setting.
PGT has traditionally been split into two rather different regimes: the ``sparse'' regime, where the total number of defectives is $O(n^\theta)$ for some $0 \leq \theta < 1$, and the ``linear'' regime where the total number of defectives is $\beta n$ for some constant $\beta < 1$.
In both regimes, the folklore ``counting bound'' (see \cite{chan2011non} for instance) shows that $\Omega(k \log \frac{n}{k})$ measurements are necessary. In the sparse regime, this is equivalent to $\Omega(k \log n)$, and order-optimal randomized constructions have been known for some time \cite{sebHo1985two, atia2012boolean}. In the linear regime, the counting bound implies $\Omega(n)$ measurements are necessary, and trivially $n$ suffice by testing items individually.
More recently, explicit constructions of matrices for PGT have been studied. Mazumdar \cite{mazumdar2015nonadaptive} gave explicit constructions requiring $O(k \log^2 n / \log k)$ measurements. Follow up works of Barg and Mazumdar~\cite{barg2017group}, and Inan et al. \cite{inan2019optimality}, show order-optimal or near-optimal results.
Another direction has been to develop constructions with good decoding properties. PGT schemes are considered efficiently decodable if they require $O(T)$ time to decode, where $T$ is the total number of measurements; several works \cite{cai2017efficient, vem2017group, lee2019saffron} gave efficiently decodable schemes which were not quite order-optimal, before Bondorf et al. \cite{bondorf2020sublinear} gave the first order-optimal efficiently decodable construction. A very recent result \cite{inan2020strongly} shows that we can even have explicit constructions which are both order-optimal and efficiently decodable.
The last line of work we discuss in PGT, and the most relevant to our work here, has been to go beyond order-optimality and determine the precise constants involved in various regimes. Improvements on the counting bound were proposed in \cite{agarwal2018novel}, and it was subsequently shown that individual testing is optimal in the linear regime~\cite{aldridge2018individual}.
In the sparse regime the characterization of exact constants results in a more complex picture, but a series of works \cite{scarlett2016limits, aldridge2017capacity, johnson2018performance, coja2020optimal} have narrowed down the constants to the point that the lower and upper bounds are matching for any $\theta \in [0, 1)$ (where $k = n^\theta$). We direct the interested reader to the recent survey of Aldridge et al. \cite{aldridge2019group}.
However, in between these two regimes is another regime which has seen much less study, namely when the total number of defectives is $n / \textrm{poly}(\log n)$ (\hspace{1sp}\cite{bay2020optimal} refers to these regimes as ``mildly sublinear''). In particular when $k = \Theta(n / \log n)$ the counting bound implies only that $\Omega(n \log \log n/ (\log n))$ measurements are necessary, but a method using $o(n)$ measurements proved elusive. In very recent work, Bay et al. \cite{bay2020optimal} showed that in fact the lower bound can be improved to $\Omega(k \log n)$ in this regime, and thus known constructions are order-optimal.
In light of the asymptotic improvement to the lower bound for $k = n / \textrm{poly}(\log n)$ \cite{bay2020optimal}, we feel it is a natural next step to try and nail down the constants in this regime. While we focus on the particular case $k = \lambda n / \log n$ in this paper, we expect our results should extend readily to other mildly sublinear regimes.
\subsection{Priors for PGT}
\label{sec:priors}
There are two commonly used ``priors'' over the random defective vector involved in PGT. Under the \emph{combinatorial prior}, the number of defectives is fixed to be $k$, and the defective set is drawn uniformly from the $\binom{n}{k}$ possibilities. Under the \emph{i.i.d. prior}, we instead fix a defective probability $p$, and each item is defective independently with probability $p$.
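Both priors are straightforward to sample from; the following sketch (the function name and interface are ours) draws a defectivity vector under either prior.

```python
import numpy as np

def sample_defectives(n, prior, p=None, k=None, rng=None):
    """Draw a 0/1 defectivity vector of length n under the given prior."""
    rng = np.random.default_rng(rng)
    x = np.zeros(n, dtype=int)
    if prior == "iid":
        # each item defective independently with probability p
        x = (rng.random(n) < p).astype(int)
    elif prior == "combinatorial":
        # exactly k defectives, uniform over all (n choose k) subsets
        x[rng.choice(n, size=k, replace=False)] = 1
    else:
        raise ValueError(f"unknown prior: {prior}")
    return x
```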
Conveniently, it turns out that our choice of prior makes little difference in the number of tests necessary (assuming we take $k = pn$). The following result from the survey of Aldridge et al. \cite{aldridge2019group} formalizes this notion.
\begin{theorem}[\hspace{1sp}\cite{aldridge2019group} Thm. 1.7]
\label{thm:comb_prior_conversion}
Suppose a sequence of test designs and decoding method has probability of error going to 0 as $n$ goes to infinity under the combinatorial prior with $k = k_0(1 + o(1))$, where $k_0 = o(n)$ and $k_0$ goes to infinity with $n$. Then the same sequence of test designs and decoding method has error going to 0 as $n$ goes to infinity under the i.i.d. prior with $p = k_0 / n$.
\end{theorem}
A similar result holds for converting the opposite direction. In this paper we will primarily use the i.i.d. prior, except for our upper bound in \cref{sec:constant_ub} which relies on preexisting work under the combinatorial prior. For ease of reference to that work we will use the combinatorial prior there, and \cref{thm:comb_prior_conversion} tells us we can convert our main result (\cref{thm:constant_ub}) to an equivalent result under the i.i.d. prior.
\subsection{Notation}
We will use the following notational conventions throughout:
\begin{itemize}
\item $M$ is a test matrix with rows corresponding to tests and columns corresponding to items.
\item $T$ is the number of tests (rows of $M$).
\item $n$ is the total number of items (columns of $M$).
\item $k$ is the total number of defectives.
\item $p$ is the defective probability under the i.i.d. prior.
\item $\log$ is the logarithm base $e$.
\item $\epsilon$, $\delta$, $\gamma$, $\lambda$, $\alpha$, $\nu$ are constants.
\end{itemize}
Our results show that when $p = \lambda / \log n$, it is sufficient to have $T/n \geq \min(1,\frac{\lambda}{\log^2 2})$, whereas $T/n \geq \frac{\lambda}{\lambda+\log^2 2}$ is necessary. Note that these two quantities are always within a factor of 2 of each other, and approach equality for very small or large values of $\lambda$.
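The factor-of-2 claim is easy to check numerically; the sketch below evaluates both bounds (with $c = \log^2 2$, natural logarithm) over a wide range of $\lambda$.

```python
import numpy as np

c = np.log(2.0) ** 2                     # log^2 2, natural logarithm
lam = np.logspace(-3, 3, 2001)           # lambda over six orders of magnitude
sufficient = np.minimum(1.0, lam / c)    # T/n shown to be sufficient
necessary = lam / (lam + c)              # T/n shown to be necessary
ratio = sufficient / necessary
# ratio equals (lam + c)/c for lam <= c and (lam + c)/lam for lam >= c:
# always in (1, 2], approaching 1 at both extremes and 2 at lam = c
assert np.all((ratio > 1.0) & (ratio <= 2.0))
```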
\section{Upper Bound with Near-Constant Tests-Per-Item Design}
\label{sec:constant_ub}
For probabilistic group testing in the sparse regime, it has been shown that the optimal rate is achieved by so-called ``Near-Constant Tests-per-Item'' designs \cite{johnson2018performance}. These matrices are formed by drawing $\frac{\nu T}{k}$ tests uniformly with replacement per item, where $\nu$ is a small constant parameter to be optimized, $T$ is the total number of tests, and $k$ is the number of defectives (assuming the combinatorial prior).
Drawing the tests with replacement means that it is possible that the same test is drawn more than once, hence the ``near-constant'' tests-per-item, rather than constant. This simplifies the analysis as the draws are independent.
We will use this type of test matrix to prove our upper bound.
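Generating a near-constant tests-per-item matrix is simple; a sketch (the function name is ours):

```python
import numpy as np

def near_constant_design(n, T, L, rng=None):
    """Near-constant tests-per-item design: each of the n items joins L
    tests drawn uniformly at random *with replacement* from the T tests,
    so an item may end up in fewer than L distinct tests."""
    rng = np.random.default_rng(rng)
    M = np.zeros((T, n), dtype=int)
    for i in range(n):
        M[rng.integers(0, T, size=L), i] = 1
    return M
```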
\subsection{Preliminaries}
We closely follow the approach taken by Johnson et al. \cite{johnson2018performance} for the sparse regime in the following, as their argument is mostly regime-independent. The following notation will be useful in describing their approach succinctly:
\begin{itemize}
\item $\mathcal{K}$ is the set of all defective items.
\item $W^{(\mathcal{K})}$ is the number of tests containing at least one item from $\mathcal{K}$.
\item $G$ is the number of nondefective items that do not appear in any negative tests.
\end{itemize}
Our upper bound will employ the simple COMP decoding~\cite{du2000combinatorial,chan2011non}. This algorithm works by first identifying all items present in negative tests and classifying them as nondefective. Then, any remaining items are classified as defective. While simple, in most cases this algorithm has proven to be asymptotically as good as more complex decodings, and is easy to analyze.
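A direct implementation of the COMP decoder just described (a sketch; the function name is ours):

```python
import numpy as np

def comp_decode(M, y):
    """COMP decoding: every item appearing in at least one negative test
    is declared nondefective; all remaining items are declared defective."""
    M, y = np.asarray(M), np.asarray(y)
    in_negative_test = M[y == 0].sum(axis=0) > 0
    return (~in_negative_test).astype(int)
```

Note that COMP can only err in one direction: it never misses a defective, but it may declare a nondefective item defective, which is exactly the event counted by $G$.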
First we present two useful lemmas borrowed from \cite{johnson2018performance}. The first in plain language states that given a near-constant tests-per-item design, the number of tests including at least one defective concentrates tightly around its mean.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 1]
\label[lemma]{thm:wk_concentration}
Let $|\mathcal{K}| = k$, and fix constants $\alpha > 0$, $\epsilon \in (0, 1)$. Suppose we form a test design of $T$ tests by drawing $L = \frac{\alpha T}{k}$ tests uniformly at random with replacement for each of $n$ items. Then (assuming $T$ goes to infinity with $n$) the following holds:
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\alpha})T| \geq \delta] \leq 2 \exp \left( -\frac{\delta^2}{\alpha T} \right).
\end{equation*}
\end{lemma}
The next lemma, again from \cite{johnson2018performance}, characterizes the distribution of $G$ conditioned on a fixed value of $W^{(\mathcal{K})}$.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 5]
\label[lemma]{thm:g_dist}
Let $L = \frac{\nu T}{k}$ be the number of tests drawn for each item in a near-constant tests-per-item design. Then
\begin{equation*}
G \ | \ (W^{(\mathcal{K})} = x) \sim \mathrm{Bin} (n-k, (x/T)^L).
\end{equation*}
\end{lemma}
\subsection{Combining Together}
Now we are ready to prove the upper bound, namely that we can succeed with high probability using about $n \cdot \lambda/\log^2 2$ tests.
\begin{theorem}
\label{thm:constant_ub}
Let $k = \frac{\lambda n}{\log n}$ be the total number of defectives. Suppose our test design is near-constant tests-per-item,
\begin{equation*}
T = (1 + \epsilon) \frac{\lambda}{\log^2 2} n
\end{equation*}
for a constant $\epsilon > 0$, and we draw $L = \frac{T \log 2}{k}$ tests per item with replacement. Then for sufficiently large $n$, the success probability of the COMP decoding is $1 - o(1)$.
\end{theorem}
\begin{proof}
As $G$ is the number of nondefectives not appearing in any negative tests, we know COMP succeeds if and only if $G = 0$. Then the main idea here is that we will use the equation
\begin{equation}
\label{eq:cond_prob_sum}
\Pr[G = 0] = \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \Pr[G=0 | W^{(\mathcal{K})} = x],
\end{equation}
applying \Cref{thm:g_dist} when $x \leq (1/2 + \delta) T$, and showing that the probability of $x > (1/2 + \delta) T$ goes to 0 for large $n$ using \Cref{thm:wk_concentration}.
By \Cref{thm:g_dist},
\begin{equation*}
\label{eq:success_conditional_exact}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] = \left( 1 - \left( \frac{x}{T} \right)^L \right)^{n-k}.
\end{equation*}
As this function is decreasing in $x$, for all $x \leq (1/2 + \delta)T$, we have
\begin{equation*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] \geq \Pr[G = 0 \ | \ W^{(\mathcal{K})} = (1/2 + \delta)T]
= (1 - (1/2 + \delta)^L)^{n-k}.
\end{equation*}
By definition
\begin{equation*}
L = \frac{T \log 2}{k} = \frac{(1+\epsilon) \lambda n \log 2 \log n}{(\log 2)^2 \lambda n} = \frac{(1+\epsilon) \log n}{\log 2},
\end{equation*}
so this gives
\begin{equation*}
\frac{1}{2^L} = \left( \frac{1}{2^{1 / \log 2}} \right)^{(1+\epsilon) \log n} = \left(\frac{1}{e^{\log n}} \right)^{(1+\epsilon)} = \frac{1}{n^{1+\epsilon}}.
\end{equation*}
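This identity rests on $2^{1/\log 2} = e$; a quick numerical check (the values of $n$ and $\epsilon$ below are arbitrary):

```python
import math

n, eps = 10_000.0, 0.3
L = (1 + eps) * math.log(n) / math.log(2)                # L = (1+eps) log n / log 2
assert math.isclose(2.0 ** (1.0 / math.log(2)), math.e)  # 2^(1/log 2) = e
assert math.isclose(2.0 ** -L, n ** -(1.0 + eps))        # 1/2^L = n^-(1+eps)
print(2.0 ** -L)
```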
Taking $\delta < 1/(2L)$, consecutive terms in the binomial expansion of $(1/2 + \delta)^L$ decrease (each ratio of consecutive terms is at most $2 \delta L < 1$), so the maximum term is $1/2^L$, and we have
\begin{equation*}
(1/2 + \delta)^L < \frac{L+1}{2^L} = \frac{L+1}{n^{1+\epsilon}} < \frac{1}{n^{1+\epsilon/2}},
\end{equation*}
where in the last step we use the fact that $L = \Theta(\log n)$ and the assumption regarding large $n$. Then we have for $x \leq (1/2 + \delta)T$
\begin{equation*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] > \left(1 - \frac{1}{n^{1+\epsilon/2}} \right)^{n-k}
\geq 1 - \frac{n-k}{n^{1+\epsilon/2}}
> 1 - \frac{n}{n^{1+\epsilon/2}}
= 1 - \frac{1}{n^{\epsilon/2}},
\end{equation*}
where the second step is Bernoulli's inequality $(1-y)^m \geq 1 - my$; this lower bound goes to 1 for large $n$.
Plugging this back into \eqref{eq:cond_prob_sum},
\begin{align}
\Pr[G = 0] =& \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
\geq& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
>& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \Pr[W^{(\mathcal{K})} \leq (1/2 + \delta)T] \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \left( 1 - \Pr[W^{(\mathcal{K})} > (1/2 + \delta)T]\right) \label{eq:final_succ_prob}.
\end{align}
Finally, we apply \Cref{thm:wk_concentration} to the latter probability with $\alpha = \log 2$ (and $\delta T$ in place of the $\delta$ in the original lemma statement), which tells us
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\log 2})T| \geq \delta T] \leq 2 \exp \left( -\frac{(\delta T)^2}{T \log 2} \right),
\end{equation*}
and thus
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (T/2)| \geq \delta T] \leq 2 \exp \left( -\frac{\delta^2 T}{\log 2} \right)
= \exp \left( - \Theta \left( \frac{n}{\log^2 n}\right)\right),
\end{equation*}
which clearly goes to 0 for large $n$, since $\delta = \Theta(1/\log n)$ and $T = \Theta(n)$. Thus the success probability in \eqref{eq:final_succ_prob} goes to 1, concluding the proof.
\end{proof}
\section{Determining the Constant in the Lower Bound}
The recent work of Bay et al. \cite{bay2020optimal} shows that a lower bound of $\min(n, \Omega(k \log n))$ tests holds for any $k$. In the case of interest here, where $k = \lambda n / \log n$, this tells us we need $\Omega(n)$ measurements. In this section we determine the necessary number of measurements more precisely, up to the constant factor.
The argument of \cite{bay2020optimal} works by demonstrating that with significantly fewer than $k \log n$ measurements, there must exist a fairly large number of ``totally disguised'' items; that is, items for which no test gives us any information about whether or not they are defective. If we have no information about these items' defectivity, we cannot do better than guessing on each one.
More specifically, they give a procedure that constructs a set of items $W$ with the following properties:
\begin{enumerate}
\item Each item in $W$ is totally disguised with probability at least $\mathcal{L}^*$.
\item The events that each item in $W$ is totally disguised are independent of each other.
\end{enumerate}
Since these events are independent, we can combine bounds on $|W|$ and $\mathcal{L}^*$ with simple concentration inequalities to get a lower bound on the number of totally disguised items.
For our lower bound we will modify their procedure slightly, as described below.
\subsection{How Many Disguised Items are Needed}
Since disguised items are defective independently with probability $p$, and for us $p < \frac{1}{2}$, the best any algorithm
can do on a disguised item is to predict that it is nondefective. This guess is correct with probability $1 - p$. Thus if the total number of disguised items is $D$, the success probability of any algorithm is at most
\begin{equation*}
(1 - p)^D \leq \exp(-p D).
\end{equation*}
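The bound $(1 - p)^D \leq \exp(-pD)$ is the standard inequality $1 - y \leq e^{-y}$; a quick numerical spot check (grid values are arbitrary):

```python
import math

# (1 - p)**D <= exp(-p * D) follows from log(1 - p) <= -p for p in [0, 1).
for p in [0.01, 0.1, 0.3, 0.49]:
    for D in [1, 10, 100, 10_000]:
        assert (1.0 - p) ** D <= math.exp(-p * D)
print("inequality holds on the sampled grid")
```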
\subsection{Lower Bounding $|W|$}
We will closely follow the method of \cite{bay2020optimal}, but will first slightly redefine their notion of ``very-present'' items: items that appear in far more tests than the average item, which will be discarded in a preprocessing step before constructing $W$.
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 4, modified]
\label[lemma]{thm:very_present}
Define an item to be \emph{very-present} if it appears in more than $t_{max} = \log^3 n$ tests. If $T \leq n$ and no test contains more than $z \log n$ items, then the number of very-present items is
\begin{equation*}
n_{vp} \leq \frac{nz}{\log^2 n}.
\end{equation*}
\end{lemma}
\begin{proof}
Consider the total number of (item, test) pairs $P$. From the assumptions, we have $P \leq Tz \log n \leq nz \log n$. Also, by the definition of very-present items, we have $n_{vp} \log^3 n \leq P$. Then $n_{vp} \log^3 n \leq nz \log n$ implies the result.
\end{proof}
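The double-counting step in this proof can be exercised on a concrete random design (the parameters below are arbitrary and ours, chosen only to respect the lemma's cap on test sizes):

```python
import math
import random

rng = random.Random(1)
n, T = 500, 400
z_log_n = 40        # cap: no test holds more than this many items
t_max = 25          # "very-present" threshold for this toy instance

# Build a random design respecting the cap, then double-count (item, test) pairs.
tests = [rng.sample(range(n), rng.randrange(1, z_log_n + 1)) for _ in range(T)]
P = sum(len(t) for t in tests)
appearances = [0] * n
for t in tests:
    for i in t:
        appearances[i] += 1
n_vp = sum(a > t_max for a in appearances)

assert P <= T * z_log_n      # each test contributes at most z_log_n pairs
assert n_vp * t_max <= P     # each very-present item contributes > t_max pairs
print(n_vp, P)
```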
For bounding $|W|$, we have the following:
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 6 Pf.]
\label[lemma]{thm:w_lb}
Suppose $W$ is constructed as described in Procedure 1 of \cite{bay2020optimal}, and furthermore we modify their stopping rule so that we stop when fewer than $\frac{\epsilon n}{1 + \gamma}$ items remain rather than fewer than $\frac{\epsilon n}{2}$, where $\gamma$ is a small positive constant. Then
\begin{equation}
\label{eq:w_lb}
|W| \geq \frac{(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n}{z^2 \log^8 n},
\end{equation}
where $z = \frac{2}{\log (1/(1-p))}$.
\end{lemma}
\begin{proof}
From the proof of \cite{bay2020optimal} Lemma 6, we have that the method of constructing $W$ yields the inequality
\begin{equation*}
\frac{\epsilon n}{1 + \gamma} \geq \epsilon n - n_{vp} - |W| t_{max}^2 z^2 \log^2 n,
\end{equation*}
where $n_{vp}$ and $t_{max}$ are as defined in our \Cref{thm:very_present}. Rearranging this yields
\begin{equation*}
|W| \geq \frac{\gamma \epsilon n/(1+\gamma) - n_{vp}}{t_{max}^2 z^2 \log^2 n},
\end{equation*}
and substituting the values from \Cref{thm:very_present} for $n_{vp}$ and $t_{max}$ gives the result.
\end{proof}
In the case of interest, where $p = \frac{\lambda}{\log n}$, we have
\begin{equation*}
z = -2 / \log \left(1 - \frac{\lambda}{\log n}\right) = \frac{2 \log n}{\lambda} - 1 - o(1),
\end{equation*}
where we have expanded $\log(1-x)$ in a Taylor series as $n$ goes to infinity.
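A quick numerical check of this expansion (with an arbitrary choice of $\lambda$): the two-term approximation $\frac{2 \log n}{\lambda} - 1$ exceeds $z$ by only $O(\lambda/\log n)$, consistent with the sign of the $o(1)$ term.

```python
import math

lam = 1.0
for n in [1e3, 1e6, 1e12]:
    x = lam / math.log(n)                 # x = p = lam / log n
    z = -2.0 / math.log1p(-x)             # z = 2 / log(1/(1-p))
    approx = 2.0 / x - 1.0                # two leading terms of the expansion
    assert 0.0 < approx - z < 2.0 * x     # error is O(lam / log n), one-sided
print("expansion verified")
```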
Then
\begin{equation*}
\frac{nz}{\log^2 n} \approx \frac{2 n \log n}{\lambda \log^2 n} = \frac{2 n}{\lambda \log n} = o(n),
\end{equation*}
so asymptotically we have
\begin{equation*}
(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n = \Theta(n).
\end{equation*}
Substituting the value of $z$ back into the denominator of \eqref{eq:w_lb} gives
\begin{equation*}
|W| \geq \frac{cn}{\log^{10} n}
\end{equation*}
as $n$ goes to infinity, for some constant $c>0$. This will be sufficient for our purposes, as it will turn out that only the exponent of $n$ is relevant for this term.
\subsection{Lower Bounding $\mathcal{L^*}$}
For lower bounding $\mathcal{L}^*$, it is shown in \cite{bay2020optimal} that if we stop constructing $W$ when fewer than $n_{final}$ items remain, then
\begin{equation*}
\mathcal{L}^* \geq \exp \left( \frac{T}{n_{final}} \mathcal{L}_p\right),
\end{equation*}
where
\begin{equation*}
\mathcal{L}_p = \min_{x=2, 3, \dotsc, n} x \log (1 - (1-p)^{x-1}).
\end{equation*}
As we modified their Procedure 1 to stop with fewer than $\frac{\epsilon n}{1 + \gamma}$ items, we will have
\begin{equation}
\label{eq:our_lstar}
\mathcal{L}^* \geq \exp \left( \frac{(1+\gamma)T}{\epsilon n} \mathcal{L}_p\right).
\end{equation}
Furthermore, Coja-Oghlan et al. \cite{coja2020optimal} show, in the sparse regime, that the function $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \frac{\log 2}{p} + O(p^{-1/2}).
\end{equation*}
In our regime, neglecting lower-order terms, this yields
\begin{equation*}
\mathcal{L}_p \geq \frac{\log n \log 2}{\lambda} \log \left( 1 - \left(\frac{\log n - \lambda}{\log n} \right)^{(\log n \log 2)/\lambda - 1}\right).
\end{equation*}
From this, we have
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( 1 - \left( \frac{\log n - \lambda}{\log n} \right)^{\log 2 \log n / \lambda - 1}\right)^{\log 2 \log n / \lambda}.
\end{equation*}
The Taylor series expansion at $x = \infty$ of
\begin{equation*}
\left( \frac{x-\lambda}{x} \right)^{\log 2 x/\lambda - 1}
\end{equation*}
is $\frac{1}{2} + O(\frac{1}{x})$, so for large $n$ dropping lower order terms this gives
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( \frac{1}{2} \right)^{\log 2 \log n / \lambda},
\end{equation*}
and thus substituting $T = (1-\epsilon)n$ into our expression for $\mathcal{L}^*$ from \eqref{eq:our_lstar},
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{2} \right)^{(\log 2 \log n / \lambda) \cdot (1+\gamma)(1 - \epsilon)/\epsilon},
\end{equation*}
which can be simplified to
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2 (1 - \epsilon)/(\lambda \epsilon)}.
\end{equation*}
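Both the $\frac{1}{2} + O(\frac{1}{x})$ limit and the final simplification $(1/2)^{(\log 2 \log n)/\lambda} = n^{-(\log 2)^2/\lambda}$ can be verified numerically (the constants below are arbitrary):

```python
import math

lam, log2 = 1.5, math.log(2)

# ((x - lam)/x) ** (log2 * x / lam - 1)  ->  1/2  as  x -> infinity
for x in [1e2, 1e4, 1e6]:
    val = ((x - lam) / x) ** (log2 * x / lam - 1)
    assert abs(val - 0.5) < 10.0 / x          # deviation is O(1/x)

# (1/2) ** (log2 * log n / lam)  ==  n ** (-(log 2)^2 / lam)
n = 5e4
lhs = 0.5 ** (log2 * math.log(n) / lam)
rhs = n ** (-(log2 ** 2) / lam)
assert math.isclose(lhs, rhs)
print(lhs)
```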
\subsection{Combining Together}
Writing $D$ for the total number of disguised items, we have $\mathbb{E}[D] \geq \mathcal{L}^* |W|$, and as each item in $W$ is disguised independently, by a Chernoff bound we have $D \geq (1 - o(1)) \mathcal{L}^* |W|$ with high probability. Any algorithm's success probability is upper bounded by $\exp(-p D)$, so in order for this success probability to go to 1, we need $pD$ to go to 0. Then substituting in our bounds for $\mathcal{L}^*$ and $|W|$, we need
\begin{align*}
-pD \leq& -p \mathcal{L}^* |W| \\
\leq& \frac{-\lambda}{\log n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)} \frac{cn}{\log^{10} n} \\
=& \frac{-\lambda cn}{\log^{11} n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)}
\end{align*}
to go to 0. Only the highest order terms are relevant, so we can just look at the exponents of $n$, and the expression will go to 0 if
\begin{align*}
& 1 < \frac{(1+\gamma)(\log 2)^2(1 - \epsilon)}{\epsilon \lambda} \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2(1 - \epsilon) \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2 - \epsilon (1+\gamma)(\log 2)^2 \\
\implies& \epsilon (\lambda + (1+\gamma)(\log 2)^2) < (1+\gamma)(\log 2)^2 \\
\implies& \epsilon < \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2}.
\end{align*}
We set $T = (1- \epsilon)n$, so to have success probability going to 1 we must have
\begin{equation*}
1 - \epsilon > 1 - \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2} = \frac{\lambda}{\lambda + (1+\gamma) (\log 2)^2},
\end{equation*}
and recall $\gamma$ can be any constant greater than zero.
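The chain of implications above just solves a linear inequality in $\epsilon$; the threshold value satisfies the boundary equation $\epsilon \lambda = (1+\gamma)(\log 2)^2(1-\epsilon)$ exactly, which we can confirm numerically (sample values of $\lambda$ and $\gamma$ are ours):

```python
import math

log2sq = math.log(2) ** 2

def eps_threshold(lam, gamma):
    """Boundary value solving eps * lam = (1 + gamma) * log2sq * (1 - eps)."""
    return (1 + gamma) * log2sq / (lam + (1 + gamma) * log2sq)

for lam in [0.5, 1.0, 4.0]:
    for gamma in [0.01, 0.1]:
        eps = eps_threshold(lam, gamma)
        assert 0.0 < eps < 1.0
        assert math.isclose(eps * lam, (1 + gamma) * log2sq * (1 - eps))
print(eps_threshold(1.0, 0.01))
```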
Altogether this shows the following.
\begin{theorem}
\label{thm:full_lb}
There exists $n_0$ such that for all $n > n_0$, any nonadaptive test scheme using $T$ tests to identify defectives among $n$ items, each independently defective with probability $p = \frac{\lambda}{\log n}$, with
\begin{equation*}
T \leq (1 - \epsilon) \frac{\lambda}{\lambda + \log^2 2} n
\end{equation*}
for some constant $\epsilon > 0$ independent of $n$, must have error probability $1 - o(1)$.
\end{theorem}
\section{Conclusion and Discussion}
We have shown that for nonadaptive PGT in the regime that $p = \lambda / \log n$ (or equivalently $k = \lambda n / \log n$),
\begin{equation*}
\min \left(1, \left(1+\epsilon\right) \frac{\lambda}{\log^2 2}\right) n
\end{equation*}
measurements suffice to obtain error probability going to 0, and at least
\begin{equation*}
\left(\left(1 - \epsilon\right) \frac{\lambda}{\lambda + \log^2 2}\right) n
\end{equation*}
are necessary, for any constant $\epsilon > 0$. \Cref{fig:pgt_bound_comp} shows a graphical comparison of the upper and lower bounds.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{pgt_bound_comp.png}
\caption{Comparison of upper and lower bounds for PGT in the regime where $k = \lambda n / \log n$. The shaded region indicates the gap between the current upper and lower bounds.}
\label{fig:pgt_bound_comp}
\end{figure}
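The two curves in \Cref{fig:pgt_bound_comp} can be regenerated directly from the bounds; the sketch below (function names are ours) checks that the lower bound never exceeds the upper bound, and that the upper bound meets individual testing ($T = n$) exactly at $\lambda = (\log 2)^2$.

```python
import math

log2sq = math.log(2) ** 2

def upper(lam):
    """Tests needed as a multiple of n, from the upper-bound theorem
    (COMP with near-constant tests-per-item, or individual testing)."""
    return min(1.0, lam / log2sq)

def lower(lam):
    """Tests needed as a multiple of n, from the lower-bound theorem,
    in the limit eps, gamma -> 0."""
    return lam / (lam + log2sq)

for lam in [x / 10 for x in range(1, 51)]:
    assert lower(lam) <= upper(lam) <= 1.0   # lower bound never exceeds upper
# Individual testing is matched by COMP exactly when lam = (log 2)^2:
assert upper(log2sq) == 1.0
print(upper(0.2), lower(0.2))
```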
A natural next step would be to close this gap. In the sparse regime where $k = O(n^\theta)$, more complex decodings known as DD and SPIV have been shown to improve on the COMP decoding \cite{johnson2018performance,coja2020optimal}, although the benefit seems to vanish as $\theta$ approaches 1.
We conjecture that the lower bound should come up to meet the upper bound obtained by the minimum of near-constant tests-per-item designs and individual testing. This would imply that the exact point at which individual testing becomes suboptimal is when $p < (\log 2)^2 / \log n$.
The reason for this belief is as follows. Suppose $D$ is the (random) number of disguised items. A simple calculation shows that
\begin{equation*}
\mathbb{E}[D] \ge n \exp\left(\frac{T}{n} \, \mathcal{L}_p\right),
\end{equation*}
where $\mathcal{L}_p$ was defined in the previous section. Using the same calculation as before, we obtain
\begin{equation*}
\mathbb{E}[D] \ge n 2^{-\frac{T}{n} \frac{\log 2}{p}}.
\end{equation*}
Given that the number of disguised items is $D$, the probability of correct decoding is
at most $(1-p)^D \le \exp(-pD)$, where $p =\frac{\lambda}{\log n}$. Substituting $\mathbb{E}[D]$ from above for $D$, we see that the probability of correct decoding goes to zero whenever $\frac{T}{n} < \frac{\lambda}{\log^2 2}$.
This suggests that the lower bound can be improved to match the upper bound, provided the random variable $D$ concentrates around its mean, which of course remains to be proved.
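This heuristic is easy to probe numerically. Below (a sketch under the unproved assumption that $D \approx \mathbb{E}[D]$; all names are ours) we evaluate $p \, \mathbb{E}[D]$ at ratios $T/n$ slightly below and above $\lambda/\log^2 2$, and observe the predicted growth on one side and decay on the other.

```python
import math

log2 = math.log(2)

def pD_heuristic(n, lam, ratio):
    """Heuristic p * E[D] with E[D] ~ n * 2**(-(T/n) * log2 / p),
    where p = lam / log n and ratio = T / n."""
    p = lam / math.log(n)
    ED = n * 2.0 ** (-(ratio * log2 / p))
    return p * ED

lam = 1.0
below = 0.9 * lam / log2 ** 2      # T/n slightly below lam / log^2(2)
above = 1.1 * lam / log2 ** 2      # and slightly above
small, large = 1e4, 1e8
# Below the conjectured threshold, p * E[D] grows with n (decoding should fail)...
assert pD_heuristic(large, lam, below) > pD_heuristic(small, lam, below)
# ...while above it, p * E[D] shrinks (no obstruction from disguised items).
assert pD_heuristic(large, lam, above) < pD_heuristic(small, lam, above)
print(pD_heuristic(large, lam, below), pD_heuristic(large, lam, above))
```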
\bibliographystyle{IEEEtran}
\section{Introduction}
Group testing is a sparse recovery problem where we aim to recover a small set of $k$ ``defective'' items from among $n$ total items using pooled tests. Originally introduced in the context of testing blood samples for diseases where multiple samples can be combined together \cite{dorfman1943detection}, it has since seen a variety of applications, including recently COVID-19 testing (see for instance \cite{yelin2020evaluation,
gollier2020group, eberhardt2020multi}, though there are many more such works).
More formally, we let $\bfx \in \set{0, 1}^n$ represent the vector of items where a 1 indicates defectivity. A test vector $\bfm \in \set{0, 1}^n$ specifies a subset of items to be tested; the result is
\begin{equation*}
y = \bigvee_{i:\bfm_i = 1} \bfx_i,
\end{equation*}
the logical OR of the entries of $\bfx$ specified by the test.
We focus here on the nonadaptive setting, where all tests must be specified in advance of seeing any results. In this setting we write $M \in \set{0,1}^{T \times n}$ for the matrix of $T$ tests (rows), and $\bfy$ for the vector of all test results. We assume the vector of defectives $\bfx$ is generated randomly, the details of which are discussed in \cref{sec:priors}.
This is known as \emph{probabilistic group testing} (PGT), meaning that we are interested in determining how many tests are necessary for the probability of error (for some particular decoding method) to approach 0 as $n$ becomes large. This is in contrast to \emph{combinatorial group testing}, where we insist that any set of defectives is recoverable (i.e., probability of error is 0 for any $n$).
In general, we are interested in the question of how the number of tests $T$ must scale as a function of both $n$ and $k$. This behavior can be quite different depending on the scaling of $k$ relative to $n$, so the problem is often broken down further into different ``regimes'' for $k$. In this work we prove both lower and upper bounds on $T$ for the specific regime $k = \Theta\left(\frac{n}{\log n}\right)$, or equivalently where the probability of an item being defective is $\Theta\left(\frac1{\log n}\right)$. This regime has traditionally seen little attention, but recent work has shown that it is exactly the scaling of $k$ at which $\Omega(n)$ tests become necessary. As $n$ tests trivially suffice by testing each item individually, it is natural to ask for the exact crossover point at which individual testing becomes suboptimal, and this point lies precisely in the $k = \Theta\left(\frac{n}{\log n}\right)$ regime.
\subsection{Related Work}
\label{sec:related_work}
Much of the initial work on group testing concerned the zero-error combinatorial setting, where a single test matrix must correctly classify every possible defective vector. The literature on this variant is vast -- see for instance the book of Du and Hwang \cite{du2000combinatorial}. We focus the rest of this section on the low-error probabilistic setting.
PGT has traditionally been split into two rather different regimes: the ``sparse'' regime, where the total number of defectives is $O(n^\theta)$ for some $0 \leq \theta < 1$, and the ``linear'' regime where the total number of defectives is $\beta n$ for some constant $\beta < 1$.
In both regimes, the folklore ``counting bound'' (see \cite{chan2011non} for instance) shows that $\Omega(k \log \frac{n}{k})$ measurements are necessary. In the sparse regime, this is equivalent to $\Omega(k \log n)$, and order-optimal randomized constructions have been known for some time \cite{sebHo1985two, atia2012boolean}. In the linear regime, the counting bound implies $\Omega(n)$ measurements are necessary, and trivially $n$ suffice by testing items individually.
More recently, explicit constructions of matrices for PGT have been studied. Mazumdar \cite{mazumdar2015nonadaptive} gave explicit constructions requiring $O(k \log^2 n / \log k)$ measurements. Follow-up works of Barg and Mazumdar~\cite{barg2017group} and Inan et al. \cite{inan2019optimality} show order-optimal or near-optimal results.
Another direction has been to develop constructions with good decoding properties. PGT schemes are considered efficiently decodable if they require $O(T)$ time to decode, where $T$ is the total number of measurements; several works \cite{cai2017efficient, vem2017group, lee2019saffron} gave efficiently decodable schemes which were not quite order-optimal, before Bondorf et al. \cite{bondorf2020sublinear} gave the first order-optimal efficiently decodable construction. A very recent result \cite{inan2020strongly} shows that we can even have explicit constructions which are both order-optimal and efficiently decodable.
The last line of work we discuss in PGT, and the most relevant to our work here, has been to go beyond order-optimality and determine the precise constants involved in various regimes. Improvements on the counting bound were proposed in \cite{agarwal2018novel}, and subsequently it was shown that individual testing is optimal in the linear regime~\cite{aldridge2018individual}.
In the sparse regime the characterization of exact constants results in a more complex picture, but a series of works \cite{scarlett2016limits, aldridge2017capacity, johnson2018performance, coja2020optimal} have narrowed down the constants to the point that the lower and upper bounds are matching for any $\theta \in [0, 1)$ (where $k = n^\theta$). We direct the interested reader to the recent survey of Aldridge et al. \cite{aldridge2019group}.
However, in between these two regimes is another regime which has seen much less study, namely when the total number of defectives is $n / \textrm{poly}(\log n)$ (\hspace{1sp}\cite{bay2020optimal} refers to these regimes as ``mildly sublinear''). In particular when $k = \Theta(n / \log n)$ the counting bound implies only that $\Omega(n \log \log n/ (\log n))$ measurements are necessary, but a method using $o(n)$ measurements proved elusive. In very recent work, Bay et al. \cite{bay2020optimal} showed that in fact the lower bound can be improved to $\Omega(k \log n)$ in this regime, and thus known constructions are order-optimal.
In light of the asymptotic improvement to the lower bound for $k = n / \textrm{poly}(\log n)$ \cite{bay2020optimal}, we feel it is a natural next step to try to nail down the constants in this regime. While we focus on the particular case $k = \lambda n / \log n$ in this paper, we expect our results should extend readily to other mildly sublinear regimes.
\subsection{Priors for PGT}
\label{sec:priors}
There are two commonly used ``priors'' over the random defective vector involved in PGT. Under the \emph{combinatorial prior}, the number of defectives is fixed to be $k$, and the defective set is drawn uniformly from the $\binom{n}{k}$ possibilities. Under the \emph{i.i.d. prior}, we instead fix a defective probability $p$, and each item is defective independently with probability $p$.
Conveniently, it turns out that our choice of prior makes little difference in the number of tests necessary (assuming we take $k = pn$). The following result from the survey of Aldridge et al. \cite{aldridge2019group} formalizes this notion.
\begin{theorem}[\hspace{1sp}\cite{aldridge2019group} Thm. 1.7]
\label{thm:comb_prior_conversion}
Suppose a sequence of test designs and decoding method has probability of error going to 0 as $n$ goes to infinity under the combinatorial prior with $k = k_0(1 + o(1))$, where $k_0 = o(n)$ and $k_0$ goes to infinity with $n$. Then the same sequence of test designs and decoding method has error going to 0 as $n$ goes to infinity under the i.i.d. prior with $p = k_0 / n$.
\end{theorem}
A similar result holds for the conversion in the opposite direction. In this paper we will primarily use the i.i.d. prior, except for our upper bound in \cref{sec:constant_ub}, which relies on preexisting work under the combinatorial prior. For ease of reference to that work we will use the combinatorial prior there, and \cref{thm:comb_prior_conversion} tells us we can convert our main result (\cref{thm:constant_ub}) to an equivalent result under the i.i.d. prior.
\subsection{Notation}
We will use the following notational conventions throughout:
\begin{itemize}
\item $M$ is a test matrix with rows corresponding to tests and columns corresponding to items.
\item $T$ is the number of tests (rows of $M$).
\item $n$ is the total number of items (columns of $M$).
\item $k$ is the total number of defectives.
\item $p$ is the defective probability under the i.i.d. prior.
\item $\log$ is the logarithm base $e$.
\item $\epsilon$, $\gamma$, $\lambda$, $\alpha$, $\nu$ are positive constants; $\delta$ is a small slack parameter that may depend on $n$.
\end{itemize}
Our results show that when $p = \lambda / \log n$, $T/n \geq \min\left(1, (1+\epsilon)\frac{\lambda}{\log^2 2}\right)$ is sufficient, whereas $T/n \geq (1-\epsilon)\frac{\lambda}{\lambda+\log^2 2}$ is necessary, for any constant $\epsilon > 0$. Note that the two leading constants are always within a factor of 2 of each other, and approach equality for very small or large values of $\lambda$.
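As a quick numeric check of the factor-of-2 claim (illustrative only, ignoring the $1 \pm \epsilon$ factors), one can tabulate the two leading constants across $\lambda$:

```python
import math

c = math.log(2) ** 2  # (log 2)^2, approximately 0.4805

def upper(lam):
    """Sufficient T/n from the upper bound (leading constant only)."""
    return min(1.0, lam / c)

def lower(lam):
    """Necessary T/n from the lower bound (leading constant only)."""
    return lam / (lam + c)

# The ratio upper/lower peaks (at exactly 2) at lam = (log 2)^2
# and tends to 1 at both extremes.
lams = [1e-4, 0.1, c, 10.0, 1e4]
ratios = [upper(l) / lower(l) for l in lams]
```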
\section{Upper Bound with Near-Constant Tests-Per-Item Design}
\label{sec:constant_ub}
For probabilistic group testing in the sparse regime, it has been shown that the optimal rate is achieved by so-called ``Near-Constant Tests-per-Item'' designs \cite{johnson2018performance}. These matrices are formed by drawing $\frac{\nu T}{k}$ tests uniformly with replacement per item, where $\nu$ is a small constant parameter to be optimized, $T$ is the total number of tests, and $k$ is the number of defectives (assuming the combinatorial prior).
Drawing the tests with replacement means that it is possible that the same test is drawn more than once, hence the ``near-constant'' tests-per-item, rather than constant. This simplifies the analysis as the draws are independent.
We will use this type of test matrix to prove our upper bound.
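For concreteness, a near-constant tests-per-item matrix can be sampled as follows (a sketch of the design just described; the parameter values are arbitrary):

```python
import numpy as np

def near_constant_design(n, T, L, rng):
    """Draw L tests uniformly at random with replacement for each of n items.

    Because draws are with replacement, an item can hit the same test twice,
    so column weights are *near*-constant (at most L), matching the text.
    """
    M = np.zeros((T, n), dtype=int)
    for i in range(n):
        tests = rng.integers(0, T, size=L)  # L i.i.d. uniform draws for item i
        M[tests, i] = 1
    return M

rng = np.random.default_rng(0)
M = near_constant_design(n=500, T=100, L=10, rng=rng)
```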
\subsection{Preliminaries}
We closely follow the approach taken by Johnson et al. \cite{johnson2018performance} for the sparse regime in the following, as their argument is mostly regime-independent. The following notation will be useful in describing their approach succinctly:
\begin{itemize}
\item $\mathcal{K}$ is the set of all defective items.
\item $W^{(\mathcal{K})}$ is the number of tests containing at least one item from $\mathcal{K}$.
\item $G$ is the number of nondefective items that do not appear in any negative tests.
\end{itemize}
Our upper bound will employ the simple COMP decoding~\cite{du2000combinatorial,chan2011non}. This algorithm first identifies all items present in negative tests and classifies them as nondefective; any remaining items are then classified as defective. While simple, COMP has in most cases proven to be asymptotically as good as more complex decoding methods, and it is easy to analyze.
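The COMP rule is straightforward to state in code. The sketch below is our own illustration of the decoder just described, not the implementation used in prior work:

```python
import numpy as np

def comp_decode(M, y):
    """COMP: every item appearing in a negative test is nondefective;
    all remaining items are declared defective."""
    M, y = np.asarray(M), np.asarray(y)
    in_negative = M[y == 0].sum(axis=0) > 0  # items hit by some negative test
    return (~in_negative).astype(int)

# Small example: item 1 is defective; item 2 appears only in positive tests,
# so COMP misclassifies it (COMP errs only with false positives).
M = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 0]])
x = np.array([0, 1, 0])
y = (M @ x > 0).astype(int)
xhat = comp_decode(M, y)
```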
First we present two useful lemmas borrowed from \cite{johnson2018performance}. The first in plain language states that given a near-constant tests-per-item design, the number of tests including at least one defective concentrates tightly around its mean.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 1]
\label[lemma]{thm:wk_concentration}
Let $|\mathcal{K}| = k$, and fix a constant $\alpha > 0$. Suppose we form a test design of $T$ tests by drawing $L = \frac{\alpha T}{k}$ tests uniformly at random with replacement for each of $n$ items. Then (assuming $T$ goes to infinity with $n$) the following holds for any $\delta > 0$:
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\alpha})T| \geq \delta] \leq 2 \exp \left( -\frac{\delta^2}{\alpha T} \right).
\end{equation*}
\end{lemma}
The next lemma, again from \cite{johnson2018performance}, characterizes the distribution of $G$ conditioned on a fixed value of $W^{(\mathcal{K})}$.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 5]
\label[lemma]{thm:g_dist}
Let $L = \frac{\nu T}{k}$ be the number of tests drawn for each item in a near-constant tests-per-item design. Then
\begin{equation*}
G \ | \ (W^{(\mathcal{K})} = x) \sim \mathrm{Bin} (n-k, (x/T)^L).
\end{equation*}
\end{lemma}
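A small Monte Carlo experiment is consistent with \Cref{thm:g_dist}: fixing which $x$ of the $T$ tests are positive and drawing $L$ tests per nondefective item, the count of items hidden from all negative tests matches the binomial mean $(n-k)(x/T)^L$ (an illustrative check with arbitrary parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
T, L, x = 100, 5, 60   # x of the T tests are "positive" (contain a defective)
n_minus_k = 2000       # number of nondefective items

# Treat tests 0..x-1 as the positive ones. A nondefective item avoids every
# negative test iff all L of its uniform draws land among those x tests,
# which happens with probability (x/T)^L independently across items.
trials = 2000
counts = np.empty(trials)
for t in range(trials):
    draws = rng.integers(0, T, size=(n_minus_k, L))
    counts[t] = (draws < x).all(axis=1).sum()

emp_mean = counts.mean()
theory_mean = n_minus_k * (x / T) ** L  # Binomial mean from the lemma
```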
\subsection{Combining Together}
Now we are ready to prove the upper bound, namely that we can succeed with high probability using about $n \cdot \lambda/\log^2 2$ tests.
\begin{theorem}
\label{thm:constant_ub}
Let $k = \frac{\lambda n}{\log n}$ be the total number of defectives. Suppose our test design is near-constant tests-per-item,
\begin{equation*}
T = (1 + \epsilon) \frac{\lambda}{\log^2 2} n
\end{equation*}
for a constant $\epsilon > 0$, and we draw $L = \frac{T \log 2}{k}$ tests per item with replacement. Then for sufficiently large $n$, the success probability of the COMP decoding is $1 - o(1)$.
\end{theorem}
\begin{IEEEproof}
As $G$ is the number of nondefectives not appearing in any negative tests, we know COMP succeeds if and only if $G = 0$. Then the main idea here is that we will use the equation
\begin{equation}
\label{eq:cond_prob_sum}
\Pr[G = 0] = \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \Pr[G=0 | W^{(\mathcal{K})} = x],
\end{equation}
applying \Cref{thm:g_dist} when $x \leq (1/2 + \delta) T$, and showing that the probability of $x > (1/2 + \delta) T$ goes to 0 for large $n$ using \Cref{thm:wk_concentration}.
By \Cref{thm:g_dist},
\begin{equation*}
\label{eq:success_conditional_exact}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] = \left( 1 - \left( \frac{x}{T} \right)^L \right)^{n-k}.
\end{equation*}
As this function is decreasing in $x$, for all $x \leq (1/2 + \delta)T$, we have
\begin{align*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] \geq& \Pr[G = 0 \ | \ W^{(\mathcal{K})} = (1/2 + \delta)T] \\
=& (1 - (1/2 + \delta)^L)^{n-k}.
\end{align*}
By definition
\begin{equation*}
L = \frac{T \log 2}{k} = \frac{(1+\epsilon) \lambda n \log 2 \log n}{(\log 2)^2 \lambda n} = \frac{(1+\epsilon) \log n}{\log 2},
\end{equation*}
so this gives
\begin{equation*}
\frac{1}{2^L} = \left( \frac{1}{2^{1 / \log 2}} \right)^{(1+\epsilon) \log n} = \left(\frac{1}{e^{\log n}} \right)^{(1+\epsilon)} = \frac{1}{n^{1+\epsilon}}.
\end{equation*}
Taking $\delta \leq \frac{1}{2L}$, each of the $L+1$ terms in the binomial expansion of $(1/2 + \delta)^L$ is at most $1/2^L$, so we have
\begin{equation*}
(1/2 + \delta)^L < \frac{L+1}{2^L} = \frac{L+1}{n^{1+\epsilon}} < \frac{1}{n^{1+\epsilon/2}},
\end{equation*}
where in the last step we use the fact that $L = \Theta(\log n)$ and the assumption that $n$ is large. Then, using Bernoulli's inequality $(1-y)^m \geq 1 - my$ in the second step below, we have for $x \leq (1/2 + \delta)T$
\begin{align*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] >& \left(1 - \frac{1}{n^{1+\epsilon/2}} \right)^{n-k} \\
\geq& 1 - \frac{n-k}{n^{1+\epsilon/2}} \\
>& 1 - \frac{n}{n^{1+\epsilon/2}} \\
=& 1 - \frac{1}{n^{\epsilon/2}},
\end{align*}
which goes to 1 for large $n$.
Plugging this back into \eqref{eq:cond_prob_sum},
\begin{align}
& \Pr[G = 0] \nonumber\\
=& \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
\geq& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
>& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \Pr[W^{(\mathcal{K})} \leq (1/2 + \delta)T] \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \left( 1 - \Pr[W^{(\mathcal{K})} > (1/2 + \delta)T]\right) \label{eq:final_succ_prob}.
\end{align}
Finally, we apply \Cref{thm:wk_concentration} to the latter probability with $\alpha = \log 2$ (and $\delta T$ in place of the $\delta$ in the original lemma statement), which tells us
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\log 2})T| \geq \delta T] \leq 2 \exp \left( -\frac{(\delta T)^2}{T \log 2} \right),
\end{equation*}
and thus
\begin{align*}
\Pr[|W^{(\mathcal{K})} - (T/2)| \geq \delta T] \leq& 2 \exp \left( -\frac{\delta^2 T}{\log 2} \right) \\
=& \Theta \left( \exp \left( -\frac{n}{\log^2 n}\right)\right)
\end{align*}
which clearly goes to 0 for large $n$. Thus the success probability in \eqref{eq:final_succ_prob} will go to 1, concluding the proof.
\end{IEEEproof}
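Two steps of the proof above can be spot-checked numerically: the identity $1/2^L = n^{-(1+\epsilon)}$, and the binomial-expansion bound for $\delta \leq 1/(2L)$ (an illustrative check with arbitrary $n$ and $\epsilon$):

```python
import math

n, eps = 10**6, 0.5
L = (1 + eps) * math.log(n) / math.log(2)  # L = T log 2 / k from the proof

# 1/2^L should equal n^{-(1+eps)} exactly (up to float rounding).
lhs = 0.5 ** L
rhs = n ** (-(1 + eps))

# For delta <= 1/(2L), (1/2 + delta)^L is at most (L+1)/2^L.
delta = 1 / (2 * L)
bound_ok = (0.5 + delta) ** L <= (L + 1) * 0.5 ** L
```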
\section{Determining the Constant in the Lower Bound}
The recent work of Bay et al. \cite{bay2020optimal} shows that a lower bound of $\min(n, \Omega(k \log n))$ tests holds for any $k$. In the case of interest here, $k = \lambda n / \log n$, this tells us that $\Omega(n)$ measurements are needed. In this section we seek to determine the necessary number of measurements more precisely, up to the constant term.
The argument of \cite{bay2020optimal} works by demonstrating that with significantly fewer than $k \log n$ measurements, there must exist a fairly large number of ``totally disguised'' items; that is, items for which no test gives us any information about whether or not they are defective. If we have no information about these items' defectivity, we cannot do better than guessing on each one.
More specifically, they give a procedure that constructs a set of items $W$ with the following properties:
\begin{enumerate}
\item Each item in $W$ is totally disguised with probability at least $\mathcal{L}^*$.
\item The events that each item in $W$ is totally disguised are independent of each other.
\end{enumerate}
Since these events are independent, we can combine bounds on $|W|$ and $\mathcal{L}^*$ with simple concentration inequalities to get a lower bound on the number of totally disguised items.
For our lower bound we will modify their procedure slightly, as described below.
\subsection{How Many Disguised Items are Needed}
Since disguised items are defective independently with probability $p$ and for us $p < \frac{1}{2}$, the best any algorithm
can do on a disguised item is predict it is nondefective. This guess is correct with probability $1 - p$. Thus if the total number of disguised items is $D$, the success probability of any algorithm is at most
\begin{equation*}
(1 - p)^D \leq \exp(-p D).
\end{equation*}
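The final inequality follows by raising $1 - p \leq e^{-p}$ to the $D$-th power; a grid spot-check (illustrative only):

```python
import math

# (1-p)^D <= exp(-p*D) follows from 1 - p <= e^{-p}; check it on a small grid.
ok = all(
    (1 - p) ** D <= math.exp(-p * D) * (1 + 1e-12)
    for p in (0.01, 0.1, 0.3, 0.49)
    for D in (1, 10, 100, 1000)
)
```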
\subsection{Lower Bounding $|W|$}
We will closely follow the method of \cite{bay2020optimal}, but will first slightly redefine their notion of ``very-present'' items. These are items present in far more tests than the average item, which will be discarded in a preprocessing step before constructing $W$.
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 4, modified]
\label[lemma]{thm:very_present}
Define an item to be \emph{very-present} if it appears in more than $t_{max} = \log^3 n$ tests. If $T \leq n$ and no test contains more than $z \log n$ items, then the number of very-present items is
\begin{equation*}
n_{vp} \leq \frac{nz}{\log^2 n}.
\end{equation*}
\end{lemma}
\begin{IEEEproof}
Consider the total number of (item, test) pairs $P$. From the assumptions, we have $P \leq Tz \log n \leq nz \log n$. Also, by the definition of very-present items, we have $n_{vp} \log^3 n \leq P$. Then $n_{vp} \log^3 n \leq nz \log n$ implies the result.
\end{IEEEproof}
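The counting argument can be checked on a random design that respects the cap on items per test (a sketch; the values below are arbitrary stand-ins for $z \log n$ and $\log^3 n$):

```python
import numpy as np

rng = np.random.default_rng(4)
n, T = 2000, 1500   # satisfies T <= n
cap = 40            # stand-in for the per-test cap z log n
t_max = 25          # stand-in for the very-present threshold log^3 n

# Random design in which each test contains at most `cap` items.
M = np.zeros((T, n), dtype=int)
for t in range(T):
    size = rng.integers(1, cap + 1)
    M[t, rng.choice(n, size=size, replace=False)] = 1

pairs = int(M.sum())                       # total number of (item, test) pairs P
n_vp = int((M.sum(axis=0) > t_max).sum())  # "very-present" items
```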
For bounding $|W|$, we have the following:
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 6 Pf.]
\label[lemma]{thm:w_lb}
Suppose $W$ is constructed as described in Procedure 1 of \cite{bay2020optimal}, and furthermore we modify their stopping rule so that we stop when there are fewer than $\frac{\epsilon n}{1 + \gamma}$ items remaining rather than $\frac{\epsilon n}{2}$, where $\gamma$ is a small positive constant. Then
\begin{equation}
\label{eq:w_lb}
|W| \geq \frac{(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n}{z^2 \log^8 n},
\end{equation}
where $z = \frac{2}{\log (1/(1-p))}$.
\end{lemma}
\begin{IEEEproof}
From the proof of \cite{bay2020optimal} Lemma 6, we have that the method of constructing $W$ yields the inequality
\begin{equation*}
\frac{\epsilon n}{1 + \gamma} \geq \epsilon n - n_{vp} - |W| t_{max}^2 z^2 \log^2 n,
\end{equation*}
where $n_{vp}$ and $t_{max}$ are as defined in our \Cref{thm:very_present}. Rearranging this yields
\begin{equation*}
|W| \geq \frac{\gamma \epsilon n/(1+\gamma) - n_{vp}}{t_{max}^2 z^2 \log^2 n},
\end{equation*}
and substituting the values from \Cref{thm:very_present} for $n_{vp}$ and $t_{max}$ gives the result.
\end{IEEEproof}
In the case of interest, $p = \frac{\lambda}{\log n}$, we have
\begin{equation*}
z = -2 / \log \left(1 - \frac{\lambda}{\log n}\right) = \frac{2 \log n}{\lambda} - 1 - o(1),
\end{equation*}
where we have used the Taylor expansion of $\log(1-x)$ around $x = 0$, valid as $n$ goes to infinity.
Then
\begin{equation*}
\frac{nz}{\log^2 n} \approx \frac{2 n \log n}{\lambda \log^2 n} = \frac{2 n}{\lambda \log n} = o(n),
\end{equation*}
so asymptotically we have
\begin{equation*}
(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n = \Theta(n).
\end{equation*}
Substituting the value of $z$ back into the denominator of \eqref{eq:w_lb} gives
\begin{equation*}
|W| \geq \frac{cn}{\log^{10} n}
\end{equation*}
as $n$ goes to infinity, for some constant $c>0$. This will be sufficient for our purposes, as it will turn out that only the exponent of $n$ is relevant for this term.
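The expansion of $z$ used above can be spot-checked numerically: the gap between $z$ and $2\log n/\lambda - 1$ shrinks as $n$ grows (illustrative values):

```python
import math

lam = 2.0
gaps = []
for n in (10**3, 10**6, 10**12):
    p = lam / math.log(n)
    z = 2 / math.log(1 / (1 - p))  # definition of z from the lemma
    gaps.append(abs(z - (2 * math.log(n) / lam - 1)))
# gaps shrink toward 0 as n grows, consistent with the o(1) error term
```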
\subsection{Lower Bounding $\mathcal{L^*}$}
For lower bounding $\mathcal{L}^*$, Bay et al. show in \cite{bay2020optimal} that if we stop constructing $W$ when there are fewer than $n_{final}$ items remaining, then
\begin{equation*}
\mathcal{L}^* \geq \exp \left( \frac{T}{n_{final}} \mathcal{L}_p\right),
\end{equation*}
where
\begin{equation*}
\mathcal{L}_p = \min_{x=2, 3, \dotsc, n} x \log (1 - (1-p)^{x-1}).
\end{equation*}
As we modified their Procedure 1 to stop with less than $\frac{\epsilon n}{1 + \gamma}$ items, we will have
\begin{equation}
\label{eq:our_lstar}
\mathcal{L}^* \geq \exp \left( \frac{(1+\gamma)T}{\epsilon n} \mathcal{L}_p\right).
\end{equation}
Furthermore, it is shown in work of Coja-Oghlan et al. \cite{coja2020optimal} in the sparse regime that the function $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \frac{\log 2}{p} + O(p^{-1/2}).
\end{equation*}
In our regime, substituting this minimizer and neglecting lower-order terms yields
\begin{equation*}
\mathcal{L}_p \geq \frac{\log n \log 2}{\lambda} \log \left( 1 - \left(\frac{\log n - \lambda}{\log n} \right)^{(\log n \log 2)/\lambda - 1}\right).
\end{equation*}
From this, we have
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( 1 - \left( \frac{\log n - \lambda}{\log n} \right)^{\log 2 \log n / \lambda - 1}\right)^{\log 2 \log n / \lambda}.
\end{equation*}
The Taylor series expansion at $x = \infty$ of
\begin{equation*}
\left( \frac{x-\lambda}{x} \right)^{\log 2 x/\lambda - 1}
\end{equation*}
is $\frac{1}{2} + O(\frac{1}{x})$, so for large $n$ dropping lower order terms this gives
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( \frac{1}{2} \right)^{\log 2 \log n / \lambda},
\end{equation*}
and thus substituting $T = (1-\epsilon)n$ into our expression for $\mathcal{L}^*$ from \eqref{eq:our_lstar},
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{2} \right)^{(\log 2 \log n / \lambda) \cdot (1+\gamma)(1 - \epsilon)/\epsilon},
\end{equation*}
which can be simplified to
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2 (1 - \epsilon)/(\lambda \epsilon)}.
\end{equation*}
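The limit of $1/2$ used in the last step can be spot-checked numerically (illustrative values of $\lambda$ and $x$):

```python
import math

lam = 2.0
vals = []
for x in (10.0, 100.0, 10_000.0):
    expo = math.log(2) * x / lam - 1
    vals.append(((x - lam) / x) ** expo)  # should approach 1/2 as x grows
```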
\subsection{Combining Together}
Writing $D$ for the total number of disguised items, we have $D \geq \mathcal{L}^* |W|$, and any algorithm's success probability is upper bounded by $\exp(-p D)$. In order for this success probability to go to 1, we need $pD$ to go to 0. Then substituting in our bounds for $\mathcal{L}^*$ and $|W|$, we need
\begin{align*}
-pD \leq& -p \mathcal{L}^* |W| \\
\leq& \frac{-\lambda}{\log n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)} \frac{cn}{\log^{10} n} \\
=& \frac{-\lambda cn}{\log^{11} n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)}
\end{align*}
to go to 0. Only the highest order terms are relevant, so we can just look at the exponents of $n$, and the expression will go to 0 if
\begin{align*}
& 1 < \frac{(1+\gamma)(\log 2)^2(1 - \epsilon)}{\epsilon \lambda} \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2(1 - \epsilon) \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2 - \epsilon (1+\gamma)(\log 2)^2 \\
\implies& \epsilon (\lambda + (1+\gamma)(\log 2)^2) < (1+\gamma)(\log 2)^2 \\
\implies& \epsilon < \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2}.
\end{align*}
We set $T = (1- \epsilon)n$, so to have success probability going to 1 we must have
\begin{equation*}
1 - \epsilon > 1 - \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2} = \frac{\lambda}{\lambda + (1+\gamma) (\log 2)^2},
\end{equation*}
and recall $\gamma$ can be any constant greater than zero.
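The chain of implications above amounts to the equivalence $1 < \frac{(1+\gamma)(\log 2)^2(1-\epsilon)}{\epsilon\lambda} \iff \epsilon < \frac{(1+\gamma)(\log 2)^2}{\lambda + (1+\gamma)(\log 2)^2}$, which a small numeric sweep confirms (illustrative only):

```python
import math

c = math.log(2) ** 2

# For sampled (lam, gamma), the condition 1 < (1+gamma)*c*(1-eps)/(eps*lam)
# should hold exactly when eps < (1+gamma)*c / (lam + (1+gamma)*c).
ok = True
for lam in (0.1, 1.0, 5.0):
    for gamma in (0.01, 0.5):
        thresh = (1 + gamma) * c / (lam + (1 + gamma) * c)
        for eps in (0.999 * thresh, 1.001 * thresh):
            lhs = (1 + gamma) * c * (1 - eps) / (eps * lam)
            ok &= (lhs > 1) == (eps < thresh)
```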
Altogether this shows the following.
\begin{theorem}
\label{thm:full_lb}
There exists $n_0$ such that for all $n > n_0$, any test scheme using $T$ tests to identify defectives among $n$ items with i.i.d. defective probability $p = \frac{\lambda}{\log n}$, where
\begin{equation*}
T \leq (1 - \epsilon) \frac{\lambda}{\lambda + \log^2 2} n
\end{equation*}
for some constant $\epsilon > 0$ independent of $n$, must have error probability $1 - o(1)$.
\end{theorem}
\section{Conclusion and Discussion}
We have shown that for nonadaptive PGT in the regime that $p = \lambda / \log n$ (or equivalently $k = \lambda n / \log n$),
\begin{equation*}
\min \left(1, \left(1+\epsilon\right) \frac{\lambda}{\log^2 2}\right) n
\end{equation*}
measurements suffice to obtain error probability going to 0, and at least
\begin{equation*}
\left(\left(1 - \epsilon\right) \frac{\lambda}{\lambda + \log^2 2}\right) n
\end{equation*}
are necessary, for any constant $\epsilon > 0$.
A natural next step would be to close this gap. In the sparse regime where $k = O(n^\theta)$, more complex decodings known as DD and SPIV have been shown to improve on the COMP decoding \cite{johnson2018performance,coja2020optimal}, although the benefit seems to vanish as $\theta$ approaches 1.
We conjecture that the lower bound should come up to meet the upper bound obtained by the minimum of near-constant tests-per-item designs and individual testing. This would imply that the exact point at which individual testing becomes suboptimal is when $p < (\log 2)^2 / \log n$.
The reason for this belief is as follows. Suppose $D$ is the (random) number of disguised items. A simple calculation shows that,
$$
\mathbb{E} D \ge n \exp(T/n \cdot \mathcal{L}_p)
$$
where $\mathcal{L}_p$ has been defined in the last section. Using the same calculation as before, we obtain,
$$
\mathbb{E} D \ge n 2^{-\frac{T}{n} \frac{\log 2}{p}}.
$$
Given the number of disguised items is $D$, the probability of correct decoding is
at most $(1-p)^D \le \exp(-pD),$ where $p =\frac{\lambda}{\log n}$. Substituting $\mathbb{E} D$ from above for $D$, we see that the probability of correct decoding goes to zero whenever $\frac{T}{n} < \frac{\lambda}{\log^2 2}.$
This shows that the lower bound should be improved to match the upper bound, if the random variable $D$ is concentrated around its mean, which is of course yet to be proved.
The current lower bound is agnostic to whether $\lambda / \log^2 2 < 1$ holds, and therefore it must lie below both terms of the minimum in the upper bound simultaneously. To improve it further, it seems necessary to handle these two cases individually.
\bibliographystyle{IEEEtran}
\section{Introduction}
Here we consider the problem of nonadaptive group testing with low error, meaning that we must be able to exactly identify the defective set with error that approaches 0 as the total number of items $n$ goes to infinity.
We will assume the ``i.i.d.'' prior over defective sets, meaning that each of the $n$ items is chosen to be defective independently with some fixed probability $p$. This is in contrast to the ``combinatorial prior,'' where we fix a sparsity $k$ and then choose a set of $k$ items uniformly at random among the $\binom{n}{k}$ such subsets. In practice there tends to be little difference between results with the two priors, as the sparsity under the i.i.d. prior concentrates around $pn$.
The defective probability $p$ factors heavily into the possible performance that can be obtained and which methods work best. In the ``sparse'' regime, we typically assume the expected number of defectives satisfies $pn = O(n^\alpha)$ for some $\alpha \in [0, 1)$. This regime is extremely well-studied, and it is known that $\Theta(k \log n)$ tests are necessary and sufficient for all values of $\alpha$. For some values of $\alpha$ even the constant terms are known exactly; see the survey \cite{aldridge2019group} for details.
In contrast, in the ``linear'' regime where $p = \beta$ for some constant $\beta$, it was shown in \cite{aldridge2018individual} that regardless of $\beta$ in order to have error probability going to 0 with $n$, it is necessary to individually test each item.
As noted in \cite{aldridge2018individual}, this motivates the question of what happens in between these two regimes, such as when $p = \frac{1}{\log n}$. We refer to this intermediate regime, where $p = o(1)$ but $p = n^{-o(1)}$, as the ``semisparse'' regime, and focus our efforts on clarifying the situation here. To our knowledge, this question has not seen significant prior study.
\section{Upper Bound Ideas}
\subsection{Trivial Bound}
In the linear regime, in order to prove individual testing is necessary to achieve arbitrarily low error, it suffices to show that with constant probability there exists an item about which we have no information -- since this item is defective with constant probability, any guess we make about it will necessarily incur constant error.
However, as noted in \cite{aldridge2018individual} this argument no longer holds in sublinear regimes. For instance, suppose $p = \frac{1}{\log n}$, and consider the following test scheme: we individually test the first $n-1$ items, and simply ignore the $n$th item completely and always predict it is not defective. Then our answer is incorrect with probability only $p = \frac{1}{\log n}$, and so our error goes to 0 as $n$ goes to infinity. This shows that at the least, individual testing is not necessary in the semisparse regime. We extend this argument to its logical conclusion in the following proposition.
\begin{proposition}
Fix a subconstant defective probability $p = p(n) = o(1)$, so that $1/p$ goes to infinity with $n$. Then
\begin{equation*}
T = n - o(1 / p)
\end{equation*}
tests are sufficient to determine the defective set with error going to 0 as $n$ goes to infinity.
\end{proposition}
\begin{proof}
Our testing scheme is simply to test the first $T$ items individually, and ignore the rest, predicting all of them are non-defective. Each of the untested $n - T$ items is independently defective with probability $p$, so our error probability is equal to the probability any of the $n - T$ items is defective, which is
\begin{equation*}
1 - (1-p)^{n - T} \approx 1 - e^{-(n-T)p}.
\end{equation*}
Then as long as $(n - T)$ is $o(1 / p)$, $(n - T)p$ will go to 0 with $n$, so our error probability goes to 0 as well.
\end{proof}
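As a quick numerical sanity check on this scheme (an illustrative sketch; the values of $n$ are arbitrary), we can evaluate the error probability of testing all but $o(1/p)$ items directly:

```python
import math

def error_prob(n, T, p):
    # Probability that at least one of the n - T untested items is defective.
    return 1 - (1 - p) ** (n - T)

for n in [10**4, 10**6, 10**8]:
    p = 1 / math.log(n)                              # semisparse defective probability
    untested = max(1, round((1 / p) / math.log(n)))  # n - T = (1/p)/log n = o(1/p)
    print(n, untested, error_prob(n, n - untested, p))
```

The printed error probabilities shrink as $n$ grows, matching the proposition.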
\subsection{Maximum Likelihood Decoding}
In \cite{atia2012boolean}, Atia and Saligrama use information theory to analyze the performance of group testing under a maximum likelihood decoding. They use the combinatorial prior on defective sets rather than the iid prior, so the sparsity $k$ is fixed and a defective set of that sparsity is chosen uniformly at random.
Their work mostly focuses on the sparse regime, but they prove the following result which is nontrivial in parts of the semisparse regime.
\begin{theorem}[\cite{atia2012boolean} Thm. V.2]
\label{thm:atia_lb}
Let the test matrix with $T$ rows (tests) be drawn with entries i.i.d. from a Bernoulli distribution with parameter $1 / k$. There exists a constant $C$ independent of $n$ and $k$ such that when $k = o(n)$ and both $k$ and $n$ scale to infinity,
\begin{equation}
T \geq C \cdot k \log n \log^2 k
\end{equation}
suffices to recover the defective set exactly with error going to 0, assuming the defective set is chosen uniformly at random among those of size $k$.
\end{theorem}
A result of \cite{aldridge2019group} tells us how we can convert this result into one under the i.i.d. prior.
\begin{theorem}[\cite{aldridge2019group} Thm. 1.7]
\label{thm:prior_conversion}
If a sequence of test designs and decoding methods approaches 0 error with sparsity
\begin{equation*}
k = k_0 (1 + o(1)),
\end{equation*}
under the combinatorial prior with $k_0 = o(n)$, then the same sequence of test designs and decoding methods approaches 0 error with defective probability $p = k_0 / n$ under the i.i.d. prior.
\end{theorem}
This tells us that up to lower order terms, the bound in \cref{thm:atia_lb} will hold also under the i.i.d. prior with an appropriate defective probability (that is, taking $p = k/n$).
This bound does not tell us anything useful in the case that $p = 1 / \log(n)$, but can tell us something for defective probabilities like $p = 1 / \textrm{polylog}(n)$ when the degree of the polynomial is large enough. For example, if $p = 1 / \log^c(n)$ for some constant $c \geq 4$, \cref{thm:atia_lb} says that $T = O(n / \log^{c - 3}(n))$ tests will suffice for the error to go to 0.
\subsection{Randomized Upper Bound}
Another idea that may yield better results is to randomly construct a test matrix, apply a fixed decoding algorithm, and see if we can compute the resulting error probability in the regime of interest.
The simplest way of constructing a random test matrix is to simply include each item in the test independently with some fixed probability $q$, and the optimal choice is typically
\begin{equation*}
q \approx \frac{1}{np},
\end{equation*}
so that each test includes about $\frac{1}{p}$ items.
The two simplest decoding algorithms are COMP (combinatorial orthogonal matching pursuit) and DD (definite defectives). In the former, we report every item which appears in a negative test as nondefective, and every remaining item as defective. In the latter, we again report every item which appears in a negative test as nondefective, but of the remaining items, we report as defective only those that are the sole item of unknown status in some test (these are the ``definite defectives''). Thus DD reports a defective set which is a subset of that reported by COMP. The analysis of COMP is typically simpler, but may give worse results in some cases.
If we use COMP, then our decoding fails if and only if there exists some nondefective item which appears in only positive tests. This is equivalent to the notion of a ``totally disguised'' item from \cite{aldridge2018individual}, where they show that
\begin{equation*}
\Pr[i \textrm{ totally disguised}] \geq \prod_{t : i \in t} \left(1 - (1-p)^{w_t - 1}\right),
\end{equation*}
where $t$ represents a test and $w_t$ is the weight or number of items in test $t$.
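For intuition, this bound is easy to evaluate for a hypothetical design; here we assume item $i$ appears in 5 tests, each of weight about $1/p$ (all parameters illustrative):

```python
import math

def disguised_lower_bound(weights, p):
    """Lower bound on Pr[item i totally disguised]; `weights` lists the
    weight w_t of each test t containing item i."""
    prob = 1.0
    for w in weights:
        # Some other item in the weight-w test is defective.
        prob *= 1 - (1 - p) ** (w - 1)
    return prob

p = 1 / math.log(10**6)
w = round(1 / p)
print(disguised_lower_bound([w] * 5, p))
```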
The following is an (essentially unchanged) argument from \cite{aldridge2019group} Theorem 2.3 in the sparse regime, which gives an upper bound of $O(k \log n)$ measurements even in the semisparse regime using COMP decoding. This is not quite enough to be useful if $p = 1 / \log n$, but tells us something, for instance, if $p = 1 / \log^2 n$.
\begin{proposition}
Suppose we are working under the combinatorial prior over defective sets, with exact sparsity $k = o(n)$, where $k$ goes to infinity with $n$. Let our testing matrix $A$ have i.i.d. Bernoulli entries with parameter $q$. Then
\begin{equation*}
T = O(k \log n)
\end{equation*}
measurements suffices to ensure the error probability goes to 0 with $n$ using COMP decoding.
\end{proposition}
\begin{proof}
Under the COMP decoding, we fail exactly when there exists one or more nondefective items which do not appear in any negative tests. For a fixed nondefective and fixed test, the probability that the item is included in the test and the test result is negative is
\begin{equation*}
q(1 - q)^k,
\end{equation*}
thus the probability that the negation of this happens for all $T$ tests for that particular nondefective is
\begin{equation*}
(1 - q(1 - q)^k)^T.
\end{equation*}
There are $n - k < n$ such nondefectives, so by a union bound over all of them, the total error probability is at most
\begin{equation*}
n (1 - q(1 - q)^k)^T \leq n \exp(-T q (1-q)^k),
\end{equation*}
where we used also the inequality $(1 - x) \leq e^{-x}$.
The expression $q(1 - q)^k$ is maximized at $q = 1 / (k+1) \approx 1 / k$, so we choose this as our Bernoulli parameter, and then have
\begin{equation*}
q (1-q)^k \approx \frac{1}{k} \cdot \frac{1}{e}.
\end{equation*}
Then we can take $T = (1 + \delta) ek \ln n$ for a constant $\delta > 0$, and we have
\begin{align*}
\Pr[\mathrm{error}] &\leq n \exp(-T q (1-q)^k) \\
&\approx n \exp\left(-\frac{T}{ek}\right) \\
&= n \exp\left(-\frac{(1+\delta) ek \ln n}{ek}\right) \\
&= n \exp(-(1+\delta) \ln n) \\
&= n \cdot n^{-(1 + \delta)} \\
&= n^{-\delta},
\end{align*}
which goes to 0 as $n$ goes to infinity.
\end{proof}
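The union-bound computation in the proof can be checked numerically (an illustrative sketch, not a simulation of COMP; parameters are arbitrary). Note that for $p = 1/\log^2 n$ the resulting $T$ is well below $n$, i.e.\ cheaper than individual testing:

```python
import math

def comp_union_bound(n, k, T):
    # Union bound on COMP's error probability with a Bernoulli(q) design.
    q = 1 / (k + 1)
    return n * (1 - q * (1 - q) ** k) ** T

n = 10**6
k = round(n / math.log(n) ** 2)          # p = 1/log^2(n)
delta = 0.1
T = math.ceil((1 + delta) * math.e * k * math.log(n))
print(T / n, comp_union_bound(n, k, T))
```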
While this result is phrased in terms of the combinatorial prior, we can easily convert it to work with the i.i.d. prior without asymptotic loss using \cref{thm:prior_conversion}.
\section{Lower Bound Ideas}
\subsection{Counting Bound}
One well-known lower bound in group testing is the counting bound, which, while simple, has proven to be tight or near-tight in many cases. The bound states that given $T$ tests, $n$ items, and $k$ defectives,
\begin{equation*}
\Pr[\mathrm{Success}] \leq 2^T / \binom{n}{k}.
\end{equation*}
Substituting $np$ for $k$, taking the logarithm of both sides, and using the bound
\begin{equation*}
\frac{n^k}{k^k} \leq \binom{n}{k},
\end{equation*}
we find that
\begin{equation}
\label{eqn:counting_bound}
np \log \frac{1}{p} \leq T
\end{equation}
is necessary (up to constant factors) to obtain arbitrarily small error probability.
Substituting for example $p = \frac{1}{\log n}$, this yields
\begin{equation*}
\frac{n \log \log n}{\log n} \leq T,
\end{equation*}
telling us that we cannot hope to improve on individual testing by a polynomial factor, but perhaps something near a logarithmic factor improvement is possible.
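Concretely, evaluating the right-hand side of \cref{eqn:counting_bound} as a fraction of $n$ for $p = 1/\log n$ (illustrative values; $\log$ is base $e$ throughout):

```python
import math

def counting_bound(n, p):
    # np * log(1/p): the counting lower bound on T.
    return n * p * math.log(1 / p)

for n in [10**4, 10**8, 10**16]:
    p = 1 / math.log(n)
    print(n, counting_bound(n, p) / n)   # fraction of individual testing
```

The fraction decays like $\log\log n / \log n$, so at most a near-logarithmic factor of savings over individual testing is left open.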
\subsection{Entropy-Based Lower Bound}
Write $\bfX$ for the vector of random variables corresponding to the input, and $\bfY$ for the vector of random variables corresponding to the test outputs. Then one way of proving the counting bound is to first demonstrate
\begin{equation*}
H(\bfX) = n H(p) \leq H(\bfY) + \epsilon n
\end{equation*}
(where $H(p)$ is the binary entropy of a $\textrm{Bern}(p)$ r.v.) using Fano's inequality, and to then use the inequality
\begin{equation}
\label{eq:counting_bound_ind}
H(\bfY) \leq T,
\end{equation}
which is simply the subadditivity of entropy for the $T$ binary random variables of $\bfY$.
In \cite{agarwal2018novel}, they observe that \cref{eq:counting_bound_ind} is loose for group testing, because equality is obtained if and only if the test outcomes are independent. In group testing, the outcomes are generally not independent due to items shared between tests. Thus they improve slightly on the counting bound in parts of the linear regime by giving an improved bound on $H(\bfY)$ that exploits this dependence using the Madiman-Tetali inequalities \cite{madiman2010information}.
More specifically, write $\bfY_S$ with $S \subseteq [T]$ for the restriction of the vector of test outputs to the set of tests indexed by $S$. Then
the Madiman-Tetali inequalities say that we can upper bound the entropy of the joint distribution $\bfY_{[T]}$ as
\begin{equation}
\label{eq:madiman_tetali}
H(\bfY_{[T]}) \leq \sum_S \alpha(S) H(\bfY_S),
\end{equation}
where we can define the subsets $S \subseteq [T]$ however we want, subject to the constraint that the $\alpha(S)$ must form a fractional cover of $[T]$ -- that is, for each test $t$, we must have
\begin{equation*}
\sum_{S : t \in S} \alpha(S) \geq 1.
\end{equation*}
While we can choose the sets $S$ however we want, to make this bound useful we ought to pick them such that there is significant mutual information between the tests in each $S$, so that we improve beyond the simple bound from the subadditivity of entropy. In \cite{agarwal2018novel} they make the natural choice $\set{S_i}_{i \in [n]}$, where $S_i$ is the set of all tests that contain item $i$; the tests in $S_i$ clearly share some mutual information because of the shared item. Another possible choice would be to take sets that are unions of pairs (or even more) of the $S_i$.
Let's examine more closely the high level proof strategy of \cite{agarwal2018novel} to see how we might adapt it:
\begin{enumerate}
\item Split the test matrix $A$ up into submatrices $A_w$, where $A_w$ contains only the tests of weight exactly $w$ (we use $w$ for test weights to avoid clashing with the number of defectives $k$). Upper bound the entropy of $\bfY_{[T]}$ by the sum of entropies of the test results on these submatrices.
\item Let $S_w \subseteq [T]$ be the set of tests in $A_w$, and let $S_{w,i}$ be the subset of $S_w$ consisting of all tests containing item $i$. Then use Madiman-Tetali to bound
\begin{equation*}
H(\bfY_{S_w}) \leq \sum_{i \in [n]} \alpha(S_{w,i}) H(\bfY_{S_{w,i}}),
\end{equation*}
where we set each $\alpha(S_{w,i}) = 1 / w$. This is a valid fractional cover of $S_w$ because each test in $S_w$ has weight exactly $w$.
\item In order to bound the $H(\bfY_{S_{w,i}})$, prove a general bound on $H(\bfY_S)$ for any set $S$ of tests that all contain a shared item.
\item In the end, we essentially bound the sum of the entropies over all the constant-weight submatrices by the largest such entropy over all weights $1 \leq w \leq n$.
\end{enumerate}
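The fractional-cover condition used in step 2 can be verified mechanically: assigning weight $1/w_t$ to the item-sets through each test $t$ makes the total cover weight landing on $t$ exactly 1 (a small random example; the matrix is hypothetical):

```python
import random

random.seed(0)
n, T = 30, 12
# Random 0/1 test matrix: rows are tests, columns are items.
A = [[random.randint(0, 1) for _ in range(n)] for _ in range(T)]

for row in A:
    w = sum(row)                     # weight of this test
    if w == 0:
        continue                     # empty tests need no covering
    # Each item i in the test contributes alpha = 1/w to covering it.
    cover = sum(1 / w for i in range(n) if row[i] == 1)
    assert abs(cover - 1) < 1e-9
print("fractional cover verified")
```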
We could of course carry out this exact same process in our regime, but already the bound in \cite{agarwal2018novel} is worse than the counting bound for $k < 0.347 n$.
If we want to use Madiman-Tetali, it seems unlikely we can use a significantly different hypergraph setup than \cite{agarwal2018novel} -- we need the hyperedges to consist of sets of tests with significant mutual information within the sets, and the most natural way to do this is by forcing each set $S_i$ to contain a shared item $i$. It doesn't seem like we should expect to gain much by instead using unions of such sets (e.g. sets $S_{i,j} = S_i \cup S_j$), as the additional benefit is only the ``coincidentally'' shared items besides $i$ and $j$, which will be shared only between a much smaller subset of tests in $S_{i,j}$.
It's also not clear how we can apply Madiman-Tetali without first splitting up the test matrix into constant row weight submatrices.
\subsection{Extending Individual Testing Lower Bound}
In \cite{aldridge2018individual}, they show that in the linear regime ($p = \beta$ for some constant $\beta$), every item must be tested individually to obtain error probability that goes to 0 as $n$ goes to infinity.
To do so, they show that with constant probability, there exists a ``totally disguised'' item -- that is, an item for which every test including that item includes another item which is defective. Thus, the best we can do on this item is to guess whether it is defective or not, which gives error probability $\min(p, 1 - p)$ if each item is defective with probability $p$.
However, in the semisparse regime where, for instance, $p = \frac{1}{\log n}$, if we have a single totally disguised item we can simply predict it is not defective, which gives error probability only $p$ which goes to 0 as $n$ goes to infinity.
One idea for extending this type of lower bound to the semisparse regime would be to show that there exists a large enough number of totally disguised items that even if we predict they are all non-defective, our error probability is bounded away from 0. The next proposition quantifies how large is ``large enough'' for this purpose.
\begin{proposition}
\label{prop:disguised_lb}
Fix a subconstant defective probability $p = p(n) = o(1)$, which may depend on $n$. Suppose that with probability bounded away from zero there are at least $D$ totally disguised items. If $pD$ approaches positive infinity as $n$ grows, or equivalently
\begin{equation*}
\frac{1}{p} = o(D),
\end{equation*}
then the overall error probability is bounded away from 0.
\end{proposition}
\begin{proof}
Fix a particular testing scheme, and condition on the event that the defective set leaves at least $D$ items totally disguised (which occurs with probability bounded away from zero by assumption).
Since the defective probability is subconstant, the best we can do on these totally disguised items is to guess that they are all non-defective; each such guess is correct with probability $1 - p$. The probability that we guess correctly on all of them is then at most
\begin{equation*}
(1 - p)^D \approx e^{-pD}.
\end{equation*}
If $pD$ goes to positive infinity, this quantity goes to 0, so conditioned on the event the error probability approaches 1. The overall error probability is therefore bounded away from 0.
\end{proof}
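The tradeoff between $p$ and $D$ can be seen numerically: with $D \approx 1/p$ the probability of guessing every disguised item correctly stays near the constant $1/e$, while $D = \omega(1/p)$ drives it to 0 (illustrative values):

```python
import math

def all_correct_prob(p, D):
    # Probability of correctly guessing non-defective on all D disguised items.
    return (1 - p) ** D

for n in [10**3, 10**6, 10**9]:
    p = 1 / math.log(n)
    print(n, all_correct_prob(p, round(1 / p)),            # D = 1/p
          all_correct_prob(p, round(math.log(n) / p)))     # D = (1/p) log n
```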
To emulate the lower bound of \cite{aldridge2018individual} when $p = \frac{1}{\log n}$, we would have to show that with at least constant probability w.r.t. choice of defective set, the number of totally disguised items grows faster than $\log n$ regardless of testing scheme.
\subsection{Counting Disguised Items}
Given \cref{prop:disguised_lb}, if we could show that a large enough number of totally disguised items occurs with constant probability, that would translate into a lower bound on the number of tests needed for recovery with small error.
In \cite{aldridge2018individual}, they show that in the linear regime, having a single totally disguised item with at least constant probability is enough to guarantee the error is bounded away from 0. This result is strengthened in \cite{heng2020non}, where they show that the error probability actually goes to 1 unless individual testing is used in this regime.
To obtain this result, they assume that $T < (1-\epsilon)n$ tests are used, and give a procedure which iteratively constructs a set $W$ of size roughly $n^{1/2}$ (up to logarithmic factors) with two important properties:
\begin{enumerate}
\item The events that the items in $W$ are totally disguised are independent of each other.
\item Every item in $W$ is totally disguised with probability lower bounded by a constant.
\end{enumerate}
With these two properties together, we can apply binomial concentration results to see that with probability $1 - o(1)$ there are $\omega(1)$ totally disguised items in this scenario, which implies that the success probability is at most
\begin{equation*}
o(1) + \max \{p, 1-p\}^{\omega(1)},
\end{equation*}
which goes to 0 with $n$ (recall that $p$ is a constant in the linear regime, so $\max\{p, 1-p\}$ is a constant strictly less than 1).
In the semisparse regime, it will be necessary to instead demonstrate that with constant probability there is a set of $\Omega(1/p)$ totally disguised items.
\textcolor{red}{Add some details about the procedure of \cite{heng2020non} here}
For this purpose, the majority of the argument from \cite{heng2020non} will go through for us as well. Specifically, using their method we can construct a set $W$ of items which are disguised independently of each other. We will have
\begin{equation*}
|W| > \frac{\epsilon n - 2zn^{0.75} \ln(n)}{2z^2n^{0.5} \ln^2(n)},
\end{equation*}
where $z = \frac{2}{\ln(1/(1-p))}$. Even in our regime, this set is pretty large -- for instance if $p = 1 / \ln(n)$, then we have
\begin{align*}
z &= \frac{2}{\ln(1/(1-p))} \\
&= \frac{2}{-\ln(1-p)} \\
&= \frac{2}{-\ln\left(1-\frac{1}{\ln(n)}\right)}.
\end{align*}
As $n$ goes to infinity,
\begin{equation*}
-\ln \left(1 - \frac{1}{\ln(n)} \right) = \ln(\ln(n)) - \ln(\ln(n) - 1) = \frac{1}{\ln(n)} + O\left(\frac{1}{\ln^2(n)}\right) = \Theta\left(\frac{1}{\log(n)}\right),
\end{equation*}
so we have $z = \Theta(\log n)$, and thus
\begin{equation*}
|W| = \Omega(\epsilon n^{0.5} / \log^4(n)).
\end{equation*}
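The claimed growth rate of $z$ is easy to confirm numerically: the ratio $z / \ln n$ approaches 2, consistent with $z = \Theta(\log n)$ (illustrative values of $n$):

```python
import math

for n in [10**4, 10**8, 10**16, 10**32]:
    p = 1 / math.log(n)
    z = 2 / math.log(1 / (1 - p))
    print(n, z / math.log(n))   # converges to 2 from below
```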
The remaining step is to lower bound the probability that each item in $W$ is totally disguised -- unlike in the linear regime where this probability was easily seen to be constant, in the semisparse regime it is less clear what happens.
The lower bound we have is that for each item $i$ in $W$,
\begin{equation*}
\Pr[i \textrm{ totally disguised}] \geq \exp\left(\frac{2}{\epsilon} \mathcal{L}_p\right),
\end{equation*}
where
\begin{equation*}
\mathcal{L}_p = \min_{x=2, 3, \dotsc, n} x \ln(1 - (1-p)^{x-1}).
\end{equation*}
It is shown in \cite{coja2020optimal}, where a similar proof method is used in the sparse regime, that $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \frac{\ln 2}{p} + O(p^{-1/2}).
\end{equation*}
As we are primarily interested in an asymptotic analysis to see whether the argument of \cite{heng2020non} will go through at all, we will simply say that $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \Theta(1/p),
\end{equation*}
and thus
\begin{equation*}
\mathcal{L}_p \approx \frac{1}{p} \ln(1 - (1-p)^{1/p - 1}) \approx \frac{1}{p} \ln(1 - (1-p)^{1/p}).
\end{equation*}
As $n$ goes to infinity, $p$ goes to 0, and taking $x = 1/p$ in the standard limit
\begin{equation*}
\left(1 - \frac{1}{x}\right)^x \rightarrow \frac{1}{e}
\end{equation*}
gives $(1-p)^{1/p} \rightarrow 1/e$. Thus
\begin{equation*}
\mathcal{L}_p \approx \frac{1}{p} \ln\left(1 - \frac{1}{e}\right).
\end{equation*}
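Both the minimizer location $x \approx \ln(2)/p$ and the rough value $\mathcal{L}_p \approx \frac{1}{p}\ln(1 - 1/e)$ can be checked by brute force (an illustrative $p$; the range of $x$ is truncated for speed):

```python
import math

def minimize_L_p(p, x_max):
    # Brute-force minimum of x * ln(1 - (1-p)^(x-1)) over x in {2, ..., x_max}.
    best_x, best_val = None, float("inf")
    for x in range(2, x_max + 1):
        val = x * math.log(1 - (1 - p) ** (x - 1))
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

p = 0.01
x_star, val = minimize_L_p(p, 2000)
print(p * x_star / math.log(2))              # near 1: minimizer ~ ln(2)/p
print(val * p / math.log(1 - 1 / math.e))    # near 1: value ~ (1/p) ln(1 - 1/e)
```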
Finally substituting this back, we have
\begin{align*}
\Pr[i \textrm{ totally disguised}] &\geq \exp\left(\frac{2}{\epsilon} \mathcal{L}_p\right) \\
&\approx \exp \left( \frac{2}{\epsilon p} \ln \left(1 - \frac{1}{e} \right)\right) \\
&= \left( 1 - \frac{1}{e} \right) ^{2/(\epsilon p)}.
\end{align*}
Since the items in $W$ are totally disguised independently of each other, using a Chernoff bound there will be at least
\begin{equation*}
D = (1 - o(1)) |W| \left( 1 - \frac{1}{e} \right) ^{2/(\epsilon p)}
\end{equation*}
totally disguised items with constant probability. By \cref{prop:disguised_lb}, we will get an impossibility result with $(1-\epsilon)n$ tests as long as $D$ grows asymptotically faster than $1 / p$.
\textcolor{red}{From checking with computer algebra software, this seems like it will hold for $p = 1 / (\log(n))^{1 - \gamma}$ for any constant $\gamma > 0$ (and regardless of $\epsilon$), but not for $p = 1 / \log(n)$. Surprisingly, this implies that $n - o(n)$ tests are necessary even beyond the linear regime.}
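The check described in the note above can be reproduced numerically. Using $|W| \approx \sqrt{n}/\log^4 n$ (ignoring constants) and disguise probability $(1 - 1/e)^{2/(\epsilon p)}$, we compare $pD$ for the two choices of $p$; the value of $\epsilon$ is an arbitrary illustrative choice:

```python
import math

def pD(n, p, eps=0.5):
    W = math.sqrt(n) / math.log(n) ** 4           # |W| up to constants
    D = W * (1 - 1 / math.e) ** (2 / (eps * p))   # disguised count, up to constants
    return p * D

for n in [10**10, 10**20, 10**40]:
    p_easy = 1 / math.log(n) ** 0.5   # p = 1/log^{1-gamma}(n) with gamma = 1/2
    p_hard = 1 / math.log(n)
    print(n, pD(n, p_easy), pD(n, p_hard))
```

Consistent with the note, $pD$ grows without bound for $p = 1/\log^{1/2} n$ but decays for $p = 1/\log n$.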
\section{Introduction}
Group testing is a sparse recovery problem where we aim to recover a small set of $k$ ``defective'' items from among $n$ total items using pooled tests. Originally introduced in the context of testing blood samples for diseases, where multiple samples can be combined together \cite{dorfman1943detection}, it has since seen a variety of applications, including, recently, COVID-19 testing (see for instance \cite{yelin2020evaluation, gollier2020group, eberhardt2020multi}, though there are many more such works).
More formally, we let $\bfx \in \set{0, 1}^n$ represent the vector of items where a 1 indicates defectivity. A test vector $\bfm \in \set{0, 1}^n$ specifies a subset of items to be tested; the result is
\begin{equation*}
y = \bigvee_{i:\bfm_i = 1} \bfx_i,
\end{equation*}
the logical OR of the entries of $\bfx$ specified by the test.
We focus here on the nonadaptive setting, where all tests must be specified in advance of seeing any results. In this setting we write $M \in \set{0,1}^{T \times n}$ for the matrix of $T$ tests (rows), and $\bfy$ for the vector of all test results. We assume the vector of defectives $\bfx$ is generated randomly, the details of which are discussed in \cref{sec:priors}.
This is known as \emph{probabilistic group testing} (PGT), meaning that we are interested in determining how many tests are necessary for the probability of error (for some particular decoding method) to approach 0 as $n$ becomes large. This is in contrast to \emph{combinatorial group testing}, where we insist that any set of defectives is recoverable (i.e., probability of error is 0 for any $n$).
In general, we are interested in how the number of tests $T$ must scale as a function of both $n$ and $k$. This behavior can be quite different depending on the scaling of $k$ relative to $n$, so the problem is often broken down further into different ``regimes'' for $k$. In this work we prove both lower and upper bounds on $T$ for the specific regime $k = \Theta\left(\frac{n}{\log n}\right)$, or equivalently when the probability of an item being defective is $\Theta\left(\frac{1}{\log n}\right)$. This regime has traditionally seen little attention, but recent work has shown that this is exactly the scaling of $k$ for which $\Omega(n)$ tests are necessary. As $n$ tests trivially suffice by testing each item individually, determining the exact crossover point where individual testing becomes suboptimal, which happens in the $k = \Theta\left(\frac{n}{\log n}\right)$ regime, seems of particular interest.
\subsection{Related Work}
\label{sec:related_work}
Much of the initial work on group testing concerned the zero-error combinatorial setting, where a single test matrix must correctly classify every possible defective vector. The literature on this variant is vast -- see for instance the book of Du and Hwang \cite{du2000combinatorial}. We focus the rest of this section on the low-error probabilistic setting.
PGT has traditionally been split into two rather different regimes: the ``sparse'' regime, where the total number of defectives is $O(n^\theta)$ for some $0 \leq \theta < 1$, and the ``linear'' regime where the total number of defectives is $\beta n$ for some constant $\beta < 1$.
In both regimes, the folklore ``counting bound'' (see \cite{chan2011non} for instance) shows that $\Omega(k \log \frac{n}{k})$ measurements are necessary. In the sparse regime, this is equivalent to $\Omega(k \log n)$, and order-optimal randomized constructions have been known for some time \cite{sebHo1985two, atia2012boolean}. In the linear regime, the counting bound implies $\Omega(n)$ measurements are necessary, and trivially $n$ suffice by testing items individually.
More recently, explicit constructions of matrices for PGT have been studied. Mazumdar \cite{mazumdar2015nonadaptive} gave explicit constructions requiring $O(k \log^2 n / \log k)$ measurements. Follow up works of Barg and Mazumdar~\cite{barg2017group}, and Inan et al. \cite{inan2019optimality}, show order-optimal or near-optimal results.
Another direction has been to develop constructions with good decoding properties. PGT schemes are considered efficiently decodable if they require $O(T)$ time to decode, where $T$ is the total number of measurements; several works \cite{cai2017efficient, vem2017group, lee2019saffron} gave efficiently decodable schemes which were not quite order-optimal, before Bondorf et al. \cite{bondorf2020sublinear} gave the first order-optimal efficiently decodable construction. A very recent result \cite{inan2020strongly} shows that we can even have explicit constructions which are both order-optimal and efficiently decodable.
The last line of work we discuss in PGT, and the most relevant to our work here, has been to go beyond order-optimality and determine the precise constants involved in various regimes. Improvements on the counting bound were proposed in \cite{agarwal2018novel}, and subsequently it was shown that individual testing is optimal in the linear regime~\cite{aldridge2018individual}.
In the sparse regime the characterization of exact constants results in a more complex picture, but a series of works \cite{scarlett2016limits, aldridge2017capacity, johnson2018performance, coja2020optimal} have narrowed down the constants to the point that the lower and upper bounds are matching for any $\theta \in [0, 1)$ (where $k = n^\theta$). We direct the interested reader to the recent survey of Aldridge et al. \cite{aldridge2019group}.
However, in between these two regimes is another regime which has seen much less study, namely when the total number of defectives is $n / \textrm{poly}(\log n)$ (\hspace{1sp}\cite{bay2020optimal} refers to these regimes as ``mildly sublinear''). In particular when $k = \Theta(n / \log n)$ the counting bound implies only that $\Omega(n \log \log n/ (\log n))$ measurements are necessary, but a method using $o(n)$ measurements proved elusive. In very recent work, Bay et al. \cite{bay2020optimal} showed that in fact the lower bound can be improved to $\Omega(k \log n)$ in this regime, and thus known constructions are order-optimal.
In light of the asymptotic improvement to the lower bound for $k = n / \textrm{poly}(\log n)$ \cite{bay2020optimal}, we feel it is a natural next step to try and nail down the constants in this regime. While we focus on the particular case $k = \lambda n / \log n$ in this paper, we expect our results should extend readily to other mildly sublinear regimes.
\subsection{Priors for PGT}
\label{sec:priors}
There are two commonly used ``priors'' over the random defective vector involved in PGT. Under the \emph{combinatorial prior}, the number of defectives is fixed to be $k$, and the defective set is drawn uniformly from the $\binom{n}{k}$ possibilities. Under the \emph{i.i.d. prior}, we instead fix a defective probability $p$, and each item is defective independently with probability $p$.
Conveniently, it turns out that our choice of prior makes little difference in the number of tests necessary (assuming we take $k = pn$). The following result from the survey of Aldridge et al. \cite{aldridge2019group} formalizes this notion.
\begin{theorem}[\hspace{1sp}\cite{aldridge2019group} Thm. 1.7]
\label{thm:comb_prior_conversion}
Suppose a sequence of test designs and decoding method has probability of error going to 0 as $n$ goes to infinity under the combinatorial prior with $k = k_0(1 + o(1))$, where $k_0 = o(n)$ and $k_0$ goes to infinity with $n$. Then the same sequence of test designs and decoding method has error going to 0 as $n$ goes to infinity under the i.i.d. prior with $p = k_0 / n$.
\end{theorem}
A similar result holds for converting the opposite direction. In this paper we will primarily use the i.i.d. prior, except for our upper bound in \cref{sec:constant_ub} which relies on preexisting work under the combinatorial prior. For ease of reference to that work we will use the combinatorial prior there, and \cref{thm:comb_prior_conversion} tells us we can convert our main result (\cref{thm:constant_ub}) to an equivalent result under the i.i.d. prior.
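The two priors are straightforward to sample from (a minimal sketch; the parameters are arbitrary):

```python
import random

rng = random.Random(1)
n, k = 10, 3

# Combinatorial prior: a uniformly random size-k defective set.
S = set(rng.sample(range(n), k))
x_comb = [1 if i in S else 0 for i in range(n)]

# i.i.d. prior: each item defective independently with probability p = k/n.
p = k / n
x_iid = [1 if rng.random() < p else 0 for _ in range(n)]

print(sum(x_comb), sum(x_iid))   # exactly k versus Binomial(n, p)
```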
\subsection{Notation}
We will use the following notational conventions throughout:
\begin{itemize}
\item $M$ is a test matrix with rows corresponding to tests and columns corresponding to items.
\item $T$ is the number of tests (rows of $M$).
\item $n$ is the total number of items (columns of $M$).
\item $k$ is the total number of defectives.
\item $p$ is the defective probability under the i.i.d. prior.
\item $\log$ is the logarithm base $e$.
\item $\epsilon$, $\delta$, $\gamma$, $\lambda$, $\alpha$, $\nu$ are constants.
\end{itemize}
Our results show that when $p = \lambda / \log n$, it is sufficient to have $T/n \geq \min(1,\frac{\lambda}{\log^2 2})$, whereas $T/n \geq \frac{\lambda}{\lambda+\log^2 2}$ is necessary. Note that these two quantities are always within a factor of 2 of each other, and approach equality for very small or large values of $\lambda$.
\section{Upper Bound with Near-Constant Tests-Per-Item Design}
\label{sec:constant_ub}
For probabilistic group testing in the sparse regime, it has been shown that the optimal rate is achieved by so-called ``Near-Constant Tests-per-Item'' designs \cite{johnson2018performance}. These matrices are formed by drawing $\frac{\nu T}{k}$ tests uniformly with replacement per item, where $\nu$ is a small constant parameter to be optimized, $T$ is the total number of tests, and $k$ is the number of defectives (assuming the combinatorial prior).
Drawing the tests with replacement means that it is possible that the same test is drawn more than once, hence the ``near-constant'' tests-per-item, rather than constant. This simplifies the analysis as the draws are independent.
We will use this type of test matrix to prove our upper bound.
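Sampling such a design is simple (a sketch with illustrative parameters):

```python
import random

def near_constant_design(n, T, L, seed=0):
    """Draw L tests uniformly with replacement for each of n items;
    returns a T x n 0/1 test matrix."""
    rng = random.Random(seed)
    M = [[0] * n for _ in range(T)]
    for i in range(n):
        for _ in range(L):
            M[rng.randrange(T)][i] = 1   # repeated draws simply collapse
    return M

M = near_constant_design(n=100, T=40, L=5)
# Each item lands in between 1 and L distinct tests.
print(max(sum(row[i] for row in M) for i in range(100)))
```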
\subsection{Preliminaries}
We closely follow the approach taken by Johnson et al. \cite{johnson2018performance} for the sparse regime in the following, as their argument is mostly regime-independent. The following notation will be useful in describing their approach succinctly:
\begin{itemize}
\item $\mathcal{K}$ is the set of all defective items.
\item $W^{(\mathcal{K})}$ is the number of tests containing at least one item from $\mathcal{K}$.
\item $G$ is the number of nondefective items that do not appear in any negative tests.
\end{itemize}
Our upper bound will employ the simple COMP decoding~\cite{du2000combinatorial,chan2011non}. This algorithm works by first identifying all items present in negative tests and classifying them as nondefective. Then, any remaining items are classified as defective. While simple, in most cases this algorithm has proven to be asymptotically as good as more complex decodings, and is easy to analyze.
First we present two useful lemmas borrowed from \cite{johnson2018performance}. The first states, in plain language, that given a near-constant tests-per-item design, the number of tests including at least one defective concentrates tightly around its mean.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 1]
\label[lemma]{thm:wk_concentration}
Let $|\mathcal{K}| = k$, and fix constants $\alpha > 0$ and $\delta > 0$. Suppose we form a test design of $T$ tests by drawing $L = \frac{\alpha T}{k}$ tests uniformly at random with replacement for each of $n$ items. Then (assuming $T$ goes to infinity with $n$) the following holds:
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\alpha})T| \geq \delta] \leq 2 \exp \left( -\frac{\delta^2}{\alpha T} \right).
\end{equation*}
\end{lemma}
The next lemma, again from \cite{johnson2018performance}, characterizes the distribution of $G$ conditioned on a fixed value of $W^{(\mathcal{K})}$.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 5]
\label[lemma]{thm:g_dist}
Let $L = \frac{\nu T}{k}$ be the number of tests drawn for each item in a near-constant tests-per-item design. Then
\begin{equation*}
G \ | \ (W^{(\mathcal{K})} = x) \sim \mathrm{Bin} (n-k, (x/T)^L).
\end{equation*}
\end{lemma}
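Both lemmas are easy to probe empirically. The following sketch, with illustrative parameters of our own choosing, draws a near-constant tests-per-item design and checks that $W^{(\mathcal{K})}$ lands near $(1 - e^{-\alpha})T$ and that the count of fully hidden nondefectives matches the binomial mean $(n-k)(W^{(\mathcal{K})}/T)^L$:

```python
import math
import random

random.seed(1)
n, k, T = 2000, 100, 1000
alpha = 0.5
L = int(alpha * T / k)  # 5 test draws per item, with replacement

draws = {i: [random.randrange(T) for _ in range(L)] for i in range(n)}
defectives = set(range(k))  # take items 0..k-1 as defective

# Tests containing at least one defective item.
positive = {t for i in defectives for t in draws[i]}
W = len(positive)

# Lemma 1: W concentrates near (1 - e^{-alpha}) T.
assert abs(W - (1 - math.exp(-alpha)) * T) < 0.08 * T

# Lemma 5: conditioned on W, each nondefective independently has all L
# of its draws land in positive tests with probability (W/T)^L, so
# G ~ Bin(n - k, (W/T)^L).
G = sum(1 for i in range(k, n) if all(t in positive for t in draws[i]))
assert abs(G - (n - k) * (W / T) ** L) < 25
```

The second assertion only checks that $G$ sits within a few standard deviations of its conditional mean; the exact binomial law is immediate because each nondefective's $L$ draws are uniform and independent of the defectives' draws.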
\subsection{Combining Together}
Now we are ready to prove the upper bound, namely that we can succeed with high probability using about $n \cdot \lambda/\log^2 2$ tests.
\begin{theorem}
\label{thm:constant_ub}
Let $k = \frac{\lambda n}{\log n}$ be the total number of defectives. Suppose our test design is near-constant tests-per-item,
\begin{equation*}
T = (1 + \epsilon) \frac{\lambda}{\log^2 2} n
\end{equation*}
for a constant $\epsilon > 0$, and we draw $L = \frac{T \log 2}{k}$ tests per item with replacement. Then for sufficiently large $n$, the success probability of the COMP decoding is $1 - o(1)$.
\end{theorem}
\begin{proof}
As $G$ is the number of nondefectives not appearing in any negative tests, we know COMP succeeds if and only if $G = 0$. Then the main idea here is that we will use the equation
\begin{equation}
\label{eq:cond_prob_sum}
\Pr[G = 0] = \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \Pr[G=0 | W^{(\mathcal{K})} = x],
\end{equation}
applying \Cref{thm:g_dist} when $x \leq (1/2 + \delta) T$, and showing that the probability of $x > (1/2 + \delta) T$ goes to 0 for large $n$ using \Cref{thm:wk_concentration}.
By \Cref{thm:g_dist},
\begin{equation*}
\label{eq:success_conditional_exact}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] = \left( 1 - \left( \frac{x}{T} \right)^L \right)^{n-k}.
\end{equation*}
As this function is decreasing in $x$, for all $x \leq (1/2 + \delta)T$, we have
\begin{equation*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] \geq \Pr[G = 0 \ | \ W^{(\mathcal{K})} = (1/2 + \delta)T]
= (1 - (1/2 + \delta)^L)^{n-k}.
\end{equation*}
By definition
\begin{equation*}
L = \frac{T \log 2}{k} = \frac{(1+\epsilon) \lambda n \log 2 \log n}{(\log 2)^2 \lambda n} = \frac{(1+\epsilon) \log n}{\log 2},
\end{equation*}
so this gives
\begin{equation*}
\frac{1}{2^L} = \left( \frac{1}{2^{1 / \log 2}} \right)^{(1+\epsilon) \log n} = \left(\frac{1}{e^{\log n}} \right)^{(1+\epsilon)} = \frac{1}{n^{1+\epsilon}}.
\end{equation*}
Taking $\delta \leq \frac{1}{2L}$, every term $\binom{L}{j} (1/2)^{L-j} \delta^j$ in the binomial expansion of $(1/2 + \delta)^L$ is at most $1/2^L$, since $\binom{L}{j} (2\delta)^j \leq (2L\delta)^j / j! \leq 1$, so we have
\begin{equation*}
(1/2 + \delta)^L \leq \frac{L+1}{2^L} = \frac{L+1}{n^{1+\epsilon}} < \frac{1}{n^{1+\epsilon/2}},
\end{equation*}
where in the last step we use the fact that $L = \Theta(\log n)$ and take $n$ sufficiently large. Then, by Bernoulli's inequality $(1-y)^m \geq 1 - my$, we have for $x \leq (1/2 + \delta)T$
\begin{equation*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] > \left(1 - \frac{1}{n^{1+\epsilon/2}} \right)^{n-k}
\geq 1 - \frac{n-k}{n^{1+\epsilon/2}}
> 1 - \frac{n}{n^{1+\epsilon/2}}
= 1 - \frac{1}{n^{\epsilon/2}},
\end{equation*}
which goes to 1 for large $n$.
Plugging this back into \eqref{eq:cond_prob_sum},
\begin{align}
\Pr[G = 0] =& \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
\geq& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
>& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \Pr[W^{(\mathcal{K})} \leq (1/2 + \delta)T] \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \left( 1 - \Pr[W^{(\mathcal{K})} > (1/2 + \delta)T]\right) \label{eq:final_succ_prob}.
\end{align}
Finally, we apply \Cref{thm:wk_concentration} to the latter probability with $\alpha = \log 2$ (and $\delta T$ in place of the $\delta$ in the original lemma statement), which tells us
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\log 2})T| \geq \delta T] \leq 2 \exp \left( -\frac{(\delta T)^2}{T \log 2} \right),
\end{equation*}
and thus
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (T/2)| \geq \delta T] \leq 2 \exp \left( -\frac{\delta^2 T}{\log 2} \right)
= \Theta \left( \exp \left( -\frac{n}{\log^2 n}\right)\right),
\end{equation*}
which clearly goes to 0 for large $n$. Thus the success probability in \eqref{eq:final_succ_prob} will go to 1, concluding the proof.
\end{proof}
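To make the bookkeeping in the proof concrete, the key quantities can be evaluated numerically. The values below ($\lambda = 1$, $\epsilon = 1$, $n = 10^6$) are our own illustrative choices; for small $\epsilon$ the ``sufficiently large $n$'' requirement kicks in much later, since we need $n^{\epsilon/2} > L + 1$.

```python
import math

lam, eps, n = 1.0, 1.0, 10**6  # illustrative values only
k = lam * n / math.log(n)
T = (1 + eps) * lam / (math.log(2) ** 2) * n
L = T * math.log(2) / k

# L = (1 + eps) log n / log 2, so 2^{-L} = n^{-(1+eps)}.
assert math.isclose(L, (1 + eps) * math.log(n) / math.log(2))
assert math.isclose(2 ** (-L), n ** (-(1 + eps)), rel_tol=1e-6)

# (L+1)/2^L < n^{-(1+eps/2)} once n^{eps/2} exceeds L + 1.
assert (L + 1) / 2 ** L < n ** (-(1 + eps / 2))

# The conditional success probability is then already close to 1.
cond = (1 - (L + 1) / 2 ** L) ** (n - k)
assert cond > 1 - n ** (-eps / 2)
```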
\section{Determining the Constant in the Lower Bound}
The recent work of Bay et al. \cite{bay2020optimal} shows that a lower bound of $\min(n, \Omega(k \log n))$ tests holds for any $k$. In the case of interest here, where $k = \lambda n / \log n$, this tells us that $\Omega(n)$ tests are needed. In this section we determine the necessary number of tests more precisely, up to the leading constant.
The argument of \cite{bay2020optimal} works by demonstrating that with significantly fewer than $k \log n$ tests, there must exist a fairly large number of ``totally disguised'' items; that is, items for which no test gives us any information about whether or not they are defective. With no information about these items' defectivity, we cannot do better than guessing on each one.
More specifically, they give a procedure that constructs a set of items $W$ with the following properties:
\begin{enumerate}
\item Each item in $W$ is totally disguised with probability at least $\mathcal{L}^*$.
\item The events that each item in $W$ is totally disguised are independent of each other.
\end{enumerate}
Since these events are independent, we can combine bounds on $|W|$ and $\mathcal{L}^*$ with simple concentration inequalities to get a lower bound on the number of totally disguised items.
For our lower bound we will modify their procedure slightly, as described below.
\subsection{How Many Disguised Items are Needed}
Since disguised items are defective independently with probability $p$ and for us $p < \frac{1}{2}$, the best any algorithm
can do on a disguised item is predict it is nondefective. This guess is correct with probability $1 - p$. Thus if the total number of disguised items is $D$, the success probability of any algorithm is at most
\begin{equation*}
(1 - p)^D \leq \exp(-p D).
\end{equation*}
\subsection{Lower Bounding $|W|$}
We will closely follow the method of \cite{bay2020optimal}, but first slightly redefine their notion of ``very-present'' items: items that appear in far more tests than the average item, and which are discarded in a preprocessing step before constructing $W$.
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 4, modified]
\label[lemma]{thm:very_present}
Define an item to be \emph{very-present} if it appears in more than $t_{max} = \log^3 n$ tests. If $T \leq n$ and no test contains more than $z \log n$ items, then the number of very-present items is
\begin{equation*}
n_{vp} \leq \frac{nz}{\log^2 n}.
\end{equation*}
\end{lemma}
\begin{proof}
Consider the total number of (item, test) pairs $P$. From the assumptions, we have $P \leq Tz \log n \leq nz \log n$. Also, by the definition of very-present items, we have $n_{vp} \log^3 n \leq P$. Then $n_{vp} \log^3 n \leq nz \log n$ implies the result.
\end{proof}
For bounding $|W|$, we have the following:
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 6 Pf.]
\label[lemma]{thm:w_lb}
Suppose $W$ is constructed as described in Procedure 1 of \cite{bay2020optimal}, and furthermore we modify their stopping rule so that we stop when there are less than $\frac{\epsilon n}{1 + \gamma}$ items rather than $\frac{\epsilon n}{2}$, where $\gamma$ is a small positive constant. Then
\begin{equation}
\label{eq:w_lb}
|W| \geq \frac{(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n}{z^2 \log^8 n},
\end{equation}
where $z = \frac{2}{\log (1/(1-p))}$.
\end{lemma}
\begin{proof}
From the proof of \cite{bay2020optimal} Lemma 6, we have that the method of constructing $W$ yields the inequality
\begin{equation*}
\frac{\epsilon n}{1 + \gamma} \geq \epsilon n - n_{vp} - |W| t_{max}^2 z^2 \log^2 n,
\end{equation*}
where $n_{vp}$ and $t_{max}$ are as defined in our \Cref{thm:very_present}. Rearranging this yields
\begin{equation*}
|W| \geq \frac{\gamma \epsilon n/(1+\gamma) - n_{vp}}{t_{max}^2 z^2 \log^2 n},
\end{equation*}
and substituting the values from \Cref{thm:very_present} for $n_{vp}$ and $t_{max}$ gives the result.
\end{proof}
In the case of interest, where $p = \frac{\lambda}{\log n}$, we have
\begin{equation*}
z = -2 / \log \left(1 - \frac{\lambda}{\log n}\right) = \frac{2 \log n}{\lambda} - 1 + o(1),
\end{equation*}
where we have expanded the logarithm as $n$ goes to infinity.
Then
\begin{equation*}
\frac{nz}{\log^2 n} = (1 + o(1)) \frac{2 n \log n}{\lambda \log^2 n} = (1 + o(1)) \frac{2 n}{\lambda \log n} = o(n),
\end{equation*}
so asymptotically we have
\begin{equation*}
(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n = \Theta(n).
\end{equation*}
Substituting the value of $z$ back into the denominator of \eqref{eq:w_lb} gives
\begin{equation*}
|W| \geq \frac{cn}{\log^{10} n}
\end{equation*}
as $n$ goes to infinity, for some constant $c>0$. This will be sufficient for our purposes, as it will turn out that only the exponent of $n$ is relevant for this term.
\subsection{Lower Bounding $\mathcal{L^*}$}
For lower bounding $\mathcal{L}^*$, they show in \cite{bay2020optimal} that if we stop constructing $W$ when there are less than $n_{final}$ items remaining, then
\begin{equation*}
\mathcal{L}^* \geq \exp \left( \frac{T}{n_{final}} \mathcal{L}_p\right),
\end{equation*}
where
\begin{equation*}
\mathcal{L}_p = \min_{x=2, 3, \dotsc, n} x \log (1 - (1-p)^{x-1}).
\end{equation*}
As we modified their Procedure 1 to stop with less than $\frac{\epsilon n}{1 + \gamma}$ items, we will have
\begin{equation}
\label{eq:our_lstar}
\mathcal{L}^* \geq \exp \left( \frac{(1+\gamma)T}{\epsilon n} \mathcal{L}_p\right).
\end{equation}
Furthermore, Coja-Oghlan et al. \cite{coja2020optimal} show, in the sparse regime, that the function $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \frac{\log 2}{p} + O(p^{-1/2}).
\end{equation*}
In our regime neglecting lower order terms, this yields
\begin{equation*}
\mathcal{L}_p \geq \frac{\log n \log 2}{\lambda} \log \left( 1 - \left(\frac{\log n - \lambda}{\log n} \right)^{(\log n \log 2)/\lambda - 1}\right).
\end{equation*}
From this, we have
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( 1 - \left( \frac{\log n - \lambda}{\log n} \right)^{\log 2 \log n / \lambda - 1}\right)^{\log 2 \log n / \lambda}.
\end{equation*}
The Taylor series expansion at $x = \infty$ of
\begin{equation*}
\left( \frac{x-\lambda}{x} \right)^{\log 2 x/\lambda - 1}
\end{equation*}
is $\frac{1}{2} + O(\frac{1}{x})$, so for large $n$ dropping lower order terms this gives
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( \frac{1}{2} \right)^{\log 2 \log n / \lambda},
\end{equation*}
and thus substituting $T = (1-\epsilon)n$ into our expression for $\mathcal{L}^*$ from \eqref{eq:our_lstar},
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{2} \right)^{(\log 2 \log n / \lambda) \cdot (1+\gamma)(1 - \epsilon)/\epsilon},
\end{equation*}
which can be simplified to
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2 (1 - \epsilon)/(\lambda \epsilon)}.
\end{equation*}
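The minimization of $\mathcal{L}_p$ used above is easy to check numerically. The sketch below, with an arbitrary illustrative choice of $p$, confirms that the minimizer sits near $\log 2 / p$ and that the minimum value is close to $(\log 2 / p)\log(1/2)$:

```python
import math

p = 0.01  # arbitrary illustrative value

def f(x):
    # The objective x * log(1 - (1-p)^{x-1}) whose minimum is L_p.
    return x * math.log(1 - (1 - p) ** (x - 1))

x_star = min(range(2, 2001), key=f)

# Minimizer is at log(2)/p + O(p^{-1/2}) per Coja-Oghlan et al.
assert abs(x_star - math.log(2) / p) < 10
# And L_p itself is roughly (log 2 / p) * log(1/2) there.
assert math.isclose(f(x_star), (math.log(2) / p) * math.log(0.5), rel_tol=0.05)
```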
\subsection{Combining Together}
Writing $D$ for the total number of disguised items, we have $\mathbb{E}[D] \geq \mathcal{L}^* |W|$, and as each item in $W$ is disguised independently, by a Chernoff bound we have $D \geq (1 - o(1)) \mathcal{L}^* |W|$ with high probability. Any algorithm's success probability is upper bounded by $\exp(-p D)$, so in order for this success probability to go to 1, we need $pD$ to go to 0. Then substituting in our bounds for $\mathcal{L}^*$ and $|W|$, we need
\begin{align*}
-pD \leq& -p \mathcal{L}^* |W| \\
\leq& \frac{-\lambda}{\log n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)} \frac{cn}{\log^{10} n} \\
=& \frac{-\lambda cn}{\log^{11} n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)}
\end{align*}
to go to 0. Only the highest order terms are relevant, so we can just look at the exponents of $n$, and the expression will go to 0 if
\begin{align*}
& 1 < \frac{(1+\gamma)(\log 2)^2(1 - \epsilon)}{\epsilon \lambda} \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2(1 - \epsilon) \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2 - \epsilon (1+\gamma)(\log 2)^2 \\
\implies& \epsilon (\lambda + (1+\gamma)(\log 2)^2) < (1+\gamma)(\log 2)^2 \\
\implies& \epsilon < \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2}.
\end{align*}
We set $T = (1- \epsilon)n$, so to have success probability going to 1 we must have
\begin{equation*}
1 - \epsilon > 1 - \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2} = \frac{\lambda}{\lambda + (1+\gamma) (\log 2)^2},
\end{equation*}
and recall $\gamma$ can be any constant greater than zero.
Altogether this shows the following.
\begin{theorem}
\label{thm:full_lb}
There exists $n_0$ such that for all $n > n_0$, any test scheme using $T$ tests to identify defectives among $n$ items, each independently defective with probability $p = \frac{\lambda}{\log n}$, with
\begin{equation*}
T \leq (1 - \epsilon) \frac{\lambda}{\lambda + \log^2 2} n
\end{equation*}
for some constant $\epsilon > 0$ independent of $n$, must have error probability $1 - o(1)$.
\end{theorem}
\section{Conclusion and Discussion}
We have shown that for nonadaptive PGT in the regime that $p = \lambda / \log n$ (or equivalently $k = \lambda n / \log n$),
\begin{equation*}
\min \left(1, \left(1+\epsilon\right) \frac{\lambda}{\log^2 2}\right) n
\end{equation*}
measurements suffice to obtain error probability going to 0, and at least
\begin{equation*}
\left(\left(1 - \epsilon\right) \frac{\lambda}{\lambda + \log^2 2}\right) n
\end{equation*}
are necessary, for any constant $\epsilon > 0$. \Cref{fig:pgt_bound_comp} shows a graphical comparison of the upper and lower bounds.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{pgt_bound_comp.png}
\caption{Comparison of upper and lower bounds for PGT in the regime where $k = \lambda n / \log n$. The shaded region indicates the gap between the current upper and lower bounds.}
\label{fig:pgt_bound_comp}
\end{figure}
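The two constants can also be compared directly in code (a quick numeric sketch; the function names are our own):

```python
import math

log2sq = math.log(2) ** 2  # (log 2)^2, about 0.4805

def upper(lam):
    # Tests per item sufficient: min of near-constant design and
    # individual testing.
    return min(1.0, lam / log2sq)

def lower(lam):
    # Tests per item required by the disguising argument.
    return lam / (lam + log2sq)

for lam in [0.1, 0.5, 1.0, 2.0]:
    assert lower(lam) <= upper(lam)  # the bounds never cross

# Individual testing (1 test per item) matches the upper bound exactly
# once lam >= (log 2)^2.
assert upper(log2sq) == 1.0
```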
A natural next step would be to close this gap. In the sparse regime where $k = O(n^\theta)$, more complex decodings known as DD and SPIV have been shown to improve on the COMP decoding \cite{johnson2018performance,coja2020optimal}, although the benefit seems to vanish as $\theta$ approaches 1.
We conjecture that the lower bound should come up to meet the upper bound obtained by the minimum of near-constant tests-per-item designs and individual testing. This would imply that the exact point at which individual testing becomes suboptimal is when $p < (\log 2)^2 / \log n$.
The reason for this belief is as follows. Suppose $D$ is the (random) number of disguised items. A simple calculation shows that
\begin{equation*}
\mathbb{E}[D] \ge n \exp(T/n \cdot \mathcal{L}_p)
\end{equation*}
where $\mathcal{L}_p$ was defined in the last section. Using the same calculation as before, we obtain
\begin{equation*}
\mathbb{E}[D] \ge n 2^{-\frac{T}{n} \frac{\log 2}{p}}.
\end{equation*}
Given that the number of disguised items is $D$, the probability of correct decoding is at most $(1-p)^D \leq \exp(-pD)$, where $p = \frac{\lambda}{\log n}$. Substituting $\mathbb{E}[D]$ for $D$ above, we see that the probability of correct decoding goes to zero whenever $\frac{T}{n} < \frac{\lambda}{\log^2 2}$.
This shows that the lower bound should improve to match the upper bound, provided the random variable $D$ concentrates around its mean; proving this concentration remains open.
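This heuristic can be checked numerically: with $T/n$ held strictly below the conjectured threshold, the lower bound $p\,\mathbb{E}[D] \geq p\,n\,2^{-\frac{T}{n}\frac{\log 2}{p}}$ grows without bound. The sample values below ($\lambda = 1$, $T/n$ at $0.9$ of the threshold) are arbitrary:

```python
import math

lam = 1.0
ratio = 0.9 * lam / (math.log(2) ** 2)  # T/n just below lam / (log 2)^2

def p_times_ED_lower(n):
    # Lower bound on p * E[D] from the display above.
    p = lam / math.log(n)
    return p * n * 2 ** (-ratio * math.log(2) / p)

# Here p * E[D] >= lam * n^{0.1} / log n, which grows with n,
# so exp(-p D) should be driven to 0.
assert p_times_ED_lower(10 ** 12) > p_times_ED_lower(10 ** 6) > 0
```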
\bibliographystyle{IEEEtran}
\section{Introduction}
A new lower bound for small error nonadaptive group testing in the ``slightly sublinear'' regime where $k = o(n)$ but also $k = \omega(n^{\theta})$ for any $\theta < 1$ was recently proven by Bay et al. \cite{bay2020optimal}. They showed that at least $\Theta(k \log n)$ measurements are necessary, which implies that for $k = \omega( n / \log n)$, we essentially\footnote{Technically, as the defective probability $p$ is sublinear, we can neglect testing $o(1 / p)$ items and predict they are nondefective without significantly influencing our error probability in the limit as $n$ goes to infinity.} cannot do better than testing items individually.
This lower bound asymptotically matches an upper bound of $O(k \log n)$ which can be obtained via a simple probabilistic analysis.
However, \cite{bay2020optimal} does not attempt to characterize the threshold behavior when $k = \Theta(n / \log n)$, showing only that $\delta n$ measurements are necessary for some constant $\delta$. Here we will try to determine exactly what happens in this regime.
\section{Determining the Constant in the Lower Bound}
The argument of \cite{bay2020optimal} works by demonstrating that with significantly fewer than $k \log n$ measurements, there must exist a fairly large number of ``totally disguised'' items; that is, items for which no test gives us any information about whether or not they are defective. If we have no information about these items' defectivity, we effectively cannot do better than guessing on each one.
More specifically, they construct a set of items $W$ with the following properties:
\begin{enumerate}
\item Each item in $W$ is totally disguised with probability at least $\mathcal{L}^*$.
\item The events that each item in $W$ is totally disguised are independent of each other.
\end{enumerate}
Since these events are independent, we can combine bounds on $|W|$ and $\mathcal{L}^*$ with simple concentration inequalities to get a lower bound on the number of totally disguised items.
\subsection{How Many Disguised Items are Needed}
Since disguised items are defective independently with probability $p$ and for us $p < \frac{1}{2}$, the best any algorithm
can do on a disguised item is predict it is nondefective. This guess is correct with probability $1 - p$. Thus if the total number of disguised items is $D$, the success probability of any algorithm is at most
\begin{equation}
(1 - p)^D \leq \exp(-p D).
\end{equation}
\subsection{Lower Bounding $|W|$}
In this section and henceforth, we write $\log$ to mean the natural logarithm, unless another base is indicated.
We will closely follow the method of \cite{bay2020optimal}, but will first slightly redefine their notion of ``very-present'' items. These are items which are present in far more tests than the average item, which will be discarded in a preprocessing step before constructing $W$.
\begin{lemma}[\cite{bay2020optimal} Lemma 4, modified]
\label{thm:very_present}
Define an item to be \emph{very-present} if it appears in more than $t_{max} = \log^3 n$ tests. If $T \leq n$ and no test contains more than $z \log n$ items, then the number of very-present items is
\begin{equation}
n_{vp} \leq \frac{nz}{\log^2 n}.
\end{equation}
\end{lemma}
\begin{proof}
Consider the total number of (item, test) pairs $P$. From the assumptions, we have $P \leq Tz \log n \leq nz \log n$. Also, by the definition of very-present items, we have $n_{vp} \log^3 n \leq P$. Then $n_{vp} \log^3 n \leq nz \log n$ implies the result.
\end{proof}
For bounding $|W|$, we have the following:
\begin{proposition}[\cite{bay2020optimal} Lemma 6 Pf.]
Suppose $W$ is constructed as described in Procedure 1 of \cite{bay2020optimal}, and furthermore we modify their stopping rule so that we stop when there are less than $\frac{\epsilon n}{1 + \gamma}$ items rather than $\frac{\epsilon n}{2}$, where $\gamma$ is a small positive constant. Then
\begin{equation}
\label{eq:w_lb}
|W| \geq \frac{(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n}{z^2 \log^8 n},
\end{equation}
where $z = \frac{2}{\log (1/(1-p))}$.
\end{proposition}
\begin{proof}
From the proof of \cite{bay2020optimal} Lemma 6, we have that the method of constructing $W$ yields the inequality
\begin{equation}
\frac{\epsilon n}{1 + \gamma} \geq \epsilon n - n_{vp} - |W| t_{max}^2 z^2 \log^2 n,
\end{equation}
where $n_{vp}$ and $t_{max}$ are as defined in our \Cref{thm:very_present}. Rearranging this yields
\begin{equation}
|W| \geq \frac{\gamma \epsilon n/(1+\gamma) - n_{vp}}{t_{max}^2 z^2 \log^2 n},
\end{equation}
and substituting the values from \Cref{thm:very_present} for $n_{vp}$ and $t_{max}$ gives the result.
\end{proof}
In the case of interest, where $p = \frac{\lambda}{\log n}$, we have
\begin{equation*}
z = \frac{-2}{\log \left(1 - \frac{\lambda}{\log n}\right)} = \frac{2}{\log \log n - \log (\log n - \lambda)},
\end{equation*}
which as $n$ goes to infinity has the expansion
\begin{equation}
\frac{2 \log n}{\lambda} - 1 + o(1).
\end{equation}
Then
\begin{equation}
\frac{nz}{\log^2 n} = (1 + o(1)) \frac{2 n \log n}{\lambda \log^2 n} = (1 + o(1)) \frac{2 n}{\lambda \log n} = o(n),
\end{equation}
so asymptotically we have
\begin{equation}
(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n = \Theta(n).
\end{equation}
Then substituting the value of $z$ back into the denominator of \cref{eq:w_lb} and neglecting lower order terms gives
\begin{equation}
|W| \geq \frac{\Theta(n)}{\log^{10} n}
\end{equation}
as $n$ goes to infinity. This will be sufficient for our purposes, as it will turn out that only the exponent of $n$ is relevant for this term.
\subsection{Lower Bounding $\mathcal{L^*}$}
For lower bounding $\mathcal{L}^*$, they show in \cite{bay2020optimal} that if we stop constructing $W$ when there are less than $n_{final}$ items remaining, then
\begin{equation}
\label{eq:lstar_def}
\mathcal{L}^* \geq \exp \left( \frac{T}{n_{final}} \mathcal{L}_p\right),
\end{equation}
where $T$ is the total number of tests and
\begin{equation*}
\mathcal{L}_p = \min_{x=2, 3, \dotsc, n} x \log (1 - (1-p)^{x-1}).
\end{equation*}
As we modified their Procedure 1 to stop with less than $\frac{\epsilon n}{1 + \gamma}$ items, we will have
\begin{equation}
\label{eq:our_lstar}
\mathcal{L}^* \geq \exp \left( \frac{(1+\gamma)T}{\epsilon n} \mathcal{L}_p\right).
\end{equation}
Furthermore, it is shown in \cite{coja2020optimal} that the function $\mathcal{L}_p$ is minimized at
\begin{equation}
x = \frac{\log 2}{p} + O(p^{-1/2}).
\end{equation}
In our regime neglecting lower order terms, this yields
\begin{equation}
\mathcal{L}_p \geq \frac{\log n \log 2}{\lambda} \log \left( 1 - \left(\frac{\log n - \lambda}{\log n} \right)^{(\log n \log 2)/\lambda - 1}\right).
\end{equation}
From this, we have
\begin{equation}
\exp(\mathcal{L}_p) \geq \left( 1 - \left( \frac{\log n - \lambda}{\log n} \right)^{\log 2 \log n / \lambda - 1}\right)^{\log 2 \log n / \lambda}.
\end{equation}
The Taylor series expansion at $x = \infty$ of
\begin{equation*}
\left( \frac{x-\lambda}{x} \right)^{\log 2 x/\lambda - 1}
\end{equation*}
is $\frac{1}{2} + O(\frac{1}{x})$, so for large $n$, dropping lower order terms, this gives
\begin{equation}
\exp(\mathcal{L}_p) \geq \left( \frac{1}{2} \right)^{\log 2 \log n / \lambda},
\end{equation}
and thus substituting $T = (1-\epsilon)n$ into our expression for $\mathcal{L}^*$ from \cref{eq:our_lstar},
\begin{equation}
\mathcal{L}^* \geq \left( \frac{1}{2} \right)^{(\log 2 \log n / \lambda) \cdot (1+\gamma)(1 - \epsilon)/\epsilon},
\end{equation}
which can be simplified to
\begin{equation}
\mathcal{L}^* \geq \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2 (1 - \epsilon)/(\lambda \epsilon)}.
\end{equation}
\subsection{Combining Together}
Writing $D$ for the total number of disguised items, we have $\mathbb{E}[D] \geq \mathcal{L}^* |W|$, and as each item in $W$ is disguised independently, a Chernoff bound gives $D \geq (1 - o(1))\, \mathcal{L}^* |W|$ with high probability. Any algorithm's success probability is upper bounded by $\exp(-p D)$, so in order for this success probability to go to 1, we need $pD$ to go to 0. Then substituting in our bounds for $\mathcal{L}^*$ and $|W|$, we need
\begin{align*}
-pD \leq& -p \mathcal{L}^* |W| \\
\leq& \frac{-\lambda}{\log n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)} \frac{\Theta(n)}{\log^{10} n} \\
=& \frac{-\Theta(n)}{\log^{11} n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)}
\end{align*}
to go to 0. Only the highest order terms are relevant, so we can just look at the exponents of $n$, and the expression will go to 0 if
\begin{align*}
& 1 < \frac{(1+\gamma)(\log 2)^2(1 - \epsilon)}{\epsilon \lambda} \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2(1 - \epsilon) \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2 - \epsilon (1+\gamma)(\log 2)^2 \\
\implies& \epsilon (\lambda + (1+\gamma)(\log 2)^2) < (1+\gamma)(\log 2)^2 \\
\implies& \epsilon < \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2}.
\end{align*}
We set $T = (1- \epsilon)n$, so to have success probability going to 1 we must have
\begin{equation}
1 - \epsilon > 1 - \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2} = \frac{\lambda}{\lambda + (1+\gamma) (\log 2)^2}.
\end{equation}
Altogether this shows the following.
\begin{theorem}
\label{thm:full_lb}
There exists $n_0$ such that for all $n > n_0$, any test scheme using $T$ tests to identify defectives among $n$ items, each independently defective with probability $p = \frac{\lambda}{\log n}$, with
\begin{equation}
T \leq (1 - \epsilon) \frac{\lambda}{\lambda + \log^2 2} n
\end{equation}
for some constant $\epsilon > 0$ independent of $n$, must have error probability $1 - o(1)$.
\end{theorem}
\section{Upper Bound with Bernoulli Design}
\label{sec:bernoulli_ub}
Here we outline a simple upper bound method which gives us something nontrivial in the semisparse regime. We will work here with the combinatorial prior rather than the probabilistic prior, meaning that there are exactly $k = \frac{\lambda n}{\log n}$ defectives. We should be able to convert the end result into a statement involving the probabilistic prior using existing results.
Suppose we use a Bernoulli matrix as our test matrix, where each entry is 1 with probability $\frac{1}{k}$ and 0 otherwise. Under the COMP decoding, a fixed $k$-sparse defective vector is decoded correctly as long as no nondefective item appears only in tests that also contain a defective; equivalently, as long as the union of the $k$ columns of the test matrix corresponding to the defective set does not contain any other column of the matrix.
Call the probability that a particular other column of the test matrix is contained in a union of $k$ random columns $P(n, k)$. Then by a union bound,
\begin{equation}
\Pr[\textrm{error}] \leq (n-k) P(n, k).
\end{equation}
Next we can compute $P(n, k)$ for a matrix with Bernoulli parameter $\frac{1}{k}$. For a particular column, each entry is contained in the union of the $k$ other columns if either it is 0, or it is 1 and one of the other $k$ columns is also a 1. This happens with probability
\begin{align*}
& \left( 1 - \frac{1}{k} \right) + \frac{1}{k} (\Pr[\textrm{at least 1 of $k$ cols. is 1}]) \\
=& \left( 1 - \frac{1}{k} \right) + \frac{1}{k} (1 - \Pr[\textrm{ all $k$ cols. are 0}]) \\
=& \left( 1 - \frac{1}{k} \right) + \frac{1}{k} \left( 1 - \left(1 - \frac{1}{k}\right)^k\right) \\
=& 1 - \frac{1}{k} \left(1 - \frac{1}{k}\right)^k.
\end{align*}
For the column to be contained this must happen for all $T$ entries, so
\begin{equation}
P(n, k) = \left(1 - \frac{1}{k} \left(1 - \frac{1}{k}\right)^k\right)^T.
\end{equation}
Then using the inequality $1-x \leq \exp(-x)$, together with the fact that $\left(1 - \frac{1}{k}\right)^k = (1-o(1)) \frac{1}{e}$ as $k \to \infty$,
\begin{align*}
\Pr[\textrm{error}] \leq& (n-k) \left(1 - \frac{1}{k} \left(1 - \frac{1}{k}\right)^k\right)^T \\
\leq& (n-k) \exp \left( - \frac{T}{k} \left(1 - \frac{1}{k}\right)^k \right) \\
\leq& n \exp \left( - (1 - o(1)) \frac{T}{ek} \right).
\end{align*}
This goes to 0 as long as $\frac{T}{ek} - \log n \to \infty$, which holds if
\begin{equation}
T > (1+\epsilon) ek \log n = (1+\epsilon) e \lambda n
\end{equation}
for a constant $\epsilon > 0$.
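The union bound above can be evaluated exactly for concrete values of $n$ and $k$ (the numbers below are arbitrary illustrative choices):

```python
import math

n, k = 10_000, 100
T = math.ceil(1.1 * math.e * k * math.log(n))  # just above the threshold

# Exact per-row containment probability from the display above,
# and the resulting per-column probability P(n, k).
row = 1 - (1 / k) * (1 - 1 / k) ** k
P = row ** T

union_bound = (n - k) * P
assert union_bound < 1  # error probability bounded below 1
# The exp(-x) relaxation used in the derivation.
assert P <= math.exp(-T * (1 - 1 / k) ** k / k)
```

With these values $T \approx 2754$ and the union bound already drops below 1; increasing $T$ further drives it to 0 geometrically.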
\section{Upper Bound with Near Constant Tests Per Item Design}
For probabilistic group testing in the sparse regime, it has been shown that using a Bernoulli test matrix is slightly inferior to another type of test design, called ``Near Constant Tests per Item'' \cite{johnson2018performance}. These matrices are formed by drawing $\frac{vT}{k}$ tests uniformly with replacement per item, where $v$ is a small constant parameter to be optimized, $T$ is the total number of tests, and $k$ is the number of defectives (assuming the combinatorial prior).
Drawing the tests with replacement means that it is possible that the same test is drawn more than once, hence the ``near constant'' tests per item, rather than constant. This simplifies the analysis as the draws are independent, although it seems likely similar results should hold for exactly constant tests per item as well.
Intuitively, the reason for this improved performance is that in a Bernoulli design, the optimal Bernoulli parameter is $q = \frac{1}{k+1}$, so each item participates in $\frac{T}{k+1} \approx \log n$ tests in expectation. Since there are $n$ items, the concentration around the mean is not especially strong for every item, and likely a few items will participate in significantly fewer tests than is optimal. Forcing each item to participate in the expected number of tests avoids this downside.
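This intuition can be quantified: under a Bernoulli design the number of tests an item joins is $\mathrm{Bin}(T, q)$, and with $n$ items some stragglers are expected. The parameters below are hypothetical, chosen only to illustrate the effect:

```python
import math

n, k = 10_000, 100
T = 2754                 # illustrative test budget
q = 1 / (k + 1)
mean = T * q             # about 27 tests per item in expectation

def binom_tail(trials, prob, m):
    # Exact P[Bin(trials, prob) <= m].
    return sum(math.comb(trials, i) * prob**i * (1 - prob)**(trials - i)
               for i in range(m + 1))

# Probability a given item joins at most half the expected tests.
tail = binom_tail(T, q, int(mean / 2))

# Under Bernoulli testing, badly under-tested items are expected.
assert n * tail > 1

# A near-constant design instead draws exactly L = round(mean) tests
# (with replacement) for every item, so no item falls below its quota
# of draws.
```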
\subsection{Preliminaries}
We closely follow the approach taken by Johnson et al. \cite{johnson2018performance} for the sparse regime in the following, as their argument is mostly regime-independent.
We will make use of two of their lemmas, which are proved independently of the sparsity regime. The first states, in plain language, that given a subset of items and a near constant tests per item design, the number of tests including any item from that subset concentrates tightly around its mean.
\begin{lemma}[\cite{johnson2018performance} Lemma 1]
\label{thm:wk_concentration}
Let $\mathcal{K}$ of size $k$ be a subset of items, and fix constants $\alpha > 0$ and $\delta > 0$. Suppose we form a test design of $T$ tests by drawing $L = \frac{\alpha T}{k}$ tests uniformly at random with replacement for each of $n$ items. Write $W^{(\mathcal{K})}$ for the number of tests including at least one item from $\mathcal{K}$. Then (assuming $T$ goes to infinity with $n$) the following holds:
\begin{equation}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\alpha})T| \geq \delta] \leq 2 \exp \left( -\frac{\delta^2}{\alpha T} \right).
\end{equation}
\end{lemma}
Next, recall that the COMP decoding algorithm works by classifying all items that appear in a negative test as nondefective, and all remaining items as defective. As such, the algorithm fails exactly when there is a nondefective item that does not appear in any negative tests, or equivalently a nondefective item for which every test it appears in has at least one defective in it. Let $G$ be the total number of such items; then
\begin{equation}
\Pr[\textrm{COMP decoding succeeds}] = \Pr[G = 0].
\end{equation}
Write $\mathcal{K}$ for the set of defective items, and $W^{(\mathcal{K})}$ for the total number of tests including at least one item in $\mathcal{K}$. The next lemma, again from \cite{johnson2018performance}, characterizes the distribution of $G$ conditioned on a fixed value of $W^{(\mathcal{K})}$.
\begin{lemma}[\cite{johnson2018performance} Lemma 5]
\label{thm:g_dist}
Let $G$ and $W^{(\mathcal{K})}$ be defined as above, $n$ the total number of items, $k$ the number of defectives, and $L = \frac{vT}{k}$ the number of tests drawn for each item in a near constant tests per item design. Then
\begin{equation}
G \ | \ (W^{(\mathcal{K})} = x) \sim \mathrm{Bin} (n-k, (x/T)^L).
\end{equation}
\end{lemma}
\subsection{Combining Together}
Now we are ready to prove the upper bound, namely that we can succeed with high probability using about $n \cdot \lambda/(\log 2)^2$ tests. As
\begin{equation*}
e \approx 1.31 \cdot \frac{1}{(\log 2)^2},
\end{equation*}
this improves significantly on the bound of $e \lambda n$ from \cref{sec:bernoulli_ub}. The proof is similar to \cite{johnson2018performance} Theorem 2.
\begin{theorem}
Let $n$ be the total number of items, and $k = \frac{\lambda n}{\log n}$ the total number of defectives. Suppose our test design is near constant tests per item, we draw $L = \frac{T \log 2}{k}$ tests per item with replacement, and further that
\begin{equation*}
T = (1 + \epsilon) \frac{\lambda}{(\log 2)^2} n
\end{equation*}
for a constant $\epsilon > 0$. Then the success probability of the COMP decoding goes to 1 as $n$ goes to infinity.
\end{theorem}
\begin{proof}
Writing $G$ for the number of nondefectives not appearing in any negative tests, we know COMP succeeds if and only if $G = 0$. Then writing $W^{(\mathcal{K})}$ for the number of tests including at least one defective, the main idea here is that we will use the equation
\begin{equation}
\Pr[G = 0] = \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \Pr[G=0 | W^{(\mathcal{K})} = x],
\end{equation}
applying \Cref{thm:g_dist} when $x \leq (1/2 + \delta) T$, and showing via \Cref{thm:wk_concentration} that the probability that $W^{(\mathcal{K})} > (1/2 + \delta) T$ goes to 0 for large $n$.
By \Cref{thm:g_dist},
\begin{equation}
\label{eq:success_conditional_exact}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] = \left( 1 - \left( \frac{x}{T} \right)^L \right)^{n-k}.
\end{equation}
As this function is decreasing in $x$, for all $x \leq (1/2 + \delta)T$, we have
\begin{equation}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] \geq \Pr[G = 0 \ | \ W^{(\mathcal{K})} = (1/2 + \delta)T] = (1 - (1/2 + \delta)^L)^{n-k}.
\end{equation}
By definition
\begin{equation}
L = \frac{T \log 2}{k} = \frac{(1+\epsilon) \lambda n \log 2 \log n}{(\log 2)^2 \lambda n} = \frac{(1+\epsilon) \log n}{\log 2},
\end{equation}
so this gives
\begin{equation}
\frac{1}{2^L} = \left( \frac{1}{2^{1 / \log 2}} \right)^{(1+\epsilon) \log n} = \left(\frac{1}{e^{\log n}} \right)^{(1+\epsilon)} = \frac{1}{n^{1+\epsilon}}.
\end{equation}
Taking $\delta \leq \frac{1}{2L}$, the ratio of consecutive terms in the binomial expansion of $(1/2 + \delta)^L$ is at most $2\delta L \leq 1$, so each of the $L+1$ terms is at most $1/2^L$ and we have
\begin{equation}
(1/2 + \delta)^L < \frac{L+1}{2^L} = \frac{L+1}{n^{1+\epsilon}} < \frac{1}{n^{1+\epsilon/2}},
\end{equation}
where in the last step we use the fact that $L = \Theta(\log n)$ and the assumption regarding large $n$. Then, using Bernoulli's inequality $(1-u)^m \geq 1 - mu$, we have for $x \leq (1/2 + \delta)T$
\begin{equation}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] > \left(1 - \frac{1}{n^{1+\epsilon/2}} \right)^{n-k} \geq 1 - \frac{n-k}{n^{1+\epsilon/2}} > 1 - \frac{n}{n^{1+\epsilon/2}} = 1 - \frac{1}{n^{\epsilon/2}},
\end{equation}
which goes to 1 for large $n$.
Plugging this back into our first equation,
\begin{align}
\Pr[G = 0] =& \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \\
\geq& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \\
>& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \Pr[W^{(\mathcal{K})} \leq (1/2 + \delta)T] \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \left( 1 - \Pr[W^{(\mathcal{K})} > (1/2 + \delta)T]\right) \label{eq:final_succ_prob}.
\end{align}
Finally, we apply \Cref{thm:wk_concentration} to the latter probability with $\alpha = \log 2$ (and $\delta T$ in place of the $\delta$ in the original lemma statement), which tells us that
\begin{align}
& \Pr[|W^{(\mathcal{K})} - (1 - e^{-\log 2})T| \geq \delta T] \leq 2 \exp \left( -\frac{(\delta T)^2}{T \log 2} \right) \\
\implies& \Pr[|W^{(\mathcal{K})} - (T/2)| \geq \delta T] \leq 2 \exp \left( -\frac{\delta^2 T}{\log 2} \right) = \exp \left( -\Theta \left( \frac{n}{\log^2 n}\right)\right)
\end{align}
which clearly goes to 0 for large $n$. Thus the success probability in \cref{eq:final_succ_prob} will go to 1, concluding the proof.
\end{proof}
\section{Introduction}
Group testing is a sparse recovery problem where we aim to recover a small set of $k$ ``defective'' items from among $n$ total items using pooled tests. Originally introduced in the context of testing blood samples for diseases where multiple samples can be combined together \cite{dorfman1943detection}, it has since seen a variety of applications, including recently COVID-19 testing (see for instance \cite{yelin2020evaluation,
gollier2020group, eberhardt2020multi}, though there are many more such works).
More formally, we let $\bfx \in \set{0, 1}^n$ represent the vector of items where a 1 indicates defectivity. A test vector $\bfm \in \set{0, 1}^n$ specifies a subset of items to be tested; the result is
\begin{equation*}
y = \bigvee_{i:\bfm_i = 1} \bfx_i,
\end{equation*}
the logical OR of the entries of $\bfx$ specified by the test.
We focus here on the nonadaptive setting, where all tests must be specified in advance of seeing any results. In this setting we write $M \in \set{0,1}^{T \times n}$ for the matrix of $T$ tests (rows), and $\bfy$ for the vector of all test results. We assume the vector of defectives $\bfx$ is generated randomly, the details of which are discussed in \cref{sec:priors}.
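In code, with each pooled test represented by the set of items it contains, the measurement model can be sketched in a few lines (a minimal illustration; the instance is made up):

```python
def run_tests(tests, defectives):
    """Outcome of each pooled test: positive iff it contains a defective."""
    return [bool(t & defectives) for t in tests]

# Toy instance: 3 tests over items {0,...,4}; item 2 is the only defective.
tests = [{0, 1}, {2, 3}, {1, 2, 4}]
defectives = {2}
y = run_tests(tests, defectives)
assert y == [False, True, True]  # only the tests pooling item 2 are positive
```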
This is known as \emph{probabilistic group testing} (PGT), meaning that we are interested in determining how many tests are necessary for the probability of error (for some particular decoding method) to approach 0 as $n$ becomes large. This is in contrast to \emph{combinatorial group testing}, where we insist that any set of defectives is recoverable (i.e., probability of error is 0 for any $n$).
In general, we are interested in how the number of tests $T$ must scale as a function of both $n$ and $k$. This behavior can differ substantially depending on the scaling of $k$ relative to $n$, so the problem is often broken down into different ``regimes'' for $k$. In this work we prove both lower and upper bounds on $T$ for the specific regime $k = \Theta\left(\frac{n}{\log n}\right)$, or equivalently when the probability of an item being defective is $\Theta\left(\frac{1}{\log n}\right)$. This regime has traditionally seen little attention, but recent work has shown that it is exactly the scaling of $k$ at which $\Omega(n)$ tests become necessary. As $n$ tests trivially suffice by testing each item individually, determining the exact crossover point at which individual testing becomes suboptimal is of natural interest, and that crossover falls in the $k = \Theta\left(\frac{n}{\log n}\right)$ regime.
\subsection{Related Work}
\label{sec:related_work}
Much of the initial work on group testing concerned the zero-error combinatorial setting, where a single test matrix must correctly classify every possible defective vector. The literature on this variant is vast -- see for instance the book of Du and Hwang \cite{du2000combinatorial}. We focus the rest of this section on the low-error probabilistic setting.
PGT has traditionally been split into two rather different regimes: the ``sparse'' regime, where the total number of defectives is $O(n^\theta)$ for some $0 \leq \theta < 1$, and the ``linear'' regime where the total number of defectives is $\beta n$ for some constant $\beta < 1$.
In both regimes, the folklore ``counting bound'' (see \cite{chan2011non} for instance) shows that $\Omega(k \log \frac{n}{k})$ measurements are necessary. In the sparse regime, this is equivalent to $\Omega(k \log n)$, and order-optimal randomized constructions have been known for some time \cite{sebHo1985two, atia2012boolean}. In the linear regime, the counting bound implies $\Omega(n)$ measurements are necessary, and trivially $n$ suffice by testing items individually.
More recently, explicit constructions of matrices for PGT have been studied. Mazumdar \cite{mazumdar2015nonadaptive} gave explicit constructions requiring $O(k \log^2 n / \log k)$ measurements. Follow up works of Barg and Mazumdar~\cite{barg2017group}, and Inan et al. \cite{inan2019optimality}, show order-optimal or near-optimal results.
Another direction has been to develop constructions with good decoding properties. PGT schemes are considered efficiently decodable if they require $O(T)$ time to decode, where $T$ is the total number of measurements; several works \cite{cai2017efficient, vem2017group, lee2019saffron} gave efficiently decodable schemes which were not quite order-optimal, before Bondorf et al. \cite{bondorf2020sublinear} gave the first order-optimal efficiently decodable construction. A very recent result \cite{inan2020strongly} shows that we can even have explicit constructions which are both order-optimal and efficiently decodable.
The last line of work we discuss in PGT, and the most relevant to our work here, has been to go beyond order-optimality and determine the precise constants involved in various regimes. Improvements on the counting bound were proposed in \cite{agarwal2018novel}, and it was subsequently shown that individual testing is optimal in the linear regime~\cite{aldridge2018individual}.
In the sparse regime the characterization of exact constants results in a more complex picture, but a series of works \cite{scarlett2016limits, aldridge2017capacity, johnson2018performance, coja2020optimal} have narrowed down the constants to the point that the lower and upper bounds are matching for any $\theta \in [0, 1)$ (where $k = n^\theta$). We direct the interested reader to the recent survey of Aldridge et al. \cite{aldridge2019group}.
However, in between these two regimes is another regime which has seen much less study, namely when the total number of defectives is $n / \textrm{poly}(\log n)$ (\hspace{1sp}\cite{bay2020optimal} refers to these regimes as ``mildly sublinear''). In particular when $k = \Theta(n / \log n)$ the counting bound implies only that $\Omega(n \log \log n/ (\log n))$ measurements are necessary, but a method using $o(n)$ measurements proved elusive. In very recent work, Bay et al. \cite{bay2020optimal} showed that in fact the lower bound can be improved to $\Omega(k \log n)$ in this regime, and thus known constructions are order-optimal.
In light of the asymptotic improvement to the lower bound for $k = n / \textrm{poly}(\log n)$ \cite{bay2020optimal}, we feel it is a natural next step to try and nail down the constants in this regime. While we focus on the particular case $k = \lambda n / \log n$ in this paper, we expect our results should extend readily to other mildly sublinear regimes.
\subsection{Priors for PGT}
\label{sec:priors}
There are two commonly used ``priors'' over the random defective vector involved in PGT. Under the \emph{combinatorial prior}, the number of defectives is fixed to be $k$, and the defective set is drawn uniformly from the $\binom{n}{k}$ possibilities. Under the \emph{i.i.d. prior}, we instead fix a defective probability $p$, and each item is defective independently with probability $p$.
Conveniently, it turns out that our choice of prior makes little difference in the number of tests necessary (assuming we take $k = pn$). The following result from the survey of Aldridge et al. \cite{aldridge2019group} formalizes this notion.
\begin{theorem}[\hspace{1sp}\cite{aldridge2019group} Thm. 1.7]
\label{thm:comb_prior_conversion}
Suppose a sequence of test designs and decoding method has probability of error going to 0 as $n$ goes to infinity under the combinatorial prior with $k = k_0(1 + o(1))$, where $k_0 = o(n)$ and $k_0$ goes to infinity with $n$. Then the same sequence of test designs and decoding method has error going to 0 as $n$ goes to infinity under the i.i.d. prior with $p = k_0 / n$.
\end{theorem}
A similar result holds for conversion in the opposite direction. In this paper we will primarily use the i.i.d. prior, except for our upper bound in \cref{sec:constant_ub}, which relies on preexisting work under the combinatorial prior. For ease of reference to that work we will use the combinatorial prior there, and \cref{thm:comb_prior_conversion} tells us we can convert our main result (\cref{thm:constant_ub}) to an equivalent result under the i.i.d. prior.
\subsection{Notation}
We will use the following notational conventions throughout:
\begin{itemize}
\item $M$ is a test matrix with rows corresponding to tests and columns corresponding to items.
\item $T$ is the number of tests (rows of $M$).
\item $n$ is the total number of items (columns of $M$).
\item $k$ is the total number of defectives.
\item $p$ is the defective probability under the i.i.d. prior.
\item $\log$ is the logarithm base $e$.
\item $\epsilon$, $\delta$, $\gamma$, $\lambda$, $\alpha$, $\nu$ are constants.
\end{itemize}
Our results show that when $p = \lambda / \log n$, it is sufficient to have $T/n \geq \min(1,\frac{\lambda}{\log^2 2})$, whereas $T/n \geq \frac{\lambda}{\lambda+\log^2 2}$ is necessary. Note that these two quantities are always within a factor of 2 of each other, and approach equality for very small or large values of $\lambda$.
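The claimed factor-of-2 relationship between the two bounds can be checked numerically (a quick sanity check, not part of the proofs):

```python
import math

def sufficient(lam):
    """T/n sufficient by our upper bound: min(1, lam / (log 2)^2)."""
    return min(1.0, lam / math.log(2) ** 2)

def necessary(lam):
    """T/n necessary by our lower bound: lam / (lam + (log 2)^2)."""
    return lam / (lam + math.log(2) ** 2)

for lam in [0.01, 0.1, 1.0, 10.0, 100.0]:
    ratio = sufficient(lam) / necessary(lam)
    assert 1.0 <= ratio < 2.0
# The ratio tends to 1 both as lam -> 0 and as lam -> infinity.
```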
\section{Upper Bound with Near-Constant Tests-Per-Item Design}
\label{sec:constant_ub}
For probabilistic group testing in the sparse regime, it has been shown that the optimal rate is achieved by so-called ``Near-Constant Tests-per-Item'' designs \cite{johnson2018performance}. These matrices are formed by drawing $\frac{\nu T}{k}$ tests uniformly with replacement per item, where $\nu$ is a small constant parameter to be optimized, $T$ is the total number of tests, and $k$ is the number of defectives (assuming the combinatorial prior).
Drawing the tests with replacement means that it is possible that the same test is drawn more than once, hence the ``near-constant'' tests-per-item, rather than constant. This simplifies the analysis as the draws are independent.
We will use this type of test matrix to prove our upper bound.
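Such a design can be sampled as follows (an illustrative sketch; the parameter values are arbitrary):

```python
import random

def nctpi_design(n, T, L, rng):
    """Near-constant tests-per-item design: each of the n items draws L tests
    uniformly at random with replacement, so it lands in at most L distinct
    tests (possibly fewer, when a draw repeats)."""
    tests = [set() for _ in range(T)]   # tests[t] = items pooled in test t
    for item in range(n):
        for _ in range(L):
            tests[rng.randrange(T)].add(item)
    return tests

rng = random.Random(0)
tests = nctpi_design(n=100, T=50, L=5, rng=rng)
appearances = [sum(item in t for t in tests) for item in range(100)]
assert all(1 <= a <= 5 for a in appearances)  # "near" constant, never more than L
```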
\subsection{Preliminaries}
We closely follow the approach taken by Johnson et al. \cite{johnson2018performance} for the sparse regime in the following, as their argument is mostly regime-independent. The following notation will be useful in describing their approach succinctly:
\begin{itemize}
\item $\mathcal{K}$ is the set of all defective items.
\item $W^{(\mathcal{K})}$ is the number of tests containing at least one item from $\mathcal{K}$.
\item $G$ is the number of nondefective items that do not appear in any negative tests.
\end{itemize}
Our upper bound will employ the simple COMP decoding~\cite{du2000combinatorial,chan2011non}. This algorithm first identifies all items present in negative tests and classifies them as nondefective; any remaining items are then classified as defective. While simple, COMP has been shown to be asymptotically as good as more complex decoders in many settings, and it is easy to analyze.
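A minimal sketch of COMP, with each test represented by the set of items it pools (illustrative only, not the code of any cited work):

```python
def comp_decode(tests, outcomes, n):
    """COMP: classify every item seen in a negative test as nondefective,
    and declare all remaining items defective."""
    cleared = set()
    for members, positive in zip(tests, outcomes):
        if not positive:
            cleared |= members
    return set(range(n)) - cleared

# Toy example: 5 items, true defectives {1, 4}.
tests = [{0, 1}, {2, 3}, {3, 4}]
defectives = {1, 4}
outcomes = [bool(t & defectives) for t in tests]   # [True, False, True]
decoded = comp_decode(tests, outcomes, n=5)
# Items 2 and 3 appear in the negative test; item 0 never does, so COMP
# wrongly keeps it as a false positive.
assert decoded == {0, 1, 4} and defectives <= decoded
```

Note that COMP never produces a false negative: a test containing a defective is always positive, so a defective item is never cleared.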
First we present two useful lemmas borrowed from \cite{johnson2018performance}. The first states, in plain language, that given a near-constant tests-per-item design, the number of tests including at least one defective concentrates tightly around its mean.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 1]
\label[lemma]{thm:wk_concentration}
Let $|\mathcal{K}| = k$, and fix constants $\alpha > 0$, $\epsilon \in (0, 1)$. Suppose we form a test design of $T$ tests by drawing $L = \frac{\alpha T}{k}$ tests uniformly at random with replacement for each of $n$ items. Then (assuming $T$ goes to infinity with $n$) the following holds:
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\alpha})T| \geq \delta] \leq 2 \exp \left( -\frac{\delta^2}{\alpha T} \right).
\end{equation*}
\end{lemma}
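The concentration in \Cref{thm:wk_concentration} can be sanity-checked by a small seeded simulation (a rough experiment with made-up parameters, not a proof):

```python
import math
import random

def simulate_W(T, k, alpha, rng):
    """Number of distinct tests hit when each of k defectives draws
    L = alpha * T / k tests uniformly at random with replacement."""
    L = round(alpha * T / k)
    hit = set()
    for _ in range(k):
        for _ in range(L):
            hit.add(rng.randrange(T))
    return len(hit)

rng = random.Random(1)
T, k = 20000, 200
W = simulate_W(T, k, math.log(2), rng)
# The lemma predicts W/T concentrates around 1 - e^{-log 2} = 1/2.
assert abs(W / T - 0.5) < 0.03
```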
The next lemma, again from \cite{johnson2018performance}, characterizes the distribution of $G$ conditioned on a fixed value of $W^{(\mathcal{K})}$.
\begin{lemma}[\hspace{1sp}\cite{johnson2018performance} Lemma 5]
\label[lemma]{thm:g_dist}
Let $L = \frac{\nu T}{k}$ be the number of tests drawn for each item in a near-constant tests-per-item design. Then
\begin{equation*}
G \ | \ (W^{(\mathcal{K})} = x) \sim \mathrm{Bin} (n-k, (x/T)^L).
\end{equation*}
\end{lemma}
\subsection{Combining Together}
Now we are ready to prove the upper bound, namely that we can succeed with high probability using about $n \cdot \lambda/\log^2 2$ tests.
\begin{theorem}
\label{thm:constant_ub}
Let $k = \frac{\lambda n}{\log n}$ be the total number of defectives. Suppose our test design is near-constant tests-per-item,
\begin{equation*}
T = (1 + \epsilon) \frac{\lambda}{\log^2 2} n
\end{equation*}
for a constant $\epsilon > 0$, and we draw $L = \frac{T \log 2}{k}$ tests per item with replacement. Then for sufficiently large $n$, the success probability of the COMP decoding is $1 - o(1)$.
\end{theorem}
\begin{IEEEproof}
As $G$ is the number of nondefectives not appearing in any negative tests, we know COMP succeeds if and only if $G = 0$. Then the main idea here is that we will use the equation
\begin{equation}
\label{eq:cond_prob_sum}
\Pr[G = 0] = \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \Pr[G=0 | W^{(\mathcal{K})} = x],
\end{equation}
applying \Cref{thm:g_dist} when $x \leq (1/2 + \delta) T$, and showing via \Cref{thm:wk_concentration} that the probability that $W^{(\mathcal{K})} > (1/2 + \delta) T$ goes to 0 for large $n$.
By \Cref{thm:g_dist},
\begin{equation*}
\label{eq:success_conditional_exact}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] = \left( 1 - \left( \frac{x}{T} \right)^L \right)^{n-k}.
\end{equation*}
As this function is decreasing in $x$, for all $x \leq (1/2 + \delta)T$, we have
\begin{align*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] \geq& \Pr[G = 0 \ | \ W^{(\mathcal{K})} = (1/2 + \delta)T] \\
=& (1 - (1/2 + \delta)^L)^{n-k}.
\end{align*}
By definition
\begin{equation*}
L = \frac{T \log 2}{k} = \frac{(1+\epsilon) \lambda n \log 2 \log n}{(\log 2)^2 \lambda n} = \frac{(1+\epsilon) \log n}{\log 2},
\end{equation*}
so this gives
\begin{equation*}
\frac{1}{2^L} = \left( \frac{1}{2^{1 / \log 2}} \right)^{(1+\epsilon) \log n} = \left(\frac{1}{e^{\log n}} \right)^{(1+\epsilon)} = \frac{1}{n^{1+\epsilon}}.
\end{equation*}
Taking $\delta \leq 1/(2L)$, the ratio of consecutive terms in the binomial expansion of $(1/2 + \delta)^L$ is at most $2\delta L \leq 1$, so each of the $L+1$ terms is at most $1/2^L$ and we have
\begin{equation*}
(1/2 + \delta)^L < \frac{L+1}{2^L} = \frac{L+1}{n^{1+\epsilon}} < \frac{1}{n^{1+\epsilon/2}},
\end{equation*}
where in the last step we use the fact that $L = \Theta(\log n)$ and the assumption regarding large $n$. Then, using Bernoulli's inequality $(1-u)^m \geq 1 - mu$, we have for $x \leq (1/2 + \delta)T$
\begin{align*}
\Pr[G = 0 \ | \ W^{(\mathcal{K})} = x] >& \left(1 - \frac{1}{n^{1+\epsilon/2}} \right)^{n-k} \\
\geq& 1 - \frac{n-k}{n^{1+\epsilon/2}} \\
>& 1 - \frac{n}{n^{1+\epsilon/2}} \\
=& 1 - \frac{1}{n^{\epsilon/2}},
\end{align*}
which goes to 1 for large $n$.
Plugging this back into \eqref{eq:cond_prob_sum},
\begin{align}
& \Pr[G = 0] \nonumber\\
=& \sum_{x=0}^T \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
\geq& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \cdot \Pr[G=0 \ | \ W^{(\mathcal{K})} = x] \nonumber \\
>& \sum_{x=0}^{(1/2 + \delta)T} \Pr[W^{(\mathcal{K})} = x] \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \Pr[W^{(\mathcal{K})} \leq (1/2 + \delta)T] \nonumber \\
=& \left( 1 - \frac{1}{n^{\epsilon/2}} \right) \cdot \left( 1 - \Pr[W^{(\mathcal{K})} > (1/2 + \delta)T]\right) \label{eq:final_succ_prob}.
\end{align}
Finally, we apply \Cref{thm:wk_concentration} to the latter probability with $\alpha = \log 2$ (and $\delta T$ in place of the $\delta$ in the original lemma statement), which tells us
\begin{equation*}
\Pr[|W^{(\mathcal{K})} - (1 - e^{-\log 2})T| \geq \delta T] \leq 2 \exp \left( -\frac{(\delta T)^2}{T \log 2} \right),
\end{equation*}
and thus
\begin{align*}
\Pr[|W^{(\mathcal{K})} - (T/2)| \geq \delta T] \leq& 2 \exp \left( -\frac{\delta^2 T}{\log 2} \right) \\
=& \exp \left( -\Theta \left( \frac{n}{\log^2 n}\right)\right)
\end{align*}
which clearly goes to 0 for large $n$. Thus the success probability in \eqref{eq:final_succ_prob} will go to 1, concluding the proof.
\end{IEEEproof}
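To complement the proof, the whole pipeline can be exercised in a small seeded simulation (parameters are arbitrary illustrative choices; at this modest $n$, the $1 - o(1)$ guarantee manifests only as a small number of COMP false positives):

```python
import math
import random

def comp_experiment(n, lam, eps, rng):
    """One trial: NCTPI design + COMP decoding at the rate from the theorem."""
    k = max(1, round(lam * n / math.log(n)))
    T = round((1 + eps) * lam / math.log(2) ** 2 * n)
    L = max(1, round(T * math.log(2) / k))
    defectives = set(rng.sample(range(n), k))
    tests = [set() for _ in range(T)]
    for item in range(n):
        for _ in range(L):
            tests[rng.randrange(T)].add(item)
    # COMP: anything appearing in a negative test is nondefective.
    cleared = set()
    for t in tests:
        if not (t & defectives):
            cleared |= t
    return defectives, set(range(n)) - cleared

rng = random.Random(42)
false_pos = 0
for _ in range(5):
    defectives, decoded = comp_experiment(n=2000, lam=0.2, eps=0.1, rng=rng)
    assert defectives <= decoded          # COMP never misses a defective
    false_pos += len(decoded - defectives)
assert false_pos <= 25                    # false positives are rare at this rate
```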
\section{Determining the Constant in the Lower Bound}
The recent work of Bay et al. \cite{bay2020optimal} shows that a lower bound of $\min(n, \Omega(k \log n))$ tests holds for any $k$. In the case of interest here, where $k = \lambda n / \log n$, this tells us we need $\Omega(n)$ measurements. In this section we seek to determine the necessary number of measurements more precisely, up to the constant term.
The argument of \cite{bay2020optimal} works by demonstrating that with significantly fewer than $k \log n$ measurements, there must exist a fairly large number of ``totally disguised'' items; that is, items for which no test gives us any information about whether or not they are defective. If we have no information about these items' defectivity, we cannot do better than guessing on each one.
More specifically, they give a procedure that constructs a set of items $W$ with the following properties:
\begin{enumerate}
\item Each item in $W$ is totally disguised with probability at least $\mathcal{L}^*$.
\item The events that each item in $W$ is totally disguised are independent of each other.
\end{enumerate}
Since these events are independent, we can combine bounds on $|W|$ and $\mathcal{L}^*$ with simple concentration inequalities to get a lower bound on the number of totally disguised items.
For our lower bound we will modify their procedure slightly, as described below.
\subsection{How Many Disguised Items are Needed}
Since disguised items are defective independently with probability $p$, and for us $p < \frac{1}{2}$, the best any algorithm can do on a disguised item is to predict that it is nondefective. This guess is correct with probability $1 - p$. Thus if the total number of disguised items is $D$, the success probability of any algorithm is at most
\begin{equation*}
(1 - p)^D \leq \exp(-p D).
\end{equation*}
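The final step is the standard inequality $1 - p \leq e^{-p}$; a quick numerical sanity check (illustrative only):

```python
import math

# (1 - p)^D <= exp(-p * D) follows from 1 - p <= e^{-p}, valid for all p < 1.
for p in [0.01, 0.1, 0.3, 0.49]:
    for D in [1, 10, 1000]:
        assert (1 - p) ** D <= math.exp(-p * D)
```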
\subsection{Lower Bounding $|W|$}
We will closely follow the method of \cite{bay2020optimal}, but will first slightly redefine their notion of ``very-present'' items. These are items present in far more tests than the average item, which will be discarded in a preprocessing step before constructing $W$.
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 4, modified]
\label[lemma]{thm:very_present}
Define an item to be \emph{very-present} if it appears in more than $t_{max} = \log^3 n$ tests. If $T \leq n$ and no test contains more than $z \log n$ items, then the number of very-present items is
\begin{equation*}
n_{vp} \leq \frac{nz}{\log^2 n}.
\end{equation*}
\end{lemma}
\begin{IEEEproof}
Consider the total number of (item, test) pairs $P$. From the assumptions, we have $P \leq Tz \log n \leq nz \log n$. Also, by the definition of very-present items, we have $n_{vp} \log^3 n \leq P$. Then $n_{vp} \log^3 n \leq nz \log n$ implies the result.
\end{IEEEproof}
For bounding $|W|$, we have the following:
\begin{lemma}[\hspace{1sp}\cite{bay2020optimal} Lemma 6 Pf.]
\label[lemma]{thm:w_lb}
Suppose $W$ is constructed as described in Procedure 1 of \cite{bay2020optimal}, and furthermore we modify their stopping rule so that we stop when there are fewer than $\frac{\epsilon n}{1 + \gamma}$ items rather than $\frac{\epsilon n}{2}$, where $\gamma$ is a small positive constant. Then
\begin{equation}
\label{eq:w_lb}
|W| \geq \frac{(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n}{z^2 \log^8 n},
\end{equation}
where $z = \frac{2}{\log (1/(1-p))}$.
\end{lemma}
\begin{IEEEproof}
From the proof of \cite{bay2020optimal} Lemma 6, we have that the method of constructing $W$ yields the inequality
\begin{equation*}
\frac{\epsilon n}{1 + \gamma} \geq \epsilon n - n_{vp} - |W| t_{max}^2 z^2 \log^2 n,
\end{equation*}
where $n_{vp}$ and $t_{max}$ are as defined in our \Cref{thm:very_present}. Rearranging this yields
\begin{equation*}
|W| \geq \frac{\gamma \epsilon n/(1+\gamma) - n_{vp}}{t_{max}^2 z^2 \log^2 n},
\end{equation*}
and substituting the values from \Cref{thm:very_present} for $n_{vp}$ and $t_{max}$ gives the result.
\end{IEEEproof}
In the case of interest, where $p = \frac{\lambda}{\log n}$, we have
\begin{equation*}
z = -2 / \log \left(1 - \frac{\lambda}{\log n}\right) = \frac{2 \log n}{\lambda} - 1 - o(1),
\end{equation*}
where we have considered the Taylor series as $n$ goes to infinity.
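This expansion can be verified numerically (a sanity check with an arbitrary choice of $\lambda$):

```python
import math

def z_exact(n, lam):
    p = lam / math.log(n)
    return -2 / math.log(1 - p)

def z_approx(n, lam):
    return 2 * math.log(n) / lam - 1

# The gap between the exact value and 2 log(n)/lam - 1 is o(1) in n.
gaps = [abs(z_exact(10.0 ** e, 1.0) - z_approx(10.0 ** e, 1.0))
        for e in (3, 6, 12)]
assert gaps[0] > gaps[1] > gaps[2] and gaps[2] < 0.01
```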
Then
\begin{equation*}
\frac{nz}{\log^2 n} \approx \frac{2 n \log n}{\lambda \log^2 n} = \frac{2 n}{\lambda \log n} = o(n),
\end{equation*}
so asymptotically we have
\begin{equation*}
(\gamma \epsilon n) / (1+\gamma) - (nz)/\log^2 n = \Theta(n).
\end{equation*}
Substituting the value of $z$ back into the denominator of \eqref{eq:w_lb} gives
\begin{equation*}
|W| \geq \frac{cn}{\log^{10} n}
\end{equation*}
as $n$ goes to infinity, for some constant $c>0$. This will be sufficient for our purposes, as it will turn out that only the exponent of $n$ is relevant for this term.
\subsection{Lower Bounding $\mathcal{L^*}$}
For lower bounding $\mathcal{L}^*$, they show in \cite{bay2020optimal} that if we stop constructing $W$ when there are less than $n_{final}$ items remaining, then
\begin{equation*}
\mathcal{L}^* \geq \exp \left( \frac{T}{n_{final}} \mathcal{L}_p\right),
\end{equation*}
where
\begin{equation*}
\mathcal{L}_p = \min_{x=2, 3, \dotsc, n} x \log (1 - (1-p)^{x-1}).
\end{equation*}
As we modified their Procedure 1 to stop with less than $\frac{\epsilon n}{1 + \gamma}$ items, we will have
\begin{equation}
\label{eq:our_lstar}
\mathcal{L}^* \geq \exp \left( \frac{(1+\gamma)T}{\epsilon n} \mathcal{L}_p\right).
\end{equation}
Furthermore, it is shown in the work of Coja-Oghlan et al. \cite{coja2020optimal} on the sparse regime that the function $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \frac{\log 2}{p} + O(p^{-1/2}).
\end{equation*}
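The location of this minimizer can be checked numerically for a concrete $p$ (an illustrative check; the $O(p^{-1/2})$ term makes the comparison loose):

```python
import math

def Lp_term(x, p):
    """The quantity x * log(1 - (1-p)^(x-1)) minimized in L_p."""
    return x * math.log(1 - (1 - p) ** (x - 1))

p = 0.01
x_star = min(range(2, 2001), key=lambda x: Lp_term(x, p))
# Predicted minimizer: log(2)/p + O(p^{-1/2}) = 69.3 + O(10).
assert abs(x_star - math.log(2) / p) <= 15
```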
In our regime, neglecting lower-order terms, this yields
\begin{equation*}
\mathcal{L}_p \geq \frac{\log n \log 2}{\lambda} \log \left( 1 - \left(\frac{\log n - \lambda}{\log n} \right)^{(\log n \log 2)/\lambda - 1}\right).
\end{equation*}
From this, we have
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( 1 - \left( \frac{\log n - \lambda}{\log n} \right)^{\log 2 \log n / \lambda - 1}\right)^{\log 2 \log n / \lambda}.
\end{equation*}
The Taylor series expansion at $x = \infty$ of
\begin{equation*}
\left( \frac{x-\lambda}{x} \right)^{\log 2 x/\lambda - 1}
\end{equation*}
is $\frac{1}{2} + O(\frac{1}{x})$, so for large $n$, dropping lower-order terms, this gives
\begin{equation*}
\exp(\mathcal{L}_p) \geq \left( \frac{1}{2} \right)^{\log 2 \log n / \lambda},
\end{equation*}
and thus substituting $T = (1-\epsilon)n$ into our expression for $\mathcal{L}^*$ from \eqref{eq:our_lstar},
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{2} \right)^{(\log 2 \log n / \lambda) \cdot (1+\gamma)(1 - \epsilon)/\epsilon},
\end{equation*}
which can be simplified to
\begin{equation*}
\mathcal{L}^* \geq \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2 (1 - \epsilon)/(\lambda \epsilon)}.
\end{equation*}
\subsection{Combining Together}
Writing $D$ for the total number of disguised items, we have $D \geq \mathcal{L}^* |W|$, and any algorithm's success probability is upper bounded by $\exp(-p D)$. In order for this success probability to go to 1, we need $pD$ to go to 0. Then substituting in our bounds for $\mathcal{L}^*$ and $|W|$, we need
\begin{align*}
-pD \leq& -p \mathcal{L}^* |W| \\
\leq& \frac{-\lambda}{\log n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)} \frac{cn}{\log^{10} n} \\
=& \frac{-\lambda cn}{\log^{11} n} \left( \frac{1}{n} \right)^{(1+\gamma)(\log 2)^2(1 - \epsilon)/(\lambda \epsilon)}
\end{align*}
to go to 0. Only the highest order terms are relevant, so we can just look at the exponents of $n$, and the expression will go to 0 if
\begin{align*}
& 1 < \frac{(1+\gamma)(\log 2)^2(1 - \epsilon)}{\epsilon \lambda} \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2(1 - \epsilon) \\
\implies& \epsilon \lambda < (1+\gamma)(\log 2)^2 - \epsilon (1+\gamma)(\log 2)^2 \\
\implies& \epsilon (\lambda + (1+\gamma)(\log 2)^2) < (1+\gamma)(\log 2)^2 \\
\implies& \epsilon < \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2}.
\end{align*}
We set $T = (1- \epsilon)n$, so to have success probability going to 1 we must have
\begin{equation*}
1 - \epsilon > 1 - \frac{(1+\gamma) (\log 2)^2}{\lambda + (1+\gamma) (\log 2)^2} = \frac{\lambda}{\lambda + (1+\gamma) (\log 2)^2},
\end{equation*}
and recall $\gamma$ can be any constant greater than zero.
Altogether this shows the following.
\begin{theorem}
\label{thm:full_lb}
There exists $n_0$ such that for all $n > n_0$, any test scheme using $T$ tests to identify defectives among $n$ items with i.i.d. defective probability $p = \frac{\lambda}{\log n}$ with
\begin{equation*}
T \leq (1 - \epsilon) \frac{\lambda}{\lambda + \log^2 2} n
\end{equation*}
for some constant $\epsilon$ independent of $n$ must have error probability $1 - o(1)$.
\end{theorem}
\section{Conclusion and Discussion}
We have shown that for nonadaptive PGT in the regime that $p = \lambda / \log n$ (or equivalently $k = \lambda n / \log n$),
\begin{equation*}
\min \left(1, \left(1+\epsilon\right) \frac{\lambda}{\log^2 2}\right) n
\end{equation*}
measurements suffice to obtain error probability going to 0, and at least
\begin{equation*}
\left(\left(1 - \epsilon\right) \frac{\lambda}{\lambda + \log^2 2}\right) n
\end{equation*}
are necessary ($\epsilon>0$).
A natural next step would be to close this gap. In the sparse regime where $k = O(n^\theta)$, more complex decodings known as DD and SPIV have been shown to improve on the COMP decoding \cite{johnson2018performance,coja2020optimal}, although the benefit seems to vanish as $\theta$ approaches 1.
We conjecture that the lower bound should come up to meet the upper bound obtained by the minimum of near-constant tests-per-item designs and individual testing. This would imply that the exact point at which individual testing becomes suboptimal is when $p < (\log 2)^2 / \log n$.
The reason for this belief is as follows. Suppose $D$ is the (random) number of disguised items. A simple calculation shows that
\begin{equation*}
\mathbb{E} D \ge n \exp\left(\frac{T}{n} \cdot \mathcal{L}_p\right),
\end{equation*}
where $\mathcal{L}_p$ was defined in the last section. Using the same calculation as before, we obtain
\begin{equation*}
\mathbb{E} D \ge n \cdot 2^{-\frac{T}{n} \cdot \frac{\log 2}{p}}.
\end{equation*}
Given that the number of disguised items is $D$, the probability of correct decoding is at most $(1-p)^D \le \exp(-pD)$, where $p = \frac{\lambda}{\log n}$. Substituting $\mathbb{E} D$ for $D$ in this bound, we see that the probability of correct decoding goes to zero whenever $\frac{T}{n} < \frac{\lambda}{\log^2 2}$.
This suggests that the lower bound can be improved to match the upper bound, provided the random variable $D$ concentrates around its mean, which of course remains to be proved.
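The threshold behavior in this heuristic can be checked numerically (illustrative values of $\lambda$ and $T/n$ only):

```python
import math

def pD_lower_bound(n, lam, rate):
    """Heuristic lower bound p * E[D] >= p * n * 2^{-(T/n) * log(2) / p},
    with p = lam / log(n) and rate = T / n."""
    p = lam / math.log(n)
    return p * n * 2 ** (-rate * math.log(2) / p)

lam = 1.0
# The conjectured threshold is T/n = lam / (log 2)^2, about 2.08 for lam = 1.
# Below it, p * E[D] blows up with n, so decoding must fail ...
assert pD_lower_bound(10 ** 9, lam, rate=1.0) > pD_lower_bound(10 ** 3, lam, rate=1.0) > 1
# ... above it, p * E[D] vanishes, so this argument gives no obstruction.
assert pD_lower_bound(10 ** 9, lam, rate=3.0) < pD_lower_bound(10 ** 3, lam, rate=3.0) < 1
```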
The current lower bound is agnostic to whether $\lambda / \log^2 2 < 1$ holds, and therefore must be simultaneously smaller than both terms of the minimum in the upper bound. To improve it further, it seems necessary to handle these two cases individually.
\bibliographystyle{IEEEtran}
\section{Introduction}
Here we consider the problem of nonadaptive group testing with low error, meaning that we must be able to exactly identify the defective set with error that approaches 0 as the total number of items $n$ goes to infinity.
We will assume the ``i.i.d.'' prior over defective sets, meaning that each of the $n$ items is chosen to be defective independently with some fixed probability $p$. This is in contrast to the ``combinatorial prior,'' where we fix a sparsity $k$ and then choose a set of $k$ items uniformly at random among the $\binom{n}{k}$ such subsets. In practice there tends to be little difference between results with the two priors, as the sparsity under the i.i.d. prior concentrates around $pn$.
The defective probability $p$ factors heavily into the performance that can be obtained and which methods work best. In the ``sparse'' regime, we typically assume $k = O(n^\alpha)$ defectives for some $\alpha \in [0, 1)$, i.e., $p = O(n^{\alpha - 1})$. This regime is extremely well-studied, and it is known that $\Theta(k \log n)$ tests are necessary and sufficient for all values of $\alpha$. For some values of $\alpha$ even the constant terms are known exactly; see the survey \cite{aldridge2019group} for details.
In contrast, in the ``linear'' regime where $p = \beta$ for some constant $\beta$, it was shown in \cite{aldridge2018individual} that, regardless of $\beta$, in order to have error probability going to 0 with $n$ it is necessary to test each item individually.
As noted in \cite{aldridge2018individual}, this motivates the question of what happens in between these two regimes, such as when $p = \frac{1}{\log n}$. We refer to this regime where $p = o(1)$ as the ``semisparse'' regime, and focus our efforts on clarifying the situation here. To our knowledge, this question has not seen significant prior study.
\section{Upper Bound Ideas}
\subsection{Trivial Bound}
In the linear regime, in order to prove individual testing is necessary to achieve arbitrarily low error, it suffices to show that with constant probability there exists an item about which we have no information -- since this item is defective with constant probability, any guess we make about it will necessarily incur constant error.
However, as noted in \cite{aldridge2018individual} this argument no longer holds in sublinear regimes. For instance, suppose $p = \frac{1}{\log n}$, and consider the following test scheme: we individually test the first $n-1$ items, and simply ignore the $n$th item completely and always predict it is not defective. Then our answer is incorrect with probability only $p = \frac{1}{\log n}$, and so our error goes to 0 as $n$ goes to infinity. This shows that at the least, individual testing is not necessary in the semisparse regime. We extend this argument to its logical conclusion in the following proposition.
\begin{proposition}
Fix a subconstant defective probability $p = p(n) = o(1)$. Then
\begin{equation*}
T = n - o(1 / p)
\end{equation*}
tests are sufficient to determine the defective set with error going to 0 as $n$ goes to infinity.
\end{proposition}
\begin{proof}
Our testing scheme is simply to test the first $T$ items individually, and ignore the rest, predicting all of them are non-defective. Each of the untested $n - T$ items is independently defective with probability $p$, so our error probability is equal to the probability any of the $n - T$ items is defective, which is
\begin{equation*}
1 - (1-p)^{n - T} \approx 1 - e^{-(n-T)p}.
\end{equation*}
Then as long as $(n - T)$ is $o(1 / p)$, $(n - T)p$ will go to 0 with $n$, so our error probability goes to 0 as well.
\end{proof}
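As a quick numeric sanity check of this scheme, the sketch below (Python; the gap $n - T = \lfloor\sqrt{\ln n}\rfloor$ is an illustrative choice that is $o(1/p)$ for $p = 1/\ln n$) evaluates the exact leftover error $1 - (1-p)^{n-T}$ and confirms it shrinks as $n$ grows:

```python
import math

def leftover_error(n):
    """Error of the scheme that tests the first T items individually and
    declares the remaining n - T items non-defective.  We take p = 1/ln(n)
    and the illustrative gap n - T = floor(sqrt(ln n)), which is o(1/p)."""
    p = 1.0 / math.log(n)
    gap = math.isqrt(int(math.log(n)))   # n - T
    return 1.0 - (1.0 - p) ** gap        # exact probability of a miss

# The error decays (slowly) as n grows, as the proposition predicts.
errors = [leftover_error(n) for n in (1e3, 1e6, 1e12)]
```

The decay is slow because the gap was chosen barely inside $o(1/p)$; a smaller gap gives a faster-vanishing error at the cost of more tests.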
\subsection{Maximum Likelihood Decoding}
In \cite{atia2012boolean}, Atia and Saligrama use information theory to analyze the performance of group testing under a maximum likelihood decoding. They use the combinatorial prior on defective sets rather than the i.i.d. prior, so the sparsity $k$ is fixed and a defective set of that sparsity is chosen uniformly at random.
Their work mostly focuses on the sparse regime, but they prove the following result which is nontrivial in parts of the semisparse regime.
\begin{theorem}[\cite{atia2012boolean} Thm. V.2]
\label{thm:atia_lb}
Let the test matrix with $T$ rows (tests) be drawn with entries i.i.d. from a Bernoulli distribution with parameter $1 / k$. There exists a constant $C$ independent of $n$ and $k$ such that when $k = o(n)$ and both $k$ and $n$ scale to infinity,
\begin{equation}
T \geq C \cdot k \log n \log^2 k
\end{equation}
suffices to recover the defective set exactly with error going to 0, assuming the defective set is chosen uniformly at random among those of size $k$.
\end{theorem}
A result of \cite{aldridge2019group} tells us how we can convert this result into one under the i.i.d. prior.
\begin{theorem}[\cite{aldridge2019group} Thm. 1.7]
\label{thm:prior_conversion}
If a sequence of test designs and decoding methods approaches 0 error with sparsity
\begin{equation*}
k = k_0 (1 + o(1)),
\end{equation*}
under the combinatorial prior with $k_0 = o(n)$, then the same sequence of test designs and decoding methods approaches 0 error with defective probability $p = k_0 / n$ under the i.i.d. prior.
\end{theorem}
This tells us that up to lower order terms, the bound in \cref{thm:atia_lb} will hold also under the i.i.d. prior with an appropriate defective probability (that is, taking $p = k/n$).
This bound does not tell us anything useful in the case that $p = 1 / \log(n)$, but can tell us something for defective probabilities like $p = 1 / \textrm{polylog}(n)$ when the degree of the polynomial is large enough. For example, if $p = 1 / \log^c(n)$ for some constant $c \geq 4$, \cref{thm:atia_lb} says that $T = O(n / \log^{c - 3}(n))$ tests will suffice for the error to go to 0.
\subsection{Randomized Upper Bound}
Another idea that may yield better results is to randomly construct a test matrix, apply a fixed decoding algorithm, and see if we can compute the resulting error probability in the regime of interest.
The simplest way of constructing a random test matrix is to include each item in each test independently with some fixed probability $q$, and the optimal choice is typically
\begin{equation*}
q \approx \frac{1}{np},
\end{equation*}
so that each test includes about $\frac{1}{p}$ items.
The two simplest decoding algorithms are COMP (combinatorial orthogonal matching pursuit) and DD (definite defectives). In the former, we report every item which appears in a negative test as non-defective, and every remaining item as defective. In the latter, we again report every item which appears in a negative test as non-defective, but of the remaining items, we report as defective only those that are the sole item of unknown status in some test (these are the ``definite defectives''). Thus DD reports a defective set which is a subset of that reported by COMP. The analysis of COMP is typically simpler, but may give worse results in some cases.
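To make the two decoders concrete, here is a minimal sketch (Python with NumPy; the small test matrix and defective set are a made-up example) of noiseless COMP and DD decoding:

```python
import numpy as np

def run_tests(A, defective):
    """A: 0/1 test matrix (T x n); a test is positive iff it contains a defective."""
    return (A[:, defective].sum(axis=1) > 0).astype(int)

def comp(A, y):
    """COMP: items appearing in any negative test are non-defective;
    everything else is declared defective."""
    cleared = A[y == 0].sum(axis=0) > 0     # appears in some negative test
    return np.flatnonzero(~cleared)

def dd(A, y):
    """DD: among COMP's possible defectives, declare only those that are
    the sole possible defective in some positive test."""
    pd = set(comp(A, y).tolist())
    out = set()
    for t in np.flatnonzero(y == 1):
        members = [i for i in np.flatnonzero(A[t]) if i in pd]
        if len(members) == 1:
            out.add(members[0])
    return np.array(sorted(out))

A = np.array([[1, 1, 0, 0, 0, 0],
              [0, 0, 1, 1, 0, 0],
              [0, 0, 0, 0, 1, 1],
              [1, 0, 0, 0, 0, 1]])
defective = np.array([1, 4])
y = run_tests(A, defective)
comp_hat = comp(A, y)
dd_hat = dd(A, y)
```

On this example both decoders happen to recover the defective set exactly; in general DD's output is a subset of COMP's.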
If we use COMP, then our decoding fails if and only if there exists some nondefective item which appears in only positive tests. This is equivalent to the notion of a ``totally disguised'' item from \cite{aldridge2018individual}, where they show that
\begin{equation*}
\Pr[i \textrm{ totally disguised}] \geq \prod_{t : i \in t} \left(1 - (1-p)^{w_t - 1}\right),
\end{equation*}
where $t$ represents a test and $w_t$ is the weight or number of items in test $t$.
The following is an (essentially unchanged) argument from \cite{aldridge2019group} Theorem 2.3 in the sparse regime, which gives an upper bound of $O(k \log n)$ tests even in the semisparse regime using COMP decoding. This is not quite enough to be useful if $p = 1 / \log n$, but tells us something, for instance, if $p = 1 / \log^2 n$.
\begin{proposition}
Suppose we are working under the combinatorial prior over defective sets, and the exact sparsity is $k = o(n)$, and also that $k$ goes to infinity with $n$. Let our testing matrix $A$ have Bernoulli entries with parameter $q$. Then
\begin{equation*}
T = O(k \log n)
\end{equation*}
tests suffice to ensure the error probability goes to 0 with $n$ using COMP decoding.
\end{proposition}
\begin{proof}
Under the COMP decoding, we fail exactly when there exists one or more nondefective items which do not appear in any negative tests. For a fixed nondefective and fixed test, the probability that the item is included in the test and the test result is negative is
\begin{equation*}
q(1 - q)^k,
\end{equation*}
thus the probability that the negation of this happens for all $T$ tests for that particular nondefective is
\begin{equation*}
(1 - q(1 - q)^k)^T.
\end{equation*}
There are $n - k < n$ such nondefectives, so by a union bound over all of them, the total error probability is at most
\begin{equation*}
n (1 - q(1 - q)^k)^T \leq n \exp(-T q (1-q)^k),
\end{equation*}
where we also used the inequality $(1 - x) \leq e^{-x}$.
The expression $q(1 - q)^k$ is maximized at $q = 1 / (k+1) \approx 1 / k$, so we choose this as our Bernoulli parameter, and then have
\begin{equation*}
q (1-q)^k \approx \frac{1}{k} \cdot \frac{1}{e}.
\end{equation*}
Then we can take $T = (1 + \delta) ek \ln n$ for a constant $\delta > 0$, and we have
\begin{align*}
\Pr[\textrm{error}] &\leq n \exp(-T q (1-q)^k) \\
&= n \exp\left(\frac{-T}{ek}\right) \\
&= n \exp\left(\frac{-(1+\delta) ek \ln n}{ek}\right) \\
&= n \exp(-(1+\delta) \ln n) \\
&= n \cdot n^{-(1 + \delta)} \\
&= n^{-\delta},
\end{align*}
which goes to 0 as $n$ goes to infinity.
\end{proof}
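The final chain of equalities can be checked numerically. A minimal sketch (Python; the values $n = 10^6$, $k = 1000$, $\delta = 1/2$ are illustrative choices) evaluates the union bound with $q = 1/(k+1)$ and $T = (1+\delta)ek\ln n$, which should land close to $n^{-\delta}$:

```python
import math

def comp_union_bound(n, k, delta):
    """n * exp(-T q (1-q)^k) with q = 1/(k+1) and T = (1+delta) e k ln(n);
    by the derivation above this should be close to n**(-delta)."""
    q = 1.0 / (k + 1)
    T = (1 + delta) * math.e * k * math.log(n)
    return n * math.exp(-T * q * (1 - q) ** k)

bound = comp_union_bound(10**6, 1000, 0.5)
target = (10**6) ** -0.5   # n^{-delta}
```

The tiny discrepancy comes from using $q = 1/(k+1)$ exactly rather than the approximation $q(1-q)^k \approx 1/(ek)$.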
While this result is phrased in terms of the combinatorial prior, we can easily convert it to work with the i.i.d. prior without asymptotic loss using \cref{thm:prior_conversion}.
\section{Lower Bound Ideas}
\subsection{Counting Bound}
One well-known lower bound in group testing is the counting bound, which, while simple, has proven to be tight or near-tight in many cases. The bound states simply that given $T$ tests, $n$ items, and $k$ defectives,
\begin{equation*}
\Pr[\textrm{Success}] \leq 2^T / \binom{n}{k}.
\end{equation*}
Substituting $np$ for $k$, taking the log of both sides, and using the bound
\begin{equation*}
\frac{n^k}{k^k} \leq \binom{n}{k},
\end{equation*}
we see that
\begin{equation}
\label{eqn:counting_bound}
np \log \frac{1}{p} \leq T
\end{equation}
is necessary to obtain arbitrarily small error probability.
Substituting for example $p = \frac{1}{\log n}$, this yields
\begin{equation*}
\frac{n \log \log n}{\log n} \leq T,
\end{equation*}
telling us that we cannot hope to improve on individual testing by a polynomial factor, but perhaps something near a logarithmic factor improvement is possible.
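Plugging in numbers makes the gap to individual testing concrete. The sketch below (base-2 logarithms throughout, an arbitrary but consistent choice) evaluates the counting bound for $p = 1/\log n$ as a fraction of $n$:

```python
import math

def counting_lb(n, p):
    """Counting-bound lower bound n * p * log(1/p) on T (here k = n*p)."""
    return n * p * math.log2(1.0 / p)

lbs = {}
for e in (20, 60):                    # n = 2**20 and n = 2**60
    n = 2 ** e
    p = 1.0 / math.log2(n)            # p = 1/log n
    lbs[e] = counting_lb(n, p) / n    # fraction of full individual testing
```

The fraction shrinks like $\log\log n / \log n$, consistent with the display above: the bound rules out polynomial savings but leaves room for roughly logarithmic ones.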
\subsection{Entropy-Based Lower Bound}
Write $\bfX$ for the vector of random variables corresponding to the input, and $\bfY$ for the vector of random variables corresponding to the test outputs. Then one way of proving the counting bound is to first demonstrate
\begin{equation*}
H(\bfX) = n H(p) \leq H(\bfY) + \epsilon n
\end{equation*}
(where $H(p)$ is the binary entropy of a $\textrm{Bern}(p)$ r.v.) using Fano's inequality, and to then use the inequality
\begin{equation}
\label{eq:counting_bound_ind}
H(\bfY) \leq T,
\end{equation}
which is simply the subadditivity of entropy for the $T$ binary random variables of $\bfY$.
In \cite{agarwal2018novel}, they observe that \cref{eq:counting_bound_ind} is loose for group testing, because equality is obtained if and only if the test outcomes are independent. In group testing, the outcomes are generally not independent due to items shared between tests. Thus they improve slightly on the counting bound in parts of the linear regime by giving an improved bound on $H(\bfY)$ that exploits this dependence using the Madiman-Tetali inequalities \cite{madiman2010information}.
More specifically, write $\bfY_S$ with $S \subseteq [T]$ for the restriction of the vector of test outputs to the set of tests indexed by $S$. Then
the Madiman-Tetali inequalities say that we can upper bound the entropy of the joint distribution $\bfY_{[T]}$ as
\begin{equation}
\label{eq:madiman_tetali}
H(\bfY_{[T]}) \leq \sum_S \alpha(S) H(\bfY_S),
\end{equation}
where we can define the subsets $S \subseteq [T]$ however we want, subject to the constraint that the $\alpha(S)$ must form a fractional cover of $[T]$ -- that is, for each test $t$, we must have
\begin{equation*}
\sum_{S : t \in S} \alpha(S) \geq 1.
\end{equation*}
While we can choose the sets $S$ however we want, in order to make this bound useful we ought to pick them such that there is significant mutual information between the tests in each $S$, so that we improve beyond the simple bound from the subadditivity of entropy. In \cite{agarwal2018novel} they make the natural choice $\{S_i\}_{i \in [n]}$, where $S_i$ is the set of all tests that contain item $i$; these tests clearly share some mutual information because of the common item. Another possible choice would be to take sets that are unions of pairs (or even more) of the $S_i$.
Let's examine more closely the high level proof strategy of \cite{agarwal2018novel} to see how we might adapt it:
\begin{enumerate}
\item Split the test matrix $A$ into submatrices $A_k$ containing only the tests of weight exactly $k$. Upper bound the entropy of $\bfY_{[T]}$ by the sum of the entropies of the test results on these submatrices.
\item Each submatrix $A_k$ has tests of constant weight $k$. Let $S_k \subseteq [T]$ be the set of tests in $A_k$, and let $S_{k,i}$ be the subset of $S_k$ consisting of all tests containing item $i$. Then use Madiman-Tetali to bound
\begin{equation*}
H(\bfY_{S_k}) \leq \sum_{i \in [n]} \alpha(S_{k,i}) H(\bfY_{S_{k,i}}),
\end{equation*}
where we set each $\alpha(S_{k,i}) = 1 / k$. This is a fractional cover of $S_k$ because each test in $S_k$ has weight exactly $k$.
\item In order to bound the $H(\bfY_{S_{i,k}})$, prove a general bound on $H(\bfY_S)$ for any $S$ such that all tests in $S$ contain a shared item.
\item In the end, we essentially bound the sum of the entropies over all the constant weight $k$ submatrices by the largest such entropy over all $1 \leq k \leq n$.
\end{enumerate}
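The fractional-cover condition in step 2 is easy to verify mechanically. The sketch below (a small random constant-weight test matrix; the sizes are arbitrary) checks that the weights $\alpha(S_{k,i}) = 1/k$ cover every test:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n, k = 8, 12, 3                     # arbitrary small sizes
A = np.zeros((T, n), dtype=int)
for t in range(T):                     # each test gets weight exactly k
    A[t, rng.choice(n, size=k, replace=False)] = 1

# For test t, the covering sets S_{k,i} containing t are exactly those of
# the k items i appearing in t, each with alpha = 1/k, so the total cover
# weight of t is sum_{i in t} 1/k = (row weight)/k = 1.
cover_weight = A.sum(axis=1) / k
```

The cover condition holds with equality here, which is what makes the constant-weight split convenient.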
We could of course carry out this exact same process in our regime, but already the bound in \cite{agarwal2018novel} is worse than the counting bound for $k < 0.347 n$.
If we want to use Madiman-Tetali, it seems unlikely we can use a significantly different hypergraph setup than \cite{agarwal2018novel} -- we need the hyperedges to consist of sets of tests with significant mutual information within the sets, and the most natural way to do this is by forcing each set $S_i$ to contain a shared item $i$. It doesn't seem like we should expect to gain much by instead using unions of such sets (e.g. sets $S_{i,j} = S_i \cup S_j$), as the additional benefit is only the ``coincidentally'' shared items besides $i$ and $j$, which will be shared only between a much smaller subset of tests in $S_{i,j}$.
It's also not clear how we can apply Madiman-Tetali without first splitting up the test matrix into constant row weight submatrices.
\subsection{Extending Individual Testing Lower Bound}
In \cite{aldridge2018individual}, they show that in the linear regime ($p = \beta$ for some constant $\beta$), every item must be tested individually to obtain error probability that goes to 0 as $n$ goes to infinity.
To do so, they show that with constant probability, there exists a ``totally disguised'' item -- that is, an item for which every test including that item includes another item which is defective. Thus, the best we can do on this item is to guess whether it is defective or not, which gives error probability $\min(p, 1 - p)$ if each item is defective with probability $p$.
However, in the semisparse regime where, for instance, $p = \frac{1}{\log n}$, if we have a single totally disguised item we can simply predict it is not defective, which gives error probability only $p$ which goes to 0 as $n$ goes to infinity.
One idea for extending this type of lower bound to the semisparse regime would be to show that there exists a large enough number of totally disguised items that even if we predict they are all non-defective, our error probability is bounded away from 0. The next proposition quantifies how large is ``large enough'' for this purpose.
\begin{proposition}
\label{prop:disguised_lb}
Fix a subconstant defective probability $p = p(n) = o(1)$. Suppose that with probability bounded away from zero there are at least $D$ totally disguised items. Then the overall error probability is bounded away from 0 whenever $pD$ is bounded away from zero; in particular, whenever
\begin{equation*}
\frac{1}{p} = O(D).
\end{equation*}
\end{proposition}
\begin{proof}
Suppose we fix a particular testing scheme, and consider a defective set which leaves at least $D$ items totally disguised (such defective sets occur with at least constant probability by assumption).
Since our defective probability is subconstant, the best we can do on these totally disguised items is to guess that they are non-defective, which is correct for each such item independently with probability $1 - p$. Then the probability we guess correctly on all such items is at most
\begin{equation*}
(1 - p)^D \approx e^{-pD}.
\end{equation*}
If $pD$ is bounded below by some constant $c > 0$, then our total error probability is at least a constant times $1 - e^{-c}$, which does not go to 0. Thus the error is bounded away from zero whenever $pD$ is bounded away from zero.
\end{proof}
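The quantity driving the bound is $1 - (1-p)^D \approx 1 - e^{-pD}$. The sketch below (with an assumed constant probability $c = 1/2$ for the disguising event, purely for illustration) shows how the resulting error floor behaves when $pD$ is small, of order one, and large:

```python
import math

def error_floor(p, D, c=0.5):
    """c * (1 - (1-p)**D): a lower bound on the error when, with probability
    at least c (c is an assumed value for illustration), at least D items
    are totally disguised and all of them are guessed non-defective."""
    return c * (1.0 - (1.0 - p) ** D)

p = 1e-4
# pD = 0.01, 1, and 100 respectively:
floors = [error_floor(p, D) for D in (100, 10_000, 1_000_000)]
```

When $pD \to 0$ the floor vanishes; once $pD$ is bounded away from zero it is a genuine constant, and as $pD \to \infty$ it saturates at $c$.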
To emulate the lower bound of \cite{aldridge2018individual} when $p = \frac{1}{\log n}$, we would have to show that, with at least constant probability over the choice of defective set, the number of totally disguised items is $\Omega(\log n)$ regardless of the testing scheme.
\subsection{Counting Disguised Items}
Given \cref{prop:disguised_lb}, if we could show a large enough lower bound on the number of totally disguised items that occur with a constant probability, that would translate to a lower bound on the number of tests needed for recovery with small error.
In \cite{aldridge2018individual}, they show that in the linear regime, having a single totally disguised item with at least constant probability is enough to guarantee the error is bounded away from 0. This result is strengthened in \cite{heng2020non}, where they show that the error probability actually goes to 1 unless individual testing is used in this regime.
To obtain this result, they assume that $T < (1-\epsilon)n$ tests are used, and give a procedure which iteratively constructs a set $W$ of size on the order of $n^{1/2}$ with two important properties:
\begin{enumerate}
\item The events that the items in $W$ are totally disguised are independent of each other.
\item Every item in $W$ is totally disguised with probability lower bounded by a constant.
\end{enumerate}
With these two properties together, we can apply binomial concentration results to see that with probability $1 - o(1)$ there are $\omega(1)$ totally disguised items in this scenario, which implies that the success probability is at most
\begin{equation*}
o(1) + \max \{p, 1-p\}^{\omega(1)},
\end{equation*}
which goes to 0 with $n$ ($p$ is constant here, as they work in the linear regime).
In the semisparse regime, it will be necessary to instead demonstrate that with constant probability there is a set of $\Omega(1/p)$ totally disguised items.
\textcolor{red}{Add some details about the procedure of \cite{heng2020non} here}
For this purpose, the majority of the argument from \cite{heng2020non} will go through for us as well. Specifically, using their method we can construct a set $W$ of items which are disguised independently of each other. We will have
\begin{equation*}
|W| > \frac{\epsilon n - 2zn^{0.75} \ln(n)}{2z^2n^{0.5} \ln^2(n)},
\end{equation*}
where $z = \frac{2}{\ln(1/(1-p))}$. Even in our regime, this set is pretty large -- for instance if $p = 1 / \ln(n)$, then we have
\begin{align*}
z &= \frac{2}{\ln(1/(1-p))} \\
&= \frac{2}{-\ln(1-p)} \\
&= \frac{2}{-\ln(1-\frac{1}{\ln(n)})}.
\end{align*}
As $n$ goes to infinity,
\begin{equation*}
-\ln \left(1 - \frac{1}{\ln(n)} \right) = \ln(\ln(n)) - \ln(\ln(n) - 1) = \frac{1}{\ln(n)} + O\left(\frac{1}{\ln^2(n)}\right) = \Theta\left(\frac{1}{\log(n)}\right),
\end{equation*}
so we have $z = \Theta(\log n)$, and thus
\begin{equation*}
|W| = \Omega(\epsilon n^{0.5} / \log^4(n)).
\end{equation*}
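The claim $z = \Theta(\log n)$ can be sanity-checked numerically. The sketch below evaluates $z / (2\ln n)$ for increasing $n$ with $p = 1/\ln n$; the ratio should approach 1 from below:

```python
import math

def z_of(n):
    """z = 2 / ln(1/(1-p)) with p = 1/ln(n)."""
    p = 1.0 / math.log(n)
    return 2.0 / (-math.log1p(-p))

# z/(2 ln n) should increase toward 1 as n grows.
ratios = [z_of(10.0**e) / (2.0 * math.log(10.0**e)) for e in (3, 6, 12, 24)]
```

Already at moderate $n$ the ratio is above $0.9$, matching the first-order expansion $-\ln(1 - 1/\ln n) \approx 1/\ln n$.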
The remaining step is to lower bound the probability that each item in $W$ is totally disguised -- unlike in the linear regime where this probability was easily seen to be constant, in the semisparse regime it is less clear what happens.
The lower bound we have is that for each item $i$ in $W$,
\begin{equation*}
\Pr[i \textrm{ totally disguised}] \geq \exp\left(\frac{2}{\epsilon} \mathcal{L}_p\right),
\end{equation*}
where
\begin{equation*}
\mathcal{L}_p = \min_{x=2, 3, \dotsc, n} x \ln(1 - (1-p)^{x-1}).
\end{equation*}
It is shown in \cite{coja2020optimal}, which uses a similar proof method in the sparse regime, that $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \frac{\ln 2}{p} + O(p^{-1/2}).
\end{equation*}
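A brute-force minimization confirms both the location $x \approx (\ln 2)/p$ of the minimizer and that $\mathcal{L}_p = \Theta(1/p)$ (the value $p = 0.01$ and the truncation at $x \le 2000$ are illustrative choices; the objective tends to $0$ from below for large $x$, so the truncation does not affect the minimum):

```python
import math

def Lp(p, xmax):
    """Brute-force minimum of x * ln(1 - (1-p)**(x-1)) over x = 2..xmax,
    returning (minimum value, minimizing x)."""
    return min(((x * math.log(1.0 - (1.0 - p) ** (x - 1)), x)
                for x in range(2, xmax + 1)), key=lambda pair: pair[0])

p = 0.01
Lp_val, xstar = Lp(p, 2000)
```

The product $x^\star p$ lands near $\ln 2 \approx 0.693$, and $\mathcal{L}_p \cdot p$ is a negative constant, as the asymptotics above predict.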
As we are primarily interested in an asymptotic analysis to see whether the argument of \cite{heng2020non} will go through at all, we will simply say that $\mathcal{L}_p$ is minimized at
\begin{equation*}
x = \Theta(1/p),
\end{equation*}
and thus
\begin{equation*}
\mathcal{L}_p \approx \frac{1}{p} \ln(1 - (1-p)^{1/p - 1}) \approx \frac{1}{p} \ln(1 - (1-p)^{1/p}).
\end{equation*}
As $x$ goes to positive infinity, we have
\begin{equation*}
\left(1 - \frac{1}{x}\right)^x \rightarrow \frac{1}{e},
\end{equation*}
so as $n$ goes to infinity,
\begin{equation*}
\mathcal{L}_p \approx \frac{1}{p} \ln\left(1 - \frac{1}{e}\right).
\end{equation*}
Finally substituting this back, we have
\begin{align*}
\Pr[i \textrm{ totally disguised}] &\geq \exp\left(\frac{2}{\epsilon} \mathcal{L}_p\right) \\
&\approx \exp \left( \frac{2}{\epsilon p} \ln \left(1 - \frac{1}{e} \right)\right) \\
&= \left( 1 - \frac{1}{e} \right) ^{2/(\epsilon p)}.
\end{align*}
Since the items in $W$ are totally disguised independently of each other, using a Chernoff bound there will be at least
\begin{equation*}
D = (1 - o(1)) |W| \left( 1 - \frac{1}{e} \right) ^{2/(\epsilon p)}
\end{equation*}
totally disguised items with constant probability. By \cref{prop:disguised_lb}, we will get an impossibility result with $(1-\epsilon)n$ tests as long as $D$ grows asymptotically faster than $1 / p$.
\textcolor{red}{From checking with computer algebra software, this seems like it will hold for $p = 1 / (\log(n))^{1 - \gamma}$ for any constant $\gamma > 0$ (and regardless of $\epsilon$), but not for $p = 1 / \log(n)$. Surprisingly, this implies that $n - o(n)$ tests are necessary even beyond the linear regime.}
| {
"timestamp": "2021-06-15T02:17:18",
"yymm": "2106",
"arxiv_id": "2106.06878",
"language": "en",
"url": "https://arxiv.org/abs/2106.06878",
"abstract": "In probabilistic nonadaptive group testing (PGT), we aim to characterize the number of pooled tests necessary to identify a random $k$-sparse vector of defectives with high probability. Recent work has shown that $n$ tests are necessary when $k =\\omega(n/\\log n)$. It is also known that $O(k \\log n)$ tests are necessary and sufficient in other regimes. This leaves open the important sparsity regime where the probability of a defective item is $\\sim 1/\\log n$ (or $k = \\Theta(n/\\log n)$) where the number of tests required is linear in $n$. In this work we aim to exactly characterize the number of tests in this sparsity regime. In particular, we seek to determine the number of defectives $\\lambda(\\alpha)n / \\log n$ that can be identified if the number of tests is $\\alpha n$. In the process, we give upper and lower bounds on the exact point at which individual testing becomes suboptimal, and the use of a carefully constructed pooled test design is beneficial.",
"subjects": "Information Theory (cs.IT)",
"title": "Probabilistic Group Testing with a Linear Number of Tests",
"lm_name": "Qwen/Qwen-72B",
"lm_label": "1. YES\n2. YES",
"lm_q1_score": 0.986151391819461,
"lm_q2_score": 0.7185943925708561,
"lm_q1q2_score": 0.7086428603874099
} |
https://arxiv.org/abs/1703.10414 | Equivalence between GLT sequences and measurable functions | The theory of Generalized Locally Toeplitz (GLT) sequences of matrices has been developed in order to study the asymptotic behaviour of particular spectral distributions when the dimension of the matrices tends to infinity. A key concepts in this theory are the notion of Approximating Classes of Sequences (a.c.s.), and spectral symbols, that lead to define a metric structure on the space of matrix sequences, and provide a link with the measurable functions. In this document we prove additional results regarding theoretical aspects, such as the completeness of the matrix sequences space with respect to the metric a.c.s., and the identification of the space of GLT sequences with the space of measurable functions. | \section{Introduction}
When dealing with the discretization of differential equations, we often have to solve sequences of linear systems of the form $A_nx=b_n$, where $A_n\in \mathbb C^{n\times n}$. The dimension of the matrices is determined by the refinement of the mesh in Finite Difference methods, or by the dimension of the approximation subspace in Finite Element methods. Solving high-dimensional linear systems is fundamental to obtaining accurate solutions, but the rate of convergence of the solvers (Conjugate Gradient, preconditioned Krylov methods, multigrid techniques, etc.) depends on the spectra of the matrices, so knowledge of the asymptotic spectral distribution of the sequence $\serie A$ is a strong tool for choosing or designing the best solver and discretization method (see \cite{BS},\cite{GSM} and references therein).
These are some of the reasons that lead to the study of \textit{spectral symbols} of matrix sequences. We recall that a spectral symbol associated with a sequence $\serie A$ is a measurable function $k:D\subseteq \mathbb R^n\to \mathbb C$, where $D$ is a measurable set with finite non-zero Lebesgue measure, satisfying
\[
\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} F(\sigma_i(A_n)) = \frac{1}{|D|}\int_D F(|k(x)|) dx
\]
for every $F:\mathbb R\to \mathbb C$ continuous function with compact support.
Here $|D|$ is the Lebesgue measure of $D$, and
\[\sigma_1(A_n)\ge \sigma_2(A_n)\ge\dots\ge \sigma_n(A_n)\]
are the singular values in non-increasing order. In this case, we will say that $\serie A$ has spectral symbol $k$ and we will write \[\serie A\sim_\sigma k.\]
The function $k$ thus becomes an asymptotic singular value distribution, but in general it is not uniquely determined.
The space of matrix sequences that admit a spectral symbol on a fixed domain $D$ has been shown to be closed with respect to a notion of convergence called the Approximating Classes of Sequences (a.c.s.). This notion and this result are due to Serra \cite{ACS}, but were actually inspired by Tilli's pioneering paper on LT sequences \cite{Tilli}. A sequence of matrix sequences $\{B_{n,m}\}_{n,m}$ is said to be a.c.s. convergent to $\serie A$ if there exist a sequence $\{N_{n,m}\}_{n,m}$ of ``small norm'' matrices and a sequence $\{R_{n,m}\}_{n,m}$ of ``small rank'' matrices such that for every $m$ there exists $n_m$ with
\[
A_n = B_{n,m} + N_{n,m} + R_{n,m}, \qquad \|N_{n,m}\|\le \omega(m), \qquad \rk(R_{n,m})\le nc(m)
\]
for every $n>n_m$, and
\[
\omega(m)\xrightarrow{m\to \infty} 0,\qquad c(m)\xrightarrow{m\to \infty} 0.
\]
In this case, we will use the notation $\{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.} \serie A$. The closedness result tells us that if $\{B_{n,m}\}_{n} \sim_\sigma k_m$ for every $m$, $k_m\to k$ in measure, and $\{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.} \serie A$, then $\serie A\sim_\sigma k$. This result is central in the theory, since it lets us compute the spectral symbol of a.c.s. limits, and it is useful whenever we can find simple sequences that converge to the desired $\serie A$.
In this context, it has been observed that the sequences $\serie A$ arising from differential equations can often be obtained as a.c.s. limits of sums and products of special diagonal and Toeplitz sequences, for which we can easily deduce the spectral symbol. This justifies the interest in the space of Generalized Locally Toeplitz (GLT) sequences, which contains the aforementioned classes of sequences, carries the structure of a $\mathbb C$-algebra, and is closed with respect to a.c.s. convergence. Here we will report only a few properties of this space; for a detailed presentation of GLT sequences and their applications, refer to \cite{BS},\cite{GLT},\cite{Tilli},\cite{GSM} and references therein.
The space of GLT sequences is built so that for every GLT matrix sequence $\serie A$ we choose one of its spectral symbols $k$, up to identification of functions equal almost everywhere (a.e.), and we denote this by $\serie A\sim_{GLT} k$.
This choice ensures that the GLT space
\[
\mathscr G = \left\{ (\serie A,k)\in \mathscr E\times \mathscr M_D : \serie A\sim_{GLT} k \right\}
\]
is a $\mathbb C-$algebra, where
\[
\mathscr M_D = \{k:D\to \mathbb C, \medspace k \text{ measurable }\}/ \sim
\]
\[
k\sim k' \iff k=k' \text{ a.e.}
\]
and
\[
\mathscr E := \{\serie{A} : A_n\in\mathbb C^{n\times n} \}.
\]
We will see more properties of $\mathscr G$ in Section 3.
We know that $\mathscr M_D$ is endowed with a complete metric that induces the convergence in measure of measurable functions, and we can also endow the set of matrix sequences $\mathscr E$ with a pseudometric that induces the a.c.s. convergence mentioned before. In Section 2, we will define this pseudometric and introduce an easy way to compute it.
In the works \cite{GS} and \cite{GStr}, it is shown that $\mathscr G$ is a closed metric subspace of $\mathscr E\times \mathscr M_D$, and the connection between the distances on $\mathscr E$ and $\mathscr M_D$ is emphasized.
The paper is organized as follows. In Section 2 we recall the definition of the pseudometric $d_{acs}$ on $\mathscr E$ and show that it is actually complete. Given a particular distance $d_m$ on $\mathscr M_D$ that induces the convergence in measure, we also prove that if a sequence $\serie A$ has spectral symbol $k$, then $d_m(k,0) = \dacs{\serie A}{\serie{0}}$, where $\serie 0$ is the sequence of zero matrices.
Some fundamental properties of GLT sequences are reported in Section 3, where we use the previously obtained results to prove that, up to a.c.s. equivalence, the set of GLT sequences is actually isomorphic and isometric to the space of measurable functions $\mathscr M_D$. Eventually we prove that the GLT algebra is already maximal in the space of sequences that admit a spectral symbol, and cannot be enlarged any further.
\section{Complete Pseudometric}
Given a matrix $A\in\mathbb C^{n\times n}$, we can define the function
\[
p(A):= \min_{i=1,\dots,n}\left\{ \frac{i-1}{n} + \sigma_i(A) \right\}
\]
which satisfies the triangle inequality. In fact, given matrices $A,B$ of the same dimension, we have
\[
p(A+B)\le p(A) + p(B).
\]
Given now a sequence $\serie A\in\mathscr E$, we can define
\[
\rho\left(\serie A\right):= \limsup_{n\to \infty} p(A_n)
\]
which lets us introduce a pseudometric $d_{acs}$ on $\mathscr E$
\[
\dacs{\serie{A}}{\serie{B}} = \rho\left(\{A_n-B_n\}_n\right).
\]
It has been proved (\cite{Garoni}) that this distance induces the a.c.s. convergence already introduced. In other words,
\[
\dacs{\serie A}{\{B_{n,m}\}_{n,m}} \xrightarrow{m\to \infty} 0 \iff \{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.} \serie A.
\]
The next two subsections show that this pseudometric is complete, and that it is strictly linked to a complete metric on the space of measurable functions.
\subsection{Completeness}
In this section, we prove the completeness of the space $\mathscr E$ endowed with the pseudometric $d_{acs}$.
\begin{teo}\label{comp}
The set $\mathscr E$ is complete with the pseudometric $d_{acs}$.
\end{teo}
\begin{proof}
Let $\{B_{n,m}\}_{n,m}$ be a Cauchy sequence with respect to the pseudometric. A Cauchy sequence converges if and only if one of its subsequences converges, and in that case they share the same limit, so we can always extract a subsequence such that for every pair of indices $s,t$
\[
\dacs{\{B_{n,s}\}_n}{\{B_{n,t}\}_n} \le 2^{-\min\{s,t\}}
\]
or, equivalently
\[
\limsup_{n\to \infty} p(B_{n,s} - B_{n,t})\le 2^{-\min\{s,t\}}.
\]
If we consider the indices $m$ and $m+1$, we obtain
\[
\limsup_{n\to \infty} p(B_{n,m} - B_{n,m+1})\le 2^{-m}.
\]
We know that, given $\varepsilon >0$, the argument of the limsup is eventually less than $2^{-m}+\varepsilon$. Thus we choose $\varepsilon = 2^{-m}$ and find a strictly increasing sequence of indices $N_m$ such that
\[
p(B_{n,m} - B_{n,m+1})\le 2^{-m+1} \qquad \forall n\ge N_m.
\]
We can now build the sequence $\serie{A}$ that will be our candidate limit:
\[
A_n := B_{n,m} \qquad \text{ whenever } \qquad N_{m+1}>n\ge N_m
\]
If $N_{M+1}>n\ge N_M$ with $M\ge m$, then we can estimate the distance between $A_n$ and $B_{n,m}$ as
\[
p(B_{n,m}-A_n) = p(B_{n,m}-B_{n,M}) \le \sum_{k=m}^{M-1} p(B_{n,k}-B_{n,k+1}),
\]
but $n\ge N_M> N_k$ for all indices $k$ in the summation, so
\[
\sum_{k=m}^{M-1} p(B_{n,k}-B_{n,k+1})\le \sum_{k=m}^{M-1} 2^{-k+1}\le 2\cdot 2^{-m} \sum_{k=0}^{M-m-1} 2^{-k} \le 4\cdot 2^{-m}.
\]
The latter bound does not depend on $M$ anymore, so we conclude that
\[
p(B_{n,m}-A_n)\le 4\cdot 2^{-m} \qquad \forall m\quad \forall n\ge N_m
\]
\[
\implies \dacs{\{B_{n,m}\}_{n}}{\serie A} = \limsup_{n\to \infty} p(B_{n,m}-A_n)\le 4\cdot 2^{-m} \xrightarrow{m\to \infty} 0.
\]
\end{proof}
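The block structure of the candidate limit sequence in the proof can be sketched programmatically. This is a toy illustration (the objects $B_{n,m}$ are arbitrary placeholders and the thresholds $N_m$ are assumed given), not part of the theory:

```python
def build_limit_sequence(B, thresholds, n_max):
    """Candidate a.c.s. limit from the completeness proof:
    A_n = B_{n,m} on the block N_m <= n < N_{m+1}.
    B[m][n] is the n-th term of the m-th approximating sequence,
    thresholds = [N_0, N_1, ...] is strictly increasing."""
    A = {}
    for n in range(thresholds[0], n_max):
        # index m of the block containing n, i.e. the largest m with N_m <= n
        m = max(i for i, N in enumerate(thresholds) if N <= i or N <= n)
        m = max(i for i, N in enumerate(thresholds) if N <= n)
        A[n] = B[m][n]
    return A
```

In the proof, the $N_m$ are extracted so that $p(B_{n,m}-B_{n,m+1})\le 2^{-m+1}$ for $n\ge N_m$, which makes the estimate on $p(B_{n,m}-A_n)$ telescope across blocks.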
Theorem \ref{comp} tells us that, given a sequence $\{B_{n,m}\}_{n,m}$, we can establish its convergence simply by checking that it is a Cauchy sequence. Moreover, we can actually build the limit sequence by following the proof of the theorem.
\subsection{Equivalence of Distances}
On the space $\mathscr M_D$ we can define the function
\[
p_m(f) := \inf \left\{ \frac{|E^C|}{|D|} + \ess\sup_E |f| \right\},
\]
where the infimum is taken over all the Lebesgue measurable sets $E\subseteq D$ and $E^C$ stands for the complement of $E$. This function induces the complete distance
\[
d_m(f,g) = p_m(f-g)
\]
and in this section we prove that if $\serie A\sim_\sigma g$ then
\[
d_m(g,0) = p_m(g) = \rho(\serie A) = \dacs{\serie A}{\serie{0}}
\]
where $\serie 0$ is the sequence of zero matrices of growing size $n$.
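Numerically, $p_m$ can be approximated on a grid of samples. Since any set $E$ with $\ess\sup_E|f|=t$ is contained, up to a null set, in $\{|f|\le t\}$, it suffices to scan the sets $E_t=\{|f|\le t\}$. The sketch below does this for a sampled function (a discretisation, not the exact infimum):

```python
import numpy as np

def p_m_discrete(samples):
    """Approximate p_m(f) = inf_E ( |E^C|/|D| + ess sup_E |f| ) from
    uniform-grid samples of f on D, scanning the sets E_t = {|f| <= t}."""
    a = np.abs(np.asarray(samples, dtype=float))
    best = np.inf
    for t in np.concatenate(([0.0], np.unique(a))):
        # |E_t^C|/|D| is the fraction of samples above t; ess sup on E_t is t
        best = min(best, float(np.mean(a > t)) + t)
    return best
```

For an indicator-type function taking value $1$ on a quarter of the domain and $0$ elsewhere, the optimal set discards the quarter, giving $p_m=1/4$.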
\begin{lemma}
Given $\serie A\in\mathscr E$ and $g\in \mathscr M_D$, we have
\[
\serie{A}\sim_\sigma g,\quad p_m(g) = L\implies \rho(\serie{A})\le L.
\]
\end{lemma}
\begin{proof}
We know that
\[
p_m(g)= \inf_{E\subseteq D} \left\{ \frac{|E^C|}{|D|} + \ess\sup_E |g| \right\}= L,
\]
where the sets $E$ are Lebesgue measurable.
By definition of infimum, given $\varepsilon>0$ we can always find a measurable set $F$ such that
\[
\frac{|F^C|}{|D|}+ \ess\sup_F |g| \le L + \varepsilon.
\]
From now on, let us call $M=\ess\sup_F |g|$.
Since $\serie A \sim_\sigma g$, we know that
\[
\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} F(\sigma_i(A_n)) = \frac{1}{|D|}\int_D F(|g(x)|) dx
\]
for every continuous function $F:\mathbb R\to \mathbb C$ with compact support. Choose such a test function $G$ (we keep the letter $F$ for the set fixed above), real valued and with $\chi_{[-\varepsilon,M+\varepsilon]}\ge G\ge \chi_{[0,M]}$. We obtain
\[
\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} G(\sigma_i(A_n)) \le \liminf_{n\to\infty} \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) \le M+\varepsilon \right\}\right|
\]
and
\[
\int_D G(|g(x)|)\, dx \ge |\{x\in D:|g(x)|\le M\}| \ge |F|.
\]
Therefore
\[
\liminf_{n\to\infty} \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) \le M+\varepsilon \right\}\right|\ge \frac{|F|}{|D|},
\]
\[
\limsup_{n\to\infty} \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) > M+\varepsilon \right\}\right|\le \frac{|F^C|}{|D|}\le L + \varepsilon - M.
\]
The argument of the limsup is eventually less than $L + 2\varepsilon - M$, so there exists $N>0$ such that
\[
\frac{1}{n} \left|\left\{ i:\sigma_i(A_n) > M+\varepsilon \right\}\right|\le L + 2\varepsilon - M
\quad \forall n>N,
\]
\[
\frac{1}{n} \left|\left\{ i:\sigma_i(A_n) > M+\varepsilon \right\}\right| + M+\varepsilon\le L + 3\varepsilon
\quad \forall n>N.
\]
This concludes the proof since
\[
\rho(\serie{A}) = \limsup_{n\to\infty}\min_{i=1,\dots,n}\left\{\frac{i-1}{n} + \sigma_i(A_n)\right\}\]
and for every $n>N$ we can choose the index of the greatest $\sigma_i(A_n)$ less than or equal to $M+\varepsilon$. Consequently
\[
\min_{i=1,\dots,n}\left\{\frac{i-1}{n} + \sigma_i(A_n)\right\}\le \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) > M+\varepsilon \right\}\right| + M+\varepsilon \le L+ 3\varepsilon
\]
and hence
\[
\rho(\serie{A})\le \limsup_{n\to\infty} \{ L + 3\varepsilon \} = L+3\varepsilon
\]
for every $\varepsilon>0$, so that $\rho(\serie A)\le L$ and the proof is concluded.
\end{proof}
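The finite-$n$ functional inside the $\limsup$ defining $\rho$ is easy to evaluate. Here is a sketch, assuming (as the argument above requires) that the singular values are taken in decreasing order:

```python
import numpy as np

def rho_n(A):
    """min_i { (i-1)/n + sigma_i(A) } for an n x n matrix A, with
    sigma_1 >= sigma_2 >= ... the singular values in decreasing order;
    rho of a sequence {A_n}_n is the limsup in n of these values."""
    s = np.linalg.svd(np.asarray(A, dtype=float), compute_uv=False)  # decreasing
    n = len(s)
    return float(np.min(np.arange(n) / n + s))  # arange(n) plays the role of i-1
```

For the identity this gives $1$ (all singular values equal $1$), while for the zero matrix it gives $0$, matching the extreme cases of the bound in the lemma.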
The converse statement has a similar proof.
\begin{lemma}
Given $\serie A\in\mathscr E$ and $g\in \mathscr M_D$, we have
\[
\serie{A}\sim_\sigma g,\quad \rho(\serie{A}) = L\implies p_m(g) \le L.
\]
\end{lemma}
\begin{proof}
By hypothesis,
\[
\rho(\serie{A}) = \limsup_{n\to\infty}\min_{i=1,\dots,n}\left\{\frac{i-1}{n} + \sigma_i(A_n)\right\}= L
\]
so we can set $\varepsilon >0$ and find $N>0$ such that
\[
\min_{i=1,\dots,n}\left\{\frac{i-1}{n} + \sigma_i(A_n)\right\}\le L + \varepsilon\qquad \forall n>N
\]
and if we call $j(n)$ the index $i$ that realizes the minimum, then
\[
0 \le \frac{j(n)-1}{n} + \sigma_{j(n)}(A_n)\le L+\varepsilon \qquad \forall n>N.
\]
The sequence $\{\sigma_{j(n)}(A_n)\}_n$ is bounded, so we can find a converging subsequence such that
\[
\{\sigma_{j(n_k)}(A_{n_k})\}_{n_k}\xrightarrow{k\to \infty} x\ge 0,
\]
meaning that there exists $K>0$ for which
\[
x-\varepsilon < \sigma_{j(n_k)}(A_{n_k})<x+\varepsilon \qquad \forall k>K.
\]
Since $\serie A \sim_\sigma g$ we know that
\[
\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} F(\sigma_i(A_n)) = \frac{1}{|D|}\int_D F(|g(x)|) dx
\]
for every continuous function $F:\mathbb R\to \mathbb C$ with compact support. Choose such a test function $G$, real valued and with $\chi_{[-\varepsilon,x+2\varepsilon]}\ge G\ge \chi_{[0,x+\varepsilon]}$. We obtain
\[
\lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^{n} G(\sigma_i(A_n)) \ge \limsup_{n\to\infty} \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) \le x +\varepsilon \right\}\right|
\]
and
\[
\int_D G(|g(y)|)\, dy \le |\{y\in D:|g(y)|\le x+2\varepsilon\}|.
\]
\]
As a consequence
\[
\frac{|\{y\in D:|g(y)|\le x+2\varepsilon\}| }{|D|}\ge\limsup_{n\to\infty} \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) \le x+\varepsilon \right\}\right|,
\]
\[
\frac{|\{y\in D:|g(y)|> x+2\varepsilon \}| }{|D|}\le\liminf_{n\to\infty} \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) > x +\varepsilon \right\}\right|
\]
and we can find $M>0$ such that
\[
\frac{|\{y\in D:|g(y)|> x+2\varepsilon \}| }{|D|} - \varepsilon \le \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) > x+\varepsilon \right\}\right| \qquad \forall n>M,
\]
\[
\frac{|\{y\in D:|g(y)|> x+2\varepsilon \}| }{|D|} +x+2\varepsilon \le \frac{1}{n} \left|\left\{ i:\sigma_i(A_n) > x +\varepsilon \right\}\right| + x + 3\varepsilon \qquad \forall n>M.
\]
Given now an index $n_k$ such that $k>K$ and $n_k>\max\{N,M\}$, we know that
\[
\frac{1}{n_k} \left|\left\{ i:\sigma_i(A_{n_k}) > x +\varepsilon \right\}\right| + x-\varepsilon \]\[\le \frac{1}{n_k} \left|\left\{ i:\sigma_i(A_{n_k}) > \sigma_{j(n_k)}(A_{n_k}) \right\}\right| + \sigma_{j(n_k)}(A_{n_k})
\]
\[
= \frac{j(n_k)-1}{n_k} + \sigma_{j(n_k)}(A_{n_k}) \le L+\varepsilon.
\]
Therefore we can rewrite
\[
\frac{|\{y\in D:|g(y)|> x+2\varepsilon \}| }{|D|} +x+2\varepsilon \le L+ 5\varepsilon
\]
and this bound no longer depends on $n$. This yields the claim: choosing
\[
F = \{y\in D:|g(y)|\le x+2\varepsilon \},
\]
then
\[
p_m(g)= \inf_{E\subseteq D} \left\{ \frac{|E^C|}{|D|} + \ess\sup_E |g| \right\}\le
\frac{|F^C|}{|D|} + \ess\sup_F |g|\le
\]
\[
\frac{|\{y\in D:|g(y)|> x+2\varepsilon \}| }{|D|} +x+2\varepsilon \le L + 5\varepsilon
\]
for all $\varepsilon>0$; letting $\varepsilon\to 0$ we obtain $p_m(g)\le L$.
\end{proof}
These lemmas lead to the desired result.
\begin{teo}\label{dis}
Given $\serie A\in\mathscr E$ and $g\in \mathscr M_D$, then
\[
\serie{A}\sim_\sigma g \implies \rho(\serie{A}) = p_m(g) .
\]
\end{teo}
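Theorem \ref{dis} can be sanity-checked numerically on diagonal sampling sequences. The sketch below takes $A_n$ diagonal with entries $g(j/n)$ and $D=[0,1]$ (a one-dimensional simplification of the domain, chosen only to make the experiment small); for such sequences $\serie A\sim_\sigma g$, and already at finite $n$ the value of the $\rho$-functional matches $p_m(g)$:

```python
import numpy as np

def rho_n(sigma):
    """min_i {(i-1)/n + sigma_i} over the singular values sorted decreasingly."""
    s = np.sort(np.asarray(sigma, dtype=float))[::-1]
    return float(np.min(np.arange(len(s)) / len(s) + s))

def p_m_discrete(samples):
    """Approximate p_m from samples, scanning the sets E_t = {|g| <= t}."""
    a = np.abs(np.asarray(samples, dtype=float))
    return min(float(np.mean(a > t)) + t for t in np.concatenate(([0.0], np.unique(a))))

n = 1000
grid = np.arange(1, n + 1) / n
g = 0.3 * (grid < 0.5)          # g = 0.3 on [0, 1/2), 0 elsewhere
sigma = np.abs(g)               # singular values of diag(g(1/n), ..., g(n/n))
```

Here $p_m(g)=0.3$ (take $E=D$, where $\ess\sup|g|=0.3$, which beats discarding the set $\{g\ne 0\}$ of measure $1/2$), and the $\rho$-functional of the sampling matrices gives the same value.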
With the above results, we can finally focus only on the GLT algebra.
\section{The GLT Algebra}
Let $\mathscr C_D$ denote the set of pairs $(\serie A,k)\in \mathscr E\times \mathscr M_D$ such that $\serie A\sim_\sigma k$, where $D=[0,1]\times[-\pi,\pi]$. First of all, note that this is well defined: by definition, if $k,k'$ are two measurable functions that coincide almost everywhere, then
\[
\serie A\sim_\sigma k\iff \serie A\sim_\sigma k'
\]
so when we say that $k\in \mathscr M_D$ and $\serie A \sim_\sigma k$, it means that every function in the equivalence class $k$ is a spectral function for $\serie A$.
As already anticipated in the first section, the set $\mathscr G$ of GLT sequences is a subset of $\mathscr C_D$, so when we say that a sequence $\serie A$ is a GLT with spectral symbol $k$ and we write $\serie A\sim_{GLT} k$, it means that $(\serie A,k)\in \mathscr G$ and in particular, it means that $\serie A\sim_\sigma k$.
The GLT set is built so that for every $\serie A\in \mathscr E$ there exists at most one (class of) function $k$ such that $\serie A\sim_{GLT} k$, so if we denote by $\mathscr H$ the set of GLT matrix sequences, that is, the projection of $\mathscr G$ onto $\mathscr E$, then there exists a map
\[
S : \mathscr H\to \mathscr M_D
\]
that associates to each sequence its GLT spectral symbol
\[
S(\serie A) = k \iff \serie A\sim_{GLT} k.
\]
The main properties of $\mathscr G$ that we need in the following sections, and which can be found in or immediately derived from the results in \cite{GS}, are
\begin{enumerate}
\item $\mathscr G$ is a $\mathbb C$-algebra, meaning that given $(\serie A,k)$,$(\serie B,h)\in \mathscr G$ and \\$\lambda \in\mathbb C$, then
\begin{itemize}
\item $(\{ A_n+B_n \}_n,k+h)\in \mathscr G$,
\item $(\{ A_nB_n \}_n,kh)\in \mathscr G$,
\item $(\{ \lambda A_n \}_n,\lambda k)\in \mathscr G$.
\end{itemize}
\item\label{app}
Given $k\in \mathscr M_D$, there exist GLT symbols $k_m\in\mathscr M_D$ that converge to $k$ in measure.
\item \label{close}
$\mathscr G$ is closed in $\mathscr E\times \mathscr M_D$: given $\{(\{B_{n,m}\}_{n,m},k_m)\}_m\subseteq \mathscr G$ such that
\[\{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.}\serie A, \qquad k_m\to k \text{ in measure}\]
where $(\serie A,k)\in \mathscr E\times \mathscr M_D$, then $(\serie A,k)\in \mathscr G$.
\item \label{zero}We call \textit{zero-distributed} the matrix sequences that have 0 as spectral symbol, and we denote the sets
\[
\mathscr Z = \{ (\serie C,0)\in \mathscr C_D \},\qquad \mathscr Z_M = \{ \serie C : (\serie C,0)\in \mathscr C_D \}.
\]
Then $\mathscr Z$ is a subalgebra of $\mathscr G$.
\item\label{ker} $S$ is a homomorphism of $\mathbb C$-algebras and $\mathscr Z_M$ coincides with its kernel.
\end{enumerate}
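As a toy model of the $\mathbb C$-algebra structure in property 1 (illustrative scaffolding only, not the actual GLT construction), pairs can be manipulated componentwise:

```python
import numpy as np

class Pair:
    """Toy model of a pair (matrix sequence, symbol): the operations act
    componentwise, mirroring the closure properties listed above."""
    def __init__(self, mats, symbol):
        self.mats = mats          # list of matrices, one per size n
        self.symbol = symbol      # callable representing the symbol

    def __add__(self, other):
        return Pair([a + b for a, b in zip(self.mats, other.mats)],
                    lambda x: self.symbol(x) + other.symbol(x))

    def __mul__(self, other):
        return Pair([a @ b for a, b in zip(self.mats, other.mats)],
                    lambda x: self.symbol(x) * other.symbol(x))

def sampling_pair(k, sizes):
    """Diagonal sampling pair: A_n = diag(k(j/n)) with symbol k."""
    return Pair([np.diag(k(np.arange(1, n + 1) / n)) for n in sizes], k)
```

For diagonal sampling pairs the componentwise rules are exact at every finite $n$; in the GLT algebra they hold only asymptotically, which is what the properties above express.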
With these powerful properties, we can prove that the GLT algebra is complete with respect to the metric on $\mathscr E\times \mathscr M_D$, and that $\mathscr H$, up to a.c.s. equivalence, is actually isomorphic and isometric to $\mathscr M_D$.
\subsection{Cauchy Sequences}
An immediate consequence of the preceding section is the following.
\begin{corollario}\label{Cau}
Given $\{B_{n,m}\}_{n,m}\sim_{GLT} f_m$, then
\[
\{B_{n,m}\}_{n,m} \text{ converges} \iff \{f_m\}_m \text { converges.}
\]
\end{corollario}
\begin{proof}
Since both $\mathscr E$ and $\mathscr M_D$ are complete spaces with their respective metrics, it is sufficient to prove that
\[
\{B_{n,m}\}_{n,m} \text{ Cauchy} \iff \{f_m\}_m \text { Cauchy}.
\]
We know that the space of GLT couples is a $\mathbb C$-algebra, so
\[
\{B_{n,i}-B_{n,j}\}_{n}\sim_{GLT} f_i - f_j.
\]
The results of the previous section imply that
\[
\dacs{\{B_{n,i}\}_n}{\{B_{n,j}\}_n} = d_m(f_i,f_j)
\]
so if one of the two is a Cauchy sequence with respect to its metric, then so is the other.
\end{proof}
This is all we need to prove the following result.
\begin{teo}\label{mis}
Every measurable function $k\in \mathscr M_D$ with $D=[0,1]\times [-\pi,\pi]$ is a GLT symbol.
\end{teo}
\begin{proof}
Given $k$, by Property (\ref{app}) we know that there exists a sequence $k_m$ of GLT symbols that converge to $k$ in measure. If $\{B_{n,m}\}_{n,m}\sim_{GLT} k_m$, then by Corollary \ref{Cau}, $\{B_{n,m}\}_{n,m}$ converges, so there exists $\serie A$ such that $\{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.} \serie A$. Thanks to Property (\ref{close}), we obtain that $\serie A\sim_{GLT} k$, so $k$ is a GLT symbol.
\end{proof}
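The density statement used in this proof rests on convergence in measure. A minimal numerical illustration follows; the choice of truncations as approximants is ours (whether each truncation is itself a GLT symbol is part of the theory and not checked here):

```python
import numpy as np

# k(x) = 1/x is measurable and unbounded on (0, 1]; its truncations
# k_m = clip(k, -m, m) converge to k in measure as m -> infinity.
x = np.linspace(1e-6, 1.0, 200_000)
k = 1.0 / x

def measure_of_difference(m, eps=0.5):
    """Approximate Lebesgue measure of {|k - k_m| > eps} on (0, 1]."""
    k_m = np.clip(k, -m, m)
    return float(np.mean(np.abs(k - k_m) > eps))
```

The measure of the exceptional set is roughly $1/(m+\varepsilon)$, which vanishes as the truncation level $m$ grows.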
This also lets us formulate further results that are now easy to prove.
\begin{corollario}
If the following conditions
\begin{enumerate}
\item $\{B_{n,m}\}_{n,m}\sim_{GLT} k_m$,
\item $\serie A\sim_{GLT} k$,
\item $k_m\to k$ in measure,
\item $\{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.} \serie A$,
\end{enumerate}
are satisfied, then
\begin{itemize}
\item $(1),(3) \implies \exists \serie A : (4), (2)$,
\item $(1),(4) \implies \exists k: (2), (3)$,
\item $(2),(3) \implies \exists \{B_{n,m}\}_{n,m} : (1),(4)$.
\end{itemize}
\end{corollario}
In particular, a consequence is that $\mathscr H$ is a complete subspace of $\mathscr E$, since it is closed thanks to the statement $(1),(4) \implies \exists k: (2), (3)$ contained in the previous corollary.
\subsection{Isometry}
Let us define the a.c.s. equivalence on $\mathscr E$ as
\[
\serie A\sim_{acs} \serie B \iff \dacs{\serie A}{\serie B} = 0.
\]
A property of the zero distributed sequences is that
\[
\rho(\serie C) = 0\iff \serie C\sim_\sigma 0.
\]
Consequently it is immediate to see that
\[
\serie A\sim_{acs} \serie B\iff \{A_n-B_n\}_n\sim_\sigma 0 \iff (\{A_n-B_n\}_n,0)\in\mathscr Z
\]
and thanks to the triangle inequality of $d_{acs}$, $\sim_{acs}$ is an equivalence relation on $\mathscr E$ and on all its subsets. We know that the set $\mathscr Z_M$ is a subalgebra of $\mathscr E$, and the previous lines show that
\[
\mathscr E / \sim_{acs} \equiv \mathscr E / \mathscr Z_M.
\]
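The zero-distributed sequences appearing here are easy to visualise numerically: a sequence whose number of non-negligible singular values is $o(n)$ has vanishing $\rho$. A small sketch (the choice of $\sqrt n$ unit singular values is an arbitrary example):

```python
import numpy as np

def rho_n(sigma):
    """min_i {(i-1)/n + sigma_i} over singular values sorted decreasingly."""
    s = np.sort(np.asarray(sigma, dtype=float))[::-1]
    return float(np.min(np.arange(len(s)) / len(s) + s))

def outlier_sequence_rho(n):
    """C_n with sqrt(n) singular values equal to 1 and the rest 0:
    zero-distributed, and rho_n(C_n) = sqrt(n)/n -> 0."""
    r = int(np.sqrt(n))
    return rho_n([1.0] * r + [0.0] * (n - r))
```

The minimum is attained just past the block of outliers, giving $\sqrt n/n$, consistent with $\rho(\serie C)=0$ for zero-distributed $\serie C$.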
As reported in Property \ref{ker}, $\mathscr Z_M$ is also the kernel of $S$, so we can define a new induced injective homomorphism of $\mathbb C$-algebras
\[
T : \mathscr M := \mathscr H/\mathscr Z_M \hookrightarrow \mathscr M_D.
\]
The quotient preserves the distance, as shown in the following lemma.
\begin{lemma}
Given $[A],[B]\in \mathscr M$, and given
\[\serie A,\serie {A'} \in [A],\qquad \serie B,\serie {B'}\in [B],\]
we have
\[
\dacs{\serie A}{\serie B} = \dacs{\serie {A'}}{\serie {B'}}
\]
\end{lemma}
\begin{proof}
The following relations hold
\[\serie A = \serie {A'} +\serie {Z}\qquad \serie B = \serie {B'} +\serie {W}\qquad \serie {Z},\serie {W}\in \mathscr Z_M\]
so that
\[
\dacs{\serie A}{\serie B} = \dacs{\serie {A'} +\serie {Z}}{\serie {B'} +\serie {W}}\]
\[ = \rho(\serie {A'} -\serie {B'} +\serie {Z} -\serie {W})
\]
\[
= \dacs{\serie {A'}-\serie {B'}}{\serie {Z}-\serie {W}}.
\]
However both $\serie {Z}-\serie {W}$ and $-\serie {Z}+\serie {W}$ are zero distributed, since $\mathscr Z_M$ is an algebra, and by Theorem \ref{dis}, we deduce
\[
\dacs{\serie {Z}-\serie {W}}{\serie 0} = \rho(\serie {Z}-\serie {W}) = p_m(0) = 0,
\]
\[
\dacs{-\serie {Z}+\serie {W}}{\serie 0} = \rho(-\serie {Z}+\serie {W}) = p_m(0) = 0.
\]
By applying the triangle inequality, we obtain
\[
\dacs{\serie {A'}-\serie {B'}}{\serie {Z}-\serie {W}} \]\[\le \dacs{\serie {A'}-\serie {B'}}{\serie 0} + \dacs{\serie 0}{\serie {Z}-\serie {W}}\]\[ = \dacs{\serie {A'}-\serie {B'}}{\serie 0} = \rho(\serie {A'}-\serie {B'}) = \dacs{\serie {A'}}{\serie {B'}}
\]
and
\[
\dacs{\serie {A'}-\serie {B'}}{\serie {Z}-\serie {W}} \]\[\ge \dacs{\serie {A'}-\serie {B'}}{\serie 0} - \dacs{\serie 0}{-\serie {Z}+\serie {W}}\]\[ = \dacs{\serie {A'}-\serie {B'}}{\serie 0} = \rho(\serie {A'}-\serie {B'}) = \dacs{\serie {A'}}{\serie {B'}}.
\]
As a consequence
\[
\dacs{\serie A}{\serie B} \]\[= \dacs{\serie {A'}-\serie {B'}}{\serie {Z}-\serie {W}} \]\[=\dacs{\serie {A'}}{\serie {B'}}.
\]
\end{proof}
The latter result implies that $\mathscr M$ is still endowed with a pseudometric, defined as
\[
\dacs{[A]}{[B]}:= \dacs{\serie A}{\serie B}
\]
for any $\serie A\in[A]$ and $\serie B\in [B]$. The induced $d_{acs}$ is actually a true metric, since
\[
\serie A\in [A],\quad \serie B\in [B],\quad \dacs{\serie A}{\serie B}= 0
\]
\[
\implies \serie A\sim_{acs}\serie B\implies [A]\equiv [B],
\]
so $\mathscr M$ is endowed with a complete metric. Moreover, since we took a quotient of algebras, it is immediate that we can define a GLT symbol for every element of $\mathscr M$ as
\[
[A]\sim_{GLT} k \iff \forall\serie A\in[A], \serie A\sim_{GLT} k.
\]
We can eventually state and prove the main result of this section.
\begin{teo}
The map $T:\mathscr M\to \mathscr M_D$ is an isomorphism of $\mathbb C$-algebras and an isometry of metric spaces.
\end{teo}
\begin{proof}
$T$ is already an injective homomorphism of $\mathbb C$-algebras, but in view of Theorem \ref{mis}, it is also surjective, so it is an isomorphism. If $[A],[B]\in \mathscr M$ with $[A]\sim_{GLT} k$ and $[B]\sim_{GLT} h$, and $\serie A\in [A]$, $\serie B\in [B]$, then
\[
\dacs{[A]}{[B]} = \dacs{\serie A}{\serie B} =\rho(\serie A-\serie B)
\]
but $\serie A\sim_{GLT} k$, $\serie B\sim_{GLT} h$, and $\mathscr G$ is an algebra, so $\serie A-\serie B\sim_{GLT} k-h$. Thanks to Theorem \ref{dis} we know that
\[
\rho(\serie A-\serie B) = p_m(k-h) = d_m(k,h) = d_m(T[A], T[B])
\]
hence
\[
\dacs{[A]}{[B]} = d_m(T[A], T[B])
\]
from which we conclude that $T$ is an isometry of metric spaces.
\end{proof}
\subsection{Maximality}
In this last section, we investigate the maximality of the GLT algebra in $\mathscr C_D$. Formally, let $\mathscr S_D$ be the set of subsets of $\mathscr C_D$ that are groups with respect to addition. A group $G$ is characterized by the properties
\begin{itemize}
\item $a,b\in G\implies a+b \in G$,
\item $a\in G \implies -a\in G$,
\end{itemize}
so this means that $\mathscr G\in \mathscr S_D$.
Notice that Corollary \ref{Cau} can be generalized to any group $G\in\mathscr S_D$ with the same proof.
\begin{lemma}
Let $G\in \mathscr S_D$, and let $\{(\{B_{n,m}\}_{n},f_m)\}_m\subseteq G$. Then
\[
\{B_{n,m}\}_{n,m} \text{ converges} \iff \{f_m\}_m \text{ converges.}
\]
\end{lemma}
We have already seen that $\mathscr E$ and $\mathscr M_D$ are complete spaces, along with their product. We proved that the set $\mathscr G$ is also a complete space, and it is easy to reprove this in a different way: $\mathscr G$ is closed in $\mathscr E \times \mathscr M_D$, and a closed set in a complete space is complete. The same reasoning extends to groups.
\begin{lemma}
Let $G\in \mathscr S_D$. Then $G$ is complete if and only if it is closed in $\mathscr E \times \mathscr M_D$.
\end{lemma}
Theorem \ref{mis} is also generalizable, but we need additional assumptions.
\begin{teo}
Let $G\in \mathscr S_D$ be a group containing $\mathscr Z$. Then every measurable function $k$ is a spectral symbol for $G$ if and only if
\begin{itemize}
\item The set of spectral symbols for $G$ is dense in $\mathscr M_D$.
\item $G$ is closed in $\mathscr E \times \mathscr M_D$.
\end{itemize}
and in this case, $G$ is a maximal element of $\mathscr S_D$ with respect to the inclusion partial order.
\end{teo}
\begin{proof}
If the two conditions are true, then the proof is analogous to the one of Theorem \ref{mis}.
For the converse, if all measurable functions are spectral symbols for $G$, then the first condition is surely verified. Let
\[(\{B_{n,m}\}_{n,m},k_m)\subseteq G,\qquad (\{B_{n,m}\}_{n,m},k_m)\to (\serie A,k).\]
We know that there exists $(\serie C,k)\in G$, so
\[ (\{C_n-B_{n,m}\}_n,k-k_m) \in G, \qquad k-k_m\to 0\]
and
\[
\{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.} \serie C, \qquad \{B_{n,m}\}_{n,m}\xrightarrow{a.c.s.} \serie A
\]
\[ \dacs{\serie A}{\serie C}\le \dacs{\serie A}{\{B_{n,m}\}_n} + \dacs{\serie C}{\{B_{n,m}\}_n} \to 0\]
\[\implies \dacs{\serie A}{\serie C} = 0 \implies \serie A-\serie C\in \mathscr Z_M.\]
We know that $\mathscr Z\subseteq G$, so we conclude
\[
(\serie A,k) = (\serie C,k) + (\serie A-\serie C,0)\in G.
\]
To show that $G$ is a maximal element in $\mathscr S_D$, let $N$ be a group in $\mathscr S_D$ containing $G$, and let us consider any couple $(\serie{A},k)\in N$. We know that there exists $(\serie C,k)\in G\subseteq N$, but $N$ is a group, so $(\{A_n-C_n\}_n,0)\in N$. This is also a zero distributed sequence, so it belongs to $G$, and, as before,
\[(\serie A,k) = (\serie C,k) + (\serie A-\serie C,0)\in G.\]
This shows that $N=G$, so $G$ is maximal in $\mathscr S_D$.
\end{proof}
If $D = [0,1]\times [-\pi,\pi]$, then we can apply the previous theorem to obtain the following result.
\begin{corollario}
$\mathscr G$ is a maximal group in $\mathscr{S}_D$.
\end{corollario}
The last question is whether $\mathscr G$ is also a maximum element of $\mathscr S_D$, meaning that it contains all the other groups. The answer, in this case, is negative, since there exist many trivial transformations of the variables that do not change the spectral symbol properties. For example,
\[
GLT^{inv} = \Set{(\serie{A},k(1-x,\theta)) | \serie{A}\sim_{GLT} k(x,\theta)}
\]
is a closed algebra in $\mathscr C_D$ since \[\serie{A}\sim_{GLT} k(x,\theta)\implies \serie{A}\sim_\sigma k(x,\theta) \implies \serie{A}\sim_\sigma k(1-x,\theta).\]
In general, any symmetry or cyclic translation of the variables produces spectral symbols for the same sequences, and these respect all the axioms of a closed algebra, since sums, products and convergence commute with the transformations. Finally, GLT$^{inv}$ and GLT are incompatible, since there exist symbols $k$ that are not symmetric in the variable $x$, so
\[
\serie{A}\sim_{GLT} k(x,\theta), \quad k(x,\theta)\ne k(1-x,\theta)\]
\[ \implies \serie{A}\not\sim_{GLT} k(1-x,\theta),\quad \serie{A}\not\sim_{GLT^{inv}} k(x,\theta)
\]
and this means that neither is contained in the other.
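A quick numerical sanity check of the flip invariance (a sketch with one-variable diagonal sampling matrices, a simplification of the two-variable GLT setting): the map $x\mapsto 1-x$ preserves Lebesgue measure, so both samplings produce asymptotically the same singular value distribution, even though the finite sections differ.

```python
import numpy as np

# Sorted singular values of diag(k(j/n)) and diag(k(1 - j/n)) agree up to O(1/n):
# the flip x -> 1 - x preserves Lebesgue measure, hence the distribution.
k = lambda x: x ** 2
n = 500
grid = np.arange(1, n + 1) / n
s_direct = np.sort(np.abs(k(grid)))
s_flipped = np.sort(np.abs(k(1 - grid)))
max_gap = float(np.max(np.abs(s_direct - s_flipped)))
```
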
% Source: arXiv:1703.10414, "Equivalence between GLT sequences and measurable functions" (math.NA), 2017.
% arXiv:1809.00882
\title{An elementary proof of de Finetti's Theorem}
\begin{abstract}
A sequence of random variables is called exchangeable if the joint distribution of the sequence is unchanged by any permutation of the indices. De Finetti's theorem characterizes all $\{0,1\}$-valued exchangeable sequences as a ``mixture'' of sequences of independent random variables. We present a new, elementary proof of de Finetti's Theorem. The purpose of this paper is to make this theorem accessible to a broader community through an essentially self-contained proof.
\end{abstract}
\section{Introduction}
\begin{definition}
A finite sequence of (real valued) random variables $X_{1},X_{2},\ldots,X_{N}$ on a probability space $(\Omega,\mathcal{F},\mathbb{P}) $ is called \emph{exchangeable}, if for any permutation $\pi$ of $\{1,2,\ldots,N\}$ the
distributions of $X_{\pi(1)},X_{\pi(2)},\ldots,X_{\pi(N)}$ and $X_{1},X_{2},\ldots,X_{N}$ agree, i.\,e. if for any Borel sets $A_{1},A_{2},\ldots,A_{N}$
\begin{align}
&\mathbb{P}\big(X_{1}\in A_{1},X_{2}\in A_{2},\ldots,X_{N}\in A_{N}\big)\notag\\~=~&\mathbb{P}(X_{\pi(1)}\in A_{1},X_{\pi(2)}\in A_{2},\ldots,X_{\pi(N)}\in A_{N})
\end{align}
An infinite sequence $\{X_{i}\}_{i\in\mathbb{N}}$ is called exchangeable, if the finite sequences $X_{1},X_{2},\ldots,X_{N}$ are exchangeable for any $N\in\mathbb{N}$.
\end{definition}
Obviously, independent, identically distributed random variables are exchangeable, but there are many more examples of exchangeable sequences.
Let us denote by $\pi_{p}$ the (Bernoulli) probability measure on $\{0,1\}$ given by $\pi_{p}(1)=p$ and $\pi_{p}(0)=1-p$.
If the random variables $X_{i}$ are independent and distributed according to $\pi_{p}$, i.\,e. $\mathbb{P}(X_{i}=1)=\pi_{p}(1)=p$
and $\mathbb{P}(X_{i}=0)=\pi_{p}(0)=1-p$, then the probability distribution of the sequence $X_{1},\ldots,X_{N}$ is the product measure
\begin{equation}
\mathcal{P}_{p}~=~\bigotimes_{i=1}^{N}\;\pi_p\qquad\text{on}\quad\{0,1\}^{N}
\end{equation}
In 1931 B. de Finetti proved the following remarkable theorem which now bears his name:
\begin{theorem}[de Finetti's Representation Theorem]\label{deFin}
Let $X_{i}$ be an infinite sequence of $\{0,1\}$-valued exchangeable random variables. Then there exists a probability measure $\mu$ on $[0,1]$ such that
for any $N$ and any sequence $(x_{1},\ldots,x_{N})\in\{0,1\}^{N}$
\begin{align}
\mathbb{P}\big(X_{1}=x_{1},\ldots,X_{N}=x_{N}\big)~&=~\int\,\mathcal{P}_{p}(x_{1},\ldots,x_{N})\,d\mu(p)\\
&=~\int\,\prod_{i=1}^{N}\pi_{p}(x_{i})\,d\mu(p)
\end{align}
\end{theorem}
Loosely speaking: An exchangeable sequence with values in $\{0,1\}$ is a `mixture' of independent sequences with respect to a measure $\mu$ on $[0,1]$.
De Finetti's Theorem was extended in various directions, most notably to random variables with values in rather general spaces \cite{Hewitt}. For reviews
on the theorem see e.\,g. \cite{Aldous}, see also the textbook \cite{Klenke} for a proof.
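As a concrete instance (the uniform choice of $\mu$ is ours, for illustration): if $\mu$ is the uniform measure on $[0,1]$, the representation gives $\mathbb{P}\big(\sum_{i=1}^{N}X_{i}=m\big)=\binom{N}{m}\int_0^1 p^{m}(1-p)^{N-m}\,dp=\frac{1}{N+1}$ for every $m$, so all values of the sum are equally likely. A small Monte Carlo sketch of this mixture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_definetti_uniform(N, trials):
    """Draw p ~ Uniform[0,1], then X_1,...,X_N iid Bernoulli(p);
    return the empirical distribution of X_1 + ... + X_N."""
    p = rng.uniform(size=trials)
    s = rng.binomial(N, p)                 # the sum given p is Binomial(N, p)
    return np.bincount(s, minlength=N + 1) / trials
```

The resulting sequence is exchangeable but far from independent: conditionally on $p$ the variables are i.i.d., yet unconditionally early observations carry information about later ones.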
The proof of Theorem \ref{deFin} we present here is very elementary. It is based on the method of moments which allows us to prove weak convergence of measures.
\medskip
\noindent\textbf{Acknowledgement} It is a pleasure to thank Michael Fleermann for careful proofreading and many helpful suggestions.
\section{Preliminaries}
For a probability measure $\mu$ on $\mathbb{R}$ we define the $k^{\text{th}}$ moment by $m_{k}(\mu):=\int x^{k}\,d\mu(x)$ whenever the latter integral exists
(in the sense that $\int |x|^{k}\,d\mu(x)<\infty$).
In the following we will be dealing with measures with compact support so that all moments exist (and are finite). The following theorem is a
light version of the method of moments which is nevertheless sufficient for our purpose.
\begin{proposition}\label{bdsupp}\
\begin{enumerate}
\item\label{a} Let $\mu_{n}$ ($n\in\mathbb{N}$) be probability measures with support contained in a (fixed) interval $[a,b]$.
If for all $k$ the moments $m_{k}(\mu_{n})$ converge to some $m_{k}$ then the sequence $\mu_{n}$ converges weakly
to a measure $\mu$ with moments $m_{k}(\mu)=m_{k}$ and with support contained in $[a,b]$.
\item\label{b} If $\mu$ is a probability measure with support contained in $[a,b]$ and $\nu$ is a probability measure on $\mathbb{R}$ such that
$m_{k}(\mu)=m_{k}(\nu)$ for all $k$, then $\mu=\nu$.
\end{enumerate}
\end{proposition}
\begin{remark}
Let $\mu_{n}$ and $\mu $ be probability measures on $\mathbb{R}$.
Recall that weak convergence of the measures $\mu_{n}$ to $\mu $ means that
\begin{align}
\int f(x)\,d\mu_{n}(x)~\longrightarrow~\int f(x)\,d\mu(x)
\end{align}
for all bounded, continuous functions $f$ on $\mathbb{R}$.
The above theorem is true and, in fact, well known if the support condition is replaced by the much weaker assumption that the moments $m_{k}(\mu)$ (resp. the numbers $m_{k}$) do not grow too fast as $k\to\infty$
(see \cite{Klenke} or \cite{Kirsch} for details).
\end{remark}
\begin{proof}
We sketch the proof, for details see the literature cited above.
\noindent By the Weierstrass approximation theorem, the polynomials on $I=[a-1,b+1]$ are uniformly dense in the space of continuous functions on $I$. Hence the integral
$\int f(x)\, d\mu(x)$ for continuous $f$ can be computed from the knowledge of the moments of $\mu$. From this, part \ref{b} of the theorem follows.
Moreover, we get that the integrals $\int f(x)\,d\mu_{n}(x)$ converge for any continuous $f$. The limit is a positive linear functional. Thus the probability measures $\mu_{n}$ converge weakly to a
measure $\mu $ with
\begin{equation*}
\int f(x)\,d\mu_{n}(x)~\to~\int f(x)\,d\mu(x)
\end{equation*}
which implies part \ref{a}.
\end{proof}
\section{Proof of de Finetti's Theorem}
The following theorem is a substitute for a (very weak) law of large numbers.
\begin{theorem}\label{Fmeas}
Let $X_{i}$ be an infinite sequence of $\{0,1\}$-valued exchangeable random variables. Then
$S_{N}:=\frac{1}{N}\sum_{i=1}^{N}\,X_{i}$ converges in distribution to a probability measure $\mu $.\\
$\mu$ is concentrated on $[0,1]$ and its moments are given by
\begin{equation}
m_{k}(\mu)~=~\mathbb{E}\Big(X_{1}\cdot X_{2}\cdot\ldots\cdot X_{k}\Big)
\end{equation}
where $\mathbb{E}$ denotes expectation with respect to $\mathbb{P}$.
\end{theorem}
\begin{definition}\label{defFmeas}
We call the measure $\mu$ associated with $X_{i}$ according to Theorem \ref{Fmeas} the \emph{de Finetti measure} of $X_{i}$.
\end{definition}
\begin{proof}{(Theorem \ref{Fmeas})}
To express the moments of $S_{N} $ we compute
\begin{align}\label{binomi}
\Big(\sum_{i=1}^{N}X_{i}\Big)^{k}~&=~\sum_{(i_{1},\ldots,i_{k})\in\{1,\ldots,N\}^{k}} X_{i_{1}}\cdot X_{i_{2}}\cdot\ldots\cdot X_{i_{k}}
\end{align}
To simplify the evaluation of the above sum we introduce the number of different indices in $(i_{1},\ldots,i_{k})$ as
\begin{align}\label{eq:rho}
\rho(i_{1},i_{2},\ldots,i_{k})~=~\#\{i_{1},i_{2},\ldots,i_{k}\}
\end{align}
Consequently
\begin{align}
\eqref{binomi}&=~\sum_{r=1}^{k}\,\sum_{\overset{(i_{1},\ldots,i_{k})\in\{1,\ldots,N\}^{k}}{\rho(i_{1},\ldots.i_{k})=r}} X_{i_{1}}\cdot X_{i_{2}}\cdot\ldots\cdot X_{i_{k}}
\end{align}
Thus we may write
\begin{align}
\mathbb{E}\Big(\big(\oo{N}\sum_{i=1}^{N}X_{i}\big)^{k}\Big)~=~~&\oo{N^{k}}\,\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=k}}^{N} \mathbb{E}\Big(X_{i_{1}}\cdot X_{i_{2}}\cdot\ldots\cdot X_{i_{k}}\Big)\notag\\
+\,&\oo{N^{k}}\,\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})<k}}^{N} \mathbb{E}\Big(X_{i_{1}}\cdot X_{i_{2}}\cdot\ldots\cdot X_{i_{k}}\Big)\label{SN}
\end{align}
There are at most $(k-1)^{k}\,N^{k-1}$ index tuples $(i_{1},\ldots,i_{k})$ with $\rho(i_{1},\ldots,i_{k})<k$. Indeed, we have $N^{k-1}$ possibilities to choose the possible indices (`candidates') for $(i_{1},\ldots,i_{k})$. Then for each of the $k$ positions in the $k$-tuple we may choose one of the $k-1$ candidates, which gives $(k-1)^{k}$ possibilities. This also covers tuples with fewer than $k-1$ different indices, as
some of the candidates may finally not appear in the tuple. It follows that the second term in \eqref{SN} goes
to zero. So
\begin{align}
\mathbb{E}\Big(\big(\oo{N}\sum_{i=1}^{N}X_{i}\big)^{k}\Big)~&\approx~\oo{N^{k}}\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=k}}^{N} \mathbb{E}\Big(X_{i_{1}}\cdot X_{i_{2}}\cdot\ldots\cdot X_{i_{k}}\Big)\,,\notag\\
\intertext{so using exchangeability:}
&=~\oo{N^{k}}\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=k}}^{N}\,\mathbb{E}\Big(X_{1}\cdot X_{2}\cdot\ldots\cdot X_{k}\Big)\notag\\
&\approx~\mathbb{E}\Big(X_{1}\cdot X_{2}\cdot\ldots\cdot X_{k}\Big)
\end{align}
An application of Proposition \ref{bdsupp} gives the desired result.
\end{proof}
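For the uniform mixture $\mu$ of the earlier example (an illustrative choice, not part of the theorem), the limit moments are $m_k(\mu)=\int_0^1 p^k\,dp=\frac{1}{k+1}$, and the moments of the normalized sums $S_N$ computed above should approach them. A quick Monte Carlo sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_moment(N, k, trials):
    """Monte Carlo estimate of E( S_N^k ) with S_N = (1/N) * (X_1 + ... + X_N)
    for the uniform mixture: p ~ Uniform[0,1], sum | p ~ Binomial(N, p)."""
    p = rng.uniform(size=trials)
    s = rng.binomial(N, p) / N
    return float(np.mean(s ** k))
```

The estimates converge to $\frac12,\frac13,\frac14,\ldots$, the moments of the uniform distribution, matching $m_k(\mu)=\mathbb E(X_1\cdots X_k)=\int_0^1 p^k\,dp$.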
We note a Corollary to the Theorem \ref{Fmeas} or better to its proof.
\begin{corollary}\label{cor}
If $\{X_{i}\} $ is an exchangeable sequence of $\{0,1\} $-valued random variables and
\begin{align}\notag
r=\rho(i_{1},i_{2},\ldots,i_{k})~=~\#\{i_{1},i_{2},\ldots,i_{k}\}
\end{align}
then
\begin{align}
\mathbb{E}\Big(X_{i_{1}}\cdot X_{i_{2}}\cdot\ldots\cdot X_{i_{k}}\Big)~=~\mathbb{E}\Big(X_{1}\cdot X_{2}\cdot\ldots\cdot X_{r}\Big)
\end{align}
\end{corollary}
\begin{proof}
Since $X_{i}\in\{0,1\}$ we have ${X_{i}}^{\ell}~=X_{i}$ for all $\ell\in\mathbb{N}, \ell\geq 1$. Hence, the product on the left hand side is actually a product of $r$ different
$X_{j}$'s, whose expectation equals the right hand side due to exchangeability.
\end{proof}
For the proof of Theorem \ref{deFin} we will use the following simple lemma.
\begin{lemma}\label{sum}
Suppose $\{X_{i}\}_{i\in\mathbb{N}}$ is a $\{0,1\}$-valued exchangeable sequence. Then for pairwise distinct $i_{1},i_{2},\ldots,i_{k}\in\mathbb{N}$ and $x_{1},\ldots,x_{k}\in\{0,1\}$
with $\sum x_{i}=m$
\begin{align}
\mathbb{P}\Big(X_{i_{1}}=x_{1},\ldots,X_{i_{k}}=x_{k}\Big)~=~\oo{\binom{k}{m}}\;\mathbb{P}\Big(\sum_{i=1}^{k}X_{i}=m\Big)
\end{align}
\end{lemma}
\begin{proof}
There are $\binom{k}{m}$ tuples $(x_{1},\ldots,x_{k})\in\{0,1\}^{k}$ with $\sum x_{i}=m$, and by exchangeability the corresponding events all have the same probability; summing over them yields $\mathbb{P}\big(\sum_{i=1}^{k}X_{i}=m\big)$.
\end{proof}
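Both sides of the identity in Lemma \ref{sum} can be estimated by simulation for the uniform mixture (again an illustrative choice of exchangeable law). A sketch:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)

def check_lemma(k, x, trials):
    """Compare P(X_1=x_1,...,X_k=x_k) with P(sum X_i = m)/C(k,m) for the
    exchangeable sequence obtained by mixing Bernoulli(p) over p ~ U[0,1]."""
    m = sum(x)
    p = rng.uniform(size=trials)
    X = (rng.uniform(size=(trials, k)) < p[:, None]).astype(int)
    lhs = np.mean(np.all(X == np.asarray(x), axis=1))
    rhs = np.mean(X.sum(axis=1) == m) / comb(k, m)
    return float(lhs), float(rhs)
```

For $k=3$ and $x=(1,0,1)$ both sides equal $\int_0^1 p^2(1-p)\,dp=\frac{1}{12}$.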
We now prove Theorem \ref{deFin}.
\begin{proof}{(Theorem \ref{deFin})}
Let $\mu $ be the de Finetti measure of $X_{i}$ (see Definition \ref{defFmeas}) and define a $\{0,1\}$-valued process $\{Y_{i}\}_{i}$ by
\begin{align}\label{defY}
\mathbb{P}\Big( Y_{1}=y_{1},\ldots, Y_{k}=y_{k}\Big)~=~\int \prod_{i=1}^{k}\pi_{p}(y_{i})\,d\mu(p)
\end{align}
The process $\{Y_{i}\}$ is clearly exchangeable.
We will show that $\{X_{i}\}$ and $\{Y_{i}\}$ have the same finite-dimensional distributions.
According to Lemma \ref{sum} it suffices to show that $S_{N}=\sum_{i=1}^{N}X_{i}$ and $T_{N}=\sum_{i=1}^{N}Y_{i}$ have the same distribution for all $N$, and for
this it is enough, by Proposition \ref{bdsupp}, to prove that their moments agree.
\begin{align}
\mathbb{E}\Big({S_{N}}^{k}\Big)~&=~\sum_{r=1}^{k}\,\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=r}}^{N}\,\mathbb{E}\big(X_{i_{1}}\cdot\ldots\cdot X_{i_{k}}\big)\notag\\
&=~\sum_{r=1}^{k}\,\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=r}}^{N}\,\mathbb{E}\big(X_{1}\cdot\ldots\cdot X_{r}\big) \tag{\text{by Corollary \ref{cor}}}\\
&=~\sum_{r=1}^{k}\,\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=r}}^{N}\,\int\,\prod_{i=1}^{r}\pi_{p}(1)\,d\mu(p)
\tag{\text{by Theorem \ref{Fmeas}}}\\
&=~\sum_{r=1}^{k}\,\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=r}}^{N}\,\mathbb{E}\big(Y_{1}\cdot\ldots\cdot Y_{r}\big)\tag{by \eqref{defY}}\\
&=~\sum_{r=1}^{k}\,\sum_{\overset{i_{1},\ldots,i_{k}=1}{\rho(i_{1},\ldots,i_{k})=r}}^{N}\,\mathbb{E}\big(Y_{i_{1}}\cdot\ldots\cdot Y_{i_{k}}\big)\tag{Corollary \ref{cor}}\\
~&=~ \mathbb{E}\Big({T_{N}}^{k}\Big)\,.
\end{align}
Hence the moments of $S_{N}$ and $T_{N}$ agree, which completes the proof.
\end{proof}
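A classical worked example of the representation (not taken from the paper) is the Pólya urn: starting from one ball of each colour and reinforcing by one ball per draw, the draws form an exchangeable $\{0,1\}$-sequence whose de Finetti measure is the uniform measure on $[0,1]$. The sketch below checks exactly that every length-$k$ outcome with $m$ ones has probability $\int_0^1 p^m(1-p)^{k-m}\,dp=\frac{m!\,(k-m)!}{(k+1)!}$.

```python
from fractions import Fraction
from itertools import product
from math import factorial

def polya_prob(seq):
    # Pólya urn: start with 1 "one"-ball and 1 "zero"-ball; after each draw,
    # return the ball together with one extra ball of the same colour.
    ones, zeros, p = 1, 1, Fraction(1)
    for x in seq:
        p *= Fraction(ones if x == 1 else zeros, ones + zeros)
        if x == 1:
            ones += 1
        else:
            zeros += 1
    return p

k = 4
for seq in product((0, 1), repeat=k):
    m = sum(seq)
    # de Finetti representation with uniform mixing measure:
    # \int_0^1 p^m (1-p)^{k-m} dp = m! (k-m)! / (k+1)!
    beta = Fraction(factorial(m) * factorial(k - m), factorial(k + 1))
    assert polya_prob(seq) == beta
```

In particular the probability depends on the sequence only through $m$, which exhibits the exchangeability directly.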
\section*{Poncelet's theorem for conics in any position and any characteristic}
\noindent\emph{Abstract} (arXiv:2112.11770). Poncelet's theorem states that if there exists an $n$-sided polygon which is inscribed in a given conic $C$ and circumscribed about another conic $D$, then there are infinitely many such $n$-gons. Proofs of this theorem that we are aware of, including Poncelet's original proof and the celebrated modern proof by Griffiths and Harris, assume the two conics to be in general position (that is, not tangent or at least not osculating), or to be defined over the field of complex numbers, or both. Here we show that Poncelet's theorem holds for any two conics $C$ and $D$ in the projective plane over an algebraically closed field $k$ of any characteristic other than two. If $C$ and $D$ are osculating and $\Char k>2$, our result shows that there always exist infinitely many polygons of $\Char k$ sides that are inscribed in $C$ and circumscribed about $D$. We also describe the situation in characteristic two in the appendix.
\subsection*{Notation} Let $\PP^2$ be the projective plane over an algebraically closed field~$k$. Let $p,q\in \PP^2$ be two distinct points, and let $C\subseteq \PP^2$ be a conic.
\begin{itemize}
\item We write $L(p,q)$ for the line through $p$ and $q$.
\item If $p\in C$, we write $T_p\,C$ for the line tangent to $C$ at $p$.
\item If $\Char k\ne 2$, we write $P_q\,C$ for the polar line \cite[4.1]{fischer} of $C$ with respect to the pole $q$.
\end{itemize}
It is well known that if $q\in C$ then $P_q\,C=T_q\,C$, while if $q\notin C$ then $P_q\,C$ is the line through the two points of tangency of the tangents to $C$ through $q$.
\section{Introduction}
In 1813, Poncelet \cite{poncelet} discovered the following remarkable result, which is now known as Poncelet's theorem.
\begin{center}
\parbox{.95\linewidth}{\emph{Let $C$ and $D$ be two (general) conics in the real projective plane. If there exists an $n$-sided polygon which is inscribed in $C$ and circumscribed about $D$, then there are infinitely many such polygons, and every point of $C$ is a vertex of one of them.}}
\end{center}
Poncelet first proved the result for two circles. Then he deduced the general case by projectively transforming $C$ and $D$ into a pair of circles, which cannot be done if $C$ and $D$ are osculating (that is, intersecting at a point with intersection multiplicity greater than two). Thus, strictly speaking, Poncelet's argument does not apply to osculating conics. We refer the reader to the excellent survey papers \cite{C1,C2} for a modern account of Poncelet's argument, as well as later developments surrounding this theorem. Here we just want to mention the most recent approach, introduced by Griffiths and Harris in \cite{GH}, where they proved Poncelet's theorem for two nowhere tangent conics $C$ and $D$ in the complex projective plane, by considering the projective curve \[
E=\{(c,d)\in C \times D \mid c\in T_d\,D \}. \]
The assumption that $C$ and $D$ are nowhere tangent is used to show that $E$ is an elliptic curve, which is crucial in their argument.
The purpose of this paper is to prove Poncelet's theorem for \emph{any} two conics $C$ and $D$ in the projective plane $\PP^2$ over an algebraically closed field $k$ of \emph{any} characteristic other than two. Given a point $c_1\in C$, we may construct a sequence of points $(c_i,d_i)\in C\times D$ for $i=1,2,\ldots$ recursively as follows: pick $d_1\in D$ such that $L(c_1,d_1)$ is tangent to $D$, and for $i\ge 2$ let $c_i\in C$ be the point such that $L(c_{i-1},d_{i-1})\cap C=\{c_{i-1},c_i\}$, and let $d_i\in D$ be the point such that $P_{c_i}D\cap D=\{d_{i-1},d_i\}$. We refer to this construction as a \emph{Poncelet process} with initial point $c_1$, and if $(c_1,d_1)=(c_{n+1},d_{n+1})$ then we say that the process \emph{stops after $n$ steps}. Figure~\ref{subfig:typical Poncelet} shows a typical Poncelet process, while Figure~\ref{subfig:Poncelet stop} shows a Poncelet process that stops after three steps.
\begin{figure}[h]
\centering
\subcaptionbox{A typical Poncelet process\label{subfig:typical Poncelet}}
{ \begin{tikzpicture}[scale=.7,line cap=round,line join=round,x=0.4cm,y=0.4cm]
\clip(-12.,-12.) rectangle (14.,11.);
\draw [rotate around={0.:(0.,0.)},line width=0.4pt] (0.,0.) ellipse (2.2627416997969525cm and 1.6cm);
\draw [rotate around={8.130102354155973:(1.,-1.)},line width=0.4pt] (1.,-1.) ellipse (4.595122457735075cm and 3.6214845576891825cm);
\draw [line width=0.4pt] (0.,8.)-- (-9.960764084704117,-4.1993947278829);
\draw [line width=0.4pt] (-9.960764084704117,-4.1993947278829)-- (11.64372251035786,-3.770336913206851);
\draw [line width=0.4pt] (11.64372251035786,-3.770336913206851)-- (-1.5929698944334316,7.713271495294102);
\begin{scriptsize}
\draw (-0.6085375110409483,2.9960827095209117) node {$D$};
\draw [fill=black] (0.,8.) circle (2.5pt);
\draw (0,9) node {$c_1$};
\draw (-7.2,3.8) node {$C$};
\draw [fill=black] (-9.960764084704117,-4.1993947278829) circle (2.0pt);
\draw (-10.7,-4.7) node {$c_2$};
\draw [fill=black] (11.64372251035786,-3.770336913206851) circle (2.0pt);
\draw (12.2,-4.5) node {$c_3$};
\draw [fill=black] (-4.898979485566357,2.) circle (2.0pt);
\draw (-3.7,1.8) node {$d_1$};
\draw [fill=black] (0.15881467574809882,-3.998423307928204) circle (2.0pt);
\draw (0.2,-3) node {$d_2$};
\draw [fill=black] (4.384878809325144,2.5271365047748526) circle (2.0pt);
\draw (3.8029909198780083,1.5) node {$d_3$};
\draw [fill=black] (-1.5929698944334316,7.713271495294102) circle (2.0pt);
\draw (-2,8.5) node {$c_4$};
\end{scriptsize}
\end{tikzpicture} }
\subcaptionbox{A Poncelet process that stops after three steps\label{subfig:Poncelet stop}}
{ \begin{tikzpicture}[scale=.7,line cap=round,line join=round,x=1.0cm,y=1.0cm]
\clip(-4.5,-4.5) rectangle (4.5,4.5);
\draw [line width=0.4pt] (0.,0.) circle (4.cm);
\draw [line width=0.4pt] (-2.,0.) circle (1.5cm);
\draw [line width=0.4pt] (2.0780924877172398,3.417825567885695)-- (-3.91948610191114,0.7985165602073765);
\draw [line width=0.4pt] (-3.91948610191114,0.7985165602073765)-- (-2.3522410968808103,-3.235268431234871);
\draw [line width=0.4pt] (-2.3522410968808103,-3.235268431234871)-- (2.0780924877172398,3.417825567885695);
\begin{scriptsize}
\draw (-2.5180164604723565,3.6) node {$C$};
\draw [fill=black] (2.0780924877172398,3.417825567885695) circle (2.5pt);
\draw (2.5,3.8) node {$c_1=c_4$};
\draw (-2,-1.1) node {$D$};
\draw [fill=black] (-0.7514862664242724,-0.8313924807651301) circle (2.0pt);
\draw (-1.1,-0.5) node {$d_3$};
\draw [fill=black] (-2.6003371158438604,1.3746255298590504) circle (2.0pt);
\draw (-2.2,1) node {$d_1=d_4$};
\draw [fill=black] (-3.91948610191114,0.7985165602073765) circle (2.0pt);
\draw (-4.2,1.1) node {$c_2$};
\draw [fill=black] (-3.3981766177318664,-0.5432330490939209) circle (2.0pt);
\draw (-3,-0.38367095724361827) node {$d_2$};
\draw [fill=black] (-2.3522410968808103,-3.235268431234871) circle (2.0pt);
\draw (-2.5773319416814426,-3.5) node {$c_3$};
\end{scriptsize}
\end{tikzpicture} }
\caption{Poncelet process}\label{fig:Poncelet}
\end{figure}
Using this terminology, our main result is the following
\begin{main}[\,$=$\,Theorem~\ref{thmm}]
Let $C$ and $D$ be two smooth conics in $\PP^2$ over an algebraically closed field $k$ with $\Char k\ne 2$. Let $T\subseteq C\cap D$ be the set of points where $C$ and $D$ are tangent. If there exist a point $c_1\in C\setminus T$ and a positive integer $n$ such that the Poncelet process with initial point $c_1$ stops after $n$ steps, then the Poncelet process with any initial point in $C$ stops after $n$ steps.
Moreover, when $C$ and $D$ are osculating,
\begin{itemize}
\item if $\Char k =0$, then the Poncelet process stops for no initial points in $C\setminus T$;
\item if $\Char k >0$, then the Poncelet process stops for all initial points in $C$ after $\Char k$ steps.
\end{itemize}
\end{main}
Note that a Poncelet process that starts at a tangent point of $C$ and $D$ stops after one step, which is certainly not the case for all other starting points. Thus it is necessary to exclude the tangent points of $C$ and $D$ as initial points in the assumption. The exclusion of characteristic two is because, if $\Char k=2$, then there exists a point $p\in \PP^2$ such that \emph{every} line through $p$ is tangent to $D$ (see Corollary \ref{cor: tangent of conic in char 2 }), so the Poncelet process cannot be defined if one of the points $c_i$ is equal to $p$.
Our approach is an expansion of that of Griffiths and Harris in \cite{GH}. We study how the geometric structure of the curve $E=\{(c,d)\in C \times D \mid c\in T_d\,D \}$ depends on the intersection multiplicities of $C$ and $D$ at their points of intersection. Over $\CC$ this has been done in \cite{leopold}, where complex analytic methods were used. Our approach, however, is purely algebraic, and applies in positive characteristic. As far as we know, the last part of our result, which says that the Poncelet process always stops for osculating conics in positive characteristic, seems to be new.
\subsection*{Acknowledgment} The authors gratefully acknowledge the support of MoST (Ministry of Science and Technology) in Taiwan.
\section{The algebraic curve underlying the Poncelet process}
In this section, let $C$ and $D$ be two smooth conics in $\PP^2$ over an algebraically closed field $k$ with $\Char k\ne 2$, and let \[
E=\{(c,d)\in C \times D \mid c\in T_d\,D \}. \]
Griffiths and Harris \cite{GH} showed that $E$ is an elliptic curve if $C$ and $D$ are conics in general position over $k=\CC$, and used it to prove Poncelet's theorem under these assumptions. In this section, we will study the properties of $E$ without assuming that $C$ and $D$ are in general position or defined over $\CC$, so that we can prove Poncelet's theorem without these assumptions later.
Our first proposition shows that $E$ is a (possibly reducible) projective curve whose singular points correspond to tangent points of $C$ and $D$.
\begin{proposition}\label{prop: E singular cond}
Let $C$ and $D$ be two smooth conics in $\PP^2$ over an algebraically closed field $k$ with $\Char k\ne 2$, and let \[
E=\{(c,d)\in C \times D \mid c\in T_d\,D \}. \]
Then
\begin{enumerate}
\item $E$ is a (possibly reducible) projective curve.
\item Let $c\in C$ and $d\in D$ be points such that $p=(c,d)\in E$. Then $E$ is singular at $p$ if and only if $c$ and $d$ coincide and the conics $C$ and $D$ are tangent to each other at this common point.
\end{enumerate}
\end{proposition}
\begin{proof}
Let
\begin{align*}
&C\colon F(x,y,z)=\begin{pmatrix}x & y & z\end{pmatrix} A \begin{pmatrix}x\\y\\z\end{pmatrix},\\
&D\colon G(x,y,z)=\begin{pmatrix}x & y & z\end{pmatrix} B \begin{pmatrix}x\\y\\z\end{pmatrix}
\end{align*}
be the quadratic forms that define the conics $C$ and $D$, respectively, where $A$ and $B$ are $3\times 3$ nondegenerate symmetric matrices. Then $E$ is the set of points \[
([x:y:z],[x':y':z'])\in \PP^2\times\PP^2 \]
defined by the following three equations:
\begin{align*}
&F(x,y,z)=\begin{pmatrix}x & y & z\end{pmatrix} A \begin{pmatrix}x\\y\\z\end{pmatrix}=0, \\
&G(x',y',z')=\begin{pmatrix}x' & y' & z'\end{pmatrix} B \begin{pmatrix}x'\\y'\\z'\end{pmatrix}=0, \\
&H(x,y,z,x',y',z')=\begin{pmatrix}x & y & z\end{pmatrix} B \begin{pmatrix}x'\\y'\\z'\end{pmatrix}=0.
\end{align*}
If $c=[c_1:c_2:1]$ and $d=[d_1:d_2:1]$ are points such that $p=(c,d)\in E$, then the Jacobian matrix of the polynomials
\begin{align*}
&f(x,y)=F(x,y,1)=\begin{pmatrix}x & y & 1\end{pmatrix} A \begin{pmatrix}x\\y\\1\end{pmatrix},\\
&g(x',y')=G(x',y',1)=\begin{pmatrix}x' & y' & 1\end{pmatrix} B \begin{pmatrix}x'\\y'\\1\end{pmatrix},\\
&h(x,y,x',y')=H(x,y,1,x',y',1)=\begin{pmatrix}x & y & 1\end{pmatrix} B \begin{pmatrix}x'\\y'\\1\end{pmatrix}
\end{align*}
at $p$ is \[
J=\begin{pmatrix}
\dfrac{\partial f}{\partial x}(p) & \dfrac{\partial f}{\partial y}(p) & \dfrac{\partial f}{\partial x'}(p) & \dfrac{\partial f}{\partial y'}(p)\\
\dfrac{\partial g}{\partial x}(p) & \dfrac{\partial g}{\partial y}(p) & \dfrac{\partial g}{\partial x'}(p) & \dfrac{\partial g}{\partial y'}(p)\\
\dfrac{\partial h}{\partial x}(p) & \dfrac{\partial h}{\partial y}(p) & \dfrac{\partial h}{\partial x'}(p) & \dfrac{\partial h}{\partial y'}(p)
\end{pmatrix}
=\begin{pmatrix}
\dfrac{\partial f}{\partial x}(p) & \dfrac{\partial f}{\partial y}(p) & 0 & 0\\
0 & 0 & \dfrac{\partial g}{\partial x'}(p) & \dfrac{\partial g}{\partial y'}(p) \\
\dfrac{\partial h}{\partial x}(p) & \dfrac{\partial h}{\partial y}(p) & \dfrac{\partial h}{\partial x'}(p) & \dfrac{\partial h}{\partial y'}(p)
\end{pmatrix}. \]
It suffices to show that $\rank(J)=3$ for general points $p$, and to find the points $p$ where $\rank(J)=2$ (these are the singular points of $E$). For any line $L\colon ax+by=e$ in $\A^2$, let $N(L)=[a:b]$ denote its ``normal vector''. Then we have
\begin{align*}
& N(T_c\,C)=\begin{bmatrix}
\dfrac{\partial f}{\partial x}(p):\dfrac{\partial f}{\partial y}(p)
\end{bmatrix},\\
&N(P_d\,D)=\begin{bmatrix}
\dfrac{\partial h}{\partial x}(p):\dfrac{\partial h}{\partial y}(p)
\end{bmatrix}=
\begin{bmatrix}
\dfrac{\partial g}{\partial x'}(p):\dfrac{\partial g}{\partial y'}(p)
\end{bmatrix}=N(T_d\,D),\\
& N(P_c\,D)=\begin{bmatrix}
\dfrac{\partial h}{\partial x'}(p):\dfrac{\partial h}{\partial y'}(p)
\end{bmatrix}.
\end{align*}
Since $C$ and $D$ are smooth, $N(T_c\,C)$ and $N(T_d\,D)$ are nonzero, so $\rank(J)\ge 2$. Moreover,
\begin{align*}
\rank(J)=2 &\iff N(T_c\,C)=N(P_d\,D)=N(T_d\,D)=N(P_c\,D)\\
&\iff T_c\,C=P_d\,D=T_d\,D=P_c\,D,
\end{align*}
where the second $\iff$ is because $T_c\,C$ and $T_d\,D$ have a common point $c$, $T_d\,D$ and $P_c\,D$ have a common point $d$, and $P_d\,D=T_d\,D$ since $d\in D$. Since $P_c\,D=P_d\,D$ if and only if $c=d$, it follows that $\rank(J)=2$ if and only if $c=d$ and $T_c\,C=T_d\,D$, that is, $C$ and $D$ are tangent at the point $c=d$.
\end{proof}
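For a concrete illustration of the rank computation, one can run it symbolically on an ad-hoc pair of tangent conics (the choice $t=a=0$, $b=2$ below is an assumption made only for the example):

```python
import sympy as sp

x, y, xp, yp = sp.symbols('x y xp yp')    # affine coordinates on C- and D-side
t, a, b = 0, 0, 2                          # a concrete pair of tangent conics
f = x**2 + t*x*y + a*y**2 - b*y            # C: F(x,y,1) = 0
g = xp**2 - yp                             # D: G(x',y',1) = 0
h = x*xp - sp.Rational(1, 2)*(y + yp)      # (x y 1) B (x' y' 1)^T for D
J = sp.Matrix([[e.diff(w) for w in (x, y, xp, yp)] for e in (f, g, h)])

# at the tangency point c = d = (0,0) the rank drops to 2 ...
assert J.subs({x: 0, y: 0, xp: 0, yp: 0}).rank() == 2

# ... while at another point of E, e.g. c = (2,2) on C and d = (v, v^2) with
# v = 2 + sqrt(2) (chosen so that c lies on T_d D), the rank is 3
v = 2 + sp.sqrt(2)
pt = {x: 2, y: 2, xp: v, yp: v**2}
assert all(sp.simplify(e.subs(pt)) == 0 for e in (f, g, h))
assert J.subs(pt).rank(simplify=True) == 3
```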
As observed in \cite{GH}, the Poncelet process can be viewed as performing the following two involutions $\sigma$ and $\tau$ on $E$ consecutively:
\begin{definition}\label{def: auto on E}
For each point $(c,d)\in E\subseteq C\times D$, let $D\cap P_c\,D=\{d,d'\}$ and $C\cap T_d\,D=\{c,c'\}$, and let $\sigma$ and $\tau$ be the involutions on $E$ defined by \[
\sigma(c,d)=(c,d')\text{ and }\tau(c,d)=(c',d).\]
\end{definition}
To study the Poncelet process for two tangent conics (in this case $E$ is singular), we will need the following result on the fixed points of $\sigma\circ\tau$.
\begin{proposition}\label{prop: singular condition for fixed point}
A point $p\in E$ is a fixed point of the automorphism $\nu=\sigma\circ\tau$ on $E$ if and only if $E$ is singular at $p$.
\end{proposition}
\begin{proof}
Let us write $p=(c_1,d_1)$ and $\nu(p)=(c_2,d_2)$, where $c_1,c_2\in C$ and $d_1,d_2\in D$. It follows from the definition of $\nu$ that $D\cap P_{c_2}D=\{d_1,d_2\}$ and $C\cap T_{d_1}D=\{c_1,c_2\}$. So
\begin{align*}
c_1=c_2 &\iff T_{d_1}D \text{ is tangent to $C$ (at $c_1=c_2$)},\\
d_1=d_2 &\iff c_2\in D.
\end{align*}
By Proposition~\ref{prop: E singular cond}, it is enough to show that
\[
\nu(p)=p\iff c_1=d_1,\text{ and }C\text{ and }D\text{ are tangent at }c_1=d_1.
\]
If $\nu(p)=p$, then $c_2\in T_{d_1}D$ and $c_2\in D$, so $c_2=d_1$ (since $d_1$ is the only intersection of $T_{d_1}D$ and $D$). Hence $c_1=c_2=d_1=d_2\in C\cap D$ and $T_{c_1}C=T_{d_1}D$, that is, $C$ and $D$ are tangent at the point $c_1=d_1$. The converse is obvious.
\end{proof}
We also need to understand the geometric structure of $E$, which turns out to depend on how $C$ and $D$ intersect each other.
\begin{definition}
Let $C,D\subseteq \PP^2$ be projective plane curves. Let $p_1, \ldots, p_n$ be all the distinct intersection points of $C$ and $D$. Let $m_i=I_{p_i}(C,D)$ be the intersection multiplicity of $C$ and $D$ at $p_i$. We may assume that $m_1\ge \cdots \ge m_n$ after suitably renumbering $p_1,\ldots, p_n$. Then we say that the \emph{intersection type} of $C$ and $D$ is $(m_1,\ldots,m_n)$.
\end{definition}
By B\'ezout's theorem, $m_1+\cdots + m_n=(\deg C)(\deg D)$. So if $C$ and $D$ are conics, then there are five possible intersection types: $(1,1,1,1)$, $(2,1,1)$, $(2,2)$, $(3,1)$, and $(4)$. It was shown in \cite{GH} that if $C$ and $D$ are nowhere tangent conics, that is, of intersection type $(1,1,1,1)$, then $E$ is an elliptic curve. We include a proof here for the sake of completeness.
\begin{proposition}\label{cor: general position is elliptic curve}
If $C$ and $D$ are nowhere tangent conics, then the curve $E$ defined in Proposition~\ref{prop: E singular cond} is an elliptic curve.
\end{proposition}
\begin{proof}
Since $C$ and $D$ are not tangent, $E$ is a smooth projective (possibly reducible) curve by Proposition~\ref{prop: E singular cond}. On the other hand, $E\subseteq C\times D\cong \PP^1\times \PP^1$, and the two projection maps $E\to C$ and $E\to D$ are both finite of degree two. Hence $E$ is a smooth curve of bidegree $(2,2)$ in $\PP^1\times \PP^1$. If $E$ is reducible, say $E=E_1 \cup E_2$ with $E_i$ of bidegree $(a_i,b_i)$, then $E_1\cap E_2\ne \emptyset$ since $E_1\cdot E_2=a_1b_2+a_2b_1 > 0$, contradicting the smoothness of $E$. Hence $E$ is irreducible, and it follows from the adjunction formula that $E$ is of genus one.
\end{proof}
If $C$ and $D$ are tangent, we will find the defining equation of $E$ in $\PP^1\times \PP^1$ to understand its geometric structure. For this, we modify the approach in \cite{leopold}, replacing any complex analytic argument by an algebraic one. The first step is to write the equations of $C$ and $D$ in a suitably chosen homogeneous coordinate system for $\PP^2$. We will use the following lemma, whose proof is straightforward and thus omitted.
\begin{lemma}\label{lem: reduce C}
Let $C$ be a smooth conic in $\PP^2$ over $k$ with $\Char k\ne 2$. If the point $p=[0:0:1]\in C$, and the tangent of $C$ at $p$ is given by $y=0$, then the defining equation of $C$ is of the form \[
C\colon x^2+txy+ay^2-byz=0 \]
for some $t,a,b\in k$ with $b\ne 0$.
\end{lemma}
\begin{proposition}\label{prop: reduce C D and intersection type}
Let $C$ and $D$ be two smooth conics in $\PP^2$ over $k$ with $\Char k\ne 2$ that are tangent to each other at a point $p$. Then there exists a suitable homogeneous coordinate system for $\PP^2$ such that $p=[0:0:1]$, and the defining equations of $C$ and $D$ are of the form
\begin{align*}
&C\colon F(x,y,z)=x^2+txy+ay^2-byz=0,\\
&D\colon G(x,y,z)=x^2-yz=0
\end{align*}
for some $t,a,b\in k$ with $b\ne 0$. Moreover, if we write $\Delta=t^2-4a(1-b)$, then the intersection type of $C$ and $D$ is
\begin{itemize}
\item $(2,1,1) \iff b\ne 1$ and $\Delta\ne 0$;
\item $(2,2) \iff b\ne 1$ and $\Delta=0$;
\item $(3,1) \iff b=1$ and $t\ne 0$;
\item $(4) \iff b=1$, $t=0$, and $a\ne 0$.
\end{itemize}
\end{proposition}
\begin{proof}
Let $L$ be the common tangent of $C$ and $D$ at $p$. Choose a homogeneous coordinate system $[x':y':z']$ for $\PP^2$ such that $p=[0:0:1]$ and $L\colon y'=0$. By Lemma~\ref{lem: reduce C}, the defining equations of $C$ and $D$ are of the form
\begin{align*}
&C\colon F(x',y',z')=x'^2+t_1x'y'+a_1y'^2-b_1y'z'=0,\\
&D\colon G(x',y',z')=x'^2+t_2x'y'+a_2y'^2-b_2y'z'=0
\end{align*}
for some $t_i,a_i,b_i\in k$ with $b_i\ne 0$. Then in the homogeneous coordinate system \[
[x:y:z]=[x'+\frac{t_2}{2}y' : y' : (-a_2+\frac{t_2^2}{4})y'+b_2z'], \]
the defining equation of $D$ becomes \[
D\colon G(x,y,z)=x^2-yz=0. \]
Note that in the coordinate system $[x:y:z]$, we still have $p=[0:0:1]$ and $L\colon y=0$, so by Lemma~\ref{lem: reduce C}, the defining equation of $C$ is of the form \[
C\colon F(x,y,z)=x^2+txy+ay^2-byz=0 \]
for some $t,a,b\in k$ with $b\ne 0$.
Let
\begin{align*}
f(x,y)&=F(x,y,1)=x^2+txy+ay^2-by,\\
g(x,y)&=G(x,y,1)=x^2-y.
\end{align*}
Since $D$ is smooth and $x$ is a parameter for $D$, \[
I_p(C,D)=I_{(0,0)}(f,g)=\ord_{(0,0)}(f|_D)=\ord_{x=0}((1-b)x^2+tx^3+ax^4). \]
So the intersection type of $C$ and $D$ is
\begin{itemize}
\item $(2,1,1)$ or $(2,2)\iff I_p(C,D)=2 \iff b\ne 1$;
\item $(3,1)\iff I_p(C,D)=3 \iff b=1$ and $t\ne 0$;
\item $(4)\iff I_p(C,D)=4 \iff b=1$, $t=0$, and $a\ne 0$.
\end{itemize}
If $I_p(C,D)=2$, the intersection points of $C$ and $D$ other than $p$ can be computed by plugging $yz=x^2$ into the equation of $C$, which gives \[
(1-b)x^2+txy+ay^2=0.\]
The discriminant of this quadratic equation is $\Delta=t^2-4a(1-b)$, so the intersection type of $C$ and $D$ is
\begin{itemize}
\item $(2,1,1)\iff b\ne 1$ and $\Delta\ne 0$;
\item $(2,2) \iff b\ne 1$ and $\Delta=0$.
\end{itemize}
\end{proof}
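The restriction of $f$ to $D$ and the discriminant $\Delta$ in this proof are mechanical computations; a small sympy sketch confirming them:

```python
import sympy as sp

x, y, t, a, b = sp.symbols('x y t a b')
f = x**2 + t*x*y + a*y**2 - b*y              # f(x,y) = F(x,y,1)

# restriction of f to D along the parametrization y = x^2:
assert sp.expand(f.subs(y, x**2)) == sp.expand((1 - b)*x**2 + t*x**3 + a*x**4)

# residual intersections: (1-b)x^2 + txy + ay^2 = 0, dehomogenized at x = 1;
# its discriminant in y is Delta = t^2 - 4a(1-b)
q = (1 - b) + t*y + a*y**2
assert sp.expand(sp.discriminant(q, y)) == sp.expand(t**2 - 4*a*(1 - b))
```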
We can now give a complete description of the projective curve $E$ defined in Proposition~\ref{prop: E singular cond} when $C$ and $D$ are tangent.
\begin{proposition}\label{prop: shape of E}
Let $C$ and $D$ be two smooth conics in $\PP^2$ over an algebraically closed field $k$ with $\Char k\ne 2$, and let \[
E=\{(c,d)\in C \times D \mid c\in T_d\,D \}. \]
If $C$ and $D$ are tangent, then $E$ is
\begin{itemize}
\item an irreducible rational curve with a node and smooth elsewhere if the intersection type of $C$ and $D$ is $(2,1,1)$;
\item two smooth rational curves intersecting at two points transversally if the intersection type of $C$ and $D$ is $(2,2)$;
\item an irreducible rational curve with an ordinary cusp and smooth elsewhere if the intersection type of $C$ and $D$ is $(3,1)$;
\item two smooth rational curves intersecting at one point with multiplicity two if the intersection type of $C$ and $D$ is $(4)$.
\end{itemize}
Moreover, in the two cases where $E$ is reducible, the involutions $\sigma$ and $\tau$ in Definition~\ref{def: auto on E} both interchange the two irreducible components of $E$.
\end{proposition}
\begin{proof}
By Proposition~\ref{prop: reduce C D and intersection type}, we can choose a homogeneous coordinate system for $\PP^2$ such that $C$ and $D$ are tangent at $[0:0:1]$, and the defining equations of $C$ and $D$ are of the form
\begin{align*}
&C\colon F(x,y,z)=x^2+txy+ay^2-byz=0,\\
&D\colon G(x,y,z)=x^2-yz=0
\end{align*}
for some $t,a,b\in k$ with $b\ne 0$. Parametrizing $C$ with the parameter $u=y/x$ gives an isomorphism \[
\PP^1\xrightarrow{\ \cong\ } C,\quad u\in k\cup \{\infty\}\longmapsto [bu:bu^2:1+tu+au^2]\in C. \]
Similarly, parametrizing $D$ with the parameter $v=y/x$ gives an isomorphism \[
\PP^1\xrightarrow{\ \cong\ } D,\quad v\in k\cup \{\infty\}\longmapsto [v:v^2:1]\in D. \]
Since the tangent of $D$ at a point $d=[v:v^2:1]\in D$ is given by \[
T_d\,D\colon 2vx-y-v^2z=0, \]
we find that \[
E=\{(c,d)\in C \times D \mid c\in T_d\,D \}
\cong\{(u,v)\in\PP^1\times\PP^1\mid H(u,v)=0\}, \]
where \[
H(u,v)=bu^2-2buv+(1+tu+au^2)v^2. \]
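The parametrizations of $C$ and $D$ and the formula for $H$ can be verified symbolically; a minimal sympy sketch:

```python
import sympy as sp

u, v, t, a, b = sp.symbols('u v t a b')
# the parametrizations of C and D given above
cx, cy, cz = b*u, b*u**2, 1 + t*u + a*u**2   # point of C
dx, dy, dz = v, v**2, 1                       # point of D

assert sp.expand(cx**2 + t*cx*cy + a*cy**2 - b*cy*cz) == 0   # lies on C
assert sp.expand(dx**2 - dy*dz) == 0                          # lies on D

# the incidence condition c in T_d D, i.e. 2v*cx - cy - v^2*cz = 0, is -H(u,v):
H = b*u**2 - 2*b*u*v + (1 + t*u + a*u**2)*v**2
assert sp.expand((2*v*cx - cy - v**2*cz) + H) == 0
```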
By Gauss's lemma, $E$ is reducible if and only if $H$ is reducible as a polynomial in $v$ with coefficients in $k(u)$. Since $H$ is quadratic in $v$ with discriminant \[
(2bu)^2-4bu^2(1+tu+au^2)=-4bu^2(1-b+tu+au^2), \]
we see that
\begin{align*}
E\text{ is reducible} &\iff 1-b+tu+au^2 \text{ is a square in } k[u] \\
&\iff \Delta=t^2-4a(1-b)=0.
\end{align*}
Thus $E$ is reducible if the intersection type of $C$ and $D$ is $(2,2)$ or $(4)$ by Proposition~\ref{prop: reduce C D and intersection type}. Moreover, in this case, the irreducible factorization of the polynomial $H$ is of the form $H(u,v)=H_1(u,v)\cdot H_2(u,v)$, where both $H_1$ and $H_2$ are polynomials of bidegree~$(1,1)$ in $(u,v)$. It follows that the curves $E_1$ and $E_2$ defined respectively by $H_1$ and $H_2$ are both smooth rational curves, and they are the irreducible components of $E$. If the involution $\sigma$ in Definition~\ref{def: auto on E} sends a point $(c,d)\in E$ to $(c,d')$, then $d$ and $d'$ are exactly the solutions to the quadratic equation in $v$ given by \[
H(c,v)=H_1(c,v)\cdot H_2(c,v)=0. \]
So if $(c,d) \in E_1$, then $d$ is the solution to $H_1(c,v)=0$, and hence $d'$ is the solution to $H_2(c,v)=0$, that is, $(c,d') \in E_2$. Thus $\sigma$ interchanges $E_1$ and $E_2$. A similar argument shows that $\tau$ also interchanges $E_1$ and $E_2$.
If the intersection type of $C$ and $D$ is $(2,2)$, then $E_1$ and $E_2$ intersect at two points by Proposition~\ref{prop: E singular cond}, and the intersection is transversal since the intersection number $E_1\cdot E_2=2$. If the intersection type of $C$ and $D$ is $(4)$, then $E_1$ and $E_2$ intersect at only one point by Proposition~\ref{prop: E singular cond}, and the intersection multiplicity at that point is equal to the intersection number $E_1\cdot E_2=2$.
Suppose that the intersection type of $C$ and $D$ is $(2,1,1)$ or $(3,1)$. Then $E$ is an irreducible curve of bidegree~$(2,2)$ in $\PP^1\times \PP^1$, so it is of arithmetic genus one by the adjunction formula. By Proposition~\ref{prop: E singular cond}, $E$ has only one singularity, which occurs at $(u,v)=(0,0)$. One sees from the defining polynomial $H(u,v)$ of $E$ that this singularity is
\begin{itemize}
\item a node if $b\ne 1$, that is, if the intersection type of $C$ and $D$ is $(2,1,1)$;
\item an ordinary cusp if $b=1$, that is, if the intersection type of $C$ and $D$ is $(3,1)$.
\end{itemize}
In both cases $E$ has geometric genus zero, so $E$ is rational.
\end{proof}
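The discriminant computation and the node-versus-cusp dichotomy in the proof can also be checked with sympy (the singularity type is read off from the quadratic part of $H$ at $(u,v)=(0,0)$, i.e. its tangent cone):

```python
import sympy as sp

u, v, t, a, b = sp.symbols('u v t a b')
H = b*u**2 - 2*b*u*v + (1 + t*u + a*u**2)*v**2

# discriminant of H as a quadratic in v, as computed in the proof:
assert sp.expand(sp.discriminant(H, v) + 4*b*u**2*(1 - b + t*u + a*u**2)) == 0

# quadratic (lowest-order) part of H at the singular point (u,v) = (0,0):
q = b*u**2 - 2*b*u*v + v**2
assert sp.expand(H - q - t*u*v**2 - a*u**2*v**2) == 0

# two distinct branches (a node) iff this form has nonzero discriminant
# 4b^2 - 4b, i.e. iff b != 1 (recall b != 0); for b = 1 the tangent cone
# degenerates to a double line, giving a cusp:
assert sp.expand(sp.discriminant(q.subs(u, 1), v)) == 4*b**2 - 4*b
assert sp.factor(q.subs(b, 1)) == (u - v)**2
```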
\section{Poncelet's theorem for any conics}
If two plane conics $C$ and $D$ are tangent, then the Poncelet process that starts at a tangent point of $C$ and $D$ stops after one step, which is certainly not the case for all other starting points. So the following version of Poncelet's theorem for possibly tangent conics is the best one could hope for.
\begin{theorem}\label{thmm}
Let $C$ and $D$ be two smooth conics in $\PP^2$ over an algebraically closed field $k$ with $\Char k\ne 2$. Let $T\subseteq C\cap D$ be the set of points where $C$ and $D$ are tangent. If there exist a point $c_1\in C\setminus T$ and a positive integer $n$ such that the Poncelet process with initial point $c_1$ stops after $n$ steps, then the Poncelet process with any initial point in $C$ stops after $n$ steps.
Moreover, in the cases where the intersection type of $C$ and $D$ is $(3,1)$ or $(4)$,
\begin{itemize}
\item if $\Char k =0$, then the Poncelet process stops for no initial points in $C\setminus T$;
\item if $\Char k >0$, then the Poncelet process stops for all initial points in $C$ after $\Char k$ steps.
\end{itemize}
\end{theorem}
\begin{proof}
Let $E=\{(c,d)\in C \times D \mid c\in T_d\,D \}$ be the projective curve defined in Proposition~\ref{prop: E singular cond}, where it is also shown that the singular locus $\Sing(E)$ of $E$ is \[
\Sing(E)=\{(c,d)\in C\times D\mid c=d\in T\}. \]
Let $\sigma$ and $\tau$ be the involutions on $E$ in Definition~\ref{def: auto on E}, and let $\nu=\sigma\circ\tau$. Then a sequence of points $(c_1,d_1),(c_2,d_2),\ldots\in C\times D$ is produced by the Poncelet process if and only if $(c_i,d_i)\in E$ and $\nu(c_i,d_i)=(c_{i+1},d_{i+1})$ for all $i\ge 1$. So what we want to show is equivalent to the statement that if there exists a positive integer~$n$ such that $\nu^n$ has a fixed point in $E\setminus \Sing(E)$, then $\nu^n$ is the identity map on $E$. Since the geometric structure of $E$ depends on the intersection type of $C$ and $D$, we divide the proof into several cases accordingly.
\textbf{The intersection type of $C$ and $D$ is $(1,1,1,1)$.} In this case, $E$ is an elliptic curve by Proposition~\ref{cor: general position is elliptic curve}. This is the case treated by Griffiths and Harris in \cite{GH}. They wrote their proof over $\CC$, but we will adopt their method and point out that it actually works in positive characteristic too. The key fact, which Griffiths and Harris proved only over $\CC$ but which is actually true in general, is that if $E$ is an elliptic curve and $\alpha\colon E\to E$ is an involution with a fixed point, then there exists a point $a\in E$ such that $\alpha(p)=-p+a$ for all $p\in E$, where the addition here denotes the elliptic curve group law. Since $\sigma$ and $\tau$ are both involutions on $E$ with fixed points, there thus exist $a,b\in E$ such that $\sigma(p)=-p+a$ and $\tau(p)=-p+b$ for all $p\in E$. Hence \[
\nu(p)=(\sigma\circ\tau)(p)=-(-p+b)+a=p+(a-b), \]
so $\nu^n(p)=p+n(a-b)$ for all $p\in E$. If $\nu^n$ has a fixed point, then $n(a-b)=0$, and hence $\nu^n$ is the identity map.
\textbf{The intersection type of $C$ and $D$ is $(2,1,1)$.} In this case, $E$ is an irreducible rational curve with a node $q$ and smooth elsewhere by Proposition~\ref{prop: shape of E}. Let $\phi\colon \PP^1 \to E$ be the normalization of $E$. Then $\phi^{-1}(q)=\{p_1,p_2\}$ consists of two distinct points, and \[
\phi|_{U}\colon U=\PP^1\setminus \{p_1,p_2\} \longrightarrow E\setminus \{q\}=V \]
is an isomorphism. Let $\tilde{\nu}$ be the automorphism of $\PP^1$ induced by $\nu$. Since $q$ is the only fixed point of $\nu$ by Proposition \ref{prop: singular condition for fixed point}, $\tilde{\nu}$ has no fixed points in $U$. Since any automorphism of $\PP^1$ has a fixed point, $p_1$ and $p_2$ must both be fixed points of $\tilde{\nu}$. If there exists a positive integer~$n$ such that $\nu^n$ has a fixed point in $V$, then $\tilde{\nu}^n$ has a fixed point in $U$. Then $\tilde{\nu}^n$ is an automorphism of $\PP^1$ with at least three fixed points, so it is the identity map.
\textbf{The intersection type of $C$ and $D$ is $(3,1)$.} In this case, $E$ is an irreducible rational curve with an ordinary cusp $q$ and smooth elsewhere by Proposition~\ref{prop: shape of E}. Let $\phi\colon \PP^1 \to E$ be the normalization of $E$. Then $\phi^{-1}(q)=\{p\}$ consists of a single point, and \[
\phi|_{U}\colon U=\PP^1\setminus \{p\} \longrightarrow E\setminus \{q\}=V \]
is an isomorphism. Let $\tilde{\nu}$ be the automorphism of $\PP^1$ induced by $\nu$. Since $q$ is the only fixed point of $\nu$ by Proposition \ref{prop: singular condition for fixed point}, $\tilde{\nu}$ has no fixed points in $U$. Hence $p$ is the only fixed point of $\tilde{\nu}$. If we choose an affine coordinate $z$ on $\PP^1$ such that the point $p$ corresponds to $z=\infty$, then $\tilde{\nu}$ is given by $\tilde{\nu}(z)=z+b$ for some nonzero $b\in k$, so $\tilde{\nu}^n(z)=z+nb$ for all $z\in \PP^1$. If $\Char k =0$, then for any positive integer $n$ we have $nb\ne 0$, which implies that $\tilde{\nu}^n$ has no fixed points in $U$, and hence $\nu^n$ has no fixed points in $V$. On the other hand, if $\Char k >0$, then for $n=\Char k$ we have $nb=0$, so $\nu^n$ is the identity map.
\textbf{The intersection type of $C$ and $D$ is $(2,2)$.} In this case, by Proposition~\ref{prop: shape of E}, $E$ has two irreducible components $E_1$ and $E_2$, both isomorphic to $\PP^1$, which intersect transversally at two points $p_1$ and $p_2$. Since both $\sigma$ and $\tau$ interchange the two components, $\nu=\sigma\tau$ restricts to an automorphism on $E_i$ for $i=1,2$. By Proposition~\ref{prop: singular condition for fixed point}, $p_1$ and $p_2$ are fixed points of $\nu$. Suppose that there is a positive integer $n$ such that $\nu^n$ has a fixed point $q$ in $E\setminus \{p_1,p_2\}$. Without loss of generality, we may assume that $q\in E_1$. Then $\nu^n|_{E_1}$ is an automorphism of $E_1\cong \PP^1$ with at least three fixed points, so $\nu^n|_{E_1}$ is the identity map. Then $(\nu^n)^{-1}=(\tau\sigma)^n$ also restricts to the identity map on $E_1$, that is, $(\tau\sigma)^n(p)=p$ for all $p\in E_1$. It follows that, for any point $p\in E_2$, we have $(\tau\sigma)^n\bigl(\tau(p)\bigr)=\tau(p)$, and applying $\tau$ on both sides gives $(\sigma\tau)^n(p)=p$, that is, $\nu^n|_{E_2}$ is the identity map. Hence $\nu^n$ is the identity map on $E$.
\textbf{The intersection type of $C$ and $D$ is $(4)$.} In this case, by Proposition~\ref{prop: shape of E}, $E$ has two irreducible components $E_1$ and $E_2$, both isomorphic to $\PP^1$, which intersect at only one point $p$ with multiplicity two. Since both $\sigma$ and $\tau$ interchange the two components, $\nu=\sigma\tau$ restricts to an automorphism on $E_i$ for $i=1,2$. By Proposition~\ref{prop: singular condition for fixed point}, $p$ is the only fixed point of $\nu$ on $E$. For each $i=1,2$, let $z_i$ be an affine coordinate for $E_i \cong \PP^1$ such that the point $p$ corresponds to $z_i=\infty$. Then there exist nonzero $b_i\in k$ such that $\nu|_{E_i}(z_i)=z_i+b_i$, so $\nu^n|_{E_i}(z_i)=z_i+nb_i$ for all $z_i\in E_i$. If $\Char k =0$, then for any positive integer $n$ we have $nb_i\ne 0$, so $\nu^n$ has no fixed points in $E\setminus \{p\}$. On the other hand, if $\Char k >0$, then for $n=\Char k$ we have $nb_i=0$, so $\nu^n$ is the identity map.
\end{proof}
https://arxiv.org/abs/2212.05739 | Spectral extremal graphs for the bowtie | Let $F_k$ be the (friendship) graph obtained from $k$ triangles by sharing a common vertex. The $F_k$-free graphs of order $n$ which attain the maximal spectral radius were first characterized by Cioabă, Feng, Tait and Zhang [Electron. J. Combin. 27 (4) (2020)], and later uniquely determined by Zhai, Liu and Xue [Electron. J. Combin. 29 (3) (2022)] under the condition that $n$ is sufficiently large. In this paper, we remove the condition that $n$ be sufficiently large in the case $k=2$. The graph $F_2$ is also known as the bowtie. We show that the unique $n$-vertex $F_2$-free spectral extremal graph is the balanced complete bipartite graph with an edge added in the vertex part of smaller size if $n\ge 7$, and the condition $n\ge 7$ is tight. Our result is a spectral generalization of a theorem of Erdős, Füredi, Gould and Gunderson [J. Combin. Theory Ser. B 64 (1995)], which states that $\mathrm{ex}(n,F_2)=\left\lfloor {n^2}/{4} \right\rfloor +1$. Moreover, we study the spectral extremal problem for $F_k$-free graphs with a given number of edges. In particular, we show that the unique $m$-edge $F_2$-free spectral extremal graph is the join of $K_2$ with an independent set of $\frac{m-1}{2}$ vertices if $m\ge 8$, and the condition $m\ge 8$ is tight. | \section{Introduction}
In this paper,
we shall use the following standard notation; see, e.g., the monograph \cite{BM2008}.
We consider only simple and undirected graphs. Let $G$ be a simple
graph with vertex set $V(G)=\{v_1, \ldots, v_n\}$ and edge set $E(G)=\{e_1, \ldots, e_m\}$.
We usually write $n$ and $m$ for the number of vertices and edges
respectively.
Let $N(v)$ or $N_G(v)$ be the set of neighbors of $v$,
and $d(v)$ or $d_G(v)$ be the degree of a vertex $v$ in $G$.
For a subset $A\subseteq V(G)$, we write $e(A)$ for the
number of edges with two endpoints in $A$,
and $N(A)$ for the union of neighborhoods of vertices of $A$.
For two disjoint sets $A,B$, we write $e(A,B)$ for the number of edges between $A$ and $B$.
Let $K_n$ be the complete graph on $n$ vertices, and
$K_{s,t}$ be the complete bipartite graph with parts of sizes
$s$ and $t$.
We write $C_n$ and $P_n$ for the cycle and
path on $n$ vertices respectively.
Let $G \vee H$ be the join graph consisting of $G$ and $H$ in which each vertex of $G$
is adjacent to each vertex of $H$.
To illustrate the background and history,
we shall partition the introduction into the following three subsections.
\subsection{The classical extremal graph problems}
We say that a graph $G$ is $F$-free if it does not contain
an isomorphic copy of $F$ as a subgraph.
Clearly, every bipartite graph is $C_{2k+1}$-free
for every integer $k\ge 1$.
The {\em Tur\'{a}n number} of a graph $F$ is the maximum number of edges in an $n$-vertex $F$-free graph, and
it is usually denoted by $\mathrm{ex}(n, F)$.
A graph on $n$ vertices with no subgraph $F$ and with $\mathrm{ex}(n, F)$ edges is called an {\em extremal graph} for $F$.
As is known to all, the Mantel theorem \cite{Man1907} asserts that
the balanced complete bipartite graph $K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$
attains the maximum number of edges among all $n$-vertex triangle-free graphs.
\begin{theorem}[Mantel \cite{Man1907}, 1907] \label{thmMan}
If $G$ is a triangle-free graph on $n$ vertices, then
\begin{equation} \label{eq-man}
e(G) \le \lfloor {n^2}/{4} \rfloor ,
\end{equation}
equality holds if and only if $G$ is the balanced complete bipartite graph $K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$.
\end{theorem}
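For small $n$, Mantel's bound is easy to confirm by exhaustive search. The following sketch is our own illustration (the function name is ours, not from the literature); it brute-forces $\mathrm{ex}(n,K_3)$ over all labelled graphs for $n\le 6$.

```python
from itertools import combinations

def max_triangle_free_edges(n):
    """Brute-force ex(n, K_3): maximize the edge count over all graphs on n labelled vertices."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        if len(edges) <= best:
            continue
        adj = [set() for _ in range(n)]
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        # triangle-free <=> no edge whose endpoints have a common neighbour
        if all(not (adj[a] & adj[b]) for a, b in edges):
            best = len(edges)
    return best

for n in range(2, 7):
    assert max_triangle_free_edges(n) == n * n // 4  # Mantel: floor(n^2/4)
```

The search is exponential in $\binom n2$, so it is only feasible for very small $n$, but it confirms both the bound and that $K_{\lfloor n/2\rfloor,\lceil n/2\rceil}$ attains it.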
Mantel's theorem has many interesting applications and generalizations in the literature; see, e.g.,
\cite[pp. 294--301]{Bollobas78} for standard proofs.
Inspired by Mantel's theorem,
extremal graph theory has been widely studied and has developed rapidly over the past century; see, e.g., \cite{FS13,Sim13} for recent comprehensive surveys.
Let $F_k$ denote the $k$-fan graph, which is the graph
consisting of $k$ triangles
that intersect in exactly one common vertex.
This graph is known as the friendship graph
because it is the only extremal graph in the
well-known Friendship Theorem \cite[Chapter 43]{AZ2014}.
Since the chromatic number $\chi (F_k)=3$,
a celebrated result due to Erd\H{o}s, Stone and Simonovits (see \cite[p. 339]{Bollobas78})
gives an asymptotic value of the Tur\'{a}n number of $F_k$, which states that
\[ \mathrm{ex}(n,F_k)= n^2/4 + o(n^2). \]
A natural question is to determine the error term
$o(n^2)$ accurately.
In 1995, Erd\H{o}s, F\"{u}redi,
Gould and Gunderson \cite{Erdos95} proved the following
result.
\begin{theorem}[Erd\H{o}s--F\"{u}redi--Gould--Gunderson \cite{Erdos95}, 1995]
\label{thmErdos95}
For every $k \geq 1$ and $n\geq 50k^2$,
\[ \mathrm{ex}(n, F_k)= \left\lfloor \frac {n^2}{4}\right \rfloor+ \left\{
\begin{array}{ll}
k^2-k, \quad~~ \mbox{if $k$ is odd,} \\
k^2-\frac32 k, \quad \mbox{if $k$ is even}.
\end{array}
\right. \]
The extremal graphs are not unique; they are constructed as follows. \\
(a) For odd $k$,
the extremal graphs are constructed from $K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$
by embedding two vertex-disjoint copies of the complete graph $K_k$ in one side. \\
(b) For even $k$, the extremal graphs
are constructed by taking $K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$ and embedding
a graph with $2k-1$ vertices, $k^2-\frac{3}{2} k$ edges and maximum degree $k-1$ in one side.
\end{theorem}
We remark here that for odd $k$ and even $n$,
the extremal graph is unique: it is obtained from
$K_{n/2,n/2}$ by embedding two disjoint copies of $K_k$ in one part.
However, for other cases,
the extremal graph for $F_k$ is not unique.
\subsection{The spectral extremal graph problems}
Let $G$ be a graph on $n$ vertices with $m$ edges.
Let $A(G)$ be the adjacency matrix of $G$.
The classical extremal graph problems
usually study the maximum or minimum
number of edges that the extremal graphs can have.
Correspondingly,
we can study the spectral extremal problem.
We denote by $\mathrm{ex}_{\lambda}(n,F)$
the largest eigenvalue of the adjacency matrix
in an $n$-vertex graph that contains no copy of $F$, that is,
\[ \mathrm{ex}_{\lambda}(n,F):=\max \bigl\{ \lambda(G): |G|=n~\text{and}~F\nsubseteq G \bigr\}. \]
In 2007, Nikiforov \cite{Niki2007laa2} published the spectral Tur\'{a}n theorem.
The important triangle case provided a spectral version of Theorem \ref{thmMan}.
\begin{theorem}[Nikiforov \cite{Niki2007laa2}, 2007] \label{thm-niki}
Let $G$ be a triangle-free graph on $n$ vertices. Then
\begin{equation} \label{eq2}
\lambda (G)\le \lambda (K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil } ),
\end{equation}
equality holds if and only if $G $
is a balanced complete bipartite graph
$K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$.
\end{theorem}
Comparing the spectral result with the classical
result in terms of edges,
one can see that Theorem \ref{thm-niki}
is stronger than Theorem \ref{thmMan},
since
(\ref{eq2}) implies (\ref{eq-man}). Indeed, using (\ref{eq2}) and
Rayleigh's formula, we get ${2e(G)}/{n} \le \lambda (G) \le \sqrt{\lfloor n^2/4\rfloor}$ and then $e(G) \le \bigl\lfloor \frac{n}{2}\sqrt{\lfloor n^2/4\rfloor} \bigr\rfloor = \lfloor n^2/4\rfloor$.
Such a deduction enriches the study of spectral extremal graph theory.
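The chain of inequalities in this deduction is easy to probe numerically: $\lambda(K_{\lfloor n/2\rfloor,\lceil n/2\rceil})=\sqrt{\lfloor n^2/4\rfloor}$, and the Rayleigh bound $2e(G)/n\le\lambda(G)$ then caps the edge count. Below is a minimal sketch of our own (all helper names are ours); it uses power iteration on $A+I$, since on bipartite graphs $-\lambda$ is also an eigenvalue and plain power iteration on $A$ may oscillate.

```python
def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I.
    The shift avoids non-convergence on bipartite graphs, where -lambda
    is also an eigenvalue of A."""
    x = [1.0] * len(adj)
    mu = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(len(adj))]
        mu = max(y)
        x = [t / mu for t in y]
    return mu - 1.0

def complete_bipartite(s, t):
    """Adjacency lists of K_{s,t}."""
    left, right = list(range(s)), list(range(s, s + t))
    return [right[:] for _ in left] + [left[:] for _ in right]

# lambda(K_{s,t}) = sqrt(s t); in the balanced case this is sqrt(floor(n^2/4)),
# and 2e/n <= lambda holds with e = s t.
for n in (6, 7, 10, 11):
    s, t = n // 2, n - n // 2
    lam = spectral_radius(complete_bipartite(s, t))
    assert abs(lam - (s * t) ** 0.5) < 1e-9
    assert 2 * (s * t) / n <= lam + 1e-9
```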
In the past few years,
Theorem \ref{thm-niki} stimulated the developments of the spectral extremal graph theory.
It is natural to consider the spectral extremal problems
for graphs with given number of vertices.
In view of this perspective,
various extensions and generalizations on
inequality (\ref{eq2}) have been obtained in the literature; see, e.g.,
\cite{Wil1986,Niki2007laa2,KN2014} for extensions
on $K_{r+1}$-free graphs;
see \cite{BN2007jctb} for relations
between cliques and spectral radius,
\cite{TT2017,LN2021outplanar} for outerplanar and planar graphs,
\cite{Tait2019} for the Colin de Verdi\`{e}re parameter,
\cite{LNW2021,LP2022second} for non-bipartite triangle-free graphs,
\cite{LG2021,LSY2022} for non-bipartite graphs without short odd cycles,
and \cite{NikifSurvey} for a comprehensive survey.
\medskip
We write $\mathrm{Ex}_{\lambda}(n,F)$
for the set of $F$-free graphs on $n$ vertices
with maximum spectral radius,
and $\mathrm{Ex}(n,F)$ for the set of all $n$-vertex extremal graphs
with maximum number of edges.
The well-known theorems of Tur\'{a}n (see, e.g., \cite{Bollobas78})
and Nikiforov \cite{Niki2007laa2}
give that $ \mathrm{Ex}_{\lambda}(n,K_{r+1})= \mathrm{Ex}(n,K_{r+1})= \{T_r(n)\}$
for every $r\ge 2$, where $T_r(n)$ is the $r$-partite Tur\'{a}n graph on $n$ vertices.
Recall that $F_k$ is the graph obtained from $k$ triangles
by sharing a common vertex.
For fixed $k\ge 2$ and sufficiently large order $n$,
the spectral extremal problem
for $F_k$ was recently
characterized by Cioab\u{a}, Feng,
Tait and Zhang \cite{CFTZ20}.
\begin{theorem}[Cioab\u{a}--Feng--Tait--Zhang \cite{CFTZ20}, 2020] \label{thmCFTZ20}
Let $k\ge 2$ be fixed and $n$ be sufficiently large. Then
\[ \mathrm{Ex}_{\lambda}(n,F_k) \subseteq \mathrm{Ex}(n,F_k). \]
\end{theorem}
In 2022, Zhai, Liu and Xue \cite{ZLX2022} further determined the {\it unique} extremal graph,
which is obtained from $K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil }$ by embedding a graph $H$ in the part of smaller size
$\lfloor \frac{n}{2}\rfloor$, where $H=K_k \cup K_k$ for odd $k$, while for even $k$
the graph $H$ is also determined in \cite{ZLX2022}.
In fact, characterizing the embedded graph $H$ in the even case is rather involved,
so we omit the details here;
see \cite{Chen03,LP2021,DKLNTW2021} for related extensions,
and \cite{WKX2023} for recent progress.
\medskip
Note that the above result holds under the condition that $n$ is sufficiently large
(the lower bound on $n$ is still unknown).
In extremal combinatorics,
it is also important to determine the extremal configurations for all values of $n$,
instead of $n$ being large enough. For instance,
F\"{u}redi and Simonovits \cite{FS2005},
Keevash and Sudakov \cite{KS2005} independently
proved the Tur\'{a}n number of Fano plane for sufficiently large $n$,
and later Bellmann and Reiher \cite{BR2019} proved that
the Tur\'{a}n number of Fano plane for every $n\ge 7$.
Moreover, Tait and Tobin \cite{TT2017} proved that for sufficiently large $n$,
the outerplanar graph on $n$ vertices of
maximum spectral radius is $K_1\vee P_{n-1}$.
Later, Lin and Ning \cite{LN2021outplanar} proved further that
the same result still holds for all $n\ge 2$ except for $n=6$.
To some extent, it is meaningful to find an appropriate or smallest bound $n_0(k)$
such that for all integers $n\ge n_0(k)$, the result
$\mathrm{Ex}_{\lambda}(n,F_k) \subseteq \mathrm{Ex}(n,F_k)$ holds.
\begin{figure}[H]
\centering
\includegraphics[scale=0.85]{Bowtie.png}
\caption{The bowtie graph $F_2$.}
\label{Fig-bowtie}
\end{figure}
In this paper, we investigate such a problem for
the bowtie $F_2$, the graph consisting of two copies of $K_3$
merged at a single vertex; we refer to this vertex as the
central vertex of the bowtie.
Despite the simple nature of the bowtie, bowtie-free graphs have been crucial in several areas of graph theory,
such as Ramsey problems \cite{HN2018},
supersaturation problems \cite{KMP2020}
and chromatic number \cite{CCCFL2021}.
In fact, the Tur\'{a}n number of the bowtie
was also considered by
Erd\H{o}s, F\"{u}redi, Gould and Gunderson \cite{Erdos95},
in which they presented the exact value of $\mathrm{ex}(n,F_2)$ for every integer $n\ge 5$.
\begin{theorem}[Erd\H{o}s et al. \cite{Erdos95}, 1995] \label{EFGG-F2}
For $n\ge 5$, one has
\[ \mathrm{ex}(n,F_2) = \left\lfloor {n^2}/{4} \right\rfloor +1. \]
\end{theorem}
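For small $n$, the value $\mathrm{ex}(n,F_2)=\lfloor n^2/4\rfloor+1$ can again be confirmed by brute force. The sketch below is our own illustration (the function names are ours); a graph contains $F_2$ exactly when two of its triangles share exactly one vertex.

```python
from itertools import combinations

def bowtie_free(n, edges):
    """True iff no two triangles of the graph share exactly one vertex (no copy of F_2)."""
    adj = [set() for _ in range(n)]
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    tris = [t for t in combinations(range(n), 3)
            if t[1] in adj[t[0]] and t[2] in adj[t[0]] and t[2] in adj[t[1]]]
    return all(len(set(s) & set(t)) != 1 for s, t in combinations(tris, 2))

def ex_bowtie(n):
    """Brute-force ex(n, F_2) over all graphs on n labelled vertices."""
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        if bin(mask).count("1") <= best:
            continue  # cannot improve on the current record
        edges = [pairs[i] for i in range(len(pairs)) if mask >> i & 1]
        if bowtie_free(n, edges):
            best = len(edges)
    return best

assert ex_bowtie(5) == 5 * 5 // 4 + 1   # 7
assert ex_bowtie(6) == 6 * 6 // 4 + 1   # 10
```

Note that the theorem requires $n\ge 5$: for $n=4$ every graph is trivially $F_2$-free (the bowtie has five vertices), so $\mathrm{ex}(4,F_2)=6$.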
Motivated by the results above, we consider the spectral Tur\'{a}n problem for the bowtie and
determine
the unique $F_2$-free spectral extremal graph for all integers $n\ge 7$,
thereby establishing a spectral extension of Theorem \ref{EFGG-F2}.
The proof of Theorem \ref{thm-n-F2} is completely different from that of Theorem \ref{thmCFTZ20} in
\cite{CFTZ20} and \cite{ZLX2022}.
The techniques in our proof are partially
inspired by recent articles of Zhai and Lin \cite{ZL2022jgt},
and of Ning and Zhai \cite{NZ2021}.
Denote by $K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+$ the graph obtained from
the balanced complete bipartite graph $K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}$
by embedding an edge in the smaller part;
see Figure \ref{fig-2}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.85]{Spectral-n-vertices.png}
\caption{The unique extremal graph in Theorem \ref{thm-n-F2}.}
\label{fig-2}
\end{figure}
The main result of this section is as follows.
\begin{theorem} \label{thm-n-F2}
If $n\ge 7$ and $G$ is an $F_2$-free graph on $n$ vertices, then
\[ \lambda (G) \le \lambda (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+), \]
equality holds if and only if $G=K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+$.
\end{theorem}
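As a quick numerical illustration of the extremal graph (our own sketch, not part of the paper's argument; helper names are ours), one can build $K^+_{\lfloor n/2\rfloor,\lceil n/2\rceil}$ and check that its spectral radius exceeds $\sqrt{\lfloor n^2/4\rfloor+2}$, the lower bound established in Lemma \ref{lem-21}.

```python
def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I
    (the shift handles near-bipartite spectra, where -lambda can also occur)."""
    x = [1.0] * len(adj)
    mu = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(len(adj))]
        mu = max(y)
        x = [t / mu for t in y]
    return mu - 1.0

def k_ab_plus(a, b):
    """K_{a,b} with one extra edge inside the part of size a (vertices 0..a-1)."""
    left, right = list(range(a)), list(range(a, a + b))
    adj = [right[:] for _ in left] + [left[:] for _ in right]
    adj[0].append(1)
    adj[1].append(0)
    return adj

for n in range(7, 12):
    lam = spectral_radius(k_ab_plus(n // 2, n - n // 2))
    assert lam * lam > n * n // 4 + 2

# for n = 7 the extremal graph is K_{3,4}^+, with spectral radius about 3.848
assert 3.84 < spectral_radius(k_ab_plus(3, 4)) < 3.86
```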
\noindent
{\bf Remark.}
The bound $n\ge 7$ in Theorem \ref{thm-n-F2}
is best possible, since for $n=6$
there exists an $F_2$-free graph
with larger spectral radius, namely $K_2\vee I_4$.
Upon computation, we have
$\lambda (K_2\vee I_4)\approx 3.722$,
while $\lambda (K_{3,3}^+)\approx 3.504$.
\medskip
\noindent
{\bf Observation.}
Using the fundamental inequality ${2e(G)}/{n} \le \lambda (G)$,
it is easy to verify that our result (Theorem \ref{thm-n-F2})
implies Theorem \ref{EFGG-F2}.
A tedious calculation gives $\lambda (K_{\frac{n}{2}, \frac{n}{2}}^+) <
\frac{n}{2} + \frac{2}{n} + \frac{8}{n^2}$ for even $n$,
and $\lambda(K_{\frac{n-1}{2}, \frac{n+1}{2}}^+) < \frac{n}{2} + \frac{3}{2n} + \frac{1}{n}$
for odd $n$ by using Lemma \ref{lem-21}. Hence, we get $e(G) \le
\lfloor \frac{n}{2} \lambda (G) \rfloor
\le \lfloor \frac{n}{2} \lambda (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+)
\rfloor
= \lfloor n^2/4 \rfloor +1$.
\subsection{Spectral problems for graphs with given size}
In 1970, Nosal \cite{Nosal1970}
determined the largest spectral radius of a triangle-free graph
in terms of the number of edges, which states that
if $G$ is a triangle-free graph, then $\lambda (G)\le \sqrt{m}$.
Furthermore, Nikiforov \cite{Niki2002cpc,Niki2006laa} extended Nosal's theorem
and showed that if $G$ is triangle-free, then $\lambda (G) < \sqrt{m}$ unless $G$ is a complete bipartite graph (possibly with some isolated vertices).
In the sequel, when considering results for graphs with a given number of edges,
we shall ignore possible isolated vertices when no confusion arises.
\begin{theorem}[Nosal--Nikiforov, \cite{Niki2002cpc,Niki2006laa}] \label{thmnosal}
If $G$ is a triangle-free graph with $m$ edges, then
\begin{equation} \label{eq1}
\lambda (G)\le \sqrt{m} ,
\end{equation}
equality holds if and only if
$G$ is a complete bipartite graph.
\end{theorem}
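Nosal's bound and its equality case are easy to probe numerically. The sketch below is our own illustration (helper names are ours): complete bipartite graphs attain $\sqrt m$ exactly, while $C_5$, which is triangle-free but not complete bipartite, stays strictly below.

```python
def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I
    (the shift avoids oscillation on bipartite graphs)."""
    x = [1.0] * len(adj)
    mu = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(len(adj))]
        mu = max(y)
        x = [t / mu for t in y]
    return mu - 1.0

def complete_bipartite(s, t):
    """Adjacency lists of K_{s,t}, which has m = s t edges."""
    left, right = list(range(s)), list(range(s, s + t))
    return [right[:] for _ in left] + [left[:] for _ in right]

# equality: lambda(K_{s,t}) = sqrt(s t) = sqrt(m)
for s, t in ((2, 3), (3, 3), (3, 5)):
    assert abs(spectral_radius(complete_bipartite(s, t)) - (s * t) ** 0.5) < 1e-9

# strict inequality: C_5 has m = 5 edges and lambda = 2 < sqrt(5)
c5 = [[1, 4], [2, 0], [3, 1], [4, 2], [0, 3]]
assert spectral_radius(c5) < 5 ** 0.5
```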
Theorem \ref{thmnosal} implies that if $G$ is a bipartite graph
with $m$ edges,
then $ \lambda (G)\le \sqrt{m} $,
equality holds if and only if
$G$ is a complete bipartite graph.
On the one hand, inequality (\ref{eq1}) implies
(\ref{eq-man}) in Theorem \ref{thmMan}.
Indeed, applying the Rayleigh inequality, we have
$\frac{2m}{n}\le \lambda (G)\le \sqrt{m}$,
which yields $ m \le \lfloor {n^2}/{4} \rfloor$.
On the other hand, combining (\ref{eq1}) with Mantel's theorem, we obtain
$ \lambda (G)\le \sqrt{m} \le \sqrt{\lfloor {n^2}/{4}\rfloor} =\lambda (K_{\lfloor \frac{n}{2}\rfloor, \lceil \frac{n}{2} \rceil })$.
Hence, inequality (\ref{eq1}) also implies (\ref{eq2}) in Theorem \ref{thm-niki}.
\medskip
During the past few years, inequality (\ref{eq1}) in Theorem \ref{thmnosal}
has sparked great interest in studying the maximum spectral radius of graphs
in terms of the number of edges, rather than the number of vertices;
see \cite{Niki2002cpc} for an extension on $K_{r+1}$-free graphs,
\cite{Niki2009laa} for $C_4$-free graphs,
\cite{ZLS2021} for $K_{2,r+1}$-free graphs,
and similar results of $C_5$-free and $C_6$-free graphs \cite{MLH2022} as well,
\cite{Niki2021} for an extension on book-free graphs,
and \cite{LNW2021,ZS2022dm} for non-bipartite triangle-free graphs. The spectral extremal problem for graphs with given size
is interesting in its own right, and it has become an important and popular topic
in recent research on spectral graph theory.
\medskip
In this section, we focus mainly on the spectral extremal problems
for $F_2$-free graphs with a given number of edges.
Recall that $K_2\vee I_{\frac{m-1}{2}}$
denotes the join graph obtained from an edge $K_2$ and an independent set
$I_{\frac{m-1}{2}}$ such that each vertex of the edge is adjacent to each vertex of the
independent set.
\begin{figure}[H]
\centering
\includegraphics[scale=0.85]{Spectral-m-edges.png}
\caption{The unique extremal graph in Theorem \ref{thm-m-F2}.}
\label{fig-m-edges}
\end{figure}
\begin{theorem} \label{thm-m-F2}
If $m\ge 8$ and $G$ is an $F_2$-free graph with $m$ edges, then
\[ \lambda (G)\le \frac{1+\sqrt{4m-3}}{2}, \]
equality holds if and only if $G=K_2\vee I_{\frac{m-1}{2}}$.
\end{theorem}
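The stated value is the largest root of $x^2-x-(m-1)=0$, the characteristic equation of the quotient of $K_2\vee I_{(m-1)/2}$. The following sketch (our own illustration; helper names are ours) confirms the formula numerically for a few odd values of $m$.

```python
def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I."""
    x = [1.0] * len(adj)
    mu = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(len(adj))]
        mu = max(y)
        x = [t / mu for t in y]
    return mu - 1.0

def book(q):
    """K_2 joined to an independent set of q vertices; it has m = 2q + 1 edges."""
    others = list(range(2, q + 2))
    return [[1] + others, [0] + others] + [[0, 1] for _ in range(q)]

for q in (3, 4, 10):
    m = 2 * q + 1
    lam = spectral_radius(book(q))
    assert abs(lam - (1 + (4 * m - 3) ** 0.5) / 2) < 1e-9
```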
\noindent
{\bf Remark.}
The bound $m\ge 8$ in Theorem \ref{thm-m-F2} is tight,
since for $m=7$,
there exists an $F_2$-free graph
with larger spectral radius. Namely,
$G=K_1 \vee (K_3 \cup K_1)$.
An easy calculation gives $\lambda (G) \approx 3.086$,
but $\lambda (K_2 \vee I_3) = \frac{1+\sqrt{4m-3}}{2}=3$ is slightly smaller.
\medskip
At the end of this section,
we propose a conjecture for $F_k$-free graphs.
\begin{conjecture}[$F_k$-free graphs with given size]
\label{conj-m-Fk}
Let $k\ge 2$ be fixed and $m$ be large enough.
If $G$ is an $F_k$-free graph with $m$ edges, then
\[ \lambda (G)\le \frac{k-1 +\sqrt{4m -k^2+1}}{2}, \]
equality holds if and only if
$G=K_k \vee I_{\frac{1}{k}\left(m-{k \choose 2} \right)}$.
\end{conjecture}
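For the conjectured extremal graph the bound is at least self-consistent: $K_k\vee I_q$ has $m=\binom k2+kq$ edges and spectral radius $\frac{k-1+\sqrt{(k-1)^2+4kq}}2$, which coincides with the conjectured value after substituting $m$. A sketch of ours verifying this numerically (helper names are ours):

```python
from math import comb

def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I."""
    x = [1.0] * len(adj)
    mu = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(len(adj))]
        mu = max(y)
        x = [t / mu for t in y]
    return mu - 1.0

def clique_join_independent(k, q):
    """K_k joined to an independent set of q vertices; m = C(k,2) + k q edges."""
    clique = list(range(k))
    adj = [[u for u in clique if u != v] + list(range(k, k + q)) for v in clique]
    adj += [clique[:] for _ in range(q)]
    return adj

for k, q in ((2, 5), (3, 4), (4, 3)):
    m = comb(k, 2) + k * q
    lam = spectral_radius(clique_join_independent(k, q))
    assert abs(lam - (k - 1 + (4 * m - k * k + 1) ** 0.5) / 2) < 1e-9
```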
\medskip
This paper is organized as follows.
In the next section, we will present the proof of Theorem \ref{thm-n-F2}.
In Section \ref{sec3},
the proof of Theorem \ref{thm-m-F2} will be provided.
In the last section, we conclude the paper
with some open problems.
\section{Proof of Theorem \ref{thm-n-F2}}
\label{sec2}
To begin with,
we shall present two lemmas for simplicity.
The first lemma gives the characterization
of the spectral radius of $K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+$, and it yields some lower/upper bounds for our purpose.
The second lemma states that, among all complete bipartite graphs on $n$ vertices
with one edge added inside a part, the spectral radius is maximized by
the balanced bipartition with the edge added in the smaller part.
\begin{lemma} \label{lem-21}
(a) If $n$ is even, then $\lambda(K_{\frac{n}{2}, \frac{n}{2}}^+)$
is the largest root of
\[ f(x)=x^3-x^2 - \frac{n^2}{4}x + \frac{n^2}{4}-n. \]
(b) if $n$ is odd, then $\lambda(K_{\frac{n-1}{2}, \frac{n+1}{2}}^+)$
is the largest root of
\[ g(x)=x^3 - x^2 + \frac{1-n^2}{4} x +
\frac{n^2}{4} - n - \frac{5}{4}. \]
Consequently, we have
\begin{equation} \label{eq-n2+2}
\lambda^2 (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+) >\left\lfloor \frac{n^2}{4} \right\rfloor +2.
\end{equation}
\end{lemma}
\begin{proof}
(a) Let $X=(x_1,x_2,\ldots ,x_n)^T$ be a Perron eigenvector\footnote{An eigenvector with all coordinates being positive.} corresponding to $\lambda(K_{\frac{n}{2}, \frac{n}{2}}^+)$.
We partition the vertex set of $ K_{\frac{n}{2}, \frac{n}{2}}^+$ as $\Pi: V(K_{\frac{n}{2}, \frac{n}{2}}^+)=
X_1 \cup X_2 \cup Y$, where $X_1=\{u,v\}$ forms an edge,
and $X_1\cup X_2$ and $Y$ are the partite sets of $K_{\frac{n}{2}, \frac{n}{2}}$.
By comparing neighborhoods, we see that $x_u=x_v$, that all coordinates of the vector $X$
corresponding to vertices of $X_2$ are equal, and that all coordinates
corresponding to vertices of $Y$ are equal.
For notational convenience, we write
$x_u=x_v=x$, $x_w=y$ for all $w\in X_2$, and $x_w=z$
for all $w\in Y$. Then
\[ \begin{cases}
\lambda x= x + \frac{n}{2} z, \\
\lambda y = \frac{n}{2}z, \\
\lambda z = 2x + (\frac{n}{2}-2)y.
\end{cases} \]
Thus $\lambda(K_{\frac{n}{2}, \frac{n}{2}}^+)$ is the largest eigenvalue of
\[ B_{\Pi} = \begin{matrix} X_1 \\ X_2 \\ Y
\end{matrix} \begin{bmatrix}
1 & 0 & \frac{n}{2} \\
0 & 0 & \frac{n}{2} \\
2 & \frac{n}{2}-2 & 0
\end{bmatrix}. \]
By calculation, we know that $\lambda(K_{\frac{n}{2}, \frac{n}{2}}^+)$ is the largest root of
\[ f(x)= \det (xI_3 - B_{\Pi})=x^3-x^2 - \frac{n^2}{4}x + \frac{n^2}{4}-n. \]
(b) For odd $n$, the proof is similar to that of the previous case.
We partition the vertex set of $K_{\frac{n-1}{2}, \frac{n+1}{2}}^+$
as $\Pi': V(K_{\frac{n-1}{2}, \frac{n+1}{2}}^+) = X_1\cup X_2\cup Y$,
where $X_1=\{u,v\}$ forms an edge, $X_1\cup X_2$ and $Y$
are partite sets of $K_{\frac{n-1}{2}, \frac{n+1}{2}}$
satisfying $|X_1| + |X_2|= \frac{n-1}{2}$
and $|Y|=\frac{n+1}{2}$. Then
a similar argument yields that
$\lambda (K_{\frac{n-1}{2}, \frac{n+1}{2}}^+)$ is the largest eigenvalue of
\[ B_{\Pi'}= \begin{matrix} X_1 \\ X_2 \\ Y
\end{matrix} \begin{bmatrix}
1 & 0 & \frac{n+1}{2} \\
0 & 0 & \frac{n+1}{2} \\
2 & \frac{n-1}{2}-2 & 0
\end{bmatrix}. \]
Thus, $\lambda (K_{\frac{n-1}{2}, \frac{n+1}{2}}^+)$
is the largest root of
\[ g(x)= \det (xI_3- B_{\Pi'})=x^3 - x^2 + \frac{1-n^2}{4} x +
\frac{n^2}{4} - n - \frac{5}{4}. \]
Finally, we are ready to prove (\ref{eq-n2+2}).
By calculation, we can verify that
\[ f(\sqrt{{n^2}/{4 }+2}) = \sqrt{n^2+8} -n -2<0, \]
and
\[ g(\sqrt{{(n^2-1)}/{4} +2}) = \sqrt{n^2+7} -n-3 <0. \]
This completes the proof.
\end{proof}
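Lemma \ref{lem-21} can be double-checked numerically: the power-iteration value of $\lambda(K^+_{\lfloor n/2\rfloor,\lceil n/2\rceil})$ annihilates $f$ (resp.\ $g$) and satisfies (\ref{eq-n2+2}). An illustrative sketch of ours (helper names are ours):

```python
def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I."""
    x = [1.0] * len(adj)
    mu = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(len(adj))]
        mu = max(y)
        x = [t / mu for t in y]
    return mu - 1.0

def k_ab_plus(a, b):
    """K_{a,b} with one extra edge inside the part of size a."""
    left, right = list(range(a)), list(range(a, a + b))
    adj = [right[:] for _ in left] + [left[:] for _ in right]
    adj[0].append(1)
    adj[1].append(0)
    return adj

for n in (6, 8, 10):          # even case: f(lambda) = 0
    lam = spectral_radius(k_ab_plus(n // 2, n // 2))
    assert abs(lam**3 - lam**2 - (n * n / 4) * lam + n * n / 4 - n) < 1e-6
    assert lam * lam > n * n // 4 + 2

for n in (7, 9, 11):          # odd case: g(lambda) = 0
    lam = spectral_radius(k_ab_plus((n - 1) // 2, (n + 1) // 2))
    assert abs(lam**3 - lam**2 + (1 - n * n) / 4 * lam + n * n / 4 - n - 5 / 4) < 1e-6
    assert lam * lam > n * n // 4 + 2
```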
In fact, Rayleigh's formula also gives a lower bound
on the spectral radius.
\begin{equation} \label{eq-3}
\lambda (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+)
> \frac{2e(K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+)}{n}
= \begin{cases}
\frac{n}{2} + \frac{2}{n}, & \text{if $n$ is even;}\\
\frac{n}{2} + \frac{3}{2n}, & \text{if $n$ is odd.}
\end{cases}
\end{equation}
Therefore, for even $n$, we get from (\ref{eq-3})
that $\lambda^2 > (\frac{n}{2} + \frac{2}{n})^2 >
\frac{n^2}{4} +2$.
On the other hand, for odd $n$, we construct a unit vector $X$ whose coordinates
equal $\frac{1}{\sqrt{n-1}}$ on the vertices of the part of size $\frac{n-1}{2}$
(the part containing the added edge) and $\frac{1}{\sqrt{n+1}}$ on the vertices of
the part of size $\frac{n+1}{2}$
(see Figure \ref{fig-2}). Then Rayleigh's formula implies
$\lambda (K_{\frac{n-1}{2}, \frac{n+1}{2}}^+)
\ge X^TA(K_{\frac{n-1}{2}, \frac{n+1}{2}}^+) X =
\frac{\sqrt{n^2-1}}{2} +
\frac{2}{n-1}$, which yields $\lambda^2 (K_{\frac{n-1}{2}, \frac{n+1}{2}}^+) \ge (\frac{\sqrt{n^2-1}}{2} +
\frac{2}{n-1})^2 > \frac{n^2-1}{4} +2$.
The above argument provides an alternative proof of (\ref{eq-n2+2}).
\begin{lemma} \label{lem-22}
Let $a,b\ge 2$ be integers with $a+b=n$,
and $K_{a,b}^+$ be the graph obtained from
the complete bipartite graph $K_{a,b}$
by adding an edge in the part of size $a$. Then
\[ \lambda (K_{a,b}^+) \le \lambda (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+), \]
equality holds if and only if $a=\lfloor \frac{n}{2} \rfloor$
and $b=\lceil \frac{n}{2} \rceil$.
\end{lemma}
\begin{proof}
Arguing as in the proof of Lemma \ref{lem-21}, we know that
$\lambda (K_{a,b}^+)$ is the largest root of
\[ f_{a,b}(x)=x^3-x^2 -abx +ab -2b. \]
It suffices to consider the following two cases.
{\bf Case 1.}
If $a\ge b+1$, then
our goal is to prove that
$\lambda (K_{a,b}^+) < \lambda (K_{a-1,b+1}^+)$.
By computation, we obtain
\[ f_{a,b}(x)- f_{a-1,b+1}(x)= x(a-b-1) -a+b+3. \]
If $a=b+1$, then $f_{a,b}(x)=f_{a-1,b+1}(x)+2$,
and $f_{a-1,b+1}(x) < f_{a,b}(x)$ for every $x \in \mathbb{R}$;
if $a\ge b+2$,
then we get $f_{a-1,b+1}(x) < f_{a,b}(x)$ for every $x> \frac{a-b-3}{a-b-1}$.
Note that $\lambda(K_{a,b}^+)\ge \lambda (K_3)=2$.
Therefore, we obtain $f_{a-1,b+1}(\lambda(K_{a,b}^+)) <
f_{a,b}(\lambda(K_{a,b}^+))=0$.
Recall that $\lambda(K_{a-1,b+1}^+)$
is the largest root of $f_{a-1,b+1}(x)$. Thus
we get $\lambda(K_{a,b}^+) < \lambda(K_{a-1,b+1}^+)$.
{\bf Case 2.}
If $a+2\le b$, then we need to prove $\lambda (K_{a,b}^+) <
\lambda (K_{a+1,b-1}^+)$.
Similarly, we have
\[ f_{a,b}(x)- f_{a+1,b-1}(x) = x(b-a-1) + a-b-1. \]
In view of $b\ge a+2$, for every $x>\frac{b-a+1}{b-a-1}$,
we get $f_{a+1,b-1}(x) < f_{a,b}(x)$.
Since $a\ge 2$ and $b\ge 4$, we can see that
$K_2\vee I_4$ is a subgraph of $K_{a,b}^+$.
Then $\lambda (K_{a,b}^+) \ge \lambda (K_2\vee I_4)
\approx 3.372$. Since $\frac{b-a+1}{b-a-1}= 1+ \frac{2}{b-a-1}\le 3$,
we obtain $f_{a+1,b-1}(\lambda (K_{a,b}^+)) < f_{a,b}(\lambda (K_{a,b}^+)) =0$, which leads to
$\lambda (K_{a,b}^+) < \lambda (K_{a+1,b-1}^+)$, as desired.
\end{proof}
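Lemma \ref{lem-22} can be sanity-checked numerically for a fixed small $n$; the sketch below (ours, purely illustrative) compares $\lambda(K^+_{a,n-a})$ for all admissible $a$ and confirms the maximum occurs at $a=\lfloor n/2\rfloor$.

```python
def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I."""
    x = [1.0] * len(adj)
    mu = 1.0
    for _ in range(iters):
        y = [x[v] + sum(x[w] for w in adj[v]) for v in range(len(adj))]
        mu = max(y)
        x = [t / mu for t in y]
    return mu - 1.0

def k_ab_plus(a, b):
    """K_{a,b} with one extra edge inside the part of size a."""
    left, right = list(range(a)), list(range(a, a + b))
    adj = [right[:] for _ in left] + [left[:] for _ in right]
    adj[0].append(1)
    adj[1].append(0)
    return adj

n = 9
radii = {a: spectral_radius(k_ab_plus(a, n - a)) for a in range(2, n - 1)}
best_a = max(radii, key=radii.get)
assert best_a == n // 2                              # edge in the part of size floor(n/2)
assert all(radii[a] <= radii[best_a] + 1e-12 for a in radii)
```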
Now, we are in a position to prove Theorem \ref{thm-n-F2}.
\begin{proof}[{\bf Proof of Theorem \ref{thm-n-F2}}]
Let $G$ be an $F_2$-free graph on $n\ge 7$ vertices
with maximum spectral radius.
Then $\lambda (G)\ge \lambda (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+)$.
The aim of the proof is to show that
$G=K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+$.
Clearly, $G$ is connected. Otherwise,
choose a component $G_1$ with $\lambda(G_1) = \lambda (G)$
and any other component $G_2$; adding an edge between $G_1$ and $G_2$
yields a graph on $n$ vertices which is still $F_2$-free
(the new edge is a bridge, so it lies in no triangle)
and has larger spectral radius than $G$, a contradiction.
By Lemma \ref{lem-21}, we know that
\begin{equation} \label{eq-lower}
\lambda^2(G) > \lfloor n^2/4\rfloor +2.
\end{equation}
Let $X=(x_1,x_2,\ldots ,x_n)^T$ be a Perron vector
of $G$ and $u\in V(G)$ be a vertex such that $x_u=\max\{x_v: v\in V(G)\}$.
For notational convenience, we denote
$\lambda =\lambda (G)$, $A=N_G(u)$ and $B=
V(G) \setminus (A\cup \{u\})$. Then $|A| + |B| +1=n$.
To state the proof in details,
we outline the main steps as follows.
First of all, we shall show that $G[A]$ is a star with center $u_0$,
that $G[B]$ contains no edges,
and that the edges between $A\setminus \{u_0\}$
and $B$ form a complete bipartite graph.
Secondly, we will prove that
$|A|=\lceil \frac{n}{2}\rceil +1$ and $|B|=\lfloor \frac{n}{2}\rfloor -2$.
We write $e(A)$ for the number of edges
with two endpoints in $A$,
and $e(A,B)$ for the number of edges with one endpoint in $A$
and the another in $B$.
\medskip
{\bf Claim 1. } {\it $|A| \ge \lambda$. Furthermore,
we have $|A| \ge \lceil \frac{n}{2}\rceil$ and $|B| \le \lfloor \frac{n}{2}\rfloor -1$. }
\begin{proof}[Proof of Claim 1]
By the maximality of $x_u$,
we get $\lambda x_u = \sum_{v\in A} x_v \le |A| x_u$.
Hence $\lambda \le |A|$. By inequality (\ref{eq-3}), we obtain
$\lambda > \frac{n}{2} + \frac{3}{2n}$. As $|A|$ is an integer,
we have $|A|\ge \lceil \frac{n}{2}\rceil$.
Keeping in mind that $|A| + |B| =n-1$, we then get $|B| \le \lfloor \frac{n}{2}\rfloor -1$.
\end{proof}
\medskip
{\bf Claim 2. } {\it $e(A)\ge 1$ and $|B|\ge 1$.}
\begin{proof}[Proof of Claim 2]
Suppose on the contrary that $e(A)=0$.
Denote by $d_A(v)$ the number of neighbors of $v$ in
vertex set $A$. Namely, $d_A(v)=|N_G(v) \cap A|$.
It is known that
\begin{equation} \label{eq-lambda-2}
\lambda^2 x_u = \sum_{v\in A} \sum_{w\in N(v)} x_w
= |A|x_u + \sum_{v\in A} d_A(v)x_v + \sum_{w\in B} d_A(w)x_w.
\end{equation}
Roughly speaking, the equality (\ref{eq-lambda-2})
provides a nice estimate of $\lambda (G)$
for nearly-regular graphs, since all coordinates of the
Perron vector of such graphs are almost equal.
Observe that $\sum_{w\in B} d_A(w)= e(A,B) \le |A||B|$. Then
(\ref{eq-lambda-2}) yields
\[ \lambda^2 x_u =|A| x_u + \sum_{w\in B} d_A(w)x_w
\le (|A| + |A| |B|)x_u . \]
Note that $|A| + |B| +1=n$, so $|A|(|B|+1) \le \lfloor n^2/4 \rfloor$ by the AM--GM inequality,
and hence $\lambda^2 \le \lfloor n^2/4 \rfloor$, which contradicts (\ref{eq-lower}).
Thus, we have proved $e(A)\ge 1$.
Suppose on the contrary that $|B|=0$.
Then $|A|=n-1$.
Since $G$ is $F_2$-free, we know that $G[A]$ has no $2K_2$,
and hence $G[A]$ is $P_4$-free (the path on $4$ vertices).
Apart from its isolated vertices, the induced subgraph $G[A]$ is
connected. Hence either $G[A]=K_3\cup I_{|A|-3}$
or $G[A]=K_{1,t}
\cup I_{|A|-t-1}$ for some $t\ge 1$.
In the first case, we obtain
$\lambda \le \lambda (K_1\vee (K_3 \cup I_{n-4}))$,
which is smaller than $\frac{n}{2}$. Indeed, we know that
$\lambda (K_1\vee (K_3 \cup I_{n-4}))$ is the largest root of
\[ p(x)= x^3-2x^2 + (1-n)x +2n-8. \]
A direct computation shows that $p(\frac{n}{2}) = \frac{n^3}{8} - n^2 + \frac{5 n}{2} - 8>0$
for every $n\ge 7$. Moreover,
we have $p'(x)=3x^2 -4x + (1-n)>0$
for every $x\ge \frac{n}{2}$.
Therefore, we have $p(x)>0$
for every $x\ge \frac{n}{2}$, which implies
$\lambda (K_1\vee (K_3 \cup I_{n-4})) < \frac{n}{2}$, as needed.
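As a sanity check (not part of the argument), the claim that the largest root of $p$ stays below $\frac{n}{2}$ can be verified numerically; the following sketch assumes numpy is available and evaluates the roots of $p$ for a range of $n$:

```python
import numpy as np

# Largest root of p(x) = x^3 - 2x^2 + (1-n)x + 2n - 8, the polynomial
# attached to K_1 v (K_3 u I_{n-4}) in the text; it should lie below n/2.
for n in range(7, 30):
    roots = np.roots([1, -2, 1 - n, 2 * n - 8])
    largest = max(r.real for r in roots if abs(r.imag) < 1e-9)
    assert largest < n / 2, (n, largest)
print("largest root of p(x) lies below n/2 for 7 <= n <= 29")
```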
In the second case,
we have $\lambda \le \lambda (K_2\vee I_{n-2})
= \frac{1}{2}(1 + \sqrt{8n-15}) \le \frac{n}{2} <
\lambda (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+)$ for every $n\ge 8$, a contradiction.
For the case $n=7$,
a direct computation gives
$\lambda (K_2\vee I_5) \approx 3.701$
and $\lambda (K_{3,4}^+) \approx 3.848$,
so we get $\lambda \le \lambda (K_2\vee I_5) < \lambda (K_{3,4}^+)$.
This is also a contradiction.
\end{proof}
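The two numerical values quoted for $n=7$ can be reproduced directly from adjacency matrices; the sketch below (a verification aid, not part of the proof) assumes numpy and builds $K_2\vee I_5$ and $K_{3,4}^+$ explicitly:

```python
import numpy as np

def spectral_radius(adj):
    """Largest adjacency eigenvalue of a simple graph."""
    return float(np.linalg.eigvalsh(adj).max())

def join_K2_I(k):
    """Adjacency matrix of K_2 joined to an independent set of k vertices."""
    n = k + 2
    a = np.zeros((n, n))
    a[0, 1] = a[1, 0] = 1
    a[0, 2:] = a[2:, 0] = 1
    a[1, 2:] = a[2:, 1] = 1
    return a

def K_bipartite_plus(s, t):
    """Adjacency matrix of K_{s,t} with one extra edge inside the s-part."""
    a = np.zeros((s + t, s + t))
    a[:s, s:] = 1
    a[s:, :s] = 1
    a[0, 1] = a[1, 0] = 1
    return a

lam_join = spectral_radius(join_K2_I(5))            # K_2 v I_5, about 3.70
lam_plus = spectral_radius(K_bipartite_plus(3, 4))  # K_{3,4}^+, about 3.85
assert abs(lam_join - (1 + np.sqrt(41)) / 2) < 1e-9
assert lam_join < lam_plus
```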
Next, we divide the remaining proof into two cases.
\medskip
{\bf Case 1.} First of all, we consider the case $G[A]=K_3$.
Denote by $\{u_1,u_2,u_3\}$ the vertex set of the copy of $K_3$.
Since $G$ is $F_2$-free, every vertex of $B$ has at most one neighbor in $\{u_1,u_2,u_3\}$.
Then $e(A,B) \le |B| (|A|-2)$ and
\[ \lambda^2 x_u \le
(|A| + 2e(A) + e(A,B))x_u \le
(|A| (|B|+1) - 2|B| +6)x_u. \]
By computation, the maximum of $|A| (|B|+1) - 2|B|$
is attained at $|B|= \lfloor \frac{n}{2}\rfloor -2$.
Thus, we get $\lambda^2 \le (\lceil \frac{n}{2}\rceil +1)(\lfloor \frac{n}{2}\rfloor -1) -2 (\lfloor \frac{n}{2}\rfloor -2) +6$,
which yields
$\lambda^2 \le \lfloor \frac{n^2}{4}\rfloor +1$ for every $n\ge 8$,
and $\lambda^2 \le \lfloor \frac{n^2}{4}\rfloor +2$ for $n=7$.
This contradicts (\ref{eq-lower}).
{\bf Case 2.} Now, we deal with the remaining case $G[A]=K_{1,t}
\cup I_{|A| -t-1}$
for some $t\ge 1$.
Denote by $u_0$ the central vertex of the star $K_{1,t}$ in $G[A]$,
and $\{u_1,u_2,\ldots ,u_t\}$ the leaves of $K_{1,t}$.
In what follows, we proceed in two subcases.
{\bf Subcase 2.1.}
Suppose that $u_0$ has a neighbor in $B$, say $w_0\in B$.
Since $G$ is $F_2$-free, we know that $w_0u_i \notin E(G)$ for every $i\in \{1,2,\ldots ,t\}$.
In addition, every vertex $w\in B$ satisfies $d_A(w) \le |A| -1$.
Then $e(A,B) \le (|A| -t) + (|A|-1) (|B| -1)$ and
\[ \lambda^2 \le |A| +2t +e(A,B)
\le (|A| -1)(|B| +1) +2 +t. \]
Note that $t \le |A| -1$ and $|A| + |B| +1=n$.
Thus, we get
$\lambda^2 \le (|A| -1)(|B|+2) +2
\le \lfloor {n^2}/{4} \rfloor +2$, a contradiction.
{\bf Subcase 2.2.}
Now, we consider the case $d_B(u_0)=0$.
In this case,
we claim that $e(B)=0$.
Otherwise, if $ww'$ is an edge in $G[B]$,
then for each $i\in \{1,2,\ldots ,t\}$,
the vertex $u_i$ has at most one neighbor in $\{w,w'\}$,
and thus $d_B(u_i) \le |B|-1$.
Then $e(A,B) \le (|A| -t-1) |B| + t (|B|-1)= |A||B| - |B| -t$.
Combining with $t\le |A| -1$, we obtain
\[ \lambda^2 \le |A| + 2t +e(A,B) \le
|A| + |A||B| -|B| +t \le (|A| -1) (|B| +2) +1. \]
Due to $|A| + |B| +1=n$, we get
$\lambda^2 \le
\lfloor n^2/4\rfloor +1$,
which is a contradiction. Therefore, we have proved that $e(B)=0$.
Since $G$ has the maximum spectral radius,
for each $w\in B$ we have $N_A(w)= A \setminus \{u_0\}$.
Furthermore, the maximality also implies that $t=|A|-1$.
Therefore, we obtain $V(G)=\{u\} \cup A \cup B$,
where $A=\{u_0,u_1,\ldots ,u_t\}$ forms a star
and $B$ is an independent set.
Moreover, $A \setminus \{u_0\}$ and $B$ form a complete bipartite graph.
Putting $\{u,u_0\}$ and $B$ together,
the graph $G$ can be obtained from
the complete bipartite graph $K_{|B|+2,|A|-1}$
by adding an edge into the part of size $|B|+2$.
Recall that $|A| \ge \lceil \frac{n}{2}\rceil$ and $|B| \le \lfloor \frac{n}{2}\rfloor -1$.
By Lemma \ref{lem-22},
the spectral radius is maximized when $|A| = \lceil \frac{n}{2}\rceil +1$,
and then we get $G=K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+$, as required.
\end{proof}
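The lower bound $\lambda > \frac{n}{2} + \frac{3}{2n}$ from inequality (\ref{eq-3}), used repeatedly above, can be spot-checked against the extremal graph itself. The sketch below assumes numpy and, as stated in the abstract, that the extra edge of $K^+_{\lfloor n/2\rfloor,\lceil n/2\rceil}$ lies in the part of size $\lfloor n/2\rfloor$:

```python
import numpy as np

def spectral_radius(adj):
    return float(np.linalg.eigvalsh(adj).max())

def K_bipartite_plus(s, t):
    """K_{s,t} with one extra edge inside the part of size s."""
    a = np.zeros((s + t, s + t))
    a[:s, s:] = 1
    a[s:, :s] = 1
    a[0, 1] = a[1, 0] = 1
    return a

# Check lambda(K^+) > n/2 + 3/(2n) for a range of n.
for n in range(7, 21):
    lam = spectral_radius(K_bipartite_plus(n // 2, n - n // 2))
    assert lam > n / 2 + 3 / (2 * n), (n, lam)
print("lambda(K^+) > n/2 + 3/(2n) for 7 <= n <= 20")
```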
\newpage
\section{Proof of Theorem \ref{thm-m-F2}}
\label{sec3}
Before showing the proof of Theorem \ref{thm-m-F2},
we start with the following lemma.
\begin{figure}[H]
\centering
\includegraphics[scale=0.85]{Bipartite.png}
\caption{An approximate structure of the graph $G$.}
\label{fig-4}
\end{figure}
\begin{lemma} \label{lem31}
Let $G$ be an $m$-edge graph obtained from a bipartite graph
$B_{s,t}$ by adding two new vertices $u$ and $v_0$,
an edge $uv_0$ and all edges between $\{u,v_0\}$ and $\{v_1,\ldots ,v_t\}$. Then
\[ \lambda (G)< \frac{1+ \sqrt{4m-3}}{2}. \]
\end{lemma}
For the reader's convenience, we draw the graph $G$ in Figure \ref{fig-4}.
\begin{proof}
Since the structure of $B_{s,t}$ is arbitrary,
it seems impossible to get the required bound by a straightforward calculation.
The key idea of the proof is a careful analysis
of the Perron eigenvector of $\lambda (G)$.
Since $u$ and $v_0$ play symmetric roles in $G$ (they are adjacent and have the same neighbors outside $\{u,v_0\}$),
we have $x_u = x_{v_0}$.
In what follows, we will establish
an upper bound on $\lambda^2 x_u$ and
then show that $\lambda (G) <
\frac{1+\sqrt{4m-3}}{2}$. Since $\lambda x_u = x_{v_0} +
\sum_{i=1}^t x_{v_i}$, we get
\[ \sum_{i=1}^t x_{v_i} = (\lambda -1)x_u. \]
Moreover, we have
\begin{align*}
\lambda^2 x_u &= \lambda x_{v_0} + \sum_{i=1}^t
\lambda x_{v_i}
= x_u + \sum_{i=1}^t x_{v_i} +
\sum_{i=1}^t \left(x_u + x_{v_0} + \sum_{w\in N(v_i)\cap B} x_w\right) \\
&= (2t+1) x_u + \sum_{i=1}^t x_{v_i} + \sum_{w\in B} d_A(w) x_w.
\end{align*}
Here $d_A(w)$ denotes the number of neighbors of $w$ in $\{v_1,\ldots ,v_t\}$.
Since $\lambda x_w= \sum_{v\in N(w)} x_v
\le \sum_{i=1}^t x_{v_i}$ for every $w\in B$,
we get $x_w \le \frac{1}{\lambda} \sum_{i=1}^t x_{v_i}$. Then
\begin{align*}
\lambda^2 x_u &\le (2t+1) x_u +
\left( 1+ \frac{1}{\lambda} \sum_{w\in B} d_A(w) \right)
\left( \sum_{i=1}^t x_{v_i} \right) \\
&= (2t+1) x_u + \left(1+\frac{m-2t-1}{\lambda} \right) (\lambda -1)x_u \\
&< m x_u + (\lambda -1)x_u.
\end{align*}
Thus, we get $\lambda^2- \lambda - (m-1) <0$,
which yields $\lambda < \frac{1+\sqrt{4m-3}}{2}$.
\end{proof}
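Since $B_{s,t}$ is an arbitrary bipartite graph, the lemma cannot be exhausted numerically, but it can be spot-checked on sample members of the family. The sketch below assumes numpy and takes $B_{s,t}$ to be complete bipartite (one arbitrary choice among many):

```python
import numpy as np

def lemma_graph(t, s):
    """u and v_0 joined to each other and to v_1..v_t, plus a complete
    bipartite graph between {v_1..v_t} and s further vertices.
    (The lemma allows any bipartite B_{s,t}; complete is one sample.)"""
    n = 2 + t + s
    a = np.zeros((n, n))
    a[0, 1] = a[1, 0] = 1                      # edge u v_0
    for i in range(2, 2 + t):                  # u, v_0 joined to v_1..v_t
        a[0, i] = a[i, 0] = a[1, i] = a[i, 1] = 1
    for i in range(2, 2 + t):                  # complete bipartite part
        for j in range(2 + t, n):
            a[i, j] = a[j, i] = 1
    return a

for t in range(1, 6):
    for s in range(1, 6):
        a = lemma_graph(t, s)
        m = int(a.sum()) // 2                  # here m = 1 + 2t + s*t
        lam = float(np.linalg.eigvalsh(a).max())
        assert lam < (1 + np.sqrt(4 * m - 3)) / 2, (t, s, lam)
print("Lemma bound holds on all sampled (t, s)")
```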
In the sequel,
we will provide the proof of Theorem \ref{thm-m-F2}.
\begin{proof}[{\bf Proof of Theorem \ref{thm-m-F2}}]
Let $G$ be an $F_2$-free graph with $m\ge 8$ edges
and maximum spectral radius.
We may assume that $G$ satisfies $\lambda (G) \ge \frac{1+\sqrt{4m-3}}{2} $.
Our goal is to show that $G=K_2\vee I_{\frac{m-1}{2}}$.
Let $X=(x_1,x_2,\ldots ,x_n)^T$ be the unit Perron vector
of $G$ and $u\in V(G)$ be a vertex such that
$x_u =\max \{x_v : v\in V(G)\}$.
Moreover, let $A=N_G(u)$ and $B=V(G) \setminus (\{u\}\cup A)$.
In our proof, we will frequently use two facts.
The first fact is that $G$ is connected.
Otherwise, choose two distinct components $G_1$ and $G_2$,
where $G_1$ attains the spectral radius of $G$.
Deleting an edge within $G_2$ and then adding an edge between $G_1$ and $G_2$, we get a new $m$-edge graph which is still $F_2$-free and has larger spectral radius,
a contradiction.
The second fact is that $d(w) \ge 2$ for each vertex $w\in B$.
Otherwise, if $w\in B$ has degree $1$, say $wv\in E(G)$,
where $v\in V(G)$ and $v\neq u$,
then replacing the edge $wv$ with $wu$,
we get a new $m$-edge $F_2$-free graph $G'$ with larger spectral radius.
Indeed, since $x_v\le x_u$, we have
\begin{equation} \label{degree-two}
\lambda (G') \ge X^TA(G')X = 2\left(\sum_{\{i,j\}\in E(G)}
x_ix_j \right)- 2x_wx_v + 2x_wx_u \ge \lambda (G).
\end{equation}
In fact, we claim that $\lambda (G') > \lambda (G)$.
Assume to the contrary that
$\lambda (G')= \lambda(G)$. Then all inequalities in
(\ref{degree-two}) become equalities,
and $X$ is also a unit eigenvector of $\lambda (G')$.
Namely, $A(G')X=\lambda (G')X = \lambda (G)X$.
Considering the vertex $u$, we observe that
$\lambda (G')x_u = \sum_{v\in N_G(u)} x_v + x_w >
\sum_{v\in N_G(u)} x_v = \lambda (G)x_u$.
Consequently, we get $\lambda (G') > \lambda (G)$,
which contradicts our assumption.
\medskip
{\bf Claim 1. } {\it $e(A)\ge 1$.}
\begin{proof}[Proof of Claim 1]
Assume on the contrary that $e(A)=0$. Thus,
\begin{align*}
\lambda^2 x_u = \sum_{v\in A} \sum_{w\in N(v)} x_w
&= |A|x_u + \sum_{v\in A} d_A(v) x_v + \sum_{w\in B} d_A(w) x_w \\
&\le (|A| + e(A,B))x_u \le m x_u,
\end{align*}
where the last inequality holds since
$m=e(G)=|A| + e(A,B) + e(B)$.
It follows that $\lambda \le \sqrt{m}$.
Observe that $\lambda (G) \ge \frac{1+\sqrt{4m-3}}{2} > \sqrt{m}$. This leads to a contradiction.
\end{proof}
\medskip
{\bf Claim 2. } {\it $G[A]=K_3 \cup I_{|A|-3}$
or $G[A]=K_{1,t}\cup I_{|A|-t-1}$ for some $t\ge 1$.}
\begin{proof}[Proof of Claim 2]
Since $G$ is $F_2$-free and $e(A)\ge 1$ by Claim 1,
we can see that
the induced subgraph $G[A]$ is $2K_2$-free, so it
consists of one nontrivial connected
component together with some isolated vertices.
If $e(A)=1$, then it is clear that $G[A]=K_{1,1} \cup I_{|A|-2}$.
For the case $e(A)\ge 2$,
observe that the longest path in $G[A]$ has at most
$3$ vertices,
so the unique nontrivial component of $G[A]$ is either the triangle $K_3$,
or the star $K_{1,t}$ for some $t\ge 2$.
\end{proof}
We denote $A_+=\{v\in A: d_A(v) \ge 1\}$
and $A_0= A \setminus A_+$.
In other words, $A_0$ is the set of isolated vertices
in the induced subgraph $G[A]$.
In what follows, we present the proof in two cases.
\medskip
{\bf Case 1.} $B=\varnothing$. In this case,
we have $\lambda x_u = \sum_{v\in A} x_v$ and
\[ \lambda^2x_u = \sum_{v\in A} \lambda x_v = |A| x_u + \sum_{v\in A} d_A(v)x_v. \]
Thus, we get
\[ (\lambda^2 - \lambda )x_u = |A|x_u +
\sum_{v\in A_+} (d_A(v) -1)x_v - \sum_{v\in A_0} x_v. \]
Note that $x_u$ is the maximum coordinate in the Perron vector. It follows that
\[ \lambda^2 - \lambda \le |A| + 2e(A_+) - |A_+|
- \sum_{v\in A_0} \frac{x_v}{x_u} . \]
Notice that $\lambda \ge \frac{1+\sqrt{4m-3}}{2}$,
which implies $\lambda^2 - \lambda \ge m-1
= |A| +e(A_+) -1$. Then
\begin{equation} \label{eq5}
\sum_{v\in A_0} \frac{x_v}{x_u} \le
e(A_+) - |A_+| +1.
\end{equation}
By Claim 2, if $G[A]=K_3 \cup I_{|A| -3}$,
then $|A|=m-3$ and $G=K_1\vee (K_3 \cup I_{m-6})$.
By computation, we know that $\lambda (G)$
is the largest root of
\[ h(x)=x^3 - 2x^2 + (3-m)x +2m-12. \]
It is easy to check that $h(\frac{1+\sqrt{4m-3}}{2})
>0$ for every integer $m\ge 8$.
Additionally, we have $h'(x)=3x^2 -4x +3-m >0$
for every $x\ge \frac{1+\sqrt{4m-3}}{2}$.
Hence, we obtain $\lambda (G) < \frac{1+\sqrt{4m-3}}{2}$,
a contradiction. Therefore, Claim 2 implies
$G[A]=K_{1,t} \cup I_{|A|-t-1}$ for some $t\ge 1$.
Then we get $e(A_+)- |A_+| =-1$.
Combining with (\ref{eq5}),
we have $A_0= \varnothing$, and hence $t=|A|-1=\frac{m-1}{2}$.
Then $G=K_1\vee K_{1,t}= K_2 \vee I_{\frac{m-1}{2}}$,
which is the desired extremal graph.
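The claim that the largest root of $h$ stays below $\frac{1+\sqrt{4m-3}}{2}$ can be checked numerically as well; the sketch below (a verification aid, not part of the proof) assumes numpy:

```python
import numpy as np

# h(x) = x^3 - 2x^2 + (3-m)x + 2m - 12, whose largest root is
# lambda(K_1 v (K_3 u I_{m-6})); it should lie below (1 + sqrt(4m-3))/2.
for m in range(8, 40):
    roots = np.roots([1, -2, 3 - m, 2 * m - 12])
    largest = max(r.real for r in roots if abs(r.imag) < 1e-9)
    assert largest < (1 + np.sqrt(4 * m - 3)) / 2, (m, largest)
print("largest root of h(x) lies below (1 + sqrt(4m-3))/2 for 8 <= m <= 39")
```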
{\bf Case 2.} $B\neq \varnothing$.
Since $\lambda^2 x_u = |A| x_u +
\sum_{v\in A} d_A(v)x_v + \sum_{w\in B} d_A(w) x_w$,
we get
\[ (\lambda^2 - \lambda ) x_u
= |A| x_u + \sum_{v\in A_+} (d_A(v) -1)x_v - \sum_{v\in A_0} x_v + \sum_{w\in B} d_A(w) x_w. \]
Invoking the fact $\sum_{w\in B} d_A(w) x_w \le e(A,B)x_u$, we have
\[ \lambda^2- \lambda \le |A| + 2e(A_+) - |A_+|
- \sum_{v\in A_0} \frac{x_v}{x_u} + e(A,B). \]
Note that $\lambda^2 - \lambda \ge m-1
= |A| + e(A_+) + e(A,B) +e(B) -1$. Then
\begin{equation} \label{eq-6}
e(B)\le e(A_+) - |A_+| - \sum_{v\in A_0} \frac{x_v}{x_u} +1.
\end{equation}
Next, we divide the remaining proof into two subcases.
{\bf Subcase 2.1.}
If $G[A]=K_{1,t}\cup I_{|A|-t-1}$ for some $t\ge 1$,
then $e(A_+) = |A_+| -1$.
Combining with (\ref{eq-6}), we get
$e(B)=0$ and $A_0 = \varnothing$.
Denote by $A=\{v_0,v_1,\ldots ,v_t\}$
the vertex set of the star $K_{1,t}$,
where $v_0$ is the center vertex.
If $t=1$, then $A=\{v_0,v_1\}$
forms an edge.
Recall that each vertex of $B$ has degree at least $2$; since $e(B)=0$, we have $N(w)=\{v_0,v_1\}$ for every $w\in B$.
Then $x_{v_0}=x_{v_1} > x_u$,
contradicting the maximality of $x_u$.
Next, we consider the case $t\ge 2$.
We claim that $d_B(v_0) =0$.
Otherwise, if $v_0w \in E(G)$ for some $w\in B$,
then $N(w) =\{v_0\}$
since $G$ is $F_2$-free.
This leads to a contradiction because each vertex of $B$ has
degree at least $2$.
Therefore, for every $w\in B$, we have $N(w) \subseteq
\{v_1,v_2, \ldots ,v_t\}$.
In other words, the vertex sets $A\setminus \{v_0\}$ and $B$ form
a (not necessarily complete) bipartite subgraph in $G$;
see the previous Figure \ref{fig-4}.
By Lemma \ref{lem31},
it follows that $\lambda < \frac{1+\sqrt{4m-3}}{2}$,
a contradiction.
\medskip
{\bf Subcase 2.2.}
If $G[A]=K_3\cup I_{|A|-3}$, then $e(A_+)= |A_+|=3$.
So we obtain from (\ref{eq-6}) that $0\le e(B)\le 1-\sum_{v\in A_0} {x_v}/{x_u}$.
Thus, there are two possibilities:
either $A_0 = \varnothing$ and $e(B)\in \{0,1\}$,
or $A_0 \neq \varnothing$ and $e(B)=0$.
In the former case, that is, $A_0 = \varnothing$,
the induced subgraph $G[A]$ is a copy of $K_3$.
In fact, the graph $G$ can be determined uniquely. Indeed,
since $G$ is $F_2$-free, we know that
each vertex of $B$ has at most one neighbor in $A$.
Recall that $G$ is connected and $d(w)\ge 2$ for every $w\in B$.
Therefore, $e(B)=1$ and $|B|=2$, so $m=9$ and
$G$ is described as below.
Denote by $A=\{v_1,v_2,v_3\}$
and $B=\{w_1,w_2\}$, where $\{u,v_1,v_2,v_3\}$
forms a copy of $K_4$ and $v_1w_1w_2v_3$
forms a path on $4$ vertices.
In this graph, we observe that $x_{v_1}=x_{v_3} > x_u$,
contradicting the maximality of $x_u$.
In the latter case $A_0 \neq \varnothing$, we have $|A|\ge 4$
and $e(B)=0$.
Since $d(w)\ge 2$ for every $w\in B$, we get $m \ge 9$.
Moreover, each vertex of $B$ has at most one neighbor in $A_+$,
and has at least one neighbor in $A_0$.
In addition, inequality (\ref{eq-6}) reduces to
\begin{equation} \label{eq-7}
\sum_{v\in A_0} x_v \le x_u.
\end{equation}
Denote by $A_+= \{v_1,v_2,v_3\}$ the vertex set of the copy of $K_3$
in $G[A]$. We write $N(A_+)$ for the set of vertices which are adjacent to a vertex of $A_+$.
Note that
$ \lambda x_u = x_{v_1} + x_{v_2} +x_{v_3} + \sum_{v\in A_0} x_v $.
Using (\ref{eq-7}), we obtain
\begin{align} \notag
\lambda^2 x_u &= \lambda (x_{v_1} +x_{v_2} + x_{v_3}) +
\lambda \sum_{v\in A_0} x_v \\
&\le
3x_u + 2(x_{v_1} +x_{v_2} + x_{v_3}) + \sum_{w \in N(A_+)\cap B} x_w + \lambda x_u. \label{eq-8}
\end{align}
If $N(A_+) \cap B = \varnothing$,
then $\lambda ^2 x_u <3 x_u + 6x_u + \lambda x_u$,
which yields $\lambda^2- \lambda < 9$.
Since $B \neq \varnothing$ and
$d(w)\ge 2$ for every $w\in B$,
we get $|A_0|\ge 2$ and then $m\ge 10$.
Therefore, we obtain $\lambda^2- \lambda <m-1 $,
which contradicts $\lambda \ge \frac{1+\sqrt{4m-3}}{2}$.
Next, we assume that $N(A_+) \cap B \neq \varnothing$.
Since $G$ is $F_2$-free, we know that each vertex $w\in B$
has at most one neighbor in $A_+$.
Then $\sum_{w\in N(A_+)\cap B} x_w \le e(A_+,B)x_u \le |B|x_u$.
Recall that $e(B)=0$ and each vertex of $B$ has at least two neighbors in
$A$. Then
$m=|A| +3 + e(A,B) \ge |A| +3 +2|B|$,
which implies $|B| \le \lfloor \frac{m-|A|-3}{2} \rfloor$.
Combining with (\ref{eq-8}), we get
\begin{equation} \label{eq-xiazheng}
\lambda^2 < 9 + |B| + \lambda \le
9 + \left\lfloor \frac{m-|A|-3}{2} \right\rfloor + \lambda.
\end{equation}
Recall that $|A|\ge 4$.
If $m\ge 12$, then $\lambda^2 - \lambda < 9+
\lfloor \frac{m-7}{2} \rfloor \le m-1$.
This contradicts the assumption $\lambda \ge \frac{1+\sqrt{4m-3}}{2}$.
Next, we consider the case $9\le m\le 11$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.85]{G1G2G3.png}
\caption{The graphs $G_1,G_2$ and $G_3$.}
\label{fig-3}
\end{figure}
For the case $m=11$, observe that $|B| \le \lfloor \frac{m-7}{2}\rfloor =2$. If $|B|=1$, then it follows from (\ref{eq-xiazheng}) that
$\lambda^2 < 9 +1+ \lambda$,
and then $\lambda < \frac{1+\sqrt{41}}{2}$, a contradiction.
If $|B|=2$, then $G$ is represented as $G_1$ in Figure \ref{fig-3}.
By a direct calculation, we know that $\lambda (G_1) \approx
3.385 < \frac{1+\sqrt{41}}{2} \approx 3.701$,
a contradiction.
For the cases $m=10$ and $m=9$, the graph $G$ is determined as
$G_2$ and $G_3$ in Figure \ref{fig-3}, respectively.
By calculations, we have $\lambda (G_2)\approx 3.315 < \frac{1+\sqrt{37}}{2} \approx 3.541$, a contradiction.
Moreover, we have $\lambda (G_3) \approx 3.236 < \frac{1+\sqrt{33}}{2} \approx 3.372$, a contradiction.
\end{proof}
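It is worth noting that the extremal graph attains the threshold in the theorem exactly: for odd $m$, $K_2\vee I_{\frac{m-1}{2}}$ has $m$ edges and spectral radius exactly $\frac{1+\sqrt{4m-3}}{2}$. A numerical sketch of this fact, assuming numpy:

```python
import numpy as np

def join_K2_I(k):
    """Adjacency matrix of K_2 joined to an independent set of k vertices."""
    n = k + 2
    a = np.zeros((n, n))
    a[0, 1] = a[1, 0] = 1
    a[0, 2:] = a[2:, 0] = 1
    a[1, 2:] = a[2:, 1] = 1
    return a

# For odd m, K_2 v I_{(m-1)/2} has exactly m edges and its spectral
# radius equals (1 + sqrt(4m-3))/2, the threshold in the theorem.
for m in range(9, 41, 2):
    k = (m - 1) // 2
    a = join_K2_I(k)
    assert int(a.sum()) // 2 == m                  # edge count is m
    lam = float(np.linalg.eigvalsh(a).max())
    assert abs(lam - (1 + np.sqrt(4 * m - 3)) / 2) < 1e-8, (m, lam)
print("K_2 v I_{(m-1)/2} attains (1 + sqrt(4m-3))/2 for odd 9 <= m <= 39")
```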
\section{Concluding remarks}
\label{sec5}
Recall that Nosal's theorem \cite{Nosal1970} asserts that
every $m$-edge graph $G$ with $\lambda (G)> \sqrt{m}$
must contain a triangle. Moreover, a theorem of Nikiforov \cite{Niki2007laa2} implies that
every $n$-vertex graph $G$ with $\lambda (G) > \lambda (T_2(n)) $ has a triangle.
Very recently, Ning and Zhai \cite{NZ2021} generalized these theorems and
proved two counting results for triangles,
which state that under the same assumptions on the spectral radius,
the graph $G$ contains not merely one triangle, but
at least $\lfloor \frac{\sqrt{m}-1}{2} \rfloor$
and $\lfloor \frac{n}{2}\rfloor -1$ triangles, respectively.
In this paper, we determined
the spectral extremal graphs for the bowtie $F_2$
among graphs with given order and given size, respectively.
Furthermore,
a natural question is how many copies of
the bowtie a graph must contain when its spectral radius is larger than
that of the extremal graph.
In fact, the supersaturation of edges was studied by Kang, Makai and
Pikhurko \cite{KMP2020}.
Hence it is also interesting to establish the spectral analogues
for the bowtie.
\medskip
The definition of the spectral radius was recently extended
to the $p$-spectral radius; see \cite{KLM2014, KN2014,Niki2014laa} and references therein.
We denote the $p$-norm of $\bm{x}$ by
$ \lVert \bm{x}\rVert_p
=(\sum_{i=1}^n |x_i|^p )^{1/p}$.
For every real number $p\ge 1$,
the {\it $p$-spectral radius} of $G$ is defined as
\begin{equation*} \label{psp}
\lambda^{(p)} (G) : = 2 \max_{\lVert \bm{x}\rVert_p =1}
\sum_{\{i,j\} \in E(G)} x_ix_j.
\end{equation*}
We remark that $\lambda^{(p)}(G)$ is a versatile parameter.
Indeed, $\lambda^{(1)}(G)$ is known as the Lagrangian function of $G$,
$\lambda^{(2)}(G)$ is the spectral radius of its adjacency matrix,
and
\begin{equation} \label{eqlimit}
\lim_{p\to +\infty} \lambda^{(p)} (G)=2e(G).
\end{equation}
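The limit (\ref{eqlimit}) can be illustrated on the smallest interesting example. For the complete graph $K_3$ the symmetric vector $x = 3^{-1/p}(1,1,1)$ is a maximizer for $p>1$ (a fact about complete graphs taken as an assumption in this sketch), which gives the closed form $\lambda^{(p)}(K_3) = 6\cdot 3^{-2/p}$:

```python
# For K_3 the symmetric vector x = 3^{-1/p} * (1, 1, 1) maximizes
# 2 * sum x_i x_j over the unit p-norm sphere when p > 1 (assumed here),
# so lambda^{(p)}(K_3) = 2 * 3 * 3^{-2/p}.
def p_spectral_K3(p):
    return 6 * 3 ** (-2 / p)

assert abs(p_spectral_K3(2) - 2.0) < 1e-12   # = lambda(K_3), the usual spectral radius
vals = [p_spectral_K3(p) for p in (2, 10, 100, 1000)]
assert vals == sorted(vals)                  # increasing in p
assert abs(vals[-1] - 6.0) < 0.02            # approaches 2 e(K_3) = 6, as in the limit
print([round(v, 4) for v in vals])
```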
\iffalse
which can be guaranteed by the following inequality
\begin{equation*}
2e(G)n^{-2/p} \le \lambda^{(p)}(G) \le (2e(G))^{1-1/p}.
\end{equation*}
To some extent, the $p$-spectral radius can be viewed as a unified extension of the classical
spectral radius and the size of a graph.
In addition, it is worth mentioning that
if $ 1\le q\le p$, then $\lambda^{(p)}(G)n^{2/p} \le \lambda^{(q)}(G)n^{2/q}$
and $(\lambda^{(p)}(G)/2e(G))^p \le (\lambda^{(q)}(G)/2e(G))^q$;
see \cite[Propositions 2.13 and 2.14]{Niki2014laa} for more details.
\fi
The extremal function for the $p$-spectral radius is defined as
\[ \mathrm{ex}_{\lambda}^{(p)}(n, {F}) :=
\max\{ \lambda^{(p)} (G) : |G|=n ~\text{and $G$ is ${F}$-free} \}. \]
To some extent, proofs of results on the $p$-spectral radius share similarities with those for the usual spectral radius when $p>1$;
see \cite{KNY2015} for more extremal problems.
In 2014, Kang and Nikiforov \cite{KN2014} extended the Tur\'{a}n theorem
to the $p$-spectral version for $p>1$.
They proved that
$ \mathrm{ex}_{\lambda}^{(p)} (n,K_{r+1}) =\lambda^{(p)}(T_r(n))$,
and the unique extremal graph is the $r$-partite Tur\'{a}n graph $T_r(n)$.
This result unifies the classical Tur\'{a}n theorem and Nikiforov's theorem by the fact (\ref{eqlimit}).
Recall that $F_k$ is the graph consisting of $k$ triangles
intersecting in a common vertex.
Motivated by the above-mentioned results, we propose the following
extremal problems in terms of the $p$-spectral radius for $F_k$-free graphs.
\begin{problem}
Let $k\ge 2$ and $p>1$ be fixed, and let $n$ be sufficiently large. Then
\[ \mathrm{Ex}_{\lambda}^{(p)}(n,F_k) \subseteq \mathrm{Ex}(n,F_k). \]
\end{problem}
In particular, we propose the following problem for the case $k=2$.
\begin{problem}
If $p>1$ and $G$ is an $F_2$-free graph of large order $n$, then
\[ \lambda^{(p)} (G) \le \lambda^{(p)} (K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+), \]
with equality if and only if $G=K_{ \lfloor \frac{n}{2} \rfloor, \lceil \frac{n}{2} \rceil}^+$.
\end{problem}
There is a rich history on the study of bounding
the eigenvalues of a graph in terms of
various parameters and matrices.
Apart from the study of adjacency matrices,
it is also worth mentioning that the signless Laplacian matrices were widely investigated over the years.
Recall that $S_{n,k}=K_k \vee I_{n-k}$.
Clearly, we can see that
$S_{n,k}$ does not contain $F_k$ as a subgraph.
In 2021, Zhao, Huang and Guo \cite{ZHG21}
proved that $S_{n,k}$ is
the unique graph attaining the maximum signless Laplacian spectral radius among all $F_k$-free graphs of order $n\ge 3k^2-k-2$.
For related extensions, we refer the reader to \cite{CLZ2021}
for intersecting odd cycles and to \cite{WZ2022dm} for fan graphs.
For more other results, see, e.g., the $K_{r+1}$-free graphs \cite{HJZ2013},
the $C_{2k+1}$-free graphs \cite{Yuan2014} and
the $C_{2k}$-free graphs \cite{NY2015}.
\subsection*{Acknowledgements}
This work was supported by NSFC (Grant No. 11931002 and 12001544),
and Natural Science Foundation of Hunan
Province (No. 2021JJ40707).
\frenchspacing